* [PATCH v4] net/zxdh: Provided zxdh basic init
From: Junlong Wang @ 2024-09-10 12:00 UTC
To: ferruh.yigit; +Cc: dev, wang.yong19, Junlong Wang

Provide initialization of the zxdh PMD driver, including the message
channel, NP init, and related setup.

Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn>
---
V4: Resolve compilation issues
V3: Resolve compilation issues
V2: Resolve compilation issues and modify doc (zxdh.ini, zxdh.rst)
V1: Provide zxdh basic init and open source NPSDK lib
---
 doc/guides/nics/features/zxdh.ini |   10 +
 doc/guides/nics/index.rst         |    1 +
 doc/guides/nics/zxdh.rst          |   34 +
 drivers/net/meson.build           |    1 +
 drivers/net/zxdh/meson.build      |   23 +
 drivers/net/zxdh/zxdh_common.c    |   59 ++
 drivers/net/zxdh/zxdh_common.h    |   32 +
 drivers/net/zxdh/zxdh_ethdev.c    | 1328 +++++++++++++++++++++++++++++
 drivers/net/zxdh/zxdh_ethdev.h    |  202 +++++
 drivers/net/zxdh/zxdh_logs.h      |   38 +
 drivers/net/zxdh/zxdh_msg.c       | 1177 +++++++++++++++++++++++++
 drivers/net/zxdh/zxdh_msg.h       |  408 +++++++++
 drivers/net/zxdh/zxdh_npsdk.c     |  158 ++++
 drivers/net/zxdh/zxdh_npsdk.h     |  216 +++++
 drivers/net/zxdh/zxdh_pci.c       |  462 ++++++++++
 drivers/net/zxdh/zxdh_pci.h       |  259 ++++++
 drivers/net/zxdh/zxdh_queue.c     |  138 +++
 drivers/net/zxdh/zxdh_queue.h     |   85 ++
 drivers/net/zxdh/zxdh_ring.h      |   87 ++
 drivers/net/zxdh/zxdh_rxtx.h      |   48 ++
 20 files changed, 4766 insertions(+)
 create mode 100644 doc/guides/nics/features/zxdh.ini
 create mode 100644 doc/guides/nics/zxdh.rst
 create mode 100644 drivers/net/zxdh/meson.build
 create mode 100644 drivers/net/zxdh/zxdh_common.c
 create mode 100644 drivers/net/zxdh/zxdh_common.h
 create mode 100644 drivers/net/zxdh/zxdh_ethdev.c
 create mode 100644 drivers/net/zxdh/zxdh_ethdev.h
 create mode 100644 drivers/net/zxdh/zxdh_logs.h
 create mode 100644 drivers/net/zxdh/zxdh_msg.c
 create mode 100644 drivers/net/zxdh/zxdh_msg.h
 create mode 100644 drivers/net/zxdh/zxdh_npsdk.c
 create mode 100644 drivers/net/zxdh/zxdh_npsdk.h
 create mode 100644 drivers/net/zxdh/zxdh_pci.c
 create mode 100644 drivers/net/zxdh/zxdh_pci.h
 create mode 100644 drivers/net/zxdh/zxdh_queue.c
 create mode 100644 drivers/net/zxdh/zxdh_queue.h
 create mode 100644 drivers/net/zxdh/zxdh_ring.h
 create mode 100644 drivers/net/zxdh/zxdh_rxtx.h

diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini
new file mode 100644
index 0000000000..083c75511b
--- /dev/null
+++ b/doc/guides/nics/features/zxdh.ini
@@ -0,0 +1,10 @@
+;
+; Supported features of the 'zxdh' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Linux                = Y
+x86-64               = Y
+ARMv8                = Y
+

diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index c14bc7988a..8e371ac4a5 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -69,3 +69,4 @@ Network Interface Controller Drivers
     vhost
     virtio
     vmxnet3
+    zxdh

diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst
new file mode 100644
index 0000000000..e878058b7b
--- /dev/null
+++ b/doc/guides/nics/zxdh.rst
@@ -0,0 +1,34 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright(c) 2023 ZTE Corporation.
+
+ZXDH Poll Mode Driver
+======================
+
+The ZXDH PMD (**librte_net_zxdh**) provides poll mode driver support
+for the 25/100 Gbps ZXDH NX Series Ethernet Controllers based on
+the ZTE Ethernet Controller E310/E312.
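+
+The driver accepts a single ``q_depth`` devarg selecting the virtqueue
+depth; values are clamped to the 1024-32768 range and rounded up to the
+next power of two. For example, assuming a device bound at the PCI
+address ``0000:81:00.0`` (an example address, adjust to your setup)::
+
+   dpdk-testpmd -a 0000:81:00.0,q_depth=2048 -- -i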
+
+
+Features
+--------
+
+Features of the zxdh PMD are:
+
+- Multi arch support: x86_64, ARMv8.
+
+Prerequisites
+-------------
+
+- Information about ZXDH NX Series Ethernet Controller NICs is available at
+  `<https://enterprise.zte.com.cn/sup-detail.html?id=271&suptype=1>`_.
+
+Driver compilation and testing
+------------------------------
+
+Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
+for details.
+
+Limitations or Known Issues
+---------------------------
+
+x86-32, Power8, ARMv7 and BSD are not supported yet.
+

diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index fb6d34b782..1a3db8a04d 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -62,6 +62,7 @@ drivers = [
         'vhost',
         'virtio',
         'vmxnet3',
+        'zxdh',
 ]
 std_deps = ['ethdev', 'kvargs'] # 'ethdev' also pulls in mbuf, net, eal etc
 std_deps += ['bus_pci']         # very many PMDs depend on PCI, so make std

diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build
new file mode 100644
index 0000000000..593e3c5933
--- /dev/null
+++ b/drivers/net/zxdh/meson.build
@@ -0,0 +1,23 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2024 ZTE Corporation
+
+if not is_linux
+    build = false
+    reason = 'only supported on Linux'
+    subdir_done()
+endif
+
+if arch_subdir != 'x86' and arch_subdir != 'arm' or not dpdk_conf.get('RTE_ARCH_64')
+    build = false
+    reason = 'only supported on x86_64 and aarch64'
+    subdir_done()
+endif
+
+sources = files(
+        'zxdh_ethdev.c',
+        'zxdh_common.c',
+        'zxdh_pci.c',
+        'zxdh_msg.c',
+        'zxdh_queue.c',
+        'zxdh_npsdk.c',
+)

diff --git a/drivers/net/zxdh/zxdh_common.c b/drivers/net/zxdh/zxdh_common.c
new file mode 100644
index 0000000000..55497f8a24
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_common.c
@@ -0,0 +1,59 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 ZTE Corporation
+ */
+
+#include <stdint.h>
+#include <ethdev_driver.h>
+
+#include "zxdh_ethdev.h"
+#include "zxdh_common.h"
+
+uint32_t zxdh_read_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg)
+{
+	struct zxdh_hw *hw = dev->data->dev_private;
+	uint64_t baseaddr = (uint64_t)(hw->bar_addr[bar]);
+	uint32_t val = *((volatile uint32_t *)(baseaddr + reg));
+	return val;
+}
+
+void zxdh_write_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg, uint32_t val)
+{
+	struct zxdh_hw *hw = dev->data->dev_private;
+	uint64_t baseaddr = (uint64_t)(hw->bar_addr[bar]);
+	*((volatile uint32_t *)(baseaddr + reg)) = val;
+}
+
+int32_t zxdh_acquire_lock(struct zxdh_hw *hw)
+{
+	uint32_t var = zxdh_read_comm_reg((uint64_t)hw->common_cfg, ZXDH_VF_LOCK_REG);
+
+	/* check whether lock is used */
+	if (!(var & ZXDH_VF_LOCK_ENABLE_MASK))
+		return -1;
+
+	return 0;
+}
+
+int32_t zxdh_release_lock(struct zxdh_hw *hw)
+{
+	uint32_t var = zxdh_read_comm_reg((uint64_t)hw->common_cfg, ZXDH_VF_LOCK_REG);
+
+	if (var & ZXDH_VF_LOCK_ENABLE_MASK) {
+		var &= ~ZXDH_VF_LOCK_ENABLE_MASK;
+		zxdh_write_comm_reg((uint64_t)hw->common_cfg, ZXDH_VF_LOCK_REG, var);
+		return 0;
+	}
+
+	return -1;
+}
+
+uint32_t zxdh_read_comm_reg(uint64_t pci_comm_cfg_baseaddr, uint32_t reg)
+{
+	uint32_t val = *((volatile uint32_t *)(pci_comm_cfg_baseaddr + reg));
+	return val;
+}
+
+void zxdh_write_comm_reg(uint64_t pci_comm_cfg_baseaddr, uint32_t reg, uint32_t val)
+{
+	*((volatile uint32_t *)(pci_comm_cfg_baseaddr + reg)) = val;
+}

diff --git a/drivers/net/zxdh/zxdh_common.h b/drivers/net/zxdh/zxdh_common.h
new file mode 100644
index 0000000000..912eb9ad42
--- /dev/null
+++ 
b/drivers/net/zxdh/zxdh_common.h @@ -0,0 +1,32 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2023 ZTE Corporation + */ + +#ifndef _ZXDH_COMMON_H_ +#define _ZXDH_COMMON_H_ + +#ifdef __cplusplus +extern "C" { +#endif + +#include <stdint.h> +#include <rte_ethdev.h> + +#include "zxdh_ethdev.h" + +#define ZXDH_VF_LOCK_ENABLE_MASK 0x1 +#define ZXDH_ACQUIRE_CHANNEL_NUM_MAX 10 +#define ZXDH_VF_LOCK_REG 0x90 + +uint32_t zxdh_read_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg); +void zxdh_write_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg, uint32_t val); +int32_t zxdh_release_lock(struct zxdh_hw *hw); +int32_t zxdh_acquire_lock(struct zxdh_hw *hw); +uint32_t zxdh_read_comm_reg(uint64_t pci_comm_cfg_baseaddr, uint32_t reg); +void zxdh_write_comm_reg(uint64_t pci_comm_cfg_baseaddr, uint32_t reg, uint32_t val); + +#ifdef __cplusplus +} +#endif + +#endif /* _ZXDH_COMMON_H_ */ diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c new file mode 100644 index 0000000000..813ced24cd --- /dev/null +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -0,0 +1,1328 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#include <rte_memcpy.h> +#include <rte_malloc.h> +#include <rte_interrupts.h> +#include <eal_interrupts.h> +#include <ethdev_pci.h> +#include <rte_kvargs.h> +#include <rte_hexdump.h> + +#include "zxdh_ethdev.h" +#include "zxdh_pci.h" +#include "zxdh_logs.h" +#include "zxdh_queue.h" +#include "zxdh_rxtx.h" +#include "zxdh_ethdev.h" +#include "zxdh_msg.h" +#include "zxdh_npsdk.h" + +struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; +struct zxdh_shared_data *zxdh_shared_data; +const char *MZ_ZXDH_PMD_SHARED_DATA = "zxdh_pmd_shared_data"; +rte_spinlock_t zxdh_shared_data_lock = RTE_SPINLOCK_INITIALIZER; +struct zxdh_dtb_shared_data g_dtb_data = {0}; + +#define ZXDH_PMD_DEFAULT_HOST_FEATURES \ + (1ULL << ZXDH_NET_F_MRG_RXBUF | \ + 1ULL << ZXDH_NET_F_STATUS | \ + 1ULL << ZXDH_NET_F_MQ | \ + 1ULL << ZXDH_F_ANY_LAYOUT | \ + 1ULL << ZXDH_F_VERSION_1 | \ + 1ULL << ZXDH_F_RING_PACKED | \ + 1ULL << ZXDH_F_IN_ORDER | \ + 1ULL << ZXDH_F_ORDER_PLATFORM | \ + 1ULL << ZXDH_F_NOTIFICATION_DATA |\ + 1ULL << ZXDH_NET_F_MAC | \ + 1ULL << ZXDH_NET_F_CSUM |\ + 1ULL << ZXDH_NET_F_GUEST_CSUM |\ + 1ULL << ZXDH_NET_F_GUEST_TSO4 |\ + 1ULL << ZXDH_NET_F_GUEST_TSO6 |\ + 1ULL << ZXDH_NET_F_HOST_TSO4 |\ + 1ULL << ZXDH_NET_F_HOST_TSO6 |\ + 1ULL << ZXDH_NET_F_GUEST_UFO |\ + 1ULL << ZXDH_NET_F_HOST_UFO) + +#define ZXDH_PMD_DEFAULT_GUEST_FEATURES \ + (1ULL << ZXDH_NET_F_MRG_RXBUF | \ + 1ULL << ZXDH_NET_F_STATUS | \ + 1ULL << ZXDH_NET_F_MQ | \ + 1ULL << ZXDH_F_ANY_LAYOUT | \ + 1ULL << ZXDH_F_VERSION_1 | \ + 1ULL << ZXDH_F_RING_PACKED | \ + 1ULL << ZXDH_F_IN_ORDER | \ + 1ULL << ZXDH_F_NOTIFICATION_DATA | \ + 1ULL << ZXDH_NET_F_MAC) + +#define ZXDH_RX_QUEUES_MAX 128U +#define ZXDH_TX_QUEUES_MAX 128U + +static unsigned int +log2above(unsigned int v) +{ + unsigned int l; + unsigned int r; + + for (l = 0, r = 0; (v >> 1); ++l, v >>= 1) + r |= (v & 1); + return l + r; +} + +static uint16_t zxdh_queue_desc_pre_setup(uint16_t desc) +{ + uint32_t nb_desc = desc; + + if (desc < ZXDH_MIN_QUEUE_DEPTH) { + PMD_RX_LOG(WARNING, + "nb_desc(%u) increased number of descriptors to the min queue depth (%u)", + desc, ZXDH_MIN_QUEUE_DEPTH); + return ZXDH_MIN_QUEUE_DEPTH; + } + + if (desc > ZXDH_MAX_QUEUE_DEPTH) { + PMD_RX_LOG(WARNING, + "nb_desc(%u) can't be greater than max_rxds (%d), turn to max queue depth", + desc, ZXDH_MAX_QUEUE_DEPTH); + return 
ZXDH_MAX_QUEUE_DEPTH; + } + + if (!rte_is_power_of_2(desc)) { + nb_desc = 1 << log2above(desc); + if (nb_desc > ZXDH_MAX_QUEUE_DEPTH) + nb_desc = ZXDH_MAX_QUEUE_DEPTH; + + PMD_RX_LOG(WARNING, + "nb_desc(%u) increased number of descriptors to the next power of two (%d)", + desc, nb_desc); + } + + return nb_desc; +} + +static int32_t hw_q_depth_handler(const char *key __rte_unused, + const char *value, void *ret_val) +{ + uint16_t val = 0; + struct zxdh_hw *hw = ret_val; + + val = strtoul(value, NULL, 0); + uint16_t q_depth = zxdh_queue_desc_pre_setup(val); + + hw->q_depth = q_depth; + return 0; +} + +static int32_t zxdh_dev_devargs_parse(struct rte_devargs *devargs, struct zxdh_hw *hw) +{ + struct rte_kvargs *kvlist = NULL; + int32_t ret = 0; + + if (devargs == NULL) + return 0; + + kvlist = rte_kvargs_parse(devargs->args, NULL); + if (kvlist == NULL) { + PMD_INIT_LOG(ERR, "error when parsing param"); + return 0; + } + + ret = rte_kvargs_process(kvlist, "q_depth", hw_q_depth_handler, hw); + if (ret < 0) { + PMD_INIT_LOG(ERR, "Failed to parse q_depth"); + goto exit; + } + if (!hw->q_depth) + hw->q_depth = ZXDH_MIN_QUEUE_DEPTH; + +exit: + rte_kvargs_free(kvlist); + return ret; +} + +static int zxdh_init_shared_data(void) +{ + const struct rte_memzone *mz; + int ret = 0; + + rte_spinlock_lock(&zxdh_shared_data_lock); + if (zxdh_shared_data == NULL) { + if (rte_eal_process_type() == RTE_PROC_PRIMARY) { + /* Allocate shared memory. */ + mz = rte_memzone_reserve(MZ_ZXDH_PMD_SHARED_DATA, + sizeof(*zxdh_shared_data), SOCKET_ID_ANY, 0); + if (mz == NULL) { + PMD_INIT_LOG(ERR, "Cannot allocate zxdh shared data"); + ret = -rte_errno; + goto error; + } + zxdh_shared_data = mz->addr; + memset(zxdh_shared_data, 0, sizeof(*zxdh_shared_data)); + rte_spinlock_init(&zxdh_shared_data->lock); + } else { /* Lookup allocated shared memory. 
*/ + mz = rte_memzone_lookup(MZ_ZXDH_PMD_SHARED_DATA); + if (mz == NULL) { + PMD_INIT_LOG(ERR, "Cannot attach zxdh shared data"); + ret = -rte_errno; + goto error; + } + zxdh_shared_data = mz->addr; + } + } + +error: + rte_spinlock_unlock(&zxdh_shared_data_lock); + return ret; +} + +static int zxdh_init_once(struct rte_eth_dev *eth_dev) +{ + PMD_INIT_LOG(DEBUG, "port 0x%x init...", eth_dev->data->port_id); + if (zxdh_init_shared_data()) + return -rte_errno; + + struct zxdh_shared_data *sd = zxdh_shared_data; + int ret = 0; + + rte_spinlock_lock(&sd->lock); + if (rte_eal_process_type() == RTE_PROC_SECONDARY) { + if (!sd->init_done) { + ++sd->secondary_cnt; + sd->init_done = true; + } + goto out; + } + + sd->dev_refcnt++; +out: + rte_spinlock_unlock(&sd->lock); + return ret; +} + +static int32_t zxdh_get_pci_dev_config(struct zxdh_hw *hw) +{ + hw->host_features = zxdh_vtpci_get_features(hw); + hw->host_features = ZXDH_PMD_DEFAULT_HOST_FEATURES; + + uint64_t guest_features = (uint64_t)ZXDH_PMD_DEFAULT_GUEST_FEATURES; + uint64_t nego_features = guest_features & hw->host_features; + + hw->guest_features = nego_features; + + if (hw->guest_features & (1ULL << ZXDH_NET_F_MAC)) { + zxdh_vtpci_read_dev_config(hw, offsetof(struct zxdh_net_config, mac), + &hw->mac_addr, RTE_ETHER_ADDR_LEN); + PMD_INIT_LOG(DEBUG, "get dev mac: %02X:%02X:%02X:%02X:%02X:%02X", + hw->mac_addr[0], hw->mac_addr[1], + hw->mac_addr[2], hw->mac_addr[3], + hw->mac_addr[4], hw->mac_addr[5]); + } else { + rte_eth_random_addr(&hw->mac_addr[0]); + PMD_INIT_LOG(DEBUG, "random dev mac: %02X:%02X:%02X:%02X:%02X:%02X", + hw->mac_addr[0], hw->mac_addr[1], + hw->mac_addr[2], hw->mac_addr[3], + hw->mac_addr[4], hw->mac_addr[5]); + } + uint32_t max_queue_pairs; + + zxdh_vtpci_read_dev_config(hw, offsetof(struct zxdh_net_config, max_virtqueue_pairs), + &max_queue_pairs, sizeof(max_queue_pairs)); + PMD_INIT_LOG(DEBUG, "get max queue pairs %u", max_queue_pairs); + if (max_queue_pairs == 0) + hw->max_queue_pairs = ZXDH_RX_QUEUES_MAX; + else + hw->max_queue_pairs = RTE_MIN(ZXDH_RX_QUEUES_MAX, max_queue_pairs); + + PMD_INIT_LOG(DEBUG, "set max queue pairs %d", hw->max_queue_pairs); + + hw->weak_barriers = !vtpci_with_feature(hw, ZXDH_F_ORDER_PLATFORM); + return 0; +} + +static void zxdh_dev_free_mbufs(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint16_t nr_vq = hw->queue_num; + uint32_t i, mbuf_num = 0; + + const char *type __rte_unused; + struct virtqueue *vq = NULL; + struct rte_mbuf *buf = NULL; + int32_t queue_type = 0; + + if (hw->vqs == NULL) + return; + + for (i = 0; i < nr_vq; i++) { + vq = hw->vqs[i]; + if (!vq) + continue; + + queue_type = get_queue_type(i); + if (queue_type == VTNET_RQ) + type = "rxq"; + else if (queue_type == VTNET_TQ) + type = "txq"; + else + continue; + + PMD_INIT_LOG(DEBUG, "Before freeing %s[%d] used and unused buf", type, i); + + while ((buf = zxdh_virtqueue_detach_unused(vq)) != NULL) { + rte_pktmbuf_free(buf); + mbuf_num++; + } + + PMD_INIT_LOG(DEBUG, "After freeing %s[%d] used and unused buf", type, i); + } + + PMD_INIT_LOG(DEBUG, "%d mbufs freed", mbuf_num); +} + +static int32_t zxdh_init_device(struct rte_eth_dev *eth_dev) +{ + struct zxdh_hw *hw = eth_dev->data->dev_private; + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); + int ret = zxdh_read_pci_caps(pci_dev, hw); + + if (ret) { + PMD_INIT_LOG(ERR, "port 0x%x pci caps read failed .", hw->vport.vport); + goto err; + } + zxdh_hw_internal[hw->port_id].vtpci_ops = &zxdh_modern_ops; + zxdh_vtpci_reset(hw); + 
zxdh_get_pci_dev_config(hw); + if (hw->vqs) { /* not reachable? */ + zxdh_dev_free_mbufs(eth_dev); + ret = zxdh_free_queues(eth_dev); + if (ret < 0) { + PMD_INIT_LOG(ERR, "port 0x%x free queue failed.", hw->vport.vport); + goto err; + } + } + eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS; + hw->speed = RTE_ETH_SPEED_NUM_UNKNOWN; + hw->duplex = RTE_ETH_LINK_FULL_DUPLEX; + + rte_ether_addr_copy((struct rte_ether_addr *)hw->mac_addr, ð_dev->data->mac_addrs[0]); + PMD_INIT_LOG(DEBUG, "PORT MAC: %02X:%02X:%02X:%02X:%02X:%02X", + eth_dev->data->mac_addrs->addr_bytes[0], + eth_dev->data->mac_addrs->addr_bytes[1], + eth_dev->data->mac_addrs->addr_bytes[2], + eth_dev->data->mac_addrs->addr_bytes[3], + eth_dev->data->mac_addrs->addr_bytes[4], + eth_dev->data->mac_addrs->addr_bytes[5]); + /* If host does not support both status and MSI-X then disable LSC */ + if (vtpci_with_feature(hw, ZXDH_NET_F_STATUS) && (hw->use_msix != ZXDH_MSIX_NONE)) { + eth_dev->data->dev_flags |= RTE_ETH_DEV_INTR_LSC; + PMD_INIT_LOG(DEBUG, "LSC enable"); + } else { + eth_dev->data->dev_flags &= ~RTE_ETH_DEV_INTR_LSC; + } + return 0; + +err: + PMD_INIT_LOG(ERR, "port %d init device failed", eth_dev->data->port_id); + return ret; +} + + +static void zxdh_queues_unbind_intr(struct rte_eth_dev *dev) +{ + PMD_INIT_LOG(INFO, "queue/interrupt unbinding"); + struct zxdh_hw *hw = dev->data->dev_private; + int32_t i; + + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + VTPCI_OPS(hw)->set_queue_irq(hw, hw->vqs[i * 2], ZXDH_MSI_NO_VECTOR); + VTPCI_OPS(hw)->set_queue_irq(hw, hw->vqs[i * 2 + 1], ZXDH_MSI_NO_VECTOR); + } +} + +static int32_t zxdh_intr_unmask(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + if (rte_intr_ack(dev->intr_handle) < 0) + return -1; + + hw->use_msix = zxdh_vtpci_msix_detect(RTE_ETH_DEV_TO_PCI(dev)); + + return 0; +} + + +static void zxdh_devconf_intr_handler(void *param) +{ + struct rte_eth_dev *dev = param; + struct zxdh_hw *hw = dev->data->dev_private; + uint16_t status = 0; + /* Read interrupt status which clears interrupt */ + uint8_t isr = zxdh_vtpci_isr(hw); + + if (zxdh_intr_unmask(dev) < 0) + PMD_DRV_LOG(ERR, "interrupt enable failed"); + if (isr & ZXDH_PCI_ISR_CONFIG) { + /** todo provided later + * if (zxdh_dev_link_update(dev, 0) == 0) + * rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_LSC, NULL); + */ + + if (vtpci_with_feature(hw, ZXDH_NET_F_STATUS)) { + zxdh_vtpci_read_dev_config(hw, offsetof(struct zxdh_net_config, status), + &status, sizeof(status)); + if (status & ZXDH_NET_S_ANNOUNCE) + zxdh_notify_peers(dev); + } + } +} + +/* Interrupt handler triggered by NIC for handling specific interrupt. */ +static void zxdh_frompfvf_intr_handler(void *param) +{ + struct rte_eth_dev *dev = param; + struct zxdh_hw *hw = dev->data->dev_private; + uint64_t virt_addr = 0; + + virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_MSG_CHAN_PFVFSHARE_OFFSET); + if (hw->is_pf) { + PMD_INIT_LOG(INFO, "zxdh_pf2vf_intr_handler PF "); + zxdh_bar_irq_recv(MSG_CHAN_END_VF, MSG_CHAN_END_PF, virt_addr, dev); + } else { + PMD_INIT_LOG(INFO, "zxdh_pf2vf_intr_handler VF "); + zxdh_bar_irq_recv(MSG_CHAN_END_PF, MSG_CHAN_END_VF, virt_addr, dev); + } +} + +/* Interrupt handler triggered by NIC for handling specific interrupt. 
*/ +static void zxdh_fromriscv_intr_handler(void *param) +{ + struct rte_eth_dev *dev = param; + struct zxdh_hw *hw = dev->data->dev_private; + uint64_t virt_addr = 0; + + virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_CTRLCH_OFFSET); + if (hw->is_pf) { + PMD_INIT_LOG(INFO, "zxdh_risc2pf_intr_handler PF "); + zxdh_bar_irq_recv(MSG_CHAN_END_RISC, MSG_CHAN_END_PF, virt_addr, dev); + } else { + PMD_INIT_LOG(INFO, "zxdh_riscvf_intr_handler VF "); + zxdh_bar_irq_recv(MSG_CHAN_END_RISC, MSG_CHAN_END_VF, virt_addr, dev); + } +} + +static void zxdh_intr_cb_unreg(struct rte_eth_dev *dev) +{ + PMD_INIT_LOG(ERR, ""); + if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) + rte_intr_callback_unregister(dev->intr_handle, zxdh_devconf_intr_handler, dev); + + struct zxdh_hw *hw = dev->data->dev_private; + + /* register callback to update dev config intr */ + rte_intr_callback_unregister(dev->intr_handle, zxdh_devconf_intr_handler, dev); + /* Register rsic_v to pf interrupt callback */ + struct rte_intr_handle *tmp = hw->risc_intr + + (MSIX_FROM_PFVF - ZXDH_MSIX_INTR_MSG_VEC_BASE); + + rte_intr_callback_unregister(tmp, zxdh_frompfvf_intr_handler, dev); + tmp = hw->risc_intr + (MSIX_FROM_RISCV - ZXDH_MSIX_INTR_MSG_VEC_BASE); + rte_intr_callback_unregister(tmp, zxdh_fromriscv_intr_handler, dev); +} + +static int32_t zxdh_intr_disable(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + if (!hw->intr_enabled) + return 0; + + zxdh_intr_cb_unreg(dev); + if (rte_intr_disable(dev->intr_handle) < 0) + return -1; + + hw->intr_enabled = 0; + return 0; +} + +static int32_t zxdh_intr_release(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) + VTPCI_OPS(hw)->set_config_irq(hw, ZXDH_MSI_NO_VECTOR); + + zxdh_queues_unbind_intr(dev); + zxdh_intr_disable(dev); + + rte_intr_efd_disable(dev->intr_handle); + rte_intr_vec_list_free(dev->intr_handle); + rte_free(hw->risc_intr); + hw->risc_intr = NULL; + rte_free(hw->dtb_intr); + hw->dtb_intr = NULL; + return 0; +} + +static int32_t zxdh_setup_risc_interrupts(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint8_t i; + + if (!hw->risc_intr) { + PMD_INIT_LOG(ERR, " to allocate risc_intr"); + hw->risc_intr = rte_zmalloc("risc_intr", + ZXDH_MSIX_INTR_MSG_VEC_NUM * sizeof(struct rte_intr_handle), 0); + if (hw->risc_intr == NULL) { + PMD_INIT_LOG(ERR, "Failed to allocate risc_intr"); + return -ENOMEM; + } + } + + for (i = 0; i < ZXDH_MSIX_INTR_MSG_VEC_NUM; i++) { + if (dev->intr_handle->efds[i] < 0) { + PMD_INIT_LOG(ERR, "[%u]risc interrupt fd is invalid", i); + rte_free(hw->risc_intr); + hw->risc_intr = NULL; + return -1; + } + + struct rte_intr_handle *intr_handle = hw->risc_intr + i; + + intr_handle->fd = dev->intr_handle->efds[i]; + intr_handle->type = dev->intr_handle->type; + } + + return 0; +} + +static int32_t zxdh_setup_dtb_interrupts(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + if (!hw->dtb_intr) { + hw->dtb_intr = rte_zmalloc("dtb_intr", sizeof(struct rte_intr_handle), 0); + if (hw->dtb_intr == NULL) { + PMD_INIT_LOG(ERR, "Failed to allocate dtb_intr"); + return -ENOMEM; + } + } + + if (dev->intr_handle->efds[ZXDH_MSIX_INTR_DTB_VEC - 1] < 0) { + PMD_INIT_LOG(ERR, "[%d]dtb interrupt fd is invalid", ZXDH_MSIX_INTR_DTB_VEC - 1); + rte_free(hw->dtb_intr); + hw->dtb_intr = NULL; + return -1; + } + hw->dtb_intr->fd = dev->intr_handle->efds[ZXDH_MSIX_INTR_DTB_VEC - 1]; + hw->dtb_intr->type = 
dev->intr_handle->type; + return 0; +} + +static int32_t zxdh_queues_bind_intr(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + int32_t i; + uint16_t vec; + + if (!dev->data->dev_conf.intr_conf.rxq) { + PMD_INIT_LOG(INFO, "queue/interrupt mask, nb_rx_queues %u", + dev->data->nb_rx_queues); + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + vec = VTPCI_OPS(hw)->set_queue_irq(hw, + hw->vqs[i * 2], ZXDH_MSI_NO_VECTOR); + PMD_INIT_LOG(INFO, "vq%d irq set 0x%x, get 0x%x", + i * 2, ZXDH_MSI_NO_VECTOR, vec); + } + } else { + PMD_INIT_LOG(DEBUG, "queue/interrupt binding, nb_rx_queues %u", + dev->data->nb_rx_queues); + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + vec = VTPCI_OPS(hw)->set_queue_irq(hw, + hw->vqs[i * 2], i + ZXDH_QUE_INTR_VEC_BASE); + PMD_INIT_LOG(INFO, "vq%d irq set %d, get %d", + i * 2, i + ZXDH_QUE_INTR_VEC_BASE, vec); + } + } + /* mask all txq intr */ + for (i = 0; i < dev->data->nb_tx_queues; ++i) { + vec = VTPCI_OPS(hw)->set_queue_irq(hw, + hw->vqs[(i * 2) + 1], ZXDH_MSI_NO_VECTOR); + PMD_INIT_LOG(INFO, "vq%d irq set 0x%x, get 0x%x", + (i * 2) + 1, ZXDH_MSI_NO_VECTOR, vec); + } + return 0; +} + +/* + * Should be called only after device is paused. + */ +int32_t zxdh_inject_pkts(struct rte_eth_dev *dev, struct rte_mbuf **tx_pkts, int32_t nb_pkts) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct virtnet_tx *txvq = dev->data->tx_queues[0]; + int32_t ret = 0; + + hw->inject_pkts = tx_pkts; + ret = dev->tx_pkt_burst(txvq, tx_pkts, nb_pkts); + hw->inject_pkts = NULL; + + return ret; +} + +int32_t zxdh_dev_pause(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + if (hw->started == 0) { + /* Device is just stopped. */ + return -1; + } + hw->started = 0; + hw->admin_status = 0; + /* + * Prevent the worker threads from touching queues to avoid contention, + * 1 ms should be enough for the ongoing Tx function to finish. 
+ */ + rte_delay_ms(1); + return 0; +} + +void zxdh_notify_peers(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct virtnet_rx *rxvq = NULL; + struct rte_mbuf *rarp_mbuf = NULL; + + if (!dev->data->rx_queues) + return; + + rxvq = dev->data->rx_queues[0]; + if (!rxvq) + return; + + rarp_mbuf = rte_net_make_rarp_packet(rxvq->mpool, (struct rte_ether_addr *)hw->mac_addr); + if (rarp_mbuf == NULL) { + PMD_DRV_LOG(ERR, "failed to make RARP packet."); + return; + } + + /* If virtio port just stopped, no need to send RARP */ + rte_spinlock_lock(&hw->state_lock); + if (zxdh_dev_pause(dev) < 0) { + rte_pktmbuf_free(rarp_mbuf); + rte_spinlock_unlock(&hw->state_lock); + return; + } + zxdh_inject_pkts(dev, &rarp_mbuf, 1); + hw->started = 1; + hw->admin_status = 1; + rte_spinlock_unlock(&hw->state_lock); +} + +static void zxdh_intr_cb_reg(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) + rte_intr_callback_unregister(dev->intr_handle, zxdh_devconf_intr_handler, dev); + + /* register callback to update dev config intr */ + rte_intr_callback_register(dev->intr_handle, zxdh_devconf_intr_handler, dev); + /* Register rsic_v to pf interrupt callback */ + struct rte_intr_handle *tmp = hw->risc_intr + + (MSIX_FROM_PFVF - ZXDH_MSIX_INTR_MSG_VEC_BASE); + + rte_intr_callback_register(tmp, zxdh_frompfvf_intr_handler, dev); + + tmp = hw->risc_intr + (MSIX_FROM_RISCV - ZXDH_MSIX_INTR_MSG_VEC_BASE); + rte_intr_callback_register(tmp, zxdh_fromriscv_intr_handler, dev); +} + +static int32_t zxdh_intr_enable(struct rte_eth_dev *dev) +{ + int ret = 0; + struct zxdh_hw *hw = dev->data->dev_private; + + if (!hw->intr_enabled) { + zxdh_intr_cb_reg(dev); + ret = rte_intr_enable(dev->intr_handle); + if (unlikely(ret)) + PMD_INIT_LOG(ERR, "Failed to enable %s intr", dev->data->name); + + hw->intr_enabled = 1; + } + return ret; +} + +static int32_t zxdh_configure_intr(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + int32_t ret = 0; + + if (!rte_intr_cap_multiple(dev->intr_handle)) { + PMD_INIT_LOG(ERR, "Multiple intr vector not supported"); + return -ENOTSUP; + } + zxdh_intr_release(dev); + uint8_t nb_efd = ZXDH_MSIX_INTR_DTB_VEC_NUM + ZXDH_MSIX_INTR_MSG_VEC_NUM; + + if (dev->data->dev_conf.intr_conf.rxq) + nb_efd += dev->data->nb_rx_queues; + + if (rte_intr_efd_enable(dev->intr_handle, nb_efd)) { + PMD_INIT_LOG(ERR, "Fail to create eventfd"); + return -1; + } + + if (rte_intr_vec_list_alloc(dev->intr_handle, "intr_vec", + hw->max_queue_pairs + ZXDH_INTR_NONQUE_NUM)) { + PMD_INIT_LOG(ERR, "Failed to allocate %u rxq vectors", + hw->max_queue_pairs + ZXDH_INTR_NONQUE_NUM); + return -ENOMEM; + } + PMD_INIT_LOG(INFO, "allocate %u rxq vectors", dev->intr_handle->vec_list_size); + if (zxdh_setup_risc_interrupts(dev) != 0) { + PMD_INIT_LOG(ERR, "Error setting up rsic_v interrupts!"); + ret = -1; + goto free_intr_vec; + } + if (zxdh_setup_dtb_interrupts(dev) != 0) { + PMD_INIT_LOG(ERR, "Error setting up dtb interrupts!"); + ret = -1; + goto free_intr_vec; + } + + if (zxdh_queues_bind_intr(dev) < 0) { + PMD_INIT_LOG(ERR, "Failed to bind queue/interrupt"); + ret = -1; + goto free_intr_vec; + } + /** DO NOT try to remove this! This function will enable msix, + * or QEMU will encounter SIGSEGV when DRIVER_OK is sent. + * And for legacy devices, this should be done before queue/vec + * binding to change the config size from 20 to 24, or + * ZXDH_MSI_QUEUE_VECTOR (22) will be ignored. 
+ **/ + if (zxdh_intr_enable(dev) < 0) { + PMD_DRV_LOG(ERR, "interrupt enable failed"); + ret = -1; + goto free_intr_vec; + } + return 0; + +free_intr_vec: + zxdh_intr_release(dev); + return ret; +} + +/* dev_ops for zxdh, bare necessities for basic operation */ +static const struct eth_dev_ops zxdh_eth_dev_ops = { + .dev_configure = NULL, + .dev_start = NULL, + .dev_stop = NULL, + .dev_close = NULL, + + .rx_queue_setup = NULL, + .rx_queue_intr_enable = NULL, + .rx_queue_intr_disable = NULL, + + .tx_queue_setup = NULL, +}; + + +static int32_t set_rxtx_funcs(struct rte_eth_dev *eth_dev) +{ + /** todo later + * eth_dev->tx_pkt_prepare = zxdh_xmit_pkts_prepare; + */ + + struct zxdh_hw *hw = eth_dev->data->dev_private; + + if (!vtpci_packed_queue(hw)) { + PMD_INIT_LOG(ERR, " port %u not support packed queue", eth_dev->data->port_id); + return -1; + } + if (!vtpci_with_feature(hw, ZXDH_NET_F_MRG_RXBUF)) { + PMD_INIT_LOG(ERR, " port %u not support rx mergeable", eth_dev->data->port_id); + return -1; + } + /** todo later provided rx/tx + * eth_dev->tx_pkt_burst = &zxdh_xmit_pkts_packed; + * eth_dev->rx_pkt_burst = &zxdh_recv_mergeable_pkts_packed; + */ + + return 0; +} + +static void zxdh_msg_cb_reg(struct zxdh_hw *hw) +{ + if (hw->is_pf) + zxdh_bar_chan_msg_recv_register(MODULE_BAR_MSG_TO_PF, pf_recv_bar_msg); + else + zxdh_bar_chan_msg_recv_register(MODULE_BAR_MSG_TO_VF, vf_recv_bar_msg); +} + +static void zxdh_priv_res_init(struct zxdh_hw *hw) +{ + hw->vlan_fiter = (uint64_t *)rte_malloc("vlan_filter", 64 * sizeof(uint64_t), 1); + memset(hw->vlan_fiter, 0, 64 * sizeof(uint64_t)); + if (hw->is_pf) + hw->vfinfo = rte_zmalloc("vfinfo", ZXDH_MAX_VF * sizeof(struct vfinfo), 4); + else + hw->vfinfo = NULL; +} + +static void set_vfs_pcieid(struct zxdh_hw *hw) +{ + if (hw->pfinfo.vf_nums > ZXDH_MAX_VF) { + PMD_DRV_LOG(ERR, "vf nums %u out of range", hw->pfinfo.vf_nums); + return; + } + if (hw->vfinfo == NULL) { + PMD_DRV_LOG(ERR, " vfinfo uninited"); + return; + } + + PMD_DRV_LOG(INFO, "vf nums %d", hw->pfinfo.vf_nums); + int vf_idx; + + for (vf_idx = 0; vf_idx < hw->pfinfo.vf_nums; vf_idx++) + hw->vfinfo[vf_idx].pcieid = VF_PCIE_ID(hw->pcie_id, vf_idx); +} + + +static void zxdh_sriovinfo_init(struct zxdh_hw *hw) +{ + hw->pfinfo.pcieid = PF_PCIE_ID(hw->pcie_id); + + if (hw->is_pf) + set_vfs_pcieid(hw); +} + +static int zxdh_tbl_entry_offline_destroy(struct zxdh_hw *hw) +{ + int ret = 0; + uint32_t sdt_no; + + if (!g_dtb_data.init_done) + return ret; + + if (hw->is_pf) { + sdt_no = MK_SDT_NO(L2_ENTRY, hw->hash_search_index); + ret = dpp_dtb_hash_offline_delete(0, g_dtb_data.queueid, sdt_no, 0); + PMD_DRV_LOG(DEBUG, "%d dpp_dtb_hash_offline_delete sdt_no %d", + hw->port_id, sdt_no); + if (ret) + PMD_DRV_LOG(ERR, "%d dpp_dtb_hash_offline_delete sdt_no %d failed", + hw->port_id, sdt_no); + + sdt_no = MK_SDT_NO(MC, hw->hash_search_index); + ret = dpp_dtb_hash_offline_delete(0, g_dtb_data.queueid, sdt_no, 0); + PMD_DRV_LOG(DEBUG, "%d dpp_dtb_hash_offline_delete sdt_no %d", + hw->port_id, sdt_no); + if (ret) + PMD_DRV_LOG(ERR, "%d dpp_dtb_hash_offline_delete sdt_no %d failed", + hw->port_id, sdt_no); + } + return ret; +} + +static inline int zxdh_dtb_dump_res_init(struct zxdh_hw *hw __rte_unused, + DPP_DEV_INIT_CTRL_T *dpp_ctrl) +{ + int ret = 0; + int i; + + struct zxdh_dtb_bulk_dump_info dtb_dump_baseres[] = { + /* eram */ + {"zxdh_sdt_vport_att_table", ZXDH_TBL_ERAM_DUMP_SIZE, ZXDH_SDT_VPORT_ATT_TABLE, NULL}, + {"zxdh_sdt_panel_att_table", ZXDH_TBL_ERAM_DUMP_SIZE, ZXDH_SDT_PANEL_ATT_TABLE, NULL}, + 
{"zxdh_sdt_rss_att_table", ZXDH_TBL_ERAM_DUMP_SIZE, ZXDH_SDT_RSS_ATT_TABLE, NULL}, + {"zxdh_sdt_vlan_att_table", ZXDH_TBL_ERAM_DUMP_SIZE, ZXDH_SDT_VLAN_ATT_TABLE, NULL}, + /* hash */ + {"zxdh_sdt_l2_entry_table0", ZXDH_TBL_ZCAM_DUMP_SIZE, ZXDH_SDT_L2_ENTRY_TABLE0, NULL}, + {"zxdh_sdt_l2_entry_table1", ZXDH_TBL_ZCAM_DUMP_SIZE, ZXDH_SDT_L2_ENTRY_TABLE1, NULL}, + {"zxdh_sdt_l2_entry_table2", ZXDH_TBL_ZCAM_DUMP_SIZE, ZXDH_SDT_L2_ENTRY_TABLE2, NULL}, + {"zxdh_sdt_l2_entry_table3", ZXDH_TBL_ZCAM_DUMP_SIZE, ZXDH_SDT_L2_ENTRY_TABLE3, NULL}, + {"zxdh_sdt_mc_table0", ZXDH_TBL_ZCAM_DUMP_SIZE, ZXDH_SDT_MC_TABLE0, NULL}, + {"zxdh_sdt_mc_table1", ZXDH_TBL_ZCAM_DUMP_SIZE, ZXDH_SDT_MC_TABLE1, NULL}, + {"zxdh_sdt_mc_table2", ZXDH_TBL_ZCAM_DUMP_SIZE, ZXDH_SDT_MC_TABLE2, NULL}, + {"zxdh_sdt_mc_table3", ZXDH_TBL_ZCAM_DUMP_SIZE, ZXDH_SDT_MC_TABLE3, NULL}, + }; + for (i = 0; i < (int)RTE_DIM(dtb_dump_baseres); i++) { + struct zxdh_dtb_bulk_dump_info *p = dtb_dump_baseres + i; + const struct rte_memzone *generic_dump_mz = rte_memzone_reserve_aligned(p->mz_name, + p->mz_size, SOCKET_ID_ANY, 0, RTE_CACHE_LINE_SIZE); + + if (generic_dump_mz == NULL) { + PMD_DRV_LOG(ERR, + "Cannot alloc mem for dtb tbl bulk dump, mz_name is %s, mz_size is %u", + p->mz_name, p->mz_size); + ret = -ENOMEM; + return ret; + } + p->mz = generic_dump_mz; + dpp_ctrl->dump_addr_info[i].vir_addr = generic_dump_mz->addr_64; + dpp_ctrl->dump_addr_info[i].phy_addr = generic_dump_mz->iova; + dpp_ctrl->dump_addr_info[i].sdt_no = p->sdt_no; + dpp_ctrl->dump_addr_info[i].size = p->mz_size; + + g_dtb_data.dtb_table_bulk_dump_mz[dpp_ctrl->dump_sdt_num] = generic_dump_mz; + dpp_ctrl->dump_sdt_num++; + } + return ret; +} + +static void dtb_data_res_free(struct zxdh_hw *hw) +{ + struct rte_eth_dev *dev = hw->eth_dev; + + if ((g_dtb_data.init_done) && (g_dtb_data.bind_device == dev)) { + PMD_DRV_LOG(INFO, "%s g_dtb_data free queue %d", + dev->data->name, g_dtb_data.queueid); + + int ret = 0; + + ret = dpp_np_online_uninstall(0, dev->data->name, g_dtb_data.queueid); + if (ret) + PMD_DRV_LOG(ERR, "%s dpp_np_online_uninstall failed", dev->data->name); + + if (g_dtb_data.dtb_table_conf_mz) { + rte_memzone_free(g_dtb_data.dtb_table_conf_mz); + PMD_DRV_LOG(INFO, "%s free dtb_table_conf_mz ", dev->data->name); + g_dtb_data.dtb_table_conf_mz = NULL; + } + if (g_dtb_data.dtb_table_dump_mz) { + PMD_DRV_LOG(INFO, "%s free dtb_table_dump_mz ", dev->data->name); + rte_memzone_free(g_dtb_data.dtb_table_dump_mz); + g_dtb_data.dtb_table_dump_mz = NULL; + } + int i; + + for (i = 0; i < ZXDH_MAX_BASE_DTB_TABLE_COUNT; i++) { + if (g_dtb_data.dtb_table_bulk_dump_mz[i]) { + rte_memzone_free(g_dtb_data.dtb_table_bulk_dump_mz[i]); + PMD_DRV_LOG(INFO, "%s free dtb_table_bulk_dump_mz[%d]", + dev->data->name, i); + g_dtb_data.dtb_table_bulk_dump_mz[i] = NULL; + } + } + g_dtb_data.init_done = 0; + g_dtb_data.bind_device = NULL; + } + if (zxdh_shared_data != NULL) + zxdh_shared_data->npsdk_init_done = 0; +} + +static inline int npsdk_dtb_res_init(struct rte_eth_dev *dev) +{ + int ret = 0; + struct zxdh_hw *hw = dev->data->dev_private; + + if (g_dtb_data.init_done) { + PMD_INIT_LOG(DEBUG, "DTB res already init done, dev %s no need init", + dev->device->name); + return 0; + } + g_dtb_data.queueid = INVALID_DTBQUE; + g_dtb_data.bind_device = dev; + g_dtb_data.dev_refcnt++; + g_dtb_data.init_done = 1; + DPP_DEV_INIT_CTRL_T *dpp_ctrl = rte_malloc(NULL, sizeof(*dpp_ctrl) + + sizeof(DPP_DTB_ADDR_INFO_T) * 256, 0); + + if (dpp_ctrl == NULL) { + PMD_INIT_LOG(ERR, "dev %s annot allocate 
+static inline int npsdk_dtb_res_init(struct rte_eth_dev *dev)
+{
+	int ret = 0;
+	struct zxdh_hw *hw = dev->data->dev_private;
+
+	if (g_dtb_data.init_done) {
+		PMD_INIT_LOG(DEBUG, "DTB res already init done, dev %s no need init",
+			dev->device->name);
+		return 0;
+	}
+	g_dtb_data.queueid = INVALID_DTBQUE;
+	g_dtb_data.bind_device = dev;
+	g_dtb_data.dev_refcnt++;
+	g_dtb_data.init_done = 1;
+	DPP_DEV_INIT_CTRL_T *dpp_ctrl = rte_malloc(NULL, sizeof(*dpp_ctrl) +
+			sizeof(DPP_DTB_ADDR_INFO_T) * 256, 0);
+
+	if (dpp_ctrl == NULL) {
+		PMD_INIT_LOG(ERR, "dev %s cannot allocate memory for dpp_ctrl", dev->device->name);
+		ret = -ENOMEM;
+		goto free_res;
+	}
+	memset(dpp_ctrl, 0, sizeof(*dpp_ctrl) + sizeof(DPP_DTB_ADDR_INFO_T) * 256);
+
+	dpp_ctrl->queue_id = 0xff;
+	dpp_ctrl->vport = hw->vport.vport;
+	dpp_ctrl->vector = ZXDH_MSIX_INTR_DTB_VEC;
+	strcpy((char *)dpp_ctrl->port_name, dev->device->name);
+	dpp_ctrl->pcie_vir_addr = (uint32_t)hw->bar_addr[0];
+
+	struct bar_offset_params param = {0};
+	struct bar_offset_res res = {0};
+
+	param.pcie_id = hw->pcie_id;
+	param.virt_addr = hw->bar_addr[0] + ZXDH_CTRLCH_OFFSET;
+	param.type = URI_NP;
+
+	ret = zxdh_get_bar_offset(&param, &res);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "dev %s get npbar offset failed", dev->device->name);
+		goto free_res;
+	}
+	dpp_ctrl->np_bar_len = res.bar_length;
+	dpp_ctrl->np_bar_offset = res.bar_offset;
+	if (!g_dtb_data.dtb_table_conf_mz) {
+		const struct rte_memzone *conf_mz = rte_memzone_reserve_aligned("zxdh_dtb_table_conf_mz",
+				ZXDH_DTB_TABLE_CONF_SIZE, SOCKET_ID_ANY, 0, RTE_CACHE_LINE_SIZE);
+
+		if (conf_mz == NULL) {
+			PMD_INIT_LOG(ERR,
+				"dev %s cannot allocate memory for dtb table conf",
+				dev->device->name);
+			ret = -ENOMEM;
+			goto free_res;
+		}
+		dpp_ctrl->down_vir_addr = conf_mz->addr_64;
+		dpp_ctrl->down_phy_addr = conf_mz->iova;
+		g_dtb_data.dtb_table_conf_mz = conf_mz;
+	}
+	if (!g_dtb_data.dtb_table_dump_mz) {
+		const struct rte_memzone *dump_mz = rte_memzone_reserve_aligned("zxdh_dtb_table_dump_mz",
+				ZXDH_DTB_TABLE_DUMP_SIZE, SOCKET_ID_ANY, 0, RTE_CACHE_LINE_SIZE);
+
+		if (dump_mz == NULL) {
+			PMD_INIT_LOG(ERR,
+				"dev %s cannot allocate memory for dtb table dump",
+				dev->device->name);
+			ret = -ENOMEM;
+			goto free_res;
+		}
+		dpp_ctrl->dump_vir_addr = dump_mz->addr_64;
+		dpp_ctrl->dump_phy_addr = dump_mz->iova;
+		g_dtb_data.dtb_table_dump_mz = dump_mz;
+	}
+	/* init bulk dump */
+	zxdh_dtb_dump_res_init(hw, dpp_ctrl);
+
+	ret = dpp_host_np_init(0, dpp_ctrl);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "dev %s dpp host np init failed. ret %d", dev->device->name, ret);
+		goto free_res;
+	}
+
+	PMD_INIT_LOG(INFO, "dev %s dpp host np init ok. dtb queue %d",
+		dev->device->name, dpp_ctrl->queue_id);
+	g_dtb_data.queueid = dpp_ctrl->queue_id;
+	rte_free(dpp_ctrl);
+	return 0;
+
+free_res:
+	dtb_data_res_free(hw);
+	rte_free(dpp_ctrl);
+	return -ret;
+}
+
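+/*
+ * DTB bring-up order used above (PF only): reserve the table config and
+ * dump memzones, resolve the NP window inside BAR0 through the message
+ * channel (zxdh_get_bar_offset()), then hand the filled
+ * DPP_DEV_INIT_CTRL_T to the NPSDK via dpp_host_np_init(), which returns
+ * the DTB queue id recorded in g_dtb_data.queueid.
+ */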
PMD_DRV_LOG(ERR, "%s hash_bulk_res_init failed!", __func__); + return -1; + } + ret = dpp_apt_hash_tbl_res_init(dev_id, HashResInit.tbl_num, HashResInit.tbl_res); + if (ret) { + PMD_DRV_LOG(ERR, "%s hash_tbl_res_init failed!", __func__); + return -1; + } + ret = dpp_apt_eram_res_init(dev_id, EramResInit.tbl_num, EramResInit.eram_res); + if (ret) { + PMD_DRV_LOG(ERR, "%s eram_res_init failed!", __func__); + return -1; + } + ret = dpp_stat_ppu_eram_baddr_set(dev_id, StatResInit.eram_baddr); + if (ret) { + PMD_DRV_LOG(ERR, "%s stat_ppu_eram_baddr_set failed!", __func__); + return -1; + } + ret = dpp_stat_ppu_eram_depth_set(dev_id, StatResInit.eram_depth); /* unit: 128bits */ + if (ret) { + PMD_DRV_LOG(ERR, "%s stat_ppu_eram_depth_set failed!", __func__); + return -1; + } + ret = dpp_se_cmmu_smmu1_cfg_set(dev_id, StatResInit.ddr_baddr); + if (ret) { + PMD_DRV_LOG(ERR, "%s dpp_se_cmmu_smmu1_cfg_set failed!", __func__); + return -1; + } + ret = dpp_stat_ppu_ddr_baddr_set(dev_id, StatResInit.ppu_ddr_offset); /* unit: 128bits */ + if (ret) { + PMD_DRV_LOG(ERR, "%s stat_ppu_ddr_baddr_set failed!", __func__); + return -1; + } + + return 0; +} + +static inline int npsdk_apt_res_init(struct rte_eth_dev *dev __rte_unused) +{ + int32_t ret = 0; + + ret = dpp_res_uni_init(SE_NIC_RES_TYPE); + if (ret) { + PMD_INIT_LOG(ERR, "init stand dpp res failed"); + return -1; + } + + return ret; +} +static int zxdh_np_init(struct rte_eth_dev *eth_dev) +{ + uint32_t ret = 0; + struct zxdh_hw *hw = eth_dev->data->dev_private; + + if ((zxdh_shared_data != NULL) && zxdh_shared_data->npsdk_init_done) { + g_dtb_data.dev_refcnt++; + zxdh_tbl_entry_offline_destroy(hw); + PMD_DRV_LOG(DEBUG, "no need to init dtb dtb chanenl %d devref %d", + g_dtb_data.queueid, g_dtb_data.dev_refcnt); + return 0; + } + + if (hw->is_pf) { + ret = npsdk_dtb_res_init(eth_dev); + if (ret) { + PMD_DRV_LOG(ERR, "dpp apt init failed, ret:%d ", ret); + return -ret; + } + + ret = npsdk_apt_res_init(eth_dev); + if (ret) { + PMD_DRV_LOG(ERR, "dpp apt init failed, ret:%d ", ret); + return -ret; + } + } + if (zxdh_shared_data != NULL) + zxdh_shared_data->npsdk_init_done = 1; + + return 0; +} + +static void zxdh_priv_res_free(struct zxdh_hw *priv) +{ + rte_free(priv->vlan_fiter); + priv->vlan_fiter = NULL; + rte_free(priv->vfinfo); + priv->vfinfo = NULL; +} + +static int zxdh_tbl_entry_destroy(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint32_t sdt_no; + int ret = 0; + + if (!g_dtb_data.init_done) + return ret; + + if (hw->is_pf) { + sdt_no = MK_SDT_NO(L2_ENTRY, hw->hash_search_index); + ret = dpp_dtb_hash_online_delete(0, g_dtb_data.queueid, sdt_no); + if (ret) { + PMD_DRV_LOG(ERR, "%s dpp_dtb_hash_online_delete sdt_no %d failed ", + dev->data->name, sdt_no); + return -1; + } + + sdt_no = MK_SDT_NO(MC, hw->hash_search_index); + ret = dpp_dtb_hash_online_delete(0, g_dtb_data.queueid, sdt_no); + if (ret) { + PMD_DRV_LOG(ERR, "%s dpp_dtb_hash_online_delete sdt_no %d failed ", + dev->data->name, sdt_no); + return -1; + } + } + return ret; +} + +static void zxdh_np_destroy(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + int ret; + + ret = zxdh_tbl_entry_destroy(dev); + if (ret) + return; + + if ((!g_dtb_data.init_done) && (!g_dtb_data.dev_refcnt)) + return; + + if (--g_dtb_data.dev_refcnt == 0) + dtb_data_res_free(hw); + + PMD_DRV_LOG(DEBUG, "g_dtb_data dev_refcnt %d", g_dtb_data.dev_refcnt); +} + +static int32_t zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) +{ + struct rte_pci_device *pci_dev = 
+static int32_t zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
+{
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+	int32_t ret;
+
+	eth_dev->dev_ops = &zxdh_eth_dev_ops;
+
+	/*
+	 * The primary process does the whole initialization; secondary
+	 * processes just select the same Rx and Tx functions as the primary.
+	 */
+	struct zxdh_hw *hw = eth_dev->data->dev_private;
+
+	if (rte_eal_process_type() == RTE_PROC_SECONDARY) {
+		VTPCI_OPS(hw) = &zxdh_modern_ops;
+		set_rxtx_funcs(eth_dev);
+		return 0;
+	}
+	/* Allocate memory for storing MAC addresses */
+	eth_dev->data->mac_addrs = rte_zmalloc("zxdh_mac",
+			ZXDH_MAX_MAC_ADDRS * RTE_ETHER_ADDR_LEN, 0);
+	if (eth_dev->data->mac_addrs == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate %d bytes to store MAC addresses",
+				ZXDH_MAX_MAC_ADDRS * RTE_ETHER_ADDR_LEN);
+		return -ENOMEM;
+	}
+	memset(hw, 0, sizeof(*hw));
+	ret = zxdh_dev_devargs_parse(eth_dev->device->devargs, hw);
+	if (ret < 0) {
+		PMD_INIT_LOG(ERR, "dev args parse failed");
+		return -EINVAL;
+	}
+
+	hw->bar_addr[0] = (uint64_t)pci_dev->mem_resource[0].addr;
+	if (hw->bar_addr[0] == 0) {
+		PMD_INIT_LOG(ERR, "Bad mem resource.");
+		return -EIO;
+	}
+	hw->device_id = pci_dev->id.device_id;
+	hw->port_id = eth_dev->data->port_id;
+	hw->eth_dev = eth_dev;
+	hw->speed = RTE_ETH_SPEED_NUM_UNKNOWN;
+	hw->duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	hw->is_pf = 0;
+
+	rte_spinlock_init(&hw->state_lock);
+	if (pci_dev->id.device_id == ZXDH_E310_PF_DEVICEID ||
+		pci_dev->id.device_id == ZXDH_E312_PF_DEVICEID) {
+		hw->is_pf = 1;
+		hw->pfinfo.vf_nums = pci_dev->max_vfs;
+	}
+
+	/* reset device and get dev config */
+	ret = zxdh_init_once(eth_dev);
+	if (ret != 0)
+		goto err_zxdh_init;
+
+	ret = zxdh_init_device(eth_dev);
+	if (ret < 0)
+		goto err_zxdh_init;
+
+	ret = zxdh_np_init(eth_dev);
+	if (ret)
+		goto err_zxdh_init;
+
+	zxdh_priv_res_init(hw);
+	zxdh_sriovinfo_init(hw);
+	zxdh_msg_cb_reg(hw);
+	zxdh_configure_intr(eth_dev);
+	return 0;
+
+err_zxdh_init:
+	zxdh_intr_release(eth_dev);
+	zxdh_np_destroy(eth_dev);
+	zxdh_bar_msg_chan_exit();
+	zxdh_priv_res_free(hw);
+	rte_free(eth_dev->data->mac_addrs);
+	eth_dev->data->mac_addrs = NULL;
+	return ret;
+}
+
+int32_t zxdh_eth_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+			struct rte_pci_device *pci_dev)
+{
+	return rte_eth_dev_pci_generic_probe(pci_dev,
+						sizeof(struct zxdh_hw),
+						zxdh_eth_dev_init);
+}
+
+static int32_t zxdh_eth_dev_uninit(struct rte_eth_dev *eth_dev __rte_unused)
+{
+	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+		return 0;
+	/** todo later
+	 * zxdh_dev_close(eth_dev);
+	 */
+	return 0;
+}
+
+int32_t zxdh_eth_pci_remove(struct rte_pci_device *pci_dev)
+{
+	int32_t ret = rte_eth_dev_pci_generic_remove(pci_dev, zxdh_eth_dev_uninit);
+
+	if (ret == -ENODEV) { /* Port has already been released by close. 
*/ + ret = 0; + } + return ret; +} + +static const struct rte_pci_id pci_id_zxdh_map[] = { + {RTE_PCI_DEVICE(PCI_VENDOR_ID_ZTE, ZXDH_E310_PF_DEVICEID)}, + {RTE_PCI_DEVICE(PCI_VENDOR_ID_ZTE, ZXDH_E310_VF_DEVICEID)}, + {RTE_PCI_DEVICE(PCI_VENDOR_ID_ZTE, ZXDH_E312_PF_DEVICEID)}, + {RTE_PCI_DEVICE(PCI_VENDOR_ID_ZTE, ZXDH_E312_VF_DEVICEID)}, + {.vendor_id = 0, /* sentinel */ }, +}; +static struct rte_pci_driver zxdh_pmd = { + .driver = {.name = "net_zxdh", }, + .id_table = pci_id_zxdh_map, + .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC, + .probe = zxdh_eth_pci_probe, + .remove = zxdh_eth_pci_remove, +}; + +RTE_PMD_REGISTER_PCI(net_zxdh, zxdh_pmd); +RTE_PMD_REGISTER_PCI_TABLE(net_zxdh, pci_id_zxdh_map); +RTE_PMD_REGISTER_KMOD_DEP(net_zxdh, "* vfio-pci"); +RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_init, init, NOTICE); +RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_driver, driver, NOTICE); +RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_rx, rx, DEBUG); +RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_tx, tx, DEBUG); + +RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_msg, msg, DEBUG); +RTE_PMD_REGISTER_PARAM_STRING(net_zxdh, + "q_depth=<int>"); + diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h new file mode 100644 index 0000000000..6683ec5edc --- /dev/null +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -0,0 +1,202 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#ifndef _ZXDH_ETHDEV_H_ +#define _ZXDH_ETHDEV_H_ + +#ifdef __cplusplus +extern "C" { +#endif + +#include <stdint.h> +#include "ethdev_pci.h" + +extern struct zxdh_dtb_shared_data g_dtb_data; +#define PF_PCIE_ID(pcie_id) ((pcie_id & 0xff00) | 1 << 11) +#define VF_PCIE_ID(pcie_id, vf_idx) ((pcie_id & 0xff00) | (1 << 11) | (vf_idx & 0xff)) + +#define ZXDH_QUEUES_NUM_MAX 256 + +/* ZXDH PCI vendor/device ID. */ +#define PCI_VENDOR_ID_ZTE 0x1cf2 + +#define ZXDH_E310_PF_DEVICEID 0x8061 +#define ZXDH_E310_VF_DEVICEID 0x8062 +#define ZXDH_E312_PF_DEVICEID 0x8049 +#define ZXDH_E312_VF_DEVICEID 0x8060 + +#define ZXDH_MAX_UC_MAC_ADDRS 32 +#define ZXDH_MAX_MC_MAC_ADDRS 32 +#define ZXDH_MAX_MAC_ADDRS (ZXDH_MAX_UC_MAC_ADDRS + ZXDH_MAX_MC_MAC_ADDRS) + +/* BAR definitions */ +#define ZXDH_NUM_BARS 2 +#define ZXDH_BAR0_INDEX 0 + +#define ZXDH_MIN_QUEUE_DEPTH 1024 +#define ZXDH_MAX_QUEUE_DEPTH 32768 + +#define ZXDH_MAX_VF 256 + +#define ZXDH_TBL_ERAM_DUMP_SIZE (4 * 1024 * 1024) +#define ZXDH_TBL_ZCAM_DUMP_SIZE (5 * 1024 * 1024) + +#define INVALID_DTBQUE 0xFFFF +#define ZXDH_MAX_BASE_DTB_TABLE_COUNT 30 +#define ZXDH_DTB_TABLE_CONF_SIZE (32 * (16 + 16 * 1024)) +#define ZXDH_DTB_TABLE_DUMP_SIZE (32 * (16 + 16 * 1024)) + +/* + * Process dev config changed interrupt. Call the callback + * if link state changed, generate gratuitous RARP packet if + * the status indicates an ANNOUNCE. 
+ */ +#define ZXDH_NET_S_LINK_UP 1 /* Link is up */ +#define ZXDH_NET_S_ANNOUNCE 2 /* Announcement is needed */ + +struct pfinfo { + uint16_t pcieid; + uint16_t vf_nums; +}; +struct vfinfo { + uint16_t vf_idx; + uint16_t pcieid; + uint16_t vport; + uint8_t flag; + uint8_t state; + uint8_t rsv; + struct rte_ether_addr mac_addr; + struct rte_ether_addr vf_mac[ZXDH_MAX_MAC_ADDRS]; +}; + +union VPORT { + uint16_t vport; + + __extension__ + struct { + uint16_t vfid:8; + uint16_t pfid:3; + uint16_t vf_flag:1; + uint16_t epid:3; + uint16_t direct_flag:1; + }; +}; + +struct chnl_context { + uint16_t valid; + uint16_t ph_chno; +}; /* 4B */ + +struct zxdh_hw { + uint64_t host_features; + uint64_t guest_features; + uint32_t max_queue_pairs; + uint16_t max_mtu; + uint8_t vtnet_hdr_size; + uint8_t vlan_strip; + uint8_t use_msix; + uint8_t intr_enabled; + uint8_t started; + uint8_t weak_barriers; + + bool has_tx_offload; + bool has_rx_offload; + + uint8_t mac_addr[RTE_ETHER_ADDR_LEN]; + uint16_t port_id; + + uint32_t notify_off_multiplier; + uint32_t speed; /* link speed in MB */ + uint32_t speed_mode; /* link speed in 1x 2x 3x */ + uint8_t duplex; + uint8_t *isr; + uint16_t *notify_base; + + struct zxdh_pci_common_cfg *common_cfg; + struct zxdh_net_config *dev_cfg; + + uint16_t queue_num; + uint16_t device_id; + + uint16_t pcie_id; + uint8_t phyport; + bool msg_chan_init; + + uint8_t panel_id; + uint8_t rsv[1]; + + /** + * App management thread and virtio interrupt handler + * thread both can change device state, + * this lock is meant to avoid such a contention. + */ + rte_spinlock_t state_lock; + struct rte_mbuf **inject_pkts; + struct virtqueue **vqs; + + uint64_t bar_addr[ZXDH_NUM_BARS]; + struct rte_intr_handle *risc_intr; /* Interrupt handle of rsic_v to host */ + struct rte_intr_handle *dtb_intr; /* Interrupt handle of rsic_v to host */ + + struct chnl_context channel_context[ZXDH_QUEUES_NUM_MAX]; + union VPORT vport; + + uint8_t is_pf : 1, + switchoffload : 1; + uint8_t hash_search_index; + uint8_t admin_status; + + uint16_t vfid; + uint16_t q_depth; + uint64_t *vlan_fiter; + struct pfinfo pfinfo; + struct vfinfo *vfinfo; + struct rte_eth_dev *eth_dev; +}; + +/* Shared data between primary and secondary processes. */ +struct zxdh_shared_data { + rte_spinlock_t lock; /* Global spinlock for primary and secondary processes. */ + int init_done; /* Whether primary has done initialization. */ + unsigned int secondary_cnt; /* Number of secondary processes init'd. 
*/ + + int npsdk_init_done; + uint32_t dev_refcnt; + struct zxdh_dtb_shared_data *dtb_data; +}; + +struct zxdh_dtb_shared_data { + int init_done; + char name[32]; + uint16_t queueid; + uint16_t vport; + uint32_t vector; + const struct rte_memzone *dtb_table_conf_mz; + const struct rte_memzone *dtb_table_dump_mz; + const struct rte_memzone *dtb_table_bulk_dump_mz[ZXDH_MAX_BASE_DTB_TABLE_COUNT]; + struct rte_eth_dev *bind_device; + uint32_t dev_refcnt; +}; + +struct zxdh_dtb_bulk_dump_info { + const char *mz_name; + uint32_t mz_size; + uint32_t sdt_no; /** <@brief sdt no 0~255 */ + const struct rte_memzone *mz; +}; + +void zxdh_interrupt_handler(void *param); +int32_t zxdh_dev_pause(struct rte_eth_dev *dev); +int32_t zxdh_inject_pkts(struct rte_eth_dev *dev, struct rte_mbuf **tx_pkts, int32_t nb_pkts); +void zxdh_notify_peers(struct rte_eth_dev *dev); + +int32_t zxdh_eth_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, + struct rte_pci_device *pci_dev); +int32_t zxdh_eth_pci_remove(struct rte_pci_device *pci_dev); + +#ifdef __cplusplus +} +#endif + +#endif /* _ZXDH_ETHDEV_H_ */ diff --git a/drivers/net/zxdh/zxdh_logs.h b/drivers/net/zxdh/zxdh_logs.h new file mode 100644 index 0000000000..fb9b2d452f --- /dev/null +++ b/drivers/net/zxdh/zxdh_logs.h @@ -0,0 +1,38 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#ifndef _ZXDH_LOGS_H_ +#define _ZXDH_LOGS_H_ + +#include <rte_log.h> + +#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>") + +extern int32_t zxdh_logtype_init; +#define PMD_INIT_LOG(level, fmt, args...) \ + rte_log(RTE_LOG_ ## level, zxdh_logtype_init, \ + "offload_zxdh %s(): " fmt "\n", __func__, ## args) + +extern int32_t zxdh_logtype_driver; +#define PMD_DRV_LOG(level, fmt, args...) \ + rte_log(RTE_LOG_ ## level, zxdh_logtype_driver, \ + "offload_zxdh %s(): " fmt "\n", __func__, ## args) + +extern int zxdh_logtype_rx; +#define PMD_RX_LOG(level, fmt, args...) \ + rte_log(RTE_LOG_ ## level, zxdh_logtype_rx, \ + "offload_zxdh %s(): " fmt "\n", __func__, ## args) + +extern int zxdh_logtype_tx; +#define PMD_TX_LOG(level, fmt, args...) \ + rte_log(RTE_LOG_ ## level, zxdh_logtype_tx, \ + "offload_zxdh %s(): " fmt "\n", __func__, ## args) + +extern int32_t zxdh_logtype_msg; +#define PMD_MSG_LOG(level, fmt, args...) 
\ + rte_log(RTE_LOG_ ## level, zxdh_logtype_msg, \ + "offload_zxdh %s(): " fmt "\n", __func__, ## args) + +#endif /* _ZXDH_LOGS_H_ */ + diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c new file mode 100644 index 0000000000..e625cbea82 --- /dev/null +++ b/drivers/net/zxdh/zxdh_msg.c @@ -0,0 +1,1177 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2023 ZTE Corporation + */ + +#include <stdbool.h> + +#include <rte_common.h> +#include <rte_memcpy.h> +#include <pthread.h> +#include <rte_cycles.h> +#include <inttypes.h> +#include <rte_malloc.h> + +#include "zxdh_logs.h" +#include "zxdh_msg.h" + +#define REPS_INFO_FLAG_USABLE 0x00 +#define REPS_INFO_FLAG_USED 0xa0 + +#define BDF_ECAM(bus, devid, func) (((bus & 0xff) << 8) | (func & 0x07) | ((devid & 0x1f) << 3)) + +/** + * common.ko will work in 5 scenarios + * 1: SCENE_HOST_IN_DPU : host in DPU card + * 2: SCENE_ZF_IN_DPU : zf in DPU card + * 3: SCENE_NIC_WITH_DDR : inic with DDR + * 4: SCENE_NIC_NO_DDR : inic without DDR + * 5: SCENE_STD_NIC : std card + */ +#ifdef SCENE_HOST_IN_DPU +#define BAR_PF_NUM 31 +#define BAR_VF_NUM 1024 +#define BAR_INDEX_PF_TO_VF 1 +#define BAR_INDEX_MPF_TO_MPF 1 +#define BAR_INDEX_MPF_TO_PFVF 0xff +#define BAR_INDEX_PFVF_TO_MPF 0xff +#endif + +#ifdef SCENE_ZF_IN_DPU +#define BAR_PF_NUM 7 +#define BAR_VF_NUM 128 +#define BAR_INDEX_PF_TO_VF 0xff +#define BAR_INDEX_MPF_TO_MPF 1 +#define BAR_INDEX_MPF_TO_PFVF 0xff +#define BAR_INDEX_PFVF_TO_MPF 0xff +#endif + +#ifdef SCENE_NIC_WITH_DDR +#define BAR_PF_NUM 31 +#define BAR_VF_NUM 1024 +#define BAR_INDEX_PF_TO_VF 1 +#define BAR_INDEX_MPF_TO_MPF 0xff +#define BAR_INDEX_MPF_TO_PFVF 0xff +#define BAR_INDEX_PFVF_TO_MPF 0xff +#endif + +#ifdef SCENE_NIC_NO_DDR +#define BAR_PF_NUM 31 +#define BAR_VF_NUM 1024 +#define BAR_INDEX_PF_TO_VF 1 +#define BAR_INDEX_MPF_TO_MPF 0xff +#define BAR_INDEX_MPF_TO_PFVF 1 +#define BAR_INDEX_PFVF_TO_MPF 2 +#endif + +#ifdef SCENE_STD_NIC +#define BAR_PF_NUM 7 +#define BAR_VF_NUM 256 +#define BAR_INDEX_PF_TO_VF 1 +#define BAR_INDEX_MPF_TO_MPF 0xff +#define BAR_INDEX_MPF_TO_PFVF 1 +#define BAR_INDEX_PFVF_TO_MPF 2 +#endif + +#define SCENE_TEST +#ifdef SCENE_TEST +#define BAR_PF_NUM 7 +#define BAR_VF_NUM 256 +#define BAR_INDEX_PF_TO_VF 0 +#define BAR_INDEX_MPF_TO_MPF 0xff +#define BAR_INDEX_MPF_TO_PFVF 0 +#define BAR_INDEX_PFVF_TO_MPF 0 +#endif + +/** + * 0: left 2K, 1: right 2K + * src/dst: TO_RISC, TO_PFVF, TO_MPF + * MPF: 0 0 0 + * PF: 0 0 1 + * VF: 0 1 1 + **/ +#define BAR_MSG_SRC_NUM 3 +#define BAR_MSG_SRC_MPF 0 +#define BAR_MSG_SRC_PF 1 +#define BAR_MSG_SRC_VF 2 +#define BAR_MSG_SRC_ERR 0xff + +#define BAR_MSG_DST_NUM 3 +#define BAR_MSG_DST_RISC 0 +#define BAR_MSG_DST_MPF 2 +#define BAR_MSG_DST_PFVF 1 +#define BAR_MSG_DST_ERR 0xff + +#define BAR_SUBCHAN_INDEX_SEND 0 +#define BAR_SUBCHAN_INDEX_RECV 1 +#define BAR_SEQID_NUM_MAX 256 + +#define BAR_ALIGN_WORD_MASK 0xfffffffc +#define BAR_MSG_VALID_MASK 1 +#define BAR_MSG_VALID_OFFSET 0 + +#define BAR_MSG_CHAN_USABLE 0 +#define BAR_MSG_CHAN_USED 1 + +#define LOCK_TYPE_HARD (1) +#define LOCK_TYPE_SOFT (0) +#define BAR_INDEX_TO_RISC 0 + +#define BAR_MSG_POL_MASK (0x10) +#define BAR_MSG_POL_OFFSET (4) + +#define REPS_HEADER_LEN_OFFSET 1 +#define REPS_HEADER_PAYLOAD_OFFSET 4 +#define REPS_HEADER_REPLYED 0xff + +#define READ_CHECK 1 + +uint8_t subchan_id_tbl[BAR_MSG_SRC_NUM][BAR_MSG_DST_NUM] = { + {BAR_SUBCHAN_INDEX_SEND, BAR_SUBCHAN_INDEX_SEND, BAR_SUBCHAN_INDEX_SEND}, + {BAR_SUBCHAN_INDEX_SEND, BAR_SUBCHAN_INDEX_SEND, BAR_SUBCHAN_INDEX_RECV}, + {BAR_SUBCHAN_INDEX_SEND, 
BAR_SUBCHAN_INDEX_RECV, BAR_SUBCHAN_INDEX_RECV} +}; + +uint8_t chan_id_tbl[BAR_MSG_SRC_NUM][BAR_MSG_DST_NUM] = { + {BAR_INDEX_TO_RISC, BAR_INDEX_MPF_TO_PFVF, BAR_INDEX_MPF_TO_MPF}, + {BAR_INDEX_TO_RISC, BAR_INDEX_PF_TO_VF, BAR_INDEX_PFVF_TO_MPF}, + {BAR_INDEX_TO_RISC, BAR_INDEX_PF_TO_VF, BAR_INDEX_PFVF_TO_MPF} +}; + +uint8_t lock_type_tbl[BAR_MSG_SRC_NUM][BAR_MSG_DST_NUM] = { + {LOCK_TYPE_HARD, LOCK_TYPE_HARD, LOCK_TYPE_HARD}, + {LOCK_TYPE_SOFT, LOCK_TYPE_SOFT, LOCK_TYPE_HARD}, + {LOCK_TYPE_HARD, LOCK_TYPE_HARD, LOCK_TYPE_HARD} +}; + +#define PCIEID_IS_PF_MASK (0x0800) +#define PCIEID_PF_IDX_MASK (0x0700) +#define PCIEID_VF_IDX_MASK (0x00ff) +#define PCIEID_EP_IDX_MASK (0x7000) +/* PCIEID bit field offset */ +#define PCIEID_PF_IDX_OFFSET (8) +#define PCIEID_EP_IDX_OFFSET (12) + +#define MAX_EP_NUM (4) +#define PF_NUM_PER_EP (8) +#define VF_NUM_PER_PF (32) + +#define MULTIPLY_BY_8(x) ((x) << 3) +#define MULTIPLY_BY_32(x) ((x) << 5) +#define MULTIPLY_BY_256(x) ((x) << 8) + +#define MAX_HARD_SPINLOCK_NUM (511) +#define MAX_HARD_SPINLOCK_ASK_TIMES (1000) +#define SPINLOCK_POLLING_SPAN_US (100) + +#define LOCK_MASTER_ID_MASK (0x8000) +/* bar offset */ +#define BAR0_CHAN_RISC_OFFSET (0x2000) +#define BAR0_CHAN_PFVF_OFFSET (0x3000) +#define BAR0_SPINLOCK_OFFSET (0x4000) +#define FW_SHRD_OFFSET (0x5000) +#define FW_SHRD_INNER_HW_LABEL_PAT (0x800) +#define HW_LABEL_OFFSET (FW_SHRD_OFFSET + FW_SHRD_INNER_HW_LABEL_PAT) + +#define CHAN_RISC_SPINLOCK_OFFSET (BAR0_SPINLOCK_OFFSET - BAR0_CHAN_RISC_OFFSET) +#define CHAN_PFVF_SPINLOCK_OFFSET (BAR0_SPINLOCK_OFFSET - BAR0_CHAN_PFVF_OFFSET) +#define CHAN_RISC_LABEL_OFFSET (HW_LABEL_OFFSET - BAR0_CHAN_RISC_OFFSET) +#define CHAN_PFVF_LABEL_OFFSET (HW_LABEL_OFFSET - BAR0_CHAN_PFVF_OFFSET) + +#define RSC_TBL_CONTENT_LEN_MAX (257 * 2) +#define TBL_MSG_PRO_SUCCESS 0xaa + +zxdh_bar_chan_msg_recv_callback msg_recv_func_tbl[BAR_MSG_MODULE_NUM]; + +struct dev_stat { + bool is_mpf_scanned; + bool is_res_init; + int16_t dev_cnt; /* probe cnt */ +}; +struct dev_stat g_dev_stat = {0}; + +static uint8_t __bar_msg_src_index_trans(uint8_t src) +{ + uint8_t src_index = 0; + + switch (src) { + case MSG_CHAN_END_MPF: + src_index = BAR_MSG_SRC_MPF; + break; + case MSG_CHAN_END_PF: + src_index = BAR_MSG_SRC_PF; + break; + case MSG_CHAN_END_VF: + src_index = BAR_MSG_SRC_VF; + break; + default: + src_index = BAR_MSG_SRC_ERR; + break; + } + return src_index; +} + +static uint8_t __bar_msg_dst_index_trans(uint8_t dst) +{ + uint8_t dst_index = 0; + + switch (dst) { + case MSG_CHAN_END_MPF: + dst_index = BAR_MSG_DST_MPF; + break; + case MSG_CHAN_END_PF: + dst_index = BAR_MSG_DST_PFVF; + break; + case MSG_CHAN_END_VF: + dst_index = BAR_MSG_DST_PFVF; + break; + case MSG_CHAN_END_RISC: + dst_index = BAR_MSG_DST_RISC; + break; + default: + dst_index = BAR_MSG_SRC_ERR; + break; + } + return dst_index; +} + +struct seqid_item { + void *reps_addr; + uint16_t id; + uint16_t buffer_len; + uint16_t flag; +}; + +struct seqid_ring { + uint16_t cur_id; + pthread_spinlock_t lock; + struct seqid_item reps_info_tbl[BAR_SEQID_NUM_MAX]; +}; +struct seqid_ring g_seqid_ring = {0}; + +static int __bar_chan_msgid_allocate(uint16_t *msgid) +{ + struct seqid_item *seqid_reps_info = NULL; + + pthread_spin_lock(&g_seqid_ring.lock); + uint16_t g_id = g_seqid_ring.cur_id; + uint16_t count = 0; + + do { + count++; + ++g_id; + g_id %= BAR_SEQID_NUM_MAX; + seqid_reps_info = &g_seqid_ring.reps_info_tbl[g_id]; + } while ((seqid_reps_info->flag != REPS_INFO_FLAG_USABLE) && (count < BAR_SEQID_NUM_MAX)); + int rc; + + if 
+	if (count >= BAR_SEQID_NUM_MAX) {
+		rc = -1;
+		goto out;
+	}
+	seqid_reps_info->flag = REPS_INFO_FLAG_USED;
+	g_seqid_ring.cur_id = g_id;
+	*msgid = g_id;
+	rc = BAR_MSG_OK;
+
+out:
+	pthread_spin_unlock(&g_seqid_ring.lock);
+	return rc;
+}
+
+static uint16_t __bar_chan_save_recv_info(struct zxdh_msg_recviver_mem *result, uint16_t *msg_id)
+{
+	int ret = __bar_chan_msgid_allocate(msg_id);
+
+	if (ret != BAR_MSG_OK)
+		return BAR_MSG_ERR_MSGID;
+
+	PMD_MSG_LOG(DEBUG, "allocate msg_id: %u", *msg_id);
+	struct seqid_item *reps_info = &g_seqid_ring.reps_info_tbl[*msg_id];
+
+	reps_info->reps_addr = result->recv_buffer;
+	reps_info->buffer_len = result->buffer_len;
+	return BAR_MSG_OK;
+}
+
+static void __bar_chan_msgid_free(uint16_t msg_id)
+{
+	struct seqid_item *seqid_reps_info = &g_seqid_ring.reps_info_tbl[msg_id];
+
+	pthread_spin_lock(&g_seqid_ring.lock);
+	seqid_reps_info->flag = REPS_INFO_FLAG_USABLE;
+	PMD_MSG_LOG(DEBUG, "free msg_id: %u", msg_id);
+	pthread_spin_unlock(&g_seqid_ring.lock);
+}
+
+static uint64_t subchan_addr_cal(uint64_t virt_addr, uint8_t chan_id, uint8_t subchan_id)
+{
+	return virt_addr + (2 * chan_id + subchan_id) * BAR_MSG_ADDR_CHAN_INTERVAL;
+}
+
+static uint16_t __bar_chan_subchan_addr_get(struct zxdh_pci_bar_msg *in, uint64_t *subchan_addr)
+{
+	uint8_t src_index = __bar_msg_src_index_trans(in->src);
+	uint8_t dst_index = __bar_msg_dst_index_trans(in->dst);
+	uint16_t chan_id = chan_id_tbl[src_index][dst_index];
+	uint16_t subchan_id = subchan_id_tbl[src_index][dst_index];
+
+	*subchan_addr = subchan_addr_cal(in->virt_addr, chan_id, subchan_id);
+	return BAR_MSG_OK;
+}
+
+static int __bar_chan_reg_write(uint64_t subchan_addr, uint32_t offset, uint32_t data)
+{
+	uint32_t align_offset = (offset & BAR_ALIGN_WORD_MASK);
+
+	if (unlikely(align_offset >= BAR_MSG_ADDR_CHAN_INTERVAL)) {
+		PMD_MSG_LOG(ERR, "subchan addr: %" PRIu64 " offset: %" PRIu32,
+			subchan_addr, align_offset);
+		return -1;
+	}
+	*(uint32_t *)(subchan_addr + align_offset) = data;
+	return 0;
+}
+
+static int __bar_chan_reg_read(uint64_t subchan_addr, uint32_t offset, uint32_t *pdata)
+{
+	uint32_t align_offset = (offset & BAR_ALIGN_WORD_MASK);
+
+	if (unlikely(align_offset >= BAR_MSG_ADDR_CHAN_INTERVAL)) {
+		PMD_MSG_LOG(ERR, "subchan addr: %" PRIu64 " offset: %" PRIu32,
+			subchan_addr, align_offset);
+		return -1;
+	}
+	*pdata = *(uint32_t *)(subchan_addr + align_offset);
+	return 0;
+}
+
+static uint16_t __bar_chan_msg_header_set(uint64_t subchan_addr, struct bar_msg_header *msg_header)
+{
+	uint32_t *data = (uint32_t *)msg_header;
+	uint16_t idx;
+
+	for (idx = 0; idx < (BAR_MSG_PLAYLOAD_OFFSET >> 2); idx++)
+		__bar_chan_reg_write(subchan_addr, idx * 4, *(data + idx));
+
+	return BAR_MSG_OK;
+}
+
+static uint16_t __bar_chan_msg_header_get(uint64_t subchan_addr, struct bar_msg_header *msg_header)
+{
+	uint32_t *data = (uint32_t *)msg_header;
+	uint16_t idx;
+
+	for (idx = 0; idx < (BAR_MSG_PLAYLOAD_OFFSET >> 2); idx++)
+		__bar_chan_reg_read(subchan_addr, idx * 4, data + idx);
+
+	return BAR_MSG_OK;
+}
+
+static uint16_t __bar_chan_msg_payload_set(uint64_t subchan_addr, uint8_t *msg, uint16_t len)
+{
+	uint32_t *data = (uint32_t *)msg;
+	uint32_t count = (len >> 2); /* 4B unit */
+	uint32_t ix;
+
+	for (ix = 0; ix < count; ix++)
+		__bar_chan_reg_write(subchan_addr, 4 * ix + BAR_MSG_PLAYLOAD_OFFSET, *(data + ix));
+
+	/* not 4B align part */
+	uint32_t remain = (len & 0x3);
+
+	if (remain) {
+		uint32_t remain_data = 0;
+
+		for (ix = 0; ix < remain; ix++)
+			remain_data |= *((uint8_t *)(msg + len - remain +
ix)) << (8 * ix); + + __bar_chan_reg_write(subchan_addr, 4 * count + + BAR_MSG_PLAYLOAD_OFFSET, remain_data); + } + return BAR_MSG_OK; +} + +static uint16_t __bar_chan_msg_payload_get(uint64_t subchan_addr, uint8_t *msg, uint16_t len) +{ + uint32_t *data = (uint32_t *)msg; + uint32_t count = (len >> 2); + uint32_t ix; + + for (ix = 0; ix < count; ix++) + __bar_chan_reg_read(subchan_addr, 4 * ix + BAR_MSG_PLAYLOAD_OFFSET, (data + ix)); + + uint32_t remain = (len & 0x3); + + if (remain) { + uint32_t remain_data = 0; + + __bar_chan_reg_read(subchan_addr, 4 * count + + BAR_MSG_PLAYLOAD_OFFSET, &remain_data); + for (ix = 0; ix < remain; ix++) + *((uint8_t *)(msg + (len - remain + ix))) = remain_data >> (8 * ix); + } + return BAR_MSG_OK; +} + +static uint16_t __bar_chan_msg_valid_set(uint64_t subchan_addr, uint8_t valid_label) +{ + uint32_t data; + + __bar_chan_reg_read(subchan_addr, BAR_MSG_VALID_OFFSET, &data); + data &= (~BAR_MSG_VALID_MASK); + data |= (uint32_t)valid_label; + __bar_chan_reg_write(subchan_addr, BAR_MSG_VALID_OFFSET, data); + return BAR_MSG_OK; +} + +static uint16_t __bar_msg_valid_stat_get(uint64_t subchan_addr) +{ + uint32_t data; + + __bar_chan_reg_read(subchan_addr, BAR_MSG_VALID_OFFSET, &data); + if (BAR_MSG_CHAN_USABLE == (data & BAR_MSG_VALID_MASK)) + return BAR_MSG_CHAN_USABLE; + + return BAR_MSG_CHAN_USED; +} + +#if READ_CHECK +static uint8_t temp_msg[BAR_MSG_ADDR_CHAN_INTERVAL]; +#endif +static uint16_t __bar_chan_msg_send(uint64_t subchan_addr, void *payload_addr, + uint16_t payload_len, struct bar_msg_header *msg_header) +{ + __bar_chan_msg_header_set(subchan_addr, msg_header); +#if READ_CHECK + __bar_chan_msg_header_get(subchan_addr, (struct bar_msg_header *)temp_msg); +#endif + __bar_chan_msg_payload_set(subchan_addr, (uint8_t *)(payload_addr), payload_len); +#if READ_CHECK + __bar_chan_msg_payload_get(subchan_addr, temp_msg, payload_len); +#endif + __bar_chan_msg_valid_set(subchan_addr, BAR_MSG_CHAN_USED); + return BAR_MSG_OK; +} + +static uint16_t __bar_chan_msg_poltag_set(uint64_t subchan_addr, uint8_t label) +{ + uint32_t data; + + __bar_chan_reg_read(subchan_addr, BAR_MSG_VALID_OFFSET, &data); + data &= (~(uint32_t)BAR_MSG_POL_MASK); + data |= ((uint32_t)label << BAR_MSG_POL_OFFSET); + __bar_chan_reg_write(subchan_addr, BAR_MSG_VALID_OFFSET, data); + return BAR_MSG_OK; +} + +static uint16_t __bar_chan_sync_msg_reps_get(uint64_t subchan_addr, + uint64_t recv_buffer, uint16_t buffer_len) +{ + struct bar_msg_header msg_header = {0}; + uint16_t msg_id = 0; + uint16_t msg_len = 0; + + __bar_chan_msg_header_get(subchan_addr, &msg_header); + msg_id = msg_header.msg_id; + struct seqid_item *reps_info = &g_seqid_ring.reps_info_tbl[msg_id]; + + if (reps_info->flag != REPS_INFO_FLAG_USED) { + PMD_MSG_LOG(ERR, "msg_id %u unused", msg_id); + return BAR_MSG_ERR_REPLY; + } + msg_len = msg_header.len; + + if (msg_len > buffer_len - 4) { + PMD_MSG_LOG(ERR, "recv buffer len is: %u, but reply msg len is: %u", + buffer_len, msg_len + 4); + return BAR_MSG_ERR_REPSBUFF_LEN; + } + uint8_t *recv_msg = (uint8_t *)recv_buffer; + + __bar_chan_msg_payload_get(subchan_addr, recv_msg + REPS_HEADER_PAYLOAD_OFFSET, msg_len); + *(uint16_t *)(recv_msg + REPS_HEADER_LEN_OFFSET) = msg_len; + *recv_msg = REPS_HEADER_REPLYED; /* set reps's valid */ + return BAR_MSG_OK; +} + +static int __bar_chan_send_para_check(struct zxdh_pci_bar_msg *in, + struct zxdh_msg_recviver_mem *result) +{ + if (in == NULL || result == NULL) { + PMD_MSG_LOG(ERR, "send para ERR: null para."); + return 
BAR_MSG_ERR_NULL_PARA;
+	}
+	uint8_t src_index = __bar_msg_src_index_trans(in->src);
+	uint8_t dst_index = __bar_msg_dst_index_trans(in->dst);
+
+	if (src_index == BAR_MSG_SRC_ERR || dst_index == BAR_MSG_DST_ERR) {
+		PMD_MSG_LOG(ERR, "send para ERR: chan doesn't exist.");
+		return BAR_MSG_ERR_TYPE;
+	}
+	if (in->module_id >= BAR_MSG_MODULE_NUM) {
+		PMD_MSG_LOG(ERR, "send para ERR: invalid module_id: %d.", in->module_id);
+		return BAR_MSG_ERR_MODULE;
+	}
+	if (in->payload_addr == NULL) {
+		PMD_MSG_LOG(ERR, "send para ERR: null message.");
+		return BAR_MSG_ERR_BODY_NULL;
+	}
+	if (in->payload_len > BAR_MSG_PAYLOAD_MAX_LEN) {
+		PMD_MSG_LOG(ERR, "send para ERR: len %d is too long.", in->payload_len);
+		return BAR_MSG_ERR_LEN;
+	}
+	if (in->virt_addr == 0 || result->recv_buffer == NULL) {
+		PMD_MSG_LOG(ERR, "send para ERR: virt_addr or recv_buffer is NULL.");
+		return BAR_MSG_ERR_VIRTADDR_NULL;
+	}
+	if (result->buffer_len < REPS_HEADER_PAYLOAD_OFFSET) {
+		PMD_MSG_LOG(ERR,
+			"recv buffer len: %" PRIu64 " is shorter than the minimal 4 bytes",
+			result->buffer_len);
+		return BAR_MSG_ERR_REPSBUFF_LEN;
+	}
+
+	return BAR_MSG_OK;
+}
+
+static uint16_t pcie_id_to_hard_lock(uint16_t src_pcieid, uint8_t dst)
+{
+	uint16_t lock_id = 0;
+	uint16_t pf_idx = (src_pcieid & PCIEID_PF_IDX_MASK) >> PCIEID_PF_IDX_OFFSET;
+	uint16_t ep_idx = (src_pcieid & PCIEID_EP_IDX_MASK) >> PCIEID_EP_IDX_OFFSET;
+
+	switch (dst) {
+	/* msg to risc */
+	case MSG_CHAN_END_RISC:
+		lock_id = MULTIPLY_BY_8(ep_idx) + pf_idx;
+		break;
+	/* msg to pf/vf */
+	case MSG_CHAN_END_VF:
+	case MSG_CHAN_END_PF:
+		lock_id = MULTIPLY_BY_8(ep_idx) + pf_idx + MULTIPLY_BY_8(1 + MAX_EP_NUM);
+		break;
+	default:
+		lock_id = 0;
+		break;
+	}
+	if (lock_id >= MAX_HARD_SPINLOCK_NUM)
+		lock_id = 0;
+
+	return lock_id;
+}
+
+static uint8_t spinlock_read(uint64_t virt_lock_addr, uint32_t lock_id)
+{
+	return *(volatile uint8_t *)((uint64_t)virt_lock_addr + (uint64_t)lock_id);
+}
+
+static void spinlock_write(uint64_t virt_lock_addr, uint32_t lock_id, uint8_t data)
+{
+	*(volatile uint8_t *)((uint64_t)virt_lock_addr + (uint64_t)lock_id) = data;
+}
+
+static void label_write(uint64_t label_lock_addr, uint32_t lock_id, uint16_t value)
+{
+	*(volatile uint16_t *)(label_lock_addr + lock_id * 2) = value;
+}
+
+static int32_t zxdh_spinlock_lock(uint32_t virt_lock_id, uint64_t virt_addr,
+		uint64_t label_addr, uint16_t master_id)
+{
+	uint32_t lock_rd_cnt = 0;
+
+	do {
+		/* read to lock */
+		uint8_t spl_val = spinlock_read(virt_addr, virt_lock_id);
+
+		if (spl_val == 0) {
+			label_write((uint64_t)label_addr, virt_lock_id, master_id);
+			break;
+		}
+		rte_delay_us_block(SPINLOCK_POLLING_SPAN_US);
+		lock_rd_cnt++;
+	} while (lock_rd_cnt < MAX_HARD_SPINLOCK_ASK_TIMES);
+	if (lock_rd_cnt >= MAX_HARD_SPINLOCK_ASK_TIMES)
+		return -1;
+
+	return 0;
+}
+
+static int32_t zxdh_spinlock_unlock(uint32_t virt_lock_id, uint64_t virt_addr, uint64_t label_addr)
+{
+	label_write((uint64_t)label_addr, virt_lock_id, 0);
+	spinlock_write(virt_addr, virt_lock_id, 0);
+	return 0;
+}
+
+int pf_recv_bar_msg(void *pay_load __rte_unused,
+		uint16_t len __rte_unused,
+		void *reps_buffer __rte_unused,
+		uint16_t *reps_len __rte_unused,
+		void *eth_dev __rte_unused)
+{
+	/* to be provided later */
+	return 0;
+}
+
+int vf_recv_bar_msg(void *pay_load __rte_unused,
+		uint16_t len __rte_unused,
+		void *reps_buffer __rte_unused,
+		uint16_t *reps_len __rte_unused,
+		void *eth_dev __rte_unused)
+{
+	/* to be provided later */
+	return 0;
+}
+
+static int bar_hard_lock(uint16_t src_pcieid, uint8_t dst, uint64_t virt_addr)
+{
+	int ret = 0;
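	/*
+	 * The lock byte is read-to-acquire: the read in zxdh_spinlock_lock()
+	 * returns 0 only when the lock was free, and that read itself claims
+	 * it; writing 0 back releases it. The label word records the owner
+	 * (src_pcieid with LOCK_MASTER_ID_MASK set) so the current holder can
+	 * be identified. The spin is bounded to
+	 * MAX_HARD_SPINLOCK_ASK_TIMES * SPINLOCK_POLLING_SPAN_US
+	 * = 1000 * 100 us = 100 ms before it gives up with -1.
+	 */
+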
uint16_t lockid = pcie_id_to_hard_lock(src_pcieid, dst); + + PMD_MSG_LOG(DEBUG, "dev pcieid: 0x%x lock, get hardlockid: %u\n", src_pcieid, lockid); + if (dst == MSG_CHAN_END_RISC) + ret = zxdh_spinlock_lock(lockid, virt_addr + CHAN_RISC_SPINLOCK_OFFSET, + virt_addr + CHAN_RISC_LABEL_OFFSET, + src_pcieid | LOCK_MASTER_ID_MASK); + else + ret = zxdh_spinlock_lock(lockid, virt_addr + CHAN_PFVF_SPINLOCK_OFFSET, + virt_addr + CHAN_PFVF_LABEL_OFFSET, + src_pcieid | LOCK_MASTER_ID_MASK); + + return ret; +} + +static void bar_hard_unlock(uint16_t src_pcieid, uint8_t dst, uint64_t virt_addr) +{ + uint16_t lockid = pcie_id_to_hard_lock(src_pcieid, dst); + + PMD_MSG_LOG(DEBUG, "dev pcieid: 0x%x unlock, get hardlockid: %u\n", src_pcieid, lockid); + if (dst == MSG_CHAN_END_RISC) + zxdh_spinlock_unlock(lockid, virt_addr + CHAN_RISC_SPINLOCK_OFFSET, + virt_addr + CHAN_RISC_LABEL_OFFSET); + else + zxdh_spinlock_unlock(lockid, virt_addr + CHAN_PFVF_SPINLOCK_OFFSET, + virt_addr + CHAN_PFVF_LABEL_OFFSET); +} +/** + * Fun: PF init hard_spinlock addr + * @pcie_id: pf's pcie_id + * @bar_base_addr: + */ +int bar_chan_pf_init_spinlock(uint16_t pcie_id, uint64_t bar_base_addr) +{ + int lock_id = pcie_id_to_hard_lock(pcie_id, MSG_CHAN_END_RISC); + + zxdh_spinlock_unlock(lock_id, bar_base_addr + BAR0_SPINLOCK_OFFSET, + bar_base_addr + HW_LABEL_OFFSET); + lock_id = pcie_id_to_hard_lock(pcie_id, MSG_CHAN_END_VF); + zxdh_spinlock_unlock(lock_id, bar_base_addr + BAR0_SPINLOCK_OFFSET, + bar_base_addr + HW_LABEL_OFFSET); + return 0; +} + +/** + * Fun: lock the channel + */ +pthread_spinlock_t chan_lock; +static int bar_chan_lock(uint8_t src, uint8_t dst, uint16_t src_pcieid, uint64_t virt_addr) +{ + int ret = 0; + uint8_t src_index = __bar_msg_src_index_trans(src); + uint8_t dst_index = __bar_msg_dst_index_trans(dst); + + if (src_index == BAR_MSG_SRC_ERR || dst_index == BAR_MSG_DST_ERR) { + PMD_MSG_LOG(ERR, "lock ERR: chan doesn't exist.\n"); + return BAR_MSG_ERR_TYPE; + } + uint16_t idx = lock_type_tbl[src_index][dst_index]; + + if (idx == LOCK_TYPE_SOFT) + pthread_spin_lock(&chan_lock); + else + ret = bar_hard_lock(src_pcieid, dst, virt_addr); + + if (ret != 0) + PMD_MSG_LOG(ERR, "dev: 0x%x failed to lock.\n", src_pcieid); + + return ret; +} +/** + * Fun: unlock the channel + */ +static int bar_chan_unlock(uint8_t src, uint8_t dst, uint16_t src_pcieid, uint64_t virt_addr) +{ + uint8_t src_index = __bar_msg_src_index_trans(src); + uint8_t dst_index = __bar_msg_dst_index_trans(dst); + + if (src_index == BAR_MSG_SRC_ERR || dst_index == BAR_MSG_DST_ERR) { + PMD_MSG_LOG(ERR, "unlock ERR: chan doesn't exist.\n"); + return BAR_MSG_ERR_TYPE; + } + uint16_t idx = lock_type_tbl[src_index][dst_index]; + + if (idx == LOCK_TYPE_SOFT) + pthread_spin_unlock(&chan_lock); + else + bar_hard_unlock(src_pcieid, dst, virt_addr); + + return BAR_MSG_OK; +} + +int zxdh_bar_chan_sync_msg_send(struct zxdh_pci_bar_msg *in, struct zxdh_msg_recviver_mem *result) +{ + uint16_t ret = __bar_chan_send_para_check(in, result); + + if (ret != BAR_MSG_OK) + goto exit; + + uint16_t seq_id; + + ret = __bar_chan_save_recv_info(result, &seq_id); + if (ret != BAR_MSG_OK) + goto exit; + + uint64_t subchan_addr; + + __bar_chan_subchan_addr_get(in, &subchan_addr); + struct bar_msg_header msg_header = {0}; + + msg_header.sync = BAR_CHAN_MSG_SYNC; + msg_header.emec = in->emec; + msg_header.usr = 0; + msg_header.rsv = 0; + msg_header.module_id = in->module_id; + msg_header.len = in->payload_len; + msg_header.msg_id = seq_id; + msg_header.src_pcieid = in->src_pcieid; + 
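	/*
+	 * seq_id stored in msg_header.msg_id above is the correlation handle:
+	 * the peer echoes it in the reply header, and
+	 * __bar_chan_sync_msg_reps_get() checks it against g_seqid_ring
+	 * before accepting the reply.
+	 */
+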
msg_header.dst_pcieid = in->dst_pcieid; + + ret = bar_chan_lock(in->src, in->dst, in->src_pcieid, in->virt_addr); + if (ret != BAR_MSG_OK) { + __bar_chan_msgid_free(seq_id); + goto exit; + } + __bar_chan_msg_send(subchan_addr, in->payload_addr, in->payload_len, &msg_header); + /* wait unset valid */ + uint32_t time_out_cnt = 0; + uint16_t valid; + + do { + rte_delay_us_block(BAR_MSG_POLLING_SPAN); + valid = __bar_msg_valid_stat_get(subchan_addr); + ++time_out_cnt; + } while ((time_out_cnt < BAR_MSG_TIMEOUT_TH) && (valid == BAR_MSG_CHAN_USED)); + + if ((time_out_cnt == BAR_MSG_TIMEOUT_TH) && (valid != BAR_MSG_CHAN_USABLE)) { + __bar_chan_msg_valid_set(subchan_addr, BAR_MSG_CHAN_USABLE); + __bar_chan_msg_poltag_set(subchan_addr, 0); + PMD_MSG_LOG(ERR, "BAR MSG ERR: chan type time out."); + ret = BAR_MSG_ERR_TIME_OUT; + } else { + ret = __bar_chan_sync_msg_reps_get(subchan_addr, + (uint64_t)result->recv_buffer, result->buffer_len); + } + __bar_chan_msgid_free(seq_id); + bar_chan_unlock(in->src, in->dst, in->src_pcieid, in->virt_addr); + +exit: + return ret; +} + +static uint64_t recv_addr_get(uint8_t src_type, uint8_t dst_type, uint64_t virt_addr) +{ + uint8_t src = __bar_msg_dst_index_trans(src_type); + uint8_t dst = __bar_msg_src_index_trans(dst_type); + + if (src == BAR_MSG_SRC_ERR || dst == BAR_MSG_DST_ERR) + return 0; + + uint8_t chan_id = chan_id_tbl[dst][src]; + uint8_t subchan_id = 1 - subchan_id_tbl[dst][src]; + + return subchan_addr_cal(virt_addr, chan_id, subchan_id); +} + +static uint64_t reply_addr_get(uint8_t sync, uint8_t src_type, uint8_t dst_type, uint64_t virt_addr) +{ + uint8_t src = __bar_msg_dst_index_trans(src_type); + uint8_t dst = __bar_msg_src_index_trans(dst_type); + + if (src == BAR_MSG_SRC_ERR || dst == BAR_MSG_DST_ERR) + return 0; + + uint8_t chan_id = chan_id_tbl[dst][src]; + uint8_t subchan_id = 1 - subchan_id_tbl[dst][src]; + uint64_t recv_rep_addr; + + if (sync == BAR_CHAN_MSG_SYNC) + recv_rep_addr = subchan_addr_cal(virt_addr, chan_id, subchan_id); + else + recv_rep_addr = subchan_addr_cal(virt_addr, chan_id, 1 - subchan_id); + + return recv_rep_addr; +} + +static uint16_t __bar_chan_msg_header_check(struct bar_msg_header *msg_header) +{ + uint8_t module_id = 0; + uint16_t len = 0; + + if (msg_header->valid != BAR_MSG_CHAN_USED) { + PMD_MSG_LOG(ERR, "recv header ERR: valid label is not used."); + return BAR_MSG_ERR_MODULE; + } + + module_id = msg_header->module_id; + if (module_id >= (uint8_t)BAR_MSG_MODULE_NUM) { + PMD_MSG_LOG(ERR, "recv header ERR: invalid module_id: %u.", module_id); + return BAR_MSG_ERR_MODULE; + } + + len = msg_header->len; + if (len > BAR_MSG_PAYLOAD_MAX_LEN) { + PMD_MSG_LOG(ERR, "recv header ERR: invalid mesg len: %u.", len); + return BAR_MSG_ERR_LEN; + } + if (msg_recv_func_tbl[msg_header->module_id] == NULL) { + PMD_MSG_LOG(ERR, "recv header ERR: module:%s(%u) doesn't register", + module_id_name(module_id), module_id); + return BAR_MSG_ERR_MODULE_NOEXIST; + } + return BAR_MSG_OK; +} + +static void __bar_msg_sync_msg_proc(uint64_t reply_addr, struct bar_msg_header *msg_header, + uint8_t *reciver_buff, void *dev) +{ + uint8_t *reps_buffer = rte_malloc(NULL, BAR_MSG_PAYLOAD_MAX_LEN, 0); + + if (reps_buffer == NULL) + return; + + zxdh_bar_chan_msg_recv_callback recv_func = msg_recv_func_tbl[msg_header->module_id]; + uint16_t reps_len = 0; + + recv_func(reciver_buff, msg_header->len, reps_buffer, &reps_len, dev); + msg_header->ack = BAR_CHAN_MSG_ACK; + msg_header->len = reps_len; + __bar_chan_msg_header_set(reply_addr, msg_header); + 
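	/*
+	 * Publish order: header and payload are written first, and the valid
+	 * flag is flipped to BAR_MSG_CHAN_USABLE only afterwards, so the
+	 * sender polling in zxdh_bar_chan_sync_msg_send() never sees a
+	 * half-written reply.
+	 */
+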
	__bar_chan_msg_payload_set(reply_addr, reps_buffer, reps_len);
+	__bar_chan_msg_valid_set(reply_addr, BAR_MSG_CHAN_USABLE);
+	rte_free(reps_buffer);
+}
+
+static void __bar_msg_ack_async_msg_proc(struct bar_msg_header *msg_header, uint8_t *reciver_buff)
+{
+	struct seqid_item *reps_info = &g_seqid_ring.reps_info_tbl[msg_header->msg_id];
+
+	if (reps_info->flag != REPS_INFO_FLAG_USED) {
+		PMD_MSG_LOG(ERR, "msg_id: %u is released", msg_header->msg_id);
+		return;
+	}
+	if (msg_header->len > reps_info->buffer_len - 4) {
+		PMD_MSG_LOG(ERR, "reps_buf_len is %u, but reps_msg_len is %u",
+			reps_info->buffer_len, msg_header->len + 4);
+		goto free_id;
+	}
+	uint8_t *reps_buffer = (uint8_t *)reps_info->reps_addr;
+
+	rte_memcpy(reps_buffer + 4, reciver_buff, msg_header->len);
+	*(uint16_t *)(reps_buffer + 1) = msg_header->len;
+	*(uint8_t *)(reps_info->reps_addr) = REPS_HEADER_REPLYED;
+
+free_id:
+	__bar_chan_msgid_free(msg_header->msg_id);
+}
+
+int zxdh_bar_irq_recv(uint8_t src, uint8_t dst, uint64_t virt_addr, void *dev)
+{
+	uint64_t recv_addr = recv_addr_get(src, dst, virt_addr);
+
+	if (recv_addr == 0) {
+		PMD_MSG_LOG(ERR, "invalid driver type(src:%u, dst:%u).", src, dst);
+		return -1;
+	}
+
+	struct bar_msg_header msg_header = {0};
+
+	__bar_chan_msg_header_get(recv_addr, &msg_header);
+	uint16_t ret = __bar_chan_msg_header_check(&msg_header);
+
+	if (ret != BAR_MSG_OK) {
+		PMD_MSG_LOG(ERR, "recv msg_header check failed, ret: %u.", ret);
+		return -1;
+	}
+	uint8_t *recved_msg = rte_malloc(NULL, msg_header.len, 0);
+
+	if (recved_msg == NULL) {
+		PMD_MSG_LOG(ERR, "malloc temp buff failed.");
+		return -1;
+	}
+	__bar_chan_msg_payload_get(recv_addr, recved_msg, msg_header.len);
+
+	uint64_t reps_addr = reply_addr_get(msg_header.sync, src, dst, virt_addr);
+
+	if (msg_header.sync == BAR_CHAN_MSG_SYNC) {
+		__bar_msg_sync_msg_proc(reps_addr, &msg_header, recved_msg, dev);
+		goto exit;
+	}
+	__bar_chan_msg_valid_set(recv_addr, BAR_MSG_CHAN_USABLE);
+	if (msg_header.ack == BAR_CHAN_MSG_ACK) {
+		__bar_msg_ack_async_msg_proc(&msg_header, recved_msg);
+		goto exit;
+	}
+
+exit:
+	rte_free(recved_msg);
+	return BAR_MSG_OK;
+}
+
+int zxdh_bar_chan_msg_recv_register(uint8_t module_id, zxdh_bar_chan_msg_recv_callback callback)
+{
+	if (module_id >= (uint16_t)BAR_MSG_MODULE_NUM) {
+		PMD_MSG_LOG(ERR, "register ERR: invalid module_id: %u.", module_id);
+		return BAR_MSG_ERR_MODULE;
+	}
+	if (callback == NULL) {
+		PMD_MSG_LOG(ERR, "register %s(%u) error: null callback.",
+			module_id_name(module_id), module_id);
+		return BAR_MEG_ERR_NULL_FUNC;
+	}
+	if (msg_recv_func_tbl[module_id] != NULL) {
+		PMD_MSG_LOG(ERR, "register ERR: module %s(%u) is already registered.",
+			module_id_name(module_id), module_id);
+		return BAR_MSG_ERR_REPEAT_REGISTER;
+	}
+	msg_recv_func_tbl[module_id] = callback;
+	PMD_MSG_LOG(DEBUG, "register module: %s(%u) success.",
+		module_id_name(module_id), module_id);
+	return BAR_MSG_OK;
+}
+
+int zxdh_bar_chan_msg_recv_unregister(uint8_t module_id)
+{
+	if (module_id >= (uint16_t)BAR_MSG_MODULE_NUM) {
+		PMD_MSG_LOG(ERR, "unregister ERR: invalid module_id: %u.", module_id);
+		return BAR_MSG_ERR_MODULE;
+	}
+	if (msg_recv_func_tbl[module_id] == NULL) {
+		PMD_MSG_LOG(ERR, "unregister ERR: module %s(%d) is not registered.",
+			module_id_name(module_id), module_id);
+		return BAR_MSG_ERR_UNGISTER;
+	}
+	msg_recv_func_tbl[module_id] = NULL;
+	PMD_MSG_LOG(DEBUG, "unregister module %s(%d) success.",
+		module_id_name(module_id), module_id);
+	return BAR_MSG_OK;
+}
+
+static uint16_t bar_get_sum(uint8_t *ptr, uint8_t len)
+{
+	uint64_t sum = 0;
+	int idx;
+
+	for (idx = 0; idx < len; idx++)
+		sum += *(ptr + idx);
+
+	return (uint16_t)sum;
+}
+
+static int zxdh_get_res_info(struct zxdh_res_para *dev, uint8_t field, uint8_t *res, uint16_t *len)
+{
+	if (!res || !dev)
+		return BAR_MSG_ERR_NULL;
+
+	struct tbl_msg_header tbl_msg = {
+		.type = TBL_TYPE_READ,
+		.field = field,
+		.pcieid = dev->pcie_id,
+		.slen = 0,
+		.rsv = 0,
+	};
+
+	struct zxdh_pci_bar_msg in = {0};
+
+	in.virt_addr = dev->virt_addr;
+	in.payload_addr = &tbl_msg;
+	in.payload_len = sizeof(tbl_msg);
+	in.src = dev->src_type;
+	in.dst = MSG_CHAN_END_RISC;
+	in.module_id = BAR_MODULE_TBL;
+	in.src_pcieid = dev->pcie_id;
+
+	uint8_t recv_buf[RSC_TBL_CONTENT_LEN_MAX + 8] = {0};
+	struct zxdh_msg_recviver_mem result = {
+		.recv_buffer = recv_buf,
+		.buffer_len = sizeof(recv_buf),
+	};
+	int ret = zxdh_bar_chan_sync_msg_send(&in, &result);
+
+	if (ret != BAR_MSG_OK) {
+		PMD_MSG_LOG(ERR,
+			"send sync_msg failed. pcieid: 0x%x, ret: %d.", dev->pcie_id, ret);
+		return ret;
+	}
+	struct tbl_msg_reps_header *tbl_reps =
+		(struct tbl_msg_reps_header *)(recv_buf + REPS_HEADER_PAYLOAD_OFFSET);
+
+	if (tbl_reps->check != TBL_MSG_PRO_SUCCESS) {
+		PMD_MSG_LOG(ERR,
+			"get resource_field failed. pcieid: 0x%x, check: 0x%x.",
+			dev->pcie_id, tbl_reps->check);
+		return BAR_MSG_ERR_REPLY;
+	}
+	*len = tbl_reps->len;
+	rte_memcpy(res,
+		(recv_buf + REPS_HEADER_PAYLOAD_OFFSET + sizeof(struct tbl_msg_reps_header)), *len);
+	return ret;
+}
+
+int zxdh_get_res_panel_id(struct zxdh_res_para *in, uint8_t *panel_id)
+{
+	uint8_t reps = 0;
+	uint16_t reps_len = 0;
+
+	if (zxdh_get_res_info(in, TBL_FIELD_PNLID, &reps, &reps_len) != BAR_MSG_OK)
+		return -1;
+
+	*panel_id = reps;
+	return BAR_MSG_OK;
+}
+
+int zxdh_get_res_hash_id(struct zxdh_res_para *in, uint8_t *hash_id)
+{
+	uint8_t reps = 0;
+	uint16_t reps_len = 0;
+
+	if (zxdh_get_res_info(in, TBL_FIELD_HASHID, &reps, &reps_len) != BAR_MSG_OK)
+		return -1;
+
+	*hash_id = reps;
+	return BAR_MSG_OK;
+}
+
+int zxdh_bar_chan_enable(struct msix_para *_msix_para, uint16_t *vport)
+{
+	int ret = 0;
+	uint16_t check_token = 0;
+	uint16_t sum_res = 0;
+
+	if (!_msix_para)
+		return BAR_MSG_ERR_NULL;
+
+	struct msix_msg msix_msg = {
+		.pcie_id = _msix_para->pcie_id,
+		.vector_risc = _msix_para->vector_risc,
+		.vector_pfvf = _msix_para->vector_pfvf,
+		.vector_mpf = _msix_para->vector_mpf,
+	};
+	struct zxdh_pci_bar_msg in = {
+		.virt_addr = _msix_para->virt_addr,
+		.payload_addr = &msix_msg,
+		.payload_len = sizeof(msix_msg),
+		.emec = 0,
+		.src = _msix_para->driver_type,
+		.dst = MSG_CHAN_END_RISC,
+		.module_id = BAR_MODULE_MISX,
+		.src_pcieid = _msix_para->pcie_id,
+		.dst_pcieid = 0,
+		.usr = 0,
+	};
+
+	struct bar_recv_msg recv_msg = {0};
+	struct zxdh_msg_recviver_mem result = {
+		.recv_buffer = &recv_msg,
+		.buffer_len = sizeof(recv_msg),
+	};
+
+	ret = zxdh_bar_chan_sync_msg_send(&in, &result);
+
+	if (ret != BAR_MSG_OK)
+		return -ret;
+
+	check_token = recv_msg.msix_reps.check;
+	sum_res = bar_get_sum((uint8_t *)&msix_msg, sizeof(msix_msg));
+
+	if (check_token != sum_res) {
+		PMD_MSG_LOG(ERR, "expect token: 0x%x, get token: 0x%x.", sum_res, check_token);
+		return BAR_MSG_ERR_REPLY;
+	}
+	*vport = recv_msg.msix_reps.vport;
+
+	return BAR_MSG_OK;
+}
+
+int zxdh_get_bar_offset(struct bar_offset_params *paras, struct bar_offset_res *res)
+{
+	uint16_t check_token = 0;
+	uint16_t sum_res = 0;
+	int ret = 0;
+
+	if (!paras)
+		return BAR_MSG_ERR_NULL;
+
+	struct offset_get_msg
send_msg = { + .pcie_id = paras->pcie_id, + .type = paras->type, + }; + struct zxdh_pci_bar_msg in = {0}; + + in.payload_addr = &send_msg; + in.payload_len = sizeof(send_msg); + in.virt_addr = paras->virt_addr; + in.src = MSG_CHAN_END_PF; + in.dst = MSG_CHAN_END_RISC; + in.module_id = BAR_MODULE_OFFSET_GET; + in.src_pcieid = paras->pcie_id; + + struct bar_recv_msg recv_msg = {0}; + struct zxdh_msg_recviver_mem result = { + .recv_buffer = &recv_msg, + .buffer_len = sizeof(recv_msg), + }; + ret = zxdh_bar_chan_sync_msg_send(&in, &result); + if (ret != BAR_MSG_OK) + return -ret; + + check_token = recv_msg.offset_reps.check; + sum_res = bar_get_sum((uint8_t *)&send_msg, sizeof(send_msg)); + + if (check_token != sum_res) { + PMD_MSG_LOG(ERR, "expect token: 0x%x, get token: 0x%x.\n", sum_res, check_token); + return BAR_MSG_ERR_REPLY; + } + res->bar_offset = recv_msg.offset_reps.offset; + res->bar_length = recv_msg.offset_reps.length; + return BAR_MSG_OK; +} + +int zxdh_msg_chan_init(void) +{ + g_dev_stat.dev_cnt++; + if (g_dev_stat.is_res_init) + return BAR_MSG_OK; + + pthread_spin_init(&chan_lock, 0); + g_seqid_ring.cur_id = 0; + pthread_spin_init(&g_seqid_ring.lock, 0); + uint16_t seq_id; + + for (seq_id = 0; seq_id < BAR_SEQID_NUM_MAX; seq_id++) { + struct seqid_item *reps_info = &(g_seqid_ring.reps_info_tbl[seq_id]); + + reps_info->id = seq_id; + reps_info->flag = REPS_INFO_FLAG_USABLE; + } + g_dev_stat.is_res_init = true; + return BAR_MSG_OK; +} + +int zxdh_bar_msg_chan_exit(void) +{ + if (!g_dev_stat.is_res_init || (--g_dev_stat.dev_cnt > 0)) + return BAR_MSG_OK; + + g_dev_stat.is_res_init = false; + PMD_MSG_LOG(DEBUG, "%s exit success!", __func__); + return BAR_MSG_OK; +} diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h new file mode 100644 index 0000000000..07c4a1b1da --- /dev/null +++ b/drivers/net/zxdh/zxdh_msg.h @@ -0,0 +1,408 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2023 ZTE Corporation + */ + +#ifndef _ZXDH_MSG_CHAN_H_ +#define _ZXDH_MSG_CHAN_H_ + +#ifdef __cplusplus +extern "C" { +#endif + +#include <stdint.h> + +#define ZXDH_MSG_CHAN_PFVFSHARE_OFFSET (ZXDH_CTRLCH_OFFSET + 0x1000) +#define ZXDH_MSIX_INTR_MSG_VEC_BASE 1 +#define ZXDH_MSIX_INTR_MSG_VEC_NUM 3 + +#define BAR_MSG_POLLING_SPAN 100 /* sleep us */ +#define BAR_MSG_POLL_CNT_PER_MS (1 * 1000 / BAR_MSG_POLLING_SPAN) +#define BAR_MSG_POLL_CNT_PER_S (1 * 1000 * 1000 / BAR_MSG_POLLING_SPAN) +#define BAR_MSG_TIMEOUT_TH (10 * 1000 * 1000 / BAR_MSG_POLLING_SPAN) /* 10s */ + +#define BAR_CHAN_MSG_SYNC 0 +#define BAR_CHAN_MSG_ASYNC 1 +#define BAR_CHAN_MSG_NO_EMEC 0 +#define BAR_CHAN_MSG_EMEC 1 +#define BAR_CHAN_MSG_NO_ACK 0 +#define BAR_CHAN_MSG_ACK 1 + +#define ZXDH_MSIX_INTR_DTB_VEC (ZXDH_MSIX_INTR_MSG_VEC_BASE + ZXDH_MSIX_INTR_MSG_VEC_NUM) +#define ZXDH_MSIX_INTR_DTB_VEC_NUM 1 +#define ZXDH_INTR_NONQUE_NUM (ZXDH_MSIX_INTR_MSG_VEC_NUM + ZXDH_MSIX_INTR_DTB_VEC_NUM + 1) +#define ZXDH_QUE_INTR_VEC_BASE (ZXDH_MSIX_INTR_DTB_VEC + ZXDH_MSIX_INTR_DTB_VEC_NUM) /* 5 */ +#define ZXDH_QUE_INTR_VEC_NUM 256 + +#define BAR_MSG_ADDR_CHAN_INTERVAL (2 * 1024) /* channel size */ +#define BAR_MSG_PLAYLOAD_OFFSET (sizeof(struct bar_msg_header)) +#define BAR_MSG_PAYLOAD_MAX_LEN (BAR_MSG_ADDR_CHAN_INTERVAL - sizeof(struct bar_msg_header)) + +#define MSG_CHAN_RET_ERR_RECV_FAIL (-11) +#define ZXDH_INDIR_RQT_SIZE 256 +#define MODULE_EEPROM_DATA_LEN 128 + +enum BAR_MSG_RTN { + BAR_MSG_OK = 0, + BAR_MSG_ERR_MSGID, + BAR_MSG_ERR_NULL, + BAR_MSG_ERR_TYPE, /* Message type exception */ + BAR_MSG_ERR_MODULE, /* Module ID 
exception */
+	BAR_MSG_ERR_BODY_NULL, /* Message body exception */
+	BAR_MSG_ERR_LEN, /* Message length exception */
+	BAR_MSG_ERR_TIME_OUT, /* Message reply timed out */
+	BAR_MSG_ERR_NOT_READY, /* Abnormal message sending conditions */
+	BAR_MEG_ERR_NULL_FUNC, /* Empty receive processing function pointer */
+	BAR_MSG_ERR_REPEAT_REGISTER, /* Module duplicate registration */
+	BAR_MSG_ERR_UNGISTER, /* Deregistration of a module that is not registered */
+	/**
+	 * The sending interface parameter boundary structure pointer is empty
+	 */
+	BAR_MSG_ERR_NULL_PARA,
+	BAR_MSG_ERR_REPSBUFF_LEN, /* The length of reps_buff is too short */
+	/**
+	 * Unable to find the corresponding message processing function for this module
+	 */
+	BAR_MSG_ERR_MODULE_NOEXIST,
+	/**
+	 * The virtual address in the parameters passed in by the sending interface is empty
+	 */
+	BAR_MSG_ERR_VIRTADDR_NULL,
+	BAR_MSG_ERR_REPLY, /* sync msg resp_error */
+	BAR_MSG_ERR_MPF_NOT_SCANNED,
+	BAR_MSG_ERR_KERNEL_READY,
+	BAR_MSG_ERR_USR_RET_ERR,
+	BAR_MSG_ERR_ERR_PCIEID,
+	BAR_MSG_ERR_SOCKET, /* netlink socket err */
+};
+
+enum bar_module_id {
+	BAR_MODULE_DBG = 0, /* 0: debug */
+	BAR_MODULE_TBL, /* 1: resource table */
+	BAR_MODULE_MISX, /* 2: config msix */
+	BAR_MODULE_SDA, /* 3: */
+	BAR_MODULE_RDMA, /* 4: */
+	BAR_MODULE_DEMO, /* 5: channel test */
+	BAR_MODULE_SMMU, /* 6: */
+	BAR_MODULE_MAC, /* 7: mac rx/tx stats */
+	BAR_MODULE_VDPA, /* 8: vdpa live migration */
+	BAR_MODULE_VQM, /* 9: vqm live migration */
+	BAR_MODULE_NP, /* 10: vf msg callback np */
+	BAR_MODULE_VPORT, /* 11: get vport */
+	BAR_MODULE_BDF, /* 12: get bdf */
+	BAR_MODULE_RISC_READY, /* 13: */
+	BAR_MODULE_REVERSE, /* 14: byte stream reverse */
+	BAR_MDOULE_NVME, /* 15: */
+	BAR_MDOULE_NPSDK, /* 16: */
+	BAR_MODULE_NP_TODO, /* 17: */
+	MODULE_BAR_MSG_TO_PF, /* 18: */
+	MODULE_BAR_MSG_TO_VF, /* 19: */
+
+	MODULE_FLASH = 32,
+	BAR_MODULE_OFFSET_GET = 33,
+	BAR_EVENT_OVS_WITH_VCB = 36, /* ovs<-->vcb */
+
+	BAR_MSG_MODULE_NUM = 100,
+};
+
+static inline const char *module_id_name(int val)
+{
+	switch (val) {
+	case BAR_MODULE_DBG: return "BAR_MODULE_DBG";
+	case BAR_MODULE_TBL: return "BAR_MODULE_TBL";
+	case BAR_MODULE_MISX: return "BAR_MODULE_MISX";
+	case BAR_MODULE_SDA: return "BAR_MODULE_SDA";
+	case BAR_MODULE_RDMA: return "BAR_MODULE_RDMA";
+	case BAR_MODULE_DEMO: return "BAR_MODULE_DEMO";
+	case BAR_MODULE_SMMU: return "BAR_MODULE_SMMU";
+	case BAR_MODULE_MAC: return "BAR_MODULE_MAC";
+	case BAR_MODULE_VDPA: return "BAR_MODULE_VDPA";
+	case BAR_MODULE_VQM: return "BAR_MODULE_VQM";
+	case BAR_MODULE_NP: return "BAR_MODULE_NP";
+	case BAR_MODULE_VPORT: return "BAR_MODULE_VPORT";
+	case BAR_MODULE_BDF: return "BAR_MODULE_BDF";
+	case BAR_MODULE_RISC_READY: return "BAR_MODULE_RISC_READY";
+	case BAR_MODULE_REVERSE: return "BAR_MODULE_REVERSE";
+	case BAR_MDOULE_NVME: return "BAR_MDOULE_NVME";
+	case BAR_MDOULE_NPSDK: return "BAR_MDOULE_NPSDK";
+	case BAR_MODULE_NP_TODO: return "BAR_MODULE_NP_TODO";
+	case MODULE_BAR_MSG_TO_PF: return "MODULE_BAR_MSG_TO_PF";
+	case MODULE_BAR_MSG_TO_VF: return "MODULE_BAR_MSG_TO_VF";
+	case MODULE_FLASH: return "MODULE_FLASH";
+	case BAR_MODULE_OFFSET_GET: return "BAR_MODULE_OFFSET_GET";
+	case BAR_EVENT_OVS_WITH_VCB: return "BAR_EVENT_OVS_WITH_VCB";
+	default: return "NA";
+	}
+}
+
+struct bar_msg_header {
+	uint8_t valid : 1; /* used by __bar_chan_msg_valid_set/get */
+	uint8_t sync : 1;
+	uint8_t emec : 1; /* emergency message */
+	uint8_t ack : 1; /* ack msg?
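(set to BAR_CHAN_MSG_ACK on a reply; zxdh_bar_irq_recv() hands such frames to __bar_msg_ack_async_msg_proc())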
*/ + uint8_t poll : 1; + uint8_t usr : 1; + uint8_t rsv; + uint16_t module_id; + uint16_t len; + uint16_t msg_id; + uint16_t src_pcieid; + uint16_t dst_pcieid; /* used in PF-->VF */ +}; /* 12B */ + +struct zxdh_pci_bar_msg { + uint64_t virt_addr; /* bar addr */ + void *payload_addr; + uint16_t payload_len; + uint16_t emec; + uint16_t src; /* refer to BAR_DRIVER_TYPE */ + uint16_t dst; /* refer to BAR_DRIVER_TYPE */ + uint16_t module_id; + uint16_t src_pcieid; + uint16_t dst_pcieid; + uint16_t usr; +}; /* 32B */ + +struct zxdh_msg_recviver_mem { + void *recv_buffer; /* first 4B is head, followed by payload */ + uint64_t buffer_len; +}; /* 16B */ + +struct msix_msg { + uint16_t pcie_id; + uint16_t vector_risc; + uint16_t vector_pfvf; + uint16_t vector_mpf; +}; +/* private reps struct */ +struct bar_msix_reps { + uint16_t pcie_id; + uint16_t check; + uint16_t vport; + uint16_t rsv; +} __rte_packed; /* 8B */ + +struct bar_offset_reps { + uint16_t check; + uint16_t rsv; + uint32_t offset; + uint32_t length; +} __rte_packed; /* 12B */ + +struct bar_recv_msg { + /* fix 4B */ + uint8_t reps_ok; + uint16_t reps_len; + uint8_t rsv; + union { + struct bar_msix_reps msix_reps; /* 8B */ + struct bar_offset_reps offset_reps; /* 12B */ + } __rte_packed; +} __rte_packed; + +enum pciebar_layout_type { + URI_VQM = 0, + URI_SPINLOCK = 1, + URI_FWCAP = 2, + URI_FWSHR = 3, + URI_DRS_SEC = 4, + URI_RSV = 5, + URI_CTRLCH = 6, + URI_1588 = 7, + URI_QBV = 8, + URI_MACPCS = 9, + URI_RDMA = 10, +/* DEBUG PF */ + URI_MNP = 11, + URI_MSPM = 12, + URI_MVQM = 13, + URI_MDPI = 14, + URI_NP = 15, +/* END DEBUG PF */ + URI_MAX, +}; + +enum RES_TBL_FILED { + TBL_FIELD_PCIEID = 0, + TBL_FIELD_BDF = 1, + TBL_FIELD_MSGCH = 2, + TBL_FIELD_DATACH = 3, + TBL_FIELD_VPORT = 4, + TBL_FIELD_PNLID = 5, + TBL_FIELD_PHYPORT = 6, + TBL_FIELD_SERDES_NUM = 7, + TBL_FIELD_NP_PORT = 8, + TBL_FIELD_SPEED = 9, + TBL_FIELD_HASHID = 10, + TBL_FIELD_NON, +}; + +struct tbl_msg_header { + uint8_t type; /* r/w */ + uint8_t field; /* which table? 
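one of enum RES_TBL_FILED, e.g. TBL_FIELD_PNLID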
*/ + uint16_t pcieid; + uint16_t slen; + uint16_t rsv; +}; /* 8B */ +struct tbl_msg_reps_header { + uint8_t check; + uint8_t rsv; + uint16_t len; +}; /* 4B */ + +enum TBL_MSG_TYPE { + TBL_TYPE_READ, + TBL_TYPE_WRITE, + TBL_TYPE_NON, +}; + +struct bar_offset_params { + uint64_t virt_addr; /* Bar space control space virtual address */ + uint16_t pcie_id; + uint16_t type; /* Module types corresponding to PCIBAR planning */ +}; +struct bar_offset_res { + uint32_t bar_offset; + uint32_t bar_length; +}; + +/* vec0 : dev interrupt + * vec1~3: risc interrupt + * vec4 : dtb interrupt + */ +enum { + MSIX_FROM_PFVF = ZXDH_MSIX_INTR_MSG_VEC_BASE, /* 1 */ + MSIX_FROM_MPF, /* 2 */ + MSIX_FROM_RISCV, /* 3 */ + MSG_VEC_NUM /* 4 */ +}; + +enum DRIVER_TYPE { + MSG_CHAN_END_MPF = 0, + MSG_CHAN_END_PF, + MSG_CHAN_END_VF, + MSG_CHAN_END_RISC, +}; + +enum MSG_TYPE { + /* loopback test type */ + TYPE_DEBUG = 0, + DST_RISCV, + DST_MPF, + DST_PF_OR_VF, + DST_ZF, + MSG_TYPE_NUM, +}; + +struct msg_header { + bool is_async; + enum MSG_TYPE msg_type; + enum bar_module_id msg_module_id; + uint8_t msg_priority; + uint16_t vport_dst; + uint16_t qid_dst; +}; + +struct zxdh_res_para { + uint64_t virt_addr; + uint16_t pcie_id; + uint16_t src_type; /* refer to BAR_DRIVER_TYPE */ +}; + +struct msix_para { + uint16_t pcie_id; + uint16_t vector_risc; + uint16_t vector_pfvf; + uint16_t vector_mpf; + uint64_t virt_addr; + uint16_t driver_type; /* refer to DRIVER_TYPE */ +}; + +struct offset_get_msg { + uint16_t pcie_id; + uint16_t type; +}; /* 4B */ + +typedef int (*zxdh_bar_chan_msg_recv_callback)(void *pay_load, uint16_t len, void *reps_buffer, + uint16_t *reps_len, void *dev); + +/** + * Init msg_chan_pkt in probe() + * @return zero for success, negative for failure + */ +int16_t zxdh_msg_chan_pkt_init(void); +void zxdh_msg_chan_pkt_remove(void); /* Remove msg_chan_pkt in probe() */ + +/** + * Get the offset value of the specified module + * @bar_offset_params: input parameter + * @bar_offset_res: Module offset and length + */ +int zxdh_get_bar_offset(struct bar_offset_params *paras, struct bar_offset_res *res); + +typedef int (*zxdh_bar_chan_msg_recv_callback)(void *pay_load, uint16_t len, void *reps_buffer, + uint16_t *reps_len, void *dev); + +/** + * Send synchronization messages through PCIE BAR space + * @in: Message sending information + * @result: Message result feedback + * @return: 0 successful, other failures + */ +int zxdh_bar_chan_sync_msg_send(struct zxdh_pci_bar_msg *in, struct zxdh_msg_recviver_mem *result); + +/** + * PCIE BAR spatial message method, registering message reception callback + * @module_id: Registration module ID + * @callback: Pointer to the receive processing function implemented by the module + * @return: 0 successful, other failures + * Usually called during driver initialization + */ +int zxdh_bar_chan_msg_recv_register(uint8_t module_id, zxdh_bar_chan_msg_recv_callback callback); + +/** + * PCIE BAR spatial message method, unregistered message receiving callback + * @module_id: Kernel PCIE device address + * @return: 0 successful, other failures + * Called during driver uninstallation + */ +int zxdh_bar_chan_msg_recv_unregister(uint8_t module_id); + +/** + * Provide a message receiving interface for device driver interrupt handling functions + * @src: Driver type for sending interrupts + * @dst: Device driver's own driver type + * @virt_addr: The communication bar address of the device + * @return: 0 successful, other failures + */ +int zxdh_bar_irq_recv(uint8_t src, uint8_t dst, uint64_t 
virt_addr, void *dev); + +/** + * Initialize spilock and clear the hardware lock address it belongs to + * @pcie_id: PCIE_id of PF device + * @bar_base_addr: Bar0 initial base address + */ +int bar_chan_pf_init_spinlock(uint16_t pcie_id, uint64_t bar_base_addr); + +int zxdh_bar_chan_enable(struct msix_para *_msix_para, uint16_t *vport); +int zxdh_msg_chan_init(void); +int zxdh_bar_msg_chan_exit(void); + +int zxdh_get_res_panel_id(struct zxdh_res_para *in, uint8_t *panel_id); +int zxdh_get_res_hash_id(struct zxdh_res_para *in, uint8_t *hash_id); + +int pf_recv_bar_msg(void *pay_load __rte_unused, + uint16_t len __rte_unused, + void *reps_buffer __rte_unused, + uint16_t *reps_len __rte_unused, + void *eth_dev __rte_unused); +int vf_recv_bar_msg(void *pay_load __rte_unused, + uint16_t len __rte_unused, + void *reps_buffer __rte_unused, + uint16_t *reps_len __rte_unused, + void *eth_dev __rte_unused); + +#ifdef __cplusplus +} +#endif + +#endif /* _ZXDH_MSG_CHAN_H_ */ diff --git a/drivers/net/zxdh/zxdh_npsdk.c b/drivers/net/zxdh/zxdh_npsdk.c new file mode 100644 index 0000000000..eec644b01e --- /dev/null +++ b/drivers/net/zxdh/zxdh_npsdk.c @@ -0,0 +1,158 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2023 ZTE Corporation + */ + +#include <rte_common.h> +#include "zxdh_npsdk.h" + +int dpp_dtb_hash_offline_delete(uint32_t dev_id __rte_unused, + uint32_t queue_id __rte_unused, + uint32_t sdt_no __rte_unused, + uint32_t flush_mode __rte_unused) +{ + /* todo provided later */ + return 0; +} + +int dpp_dtb_hash_online_delete(uint32_t dev_id __rte_unused, + uint32_t queue_id __rte_unused, + uint32_t sdt_no __rte_unused) +{ + /* todo provided later */ + return 0; +} + +int dpp_apt_hash_res_get(uint32_t type __rte_unused, + DPP_APT_HASH_RES_INIT_T *HashResInit __rte_unused) +{ + /* todo provided later */ + return 0; +} + +int dpp_apt_eram_res_get(uint32_t type __rte_unused, + DPP_APT_ERAM_RES_INIT_T *EramResInit __rte_unused) +{ + /* todo provided later */ + return 0; +} + +int dpp_apt_stat_res_get(uint32_t type __rte_unused, + DPP_APT_STAT_RES_INIT_T *StatResInit __rte_unused) +{ + /* todo provided later */ + return 0; +} + +int dpp_apt_hash_global_res_init(uint32_t dev_id __rte_unused) +{ + /* todo provided later */ + return 0; +} + +int dpp_apt_hash_func_res_init(uint32_t dev_id __rte_unused, + uint32_t func_num __rte_unused, + DPP_APT_HASH_FUNC_RES_T *HashFuncRes __rte_unused) +{ + /* todo provided later */ + return 0; +} + +int dpp_apt_hash_bulk_res_init(uint32_t dev_id __rte_unused, + uint32_t bulk_num __rte_unused, + DPP_APT_HASH_BULK_RES_T *BulkRes __rte_unused) +{ + /* todo provided later */ + return 0; +} + +int dpp_apt_hash_tbl_res_init(uint32_t dev_id __rte_unused, + uint32_t tbl_num __rte_unused, + DPP_APT_HASH_TABLE_T *HashTbl __rte_unused) +{ + /* todo provided later */ + return 0; +} + +int dpp_apt_eram_res_init(uint32_t dev_id __rte_unused, + uint32_t tbl_num __rte_unused, + DPP_APT_ERAM_TABLE_T *EramTbl __rte_unused) +{ + /* todo provided later */ + return 0; +} + +int dpp_stat_ppu_eram_baddr_set(uint32_t dev_id __rte_unused, + uint32_t ppu_eram_baddr __rte_unused) +{ + /* todo provided later */ + return 0; +} +int dpp_stat_ppu_eram_depth_set(uint32_t dev_id __rte_unused, + uint32_t ppu_eram_depth __rte_unused) +{ + /* todo provided later */ + return 0; +} +int dpp_se_cmmu_smmu1_cfg_set(uint32_t dev_id __rte_unused, + uint32_t base_addr __rte_unused) +{ + /* todo provided later */ + return 0; +} +int dpp_stat_ppu_ddr_baddr_set(uint32_t dev_id __rte_unused, + uint32_t 
ppu_ddr_baddr __rte_unused) +{ + /* todo provided later */ + return 0; +} + +int dpp_host_np_init(uint32_t dev_id __rte_unused, + DPP_DEV_INIT_CTRL_T *p_dev_init_ctrl __rte_unused) +{ + /* todo provided later */ + return 0; +} +int dpp_np_online_uninstall(uint32_t dev_id __rte_unused, + char *port_name __rte_unused, + uint32_t queue_id __rte_unused) +{ + /* todo provided later */ + return 0; +} + +int dpp_dtb_stat_ppu_cnt_get(uint32_t dev_id __rte_unused, + uint32_t queue_id __rte_unused, + STAT_CNT_MODE_E rd_mode __rte_unused, + uint32_t index __rte_unused, + uint32_t *p_data __rte_unused) +{ + /* todo provided later */ + return 0; +} + +int dpp_dtb_entry_get(uint32_t dev_id __rte_unused, + uint32_t queue_id __rte_unused, + DPP_DTB_USER_ENTRY_T *GetEntry __rte_unused, + uint32_t srh_mode __rte_unused) +{ + /* todo provided later */ + return 0; +} +int dpp_dtb_table_entry_write(uint32_t dev_id __rte_unused, + uint32_t queue_id __rte_unused, + uint32_t entryNum __rte_unused, + DPP_DTB_USER_ENTRY_T *DownEntrys __rte_unused) +{ + /* todo provided later */ + return 0; +} +int dpp_dtb_table_entry_delete(uint32_t dev_id __rte_unused, + uint32_t queue_id __rte_unused, + uint32_t entryNum __rte_unused, + DPP_DTB_USER_ENTRY_T *DeleteEntrys __rte_unused) +{ + /* todo provided later */ + return 0; +} + + diff --git a/drivers/net/zxdh/zxdh_npsdk.h b/drivers/net/zxdh/zxdh_npsdk.h new file mode 100644 index 0000000000..265f79d132 --- /dev/null +++ b/drivers/net/zxdh/zxdh_npsdk.h @@ -0,0 +1,216 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2023 ZTE Corporation + */ + +#include <stdint.h> + +#define DPP_PORT_NAME_MAX (32) +#define DPP_SMMU1_READ_REG_MAX_NUM (16) +#define DPP_DIR_TBL_BUF_MAX_NUM (DPP_SMMU1_READ_REG_MAX_NUM) +#define DPP_ETCAM_BLOCK_NUM (8) +#define DPP_SMMU0_LPM_AS_TBL_ID_NUM (8) +#define SE_NIC_RES_TYPE 0 + +#define ZXDH_SDT_VPORT_ATT_TABLE ((uint32_t)(1)) +#define ZXDH_SDT_PANEL_ATT_TABLE ((uint32_t)(2)) +#define ZXDH_SDT_RSS_ATT_TABLE ((uint32_t)(3)) +#define ZXDH_SDT_VLAN_ATT_TABLE ((uint32_t)(4)) +#define ZXDH_SDT_BROCAST_ATT_TABLE ((uint32_t)(6)) +#define ZXDH_SDT_UNICAST_ATT_TABLE ((uint32_t)(10)) +#define ZXDH_SDT_MULTICAST_ATT_TABLE ((uint32_t)(11)) + +#define ZXDH_SDT_L2_ENTRY_TABLE0 ((uint32_t)(64)) +#define ZXDH_SDT_L2_ENTRY_TABLE1 ((uint32_t)(65)) +#define ZXDH_SDT_L2_ENTRY_TABLE2 ((uint32_t)(66)) +#define ZXDH_SDT_L2_ENTRY_TABLE3 ((uint32_t)(67)) +#define ZXDH_SDT_L2_ENTRY_TABLE4 ((uint32_t)(68)) +#define ZXDH_SDT_L2_ENTRY_TABLE5 ((uint32_t)(69)) + +#define ZXDH_SDT_MC_TABLE0 ((uint32_t)(76)) +#define ZXDH_SDT_MC_TABLE1 ((uint32_t)(77)) +#define ZXDH_SDT_MC_TABLE2 ((uint32_t)(78)) +#define ZXDH_SDT_MC_TABLE3 ((uint32_t)(79)) +#define ZXDH_SDT_MC_TABLE4 ((uint32_t)(80)) +#define ZXDH_SDT_MC_TABLE5 ((uint32_t)(81)) + +#define MK_SDT_NO(table, hash_idx) \ + (ZXDH_SDT_##table##_TABLE0 + hash_idx) + +typedef struct dpp_dtb_addr_info_t { + uint32_t sdt_no; + uint32_t size; + uint32_t phy_addr; + uint32_t vir_addr; +} DPP_DTB_ADDR_INFO_T; + +typedef struct dpp_dev_init_ctrl_t { + uint32_t vport; + char port_name[DPP_PORT_NAME_MAX]; + uint32_t vector; + uint32_t queue_id; + uint32_t np_bar_offset; + uint32_t np_bar_len; + uint32_t pcie_vir_addr; + uint32_t down_phy_addr; + uint32_t down_vir_addr; + uint32_t dump_phy_addr; + uint32_t dump_vir_addr; + uint32_t dump_sdt_num; + DPP_DTB_ADDR_INFO_T dump_addr_info[]; +} DPP_DEV_INIT_CTRL_T; + +typedef struct dpp_apt_hash_func_res_t { + uint32_t func_id; + uint32_t zblk_num; + uint32_t zblk_bitmap; + uint32_t ddr_dis; +} 
DPP_APT_HASH_FUNC_RES_T; + +typedef enum dpp_hash_ddr_width_mode { + DDR_WIDTH_INVALID = 0, + DDR_WIDTH_256b, + DDR_WIDTH_512b, +} DPP_HASH_DDR_WIDTH_MODE; + +typedef struct dpp_apt_hash_bulk_res_t { + uint32_t func_id; + uint32_t bulk_id; + uint32_t zcell_num; + uint32_t zreg_num; + uint32_t ddr_baddr; + uint32_t ddr_item_num; + DPP_HASH_DDR_WIDTH_MODE ddr_width_mode; + uint32_t ddr_crc_sel; + uint32_t ddr_ecc_en; +} DPP_APT_HASH_BULK_RES_T; + + +typedef struct dpp_sdt_tbl_hash_t { + uint32_t table_type; + uint32_t hash_id; + uint32_t hash_table_width; + uint32_t key_size; + uint32_t hash_table_id; + uint32_t learn_en; + uint32_t keep_alive; + uint32_t keep_alive_baddr; + uint32_t rsp_mode; + uint32_t hash_clutch_en; +} DPP_SDTTBL_HASH_T; + +typedef struct dpp_hash_entry { + uint8_t *p_key; + uint8_t *p_rst; +} DPP_HASH_ENTRY; + + +typedef uint32_t (*DPP_APT_HASH_ENTRY_SET_FUNC)(void *Data, DPP_HASH_ENTRY *Entry); +typedef uint32_t (*DPP_APT_HASH_ENTRY_GET_FUNC)(void *Data, DPP_HASH_ENTRY *Entry); + +typedef struct dpp_apt_hash_table_t { + uint32_t sdtNo; + uint32_t sdt_partner; + DPP_SDTTBL_HASH_T hashSdt; + uint32_t tbl_flag; + DPP_APT_HASH_ENTRY_SET_FUNC hash_set_func; + DPP_APT_HASH_ENTRY_GET_FUNC hash_get_func; +} DPP_APT_HASH_TABLE_T; + +typedef struct dpp_apt_hash_res_init_t { + uint32_t func_num; + uint32_t bulk_num; + uint32_t tbl_num; + DPP_APT_HASH_FUNC_RES_T *func_res; + DPP_APT_HASH_BULK_RES_T *bulk_res; + DPP_APT_HASH_TABLE_T *tbl_res; +} DPP_APT_HASH_RES_INIT_T; + +typedef struct dpp_sdt_tbl_eram_t { + uint32_t table_type; + uint32_t eram_mode; + uint32_t eram_base_addr; + uint32_t eram_table_depth; + uint32_t eram_clutch_en; +} DPP_SDTTBL_ERAM_T; + +typedef uint32_t (*DPP_APT_ERAM_SET_FUNC)(void *Data, uint32_t buf[4]); +typedef uint32_t (*DPP_APT_ERAM_GET_FUNC)(void *Data, uint32_t buf[4]); + +typedef struct dpp_apt_eram_table_t { + uint32_t sdtNo; + DPP_SDTTBL_ERAM_T ERamSdt; + uint32_t opr_mode; + uint32_t rd_mode; + DPP_APT_ERAM_SET_FUNC eram_set_func; + DPP_APT_ERAM_GET_FUNC eram_get_func; +} DPP_APT_ERAM_TABLE_T; + + +typedef struct dpp_apt_eram_res_init_t { + uint32_t tbl_num; + DPP_APT_ERAM_TABLE_T *eram_res; +} DPP_APT_ERAM_RES_INIT_T; + +typedef struct dpp_apt_stat_res_init_t { + uint32_t eram_baddr; + uint32_t eram_depth; + uint32_t ddr_baddr; + uint32_t ppu_ddr_offset; +} DPP_APT_STAT_RES_INIT_T; + +typedef enum stat_cnt_mode_e { + STAT_64_MODE = 0, + STAT_128_MODE = 1, + STAT_MAX_MODE, +} STAT_CNT_MODE_E; + +typedef struct dpp_dtb_user_entry_t { + uint32_t sdt_no; + void *p_entry_data; +} DPP_DTB_USER_ENTRY_T; + + +int dpp_dtb_hash_offline_delete(uint32_t dev_id, uint32_t queue_id, + uint32_t sdt_no, uint32_t flush_mode); +int dpp_dtb_hash_online_delete(uint32_t dev_id, uint32_t queue_id, uint32_t sdt_no); +int dpp_apt_hash_res_get(uint32_t type, DPP_APT_HASH_RES_INIT_T *HashResInit); +int dpp_apt_eram_res_get(uint32_t type, DPP_APT_ERAM_RES_INIT_T *EramResInit); + +int dpp_apt_stat_res_get(uint32_t type, DPP_APT_STAT_RES_INIT_T *StatResInit); +int dpp_apt_hash_global_res_init(uint32_t dev_id); +int dpp_apt_hash_func_res_init(uint32_t dev_id, uint32_t func_num, + DPP_APT_HASH_FUNC_RES_T *HashFuncRes); +int dpp_apt_hash_bulk_res_init(uint32_t dev_id, uint32_t bulk_num, + DPP_APT_HASH_BULK_RES_T *BulkRes); +int dpp_apt_hash_tbl_res_init(uint32_t dev_id, uint32_t tbl_num, + DPP_APT_HASH_TABLE_T *HashTbl); +int dpp_apt_eram_res_init(uint32_t dev_id, uint32_t tbl_num, + DPP_APT_ERAM_TABLE_T *EramTbl); +int dpp_stat_ppu_eram_baddr_set(uint32_t dev_id, uint32_t 
ppu_eram_baddr); +int dpp_stat_ppu_eram_depth_set(uint32_t dev_id, uint32_t ppu_eram_depth); +int dpp_se_cmmu_smmu1_cfg_set(uint32_t dev_id, uint32_t base_addr); +int dpp_stat_ppu_ddr_baddr_set(uint32_t dev_id, uint32_t ppu_ddr_baddr); + +int dpp_host_np_init(uint32_t dev_id, DPP_DEV_INIT_CTRL_T *p_dev_init_ctrl); +int dpp_np_online_uninstall(uint32_t dev_id, + char *port_name, + uint32_t queue_id); + +int dpp_dtb_stat_ppu_cnt_get(uint32_t dev_id, + uint32_t queue_id, + STAT_CNT_MODE_E rd_mode, + uint32_t index, + uint32_t *p_data); + +int dpp_dtb_entry_get(uint32_t dev_id, + uint32_t queue_id, + DPP_DTB_USER_ENTRY_T *GetEntry, + uint32_t srh_mode); +int dpp_dtb_table_entry_write(uint32_t dev_id, + uint32_t queue_id, + uint32_t entryNum, + DPP_DTB_USER_ENTRY_T *DownEntrys); +int dpp_dtb_table_entry_delete(uint32_t dev_id, + uint32_t queue_id, + uint32_t entryNum, + DPP_DTB_USER_ENTRY_T *DeleteEntrys); diff --git a/drivers/net/zxdh/zxdh_pci.c b/drivers/net/zxdh/zxdh_pci.c new file mode 100644 index 0000000000..b32c2e7955 --- /dev/null +++ b/drivers/net/zxdh/zxdh_pci.c @@ -0,0 +1,462 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2023 ZTE Corporation + */ + +#include <stdint.h> +#include <unistd.h> + +#ifdef RTE_EXEC_ENV_LINUX + #include <dirent.h> + #include <fcntl.h> +#endif + +#include <rte_io.h> +#include <rte_bus.h> +#include <rte_common.h> + +#include "zxdh_pci.h" +#include "zxdh_logs.h" +#include "zxdh_queue.h" + +/* + * Following macros are derived from linux/pci_regs.h, however, + * we can't simply include that header here, as there is no such + * file for non-Linux platform. + */ +#define PCI_CAPABILITY_LIST 0x34 +#define PCI_CAP_ID_VNDR 0x09 +#define PCI_CAP_ID_MSIX 0x11 + +/* + * The remaining space is defined by each driver as the per-driver + * configuration space. + */ +#define ZXDH_PCI_CONFIG(hw) (((hw)->use_msix == ZXDH_MSIX_ENABLED) ? 24 : 20) +#define PCI_MSIX_ENABLE 0x8000 + +static inline int32_t check_vq_phys_addr_ok(struct virtqueue *vq) +{ + /** + * Virtio PCI device ZXDH_PCI_QUEUE_PF register is 32bit, + * and only accepts 32 bit page frame number. + * Check if the allocated physical memory exceeds 16TB. 
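+	 * A 32 bit page frame number with 4 KiB pages (assuming
+	 * ZXDH_PCI_QUEUE_ADDR_SHIFT is 12) can address 2^(32+12) bytes
+	 * = 16 TiB, which is the bound tested below.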
+ */ + if ((vq->vq_ring_mem + vq->vq_ring_size - 1) >> (ZXDH_PCI_QUEUE_ADDR_SHIFT + 32)) { + PMD_INIT_LOG(ERR, "vring address shouldn't be above 16TB!"); + return 0; + } + return 1; +} +static inline void io_write64_twopart(uint64_t val, uint32_t *lo, uint32_t *hi) +{ + rte_write32(val & ((1ULL << 32) - 1), lo); + rte_write32(val >> 32, hi); +} + +static void modern_read_dev_config(struct zxdh_hw *hw, + size_t offset, + void *dst, + int32_t length) +{ + int32_t i = 0; + uint8_t *p = NULL; + uint8_t old_gen = 0; + uint8_t new_gen = 0; + + do { + old_gen = rte_read8(&hw->common_cfg->config_generation); + + p = dst; + for (i = 0; i < length; i++) + *p++ = rte_read8((uint8_t *)hw->dev_cfg + offset + i); + + new_gen = rte_read8(&hw->common_cfg->config_generation); + } while (old_gen != new_gen); +} + +static void modern_write_dev_config(struct zxdh_hw *hw, + size_t offset, + const void *src, + int32_t length) +{ + int32_t i = 0; + const uint8_t *p = src; + + for (i = 0; i < length; i++) + rte_write8((*p++), (((uint8_t *)hw->dev_cfg) + offset + i)); +} + +static uint64_t modern_get_features(struct zxdh_hw *hw) +{ + uint32_t features_lo = 0; + uint32_t features_hi = 0; + + rte_write32(0, &hw->common_cfg->device_feature_select); + features_lo = rte_read32(&hw->common_cfg->device_feature); + + rte_write32(1, &hw->common_cfg->device_feature_select); + features_hi = rte_read32(&hw->common_cfg->device_feature); + + return ((uint64_t)features_hi << 32) | features_lo; +} + +static void modern_set_features(struct zxdh_hw *hw, uint64_t features) +{ + rte_write32(0, &hw->common_cfg->guest_feature_select); + rte_write32(features & ((1ULL << 32) - 1), &hw->common_cfg->guest_feature); + rte_write32(1, &hw->common_cfg->guest_feature_select); + rte_write32(features >> 32, &hw->common_cfg->guest_feature); +} + +static uint8_t modern_get_status(struct zxdh_hw *hw) +{ + return rte_read8(&hw->common_cfg->device_status); +} + +static void modern_set_status(struct zxdh_hw *hw, uint8_t status) +{ + rte_write8(status, &hw->common_cfg->device_status); +} + +static uint8_t modern_get_isr(struct zxdh_hw *hw) +{ + return rte_read8(hw->isr); +} + +static uint16_t modern_set_config_irq(struct zxdh_hw *hw, uint16_t vec) +{ + rte_write16(vec, &hw->common_cfg->msix_config); + return rte_read16(&hw->common_cfg->msix_config); +} + +static uint16_t modern_set_queue_irq(struct zxdh_hw *hw, struct virtqueue *vq, uint16_t vec) +{ + rte_write16(vq->vq_queue_index, &hw->common_cfg->queue_select); + rte_write16(vec, &hw->common_cfg->queue_msix_vector); + return rte_read16(&hw->common_cfg->queue_msix_vector); +} + +static uint16_t modern_get_queue_num(struct zxdh_hw *hw, uint16_t queue_id) +{ + rte_write16(queue_id, &hw->common_cfg->queue_select); + return rte_read16(&hw->common_cfg->queue_size); +} + +static void modern_set_queue_num(struct zxdh_hw *hw, uint16_t queue_id, uint16_t vq_size) +{ + rte_write16(queue_id, &hw->common_cfg->queue_select); + rte_write16(vq_size, &hw->common_cfg->queue_size); +} + +static int32_t modern_setup_queue(struct zxdh_hw *hw, struct virtqueue *vq) +{ + uint64_t desc_addr = 0; + uint64_t avail_addr = 0; + uint64_t used_addr = 0; + uint16_t notify_off = 0; + + if (!check_vq_phys_addr_ok(vq)) + return -1; + + desc_addr = vq->vq_ring_mem; + avail_addr = desc_addr + vq->vq_nentries * sizeof(struct vring_desc); + if (vtpci_packed_queue(vq->hw)) { + used_addr = RTE_ALIGN_CEIL((avail_addr + sizeof(struct vring_packed_desc_event)), + ZXDH_PCI_VRING_ALIGN); + } else { + used_addr = RTE_ALIGN_CEIL(avail_addr + 
offsetof(struct vring_avail, + ring[vq->vq_nentries]), ZXDH_PCI_VRING_ALIGN); + } + + rte_write16(vq->vq_queue_index, &hw->common_cfg->queue_select); + + io_write64_twopart(desc_addr, &hw->common_cfg->queue_desc_lo, + &hw->common_cfg->queue_desc_hi); + io_write64_twopart(avail_addr, &hw->common_cfg->queue_avail_lo, + &hw->common_cfg->queue_avail_hi); + io_write64_twopart(used_addr, &hw->common_cfg->queue_used_lo, + &hw->common_cfg->queue_used_hi); + + notify_off = rte_read16(&hw->common_cfg->queue_notify_off); /* default 0 */ + notify_off = 0; + vq->notify_addr = (void *)((uint8_t *)hw->notify_base + + notify_off * hw->notify_off_multiplier); + + rte_write16(1, &hw->common_cfg->queue_enable); + + return 0; +} + +static void modern_del_queue(struct zxdh_hw *hw, struct virtqueue *vq) +{ + rte_write16(vq->vq_queue_index, &hw->common_cfg->queue_select); + + io_write64_twopart(0, &hw->common_cfg->queue_desc_lo, + &hw->common_cfg->queue_desc_hi); + io_write64_twopart(0, &hw->common_cfg->queue_avail_lo, + &hw->common_cfg->queue_avail_hi); + io_write64_twopart(0, &hw->common_cfg->queue_used_lo, + &hw->common_cfg->queue_used_hi); + + rte_write16(0, &hw->common_cfg->queue_enable); +} + +static void modern_notify_queue(struct zxdh_hw *hw, struct virtqueue *vq) +{ + uint32_t notify_data = 0; + + if (!vtpci_with_feature(hw, ZXDH_F_NOTIFICATION_DATA)) { + rte_write16(vq->vq_queue_index, vq->notify_addr); + return; + } + + if (vtpci_with_feature(hw, ZXDH_F_RING_PACKED)) { + /* + * Bit[0:15]: vq queue index + * Bit[16:30]: avail index + * Bit[31]: avail wrap counter + */ + notify_data = ((uint32_t)(!!(vq->vq_packed.cached_flags & + VRING_PACKED_DESC_F_AVAIL)) << 31) | + ((uint32_t)vq->vq_avail_idx << 16) | + vq->vq_queue_index; + } else { + /* + * Bit[0:15]: vq queue index + * Bit[16:31]: avail index + */ + notify_data = ((uint32_t)vq->vq_avail_idx << 16) | vq->vq_queue_index; + } + PMD_DRV_LOG(DEBUG, "queue:%d notify_data 0x%x notify_addr 0x%p", + vq->vq_queue_index, notify_data, vq->notify_addr); + rte_write32(notify_data, vq->notify_addr); +} + +const struct zxdh_pci_ops zxdh_modern_ops = { + .read_dev_cfg = modern_read_dev_config, + .write_dev_cfg = modern_write_dev_config, + .get_status = modern_get_status, + .set_status = modern_set_status, + .get_features = modern_get_features, + .set_features = modern_set_features, + .get_isr = modern_get_isr, + .set_config_irq = modern_set_config_irq, + .set_queue_irq = modern_set_queue_irq, + .get_queue_num = modern_get_queue_num, + .set_queue_num = modern_set_queue_num, + .setup_queue = modern_setup_queue, + .del_queue = modern_del_queue, + .notify_queue = modern_notify_queue, +}; + +void zxdh_vtpci_read_dev_config(struct zxdh_hw *hw, size_t offset, void *dst, int32_t length) +{ + VTPCI_OPS(hw)->read_dev_cfg(hw, offset, dst, length); +} +void zxdh_vtpci_write_dev_config(struct zxdh_hw *hw, size_t offset, const void *src, int32_t length) +{ + VTPCI_OPS(hw)->write_dev_cfg(hw, offset, src, length); +} + +uint16_t zxdh_vtpci_get_features(struct zxdh_hw *hw) +{ + return VTPCI_OPS(hw)->get_features(hw); +} + +void zxdh_vtpci_reset(struct zxdh_hw *hw) +{ + PMD_INIT_LOG(INFO, "port %u device start reset, just wait...", hw->port_id); + uint32_t retry = 0; + + VTPCI_OPS(hw)->set_status(hw, ZXDH_CONFIG_STATUS_RESET); + /* Flush status write and wait device ready max 3 seconds. 
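Each poll below sleeps 1 ms, so 'retry' counts elapsed milliseconds for the completion log.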
*/ + while (VTPCI_OPS(hw)->get_status(hw) != ZXDH_CONFIG_STATUS_RESET) { + ++retry; + usleep(1000L); + } + PMD_INIT_LOG(INFO, "port %u device reset %u ms done", hw->port_id, retry); +} + +void zxdh_vtpci_reinit_complete(struct zxdh_hw *hw) +{ + zxdh_vtpci_set_status(hw, ZXDH_CONFIG_STATUS_DRIVER_OK); +} + +void zxdh_vtpci_set_status(struct zxdh_hw *hw, uint8_t status) +{ + if (status != ZXDH_CONFIG_STATUS_RESET) + status |= VTPCI_OPS(hw)->get_status(hw); + + VTPCI_OPS(hw)->set_status(hw, status); +} + +uint8_t zxdh_vtpci_get_status(struct zxdh_hw *hw) +{ + return VTPCI_OPS(hw)->get_status(hw); +} + +uint8_t zxdh_vtpci_isr(struct zxdh_hw *hw) +{ + return VTPCI_OPS(hw)->get_isr(hw); +} + +static void *get_cfg_addr(struct rte_pci_device *dev, struct zxdh_pci_cap *cap) +{ + uint8_t bar = cap->bar; + uint32_t length = cap->length; + uint32_t offset = cap->offset; + + if (bar >= PCI_MAX_RESOURCE) { + PMD_INIT_LOG(ERR, "invalid bar: %u", bar); + return NULL; + } + if (offset + length < offset) { + PMD_INIT_LOG(ERR, "offset(%u) + length(%u) overflows", offset, length); + return NULL; + } + if (offset + length > dev->mem_resource[bar].len) { + PMD_INIT_LOG(ERR, "invalid cap: overflows bar space: %u > %" PRIu64, + offset + length, dev->mem_resource[bar].len); + return NULL; + } + uint8_t *base = dev->mem_resource[bar].addr; + + if (base == NULL) { + PMD_INIT_LOG(ERR, "bar %u base addr is NULL", bar); + return NULL; + } + return base + offset; +} + +int32_t zxdh_read_pci_caps(struct rte_pci_device *dev, struct zxdh_hw *hw) +{ + if (dev->mem_resource[0].addr == NULL) { + PMD_INIT_LOG(ERR, "bar0 base addr is NULL"); + return -1; + } + uint8_t pos = 0; + int32_t ret = rte_pci_read_config(dev, &pos, 1, PCI_CAPABILITY_LIST); + + if (ret != 1) { + PMD_INIT_LOG(DEBUG, "failed to read pci capability list, ret %d", ret); + return -1; + } + while (pos) { + struct zxdh_pci_cap cap; + + ret = rte_pci_read_config(dev, &cap, 2, pos); + if (ret != 2) { + PMD_INIT_LOG(DEBUG, "failed to read pci cap at pos: %x ret %d", pos, ret); + break; + } + if (cap.cap_vndr == PCI_CAP_ID_MSIX) { + /** + * Transitional devices would also have this capability, + * that's why we also check if msix is enabled. + * 1st byte is cap ID; 2nd byte is the position of next cap; + * next two bytes are the flags. + */ + uint16_t flags = 0; + + ret = rte_pci_read_config(dev, &flags, sizeof(flags), pos + 2); + if (ret != sizeof(flags)) { + PMD_INIT_LOG(ERR, "failed to read pci cap at pos: %x ret %d", + pos + 2, ret); + break; + } + hw->use_msix = (flags & PCI_MSIX_ENABLE) ? 
+ ZXDH_MSIX_ENABLED : ZXDH_MSIX_DISABLED; + } + if (cap.cap_vndr != PCI_CAP_ID_VNDR) { + PMD_INIT_LOG(DEBUG, "[%2x] skipping non VNDR cap id: %02x", + pos, cap.cap_vndr); + goto next; + } + ret = rte_pci_read_config(dev, &cap, sizeof(cap), pos); + if (ret != sizeof(cap)) { + PMD_INIT_LOG(ERR, "failed to read pci cap at pos: %x ret %d", pos, ret); + break; + } + PMD_INIT_LOG(DEBUG, "[%2x] cfg type: %u, bar: %u, offset: %04x, len: %u", + pos, cap.cfg_type, cap.bar, cap.offset, cap.length); + switch (cap.cfg_type) { + case ZXDH_PCI_CAP_COMMON_CFG: + hw->common_cfg = get_cfg_addr(dev, &cap); + break; + case ZXDH_PCI_CAP_NOTIFY_CFG: { + ret = rte_pci_read_config(dev, &hw->notify_off_multiplier, + 4, pos + sizeof(cap)); + if (ret != 4) + PMD_INIT_LOG(ERR, + "failed to read notify_off_multiplier, ret %d", ret); + else + hw->notify_base = get_cfg_addr(dev, &cap); + break; + } + case ZXDH_PCI_CAP_DEVICE_CFG: + hw->dev_cfg = get_cfg_addr(dev, &cap); + break; + case ZXDH_PCI_CAP_ISR_CFG: + hw->isr = get_cfg_addr(dev, &cap); + break; + case ZXDH_PCI_CAP_PCI_CFG: { + hw->pcie_id = *(uint16_t *)&cap.padding[1]; + PMD_INIT_LOG(DEBUG, "get pcie id 0x%x", hw->pcie_id); + uint16_t pcie_id = hw->pcie_id; + + if ((pcie_id >> 11) & 0x1) /* PF */ { + PMD_INIT_LOG(DEBUG, "EP %u PF %u", + pcie_id >> 12, (pcie_id >> 8) & 0x7); + } else { /* VF */ + PMD_INIT_LOG(DEBUG, "EP %u PF %u VF %u", + pcie_id >> 12, (pcie_id >> 8) & 0x7, pcie_id & 0xff); + } + break; + } + } +next: + pos = cap.cap_next; + } + if (hw->common_cfg == NULL || hw->notify_base == NULL || + hw->dev_cfg == NULL || hw->isr == NULL) { + PMD_INIT_LOG(ERR, "no modern pci device found."); + return -1; + } + return 0; +} + +enum zxdh_msix_status zxdh_vtpci_msix_detect(struct rte_pci_device *dev) +{ + uint8_t pos = 0; + int32_t ret = rte_pci_read_config(dev, &pos, 1, PCI_CAPABILITY_LIST); + + if (ret != 1) { + PMD_INIT_LOG(ERR, "failed to read pci capability list, ret %d", ret); + return ZXDH_MSIX_NONE; + } + while (pos) { + uint8_t cap[2] = {0}; + + ret = rte_pci_read_config(dev, cap, sizeof(cap), pos); + if (ret != sizeof(cap)) { + PMD_INIT_LOG(ERR, "failed to read pci cap at pos: %x ret %d", pos, ret); + break; + } + if (cap[0] == PCI_CAP_ID_MSIX) { + uint16_t flags = 0; + + ret = rte_pci_read_config(dev, &flags, sizeof(flags), pos + sizeof(cap)); + if (ret != sizeof(flags)) { + PMD_INIT_LOG(ERR, + "failed to read pci cap at pos: %x ret %d", pos + 2, ret); + break; + } + if (flags & PCI_MSIX_ENABLE) + return ZXDH_MSIX_ENABLED; + else + return ZXDH_MSIX_DISABLED; + } + pos = cap[1]; + } + return ZXDH_MSIX_NONE; + } diff --git a/drivers/net/zxdh/zxdh_pci.h b/drivers/net/zxdh/zxdh_pci.h new file mode 100644 index 0000000000..d6f3c552ad --- /dev/null +++ b/drivers/net/zxdh/zxdh_pci.h @@ -0,0 +1,259 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#ifndef _ZXDH_PCI_H_ +#define _ZXDH_PCI_H_ + +#ifdef __cplusplus +extern "C" { +#endif + +#include <stdint.h> +#include <stdbool.h> +#include <rte_pci.h> +#include <rte_bus_pci.h> +#include <bus_pci_driver.h> +#include <ethdev_driver.h> + +#include "zxdh_ethdev.h" + +/* The bit of the ISR which indicates a device has an interrupt. */ +#define ZXDH_PCI_ISR_INTR 0x1 +/* The bit of the ISR which indicates a device configuration change. */ +#define ZXDH_PCI_ISR_CONFIG 0x2 +/* Vector value used to disable MSI for queue. */ +#define ZXDH_MSI_NO_VECTOR 0x7F + +/* Status byte for guest to report progress. 
*/ +#define ZXDH_CONFIG_STATUS_RESET 0x00 +#define ZXDH_CONFIG_STATUS_ACK 0x01 +#define ZXDH_CONFIG_STATUS_DRIVER 0x02 +#define ZXDH_CONFIG_STATUS_DRIVER_OK 0x04 +#define ZXDH_CONFIG_STATUS_FEATURES_OK 0x08 +#define ZXDH_CONFIG_STATUS_DEV_NEED_RESET 0x40 +#define ZXDH_CONFIG_STATUS_FAILED 0x80 + +/* The feature bitmap for net */ +#define ZXDH_NET_F_CSUM 0 /* Host handles pkts w/ partial csum */ +#define ZXDH_NET_F_GUEST_CSUM 1 /* Guest handles pkts w/ partial csum */ +#define ZXDH_NET_F_MTU 3 /* Initial MTU advice. */ +#define ZXDH_NET_F_MAC 5 /* Host has given MAC address. */ +#define ZXDH_NET_F_GUEST_TSO4 7 /* Guest can handle TSOv4 in. */ +#define ZXDH_NET_F_GUEST_TSO6 8 /* Guest can handle TSOv6 in. */ +#define ZXDH_NET_F_GUEST_ECN 9 /* Guest can handle TSO[6] w/ ECN in. */ +#define ZXDH_NET_F_GUEST_UFO 10 /* Guest can handle UFO in. */ +#define ZXDH_NET_F_HOST_TSO4 11 /* Host can handle TSOv4 in. */ +#define ZXDH_NET_F_HOST_TSO6 12 /* Host can handle TSOv6 in. */ +#define ZXDH_NET_F_HOST_ECN 13 /* Host can handle TSO[6] w/ ECN in. */ +#define ZXDH_NET_F_HOST_UFO 14 /* Host can handle UFO in. */ +#define ZXDH_NET_F_MRG_RXBUF 15 /* Host can merge receive buffers. */ +#define ZXDH_NET_F_STATUS 16 /* zxdh_net_config.status available */ +#define ZXDH_NET_F_CTRL_VQ 17 /* Control channel available */ +#define ZXDH_NET_F_CTRL_RX 18 /* Control channel RX mode support */ +#define ZXDH_NET_F_CTRL_VLAN 19 /* Control channel VLAN filtering */ +#define ZXDH_NET_F_CTRL_RX_EXTRA 20 /* Extra RX mode control support */ +#define ZXDH_NET_F_GUEST_ANNOUNCE 21 /* Guest can announce device on the network */ +#define ZXDH_NET_F_MQ 22 /* Device supports Receive Flow Steering */ +#define ZXDH_NET_F_CTRL_MAC_ADDR 23 /* Set MAC address */ +/* Do we get callbacks when the ring is completely used, even if we've suppressed them? */ +#define ZXDH_F_NOTIFY_ON_EMPTY 24 +#define ZXDH_F_ANY_LAYOUT 27 /* Can the device handle any descriptor layout? */ +#define VIRTIO_RING_F_INDIRECT_DESC 28 /* We support indirect buffer descriptors */ +#define ZXDH_F_VERSION_1 32 +#define ZXDH_F_IOMMU_PLATFORM 33 +#define ZXDH_F_RING_PACKED 34 +/* Inorder feature indicates that all buffers are used by the device + * in the same order in which they have been made available. + */ +#define ZXDH_F_IN_ORDER 35 +/** This feature indicates that memory accesses by the driver + * and the device are ordered in a way described by the platform. + */ +#define ZXDH_F_ORDER_PLATFORM 36 +/** + * This feature indicates that the driver passes extra data + * (besides identifying the virtqueue) in its device notifications. + */ +#define ZXDH_F_NOTIFICATION_DATA 38 +#define ZXDH_NET_F_SPEED_DUPLEX 63 /* Device set linkspeed and duplex */ + +/* The Guest publishes the used index for which it expects an interrupt + * at the end of the avail ring. Host should ignore the avail->flags field. + */ +/* The Host publishes the avail index for which it expects a kick + * at the end of the used ring. Guest should ignore the used->flags field. + */ +#define ZXDH_RING_F_EVENT_IDX 29 + +/* Maximum number of virtqueues per device. 
*/ +#define ZXDH_MAX_VIRTQUEUE_PAIRS 8 +#define ZXDH_MAX_VIRTQUEUES (ZXDH_MAX_VIRTQUEUE_PAIRS * 2 + 1) + + +#define ZXDH_PCI_CAP_COMMON_CFG 1 /* Common configuration */ +#define ZXDH_PCI_CAP_NOTIFY_CFG 2 /* Notifications */ +#define ZXDH_PCI_CAP_ISR_CFG 3 /* ISR Status */ +#define ZXDH_PCI_CAP_DEVICE_CFG 4 /* Device specific configuration */ +#define ZXDH_PCI_CAP_PCI_CFG 5 /* PCI configuration access */ + +#define VTPCI_OPS(hw) (zxdh_hw_internal[(hw)->port_id].vtpci_ops) +#define VTPCI_IO(hw) (&zxdh_hw_internal[(hw)->port_id].io) + +/* + * How many bits to shift physical queue address written to QUEUE_PFN. + * 12 is historical, and due to x86 page size. + */ +#define ZXDH_PCI_QUEUE_ADDR_SHIFT 12 + +/* The alignment to use between consumer and producer parts of vring. */ +#define ZXDH_PCI_VRING_ALIGN 4096 + +/******BAR0 SPACE********************************************************************/ +#define ZXDH_VQMREG_OFFSET 0x0000 +#define ZXDH_FWCAP_OFFSET 0x1000 +#define ZXDH_CTRLCH_OFFSET 0x2000 +#define ZXDH_MAC_OFFSET 0x24000 +#define ZXDH_SPINLOCK_OFFSET 0x4000 +#define ZXDH_FWSHRD_OFFSET 0x5000 +#define ZXDH_QUERES_SHARE_BASE (ZXDH_FWSHRD_OFFSET) +#define ZXDH_QUERES_SHARE_SIZE 512 + +enum zxdh_msix_status { + ZXDH_MSIX_NONE = 0, + ZXDH_MSIX_DISABLED = 1, + ZXDH_MSIX_ENABLED = 2 +}; + +static inline int32_t vtpci_with_feature(struct zxdh_hw *hw, uint64_t bit) +{ + return (hw->guest_features & (1ULL << bit)) != 0; +} + +static inline int32_t vtpci_packed_queue(struct zxdh_hw *hw) +{ + return vtpci_with_feature(hw, ZXDH_F_RING_PACKED); +} + +/* + * While zxdh_hw is stored in shared memory, this structure stores + * some infos that may vary in the multiple process model locally. + * For example, the vtpci_ops pointer. + */ +struct zxdh_hw_internal { + const struct zxdh_pci_ops *vtpci_ops; + struct rte_pci_ioport io; +}; + +/* Fields in ZXDH_PCI_CAP_COMMON_CFG: */ +struct zxdh_pci_common_cfg { + /* About the whole device. */ + uint32_t device_feature_select; /* read-write */ + uint32_t device_feature; /* read-only */ + uint32_t guest_feature_select; /* read-write */ + uint32_t guest_feature; /* read-write */ + uint16_t msix_config; /* read-write */ + uint16_t num_queues; /* read-only */ + uint8_t device_status; /* read-write */ + uint8_t config_generation; /* read-only */ + + /* About a specific virtqueue. */ + uint16_t queue_select; /* read-write */ + uint16_t queue_size; /* read-write, power of 2. */ + uint16_t queue_msix_vector; /* read-write */ + uint16_t queue_enable; /* read-write */ + uint16_t queue_notify_off; /* read-only */ + uint32_t queue_desc_lo; /* read-write */ + uint32_t queue_desc_hi; /* read-write */ + uint32_t queue_avail_lo; /* read-write */ + uint32_t queue_avail_hi; /* read-write */ + uint32_t queue_used_lo; /* read-write */ + uint32_t queue_used_hi; /* read-write */ +}; + +/* + * This structure is just a reference used to read + * the net device specific config space; it is just a shadow structure. + */ +struct zxdh_net_config { + /* The config defining mac address (if ZXDH_NET_F_MAC) */ + uint8_t mac[RTE_ETHER_ADDR_LEN]; + /* See ZXDH_NET_F_STATUS and ZXDH_NET_S_* above */ + uint16_t status; + uint16_t max_virtqueue_pairs; + uint16_t mtu; + /* + * speed, in units of 1Mb. All values 0 to INT_MAX are legal. + * Any other value stands for unknown. + */ + uint32_t speed; + /* 0x00 - half duplex + * 0x01 - full duplex + * Any other value stands for unknown.
+ */ + uint8_t duplex; +} __rte_packed; + +/* This is the PCI capability header: */ +struct zxdh_pci_cap { + uint8_t cap_vndr; /* Generic PCI field: PCI_CAP_ID_VNDR */ + uint8_t cap_next; /* Generic PCI field: next ptr. */ + uint8_t cap_len; /* Generic PCI field: capability length */ + uint8_t cfg_type; /* Identifies the structure. */ + uint8_t bar; /* Where to find it. */ + uint8_t padding[3]; /* Pad to full dword. */ + uint32_t offset; /* Offset within bar. */ + uint32_t length; /* Length of the structure, in bytes. */ +}; +struct zxdh_pci_notify_cap { + struct zxdh_pci_cap cap; + uint32_t notify_off_multiplier; /* Multiplier for queue_notify_off. */ +}; + +struct zxdh_pci_ops { + void (*read_dev_cfg)(struct zxdh_hw *hw, size_t offset, void *dst, int32_t len); + void (*write_dev_cfg)(struct zxdh_hw *hw, size_t offset, const void *src, int32_t len); + + uint8_t (*get_status)(struct zxdh_hw *hw); + void (*set_status)(struct zxdh_hw *hw, uint8_t status); + + uint64_t (*get_features)(struct zxdh_hw *hw); + void (*set_features)(struct zxdh_hw *hw, uint64_t features); + + uint8_t (*get_isr)(struct zxdh_hw *hw); + + uint16_t (*set_config_irq)(struct zxdh_hw *hw, uint16_t vec); + + uint16_t (*set_queue_irq)(struct zxdh_hw *hw, struct virtqueue *vq, uint16_t vec); + + uint16_t (*get_queue_num)(struct zxdh_hw *hw, uint16_t queue_id); + void (*set_queue_num)(struct zxdh_hw *hw, uint16_t queue_id, uint16_t vq_size); + + int32_t (*setup_queue)(struct zxdh_hw *hw, struct virtqueue *vq); + void (*del_queue)(struct zxdh_hw *hw, struct virtqueue *vq); + void (*notify_queue)(struct zxdh_hw *hw, struct virtqueue *vq); +}; + +extern struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; +extern const struct zxdh_pci_ops zxdh_modern_ops; + +void zxdh_vtpci_reset(struct zxdh_hw *hw); +void zxdh_vtpci_reinit_complete(struct zxdh_hw *hw); +uint8_t zxdh_vtpci_get_status(struct zxdh_hw *hw); +void zxdh_vtpci_set_status(struct zxdh_hw *hw, uint8_t status); +uint16_t zxdh_vtpci_get_features(struct zxdh_hw *hw); +void zxdh_vtpci_write_dev_config(struct zxdh_hw *hw, size_t offset, + const void *src, int32_t length); +void zxdh_vtpci_read_dev_config(struct zxdh_hw *hw, size_t offset, + void *dst, int32_t length); +uint8_t zxdh_vtpci_isr(struct zxdh_hw *hw); +enum zxdh_msix_status zxdh_vtpci_msix_detect(struct rte_pci_device *dev); + +int32_t zxdh_read_pci_caps(struct rte_pci_device *dev, struct zxdh_hw *hw); + +#ifdef __cplusplus +} +#endif + +#endif /* _ZXDH_PCI_H_ */ diff --git a/drivers/net/zxdh/zxdh_queue.c b/drivers/net/zxdh/zxdh_queue.c new file mode 100644 index 0000000000..b6dd487a9d --- /dev/null +++ b/drivers/net/zxdh/zxdh_queue.c @@ -0,0 +1,138 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2023 ZTE Corporation + */ + +#include <stdint.h> + +#include <rte_mbuf.h> + +#include "zxdh_queue.h" +#include "zxdh_logs.h" +#include "zxdh_pci.h" +#include "zxdh_common.h" + +/** + * Two types of mbuf to be cleaned: + * 1) mbuf that has been consumed by backend but not used by virtio. + * 2) mbuf that hasn't been consumed by backend.
+ */ +struct rte_mbuf *zxdh_virtqueue_detach_unused(struct virtqueue *vq) +{ + struct rte_mbuf *cookie = NULL; + int32_t idx = 0; + + if (vq == NULL) + return NULL; + + for (idx = 0; idx < vq->vq_nentries; idx++) { + cookie = vq->vq_descx[idx].cookie; + if (cookie != NULL) { + vq->vq_descx[idx].cookie = NULL; + return cookie; + } + } + + return NULL; +} + +static int32_t zxdh_release_channel(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint16_t nr_vq = hw->queue_num; + uint32_t var = 0; + uint32_t addr = 0; + uint32_t widx = 0; + uint32_t bidx = 0; + uint16_t pch = 0; + uint16_t lch = 0; + uint16_t timeout = 0; + + while ((timeout++) < ZXDH_ACQUIRE_CHANNEL_NUM_MAX) { + if (zxdh_acquire_lock(hw) != 0) { + PMD_INIT_LOG(ERR, + "Could not acquire lock to release channel, timeout %d", timeout); + continue; + } + break; + } + + if (timeout >= ZXDH_ACQUIRE_CHANNEL_NUM_MAX) { + PMD_INIT_LOG(ERR, "Acquire lock timeout"); + return -1; + } + + for (lch = 0; lch < nr_vq; lch++) { + if (hw->channel_context[lch].valid == 0) { + PMD_INIT_LOG(DEBUG, "Logic channel %d does not need to release", lch); + continue; + } + + /* get coi table offset and index */ + pch = hw->channel_context[lch].ph_chno; + widx = pch / 32; + bidx = pch % 32; + + addr = ZXDH_QUERES_SHARE_BASE + (widx * sizeof(uint32_t)); + var = zxdh_read_bar_reg(dev, ZXDH_BAR0_INDEX, addr); + var &= ~(1 << bidx); + zxdh_write_bar_reg(dev, ZXDH_BAR0_INDEX, addr, var); + + hw->channel_context[lch].valid = 0; + hw->channel_context[lch].ph_chno = 0; + } + + zxdh_release_lock(hw); + + return 0; +} + +int32_t zxdh_free_queues(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint16_t nr_vq = hw->queue_num; + struct virtqueue *vq = NULL; + int32_t queue_type = 0; + uint16_t i = 0; + + if (hw->vqs == NULL) + return 0; + + /* Clear COI table */ + if (zxdh_release_channel(dev) < 0) { + PMD_INIT_LOG(ERR, "Failed to clear coi table"); + return -1; + } + + for (i = 0; i < nr_vq; i++) { + vq = hw->vqs[i]; + if (vq == NULL) + continue; + + VTPCI_OPS(hw)->del_queue(hw, vq); + queue_type = get_queue_type(i); + if (queue_type == VTNET_RQ) { + rte_free(vq->sw_ring); + rte_memzone_free(vq->rxq.mz); + } else if (queue_type == VTNET_TQ) { + rte_memzone_free(vq->txq.mz); + rte_memzone_free(vq->txq.virtio_net_hdr_mz); + } + + rte_free(vq); + hw->vqs[i] = NULL; + PMD_INIT_LOG(DEBUG, "Release to queue %d success!", i); + } + + rte_free(hw->vqs); + hw->vqs = NULL; + + return 0; +} + +int32_t get_queue_type(uint16_t vtpci_queue_idx) +{ + if (vtpci_queue_idx % 2 == 0) + return VTNET_RQ; + else + return VTNET_TQ; +} diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h new file mode 100644 index 0000000000..c2d7bbe889 --- /dev/null +++ b/drivers/net/zxdh/zxdh_queue.h @@ -0,0 +1,85 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2023 ZTE Corporation + */ + +#ifndef _ZXDH_QUEUE_H_ +#define _ZXDH_QUEUE_H_ + +#ifdef __cplusplus +extern "C" { +#endif + +#include <stdint.h> + +#include <rte_atomic.h> +#include <rte_memory.h> +#include <rte_mempool.h> +#include <rte_net.h> +#include <ethdev_driver.h> + +#include "zxdh_pci.h" +#include "zxdh_ring.h" +#include "zxdh_rxtx.h" + + +enum { + VTNET_RQ = 0, + VTNET_TQ = 1 +}; + +struct vq_desc_extra { + void *cookie; + uint16_t ndescs; + uint16_t next; +}; + +struct virtqueue { + struct zxdh_hw *hw; /**< zxdh_hw structure pointer. 
*/ + struct { + /**< vring keeping descs and events */ + struct vring_packed ring; + bool used_wrap_counter; + uint8_t rsv; + uint16_t cached_flags; /**< cached flags for descs */ + uint16_t event_flags_shadow; + uint16_t rsv1; + } __rte_packed vq_packed; + uint16_t vq_used_cons_idx; /**< last consumed descriptor */ + uint16_t vq_nentries; /**< vring desc numbers */ + uint16_t vq_free_cnt; /**< num of desc available */ + uint16_t vq_avail_idx; /**< sync until needed */ + uint16_t vq_free_thresh; /**< free threshold */ + uint16_t rsv2; + + void *vq_ring_virt_mem; /**< linear address of vring*/ + uint32_t vq_ring_size; + + union { + struct virtnet_rx rxq; + struct virtnet_tx txq; + }; + + /** < physical address of vring, + * or virtual address for virtio_user. + **/ + rte_iova_t vq_ring_mem; + + /** + * Head of the free chain in the descriptor table. If + * there are no free descriptors, this will be set to + * VQ_RING_DESC_CHAIN_END. + **/ + uint16_t vq_desc_head_idx; + uint16_t vq_desc_tail_idx; + uint16_t vq_queue_index; /**< PCI queue index */ + uint16_t offset; /**< relative offset to obtain addr in mbuf */ + uint16_t *notify_addr; + struct rte_mbuf **sw_ring; /**< RX software ring. */ + struct vq_desc_extra vq_descx[0]; +}; + +struct rte_mbuf *zxdh_virtqueue_detach_unused(struct virtqueue *vq); +int32_t zxdh_free_queues(struct rte_eth_dev *dev); +int32_t get_queue_type(uint16_t vtpci_queue_idx); + +#endif diff --git a/drivers/net/zxdh/zxdh_ring.h b/drivers/net/zxdh/zxdh_ring.h new file mode 100644 index 0000000000..bd7c997993 --- /dev/null +++ b/drivers/net/zxdh/zxdh_ring.h @@ -0,0 +1,87 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2023 ZTE Corporation + */ + +#ifndef _ZXDH_RING_H_ +#define _ZXDH_RING_H_ + +#ifdef __cplusplus +extern "C" { +#endif + +#include <stdint.h> +#include <rte_common.h> + +/* This marks a buffer as continuing via the next field. */ +#define VRING_DESC_F_NEXT 1 + +/* This marks a buffer as write-only (otherwise read-only). */ +#define VRING_DESC_F_WRITE 2 + +/* This means the buffer contains a list of buffer descriptors. */ +#define VRING_DESC_F_INDIRECT 4 + +/* This flag means the descriptor was made available by the driver */ +#define VRING_PACKED_DESC_F_AVAIL (1 << (7)) +/* This flag means the descriptor was used by the device */ +#define VRING_PACKED_DESC_F_USED (1 << (15)) + +/* Frequently used combinations */ +#define VRING_PACKED_DESC_F_AVAIL_USED \ + (VRING_PACKED_DESC_F_AVAIL | VRING_PACKED_DESC_F_USED) + +/* The Host uses this in used->flags to advise the Guest: don't kick me + * when you add a buffer. It's unreliable, so it's simply an + * optimization. Guest will still kick if it's out of buffers. + **/ +#define VRING_USED_F_NO_NOTIFY 1 + +/** The Guest uses this in avail->flags to advise the Host: don't + * interrupt me when you consume a buffer. It's unreliable, so it's + * simply an optimization. + **/ +#define VRING_AVAIL_F_NO_INTERRUPT 1 + +#define RING_EVENT_FLAGS_ENABLE 0x0 +#define RING_EVENT_FLAGS_DISABLE 0x1 +#define RING_EVENT_FLAGS_DESC 0x2 + +/** VirtIO ring descriptors: 16 bytes. + * These can chain together via "next". + **/ +struct vring_desc { + uint64_t addr; /* Address (guest-physical). */ + uint32_t len; /* Length. */ + uint16_t flags; /* The flags as indicated above. */ + uint16_t next; /* We chain unused descriptors via this. */ +}; + +struct vring_avail { + uint16_t flags; + uint16_t idx; + uint16_t ring[0]; +}; + +/** For support of packed virtqueues in Virtio 1.1 the format of descriptors + * looks like this. 
+ **/ +struct vring_packed_desc { + uint64_t addr; + uint32_t len; + uint16_t id; + uint16_t flags; +}; + +struct vring_packed_desc_event { + uint16_t desc_event_off_wrap; + uint16_t desc_event_flags; +}; + +struct vring_packed { + uint32_t num; + struct vring_packed_desc *desc; + struct vring_packed_desc_event *driver; + struct vring_packed_desc_event *device; +}; + +#endif diff --git a/drivers/net/zxdh/zxdh_rxtx.h b/drivers/net/zxdh/zxdh_rxtx.h new file mode 100644 index 0000000000..7aedf568fe --- /dev/null +++ b/drivers/net/zxdh/zxdh_rxtx.h @@ -0,0 +1,48 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2023 ZTE Corporation + */ + +#ifndef _ZXDH_RXTX_H_ +#define _ZXDH_RXTX_H_ + +#ifdef __cplusplus +extern "C" { +#endif + +#include <stdint.h> + +struct virtnet_stats { + uint64_t packets; + uint64_t bytes; + uint64_t errors; + uint64_t multicast; + uint64_t broadcast; + uint64_t truncated_err; + uint64_t size_bins[8]; /* Size bins in array as RFC 2819, undersized [0], 64 [1], etc */ +}; + +struct virtnet_rx { + struct virtqueue *vq; + + /* dummy mbuf, for wraparound when processing RX ring. */ + struct rte_mbuf fake_mbuf; + + uint64_t mbuf_initializer; /* value to init mbufs. */ + struct rte_mempool *mpool; /* mempool for mbuf allocation */ + uint16_t queue_id; /* DPDK queue index. */ + uint16_t port_id; /* Device port identifier. */ + struct virtnet_stats stats; + const struct rte_memzone *mz; /* mem zone to populate RX ring. */ +}; + +struct virtnet_tx { + struct virtqueue *vq; + const struct rte_memzone *virtio_net_hdr_mz; /* memzone to populate hdr. */ + rte_iova_t virtio_net_hdr_mem; /* hdr for each xmit packet */ + uint16_t queue_id; /* DPDK queue index. */ + uint16_t port_id; /* Device port identifier. */ + struct virtnet_stats stats; + const struct rte_memzone *mz; /* mem zone to populate TX ring. */ +}; + +#endif -- 2.43.0 [-- Attachment #1.1.2: Type: text/html , Size: 344566 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
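A note on two pieces of MMIO arithmetic in the patch above that are easy to misread in diff form: io_write64_twopart() splits every 64-bit ring address across a lo/hi register pair, and modern_notify_queue() packs the queue index, avail index and wrap counter into a single 32-bit notification word. The following is a minimal standalone sketch of both computations, assuming only the bit layouts documented in the patch comments; the helper names (pack_notify_data_packed, pack_notify_data_split, split64) are illustrative and not part of the driver.

    #include <stdint.h>
    #include <stdio.h>

    /* Packed ring, per the modern_notify_queue() comment:
     * Bit[0:15] queue index, Bit[16:30] avail index, Bit[31] wrap counter. */
    static uint32_t pack_notify_data_packed(uint16_t queue, uint16_t avail_idx, int wrap)
    {
        return ((uint32_t)(!!wrap) << 31) |
               ((uint32_t)(avail_idx & 0x7fff) << 16) |
               queue;
    }

    /* Split ring: Bit[0:15] queue index, Bit[16:31] avail index. */
    static uint32_t pack_notify_data_split(uint16_t queue, uint16_t avail_idx)
    {
        return ((uint32_t)avail_idx << 16) | queue;
    }

    /* The lo/hi split that io_write64_twopart() applies to ring addresses. */
    static void split64(uint64_t val, uint32_t *lo, uint32_t *hi)
    {
        *lo = (uint32_t)(val & 0xffffffffULL);
        *hi = (uint32_t)(val >> 32);
    }

    int main(void)
    {
        uint32_t lo, hi;

        printf("packed notify: 0x%08x\n", pack_notify_data_packed(3, 0x1234, 1));
        printf("split notify:  0x%08x\n", pack_notify_data_split(3, 0x1234));

        split64(0x0000123456789abcULL, &lo, &hi);
        printf("ring addr: lo=0x%08x hi=0x%08x\n", lo, hi);
        return 0;
    }

The same layout explains the guard in check_vq_phys_addr_ok(): with ZXDH_PCI_QUEUE_ADDR_SHIFT = 12, any ring whose last byte needs more than 12 + 32 = 44 address bits (16 TiB) cannot be represented, hence the "above 16TB" error.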
* Re: [v4] net/zxdh: Provided zxdh basic init 2024-09-10 12:00 [PATCH v4] net/zxdh: Provided zxdh basic init Junlong Wang @ 2024-09-24 1:35 ` Junlong Wang 2024-09-25 22:39 ` [PATCH v4] " Ferruh Yigit ` (5 subsequent siblings) 6 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-09-24 1:35 UTC (permalink / raw) To: ferruh.yigit; +Cc: dev, wang.yong19 [-- Attachment #1.1.1: Type: text/plain, Size: 143 bytes --] Hi, Ferruh Could you please provide feedback on the patch I submitted? Any suggestions for improvement would be appreciated. Thanks [-- Attachment #1.1.2: Type: text/html , Size: 289 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
* Re: [PATCH v4] net/zxdh: Provided zxdh basic init 2024-09-10 12:00 [PATCH v4] net/zxdh: Provided zxdh basic init Junlong Wang 2024-09-24 1:35 ` [v4] " Junlong Wang @ 2024-09-25 22:39 ` Ferruh Yigit 2024-09-26 6:49 ` [v4] " Junlong Wang ` (4 subsequent siblings) 6 siblings, 0 replies; 225+ messages in thread From: Ferruh Yigit @ 2024-09-25 22:39 UTC (permalink / raw) To: Junlong Wang; +Cc: dev, wang.yong19 On 9/10/2024 1:00 PM, Junlong Wang wrote: > provided zxdh initialization of zxdh PMD driver. > include msg channel, np init and etc. > Hi Junlong, It is very hard to review the driver as a single patch, it helps to split driver into multiple patches, please check the suggestion on it: https://patches.dpdk.org/project/dpdk/patch/20240916162856.11566-1-stephen@networkplumber.org/ Also there are errors reported by scripts, please fix them: ./devtools/checkpatches.sh -n1 ./devtools/check-meson.py > Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> > --- > V4: Resolve compilation issues > V3: Resolve compilation issues > V2: Resolve compilation issues and modify doc(zxdh.ini zdh.rst) > V1: Provide zxdh basic init and open source NPSDK lib > --- > doc/guides/nics/features/zxdh.ini | 10 + > doc/guides/nics/index.rst | 1 + > doc/guides/nics/zxdh.rst | 34 + > drivers/net/meson.build | 1 + > drivers/net/zxdh/meson.build | 23 + > drivers/net/zxdh/zxdh_common.c | 59 ++ > drivers/net/zxdh/zxdh_common.h | 32 + > drivers/net/zxdh/zxdh_ethdev.c | 1328 +++++++++++++++++++++++++++++ > drivers/net/zxdh/zxdh_ethdev.h | 202 +++++ > drivers/net/zxdh/zxdh_logs.h | 38 + > drivers/net/zxdh/zxdh_msg.c | 1177 +++++++++++++++++++++++++ > drivers/net/zxdh/zxdh_msg.h | 408 +++++++++ > drivers/net/zxdh/zxdh_npsdk.c | 158 ++++ > drivers/net/zxdh/zxdh_npsdk.h | 216 +++++ > drivers/net/zxdh/zxdh_pci.c | 462 ++++++++++ > drivers/net/zxdh/zxdh_pci.h | 259 ++++++ > drivers/net/zxdh/zxdh_queue.c | 138 +++ > drivers/net/zxdh/zxdh_queue.h | 85 ++ > drivers/net/zxdh/zxdh_ring.h | 87 ++ > drivers/net/zxdh/zxdh_rxtx.h | 48 ++ > 20 files changed, 4766 insertions(+) > create mode 100644 doc/guides/nics/features/zxdh.ini > create mode 100644 doc/guides/nics/zxdh.rst > create mode 100644 drivers/net/zxdh/meson.build > create mode 100644 drivers/net/zxdh/zxdh_common.c > create mode 100644 drivers/net/zxdh/zxdh_common.h > create mode 100644 drivers/net/zxdh/zxdh_ethdev.c > create mode 100644 drivers/net/zxdh/zxdh_ethdev.h > create mode 100644 drivers/net/zxdh/zxdh_logs.h > create mode 100644 drivers/net/zxdh/zxdh_msg.c > create mode 100644 drivers/net/zxdh/zxdh_msg.h > create mode 100644 drivers/net/zxdh/zxdh_npsdk.c > create mode 100644 drivers/net/zxdh/zxdh_npsdk.h > create mode 100644 drivers/net/zxdh/zxdh_pci.c > create mode 100644 drivers/net/zxdh/zxdh_pci.h > create mode 100644 drivers/net/zxdh/zxdh_queue.c > create mode 100644 drivers/net/zxdh/zxdh_queue.h > create mode 100644 drivers/net/zxdh/zxdh_ring.h > create mode 100644 drivers/net/zxdh/zxdh_rxtx.h > > diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/ > features/zxdh.ini > new file mode 100644 > index 0000000000..083c75511b > --- /dev/null > +++ b/doc/guides/nics/features/zxdh.ini > @@ -0,0 +1,10 @@ > +; > +; Supported features of the 'zxdh' network poll mode driver. > +; > +; Refer to default.ini for the full list of available PMD features. 
> +; > +[Features] > +Linux = Y > +x86-64 = Y > +ARMv8 = Y > + > diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst > index c14bc7988a..8e371ac4a5 100644 > --- a/doc/guides/nics/index.rst > +++ b/doc/guides/nics/index.rst > @@ -69,3 +69,4 @@ Network Interface Controller Drivers > vhost > virtio > vmxnet3 > + zxdh > diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst > new file mode 100644 > index 0000000000..e878058b7b > --- /dev/null > +++ b/doc/guides/nics/zxdh.rst > @@ -0,0 +1,34 @@ > +.. SPDX-License-Identifier: BSD-3-Clause > + Copyright(c) 2023 ZTE Corporation. > + > +ZXDH Poll Mode Driver > +====================== > + > +The ZXDH PMD (**librte_net_zxdh**) provides poll mode driver support > +for 25/100 Gbps ZXDH NX Series Ethernet Controller based on > +the ZTE Ethernet Controller E310/E312. > + > Can you please provide a link for the product? There is a link in the prerequisetes section below, if that link is for product please move it here. > + > +Features > +-------- > + > +Features of the zxdh PMD are: > + > +- Multi arch support: x86_64, ARMv8. > + > +Prerequisites > +------------- > + > +- Learning about ZXDH NX Series Ethernet Controller NICs using > + `<https://enterprise.zte.com.cn/sup-detail.html?id=271&suptype=1>`_. > + > +Driver compilation and testing > +------------------------------ > + > +Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>` > +for details. > + > +Limitations or Known issues > +--------------------------- > +X86-32, Power8, ARMv7 and BSD are not supported yet. > + > diff --git a/drivers/net/meson.build b/drivers/net/meson.build > index fb6d34b782..1a3db8a04d 100644 > --- a/drivers/net/meson.build > +++ b/drivers/net/meson.build > @@ -62,6 +62,7 @@ drivers = [ > 'vhost', > 'virtio', > 'vmxnet3', > + 'zxdh', > ] > std_deps = ['ethdev', 'kvargs'] # 'ethdev' also pulls in mbuf, net, eal etc > std_deps += ['bus_pci'] # very many PMDs depend on PCI, so make std > diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build > new file mode 100644 > index 0000000000..593e3c5933 > --- /dev/null > +++ b/drivers/net/zxdh/meson.build > @@ -0,0 +1,23 @@ > +# SPDX-License-Identifier: BSD-3-Clause > +# Copyright(c) 2024 ZTE Corporation > + > +if not is_linux > + build = false > + reason = 'only supported on Linux' > + subdir_done() > +endif > + > +if arch_subdir != 'x86' and arch_subdir ! > = 'arm' or not dpdk_conf.get('RTE_ARCH_64') > Why not check 'RTE_ARCH_X86_64' and 'RTE_ARCH_ARM64'? <...> > +/* dev_ops for zxdh, bare necessities for basic operation */ > +static const struct eth_dev_ops zxdh_eth_dev_ops = { > + .dev_configure = NULL, > + .dev_start = NULL, > + .dev_stop = NULL, > + .dev_close = NULL, > + > + .rx_queue_setup = NULL, > + .rx_queue_intr_enable = NULL, > + .rx_queue_intr_disable = NULL, > + > + .tx_queue_setup = NULL, > +}; > No ops is implemented, so when you run dpdk application when your device exist, what happens, does application crash? <...> > +static int32_t zxdh_eth_dev_uninit(struct rte_eth_dev *eth_dev __rte_unused) > +{ > + if (rte_eal_process_type() == RTE_PROC_SECONDARY) > + return 0; > + /** todo later > + * zxdh_dev_close(eth_dev); > Please either remove or implements todo comments in upstream driver. > + */ > + return 0; > +} > + > +int32_t zxdh_eth_pci_remove(struct rte_pci_device *pci_dev) > +{ > + int32_t ret = rte_eth_dev_pci_generic_remove(pci_dev, zxdh_eth_dev_uninit); > + > + if (ret == -ENODEV) { /* Port has already been released by close. 
*/ > + ret = 0; > + } > + return ret; > +} > + > +static const struct rte_pci_id pci_id_zxdh_map[] = { > + {RTE_PCI_DEVICE(PCI_VENDOR_ID_ZTE, ZXDH_E310_PF_DEVICEID)}, > + {RTE_PCI_DEVICE(PCI_VENDOR_ID_ZTE, ZXDH_E310_VF_DEVICEID)}, > + {RTE_PCI_DEVICE(PCI_VENDOR_ID_ZTE, ZXDH_E312_PF_DEVICEID)}, > + {RTE_PCI_DEVICE(PCI_VENDOR_ID_ZTE, ZXDH_E312_VF_DEVICEID)}, > + {.vendor_id = 0, /* sentinel */ }, > +}; > +static struct rte_pci_driver zxdh_pmd = { > + .driver = {.name = "net_zxdh", }, > 'driver.name' is already set by 'RTE_PMD_REGISTER_PCI' macro, no need to set above. > + .id_table = pci_id_zxdh_map, > + .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC, > + .probe = zxdh_eth_pci_probe, > + .remove = zxdh_eth_pci_remove, > +}; > + > +RTE_PMD_REGISTER_PCI(net_zxdh, zxdh_pmd); > +RTE_PMD_REGISTER_PCI_TABLE(net_zxdh, pci_id_zxdh_map); > +RTE_PMD_REGISTER_KMOD_DEP(net_zxdh, "* vfio-pci"); > +RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_init, init, NOTICE); > +RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_driver, driver, NOTICE); > +RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_rx, rx, DEBUG); > +RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_tx, tx, DEBUG); > + > +RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_msg, msg, DEBUG); > Default 'DEBUG' log level is too verbose, mostly default is NOTICE. > +RTE_PMD_REGISTER_PARAM_STRING(net_zxdh, > + "q_depth=<int>"); > Please document device arguments in the driver documentation. <...> ^ permalink raw reply [flat|nested] 225+ messages in thread
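On the "does application crash?" question above: the ethdev layer checks many callbacks for NULL and returns -ENOTSUP, but not every entry point does, so an all-NULL ops table leaves the behavior dependent on which API the application calls first. One defensive pattern while the real callbacks are still being written is to install stubs that fail explicitly. A minimal sketch, with hypothetical names (zxdh_dev_configure_stub and zxdh_stub_ops are not in the patch):

    #include <errno.h>
    #include <ethdev_driver.h>

    /* Placeholder: refuse configuration instead of leaving the pointer NULL. */
    static int zxdh_dev_configure_stub(struct rte_eth_dev *dev __rte_unused)
    {
        return -ENOTSUP;
    }

    static const struct eth_dev_ops zxdh_stub_ops = {
        .dev_configure = zxdh_dev_configure_stub,
    };

    /* In zxdh_eth_dev_init():  eth_dev->dev_ops = &zxdh_stub_ops; */

With such stubs in place, every ethdev call on the port fails deterministically with -ENOTSUP until the real implementations land in later revisions.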
* Re: [v4] net/zxdh: Provided zxdh basic init 2024-09-10 12:00 [PATCH v4] net/zxdh: Provided zxdh basic init Junlong Wang ` (2 preceding siblings ...) 2024-09-25 22:39 ` [PATCH v4] " Ferruh Yigit @ 2024-09-26 6:49 ` Junlong Wang 2024-10-07 21:43 ` [PATCH v4] " Stephen Hemminger ` (3 subsequent siblings) 6 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-09-26 6:49 UTC (permalink / raw) To: ferruh.yigit; +Cc: dev, wang.yong19 [-- Attachment #1.1.1: Type: text/plain, Size: 450 bytes --] >> +if not is_linux >> + build = false >> + reason = 'only supported on Linux' >> + subdir_done() >> +endif >> + >> +if arch_subdir != 'x86' and arch_subdir ! >> = 'arm' or not dpdk_conf.get('RTE_ARCH_64') >> >Why not check 'RTE_ARCH_X86_64' and 'RTE_ARCH_ARM64'? We will fix it and use 'RTE_ARCH_X86_64' and 'RTE_ARCH_ARM64' for the check; the other comments will also be addressed, and the driver will be split into multiple patches. Thanks! [-- Attachment #1.1.2: Type: text/html , Size: 987 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
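For reference, the macros suggested in the review are also visible to C code through the generated rte_config.h, so the meson-level gate has a direct compile-time analogue. A small sketch (the error message is illustrative):

    #include <rte_config.h>

    #if !defined(RTE_ARCH_X86_64) && !defined(RTE_ARCH_ARM64)
    #error "net/zxdh is only expected to build on x86_64 or aarch64"
    #endif

Checking the 64-bit arch flags directly, rather than arch_subdir plus RTE_ARCH_64, expresses the supported set in a single condition.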
* Re: [PATCH v4] net/zxdh: Provided zxdh basic init 2024-09-10 12:00 [PATCH v4] net/zxdh: Provided zxdh basic init Junlong Wang ` (2 preceding siblings ...) 2024-09-26 6:49 ` [v4] " Junlong Wang @ 2024-10-07 21:43 ` Stephen Hemminger 2024-10-15 5:43 ` [PATCH v5 0/9] net/zxdh: introduce net zxdh driver Junlong Wang ` (2 subsequent siblings) 6 siblings, 0 replies; 225+ messages in thread From: Stephen Hemminger @ 2024-10-07 21:43 UTC (permalink / raw) To: Junlong Wang; +Cc: ferruh.yigit, dev, wang.yong19 On Tue, 10 Sep 2024 20:00:20 +0800 Junlong Wang <wang.junlong1@zte.com.cn> wrote: > diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst > new file mode 100644 > index 0000000000..e878058b7b > --- /dev/null > +++ b/doc/guides/nics/zxdh.rst > @@ -0,0 +1,34 @@ > +.. SPDX-License-Identifier: BSD-3-Clause > + Copyright(c) 2023 ZTE Corporation. > + > +ZXDH Poll Mode Driver > +====================== > + > +The ZXDH PMD (**librte_net_zxdh**) provides poll mode driver support > +for 25/100 Gbps ZXDH NX Series Ethernet Controller based on > +the ZTE Ethernet Controller E310/E312. > + > + > +Features > +-------- > + > +Features of the zxdh PMD are: > + > +- Multi arch support: x86_64, ARMv8. > + > +Prerequisites > +------------- > + > +- Learning about ZXDH NX Series Ethernet Controller NICs using > + `<https://enterprise.zte.com.cn/sup-detail.html?id=271&suptype=1>`_. > + > +Driver compilation and testing > +------------------------------ > + > +Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>` > +for details. > + > +Limitations or Known issues > +--------------------------- > +X86-32, Power8, ARMv7 and BSD are not supported yet. > + > Note: git gives warning when merging this. /home/shemminger/DPDK/main/.git/worktrees/zxdh/rebase-apply/patch:78: new blank line at EOF. + warning: 1 line adds whitespace errors. ^ permalink raw reply [flat|nested] 225+ messages in thread
* [PATCH v5 0/9] net/zxdh: introduce net zxdh driver 2024-09-10 12:00 [PATCH v4] net/zxdh: Provided zxdh basic init Junlong Wang ` (3 preceding siblings ...) 2024-10-07 21:43 ` [PATCH v4] " Stephen Hemminger @ 2024-10-15 5:43 ` Junlong Wang 2024-10-15 5:43 ` [PATCH v5 1/9] net/zxdh: add zxdh ethdev pmd driver Junlong Wang 2024-12-19 22:38 ` [PATCH v4] net/zxdh: Provided zxdh basic init Stephen Hemminger 2024-12-20 1:47 ` Junlong Wang 6 siblings, 1 reply; 225+ messages in thread From: Junlong Wang @ 2024-10-15 5:43 UTC (permalink / raw) To: dev; +Cc: ferruh.yigit, wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 2472 bytes --] V5: - split driver into multiple patches,part of the zxdh driver, later provide dev start/stop,queue_setup,mac,vlan,rss ,etc. - fix errors reported by scripts. - move the product link in zxdh.rst. - fix meson check use RTE_ARCH_X86_64/RTE_ARCH_ARM64. - modify other comments according to Ferruh's comments. V4: - Resolve compilation issues Junlong Wang (9): net/zxdh: add zxdh ethdev pmd driver net/zxdh: add logging implementation net/zxdh: add zxdh device pci init implementation net/zxdh: add msg chan and msg hwlock init net/zxdh: add msg chan enable implementation net/zxdh: add zxdh get device backend infos net/zxdh: add configure zxdh intr implementation net/zxdh: add zxdh dev infos get ops net/zxdh: add zxdh dev configure ops doc/guides/nics/features/zxdh.ini | 9 + doc/guides/nics/index.rst | 1 + doc/guides/nics/zxdh.rst | 30 + drivers/net/meson.build | 1 + drivers/net/zxdh/meson.build | 22 + drivers/net/zxdh/zxdh_common.c | 368 +++++++++++ drivers/net/zxdh/zxdh_common.h | 42 ++ drivers/net/zxdh/zxdh_ethdev.c | 1021 +++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev.h | 105 +++ drivers/net/zxdh/zxdh_logs.h | 35 + drivers/net/zxdh/zxdh_msg.c | 992 ++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_msg.h | 228 +++++++ drivers/net/zxdh/zxdh_pci.c | 449 +++++++++++++ drivers/net/zxdh/zxdh_pci.h | 191 ++++++ drivers/net/zxdh/zxdh_queue.c | 131 ++++ drivers/net/zxdh/zxdh_queue.h | 280 ++++++++ drivers/net/zxdh/zxdh_rxtx.h | 55 ++ 17 files changed, 3960 insertions(+) create mode 100644 doc/guides/nics/features/zxdh.ini create mode 100644 doc/guides/nics/zxdh.rst create mode 100644 drivers/net/zxdh/meson.build create mode 100644 drivers/net/zxdh/zxdh_common.c create mode 100644 drivers/net/zxdh/zxdh_common.h create mode 100644 drivers/net/zxdh/zxdh_ethdev.c create mode 100644 drivers/net/zxdh/zxdh_ethdev.h create mode 100644 drivers/net/zxdh/zxdh_logs.h create mode 100644 drivers/net/zxdh/zxdh_msg.c create mode 100644 drivers/net/zxdh/zxdh_msg.h create mode 100644 drivers/net/zxdh/zxdh_pci.c create mode 100644 drivers/net/zxdh/zxdh_pci.h create mode 100644 drivers/net/zxdh/zxdh_queue.c create mode 100644 drivers/net/zxdh/zxdh_queue.h create mode 100644 drivers/net/zxdh/zxdh_rxtx.h -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 4566 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
* [PATCH v5 1/9] net/zxdh: add zxdh ethdev pmd driver 2024-10-15 5:43 ` [PATCH v5 0/9] net/zxdh: introduce net zxdh driver Junlong Wang @ 2024-10-15 5:43 ` Junlong Wang 2024-10-15 5:44 ` [PATCH v5 2/9] net/zxdh: add logging implementation Junlong Wang 2024-10-16 8:16 ` [PATCH v6 0/9] net/zxdh: introduce net zxdh driver Junlong Wang 0 siblings, 2 replies; 225+ messages in thread From: Junlong Wang @ 2024-10-15 5:43 UTC (permalink / raw) To: dev; +Cc: ferruh.yigit, wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 7509 bytes --] Add basic zxdh ethdev init and register PCI probe functions Update doc files Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- doc/guides/nics/features/zxdh.ini | 9 +++ doc/guides/nics/index.rst | 1 + doc/guides/nics/zxdh.rst | 30 ++++++++++ drivers/net/meson.build | 1 + drivers/net/zxdh/meson.build | 18 ++++++ drivers/net/zxdh/zxdh_ethdev.c | 92 +++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev.h | 44 +++++++++++++++ 7 files changed, 195 insertions(+) create mode 100644 doc/guides/nics/features/zxdh.ini create mode 100644 doc/guides/nics/zxdh.rst create mode 100644 drivers/net/zxdh/meson.build create mode 100644 drivers/net/zxdh/zxdh_ethdev.c create mode 100644 drivers/net/zxdh/zxdh_ethdev.h diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini new file mode 100644 index 0000000000..05c8091ed7 --- /dev/null +++ b/doc/guides/nics/features/zxdh.ini @@ -0,0 +1,9 @@ +; +; Supported features of the 'zxdh' network poll mode driver. +; +; Refer to default.ini for the full list of available PMD features. +; +[Features] +Linux = Y +x86-64 = Y +ARMv8 = Y diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst index c14bc7988a..8e371ac4a5 100644 --- a/doc/guides/nics/index.rst +++ b/doc/guides/nics/index.rst @@ -69,3 +69,4 @@ Network Interface Controller Drivers vhost virtio vmxnet3 + zxdh diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst new file mode 100644 index 0000000000..4cf531e967 --- /dev/null +++ b/doc/guides/nics/zxdh.rst @@ -0,0 +1,30 @@ +.. SPDX-License-Identifier: BSD-3-Clause + Copyright(c) 2023 ZTE Corporation. + +ZXDH Poll Mode Driver +====================== + +The ZXDH PMD (**librte_net_zxdh**) provides poll mode driver support +for 25/100 Gbps ZXDH NX Series Ethernet Controller based on +the ZTE Ethernet Controller E310/E312. + +- Learning about ZXDH NX Series Ethernet Controller NICs using + `<https://enterprise.zte.com.cn/sup-detail.html?id=271&suptype=1>`_. + +Features +-------- + +Features of the zxdh PMD are: + +- Multi arch support: x86_64, ARMv8. + + +Driver compilation and testing +------------------------------ + +Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>` +for details. + +Limitations or Known issues +--------------------------- +X86-32, Power8, ARMv7 and BSD are not supported yet. 
diff --git a/drivers/net/meson.build b/drivers/net/meson.build index fb6d34b782..0a12914534 100644 --- a/drivers/net/meson.build +++ b/drivers/net/meson.build @@ -62,6 +62,7 @@ drivers = [ 'vhost', 'virtio', 'vmxnet3', + 'zxdh', ] std_deps = ['ethdev', 'kvargs'] # 'ethdev' also pulls in mbuf, net, eal etc std_deps += ['bus_pci'] # very many PMDs depend on PCI, so make std diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build new file mode 100644 index 0000000000..58e39c1f96 --- /dev/null +++ b/drivers/net/zxdh/meson.build @@ -0,0 +1,18 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2024 ZTE Corporation + +if not is_linux + build = false + reason = 'only supported on Linux' + subdir_done() +endif + +if not dpdk_conf.has('RTE_ARCH_X86_64') or not dpdk_conf.get('RTE_ARCH_64') + build = false + reason = 'only supported on x86_64 and aarch64' + subdir_done() +endif + +sources = files( + 'zxdh_ethdev.c', + ) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c new file mode 100644 index 0000000000..75d8b28cc3 --- /dev/null +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -0,0 +1,92 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#include <ethdev_pci.h> +#include <bus_pci_driver.h> +#include <rte_ethdev.h> + +#include "zxdh_ethdev.h" + +static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) +{ + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); + struct zxdh_hw *hw = eth_dev->data->dev_private; + int ret = 0; + + eth_dev->dev_ops = NULL; + + /* Allocate memory for storing MAC addresses */ + eth_dev->data->mac_addrs = rte_zmalloc("zxdh_mac", + ZXDH_MAX_MAC_ADDRS * RTE_ETHER_ADDR_LEN, 0); + if (eth_dev->data->mac_addrs == NULL) + return -ENOMEM; + + memset(hw, 0, sizeof(*hw)); + hw->bar_addr[0] = (uint64_t)pci_dev->mem_resource[0].addr; + if (hw->bar_addr[0] == 0) + return -EIO; + + hw->device_id = pci_dev->id.device_id; + hw->port_id = eth_dev->data->port_id; + hw->eth_dev = eth_dev; + hw->speed = RTE_ETH_SPEED_NUM_UNKNOWN; + hw->duplex = RTE_ETH_LINK_FULL_DUPLEX; + hw->is_pf = 0; + + if (pci_dev->id.device_id == ZXDH_E310_PF_DEVICEID || + pci_dev->id.device_id == ZXDH_E312_PF_DEVICEID) { + hw->is_pf = 1; + } + + return ret; +} + +static int zxdh_eth_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, + struct rte_pci_device *pci_dev) +{ + return rte_eth_dev_pci_generic_probe(pci_dev, + sizeof(struct zxdh_hw), + zxdh_eth_dev_init); +} + +static int zxdh_dev_close(struct rte_eth_dev *dev __rte_unused) +{ + int ret = 0; + + return ret; +} + +static int zxdh_eth_dev_uninit(struct rte_eth_dev *eth_dev) +{ + int ret = 0; + + ret = zxdh_dev_close(eth_dev); + + return ret; +} + +static int zxdh_eth_pci_remove(struct rte_pci_device *pci_dev) +{ + int ret = rte_eth_dev_pci_generic_remove(pci_dev, zxdh_eth_dev_uninit); + + return ret; +} + +static const struct rte_pci_id pci_id_zxdh_map[] = { + {RTE_PCI_DEVICE(PCI_VENDOR_ID_ZTE, ZXDH_E310_PF_DEVICEID)}, + {RTE_PCI_DEVICE(PCI_VENDOR_ID_ZTE, ZXDH_E310_VF_DEVICEID)}, + {RTE_PCI_DEVICE(PCI_VENDOR_ID_ZTE, ZXDH_E312_PF_DEVICEID)}, + {RTE_PCI_DEVICE(PCI_VENDOR_ID_ZTE, ZXDH_E312_VF_DEVICEID)}, + {.vendor_id = 0, /* sentinel */ }, +}; +static struct rte_pci_driver zxdh_pmd = { + .id_table = pci_id_zxdh_map, + .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC, + .probe = zxdh_eth_pci_probe, + .remove = zxdh_eth_pci_remove, +}; + +RTE_PMD_REGISTER_PCI(net_zxdh, zxdh_pmd); +RTE_PMD_REGISTER_PCI_TABLE(net_zxdh, pci_id_zxdh_map); 
+RTE_PMD_REGISTER_KMOD_DEP(net_zxdh, "* vfio-pci"); diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h new file mode 100644 index 0000000000..04023bfe84 --- /dev/null +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -0,0 +1,44 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#ifndef _ZXDH_ETHDEV_H_ +#define _ZXDH_ETHDEV_H_ + +#ifdef __cplusplus +extern "C" { +#endif + +#include "ethdev_driver.h" + +/* ZXDH PCI vendor/device ID. */ +#define PCI_VENDOR_ID_ZTE 0x1cf2 + +#define ZXDH_E310_PF_DEVICEID 0x8061 +#define ZXDH_E310_VF_DEVICEID 0x8062 +#define ZXDH_E312_PF_DEVICEID 0x8049 +#define ZXDH_E312_VF_DEVICEID 0x8060 + +#define ZXDH_MAX_UC_MAC_ADDRS 32 +#define ZXDH_MAX_MC_MAC_ADDRS 32 +#define ZXDH_MAX_MAC_ADDRS (ZXDH_MAX_UC_MAC_ADDRS + ZXDH_MAX_MC_MAC_ADDRS) + +#define ZXDH_NUM_BARS 2 + +struct zxdh_hw { + struct rte_eth_dev *eth_dev; + uint64_t bar_addr[ZXDH_NUM_BARS]; + + uint32_t speed; + uint16_t device_id; + uint16_t port_id; + + uint8_t duplex; + uint8_t is_pf; +}; + +#ifdef __cplusplus +} +#endif + +#endif /* _ZXDH_ETHDEV_H_ */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 14136 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
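For anyone who wants to sanity-check this first patch on hardware: once an E310/E312 function is bound to vfio-pci, the chain registered above (zxdh_pmd -> rte_eth_dev_pci_generic_probe -> zxdh_eth_dev_init) should already surface the device as an ethdev port, even though no ops are wired up yet. A minimal, hypothetical smoke test, not part of the series:

    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_ethdev.h>

    int main(int argc, char **argv)
    {
        /* EAL init scans the PCI bus and runs the PMD probe callbacks. */
        if (rte_eal_init(argc, argv) < 0)
            return 1;

        printf("%u ethdev port(s) probed\n", rte_eth_dev_count_avail());

        rte_eal_cleanup();
        return 0;
    }

A non-zero port count here only confirms that the PCI ID table and zxdh_eth_dev_init() work; any datapath call will still fail until the later patches add queue setup and the dev ops.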
* [PATCH v5 2/9] net/zxdh: add logging implementation 2024-10-15 5:43 ` [PATCH v5 1/9] net/zxdh: add zxdh ethdev pmd driver Junlong Wang @ 2024-10-15 5:44 ` Junlong Wang 2024-10-15 5:44 ` [PATCH v5 3/9] net/zxdh: add zxdh device pci init implementation Junlong Wang ` (6 more replies) 2024-10-16 8:16 ` [PATCH v6 0/9] net/zxdh: introduce net zxdh driver Junlong Wang 1 sibling, 7 replies; 225+ messages in thread From: Junlong Wang @ 2024-10-15 5:44 UTC (permalink / raw) To: dev; +Cc: ferruh.yigit, wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 3287 bytes --] Adds zxdh logging implementation. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/zxdh_ethdev.c | 15 +++++++++++++-- drivers/net/zxdh/zxdh_logs.h | 35 ++++++++++++++++++++++++++++++++++ 2 files changed, 48 insertions(+), 2 deletions(-) create mode 100644 drivers/net/zxdh/zxdh_logs.h diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 75d8b28cc3..7220770c01 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -7,6 +7,7 @@ #include <rte_ethdev.h> #include "zxdh_ethdev.h" +#include "zxdh_logs.h" static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) { @@ -19,13 +20,18 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) /* Allocate memory for storing MAC addresses */ eth_dev->data->mac_addrs = rte_zmalloc("zxdh_mac", ZXDH_MAX_MAC_ADDRS * RTE_ETHER_ADDR_LEN, 0); - if (eth_dev->data->mac_addrs == NULL) + if (eth_dev->data->mac_addrs == NULL) { + PMD_INIT_LOG(ERR, "Failed to allocate %d bytes store MAC addresses", + ZXDH_MAX_MAC_ADDRS * RTE_ETHER_ADDR_LEN); return -ENOMEM; + } memset(hw, 0, sizeof(*hw)); hw->bar_addr[0] = (uint64_t)pci_dev->mem_resource[0].addr; - if (hw->bar_addr[0] == 0) + if (hw->bar_addr[0] == 0) { + PMD_INIT_LOG(ERR, "Bad mem resource."); return -EIO; + } hw->device_id = pci_dev->id.device_id; hw->port_id = eth_dev->data->port_id; @@ -90,3 +96,8 @@ static struct rte_pci_driver zxdh_pmd = { RTE_PMD_REGISTER_PCI(net_zxdh, zxdh_pmd); RTE_PMD_REGISTER_PCI_TABLE(net_zxdh, pci_id_zxdh_map); RTE_PMD_REGISTER_KMOD_DEP(net_zxdh, "* vfio-pci"); +RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_init, init, NOTICE); +RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_driver, driver, NOTICE); +RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_rx, rx, NOTICE); +RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_tx, tx, NOTICE); +RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_msg, msg, NOTICE); diff --git a/drivers/net/zxdh/zxdh_logs.h b/drivers/net/zxdh/zxdh_logs.h new file mode 100644 index 0000000000..e118d26379 --- /dev/null +++ b/drivers/net/zxdh/zxdh_logs.h @@ -0,0 +1,35 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#ifndef _ZXDH_LOGS_H_ +#define _ZXDH_LOGS_H_ + +#include <rte_log.h> + +extern int zxdh_logtype_init; +#define PMD_INIT_LOG(level, fmt, args...) \ + rte_log(RTE_LOG_ ## level, zxdh_logtype_init, \ + "offload_zxdh %s(): " fmt "\n", __func__, ## args) + +extern int zxdh_logtype_driver; +#define PMD_DRV_LOG(level, fmt, args...) \ + rte_log(RTE_LOG_ ## level, zxdh_logtype_driver, \ + "offload_zxdh %s(): " fmt "\n", __func__, ## args) + +extern int zxdh_logtype_rx; +#define PMD_RX_LOG(level, fmt, args...) \ + rte_log(RTE_LOG_ ## level, zxdh_logtype_rx, \ + "offload_zxdh %s(): " fmt "\n", __func__, ## args) + +extern int zxdh_logtype_tx; +#define PMD_TX_LOG(level, fmt, args...) 
\ + rte_log(RTE_LOG_ ## level, zxdh_logtype_tx, \ + "offload_zxdh %s(): " fmt "\n", __func__, ## args) + +extern int zxdh_logtype_msg; +#define PMD_MSG_LOG(level, fmt, args...) \ + rte_log(RTE_LOG_ ## level, zxdh_logtype_msg, \ + "offload_zxdh %s(): " fmt "\n", __func__, ## args) + +#endif /* _ZXDH_LOGS_H_ */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 6031 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
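The logging patch follows the usual rte_log pattern: each functional area registers its own logtype with a default level (NOTICE here, which also addresses the earlier review comment that DEBUG is too verbose), and the per-area macros prepend the function name. The self-contained sketch below mirrors that mechanism with illustrative names (demo_logtype, DEMO_LOG); the driver itself uses RTE_LOG_REGISTER_SUFFIX, which derives the final logtype names from the build system, so verbosity can then be raised at run time with an EAL option of the form --log-level=<logtype>:debug.

    #include <rte_eal.h>
    #include <rte_log.h>

    /* Register a logtype named "demo.test", defaulting to NOTICE. */
    RTE_LOG_REGISTER(demo_logtype, demo.test, NOTICE);

    #define DEMO_LOG(level, fmt, args...) \
        rte_log(RTE_LOG_ ## level, demo_logtype, \
            "demo %s(): " fmt "\n", __func__, ## args)

    int main(int argc, char **argv)
    {
        if (rte_eal_init(argc, argv) < 0)
            return 1;

        DEMO_LOG(NOTICE, "printed at the default level");
        DEMO_LOG(DEBUG, "hidden unless run with --log-level=demo.test:debug");

        rte_eal_cleanup();
        return 0;
    }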
* [PATCH v5 3/9] net/zxdh: add zxdh device pci init implementation 2024-10-15 5:44 ` [PATCH v5 2/9] net/zxdh: add logging implementation Junlong Wang @ 2024-10-15 5:44 ` Junlong Wang 2024-10-15 5:44 ` [PATCH v5 4/9] net/zxdh: add msg chan and msg hwlock init Junlong Wang ` (5 subsequent siblings) 6 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-10-15 5:44 UTC (permalink / raw) To: dev; +Cc: ferruh.yigit, wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 23053 bytes --] Add device pci init implementation, to obtain PCI capability and read configuration, etc Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/meson.build | 5 +- drivers/net/zxdh/zxdh_ethdev.c | 43 +++++ drivers/net/zxdh/zxdh_ethdev.h | 20 ++- drivers/net/zxdh/zxdh_pci.c | 290 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_pci.h | 151 +++++++++++++++++ drivers/net/zxdh/zxdh_queue.h | 109 +++++++++++++ drivers/net/zxdh/zxdh_rxtx.h | 55 +++++++ 7 files changed, 669 insertions(+), 4 deletions(-) create mode 100644 drivers/net/zxdh/zxdh_pci.c create mode 100644 drivers/net/zxdh/zxdh_pci.h create mode 100644 drivers/net/zxdh/zxdh_queue.h create mode 100644 drivers/net/zxdh/zxdh_rxtx.h diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build index 58e39c1f96..080c6c7725 100644 --- a/drivers/net/zxdh/meson.build +++ b/drivers/net/zxdh/meson.build @@ -14,5 +14,6 @@ if not dpdk_conf.has('RTE_ARCH_X86_64') or not dpdk_conf.get('RTE_ARCH_64') endif sources = files( - 'zxdh_ethdev.c', - ) + 'zxdh_ethdev.c', + 'zxdh_pci.c', + ) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 7220770c01..bb219c189f 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -8,6 +8,40 @@ #include "zxdh_ethdev.h" #include "zxdh_logs.h" +#include "zxdh_pci.h" + +struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; + +static int32_t zxdh_init_device(struct rte_eth_dev *eth_dev) +{ + struct zxdh_hw *hw = eth_dev->data->dev_private; + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); + int ret = 0; + + ret = zxdh_read_pci_caps(pci_dev, hw); + if (ret) { + PMD_INIT_LOG(ERR, "port 0x%x pci caps read failed .", hw->port_id); + goto err; + } + + zxdh_hw_internal[hw->port_id].vtpci_ops = &zxdh_dev_pci_ops; + zxdh_vtpci_reset(hw); + zxdh_get_pci_dev_config(hw); + + rte_ether_addr_copy((struct rte_ether_addr *)hw->mac_addr, ð_dev->data->mac_addrs[0]); + + /* If host does not support both status and MSI-X then disable LSC */ + if (vtpci_with_feature(hw, ZXDH_NET_F_STATUS) && (hw->use_msix != ZXDH_MSIX_NONE)) + eth_dev->data->dev_flags |= RTE_ETH_DEV_INTR_LSC; + else + eth_dev->data->dev_flags &= ~RTE_ETH_DEV_INTR_LSC; + + return 0; + +err: + PMD_INIT_LOG(ERR, "port %d init device failed", eth_dev->data->port_id); + return ret; +} static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) { @@ -45,6 +79,15 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) hw->is_pf = 1; } + ret = zxdh_init_device(eth_dev); + if (ret < 0) + goto err_zxdh_init; + + return ret; + +err_zxdh_init: + rte_free(eth_dev->data->mac_addrs); + eth_dev->data->mac_addrs = NULL; return ret; } diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index 04023bfe84..18d9916713 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -9,6 +9,7 @@ extern "C" { #endif +#include <rte_ether.h> #include "ethdev_driver.h" /* ZXDH PCI vendor/device ID. 
*/ @@ -23,16 +24,31 @@ extern "C" { #define ZXDH_MAX_MC_MAC_ADDRS 32 #define ZXDH_MAX_MAC_ADDRS (ZXDH_MAX_UC_MAC_ADDRS + ZXDH_MAX_MC_MAC_ADDRS) +#define ZXDH_RX_QUEUES_MAX 128U +#define ZXDH_TX_QUEUES_MAX 128U + #define ZXDH_NUM_BARS 2 struct zxdh_hw { struct rte_eth_dev *eth_dev; - uint64_t bar_addr[ZXDH_NUM_BARS]; + struct zxdh_pci_common_cfg *common_cfg; + struct zxdh_net_config *dev_cfg; - uint32_t speed; + uint64_t bar_addr[ZXDH_NUM_BARS]; + uint64_t host_features; + uint64_t guest_features; + uint32_t max_queue_pairs; + uint32_t speed; + uint32_t notify_off_multiplier; + uint16_t *notify_base; + uint16_t pcie_id; uint16_t device_id; uint16_t port_id; + uint8_t *isr; + uint8_t weak_barriers; + uint8_t use_msix; + uint8_t mac_addr[RTE_ETHER_ADDR_LEN]; uint8_t duplex; uint8_t is_pf; }; diff --git a/drivers/net/zxdh/zxdh_pci.c b/drivers/net/zxdh/zxdh_pci.c new file mode 100644 index 0000000000..73ec640b84 --- /dev/null +++ b/drivers/net/zxdh/zxdh_pci.c @@ -0,0 +1,290 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2023 ZTE Corporation + */ + +#include <stdint.h> +#include <unistd.h> + +#ifdef RTE_EXEC_ENV_LINUX + #include <dirent.h> + #include <fcntl.h> +#endif + +#include <rte_io.h> +#include <rte_bus.h> +#include <rte_pci.h> +#include <rte_common.h> + +#include "zxdh_ethdev.h" +#include "zxdh_pci.h" +#include "zxdh_logs.h" +#include "zxdh_queue.h" + +#define ZXDH_PMD_DEFAULT_GUEST_FEATURES \ + (1ULL << ZXDH_NET_F_MRG_RXBUF | \ + 1ULL << ZXDH_NET_F_STATUS | \ + 1ULL << ZXDH_NET_F_MQ | \ + 1ULL << ZXDH_F_ANY_LAYOUT | \ + 1ULL << ZXDH_F_VERSION_1 | \ + 1ULL << ZXDH_F_RING_PACKED | \ + 1ULL << ZXDH_F_IN_ORDER | \ + 1ULL << ZXDH_F_NOTIFICATION_DATA | \ + 1ULL << ZXDH_NET_F_MAC) + +static void zxdh_read_dev_config(struct zxdh_hw *hw, + size_t offset, + void *dst, + int32_t length) +{ + int32_t i = 0; + uint8_t *p = NULL; + uint8_t old_gen = 0; + uint8_t new_gen = 0; + + do { + old_gen = rte_read8(&hw->common_cfg->config_generation); + + p = dst; + for (i = 0; i < length; i++) + *p++ = rte_read8((uint8_t *)hw->dev_cfg + offset + i); + + new_gen = rte_read8(&hw->common_cfg->config_generation); + } while (old_gen != new_gen); +} + +static void zxdh_write_dev_config(struct zxdh_hw *hw, + size_t offset, + const void *src, + int32_t length) +{ + int32_t i = 0; + const uint8_t *p = src; + + for (i = 0; i < length; i++) + rte_write8((*p++), (((uint8_t *)hw->dev_cfg) + offset + i)); +} + +static uint8_t zxdh_get_status(struct zxdh_hw *hw) +{ + return rte_read8(&hw->common_cfg->device_status); +} + +static void zxdh_set_status(struct zxdh_hw *hw, uint8_t status) +{ + rte_write8(status, &hw->common_cfg->device_status); +} + +static uint64_t zxdh_get_features(struct zxdh_hw *hw) +{ + uint32_t features_lo = 0; + uint32_t features_hi = 0; + + rte_write32(0, &hw->common_cfg->device_feature_select); + features_lo = rte_read32(&hw->common_cfg->device_feature); + + rte_write32(1, &hw->common_cfg->device_feature_select); + features_hi = rte_read32(&hw->common_cfg->device_feature); + + return ((uint64_t)features_hi << 32) | features_lo; +} + +static void zxdh_set_features(struct zxdh_hw *hw, uint64_t features) +{ + rte_write32(0, &hw->common_cfg->guest_feature_select); + rte_write32(features & ((1ULL << 32) - 1), &hw->common_cfg->guest_feature); + rte_write32(1, &hw->common_cfg->guest_feature_select); + rte_write32(features >> 32, &hw->common_cfg->guest_feature); +} + +const struct zxdh_pci_ops zxdh_dev_pci_ops = { + .read_dev_cfg = zxdh_read_dev_config, + .write_dev_cfg = zxdh_write_dev_config, + 
.get_status = zxdh_get_status, + .set_status = zxdh_set_status, + .get_features = zxdh_get_features, + .set_features = zxdh_set_features, +}; + +uint64_t zxdh_vtpci_get_features(struct zxdh_hw *hw) +{ + return VTPCI_OPS(hw)->get_features(hw); +} + +void zxdh_vtpci_reset(struct zxdh_hw *hw) +{ + PMD_INIT_LOG(INFO, "port %u device reset started, waiting...", hw->port_id); + uint32_t retry = 0; + + VTPCI_OPS(hw)->set_status(hw, ZXDH_CONFIG_STATUS_RESET); + /* Flush status write and wait device ready max 3 seconds. */ + while (VTPCI_OPS(hw)->get_status(hw) != ZXDH_CONFIG_STATUS_RESET && retry < 3000) { + ++retry; + usleep(1000L); + } + PMD_INIT_LOG(INFO, "port %u device reset %u ms done", hw->port_id, retry); +} + +static void *get_cfg_addr(struct rte_pci_device *dev, struct zxdh_pci_cap *cap) +{ + uint8_t bar = cap->bar; + uint32_t length = cap->length; + uint32_t offset = cap->offset; + + if (bar >= PCI_MAX_RESOURCE) { + PMD_INIT_LOG(ERR, "invalid bar: %u", bar); + return NULL; + } + if (offset + length < offset) { + PMD_INIT_LOG(ERR, "offset(%u) + length(%u) overflows", offset, length); + return NULL; + } + if (offset + length > dev->mem_resource[bar].len) { + PMD_INIT_LOG(ERR, "invalid cap: overflows bar space: %u > %" PRIu64, + offset + length, dev->mem_resource[bar].len); + return NULL; + } + uint8_t *base = dev->mem_resource[bar].addr; + + if (base == NULL) { + PMD_INIT_LOG(ERR, "bar %u base addr is NULL", bar); + return NULL; + } + return base + offset; +} + +int32_t zxdh_read_pci_caps(struct rte_pci_device *dev, struct zxdh_hw *hw) +{ + uint8_t pos = 0; + int32_t ret = 0; + + if (dev->mem_resource[0].addr == NULL) { + PMD_INIT_LOG(ERR, "bar0 base addr is NULL"); + return -1; + } + + ret = rte_pci_read_config(dev, &pos, 1, PCI_CAPABILITY_LIST); + + if (ret != 1) { + PMD_INIT_LOG(DEBUG, "failed to read pci capability list, ret %d", ret); + return -1; + } + while (pos) { + struct zxdh_pci_cap cap; + + ret = rte_pci_read_config(dev, &cap, 2, pos); + if (ret != 2) { + PMD_INIT_LOG(DEBUG, "failed to read pci cap at pos: %x ret %d", pos, ret); + break; + } + if (cap.cap_vndr == PCI_CAP_ID_MSIX) { + /** + * Transitional devices would also have this capability, + * that's why we also check if msix is enabled. + * 1st byte is cap ID; 2nd byte is the position of next cap; + * next two bytes are the flags. + */ + uint16_t flags = 0; + + ret = rte_pci_read_config(dev, &flags, sizeof(flags), pos + 2); + if (ret != sizeof(flags)) { + PMD_INIT_LOG(ERR, "failed to read pci cap at pos: %x ret %d", + pos + 2, ret); + break; + } + hw->use_msix = (flags & PCI_MSIX_ENABLE) ?
+ ZXDH_MSIX_ENABLED : ZXDH_MSIX_DISABLED; + } + if (cap.cap_vndr != PCI_CAP_ID_VNDR) { + PMD_INIT_LOG(DEBUG, "[%2x] skipping non VNDR cap id: %02x", + pos, cap.cap_vndr); + goto next; + } + ret = rte_pci_read_config(dev, &cap, sizeof(cap), pos); + if (ret != sizeof(cap)) { + PMD_INIT_LOG(ERR, "failed to read pci cap at pos: %x ret %d", pos, ret); + break; + } + PMD_INIT_LOG(DEBUG, "[%2x] cfg type: %u, bar: %u, offset: %04x, len: %u", + pos, cap.cfg_type, cap.bar, cap.offset, cap.length); + switch (cap.cfg_type) { + case ZXDH_PCI_CAP_COMMON_CFG: + hw->common_cfg = get_cfg_addr(dev, &cap); + break; + case ZXDH_PCI_CAP_NOTIFY_CFG: { + ret = rte_pci_read_config(dev, &hw->notify_off_multiplier, + 4, pos + sizeof(cap)); + if (ret != 4) + PMD_INIT_LOG(ERR, + "failed to read notify_off_multiplier, ret %d", ret); + else + hw->notify_base = get_cfg_addr(dev, &cap); + break; + } + case ZXDH_PCI_CAP_DEVICE_CFG: + hw->dev_cfg = get_cfg_addr(dev, &cap); + break; + case ZXDH_PCI_CAP_ISR_CFG: + hw->isr = get_cfg_addr(dev, &cap); + break; + case ZXDH_PCI_CAP_PCI_CFG: { + hw->pcie_id = *(uint16_t *)&cap.padding[1]; + PMD_INIT_LOG(DEBUG, "get pcie id 0x%x", hw->pcie_id); + uint16_t pcie_id = hw->pcie_id; + + if ((pcie_id >> 11) & 0x1) /* PF */ { + PMD_INIT_LOG(DEBUG, "EP %u PF %u", + pcie_id >> 12, (pcie_id >> 8) & 0x7); + } else { /* VF */ + PMD_INIT_LOG(DEBUG, "EP %u PF %u VF %u", + pcie_id >> 12, (pcie_id >> 8) & 0x7, pcie_id & 0xff); + } + break; + } + } +next: + pos = cap.cap_next; + } + if (hw->common_cfg == NULL || hw->notify_base == NULL || + hw->dev_cfg == NULL || hw->isr == NULL) { + PMD_INIT_LOG(ERR, "no zxdh pci device found."); + return -1; + } + return 0; +} + +void zxdh_vtpci_read_dev_config(struct zxdh_hw *hw, size_t offset, void *dst, int32_t length) +{ + VTPCI_OPS(hw)->read_dev_cfg(hw, offset, dst, length); +} + +int32_t zxdh_get_pci_dev_config(struct zxdh_hw *hw) +{ + uint64_t guest_features = 0; + uint64_t nego_features = 0; + uint32_t max_queue_pairs = 0; + + hw->host_features = zxdh_vtpci_get_features(hw); + + guest_features = (uint64_t)ZXDH_PMD_DEFAULT_GUEST_FEATURES; + nego_features = guest_features & hw->host_features; + + hw->guest_features = nego_features; + + if (hw->guest_features & (1ULL << ZXDH_NET_F_MAC)) { + zxdh_vtpci_read_dev_config(hw, offsetof(struct zxdh_net_config, mac), + &hw->mac_addr, RTE_ETHER_ADDR_LEN); + } else { + rte_eth_random_addr(&hw->mac_addr[0]); + } + + zxdh_vtpci_read_dev_config(hw, offsetof(struct zxdh_net_config, max_virtqueue_pairs), + &max_queue_pairs, sizeof(max_queue_pairs)); + + if (max_queue_pairs == 0) + hw->max_queue_pairs = ZXDH_RX_QUEUES_MAX; + else + hw->max_queue_pairs = RTE_MIN(ZXDH_RX_QUEUES_MAX, max_queue_pairs); + PMD_INIT_LOG(DEBUG, "set max queue pairs %d", hw->max_queue_pairs); + + return 0; +} diff --git a/drivers/net/zxdh/zxdh_pci.h b/drivers/net/zxdh/zxdh_pci.h new file mode 100644 index 0000000000..deda73a65a --- /dev/null +++ b/drivers/net/zxdh/zxdh_pci.h @@ -0,0 +1,151 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#ifndef _ZXDH_PCI_H_ +#define _ZXDH_PCI_H_ + +#ifdef __cplusplus +extern "C" { +#endif + +#include <stdint.h> +#include <stdbool.h> + +#include <bus_pci_driver.h> + +#include "zxdh_ethdev.h" + +enum zxdh_msix_status { + ZXDH_MSIX_NONE = 0, + ZXDH_MSIX_DISABLED = 1, + ZXDH_MSIX_ENABLED = 2 +}; + +#define PCI_CAPABILITY_LIST 0x34 +#define PCI_CAP_ID_VNDR 0x09 +#define PCI_CAP_ID_MSIX 0x11 + +#define PCI_MSIX_ENABLE 0x8000 + +#define ZXDH_NET_F_MAC 5 /* Host has given MAC 
address. */ +#define ZXDH_NET_F_MRG_RXBUF 15 /* Host can merge receive buffers. */ +#define ZXDH_NET_F_STATUS 16 /* zxdh_net_config.status available */ +#define ZXDH_NET_F_MQ 22 /* Device supports Receive Flow Steering */ +#define ZXDH_F_ANY_LAYOUT 27 /* Can the device handle any descriptor layout */ +#define ZXDH_F_VERSION_1 32 +#define ZXDH_F_RING_PACKED 34 +#define ZXDH_F_IN_ORDER 35 +#define ZXDH_F_NOTIFICATION_DATA 38 + +#define ZXDH_PCI_CAP_COMMON_CFG 1 /* Common configuration */ +#define ZXDH_PCI_CAP_NOTIFY_CFG 2 /* Notifications */ +#define ZXDH_PCI_CAP_ISR_CFG 3 /* ISR Status */ +#define ZXDH_PCI_CAP_DEVICE_CFG 4 /* Device specific configuration */ +#define ZXDH_PCI_CAP_PCI_CFG 5 /* PCI configuration access */ + +/* Status byte for guest to report progress. */ +#define ZXDH_CONFIG_STATUS_RESET 0x00 +#define ZXDH_CONFIG_STATUS_ACK 0x01 +#define ZXDH_CONFIG_STATUS_DRIVER 0x02 +#define ZXDH_CONFIG_STATUS_DRIVER_OK 0x04 +#define ZXDH_CONFIG_STATUS_FEATURES_OK 0x08 +#define ZXDH_CONFIG_STATUS_DEV_NEED_RESET 0x40 +#define ZXDH_CONFIG_STATUS_FAILED 0x80 + +struct zxdh_net_config { + /* The config defining mac address (if ZXDH_NET_F_MAC) */ + uint8_t mac[RTE_ETHER_ADDR_LEN]; + /* See ZXDH_NET_F_STATUS and ZXDH_NET_S_* above */ + uint16_t status; + uint16_t max_virtqueue_pairs; + uint16_t mtu; + /* + * speed, in units of 1Mb. All values 0 to INT_MAX are legal. + * Any other value stands for unknown. + */ + uint32_t speed; + /* 0x00 - half duplex + * 0x01 - full duplex + * Any other value stands for unknown. + */ + uint8_t duplex; +} __rte_packed; + +/* This is the PCI capability header: */ +struct zxdh_pci_cap { + uint8_t cap_vndr; /* Generic PCI field: PCI_CAP_ID_VNDR */ + uint8_t cap_next; /* Generic PCI field: next ptr. */ + uint8_t cap_len; /* Generic PCI field: capability length */ + uint8_t cfg_type; /* Identifies the structure. */ + uint8_t bar; /* Where to find it. */ + uint8_t padding[3]; /* Pad to full dword. */ + uint32_t offset; /* Offset within bar. */ + uint32_t length; /* Length of the structure, in bytes. */ +}; + +/* Fields in ZXDH_PCI_CAP_COMMON_CFG: */ +struct zxdh_pci_common_cfg { + /* About the whole device. */ + uint32_t device_feature_select; /* read-write */ + uint32_t device_feature; /* read-only */ + uint32_t guest_feature_select; /* read-write */ + uint32_t guest_feature; /* read-write */ + uint16_t msix_config; /* read-write */ + uint16_t num_queues; /* read-only */ + uint8_t device_status; /* read-write */ + uint8_t config_generation; /* read-only */ + + /* About a specific virtqueue. */ + uint16_t queue_select; /* read-write */ + uint16_t queue_size; /* read-write, power of 2. 
*/ + uint16_t queue_msix_vector; /* read-write */ + uint16_t queue_enable; /* read-write */ + uint16_t queue_notify_off; /* read-only */ + uint32_t queue_desc_lo; /* read-write */ + uint32_t queue_desc_hi; /* read-write */ + uint32_t queue_avail_lo; /* read-write */ + uint32_t queue_avail_hi; /* read-write */ + uint32_t queue_used_lo; /* read-write */ + uint32_t queue_used_hi; /* read-write */ +}; + +static inline int32_t vtpci_with_feature(struct zxdh_hw *hw, uint64_t bit) +{ + return (hw->guest_features & (1ULL << bit)) != 0; +} + +struct zxdh_pci_ops { + void (*read_dev_cfg)(struct zxdh_hw *hw, size_t offset, void *dst, int32_t len); + void (*write_dev_cfg)(struct zxdh_hw *hw, size_t offset, const void *src, int32_t len); + + uint8_t (*get_status)(struct zxdh_hw *hw); + void (*set_status)(struct zxdh_hw *hw, uint8_t status); + + uint64_t (*get_features)(struct zxdh_hw *hw); + void (*set_features)(struct zxdh_hw *hw, uint64_t features); +}; + +struct zxdh_hw_internal { + const struct zxdh_pci_ops *vtpci_ops; +}; + +#define VTPCI_OPS(hw) (zxdh_hw_internal[(hw)->port_id].vtpci_ops) + +extern struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; +extern const struct zxdh_pci_ops zxdh_dev_pci_ops; + +void zxdh_vtpci_reset(struct zxdh_hw *hw); +void zxdh_vtpci_read_dev_config(struct zxdh_hw *hw, size_t offset, + void *dst, int32_t length); + +int32_t zxdh_read_pci_caps(struct rte_pci_device *dev, struct zxdh_hw *hw); +int32_t zxdh_get_pci_dev_config(struct zxdh_hw *hw); + +uint64_t zxdh_vtpci_get_features(struct zxdh_hw *hw); + +#ifdef __cplusplus +} +#endif + +#endif /* _ZXDH_PCI_H_ */ diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h new file mode 100644 index 0000000000..e511843205 --- /dev/null +++ b/drivers/net/zxdh/zxdh_queue.h @@ -0,0 +1,109 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#ifndef _ZXDH_QUEUE_H_ +#define _ZXDH_QUEUE_H_ + +#ifdef __cplusplus +extern "C" { +#endif + +#include <stdint.h> + +#include <rte_common.h> + +#include "zxdh_ethdev.h" +#include "zxdh_rxtx.h" + +/** ring descriptors: 16 bytes. + * These can chain together via "next". + **/ +struct vring_desc { + uint64_t addr; /* Address (guest-physical). */ + uint32_t len; /* Length. */ + uint16_t flags; /* The flags as indicated above. */ + uint16_t next; /* We chain unused descriptors via this. */ +}; + +struct vring_avail { + uint16_t flags; + uint16_t idx; + uint16_t ring[0]; +}; + +struct vring_packed_desc { + uint64_t addr; + uint32_t len; + uint16_t id; + uint16_t flags; +}; + +struct vring_packed_desc_event { + uint16_t desc_event_off_wrap; + uint16_t desc_event_flags; +}; + +struct vring_packed { + uint32_t num; + struct vring_packed_desc *desc; + struct vring_packed_desc_event *driver; + struct vring_packed_desc_event *device; +}; + +struct vq_desc_extra { + void *cookie; + uint16_t ndescs; + uint16_t next; +}; + +struct virtqueue { + struct zxdh_hw *hw; /**< zxdh_hw structure pointer.
*/ + struct { + /**< vring keeping descs and events */ + struct vring_packed ring; + uint8_t used_wrap_counter; + uint8_t rsv; + uint16_t cached_flags; /**< cached flags for descs */ + uint16_t event_flags_shadow; + uint16_t rsv1; + } __rte_packed vq_packed; + uint16_t vq_used_cons_idx; /**< last consumed descriptor */ + uint16_t vq_nentries; /**< vring desc numbers */ + uint16_t vq_free_cnt; /**< num of desc available */ + uint16_t vq_avail_idx; /**< sync until needed */ + uint16_t vq_free_thresh; /**< free threshold */ + uint16_t rsv2; + + void *vq_ring_virt_mem; /**< linear address of vring*/ + uint32_t vq_ring_size; + + union { + struct virtnet_rx rxq; + struct virtnet_tx txq; + }; + + /** < physical address of vring, + * or virtual address for virtio_user. + **/ + rte_iova_t vq_ring_mem; + + /** + * Head of the free chain in the descriptor table. If + * there are no free descriptors, this will be set to + * VQ_RING_DESC_CHAIN_END. + **/ + uint16_t vq_desc_head_idx; + uint16_t vq_desc_tail_idx; + uint16_t vq_queue_index; /**< PCI queue index */ + uint16_t offset; /**< relative offset to obtain addr in mbuf */ + uint16_t *notify_addr; + struct rte_mbuf **sw_ring; /**< RX software ring. */ + struct vq_desc_extra vq_descx[0]; +}; + +#ifdef __cplusplus +} +#endif + +#endif /* _ZXDH_QUEUE_H_ */ diff --git a/drivers/net/zxdh/zxdh_rxtx.h b/drivers/net/zxdh/zxdh_rxtx.h new file mode 100644 index 0000000000..6476bc15e2 --- /dev/null +++ b/drivers/net/zxdh/zxdh_rxtx.h @@ -0,0 +1,55 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2023 ZTE Corporation + */ + +#ifndef _ZXDH_RXTX_H_ +#define _ZXDH_RXTX_H_ + +#ifdef __cplusplus +extern "C" { +#endif + +#include <stdint.h> + +#include <rte_common.h> +#include <rte_mbuf_core.h> + +struct virtnet_stats { + uint64_t packets; + uint64_t bytes; + uint64_t errors; + uint64_t multicast; + uint64_t broadcast; + uint64_t truncated_err; + uint64_t size_bins[8]; /* Size bins in array as RFC 2819, undersized [0], 64 [1], etc */ +}; + +struct virtnet_rx { + struct virtqueue *vq; + + /* dummy mbuf, for wraparound when processing RX ring. */ + struct rte_mbuf fake_mbuf; + + uint64_t mbuf_initializer; /* value to init mbufs. */ + struct rte_mempool *mpool; /* mempool for mbuf allocation */ + uint16_t queue_id; /* DPDK queue index. */ + uint16_t port_id; /* Device port identifier. */ + struct virtnet_stats stats; + const struct rte_memzone *mz; /* mem zone to populate RX ring. */ +}; + +struct virtnet_tx { + struct virtqueue *vq; + const struct rte_memzone *virtio_net_hdr_mz; /* memzone to populate hdr. */ + rte_iova_t virtio_net_hdr_mem; /* hdr for each xmit packet */ + uint16_t queue_id; /* DPDK queue index. */ + uint16_t port_id; /* Device port identifier. */ + struct virtnet_stats stats; + const struct rte_memzone *mz; /* mem zone to populate TX ring. */ +}; + +#ifdef __cplusplus +} +#endif + +#endif /* _ZXDH_RXTX_H_ */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 54173 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
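The size_bins[] comment in struct virtnet_stats above refers to the RFC 2819 (etherStatsPkts) bucket scheme: bin 0 counts undersized frames, bin 1 frames of exactly 64 bytes, and so on up to oversized frames in bin 7. A minimal sketch of how a receive path could bucket packet lengths on that layout follows; the helper name and the exact upper bounds are illustrative assumptions, not part of the patch:

    #include <stdint.h>

    /* struct virtnet_stats as defined in zxdh_rxtx.h above. */
    static inline void
    virtnet_stats_size_bin_add(struct virtnet_stats *stats, uint32_t pkt_len)
    {
            if (pkt_len < 64)
                    stats->size_bins[0]++;  /* undersized */
            else if (pkt_len == 64)
                    stats->size_bins[1]++;
            else if (pkt_len <= 127)
                    stats->size_bins[2]++;
            else if (pkt_len <= 255)
                    stats->size_bins[3]++;
            else if (pkt_len <= 511)
                    stats->size_bins[4]++;
            else if (pkt_len <= 1023)
                    stats->size_bins[5]++;
            else if (pkt_len <= 1518)
                    stats->size_bins[6]++;
            else
                    stats->size_bins[7]++;  /* oversized */
    }

The upstream virtio PMD keeps the same eight-bucket split, so a short comparison chain (or a clz-based variant) is all the per-packet cost this statistic adds.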
* [PATCH v5 4/9] net/zxdh: add msg chan and msg hwlock init 2024-10-15 5:44 ` [PATCH v5 2/9] net/zxdh: add logging implementation Junlong Wang 2024-10-15 5:44 ` [PATCH v5 3/9] net/zxdh: add zxdh device pci init implementation Junlong Wang @ 2024-10-15 5:44 ` Junlong Wang 2024-10-15 5:44 ` [PATCH v5 5/9] net/zxdh: add msg chan enable implementation Junlong Wang ` (4 subsequent siblings) 6 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-10-15 5:44 UTC (permalink / raw) To: dev; +Cc: ferruh.yigit, wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 8728 bytes --] Add msg channel and hwlock init implementation. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/meson.build | 1 + drivers/net/zxdh/zxdh_ethdev.c | 15 +++ drivers/net/zxdh/zxdh_ethdev.h | 1 + drivers/net/zxdh/zxdh_msg.c | 161 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_msg.h | 65 +++++++++++++ 5 files changed, 243 insertions(+) create mode 100644 drivers/net/zxdh/zxdh_msg.c create mode 100644 drivers/net/zxdh/zxdh_msg.h diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build index 080c6c7725..9d0b5b9fd3 100644 --- a/drivers/net/zxdh/meson.build +++ b/drivers/net/zxdh/meson.build @@ -16,4 +16,5 @@ endif sources = files( 'zxdh_ethdev.c', 'zxdh_pci.c', + 'zxdh_msg.c', ) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index bb219c189f..66b57c4e59 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -9,6 +9,7 @@ #include "zxdh_ethdev.h" #include "zxdh_logs.h" #include "zxdh_pci.h" +#include "zxdh_msg.h" struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; @@ -83,9 +84,23 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) if (ret < 0) goto err_zxdh_init; + ret = zxdh_msg_chan_init(); + if (ret < 0) { + PMD_INIT_LOG(ERR, "Failed to init bar msg chan"); + goto err_zxdh_init; + } + hw->msg_chan_init = 1; + + ret = zxdh_msg_chan_hwlock_init(eth_dev); + if (ret != 0) { + PMD_INIT_LOG(ERR, "zxdh_msg_chan_hwlock_init failed ret %d", ret); + goto err_zxdh_init; + } + return ret; err_zxdh_init: + zxdh_bar_msg_chan_exit(); rte_free(eth_dev->data->mac_addrs); eth_dev->data->mac_addrs = NULL; return ret; diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index 18d9916713..24eb3a5ca0 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -51,6 +51,7 @@ struct zxdh_hw { uint8_t mac_addr[RTE_ETHER_ADDR_LEN]; uint8_t duplex; uint8_t is_pf; + uint8_t msg_chan_init; }; #ifdef __cplusplus diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c new file mode 100644 index 0000000000..4928711ad8 --- /dev/null +++ b/drivers/net/zxdh/zxdh_msg.c @@ -0,0 +1,161 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#include <stdbool.h> + +#include <rte_common.h> +#include <rte_memcpy.h> +#include <pthread.h> +#include <rte_cycles.h> +#include <inttypes.h> +#include <rte_malloc.h> + +#include "zxdh_ethdev.h" +#include "zxdh_logs.h" +#include "zxdh_msg.h" + +#define REPS_INFO_FLAG_USABLE 0x00 +#define BAR_SEQID_NUM_MAX 256 + +#define ZXDH_BAR0_INDEX 0 + +#define PCIEID_IS_PF_MASK (0x0800) +#define PCIEID_PF_IDX_MASK (0x0700) +#define PCIEID_VF_IDX_MASK (0x00ff) +#define PCIEID_EP_IDX_MASK (0x7000) +/* PCIEID bit field offset */ +#define PCIEID_PF_IDX_OFFSET (8) +#define PCIEID_EP_IDX_OFFSET (12) + +#define MULTIPLY_BY_8(x) ((x) << 3) +#define MULTIPLY_BY_32(x) ((x) << 5) +#define 
MULTIPLY_BY_256(x) ((x) << 8) + +#define MAX_EP_NUM (4) +#define MAX_HARD_SPINLOCK_NUM (511) + +#define BAR0_SPINLOCK_OFFSET (0x4000) +#define FW_SHRD_OFFSET (0x5000) +#define FW_SHRD_INNER_HW_LABEL_PAT (0x800) +#define HW_LABEL_OFFSET (FW_SHRD_OFFSET + FW_SHRD_INNER_HW_LABEL_PAT) + +struct dev_stat { + bool is_mpf_scanned; + bool is_res_init; + int16_t dev_cnt; /* probe cnt */ +}; +struct dev_stat g_dev_stat = {0}; + +struct seqid_item { + void *reps_addr; + uint16_t id; + uint16_t buffer_len; + uint16_t flag; +}; + +struct seqid_ring { + uint16_t cur_id; + pthread_spinlock_t lock; + struct seqid_item reps_info_tbl[BAR_SEQID_NUM_MAX]; +}; +struct seqid_ring g_seqid_ring = {0}; + +static uint16_t pcie_id_to_hard_lock(uint16_t src_pcieid, uint8_t dst) +{ + uint16_t lock_id = 0; + uint16_t pf_idx = (src_pcieid & PCIEID_PF_IDX_MASK) >> PCIEID_PF_IDX_OFFSET; + uint16_t ep_idx = (src_pcieid & PCIEID_EP_IDX_MASK) >> PCIEID_EP_IDX_OFFSET; + + switch (dst) { + /* msg to risc */ + case MSG_CHAN_END_RISC: + lock_id = MULTIPLY_BY_8(ep_idx) + pf_idx; + break; + /* msg to pf/vf */ + case MSG_CHAN_END_VF: + case MSG_CHAN_END_PF: + lock_id = MULTIPLY_BY_8(ep_idx) + pf_idx + MULTIPLY_BY_8(1 + MAX_EP_NUM); + break; + default: + lock_id = 0; + break; + } + if (lock_id >= MAX_HARD_SPINLOCK_NUM) + lock_id = 0; + + return lock_id; +} + +static void label_write(uint64_t label_lock_addr, uint32_t lock_id, uint16_t value) +{ + *(volatile uint16_t *)(label_lock_addr + lock_id * 2) = value; +} + +static void spinlock_write(uint64_t virt_lock_addr, uint32_t lock_id, uint8_t data) +{ + *(volatile uint8_t *)((uint64_t)virt_lock_addr + (uint64_t)lock_id) = data; +} + +static int32_t zxdh_spinlock_unlock(uint32_t virt_lock_id, uint64_t virt_addr, uint64_t label_addr) +{ + label_write((uint64_t)label_addr, virt_lock_id, 0); + spinlock_write(virt_addr, virt_lock_id, 0); + return 0; +} + +/** + * Fun: PF init hard_spinlock addr + */ +static int bar_chan_pf_init_spinlock(uint16_t pcie_id, uint64_t bar_base_addr) +{ + int lock_id = pcie_id_to_hard_lock(pcie_id, MSG_CHAN_END_RISC); + + zxdh_spinlock_unlock(lock_id, bar_base_addr + BAR0_SPINLOCK_OFFSET, + bar_base_addr + HW_LABEL_OFFSET); + lock_id = pcie_id_to_hard_lock(pcie_id, MSG_CHAN_END_VF); + zxdh_spinlock_unlock(lock_id, bar_base_addr + BAR0_SPINLOCK_OFFSET, + bar_base_addr + HW_LABEL_OFFSET); + return 0; +} + +int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + if (!hw->is_pf) + return 0; + return bar_chan_pf_init_spinlock(hw->pcie_id, (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX])); +} + +pthread_spinlock_t chan_lock; +int zxdh_msg_chan_init(void) +{ + uint16_t seq_id = 0; + + g_dev_stat.dev_cnt++; + if (g_dev_stat.is_res_init) + return BAR_MSG_OK; + + pthread_spin_init(&chan_lock, 0); + g_seqid_ring.cur_id = 0; + pthread_spin_init(&g_seqid_ring.lock, 0); + + for (seq_id = 0; seq_id < BAR_SEQID_NUM_MAX; seq_id++) { + struct seqid_item *reps_info = &(g_seqid_ring.reps_info_tbl[seq_id]); + + reps_info->id = seq_id; + reps_info->flag = REPS_INFO_FLAG_USABLE; + } + g_dev_stat.is_res_init = true; + return BAR_MSG_OK; +} + +int zxdh_bar_msg_chan_exit(void) +{ + if (!g_dev_stat.is_res_init || (--g_dev_stat.dev_cnt > 0)) + return BAR_MSG_OK; + + g_dev_stat.is_res_init = false; + return BAR_MSG_OK; +} diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h new file mode 100644 index 0000000000..a619e6ae21 --- /dev/null +++ b/drivers/net/zxdh/zxdh_msg.h @@ -0,0 +1,65 @@ +/* SPDX-License-Identifier: BSD-3-Clause 
+ * Copyright(c) 2024 ZTE Corporation + */ + +#ifndef _ZXDH_MSG_H_ +#define _ZXDH_MSG_H_ + +#ifdef __cplusplus +extern "C" { +#endif + +#include <stdint.h> + +#include <ethdev_driver.h> + +enum DRIVER_TYPE { + MSG_CHAN_END_MPF = 0, + MSG_CHAN_END_PF, + MSG_CHAN_END_VF, + MSG_CHAN_END_RISC, +}; + +enum BAR_MSG_RTN { + BAR_MSG_OK = 0, + BAR_MSG_ERR_MSGID, + BAR_MSG_ERR_NULL, + BAR_MSG_ERR_TYPE, /* Message type exception */ + BAR_MSG_ERR_MODULE, /* Module ID exception */ + BAR_MSG_ERR_BODY_NULL, /* Message body exception */ + BAR_MSG_ERR_LEN, /* Message length exception */ + BAR_MSG_ERR_TIME_OUT, /* Timed out waiting for the message reply */ + BAR_MSG_ERR_NOT_READY, /* Channel not ready for sending */ + BAR_MEG_ERR_NULL_FUNC, /* Receive processing function pointer is NULL */ + BAR_MSG_ERR_REPEAT_REGISTER, /* Duplicate module registration */ + BAR_MSG_ERR_UNGISTER, /* Repeated module deregistration */ + /** + * The boundary structure pointer passed to the send interface is NULL + */ + BAR_MSG_ERR_NULL_PARA, + BAR_MSG_ERR_REPSBUFF_LEN, /* The reply buffer (reps_buff) is too short */ + /** + * Unable to find the corresponding message processing function for this module + */ + BAR_MSG_ERR_MODULE_NOEXIST, + /** + * The virtual address passed to the send interface is NULL + */ + BAR_MSG_ERR_VIRTADDR_NULL, + BAR_MSG_ERR_REPLY, /* sync msg response error */ + BAR_MSG_ERR_MPF_NOT_SCANNED, + BAR_MSG_ERR_KERNEL_READY, + BAR_MSG_ERR_USR_RET_ERR, + BAR_MSG_ERR_ERR_PCIEID, + BAR_MSG_ERR_SOCKET, /* netlink socket error */ +}; + +int zxdh_msg_chan_init(void); +int zxdh_bar_msg_chan_exit(void); +int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev); + +#ifdef __cplusplus +} +#endif + +#endif /* _ZXDH_MSG_H_ */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 17144 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
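For reference, pcie_id_to_hard_lock() above derives the hard-spinlock index from the EP and PF bit-fields of the PCIe id: locks toward the RISC core use 8 * ep_idx + pf_idx, and PF/VF-bound locks are offset a further 8 * (1 + MAX_EP_NUM) slots. A worked example using the masks from the patch (the sample PCIe id is made up):

    #include <assert.h>
    #include <stdint.h>

    /* Masks and offsets as defined in zxdh_msg.c. */
    #define PCIEID_PF_IDX_MASK (0x0700)
    #define PCIEID_EP_IDX_MASK (0x7000)
    #define PCIEID_PF_IDX_OFFSET (8)
    #define PCIEID_EP_IDX_OFFSET (12)
    #define MAX_EP_NUM (4)

    int main(void)
    {
            uint16_t pcieid = 0x1234; /* illustrative: EP 1, PF 2 */
            uint16_t pf_idx = (pcieid & PCIEID_PF_IDX_MASK) >> PCIEID_PF_IDX_OFFSET; /* 2 */
            uint16_t ep_idx = (pcieid & PCIEID_EP_IDX_MASK) >> PCIEID_EP_IDX_OFFSET; /* 1 */

            /* Toward MSG_CHAN_END_RISC: 8 * ep_idx + pf_idx */
            assert((ep_idx << 3) + pf_idx == 10);
            /* Toward MSG_CHAN_END_PF/VF: shifted past the 8 * (1 + MAX_EP_NUM) RISC slots */
            assert((ep_idx << 3) + pf_idx + ((1 + MAX_EP_NUM) << 3) == 50);
            return 0;
    }

Both results sit well below MAX_HARD_SPINLOCK_NUM (511); anything computed outside that range collapses to lock 0, as the bounds check in the patch shows.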
* [PATCH v5 5/9] net/zxdh: add msg chan enable implementation 2024-10-15 5:44 ` [PATCH v5 2/9] net/zxdh: add logging implementation Junlong Wang 2024-10-15 5:44 ` [PATCH v5 3/9] net/zxdh: add zxdh device pci init implementation Junlong Wang 2024-10-15 5:44 ` [PATCH v5 4/9] net/zxdh: add msg chan and msg hwlock init Junlong Wang @ 2024-10-15 5:44 ` Junlong Wang 2024-10-15 5:44 ` [PATCH v5 6/9] net/zxdh: add zxdh get device backend infos Junlong Wang ` (3 subsequent siblings) 6 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-10-15 5:44 UTC (permalink / raw) To: dev; +Cc: ferruh.yigit, wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 27228 bytes --] Add msg chan enable implementation to support send msg to get infos. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/zxdh_ethdev.c | 6 + drivers/net/zxdh/zxdh_ethdev.h | 12 + drivers/net/zxdh/zxdh_msg.c | 655 ++++++++++++++++++++++++++++++++- drivers/net/zxdh/zxdh_msg.h | 127 +++++++ 4 files changed, 796 insertions(+), 4 deletions(-) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 66b57c4e59..d95ab4471a 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -97,6 +97,12 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) goto err_zxdh_init; } + ret = zxdh_msg_chan_enable(eth_dev); + if (ret != 0) { + PMD_INIT_LOG(ERR, "zxdh_msg_bar_chan_enable failed ret %d", ret); + goto err_zxdh_init; + } + return ret; err_zxdh_init: diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index 24eb3a5ca0..a51181f1ce 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -29,10 +29,22 @@ extern "C" { #define ZXDH_NUM_BARS 2 +union VPORT { + uint16_t vport; + struct { + uint16_t vfid:8; + uint16_t pfid:3; + uint16_t vf_flag:1; + uint16_t epid:3; + uint16_t direct_flag:1; + }; +}; + struct zxdh_hw { struct rte_eth_dev *eth_dev; struct zxdh_pci_common_cfg *common_cfg; struct zxdh_net_config *dev_cfg; + union VPORT vport; uint64_t bar_addr[ZXDH_NUM_BARS]; uint64_t host_features; diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c index 4928711ad8..4e4930e5a1 100644 --- a/drivers/net/zxdh/zxdh_msg.c +++ b/drivers/net/zxdh/zxdh_msg.c @@ -35,10 +35,82 @@ #define MAX_EP_NUM (4) #define MAX_HARD_SPINLOCK_NUM (511) -#define BAR0_SPINLOCK_OFFSET (0x4000) -#define FW_SHRD_OFFSET (0x5000) -#define FW_SHRD_INNER_HW_LABEL_PAT (0x800) -#define HW_LABEL_OFFSET (FW_SHRD_OFFSET + FW_SHRD_INNER_HW_LABEL_PAT) +#define LOCK_PRIMARY_ID_MASK (0x8000) +/* bar offset */ +#define BAR0_CHAN_RISC_OFFSET (0x2000) +#define BAR0_CHAN_PFVF_OFFSET (0x3000) +#define BAR0_SPINLOCK_OFFSET (0x4000) +#define FW_SHRD_OFFSET (0x5000) +#define FW_SHRD_INNER_HW_LABEL_PAT (0x800) +#define HW_LABEL_OFFSET (FW_SHRD_OFFSET + FW_SHRD_INNER_HW_LABEL_PAT) +#define ZXDH_CTRLCH_OFFSET (0x2000) +#define CHAN_RISC_SPINLOCK_OFFSET (BAR0_SPINLOCK_OFFSET - BAR0_CHAN_RISC_OFFSET) +#define CHAN_PFVF_SPINLOCK_OFFSET (BAR0_SPINLOCK_OFFSET - BAR0_CHAN_PFVF_OFFSET) +#define CHAN_RISC_LABEL_OFFSET (HW_LABEL_OFFSET - BAR0_CHAN_RISC_OFFSET) +#define CHAN_PFVF_LABEL_OFFSET (HW_LABEL_OFFSET - BAR0_CHAN_PFVF_OFFSET) + +#define REPS_HEADER_LEN_OFFSET 1 +#define REPS_HEADER_PAYLOAD_OFFSET 4 +#define REPS_HEADER_REPLYED 0xff + +#define BAR_MSG_CHAN_USABLE 0 +#define BAR_MSG_CHAN_USED 1 + +#define BAR_MSG_POL_MASK (0x10) +#define BAR_MSG_POL_OFFSET (4) + +#define BAR_ALIGN_WORD_MASK 0xfffffffc +#define BAR_MSG_VALID_MASK 
1 +#define BAR_MSG_VALID_OFFSET 0 + +#define REPS_INFO_FLAG_USABLE 0x00 +#define REPS_INFO_FLAG_USED 0xa0 + +#define BAR_PF_NUM 7 +#define BAR_VF_NUM 256 +#define BAR_INDEX_PF_TO_VF 0 +#define BAR_INDEX_MPF_TO_MPF 0xff +#define BAR_INDEX_MPF_TO_PFVF 0 +#define BAR_INDEX_PFVF_TO_MPF 0 + +#define MAX_HARD_SPINLOCK_ASK_TIMES (1000) +#define SPINLOCK_POLLING_SPAN_US (100) + +#define BAR_MSG_SRC_NUM 3 +#define BAR_MSG_SRC_MPF 0 +#define BAR_MSG_SRC_PF 1 +#define BAR_MSG_SRC_VF 2 +#define BAR_MSG_SRC_ERR 0xff +#define BAR_MSG_DST_NUM 3 +#define BAR_MSG_DST_RISC 0 +#define BAR_MSG_DST_MPF 2 +#define BAR_MSG_DST_PFVF 1 +#define BAR_MSG_DST_ERR 0xff + +#define LOCK_TYPE_HARD (1) +#define LOCK_TYPE_SOFT (0) +#define BAR_INDEX_TO_RISC 0 + +#define BAR_SUBCHAN_INDEX_SEND 0 +#define BAR_SUBCHAN_INDEX_RECV 1 + +uint8_t subchan_id_tbl[BAR_MSG_SRC_NUM][BAR_MSG_DST_NUM] = { + {BAR_SUBCHAN_INDEX_SEND, BAR_SUBCHAN_INDEX_SEND, BAR_SUBCHAN_INDEX_SEND}, + {BAR_SUBCHAN_INDEX_SEND, BAR_SUBCHAN_INDEX_SEND, BAR_SUBCHAN_INDEX_RECV}, + {BAR_SUBCHAN_INDEX_SEND, BAR_SUBCHAN_INDEX_RECV, BAR_SUBCHAN_INDEX_RECV} +}; + +uint8_t chan_id_tbl[BAR_MSG_SRC_NUM][BAR_MSG_DST_NUM] = { + {BAR_INDEX_TO_RISC, BAR_INDEX_MPF_TO_PFVF, BAR_INDEX_MPF_TO_MPF}, + {BAR_INDEX_TO_RISC, BAR_INDEX_PF_TO_VF, BAR_INDEX_PFVF_TO_MPF}, + {BAR_INDEX_TO_RISC, BAR_INDEX_PF_TO_VF, BAR_INDEX_PFVF_TO_MPF} +}; + +uint8_t lock_type_tbl[BAR_MSG_SRC_NUM][BAR_MSG_DST_NUM] = { + {LOCK_TYPE_HARD, LOCK_TYPE_HARD, LOCK_TYPE_HARD}, + {LOCK_TYPE_SOFT, LOCK_TYPE_SOFT, LOCK_TYPE_HARD}, + {LOCK_TYPE_HARD, LOCK_TYPE_HARD, LOCK_TYPE_HARD} +}; struct dev_stat { bool is_mpf_scanned; @@ -97,6 +169,11 @@ static void spinlock_write(uint64_t virt_lock_addr, uint32_t lock_id, uint8_t da *(volatile uint8_t *)((uint64_t)virt_lock_addr + (uint64_t)lock_id) = data; } +static uint8_t spinklock_read(uint64_t virt_lock_addr, uint32_t lock_id) +{ + return *(volatile uint8_t *)((uint64_t)virt_lock_addr + (uint64_t)lock_id); +} + static int32_t zxdh_spinlock_unlock(uint32_t virt_lock_id, uint64_t virt_addr, uint64_t label_addr) { label_write((uint64_t)label_addr, virt_lock_id, 0); @@ -104,6 +181,28 @@ static int32_t zxdh_spinlock_unlock(uint32_t virt_lock_id, uint64_t virt_addr, u return 0; } +static int32_t zxdh_spinlock_lock(uint32_t virt_lock_id, uint64_t virt_addr, + uint64_t label_addr, uint16_t primary_id) +{ + uint32_t lock_rd_cnt = 0; + + do { + /* read to lock */ + uint8_t spl_val = spinklock_read(virt_addr, virt_lock_id); + + if (spl_val == 0) { + label_write((uint64_t)label_addr, virt_lock_id, primary_id); + break; + } + rte_delay_us_block(SPINLOCK_POLLING_SPAN_US); + lock_rd_cnt++; + } while (lock_rd_cnt < MAX_HARD_SPINLOCK_ASK_TIMES); + if (lock_rd_cnt >= MAX_HARD_SPINLOCK_ASK_TIMES) + return -1; + + return 0; +} + /** * Fun: PF init hard_spinlock addr */ @@ -119,6 +218,554 @@ static int bar_chan_pf_init_spinlock(uint16_t pcie_id, uint64_t bar_base_addr) return 0; } +static int zxdh_bar_chan_msgid_allocate(uint16_t *msgid) +{ + struct seqid_item *seqid_reps_info = NULL; + + pthread_spin_lock(&g_seqid_ring.lock); + uint16_t g_id = g_seqid_ring.cur_id; + uint16_t count = 0; + + do { + count++; + ++g_id; + g_id %= BAR_SEQID_NUM_MAX; + seqid_reps_info = &g_seqid_ring.reps_info_tbl[g_id]; + } while ((seqid_reps_info->flag != REPS_INFO_FLAG_USABLE) && (count < BAR_SEQID_NUM_MAX)); + int rc; + + if (count >= BAR_SEQID_NUM_MAX) { + rc = -1; + goto out; + } + seqid_reps_info->flag = REPS_INFO_FLAG_USED; + g_seqid_ring.cur_id = g_id; + *msgid = g_id; + rc = BAR_MSG_OK; + +out: + 
pthread_spin_unlock(&g_seqid_ring.lock); + return rc; +} + +static uint16_t zxdh_bar_chan_save_recv_info(struct zxdh_msg_recviver_mem *result, uint16_t *msg_id) +{ + int ret = zxdh_bar_chan_msgid_allocate(msg_id); + + if (ret != BAR_MSG_OK) + return BAR_MSG_ERR_MSGID; + + PMD_MSG_LOG(DEBUG, "allocate msg_id: %u", *msg_id); + struct seqid_item *reps_info = &g_seqid_ring.reps_info_tbl[*msg_id]; + + reps_info->reps_addr = result->recv_buffer; + reps_info->buffer_len = result->buffer_len; + return BAR_MSG_OK; +} + +static uint8_t zxdh_bar_msg_src_index_trans(uint8_t src) +{ + uint8_t src_index = 0; + + switch (src) { + case MSG_CHAN_END_MPF: + src_index = BAR_MSG_SRC_MPF; + break; + case MSG_CHAN_END_PF: + src_index = BAR_MSG_SRC_PF; + break; + case MSG_CHAN_END_VF: + src_index = BAR_MSG_SRC_VF; + break; + default: + src_index = BAR_MSG_SRC_ERR; + break; + } + return src_index; +} + +static uint8_t zxdh_bar_msg_dst_index_trans(uint8_t dst) +{ + uint8_t dst_index = 0; + + switch (dst) { + case MSG_CHAN_END_MPF: + dst_index = BAR_MSG_DST_MPF; + break; + case MSG_CHAN_END_PF: + dst_index = BAR_MSG_DST_PFVF; + break; + case MSG_CHAN_END_VF: + dst_index = BAR_MSG_DST_PFVF; + break; + case MSG_CHAN_END_RISC: + dst_index = BAR_MSG_DST_RISC; + break; + default: + dst_index = BAR_MSG_SRC_ERR; + break; + } + return dst_index; +} + +static int zxdh_bar_chan_send_para_check(struct zxdh_pci_bar_msg *in, + struct zxdh_msg_recviver_mem *result) +{ + uint8_t src_index = 0; + uint8_t dst_index = 0; + + if (in == NULL || result == NULL) { + PMD_MSG_LOG(ERR, "send para ERR: null para."); + return BAR_MSG_ERR_NULL_PARA; + } + src_index = zxdh_bar_msg_src_index_trans(in->src); + dst_index = zxdh_bar_msg_dst_index_trans(in->dst); + + if (src_index == BAR_MSG_SRC_ERR || dst_index == BAR_MSG_DST_ERR) { + PMD_MSG_LOG(ERR, "send para ERR: chan doesn't exist."); + return BAR_MSG_ERR_TYPE; + } + if (in->module_id >= BAR_MSG_MODULE_NUM) { + PMD_MSG_LOG(ERR, "send para ERR: invalid module_id: %d.", in->module_id); + return BAR_MSG_ERR_MODULE; + } + if (in->payload_addr == NULL) { + PMD_MSG_LOG(ERR, "send para ERR: null message."); + return BAR_MSG_ERR_BODY_NULL; + } + if (in->payload_len > BAR_MSG_PAYLOAD_MAX_LEN) { + PMD_MSG_LOG(ERR, "send para ERR: len %d is too long.", in->payload_len); + return BAR_MSG_ERR_LEN; + } + if (in->virt_addr == 0 || result->recv_buffer == NULL) { + PMD_MSG_LOG(ERR, "send para ERR: virt_addr or recv_buffer is NULL."); + return BAR_MSG_ERR_VIRTADDR_NULL; + } + if (result->buffer_len < REPS_HEADER_PAYLOAD_OFFSET) { + PMD_MSG_LOG(ERR, + "recv buffer len: %" PRIu64 " is shorter than the minimal 4 bytes", + result->buffer_len); + return BAR_MSG_ERR_REPSBUFF_LEN; + } + + return BAR_MSG_OK; +} + +static uint64_t zxdh_subchan_addr_cal(uint64_t virt_addr, uint8_t chan_id, uint8_t subchan_id) +{ + return virt_addr + (2 * chan_id + subchan_id) * BAR_MSG_ADDR_CHAN_INTERVAL; +} + +static uint16_t zxdh_bar_chan_subchan_addr_get(struct zxdh_pci_bar_msg *in, uint64_t *subchan_addr) +{ + uint8_t src_index = zxdh_bar_msg_src_index_trans(in->src); + uint8_t dst_index = zxdh_bar_msg_dst_index_trans(in->dst); + uint16_t chan_id = chan_id_tbl[src_index][dst_index]; + uint16_t subchan_id = subchan_id_tbl[src_index][dst_index]; + + *subchan_addr = zxdh_subchan_addr_cal(in->virt_addr, chan_id, subchan_id); + return BAR_MSG_OK; +} + +static int zxdh_bar_hard_lock(uint16_t src_pcieid, uint8_t dst, uint64_t virt_addr) +{ + int ret = 0; + uint16_t lockid = pcie_id_to_hard_lock(src_pcieid, dst); + + PMD_MSG_LOG(DEBUG, "dev pcieid: 0x%x lock, get hardlockid: %u\n",
src_pcieid, lockid); + if (dst == MSG_CHAN_END_RISC) + ret = zxdh_spinlock_lock(lockid, virt_addr + CHAN_RISC_SPINLOCK_OFFSET, + virt_addr + CHAN_RISC_LABEL_OFFSET, + src_pcieid | LOCK_PRIMARY_ID_MASK); + else + ret = zxdh_spinlock_lock(lockid, virt_addr + CHAN_PFVF_SPINLOCK_OFFSET, + virt_addr + CHAN_PFVF_LABEL_OFFSET, + src_pcieid | LOCK_PRIMARY_ID_MASK); + + return ret; +} + +static void zxdh_bar_hard_unlock(uint16_t src_pcieid, uint8_t dst, uint64_t virt_addr) +{ + uint16_t lockid = pcie_id_to_hard_lock(src_pcieid, dst); + + PMD_MSG_LOG(DEBUG, "dev pcieid: 0x%x unlock, get hardlockid: %u\n", src_pcieid, lockid); + if (dst == MSG_CHAN_END_RISC) + zxdh_spinlock_unlock(lockid, virt_addr + CHAN_RISC_SPINLOCK_OFFSET, + virt_addr + CHAN_RISC_LABEL_OFFSET); + else + zxdh_spinlock_unlock(lockid, virt_addr + CHAN_PFVF_SPINLOCK_OFFSET, + virt_addr + CHAN_PFVF_LABEL_OFFSET); +} + +pthread_spinlock_t chan_lock; +static int zxdh_bar_chan_lock(uint8_t src, uint8_t dst, uint16_t src_pcieid, uint64_t virt_addr) +{ + int ret = 0; + uint8_t src_index = zxdh_bar_msg_src_index_trans(src); + uint8_t dst_index = zxdh_bar_msg_dst_index_trans(dst); + + if (src_index == BAR_MSG_SRC_ERR || dst_index == BAR_MSG_DST_ERR) { + PMD_MSG_LOG(ERR, "lock ERR: chan doesn't exist.\n"); + return BAR_MSG_ERR_TYPE; + } + uint16_t idx = lock_type_tbl[src_index][dst_index]; + + if (idx == LOCK_TYPE_SOFT) + pthread_spin_lock(&chan_lock); + else + ret = zxdh_bar_hard_lock(src_pcieid, dst, virt_addr); + + if (ret != 0) + PMD_MSG_LOG(ERR, "dev: 0x%x failed to lock.\n", src_pcieid); + + return ret; +} + +static int zxdh_bar_chan_unlock(uint8_t src, uint8_t dst, uint16_t src_pcieid, uint64_t virt_addr) +{ + uint8_t src_index = zxdh_bar_msg_src_index_trans(src); + uint8_t dst_index = zxdh_bar_msg_dst_index_trans(dst); + + if (src_index == BAR_MSG_SRC_ERR || dst_index == BAR_MSG_DST_ERR) { + PMD_MSG_LOG(ERR, "unlock ERR: chan doesn't exist.\n"); + return BAR_MSG_ERR_TYPE; + } + uint16_t idx = lock_type_tbl[src_index][dst_index]; + + if (idx == LOCK_TYPE_SOFT) + pthread_spin_unlock(&chan_lock); + else + zxdh_bar_hard_unlock(src_pcieid, dst, virt_addr); + + return BAR_MSG_OK; +} + +static void zxdh_bar_chan_msgid_free(uint16_t msg_id) +{ + struct seqid_item *seqid_reps_info = &g_seqid_ring.reps_info_tbl[msg_id]; + + pthread_spin_lock(&g_seqid_ring.lock); + seqid_reps_info->flag = REPS_INFO_FLAG_USABLE; + PMD_MSG_LOG(DEBUG, "free msg_id: %u", msg_id); + pthread_spin_unlock(&g_seqid_ring.lock); +} + +static int zxdh_bar_chan_reg_write(uint64_t subchan_addr, uint32_t offset, + uint32_t data) +{ + uint32_t algin_offset = (offset & BAR_ALIGN_WORD_MASK); + + if (unlikely(algin_offset >= BAR_MSG_ADDR_CHAN_INTERVAL)) { + PMD_MSG_LOG(ERR, "subchan addr: %" PRIu64 "offset: %" PRIu32, + subchan_addr, algin_offset); + return -1; + } + *(uint32_t *)(subchan_addr + algin_offset) = data; + return 0; +} + +static int zxdh_bar_chan_reg_read(uint64_t subchan_addr, uint32_t offset, + uint32_t *pdata) +{ + uint32_t algin_offset = (offset & BAR_ALIGN_WORD_MASK); + + if (unlikely(algin_offset >= BAR_MSG_ADDR_CHAN_INTERVAL)) { + PMD_MSG_LOG(ERR, "subchan addr: %" PRIu64 "offset: %" PRIu32, + subchan_addr, algin_offset); + return -1; + } + *pdata = *(uint32_t *)(subchan_addr + algin_offset); + return 0; +} + +static uint16_t zxdh_bar_chan_msg_header_set(uint64_t subchan_addr, + struct bar_msg_header *msg_header) +{ + uint32_t *data = (uint32_t *)msg_header; + uint16_t idx; + + for (idx = 0; idx < (BAR_MSG_PLAYLOAD_OFFSET >> 2); idx++) + 
zxdh_bar_chan_reg_write(subchan_addr, idx * 4, *(data + idx)); + + return BAR_MSG_OK; +} + +static uint16_t zxdh_bar_chan_msg_header_get(uint64_t subchan_addr, + struct bar_msg_header *msg_header) +{ + uint32_t *data = (uint32_t *)msg_header; + uint16_t idx; + + for (idx = 0; idx < (BAR_MSG_PLAYLOAD_OFFSET >> 2); idx++) + zxdh_bar_chan_reg_read(subchan_addr, idx * 4, data + idx); + + return BAR_MSG_OK; +} + +static uint16_t zxdh_bar_chan_msg_payload_set(uint64_t subchan_addr, uint8_t *msg, + uint16_t len) +{ + uint32_t *data = (uint32_t *)msg; + uint32_t count = (len >> 2); + uint32_t ix; + + for (ix = 0; ix < count; ix++) + zxdh_bar_chan_reg_write(subchan_addr, 4 * ix + BAR_MSG_PLAYLOAD_OFFSET, + *(data + ix)); + + uint32_t remain = (len & 0x3); + + if (remain) { + uint32_t remain_data = 0; + + for (ix = 0; ix < remain; ix++) + remain_data |= *((uint8_t *)(msg + len - remain + ix)) << (8 * ix); + + zxdh_bar_chan_reg_write(subchan_addr, 4 * count + + BAR_MSG_PLAYLOAD_OFFSET, remain_data); + } + return BAR_MSG_OK; +} + +static uint16_t zxdh_bar_chan_msg_payload_get(uint64_t subchan_addr, uint8_t *msg, uint16_t len) +{ + uint32_t *data = (uint32_t *)msg; + uint32_t count = (len >> 2); + uint32_t ix; + + for (ix = 0; ix < count; ix++) + zxdh_bar_chan_reg_read(subchan_addr, 4 * ix + BAR_MSG_PLAYLOAD_OFFSET, (data + ix)); + + uint32_t remain = (len & 0x3); + + if (remain) { + uint32_t remain_data = 0; + + zxdh_bar_chan_reg_read(subchan_addr, 4 * count + + BAR_MSG_PLAYLOAD_OFFSET, &remain_data); + for (ix = 0; ix < remain; ix++) + *((uint8_t *)(msg + (len - remain + ix))) = remain_data >> (8 * ix); + } + return BAR_MSG_OK; +} + +static uint16_t zxdh_bar_chan_msg_valid_set(uint64_t subchan_addr, uint8_t valid_label) +{ + uint32_t data; + + zxdh_bar_chan_reg_read(subchan_addr, BAR_MSG_VALID_OFFSET, &data); + data &= (~BAR_MSG_VALID_MASK); + data |= (uint32_t)valid_label; + zxdh_bar_chan_reg_write(subchan_addr, BAR_MSG_VALID_OFFSET, data); + return BAR_MSG_OK; +} + +static uint8_t temp_msg[BAR_MSG_ADDR_CHAN_INTERVAL]; +static uint16_t zxdh_bar_chan_msg_send(uint64_t subchan_addr, void *payload_addr, + uint16_t payload_len, struct bar_msg_header *msg_header) +{ + uint16_t ret = 0; + ret = zxdh_bar_chan_msg_header_set(subchan_addr, msg_header); + + ret = zxdh_bar_chan_msg_header_get(subchan_addr, + (struct bar_msg_header *)temp_msg); + + ret = zxdh_bar_chan_msg_payload_set(subchan_addr, + (uint8_t *)(payload_addr), payload_len); + + ret = zxdh_bar_chan_msg_payload_get(subchan_addr, + temp_msg, payload_len); + + ret = zxdh_bar_chan_msg_valid_set(subchan_addr, BAR_MSG_CHAN_USED); + return ret; +} + +static uint16_t zxdh_bar_msg_valid_stat_get(uint64_t subchan_addr) +{ + uint32_t data; + + zxdh_bar_chan_reg_read(subchan_addr, BAR_MSG_VALID_OFFSET, &data); + if (BAR_MSG_CHAN_USABLE == (data & BAR_MSG_VALID_MASK)) + return BAR_MSG_CHAN_USABLE; + + return BAR_MSG_CHAN_USED; +} + +static uint16_t zxdh_bar_chan_msg_poltag_set(uint64_t subchan_addr, uint8_t label) +{ + uint32_t data; + + zxdh_bar_chan_reg_read(subchan_addr, BAR_MSG_VALID_OFFSET, &data); + data &= (~(uint32_t)BAR_MSG_POL_MASK); + data |= ((uint32_t)label << BAR_MSG_POL_OFFSET); + zxdh_bar_chan_reg_write(subchan_addr, BAR_MSG_VALID_OFFSET, data); + return BAR_MSG_OK; +} + +static uint16_t zxdh_bar_chan_sync_msg_reps_get(uint64_t subchan_addr, + uint64_t recv_buffer, uint16_t buffer_len) +{ + struct bar_msg_header msg_header = {0}; + uint16_t msg_id = 0; + uint16_t msg_len = 0; + + zxdh_bar_chan_msg_header_get(subchan_addr, &msg_header); + 
msg_id = msg_header.msg_id; + struct seqid_item *reps_info = &g_seqid_ring.reps_info_tbl[msg_id]; + + if (reps_info->flag != REPS_INFO_FLAG_USED) { + PMD_MSG_LOG(ERR, "msg_id %u unused", msg_id); + return BAR_MSG_ERR_REPLY; + } + msg_len = msg_header.len; + + if (msg_len > buffer_len - 4) { + PMD_MSG_LOG(ERR, "recv buffer len is: %u, but reply msg len is: %u", + buffer_len, msg_len + 4); + return BAR_MSG_ERR_REPSBUFF_LEN; + } + uint8_t *recv_msg = (uint8_t *)recv_buffer; + + zxdh_bar_chan_msg_payload_get(subchan_addr, + recv_msg + REPS_HEADER_PAYLOAD_OFFSET, msg_len); + *(uint16_t *)(recv_msg + REPS_HEADER_LEN_OFFSET) = msg_len; + *recv_msg = REPS_HEADER_REPLYED; /* set reps's valid */ + return BAR_MSG_OK; +} + +int zxdh_bar_chan_sync_msg_send(struct zxdh_pci_bar_msg *in, struct zxdh_msg_recviver_mem *result) +{ + struct bar_msg_header msg_header = {0}; + uint16_t seq_id = 0; + uint64_t subchan_addr = 0; + uint32_t time_out_cnt = 0; + uint16_t valid = 0; + int ret = 0; + + ret = zxdh_bar_chan_send_para_check(in, result); + if (ret != BAR_MSG_OK) + goto exit; + + ret = zxdh_bar_chan_save_recv_info(result, &seq_id); + if (ret != BAR_MSG_OK) + goto exit; + + zxdh_bar_chan_subchan_addr_get(in, &subchan_addr); + + msg_header.sync = BAR_CHAN_MSG_SYNC; + msg_header.emec = in->emec; + msg_header.usr = 0; + msg_header.rsv = 0; + msg_header.module_id = in->module_id; + msg_header.len = in->payload_len; + msg_header.msg_id = seq_id; + msg_header.src_pcieid = in->src_pcieid; + msg_header.dst_pcieid = in->dst_pcieid; + + ret = zxdh_bar_chan_lock(in->src, in->dst, in->src_pcieid, in->virt_addr); + if (ret != BAR_MSG_OK) { + zxdh_bar_chan_msgid_free(seq_id); + goto exit; + } + zxdh_bar_chan_msg_send(subchan_addr, in->payload_addr, in->payload_len, &msg_header); + + do { + rte_delay_us_block(BAR_MSG_POLLING_SPAN); + valid = zxdh_bar_msg_valid_stat_get(subchan_addr); + ++time_out_cnt; + } while ((time_out_cnt < BAR_MSG_TIMEOUT_TH) && (valid == BAR_MSG_CHAN_USED)); + + if ((time_out_cnt == BAR_MSG_TIMEOUT_TH) && (valid != BAR_MSG_CHAN_USABLE)) { + zxdh_bar_chan_msg_valid_set(subchan_addr, BAR_MSG_CHAN_USABLE); + zxdh_bar_chan_msg_poltag_set(subchan_addr, 0); + PMD_MSG_LOG(ERR, "BAR MSG ERR: chan type time out."); + ret = BAR_MSG_ERR_TIME_OUT; + } else { + ret = zxdh_bar_chan_sync_msg_reps_get(subchan_addr, + (uint64_t)result->recv_buffer, result->buffer_len); + } + zxdh_bar_chan_msgid_free(seq_id); + zxdh_bar_chan_unlock(in->src, in->dst, in->src_pcieid, in->virt_addr); + +exit: + return ret; +} + +static int bar_get_sum(uint8_t *ptr, uint8_t len) +{ + uint64_t sum = 0; + int idx; + + for (idx = 0; idx < len; idx++) + sum += *(ptr + idx); + + return (uint16_t)sum; +} + +static int zxdh_bar_chan_enable(struct msix_para *_msix_para, uint16_t *vport) +{ + struct bar_recv_msg recv_msg = {0}; + int ret = 0; + int check_token = 0; + int sum_res = 0; + + if (!_msix_para) + return BAR_MSG_ERR_NULL; + + struct msix_msg msix_msg = { + .pcie_id = _msix_para->pcie_id, + .vector_risc = _msix_para->vector_risc, + .vector_pfvf = _msix_para->vector_pfvf, + .vector_mpf = _msix_para->vector_mpf, + }; + struct zxdh_pci_bar_msg in = { + .virt_addr = _msix_para->virt_addr, + .payload_addr = &msix_msg, + .payload_len = sizeof(msix_msg), + .emec = 0, + .src = _msix_para->driver_type, + .dst = MSG_CHAN_END_RISC, + .module_id = BAR_MODULE_MISX, + .src_pcieid = _msix_para->pcie_id, + .dst_pcieid = 0, + .usr = 0, + }; + + struct zxdh_msg_recviver_mem result = { + .recv_buffer = &recv_msg, + .buffer_len = sizeof(recv_msg), + }; + + 
ret = zxdh_bar_chan_sync_msg_send(&in, &result); + if (ret != BAR_MSG_OK) + return -ret; + + check_token = recv_msg.msix_reps.check; + sum_res = bar_get_sum((uint8_t *)&msix_msg, sizeof(msix_msg)); + + if (check_token != sum_res) { + PMD_MSG_LOG(ERR, "expect token: 0x%x, get token: 0x%x.\n", sum_res, check_token); + return BAR_MSG_ERR_REPLY; + } + *vport = recv_msg.msix_reps.vport; + PMD_MSG_LOG(DEBUG, "vport of pcieid: 0x%x get success.\n", _msix_para->pcie_id); + return BAR_MSG_OK; +} + +int zxdh_msg_chan_enable(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct msix_para misx_info = { + .vector_risc = MSIX_FROM_RISCV, + .vector_pfvf = MSIX_FROM_PFVF, + .vector_mpf = MSIX_FROM_MPF, + .pcie_id = hw->pcie_id, + .driver_type = hw->is_pf ? MSG_CHAN_END_PF : MSG_CHAN_END_VF, + .virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_CTRLCH_OFFSET), + }; + + return zxdh_bar_chan_enable(&misx_info, &hw->vport.vport); +} + int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev) { struct zxdh_hw *hw = dev->data->dev_private; diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index a619e6ae21..88d27756e2 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -13,6 +13,19 @@ extern "C" { #include <ethdev_driver.h> +#define ZXDH_MSIX_INTR_MSG_VEC_BASE 1 + +#define BAR_MSG_POLLING_SPAN 100 /* sleep us */ +#define BAR_MSG_POLL_CNT_PER_MS (1 * 1000 / BAR_MSG_POLLING_SPAN) +#define BAR_MSG_POLL_CNT_PER_S (1 * 1000 * 1000 / BAR_MSG_POLLING_SPAN) +#define BAR_MSG_TIMEOUT_TH (10 * 1000 * 1000 / BAR_MSG_POLLING_SPAN) /* 10s */ + +#define BAR_CHAN_MSG_SYNC 0 + +#define BAR_MSG_ADDR_CHAN_INTERVAL (2 * 1024) /* channel size */ +#define BAR_MSG_PLAYLOAD_OFFSET (sizeof(struct bar_msg_header)) +#define BAR_MSG_PAYLOAD_MAX_LEN (BAR_MSG_ADDR_CHAN_INTERVAL - sizeof(struct bar_msg_header)) + enum DRIVER_TYPE { MSG_CHAN_END_MPF = 0, MSG_CHAN_END_PF, @@ -20,6 +33,13 @@ enum DRIVER_TYPE { MSG_CHAN_END_RISC, }; +enum MSG_VEC { + MSIX_FROM_PFVF = ZXDH_MSIX_INTR_MSG_VEC_BASE, + MSIX_FROM_MPF, + MSIX_FROM_RISCV, + MSG_VEC_NUM, +}; + enum BAR_MSG_RTN { BAR_MSG_OK = 0, BAR_MSG_ERR_MSGID, @@ -54,10 +74,117 @@ enum BAR_MSG_RTN { BAR_MSG_ERR_SOCKET, /* netlink sockte err */ }; +enum bar_module_id { + BAR_MODULE_DBG = 0, /* 0: debug */ + BAR_MODULE_TBL, /* 1: resource table */ + BAR_MODULE_MISX, /* 2: config msix */ + BAR_MODULE_SDA, /* 3: */ + BAR_MODULE_RDMA, /* 4: */ + BAR_MODULE_DEMO, /* 5: channel test */ + BAR_MODULE_SMMU, /* 6: */ + BAR_MODULE_MAC, /* 7: mac rx/tx stats */ + BAR_MODULE_VDPA, /* 8: vdpa live migration */ + BAR_MODULE_VQM, /* 9: vqm live migration */ + BAR_MODULE_NP, /* 10: vf msg callback np */ + BAR_MODULE_VPORT, /* 11: get vport */ + BAR_MODULE_BDF, /* 12: get bdf */ + BAR_MODULE_RISC_READY, /* 13: */ + BAR_MODULE_REVERSE, /* 14: byte stream reverse */ + BAR_MDOULE_NVME, /* 15: */ + BAR_MDOULE_NPSDK, /* 16: */ + BAR_MODULE_NP_TODO, /* 17: */ + MODULE_BAR_MSG_TO_PF, /* 18: */ + MODULE_BAR_MSG_TO_VF, /* 19: */ + + MODULE_FLASH = 32, + BAR_MODULE_OFFSET_GET = 33, + BAR_EVENT_OVS_WITH_VCB = 36, + + BAR_MSG_MODULE_NUM = 100, +}; + +struct msix_para { + uint16_t pcie_id; + uint16_t vector_risc; + uint16_t vector_pfvf; + uint16_t vector_mpf; + uint64_t virt_addr; + uint16_t driver_type; /* refer to DRIVER_TYPE */ +}; + +struct msix_msg { + uint16_t pcie_id; + uint16_t vector_risc; + uint16_t vector_pfvf; + uint16_t vector_mpf; +}; + +struct zxdh_pci_bar_msg { + uint64_t virt_addr; /* bar addr */ + void *payload_addr; + uint16_t 
payload_len; + uint16_t emec; + uint16_t src; /* refer to BAR_DRIVER_TYPE */ + uint16_t dst; /* refer to BAR_DRIVER_TYPE */ + uint16_t module_id; + uint16_t src_pcieid; + uint16_t dst_pcieid; + uint16_t usr; +}; + +struct bar_msix_reps { + uint16_t pcie_id; + uint16_t check; + uint16_t vport; + uint16_t rsv; +} __rte_packed; + +struct bar_offset_reps { + uint16_t check; + uint16_t rsv; + uint32_t offset; + uint32_t length; +} __rte_packed; + +struct bar_recv_msg { + uint8_t reps_ok; + uint16_t reps_len; + uint8_t rsv; + /* */ + union { + struct bar_msix_reps msix_reps; + struct bar_offset_reps offset_reps; + } __rte_packed; +} __rte_packed; + +struct zxdh_msg_recviver_mem { + void *recv_buffer; /* first 4B is head, followed by payload */ + uint64_t buffer_len; +}; + +struct bar_msg_header { + uint8_t valid : 1; /* used by __bar_chan_msg_valid_set/get */ + uint8_t sync : 1; + uint8_t emec : 1; /* emergency? */ + uint8_t ack : 1; /* ack msg? */ + uint8_t poll : 1; + uint8_t usr : 1; + uint8_t rsv; + uint16_t module_id; + uint16_t len; + uint16_t msg_id; + uint16_t src_pcieid; + uint16_t dst_pcieid; /* used in PF-->VF */ +}; + int zxdh_msg_chan_init(void); int zxdh_bar_msg_chan_exit(void); int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev); +int zxdh_msg_chan_enable(struct rte_eth_dev *dev); +int zxdh_bar_chan_sync_msg_send(struct zxdh_pci_bar_msg *in, + struct zxdh_msg_recviver_mem *result); + #ifdef __cplusplus } #endif -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 59822 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
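A note on the reply buffer layout that zxdh_bar_chan_sync_msg_send() fills in: zxdh_bar_chan_sync_msg_reps_get() writes the REPS_HEADER_REPLYED flag into byte 0, the 16-bit payload length at offset 1, and the payload itself starting at offset 4, which mirrors the packed layout of struct bar_recv_msg (reps_ok, reps_len, rsv, then the union). A hedged caller-side decode sketch; the helper name is an illustrative assumption, and the length is read back with host endianness, matching how the channel code stores it:

    #include <stdint.h>
    #include <string.h>

    #define REPS_HEADER_REPLYED 0xff
    #define REPS_HEADER_LEN_OFFSET 1
    #define REPS_HEADER_PAYLOAD_OFFSET 4

    /* Copy the reply payload out of a buffer previously handed to
     * zxdh_bar_chan_sync_msg_send() via struct zxdh_msg_recviver_mem.
     * Returns the payload length, or -1 if no valid reply is present.
     */
    static int reps_payload_get(const uint8_t *recv_buffer, void *payload, uint16_t max_len)
    {
            uint16_t len;

            if (recv_buffer[0] != REPS_HEADER_REPLYED)
                    return -1; /* reply flag not set yet */
            memcpy(&len, recv_buffer + REPS_HEADER_LEN_OFFSET, sizeof(len));
            if (len > max_len)
                    return -1;
            memcpy(payload, recv_buffer + REPS_HEADER_PAYLOAD_OFFSET, len);
            return len;
    }

This is also why the parameter check insists on a receive buffer of at least REPS_HEADER_PAYLOAD_OFFSET (4) bytes: the flag and length header are written before any payload.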
* [PATCH v5 6/9] net/zxdh: add zxdh get device backend infos 2024-10-15 5:44 ` [PATCH v5 2/9] net/zxdh: add logging implementation Junlong Wang ` (2 preceding siblings ...) 2024-10-15 5:44 ` [PATCH v5 5/9] net/zxdh: add msg chan enable implementation Junlong Wang @ 2024-10-15 5:44 ` Junlong Wang 2024-10-15 5:44 ` [PATCH v5 7/9] net/zxdh: add configure zxdh intr implementation Junlong Wang ` (2 subsequent siblings) 6 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-10-15 5:44 UTC (permalink / raw) To: dev; +Cc: ferruh.yigit, wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 13323 bytes --] Add zxdh get device backend infos, use msg chan to send msg get. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/meson.build | 1 + drivers/net/zxdh/zxdh_common.c | 249 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_common.h | 30 ++++ drivers/net/zxdh/zxdh_ethdev.c | 35 +++++ drivers/net/zxdh/zxdh_ethdev.h | 5 + drivers/net/zxdh/zxdh_msg.c | 3 - drivers/net/zxdh/zxdh_msg.h | 27 +++- 7 files changed, 346 insertions(+), 4 deletions(-) create mode 100644 drivers/net/zxdh/zxdh_common.c create mode 100644 drivers/net/zxdh/zxdh_common.h diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build index 9d0b5b9fd3..9aec47e68f 100644 --- a/drivers/net/zxdh/meson.build +++ b/drivers/net/zxdh/meson.build @@ -17,4 +17,5 @@ sources = files( 'zxdh_ethdev.c', 'zxdh_pci.c', 'zxdh_msg.c', + 'zxdh_common.c', ) diff --git a/drivers/net/zxdh/zxdh_common.c b/drivers/net/zxdh/zxdh_common.c new file mode 100644 index 0000000000..140d0f2322 --- /dev/null +++ b/drivers/net/zxdh/zxdh_common.c @@ -0,0 +1,249 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#include <stdint.h> +#include <string.h> + +#include <ethdev_driver.h> +#include <rte_malloc.h> +#include <rte_memcpy.h> + +#include "zxdh_ethdev.h" +#include "zxdh_logs.h" +#include "zxdh_msg.h" +#include "zxdh_common.h" + +#define ZXDH_MSG_RSP_SIZE_MAX 512 + +#define ZXDH_COMMON_TABLE_READ 0 +#define ZXDH_COMMON_TABLE_WRITE 1 + +#define ZXDH_COMMON_FIELD_PHYPORT 6 + +#define RSC_TBL_CONTENT_LEN_MAX (257 * 2) + +#define REPS_HEADER_PAYLOAD_OFFSET 4 +#define TBL_MSG_PRO_SUCCESS 0xaa + +struct zxdh_common_msg { + uint8_t type; /* 0:read table 1:write table */ + uint8_t field; + uint16_t pcie_id; + uint16_t slen; /* Data length for write table */ + uint16_t reserved; +} __rte_packed; + +struct zxdh_common_rsp_hdr { + uint8_t rsp_status; + uint16_t rsp_len; + uint8_t reserved; + uint8_t payload_status; + uint8_t rsv; + uint16_t payload_len; +} __rte_packed; + +struct tbl_msg_header { + uint8_t type; /* r/w */ + uint8_t field; + uint16_t pcieid; + uint16_t slen; + uint16_t rsv; +}; +struct tbl_msg_reps_header { + uint8_t check; + uint8_t rsv; + uint16_t len; +}; + +static int32_t zxdh_fill_common_msg(struct zxdh_hw *hw, + struct zxdh_pci_bar_msg *desc, + uint8_t type, + uint8_t field, + void *buff, + uint16_t buff_size) +{ + uint64_t msg_len = sizeof(struct zxdh_common_msg) + buff_size; + + desc->payload_addr = rte_zmalloc(NULL, msg_len, 0); + if (unlikely(desc->payload_addr == NULL)) { + PMD_DRV_LOG(ERR, "Failed to allocate msg_data"); + return -ENOMEM; + } + memset(desc->payload_addr, 0, msg_len); + desc->payload_len = msg_len; + struct zxdh_common_msg *msg_data = (struct zxdh_common_msg *)desc->payload_addr; + + msg_data->type = type; + msg_data->field = field; + msg_data->pcie_id = hw->pcie_id; + msg_data->slen = buff_size; + if (buff_size != 0) 
+ rte_memcpy(msg_data + 1, buff, buff_size); + + return 0; +} + +static int32_t zxdh_send_command(struct zxdh_hw *hw, + struct zxdh_pci_bar_msg *desc, + enum bar_module_id module_id, + struct zxdh_msg_recviver_mem *msg_rsp) +{ + desc->virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_CTRLCH_OFFSET); + desc->src = hw->is_pf ? MSG_CHAN_END_PF : MSG_CHAN_END_VF; + desc->dst = MSG_CHAN_END_RISC; + desc->module_id = module_id; + desc->src_pcieid = hw->pcie_id; + + msg_rsp->buffer_len = ZXDH_MSG_RSP_SIZE_MAX; + msg_rsp->recv_buffer = rte_zmalloc(NULL, msg_rsp->buffer_len, 0); + if (unlikely(msg_rsp->recv_buffer == NULL)) { + PMD_DRV_LOG(ERR, "Failed to allocate messages response"); + return -ENOMEM; + } + + if (zxdh_bar_chan_sync_msg_send(desc, msg_rsp) != BAR_MSG_OK) { + PMD_DRV_LOG(ERR, "Failed to send sync messages or receive response"); + rte_free(msg_rsp->recv_buffer); + return -1; + } + + return 0; +} + +static int32_t zxdh_common_rsp_check(struct zxdh_msg_recviver_mem *msg_rsp, + void *buff, uint16_t len) +{ + struct zxdh_common_rsp_hdr *rsp_hdr = (struct zxdh_common_rsp_hdr *)msg_rsp->recv_buffer; + + if ((rsp_hdr->payload_status != 0xaa) || (rsp_hdr->payload_len != len)) { + PMD_DRV_LOG(ERR, "Common response is invalid, status:0x%x rsp_len:%d", + rsp_hdr->payload_status, rsp_hdr->payload_len); + return -1; + } + if (len != 0) + rte_memcpy(buff, rsp_hdr + 1, len); + + return 0; +} + +static int32_t zxdh_common_table_read(struct zxdh_hw *hw, uint8_t field, + void *buff, uint16_t buff_size) +{ + struct zxdh_msg_recviver_mem msg_rsp; + struct zxdh_pci_bar_msg desc; + int32_t ret = 0; + + if (!hw->msg_chan_init) { + PMD_DRV_LOG(ERR, "Bar messages channel not initialized"); + return -1; + } + + ret = zxdh_fill_common_msg(hw, &desc, ZXDH_COMMON_TABLE_READ, field, NULL, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to fill common msg"); + return ret; + } + + ret = zxdh_send_command(hw, &desc, BAR_MODULE_TBL, &msg_rsp); + if (ret != 0) + goto free_msg_data; + + ret = zxdh_common_rsp_check(&msg_rsp, buff, buff_size); + if (ret != 0) + goto free_rsp_data; + +free_rsp_data: + rte_free(msg_rsp.recv_buffer); +free_msg_data: + rte_free(desc.payload_addr); + return ret; +} + +int32_t zxdh_phyport_get(struct rte_eth_dev *dev, uint8_t *phyport) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + int32_t ret = zxdh_common_table_read(hw, ZXDH_COMMON_FIELD_PHYPORT, + (void *)phyport, sizeof(*phyport)); + return ret; +} + +static inline void zxdh_fill_res_para(struct rte_eth_dev *dev, struct zxdh_res_para *param) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + param->pcie_id = hw->pcie_id; + param->virt_addr = hw->bar_addr[0] + ZXDH_CTRLCH_OFFSET; + param->src_type = BAR_MODULE_TBL; +} + +static int zxdh_get_res_info(struct zxdh_res_para *dev, uint8_t field, uint8_t *res, uint16_t *len) +{ + if (!res || !dev) + return BAR_MSG_ERR_NULL; + + struct tbl_msg_header tbl_msg = { + .type = TBL_TYPE_READ, + .field = field, + .pcieid = dev->pcie_id, + .slen = 0, + .rsv = 0, + }; + + struct zxdh_pci_bar_msg in = {0}; + + in.virt_addr = dev->virt_addr; + in.payload_addr = &tbl_msg; + in.payload_len = sizeof(tbl_msg); + in.src = dev->src_type; + in.dst = MSG_CHAN_END_RISC; + in.module_id = BAR_MODULE_TBL; + in.src_pcieid = dev->pcie_id; + + uint8_t recv_buf[RSC_TBL_CONTENT_LEN_MAX + 8] = {0}; + struct zxdh_msg_recviver_mem result = { + .recv_buffer = recv_buf, + .buffer_len = sizeof(recv_buf), + }; + int ret = zxdh_bar_chan_sync_msg_send(&in, &result); + + if (ret != BAR_MSG_OK) { + PMD_DRV_LOG(ERR, 
+ "send sync_msg failed. pcieid: 0x%x, ret: %d.\n", dev->pcie_id, ret); + return ret; + } + struct tbl_msg_reps_header *tbl_reps = + (struct tbl_msg_reps_header *)(recv_buf + REPS_HEADER_PAYLOAD_OFFSET); + + if (tbl_reps->check != TBL_MSG_PRO_SUCCESS) { + PMD_DRV_LOG(ERR, + "get resource_field failed. pcieid: 0x%x, ret: %d.\n", dev->pcie_id, ret); + return ret; + } + *len = tbl_reps->len; + memcpy(res, + (recv_buf + REPS_HEADER_PAYLOAD_OFFSET + sizeof(struct tbl_msg_reps_header)), *len); + return ret; +} + +static int zxdh_get_res_panel_id(struct zxdh_res_para *in, uint8_t *panel_id) +{ + uint8_t reps = 0; + uint16_t reps_len = 0; + + if (zxdh_get_res_info(in, TBL_FIELD_PNLID, &reps, &reps_len) != BAR_MSG_OK) + return -1; + + *panel_id = reps; + return BAR_MSG_OK; +} + +int32_t zxdh_pannelid_get(struct rte_eth_dev *dev, uint8_t *pannelid) +{ + struct zxdh_res_para param; + + zxdh_fill_res_para(dev, ¶m); + int32_t ret = zxdh_get_res_panel_id(¶m, pannelid); + return ret; +} diff --git a/drivers/net/zxdh/zxdh_common.h b/drivers/net/zxdh/zxdh_common.h new file mode 100644 index 0000000000..ec7011e820 --- /dev/null +++ b/drivers/net/zxdh/zxdh_common.h @@ -0,0 +1,30 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#ifndef _ZXDH_COMMON_H_ +#define _ZXDH_COMMON_H_ + +#ifdef __cplusplus +extern "C" { +#endif + +#include <stdint.h> +#include <rte_ethdev.h> + +#include "zxdh_ethdev.h" + +struct zxdh_res_para { + uint64_t virt_addr; + uint16_t pcie_id; + uint16_t src_type; /* refer to BAR_DRIVER_TYPE */ +}; + +int32_t zxdh_phyport_get(struct rte_eth_dev *dev, uint8_t *phyport); +int32_t zxdh_pannelid_get(struct rte_eth_dev *dev, uint8_t *pannelid); + +#ifdef __cplusplus +} +#endif + +#endif /* _ZXDH_COMMON_H_ */ diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index d95ab4471a..ee2e1c0d5d 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -10,9 +10,21 @@ #include "zxdh_logs.h" #include "zxdh_pci.h" #include "zxdh_msg.h" +#include "zxdh_common.h" struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; +uint16_t vport_to_vfid(union VPORT v) +{ + /* epid > 4 is local soft queue. 
return 1192 */ + if (v.epid > 4) + return 1192; + if (v.vf_flag) + return v.epid * 256 + v.vfid; + else + return (v.epid * 8 + v.pfid) + 1152; +} + static int32_t zxdh_init_device(struct rte_eth_dev *eth_dev) { struct zxdh_hw *hw = eth_dev->data->dev_private; @@ -44,6 +56,25 @@ static int32_t zxdh_init_device(struct rte_eth_dev *eth_dev) return ret; } +static int zxdh_agent_comm(struct rte_eth_dev *eth_dev, struct zxdh_hw *hw) +{ + if (zxdh_phyport_get(eth_dev, &hw->phyport) != 0) { + PMD_INIT_LOG(ERR, "Failed to get phyport"); + return -1; + } + PMD_INIT_LOG(INFO, "Get phyport success: 0x%x", hw->phyport); + + hw->vfid = vport_to_vfid(hw->vport); + + if (zxdh_pannelid_get(eth_dev, &hw->panel_id) != 0) { + PMD_INIT_LOG(ERR, "Failed to get panel_id"); + return -1; + } + PMD_INIT_LOG(INFO, "Get pannel id success: 0x%x", hw->panel_id); + + return 0; +} + static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) { struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); @@ -103,6 +134,10 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) goto err_zxdh_init; } + ret = zxdh_agent_comm(eth_dev, hw); + if (ret != 0) + goto err_zxdh_init; + return ret; err_zxdh_init: diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index a51181f1ce..2351393009 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -56,6 +56,7 @@ struct zxdh_hw { uint16_t pcie_id; uint16_t device_id; uint16_t port_id; + uint16_t vfid; uint8_t *isr; uint8_t weak_barriers; @@ -63,9 +64,13 @@ struct zxdh_hw { uint8_t mac_addr[RTE_ETHER_ADDR_LEN]; uint8_t duplex; uint8_t is_pf; + uint8_t phyport; + uint8_t panel_id; uint8_t msg_chan_init; }; +uint16_t vport_to_vfid(union VPORT v); + #ifdef __cplusplus } #endif diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c index 4e4930e5a1..5a652a5f39 100644 --- a/drivers/net/zxdh/zxdh_msg.c +++ b/drivers/net/zxdh/zxdh_msg.c @@ -18,8 +18,6 @@ #define REPS_INFO_FLAG_USABLE 0x00 #define BAR_SEQID_NUM_MAX 256 -#define ZXDH_BAR0_INDEX 0 - #define PCIEID_IS_PF_MASK (0x0800) #define PCIEID_PF_IDX_MASK (0x0700) #define PCIEID_VF_IDX_MASK (0x00ff) @@ -43,7 +41,6 @@ #define FW_SHRD_OFFSET (0x5000) #define FW_SHRD_INNER_HW_LABEL_PAT (0x800) #define HW_LABEL_OFFSET (FW_SHRD_OFFSET + FW_SHRD_INNER_HW_LABEL_PAT) -#define ZXDH_CTRLCH_OFFSET (0x2000) #define CHAN_RISC_SPINLOCK_OFFSET (BAR0_SPINLOCK_OFFSET - BAR0_CHAN_RISC_OFFSET) #define CHAN_PFVF_SPINLOCK_OFFSET (BAR0_SPINLOCK_OFFSET - BAR0_CHAN_PFVF_OFFSET) #define CHAN_RISC_LABEL_OFFSET (HW_LABEL_OFFSET - BAR0_CHAN_RISC_OFFSET) diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index 88d27756e2..5b599f8f6a 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -13,9 +13,13 @@ extern "C" { #include <ethdev_driver.h> +#define ZXDH_BAR0_INDEX 0 + +#define ZXDH_CTRLCH_OFFSET (0x2000) + #define ZXDH_MSIX_INTR_MSG_VEC_BASE 1 -#define BAR_MSG_POLLING_SPAN 100 /* sleep us */ +#define BAR_MSG_POLLING_SPAN 100 #define BAR_MSG_POLL_CNT_PER_MS (1 * 1000 / BAR_MSG_POLLING_SPAN) #define BAR_MSG_POLL_CNT_PER_S (1 * 1000 * 1000 / BAR_MSG_POLLING_SPAN) #define BAR_MSG_TIMEOUT_TH (10 * 1000 * 1000 / BAR_MSG_POLLING_SPAN) /* 10s */ @@ -103,6 +107,27 @@ enum bar_module_id { BAR_MSG_MODULE_NUM = 100, }; +enum RES_TBL_FILED { + TBL_FIELD_PCIEID = 0, + TBL_FIELD_BDF = 1, + TBL_FIELD_MSGCH = 2, + TBL_FIELD_DATACH = 3, + TBL_FIELD_VPORT = 4, + TBL_FIELD_PNLID = 5, + TBL_FIELD_PHYPORT = 6, + TBL_FIELD_SERDES_NUM = 7, + TBL_FIELD_NP_PORT = 8, + 
TBL_FIELD_SPEED = 9, + TBL_FIELD_HASHID = 10, + TBL_FIELD_NON, +}; + +enum TBL_MSG_TYPE { + TBL_TYPE_READ, + TBL_TYPE_WRITE, + TBL_TYPE_NON, +}; + struct msix_para { uint16_t pcie_id; uint16_t vector_risc; -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 27954 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
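The common-table access added in zxdh_common.c above is a synchronous request/response over the BAR message channel: the driver frames a tbl_msg_header for the requested field, sends it with zxdh_bar_chan_sync_msg_send(), and parses the tbl_msg_reps_header the RISC side writes back. Below is a minimal caller-side sketch of the two wrappers this patch exports; probe_board_ids() is a hypothetical helper for illustration only, and it assumes the BAR message channel has already been initialized for the port.

    #include <stdint.h>
    #include <ethdev_driver.h>
    #include "zxdh_common.h"

    /* Read the physical port and panel id of an already-probed zxdh port,
     * the same two common-table fields zxdh_agent_comm() fetches at init. */
    static int probe_board_ids(struct rte_eth_dev *dev)
    {
            uint8_t phyport = 0;
            uint8_t panelid = 0;

            if (zxdh_phyport_get(dev, &phyport) != 0)
                    return -1; /* BAR channel send or response check failed */
            if (zxdh_pannelid_get(dev, &panelid) != 0)
                    return -1;
            /* phyport/panelid now hold the values read from the RISC-side table */
            return 0;
    }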
* [PATCH v5 7/9] net/zxdh: add configure zxdh intr implementation 2024-10-15 5:44 ` [PATCH v5 2/9] net/zxdh: add logging implementation Junlong Wang ` (3 preceding siblings ...) 2024-10-15 5:44 ` [PATCH v5 6/9] net/zxdh: add zxdh get device backend infos Junlong Wang @ 2024-10-15 5:44 ` Junlong Wang 2024-10-15 5:44 ` [PATCH v5 8/9] net/zxdh: add zxdh dev infos get ops Junlong Wang 2024-10-15 5:44 ` [PATCH v5 9/9] net/zxdh: add zxdh dev configure ops Junlong Wang 6 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-10-15 5:44 UTC (permalink / raw) To: dev; +Cc: ferruh.yigit, wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 24346 bytes --] configure zxdh intr include risc,dtb. and release intr. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/zxdh_ethdev.c | 302 ++++++++++++++++++++++++++++++++- drivers/net/zxdh/zxdh_ethdev.h | 8 + drivers/net/zxdh/zxdh_msg.c | 187 ++++++++++++++++++++ drivers/net/zxdh/zxdh_msg.h | 11 ++ drivers/net/zxdh/zxdh_pci.c | 62 +++++++ drivers/net/zxdh/zxdh_pci.h | 12 ++ 6 files changed, 581 insertions(+), 1 deletion(-) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index ee2e1c0d5d..4f6711c9af 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -25,6 +25,302 @@ uint16_t vport_to_vfid(union VPORT v) return (v.epid * 8 + v.pfid) + 1152; } +static void zxdh_queues_unbind_intr(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + int32_t i; + + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + VTPCI_OPS(hw)->set_queue_irq(hw, hw->vqs[i * 2], ZXDH_MSI_NO_VECTOR); + VTPCI_OPS(hw)->set_queue_irq(hw, hw->vqs[i * 2 + 1], ZXDH_MSI_NO_VECTOR); + } +} + + +static int32_t zxdh_intr_unmask(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + if (rte_intr_ack(dev->intr_handle) < 0) + return -1; + + hw->use_msix = zxdh_vtpci_msix_detect(RTE_ETH_DEV_TO_PCI(dev)); + + return 0; +} + +static void zxdh_devconf_intr_handler(void *param) +{ + struct rte_eth_dev *dev = param; + + if (zxdh_intr_unmask(dev) < 0) + PMD_DRV_LOG(ERR, "interrupt enable failed"); +} + + +/* Interrupt handler triggered by NIC for handling specific interrupt. */ +static void zxdh_fromriscv_intr_handler(void *param) +{ + struct rte_eth_dev *dev = param; + struct zxdh_hw *hw = dev->data->dev_private; + uint64_t virt_addr = 0; + + virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_CTRLCH_OFFSET); + if (hw->is_pf) { + PMD_INIT_LOG(DEBUG, "zxdh_risc2pf_intr_handler\n"); + zxdh_bar_irq_recv(MSG_CHAN_END_RISC, MSG_CHAN_END_PF, virt_addr, dev); + } else { + PMD_INIT_LOG(DEBUG, "zxdh_riscvf_intr_handler\n"); + zxdh_bar_irq_recv(MSG_CHAN_END_RISC, MSG_CHAN_END_VF, virt_addr, dev); + } +} + +/* Interrupt handler triggered by NIC for handling specific interrupt. 
*/ +static void zxdh_frompfvf_intr_handler(void *param) +{ + struct rte_eth_dev *dev = param; + struct zxdh_hw *hw = dev->data->dev_private; + uint64_t virt_addr = 0; + + virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_MSG_CHAN_PFVFSHARE_OFFSET); + if (hw->is_pf) { + PMD_INIT_LOG(DEBUG, "zxdh_vf2pf_intr_handler\n"); + zxdh_bar_irq_recv(MSG_CHAN_END_VF, MSG_CHAN_END_PF, virt_addr, dev); + } else { + PMD_INIT_LOG(DEBUG, "zxdh_pf2vf_intr_handler"); + zxdh_bar_irq_recv(MSG_CHAN_END_PF, MSG_CHAN_END_VF, virt_addr, dev); + } +} + +static void zxdh_intr_cb_reg(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) + rte_intr_callback_unregister(dev->intr_handle, zxdh_devconf_intr_handler, dev); + + /* register callback to update dev config intr */ + rte_intr_callback_register(dev->intr_handle, zxdh_devconf_intr_handler, dev); + /* Register rsic_v to pf interrupt callback */ + struct rte_intr_handle *tmp = hw->risc_intr + + (MSIX_FROM_PFVF - ZXDH_MSIX_INTR_MSG_VEC_BASE); + + rte_intr_callback_register(tmp, zxdh_frompfvf_intr_handler, dev); + + tmp = hw->risc_intr + (MSIX_FROM_RISCV - ZXDH_MSIX_INTR_MSG_VEC_BASE); + rte_intr_callback_register(tmp, zxdh_fromriscv_intr_handler, dev); +} + +static void zxdh_intr_cb_unreg(struct rte_eth_dev *dev) +{ + if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) + rte_intr_callback_unregister(dev->intr_handle, zxdh_devconf_intr_handler, dev); + + struct zxdh_hw *hw = dev->data->dev_private; + + /* register callback to update dev config intr */ + rte_intr_callback_unregister(dev->intr_handle, zxdh_devconf_intr_handler, dev); + /* Register rsic_v to pf interrupt callback */ + struct rte_intr_handle *tmp = hw->risc_intr + + (MSIX_FROM_PFVF - ZXDH_MSIX_INTR_MSG_VEC_BASE); + + rte_intr_callback_unregister(tmp, zxdh_frompfvf_intr_handler, dev); + tmp = hw->risc_intr + (MSIX_FROM_RISCV - ZXDH_MSIX_INTR_MSG_VEC_BASE); + rte_intr_callback_unregister(tmp, zxdh_fromriscv_intr_handler, dev); +} + +static int32_t zxdh_intr_disable(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + if (!hw->intr_enabled) + return 0; + + zxdh_intr_cb_unreg(dev); + if (rte_intr_disable(dev->intr_handle) < 0) + return -1; + + hw->intr_enabled = 0; + return 0; +} + +static int32_t zxdh_intr_enable(struct rte_eth_dev *dev) +{ + int ret = 0; + struct zxdh_hw *hw = dev->data->dev_private; + + if (!hw->intr_enabled) { + zxdh_intr_cb_reg(dev); + ret = rte_intr_enable(dev->intr_handle); + if (unlikely(ret)) + PMD_INIT_LOG(ERR, "Failed to enable %s intr", dev->data->name); + + hw->intr_enabled = 1; + } + return ret; +} + +static int32_t zxdh_intr_release(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) + VTPCI_OPS(hw)->set_config_irq(hw, ZXDH_MSI_NO_VECTOR); + + zxdh_queues_unbind_intr(dev); + zxdh_intr_disable(dev); + + rte_intr_efd_disable(dev->intr_handle); + rte_intr_vec_list_free(dev->intr_handle); + rte_free(hw->risc_intr); + hw->risc_intr = NULL; + rte_free(hw->dtb_intr); + hw->dtb_intr = NULL; + return 0; +} + +static int32_t zxdh_setup_risc_interrupts(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint8_t i; + + if (!hw->risc_intr) { + PMD_INIT_LOG(ERR, " to allocate risc_intr"); + hw->risc_intr = rte_zmalloc("risc_intr", + ZXDH_MSIX_INTR_MSG_VEC_NUM * sizeof(struct rte_intr_handle), 0); + if (hw->risc_intr == NULL) { + PMD_INIT_LOG(ERR, "Failed to allocate risc_intr"); + return 
-ENOMEM; + } + } + + for (i = 0; i < ZXDH_MSIX_INTR_MSG_VEC_NUM; i++) { + if (dev->intr_handle->efds[i] < 0) { + PMD_INIT_LOG(ERR, "[%u]risc interrupt fd is invalid", i); + rte_free(hw->risc_intr); + hw->risc_intr = NULL; + return -1; + } + + struct rte_intr_handle *intr_handle = hw->risc_intr + i; + + intr_handle->fd = dev->intr_handle->efds[i]; + intr_handle->type = dev->intr_handle->type; + } + + return 0; +} + +static int32_t zxdh_setup_dtb_interrupts(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + if (!hw->dtb_intr) { + hw->dtb_intr = rte_zmalloc("dtb_intr", sizeof(struct rte_intr_handle), 0); + if (hw->dtb_intr == NULL) { + PMD_INIT_LOG(ERR, "Failed to allocate dtb_intr"); + return -ENOMEM; + } + } + + if (dev->intr_handle->efds[ZXDH_MSIX_INTR_DTB_VEC - 1] < 0) { + PMD_INIT_LOG(ERR, "[%d]dtb interrupt fd is invalid", ZXDH_MSIX_INTR_DTB_VEC - 1); + rte_free(hw->dtb_intr); + hw->dtb_intr = NULL; + return -1; + } + hw->dtb_intr->fd = dev->intr_handle->efds[ZXDH_MSIX_INTR_DTB_VEC - 1]; + hw->dtb_intr->type = dev->intr_handle->type; + return 0; +} + +static int32_t zxdh_queues_bind_intr(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + int32_t i; + uint16_t vec; + + if (!dev->data->dev_conf.intr_conf.rxq) { + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + vec = VTPCI_OPS(hw)->set_queue_irq(hw, + hw->vqs[i * 2], ZXDH_MSI_NO_VECTOR); + PMD_INIT_LOG(DEBUG, "vq%d irq set 0x%x, get 0x%x", + i * 2, ZXDH_MSI_NO_VECTOR, vec); + } + } else { + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + vec = VTPCI_OPS(hw)->set_queue_irq(hw, + hw->vqs[i * 2], i + ZXDH_QUEUE_INTR_VEC_BASE); + PMD_INIT_LOG(DEBUG, "vq%d irq set %d, get %d", + i * 2, i + ZXDH_QUEUE_INTR_VEC_BASE, vec); + } + } + /* mask all txq intr */ + for (i = 0; i < dev->data->nb_tx_queues; ++i) { + vec = VTPCI_OPS(hw)->set_queue_irq(hw, + hw->vqs[(i * 2) + 1], ZXDH_MSI_NO_VECTOR); + PMD_INIT_LOG(DEBUG, "vq%d irq set 0x%x, get 0x%x", + (i * 2) + 1, ZXDH_MSI_NO_VECTOR, vec); + } + return 0; +} + +static int32_t zxdh_configure_intr(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + int32_t ret = 0; + + if (!rte_intr_cap_multiple(dev->intr_handle)) { + PMD_INIT_LOG(ERR, "Multiple intr vector not supported"); + return -ENOTSUP; + } + zxdh_intr_release(dev); + uint8_t nb_efd = ZXDH_MSIX_INTR_DTB_VEC_NUM + ZXDH_MSIX_INTR_MSG_VEC_NUM; + + if (dev->data->dev_conf.intr_conf.rxq) + nb_efd += dev->data->nb_rx_queues; + + if (rte_intr_efd_enable(dev->intr_handle, nb_efd)) { + PMD_INIT_LOG(ERR, "Fail to create eventfd"); + return -1; + } + + if (rte_intr_vec_list_alloc(dev->intr_handle, "intr_vec", + hw->max_queue_pairs + ZXDH_INTR_NONQUE_NUM)) { + PMD_INIT_LOG(ERR, "Failed to allocate %u rxq vectors", + hw->max_queue_pairs + ZXDH_INTR_NONQUE_NUM); + return -ENOMEM; + } + PMD_INIT_LOG(DEBUG, "allocate %u rxq vectors", dev->intr_handle->vec_list_size); + if (zxdh_setup_risc_interrupts(dev) != 0) { + PMD_INIT_LOG(ERR, "Error setting up rsic_v interrupts!"); + ret = -1; + goto free_intr_vec; + } + if (zxdh_setup_dtb_interrupts(dev) != 0) { + PMD_INIT_LOG(ERR, "Error setting up dtb interrupts!"); + ret = -1; + goto free_intr_vec; + } + + if (zxdh_queues_bind_intr(dev) < 0) { + PMD_INIT_LOG(ERR, "Failed to bind queue/interrupt"); + ret = -1; + goto free_intr_vec; + } + + if (zxdh_intr_enable(dev) < 0) { + PMD_DRV_LOG(ERR, "interrupt enable failed"); + ret = -1; + goto free_intr_vec; + } + return 0; + +free_intr_vec: + zxdh_intr_release(dev); + return ret; +} + static 
int32_t zxdh_init_device(struct rte_eth_dev *eth_dev) { struct zxdh_hw *hw = eth_dev->data->dev_private; @@ -138,10 +434,14 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) if (ret != 0) goto err_zxdh_init; + ret = zxdh_configure_intr(eth_dev); + if (ret != 0) + goto err_zxdh_init; + return ret; err_zxdh_init: - zxdh_bar_msg_chan_exit(); + zxdh_intr_release(eth_dev); rte_free(eth_dev->data->mac_addrs); eth_dev->data->mac_addrs = NULL; return ret; diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index 2351393009..7c5f5940cb 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -11,6 +11,10 @@ extern "C" { #include <rte_ether.h> #include "ethdev_driver.h" +#include <rte_interrupts.h> +#include <eal_interrupts.h> + +#include "zxdh_queue.h" /* ZXDH PCI vendor/device ID. */ #define PCI_VENDOR_ID_ZTE 0x1cf2 @@ -44,6 +48,9 @@ struct zxdh_hw { struct rte_eth_dev *eth_dev; struct zxdh_pci_common_cfg *common_cfg; struct zxdh_net_config *dev_cfg; + struct rte_intr_handle *risc_intr; + struct rte_intr_handle *dtb_intr; + struct virtqueue **vqs; union VPORT vport; uint64_t bar_addr[ZXDH_NUM_BARS]; @@ -60,6 +67,7 @@ struct zxdh_hw { uint8_t *isr; uint8_t weak_barriers; + uint8_t intr_enabled; uint8_t use_msix; uint8_t mac_addr[RTE_ETHER_ADDR_LEN]; uint8_t duplex; diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c index 5a652a5f39..2a1228288f 100644 --- a/drivers/net/zxdh/zxdh_msg.c +++ b/drivers/net/zxdh/zxdh_msg.c @@ -91,6 +91,12 @@ #define BAR_SUBCHAN_INDEX_SEND 0 #define BAR_SUBCHAN_INDEX_RECV 1 +#define BAR_CHAN_MSG_SYNC 0 +#define BAR_CHAN_MSG_NO_EMEC 0 +#define BAR_CHAN_MSG_EMEC 1 +#define BAR_CHAN_MSG_NO_ACK 0 +#define BAR_CHAN_MSG_ACK 1 + uint8_t subchan_id_tbl[BAR_MSG_SRC_NUM][BAR_MSG_DST_NUM] = { {BAR_SUBCHAN_INDEX_SEND, BAR_SUBCHAN_INDEX_SEND, BAR_SUBCHAN_INDEX_SEND}, {BAR_SUBCHAN_INDEX_SEND, BAR_SUBCHAN_INDEX_SEND, BAR_SUBCHAN_INDEX_RECV}, @@ -130,6 +136,36 @@ struct seqid_ring { }; struct seqid_ring g_seqid_ring = {0}; +static inline const char *module_id_name(int val) +{ + switch (val) { + case BAR_MODULE_DBG: return "BAR_MODULE_DBG"; + case BAR_MODULE_TBL: return "BAR_MODULE_TBL"; + case BAR_MODULE_MISX: return "BAR_MODULE_MISX"; + case BAR_MODULE_SDA: return "BAR_MODULE_SDA"; + case BAR_MODULE_RDMA: return "BAR_MODULE_RDMA"; + case BAR_MODULE_DEMO: return "BAR_MODULE_DEMO"; + case BAR_MODULE_SMMU: return "BAR_MODULE_SMMU"; + case BAR_MODULE_MAC: return "BAR_MODULE_MAC"; + case BAR_MODULE_VDPA: return "BAR_MODULE_VDPA"; + case BAR_MODULE_VQM: return "BAR_MODULE_VQM"; + case BAR_MODULE_NP: return "BAR_MODULE_NP"; + case BAR_MODULE_VPORT: return "BAR_MODULE_VPORT"; + case BAR_MODULE_BDF: return "BAR_MODULE_BDF"; + case BAR_MODULE_RISC_READY: return "BAR_MODULE_RISC_READY"; + case BAR_MODULE_REVERSE: return "BAR_MODULE_REVERSE"; + case BAR_MDOULE_NVME: return "BAR_MDOULE_NVME"; + case BAR_MDOULE_NPSDK: return "BAR_MDOULE_NPSDK"; + case BAR_MODULE_NP_TODO: return "BAR_MODULE_NP_TODO"; + case MODULE_BAR_MSG_TO_PF: return "MODULE_BAR_MSG_TO_PF"; + case MODULE_BAR_MSG_TO_VF: return "MODULE_BAR_MSG_TO_VF"; + case MODULE_FLASH: return "MODULE_FLASH"; + case BAR_MODULE_OFFSET_GET: return "BAR_MODULE_OFFSET_GET"; + case BAR_EVENT_OVS_WITH_VCB: return "BAR_EVENT_OVS_WITH_VCB"; + default: return "NA"; + } +} + static uint16_t pcie_id_to_hard_lock(uint16_t src_pcieid, uint8_t dst) { uint16_t lock_id = 0; @@ -803,3 +839,154 @@ int zxdh_bar_msg_chan_exit(void) g_dev_stat.is_res_init = false; return BAR_MSG_OK; } + 
+static uint64_t zxdh_recv_addr_get(uint8_t src_type, uint8_t dst_type, uint64_t virt_addr) +{ + uint8_t src = zxdh_bar_msg_dst_index_trans(src_type); + uint8_t dst = zxdh_bar_msg_src_index_trans(dst_type); + + if (src == BAR_MSG_SRC_ERR || dst == BAR_MSG_DST_ERR) + return 0; + + uint8_t chan_id = chan_id_tbl[dst][src]; + uint8_t subchan_id = 1 - subchan_id_tbl[dst][src]; + + return zxdh_subchan_addr_cal(virt_addr, chan_id, subchan_id); +} + +static void zxdh_bar_msg_ack_async_msg_proc(struct bar_msg_header *msg_header, + uint8_t *receiver_buff) +{ + struct seqid_item *reps_info = &g_seqid_ring.reps_info_tbl[msg_header->msg_id]; + + if (reps_info->flag != REPS_INFO_FLAG_USED) { + PMD_MSG_LOG(ERR, "msg_id: %u is released", msg_header->msg_id); + return; + } + if (msg_header->len > reps_info->buffer_len - 4) { + PMD_MSG_LOG(ERR, "reps_buf_len is %u, but reps_msg_len is %u", + reps_info->buffer_len, msg_header->len + 4); + goto free_id; + } + uint8_t *reps_buffer = (uint8_t *)reps_info->reps_addr; + + rte_memcpy(reps_buffer + 4, receiver_buff, msg_header->len); + *(uint16_t *)(reps_buffer + 1) = msg_header->len; + *(uint8_t *)(reps_info->reps_addr) = REPS_HEADER_REPLYED; + +free_id: + zxdh_bar_chan_msgid_free(msg_header->msg_id); +} + +zxdh_bar_chan_msg_recv_callback msg_recv_func_tbl[BAR_MSG_MODULE_NUM]; +static void zxdh_bar_msg_sync_msg_proc(uint64_t reply_addr, struct bar_msg_header *msg_header, + uint8_t *receiver_buff, void *dev) +{ + uint8_t *reps_buffer = rte_malloc(NULL, BAR_MSG_PAYLOAD_MAX_LEN, 0); + + if (reps_buffer == NULL) + return; + + zxdh_bar_chan_msg_recv_callback recv_func = msg_recv_func_tbl[msg_header->module_id]; + uint16_t reps_len = 0; + + recv_func(receiver_buff, msg_header->len, reps_buffer, &reps_len, dev); + msg_header->ack = BAR_CHAN_MSG_ACK; + msg_header->len = reps_len; + zxdh_bar_chan_msg_header_set(reply_addr, msg_header); + zxdh_bar_chan_msg_payload_set(reply_addr, reps_buffer, reps_len); + zxdh_bar_chan_msg_valid_set(reply_addr, BAR_MSG_CHAN_USABLE); + rte_free(reps_buffer); +} + +static uint64_t zxdh_reply_addr_get(uint8_t sync, uint8_t src_type, + uint8_t dst_type, uint64_t virt_addr) +{ + uint8_t src = zxdh_bar_msg_dst_index_trans(src_type); + uint8_t dst = zxdh_bar_msg_src_index_trans(dst_type); + + if (src == BAR_MSG_SRC_ERR || dst == BAR_MSG_DST_ERR) + return 0; + + uint8_t chan_id = chan_id_tbl[dst][src]; + uint8_t subchan_id = 1 - subchan_id_tbl[dst][src]; + uint64_t recv_rep_addr; + + if (sync == BAR_CHAN_MSG_SYNC) + recv_rep_addr = zxdh_subchan_addr_cal(virt_addr, chan_id, subchan_id); + else + recv_rep_addr = zxdh_subchan_addr_cal(virt_addr, chan_id, 1 - subchan_id); + + return recv_rep_addr; +} + +static uint16_t zxdh_bar_chan_msg_header_check(struct bar_msg_header *msg_header) +{ + if (msg_header->valid != BAR_MSG_CHAN_USED) { + PMD_MSG_LOG(ERR, "recv header ERR: valid label is not used."); + return BAR_MSG_ERR_MODULE; + } + uint8_t module_id = msg_header->module_id; + + if (module_id >= (uint8_t)BAR_MSG_MODULE_NUM) { + PMD_MSG_LOG(ERR, "recv header ERR: invalid module_id: %u.", module_id); + return BAR_MSG_ERR_MODULE; + } + uint16_t len = msg_header->len; + + if (len > BAR_MSG_PAYLOAD_MAX_LEN) { + PMD_MSG_LOG(ERR, "recv header ERR: invalid mesg len: %u.", len); + return BAR_MSG_ERR_LEN; + } + if (msg_recv_func_tbl[msg_header->module_id] == NULL) { + PMD_MSG_LOG(ERR, "recv header ERR: module:%s(%u) doesn't register", + module_id_name(module_id), module_id); + return BAR_MSG_ERR_MODULE_NOEXIST; + } + return BAR_MSG_OK; +} + +int 
zxdh_bar_irq_recv(uint8_t src, uint8_t dst, uint64_t virt_addr, void *dev) +{ + struct bar_msg_header msg_header; + uint64_t recv_addr = 0; + uint16_t ret = 0; + + recv_addr = zxdh_recv_addr_get(src, dst, virt_addr); + if (recv_addr == 0) { + PMD_MSG_LOG(ERR, "invalid driver type(src:%u, dst:%u).", src, dst); + return -1; + } + + zxdh_bar_chan_msg_header_get(recv_addr, &msg_header); + ret = zxdh_bar_chan_msg_header_check(&msg_header); + + if (ret != BAR_MSG_OK) { + PMD_MSG_LOG(ERR, "recv msg_head err, ret: %u.", ret); + return -1; + } + + uint8_t *recved_msg = rte_malloc(NULL, msg_header.len, 0); + if (recved_msg == NULL) { + PMD_MSG_LOG(ERR, "malloc temp buff failed."); + return -1; + } + zxdh_bar_chan_msg_payload_get(recv_addr, recved_msg, msg_header.len); + + uint64_t reps_addr = zxdh_reply_addr_get(msg_header.sync, src, dst, virt_addr); + + if (msg_header.sync == BAR_CHAN_MSG_SYNC) { + zxdh_bar_msg_sync_msg_proc(reps_addr, &msg_header, recved_msg, dev); + goto exit; + } + zxdh_bar_chan_msg_valid_set(recv_addr, BAR_MSG_CHAN_USABLE); + if (msg_header.ack == BAR_CHAN_MSG_ACK) { + zxdh_bar_msg_ack_async_msg_proc(&msg_header, recved_msg); + goto exit; + } + return 0; + +exit: + rte_free(recved_msg); + return BAR_MSG_OK; +} diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index 5b599f8f6a..2e4e6d3ba6 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -16,8 +16,15 @@ extern "C" { #define ZXDH_BAR0_INDEX 0 #define ZXDH_CTRLCH_OFFSET (0x2000) +#define ZXDH_MSG_CHAN_PFVFSHARE_OFFSET (ZXDH_CTRLCH_OFFSET + 0x1000) #define ZXDH_MSIX_INTR_MSG_VEC_BASE 1 +#define ZXDH_MSIX_INTR_MSG_VEC_NUM 3 +#define ZXDH_MSIX_INTR_DTB_VEC (ZXDH_MSIX_INTR_MSG_VEC_BASE + ZXDH_MSIX_INTR_MSG_VEC_NUM) +#define ZXDH_MSIX_INTR_DTB_VEC_NUM 1 +#define ZXDH_INTR_NONQUE_NUM (ZXDH_MSIX_INTR_MSG_VEC_NUM + ZXDH_MSIX_INTR_DTB_VEC_NUM + 1) +#define ZXDH_QUEUE_INTR_VEC_BASE (ZXDH_MSIX_INTR_DTB_VEC + ZXDH_MSIX_INTR_DTB_VEC_NUM) /* 5 */ +#define ZXDH_QUEUE_INTR_VEC_NUM 256 #define BAR_MSG_POLLING_SPAN 100 #define BAR_MSG_POLL_CNT_PER_MS (1 * 1000 / BAR_MSG_POLLING_SPAN) @@ -202,6 +209,9 @@ struct bar_msg_header { uint16_t dst_pcieid; /* used in PF-->VF */ }; +typedef int (*zxdh_bar_chan_msg_recv_callback)(void *pay_load, uint16_t len, + void *reps_buffer, uint16_t *reps_len, void *dev); + int zxdh_msg_chan_init(void); int zxdh_bar_msg_chan_exit(void); int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev); @@ -209,6 +219,7 @@ int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev); int zxdh_msg_chan_enable(struct rte_eth_dev *dev); int zxdh_bar_chan_sync_msg_send(struct zxdh_pci_bar_msg *in, struct zxdh_msg_recviver_mem *result); +int zxdh_bar_irq_recv(uint8_t src, uint8_t dst, uint64_t virt_addr, void *dev); #ifdef __cplusplus } diff --git a/drivers/net/zxdh/zxdh_pci.c b/drivers/net/zxdh/zxdh_pci.c index 73ec640b84..1b953c7d0a 100644 --- a/drivers/net/zxdh/zxdh_pci.c +++ b/drivers/net/zxdh/zxdh_pci.c @@ -96,6 +96,24 @@ static void zxdh_set_features(struct zxdh_hw *hw, uint64_t features) rte_write32(features >> 32, &hw->common_cfg->guest_feature); } +static uint16_t zxdh_set_config_irq(struct zxdh_hw *hw, uint16_t vec) +{ + rte_write16(vec, &hw->common_cfg->msix_config); + return rte_read16(&hw->common_cfg->msix_config); +} + +static uint16_t zxdh_set_queue_irq(struct zxdh_hw *hw, struct virtqueue *vq, uint16_t vec) +{ + rte_write16(vq->vq_queue_index, &hw->common_cfg->queue_select); + rte_write16(vec, &hw->common_cfg->queue_msix_vector); + return 
rte_read16(&hw->common_cfg->queue_msix_vector); +} + +static uint8_t zxdh_get_isr(struct zxdh_hw *hw) +{ + return rte_read8(hw->isr); +} + const struct zxdh_pci_ops zxdh_dev_pci_ops = { .read_dev_cfg = zxdh_read_dev_config, .write_dev_cfg = zxdh_write_dev_config, @@ -103,8 +121,16 @@ const struct zxdh_pci_ops zxdh_dev_pci_ops = { .set_status = zxdh_set_status, .get_features = zxdh_get_features, .set_features = zxdh_set_features, + .set_queue_irq = zxdh_set_queue_irq, + .set_config_irq = zxdh_set_config_irq, + .get_isr = zxdh_get_isr, }; +uint8_t zxdh_vtpci_isr(struct zxdh_hw *hw) +{ + return VTPCI_OPS(hw)->get_isr(hw); +} + uint16_t zxdh_vtpci_get_features(struct zxdh_hw *hw) { return VTPCI_OPS(hw)->get_features(hw); @@ -288,3 +314,39 @@ int32_t zxdh_get_pci_dev_config(struct zxdh_hw *hw) return 0; } + +enum zxdh_msix_status zxdh_vtpci_msix_detect(struct rte_pci_device *dev) +{ + uint8_t pos = 0; + int32_t ret = rte_pci_read_config(dev, &pos, 1, PCI_CAPABILITY_LIST); + + if (ret != 1) { + PMD_INIT_LOG(ERR, "failed to read pci capability list, ret %d", ret); + return ZXDH_MSIX_NONE; + } + while (pos) { + uint8_t cap[2] = {0}; + + ret = rte_pci_read_config(dev, cap, sizeof(cap), pos); + if (ret != sizeof(cap)) { + PMD_INIT_LOG(ERR, "failed to read pci cap at pos: %x ret %d", pos, ret); + break; + } + if (cap[0] == PCI_CAP_ID_MSIX) { + uint16_t flags = 0; + + ret = rte_pci_read_config(dev, &flags, sizeof(flags), pos + sizeof(cap)); + if (ret != sizeof(flags)) { + PMD_INIT_LOG(ERR, + "failed to read pci cap at pos: %x ret %d", pos + 2, ret); + break; + } + if (flags & PCI_MSIX_ENABLE) + return ZXDH_MSIX_ENABLED; + else + return ZXDH_MSIX_DISABLED; + } + pos = cap[1]; + } + return ZXDH_MSIX_NONE; +} diff --git a/drivers/net/zxdh/zxdh_pci.h b/drivers/net/zxdh/zxdh_pci.h index deda73a65a..677dadd5c8 100644 --- a/drivers/net/zxdh/zxdh_pci.h +++ b/drivers/net/zxdh/zxdh_pci.h @@ -22,6 +22,13 @@ enum zxdh_msix_status { ZXDH_MSIX_ENABLED = 2 }; +/* The bit of the ISR which indicates a device has an interrupt. */ +#define ZXDH_PCI_ISR_INTR 0x1 +/* The bit of the ISR which indicates a device configuration change. */ +#define ZXDH_PCI_ISR_CONFIG 0x2 +/* Vector value used to disable MSI for queue. */ +#define ZXDH_MSI_NO_VECTOR 0x7F + #define PCI_CAPABILITY_LIST 0x34 #define PCI_CAP_ID_VNDR 0x09 #define PCI_CAP_ID_MSIX 0x11 @@ -124,6 +131,9 @@ struct zxdh_pci_ops { uint64_t (*get_features)(struct zxdh_hw *hw); void (*set_features)(struct zxdh_hw *hw, uint64_t features); + uint16_t (*set_queue_irq)(struct zxdh_hw *hw, struct virtqueue *vq, uint16_t vec); + uint16_t (*set_config_irq)(struct zxdh_hw *hw, uint16_t vec); + uint8_t (*get_isr)(struct zxdh_hw *hw); }; struct zxdh_hw_internal { @@ -143,6 +153,8 @@ int32_t zxdh_read_pci_caps(struct rte_pci_device *dev, struct zxdh_hw *hw); int32_t zxdh_get_pci_dev_config(struct zxdh_hw *hw); uint16_t zxdh_vtpci_get_features(struct zxdh_hw *hw); +uint8_t zxdh_vtpci_isr(struct zxdh_hw *hw); +enum zxdh_msix_status zxdh_vtpci_msix_detect(struct rte_pci_device *dev); #ifdef __cplusplus } -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 53002 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
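The constants added to zxdh_msg.h pin down the MSI-X vector layout this patch programs: vector 0 is left for the device-config (LSC) interrupt (hence ZXDH_MSIX_INTR_MSG_VEC_BASE is 1), vectors 1-3 carry the RISC and PF/VF message channels, vector 4 serves the DTB, and rx queue vectors start at ZXDH_QUEUE_INTR_VEC_BASE (5); tx queues are always masked. A small sketch of the mapping zxdh_queues_bind_intr() applies, assuming the headers from this series; rxq_to_msix_vec() is an illustrative helper, not part of the patch.

    #include <stdint.h>
    #include "zxdh_msg.h" /* ZXDH_QUEUE_INTR_VEC_BASE */
    #include "zxdh_pci.h" /* ZXDH_MSI_NO_VECTOR */

    /* Vector an rx queue ends up bound to: rx queue i gets vector 5 + i
     * when rxq interrupts are configured, otherwise it is masked, exactly
     * as tx queues always are. */
    static uint16_t rxq_to_msix_vec(uint16_t rxq_id, int rxq_intr_enabled)
    {
            if (!rxq_intr_enabled)
                    return ZXDH_MSI_NO_VECTOR;
            return (uint16_t)(rxq_id + ZXDH_QUEUE_INTR_VEC_BASE);
    }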
* [PATCH v5 8/9] net/zxdh: add zxdh dev infos get ops 2024-10-15 5:44 ` [PATCH v5 2/9] net/zxdh: add logging implementation Junlong Wang ` (4 preceding siblings ...) 2024-10-15 5:44 ` [PATCH v5 7/9] net/zxdh: add configure zxdh intr implementation Junlong Wang @ 2024-10-15 5:44 ` Junlong Wang 2024-10-15 5:44 ` [PATCH v5 9/9] net/zxdh: add zxdh dev configure ops Junlong Wang 6 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-10-15 5:44 UTC (permalink / raw) To: dev; +Cc: ferruh.yigit, wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 3892 bytes --] Add support for zxdh infos get. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/zxdh_ethdev.c | 62 +++++++++++++++++++++++++++++++++- 1 file changed, 61 insertions(+), 1 deletion(-) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 4f6711c9af..e0f2c1985b 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -12,6 +12,9 @@ #include "zxdh_msg.h" #include "zxdh_common.h" +#define ZXDH_MIN_RX_BUFSIZE 64 +#define ZXDH_MAX_RX_PKTLEN 14000U + struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; uint16_t vport_to_vfid(union VPORT v) @@ -25,6 +28,58 @@ uint16_t vport_to_vfid(union VPORT v) return (v.epid * 8 + v.pfid) + 1152; } +static uint32_t zxdh_dev_speed_capa_get(uint32_t speed) +{ + switch (speed) { + case RTE_ETH_SPEED_NUM_10G: return RTE_ETH_LINK_SPEED_10G; + case RTE_ETH_SPEED_NUM_20G: return RTE_ETH_LINK_SPEED_20G; + case RTE_ETH_SPEED_NUM_25G: return RTE_ETH_LINK_SPEED_25G; + case RTE_ETH_SPEED_NUM_40G: return RTE_ETH_LINK_SPEED_40G; + case RTE_ETH_SPEED_NUM_50G: return RTE_ETH_LINK_SPEED_50G; + case RTE_ETH_SPEED_NUM_56G: return RTE_ETH_LINK_SPEED_56G; + case RTE_ETH_SPEED_NUM_100G: return RTE_ETH_LINK_SPEED_100G; + case RTE_ETH_SPEED_NUM_200G: return RTE_ETH_LINK_SPEED_200G; + default: return 0; + } +} + +static int32_t zxdh_dev_infos_get(struct rte_eth_dev *dev, + struct rte_eth_dev_info *dev_info) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + dev_info->speed_capa = zxdh_dev_speed_capa_get(hw->speed); + dev_info->max_rx_queues = RTE_MIN(hw->max_queue_pairs, ZXDH_RX_QUEUES_MAX); + dev_info->max_tx_queues = RTE_MIN(hw->max_queue_pairs, ZXDH_TX_QUEUES_MAX); + dev_info->min_rx_bufsize = ZXDH_MIN_RX_BUFSIZE; + dev_info->max_rx_pktlen = ZXDH_MAX_RX_PKTLEN; + dev_info->max_mac_addrs = ZXDH_MAX_MAC_ADDRS; + dev_info->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_VLAN_STRIP | + RTE_ETH_RX_OFFLOAD_VLAN_FILTER | + RTE_ETH_RX_OFFLOAD_QINQ_STRIP); + dev_info->rx_offload_capa |= (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_RX_OFFLOAD_UDP_CKSUM | + RTE_ETH_RX_OFFLOAD_TCP_CKSUM | + RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM); + dev_info->rx_offload_capa |= (RTE_ETH_RX_OFFLOAD_SCATTER); + dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TCP_LRO; + dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_RSS_HASH; + + dev_info->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_MULTI_SEGS); + dev_info->tx_offload_capa |= (RTE_ETH_TX_OFFLOAD_TCP_TSO | + RTE_ETH_TX_OFFLOAD_UDP_TSO); + dev_info->tx_offload_capa |= (RTE_ETH_TX_OFFLOAD_VLAN_INSERT | + RTE_ETH_TX_OFFLOAD_QINQ_INSERT | + RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO); + dev_info->tx_offload_capa |= (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_UDP_CKSUM | + RTE_ETH_TX_OFFLOAD_TCP_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM); + + return 0; +} + static void zxdh_queues_unbind_intr(struct rte_eth_dev *dev) { struct zxdh_hw *hw = dev->data->dev_private; @@ 
-321,6 +376,11 @@ static int32_t zxdh_configure_intr(struct rte_eth_dev *dev) return ret; } +/* dev_ops for zxdh, bare necessities for basic operation */ +static const struct eth_dev_ops zxdh_eth_dev_ops = { + .dev_infos_get = zxdh_dev_infos_get, +}; + static int32_t zxdh_init_device(struct rte_eth_dev *eth_dev) { struct zxdh_hw *hw = eth_dev->data->dev_private; @@ -377,7 +437,7 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) struct zxdh_hw *hw = eth_dev->data->dev_private; int ret = 0; - eth_dev->dev_ops = NULL; + eth_dev->dev_ops = &zxdh_eth_dev_ops; /* Allocate memory for storing MAC addresses */ eth_dev->data->mac_addrs = rte_zmalloc("zxdh_mac", -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 8397 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
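From the application side the new op is reached through the standard ethdev API, so the values above (128 rx / 128 tx queue maximums, 64 B minimum rx buffer, 14000 B maximum packet length, and the offload capability flags) are what rte_eth_dev_info_get() reports for a zxdh port. A minimal sketch follows; check_port_caps() is a hypothetical helper, not part of the series.

    #include <rte_ethdev.h>

    /* Validate a probed zxdh port against the capabilities its
     * dev_infos_get op reports before configuring queues. */
    static int check_port_caps(uint16_t port_id, uint16_t want_rxq)
    {
            struct rte_eth_dev_info info;

            if (rte_eth_dev_info_get(port_id, &info) != 0)
                    return -1;
            if (want_rxq > info.max_rx_queues)
                    return -1; /* zxdh caps rx queues at 128 */
            if (!(info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_RSS_HASH))
                    return -1;
            return 0;
    }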
* [PATCH v5 9/9] net/zxdh: add zxdh dev configure ops 2024-10-15 5:44 ` [PATCH v5 2/9] net/zxdh: add logging implementation Junlong Wang ` (5 preceding siblings ...) 2024-10-15 5:44 ` [PATCH v5 8/9] net/zxdh: add zxdh dev infos get ops Junlong Wang @ 2024-10-15 5:44 ` Junlong Wang 2024-10-15 15:37 ` Stephen Hemminger 2024-10-15 15:57 ` Stephen Hemminger 6 siblings, 2 replies; 225+ messages in thread From: Junlong Wang @ 2024-10-15 5:44 UTC (permalink / raw) To: dev; +Cc: ferruh.yigit, wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 39748 bytes --] provided zxdh dev configure ops for queue check,reset,alloc resources,etc. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/meson.build | 11 +- drivers/net/zxdh/zxdh_common.c | 119 +++++++++ drivers/net/zxdh/zxdh_common.h | 12 + drivers/net/zxdh/zxdh_ethdev.c | 459 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev.h | 23 +- drivers/net/zxdh/zxdh_pci.c | 97 +++++++ drivers/net/zxdh/zxdh_pci.h | 28 ++ drivers/net/zxdh/zxdh_queue.c | 131 ++++++++++ drivers/net/zxdh/zxdh_queue.h | 171 ++++++++++++ drivers/net/zxdh/zxdh_rxtx.h | 2 +- 10 files changed, 1045 insertions(+), 8 deletions(-) create mode 100644 drivers/net/zxdh/zxdh_queue.c diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build index 9aec47e68f..cde96d8111 100644 --- a/drivers/net/zxdh/meson.build +++ b/drivers/net/zxdh/meson.build @@ -14,8 +14,9 @@ if not dpdk_conf.has('RTE_ARCH_X86_64') or not dpdk_conf.get('RTE_ARCH_64') endif sources = files( - 'zxdh_ethdev.c', - 'zxdh_pci.c', - 'zxdh_msg.c', - 'zxdh_common.c', - ) + 'zxdh_ethdev.c', + 'zxdh_pci.c', + 'zxdh_msg.c', + 'zxdh_common.c', + 'zxdh_queue.c', + ) diff --git a/drivers/net/zxdh/zxdh_common.c b/drivers/net/zxdh/zxdh_common.c index 140d0f2322..9f23fcb658 100644 --- a/drivers/net/zxdh/zxdh_common.c +++ b/drivers/net/zxdh/zxdh_common.c @@ -20,6 +20,7 @@ #define ZXDH_COMMON_TABLE_WRITE 1 #define ZXDH_COMMON_FIELD_PHYPORT 6 +#define ZXDH_COMMON_FIELD_DATACH 3 #define RSC_TBL_CONTENT_LEN_MAX (257 * 2) @@ -247,3 +248,121 @@ int32_t zxdh_pannelid_get(struct rte_eth_dev *dev, uint8_t *pannelid) int32_t ret = zxdh_get_res_panel_id(¶m, pannelid); return ret; } + +uint32_t zxdh_read_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint64_t baseaddr = (uint64_t)(hw->bar_addr[bar]); + uint32_t val = *((volatile uint32_t *)(baseaddr + reg)); + return val; +} + +void zxdh_write_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg, uint32_t val) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint64_t baseaddr = (uint64_t)(hw->bar_addr[bar]); + *((volatile uint32_t *)(baseaddr + reg)) = val; +} + +int32_t zxdh_acquire_lock(struct zxdh_hw *hw) +{ + uint32_t var = zxdh_read_comm_reg((uint64_t)hw->common_cfg, ZXDH_VF_LOCK_REG); + + /* check whether lock is used */ + if (!(var & ZXDH_VF_LOCK_ENABLE_MASK)) + return -1; + + return 0; +} + +int32_t zxdh_release_lock(struct zxdh_hw *hw) +{ + uint32_t var = zxdh_read_comm_reg((uint64_t)hw->common_cfg, ZXDH_VF_LOCK_REG); + + if (var & ZXDH_VF_LOCK_ENABLE_MASK) { + var &= ~ZXDH_VF_LOCK_ENABLE_MASK; + zxdh_write_comm_reg((uint64_t)hw->common_cfg, ZXDH_VF_LOCK_REG, var); + return 0; + } + + return -1; +} + +uint32_t zxdh_read_comm_reg(uint64_t pci_comm_cfg_baseaddr, uint32_t reg) +{ + uint32_t val = *((volatile uint32_t *)(pci_comm_cfg_baseaddr + reg)); + return val; +} + +void zxdh_write_comm_reg(uint64_t pci_comm_cfg_baseaddr, uint32_t reg, uint32_t val) 
+{ + *((volatile uint32_t *)(pci_comm_cfg_baseaddr + reg)) = val; +} + +static int32_t zxdh_common_table_write(struct zxdh_hw *hw, uint8_t field, + void *buff, uint16_t buff_size) +{ + struct zxdh_pci_bar_msg desc; + struct zxdh_msg_recviver_mem msg_rsp; + int32_t ret = 0; + + if (!hw->msg_chan_init) { + PMD_DRV_LOG(ERR, "Bar messages channel not initialized"); + return -1; + } + if ((buff_size != 0) && (buff == NULL)) { + PMD_DRV_LOG(ERR, "Buff is invalid"); + return -1; + } + + ret = zxdh_fill_common_msg(hw, &desc, ZXDH_COMMON_TABLE_WRITE, + field, buff, buff_size); + + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to fill common msg"); + return ret; + } + + ret = zxdh_send_command(hw, &desc, BAR_MODULE_TBL, &msg_rsp); + if (ret != 0) + goto free_msg_data; + + ret = zxdh_common_rsp_check(&msg_rsp, NULL, 0); + if (ret != 0) + goto free_rsp_data; + +free_rsp_data: + rte_free(msg_rsp.recv_buffer); +free_msg_data: + rte_free(desc.payload_addr); + return ret; +} + +int32_t zxdh_datach_set(struct rte_eth_dev *dev) +{ + /* payload: queue_num(2byte) + pch1(2byte) + ** + pchn */ + struct zxdh_hw *hw = dev->data->dev_private; + uint16_t buff_size = (hw->queue_num + 1) * 2; + void *buff = rte_zmalloc(NULL, buff_size, 0); + + if (unlikely(buff == NULL)) { + PMD_DRV_LOG(ERR, "Failed to allocate buff"); + return -ENOMEM; + } + memset(buff, 0, buff_size); + uint16_t *pdata = (uint16_t *)buff; + *pdata++ = hw->queue_num; + uint16_t i; + + for (i = 0; i < hw->queue_num; i++) + *(pdata + i) = hw->channel_context[i].ph_chno; + + int32_t ret = zxdh_common_table_write(hw, ZXDH_COMMON_FIELD_DATACH, + (void *)buff, buff_size); + + if (ret != 0) + PMD_DRV_LOG(ERR, "Failed to setup data channel of common table"); + + rte_free(buff); + return ret; +} diff --git a/drivers/net/zxdh/zxdh_common.h b/drivers/net/zxdh/zxdh_common.h index ec7011e820..5205d9db54 100644 --- a/drivers/net/zxdh/zxdh_common.h +++ b/drivers/net/zxdh/zxdh_common.h @@ -14,6 +14,10 @@ extern "C" { #include "zxdh_ethdev.h" +#define ZXDH_VF_LOCK_REG 0x90 +#define ZXDH_VF_LOCK_ENABLE_MASK 0x1 +#define ZXDH_ACQUIRE_CHANNEL_NUM_MAX 10 + struct zxdh_res_para { uint64_t virt_addr; uint16_t pcie_id; @@ -23,6 +27,14 @@ struct zxdh_res_para { int32_t zxdh_phyport_get(struct rte_eth_dev *dev, uint8_t *phyport); int32_t zxdh_pannelid_get(struct rte_eth_dev *dev, uint8_t *pannelid); +uint32_t zxdh_read_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg); +void zxdh_write_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg, uint32_t val); +int32_t zxdh_release_lock(struct zxdh_hw *hw); +int32_t zxdh_acquire_lock(struct zxdh_hw *hw); +uint32_t zxdh_read_comm_reg(uint64_t pci_comm_cfg_baseaddr, uint32_t reg); +void zxdh_write_comm_reg(uint64_t pci_comm_cfg_baseaddr, uint32_t reg, uint32_t val); +int32_t zxdh_datach_set(struct rte_eth_dev *dev); + #ifdef __cplusplus } #endif diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index e0f2c1985b..7143dea7ae 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -11,6 +11,7 @@ #include "zxdh_pci.h" #include "zxdh_msg.h" #include "zxdh_common.h" +#include "zxdh_queue.h" #define ZXDH_MIN_RX_BUFSIZE 64 #define ZXDH_MAX_RX_PKTLEN 14000U @@ -376,8 +377,466 @@ static int32_t zxdh_configure_intr(struct rte_eth_dev *dev) return ret; } +static int32_t zxdh_features_update(struct zxdh_hw *hw, + const struct rte_eth_rxmode *rxmode, + const struct rte_eth_txmode *txmode) +{ + uint64_t rx_offloads = rxmode->offloads; + uint64_t tx_offloads = txmode->offloads; + 
uint64_t req_features = hw->guest_features; + + if (rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM)) + req_features |= (1ULL << ZXDH_NET_F_GUEST_CSUM); + + if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) + req_features |= (1ULL << ZXDH_NET_F_GUEST_TSO4) | + (1ULL << ZXDH_NET_F_GUEST_TSO6); + + if (tx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM)) + req_features |= (1ULL << ZXDH_NET_F_CSUM); + + if (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO) + req_features |= (1ULL << ZXDH_NET_F_HOST_TSO4) | + (1ULL << ZXDH_NET_F_HOST_TSO6); + + if (tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_TSO) + req_features |= (1ULL << ZXDH_NET_F_HOST_UFO); + + req_features = req_features & hw->host_features; + hw->guest_features = req_features; + + VTPCI_OPS(hw)->set_features(hw, req_features); + + if ((rx_offloads & (RTE_ETH_TX_OFFLOAD_UDP_CKSUM | RTE_ETH_TX_OFFLOAD_TCP_CKSUM)) && + !vtpci_with_feature(hw, ZXDH_NET_F_GUEST_CSUM)) { + PMD_DRV_LOG(ERR, "rx checksum not available on this host"); + return -ENOTSUP; + } + + if ((rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) && + (!vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO4) || + !vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO6))) { + PMD_DRV_LOG(ERR, "Large Receive Offload not available on this host"); + return -ENOTSUP; + } + return 0; +} + +static bool rx_offload_enabled(struct zxdh_hw *hw) +{ + return vtpci_with_feature(hw, ZXDH_NET_F_GUEST_CSUM) || + vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO4) || + vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO6); +} + +static bool tx_offload_enabled(struct zxdh_hw *hw) +{ + return vtpci_with_feature(hw, ZXDH_NET_F_CSUM) || + vtpci_with_feature(hw, ZXDH_NET_F_HOST_TSO4) || + vtpci_with_feature(hw, ZXDH_NET_F_HOST_TSO6) || + vtpci_with_feature(hw, ZXDH_NET_F_HOST_UFO); +} + +static void zxdh_dev_free_mbufs(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint16_t nr_vq = hw->queue_num; + uint32_t i = 0; + + const char *type = NULL; + struct virtqueue *vq = NULL; + struct rte_mbuf *buf = NULL; + int32_t queue_type = 0; + + if (hw->vqs == NULL) + return; + + for (i = 0; i < nr_vq; i++) { + vq = hw->vqs[i]; + if (!vq) + continue; + + queue_type = zxdh_get_queue_type(i); + if (queue_type == VTNET_RQ) + type = "rxq"; + else if (queue_type == VTNET_TQ) + type = "txq"; + else + continue; + PMD_INIT_LOG(DEBUG, "Before freeing %s[%d] used and unused buf", type, i); + + while ((buf = zxdh_virtqueue_detach_unused(vq)) != NULL) + rte_pktmbuf_free(buf); + } +} + +static int32_t zxdh_get_available_channel(struct rte_eth_dev *dev, uint8_t queue_type) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint16_t base = (queue_type == VTNET_RQ) ? 
0 : 1; + uint16_t i = 0; + uint16_t j = 0; + uint16_t done = 0; + uint16_t timeout = 0; + + while ((timeout++) < ZXDH_ACQUIRE_CHANNEL_NUM_MAX) { + rte_delay_us_block(1000); + /* acquire hw lock */ + if (zxdh_acquire_lock(hw) < 0) { + PMD_INIT_LOG(ERR, "Acquiring hw lock got failed, timeout: %d", timeout); + continue; + } + /* Iterate COI table and find free channel */ + for (i = ZXDH_QUEUES_BASE / 32; i < ZXDH_TOTAL_QUEUES_NUM / 32; i++) { + uint32_t addr = ZXDH_QUERES_SHARE_BASE + (i * sizeof(uint32_t)); + uint32_t var = zxdh_read_bar_reg(dev, ZXDH_BAR0_INDEX, addr); + + for (j = base; j < 32; j += 2) { + /* Got the available channel & update COI table */ + if ((var & (1 << j)) == 0) { + var |= (1 << j); + zxdh_write_bar_reg(dev, ZXDH_BAR0_INDEX, addr, var); + done = 1; + break; + } + } + if (done) + break; + } + break; + } + if (timeout >= ZXDH_ACQUIRE_CHANNEL_NUM_MAX) { + PMD_INIT_LOG(ERR, "Failed to acquire channel"); + return -1; + } + zxdh_release_lock(hw); + /* check for no channel condition */ + if (done != 1) { + PMD_INIT_LOG(ERR, "NO availd queues\n"); + return -1; + } + /* reruen available channel ID */ + return (i * 32) + j; +} + +static int32_t zxdh_acquire_channel(struct rte_eth_dev *dev, uint16_t lch) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + if (hw->channel_context[lch].valid == 1) { + PMD_INIT_LOG(DEBUG, "Logic channel:%u already acquired Physics channel:%u", + lch, hw->channel_context[lch].ph_chno); + return hw->channel_context[lch].ph_chno; + } + int32_t pch = zxdh_get_available_channel(dev, zxdh_get_queue_type(lch)); + + if (pch < 0) { + PMD_INIT_LOG(ERR, "Failed to acquire channel"); + return -1; + } + hw->channel_context[lch].ph_chno = (uint16_t)pch; + hw->channel_context[lch].valid = 1; + PMD_INIT_LOG(DEBUG, "Acquire channel success lch:%u --> pch:%d", lch, pch); + return 0; +} + +static void zxdh_init_vring(struct virtqueue *vq) +{ + int32_t size = vq->vq_nentries; + uint8_t *ring_mem = vq->vq_ring_virt_mem; + + memset(ring_mem, 0, vq->vq_ring_size); + + vq->vq_used_cons_idx = 0; + vq->vq_desc_head_idx = 0; + vq->vq_avail_idx = 0; + vq->vq_desc_tail_idx = (uint16_t)(vq->vq_nentries - 1); + vq->vq_free_cnt = vq->vq_nentries; + memset(vq->vq_descx, 0, sizeof(struct vq_desc_extra) * vq->vq_nentries); + vring_init_packed(&vq->vq_packed.ring, ring_mem, ZXDH_PCI_VRING_ALIGN, size); + vring_desc_init_packed(vq, size); + virtqueue_disable_intr(vq); +} + +static int32_t zxdh_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_logic_qidx) +{ + char vq_name[VIRTQUEUE_MAX_NAME_SZ] = {0}; + char vq_hdr_name[VIRTQUEUE_MAX_NAME_SZ] = {0}; + const struct rte_memzone *mz = NULL; + const struct rte_memzone *hdr_mz = NULL; + uint32_t size = 0; + struct zxdh_hw *hw = dev->data->dev_private; + struct virtnet_rx *rxvq = NULL; + struct virtnet_tx *txvq = NULL; + struct virtqueue *vq = NULL; + size_t sz_hdr_mz = 0; + void *sw_ring = NULL; + int32_t queue_type = zxdh_get_queue_type(vtpci_logic_qidx); + int32_t numa_node = dev->device->numa_node; + uint16_t vtpci_phy_qidx = 0; + uint32_t vq_size = 0; + int32_t ret = 0; + + if (hw->channel_context[vtpci_logic_qidx].valid == 0) { + PMD_INIT_LOG(ERR, "lch %d is invalid", vtpci_logic_qidx); + return -EINVAL; + } + vtpci_phy_qidx = hw->channel_context[vtpci_logic_qidx].ph_chno; + + PMD_INIT_LOG(DEBUG, "vtpci_logic_qidx :%d setting up physical queue: %u on NUMA node %d", + vtpci_logic_qidx, vtpci_phy_qidx, numa_node); + + vq_size = ZXDH_QUEUE_DEPTH; + + if (VTPCI_OPS(hw)->set_queue_num != NULL) + VTPCI_OPS(hw)->set_queue_num(hw, 
vtpci_phy_qidx, vq_size); + + snprintf(vq_name, sizeof(vq_name), "port%d_vq%d", dev->data->port_id, vtpci_phy_qidx); + + size = RTE_ALIGN_CEIL(sizeof(*vq) + vq_size * sizeof(struct vq_desc_extra), + RTE_CACHE_LINE_SIZE); + if (queue_type == VTNET_TQ) { + /* + * For each xmit packet, allocate a zxdh_net_hdr + * and indirect ring elements + */ + sz_hdr_mz = vq_size * sizeof(struct zxdh_tx_region); + } + + vq = rte_zmalloc_socket(vq_name, size, RTE_CACHE_LINE_SIZE, numa_node); + if (vq == NULL) { + PMD_INIT_LOG(ERR, "can not allocate vq"); + return -ENOMEM; + } + hw->vqs[vtpci_logic_qidx] = vq; + + vq->hw = hw; + vq->vq_queue_index = vtpci_phy_qidx; + vq->vq_nentries = vq_size; + + vq->vq_packed.used_wrap_counter = 1; + vq->vq_packed.cached_flags = VRING_PACKED_DESC_F_AVAIL; + vq->vq_packed.event_flags_shadow = 0; + if (queue_type == VTNET_RQ) + vq->vq_packed.cached_flags |= VRING_DESC_F_WRITE; + + /* + * Reserve a memzone for vring elements + */ + size = vring_size(hw, vq_size, ZXDH_PCI_VRING_ALIGN); + vq->vq_ring_size = RTE_ALIGN_CEIL(size, ZXDH_PCI_VRING_ALIGN); + PMD_INIT_LOG(DEBUG, "vring_size: %d, rounded_vring_size: %d", size, vq->vq_ring_size); + + mz = rte_memzone_reserve_aligned(vq_name, vq->vq_ring_size, + numa_node, RTE_MEMZONE_IOVA_CONTIG, + ZXDH_PCI_VRING_ALIGN); + if (mz == NULL) { + if (rte_errno == EEXIST) + mz = rte_memzone_lookup(vq_name); + if (mz == NULL) { + ret = -ENOMEM; + goto fail_q_alloc; + } + } + + memset(mz->addr, 0, mz->len); + + vq->vq_ring_mem = mz->iova; + vq->vq_ring_virt_mem = mz->addr; + + zxdh_init_vring(vq); + + if (sz_hdr_mz) { + snprintf(vq_hdr_name, sizeof(vq_hdr_name), "port%d_vq%d_hdr", + dev->data->port_id, vtpci_phy_qidx); + hdr_mz = rte_memzone_reserve_aligned(vq_hdr_name, sz_hdr_mz, + numa_node, RTE_MEMZONE_IOVA_CONTIG, + RTE_CACHE_LINE_SIZE); + if (hdr_mz == NULL) { + if (rte_errno == EEXIST) + hdr_mz = rte_memzone_lookup(vq_hdr_name); + if (hdr_mz == NULL) { + ret = -ENOMEM; + goto fail_q_alloc; + } + } + } + + if (queue_type == VTNET_RQ) { + size_t sz_sw = (ZXDH_MBUF_BURST_SZ + vq_size) * sizeof(vq->sw_ring[0]); + + sw_ring = rte_zmalloc_socket("sw_ring", sz_sw, RTE_CACHE_LINE_SIZE, numa_node); + if (!sw_ring) { + PMD_INIT_LOG(ERR, "can not allocate RX soft ring"); + ret = -ENOMEM; + goto fail_q_alloc; + } + + vq->sw_ring = sw_ring; + rxvq = &vq->rxq; + rxvq->vq = vq; + rxvq->port_id = dev->data->port_id; + rxvq->mz = mz; + } else { /* queue_type == VTNET_TQ */ + txvq = &vq->txq; + txvq->vq = vq; + txvq->port_id = dev->data->port_id; + txvq->mz = mz; + txvq->virtio_net_hdr_mz = hdr_mz; + txvq->virtio_net_hdr_mem = hdr_mz->iova; + } + + vq->offset = offsetof(struct rte_mbuf, buf_iova); + if (queue_type == VTNET_TQ) { + struct zxdh_tx_region *txr = hdr_mz->addr; + uint32_t i; + + memset(txr, 0, vq_size * sizeof(*txr)); + for (i = 0; i < vq_size; i++) { + /* first indirect descriptor is always the tx header */ + struct vring_packed_desc *start_dp = txr[i].tx_packed_indir; + + vring_desc_init_indirect_packed(start_dp, RTE_DIM(txr[i].tx_packed_indir)); + start_dp->addr = txvq->virtio_net_hdr_mem + i * sizeof(*txr) + + offsetof(struct zxdh_tx_region, tx_hdr); + /* length will be updated to actual pi hdr size when xmit pkt */ + start_dp->len = 0; + } + } + if (VTPCI_OPS(hw)->setup_queue(hw, vq) < 0) { + PMD_INIT_LOG(ERR, "setup_queue failed"); + return -EINVAL; + } + return 0; +fail_q_alloc: + rte_free(sw_ring); + rte_memzone_free(hdr_mz); + rte_memzone_free(mz); + rte_free(vq); + return ret; +} + +static int32_t zxdh_alloc_queues(struct rte_eth_dev 
*dev, uint16_t nr_vq) +{ + uint16_t lch; + struct zxdh_hw *hw = dev->data->dev_private; + + hw->vqs = rte_zmalloc(NULL, sizeof(struct virtqueue *) * nr_vq, 0); + if (!hw->vqs) { + PMD_INIT_LOG(ERR, "Failed to allocate vqs"); + return -ENOMEM; + } + for (lch = 0; lch < nr_vq; lch++) { + if (zxdh_acquire_channel(dev, lch) < 0) { + PMD_INIT_LOG(ERR, "Failed to acquire the channels"); + zxdh_free_queues(dev); + return -1; + } + if (zxdh_init_queue(dev, lch) < 0) { + PMD_INIT_LOG(ERR, "Failed to alloc virtio queue"); + zxdh_free_queues(dev); + return -1; + } + } + return 0; +} + + +static int32_t zxdh_dev_configure(struct rte_eth_dev *dev) +{ + const struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode; + const struct rte_eth_txmode *txmode = &dev->data->dev_conf.txmode; + struct zxdh_hw *hw = dev->data->dev_private; + uint32_t nr_vq = 0; + int32_t ret = 0; + + if (dev->data->nb_rx_queues != dev->data->nb_tx_queues) { + PMD_INIT_LOG(ERR, "nb_rx_queues=%d and nb_tx_queues=%d not equal!", + dev->data->nb_rx_queues, dev->data->nb_tx_queues); + return -EINVAL; + } + if ((dev->data->nb_rx_queues + dev->data->nb_tx_queues) >= ZXDH_QUEUES_NUM_MAX) { + PMD_INIT_LOG(ERR, "nb_rx_queues=%d + nb_tx_queues=%d must < (%d)!", + dev->data->nb_rx_queues, dev->data->nb_tx_queues, + ZXDH_QUEUES_NUM_MAX); + return -EINVAL; + } + if ((rxmode->mq_mode != RTE_ETH_MQ_RX_RSS) && (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE)) { + PMD_DRV_LOG(ERR, "Unsupported Rx multi queue mode %d", rxmode->mq_mode); + return -EINVAL; + } + + if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) { + PMD_DRV_LOG(ERR, "Unsupported Tx multi queue mode %d", txmode->mq_mode); + return -EINVAL; + } + if ((rxmode->mq_mode != RTE_ETH_MQ_RX_RSS) && (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE)) { + PMD_DRV_LOG(ERR, "Unsupported Rx multi queue mode %d", rxmode->mq_mode); + return -EINVAL; + } + + if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) { + PMD_DRV_LOG(ERR, "Unsupported Tx multi queue mode %d", txmode->mq_mode); + return -EINVAL; + } + + ret = zxdh_features_update(hw, rxmode, txmode); + if (ret < 0) + return ret; + + /* check if lsc interrupt feature is enabled */ + if (dev->data->dev_conf.intr_conf.lsc) { + if (!(dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)) { + PMD_DRV_LOG(ERR, "link status not supported by host"); + return -ENOTSUP; + } + } + + hw->has_tx_offload = tx_offload_enabled(hw); + hw->has_rx_offload = rx_offload_enabled(hw); + + nr_vq = dev->data->nb_rx_queues + dev->data->nb_tx_queues; + if (nr_vq == hw->queue_num) { + /*no que changed */ + return 0; + } + + PMD_DRV_LOG(DEBUG, "que changed need reset "); + /* Reset the device although not necessary at startup */ + zxdh_vtpci_reset(hw); + + /* Tell the host we've noticed this device. */ + zxdh_vtpci_set_status(hw, ZXDH_CONFIG_STATUS_ACK); + + /* Tell the host we've known how to drive the device. 
*/ + zxdh_vtpci_set_status(hw, ZXDH_CONFIG_STATUS_DRIVER); + /* The queue needs to be released when reconfiguring*/ + if (hw->vqs != NULL) { + zxdh_dev_free_mbufs(dev); + zxdh_free_queues(dev); + } + + hw->queue_num = nr_vq; + ret = zxdh_alloc_queues(dev, nr_vq); + if (ret < 0) + return ret; + + zxdh_datach_set(dev); + + if (zxdh_configure_intr(dev) < 0) { + PMD_INIT_LOG(ERR, "Failed to configure interrupt"); + zxdh_free_queues(dev); + return -1; + } + + zxdh_vtpci_reinit_complete(hw); + + return ret; +} + /* dev_ops for zxdh, bare necessities for basic operation */ static const struct eth_dev_ops zxdh_eth_dev_ops = { + .dev_configure = zxdh_dev_configure, .dev_infos_get = zxdh_dev_infos_get, }; diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index 7c5f5940cb..f3fc5b3ce0 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -14,8 +14,6 @@ extern "C" { #include <rte_interrupts.h> #include <eal_interrupts.h> -#include "zxdh_queue.h" - /* ZXDH PCI vendor/device ID. */ #define PCI_VENDOR_ID_ZTE 0x1cf2 @@ -28,11 +26,23 @@ extern "C" { #define ZXDH_MAX_MC_MAC_ADDRS 32 #define ZXDH_MAX_MAC_ADDRS (ZXDH_MAX_UC_MAC_ADDRS + ZXDH_MAX_MC_MAC_ADDRS) +/** + * zxdh has a total of 4096 queues, + * pf/vf devices support up to 256 queues + * (include private queues) + */ #define ZXDH_RX_QUEUES_MAX 128U #define ZXDH_TX_QUEUES_MAX 128U +#define ZXDH_QUEUE_DEPTH 1024 +#define ZXDH_QUEUES_BASE 0 +#define ZXDH_TOTAL_QUEUES_NUM 4096 +#define ZXDH_QUEUES_NUM_MAX 256 +#define ZXDH_QUERES_SHARE_BASE (0x5000) #define ZXDH_NUM_BARS 2 +#define ZXDH_MBUF_BURST_SZ 64 + union VPORT { uint16_t vport; struct { @@ -44,6 +54,11 @@ union VPORT { }; }; +struct chnl_context { + uint16_t valid; + uint16_t ph_chno; +}; + struct zxdh_hw { struct rte_eth_dev *eth_dev; struct zxdh_pci_common_cfg *common_cfg; @@ -51,6 +66,7 @@ struct zxdh_hw { struct rte_intr_handle *risc_intr; struct rte_intr_handle *dtb_intr; struct virtqueue **vqs; + struct chnl_context channel_context[ZXDH_QUEUES_NUM_MAX]; union VPORT vport; uint64_t bar_addr[ZXDH_NUM_BARS]; @@ -64,6 +80,7 @@ struct zxdh_hw { uint16_t device_id; uint16_t port_id; uint16_t vfid; + uint16_t queue_num; uint8_t *isr; uint8_t weak_barriers; @@ -75,6 +92,8 @@ struct zxdh_hw { uint8_t phyport; uint8_t panel_id; uint8_t msg_chan_init; + uint8_t has_tx_offload; + uint8_t has_rx_offload; }; uint16_t vport_to_vfid(union VPORT v); diff --git a/drivers/net/zxdh/zxdh_pci.c b/drivers/net/zxdh/zxdh_pci.c index 1b953c7d0a..a109faa599 100644 --- a/drivers/net/zxdh/zxdh_pci.c +++ b/drivers/net/zxdh/zxdh_pci.c @@ -114,6 +114,86 @@ static uint8_t zxdh_get_isr(struct zxdh_hw *hw) return rte_read8(hw->isr); } +static uint16_t zxdh_get_queue_num(struct zxdh_hw *hw, uint16_t queue_id) +{ + rte_write16(queue_id, &hw->common_cfg->queue_select); + return rte_read16(&hw->common_cfg->queue_size); +} + +static void zxdh_set_queue_num(struct zxdh_hw *hw, uint16_t queue_id, uint16_t vq_size) +{ + rte_write16(queue_id, &hw->common_cfg->queue_select); + rte_write16(vq_size, &hw->common_cfg->queue_size); +} + +static int32_t check_vq_phys_addr_ok(struct virtqueue *vq) +{ + if ((vq->vq_ring_mem + vq->vq_ring_size - 1) >> (ZXDH_PCI_QUEUE_ADDR_SHIFT + 32)) { + PMD_INIT_LOG(ERR, "vring address shouldn't be above 16TB!"); + return 0; + } + return 1; +} + +static inline void io_write64_twopart(uint64_t val, uint32_t *lo, uint32_t *hi) +{ + rte_write32(val & ((1ULL << 32) - 1), lo); + rte_write32(val >> 32, hi); +} + +static int32_t zxdh_setup_queue(struct zxdh_hw 
*hw, struct virtqueue *vq) +{ + uint64_t desc_addr = 0; + uint64_t avail_addr = 0; + uint64_t used_addr = 0; + uint16_t notify_off = 0; + + if (!check_vq_phys_addr_ok(vq)) + return -1; + + desc_addr = vq->vq_ring_mem; + avail_addr = desc_addr + vq->vq_nentries * sizeof(struct vring_desc); + if (vtpci_packed_queue(vq->hw)) { + used_addr = RTE_ALIGN_CEIL((avail_addr + sizeof(struct vring_packed_desc_event)), + ZXDH_PCI_VRING_ALIGN); + } else { + used_addr = RTE_ALIGN_CEIL(avail_addr + offsetof(struct vring_avail, + ring[vq->vq_nentries]), ZXDH_PCI_VRING_ALIGN); + } + + rte_write16(vq->vq_queue_index, &hw->common_cfg->queue_select); + + io_write64_twopart(desc_addr, &hw->common_cfg->queue_desc_lo, + &hw->common_cfg->queue_desc_hi); + io_write64_twopart(avail_addr, &hw->common_cfg->queue_avail_lo, + &hw->common_cfg->queue_avail_hi); + io_write64_twopart(used_addr, &hw->common_cfg->queue_used_lo, + &hw->common_cfg->queue_used_hi); + + notify_off = rte_read16(&hw->common_cfg->queue_notify_off); /* default 0 */ + notify_off = 0; + vq->notify_addr = (void *)((uint8_t *)hw->notify_base + + notify_off * hw->notify_off_multiplier); + + rte_write16(1, &hw->common_cfg->queue_enable); + + return 0; +} + +static void zxdh_del_queue(struct zxdh_hw *hw, struct virtqueue *vq) +{ + rte_write16(vq->vq_queue_index, &hw->common_cfg->queue_select); + + io_write64_twopart(0, &hw->common_cfg->queue_desc_lo, + &hw->common_cfg->queue_desc_hi); + io_write64_twopart(0, &hw->common_cfg->queue_avail_lo, + &hw->common_cfg->queue_avail_hi); + io_write64_twopart(0, &hw->common_cfg->queue_used_lo, + &hw->common_cfg->queue_used_hi); + + rte_write16(0, &hw->common_cfg->queue_enable); +} + const struct zxdh_pci_ops zxdh_dev_pci_ops = { .read_dev_cfg = zxdh_read_dev_config, .write_dev_cfg = zxdh_write_dev_config, @@ -124,6 +204,10 @@ const struct zxdh_pci_ops zxdh_dev_pci_ops = { .set_queue_irq = zxdh_set_queue_irq, .set_config_irq = zxdh_set_config_irq, .get_isr = zxdh_get_isr, + .get_queue_num = zxdh_get_queue_num, + .set_queue_num = zxdh_set_queue_num, + .setup_queue = zxdh_setup_queue, + .del_queue = zxdh_del_queue, }; uint8_t zxdh_vtpci_isr(struct zxdh_hw *hw) @@ -150,6 +234,19 @@ void zxdh_vtpci_reset(struct zxdh_hw *hw) PMD_INIT_LOG(INFO, "port %u device reset %u ms done", hw->port_id, retry); } +void zxdh_vtpci_reinit_complete(struct zxdh_hw *hw) +{ + zxdh_vtpci_set_status(hw, ZXDH_CONFIG_STATUS_DRIVER_OK); +} + +void zxdh_vtpci_set_status(struct zxdh_hw *hw, uint8_t status) +{ + if (status != ZXDH_CONFIG_STATUS_RESET) + status |= VTPCI_OPS(hw)->get_status(hw); + + VTPCI_OPS(hw)->set_status(hw, status); +} + static void *get_cfg_addr(struct rte_pci_device *dev, struct zxdh_pci_cap *cap) { uint8_t bar = cap->bar; diff --git a/drivers/net/zxdh/zxdh_pci.h b/drivers/net/zxdh/zxdh_pci.h index 677dadd5c8..f8949ec041 100644 --- a/drivers/net/zxdh/zxdh_pci.h +++ b/drivers/net/zxdh/zxdh_pci.h @@ -12,7 +12,9 @@ extern "C" { #include <stdint.h> #include <stdbool.h> +#include <rte_pci.h> #include <bus_pci_driver.h> +#include <ethdev_driver.h> #include "zxdh_ethdev.h" @@ -34,8 +36,20 @@ enum zxdh_msix_status { #define PCI_CAP_ID_MSIX 0x11 #define PCI_MSIX_ENABLE 0x8000 +#define ZXDH_PCI_VRING_ALIGN 4096 +#define ZXDH_NET_F_CSUM 0 /* Host handles pkts w/ partial csum */ +#define ZXDH_NET_F_GUEST_CSUM 1 /* Guest handles pkts w/ partial csum */ +#define ZXDH_NET_F_MTU 3 /* Initial MTU advice. */ #define ZXDH_NET_F_MAC 5 /* Host has given MAC address. */ +#define ZXDH_NET_F_GUEST_TSO4 7 /* Guest can handle TSOv4 in. 
*/ +#define ZXDH_NET_F_GUEST_TSO6 8 /* Guest can handle TSOv6 in. */ +#define ZXDH_NET_F_GUEST_ECN 9 /* Guest can handle TSO[6] w/ ECN in. */ +#define ZXDH_NET_F_GUEST_UFO 10 /* Guest can handle UFO in. */ + +#define ZXDH_NET_F_HOST_TSO4 11 /* Host can handle TSOv4 in. */ +#define ZXDH_NET_F_HOST_TSO6 12 /* Host can handle TSOv6 in. */ +#define ZXDH_NET_F_HOST_UFO 14 /* Host can handle UFO in. */ #define ZXDH_NET_F_MRG_RXBUF 15 /* Host can merge receive buffers. */ #define ZXDH_NET_F_STATUS 16 /* zxdh_net_config.status available */ #define ZXDH_NET_F_MQ 22 /* Device supports Receive Flow Steering */ @@ -60,6 +74,8 @@ enum zxdh_msix_status { #define ZXDH_CONFIG_STATUS_DEV_NEED_RESET 0x40 #define ZXDH_CONFIG_STATUS_FAILED 0x80 +#define ZXDH_PCI_QUEUE_ADDR_SHIFT 12 + struct zxdh_net_config { /* The config defining mac address (if ZXDH_NET_F_MAC) */ uint8_t mac[RTE_ETHER_ADDR_LEN]; @@ -122,6 +138,11 @@ static inline int32_t vtpci_with_feature(struct zxdh_hw *hw, uint64_t bit) return (hw->guest_features & (1ULL << bit)) != 0; } +static inline int32_t vtpci_packed_queue(struct zxdh_hw *hw) +{ + return vtpci_with_feature(hw, ZXDH_F_RING_PACKED); +} + struct zxdh_pci_ops { void (*read_dev_cfg)(struct zxdh_hw *hw, size_t offset, void *dst, int32_t len); void (*write_dev_cfg)(struct zxdh_hw *hw, size_t offset, const void *src, int32_t len); @@ -134,6 +155,11 @@ struct zxdh_pci_ops { uint16_t (*set_queue_irq)(struct zxdh_hw *hw, struct virtqueue *vq, uint16_t vec); uint16_t (*set_config_irq)(struct zxdh_hw *hw, uint16_t vec); uint8_t (*get_isr)(struct zxdh_hw *hw); + uint16_t (*get_queue_num)(struct zxdh_hw *hw, uint16_t queue_id); + void (*set_queue_num)(struct zxdh_hw *hw, uint16_t queue_id, uint16_t vq_size); + + int32_t (*setup_queue)(struct zxdh_hw *hw, struct virtqueue *vq); + void (*del_queue)(struct zxdh_hw *hw, struct virtqueue *vq); }; struct zxdh_hw_internal { @@ -155,6 +181,8 @@ int32_t zxdh_get_pci_dev_config(struct zxdh_hw *hw); uint16_t zxdh_vtpci_get_features(struct zxdh_hw *hw); uint8_t zxdh_vtpci_isr(struct zxdh_hw *hw); enum zxdh_msix_status zxdh_vtpci_msix_detect(struct rte_pci_device *dev); +void zxdh_vtpci_reinit_complete(struct zxdh_hw *hw); +void zxdh_vtpci_set_status(struct zxdh_hw *hw, uint8_t status); #ifdef __cplusplus } diff --git a/drivers/net/zxdh/zxdh_queue.c b/drivers/net/zxdh/zxdh_queue.c new file mode 100644 index 0000000000..005b2a5578 --- /dev/null +++ b/drivers/net/zxdh/zxdh_queue.c @@ -0,0 +1,131 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#include <stdint.h> +#include <rte_malloc.h> +#include <rte_mbuf.h> + +#include "zxdh_queue.h" +#include "zxdh_logs.h" +#include "zxdh_pci.h" +#include "zxdh_common.h" +#include "zxdh_msg.h" + +struct rte_mbuf *zxdh_virtqueue_detach_unused(struct virtqueue *vq) +{ + struct rte_mbuf *cookie = NULL; + int32_t idx = 0; + + if (vq == NULL) + return NULL; + + for (idx = 0; idx < vq->vq_nentries; idx++) { + cookie = vq->vq_descx[idx].cookie; + if (cookie != NULL) { + vq->vq_descx[idx].cookie = NULL; + return cookie; + } + } + return NULL; +} + +static int32_t zxdh_release_channel(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint16_t nr_vq = hw->queue_num; + uint32_t var = 0; + uint32_t addr = 0; + uint32_t widx = 0; + uint32_t bidx = 0; + uint16_t pch = 0; + uint16_t lch = 0; + uint16_t timeout = 0; + + for (timeout = 0; timeout < ZXDH_ACQUIRE_CHANNEL_NUM_MAX; timeout++) { + if (zxdh_acquire_lock(hw) == 0) + break; + PMD_INIT_LOG(DEBUG, + "Could not acquire lock to release channel, 
timeout %d", timeout); + } + + if (timeout >= ZXDH_ACQUIRE_CHANNEL_NUM_MAX) { + PMD_INIT_LOG(ERR, "Acquire lock timeout"); + return -1; + } + + for (lch = 0; lch < nr_vq; lch++) { + if (hw->channel_context[lch].valid == 0) { + PMD_INIT_LOG(DEBUG, "Logic channel %d does not need to be released", lch); + continue; + } + + pch = hw->channel_context[lch].ph_chno; + widx = pch / 32; + bidx = pch % 32; + + addr = ZXDH_QUERES_SHARE_BASE + (widx * sizeof(uint32_t)); + var = zxdh_read_bar_reg(dev, ZXDH_BAR0_INDEX, addr); + var &= ~(1 << bidx); + zxdh_write_bar_reg(dev, ZXDH_BAR0_INDEX, addr, var); + + hw->channel_context[lch].valid = 0; + hw->channel_context[lch].ph_chno = 0; + } + + zxdh_release_lock(hw); + + return 0; +} + +int32_t zxdh_get_queue_type(uint16_t vtpci_queue_idx) +{ + if (vtpci_queue_idx % 2 == 0) + return VTNET_RQ; + else + return VTNET_TQ; +} + +int32_t zxdh_free_queues(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint16_t nr_vq = hw->queue_num; + struct virtqueue *vq = NULL; + int32_t queue_type = 0; + uint16_t i = 0; + + if (hw->vqs == NULL) + return 0; + + if (zxdh_release_channel(dev) < 0) { + PMD_INIT_LOG(ERR, "Failed to clear coi table"); + return -1; + } + + for (i = 0; i < nr_vq; i++) { + vq = hw->vqs[i]; + if (vq == NULL) + continue; + + VTPCI_OPS(hw)->del_queue(hw, vq); + queue_type = zxdh_get_queue_type(i); + if (queue_type == VTNET_RQ) { + rte_free(vq->sw_ring); + rte_memzone_free(vq->rxq.mz); + } else if (queue_type == VTNET_TQ) { + rte_memzone_free(vq->txq.mz); + rte_memzone_free(vq->txq.virtio_net_hdr_mz); + } + + rte_free(vq); + hw->vqs[i] = NULL; + PMD_INIT_LOG(DEBUG, "Released queue %d successfully", i); + } + + rte_free(hw->vqs); + hw->vqs = NULL; + + return 0; +} diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h index e511843205..8bf2993a48 100644 --- a/drivers/net/zxdh/zxdh_queue.h +++ b/drivers/net/zxdh/zxdh_queue.h @@ -15,7 +15,25 @@ extern "C" { #include "zxdh_ethdev.h" #include "zxdh_rxtx.h" +#include "zxdh_pci.h" +enum { VTNET_RQ = 0, VTNET_TQ = 1 }; + +#define VIRTQUEUE_MAX_NAME_SZ 32 +#define RQ_QUEUE_IDX 0 +#define TQ_QUEUE_IDX 1 +#define ZXDH_MAX_TX_INDIRECT 8 + +/* This marks a buffer as write-only (otherwise read-only). */ +#define VRING_DESC_F_WRITE 2 +/* This flag means the descriptor was made available by the driver */ +#define VRING_PACKED_DESC_F_AVAIL (1 << (7)) + +#define RING_EVENT_FLAGS_ENABLE 0x0 +#define RING_EVENT_FLAGS_DISABLE 0x1 +#define RING_EVENT_FLAGS_DESC 0x2 + +#define VQ_RING_DESC_CHAIN_END 32768 /** ring descriptors: 16 bytes. * These can chain together via "next". **/ @@ -26,6 +44,19 @@ struct vring_desc { uint16_t next; /* We chain unused descriptors via this. */ }; +struct vring_used_elem { + /* Index of start of used descriptor chain. */ + uint32_t id; + /* Total length of the descriptor chain which was written to. 
*/ + uint32_t len; +}; + +struct vring_used { + uint16_t flags; + uint16_t idx; + struct vring_used_elem ring[0]; +}; + struct vring_avail { uint16_t flags; uint16_t idx; @@ -102,6 +133,146 @@ struct virtqueue { struct vq_desc_extra vq_descx[0]; }; +struct zxdh_type_hdr { + uint8_t port; /* bit[0:1] 00-np 01-DRS 10-DTP */ + uint8_t pd_len; + uint8_t num_buffers; + uint8_t reserved; +} __rte_packed; /* 4B */ + +struct zxdh_pi_hdr { + uint8_t pi_len; + uint8_t pkt_type; + uint16_t vlan_id; + uint32_t ipv6_extend; + uint16_t l3_offset; + uint16_t l4_offset; + uint8_t phy_port; + uint8_t pkt_flag_hi8; + uint16_t pkt_flag_lw16; + union { + struct { + uint64_t sa_idx; + uint8_t reserved_8[8]; + } dl; + struct { + uint32_t lro_flag; + uint32_t lro_mss; + uint16_t err_code; + uint16_t pm_id; + uint16_t pkt_len; + uint8_t reserved[2]; + } ul; + }; +} __rte_packed; /* 32B */ + +struct zxdh_pd_hdr_dl { + uint32_t ol_flag; + uint8_t tag_idx; + uint8_t tag_data; + uint16_t dst_vfid; + uint32_t svlan_insert; + uint32_t cvlan_insert; +} __rte_packed; /* 16B */ + +struct zxdh_net_hdr_dl { + struct zxdh_type_hdr type_hdr; /* 4B */ + struct zxdh_pi_hdr pi_hdr; /* 32B */ + struct zxdh_pd_hdr_dl pd_hdr; /* 16B */ +} __rte_packed; + +struct zxdh_pd_hdr_ul { + uint32_t pkt_flag; + uint32_t rss_hash; + uint32_t fd; + uint32_t striped_vlan_tci; + /* ovs */ + uint8_t tag_idx; + uint8_t tag_data; + uint16_t src_vfid; + /* */ + uint16_t pkt_type_out; + uint16_t pkt_type_in; +} __rte_packed; /* 24B */ + +struct zxdh_net_hdr_ul { + struct zxdh_type_hdr type_hdr; /* 4B */ + struct zxdh_pi_hdr pi_hdr; /* 32B */ + struct zxdh_pd_hdr_ul pd_hdr; /* 24B */ +} __rte_packed; /* 60B */ + +struct zxdh_tx_region { + struct zxdh_net_hdr_dl tx_hdr; + union { + struct vring_desc tx_indir[ZXDH_MAX_TX_INDIRECT]; + struct vring_packed_desc tx_packed_indir[ZXDH_MAX_TX_INDIRECT]; + } __rte_aligned(16); +}; + +static inline size_t vring_size(struct zxdh_hw *hw, uint32_t num, unsigned long align) +{ + size_t size; + + if (vtpci_packed_queue(hw)) { + size = num * sizeof(struct vring_packed_desc); + size += sizeof(struct vring_packed_desc_event); + size = RTE_ALIGN_CEIL(size, align); + size += sizeof(struct vring_packed_desc_event); + return size; + } + + size = num * sizeof(struct vring_desc); + size += sizeof(struct vring_avail) + (num * sizeof(uint16_t)); + size = RTE_ALIGN_CEIL(size, align); + size += sizeof(struct vring_used) + (num * sizeof(struct vring_used_elem)); + return size; +} + +static inline void vring_init_packed(struct vring_packed *vr, uint8_t *p, + unsigned long align, uint32_t num) +{ + vr->num = num; + vr->desc = (struct vring_packed_desc *)p; + vr->driver = (struct vring_packed_desc_event *)(p + + vr->num * sizeof(struct vring_packed_desc)); + vr->device = (struct vring_packed_desc_event *)RTE_ALIGN_CEIL(((uintptr_t)vr->driver + + sizeof(struct vring_packed_desc_event)), align); +} + +static inline void vring_desc_init_packed(struct virtqueue *vq, int32_t n) +{ + int32_t i = 0; + + for (i = 0; i < n - 1; i++) { + vq->vq_packed.ring.desc[i].id = i; + vq->vq_descx[i].next = i + 1; + } + vq->vq_packed.ring.desc[i].id = i; + vq->vq_descx[i].next = VQ_RING_DESC_CHAIN_END; +} + +static inline void vring_desc_init_indirect_packed(struct vring_packed_desc *dp, int32_t n) +{ + int32_t i = 0; + + for (i = 0; i < n; i++) { + dp[i].id = (uint16_t)i; + dp[i].flags = VRING_DESC_F_WRITE; + } +} + +static inline void virtqueue_disable_intr(struct virtqueue *vq) +{ + if (vq->vq_packed.event_flags_shadow != RING_EVENT_FLAGS_DISABLE) { + 
vq->vq_packed.event_flags_shadow = RING_EVENT_FLAGS_DISABLE; + vq->vq_packed.ring.driver->desc_event_flags = vq->vq_packed.event_flags_shadow; + } +} + +struct rte_mbuf *zxdh_virtqueue_detach_unused(struct virtqueue *vq); +int32_t zxdh_free_queues(struct rte_eth_dev *dev); +int32_t zxdh_get_queue_type(uint16_t vtpci_queue_idx); + #ifdef __cplusplus } #endif diff --git a/drivers/net/zxdh/zxdh_rxtx.h b/drivers/net/zxdh/zxdh_rxtx.h index 6476bc15e2..7e9f3a3ae6 100644 --- a/drivers/net/zxdh/zxdh_rxtx.h +++ b/drivers/net/zxdh/zxdh_rxtx.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2023 ZTE Corporation + * Copyright(c) 2024 ZTE Corporation */ #ifndef _ZXDH_RXTX_H_ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 93265 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
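A note on the queue address programming in zxdh_setup_queue() above: the device exposes the 64-bit descriptor/avail/used ring addresses only as pairs of 32-bit registers, so io_write64_twopart() splits each IOVA into a low and a high half, and check_vq_phys_addr_ok() first rejects any ring whose last byte needs more than ZXDH_PCI_QUEUE_ADDR_SHIFT + 32 = 44 address bits (16 TB). A minimal standalone sketch of the same split and bounds check, assuming nothing beyond standard C (the plain variables standing in for the device's 32-bit MMIO registers are illustrative only; the driver uses rte_write32() into struct zxdh_pci_common_cfg instead):

#include <stdint.h>
#include <stdio.h>

/* Illustrative stand-ins for the two 32-bit MMIO registers. */
static uint32_t queue_desc_lo, queue_desc_hi;

static void write64_twopart(uint64_t val, uint32_t *lo, uint32_t *hi)
{
	*lo = (uint32_t)(val & 0xffffffffu); /* low 32 bits first */
	*hi = (uint32_t)(val >> 32);         /* then the high 32 bits */
}

int main(void)
{
	uint64_t ring_end = 0x123456789abcULL; /* example IOVA of the ring's last byte */

	/* Mirrors check_vq_phys_addr_ok(): anything needing more than
	 * 12 + 32 = 44 bits (16 TB) cannot be programmed. */
	if (ring_end >> (12 + 32)) {
		fprintf(stderr, "vring above 16TB\n");
		return 1;
	}
	write64_twopart(ring_end, &queue_desc_lo, &queue_desc_hi);
	printf("lo=0x%08x hi=0x%08x\n", queue_desc_lo, queue_desc_hi);
	return 0;
}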
* Re: [PATCH v5 9/9] net/zxdh: add zxdh dev configure ops 2024-10-15 5:44 ` [PATCH v5 9/9] net/zxdh: add zxdh dev configure ops Junlong Wang @ 2024-10-15 15:37 ` Stephen Hemminger 2024-10-15 15:57 ` Stephen Hemminger 1 sibling, 0 replies; 225+ messages in thread From: Stephen Hemminger @ 2024-10-15 15:37 UTC (permalink / raw) To: Junlong Wang; +Cc: dev, ferruh.yigit, wang.yong19 On Tue, 15 Oct 2024 13:44:35 +0800 Junlong Wang <wang.junlong1@zte.com.cn> wrote: > diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build > index 9aec47e68f..cde96d8111 100644 > --- a/drivers/net/zxdh/meson.build > +++ b/drivers/net/zxdh/meson.build > @@ -14,8 +14,9 @@ if not dpdk_conf.has('RTE_ARCH_X86_64') or not dpdk_conf.get('RTE_ARCH_64') > endif > > sources = files( > - 'zxdh_ethdev.c', > - 'zxdh_pci.c', > - 'zxdh_msg.c', > - 'zxdh_common.c', > - ) > + 'zxdh_ethdev.c', > + 'zxdh_pci.c', > + 'zxdh_msg.c', > + 'zxdh_common.c', > + 'zxdh_queue.c', > + ) If you make a new version, then fix indentation in earlier patch to avoid having to fix it in last patch. ^ permalink raw reply [flat|nested] 225+ messages in thread
* Re: [PATCH v5 9/9] net/zxdh: add zxdh dev configure ops 2024-10-15 5:44 ` [PATCH v5 9/9] net/zxdh: add zxdh dev configure ops Junlong Wang 2024-10-15 15:37 ` Stephen Hemminger @ 2024-10-15 15:57 ` Stephen Hemminger 1 sibling, 0 replies; 225+ messages in thread From: Stephen Hemminger @ 2024-10-15 15:57 UTC (permalink / raw) To: Junlong Wang; +Cc: dev, ferruh.yigit, wang.yong19 On Tue, 15 Oct 2024 13:44:35 +0800 Junlong Wang <wang.junlong1@zte.com.cn> wrote: > provided zxdh dev configure ops for queue > check,reset,alloc resources,etc. > > Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> These build failures need to be addressed. Probably as simple as adding struct bar_msg_header msg_header = { 0 }; #################################################################################### #### [Begin job log] "ubuntu-22.04-gcc-stdatomic" at step Build and test #################################################################################### 951 | struct bar_msg_header msg_header; | ^~~~~~~~~~ ../drivers/net/zxdh/zxdh_msg.c:976:60: error: ‘msg_header.sync’ may be used uninitialized [-Werror=maybe-uninitialized] 976 | uint64_t reps_addr = zxdh_reply_addr_get(msg_header.sync, src, dst, virt_addr); | ~~~~~~~~~~^~~~~ ../drivers/net/zxdh/zxdh_msg.c:951:31: note: ‘msg_header’ declared here 951 | struct bar_msg_header msg_header; | ^~~~~~~~~~ In function ‘zxdh_bar_msg_ack_async_msg_proc’, inlined from ‘zxdh_bar_irq_recv’ at ../drivers/net/zxdh/zxdh_msg.c:984:3: ../drivers/net/zxdh/zxdh_msg.c:860:78: error: ‘msg_header.msg_id’ may be used uninitialized [-Werror=maybe-uninitialized] 860 | struct seqid_item *reps_info = &g_seqid_ring.reps_info_tbl[msg_header->msg_id]; | ~~~~~~~~~~^~~~~~~~ ../drivers/net/zxdh/zxdh_msg.c: In function ‘zxdh_bar_irq_recv’: ../drivers/net/zxdh/zxdh_msg.c:951:31: note: ‘msg_header’ declared here 951 | struct bar_msg_header msg_header; | ^~~~~~~~~~ cc1: all warnings being treated as errors [2198/3046] Compiling C object 'drivers/a715181@@tmp_rte_net_zxdh@sta/net_zxdh_zxdh_queue.c.o'. [2199/3046] Compiling C object 'drivers/a715181@@tmp_rte_raw_cnxk_bphy@sta/raw_cnxk_bphy_cnxk_bphy_cgx.c.o'. [2200/3046] Generating rte_net_vmxnet3.pmd.c with a custom command. [2201/3046] Compiling C object 'drivers/a715181@@tmp_rte_raw_cnxk_bphy@sta/raw_cnxk_bphy_cnxk_bphy.c.o'. [2202/3046] Compiling C object 'drivers/a715181@@tmp_rte_net_zxdh@sta/net_zxdh_zxdh_common.c.o'. ninja: build stopped: subcommand failed. ##[error]Process completed with exit code 1. #################################################################################### #### [End job log] "ubuntu-22.04-gcc-stdatomic" at step Build and test #################################################################################### ^ permalink raw reply [flat|nested] 225+ messages in thread
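For reference, the fix Stephen suggests is the usual cure for this class of -Wmaybe-uninitialized warning: zero-initialize the struct at declaration so every field has a defined value even on the early-return paths that skip filling it in. A minimal sketch of the pattern, using a deliberately abbreviated, hypothetical stand-in for the driver's struct bar_msg_header (only the two fields the warning names):

#include <stdint.h>
#include <stdio.h>

/* Hypothetical, abbreviated stand-in for struct bar_msg_header. */
struct bar_msg_header {
	uint8_t sync;
	uint16_t msg_id;
};

/* Stub receive path: may return before touching *hdr, which is
 * exactly the case the compiler flags. */
static int recv_msg_header(struct bar_msg_header *hdr, int ok)
{
	if (!ok)
		return -1;
	hdr->sync = 1;
	hdr->msg_id = 42;
	return 0;
}

int main(void)
{
	struct bar_msg_header msg_header = {0}; /* defined on every path */

	(void)recv_msg_header(&msg_header, 0);  /* failure path taken */
	printf("sync=%u msg_id=%u\n", msg_header.sync, msg_header.msg_id);
	return 0;
}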
* [PATCH v6 0/9] net/zxdh: introduce net zxdh driver 2024-10-15 5:43 ` [PATCH v5 1/9] net/zxdh: add zxdh ethdev pmd driver Junlong Wang 2024-10-15 5:44 ` [PATCH v5 2/9] net/zxdh: add logging implementation Junlong Wang @ 2024-10-16 8:16 ` Junlong Wang 2024-10-16 8:16 ` [PATCH v6 1/9] net/zxdh: add zxdh ethdev pmd driver Junlong Wang 1 sibling, 1 reply; 225+ messages in thread From: Junlong Wang @ 2024-10-16 8:16 UTC (permalink / raw) To: dev; +Cc: stephen, ferruh.yigit, wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 2584 bytes --] V6: - Resolve CI/Intel compilation issues. - Fix meson.build indentation in earlier patch. V5: - Split the driver into multiple patches; this series provides part of the zxdh driver, and dev start/stop, queue_setup, npsdk_init, mac, vlan, rss, etc. will be provided later. - Fix errors reported by scripts. - Move the product link in zxdh.rst. - Fix meson check to use RTE_ARCH_X86_64/RTE_ARCH_ARM64. - Modify other comments according to Ferruh's comments. V4: - Resolve compilation issues Junlong Wang (9): net/zxdh: add zxdh ethdev pmd driver net/zxdh: add logging implementation net/zxdh: add zxdh device pci init implementation net/zxdh: add msg chan and msg hwlock init net/zxdh: add msg chan enable implementation net/zxdh: add zxdh get device backend infos net/zxdh: add configure zxdh intr implementation net/zxdh: add zxdh dev infos get ops net/zxdh: add zxdh dev configure ops doc/guides/nics/features/zxdh.ini | 9 + doc/guides/nics/index.rst | 1 + doc/guides/nics/zxdh.rst | 30 + drivers/net/meson.build | 1 + drivers/net/zxdh/meson.build | 22 + drivers/net/zxdh/zxdh_common.c | 368 +++++++++++ drivers/net/zxdh/zxdh_common.h | 42 ++ drivers/net/zxdh/zxdh_ethdev.c | 1020 +++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev.h | 100 +++ drivers/net/zxdh/zxdh_logs.h | 40 ++ drivers/net/zxdh/zxdh_msg.c | 986 ++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_msg.h | 228 +++++++ drivers/net/zxdh/zxdh_pci.c | 449 +++++++++++++ drivers/net/zxdh/zxdh_pci.h | 194 ++++++ drivers/net/zxdh/zxdh_queue.c | 131 ++++ drivers/net/zxdh/zxdh_queue.h | 281 ++++++++ drivers/net/zxdh/zxdh_rxtx.h | 55 ++ 17 files changed, 3957 insertions(+) create mode 100644 doc/guides/nics/features/zxdh.ini create mode 100644 doc/guides/nics/zxdh.rst create mode 100644 drivers/net/zxdh/meson.build create mode 100644 drivers/net/zxdh/zxdh_common.c create mode 100644 drivers/net/zxdh/zxdh_common.h create mode 100644 drivers/net/zxdh/zxdh_ethdev.c create mode 100644 drivers/net/zxdh/zxdh_ethdev.h create mode 100644 drivers/net/zxdh/zxdh_logs.h create mode 100644 drivers/net/zxdh/zxdh_msg.c create mode 100644 drivers/net/zxdh/zxdh_msg.h create mode 100644 drivers/net/zxdh/zxdh_pci.c create mode 100644 drivers/net/zxdh/zxdh_pci.h create mode 100644 drivers/net/zxdh/zxdh_queue.c create mode 100644 drivers/net/zxdh/zxdh_queue.h create mode 100644 drivers/net/zxdh/zxdh_rxtx.h -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 4765 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
* [PATCH v6 1/9] net/zxdh: add zxdh ethdev pmd driver 2024-10-16 8:16 ` [PATCH v6 0/9] net/zxdh: introduce net zxdh driver Junlong Wang @ 2024-10-16 8:16 ` Junlong Wang 2024-10-16 8:18 ` [PATCH v6 2/9] net/zxdh: add logging implementation Junlong Wang ` (2 more replies) 0 siblings, 3 replies; 225+ messages in thread From: Junlong Wang @ 2024-10-16 8:16 UTC (permalink / raw) To: dev; +Cc: stephen, ferruh.yigit, wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 7515 bytes --] Add basic zxdh ethdev init and register PCI probe functions Update doc files Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- doc/guides/nics/features/zxdh.ini | 9 +++ doc/guides/nics/index.rst | 1 + doc/guides/nics/zxdh.rst | 30 ++++++++++ drivers/net/meson.build | 1 + drivers/net/zxdh/meson.build | 18 ++++++ drivers/net/zxdh/zxdh_ethdev.c | 92 +++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev.h | 44 +++++++++++++++ 7 files changed, 195 insertions(+) create mode 100644 doc/guides/nics/features/zxdh.ini create mode 100644 doc/guides/nics/zxdh.rst create mode 100644 drivers/net/zxdh/meson.build create mode 100644 drivers/net/zxdh/zxdh_ethdev.c create mode 100644 drivers/net/zxdh/zxdh_ethdev.h diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini new file mode 100644 index 0000000000..05c8091ed7 --- /dev/null +++ b/doc/guides/nics/features/zxdh.ini @@ -0,0 +1,9 @@ +; +; Supported features of the 'zxdh' network poll mode driver. +; +; Refer to default.ini for the full list of available PMD features. +; +[Features] +Linux = Y +x86-64 = Y +ARMv8 = Y diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst index c14bc7988a..8e371ac4a5 100644 --- a/doc/guides/nics/index.rst +++ b/doc/guides/nics/index.rst @@ -69,3 +69,4 @@ Network Interface Controller Drivers vhost virtio vmxnet3 + zxdh diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst new file mode 100644 index 0000000000..9832115e11 --- /dev/null +++ b/doc/guides/nics/zxdh.rst @@ -0,0 +1,30 @@ +.. SPDX-License-Identifier: BSD-3-Clause + Copyright(c) 2024 ZTE Corporation. + +ZXDH Poll Mode Driver +====================== + +The ZXDH PMD (**librte_net_zxdh**) provides poll mode driver support +for 25/100 Gbps ZXDH NX Series Ethernet Controller based on +the ZTE Ethernet Controller E310/E312. + +- Learning about ZXDH NX Series Ethernet Controller NICs using + `<https://enterprise.zte.com.cn/sup-detail.html?id=271&suptype=1>`_. + +Features +-------- + +Features of the zxdh PMD are: + +- Multi arch support: x86_64, ARMv8. + + +Driver compilation and testing +------------------------------ + +Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>` +for details. + +Limitations or Known issues +--------------------------- +X86-32, Power8, ARMv7 and BSD are not supported yet. 
diff --git a/drivers/net/meson.build b/drivers/net/meson.build index fb6d34b782..0a12914534 100644 --- a/drivers/net/meson.build +++ b/drivers/net/meson.build @@ -62,6 +62,7 @@ drivers = [ 'vhost', 'virtio', 'vmxnet3', + 'zxdh', ] std_deps = ['ethdev', 'kvargs'] # 'ethdev' also pulls in mbuf, net, eal etc std_deps += ['bus_pci'] # very many PMDs depend on PCI, so make std diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build new file mode 100644 index 0000000000..932fb1c835 --- /dev/null +++ b/drivers/net/zxdh/meson.build @@ -0,0 +1,18 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2024 ZTE Corporation + +if not is_linux + build = false + reason = 'only supported on Linux' + subdir_done() +endif + +if not dpdk_conf.has('RTE_ARCH_X86_64') and not dpdk_conf.has('RTE_ARCH_ARM64') + build = false + reason = 'only supported on x86_64 and aarch64' + subdir_done() +endif + +sources = files( + 'zxdh_ethdev.c', +) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c new file mode 100644 index 0000000000..75d8b28cc3 --- /dev/null +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -0,0 +1,92 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#include <ethdev_pci.h> +#include <bus_pci_driver.h> +#include <rte_ethdev.h> + +#include "zxdh_ethdev.h" + +static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) +{ + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); + struct zxdh_hw *hw = eth_dev->data->dev_private; + int ret = 0; + + eth_dev->dev_ops = NULL; + + /* Allocate memory for storing MAC addresses */ + eth_dev->data->mac_addrs = rte_zmalloc("zxdh_mac", + ZXDH_MAX_MAC_ADDRS * RTE_ETHER_ADDR_LEN, 0); + if (eth_dev->data->mac_addrs == NULL) + return -ENOMEM; + + memset(hw, 0, sizeof(*hw)); + hw->bar_addr[0] = (uint64_t)pci_dev->mem_resource[0].addr; + if (hw->bar_addr[0] == 0) + return -EIO; + + hw->device_id = pci_dev->id.device_id; + hw->port_id = eth_dev->data->port_id; + hw->eth_dev = eth_dev; + hw->speed = RTE_ETH_SPEED_NUM_UNKNOWN; + hw->duplex = RTE_ETH_LINK_FULL_DUPLEX; + hw->is_pf = 0; + + if (pci_dev->id.device_id == ZXDH_E310_PF_DEVICEID || + pci_dev->id.device_id == ZXDH_E312_PF_DEVICEID) { + hw->is_pf = 1; + } + + return ret; +} + +static int zxdh_eth_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, + struct rte_pci_device *pci_dev) +{ + return rte_eth_dev_pci_generic_probe(pci_dev, + sizeof(struct zxdh_hw), + zxdh_eth_dev_init); +} + +static int zxdh_dev_close(struct rte_eth_dev *dev __rte_unused) +{ + int ret = 0; + + return ret; +} + +static int zxdh_eth_dev_uninit(struct rte_eth_dev *eth_dev) +{ + int ret = 0; + + ret = zxdh_dev_close(eth_dev); + + return ret; +} + +static int zxdh_eth_pci_remove(struct rte_pci_device *pci_dev) +{ + int ret = rte_eth_dev_pci_generic_remove(pci_dev, zxdh_eth_dev_uninit); + + return ret; +} + +static const struct rte_pci_id pci_id_zxdh_map[] = { + {RTE_PCI_DEVICE(PCI_VENDOR_ID_ZTE, ZXDH_E310_PF_DEVICEID)}, + {RTE_PCI_DEVICE(PCI_VENDOR_ID_ZTE, ZXDH_E310_VF_DEVICEID)}, + {RTE_PCI_DEVICE(PCI_VENDOR_ID_ZTE, ZXDH_E312_PF_DEVICEID)}, + {RTE_PCI_DEVICE(PCI_VENDOR_ID_ZTE, ZXDH_E312_VF_DEVICEID)}, + {.vendor_id = 0, /* sentinel */ }, +}; +static struct rte_pci_driver zxdh_pmd = { + .id_table = pci_id_zxdh_map, + .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC, + .probe = zxdh_eth_pci_probe, + .remove = zxdh_eth_pci_remove, +}; + +RTE_PMD_REGISTER_PCI(net_zxdh, zxdh_pmd); +RTE_PMD_REGISTER_PCI_TABLE(net_zxdh, pci_id_zxdh_map); 
+RTE_PMD_REGISTER_KMOD_DEP(net_zxdh, "* vfio-pci"); diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h new file mode 100644 index 0000000000..04023bfe84 --- /dev/null +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -0,0 +1,44 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#ifndef _ZXDH_ETHDEV_H_ +#define _ZXDH_ETHDEV_H_ + +#ifdef __cplusplus +extern "C" { +#endif + +#include "ethdev_driver.h" + +/* ZXDH PCI vendor/device ID. */ +#define PCI_VENDOR_ID_ZTE 0x1cf2 + +#define ZXDH_E310_PF_DEVICEID 0x8061 +#define ZXDH_E310_VF_DEVICEID 0x8062 +#define ZXDH_E312_PF_DEVICEID 0x8049 +#define ZXDH_E312_VF_DEVICEID 0x8060 + +#define ZXDH_MAX_UC_MAC_ADDRS 32 +#define ZXDH_MAX_MC_MAC_ADDRS 32 +#define ZXDH_MAX_MAC_ADDRS (ZXDH_MAX_UC_MAC_ADDRS + ZXDH_MAX_MC_MAC_ADDRS) + +#define ZXDH_NUM_BARS 2 + +struct zxdh_hw { + struct rte_eth_dev *eth_dev; + uint64_t bar_addr[ZXDH_NUM_BARS]; + + uint32_t speed; + uint16_t device_id; + uint16_t port_id; + + uint8_t duplex; + uint8_t is_pf; +}; + +#ifdef __cplusplus +} +#endif + +#endif /* _ZXDH_ETHDEV_H_ */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 14136 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
* [PATCH v6 2/9] net/zxdh: add logging implementation 2024-10-16 8:16 ` [PATCH v6 1/9] net/zxdh: add zxdh ethdev pmd driver Junlong Wang @ 2024-10-16 8:18 ` Junlong Wang 2024-10-16 8:18 ` [PATCH v6 3/9] net/zxdh: add zxdh device pci init implementation Junlong Wang ` (6 more replies) 2024-10-21 9:03 ` [PATCH v6 1/9] net/zxdh: add zxdh ethdev pmd driver Thomas Monjalon 2024-10-22 12:20 ` [PATCH v7 0/9] net/zxdh: introduce net zxdh driver Junlong Wang 2 siblings, 7 replies; 225+ messages in thread From: Junlong Wang @ 2024-10-16 8:18 UTC (permalink / raw) To: dev; +Cc: stephen, ferruh.yigit, wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 3423 bytes --] Add zxdh logging implementation. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/zxdh_ethdev.c | 15 +++++++++++-- drivers/net/zxdh/zxdh_logs.h | 40 ++++++++++++++++++++++++++++++++++ 2 files changed, 53 insertions(+), 2 deletions(-) create mode 100644 drivers/net/zxdh/zxdh_logs.h diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 75d8b28cc3..7220770c01 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -7,6 +7,7 @@ #include <rte_ethdev.h> #include "zxdh_ethdev.h" +#include "zxdh_logs.h" static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) { @@ -19,13 +20,18 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) /* Allocate memory for storing MAC addresses */ eth_dev->data->mac_addrs = rte_zmalloc("zxdh_mac", ZXDH_MAX_MAC_ADDRS * RTE_ETHER_ADDR_LEN, 0); - if (eth_dev->data->mac_addrs == NULL) + if (eth_dev->data->mac_addrs == NULL) { + PMD_INIT_LOG(ERR, "Failed to allocate %d bytes store MAC addresses", + ZXDH_MAX_MAC_ADDRS * RTE_ETHER_ADDR_LEN); return -ENOMEM; + } memset(hw, 0, sizeof(*hw)); hw->bar_addr[0] = (uint64_t)pci_dev->mem_resource[0].addr; - if (hw->bar_addr[0] == 0) + if (hw->bar_addr[0] == 0) { + PMD_INIT_LOG(ERR, "Bad mem resource."); return -EIO; + } hw->device_id = pci_dev->id.device_id; hw->port_id = eth_dev->data->port_id; @@ -90,3 +96,8 @@ static struct rte_pci_driver zxdh_pmd = { RTE_PMD_REGISTER_PCI(net_zxdh, zxdh_pmd); RTE_PMD_REGISTER_PCI_TABLE(net_zxdh, pci_id_zxdh_map); RTE_PMD_REGISTER_KMOD_DEP(net_zxdh, "* vfio-pci"); +RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_init, init, NOTICE); +RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_driver, driver, NOTICE); +RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_rx, rx, NOTICE); +RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_tx, tx, NOTICE); +RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_msg, msg, NOTICE); diff --git a/drivers/net/zxdh/zxdh_logs.h b/drivers/net/zxdh/zxdh_logs.h new file mode 100644 index 0000000000..52dbd8228b --- /dev/null +++ b/drivers/net/zxdh/zxdh_logs.h @@ -0,0 +1,40 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#ifndef _ZXDH_LOGS_H_ +#define _ZXDH_LOGS_H_ + +#include <rte_log.h> + +extern int zxdh_logtype_init; +#define RTE_LOGTYPE_ZXDH_INIT zxdh_logtype_init +#define PMD_INIT_LOG(level, ...) \ + RTE_LOG_LINE_PREFIX(level, ZXDH_INIT, "offload_zxdh %s(): ", \ + __func__, __VA_ARGS__) + +extern int zxdh_logtype_driver; +#define RTE_LOGTYPE_ZXDH_DRIVER zxdh_logtype_driver +#define PMD_DRV_LOG(level, ...) \ + RTE_LOG_LINE_PREFIX(level, ZXDH_DRIVER, "offload_zxdh %s(): ", \ + __func__, __VA_ARGS__) + +extern int zxdh_logtype_rx; +#define RTE_LOGTYPE_ZXDH_RX zxdh_logtype_rx +#define PMD_RX_LOG(level, ...) 
\ + RTE_LOG_LINE_PREFIX(level, ZXDH_RX, "offload_zxdh %s(): ", \ + __func__, __VA_ARGS__) + +extern int zxdh_logtype_tx; +#define RTE_LOGTYPE_ZXDH_TX zxdh_logtype_tx +#define PMD_TX_LOG(level, ...) \ + RTE_LOG_LINE_PREFIX(level, ZXDH_TX, "offload_zxdh %s(): ", \ + __func__, __VA_ARGS__) + +extern int zxdh_logtype_msg; +#define RTE_LOGTYPE_ZXDH_MSG zxdh_logtype_msg +#define PMD_MSG_LOG(level, ...) \ + RTE_LOG_LINE_PREFIX(level, ZXDH_MSG, "offload_zxdh %s(): ", \ + __func__, __VA_ARGS__) + +#endif /* _ZXDH_LOGS_H_ */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 6152 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
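A brief usage sketch for the macros above (illustrative only: it compiles only inside the driver, and the logtype name passed to --log-level assumes the pmd.net.zxdh prefix the build system generates for this driver):

#include <stdint.h>
#include "zxdh_logs.h"

/* Each macro logs under its own dynamic logtype, so one component's
 * verbosity can be raised in isolation at EAL startup, e.g.
 *   --log-level=pmd.net.zxdh.msg:debug */
static void zxdh_log_usage_example(uint16_t port_id)
{
	PMD_INIT_LOG(DEBUG, "port %u init begin", port_id);
	PMD_DRV_LOG(INFO, "port %u configured", port_id);
	PMD_MSG_LOG(ERR, "port %u bar msg timeout", port_id);
}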
* [PATCH v6 3/9] net/zxdh: add zxdh device pci init implementation 2024-10-16 8:18 ` [PATCH v6 2/9] net/zxdh: add logging implementation Junlong Wang @ 2024-10-16 8:18 ` Junlong Wang 2024-10-16 8:18 ` [PATCH v6 4/9] net/zxdh: add msg chan and msg hwlock init Junlong Wang ` (5 subsequent siblings) 6 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-10-16 8:18 UTC (permalink / raw) To: dev; +Cc: stephen, ferruh.yigit, wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 22835 bytes --] Add device PCI init implementation, to obtain PCI capabilities and read the device configuration, etc. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/meson.build | 1 + drivers/net/zxdh/zxdh_ethdev.c | 43 +++++ drivers/net/zxdh/zxdh_ethdev.h | 20 ++- drivers/net/zxdh/zxdh_pci.c | 290 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_pci.h | 151 +++++++++++++++++ drivers/net/zxdh/zxdh_queue.h | 105 ++++++++++++ drivers/net/zxdh/zxdh_rxtx.h | 51 ++++++ 7 files changed, 659 insertions(+), 2 deletions(-) create mode 100644 drivers/net/zxdh/zxdh_pci.c create mode 100644 drivers/net/zxdh/zxdh_pci.h create mode 100644 drivers/net/zxdh/zxdh_queue.h create mode 100644 drivers/net/zxdh/zxdh_rxtx.h diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build index 932fb1c835..7db4e7bc71 100644 --- a/drivers/net/zxdh/meson.build +++ b/drivers/net/zxdh/meson.build @@ -15,4 +15,5 @@ endif sources = files( 'zxdh_ethdev.c', + 'zxdh_pci.c', ) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 7220770c01..f34b2af7b3 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -8,6 +8,40 @@ #include "zxdh_ethdev.h" #include "zxdh_logs.h" +#include "zxdh_pci.h" + +struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; + +static int32_t zxdh_init_device(struct rte_eth_dev *eth_dev) +{ + struct zxdh_hw *hw = eth_dev->data->dev_private; + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); + int ret = 0; + + ret = zxdh_read_pci_caps(pci_dev, hw); + if (ret) { + PMD_INIT_LOG(ERR, "port 0x%x pci caps read failed.", hw->port_id); + goto err; + } + + zxdh_hw_internal[hw->port_id].vtpci_ops = &zxdh_dev_pci_ops; + zxdh_vtpci_reset(hw); + zxdh_get_pci_dev_config(hw); + + rte_ether_addr_copy((struct rte_ether_addr *)hw->mac_addr, &eth_dev->data->mac_addrs[0]); + + /* If host does not support both status and MSI-X then disable LSC */ + if (vtpci_with_feature(hw, ZXDH_NET_F_STATUS) && hw->use_msix != ZXDH_MSIX_NONE) + eth_dev->data->dev_flags |= RTE_ETH_DEV_INTR_LSC; + else + eth_dev->data->dev_flags &= ~RTE_ETH_DEV_INTR_LSC; + + return 0; + +err: + PMD_INIT_LOG(ERR, "port %d init device failed", eth_dev->data->port_id); + return ret; +} static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) { @@ -45,6 +79,15 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) hw->is_pf = 1; } + ret = zxdh_init_device(eth_dev); + if (ret < 0) + goto err_zxdh_init; + + return ret; + +err_zxdh_init: + rte_free(eth_dev->data->mac_addrs); + eth_dev->data->mac_addrs = NULL; return ret; } diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index 04023bfe84..18d9916713 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -9,6 +9,7 @@ extern "C" { #endif +#include <rte_ether.h> #include "ethdev_driver.h" /* ZXDH PCI vendor/device ID. 
*/ @@ -23,16 +24,31 @@ extern "C" { #define ZXDH_MAX_MC_MAC_ADDRS 32 #define ZXDH_MAX_MAC_ADDRS (ZXDH_MAX_UC_MAC_ADDRS + ZXDH_MAX_MC_MAC_ADDRS) +#define ZXDH_RX_QUEUES_MAX 128U +#define ZXDH_TX_QUEUES_MAX 128U + #define ZXDH_NUM_BARS 2 struct zxdh_hw { struct rte_eth_dev *eth_dev; - uint64_t bar_addr[ZXDH_NUM_BARS]; + struct zxdh_pci_common_cfg *common_cfg; + struct zxdh_net_config *dev_cfg; - uint32_t speed; + uint64_t bar_addr[ZXDH_NUM_BARS]; + uint64_t host_features; + uint64_t guest_features; + uint32_t max_queue_pairs; + uint32_t speed; + uint32_t notify_off_multiplier; + uint16_t *notify_base; + uint16_t pcie_id; uint16_t device_id; uint16_t port_id; + uint8_t *isr; + uint8_t weak_barriers; + uint8_t use_msix; + uint8_t mac_addr[RTE_ETHER_ADDR_LEN]; uint8_t duplex; uint8_t is_pf; }; diff --git a/drivers/net/zxdh/zxdh_pci.c b/drivers/net/zxdh/zxdh_pci.c new file mode 100644 index 0000000000..e23dbcbef5 --- /dev/null +++ b/drivers/net/zxdh/zxdh_pci.c @@ -0,0 +1,290 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#include <stdint.h> +#include <inttypes.h> +#include <unistd.h> + +#ifdef RTE_EXEC_ENV_LINUX + #include <dirent.h> + #include <fcntl.h> +#endif + +#include <rte_io.h> +#include <rte_bus.h> +#include <rte_pci.h> +#include <rte_common.h> + +#include "zxdh_ethdev.h" +#include "zxdh_pci.h" +#include "zxdh_logs.h" +#include "zxdh_queue.h" + +#define ZXDH_PMD_DEFAULT_GUEST_FEATURES \ + (1ULL << ZXDH_NET_F_MRG_RXBUF | \ + 1ULL << ZXDH_NET_F_STATUS | \ + 1ULL << ZXDH_NET_F_MQ | \ + 1ULL << ZXDH_F_ANY_LAYOUT | \ + 1ULL << ZXDH_F_VERSION_1 | \ + 1ULL << ZXDH_F_RING_PACKED | \ + 1ULL << ZXDH_F_IN_ORDER | \ + 1ULL << ZXDH_F_NOTIFICATION_DATA | \ + 1ULL << ZXDH_NET_F_MAC) + +static void zxdh_read_dev_config(struct zxdh_hw *hw, + size_t offset, + void *dst, + int32_t length) +{ + int32_t i = 0; + uint8_t *p = NULL; + uint8_t old_gen = 0; + uint8_t new_gen = 0; + + do { + old_gen = rte_read8(&hw->common_cfg->config_generation); + + p = dst; + for (i = 0; i < length; i++) + *p++ = rte_read8((uint8_t *)hw->dev_cfg + offset + i); + + new_gen = rte_read8(&hw->common_cfg->config_generation); + } while (old_gen != new_gen); +} + +static void zxdh_write_dev_config(struct zxdh_hw *hw, + size_t offset, + const void *src, + int32_t length) +{ + int32_t i = 0; + const uint8_t *p = src; + + for (i = 0; i < length; i++) + rte_write8((*p++), (((uint8_t *)hw->dev_cfg) + offset + i)); +} + +static uint8_t zxdh_get_status(struct zxdh_hw *hw) +{ + return rte_read8(&hw->common_cfg->device_status); +} + +static void zxdh_set_status(struct zxdh_hw *hw, uint8_t status) +{ + rte_write8(status, &hw->common_cfg->device_status); +} + +static uint64_t zxdh_get_features(struct zxdh_hw *hw) +{ + uint32_t features_lo = 0; + uint32_t features_hi = 0; + + rte_write32(0, &hw->common_cfg->device_feature_select); + features_lo = rte_read32(&hw->common_cfg->device_feature); + + rte_write32(1, &hw->common_cfg->device_feature_select); + features_hi = rte_read32(&hw->common_cfg->device_feature); + + return ((uint64_t)features_hi << 32) | features_lo; +} + +static void zxdh_set_features(struct zxdh_hw *hw, uint64_t features) +{ + rte_write32(0, &hw->common_cfg->guest_feature_select); + rte_write32(features & ((1ULL << 32) - 1), &hw->common_cfg->guest_feature); + rte_write32(1, &hw->common_cfg->guest_feature_select); + rte_write32(features >> 32, &hw->common_cfg->guest_feature); +} + +const struct zxdh_pci_ops zxdh_dev_pci_ops = { + .read_dev_cfg = zxdh_read_dev_config, + .write_dev_cfg = 
zxdh_write_dev_config, + .get_status = zxdh_get_status, + .set_status = zxdh_set_status, + .get_features = zxdh_get_features, + .set_features = zxdh_set_features, +}; + +uint64_t zxdh_vtpci_get_features(struct zxdh_hw *hw) +{ + return VTPCI_OPS(hw)->get_features(hw); +} + +void zxdh_vtpci_reset(struct zxdh_hw *hw) +{ + PMD_INIT_LOG(INFO, "port %u device start reset, just wait...", hw->port_id); + uint32_t retry = 0; + + VTPCI_OPS(hw)->set_status(hw, ZXDH_CONFIG_STATUS_RESET); + /* Flush status write and wait until the device is ready. */ + while (VTPCI_OPS(hw)->get_status(hw) != ZXDH_CONFIG_STATUS_RESET) { + ++retry; + usleep(1000L); + } + PMD_INIT_LOG(INFO, "port %u device reset %u ms done", hw->port_id, retry); +} + +static void *get_cfg_addr(struct rte_pci_device *dev, struct zxdh_pci_cap *cap) +{ + uint8_t bar = cap->bar; + uint32_t length = cap->length; + uint32_t offset = cap->offset; + + if (bar >= PCI_MAX_RESOURCE) { + PMD_INIT_LOG(ERR, "invalid bar: %u", bar); + return NULL; + } + if (offset + length < offset) { + PMD_INIT_LOG(ERR, "offset(%u) + length(%u) overflows", offset, length); + return NULL; + } + if (offset + length > dev->mem_resource[bar].len) { + PMD_INIT_LOG(ERR, "invalid cap: overflows bar space"); + return NULL; + } + uint8_t *base = dev->mem_resource[bar].addr; + + if (base == NULL) { + PMD_INIT_LOG(ERR, "bar %u base addr is NULL", bar); + return NULL; + } + return base + offset; +} + +int32_t zxdh_read_pci_caps(struct rte_pci_device *dev, struct zxdh_hw *hw) +{ + uint8_t pos = 0; + int32_t ret = 0; + + if (dev->mem_resource[0].addr == NULL) { + PMD_INIT_LOG(ERR, "bar0 base addr is NULL"); + return -1; + } + + ret = rte_pci_read_config(dev, &pos, 1, PCI_CAPABILITY_LIST); + + if (ret != 1) { + PMD_INIT_LOG(DEBUG, "failed to read pci capability list, ret %d", ret); + return -1; + } + while (pos) { + struct zxdh_pci_cap cap; + + ret = rte_pci_read_config(dev, &cap, 2, pos); + if (ret != 2) { + PMD_INIT_LOG(DEBUG, "failed to read pci cap at pos: %x ret %d", pos, ret); + break; + } + if (cap.cap_vndr == PCI_CAP_ID_MSIX) { + /** + * Transitional devices would also have this capability, + * that's why we also check if msix is enabled. + * 1st byte is cap ID; 2nd byte is the position of next cap; + * next two bytes are the flags. + */ + uint16_t flags = 0; + + ret = rte_pci_read_config(dev, &flags, sizeof(flags), pos + 2); + if (ret != sizeof(flags)) { + PMD_INIT_LOG(ERR, "failed to read pci cap at pos: %x ret %d", + pos + 2, ret); + break; + } + hw->use_msix = (flags & PCI_MSIX_ENABLE) ? 
+ ZXDH_MSIX_ENABLED : ZXDH_MSIX_DISABLED; + } + if (cap.cap_vndr != PCI_CAP_ID_VNDR) { + PMD_INIT_LOG(DEBUG, "[%2x] skipping non VNDR cap id: %02x", + pos, cap.cap_vndr); + goto next; + } + ret = rte_pci_read_config(dev, &cap, sizeof(cap), pos); + if (ret != sizeof(cap)) { + PMD_INIT_LOG(ERR, "failed to read pci cap at pos: %x ret %d", pos, ret); + break; + } + PMD_INIT_LOG(DEBUG, "[%2x] cfg type: %u, bar: %u, offset: %04x, len: %u", + pos, cap.cfg_type, cap.bar, cap.offset, cap.length); + switch (cap.cfg_type) { + case ZXDH_PCI_CAP_COMMON_CFG: + hw->common_cfg = get_cfg_addr(dev, &cap); + break; + case ZXDH_PCI_CAP_NOTIFY_CFG: { + ret = rte_pci_read_config(dev, &hw->notify_off_multiplier, + 4, pos + sizeof(cap)); + if (ret != 4) + PMD_INIT_LOG(ERR, + "failed to read notify_off_multiplier, ret %d", ret); + else + hw->notify_base = get_cfg_addr(dev, &cap); + break; + } + case ZXDH_PCI_CAP_DEVICE_CFG: + hw->dev_cfg = get_cfg_addr(dev, &cap); + break; + case ZXDH_PCI_CAP_ISR_CFG: + hw->isr = get_cfg_addr(dev, &cap); + break; + case ZXDH_PCI_CAP_PCI_CFG: { + hw->pcie_id = *(uint16_t *)&cap.padding[1]; + PMD_INIT_LOG(DEBUG, "get pcie id 0x%x", hw->pcie_id); + uint16_t pcie_id = hw->pcie_id; + + if ((pcie_id >> 11) & 0x1) /* PF */ { + PMD_INIT_LOG(DEBUG, "EP %u PF %u", + pcie_id >> 12, (pcie_id >> 8) & 0x7); + } else { /* VF */ + PMD_INIT_LOG(DEBUG, "EP %u PF %u VF %u", + pcie_id >> 12, (pcie_id >> 8) & 0x7, pcie_id & 0xff); + } + break; + } + } +next: + pos = cap.cap_next; + } + if (hw->common_cfg == NULL || hw->notify_base == NULL || + hw->dev_cfg == NULL || hw->isr == NULL) { + PMD_INIT_LOG(ERR, "no zxdh pci device found."); + return -1; + } + return 0; +} + +void zxdh_vtpci_read_dev_config(struct zxdh_hw *hw, size_t offset, void *dst, int32_t length) +{ + VTPCI_OPS(hw)->read_dev_cfg(hw, offset, dst, length); +} + +int32_t zxdh_get_pci_dev_config(struct zxdh_hw *hw) +{ + uint64_t guest_features = 0; + uint64_t nego_features = 0; + uint32_t max_queue_pairs = 0; + + hw->host_features = zxdh_vtpci_get_features(hw); + + guest_features = (uint64_t)ZXDH_PMD_DEFAULT_GUEST_FEATURES; + nego_features = guest_features & hw->host_features; + + hw->guest_features = nego_features; + + if (hw->guest_features & (1ULL << ZXDH_NET_F_MAC)) { + zxdh_vtpci_read_dev_config(hw, offsetof(struct zxdh_net_config, mac), + &hw->mac_addr, RTE_ETHER_ADDR_LEN); + } else { + rte_eth_random_addr(&hw->mac_addr[0]); + } + + zxdh_vtpci_read_dev_config(hw, offsetof(struct zxdh_net_config, max_virtqueue_pairs), + &max_queue_pairs, sizeof(max_queue_pairs)); + + if (max_queue_pairs == 0) + hw->max_queue_pairs = ZXDH_RX_QUEUES_MAX; + else + hw->max_queue_pairs = RTE_MIN(ZXDH_RX_QUEUES_MAX, max_queue_pairs); + PMD_INIT_LOG(DEBUG, "set max queue pairs %d", hw->max_queue_pairs); + + return 0; +} diff --git a/drivers/net/zxdh/zxdh_pci.h b/drivers/net/zxdh/zxdh_pci.h new file mode 100644 index 0000000000..deda73a65a --- /dev/null +++ b/drivers/net/zxdh/zxdh_pci.h @@ -0,0 +1,151 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#ifndef _ZXDH_PCI_H_ +#define _ZXDH_PCI_H_ + +#ifdef __cplusplus +extern "C" { +#endif + +#include <stdint.h> +#include <stdbool.h> + +#include <bus_pci_driver.h> + +#include "zxdh_ethdev.h" + +enum zxdh_msix_status { + ZXDH_MSIX_NONE = 0, + ZXDH_MSIX_DISABLED = 1, + ZXDH_MSIX_ENABLED = 2 +}; + +#define PCI_CAPABILITY_LIST 0x34 +#define PCI_CAP_ID_VNDR 0x09 +#define PCI_CAP_ID_MSIX 0x11 + +#define PCI_MSIX_ENABLE 0x8000 + +#define ZXDH_NET_F_MAC 5 /* Host has given MAC 
address. */ +#define ZXDH_NET_F_MRG_RXBUF 15 /* Host can merge receive buffers. */ +#define ZXDH_NET_F_STATUS 16 /* zxdh_net_config.status available */ +#define ZXDH_NET_F_MQ 22 /* Device supports Receive Flow Steering */ +#define ZXDH_F_ANY_LAYOUT 27 /* Can the device handle any descriptor layout */ +#define ZXDH_F_VERSION_1 32 +#define ZXDH_F_RING_PACKED 34 +#define ZXDH_F_IN_ORDER 35 +#define ZXDH_F_NOTIFICATION_DATA 38 + +#define ZXDH_PCI_CAP_COMMON_CFG 1 /* Common configuration */ +#define ZXDH_PCI_CAP_NOTIFY_CFG 2 /* Notifications */ +#define ZXDH_PCI_CAP_ISR_CFG 3 /* ISR Status */ +#define ZXDH_PCI_CAP_DEVICE_CFG 4 /* Device specific configuration */ +#define ZXDH_PCI_CAP_PCI_CFG 5 /* PCI configuration access */ + +/* Status byte for guest to report progress. */ +#define ZXDH_CONFIG_STATUS_RESET 0x00 +#define ZXDH_CONFIG_STATUS_ACK 0x01 +#define ZXDH_CONFIG_STATUS_DRIVER 0x02 +#define ZXDH_CONFIG_STATUS_DRIVER_OK 0x04 +#define ZXDH_CONFIG_STATUS_FEATURES_OK 0x08 +#define ZXDH_CONFIG_STATUS_DEV_NEED_RESET 0x40 +#define ZXDH_CONFIG_STATUS_FAILED 0x80 + +struct zxdh_net_config { + /* The config defining mac address (if ZXDH_NET_F_MAC) */ + uint8_t mac[RTE_ETHER_ADDR_LEN]; + /* See ZXDH_NET_F_STATUS and ZXDH_NET_S_* above */ + uint16_t status; + uint16_t max_virtqueue_pairs; + uint16_t mtu; + /* + * speed, in units of 1Mb. All values 0 to INT_MAX are legal. + * Any other value stands for unknown. + */ + uint32_t speed; + /* 0x00 - half duplex + * 0x01 - full duplex + * Any other value stands for unknown. + */ + uint8_t duplex; +} __rte_packed; + +/* This is the PCI capability header: */ +struct zxdh_pci_cap { + uint8_t cap_vndr; /* Generic PCI field: PCI_CAP_ID_VNDR */ + uint8_t cap_next; /* Generic PCI field: next ptr. */ + uint8_t cap_len; /* Generic PCI field: capability length */ + uint8_t cfg_type; /* Identifies the structure. */ + uint8_t bar; /* Where to find it. */ + uint8_t padding[3]; /* Pad to full dword. */ + uint32_t offset; /* Offset within bar. */ + uint32_t length; /* Length of the structure, in bytes. */ +}; + +/* Fields in ZXDH_PCI_CAP_COMMON_CFG: */ +struct zxdh_pci_common_cfg { + /* About the whole device. */ + uint32_t device_feature_select; /* read-write */ + uint32_t device_feature; /* read-only */ + uint32_t guest_feature_select; /* read-write */ + uint32_t guest_feature; /* read-write */ + uint16_t msix_config; /* read-write */ + uint16_t num_queues; /* read-only */ + uint8_t device_status; /* read-write */ + uint8_t config_generation; /* read-only */ + + /* About a specific virtqueue. */ + uint16_t queue_select; /* read-write */ + uint16_t queue_size; /* read-write, power of 2. 
*/ + uint16_t queue_msix_vector; /* read-write */ + uint16_t queue_enable; /* read-write */ + uint16_t queue_notify_off; /* read-only */ + uint32_t queue_desc_lo; /* read-write */ + uint32_t queue_desc_hi; /* read-write */ + uint32_t queue_avail_lo; /* read-write */ + uint32_t queue_avail_hi; /* read-write */ + uint32_t queue_used_lo; /* read-write */ + uint32_t queue_used_hi; /* read-write */ +}; + +static inline int32_t vtpci_with_feature(struct zxdh_hw *hw, uint64_t bit) +{ + return (hw->guest_features & (1ULL << bit)) != 0; +} + +struct zxdh_pci_ops { + void (*read_dev_cfg)(struct zxdh_hw *hw, size_t offset, void *dst, int32_t len); + void (*write_dev_cfg)(struct zxdh_hw *hw, size_t offset, const void *src, int32_t len); + + uint8_t (*get_status)(struct zxdh_hw *hw); + void (*set_status)(struct zxdh_hw *hw, uint8_t status); + + uint64_t (*get_features)(struct zxdh_hw *hw); + void (*set_features)(struct zxdh_hw *hw, uint64_t features); +}; + +struct zxdh_hw_internal { + const struct zxdh_pci_ops *vtpci_ops; +}; + +#define VTPCI_OPS(hw) (zxdh_hw_internal[(hw)->port_id].vtpci_ops) + +extern struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; +extern const struct zxdh_pci_ops zxdh_dev_pci_ops; + +void zxdh_vtpci_reset(struct zxdh_hw *hw); +void zxdh_vtpci_read_dev_config(struct zxdh_hw *hw, size_t offset, + void *dst, int32_t length); + +int32_t zxdh_read_pci_caps(struct rte_pci_device *dev, struct zxdh_hw *hw); +int32_t zxdh_get_pci_dev_config(struct zxdh_hw *hw); + +uint64_t zxdh_vtpci_get_features(struct zxdh_hw *hw); + +#ifdef __cplusplus +} +#endif + +#endif /* _ZXDH_PCI_H_ */ diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h new file mode 100644 index 0000000000..336c0701f4 --- /dev/null +++ b/drivers/net/zxdh/zxdh_queue.h @@ -0,0 +1,105 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#ifndef _ZXDH_QUEUE_H_ +#define _ZXDH_QUEUE_H_ + +#ifdef __cplusplus +extern "C" { +#endif + +#include <stdint.h> + +#include <rte_common.h> + +#include "zxdh_ethdev.h" +#include "zxdh_rxtx.h" + +/** ring descriptors: 16 bytes. + * These can chain together via "next". + **/ +struct vring_desc { + uint64_t addr; /* Address (guest-physical). */ + uint32_t len; /* Length. */ + uint16_t flags; /* The flags as indicated above. */ + uint16_t next; /* We chain unused descriptors via this. */ +}; + +struct vring_avail { + uint16_t flags; + uint16_t idx; + uint16_t ring[0]; +}; + +struct vring_packed_desc { + uint64_t addr; + uint32_t len; + uint16_t id; + uint16_t flags; +}; + +struct vring_packed_desc_event { + uint16_t desc_event_off_wrap; + uint16_t desc_event_flags; +}; + +struct vring_packed { + uint32_t num; + struct vring_packed_desc *desc; + struct vring_packed_desc_event *driver; + struct vring_packed_desc_event *device; +}; + +struct vq_desc_extra { + void *cookie; + uint16_t ndescs; + uint16_t next; +}; + +struct virtqueue { + struct zxdh_hw *hw; /**< zxdh_hw structure pointer. 
*/ + struct { + /**< vring keeping descs and events */ + struct vring_packed ring; + uint8_t used_wrap_counter; + uint8_t rsv; + uint16_t cached_flags; /**< cached flags for descs */ + uint16_t event_flags_shadow; + uint16_t rsv1; + } __rte_packed vq_packed; + uint16_t vq_used_cons_idx; /**< last consumed descriptor */ + uint16_t vq_nentries; /**< vring desc numbers */ + uint16_t vq_free_cnt; /**< num of desc available */ + uint16_t vq_avail_idx; /**< sync until needed */ + uint16_t vq_free_thresh; /**< free threshold */ + uint16_t rsv2; + + void *vq_ring_virt_mem; /**< linear address of vring*/ + uint32_t vq_ring_size; + + union { + struct virtnet_rx rxq; + struct virtnet_tx txq; + }; + + /** < physical address of vring, + * or virtual address for virtio_user. + **/ + rte_iova_t vq_ring_mem; + + /** + * Head of the free chain in the descriptor table. If + * there are no free descriptors, this will be set to + * VQ_RING_DESC_CHAIN_END. + **/ + uint16_t vq_desc_head_idx; + uint16_t vq_desc_tail_idx; + uint16_t vq_queue_index; /**< PCI queue index */ + uint16_t offset; /**< relative offset to obtain addr in mbuf */ + uint16_t *notify_addr; + struct rte_mbuf **sw_ring; /**< RX software ring. */ + struct vq_desc_extra vq_descx[0]; +}; + +#endif /* _ZXDH_QUEUE_H_ */ diff --git a/drivers/net/zxdh/zxdh_rxtx.h b/drivers/net/zxdh/zxdh_rxtx.h new file mode 100644 index 0000000000..7314f76d2c --- /dev/null +++ b/drivers/net/zxdh/zxdh_rxtx.h @@ -0,0 +1,51 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2023 ZTE Corporation + */ + +#ifndef _ZXDH_RXTX_H_ +#define _ZXDH_RXTX_H_ + +#ifdef __cplusplus +extern "C" { +#endif + +#include <stdint.h> + +#include <rte_common.h> +#include <rte_mbuf_core.h> + +struct virtnet_stats { + uint64_t packets; + uint64_t bytes; + uint64_t errors; + uint64_t multicast; + uint64_t broadcast; + uint64_t truncated_err; + uint64_t size_bins[8]; /* Size bins in array as RFC 2819, undersized [0], 64 [1], etc */ +}; + +struct virtnet_rx { + struct virtqueue *vq; + + /* dummy mbuf, for wraparound when processing RX ring. */ + struct rte_mbuf fake_mbuf; + + uint64_t mbuf_initializer; /* value to init mbufs. */ + struct rte_mempool *mpool; /* mempool for mbuf allocation */ + uint16_t queue_id; /* DPDK queue index. */ + uint16_t port_id; /* Device port identifier. */ + struct virtnet_stats stats; + const struct rte_memzone *mz; /* mem zone to populate RX ring. */ +}; + +struct virtnet_tx { + struct virtqueue *vq; + const struct rte_memzone *virtio_net_hdr_mz; /* memzone to populate hdr. */ + rte_iova_t virtio_net_hdr_mem; /* hdr for each xmit packet */ + uint16_t queue_id; /* DPDK queue index. */ + uint16_t port_id; /* Device port identifier. */ + struct virtnet_stats stats; + const struct rte_memzone *mz; /* mem zone to populate TX ring. */ +}; + +#endif /* _ZXDH_RXTX_H_ */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 53727 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
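The negotiation in zxdh_get_pci_dev_config() above is a plain AND of the driver's wanted feature bits with what the device offers, and vtpci_with_feature() then tests single bits of the result. A standalone sketch of the same arithmetic, with the bit positions copied from zxdh_pci.h:

#include <stdint.h>
#include <stdio.h>

#define ZXDH_NET_F_MAC     5
#define ZXDH_NET_F_STATUS  16
#define ZXDH_F_RING_PACKED 34

static int with_feature(uint64_t features, uint64_t bit)
{
	return (features & (1ULL << bit)) != 0; /* same test as vtpci_with_feature() */
}

int main(void)
{
	uint64_t host  = (1ULL << ZXDH_NET_F_MAC) | (1ULL << ZXDH_F_RING_PACKED);
	uint64_t guest = (1ULL << ZXDH_NET_F_MAC) | (1ULL << ZXDH_NET_F_STATUS);
	uint64_t nego  = host & guest; /* only bits both sides support survive */

	printf("MAC=%d STATUS=%d RING_PACKED=%d\n",
	       with_feature(nego, ZXDH_NET_F_MAC),
	       with_feature(nego, ZXDH_NET_F_STATUS),
	       with_feature(nego, ZXDH_F_RING_PACKED));
	return 0;
}

Note the feature words are 64-bit throughout, which is why zxdh_vtpci_get_features() returns uint64_t: bits such as ZXDH_F_RING_PACKED (34) would otherwise be truncated.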
* [PATCH v6 4/9] net/zxdh: add msg chan and msg hwlock init 2024-10-16 8:18 ` [PATCH v6 2/9] net/zxdh: add logging implementation Junlong Wang 2024-10-16 8:18 ` [PATCH v6 3/9] net/zxdh: add zxdh device pci init implementation Junlong Wang @ 2024-10-16 8:18 ` Junlong Wang 2024-10-16 8:18 ` [PATCH v6 5/9] net/zxdh: add msg chan enable implementation Junlong Wang ` (4 subsequent siblings) 6 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-10-16 8:18 UTC (permalink / raw) To: dev; +Cc: stephen, ferruh.yigit, wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 8734 bytes --] Add msg channel and hwlock init implementation. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/meson.build | 1 + drivers/net/zxdh/zxdh_ethdev.c | 15 +++ drivers/net/zxdh/zxdh_ethdev.h | 1 + drivers/net/zxdh/zxdh_msg.c | 161 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_msg.h | 65 +++++++++++++ 5 files changed, 243 insertions(+) create mode 100644 drivers/net/zxdh/zxdh_msg.c create mode 100644 drivers/net/zxdh/zxdh_msg.h diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build index 7db4e7bc71..2e0c8fddae 100644 --- a/drivers/net/zxdh/meson.build +++ b/drivers/net/zxdh/meson.build @@ -16,4 +16,5 @@ endif sources = files( 'zxdh_ethdev.c', 'zxdh_pci.c', + 'zxdh_msg.c', ) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index f34b2af7b3..e69d11e9fd 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -9,6 +9,7 @@ #include "zxdh_ethdev.h" #include "zxdh_logs.h" #include "zxdh_pci.h" +#include "zxdh_msg.h" struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; @@ -83,9 +84,23 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) if (ret < 0) goto err_zxdh_init; + ret = zxdh_msg_chan_init(); + if (ret < 0) { + PMD_INIT_LOG(ERR, "Failed to init bar msg chan"); + goto err_zxdh_init; + } + hw->msg_chan_init = 1; + + ret = zxdh_msg_chan_hwlock_init(eth_dev); + if (ret != 0) { + PMD_INIT_LOG(ERR, "zxdh_msg_chan_hwlock_init failed ret %d", ret); + goto err_zxdh_init; + } + return ret; err_zxdh_init: + zxdh_bar_msg_chan_exit(); rte_free(eth_dev->data->mac_addrs); eth_dev->data->mac_addrs = NULL; return ret; diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index 18d9916713..24eb3a5ca0 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -51,6 +51,7 @@ struct zxdh_hw { uint8_t mac_addr[RTE_ETHER_ADDR_LEN]; uint8_t duplex; uint8_t is_pf; + uint8_t msg_chan_init; }; #ifdef __cplusplus diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c new file mode 100644 index 0000000000..f2387803fc --- /dev/null +++ b/drivers/net/zxdh/zxdh_msg.c @@ -0,0 +1,161 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#include <stdbool.h> + +#include <rte_common.h> +#include <rte_memcpy.h> +#include <pthread.h> +#include <rte_cycles.h> +#include <inttypes.h> +#include <rte_malloc.h> + +#include "zxdh_ethdev.h" +#include "zxdh_logs.h" +#include "zxdh_msg.h" + +#define REPS_INFO_FLAG_USABLE 0x00 +#define BAR_SEQID_NUM_MAX 256 + +#define ZXDH_BAR0_INDEX 0 + +#define PCIEID_IS_PF_MASK (0x0800) +#define PCIEID_PF_IDX_MASK (0x0700) +#define PCIEID_VF_IDX_MASK (0x00ff) +#define PCIEID_EP_IDX_MASK (0x7000) +/* PCIEID bit field offset */ +#define PCIEID_PF_IDX_OFFSET (8) +#define PCIEID_EP_IDX_OFFSET (12) + +#define MULTIPLY_BY_8(x) ((x) << 3) +#define MULTIPLY_BY_32(x) ((x) << 5) +#define 
MULTIPLY_BY_256(x) ((x) << 8) + +#define MAX_EP_NUM (4) +#define MAX_HARD_SPINLOCK_NUM (511) + +#define BAR0_SPINLOCK_OFFSET (0x4000) +#define FW_SHRD_OFFSET (0x5000) +#define FW_SHRD_INNER_HW_LABEL_PAT (0x800) +#define HW_LABEL_OFFSET (FW_SHRD_OFFSET + FW_SHRD_INNER_HW_LABEL_PAT) + +struct dev_stat { + bool is_mpf_scanned; + bool is_res_init; + int16_t dev_cnt; /* probe cnt */ +}; +struct dev_stat g_dev_stat = {0}; + +struct seqid_item { + void *reps_addr; + uint16_t id; + uint16_t buffer_len; + uint16_t flag; +}; + +struct seqid_ring { + uint16_t cur_id; + pthread_spinlock_t lock; + struct seqid_item reps_info_tbl[BAR_SEQID_NUM_MAX]; +}; +struct seqid_ring g_seqid_ring = {0}; + +static uint16_t pcie_id_to_hard_lock(uint16_t src_pcieid, uint8_t dst) +{ + uint16_t lock_id = 0; + uint16_t pf_idx = (src_pcieid & PCIEID_PF_IDX_MASK) >> PCIEID_PF_IDX_OFFSET; + uint16_t ep_idx = (src_pcieid & PCIEID_EP_IDX_MASK) >> PCIEID_EP_IDX_OFFSET; + + switch (dst) { + /* msg to risc */ + case MSG_CHAN_END_RISC: + lock_id = MULTIPLY_BY_8(ep_idx) + pf_idx; + break; + /* msg to pf/vf */ + case MSG_CHAN_END_VF: + case MSG_CHAN_END_PF: + lock_id = MULTIPLY_BY_8(ep_idx) + pf_idx + MULTIPLY_BY_8(1 + MAX_EP_NUM); + break; + default: + lock_id = 0; + break; + } + if (lock_id >= MAX_HARD_SPINLOCK_NUM) + lock_id = 0; + + return lock_id; +} + +static void label_write(uint64_t label_lock_addr, uint32_t lock_id, uint16_t value) +{ + *(volatile uint16_t *)(label_lock_addr + lock_id * 2) = value; +} + +static void spinlock_write(uint64_t virt_lock_addr, uint32_t lock_id, uint8_t data) +{ + *(volatile uint8_t *)((uint64_t)virt_lock_addr + (uint64_t)lock_id) = data; +} + +static int32_t zxdh_spinlock_unlock(uint32_t virt_lock_id, uint64_t virt_addr, uint64_t label_addr) +{ + label_write((uint64_t)label_addr, virt_lock_id, 0); + spinlock_write(virt_addr, virt_lock_id, 0); + return 0; +} + +/** + * Fun: PF init hard_spinlock addr + */ +static int bar_chan_pf_init_spinlock(uint16_t pcie_id, uint64_t bar_base_addr) +{ + int lock_id = pcie_id_to_hard_lock(pcie_id, MSG_CHAN_END_RISC); + + zxdh_spinlock_unlock(lock_id, bar_base_addr + BAR0_SPINLOCK_OFFSET, + bar_base_addr + HW_LABEL_OFFSET); + lock_id = pcie_id_to_hard_lock(pcie_id, MSG_CHAN_END_VF); + zxdh_spinlock_unlock(lock_id, bar_base_addr + BAR0_SPINLOCK_OFFSET, + bar_base_addr + HW_LABEL_OFFSET); + return 0; +} + +int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + if (!hw->is_pf) + return 0; + return bar_chan_pf_init_spinlock(hw->pcie_id, (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX])); +} + +pthread_spinlock_t chan_lock; +int zxdh_msg_chan_init(void) +{ + uint16_t seq_id = 0; + + g_dev_stat.dev_cnt++; + if (g_dev_stat.is_res_init) + return BAR_MSG_OK; + + pthread_spin_init(&chan_lock, 0); + g_seqid_ring.cur_id = 0; + pthread_spin_init(&g_seqid_ring.lock, 0); + + for (seq_id = 0; seq_id < BAR_SEQID_NUM_MAX; seq_id++) { + struct seqid_item *reps_info = &g_seqid_ring.reps_info_tbl[seq_id]; + + reps_info->id = seq_id; + reps_info->flag = REPS_INFO_FLAG_USABLE; + } + g_dev_stat.is_res_init = true; + return BAR_MSG_OK; +} + +int zxdh_bar_msg_chan_exit(void) +{ + if (!g_dev_stat.is_res_init || (--g_dev_stat.dev_cnt > 0)) + return BAR_MSG_OK; + + g_dev_stat.is_res_init = false; + return BAR_MSG_OK; +} diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h new file mode 100644 index 0000000000..a619e6ae21 --- /dev/null +++ b/drivers/net/zxdh/zxdh_msg.h @@ -0,0 +1,65 @@ +/* SPDX-License-Identifier: BSD-3-Clause + 
* Copyright(c) 2024 ZTE Corporation + */ + +#ifndef _ZXDH_MSG_H_ +#define _ZXDH_MSG_H_ + +#ifdef __cplusplus +extern "C" { +#endif + +#include <stdint.h> + +#include <ethdev_driver.h> + +enum DRIVER_TYPE { + MSG_CHAN_END_MPF = 0, + MSG_CHAN_END_PF, + MSG_CHAN_END_VF, + MSG_CHAN_END_RISC, +}; + +enum BAR_MSG_RTN { + BAR_MSG_OK = 0, + BAR_MSG_ERR_MSGID, + BAR_MSG_ERR_NULL, + BAR_MSG_ERR_TYPE, /* Message type exception */ + BAR_MSG_ERR_MODULE, /* Module ID exception */ + BAR_MSG_ERR_BODY_NULL, /* Message body exception */ + BAR_MSG_ERR_LEN, /* Message length exception */ + BAR_MSG_ERR_TIME_OUT, /* Message sending length too long */ + BAR_MSG_ERR_NOT_READY, /* Abnormal message sending conditions*/ + BAR_MEG_ERR_NULL_FUNC, /* Empty receive processing function pointer*/ + BAR_MSG_ERR_REPEAT_REGISTER, /* Module duplicate registration*/ + BAR_MSG_ERR_UNGISTER, /* Repeated deregistration*/ + /** + * The sending interface parameter boundary structure pointer is empty + */ + BAR_MSG_ERR_NULL_PARA, + BAR_MSG_ERR_REPSBUFF_LEN, /* The length of reps_buff is too short*/ + /** + * Unable to find the corresponding message processing function for this module + */ + BAR_MSG_ERR_MODULE_NOEXIST, + /** + * The virtual address in the parameters passed in by the sending interface is empty + */ + BAR_MSG_ERR_VIRTADDR_NULL, + BAR_MSG_ERR_REPLY, /* sync msg resp_error */ + BAR_MSG_ERR_MPF_NOT_SCANNED, + BAR_MSG_ERR_KERNEL_READY, + BAR_MSG_ERR_USR_RET_ERR, + BAR_MSG_ERR_ERR_PCIEID, + BAR_MSG_ERR_SOCKET, /* netlink sockte err */ +}; + +int zxdh_msg_chan_init(void); +int zxdh_bar_msg_chan_exit(void); +int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev); + +#ifdef __cplusplus +} +#endif + +#endif /* _ZXDH_MSG_H_ */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 17190 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
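The message-channel state created in this patch is process-global and shared by every probed port, so zxdh_msg_chan_init() and zxdh_bar_msg_chan_exit() are reference-counted rather than strictly paired per device. A stripped-down model of that lifecycle (the *_model names are illustrative, not from the patch):

    #include <stdbool.h>

    static int dev_cnt;      /* mirrors g_dev_stat.dev_cnt */
    static bool res_init;    /* mirrors g_dev_stat.is_res_init */

    static int chan_init_model(void)
    {
        dev_cnt++;
        if (res_init)
            return 0;   /* an earlier port already built the seqid ring */
        /* ... mark all BAR_SEQID_NUM_MAX (256) reply slots usable ... */
        res_init = true;
        return 0;
    }

    static int chan_exit_model(void)
    {
        if (!res_init || --dev_cnt > 0)
            return 0;   /* other ports still use the channel */
        res_init = false;   /* last reference dropped: tear down */
        return 0;
    }

Only the last exit call actually releases the shared state; intermediate exits merely drop a reference.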
* [PATCH v6 5/9] net/zxdh: add msg chan enable implementation 2024-10-16 8:18 ` [PATCH v6 2/9] net/zxdh: add logging implementation Junlong Wang 2024-10-16 8:18 ` [PATCH v6 3/9] net/zxdh: add zxdh device pci init implementation Junlong Wang 2024-10-16 8:18 ` [PATCH v6 4/9] net/zxdh: add msg chan and msg hwlock init Junlong Wang @ 2024-10-16 8:18 ` Junlong Wang 2024-10-21 8:50 ` Thomas Monjalon 2024-10-21 10:56 ` Junlong Wang 2024-10-16 8:18 ` [PATCH v6 6/9] net/zxdh: add zxdh get device backend infos Junlong Wang ` (3 subsequent siblings) 6 siblings, 2 replies; 225+ messages in thread From: Junlong Wang @ 2024-10-16 8:18 UTC (permalink / raw) To: dev; +Cc: stephen, ferruh.yigit, wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 26958 bytes --] Add msg chan enable implementation to support send msg to get infos. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/zxdh_ethdev.c | 6 + drivers/net/zxdh/zxdh_ethdev.h | 12 + drivers/net/zxdh/zxdh_msg.c | 643 ++++++++++++++++++++++++++++++++- drivers/net/zxdh/zxdh_msg.h | 127 +++++++ 4 files changed, 787 insertions(+), 1 deletion(-) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index e69d11e9fd..5002e76e23 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -97,6 +97,12 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) goto err_zxdh_init; } + ret = zxdh_msg_chan_enable(eth_dev); + if (ret != 0) { + PMD_INIT_LOG(ERR, "zxdh_msg_bar_chan_enable failed ret %d", ret); + goto err_zxdh_init; + } + return ret; err_zxdh_init: diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index 24eb3a5ca0..a51181f1ce 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -29,10 +29,22 @@ extern "C" { #define ZXDH_NUM_BARS 2 +union VPORT { + uint16_t vport; + struct { + uint16_t vfid:8; + uint16_t pfid:3; + uint16_t vf_flag:1; + uint16_t epid:3; + uint16_t direct_flag:1; + }; +}; + struct zxdh_hw { struct rte_eth_dev *eth_dev; struct zxdh_pci_common_cfg *common_cfg; struct zxdh_net_config *dev_cfg; + union VPORT vport; uint64_t bar_addr[ZXDH_NUM_BARS]; uint64_t host_features; diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c index f2387803fc..3ac4c8d796 100644 --- a/drivers/net/zxdh/zxdh_msg.c +++ b/drivers/net/zxdh/zxdh_msg.c @@ -32,13 +32,85 @@ #define MULTIPLY_BY_32(x) ((x) << 5) #define MULTIPLY_BY_256(x) ((x) << 8) -#define MAX_EP_NUM (4) +#define MAX_EP_NUM (4) #define MAX_HARD_SPINLOCK_NUM (511) +#define LOCK_PRIMARY_ID_MASK (0x8000) +/* bar offset */ +#define BAR0_CHAN_RISC_OFFSET (0x2000) +#define BAR0_CHAN_PFVF_OFFSET (0x3000) #define BAR0_SPINLOCK_OFFSET (0x4000) #define FW_SHRD_OFFSET (0x5000) #define FW_SHRD_INNER_HW_LABEL_PAT (0x800) #define HW_LABEL_OFFSET (FW_SHRD_OFFSET + FW_SHRD_INNER_HW_LABEL_PAT) +#define ZXDH_CTRLCH_OFFSET (0x2000) +#define CHAN_RISC_SPINLOCK_OFFSET (BAR0_SPINLOCK_OFFSET - BAR0_CHAN_RISC_OFFSET) +#define CHAN_PFVF_SPINLOCK_OFFSET (BAR0_SPINLOCK_OFFSET - BAR0_CHAN_PFVF_OFFSET) +#define CHAN_RISC_LABEL_OFFSET (HW_LABEL_OFFSET - BAR0_CHAN_RISC_OFFSET) +#define CHAN_PFVF_LABEL_OFFSET (HW_LABEL_OFFSET - BAR0_CHAN_PFVF_OFFSET) + +#define REPS_HEADER_LEN_OFFSET 1 +#define REPS_HEADER_PAYLOAD_OFFSET 4 +#define REPS_HEADER_REPLYED 0xff + +#define BAR_MSG_CHAN_USABLE 0 +#define BAR_MSG_CHAN_USED 1 + +#define BAR_MSG_POL_MASK (0x10) +#define BAR_MSG_POL_OFFSET (4) + +#define BAR_ALIGN_WORD_MASK 0xfffffffc +#define BAR_MSG_VALID_MASK 1 +#define 
BAR_MSG_VALID_OFFSET 0 + +#define REPS_INFO_FLAG_USABLE 0x00 +#define REPS_INFO_FLAG_USED 0xa0 + +#define BAR_PF_NUM 7 +#define BAR_VF_NUM 256 +#define BAR_INDEX_PF_TO_VF 0 +#define BAR_INDEX_MPF_TO_MPF 0xff +#define BAR_INDEX_MPF_TO_PFVF 0 +#define BAR_INDEX_PFVF_TO_MPF 0 + +#define MAX_HARD_SPINLOCK_ASK_TIMES (1000) +#define SPINLOCK_POLLING_SPAN_US (100) + +#define BAR_MSG_SRC_NUM 3 +#define BAR_MSG_SRC_MPF 0 +#define BAR_MSG_SRC_PF 1 +#define BAR_MSG_SRC_VF 2 +#define BAR_MSG_SRC_ERR 0xff +#define BAR_MSG_DST_NUM 3 +#define BAR_MSG_DST_RISC 0 +#define BAR_MSG_DST_MPF 2 +#define BAR_MSG_DST_PFVF 1 +#define BAR_MSG_DST_ERR 0xff + +#define LOCK_TYPE_HARD (1) +#define LOCK_TYPE_SOFT (0) +#define BAR_INDEX_TO_RISC 0 + +#define BAR_SUBCHAN_INDEX_SEND 0 +#define BAR_SUBCHAN_INDEX_RECV 1 + +uint8_t subchan_id_tbl[BAR_MSG_SRC_NUM][BAR_MSG_DST_NUM] = { + {BAR_SUBCHAN_INDEX_SEND, BAR_SUBCHAN_INDEX_SEND, BAR_SUBCHAN_INDEX_SEND}, + {BAR_SUBCHAN_INDEX_SEND, BAR_SUBCHAN_INDEX_SEND, BAR_SUBCHAN_INDEX_RECV}, + {BAR_SUBCHAN_INDEX_SEND, BAR_SUBCHAN_INDEX_RECV, BAR_SUBCHAN_INDEX_RECV} +}; + +uint8_t chan_id_tbl[BAR_MSG_SRC_NUM][BAR_MSG_DST_NUM] = { + {BAR_INDEX_TO_RISC, BAR_INDEX_MPF_TO_PFVF, BAR_INDEX_MPF_TO_MPF}, + {BAR_INDEX_TO_RISC, BAR_INDEX_PF_TO_VF, BAR_INDEX_PFVF_TO_MPF}, + {BAR_INDEX_TO_RISC, BAR_INDEX_PF_TO_VF, BAR_INDEX_PFVF_TO_MPF} +}; + +uint8_t lock_type_tbl[BAR_MSG_SRC_NUM][BAR_MSG_DST_NUM] = { + {LOCK_TYPE_HARD, LOCK_TYPE_HARD, LOCK_TYPE_HARD}, + {LOCK_TYPE_SOFT, LOCK_TYPE_SOFT, LOCK_TYPE_HARD}, + {LOCK_TYPE_HARD, LOCK_TYPE_HARD, LOCK_TYPE_HARD} +}; struct dev_stat { bool is_mpf_scanned; @@ -97,6 +169,11 @@ static void spinlock_write(uint64_t virt_lock_addr, uint32_t lock_id, uint8_t da *(volatile uint8_t *)((uint64_t)virt_lock_addr + (uint64_t)lock_id) = data; } +static uint8_t spinklock_read(uint64_t virt_lock_addr, uint32_t lock_id) +{ + return *(volatile uint8_t *)((uint64_t)virt_lock_addr + (uint64_t)lock_id); +} + static int32_t zxdh_spinlock_unlock(uint32_t virt_lock_id, uint64_t virt_addr, uint64_t label_addr) { label_write((uint64_t)label_addr, virt_lock_id, 0); @@ -104,6 +181,28 @@ static int32_t zxdh_spinlock_unlock(uint32_t virt_lock_id, uint64_t virt_addr, u return 0; } +static int32_t zxdh_spinlock_lock(uint32_t virt_lock_id, uint64_t virt_addr, + uint64_t label_addr, uint16_t primary_id) +{ + uint32_t lock_rd_cnt = 0; + + do { + /* read to lock */ + uint8_t spl_val = spinklock_read(virt_addr, virt_lock_id); + + if (spl_val == 0) { + label_write((uint64_t)label_addr, virt_lock_id, primary_id); + break; + } + rte_delay_us_block(SPINLOCK_POLLING_SPAN_US); + lock_rd_cnt++; + } while (lock_rd_cnt < MAX_HARD_SPINLOCK_ASK_TIMES); + if (lock_rd_cnt >= MAX_HARD_SPINLOCK_ASK_TIMES) + return -1; + + return 0; +} + /** * Fun: PF init hard_spinlock addr */ @@ -119,6 +218,548 @@ static int bar_chan_pf_init_spinlock(uint16_t pcie_id, uint64_t bar_base_addr) return 0; } +static int zxdh_bar_chan_msgid_allocate(uint16_t *msgid) +{ + struct seqid_item *seqid_reps_info = NULL; + + pthread_spin_lock(&g_seqid_ring.lock); + uint16_t g_id = g_seqid_ring.cur_id; + uint16_t count = 0; + + do { + count++; + ++g_id; + g_id %= BAR_SEQID_NUM_MAX; + seqid_reps_info = &g_seqid_ring.reps_info_tbl[g_id]; + } while ((seqid_reps_info->flag != REPS_INFO_FLAG_USABLE) && (count < BAR_SEQID_NUM_MAX)); + int rc; + + if (count >= BAR_SEQID_NUM_MAX) { + rc = -1; + goto out; + } + seqid_reps_info->flag = REPS_INFO_FLAG_USED; + g_seqid_ring.cur_id = g_id; + *msgid = g_id; + rc = BAR_MSG_OK; + +out: + 
pthread_spin_unlock(&g_seqid_ring.lock); + return rc; +} + +static uint16_t zxdh_bar_chan_save_recv_info(struct zxdh_msg_recviver_mem *result, uint16_t *msg_id) +{ + int ret = zxdh_bar_chan_msgid_allocate(msg_id); + + if (ret != BAR_MSG_OK) + return BAR_MSG_ERR_MSGID; + + PMD_MSG_LOG(DEBUG, "allocate msg_id: %u", *msg_id); + struct seqid_item *reps_info = &g_seqid_ring.reps_info_tbl[*msg_id]; + + reps_info->reps_addr = result->recv_buffer; + reps_info->buffer_len = result->buffer_len; + return BAR_MSG_OK; +} + +static uint8_t zxdh_bar_msg_src_index_trans(uint8_t src) +{ + uint8_t src_index = 0; + + switch (src) { + case MSG_CHAN_END_MPF: + src_index = BAR_MSG_SRC_MPF; + break; + case MSG_CHAN_END_PF: + src_index = BAR_MSG_SRC_PF; + break; + case MSG_CHAN_END_VF: + src_index = BAR_MSG_SRC_VF; + break; + default: + src_index = BAR_MSG_SRC_ERR; + break; + } + return src_index; +} + +static uint8_t zxdh_bar_msg_dst_index_trans(uint8_t dst) +{ + uint8_t dst_index = 0; + + switch (dst) { + case MSG_CHAN_END_MPF: + dst_index = BAR_MSG_DST_MPF; + break; + case MSG_CHAN_END_PF: + dst_index = BAR_MSG_DST_PFVF; + break; + case MSG_CHAN_END_VF: + dst_index = BAR_MSG_DST_PFVF; + break; + case MSG_CHAN_END_RISC: + dst_index = BAR_MSG_DST_RISC; + break; + default: + dst_index = BAR_MSG_SRC_ERR; + break; + } + return dst_index; +} + +static int zxdh_bar_chan_send_para_check(struct zxdh_pci_bar_msg *in, + struct zxdh_msg_recviver_mem *result) +{ + uint8_t src_index = 0; + uint8_t dst_index = 0; + + if (in == NULL || result == NULL) { + PMD_MSG_LOG(ERR, "send para ERR: null para."); + return BAR_MSG_ERR_NULL_PARA; + } + src_index = zxdh_bar_msg_src_index_trans(in->src); + dst_index = zxdh_bar_msg_dst_index_trans(in->dst); + + if (src_index == BAR_MSG_SRC_ERR || dst_index == BAR_MSG_DST_ERR) { + PMD_MSG_LOG(ERR, "send para ERR: chan doesn't exist."); + return BAR_MSG_ERR_TYPE; + } + if (in->module_id >= BAR_MSG_MODULE_NUM) { + PMD_MSG_LOG(ERR, "send para ERR: invalid module_id: %d.", in->module_id); + return BAR_MSG_ERR_MODULE; + } + if (in->payload_addr == NULL) { + PMD_MSG_LOG(ERR, "send para ERR: null message."); + return BAR_MSG_ERR_BODY_NULL; + } + if (in->payload_len > BAR_MSG_PAYLOAD_MAX_LEN) { + PMD_MSG_LOG(ERR, "send para ERR: len %d is too long.", in->payload_len); + return BAR_MSG_ERR_LEN; + } + if (in->virt_addr == 0 || result->recv_buffer == NULL) { + PMD_MSG_LOG(ERR, "send para ERR: virt_addr or recv_buffer is NULL."); + return BAR_MSG_ERR_VIRTADDR_NULL; + } + if (result->buffer_len < REPS_HEADER_PAYLOAD_OFFSET) + PMD_MSG_LOG(ERR, + "recv buffer len is short than minimal 4 bytes."); + + return BAR_MSG_OK; +} + +static uint64_t zxdh_subchan_addr_cal(uint64_t virt_addr, uint8_t chan_id, uint8_t subchan_id) +{ + return virt_addr + (2 * chan_id + subchan_id) * BAR_MSG_ADDR_CHAN_INTERVAL; +} + +static uint16_t zxdh_bar_chan_subchan_addr_get(struct zxdh_pci_bar_msg *in, uint64_t *subchan_addr) +{ + uint8_t src_index = zxdh_bar_msg_src_index_trans(in->src); + uint8_t dst_index = zxdh_bar_msg_dst_index_trans(in->dst); + uint16_t chan_id = chan_id_tbl[src_index][dst_index]; + uint16_t subchan_id = subchan_id_tbl[src_index][dst_index]; + + *subchan_addr = zxdh_subchan_addr_cal(in->virt_addr, chan_id, subchan_id); + return BAR_MSG_OK; +} + +static int zxdh_bar_hard_lock(uint16_t src_pcieid, uint8_t dst, uint64_t virt_addr) +{ + int ret = 0; + uint16_t lockid = pcie_id_to_hard_lock(src_pcieid, dst); + + PMD_MSG_LOG(DEBUG, "dev pcieid: 0x%x lock, get hardlockid: %u", src_pcieid, lockid); + if (dst == 
MSG_CHAN_END_RISC) + ret = zxdh_spinlock_lock(lockid, virt_addr + CHAN_RISC_SPINLOCK_OFFSET, + virt_addr + CHAN_RISC_LABEL_OFFSET, + src_pcieid | LOCK_PRIMARY_ID_MASK); + else + ret = zxdh_spinlock_lock(lockid, virt_addr + CHAN_PFVF_SPINLOCK_OFFSET, + virt_addr + CHAN_PFVF_LABEL_OFFSET, + src_pcieid | LOCK_PRIMARY_ID_MASK); + + return ret; +} + +static void zxdh_bar_hard_unlock(uint16_t src_pcieid, uint8_t dst, uint64_t virt_addr) +{ + uint16_t lockid = pcie_id_to_hard_lock(src_pcieid, dst); + + PMD_MSG_LOG(DEBUG, "dev pcieid: 0x%x unlock, get hardlockid: %u", src_pcieid, lockid); + if (dst == MSG_CHAN_END_RISC) + zxdh_spinlock_unlock(lockid, virt_addr + CHAN_RISC_SPINLOCK_OFFSET, + virt_addr + CHAN_RISC_LABEL_OFFSET); + else + zxdh_spinlock_unlock(lockid, virt_addr + CHAN_PFVF_SPINLOCK_OFFSET, + virt_addr + CHAN_PFVF_LABEL_OFFSET); +} + +pthread_spinlock_t chan_lock; +static int zxdh_bar_chan_lock(uint8_t src, uint8_t dst, uint16_t src_pcieid, uint64_t virt_addr) +{ + int ret = 0; + uint8_t src_index = zxdh_bar_msg_src_index_trans(src); + uint8_t dst_index = zxdh_bar_msg_dst_index_trans(dst); + + if (src_index == BAR_MSG_SRC_ERR || dst_index == BAR_MSG_DST_ERR) { + PMD_MSG_LOG(ERR, "lock ERR: chan doesn't exist."); + return BAR_MSG_ERR_TYPE; + } + uint16_t idx = lock_type_tbl[src_index][dst_index]; + + if (idx == LOCK_TYPE_SOFT) + pthread_spin_lock(&chan_lock); + else + ret = zxdh_bar_hard_lock(src_pcieid, dst, virt_addr); + + if (ret != 0) + PMD_MSG_LOG(ERR, "dev: 0x%x failed to lock.", src_pcieid); + + return ret; +} + +static int zxdh_bar_chan_unlock(uint8_t src, uint8_t dst, uint16_t src_pcieid, uint64_t virt_addr) +{ + uint8_t src_index = zxdh_bar_msg_src_index_trans(src); + uint8_t dst_index = zxdh_bar_msg_dst_index_trans(dst); + + if (src_index == BAR_MSG_SRC_ERR || dst_index == BAR_MSG_DST_ERR) { + PMD_MSG_LOG(ERR, "unlock ERR: chan doesn't exist."); + return BAR_MSG_ERR_TYPE; + } + uint16_t idx = lock_type_tbl[src_index][dst_index]; + + if (idx == LOCK_TYPE_SOFT) + pthread_spin_unlock(&chan_lock); + else + zxdh_bar_hard_unlock(src_pcieid, dst, virt_addr); + + return BAR_MSG_OK; +} + +static void zxdh_bar_chan_msgid_free(uint16_t msg_id) +{ + struct seqid_item *seqid_reps_info = &g_seqid_ring.reps_info_tbl[msg_id]; + + pthread_spin_lock(&g_seqid_ring.lock); + seqid_reps_info->flag = REPS_INFO_FLAG_USABLE; + PMD_MSG_LOG(DEBUG, "free msg_id: %u", msg_id); + pthread_spin_unlock(&g_seqid_ring.lock); +} + +static int zxdh_bar_chan_reg_write(uint64_t subchan_addr, uint32_t offset, uint32_t data) +{ + uint32_t algin_offset = (offset & BAR_ALIGN_WORD_MASK); + + if (unlikely(algin_offset >= BAR_MSG_ADDR_CHAN_INTERVAL)) { + PMD_MSG_LOG(ERR, "algin_offset exceeds channel size!"); + return -1; + } + *(uint32_t *)(subchan_addr + algin_offset) = data; + return 0; +} + +static int zxdh_bar_chan_reg_read(uint64_t subchan_addr, uint32_t offset, uint32_t *pdata) +{ + uint32_t algin_offset = (offset & BAR_ALIGN_WORD_MASK); + + if (unlikely(algin_offset >= BAR_MSG_ADDR_CHAN_INTERVAL)) { + PMD_MSG_LOG(ERR, "algin_offset exceeds channel size!"); + return -1; + } + *pdata = *(uint32_t *)(subchan_addr + algin_offset); + return 0; +} + +static uint16_t zxdh_bar_chan_msg_header_set(uint64_t subchan_addr, + struct bar_msg_header *msg_header) +{ + uint32_t *data = (uint32_t *)msg_header; + uint16_t idx; + + for (idx = 0; idx < (BAR_MSG_PLAYLOAD_OFFSET >> 2); idx++) + zxdh_bar_chan_reg_write(subchan_addr, idx * 4, *(data + idx)); + + return BAR_MSG_OK; +} + +static uint16_t 
zxdh_bar_chan_msg_header_get(uint64_t subchan_addr, + struct bar_msg_header *msg_header) +{ + uint32_t *data = (uint32_t *)msg_header; + uint16_t idx; + + for (idx = 0; idx < (BAR_MSG_PLAYLOAD_OFFSET >> 2); idx++) + zxdh_bar_chan_reg_read(subchan_addr, idx * 4, data + idx); + + return BAR_MSG_OK; +} + +static uint16_t zxdh_bar_chan_msg_payload_set(uint64_t subchan_addr, uint8_t *msg, uint16_t len) +{ + uint32_t *data = (uint32_t *)msg; + uint32_t count = (len >> 2); + uint32_t ix; + + for (ix = 0; ix < count; ix++) + zxdh_bar_chan_reg_write(subchan_addr, + 4 * ix + BAR_MSG_PLAYLOAD_OFFSET, *(data + ix)); + + uint32_t remain = (len & 0x3); + + if (remain) { + uint32_t remain_data = 0; + + for (ix = 0; ix < remain; ix++) + remain_data |= *((uint8_t *)(msg + len - remain + ix)) << (8 * ix); + + zxdh_bar_chan_reg_write(subchan_addr, 4 * count + + BAR_MSG_PLAYLOAD_OFFSET, remain_data); + } + return BAR_MSG_OK; +} + +static uint16_t zxdh_bar_chan_msg_payload_get(uint64_t subchan_addr, uint8_t *msg, uint16_t len) +{ + uint32_t *data = (uint32_t *)msg; + uint32_t count = (len >> 2); + uint32_t ix; + + for (ix = 0; ix < count; ix++) + zxdh_bar_chan_reg_read(subchan_addr, 4 * ix + BAR_MSG_PLAYLOAD_OFFSET, (data + ix)); + + uint32_t remain = (len & 0x3); + + if (remain) { + uint32_t remain_data = 0; + + zxdh_bar_chan_reg_read(subchan_addr, 4 * count + + BAR_MSG_PLAYLOAD_OFFSET, &remain_data); + for (ix = 0; ix < remain; ix++) + *((uint8_t *)(msg + (len - remain + ix))) = remain_data >> (8 * ix); + } + return BAR_MSG_OK; +} + +static uint16_t zxdh_bar_chan_msg_valid_set(uint64_t subchan_addr, uint8_t valid_label) +{ + uint32_t data; + + zxdh_bar_chan_reg_read(subchan_addr, BAR_MSG_VALID_OFFSET, &data); + data &= (~BAR_MSG_VALID_MASK); + data |= (uint32_t)valid_label; + zxdh_bar_chan_reg_write(subchan_addr, BAR_MSG_VALID_OFFSET, data); + return BAR_MSG_OK; +} + +static uint8_t temp_msg[BAR_MSG_ADDR_CHAN_INTERVAL]; +static uint16_t zxdh_bar_chan_msg_send(uint64_t subchan_addr, void *payload_addr, + uint16_t payload_len, struct bar_msg_header *msg_header) +{ + uint16_t ret = 0; + ret = zxdh_bar_chan_msg_header_set(subchan_addr, msg_header); + + ret = zxdh_bar_chan_msg_header_get(subchan_addr, + (struct bar_msg_header *)temp_msg); + + ret = zxdh_bar_chan_msg_payload_set(subchan_addr, + (uint8_t *)(payload_addr), payload_len); + + ret = zxdh_bar_chan_msg_payload_get(subchan_addr, + temp_msg, payload_len); + + ret = zxdh_bar_chan_msg_valid_set(subchan_addr, BAR_MSG_CHAN_USED); + return ret; +} + +static uint16_t zxdh_bar_msg_valid_stat_get(uint64_t subchan_addr) +{ + uint32_t data; + + zxdh_bar_chan_reg_read(subchan_addr, BAR_MSG_VALID_OFFSET, &data); + if (BAR_MSG_CHAN_USABLE == (data & BAR_MSG_VALID_MASK)) + return BAR_MSG_CHAN_USABLE; + + return BAR_MSG_CHAN_USED; +} + +static uint16_t zxdh_bar_chan_msg_poltag_set(uint64_t subchan_addr, uint8_t label) +{ + uint32_t data; + + zxdh_bar_chan_reg_read(subchan_addr, BAR_MSG_VALID_OFFSET, &data); + data &= (~(uint32_t)BAR_MSG_POL_MASK); + data |= ((uint32_t)label << BAR_MSG_POL_OFFSET); + zxdh_bar_chan_reg_write(subchan_addr, BAR_MSG_VALID_OFFSET, data); + return BAR_MSG_OK; +} + +static uint16_t zxdh_bar_chan_sync_msg_reps_get(uint64_t subchan_addr, + uint64_t recv_buffer, uint16_t buffer_len) +{ + struct bar_msg_header msg_header = {0}; + uint16_t msg_id = 0; + uint16_t msg_len = 0; + + zxdh_bar_chan_msg_header_get(subchan_addr, &msg_header); + msg_id = msg_header.msg_id; + struct seqid_item *reps_info = &g_seqid_ring.reps_info_tbl[msg_id]; + + if 
(reps_info->flag != REPS_INFO_FLAG_USED) { + PMD_MSG_LOG(ERR, "msg_id %u unused", msg_id); + return BAR_MSG_ERR_REPLY; + } + msg_len = msg_header.len; + + if (msg_len > buffer_len - 4) { + PMD_MSG_LOG(ERR, "recv buffer len is: %u, but reply msg len is: %u", + buffer_len, msg_len + 4); + return BAR_MSG_ERR_REPSBUFF_LEN; + } + uint8_t *recv_msg = (uint8_t *)recv_buffer; + + zxdh_bar_chan_msg_payload_get(subchan_addr, + recv_msg + REPS_HEADER_PAYLOAD_OFFSET, msg_len); + *(uint16_t *)(recv_msg + REPS_HEADER_LEN_OFFSET) = msg_len; + *recv_msg = REPS_HEADER_REPLYED; /* set reps's valid */ + return BAR_MSG_OK; +} + +int zxdh_bar_chan_sync_msg_send(struct zxdh_pci_bar_msg *in, struct zxdh_msg_recviver_mem *result) +{ + struct bar_msg_header msg_header = {0}; + uint16_t seq_id = 0; + uint64_t subchan_addr = 0; + uint32_t time_out_cnt = 0; + uint16_t valid = 0; + int ret = 0; + + ret = zxdh_bar_chan_send_para_check(in, result); + if (ret != BAR_MSG_OK) + goto exit; + + ret = zxdh_bar_chan_save_recv_info(result, &seq_id); + if (ret != BAR_MSG_OK) + goto exit; + + zxdh_bar_chan_subchan_addr_get(in, &subchan_addr); + + msg_header.sync = BAR_CHAN_MSG_SYNC; + msg_header.emec = in->emec; + msg_header.usr = 0; + msg_header.rsv = 0; + msg_header.module_id = in->module_id; + msg_header.len = in->payload_len; + msg_header.msg_id = seq_id; + msg_header.src_pcieid = in->src_pcieid; + msg_header.dst_pcieid = in->dst_pcieid; + + ret = zxdh_bar_chan_lock(in->src, in->dst, in->src_pcieid, in->virt_addr); + if (ret != BAR_MSG_OK) { + zxdh_bar_chan_msgid_free(seq_id); + goto exit; + } + zxdh_bar_chan_msg_send(subchan_addr, in->payload_addr, in->payload_len, &msg_header); + + do { + rte_delay_us_block(BAR_MSG_POLLING_SPAN); + valid = zxdh_bar_msg_valid_stat_get(subchan_addr); + ++time_out_cnt; + } while (time_out_cnt < BAR_MSG_TIMEOUT_TH && valid == BAR_MSG_CHAN_USED); + + if (time_out_cnt == BAR_MSG_TIMEOUT_TH && valid != BAR_MSG_CHAN_USABLE) { + zxdh_bar_chan_msg_valid_set(subchan_addr, BAR_MSG_CHAN_USABLE); + zxdh_bar_chan_msg_poltag_set(subchan_addr, 0); + PMD_MSG_LOG(ERR, "BAR MSG ERR: chan type time out."); + ret = BAR_MSG_ERR_TIME_OUT; + } else { + ret = zxdh_bar_chan_sync_msg_reps_get(subchan_addr, + (uint64_t)result->recv_buffer, result->buffer_len); + } + zxdh_bar_chan_msgid_free(seq_id); + zxdh_bar_chan_unlock(in->src, in->dst, in->src_pcieid, in->virt_addr); + +exit: + return ret; +} + +static int bar_get_sum(uint8_t *ptr, uint8_t len) +{ + uint64_t sum = 0; + int idx; + + for (idx = 0; idx < len; idx++) + sum += *(ptr + idx); + + return (uint16_t)sum; +} + +static int zxdh_bar_chan_enable(struct msix_para *_msix_para, uint16_t *vport) +{ + struct bar_recv_msg recv_msg = {0}; + int ret = 0; + int check_token = 0; + int sum_res = 0; + + if (!_msix_para) + return BAR_MSG_ERR_NULL; + + struct msix_msg msix_msg = { + .pcie_id = _msix_para->pcie_id, + .vector_risc = _msix_para->vector_risc, + .vector_pfvf = _msix_para->vector_pfvf, + .vector_mpf = _msix_para->vector_mpf, + }; + struct zxdh_pci_bar_msg in = { + .virt_addr = _msix_para->virt_addr, + .payload_addr = &msix_msg, + .payload_len = sizeof(msix_msg), + .emec = 0, + .src = _msix_para->driver_type, + .dst = MSG_CHAN_END_RISC, + .module_id = BAR_MODULE_MISX, + .src_pcieid = _msix_para->pcie_id, + .dst_pcieid = 0, + .usr = 0, + }; + + struct zxdh_msg_recviver_mem result = { + .recv_buffer = &recv_msg, + .buffer_len = sizeof(recv_msg), + }; + + ret = zxdh_bar_chan_sync_msg_send(&in, &result); + if (ret != BAR_MSG_OK) + return -ret; + + check_token = 
recv_msg.msix_reps.check; + sum_res = bar_get_sum((uint8_t *)&msix_msg, sizeof(msix_msg)); + + if (check_token != sum_res) { + PMD_MSG_LOG(ERR, "expect token: 0x%x, get token: 0x%x.", sum_res, check_token); + return BAR_MSG_ERR_REPLY; + } + *vport = recv_msg.msix_reps.vport; + PMD_MSG_LOG(DEBUG, "vport of pcieid: 0x%x get success.", _msix_para->pcie_id); + return BAR_MSG_OK; +} + +int zxdh_msg_chan_enable(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct msix_para misx_info = { + .vector_risc = MSIX_FROM_RISCV, + .vector_pfvf = MSIX_FROM_PFVF, + .vector_mpf = MSIX_FROM_MPF, + .pcie_id = hw->pcie_id, + .driver_type = hw->is_pf ? MSG_CHAN_END_PF : MSG_CHAN_END_VF, + .virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_CTRLCH_OFFSET), + }; + + return zxdh_bar_chan_enable(&misx_info, &hw->vport.vport); +} + int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev) { struct zxdh_hw *hw = dev->data->dev_private; diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index a619e6ae21..06be0f18c8 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -13,6 +13,19 @@ extern "C" { #include <ethdev_driver.h> +#define ZXDH_MSIX_INTR_MSG_VEC_BASE 1 + +#define BAR_MSG_POLLING_SPAN 100 +#define BAR_MSG_POLL_CNT_PER_MS (1 * 1000 / BAR_MSG_POLLING_SPAN) +#define BAR_MSG_POLL_CNT_PER_S (1 * 1000 * 1000 / BAR_MSG_POLLING_SPAN) +#define BAR_MSG_TIMEOUT_TH (10 * 1000 * 1000 / BAR_MSG_POLLING_SPAN) + +#define BAR_CHAN_MSG_SYNC 0 + +#define BAR_MSG_ADDR_CHAN_INTERVAL (2 * 1024) /* channel size */ +#define BAR_MSG_PLAYLOAD_OFFSET (sizeof(struct bar_msg_header)) +#define BAR_MSG_PAYLOAD_MAX_LEN (BAR_MSG_ADDR_CHAN_INTERVAL - sizeof(struct bar_msg_header)) + enum DRIVER_TYPE { MSG_CHAN_END_MPF = 0, MSG_CHAN_END_PF, @@ -20,6 +33,13 @@ enum DRIVER_TYPE { MSG_CHAN_END_RISC, }; +enum MSG_VEC { + MSIX_FROM_PFVF = ZXDH_MSIX_INTR_MSG_VEC_BASE, + MSIX_FROM_MPF, + MSIX_FROM_RISCV, + MSG_VEC_NUM, +}; + enum BAR_MSG_RTN { BAR_MSG_OK = 0, BAR_MSG_ERR_MSGID, @@ -54,10 +74,117 @@ enum BAR_MSG_RTN { BAR_MSG_ERR_SOCKET, /* netlink sockte err */ }; +enum bar_module_id { + BAR_MODULE_DBG = 0, /* 0: debug */ + BAR_MODULE_TBL, /* 1: resource table */ + BAR_MODULE_MISX, /* 2: config msix */ + BAR_MODULE_SDA, /* 3: */ + BAR_MODULE_RDMA, /* 4: */ + BAR_MODULE_DEMO, /* 5: channel test */ + BAR_MODULE_SMMU, /* 6: */ + BAR_MODULE_MAC, /* 7: mac rx/tx stats */ + BAR_MODULE_VDPA, /* 8: vdpa live migration */ + BAR_MODULE_VQM, /* 9: vqm live migration */ + BAR_MODULE_NP, /* 10: vf msg callback np */ + BAR_MODULE_VPORT, /* 11: get vport */ + BAR_MODULE_BDF, /* 12: get bdf */ + BAR_MODULE_RISC_READY, /* 13: */ + BAR_MODULE_REVERSE, /* 14: byte stream reverse */ + BAR_MDOULE_NVME, /* 15: */ + BAR_MDOULE_NPSDK, /* 16: */ + BAR_MODULE_NP_TODO, /* 17: */ + MODULE_BAR_MSG_TO_PF, /* 18: */ + MODULE_BAR_MSG_TO_VF, /* 19: */ + + MODULE_FLASH = 32, + BAR_MODULE_OFFSET_GET = 33, + BAR_EVENT_OVS_WITH_VCB = 36, + + BAR_MSG_MODULE_NUM = 100, +}; + +struct msix_para { + uint16_t pcie_id; + uint16_t vector_risc; + uint16_t vector_pfvf; + uint16_t vector_mpf; + uint64_t virt_addr; + uint16_t driver_type; /* refer to DRIVER_TYPE */ +}; + +struct msix_msg { + uint16_t pcie_id; + uint16_t vector_risc; + uint16_t vector_pfvf; + uint16_t vector_mpf; +}; + +struct zxdh_pci_bar_msg { + uint64_t virt_addr; /* bar addr */ + void *payload_addr; + uint16_t payload_len; + uint16_t emec; + uint16_t src; /* refer to BAR_DRIVER_TYPE */ + uint16_t dst; /* refer to BAR_DRIVER_TYPE */ + uint16_t 
module_id; + uint16_t src_pcieid; + uint16_t dst_pcieid; + uint16_t usr; +}; + +struct bar_msix_reps { + uint16_t pcie_id; + uint16_t check; + uint16_t vport; + uint16_t rsv; +} __rte_packed; + +struct bar_offset_reps { + uint16_t check; + uint16_t rsv; + uint32_t offset; + uint32_t length; +} __rte_packed; + +struct bar_recv_msg { + uint8_t reps_ok; + uint16_t reps_len; + uint8_t rsv; + /* */ + union { + struct bar_msix_reps msix_reps; + struct bar_offset_reps offset_reps; + } __rte_packed; +} __rte_packed; + +struct zxdh_msg_recviver_mem { + void *recv_buffer; /* first 4B is head, followed by payload */ + uint64_t buffer_len; +}; + +struct bar_msg_header { + uint8_t valid : 1; /* used by __bar_chan_msg_valid_set/get */ + uint8_t sync : 1; + uint8_t emec : 1; /* emergency? */ + uint8_t ack : 1; /* ack msg? */ + uint8_t poll : 1; + uint8_t usr : 1; + uint8_t rsv; + uint16_t module_id; + uint16_t len; + uint16_t msg_id; + uint16_t src_pcieid; + uint16_t dst_pcieid; /* used in PF-->VF */ +}; + int zxdh_msg_chan_init(void); int zxdh_bar_msg_chan_exit(void); int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev); +int zxdh_msg_chan_enable(struct rte_eth_dev *dev); +int zxdh_bar_chan_sync_msg_send(struct zxdh_pci_bar_msg *in, + struct zxdh_msg_recviver_mem *result); + #ifdef __cplusplus } #endif -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 58939 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
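The MSIX enable exchange above is one instance of a generic request/reply pattern: fill a zxdh_pci_bar_msg, point zxdh_msg_recviver_mem at a buffer whose first four bytes receive the reply header, and call zxdh_bar_chan_sync_msg_send(). A hypothetical caller sketch (the function name, buffer size and choice of BAR_MODULE_DEMO are illustrative, not from the patch; note that ZXDH_BAR0_INDEX and ZXDH_CTRLCH_OFFSET are still file-local to zxdh_msg.c at this point in the series):

    #include <stdint.h>
    #include "zxdh_ethdev.h"
    #include "zxdh_msg.h"

    static int send_demo_msg(struct zxdh_hw *hw, void *payload, uint16_t len)
    {
        uint8_t reply[64] = {0};    /* first 4 bytes: reply header */
        struct zxdh_pci_bar_msg in = {
            .virt_addr = hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_CTRLCH_OFFSET,
            .payload_addr = payload,
            .payload_len = len,     /* must not exceed BAR_MSG_PAYLOAD_MAX_LEN */
            .src = hw->is_pf ? MSG_CHAN_END_PF : MSG_CHAN_END_VF,
            .dst = MSG_CHAN_END_RISC,
            .module_id = BAR_MODULE_DEMO,   /* the "channel test" module */
            .src_pcieid = hw->pcie_id,
        };
        struct zxdh_msg_recviver_mem result = {
            .recv_buffer = reply,
            .buffer_len = sizeof(reply),
        };

        /* Polls the channel's valid bit every BAR_MSG_POLLING_SPAN (100) us,
         * up to BAR_MSG_TIMEOUT_TH iterations, i.e. about 10 s worst case. */
        return zxdh_bar_chan_sync_msg_send(&in, &result);
    }

For the MSIX message itself, zxdh_bar_chan_enable() then validates the reply by recomputing bar_get_sum() over the request (a byte-wise sum truncated to 16 bits) and comparing it with the returned check token before trusting the reported vport.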
* Re: [PATCH v6 5/9] net/zxdh: add msg chan enable implementation 2024-10-16 8:18 ` [PATCH v6 5/9] net/zxdh: add msg chan enable implementation Junlong Wang @ 2024-10-21 8:50 ` Thomas Monjalon 2024-10-21 10:56 ` Junlong Wang 1 sibling, 0 replies; 225+ messages in thread From: Thomas Monjalon @ 2024-10-21 8:50 UTC (permalink / raw) To: Junlong Wang; +Cc: dev, stephen, ferruh.yigit, wang.yong19 16/10/2024 10:18, Junlong Wang: > Add msg chan enable implementation to support > send msg to get infos. Would be interesting to explain which module is receiving the message. > +union VPORT { Why uppercase? for differenciate with vport below? In general we tend to use only lowercase. > + uint16_t vport; > + struct { > + uint16_t vfid:8; > + uint16_t pfid:3; > + uint16_t vf_flag:1; > + uint16_t epid:3; > + uint16_t direct_flag:1; > + }; > +}; I am curious about the spinlock function below. What is it doing exactly? > +static int32_t zxdh_spinlock_lock(uint32_t virt_lock_id, uint64_t virt_addr, > + uint64_t label_addr, uint16_t primary_id) > +{ > + uint32_t lock_rd_cnt = 0; > + > + do { > + /* read to lock */ > + uint8_t spl_val = spinklock_read(virt_addr, virt_lock_id); typo, it should be spinlock_read > + > + if (spl_val == 0) { > + label_write((uint64_t)label_addr, virt_lock_id, primary_id); > + break; > + } > + rte_delay_us_block(SPINLOCK_POLLING_SPAN_US); > + lock_rd_cnt++; > + } while (lock_rd_cnt < MAX_HARD_SPINLOCK_ASK_TIMES); > + if (lock_rd_cnt >= MAX_HARD_SPINLOCK_ASK_TIMES) > + return -1; > + > + return 0; > +} [...] > +pthread_spinlock_t chan_lock; > +static int zxdh_bar_chan_lock(uint8_t src, uint8_t dst, uint16_t src_pcieid, uint64_t virt_addr) > +{ > + int ret = 0; > + uint8_t src_index = zxdh_bar_msg_src_index_trans(src); > + uint8_t dst_index = zxdh_bar_msg_dst_index_trans(dst); > + > + if (src_index == BAR_MSG_SRC_ERR || dst_index == BAR_MSG_DST_ERR) { > + PMD_MSG_LOG(ERR, "lock ERR: chan doesn't exist."); > + return BAR_MSG_ERR_TYPE; > + } > + uint16_t idx = lock_type_tbl[src_index][dst_index]; > + > + if (idx == LOCK_TYPE_SOFT) > + pthread_spin_lock(&chan_lock); In general we avoid the pthread.h functions. Do you know we have rte_spinlock_lock()? > + else > + ret = zxdh_bar_hard_lock(src_pcieid, dst, virt_addr); > + > + if (ret != 0) > + PMD_MSG_LOG(ERR, "dev: 0x%x failed to lock.", src_pcieid); > + > + return ret; > +} [...] > --- a/drivers/net/zxdh/zxdh_msg.h > +++ b/drivers/net/zxdh/zxdh_msg.h > @@ -13,6 +13,19 @@ extern "C" { > > #include <ethdev_driver.h> > > +#define ZXDH_MSIX_INTR_MSG_VEC_BASE 1 > + You should continue using the prefix ZXDH_ These definitions below have a high risk of namespace conflict in future. Using a namespace prefix is a good practice. 
> +#define BAR_MSG_POLLING_SPAN 100 > +#define BAR_MSG_POLL_CNT_PER_MS (1 * 1000 / BAR_MSG_POLLING_SPAN) > +#define BAR_MSG_POLL_CNT_PER_S (1 * 1000 * 1000 / BAR_MSG_POLLING_SPAN) > +#define BAR_MSG_TIMEOUT_TH (10 * 1000 * 1000 / BAR_MSG_POLLING_SPAN) > + > +#define BAR_CHAN_MSG_SYNC 0 > + > +#define BAR_MSG_ADDR_CHAN_INTERVAL (2 * 1024) /* channel size */ > +#define BAR_MSG_PLAYLOAD_OFFSET (sizeof(struct bar_msg_header)) > +#define BAR_MSG_PAYLOAD_MAX_LEN (BAR_MSG_ADDR_CHAN_INTERVAL - > sizeof(struct bar_msg_header)) + > > enum DRIVER_TYPE { > > MSG_CHAN_END_MPF = 0, > MSG_CHAN_END_PF, > > @@ -20,6 +33,13 @@ enum DRIVER_TYPE { > > MSG_CHAN_END_RISC, > > }; > > +enum MSG_VEC { > + MSIX_FROM_PFVF = ZXDH_MSIX_INTR_MSG_VEC_BASE, > + MSIX_FROM_MPF, > + MSIX_FROM_RISCV, > + MSG_VEC_NUM, > +}; > + > > enum BAR_MSG_RTN { > > BAR_MSG_OK = 0, > BAR_MSG_ERR_MSGID, > > @@ -54,10 +74,117 @@ enum BAR_MSG_RTN { > > BAR_MSG_ERR_SOCKET, /* netlink sockte err */ > > }; > > +enum bar_module_id { > + BAR_MODULE_DBG = 0, /* 0: debug */ > + BAR_MODULE_TBL, /* 1: resource table */ > + BAR_MODULE_MISX, /* 2: config msix */ > + BAR_MODULE_SDA, /* 3: */ > + BAR_MODULE_RDMA, /* 4: */ > + BAR_MODULE_DEMO, /* 5: channel test */ > + BAR_MODULE_SMMU, /* 6: */ > + BAR_MODULE_MAC, /* 7: mac rx/tx stats */ > + BAR_MODULE_VDPA, /* 8: vdpa live migration */ > + BAR_MODULE_VQM, /* 9: vqm live migration */ > + BAR_MODULE_NP, /* 10: vf msg callback np */ > + BAR_MODULE_VPORT, /* 11: get vport */ > + BAR_MODULE_BDF, /* 12: get bdf */ > + BAR_MODULE_RISC_READY, /* 13: */ > + BAR_MODULE_REVERSE, /* 14: byte stream reverse */ > + BAR_MDOULE_NVME, /* 15: */ > + BAR_MDOULE_NPSDK, /* 16: */ > + BAR_MODULE_NP_TODO, /* 17: */ > + MODULE_BAR_MSG_TO_PF, /* 18: */ > + MODULE_BAR_MSG_TO_VF, /* 19: */ > + > + MODULE_FLASH = 32, > + BAR_MODULE_OFFSET_GET = 33, > + BAR_EVENT_OVS_WITH_VCB = 36, > + > + BAR_MSG_MODULE_NUM = 100, > +}; > + > +struct msix_para { > + uint16_t pcie_id; > + uint16_t vector_risc; > + uint16_t vector_pfvf; > + uint16_t vector_mpf; > + uint64_t virt_addr; > + uint16_t driver_type; /* refer to DRIVER_TYPE */ > +}; > + > +struct msix_msg { > + uint16_t pcie_id; > + uint16_t vector_risc; > + uint16_t vector_pfvf; > + uint16_t vector_mpf; > +}; > + > +struct zxdh_pci_bar_msg { > + uint64_t virt_addr; /* bar addr */ > + void *payload_addr; > + uint16_t payload_len; > + uint16_t emec; > + uint16_t src; /* refer to BAR_DRIVER_TYPE */ > + uint16_t dst; /* refer to BAR_DRIVER_TYPE */ > + uint16_t module_id; > + uint16_t src_pcieid; > + uint16_t dst_pcieid; > + uint16_t usr; > +}; > + > +struct bar_msix_reps { > + uint16_t pcie_id; > + uint16_t check; > + uint16_t vport; > + uint16_t rsv; > +} __rte_packed; > + > +struct bar_offset_reps { > + uint16_t check; > + uint16_t rsv; > + uint32_t offset; > + uint32_t length; > +} __rte_packed; > + > +struct bar_recv_msg { > + uint8_t reps_ok; > + uint16_t reps_len; > + uint8_t rsv; > + /* */ > + union { > + struct bar_msix_reps msix_reps; > + struct bar_offset_reps offset_reps; > + } __rte_packed; > +} __rte_packed; > + > +struct zxdh_msg_recviver_mem { > + void *recv_buffer; /* first 4B is head, followed by payload */ > + uint64_t buffer_len; > +}; > + > +struct bar_msg_header { > + uint8_t valid : 1; /* used by __bar_chan_msg_valid_set/get */ > + uint8_t sync : 1; > + uint8_t emec : 1; /* emergency? */ > + uint8_t ack : 1; /* ack msg? 
*/ > + uint8_t poll : 1; > + uint8_t usr : 1; > + uint8_t rsv; > + uint16_t module_id; > + uint16_t len; > + uint16_t msg_id; > + uint16_t src_pcieid; > + uint16_t dst_pcieid; /* used in PF-->VF */ ^ permalink raw reply [flat|nested] 225+ messages in thread
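For reference, the rte_spinlock substitution suggested above could look like this for the LOCK_TYPE_SOFT path (a sketch, assuming the lock stays process-global):

    #include <rte_spinlock.h>

    /* Statically initialized, so no pthread_spin_init() call is needed
     * in zxdh_msg_chan_init() any more. */
    static rte_spinlock_t chan_lock = RTE_SPINLOCK_INITIALIZER;

    static void soft_chan_lock(void)
    {
        rte_spinlock_lock(&chan_lock);      /* replaces pthread_spin_lock() */
    }

    static void soft_chan_unlock(void)
    {
        rte_spinlock_unlock(&chan_lock);    /* replaces pthread_spin_unlock() */
    }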
* Re: [PATCH v6 5/9] net/zxdh: add msg chan enable implementation 2024-10-16 8:18 ` [PATCH v6 5/9] net/zxdh: add msg chan enable implementation Junlong Wang 2024-10-21 8:50 ` Thomas Monjalon @ 2024-10-21 10:56 ` Junlong Wang 1 sibling, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-10-21 10:56 UTC (permalink / raw) To: thomas; +Cc: dev, ferruh.yigit, stephen, wang.yong19 [-- Attachment #1.1.1: Type: text/plain, Size: 698 bytes --] >> Add msg chan enable implementation to support >> send msg to get infos. > Would be interesting to explain which module is receiving the message. Send messages to the backend (device side) to obtain information. > I am curious about the spinlock function below. > What is it doing exactly? >> +static int32_t zxdh_spinlock_lock(uint32_t virt_lock_id, uint64_t virt_addr, >> + uint64_t label_addr, uint16_t primary_id) >> +{ >> + uint32_t lock_rd_cnt = 0; >> + >> + do { >> + /* read to lock */ >> + uint8_t spl_val = spinklock_read(virt_addr, virt_lock_id); Using locks to ensure message consistency when accessing backend information for multiple pf/vf. [-- Attachment #1.1.2: Type: text/html , Size: 1596 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
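The reply implies a test-and-set register: as the in-code comment "read to lock" suggests, reading the per-lock byte acquires the lock when the read returns 0, and writing 0 releases it. A simplified model of the acquire loop; the hardware semantics here are an inference from the code, not documented behavior:

    #include <stdint.h>
    #include <rte_cycles.h>

    static int hard_lock_model(volatile uint8_t *lock_byte,
            volatile uint16_t *label, uint16_t owner_id)
    {
        uint32_t tries;

        for (tries = 0; tries < 1000; tries++) { /* MAX_HARD_SPINLOCK_ASK_TIMES */
            if (*lock_byte == 0) {      /* this read acquired the lock */
                *label = owner_id;      /* record the holder (pcieid | 0x8000) */
                return 0;
            }
            rte_delay_us_block(100);    /* SPINLOCK_POLLING_SPAN_US */
        }
        return -1;  /* gave up after 1000 * 100 us = 100 ms */
    }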
* [PATCH v6 6/9] net/zxdh: add zxdh get device backend infos 2024-10-16 8:18 ` [PATCH v6 2/9] net/zxdh: add logging implementation Junlong Wang ` (2 preceding siblings ...) 2024-10-16 8:18 ` [PATCH v6 5/9] net/zxdh: add msg chan enable implementation Junlong Wang @ 2024-10-16 8:18 ` Junlong Wang 2024-10-21 8:52 ` Thomas Monjalon 2024-10-16 8:18 ` [PATCH v6 7/9] net/zxdh: add configure zxdh intr implementation Junlong Wang ` (2 subsequent siblings) 6 siblings, 1 reply; 225+ messages in thread From: Junlong Wang @ 2024-10-16 8:18 UTC (permalink / raw) To: dev; +Cc: stephen, ferruh.yigit, wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 13063 bytes --] Add zxdh get device backend infos, use msg chan to send msg get. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/meson.build | 1 + drivers/net/zxdh/zxdh_common.c | 249 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_common.h | 30 ++++ drivers/net/zxdh/zxdh_ethdev.c | 35 +++++ drivers/net/zxdh/zxdh_ethdev.h | 5 + drivers/net/zxdh/zxdh_msg.c | 3 - drivers/net/zxdh/zxdh_msg.h | 24 ++++ 7 files changed, 344 insertions(+), 3 deletions(-) create mode 100644 drivers/net/zxdh/zxdh_common.c create mode 100644 drivers/net/zxdh/zxdh_common.h diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build index 2e0c8fddae..a16db47f89 100644 --- a/drivers/net/zxdh/meson.build +++ b/drivers/net/zxdh/meson.build @@ -17,4 +17,5 @@ sources = files( 'zxdh_ethdev.c', 'zxdh_pci.c', 'zxdh_msg.c', + 'zxdh_common.c', ) diff --git a/drivers/net/zxdh/zxdh_common.c b/drivers/net/zxdh/zxdh_common.c new file mode 100644 index 0000000000..61993980c3 --- /dev/null +++ b/drivers/net/zxdh/zxdh_common.c @@ -0,0 +1,249 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#include <stdint.h> +#include <string.h> + +#include <ethdev_driver.h> +#include <rte_malloc.h> +#include <rte_memcpy.h> + +#include "zxdh_ethdev.h" +#include "zxdh_logs.h" +#include "zxdh_msg.h" +#include "zxdh_common.h" + +#define ZXDH_MSG_RSP_SIZE_MAX 512 + +#define ZXDH_COMMON_TABLE_READ 0 +#define ZXDH_COMMON_TABLE_WRITE 1 + +#define ZXDH_COMMON_FIELD_PHYPORT 6 + +#define RSC_TBL_CONTENT_LEN_MAX (257 * 2) + +#define REPS_HEADER_PAYLOAD_OFFSET 4 +#define TBL_MSG_PRO_SUCCESS 0xaa + +struct zxdh_common_msg { + uint8_t type; /* 0:read table 1:write table */ + uint8_t field; + uint16_t pcie_id; + uint16_t slen; /* Data length for write table */ + uint16_t reserved; +} __rte_packed; + +struct zxdh_common_rsp_hdr { + uint8_t rsp_status; + uint16_t rsp_len; + uint8_t reserved; + uint8_t payload_status; + uint8_t rsv; + uint16_t payload_len; +} __rte_packed; + +struct tbl_msg_header { + uint8_t type; /* r/w */ + uint8_t field; + uint16_t pcieid; + uint16_t slen; + uint16_t rsv; +}; +struct tbl_msg_reps_header { + uint8_t check; + uint8_t rsv; + uint16_t len; +}; + +static int32_t zxdh_fill_common_msg(struct zxdh_hw *hw, + struct zxdh_pci_bar_msg *desc, + uint8_t type, + uint8_t field, + void *buff, + uint16_t buff_size) +{ + uint64_t msg_len = sizeof(struct zxdh_common_msg) + buff_size; + + desc->payload_addr = rte_zmalloc(NULL, msg_len, 0); + if (unlikely(desc->payload_addr == NULL)) { + PMD_DRV_LOG(ERR, "Failed to allocate msg_data"); + return -ENOMEM; + } + memset(desc->payload_addr, 0, msg_len); + desc->payload_len = msg_len; + struct zxdh_common_msg *msg_data = (struct zxdh_common_msg *)desc->payload_addr; + + msg_data->type = type; + msg_data->field = field; + msg_data->pcie_id = hw->pcie_id; + 
msg_data->slen = buff_size; + if (buff_size != 0) + rte_memcpy(msg_data + 1, buff, buff_size); + + return 0; +} + +static int32_t zxdh_send_command(struct zxdh_hw *hw, + struct zxdh_pci_bar_msg *desc, + enum bar_module_id module_id, + struct zxdh_msg_recviver_mem *msg_rsp) +{ + desc->virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_CTRLCH_OFFSET); + desc->src = hw->is_pf ? MSG_CHAN_END_PF : MSG_CHAN_END_VF; + desc->dst = MSG_CHAN_END_RISC; + desc->module_id = module_id; + desc->src_pcieid = hw->pcie_id; + + msg_rsp->buffer_len = ZXDH_MSG_RSP_SIZE_MAX; + msg_rsp->recv_buffer = rte_zmalloc(NULL, msg_rsp->buffer_len, 0); + if (unlikely(msg_rsp->recv_buffer == NULL)) { + PMD_DRV_LOG(ERR, "Failed to allocate messages response"); + return -ENOMEM; + } + + if (zxdh_bar_chan_sync_msg_send(desc, msg_rsp) != BAR_MSG_OK) { + PMD_DRV_LOG(ERR, "Failed to send sync messages or receive response"); + rte_free(msg_rsp->recv_buffer); + return -1; + } + + return 0; +} + +static int32_t zxdh_common_rsp_check(struct zxdh_msg_recviver_mem *msg_rsp, + void *buff, uint16_t len) +{ + struct zxdh_common_rsp_hdr *rsp_hdr = (struct zxdh_common_rsp_hdr *)msg_rsp->recv_buffer; + + if (rsp_hdr->payload_status != 0xaa || rsp_hdr->payload_len != len) { + PMD_DRV_LOG(ERR, "Common response is invalid, status:0x%x rsp_len:%d", + rsp_hdr->payload_status, rsp_hdr->payload_len); + return -1; + } + if (len != 0) + rte_memcpy(buff, rsp_hdr + 1, len); + + return 0; +} + +static int32_t zxdh_common_table_read(struct zxdh_hw *hw, uint8_t field, + void *buff, uint16_t buff_size) +{ + struct zxdh_msg_recviver_mem msg_rsp; + struct zxdh_pci_bar_msg desc; + int32_t ret = 0; + + if (!hw->msg_chan_init) { + PMD_DRV_LOG(ERR, "Bar messages channel not initialized"); + return -1; + } + + ret = zxdh_fill_common_msg(hw, &desc, ZXDH_COMMON_TABLE_READ, field, NULL, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to fill common msg"); + return ret; + } + + ret = zxdh_send_command(hw, &desc, BAR_MODULE_TBL, &msg_rsp); + if (ret != 0) + goto free_msg_data; + + ret = zxdh_common_rsp_check(&msg_rsp, buff, buff_size); + if (ret != 0) + goto free_rsp_data; + +free_rsp_data: + rte_free(msg_rsp.recv_buffer); +free_msg_data: + rte_free(desc.payload_addr); + return ret; +} + +int32_t zxdh_phyport_get(struct rte_eth_dev *dev, uint8_t *phyport) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + int32_t ret = zxdh_common_table_read(hw, ZXDH_COMMON_FIELD_PHYPORT, + (void *)phyport, sizeof(*phyport)); + return ret; +} + +static inline void zxdh_fill_res_para(struct rte_eth_dev *dev, struct zxdh_res_para *param) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + param->pcie_id = hw->pcie_id; + param->virt_addr = hw->bar_addr[0] + ZXDH_CTRLCH_OFFSET; + param->src_type = BAR_MODULE_TBL; +} + +static int zxdh_get_res_info(struct zxdh_res_para *dev, uint8_t field, uint8_t *res, uint16_t *len) +{ + if (!res || !dev) + return BAR_MSG_ERR_NULL; + + struct tbl_msg_header tbl_msg = { + .type = TBL_TYPE_READ, + .field = field, + .pcieid = dev->pcie_id, + .slen = 0, + .rsv = 0, + }; + + struct zxdh_pci_bar_msg in = {0}; + + in.virt_addr = dev->virt_addr; + in.payload_addr = &tbl_msg; + in.payload_len = sizeof(tbl_msg); + in.src = dev->src_type; + in.dst = MSG_CHAN_END_RISC; + in.module_id = BAR_MODULE_TBL; + in.src_pcieid = dev->pcie_id; + + uint8_t recv_buf[RSC_TBL_CONTENT_LEN_MAX + 8] = {0}; + struct zxdh_msg_recviver_mem result = { + .recv_buffer = recv_buf, + .buffer_len = sizeof(recv_buf), + }; + int ret = zxdh_bar_chan_sync_msg_send(&in, &result); + 
+ if (ret != BAR_MSG_OK) { + PMD_DRV_LOG(ERR, + "send sync_msg failed. pcieid: 0x%x, ret: %d.", dev->pcie_id, ret); + return ret; + } + struct tbl_msg_reps_header *tbl_reps = + (struct tbl_msg_reps_header *)(recv_buf + REPS_HEADER_PAYLOAD_OFFSET); + + if (tbl_reps->check != TBL_MSG_PRO_SUCCESS) { + PMD_DRV_LOG(ERR, + "get resource_field failed. pcieid: 0x%x, ret: %d.", dev->pcie_id, ret); + return ret; + } + *len = tbl_reps->len; + memcpy(res, + (recv_buf + REPS_HEADER_PAYLOAD_OFFSET + sizeof(struct tbl_msg_reps_header)), *len); + return ret; +} + +static int zxdh_get_res_panel_id(struct zxdh_res_para *in, uint8_t *panel_id) +{ + uint8_t reps = 0; + uint16_t reps_len = 0; + + if (zxdh_get_res_info(in, TBL_FIELD_PNLID, &reps, &reps_len) != BAR_MSG_OK) + return -1; + + *panel_id = reps; + return BAR_MSG_OK; +} + +int32_t zxdh_pannelid_get(struct rte_eth_dev *dev, uint8_t *pannelid) +{ + struct zxdh_res_para param; + + zxdh_fill_res_para(dev, &param); + int32_t ret = zxdh_get_res_panel_id(&param, pannelid); + return ret; +} diff --git a/drivers/net/zxdh/zxdh_common.h b/drivers/net/zxdh/zxdh_common.h new file mode 100644 index 0000000000..ec7011e820 --- /dev/null +++ b/drivers/net/zxdh/zxdh_common.h @@ -0,0 +1,30 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#ifndef _ZXDH_COMMON_H_ +#define _ZXDH_COMMON_H_ + +#ifdef __cplusplus +extern "C" { +#endif + +#include <stdint.h> +#include <rte_ethdev.h> + +#include "zxdh_ethdev.h" + +struct zxdh_res_para { + uint64_t virt_addr; + uint16_t pcie_id; + uint16_t src_type; /* refer to BAR_DRIVER_TYPE */ +}; + +int32_t zxdh_phyport_get(struct rte_eth_dev *dev, uint8_t *phyport); +int32_t zxdh_pannelid_get(struct rte_eth_dev *dev, uint8_t *pannelid); + +#ifdef __cplusplus +} +#endif + +#endif /* _ZXDH_COMMON_H_ */ diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 5002e76e23..e282012afb 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -10,9 +10,21 @@ #include "zxdh_logs.h" #include "zxdh_pci.h" #include "zxdh_msg.h" +#include "zxdh_common.h" struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; +uint16_t vport_to_vfid(union VPORT v) +{ + /* epid > 4 is local soft queue. 
return 1192 */ + if (v.epid > 4) + return 1192; + if (v.vf_flag) + return v.epid * 256 + v.vfid; + else + return (v.epid * 8 + v.pfid) + 1152; +} + static int32_t zxdh_init_device(struct rte_eth_dev *eth_dev) { struct zxdh_hw *hw = eth_dev->data->dev_private; @@ -44,6 +56,25 @@ static int32_t zxdh_init_device(struct rte_eth_dev *eth_dev) return ret; } +static int zxdh_agent_comm(struct rte_eth_dev *eth_dev, struct zxdh_hw *hw) +{ + if (zxdh_phyport_get(eth_dev, &hw->phyport) != 0) { + PMD_INIT_LOG(ERR, "Failed to get phyport"); + return -1; + } + PMD_INIT_LOG(INFO, "Get phyport success: 0x%x", hw->phyport); + + hw->vfid = vport_to_vfid(hw->vport); + + if (zxdh_pannelid_get(eth_dev, &hw->panel_id) != 0) { + PMD_INIT_LOG(ERR, "Failed to get panel_id"); + return -1; + } + PMD_INIT_LOG(INFO, "Get panel id success: 0x%x", hw->panel_id); + + return 0; +} + static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) { struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); @@ -103,6 +134,10 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) goto err_zxdh_init; } + ret = zxdh_agent_comm(eth_dev, hw); + if (ret != 0) + goto err_zxdh_init; + return ret; err_zxdh_init: diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index a51181f1ce..2351393009 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -56,6 +56,7 @@ struct zxdh_hw { uint16_t pcie_id; uint16_t device_id; uint16_t port_id; + uint16_t vfid; uint8_t *isr; uint8_t weak_barriers; @@ -63,9 +64,13 @@ struct zxdh_hw { uint8_t mac_addr[RTE_ETHER_ADDR_LEN]; uint8_t duplex; uint8_t is_pf; + uint8_t phyport; + uint8_t panel_id; uint8_t msg_chan_init; }; +uint16_t vport_to_vfid(union VPORT v); + #ifdef __cplusplus } #endif diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c index 3ac4c8d796..e243f97703 100644 --- a/drivers/net/zxdh/zxdh_msg.c +++ b/drivers/net/zxdh/zxdh_msg.c @@ -18,8 +18,6 @@ #define REPS_INFO_FLAG_USABLE 0x00 #define BAR_SEQID_NUM_MAX 256 -#define ZXDH_BAR0_INDEX 0 - #define PCIEID_IS_PF_MASK (0x0800) #define PCIEID_PF_IDX_MASK (0x0700) #define PCIEID_VF_IDX_MASK (0x00ff) @@ -43,7 +41,6 @@ #define FW_SHRD_OFFSET (0x5000) #define FW_SHRD_INNER_HW_LABEL_PAT (0x800) #define HW_LABEL_OFFSET (FW_SHRD_OFFSET + FW_SHRD_INNER_HW_LABEL_PAT) -#define ZXDH_CTRLCH_OFFSET (0x2000) #define CHAN_RISC_SPINLOCK_OFFSET (BAR0_SPINLOCK_OFFSET - BAR0_CHAN_RISC_OFFSET) #define CHAN_PFVF_SPINLOCK_OFFSET (BAR0_SPINLOCK_OFFSET - BAR0_CHAN_PFVF_OFFSET) #define CHAN_RISC_LABEL_OFFSET (HW_LABEL_OFFSET - BAR0_CHAN_RISC_OFFSET) diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index 06be0f18c8..7379f57d17 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -13,6 +13,9 @@ extern "C" { #include <ethdev_driver.h> +#define ZXDH_BAR0_INDEX 0 + +#define ZXDH_CTRLCH_OFFSET (0x2000) #define ZXDH_MSIX_INTR_MSG_VEC_BASE 1 #define BAR_MSG_POLLING_SPAN 100 @@ -103,6 +106,27 @@ enum bar_module_id { BAR_MSG_MODULE_NUM = 100, }; +enum RES_TBL_FILED { + TBL_FIELD_PCIEID = 0, + TBL_FIELD_BDF = 1, + TBL_FIELD_MSGCH = 2, + TBL_FIELD_DATACH = 3, + TBL_FIELD_VPORT = 4, + TBL_FIELD_PNLID = 5, + TBL_FIELD_PHYPORT = 6, + TBL_FIELD_SERDES_NUM = 7, + TBL_FIELD_NP_PORT = 8, + TBL_FIELD_SPEED = 9, + TBL_FIELD_HASHID = 10, + TBL_FIELD_NON, +}; + +enum TBL_MSG_TYPE { + TBL_TYPE_READ, + TBL_TYPE_WRITE, + TBL_TYPE_NON, +}; + struct msix_para { uint16_t pcie_id; uint16_t vector_risc; -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 27639 bytes --] ^ 
permalink raw reply [flat|nested] 225+ messages in thread
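Two worked examples of the vport_to_vfid() mapping added in this patch; the expected values follow directly from the union VPORT bit layout (the wrapper function is illustrative):

    #include <assert.h>
    #include "zxdh_ethdev.h"

    static void vfid_mapping_examples(void)
    {
        union VPORT pf = {0};
        pf.epid = 1;
        pf.pfid = 2;    /* vf_flag == 0: PF branch */
        assert(vport_to_vfid(pf) == 1 * 8 + 2 + 1152);  /* == 1162 */

        union VPORT vf = {0};
        vf.epid = 1;
        vf.vfid = 3;
        vf.vf_flag = 1; /* VF branch */
        assert(vport_to_vfid(vf) == 1 * 256 + 3);       /* == 259 */
    }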
* Re: [PATCH v6 6/9] net/zxdh: add zxdh get device backend infos 2024-10-16 8:18 ` [PATCH v6 6/9] net/zxdh: add zxdh get device backend infos Junlong Wang @ 2024-10-21 8:52 ` Thomas Monjalon 0 siblings, 0 replies; 225+ messages in thread From: Thomas Monjalon @ 2024-10-21 8:52 UTC (permalink / raw) To: Junlong Wang; +Cc: dev, stephen, ferruh.yigit, wang.yong19 16/10/2024 10:18, Junlong Wang: > --- a/drivers/net/zxdh/zxdh_msg.c > +++ b/drivers/net/zxdh/zxdh_msg.c > @@ -18,8 +18,6 @@ > #define REPS_INFO_FLAG_USABLE 0x00 > #define BAR_SEQID_NUM_MAX 256 > > -#define ZXDH_BAR0_INDEX 0 > - > #define PCIEID_IS_PF_MASK (0x0800) > #define PCIEID_PF_IDX_MASK (0x0700) > #define PCIEID_VF_IDX_MASK (0x00ff) > @@ -43,7 +41,6 @@ > #define FW_SHRD_OFFSET (0x5000) > #define FW_SHRD_INNER_HW_LABEL_PAT (0x800) > #define HW_LABEL_OFFSET (FW_SHRD_OFFSET + FW_SHRD_INNER_HW_LABEL_PAT) > -#define ZXDH_CTRLCH_OFFSET (0x2000) > #define CHAN_RISC_SPINLOCK_OFFSET (BAR0_SPINLOCK_OFFSET - BAR0_CHAN_RISC_OFFSET) > #define CHAN_PFVF_SPINLOCK_OFFSET (BAR0_SPINLOCK_OFFSET - BAR0_CHAN_PFVF_OFFSET) > #define CHAN_RISC_LABEL_OFFSET (HW_LABEL_OFFSET - BAR0_CHAN_RISC_OFFSET) Removing code is strange here. Maybe it should have been placed somewhere else initially? ^ permalink raw reply [flat|nested] 225+ messages in thread
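The review's point, sketched: constants that describe the BAR layout are part of the channel interface, so they arguably belong in zxdh_msg.h from the patch that first uses them rather than being defined in zxdh_msg.c and relocated later:

    /* in zxdh_msg.h from the start (values as already used by the driver) */
    #define ZXDH_BAR0_INDEX     0         /* control channel lives in BAR0 */
    #define ZXDH_CTRLCH_OFFSET  (0x2000)  /* channel base within that BAR */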
* [PATCH v6 7/9] net/zxdh: add configure zxdh intr implementation 2024-10-16 8:18 ` [PATCH v6 2/9] net/zxdh: add logging implementation Junlong Wang ` (3 preceding siblings ...) 2024-10-16 8:18 ` [PATCH v6 6/9] net/zxdh: add zxdh get device backend infos Junlong Wang @ 2024-10-16 8:18 ` Junlong Wang 2024-10-16 8:18 ` [PATCH v6 8/9] net/zxdh: add zxdh dev infos get ops Junlong Wang 2024-10-16 8:18 ` [PATCH v6 9/9] net/zxdh: add zxdh dev configure ops Junlong Wang 6 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-10-16 8:18 UTC (permalink / raw) To: dev; +Cc: stephen, ferruh.yigit, wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 24394 bytes --] configure zxdh intr include risc,dtb. and release intr. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/zxdh_ethdev.c | 301 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev.h | 8 + drivers/net/zxdh/zxdh_msg.c | 187 ++++++++++++++++++++ drivers/net/zxdh/zxdh_msg.h | 12 ++ drivers/net/zxdh/zxdh_pci.c | 62 +++++++ drivers/net/zxdh/zxdh_pci.h | 12 ++ 6 files changed, 582 insertions(+) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index e282012afb..fc141712aa 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -25,6 +25,302 @@ uint16_t vport_to_vfid(union VPORT v) return (v.epid * 8 + v.pfid) + 1152; } +static void zxdh_queues_unbind_intr(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + int32_t i; + + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + VTPCI_OPS(hw)->set_queue_irq(hw, hw->vqs[i * 2], ZXDH_MSI_NO_VECTOR); + VTPCI_OPS(hw)->set_queue_irq(hw, hw->vqs[i * 2 + 1], ZXDH_MSI_NO_VECTOR); + } +} + + +static int32_t zxdh_intr_unmask(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + if (rte_intr_ack(dev->intr_handle) < 0) + return -1; + + hw->use_msix = zxdh_vtpci_msix_detect(RTE_ETH_DEV_TO_PCI(dev)); + + return 0; +} + +static void zxdh_devconf_intr_handler(void *param) +{ + struct rte_eth_dev *dev = param; + + if (zxdh_intr_unmask(dev) < 0) + PMD_DRV_LOG(ERR, "interrupt enable failed"); +} + + +/* Interrupt handler triggered by NIC for handling specific interrupt. */ +static void zxdh_fromriscv_intr_handler(void *param) +{ + struct rte_eth_dev *dev = param; + struct zxdh_hw *hw = dev->data->dev_private; + uint64_t virt_addr = 0; + + virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_CTRLCH_OFFSET); + if (hw->is_pf) { + PMD_INIT_LOG(DEBUG, "zxdh_risc2pf_intr_handler"); + zxdh_bar_irq_recv(MSG_CHAN_END_RISC, MSG_CHAN_END_PF, virt_addr, dev); + } else { + PMD_INIT_LOG(DEBUG, "zxdh_riscvf_intr_handler"); + zxdh_bar_irq_recv(MSG_CHAN_END_RISC, MSG_CHAN_END_VF, virt_addr, dev); + } +} + +/* Interrupt handler triggered by NIC for handling specific interrupt. 
*/ +static void zxdh_frompfvf_intr_handler(void *param) +{ + struct rte_eth_dev *dev = param; + struct zxdh_hw *hw = dev->data->dev_private; + uint64_t virt_addr = 0; + + virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_MSG_CHAN_PFVFSHARE_OFFSET); + if (hw->is_pf) { + PMD_INIT_LOG(DEBUG, "zxdh_vf2pf_intr_handler"); + zxdh_bar_irq_recv(MSG_CHAN_END_VF, MSG_CHAN_END_PF, virt_addr, dev); + } else { + PMD_INIT_LOG(DEBUG, "zxdh_pf2vf_intr_handler"); + zxdh_bar_irq_recv(MSG_CHAN_END_PF, MSG_CHAN_END_VF, virt_addr, dev); + } +} + +static void zxdh_intr_cb_reg(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) + rte_intr_callback_unregister(dev->intr_handle, zxdh_devconf_intr_handler, dev); + + /* register callback to update dev config intr */ + rte_intr_callback_register(dev->intr_handle, zxdh_devconf_intr_handler, dev); + /* Register rsic_v to pf interrupt callback */ + struct rte_intr_handle *tmp = hw->risc_intr + + (MSIX_FROM_PFVF - ZXDH_MSIX_INTR_MSG_VEC_BASE); + + rte_intr_callback_register(tmp, zxdh_frompfvf_intr_handler, dev); + + tmp = hw->risc_intr + (MSIX_FROM_RISCV - ZXDH_MSIX_INTR_MSG_VEC_BASE); + rte_intr_callback_register(tmp, zxdh_fromriscv_intr_handler, dev); +} + +static void zxdh_intr_cb_unreg(struct rte_eth_dev *dev) +{ + if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) + rte_intr_callback_unregister(dev->intr_handle, zxdh_devconf_intr_handler, dev); + + struct zxdh_hw *hw = dev->data->dev_private; + + /* register callback to update dev config intr */ + rte_intr_callback_unregister(dev->intr_handle, zxdh_devconf_intr_handler, dev); + /* Register rsic_v to pf interrupt callback */ + struct rte_intr_handle *tmp = hw->risc_intr + + (MSIX_FROM_PFVF - ZXDH_MSIX_INTR_MSG_VEC_BASE); + + rte_intr_callback_unregister(tmp, zxdh_frompfvf_intr_handler, dev); + tmp = hw->risc_intr + (MSIX_FROM_RISCV - ZXDH_MSIX_INTR_MSG_VEC_BASE); + rte_intr_callback_unregister(tmp, zxdh_fromriscv_intr_handler, dev); +} + +static int32_t zxdh_intr_disable(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + if (!hw->intr_enabled) + return 0; + + zxdh_intr_cb_unreg(dev); + if (rte_intr_disable(dev->intr_handle) < 0) + return -1; + + hw->intr_enabled = 0; + return 0; +} + +static int32_t zxdh_intr_enable(struct rte_eth_dev *dev) +{ + int ret = 0; + struct zxdh_hw *hw = dev->data->dev_private; + + if (!hw->intr_enabled) { + zxdh_intr_cb_reg(dev); + ret = rte_intr_enable(dev->intr_handle); + if (unlikely(ret)) + PMD_INIT_LOG(ERR, "Failed to enable %s intr", dev->data->name); + + hw->intr_enabled = 1; + } + return ret; +} + +static int32_t zxdh_intr_release(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) + VTPCI_OPS(hw)->set_config_irq(hw, ZXDH_MSI_NO_VECTOR); + + zxdh_queues_unbind_intr(dev); + zxdh_intr_disable(dev); + + rte_intr_efd_disable(dev->intr_handle); + rte_intr_vec_list_free(dev->intr_handle); + rte_free(hw->risc_intr); + hw->risc_intr = NULL; + rte_free(hw->dtb_intr); + hw->dtb_intr = NULL; + return 0; +} + +static int32_t zxdh_setup_risc_interrupts(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint8_t i; + + if (!hw->risc_intr) { + PMD_INIT_LOG(ERR, " to allocate risc_intr"); + hw->risc_intr = rte_zmalloc("risc_intr", + ZXDH_MSIX_INTR_MSG_VEC_NUM * sizeof(struct rte_intr_handle), 0); + if (hw->risc_intr == NULL) { + PMD_INIT_LOG(ERR, "Failed to allocate risc_intr"); + return 
-ENOMEM; + } + } + + for (i = 0; i < ZXDH_MSIX_INTR_MSG_VEC_NUM; i++) { + if (dev->intr_handle->efds[i] < 0) { + PMD_INIT_LOG(ERR, "[%u]risc interrupt fd is invalid", i); + rte_free(hw->risc_intr); + hw->risc_intr = NULL; + return -1; + } + + struct rte_intr_handle *intr_handle = hw->risc_intr + i; + + intr_handle->fd = dev->intr_handle->efds[i]; + intr_handle->type = dev->intr_handle->type; + } + + return 0; +} + +static int32_t zxdh_setup_dtb_interrupts(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + if (!hw->dtb_intr) { + hw->dtb_intr = rte_zmalloc("dtb_intr", sizeof(struct rte_intr_handle), 0); + if (hw->dtb_intr == NULL) { + PMD_INIT_LOG(ERR, "Failed to allocate dtb_intr"); + return -ENOMEM; + } + } + + if (dev->intr_handle->efds[ZXDH_MSIX_INTR_DTB_VEC - 1] < 0) { + PMD_INIT_LOG(ERR, "[%d]dtb interrupt fd is invalid", ZXDH_MSIX_INTR_DTB_VEC - 1); + rte_free(hw->dtb_intr); + hw->dtb_intr = NULL; + return -1; + } + hw->dtb_intr->fd = dev->intr_handle->efds[ZXDH_MSIX_INTR_DTB_VEC - 1]; + hw->dtb_intr->type = dev->intr_handle->type; + return 0; +} + +static int32_t zxdh_queues_bind_intr(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + int32_t i; + uint16_t vec; + + if (!dev->data->dev_conf.intr_conf.rxq) { + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + vec = VTPCI_OPS(hw)->set_queue_irq(hw, + hw->vqs[i * 2], ZXDH_MSI_NO_VECTOR); + PMD_INIT_LOG(DEBUG, "vq%d irq set 0x%x, get 0x%x", + i * 2, ZXDH_MSI_NO_VECTOR, vec); + } + } else { + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + vec = VTPCI_OPS(hw)->set_queue_irq(hw, + hw->vqs[i * 2], i + ZXDH_QUEUE_INTR_VEC_BASE); + PMD_INIT_LOG(DEBUG, "vq%d irq set %d, get %d", + i * 2, i + ZXDH_QUEUE_INTR_VEC_BASE, vec); + } + } + /* mask all txq intr */ + for (i = 0; i < dev->data->nb_tx_queues; ++i) { + vec = VTPCI_OPS(hw)->set_queue_irq(hw, + hw->vqs[(i * 2) + 1], ZXDH_MSI_NO_VECTOR); + PMD_INIT_LOG(DEBUG, "vq%d irq set 0x%x, get 0x%x", + (i * 2) + 1, ZXDH_MSI_NO_VECTOR, vec); + } + return 0; +} + +static int32_t zxdh_configure_intr(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + int32_t ret = 0; + + if (!rte_intr_cap_multiple(dev->intr_handle)) { + PMD_INIT_LOG(ERR, "Multiple intr vector not supported"); + return -ENOTSUP; + } + zxdh_intr_release(dev); + uint8_t nb_efd = ZXDH_MSIX_INTR_DTB_VEC_NUM + ZXDH_MSIX_INTR_MSG_VEC_NUM; + + if (dev->data->dev_conf.intr_conf.rxq) + nb_efd += dev->data->nb_rx_queues; + + if (rte_intr_efd_enable(dev->intr_handle, nb_efd)) { + PMD_INIT_LOG(ERR, "Fail to create eventfd"); + return -1; + } + + if (rte_intr_vec_list_alloc(dev->intr_handle, "intr_vec", + hw->max_queue_pairs + ZXDH_INTR_NONQUE_NUM)) { + PMD_INIT_LOG(ERR, "Failed to allocate %u rxq vectors", + hw->max_queue_pairs + ZXDH_INTR_NONQUE_NUM); + return -ENOMEM; + } + PMD_INIT_LOG(DEBUG, "allocate %u rxq vectors", dev->intr_handle->vec_list_size); + if (zxdh_setup_risc_interrupts(dev) != 0) { + PMD_INIT_LOG(ERR, "Error setting up rsic_v interrupts!"); + ret = -1; + goto free_intr_vec; + } + if (zxdh_setup_dtb_interrupts(dev) != 0) { + PMD_INIT_LOG(ERR, "Error setting up dtb interrupts!"); + ret = -1; + goto free_intr_vec; + } + + if (zxdh_queues_bind_intr(dev) < 0) { + PMD_INIT_LOG(ERR, "Failed to bind queue/interrupt"); + ret = -1; + goto free_intr_vec; + } + + if (zxdh_intr_enable(dev) < 0) { + PMD_DRV_LOG(ERR, "interrupt enable failed"); + ret = -1; + goto free_intr_vec; + } + return 0; + +free_intr_vec: + zxdh_intr_release(dev); + return ret; +} + static 
int32_t zxdh_init_device(struct rte_eth_dev *eth_dev) { struct zxdh_hw *hw = eth_dev->data->dev_private; @@ -138,9 +434,14 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) if (ret != 0) goto err_zxdh_init; + ret = zxdh_configure_intr(eth_dev); + if (ret != 0) + goto err_zxdh_init; + return ret; err_zxdh_init: + zxdh_intr_release(eth_dev); zxdh_bar_msg_chan_exit(); rte_free(eth_dev->data->mac_addrs); eth_dev->data->mac_addrs = NULL; diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index 2351393009..7c5f5940cb 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -11,6 +11,10 @@ extern "C" { #include <rte_ether.h> #include "ethdev_driver.h" +#include <rte_interrupts.h> +#include <eal_interrupts.h> + +#include "zxdh_queue.h" /* ZXDH PCI vendor/device ID. */ #define PCI_VENDOR_ID_ZTE 0x1cf2 @@ -44,6 +48,9 @@ struct zxdh_hw { struct rte_eth_dev *eth_dev; struct zxdh_pci_common_cfg *common_cfg; struct zxdh_net_config *dev_cfg; + struct rte_intr_handle *risc_intr; + struct rte_intr_handle *dtb_intr; + struct virtqueue **vqs; union VPORT vport; uint64_t bar_addr[ZXDH_NUM_BARS]; @@ -60,6 +67,7 @@ struct zxdh_hw { uint8_t *isr; uint8_t weak_barriers; + uint8_t intr_enabled; uint8_t use_msix; uint8_t mac_addr[RTE_ETHER_ADDR_LEN]; uint8_t duplex; diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c index e243f97703..098e0a74ed 100644 --- a/drivers/net/zxdh/zxdh_msg.c +++ b/drivers/net/zxdh/zxdh_msg.c @@ -91,6 +91,12 @@ #define BAR_SUBCHAN_INDEX_SEND 0 #define BAR_SUBCHAN_INDEX_RECV 1 +#define BAR_CHAN_MSG_SYNC 0 +#define BAR_CHAN_MSG_NO_EMEC 0 +#define BAR_CHAN_MSG_EMEC 1 +#define BAR_CHAN_MSG_NO_ACK 0 +#define BAR_CHAN_MSG_ACK 1 + uint8_t subchan_id_tbl[BAR_MSG_SRC_NUM][BAR_MSG_DST_NUM] = { {BAR_SUBCHAN_INDEX_SEND, BAR_SUBCHAN_INDEX_SEND, BAR_SUBCHAN_INDEX_SEND}, {BAR_SUBCHAN_INDEX_SEND, BAR_SUBCHAN_INDEX_SEND, BAR_SUBCHAN_INDEX_RECV}, @@ -130,6 +136,36 @@ struct seqid_ring { }; struct seqid_ring g_seqid_ring = {0}; +static inline const char *module_id_name(int val) +{ + switch (val) { + case BAR_MODULE_DBG: return "BAR_MODULE_DBG"; + case BAR_MODULE_TBL: return "BAR_MODULE_TBL"; + case BAR_MODULE_MISX: return "BAR_MODULE_MISX"; + case BAR_MODULE_SDA: return "BAR_MODULE_SDA"; + case BAR_MODULE_RDMA: return "BAR_MODULE_RDMA"; + case BAR_MODULE_DEMO: return "BAR_MODULE_DEMO"; + case BAR_MODULE_SMMU: return "BAR_MODULE_SMMU"; + case BAR_MODULE_MAC: return "BAR_MODULE_MAC"; + case BAR_MODULE_VDPA: return "BAR_MODULE_VDPA"; + case BAR_MODULE_VQM: return "BAR_MODULE_VQM"; + case BAR_MODULE_NP: return "BAR_MODULE_NP"; + case BAR_MODULE_VPORT: return "BAR_MODULE_VPORT"; + case BAR_MODULE_BDF: return "BAR_MODULE_BDF"; + case BAR_MODULE_RISC_READY: return "BAR_MODULE_RISC_READY"; + case BAR_MODULE_REVERSE: return "BAR_MODULE_REVERSE"; + case BAR_MDOULE_NVME: return "BAR_MDOULE_NVME"; + case BAR_MDOULE_NPSDK: return "BAR_MDOULE_NPSDK"; + case BAR_MODULE_NP_TODO: return "BAR_MODULE_NP_TODO"; + case MODULE_BAR_MSG_TO_PF: return "MODULE_BAR_MSG_TO_PF"; + case MODULE_BAR_MSG_TO_VF: return "MODULE_BAR_MSG_TO_VF"; + case MODULE_FLASH: return "MODULE_FLASH"; + case BAR_MODULE_OFFSET_GET: return "BAR_MODULE_OFFSET_GET"; + case BAR_EVENT_OVS_WITH_VCB: return "BAR_EVENT_OVS_WITH_VCB"; + default: return "NA"; + } +} + static uint16_t pcie_id_to_hard_lock(uint16_t src_pcieid, uint8_t dst) { uint16_t lock_id = 0; @@ -797,3 +833,154 @@ int zxdh_bar_msg_chan_exit(void) g_dev_stat.is_res_init = false; return BAR_MSG_OK; } + +static uint64_t 
zxdh_recv_addr_get(uint8_t src_type, uint8_t dst_type, uint64_t virt_addr) +{ + uint8_t src = zxdh_bar_msg_dst_index_trans(src_type); + uint8_t dst = zxdh_bar_msg_src_index_trans(dst_type); + + if (src == BAR_MSG_SRC_ERR || dst == BAR_MSG_DST_ERR) + return 0; + + uint8_t chan_id = chan_id_tbl[dst][src]; + uint8_t subchan_id = 1 - subchan_id_tbl[dst][src]; + + return zxdh_subchan_addr_cal(virt_addr, chan_id, subchan_id); +} + +static void zxdh_bar_msg_ack_async_msg_proc(struct bar_msg_header *msg_header, + uint8_t *receiver_buff) +{ + struct seqid_item *reps_info = &g_seqid_ring.reps_info_tbl[msg_header->msg_id]; + + if (reps_info->flag != REPS_INFO_FLAG_USED) { + PMD_MSG_LOG(ERR, "msg_id: %u is released", msg_header->msg_id); + return; + } + if (msg_header->len > reps_info->buffer_len - 4) { + PMD_MSG_LOG(ERR, "reps_buf_len is %u, but reps_msg_len is %u", + reps_info->buffer_len, msg_header->len + 4); + goto free_id; + } + uint8_t *reps_buffer = (uint8_t *)reps_info->reps_addr; + + rte_memcpy(reps_buffer + 4, receiver_buff, msg_header->len); + *(uint16_t *)(reps_buffer + 1) = msg_header->len; + *(uint8_t *)(reps_info->reps_addr) = REPS_HEADER_REPLYED; + +free_id: + zxdh_bar_chan_msgid_free(msg_header->msg_id); +} + +zxdh_bar_chan_msg_recv_callback msg_recv_func_tbl[BAR_MSG_MODULE_NUM]; +static void zxdh_bar_msg_sync_msg_proc(uint64_t reply_addr, struct bar_msg_header *msg_header, + uint8_t *receiver_buff, void *dev) +{ + uint8_t *reps_buffer = rte_malloc(NULL, BAR_MSG_PAYLOAD_MAX_LEN, 0); + + if (reps_buffer == NULL) + return; + + zxdh_bar_chan_msg_recv_callback recv_func = msg_recv_func_tbl[msg_header->module_id]; + uint16_t reps_len = 0; + + recv_func(receiver_buff, msg_header->len, reps_buffer, &reps_len, dev); + msg_header->ack = BAR_CHAN_MSG_ACK; + msg_header->len = reps_len; + zxdh_bar_chan_msg_header_set(reply_addr, msg_header); + zxdh_bar_chan_msg_payload_set(reply_addr, reps_buffer, reps_len); + zxdh_bar_chan_msg_valid_set(reply_addr, BAR_MSG_CHAN_USABLE); + rte_free(reps_buffer); +} + +static uint64_t zxdh_reply_addr_get(uint8_t sync, uint8_t src_type, + uint8_t dst_type, uint64_t virt_addr) +{ + uint8_t src = zxdh_bar_msg_dst_index_trans(src_type); + uint8_t dst = zxdh_bar_msg_src_index_trans(dst_type); + + if (src == BAR_MSG_SRC_ERR || dst == BAR_MSG_DST_ERR) + return 0; + + uint8_t chan_id = chan_id_tbl[dst][src]; + uint8_t subchan_id = 1 - subchan_id_tbl[dst][src]; + uint64_t recv_rep_addr; + + if (sync == BAR_CHAN_MSG_SYNC) + recv_rep_addr = zxdh_subchan_addr_cal(virt_addr, chan_id, subchan_id); + else + recv_rep_addr = zxdh_subchan_addr_cal(virt_addr, chan_id, 1 - subchan_id); + + return recv_rep_addr; +} + +static uint16_t zxdh_bar_chan_msg_header_check(struct bar_msg_header *msg_header) +{ + if (msg_header->valid != BAR_MSG_CHAN_USED) { + PMD_MSG_LOG(ERR, "recv header ERR: valid label is not used."); + return BAR_MSG_ERR_MODULE; + } + uint8_t module_id = msg_header->module_id; + + if (module_id >= (uint8_t)BAR_MSG_MODULE_NUM) { + PMD_MSG_LOG(ERR, "recv header ERR: invalid module_id: %u.", module_id); + return BAR_MSG_ERR_MODULE; + } + uint16_t len = msg_header->len; + + if (len > BAR_MSG_PAYLOAD_MAX_LEN) { + PMD_MSG_LOG(ERR, "recv header ERR: invalid mesg len: %u.", len); + return BAR_MSG_ERR_LEN; + } + if (msg_recv_func_tbl[msg_header->module_id] == NULL) { + PMD_MSG_LOG(ERR, "recv header ERR: module:%s(%u) doesn't register", + module_id_name(module_id), module_id); + return BAR_MSG_ERR_MODULE_NOEXIST; + } + return BAR_MSG_OK; +} + +int zxdh_bar_irq_recv(uint8_t src, 
uint8_t dst, uint64_t virt_addr, void *dev) +{ + struct bar_msg_header msg_header = {0}; + uint64_t recv_addr = 0; + uint16_t ret = 0; + + recv_addr = zxdh_recv_addr_get(src, dst, virt_addr); + if (recv_addr == 0) { + PMD_MSG_LOG(ERR, "invalid driver type(src:%u, dst:%u).", src, dst); + return -1; + } + + zxdh_bar_chan_msg_header_get(recv_addr, &msg_header); + ret = zxdh_bar_chan_msg_header_check(&msg_header); + + if (ret != BAR_MSG_OK) { + PMD_MSG_LOG(ERR, "recv msg_head err, ret: %u.", ret); + return -1; + } + + uint8_t *recved_msg = rte_malloc(NULL, msg_header.len, 0); + if (recved_msg == NULL) { + PMD_MSG_LOG(ERR, "malloc temp buff failed."); + return -1; + } + zxdh_bar_chan_msg_payload_get(recv_addr, recved_msg, msg_header.len); + + uint64_t reps_addr = zxdh_reply_addr_get(msg_header.sync, src, dst, virt_addr); + + if (msg_header.sync == BAR_CHAN_MSG_SYNC) { + zxdh_bar_msg_sync_msg_proc(reps_addr, &msg_header, recved_msg, dev); + goto exit; + } + zxdh_bar_chan_msg_valid_set(recv_addr, BAR_MSG_CHAN_USABLE); + if (msg_header.ack == BAR_CHAN_MSG_ACK) { + zxdh_bar_msg_ack_async_msg_proc(&msg_header, recved_msg); + goto exit; + } + return 0; + +exit: + rte_free(recved_msg); + return BAR_MSG_OK; +} diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index 7379f57d17..6c7bed86f1 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -16,8 +16,16 @@ extern "C" { #define ZXDH_BAR0_INDEX 0 #define ZXDH_CTRLCH_OFFSET (0x2000) +#define ZXDH_MSG_CHAN_PFVFSHARE_OFFSET (ZXDH_CTRLCH_OFFSET + 0x1000) #define ZXDH_MSIX_INTR_MSG_VEC_BASE 1 +#define ZXDH_MSIX_INTR_MSG_VEC_NUM 3 +#define ZXDH_MSIX_INTR_DTB_VEC (ZXDH_MSIX_INTR_MSG_VEC_BASE + ZXDH_MSIX_INTR_MSG_VEC_NUM) +#define ZXDH_MSIX_INTR_DTB_VEC_NUM 1 +#define ZXDH_INTR_NONQUE_NUM (ZXDH_MSIX_INTR_MSG_VEC_NUM + ZXDH_MSIX_INTR_DTB_VEC_NUM + 1) +#define ZXDH_QUEUE_INTR_VEC_BASE (ZXDH_MSIX_INTR_DTB_VEC + ZXDH_MSIX_INTR_DTB_VEC_NUM) /* 5 */ +#define ZXDH_QUEUE_INTR_VEC_NUM 256 + #define BAR_MSG_POLLING_SPAN 100 #define BAR_MSG_POLL_CNT_PER_MS (1 * 1000 / BAR_MSG_POLLING_SPAN) #define BAR_MSG_POLL_CNT_PER_S (1 * 1000 * 1000 / BAR_MSG_POLLING_SPAN) @@ -201,6 +209,9 @@ struct bar_msg_header { uint16_t dst_pcieid; /* used in PF-->VF */ }; +typedef int (*zxdh_bar_chan_msg_recv_callback)(void *pay_load, uint16_t len, + void *reps_buffer, uint16_t *reps_len, void *dev); + int zxdh_msg_chan_init(void); int zxdh_bar_msg_chan_exit(void); int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev); @@ -208,6 +219,7 @@ int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev); int zxdh_msg_chan_enable(struct rte_eth_dev *dev); int zxdh_bar_chan_sync_msg_send(struct zxdh_pci_bar_msg *in, struct zxdh_msg_recviver_mem *result); +int zxdh_bar_irq_recv(uint8_t src, uint8_t dst, uint64_t virt_addr, void *dev); #ifdef __cplusplus } diff --git a/drivers/net/zxdh/zxdh_pci.c b/drivers/net/zxdh/zxdh_pci.c index e23dbcbef5..c63b7eee44 100644 --- a/drivers/net/zxdh/zxdh_pci.c +++ b/drivers/net/zxdh/zxdh_pci.c @@ -97,6 +97,24 @@ static void zxdh_set_features(struct zxdh_hw *hw, uint64_t features) rte_write32(features >> 32, &hw->common_cfg->guest_feature); } +static uint16_t zxdh_set_config_irq(struct zxdh_hw *hw, uint16_t vec) +{ + rte_write16(vec, &hw->common_cfg->msix_config); + return rte_read16(&hw->common_cfg->msix_config); +} + +static uint16_t zxdh_set_queue_irq(struct zxdh_hw *hw, struct virtqueue *vq, uint16_t vec) +{ + rte_write16(vq->vq_queue_index, &hw->common_cfg->queue_select); + rte_write16(vec, &hw->common_cfg->queue_msix_vector); 
+ return rte_read16(&hw->common_cfg->queue_msix_vector); +} + +static uint8_t zxdh_get_isr(struct zxdh_hw *hw) +{ + return rte_read8(hw->isr); +} + const struct zxdh_pci_ops zxdh_dev_pci_ops = { .read_dev_cfg = zxdh_read_dev_config, .write_dev_cfg = zxdh_write_dev_config, @@ -104,8 +122,16 @@ const struct zxdh_pci_ops zxdh_dev_pci_ops = { .set_status = zxdh_set_status, .get_features = zxdh_get_features, .set_features = zxdh_set_features, + .set_queue_irq = zxdh_set_queue_irq, + .set_config_irq = zxdh_set_config_irq, + .get_isr = zxdh_get_isr, }; +uint8_t zxdh_vtpci_isr(struct zxdh_hw *hw) +{ + return VTPCI_OPS(hw)->get_isr(hw); +} + uint16_t zxdh_vtpci_get_features(struct zxdh_hw *hw) { return VTPCI_OPS(hw)->get_features(hw); @@ -288,3 +314,39 @@ int32_t zxdh_get_pci_dev_config(struct zxdh_hw *hw) return 0; } + +enum zxdh_msix_status zxdh_vtpci_msix_detect(struct rte_pci_device *dev) +{ + uint8_t pos = 0; + int32_t ret = rte_pci_read_config(dev, &pos, 1, PCI_CAPABILITY_LIST); + + if (ret != 1) { + PMD_INIT_LOG(ERR, "failed to read pci capability list, ret %d", ret); + return ZXDH_MSIX_NONE; + } + while (pos) { + uint8_t cap[2] = {0}; + + ret = rte_pci_read_config(dev, cap, sizeof(cap), pos); + if (ret != sizeof(cap)) { + PMD_INIT_LOG(ERR, "failed to read pci cap at pos: %x ret %d", pos, ret); + break; + } + if (cap[0] == PCI_CAP_ID_MSIX) { + uint16_t flags = 0; + + ret = rte_pci_read_config(dev, &flags, sizeof(flags), pos + sizeof(cap)); + if (ret != sizeof(flags)) { + PMD_INIT_LOG(ERR, + "failed to read pci cap at pos: %x ret %d", pos + 2, ret); + break; + } + if (flags & PCI_MSIX_ENABLE) + return ZXDH_MSIX_ENABLED; + else + return ZXDH_MSIX_DISABLED; + } + pos = cap[1]; + } + return ZXDH_MSIX_NONE; +} diff --git a/drivers/net/zxdh/zxdh_pci.h b/drivers/net/zxdh/zxdh_pci.h index deda73a65a..677dadd5c8 100644 --- a/drivers/net/zxdh/zxdh_pci.h +++ b/drivers/net/zxdh/zxdh_pci.h @@ -22,6 +22,13 @@ enum zxdh_msix_status { ZXDH_MSIX_ENABLED = 2 }; +/* The bit of the ISR which indicates a device has an interrupt. */ +#define ZXDH_PCI_ISR_INTR 0x1 +/* The bit of the ISR which indicates a device configuration change. */ +#define ZXDH_PCI_ISR_CONFIG 0x2 +/* Vector value used to disable MSI for queue. */ +#define ZXDH_MSI_NO_VECTOR 0x7F + #define PCI_CAPABILITY_LIST 0x34 #define PCI_CAP_ID_VNDR 0x09 #define PCI_CAP_ID_MSIX 0x11 @@ -124,6 +131,9 @@ struct zxdh_pci_ops { uint64_t (*get_features)(struct zxdh_hw *hw); void (*set_features)(struct zxdh_hw *hw, uint64_t features); + uint16_t (*set_queue_irq)(struct zxdh_hw *hw, struct virtqueue *vq, uint16_t vec); + uint16_t (*set_config_irq)(struct zxdh_hw *hw, uint16_t vec); + uint8_t (*get_isr)(struct zxdh_hw *hw); }; struct zxdh_hw_internal { @@ -143,6 +153,8 @@ int32_t zxdh_read_pci_caps(struct rte_pci_device *dev, struct zxdh_hw *hw); int32_t zxdh_get_pci_dev_config(struct zxdh_hw *hw); uint16_t zxdh_vtpci_get_features(struct zxdh_hw *hw); +uint8_t zxdh_vtpci_isr(struct zxdh_hw *hw); +enum zxdh_msix_status zxdh_vtpci_msix_detect(struct rte_pci_device *dev); #ifdef __cplusplus } -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 53068 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
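The queue/vector wiring in zxdh_queues_bind_intr() above is easy to misread: even virtqueue indexes carry Rx traffic and odd ones Tx, Rx queues only get dedicated vectors when the application enables per-queue Rx interrupts, and Tx queues are always left masked. Below is a minimal sketch of that convention, reusing the ZXDH_MSI_NO_VECTOR and ZXDH_QUEUE_INTR_VEC_BASE values defined in the patch; the helper name is illustrative and not part of the submission.

#include <stdint.h>

#define ZXDH_MSI_NO_VECTOR       0x7F /* from zxdh_pci.h in this patch */
#define ZXDH_QUEUE_INTR_VEC_BASE 5    /* ZXDH_MSIX_INTR_DTB_VEC + ZXDH_MSIX_INTR_DTB_VEC_NUM */

/* Illustrative helper: which MSI-X vector a given virtqueue ends up
 * bound to. Even vq index = Rx queue, odd vq index = Tx queue.
 */
static uint16_t
zxdh_example_vq_vector(uint16_t vq_idx, int rxq_intr_enabled)
{
	/* Tx queues, and Rx queues when intr_conf.rxq is off, stay masked. */
	if ((vq_idx % 2) == 0 && rxq_intr_enabled)
		return (uint16_t)(vq_idx / 2 + ZXDH_QUEUE_INTR_VEC_BASE);
	return ZXDH_MSI_NO_VECTOR;
}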
* [PATCH v6 8/9] net/zxdh: add zxdh dev infos get ops 2024-10-16 8:18 ` [PATCH v6 2/9] net/zxdh: add logging implementation Junlong Wang ` (4 preceding siblings ...) 2024-10-16 8:18 ` [PATCH v6 7/9] net/zxdh: add configure zxdh intr implementation Junlong Wang @ 2024-10-16 8:18 ` Junlong Wang 2024-10-21 8:54 ` Thomas Monjalon 2024-10-16 8:18 ` [PATCH v6 9/9] net/zxdh: add zxdh dev configure ops Junlong Wang 6 siblings, 1 reply; 225+ messages in thread From: Junlong Wang @ 2024-10-16 8:18 UTC (permalink / raw) To: dev; +Cc: stephen, ferruh.yigit, wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 3892 bytes --] Add support for zxdh infos get. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/zxdh_ethdev.c | 62 +++++++++++++++++++++++++++++++++- 1 file changed, 61 insertions(+), 1 deletion(-) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index fc141712aa..65b649a156 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -12,6 +12,9 @@ #include "zxdh_msg.h" #include "zxdh_common.h" +#define ZXDH_MIN_RX_BUFSIZE 64 +#define ZXDH_MAX_RX_PKTLEN 14000U + struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; uint16_t vport_to_vfid(union VPORT v) @@ -25,6 +28,58 @@ uint16_t vport_to_vfid(union VPORT v) return (v.epid * 8 + v.pfid) + 1152; } +static uint32_t zxdh_dev_speed_capa_get(uint32_t speed) +{ + switch (speed) { + case RTE_ETH_SPEED_NUM_10G: return RTE_ETH_LINK_SPEED_10G; + case RTE_ETH_SPEED_NUM_20G: return RTE_ETH_LINK_SPEED_20G; + case RTE_ETH_SPEED_NUM_25G: return RTE_ETH_LINK_SPEED_25G; + case RTE_ETH_SPEED_NUM_40G: return RTE_ETH_LINK_SPEED_40G; + case RTE_ETH_SPEED_NUM_50G: return RTE_ETH_LINK_SPEED_50G; + case RTE_ETH_SPEED_NUM_56G: return RTE_ETH_LINK_SPEED_56G; + case RTE_ETH_SPEED_NUM_100G: return RTE_ETH_LINK_SPEED_100G; + case RTE_ETH_SPEED_NUM_200G: return RTE_ETH_LINK_SPEED_200G; + default: return 0; + } +} + +static int32_t zxdh_dev_infos_get(struct rte_eth_dev *dev, + struct rte_eth_dev_info *dev_info) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + dev_info->speed_capa = zxdh_dev_speed_capa_get(hw->speed); + dev_info->max_rx_queues = RTE_MIN(hw->max_queue_pairs, ZXDH_RX_QUEUES_MAX); + dev_info->max_tx_queues = RTE_MIN(hw->max_queue_pairs, ZXDH_TX_QUEUES_MAX); + dev_info->min_rx_bufsize = ZXDH_MIN_RX_BUFSIZE; + dev_info->max_rx_pktlen = ZXDH_MAX_RX_PKTLEN; + dev_info->max_mac_addrs = ZXDH_MAX_MAC_ADDRS; + dev_info->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_VLAN_STRIP | + RTE_ETH_RX_OFFLOAD_VLAN_FILTER | + RTE_ETH_RX_OFFLOAD_QINQ_STRIP); + dev_info->rx_offload_capa |= (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_RX_OFFLOAD_UDP_CKSUM | + RTE_ETH_RX_OFFLOAD_TCP_CKSUM | + RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM); + dev_info->rx_offload_capa |= (RTE_ETH_RX_OFFLOAD_SCATTER); + dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TCP_LRO; + dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_RSS_HASH; + + dev_info->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_MULTI_SEGS); + dev_info->tx_offload_capa |= (RTE_ETH_TX_OFFLOAD_TCP_TSO | + RTE_ETH_TX_OFFLOAD_UDP_TSO); + dev_info->tx_offload_capa |= (RTE_ETH_TX_OFFLOAD_VLAN_INSERT | + RTE_ETH_TX_OFFLOAD_QINQ_INSERT | + RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO); + dev_info->tx_offload_capa |= (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_UDP_CKSUM | + RTE_ETH_TX_OFFLOAD_TCP_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM); + + return 0; +} + static void zxdh_queues_unbind_intr(struct rte_eth_dev *dev) { struct 
zxdh_hw *hw = dev->data->dev_private; @@ -321,6 +376,11 @@ static int32_t zxdh_configure_intr(struct rte_eth_dev *dev) return ret; } +/* dev_ops for zxdh, bare necessities for basic operation */ +static const struct eth_dev_ops zxdh_eth_dev_ops = { + .dev_infos_get = zxdh_dev_infos_get, +}; + static int32_t zxdh_init_device(struct rte_eth_dev *eth_dev) { struct zxdh_hw *hw = eth_dev->data->dev_private; @@ -377,7 +437,7 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) struct zxdh_hw *hw = eth_dev->data->dev_private; int ret = 0; - eth_dev->dev_ops = NULL; + eth_dev->dev_ops = &zxdh_eth_dev_ops; /* Allocate memory for storing MAC addresses */ eth_dev->data->mac_addrs = rte_zmalloc("zxdh_mac", -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 8397 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
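Once this op is wired up, applications can retrieve the advertised limits and offload capabilities through the standard ethdev API. A short usage sketch against the values reported above follows; the helper is hypothetical and not from the patch.

#include <rte_ethdev.h>

/* Returns 1 if the port advertises TCP TSO on Tx, 0 if not, <0 on error. */
static int
port_supports_tcp_tso(uint16_t port_id)
{
	struct rte_eth_dev_info dev_info;
	int ret = rte_eth_dev_info_get(port_id, &dev_info);

	if (ret != 0)
		return ret; /* e.g. -ENODEV for an unknown port */

	/* zxdh_dev_infos_get() sets this bit in tx_offload_capa */
	return (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_TCP_TSO) != 0;
}

For a zxdh port the same call would also report max_rx_pktlen = 14000 and at most 128 Rx/Tx queues, as filled in by zxdh_dev_infos_get().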
* Re: [PATCH v6 8/9] net/zxdh: add zxdh dev infos get ops 2024-10-16 8:18 ` [PATCH v6 8/9] net/zxdh: add zxdh dev infos get ops Junlong Wang @ 2024-10-21 8:54 ` Thomas Monjalon 0 siblings, 0 replies; 225+ messages in thread From: Thomas Monjalon @ 2024-10-21 8:54 UTC (permalink / raw) To: Junlong Wang; +Cc: dev, stephen, ferruh.yigit, wang.yong19 16/10/2024 10:18, Junlong Wang: > +static uint32_t zxdh_dev_speed_capa_get(uint32_t speed) > +{ > + switch (speed) { > + case RTE_ETH_SPEED_NUM_10G: return RTE_ETH_LINK_SPEED_10G; > + case RTE_ETH_SPEED_NUM_20G: return RTE_ETH_LINK_SPEED_20G; > + case RTE_ETH_SPEED_NUM_25G: return RTE_ETH_LINK_SPEED_25G; > + case RTE_ETH_SPEED_NUM_40G: return RTE_ETH_LINK_SPEED_40G; > + case RTE_ETH_SPEED_NUM_50G: return RTE_ETH_LINK_SPEED_50G; > + case RTE_ETH_SPEED_NUM_56G: return RTE_ETH_LINK_SPEED_56G; > + case RTE_ETH_SPEED_NUM_100G: return RTE_ETH_LINK_SPEED_100G; > + case RTE_ETH_SPEED_NUM_200G: return RTE_ETH_LINK_SPEED_200G; > + default: return 0; > + } > +} You could use rte_eth_speed_bitflag() instead. ^ permalink raw reply [flat|nested] 225+ messages in thread
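The suggested helper collapses the whole switch into one ethdev call. A sketch of the replacement, assuming full duplex, which matches all the speeds listed in the switch:

#include <rte_ethdev.h>

static uint32_t
zxdh_dev_speed_capa_get(uint32_t speed)
{
	/* rte_eth_speed_bitflag() maps an RTE_ETH_SPEED_NUM_* value to
	 * the corresponding RTE_ETH_LINK_SPEED_* bit and returns 0 for
	 * unknown speeds, matching the default case above. */
	return rte_eth_speed_bitflag(speed, RTE_ETH_LINK_FULL_DUPLEX);
}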
* [PATCH v6 9/9] net/zxdh: add zxdh dev configure ops 2024-10-16 8:18 ` [PATCH v6 2/9] net/zxdh: add logging implementation Junlong Wang ` (5 preceding siblings ...) 2024-10-16 8:18 ` [PATCH v6 8/9] net/zxdh: add zxdh dev infos get ops Junlong Wang @ 2024-10-16 8:18 ` Junlong Wang 2024-10-18 5:18 ` [v6,9/9] " Junlong Wang 2024-10-19 11:17 ` Junlong Wang 6 siblings, 2 replies; 225+ messages in thread From: Junlong Wang @ 2024-10-16 8:18 UTC (permalink / raw) To: dev; +Cc: stephen, ferruh.yigit, wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 39355 bytes --] provided zxdh dev configure ops for queue check,reset,alloc resources,etc. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/meson.build | 1 + drivers/net/zxdh/zxdh_common.c | 119 +++++++++ drivers/net/zxdh/zxdh_common.h | 12 + drivers/net/zxdh/zxdh_ethdev.c | 457 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev.h | 20 +- drivers/net/zxdh/zxdh_pci.c | 97 +++++++ drivers/net/zxdh/zxdh_pci.h | 31 +++ drivers/net/zxdh/zxdh_queue.c | 131 ++++++++++ drivers/net/zxdh/zxdh_queue.h | 176 +++++++++++++ drivers/net/zxdh/zxdh_rxtx.h | 4 + 10 files changed, 1045 insertions(+), 3 deletions(-) create mode 100644 drivers/net/zxdh/zxdh_queue.c diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build index a16db47f89..b96aa5a27e 100644 --- a/drivers/net/zxdh/meson.build +++ b/drivers/net/zxdh/meson.build @@ -18,4 +18,5 @@ sources = files( 'zxdh_pci.c', 'zxdh_msg.c', 'zxdh_common.c', + 'zxdh_queue.c', ) diff --git a/drivers/net/zxdh/zxdh_common.c b/drivers/net/zxdh/zxdh_common.c index 61993980c3..ed4396a6af 100644 --- a/drivers/net/zxdh/zxdh_common.c +++ b/drivers/net/zxdh/zxdh_common.c @@ -20,6 +20,7 @@ #define ZXDH_COMMON_TABLE_WRITE 1 #define ZXDH_COMMON_FIELD_PHYPORT 6 +#define ZXDH_COMMON_FIELD_DATACH 3 #define RSC_TBL_CONTENT_LEN_MAX (257 * 2) @@ -247,3 +248,121 @@ int32_t zxdh_pannelid_get(struct rte_eth_dev *dev, uint8_t *pannelid) int32_t ret = zxdh_get_res_panel_id(¶m, pannelid); return ret; } + +uint32_t zxdh_read_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint64_t baseaddr = (uint64_t)(hw->bar_addr[bar]); + uint32_t val = *((volatile uint32_t *)(baseaddr + reg)); + return val; +} + +void zxdh_write_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg, uint32_t val) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint64_t baseaddr = (uint64_t)(hw->bar_addr[bar]); + *((volatile uint32_t *)(baseaddr + reg)) = val; +} + +int32_t zxdh_acquire_lock(struct zxdh_hw *hw) +{ + uint32_t var = zxdh_read_comm_reg((uint64_t)hw->common_cfg, ZXDH_VF_LOCK_REG); + + /* check whether lock is used */ + if (!(var & ZXDH_VF_LOCK_ENABLE_MASK)) + return -1; + + return 0; +} + +int32_t zxdh_release_lock(struct zxdh_hw *hw) +{ + uint32_t var = zxdh_read_comm_reg((uint64_t)hw->common_cfg, ZXDH_VF_LOCK_REG); + + if (var & ZXDH_VF_LOCK_ENABLE_MASK) { + var &= ~ZXDH_VF_LOCK_ENABLE_MASK; + zxdh_write_comm_reg((uint64_t)hw->common_cfg, ZXDH_VF_LOCK_REG, var); + return 0; + } + + return -1; +} + +uint32_t zxdh_read_comm_reg(uint64_t pci_comm_cfg_baseaddr, uint32_t reg) +{ + uint32_t val = *((volatile uint32_t *)(pci_comm_cfg_baseaddr + reg)); + return val; +} + +void zxdh_write_comm_reg(uint64_t pci_comm_cfg_baseaddr, uint32_t reg, uint32_t val) +{ + *((volatile uint32_t *)(pci_comm_cfg_baseaddr + reg)) = val; +} + +static int32_t zxdh_common_table_write(struct zxdh_hw *hw, uint8_t field, + void *buff, uint16_t 
buff_size) +{ + struct zxdh_pci_bar_msg desc; + struct zxdh_msg_recviver_mem msg_rsp; + int32_t ret = 0; + + if (!hw->msg_chan_init) { + PMD_DRV_LOG(ERR, "Bar messages channel not initialized"); + return -1; + } + if (buff_size != 0 && buff == NULL) { + PMD_DRV_LOG(ERR, "Buff is invalid"); + return -1; + } + + ret = zxdh_fill_common_msg(hw, &desc, ZXDH_COMMON_TABLE_WRITE, + field, buff, buff_size); + + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to fill common msg"); + return ret; + } + + ret = zxdh_send_command(hw, &desc, BAR_MODULE_TBL, &msg_rsp); + if (ret != 0) + goto free_msg_data; + + ret = zxdh_common_rsp_check(&msg_rsp, NULL, 0); + if (ret != 0) + goto free_rsp_data; + +free_rsp_data: + rte_free(msg_rsp.recv_buffer); +free_msg_data: + rte_free(desc.payload_addr); + return ret; +} + +int32_t zxdh_datach_set(struct rte_eth_dev *dev) +{ + /* payload: queue_num(2byte) + pch1(2byte) + ** + pchn */ + struct zxdh_hw *hw = dev->data->dev_private; + uint16_t buff_size = (hw->queue_num + 1) * 2; + void *buff = rte_zmalloc(NULL, buff_size, 0); + + if (unlikely(buff == NULL)) { + PMD_DRV_LOG(ERR, "Failed to allocate buff"); + return -ENOMEM; + } + memset(buff, 0, buff_size); + uint16_t *pdata = (uint16_t *)buff; + *pdata++ = hw->queue_num; + uint16_t i; + + for (i = 0; i < hw->queue_num; i++) + *(pdata + i) = hw->channel_context[i].ph_chno; + + int32_t ret = zxdh_common_table_write(hw, ZXDH_COMMON_FIELD_DATACH, + (void *)buff, buff_size); + + if (ret != 0) + PMD_DRV_LOG(ERR, "Failed to setup data channel of common table"); + + rte_free(buff); + return ret; +} diff --git a/drivers/net/zxdh/zxdh_common.h b/drivers/net/zxdh/zxdh_common.h index ec7011e820..24bbc7fee0 100644 --- a/drivers/net/zxdh/zxdh_common.h +++ b/drivers/net/zxdh/zxdh_common.h @@ -14,6 +14,10 @@ extern "C" { #include "zxdh_ethdev.h" +#define ZXDH_VF_LOCK_REG 0x90 +#define ZXDH_VF_LOCK_ENABLE_MASK 0x1 +#define ZXDH_ACQUIRE_CHANNEL_NUM_MAX 10 + struct zxdh_res_para { uint64_t virt_addr; uint16_t pcie_id; @@ -23,6 +27,14 @@ struct zxdh_res_para { int32_t zxdh_phyport_get(struct rte_eth_dev *dev, uint8_t *phyport); int32_t zxdh_pannelid_get(struct rte_eth_dev *dev, uint8_t *pannelid); +uint32_t zxdh_read_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg); +void zxdh_write_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg, uint32_t val); +int32_t zxdh_release_lock(struct zxdh_hw *hw); +int32_t zxdh_acquire_lock(struct zxdh_hw *hw); +uint32_t zxdh_read_comm_reg(uint64_t pci_comm_cfg_baseaddr, uint32_t reg); +void zxdh_write_comm_reg(uint64_t pci_comm_cfg_baseaddr, uint32_t reg, uint32_t val); +int32_t zxdh_datach_set(struct rte_eth_dev *dev); + #ifdef __cplusplus } #endif diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 65b649a156..a1997facdb 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -11,6 +11,7 @@ #include "zxdh_pci.h" #include "zxdh_msg.h" #include "zxdh_common.h" +#include "zxdh_queue.h" #define ZXDH_MIN_RX_BUFSIZE 64 #define ZXDH_MAX_RX_PKTLEN 14000U @@ -376,8 +377,464 @@ static int32_t zxdh_configure_intr(struct rte_eth_dev *dev) return ret; } +static int32_t zxdh_features_update(struct zxdh_hw *hw, + const struct rte_eth_rxmode *rxmode, + const struct rte_eth_txmode *txmode) +{ + uint64_t rx_offloads = rxmode->offloads; + uint64_t tx_offloads = txmode->offloads; + uint64_t req_features = hw->guest_features; + + if (rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM)) + req_features |= (1ULL << 
ZXDH_NET_F_GUEST_CSUM); + + if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) + req_features |= (1ULL << ZXDH_NET_F_GUEST_TSO4) | + (1ULL << ZXDH_NET_F_GUEST_TSO6); + + if (tx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM)) + req_features |= (1ULL << ZXDH_NET_F_CSUM); + + if (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO) + req_features |= (1ULL << ZXDH_NET_F_HOST_TSO4) | + (1ULL << ZXDH_NET_F_HOST_TSO6); + + if (tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_TSO) + req_features |= (1ULL << ZXDH_NET_F_HOST_UFO); + + req_features = req_features & hw->host_features; + hw->guest_features = req_features; + + VTPCI_OPS(hw)->set_features(hw, req_features); + + if ((rx_offloads & (RTE_ETH_TX_OFFLOAD_UDP_CKSUM | RTE_ETH_TX_OFFLOAD_TCP_CKSUM)) && + !vtpci_with_feature(hw, ZXDH_NET_F_GUEST_CSUM)) { + PMD_DRV_LOG(ERR, "rx checksum not available on this host"); + return -ENOTSUP; + } + + if ((rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) && + (!vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO4) || + !vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO6))) { + PMD_DRV_LOG(ERR, "Large Receive Offload not available on this host"); + return -ENOTSUP; + } + return 0; +} + +static bool rx_offload_enabled(struct zxdh_hw *hw) +{ + return vtpci_with_feature(hw, ZXDH_NET_F_GUEST_CSUM) || + vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO4) || + vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO6); +} + +static bool tx_offload_enabled(struct zxdh_hw *hw) +{ + return vtpci_with_feature(hw, ZXDH_NET_F_CSUM) || + vtpci_with_feature(hw, ZXDH_NET_F_HOST_TSO4) || + vtpci_with_feature(hw, ZXDH_NET_F_HOST_TSO6) || + vtpci_with_feature(hw, ZXDH_NET_F_HOST_UFO); +} + +static void zxdh_dev_free_mbufs(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint16_t nr_vq = hw->queue_num; + uint32_t i = 0; + + const char *type = NULL; + struct virtqueue *vq = NULL; + struct rte_mbuf *buf = NULL; + int32_t queue_type = 0; + + if (hw->vqs == NULL) + return; + + for (i = 0; i < nr_vq; i++) { + vq = hw->vqs[i]; + if (!vq) + continue; + + queue_type = zxdh_get_queue_type(i); + if (queue_type == VTNET_RQ) + type = "rxq"; + else if (queue_type == VTNET_TQ) + type = "txq"; + else + continue; + PMD_INIT_LOG(DEBUG, "Before freeing %s[%d] used and unused buf", type, i); + + while ((buf = zxdh_virtqueue_detach_unused(vq)) != NULL) + rte_pktmbuf_free(buf); + } +} + +static int32_t zxdh_get_available_channel(struct rte_eth_dev *dev, uint8_t queue_type) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint16_t base = (queue_type == VTNET_RQ) ? 
0 : 1; + uint16_t i = 0; + uint16_t j = 0; + uint16_t done = 0; + uint16_t timeout = 0; + + while ((timeout++) < ZXDH_ACQUIRE_CHANNEL_NUM_MAX) { + rte_delay_us_block(1000); + /* acquire hw lock */ + if (zxdh_acquire_lock(hw) < 0) { + PMD_INIT_LOG(ERR, "Acquiring hw lock got failed, timeout: %d", timeout); + continue; + } + /* Iterate COI table and find free channel */ + for (i = ZXDH_QUEUES_BASE / 32; i < ZXDH_TOTAL_QUEUES_NUM / 32; i++) { + uint32_t addr = ZXDH_QUERES_SHARE_BASE + (i * sizeof(uint32_t)); + uint32_t var = zxdh_read_bar_reg(dev, ZXDH_BAR0_INDEX, addr); + + for (j = base; j < 32; j += 2) { + /* Got the available channel & update COI table */ + if ((var & (1 << j)) == 0) { + var |= (1 << j); + zxdh_write_bar_reg(dev, ZXDH_BAR0_INDEX, addr, var); + done = 1; + break; + } + } + if (done) + break; + } + break; + } + if (timeout >= ZXDH_ACQUIRE_CHANNEL_NUM_MAX) { + PMD_INIT_LOG(ERR, "Failed to acquire channel"); + return -1; + } + zxdh_release_lock(hw); + /* check for no channel condition */ + if (done != 1) { + PMD_INIT_LOG(ERR, "NO availd queues"); + return -1; + } + /* return available channel ID */ + return (i * 32) + j; +} + +static int32_t zxdh_acquire_channel(struct rte_eth_dev *dev, uint16_t lch) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + if (hw->channel_context[lch].valid == 1) { + PMD_INIT_LOG(DEBUG, "Logic channel:%u already acquired Physics channel:%u", + lch, hw->channel_context[lch].ph_chno); + return hw->channel_context[lch].ph_chno; + } + int32_t pch = zxdh_get_available_channel(dev, zxdh_get_queue_type(lch)); + + if (pch < 0) { + PMD_INIT_LOG(ERR, "Failed to acquire channel"); + return -1; + } + hw->channel_context[lch].ph_chno = (uint16_t)pch; + hw->channel_context[lch].valid = 1; + PMD_INIT_LOG(DEBUG, "Acquire channel success lch:%u --> pch:%d", lch, pch); + return 0; +} + +static void zxdh_init_vring(struct virtqueue *vq) +{ + int32_t size = vq->vq_nentries; + uint8_t *ring_mem = vq->vq_ring_virt_mem; + + memset(ring_mem, 0, vq->vq_ring_size); + + vq->vq_used_cons_idx = 0; + vq->vq_desc_head_idx = 0; + vq->vq_avail_idx = 0; + vq->vq_desc_tail_idx = (uint16_t)(vq->vq_nentries - 1); + vq->vq_free_cnt = vq->vq_nentries; + memset(vq->vq_descx, 0, sizeof(struct vq_desc_extra) * vq->vq_nentries); + vring_init_packed(&vq->vq_packed.ring, ring_mem, ZXDH_PCI_VRING_ALIGN, size); + vring_desc_init_packed(vq, size); + virtqueue_disable_intr(vq); +} + +static int32_t zxdh_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_logic_qidx) +{ + char vq_name[VIRTQUEUE_MAX_NAME_SZ] = {0}; + char vq_hdr_name[VIRTQUEUE_MAX_NAME_SZ] = {0}; + const struct rte_memzone *mz = NULL; + const struct rte_memzone *hdr_mz = NULL; + uint32_t size = 0; + struct zxdh_hw *hw = dev->data->dev_private; + struct virtnet_rx *rxvq = NULL; + struct virtnet_tx *txvq = NULL; + struct virtqueue *vq = NULL; + size_t sz_hdr_mz = 0; + void *sw_ring = NULL; + int32_t queue_type = zxdh_get_queue_type(vtpci_logic_qidx); + int32_t numa_node = dev->device->numa_node; + uint16_t vtpci_phy_qidx = 0; + uint32_t vq_size = 0; + int32_t ret = 0; + + if (hw->channel_context[vtpci_logic_qidx].valid == 0) { + PMD_INIT_LOG(ERR, "lch %d is invalid", vtpci_logic_qidx); + return -EINVAL; + } + vtpci_phy_qidx = hw->channel_context[vtpci_logic_qidx].ph_chno; + + PMD_INIT_LOG(DEBUG, "vtpci_logic_qidx :%d setting up physical queue: %u on NUMA node %d", + vtpci_logic_qidx, vtpci_phy_qidx, numa_node); + + vq_size = ZXDH_QUEUE_DEPTH; + + if (VTPCI_OPS(hw)->set_queue_num != NULL) + VTPCI_OPS(hw)->set_queue_num(hw, 
vtpci_phy_qidx, vq_size); + + snprintf(vq_name, sizeof(vq_name), "port%d_vq%d", dev->data->port_id, vtpci_phy_qidx); + + size = RTE_ALIGN_CEIL(sizeof(*vq) + vq_size * sizeof(struct vq_desc_extra), + RTE_CACHE_LINE_SIZE); + if (queue_type == VTNET_TQ) { + /* + * For each xmit packet, allocate a zxdh_net_hdr + * and indirect ring elements + */ + sz_hdr_mz = vq_size * sizeof(struct zxdh_tx_region); + } + + vq = rte_zmalloc_socket(vq_name, size, RTE_CACHE_LINE_SIZE, numa_node); + if (vq == NULL) { + PMD_INIT_LOG(ERR, "can not allocate vq"); + return -ENOMEM; + } + hw->vqs[vtpci_logic_qidx] = vq; + + vq->hw = hw; + vq->vq_queue_index = vtpci_phy_qidx; + vq->vq_nentries = vq_size; + + vq->vq_packed.used_wrap_counter = 1; + vq->vq_packed.cached_flags = VRING_PACKED_DESC_F_AVAIL; + vq->vq_packed.event_flags_shadow = 0; + if (queue_type == VTNET_RQ) + vq->vq_packed.cached_flags |= VRING_DESC_F_WRITE; + + /* + * Reserve a memzone for vring elements + */ + size = vring_size(hw, vq_size, ZXDH_PCI_VRING_ALIGN); + vq->vq_ring_size = RTE_ALIGN_CEIL(size, ZXDH_PCI_VRING_ALIGN); + PMD_INIT_LOG(DEBUG, "vring_size: %d, rounded_vring_size: %d", size, vq->vq_ring_size); + + mz = rte_memzone_reserve_aligned(vq_name, vq->vq_ring_size, + numa_node, RTE_MEMZONE_IOVA_CONTIG, + ZXDH_PCI_VRING_ALIGN); + if (mz == NULL) { + if (rte_errno == EEXIST) + mz = rte_memzone_lookup(vq_name); + if (mz == NULL) { + ret = -ENOMEM; + goto fail_q_alloc; + } + } + + memset(mz->addr, 0, mz->len); + + vq->vq_ring_mem = mz->iova; + vq->vq_ring_virt_mem = mz->addr; + + zxdh_init_vring(vq); + + if (sz_hdr_mz) { + snprintf(vq_hdr_name, sizeof(vq_hdr_name), "port%d_vq%d_hdr", + dev->data->port_id, vtpci_phy_qidx); + hdr_mz = rte_memzone_reserve_aligned(vq_hdr_name, sz_hdr_mz, + numa_node, RTE_MEMZONE_IOVA_CONTIG, + RTE_CACHE_LINE_SIZE); + if (hdr_mz == NULL) { + if (rte_errno == EEXIST) + hdr_mz = rte_memzone_lookup(vq_hdr_name); + if (hdr_mz == NULL) { + ret = -ENOMEM; + goto fail_q_alloc; + } + } + } + + if (queue_type == VTNET_RQ) { + size_t sz_sw = (ZXDH_MBUF_BURST_SZ + vq_size) * sizeof(vq->sw_ring[0]); + + sw_ring = rte_zmalloc_socket("sw_ring", sz_sw, RTE_CACHE_LINE_SIZE, numa_node); + if (!sw_ring) { + PMD_INIT_LOG(ERR, "can not allocate RX soft ring"); + ret = -ENOMEM; + goto fail_q_alloc; + } + + vq->sw_ring = sw_ring; + rxvq = &vq->rxq; + rxvq->vq = vq; + rxvq->port_id = dev->data->port_id; + rxvq->mz = mz; + } else { + txvq = &vq->txq; + txvq->vq = vq; + txvq->port_id = dev->data->port_id; + txvq->mz = mz; + txvq->virtio_net_hdr_mz = hdr_mz; + txvq->virtio_net_hdr_mem = hdr_mz->iova; + } + + vq->offset = offsetof(struct rte_mbuf, buf_iova); + if (queue_type == VTNET_TQ) { + struct zxdh_tx_region *txr = hdr_mz->addr; + uint32_t i; + + memset(txr, 0, vq_size * sizeof(*txr)); + for (i = 0; i < vq_size; i++) { + /* first indirect descriptor is always the tx header */ + struct vring_packed_desc *start_dp = txr[i].tx_packed_indir; + + vring_desc_init_indirect_packed(start_dp, RTE_DIM(txr[i].tx_packed_indir)); + start_dp->addr = txvq->virtio_net_hdr_mem + i * sizeof(*txr) + + offsetof(struct zxdh_tx_region, tx_hdr); + /* length will be updated to actual pi hdr size when xmit pkt */ + start_dp->len = 0; + } + } + if (VTPCI_OPS(hw)->setup_queue(hw, vq) < 0) { + PMD_INIT_LOG(ERR, "setup_queue failed"); + return -EINVAL; + } + return 0; +fail_q_alloc: + rte_free(sw_ring); + rte_memzone_free(hdr_mz); + rte_memzone_free(mz); + rte_free(vq); + return ret; +} + +static int32_t zxdh_alloc_queues(struct rte_eth_dev *dev, uint16_t nr_vq) +{ + 
uint16_t lch; + struct zxdh_hw *hw = dev->data->dev_private; + + hw->vqs = rte_zmalloc(NULL, sizeof(struct virtqueue *) * nr_vq, 0); + if (!hw->vqs) { + PMD_INIT_LOG(ERR, "Failed to allocate vqs"); + return -ENOMEM; + } + for (lch = 0; lch < nr_vq; lch++) { + if (zxdh_acquire_channel(dev, lch) < 0) { + PMD_INIT_LOG(ERR, "Failed to acquire the channels"); + zxdh_free_queues(dev); + return -1; + } + if (zxdh_init_queue(dev, lch) < 0) { + PMD_INIT_LOG(ERR, "Failed to alloc virtio queue"); + zxdh_free_queues(dev); + return -1; + } + } + return 0; +} + + +static int32_t zxdh_dev_configure(struct rte_eth_dev *dev) +{ + const struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode; + const struct rte_eth_txmode *txmode = &dev->data->dev_conf.txmode; + struct zxdh_hw *hw = dev->data->dev_private; + uint32_t nr_vq = 0; + int32_t ret = 0; + + if (dev->data->nb_rx_queues != dev->data->nb_tx_queues) { + PMD_INIT_LOG(ERR, "nb_rx_queues=%d and nb_tx_queues=%d not equal!", + dev->data->nb_rx_queues, dev->data->nb_tx_queues); + return -EINVAL; + } + if ((dev->data->nb_rx_queues + dev->data->nb_tx_queues) >= ZXDH_QUEUES_NUM_MAX) { + PMD_INIT_LOG(ERR, "nb_rx_queues=%d + nb_tx_queues=%d must < (%d)!", + dev->data->nb_rx_queues, dev->data->nb_tx_queues, + ZXDH_QUEUES_NUM_MAX); + return -EINVAL; + } + if (rxmode->mq_mode != RTE_ETH_MQ_RX_RSS && rxmode->mq_mode != RTE_ETH_MQ_RX_NONE) { + PMD_DRV_LOG(ERR, "Unsupported Rx multi queue mode %d", rxmode->mq_mode); + return -EINVAL; + } + + if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) { + PMD_DRV_LOG(ERR, "Unsupported Tx multi queue mode %d", txmode->mq_mode); + return -EINVAL; + } + if (rxmode->mq_mode != RTE_ETH_MQ_RX_RSS && rxmode->mq_mode != RTE_ETH_MQ_RX_NONE) { + PMD_DRV_LOG(ERR, "Unsupported Rx multi queue mode %d", rxmode->mq_mode); + return -EINVAL; + } + + if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) { + PMD_DRV_LOG(ERR, "Unsupported Tx multi queue mode %d", txmode->mq_mode); + return -EINVAL; + } + + ret = zxdh_features_update(hw, rxmode, txmode); + if (ret < 0) + return ret; + + /* check if lsc interrupt feature is enabled */ + if (dev->data->dev_conf.intr_conf.lsc) { + if (!(dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)) { + PMD_DRV_LOG(ERR, "link status not supported by host"); + return -ENOTSUP; + } + } + + hw->has_tx_offload = tx_offload_enabled(hw); + hw->has_rx_offload = rx_offload_enabled(hw); + + nr_vq = dev->data->nb_rx_queues + dev->data->nb_tx_queues; + if (nr_vq == hw->queue_num) + return 0; + + PMD_DRV_LOG(DEBUG, "queue changed need reset "); + /* Reset the device although not necessary at startup */ + zxdh_vtpci_reset(hw); + + /* Tell the host we've noticed this device. */ + zxdh_vtpci_set_status(hw, ZXDH_CONFIG_STATUS_ACK); + + /* Tell the host we've known how to drive the device. 
*/ + zxdh_vtpci_set_status(hw, ZXDH_CONFIG_STATUS_DRIVER); + /* The queue needs to be released when reconfiguring*/ + if (hw->vqs != NULL) { + zxdh_dev_free_mbufs(dev); + zxdh_free_queues(dev); + } + + hw->queue_num = nr_vq; + ret = zxdh_alloc_queues(dev, nr_vq); + if (ret < 0) + return ret; + + zxdh_datach_set(dev); + + if (zxdh_configure_intr(dev) < 0) { + PMD_INIT_LOG(ERR, "Failed to configure interrupt"); + zxdh_free_queues(dev); + return -1; + } + + zxdh_vtpci_reinit_complete(hw); + + return ret; +} + /* dev_ops for zxdh, bare necessities for basic operation */ static const struct eth_dev_ops zxdh_eth_dev_ops = { + .dev_configure = zxdh_dev_configure, .dev_infos_get = zxdh_dev_infos_get, }; diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index 7c5f5940cb..d547785e2a 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -14,10 +14,8 @@ extern "C" { #include <rte_interrupts.h> #include <eal_interrupts.h> -#include "zxdh_queue.h" - /* ZXDH PCI vendor/device ID. */ -#define PCI_VENDOR_ID_ZTE 0x1cf2 +#define PCI_VENDOR_ID_ZTE 0x1cf2 #define ZXDH_E310_PF_DEVICEID 0x8061 #define ZXDH_E310_VF_DEVICEID 0x8062 @@ -30,9 +28,16 @@ extern "C" { #define ZXDH_RX_QUEUES_MAX 128U #define ZXDH_TX_QUEUES_MAX 128U +#define ZXDH_QUEUE_DEPTH 1024 +#define ZXDH_QUEUES_BASE 0 +#define ZXDH_TOTAL_QUEUES_NUM 4096 +#define ZXDH_QUEUES_NUM_MAX 256 +#define ZXDH_QUERES_SHARE_BASE (0x5000) #define ZXDH_NUM_BARS 2 +#define ZXDH_MBUF_BURST_SZ 64 + union VPORT { uint16_t vport; struct { @@ -44,6 +49,11 @@ union VPORT { }; }; +struct chnl_context { + uint16_t valid; + uint16_t ph_chno; +}; + struct zxdh_hw { struct rte_eth_dev *eth_dev; struct zxdh_pci_common_cfg *common_cfg; @@ -51,6 +61,7 @@ struct zxdh_hw { struct rte_intr_handle *risc_intr; struct rte_intr_handle *dtb_intr; struct virtqueue **vqs; + struct chnl_context channel_context[ZXDH_QUEUES_NUM_MAX]; union VPORT vport; uint64_t bar_addr[ZXDH_NUM_BARS]; @@ -64,6 +75,7 @@ struct zxdh_hw { uint16_t device_id; uint16_t port_id; uint16_t vfid; + uint16_t queue_num; uint8_t *isr; uint8_t weak_barriers; @@ -75,6 +87,8 @@ struct zxdh_hw { uint8_t phyport; uint8_t panel_id; uint8_t msg_chan_init; + uint8_t has_tx_offload; + uint8_t has_rx_offload; }; uint16_t vport_to_vfid(union VPORT v); diff --git a/drivers/net/zxdh/zxdh_pci.c b/drivers/net/zxdh/zxdh_pci.c index c63b7eee44..b174e35325 100644 --- a/drivers/net/zxdh/zxdh_pci.c +++ b/drivers/net/zxdh/zxdh_pci.c @@ -115,6 +115,86 @@ static uint8_t zxdh_get_isr(struct zxdh_hw *hw) return rte_read8(hw->isr); } +static uint16_t zxdh_get_queue_num(struct zxdh_hw *hw, uint16_t queue_id) +{ + rte_write16(queue_id, &hw->common_cfg->queue_select); + return rte_read16(&hw->common_cfg->queue_size); +} + +static void zxdh_set_queue_num(struct zxdh_hw *hw, uint16_t queue_id, uint16_t vq_size) +{ + rte_write16(queue_id, &hw->common_cfg->queue_select); + rte_write16(vq_size, &hw->common_cfg->queue_size); +} + +static int32_t check_vq_phys_addr_ok(struct virtqueue *vq) +{ + if ((vq->vq_ring_mem + vq->vq_ring_size - 1) >> (ZXDH_PCI_QUEUE_ADDR_SHIFT + 32)) { + PMD_INIT_LOG(ERR, "vring address shouldn't be above 16TB!"); + return 0; + } + return 1; +} + +static inline void io_write64_twopart(uint64_t val, uint32_t *lo, uint32_t *hi) +{ + rte_write32(val & ((1ULL << 32) - 1), lo); + rte_write32(val >> 32, hi); +} + +static int32_t zxdh_setup_queue(struct zxdh_hw *hw, struct virtqueue *vq) +{ + uint64_t desc_addr = 0; + uint64_t avail_addr = 0; + uint64_t used_addr = 0; + uint16_t 
notify_off = 0; + + if (!check_vq_phys_addr_ok(vq)) + return -1; + + desc_addr = vq->vq_ring_mem; + avail_addr = desc_addr + vq->vq_nentries * sizeof(struct vring_desc); + if (vtpci_packed_queue(vq->hw)) { + used_addr = RTE_ALIGN_CEIL((avail_addr + sizeof(struct vring_packed_desc_event)), + ZXDH_PCI_VRING_ALIGN); + } else { + used_addr = RTE_ALIGN_CEIL(avail_addr + offsetof(struct vring_avail, + ring[vq->vq_nentries]), ZXDH_PCI_VRING_ALIGN); + } + + rte_write16(vq->vq_queue_index, &hw->common_cfg->queue_select); + + io_write64_twopart(desc_addr, &hw->common_cfg->queue_desc_lo, + &hw->common_cfg->queue_desc_hi); + io_write64_twopart(avail_addr, &hw->common_cfg->queue_avail_lo, + &hw->common_cfg->queue_avail_hi); + io_write64_twopart(used_addr, &hw->common_cfg->queue_used_lo, + &hw->common_cfg->queue_used_hi); + + notify_off = rte_read16(&hw->common_cfg->queue_notify_off); /* default 0 */ + notify_off = 0; + vq->notify_addr = (void *)((uint8_t *)hw->notify_base + + notify_off * hw->notify_off_multiplier); + + rte_write16(1, &hw->common_cfg->queue_enable); + + return 0; +} + +static void zxdh_del_queue(struct zxdh_hw *hw, struct virtqueue *vq) +{ + rte_write16(vq->vq_queue_index, &hw->common_cfg->queue_select); + + io_write64_twopart(0, &hw->common_cfg->queue_desc_lo, + &hw->common_cfg->queue_desc_hi); + io_write64_twopart(0, &hw->common_cfg->queue_avail_lo, + &hw->common_cfg->queue_avail_hi); + io_write64_twopart(0, &hw->common_cfg->queue_used_lo, + &hw->common_cfg->queue_used_hi); + + rte_write16(0, &hw->common_cfg->queue_enable); +} + const struct zxdh_pci_ops zxdh_dev_pci_ops = { .read_dev_cfg = zxdh_read_dev_config, .write_dev_cfg = zxdh_write_dev_config, @@ -125,6 +205,10 @@ const struct zxdh_pci_ops zxdh_dev_pci_ops = { .set_queue_irq = zxdh_set_queue_irq, .set_config_irq = zxdh_set_config_irq, .get_isr = zxdh_get_isr, + .get_queue_num = zxdh_get_queue_num, + .set_queue_num = zxdh_set_queue_num, + .setup_queue = zxdh_setup_queue, + .del_queue = zxdh_del_queue, }; uint8_t zxdh_vtpci_isr(struct zxdh_hw *hw) @@ -151,6 +235,19 @@ void zxdh_vtpci_reset(struct zxdh_hw *hw) PMD_INIT_LOG(INFO, "port %u device reset %u ms done", hw->port_id, retry); } +void zxdh_vtpci_reinit_complete(struct zxdh_hw *hw) +{ + zxdh_vtpci_set_status(hw, ZXDH_CONFIG_STATUS_DRIVER_OK); +} + +void zxdh_vtpci_set_status(struct zxdh_hw *hw, uint8_t status) +{ + if (status != ZXDH_CONFIG_STATUS_RESET) + status |= VTPCI_OPS(hw)->get_status(hw); + + VTPCI_OPS(hw)->set_status(hw, status); +} + static void *get_cfg_addr(struct rte_pci_device *dev, struct zxdh_pci_cap *cap) { uint8_t bar = cap->bar; diff --git a/drivers/net/zxdh/zxdh_pci.h b/drivers/net/zxdh/zxdh_pci.h index 677dadd5c8..bb6714ecc0 100644 --- a/drivers/net/zxdh/zxdh_pci.h +++ b/drivers/net/zxdh/zxdh_pci.h @@ -12,7 +12,9 @@ extern "C" { #include <stdint.h> #include <stdbool.h> +#include <rte_pci.h> #include <bus_pci_driver.h> +#include <ethdev_driver.h> #include "zxdh_ethdev.h" @@ -34,8 +36,21 @@ enum zxdh_msix_status { #define PCI_CAP_ID_MSIX 0x11 #define PCI_MSIX_ENABLE 0x8000 +#define ZXDH_PCI_VRING_ALIGN 4096 + +#define ZXDH_NET_F_CSUM 0 /* Host handles pkts w/ partial csum */ +#define ZXDH_NET_F_GUEST_CSUM 1 /* Guest handles pkts w/ partial csum */ +#define ZXDH_NET_F_MTU 3 /* Initial MTU advice. */ #define ZXDH_NET_F_MAC 5 /* Host has given MAC address. */ +#define ZXDH_NET_F_GUEST_TSO4 7 /* Guest can handle TSOv4 in. */ +#define ZXDH_NET_F_GUEST_TSO6 8 /* Guest can handle TSOv6 in. */ +#define ZXDH_NET_F_GUEST_ECN 9 /* Guest can handle TSO[6] w/ ECN in. 
*/ +#define ZXDH_NET_F_GUEST_UFO 10 /* Guest can handle UFO in. */ + +#define ZXDH_NET_F_HOST_UFO 14 /* Host can handle UFO in. */ +#define ZXDH_NET_F_HOST_TSO4 11 /* Host can handle TSOv4 in. */ +#define ZXDH_NET_F_HOST_TSO6 12 /* Host can handle TSOv6 in. */ #define ZXDH_NET_F_MRG_RXBUF 15 /* Host can merge receive buffers. */ #define ZXDH_NET_F_STATUS 16 /* zxdh_net_config.status available */ #define ZXDH_NET_F_MQ 22 /* Device supports Receive Flow Steering */ @@ -60,6 +75,8 @@ enum zxdh_msix_status { #define ZXDH_CONFIG_STATUS_DEV_NEED_RESET 0x40 #define ZXDH_CONFIG_STATUS_FAILED 0x80 +#define ZXDH_PCI_QUEUE_ADDR_SHIFT 12 + struct zxdh_net_config { /* The config defining mac address (if ZXDH_NET_F_MAC) */ uint8_t mac[RTE_ETHER_ADDR_LEN]; @@ -122,6 +139,12 @@ static inline int32_t vtpci_with_feature(struct zxdh_hw *hw, uint64_t bit) return (hw->guest_features & (1ULL << bit)) != 0; } +static inline int32_t vtpci_packed_queue(struct zxdh_hw *hw) +{ + return vtpci_with_feature(hw, ZXDH_F_RING_PACKED); +} + + struct zxdh_pci_ops { void (*read_dev_cfg)(struct zxdh_hw *hw, size_t offset, void *dst, int32_t len); void (*write_dev_cfg)(struct zxdh_hw *hw, size_t offset, const void *src, int32_t len); @@ -134,6 +157,11 @@ struct zxdh_pci_ops { uint16_t (*set_queue_irq)(struct zxdh_hw *hw, struct virtqueue *vq, uint16_t vec); uint16_t (*set_config_irq)(struct zxdh_hw *hw, uint16_t vec); uint8_t (*get_isr)(struct zxdh_hw *hw); + uint16_t (*get_queue_num)(struct zxdh_hw *hw, uint16_t queue_id); + void (*set_queue_num)(struct zxdh_hw *hw, uint16_t queue_id, uint16_t vq_size); + + int32_t (*setup_queue)(struct zxdh_hw *hw, struct virtqueue *vq); + void (*del_queue)(struct zxdh_hw *hw, struct virtqueue *vq); }; struct zxdh_hw_internal { @@ -156,6 +184,9 @@ uint16_t zxdh_vtpci_get_features(struct zxdh_hw *hw); uint8_t zxdh_vtpci_isr(struct zxdh_hw *hw); enum zxdh_msix_status zxdh_vtpci_msix_detect(struct rte_pci_device *dev); +void zxdh_vtpci_reinit_complete(struct zxdh_hw *hw); +void zxdh_vtpci_set_status(struct zxdh_hw *hw, uint8_t status); + #ifdef __cplusplus } #endif diff --git a/drivers/net/zxdh/zxdh_queue.c b/drivers/net/zxdh/zxdh_queue.c new file mode 100644 index 0000000000..005b2a5578 --- /dev/null +++ b/drivers/net/zxdh/zxdh_queue.c @@ -0,0 +1,131 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#include <stdint.h> +#include <rte_malloc.h> +#include <rte_mbuf.h> + +#include "zxdh_queue.h" +#include "zxdh_logs.h" +#include "zxdh_pci.h" +#include "zxdh_common.h" +#include "zxdh_msg.h" + +struct rte_mbuf *zxdh_virtqueue_detach_unused(struct virtqueue *vq) +{ + struct rte_mbuf *cookie = NULL; + int32_t idx = 0; + + if (vq == NULL) + return NULL; + + for (idx = 0; idx < vq->vq_nentries; idx++) { + cookie = vq->vq_descx[idx].cookie; + if (cookie != NULL) { + vq->vq_descx[idx].cookie = NULL; + return cookie; + } + } + return NULL; +} + +static int32_t zxdh_release_channel(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint16_t nr_vq = hw->queue_num; + uint32_t var = 0; + uint32_t addr = 0; + uint32_t widx = 0; + uint32_t bidx = 0; + uint16_t pch = 0; + uint16_t lch = 0; + uint16_t timeout = 0; + + while ((timeout++) < ZXDH_ACQUIRE_CHANNEL_NUM_MAX) { + if (zxdh_acquire_lock(hw) != 0) { + PMD_INIT_LOG(ERR, + "Could not acquire lock to release channel, timeout %d", timeout); + continue; + } + break; + } + + if (timeout >= ZXDH_ACQUIRE_CHANNEL_NUM_MAX) { + PMD_INIT_LOG(ERR, "Acquire lock timeout"); + return -1; + } + + for (lch = 
0; lch < nr_vq; lch++) { + if (hw->channel_context[lch].valid == 0) { + PMD_INIT_LOG(DEBUG, "Logic channel %d does not need to release", lch); + continue; + } + + pch = hw->channel_context[lch].ph_chno; + widx = pch / 32; + bidx = pch % 32; + + addr = ZXDH_QUERES_SHARE_BASE + (widx * sizeof(uint32_t)); + var = zxdh_read_bar_reg(dev, ZXDH_BAR0_INDEX, addr); + var &= ~(1 << bidx); + zxdh_write_bar_reg(dev, ZXDH_BAR0_INDEX, addr, var); + + hw->channel_context[lch].valid = 0; + hw->channel_context[lch].ph_chno = 0; + } + + zxdh_release_lock(hw); + + return 0; +} + +int32_t zxdh_get_queue_type(uint16_t vtpci_queue_idx) +{ + if (vtpci_queue_idx % 2 == 0) + return VTNET_RQ; + else + return VTNET_TQ; +} + +int32_t zxdh_free_queues(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint16_t nr_vq = hw->queue_num; + struct virtqueue *vq = NULL; + int32_t queue_type = 0; + uint16_t i = 0; + + if (hw->vqs == NULL) + return 0; + + if (zxdh_release_channel(dev) < 0) { + PMD_INIT_LOG(ERR, "Failed to clear coi table"); + return -1; + } + + for (i = 0; i < nr_vq; i++) { + vq = hw->vqs[i]; + if (vq == NULL) + continue; + + VTPCI_OPS(hw)->del_queue(hw, vq); + queue_type = zxdh_get_queue_type(i); + if (queue_type == VTNET_RQ) { + rte_free(vq->sw_ring); + rte_memzone_free(vq->rxq.mz); + } else if (queue_type == VTNET_TQ) { + rte_memzone_free(vq->txq.mz); + rte_memzone_free(vq->txq.virtio_net_hdr_mz); + } + + rte_free(vq); + hw->vqs[i] = NULL; + PMD_INIT_LOG(DEBUG, "Release to queue %d success!", i); + } + + rte_free(hw->vqs); + hw->vqs = NULL; + + return 0; +} diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h index 336c0701f4..e0fa5502e8 100644 --- a/drivers/net/zxdh/zxdh_queue.h +++ b/drivers/net/zxdh/zxdh_queue.h @@ -15,6 +15,25 @@ extern "C" { #include "zxdh_ethdev.h" #include "zxdh_rxtx.h" +#include "zxdh_pci.h" + +enum { VTNET_RQ = 0, VTNET_TQ = 1 }; + +#define VIRTQUEUE_MAX_NAME_SZ 32 +#define RQ_QUEUE_IDX 0 +#define TQ_QUEUE_IDX 1 +#define ZXDH_MAX_TX_INDIRECT 8 + +/* This marks a buffer as write-only (otherwise read-only). */ +#define VRING_DESC_F_WRITE 2 +/* This flag means the descriptor was made available by the driver */ +#define VRING_PACKED_DESC_F_AVAIL (1 << (7)) + +#define RING_EVENT_FLAGS_ENABLE 0x0 +#define RING_EVENT_FLAGS_DISABLE 0x1 +#define RING_EVENT_FLAGS_DESC 0x2 + +#define VQ_RING_DESC_CHAIN_END 32768 /** ring descriptors: 16 bytes. * These can chain together via "next". @@ -26,6 +45,19 @@ struct vring_desc { uint16_t next; /* We chain unused descriptors via this. */ }; +struct vring_used_elem { + /* Index of start of used descriptor chain. */ + uint32_t id; + /* Total length of the descriptor chain which was written to. 
*/ + uint32_t len; +}; + +struct vring_used { + uint16_t flags; + uint16_t idx; + struct vring_used_elem ring[0]; +}; + struct vring_avail { uint16_t flags; uint16_t idx; @@ -102,4 +134,148 @@ struct virtqueue { struct vq_desc_extra vq_descx[0]; }; +struct zxdh_type_hdr { + uint8_t port; /* bit[0:1] 00-np 01-DRS 10-DTP */ + uint8_t pd_len; + uint8_t num_buffers; + uint8_t reserved; +} __rte_packed; /* 4B */ + +struct zxdh_pi_hdr { + uint8_t pi_len; + uint8_t pkt_type; + uint16_t vlan_id; + uint32_t ipv6_extend; + uint16_t l3_offset; + uint16_t l4_offset; + uint8_t phy_port; + uint8_t pkt_flag_hi8; + uint16_t pkt_flag_lw16; + union { + struct { + uint64_t sa_idx; + uint8_t reserved_8[8]; + } dl; + struct { + uint32_t lro_flag; + uint32_t lro_mss; + uint16_t err_code; + uint16_t pm_id; + uint16_t pkt_len; + uint8_t reserved[2]; + } ul; + }; +} __rte_packed; /* 32B */ + +struct zxdh_pd_hdr_dl { + uint32_t ol_flag; + uint8_t tag_idx; + uint8_t tag_data; + uint16_t dst_vfid; + uint32_t svlan_insert; + uint32_t cvlan_insert; +} __rte_packed; /* 16B */ + +struct zxdh_net_hdr_dl { + struct zxdh_type_hdr type_hdr; /* 4B */ + struct zxdh_pi_hdr pi_hdr; /* 32B */ + struct zxdh_pd_hdr_dl pd_hdr; /* 16B */ +} __rte_packed; + +struct zxdh_pd_hdr_ul { + uint32_t pkt_flag; + uint32_t rss_hash; + uint32_t fd; + uint32_t striped_vlan_tci; + /* ovs */ + uint8_t tag_idx; + uint8_t tag_data; + uint16_t src_vfid; + /* */ + uint16_t pkt_type_out; + uint16_t pkt_type_in; +} __rte_packed; /* 24B */ + +struct zxdh_net_hdr_ul { + struct zxdh_type_hdr type_hdr; /* 4B */ + struct zxdh_pi_hdr pi_hdr; /* 32B */ + struct zxdh_pd_hdr_ul pd_hdr; /* 24B */ +} __rte_packed; /* 60B */ + +struct zxdh_tx_region { + struct zxdh_net_hdr_dl tx_hdr; + union { + struct vring_desc tx_indir[ZXDH_MAX_TX_INDIRECT]; + struct vring_packed_desc tx_packed_indir[ZXDH_MAX_TX_INDIRECT]; + } __rte_packed; +}; + +static inline size_t vring_size(struct zxdh_hw *hw, uint32_t num, unsigned long align) +{ + size_t size; + + if (vtpci_packed_queue(hw)) { + size = num * sizeof(struct vring_packed_desc); + size += sizeof(struct vring_packed_desc_event); + size = RTE_ALIGN_CEIL(size, align); + size += sizeof(struct vring_packed_desc_event); + return size; + } + + size = num * sizeof(struct vring_desc); + size += sizeof(struct vring_avail) + (num * sizeof(uint16_t)); + size = RTE_ALIGN_CEIL(size, align); + size += sizeof(struct vring_used) + (num * sizeof(struct vring_used_elem)); + return size; +} + +static inline void vring_init_packed(struct vring_packed *vr, uint8_t *p, + unsigned long align, uint32_t num) +{ + vr->num = num; + vr->desc = (struct vring_packed_desc *)p; + vr->driver = (struct vring_packed_desc_event *)(p + + vr->num * sizeof(struct vring_packed_desc)); + vr->device = (struct vring_packed_desc_event *)RTE_ALIGN_CEIL(((uintptr_t)vr->driver + + sizeof(struct vring_packed_desc_event)), align); +} + +static inline void vring_desc_init_packed(struct virtqueue *vq, int32_t n) +{ + int32_t i = 0; + + for (i = 0; i < n - 1; i++) { + vq->vq_packed.ring.desc[i].id = i; + vq->vq_descx[i].next = i + 1; + } + vq->vq_packed.ring.desc[i].id = i; + vq->vq_descx[i].next = VQ_RING_DESC_CHAIN_END; +} + +static inline void vring_desc_init_indirect_packed(struct vring_packed_desc *dp, int32_t n) +{ + int32_t i = 0; + + for (i = 0; i < n; i++) { + dp[i].id = (uint16_t)i; + dp[i].flags = VRING_DESC_F_WRITE; + } +} + +static inline void virtqueue_disable_intr(struct virtqueue *vq) +{ + if (vq->vq_packed.event_flags_shadow != RING_EVENT_FLAGS_DISABLE) { + 
vq->vq_packed.event_flags_shadow = RING_EVENT_FLAGS_DISABLE; + vq->vq_packed.ring.driver->desc_event_flags = vq->vq_packed.event_flags_shadow; + } +} + +struct rte_mbuf *zxdh_virtqueue_detach_unused(struct virtqueue *vq); +int32_t zxdh_free_queues(struct rte_eth_dev *dev); +int32_t zxdh_get_queue_type(uint16_t vtpci_queue_idx); + +#ifdef __cplusplus +} +#endif + #endif /* _ZXDH_QUEUE_H_ */ diff --git a/drivers/net/zxdh/zxdh_rxtx.h b/drivers/net/zxdh/zxdh_rxtx.h index 7314f76d2c..6476bc15e2 100644 --- a/drivers/net/zxdh/zxdh_rxtx.h +++ b/drivers/net/zxdh/zxdh_rxtx.h @@ -48,4 +48,8 @@ struct virtnet_tx { const struct rte_memzone *mz; /* mem zone to populate TX ring. */ }; +#ifdef __cplusplus +} +#endif + #endif /* _ZXDH_RXTX_H_ */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 92447 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
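For readers tracing the ring-layout arithmetic in vring_size() above, the short standalone sketch below reproduces the same computation. The 16-byte descriptor and 4-byte packed event sizes are local stand-ins mirroring the structs in this patch, not the driver's real headers, so treat it as an illustration of the layout math rather than driver code.

/* Standalone sketch of the vring_size() arithmetic from zxdh_queue.h.
 * The sizes below mirror the patch's structs and are assumptions for
 * illustration only.
 */
#include <stdint.h>
#include <stdio.h>

#define ALIGN_CEIL(v, a) (((v) + (a) - 1) & ~((size_t)(a) - 1))

#define DESC_SZ 16u          /* both split and packed descriptors are 16 B */
#define PACKED_EVENT_SZ 4u   /* packed event structure: two uint16_t fields */

static size_t split_vring_size(uint32_t num, size_t align)
{
	size_t size = num * DESC_SZ;        /* descriptor table */

	size += 4 + num * sizeof(uint16_t); /* avail: flags + idx + ring[num] */
	size = ALIGN_CEIL(size, align);     /* device-owned area starts aligned */
	size += 4 + num * 8;                /* used: flags + idx + 8 B elems */
	return size;
}

static size_t packed_vring_size(uint32_t num, size_t align)
{
	size_t size = num * DESC_SZ;        /* descriptor ring */

	size += PACKED_EVENT_SZ;            /* driver event suppression */
	size = ALIGN_CEIL(size, align);
	size += PACKED_EVENT_SZ;            /* device event suppression */
	return size;
}

int main(void)
{
	uint32_t num = 256;  /* hypothetical queue depth */
	size_t align = 4096; /* hypothetical page alignment */

	printf("split : %zu bytes\n", split_vring_size(num, align));
	printf("packed: %zu bytes\n", packed_vring_size(num, align));
	return 0;
}

With num = 256 and 4 KiB alignment the sketch prints 10244 bytes for the split layout and 8196 for the packed layout, which makes visible where the alignment padding between the driver-written and device-written areas ends up.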
* Re: [v6,9/9] net/zxdh: add zxdh dev configure ops 2024-10-16 8:18 ` [PATCH v6 9/9] net/zxdh: add zxdh dev configure ops Junlong Wang @ 2024-10-18 5:18 ` Junlong Wang 2024-10-18 6:48 ` David Marchand 2024-10-19 11:17 ` Junlong Wang 1 sibling, 1 reply; 225+ messages in thread From: Junlong Wang @ 2024-10-18 5:18 UTC (permalink / raw) To: dev; +Cc: stephen, ferruh.yigit, wang.yong19 [-- Attachment #1.1.1: Type: text/plain, Size: 348 bytes --] Hi, Maintainer iol-unit-amd64-testing/ Debian 12 | FAIL DPDK:fast-tests / bitops_autotest FAIL 1.72 s (exit status 255 or signal 127 SIGinvalid) Do we need to solve this error? We haven't found the detailed reason for the error in the output log. I noticed that the patches submitted by others also have this error. Thanks. [-- Attachment #1.1.2: Type: text/html , Size: 700 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
* Re: [v6,9/9] net/zxdh: add zxdh dev configure ops 2024-10-18 5:18 ` [v6,9/9] " Junlong Wang @ 2024-10-18 6:48 ` David Marchand 0 siblings, 0 replies; 225+ messages in thread From: David Marchand @ 2024-10-18 6:48 UTC (permalink / raw) To: Junlong Wang; +Cc: dev, stephen, ferruh.yigit, wang.yong19 Hello Junlong, On Fri, Oct 18, 2024 at 7:21 AM Junlong Wang <wang.junlong1@zte.com.cn> wrote: > > Hi, Maintainer > > iol-unit-amd64-testing/ Debian 12 | FAIL > DPDK:fast-tests / bitops_autotest FAIL 1.72 s (exit status 255 or signal 127 SIGinvalid) > > Do we need to solve this error? We haven't found the detailed reason for the error in the output log. I noticed that the patches submitted by others also have this error. It was a false positive when testing your series (on top of next-net repo at the time) and you can ignore this failure. This is fixed in the main repo with: 899cffe059b8 ("test/bitops: fix 32-bit atomic test spurious failure"). next-net was rebased recently and contains the fix. -- David Marchand ^ permalink raw reply [flat|nested] 225+ messages in thread
* Re: [v6,9/9] net/zxdh: add zxdh dev configure ops 2024-10-16 8:18 ` [PATCH v6 9/9] net/zxdh: add zxdh dev configure ops Junlong Wang 2024-10-18 5:18 ` [v6,9/9] " Junlong Wang @ 2024-10-19 11:17 ` Junlong Wang 1 sibling, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-10-19 11:17 UTC (permalink / raw) To: dev; +Cc: ferruh.yigit, stephen, wang.yong19 [-- Attachment #1.1.1: Type: text/plain, Size: 155 bytes --] Hi, Maintainer Hope you can take some time to check if there are any modifications needed for the net/zxdh driver. Thank you very much! [-- Attachment #1.1.2: Type: text/html , Size: 367 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
* Re: [PATCH v6 1/9] net/zxdh: add zxdh ethdev pmd driver 2024-10-16 8:16 ` [PATCH v6 1/9] net/zxdh: add zxdh ethdev pmd driver Junlong Wang 2024-10-16 8:18 ` [PATCH v6 2/9] net/zxdh: add logging implementation Junlong Wang @ 2024-10-21 9:03 ` Thomas Monjalon 2024-10-22 12:20 ` [PATCH v7 0/9] net/zxdh: introduce net zxdh driver Junlong Wang 2 siblings, 0 replies; 225+ messages in thread From: Thomas Monjalon @ 2024-10-21 9:03 UTC (permalink / raw) To: Junlong Wang; +Cc: dev, stephen, ferruh.yigit, wang.yong19 16/10/2024 10:16, Junlong Wang: > Add basic zxdh ethdev init and register PCI probe functions > Update doc files > > Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> > --- > doc/guides/nics/features/zxdh.ini | 9 +++ > doc/guides/nics/index.rst | 1 + > doc/guides/nics/zxdh.rst | 30 ++++++++++ > drivers/net/meson.build | 1 + > drivers/net/zxdh/meson.build | 18 ++++++ > drivers/net/zxdh/zxdh_ethdev.c | 92 +++++++++++++++++++++++++ > drivers/net/zxdh/zxdh_ethdev.h | 44 +++++++++++++ > 7 files changed, 195 insertions(+) Release notes are missing. [...] > +++ b/doc/guides/nics/zxdh.rst > @@ -0,0 +1,30 @@ > +.. SPDX-License-Identifier: BSD-3-Clause A single space is enough here. > + Copyright(c) 2024 ZTE Corporation. > + > +ZXDH Poll Mode Driver > +====================== > + > +The ZXDH PMD (**librte_net_zxdh**) provides poll mode driver support > +for 25/100 Gbps ZXDH NX Series Ethernet Controller based on > +the ZTE Ethernet Controller E310/E312. > + > +- Learning about ZXDH NX Series Ethernet Controller NICs using Active form is better: "Learn about" > + `<https://enterprise.zte.com.cn/sup-detail.html?id=271&suptype=1>`_. > + > +Features > +-------- > + > +Features of the zxdh PMD are: Do you name your driver uppercase or lowercase? Try to be consistent. > + > +- Multi arch support: x86_64, ARMv8. > + > + > +Driver compilation and testing > +------------------------------ > + > +Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>` > +for details. > + > +Limitations or Known issues > +--------------------------- > +X86-32, Power8, ARMv7 and BSD are not supported yet. Keep an empty line after a title. What about RISC-V and Windows? [...] > +++ b/drivers/net/zxdh/zxdh_ethdev.h > @@ -0,0 +1,44 @@ > +/* SPDX-License-Identifier: BSD-3-Clause > + * Copyright(c) 2024 ZTE Corporation > + */ > + > +#ifndef _ZXDH_ETHDEV_H_ > +#define _ZXDH_ETHDEV_H_ No need for the underscores at the beginning and end: ZXDH_ETHDEV_H > + > +#ifdef __cplusplus > +extern "C" { > +#endif > + > +#include "ethdev_driver.h" The includes should be before extern "C" ^ permalink raw reply [flat|nested] 225+ messages in thread
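Taken together, the guard-name and include-placement remarks above describe a small header template. The sketch below is one possible shape for zxdh_ethdev.h after addressing them; the guard name and the ethdev_driver.h include come from the patch, while the rest is a generic skeleton rather than the final submitted file.

/* SPDX-License-Identifier: BSD-3-Clause
 * Copyright(c) 2024 ZTE Corporation
 */

#ifndef ZXDH_ETHDEV_H        /* guard without leading/trailing underscores */
#define ZXDH_ETHDEV_H

#include "ethdev_driver.h"   /* includes come before extern "C" */

#ifdef __cplusplus
extern "C" {
#endif

/* driver declarations go here */

#ifdef __cplusplus
}
#endif

#endif /* ZXDH_ETHDEV_H */

This is indeed the shape the v7 repost of the header takes further down the thread.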
* [PATCH v7 0/9] net/zxdh: introduce net zxdh driver 2024-10-16 8:16 ` [PATCH v6 1/9] net/zxdh: add zxdh ethdev pmd driver Junlong Wang 2024-10-16 8:18 ` [PATCH v6 2/9] net/zxdh: add logging implementation Junlong Wang 2024-10-21 9:03 ` [PATCH v6 1/9] net/zxdh: add zxdh ethdev pmd driver Thomas Monjalon @ 2024-10-22 12:20 ` Junlong Wang 2024-10-22 12:20 ` [PATCH v7 1/9] net/zxdh: add zxdh ethdev pmd driver Junlong Wang ` (8 more replies) 2 siblings, 9 replies; 225+ messages in thread From: Junlong Wang @ 2024-10-22 12:20 UTC (permalink / raw) To: dev; +Cc: thomas, ferruh.yigit, stephen, wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 2872 bytes --] v7: - add release notes and modify zxdh.rst issues. - avoid using pthread and use rte_spinlock_lock. - using the prefix ZXDH_ before some definitions. - resolve issues according to Thomas's comments. v6: - Resolve ci/intel compilation issues. - fix meson.build indentation in earlier patch. v5: - split driver into multiple patches, part of the zxdh driver; later provide dev start/stop, queue_setup, npsdk_init, mac, vlan, rss, etc. - fix errors reported by scripts. - move the product link in zxdh.rst. - fix meson check to use RTE_ARCH_X86_64/RTE_ARCH_ARM64. - modify other comments according to Ferruh's comments. Junlong Wang (9): net/zxdh: add zxdh ethdev pmd driver net/zxdh: add logging implementation net/zxdh: add zxdh device pci init implementation net/zxdh: add msg chan and msg hwlock init net/zxdh: add msg chan enable implementation net/zxdh: add zxdh get device backend infos net/zxdh: add configure zxdh intr implementation net/zxdh: add zxdh dev infos get ops net/zxdh: add zxdh dev configure ops doc/guides/nics/features/zxdh.ini | 9 + doc/guides/nics/index.rst | 1 + doc/guides/nics/zxdh.rst | 31 + doc/guides/rel_notes/release_24_11.rst | 4 + drivers/net/meson.build | 1 + drivers/net/zxdh/meson.build | 22 + drivers/net/zxdh/zxdh_common.c | 367 +++++++++ drivers/net/zxdh/zxdh_common.h | 42 + drivers/net/zxdh/zxdh_ethdev.c | 1006 ++++++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev.h | 100 +++ drivers/net/zxdh/zxdh_logs.h | 40 + drivers/net/zxdh/zxdh_msg.c | 982 +++++++++++++++++++++++ drivers/net/zxdh/zxdh_msg.h | 228 ++++++ drivers/net/zxdh/zxdh_pci.c | 449 +++++++++++ drivers/net/zxdh/zxdh_pci.h | 192 +++++ drivers/net/zxdh/zxdh_queue.c | 131 +++ drivers/net/zxdh/zxdh_queue.h | 282 +++++++ drivers/net/zxdh/zxdh_rxtx.h | 55 ++ 18 files changed, 3942 insertions(+) create mode 100644 doc/guides/nics/features/zxdh.ini create mode 100644 doc/guides/nics/zxdh.rst create mode 100644 drivers/net/zxdh/meson.build create mode 100644 drivers/net/zxdh/zxdh_common.c create mode 100644 drivers/net/zxdh/zxdh_common.h create mode 100644 drivers/net/zxdh/zxdh_ethdev.c create mode 100644 drivers/net/zxdh/zxdh_ethdev.h create mode 100644 drivers/net/zxdh/zxdh_logs.h create mode 100644 drivers/net/zxdh/zxdh_msg.c create mode 100644 drivers/net/zxdh/zxdh_msg.h create mode 100644 drivers/net/zxdh/zxdh_pci.c create mode 100644 drivers/net/zxdh/zxdh_pci.h create mode 100644 drivers/net/zxdh/zxdh_queue.c create mode 100644 drivers/net/zxdh/zxdh_queue.h create mode 100644 drivers/net/zxdh/zxdh_rxtx.h -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 5679 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
* [PATCH v7 1/9] net/zxdh: add zxdh ethdev pmd driver 2024-10-22 12:20 ` [PATCH v7 0/9] net/zxdh: introduce net zxdh driver Junlong Wang @ 2024-10-22 12:20 ` Junlong Wang 2024-10-30 9:01 ` [PATCH v8 0/9] net/zxdh: introduce net zxdh driver Junlong Wang 2024-10-22 12:20 ` [PATCH v7 2/9] net/zxdh: add logging implementation Junlong Wang ` (7 subsequent siblings) 8 siblings, 1 reply; 225+ messages in thread From: Junlong Wang @ 2024-10-22 12:20 UTC (permalink / raw) To: dev; +Cc: thomas, ferruh.yigit, stephen, wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 8218 bytes --] Add basic zxdh ethdev init and register PCI probe functions Update doc files. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- doc/guides/nics/features/zxdh.ini | 9 +++ doc/guides/nics/index.rst | 1 + doc/guides/nics/zxdh.rst | 31 +++++++++ doc/guides/rel_notes/release_24_11.rst | 4 ++ drivers/net/meson.build | 1 + drivers/net/zxdh/meson.build | 18 +++++ drivers/net/zxdh/zxdh_ethdev.c | 92 ++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev.h | 44 ++++++++++++ 8 files changed, 200 insertions(+) create mode 100644 doc/guides/nics/features/zxdh.ini create mode 100644 doc/guides/nics/zxdh.rst create mode 100644 drivers/net/zxdh/meson.build create mode 100644 drivers/net/zxdh/zxdh_ethdev.c create mode 100644 drivers/net/zxdh/zxdh_ethdev.h diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini new file mode 100644 index 0000000000..05c8091ed7 --- /dev/null +++ b/doc/guides/nics/features/zxdh.ini @@ -0,0 +1,9 @@ +; +; Supported features of the 'zxdh' network poll mode driver. +; +; Refer to default.ini for the full list of available PMD features. +; +[Features] +Linux = Y +x86-64 = Y +ARMv8 = Y diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst index c14bc7988a..8e371ac4a5 100644 --- a/doc/guides/nics/index.rst +++ b/doc/guides/nics/index.rst @@ -69,3 +69,4 @@ Network Interface Controller Drivers vhost virtio vmxnet3 + zxdh diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst new file mode 100644 index 0000000000..920ff5175e --- /dev/null +++ b/doc/guides/nics/zxdh.rst @@ -0,0 +1,31 @@ +.. SPDX-License-Identifier: BSD-3-Clause + Copyright(c) 2024 ZTE Corporation. + +ZXDH Poll Mode Driver +====================== + +The ZXDH PMD (**librte_net_zxdh**) provides poll mode driver support +for 25/100 Gbps ZXDH NX Series Ethernet Controller based on +the ZTE Ethernet Controller E310/E312. + +- Learn about ZXDH NX Series Ethernet Controller NICs using + `<https://enterprise.zte.com.cn/sup-detail.html?id=271&suptype=1>`_. + +Features +-------- + +Features of the ZXDH PMD are: + +- Multi arch support: x86_64, ARMv8. + + +Driver compilation and testing +------------------------------ + +Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>` +for details. + +Limitations or Known issues +--------------------------- + +X86-32, Power8, ARMv7, RISC-V, Windows and BSD are not supported yet. diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst index fa4822d928..d1edcfebee 100644 --- a/doc/guides/rel_notes/release_24_11.rst +++ b/doc/guides/rel_notes/release_24_11.rst @@ -161,6 +161,10 @@ New Features * Added initialization of FPGA modules related to flow HW offload. * Added basic handling of the virtual queues. +* **Updated ZTE zxdh net driver.** + + * Added ethdev driver support for zxdh NX Series Ethernet Controller. 
+ * **Added cryptodev queue pair reset support.** A new API ``rte_cryptodev_queue_pair_reset`` is added diff --git a/drivers/net/meson.build b/drivers/net/meson.build index fb6d34b782..0a12914534 100644 --- a/drivers/net/meson.build +++ b/drivers/net/meson.build @@ -62,6 +62,7 @@ drivers = [ 'vhost', 'virtio', 'vmxnet3', + 'zxdh', ] std_deps = ['ethdev', 'kvargs'] # 'ethdev' also pulls in mbuf, net, eal etc std_deps += ['bus_pci'] # very many PMDs depend on PCI, so make std diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build new file mode 100644 index 0000000000..932fb1c835 --- /dev/null +++ b/drivers/net/zxdh/meson.build @@ -0,0 +1,18 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2024 ZTE Corporation + +if not is_linux + build = false + reason = 'only supported on Linux' + subdir_done() +endif + +if not dpdk_conf.has('RTE_ARCH_X86_64') or not dpdk_conf.get('RTE_ARCH_64') + build = false + reason = 'only supported on x86_64 and aarch64' + subdir_done() +endif + +sources = files( + 'zxdh_ethdev.c', +) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c new file mode 100644 index 0000000000..75d8b28cc3 --- /dev/null +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -0,0 +1,92 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#include <ethdev_pci.h> +#include <bus_pci_driver.h> +#include <rte_ethdev.h> + +#include "zxdh_ethdev.h" + +static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) +{ + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); + struct zxdh_hw *hw = eth_dev->data->dev_private; + int ret = 0; + + eth_dev->dev_ops = NULL; + + /* Allocate memory for storing MAC addresses */ + eth_dev->data->mac_addrs = rte_zmalloc("zxdh_mac", + ZXDH_MAX_MAC_ADDRS * RTE_ETHER_ADDR_LEN, 0); + if (eth_dev->data->mac_addrs == NULL) + return -ENOMEM; + + memset(hw, 0, sizeof(*hw)); + hw->bar_addr[0] = (uint64_t)pci_dev->mem_resource[0].addr; + if (hw->bar_addr[0] == 0) + return -EIO; + + hw->device_id = pci_dev->id.device_id; + hw->port_id = eth_dev->data->port_id; + hw->eth_dev = eth_dev; + hw->speed = RTE_ETH_SPEED_NUM_UNKNOWN; + hw->duplex = RTE_ETH_LINK_FULL_DUPLEX; + hw->is_pf = 0; + + if (pci_dev->id.device_id == ZXDH_E310_PF_DEVICEID || + pci_dev->id.device_id == ZXDH_E312_PF_DEVICEID) { + hw->is_pf = 1; + } + + return ret; +} + +static int zxdh_eth_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, + struct rte_pci_device *pci_dev) +{ + return rte_eth_dev_pci_generic_probe(pci_dev, + sizeof(struct zxdh_hw), + zxdh_eth_dev_init); +} + +static int zxdh_dev_close(struct rte_eth_dev *dev __rte_unused) +{ + int ret = 0; + + return ret; +} + +static int zxdh_eth_dev_uninit(struct rte_eth_dev *eth_dev) +{ + int ret = 0; + + ret = zxdh_dev_close(eth_dev); + + return ret; +} + +static int zxdh_eth_pci_remove(struct rte_pci_device *pci_dev) +{ + int ret = rte_eth_dev_pci_generic_remove(pci_dev, zxdh_eth_dev_uninit); + + return ret; +} + +static const struct rte_pci_id pci_id_zxdh_map[] = { + {RTE_PCI_DEVICE(PCI_VENDOR_ID_ZTE, ZXDH_E310_PF_DEVICEID)}, + {RTE_PCI_DEVICE(PCI_VENDOR_ID_ZTE, ZXDH_E310_VF_DEVICEID)}, + {RTE_PCI_DEVICE(PCI_VENDOR_ID_ZTE, ZXDH_E312_PF_DEVICEID)}, + {RTE_PCI_DEVICE(PCI_VENDOR_ID_ZTE, ZXDH_E312_VF_DEVICEID)}, + {.vendor_id = 0, /* sentinel */ }, +}; +static struct rte_pci_driver zxdh_pmd = { + .id_table = pci_id_zxdh_map, + .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC, + .probe = zxdh_eth_pci_probe, + .remove = zxdh_eth_pci_remove, +}; + 
+RTE_PMD_REGISTER_PCI(net_zxdh, zxdh_pmd); +RTE_PMD_REGISTER_PCI_TABLE(net_zxdh, pci_id_zxdh_map); +RTE_PMD_REGISTER_KMOD_DEP(net_zxdh, "* vfio-pci"); diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h new file mode 100644 index 0000000000..086f3a0cdc --- /dev/null +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -0,0 +1,44 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#ifndef ZXDH_ETHDEV_H +#define ZXDH_ETHDEV_H + +#include "ethdev_driver.h" + +#ifdef __cplusplus +extern "C" { +#endif + +/* ZXDH PCI vendor/device ID. */ +#define PCI_VENDOR_ID_ZTE 0x1cf2 + +#define ZXDH_E310_PF_DEVICEID 0x8061 +#define ZXDH_E310_VF_DEVICEID 0x8062 +#define ZXDH_E312_PF_DEVICEID 0x8049 +#define ZXDH_E312_VF_DEVICEID 0x8060 + +#define ZXDH_MAX_UC_MAC_ADDRS 32 +#define ZXDH_MAX_MC_MAC_ADDRS 32 +#define ZXDH_MAX_MAC_ADDRS (ZXDH_MAX_UC_MAC_ADDRS + ZXDH_MAX_MC_MAC_ADDRS) + +#define ZXDH_NUM_BARS 2 + +struct zxdh_hw { + struct rte_eth_dev *eth_dev; + uint64_t bar_addr[ZXDH_NUM_BARS]; + + uint32_t speed; + uint16_t device_id; + uint16_t port_id; + + uint8_t duplex; + uint8_t is_pf; +}; + +#ifdef __cplusplus +} +#endif + +#endif /* ZXDH_ETHDEV_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 15499 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
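With pci_id_zxdh_map registered as above, any DPDK application's EAL PCI scan will invoke zxdh_eth_dev_init() for matching devices bound to vfio-pci. As a quick probe smoke test, something along the lines of the snippet below could be used; it relies only on stable, generic ethdev APIs and assumes nothing zxdh-specific beyond the device being bound.

/* Minimal probe smoke test: list the ports the EAL PCI scan discovered.
 * Generic ethdev API only; no zxdh internals are assumed.
 */
#include <stdio.h>
#include <rte_eal.h>
#include <rte_ethdev.h>

int
main(int argc, char **argv)
{
	uint16_t port_id;

	if (rte_eal_init(argc, argv) < 0)
		return -1;

	RTE_ETH_FOREACH_DEV(port_id) {
		char name[RTE_ETH_NAME_MAX_LEN];

		if (rte_eth_dev_get_name_by_port(port_id, name) == 0)
			printf("port %u: %s\n", port_id, name);
	}

	rte_eal_cleanup();
	return 0;
}

Running it with a PCI allowlist (the EAL -a option plus the device's bus address, which is deployment-specific) keeps other NICs out of the test.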
* [PATCH v8 0/9] net/zxdh: introduce net zxdh driver 2024-10-22 12:20 ` [PATCH v7 1/9] net/zxdh: add zxdh ethdev pmd driver Junlong Wang @ 2024-10-30 9:01 ` Junlong Wang 2024-10-30 9:01 ` [PATCH v8 1/9] net/zxdh: add zxdh ethdev pmd driver Junlong Wang ` (8 more replies) 0 siblings, 9 replies; 225+ messages in thread From: Junlong Wang @ 2024-10-30 9:01 UTC (permalink / raw) To: dev; +Cc: wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 3158 bytes --] v8: - fix flexible array and Waddress-of-packed-member errors. - all structs, enums, defines, etc. use the zxdh/ZXDH_ prefix. - use zxdh_try/release_lock, and move the loop into zxdh_timedlock, making the hardware lock follow the spinlock pattern. v7: - add release notes and modify zxdh.rst issues. - avoid using pthread and use rte_spinlock_lock. - using the prefix ZXDH_ before some definitions. - resolve issues according to Thomas's comments. v6: - Resolve ci/intel compilation issues. - fix meson.build indentation in earlier patch. v5: - split driver into multiple patches, part of the zxdh driver; later provide dev start/stop, queue_setup, npsdk_init, mac, vlan, rss, etc. - fix errors reported by scripts. - move the product link in zxdh.rst. - fix meson check to use RTE_ARCH_X86_64/RTE_ARCH_ARM64. - modify other comments according to Ferruh's comments. Junlong Wang (9): net/zxdh: add zxdh ethdev pmd driver net/zxdh: add logging implementation net/zxdh: add zxdh device pci init implementation net/zxdh: add msg chan and msg hwlock init net/zxdh: add msg chan enable implementation net/zxdh: add zxdh get device backend infos net/zxdh: add configure zxdh intr implementation net/zxdh: add zxdh dev infos get ops net/zxdh: add zxdh dev configure ops MAINTAINERS | 6 + doc/guides/nics/features/zxdh.ini | 9 + doc/guides/nics/index.rst | 1 + doc/guides/nics/zxdh.rst | 31 + doc/guides/rel_notes/release_24_11.rst | 4 + drivers/net/meson.build | 1 + drivers/net/zxdh/meson.build | 22 + drivers/net/zxdh/zxdh_common.c | 385 ++++++++++ drivers/net/zxdh/zxdh_common.h | 42 + drivers/net/zxdh/zxdh_ethdev.c | 994 +++++++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev.h | 104 +++ drivers/net/zxdh/zxdh_logs.h | 40 + drivers/net/zxdh/zxdh_msg.c | 985 ++++++++++++++++++++++++ drivers/net/zxdh/zxdh_msg.h | 230 ++++++ drivers/net/zxdh/zxdh_pci.c | 445 +++++++++++ drivers/net/zxdh/zxdh_pci.h | 192 +++++ drivers/net/zxdh/zxdh_queue.c | 123 +++ drivers/net/zxdh/zxdh_queue.h | 282 +++++++ drivers/net/zxdh/zxdh_rxtx.h | 55 ++ 19 files changed, 3951 insertions(+) create mode 100644 doc/guides/nics/features/zxdh.ini create mode 100644 doc/guides/nics/zxdh.rst create mode 100644 drivers/net/zxdh/meson.build create mode 100644 drivers/net/zxdh/zxdh_common.c create mode 100644 drivers/net/zxdh/zxdh_common.h create mode 100644 drivers/net/zxdh/zxdh_ethdev.c create mode 100644 drivers/net/zxdh/zxdh_ethdev.h create mode 100644 drivers/net/zxdh/zxdh_logs.h create mode 100644 drivers/net/zxdh/zxdh_msg.c create mode 100644 drivers/net/zxdh/zxdh_msg.h create mode 100644 drivers/net/zxdh/zxdh_pci.c create mode 100644 drivers/net/zxdh/zxdh_pci.h create mode 100644 drivers/net/zxdh/zxdh_queue.c create mode 100644 drivers/net/zxdh/zxdh_queue.h create mode 100644 drivers/net/zxdh/zxdh_rxtx.h -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 6229 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
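The zxdh_timedlock change described above amounts to a bounded trylock loop: keep attempting the hardware lock until a deadline passes, in the same shape as a spinlock acquire. The sketch below is an assumption about that pattern using DPDK's cycle counters; try_hw_lock() and release_hw_lock() are hypothetical stand-ins for the driver's BAR register accessors, not its real functions.

/* Hedged sketch of a timed hardware-lock acquire in the spinlock style
 * the v8 notes describe. try_hw_lock()/release_hw_lock() are hypothetical
 * stand-ins for the driver's BAR register accessors.
 */
#include <stdbool.h>
#include <stdint.h>
#include <rte_cycles.h>

static bool hw_lock_taken;   /* stand-in for the lock register state */

static bool
try_hw_lock(void)
{
	if (hw_lock_taken)
		return false;
	hw_lock_taken = true;
	return true;
}

static void
release_hw_lock(void)
{
	hw_lock_taken = false;
}

static int
hw_timedlock(uint32_t timeout_us)
{
	uint64_t deadline = rte_get_timer_cycles() +
			(uint64_t)timeout_us * rte_get_timer_hz() / 1000000;

	do {
		if (try_hw_lock())
			return 0;          /* acquired */
		rte_delay_us_block(1);     /* brief back-off per attempt */
	} while (rte_get_timer_cycles() < deadline);

	return -1;                         /* timed out */
}

A caller pairs a successful hw_timedlock() with release_hw_lock() once the shared channel operation completes; folding the retry loop into one helper is what lets the call sites read like ordinary lock/unlock.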
* [PATCH v8 1/9] net/zxdh: add zxdh ethdev pmd driver 2024-10-30 9:01 ` [PATCH v8 0/9] net/zxdh: introduce net zxdh driver Junlong Wang @ 2024-10-30 9:01 ` Junlong Wang 2024-11-01 6:21 ` [PATCH v9 0/9] net/zxdh: introduce net zxdh driver Junlong Wang 2024-10-30 9:01 ` [PATCH v8 2/9] net/zxdh: add logging implementation Junlong Wang ` (7 subsequent siblings) 8 siblings, 1 reply; 225+ messages in thread From: Junlong Wang @ 2024-10-30 9:01 UTC (permalink / raw) To: dev; +Cc: wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 8771 bytes --] Add basic zxdh ethdev init and register PCI probe functions Update doc files. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- MAINTAINERS | 6 ++ doc/guides/nics/features/zxdh.ini | 9 +++ doc/guides/nics/index.rst | 1 + doc/guides/nics/zxdh.rst | 31 +++++++++ doc/guides/rel_notes/release_24_11.rst | 4 ++ drivers/net/meson.build | 1 + drivers/net/zxdh/meson.build | 18 +++++ drivers/net/zxdh/zxdh_ethdev.c | 92 ++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev.h | 44 ++++++++++++ 9 files changed, 206 insertions(+) create mode 100644 doc/guides/nics/features/zxdh.ini create mode 100644 doc/guides/nics/zxdh.rst create mode 100644 drivers/net/zxdh/meson.build create mode 100644 drivers/net/zxdh/zxdh_ethdev.c create mode 100644 drivers/net/zxdh/zxdh_ethdev.h diff --git a/MAINTAINERS b/MAINTAINERS index ab64230920..a998bf0fd5 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -1043,6 +1043,12 @@ F: drivers/net/virtio/ F: doc/guides/nics/virtio.rst F: doc/guides/nics/features/virtio*.ini +ZTE zxdh +M: Lijie Shan <shan.lijie@zte.com.cn> +F: drivers/net/zxdh/ +F: doc/guides/nics/zxdh.rst +F: doc/guides/nics/features/zxdh.ini + Wind River AVP M: Steven Webster <steven.webster@windriver.com> M: Matt Peters <matt.peters@windriver.com> diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini new file mode 100644 index 0000000000..05c8091ed7 --- /dev/null +++ b/doc/guides/nics/features/zxdh.ini @@ -0,0 +1,9 @@ +; +; Supported features of the 'zxdh' network poll mode driver. +; +; Refer to default.ini for the full list of available PMD features. +; +[Features] +Linux = Y +x86-64 = Y +ARMv8 = Y diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst index c14bc7988a..8e371ac4a5 100644 --- a/doc/guides/nics/index.rst +++ b/doc/guides/nics/index.rst @@ -69,3 +69,4 @@ Network Interface Controller Drivers vhost virtio vmxnet3 + zxdh diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst new file mode 100644 index 0000000000..920ff5175e --- /dev/null +++ b/doc/guides/nics/zxdh.rst @@ -0,0 +1,31 @@ +.. SPDX-License-Identifier: BSD-3-Clause + Copyright(c) 2024 ZTE Corporation. + +ZXDH Poll Mode Driver +====================== + +The ZXDH PMD (**librte_net_zxdh**) provides poll mode driver support +for 25/100 Gbps ZXDH NX Series Ethernet Controller based on +the ZTE Ethernet Controller E310/E312. + +- Learn about ZXDH NX Series Ethernet Controller NICs using + `<https://enterprise.zte.com.cn/sup-detail.html?id=271&suptype=1>`_. + +Features +-------- + +Features of the ZXDH PMD are: + +- Multi arch support: x86_64, ARMv8. + + +Driver compilation and testing +------------------------------ + +Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>` +for details. + +Limitations or Known issues +--------------------------- + +X86-32, Power8, ARMv7, RISC-V, Windows and BSD are not supported yet. 
diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst index fa4822d928..986a611e08 100644 --- a/doc/guides/rel_notes/release_24_11.rst +++ b/doc/guides/rel_notes/release_24_11.rst @@ -161,6 +161,10 @@ New Features * Added initialization of FPGA modules related to flow HW offload. * Added basic handling of the virtual queues. + * **Updated ZTE zxdh net driver.** + + * Added ethdev driver support for zxdh NX Series Ethernet Controller. + * **Added cryptodev queue pair reset support.** A new API ``rte_cryptodev_queue_pair_reset`` is added diff --git a/drivers/net/meson.build b/drivers/net/meson.build index fb6d34b782..0a12914534 100644 --- a/drivers/net/meson.build +++ b/drivers/net/meson.build @@ -62,6 +62,7 @@ drivers = [ 'vhost', 'virtio', 'vmxnet3', + 'zxdh', ] std_deps = ['ethdev', 'kvargs'] # 'ethdev' also pulls in mbuf, net, eal etc std_deps += ['bus_pci'] # very many PMDs depend on PCI, so make std diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build new file mode 100644 index 0000000000..932fb1c835 --- /dev/null +++ b/drivers/net/zxdh/meson.build @@ -0,0 +1,18 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2024 ZTE Corporation + +if not is_linux + build = false + reason = 'only supported on Linux' + subdir_done() +endif + +if not dpdk_conf.has('RTE_ARCH_X86_64') or not dpdk_conf.get('RTE_ARCH_64') + build = false + reason = 'only supported on x86_64 and aarch64' + subdir_done() +endif + +sources = files( + 'zxdh_ethdev.c', +) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c new file mode 100644 index 0000000000..5b6c9ec1bf --- /dev/null +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -0,0 +1,92 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#include <ethdev_pci.h> +#include <bus_pci_driver.h> +#include <rte_ethdev.h> + +#include "zxdh_ethdev.h" + +static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) +{ + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); + struct zxdh_hw *hw = eth_dev->data->dev_private; + int ret = 0; + + eth_dev->dev_ops = NULL; + + /* Allocate memory for storing MAC addresses */ + eth_dev->data->mac_addrs = rte_zmalloc("zxdh_mac", + ZXDH_MAX_MAC_ADDRS * RTE_ETHER_ADDR_LEN, 0); + if (eth_dev->data->mac_addrs == NULL) + return -ENOMEM; + + memset(hw, 0, sizeof(*hw)); + hw->bar_addr[0] = (uint64_t)pci_dev->mem_resource[0].addr; + if (hw->bar_addr[0] == 0) + return -EIO; + + hw->device_id = pci_dev->id.device_id; + hw->port_id = eth_dev->data->port_id; + hw->eth_dev = eth_dev; + hw->speed = RTE_ETH_SPEED_NUM_UNKNOWN; + hw->duplex = RTE_ETH_LINK_FULL_DUPLEX; + hw->is_pf = 0; + + if (pci_dev->id.device_id == ZXDH_E310_PF_DEVICEID || + pci_dev->id.device_id == ZXDH_E312_PF_DEVICEID) { + hw->is_pf = 1; + } + + return ret; +} + +static int zxdh_eth_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, + struct rte_pci_device *pci_dev) +{ + return rte_eth_dev_pci_generic_probe(pci_dev, + sizeof(struct zxdh_hw), + zxdh_eth_dev_init); +} + +static int zxdh_dev_close(struct rte_eth_dev *dev __rte_unused) +{ + int ret = 0; + + return ret; +} + +static int zxdh_eth_dev_uninit(struct rte_eth_dev *eth_dev) +{ + int ret = 0; + + ret = zxdh_dev_close(eth_dev); + + return ret; +} + +static int zxdh_eth_pci_remove(struct rte_pci_device *pci_dev) +{ + int ret = rte_eth_dev_pci_generic_remove(pci_dev, zxdh_eth_dev_uninit); + + return ret; +} + +static const struct rte_pci_id pci_id_zxdh_map[] = { + {RTE_PCI_DEVICE(ZXDH_PCI_VENDOR_ID, 
ZXDH_E310_PF_DEVICEID)}, + {RTE_PCI_DEVICE(ZXDH_PCI_VENDOR_ID, ZXDH_E310_VF_DEVICEID)}, + {RTE_PCI_DEVICE(ZXDH_PCI_VENDOR_ID, ZXDH_E312_PF_DEVICEID)}, + {RTE_PCI_DEVICE(ZXDH_PCI_VENDOR_ID, ZXDH_E312_VF_DEVICEID)}, + {.vendor_id = 0, /* sentinel */ }, +}; +static struct rte_pci_driver zxdh_pmd = { + .id_table = pci_id_zxdh_map, + .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC, + .probe = zxdh_eth_pci_probe, + .remove = zxdh_eth_pci_remove, +}; + +RTE_PMD_REGISTER_PCI(net_zxdh, zxdh_pmd); +RTE_PMD_REGISTER_PCI_TABLE(net_zxdh, pci_id_zxdh_map); +RTE_PMD_REGISTER_KMOD_DEP(net_zxdh, "* vfio-pci"); diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h new file mode 100644 index 0000000000..93375aea11 --- /dev/null +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -0,0 +1,44 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#ifndef ZXDH_ETHDEV_H +#define ZXDH_ETHDEV_H + +#include "ethdev_driver.h" + +#ifdef __cplusplus +extern "C" { +#endif + +/* ZXDH PCI vendor/device ID. */ +#define ZXDH_PCI_VENDOR_ID 0x1cf2 + +#define ZXDH_E310_PF_DEVICEID 0x8061 +#define ZXDH_E310_VF_DEVICEID 0x8062 +#define ZXDH_E312_PF_DEVICEID 0x8049 +#define ZXDH_E312_VF_DEVICEID 0x8060 + +#define ZXDH_MAX_UC_MAC_ADDRS 32 +#define ZXDH_MAX_MC_MAC_ADDRS 32 +#define ZXDH_MAX_MAC_ADDRS (ZXDH_MAX_UC_MAC_ADDRS + ZXDH_MAX_MC_MAC_ADDRS) + +#define ZXDH_NUM_BARS 2 + +struct zxdh_hw { + struct rte_eth_dev *eth_dev; + uint64_t bar_addr[ZXDH_NUM_BARS]; + + uint32_t speed; + uint16_t device_id; + uint16_t port_id; + + uint8_t duplex; + uint8_t is_pf; +}; + +#ifdef __cplusplus +} +#endif + +#endif /* ZXDH_ETHDEV_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 16486 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
* [PATCH v9 0/9] net/zxdh: introduce net zxdh driver 2024-10-30 9:01 ` [PATCH v8 1/9] net/zxdh: add zxdh ethdev pmd driver Junlong Wang @ 2024-11-01 6:21 ` Junlong Wang 2024-11-01 6:21 ` [PATCH v9 1/9] net/zxdh: add zxdh ethdev pmd driver Junlong Wang ` (12 more replies) 0 siblings, 13 replies; 225+ messages in thread From: Junlong Wang @ 2024-11-01 6:21 UTC (permalink / raw) To: dev; +Cc: wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 3269 bytes --] v9: - fix 'v8 3/9' patch to use PCI bus API and common PCI constants according to David Marchand's comments. v8: - fix flexible array and Waddress-of-packed-member errors. - all structs, enums, defines, etc. use the zxdh/ZXDH_ prefix. - use zxdh_try/release_lock, and move the loop into zxdh_timedlock, making the hardware lock follow the spinlock pattern. v7: - add release notes and modify zxdh.rst issues. - avoid using pthread and use rte_spinlock_lock. - using the prefix ZXDH_ before some definitions. - resolve issues according to Thomas's comments. v6: - Resolve ci/intel compilation issues. - fix meson.build indentation in earlier patch. v5: - split driver into multiple patches, part of the zxdh driver; later provide dev start/stop, queue_setup, npsdk_init, mac, vlan, rss, etc. - fix errors reported by scripts. - move the product link in zxdh.rst. - fix meson check to use RTE_ARCH_X86_64/RTE_ARCH_ARM64. - modify other comments according to Ferruh's comments. Junlong Wang (9): net/zxdh: add zxdh ethdev pmd driver net/zxdh: add logging implementation net/zxdh: add zxdh device pci init implementation net/zxdh: add msg chan and msg hwlock init net/zxdh: add msg chan enable implementation net/zxdh: add zxdh get device backend infos net/zxdh: add configure zxdh intr implementation net/zxdh: add zxdh dev infos get ops net/zxdh: add zxdh dev configure ops MAINTAINERS | 6 + doc/guides/nics/features/zxdh.ini | 9 + doc/guides/nics/index.rst | 1 + doc/guides/nics/zxdh.rst | 31 + doc/guides/rel_notes/release_24_11.rst | 4 + drivers/net/meson.build | 1 + drivers/net/zxdh/meson.build | 22 + drivers/net/zxdh/zxdh_common.c | 385 ++++++++++ drivers/net/zxdh/zxdh_common.h | 42 ++ drivers/net/zxdh/zxdh_ethdev.c | 994 +++++++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev.h | 102 +++ drivers/net/zxdh/zxdh_logs.h | 40 + drivers/net/zxdh/zxdh_msg.c | 986 ++++++++++++++++++++++++ drivers/net/zxdh/zxdh_msg.h | 229 ++++++ drivers/net/zxdh/zxdh_pci.c | 402 ++++++++++ drivers/net/zxdh/zxdh_pci.h | 175 +++++ drivers/net/zxdh/zxdh_queue.c | 123 +++ drivers/net/zxdh/zxdh_queue.h | 281 +++++++ drivers/net/zxdh/zxdh_rxtx.h | 55 ++ 19 files changed, 3888 insertions(+) create mode 100644 doc/guides/nics/features/zxdh.ini create mode 100644 doc/guides/nics/zxdh.rst create mode 100644 drivers/net/zxdh/meson.build create mode 100644 drivers/net/zxdh/zxdh_common.c create mode 100644 drivers/net/zxdh/zxdh_common.h create mode 100644 drivers/net/zxdh/zxdh_ethdev.c create mode 100644 drivers/net/zxdh/zxdh_ethdev.h create mode 100644 drivers/net/zxdh/zxdh_logs.h create mode 100644 drivers/net/zxdh/zxdh_msg.c create mode 100644 drivers/net/zxdh/zxdh_msg.h create mode 100644 drivers/net/zxdh/zxdh_pci.c create mode 100644 drivers/net/zxdh/zxdh_pci.h create mode 100644 drivers/net/zxdh/zxdh_queue.c create mode 100644 drivers/net/zxdh/zxdh_queue.h create mode 100644 drivers/net/zxdh/zxdh_rxtx.h -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 6482 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
* [PATCH v9 1/9] net/zxdh: add zxdh ethdev pmd driver 2024-11-01 6:21 ` [PATCH v9 0/9] net/zxdh: introduce net zxdh driver Junlong Wang @ 2024-11-01 6:21 ` Junlong Wang 2024-11-02 0:57 ` Ferruh Yigit 2024-11-04 11:58 ` [PATCH v10 00/10] net/zxdh: introduce net zxdh driver Junlong Wang 2024-11-01 6:21 ` [PATCH v9 2/9] net/zxdh: add logging implementation Junlong Wang ` (11 subsequent siblings) 12 siblings, 2 replies; 225+ messages in thread From: Junlong Wang @ 2024-11-01 6:21 UTC (permalink / raw) To: dev; +Cc: wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 8780 bytes --] Add basic zxdh ethdev init and register PCI probe functions Update doc files. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- MAINTAINERS | 6 ++ doc/guides/nics/features/zxdh.ini | 9 +++ doc/guides/nics/index.rst | 1 + doc/guides/nics/zxdh.rst | 31 +++++++++ doc/guides/rel_notes/release_24_11.rst | 4 ++ drivers/net/meson.build | 1 + drivers/net/zxdh/meson.build | 18 +++++ drivers/net/zxdh/zxdh_ethdev.c | 92 ++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev.h | 44 ++++++++++++ 9 files changed, 206 insertions(+) create mode 100644 doc/guides/nics/features/zxdh.ini create mode 100644 doc/guides/nics/zxdh.rst create mode 100644 drivers/net/zxdh/meson.build create mode 100644 drivers/net/zxdh/zxdh_ethdev.c create mode 100644 drivers/net/zxdh/zxdh_ethdev.h diff --git a/MAINTAINERS b/MAINTAINERS index 8919d78919..a5534be2ab 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -1051,6 +1051,12 @@ F: drivers/net/virtio/ F: doc/guides/nics/virtio.rst F: doc/guides/nics/features/virtio*.ini +ZTE zxdh +M: Lijie Shan <shan.lijie@zte.com.cn> +F: drivers/net/zxdh/ +F: doc/guides/nics/zxdh.rst +F: doc/guides/nics/features/zxdh.ini + Wind River AVP M: Steven Webster <steven.webster@windriver.com> M: Matt Peters <matt.peters@windriver.com> diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini new file mode 100644 index 0000000000..05c8091ed7 --- /dev/null +++ b/doc/guides/nics/features/zxdh.ini @@ -0,0 +1,9 @@ +; +; Supported features of the 'zxdh' network poll mode driver. +; +; Refer to default.ini for the full list of available PMD features. +; +[Features] +Linux = Y +x86-64 = Y +ARMv8 = Y diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst index c14bc7988a..8e371ac4a5 100644 --- a/doc/guides/nics/index.rst +++ b/doc/guides/nics/index.rst @@ -69,3 +69,4 @@ Network Interface Controller Drivers vhost virtio vmxnet3 + zxdh diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst new file mode 100644 index 0000000000..920ff5175e --- /dev/null +++ b/doc/guides/nics/zxdh.rst @@ -0,0 +1,31 @@ +.. SPDX-License-Identifier: BSD-3-Clause + Copyright(c) 2024 ZTE Corporation. + +ZXDH Poll Mode Driver +====================== + +The ZXDH PMD (**librte_net_zxdh**) provides poll mode driver support +for 25/100 Gbps ZXDH NX Series Ethernet Controller based on +the ZTE Ethernet Controller E310/E312. + +- Learn about ZXDH NX Series Ethernet Controller NICs using + `<https://enterprise.zte.com.cn/sup-detail.html?id=271&suptype=1>`_. + +Features +-------- + +Features of the ZXDH PMD are: + +- Multi arch support: x86_64, ARMv8. + + +Driver compilation and testing +------------------------------ + +Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>` +for details. + +Limitations or Known issues +--------------------------- + +X86-32, Power8, ARMv7, RISC-V, Windows and BSD are not supported yet. 
diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst index 15b64a1829..dd9048d561 100644 --- a/doc/guides/rel_notes/release_24_11.rst +++ b/doc/guides/rel_notes/release_24_11.rst @@ -162,6 +162,10 @@ New Features * Added initialization of FPGA modules related to flow HW offload. * Added basic handling of the virtual queues. +* **Updated ZTE zxdh net driver.** + + * Added ethdev driver support for zxdh NX Series Ethernet Controller. + * **Added cryptodev queue pair reset support.** A new API ``rte_cryptodev_queue_pair_reset`` is added diff --git a/drivers/net/meson.build b/drivers/net/meson.build index fb6d34b782..0a12914534 100644 --- a/drivers/net/meson.build +++ b/drivers/net/meson.build @@ -62,6 +62,7 @@ drivers = [ 'vhost', 'virtio', 'vmxnet3', + 'zxdh', ] std_deps = ['ethdev', 'kvargs'] # 'ethdev' also pulls in mbuf, net, eal etc std_deps += ['bus_pci'] # very many PMDs depend on PCI, so make std diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build new file mode 100644 index 0000000000..932fb1c835 --- /dev/null +++ b/drivers/net/zxdh/meson.build @@ -0,0 +1,18 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2024 ZTE Corporation + +if not is_linux + build = false + reason = 'only supported on Linux' + subdir_done() +endif + +if not dpdk_conf.has('RTE_ARCH_X86_64') or not dpdk_conf.get('RTE_ARCH_64') + build = false + reason = 'only supported on x86_64 and aarch64' + subdir_done() +endif + +sources = files( + 'zxdh_ethdev.c', +) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c new file mode 100644 index 0000000000..5b6c9ec1bf --- /dev/null +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -0,0 +1,92 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#include <ethdev_pci.h> +#include <bus_pci_driver.h> +#include <rte_ethdev.h> + +#include "zxdh_ethdev.h" + +static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) +{ + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); + struct zxdh_hw *hw = eth_dev->data->dev_private; + int ret = 0; + + eth_dev->dev_ops = NULL; + + /* Allocate memory for storing MAC addresses */ + eth_dev->data->mac_addrs = rte_zmalloc("zxdh_mac", + ZXDH_MAX_MAC_ADDRS * RTE_ETHER_ADDR_LEN, 0); + if (eth_dev->data->mac_addrs == NULL) + return -ENOMEM; + + memset(hw, 0, sizeof(*hw)); + hw->bar_addr[0] = (uint64_t)pci_dev->mem_resource[0].addr; + if (hw->bar_addr[0] == 0) + return -EIO; + + hw->device_id = pci_dev->id.device_id; + hw->port_id = eth_dev->data->port_id; + hw->eth_dev = eth_dev; + hw->speed = RTE_ETH_SPEED_NUM_UNKNOWN; + hw->duplex = RTE_ETH_LINK_FULL_DUPLEX; + hw->is_pf = 0; + + if (pci_dev->id.device_id == ZXDH_E310_PF_DEVICEID || + pci_dev->id.device_id == ZXDH_E312_PF_DEVICEID) { + hw->is_pf = 1; + } + + return ret; +} + +static int zxdh_eth_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, + struct rte_pci_device *pci_dev) +{ + return rte_eth_dev_pci_generic_probe(pci_dev, + sizeof(struct zxdh_hw), + zxdh_eth_dev_init); +} + +static int zxdh_dev_close(struct rte_eth_dev *dev __rte_unused) +{ + int ret = 0; + + return ret; +} + +static int zxdh_eth_dev_uninit(struct rte_eth_dev *eth_dev) +{ + int ret = 0; + + ret = zxdh_dev_close(eth_dev); + + return ret; +} + +static int zxdh_eth_pci_remove(struct rte_pci_device *pci_dev) +{ + int ret = rte_eth_dev_pci_generic_remove(pci_dev, zxdh_eth_dev_uninit); + + return ret; +} + +static const struct rte_pci_id pci_id_zxdh_map[] = { + {RTE_PCI_DEVICE(ZXDH_PCI_VENDOR_ID, 
ZXDH_E310_PF_DEVICEID)}, + {RTE_PCI_DEVICE(ZXDH_PCI_VENDOR_ID, ZXDH_E310_VF_DEVICEID)}, + {RTE_PCI_DEVICE(ZXDH_PCI_VENDOR_ID, ZXDH_E312_PF_DEVICEID)}, + {RTE_PCI_DEVICE(ZXDH_PCI_VENDOR_ID, ZXDH_E312_VF_DEVICEID)}, + {.vendor_id = 0, /* sentinel */ }, +}; +static struct rte_pci_driver zxdh_pmd = { + .id_table = pci_id_zxdh_map, + .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC, + .probe = zxdh_eth_pci_probe, + .remove = zxdh_eth_pci_remove, +}; + +RTE_PMD_REGISTER_PCI(net_zxdh, zxdh_pmd); +RTE_PMD_REGISTER_PCI_TABLE(net_zxdh, pci_id_zxdh_map); +RTE_PMD_REGISTER_KMOD_DEP(net_zxdh, "* vfio-pci"); diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h new file mode 100644 index 0000000000..a11e3624a9 --- /dev/null +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -0,0 +1,44 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#ifndef ZXDH_ETHDEV_H +#define ZXDH_ETHDEV_H + +#include "ethdev_driver.h" + +#ifdef __cplusplus +extern "C" { +#endif + +/* ZXDH PCI vendor/device ID. */ +#define ZXDH_PCI_VENDOR_ID 0x1cf2 + +#define ZXDH_E310_PF_DEVICEID 0x8061 +#define ZXDH_E310_VF_DEVICEID 0x8062 +#define ZXDH_E312_PF_DEVICEID 0x8049 +#define ZXDH_E312_VF_DEVICEID 0x8060 + +#define ZXDH_MAX_UC_MAC_ADDRS 32 +#define ZXDH_MAX_MC_MAC_ADDRS 32 +#define ZXDH_MAX_MAC_ADDRS (ZXDH_MAX_UC_MAC_ADDRS + ZXDH_MAX_MC_MAC_ADDRS) + +#define ZXDH_NUM_BARS 2 + +struct zxdh_hw { + struct rte_eth_dev *eth_dev; + uint64_t bar_addr[ZXDH_NUM_BARS]; + + uint32_t speed; + uint16_t device_id; + uint16_t port_id; + + uint8_t duplex; + uint8_t is_pf; +}; + +#ifdef __cplusplus +} +#endif + +#endif /* ZXDH_ETHDEV_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 16540 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
* Re: [PATCH v9 1/9] net/zxdh: add zxdh ethdev pmd driver 2024-11-01 6:21 ` [PATCH v9 1/9] net/zxdh: add zxdh ethdev pmd driver Junlong Wang @ 2024-11-02 0:57 ` Ferruh Yigit 2024-11-04 11:58 ` [PATCH v10 00/10] net/zxdh: introduce net zxdh driver Junlong Wang 1 sibling, 0 replies; 225+ messages in thread From: Ferruh Yigit @ 2024-11-02 0:57 UTC (permalink / raw) To: Junlong Wang, dev; +Cc: wang.yong19, Lijie Shan On 11/1/2024 6:21 AM, Junlong Wang wrote: > Add basic zxdh ethdev init and register PCI probe functions > Update doc files. > > Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> > --- > MAINTAINERS | 6 ++ > doc/guides/nics/features/zxdh.ini | 9 +++ > doc/guides/nics/index.rst | 1 + > doc/guides/nics/zxdh.rst | 31 +++++++++ > doc/guides/rel_notes/release_24_11.rst | 4 ++ > drivers/net/meson.build | 1 + > drivers/net/zxdh/meson.build | 18 +++++ > drivers/net/zxdh/zxdh_ethdev.c | 92 ++++++++++++++++++++++++++ > drivers/net/zxdh/zxdh_ethdev.h | 44 ++++++++++++ > 9 files changed, 206 insertions(+) > create mode 100644 doc/guides/nics/features/zxdh.ini > create mode 100644 doc/guides/nics/zxdh.rst > create mode 100644 drivers/net/zxdh/meson.build > create mode 100644 drivers/net/zxdh/zxdh_ethdev.c > create mode 100644 drivers/net/zxdh/zxdh_ethdev.h > > diff --git a/MAINTAINERS b/MAINTAINERS > index 8919d78919..a5534be2ab 100644 > --- a/MAINTAINERS > +++ b/MAINTAINERS > @@ -1051,6 +1051,12 @@ F: drivers/net/virtio/ > F: doc/guides/nics/virtio.rst > F: doc/guides/nics/features/virtio*.ini > > +ZTE zxdh > +M: Lijie Shan <shan.lijie@zte.com.cn> > You have your sign-off in the patch series, but adding someone else as maintainer? We need someone who has technical expertise on the code; is there a reason not to add your name as maintainer? > +F: drivers/net/zxdh/ > +F: doc/guides/nics/zxdh.rst > +F: doc/guides/nics/features/zxdh.ini > + > Minor comment, rest looks good to me: Please move this below "Wind River", this list is alphabetically sorted on company name. Last bit of the list is virtual drivers without specific company associated with them. <...> > diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c > new file mode 100644 > index 0000000000..5b6c9ec1bf > --- /dev/null > +++ b/drivers/net/zxdh/zxdh_ethdev.c > @@ -0,0 +1,92 @@ > +/* SPDX-License-Identifier: BSD-3-Clause > + * Copyright(c) 2024 ZTE Corporation > + */ > + > +#include <ethdev_pci.h> > +#include <bus_pci_driver.h> > +#include <rte_ethdev.h> > + > +#include "zxdh_ethdev.h" > + > +static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) > +{ > DPDK syntax is to have the return value on a separate line, like: static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) { This is for all files in this series, can you please update all? ^ permalink raw reply [flat|nested] 225+ messages in thread
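The definition-style remark above is purely mechanical but easy to get wrong at scale, so one concrete before/after may help; the function name is the one from the patch, and the body is a placeholder.

/* Sketch of the function-definition style requested in review: the
 * return type sits on its own line, the name starts the next line.
 */
struct rte_eth_dev;   /* opaque here; the real type lives in ethdev */

/* Before (not DPDK style):
 *
 *     static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) { ... }
 *
 * After (DPDK style):
 */
static int
zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
{
	(void)eth_dev;    /* placeholder body for the sketch */
	return 0;
}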
* [PATCH v10 00/10] net/zxdh: introduce net zxdh driver 2024-11-01 6:21 ` [PATCH v9 1/9] net/zxdh: add zxdh ethdev pmd driver Junlong Wang 2024-11-02 0:57 ` Ferruh Yigit @ 2024-11-04 11:58 ` Junlong Wang 2024-11-04 11:58 ` [PATCH v10 01/10] net/zxdh: add zxdh ethdev pmd driver Junlong Wang ` (11 more replies) 1 sibling, 12 replies; 225+ messages in thread From: Junlong Wang @ 2024-11-04 11:58 UTC (permalink / raw) To: ferruh.yigit; +Cc: dev, wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 3670 bytes --] v10: - move zxdh under Wind River in MAINTAINERS and add myself as the maintainer, and add experimental into MAINTAINERS, driver file, and release notes. - changed to the DPDK syntax of having the return value on a separate line. - Add a keyword to log types so they can be distinguished. - using regular comments (non doxygen syntax). - fix other issues. v9: - fix 'v8 3/9' patch to use PCI bus API and common PCI constants according to David Marchand's comments. v8: - fix flexible array and Waddress-of-packed-member errors. - all structs, enums, defines, etc. use the zxdh/ZXDH_ prefix. - use zxdh_try/release_lock, and move the loop into zxdh_timedlock, making the hardware lock follow the spinlock pattern. v7: - add release notes and modify zxdh.rst issues. - avoid using pthread and use rte_spinlock_lock. - using the prefix ZXDH_ before some definitions. - resolve issues according to Thomas's comments. v6: - Resolve ci/intel compilation issues. - fix meson.build indentation in earlier patch. v5: - split driver into multiple patches, part of the zxdh driver; later provide dev start/stop, queue_setup, npsdk_init, mac, vlan, rss, etc. - fix errors reported by scripts. - move the product link in zxdh.rst. - fix meson check to use RTE_ARCH_X86_64/RTE_ARCH_ARM64. - modify other comments according to Ferruh's comments. 
Junlong Wang (10): net/zxdh: add zxdh ethdev pmd driver net/zxdh: add logging implementation net/zxdh: add zxdh device pci init implementation net/zxdh: add msg chan and msg hwlock init net/zxdh: add msg chan enable implementation net/zxdh: add zxdh get device backend infos net/zxdh: add configure zxdh intr implementation net/zxdh: add zxdh dev infos get ops net/zxdh: add zxdh dev configure ops net/zxdh: add zxdh dev close ops MAINTAINERS | 7 + doc/guides/nics/features/zxdh.ini | 9 + doc/guides/nics/index.rst | 1 + doc/guides/nics/zxdh.rst | 33 + doc/guides/rel_notes/release_24_11.rst | 6 + drivers/net/meson.build | 1 + drivers/net/zxdh/meson.build | 22 + drivers/net/zxdh/zxdh_common.c | 400 +++++++++ drivers/net/zxdh/zxdh_common.h | 41 + drivers/net/zxdh/zxdh_ethdev.c | 1041 ++++++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev.h | 102 +++ drivers/net/zxdh/zxdh_logs.h | 34 + drivers/net/zxdh/zxdh_msg.c | 1037 +++++++++++++++++++++++ drivers/net/zxdh/zxdh_msg.h | 228 ++++++ drivers/net/zxdh/zxdh_pci.c | 420 ++++++++++ drivers/net/zxdh/zxdh_pci.h | 177 ++++ drivers/net/zxdh/zxdh_queue.c | 127 +++ drivers/net/zxdh/zxdh_queue.h | 285 +++++++ drivers/net/zxdh/zxdh_rxtx.h | 55 ++ 19 files changed, 4026 insertions(+) create mode 100644 doc/guides/nics/features/zxdh.ini create mode 100644 doc/guides/nics/zxdh.rst create mode 100644 drivers/net/zxdh/meson.build create mode 100644 drivers/net/zxdh/zxdh_common.c create mode 100644 drivers/net/zxdh/zxdh_common.h create mode 100644 drivers/net/zxdh/zxdh_ethdev.c create mode 100644 drivers/net/zxdh/zxdh_ethdev.h create mode 100644 drivers/net/zxdh/zxdh_logs.h create mode 100644 drivers/net/zxdh/zxdh_msg.c create mode 100644 drivers/net/zxdh/zxdh_msg.h create mode 100644 drivers/net/zxdh/zxdh_pci.c create mode 100644 drivers/net/zxdh/zxdh_pci.h create mode 100644 drivers/net/zxdh/zxdh_queue.c create mode 100644 drivers/net/zxdh/zxdh_queue.h create mode 100644 drivers/net/zxdh/zxdh_rxtx.h -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 7350 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
* [PATCH v10 01/10] net/zxdh: add zxdh ethdev pmd driver 2024-11-04 11:58 ` [PATCH v10 00/10] net/zxdh: introduce net zxdh driver Junlong Wang @ 2024-11-04 11:58 ` Junlong Wang 2024-11-07 10:32 ` [PATCH v10 00/10] net/zxdh: introduce net zxdh driver Junlong Wang 2024-12-06 5:57 ` [PATCH v1 00/15] net/zxdh: updated " Junlong Wang 2024-11-04 11:58 ` [PATCH v10 02/10] net/zxdh: add logging implementation Junlong Wang ` (10 subsequent siblings) 11 siblings, 2 replies; 225+ messages in thread From: Junlong Wang @ 2024-11-04 11:58 UTC (permalink / raw) To: ferruh.yigit; +Cc: dev, wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 8888 bytes --] Add basic zxdh ethdev init and register PCI probe functions Update doc files. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- MAINTAINERS | 7 ++ doc/guides/nics/features/zxdh.ini | 9 +++ doc/guides/nics/index.rst | 1 + doc/guides/nics/zxdh.rst | 33 +++++++++ doc/guides/rel_notes/release_24_11.rst | 6 ++ drivers/net/meson.build | 1 + drivers/net/zxdh/meson.build | 18 +++++ drivers/net/zxdh/zxdh_ethdev.c | 97 ++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev.h | 44 ++++++++++++ 9 files changed, 216 insertions(+) create mode 100644 doc/guides/nics/features/zxdh.ini create mode 100644 doc/guides/nics/zxdh.rst create mode 100644 drivers/net/zxdh/meson.build create mode 100644 drivers/net/zxdh/zxdh_ethdev.c create mode 100644 drivers/net/zxdh/zxdh_ethdev.h diff --git a/MAINTAINERS b/MAINTAINERS index 8919d78919..9a812b3632 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -1058,6 +1058,13 @@ F: drivers/net/avp/ F: doc/guides/nics/avp.rst F: doc/guides/nics/features/avp.ini +ZTE zxdh +M: Junlong Wang <wang.junlong1@zte.com.cn> +M: Lijie Shan <shan.lijie@zte.com.cn> +F: drivers/net/zxdh/ +F: doc/guides/nics/zxdh.rst +F: doc/guides/nics/features/zxdh.ini + PCAP PMD F: drivers/net/pcap/ F: doc/guides/nics/pcap_ring.rst diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini new file mode 100644 index 0000000000..05c8091ed7 --- /dev/null +++ b/doc/guides/nics/features/zxdh.ini @@ -0,0 +1,9 @@ +; +; Supported features of the 'zxdh' network poll mode driver. +; +; Refer to default.ini for the full list of available PMD features. +; +[Features] +Linux = Y +x86-64 = Y +ARMv8 = Y diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst index c14bc7988a..8e371ac4a5 100644 --- a/doc/guides/nics/index.rst +++ b/doc/guides/nics/index.rst @@ -69,3 +69,4 @@ Network Interface Controller Drivers vhost virtio vmxnet3 + zxdh diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst new file mode 100644 index 0000000000..1167d4c6be --- /dev/null +++ b/doc/guides/nics/zxdh.rst @@ -0,0 +1,33 @@ +.. SPDX-License-Identifier: BSD-3-Clause + Copyright(c) 2024 ZTE Corporation. + +ZXDH Poll Mode Driver +====================== + +The ZXDH PMD (**librte_net_zxdh**) provides poll mode driver support +for 25/100 Gbps ZXDH NX Series Ethernet Controller based on +the ZTE Ethernet Controller E310/E312. + +- Learn about ZXDH NX Series Ethernet Controller NICs using + `<https://enterprise.zte.com.cn/sup-detail.html?id=271&suptype=1>`_. + + +Features +-------- + +Features of the ZXDH PMD are: + +- Multi arch support: x86_64, ARMv8. + + +Driver compilation and testing +------------------------------ + +Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>` +for details. 
+ +Limitations or Known issues +--------------------------- + +Datapath and some eth_dev_ops are not supported and will be provided later. +X86-32, Power8, ARMv7, RISC-V, Windows and BSD are not supported yet. diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst index 517085f0b3..66b970e036 100644 --- a/doc/guides/rel_notes/release_24_11.rst +++ b/doc/guides/rel_notes/release_24_11.rst @@ -170,6 +170,12 @@ New Features * Added asynchronous flow support * Added MTU update +* **Updated ZTE zxdh net driver [EXPERIMENTAL].** + + * Added ethdev driver support for zxdh NX Series Ethernet Controller. + - Ability to initialize the NIC + - Does not support datapath + * **Added cryptodev queue pair reset support.** A new API ``rte_cryptodev_queue_pair_reset`` is added diff --git a/drivers/net/meson.build b/drivers/net/meson.build index fb6d34b782..0a12914534 100644 --- a/drivers/net/meson.build +++ b/drivers/net/meson.build @@ -62,6 +62,7 @@ drivers = [ 'vhost', 'virtio', 'vmxnet3', + 'zxdh', ] std_deps = ['ethdev', 'kvargs'] # 'ethdev' also pulls in mbuf, net, eal etc std_deps += ['bus_pci'] # very many PMDs depend on PCI, so make std diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build new file mode 100644 index 0000000000..932fb1c835 --- /dev/null +++ b/drivers/net/zxdh/meson.build @@ -0,0 +1,18 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2024 ZTE Corporation + +if not is_linux + build = false + reason = 'only supported on Linux' + subdir_done() +endif + +if not dpdk_conf.has('RTE_ARCH_X86_64') or not dpdk_conf.get('RTE_ARCH_64') + build = false + reason = 'only supported on x86_64 and aarch64' + subdir_done() +endif + +sources = files( + 'zxdh_ethdev.c', +) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c new file mode 100644 index 0000000000..8689e56309 --- /dev/null +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -0,0 +1,97 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#include <ethdev_pci.h> +#include <bus_pci_driver.h> +#include <rte_ethdev.h> + +#include "zxdh_ethdev.h" + +static int +zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) +{ + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); + struct zxdh_hw *hw = eth_dev->data->dev_private; + int ret = 0; + + eth_dev->dev_ops = NULL; + + /* Allocate memory for storing MAC addresses */ + eth_dev->data->mac_addrs = rte_zmalloc("zxdh_mac", + ZXDH_MAX_MAC_ADDRS * RTE_ETHER_ADDR_LEN, 0); + if (eth_dev->data->mac_addrs == NULL) + return -ENOMEM; + + memset(hw, 0, sizeof(*hw)); + hw->bar_addr[0] = (uint64_t)pci_dev->mem_resource[0].addr; + if (hw->bar_addr[0] == 0) + return -EIO; + + hw->device_id = pci_dev->id.device_id; + hw->port_id = eth_dev->data->port_id; + hw->eth_dev = eth_dev; + hw->speed = RTE_ETH_SPEED_NUM_UNKNOWN; + hw->duplex = RTE_ETH_LINK_FULL_DUPLEX; + hw->is_pf = 0; + + if (pci_dev->id.device_id == ZXDH_E310_PF_DEVICEID || + pci_dev->id.device_id == ZXDH_E312_PF_DEVICEID) { + hw->is_pf = 1; + } + + return ret; +} + +static int +zxdh_eth_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, + struct rte_pci_device *pci_dev) +{ + return rte_eth_dev_pci_generic_probe(pci_dev, + sizeof(struct zxdh_hw), + zxdh_eth_dev_init); +} + +static int +zxdh_dev_close(struct rte_eth_dev *dev __rte_unused) +{ + int ret = 0; + + return ret; +} + +static int +zxdh_eth_dev_uninit(struct rte_eth_dev *eth_dev) +{ + int ret = 0; + + ret = zxdh_dev_close(eth_dev); + + return ret; +} + +static int 
+zxdh_eth_pci_remove(struct rte_pci_device *pci_dev) +{ + int ret = rte_eth_dev_pci_generic_remove(pci_dev, zxdh_eth_dev_uninit); + + return ret; +} + +static const struct rte_pci_id pci_id_zxdh_map[] = { + {RTE_PCI_DEVICE(ZXDH_PCI_VENDOR_ID, ZXDH_E310_PF_DEVICEID)}, + {RTE_PCI_DEVICE(ZXDH_PCI_VENDOR_ID, ZXDH_E310_VF_DEVICEID)}, + {RTE_PCI_DEVICE(ZXDH_PCI_VENDOR_ID, ZXDH_E312_PF_DEVICEID)}, + {RTE_PCI_DEVICE(ZXDH_PCI_VENDOR_ID, ZXDH_E312_VF_DEVICEID)}, + {.vendor_id = 0, /* sentinel */ }, +}; +static struct rte_pci_driver zxdh_pmd = { + .id_table = pci_id_zxdh_map, + .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC, + .probe = zxdh_eth_pci_probe, + .remove = zxdh_eth_pci_remove, +}; + +RTE_PMD_REGISTER_PCI(net_zxdh, zxdh_pmd); +RTE_PMD_REGISTER_PCI_TABLE(net_zxdh, pci_id_zxdh_map); +RTE_PMD_REGISTER_KMOD_DEP(net_zxdh, "* vfio-pci"); diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h new file mode 100644 index 0000000000..a11e3624a9 --- /dev/null +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -0,0 +1,44 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#ifndef ZXDH_ETHDEV_H +#define ZXDH_ETHDEV_H + +#include "ethdev_driver.h" + +#ifdef __cplusplus +extern "C" { +#endif + +/* ZXDH PCI vendor/device ID. */ +#define ZXDH_PCI_VENDOR_ID 0x1cf2 + +#define ZXDH_E310_PF_DEVICEID 0x8061 +#define ZXDH_E310_VF_DEVICEID 0x8062 +#define ZXDH_E312_PF_DEVICEID 0x8049 +#define ZXDH_E312_VF_DEVICEID 0x8060 + +#define ZXDH_MAX_UC_MAC_ADDRS 32 +#define ZXDH_MAX_MC_MAC_ADDRS 32 +#define ZXDH_MAX_MAC_ADDRS (ZXDH_MAX_UC_MAC_ADDRS + ZXDH_MAX_MC_MAC_ADDRS) + +#define ZXDH_NUM_BARS 2 + +struct zxdh_hw { + struct rte_eth_dev *eth_dev; + uint64_t bar_addr[ZXDH_NUM_BARS]; + + uint32_t speed; + uint16_t device_id; + uint16_t port_id; + + uint8_t duplex; + uint8_t is_pf; +}; + +#ifdef __cplusplus +} +#endif + +#endif /* ZXDH_ETHDEV_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 16718 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
* Re: [PATCH v10 00/10] net/zxdh: introduce net zxdh driver
  2024-11-04 11:58 ` [PATCH v10 01/10] net/zxdh: add zxdh ethdev pmd driver Junlong Wang
@ 2024-11-07 10:32 ` Junlong Wang
  2024-11-12  0:42   ` Thomas Monjalon
  2024-12-06  5:57 ` [PATCH v1 00/15] net/zxdh: updated " Junlong Wang
  1 sibling, 1 reply; 225+ messages in thread
From: Junlong Wang @ 2024-11-07 10:32 UTC (permalink / raw)
To: ferruh.yigit; +Cc: dev, shan.lijie, wang.yong19

[-- Attachment #1.1.1: Type: text/plain, Size: 2438 bytes --]

>On 11/6/2024 12:40 AM, Ferruh Yigit wrote:
>> On 11/4/2024 11:58 AM, Junlong Wang wrote:
>>> v10:
>>>   - move zxdh under Wind River in MAINTAINERS, add myself as the maintainer,
>>>     and mark the driver experimental in MAINTAINERS, the driver file and the
>>>     release notes.
>>>   - changed to the DPDK syntax of putting the return value on a separate line.
>>>   - added a keyword to the log types so they can be distinguished.
>>>   - use regular comments (non-doxygen syntax).
>>>   - fix other issues.
>>>
>>> v9:
>>>   - fix the 'v8 3/9' patch to use the PCI bus API
>>>     and common PCI constants, according to David Marchand's comments.
>>>
>>> v8:
>>>   - fix flexible-array and -Waddress-of-packed-member errors.
>>>   - all structs, enums, defines, etc. use the zxdh/ZXDH_ prefix.
>>>   - use zxdh_try/release_lock, and move the loop into zxdh_timedlock,
>>>     making the hardware lock follow the spinlock pattern.
>>>
>>> v7:
>>>   - add release notes and fix zxdh.rst issues.
>>>   - avoid pthread; use rte_spinlock_lock instead.
>>>   - use the ZXDH_ prefix for some definitions.
>>>   - resolve issues according to Thomas's comments.
>>>
>>> v6:
>>>   - resolve CI/Intel compilation issues.
>>>   - fix meson.build indentation in an earlier patch.
>>>
>>> V5:
>>>   - split the driver into multiple patches; this is part of the zxdh driver,
>>>     dev start/stop, queue_setup, npsdk_init, mac, vlan, rss, etc. will be
>>>     provided later.
>>>   - fix errors reported by scripts.
>>>   - move the product link in zxdh.rst.
>>>   - fix the meson check to use RTE_ARCH_X86_64/RTE_ARCH_ARM64.
>>>   - modify other comments according to Ferruh's comments.
>>>
>>> Junlong Wang (10):
>>>   net/zxdh: add zxdh ethdev pmd driver
>>>   net/zxdh: add logging implementation
>>>   net/zxdh: add zxdh device pci init implementation
>>>   net/zxdh: add msg chan and msg hwlock init
>>>   net/zxdh: add msg chan enable implementation
>>>   net/zxdh: add zxdh get device backend infos
>>>   net/zxdh: add configure zxdh intr implementation
>>>   net/zxdh: add zxdh dev infos get ops
>>>   net/zxdh: add zxdh dev configure ops
>>>   net/zxdh: add zxdh dev close ops
>>>
>>
>> For series,
>> Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
>>
>> Series applied to dpdk-next-net/main, thanks.
>>

> Hi Junlong,
> It seems we missed marking the driver as experimental, I will update it in
> next-net.

Sorry, I was too careless; I will pay more attention next time.
Thank you very much.

[-- Attachment #1.1.2: Type: text/html , Size: 5195 bytes --]

^ permalink raw reply	[flat|nested] 225+ messages in thread
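The v8 note above ("move the loop into zxdh_timedlock, making the hardware lock follow the spinlock pattern") refers to bounding the hardware-lock acquire loop with a busy-wait instead of pthread primitives. A minimal sketch of such a timed acquire follows, assuming a zxdh_try_lock() helper that returns 0 once the hardware lock register is taken; the helper name and the timeout handling are illustrative assumptions, not the driver's exact API.

#include <errno.h>
#include <stdint.h>
#include <rte_cycles.h>

/* Assumed helper: returns 0 when the hardware lock register was acquired. */
int zxdh_try_lock(struct zxdh_hw *hw);

static int
zxdh_timedlock_sketch(struct zxdh_hw *hw, uint32_t timeout_us)
{
	uint32_t waited_us = 0;

	/* Spin on the hardware lock with a bounded busy-wait, spinlock style. */
	while (zxdh_try_lock(hw) != 0) {
		if (waited_us++ >= timeout_us)
			return -ETIMEDOUT;
		rte_delay_us_block(1);	/* 1 us pause between attempts */
	}
	return 0;
}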
* Re: [PATCH v10 00/10] net/zxdh: introduce net zxdh driver 2024-11-07 10:32 ` [PATCH v10 00/10] net/zxdh: introduce net zxdh driver Junlong Wang @ 2024-11-12 0:42 ` Thomas Monjalon 0 siblings, 0 replies; 225+ messages in thread From: Thomas Monjalon @ 2024-11-12 0:42 UTC (permalink / raw) To: ferruh.yigit, Junlong Wang; +Cc: dev, shan.lijie, wang.yong19 07/11/2024 11:32, Junlong Wang: > >On 11/6/2024 12:40 AM, Ferruh Yigit wrote: > >> For series, > >> Acked-by: Ferruh Yigit <ferruh.yigit@amd.com> > >> > >> Series applied to dpdk-next-net/main, thanks. > >> > > > Hi Junlong, > > > It seems we missed to mark driver as experimental, I will update it in > > next-net. > > Sorry, I'm too careless, I will pay more attention next time. > Thank you very much. I'm removing the useless #ifdef __cplusplus while pulling in main, as we are trying to clean them in the repo. ^ permalink raw reply [flat|nested] 225+ messages in thread
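For context, the guard being dropped is the standard C++ linkage wrapper seen in zxdh_ethdev.h earlier in the thread; internal PMD headers are never included from C++ code, so the wrapper is dead weight:

#ifdef __cplusplus
extern "C" {
#endif

/* ... header declarations ... */

#ifdef __cplusplus
}
#endif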
* [PATCH v1 00/15] net/zxdh: updated net zxdh driver
  2024-11-04 11:58 ` [PATCH v10 01/10] net/zxdh: add zxdh ethdev pmd driver Junlong Wang
  2024-11-07 10:32 ` [PATCH v10 00/10] net/zxdh: introduce net zxdh driver Junlong Wang
@ 2024-12-06  5:57 ` Junlong Wang
  2024-12-06  5:57   ` [PATCH v1 01/15] net/zxdh: zxdh np init implementation Junlong Wang
  ` (14 more replies)
  1 sibling, 15 replies; 225+ messages in thread
From: Junlong Wang @ 2024-12-06 5:57 UTC (permalink / raw)
To: ferruh.yigit; +Cc: dev, Junlong Wang

[-- Attachment #1.1.1: Type: text/plain, Size: 2383 bytes --]

V1:
  - updated the net zxdh driver:
    provided insert/delete/get table code funcs;
    provided link/mac/vlan/promiscuous/rss/mtu ops.

Junlong Wang (15):
  net/zxdh: zxdh np init implementation
  net/zxdh: zxdh np uninit implementation
  net/zxdh: port tables init implementations
  net/zxdh: port tables uninit implementations
  net/zxdh: rx/tx queue setup and intr enable
  net/zxdh: dev start/stop ops implementations
  net/zxdh: provided dev simple tx implementations
  net/zxdh: provided dev simple rx implementations
  net/zxdh: link info update, set link up/down
  net/zxdh: mac set/add/remove ops implementations
  net/zxdh: promiscuous/allmulticast ops implementations
  net/zxdh: vlan filter, vlan offload ops implementations
  net/zxdh: rss hash config/update, reta update/get
  net/zxdh: basic stats ops implementations
  net/zxdh: mtu update ops implementations

 doc/guides/nics/features/zxdh.ini  |   17 +
 doc/guides/nics/zxdh.rst           |   17 +
 drivers/net/zxdh/meson.build       |    4 +
 drivers/net/zxdh/zxdh_common.c     |   24 +
 drivers/net/zxdh/zxdh_common.h     |    1 +
 drivers/net/zxdh/zxdh_ethdev.c     |  556 +++++++-
 drivers/net/zxdh/zxdh_ethdev.h     |   36 +
 drivers/net/zxdh/zxdh_ethdev_ops.c | 1500 +++++++++++++++++++
 drivers/net/zxdh/zxdh_ethdev_ops.h |   60 +
 drivers/net/zxdh/zxdh_msg.c        |  164 +++
 drivers/net/zxdh/zxdh_msg.h        |  231 +++
 drivers/net/zxdh/zxdh_np.c         | 2144 ++++++++++++++++++++++++++++
 drivers/net/zxdh/zxdh_np.h         |  579 ++++++++
 drivers/net/zxdh/zxdh_pci.c        |   26 +-
 drivers/net/zxdh/zxdh_pci.h        |    9 +-
 drivers/net/zxdh/zxdh_queue.c      |  244 +++-
 drivers/net/zxdh/zxdh_queue.h      |  142 +-
 drivers/net/zxdh/zxdh_rxtx.c       |  802 +++++++++++
 drivers/net/zxdh/zxdh_rxtx.h       |    6 +
 drivers/net/zxdh/zxdh_tables.c     |  782 ++++++++++
 drivers/net/zxdh/zxdh_tables.h     |  231 +++
 21 files changed, 7541 insertions(+), 34 deletions(-)
 create mode 100644 drivers/net/zxdh/zxdh_ethdev_ops.c
 create mode 100644 drivers/net/zxdh/zxdh_ethdev_ops.h
 create mode 100644 drivers/net/zxdh/zxdh_np.c
 create mode 100644 drivers/net/zxdh/zxdh_np.h
 create mode 100644 drivers/net/zxdh/zxdh_rxtx.c
 create mode 100644 drivers/net/zxdh/zxdh_tables.c
 create mode 100644 drivers/net/zxdh/zxdh_tables.h

-- 
2.27.0

[-- Attachment #1.1.2: Type: text/html , Size: 4505 bytes --]

^ permalink raw reply	[flat|nested] 225+ messages in thread
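The ops listed in the cover letter above are ultimately exposed through a single eth_dev_ops table. A sketch of how such a table is wired up is below; the eth_dev_ops field names are the real ethdev ones, but the zxdh_dev_* callback names are assumptions inferred from the patch titles, not verified symbols from the series.

#include <ethdev_driver.h>

static const struct eth_dev_ops zxdh_eth_dev_ops_sketch = {
	.rx_queue_setup      = zxdh_dev_rx_queue_setup,       /* patch 05 */
	.tx_queue_setup      = zxdh_dev_tx_queue_setup,       /* patch 05 */
	.dev_start           = zxdh_dev_start,                /* patch 06 */
	.dev_stop            = zxdh_dev_stop,                 /* patch 06 */
	.link_update         = zxdh_dev_link_update,          /* patch 09 */
	.dev_set_link_up     = zxdh_dev_set_link_up,          /* patch 09 */
	.dev_set_link_down   = zxdh_dev_set_link_down,        /* patch 09 */
	.mac_addr_set        = zxdh_dev_mac_addr_set,         /* patch 10 */
	.mac_addr_add        = zxdh_dev_mac_addr_add,         /* patch 10 */
	.mac_addr_remove     = zxdh_dev_mac_addr_remove,      /* patch 10 */
	.promiscuous_enable  = zxdh_dev_promiscuous_enable,   /* patch 11 */
	.allmulticast_enable = zxdh_dev_allmulticast_enable,  /* patch 11 */
	.vlan_filter_set     = zxdh_dev_vlan_filter_set,      /* patch 12 */
	.vlan_offload_set    = zxdh_dev_vlan_offload_set,     /* patch 12 */
	.rss_hash_update     = zxdh_rss_hash_update,          /* patch 13 */
	.rss_hash_conf_get   = zxdh_rss_hash_conf_get,        /* patch 13 */
	.reta_update         = zxdh_dev_rss_reta_update,      /* patch 13 */
	.reta_query          = zxdh_dev_rss_reta_query,       /* patch 13 */
	.stats_get           = zxdh_dev_stats_get,            /* patch 14 */
	.mtu_set             = zxdh_dev_mtu_set,              /* patch 15 */
};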
* [PATCH v1 01/15] net/zxdh: zxdh np init implementation 2024-12-06 5:57 ` [PATCH v1 00/15] net/zxdh: updated " Junlong Wang @ 2024-12-06 5:57 ` Junlong Wang 2024-12-10 5:53 ` [PATCH v2 00/15] net/zxdh: updated net zxdh driver Junlong Wang 2024-12-06 5:57 ` [PATCH v1 02/15] net/zxdh: zxdh np uninit implementation Junlong Wang ` (13 subsequent siblings) 14 siblings, 1 reply; 225+ messages in thread From: Junlong Wang @ 2024-12-06 5:57 UTC (permalink / raw) To: ferruh.yigit; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 37197 bytes --] (np)network Processor initialize resources in host, and initialize a channel for some tables insert/get/del. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/meson.build | 1 + drivers/net/zxdh/zxdh_ethdev.c | 236 ++++++++++++++++++++-- drivers/net/zxdh/zxdh_ethdev.h | 28 +++ drivers/net/zxdh/zxdh_msg.c | 45 +++++ drivers/net/zxdh/zxdh_msg.h | 37 ++++ drivers/net/zxdh/zxdh_np.c | 347 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_np.h | 198 +++++++++++++++++++ drivers/net/zxdh/zxdh_pci.c | 2 +- drivers/net/zxdh/zxdh_pci.h | 6 +- drivers/net/zxdh/zxdh_queue.c | 2 +- drivers/net/zxdh/zxdh_queue.h | 14 +- 11 files changed, 883 insertions(+), 33 deletions(-) create mode 100644 drivers/net/zxdh/zxdh_np.c create mode 100644 drivers/net/zxdh/zxdh_np.h diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build index c9960f4c73..ab24a3145c 100644 --- a/drivers/net/zxdh/meson.build +++ b/drivers/net/zxdh/meson.build @@ -19,4 +19,5 @@ sources = files( 'zxdh_msg.c', 'zxdh_pci.c', 'zxdh_queue.c', + 'zxdh_np.c', ) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index c786198535..740e579da8 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -5,6 +5,7 @@ #include <ethdev_pci.h> #include <bus_pci_driver.h> #include <rte_ethdev.h> +#include <rte_malloc.h> #include "zxdh_ethdev.h" #include "zxdh_logs.h" @@ -12,8 +13,15 @@ #include "zxdh_msg.h" #include "zxdh_common.h" #include "zxdh_queue.h" +#include "zxdh_np.h" struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; +struct zxdh_shared_data *zxdh_shared_data; +const char *ZXDH_PMD_SHARED_DATA_MZ = "zxdh_pmd_shared_data"; +rte_spinlock_t zxdh_shared_data_lock = RTE_SPINLOCK_INITIALIZER; +struct zxdh_dtb_shared_data g_dtb_data = {0}; + +#define ZXDH_INVALID_DTBQUE 0xFFFF uint16_t zxdh_vport_to_vfid(union zxdh_virport_num v) @@ -406,14 +414,14 @@ zxdh_features_update(struct zxdh_hw *hw, ZXDH_VTPCI_OPS(hw)->set_features(hw, req_features); if ((rx_offloads & (RTE_ETH_TX_OFFLOAD_UDP_CKSUM | RTE_ETH_TX_OFFLOAD_TCP_CKSUM)) && - !vtpci_with_feature(hw, ZXDH_NET_F_GUEST_CSUM)) { + !zxdh_pci_with_feature(hw, ZXDH_NET_F_GUEST_CSUM)) { PMD_DRV_LOG(ERR, "rx checksum not available on this host"); return -ENOTSUP; } if ((rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) && - (!vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO4) || - !vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO6))) { + (!zxdh_pci_with_feature(hw, ZXDH_NET_F_GUEST_TSO4) || + !zxdh_pci_with_feature(hw, ZXDH_NET_F_GUEST_TSO6))) { PMD_DRV_LOG(ERR, "Large Receive Offload not available on this host"); return -ENOTSUP; } @@ -421,20 +429,20 @@ zxdh_features_update(struct zxdh_hw *hw, } static bool -rx_offload_enabled(struct zxdh_hw *hw) +zxdh_rx_offload_enabled(struct zxdh_hw *hw) { - return vtpci_with_feature(hw, ZXDH_NET_F_GUEST_CSUM) || - vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO4) || - vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO6); + return 
zxdh_pci_with_feature(hw, ZXDH_NET_F_GUEST_CSUM) || + zxdh_pci_with_feature(hw, ZXDH_NET_F_GUEST_TSO4) || + zxdh_pci_with_feature(hw, ZXDH_NET_F_GUEST_TSO6); } static bool -tx_offload_enabled(struct zxdh_hw *hw) +zxdh_tx_offload_enabled(struct zxdh_hw *hw) { - return vtpci_with_feature(hw, ZXDH_NET_F_CSUM) || - vtpci_with_feature(hw, ZXDH_NET_F_HOST_TSO4) || - vtpci_with_feature(hw, ZXDH_NET_F_HOST_TSO6) || - vtpci_with_feature(hw, ZXDH_NET_F_HOST_UFO); + return zxdh_pci_with_feature(hw, ZXDH_NET_F_CSUM) || + zxdh_pci_with_feature(hw, ZXDH_NET_F_HOST_TSO4) || + zxdh_pci_with_feature(hw, ZXDH_NET_F_HOST_TSO6) || + zxdh_pci_with_feature(hw, ZXDH_NET_F_HOST_UFO); } static void @@ -466,7 +474,7 @@ zxdh_dev_free_mbufs(struct rte_eth_dev *dev) continue; PMD_DRV_LOG(DEBUG, "Before freeing %s[%d] used and unused buf", type, i); - while ((buf = zxdh_virtqueue_detach_unused(vq)) != NULL) + while ((buf = zxdh_queue_detach_unused(vq)) != NULL) rte_pktmbuf_free(buf); } } @@ -550,9 +558,9 @@ zxdh_init_vring(struct zxdh_virtqueue *vq) vq->vq_desc_tail_idx = (uint16_t)(vq->vq_nentries - 1); vq->vq_free_cnt = vq->vq_nentries; memset(vq->vq_descx, 0, sizeof(struct zxdh_vq_desc_extra) * vq->vq_nentries); - vring_init_packed(&vq->vq_packed.ring, ring_mem, ZXDH_PCI_VRING_ALIGN, size); - vring_desc_init_packed(vq, size); - virtqueue_disable_intr(vq); + zxdh_vring_init_packed(&vq->vq_packed.ring, ring_mem, ZXDH_PCI_VRING_ALIGN, size); + zxdh_vring_desc_init_packed(vq, size); + zxdh_queue_disable_intr(vq); } static int32_t @@ -621,7 +629,7 @@ zxdh_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_logic_qidx) /* * Reserve a memzone for vring elements */ - size = vring_size(hw, vq_size, ZXDH_PCI_VRING_ALIGN); + size = zxdh_vring_size(hw, vq_size, ZXDH_PCI_VRING_ALIGN); vq->vq_ring_size = RTE_ALIGN_CEIL(size, ZXDH_PCI_VRING_ALIGN); PMD_DRV_LOG(DEBUG, "vring_size: %d, rounded_vring_size: %d", size, vq->vq_ring_size); @@ -694,7 +702,8 @@ zxdh_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_logic_qidx) /* first indirect descriptor is always the tx header */ struct zxdh_vring_packed_desc *start_dp = txr[i].tx_packed_indir; - vring_desc_init_indirect_packed(start_dp, RTE_DIM(txr[i].tx_packed_indir)); + zxdh_vring_desc_init_indirect_packed(start_dp, + RTE_DIM(txr[i].tx_packed_indir)); start_dp->addr = txvq->zxdh_net_hdr_mem + i * sizeof(*txr) + offsetof(struct zxdh_tx_region, tx_hdr); /* length will be updated to actual pi hdr size when xmit pkt */ @@ -792,8 +801,8 @@ zxdh_dev_configure(struct rte_eth_dev *dev) } } - hw->has_tx_offload = tx_offload_enabled(hw); - hw->has_rx_offload = rx_offload_enabled(hw); + hw->has_tx_offload = zxdh_tx_offload_enabled(hw); + hw->has_rx_offload = zxdh_rx_offload_enabled(hw); nr_vq = dev->data->nb_rx_queues + dev->data->nb_tx_queues; if (nr_vq == hw->queue_num) @@ -881,7 +890,7 @@ zxdh_init_device(struct rte_eth_dev *eth_dev) rte_ether_addr_copy((struct rte_ether_addr *)hw->mac_addr, ð_dev->data->mac_addrs[0]); /* If host does not support both status and MSI-X then disable LSC */ - if (vtpci_with_feature(hw, ZXDH_NET_F_STATUS) && hw->use_msix != ZXDH_MSIX_NONE) + if (zxdh_pci_with_feature(hw, ZXDH_NET_F_STATUS) && hw->use_msix != ZXDH_MSIX_NONE) eth_dev->data->dev_flags |= RTE_ETH_DEV_INTR_LSC; else eth_dev->data->dev_flags &= ~RTE_ETH_DEV_INTR_LSC; @@ -913,6 +922,183 @@ zxdh_agent_comm(struct rte_eth_dev *eth_dev, struct zxdh_hw *hw) return 0; } +static int +zxdh_np_dtb_res_init(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_bar_offset_params 
param = {0}; + struct zxdh_bar_offset_res res = {0}; + int ret = 0; + + if (g_dtb_data.init_done) { + PMD_DRV_LOG(DEBUG, "DTB res already init done, dev %s no need init", + dev->device->name); + return 0; + } + g_dtb_data.queueid = ZXDH_INVALID_DTBQUE; + g_dtb_data.bind_device = dev; + g_dtb_data.dev_refcnt++; + g_dtb_data.init_done = 1; + + ZXDH_DEV_INIT_CTRL_T *dpp_ctrl = rte_malloc(NULL, sizeof(*dpp_ctrl) + + sizeof(ZXDH_DTB_ADDR_INFO_T) * 256, 0); + + if (dpp_ctrl == NULL) { + PMD_DRV_LOG(ERR, "dev %s annot allocate memory for dpp_ctrl", dev->device->name); + ret = -ENOMEM; + goto free_res; + } + memset(dpp_ctrl, 0, sizeof(*dpp_ctrl) + sizeof(ZXDH_DTB_ADDR_INFO_T) * 256); + + dpp_ctrl->queue_id = 0xff; + dpp_ctrl->vport = hw->vport.vport; + dpp_ctrl->vector = ZXDH_MSIX_INTR_DTB_VEC; + strcpy((char *)dpp_ctrl->port_name, dev->device->name); + dpp_ctrl->pcie_vir_addr = (uint32_t)hw->bar_addr[0]; + + param.pcie_id = hw->pcie_id; + param.virt_addr = hw->bar_addr[0] + ZXDH_CTRLCH_OFFSET; + param.type = ZXDH_URI_NP; + + ret = zxdh_get_bar_offset(¶m, &res); + if (ret) { + PMD_DRV_LOG(ERR, "dev %s get npbar offset failed", dev->device->name); + goto free_res; + } + dpp_ctrl->np_bar_len = res.bar_length; + dpp_ctrl->np_bar_offset = res.bar_offset; + + if (!g_dtb_data.dtb_table_conf_mz) { + const struct rte_memzone *conf_mz = rte_memzone_reserve_aligned("zxdh_dtb_table_conf_mz", + ZXDH_DTB_TABLE_CONF_SIZE, SOCKET_ID_ANY, 0, RTE_CACHE_LINE_SIZE); + + if (conf_mz == NULL) { + PMD_DRV_LOG(ERR, + "dev %s annot allocate memory for dtb table conf", + dev->device->name); + ret = -ENOMEM; + goto free_res; + } + dpp_ctrl->down_vir_addr = conf_mz->addr_64; + dpp_ctrl->down_phy_addr = conf_mz->iova; + g_dtb_data.dtb_table_conf_mz = conf_mz; + } + + if (!g_dtb_data.dtb_table_dump_mz) { + const struct rte_memzone *dump_mz = rte_memzone_reserve_aligned("zxdh_dtb_table_dump_mz", + ZXDH_DTB_TABLE_DUMP_SIZE, SOCKET_ID_ANY, 0, RTE_CACHE_LINE_SIZE); + + if (dump_mz == NULL) { + PMD_DRV_LOG(ERR, + "dev %s Cannot allocate memory for dtb table dump", + dev->device->name); + ret = -ENOMEM; + goto free_res; + } + dpp_ctrl->dump_vir_addr = dump_mz->addr_64; + dpp_ctrl->dump_phy_addr = dump_mz->iova; + g_dtb_data.dtb_table_dump_mz = dump_mz; + } + + ret = zxdh_np_host_init(0, dpp_ctrl); + if (ret) { + PMD_DRV_LOG(ERR, "dev %s dpp host np init failed .ret %d", dev->device->name, ret); + goto free_res; + } + + PMD_DRV_LOG(DEBUG, "dev %s dpp host np init ok.dtb queue %d", + dev->device->name, dpp_ctrl->queue_id); + g_dtb_data.queueid = dpp_ctrl->queue_id; + rte_free(dpp_ctrl); + return 0; + +free_res: + rte_free(dpp_ctrl); + return ret; +} + +static int +zxdh_init_shared_data(void) +{ + const struct rte_memzone *mz; + int ret = 0; + + rte_spinlock_lock(&zxdh_shared_data_lock); + if (zxdh_shared_data == NULL) { + if (rte_eal_process_type() == RTE_PROC_PRIMARY) { + /* Allocate shared memory. */ + mz = rte_memzone_reserve(ZXDH_PMD_SHARED_DATA_MZ, + sizeof(*zxdh_shared_data), SOCKET_ID_ANY, 0); + if (mz == NULL) { + PMD_DRV_LOG(ERR, "Cannot allocate zxdh shared data"); + ret = -rte_errno; + goto error; + } + zxdh_shared_data = mz->addr; + memset(zxdh_shared_data, 0, sizeof(*zxdh_shared_data)); + rte_spinlock_init(&zxdh_shared_data->lock); + } else { /* Lookup allocated shared memory. 
*/ + mz = rte_memzone_lookup(ZXDH_PMD_SHARED_DATA_MZ); + if (mz == NULL) { + PMD_DRV_LOG(ERR, "Cannot attach zxdh shared data"); + ret = -rte_errno; + goto error; + } + zxdh_shared_data = mz->addr; + } + } + +error: + rte_spinlock_unlock(&zxdh_shared_data_lock); + return ret; +} + +static int +zxdh_init_once(void) +{ + struct zxdh_shared_data *sd = zxdh_shared_data; + int ret = 0; + + if (zxdh_init_shared_data()) + return -1; + + rte_spinlock_lock(&sd->lock); + if (rte_eal_process_type() == RTE_PROC_SECONDARY) { + if (!sd->init_done) { + ++sd->secondary_cnt; + sd->init_done = true; + } + goto out; + } + /* RTE_PROC_PRIMARY */ + if (!sd->init_done) + sd->init_done = true; + sd->dev_refcnt++; +out: + rte_spinlock_unlock(&sd->lock); + return ret; +} + +static int +zxdh_np_init(struct rte_eth_dev *eth_dev) +{ + struct zxdh_hw *hw = eth_dev->data->dev_private; + int ret = 0; + + if (hw->is_pf) { + ret = zxdh_np_dtb_res_init(eth_dev); + if (ret) { + PMD_DRV_LOG(ERR, "np dtb init failed, ret:%d ", ret); + return ret; + } + } + if (zxdh_shared_data != NULL) + zxdh_shared_data->np_init_done = 1; + + PMD_DRV_LOG(DEBUG, "np init ok "); + return 0; +} + static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) { @@ -950,6 +1136,10 @@ zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) hw->is_pf = 1; } + ret = zxdh_init_once(); + if (ret != 0) + goto err_zxdh_init; + ret = zxdh_init_device(eth_dev); if (ret < 0) goto err_zxdh_init; @@ -977,6 +1167,10 @@ zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) if (ret != 0) goto err_zxdh_init; + ret = zxdh_np_init(eth_dev); + if (ret) + goto err_zxdh_init; + ret = zxdh_configure_intr(eth_dev); if (ret != 0) goto err_zxdh_init; diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index 7658cbb461..6fdb5fb767 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -35,6 +35,10 @@ #define ZXDH_MBUF_BURST_SZ 64 +#define ZXDH_MAX_BASE_DTB_TABLE_COUNT 30 +#define ZXDH_DTB_TABLE_DUMP_SIZE (32 * (16 + 16 * 1024)) +#define ZXDH_DTB_TABLE_CONF_SIZE (32 * (16 + 16 * 1024)) + union zxdh_virport_num { uint16_t vport; struct { @@ -89,6 +93,30 @@ struct zxdh_hw { uint8_t has_rx_offload; }; +struct zxdh_dtb_shared_data { + int init_done; + char name[32]; + uint16_t queueid; + uint16_t vport; + uint32_t vector; + const struct rte_memzone *dtb_table_conf_mz; + const struct rte_memzone *dtb_table_dump_mz; + const struct rte_memzone *dtb_table_bulk_dump_mz[ZXDH_MAX_BASE_DTB_TABLE_COUNT]; + struct rte_eth_dev *bind_device; + uint32_t dev_refcnt; +}; + +/* Shared data between primary and secondary processes. */ +struct zxdh_shared_data { + rte_spinlock_t lock; /* Global spinlock for primary and secondary processes. */ + int32_t init_done; /* Whether primary has done initialization. */ + unsigned int secondary_cnt; /* Number of secondary processes init'd. 
*/ + + int32_t np_init_done; + uint32_t dev_refcnt; + struct zxdh_dtb_shared_data *dtb_data; +}; + uint16_t zxdh_vport_to_vfid(union zxdh_virport_num v); #endif /* ZXDH_ETHDEV_H */ diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c index 53cf972f86..a0a005b178 100644 --- a/drivers/net/zxdh/zxdh_msg.c +++ b/drivers/net/zxdh/zxdh_msg.c @@ -1035,3 +1035,48 @@ zxdh_bar_irq_recv(uint8_t src, uint8_t dst, uint64_t virt_addr, void *dev) rte_free(recved_msg); return ZXDH_BAR_MSG_OK; } + +int zxdh_get_bar_offset(struct zxdh_bar_offset_params *paras, + struct zxdh_bar_offset_res *res) +{ + uint16_t check_token = 0; + uint16_t sum_res = 0; + int ret = 0; + + if (!paras) + return ZXDH_BAR_MSG_ERR_NULL; + + struct zxdh_offset_get_msg send_msg = { + .pcie_id = paras->pcie_id, + .type = paras->type, + }; + struct zxdh_pci_bar_msg in = {0}; + + in.payload_addr = &send_msg; + in.payload_len = sizeof(send_msg); + in.virt_addr = paras->virt_addr; + in.src = ZXDH_MSG_CHAN_END_PF; + in.dst = ZXDH_MSG_CHAN_END_RISC; + in.module_id = ZXDH_BAR_MODULE_OFFSET_GET; + in.src_pcieid = paras->pcie_id; + + struct zxdh_bar_recv_msg recv_msg = {0}; + struct zxdh_msg_recviver_mem result = { + .recv_buffer = &recv_msg, + .buffer_len = sizeof(recv_msg), + }; + ret = zxdh_bar_chan_sync_msg_send(&in, &result); + if (ret != ZXDH_BAR_MSG_OK) + return -ret; + + check_token = recv_msg.offset_reps.check; + sum_res = zxdh_bar_get_sum((uint8_t *)&send_msg, sizeof(send_msg)); + + if (check_token != sum_res) { + PMD_MSG_LOG(ERR, "expect token: 0x%x, get token: 0x%x", sum_res, check_token); + return ZXDH_BAR_MSG_ERR_REPLY; + } + res->bar_offset = recv_msg.offset_reps.offset; + res->bar_length = recv_msg.offset_reps.length; + return ZXDH_BAR_MSG_OK; +} diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index 530ee406b1..fbc79e8f9d 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -131,6 +131,26 @@ enum ZXDH_TBL_MSG_TYPE { ZXDH_TBL_TYPE_NON, }; +enum pciebar_layout_type { + ZXDH_URI_VQM = 0, + ZXDH_URI_SPINLOCK = 1, + ZXDH_URI_FWCAP = 2, + ZXDH_URI_FWSHR = 3, + ZXDH_URI_DRS_SEC = 4, + ZXDH_URI_RSV = 5, + ZXDH_URI_CTRLCH = 6, + ZXDH_URI_1588 = 7, + ZXDH_URI_QBV = 8, + ZXDH_URI_MACPCS = 9, + ZXDH_URI_RDMA = 10, + ZXDH_URI_MNP = 11, + ZXDH_URI_MSPM = 12, + ZXDH_URI_MVQM = 13, + ZXDH_URI_MDPI = 14, + ZXDH_URI_NP = 15, + ZXDH_URI_MAX, +}; + struct zxdh_msix_para { uint16_t pcie_id; uint16_t vector_risc; @@ -174,6 +194,17 @@ struct zxdh_bar_offset_reps { uint32_t length; } __rte_packed; +struct zxdh_bar_offset_params { + uint64_t virt_addr; /* Bar space control space virtual address */ + uint16_t pcie_id; + uint16_t type; /* Module types corresponding to PCIBAR planning */ +}; + +struct zxdh_bar_offset_res { + uint32_t bar_offset; + uint32_t bar_length; +}; + struct zxdh_bar_recv_msg { uint8_t reps_ok; uint16_t reps_len; @@ -204,9 +235,15 @@ struct zxdh_bar_msg_header { uint16_t dst_pcieid; /* used in PF-->VF */ }; +struct zxdh_offset_get_msg { + uint16_t pcie_id; + uint16_t type; +}; + typedef int (*zxdh_bar_chan_msg_recv_callback)(void *pay_load, uint16_t len, void *reps_buffer, uint16_t *reps_len, void *dev); +int zxdh_get_bar_offset(struct zxdh_bar_offset_params *paras, struct zxdh_bar_offset_res *res); int zxdh_msg_chan_init(void); int zxdh_bar_msg_chan_exit(void); int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev); diff --git a/drivers/net/zxdh/zxdh_np.c b/drivers/net/zxdh/zxdh_np.c new file mode 100644 index 0000000000..9c50039fb1 --- /dev/null +++ 
b/drivers/net/zxdh/zxdh_np.c @@ -0,0 +1,347 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#include <stdlib.h> +#include <string.h> + +#include <rte_common.h> +#include <rte_log.h> + +#include "zxdh_np.h" +#include "zxdh_logs.h" + +static uint64_t g_np_bar_offset; +static ZXDH_DEV_MGR_T g_dev_mgr = {0}; +static ZXDH_SDT_MGR_T g_sdt_mgr = {0}; +ZXDH_PPU_CLS_BITMAP_T g_ppu_cls_bit_map[ZXDH_DEV_CHANNEL_MAX]; +ZXDH_DTB_MGR_T *p_dpp_dtb_mgr[ZXDH_DEV_CHANNEL_MAX] = {NULL}; + +#define ZXDH_COMM_ASSERT(x) assert(x) +#define ZXDH_SDT_MGR_PTR_GET() (&g_sdt_mgr) +#define ZXDH_SDT_SOFT_TBL_GET(id) (g_sdt_mgr.sdt_tbl_array[id]) + +#define ZXDH_COMM_CHECK_DEV_POINT(dev_id, point)\ +do {\ + if (NULL == (point)) {\ + PMD_DRV_LOG(ERR, "dev: %d ZXIC %s:%d[Error:POINT NULL] !"\ + "FUNCTION : %s!", (dev_id), __FILE__, __LINE__, __func__);\ + ZXDH_COMM_ASSERT(0);\ + } \ +} while (0) + +#define ZXDH_COMM_CHECK_DEV_RC(dev_id, rc, becall)\ +do {\ + if ((rc) != 0) {\ + PMD_DRV_LOG(ERR, "dev: %d ZXIC %s:%d !"\ + "-- %s Call %s Fail!", (dev_id), __FILE__, __LINE__, __func__, becall);\ + ZXDH_COMM_ASSERT(0);\ + } \ +} while (0) + +#define ZXDH_COMM_CHECK_POINT_NO_ASSERT(point)\ +do {\ + if ((point) == NULL) {\ + PMD_DRV_LOG(ERR, "ZXIC %s:%d[Error:POINT NULL] ! FUNCTION : %s!",\ + __FILE__, __LINE__, __func__);\ + } \ +} while (0) + +#define ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, becall)\ +do {\ + if ((rc) != 0) {\ + PMD_DRV_LOG(ERR, "ZXIC %s:%d !-- %s Call %s"\ + " Fail!", __FILE__, __LINE__, __func__, becall);\ + } \ +} while (0) + +#define ZXDH_COMM_CHECK_RC(rc, becall)\ +do {\ + if ((rc) != 0) {\ + PMD_DRV_LOG(ERR, "ZXIC %s:%d!-- %s Call %s "\ + "Fail!", __FILE__, __LINE__, __func__, becall);\ + ZXDH_COMM_ASSERT(0);\ + } \ +} while (0) + +static uint32_t +zxdh_np_dev_init(void) +{ + if (g_dev_mgr.is_init) { + PMD_DRV_LOG(ERR, "Dev is already initialized."); + return 0; + } + + g_dev_mgr.device_num = 0; + g_dev_mgr.is_init = 1; + + return 0; +} + +static uint32_t +zxdh_np_dev_add(uint32_t dev_id, ZXDH_DEV_TYPE_E dev_type, + ZXDH_DEV_ACCESS_TYPE_E access_type, uint64_t pcie_addr, + uint64_t riscv_addr, uint64_t dma_vir_addr, + uint64_t dma_phy_addr) +{ + ZXDH_DEV_CFG_T *p_dev_info = NULL; + ZXDH_DEV_MGR_T *p_dev_mgr = NULL; + + p_dev_mgr = &g_dev_mgr; + if (!p_dev_mgr->is_init) { + PMD_DRV_LOG(ERR, "ErrorCode[ 0x%x]: Device Manager is not init!!!", + ZXDH_RC_DEV_MGR_NOT_INIT); + return ZXDH_RC_DEV_MGR_NOT_INIT; + } + + if (p_dev_mgr->p_dev_array[dev_id] != NULL) { + /* device is already exist. */ + PMD_DRV_LOG(ERR, "Device is added again!!!"); + p_dev_info = p_dev_mgr->p_dev_array[dev_id]; + } else { + /* device is new. 
*/ + p_dev_info = (ZXDH_DEV_CFG_T *)malloc(sizeof(ZXDH_DEV_CFG_T)); + ZXDH_COMM_CHECK_DEV_POINT(dev_id, p_dev_info); + p_dev_mgr->p_dev_array[dev_id] = p_dev_info; + p_dev_mgr->device_num++; + } + + p_dev_info->device_id = dev_id; + p_dev_info->dev_type = dev_type; + p_dev_info->access_type = access_type; + p_dev_info->pcie_addr = pcie_addr; + p_dev_info->riscv_addr = riscv_addr; + p_dev_info->dma_vir_addr = dma_vir_addr; + p_dev_info->dma_phy_addr = dma_phy_addr; + + return 0; +} + +static uint32_t +zxdh_np_dev_agent_status_set(uint32_t dev_id, uint32_t agent_flag) +{ + ZXDH_DEV_MGR_T *p_dev_mgr = NULL; + ZXDH_DEV_CFG_T *p_dev_info = NULL; + + p_dev_mgr = &g_dev_mgr; + p_dev_info = p_dev_mgr->p_dev_array[dev_id]; + + if (p_dev_info == NULL) + return ZXDH_DEV_TYPE_INVALID; + p_dev_info->agent_flag = agent_flag; + + return 0; +} + +static uint32_t +zxdh_np_sdt_mgr_init(void) +{ + if (!g_sdt_mgr.is_init) { + g_sdt_mgr.channel_num = 0; + g_sdt_mgr.is_init = 1; + memset(g_sdt_mgr.sdt_tbl_array, 0, ZXDH_DEV_CHANNEL_MAX * + sizeof(ZXDH_SDT_SOFT_TABLE_T *)); + } + + return 0; +} + +static uint32_t +zxdh_np_sdt_mgr_create(uint32_t dev_id) +{ + ZXDH_SDT_SOFT_TABLE_T *p_sdt_tbl_temp = NULL; + ZXDH_SDT_MGR_T *p_sdt_mgr = NULL; + + p_sdt_mgr = ZXDH_SDT_MGR_PTR_GET(); + + if (ZXDH_SDT_SOFT_TBL_GET(dev_id) == NULL) { + p_sdt_tbl_temp = malloc(sizeof(ZXDH_SDT_SOFT_TABLE_T)); + + p_sdt_tbl_temp->device_id = dev_id; + memset(p_sdt_tbl_temp->sdt_array, 0, ZXDH_DEV_SDT_ID_MAX * sizeof(ZXDH_SDT_ITEM_T)); + + ZXDH_SDT_SOFT_TBL_GET(dev_id) = p_sdt_tbl_temp; + + p_sdt_mgr->channel_num++; + } else { + PMD_DRV_LOG(ERR, "Error: %s for dev[%d]" + "is called repeatedly!", __func__, dev_id); + return -1; + } + + return 0; +} + +static uint32_t +zxdh_np_sdt_init(uint32_t dev_num, uint32_t *dev_id_array) +{ + uint32_t rc = 0; + uint32_t i = 0; + + zxdh_np_sdt_mgr_init(); + + for (i = 0; i < dev_num; i++) { + rc = zxdh_np_sdt_mgr_create(dev_id_array[i]); + ZXDH_COMM_CHECK_RC(rc, "zxdh_sdt_mgr_create"); + } + + return 0; +} + +static uint32_t +zxdh_np_ppu_parse_cls_bitmap(uint32_t dev_id, + uint32_t bitmap) +{ + uint32_t cls_id = 0; + uint32_t mem_id = 0; + uint32_t cls_use = 0; + uint32_t instr_mem = 0; + + for (cls_id = 0; cls_id < ZXDH_PPU_CLUSTER_NUM; cls_id++) { + cls_use = (bitmap >> cls_id) & 0x1; + g_ppu_cls_bit_map[dev_id].cls_use[cls_id] = cls_use; + } + + for (mem_id = 0; mem_id < ZXDH_PPU_INSTR_MEM_NUM; mem_id++) { + instr_mem = (bitmap >> (mem_id * 2)) & 0x3; + g_ppu_cls_bit_map[dev_id].instr_mem[mem_id] = ((instr_mem > 0) ? 
1 : 0); + } + + return 0; +} + +static ZXDH_DTB_MGR_T * +zxdh_np_dtb_mgr_get(uint32_t dev_id) +{ + if (dev_id >= ZXDH_DEV_CHANNEL_MAX) + return NULL; + else + return p_dpp_dtb_mgr[dev_id]; +} + +static uint32_t +zxdh_np_dtb_soft_init(uint32_t dev_id) +{ + ZXDH_DTB_MGR_T *p_dtb_mgr = NULL; + + p_dtb_mgr = zxdh_np_dtb_mgr_get(dev_id); + if (p_dtb_mgr == NULL) { + p_dpp_dtb_mgr[dev_id] = (ZXDH_DTB_MGR_T *)malloc(sizeof(ZXDH_DTB_MGR_T)); + memset(p_dpp_dtb_mgr[dev_id], 0, sizeof(ZXDH_DTB_MGR_T)); + p_dtb_mgr = zxdh_np_dtb_mgr_get(dev_id); + if (p_dtb_mgr == NULL) + return 1; + } + + return 0; +} + +static unsigned int +zxdh_np_base_soft_init(unsigned int dev_id, ZXDH_SYS_INIT_CTRL_T *p_init_ctrl) +{ + unsigned int rt = 0; + unsigned int access_type = 0; + unsigned int dev_id_array[ZXDH_DEV_CHANNEL_MAX] = {0}; + unsigned int agent_flag = 0; + + rt = zxdh_np_dev_init(); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rt, "zxdh_dev_init"); + + if (p_init_ctrl->flags & ZXDH_INIT_FLAG_ACCESS_TYPE) + access_type = ZXDH_DEV_ACCESS_TYPE_RISCV; + else + access_type = ZXDH_DEV_ACCESS_TYPE_PCIE; + + if (p_init_ctrl->flags & ZXDH_INIT_FLAG_AGENT_FLAG) + agent_flag = ZXDH_DEV_AGENT_ENABLE; + else + agent_flag = ZXDH_DEV_AGENT_DISABLE; + + rt = zxdh_np_dev_add(dev_id, + p_init_ctrl->device_type, + access_type, + p_init_ctrl->pcie_vir_baddr, + p_init_ctrl->riscv_vir_baddr, + p_init_ctrl->dma_vir_baddr, + p_init_ctrl->dma_phy_baddr); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rt, "zxdh_dev_add"); + + rt = zxdh_np_dev_agent_status_set(dev_id, agent_flag); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rt, "zxdh_dev_agent_status_set"); + + dev_id_array[0] = dev_id; + rt = zxdh_np_sdt_init(1, dev_id_array); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rt, "zxdh_sdt_init"); + + rt = zxdh_np_ppu_parse_cls_bitmap(dev_id, ZXDH_PPU_CLS_ALL_START); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rt, "zxdh_ppu_parse_cls_bitmap"); + + rt = zxdh_np_dtb_soft_init(dev_id); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rt, "zxdh_dtb_soft_init"); + + return 0; +} + +static uint32_t +zxdh_np_dev_vport_set(uint32_t dev_id, uint32_t vport) +{ + ZXDH_DEV_MGR_T *p_dev_mgr = NULL; + ZXDH_DEV_CFG_T *p_dev_info = NULL; + + p_dev_mgr = &g_dev_mgr; + p_dev_info = p_dev_mgr->p_dev_array[dev_id]; + p_dev_info->vport = vport; + + return 0; +} + +static uint32_t +zxdh_np_dev_agent_addr_set(uint32_t dev_id, uint64_t agent_addr) +{ + ZXDH_DEV_MGR_T *p_dev_mgr = NULL; + ZXDH_DEV_CFG_T *p_dev_info = NULL; + + p_dev_mgr = &g_dev_mgr; + p_dev_info = p_dev_mgr->p_dev_array[dev_id]; + p_dev_info->agent_addr = agent_addr; + + return 0; +} + +static uint64_t +zxdh_np_addr_calc(uint64_t pcie_vir_baddr, uint32_t bar_offset) +{ + uint64_t np_addr = 0; + + np_addr = ((pcie_vir_baddr + bar_offset) > ZXDH_PCIE_NP_MEM_SIZE) + ? 
(pcie_vir_baddr + bar_offset - ZXDH_PCIE_NP_MEM_SIZE) : 0; + g_np_bar_offset = bar_offset; + + return np_addr; +} + +int +zxdh_np_host_init(uint32_t dev_id, + ZXDH_DEV_INIT_CTRL_T *p_dev_init_ctrl) +{ + unsigned int rc = 0; + uint64_t agent_addr = 0; + ZXDH_SYS_INIT_CTRL_T sys_init_ctrl = {0}; + + ZXDH_COMM_CHECK_POINT_NO_ASSERT(p_dev_init_ctrl); + + sys_init_ctrl.flags = (ZXDH_DEV_ACCESS_TYPE_PCIE << 0) | (ZXDH_DEV_AGENT_ENABLE << 10); + sys_init_ctrl.pcie_vir_baddr = zxdh_np_addr_calc(p_dev_init_ctrl->pcie_vir_addr, + p_dev_init_ctrl->np_bar_offset); + sys_init_ctrl.device_type = ZXDH_DEV_TYPE_CHIP; + rc = zxdh_np_base_soft_init(dev_id, &sys_init_ctrl); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_base_soft_init"); + + rc = zxdh_np_dev_vport_set(dev_id, p_dev_init_ctrl->vport); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_dev_vport_set"); + + agent_addr = ZXDH_PCIE_AGENT_ADDR_OFFSET + p_dev_init_ctrl->pcie_vir_addr; + rc = zxdh_np_dev_agent_addr_set(dev_id, agent_addr); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_dev_agent_addr_set"); + return 0; +} diff --git a/drivers/net/zxdh/zxdh_np.h b/drivers/net/zxdh/zxdh_np.h new file mode 100644 index 0000000000..573eafe796 --- /dev/null +++ b/drivers/net/zxdh/zxdh_np.h @@ -0,0 +1,198 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2023 ZTE Corporation + */ + +#ifndef ZXDH_NP_H +#define ZXDH_NP_H + +#include <stdint.h> + +#define ZXDH_PORT_NAME_MAX (32) +#define ZXDH_DEV_CHANNEL_MAX (2) +#define ZXDH_DEV_SDT_ID_MAX (256U) +/*DTB*/ +#define ZXDH_DTB_QUEUE_ITEM_NUM_MAX (32) +#define ZXDH_DTB_QUEUE_NUM_MAX (128) + +#define ZXDH_PPU_CLS_ALL_START (0x3F) +#define ZXDH_PPU_CLUSTER_NUM (6) +#define ZXDH_PPU_INSTR_MEM_NUM (3) +#define ZXDH_SDT_CFG_LEN (2) + +#define ZXDH_RC_DEV_BASE (0x600) +#define ZXDH_RC_DEV_PARA_INVALID (ZXDH_RC_DEV_BASE | 0x0) +#define ZXDH_RC_DEV_RANGE_INVALID (ZXDH_RC_DEV_BASE | 0x1) +#define ZXDH_RC_DEV_CALL_FUNC_FAIL (ZXDH_RC_DEV_BASE | 0x2) +#define ZXDH_RC_DEV_TYPE_INVALID (ZXDH_RC_DEV_BASE | 0x3) +#define ZXDH_RC_DEV_CONNECT_FAIL (ZXDH_RC_DEV_BASE | 0x4) +#define ZXDH_RC_DEV_MSG_INVALID (ZXDH_RC_DEV_BASE | 0x5) +#define ZXDH_RC_DEV_NOT_EXIST (ZXDH_RC_DEV_BASE | 0x6) +#define ZXDH_RC_DEV_MGR_NOT_INIT (ZXDH_RC_DEV_BASE | 0x7) +#define ZXDH_RC_DEV_CFG_NOT_INIT (ZXDH_RC_DEV_BASE | 0x8) + +#define ZXDH_SYS_VF_NP_BASE_OFFSET 0 +#define ZXDH_PCIE_DTB4K_ADDR_OFFSET (0x6000) +#define ZXDH_PCIE_NP_MEM_SIZE (0x2000000) +#define ZXDH_PCIE_AGENT_ADDR_OFFSET (0x2000) + +#define ZXDH_INIT_FLAG_ACCESS_TYPE (1 << 0) +#define ZXDH_INIT_FLAG_SERDES_DOWN_TP (1 << 1) +#define ZXDH_INIT_FLAG_DDR_BACKDOOR (1 << 2) +#define ZXDH_INIT_FLAG_SA_MODE (1 << 3) +#define ZXDH_INIT_FLAG_SA_MESH (1 << 4) +#define ZXDH_INIT_FLAG_SA_SERDES_MODE (1 << 5) +#define ZXDH_INIT_FLAG_INT_DEST_MODE (1 << 6) +#define ZXDH_INIT_FLAG_LIF0_MODE (1 << 7) +#define ZXDH_INIT_FLAG_DMA_ENABLE (1 << 8) +#define ZXDH_INIT_FLAG_TM_IMEM_FLAG (1 << 9) +#define ZXDH_INIT_FLAG_AGENT_FLAG (1 << 10) + +typedef enum zxdh_module_init_e { + ZXDH_MODULE_INIT_NPPU = 0, + ZXDH_MODULE_INIT_PPU, + ZXDH_MODULE_INIT_SE, + ZXDH_MODULE_INIT_ETM, + ZXDH_MODULE_INIT_DLB, + ZXDH_MODULE_INIT_TRPG, + ZXDH_MODULE_INIT_TSN, + ZXDH_MODULE_INIT_MAX +} ZXDH_MODULE_INIT_E; + +typedef enum zxdh_dev_type_e { + ZXDH_DEV_TYPE_SIM = 0, + ZXDH_DEV_TYPE_VCS = 1, + ZXDH_DEV_TYPE_CHIP = 2, + ZXDH_DEV_TYPE_FPGA = 3, + ZXDH_DEV_TYPE_PCIE_ACC = 4, + ZXDH_DEV_TYPE_INVALID, +} ZXDH_DEV_TYPE_E; + +typedef enum zxdh_dev_access_type_e { + ZXDH_DEV_ACCESS_TYPE_PCIE = 0, + ZXDH_DEV_ACCESS_TYPE_RISCV = 1, + 
ZXDH_DEV_ACCESS_TYPE_INVALID, +} ZXDH_DEV_ACCESS_TYPE_E; + +typedef enum zxdh_dev_agent_flag_e { + ZXDH_DEV_AGENT_DISABLE = 0, + ZXDH_DEV_AGENT_ENABLE = 1, + ZXDH_DEV_AGENT_INVALID, +} ZXDH_DEV_AGENT_FLAG_E; + +typedef struct zxdh_dtb_tab_up_user_addr_t { + uint32_t user_flag; + uint64_t phy_addr; + uint64_t vir_addr; +} ZXDH_DTB_TAB_UP_USER_ADDR_T; + +typedef struct zxdh_dtb_tab_up_info_t { + uint64_t start_phy_addr; + uint64_t start_vir_addr; + uint32_t item_size; + uint32_t wr_index; + uint32_t rd_index; + uint32_t data_len[ZXDH_DTB_QUEUE_ITEM_NUM_MAX]; + ZXDH_DTB_TAB_UP_USER_ADDR_T user_addr[ZXDH_DTB_QUEUE_ITEM_NUM_MAX]; +} ZXDH_DTB_TAB_UP_INFO_T; + +typedef struct zxdh_dtb_tab_down_info_t { + uint64_t start_phy_addr; + uint64_t start_vir_addr; + uint32_t item_size; + uint32_t wr_index; + uint32_t rd_index; +} ZXDH_DTB_TAB_DOWN_INFO_T; + +typedef struct zxdh_dtb_queue_info_t { + uint32_t init_flag; + uint32_t vport; + uint32_t vector; + ZXDH_DTB_TAB_UP_INFO_T tab_up; + ZXDH_DTB_TAB_DOWN_INFO_T tab_down; +} ZXDH_DTB_QUEUE_INFO_T; + +typedef struct zxdh_dtb_mgr_t { + ZXDH_DTB_QUEUE_INFO_T queue_info[ZXDH_DTB_QUEUE_NUM_MAX]; +} ZXDH_DTB_MGR_T; + +typedef struct zxdh_ppu_cls_bitmap_t { + uint32_t cls_use[ZXDH_PPU_CLUSTER_NUM]; + uint32_t instr_mem[ZXDH_PPU_INSTR_MEM_NUM]; +} ZXDH_PPU_CLS_BITMAP_T; + +typedef struct dpp_sdt_item_t { + uint32_t valid; + uint32_t table_cfg[ZXDH_SDT_CFG_LEN]; +} ZXDH_SDT_ITEM_T; + +typedef struct dpp_sdt_soft_table_t { + uint32_t device_id; + ZXDH_SDT_ITEM_T sdt_array[ZXDH_DEV_SDT_ID_MAX]; +} ZXDH_SDT_SOFT_TABLE_T; + +typedef struct zxdh_sys_init_ctrl_t { + ZXDH_DEV_TYPE_E device_type; + uint32_t flags; + uint32_t sa_id; + uint32_t case_num; + uint32_t lif0_port_type; + uint32_t lif1_port_type; + uint64_t pcie_vir_baddr; + uint64_t riscv_vir_baddr; + uint64_t dma_vir_baddr; + uint64_t dma_phy_baddr; +} ZXDH_SYS_INIT_CTRL_T; + +typedef struct dpp_dev_cfg_t { + uint32_t device_id; + ZXDH_DEV_TYPE_E dev_type; + uint32_t chip_ver; + uint32_t access_type; + uint32_t agent_flag; + uint32_t vport; + uint64_t pcie_addr; + uint64_t riscv_addr; + uint64_t dma_vir_addr; + uint64_t dma_phy_addr; + uint64_t agent_addr; + uint32_t init_flags[ZXDH_MODULE_INIT_MAX]; +} ZXDH_DEV_CFG_T; + +typedef struct zxdh_dev_mngr_t { + uint32_t device_num; + uint32_t is_init; + ZXDH_DEV_CFG_T *p_dev_array[ZXDH_DEV_CHANNEL_MAX]; +} ZXDH_DEV_MGR_T; + +typedef struct zxdh_dtb_addr_info_t { + uint32_t sdt_no; + uint32_t size; + uint32_t phy_addr; + uint32_t vir_addr; +} ZXDH_DTB_ADDR_INFO_T; + +typedef struct zxdh_dev_init_ctrl_t { + uint32_t vport; + char port_name[ZXDH_PORT_NAME_MAX]; + uint32_t vector; + uint32_t queue_id; + uint32_t np_bar_offset; + uint32_t np_bar_len; + uint32_t pcie_vir_addr; + uint32_t down_phy_addr; + uint32_t down_vir_addr; + uint32_t dump_phy_addr; + uint32_t dump_vir_addr; + uint32_t dump_sdt_num; + ZXDH_DTB_ADDR_INFO_T dump_addr_info[]; +} ZXDH_DEV_INIT_CTRL_T; + +typedef struct zxdh_sdt_mgr_t { + uint32_t channel_num; + uint32_t is_init; + ZXDH_SDT_SOFT_TABLE_T *sdt_tbl_array[ZXDH_DEV_CHANNEL_MAX]; +} ZXDH_SDT_MGR_T; + +int zxdh_np_host_init(uint32_t dev_id, ZXDH_DEV_INIT_CTRL_T *p_dev_init_ctrl); + +#endif /* ZXDH_NP_H */ diff --git a/drivers/net/zxdh/zxdh_pci.c b/drivers/net/zxdh/zxdh_pci.c index 06d3f92b20..250e67d560 100644 --- a/drivers/net/zxdh/zxdh_pci.c +++ b/drivers/net/zxdh/zxdh_pci.c @@ -159,7 +159,7 @@ zxdh_setup_queue(struct zxdh_hw *hw, struct zxdh_virtqueue *vq) desc_addr = vq->vq_ring_mem; avail_addr = desc_addr + vq->vq_nentries * sizeof(struct 
zxdh_vring_desc); - if (vtpci_packed_queue(vq->hw)) { + if (zxdh_pci_packed_queue(vq->hw)) { used_addr = RTE_ALIGN_CEIL((avail_addr + sizeof(struct zxdh_vring_packed_desc_event)), ZXDH_PCI_VRING_ALIGN); diff --git a/drivers/net/zxdh/zxdh_pci.h b/drivers/net/zxdh/zxdh_pci.h index ed6fd89742..d6487a574f 100644 --- a/drivers/net/zxdh/zxdh_pci.h +++ b/drivers/net/zxdh/zxdh_pci.h @@ -114,15 +114,15 @@ struct zxdh_pci_common_cfg { }; static inline int32_t -vtpci_with_feature(struct zxdh_hw *hw, uint64_t bit) +zxdh_pci_with_feature(struct zxdh_hw *hw, uint64_t bit) { return (hw->guest_features & (1ULL << bit)) != 0; } static inline int32_t -vtpci_packed_queue(struct zxdh_hw *hw) +zxdh_pci_packed_queue(struct zxdh_hw *hw) { - return vtpci_with_feature(hw, ZXDH_F_RING_PACKED); + return zxdh_pci_with_feature(hw, ZXDH_F_RING_PACKED); } struct zxdh_pci_ops { diff --git a/drivers/net/zxdh/zxdh_queue.c b/drivers/net/zxdh/zxdh_queue.c index 462a88b23c..b4ef90ea36 100644 --- a/drivers/net/zxdh/zxdh_queue.c +++ b/drivers/net/zxdh/zxdh_queue.c @@ -13,7 +13,7 @@ #include "zxdh_msg.h" struct rte_mbuf * -zxdh_virtqueue_detach_unused(struct zxdh_virtqueue *vq) +zxdh_queue_detach_unused(struct zxdh_virtqueue *vq) { struct rte_mbuf *cookie = NULL; int32_t idx = 0; diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h index 1088bf08fc..1304d5e4ea 100644 --- a/drivers/net/zxdh/zxdh_queue.h +++ b/drivers/net/zxdh/zxdh_queue.h @@ -206,11 +206,11 @@ struct zxdh_tx_region { }; static inline size_t -vring_size(struct zxdh_hw *hw, uint32_t num, unsigned long align) +zxdh_vring_size(struct zxdh_hw *hw, uint32_t num, unsigned long align) { size_t size; - if (vtpci_packed_queue(hw)) { + if (zxdh_pci_packed_queue(hw)) { size = num * sizeof(struct zxdh_vring_packed_desc); size += sizeof(struct zxdh_vring_packed_desc_event); size = RTE_ALIGN_CEIL(size, align); @@ -226,7 +226,7 @@ vring_size(struct zxdh_hw *hw, uint32_t num, unsigned long align) } static inline void -vring_init_packed(struct zxdh_vring_packed *vr, uint8_t *p, +zxdh_vring_init_packed(struct zxdh_vring_packed *vr, uint8_t *p, unsigned long align, uint32_t num) { vr->num = num; @@ -238,7 +238,7 @@ vring_init_packed(struct zxdh_vring_packed *vr, uint8_t *p, } static inline void -vring_desc_init_packed(struct zxdh_virtqueue *vq, int32_t n) +zxdh_vring_desc_init_packed(struct zxdh_virtqueue *vq, int32_t n) { int32_t i = 0; @@ -251,7 +251,7 @@ vring_desc_init_packed(struct zxdh_virtqueue *vq, int32_t n) } static inline void -vring_desc_init_indirect_packed(struct zxdh_vring_packed_desc *dp, int32_t n) +zxdh_vring_desc_init_indirect_packed(struct zxdh_vring_packed_desc *dp, int32_t n) { int32_t i = 0; @@ -262,7 +262,7 @@ vring_desc_init_indirect_packed(struct zxdh_vring_packed_desc *dp, int32_t n) } static inline void -virtqueue_disable_intr(struct zxdh_virtqueue *vq) +zxdh_queue_disable_intr(struct zxdh_virtqueue *vq) { if (vq->vq_packed.event_flags_shadow != ZXDH_RING_EVENT_FLAGS_DISABLE) { vq->vq_packed.event_flags_shadow = ZXDH_RING_EVENT_FLAGS_DISABLE; @@ -270,7 +270,7 @@ virtqueue_disable_intr(struct zxdh_virtqueue *vq) } } -struct rte_mbuf *zxdh_virtqueue_detach_unused(struct zxdh_virtqueue *vq); +struct rte_mbuf *zxdh_queue_detach_unused(struct zxdh_virtqueue *vq); int32_t zxdh_free_queues(struct rte_eth_dev *dev); int32_t zxdh_get_queue_type(uint16_t vtpci_queue_idx); -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 80274 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
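To make the call contract of patch 01 concrete, here is a hedged usage sketch of zxdh_np_host_init(), mirroring how zxdh_np_dtb_res_init() above fills ZXDH_DEV_INIT_CTRL_T. It uses rte_zmalloc instead of the patch's rte_malloc+memset; the 256-entry dump_addr_info allocation matches the patch. Note that pcie_vir_addr is declared uint32_t in this revision, so the 64-bit BAR address is truncated by the cast, exactly as in the patch.

#include <errno.h>
#include <rte_malloc.h>
#include "zxdh_np.h"
#include "zxdh_ethdev.h"

static int
np_host_init_example(struct zxdh_hw *hw)
{
	ZXDH_DEV_INIT_CTRL_T *ctrl;
	int ret;

	/* Control block plus room for 256 dump-table address entries. */
	ctrl = rte_zmalloc(NULL, sizeof(*ctrl) +
			sizeof(ZXDH_DTB_ADDR_INFO_T) * 256, 0);
	if (ctrl == NULL)
		return -ENOMEM;

	ctrl->queue_id = 0xff;			/* let the NP choose a DTB queue */
	ctrl->vport = hw->vport.vport;
	ctrl->vector = ZXDH_MSIX_INTR_DTB_VEC;
	ctrl->pcie_vir_addr = (uint32_t)hw->bar_addr[0];

	ret = zxdh_np_host_init(0, ctrl);	/* dev_id 0, as in the patch */
	rte_free(ctrl);
	return ret;
}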
* [PATCH v2 00/15] net/zxdh: updated net zxdh driver
  2024-12-06  5:57 ` [PATCH v1 01/15] net/zxdh: zxdh np init implementation Junlong Wang
@ 2024-12-10  5:53 ` Junlong Wang
  2024-12-10  5:53   ` [PATCH v2 01/15] net/zxdh: zxdh np init implementation Junlong Wang
  ` (14 more replies)
  0 siblings, 15 replies; 225+ messages in thread
From: Junlong Wang @ 2024-12-10 5:53 UTC (permalink / raw)
To: ferruh.yigit; +Cc: dev, Junlong Wang

[-- Attachment #1.1.1: Type: text/plain, Size: 2434 bytes --]

V2:
  - resolve code style and github-robot build issues.

V1:
  - updated the net zxdh driver:
    provided insert/delete/get table code funcs;
    provided link/mac/vlan/promiscuous/rss/mtu ops.

Junlong Wang (15):
  net/zxdh: zxdh np init implementation
  net/zxdh: zxdh np uninit implementation
  net/zxdh: port tables init implementations
  net/zxdh: port tables uninit implementations
  net/zxdh: rx/tx queue setup and intr enable
  net/zxdh: dev start/stop ops implementations
  net/zxdh: provided dev simple tx implementations
  net/zxdh: provided dev simple rx implementations
  net/zxdh: link info update, set link up/down
  net/zxdh: mac set/add/remove ops implementations
  net/zxdh: promisc/allmulti ops implementations
  net/zxdh: vlan filter/offload ops implementations
  net/zxdh: rss hash config/update, reta update/get
  net/zxdh: basic stats ops implementations
  net/zxdh: mtu update ops implementations

 doc/guides/nics/features/zxdh.ini  |   18 +
 doc/guides/nics/zxdh.rst           |   17 +
 drivers/net/zxdh/meson.build       |    4 +
 drivers/net/zxdh/zxdh_common.c     |   24 +
 drivers/net/zxdh/zxdh_common.h     |    1 +
 drivers/net/zxdh/zxdh_ethdev.c     |  575 +++++++-
 drivers/net/zxdh/zxdh_ethdev.h     |   37 +
 drivers/net/zxdh/zxdh_ethdev_ops.c | 1593 +++++++++++++++++++++
 drivers/net/zxdh/zxdh_ethdev_ops.h |   80 ++
 drivers/net/zxdh/zxdh_msg.c        |  164 +++
 drivers/net/zxdh/zxdh_msg.h        |  232 +++
 drivers/net/zxdh/zxdh_np.c         | 2144 ++++++++++++++++++++++++++++
 drivers/net/zxdh/zxdh_np.h         |  579 ++++++++
 drivers/net/zxdh/zxdh_pci.c        |   26 +-
 drivers/net/zxdh/zxdh_pci.h        |    9 +-
 drivers/net/zxdh/zxdh_queue.c      |  242 +++-
 drivers/net/zxdh/zxdh_queue.h      |  144 +-
 drivers/net/zxdh/zxdh_rxtx.c       |  804 +++++++++++
 drivers/net/zxdh/zxdh_rxtx.h       |    6 +
 drivers/net/zxdh/zxdh_tables.c     |  788 ++++++++++
 drivers/net/zxdh/zxdh_tables.h     |  232 +++
 21 files changed, 7682 insertions(+), 37 deletions(-)
 create mode 100644 drivers/net/zxdh/zxdh_ethdev_ops.c
 create mode 100644 drivers/net/zxdh/zxdh_ethdev_ops.h
 create mode 100644 drivers/net/zxdh/zxdh_np.c
 create mode 100644 drivers/net/zxdh/zxdh_np.h
 create mode 100644 drivers/net/zxdh/zxdh_rxtx.c
 create mode 100644 drivers/net/zxdh/zxdh_tables.c
 create mode 100644 drivers/net/zxdh/zxdh_tables.h

-- 
2.27.0

[-- Attachment #1.1.2: Type: text/html , Size: 4609 bytes --]

^ permalink raw reply	[flat|nested] 225+ messages in thread
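Patch 01 below keeps its process-shared state in a named memzone (ZXDH_PMD_SHARED_DATA_MZ, "zxdh_pmd_shared_data"). A minimal sketch of that primary/secondary pattern, reduced from zxdh_init_shared_data() in the patch:

#include <rte_eal.h>
#include <rte_memzone.h>

static struct zxdh_shared_data *
zxdh_shared_data_attach_sketch(void)
{
	const struct rte_memzone *mz;

	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
		/* The primary process reserves the zone once... */
		mz = rte_memzone_reserve("zxdh_pmd_shared_data",
				sizeof(struct zxdh_shared_data),
				SOCKET_ID_ANY, 0);
	else
		/* ...and secondary processes attach to it by name. */
		mz = rte_memzone_lookup("zxdh_pmd_shared_data");

	return mz == NULL ? NULL : mz->addr;
}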
* [PATCH v2 01/15] net/zxdh: zxdh np init implementation 2024-12-10 5:53 ` [PATCH v2 00/15] net/zxdh: updated net zxdh driver Junlong Wang @ 2024-12-10 5:53 ` Junlong Wang 2024-12-11 16:10 ` Stephen Hemminger ` (6 more replies) 2024-12-10 5:53 ` [PATCH v2 02/15] net/zxdh: zxdh np uninit implementation Junlong Wang ` (13 subsequent siblings) 14 siblings, 7 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-10 5:53 UTC (permalink / raw) To: ferruh.yigit; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 37225 bytes --] (np)network Processor initialize resources in host, and initialize a channel for some tables insert/get/del. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/meson.build | 1 + drivers/net/zxdh/zxdh_ethdev.c | 238 ++++++++++++++++++++-- drivers/net/zxdh/zxdh_ethdev.h | 27 +++ drivers/net/zxdh/zxdh_msg.c | 45 +++++ drivers/net/zxdh/zxdh_msg.h | 37 ++++ drivers/net/zxdh/zxdh_np.c | 347 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_np.h | 198 +++++++++++++++++++ drivers/net/zxdh/zxdh_pci.c | 2 +- drivers/net/zxdh/zxdh_pci.h | 6 +- drivers/net/zxdh/zxdh_queue.c | 2 +- drivers/net/zxdh/zxdh_queue.h | 14 +- 11 files changed, 884 insertions(+), 33 deletions(-) create mode 100644 drivers/net/zxdh/zxdh_np.c create mode 100644 drivers/net/zxdh/zxdh_np.h diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build index c9960f4c73..ab24a3145c 100644 --- a/drivers/net/zxdh/meson.build +++ b/drivers/net/zxdh/meson.build @@ -19,4 +19,5 @@ sources = files( 'zxdh_msg.c', 'zxdh_pci.c', 'zxdh_queue.c', + 'zxdh_np.c', ) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index c786198535..c54d1f6669 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -5,6 +5,7 @@ #include <ethdev_pci.h> #include <bus_pci_driver.h> #include <rte_ethdev.h> +#include <rte_malloc.h> #include "zxdh_ethdev.h" #include "zxdh_logs.h" @@ -12,8 +13,15 @@ #include "zxdh_msg.h" #include "zxdh_common.h" #include "zxdh_queue.h" +#include "zxdh_np.h" struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; +struct zxdh_shared_data *zxdh_shared_data; +const char *ZXDH_PMD_SHARED_DATA_MZ = "zxdh_pmd_shared_data"; +rte_spinlock_t zxdh_shared_data_lock = RTE_SPINLOCK_INITIALIZER; +struct zxdh_dtb_shared_data g_dtb_data; + +#define ZXDH_INVALID_DTBQUE 0xFFFF uint16_t zxdh_vport_to_vfid(union zxdh_virport_num v) @@ -406,14 +414,14 @@ zxdh_features_update(struct zxdh_hw *hw, ZXDH_VTPCI_OPS(hw)->set_features(hw, req_features); if ((rx_offloads & (RTE_ETH_TX_OFFLOAD_UDP_CKSUM | RTE_ETH_TX_OFFLOAD_TCP_CKSUM)) && - !vtpci_with_feature(hw, ZXDH_NET_F_GUEST_CSUM)) { + !zxdh_pci_with_feature(hw, ZXDH_NET_F_GUEST_CSUM)) { PMD_DRV_LOG(ERR, "rx checksum not available on this host"); return -ENOTSUP; } if ((rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) && - (!vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO4) || - !vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO6))) { + (!zxdh_pci_with_feature(hw, ZXDH_NET_F_GUEST_TSO4) || + !zxdh_pci_with_feature(hw, ZXDH_NET_F_GUEST_TSO6))) { PMD_DRV_LOG(ERR, "Large Receive Offload not available on this host"); return -ENOTSUP; } @@ -421,20 +429,20 @@ zxdh_features_update(struct zxdh_hw *hw, } static bool -rx_offload_enabled(struct zxdh_hw *hw) +zxdh_rx_offload_enabled(struct zxdh_hw *hw) { - return vtpci_with_feature(hw, ZXDH_NET_F_GUEST_CSUM) || - vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO4) || - vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO6); + return zxdh_pci_with_feature(hw, 
ZXDH_NET_F_GUEST_CSUM) || + zxdh_pci_with_feature(hw, ZXDH_NET_F_GUEST_TSO4) || + zxdh_pci_with_feature(hw, ZXDH_NET_F_GUEST_TSO6); } static bool -tx_offload_enabled(struct zxdh_hw *hw) +zxdh_tx_offload_enabled(struct zxdh_hw *hw) { - return vtpci_with_feature(hw, ZXDH_NET_F_CSUM) || - vtpci_with_feature(hw, ZXDH_NET_F_HOST_TSO4) || - vtpci_with_feature(hw, ZXDH_NET_F_HOST_TSO6) || - vtpci_with_feature(hw, ZXDH_NET_F_HOST_UFO); + return zxdh_pci_with_feature(hw, ZXDH_NET_F_CSUM) || + zxdh_pci_with_feature(hw, ZXDH_NET_F_HOST_TSO4) || + zxdh_pci_with_feature(hw, ZXDH_NET_F_HOST_TSO6) || + zxdh_pci_with_feature(hw, ZXDH_NET_F_HOST_UFO); } static void @@ -466,7 +474,7 @@ zxdh_dev_free_mbufs(struct rte_eth_dev *dev) continue; PMD_DRV_LOG(DEBUG, "Before freeing %s[%d] used and unused buf", type, i); - while ((buf = zxdh_virtqueue_detach_unused(vq)) != NULL) + while ((buf = zxdh_queue_detach_unused(vq)) != NULL) rte_pktmbuf_free(buf); } } @@ -550,9 +558,9 @@ zxdh_init_vring(struct zxdh_virtqueue *vq) vq->vq_desc_tail_idx = (uint16_t)(vq->vq_nentries - 1); vq->vq_free_cnt = vq->vq_nentries; memset(vq->vq_descx, 0, sizeof(struct zxdh_vq_desc_extra) * vq->vq_nentries); - vring_init_packed(&vq->vq_packed.ring, ring_mem, ZXDH_PCI_VRING_ALIGN, size); - vring_desc_init_packed(vq, size); - virtqueue_disable_intr(vq); + zxdh_vring_init_packed(&vq->vq_packed.ring, ring_mem, ZXDH_PCI_VRING_ALIGN, size); + zxdh_vring_desc_init_packed(vq, size); + zxdh_queue_disable_intr(vq); } static int32_t @@ -621,7 +629,7 @@ zxdh_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_logic_qidx) /* * Reserve a memzone for vring elements */ - size = vring_size(hw, vq_size, ZXDH_PCI_VRING_ALIGN); + size = zxdh_vring_size(hw, vq_size, ZXDH_PCI_VRING_ALIGN); vq->vq_ring_size = RTE_ALIGN_CEIL(size, ZXDH_PCI_VRING_ALIGN); PMD_DRV_LOG(DEBUG, "vring_size: %d, rounded_vring_size: %d", size, vq->vq_ring_size); @@ -694,7 +702,8 @@ zxdh_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_logic_qidx) /* first indirect descriptor is always the tx header */ struct zxdh_vring_packed_desc *start_dp = txr[i].tx_packed_indir; - vring_desc_init_indirect_packed(start_dp, RTE_DIM(txr[i].tx_packed_indir)); + zxdh_vring_desc_init_indirect_packed(start_dp, + RTE_DIM(txr[i].tx_packed_indir)); start_dp->addr = txvq->zxdh_net_hdr_mem + i * sizeof(*txr) + offsetof(struct zxdh_tx_region, tx_hdr); /* length will be updated to actual pi hdr size when xmit pkt */ @@ -792,8 +801,8 @@ zxdh_dev_configure(struct rte_eth_dev *dev) } } - hw->has_tx_offload = tx_offload_enabled(hw); - hw->has_rx_offload = rx_offload_enabled(hw); + hw->has_tx_offload = zxdh_tx_offload_enabled(hw); + hw->has_rx_offload = zxdh_rx_offload_enabled(hw); nr_vq = dev->data->nb_rx_queues + dev->data->nb_tx_queues; if (nr_vq == hw->queue_num) @@ -881,7 +890,7 @@ zxdh_init_device(struct rte_eth_dev *eth_dev) rte_ether_addr_copy((struct rte_ether_addr *)hw->mac_addr, ð_dev->data->mac_addrs[0]); /* If host does not support both status and MSI-X then disable LSC */ - if (vtpci_with_feature(hw, ZXDH_NET_F_STATUS) && hw->use_msix != ZXDH_MSIX_NONE) + if (zxdh_pci_with_feature(hw, ZXDH_NET_F_STATUS) && hw->use_msix != ZXDH_MSIX_NONE) eth_dev->data->dev_flags |= RTE_ETH_DEV_INTR_LSC; else eth_dev->data->dev_flags &= ~RTE_ETH_DEV_INTR_LSC; @@ -913,6 +922,185 @@ zxdh_agent_comm(struct rte_eth_dev *eth_dev, struct zxdh_hw *hw) return 0; } +static int +zxdh_np_dtb_res_init(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_bar_offset_params param = {0}; + struct 
zxdh_bar_offset_res res = {0}; + int ret = 0; + + if (g_dtb_data.init_done) { + PMD_DRV_LOG(DEBUG, "DTB res already init done, dev %s no need init", + dev->device->name); + return 0; + } + g_dtb_data.queueid = ZXDH_INVALID_DTBQUE; + g_dtb_data.bind_device = dev; + g_dtb_data.dev_refcnt++; + g_dtb_data.init_done = 1; + + ZXDH_DEV_INIT_CTRL_T *dpp_ctrl = rte_malloc(NULL, sizeof(*dpp_ctrl) + + sizeof(ZXDH_DTB_ADDR_INFO_T) * 256, 0); + + if (dpp_ctrl == NULL) { + PMD_DRV_LOG(ERR, "dev %s annot allocate memory for dpp_ctrl", dev->device->name); + ret = -ENOMEM; + goto free_res; + } + memset(dpp_ctrl, 0, sizeof(*dpp_ctrl) + sizeof(ZXDH_DTB_ADDR_INFO_T) * 256); + + dpp_ctrl->queue_id = 0xff; + dpp_ctrl->vport = hw->vport.vport; + dpp_ctrl->vector = ZXDH_MSIX_INTR_DTB_VEC; + strcpy((char *)dpp_ctrl->port_name, dev->device->name); + dpp_ctrl->pcie_vir_addr = (uint32_t)hw->bar_addr[0]; + + param.pcie_id = hw->pcie_id; + param.virt_addr = hw->bar_addr[0] + ZXDH_CTRLCH_OFFSET; + param.type = ZXDH_URI_NP; + + ret = zxdh_get_bar_offset(¶m, &res); + if (ret) { + PMD_DRV_LOG(ERR, "dev %s get npbar offset failed", dev->device->name); + goto free_res; + } + dpp_ctrl->np_bar_len = res.bar_length; + dpp_ctrl->np_bar_offset = res.bar_offset; + + if (!g_dtb_data.dtb_table_conf_mz) { + const struct rte_memzone *conf_mz = rte_memzone_reserve_aligned("zxdh_dtb_table_conf_mz", + ZXDH_DTB_TABLE_CONF_SIZE, SOCKET_ID_ANY, 0, RTE_CACHE_LINE_SIZE); + + if (conf_mz == NULL) { + PMD_DRV_LOG(ERR, + "dev %s annot allocate memory for dtb table conf", + dev->device->name); + ret = -ENOMEM; + goto free_res; + } + dpp_ctrl->down_vir_addr = conf_mz->addr_64; + dpp_ctrl->down_phy_addr = conf_mz->iova; + g_dtb_data.dtb_table_conf_mz = conf_mz; + } + + if (!g_dtb_data.dtb_table_dump_mz) { + const struct rte_memzone *dump_mz = rte_memzone_reserve_aligned("zxdh_dtb_table_dump_mz", + ZXDH_DTB_TABLE_DUMP_SIZE, SOCKET_ID_ANY, 0, RTE_CACHE_LINE_SIZE); + + if (dump_mz == NULL) { + PMD_DRV_LOG(ERR, + "dev %s Cannot allocate memory for dtb table dump", + dev->device->name); + ret = -ENOMEM; + goto free_res; + } + dpp_ctrl->dump_vir_addr = dump_mz->addr_64; + dpp_ctrl->dump_phy_addr = dump_mz->iova; + g_dtb_data.dtb_table_dump_mz = dump_mz; + } + + ret = zxdh_np_host_init(0, dpp_ctrl); + if (ret) { + PMD_DRV_LOG(ERR, "dev %s dpp host np init failed .ret %d", dev->device->name, ret); + goto free_res; + } + + PMD_DRV_LOG(DEBUG, "dev %s dpp host np init ok.dtb queue %d", + dev->device->name, dpp_ctrl->queue_id); + g_dtb_data.queueid = dpp_ctrl->queue_id; + rte_free(dpp_ctrl); + return 0; + +free_res: + rte_free(dpp_ctrl); + return ret; +} + +static int +zxdh_init_shared_data(void) +{ + const struct rte_memzone *mz; + int ret = 0; + + rte_spinlock_lock(&zxdh_shared_data_lock); + if (zxdh_shared_data == NULL) { + if (rte_eal_process_type() == RTE_PROC_PRIMARY) { + /* Allocate shared memory. */ + mz = rte_memzone_reserve(ZXDH_PMD_SHARED_DATA_MZ, + sizeof(*zxdh_shared_data), SOCKET_ID_ANY, 0); + if (mz == NULL) { + PMD_DRV_LOG(ERR, "Cannot allocate zxdh shared data"); + ret = -rte_errno; + goto error; + } + zxdh_shared_data = mz->addr; + memset(zxdh_shared_data, 0, sizeof(*zxdh_shared_data)); + rte_spinlock_init(&zxdh_shared_data->lock); + } else { /* Lookup allocated shared memory. 
*/ + mz = rte_memzone_lookup(ZXDH_PMD_SHARED_DATA_MZ); + if (mz == NULL) { + PMD_DRV_LOG(ERR, "Cannot attach zxdh shared data"); + ret = -rte_errno; + goto error; + } + zxdh_shared_data = mz->addr; + } + } + +error: + rte_spinlock_unlock(&zxdh_shared_data_lock); + return ret; +} + +static int +zxdh_init_once(void) +{ + int ret = 0; + + if (zxdh_init_shared_data()) + return -1; + + struct zxdh_shared_data *sd = zxdh_shared_data; + rte_spinlock_lock(&sd->lock); + if (rte_eal_process_type() == RTE_PROC_SECONDARY) { + if (!sd->init_done) { + ++sd->secondary_cnt; + sd->init_done = true; + } + goto out; + } + /* RTE_PROC_PRIMARY */ + if (!sd->init_done) + sd->init_done = true; + sd->dev_refcnt++; + +out: + rte_spinlock_unlock(&sd->lock); + return ret; +} + +static int +zxdh_np_init(struct rte_eth_dev *eth_dev) +{ + struct zxdh_hw *hw = eth_dev->data->dev_private; + int ret = 0; + + if (hw->is_pf) { + ret = zxdh_np_dtb_res_init(eth_dev); + if (ret) { + PMD_DRV_LOG(ERR, "np dtb init failed, ret:%d ", ret); + return ret; + } + } + if (zxdh_shared_data != NULL) + zxdh_shared_data->np_init_done = 1; + + PMD_DRV_LOG(DEBUG, "np init ok "); + return 0; +} + + static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) { @@ -950,6 +1138,10 @@ zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) hw->is_pf = 1; } + ret = zxdh_init_once(); + if (ret != 0) + goto err_zxdh_init; + ret = zxdh_init_device(eth_dev); if (ret < 0) goto err_zxdh_init; @@ -977,6 +1169,10 @@ zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) if (ret != 0) goto err_zxdh_init; + ret = zxdh_np_init(eth_dev); + if (ret) + goto err_zxdh_init; + ret = zxdh_configure_intr(eth_dev); if (ret != 0) goto err_zxdh_init; diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index 7658cbb461..78b1edd5a4 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -34,6 +34,9 @@ #define ZXDH_QUERES_SHARE_BASE (0x5000) #define ZXDH_MBUF_BURST_SZ 64 +#define ZXDH_MAX_BASE_DTB_TABLE_COUNT 30 +#define ZXDH_DTB_TABLE_DUMP_SIZE (32 * (16 + 16 * 1024)) +#define ZXDH_DTB_TABLE_CONF_SIZE (32 * (16 + 16 * 1024)) union zxdh_virport_num { uint16_t vport; @@ -89,6 +92,30 @@ struct zxdh_hw { uint8_t has_rx_offload; }; +struct zxdh_dtb_shared_data { + int init_done; + char name[32]; + uint16_t queueid; + uint16_t vport; + uint32_t vector; + const struct rte_memzone *dtb_table_conf_mz; + const struct rte_memzone *dtb_table_dump_mz; + const struct rte_memzone *dtb_table_bulk_dump_mz[ZXDH_MAX_BASE_DTB_TABLE_COUNT]; + struct rte_eth_dev *bind_device; + uint32_t dev_refcnt; +}; + +/* Shared data between primary and secondary processes. */ +struct zxdh_shared_data { + rte_spinlock_t lock; /* Global spinlock for primary and secondary processes. */ + int32_t init_done; /* Whether primary has done initialization. */ + unsigned int secondary_cnt; /* Number of secondary processes init'd. 
*/ + + int32_t np_init_done; + uint32_t dev_refcnt; + struct zxdh_dtb_shared_data *dtb_data; +}; + uint16_t zxdh_vport_to_vfid(union zxdh_virport_num v); #endif /* ZXDH_ETHDEV_H */ diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c index 53cf972f86..a0a005b178 100644 --- a/drivers/net/zxdh/zxdh_msg.c +++ b/drivers/net/zxdh/zxdh_msg.c @@ -1035,3 +1035,48 @@ zxdh_bar_irq_recv(uint8_t src, uint8_t dst, uint64_t virt_addr, void *dev) rte_free(recved_msg); return ZXDH_BAR_MSG_OK; } + +int zxdh_get_bar_offset(struct zxdh_bar_offset_params *paras, + struct zxdh_bar_offset_res *res) +{ + uint16_t check_token = 0; + uint16_t sum_res = 0; + int ret = 0; + + if (!paras) + return ZXDH_BAR_MSG_ERR_NULL; + + struct zxdh_offset_get_msg send_msg = { + .pcie_id = paras->pcie_id, + .type = paras->type, + }; + struct zxdh_pci_bar_msg in = {0}; + + in.payload_addr = &send_msg; + in.payload_len = sizeof(send_msg); + in.virt_addr = paras->virt_addr; + in.src = ZXDH_MSG_CHAN_END_PF; + in.dst = ZXDH_MSG_CHAN_END_RISC; + in.module_id = ZXDH_BAR_MODULE_OFFSET_GET; + in.src_pcieid = paras->pcie_id; + + struct zxdh_bar_recv_msg recv_msg = {0}; + struct zxdh_msg_recviver_mem result = { + .recv_buffer = &recv_msg, + .buffer_len = sizeof(recv_msg), + }; + ret = zxdh_bar_chan_sync_msg_send(&in, &result); + if (ret != ZXDH_BAR_MSG_OK) + return -ret; + + check_token = recv_msg.offset_reps.check; + sum_res = zxdh_bar_get_sum((uint8_t *)&send_msg, sizeof(send_msg)); + + if (check_token != sum_res) { + PMD_MSG_LOG(ERR, "expect token: 0x%x, get token: 0x%x", sum_res, check_token); + return ZXDH_BAR_MSG_ERR_REPLY; + } + res->bar_offset = recv_msg.offset_reps.offset; + res->bar_length = recv_msg.offset_reps.length; + return ZXDH_BAR_MSG_OK; +} diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index 530ee406b1..fbc79e8f9d 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -131,6 +131,26 @@ enum ZXDH_TBL_MSG_TYPE { ZXDH_TBL_TYPE_NON, }; +enum pciebar_layout_type { + ZXDH_URI_VQM = 0, + ZXDH_URI_SPINLOCK = 1, + ZXDH_URI_FWCAP = 2, + ZXDH_URI_FWSHR = 3, + ZXDH_URI_DRS_SEC = 4, + ZXDH_URI_RSV = 5, + ZXDH_URI_CTRLCH = 6, + ZXDH_URI_1588 = 7, + ZXDH_URI_QBV = 8, + ZXDH_URI_MACPCS = 9, + ZXDH_URI_RDMA = 10, + ZXDH_URI_MNP = 11, + ZXDH_URI_MSPM = 12, + ZXDH_URI_MVQM = 13, + ZXDH_URI_MDPI = 14, + ZXDH_URI_NP = 15, + ZXDH_URI_MAX, +}; + struct zxdh_msix_para { uint16_t pcie_id; uint16_t vector_risc; @@ -174,6 +194,17 @@ struct zxdh_bar_offset_reps { uint32_t length; } __rte_packed; +struct zxdh_bar_offset_params { + uint64_t virt_addr; /* Bar space control space virtual address */ + uint16_t pcie_id; + uint16_t type; /* Module types corresponding to PCIBAR planning */ +}; + +struct zxdh_bar_offset_res { + uint32_t bar_offset; + uint32_t bar_length; +}; + struct zxdh_bar_recv_msg { uint8_t reps_ok; uint16_t reps_len; @@ -204,9 +235,15 @@ struct zxdh_bar_msg_header { uint16_t dst_pcieid; /* used in PF-->VF */ }; +struct zxdh_offset_get_msg { + uint16_t pcie_id; + uint16_t type; +}; + typedef int (*zxdh_bar_chan_msg_recv_callback)(void *pay_load, uint16_t len, void *reps_buffer, uint16_t *reps_len, void *dev); +int zxdh_get_bar_offset(struct zxdh_bar_offset_params *paras, struct zxdh_bar_offset_res *res); int zxdh_msg_chan_init(void); int zxdh_bar_msg_chan_exit(void); int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev); diff --git a/drivers/net/zxdh/zxdh_np.c b/drivers/net/zxdh/zxdh_np.c new file mode 100644 index 0000000000..9c50039fb1 --- /dev/null +++ 
b/drivers/net/zxdh/zxdh_np.c @@ -0,0 +1,347 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#include <stdlib.h> +#include <string.h> + +#include <rte_common.h> +#include <rte_log.h> + +#include "zxdh_np.h" +#include "zxdh_logs.h" + +static uint64_t g_np_bar_offset; +static ZXDH_DEV_MGR_T g_dev_mgr = {0}; +static ZXDH_SDT_MGR_T g_sdt_mgr = {0}; +ZXDH_PPU_CLS_BITMAP_T g_ppu_cls_bit_map[ZXDH_DEV_CHANNEL_MAX]; +ZXDH_DTB_MGR_T *p_dpp_dtb_mgr[ZXDH_DEV_CHANNEL_MAX] = {NULL}; + +#define ZXDH_COMM_ASSERT(x) assert(x) +#define ZXDH_SDT_MGR_PTR_GET() (&g_sdt_mgr) +#define ZXDH_SDT_SOFT_TBL_GET(id) (g_sdt_mgr.sdt_tbl_array[id]) + +#define ZXDH_COMM_CHECK_DEV_POINT(dev_id, point)\ +do {\ + if (NULL == (point)) {\ + PMD_DRV_LOG(ERR, "dev: %d ZXIC %s:%d[Error:POINT NULL] !"\ + "FUNCTION : %s!", (dev_id), __FILE__, __LINE__, __func__);\ + ZXDH_COMM_ASSERT(0);\ + } \ +} while (0) + +#define ZXDH_COMM_CHECK_DEV_RC(dev_id, rc, becall)\ +do {\ + if ((rc) != 0) {\ + PMD_DRV_LOG(ERR, "dev: %d ZXIC %s:%d !"\ + "-- %s Call %s Fail!", (dev_id), __FILE__, __LINE__, __func__, becall);\ + ZXDH_COMM_ASSERT(0);\ + } \ +} while (0) + +#define ZXDH_COMM_CHECK_POINT_NO_ASSERT(point)\ +do {\ + if ((point) == NULL) {\ + PMD_DRV_LOG(ERR, "ZXIC %s:%d[Error:POINT NULL] ! FUNCTION : %s!",\ + __FILE__, __LINE__, __func__);\ + } \ +} while (0) + +#define ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, becall)\ +do {\ + if ((rc) != 0) {\ + PMD_DRV_LOG(ERR, "ZXIC %s:%d !-- %s Call %s"\ + " Fail!", __FILE__, __LINE__, __func__, becall);\ + } \ +} while (0) + +#define ZXDH_COMM_CHECK_RC(rc, becall)\ +do {\ + if ((rc) != 0) {\ + PMD_DRV_LOG(ERR, "ZXIC %s:%d!-- %s Call %s "\ + "Fail!", __FILE__, __LINE__, __func__, becall);\ + ZXDH_COMM_ASSERT(0);\ + } \ +} while (0) + +static uint32_t +zxdh_np_dev_init(void) +{ + if (g_dev_mgr.is_init) { + PMD_DRV_LOG(ERR, "Dev is already initialized."); + return 0; + } + + g_dev_mgr.device_num = 0; + g_dev_mgr.is_init = 1; + + return 0; +} + +static uint32_t +zxdh_np_dev_add(uint32_t dev_id, ZXDH_DEV_TYPE_E dev_type, + ZXDH_DEV_ACCESS_TYPE_E access_type, uint64_t pcie_addr, + uint64_t riscv_addr, uint64_t dma_vir_addr, + uint64_t dma_phy_addr) +{ + ZXDH_DEV_CFG_T *p_dev_info = NULL; + ZXDH_DEV_MGR_T *p_dev_mgr = NULL; + + p_dev_mgr = &g_dev_mgr; + if (!p_dev_mgr->is_init) { + PMD_DRV_LOG(ERR, "ErrorCode[ 0x%x]: Device Manager is not init!!!", + ZXDH_RC_DEV_MGR_NOT_INIT); + return ZXDH_RC_DEV_MGR_NOT_INIT; + } + + if (p_dev_mgr->p_dev_array[dev_id] != NULL) { + /* device is already exist. */ + PMD_DRV_LOG(ERR, "Device is added again!!!"); + p_dev_info = p_dev_mgr->p_dev_array[dev_id]; + } else { + /* device is new. 
*/ + p_dev_info = (ZXDH_DEV_CFG_T *)malloc(sizeof(ZXDH_DEV_CFG_T)); + ZXDH_COMM_CHECK_DEV_POINT(dev_id, p_dev_info); + p_dev_mgr->p_dev_array[dev_id] = p_dev_info; + p_dev_mgr->device_num++; + } + + p_dev_info->device_id = dev_id; + p_dev_info->dev_type = dev_type; + p_dev_info->access_type = access_type; + p_dev_info->pcie_addr = pcie_addr; + p_dev_info->riscv_addr = riscv_addr; + p_dev_info->dma_vir_addr = dma_vir_addr; + p_dev_info->dma_phy_addr = dma_phy_addr; + + return 0; +} + +static uint32_t +zxdh_np_dev_agent_status_set(uint32_t dev_id, uint32_t agent_flag) +{ + ZXDH_DEV_MGR_T *p_dev_mgr = NULL; + ZXDH_DEV_CFG_T *p_dev_info = NULL; + + p_dev_mgr = &g_dev_mgr; + p_dev_info = p_dev_mgr->p_dev_array[dev_id]; + + if (p_dev_info == NULL) + return ZXDH_DEV_TYPE_INVALID; + p_dev_info->agent_flag = agent_flag; + + return 0; +} + +static uint32_t +zxdh_np_sdt_mgr_init(void) +{ + if (!g_sdt_mgr.is_init) { + g_sdt_mgr.channel_num = 0; + g_sdt_mgr.is_init = 1; + memset(g_sdt_mgr.sdt_tbl_array, 0, ZXDH_DEV_CHANNEL_MAX * + sizeof(ZXDH_SDT_SOFT_TABLE_T *)); + } + + return 0; +} + +static uint32_t +zxdh_np_sdt_mgr_create(uint32_t dev_id) +{ + ZXDH_SDT_SOFT_TABLE_T *p_sdt_tbl_temp = NULL; + ZXDH_SDT_MGR_T *p_sdt_mgr = NULL; + + p_sdt_mgr = ZXDH_SDT_MGR_PTR_GET(); + + if (ZXDH_SDT_SOFT_TBL_GET(dev_id) == NULL) { + p_sdt_tbl_temp = malloc(sizeof(ZXDH_SDT_SOFT_TABLE_T)); + + p_sdt_tbl_temp->device_id = dev_id; + memset(p_sdt_tbl_temp->sdt_array, 0, ZXDH_DEV_SDT_ID_MAX * sizeof(ZXDH_SDT_ITEM_T)); + + ZXDH_SDT_SOFT_TBL_GET(dev_id) = p_sdt_tbl_temp; + + p_sdt_mgr->channel_num++; + } else { + PMD_DRV_LOG(ERR, "Error: %s for dev[%d]" + "is called repeatedly!", __func__, dev_id); + return -1; + } + + return 0; +} + +static uint32_t +zxdh_np_sdt_init(uint32_t dev_num, uint32_t *dev_id_array) +{ + uint32_t rc = 0; + uint32_t i = 0; + + zxdh_np_sdt_mgr_init(); + + for (i = 0; i < dev_num; i++) { + rc = zxdh_np_sdt_mgr_create(dev_id_array[i]); + ZXDH_COMM_CHECK_RC(rc, "zxdh_sdt_mgr_create"); + } + + return 0; +} + +static uint32_t +zxdh_np_ppu_parse_cls_bitmap(uint32_t dev_id, + uint32_t bitmap) +{ + uint32_t cls_id = 0; + uint32_t mem_id = 0; + uint32_t cls_use = 0; + uint32_t instr_mem = 0; + + for (cls_id = 0; cls_id < ZXDH_PPU_CLUSTER_NUM; cls_id++) { + cls_use = (bitmap >> cls_id) & 0x1; + g_ppu_cls_bit_map[dev_id].cls_use[cls_id] = cls_use; + } + + for (mem_id = 0; mem_id < ZXDH_PPU_INSTR_MEM_NUM; mem_id++) { + instr_mem = (bitmap >> (mem_id * 2)) & 0x3; + g_ppu_cls_bit_map[dev_id].instr_mem[mem_id] = ((instr_mem > 0) ? 
1 : 0); + } + + return 0; +} + +static ZXDH_DTB_MGR_T * +zxdh_np_dtb_mgr_get(uint32_t dev_id) +{ + if (dev_id >= ZXDH_DEV_CHANNEL_MAX) + return NULL; + else + return p_dpp_dtb_mgr[dev_id]; +} + +static uint32_t +zxdh_np_dtb_soft_init(uint32_t dev_id) +{ + ZXDH_DTB_MGR_T *p_dtb_mgr = NULL; + + p_dtb_mgr = zxdh_np_dtb_mgr_get(dev_id); + if (p_dtb_mgr == NULL) { + p_dpp_dtb_mgr[dev_id] = (ZXDH_DTB_MGR_T *)malloc(sizeof(ZXDH_DTB_MGR_T)); + memset(p_dpp_dtb_mgr[dev_id], 0, sizeof(ZXDH_DTB_MGR_T)); + p_dtb_mgr = zxdh_np_dtb_mgr_get(dev_id); + if (p_dtb_mgr == NULL) + return 1; + } + + return 0; +} + +static unsigned int +zxdh_np_base_soft_init(unsigned int dev_id, ZXDH_SYS_INIT_CTRL_T *p_init_ctrl) +{ + unsigned int rt = 0; + unsigned int access_type = 0; + unsigned int dev_id_array[ZXDH_DEV_CHANNEL_MAX] = {0}; + unsigned int agent_flag = 0; + + rt = zxdh_np_dev_init(); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rt, "zxdh_dev_init"); + + if (p_init_ctrl->flags & ZXDH_INIT_FLAG_ACCESS_TYPE) + access_type = ZXDH_DEV_ACCESS_TYPE_RISCV; + else + access_type = ZXDH_DEV_ACCESS_TYPE_PCIE; + + if (p_init_ctrl->flags & ZXDH_INIT_FLAG_AGENT_FLAG) + agent_flag = ZXDH_DEV_AGENT_ENABLE; + else + agent_flag = ZXDH_DEV_AGENT_DISABLE; + + rt = zxdh_np_dev_add(dev_id, + p_init_ctrl->device_type, + access_type, + p_init_ctrl->pcie_vir_baddr, + p_init_ctrl->riscv_vir_baddr, + p_init_ctrl->dma_vir_baddr, + p_init_ctrl->dma_phy_baddr); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rt, "zxdh_dev_add"); + + rt = zxdh_np_dev_agent_status_set(dev_id, agent_flag); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rt, "zxdh_dev_agent_status_set"); + + dev_id_array[0] = dev_id; + rt = zxdh_np_sdt_init(1, dev_id_array); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rt, "zxdh_sdt_init"); + + rt = zxdh_np_ppu_parse_cls_bitmap(dev_id, ZXDH_PPU_CLS_ALL_START); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rt, "zxdh_ppu_parse_cls_bitmap"); + + rt = zxdh_np_dtb_soft_init(dev_id); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rt, "zxdh_dtb_soft_init"); + + return 0; +} + +static uint32_t +zxdh_np_dev_vport_set(uint32_t dev_id, uint32_t vport) +{ + ZXDH_DEV_MGR_T *p_dev_mgr = NULL; + ZXDH_DEV_CFG_T *p_dev_info = NULL; + + p_dev_mgr = &g_dev_mgr; + p_dev_info = p_dev_mgr->p_dev_array[dev_id]; + p_dev_info->vport = vport; + + return 0; +} + +static uint32_t +zxdh_np_dev_agent_addr_set(uint32_t dev_id, uint64_t agent_addr) +{ + ZXDH_DEV_MGR_T *p_dev_mgr = NULL; + ZXDH_DEV_CFG_T *p_dev_info = NULL; + + p_dev_mgr = &g_dev_mgr; + p_dev_info = p_dev_mgr->p_dev_array[dev_id]; + p_dev_info->agent_addr = agent_addr; + + return 0; +} + +static uint64_t +zxdh_np_addr_calc(uint64_t pcie_vir_baddr, uint32_t bar_offset) +{ + uint64_t np_addr = 0; + + np_addr = ((pcie_vir_baddr + bar_offset) > ZXDH_PCIE_NP_MEM_SIZE) + ? 
(pcie_vir_baddr + bar_offset - ZXDH_PCIE_NP_MEM_SIZE) : 0; + g_np_bar_offset = bar_offset; + + return np_addr; +} + +int +zxdh_np_host_init(uint32_t dev_id, + ZXDH_DEV_INIT_CTRL_T *p_dev_init_ctrl) +{ + unsigned int rc = 0; + uint64_t agent_addr = 0; + ZXDH_SYS_INIT_CTRL_T sys_init_ctrl = {0}; + + ZXDH_COMM_CHECK_POINT_NO_ASSERT(p_dev_init_ctrl); + + sys_init_ctrl.flags = (ZXDH_DEV_ACCESS_TYPE_PCIE << 0) | (ZXDH_DEV_AGENT_ENABLE << 10); + sys_init_ctrl.pcie_vir_baddr = zxdh_np_addr_calc(p_dev_init_ctrl->pcie_vir_addr, + p_dev_init_ctrl->np_bar_offset); + sys_init_ctrl.device_type = ZXDH_DEV_TYPE_CHIP; + rc = zxdh_np_base_soft_init(dev_id, &sys_init_ctrl); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_base_soft_init"); + + rc = zxdh_np_dev_vport_set(dev_id, p_dev_init_ctrl->vport); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_dev_vport_set"); + + agent_addr = ZXDH_PCIE_AGENT_ADDR_OFFSET + p_dev_init_ctrl->pcie_vir_addr; + rc = zxdh_np_dev_agent_addr_set(dev_id, agent_addr); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_dev_agent_addr_set"); + return 0; +} diff --git a/drivers/net/zxdh/zxdh_np.h b/drivers/net/zxdh/zxdh_np.h new file mode 100644 index 0000000000..573eafe796 --- /dev/null +++ b/drivers/net/zxdh/zxdh_np.h @@ -0,0 +1,198 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2023 ZTE Corporation + */ + +#ifndef ZXDH_NP_H +#define ZXDH_NP_H + +#include <stdint.h> + +#define ZXDH_PORT_NAME_MAX (32) +#define ZXDH_DEV_CHANNEL_MAX (2) +#define ZXDH_DEV_SDT_ID_MAX (256U) +/*DTB*/ +#define ZXDH_DTB_QUEUE_ITEM_NUM_MAX (32) +#define ZXDH_DTB_QUEUE_NUM_MAX (128) + +#define ZXDH_PPU_CLS_ALL_START (0x3F) +#define ZXDH_PPU_CLUSTER_NUM (6) +#define ZXDH_PPU_INSTR_MEM_NUM (3) +#define ZXDH_SDT_CFG_LEN (2) + +#define ZXDH_RC_DEV_BASE (0x600) +#define ZXDH_RC_DEV_PARA_INVALID (ZXDH_RC_DEV_BASE | 0x0) +#define ZXDH_RC_DEV_RANGE_INVALID (ZXDH_RC_DEV_BASE | 0x1) +#define ZXDH_RC_DEV_CALL_FUNC_FAIL (ZXDH_RC_DEV_BASE | 0x2) +#define ZXDH_RC_DEV_TYPE_INVALID (ZXDH_RC_DEV_BASE | 0x3) +#define ZXDH_RC_DEV_CONNECT_FAIL (ZXDH_RC_DEV_BASE | 0x4) +#define ZXDH_RC_DEV_MSG_INVALID (ZXDH_RC_DEV_BASE | 0x5) +#define ZXDH_RC_DEV_NOT_EXIST (ZXDH_RC_DEV_BASE | 0x6) +#define ZXDH_RC_DEV_MGR_NOT_INIT (ZXDH_RC_DEV_BASE | 0x7) +#define ZXDH_RC_DEV_CFG_NOT_INIT (ZXDH_RC_DEV_BASE | 0x8) + +#define ZXDH_SYS_VF_NP_BASE_OFFSET 0 +#define ZXDH_PCIE_DTB4K_ADDR_OFFSET (0x6000) +#define ZXDH_PCIE_NP_MEM_SIZE (0x2000000) +#define ZXDH_PCIE_AGENT_ADDR_OFFSET (0x2000) + +#define ZXDH_INIT_FLAG_ACCESS_TYPE (1 << 0) +#define ZXDH_INIT_FLAG_SERDES_DOWN_TP (1 << 1) +#define ZXDH_INIT_FLAG_DDR_BACKDOOR (1 << 2) +#define ZXDH_INIT_FLAG_SA_MODE (1 << 3) +#define ZXDH_INIT_FLAG_SA_MESH (1 << 4) +#define ZXDH_INIT_FLAG_SA_SERDES_MODE (1 << 5) +#define ZXDH_INIT_FLAG_INT_DEST_MODE (1 << 6) +#define ZXDH_INIT_FLAG_LIF0_MODE (1 << 7) +#define ZXDH_INIT_FLAG_DMA_ENABLE (1 << 8) +#define ZXDH_INIT_FLAG_TM_IMEM_FLAG (1 << 9) +#define ZXDH_INIT_FLAG_AGENT_FLAG (1 << 10) + +typedef enum zxdh_module_init_e { + ZXDH_MODULE_INIT_NPPU = 0, + ZXDH_MODULE_INIT_PPU, + ZXDH_MODULE_INIT_SE, + ZXDH_MODULE_INIT_ETM, + ZXDH_MODULE_INIT_DLB, + ZXDH_MODULE_INIT_TRPG, + ZXDH_MODULE_INIT_TSN, + ZXDH_MODULE_INIT_MAX +} ZXDH_MODULE_INIT_E; + +typedef enum zxdh_dev_type_e { + ZXDH_DEV_TYPE_SIM = 0, + ZXDH_DEV_TYPE_VCS = 1, + ZXDH_DEV_TYPE_CHIP = 2, + ZXDH_DEV_TYPE_FPGA = 3, + ZXDH_DEV_TYPE_PCIE_ACC = 4, + ZXDH_DEV_TYPE_INVALID, +} ZXDH_DEV_TYPE_E; + +typedef enum zxdh_dev_access_type_e { + ZXDH_DEV_ACCESS_TYPE_PCIE = 0, + ZXDH_DEV_ACCESS_TYPE_RISCV = 1, + 
ZXDH_DEV_ACCESS_TYPE_INVALID, +} ZXDH_DEV_ACCESS_TYPE_E; + +typedef enum zxdh_dev_agent_flag_e { + ZXDH_DEV_AGENT_DISABLE = 0, + ZXDH_DEV_AGENT_ENABLE = 1, + ZXDH_DEV_AGENT_INVALID, +} ZXDH_DEV_AGENT_FLAG_E; + +typedef struct zxdh_dtb_tab_up_user_addr_t { + uint32_t user_flag; + uint64_t phy_addr; + uint64_t vir_addr; +} ZXDH_DTB_TAB_UP_USER_ADDR_T; + +typedef struct zxdh_dtb_tab_up_info_t { + uint64_t start_phy_addr; + uint64_t start_vir_addr; + uint32_t item_size; + uint32_t wr_index; + uint32_t rd_index; + uint32_t data_len[ZXDH_DTB_QUEUE_ITEM_NUM_MAX]; + ZXDH_DTB_TAB_UP_USER_ADDR_T user_addr[ZXDH_DTB_QUEUE_ITEM_NUM_MAX]; +} ZXDH_DTB_TAB_UP_INFO_T; + +typedef struct zxdh_dtb_tab_down_info_t { + uint64_t start_phy_addr; + uint64_t start_vir_addr; + uint32_t item_size; + uint32_t wr_index; + uint32_t rd_index; +} ZXDH_DTB_TAB_DOWN_INFO_T; + +typedef struct zxdh_dtb_queue_info_t { + uint32_t init_flag; + uint32_t vport; + uint32_t vector; + ZXDH_DTB_TAB_UP_INFO_T tab_up; + ZXDH_DTB_TAB_DOWN_INFO_T tab_down; +} ZXDH_DTB_QUEUE_INFO_T; + +typedef struct zxdh_dtb_mgr_t { + ZXDH_DTB_QUEUE_INFO_T queue_info[ZXDH_DTB_QUEUE_NUM_MAX]; +} ZXDH_DTB_MGR_T; + +typedef struct zxdh_ppu_cls_bitmap_t { + uint32_t cls_use[ZXDH_PPU_CLUSTER_NUM]; + uint32_t instr_mem[ZXDH_PPU_INSTR_MEM_NUM]; +} ZXDH_PPU_CLS_BITMAP_T; + +typedef struct dpp_sdt_item_t { + uint32_t valid; + uint32_t table_cfg[ZXDH_SDT_CFG_LEN]; +} ZXDH_SDT_ITEM_T; + +typedef struct dpp_sdt_soft_table_t { + uint32_t device_id; + ZXDH_SDT_ITEM_T sdt_array[ZXDH_DEV_SDT_ID_MAX]; +} ZXDH_SDT_SOFT_TABLE_T; + +typedef struct zxdh_sys_init_ctrl_t { + ZXDH_DEV_TYPE_E device_type; + uint32_t flags; + uint32_t sa_id; + uint32_t case_num; + uint32_t lif0_port_type; + uint32_t lif1_port_type; + uint64_t pcie_vir_baddr; + uint64_t riscv_vir_baddr; + uint64_t dma_vir_baddr; + uint64_t dma_phy_baddr; +} ZXDH_SYS_INIT_CTRL_T; + +typedef struct dpp_dev_cfg_t { + uint32_t device_id; + ZXDH_DEV_TYPE_E dev_type; + uint32_t chip_ver; + uint32_t access_type; + uint32_t agent_flag; + uint32_t vport; + uint64_t pcie_addr; + uint64_t riscv_addr; + uint64_t dma_vir_addr; + uint64_t dma_phy_addr; + uint64_t agent_addr; + uint32_t init_flags[ZXDH_MODULE_INIT_MAX]; +} ZXDH_DEV_CFG_T; + +typedef struct zxdh_dev_mngr_t { + uint32_t device_num; + uint32_t is_init; + ZXDH_DEV_CFG_T *p_dev_array[ZXDH_DEV_CHANNEL_MAX]; +} ZXDH_DEV_MGR_T; + +typedef struct zxdh_dtb_addr_info_t { + uint32_t sdt_no; + uint32_t size; + uint32_t phy_addr; + uint32_t vir_addr; +} ZXDH_DTB_ADDR_INFO_T; + +typedef struct zxdh_dev_init_ctrl_t { + uint32_t vport; + char port_name[ZXDH_PORT_NAME_MAX]; + uint32_t vector; + uint32_t queue_id; + uint32_t np_bar_offset; + uint32_t np_bar_len; + uint32_t pcie_vir_addr; + uint32_t down_phy_addr; + uint32_t down_vir_addr; + uint32_t dump_phy_addr; + uint32_t dump_vir_addr; + uint32_t dump_sdt_num; + ZXDH_DTB_ADDR_INFO_T dump_addr_info[]; +} ZXDH_DEV_INIT_CTRL_T; + +typedef struct zxdh_sdt_mgr_t { + uint32_t channel_num; + uint32_t is_init; + ZXDH_SDT_SOFT_TABLE_T *sdt_tbl_array[ZXDH_DEV_CHANNEL_MAX]; +} ZXDH_SDT_MGR_T; + +int zxdh_np_host_init(uint32_t dev_id, ZXDH_DEV_INIT_CTRL_T *p_dev_init_ctrl); + +#endif /* ZXDH_NP_H */ diff --git a/drivers/net/zxdh/zxdh_pci.c b/drivers/net/zxdh/zxdh_pci.c index 06d3f92b20..250e67d560 100644 --- a/drivers/net/zxdh/zxdh_pci.c +++ b/drivers/net/zxdh/zxdh_pci.c @@ -159,7 +159,7 @@ zxdh_setup_queue(struct zxdh_hw *hw, struct zxdh_virtqueue *vq) desc_addr = vq->vq_ring_mem; avail_addr = desc_addr + vq->vq_nentries * sizeof(struct 
zxdh_vring_desc); - if (vtpci_packed_queue(vq->hw)) { + if (zxdh_pci_packed_queue(vq->hw)) { used_addr = RTE_ALIGN_CEIL((avail_addr + sizeof(struct zxdh_vring_packed_desc_event)), ZXDH_PCI_VRING_ALIGN); diff --git a/drivers/net/zxdh/zxdh_pci.h b/drivers/net/zxdh/zxdh_pci.h index ed6fd89742..d6487a574f 100644 --- a/drivers/net/zxdh/zxdh_pci.h +++ b/drivers/net/zxdh/zxdh_pci.h @@ -114,15 +114,15 @@ struct zxdh_pci_common_cfg { }; static inline int32_t -vtpci_with_feature(struct zxdh_hw *hw, uint64_t bit) +zxdh_pci_with_feature(struct zxdh_hw *hw, uint64_t bit) { return (hw->guest_features & (1ULL << bit)) != 0; } static inline int32_t -vtpci_packed_queue(struct zxdh_hw *hw) +zxdh_pci_packed_queue(struct zxdh_hw *hw) { - return vtpci_with_feature(hw, ZXDH_F_RING_PACKED); + return zxdh_pci_with_feature(hw, ZXDH_F_RING_PACKED); } struct zxdh_pci_ops { diff --git a/drivers/net/zxdh/zxdh_queue.c b/drivers/net/zxdh/zxdh_queue.c index 462a88b23c..b4ef90ea36 100644 --- a/drivers/net/zxdh/zxdh_queue.c +++ b/drivers/net/zxdh/zxdh_queue.c @@ -13,7 +13,7 @@ #include "zxdh_msg.h" struct rte_mbuf * -zxdh_virtqueue_detach_unused(struct zxdh_virtqueue *vq) +zxdh_queue_detach_unused(struct zxdh_virtqueue *vq) { struct rte_mbuf *cookie = NULL; int32_t idx = 0; diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h index 1088bf08fc..1304d5e4ea 100644 --- a/drivers/net/zxdh/zxdh_queue.h +++ b/drivers/net/zxdh/zxdh_queue.h @@ -206,11 +206,11 @@ struct zxdh_tx_region { }; static inline size_t -vring_size(struct zxdh_hw *hw, uint32_t num, unsigned long align) +zxdh_vring_size(struct zxdh_hw *hw, uint32_t num, unsigned long align) { size_t size; - if (vtpci_packed_queue(hw)) { + if (zxdh_pci_packed_queue(hw)) { size = num * sizeof(struct zxdh_vring_packed_desc); size += sizeof(struct zxdh_vring_packed_desc_event); size = RTE_ALIGN_CEIL(size, align); @@ -226,7 +226,7 @@ vring_size(struct zxdh_hw *hw, uint32_t num, unsigned long align) } static inline void -vring_init_packed(struct zxdh_vring_packed *vr, uint8_t *p, +zxdh_vring_init_packed(struct zxdh_vring_packed *vr, uint8_t *p, unsigned long align, uint32_t num) { vr->num = num; @@ -238,7 +238,7 @@ vring_init_packed(struct zxdh_vring_packed *vr, uint8_t *p, } static inline void -vring_desc_init_packed(struct zxdh_virtqueue *vq, int32_t n) +zxdh_vring_desc_init_packed(struct zxdh_virtqueue *vq, int32_t n) { int32_t i = 0; @@ -251,7 +251,7 @@ vring_desc_init_packed(struct zxdh_virtqueue *vq, int32_t n) } static inline void -vring_desc_init_indirect_packed(struct zxdh_vring_packed_desc *dp, int32_t n) +zxdh_vring_desc_init_indirect_packed(struct zxdh_vring_packed_desc *dp, int32_t n) { int32_t i = 0; @@ -262,7 +262,7 @@ vring_desc_init_indirect_packed(struct zxdh_vring_packed_desc *dp, int32_t n) } static inline void -virtqueue_disable_intr(struct zxdh_virtqueue *vq) +zxdh_queue_disable_intr(struct zxdh_virtqueue *vq) { if (vq->vq_packed.event_flags_shadow != ZXDH_RING_EVENT_FLAGS_DISABLE) { vq->vq_packed.event_flags_shadow = ZXDH_RING_EVENT_FLAGS_DISABLE; @@ -270,7 +270,7 @@ virtqueue_disable_intr(struct zxdh_virtqueue *vq) } } -struct rte_mbuf *zxdh_virtqueue_detach_unused(struct zxdh_virtqueue *vq); +struct rte_mbuf *zxdh_queue_detach_unused(struct zxdh_virtqueue *vq); int32_t zxdh_free_queues(struct rte_eth_dev *dev); int32_t zxdh_get_queue_type(uint16_t vtpci_queue_idx); -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 80270 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
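A note on the init path above: zxdh_init_shared_data()/zxdh_init_once() follow the
standard DPDK multi-process idiom, where the primary process reserves a named memzone
holding a spinlock-protected state block and secondary processes attach by looking the
zone up by name. A minimal self-contained sketch of that pattern (the memzone name and
struct layout here are illustrative, not the driver's own):

    #include <string.h>

    #include <rte_eal.h>
    #include <rte_errno.h>
    #include <rte_memzone.h>
    #include <rte_spinlock.h>

    struct shared_state {
            rte_spinlock_t lock;        /* lives inside the shared zone */
            int init_done;
            unsigned int secondary_cnt;
    };

    static struct shared_state *state; /* per-process pointer into the zone */

    static int
    shared_state_attach(void)
    {
            const struct rte_memzone *mz;

            if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
                    /* Primary creates the zone; reserved memzones are not
                     * guaranteed to be zeroed, so clear before publishing. */
                    mz = rte_memzone_reserve("demo_shared_state",
                                    sizeof(*state), SOCKET_ID_ANY, 0);
                    if (mz == NULL)
                            return -rte_errno;
                    memset(mz->addr, 0, sizeof(*state));
                    state = mz->addr;
                    rte_spinlock_init(&state->lock);
            } else {
                    /* Secondary attaches to the zone the primary created. */
                    mz = rte_memzone_lookup("demo_shared_state");
                    if (mz == NULL)
                            return -rte_errno;
                    state = mz->addr;
            }
            return 0;
    }

The process-local zxdh_shared_data_lock in the patch guards only the reserve/lookup
step; the lock embedded in the shared struct is what primary and secondary processes
actually contend on, which is why it must live inside the memzone rather than in
process-local BSS.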
* Re: [PATCH v2 01/15] net/zxdh: zxdh np init implementation 2024-12-10 5:53 ` [PATCH v2 01/15] net/zxdh: zxdh np init implementation Junlong Wang @ 2024-12-11 16:10 ` Stephen Hemminger 2024-12-12 2:06 ` Junlong Wang ` (5 subsequent siblings) 6 siblings, 0 replies; 225+ messages in thread From: Stephen Hemminger @ 2024-12-11 16:10 UTC (permalink / raw) To: Junlong Wang; +Cc: ferruh.yigit, dev On Tue, 10 Dec 2024 13:53:19 +0800 Junlong Wang <wang.junlong1@zte.com.cn> wrote: > (np)network Processor initialize resources in host, > and initialize a channel for some tables insert/get/del. > > Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> This mostly looks good, just some small stuff. > --- > drivers/net/zxdh/meson.build | 1 + > drivers/net/zxdh/zxdh_ethdev.c | 238 ++++++++++++++++++++-- > drivers/net/zxdh/zxdh_ethdev.h | 27 +++ > drivers/net/zxdh/zxdh_msg.c | 45 +++++ > drivers/net/zxdh/zxdh_msg.h | 37 ++++ > drivers/net/zxdh/zxdh_np.c | 347 +++++++++++++++++++++++++++++++++ > drivers/net/zxdh/zxdh_np.h | 198 +++++++++++++++++++ > drivers/net/zxdh/zxdh_pci.c | 2 +- > drivers/net/zxdh/zxdh_pci.h | 6 +- > drivers/net/zxdh/zxdh_queue.c | 2 +- > drivers/net/zxdh/zxdh_queue.h | 14 +- > 11 files changed, 884 insertions(+), 33 deletions(-) > create mode 100644 drivers/net/zxdh/zxdh_np.c > create mode 100644 drivers/net/zxdh/zxdh_np.h > > diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build > index c9960f4c73..ab24a3145c 100644 > --- a/drivers/net/zxdh/meson.build > +++ b/drivers/net/zxdh/meson.build > @@ -19,4 +19,5 @@ sources = files( > 'zxdh_msg.c', > 'zxdh_pci.c', > 'zxdh_queue.c', > + 'zxdh_np.c', > ) > diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c > index c786198535..c54d1f6669 100644 > --- a/drivers/net/zxdh/zxdh_ethdev.c > +++ b/drivers/net/zxdh/zxdh_ethdev.c > @@ -5,6 +5,7 @@ > #include <ethdev_pci.h> > #include <bus_pci_driver.h> > #include <rte_ethdev.h> > +#include <rte_malloc.h> > > #include "zxdh_ethdev.h" > #include "zxdh_logs.h" > @@ -12,8 +13,15 @@ > #include "zxdh_msg.h" > #include "zxdh_common.h" > #include "zxdh_queue.h" > +#include "zxdh_np.h" > > struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; If you want to support primary/secondary in future, variables in BSS are not shared between primary and secondary process > +struct zxdh_shared_data *zxdh_shared_data; > +const char *ZXDH_PMD_SHARED_DATA_MZ = "zxdh_pmd_shared_data"; > +rte_spinlock_t zxdh_shared_data_lock = RTE_SPINLOCK_INITIALIZER; > +struct zxdh_dtb_shared_data g_dtb_data; The shared data will be a problem if you support multiple devices. Or is this really a singleton device with only one bus and slot. 
> + > +#define ZXDH_INVALID_DTBQUE 0xFFFF > > uint16_t > zxdh_vport_to_vfid(union zxdh_virport_num v) > > +static int > +zxdh_np_dtb_res_init(struct rte_eth_dev *dev) > +{ > + struct zxdh_hw *hw = dev->data->dev_private; > + struct zxdh_bar_offset_params param = {0}; > + struct zxdh_bar_offset_res res = {0}; > + int ret = 0; > + > + if (g_dtb_data.init_done) { > + PMD_DRV_LOG(DEBUG, "DTB res already init done, dev %s no need init", > + dev->device->name); > + return 0; > + } > + g_dtb_data.queueid = ZXDH_INVALID_DTBQUE; > + g_dtb_data.bind_device = dev; > + g_dtb_data.dev_refcnt++; > + g_dtb_data.init_done = 1; > + > + ZXDH_DEV_INIT_CTRL_T *dpp_ctrl = rte_malloc(NULL, sizeof(*dpp_ctrl) + > + sizeof(ZXDH_DTB_ADDR_INFO_T) * 256, 0); > + > + if (dpp_ctrl == NULL) { > + PMD_DRV_LOG(ERR, "dev %s annot allocate memory for dpp_ctrl", dev->device->name); > + ret = -ENOMEM; > + goto free_res; > + } > + memset(dpp_ctrl, 0, sizeof(*dpp_ctrl) + sizeof(ZXDH_DTB_ADDR_INFO_T) * 256); You could use rte_zmalloc() and avoid having to do memset. > + > + dpp_ctrl->queue_id = 0xff; > + dpp_ctrl->vport = hw->vport.vport; > + dpp_ctrl->vector = ZXDH_MSIX_INTR_DTB_VEC; > + strcpy((char *)dpp_ctrl->port_name, dev->device->name); Why the cast, port_name is already character. Should use strlcpy() incase device name is bigger than port_name. > + dpp_ctrl->pcie_vir_addr = (uint32_t)hw->bar_addr[0]; > + > + param.pcie_id = hw->pcie_id; > + param.virt_addr = hw->bar_addr[0] + ZXDH_CTRLCH_OFFSET; > + param.type = ZXDH_URI_NP; > + > + ret = zxdh_get_bar_offset(¶m, &res); > + if (ret) { > + PMD_DRV_LOG(ERR, "dev %s get npbar offset failed", dev->device->name); > + goto free_res; > + } > + dpp_ctrl->np_bar_len = res.bar_length; > + dpp_ctrl->np_bar_offset = res.bar_offset; > + > + if (!g_dtb_data.dtb_table_conf_mz) { > + const struct rte_memzone *conf_mz = rte_memzone_reserve_aligned("zxdh_dtb_table_conf_mz", > + ZXDH_DTB_TABLE_CONF_SIZE, SOCKET_ID_ANY, 0, RTE_CACHE_LINE_SIZE); > + > + if (conf_mz == NULL) { > + PMD_DRV_LOG(ERR, > + "dev %s annot allocate memory for dtb table conf", > + dev->device->name); > + ret = -ENOMEM; > + goto free_res; > + } > + dpp_ctrl->down_vir_addr = conf_mz->addr_64; > + dpp_ctrl->down_phy_addr = conf_mz->iova; > + g_dtb_data.dtb_table_conf_mz = conf_mz; > + } > + > + if (!g_dtb_data.dtb_table_dump_mz) { > + const struct rte_memzone *dump_mz = rte_memzone_reserve_aligned("zxdh_dtb_table_dump_mz", > + ZXDH_DTB_TABLE_DUMP_SIZE, SOCKET_ID_ANY, 0, RTE_CACHE_LINE_SIZE); > + > + if (dump_mz == NULL) { > + PMD_DRV_LOG(ERR, > + "dev %s Cannot allocate memory for dtb table dump", > + dev->device->name); > + ret = -ENOMEM; > + goto free_res; > + } > + dpp_ctrl->dump_vir_addr = dump_mz->addr_64; > + dpp_ctrl->dump_phy_addr = dump_mz->iova; > + g_dtb_data.dtb_table_dump_mz = dump_mz; > + } > + > + ret = zxdh_np_host_init(0, dpp_ctrl); > + if (ret) { > + PMD_DRV_LOG(ERR, "dev %s dpp host np init failed .ret %d", dev->device->name, ret); > + goto free_res; > + } > + > + PMD_DRV_LOG(DEBUG, "dev %s dpp host np init ok.dtb queue %d", > + dev->device->name, dpp_ctrl->queue_id); > + g_dtb_data.queueid = dpp_ctrl->queue_id; > + rte_free(dpp_ctrl); > + return 0; > + > +free_res: > + rte_free(dpp_ctrl); > + return ret; > +} > + > +static int > +zxdh_init_shared_data(void) > +{ > + const struct rte_memzone *mz; > + int ret = 0; > + > + rte_spinlock_lock(&zxdh_shared_data_lock); > + if (zxdh_shared_data == NULL) { > + if (rte_eal_process_type() == RTE_PROC_PRIMARY) { > + /* Allocate shared memory. 
*/ > + mz = rte_memzone_reserve(ZXDH_PMD_SHARED_DATA_MZ, > + sizeof(*zxdh_shared_data), SOCKET_ID_ANY, 0); > + if (mz == NULL) { > + PMD_DRV_LOG(ERR, "Cannot allocate zxdh shared data"); > + ret = -rte_errno; > + goto error; > + } > + zxdh_shared_data = mz->addr; > + memset(zxdh_shared_data, 0, sizeof(*zxdh_shared_data)); > + rte_spinlock_init(&zxdh_shared_data->lock); > + } else { /* Lookup allocated shared memory. */ > + mz = rte_memzone_lookup(ZXDH_PMD_SHARED_DATA_MZ); > + if (mz == NULL) { > + PMD_DRV_LOG(ERR, "Cannot attach zxdh shared data"); > + ret = -rte_errno; > + goto error; > + } > + zxdh_shared_data = mz->addr; > + } > + } > + > +error: > + rte_spinlock_unlock(&zxdh_shared_data_lock); > + return ret; > +} > + > +static int > +zxdh_init_once(void) > +{ > + int ret = 0; > + > + if (zxdh_init_shared_data()) > + return -1; > + > + struct zxdh_shared_data *sd = zxdh_shared_data; > + rte_spinlock_lock(&sd->lock); > + if (rte_eal_process_type() == RTE_PROC_SECONDARY) { > + if (!sd->init_done) { > + ++sd->secondary_cnt; > + sd->init_done = true; > + } > + goto out; > + } > + /* RTE_PROC_PRIMARY */ > + if (!sd->init_done) > + sd->init_done = true; > + sd->dev_refcnt++; > + > +out: > + rte_spinlock_unlock(&sd->lock); > + return ret; > +} > + > +static int > +zxdh_np_init(struct rte_eth_dev *eth_dev) > +{ > + struct zxdh_hw *hw = eth_dev->data->dev_private; > + int ret = 0; > + > + if (hw->is_pf) { > + ret = zxdh_np_dtb_res_init(eth_dev); > + if (ret) { > + PMD_DRV_LOG(ERR, "np dtb init failed, ret:%d ", ret); > + return ret; > + } > + } > + if (zxdh_shared_data != NULL) > + zxdh_shared_data->np_init_done = 1; > + > + PMD_DRV_LOG(DEBUG, "np init ok "); > + return 0; > +} > + > + > static int > zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) > { > @@ -950,6 +1138,10 @@ zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) > hw->is_pf = 1; > } > > + ret = zxdh_init_once(); > + if (ret != 0) > + goto err_zxdh_init; > + > ret = zxdh_init_device(eth_dev); > if (ret < 0) > goto err_zxdh_init; > @@ -977,6 +1169,10 @@ zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) > if (ret != 0) > goto err_zxdh_init; > > + ret = zxdh_np_init(eth_dev); > + if (ret) > + goto err_zxdh_init; > + > ret = zxdh_configure_intr(eth_dev); > if (ret != 0) > goto err_zxdh_init; > diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h > index 7658cbb461..78b1edd5a4 100644 > --- a/drivers/net/zxdh/zxdh_ethdev.h > +++ b/drivers/net/zxdh/zxdh_ethdev.h > @@ -34,6 +34,9 @@ > #define ZXDH_QUERES_SHARE_BASE (0x5000) > > #define ZXDH_MBUF_BURST_SZ 64 > +#define ZXDH_MAX_BASE_DTB_TABLE_COUNT 30 > +#define ZXDH_DTB_TABLE_DUMP_SIZE (32 * (16 + 16 * 1024)) > +#define ZXDH_DTB_TABLE_CONF_SIZE (32 * (16 + 16 * 1024)) > > union zxdh_virport_num { > uint16_t vport; > @@ -89,6 +92,30 @@ struct zxdh_hw { > uint8_t has_rx_offload; > }; > > +struct zxdh_dtb_shared_data { > + int init_done; You mix int and int32 when these are really booleans. Maybe use bool type > + char name[32]; Better to not hardcode 32 directly. Maybe ZXDH_MAX_NAMELEN as a #define > + uint16_t queueid; > + uint16_t vport; > + uint32_t vector; > + const struct rte_memzone *dtb_table_conf_mz; > + const struct rte_memzone *dtb_table_dump_mz; > + const struct rte_memzone *dtb_table_bulk_dump_mz[ZXDH_MAX_BASE_DTB_TABLE_COUNT]; > + struct rte_eth_dev *bind_device; > + uint32_t dev_refcnt; > +}; > + > +/* Shared data between primary and secondary processes. 
*/ > +struct zxdh_shared_data { > + rte_spinlock_t lock; /* Global spinlock for primary and secondary processes. */ > + int32_t init_done; /* Whether primary has done initialization. */ > + unsigned int secondary_cnt; /* Number of secondary processes init'd. */ > + > + int32_t np_init_done; > + uint32_t dev_refcnt; > + struct zxdh_dtb_shared_data *dtb_data; > +}; > + > uint16_t zxdh_vport_to_vfid(union zxdh_virport_num v); > > #endif /* ZXDH_ETHDEV_H */ > diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c > index 53cf972f86..a0a005b178 100644 > --- a/drivers/net/zxdh/zxdh_msg.c > +++ b/drivers/net/zxdh/zxdh_msg.c > @@ -1035,3 +1035,48 @@ zxdh_bar_irq_recv(uint8_t src, uint8_t dst, uint64_t virt_addr, void *dev) > rte_free(recved_msg); > return ZXDH_BAR_MSG_OK; > } > + > +int zxdh_get_bar_offset(struct zxdh_bar_offset_params *paras, > + struct zxdh_bar_offset_res *res) > +{ > + uint16_t check_token = 0; > + uint16_t sum_res = 0; > + int ret = 0; unnecessary initialization, first usage will set. > + > + if (!paras) > + return ZXDH_BAR_MSG_ERR_NULL; > + > + struct zxdh_offset_get_msg send_msg = { > + .pcie_id = paras->pcie_id, > + .type = paras->type, > + }; > + struct zxdh_pci_bar_msg in = {0}; > + > + in.payload_addr = &send_msg; > + in.payload_len = sizeof(send_msg); > + in.virt_addr = paras->virt_addr; > + in.src = ZXDH_MSG_CHAN_END_PF; > + in.dst = ZXDH_MSG_CHAN_END_RISC; > + in.module_id = ZXDH_BAR_MODULE_OFFSET_GET; > + in.src_pcieid = paras->pcie_id; Could use struct initializer here > + struct zxdh_bar_recv_msg recv_msg = {0}; > + struct zxdh_msg_recviver_mem result = { > + .recv_buffer = &recv_msg, > + .buffer_len = sizeof(recv_msg), > + }; > + ret = zxdh_bar_chan_sync_msg_send(&in, &result); > + if (ret != ZXDH_BAR_MSG_OK) > + return -ret; > + > + check_token = recv_msg.offset_reps.check; > + sum_res = zxdh_bar_get_sum((uint8_t *)&send_msg, sizeof(send_msg)); > + > + if (check_token != sum_res) { > + PMD_MSG_LOG(ERR, "expect token: 0x%x, get token: 0x%x", sum_res, check_token); > + return ZXDH_BAR_MSG_ERR_REPLY; > + } > + res->bar_offset = recv_msg.offset_reps.offset; > + res->bar_length = recv_msg.offset_reps.length; > + return ZXDH_BAR_MSG_OK; > +} > diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h > index 530ee406b1..fbc79e8f9d 100644 > --- a/drivers/net/zxdh/zxdh_msg.h > +++ b/drivers/net/zxdh/zxdh_msg.h > @@ -131,6 +131,26 @@ enum ZXDH_TBL_MSG_TYPE { > ZXDH_TBL_TYPE_NON, > }; > > +enum pciebar_layout_type { > + ZXDH_URI_VQM = 0, > + ZXDH_URI_SPINLOCK = 1, > + ZXDH_URI_FWCAP = 2, > + ZXDH_URI_FWSHR = 3, > + ZXDH_URI_DRS_SEC = 4, > + ZXDH_URI_RSV = 5, > + ZXDH_URI_CTRLCH = 6, > + ZXDH_URI_1588 = 7, > + ZXDH_URI_QBV = 8, > + ZXDH_URI_MACPCS = 9, > + ZXDH_URI_RDMA = 10, > + ZXDH_URI_MNP = 11, > + ZXDH_URI_MSPM = 12, > + ZXDH_URI_MVQM = 13, > + ZXDH_URI_MDPI = 14, > + ZXDH_URI_NP = 15, > + ZXDH_URI_MAX, > +}; > + > struct zxdh_msix_para { > uint16_t pcie_id; > uint16_t vector_risc; > @@ -174,6 +194,17 @@ struct zxdh_bar_offset_reps { > uint32_t length; > } __rte_packed; > > +struct zxdh_bar_offset_params { > + uint64_t virt_addr; /* Bar space control space virtual address */ > + uint16_t pcie_id; > + uint16_t type; /* Module types corresponding to PCIBAR planning */ > +}; > + > +struct zxdh_bar_offset_res { > + uint32_t bar_offset; > + uint32_t bar_length; > +}; > + > struct zxdh_bar_recv_msg { > uint8_t reps_ok; > uint16_t reps_len; > @@ -204,9 +235,15 @@ struct zxdh_bar_msg_header { > uint16_t dst_pcieid; /* used in PF-->VF */ > }; > > 
+struct zxdh_offset_get_msg { > + uint16_t pcie_id; > + uint16_t type; > +}; > + > typedef int (*zxdh_bar_chan_msg_recv_callback)(void *pay_load, uint16_t len, > void *reps_buffer, uint16_t *reps_len, void *dev); > > +int zxdh_get_bar_offset(struct zxdh_bar_offset_params *paras, struct zxdh_bar_offset_res *res); > int zxdh_msg_chan_init(void); > int zxdh_bar_msg_chan_exit(void); > int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev); > diff --git a/drivers/net/zxdh/zxdh_np.c b/drivers/net/zxdh/zxdh_np.c > new file mode 100644 > index 0000000000..9c50039fb1 > --- /dev/null > +++ b/drivers/net/zxdh/zxdh_np.c > @@ -0,0 +1,347 @@ > +/* SPDX-License-Identifier: BSD-3-Clause > + * Copyright(c) 2024 ZTE Corporation > + */ > + > +#include <stdlib.h> > +#include <string.h> > + > +#include <rte_common.h> > +#include <rte_log.h> > + > +#include "zxdh_np.h" > +#include "zxdh_logs.h" > + > +static uint64_t g_np_bar_offset; > +static ZXDH_DEV_MGR_T g_dev_mgr = {0}; > +static ZXDH_SDT_MGR_T g_sdt_mgr = {0}; > +ZXDH_PPU_CLS_BITMAP_T g_ppu_cls_bit_map[ZXDH_DEV_CHANNEL_MAX]; > +ZXDH_DTB_MGR_T *p_dpp_dtb_mgr[ZXDH_DEV_CHANNEL_MAX] = {NULL}; > + > +#define ZXDH_COMM_ASSERT(x) assert(x) > +#define ZXDH_SDT_MGR_PTR_GET() (&g_sdt_mgr) > +#define ZXDH_SDT_SOFT_TBL_GET(id) (g_sdt_mgr.sdt_tbl_array[id]) > + > +#define ZXDH_COMM_CHECK_DEV_POINT(dev_id, point)\ > +do {\ > + if (NULL == (point)) {\ > + PMD_DRV_LOG(ERR, "dev: %d ZXIC %s:%d[Error:POINT NULL] !"\ > + "FUNCTION : %s!", (dev_id), __FILE__, __LINE__, __func__);\ > + ZXDH_COMM_ASSERT(0);\ > + } \ > +} while (0) > + > +#define ZXDH_COMM_CHECK_DEV_RC(dev_id, rc, becall)\ > +do {\ > + if ((rc) != 0) {\ > + PMD_DRV_LOG(ERR, "dev: %d ZXIC %s:%d !"\ > + "-- %s Call %s Fail!", (dev_id), __FILE__, __LINE__, __func__, becall);\ > + ZXDH_COMM_ASSERT(0);\ > + } \ > +} while (0) > + > +#define ZXDH_COMM_CHECK_POINT_NO_ASSERT(point)\ > +do {\ > + if ((point) == NULL) {\ > + PMD_DRV_LOG(ERR, "ZXIC %s:%d[Error:POINT NULL] ! FUNCTION : %s!",\ > + __FILE__, __LINE__, __func__);\ > + } \ > +} while (0) > + > +#define ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, becall)\ > +do {\ > + if ((rc) != 0) {\ > + PMD_DRV_LOG(ERR, "ZXIC %s:%d !-- %s Call %s"\ > + " Fail!", __FILE__, __LINE__, __func__, becall);\ > + } \ > +} while (0) > + > +#define ZXDH_COMM_CHECK_RC(rc, becall)\ > +do {\ > + if ((rc) != 0) {\ > + PMD_DRV_LOG(ERR, "ZXIC %s:%d!-- %s Call %s "\ > + "Fail!", __FILE__, __LINE__, __func__, becall);\ > + ZXDH_COMM_ASSERT(0);\ > + } \ > +} while (0) Better to use RTE_ASSERT() or RTE_VERIFY() here rather than custom macros > + > +static uint32_t > +zxdh_np_dev_init(void) > +{ > + if (g_dev_mgr.is_init) { > + PMD_DRV_LOG(ERR, "Dev is already initialized."); > + return 0; > + } > + > + g_dev_mgr.device_num = 0; > + g_dev_mgr.is_init = 1; > + > + return 0; > +} > + > +static uint32_t > +zxdh_np_dev_add(uint32_t dev_id, ZXDH_DEV_TYPE_E dev_type, > + ZXDH_DEV_ACCESS_TYPE_E access_type, uint64_t pcie_addr, > + uint64_t riscv_addr, uint64_t dma_vir_addr, > + uint64_t dma_phy_addr) > +{ > + ZXDH_DEV_CFG_T *p_dev_info = NULL; > + ZXDH_DEV_MGR_T *p_dev_mgr = NULL; > + > + p_dev_mgr = &g_dev_mgr; > + if (!p_dev_mgr->is_init) { > + PMD_DRV_LOG(ERR, "ErrorCode[ 0x%x]: Device Manager is not init!!!", > + ZXDH_RC_DEV_MGR_NOT_INIT); > + return ZXDH_RC_DEV_MGR_NOT_INIT; > + } > + > + if (p_dev_mgr->p_dev_array[dev_id] != NULL) { > + /* device is already exist. */ > + PMD_DRV_LOG(ERR, "Device is added again!!!"); > + p_dev_info = p_dev_mgr->p_dev_array[dev_id]; > + } else { > + /* device is new. 
*/ > + p_dev_info = (ZXDH_DEV_CFG_T *)malloc(sizeof(ZXDH_DEV_CFG_T)); > + ZXDH_COMM_CHECK_DEV_POINT(dev_id, p_dev_info); > + p_dev_mgr->p_dev_array[dev_id] = p_dev_info; > + p_dev_mgr->device_num++; > + } > + > + p_dev_info->device_id = dev_id; > + p_dev_info->dev_type = dev_type; > + p_dev_info->access_type = access_type; > + p_dev_info->pcie_addr = pcie_addr; > + p_dev_info->riscv_addr = riscv_addr; > + p_dev_info->dma_vir_addr = dma_vir_addr; > + p_dev_info->dma_phy_addr = dma_phy_addr; > + > + return 0; > +} > + > +static uint32_t > +zxdh_np_dev_agent_status_set(uint32_t dev_id, uint32_t agent_flag) > +{ > + ZXDH_DEV_MGR_T *p_dev_mgr = NULL; > + ZXDH_DEV_CFG_T *p_dev_info = NULL; > + > + p_dev_mgr = &g_dev_mgr; > + p_dev_info = p_dev_mgr->p_dev_array[dev_id]; > + > + if (p_dev_info == NULL) > + return ZXDH_DEV_TYPE_INVALID; > + p_dev_info->agent_flag = agent_flag; > + > + return 0; > +} > + > +static uint32_t > +zxdh_np_sdt_mgr_init(void) > +{ > + if (!g_sdt_mgr.is_init) { > + g_sdt_mgr.channel_num = 0; > + g_sdt_mgr.is_init = 1; > + memset(g_sdt_mgr.sdt_tbl_array, 0, ZXDH_DEV_CHANNEL_MAX * > + sizeof(ZXDH_SDT_SOFT_TABLE_T *)); > + } > + > + return 0; > +} > + > +static uint32_t > +zxdh_np_sdt_mgr_create(uint32_t dev_id) > +{ > + ZXDH_SDT_SOFT_TABLE_T *p_sdt_tbl_temp = NULL; > + ZXDH_SDT_MGR_T *p_sdt_mgr = NULL; > + > + p_sdt_mgr = ZXDH_SDT_MGR_PTR_GET(); > + > + if (ZXDH_SDT_SOFT_TBL_GET(dev_id) == NULL) { > + p_sdt_tbl_temp = malloc(sizeof(ZXDH_SDT_SOFT_TABLE_T)); > + > + p_sdt_tbl_temp->device_id = dev_id; > + memset(p_sdt_tbl_temp->sdt_array, 0, ZXDH_DEV_SDT_ID_MAX * sizeof(ZXDH_SDT_ITEM_T)); > + > + ZXDH_SDT_SOFT_TBL_GET(dev_id) = p_sdt_tbl_temp; > + > + p_sdt_mgr->channel_num++; > + } else { > + PMD_DRV_LOG(ERR, "Error: %s for dev[%d]" > + "is called repeatedly!", __func__, dev_id); > + return -1; > + } > + > + return 0; > +} > + > +static uint32_t > +zxdh_np_sdt_init(uint32_t dev_num, uint32_t *dev_id_array) > +{ > + uint32_t rc = 0; > + uint32_t i = 0; > + > + zxdh_np_sdt_mgr_init(); > + > + for (i = 0; i < dev_num; i++) { > + rc = zxdh_np_sdt_mgr_create(dev_id_array[i]); > + ZXDH_COMM_CHECK_RC(rc, "zxdh_sdt_mgr_create"); > + } > + > + return 0; > +} > + > +static uint32_t > +zxdh_np_ppu_parse_cls_bitmap(uint32_t dev_id, > + uint32_t bitmap) > +{ > + uint32_t cls_id = 0; > + uint32_t mem_id = 0; > + uint32_t cls_use = 0; > + uint32_t instr_mem = 0; > + > + for (cls_id = 0; cls_id < ZXDH_PPU_CLUSTER_NUM; cls_id++) { > + cls_use = (bitmap >> cls_id) & 0x1; > + g_ppu_cls_bit_map[dev_id].cls_use[cls_id] = cls_use; > + } > + > + for (mem_id = 0; mem_id < ZXDH_PPU_INSTR_MEM_NUM; mem_id++) { > + instr_mem = (bitmap >> (mem_id * 2)) & 0x3; > + g_ppu_cls_bit_map[dev_id].instr_mem[mem_id] = ((instr_mem > 0) ? 1 : 0); > + } > + > + return 0; > +} > + > +static ZXDH_DTB_MGR_T * > +zxdh_np_dtb_mgr_get(uint32_t dev_id) > +{ > + if (dev_id >= ZXDH_DEV_CHANNEL_MAX) > + return NULL; > + else > + return p_dpp_dtb_mgr[dev_id]; > +} > + > +static uint32_t > +zxdh_np_dtb_soft_init(uint32_t dev_id) > +{ > + ZXDH_DTB_MGR_T *p_dtb_mgr = NULL; > + > + p_dtb_mgr = zxdh_np_dtb_mgr_get(dev_id); > + if (p_dtb_mgr == NULL) { > + p_dpp_dtb_mgr[dev_id] = (ZXDH_DTB_MGR_T *)malloc(sizeof(ZXDH_DTB_MGR_T)); malloc() returns void *, cast here is not needed. Why does DTB_MGR_T come from malloc when most of other data is using rte_malloc()? 
It will matter if you support multiprocess > + memset(p_dpp_dtb_mgr[dev_id], 0, sizeof(ZXDH_DTB_MGR_T)); > + p_dtb_mgr = zxdh_np_dtb_mgr_get(dev_id); > + if (p_dtb_mgr == NULL) > + return 1; > + } > + > + return 0; > +} > + > +static unsigned int > +zxdh_np_base_soft_init(unsigned int dev_id, ZXDH_SYS_INIT_CTRL_T *p_init_ctrl) > +{ > + unsigned int rt = 0; > + unsigned int access_type = 0; > + unsigned int dev_id_array[ZXDH_DEV_CHANNEL_MAX] = {0}; > + unsigned int agent_flag = 0; Why init variable here, and set it one line later? > + > + rt = zxdh_np_dev_init(); > + ZXDH_COMM_CHECK_DEV_RC(dev_id, rt, "zxdh_dev_init"); > + > + if (p_init_ctrl->flags & ZXDH_INIT_FLAG_ACCESS_TYPE) > + access_type = ZXDH_DEV_ACCESS_TYPE_RISCV; > + else > + access_type = ZXDH_DEV_ACCESS_TYPE_PCIE; > + > + if (p_init_ctrl->flags & ZXDH_INIT_FLAG_AGENT_FLAG) > + agent_flag = ZXDH_DEV_AGENT_ENABLE; > + else > + agent_flag = ZXDH_DEV_AGENT_DISABLE; > + > + rt = zxdh_np_dev_add(dev_id, > + p_init_ctrl->device_type, > + access_type, > + p_init_ctrl->pcie_vir_baddr, > + p_init_ctrl->riscv_vir_baddr, > + p_init_ctrl->dma_vir_baddr, > + p_init_ctrl->dma_phy_baddr); > + ZXDH_COMM_CHECK_DEV_RC(dev_id, rt, "zxdh_dev_add"); > + > + rt = zxdh_np_dev_agent_status_set(dev_id, agent_flag); > + ZXDH_COMM_CHECK_DEV_RC(dev_id, rt, "zxdh_dev_agent_status_set"); > + > + dev_id_array[0] = dev_id; > + rt = zxdh_np_sdt_init(1, dev_id_array); > + ZXDH_COMM_CHECK_DEV_RC(dev_id, rt, "zxdh_sdt_init"); > + > + rt = zxdh_np_ppu_parse_cls_bitmap(dev_id, ZXDH_PPU_CLS_ALL_START); > + ZXDH_COMM_CHECK_DEV_RC(dev_id, rt, "zxdh_ppu_parse_cls_bitmap"); > + > + rt = zxdh_np_dtb_soft_init(dev_id); > + ZXDH_COMM_CHECK_DEV_RC(dev_id, rt, "zxdh_dtb_soft_init"); > + > + return 0; > +} > + > +static uint32_t > +zxdh_np_dev_vport_set(uint32_t dev_id, uint32_t vport) > +{ > + ZXDH_DEV_MGR_T *p_dev_mgr = NULL; > + ZXDH_DEV_CFG_T *p_dev_info = NULL; > + > + p_dev_mgr = &g_dev_mgr; > + p_dev_info = p_dev_mgr->p_dev_array[dev_id]; > + p_dev_info->vport = vport; > + > + return 0; > +} > + > +static uint32_t > +zxdh_np_dev_agent_addr_set(uint32_t dev_id, uint64_t agent_addr) > +{ > + ZXDH_DEV_MGR_T *p_dev_mgr = NULL; > + ZXDH_DEV_CFG_T *p_dev_info = NULL; > + > + p_dev_mgr = &g_dev_mgr; > + p_dev_info = p_dev_mgr->p_dev_array[dev_id]; > + p_dev_info->agent_addr = agent_addr; > + > + return 0; > +} Always returns 0, could just be void and skip the ASSERTION later. > +static uint64_t > +zxdh_np_addr_calc(uint64_t pcie_vir_baddr, uint32_t bar_offset) > +{ > + uint64_t np_addr = 0; > + > + np_addr = ((pcie_vir_baddr + bar_offset) > ZXDH_PCIE_NP_MEM_SIZE) > + ? 
(pcie_vir_baddr + bar_offset - ZXDH_PCIE_NP_MEM_SIZE) : 0; > + g_np_bar_offset = bar_offset; > + > + return np_addr; > +} > + > +int > +zxdh_np_host_init(uint32_t dev_id, > + ZXDH_DEV_INIT_CTRL_T *p_dev_init_ctrl) > +{ > + unsigned int rc = 0; > + uint64_t agent_addr = 0; > + ZXDH_SYS_INIT_CTRL_T sys_init_ctrl = {0}; > + > + ZXDH_COMM_CHECK_POINT_NO_ASSERT(p_dev_init_ctrl); > + > + sys_init_ctrl.flags = (ZXDH_DEV_ACCESS_TYPE_PCIE << 0) | (ZXDH_DEV_AGENT_ENABLE << 10); > + sys_init_ctrl.pcie_vir_baddr = zxdh_np_addr_calc(p_dev_init_ctrl->pcie_vir_addr, > + p_dev_init_ctrl->np_bar_offset); > + sys_init_ctrl.device_type = ZXDH_DEV_TYPE_CHIP; > + rc = zxdh_np_base_soft_init(dev_id, &sys_init_ctrl); > + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_base_soft_init"); > + > + rc = zxdh_np_dev_vport_set(dev_id, p_dev_init_ctrl->vport); > + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_dev_vport_set"); > + > + agent_addr = ZXDH_PCIE_AGENT_ADDR_OFFSET + p_dev_init_ctrl->pcie_vir_addr; > + rc = zxdh_np_dev_agent_addr_set(dev_id, agent_addr); > + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_dev_agent_addr_set"); > + return 0; > +} > diff --git a/drivers/net/zxdh/zxdh_np.h b/drivers/net/zxdh/zxdh_np.h > new file mode 100644 > index 0000000000..573eafe796 > --- /dev/null > +++ b/drivers/net/zxdh/zxdh_np.h > @@ -0,0 +1,198 @@ > +/* SPDX-License-Identifier: BSD-3-Clause > + * Copyright(c) 2023 ZTE Corporation > + */ > + > +#ifndef ZXDH_NP_H > +#define ZXDH_NP_H > + > +#include <stdint.h> > + > +#define ZXDH_PORT_NAME_MAX (32) > +#define ZXDH_DEV_CHANNEL_MAX (2) > +#define ZXDH_DEV_SDT_ID_MAX (256U) > +/*DTB*/ > +#define ZXDH_DTB_QUEUE_ITEM_NUM_MAX (32) > +#define ZXDH_DTB_QUEUE_NUM_MAX (128) > + > +#define ZXDH_PPU_CLS_ALL_START (0x3F) > +#define ZXDH_PPU_CLUSTER_NUM (6) > +#define ZXDH_PPU_INSTR_MEM_NUM (3) > +#define ZXDH_SDT_CFG_LEN (2) > + > +#define ZXDH_RC_DEV_BASE (0x600) > +#define ZXDH_RC_DEV_PARA_INVALID (ZXDH_RC_DEV_BASE | 0x0) > +#define ZXDH_RC_DEV_RANGE_INVALID (ZXDH_RC_DEV_BASE | 0x1) > +#define ZXDH_RC_DEV_CALL_FUNC_FAIL (ZXDH_RC_DEV_BASE | 0x2) > +#define ZXDH_RC_DEV_TYPE_INVALID (ZXDH_RC_DEV_BASE | 0x3) > +#define ZXDH_RC_DEV_CONNECT_FAIL (ZXDH_RC_DEV_BASE | 0x4) > +#define ZXDH_RC_DEV_MSG_INVALID (ZXDH_RC_DEV_BASE | 0x5) > +#define ZXDH_RC_DEV_NOT_EXIST (ZXDH_RC_DEV_BASE | 0x6) > +#define ZXDH_RC_DEV_MGR_NOT_INIT (ZXDH_RC_DEV_BASE | 0x7) > +#define ZXDH_RC_DEV_CFG_NOT_INIT (ZXDH_RC_DEV_BASE | 0x8) > + > +#define ZXDH_SYS_VF_NP_BASE_OFFSET 0 > +#define ZXDH_PCIE_DTB4K_ADDR_OFFSET (0x6000) > +#define ZXDH_PCIE_NP_MEM_SIZE (0x2000000) > +#define ZXDH_PCIE_AGENT_ADDR_OFFSET (0x2000) > + > +#define ZXDH_INIT_FLAG_ACCESS_TYPE (1 << 0) > +#define ZXDH_INIT_FLAG_SERDES_DOWN_TP (1 << 1) > +#define ZXDH_INIT_FLAG_DDR_BACKDOOR (1 << 2) > +#define ZXDH_INIT_FLAG_SA_MODE (1 << 3) > +#define ZXDH_INIT_FLAG_SA_MESH (1 << 4) > +#define ZXDH_INIT_FLAG_SA_SERDES_MODE (1 << 5) > +#define ZXDH_INIT_FLAG_INT_DEST_MODE (1 << 6) > +#define ZXDH_INIT_FLAG_LIF0_MODE (1 << 7) > +#define ZXDH_INIT_FLAG_DMA_ENABLE (1 << 8) > +#define ZXDH_INIT_FLAG_TM_IMEM_FLAG (1 << 9) > +#define ZXDH_INIT_FLAG_AGENT_FLAG (1 << 10) > + > +typedef enum zxdh_module_init_e { > + ZXDH_MODULE_INIT_NPPU = 0, > + ZXDH_MODULE_INIT_PPU, > + ZXDH_MODULE_INIT_SE, > + ZXDH_MODULE_INIT_ETM, > + ZXDH_MODULE_INIT_DLB, > + ZXDH_MODULE_INIT_TRPG, > + ZXDH_MODULE_INIT_TSN, > + ZXDH_MODULE_INIT_MAX > +} ZXDH_MODULE_INIT_E; > + > +typedef enum zxdh_dev_type_e { > + ZXDH_DEV_TYPE_SIM = 0, > + ZXDH_DEV_TYPE_VCS = 1, > + ZXDH_DEV_TYPE_CHIP = 2, > + ZXDH_DEV_TYPE_FPGA = 3, > 
+ ZXDH_DEV_TYPE_PCIE_ACC = 4, > + ZXDH_DEV_TYPE_INVALID, > +} ZXDH_DEV_TYPE_E; > + > +typedef enum zxdh_dev_access_type_e { > + ZXDH_DEV_ACCESS_TYPE_PCIE = 0, > + ZXDH_DEV_ACCESS_TYPE_RISCV = 1, > + ZXDH_DEV_ACCESS_TYPE_INVALID, > +} ZXDH_DEV_ACCESS_TYPE_E; > + > +typedef enum zxdh_dev_agent_flag_e { > + ZXDH_DEV_AGENT_DISABLE = 0, > + ZXDH_DEV_AGENT_ENABLE = 1, > + ZXDH_DEV_AGENT_INVALID, > +} ZXDH_DEV_AGENT_FLAG_E; > + > +typedef struct zxdh_dtb_tab_up_user_addr_t { > + uint32_t user_flag; > + uint64_t phy_addr; > + uint64_t vir_addr; > +} ZXDH_DTB_TAB_UP_USER_ADDR_T; > + > +typedef struct zxdh_dtb_tab_up_info_t { > + uint64_t start_phy_addr; > + uint64_t start_vir_addr; > + uint32_t item_size; > + uint32_t wr_index; > + uint32_t rd_index; > + uint32_t data_len[ZXDH_DTB_QUEUE_ITEM_NUM_MAX]; > + ZXDH_DTB_TAB_UP_USER_ADDR_T user_addr[ZXDH_DTB_QUEUE_ITEM_NUM_MAX]; > +} ZXDH_DTB_TAB_UP_INFO_T; > + > +typedef struct zxdh_dtb_tab_down_info_t { > + uint64_t start_phy_addr; > + uint64_t start_vir_addr; > + uint32_t item_size; > + uint32_t wr_index; > + uint32_t rd_index; > +} ZXDH_DTB_TAB_DOWN_INFO_T; > + > +typedef struct zxdh_dtb_queue_info_t { > + uint32_t init_flag; > + uint32_t vport; > + uint32_t vector; > + ZXDH_DTB_TAB_UP_INFO_T tab_up; > + ZXDH_DTB_TAB_DOWN_INFO_T tab_down; > +} ZXDH_DTB_QUEUE_INFO_T; > + > +typedef struct zxdh_dtb_mgr_t { > + ZXDH_DTB_QUEUE_INFO_T queue_info[ZXDH_DTB_QUEUE_NUM_MAX]; > +} ZXDH_DTB_MGR_T; > + > +typedef struct zxdh_ppu_cls_bitmap_t { > + uint32_t cls_use[ZXDH_PPU_CLUSTER_NUM]; > + uint32_t instr_mem[ZXDH_PPU_INSTR_MEM_NUM]; > +} ZXDH_PPU_CLS_BITMAP_T; > + > +typedef struct dpp_sdt_item_t { > + uint32_t valid; > + uint32_t table_cfg[ZXDH_SDT_CFG_LEN]; > +} ZXDH_SDT_ITEM_T; > + > +typedef struct dpp_sdt_soft_table_t { > + uint32_t device_id; > + ZXDH_SDT_ITEM_T sdt_array[ZXDH_DEV_SDT_ID_MAX]; > +} ZXDH_SDT_SOFT_TABLE_T; > + > +typedef struct zxdh_sys_init_ctrl_t { > + ZXDH_DEV_TYPE_E device_type; > + uint32_t flags; > + uint32_t sa_id; > + uint32_t case_num; > + uint32_t lif0_port_type; > + uint32_t lif1_port_type; > + uint64_t pcie_vir_baddr; > + uint64_t riscv_vir_baddr; > + uint64_t dma_vir_baddr; > + uint64_t dma_phy_baddr; > +} ZXDH_SYS_INIT_CTRL_T; > + > +typedef struct dpp_dev_cfg_t { > + uint32_t device_id; > + ZXDH_DEV_TYPE_E dev_type; > + uint32_t chip_ver; > + uint32_t access_type; > + uint32_t agent_flag; > + uint32_t vport; > + uint64_t pcie_addr; > + uint64_t riscv_addr; > + uint64_t dma_vir_addr; > + uint64_t dma_phy_addr; > + uint64_t agent_addr; > + uint32_t init_flags[ZXDH_MODULE_INIT_MAX]; > +} ZXDH_DEV_CFG_T; > + > +typedef struct zxdh_dev_mngr_t { > + uint32_t device_num; > + uint32_t is_init; > + ZXDH_DEV_CFG_T *p_dev_array[ZXDH_DEV_CHANNEL_MAX]; > +} ZXDH_DEV_MGR_T; > + > +typedef struct zxdh_dtb_addr_info_t { > + uint32_t sdt_no; > + uint32_t size; > + uint32_t phy_addr; > + uint32_t vir_addr; > +} ZXDH_DTB_ADDR_INFO_T; > + > +typedef struct zxdh_dev_init_ctrl_t { > + uint32_t vport; > + char port_name[ZXDH_PORT_NAME_MAX]; > + uint32_t vector; > + uint32_t queue_id; > + uint32_t np_bar_offset; > + uint32_t np_bar_len; > + uint32_t pcie_vir_addr; > + uint32_t down_phy_addr; > + uint32_t down_vir_addr; > + uint32_t dump_phy_addr; > + uint32_t dump_vir_addr; > + uint32_t dump_sdt_num; > + ZXDH_DTB_ADDR_INFO_T dump_addr_info[]; > +} ZXDH_DEV_INIT_CTRL_T; > + > +typedef struct zxdh_sdt_mgr_t { > + uint32_t channel_num; > + uint32_t is_init; > + ZXDH_SDT_SOFT_TABLE_T *sdt_tbl_array[ZXDH_DEV_CHANNEL_MAX]; > +} ZXDH_SDT_MGR_T; > + > +int 
zxdh_np_host_init(uint32_t dev_id, ZXDH_DEV_INIT_CTRL_T *p_dev_init_ctrl); > + > +#endif /* ZXDH_NP_H */ > diff --git a/drivers/net/zxdh/zxdh_pci.c b/drivers/net/zxdh/zxdh_pci.c > index 06d3f92b20..250e67d560 100644 > --- a/drivers/net/zxdh/zxdh_pci.c > +++ b/drivers/net/zxdh/zxdh_pci.c > @@ -159,7 +159,7 @@ zxdh_setup_queue(struct zxdh_hw *hw, struct zxdh_virtqueue *vq) > > desc_addr = vq->vq_ring_mem; > avail_addr = desc_addr + vq->vq_nentries * sizeof(struct zxdh_vring_desc); > - if (vtpci_packed_queue(vq->hw)) { > + if (zxdh_pci_packed_queue(vq->hw)) { > used_addr = RTE_ALIGN_CEIL((avail_addr + > sizeof(struct zxdh_vring_packed_desc_event)), > ZXDH_PCI_VRING_ALIGN); > diff --git a/drivers/net/zxdh/zxdh_pci.h b/drivers/net/zxdh/zxdh_pci.h > index ed6fd89742..d6487a574f 100644 > --- a/drivers/net/zxdh/zxdh_pci.h > +++ b/drivers/net/zxdh/zxdh_pci.h > @@ -114,15 +114,15 @@ struct zxdh_pci_common_cfg { > }; > > static inline int32_t > -vtpci_with_feature(struct zxdh_hw *hw, uint64_t bit) > +zxdh_pci_with_feature(struct zxdh_hw *hw, uint64_t bit) > { > return (hw->guest_features & (1ULL << bit)) != 0; > } > > static inline int32_t > -vtpci_packed_queue(struct zxdh_hw *hw) > +zxdh_pci_packed_queue(struct zxdh_hw *hw) > { > - return vtpci_with_feature(hw, ZXDH_F_RING_PACKED); > + return zxdh_pci_with_feature(hw, ZXDH_F_RING_PACKED); > } > > struct zxdh_pci_ops { > diff --git a/drivers/net/zxdh/zxdh_queue.c b/drivers/net/zxdh/zxdh_queue.c > index 462a88b23c..b4ef90ea36 100644 > --- a/drivers/net/zxdh/zxdh_queue.c > +++ b/drivers/net/zxdh/zxdh_queue.c > @@ -13,7 +13,7 @@ > #include "zxdh_msg.h" > > struct rte_mbuf * > -zxdh_virtqueue_detach_unused(struct zxdh_virtqueue *vq) > +zxdh_queue_detach_unused(struct zxdh_virtqueue *vq) > { > struct rte_mbuf *cookie = NULL; > int32_t idx = 0; > diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h > index 1088bf08fc..1304d5e4ea 100644 > --- a/drivers/net/zxdh/zxdh_queue.h > +++ b/drivers/net/zxdh/zxdh_queue.h > @@ -206,11 +206,11 @@ struct zxdh_tx_region { > }; > > static inline size_t > -vring_size(struct zxdh_hw *hw, uint32_t num, unsigned long align) > +zxdh_vring_size(struct zxdh_hw *hw, uint32_t num, unsigned long align) > { > size_t size; > > - if (vtpci_packed_queue(hw)) { > + if (zxdh_pci_packed_queue(hw)) { > size = num * sizeof(struct zxdh_vring_packed_desc); > size += sizeof(struct zxdh_vring_packed_desc_event); > size = RTE_ALIGN_CEIL(size, align); > @@ -226,7 +226,7 @@ vring_size(struct zxdh_hw *hw, uint32_t num, unsigned long align) > } > > static inline void > -vring_init_packed(struct zxdh_vring_packed *vr, uint8_t *p, > +zxdh_vring_init_packed(struct zxdh_vring_packed *vr, uint8_t *p, > unsigned long align, uint32_t num) > { > vr->num = num; > @@ -238,7 +238,7 @@ vring_init_packed(struct zxdh_vring_packed *vr, uint8_t *p, > } > > static inline void > -vring_desc_init_packed(struct zxdh_virtqueue *vq, int32_t n) > +zxdh_vring_desc_init_packed(struct zxdh_virtqueue *vq, int32_t n) > { > int32_t i = 0; > > @@ -251,7 +251,7 @@ vring_desc_init_packed(struct zxdh_virtqueue *vq, int32_t n) > } > > static inline void > -vring_desc_init_indirect_packed(struct zxdh_vring_packed_desc *dp, int32_t n) > +zxdh_vring_desc_init_indirect_packed(struct zxdh_vring_packed_desc *dp, int32_t n) > { > int32_t i = 0; > > @@ -262,7 +262,7 @@ vring_desc_init_indirect_packed(struct zxdh_vring_packed_desc *dp, int32_t n) > } > > static inline void > -virtqueue_disable_intr(struct zxdh_virtqueue *vq) > +zxdh_queue_disable_intr(struct 
zxdh_virtqueue *vq) > { > if (vq->vq_packed.event_flags_shadow != ZXDH_RING_EVENT_FLAGS_DISABLE) { > vq->vq_packed.event_flags_shadow = ZXDH_RING_EVENT_FLAGS_DISABLE; > @@ -270,7 +270,7 @@ virtqueue_disable_intr(struct zxdh_virtqueue *vq) > } > } > > -struct rte_mbuf *zxdh_virtqueue_detach_unused(struct zxdh_virtqueue *vq); > +struct rte_mbuf *zxdh_queue_detach_unused(struct zxdh_virtqueue *vq); > int32_t zxdh_free_queues(struct rte_eth_dev *dev); > int32_t zxdh_get_queue_type(uint16_t vtpci_queue_idx); > ^ permalink raw reply [flat|nested] 225+ messages in thread
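Taken together, the allocation-related comments above suggest roughly the following
shape for the rework. This is only a sketch of the suggested idioms (rte_zmalloc(),
strlcpy() via <rte_string_fns.h>, a designated initializer), not the code that was
eventually merged:

    /* rte_zmalloc() returns zeroed memory, so the follow-up memset() goes away. */
    dpp_ctrl = rte_zmalloc(NULL, sizeof(*dpp_ctrl) +
                    sizeof(ZXDH_DTB_ADDR_INFO_T) * 256, 0);
    if (dpp_ctrl == NULL) {
            PMD_DRV_LOG(ERR, "dev %s cannot allocate memory for dpp_ctrl",
                    dev->device->name);
            ret = -ENOMEM;
            goto free_res;
    }

    /* port_name is already a char array, so no cast is needed, and
     * strlcpy() truncates safely if the device name is longer. */
    strlcpy(dpp_ctrl->port_name, dev->device->name, sizeof(dpp_ctrl->port_name));

    /* In zxdh_get_bar_offset(), a designated initializer replaces the
     * zero-init-then-assign sequence: */
    struct zxdh_pci_bar_msg in = {
            .payload_addr = &send_msg,
            .payload_len = sizeof(send_msg),
            .virt_addr = paras->virt_addr,
            .src = ZXDH_MSG_CHAN_END_PF,
            .dst = ZXDH_MSG_CHAN_END_RISC,
            .module_id = ZXDH_BAR_MODULE_OFFSET_GET,
            .src_pcieid = paras->pcie_id,
    };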
* Re: [PATCH v2 01/15] net/zxdh: zxdh np init implementation
  2024-12-10  5:53 ` [PATCH v2 01/15] net/zxdh: zxdh np init implementation Junlong Wang
  2024-12-11 16:10 ` Stephen Hemminger
@ 2024-12-12  2:06 ` Junlong Wang
  2024-12-12  3:35 ` Junlong Wang
  ` (4 subsequent siblings)
  6 siblings, 0 replies; 225+ messages in thread
From: Junlong Wang @ 2024-12-12  2:06 UTC (permalink / raw)
  To: stephen; +Cc: dev

[-- Attachment #1.1.1: Type: text/plain, Size: 1510 bytes --]

>> struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS];

>If you want to support primary/secondary in future,
>variables in BSS are not shared between primary and secondary process

This structure mainly registers some PCI ops and will not be shared between
primary/secondary processes.

>> +struct zxdh_shared_data *zxdh_shared_data;
>> +const char *ZXDH_PMD_SHARED_DATA_MZ = "zxdh_pmd_shared_data";
>> +rte_spinlock_t zxdh_shared_data_lock = RTE_SPINLOCK_INITIALIZER;
>> +struct zxdh_dtb_shared_data g_dtb_data;

>The shared data will be a problem if you support multiple devices.
>Or is this really a singleton device with only one bus and slot.

In our latest version we have optimized this by placing the structure in a
private data area and initializing each device independently. Can we update
it in a later revision?

>> +static uint32_t
>> +zxdh_np_dtb_soft_init(uint32_t dev_id)
>> +{
>> +	ZXDH_DTB_MGR_T *p_dtb_mgr = NULL;
>> +
>> +	p_dtb_mgr = zxdh_np_dtb_mgr_get(dev_id);
>> +	if (p_dtb_mgr == NULL) {
>> +		p_dpp_dtb_mgr[dev_id] = (ZXDH_DTB_MGR_T *)malloc(sizeof(ZXDH_DTB_MGR_T));

>malloc() returns void *, cast here is not needed.
>Why does DTB_MGR_T come from malloc when most of other data is using rte_malloc()?
>
>It will matter if you support multiprocess

I didn't notice that; it was originally intended to use rte_malloc.
Thank you for your comments. The other issues will be modified according to
the comments.

[-- Attachment #1.1.2: Type: text/html , Size: 2909 bytes --]
^ permalink raw reply	[flat|nested] 225+ messages in thread
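For the malloc()-to-rte_malloc() point conceded above, one possible shape of the fix,
assuming the per-device manager array stays as in v2 (a sketch, not the merged code):

    static uint32_t
    zxdh_np_dtb_soft_init(uint32_t dev_id)
    {
            if (dev_id >= ZXDH_DEV_CHANNEL_MAX)
                    return 1;

            if (p_dpp_dtb_mgr[dev_id] == NULL) {
                    /* rte_zmalloc() replaces malloc()+memset() and takes the
                     * memory from the DPDK heap, which matters if multi-process
                     * support is added later. */
                    p_dpp_dtb_mgr[dev_id] =
                            rte_zmalloc(NULL, sizeof(ZXDH_DTB_MGR_T), 0);
                    if (p_dpp_dtb_mgr[dev_id] == NULL)
                            return 1;
            }
            return 0;
    }

This shape also closes the v2 flaw where the allocation was memset before ever being
checked for NULL.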
* Re: [PATCH v2 01/15] net/zxdh: zxdh np init implementation 2024-12-10 5:53 ` [PATCH v2 01/15] net/zxdh: zxdh np init implementation Junlong Wang 2024-12-11 16:10 ` Stephen Hemminger 2024-12-12 2:06 ` Junlong Wang @ 2024-12-12 3:35 ` Junlong Wang 2024-12-17 11:41 ` [PATCH v3 00/15] net/zxdh: updated net zxdh driver Junlong Wang ` (3 subsequent siblings) 6 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-12 3:35 UTC (permalink / raw) To: stephen; +Cc: dev

[-- Attachment #1.1.1: Type: text/plain, Size: 556 bytes --]

>> (np)network Processor initialize resources in host,
>> and initialize a channel for some tables insert/get/del.
>>
>> Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn>
> This mostly looks good, just some small stuff.

Hi Stephen,
May I ask a question? There are now review comments on the first patch of v2. Will the remaining patches also be reviewed? If so, we would like to address all the comments together; if not, we will fix the first patch and send a v3.
Thanks.

[-- Attachment #1.1.2: Type: text/html , Size: 1167 bytes --]
^ permalink raw reply	[flat|nested] 225+ messages in thread
* [PATCH v3 00/15] net/zxdh: updated net zxdh driver 2024-12-10 5:53 ` [PATCH v2 01/15] net/zxdh: zxdh np init implementation Junlong Wang ` (2 preceding siblings ...) 2024-12-12 3:35 ` Junlong Wang @ 2024-12-17 11:41 ` Junlong Wang 2024-12-17 11:41 ` [PATCH v3 01/15] net/zxdh: zxdh np init implementation Junlong Wang ` (14 more replies) 2024-12-18 9:25 ` [PATCH v4 00/15] net/zxdh: updated net zxdh driver Junlong Wang ` (2 subsequent siblings) 6 siblings, 15 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-17 11:41 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang

[-- Attachment #1.1.1: Type: text/plain, Size: 2756 bytes --]

V3:
  - use rte_zmalloc and rte_calloc to avoid memset.
  - remove unnecessary initialization of variables that are set on first use.
  - change functions that always return 0 to void, and drop the later assertions on their return values.
  - resolve some WARNING:MACRO_ARG_UNUSED issues.
  - resolve some other issues.

V2:
  - resolve code style and github-robot build issues.

V1:
  - update the net zxdh driver: provide insert/delete/get table functions, and link/mac/vlan/promiscuous/rss/mtu ops.

Junlong Wang (15):
  net/zxdh: zxdh np init implementation
  net/zxdh: zxdh np uninit implementation
  net/zxdh: port tables init implementations
  net/zxdh: port tables unint implementations
  net/zxdh: rx/tx queue setup and intr enable
  net/zxdh: dev start/stop ops implementations
  net/zxdh: provided dev simple tx implementations
  net/zxdh: provided dev simple rx implementations
  net/zxdh: link info update, set link up/down
  net/zxdh: mac set/add/remove ops implementations
  net/zxdh: promisc/allmulti ops implementations
  net/zxdh: vlan filter/ offload ops implementations
  net/zxdh: rss hash config/update, reta update/get
  net/zxdh: basic stats ops implementations
  net/zxdh: mtu update ops implementations

 doc/guides/nics/features/zxdh.ini  |   18 +
 doc/guides/nics/zxdh.rst           |   17 +
 drivers/net/zxdh/meson.build       |    4 +
 drivers/net/zxdh/zxdh_common.c     |   24 +
 drivers/net/zxdh/zxdh_common.h     |    1 +
 drivers/net/zxdh/zxdh_ethdev.c     |  574 +++++++-
 drivers/net/zxdh/zxdh_ethdev.h     |   41 +-
 drivers/net/zxdh/zxdh_ethdev_ops.c | 1595 +++++++++++++++++++++
 drivers/net/zxdh/zxdh_ethdev_ops.h |   80 ++
 drivers/net/zxdh/zxdh_msg.c        |  166 +++
 drivers/net/zxdh/zxdh_msg.h        |  232 ++++
 drivers/net/zxdh/zxdh_np.c         | 2060 ++++++++++++++++++++++++++++
 drivers/net/zxdh/zxdh_np.h         |  579 ++++++++
 drivers/net/zxdh/zxdh_pci.c        |   26 +-
 drivers/net/zxdh/zxdh_pci.h        |    9 +-
 drivers/net/zxdh/zxdh_queue.c      |  242 +++-
 drivers/net/zxdh/zxdh_queue.h      |  144 +-
 drivers/net/zxdh/zxdh_rxtx.c       |  804 +++++++++++
 drivers/net/zxdh/zxdh_rxtx.h       |   20 +-
 drivers/net/zxdh/zxdh_tables.c     |  794 +++++++++++
 drivers/net/zxdh/zxdh_tables.h     |  231 ++++
 21 files changed, 7614 insertions(+), 47 deletions(-)
 create mode 100644 drivers/net/zxdh/zxdh_ethdev_ops.c
 create mode 100644 drivers/net/zxdh/zxdh_ethdev_ops.h
 create mode 100644 drivers/net/zxdh/zxdh_np.c
 create mode 100644 drivers/net/zxdh/zxdh_np.h
 create mode 100644 drivers/net/zxdh/zxdh_rxtx.c
 create mode 100644 drivers/net/zxdh/zxdh_tables.c
 create mode 100644 drivers/net/zxdh/zxdh_tables.h

--
2.27.0

[-- Attachment #1.1.2: Type: text/html , Size: 5225 bytes --]
^ permalink raw reply	[flat|nested] 225+ messages in thread
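On the first V3 item: rte_zmalloc() hands back zero-filled memory from the DPDK heap, so a malloc()+memset() pair collapses to a single call. A small sketch of the replacement pattern, using the series' ZXDH_DTB_MGR_T type for concreteness (error handling elided):

#include <rte_malloc.h>
#include "zxdh_np.h"	/* for ZXDH_DTB_MGR_T */

static ZXDH_DTB_MGR_T *
dtb_mgr_alloc(void)
{
	/* zeroed on return; no memset() needed. Pair with rte_free(). */
	return rte_zmalloc(NULL, sizeof(ZXDH_DTB_MGR_T), 0);
}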
* [PATCH v3 01/15] net/zxdh: zxdh np init implementation 2024-12-17 11:41 ` [PATCH v3 00/15] net/zxdh: updated net zxdh driver Junlong Wang @ 2024-12-17 11:41 ` Junlong Wang 2024-12-17 11:41 ` [PATCH v3 02/15] net/zxdh: zxdh np uninit implementation Junlong Wang ` (13 subsequent siblings) 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-17 11:41 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 36779 bytes --] (np)network Processor initialize resources in host, and initialize a channel for some tables insert/get/del. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/meson.build | 1 + drivers/net/zxdh/zxdh_ethdev.c | 234 +++++++++++++++++++++-- drivers/net/zxdh/zxdh_ethdev.h | 30 +++ drivers/net/zxdh/zxdh_msg.c | 44 +++++ drivers/net/zxdh/zxdh_msg.h | 37 ++++ drivers/net/zxdh/zxdh_np.c | 340 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_np.h | 198 +++++++++++++++++++ drivers/net/zxdh/zxdh_pci.c | 2 +- drivers/net/zxdh/zxdh_pci.h | 6 +- drivers/net/zxdh/zxdh_queue.c | 2 +- drivers/net/zxdh/zxdh_queue.h | 14 +- 11 files changed, 875 insertions(+), 33 deletions(-) create mode 100644 drivers/net/zxdh/zxdh_np.c create mode 100644 drivers/net/zxdh/zxdh_np.h diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build index c9960f4c73..ab24a3145c 100644 --- a/drivers/net/zxdh/meson.build +++ b/drivers/net/zxdh/meson.build @@ -19,4 +19,5 @@ sources = files( 'zxdh_msg.c', 'zxdh_pci.c', 'zxdh_queue.c', + 'zxdh_np.c', ) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index c786198535..b8f4415e00 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -5,6 +5,7 @@ #include <ethdev_pci.h> #include <bus_pci_driver.h> #include <rte_ethdev.h> +#include <rte_malloc.h> #include "zxdh_ethdev.h" #include "zxdh_logs.h" @@ -12,8 +13,15 @@ #include "zxdh_msg.h" #include "zxdh_common.h" #include "zxdh_queue.h" +#include "zxdh_np.h" struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; +struct zxdh_shared_data *zxdh_shared_data; +const char *ZXDH_PMD_SHARED_DATA_MZ = "zxdh_pmd_shared_data"; +rte_spinlock_t zxdh_shared_data_lock = RTE_SPINLOCK_INITIALIZER; +struct zxdh_dtb_shared_data g_dtb_data; + +#define ZXDH_INVALID_DTBQUE 0xFFFF uint16_t zxdh_vport_to_vfid(union zxdh_virport_num v) @@ -406,14 +414,14 @@ zxdh_features_update(struct zxdh_hw *hw, ZXDH_VTPCI_OPS(hw)->set_features(hw, req_features); if ((rx_offloads & (RTE_ETH_TX_OFFLOAD_UDP_CKSUM | RTE_ETH_TX_OFFLOAD_TCP_CKSUM)) && - !vtpci_with_feature(hw, ZXDH_NET_F_GUEST_CSUM)) { + !zxdh_pci_with_feature(hw, ZXDH_NET_F_GUEST_CSUM)) { PMD_DRV_LOG(ERR, "rx checksum not available on this host"); return -ENOTSUP; } if ((rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) && - (!vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO4) || - !vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO6))) { + (!zxdh_pci_with_feature(hw, ZXDH_NET_F_GUEST_TSO4) || + !zxdh_pci_with_feature(hw, ZXDH_NET_F_GUEST_TSO6))) { PMD_DRV_LOG(ERR, "Large Receive Offload not available on this host"); return -ENOTSUP; } @@ -421,20 +429,20 @@ zxdh_features_update(struct zxdh_hw *hw, } static bool -rx_offload_enabled(struct zxdh_hw *hw) +zxdh_rx_offload_enabled(struct zxdh_hw *hw) { - return vtpci_with_feature(hw, ZXDH_NET_F_GUEST_CSUM) || - vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO4) || - vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO6); + return zxdh_pci_with_feature(hw, ZXDH_NET_F_GUEST_CSUM) || + zxdh_pci_with_feature(hw, 
ZXDH_NET_F_GUEST_TSO4) || + zxdh_pci_with_feature(hw, ZXDH_NET_F_GUEST_TSO6); } static bool -tx_offload_enabled(struct zxdh_hw *hw) +zxdh_tx_offload_enabled(struct zxdh_hw *hw) { - return vtpci_with_feature(hw, ZXDH_NET_F_CSUM) || - vtpci_with_feature(hw, ZXDH_NET_F_HOST_TSO4) || - vtpci_with_feature(hw, ZXDH_NET_F_HOST_TSO6) || - vtpci_with_feature(hw, ZXDH_NET_F_HOST_UFO); + return zxdh_pci_with_feature(hw, ZXDH_NET_F_CSUM) || + zxdh_pci_with_feature(hw, ZXDH_NET_F_HOST_TSO4) || + zxdh_pci_with_feature(hw, ZXDH_NET_F_HOST_TSO6) || + zxdh_pci_with_feature(hw, ZXDH_NET_F_HOST_UFO); } static void @@ -466,7 +474,7 @@ zxdh_dev_free_mbufs(struct rte_eth_dev *dev) continue; PMD_DRV_LOG(DEBUG, "Before freeing %s[%d] used and unused buf", type, i); - while ((buf = zxdh_virtqueue_detach_unused(vq)) != NULL) + while ((buf = zxdh_queue_detach_unused(vq)) != NULL) rte_pktmbuf_free(buf); } } @@ -550,9 +558,9 @@ zxdh_init_vring(struct zxdh_virtqueue *vq) vq->vq_desc_tail_idx = (uint16_t)(vq->vq_nentries - 1); vq->vq_free_cnt = vq->vq_nentries; memset(vq->vq_descx, 0, sizeof(struct zxdh_vq_desc_extra) * vq->vq_nentries); - vring_init_packed(&vq->vq_packed.ring, ring_mem, ZXDH_PCI_VRING_ALIGN, size); - vring_desc_init_packed(vq, size); - virtqueue_disable_intr(vq); + zxdh_vring_init_packed(&vq->vq_packed.ring, ring_mem, ZXDH_PCI_VRING_ALIGN, size); + zxdh_vring_desc_init_packed(vq, size); + zxdh_queue_disable_intr(vq); } static int32_t @@ -621,7 +629,7 @@ zxdh_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_logic_qidx) /* * Reserve a memzone for vring elements */ - size = vring_size(hw, vq_size, ZXDH_PCI_VRING_ALIGN); + size = zxdh_vring_size(hw, vq_size, ZXDH_PCI_VRING_ALIGN); vq->vq_ring_size = RTE_ALIGN_CEIL(size, ZXDH_PCI_VRING_ALIGN); PMD_DRV_LOG(DEBUG, "vring_size: %d, rounded_vring_size: %d", size, vq->vq_ring_size); @@ -694,7 +702,8 @@ zxdh_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_logic_qidx) /* first indirect descriptor is always the tx header */ struct zxdh_vring_packed_desc *start_dp = txr[i].tx_packed_indir; - vring_desc_init_indirect_packed(start_dp, RTE_DIM(txr[i].tx_packed_indir)); + zxdh_vring_desc_init_indirect_packed(start_dp, + RTE_DIM(txr[i].tx_packed_indir)); start_dp->addr = txvq->zxdh_net_hdr_mem + i * sizeof(*txr) + offsetof(struct zxdh_tx_region, tx_hdr); /* length will be updated to actual pi hdr size when xmit pkt */ @@ -792,8 +801,8 @@ zxdh_dev_configure(struct rte_eth_dev *dev) } } - hw->has_tx_offload = tx_offload_enabled(hw); - hw->has_rx_offload = rx_offload_enabled(hw); + hw->has_tx_offload = zxdh_tx_offload_enabled(hw); + hw->has_rx_offload = zxdh_rx_offload_enabled(hw); nr_vq = dev->data->nb_rx_queues + dev->data->nb_tx_queues; if (nr_vq == hw->queue_num) @@ -881,7 +890,7 @@ zxdh_init_device(struct rte_eth_dev *eth_dev) rte_ether_addr_copy((struct rte_ether_addr *)hw->mac_addr, ð_dev->data->mac_addrs[0]); /* If host does not support both status and MSI-X then disable LSC */ - if (vtpci_with_feature(hw, ZXDH_NET_F_STATUS) && hw->use_msix != ZXDH_MSIX_NONE) + if (zxdh_pci_with_feature(hw, ZXDH_NET_F_STATUS) && hw->use_msix != ZXDH_MSIX_NONE) eth_dev->data->dev_flags |= RTE_ETH_DEV_INTR_LSC; else eth_dev->data->dev_flags &= ~RTE_ETH_DEV_INTR_LSC; @@ -913,6 +922,181 @@ zxdh_agent_comm(struct rte_eth_dev *eth_dev, struct zxdh_hw *hw) return 0; } +static int +zxdh_np_dtb_res_init(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_bar_offset_params param = {0}; + struct zxdh_bar_offset_res res = {0}; + int ret = 0; + + if 
(g_dtb_data.init_done) { + PMD_DRV_LOG(DEBUG, "DTB res already init done, dev %s no need init", + dev->device->name); + return 0; + } + g_dtb_data.queueid = ZXDH_INVALID_DTBQUE; + g_dtb_data.bind_device = dev; + g_dtb_data.dev_refcnt++; + g_dtb_data.init_done = 1; + + ZXDH_DEV_INIT_CTRL_T *dpp_ctrl = rte_zmalloc(NULL, sizeof(*dpp_ctrl) + + sizeof(ZXDH_DTB_ADDR_INFO_T) * 256, 0); + if (dpp_ctrl == NULL) { + PMD_DRV_LOG(ERR, "dev %s annot allocate memory for dpp_ctrl", dev->device->name); + ret = -ENOMEM; + goto free_res; + } + dpp_ctrl->queue_id = 0xff; + dpp_ctrl->vport = hw->vport.vport; + dpp_ctrl->vector = ZXDH_MSIX_INTR_DTB_VEC; + strlcpy(dpp_ctrl->port_name, dev->device->name, sizeof(dpp_ctrl->port_name)); + dpp_ctrl->pcie_vir_addr = (uint32_t)hw->bar_addr[0]; + + param.pcie_id = hw->pcie_id; + param.virt_addr = hw->bar_addr[0] + ZXDH_CTRLCH_OFFSET; + param.type = ZXDH_URI_NP; + + ret = zxdh_get_bar_offset(¶m, &res); + if (ret) { + PMD_DRV_LOG(ERR, "dev %s get npbar offset failed", dev->device->name); + goto free_res; + } + dpp_ctrl->np_bar_len = res.bar_length; + dpp_ctrl->np_bar_offset = res.bar_offset; + + if (!g_dtb_data.dtb_table_conf_mz) { + const struct rte_memzone *conf_mz = rte_memzone_reserve_aligned("zxdh_dtb_table_conf_mz", + ZXDH_DTB_TABLE_CONF_SIZE, SOCKET_ID_ANY, 0, RTE_CACHE_LINE_SIZE); + + if (conf_mz == NULL) { + PMD_DRV_LOG(ERR, + "dev %s annot allocate memory for dtb table conf", + dev->device->name); + ret = -ENOMEM; + goto free_res; + } + dpp_ctrl->down_vir_addr = conf_mz->addr_64; + dpp_ctrl->down_phy_addr = conf_mz->iova; + g_dtb_data.dtb_table_conf_mz = conf_mz; + } + + if (!g_dtb_data.dtb_table_dump_mz) { + const struct rte_memzone *dump_mz = rte_memzone_reserve_aligned("zxdh_dtb_table_dump_mz", + ZXDH_DTB_TABLE_DUMP_SIZE, SOCKET_ID_ANY, 0, RTE_CACHE_LINE_SIZE); + + if (dump_mz == NULL) { + PMD_DRV_LOG(ERR, + "dev %s Cannot allocate memory for dtb table dump", + dev->device->name); + ret = -ENOMEM; + goto free_res; + } + dpp_ctrl->dump_vir_addr = dump_mz->addr_64; + dpp_ctrl->dump_phy_addr = dump_mz->iova; + g_dtb_data.dtb_table_dump_mz = dump_mz; + } + + ret = zxdh_np_host_init(0, dpp_ctrl); + if (ret) { + PMD_DRV_LOG(ERR, "dev %s dpp host np init failed .ret %d", dev->device->name, ret); + goto free_res; + } + + PMD_DRV_LOG(DEBUG, "dev %s dpp host np init ok.dtb queue %d", + dev->device->name, dpp_ctrl->queue_id); + g_dtb_data.queueid = dpp_ctrl->queue_id; + rte_free(dpp_ctrl); + return 0; + +free_res: + rte_free(dpp_ctrl); + return ret; +} + +static int +zxdh_init_shared_data(void) +{ + const struct rte_memzone *mz; + int ret = 0; + + rte_spinlock_lock(&zxdh_shared_data_lock); + if (zxdh_shared_data == NULL) { + if (rte_eal_process_type() == RTE_PROC_PRIMARY) { + /* Allocate shared memory. */ + mz = rte_memzone_reserve(ZXDH_PMD_SHARED_DATA_MZ, + sizeof(*zxdh_shared_data), SOCKET_ID_ANY, 0); + if (mz == NULL) { + PMD_DRV_LOG(ERR, "Cannot allocate zxdh shared data"); + ret = -rte_errno; + goto error; + } + zxdh_shared_data = mz->addr; + memset(zxdh_shared_data, 0, sizeof(*zxdh_shared_data)); + rte_spinlock_init(&zxdh_shared_data->lock); + } else { /* Lookup allocated shared memory. 
*/ + mz = rte_memzone_lookup(ZXDH_PMD_SHARED_DATA_MZ); + if (mz == NULL) { + PMD_DRV_LOG(ERR, "Cannot attach zxdh shared data"); + ret = -rte_errno; + goto error; + } + zxdh_shared_data = mz->addr; + } + } + +error: + rte_spinlock_unlock(&zxdh_shared_data_lock); + return ret; +} + +static int +zxdh_init_once(void) +{ + int ret = 0; + + if (zxdh_init_shared_data()) + return -1; + + struct zxdh_shared_data *sd = zxdh_shared_data; + rte_spinlock_lock(&sd->lock); + if (rte_eal_process_type() == RTE_PROC_SECONDARY) { + if (!sd->init_done) { + ++sd->secondary_cnt; + sd->init_done = true; + } + goto out; + } + /* RTE_PROC_PRIMARY */ + if (!sd->init_done) + sd->init_done = true; + sd->dev_refcnt++; + +out: + rte_spinlock_unlock(&sd->lock); + return ret; +} + +static int +zxdh_np_init(struct rte_eth_dev *eth_dev) +{ + struct zxdh_hw *hw = eth_dev->data->dev_private; + int ret = 0; + + if (hw->is_pf) { + ret = zxdh_np_dtb_res_init(eth_dev); + if (ret) { + PMD_DRV_LOG(ERR, "np dtb init failed, ret:%d ", ret); + return ret; + } + } + if (zxdh_shared_data != NULL) + zxdh_shared_data->np_init_done = 1; + + PMD_DRV_LOG(DEBUG, "np init ok "); + return 0; +} + static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) { @@ -950,6 +1134,10 @@ zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) hw->is_pf = 1; } + ret = zxdh_init_once(); + if (ret != 0) + goto err_zxdh_init; + ret = zxdh_init_device(eth_dev); if (ret < 0) goto err_zxdh_init; @@ -977,6 +1165,10 @@ zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) if (ret != 0) goto err_zxdh_init; + ret = zxdh_np_init(eth_dev); + if (ret) + goto err_zxdh_init; + ret = zxdh_configure_intr(eth_dev); if (ret != 0) goto err_zxdh_init; diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index 7658cbb461..b1f398b28e 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -35,6 +35,12 @@ #define ZXDH_MBUF_BURST_SZ 64 +#define ZXDH_MAX_BASE_DTB_TABLE_COUNT 30 +#define ZXDH_DTB_TABLE_DUMP_SIZE (32 * (16 + 16 * 1024)) +#define ZXDH_DTB_TABLE_CONF_SIZE (32 * (16 + 16 * 1024)) + +#define ZXDH_MAX_NAME_LEN 32 + union zxdh_virport_num { uint16_t vport; struct { @@ -89,6 +95,30 @@ struct zxdh_hw { uint8_t has_rx_offload; }; +struct zxdh_dtb_shared_data { + uint8_t init_done; + char name[ZXDH_MAX_NAME_LEN]; + uint16_t queueid; + uint16_t vport; + uint32_t vector; + const struct rte_memzone *dtb_table_conf_mz; + const struct rte_memzone *dtb_table_dump_mz; + const struct rte_memzone *dtb_table_bulk_dump_mz[ZXDH_MAX_BASE_DTB_TABLE_COUNT]; + struct rte_eth_dev *bind_device; + uint32_t dev_refcnt; +}; + +/* Shared data between primary and secondary processes. */ +struct zxdh_shared_data { + rte_spinlock_t lock; /* Global spinlock for primary and secondary processes. */ + int32_t init_done; /* Whether primary has done initialization. */ + unsigned int secondary_cnt; /* Number of secondary processes init'd. 
*/ + + int32_t np_init_done; + uint32_t dev_refcnt; + struct zxdh_dtb_shared_data *dtb_data; +}; + uint16_t zxdh_vport_to_vfid(union zxdh_virport_num v); #endif /* ZXDH_ETHDEV_H */ diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c index 53cf972f86..dd7a518a51 100644 --- a/drivers/net/zxdh/zxdh_msg.c +++ b/drivers/net/zxdh/zxdh_msg.c @@ -1035,3 +1035,47 @@ zxdh_bar_irq_recv(uint8_t src, uint8_t dst, uint64_t virt_addr, void *dev) rte_free(recved_msg); return ZXDH_BAR_MSG_OK; } + +int zxdh_get_bar_offset(struct zxdh_bar_offset_params *paras, + struct zxdh_bar_offset_res *res) +{ + uint16_t check_token; + uint16_t sum_res; + int ret; + + if (!paras) + return ZXDH_BAR_MSG_ERR_NULL; + + struct zxdh_offset_get_msg send_msg = { + .pcie_id = paras->pcie_id, + .type = paras->type, + }; + struct zxdh_pci_bar_msg in = { + .payload_addr = &send_msg, + .payload_len = sizeof(send_msg), + .virt_addr = paras->virt_addr, + .src = ZXDH_MSG_CHAN_END_PF, + .dst = ZXDH_MSG_CHAN_END_RISC, + .module_id = ZXDH_BAR_MODULE_OFFSET_GET, + .src_pcieid = paras->pcie_id, + }; + struct zxdh_bar_recv_msg recv_msg = {0}; + struct zxdh_msg_recviver_mem result = { + .recv_buffer = &recv_msg, + .buffer_len = sizeof(recv_msg), + }; + ret = zxdh_bar_chan_sync_msg_send(&in, &result); + if (ret != ZXDH_BAR_MSG_OK) + return -ret; + + check_token = recv_msg.offset_reps.check; + sum_res = zxdh_bar_get_sum((uint8_t *)&send_msg, sizeof(send_msg)); + + if (check_token != sum_res) { + PMD_MSG_LOG(ERR, "expect token: 0x%x, get token: 0x%x", sum_res, check_token); + return ZXDH_BAR_MSG_ERR_REPLY; + } + res->bar_offset = recv_msg.offset_reps.offset; + res->bar_length = recv_msg.offset_reps.length; + return ZXDH_BAR_MSG_OK; +} diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index 530ee406b1..fbc79e8f9d 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -131,6 +131,26 @@ enum ZXDH_TBL_MSG_TYPE { ZXDH_TBL_TYPE_NON, }; +enum pciebar_layout_type { + ZXDH_URI_VQM = 0, + ZXDH_URI_SPINLOCK = 1, + ZXDH_URI_FWCAP = 2, + ZXDH_URI_FWSHR = 3, + ZXDH_URI_DRS_SEC = 4, + ZXDH_URI_RSV = 5, + ZXDH_URI_CTRLCH = 6, + ZXDH_URI_1588 = 7, + ZXDH_URI_QBV = 8, + ZXDH_URI_MACPCS = 9, + ZXDH_URI_RDMA = 10, + ZXDH_URI_MNP = 11, + ZXDH_URI_MSPM = 12, + ZXDH_URI_MVQM = 13, + ZXDH_URI_MDPI = 14, + ZXDH_URI_NP = 15, + ZXDH_URI_MAX, +}; + struct zxdh_msix_para { uint16_t pcie_id; uint16_t vector_risc; @@ -174,6 +194,17 @@ struct zxdh_bar_offset_reps { uint32_t length; } __rte_packed; +struct zxdh_bar_offset_params { + uint64_t virt_addr; /* Bar space control space virtual address */ + uint16_t pcie_id; + uint16_t type; /* Module types corresponding to PCIBAR planning */ +}; + +struct zxdh_bar_offset_res { + uint32_t bar_offset; + uint32_t bar_length; +}; + struct zxdh_bar_recv_msg { uint8_t reps_ok; uint16_t reps_len; @@ -204,9 +235,15 @@ struct zxdh_bar_msg_header { uint16_t dst_pcieid; /* used in PF-->VF */ }; +struct zxdh_offset_get_msg { + uint16_t pcie_id; + uint16_t type; +}; + typedef int (*zxdh_bar_chan_msg_recv_callback)(void *pay_load, uint16_t len, void *reps_buffer, uint16_t *reps_len, void *dev); +int zxdh_get_bar_offset(struct zxdh_bar_offset_params *paras, struct zxdh_bar_offset_res *res); int zxdh_msg_chan_init(void); int zxdh_bar_msg_chan_exit(void); int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev); diff --git a/drivers/net/zxdh/zxdh_np.c b/drivers/net/zxdh/zxdh_np.c new file mode 100644 index 0000000000..e44d7ff501 --- /dev/null +++ b/drivers/net/zxdh/zxdh_np.c @@ -0,0 +1,340 @@ 
+/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#include <stdlib.h> +#include <string.h> + +#include <rte_common.h> +#include <rte_log.h> +#include <rte_debug.h> +#include <rte_malloc.h> + +#include "zxdh_np.h" +#include "zxdh_logs.h" + +static uint64_t g_np_bar_offset; +static ZXDH_DEV_MGR_T g_dev_mgr; +static ZXDH_SDT_MGR_T g_sdt_mgr; +ZXDH_PPU_CLS_BITMAP_T g_ppu_cls_bit_map[ZXDH_DEV_CHANNEL_MAX]; +ZXDH_DTB_MGR_T *p_dpp_dtb_mgr[ZXDH_DEV_CHANNEL_MAX]; + +#define ZXDH_SDT_MGR_PTR_GET() (&g_sdt_mgr) +#define ZXDH_SDT_SOFT_TBL_GET(id) (g_sdt_mgr.sdt_tbl_array[id]) + +#define ZXDH_COMM_CHECK_DEV_POINT(dev_id, point)\ +do {\ + if (NULL == (point)) {\ + PMD_DRV_LOG(ERR, "dev: %d ZXIC %s:%d[Error:POINT NULL] !"\ + "FUNCTION : %s!", (dev_id), __FILE__, __LINE__, __func__);\ + RTE_ASSERT(0);\ + } \ +} while (0) + +#define ZXDH_COMM_CHECK_DEV_RC(dev_id, rc, becall)\ +do {\ + if ((rc) != 0) {\ + PMD_DRV_LOG(ERR, "dev: %d ZXIC %s:%d !"\ + "-- %s Call %s Fail!", (dev_id), __FILE__, __LINE__, __func__, becall);\ + RTE_ASSERT(0);\ + } \ +} while (0) + +#define ZXDH_COMM_CHECK_POINT_NO_ASSERT(point)\ +do {\ + if ((point) == NULL) {\ + PMD_DRV_LOG(ERR, "ZXIC %s:%d[Error:POINT NULL] ! FUNCTION : %s!",\ + __FILE__, __LINE__, __func__);\ + } \ +} while (0) + +#define ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, becall)\ +do {\ + if ((rc) != 0) {\ + PMD_DRV_LOG(ERR, "ZXIC %s:%d !-- %s Call %s"\ + " Fail!", __FILE__, __LINE__, __func__, becall);\ + } \ +} while (0) + +#define ZXDH_COMM_CHECK_RC(rc, becall)\ +do {\ + if ((rc) != 0) {\ + PMD_DRV_LOG(ERR, "ZXIC %s:%d!-- %s Call %s "\ + "Fail!", __FILE__, __LINE__, __func__, becall);\ + RTE_ASSERT(0);\ + } \ +} while (0) + +static uint32_t +zxdh_np_dev_init(void) +{ + if (g_dev_mgr.is_init) { + PMD_DRV_LOG(ERR, "Dev is already initialized."); + return 0; + } + + g_dev_mgr.device_num = 0; + g_dev_mgr.is_init = 1; + + return 0; +} + +static uint32_t +zxdh_np_dev_add(uint32_t dev_id, ZXDH_DEV_TYPE_E dev_type, + ZXDH_DEV_ACCESS_TYPE_E access_type, uint64_t pcie_addr, + uint64_t riscv_addr, uint64_t dma_vir_addr, + uint64_t dma_phy_addr) +{ + ZXDH_DEV_CFG_T *p_dev_info = NULL; + ZXDH_DEV_MGR_T *p_dev_mgr = NULL; + + p_dev_mgr = &g_dev_mgr; + if (!p_dev_mgr->is_init) { + PMD_DRV_LOG(ERR, "ErrorCode[ 0x%x]: Device Manager is not init!!!", + ZXDH_RC_DEV_MGR_NOT_INIT); + return ZXDH_RC_DEV_MGR_NOT_INIT; + } + + if (p_dev_mgr->p_dev_array[dev_id] != NULL) { + /* device is already exist. */ + PMD_DRV_LOG(ERR, "Device is added again!!!"); + p_dev_info = p_dev_mgr->p_dev_array[dev_id]; + } else { + /* device is new. 
*/ + p_dev_info = rte_malloc(NULL, sizeof(ZXDH_DEV_CFG_T), 0); + ZXDH_COMM_CHECK_DEV_POINT(dev_id, p_dev_info); + p_dev_mgr->p_dev_array[dev_id] = p_dev_info; + p_dev_mgr->device_num++; + } + + p_dev_info->device_id = dev_id; + p_dev_info->dev_type = dev_type; + p_dev_info->access_type = access_type; + p_dev_info->pcie_addr = pcie_addr; + p_dev_info->riscv_addr = riscv_addr; + p_dev_info->dma_vir_addr = dma_vir_addr; + p_dev_info->dma_phy_addr = dma_phy_addr; + + return 0; +} + +static uint32_t +zxdh_np_dev_agent_status_set(uint32_t dev_id, uint32_t agent_flag) +{ + ZXDH_DEV_MGR_T *p_dev_mgr = NULL; + ZXDH_DEV_CFG_T *p_dev_info = NULL; + + p_dev_mgr = &g_dev_mgr; + p_dev_info = p_dev_mgr->p_dev_array[dev_id]; + + if (p_dev_info == NULL) + return ZXDH_DEV_TYPE_INVALID; + p_dev_info->agent_flag = agent_flag; + + return 0; +} + +static void +zxdh_np_sdt_mgr_init(void) +{ + if (!g_sdt_mgr.is_init) { + g_sdt_mgr.channel_num = 0; + g_sdt_mgr.is_init = 1; + memset(g_sdt_mgr.sdt_tbl_array, 0, ZXDH_DEV_CHANNEL_MAX * + sizeof(ZXDH_SDT_SOFT_TABLE_T *)); + } +} + +static uint32_t +zxdh_np_sdt_mgr_create(uint32_t dev_id) +{ + ZXDH_SDT_SOFT_TABLE_T *p_sdt_tbl_temp = NULL; + ZXDH_SDT_MGR_T *p_sdt_mgr = NULL; + + p_sdt_mgr = ZXDH_SDT_MGR_PTR_GET(); + + if (ZXDH_SDT_SOFT_TBL_GET(dev_id) == NULL) { + p_sdt_tbl_temp = rte_malloc(NULL, sizeof(ZXDH_SDT_SOFT_TABLE_T), 0); + + p_sdt_tbl_temp->device_id = dev_id; + memset(p_sdt_tbl_temp->sdt_array, 0, ZXDH_DEV_SDT_ID_MAX * sizeof(ZXDH_SDT_ITEM_T)); + + ZXDH_SDT_SOFT_TBL_GET(dev_id) = p_sdt_tbl_temp; + + p_sdt_mgr->channel_num++; + } else { + PMD_DRV_LOG(ERR, "Error: %s for dev[%d]" + "is called repeatedly!", __func__, dev_id); + return 1; + } + + return 0; +} + +static uint32_t +zxdh_np_sdt_init(uint32_t dev_num, uint32_t *dev_id_array) +{ + uint32_t rc; + uint32_t i; + + zxdh_np_sdt_mgr_init(); + + for (i = 0; i < dev_num; i++) { + rc = zxdh_np_sdt_mgr_create(dev_id_array[i]); + ZXDH_COMM_CHECK_RC(rc, "zxdh_sdt_mgr_create"); + } + + return rc; +} + +static void +zxdh_np_ppu_parse_cls_bitmap(uint32_t dev_id, + uint32_t bitmap) +{ + uint32_t cls_id; + uint32_t mem_id; + uint32_t cls_use; + uint32_t instr_mem; + + for (cls_id = 0; cls_id < ZXDH_PPU_CLUSTER_NUM; cls_id++) { + cls_use = (bitmap >> cls_id) & 0x1; + g_ppu_cls_bit_map[dev_id].cls_use[cls_id] = cls_use; + } + + for (mem_id = 0; mem_id < ZXDH_PPU_INSTR_MEM_NUM; mem_id++) { + instr_mem = (bitmap >> (mem_id * 2)) & 0x3; + g_ppu_cls_bit_map[dev_id].instr_mem[mem_id] = ((instr_mem > 0) ? 
1 : 0); + } +} + +static ZXDH_DTB_MGR_T * +zxdh_np_dtb_mgr_get(uint32_t dev_id) +{ + if (dev_id >= ZXDH_DEV_CHANNEL_MAX) + return NULL; + else + return p_dpp_dtb_mgr[dev_id]; +} + +static uint32_t +zxdh_np_dtb_soft_init(uint32_t dev_id) +{ + ZXDH_DTB_MGR_T *p_dtb_mgr = NULL; + + if (dev_id >= ZXDH_DEV_CHANNEL_MAX) + return 1; + + p_dtb_mgr = zxdh_np_dtb_mgr_get(dev_id); + if (p_dtb_mgr == NULL) { + p_dpp_dtb_mgr[dev_id] = rte_zmalloc(NULL, sizeof(ZXDH_DTB_MGR_T), 0); + p_dtb_mgr = zxdh_np_dtb_mgr_get(dev_id); + if (p_dtb_mgr == NULL) + return 1; + } + + return 0; +} + +static uint32_t +zxdh_np_base_soft_init(uint32_t dev_id, ZXDH_SYS_INIT_CTRL_T *p_init_ctrl) +{ + uint32_t dev_id_array[ZXDH_DEV_CHANNEL_MAX] = {0}; + uint32_t rt; + uint32_t access_type; + uint32_t agent_flag; + + rt = zxdh_np_dev_init(); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rt, "zxdh_dev_init"); + + if (p_init_ctrl->flags & ZXDH_INIT_FLAG_ACCESS_TYPE) + access_type = ZXDH_DEV_ACCESS_TYPE_RISCV; + else + access_type = ZXDH_DEV_ACCESS_TYPE_PCIE; + + if (p_init_ctrl->flags & ZXDH_INIT_FLAG_AGENT_FLAG) + agent_flag = ZXDH_DEV_AGENT_ENABLE; + else + agent_flag = ZXDH_DEV_AGENT_DISABLE; + + rt = zxdh_np_dev_add(dev_id, + p_init_ctrl->device_type, + access_type, + p_init_ctrl->pcie_vir_baddr, + p_init_ctrl->riscv_vir_baddr, + p_init_ctrl->dma_vir_baddr, + p_init_ctrl->dma_phy_baddr); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rt, "zxdh_dev_add"); + + rt = zxdh_np_dev_agent_status_set(dev_id, agent_flag); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rt, "zxdh_dev_agent_status_set"); + + dev_id_array[0] = dev_id; + rt = zxdh_np_sdt_init(1, dev_id_array); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rt, "zxdh_sdt_init"); + + zxdh_np_ppu_parse_cls_bitmap(dev_id, ZXDH_PPU_CLS_ALL_START); + + rt = zxdh_np_dtb_soft_init(dev_id); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rt, "zxdh_dtb_soft_init"); + + return rt; +} + +static void +zxdh_np_dev_vport_set(uint32_t dev_id, uint32_t vport) +{ + ZXDH_DEV_MGR_T *p_dev_mgr = NULL; + ZXDH_DEV_CFG_T *p_dev_info = NULL; + + p_dev_mgr = &g_dev_mgr; + p_dev_info = p_dev_mgr->p_dev_array[dev_id]; + p_dev_info->vport = vport; +} + +static void +zxdh_np_dev_agent_addr_set(uint32_t dev_id, uint64_t agent_addr) +{ + ZXDH_DEV_MGR_T *p_dev_mgr = NULL; + ZXDH_DEV_CFG_T *p_dev_info = NULL; + + p_dev_mgr = &g_dev_mgr; + p_dev_info = p_dev_mgr->p_dev_array[dev_id]; + p_dev_info->agent_addr = agent_addr; +} + +static uint64_t +zxdh_np_addr_calc(uint64_t pcie_vir_baddr, uint32_t bar_offset) +{ + uint64_t np_addr; + + np_addr = ((pcie_vir_baddr + bar_offset) > ZXDH_PCIE_NP_MEM_SIZE) + ? 
(pcie_vir_baddr + bar_offset - ZXDH_PCIE_NP_MEM_SIZE) : 0; + g_np_bar_offset = bar_offset; + + return np_addr; +} + +int +zxdh_np_host_init(uint32_t dev_id, + ZXDH_DEV_INIT_CTRL_T *p_dev_init_ctrl) +{ + ZXDH_SYS_INIT_CTRL_T sys_init_ctrl = {0}; + uint32_t rc; + uint64_t agent_addr; + + ZXDH_COMM_CHECK_POINT_NO_ASSERT(p_dev_init_ctrl); + + sys_init_ctrl.flags = (ZXDH_DEV_ACCESS_TYPE_PCIE << 0) | (ZXDH_DEV_AGENT_ENABLE << 10); + sys_init_ctrl.pcie_vir_baddr = zxdh_np_addr_calc(p_dev_init_ctrl->pcie_vir_addr, + p_dev_init_ctrl->np_bar_offset); + sys_init_ctrl.device_type = ZXDH_DEV_TYPE_CHIP; + rc = zxdh_np_base_soft_init(dev_id, &sys_init_ctrl); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_base_soft_init"); + + zxdh_np_dev_vport_set(dev_id, p_dev_init_ctrl->vport); + + agent_addr = ZXDH_PCIE_AGENT_ADDR_OFFSET + p_dev_init_ctrl->pcie_vir_addr; + zxdh_np_dev_agent_addr_set(dev_id, agent_addr); + + return 0; +} diff --git a/drivers/net/zxdh/zxdh_np.h b/drivers/net/zxdh/zxdh_np.h new file mode 100644 index 0000000000..573eafe796 --- /dev/null +++ b/drivers/net/zxdh/zxdh_np.h @@ -0,0 +1,198 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2023 ZTE Corporation + */ + +#ifndef ZXDH_NP_H +#define ZXDH_NP_H + +#include <stdint.h> + +#define ZXDH_PORT_NAME_MAX (32) +#define ZXDH_DEV_CHANNEL_MAX (2) +#define ZXDH_DEV_SDT_ID_MAX (256U) +/*DTB*/ +#define ZXDH_DTB_QUEUE_ITEM_NUM_MAX (32) +#define ZXDH_DTB_QUEUE_NUM_MAX (128) + +#define ZXDH_PPU_CLS_ALL_START (0x3F) +#define ZXDH_PPU_CLUSTER_NUM (6) +#define ZXDH_PPU_INSTR_MEM_NUM (3) +#define ZXDH_SDT_CFG_LEN (2) + +#define ZXDH_RC_DEV_BASE (0x600) +#define ZXDH_RC_DEV_PARA_INVALID (ZXDH_RC_DEV_BASE | 0x0) +#define ZXDH_RC_DEV_RANGE_INVALID (ZXDH_RC_DEV_BASE | 0x1) +#define ZXDH_RC_DEV_CALL_FUNC_FAIL (ZXDH_RC_DEV_BASE | 0x2) +#define ZXDH_RC_DEV_TYPE_INVALID (ZXDH_RC_DEV_BASE | 0x3) +#define ZXDH_RC_DEV_CONNECT_FAIL (ZXDH_RC_DEV_BASE | 0x4) +#define ZXDH_RC_DEV_MSG_INVALID (ZXDH_RC_DEV_BASE | 0x5) +#define ZXDH_RC_DEV_NOT_EXIST (ZXDH_RC_DEV_BASE | 0x6) +#define ZXDH_RC_DEV_MGR_NOT_INIT (ZXDH_RC_DEV_BASE | 0x7) +#define ZXDH_RC_DEV_CFG_NOT_INIT (ZXDH_RC_DEV_BASE | 0x8) + +#define ZXDH_SYS_VF_NP_BASE_OFFSET 0 +#define ZXDH_PCIE_DTB4K_ADDR_OFFSET (0x6000) +#define ZXDH_PCIE_NP_MEM_SIZE (0x2000000) +#define ZXDH_PCIE_AGENT_ADDR_OFFSET (0x2000) + +#define ZXDH_INIT_FLAG_ACCESS_TYPE (1 << 0) +#define ZXDH_INIT_FLAG_SERDES_DOWN_TP (1 << 1) +#define ZXDH_INIT_FLAG_DDR_BACKDOOR (1 << 2) +#define ZXDH_INIT_FLAG_SA_MODE (1 << 3) +#define ZXDH_INIT_FLAG_SA_MESH (1 << 4) +#define ZXDH_INIT_FLAG_SA_SERDES_MODE (1 << 5) +#define ZXDH_INIT_FLAG_INT_DEST_MODE (1 << 6) +#define ZXDH_INIT_FLAG_LIF0_MODE (1 << 7) +#define ZXDH_INIT_FLAG_DMA_ENABLE (1 << 8) +#define ZXDH_INIT_FLAG_TM_IMEM_FLAG (1 << 9) +#define ZXDH_INIT_FLAG_AGENT_FLAG (1 << 10) + +typedef enum zxdh_module_init_e { + ZXDH_MODULE_INIT_NPPU = 0, + ZXDH_MODULE_INIT_PPU, + ZXDH_MODULE_INIT_SE, + ZXDH_MODULE_INIT_ETM, + ZXDH_MODULE_INIT_DLB, + ZXDH_MODULE_INIT_TRPG, + ZXDH_MODULE_INIT_TSN, + ZXDH_MODULE_INIT_MAX +} ZXDH_MODULE_INIT_E; + +typedef enum zxdh_dev_type_e { + ZXDH_DEV_TYPE_SIM = 0, + ZXDH_DEV_TYPE_VCS = 1, + ZXDH_DEV_TYPE_CHIP = 2, + ZXDH_DEV_TYPE_FPGA = 3, + ZXDH_DEV_TYPE_PCIE_ACC = 4, + ZXDH_DEV_TYPE_INVALID, +} ZXDH_DEV_TYPE_E; + +typedef enum zxdh_dev_access_type_e { + ZXDH_DEV_ACCESS_TYPE_PCIE = 0, + ZXDH_DEV_ACCESS_TYPE_RISCV = 1, + ZXDH_DEV_ACCESS_TYPE_INVALID, +} ZXDH_DEV_ACCESS_TYPE_E; + +typedef enum zxdh_dev_agent_flag_e { + ZXDH_DEV_AGENT_DISABLE = 0, + ZXDH_DEV_AGENT_ENABLE = 1, + 
ZXDH_DEV_AGENT_INVALID, +} ZXDH_DEV_AGENT_FLAG_E; + +typedef struct zxdh_dtb_tab_up_user_addr_t { + uint32_t user_flag; + uint64_t phy_addr; + uint64_t vir_addr; +} ZXDH_DTB_TAB_UP_USER_ADDR_T; + +typedef struct zxdh_dtb_tab_up_info_t { + uint64_t start_phy_addr; + uint64_t start_vir_addr; + uint32_t item_size; + uint32_t wr_index; + uint32_t rd_index; + uint32_t data_len[ZXDH_DTB_QUEUE_ITEM_NUM_MAX]; + ZXDH_DTB_TAB_UP_USER_ADDR_T user_addr[ZXDH_DTB_QUEUE_ITEM_NUM_MAX]; +} ZXDH_DTB_TAB_UP_INFO_T; + +typedef struct zxdh_dtb_tab_down_info_t { + uint64_t start_phy_addr; + uint64_t start_vir_addr; + uint32_t item_size; + uint32_t wr_index; + uint32_t rd_index; +} ZXDH_DTB_TAB_DOWN_INFO_T; + +typedef struct zxdh_dtb_queue_info_t { + uint32_t init_flag; + uint32_t vport; + uint32_t vector; + ZXDH_DTB_TAB_UP_INFO_T tab_up; + ZXDH_DTB_TAB_DOWN_INFO_T tab_down; +} ZXDH_DTB_QUEUE_INFO_T; + +typedef struct zxdh_dtb_mgr_t { + ZXDH_DTB_QUEUE_INFO_T queue_info[ZXDH_DTB_QUEUE_NUM_MAX]; +} ZXDH_DTB_MGR_T; + +typedef struct zxdh_ppu_cls_bitmap_t { + uint32_t cls_use[ZXDH_PPU_CLUSTER_NUM]; + uint32_t instr_mem[ZXDH_PPU_INSTR_MEM_NUM]; +} ZXDH_PPU_CLS_BITMAP_T; + +typedef struct dpp_sdt_item_t { + uint32_t valid; + uint32_t table_cfg[ZXDH_SDT_CFG_LEN]; +} ZXDH_SDT_ITEM_T; + +typedef struct dpp_sdt_soft_table_t { + uint32_t device_id; + ZXDH_SDT_ITEM_T sdt_array[ZXDH_DEV_SDT_ID_MAX]; +} ZXDH_SDT_SOFT_TABLE_T; + +typedef struct zxdh_sys_init_ctrl_t { + ZXDH_DEV_TYPE_E device_type; + uint32_t flags; + uint32_t sa_id; + uint32_t case_num; + uint32_t lif0_port_type; + uint32_t lif1_port_type; + uint64_t pcie_vir_baddr; + uint64_t riscv_vir_baddr; + uint64_t dma_vir_baddr; + uint64_t dma_phy_baddr; +} ZXDH_SYS_INIT_CTRL_T; + +typedef struct dpp_dev_cfg_t { + uint32_t device_id; + ZXDH_DEV_TYPE_E dev_type; + uint32_t chip_ver; + uint32_t access_type; + uint32_t agent_flag; + uint32_t vport; + uint64_t pcie_addr; + uint64_t riscv_addr; + uint64_t dma_vir_addr; + uint64_t dma_phy_addr; + uint64_t agent_addr; + uint32_t init_flags[ZXDH_MODULE_INIT_MAX]; +} ZXDH_DEV_CFG_T; + +typedef struct zxdh_dev_mngr_t { + uint32_t device_num; + uint32_t is_init; + ZXDH_DEV_CFG_T *p_dev_array[ZXDH_DEV_CHANNEL_MAX]; +} ZXDH_DEV_MGR_T; + +typedef struct zxdh_dtb_addr_info_t { + uint32_t sdt_no; + uint32_t size; + uint32_t phy_addr; + uint32_t vir_addr; +} ZXDH_DTB_ADDR_INFO_T; + +typedef struct zxdh_dev_init_ctrl_t { + uint32_t vport; + char port_name[ZXDH_PORT_NAME_MAX]; + uint32_t vector; + uint32_t queue_id; + uint32_t np_bar_offset; + uint32_t np_bar_len; + uint32_t pcie_vir_addr; + uint32_t down_phy_addr; + uint32_t down_vir_addr; + uint32_t dump_phy_addr; + uint32_t dump_vir_addr; + uint32_t dump_sdt_num; + ZXDH_DTB_ADDR_INFO_T dump_addr_info[]; +} ZXDH_DEV_INIT_CTRL_T; + +typedef struct zxdh_sdt_mgr_t { + uint32_t channel_num; + uint32_t is_init; + ZXDH_SDT_SOFT_TABLE_T *sdt_tbl_array[ZXDH_DEV_CHANNEL_MAX]; +} ZXDH_SDT_MGR_T; + +int zxdh_np_host_init(uint32_t dev_id, ZXDH_DEV_INIT_CTRL_T *p_dev_init_ctrl); + +#endif /* ZXDH_NP_H */ diff --git a/drivers/net/zxdh/zxdh_pci.c b/drivers/net/zxdh/zxdh_pci.c index 06d3f92b20..250e67d560 100644 --- a/drivers/net/zxdh/zxdh_pci.c +++ b/drivers/net/zxdh/zxdh_pci.c @@ -159,7 +159,7 @@ zxdh_setup_queue(struct zxdh_hw *hw, struct zxdh_virtqueue *vq) desc_addr = vq->vq_ring_mem; avail_addr = desc_addr + vq->vq_nentries * sizeof(struct zxdh_vring_desc); - if (vtpci_packed_queue(vq->hw)) { + if (zxdh_pci_packed_queue(vq->hw)) { used_addr = RTE_ALIGN_CEIL((avail_addr + sizeof(struct 
zxdh_vring_packed_desc_event)), ZXDH_PCI_VRING_ALIGN); diff --git a/drivers/net/zxdh/zxdh_pci.h b/drivers/net/zxdh/zxdh_pci.h index ed6fd89742..d6487a574f 100644 --- a/drivers/net/zxdh/zxdh_pci.h +++ b/drivers/net/zxdh/zxdh_pci.h @@ -114,15 +114,15 @@ struct zxdh_pci_common_cfg { }; static inline int32_t -vtpci_with_feature(struct zxdh_hw *hw, uint64_t bit) +zxdh_pci_with_feature(struct zxdh_hw *hw, uint64_t bit) { return (hw->guest_features & (1ULL << bit)) != 0; } static inline int32_t -vtpci_packed_queue(struct zxdh_hw *hw) +zxdh_pci_packed_queue(struct zxdh_hw *hw) { - return vtpci_with_feature(hw, ZXDH_F_RING_PACKED); + return zxdh_pci_with_feature(hw, ZXDH_F_RING_PACKED); } struct zxdh_pci_ops { diff --git a/drivers/net/zxdh/zxdh_queue.c b/drivers/net/zxdh/zxdh_queue.c index 462a88b23c..b4ef90ea36 100644 --- a/drivers/net/zxdh/zxdh_queue.c +++ b/drivers/net/zxdh/zxdh_queue.c @@ -13,7 +13,7 @@ #include "zxdh_msg.h" struct rte_mbuf * -zxdh_virtqueue_detach_unused(struct zxdh_virtqueue *vq) +zxdh_queue_detach_unused(struct zxdh_virtqueue *vq) { struct rte_mbuf *cookie = NULL; int32_t idx = 0; diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h index 1088bf08fc..1304d5e4ea 100644 --- a/drivers/net/zxdh/zxdh_queue.h +++ b/drivers/net/zxdh/zxdh_queue.h @@ -206,11 +206,11 @@ struct zxdh_tx_region { }; static inline size_t -vring_size(struct zxdh_hw *hw, uint32_t num, unsigned long align) +zxdh_vring_size(struct zxdh_hw *hw, uint32_t num, unsigned long align) { size_t size; - if (vtpci_packed_queue(hw)) { + if (zxdh_pci_packed_queue(hw)) { size = num * sizeof(struct zxdh_vring_packed_desc); size += sizeof(struct zxdh_vring_packed_desc_event); size = RTE_ALIGN_CEIL(size, align); @@ -226,7 +226,7 @@ vring_size(struct zxdh_hw *hw, uint32_t num, unsigned long align) } static inline void -vring_init_packed(struct zxdh_vring_packed *vr, uint8_t *p, +zxdh_vring_init_packed(struct zxdh_vring_packed *vr, uint8_t *p, unsigned long align, uint32_t num) { vr->num = num; @@ -238,7 +238,7 @@ vring_init_packed(struct zxdh_vring_packed *vr, uint8_t *p, } static inline void -vring_desc_init_packed(struct zxdh_virtqueue *vq, int32_t n) +zxdh_vring_desc_init_packed(struct zxdh_virtqueue *vq, int32_t n) { int32_t i = 0; @@ -251,7 +251,7 @@ vring_desc_init_packed(struct zxdh_virtqueue *vq, int32_t n) } static inline void -vring_desc_init_indirect_packed(struct zxdh_vring_packed_desc *dp, int32_t n) +zxdh_vring_desc_init_indirect_packed(struct zxdh_vring_packed_desc *dp, int32_t n) { int32_t i = 0; @@ -262,7 +262,7 @@ vring_desc_init_indirect_packed(struct zxdh_vring_packed_desc *dp, int32_t n) } static inline void -virtqueue_disable_intr(struct zxdh_virtqueue *vq) +zxdh_queue_disable_intr(struct zxdh_virtqueue *vq) { if (vq->vq_packed.event_flags_shadow != ZXDH_RING_EVENT_FLAGS_DISABLE) { vq->vq_packed.event_flags_shadow = ZXDH_RING_EVENT_FLAGS_DISABLE; @@ -270,7 +270,7 @@ virtqueue_disable_intr(struct zxdh_virtqueue *vq) } } -struct rte_mbuf *zxdh_virtqueue_detach_unused(struct zxdh_virtqueue *vq); +struct rte_mbuf *zxdh_queue_detach_unused(struct zxdh_virtqueue *vq); int32_t zxdh_free_queues(struct rte_eth_dev *dev); int32_t zxdh_get_queue_type(uint16_t vtpci_queue_idx); -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 79562 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
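Condensed from zxdh_np_dtb_res_init() in the patch above, the sketch below shows the calling convention for zxdh_np_host_init(): allocate the init-control block with room for the trailing dump-address array, fill in the vport and BAR geometry, and hand it to the NP layer. The memzone setup for the DTB config/dump tables and most error handling are omitted; queue_id starts at 0xff and the NP layer is expected to pick the real DTB queue, as the patch's flow suggests.

#include <errno.h>
#include <rte_malloc.h>
#include "zxdh_np.h"

static int
np_host_init_sketch(uint32_t vport, uint64_t bar0_va,
		    uint32_t np_bar_offset, uint32_t np_bar_len)
{
	ZXDH_DEV_INIT_CTRL_T *ctrl;
	int ret;

	/* trailing array sized as in the patch (256 entries) */
	ctrl = rte_zmalloc(NULL, sizeof(*ctrl) +
			   sizeof(ZXDH_DTB_ADDR_INFO_T) * 256, 0);
	if (ctrl == NULL)
		return -ENOMEM;

	ctrl->queue_id = 0xff;			/* filled in by the NP layer */
	ctrl->vport = vport;
	ctrl->pcie_vir_addr = (uint32_t)bar0_va;	/* truncated as in the patch */
	ctrl->np_bar_offset = np_bar_offset;
	ctrl->np_bar_len = np_bar_len;

	ret = zxdh_np_host_init(0, ctrl);	/* dev_id 0, as the PMD uses */
	rte_free(ctrl);
	return ret;
}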
* [PATCH v3 02/15] net/zxdh: zxdh np uninit implementation 2024-12-17 11:41 ` [PATCH v3 00/15] net/zxdh: updated net zxdh driver Junlong Wang 2024-12-17 11:41 ` [PATCH v3 01/15] net/zxdh: zxdh np init implementation Junlong Wang @ 2024-12-17 11:41 ` Junlong Wang 2024-12-17 11:41 ` [PATCH v3 03/15] net/zxdh: port tables init implementations Junlong Wang ` (12 subsequent siblings) 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-17 11:41 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 19520 bytes --] (np)network processor release resources in host. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/zxdh_ethdev.c | 48 ++++ drivers/net/zxdh/zxdh_np.c | 470 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_np.h | 107 ++++++++ 3 files changed, 625 insertions(+) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index b8f4415e00..4e114d95da 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -841,6 +841,51 @@ zxdh_dev_configure(struct rte_eth_dev *dev) return ret; } +static void +zxdh_np_dtb_data_res_free(struct zxdh_hw *hw) +{ + struct rte_eth_dev *dev = hw->eth_dev; + int ret; + int i; + + if (g_dtb_data.init_done && g_dtb_data.bind_device == dev) { + ret = zxdh_np_online_uninit(0, dev->data->name, g_dtb_data.queueid); + if (ret) + PMD_DRV_LOG(ERR, "%s dpp_np_online_uninstall failed", dev->data->name); + + if (g_dtb_data.dtb_table_conf_mz) + rte_memzone_free(g_dtb_data.dtb_table_conf_mz); + + if (g_dtb_data.dtb_table_dump_mz) { + rte_memzone_free(g_dtb_data.dtb_table_dump_mz); + g_dtb_data.dtb_table_dump_mz = NULL; + } + + for (i = 0; i < ZXDH_MAX_BASE_DTB_TABLE_COUNT; i++) { + if (g_dtb_data.dtb_table_bulk_dump_mz[i]) { + rte_memzone_free(g_dtb_data.dtb_table_bulk_dump_mz[i]); + g_dtb_data.dtb_table_bulk_dump_mz[i] = NULL; + } + } + g_dtb_data.init_done = 0; + g_dtb_data.bind_device = NULL; + } + if (zxdh_shared_data != NULL) + zxdh_shared_data->np_init_done = 0; +} + +static void +zxdh_np_uninit(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + if (!g_dtb_data.init_done && !g_dtb_data.dev_refcnt) + return; + + if (--g_dtb_data.dev_refcnt == 0) + zxdh_np_dtb_data_res_free(hw); +} + static int zxdh_dev_close(struct rte_eth_dev *dev) { @@ -848,6 +893,7 @@ zxdh_dev_close(struct rte_eth_dev *dev) int ret = 0; zxdh_intr_release(dev); + zxdh_np_uninit(dev); zxdh_pci_reset(hw); zxdh_dev_free_mbufs(dev); @@ -1010,6 +1056,7 @@ zxdh_np_dtb_res_init(struct rte_eth_dev *dev) return 0; free_res: + zxdh_np_dtb_data_res_free(hw); rte_free(dpp_ctrl); return ret; } @@ -1177,6 +1224,7 @@ zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) err_zxdh_init: zxdh_intr_release(eth_dev); + zxdh_np_uninit(eth_dev); zxdh_bar_msg_chan_exit(); rte_free(eth_dev->data->mac_addrs); eth_dev->data->mac_addrs = NULL; diff --git a/drivers/net/zxdh/zxdh_np.c b/drivers/net/zxdh/zxdh_np.c index e44d7ff501..28728b0c68 100644 --- a/drivers/net/zxdh/zxdh_np.c +++ b/drivers/net/zxdh/zxdh_np.c @@ -18,10 +18,21 @@ static ZXDH_DEV_MGR_T g_dev_mgr; static ZXDH_SDT_MGR_T g_sdt_mgr; ZXDH_PPU_CLS_BITMAP_T g_ppu_cls_bit_map[ZXDH_DEV_CHANNEL_MAX]; ZXDH_DTB_MGR_T *p_dpp_dtb_mgr[ZXDH_DEV_CHANNEL_MAX]; +ZXDH_RISCV_DTB_MGR *p_riscv_dtb_queue_mgr[ZXDH_DEV_CHANNEL_MAX]; +ZXDH_TLB_MGR_T *g_p_dpp_tlb_mgr[ZXDH_DEV_CHANNEL_MAX]; +ZXDH_REG_T g_dpp_reg_info[4]; #define ZXDH_SDT_MGR_PTR_GET() (&g_sdt_mgr) #define ZXDH_SDT_SOFT_TBL_GET(id) 
(g_sdt_mgr.sdt_tbl_array[id]) +#define ZXDH_COMM_MASK_BIT(_bitnum_)\ + (0x1U << (_bitnum_)) + +#define ZXDH_COMM_GET_BIT_MASK(_inttype_, _bitqnt_)\ + ((_inttype_)(((_bitqnt_) < 32))) + +#define ZXDH_REG_DATA_MAX (128) + #define ZXDH_COMM_CHECK_DEV_POINT(dev_id, point)\ do {\ if (NULL == (point)) {\ @@ -338,3 +349,462 @@ zxdh_np_host_init(uint32_t dev_id, return 0; } + +static ZXDH_RISCV_DTB_MGR * +zxdh_np_riscv_dtb_queue_mgr_get(uint32_t dev_id) +{ + if (dev_id >= ZXDH_DEV_CHANNEL_MAX) + return NULL; + else + return p_riscv_dtb_queue_mgr[dev_id]; +} + +static uint32_t +zxdh_np_riscv_dtb_mgr_queue_info_delete(uint32_t dev_id, uint32_t queue_id) +{ + ZXDH_RISCV_DTB_MGR *p_riscv_dtb_mgr = NULL; + + p_riscv_dtb_mgr = zxdh_np_riscv_dtb_queue_mgr_get(dev_id); + if (p_riscv_dtb_mgr == NULL) + return 1; + + p_riscv_dtb_mgr->queue_alloc_count--; + p_riscv_dtb_mgr->queue_user_info[queue_id].alloc_flag = 0; + p_riscv_dtb_mgr->queue_user_info[queue_id].queue_id = 0xFF; + p_riscv_dtb_mgr->queue_user_info[queue_id].vport = 0; + memset(p_riscv_dtb_mgr->queue_user_info[queue_id].user_name, 0, ZXDH_PORT_NAME_MAX); + + return 0; +} + +static uint32_t +zxdh_np_dev_get_dev_type(uint32_t dev_id) +{ + ZXDH_DEV_MGR_T *p_dev_mgr = NULL; + ZXDH_DEV_CFG_T *p_dev_info = NULL; + + p_dev_mgr = &g_dev_mgr; + p_dev_info = p_dev_mgr->p_dev_array[dev_id]; + + if (p_dev_info == NULL) + return 0xffff; + + return p_dev_info->dev_type; +} + +static uint32_t +zxdh_np_comm_read_bits(uint8_t *p_base, uint32_t base_size_bit, + uint32_t *p_data, uint32_t start_bit, uint32_t end_bit) +{ + uint32_t start_byte_index; + uint32_t end_byte_index; + uint32_t byte_num; + uint32_t buffer_size; + uint32_t len; + + if (0 != (base_size_bit % 8)) + return 1; + + if (start_bit > end_bit) + return 1; + + if (base_size_bit < end_bit) + return 1; + + len = end_bit - start_bit + 1; + buffer_size = base_size_bit / 8; + while (0 != (buffer_size & (buffer_size - 1))) + buffer_size += 1; + + *p_data = 0; + end_byte_index = (end_bit >> 3); + start_byte_index = (start_bit >> 3); + + if (start_byte_index == end_byte_index) { + *p_data = (uint32_t)(((p_base[start_byte_index] >> (7U - (end_bit & 7))) + & (0xff >> (8U - len))) & 0xff); + return 0; + } + + if (start_bit & 7) { + *p_data = (p_base[start_byte_index] & (0xff >> (start_bit & 7))) & UINT8_MAX; + start_byte_index++; + } + + for (byte_num = start_byte_index; byte_num < end_byte_index; byte_num++) { + *p_data <<= 8; + *p_data += p_base[byte_num]; + } + + *p_data <<= 1 + (end_bit & 7); + *p_data += ((p_base[byte_num & (buffer_size - 1)] & (0xff << (7 - (end_bit & 7)))) >> + (7 - (end_bit & 7))) & 0xff; + + return 0; +} + +static uint32_t +zxdh_np_comm_read_bits_ex(uint8_t *p_base, uint32_t base_size_bit, + uint32_t *p_data, uint32_t msb_start_pos, uint32_t len) +{ + uint32_t rtn; + + rtn = zxdh_np_comm_read_bits(p_base, + base_size_bit, + p_data, + (base_size_bit - 1 - msb_start_pos), + (base_size_bit - 1 - msb_start_pos + len - 1)); + return rtn; +} + +static uint32_t +zxdh_np_reg_read(uint32_t dev_id, uint32_t reg_no, + uint32_t m_offset, uint32_t n_offset, void *p_data) +{ + uint32_t p_buff[ZXDH_REG_DATA_MAX] = {0}; + ZXDH_REG_T *p_reg_info = NULL; + ZXDH_FIELD_T *p_field_info = NULL; + uint32_t rc = 0; + uint32_t i; + + if (reg_no < 4) { + p_reg_info = &g_dpp_reg_info[reg_no]; + p_field_info = p_reg_info->p_fields; + for (i = 0; i < p_reg_info->field_num; i++) { + rc = zxdh_np_comm_read_bits_ex((uint8_t *)p_buff, + p_reg_info->width * 8, + (uint32_t *)p_data + i, + p_field_info[i].msb_pos, + 
p_field_info[i].len); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxic_comm_read_bits_ex"); + PMD_DRV_LOG(ERR, "dev_id %d(%d)(%d)is ok!", dev_id, m_offset, n_offset); + } + } + return rc; +} + +static uint32_t +zxdh_np_dtb_queue_vm_info_get(uint32_t dev_id, + uint32_t queue_id, + ZXDH_DTB_QUEUE_VM_INFO_T *p_vm_info) +{ + ZXDH_DTB4K_DTB_ENQ_CFG_EPID_V_FUNC_NUM_0_127_T vm_info = {0}; + uint32_t rc; + + rc = zxdh_np_reg_read(dev_id, ZXDH_DTB_CFG_EPID_V_FUNC_NUM, + 0, queue_id, &vm_info); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rc, "zxdh_reg_read"); + + p_vm_info->dbi_en = vm_info.dbi_en; + p_vm_info->queue_en = vm_info.queue_en; + p_vm_info->epid = vm_info.cfg_epid; + p_vm_info->vector = vm_info.cfg_vector; + p_vm_info->vfunc_num = vm_info.cfg_vfunc_num; + p_vm_info->func_num = vm_info.cfg_func_num; + p_vm_info->vfunc_active = vm_info.cfg_vfunc_active; + + return 0; +} + +static uint32_t +zxdh_np_comm_write_bits(uint8_t *p_base, uint32_t base_size_bit, + uint32_t data, uint32_t start_bit, uint32_t end_bit) +{ + uint32_t start_byte_index; + uint32_t end_byte_index; + uint8_t mask_value; + uint32_t byte_num; + uint32_t buffer_size; + + if (0 != (base_size_bit % 8)) + return 1; + + if (start_bit > end_bit) + return 1; + + if (base_size_bit < end_bit) + return 1; + + buffer_size = base_size_bit / 8; + + while (0 != (buffer_size & (buffer_size - 1))) + buffer_size += 1; + + end_byte_index = (end_bit >> 3); + start_byte_index = (start_bit >> 3); + + if (start_byte_index == end_byte_index) { + mask_value = ((0xFE << (7 - (start_bit & 7))) & 0xff); + mask_value |= (((1 << (7 - (end_bit & 7))) - 1) & 0xff); + p_base[end_byte_index] &= mask_value; + p_base[end_byte_index] |= (((data << (7 - (end_bit & 7)))) & 0xff); + return 0; + } + + if (7 != (end_bit & 7)) { + mask_value = ((0x7f >> (end_bit & 7)) & 0xff); + p_base[end_byte_index] &= mask_value; + p_base[end_byte_index] |= ((data << (7 - (end_bit & 7))) & 0xff); + end_byte_index--; + data >>= 1 + (end_bit & 7); + } + + for (byte_num = end_byte_index; byte_num > start_byte_index; byte_num--) { + p_base[byte_num & (buffer_size - 1)] = data & 0xff; + data >>= 8; + } + + mask_value = ((0xFE << (7 - (start_bit & 7))) & 0xff); + p_base[byte_num] &= mask_value; + p_base[byte_num] |= data; + + return 0; +} + +static uint32_t +zxdh_np_comm_write_bits_ex(uint8_t *p_base, + uint32_t base_size_bit, + uint32_t data, + uint32_t msb_start_pos, + uint32_t len) +{ + uint32_t rtn; + + rtn = zxdh_np_comm_write_bits(p_base, + base_size_bit, + data, + (base_size_bit - 1 - msb_start_pos), + (base_size_bit - 1 - msb_start_pos + len - 1)); + + return rtn; +} + +static uint32_t +zxdh_np_reg_write(uint32_t dev_id, uint32_t reg_no, + uint32_t m_offset, uint32_t n_offset, void *p_data) +{ + uint32_t p_buff[ZXDH_REG_DATA_MAX] = {0}; + ZXDH_REG_T *p_reg_info = NULL; + ZXDH_FIELD_T *p_field_info = NULL; + uint32_t temp_data; + uint32_t rc; + uint32_t i; + + if (reg_no < 4) { + p_reg_info = &g_dpp_reg_info[reg_no]; + p_field_info = p_reg_info->p_fields; + + for (i = 0; i < p_reg_info->field_num; i++) { + if (p_field_info[i].len <= 32) { + temp_data = *((uint32_t *)p_data + i); + rc = zxdh_np_comm_write_bits_ex((uint8_t *)p_buff, + p_reg_info->width * 8, + temp_data, + p_field_info[i].msb_pos, + p_field_info[i].len); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_comm_write_bits_ex"); + PMD_DRV_LOG(ERR, "dev_id %d(%d)(%d)is ok!", + dev_id, m_offset, n_offset); + } + } + } + + return 0; +} + +static uint32_t +zxdh_np_dtb_queue_vm_info_set(uint32_t dev_id, + uint32_t queue_id, + ZXDH_DTB_QUEUE_VM_INFO_T 
*p_vm_info) +{ + uint32_t rc = 0; + ZXDH_DTB4K_DTB_ENQ_CFG_EPID_V_FUNC_NUM_0_127_T vm_info = {0}; + + vm_info.dbi_en = p_vm_info->dbi_en; + vm_info.queue_en = p_vm_info->queue_en; + vm_info.cfg_epid = p_vm_info->epid; + vm_info.cfg_vector = p_vm_info->vector; + vm_info.cfg_vfunc_num = p_vm_info->vfunc_num; + vm_info.cfg_func_num = p_vm_info->func_num; + vm_info.cfg_vfunc_active = p_vm_info->vfunc_active; + + rc = zxdh_np_reg_write(dev_id, ZXDH_DTB_CFG_EPID_V_FUNC_NUM, + 0, queue_id, &vm_info); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rc, "zxdh_reg_write"); + + return rc; +} + +static uint32_t +zxdh_np_dtb_queue_enable_set(uint32_t dev_id, + uint32_t queue_id, + uint32_t enable) +{ + ZXDH_DTB_QUEUE_VM_INFO_T vm_info = {0}; + uint32_t rc; + + rc = zxdh_np_dtb_queue_vm_info_get(dev_id, queue_id, &vm_info); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_dtb_queue_vm_info_get"); + + vm_info.queue_en = enable; + rc = zxdh_np_dtb_queue_vm_info_set(dev_id, queue_id, &vm_info); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_dtb_queue_vm_info_set"); + + return rc; +} + +static uint32_t +zxdh_np_riscv_dpp_dtb_queue_id_release(uint32_t dev_id, + char name[ZXDH_PORT_NAME_MAX], uint32_t queue_id) +{ + ZXDH_RISCV_DTB_MGR *p_riscv_dtb_mgr = NULL; + + p_riscv_dtb_mgr = zxdh_np_riscv_dtb_queue_mgr_get(dev_id); + if (p_riscv_dtb_mgr == NULL) + return 1; + + if (zxdh_np_dev_get_dev_type(dev_id) == ZXDH_DEV_TYPE_SIM) + return 0; + + if (p_riscv_dtb_mgr->queue_user_info[queue_id].alloc_flag != 1) { + PMD_DRV_LOG(ERR, "queue %d not alloc!", queue_id); + return 2; + } + + if (strcmp(p_riscv_dtb_mgr->queue_user_info[queue_id].user_name, name) != 0) { + PMD_DRV_LOG(ERR, "queue %d name %s error!", queue_id, name); + return 3; + } + zxdh_np_dtb_queue_enable_set(dev_id, queue_id, 0); + zxdh_np_riscv_dtb_mgr_queue_info_delete(dev_id, queue_id); + + return 0; +} + +static uint32_t +zxdh_np_dtb_queue_unused_item_num_get(uint32_t dev_id, + uint32_t queue_id, + uint32_t *p_item_num) +{ + uint32_t rc; + + if (zxdh_np_dev_get_dev_type(dev_id) == ZXDH_DEV_TYPE_SIM) { + *p_item_num = 32; + return 0; + } + + rc = zxdh_np_reg_read(dev_id, ZXDH_DTB_INFO_QUEUE_BUF_SPACE, + 0, queue_id, p_item_num); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rc, "dpp_reg_read"); + return rc; +} + +static uint32_t +zxdh_np_dtb_queue_id_free(uint32_t dev_id, + uint32_t queue_id) +{ + uint32_t item_num = 0; + ZXDH_DTB_MGR_T *p_dtb_mgr = NULL; + uint32_t rc; + + p_dtb_mgr = p_dpp_dtb_mgr[dev_id]; + if (p_dtb_mgr == NULL) + return 1; + + rc = zxdh_np_dtb_queue_unused_item_num_get(dev_id, queue_id, &item_num); + + p_dtb_mgr->queue_info[queue_id].init_flag = 0; + p_dtb_mgr->queue_info[queue_id].vport = 0; + p_dtb_mgr->queue_info[queue_id].vector = 0; + + return rc; +} + +static uint32_t +zxdh_np_dtb_queue_release(uint32_t devid, + char pname[32], + uint32_t queueid) +{ + uint32_t rc; + + ZXDH_COMM_CHECK_DEV_POINT(devid, pname); + + rc = zxdh_np_riscv_dpp_dtb_queue_id_release(devid, pname, queueid); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_riscv_dpp_dtb_queue_id_release"); + + rc = zxdh_np_dtb_queue_id_free(devid, queueid); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_dtb_queue_id_free"); + + return rc; +} + +static void +zxdh_np_dtb_mgr_destroy(uint32_t dev_id) +{ + if (p_dpp_dtb_mgr[dev_id] != NULL) { + free(p_dpp_dtb_mgr[dev_id]); + p_dpp_dtb_mgr[dev_id] = NULL; + } +} + +static void +zxdh_np_tlb_mgr_destroy(uint32_t dev_id) +{ + if (g_p_dpp_tlb_mgr[dev_id] != NULL) { + free(g_p_dpp_tlb_mgr[dev_id]); + g_p_dpp_tlb_mgr[dev_id] = NULL; + } +} + +static void 
+zxdh_np_sdt_mgr_destroy(uint32_t dev_id) +{ + ZXDH_SDT_SOFT_TABLE_T *p_sdt_tbl_temp = NULL; + ZXDH_SDT_MGR_T *p_sdt_mgr = NULL; + + p_sdt_tbl_temp = ZXDH_SDT_SOFT_TBL_GET(dev_id); + p_sdt_mgr = ZXDH_SDT_MGR_PTR_GET(); + + if (p_sdt_tbl_temp != NULL) + free(p_sdt_tbl_temp); + + ZXDH_SDT_SOFT_TBL_GET(dev_id) = NULL; + + p_sdt_mgr->channel_num--; +} + +static void +zxdh_np_dev_del(uint32_t dev_id) +{ + ZXDH_DEV_CFG_T *p_dev_info = NULL; + ZXDH_DEV_MGR_T *p_dev_mgr = NULL; + + p_dev_mgr = &g_dev_mgr; + p_dev_info = p_dev_mgr->p_dev_array[dev_id]; + + if (p_dev_info != NULL) { + free(p_dev_info); + p_dev_mgr->p_dev_array[dev_id] = NULL; + p_dev_mgr->device_num--; + } +} + +int +zxdh_np_online_uninit(uint32_t dev_id, + char *port_name, + uint32_t queue_id) +{ + uint32_t rc; + + rc = zxdh_np_dtb_queue_release(dev_id, port_name, queue_id); + if (rc != 0) + PMD_DRV_LOG(ERR, "%s:dtb release error," + "port name %s queue id %d. ", __func__, port_name, queue_id); + + zxdh_np_dtb_mgr_destroy(dev_id); + zxdh_np_tlb_mgr_destroy(dev_id); + zxdh_np_sdt_mgr_destroy(dev_id); + zxdh_np_dev_del(dev_id); + + return 0; +} diff --git a/drivers/net/zxdh/zxdh_np.h b/drivers/net/zxdh/zxdh_np.h index 573eafe796..dc0e867827 100644 --- a/drivers/net/zxdh/zxdh_np.h +++ b/drivers/net/zxdh/zxdh_np.h @@ -47,6 +47,11 @@ #define ZXDH_INIT_FLAG_TM_IMEM_FLAG (1 << 9) #define ZXDH_INIT_FLAG_AGENT_FLAG (1 << 10) +#define ZXDH_ACL_TBL_ID_MIN (0) +#define ZXDH_ACL_TBL_ID_MAX (7) +#define ZXDH_ACL_TBL_ID_NUM (8U) +#define ZXDH_ACL_BLOCK_NUM (8U) + typedef enum zxdh_module_init_e { ZXDH_MODULE_INIT_NPPU = 0, ZXDH_MODULE_INIT_PPU, @@ -67,6 +72,15 @@ typedef enum zxdh_dev_type_e { ZXDH_DEV_TYPE_INVALID, } ZXDH_DEV_TYPE_E; +typedef enum zxdh_reg_info_e { + ZXDH_DTB_CFG_QUEUE_DTB_HADDR = 0, + ZXDH_DTB_CFG_QUEUE_DTB_LADDR = 1, + ZXDH_DTB_CFG_QUEUE_DTB_LEN = 2, + ZXDH_DTB_INFO_QUEUE_BUF_SPACE = 3, + ZXDH_DTB_CFG_EPID_V_FUNC_NUM = 4, + ZXDH_REG_ENUM_MAX_VALUE +} ZXDH_REG_INFO_E; + typedef enum zxdh_dev_access_type_e { ZXDH_DEV_ACCESS_TYPE_PCIE = 0, ZXDH_DEV_ACCESS_TYPE_RISCV = 1, @@ -79,6 +93,26 @@ typedef enum zxdh_dev_agent_flag_e { ZXDH_DEV_AGENT_INVALID, } ZXDH_DEV_AGENT_FLAG_E; +typedef enum zxdh_acl_pri_mode_e { + ZXDH_ACL_PRI_EXPLICIT = 1, + ZXDH_ACL_PRI_IMPLICIT, + ZXDH_ACL_PRI_SPECIFY, + ZXDH_ACL_PRI_INVALID, +} ZXDH_ACL_PRI_MODE_E; + +typedef struct zxdh_d_node { + void *data; + struct zxdh_d_node *prev; + struct zxdh_d_node *next; +} ZXDH_D_NODE; + +typedef struct zxdh_d_head { + uint32_t used; + uint32_t maxnum; + ZXDH_D_NODE *p_next; + ZXDH_D_NODE *p_prev; +} ZXDH_D_HEAD; + typedef struct zxdh_dtb_tab_up_user_addr_t { uint32_t user_flag; uint64_t phy_addr; @@ -193,6 +227,79 @@ typedef struct zxdh_sdt_mgr_t { ZXDH_SDT_SOFT_TABLE_T *sdt_tbl_array[ZXDH_DEV_CHANNEL_MAX]; } ZXDH_SDT_MGR_T; +typedef struct zxdh_riscv_dtb_queue_USER_info_t { + uint32_t alloc_flag; + uint32_t queue_id; + uint32_t vport; + char user_name[ZXDH_PORT_NAME_MAX]; +} ZXDH_RISCV_DTB_QUEUE_USER_INFO_T; + +typedef struct zxdh_riscv_dtb_mgr { + uint32_t queue_alloc_count; + uint32_t queue_index; + ZXDH_RISCV_DTB_QUEUE_USER_INFO_T queue_user_info[ZXDH_DTB_QUEUE_NUM_MAX]; +} ZXDH_RISCV_DTB_MGR; + +typedef struct zxdh_dtb_queue_vm_info_t { + uint32_t dbi_en; + uint32_t queue_en; + uint32_t epid; + uint32_t vfunc_num; + uint32_t vector; + uint32_t func_num; + uint32_t vfunc_active; +} ZXDH_DTB_QUEUE_VM_INFO_T; + +typedef struct zxdh_dtb4k_dtb_enq_cfg_epid_v_func_num_0_127_t { + uint32_t dbi_en; + uint32_t queue_en; + uint32_t cfg_epid; + uint32_t cfg_vfunc_num; + 
uint32_t cfg_vector; + uint32_t cfg_func_num; + uint32_t cfg_vfunc_active; +} ZXDH_DTB4K_DTB_ENQ_CFG_EPID_V_FUNC_NUM_0_127_T; + + +typedef uint32_t (*ZXDH_REG_WRITE)(uint32_t dev_id, uint32_t addr, uint32_t *p_data); +typedef uint32_t (*ZXDH_REG_READ)(uint32_t dev_id, uint32_t addr, uint32_t *p_data); + +typedef struct zxdh_field_t { + const char *p_name; + uint32_t flags; + uint16_t msb_pos; + + uint16_t len; + uint32_t default_value; + uint32_t default_step; +} ZXDH_FIELD_T; + +typedef struct zxdh_reg_t { + const char *reg_name; + uint32_t reg_no; + uint32_t module_no; + uint32_t flags; + uint32_t array_type; + uint32_t addr; + uint32_t width; + uint32_t m_size; + uint32_t n_size; + uint32_t m_step; + uint32_t n_step; + uint32_t field_num; + ZXDH_FIELD_T *p_fields; + + ZXDH_REG_WRITE p_write_fun; + ZXDH_REG_READ p_read_fun; +} ZXDH_REG_T; + +typedef struct zxdh_tlb_mgr_t { + uint32_t entry_num; + uint32_t va_width; + uint32_t pa_width; +} ZXDH_TLB_MGR_T; + int zxdh_np_host_init(uint32_t dev_id, ZXDH_DEV_INIT_CTRL_T *p_dev_init_ctrl); +int zxdh_np_online_uninit(uint32_t dev_id, char *port_name, uint32_t queue_id); #endif /* ZXDH_NP_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 45109 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
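
Editor's note on the register-access idiom above: the enable/disable path never writes a single field. zxdh_np_dtb_queue_enable_set() reads the whole ZXDH_DTB_QUEUE_VM_INFO_T record, flips queue_en, and writes the record back through ZXDH_DTB_CFG_EPID_V_FUNC_NUM. A minimal sketch of the same read-modify-write idiom for a different field follows; example_dtb_queue_dbi_set() is a hypothetical helper, not part of the patch, and it reuses the static get/set pair inside zxdh_np.c:

static uint32_t
example_dtb_queue_dbi_set(uint32_t dev_id, uint32_t queue_id, uint32_t enable)
{
	ZXDH_DTB_QUEUE_VM_INFO_T vm_info = {0};
	uint32_t rc;

	/* Read back the full per-queue VM record ... */
	rc = zxdh_np_dtb_queue_vm_info_get(dev_id, queue_id, &vm_info);
	if (rc != 0)
		return rc;

	/* ... change only the field of interest ... */
	vm_info.dbi_en = !!enable;

	/* ... and write the whole record back: the underlying register
	 * write consumes every field of the VM info block. */
	return zxdh_np_dtb_queue_vm_info_set(dev_id, queue_id, &vm_info);
}
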
* [PATCH v3 03/15] net/zxdh: port tables init implementations 2024-12-17 11:41 ` [PATCH v3 00/15] net/zxdh: updated net zxdh driver Junlong Wang 2024-12-17 11:41 ` [PATCH v3 01/15] net/zxdh: zxdh np init implementation Junlong Wang 2024-12-17 11:41 ` [PATCH v3 02/15] net/zxdh: zxdh np uninit implementation Junlong Wang @ 2024-12-17 11:41 ` Junlong Wang 2024-12-17 11:41 ` [PATCH v3 04/15] net/zxdh: port tables unint implementations Junlong Wang ` (11 subsequent siblings) 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-17 11:41 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 42795 bytes --] insert port tables in host. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/meson.build | 1 + drivers/net/zxdh/zxdh_ethdev.c | 24 ++ drivers/net/zxdh/zxdh_msg.c | 65 ++++ drivers/net/zxdh/zxdh_msg.h | 72 ++++ drivers/net/zxdh/zxdh_np.c | 648 ++++++++++++++++++++++++++++++++- drivers/net/zxdh/zxdh_np.h | 210 +++++++++++ drivers/net/zxdh/zxdh_pci.h | 2 + drivers/net/zxdh/zxdh_tables.c | 105 ++++++ drivers/net/zxdh/zxdh_tables.h | 148 ++++++++ 9 files changed, 1274 insertions(+), 1 deletion(-) create mode 100644 drivers/net/zxdh/zxdh_tables.c create mode 100644 drivers/net/zxdh/zxdh_tables.h diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build index ab24a3145c..5b3af87c5b 100644 --- a/drivers/net/zxdh/meson.build +++ b/drivers/net/zxdh/meson.build @@ -20,4 +20,5 @@ sources = files( 'zxdh_pci.c', 'zxdh_queue.c', 'zxdh_np.c', + 'zxdh_tables.c', ) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 4e114d95da..ff44816384 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -14,6 +14,7 @@ #include "zxdh_common.h" #include "zxdh_queue.h" #include "zxdh_np.h" +#include "zxdh_tables.h" struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; struct zxdh_shared_data *zxdh_shared_data; @@ -1144,6 +1145,25 @@ zxdh_np_init(struct rte_eth_dev *eth_dev) return 0; } +static int +zxdh_tables_init(struct rte_eth_dev *dev) +{ + int ret = 0; + + ret = zxdh_port_attr_init(dev); + if (ret) { + PMD_DRV_LOG(ERR, "zxdh_port_attr_init failed"); + return ret; + } + + ret = zxdh_panel_table_init(dev); + if (ret) { + PMD_DRV_LOG(ERR, " panel table init failed"); + return ret; + } + return ret; +} + static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) { @@ -1220,6 +1240,10 @@ zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) if (ret != 0) goto err_zxdh_init; + ret = zxdh_tables_init(eth_dev); + if (ret != 0) + goto err_zxdh_init; + return ret; err_zxdh_init: diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c index dd7a518a51..aa2e10fd45 100644 --- a/drivers/net/zxdh/zxdh_msg.c +++ b/drivers/net/zxdh/zxdh_msg.c @@ -14,6 +14,7 @@ #include "zxdh_ethdev.h" #include "zxdh_logs.h" #include "zxdh_msg.h" +#include "zxdh_pci.h" #define ZXDH_REPS_INFO_FLAG_USABLE 0x00 #define ZXDH_BAR_SEQID_NUM_MAX 256 @@ -100,6 +101,7 @@ #define ZXDH_BAR_CHAN_MSG_EMEC 1 #define ZXDH_BAR_CHAN_MSG_NO_ACK 0 #define ZXDH_BAR_CHAN_MSG_ACK 1 +#define ZXDH_MSG_REPS_OK 0xff uint8_t subchan_id_tbl[ZXDH_BAR_MSG_SRC_NUM][ZXDH_BAR_MSG_DST_NUM] = { {ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_SEND}, @@ -1079,3 +1081,66 @@ int zxdh_get_bar_offset(struct zxdh_bar_offset_params *paras, res->bar_length = recv_msg.offset_reps.length; return ZXDH_BAR_MSG_OK; } + +int zxdh_vf_send_msg_to_pf(struct rte_eth_dev *dev, void *msg_req, + 
uint16_t msg_req_len, void *reply, uint16_t reply_len) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_recviver_mem result = {0}; + struct zxdh_msg_reply_info reply_info = {0}; + int ret = 0; + + if (reply) { + RTE_ASSERT(reply_len < sizeof(struct zxdh_msg_reply_info)); + result.recv_buffer = reply; + result.buffer_len = reply_len; + } else { + result.recv_buffer = &reply_info; + result.buffer_len = sizeof(reply_info); + } + + struct zxdh_msg_reply_head *reply_head = + &(((struct zxdh_msg_reply_info *)result.recv_buffer)->reply_head); + struct zxdh_msg_reply_body *reply_body = + &(((struct zxdh_msg_reply_info *)result.recv_buffer)->reply_body); + + struct zxdh_pci_bar_msg in = { + .virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + + ZXDH_MSG_CHAN_PFVFSHARE_OFFSET), + .payload_addr = msg_req, + .payload_len = msg_req_len, + .src = ZXDH_MSG_CHAN_END_VF, + .dst = ZXDH_MSG_CHAN_END_PF, + .module_id = ZXDH_MODULE_BAR_MSG_TO_PF, + .src_pcieid = hw->pcie_id, + .dst_pcieid = ZXDH_PF_PCIE_ID(hw->pcie_id), + }; + + ret = zxdh_bar_chan_sync_msg_send(&in, &result); + if (ret != ZXDH_BAR_MSG_OK) { + PMD_MSG_LOG(ERR, + "vf[%d] send bar msg to pf failed, ret %d", hw->vport.vfid, ret); + return -1; + } + if (reply_head->flag != ZXDH_MSG_REPS_OK) { + PMD_MSG_LOG(ERR, "vf[%d] get pf reply failed: reply_head flag 0x%x (0xff is OK), reply len %d", + hw->vport.vfid, reply_head->flag, reply_head->reps_len); + return -1; + } + if (reply_body->flag != ZXDH_REPS_SUCC) { + PMD_MSG_LOG(ERR, "vf[%d] msg processing failed", hw->vfid); + return -1; + } + return 0; +} + +void zxdh_msg_head_build(struct zxdh_hw *hw, enum zxdh_msg_type type, + struct zxdh_msg_info *msg_info) +{ + struct zxdh_msg_head *msghead = &msg_info->msg_head; + + msghead->msg_type = type; + msghead->vport = hw->vport.vport; + msghead->vf_id = hw->vport.vfid; + msghead->pcieid = hw->pcie_id; +} diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index fbc79e8f9d..b7b17b8696 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -33,6 +33,19 @@ #define ZXDH_BAR_MSG_PAYLOAD_MAX_LEN \ (ZXDH_BAR_MSG_ADDR_CHAN_INTERVAL - sizeof(struct zxdh_bar_msg_header)) +#define ZXDH_MSG_ADDR_CHAN_INTERVAL (2 * 1024) /* channel size */ +#define ZXDH_MSG_PAYLOAD_MAX_LEN \ + (ZXDH_MSG_ADDR_CHAN_INTERVAL - sizeof(struct zxdh_bar_msg_header)) + +#define ZXDH_MSG_REPLYBODY_HEAD sizeof(enum zxdh_reps_flag) +#define ZXDH_MSG_HEADER_SIZE 4 +#define ZXDH_MSG_REPLY_BODY_MAX_LEN \ + (ZXDH_MSG_PAYLOAD_MAX_LEN - sizeof(struct zxdh_msg_reply_head)) + +#define ZXDH_MSG_HEAD_LEN 8 +#define ZXDH_MSG_REQ_BODY_MAX_LEN \ + (ZXDH_MSG_PAYLOAD_MAX_LEN - ZXDH_MSG_HEAD_LEN) + enum ZXDH_DRIVER_TYPE { ZXDH_MSG_CHAN_END_MPF = 0, ZXDH_MSG_CHAN_END_PF, @@ -151,6 +164,13 @@ enum pciebar_layout_type { ZXDH_URI_MAX, }; +enum zxdh_msg_type { + ZXDH_NULL = 0, + ZXDH_VF_PORT_INIT = 1, + + ZXDH_MSG_TYPE_END, +}; + struct zxdh_msix_para { uint16_t pcie_id; uint16_t vector_risc; @@ -240,6 +260,54 @@ struct zxdh_offset_get_msg { uint16_t type; }; +struct zxdh_msg_reply_head { + uint8_t flag; + uint16_t reps_len; + uint8_t resvd; +} __rte_packed; + +enum zxdh_reps_flag { + ZXDH_REPS_FAIL, + ZXDH_REPS_SUCC = 0xaa, +} __rte_packed; + +struct zxdh_msg_reply_body { + enum zxdh_reps_flag flag; + union { + uint8_t reply_data[ZXDH_MSG_REPLY_BODY_MAX_LEN - sizeof(enum zxdh_reps_flag)]; + } __rte_packed; +} __rte_packed; + +struct zxdh_msg_reply_info { + struct zxdh_msg_reply_head reply_head; + struct zxdh_msg_reply_body reply_body; +} __rte_packed; + +struct
zxdh_vf_init_msg { + uint8_t link_up; + uint8_t rsv; + uint16_t base_qid; + uint8_t rss_enable; +} __rte_packed; + +struct zxdh_msg_head { + enum zxdh_msg_type msg_type; + uint16_t vport; + uint16_t vf_id; + uint16_t pcieid; +} __rte_packed; + +struct zxdh_msg_info { + union { + uint8_t head_len[ZXDH_MSG_HEAD_LEN]; + struct zxdh_msg_head msg_head; + }; + union { + uint8_t datainfo[ZXDH_MSG_REQ_BODY_MAX_LEN]; + struct zxdh_vf_init_msg vf_init_msg; + } __rte_packed data; +} __rte_packed; + typedef int (*zxdh_bar_chan_msg_recv_callback)(void *pay_load, uint16_t len, void *reps_buffer, uint16_t *reps_len, void *dev); @@ -253,5 +321,9 @@ int zxdh_bar_chan_sync_msg_send(struct zxdh_pci_bar_msg *in, struct zxdh_msg_recviver_mem *result); int zxdh_bar_irq_recv(uint8_t src, uint8_t dst, uint64_t virt_addr, void *dev); +void zxdh_msg_head_build(struct zxdh_hw *hw, enum zxdh_msg_type type, + struct zxdh_msg_info *msg_info); +int zxdh_vf_send_msg_to_pf(struct rte_eth_dev *dev, void *msg_req, + uint16_t msg_req_len, void *reply, uint16_t reply_len); #endif /* ZXDH_MSG_H */ diff --git a/drivers/net/zxdh/zxdh_np.c b/drivers/net/zxdh/zxdh_np.c index 28728b0c68..db536d96e3 100644 --- a/drivers/net/zxdh/zxdh_np.c +++ b/drivers/net/zxdh/zxdh_np.c @@ -9,6 +9,7 @@ #include <rte_log.h> #include <rte_debug.h> #include <rte_malloc.h> +#include <rte_memcpy.h> #include "zxdh_np.h" #include "zxdh_logs.h" @@ -16,11 +17,14 @@ static uint64_t g_np_bar_offset; static ZXDH_DEV_MGR_T g_dev_mgr; static ZXDH_SDT_MGR_T g_sdt_mgr; +static uint32_t g_dpp_dtb_int_enable; +static uint32_t g_table_type[ZXDH_DEV_CHANNEL_MAX][ZXDH_DEV_SDT_ID_MAX]; ZXDH_PPU_CLS_BITMAP_T g_ppu_cls_bit_map[ZXDH_DEV_CHANNEL_MAX]; ZXDH_DTB_MGR_T *p_dpp_dtb_mgr[ZXDH_DEV_CHANNEL_MAX]; ZXDH_RISCV_DTB_MGR *p_riscv_dtb_queue_mgr[ZXDH_DEV_CHANNEL_MAX]; ZXDH_TLB_MGR_T *g_p_dpp_tlb_mgr[ZXDH_DEV_CHANNEL_MAX]; ZXDH_REG_T g_dpp_reg_info[4]; +ZXDH_DTB_TABLE_T g_dpp_dtb_table_info[4]; #define ZXDH_SDT_MGR_PTR_GET() (&g_sdt_mgr) #define ZXDH_SDT_SOFT_TBL_GET(id) (g_sdt_mgr.sdt_tbl_array[id]) @@ -76,6 +80,92 @@ do {\ } \ } while (0) +#define ZXDH_COMM_CHECK_POINT(point)\ +do {\ + if ((point) == NULL) {\ + PMD_DRV_LOG(ERR, "ZXIC %s:%d[Error:POINT NULL] ! 
FUNCTION : %s!",\ + __FILE__, __LINE__, __func__);\ + RTE_ASSERT(0);\ + } \ +} while (0) + + +#define ZXDH_COMM_CHECK_POINT_MEMORY_FREE(point, ptr)\ +do {\ + if ((point) == NULL) {\ + PMD_DRV_LOG(ERR, "ZXIC %s:%d[Error:POINT NULL] !"\ + "FUNCTION : %s!", __FILE__, __LINE__, __func__);\ + rte_free(ptr);\ + RTE_ASSERT(0);\ + } \ +} while (0) + +#define ZXDH_COMM_CHECK_RC_MEMORY_FREE_NO_ASSERT(rc, becall, ptr)\ +do {\ + if ((rc) != 0) {\ + PMD_DRV_LOG(ERR, "ZXICP %s:%d, %s Call"\ + " %s Fail!", __FILE__, __LINE__, __func__, becall);\ + rte_free(ptr);\ + } \ +} while (0) + +#define ZXDH_COMM_CONVERT16(w_data) \ + (((w_data) & 0xff) << 8) + +#define ZXDH_DTB_TAB_UP_VIR_ADDR_GET(DEV_ID, QUEUE_ID, INDEX) \ + ((INDEX) * p_dpp_dtb_mgr[(DEV_ID)]->queue_info[(QUEUE_ID)].tab_up.item_size) + +#define ZXDH_DTB_TAB_DOWN_VIR_ADDR_GET(DEV_ID, QUEUE_ID, INDEX) \ + ((INDEX) * p_dpp_dtb_mgr[(DEV_ID)]->queue_info[(QUEUE_ID)].tab_down.item_size) + +#define ZXDH_DTB_TAB_DOWN_WR_INDEX_GET(DEV_ID, QUEUE_ID) \ + (p_dpp_dtb_mgr[(DEV_ID)]->queue_info[(QUEUE_ID)].tab_down.wr_index) + +#define ZXDH_DTB_QUEUE_INIT_FLAG_GET(DEV_ID, QUEUE_ID) \ + (p_dpp_dtb_mgr[(DEV_ID)]->queue_info[(QUEUE_ID)].init_flag) + +static uint32_t +zxdh_np_comm_is_big_endian(void) +{ + ZXDH_ENDIAN_U c_data; + + c_data.a = 1; + + if (c_data.b == 1) + return 0; + else + return 1; +} + +static void +zxdh_np_comm_swap(uint8_t *p_uc_data, uint32_t dw_byte_len) +{ + uint16_t *p_w_tmp = NULL; + uint32_t *p_dw_tmp = NULL; + uint32_t dw_byte_num; + uint8_t uc_byte_mode; + uint32_t uc_is_big_flag; + uint32_t i; + + p_dw_tmp = (uint32_t *)(p_uc_data); + uc_is_big_flag = zxdh_np_comm_is_big_endian(); + if (uc_is_big_flag) + return; + + dw_byte_num = dw_byte_len >> 2; + uc_byte_mode = dw_byte_len % 4 & 0xff; + + for (i = 0; i < dw_byte_num; i++) { + (*p_dw_tmp) = ZXDH_COMM_CONVERT16(*p_dw_tmp); + p_dw_tmp++; + } + + if (uc_byte_mode > 1) { + p_w_tmp = (uint16_t *)(p_dw_tmp); + (*p_w_tmp) = ZXDH_COMM_CONVERT16(*p_w_tmp); + } +} + static uint32_t zxdh_np_dev_init(void) { @@ -503,7 +593,7 @@ zxdh_np_dtb_queue_vm_info_get(uint32_t dev_id, p_vm_info->func_num = vm_info.cfg_func_num; p_vm_info->vfunc_active = vm_info.cfg_vfunc_active; - return 0; + return rc; } static uint32_t @@ -808,3 +898,559 @@ zxdh_np_online_uninit(uint32_t dev_id, return 0; } + +static uint32_t +zxdh_np_sdt_tbl_type_get(uint32_t dev_id, uint32_t sdt_no) +{ + return g_table_type[dev_id][sdt_no]; +} + + +static ZXDH_DTB_TABLE_T * +zxdh_np_table_info_get(uint32_t table_type) +{ + return &g_dpp_dtb_table_info[table_type]; +} + +static uint32_t +zxdh_np_dtb_write_table_cmd(uint32_t dev_id, + ZXDH_DTB_TABLE_INFO_E table_type, + void *p_cmd_data, + void *p_cmd_buff) +{ + uint32_t field_cnt; + ZXDH_DTB_TABLE_T *p_table_info = NULL; + ZXDH_DTB_FIELD_T *p_field_info = NULL; + uint32_t temp_data; + uint32_t rc; + + ZXDH_COMM_CHECK_POINT(p_cmd_data); + ZXDH_COMM_CHECK_POINT(p_cmd_buff); + p_table_info = zxdh_np_table_info_get(table_type); + p_field_info = p_table_info->p_fields; + ZXDH_COMM_CHECK_DEV_POINT(dev_id, p_table_info); + + for (field_cnt = 0; field_cnt < p_table_info->field_num; field_cnt++) { + temp_data = *((uint32_t *)p_cmd_data + field_cnt) & ZXDH_COMM_GET_BIT_MASK(uint32_t, + p_field_info[field_cnt].len); + + rc = zxdh_np_comm_write_bits_ex((uint8_t *)p_cmd_buff, + ZXDH_DTB_TABLE_CMD_SIZE_BIT, + temp_data, + p_field_info[field_cnt].lsb_pos, + p_field_info[field_cnt].len); + + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxic_comm_write_bits"); + } + + return rc; +} + +static uint32_t 
+zxdh_np_dtb_smmu0_write_entry_data(uint32_t dev_id, + uint32_t mode, + uint32_t addr, + uint32_t *p_data, + ZXDH_DTB_ENTRY_T *p_entry) +{ + ZXDH_DTB_ERAM_TABLE_FORM_T dtb_eram_form_info = {0}; + uint32_t rc = 0; + + dtb_eram_form_info.valid = ZXDH_DTB_TABLE_VALID; + dtb_eram_form_info.type_mode = ZXDH_DTB_TABLE_MODE_ERAM; + dtb_eram_form_info.data_mode = mode; + dtb_eram_form_info.cpu_wr = 1; + dtb_eram_form_info.addr = addr; + dtb_eram_form_info.cpu_rd = 0; + dtb_eram_form_info.cpu_rd_mode = 0; + + if (ZXDH_ERAM128_OPR_128b == mode) { + p_entry->data_in_cmd_flag = 0; + p_entry->data_size = 128 / 8; + + rc = zxdh_np_dtb_write_table_cmd(dev_id, ZXDH_DTB_TABLE_ERAM_128, + &dtb_eram_form_info, p_entry->cmd); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_dtb_write_table_cmd"); + + memcpy(p_entry->data, p_data, 128 / 8); + } else if (ZXDH_ERAM128_OPR_64b == mode) { + p_entry->data_in_cmd_flag = 1; + p_entry->data_size = 64 / 8; + dtb_eram_form_info.data_l = *(p_data + 1); + dtb_eram_form_info.data_h = *(p_data); + + rc = zxdh_np_dtb_write_table_cmd(dev_id, ZXDH_DTB_TABLE_ERAM_64, + &dtb_eram_form_info, p_entry->cmd); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_dtb_write_table_cmd"); + + } else if (ZXDH_ERAM128_OPR_1b == mode) { + p_entry->data_in_cmd_flag = 1; + p_entry->data_size = 1; + dtb_eram_form_info.data_h = *(p_data); + + rc = zxdh_np_dtb_write_table_cmd(dev_id, ZXDH_DTB_TABLE_ERAM_1, + &dtb_eram_form_info, p_entry->cmd); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_dtb_write_table_cmd"); + } + + return rc; +} + +static uint32_t +zxdh_np_dtb_se_smmu0_ind_write(uint32_t dev_id, + uint32_t base_addr, + uint32_t index, + uint32_t wrt_mode, + uint32_t *p_data, + ZXDH_DTB_ENTRY_T *p_entry) +{ + uint32_t temp_idx; + uint32_t dtb_ind_addr; + uint32_t rc; + + switch (wrt_mode) { + case ZXDH_ERAM128_OPR_128b: + { + if ((0xFFFFFFFF - (base_addr)) < (index)) { + PMD_DRV_LOG(ERR, "ICM %s:%d[Error:VALUE[val0=0x%x]" + "INVALID] [val1=0x%x] ! 
FUNCTION :%s !", __FILE__, __LINE__, + base_addr, index, __func__); + + return ZXDH_PAR_CHK_INVALID_INDEX; + } + if (base_addr + index > ZXDH_SE_SMMU0_ERAM_ADDR_NUM_TOTAL - 1) { + PMD_DRV_LOG(ERR, "dpp_se_smmu0_ind_write : index out of range !"); + return 1; + } + temp_idx = index << 7; + break; + } + + case ZXDH_ERAM128_OPR_64b: + { + if ((base_addr + (index >> 1)) > ZXDH_SE_SMMU0_ERAM_ADDR_NUM_TOTAL - 1) { + PMD_DRV_LOG(ERR, "dpp_se_smmu0_ind_write : index out of range !"); + return 1; + } + temp_idx = index << 6; + break; + } + + case ZXDH_ERAM128_OPR_1b: + { + if ((base_addr + (index >> 7)) > ZXDH_SE_SMMU0_ERAM_ADDR_NUM_TOTAL - 1) { + PMD_DRV_LOG(ERR, "dpp_se_smmu0_ind_write : index out of range !"); + return 1; + } + + temp_idx = index; + } + } + + dtb_ind_addr = ((base_addr << 7) & ZXDH_ERAM128_BADDR_MASK) + temp_idx; + + PMD_DRV_LOG(INFO, " dtb eram item 1bit addr: 0x%x", dtb_ind_addr); + + rc = zxdh_np_dtb_smmu0_write_entry_data(dev_id, + wrt_mode, + dtb_ind_addr, + p_data, + p_entry); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_dtb_smmu0_write_entry_data"); + + return rc; +} + +static uint32_t +zxdh_np_eram_dtb_len_get(uint32_t mode) +{ + uint32_t dtb_len = 0; + + switch (mode) { + case ZXDH_ERAM128_OPR_128b: + { + dtb_len += 2; + break; + } + case ZXDH_ERAM128_OPR_64b: + case ZXDH_ERAM128_OPR_1b: + { + dtb_len += 1; + break; + } + default: + break; + } + + return dtb_len; +} + +static uint32_t +zxdh_np_dtb_eram_one_entry(uint32_t dev_id, + uint32_t sdt_no, + uint32_t del_en, + void *pdata, + uint32_t *p_dtb_len, + ZXDH_DTB_ENTRY_T *p_dtb_one_entry) +{ + uint32_t buff[ZXDH_SMMU0_READ_REG_MAX_NUM] = {0}; + ZXDH_SDTTBL_ERAM_T sdt_eram = {0}; + ZXDH_DTB_ERAM_ENTRY_INFO_T *peramdata = NULL; + uint32_t base_addr; + uint32_t index; + uint32_t opr_mode; + uint32_t rc; + + ZXDH_COMM_CHECK_POINT(pdata); + ZXDH_COMM_CHECK_POINT(p_dtb_one_entry); + ZXDH_COMM_CHECK_POINT(p_dtb_len); + + peramdata = (ZXDH_DTB_ERAM_ENTRY_INFO_T *)pdata; + index = peramdata->index; + base_addr = sdt_eram.eram_base_addr; + opr_mode = sdt_eram.eram_mode; + + switch (opr_mode) { + case ZXDH_ERAM128_TBL_128b: + { + opr_mode = ZXDH_ERAM128_OPR_128b; + break; + } + case ZXDH_ERAM128_TBL_64b: + { + opr_mode = ZXDH_ERAM128_OPR_64b; + break; + } + + case ZXDH_ERAM128_TBL_1b: + { + opr_mode = ZXDH_ERAM128_OPR_1b; + break; + } + } + + if (del_en) { + memset((uint8_t *)buff, 0, sizeof(buff)); + rc = zxdh_np_dtb_se_smmu0_ind_write(dev_id, + base_addr, + index, + opr_mode, + buff, + p_dtb_one_entry); + ZXDH_COMM_CHECK_DEV_RC(sdt_no, rc, "zxdh_dtb_se_smmu0_ind_write"); + } else { + rc = zxdh_np_dtb_se_smmu0_ind_write(dev_id, + base_addr, + index, + opr_mode, + peramdata->p_data, + p_dtb_one_entry); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rc, "zxdh_dtb_se_smmu0_ind_write"); + } + *p_dtb_len = zxdh_np_eram_dtb_len_get(opr_mode); + + return rc; +} + +static uint32_t +zxdh_np_dtb_data_write(uint8_t *p_data_buff, + uint32_t addr_offset, + ZXDH_DTB_ENTRY_T *entry) +{ + ZXDH_COMM_CHECK_POINT(p_data_buff); + ZXDH_COMM_CHECK_POINT(entry); + + uint8_t *p_cmd = p_data_buff + addr_offset; + uint32_t cmd_size = ZXDH_DTB_TABLE_CMD_SIZE_BIT / 8; + + uint8_t *p_data = p_cmd + cmd_size; + uint32_t data_size = entry->data_size; + + uint8_t *cmd = (uint8_t *)entry->cmd; + uint8_t *data = (uint8_t *)entry->data; + + rte_memcpy(p_cmd, cmd, cmd_size); + + if (!entry->data_in_cmd_flag) { + zxdh_np_comm_swap(data, data_size); + rte_memcpy(p_data, data, data_size); + } + + return 0; +} + +static uint32_t +zxdh_np_dtb_queue_enable_get(uint32_t dev_id, + uint32_t 
queue_id, + uint32_t *enable) +{ + uint32_t rc = 0; + ZXDH_DTB_QUEUE_VM_INFO_T vm_info = {0}; + + rc = zxdh_np_dtb_queue_vm_info_get(dev_id, queue_id, &vm_info); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_dtb_queue_vm_info_get"); + + *enable = vm_info.queue_en; + return rc; +} + +static uint32_t +zxdh_np_dtb_item_buff_wr(uint32_t dev_id, + uint32_t queue_id, + uint32_t dir_flag, + uint32_t index, + uint32_t pos, + uint32_t len, + uint32_t *p_data) +{ + uint64_t addr; + + if (dir_flag == 1) + addr = ZXDH_DTB_TAB_UP_VIR_ADDR_GET(dev_id, queue_id, index) + + ZXDH_DTB_ITEM_ACK_SIZE + pos * 4; + else + addr = ZXDH_DTB_TAB_DOWN_VIR_ADDR_GET(dev_id, queue_id, index) + + ZXDH_DTB_ITEM_ACK_SIZE + pos * 4; + + memcpy((uint8_t *)(addr), p_data, len * 4); + + return 0; +} + +static uint32_t +zxdh_np_dtb_item_ack_rd(uint32_t dev_id, + uint32_t queue_id, + uint32_t dir_flag, + uint32_t index, + uint32_t pos, + uint32_t *p_data) +{ + uint64_t addr; + uint32_t val; + + if (dir_flag == 1) + addr = ZXDH_DTB_TAB_UP_VIR_ADDR_GET(dev_id, queue_id, index) + pos * 4; + else + addr = ZXDH_DTB_TAB_DOWN_VIR_ADDR_GET(dev_id, queue_id, index) + pos * 4; + + val = *((volatile uint32_t *)(addr)); + + *p_data = val; + + return 0; +} + +static uint32_t +zxdh_np_dtb_item_ack_wr(uint32_t dev_id, + uint32_t queue_id, + uint32_t dir_flag, + uint32_t index, + uint32_t pos, + uint32_t data) +{ + uint64_t addr; + + if (dir_flag == 1) + addr = ZXDH_DTB_TAB_UP_VIR_ADDR_GET(dev_id, queue_id, index) + pos * 4; + else + addr = ZXDH_DTB_TAB_DOWN_VIR_ADDR_GET(dev_id, queue_id, index) + pos * 4; + + *((volatile uint32_t *)(addr)) = data; + + return 0; +} + +static uint32_t +zxdh_np_dtb_queue_item_info_set(uint32_t dev_id, + uint32_t queue_id, + ZXDH_DTB_QUEUE_ITEM_INFO_T *p_item_info) +{ + ZXDH_DTB_QUEUE_LEN_T dtb_len = {0}; + uint32_t rc; + + dtb_len.cfg_dtb_cmd_type = p_item_info->cmd_type; + dtb_len.cfg_dtb_cmd_int_en = p_item_info->int_en; + dtb_len.cfg_queue_dtb_len = p_item_info->data_len; + + rc = zxdh_np_reg_write(dev_id, ZXDH_DTB_CFG_QUEUE_DTB_LEN, + 0, queue_id, (void *)&dtb_len); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rc, "dpp_reg_write"); + return rc; +} + +static uint32_t +zxdh_np_dtb_tab_down_info_set(uint32_t dev_id, + uint32_t queue_id, + uint32_t int_flag, + uint32_t data_len, + uint32_t *p_data, + uint32_t *p_item_index) +{ + ZXDH_DTB_QUEUE_ITEM_INFO_T item_info = {0}; + uint32_t unused_item_num = 0; + uint32_t queue_en = 0; + uint32_t ack_vale = 0; + uint64_t phy_addr; + uint32_t item_index; + uint32_t i; + uint32_t rc; + + if (ZXDH_DTB_QUEUE_INIT_FLAG_GET(dev_id, queue_id) == 0) { + PMD_DRV_LOG(ERR, "dtb queue %d is not init.", queue_id); + return ZXDH_RC_DTB_QUEUE_IS_NOT_INIT; + } + + if (data_len % 4 != 0) + return ZXDH_RC_DTB_PARA_INVALID; + + rc = zxdh_np_dtb_queue_enable_get(dev_id, queue_id, &queue_en); + if (!queue_en) { + PMD_DRV_LOG(ERR, "the queue %d is not enable!,rc=%d", queue_id, rc); + return ZXDH_RC_DTB_QUEUE_NOT_ENABLE; + } + + rc = zxdh_np_dtb_queue_unused_item_num_get(dev_id, queue_id, &unused_item_num); + if (unused_item_num == 0) + return ZXDH_RC_DTB_QUEUE_ITEM_HW_EMPTY; + + for (i = 0; i < ZXDH_DTB_QUEUE_ITEM_NUM_MAX; i++) { + item_index = ZXDH_DTB_TAB_DOWN_WR_INDEX_GET(dev_id, queue_id) % + ZXDH_DTB_QUEUE_ITEM_NUM_MAX; + + rc = zxdh_np_dtb_item_ack_rd(dev_id, queue_id, 0, + item_index, 0, &ack_vale); + + ZXDH_DTB_TAB_DOWN_WR_INDEX_GET(dev_id, queue_id)++; + + if ((ack_vale >> 8) == ZXDH_DTB_TAB_ACK_UNUSED_MASK) + break; + } + + if (i == ZXDH_DTB_QUEUE_ITEM_NUM_MAX) + return 
ZXDH_RC_DTB_QUEUE_ITEM_SW_EMPTY; + + rc = zxdh_np_dtb_item_buff_wr(dev_id, queue_id, 0, + item_index, 0, data_len, p_data); + + rc = zxdh_np_dtb_item_ack_wr(dev_id, queue_id, 0, + item_index, 0, ZXDH_DTB_TAB_ACK_IS_USING_MASK); + + item_info.cmd_vld = 1; + item_info.cmd_type = 0; + item_info.int_en = int_flag; + item_info.data_len = data_len / 4; + phy_addr = p_dpp_dtb_mgr[dev_id]->queue_info[queue_id].tab_down.start_phy_addr + + item_index * p_dpp_dtb_mgr[dev_id]->queue_info[queue_id].tab_down.item_size; + item_info.data_hddr = ((phy_addr >> 4) >> 32) & 0xffffffff; + item_info.data_laddr = (phy_addr >> 4) & 0xffffffff; + + rc = zxdh_np_dtb_queue_item_info_set(dev_id, queue_id, &item_info); + *p_item_index = item_index; + + return rc; +} + +static uint32_t +zxdh_np_dtb_write_down_table_data(uint32_t dev_id, + uint32_t queue_id, + uint32_t down_table_len, + uint8_t *p_down_table_buff, + uint32_t *p_element_id) +{ + uint32_t rc = 0; + uint32_t dtb_interrupt_status = 0; + + dtb_interrupt_status = g_dpp_dtb_int_enable; + + rc = zxdh_np_dtb_tab_down_info_set(dev_id, + queue_id, + dtb_interrupt_status, + down_table_len / 4, + (uint32_t *)p_down_table_buff, + p_element_id); + return rc; +} + +int +zxdh_np_dtb_table_entry_write(uint32_t dev_id, + uint32_t queue_id, + uint32_t entrynum, + ZXDH_DTB_USER_ENTRY_T *down_entries) +{ + ZXDH_DTB_USER_ENTRY_T *pentry = NULL; + ZXDH_DTB_ENTRY_T dtb_one_entry = {0}; + uint8_t entry_cmd[ZXDH_DTB_TABLE_CMD_SIZE_BIT] = {0}; + uint8_t entry_data[ZXDH_ETCAM_WIDTH_MAX] = {0}; + uint8_t *p_data_buff = NULL; + uint8_t *p_data_buff_ex = NULL; + uint32_t element_id = 0xff; + uint32_t one_dtb_len = 0; + uint32_t dtb_len = 0; + uint32_t entry_index; + uint32_t sdt_no; + uint32_t tbl_type; + uint32_t addr_offset; + uint32_t max_size; + uint32_t rc; + + p_data_buff = rte_zmalloc(NULL, ZXDH_DTB_TABLE_DATA_BUFF_SIZE, 0); + ZXDH_COMM_CHECK_POINT(p_data_buff); + + p_data_buff_ex = rte_zmalloc(NULL, ZXDH_DTB_TABLE_DATA_BUFF_SIZE, 0); + ZXDH_COMM_CHECK_POINT_MEMORY_FREE(p_data_buff_ex, p_data_buff); + + dtb_one_entry.cmd = entry_cmd; + dtb_one_entry.data = entry_data; + + max_size = (ZXDH_DTB_TABLE_DATA_BUFF_SIZE / 16) - 1; + + for (entry_index = 0; entry_index < entrynum; entry_index++) { + pentry = down_entries + entry_index; + sdt_no = pentry->sdt_no; + tbl_type = zxdh_np_sdt_tbl_type_get(dev_id, sdt_no); + switch (tbl_type) { + case ZXDH_SDT_TBLT_ERAM: + { + rc = zxdh_np_dtb_eram_one_entry(dev_id, sdt_no, ZXDH_DTB_ITEM_ADD_OR_UPDATE, + pentry->p_entry_data, &one_dtb_len, &dtb_one_entry); + break; + } + default: + { + PMD_DRV_LOG(ERR, "SDT table_type[ %d ] is invalid!", tbl_type); + rte_free(p_data_buff); + rte_free(p_data_buff_ex); + return 1; + } + } + + addr_offset = dtb_len * ZXDH_DTB_LEN_POS_SETP; + dtb_len += one_dtb_len; + if (dtb_len > max_size) { + rte_free(p_data_buff); + rte_free(p_data_buff_ex); + PMD_DRV_LOG(ERR, " %s error dtb_len>%u!", __func__, + max_size); + return ZXDH_RC_DTB_DOWN_LEN_INVALID; + } + rc = zxdh_np_dtb_data_write(p_data_buff, addr_offset, &dtb_one_entry); + memset(entry_cmd, 0x0, sizeof(entry_cmd)); + memset(entry_data, 0x0, sizeof(entry_data)); + } + + if (dtb_len == 0) { + rte_free(p_data_buff); + rte_free(p_data_buff_ex); + return ZXDH_RC_DTB_DOWN_LEN_INVALID; + } + + rc = zxdh_np_dtb_write_down_table_data(dev_id, + queue_id, + dtb_len * 16, + p_data_buff, + &element_id); + rte_free(p_data_buff); + rte_free(p_data_buff_ex); + + return rc; +} diff --git a/drivers/net/zxdh/zxdh_np.h b/drivers/net/zxdh/zxdh_np.h index dc0e867827..40961c02a2 
100644 --- a/drivers/net/zxdh/zxdh_np.h +++ b/drivers/net/zxdh/zxdh_np.h @@ -7,6 +7,8 @@ #include <stdint.h> +#define ZXDH_DISABLE (0) +#define ZXDH_ENABLE (1) #define ZXDH_PORT_NAME_MAX (32) #define ZXDH_DEV_CHANNEL_MAX (2) #define ZXDH_DEV_SDT_ID_MAX (256U) @@ -52,6 +54,94 @@ #define ZXDH_ACL_TBL_ID_NUM (8U) #define ZXDH_ACL_BLOCK_NUM (8U) +#define ZXDH_SMMU0_READ_REG_MAX_NUM (4) + +#define ZXDH_DTB_ITEM_ACK_SIZE (16) +#define ZXDH_DTB_ITEM_BUFF_SIZE (16 * 1024) +#define ZXDH_DTB_ITEM_SIZE (16 + 16 * 1024) +#define ZXDH_DTB_TAB_UP_SIZE ((16 + 16 * 1024) * 32) +#define ZXDH_DTB_TAB_DOWN_SIZE ((16 + 16 * 1024) * 32) + +#define ZXDH_DTB_TAB_UP_ACK_VLD_MASK (0x555555) +#define ZXDH_DTB_TAB_DOWN_ACK_VLD_MASK (0x5a5a5a) +#define ZXDH_DTB_TAB_ACK_IS_USING_MASK (0x11111100) +#define ZXDH_DTB_TAB_ACK_UNUSED_MASK (0x0) +#define ZXDH_DTB_TAB_ACK_SUCCESS_MASK (0xff) +#define ZXDH_DTB_TAB_ACK_FAILED_MASK (0x1) +#define ZXDH_DTB_TAB_ACK_CHECK_VALUE (0x12345678) + +#define ZXDH_DTB_TAB_ACK_VLD_SHIFT (104) +#define ZXDH_DTB_TAB_ACK_STATUS_SHIFT (96) +#define ZXDH_DTB_LEN_POS_SETP (16) +#define ZXDH_DTB_ITEM_ADD_OR_UPDATE (0) +#define ZXDH_DTB_ITEM_DELETE (1) + +#define ZXDH_ETCAM_LEN_SIZE (6) +#define ZXDH_ETCAM_BLOCK_NUM (8) +#define ZXDH_ETCAM_TBLID_NUM (8) +#define ZXDH_ETCAM_RAM_NUM (8) +#define ZXDH_ETCAM_RAM_WIDTH (80U) +#define ZXDH_ETCAM_WR_MASK_MAX (((uint32_t)1 << ZXDH_ETCAM_RAM_NUM) - 1) +#define ZXDH_ETCAM_WIDTH_MIN (ZXDH_ETCAM_RAM_WIDTH) +#define ZXDH_ETCAM_WIDTH_MAX (ZXDH_ETCAM_RAM_NUM * ZXDH_ETCAM_RAM_WIDTH) + +#define ZXDH_DTB_TABLE_DATA_BUFF_SIZE (16384) +#define ZXDH_DTB_TABLE_CMD_SIZE_BIT (128) + +#define ZXDH_SE_SMMU0_ERAM_BLOCK_NUM (32) +#define ZXDH_SE_SMMU0_ERAM_ADDR_NUM_PER_BLOCK (0x4000) +#define ZXDH_SE_SMMU0_ERAM_ADDR_NUM_TOTAL \ + (ZXDH_SE_SMMU0_ERAM_BLOCK_NUM * ZXDH_SE_SMMU0_ERAM_ADDR_NUM_PER_BLOCK) + +/**errco code */ +#define ZXDH_RC_BASE (0x1000U) +#define ZXDH_PARAMETER_CHK_BASE (ZXDH_RC_BASE | 0x200) +#define ZXDH_PAR_CHK_POINT_NULL (ZXDH_PARAMETER_CHK_BASE | 0x001) +#define ZXDH_PAR_CHK_ARGIN_ZERO (ZXDH_PARAMETER_CHK_BASE | 0x002) +#define ZXDH_PAR_CHK_ARGIN_OVERFLOW (ZXDH_PARAMETER_CHK_BASE | 0x003) +#define ZXDH_PAR_CHK_ARGIN_ERROR (ZXDH_PARAMETER_CHK_BASE | 0x004) +#define ZXDH_PAR_CHK_INVALID_INDEX (ZXDH_PARAMETER_CHK_BASE | 0x005) +#define ZXDH_PAR_CHK_INVALID_RANGE (ZXDH_PARAMETER_CHK_BASE | 0x006) +#define ZXDH_PAR_CHK_INVALID_DEV_ID (ZXDH_PARAMETER_CHK_BASE | 0x007) +#define ZXDH_PAR_CHK_INVALID_PARA (ZXDH_PARAMETER_CHK_BASE | 0x008) + +#define ZXDH_ERAM128_BADDR_MASK (0x3FFFF80) + +#define ZXDH_DTB_TABLE_MODE_ERAM (0) +#define ZXDH_DTB_TABLE_MODE_DDR (1) +#define ZXDH_DTB_TABLE_MODE_ZCAM (2) +#define ZXDH_DTB_TABLE_MODE_ETCAM (3) +#define ZXDH_DTB_TABLE_MODE_MC_HASH (4) +#define ZXDH_DTB_TABLE_VALID (1) + +/* DTB module error code */ +#define ZXDH_RC_DTB_BASE (0xd00) +#define ZXDH_RC_DTB_MGR_EXIST (ZXDH_RC_DTB_BASE | 0x0) +#define ZXDH_RC_DTB_MGR_NOT_EXIST (ZXDH_RC_DTB_BASE | 0x1) +#define ZXDH_RC_DTB_QUEUE_RES_EMPTY (ZXDH_RC_DTB_BASE | 0x2) +#define ZXDH_RC_DTB_QUEUE_BUFF_SIZE_ERR (ZXDH_RC_DTB_BASE | 0x3) +#define ZXDH_RC_DTB_QUEUE_ITEM_HW_EMPTY (ZXDH_RC_DTB_BASE | 0x4) +#define ZXDH_RC_DTB_QUEUE_ITEM_SW_EMPTY (ZXDH_RC_DTB_BASE | 0x5) +#define ZXDH_RC_DTB_TAB_UP_BUFF_EMPTY (ZXDH_RC_DTB_BASE | 0x6) +#define ZXDH_RC_DTB_TAB_DOWN_BUFF_EMPTY (ZXDH_RC_DTB_BASE | 0x7) +#define ZXDH_RC_DTB_TAB_UP_TRANS_ERR (ZXDH_RC_DTB_BASE | 0x8) +#define ZXDH_RC_DTB_TAB_DOWN_TRANS_ERR (ZXDH_RC_DTB_BASE | 0x9) +#define ZXDH_RC_DTB_QUEUE_IS_WORKING (ZXDH_RC_DTB_BASE | 0xa) +#define 
ZXDH_RC_DTB_QUEUE_IS_NOT_INIT (ZXDH_RC_DTB_BASE | 0xb) +#define ZXDH_RC_DTB_MEMORY_ALLOC_ERR (ZXDH_RC_DTB_BASE | 0xc) +#define ZXDH_RC_DTB_PARA_INVALID (ZXDH_RC_DTB_BASE | 0xd) +#define ZXDH_RC_DMA_RANGE_INVALID (ZXDH_RC_DTB_BASE | 0xe) +#define ZXDH_RC_DMA_RCV_DATA_EMPTY (ZXDH_RC_DTB_BASE | 0xf) +#define ZXDH_RC_DTB_LPM_INSERT_FAIL (ZXDH_RC_DTB_BASE | 0x10) +#define ZXDH_RC_DTB_LPM_DELETE_FAIL (ZXDH_RC_DTB_BASE | 0x11) +#define ZXDH_RC_DTB_DOWN_LEN_INVALID (ZXDH_RC_DTB_BASE | 0x12) +#define ZXDH_RC_DTB_DOWN_HASH_CONFLICT (ZXDH_RC_DTB_BASE | 0x13) +#define ZXDH_RC_DTB_QUEUE_NOT_ALLOC (ZXDH_RC_DTB_BASE | 0x14) +#define ZXDH_RC_DTB_QUEUE_NAME_ERROR (ZXDH_RC_DTB_BASE | 0x15) +#define ZXDH_RC_DTB_DUMP_SIZE_SMALL (ZXDH_RC_DTB_BASE | 0x16) +#define ZXDH_RC_DTB_SEARCH_VPORT_QUEUE_ZERO (ZXDH_RC_DTB_BASE | 0x17) +#define ZXDH_RC_DTB_QUEUE_NOT_ENABLE (ZXDH_RC_DTB_BASE | 0x18) + typedef enum zxdh_module_init_e { ZXDH_MODULE_INIT_NPPU = 0, ZXDH_MODULE_INIT_PPU, @@ -299,7 +389,127 @@ typedef struct zxdh_tlb_mgr_t { uint32_t pa_width; } ZXDH_TLB_MGR_T; +typedef enum zxdh_eram128_tbl_mode_e { + ZXDH_ERAM128_TBL_1b = 0, + ZXDH_ERAM128_TBL_32b = 1, + ZXDH_ERAM128_TBL_64b = 2, + ZXDH_ERAM128_TBL_128b = 3, + ZXDH_ERAM128_TBL_2b = 4, + ZXDH_ERAM128_TBL_4b = 5, + ZXDH_ERAM128_TBL_8b = 6, + ZXDH_ERAM128_TBL_16b = 7 +} ZXDH_ERAM128_TBL_MODE_E; + +typedef enum zxdh_eram128_opr_mode_e { + ZXDH_ERAM128_OPR_128b = 0, + ZXDH_ERAM128_OPR_64b = 1, + ZXDH_ERAM128_OPR_1b = 2, + ZXDH_ERAM128_OPR_32b = 3 + +} ZXDH_ERAM128_OPR_MODE_E; + +typedef enum zxdh_dtb_table_info_e { + ZXDH_DTB_TABLE_DDR = 0, + ZXDH_DTB_TABLE_ERAM_1 = 1, + ZXDH_DTB_TABLE_ERAM_64 = 2, + ZXDH_DTB_TABLE_ERAM_128 = 3, + ZXDH_DTB_TABLE_ZCAM = 4, + ZXDH_DTB_TABLE_ETCAM = 5, + ZXDH_DTB_TABLE_MC_HASH = 6, + ZXDH_DTB_TABLE_ENUM_MAX +} ZXDH_DTB_TABLE_INFO_E; + +typedef enum zxdh_sdt_table_type_e { + ZXDH_SDT_TBLT_INVALID = 0, + ZXDH_SDT_TBLT_ERAM = 1, + ZXDH_SDT_TBLT_DDR3 = 2, + ZXDH_SDT_TBLT_HASH = 3, + ZXDH_SDT_TBLT_LPM = 4, + ZXDH_SDT_TBLT_ETCAM = 5, + ZXDH_SDT_TBLT_PORTTBL = 6, + ZXDH_SDT_TBLT_MAX = 7, +} ZXDH_SDT_TABLE_TYPE_E; + +typedef struct zxdh_dtb_lpm_entry_t { + uint32_t dtb_len0; + uint8_t *p_data_buff0; + uint32_t dtb_len1; + uint8_t *p_data_buff1; +} ZXDH_DTB_LPM_ENTRY_T; + +typedef struct zxdh_dtb_entry_t { + uint8_t *cmd; + uint8_t *data; + uint32_t data_in_cmd_flag; + uint32_t data_size; +} ZXDH_DTB_ENTRY_T; + +typedef struct zxdh_dtb_eram_table_form_t { + uint32_t valid; + uint32_t type_mode; + uint32_t data_mode; + uint32_t cpu_wr; + uint32_t cpu_rd; + uint32_t cpu_rd_mode; + uint32_t addr; + uint32_t data_h; + uint32_t data_l; +} ZXDH_DTB_ERAM_TABLE_FORM_T; + +typedef struct zxdh_sdt_tbl_eram_t { + uint32_t table_type; + uint32_t eram_mode; + uint32_t eram_base_addr; + uint32_t eram_table_depth; + uint32_t eram_clutch_en; +} ZXDH_SDTTBL_ERAM_T; + +typedef union zxdh_endian_u { + unsigned int a; + unsigned char b; +} ZXDH_ENDIAN_U; + +typedef struct zxdh_dtb_field_t { + const char *p_name; + uint16_t lsb_pos; + uint16_t len; +} ZXDH_DTB_FIELD_T; + +typedef struct zxdh_dtb_table_t { + const char *table_type; + uint32_t table_no; + uint32_t field_num; + ZXDH_DTB_FIELD_T *p_fields; +} ZXDH_DTB_TABLE_T; + +typedef struct zxdh_dtb_queue_item_info_t { + uint32_t cmd_vld; + uint32_t cmd_type; + uint32_t int_en; + uint32_t data_len; + uint32_t data_laddr; + uint32_t data_hddr; +} ZXDH_DTB_QUEUE_ITEM_INFO_T; + +typedef struct zxdh_dtb_queue_len_t { + uint32_t cfg_dtb_cmd_type; + uint32_t cfg_dtb_cmd_int_en; + uint32_t cfg_queue_dtb_len; +} 
ZXDH_DTB_QUEUE_LEN_T; + +typedef struct zxdh_dtb_eram_entry_info_t { + uint32_t index; + uint32_t *p_data; +} ZXDH_DTB_ERAM_ENTRY_INFO_T; + +typedef struct zxdh_dtb_user_entry_t { + uint32_t sdt_no; + void *p_entry_data; +} ZXDH_DTB_USER_ENTRY_T; + int zxdh_np_host_init(uint32_t dev_id, ZXDH_DEV_INIT_CTRL_T *p_dev_init_ctrl); int zxdh_np_online_uninit(uint32_t dev_id, char *port_name, uint32_t queue_id); +int zxdh_np_dtb_table_entry_write(uint32_t dev_id, uint32_t queue_id, + uint32_t entrynum, ZXDH_DTB_USER_ENTRY_T *down_entries); #endif /* ZXDH_NP_H */ diff --git a/drivers/net/zxdh/zxdh_pci.h b/drivers/net/zxdh/zxdh_pci.h index d6487a574f..e3f13cb17d 100644 --- a/drivers/net/zxdh/zxdh_pci.h +++ b/drivers/net/zxdh/zxdh_pci.h @@ -12,6 +12,8 @@ #include "zxdh_ethdev.h" +#define ZXDH_PF_PCIE_ID(pcie_id) (((pcie_id) & 0xff00) | 1 << 11) + enum zxdh_msix_status { ZXDH_MSIX_NONE = 0, ZXDH_MSIX_DISABLED = 1, diff --git a/drivers/net/zxdh/zxdh_tables.c b/drivers/net/zxdh/zxdh_tables.c new file mode 100644 index 0000000000..91376e6ec0 --- /dev/null +++ b/drivers/net/zxdh/zxdh_tables.c @@ -0,0 +1,105 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#include "zxdh_ethdev.h" +#include "zxdh_msg.h" +#include "zxdh_np.h" +#include "zxdh_tables.h" +#include "zxdh_logs.h" + +#define ZXDH_SDT_VPORT_ATT_TABLE 1 +#define ZXDH_SDT_PANEL_ATT_TABLE 2 + +int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr) +{ + int ret = 0; + + ZXDH_DTB_ERAM_ENTRY_INFO_T entry = {vfid, (uint32_t *)port_attr}; + ZXDH_DTB_USER_ENTRY_T user_entry_write = {ZXDH_SDT_VPORT_ATT_TABLE, (void *)&entry}; + + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &user_entry_write); + if (ret != 0) + PMD_DRV_LOG(ERR, "write vport_att failed vfid:%d failed", vfid); + + return ret; +} + +int +zxdh_port_attr_init(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_port_attr_table port_attr = {0}; + struct zxdh_msg_info msg_info = {0}; + int ret; + + if (hw->is_pf) { + port_attr.hit_flag = 1; + port_attr.phy_port = hw->phyport; + port_attr.pf_vfid = zxdh_vport_to_vfid(hw->vport); + port_attr.rss_enable = 0; + if (!hw->is_pf) + port_attr.is_vf = 1; + + port_attr.mtu = dev->data->mtu; + port_attr.mtu_enable = 1; + port_attr.is_up = 0; + if (!port_attr.rss_enable) + port_attr.port_base_qid = 0; + + ret = zxdh_set_port_attr(hw->vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "write port_attr failed"); + ret = -1; + } + } else { + struct zxdh_vf_init_msg *vf_init_msg = &msg_info.data.vf_init_msg; + + zxdh_msg_head_build(hw, ZXDH_VF_PORT_INIT, &msg_info); + msg_info.msg_head.msg_type = ZXDH_VF_PORT_INIT; + vf_init_msg->link_up = 1; + vf_init_msg->base_qid = 0; + vf_init_msg->rss_enable = 0; + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "vf port_init failed"); + ret = -1; + } + } + return ret; +}; + +int zxdh_panel_table_init(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + int ret; + + if (!hw->is_pf) + return 0; + + struct zxdh_panel_table panel; + + memset(&panel, 0, sizeof(panel)); + panel.hit_flag = 1; + panel.pf_vfid = zxdh_vport_to_vfid(hw->vport); + panel.mtu_enable = 1; + panel.mtu = dev->data->mtu; + + ZXDH_DTB_ERAM_ENTRY_INFO_T panel_entry = { + .index = hw->phyport, + .p_data = (uint32_t *)&panel + }; + ZXDH_DTB_USER_ENTRY_T entry = { + .sdt_no = ZXDH_SDT_PANEL_ATT_TABLE, + .p_entry_data = (void *)&panel_entry + }; + ret = 
zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, g_dtb_data.queueid, 1, &entry); + + if (ret) { + PMD_DRV_LOG(ERR, "Insert eram-panel failed, code:%u", ret); + ret = -1; + } + + return ret; +} diff --git a/drivers/net/zxdh/zxdh_tables.h b/drivers/net/zxdh/zxdh_tables.h new file mode 100644 index 0000000000..5d34af2f05 --- /dev/null +++ b/drivers/net/zxdh/zxdh_tables.h @@ -0,0 +1,148 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#ifndef ZXDH_TABLES_H +#define ZXDH_TABLES_H + +#include <stdint.h> + +extern struct zxdh_dtb_shared_data g_dtb_data; + +#define ZXDH_DEVICE_NO 0 + +struct zxdh_port_attr_table { +#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN + uint8_t byte4_rsv1: 1; + uint8_t ingress_meter_enable: 1; + uint8_t egress_meter_enable: 1; + uint8_t byte4_rsv2: 2; + uint8_t fd_enable: 1; + uint8_t vepa_enable: 1; + uint8_t spoof_check_enable: 1; + + uint8_t inline_sec_offload: 1; + uint8_t ovs_enable: 1; + uint8_t lag_enable: 1; + uint8_t is_passthrough: 1; + uint8_t is_vf: 1; + uint8_t virtion_version: 2; + uint8_t virtio_enable: 1; + + uint8_t accelerator_offload_flag: 1; + uint8_t lro_offload: 1; + uint8_t ip_fragment_offload: 1; + uint8_t tcp_udp_checksum_offload: 1; + uint8_t ip_checksum_offload: 1; + uint8_t outer_ip_checksum_offload: 1; + uint8_t is_up: 1; + uint8_t rsv1: 1; + + uint8_t rsv3 : 1; + uint8_t rdma_offload_enable: 1; + uint8_t vlan_filter_enable: 1; + uint8_t vlan_strip_offload: 1; + uint8_t qinq_valn_strip_offload: 1; + uint8_t rss_enable: 1; + uint8_t mtu_enable: 1; + uint8_t hit_flag: 1; + + uint16_t mtu; + + uint16_t port_base_qid : 12; + uint16_t hash_search_index : 3; + uint16_t rsv: 1; + + uint8_t rss_hash_factor; + + uint8_t hash_alg: 4; + uint8_t phy_port: 4; + + uint16_t lag_id : 3; + uint16_t pf_vfid : 11; + uint16_t ingress_tm_enable : 1; + uint16_t egress_tm_enable : 1; + + uint16_t tpid; + + uint16_t vhca : 10; + uint16_t uplink_port : 6; +#else + uint8_t rsv3 : 1; + uint8_t rdma_offload_enable: 1; + uint8_t vlan_filter_enable: 1; + uint8_t vlan_strip_offload: 1; + uint8_t qinq_valn_strip_offload: 1; + uint8_t rss_enable: 1; + uint8_t mtu_enable: 1; + uint8_t hit_flag: 1; + + uint8_t accelerator_offload_flag: 1; + uint8_t lro_offload: 1; + uint8_t ip_fragment_offload: 1; + uint8_t tcp_udp_checksum_offload: 1; + uint8_t ip_checksum_offload: 1; + uint8_t outer_ip_checksum_offload: 1; + uint8_t is_up: 1; + uint8_t rsv1: 1; + + uint8_t inline_sec_offload: 1; + uint8_t ovs_enable: 1; + uint8_t lag_enable: 1; + uint8_t is_passthrough: 1; + uint8_t is_vf: 1; + uint8_t virtion_version: 2; + uint8_t virtio_enable: 1; + + uint8_t byte4_rsv1: 1; + uint8_t ingress_meter_enable: 1; + uint8_t egress_meter_enable: 1; + uint8_t byte4_rsv2: 2; + uint8_t fd_enable: 1; + uint8_t vepa_enable: 1; + uint8_t spoof_check_enable: 1; + + uint16_t port_base_qid : 12; + uint16_t hash_search_index : 3; + uint16_t rsv: 1; + + uint16_t mtu; + + uint16_t lag_id : 3; + uint16_t pf_vfid : 11; + uint16_t ingress_tm_enable : 1; + uint16_t egress_tm_enable : 1; + + uint8_t hash_alg: 4; + uint8_t phy_port: 4; + + uint8_t rss_hash_factor; + + uint16_t tpid; + + uint16_t vhca : 10; + uint16_t uplink_port : 6; +#endif +}; + +struct zxdh_panel_table { + uint16_t port_vfid_1588 : 11, + rsv2 : 5; + uint16_t pf_vfid : 11, + rsv1 : 1, + enable_1588_tc : 2, + trust_mode : 1, + hit_flag : 1; + uint32_t mtu : 16, + mtu_enable : 1, + rsv : 3, + tm_base_queue : 12; + uint32_t rsv_1; + uint32_t rsv_2; +}; /* 16B */ + +int zxdh_port_attr_init(struct rte_eth_dev *dev); 
+int zxdh_panel_table_init(struct rte_eth_dev *dev); +int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); + +#endif /* ZXDH_TABLES_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 100335 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
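
Editor's note, to make the new write path concrete: a caller wraps its record in a ZXDH_DTB_ERAM_ENTRY_INFO_T, tags it with the owning table's SDT number inside a ZXDH_DTB_USER_ENTRY_T, and submits the batch, exactly as zxdh_set_port_attr() does above. A minimal sketch follows; the SDT number 1 and the zeroed 128-bit payload are illustrative assumptions (real tables supply their own SDT ids and record layouts, and the SDT must have been registered as an eRAM table for the dispatch switch to accept it):

static int
example_eram_entry_write(uint32_t queue_id, uint32_t index)
{
	uint32_t payload[4] = {0};	/* one 128-bit eRAM record */
	ZXDH_DTB_ERAM_ENTRY_INFO_T eram_entry = {
		.index = index,
		.p_data = payload,
	};
	ZXDH_DTB_USER_ENTRY_T user_entry = {
		.sdt_no = 1,		/* hypothetical SDT id */
		.p_entry_data = &eram_entry,
	};

	/* One entry per call here; the loop in
	 * zxdh_np_dtb_table_entry_write() packs each entry into the
	 * down buffer and issues a single DTB transfer at the end. */
	return zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, queue_id,
			1, &user_entry);
}

Non-zero returns are ZXDH_RC_DTB_* codes (for example ZXDH_RC_DTB_DOWN_LEN_INVALID when the packed buffer would overflow), which the table helpers above log as failures.
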
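One detail in zxdh_np_dtb_tab_down_info_set() that is easy to miss: the item's DMA address is programmed right-shifted by four and split across two 32-bit descriptor fields, so the hardware addresses item buffers in 16-byte units. A sketch of that encoding, lifted from the expressions above (the helper name is illustrative):

static inline void
example_dtb_dma_addr_encode(uint64_t phy_addr,
		uint32_t *data_hddr, uint32_t *data_laddr)
{
	/* The >> 4 implies item buffers must be 16-byte aligned. */
	*data_hddr = ((phy_addr >> 4) >> 32) & 0xffffffff;
	*data_laddr = (phy_addr >> 4) & 0xffffffff;
}
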
* [PATCH v3 04/15] net/zxdh: port tables unint implementations 2024-12-17 11:41 ` [PATCH v3 00/15] net/zxdh: updated net zxdh driver Junlong Wang ` (2 preceding siblings ...) 2024-12-17 11:41 ` [PATCH v3 03/15] net/zxdh: port tables init implementations Junlong Wang @ 2024-12-17 11:41 ` Junlong Wang 2024-12-17 11:41 ` [PATCH v3 05/15] net/zxdh: rx/tx queue setup and intr enable Junlong Wang ` (10 subsequent siblings) 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-17 11:41 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 8652 bytes --] delete port tables in host. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/zxdh_ethdev.c | 18 ++++++ drivers/net/zxdh/zxdh_msg.h | 1 + drivers/net/zxdh/zxdh_np.c | 103 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_np.h | 9 +++ drivers/net/zxdh/zxdh_tables.c | 33 ++++++++++- drivers/net/zxdh/zxdh_tables.h | 1 + 6 files changed, 164 insertions(+), 1 deletion(-) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index ff44816384..717a1d2b0b 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -887,12 +887,30 @@ zxdh_np_uninit(struct rte_eth_dev *dev) zxdh_np_dtb_data_res_free(hw); } +static int +zxdh_tables_uninit(struct rte_eth_dev *dev) +{ + int ret; + + ret = zxdh_port_attr_uninit(dev); + if (ret) + PMD_DRV_LOG(ERR, "zxdh_port_attr_uninit failed"); + + return ret; +} + static int zxdh_dev_close(struct rte_eth_dev *dev) { struct zxdh_hw *hw = dev->data->dev_private; int ret = 0; + ret = zxdh_tables_uninit(dev); + if (ret != 0) { + PMD_DRV_LOG(ERR, "%s :tables uninit %s failed ", __func__, dev->device->name); + return -1; + } + zxdh_intr_release(dev); zxdh_np_uninit(dev); zxdh_pci_reset(hw); diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index b7b17b8696..613ca71170 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -167,6 +167,7 @@ enum pciebar_layout_type { enum zxdh_msg_type { ZXDH_NULL = 0, ZXDH_VF_PORT_INIT = 1, + ZXDH_VF_PORT_UNINIT = 2, ZXDH_MSG_TYPE_END, }; diff --git a/drivers/net/zxdh/zxdh_np.c b/drivers/net/zxdh/zxdh_np.c index db536d96e3..740d302f91 100644 --- a/drivers/net/zxdh/zxdh_np.c +++ b/drivers/net/zxdh/zxdh_np.c @@ -25,6 +25,7 @@ ZXDH_RISCV_DTB_MGR *p_riscv_dtb_queue_mgr[ZXDH_DEV_CHANNEL_MAX]; ZXDH_TLB_MGR_T *g_p_dpp_tlb_mgr[ZXDH_DEV_CHANNEL_MAX]; ZXDH_REG_T g_dpp_reg_info[4]; ZXDH_DTB_TABLE_T g_dpp_dtb_table_info[4]; +ZXDH_SDT_TBL_DATA_T g_sdt_info[ZXDH_DEV_CHANNEL_MAX][ZXDH_DEV_SDT_ID_MAX]; #define ZXDH_SDT_MGR_PTR_GET() (&g_sdt_mgr) #define ZXDH_SDT_SOFT_TBL_GET(id) (g_sdt_mgr.sdt_tbl_array[id]) @@ -1454,3 +1455,105 @@ zxdh_np_dtb_table_entry_write(uint32_t dev_id, return rc; } + +static uint32_t +zxdh_np_sdt_tbl_data_get(uint32_t dev_id, uint32_t sdt_no, ZXDH_SDT_TBL_DATA_T *p_sdt_data) +{ + uint32_t rc = 0; + + p_sdt_data->data_high32 = g_sdt_info[dev_id][sdt_no].data_high32; + p_sdt_data->data_low32 = g_sdt_info[dev_id][sdt_no].data_low32; + + return rc; +} + +int +zxdh_np_dtb_table_entry_delete(uint32_t dev_id, + uint32_t queue_id, + uint32_t entrynum, + ZXDH_DTB_USER_ENTRY_T *delete_entries) +{ + ZXDH_SDT_TBL_DATA_T sdt_tbl = {0}; + ZXDH_DTB_USER_ENTRY_T *pentry = NULL; + ZXDH_DTB_ENTRY_T dtb_one_entry = {0}; + uint8_t entry_cmd[ZXDH_DTB_TABLE_CMD_SIZE_BIT / 8] = {0}; + uint8_t entry_data[ZXDH_ETCAM_WIDTH_MAX / 8] = {0}; + uint8_t *p_data_buff = NULL; + uint8_t *p_data_buff_ex = NULL; + uint32_t tbl_type 
= 0; + uint32_t element_id = 0xff; + uint32_t one_dtb_len = 0; + uint32_t dtb_len = 0; + uint32_t entry_index; + uint32_t sdt_no; + uint32_t addr_offset; + uint32_t max_size; + uint32_t rc; + + ZXDH_COMM_CHECK_POINT(delete_entries); + + p_data_buff = rte_calloc(NULL, 1, ZXDH_DTB_TABLE_DATA_BUFF_SIZE, 0); + ZXDH_COMM_CHECK_POINT(p_data_buff); + + p_data_buff_ex = rte_calloc(NULL, sizeof(uint8_t), ZXDH_DTB_TABLE_DATA_BUFF_SIZE, 0); + ZXDH_COMM_CHECK_POINT_MEMORY_FREE(p_data_buff_ex, p_data_buff); + + dtb_one_entry.cmd = entry_cmd; + dtb_one_entry.data = entry_data; + + max_size = (ZXDH_DTB_TABLE_DATA_BUFF_SIZE / 16) - 1; + + for (entry_index = 0; entry_index < entrynum; entry_index++) { + pentry = delete_entries + entry_index; + + sdt_no = pentry->sdt_no; + rc = zxdh_np_sdt_tbl_data_get(dev_id, sdt_no, &sdt_tbl); + switch (tbl_type) { + case ZXDH_SDT_TBLT_ERAM: + { + rc = zxdh_np_dtb_eram_one_entry(dev_id, sdt_no, ZXDH_DTB_ITEM_DELETE, + pentry->p_entry_data, &one_dtb_len, &dtb_one_entry); + break; + } + default: + { + PMD_DRV_LOG(ERR, "SDT table_type[ %d ] is invalid!", tbl_type); + rte_free(p_data_buff); + rte_free(p_data_buff_ex); + return 1; + } + } + + addr_offset = dtb_len * ZXDH_DTB_LEN_POS_SETP; + dtb_len += one_dtb_len; + if (dtb_len > max_size) { + rte_free(p_data_buff); + rte_free(p_data_buff_ex); + PMD_DRV_LOG(ERR, " %s error dtb_len>%u!", __func__, + max_size); + return ZXDH_RC_DTB_DOWN_LEN_INVALID; + } + + rc = zxdh_np_dtb_data_write(p_data_buff, addr_offset, &dtb_one_entry); + memset(entry_cmd, 0x0, sizeof(entry_cmd)); + memset(entry_data, 0x0, sizeof(entry_data)); + } + + if (dtb_len == 0) { + rte_free(p_data_buff); + rte_free(p_data_buff_ex); + return ZXDH_RC_DTB_DOWN_LEN_INVALID; + } + + rc = zxdh_np_dtb_write_down_table_data(dev_id, + queue_id, + dtb_len * 16, + p_data_buff, + &element_id); + rte_free(p_data_buff); + ZXDH_COMM_CHECK_RC_MEMORY_FREE_NO_ASSERT(rc, + "dpp_dtb_write_down_table_data", p_data_buff_ex); + + rte_free(p_data_buff_ex); + return 0; +} diff --git a/drivers/net/zxdh/zxdh_np.h b/drivers/net/zxdh/zxdh_np.h index 40961c02a2..42a652dd6b 100644 --- a/drivers/net/zxdh/zxdh_np.h +++ b/drivers/net/zxdh/zxdh_np.h @@ -20,6 +20,8 @@ #define ZXDH_PPU_CLUSTER_NUM (6) #define ZXDH_PPU_INSTR_MEM_NUM (3) #define ZXDH_SDT_CFG_LEN (2) +#define ZXDH_SDT_H_TBL_TYPE_BT_POS (29) +#define ZXDH_SDT_H_TBL_TYPE_BT_LEN (3) #define ZXDH_RC_DEV_BASE (0x600) #define ZXDH_RC_DEV_PARA_INVALID (ZXDH_RC_DEV_BASE | 0x0) @@ -507,9 +509,16 @@ typedef struct zxdh_dtb_user_entry_t { void *p_entry_data; } ZXDH_DTB_USER_ENTRY_T; +typedef struct zxdh_sdt_tbl_data_t { + uint32_t data_high32; + uint32_t data_low32; +} ZXDH_SDT_TBL_DATA_T; + int zxdh_np_host_init(uint32_t dev_id, ZXDH_DEV_INIT_CTRL_T *p_dev_init_ctrl); int zxdh_np_online_uninit(uint32_t dev_id, char *port_name, uint32_t queue_id); int zxdh_np_dtb_table_entry_write(uint32_t dev_id, uint32_t queue_id, uint32_t entrynum, ZXDH_DTB_USER_ENTRY_T *down_entries); +int zxdh_np_dtb_table_entry_delete(uint32_t dev_id, uint32_t queue_id, + uint32_t entrynum, ZXDH_DTB_USER_ENTRY_T *delete_entries); #endif /* ZXDH_NP_H */ diff --git a/drivers/net/zxdh/zxdh_tables.c b/drivers/net/zxdh/zxdh_tables.c index 91376e6ec0..9fd184e612 100644 --- a/drivers/net/zxdh/zxdh_tables.c +++ b/drivers/net/zxdh/zxdh_tables.c @@ -11,7 +11,8 @@ #define ZXDH_SDT_VPORT_ATT_TABLE 1 #define ZXDH_SDT_PANEL_ATT_TABLE 2 -int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr) +int +zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table 
*port_attr) { int ret = 0; @@ -70,6 +71,36 @@ zxdh_port_attr_init(struct rte_eth_dev *dev) return ret; }; +int +zxdh_port_attr_uninit(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_info msg_info = {0}; + struct zxdh_port_attr_table port_attr = {0}; + int ret = 0; + + if (hw->is_pf == 1) { + ZXDH_DTB_ERAM_ENTRY_INFO_T port_attr_entry = {hw->vfid, (uint32_t *)&port_attr}; + ZXDH_DTB_USER_ENTRY_T entry = { + .sdt_no = ZXDH_SDT_VPORT_ATT_TABLE, + .p_entry_data = (void *)&port_attr_entry + }; + ret = zxdh_np_dtb_table_entry_delete(ZXDH_DEVICE_NO, g_dtb_data.queueid, 1, &entry); + if (ret) { + PMD_DRV_LOG(ERR, "delete port attr table failed"); + ret = -1; + } + } else { + zxdh_msg_head_build(hw, ZXDH_VF_PORT_UNINIT, &msg_info); + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "vf port tables uninit failed"); + ret = -1; + } + } + return ret; +} + int zxdh_panel_table_init(struct rte_eth_dev *dev) { struct zxdh_hw *hw = dev->data->dev_private; diff --git a/drivers/net/zxdh/zxdh_tables.h b/drivers/net/zxdh/zxdh_tables.h index 5d34af2f05..5e9b36faee 100644 --- a/drivers/net/zxdh/zxdh_tables.h +++ b/drivers/net/zxdh/zxdh_tables.h @@ -144,5 +144,6 @@ struct zxdh_panel_table { int zxdh_port_attr_init(struct rte_eth_dev *dev); int zxdh_panel_table_init(struct rte_eth_dev *dev); int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); +int zxdh_port_attr_uninit(struct rte_eth_dev *dev); #endif /* ZXDH_TABLES_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 18686 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
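
Editor's note: deletion is symmetric to insertion. The same ZXDH_DTB_USER_ENTRY_T shape goes to zxdh_np_dtb_table_entry_delete(), whose eRAM branch re-uses zxdh_np_dtb_eram_one_entry() with ZXDH_DTB_ITEM_DELETE so a zeroed buffer is written over the entry. A hedged sketch of a PF-side delete follows (SDT number and index are illustrative; VFs never touch the DTB directly and instead send ZXDH_VF_PORT_UNINIT to the PF, as zxdh_port_attr_uninit() shows):

static int
example_eram_entry_delete(uint32_t queue_id, uint32_t index)
{
	/* The delete path writes zeros itself, but the uninit code
	 * above still passes a zeroed record for p_data. */
	struct zxdh_port_attr_table attr = {0};
	ZXDH_DTB_ERAM_ENTRY_INFO_T eram_entry = {
		.index = index,
		.p_data = (uint32_t *)&attr,
	};
	ZXDH_DTB_USER_ENTRY_T user_entry = {
		.sdt_no = 1,		/* hypothetical SDT id */
		.p_entry_data = &eram_entry,
	};

	return zxdh_np_dtb_table_entry_delete(ZXDH_DEVICE_NO, queue_id,
			1, &user_entry);
}
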
* [PATCH v3 05/15] net/zxdh: rx/tx queue setup and intr enable 2024-12-17 11:41 ` [PATCH v3 00/15] net/zxdh: updated net zxdh driver Junlong Wang ` (3 preceding siblings ...) 2024-12-17 11:41 ` [PATCH v3 04/15] net/zxdh: port tables unint implementations Junlong Wang @ 2024-12-17 11:41 ` Junlong Wang 2024-12-17 11:41 ` [PATCH v3 06/15] net/zxdh: dev start/stop ops implementations Junlong Wang ` (9 subsequent siblings) 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-17 11:41 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 7822 bytes --] rx/tx queue setup and intr enable implementations. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/zxdh_ethdev.c | 4 + drivers/net/zxdh/zxdh_queue.c | 149 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_queue.h | 33 ++++++++ 3 files changed, 186 insertions(+) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 717a1d2b0b..521d7ed433 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -933,6 +933,10 @@ static const struct eth_dev_ops zxdh_eth_dev_ops = { .dev_configure = zxdh_dev_configure, .dev_close = zxdh_dev_close, .dev_infos_get = zxdh_dev_infos_get, + .rx_queue_setup = zxdh_dev_rx_queue_setup, + .tx_queue_setup = zxdh_dev_tx_queue_setup, + .rx_queue_intr_enable = zxdh_dev_rx_queue_intr_enable, + .rx_queue_intr_disable = zxdh_dev_rx_queue_intr_disable, }; static int32_t diff --git a/drivers/net/zxdh/zxdh_queue.c b/drivers/net/zxdh/zxdh_queue.c index b4ef90ea36..af21f046ad 100644 --- a/drivers/net/zxdh/zxdh_queue.c +++ b/drivers/net/zxdh/zxdh_queue.c @@ -12,6 +12,11 @@ #include "zxdh_common.h" #include "zxdh_msg.h" +#define ZXDH_MBUF_MIN_SIZE sizeof(struct zxdh_net_hdr_dl) +#define ZXDH_MBUF_SIZE_4K 4096 +#define ZXDH_RX_FREE_THRESH 32 +#define ZXDH_TX_FREE_THRESH 32 + struct rte_mbuf * zxdh_queue_detach_unused(struct zxdh_virtqueue *vq) { @@ -125,3 +130,147 @@ zxdh_free_queues(struct rte_eth_dev *dev) return 0; } + +static int +zxdh_check_mempool(struct rte_mempool *mp, uint16_t offset, uint16_t min_length) +{ + uint16_t data_room_size; + + if (mp == NULL) + return -EINVAL; + data_room_size = rte_pktmbuf_data_room_size(mp); + if (data_room_size < offset + min_length) { + PMD_RX_LOG(ERR, + "%s mbuf_data_room_size %u < %u (%u + %u)", + mp->name, data_room_size, + offset + min_length, offset, min_length); + return -EINVAL; + } + return 0; +} + +int32_t +zxdh_dev_rx_queue_setup(struct rte_eth_dev *dev, + uint16_t queue_idx, + uint16_t nb_desc, + uint32_t socket_id __rte_unused, + const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mp) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint16_t vtpci_logic_qidx = 2 * queue_idx + ZXDH_RQ_QUEUE_IDX; + struct zxdh_virtqueue *vq = hw->vqs[vtpci_logic_qidx]; + int32_t ret = 0; + + if (rx_conf->rx_deferred_start) { + PMD_RX_LOG(ERR, "Rx deferred start is not supported"); + return -EINVAL; + } + uint16_t rx_free_thresh = rx_conf->rx_free_thresh; + + if (rx_free_thresh == 0) + rx_free_thresh = RTE_MIN(vq->vq_nentries / 4, ZXDH_RX_FREE_THRESH); + + /* rx_free_thresh must be multiples of four. */ + if (rx_free_thresh & 0x3) { + PMD_RX_LOG(ERR, "(rx_free_thresh=%u port=%u queue=%u)", + rx_free_thresh, dev->data->port_id, queue_idx); + return -EINVAL; + } + /* rx_free_thresh must be less than the number of RX entries */ + if (rx_free_thresh >= vq->vq_nentries) { + PMD_RX_LOG(ERR, "RX entries (%u). 
(rx_free_thresh=%u port=%u queue=%u)", + vq->vq_nentries, rx_free_thresh, dev->data->port_id, queue_idx); + return -EINVAL; + } + vq->vq_free_thresh = rx_free_thresh; + nb_desc = ZXDH_QUEUE_DEPTH; + + vq->vq_free_cnt = RTE_MIN(vq->vq_free_cnt, nb_desc); + struct zxdh_virtnet_rx *rxvq = &vq->rxq; + + rxvq->queue_id = vtpci_logic_qidx; + + int mbuf_min_size = ZXDH_MBUF_MIN_SIZE; + + if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) + mbuf_min_size = ZXDH_MBUF_SIZE_4K; + + ret = zxdh_check_mempool(mp, RTE_PKTMBUF_HEADROOM, mbuf_min_size); + if (ret != 0) { + PMD_RX_LOG(ERR, + "rxq setup but mpool size too small(<%d) failed", mbuf_min_size); + return -EINVAL; + } + rxvq->mpool = mp; + if (queue_idx < dev->data->nb_rx_queues) + dev->data->rx_queues[queue_idx] = rxvq; + + return 0; +} + +int32_t +zxdh_dev_tx_queue_setup(struct rte_eth_dev *dev, + uint16_t queue_idx, + uint16_t nb_desc, + uint32_t socket_id __rte_unused, + const struct rte_eth_txconf *tx_conf) +{ + uint16_t vtpci_logic_qidx = 2 * queue_idx + ZXDH_TQ_QUEUE_IDX; + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_virtqueue *vq = hw->vqs[vtpci_logic_qidx]; + struct zxdh_virtnet_tx *txvq = NULL; + uint16_t tx_free_thresh = 0; + + if (tx_conf->tx_deferred_start) { + PMD_TX_LOG(ERR, "Tx deferred start is not supported"); + return -EINVAL; + } + + nb_desc = ZXDH_QUEUE_DEPTH; + + vq->vq_free_cnt = RTE_MIN(vq->vq_free_cnt, nb_desc); + + txvq = &vq->txq; + txvq->queue_id = vtpci_logic_qidx; + + tx_free_thresh = tx_conf->tx_free_thresh; + if (tx_free_thresh == 0) + tx_free_thresh = RTE_MIN(vq->vq_nentries / 4, ZXDH_TX_FREE_THRESH); + + /* tx_free_thresh must be less than the number of TX entries minus 3 */ + if (tx_free_thresh >= (vq->vq_nentries - 3)) { + PMD_TX_LOG(ERR, "TX entries - 3 (%u). (tx_free_thresh=%u port=%u queue=%u)", + vq->vq_nentries - 3, tx_free_thresh, dev->data->port_id, queue_idx); + return -EINVAL; + } + + vq->vq_free_thresh = tx_free_thresh; + + if (queue_idx < dev->data->nb_tx_queues) + dev->data->tx_queues[queue_idx] = txvq; + + return 0; +} + +int32_t +zxdh_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_virtnet_rx *rxvq = dev->data->rx_queues[queue_id]; + struct zxdh_virtqueue *vq = rxvq->vq; + + zxdh_queue_enable_intr(vq); + zxdh_mb(hw->weak_barriers); + return 0; +} + +int32_t +zxdh_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id) +{ + struct zxdh_virtnet_rx *rxvq = dev->data->rx_queues[queue_id]; + struct zxdh_virtqueue *vq = rxvq->vq; + + zxdh_queue_disable_intr(vq); + return 0; +} diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h index 1304d5e4ea..2f602d894f 100644 --- a/drivers/net/zxdh/zxdh_queue.h +++ b/drivers/net/zxdh/zxdh_queue.h @@ -8,6 +8,7 @@ #include <stdint.h> #include <rte_common.h> +#include <rte_atomic.h> #include "zxdh_ethdev.h" #include "zxdh_rxtx.h" @@ -30,6 +31,7 @@ enum { ZXDH_VTNET_RQ = 0, ZXDH_VTNET_TQ = 1 }; #define ZXDH_RING_EVENT_FLAGS_DESC 0x2 #define ZXDH_VQ_RING_DESC_CHAIN_END 32768 +#define ZXDH_QUEUE_DEPTH 1024 /* * ring descriptors: 16 bytes. 
@@ -270,8 +272,39 @@ zxdh_queue_disable_intr(struct zxdh_virtqueue *vq) } } +static inline void +zxdh_queue_enable_intr(struct zxdh_virtqueue *vq) +{ + if (vq->vq_packed.event_flags_shadow == ZXDH_RING_EVENT_FLAGS_DISABLE) { + vq->vq_packed.event_flags_shadow = ZXDH_RING_EVENT_FLAGS_DISABLE; + vq->vq_packed.ring.driver->desc_event_flags = vq->vq_packed.event_flags_shadow; + } +} + +static inline void +zxdh_mb(uint8_t weak_barriers) +{ + if (weak_barriers) + rte_atomic_thread_fence(rte_memory_order_seq_cst); + else + rte_mb(); +} + struct rte_mbuf *zxdh_queue_detach_unused(struct zxdh_virtqueue *vq); int32_t zxdh_free_queues(struct rte_eth_dev *dev); int32_t zxdh_get_queue_type(uint16_t vtpci_queue_idx); +int32_t zxdh_dev_tx_queue_setup(struct rte_eth_dev *dev, + uint16_t queue_idx, + uint16_t nb_desc, + uint32_t socket_id __rte_unused, + const struct rte_eth_txconf *tx_conf); +int32_t zxdh_dev_rx_queue_setup(struct rte_eth_dev *dev, + uint16_t queue_idx, + uint16_t nb_desc, + uint32_t socket_id __rte_unused, + const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mp); +int32_t zxdh_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id); +int32_t zxdh_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id); #endif /* ZXDH_QUEUE_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 17339 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
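
Editor's note on the queue layout used throughout the patch above: Rx and Tx share a single virtqueue array, with ethdev queue q owning virtqueue 2*q + ZXDH_RQ_QUEUE_IDX for receive and 2*q + ZXDH_TQ_QUEUE_IDX for transmit; both setup functions compute exactly that. A tiny sketch of the mapping (assuming the RQ/TQ constants are 0 and 1, as the even/odd interleave suggests):

/* Map an ethdev queue index to its logical virtqueue index. */
static inline uint16_t
example_logic_qidx(uint16_t ethdev_qidx, int is_tx)
{
	return 2 * ethdev_qidx +
		(is_tx ? ZXDH_TQ_QUEUE_IDX : ZXDH_RQ_QUEUE_IDX);
}

/* With 4 Rx and 4 Tx queues configured, hw->vqs[] is laid out as
 * RQ0 TQ0 RQ1 TQ1 RQ2 TQ2 RQ3 TQ3, which is why zxdh_dev_start()
 * walks nb_rx_queues and nb_tx_queues separately. */
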
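The Rx setup also pins down rx_free_thresh: zero means "pick a default of min(nentries/4, ZXDH_RX_FREE_THRESH)", and the result must be a multiple of four strictly below the ring size. A sketch of that validation as a standalone helper, under the stated assumptions (the helper name and the 0-as-error convention are illustrative only):

static uint16_t
example_rx_free_thresh(uint16_t vq_nentries, uint16_t requested)
{
	uint16_t thresh = requested;

	if (thresh == 0)
		thresh = RTE_MIN(vq_nentries / 4, ZXDH_RX_FREE_THRESH);

	/* Must be a multiple of four and leave live descriptors. */
	if ((thresh & 0x3) || thresh >= vq_nentries)
		return 0;	/* caller maps 0 to -EINVAL */

	return thresh;
}
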
* [PATCH v3 06/15] net/zxdh: dev start/stop ops implementations 2024-12-17 11:41 ` [PATCH v3 00/15] net/zxdh: updated net zxdh driver Junlong Wang ` (4 preceding siblings ...) 2024-12-17 11:41 ` [PATCH v3 05/15] net/zxdh: rx/tx queue setup and intr enable Junlong Wang @ 2024-12-17 11:41 ` Junlong Wang 2024-12-17 11:41 ` [PATCH v3 07/15] net/zxdh: provided dev simple tx implementations Junlong Wang ` (8 subsequent siblings) 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-17 11:41 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 13892 bytes --] dev start/stop ops implementations, starting and stopping the rx/tx queues. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- doc/guides/nics/features/zxdh.ini | 2 + doc/guides/nics/zxdh.rst | 2 + drivers/net/zxdh/zxdh_ethdev.c | 61 +++++++++++++++++++++ drivers/net/zxdh/zxdh_pci.c | 24 ++++++++ drivers/net/zxdh/zxdh_pci.h | 1 + drivers/net/zxdh/zxdh_queue.c | 91 +++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_queue.h | 69 +++++++++++++++++++++++ drivers/net/zxdh/zxdh_rxtx.h | 14 ++--- 8 files changed, 256 insertions(+), 8 deletions(-) diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index 05c8091ed7..7b72be5f25 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -7,3 +7,5 @@ Linux = Y x86-64 = Y ARMv8 = Y +SR-IOV = Y +Multiprocess aware = Y diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index 2144753d75..eb970a888f 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -18,6 +18,8 @@ Features Features of the ZXDH PMD are: - Multi arch support: x86_64, ARMv8. +- Multiple queues for TX and RX +- SR-IOV VF Driver compilation and testing diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 521d7ed433..59ee942bdd 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -899,12 +899,35 @@ zxdh_tables_uninit(struct rte_eth_dev *dev) return ret; } +static int +zxdh_dev_stop(struct rte_eth_dev *dev) +{ + int ret = 0; + + if (dev->data->dev_started == 0) + return 0; + + ret = zxdh_intr_disable(dev); + if (ret) { + PMD_DRV_LOG(ERR, "intr disable failed"); + return -EINVAL; + } + + return 0; +} + static int zxdh_dev_close(struct rte_eth_dev *dev) { struct zxdh_hw *hw = dev->data->dev_private; int ret = 0; + ret = zxdh_dev_stop(dev); + if (ret != 0) { + PMD_DRV_LOG(ERR, "stop port %s failed", dev->device->name); + return -1; + } + ret = zxdh_tables_uninit(dev); if (ret != 0) { PMD_DRV_LOG(ERR, "%s: tables uninit %s failed", __func__, dev->device->name); @@ -928,9 +951,47 @@ zxdh_dev_close(struct rte_eth_dev *dev) return ret; } +static int +zxdh_dev_start(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_virtqueue *vq; + int32_t ret; + uint16_t logic_qidx; + uint16_t i; + + for (i = 0; i < dev->data->nb_rx_queues; i++) { + logic_qidx = 2 * i + ZXDH_RQ_QUEUE_IDX; + ret = zxdh_dev_rx_queue_setup_finish(dev, logic_qidx); + if (ret < 0) + return ret; + } + ret = zxdh_intr_enable(dev); + if (ret) { + PMD_DRV_LOG(ERR, "interrupt enable failed"); + return -EINVAL; + } + + for (i = 0; i < dev->data->nb_rx_queues; i++) { + logic_qidx = 2 * i + ZXDH_RQ_QUEUE_IDX; + vq = hw->vqs[logic_qidx]; + /* Flush the old packets */ + zxdh_queue_rxvq_flush(vq); + zxdh_queue_notify(vq); + } + for (i = 0; i < dev->data->nb_tx_queues; i++) { + logic_qidx = 2 * i + ZXDH_TQ_QUEUE_IDX; + vq = 
hw->vqs[logic_qidx]; + zxdh_queue_notify(vq); + } + return 0; +} + /* dev_ops for zxdh, bare necessities for basic operation */ static const struct eth_dev_ops zxdh_eth_dev_ops = { .dev_configure = zxdh_dev_configure, + .dev_start = zxdh_dev_start, + .dev_stop = zxdh_dev_stop, .dev_close = zxdh_dev_close, .dev_infos_get = zxdh_dev_infos_get, .rx_queue_setup = zxdh_dev_rx_queue_setup, diff --git a/drivers/net/zxdh/zxdh_pci.c b/drivers/net/zxdh/zxdh_pci.c index 250e67d560..83164a5c79 100644 --- a/drivers/net/zxdh/zxdh_pci.c +++ b/drivers/net/zxdh/zxdh_pci.c @@ -202,6 +202,29 @@ zxdh_del_queue(struct zxdh_hw *hw, struct zxdh_virtqueue *vq) rte_write16(0, &hw->common_cfg->queue_enable); } +static void +zxdh_notify_queue(struct zxdh_hw *hw, struct zxdh_virtqueue *vq) +{ + uint32_t notify_data = 0; + + if (!zxdh_pci_with_feature(hw, ZXDH_F_NOTIFICATION_DATA)) { + rte_write16(vq->vq_queue_index, vq->notify_addr); + return; + } + + if (zxdh_pci_with_feature(hw, ZXDH_F_RING_PACKED)) { + notify_data = ((uint32_t)(!!(vq->vq_packed.cached_flags & + ZXDH_VRING_PACKED_DESC_F_AVAIL)) << 31) | + ((uint32_t)vq->vq_avail_idx << 16) | + vq->vq_queue_index; + } else { + notify_data = ((uint32_t)vq->vq_avail_idx << 16) | vq->vq_queue_index; + } + PMD_DRV_LOG(DEBUG, "queue:%d notify_data 0x%x notify_addr 0x%p", + vq->vq_queue_index, notify_data, vq->notify_addr); + rte_write32(notify_data, vq->notify_addr); +} + const struct zxdh_pci_ops zxdh_dev_pci_ops = { .read_dev_cfg = zxdh_read_dev_config, .write_dev_cfg = zxdh_write_dev_config, @@ -216,6 +239,7 @@ const struct zxdh_pci_ops zxdh_dev_pci_ops = { .set_queue_num = zxdh_set_queue_num, .setup_queue = zxdh_setup_queue, .del_queue = zxdh_del_queue, + .notify_queue = zxdh_notify_queue, }; uint8_t diff --git a/drivers/net/zxdh/zxdh_pci.h b/drivers/net/zxdh/zxdh_pci.h index e3f13cb17d..5c5f72b90e 100644 --- a/drivers/net/zxdh/zxdh_pci.h +++ b/drivers/net/zxdh/zxdh_pci.h @@ -144,6 +144,7 @@ struct zxdh_pci_ops { int32_t (*setup_queue)(struct zxdh_hw *hw, struct zxdh_virtqueue *vq); void (*del_queue)(struct zxdh_hw *hw, struct zxdh_virtqueue *vq); + void (*notify_queue)(struct zxdh_hw *hw, struct zxdh_virtqueue *vq); }; struct zxdh_hw_internal { diff --git a/drivers/net/zxdh/zxdh_queue.c b/drivers/net/zxdh/zxdh_queue.c index af21f046ad..8c8f2605f6 100644 --- a/drivers/net/zxdh/zxdh_queue.c +++ b/drivers/net/zxdh/zxdh_queue.c @@ -274,3 +274,94 @@ zxdh_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id) zxdh_queue_disable_intr(vq); return 0; } + +int32_t zxdh_enqueue_recv_refill_packed(struct zxdh_virtqueue *vq, + struct rte_mbuf **cookie, uint16_t num) +{ + struct zxdh_vring_packed_desc *start_dp = vq->vq_packed.ring.desc; + struct zxdh_hw *hw = vq->hw; + struct zxdh_vq_desc_extra *dxp; + uint16_t flags = vq->vq_packed.cached_flags; + int32_t i; + uint16_t idx; + + for (i = 0; i < num; i++) { + idx = vq->vq_avail_idx; + dxp = &vq->vq_descx[idx]; + dxp->cookie = (void *)cookie[i]; + dxp->ndescs = 1; + /* rx pkt fill in data_off */ + start_dp[idx].addr = rte_mbuf_iova_get(cookie[i]) + RTE_PKTMBUF_HEADROOM; + start_dp[idx].len = cookie[i]->buf_len - RTE_PKTMBUF_HEADROOM; + vq->vq_desc_head_idx = dxp->next; + if (vq->vq_desc_head_idx == ZXDH_VQ_RING_DESC_CHAIN_END) + vq->vq_desc_tail_idx = vq->vq_desc_head_idx; + zxdh_queue_store_flags_packed(&start_dp[idx], flags, hw->weak_barriers); + if (++vq->vq_avail_idx >= vq->vq_nentries) { + vq->vq_avail_idx -= vq->vq_nentries; + vq->vq_packed.cached_flags ^= ZXDH_VRING_PACKED_DESC_F_AVAIL_USED; + flags = 
vq->vq_packed.cached_flags; + } + } + vq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt - num); + return 0; +} + +int32_t zxdh_dev_rx_queue_setup_finish(struct rte_eth_dev *dev, uint16_t logic_qidx) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_virtqueue *vq = hw->vqs[logic_qidx]; + struct zxdh_virtnet_rx *rxvq = &vq->rxq; + uint16_t desc_idx; + int32_t error = 0; + + /* Allocate blank mbufs for each rx descriptor */ + memset(&rxvq->fake_mbuf, 0, sizeof(rxvq->fake_mbuf)); + for (desc_idx = 0; desc_idx < ZXDH_MBUF_BURST_SZ; desc_idx++) + vq->sw_ring[vq->vq_nentries + desc_idx] = &rxvq->fake_mbuf; + + while (!zxdh_queue_full(vq)) { + uint16_t free_cnt = vq->vq_free_cnt; + + free_cnt = RTE_MIN(ZXDH_MBUF_BURST_SZ, free_cnt); + struct rte_mbuf *new_pkts[free_cnt]; + + if (likely(rte_pktmbuf_alloc_bulk(rxvq->mpool, new_pkts, free_cnt) == 0)) { + error = zxdh_enqueue_recv_refill_packed(vq, new_pkts, free_cnt); + if (unlikely(error)) { + int32_t i; + for (i = 0; i < free_cnt; i++) + rte_pktmbuf_free(new_pkts[i]); + } + } else { + PMD_DRV_LOG(ERR, "port %d rxq %d failed to allocate bufs from %s", + hw->port_id, logic_qidx, rxvq->mpool->name); + break; + } + } + return 0; +} + +void zxdh_queue_rxvq_flush(struct zxdh_virtqueue *vq) +{ + struct zxdh_vq_desc_extra *dxp = NULL; + uint16_t i = 0; + struct zxdh_vring_packed_desc *descs = vq->vq_packed.ring.desc; + int32_t cnt = 0; + + i = vq->vq_used_cons_idx; + while (zxdh_desc_used(&descs[i], vq) && cnt++ < vq->vq_nentries) { + dxp = &vq->vq_descx[descs[i].id]; + if (dxp->cookie != NULL) { + rte_pktmbuf_free(dxp->cookie); + dxp->cookie = NULL; + } + vq->vq_free_cnt++; + vq->vq_used_cons_idx++; + if (vq->vq_used_cons_idx >= vq->vq_nentries) { + vq->vq_used_cons_idx -= vq->vq_nentries; + vq->vq_packed.used_wrap_counter ^= 1; + } + i = vq->vq_used_cons_idx; + } +} diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h index 2f602d894f..6513aec3f0 100644 --- a/drivers/net/zxdh/zxdh_queue.h +++ b/drivers/net/zxdh/zxdh_queue.h @@ -25,6 +25,11 @@ enum { ZXDH_VTNET_RQ = 0, ZXDH_VTNET_TQ = 1 }; #define ZXDH_VRING_DESC_F_WRITE 2 /* This flag means the descriptor was made available by the driver */ #define ZXDH_VRING_PACKED_DESC_F_AVAIL (1 << (7)) +#define ZXDH_VRING_PACKED_DESC_F_USED (1 << (15)) + +/* Frequently used combinations */ +#define ZXDH_VRING_PACKED_DESC_F_AVAIL_USED \ + (ZXDH_VRING_PACKED_DESC_F_AVAIL | ZXDH_VRING_PACKED_DESC_F_USED) #define ZXDH_RING_EVENT_FLAGS_ENABLE 0x0 #define ZXDH_RING_EVENT_FLAGS_DISABLE 0x1 @@ -33,6 +38,9 @@ enum { ZXDH_VTNET_RQ = 0, ZXDH_VTNET_TQ = 1 }; #define ZXDH_VQ_RING_DESC_CHAIN_END 32768 #define ZXDH_QUEUE_DEPTH 1024 +#define ZXDH_RQ_QUEUE_IDX 0 +#define ZXDH_TQ_QUEUE_IDX 1 + /* * ring descriptors: 16 bytes. * These can chain together via "next".
@@ -290,6 +298,63 @@ zxdh_mb(uint8_t weak_barriers) rte_mb(); } +static inline int32_t +zxdh_queue_full(const struct zxdh_virtqueue *vq) +{ + return (vq->vq_free_cnt == 0); +} + +static inline void +zxdh_queue_store_flags_packed(struct zxdh_vring_packed_desc *dp, + uint16_t flags, uint8_t weak_barriers) + { + if (weak_barriers) { + #ifdef RTE_ARCH_X86_64 + rte_io_wmb(); + dp->flags = flags; + #else + rte_atomic_store_explicit(&dp->flags, flags, rte_memory_order_release); + #endif + } else { + rte_io_wmb(); + dp->flags = flags; + } +} + +static inline uint16_t +zxdh_queue_fetch_flags_packed(struct zxdh_vring_packed_desc *dp, + uint8_t weak_barriers) + { + uint16_t flags; + if (weak_barriers) { + #ifdef RTE_ARCH_X86_64 + flags = dp->flags; + rte_io_rmb(); + #else + flags = rte_atomic_load_explicit(&dp->flags, rte_memory_order_acquire); + #endif + } else { + flags = dp->flags; + rte_io_rmb(); + } + + return flags; +} + +static inline int32_t +zxdh_desc_used(struct zxdh_vring_packed_desc *desc, struct zxdh_virtqueue *vq) +{ + uint16_t flags = zxdh_queue_fetch_flags_packed(desc, vq->hw->weak_barriers); + uint16_t used = !!(flags & ZXDH_VRING_PACKED_DESC_F_USED); + uint16_t avail = !!(flags & ZXDH_VRING_PACKED_DESC_F_AVAIL); + return avail == used && used == vq->vq_packed.used_wrap_counter; +} + +static inline void zxdh_queue_notify(struct zxdh_virtqueue *vq) +{ + ZXDH_VTPCI_OPS(vq->hw)->notify_queue(vq->hw, vq); +} + struct rte_mbuf *zxdh_queue_detach_unused(struct zxdh_virtqueue *vq); int32_t zxdh_free_queues(struct rte_eth_dev *dev); int32_t zxdh_get_queue_type(uint16_t vtpci_queue_idx); @@ -306,5 +371,9 @@ int32_t zxdh_dev_rx_queue_setup(struct rte_eth_dev *dev, struct rte_mempool *mp); int32_t zxdh_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id); int32_t zxdh_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id); +int32_t zxdh_dev_rx_queue_setup_finish(struct rte_eth_dev *dev, uint16_t logic_qidx); +void zxdh_queue_rxvq_flush(struct zxdh_virtqueue *vq); +int32_t zxdh_enqueue_recv_refill_packed(struct zxdh_virtqueue *vq, + struct rte_mbuf **cookie, uint16_t num); #endif /* ZXDH_QUEUE_H */ diff --git a/drivers/net/zxdh/zxdh_rxtx.h b/drivers/net/zxdh/zxdh_rxtx.h index de9353b223..8c7f734805 100644 --- a/drivers/net/zxdh/zxdh_rxtx.h +++ b/drivers/net/zxdh/zxdh_rxtx.h @@ -20,21 +20,19 @@ struct zxdh_virtnet_stats { uint64_t size_bins[8]; }; -struct zxdh_virtnet_rx { +struct __rte_cache_aligned zxdh_virtnet_rx { struct zxdh_virtqueue *vq; - - /* dummy mbuf, for wraparound when processing RX ring. */ - struct rte_mbuf fake_mbuf; - uint64_t mbuf_initializer; /* value to init mbufs. */ struct rte_mempool *mpool; /* mempool for mbuf allocation */ uint16_t queue_id; /* DPDK queue index. */ uint16_t port_id; /* Device port identifier. */ struct zxdh_virtnet_stats stats; const struct rte_memzone *mz; /* mem zone to populate RX ring. */ -} __rte_packed; + /* dummy mbuf, for wraparound when processing RX ring. */ + struct rte_mbuf fake_mbuf; +}; -struct zxdh_virtnet_tx { +struct __rte_cache_aligned zxdh_virtnet_tx { struct zxdh_virtqueue *vq; const struct rte_memzone *zxdh_net_hdr_mz; /* memzone to populate hdr. */ rte_iova_t zxdh_net_hdr_mem; /* hdr for each xmit packet */ @@ -42,6 +40,6 @@ struct zxdh_virtnet_tx { uint16_t port_id; /* Device port identifier. */ struct zxdh_virtnet_stats stats; const struct rte_memzone *mz; /* mem zone to populate TX ring. 
*/ -} __rte_packed; +}; #endif /* ZXDH_RXTX_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 31827 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
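Worth spelling out for readers of this patch: the zxdh_desc_used() helper added above implements the standard packed-ring ownership test. A self-contained sketch of the same logic, with illustrative names rather than the driver's:

    #include <stdbool.h>
    #include <stdint.h>

    #define F_AVAIL (1u << 7)   /* mirrors ZXDH_VRING_PACKED_DESC_F_AVAIL */
    #define F_USED  (1u << 15)  /* mirrors ZXDH_VRING_PACKED_DESC_F_USED */

    /* A packed-ring descriptor has been consumed by the device when its
     * AVAIL and USED flag bits are equal to each other and to the driver's
     * current wrap counter; both bits flip together on each ring wrap. */
    static bool
    desc_is_used(uint16_t flags, bool wrap_counter)
    {
        bool avail = !!(flags & F_AVAIL);
        bool used = !!(flags & F_USED);

        return avail == used && used == wrap_counter;
    }

This is also why zxdh_queue_rxvq_flush() toggles vq_packed.used_wrap_counter each time vq_used_cons_idx wraps past vq_nentries.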
* [PATCH v3 07/15] net/zxdh: provided dev simple tx implementations 2024-12-17 11:41 ` [PATCH v3 00/15] net/zxdh: updated net zxdh driver Junlong Wang ` (5 preceding siblings ...) 2024-12-17 11:41 ` [PATCH v3 06/15] net/zxdh: dev start/stop ops implementations Junlong Wang @ 2024-12-17 11:41 ` Junlong Wang 2024-12-17 11:41 ` [PATCH v3 08/15] net/zxdh: provided dev simple rx implementations Junlong Wang ` (7 subsequent siblings) 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-17 11:41 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 18451 bytes --] provided dev simple tx implementations. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/meson.build | 1 + drivers/net/zxdh/zxdh_ethdev.c | 20 ++ drivers/net/zxdh/zxdh_queue.h | 26 ++- drivers/net/zxdh/zxdh_rxtx.c | 396 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_rxtx.h | 4 + 5 files changed, 446 insertions(+), 1 deletion(-) create mode 100644 drivers/net/zxdh/zxdh_rxtx.c diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build index 5b3af87c5b..20b2cf484a 100644 --- a/drivers/net/zxdh/meson.build +++ b/drivers/net/zxdh/meson.build @@ -21,4 +21,5 @@ sources = files( 'zxdh_queue.c', 'zxdh_np.c', 'zxdh_tables.c', + 'zxdh_rxtx.c', ) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 59ee942bdd..14939cdb10 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -15,6 +15,7 @@ #include "zxdh_queue.h" #include "zxdh_np.h" #include "zxdh_tables.h" +#include "zxdh_rxtx.h" struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; struct zxdh_shared_data *zxdh_shared_data; @@ -951,6 +952,24 @@ zxdh_dev_close(struct rte_eth_dev *dev) return ret; } +static int32_t +zxdh_set_rxtx_funcs(struct rte_eth_dev *eth_dev) +{ + struct zxdh_hw *hw = eth_dev->data->dev_private; + + if (!zxdh_pci_packed_queue(hw)) { + PMD_DRV_LOG(ERR, "port %u does not support packed queue", eth_dev->data->port_id); + return -1; + } + if (!zxdh_pci_with_feature(hw, ZXDH_NET_F_MRG_RXBUF)) { + PMD_DRV_LOG(ERR, "port %u does not support rx mergeable", eth_dev->data->port_id); + return -1; + } + eth_dev->tx_pkt_prepare = zxdh_xmit_pkts_prepare; + eth_dev->tx_pkt_burst = &zxdh_xmit_pkts_packed; + return 0; +} + static int zxdh_dev_start(struct rte_eth_dev *dev) { @@ -966,6 +985,7 @@ zxdh_dev_start(struct rte_eth_dev *dev) if (ret < 0) return ret; } + zxdh_set_rxtx_funcs(dev); ret = zxdh_intr_enable(dev); if (ret) { PMD_DRV_LOG(ERR, "interrupt enable failed"); diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h index 6513aec3f0..9343df81ac 100644 --- a/drivers/net/zxdh/zxdh_queue.h +++ b/drivers/net/zxdh/zxdh_queue.h @@ -21,8 +21,15 @@ enum { ZXDH_VTNET_RQ = 0, ZXDH_VTNET_TQ = 1 }; #define ZXDH_TQ_QUEUE_IDX 1 #define ZXDH_MAX_TX_INDIRECT 8 +/* This marks a buffer as continuing via the next field. */ +#define ZXDH_VRING_DESC_F_NEXT 1 + /* This marks a buffer as write-only (otherwise read-only). */ -#define ZXDH_VRING_DESC_F_WRITE 2 +#define ZXDH_VRING_DESC_F_WRITE 2 + +/* This means the buffer contains a list of buffer descriptors.
*/ +#define ZXDH_VRING_DESC_F_INDIRECT 4 + /* This flag means the descriptor was made available by the driver */ #define ZXDH_VRING_PACKED_DESC_F_AVAIL (1 << (7)) #define ZXDH_VRING_PACKED_DESC_F_USED (1 << (15)) @@ -35,11 +42,17 @@ enum { ZXDH_VTNET_RQ = 0, ZXDH_VTNET_TQ = 1 }; #define ZXDH_RING_EVENT_FLAGS_DISABLE 0x1 #define ZXDH_RING_EVENT_FLAGS_DESC 0x2 +#define ZXDH_RING_F_INDIRECT_DESC 28 + #define ZXDH_VQ_RING_DESC_CHAIN_END 32768 #define ZXDH_QUEUE_DEPTH 1024 #define ZXDH_RQ_QUEUE_IDX 0 #define ZXDH_TQ_QUEUE_IDX 1 +#define ZXDH_TYPE_HDR_SIZE sizeof(struct zxdh_type_hdr) +#define ZXDH_PI_HDR_SIZE sizeof(struct zxdh_pi_hdr) +#define ZXDH_DL_NET_HDR_SIZE sizeof(struct zxdh_net_hdr_dl) +#define ZXDH_UL_NET_HDR_SIZE sizeof(struct zxdh_net_hdr_ul) /* * ring descriptors: 16 bytes. @@ -355,6 +368,17 @@ static inline void zxdh_queue_notify(struct zxdh_virtqueue *vq) ZXDH_VTPCI_OPS(vq->hw)->notify_queue(vq->hw, vq); } +static inline int32_t +zxdh_queue_kick_prepare_packed(struct zxdh_virtqueue *vq) +{ + uint16_t flags = 0; + + zxdh_mb(vq->hw->weak_barriers); + flags = vq->vq_packed.ring.device->desc_event_flags; + + return (flags != ZXDH_RING_EVENT_FLAGS_DISABLE); +} + struct rte_mbuf *zxdh_queue_detach_unused(struct zxdh_virtqueue *vq); int32_t zxdh_free_queues(struct rte_eth_dev *dev); int32_t zxdh_get_queue_type(uint16_t vtpci_queue_idx); diff --git a/drivers/net/zxdh/zxdh_rxtx.c b/drivers/net/zxdh/zxdh_rxtx.c new file mode 100644 index 0000000000..81c387b8eb --- /dev/null +++ b/drivers/net/zxdh/zxdh_rxtx.c @@ -0,0 +1,396 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#include <stdint.h> +#include <stdalign.h> + +#include <rte_net.h> + +#include "zxdh_logs.h" +#include "zxdh_pci.h" +#include "zxdh_queue.h" + +#define ZXDH_PKT_FORM_CPU 0x20 /* 1-cpu 0-np */ +#define ZXDH_NO_IP_FRAGMENT 0x2000 /* ip fragment flag */ +#define ZXDH_NO_IPID_UPDATE 0x4000 /* ipid update flag */ + +#define ZXDH_PI_L3TYPE_IP 0x00 +#define ZXDH_PI_L3TYPE_IPV6 0x40 +#define ZXDH_PI_L3TYPE_NOIP 0x80 +#define ZXDH_PI_L3TYPE_RSV 0xC0 +#define ZXDH_PI_L3TYPE_MASK 0xC0 + +#define ZXDH_PCODE_MASK 0x1F +#define ZXDH_PCODE_IP_PKT_TYPE 0x01 +#define ZXDH_PCODE_TCP_PKT_TYPE 0x02 +#define ZXDH_PCODE_UDP_PKT_TYPE 0x03 +#define ZXDH_PCODE_NO_IP_PKT_TYPE 0x09 +#define ZXDH_PCODE_NO_REASSMBLE_TCP_PKT_TYPE 0x0C + +#define ZXDH_TX_MAX_SEGS 31 +#define ZXDH_RX_MAX_SEGS 31 + +static void +zxdh_xmit_cleanup_inorder_packed(struct zxdh_virtqueue *vq, int32_t num) +{ + uint16_t used_idx = 0; + uint16_t id = 0; + uint16_t curr_id = 0; + uint16_t free_cnt = 0; + uint16_t size = vq->vq_nentries; + struct zxdh_vring_packed_desc *desc = vq->vq_packed.ring.desc; + struct zxdh_vq_desc_extra *dxp = NULL; + + used_idx = vq->vq_used_cons_idx; + /* desc_is_used has a load-acquire or rte_io_rmb inside + * and wait for used desc in virtqueue. 
+ */ + while (num > 0 && zxdh_desc_used(&desc[used_idx], vq)) { + id = desc[used_idx].id; + do { + curr_id = used_idx; + dxp = &vq->vq_descx[used_idx]; + used_idx += dxp->ndescs; + free_cnt += dxp->ndescs; + num -= dxp->ndescs; + if (used_idx >= size) { + used_idx -= size; + vq->vq_packed.used_wrap_counter ^= 1; + } + if (dxp->cookie != NULL) { + rte_pktmbuf_free(dxp->cookie); + dxp->cookie = NULL; + } + } while (curr_id != id); + } + vq->vq_used_cons_idx = used_idx; + vq->vq_free_cnt += free_cnt; +} + +static void +zxdh_ring_free_id_packed(struct zxdh_virtqueue *vq, uint16_t id) +{ + struct zxdh_vq_desc_extra *dxp = NULL; + + dxp = &vq->vq_descx[id]; + vq->vq_free_cnt += dxp->ndescs; + + if (vq->vq_desc_tail_idx == ZXDH_VQ_RING_DESC_CHAIN_END) + vq->vq_desc_head_idx = id; + else + vq->vq_descx[vq->vq_desc_tail_idx].next = id; + + vq->vq_desc_tail_idx = id; + dxp->next = ZXDH_VQ_RING_DESC_CHAIN_END; +} + +static void +zxdh_xmit_cleanup_normal_packed(struct zxdh_virtqueue *vq, int32_t num) +{ + uint16_t used_idx = 0; + uint16_t id = 0; + uint16_t size = vq->vq_nentries; + struct zxdh_vring_packed_desc *desc = vq->vq_packed.ring.desc; + struct zxdh_vq_desc_extra *dxp = NULL; + + used_idx = vq->vq_used_cons_idx; + /* desc_is_used has a load-acquire or rte_io_rmb inside + * and wait for used desc in virtqueue. + */ + while (num-- && zxdh_desc_used(&desc[used_idx], vq)) { + id = desc[used_idx].id; + dxp = &vq->vq_descx[id]; + vq->vq_used_cons_idx += dxp->ndescs; + if (vq->vq_used_cons_idx >= size) { + vq->vq_used_cons_idx -= size; + vq->vq_packed.used_wrap_counter ^= 1; + } + zxdh_ring_free_id_packed(vq, id); + if (dxp->cookie != NULL) { + rte_pktmbuf_free(dxp->cookie); + dxp->cookie = NULL; + } + used_idx = vq->vq_used_cons_idx; + } +} + +static void +zxdh_xmit_cleanup_packed(struct zxdh_virtqueue *vq, int32_t num, int32_t in_order) +{ + if (in_order) + zxdh_xmit_cleanup_inorder_packed(vq, num); + else + zxdh_xmit_cleanup_normal_packed(vq, num); +} + +static uint8_t +zxdh_xmit_get_ptype(struct rte_mbuf *m) +{ + uint8_t pcode = ZXDH_PCODE_NO_IP_PKT_TYPE; + uint8_t l3_ptype = ZXDH_PI_L3TYPE_NOIP; + + if ((m->packet_type & RTE_PTYPE_INNER_L3_MASK) == RTE_PTYPE_INNER_L3_IPV4 || + ((!(m->packet_type & RTE_PTYPE_TUNNEL_MASK)) && + (m->packet_type & RTE_PTYPE_L3_MASK) == RTE_PTYPE_L3_IPV4)) { + l3_ptype = ZXDH_PI_L3TYPE_IP; + pcode = ZXDH_PCODE_IP_PKT_TYPE; + } else if ((m->packet_type & RTE_PTYPE_INNER_L3_MASK) == RTE_PTYPE_INNER_L3_IPV6 || + ((!(m->packet_type & RTE_PTYPE_TUNNEL_MASK)) && + (m->packet_type & RTE_PTYPE_L3_MASK) == RTE_PTYPE_L3_IPV6)) { + l3_ptype = ZXDH_PI_L3TYPE_IPV6; + pcode = ZXDH_PCODE_IP_PKT_TYPE; + } else { + goto end; + } + + if ((m->packet_type & RTE_PTYPE_INNER_L4_MASK) == RTE_PTYPE_INNER_L4_TCP || + ((!(m->packet_type & RTE_PTYPE_TUNNEL_MASK)) && + (m->packet_type & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_TCP)) + pcode = ZXDH_PCODE_TCP_PKT_TYPE; + else if ((m->packet_type & RTE_PTYPE_INNER_L4_MASK) == RTE_PTYPE_INNER_L4_UDP || + ((!(m->packet_type & RTE_PTYPE_TUNNEL_MASK)) && + (m->packet_type & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_UDP)) + pcode = ZXDH_PCODE_UDP_PKT_TYPE; + +end: + return l3_ptype | ZXDH_PKT_FORM_CPU | pcode; +} + +static void zxdh_xmit_fill_net_hdr(struct rte_mbuf *cookie, + struct zxdh_net_hdr_dl *hdr) +{ + uint16_t pkt_flag_lw16 = ZXDH_NO_IPID_UPDATE; + uint16_t l3_offset; + uint32_t ol_flag = 0; + + hdr->pi_hdr.pkt_flag_lw16 = rte_cpu_to_be_16(pkt_flag_lw16); + + hdr->pi_hdr.pkt_type = zxdh_xmit_get_ptype(cookie); + l3_offset = ZXDH_DL_NET_HDR_SIZE + cookie->outer_l2_len + + cookie->outer_l3_len + cookie->l2_len; + hdr->pi_hdr.l3_offset = rte_cpu_to_be_16(l3_offset); + hdr->pi_hdr.l4_offset = rte_cpu_to_be_16(l3_offset + cookie->l3_len); + + hdr->pd_hdr.ol_flag = rte_cpu_to_be_32(ol_flag); +} + +static inline void zxdh_enqueue_xmit_packed_fast(struct zxdh_virtnet_tx *txvq, + struct rte_mbuf *cookie, int32_t in_order) +{ + struct zxdh_virtqueue *vq = txvq->vq; + uint16_t id = in_order ? vq->vq_avail_idx : vq->vq_desc_head_idx; + struct zxdh_vq_desc_extra *dxp = &vq->vq_descx[id]; + uint16_t flags = vq->vq_packed.cached_flags; + struct zxdh_net_hdr_dl *hdr = NULL; + + dxp->ndescs = 1; + dxp->cookie = cookie; + hdr = rte_pktmbuf_mtod_offset(cookie, struct zxdh_net_hdr_dl *, -ZXDH_DL_NET_HDR_SIZE); + zxdh_xmit_fill_net_hdr(cookie, hdr); + + uint16_t idx = vq->vq_avail_idx; + struct zxdh_vring_packed_desc *dp = &vq->vq_packed.ring.desc[idx]; + + dp->addr = rte_pktmbuf_iova(cookie) - ZXDH_DL_NET_HDR_SIZE; + dp->len = cookie->data_len + ZXDH_DL_NET_HDR_SIZE; + dp->id = id; + if (++vq->vq_avail_idx >= vq->vq_nentries) { + vq->vq_avail_idx -= vq->vq_nentries; + vq->vq_packed.cached_flags ^= ZXDH_VRING_PACKED_DESC_F_AVAIL_USED; + } + vq->vq_free_cnt--; + if (!in_order) { + vq->vq_desc_head_idx = dxp->next; + if (vq->vq_desc_head_idx == ZXDH_VQ_RING_DESC_CHAIN_END) + vq->vq_desc_tail_idx = ZXDH_VQ_RING_DESC_CHAIN_END; + } + zxdh_queue_store_flags_packed(dp, flags, vq->hw->weak_barriers); +} + +static inline void zxdh_enqueue_xmit_packed(struct zxdh_virtnet_tx *txvq, + struct rte_mbuf *cookie, + uint16_t needed, + int32_t use_indirect, + int32_t in_order) +{ + struct zxdh_tx_region *txr = txvq->zxdh_net_hdr_mz->addr; + struct zxdh_virtqueue *vq = txvq->vq; + struct zxdh_vring_packed_desc *start_dp = vq->vq_packed.ring.desc; + void *hdr = NULL; + uint16_t head_idx = vq->vq_avail_idx; + uint16_t idx = head_idx; + uint16_t prev = head_idx; + uint16_t head_flags = cookie->next ? ZXDH_VRING_DESC_F_NEXT : 0; + uint16_t seg_num = cookie->nb_segs; + uint16_t id = in_order ? vq->vq_avail_idx : vq->vq_desc_head_idx; + struct zxdh_vring_packed_desc *head_dp = &vq->vq_packed.ring.desc[idx]; + struct zxdh_vq_desc_extra *dxp = &vq->vq_descx[id]; + + dxp->ndescs = needed; + dxp->cookie = cookie; + head_flags |= vq->vq_packed.cached_flags; + /* if offload disabled, it is not zeroed below, do it now */ + + if (use_indirect) { + /** + * setup tx ring slot to point to indirect + * descriptor list stored in reserved region. + * the first slot in indirect ring is already + * preset to point to the header in reserved region + **/ + start_dp[idx].addr = + txvq->zxdh_net_hdr_mem + RTE_PTR_DIFF(&txr[idx].tx_packed_indir, txr); + start_dp[idx].len = (seg_num + 1) * sizeof(struct zxdh_vring_packed_desc); + /* Packed descriptor id needs to be restored when inorder. */ + if (in_order) + start_dp[idx].id = idx; + + /* reset flags for indirect desc */ + head_flags = ZXDH_VRING_DESC_F_INDIRECT; + head_flags |= vq->vq_packed.cached_flags; + hdr = (void *)&txr[idx].tx_hdr; + /* loop below will fill in rest of the indirect elements */ + start_dp = txr[idx].tx_packed_indir; + start_dp->len = ZXDH_DL_NET_HDR_SIZE; /* update actual net or type hdr size */ + idx = 1; + } else { + /* setup first tx ring slot to point to header stored in reserved region.
*/ + start_dp[idx].addr = txvq->zxdh_net_hdr_mem + RTE_PTR_DIFF(&txr[idx].tx_hdr, txr); + start_dp[idx].len = ZXDH_DL_NET_HDR_SIZE; + head_flags |= ZXDH_VRING_DESC_F_NEXT; + hdr = (void *)&txr[idx].tx_hdr; + idx++; + if (idx >= vq->vq_nentries) { + idx -= vq->vq_nentries; + vq->vq_packed.cached_flags ^= ZXDH_VRING_PACKED_DESC_F_AVAIL_USED; + } + } + zxdh_xmit_fill_net_hdr(cookie, (struct zxdh_net_hdr_dl *)hdr); + + do { + start_dp[idx].addr = rte_pktmbuf_iova(cookie); + start_dp[idx].len = cookie->data_len; + if (likely(idx != head_idx)) { + uint16_t flags = cookie->next ? ZXDH_VRING_DESC_F_NEXT : 0; + flags |= vq->vq_packed.cached_flags; + start_dp[idx].flags = flags; + } + prev = idx; + idx++; + if (idx >= vq->vq_nentries) { + idx -= vq->vq_nentries; + vq->vq_packed.cached_flags ^= ZXDH_VRING_PACKED_DESC_F_AVAIL_USED; + } + } while ((cookie = cookie->next) != NULL); + start_dp[prev].id = id; + if (use_indirect) { + idx = head_idx; + if (++idx >= vq->vq_nentries) { + idx -= vq->vq_nentries; + vq->vq_packed.cached_flags ^= ZXDH_VRING_PACKED_DESC_F_AVAIL_USED; + } + } + vq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt - needed); + vq->vq_avail_idx = idx; + if (!in_order) { + vq->vq_desc_head_idx = dxp->next; + if (vq->vq_desc_head_idx == ZXDH_VQ_RING_DESC_CHAIN_END) + vq->vq_desc_tail_idx = ZXDH_VQ_RING_DESC_CHAIN_END; + } + zxdh_queue_store_flags_packed(head_dp, head_flags, vq->hw->weak_barriers); +} + +uint16_t +zxdh_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) +{ + struct zxdh_virtnet_tx *txvq = tx_queue; + struct zxdh_virtqueue *vq = txvq->vq; + struct zxdh_hw *hw = vq->hw; + uint16_t nb_tx = 0; + + bool in_order = zxdh_pci_with_feature(hw, ZXDH_F_IN_ORDER); + + if (nb_pkts > vq->vq_free_cnt) + zxdh_xmit_cleanup_packed(vq, nb_pkts - vq->vq_free_cnt, in_order); + for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) { + struct rte_mbuf *txm = tx_pkts[nb_tx]; + int32_t can_push = 0; + int32_t use_indirect = 0; + int32_t slots = 0; + int32_t need = 0; + + /* optimize ring usage */ + if ((zxdh_pci_with_feature(hw, ZXDH_F_ANY_LAYOUT) || + zxdh_pci_with_feature(hw, ZXDH_F_VERSION_1)) && + rte_mbuf_refcnt_read(txm) == 1 && + RTE_MBUF_DIRECT(txm) && + txm->nb_segs == 1 && + rte_pktmbuf_headroom(txm) >= ZXDH_DL_NET_HDR_SIZE && + rte_is_aligned(rte_pktmbuf_mtod(txm, char *), + alignof(struct zxdh_net_hdr_dl))) { + can_push = 1; + } else if (zxdh_pci_with_feature(hw, ZXDH_RING_F_INDIRECT_DESC) && + txm->nb_segs < ZXDH_MAX_TX_INDIRECT) { + use_indirect = 1; + } + /** + * How many main ring entries are needed to this Tx? + * indirect => 1 + * any_layout => number of segments + * default => number of segments + 1 + **/ + slots = use_indirect ? 1 : (txm->nb_segs + !can_push); + need = slots - vq->vq_free_cnt; + /* Positive value indicates it need free vring descriptors */ + if (unlikely(need > 0)) { + zxdh_xmit_cleanup_packed(vq, need, in_order); + need = slots - vq->vq_free_cnt; + if (unlikely(need > 0)) { + PMD_TX_LOG(ERR, "port[ep:%d, pf:%d, vf:%d, vfid:%d, pcieid:%d], queue:%d[pch:%d]. 
No free tx desc to xmit", + hw->vport.epid, hw->vport.pfid, hw->vport.vfid, + hw->vfid, hw->pcie_id, txvq->queue_id, + hw->channel_context[txvq->queue_id].ph_chno); + break; + } + } + /* Enqueue Packet buffers */ + if (can_push) + zxdh_enqueue_xmit_packed_fast(txvq, txm, in_order); + else + zxdh_enqueue_xmit_packed(txvq, txm, slots, use_indirect, in_order); + } + if (likely(nb_tx)) { + if (unlikely(zxdh_queue_kick_prepare_packed(vq))) { + zxdh_queue_notify(vq); + PMD_TX_LOG(DEBUG, "Notified backend after xmit"); + } + } + return nb_tx; +} + +uint16_t zxdh_xmit_pkts_prepare(void *tx_queue __rte_unused, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts) +{ + uint16_t nb_tx; + + for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) { + struct rte_mbuf *m = tx_pkts[nb_tx]; + int32_t error; + +#ifdef RTE_LIBRTE_ETHDEV_DEBUG + error = rte_validate_tx_offload(m); + if (unlikely(error)) { + rte_errno = -error; + break; + } +#endif + + error = rte_net_intel_cksum_prepare(m); + if (unlikely(error)) { + rte_errno = -error; + break; + } + } + return nb_tx; +} diff --git a/drivers/net/zxdh/zxdh_rxtx.h b/drivers/net/zxdh/zxdh_rxtx.h index 8c7f734805..0a02d319b2 100644 --- a/drivers/net/zxdh/zxdh_rxtx.h +++ b/drivers/net/zxdh/zxdh_rxtx.h @@ -42,4 +42,8 @@ struct __rte_cache_aligned zxdh_virtnet_tx { const struct rte_memzone *mz; /* mem zone to populate TX ring. */ }; +uint16_t zxdh_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); +uint16_t zxdh_xmit_pkts_prepare(void *tx_queue __rte_unused, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts); + #endif /* ZXDH_RXTX_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 45246 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
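For context: the two ops registered in this patch plug into the generic ethdev transmit path. A hedged sketch of the application-side loop (port and queue ids are assumptions, not from the patch):

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    static uint16_t
    send_burst(uint16_t port_id, struct rte_mbuf **pkts, uint16_t n)
    {
        /* rte_eth_tx_prepare() invokes zxdh_xmit_pkts_prepare(), which runs
         * rte_net_intel_cksum_prepare() on each mbuf before transmission */
        uint16_t nb_prep = rte_eth_tx_prepare(port_id, 0, pkts, n);

        /* rte_eth_tx_burst() dispatches to zxdh_xmit_pkts_packed(); it may
         * send fewer than requested when the ring is short on free slots */
        return rte_eth_tx_burst(port_id, 0, pkts, nb_prep);
    }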
* [PATCH v3 08/15] net/zxdh: provided dev simple rx implementations 2024-12-17 11:41 ` [PATCH v3 00/15] net/zxdh: updated net zxdh driver Junlong Wang ` (6 preceding siblings ...) 2024-12-17 11:41 ` [PATCH v3 07/15] net/zxdh: provided dev simple tx implementations Junlong Wang @ 2024-12-17 11:41 ` Junlong Wang 2024-12-17 11:41 ` [PATCH v3 09/15] net/zxdh: link info update, set link up/down Junlong Wang ` (6 subsequent siblings) 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-17 11:41 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 11246 bytes --] provided dev simple rx implementations. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- doc/guides/nics/features/zxdh.ini | 1 + doc/guides/nics/zxdh.rst | 1 + drivers/net/zxdh/zxdh_ethdev.c | 2 + drivers/net/zxdh/zxdh_rxtx.c | 313 ++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_rxtx.h | 2 + 5 files changed, 319 insertions(+) diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index 7b72be5f25..bb44e93fad 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -9,3 +9,4 @@ x86-64 = Y ARMv8 = Y SR-IOV = Y Multiprocess aware = Y +Scattered Rx = Y diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index eb970a888f..f42db9c1f1 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -20,6 +20,7 @@ Features of the ZXDH PMD are: - Multi arch support: x86_64, ARMv8. - Multiple queues for TX and RX - SR-IOV VF +- Scattered and gather for TX and RX Driver compilation and testing diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 14939cdb10..0d63129d8d 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -967,6 +967,8 @@ zxdh_set_rxtx_funcs(struct rte_eth_dev *eth_dev) } eth_dev->tx_pkt_prepare = zxdh_xmit_pkts_prepare; eth_dev->tx_pkt_burst = &zxdh_xmit_pkts_packed; + eth_dev->rx_pkt_burst = &zxdh_recv_pkts_packed; + return 0; } diff --git a/drivers/net/zxdh/zxdh_rxtx.c b/drivers/net/zxdh/zxdh_rxtx.c index 81c387b8eb..00e926bab9 100644 --- a/drivers/net/zxdh/zxdh_rxtx.c +++ b/drivers/net/zxdh/zxdh_rxtx.c @@ -31,6 +31,93 @@ #define ZXDH_TX_MAX_SEGS 31 #define ZXDH_RX_MAX_SEGS 31 +uint32_t zxdh_outer_l2_type[16] = { + 0, + RTE_PTYPE_L2_ETHER, + RTE_PTYPE_L2_ETHER_TIMESYNC, + RTE_PTYPE_L2_ETHER_ARP, + RTE_PTYPE_L2_ETHER_LLDP, + RTE_PTYPE_L2_ETHER_NSH, + RTE_PTYPE_L2_ETHER_VLAN, + RTE_PTYPE_L2_ETHER_QINQ, + RTE_PTYPE_L2_ETHER_PPPOE, + RTE_PTYPE_L2_ETHER_FCOE, + RTE_PTYPE_L2_ETHER_MPLS, +}; + +uint32_t zxdh_outer_l3_type[16] = { + 0, + RTE_PTYPE_L3_IPV4, + RTE_PTYPE_L3_IPV4_EXT, + RTE_PTYPE_L3_IPV6, + RTE_PTYPE_L3_IPV4_EXT_UNKNOWN, + RTE_PTYPE_L3_IPV6_EXT, + RTE_PTYPE_L3_IPV6_EXT_UNKNOWN, +}; + +uint32_t zxdh_outer_l4_type[16] = { + 0, + RTE_PTYPE_L4_TCP, + RTE_PTYPE_L4_UDP, + RTE_PTYPE_L4_FRAG, + RTE_PTYPE_L4_SCTP, + RTE_PTYPE_L4_ICMP, + RTE_PTYPE_L4_NONFRAG, + RTE_PTYPE_L4_IGMP, +}; + +uint32_t zxdh_tunnel_type[16] = { + 0, + RTE_PTYPE_TUNNEL_IP, + RTE_PTYPE_TUNNEL_GRE, + RTE_PTYPE_TUNNEL_VXLAN, + RTE_PTYPE_TUNNEL_NVGRE, + RTE_PTYPE_TUNNEL_GENEVE, + RTE_PTYPE_TUNNEL_GRENAT, + RTE_PTYPE_TUNNEL_GTPC, + RTE_PTYPE_TUNNEL_GTPU, + RTE_PTYPE_TUNNEL_ESP, + RTE_PTYPE_TUNNEL_L2TP, + RTE_PTYPE_TUNNEL_VXLAN_GPE, + RTE_PTYPE_TUNNEL_MPLS_IN_GRE, + RTE_PTYPE_TUNNEL_MPLS_IN_UDP, +}; + +uint32_t zxdh_inner_l2_type[16] = { + 0, + RTE_PTYPE_INNER_L2_ETHER, + 0, + 0, + 0, + 0, + RTE_PTYPE_INNER_L2_ETHER_VLAN, + 
RTE_PTYPE_INNER_L2_ETHER_QINQ, + 0, + 0, + 0, +}; + +uint32_t zxdh_inner_l3_type[16] = { + 0, + RTE_PTYPE_INNER_L3_IPV4, + RTE_PTYPE_INNER_L3_IPV4_EXT, + RTE_PTYPE_INNER_L3_IPV6, + RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN, + RTE_PTYPE_INNER_L3_IPV6_EXT, + RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN, +}; + +uint32_t zxdh_inner_l4_type[16] = { + 0, + RTE_PTYPE_INNER_L4_TCP, + RTE_PTYPE_INNER_L4_UDP, + RTE_PTYPE_INNER_L4_FRAG, + RTE_PTYPE_INNER_L4_SCTP, + RTE_PTYPE_INNER_L4_ICMP, + 0, + 0, +}; + static void zxdh_xmit_cleanup_inorder_packed(struct zxdh_virtqueue *vq, int32_t num) { @@ -394,3 +481,229 @@ uint16_t zxdh_xmit_pkts_prepare(void *tx_queue __rte_unused, struct rte_mbuf **t } return nb_tx; } + +static uint16_t zxdh_dequeue_burst_rx_packed(struct zxdh_virtqueue *vq, + struct rte_mbuf **rx_pkts, + uint32_t *len, + uint16_t num) +{ + struct zxdh_vring_packed_desc *desc = vq->vq_packed.ring.desc; + struct rte_mbuf *cookie = NULL; + uint16_t i, used_idx; + uint16_t id; + + for (i = 0; i < num; i++) { + used_idx = vq->vq_used_cons_idx; + /** + * desc_is_used has a load-acquire or rte_io_rmb inside + * and wait for used desc in virtqueue. + */ + if (!zxdh_desc_used(&desc[used_idx], vq)) + return i; + len[i] = desc[used_idx].len; + id = desc[used_idx].id; + cookie = (struct rte_mbuf *)vq->vq_descx[id].cookie; + vq->vq_descx[id].cookie = NULL; + if (unlikely(cookie == NULL)) { + PMD_RX_LOG(ERR, + "vring descriptor with no mbuf cookie at %u", vq->vq_used_cons_idx); + break; + } + rx_pkts[i] = cookie; + vq->vq_free_cnt++; + vq->vq_used_cons_idx++; + if (vq->vq_used_cons_idx >= vq->vq_nentries) { + vq->vq_used_cons_idx -= vq->vq_nentries; + vq->vq_packed.used_wrap_counter ^= 1; + } + } + return i; +} + +static int32_t zxdh_rx_update_mbuf(struct rte_mbuf *m, struct zxdh_net_hdr_ul *hdr) +{ + struct zxdh_pd_hdr_ul *pd_hdr = &hdr->pd_hdr; + struct zxdh_pi_hdr *pi_hdr = &hdr->pi_hdr; + uint32_t idx = 0; + + m->pkt_len = rte_be_to_cpu_16(pi_hdr->ul.pkt_len); + + uint16_t pkt_type_outer = rte_be_to_cpu_16(pd_hdr->pkt_type_out); + + idx = (pkt_type_outer >> 12) & 0xF; + m->packet_type = zxdh_outer_l2_type[idx]; + idx = (pkt_type_outer >> 8) & 0xF; + m->packet_type |= zxdh_outer_l3_type[idx]; + idx = (pkt_type_outer >> 4) & 0xF; + m->packet_type |= zxdh_outer_l4_type[idx]; + idx = pkt_type_outer & 0xF; + m->packet_type |= zxdh_tunnel_type[idx]; + + uint16_t pkt_type_inner = rte_be_to_cpu_16(pd_hdr->pkt_type_in); + + if (pkt_type_inner) { + idx = (pkt_type_inner >> 12) & 0xF; + m->packet_type |= zxdh_inner_l2_type[idx]; + idx = (pkt_type_inner >> 8) & 0xF; + m->packet_type |= zxdh_inner_l3_type[idx]; + idx = (pkt_type_inner >> 4) & 0xF; + m->packet_type |= zxdh_inner_l4_type[idx]; + } + + return 0; +} + +static inline void zxdh_discard_rxbuf(struct zxdh_virtqueue *vq, struct rte_mbuf *m) +{ + int32_t error = 0; + /* + * Requeue the discarded mbuf. This should always be + * successful since it was just dequeued. 
+ */ + error = zxdh_enqueue_recv_refill_packed(vq, &m, 1); + if (unlikely(error)) { + PMD_RX_LOG(ERR, "cannot enqueue discarded mbuf"); + rte_pktmbuf_free(m); + } +} + +uint16_t zxdh_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts) +{ + struct zxdh_virtnet_rx *rxvq = rx_queue; + struct zxdh_virtqueue *vq = rxvq->vq; + struct zxdh_hw *hw = vq->hw; + struct rte_eth_dev *dev = hw->eth_dev; + struct rte_mbuf *rxm = NULL; + struct rte_mbuf *prev = NULL; + uint32_t len[ZXDH_MBUF_BURST_SZ] = {0}; + struct rte_mbuf *rcv_pkts[ZXDH_MBUF_BURST_SZ] = {NULL}; + uint32_t nb_enqueued = 0; + uint32_t seg_num = 0; + uint32_t seg_res = 0; + uint16_t hdr_size = 0; + int32_t error = 0; + uint16_t nb_rx = 0; + uint16_t num = nb_pkts; + + if (unlikely(num > ZXDH_MBUF_BURST_SZ)) + num = ZXDH_MBUF_BURST_SZ; + + num = zxdh_dequeue_burst_rx_packed(vq, rcv_pkts, len, num); + uint16_t i; + uint16_t rcvd_pkt_len = 0; + + for (i = 0; i < num; i++) { + rxm = rcv_pkts[i]; + + struct zxdh_net_hdr_ul *header = + (struct zxdh_net_hdr_ul *)((char *)rxm->buf_addr + + RTE_PKTMBUF_HEADROOM); + + seg_num = header->type_hdr.num_buffers; + if (seg_num == 0) { + PMD_RX_LOG(ERR, "dequeued %d pkts, but pkt %d has invalid seg_num %d, forcing it to 1", num, i, seg_num); + seg_num = 1; + } + /* bit[0:6]-pd_len unit:2B */ + uint16_t pd_len = header->type_hdr.pd_len << 1; + /* Private queue only handle type hdr */ + hdr_size = pd_len; + rxm->data_off = RTE_PKTMBUF_HEADROOM + hdr_size; + rxm->nb_segs = seg_num; + rxm->ol_flags = 0; + rxm->vlan_tci = 0; + rcvd_pkt_len = (uint32_t)(len[i] - hdr_size); + rxm->data_len = (uint16_t)(len[i] - hdr_size); + rxm->port = rxvq->port_id; + rx_pkts[nb_rx] = rxm; + prev = rxm; + /* Update rte_mbuf according to pi/pd header */ + if (zxdh_rx_update_mbuf(rxm, header) < 0) { + zxdh_discard_rxbuf(vq, rxm); + continue; + } + seg_res = seg_num - 1; + /* Merge remaining segments */ + while (seg_res != 0 && i < (num - 1)) { + i++; + rxm = rcv_pkts[i]; + rxm->data_off = RTE_PKTMBUF_HEADROOM; + rxm->data_len = (uint16_t)(len[i]); + + rcvd_pkt_len += (uint32_t)(len[i]); + prev->next = rxm; + prev = rxm; + rxm->next = NULL; + seg_res -= 1; + } + + if (!seg_res) { + if (rcvd_pkt_len != rx_pkts[nb_rx]->pkt_len) { + PMD_RX_LOG(ERR, "dropped: rcvd_pkt_len %d does not match pkt_len %d", + rcvd_pkt_len, rx_pkts[nb_rx]->pkt_len); + zxdh_discard_rxbuf(vq, rx_pkts[nb_rx]); + continue; + } + nb_rx++; + } + } + /* Last packet still need merge segments */ + while (seg_res != 0) { + uint16_t rcv_cnt = RTE_MIN((uint16_t)seg_res, ZXDH_MBUF_BURST_SZ); + uint16_t extra_idx = 0; + + rcv_cnt = zxdh_dequeue_burst_rx_packed(vq, rcv_pkts, len, rcv_cnt); + if (unlikely(rcv_cnt == 0)) { + PMD_RX_LOG(ERR, "Not enough segments for packet."); + rte_pktmbuf_free(rx_pkts[nb_rx]); + break; + } + while (extra_idx < rcv_cnt) { + rxm = rcv_pkts[extra_idx]; + rxm->data_off = RTE_PKTMBUF_HEADROOM; + rxm->pkt_len = (uint32_t)(len[extra_idx]); + rxm->data_len = (uint16_t)(len[extra_idx]); + prev->next = rxm; + prev = rxm; + rxm->next = NULL; + rcvd_pkt_len += len[extra_idx]; + extra_idx += 1; + } + seg_res -= rcv_cnt; + if (!seg_res) { + if (rcvd_pkt_len != rx_pkts[nb_rx]->pkt_len) { + PMD_RX_LOG(ERR, "dropped: rcvd_pkt_len %d does not match pkt_len %d", + rcvd_pkt_len, rx_pkts[nb_rx]->pkt_len); + zxdh_discard_rxbuf(vq, rx_pkts[nb_rx]); + continue; + } + nb_rx++; + } + } + + /* Allocate new mbuf for the used descriptor */ + if (likely(!zxdh_queue_full(vq))) { + /* free_cnt may include mrg descs */ + uint16_t free_cnt = vq->vq_free_cnt; + struct rte_mbuf *new_pkts[free_cnt]; + + if 
(!rte_pktmbuf_alloc_bulk(rxvq->mpool, new_pkts, free_cnt)) { + error = zxdh_enqueue_recv_refill_packed(vq, new_pkts, free_cnt); + if (unlikely(error)) { + for (i = 0; i < free_cnt; i++) + rte_pktmbuf_free(new_pkts[i]); + } + nb_enqueued += free_cnt; + } else { + dev->data->rx_mbuf_alloc_failed += free_cnt; + } + } + if (likely(nb_enqueued)) { + if (unlikely(zxdh_queue_kick_prepare_packed(vq))) { + zxdh_queue_notify(vq); + PMD_RX_LOG(DEBUG, "Notified"); + } + } + return nb_rx; +} diff --git a/drivers/net/zxdh/zxdh_rxtx.h b/drivers/net/zxdh/zxdh_rxtx.h index 0a02d319b2..cc0004324a 100644 --- a/drivers/net/zxdh/zxdh_rxtx.h +++ b/drivers/net/zxdh/zxdh_rxtx.h @@ -45,5 +45,7 @@ struct __rte_cache_aligned zxdh_virtnet_tx { uint16_t zxdh_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); uint16_t zxdh_xmit_pkts_prepare(void *tx_queue __rte_unused, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); +uint16_t zxdh_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts); #endif /* ZXDH_RXTX_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 28874 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
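A brief usage sketch to go with the rx path above (port 0, queue 0 and a burst of 32 are assumptions): multi-segment packets come back already chained through mbuf->next, so the application frees only the head mbuf.

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    static void
    poll_rx_once(uint16_t port_id)
    {
        struct rte_mbuf *pkts[32];
        /* dispatches to zxdh_recv_pkts_packed() */
        uint16_t nb_rx = rte_eth_rx_burst(port_id, 0, pkts, 32);

        for (uint16_t i = 0; i < nb_rx; i++) {
            /* pkt_len covers all chained segments of a scattered packet */
            rte_pktmbuf_free(pkts[i]);
        }
    }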
* [PATCH v3 09/15] net/zxdh: link info update, set link up/down 2024-12-17 11:41 ` [PATCH v3 00/15] net/zxdh: updated net zxdh driver Junlong Wang ` (7 preceding siblings ...) 2024-12-17 11:41 ` [PATCH v3 08/15] net/zxdh: provided dev simple rx implementations Junlong Wang @ 2024-12-17 11:41 ` Junlong Wang 2024-12-17 11:41 ` [PATCH v3 10/15] net/zxdh: mac set/add/remove ops implementations Junlong Wang ` (5 subsequent siblings) 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-17 11:41 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 23805 bytes --] provided link info update, set link up/down, and link interrupt handling. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- doc/guides/nics/features/zxdh.ini | 2 + doc/guides/nics/zxdh.rst | 3 + drivers/net/zxdh/meson.build | 1 + drivers/net/zxdh/zxdh_ethdev.c | 14 ++- drivers/net/zxdh/zxdh_ethdev.h | 2 + drivers/net/zxdh/zxdh_ethdev_ops.c | 166 ++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev_ops.h | 14 +++ drivers/net/zxdh/zxdh_msg.c | 57 ++++++++++ drivers/net/zxdh/zxdh_msg.h | 40 +++++++ drivers/net/zxdh/zxdh_np.c | 172 ++++++++++++++++++++++++++++- drivers/net/zxdh/zxdh_np.h | 20 ++++ drivers/net/zxdh/zxdh_tables.c | 15 +++ drivers/net/zxdh/zxdh_tables.h | 6 +- 13 files changed, 503 insertions(+), 9 deletions(-) create mode 100644 drivers/net/zxdh/zxdh_ethdev_ops.c create mode 100644 drivers/net/zxdh/zxdh_ethdev_ops.h diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index bb44e93fad..7da3aaced1 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -10,3 +10,5 @@ ARMv8 = Y SR-IOV = Y Multiprocess aware = Y Scattered Rx = Y +Link status = Y +Link status event = Y diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index f42db9c1f1..fdbc3b3923 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -21,6 +21,9 @@ Features of the ZXDH PMD are: - Multiple queues for TX and RX - SR-IOV VF - Scattered and gather for TX and RX +- Link Auto-negotiation +- Link state information +- Set Link down or up Driver compilation and testing diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build index 20b2cf484a..48f8f5e1ee 100644 --- a/drivers/net/zxdh/meson.build +++ b/drivers/net/zxdh/meson.build @@ -22,4 +22,5 @@ sources = files( 'zxdh_np.c', 'zxdh_tables.c', 'zxdh_rxtx.c', + 'zxdh_ethdev_ops.c', ) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 0d63129d8d..d3876ec9b3 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -16,6 +16,7 @@ #include "zxdh_np.h" #include "zxdh_tables.h" #include "zxdh_rxtx.h" +#include "zxdh_ethdev_ops.h" struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; struct zxdh_shared_data *zxdh_shared_data; @@ -105,12 +106,18 @@ static void zxdh_devconf_intr_handler(void *param) { struct rte_eth_dev *dev = param; + struct zxdh_hw *hw = dev->data->dev_private; + + uint8_t isr = zxdh_pci_isr(hw); if (zxdh_intr_unmask(dev) < 0) PMD_DRV_LOG(ERR, "interrupt enable failed"); + if (isr & ZXDH_PCI_ISR_CONFIG) { + if (zxdh_dev_link_update(dev, 0) == 0) + rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_LSC, NULL); + } } - /* Interrupt handler triggered by NIC for handling specific interrupt.
*/ static void zxdh_fromriscv_intr_handler(void *param) @@ -1006,6 +1013,8 @@ zxdh_dev_start(struct rte_eth_dev *dev) vq = hw->vqs[logic_qidx]; zxdh_queue_notify(vq); } + zxdh_dev_set_link_up(dev); + return 0; } @@ -1020,6 +1029,9 @@ static const struct eth_dev_ops zxdh_eth_dev_ops = { .tx_queue_setup = zxdh_dev_tx_queue_setup, .rx_queue_intr_enable = zxdh_dev_rx_queue_intr_enable, .rx_queue_intr_disable = zxdh_dev_rx_queue_intr_disable, + .link_update = zxdh_dev_link_update, + .dev_set_link_up = zxdh_dev_set_link_up, + .dev_set_link_down = zxdh_dev_set_link_down, }; static int32_t diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index b1f398b28e..c0b719062c 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -72,6 +72,7 @@ struct zxdh_hw { uint64_t guest_features; uint32_t max_queue_pairs; uint32_t speed; + uint32_t speed_mode; uint32_t notify_off_multiplier; uint16_t *notify_base; uint16_t pcie_id; @@ -93,6 +94,7 @@ struct zxdh_hw { uint8_t panel_id; uint8_t has_tx_offload; uint8_t has_rx_offload; + uint8_t admin_status; }; struct zxdh_dtb_shared_data { diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.c b/drivers/net/zxdh/zxdh_ethdev_ops.c new file mode 100644 index 0000000000..5a0af98cc0 --- /dev/null +++ b/drivers/net/zxdh/zxdh_ethdev_ops.c @@ -0,0 +1,166 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#include "zxdh_ethdev.h" +#include "zxdh_pci.h" +#include "zxdh_msg.h" +#include "zxdh_ethdev_ops.h" +#include "zxdh_tables.h" +#include "zxdh_logs.h" + +static int32_t zxdh_config_port_status(struct rte_eth_dev *dev, uint16_t link_status) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_port_attr_table port_attr = {0}; + struct zxdh_msg_info msg_info = {0}; + int32_t ret = 0; + + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "get port_attr failed"); + return -1; + } + port_attr.is_up = link_status; + + ret = zxdh_set_port_attr(hw->vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "write port_attr failed"); + return -1; + } + } else { + struct zxdh_port_attr_set_msg *port_attr_msg = &msg_info.data.port_attr_msg; + + zxdh_msg_head_build(hw, ZXDH_PORT_ATTRS_SET, &msg_info); + port_attr_msg->mode = ZXDH_PORT_ATTR_IS_UP_FLAG; + port_attr_msg->value = link_status; + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_PORT_ATTR_IS_UP_FLAG); + return ret; + } + } + return ret; +} + +static int32_t +zxdh_link_info_get(struct rte_eth_dev *dev, struct rte_eth_link *link) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_info msg_info = {0}; + struct zxdh_msg_reply_info reply_info = {0}; + uint16_t status = 0; + int32_t ret = 0; + + if (zxdh_pci_with_feature(hw, ZXDH_NET_F_STATUS)) + zxdh_pci_read_dev_config(hw, offsetof(struct zxdh_net_config, status), + &status, sizeof(status)); + + link->link_status = status; + + if (status == RTE_ETH_LINK_DOWN) { + link->link_speed = RTE_ETH_SPEED_NUM_UNKNOWN; + link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX; + } else { + zxdh_agent_msg_build(hw, ZXDH_MAC_LINK_GET, &msg_info); + + ret = zxdh_send_msg_to_riscv(dev, &msg_info, sizeof(struct zxdh_msg_info), + &reply_info, sizeof(struct zxdh_msg_reply_info), + ZXDH_BAR_MODULE_MAC); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_MAC_LINK_GET); + return -1; + }
+ link->link_speed = reply_info.reply_body.link_msg.speed; + hw->speed_mode = reply_info.reply_body.link_msg.speed_modes; + if ((reply_info.reply_body.link_msg.duplex & RTE_ETH_LINK_FULL_DUPLEX) == + RTE_ETH_LINK_FULL_DUPLEX) + link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX; + else + link->link_duplex = RTE_ETH_LINK_HALF_DUPLEX; + } + hw->speed = link->link_speed; + + return 0; +} + +static int zxdh_set_link_status(struct rte_eth_dev *dev, uint8_t link_status) +{ + uint16_t curr_link_status = dev->data->dev_link.link_status; + + struct rte_eth_link link; + struct zxdh_hw *hw = dev->data->dev_private; + int32_t ret = 0; + + if (link_status == curr_link_status) { + PMD_DRV_LOG(DEBUG, "curr_link_status %u", curr_link_status); + return 0; + } + + hw->admin_status = link_status; + ret = zxdh_link_info_get(dev, &link); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to get link status from hw"); + return ret; + } + dev->data->dev_link.link_status = hw->admin_status & link.link_status; + + if (dev->data->dev_link.link_status == RTE_ETH_LINK_UP) { + dev->data->dev_link.link_speed = link.link_speed; + dev->data->dev_link.link_duplex = link.link_duplex; + } else { + dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN; + dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX; + } + return zxdh_config_port_status(dev, dev->data->dev_link.link_status); +} + +int zxdh_dev_set_link_up(struct rte_eth_dev *dev) +{ + int ret = zxdh_set_link_status(dev, RTE_ETH_LINK_UP); + + if (ret) + PMD_DRV_LOG(ERR, "Set link up failed, code:%d", ret); + + return ret; +} + +int32_t zxdh_dev_link_update(struct rte_eth_dev *dev, int32_t wait_to_complete __rte_unused) +{ + struct rte_eth_link link; + struct zxdh_hw *hw = dev->data->dev_private; + int32_t ret = 0; + + memset(&link, 0, sizeof(link)); + link.link_duplex = hw->duplex; + link.link_speed = hw->speed; + link.link_autoneg = RTE_ETH_LINK_AUTONEG; + + ret = zxdh_link_info_get(dev, &link); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to get link status from hw"); + return ret; + } + link.link_status &= hw->admin_status; + if (link.link_status == RTE_ETH_LINK_DOWN) + link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN; + + ret = zxdh_config_port_status(dev, link.link_status); + if (ret != 0) { + PMD_DRV_LOG(ERR, "set port attr %d failed.", link.link_status); + return ret; + } + return rte_eth_linkstatus_set(dev, &link); +} + +int zxdh_dev_set_link_down(struct rte_eth_dev *dev) +{ + int ret = zxdh_set_link_status(dev, RTE_ETH_LINK_DOWN); + + if (ret) + PMD_DRV_LOG(ERR, "Set link down failed"); + return ret; +} diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.h b/drivers/net/zxdh/zxdh_ethdev_ops.h new file mode 100644 index 0000000000..c6d6ca56fd --- /dev/null +++ b/drivers/net/zxdh/zxdh_ethdev_ops.h @@ -0,0 +1,14 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#ifndef ZXDH_ETHDEV_OPS_H +#define ZXDH_ETHDEV_OPS_H + +#include "zxdh_ethdev.h" + +int zxdh_dev_set_link_up(struct rte_eth_dev *dev); +int zxdh_dev_set_link_down(struct rte_eth_dev *dev); +int32_t zxdh_dev_link_update(struct rte_eth_dev *dev, int32_t wait_to_complete __rte_unused); + +#endif /* ZXDH_ETHDEV_OPS_H */ diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c index aa2e10fd45..23a7ed2097 100644 --- a/drivers/net/zxdh/zxdh_msg.c +++ b/drivers/net/zxdh/zxdh_msg.c @@ -1134,6 +1134,51 @@ int zxdh_vf_send_msg_to_pf(struct rte_eth_dev *dev, void *msg_req, return 0; } +int32_t zxdh_send_msg_to_riscv(struct rte_eth_dev *dev, void *msg_req, + uint16_t 
msg_req_len, void *reply, uint16_t reply_len, + enum ZXDH_BAR_MODULE_ID module_id) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_pci_bar_msg in = {0}; + struct zxdh_msg_recviver_mem result = {0}; + struct zxdh_msg_reply_info reply_info = {0}; + + if (reply) { + RTE_ASSERT(reply_len < sizeof(struct zxdh_msg_reply_info)); + result.recv_buffer = reply; + result.buffer_len = reply_len; + } else { + result.recv_buffer = &reply_info; + result.buffer_len = sizeof(reply_info); + } + struct zxdh_msg_reply_head *reply_head = + &(((struct zxdh_msg_reply_info *)result.recv_buffer)->reply_head); + struct zxdh_msg_reply_body *reply_body = + &(((struct zxdh_msg_reply_info *)result.recv_buffer)->reply_body); + in.payload_addr = msg_req; + in.payload_len = msg_req_len; + in.virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_CTRLCH_OFFSET); + in.src = hw->is_pf ? ZXDH_MSG_CHAN_END_PF : ZXDH_MSG_CHAN_END_VF; + in.dst = ZXDH_MSG_CHAN_END_RISC; + in.module_id = module_id; + in.src_pcieid = hw->pcie_id; + if (zxdh_bar_chan_sync_msg_send(&in, &result) != ZXDH_BAR_MSG_OK) { + PMD_MSG_LOG(ERR, "Failed to send sync messages or receive response"); + return -1; + } + if (reply_head->flag != ZXDH_MSG_REPS_OK) { + PMD_MSG_LOG(ERR, "vf[%d] get pf reply failed: reply_head flag: 0x%x (0xff is OK), reply len %d", + hw->vport.vfid, reply_head->flag, reply_head->reps_len); + return -1; + } + if (reply_body->flag != ZXDH_REPS_SUCC) { + PMD_MSG_LOG(ERR, "vf[%d] msg processing failed", hw->vfid); + return -1; + } + + return 0; +} + void zxdh_msg_head_build(struct zxdh_hw *hw, enum zxdh_msg_type type, struct zxdh_msg_info *msg_info) { @@ -1144,3 +1189,15 @@ void zxdh_msg_head_build(struct zxdh_hw *hw, enum zxdh_msg_type type, msghead->vf_id = hw->vport.vfid; msghead->pcieid = hw->pcie_id; } + +void zxdh_agent_msg_build(struct zxdh_hw *hw, enum zxdh_agent_msg_type type, + struct zxdh_msg_info *msg_info) +{ + struct zxdh_agent_msg_head *agent_head = &msg_info->agent_msg_head; + + agent_head->msg_type = type; + agent_head->panel_id = hw->panel_id; + agent_head->phyport = hw->phyport; + agent_head->vf_id = hw->vfid; + agent_head->pcie_id = hw->pcie_id; +} diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index 613ca71170..a78075c914 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -164,11 +164,18 @@ enum pciebar_layout_type { ZXDH_URI_MAX, }; +/* riscv msg opcodes */ +enum zxdh_agent_msg_type { + ZXDH_MAC_LINK_GET = 14, +}; + enum zxdh_msg_type { ZXDH_NULL = 0, ZXDH_VF_PORT_INIT = 1, ZXDH_VF_PORT_UNINIT = 2, + ZXDH_PORT_ATTRS_SET = 25, + ZXDH_MSG_TYPE_END, }; @@ -261,6 +268,15 @@ struct zxdh_offset_get_msg { uint16_t type; }; +struct zxdh_link_info_msg { + uint8_t autoneg; + uint8_t link_state; + uint8_t blink_enable; + uint8_t duplex; + uint32_t speed_modes; + uint32_t speed; +} __rte_packed; + struct zxdh_msg_reply_head { uint8_t flag; uint16_t reps_len; @@ -276,6 +292,7 @@ struct zxdh_msg_reply_body { enum zxdh_reps_flag flag; union { uint8_t reply_data[ZXDH_MSG_REPLY_BODY_MAX_LEN - sizeof(enum zxdh_reps_flag)]; + struct zxdh_link_info_msg link_msg; } __rte_packed; } __rte_packed; @@ -291,6 +308,12 @@ struct zxdh_vf_init_msg { uint8_t rss_enable; } __rte_packed; +struct zxdh_port_attr_set_msg { + uint32_t mode; + uint32_t value; + uint8_t allmulti_follow; +} __rte_packed; + struct zxdh_msg_head { enum zxdh_msg_type msg_type; uint16_t vport; @@ -298,14 +321,26 @@ struct zxdh_msg_head { uint16_t pcieid; } __rte_packed; +struct zxdh_agent_msg_head { + enum 
zxdh_agent_msg_type msg_type; + uint8_t panel_id; + uint8_t phyport; + uint8_t rsv; + uint16_t vf_id; + uint16_t pcie_id; +} __rte_packed; + struct zxdh_msg_info { union { uint8_t head_len[ZXDH_MSG_HEAD_LEN]; struct zxdh_msg_head msg_head; + struct zxdh_agent_msg_head agent_msg_head; }; union { uint8_t datainfo[ZXDH_MSG_REQ_BODY_MAX_LEN]; struct zxdh_vf_init_msg vf_init_msg; + struct zxdh_port_attr_set_msg port_attr_msg; + struct zxdh_link_info_msg link_msg; } __rte_packed data; } __rte_packed; @@ -326,5 +361,10 @@ void zxdh_msg_head_build(struct zxdh_hw *hw, enum zxdh_msg_type type, struct zxdh_msg_info *msg_info); int zxdh_vf_send_msg_to_pf(struct rte_eth_dev *dev, void *msg_req, uint16_t msg_req_len, void *reply, uint16_t reply_len); +void zxdh_agent_msg_build(struct zxdh_hw *hw, enum zxdh_agent_msg_type type, + struct zxdh_msg_info *msg_info); +int32_t zxdh_send_msg_to_riscv(struct rte_eth_dev *dev, void *msg_req, + uint16_t msg_req_len, void *reply, uint16_t reply_len, + enum ZXDH_BAR_MODULE_ID module_id); #endif /* ZXDH_MSG_H */ diff --git a/drivers/net/zxdh/zxdh_np.c b/drivers/net/zxdh/zxdh_np.c index 740d302f91..f2518b6d7c 100644 --- a/drivers/net/zxdh/zxdh_np.c +++ b/drivers/net/zxdh/zxdh_np.c @@ -36,6 +36,10 @@ ZXDH_SDT_TBL_DATA_T g_sdt_info[ZXDH_DEV_CHANNEL_MAX][ZXDH_DEV_SDT_ID_MAX]; #define ZXDH_COMM_GET_BIT_MASK(_inttype_, _bitqnt_)\ ((_inttype_)(((_bitqnt_) < 32))) +#define ZXDH_COMM_UINT32_GET_BITS(_uidst_, _uisrc_, _uistartpos_, _uilen_)\ + ((_uidst_) = (((_uisrc_) >> (_uistartpos_)) & \ + (ZXDH_COMM_GET_BIT_MASK(uint32_t, (_uilen_))))) + #define ZXDH_REG_DATA_MAX (128) #define ZXDH_COMM_CHECK_DEV_POINT(dev_id, point)\ @@ -1456,15 +1460,11 @@ zxdh_np_dtb_table_entry_write(uint32_t dev_id, return rc; } -static uint32_t +static void zxdh_np_sdt_tbl_data_get(uint32_t dev_id, uint32_t sdt_no, ZXDH_SDT_TBL_DATA_T *p_sdt_data) { - uint32_t rc = 0; - p_sdt_data->data_high32 = g_sdt_info[dev_id][sdt_no].data_high32; p_sdt_data->data_low32 = g_sdt_info[dev_id][sdt_no].data_low32; - - return rc; } int @@ -1507,7 +1507,7 @@ zxdh_np_dtb_table_entry_delete(uint32_t dev_id, pentry = delete_entries + entry_index; sdt_no = pentry->sdt_no; - rc = zxdh_np_sdt_tbl_data_get(dev_id, sdt_no, &sdt_tbl); + zxdh_np_sdt_tbl_data_get(dev_id, sdt_no, &sdt_tbl); switch (tbl_type) { case ZXDH_SDT_TBLT_ERAM: { @@ -1557,3 +1557,163 @@ zxdh_np_dtb_table_entry_delete(uint32_t dev_id, rte_free(p_data_buff_ex); return 0; } + +static uint32_t +zxdh_np_sdt_tbl_data_parser(uint32_t sdt_hig32, uint32_t sdt_low32, void *p_sdt_info) +{ + uint32_t tbl_type = 0; + uint32_t clutch_en = 0; + + ZXDH_SDTTBL_ERAM_T *p_sdt_eram = NULL; + ZXDH_SDTTBL_PORTTBL_T *p_sdt_porttbl = NULL; + + ZXDH_COMM_UINT32_GET_BITS(tbl_type, sdt_hig32, + ZXDH_SDT_H_TBL_TYPE_BT_POS, ZXDH_SDT_H_TBL_TYPE_BT_LEN); + ZXDH_COMM_UINT32_GET_BITS(clutch_en, sdt_low32, 0, 1); + + switch (tbl_type) { + case ZXDH_SDT_TBLT_ERAM: + { + p_sdt_eram = (ZXDH_SDTTBL_ERAM_T *)p_sdt_info; + p_sdt_eram->table_type = tbl_type; + p_sdt_eram->eram_clutch_en = clutch_en; + break; + } + + case ZXDH_SDT_TBLT_PORTTBL: + { + p_sdt_porttbl = (ZXDH_SDTTBL_PORTTBL_T *)p_sdt_info; + p_sdt_porttbl->table_type = tbl_type; + p_sdt_porttbl->porttbl_clutch_en = clutch_en; + break; + } + default: + { + PMD_DRV_LOG(ERR, "SDT table_type[ %d ] is invalid!", tbl_type); + return 1; + } + } + + return 0; +} + +static uint32_t +zxdh_np_soft_sdt_tbl_get(uint32_t dev_id, uint32_t sdt_no, void *p_sdt_info) +{ + ZXDH_SDT_TBL_DATA_T sdt_tbl = {0}; + uint32_t rc; + + 
zxdh_np_sdt_tbl_data_get(dev_id, sdt_no, &sdt_tbl); + + rc = zxdh_np_sdt_tbl_data_parser(sdt_tbl.data_high32, sdt_tbl.data_low32, p_sdt_info); + if (rc != 0) + PMD_DRV_LOG(ERR, "dpp sdt [%d] tbl_data_parser error.", sdt_no); + + return rc; +} + +static void +zxdh_np_eram_index_cal(uint32_t eram_mode, uint32_t index, + uint32_t *p_row_index, uint32_t *p_col_index) +{ + uint32_t row_index = 0; + uint32_t col_index = 0; + + switch (eram_mode) { + case ZXDH_ERAM128_TBL_128b: + { + row_index = index; + break; + } + case ZXDH_ERAM128_TBL_64b: + { + row_index = (index >> 1); + col_index = index & 0x1; + break; + } + case ZXDH_ERAM128_TBL_1b: + { + row_index = (index >> 7); + col_index = index & 0x7F; + break; + } + } + *p_row_index = row_index; + *p_col_index = col_index; +} + +static uint32_t +zxdh_np_dtb_eram_data_get(uint32_t dev_id, uint32_t queue_id, uint32_t sdt_no, + ZXDH_DTB_ERAM_ENTRY_INFO_T *p_dump_eram_entry) +{ + uint32_t index = p_dump_eram_entry->index; + uint32_t *p_data = p_dump_eram_entry->p_data; + ZXDH_SDTTBL_ERAM_T sdt_eram_info = {0}; + uint32_t temp_data[4] = {0}; + uint32_t row_index = 0; + uint32_t col_index = 0; + uint32_t rd_mode; + uint32_t rc; + + rc = zxdh_np_soft_sdt_tbl_get(queue_id, sdt_no, &sdt_eram_info); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rc, "dpp_soft_sdt_tbl_get"); + rd_mode = sdt_eram_info.eram_mode; + + zxdh_np_eram_index_cal(rd_mode, index, &row_index, &col_index); + + switch (rd_mode) { + case ZXDH_ERAM128_TBL_128b: + { + memcpy(p_data, temp_data, (128 / 8)); + break; + } + case ZXDH_ERAM128_TBL_64b: + { + memcpy(p_data, temp_data + ((1 - col_index) << 1), (64 / 8)); + break; + } + case ZXDH_ERAM128_TBL_1b: + { + ZXDH_COMM_UINT32_GET_BITS(p_data[0], *(temp_data + + (3 - col_index / 32)), (col_index % 32), 1); + break; + } + } + return rc; +} + +int +zxdh_np_dtb_table_entry_get(uint32_t dev_id, + uint32_t queue_id, + ZXDH_DTB_USER_ENTRY_T *get_entry, + uint32_t srh_mode) +{ + ZXDH_SDT_TBL_DATA_T sdt_tbl = {0}; + uint32_t tbl_type = 0; + uint32_t rc; + uint32_t sdt_no; + + sdt_no = get_entry->sdt_no; + zxdh_np_sdt_tbl_data_get(srh_mode, sdt_no, &sdt_tbl); + + ZXDH_COMM_UINT32_GET_BITS(tbl_type, sdt_tbl.data_high32, + ZXDH_SDT_H_TBL_TYPE_BT_POS, ZXDH_SDT_H_TBL_TYPE_BT_LEN); + switch (tbl_type) { + case ZXDH_SDT_TBLT_ERAM: + { + rc = zxdh_np_dtb_eram_data_get(dev_id, + queue_id, + sdt_no, + (ZXDH_DTB_ERAM_ENTRY_INFO_T *)get_entry->p_entry_data); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_dtb_eram_data_get"); + break; + } + default: + { + PMD_DRV_LOG(ERR, "SDT table_type[ %d ] is invalid!", tbl_type); + return 1; + } + } + + return 0; +} diff --git a/drivers/net/zxdh/zxdh_np.h b/drivers/net/zxdh/zxdh_np.h index 42a652dd6b..ac3931ba65 100644 --- a/drivers/net/zxdh/zxdh_np.h +++ b/drivers/net/zxdh/zxdh_np.h @@ -514,11 +514,31 @@ typedef struct zxdh_sdt_tbl_data_t { uint32_t data_low32; } ZXDH_SDT_TBL_DATA_T; +typedef struct zxdh_sdt_tbl_etcam_t { + uint32_t table_type; + uint32_t etcam_id; + uint32_t etcam_key_mode; + uint32_t etcam_table_id; + uint32_t no_as_rsp_mode; + uint32_t as_en; + uint32_t as_eram_baddr; + uint32_t as_rsp_mode; + uint32_t etcam_table_depth; + uint32_t etcam_clutch_en; +} ZXDH_SDTTBL_ETCAM_T; + +typedef struct zxdh_sdt_tbl_porttbl_t { + uint32_t table_type; + uint32_t porttbl_clutch_en; +} ZXDH_SDTTBL_PORTTBL_T; + int zxdh_np_host_init(uint32_t dev_id, ZXDH_DEV_INIT_CTRL_T *p_dev_init_ctrl); int zxdh_np_online_uninit(uint32_t dev_id, char *port_name, uint32_t queue_id); int zxdh_np_dtb_table_entry_write(uint32_t dev_id, uint32_t queue_id, 
uint32_t entrynum, ZXDH_DTB_USER_ENTRY_T *down_entries); int zxdh_np_dtb_table_entry_delete(uint32_t dev_id, uint32_t queue_id, uint32_t entrynum, ZXDH_DTB_USER_ENTRY_T *delete_entries); +int zxdh_np_dtb_table_entry_get(uint32_t dev_id, uint32_t queue_id, + ZXDH_DTB_USER_ENTRY_T *get_entry, uint32_t srh_mode); #endif /* ZXDH_NP_H */ diff --git a/drivers/net/zxdh/zxdh_tables.c b/drivers/net/zxdh/zxdh_tables.c index 9fd184e612..db0132ce3f 100644 --- a/drivers/net/zxdh/zxdh_tables.c +++ b/drivers/net/zxdh/zxdh_tables.c @@ -134,3 +134,18 @@ int zxdh_panel_table_init(struct rte_eth_dev *dev) return ret; } + +int +zxdh_get_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr) +{ + int ret; + + ZXDH_DTB_ERAM_ENTRY_INFO_T entry = {vfid, (uint32_t *)port_attr}; + ZXDH_DTB_USER_ENTRY_T user_entry_get = {ZXDH_SDT_VPORT_ATT_TABLE, &entry}; + + ret = zxdh_np_dtb_table_entry_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, &user_entry_get, 1); + if (ret != 0) + PMD_DRV_LOG(ERR, "get port_attr vfid:%d failed, ret:%d ", vfid, ret); + + return ret; +} diff --git a/drivers/net/zxdh/zxdh_tables.h b/drivers/net/zxdh/zxdh_tables.h index 5e9b36faee..8676a8b375 100644 --- a/drivers/net/zxdh/zxdh_tables.h +++ b/drivers/net/zxdh/zxdh_tables.h @@ -7,9 +7,10 @@ #include <stdint.h> -extern struct zxdh_dtb_shared_data g_dtb_data; - #define ZXDH_DEVICE_NO 0 +#define ZXDH_PORT_ATTR_IS_UP_FLAG 35 + +extern struct zxdh_dtb_shared_data g_dtb_data; struct zxdh_port_attr_table { #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN @@ -145,5 +146,6 @@ int zxdh_port_attr_init(struct rte_eth_dev *dev); int zxdh_panel_table_init(struct rte_eth_dev *dev); int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); int zxdh_port_attr_uninit(struct rte_eth_dev *dev); +int zxdh_get_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); #endif /* ZXDH_TABLES_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 51008 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
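The ERAM read path added above splits a flat entry index into a 128-bit row and a column within that row, depending on the entry width; zxdh_np_eram_index_cal() does this for the 128b, 64b and 1b table modes. Below is a minimal standalone sketch of that index math, with the shifts and masks copied from the patch; the enum and test harness are illustrative, not driver code.

#include <stdint.h>
#include <stdio.h>

enum eram_mode { ERAM128_TBL_128b, ERAM128_TBL_64b, ERAM128_TBL_1b };

/* Same row/column split as zxdh_np_eram_index_cal() in the patch. */
static void eram_index_cal(enum eram_mode mode, uint32_t index,
			   uint32_t *row, uint32_t *col)
{
	*row = 0;
	*col = 0;
	switch (mode) {
	case ERAM128_TBL_128b:	/* one entry per 128-bit row */
		*row = index;
		break;
	case ERAM128_TBL_64b:	/* two 64-bit entries per row */
		*row = index >> 1;
		*col = index & 0x1;
		break;
	case ERAM128_TBL_1b:	/* 128 one-bit entries per row */
		*row = index >> 7;
		*col = index & 0x7F;
		break;
	}
}

int main(void)
{
	uint32_t row, col;

	eram_index_cal(ERAM128_TBL_64b, 5, &row, &col);
	printf("64b index 5   -> row %u, col %u\n", row, col);	/* row 2, col 1 */
	eram_index_cal(ERAM128_TBL_1b, 200, &row, &col);
	printf("1b  index 200 -> row %u, col %u\n", row, col);	/* row 1, col 72 */
	return 0;
}

In the 64b case the column picks one half of the 128-bit row, which is why zxdh_np_dtb_eram_data_get() above copies from temp_data + ((1 - col_index) << 1).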
* [PATCH v3 10/15] net/zxdh: mac set/add/remove ops implementations 2024-12-17 11:41 ` [PATCH v3 00/15] net/zxdh: updated net zxdh driver Junlong Wang ` (8 preceding siblings ...) 2024-12-17 11:41 ` [PATCH v3 09/15] net/zxdh: link info update, set link up/down Junlong Wang @ 2024-12-17 11:41 ` Junlong Wang 2024-12-17 11:41 ` [PATCH v3 11/15] net/zxdh: promisc/allmulti " Junlong Wang ` (4 subsequent siblings) 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-17 11:41 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 24676 bytes --] provided mac set/add/remove ops. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- doc/guides/nics/features/zxdh.ini | 2 + doc/guides/nics/zxdh.rst | 2 + drivers/net/zxdh/zxdh_common.c | 24 +++ drivers/net/zxdh/zxdh_common.h | 1 + drivers/net/zxdh/zxdh_ethdev.c | 32 +++- drivers/net/zxdh/zxdh_ethdev.h | 3 + drivers/net/zxdh/zxdh_ethdev_ops.c | 233 +++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev_ops.h | 4 + drivers/net/zxdh/zxdh_msg.h | 12 ++ drivers/net/zxdh/zxdh_np.h | 5 + drivers/net/zxdh/zxdh_tables.c | 197 ++++++++++++++++++++++++ drivers/net/zxdh/zxdh_tables.h | 36 +++++ 12 files changed, 549 insertions(+), 2 deletions(-) diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index 7da3aaced1..dc09fe3453 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -12,3 +12,5 @@ Multiprocess aware = Y Scattered Rx = Y Link status = Y Link status event = Y +Unicast MAC filter = Y +Multicast MAC filter = Y diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index fdbc3b3923..e0b0776aca 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -24,6 +24,8 @@ Features of the ZXDH PMD are: - Link Auto-negotiation - Link state information - Set Link down or up +- Unicast MAC filter +- Multicast MAC filter Driver compilation and testing diff --git a/drivers/net/zxdh/zxdh_common.c b/drivers/net/zxdh/zxdh_common.c index 4f18c97ed7..75883a8897 100644 --- a/drivers/net/zxdh/zxdh_common.c +++ b/drivers/net/zxdh/zxdh_common.c @@ -256,6 +256,30 @@ zxdh_panelid_get(struct rte_eth_dev *dev, uint8_t *panelid) return ret; } +static int +zxdh_get_res_hash_id(struct zxdh_res_para *in, uint8_t *hash_id) +{ + uint8_t reps = 0; + uint16_t reps_len = 0; + + if (zxdh_get_res_info(in, ZXDH_TBL_FIELD_HASHID, &reps, &reps_len) != ZXDH_BAR_MSG_OK) + return -1; + + *hash_id = reps; + return ZXDH_BAR_MSG_OK; +} + +int32_t +zxdh_hashidx_get(struct rte_eth_dev *dev, uint8_t *hash_idx) +{ + struct zxdh_res_para param; + + zxdh_fill_res_para(dev, ¶m); + int32_t ret = zxdh_get_res_hash_id(¶m, hash_idx); + + return ret; +} + uint32_t zxdh_read_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg) { diff --git a/drivers/net/zxdh/zxdh_common.h b/drivers/net/zxdh/zxdh_common.h index 72c29e1522..826f1fb95d 100644 --- a/drivers/net/zxdh/zxdh_common.h +++ b/drivers/net/zxdh/zxdh_common.h @@ -22,6 +22,7 @@ struct zxdh_res_para { int32_t zxdh_phyport_get(struct rte_eth_dev *dev, uint8_t *phyport); int32_t zxdh_panelid_get(struct rte_eth_dev *dev, uint8_t *pannelid); +int32_t zxdh_hashidx_get(struct rte_eth_dev *dev, uint8_t *hash_idx); uint32_t zxdh_read_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg); void zxdh_write_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg, uint32_t val); void zxdh_release_lock(struct zxdh_hw *hw); diff --git a/drivers/net/zxdh/zxdh_ethdev.c 
b/drivers/net/zxdh/zxdh_ethdev.c index d3876ec9b3..85ada87cdc 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -979,6 +979,23 @@ zxdh_set_rxtx_funcs(struct rte_eth_dev *eth_dev) return 0; } +static int +zxdh_mac_config(struct rte_eth_dev *eth_dev) +{ + struct zxdh_hw *hw = eth_dev->data->dev_private; + int ret = 0; + + if (hw->is_pf) { + ret = zxdh_set_mac_table(hw->vport.vport, + ð_dev->data->mac_addrs[0], hw->hash_search_index); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to add mac: port 0x%x", hw->vport.vport); + return ret; + } + } + return ret; +} + static int zxdh_dev_start(struct rte_eth_dev *dev) { @@ -1014,6 +1031,9 @@ zxdh_dev_start(struct rte_eth_dev *dev) zxdh_queue_notify(vq); } zxdh_dev_set_link_up(dev); + ret = zxdh_mac_config(hw->eth_dev); + if (ret) + PMD_DRV_LOG(ERR, " mac config failed"); return 0; } @@ -1032,6 +1052,9 @@ static const struct eth_dev_ops zxdh_eth_dev_ops = { .link_update = zxdh_dev_link_update, .dev_set_link_up = zxdh_dev_set_link_up, .dev_set_link_down = zxdh_dev_set_link_down, + .mac_addr_add = zxdh_dev_mac_addr_add, + .mac_addr_remove = zxdh_dev_mac_addr_remove, + .mac_addr_set = zxdh_dev_mac_addr_set, }; static int32_t @@ -1073,15 +1096,20 @@ zxdh_agent_comm(struct rte_eth_dev *eth_dev, struct zxdh_hw *hw) PMD_DRV_LOG(ERR, "Failed to get phyport"); return -1; } - PMD_DRV_LOG(INFO, "Get phyport success: 0x%x", hw->phyport); + PMD_DRV_LOG(DEBUG, "Get phyport success: 0x%x", hw->phyport); hw->vfid = zxdh_vport_to_vfid(hw->vport); + if (zxdh_hashidx_get(eth_dev, &hw->hash_search_index) != 0) { + PMD_DRV_LOG(ERR, "Failed to get hash idx"); + return -1; + } + if (zxdh_panelid_get(eth_dev, &hw->panel_id) != 0) { PMD_DRV_LOG(ERR, "Failed to get panel_id"); return -1; } - PMD_DRV_LOG(INFO, "Get panel id success: 0x%x", hw->panel_id); + PMD_DRV_LOG(DEBUG, "Get panel id success: 0x%x", hw->panel_id); return 0; } diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index c0b719062c..5b95cb1c2a 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -80,6 +80,8 @@ struct zxdh_hw { uint16_t port_id; uint16_t vfid; uint16_t queue_num; + uint16_t mc_num; + uint16_t uc_num; uint8_t *isr; uint8_t weak_barriers; @@ -92,6 +94,7 @@ struct zxdh_hw { uint8_t msg_chan_init; uint8_t phyport; uint8_t panel_id; + uint8_t hash_search_index; uint8_t has_tx_offload; uint8_t has_rx_offload; uint8_t admin_status; diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.c b/drivers/net/zxdh/zxdh_ethdev_ops.c index 5a0af98cc0..751f80e9b4 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.c +++ b/drivers/net/zxdh/zxdh_ethdev_ops.c @@ -164,3 +164,236 @@ int zxdh_dev_set_link_down(struct rte_eth_dev *dev) PMD_DRV_LOG(ERR, "Set link down failed"); return ret; } + +int zxdh_dev_mac_addr_set(struct rte_eth_dev *dev, struct rte_ether_addr *addr) +{ + struct zxdh_hw *hw = (struct zxdh_hw *)dev->data->dev_private; + struct rte_ether_addr *old_addr = &dev->data->mac_addrs[0]; + struct zxdh_msg_info msg_info = {0}; + uint16_t ret = 0; + + if (!rte_is_valid_assigned_ether_addr(addr)) { + PMD_DRV_LOG(ERR, "mac address is invalid!"); + return -EINVAL; + } + + if (hw->is_pf) { + ret = zxdh_del_mac_table(hw->vport.vport, old_addr, hw->hash_search_index); + if (ret) { + PMD_DRV_LOG(ERR, "mac_addr_add failed, code:%d", ret); + return ret; + } + hw->uc_num--; + + ret = zxdh_set_mac_table(hw->vport.vport, addr, hw->hash_search_index); + if (ret) { + PMD_DRV_LOG(ERR, "mac_addr_add failed, code:%d", ret); + return ret; + } + hw->uc_num++; 
+ } else { + struct zxdh_mac_filter *mac_filter = &msg_info.data.mac_filter_msg; + + mac_filter->filter_flag = ZXDH_MAC_UNFILTER; + mac_filter->mac_flag = true; + rte_memcpy(&mac_filter->mac, old_addr, sizeof(struct rte_ether_addr)); + zxdh_msg_head_build(hw, ZXDH_MAC_DEL, &msg_info); + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d ", + hw->vport.vport, ZXDH_MAC_DEL); + return ret; + } + hw->uc_num--; + PMD_DRV_LOG(INFO, "Success to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_MAC_DEL); + + mac_filter->filter_flag = ZXDH_MAC_UNFILTER; + rte_memcpy(&mac_filter->mac, addr, sizeof(struct rte_ether_addr)); + zxdh_msg_head_build(hw, ZXDH_MAC_ADD, &msg_info); + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d ", + hw->vport.vport, ZXDH_MAC_ADD); + return ret; + } + hw->uc_num++; + } + rte_ether_addr_copy(addr, (struct rte_ether_addr *)hw->mac_addr); + return ret; +} + +int zxdh_dev_mac_addr_add(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr, + uint32_t index, uint32_t vmdq __rte_unused) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_info msg_info = {0}; + uint16_t i, ret; + + if (index >= ZXDH_MAX_MAC_ADDRS) { + PMD_DRV_LOG(ERR, "Add mac index (%u) is out of range", index); + return -EINVAL; + } + + for (i = 0; (i != ZXDH_MAX_MAC_ADDRS); ++i) { + if (memcmp(&dev->data->mac_addrs[i], mac_addr, sizeof(*mac_addr))) + continue; + + PMD_DRV_LOG(INFO, "MAC address already configured"); + return -EADDRINUSE; + } + + if (hw->is_pf) { + if (rte_is_unicast_ether_addr(mac_addr)) { + if (hw->uc_num < ZXDH_MAX_UC_MAC_ADDRS) { + ret = zxdh_set_mac_table(hw->vport.vport, + mac_addr, hw->hash_search_index); + if (ret) { + PMD_DRV_LOG(ERR, "mac_addr_add failed, code:%d", ret); + return ret; + } + hw->uc_num++; + } else { + PMD_DRV_LOG(ERR, "MC_MAC is out of range, MAX_MC_MAC:%d", + ZXDH_MAX_MC_MAC_ADDRS); + return -EINVAL; + } + } else { + if (hw->mc_num < ZXDH_MAX_MC_MAC_ADDRS) { + ret = zxdh_set_mac_table(hw->vport.vport, + mac_addr, hw->hash_search_index); + if (ret) { + PMD_DRV_LOG(ERR, "mac_addr_add failed, code:%d", ret); + return ret; + } + hw->mc_num++; + } else { + PMD_DRV_LOG(ERR, "MC_MAC is out of range, MAX_MC_MAC:%d", + ZXDH_MAX_MC_MAC_ADDRS); + return -EINVAL; + } + } + } else { + struct zxdh_mac_filter *mac_filter = &msg_info.data.mac_filter_msg; + + mac_filter->filter_flag = ZXDH_MAC_FILTER; + rte_memcpy(&mac_filter->mac, mac_addr, sizeof(struct rte_ether_addr)); + zxdh_msg_head_build(hw, ZXDH_MAC_ADD, &msg_info); + if (rte_is_unicast_ether_addr(mac_addr)) { + if (hw->uc_num < ZXDH_MAX_UC_MAC_ADDRS) { + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, + sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_MAC_ADD); + return ret; + } + hw->uc_num++; + } else { + PMD_DRV_LOG(ERR, "MC_MAC is out of range, MAX_MC_MAC:%d", + ZXDH_MAX_MC_MAC_ADDRS); + return -EINVAL; + } + } else { + if (hw->mc_num < ZXDH_MAX_MC_MAC_ADDRS) { + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, + sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_MAC_ADD); + return ret; + } + hw->mc_num++; + } else { + PMD_DRV_LOG(ERR, "MC_MAC is out of range, MAX_MC_MAC:%d", + ZXDH_MAX_MC_MAC_ADDRS); + return -EINVAL; + } + } + } + 
dev->data->mac_addrs[index] = *mac_addr; + return 0; +} +/** + * Fun: + */ +void zxdh_dev_mac_addr_remove(struct rte_eth_dev *dev __rte_unused, uint32_t index __rte_unused) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_info msg_info = {0}; + struct rte_ether_addr *mac_addr = &dev->data->mac_addrs[index]; + uint16_t ret = 0; + + if (index >= ZXDH_MAX_MAC_ADDRS) + return; + + if (hw->is_pf) { + if (rte_is_unicast_ether_addr(mac_addr)) { + if (hw->uc_num <= ZXDH_MAX_UC_MAC_ADDRS) { + ret = zxdh_del_mac_table(hw->vport.vport, + mac_addr, hw->hash_search_index); + if (ret) { + PMD_DRV_LOG(ERR, "mac_addr_del failed, code:%d", ret); + return; + } + hw->uc_num--; + } else { + PMD_DRV_LOG(ERR, "MC_MAC is out of range, MAX_MC_MAC:%d", + ZXDH_MAX_MC_MAC_ADDRS); + return; + } + } else { + if (hw->mc_num <= ZXDH_MAX_MC_MAC_ADDRS) { + ret = zxdh_del_mac_table(hw->vport.vport, + mac_addr, hw->hash_search_index); + if (ret) { + PMD_DRV_LOG(ERR, "mac_addr_del failed, code:%d", ret); + return; + } + hw->mc_num--; + } else { + PMD_DRV_LOG(ERR, "MC_MAC is out of range, MAX_MC_MAC:%d", + ZXDH_MAX_MC_MAC_ADDRS); + return; + } + } + } else { + struct zxdh_mac_filter *mac_filter = &msg_info.data.mac_filter_msg; + + mac_filter->filter_flag = ZXDH_MAC_FILTER; + rte_memcpy(&mac_filter->mac, mac_addr, sizeof(struct rte_ether_addr)); + zxdh_msg_head_build(hw, ZXDH_MAC_DEL, &msg_info); + if (rte_is_unicast_ether_addr(mac_addr)) { + if (hw->uc_num <= ZXDH_MAX_UC_MAC_ADDRS) { + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, + sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_MAC_DEL); + return; + } + hw->uc_num--; + } else { + PMD_DRV_LOG(ERR, "MC_MAC is out of range, MAX_MC_MAC:%d", + ZXDH_MAX_MC_MAC_ADDRS); + return; + } + } else { + if (hw->mc_num <= ZXDH_MAX_MC_MAC_ADDRS) { + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, + sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_MAC_DEL); + return; + } + hw->mc_num--; + } else { + PMD_DRV_LOG(ERR, "MC_MAC is out of range, MAX_MC_MAC:%d", + ZXDH_MAX_MC_MAC_ADDRS); + return; + } + } + } + memset(&dev->data->mac_addrs[index], 0, sizeof(struct rte_ether_addr)); +} diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.h b/drivers/net/zxdh/zxdh_ethdev_ops.h index c6d6ca56fd..4630bb70db 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.h +++ b/drivers/net/zxdh/zxdh_ethdev_ops.h @@ -10,5 +10,9 @@ int zxdh_dev_set_link_up(struct rte_eth_dev *dev); int zxdh_dev_set_link_down(struct rte_eth_dev *dev); int32_t zxdh_dev_link_update(struct rte_eth_dev *dev, int32_t wait_to_complete __rte_unused); +int zxdh_dev_mac_addr_add(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr, + uint32_t index, uint32_t vmdq); +int zxdh_dev_mac_addr_set(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr); +void zxdh_dev_mac_addr_remove(struct rte_eth_dev *dev, uint32_t index); #endif /* ZXDH_ETHDEV_OPS_H */ diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index a78075c914..44ce5d1b7f 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -46,6 +46,9 @@ #define ZXDH_MSG_REQ_BODY_MAX_LEN \ (ZXDH_MSG_PAYLOAD_MAX_LEN - ZXDH_MSG_HEAD_LEN) +#define ZXDH_MAC_FILTER 0xaa +#define ZXDH_MAC_UNFILTER 0xff + enum ZXDH_DRIVER_TYPE { ZXDH_MSG_CHAN_END_MPF = 0, ZXDH_MSG_CHAN_END_PF, @@ -173,6 +176,8 @@ enum zxdh_msg_type { ZXDH_NULL = 0, ZXDH_VF_PORT_INIT = 1, ZXDH_VF_PORT_UNINIT = 2, + ZXDH_MAC_ADD = 3, 
+ ZXDH_MAC_DEL = 4, ZXDH_PORT_ATTRS_SET = 25, @@ -314,6 +319,12 @@ struct zxdh_port_attr_set_msg { uint8_t allmulti_follow; } __rte_packed; +struct zxdh_mac_filter { + uint8_t mac_flag; + uint8_t filter_flag; + struct rte_ether_addr mac; +} __rte_packed; + struct zxdh_msg_head { enum zxdh_msg_type msg_type; uint16_t vport; @@ -341,6 +352,7 @@ struct zxdh_msg_info { struct zxdh_vf_init_msg vf_init_msg; struct zxdh_port_attr_set_msg port_attr_msg; struct zxdh_link_info_msg link_msg; + struct zxdh_mac_filter mac_filter_msg; } __rte_packed data; } __rte_packed; diff --git a/drivers/net/zxdh/zxdh_np.h b/drivers/net/zxdh/zxdh_np.h index ac3931ba65..19d1f03f59 100644 --- a/drivers/net/zxdh/zxdh_np.h +++ b/drivers/net/zxdh/zxdh_np.h @@ -532,6 +532,11 @@ typedef struct zxdh_sdt_tbl_porttbl_t { uint32_t porttbl_clutch_en; } ZXDH_SDTTBL_PORTTBL_T; +typedef struct zxdh_dtb_hash_entry_info_t { + uint8_t *p_actu_key; + uint8_t *p_rst; +} ZXDH_DTB_HASH_ENTRY_INFO_T; + int zxdh_np_host_init(uint32_t dev_id, ZXDH_DEV_INIT_CTRL_T *p_dev_init_ctrl); int zxdh_np_online_uninit(uint32_t dev_id, char *port_name, uint32_t queue_id); int zxdh_np_dtb_table_entry_write(uint32_t dev_id, uint32_t queue_id, diff --git a/drivers/net/zxdh/zxdh_tables.c b/drivers/net/zxdh/zxdh_tables.c index db0132ce3f..f5b607584d 100644 --- a/drivers/net/zxdh/zxdh_tables.c +++ b/drivers/net/zxdh/zxdh_tables.c @@ -11,6 +11,10 @@ #define ZXDH_SDT_VPORT_ATT_TABLE 1 #define ZXDH_SDT_PANEL_ATT_TABLE 2 +#define ZXDH_MAC_HASH_INDEX_BASE 64 +#define ZXDH_MAC_HASH_INDEX(index) (ZXDH_MAC_HASH_INDEX_BASE + (index)) +#define ZXDH_MC_GROUP_NUM 4 + int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr) { @@ -149,3 +153,196 @@ zxdh_get_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr) return ret; } + +int +zxdh_set_mac_table(uint16_t vport, struct rte_ether_addr *addr, uint8_t hash_search_idx) +{ + struct zxdh_mac_unicast_table unicast_table = {0}; + struct zxdh_mac_multicast_table multicast_table = {0}; + union zxdh_virport_num vport_num = (union zxdh_virport_num)vport; + uint32_t ret; + uint16_t group_id = 0; + + if (rte_is_unicast_ether_addr(addr)) { + rte_memcpy(unicast_table.key.dmac_addr, addr, sizeof(struct rte_ether_addr)); + unicast_table.entry.hit_flag = 0; + unicast_table.entry.vfid = vport_num.vfid; + + ZXDH_DTB_HASH_ENTRY_INFO_T dtb_hash_entry = { + .p_actu_key = (uint8_t *)&unicast_table.key, + .p_rst = (uint8_t *)&unicast_table.entry + }; + ZXDH_DTB_USER_ENTRY_T entry_get = { + .sdt_no = ZXDH_MAC_HASH_INDEX(hash_search_idx), + .p_entry_data = (void *)&dtb_hash_entry + }; + + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &entry_get); + if (ret) { + PMD_DRV_LOG(ERR, "Insert mac_table failed"); + return -ret; + } + } else { + for (group_id = 0; group_id < 4; group_id++) { + multicast_table.key.vf_group_id = group_id; + rte_memcpy(multicast_table.key.mac_addr, + addr, sizeof(struct rte_ether_addr)); + ZXDH_DTB_HASH_ENTRY_INFO_T dtb_hash_entry = { + .p_actu_key = (uint8_t *)&multicast_table.key, + .p_rst = (uint8_t *)&multicast_table.entry + }; + + ZXDH_DTB_USER_ENTRY_T entry_get = { + .sdt_no = ZXDH_MAC_HASH_INDEX(hash_search_idx), + .p_entry_data = (void *)&dtb_hash_entry + }; + + ret = zxdh_np_dtb_table_entry_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, + &entry_get, 1); + uint8_t index = (vport_num.vfid % 64) / 32; + if (ret == 0) { + if (vport_num.vf_flag) { + if (group_id == vport_num.vfid / 64) + multicast_table.entry.mc_bitmap[index] |= + rte_cpu_to_be_32(UINT32_C(1) << + 
(31 - index)); + } else { + if (group_id == vport_num.vfid / 64) + multicast_table.entry.mc_pf_enable = + rte_cpu_to_be_32((1 << 30)); + } + } else { + if (vport_num.vf_flag) { + if (group_id == vport_num.vfid / 64) + multicast_table.entry.mc_bitmap[index] |= + rte_cpu_to_be_32(UINT32_C(1) << + (31 - index)); + else + multicast_table.entry.mc_bitmap[index] = + false; + } else { + if (group_id == vport_num.vfid / 64) + multicast_table.entry.mc_pf_enable = + rte_cpu_to_be_32((1 << 30)); + else + multicast_table.entry.mc_pf_enable = false; + } + } + + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, g_dtb_data.queueid, + 1, &entry_get); + if (ret) { + PMD_DRV_LOG(ERR, "add mac_table failed, code:%d", ret); + return -ret; + } + } + } + return 0; +} + +int +zxdh_del_mac_table(uint16_t vport, struct rte_ether_addr *addr, uint8_t hash_search_idx) +{ + struct zxdh_mac_unicast_table unicast_table = {0}; + struct zxdh_mac_multicast_table multicast_table = {0}; + union zxdh_virport_num vport_num = (union zxdh_virport_num)vport; + uint32_t ret, del_flag = 0; + uint16_t group_id = 0; + + if (rte_is_unicast_ether_addr(addr)) { + rte_memcpy(unicast_table.key.dmac_addr, addr, sizeof(struct rte_ether_addr)); + unicast_table.entry.hit_flag = 0; + unicast_table.entry.vfid = vport_num.vfid; + + ZXDH_DTB_HASH_ENTRY_INFO_T dtb_hash_entry = { + .p_actu_key = (uint8_t *)&unicast_table.key, + .p_rst = (uint8_t *)&unicast_table.entry + }; + + ZXDH_DTB_USER_ENTRY_T entry_get = { + .sdt_no = ZXDH_MAC_HASH_INDEX(hash_search_idx), + .p_entry_data = (void *)&dtb_hash_entry + }; + + ret = zxdh_np_dtb_table_entry_delete(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &entry_get); + if (ret) { + PMD_DRV_LOG(ERR, "delete l2_fwd_hash_table failed, code:%d", ret); + return -ret; + } + } else { + multicast_table.key.vf_group_id = vport_num.vfid / 64; + rte_memcpy(multicast_table.key.mac_addr, addr, sizeof(struct rte_ether_addr)); + + ZXDH_DTB_HASH_ENTRY_INFO_T dtb_hash_entry = { + .p_actu_key = (uint8_t *)&multicast_table.key, + .p_rst = (uint8_t *)&multicast_table.entry + }; + + ZXDH_DTB_USER_ENTRY_T entry_get = { + .sdt_no = ZXDH_MAC_HASH_INDEX(hash_search_idx), + .p_entry_data = (void *)&dtb_hash_entry + }; + + ret = zxdh_np_dtb_table_entry_get(ZXDH_DEVICE_NO, + g_dtb_data.queueid, &entry_get, 1); + uint8_t index = (vport_num.vfid % 64) / 32; + if (vport_num.vf_flag) + multicast_table.entry.mc_bitmap[index] &= + ~(rte_cpu_to_be_32(UINT32_C(1) << (31 - index))); + else + multicast_table.entry.mc_pf_enable = 0; + + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &entry_get); + if (ret) { + PMD_DRV_LOG(ERR, "mac_addr_add mc_table failed, code:%d", ret); + return -ret; + } + + for (group_id = 0; group_id < ZXDH_MC_GROUP_NUM; group_id++) { + multicast_table.key.vf_group_id = group_id; + rte_memcpy(multicast_table.key.mac_addr, addr, + sizeof(struct rte_ether_addr)); + ZXDH_DTB_HASH_ENTRY_INFO_T dtb_hash_entry = { + .p_actu_key = (uint8_t *)&multicast_table.key, + .p_rst = (uint8_t *)&multicast_table.entry + }; + ZXDH_DTB_USER_ENTRY_T entry_get = { + .sdt_no = ZXDH_MAC_HASH_INDEX(hash_search_idx), + .p_entry_data = (void *)&dtb_hash_entry + }; + + ret = zxdh_np_dtb_table_entry_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, + &entry_get, 1); + if (multicast_table.entry.mc_bitmap[0] == 0 && + multicast_table.entry.mc_bitmap[1] == 0 && + multicast_table.entry.mc_pf_enable == 0) { + if (group_id == (ZXDH_MC_GROUP_NUM - 1)) + del_flag = 1; + } else { + break; + } + } + if (del_flag) { + for (group_id = 0; group_id < 
ZXDH_MC_GROUP_NUM; group_id++) { + multicast_table.key.vf_group_id = group_id; + rte_memcpy(multicast_table.key.mac_addr, addr, + sizeof(struct rte_ether_addr)); + ZXDH_DTB_HASH_ENTRY_INFO_T dtb_hash_entry = { + .p_actu_key = (uint8_t *)&multicast_table.key, + .p_rst = (uint8_t *)&multicast_table.entry + }; + ZXDH_DTB_USER_ENTRY_T entry_get = { + .sdt_no = ZXDH_MAC_HASH_INDEX(hash_search_idx), + .p_entry_data = (void *)&dtb_hash_entry + }; + + ret = zxdh_np_dtb_table_entry_delete(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &entry_get); + } + } + } + return 0; +} diff --git a/drivers/net/zxdh/zxdh_tables.h b/drivers/net/zxdh/zxdh_tables.h index 8676a8b375..f16c4923ef 100644 --- a/drivers/net/zxdh/zxdh_tables.h +++ b/drivers/net/zxdh/zxdh_tables.h @@ -142,10 +142,46 @@ struct zxdh_panel_table { uint32_t rsv_2; }; /* 16B */ +struct zxdh_mac_unicast_key { + uint16_t rsv; + uint8_t dmac_addr[6]; +}; + +struct zxdh_mac_unicast_entry { + uint8_t rsv1 : 7, + hit_flag : 1; + uint8_t rsv; + uint16_t vfid; +}; + +struct zxdh_mac_unicast_table { + struct zxdh_mac_unicast_key key; + struct zxdh_mac_unicast_entry entry; +}; + +struct zxdh_mac_multicast_key { + uint8_t rsv; + uint8_t vf_group_id; + uint8_t mac_addr[6]; +}; + +struct zxdh_mac_multicast_entry { + uint32_t mc_pf_enable; + uint32_t rsv1; + uint32_t mc_bitmap[2]; +}; + +struct zxdh_mac_multicast_table { + struct zxdh_mac_multicast_key key; + struct zxdh_mac_multicast_entry entry; +}; + int zxdh_port_attr_init(struct rte_eth_dev *dev); int zxdh_panel_table_init(struct rte_eth_dev *dev); int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); int zxdh_port_attr_uninit(struct rte_eth_dev *dev); int zxdh_get_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); +int zxdh_set_mac_table(uint16_t vport, struct rte_ether_addr *addr, uint8_t hash_search_idx); +int zxdh_del_mac_table(uint16_t vport, struct rte_ether_addr *addr, uint8_t hash_search_idx); #endif /* ZXDH_TABLES_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 68977 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
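The MAC filter patch above keeps two hash tables: an exact-match unicast table keyed by DMAC that resolves to a vfid, and a multicast table in which every MAC occupies ZXDH_MC_GROUP_NUM (4) entries, one per group of 64 VFs, each carrying a two-word member bitmap. The sketch below shows how a vfid is located inside that layout; the group and word selection mirror zxdh_set_mac_table(), while the MSB-first bit position is an assumption taken from the flood-table code in the following patch, not from this one.

#include <stdint.h>
#include <stdio.h>

#define ZXDH_MC_GROUP_NUM 4	/* four 64-VF groups per multicast MAC */

/* Group/word selection as in zxdh_set_mac_table(); the bit position
 * (31 - (vfid % 64) % 32, MSB first) is assumed from the unicast/
 * multicast flood tables in the next patch. */
static void mc_locate_vf(uint16_t vfid, uint16_t *group_id,
			 uint32_t *word, uint32_t *bit)
{
	*group_id = vfid / 64;		/* which of the 4 hash entries */
	*word = (vfid % 64) / 32;	/* which word of mc_bitmap[2] */
	*bit = UINT32_C(1) << (31 - (vfid % 64) % 32);
}

int main(void)
{
	uint16_t group;
	uint32_t word, bit;

	mc_locate_vf(70, &group, &word, &bit);
	/* VF 70 -> group 1, word 0, bit 0x02000000 */
	printf("vf 70 -> group %u, word %u, bit 0x%08x\n", group, word, bit);
	return 0;
}

Deletion walks all four group entries and only removes them once every bitmap word and mc_pf_enable are clear, which is the del_flag pass at the end of zxdh_del_mac_table() above.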
* [PATCH v3 11/15] net/zxdh: promisc/allmulti ops implementations 2024-12-17 11:41 ` [PATCH v3 00/15] net/zxdh: updated net zxdh driver Junlong Wang ` (9 preceding siblings ...) 2024-12-17 11:41 ` [PATCH v3 10/15] net/zxdh: mac set/add/remove ops implementations Junlong Wang @ 2024-12-17 11:41 ` Junlong Wang 2024-12-17 11:41 ` [PATCH v3 12/15] net/zxdh: vlan filter/ offload " Junlong Wang ` (3 subsequent siblings) 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-17 11:41 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 18498 bytes --] provided promiscuous/allmulticast ops. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- doc/guides/nics/features/zxdh.ini | 2 + doc/guides/nics/zxdh.rst | 2 + drivers/net/zxdh/zxdh_ethdev.c | 21 ++- drivers/net/zxdh/zxdh_ethdev.h | 2 + drivers/net/zxdh/zxdh_ethdev_ops.c | 132 +++++++++++++++++ drivers/net/zxdh/zxdh_ethdev_ops.h | 4 + drivers/net/zxdh/zxdh_msg.h | 10 ++ drivers/net/zxdh/zxdh_tables.c | 223 +++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_tables.h | 22 +++ 9 files changed, 417 insertions(+), 1 deletion(-) diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index dc09fe3453..e9b237e102 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -14,3 +14,5 @@ Link status = Y Link status event = Y Unicast MAC filter = Y Multicast MAC filter = Y +Promiscuous mode = Y +Allmulticast mode = Y diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index e0b0776aca..0399df1302 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -26,6 +26,8 @@ Features of the ZXDH PMD are: - Set Link down or up - Unicast MAC filter - Multicast MAC filter +- Promiscuous mode +- Multicast mode Driver compilation and testing diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 85ada87cdc..1d64b877c1 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -901,8 +901,16 @@ zxdh_tables_uninit(struct rte_eth_dev *dev) int ret; ret = zxdh_port_attr_uninit(dev); - if (ret) + if (ret) { PMD_DRV_LOG(ERR, "zxdh_port_attr_uninit failed"); + return ret; + } + + ret = zxdh_promisc_table_uninit(dev); + if (ret) { + PMD_DRV_LOG(ERR, "uninit promisc_table failed"); + return ret; + } return ret; } @@ -1055,6 +1063,10 @@ static const struct eth_dev_ops zxdh_eth_dev_ops = { .mac_addr_add = zxdh_dev_mac_addr_add, .mac_addr_remove = zxdh_dev_mac_addr_remove, .mac_addr_set = zxdh_dev_mac_addr_set, + .promiscuous_enable = zxdh_dev_promiscuous_enable, + .promiscuous_disable = zxdh_dev_promiscuous_disable, + .allmulticast_enable = zxdh_dev_allmulticast_enable, + .allmulticast_disable = zxdh_dev_allmulticast_disable, }; static int32_t @@ -1306,6 +1318,13 @@ zxdh_tables_init(struct rte_eth_dev *dev) PMD_DRV_LOG(ERR, " panel table init failed"); return ret; } + + ret = zxdh_promisc_table_init(dev); + if (ret) { + PMD_DRV_LOG(ERR, "promisc_table_init failed"); + return ret; + } + return ret; } diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index 5b95cb1c2a..3cdac5de73 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -98,6 +98,8 @@ struct zxdh_hw { uint8_t has_tx_offload; uint8_t has_rx_offload; uint8_t admin_status; + uint8_t promisc_status; + uint8_t allmulti_status; }; struct zxdh_dtb_shared_data { diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.c b/drivers/net/zxdh/zxdh_ethdev_ops.c index 
751f80e9b4..aed4e6410c 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.c +++ b/drivers/net/zxdh/zxdh_ethdev_ops.c @@ -397,3 +397,135 @@ void zxdh_dev_mac_addr_remove(struct rte_eth_dev *dev __rte_unused, uint32_t ind } memset(&dev->data->mac_addrs[index], 0, sizeof(struct rte_ether_addr)); } + +int zxdh_dev_promiscuous_enable(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_info msg_info = {0}; + int16_t ret = 0; + + if (hw->promisc_status == 0) { + if (hw->is_pf) { + ret = zxdh_dev_unicast_table_set(hw, hw->vport.vport, true); + if (hw->allmulti_status == 0) + ret = zxdh_dev_multicast_table_set(hw, hw->vport.vport, true); + + } else { + struct zxdh_port_promisc_msg *promisc_msg = &msg_info.data.port_promisc_msg; + + zxdh_msg_head_build(hw, ZXDH_PORT_PROMISC_SET, &msg_info); + promisc_msg->mode = ZXDH_PROMISC_MODE; + promisc_msg->value = true; + if (hw->allmulti_status == 0) + promisc_msg->mc_follow = true; + + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_PROMISC_MODE); + return ret; + } + } + hw->promisc_status = 1; + } + return ret; +} +/** + * Fun: + */ +int zxdh_dev_promiscuous_disable(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + int16_t ret = 0; + struct zxdh_msg_info msg_info = {0}; + + if (hw->promisc_status == 1) { + if (hw->is_pf) { + ret = zxdh_dev_unicast_table_set(hw, hw->vport.vport, false); + if (hw->allmulti_status == 0) + ret = zxdh_dev_multicast_table_set(hw, hw->vport.vport, false); + + } else { + struct zxdh_port_promisc_msg *promisc_msg = &msg_info.data.port_promisc_msg; + + zxdh_msg_head_build(hw, ZXDH_PORT_PROMISC_SET, &msg_info); + promisc_msg->mode = ZXDH_PROMISC_MODE; + promisc_msg->value = false; + if (hw->allmulti_status == 0) + promisc_msg->mc_follow = true; + + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_PROMISC_MODE); + return ret; + } + } + hw->promisc_status = 0; + } + return ret; +} +/** + * Fun: + */ +int zxdh_dev_allmulticast_enable(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + int16_t ret = 0; + struct zxdh_msg_info msg_info = {0}; + + if (hw->allmulti_status == 0) { + if (hw->is_pf) { + ret = zxdh_dev_multicast_table_set(hw, hw->vport.vport, true); + } else { + struct zxdh_port_promisc_msg *promisc_msg = &msg_info.data.port_promisc_msg; + + zxdh_msg_head_build(hw, ZXDH_PORT_PROMISC_SET, &msg_info); + + promisc_msg->mode = ZXDH_ALLMULTI_MODE; + promisc_msg->value = true; + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_ALLMULTI_MODE); + return ret; + } + } + hw->allmulti_status = 1; + } + return ret; +} + +int zxdh_dev_allmulticast_disable(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + int16_t ret = 0; + struct zxdh_msg_info msg_info = {0}; + + if (hw->allmulti_status == 1) { + if (hw->is_pf) { + if (hw->promisc_status == 1) + goto end; + ret = zxdh_dev_multicast_table_set(hw, hw->vport.vport, false); + } else { + struct zxdh_port_promisc_msg *promisc_msg = &msg_info.data.port_promisc_msg; + + zxdh_msg_head_build(hw, ZXDH_PORT_PROMISC_SET, &msg_info); + if (hw->promisc_status == 1) + goto end; + promisc_msg->mode = ZXDH_ALLMULTI_MODE; + 
promisc_msg->value = false; + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_ALLMULTI_MODE); + return ret; + } + } + hw->allmulti_status = 0; + } + return ret; +end: + hw->allmulti_status = 0; + return ret; +} diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.h b/drivers/net/zxdh/zxdh_ethdev_ops.h index 4630bb70db..394ddedc0e 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.h +++ b/drivers/net/zxdh/zxdh_ethdev_ops.h @@ -14,5 +14,9 @@ int zxdh_dev_mac_addr_add(struct rte_eth_dev *dev, struct rte_ether_addr *mac_ad uint32_t index, uint32_t vmdq); int zxdh_dev_mac_addr_set(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr); void zxdh_dev_mac_addr_remove(struct rte_eth_dev *dev, uint32_t index); +int zxdh_dev_promiscuous_enable(struct rte_eth_dev *dev); +int zxdh_dev_promiscuous_disable(struct rte_eth_dev *dev); +int zxdh_dev_allmulticast_enable(struct rte_eth_dev *dev); +int zxdh_dev_allmulticast_disable(struct rte_eth_dev *dev); #endif /* ZXDH_ETHDEV_OPS_H */ diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index 44ce5d1b7f..2abf579a80 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -48,6 +48,8 @@ #define ZXDH_MAC_FILTER 0xaa #define ZXDH_MAC_UNFILTER 0xff +#define ZXDH_PROMISC_MODE 1 +#define ZXDH_ALLMULTI_MODE 2 enum ZXDH_DRIVER_TYPE { ZXDH_MSG_CHAN_END_MPF = 0, @@ -180,6 +182,7 @@ enum zxdh_msg_type { ZXDH_MAC_DEL = 4, ZXDH_PORT_ATTRS_SET = 25, + ZXDH_PORT_PROMISC_SET = 26, ZXDH_MSG_TYPE_END, }; @@ -325,6 +328,12 @@ struct zxdh_mac_filter { struct rte_ether_addr mac; } __rte_packed; +struct zxdh_port_promisc_msg { + uint8_t mode; + uint8_t value; + uint8_t mc_follow; +} __rte_packed; + struct zxdh_msg_head { enum zxdh_msg_type msg_type; uint16_t vport; @@ -353,6 +362,7 @@ struct zxdh_msg_info { struct zxdh_port_attr_set_msg port_attr_msg; struct zxdh_link_info_msg link_msg; struct zxdh_mac_filter mac_filter_msg; + struct zxdh_port_promisc_msg port_promisc_msg; } __rte_packed data; } __rte_packed; diff --git a/drivers/net/zxdh/zxdh_tables.c b/drivers/net/zxdh/zxdh_tables.c index f5b607584d..45aeb3e3e4 100644 --- a/drivers/net/zxdh/zxdh_tables.c +++ b/drivers/net/zxdh/zxdh_tables.c @@ -10,10 +10,15 @@ #define ZXDH_SDT_VPORT_ATT_TABLE 1 #define ZXDH_SDT_PANEL_ATT_TABLE 2 +#define ZXDH_SDT_BROCAST_ATT_TABLE 6 +#define ZXDH_SDT_UNICAST_ATT_TABLE 10 +#define ZXDH_SDT_MULTICAST_ATT_TABLE 11 #define ZXDH_MAC_HASH_INDEX_BASE 64 #define ZXDH_MAC_HASH_INDEX(index) (ZXDH_MAC_HASH_INDEX_BASE + (index)) #define ZXDH_MC_GROUP_NUM 4 +#define ZXDH_BASE_VFID 1152 +#define ZXDH_TABLE_HIT_FLAG 128 int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr) @@ -346,3 +351,221 @@ zxdh_del_mac_table(uint16_t vport, struct rte_ether_addr *addr, uint8_t hash_se } return 0; } + +int +zxdh_promisc_table_init(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint32_t ret, vf_group_id = 0; + struct zxdh_brocast_table brocast_table = {0}; + struct zxdh_unitcast_table uc_table = {0}; + struct zxdh_multicast_table mc_table = {0}; + + if (!hw->is_pf) + return 0; + + for (; vf_group_id < 4; vf_group_id++) { + brocast_table.flag = rte_be_to_cpu_32(ZXDH_TABLE_HIT_FLAG); + ZXDH_DTB_ERAM_ENTRY_INFO_T eram_brocast_entry = { + .index = ((hw->vfid - ZXDH_BASE_VFID) << 2) + vf_group_id, + .p_data = (uint32_t *)&brocast_table + }; + ZXDH_DTB_USER_ENTRY_T entry_brocast = { + .sdt_no = ZXDH_SDT_BROCAST_ATT_TABLE, 
+ .p_entry_data = (void *)&eram_brocast_entry + }; + + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &entry_brocast); + if (ret) { + PMD_DRV_LOG(ERR, "write brocast table failed"); + return ret; + } + + uc_table.uc_flood_pf_enable = rte_be_to_cpu_32(ZXDH_TABLE_HIT_FLAG); + ZXDH_DTB_ERAM_ENTRY_INFO_T eram_uc_entry = { + .index = ((hw->vfid - ZXDH_BASE_VFID) << 2) + vf_group_id, + .p_data = (uint32_t *)&uc_table + }; + ZXDH_DTB_USER_ENTRY_T entry_unicast = { + .sdt_no = ZXDH_SDT_UNICAST_ATT_TABLE, + .p_entry_data = (void *)&eram_uc_entry + }; + + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &entry_unicast); + if (ret) { + PMD_DRV_LOG(ERR, "write unicast table failed"); + return ret; + } + + mc_table.mc_flood_pf_enable = rte_be_to_cpu_32(ZXDH_TABLE_HIT_FLAG); + ZXDH_DTB_ERAM_ENTRY_INFO_T eram_mc_entry = { + .index = ((hw->vfid - ZXDH_BASE_VFID) << 2) + vf_group_id, + .p_data = (uint32_t *)&mc_table + }; + ZXDH_DTB_USER_ENTRY_T entry_multicast = { + .sdt_no = ZXDH_SDT_MULTICAST_ATT_TABLE, + .p_entry_data = (void *)&eram_mc_entry + }; + + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, g_dtb_data.queueid, + 1, &entry_multicast); + if (ret) { + PMD_DRV_LOG(ERR, "write multicast table failed"); + return ret; + } + } + + return ret; +} + +int +zxdh_promisc_table_uninit(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint32_t ret, vf_group_id = 0; + struct zxdh_brocast_table brocast_table = {0}; + struct zxdh_unitcast_table uc_table = {0}; + struct zxdh_multicast_table mc_table = {0}; + + if (!hw->is_pf) + return 0; + + for (; vf_group_id < 4; vf_group_id++) { + brocast_table.flag = rte_be_to_cpu_32(ZXDH_TABLE_HIT_FLAG); + ZXDH_DTB_ERAM_ENTRY_INFO_T eram_brocast_entry = { + .index = ((hw->vfid - ZXDH_BASE_VFID) << 2) + vf_group_id, + .p_data = (uint32_t *)&brocast_table + }; + ZXDH_DTB_USER_ENTRY_T entry_brocast = { + .sdt_no = ZXDH_SDT_BROCAST_ATT_TABLE, + .p_entry_data = (void *)&eram_brocast_entry + }; + + ret = zxdh_np_dtb_table_entry_delete(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &entry_brocast); + if (ret) { + PMD_DRV_LOG(ERR, "write brocast table failed"); + return ret; + } + + uc_table.uc_flood_pf_enable = rte_be_to_cpu_32(ZXDH_TABLE_HIT_FLAG); + ZXDH_DTB_ERAM_ENTRY_INFO_T eram_uc_entry = { + .index = ((hw->vfid - ZXDH_BASE_VFID) << 2) + vf_group_id, + .p_data = (uint32_t *)&uc_table + }; + ZXDH_DTB_USER_ENTRY_T entry_unicast = { + .sdt_no = ZXDH_SDT_UNICAST_ATT_TABLE, + .p_entry_data = (void *)&eram_uc_entry + }; + + ret = zxdh_np_dtb_table_entry_delete(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &entry_unicast); + if (ret) { + PMD_DRV_LOG(ERR, "write unicast table failed"); + return ret; + } + + mc_table.mc_flood_pf_enable = rte_be_to_cpu_32(ZXDH_TABLE_HIT_FLAG); + ZXDH_DTB_ERAM_ENTRY_INFO_T eram_mc_entry = { + .index = ((hw->vfid - ZXDH_BASE_VFID) << 2) + vf_group_id, + .p_data = (uint32_t *)&mc_table + }; + ZXDH_DTB_USER_ENTRY_T entry_multicast = { + .sdt_no = ZXDH_SDT_MULTICAST_ATT_TABLE, + .p_entry_data = (void *)&eram_mc_entry + }; + + ret = zxdh_np_dtb_table_entry_delete(ZXDH_DEVICE_NO, g_dtb_data.queueid, + 1, &entry_multicast); + if (ret) { + PMD_DRV_LOG(ERR, "write multicast table failed"); + return ret; + } + } + + return ret; +} + +int +zxdh_dev_unicast_table_set(struct zxdh_hw *hw, uint16_t vport, bool enable) +{ + int16_t ret = 0; + struct zxdh_unitcast_table uc_table = {0}; + union zxdh_virport_num vport_num = (union zxdh_virport_num)vport; + + ZXDH_DTB_ERAM_ENTRY_INFO_T 
uc_table_entry = { + .index = ((hw->vfid - ZXDH_BASE_VFID) << 2) + vport_num.vfid / 64, + .p_data = (uint32_t *)&uc_table + }; + ZXDH_DTB_USER_ENTRY_T entry = { + .sdt_no = ZXDH_SDT_UNICAST_ATT_TABLE, + .p_entry_data = (void *)&uc_table_entry + }; + + ret = zxdh_np_dtb_table_entry_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, &entry, 1); + if (ret) { + PMD_DRV_LOG(ERR, "unicast_table_get_failed:%d", hw->vfid); + return -ret; + } + + if (vport_num.vf_flag) { + if (enable) + uc_table.bitmap[(vport_num.vfid % 64) / 32] |= + UINT32_C(1) << (31 - (vport_num.vfid % 64) % 32); + else + uc_table.bitmap[(vport_num.vfid % 64) / 32] &= + ~(UINT32_C(1) << (31 - (vport_num.vfid % 64) % 32)); + } else { + uc_table.uc_flood_pf_enable = rte_be_to_cpu_32(ZXDH_TABLE_HIT_FLAG + (enable << 6)); + } + + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, g_dtb_data.queueid, 1, &entry); + if (ret) { + PMD_DRV_LOG(ERR, "unicast_table_set_failed:%d", hw->vfid); + return -ret; + } + return 0; +} + +int +zxdh_dev_multicast_table_set(struct zxdh_hw *hw, uint16_t vport, bool enable) +{ + int16_t ret = 0; + struct zxdh_multicast_table mc_table = {0}; + union zxdh_virport_num vport_num = (union zxdh_virport_num)vport; + + ZXDH_DTB_ERAM_ENTRY_INFO_T mc_table_entry = { + .index = ((hw->vfid - ZXDH_BASE_VFID) << 2) + vport_num.vfid / 64, + .p_data = (uint32_t *)&mc_table + }; + ZXDH_DTB_USER_ENTRY_T entry = { + .sdt_no = ZXDH_SDT_MULTICAST_ATT_TABLE, + .p_entry_data = (void *)&mc_table_entry + }; + + ret = zxdh_np_dtb_table_entry_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, &entry, 1); + if (ret) { + PMD_DRV_LOG(ERR, "allmulti_table_get_failed:%d", hw->vfid); + return -ret; + } + + if (vport_num.vf_flag) { + if (enable) + mc_table.bitmap[(vport_num.vfid % 64) / 32] |= + UINT32_C(1) << (31 - (vport_num.vfid % 64) % 32); + else + mc_table.bitmap[(vport_num.vfid % 64) / 32] &= + ~(UINT32_C(1) << (31 - (vport_num.vfid % 64) % 32)); + + } else { + mc_table.mc_flood_pf_enable = rte_be_to_cpu_32(ZXDH_TABLE_HIT_FLAG + (enable << 6)); + } + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, g_dtb_data.queueid, 1, &entry); + if (ret) { + PMD_DRV_LOG(ERR, "allmulti_table_set_failed:%d", hw->vfid); + return -ret; + } + return 0; +} diff --git a/drivers/net/zxdh/zxdh_tables.h b/drivers/net/zxdh/zxdh_tables.h index f16c4923ef..0a1ddf7d9e 100644 --- a/drivers/net/zxdh/zxdh_tables.h +++ b/drivers/net/zxdh/zxdh_tables.h @@ -176,12 +176,34 @@ struct zxdh_mac_multicast_table { struct zxdh_mac_multicast_entry entry; }; +struct zxdh_brocast_table { + uint32_t flag; + uint32_t rsv; + uint32_t bitmap[2]; +}; + +struct zxdh_unitcast_table { + uint32_t uc_flood_pf_enable; + uint32_t rsv; + uint32_t bitmap[2]; +}; + +struct zxdh_multicast_table { + uint32_t mc_flood_pf_enable; + uint32_t rsv; + uint32_t bitmap[2]; +}; + int zxdh_port_attr_init(struct rte_eth_dev *dev); int zxdh_panel_table_init(struct rte_eth_dev *dev); int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); int zxdh_port_attr_uninit(struct rte_eth_dev *dev); +int zxdh_promisc_table_init(struct rte_eth_dev *dev); +int zxdh_promisc_table_uninit(struct rte_eth_dev *dev); int zxdh_get_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); int zxdh_set_mac_table(uint16_t vport, struct rte_ether_addr *addr, uint8_t hash_search_idx); int zxdh_del_mac_table(uint16_t vport, struct rte_ether_addr *addr, uint8_t hash_search_idx); +int zxdh_dev_unicast_table_set(struct zxdh_hw *hw, uint16_t vport, bool enable); +int zxdh_dev_multicast_table_set(struct zxdh_hw *hw, 
uint16_t vport, bool enable); #endif /* ZXDH_TABLES_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 45386 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
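All three flood tables touched above (the broadcast, unicast and multicast attribute tables) are addressed with the same ERAM index: each PF owns four consecutive rows starting at (vfid - ZXDH_BASE_VFID) * 4, and a target VF selects the row for its 64-VF group. A minimal sketch of that row selection, with the constant copied from the patch; the sample vfids are hypothetical.

#include <stdint.h>
#include <stdio.h>

#define ZXDH_BASE_VFID 1152	/* first PF vfid, from the patch */

/* Row selection used by zxdh_promisc_table_init() and
 * zxdh_dev_unicast/multicast_table_set() above. */
static uint32_t flood_table_index(uint16_t pf_vfid, uint16_t target_vfid)
{
	uint32_t vf_group_id = target_vfid / 64;	/* 0..3 */

	return ((uint32_t)(pf_vfid - ZXDH_BASE_VFID) << 2) + vf_group_id;
}

int main(void)
{
	uint16_t pf_vfid = 1153;	/* hypothetical PF vfid */

	printf("vf 10  -> row %u\n", flood_table_index(pf_vfid, 10));	/* 4 */
	printf("vf 130 -> row %u\n", flood_table_index(pf_vfid, 130));	/* 6 */
	return 0;
}

Within a row, a VF's bit is (31 - (vfid % 64) % 32) in word (vfid % 64) / 32 of the bitmap, while the PF's own flood enable lives in the leading flag word (ZXDH_TABLE_HIT_FLAG + (enable << 6)), as in zxdh_dev_unicast_table_set() above.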
* [PATCH v3 12/15] net/zxdh: vlan filter/ offload ops implementations 2024-12-17 11:41 ` [PATCH v3 00/15] net/zxdh: updated net zxdh driver Junlong Wang ` (10 preceding siblings ...) 2024-12-17 11:41 ` [PATCH v3 11/15] net/zxdh: promisc/allmulti " Junlong Wang @ 2024-12-17 11:41 ` Junlong Wang 2024-12-17 11:41 ` [PATCH v3 13/15] net/zxdh: rss hash config/update, reta update/get Junlong Wang ` (2 subsequent siblings) 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-17 11:41 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 20668 bytes --] provided vlan filter, vlan offload ops. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- doc/guides/nics/features/zxdh.ini | 3 + doc/guides/nics/zxdh.rst | 3 + drivers/net/zxdh/zxdh_ethdev.c | 40 +++++- drivers/net/zxdh/zxdh_ethdev_ops.c | 223 +++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev_ops.h | 2 + drivers/net/zxdh/zxdh_msg.h | 22 +++ drivers/net/zxdh/zxdh_rxtx.c | 18 +++ drivers/net/zxdh/zxdh_tables.c | 99 +++++++++++++ drivers/net/zxdh/zxdh_tables.h | 10 +- 9 files changed, 417 insertions(+), 3 deletions(-) diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index e9b237e102..6fb006c2da 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -16,3 +16,6 @@ Unicast MAC filter = Y Multicast MAC filter = Y Promiscuous mode = Y Allmulticast mode = Y +VLAN filter = Y +VLAN offload = Y +QinQ offload = Y diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index 0399df1302..3a7585d123 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -28,6 +28,9 @@ Features of the ZXDH PMD are: - Multicast MAC filter - Promiscuous mode - Multicast mode +- VLAN filter and VLAN offload +- VLAN stripping and inserting +- QINQ stripping and inserting Driver compilation and testing diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 1d64b877c1..cc32b467a9 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -758,6 +758,34 @@ zxdh_alloc_queues(struct rte_eth_dev *dev, uint16_t nr_vq) return 0; } +static int +zxdh_vlan_offload_configure(struct rte_eth_dev *dev) +{ + int ret; + int mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK | RTE_ETH_QINQ_STRIP_MASK; + + ret = zxdh_dev_vlan_offload_set(dev, mask); + if (ret) { + PMD_DRV_LOG(ERR, "vlan offload set error"); + return -1; + } + + return 0; +} + +static int +zxdh_dev_conf_offload(struct rte_eth_dev *dev) +{ + int ret = 0; + + ret = zxdh_vlan_offload_configure(dev); + if (ret) { + PMD_DRV_LOG(ERR, "zxdh_vlan_offload_configure failed"); + return ret; + } + + return 0; +} static int32_t zxdh_dev_configure(struct rte_eth_dev *dev) @@ -815,7 +843,7 @@ zxdh_dev_configure(struct rte_eth_dev *dev) nr_vq = dev->data->nb_rx_queues + dev->data->nb_tx_queues; if (nr_vq == hw->queue_num) - return 0; + goto end; PMD_DRV_LOG(DEBUG, "queue changed need reset "); /* Reset the device although not necessary at startup */ @@ -847,6 +875,8 @@ zxdh_dev_configure(struct rte_eth_dev *dev) zxdh_pci_reinit_complete(hw); +end: + zxdh_dev_conf_offload(dev); return ret; } @@ -1067,6 +1097,8 @@ static const struct eth_dev_ops zxdh_eth_dev_ops = { .promiscuous_disable = zxdh_dev_promiscuous_disable, .allmulticast_enable = zxdh_dev_allmulticast_enable, .allmulticast_disable = zxdh_dev_allmulticast_disable, + .vlan_filter_set = zxdh_dev_vlan_filter_set, + .vlan_offload_set = 
zxdh_dev_vlan_offload_set, }; static int32_t @@ -1325,6 +1357,12 @@ zxdh_tables_init(struct rte_eth_dev *dev) return ret; } + ret = zxdh_vlan_filter_table_init(dev); + if (ret) { + PMD_DRV_LOG(ERR, " vlan filter table init failed"); + return ret; + } + return ret; } diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.c b/drivers/net/zxdh/zxdh_ethdev_ops.c index aed4e6410c..94c5e6dbc8 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.c +++ b/drivers/net/zxdh/zxdh_ethdev_ops.c @@ -2,6 +2,8 @@ * Copyright(c) 2024 ZTE Corporation */ +#include <rte_malloc.h> + #include "zxdh_ethdev.h" #include "zxdh_pci.h" #include "zxdh_msg.h" @@ -9,6 +11,8 @@ #include "zxdh_tables.h" #include "zxdh_logs.h" +#define ZXDH_VLAN_FILTER_GROUPS 64 + static int32_t zxdh_config_port_status(struct rte_eth_dev *dev, uint16_t link_status) { struct zxdh_hw *hw = dev->data->dev_private; @@ -529,3 +533,222 @@ int zxdh_dev_allmulticast_disable(struct rte_eth_dev *dev) hw->allmulti_status = 0; return ret; } + +int +zxdh_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on) +{ + struct zxdh_hw *hw = (struct zxdh_hw *)dev->data->dev_private; + uint16_t idx = 0; + uint16_t bit_idx = 0; + uint8_t msg_type = 0; + int ret = 0; + + vlan_id &= RTE_VLAN_ID_MASK; + if (vlan_id == 0 || vlan_id == RTE_ETHER_MAX_VLAN_ID) { + PMD_DRV_LOG(ERR, "vlan id (%d) is reserved", vlan_id); + return -EINVAL; + } + + if (dev->data->dev_started == 0) { + PMD_DRV_LOG(ERR, "vlan_filter dev not start"); + return -1; + } + + idx = vlan_id / ZXDH_VLAN_FILTER_GROUPS; + bit_idx = vlan_id % ZXDH_VLAN_FILTER_GROUPS; + + if (on) { + if (dev->data->vlan_filter_conf.ids[idx] & (1ULL << bit_idx)) { + PMD_DRV_LOG(ERR, "vlan:%d has already added.", vlan_id); + return 0; + } + msg_type = ZXDH_VLAN_FILTER_ADD; + } else { + if (!(dev->data->vlan_filter_conf.ids[idx] & (1ULL << bit_idx))) { + PMD_DRV_LOG(ERR, "vlan:%d has already deleted.", vlan_id); + return 0; + } + msg_type = ZXDH_VLAN_FILTER_DEL; + } + + if (hw->is_pf) { + ret = zxdh_vlan_filter_table_set(hw->vport.vport, vlan_id, on); + if (ret) { + PMD_DRV_LOG(ERR, "vlan_id:%d table set failed.", vlan_id); + return -1; + } + } else { + struct zxdh_msg_info msg = {0}; + zxdh_msg_head_build(hw, msg_type, &msg); + msg.data.vlan_filter_msg.vlan_id = vlan_id; + ret = zxdh_vf_send_msg_to_pf(dev, &msg, sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d ", + hw->vport.vport, msg_type); + return ret; + } + } + + if (on) + dev->data->vlan_filter_conf.ids[idx] |= (1ULL << bit_idx); + else + dev->data->vlan_filter_conf.ids[idx] &= ~(1ULL << bit_idx); + + return 0; +} + +int +zxdh_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct rte_eth_rxmode *rxmode; + struct zxdh_msg_info msg = {0}; + struct zxdh_port_attr_table port_attr = {0}; + int ret = 0; + + rxmode = &dev->data->dev_conf.rxmode; + if (mask & RTE_ETH_VLAN_FILTER_MASK) { + if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) { + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.vlan_filter_enable = true; + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "port %d vlan filter set failed", + hw->vport.vfid); + return -EAGAIN; + } + } else { + msg.data.vlan_filter_set_msg.enable = true; + zxdh_msg_head_build(hw, ZXDH_VLAN_FILTER_SET, &msg); + ret = zxdh_vf_send_msg_to_pf(hw->eth_dev, &msg, + sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, 
"port %d vlan filter set failed", + hw->vport.vfid); + return -EAGAIN; + } + } + } else { + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.vlan_filter_enable = false; + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "port %d vlan filter set failed", + hw->vport.vfid); + return -EAGAIN; + } + } else { + msg.data.vlan_filter_set_msg.enable = true; + zxdh_msg_head_build(hw, ZXDH_VLAN_FILTER_SET, &msg); + ret = zxdh_vf_send_msg_to_pf(hw->eth_dev, &msg, + sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "port %d vlan filter set failed", + hw->vport.vfid); + return -EAGAIN; + } + } + } + } + + if (mask & RTE_ETH_VLAN_STRIP_MASK) { + if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) { + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.vlan_strip_offload = true; + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "port %d vlan strip set failed", + hw->vport.vfid); + return -EAGAIN; + } + } else { + msg.data.vlan_offload_msg.enable = true; + msg.data.vlan_offload_msg.type = ZXDH_VLAN_STRIP_MSG_TYPE; + zxdh_msg_head_build(hw, ZXDH_VLAN_OFFLOAD, &msg); + ret = zxdh_vf_send_msg_to_pf(hw->eth_dev, &msg, + sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "port %d vlan strip set failed", + hw->vport.vfid); + return -EAGAIN; + } + } + } else { + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.vlan_strip_offload = false; + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "port %d vlan strip set failed", + hw->vport.vfid); + return -EAGAIN; + } + } else { + msg.data.vlan_offload_msg.enable = false; + msg.data.vlan_offload_msg.type = ZXDH_VLAN_STRIP_MSG_TYPE; + zxdh_msg_head_build(hw, ZXDH_VLAN_OFFLOAD, &msg); + ret = zxdh_vf_send_msg_to_pf(hw->eth_dev, &msg, + sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "port %d vlan strip set failed", + hw->vport.vfid); + return -EAGAIN; + } + } + } + } + + if (mask & RTE_ETH_QINQ_STRIP_MASK) { + memset(&msg, 0, sizeof(struct zxdh_msg_info)); + if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP) { + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.qinq_strip_offload = true; + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "port %d qinq offload set failed", + hw->vport.vfid); + return -EAGAIN; + } + } else { + msg.data.vlan_offload_msg.enable = true; + msg.data.vlan_offload_msg.type = ZXDH_QINQ_STRIP_MSG_TYPE; + zxdh_msg_head_build(hw, ZXDH_VLAN_OFFLOAD, &msg); + ret = zxdh_vf_send_msg_to_pf(hw->eth_dev, &msg, + sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "port %d qinq offload set failed", + hw->vport.vfid); + return -EAGAIN; + } + } + } else { + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.qinq_strip_offload = true; + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "port %d qinq offload set failed", + hw->vport.vfid); + return -EAGAIN; + } + } else { + msg.data.vlan_offload_msg.enable = false; + msg.data.vlan_offload_msg.type = ZXDH_QINQ_STRIP_MSG_TYPE; + zxdh_msg_head_build(hw, ZXDH_VLAN_OFFLOAD, &msg); + ret = zxdh_vf_send_msg_to_pf(hw->eth_dev, &msg, + sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "port %d qinq offload set failed", + hw->vport.vfid); + return -EAGAIN; + 
} + } + } + } + + return ret; +} diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.h b/drivers/net/zxdh/zxdh_ethdev_ops.h index 394ddedc0e..058d271ab3 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.h +++ b/drivers/net/zxdh/zxdh_ethdev_ops.h @@ -18,5 +18,7 @@ int zxdh_dev_promiscuous_enable(struct rte_eth_dev *dev); int zxdh_dev_promiscuous_disable(struct rte_eth_dev *dev); int zxdh_dev_allmulticast_enable(struct rte_eth_dev *dev); int zxdh_dev_allmulticast_disable(struct rte_eth_dev *dev); +int zxdh_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on); +int zxdh_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask); #endif /* ZXDH_ETHDEV_OPS_H */ diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index 2abf579a80..ec15388f7a 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -50,6 +50,8 @@ #define ZXDH_MAC_UNFILTER 0xff #define ZXDH_PROMISC_MODE 1 #define ZXDH_ALLMULTI_MODE 2 +#define ZXDH_VLAN_STRIP_MSG_TYPE 0 +#define ZXDH_QINQ_STRIP_MSG_TYPE 1 enum ZXDH_DRIVER_TYPE { ZXDH_MSG_CHAN_END_MPF = 0, @@ -180,6 +182,10 @@ enum zxdh_msg_type { ZXDH_VF_PORT_UNINIT = 2, ZXDH_MAC_ADD = 3, ZXDH_MAC_DEL = 4, + ZXDH_VLAN_FILTER_SET = 17, + ZXDH_VLAN_FILTER_ADD = 18, + ZXDH_VLAN_FILTER_DEL = 19, + ZXDH_VLAN_OFFLOAD = 21, ZXDH_PORT_ATTRS_SET = 25, ZXDH_PORT_PROMISC_SET = 26, @@ -341,6 +347,19 @@ struct zxdh_msg_head { uint16_t pcieid; } __rte_packed; +struct zxdh_vlan_filter { + uint16_t vlan_id; +}; + +struct zxdh_vlan_filter_set { + uint8_t enable; +}; + +struct zxdh_vlan_offload { + uint8_t enable; + uint8_t type; +} __rte_packed; + struct zxdh_agent_msg_head { enum zxdh_agent_msg_type msg_type; + uint8_t panel_id; @@ -363,6 +382,9 @@ struct zxdh_msg_info { struct zxdh_link_info_msg link_msg; struct zxdh_mac_filter mac_filter_msg; struct zxdh_port_promisc_msg port_promisc_msg; + struct zxdh_vlan_filter vlan_filter_msg; + struct zxdh_vlan_filter_set vlan_filter_set_msg; + struct zxdh_vlan_offload vlan_offload_msg; } __rte_packed data; } __rte_packed; diff --git a/drivers/net/zxdh/zxdh_rxtx.c b/drivers/net/zxdh/zxdh_rxtx.c index 00e926bab9..e36ba39423 100644 --- a/drivers/net/zxdh/zxdh_rxtx.c +++ b/drivers/net/zxdh/zxdh_rxtx.c @@ -11,6 +11,9 @@ #include "zxdh_pci.h" #include "zxdh_queue.h" +#define ZXDH_SVLAN_TPID 0x88a8 +#define ZXDH_CVLAN_TPID 0x8100 + #define ZXDH_PKT_FORM_CPU 0x20 /* 1-cpu 0-np */ #define ZXDH_NO_IP_FRAGMENT 0x2000 /* ip fragment flag */ #define ZXDH_NO_IPID_UPDATE 0x4000 /* ipid update flag */ @@ -21,6 +24,9 @@ #define ZXDH_PI_L3TYPE_RSV 0xC0 #define ZXDH_PI_L3TYPE_MASK 0xC0 +#define ZXDH_PD_OFFLOAD_SVLAN_INSERT (1 << 14) +#define ZXDH_PD_OFFLOAD_CVLAN_INSERT (1 << 13) + #define ZXDH_PCODE_MASK 0x1F #define ZXDH_PCODE_IP_PKT_TYPE 0x01 #define ZXDH_PCODE_TCP_PKT_TYPE 0x02 @@ -259,6 +265,18 @@ static void zxdh_xmit_fill_net_hdr(struct rte_mbuf *cookie, hdr->pi_hdr.l3_offset = rte_be_to_cpu_16(l3_offset); hdr->pi_hdr.l4_offset = rte_be_to_cpu_16(l3_offset + cookie->l3_len); + if (cookie->ol_flags & RTE_MBUF_F_TX_VLAN) { + ol_flag |= ZXDH_PD_OFFLOAD_CVLAN_INSERT; + hdr->pi_hdr.vlan_id = rte_cpu_to_be_16(cookie->vlan_tci); + hdr->pd_hdr.cvlan_insert = + rte_cpu_to_be_32((ZXDH_CVLAN_TPID << 16) | cookie->vlan_tci); + } + if (cookie->ol_flags & RTE_MBUF_F_TX_QINQ) { + ol_flag |= ZXDH_PD_OFFLOAD_SVLAN_INSERT; + hdr->pd_hdr.svlan_insert = + rte_cpu_to_be_32((ZXDH_SVLAN_TPID << 16) | cookie->vlan_tci_outer); + } + hdr->pd_hdr.ol_flag = rte_be_to_cpu_32(ol_flag); } diff --git a/drivers/net/zxdh/zxdh_tables.c
b/drivers/net/zxdh/zxdh_tables.c index 45aeb3e3e4..ca98b36da2 100644 --- a/drivers/net/zxdh/zxdh_tables.c +++ b/drivers/net/zxdh/zxdh_tables.c @@ -10,6 +10,7 @@ #define ZXDH_SDT_VPORT_ATT_TABLE 1 #define ZXDH_SDT_PANEL_ATT_TABLE 2 +#define ZXDH_SDT_VLAN_ATT_TABLE 4 #define ZXDH_SDT_BROCAST_ATT_TABLE 6 #define ZXDH_SDT_UNICAST_ATT_TABLE 10 #define ZXDH_SDT_MULTICAST_ATT_TABLE 11 @@ -19,6 +20,10 @@ #define ZXDH_MC_GROUP_NUM 4 #define ZXDH_BASE_VFID 1152 #define ZXDH_TABLE_HIT_FLAG 128 +#define ZXDH_FIRST_VLAN_GROUP_BITS 23 +#define ZXDH_VLAN_GROUP_BITS 31 +#define ZXDH_VLAN_GROUP_NUM 35 +#define ZXDH_VLAN_FILTER_VLANID_STEP 120 int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr) @@ -569,3 +574,97 @@ zxdh_dev_multicast_table_set(struct zxdh_hw *hw, uint16_t vport, bool enable) } return 0; } +
+int +zxdh_vlan_filter_table_init(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_vlan_filter_table vlan_table = {0}; + int ret = 0; + + if (!hw->is_pf) + return 0; + + for (uint8_t vlan_group = 0; vlan_group < ZXDH_VLAN_GROUP_NUM; vlan_group++) { + if (vlan_group == 0) { + vlan_table.vlans[0] |= (1U << ZXDH_FIRST_VLAN_GROUP_BITS); + vlan_table.vlans[0] |= (1U << ZXDH_VLAN_GROUP_BITS); + + } else { + vlan_table.vlans[0] = 0; + } + uint32_t index = (vlan_group << 11) | hw->vport.vfid; + ZXDH_DTB_ERAM_ENTRY_INFO_T entry_data = { + .index = index, + .p_data = (uint32_t *)&vlan_table + }; + ZXDH_DTB_USER_ENTRY_T user_entry = {ZXDH_SDT_VLAN_ATT_TABLE, &entry_data}; + + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &user_entry); + if (ret != 0) { + PMD_DRV_LOG(ERR, + "[vfid:%d], vlan_group:%d, init vlan filter table failed", + hw->vport.vfid, vlan_group); + ret = -1; + } + } + + return ret; +} +
+int +zxdh_vlan_filter_table_set(uint16_t vport, uint16_t vlan_id, uint8_t enable) +{ + struct zxdh_vlan_filter_table vlan_table = {0}; + union zxdh_virport_num vport_num = (union zxdh_virport_num)vport; + int ret = 0; + + memset(&vlan_table, 0, sizeof(struct zxdh_vlan_filter_table)); + int table_num = vlan_id / ZXDH_VLAN_FILTER_VLANID_STEP; + uint32_t index = (table_num << 11) | vport_num.vfid; + uint16_t group = (vlan_id - table_num * ZXDH_VLAN_FILTER_VLANID_STEP) / 8 + 1; + + uint8_t val = sizeof(struct zxdh_vlan_filter_table) / sizeof(uint32_t); + uint8_t vlan_tbl_index = group / val; + uint16_t used_group = vlan_tbl_index * val; + + used_group = (used_group == 0 ? 0 : (used_group - 1)); + + ZXDH_DTB_ERAM_ENTRY_INFO_T entry_data = {index, (uint32_t *)&vlan_table}; + ZXDH_DTB_USER_ENTRY_T user_entry_get = {ZXDH_SDT_VLAN_ATT_TABLE, &entry_data}; + + ret = zxdh_np_dtb_table_entry_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, &user_entry_get, 1); + if (ret) { + PMD_DRV_LOG(ERR, "get vlan table failed"); + return -1; + } + uint16_t relative_vlan_id = vlan_id - table_num * ZXDH_VLAN_FILTER_VLANID_STEP; + uint32_t *base_group = &vlan_table.vlans[0]; + + *base_group |= 1U << 31; + base_group = &vlan_table.vlans[vlan_tbl_index]; + uint8_t valid_bits = (vlan_tbl_index == 0 ?
+ ZXDH_FIRST_VLAN_GROUP_BITS : ZXDH_VLAN_GROUP_BITS) + 1; + + uint8_t shift_left = (valid_bits - (relative_vlan_id - used_group * 8) % valid_bits) - 1; + + if (enable) + *base_group |= 1U << shift_left; + else + *base_group &= ~(1U << shift_left); + + + ZXDH_DTB_USER_ENTRY_T user_entry_write = { + .sdt_no = ZXDH_SDT_VLAN_ATT_TABLE, + .p_entry_data = &entry_data + }; + + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &user_entry_write); + if (ret != 0) { + PMD_DRV_LOG(ERR, "write vlan table failed"); + return -1; + } + return 0; +} diff --git a/drivers/net/zxdh/zxdh_tables.h b/drivers/net/zxdh/zxdh_tables.h index 0a1ddf7d9e..28d4f6f7cf 100644 --- a/drivers/net/zxdh/zxdh_tables.h +++ b/drivers/net/zxdh/zxdh_tables.h @@ -43,7 +43,7 @@ struct zxdh_port_attr_table { uint8_t rdma_offload_enable: 1; uint8_t vlan_filter_enable: 1; uint8_t vlan_strip_offload: 1; - uint8_t qinq_valn_strip_offload: 1; + uint8_t qinq_strip_offload: 1; uint8_t rss_enable: 1; uint8_t mtu_enable: 1; uint8_t hit_flag: 1; @@ -73,7 +73,7 @@ struct zxdh_port_attr_table { uint8_t rdma_offload_enable: 1; uint8_t vlan_filter_enable: 1; uint8_t vlan_strip_offload: 1; - uint8_t qinq_valn_strip_offload: 1; + uint8_t qinq_strip_offload: 1; uint8_t rss_enable: 1; uint8_t mtu_enable: 1; uint8_t hit_flag: 1; @@ -194,6 +194,10 @@ struct zxdh_multicast_table { uint32_t bitmap[2]; }; +struct zxdh_vlan_filter_table { + uint32_t vlans[4]; +}; + int zxdh_port_attr_init(struct rte_eth_dev *dev); int zxdh_panel_table_init(struct rte_eth_dev *dev); int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); @@ -205,5 +209,7 @@ int zxdh_set_mac_table(uint16_t vport, struct rte_ether_addr *addr, uint8_t has int zxdh_del_mac_table(uint16_t vport, struct rte_ether_addr *addr, uint8_t hash_search_idx); int zxdh_dev_unicast_table_set(struct zxdh_hw *hw, uint16_t vport, bool enable); int zxdh_dev_multicast_table_set(struct zxdh_hw *hw, uint16_t vport, bool enable); +int zxdh_vlan_filter_table_init(struct rte_eth_dev *dev); +int zxdh_vlan_filter_table_set(uint16_t vport, uint16_t vlan_id, uint8_t enable); #endif /* ZXDH_TABLES_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 55042 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
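A note on the bitmap arithmetic in zxdh_vlan_filter_table_set() above: each 16-byte eRAM entry covers ZXDH_VLAN_FILTER_VLANID_STEP (120) VLAN IDs, because word 0 of the entry carries only 24 usable filter bits (its bit 31 doubles as the entry hit flag) while words 1-3 carry 32 bits each (24 + 3 * 32 = 120). The standalone sketch below is not part of the patch; the helper name vlan_bit_pos and the sample values are purely illustrative, but the entry/word/bit computation mirrors the driver's:

#include <stdint.h>
#include <stdio.h>

#define FIRST_GROUP_BITS 23   /* ZXDH_FIRST_VLAN_GROUP_BITS */
#define GROUP_BITS       31   /* ZXDH_VLAN_GROUP_BITS */
#define VLANID_STEP      120  /* ZXDH_VLAN_FILTER_VLANID_STEP */

/* Mirror of the index/word/bit math in zxdh_vlan_filter_table_set(). */
static void vlan_bit_pos(uint16_t vlan_id, uint16_t vfid)
{
	int table_num = vlan_id / VLANID_STEP;
	uint32_t index = ((uint32_t)table_num << 11) | vfid;  /* eRAM entry  */
	uint16_t group = (vlan_id - table_num * VLANID_STEP) / 8 + 1;
	uint8_t word = group / 4;                             /* vlans[] word */
	uint16_t used_group = word * 4;

	used_group = (used_group == 0 ? 0 : used_group - 1);

	uint16_t rel = vlan_id - table_num * VLANID_STEP;
	uint8_t valid_bits = (word == 0 ? FIRST_GROUP_BITS : GROUP_BITS) + 1;
	uint8_t shift = (valid_bits - (rel - used_group * 8) % valid_bits) - 1;

	printf("vlan %u -> entry %u, vlans[%u] bit %u\n", vlan_id, index, word, shift);
}

int main(void)
{
	vlan_bit_pos(1, 0);    /* first entry, word 0          */
	vlan_bit_pos(100, 0);  /* first entry, a later word    */
	vlan_bit_pos(121, 0);  /* rolls over to the next entry */
	return 0;
}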
* [PATCH v3 13/15] net/zxdh: rss hash config/update, reta update/get 2024-12-17 11:41 ` [PATCH v3 00/15] net/zxdh: updated net zxdh driver Junlong Wang ` (11 preceding siblings ...) 2024-12-17 11:41 ` [PATCH v3 12/15] net/zxdh: vlan filter/ offload " Junlong Wang @ 2024-12-17 11:41 ` Junlong Wang 2024-12-17 11:41 ` [PATCH v3 14/15] net/zxdh: basic stats ops implementations Junlong Wang 2024-12-17 11:41 ` [PATCH v3 15/15] net/zxdh: mtu update " Junlong Wang 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-17 11:41 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 25467 bytes --] provided rss hash config/update, reta update/get ops. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- doc/guides/nics/features/zxdh.ini | 3 + doc/guides/nics/zxdh.rst | 1 + drivers/net/zxdh/zxdh_ethdev.c | 51 ++++ drivers/net/zxdh/zxdh_ethdev.h | 4 +- drivers/net/zxdh/zxdh_ethdev_ops.c | 410 +++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev_ops.h | 26 ++ drivers/net/zxdh/zxdh_msg.h | 22 ++ drivers/net/zxdh/zxdh_tables.c | 82 ++++++ drivers/net/zxdh/zxdh_tables.h | 7 + 9 files changed, 605 insertions(+), 1 deletion(-) diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index 6fb006c2da..415ca547d0 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -19,3 +19,6 @@ Allmulticast mode = Y VLAN filter = Y VLAN offload = Y QinQ offload = Y +RSS hash = Y +RSS reta update = Y +Inner RSS = Y diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index 3a7585d123..3cc6a1d348 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -31,6 +31,7 @@ Features of the ZXDH PMD are: - VLAN filter and VLAN offload - VLAN stripping and inserting - QINQ stripping and inserting +- Receive Side Scaling (RSS) Driver compilation and testing diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index cc32b467a9..17fca8e909 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -60,6 +60,8 @@ zxdh_dev_infos_get(struct rte_eth_dev *dev, dev_info->rx_offload_capa |= (RTE_ETH_RX_OFFLOAD_SCATTER); dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TCP_LRO; dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_RSS_HASH; + dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_256; + dev_info->flow_type_rss_offloads = ZXDH_RSS_HF; dev_info->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_MULTI_SEGS); dev_info->tx_offload_capa |= (RTE_ETH_TX_OFFLOAD_TCP_TSO | @@ -784,9 +786,48 @@ zxdh_dev_conf_offload(struct rte_eth_dev *dev) return ret; } + ret = zxdh_rss_configure(dev); + if (ret) { + PMD_DRV_LOG(ERR, "rss configure failed"); + return ret; + } + return 0; } +static int +zxdh_rss_qid_config(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_port_attr_table port_attr = {0}; + struct zxdh_msg_info msg_info = {0}; + int ret = 0; + + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.port_base_qid = hw->channel_context[0].ph_chno & 0xfff; + + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "PF:%d port_base_qid insert failed", hw->vfid); + return ret; + } + } else { + struct zxdh_port_attr_set_msg *attr_msg = &msg_info.data.port_attr_msg; + + zxdh_msg_head_build(hw, ZXDH_PORT_ATTRS_SET, &msg_info); + attr_msg->mode = ZXDH_PORT_BASE_QID_FLAG; + attr_msg->value = hw->channel_context[0].ph_chno & 0xfff; + ret = 
zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d ", + hw->vport.vport, ZXDH_PORT_BASE_QID_FLAG); + return ret; + } + } + return ret; +} + static int32_t zxdh_dev_configure(struct rte_eth_dev *dev) { @@ -873,6 +914,12 @@ zxdh_dev_configure(struct rte_eth_dev *dev) return -1; } + ret = zxdh_rss_qid_config(dev); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to configure base qid!"); + return -1; + } + zxdh_pci_reinit_complete(hw); end: @@ -1099,6 +1146,10 @@ static const struct eth_dev_ops zxdh_eth_dev_ops = { .allmulticast_disable = zxdh_dev_allmulticast_disable, .vlan_filter_set = zxdh_dev_vlan_filter_set, .vlan_offload_set = zxdh_dev_vlan_offload_set, + .reta_update = zxdh_dev_rss_reta_update, + .reta_query = zxdh_dev_rss_reta_query, + .rss_hash_update = zxdh_rss_hash_update, + .rss_hash_conf_get = zxdh_rss_hash_conf_get, }; static int32_t diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index 3cdac5de73..bd4e1587c8 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -82,7 +82,7 @@ struct zxdh_hw { uint16_t queue_num; uint16_t mc_num; uint16_t uc_num; - + uint16_t *rss_reta; uint8_t *isr; uint8_t weak_barriers; uint8_t intr_enabled; @@ -100,6 +100,8 @@ struct zxdh_hw { uint8_t admin_status; uint8_t promisc_status; uint8_t allmulti_status; + uint8_t rss_enable; + uint8_t rss_init; }; struct zxdh_dtb_shared_data { diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.c b/drivers/net/zxdh/zxdh_ethdev_ops.c index 94c5e6dbc8..c12947cb4d 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.c +++ b/drivers/net/zxdh/zxdh_ethdev_ops.c @@ -3,6 +3,7 @@ */ #include <rte_malloc.h> +#include <rte_ether.h> #include "zxdh_ethdev.h" #include "zxdh_pci.h" @@ -12,6 +13,14 @@ #include "zxdh_logs.h" #define ZXDH_VLAN_FILTER_GROUPS 64 +#define ZXDH_INVALID_LOGIC_QID 0xFFFFU + +/* Supported RSS */ +#define ZXDH_RSS_HF_MASK (~(ZXDH_RSS_HF)) +#define ZXDH_HF_F5 1 +#define ZXDH_HF_F3 2 +#define ZXDH_HF_MAC_VLAN 4 +#define ZXDH_HF_ALL 0 static int32_t zxdh_config_port_status(struct rte_eth_dev *dev, uint16_t link_status) { @@ -752,3 +761,404 @@ zxdh_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask) return ret; } +
+int +zxdh_dev_rss_reta_update(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_info msg = {0}; + uint16_t old_reta[RTE_ETH_RSS_RETA_SIZE_256]; + uint16_t idx; + uint16_t i; + uint16_t pos; + int ret; + + if (reta_size != RTE_ETH_RSS_RETA_SIZE_256) { + PMD_DRV_LOG(ERR, "reta_size is illegal(%u), should be 256", reta_size); + return -EINVAL; + } + if (!hw->rss_reta) { + hw->rss_reta = rte_zmalloc(NULL, RTE_ETH_RSS_RETA_SIZE_256 * sizeof(uint16_t), 4); + if (hw->rss_reta == NULL) { + PMD_DRV_LOG(ERR, "Failed to allocate RSS reta"); + return -ENOMEM; + } + } + for (idx = 0, i = 0; (i < reta_size); ++i) { + idx = i / RTE_ETH_RETA_GROUP_SIZE; + pos = i % RTE_ETH_RETA_GROUP_SIZE; + if (((reta_conf[idx].mask >> pos) & 0x1) == 0) + continue; + if (reta_conf[idx].reta[pos] >= dev->data->nb_rx_queues) { + PMD_DRV_LOG(ERR, "reta table value err(%u >= %u)", + reta_conf[idx].reta[pos], dev->data->nb_rx_queues); + return -EINVAL; + } + if (hw->rss_reta[i] != reta_conf[idx].reta[pos]) + break; + } + if (i == reta_size) { + PMD_DRV_LOG(DEBUG, "reta table same with buffered table"); + return 0; + } + memcpy(old_reta, hw->rss_reta, sizeof(old_reta)); + + for (idx = 0, i
= 0; i < reta_size; ++i) { + idx = i / RTE_ETH_RETA_GROUP_SIZE; + pos = i % RTE_ETH_RETA_GROUP_SIZE; + if (((reta_conf[idx].mask >> pos) & 0x1) == 0) + continue; + hw->rss_reta[i] = reta_conf[idx].reta[pos]; + } + + zxdh_msg_head_build(hw, ZXDH_RSS_RETA_SET, &msg); + for (i = 0; i < reta_size; i++) + msg.data.rss_reta.reta[i] = + (hw->channel_context[hw->rss_reta[i] * 2].ph_chno); + + + if (hw->is_pf) { + ret = zxdh_rss_table_set(hw->vport.vport, &msg.data.rss_reta); + if (ret) { + PMD_DRV_LOG(ERR, "rss reta table set failed"); + return -EINVAL; + } + } else { + ret = zxdh_vf_send_msg_to_pf(dev, &msg, sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "vf rss reta table set failed"); + return -EINVAL; + } + } + return ret; +} + +static uint16_t +zxdh_hw_qid_to_logic_qid(struct rte_eth_dev *dev, uint16_t qid) +{ + struct zxdh_hw *hw = (struct zxdh_hw *)dev->data->dev_private; + uint16_t rx_queues = dev->data->nb_rx_queues; + uint16_t i; + + for (i = 0; i < rx_queues; i++) { + if (qid == hw->channel_context[i * 2].ph_chno) + return i; + } + return ZXDH_INVALID_LOGIC_QID; +} + +int +zxdh_dev_rss_reta_query(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size) +{ + struct zxdh_hw *hw = (struct zxdh_hw *)dev->data->dev_private; + struct zxdh_msg_info msg = {0}; + struct zxdh_msg_reply_info reply_msg = {0}; + uint16_t idx; + uint16_t i; + int ret = 0; + uint16_t qid_logic; + + ret = (!reta_size || reta_size > RTE_ETH_RSS_RETA_SIZE_256); + if (ret) { + PMD_DRV_LOG(ERR, "request reta size(%u) not same with buffered(%u)", + reta_size, RTE_ETH_RSS_RETA_SIZE_256); + return -EINVAL; + } + + /* Fill each entry of the table even if its bit is not set. */ + for (idx = 0, i = 0; (i != reta_size); ++i) { + idx = i / RTE_ETH_RETA_GROUP_SIZE; + reta_conf[idx].reta[i % RTE_ETH_RETA_GROUP_SIZE] = hw->rss_reta[i]; + } + + + + zxdh_msg_head_build(hw, ZXDH_RSS_RETA_GET, &msg); + + if (hw->is_pf) { + ret = zxdh_rss_table_get(hw->vport.vport, &reply_msg.reply_body.rss_reta); + if (ret) { + PMD_DRV_LOG(ERR, "rss reta table set failed"); + return -EINVAL; + } + } else { + ret = zxdh_vf_send_msg_to_pf(dev, &msg, sizeof(struct zxdh_msg_info), + &reply_msg, sizeof(struct zxdh_msg_reply_info)); + if (ret) { + PMD_DRV_LOG(ERR, "vf rss reta table get failed"); + return -EINVAL; + } + } + + struct zxdh_rss_reta *reta_table = &reply_msg.reply_body.rss_reta; + + for (idx = 0, i = 0; i < reta_size; ++i) { + idx = i / RTE_ETH_RETA_GROUP_SIZE; + + qid_logic = zxdh_hw_qid_to_logic_qid(dev, reta_table->reta[i]); + if (qid_logic == ZXDH_INVALID_LOGIC_QID) { + PMD_DRV_LOG(ERR, "rsp phy reta qid (%u) is illegal(%u)", + reta_table->reta[i], qid_logic); + return -EINVAL; + } + reta_conf[idx].reta[i % RTE_ETH_RETA_GROUP_SIZE] = qid_logic; + } + return 0; +} + +static uint32_t +zxdh_rss_hf_to_hw(uint64_t hf) +{ + uint32_t hw_hf = 0; + + if (hf & ZXDH_HF_MAC_VLAN_ETH) + hw_hf |= ZXDH_HF_MAC_VLAN; + if (hf & ZXDH_HF_F3_ETH) + hw_hf |= ZXDH_HF_F3; + if (hf & ZXDH_HF_F5_ETH) + hw_hf |= ZXDH_HF_F5; + + if (hw_hf == (ZXDH_HF_MAC_VLAN | ZXDH_HF_F3 | ZXDH_HF_F5)) + hw_hf = ZXDH_HF_ALL; + return hw_hf; +} + +static uint64_t +zxdh_rss_hf_to_eth(uint32_t hw_hf) +{ + uint64_t hf = 0; + + if (hw_hf == ZXDH_HF_ALL) + return (ZXDH_HF_MAC_VLAN_ETH | ZXDH_HF_F3_ETH | ZXDH_HF_F5_ETH); + + if (hw_hf & ZXDH_HF_MAC_VLAN) + hf |= ZXDH_HF_MAC_VLAN_ETH; + if (hw_hf & ZXDH_HF_F3) + hf |= ZXDH_HF_F3_ETH; + if (hw_hf & ZXDH_HF_F5) + hf |= ZXDH_HF_F5_ETH; + + return hf; +} + +int 
+zxdh_rss_hash_update(struct rte_eth_dev *dev, + struct rte_eth_rss_conf *rss_conf) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct rte_eth_rss_conf *old_rss_conf = &dev->data->dev_conf.rx_adv_conf.rss_conf; + struct zxdh_msg_info msg = {0}; + struct zxdh_port_attr_table port_attr = {0}; + uint32_t hw_hf_new, hw_hf_old; + int need_update_hf = 0; + int ret = 0; + + if (rss_conf->rss_hf & ZXDH_RSS_HF_MASK) { + PMD_DRV_LOG(ERR, "unsupported hash function (0x%" PRIx64 ")", rss_conf->rss_hf); + return -EINVAL; + } + + hw_hf_new = zxdh_rss_hf_to_hw(rss_conf->rss_hf); + hw_hf_old = zxdh_rss_hf_to_hw(old_rss_conf->rss_hf); + + if ((hw_hf_new != hw_hf_old || !!rss_conf->rss_hf)) + need_update_hf = 1; + + if (need_update_hf) { + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.rss_enable = !!rss_conf->rss_hf; + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "rss enable set failed"); + return -EINVAL; + } + } else { + msg.data.rss_enable.enable = !!rss_conf->rss_hf; + zxdh_msg_head_build(hw, ZXDH_RSS_ENABLE, &msg); + ret = zxdh_vf_send_msg_to_pf(dev, &msg, + sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "rss enable set failed"); + return -EINVAL; + } + } + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.rss_hash_factor = hw_hf_new; + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "rss hash factor set failed"); + return -EINVAL; + } + } else { + msg.data.rss_hf.rss_hf = hw_hf_new; + zxdh_msg_head_build(hw, ZXDH_RSS_HF_SET, &msg); + ret = zxdh_vf_send_msg_to_pf(dev, &msg, + sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "rss hash factor set failed"); + return -EINVAL; + } + } + old_rss_conf->rss_hf = rss_conf->rss_hf; + } + + return 0; +} +
+int +zxdh_rss_hash_conf_get(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf) +{ + struct zxdh_hw *hw = (struct zxdh_hw *)dev->data->dev_private; + struct rte_eth_rss_conf *old_rss_conf = &dev->data->dev_conf.rx_adv_conf.rss_conf; + struct zxdh_msg_info msg = {0}; + struct zxdh_msg_reply_info reply_msg = {0}; + struct zxdh_port_attr_table port_attr = {0}; + int ret; + uint32_t hw_hf; + + if (rss_conf == NULL) { + PMD_DRV_LOG(ERR, "rss conf is NULL"); + return -EINVAL; + } + + hw_hf = zxdh_rss_hf_to_hw(old_rss_conf->rss_hf); + rss_conf->rss_hf = zxdh_rss_hf_to_eth(hw_hf); + + zxdh_msg_head_build(hw, ZXDH_RSS_HF_GET, &msg); + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "rss hash factor get failed"); + return -EINVAL; + } + reply_msg.reply_body.rss_hf.rss_hf = port_attr.rss_hash_factor; + } else { + ret = zxdh_vf_send_msg_to_pf(dev, &msg, sizeof(struct zxdh_msg_info), + &reply_msg, sizeof(struct zxdh_msg_reply_info)); + if (ret) { + PMD_DRV_LOG(ERR, "rss hash factor get failed"); + return -EINVAL; + } + } + rss_conf->rss_hf = zxdh_rss_hf_to_eth(reply_msg.reply_body.rss_hf.rss_hf); + + return 0; +} +
+static int +zxdh_get_rss_enable_conf(struct rte_eth_dev *dev) +{ + if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) + return dev->data->nb_rx_queues == 1 ?
0 : 1; + else if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_NONE) + return 0; + + return 0; +} + +int +zxdh_rss_configure(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *dev_data = dev->data; + struct zxdh_hw *hw = (struct zxdh_hw *)dev->data->dev_private; + struct zxdh_port_attr_table port_attr = {0}; + struct zxdh_msg_info msg = {0}; + int ret = 0; + uint32_t hw_hf; + uint32_t i; + + if (dev->data->nb_rx_queues == 0) { + PMD_DRV_LOG(ERR, "port %u nb_rx_queues is 0", dev->data->port_id); + return -1; + } + + /* config rss enable */ + uint8_t curr_rss_enable = zxdh_get_rss_enable_conf(dev); + + if (hw->rss_enable != curr_rss_enable) { + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.rss_enable = curr_rss_enable; + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "rss enable set failed"); + return -EINVAL; + } + } else { + msg.data.rss_enable.enable = curr_rss_enable; + zxdh_msg_head_build(hw, ZXDH_RSS_ENABLE, &msg); + ret = zxdh_vf_send_msg_to_pf(dev, &msg, + sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "rss enable set failed"); + return -EINVAL; + } + } + hw->rss_enable = curr_rss_enable; + } + + if (curr_rss_enable && hw->rss_init == 0) { + /* config hash factor */ + dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf = ZXDH_HF_F5_ETH; + hw_hf = zxdh_rss_hf_to_hw(dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf); + memset(&msg, 0, sizeof(msg)); + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.rss_hash_factor = hw_hf; + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "rss hash factor set failed"); + return -EINVAL; + } + } else { + msg.data.rss_hf.rss_hf = hw_hf; + zxdh_msg_head_build(hw, ZXDH_RSS_HF_SET, &msg); + ret = zxdh_vf_send_msg_to_pf(dev, &msg, + sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "rss hash factor set failed"); + return -EINVAL; + } + } + hw->rss_init = 1; + } + + if (!hw->rss_reta) { + hw->rss_reta = rte_zmalloc(NULL, RTE_ETH_RSS_RETA_SIZE_256 * sizeof(uint16_t), 4); + if (hw->rss_reta == NULL) { + PMD_DRV_LOG(ERR, "alloc memory fail"); + return -1; + } + } + for (i = 0; i < RTE_ETH_RSS_RETA_SIZE_256; i++) + hw->rss_reta[i] = i % dev_data->nb_rx_queues; + + /* hw config reta */ + zxdh_msg_head_build(hw, ZXDH_RSS_RETA_SET, &msg); + for (i = 0; i < RTE_ETH_RSS_RETA_SIZE_256; i++) + msg.data.rss_reta.reta[i] = + hw->channel_context[hw->rss_reta[i] * 2].ph_chno; + + if (hw->is_pf) { + ret = zxdh_rss_table_set(hw->vport.vport, &msg.data.rss_reta); + if (ret) { + PMD_DRV_LOG(ERR, "rss reta table set failed"); + return -EINVAL; + } + } else { + ret = zxdh_vf_send_msg_to_pf(dev, &msg, sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "vf rss reta table set failed"); + return -EINVAL; + } + } + return 0; +} diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.h b/drivers/net/zxdh/zxdh_ethdev_ops.h index 058d271ab3..860716d079 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.h +++ b/drivers/net/zxdh/zxdh_ethdev_ops.h @@ -5,8 +5,25 @@ #ifndef ZXDH_ETHDEV_OPS_H #define ZXDH_ETHDEV_OPS_H +#include <rte_ether.h> + #include "zxdh_ethdev.h" +#define ZXDH_ETH_RSS_L2 RTE_ETH_RSS_L2_PAYLOAD +#define ZXDH_ETH_RSS_IP \ + (RTE_ETH_RSS_IPV4 | \ + RTE_ETH_RSS_FRAG_IPV4 | \ + RTE_ETH_RSS_IPV6 | \ + RTE_ETH_RSS_FRAG_IPV6) +#define ZXDH_ETH_RSS_TCP (RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP) +#define ZXDH_ETH_RSS_UDP (RTE_ETH_RSS_NONFRAG_IPV4_UDP | 
RTE_ETH_RSS_NONFRAG_IPV6_UDP) +#define ZXDH_ETH_RSS_SCTP (RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_NONFRAG_IPV6_SCTP) + +#define ZXDH_HF_F5_ETH (ZXDH_ETH_RSS_TCP | ZXDH_ETH_RSS_UDP | ZXDH_ETH_RSS_SCTP) +#define ZXDH_HF_F3_ETH ZXDH_ETH_RSS_IP +#define ZXDH_HF_MAC_VLAN_ETH ZXDH_ETH_RSS_L2 +#define ZXDH_RSS_HF ((ZXDH_HF_MAC_VLAN_ETH | ZXDH_HF_F3_ETH | ZXDH_HF_F5_ETH)) + int zxdh_dev_set_link_up(struct rte_eth_dev *dev); int zxdh_dev_set_link_down(struct rte_eth_dev *dev); int32_t zxdh_dev_link_update(struct rte_eth_dev *dev, int32_t wait_to_complete __rte_unused); @@ -20,5 +37,14 @@ int zxdh_dev_allmulticast_enable(struct rte_eth_dev *dev); int zxdh_dev_allmulticast_disable(struct rte_eth_dev *dev); int zxdh_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on); int zxdh_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask); +int zxdh_dev_rss_reta_update(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size); +int zxdh_dev_rss_reta_query(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size); +int zxdh_rss_hash_update(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf); +int zxdh_rss_hash_conf_get(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf); +int zxdh_rss_configure(struct rte_eth_dev *dev); #endif /* ZXDH_ETHDEV_OPS_H */ diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index ec15388f7a..45a9b10aa4 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -182,6 +182,11 @@ enum zxdh_msg_type { ZXDH_VF_PORT_UNINIT = 2, ZXDH_MAC_ADD = 3, ZXDH_MAC_DEL = 4, + ZXDH_RSS_ENABLE = 7, + ZXDH_RSS_RETA_SET = 8, + ZXDH_RSS_RETA_GET = 9, + ZXDH_RSS_HF_SET = 15, + ZXDH_RSS_HF_GET = 16, ZXDH_VLAN_FILTER_SET = 17, ZXDH_VLAN_FILTER_ADD = 18, ZXDH_VLAN_FILTER_DEL = 19, @@ -291,6 +296,14 @@ struct zxdh_link_info_msg { uint32_t speed; } __rte_packed; +struct zxdh_rss_reta { + uint32_t reta[RTE_ETH_RSS_RETA_SIZE_256]; +}; + +struct zxdh_rss_hf { + uint32_t rss_hf; +}; + struct zxdh_msg_reply_head { uint8_t flag; uint16_t reps_len; @@ -307,6 +320,8 @@ struct zxdh_msg_reply_body { union { uint8_t reply_data[ZXDH_MSG_REPLY_BODY_MAX_LEN - sizeof(enum zxdh_reps_flag)]; struct zxdh_link_info_msg link_msg; + struct zxdh_rss_hf rss_hf; + struct zxdh_rss_reta rss_reta; } __rte_packed; } __rte_packed; @@ -360,6 +375,10 @@ struct zxdh_vlan_offload { uint8_t type; } __rte_packed; +struct zxdh_rss_enable { + uint8_t enable; +}; + struct zxdh_agent_msg_head { enum zxdh_agent_msg_type msg_type; uint8_t panel_id; @@ -385,6 +404,9 @@ struct zxdh_msg_info { struct zxdh_vlan_filter vlan_filter_msg; struct zxdh_vlan_filter_set vlan_filter_set_msg; struct zxdh_vlan_offload vlan_offload_msg; + struct zxdh_rss_reta rss_reta; + struct zxdh_rss_enable rss_enable; + struct zxdh_rss_hf rss_hf; } __rte_packed data; } __rte_packed; diff --git a/drivers/net/zxdh/zxdh_tables.c b/drivers/net/zxdh/zxdh_tables.c index ca98b36da2..af148a974e 100644 --- a/drivers/net/zxdh/zxdh_tables.c +++ b/drivers/net/zxdh/zxdh_tables.c @@ -10,6 +10,7 @@ #define ZXDH_SDT_VPORT_ATT_TABLE 1 #define ZXDH_SDT_PANEL_ATT_TABLE 2 +#define ZXDH_SDT_RSS_ATT_TABLE 3 #define ZXDH_SDT_VLAN_ATT_TABLE 4 #define ZXDH_SDT_BROCAST_ATT_TABLE 6 #define ZXDH_SDT_UNICAST_ATT_TABLE 10 @@ -668,3 +669,84 @@ zxdh_vlan_filter_table_set(uint16_t vport, uint16_t vlan_id, uint8_t enable) } return 0; } + +int +zxdh_rss_table_set(uint16_t vport, struct zxdh_rss_reta *rss_reta) +{ + struct zxdh_rss_to_vqid_table rss_vqid = {0}; + 
union zxdh_virport_num vport_num = (union zxdh_virport_num)vport; + int ret = 0; + + for (uint16_t i = 0; i < RTE_ETH_RSS_RETA_SIZE_256 / 8; i++) { + for (uint16_t j = 0; j < 8; j++) { + #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN + if (j % 2 == 0) + rss_vqid.vqm_qid[j + 1] = rss_reta->reta[i * 8 + j]; + else + rss_vqid.vqm_qid[j - 1] = rss_reta->reta[i * 8 + j]; + #else + rss_vqid.vqm_qid[j] = rss_reta->reta[i * 8 + j]; + #endif + } + + #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN + rss_vqid.vqm_qid[1] |= 0x8000; + #else + rss_vqid.vqm_qid[0] |= 0x8000; + #endif + ZXDH_DTB_ERAM_ENTRY_INFO_T entry = { + .index = vport_num.vfid * 32 + i, + .p_data = (uint32_t *)&rss_vqid + }; + ZXDH_DTB_USER_ENTRY_T user_entry_write = { + .sdt_no = ZXDH_SDT_RSS_ATT_TABLE, + .p_entry_data = &entry + }; + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &user_entry_write); + if (ret != 0) { + PMD_DRV_LOG(ERR, "write rss base qid failed vfid:%d", vport_num.vfid); + return ret; + } + } + return 0; +} +
+int +zxdh_rss_table_get(uint16_t vport, struct zxdh_rss_reta *rss_reta) +{ + struct zxdh_rss_to_vqid_table rss_vqid = {0}; + union zxdh_virport_num vport_num = (union zxdh_virport_num)vport; + int ret = 0; + + for (uint16_t i = 0; i < RTE_ETH_RSS_RETA_SIZE_256 / 8; i++) { + ZXDH_DTB_ERAM_ENTRY_INFO_T entry = {vport_num.vfid * 32 + i, (uint32_t *)&rss_vqid}; + ZXDH_DTB_USER_ENTRY_T user_entry = {ZXDH_SDT_RSS_ATT_TABLE, &entry}; + + ret = zxdh_np_dtb_table_entry_get(ZXDH_DEVICE_NO, + g_dtb_data.queueid, &user_entry, 1); + if (ret != 0) { + PMD_DRV_LOG(ERR, "get rss tbl failed, vfid:%d", vport_num.vfid); + return -1; + } + + #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN + rss_vqid.vqm_qid[1] &= 0x7FFF; + #else + rss_vqid.vqm_qid[0] &= 0x7FFF; + #endif + uint8_t size = sizeof(struct zxdh_rss_to_vqid_table) / sizeof(uint16_t); + + for (int j = 0; j < size; j++) { + #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN + if (j % 2 == 0) + rss_reta->reta[i * 8 + j] = rss_vqid.vqm_qid[j + 1]; + else + rss_reta->reta[i * 8 + j] = rss_vqid.vqm_qid[j - 1]; + #else + rss_reta->reta[i * 8 + j] = rss_vqid.vqm_qid[j]; + #endif + } + } + return 0; +} diff --git a/drivers/net/zxdh/zxdh_tables.h b/drivers/net/zxdh/zxdh_tables.h index 28d4f6f7cf..c8d1de3bbb 100644 --- a/drivers/net/zxdh/zxdh_tables.h +++ b/drivers/net/zxdh/zxdh_tables.h @@ -8,6 +8,7 @@ #include <stdint.h> #define ZXDH_DEVICE_NO 0 +#define ZXDH_PORT_BASE_QID_FLAG 10 #define ZXDH_PORT_ATTR_IS_UP_FLAG 35 extern struct zxdh_dtb_shared_data g_dtb_data; @@ -198,6 +199,10 @@ struct zxdh_vlan_filter_table { uint32_t vlans[4]; }; +struct zxdh_rss_to_vqid_table { + uint16_t vqm_qid[8]; +}; + int zxdh_port_attr_init(struct rte_eth_dev *dev); int zxdh_panel_table_init(struct rte_eth_dev *dev); int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); @@ -211,5 +216,7 @@ int zxdh_dev_unicast_table_set(struct zxdh_hw *hw, uint16_t vport, bool enable); int zxdh_dev_multicast_table_set(struct zxdh_hw *hw, uint16_t vport, bool enable); int zxdh_vlan_filter_table_init(struct rte_eth_dev *dev); int zxdh_vlan_filter_table_set(uint16_t vport, uint16_t vlan_id, uint8_t enable); +int zxdh_rss_table_set(uint16_t vport, struct zxdh_rss_reta *rss_reta); +int zxdh_rss_table_get(uint16_t vport, struct zxdh_rss_reta *rss_reta); #endif /* ZXDH_TABLES_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 63391 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
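For application writers, here is a minimal sketch of driving the 256-entry RETA exposed by this patch through the generic ethdev API. The port_id/nb_q values and the helper name set_reta_round_robin are placeholders, and the port is assumed to already be configured with RSS enabled:

#include <errno.h>
#include <rte_ethdev.h>

/* Spread the 256-entry RETA round-robin over nb_q RX queues (sketch). */
static int set_reta_round_robin(uint16_t port_id, uint16_t nb_q)
{
	struct rte_eth_rss_reta_entry64 reta[RTE_ETH_RSS_RETA_SIZE_256 /
					     RTE_ETH_RETA_GROUP_SIZE] = {0};
	uint16_t i;

	if (nb_q == 0)
		return -EINVAL;

	for (i = 0; i < RTE_ETH_RSS_RETA_SIZE_256; i++) {
		/* Mark the entry valid in the 64-bit group mask ... */
		reta[i / RTE_ETH_RETA_GROUP_SIZE].mask |=
			1ULL << (i % RTE_ETH_RETA_GROUP_SIZE);
		/* ... and point it at a logical RX queue. */
		reta[i / RTE_ETH_RETA_GROUP_SIZE].reta[i % RTE_ETH_RETA_GROUP_SIZE] =
			i % nb_q;
	}
	return rte_eth_dev_rss_reta_update(port_id, reta, RTE_ETH_RSS_RETA_SIZE_256);
}

Inside the PMD this lands in zxdh_dev_rss_reta_update(), which translates each logical queue to its physical channel (ph_chno) before writing the hardware table, so applications only ever deal in logical queue ids.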
* [PATCH v3 14/15] net/zxdh: basic stats ops implementations 2024-12-17 11:41 ` [PATCH v3 00/15] net/zxdh: updated net zxdh driver Junlong Wang ` (12 preceding siblings ...) 2024-12-17 11:41 ` [PATCH v3 13/15] net/zxdh: rss hash config/update, reta update/get Junlong Wang @ 2024-12-17 11:41 ` Junlong Wang 2024-12-17 11:41 ` [PATCH v3 15/15] net/zxdh: mtu update " Junlong Wang 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-17 11:41 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 37392 bytes --] basic stats ops implementations. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- doc/guides/nics/features/zxdh.ini | 2 + doc/guides/nics/zxdh.rst | 1 + drivers/net/zxdh/zxdh_ethdev.c | 2 + drivers/net/zxdh/zxdh_ethdev_ops.c | 353 +++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev_ops.h | 27 +++ drivers/net/zxdh/zxdh_msg.h | 16 ++ drivers/net/zxdh/zxdh_np.c | 341 ++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_np.h | 30 +++ drivers/net/zxdh/zxdh_queue.h | 2 + drivers/net/zxdh/zxdh_rxtx.c | 83 ++++++- drivers/net/zxdh/zxdh_tables.h | 5 + 11 files changed, 859 insertions(+), 3 deletions(-) diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index 415ca547d0..98c141cf95 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -22,3 +22,5 @@ QinQ offload = Y RSS hash = Y RSS reta update = Y Inner RSS = Y +Basic stats = Y +Stats per queue = Y diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index 3cc6a1d348..c8a52b587c 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -32,6 +32,7 @@ Features of the ZXDH PMD are: - VLAN stripping and inserting - QINQ stripping and inserting - Receive Side Scaling (RSS) +- Port hardware statistics Driver compilation and testing diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 17fca8e909..0326d143ec 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -1150,6 +1150,8 @@ static const struct eth_dev_ops zxdh_eth_dev_ops = { .reta_query = zxdh_dev_rss_reta_query, .rss_hash_update = zxdh_rss_hash_update, .rss_hash_conf_get = zxdh_rss_hash_conf_get, + .stats_get = zxdh_dev_stats_get, + .stats_reset = zxdh_dev_stats_reset, }; static int32_t diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.c b/drivers/net/zxdh/zxdh_ethdev_ops.c index c12947cb4d..2377ff202d 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.c +++ b/drivers/net/zxdh/zxdh_ethdev_ops.c @@ -11,6 +11,8 @@ #include "zxdh_ethdev_ops.h" #include "zxdh_tables.h" #include "zxdh_logs.h" +#include "zxdh_rxtx.h" +#include "zxdh_np.h" #define ZXDH_VLAN_FILTER_GROUPS 64 #define ZXDH_INVALID_LOGIC_QID 0xFFFFU @@ -22,6 +24,108 @@ #define ZXDH_HF_MAC_VLAN 4 #define ZXDH_HF_ALL 0 +struct zxdh_hw_mac_stats { + uint64_t rx_total; + uint64_t rx_pause; + uint64_t rx_unicast; + uint64_t rx_multicast; + uint64_t rx_broadcast; + uint64_t rx_vlan; + uint64_t rx_size_64; + uint64_t rx_size_65_127; + uint64_t rx_size_128_255; + uint64_t rx_size_256_511; + uint64_t rx_size_512_1023; + uint64_t rx_size_1024_1518; + uint64_t rx_size_1519_mru; + uint64_t rx_undersize; + uint64_t rx_oversize; + uint64_t rx_fragment; + uint64_t rx_jabber; + uint64_t rx_control; + uint64_t rx_eee; + + uint64_t tx_total; + uint64_t tx_pause; + uint64_t tx_unicast; + uint64_t tx_multicast; + uint64_t tx_broadcast; + uint64_t tx_vlan; + uint64_t tx_size_64; + uint64_t tx_size_65_127; + uint64_t 
tx_size_128_255; + uint64_t tx_size_256_511; + uint64_t tx_size_512_1023; + uint64_t tx_size_1024_1518; + uint64_t tx_size_1519_mtu; + uint64_t tx_undersize; + uint64_t tx_oversize; + uint64_t tx_fragment; + uint64_t tx_jabber; + uint64_t tx_control; + uint64_t tx_eee; + + uint64_t rx_error; + uint64_t rx_fcs_error; + uint64_t rx_drop; + + uint64_t tx_error; + uint64_t tx_fcs_error; + uint64_t tx_drop; + +} __rte_packed; + +struct zxdh_hw_mac_bytes { + uint64_t rx_total_bytes; + uint64_t rx_good_bytes; + uint64_t tx_total_bytes; + uint64_t tx_good_bytes; +} __rte_packed; + +struct zxdh_np_stats_data { + uint64_t n_pkts_dropped; + uint64_t n_bytes_dropped; +}; + +struct zxdh_xstats_name_off { + char name[RTE_ETH_XSTATS_NAME_SIZE]; + unsigned int offset; +}; + +static const struct zxdh_xstats_name_off zxdh_rxq_stat_strings[] = { + {"good_packets", offsetof(struct zxdh_virtnet_rx, stats.packets)}, + {"good_bytes", offsetof(struct zxdh_virtnet_rx, stats.bytes)}, + {"errors", offsetof(struct zxdh_virtnet_rx, stats.errors)}, + {"multicast_packets", offsetof(struct zxdh_virtnet_rx, stats.multicast)}, + {"broadcast_packets", offsetof(struct zxdh_virtnet_rx, stats.broadcast)}, + {"truncated_err", offsetof(struct zxdh_virtnet_rx, stats.truncated_err)}, + {"undersize_packets", offsetof(struct zxdh_virtnet_rx, stats.size_bins[0])}, + {"size_64_packets", offsetof(struct zxdh_virtnet_rx, stats.size_bins[1])}, + {"size_65_127_packets", offsetof(struct zxdh_virtnet_rx, stats.size_bins[2])}, + {"size_128_255_packets", offsetof(struct zxdh_virtnet_rx, stats.size_bins[3])}, + {"size_256_511_packets", offsetof(struct zxdh_virtnet_rx, stats.size_bins[4])}, + {"size_512_1023_packets", offsetof(struct zxdh_virtnet_rx, stats.size_bins[5])}, + {"size_1024_1518_packets", offsetof(struct zxdh_virtnet_rx, stats.size_bins[6])}, + {"size_1519_max_packets", offsetof(struct zxdh_virtnet_rx, stats.size_bins[7])}, +}; + +static const struct zxdh_xstats_name_off zxdh_txq_stat_strings[] = { + {"good_packets", offsetof(struct zxdh_virtnet_tx, stats.packets)}, + {"good_bytes", offsetof(struct zxdh_virtnet_tx, stats.bytes)}, + {"errors", offsetof(struct zxdh_virtnet_tx, stats.errors)}, + {"multicast_packets", offsetof(struct zxdh_virtnet_tx, stats.multicast)}, + {"broadcast_packets", offsetof(struct zxdh_virtnet_tx, stats.broadcast)}, + {"truncated_err", offsetof(struct zxdh_virtnet_tx, stats.truncated_err)}, + {"undersize_packets", offsetof(struct zxdh_virtnet_tx, stats.size_bins[0])}, + {"size_64_packets", offsetof(struct zxdh_virtnet_tx, stats.size_bins[1])}, + {"size_65_127_packets", offsetof(struct zxdh_virtnet_tx, stats.size_bins[2])}, + {"size_128_255_packets", offsetof(struct zxdh_virtnet_tx, stats.size_bins[3])}, + {"size_256_511_packets", offsetof(struct zxdh_virtnet_tx, stats.size_bins[4])}, + {"size_512_1023_packets", offsetof(struct zxdh_virtnet_tx, stats.size_bins[5])}, + {"size_1024_1518_packets", offsetof(struct zxdh_virtnet_tx, stats.size_bins[6])}, + {"size_1519_max_packets", offsetof(struct zxdh_virtnet_tx, stats.size_bins[7])}, +}; + static int32_t zxdh_config_port_status(struct rte_eth_dev *dev, uint16_t link_status) { struct zxdh_hw *hw = dev->data->dev_private; @@ -1162,3 +1266,252 @@ zxdh_rss_configure(struct rte_eth_dev *dev) } return 0; } + +static int32_t +zxdh_hw_vqm_stats_get(struct rte_eth_dev *dev, enum zxdh_agent_msg_type opcode, + struct zxdh_hw_vqm_stats *hw_stats) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_info msg_info = {0}; + struct zxdh_msg_reply_info reply_info = 
{0}; + enum ZXDH_BAR_MODULE_ID module_id; + int ret = 0; + + switch (opcode) { + case ZXDH_VQM_DEV_STATS_GET: + case ZXDH_VQM_QUEUE_STATS_GET: + case ZXDH_VQM_QUEUE_STATS_RESET: + module_id = ZXDH_BAR_MODULE_VQM; + break; + case ZXDH_MAC_STATS_GET: + case ZXDH_MAC_STATS_RESET: + module_id = ZXDH_BAR_MODULE_MAC; + break; + default: + PMD_DRV_LOG(ERR, "invalid opcode %u", opcode); + return -1; + } + + zxdh_agent_msg_build(hw, opcode, &msg_info); + + ret = zxdh_send_msg_to_riscv(dev, &msg_info, sizeof(struct zxdh_msg_info), + &reply_info, sizeof(struct zxdh_msg_reply_info), module_id); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to get hw stats"); + return -EAGAIN; + } + struct zxdh_msg_reply_body *reply_body = &reply_info.reply_body; + + rte_memcpy(hw_stats, &reply_body->vqm_stats, sizeof(struct zxdh_hw_vqm_stats)); + return 0; +} + +static int zxdh_hw_mac_stats_get(struct rte_eth_dev *dev, + struct zxdh_hw_mac_stats *mac_stats, + struct zxdh_hw_mac_bytes *mac_bytes) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint64_t virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_MAC_OFFSET); + uint64_t stats_addr = 0; + uint64_t bytes_addr = 0; + + if (hw->speed <= RTE_ETH_SPEED_NUM_25G) { + stats_addr = virt_addr + ZXDH_MAC_STATS_OFFSET + 352 * (hw->phyport % 4); + bytes_addr = virt_addr + ZXDH_MAC_BYTES_OFFSET + 32 * (hw->phyport % 4); + } else { + stats_addr = virt_addr + ZXDH_MAC_STATS_OFFSET + 352 * 4; + bytes_addr = virt_addr + ZXDH_MAC_BYTES_OFFSET + 32 * 4; + } + + rte_memcpy(mac_stats, (void *)stats_addr, sizeof(struct zxdh_hw_mac_stats)); + rte_memcpy(mac_bytes, (void *)bytes_addr, sizeof(struct zxdh_hw_mac_bytes)); + return 0; +} + +static void zxdh_data_hi_to_lo(uint64_t *data) +{ + uint32_t n_data_hi; + uint32_t n_data_lo; + + n_data_lo = *data >> 32; + n_data_hi = *data; + *data = (uint64_t)(rte_le_to_cpu_32(n_data_hi)) << 32 | + rte_le_to_cpu_32(n_data_lo); +} + +static int zxdh_np_stats_get(struct rte_eth_dev *dev, struct zxdh_hw_np_stats *np_stats) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_np_stats_data stats_data; + uint32_t stats_id = zxdh_vport_to_vfid(hw->vport); + uint32_t idx = 0; + int ret = 0; + + idx = stats_id + ZXDH_BROAD_STATS_EGRESS_BASE; + ret = zxdh_np_dtb_stats_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, + 0, idx, (uint32_t *)&np_stats->np_tx_broadcast); + if (ret) + return ret; + zxdh_data_hi_to_lo(&np_stats->np_tx_broadcast); + + idx = stats_id + ZXDH_BROAD_STATS_INGRESS_BASE; + memset(&stats_data, 0, sizeof(stats_data)); + ret = zxdh_np_dtb_stats_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, + 0, idx, (uint32_t *)&np_stats->np_rx_broadcast); + if (ret) + return ret; + zxdh_data_hi_to_lo(&np_stats->np_rx_broadcast); + + idx = stats_id + ZXDH_MTU_STATS_EGRESS_BASE; + memset(&stats_data, 0, sizeof(stats_data)); + ret = zxdh_np_dtb_stats_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, + 1, idx, (uint32_t *)&stats_data); + if (ret) + return ret; + + np_stats->np_tx_mtu_drop_pkts = stats_data.n_pkts_dropped; + np_stats->np_tx_mtu_drop_bytes = stats_data.n_bytes_dropped; + zxdh_data_hi_to_lo(&np_stats->np_tx_mtu_drop_pkts); + zxdh_data_hi_to_lo(&np_stats->np_tx_mtu_drop_bytes); + + idx = stats_id + ZXDH_MTU_STATS_INGRESS_BASE; + memset(&stats_data, 0, sizeof(stats_data)); + ret = zxdh_np_dtb_stats_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, + 1, idx, (uint32_t *)&stats_data); + if (ret) + return ret; + np_stats->np_rx_mtu_drop_pkts = stats_data.n_pkts_dropped; + np_stats->np_rx_mtu_drop_bytes = stats_data.n_bytes_dropped; + 
zxdh_data_hi_to_lo(&np_stats->np_rx_mtu_drop_pkts); + zxdh_data_hi_to_lo(&np_stats->np_rx_mtu_drop_bytes); + + return 0; +} + +static int +zxdh_hw_np_stats_get(struct rte_eth_dev *dev, struct zxdh_hw_np_stats *np_stats) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_info msg_info = {0}; + struct zxdh_msg_reply_info reply_info = {0}; + int ret = 0; + + if (hw->is_pf) { + ret = zxdh_np_stats_get(dev, np_stats); + if (ret) { + PMD_DRV_LOG(ERR, "get np stats failed"); + return -1; + } + } else { + zxdh_msg_head_build(hw, ZXDH_GET_NP_STATS, &msg_info); + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(struct zxdh_msg_info), + &reply_info, sizeof(struct zxdh_msg_reply_info)); + if (ret) { + PMD_DRV_LOG(ERR, + "Failed to send msg: port 0x%x msg type ZXDH_PORT_METER_STAT_GET", + hw->vport.vport); + return -1; + } + memcpy(np_stats, &reply_info.reply_body.np_stats, sizeof(struct zxdh_hw_np_stats)); + } + return ret; +} + +int +zxdh_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_hw_vqm_stats vqm_stats = {0}; + struct zxdh_hw_np_stats np_stats = {0}; + struct zxdh_hw_mac_stats mac_stats = {0}; + struct zxdh_hw_mac_bytes mac_bytes = {0}; + uint32_t i = 0; + + zxdh_hw_vqm_stats_get(dev, ZXDH_VQM_DEV_STATS_GET, &vqm_stats); + if (hw->is_pf) + zxdh_hw_mac_stats_get(dev, &mac_stats, &mac_bytes); + + zxdh_hw_np_stats_get(dev, &np_stats); + + stats->ipackets = vqm_stats.rx_total; + stats->opackets = vqm_stats.tx_total; + stats->ibytes = vqm_stats.rx_bytes; + stats->obytes = vqm_stats.tx_bytes; + stats->imissed = vqm_stats.rx_drop + mac_stats.rx_drop; + stats->ierrors = vqm_stats.rx_error + mac_stats.rx_error + np_stats.np_rx_mtu_drop_pkts; + stats->oerrors = vqm_stats.tx_error + mac_stats.tx_error + np_stats.np_tx_mtu_drop_pkts; + + stats->rx_nombuf = dev->data->rx_mbuf_alloc_failed; + for (i = 0; (i < dev->data->nb_rx_queues) && (i < RTE_ETHDEV_QUEUE_STAT_CNTRS); i++) { + struct zxdh_virtnet_rx *rxvq = dev->data->rx_queues[i]; + + if (rxvq == NULL) + continue; + stats->q_ipackets[i] = *(uint64_t *)(((char *)rxvq) + + zxdh_rxq_stat_strings[0].offset); + stats->q_ibytes[i] = *(uint64_t *)(((char *)rxvq) + + zxdh_rxq_stat_strings[1].offset); + stats->q_errors[i] = *(uint64_t *)(((char *)rxvq) + + zxdh_rxq_stat_strings[2].offset); + stats->q_errors[i] += *(uint64_t *)(((char *)rxvq) + + zxdh_rxq_stat_strings[5].offset); + } + + for (i = 0; (i < dev->data->nb_tx_queues) && (i < RTE_ETHDEV_QUEUE_STAT_CNTRS); i++) { + struct zxdh_virtnet_tx *txvq = dev->data->tx_queues[i]; + + if (txvq == NULL) + continue; + stats->q_opackets[i] = *(uint64_t *)(((char *)txvq) + + zxdh_txq_stat_strings[0].offset); + stats->q_obytes[i] = *(uint64_t *)(((char *)txvq) + + zxdh_txq_stat_strings[1].offset); + stats->q_errors[i] += *(uint64_t *)(((char *)txvq) + + zxdh_txq_stat_strings[2].offset); + stats->q_errors[i] += *(uint64_t *)(((char *)txvq) + + zxdh_txq_stat_strings[5].offset); + } + return 0; +} + +static int zxdh_hw_stats_reset(struct rte_eth_dev *dev, enum zxdh_agent_msg_type opcode) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_info msg_info = {0}; + struct zxdh_msg_reply_info reply_info = {0}; + enum ZXDH_BAR_MODULE_ID module_id; + int ret = 0; + + switch (opcode) { + case ZXDH_VQM_DEV_STATS_RESET: + module_id = ZXDH_BAR_MODULE_VQM; + break; + case ZXDH_MAC_STATS_RESET: + module_id = ZXDH_BAR_MODULE_MAC; + break; + default: + PMD_DRV_LOG(ERR, "invalid opcode %u", opcode); + return -1; + } + + 
zxdh_agent_msg_build(hw, opcode, &msg_info); + + ret = zxdh_send_msg_to_riscv(dev, &msg_info, sizeof(struct zxdh_msg_info), + &reply_info, sizeof(struct zxdh_msg_reply_info), module_id); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to reset hw stats"); + return -EAGAIN; + } + return 0; +} + +int zxdh_dev_stats_reset(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + zxdh_hw_stats_reset(dev, ZXDH_VQM_DEV_STATS_RESET); + if (hw->is_pf) + zxdh_hw_stats_reset(dev, ZXDH_MAC_STATS_RESET); + + return 0; +} diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.h b/drivers/net/zxdh/zxdh_ethdev_ops.h index 860716d079..f35378e691 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.h +++ b/drivers/net/zxdh/zxdh_ethdev_ops.h @@ -5,6 +5,8 @@ #ifndef ZXDH_ETHDEV_OPS_H #define ZXDH_ETHDEV_OPS_H +#include <stdint.h> + #include <rte_ether.h> #include "zxdh_ethdev.h" @@ -24,6 +26,29 @@ #define ZXDH_HF_MAC_VLAN_ETH ZXDH_ETH_RSS_L2 #define ZXDH_RSS_HF ((ZXDH_HF_MAC_VLAN_ETH | ZXDH_HF_F3_ETH | ZXDH_HF_F5_ETH)) +struct zxdh_hw_vqm_stats { + uint64_t rx_total; + uint64_t tx_total; + uint64_t rx_bytes; + uint64_t tx_bytes; + uint64_t rx_error; + uint64_t tx_error; + uint64_t rx_drop; +} __rte_packed; + +struct zxdh_hw_np_stats { + uint64_t np_rx_broadcast; + uint64_t np_tx_broadcast; + uint64_t np_rx_mtu_drop_pkts; + uint64_t np_tx_mtu_drop_pkts; + uint64_t np_rx_mtu_drop_bytes; + uint64_t np_tx_mtu_drop_bytes; + uint64_t np_rx_mtr_drop_pkts; + uint64_t np_tx_mtr_drop_pkts; + uint64_t np_rx_mtr_drop_bytes; + uint64_t np_tx_mtr_drop_bytes; +}; + int zxdh_dev_set_link_up(struct rte_eth_dev *dev); int zxdh_dev_set_link_down(struct rte_eth_dev *dev); int32_t zxdh_dev_link_update(struct rte_eth_dev *dev, int32_t wait_to_complete __rte_unused); @@ -46,5 +71,7 @@ int zxdh_dev_rss_reta_query(struct rte_eth_dev *dev, int zxdh_rss_hash_update(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf); int zxdh_rss_hash_conf_get(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf); int zxdh_rss_configure(struct rte_eth_dev *dev); +int zxdh_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats); +int zxdh_dev_stats_reset(struct rte_eth_dev *dev); #endif /* ZXDH_ETHDEV_OPS_H */ diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index 45a9b10aa4..159c8c9c71 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -9,10 +9,16 @@ #include <ethdev_driver.h> +#include "zxdh_ethdev_ops.h" + #define ZXDH_BAR0_INDEX 0 #define ZXDH_CTRLCH_OFFSET (0x2000) #define ZXDH_MSG_CHAN_PFVFSHARE_OFFSET (ZXDH_CTRLCH_OFFSET + 0x1000) +#define ZXDH_MAC_OFFSET (0x24000) +#define ZXDH_MAC_STATS_OFFSET (0x1408) +#define ZXDH_MAC_BYTES_OFFSET (0xb000) + #define ZXDH_MSIX_INTR_MSG_VEC_BASE 1 #define ZXDH_MSIX_INTR_MSG_VEC_NUM 3 #define ZXDH_MSIX_INTR_DTB_VEC (ZXDH_MSIX_INTR_MSG_VEC_BASE + ZXDH_MSIX_INTR_MSG_VEC_NUM) @@ -173,7 +179,13 @@ enum pciebar_layout_type { /* riscv msg opcodes */ enum zxdh_agent_msg_type { + ZXDH_MAC_STATS_GET = 10, + ZXDH_MAC_STATS_RESET, ZXDH_MAC_LINK_GET = 14, + ZXDH_VQM_DEV_STATS_GET = 21, + ZXDH_VQM_DEV_STATS_RESET, + ZXDH_VQM_QUEUE_STATS_GET = 24, + ZXDH_VQM_QUEUE_STATS_RESET, }; enum zxdh_msg_type { @@ -195,6 +207,8 @@ enum zxdh_msg_type { ZXDH_PORT_ATTRS_SET = 25, ZXDH_PORT_PROMISC_SET = 26, + ZXDH_GET_NP_STATS = 31, + ZXDH_MSG_TYPE_END, }; @@ -322,6 +336,8 @@ struct zxdh_msg_reply_body { struct zxdh_link_info_msg link_msg; struct zxdh_rss_hf rss_hf; struct zxdh_rss_reta rss_reta; + struct zxdh_hw_vqm_stats vqm_stats; + struct zxdh_hw_np_stats 
np_stats; } __rte_packed; } __rte_packed; diff --git a/drivers/net/zxdh/zxdh_np.c b/drivers/net/zxdh/zxdh_np.c index f2518b6d7c..7ec53b1aa6 100644 --- a/drivers/net/zxdh/zxdh_np.c +++ b/drivers/net/zxdh/zxdh_np.c @@ -26,6 +26,7 @@ ZXDH_TLB_MGR_T *g_p_dpp_tlb_mgr[ZXDH_DEV_CHANNEL_MAX]; ZXDH_REG_T g_dpp_reg_info[4]; ZXDH_DTB_TABLE_T g_dpp_dtb_table_info[4]; ZXDH_SDT_TBL_DATA_T g_sdt_info[ZXDH_DEV_CHANNEL_MAX][ZXDH_DEV_SDT_ID_MAX]; +ZXDH_PPU_STAT_CFG_T g_ppu_stat_cfg = {0}; #define ZXDH_SDT_MGR_PTR_GET() (&g_sdt_mgr) #define ZXDH_SDT_SOFT_TBL_GET(id) (g_sdt_mgr.sdt_tbl_array[id]) @@ -117,6 +118,18 @@ do {\ #define ZXDH_COMM_CONVERT16(w_data) \ (((w_data) & 0xff) << 8) +#define ZXDH_DTB_TAB_UP_WR_INDEX_GET(DEV_ID, QUEUE_ID) \ + (p_dpp_dtb_mgr[(DEV_ID)]->queue_info[(QUEUE_ID)].tab_up.wr_index) + +#define ZXDH_DTB_TAB_UP_USER_PHY_ADDR_FLAG_GET(DEV_ID, QUEUE_ID, INDEX) \ + (p_dpp_dtb_mgr[(DEV_ID)]->queue_info[(QUEUE_ID)].tab_up.user_addr[(INDEX)].user_flag) + +#define ZXDH_DTB_TAB_UP_USER_PHY_ADDR_GET(DEV_ID, QUEUE_ID, INDEX) \ + (p_dpp_dtb_mgr[(DEV_ID)]->queue_info[(QUEUE_ID)].tab_up.user_addr[(INDEX)].phy_addr) + +#define ZXDH_DTB_TAB_UP_DATA_LEN_GET(DEV_ID, QUEUE_ID, INDEX) \ + (p_dpp_dtb_mgr[(DEV_ID)]->queue_info[(QUEUE_ID)].tab_up.data_len[(INDEX)]) + #define ZXDH_DTB_TAB_UP_VIR_ADDR_GET(DEV_ID, QUEUE_ID, INDEX) \ ((INDEX) * p_dpp_dtb_mgr[(DEV_ID)]->queue_info[(QUEUE_ID)].tab_up.item_size) @@ -1717,3 +1730,331 @@ zxdh_np_dtb_table_entry_get(uint32_t dev_id, return 0; } + +static void +zxdh_np_stat_cfg_soft_get(uint32_t dev_id, + ZXDH_PPU_STAT_CFG_T *p_stat_cfg) +{ + ZXDH_COMM_CHECK_DEV_POINT(dev_id, p_stat_cfg); + + p_stat_cfg->ddr_base_addr = g_ppu_stat_cfg.ddr_base_addr; + p_stat_cfg->eram_baddr = g_ppu_stat_cfg.eram_baddr; + p_stat_cfg->eram_depth = g_ppu_stat_cfg.eram_depth; + p_stat_cfg->ppu_addr_offset = g_ppu_stat_cfg.ppu_addr_offset; +} + +static uint32_t +zxdh_np_dtb_tab_up_info_set(uint32_t dev_id, + uint32_t queue_id, + uint32_t item_index, + uint32_t int_flag, + uint32_t data_len, + uint32_t desc_len, + uint32_t *p_desc_data) +{ + ZXDH_DTB_QUEUE_ITEM_INFO_T item_info = {0}; + uint32_t queue_en = 0; + uint32_t rc; + + zxdh_np_dtb_queue_enable_get(dev_id, queue_id, &queue_en); + if (!queue_en) { + PMD_DRV_LOG(ERR, "the queue %d is not enable!", queue_id); + return ZXDH_RC_DTB_QUEUE_NOT_ENABLE; + } + + if (ZXDH_DTB_QUEUE_INIT_FLAG_GET(dev_id, queue_id) == 0) { + PMD_DRV_LOG(ERR, "dtb queue %d is not init.", queue_id); + return ZXDH_RC_DTB_QUEUE_IS_NOT_INIT; + } + + if (desc_len % 4 != 0) + return ZXDH_RC_DTB_PARA_INVALID; + + zxdh_np_dtb_item_buff_wr(dev_id, queue_id, ZXDH_DTB_DIR_UP_TYPE, + item_index, 0, desc_len, p_desc_data); + + ZXDH_DTB_TAB_UP_DATA_LEN_GET(dev_id, queue_id, item_index) = data_len; + + item_info.cmd_vld = 1; + item_info.cmd_type = ZXDH_DTB_DIR_UP_TYPE; + item_info.int_en = int_flag; + item_info.data_len = desc_len / 4; + + if (zxdh_np_dev_get_dev_type(dev_id) == ZXDH_DEV_TYPE_SIM) + return 0; + + rc = zxdh_np_dtb_queue_item_info_set(dev_id, queue_id, &item_info); + + return rc; +} + +static uint32_t +zxdh_np_dtb_write_dump_desc_info(uint32_t dev_id, + uint32_t queue_id, + uint32_t queue_element_id, + uint32_t *p_dump_info, + uint32_t data_len, + uint32_t desc_len, + uint32_t *p_dump_data) +{ + uint32_t dtb_interrupt_status = 0; + uint32_t rc; + + ZXDH_COMM_CHECK_POINT(p_dump_data); + rc = zxdh_np_dtb_tab_up_info_set(dev_id, + queue_id, + queue_element_id, + dtb_interrupt_status, + data_len, + desc_len, + p_dump_info); + if (rc != 0) { + PMD_DRV_LOG(ERR, "the 
queue %d element id %d dump" + " info set failed!", queue_id, queue_element_id); + zxdh_np_dtb_item_ack_wr(dev_id, queue_id, ZXDH_DTB_DIR_UP_TYPE, + queue_element_id, 0, ZXDH_DTB_TAB_ACK_UNUSED_MASK); + } + + return rc; +} + +static uint32_t +zxdh_np_dtb_tab_up_free_item_get(uint32_t dev_id, + uint32_t queue_id, + uint32_t *p_item_index) +{ + uint32_t ack_vale = 0; + uint32_t item_index = 0; + uint32_t unused_item_num = 0; + uint32_t i; + + if (ZXDH_DTB_QUEUE_INIT_FLAG_GET(dev_id, queue_id) == 0) { + PMD_DRV_LOG(ERR, "dtb queue %d is not init.", queue_id); + return ZXDH_RC_DTB_QUEUE_IS_NOT_INIT; + } + + zxdh_np_dtb_queue_unused_item_num_get(dev_id, queue_id, &unused_item_num); + + if (unused_item_num == 0) + return ZXDH_RC_DTB_QUEUE_ITEM_HW_EMPTY; + + for (i = 0; i < ZXDH_DTB_QUEUE_ITEM_NUM_MAX; i++) { + item_index = ZXDH_DTB_TAB_UP_WR_INDEX_GET(dev_id, queue_id) % + ZXDH_DTB_QUEUE_ITEM_NUM_MAX; + + zxdh_np_dtb_item_ack_rd(dev_id, queue_id, ZXDH_DTB_DIR_UP_TYPE, item_index, + 0, &ack_vale); + + ZXDH_DTB_TAB_UP_WR_INDEX_GET(dev_id, queue_id)++; + + if ((ack_vale >> 8) == ZXDH_DTB_TAB_ACK_UNUSED_MASK) + break; + } + + if (i == ZXDH_DTB_QUEUE_ITEM_NUM_MAX) + return ZXDH_RC_DTB_QUEUE_ITEM_SW_EMPTY; + + zxdh_np_dtb_item_ack_wr(dev_id, queue_id, ZXDH_DTB_DIR_UP_TYPE, item_index, + 0, ZXDH_DTB_TAB_ACK_IS_USING_MASK); + + *p_item_index = item_index; + + + return 0; +} + +static uint32_t +zxdh_np_dtb_tab_up_item_addr_get(uint32_t dev_id, + uint32_t queue_id, + uint32_t item_index, + uint32_t *p_phy_haddr, + uint32_t *p_phy_laddr) +{ + uint32_t rc = 0; + uint64_t addr; + + if (ZXDH_DTB_QUEUE_INIT_FLAG_GET(dev_id, queue_id) == 0) { + PMD_DRV_LOG(ERR, "dtb queue %d is not init.", queue_id); + return ZXDH_RC_DTB_QUEUE_IS_NOT_INIT; + } + + if (ZXDH_DTB_TAB_UP_USER_PHY_ADDR_FLAG_GET(dev_id, queue_id, item_index) == + ZXDH_DTB_TAB_UP_USER_ADDR_TYPE) + addr = ZXDH_DTB_TAB_UP_USER_PHY_ADDR_GET(dev_id, queue_id, item_index); + else + addr = ZXDH_DTB_ITEM_ACK_SIZE; + + *p_phy_haddr = (addr >> 32) & 0xffffffff; + *p_phy_laddr = addr & 0xffffffff; + + return rc; +} + +static uint32_t +zxdh_np_dtb_se_smmu0_dma_dump(uint32_t dev_id, + uint32_t queue_id, + uint32_t base_addr, + uint32_t depth, + uint32_t *p_data, + uint32_t *element_id) +{ + uint8_t form_buff[ZXDH_DTB_TABLE_CMD_SIZE_BIT / 8] = {0}; + uint32_t dump_dst_phy_haddr = 0; + uint32_t dump_dst_phy_laddr = 0; + uint32_t queue_item_index = 0; + uint32_t data_len; + uint32_t desc_len; + uint32_t rc; + + rc = zxdh_np_dtb_tab_up_free_item_get(dev_id, queue_id, &queue_item_index); + if (rc != 0) { + PMD_DRV_LOG(ERR, "dpp_dtb_tab_up_free_item_get failed = %d!", base_addr); + return ZXDH_RC_DTB_QUEUE_ITEM_SW_EMPTY; + } + + *element_id = queue_item_index; + + rc = zxdh_np_dtb_tab_up_item_addr_get(dev_id, queue_id, queue_item_index, + &dump_dst_phy_haddr, &dump_dst_phy_laddr); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_dtb_tab_up_item_addr_get"); + + data_len = depth * 128 / 32; + desc_len = ZXDH_DTB_LEN_POS_SETP / 4; + + rc = zxdh_np_dtb_write_dump_desc_info(dev_id, queue_id, queue_item_index, + (uint32_t *)form_buff, data_len, desc_len, p_data); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_dtb_write_dump_desc_info"); + + return rc; +} + +static uint32_t +zxdh_np_dtb_se_smmu0_ind_read(uint32_t dev_id, + uint32_t queue_id, + uint32_t base_addr, + uint32_t index, + uint32_t rd_mode, + uint32_t *p_data) +{ + uint32_t temp_data[4] = {0}; + uint32_t element_id = 0; + uint32_t row_index = 0; + uint32_t col_index = 0; + uint32_t eram_dump_base_addr; + uint32_t rc; + + switch 
(rd_mode) { + case ZXDH_ERAM128_OPR_128b: + { + row_index = index; + break; + } + case ZXDH_ERAM128_OPR_64b: + { + row_index = (index >> 1); + col_index = index & 0x1; + break; + } + case ZXDH_ERAM128_OPR_1b: + { + row_index = (index >> 7); + col_index = index & 0x7F; + break; + } + } + + eram_dump_base_addr = base_addr + row_index; + rc = zxdh_np_dtb_se_smmu0_dma_dump(dev_id, + queue_id, + eram_dump_base_addr, + 1, + temp_data, + &element_id); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_np_dtb_se_smmu0_dma_dump"); + + switch (rd_mode) { + case ZXDH_ERAM128_OPR_128b: + { + memcpy(p_data, temp_data, (128 / 8)); + break; + } + + case ZXDH_ERAM128_OPR_64b: + { + memcpy(p_data, temp_data + ((1 - col_index) << 1), (64 / 8)); + break; + } + + case ZXDH_ERAM128_OPR_1b: + { + ZXDH_COMM_UINT32_GET_BITS(p_data[0], *(temp_data + + (3 - col_index / 32)), (col_index % 32), 1); + break; + } + } + + return rc; +} + +static uint32_t +zxdh_np_dtb_stat_smmu0_int_read(uint32_t dev_id, + uint32_t queue_id, + uint32_t smmu0_base_addr, + ZXDH_STAT_CNT_MODE_E rd_mode, + uint32_t index, + uint32_t *p_data) +{ + uint32_t eram_rd_mode; + uint32_t rc; + + ZXDH_COMM_CHECK_DEV_POINT(dev_id, p_data); + + if (rd_mode == ZXDH_STAT_128_MODE) + eram_rd_mode = ZXDH_ERAM128_OPR_128b; + else + eram_rd_mode = ZXDH_ERAM128_OPR_64b; + + rc = zxdh_np_dtb_se_smmu0_ind_read(dev_id, + queue_id, + smmu0_base_addr, + index, + eram_rd_mode, + p_data); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_np_dtb_se_smmu0_ind_read"); + + return rc; +} + +int +zxdh_np_dtb_stats_get(uint32_t dev_id, + uint32_t queue_id, + ZXDH_STAT_CNT_MODE_E rd_mode, + uint32_t index, + uint32_t *p_data) +{ + ZXDH_PPU_STAT_CFG_T stat_cfg = {0}; + uint32_t ppu_eram_baddr; + uint32_t ppu_eram_depth; + uint32_t rc = 0; + + ZXDH_COMM_CHECK_DEV_POINT(dev_id, p_data); + + memset(&stat_cfg, 0x0, sizeof(stat_cfg)); + + zxdh_np_stat_cfg_soft_get(dev_id, &stat_cfg); + + ppu_eram_depth = stat_cfg.eram_depth; + ppu_eram_baddr = stat_cfg.eram_baddr; + + if ((index >> (ZXDH_STAT_128_MODE - rd_mode)) < ppu_eram_depth) { + rc = zxdh_np_dtb_stat_smmu0_int_read(dev_id, + queue_id, + ppu_eram_baddr, + rd_mode, + index, + p_data); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_dtb_stat_smmu0_int_read"); + } + + return rc; +} diff --git a/drivers/net/zxdh/zxdh_np.h b/drivers/net/zxdh/zxdh_np.h index 19d1f03f59..7da29cf7bd 100644 --- a/drivers/net/zxdh/zxdh_np.h +++ b/drivers/net/zxdh/zxdh_np.h @@ -432,6 +432,18 @@ typedef enum zxdh_sdt_table_type_e { ZXDH_SDT_TBLT_MAX = 7, } ZXDH_SDT_TABLE_TYPE_E; +typedef enum zxdh_dtb_dir_type_e { + ZXDH_DTB_DIR_DOWN_TYPE = 0, + ZXDH_DTB_DIR_UP_TYPE = 1, + ZXDH_DTB_DIR_TYPE_MAX, +} ZXDH_DTB_DIR_TYPE_E; + +typedef enum zxdh_dtb_tab_up_user_addr_type_e { + ZXDH_DTB_TAB_UP_NOUSER_ADDR_TYPE = 0, + ZXDH_DTB_TAB_UP_USER_ADDR_TYPE = 1, + ZXDH_DTB_TAB_UP_USER_ADDR_TYPE_MAX, +} ZXDH_DTB_TAB_UP_USER_ADDR_TYPE_E; + typedef struct zxdh_dtb_lpm_entry_t { uint32_t dtb_len0; uint8_t *p_data_buff0; @@ -537,6 +549,19 @@ typedef struct zxdh_dtb_hash_entry_info_t { uint8_t *p_rst; } ZXDH_DTB_HASH_ENTRY_INFO_T; +typedef struct zxdh_ppu_stat_cfg_t { + uint32_t eram_baddr; + uint32_t eram_depth; + uint32_t ddr_base_addr; + uint32_t ppu_addr_offset; +} ZXDH_PPU_STAT_CFG_T; + +typedef enum zxdh_stat_cnt_mode_e { + ZXDH_STAT_64_MODE = 0, + ZXDH_STAT_128_MODE = 1, + ZXDH_STAT_MAX_MODE, +} ZXDH_STAT_CNT_MODE_E; + int zxdh_np_host_init(uint32_t dev_id, ZXDH_DEV_INIT_CTRL_T *p_dev_init_ctrl); int zxdh_np_online_uninit(uint32_t dev_id, char *port_name, uint32_t queue_id); int 
zxdh_np_dtb_table_entry_write(uint32_t dev_id, uint32_t queue_id, @@ -545,5 +570,10 @@ int zxdh_np_dtb_table_entry_delete(uint32_t dev_id, uint32_t queue_id, uint32_t entrynum, ZXDH_DTB_USER_ENTRY_T *delete_entries); int zxdh_np_dtb_table_entry_get(uint32_t dev_id, uint32_t queue_id, ZXDH_DTB_USER_ENTRY_T *get_entry, uint32_t srh_mode); +int zxdh_np_dtb_stats_get(uint32_t dev_id, + uint32_t queue_id, + ZXDH_STAT_CNT_MODE_E rd_mode, + uint32_t index, + uint32_t *p_data); #endif /* ZXDH_NP_H */ diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h index 9343df81ac..deb0dd891a 100644 --- a/drivers/net/zxdh/zxdh_queue.h +++ b/drivers/net/zxdh/zxdh_queue.h @@ -53,6 +53,8 @@ enum { ZXDH_VTNET_RQ = 0, ZXDH_VTNET_TQ = 1 }; #define ZXDH_PI_HDR_SIZE sizeof(struct zxdh_pi_hdr) #define ZXDH_DL_NET_HDR_SIZE sizeof(struct zxdh_net_hdr_dl) #define ZXDH_UL_NET_HDR_SIZE sizeof(struct zxdh_net_hdr_ul) +#define ZXDH_PD_HDR_SIZE_MAX 256 +#define ZXDH_PD_HDR_SIZE_MIN ZXDH_TYPE_HDR_SIZE /* * ring descriptors: 16 bytes. diff --git a/drivers/net/zxdh/zxdh_rxtx.c b/drivers/net/zxdh/zxdh_rxtx.c index e36ba39423..9f315cecc6 100644 --- a/drivers/net/zxdh/zxdh_rxtx.c +++ b/drivers/net/zxdh/zxdh_rxtx.c @@ -406,6 +406,40 @@ static inline void zxdh_enqueue_xmit_packed(struct zxdh_virtnet_tx *txvq, zxdh_queue_store_flags_packed(head_dp, head_flags, vq->hw->weak_barriers); } +static void +zxdh_update_packet_stats(struct zxdh_virtnet_stats *stats, struct rte_mbuf *mbuf) +{ + uint32_t s = mbuf->pkt_len; + struct rte_ether_addr *ea = NULL; + + stats->bytes += s; + + if (s == 64) { + stats->size_bins[1]++; + } else if (s > 64 && s < 1024) { + uint32_t bin; + + /* count zeros, and offset into correct bin */ + bin = (sizeof(s) * 8) - rte_clz32(s) - 5; + stats->size_bins[bin]++; + } else { + if (s < 64) + stats->size_bins[0]++; + else if (s < 1519) + stats->size_bins[6]++; + else + stats->size_bins[7]++; + } + + ea = rte_pktmbuf_mtod(mbuf, struct rte_ether_addr *); + if (rte_is_multicast_ether_addr(ea)) { + if (rte_is_broadcast_ether_addr(ea)) + stats->broadcast++; + else + stats->multicast++; + } +} + uint16_t zxdh_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { @@ -459,12 +493,19 @@ zxdh_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkt break; } } + if (txm->nb_segs > ZXDH_TX_MAX_SEGS) { + PMD_TX_LOG(ERR, "%d segs dropped", txm->nb_segs); + txvq->stats.truncated_err += nb_pkts - nb_tx; + break; + } /* Enqueue Packet buffers */ if (can_push) zxdh_enqueue_xmit_packed_fast(txvq, txm, in_order); else zxdh_enqueue_xmit_packed(txvq, txm, slots, use_indirect, in_order); + zxdh_update_packet_stats(&txvq->stats, txm); } + txvq->stats.packets += nb_tx; if (likely(nb_tx)) { if (unlikely(zxdh_queue_kick_prepare_packed(vq))) { zxdh_queue_notify(vq); @@ -474,9 +515,10 @@ zxdh_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkt return nb_tx; } -uint16_t zxdh_xmit_pkts_prepare(void *tx_queue __rte_unused, struct rte_mbuf **tx_pkts, +uint16_t zxdh_xmit_pkts_prepare(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { + struct zxdh_virtnet_tx *txvq = tx_queue; uint16_t nb_tx; for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) { @@ -496,6 +538,12 @@ uint16_t zxdh_xmit_pkts_prepare(void *tx_queue __rte_unused, struct rte_mbuf **t rte_errno = -error; break; } + if (m->nb_segs > ZXDH_TX_MAX_SEGS) { + PMD_TX_LOG(ERR, "%d segs dropped", m->nb_segs); + txvq->stats.truncated_err += nb_pkts - nb_tx; + rte_errno = ENOMEM; + break; + } } return nb_tx; } 
@@ -571,7 +619,7 @@ static int32_t zxdh_rx_update_mbuf(struct rte_mbuf *m, struct zxdh_net_hdr_ul *h return 0; } -static inline void zxdh_discard_rxbuf(struct zxdh_virtqueue *vq, struct rte_mbuf *m) +static void zxdh_discard_rxbuf(struct zxdh_virtqueue *vq, struct rte_mbuf *m) { int32_t error = 0; /* @@ -613,7 +661,13 @@ uint16_t zxdh_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts, for (i = 0; i < num; i++) { rxm = rcv_pkts[i]; - + if (unlikely(len[i] < ZXDH_UL_NET_HDR_SIZE)) { + nb_enqueued++; + PMD_RX_LOG(ERR, "RX, len:%u err", len[i]); + zxdh_discard_rxbuf(vq, rxm); + rxvq->stats.errors++; + continue; + } struct zxdh_net_hdr_ul *header = (struct zxdh_net_hdr_ul *)((char *)rxm->buf_addr + RTE_PKTMBUF_HEADROOM); @@ -623,8 +677,22 @@ uint16_t zxdh_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts, PMD_RX_LOG(ERR, "dequeue %d pkt, No.%d pkt seg_num is %d", num, i, seg_num); seg_num = 1; } + if (seg_num > ZXDH_RX_MAX_SEGS) { + PMD_RX_LOG(ERR, "dequeue %d pkt, No.%d pkt seg_num is %d", num, i, seg_num); + nb_enqueued++; + zxdh_discard_rxbuf(vq, rxm); + rxvq->stats.errors++; + continue; + } /* bit[0:6]-pd_len unit:2B */ uint16_t pd_len = header->type_hdr.pd_len << 1; + if (pd_len > ZXDH_PD_HDR_SIZE_MAX || pd_len < ZXDH_PD_HDR_SIZE_MIN) { + PMD_RX_LOG(ERR, "pd_len:%d is invalid", pd_len); + nb_enqueued++; + zxdh_discard_rxbuf(vq, rxm); + rxvq->stats.errors++; + continue; + } /* Private queue only handle type hdr */ hdr_size = pd_len; rxm->data_off = RTE_PKTMBUF_HEADROOM + hdr_size; @@ -639,6 +707,7 @@ uint16_t zxdh_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts, /* Update rte_mbuf according to pi/pd header */ if (zxdh_rx_update_mbuf(rxm, header) < 0) { zxdh_discard_rxbuf(vq, rxm); + rxvq->stats.errors++; continue; } seg_res = seg_num - 1; @@ -661,8 +730,11 @@ uint16_t zxdh_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts, PMD_RX_LOG(ERR, "dropped rcvd_pkt_len %d pktlen %d.", rcvd_pkt_len, rx_pkts[nb_rx]->pkt_len); zxdh_discard_rxbuf(vq, rx_pkts[nb_rx]); + rxvq->stats.errors++; + rxvq->stats.truncated_err++; continue; } + zxdh_update_packet_stats(&rxvq->stats, rx_pkts[nb_rx]); nb_rx++; } } @@ -675,6 +747,7 @@ uint16_t zxdh_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts, if (unlikely(rcv_cnt == 0)) { PMD_RX_LOG(ERR, "Not enough segments for packet."); rte_pktmbuf_free(rx_pkts[nb_rx]); + rxvq->stats.errors++; break; } while (extra_idx < rcv_cnt) { @@ -694,11 +767,15 @@ uint16_t zxdh_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts, PMD_RX_LOG(ERR, "dropped rcvd_pkt_len %d pktlen %d.", rcvd_pkt_len, rx_pkts[nb_rx]->pkt_len); zxdh_discard_rxbuf(vq, rx_pkts[nb_rx]); + rxvq->stats.errors++; + rxvq->stats.truncated_err++; continue; } + zxdh_update_packet_stats(&rxvq->stats, rx_pkts[nb_rx]); nb_rx++; } } + rxvq->stats.packets += nb_rx; /* Allocate new mbuf for the used descriptor */ if (likely(!zxdh_queue_full(vq))) { diff --git a/drivers/net/zxdh/zxdh_tables.h b/drivers/net/zxdh/zxdh_tables.h index c8d1de3bbb..a77ec46d84 100644 --- a/drivers/net/zxdh/zxdh_tables.h +++ b/drivers/net/zxdh/zxdh_tables.h @@ -11,6 +11,11 @@ #define ZXDH_PORT_BASE_QID_FLAG 10 #define ZXDH_PORT_ATTR_IS_UP_FLAG 35 +#define ZXDH_MTU_STATS_EGRESS_BASE 0x8481 +#define ZXDH_MTU_STATS_INGRESS_BASE 0x8981 +#define ZXDH_BROAD_STATS_EGRESS_BASE 0xC902 +#define ZXDH_BROAD_STATS_INGRESS_BASE 0xD102 + extern struct zxdh_dtb_shared_data g_dtb_data; struct zxdh_port_attr_table { -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 87145 bytes --] ^ permalink raw reply 
[flat|nested] 225+ messages in thread
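The ERAM addressing in zxdh_np_dtb_se_smmu0_ind_read() above packs several counters into each 128-bit row, so a statistics index is first split into a row and a column. A minimal standalone sketch of that split; the helper name and the OPR_* stand-ins are illustrative, not driver symbols:

#include <stdint.h>

#define OPR_128B 0	/* stands in for ZXDH_ERAM128_OPR_128b */
#define OPR_64B  1	/* stands in for ZXDH_ERAM128_OPR_64b */
#define OPR_1B   2	/* stands in for ZXDH_ERAM128_OPR_1b */

/* Split a counter index into (row, col) within 128-bit ERAM rows: one
 * 128-bit entry, two 64-bit entries, or 128 single-bit flags per row.
 */
static void
eram128_index_split(uint32_t index, uint32_t rd_mode,
		uint32_t *row, uint32_t *col)
{
	switch (rd_mode) {
	case OPR_128B:		/* one entry per row */
		*row = index;
		*col = 0;
		break;
	case OPR_64B:		/* two 64-bit halves per row */
		*row = index >> 1;
		*col = index & 0x1;
		break;
	case OPR_1B:		/* 128 flags per row */
		*row = index >> 7;
		*col = index & 0x7F;
		break;
	default:
		*row = 0;
		*col = 0;
		break;
	}
}

In 64-bit mode, index 9 therefore lands in row 4, column 1, and the driver copies the matching half out of the dumped row with temp_data + ((1 - col_index) << 1): an even index takes temp_data[2..3], an odd one temp_data[0..1]. The size histogram added in the same patch, zxdh_update_packet_stats(), uses a count-leading-zeros trick for the middle bins: for 64 < s < 1024, bin = 32 - rte_clz32(s) - 5, so 65..127-byte packets land in bin 2, 128..255 in bin 3, up to 512..1023 in bin 5, while bins 0, 1, 6 and 7 hold frames under 64 bytes, exactly 64, 1024..1518, and larger.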
* [PATCH v3 15/15] net/zxdh: mtu update ops implementations 2024-12-17 11:41 ` [PATCH v3 00/15] net/zxdh: updated net zxdh driver Junlong Wang ` (13 preceding siblings ...) 2024-12-17 11:41 ` [PATCH v3 14/15] net/zxdh: basic stats ops implementations Junlong Wang @ 2024-12-17 11:41 ` Junlong Wang 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-17 11:41 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 8573 bytes --] mtu update ops implementations. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- doc/guides/nics/features/zxdh.ini | 1 + doc/guides/nics/zxdh.rst | 2 + drivers/net/zxdh/zxdh_ethdev.c | 5 ++ drivers/net/zxdh/zxdh_ethdev_ops.c | 78 ++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev_ops.h | 3 ++ drivers/net/zxdh/zxdh_tables.c | 42 ++++++++++++++++ drivers/net/zxdh/zxdh_tables.h | 4 ++ 7 files changed, 135 insertions(+) diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index 98c141cf95..3561e31666 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -24,3 +24,4 @@ RSS reta update = Y Inner RSS = Y Basic stats = Y Stats per queue = Y +MTU update = Y diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index c8a52b587c..58e0c49a2e 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -33,6 +33,8 @@ Features of the ZXDH PMD are: - QINQ stripping and inserting - Receive Side Scaling (RSS) - Port hardware statistics +- MTU update +- Jumbo frames Driver compilation and testing diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 0326d143ec..147b66b998 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -63,6 +63,10 @@ zxdh_dev_infos_get(struct rte_eth_dev *dev, dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_256; dev_info->flow_type_rss_offloads = ZXDH_RSS_HF; + dev_info->max_mtu = ZXDH_MAX_RX_PKTLEN - RTE_ETHER_HDR_LEN - + RTE_VLAN_HLEN - ZXDH_DL_NET_HDR_SIZE; + dev_info->min_mtu = ZXDH_ETHER_MIN_MTU; + dev_info->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_MULTI_SEGS); dev_info->tx_offload_capa |= (RTE_ETH_TX_OFFLOAD_TCP_TSO | RTE_ETH_TX_OFFLOAD_UDP_TSO); @@ -1152,6 +1156,7 @@ static const struct eth_dev_ops zxdh_eth_dev_ops = { .rss_hash_conf_get = zxdh_rss_hash_conf_get, .stats_get = zxdh_dev_stats_get, .stats_reset = zxdh_dev_stats_reset, + .mtu_set = zxdh_dev_mtu_set, }; static int32_t diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.c b/drivers/net/zxdh/zxdh_ethdev_ops.c index 2377ff202d..77df006fec 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.c +++ b/drivers/net/zxdh/zxdh_ethdev_ops.c @@ -13,6 +13,7 @@ #include "zxdh_logs.h" #include "zxdh_rxtx.h" #include "zxdh_np.h" +#include "zxdh_queue.h" #define ZXDH_VLAN_FILTER_GROUPS 64 #define ZXDH_INVALID_LOGIC_QID 0xFFFFU @@ -1515,3 +1516,80 @@ int zxdh_dev_stats_reset(struct rte_eth_dev *dev) return 0; } + +int zxdh_dev_mtu_set(struct rte_eth_dev *dev, uint16_t new_mtu) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_panel_table panel = {0}; + struct zxdh_port_attr_table vport_att = {0}; + uint16_t vfid = zxdh_vport_to_vfid(hw->vport); + uint16_t max_mtu = 0; + int ret = 0; + + max_mtu = ZXDH_MAX_RX_PKTLEN - RTE_ETHER_HDR_LEN - RTE_VLAN_HLEN - ZXDH_DL_NET_HDR_SIZE; + if (new_mtu < ZXDH_ETHER_MIN_MTU || new_mtu > max_mtu) { + PMD_DRV_LOG(ERR, "invalid mtu:%d, range[%d, %d]", + new_mtu, ZXDH_ETHER_MIN_MTU, max_mtu); + return -EINVAL; + } + + if (dev->data->mtu == new_mtu) + 
return 0; + + if (hw->is_pf) { + memset(&panel, 0, sizeof(panel)); + memset(&vport_att, 0, sizeof(vport_att)); + ret = zxdh_get_panel_attr(dev, &panel); + if (ret != 0) { + PMD_DRV_LOG(ERR, "get_panel_attr ret:%d", ret); + return -1; + } + + ret = zxdh_get_port_attr(vfid, &vport_att); + if (ret != 0) { + PMD_DRV_LOG(ERR, + "[vfid:%d] zxdh_dev_mtu, get vport dpp_ret:%d", vfid, ret); + return -1; + } + + panel.mtu = new_mtu; + panel.mtu_enable = 1; + ret = zxdh_set_panel_attr(dev, &panel); + if (ret != 0) { + PMD_DRV_LOG(ERR, "set zxdh_dev_mtu failed, ret:%u", ret); + return ret; + } + + vport_att.mtu_enable = 1; + vport_att.mtu = new_mtu; + ret = zxdh_set_port_attr(vfid, &vport_att); + if (ret != 0) { + PMD_DRV_LOG(ERR, + "[vfid:%d] zxdh_dev_mtu, set vport dpp_ret:%d", vfid, ret); + return ret; + } + } else { + struct zxdh_msg_info msg_info = {0}; + struct zxdh_port_attr_set_msg *attr_msg = &msg_info.data.port_attr_msg; + + zxdh_msg_head_build(hw, ZXDH_PORT_ATTRS_SET, &msg_info); + attr_msg->mode = ZXDH_PORT_MTU_EN_FLAG; + attr_msg->value = 1; + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d ", + hw->vport.vport, ZXDH_PORT_MTU_EN_FLAG); + return ret; + } + attr_msg->mode = ZXDH_PORT_MTU_FLAG; + attr_msg->value = new_mtu; + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d ", + hw->vport.vport, ZXDH_PORT_MTU_FLAG); + return ret; + } + } + dev->data->mtu = new_mtu; + return 0; +} diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.h b/drivers/net/zxdh/zxdh_ethdev_ops.h index f35378e691..fac6cbd5e8 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.h +++ b/drivers/net/zxdh/zxdh_ethdev_ops.h @@ -26,6 +26,8 @@ #define ZXDH_HF_MAC_VLAN_ETH ZXDH_ETH_RSS_L2 #define ZXDH_RSS_HF ((ZXDH_HF_MAC_VLAN_ETH | ZXDH_HF_F3_ETH | ZXDH_HF_F5_ETH)) +#define ZXDH_ETHER_MIN_MTU 68 + struct zxdh_hw_vqm_stats { uint64_t rx_total; uint64_t tx_total; @@ -73,5 +75,6 @@ int zxdh_rss_hash_conf_get(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss int zxdh_rss_configure(struct rte_eth_dev *dev); int zxdh_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats); int zxdh_dev_stats_reset(struct rte_eth_dev *dev); +int zxdh_dev_mtu_set(struct rte_eth_dev *dev, uint16_t new_mtu); #endif /* ZXDH_ETHDEV_OPS_H */ diff --git a/drivers/net/zxdh/zxdh_tables.c b/drivers/net/zxdh/zxdh_tables.c index af148a974e..c1b693a613 100644 --- a/drivers/net/zxdh/zxdh_tables.c +++ b/drivers/net/zxdh/zxdh_tables.c @@ -150,6 +150,48 @@ int zxdh_panel_table_init(struct rte_eth_dev *dev) return ret; } +int zxdh_get_panel_attr(struct rte_eth_dev *dev, struct zxdh_panel_table *panel_attr) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint8_t index_phy_port = hw->phyport; + + ZXDH_DTB_ERAM_ENTRY_INFO_T panel_entry = { + .index = index_phy_port, + .p_data = (uint32_t *)panel_attr + }; + ZXDH_DTB_USER_ENTRY_T entry = { + .sdt_no = ZXDH_SDT_PANEL_ATT_TABLE, + .p_entry_data = (void *)&panel_entry + }; + int ret = zxdh_np_dtb_table_entry_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, &entry, 1); + + if (ret != 0) + PMD_DRV_LOG(ERR, "get panel table failed"); + + return ret; +} + +int zxdh_set_panel_attr(struct rte_eth_dev *dev, struct zxdh_panel_table *panel_attr) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint8_t index_phy_port = hw->phyport; + + ZXDH_DTB_ERAM_ENTRY_INFO_T panel_entry = { + .index = index_phy_port, + .p_data = (uint32_t *)panel_attr 
+ }; + ZXDH_DTB_USER_ENTRY_T entry = { + .sdt_no = ZXDH_SDT_PANEL_ATT_TABLE, + .p_entry_data = (void *)&panel_entry + }; + int ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, g_dtb_data.queueid, 1, &entry); + + if (ret) + PMD_DRV_LOG(ERR, "Insert panel table failed"); + + return ret; +} + int zxdh_get_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr) { diff --git a/drivers/net/zxdh/zxdh_tables.h b/drivers/net/zxdh/zxdh_tables.h index a77ec46d84..e2bdb01688 100644 --- a/drivers/net/zxdh/zxdh_tables.h +++ b/drivers/net/zxdh/zxdh_tables.h @@ -8,8 +8,10 @@ #include <stdint.h> #define ZXDH_DEVICE_NO 0 +#define ZXDH_PORT_MTU_FLAG 9 #define ZXDH_PORT_BASE_QID_FLAG 10 #define ZXDH_PORT_ATTR_IS_UP_FLAG 35 +#define ZXDH_PORT_MTU_EN_FLAG 42 #define ZXDH_MTU_STATS_EGRESS_BASE 0x8481 #define ZXDH_MTU_STATS_INGRESS_BASE 0x8981 @@ -223,5 +225,7 @@ int zxdh_vlan_filter_table_init(struct rte_eth_dev *dev); int zxdh_vlan_filter_table_set(uint16_t vport, uint16_t vlan_id, uint8_t enable); int zxdh_rss_table_set(uint16_t vport, struct zxdh_rss_reta *rss_reta); int zxdh_rss_table_get(uint16_t vport, struct zxdh_rss_reta *rss_reta); +int zxdh_get_panel_attr(struct rte_eth_dev *dev, struct zxdh_panel_table *panel_attr); +int zxdh_set_panel_attr(struct rte_eth_dev *dev, struct zxdh_panel_table *panel_attr); #endif /* ZXDH_TABLES_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 18747 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
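The upper MTU bound in this patch comes straight from the receive buffer budget: a frame still has to carry the Ethernet header, one VLAN tag and the zxdh DL net header on top of the payload, hence max_mtu = ZXDH_MAX_RX_PKTLEN - RTE_ETHER_HDR_LEN - RTE_VLAN_HLEN - ZXDH_DL_NET_HDR_SIZE. A minimal sketch of the same check; the two ZXDH_* length constants below are placeholders, the driver takes the real values from zxdh_queue.h:

#include <stdint.h>
#include <rte_ether.h>

#define ZXDH_MAX_RX_PKTLEN   14000U	/* placeholder value */
#define ZXDH_DL_NET_HDR_SIZE 16U	/* placeholder value */
#define ZXDH_ETHER_MIN_MTU   68U

/* Accept an MTU only if the resulting frame fits the RX buffer budget. */
static int
zxdh_mtu_in_range(uint16_t new_mtu)
{
	uint16_t max_mtu = ZXDH_MAX_RX_PKTLEN - RTE_ETHER_HDR_LEN -
			RTE_VLAN_HLEN - ZXDH_DL_NET_HDR_SIZE;

	return new_mtu >= ZXDH_ETHER_MIN_MTU && new_mtu <= max_mtu;
}

Note the PF/VF split in zxdh_dev_mtu_set(): a PF updates the panel and port attribute tables directly over the DTB queue, while a VF sends two ZXDH_PORT_ATTRS_SET messages to its PF (ZXDH_PORT_MTU_EN_FLAG to enable the check, then ZXDH_PORT_MTU_FLAG with the value), since only the PF owns the tables.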
* [PATCH v4 00/15] net/zxdh: updated net zxdh driver 2024-12-10 5:53 ` [PATCH v2 01/15] net/zxdh: zxdh np init implementation Junlong Wang ` (3 preceding siblings ...) 2024-12-17 11:41 ` [PATCH v3 00/15] net/zxdh: updated net zxdh driver Junlong Wang @ 2024-12-18 9:25 ` Junlong Wang 2024-12-18 9:25 ` [PATCH v4 01/15] net/zxdh: zxdh np init implementation Junlong Wang ` (14 more replies) 2024-12-23 11:02 ` [PATCH v5 00/15] net/zxdh: updated net zxdh driver Junlong Wang 2024-12-26 3:37 ` [PATCH v6 " Junlong Wang 6 siblings, 15 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-18 9:25 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 2795 bytes --] V4: - resolved CI compile issues. V3: - used rte_zmalloc and rte_calloc to avoid memset. - removed unnecessary initialization of variables that are set on first use. - adjusted functions that always return 0 to void, skipping the later assertion. - resolved some WARNING:MACRO_ARG_UNUSED issues. - resolved some other issues. V2: - resolved code style and github-robot build issues. V1: - updated net zxdh driver: provided insert/delete/get table functions and link/mac/vlan/promiscuous/rss/mtu ops. Junlong Wang (15): net/zxdh: zxdh np init implementation net/zxdh: zxdh np uninit implementation net/zxdh: port tables init implementations net/zxdh: port tables uninit implementations net/zxdh: rx/tx queue setup and intr enable net/zxdh: dev start/stop ops implementations net/zxdh: provided dev simple tx implementations net/zxdh: provided dev simple rx implementations net/zxdh: link info update, set link up/down net/zxdh: mac set/add/remove ops implementations net/zxdh: promisc/allmulti ops implementations net/zxdh: vlan filter/ offload ops implementations net/zxdh: rss hash config/update, reta update/get net/zxdh: basic stats ops implementations net/zxdh: mtu update ops implementations doc/guides/nics/features/zxdh.ini | 18 + doc/guides/nics/zxdh.rst | 17 + drivers/net/zxdh/meson.build | 4 + drivers/net/zxdh/zxdh_common.c | 24 + drivers/net/zxdh/zxdh_common.h | 1 + drivers/net/zxdh/zxdh_ethdev.c | 575 +++++++- drivers/net/zxdh/zxdh_ethdev.h | 40 + drivers/net/zxdh/zxdh_ethdev_ops.c | 1595 +++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev_ops.h | 80 ++ drivers/net/zxdh/zxdh_msg.c | 166 +++ drivers/net/zxdh/zxdh_msg.h | 232 ++++ drivers/net/zxdh/zxdh_np.c | 2060 ++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_np.h | 579 ++++++++ drivers/net/zxdh/zxdh_pci.c | 26 +- drivers/net/zxdh/zxdh_pci.h | 9 +- drivers/net/zxdh/zxdh_queue.c | 242 +++- drivers/net/zxdh/zxdh_queue.h | 144 +- drivers/net/zxdh/zxdh_rxtx.c | 804 +++++++++++ drivers/net/zxdh/zxdh_rxtx.h | 20 +- drivers/net/zxdh/zxdh_tables.c | 794 +++++++++++ drivers/net/zxdh/zxdh_tables.h | 231 ++++ 21 files changed, 7615 insertions(+), 46 deletions(-) create mode 100644 drivers/net/zxdh/zxdh_ethdev_ops.c create mode 100644 drivers/net/zxdh/zxdh_ethdev_ops.h create mode 100644 drivers/net/zxdh/zxdh_np.c create mode 100644 drivers/net/zxdh/zxdh_np.h create mode 100644 drivers/net/zxdh/zxdh_rxtx.c create mode 100644 drivers/net/zxdh/zxdh_tables.c create mode 100644 drivers/net/zxdh/zxdh_tables.h -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 5307 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
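The v3 note on rte_zmalloc/rte_calloc above refers to the usual DPDK pattern sketched below: a zeroing allocator makes the follow-up memset redundant. Illustrative snippet, not code taken from the series:

#include <stddef.h>
#include <string.h>
#include <rte_malloc.h>

/* Before: allocate, then clear by hand. */
static void *
alloc_cleared_v2(size_t len)
{
	void *p = rte_malloc(NULL, len, 0);

	if (p != NULL)
		memset(p, 0, len);
	return p;
}

/* After (v3): rte_zmalloc() already returns zero-filled memory. */
static void *
alloc_cleared_v3(size_t len)
{
	return rte_zmalloc(NULL, len, 0);
}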
* [PATCH v4 01/15] net/zxdh: zxdh np init implementation 2024-12-18 9:25 ` [PATCH v4 00/15] net/zxdh: updated net zxdh driver Junlong Wang @ 2024-12-18 9:25 ` Junlong Wang 2024-12-18 9:25 ` [PATCH v4 02/15] net/zxdh: zxdh np uninit implementation Junlong Wang ` (13 subsequent siblings) 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-18 9:25 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 36779 bytes --] (np)network Processor initialize resources in host, and initialize a channel for some tables insert/get/del. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/meson.build | 1 + drivers/net/zxdh/zxdh_ethdev.c | 234 +++++++++++++++++++++-- drivers/net/zxdh/zxdh_ethdev.h | 30 +++ drivers/net/zxdh/zxdh_msg.c | 44 +++++ drivers/net/zxdh/zxdh_msg.h | 37 ++++ drivers/net/zxdh/zxdh_np.c | 340 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_np.h | 198 +++++++++++++++++++ drivers/net/zxdh/zxdh_pci.c | 2 +- drivers/net/zxdh/zxdh_pci.h | 6 +- drivers/net/zxdh/zxdh_queue.c | 2 +- drivers/net/zxdh/zxdh_queue.h | 14 +- 11 files changed, 875 insertions(+), 33 deletions(-) create mode 100644 drivers/net/zxdh/zxdh_np.c create mode 100644 drivers/net/zxdh/zxdh_np.h diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build index c9960f4c73..ab24a3145c 100644 --- a/drivers/net/zxdh/meson.build +++ b/drivers/net/zxdh/meson.build @@ -19,4 +19,5 @@ sources = files( 'zxdh_msg.c', 'zxdh_pci.c', 'zxdh_queue.c', + 'zxdh_np.c', ) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index c786198535..b8f4415e00 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -5,6 +5,7 @@ #include <ethdev_pci.h> #include <bus_pci_driver.h> #include <rte_ethdev.h> +#include <rte_malloc.h> #include "zxdh_ethdev.h" #include "zxdh_logs.h" @@ -12,8 +13,15 @@ #include "zxdh_msg.h" #include "zxdh_common.h" #include "zxdh_queue.h" +#include "zxdh_np.h" struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; +struct zxdh_shared_data *zxdh_shared_data; +const char *ZXDH_PMD_SHARED_DATA_MZ = "zxdh_pmd_shared_data"; +rte_spinlock_t zxdh_shared_data_lock = RTE_SPINLOCK_INITIALIZER; +struct zxdh_dtb_shared_data g_dtb_data; + +#define ZXDH_INVALID_DTBQUE 0xFFFF uint16_t zxdh_vport_to_vfid(union zxdh_virport_num v) @@ -406,14 +414,14 @@ zxdh_features_update(struct zxdh_hw *hw, ZXDH_VTPCI_OPS(hw)->set_features(hw, req_features); if ((rx_offloads & (RTE_ETH_TX_OFFLOAD_UDP_CKSUM | RTE_ETH_TX_OFFLOAD_TCP_CKSUM)) && - !vtpci_with_feature(hw, ZXDH_NET_F_GUEST_CSUM)) { + !zxdh_pci_with_feature(hw, ZXDH_NET_F_GUEST_CSUM)) { PMD_DRV_LOG(ERR, "rx checksum not available on this host"); return -ENOTSUP; } if ((rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) && - (!vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO4) || - !vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO6))) { + (!zxdh_pci_with_feature(hw, ZXDH_NET_F_GUEST_TSO4) || + !zxdh_pci_with_feature(hw, ZXDH_NET_F_GUEST_TSO6))) { PMD_DRV_LOG(ERR, "Large Receive Offload not available on this host"); return -ENOTSUP; } @@ -421,20 +429,20 @@ zxdh_features_update(struct zxdh_hw *hw, } static bool -rx_offload_enabled(struct zxdh_hw *hw) +zxdh_rx_offload_enabled(struct zxdh_hw *hw) { - return vtpci_with_feature(hw, ZXDH_NET_F_GUEST_CSUM) || - vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO4) || - vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO6); + return zxdh_pci_with_feature(hw, ZXDH_NET_F_GUEST_CSUM) || + zxdh_pci_with_feature(hw, 
ZXDH_NET_F_GUEST_TSO4) || + zxdh_pci_with_feature(hw, ZXDH_NET_F_GUEST_TSO6); } static bool -tx_offload_enabled(struct zxdh_hw *hw) +zxdh_tx_offload_enabled(struct zxdh_hw *hw) { - return vtpci_with_feature(hw, ZXDH_NET_F_CSUM) || - vtpci_with_feature(hw, ZXDH_NET_F_HOST_TSO4) || - vtpci_with_feature(hw, ZXDH_NET_F_HOST_TSO6) || - vtpci_with_feature(hw, ZXDH_NET_F_HOST_UFO); + return zxdh_pci_with_feature(hw, ZXDH_NET_F_CSUM) || + zxdh_pci_with_feature(hw, ZXDH_NET_F_HOST_TSO4) || + zxdh_pci_with_feature(hw, ZXDH_NET_F_HOST_TSO6) || + zxdh_pci_with_feature(hw, ZXDH_NET_F_HOST_UFO); } static void @@ -466,7 +474,7 @@ zxdh_dev_free_mbufs(struct rte_eth_dev *dev) continue; PMD_DRV_LOG(DEBUG, "Before freeing %s[%d] used and unused buf", type, i); - while ((buf = zxdh_virtqueue_detach_unused(vq)) != NULL) + while ((buf = zxdh_queue_detach_unused(vq)) != NULL) rte_pktmbuf_free(buf); } } @@ -550,9 +558,9 @@ zxdh_init_vring(struct zxdh_virtqueue *vq) vq->vq_desc_tail_idx = (uint16_t)(vq->vq_nentries - 1); vq->vq_free_cnt = vq->vq_nentries; memset(vq->vq_descx, 0, sizeof(struct zxdh_vq_desc_extra) * vq->vq_nentries); - vring_init_packed(&vq->vq_packed.ring, ring_mem, ZXDH_PCI_VRING_ALIGN, size); - vring_desc_init_packed(vq, size); - virtqueue_disable_intr(vq); + zxdh_vring_init_packed(&vq->vq_packed.ring, ring_mem, ZXDH_PCI_VRING_ALIGN, size); + zxdh_vring_desc_init_packed(vq, size); + zxdh_queue_disable_intr(vq); } static int32_t @@ -621,7 +629,7 @@ zxdh_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_logic_qidx) /* * Reserve a memzone for vring elements */ - size = vring_size(hw, vq_size, ZXDH_PCI_VRING_ALIGN); + size = zxdh_vring_size(hw, vq_size, ZXDH_PCI_VRING_ALIGN); vq->vq_ring_size = RTE_ALIGN_CEIL(size, ZXDH_PCI_VRING_ALIGN); PMD_DRV_LOG(DEBUG, "vring_size: %d, rounded_vring_size: %d", size, vq->vq_ring_size); @@ -694,7 +702,8 @@ zxdh_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_logic_qidx) /* first indirect descriptor is always the tx header */ struct zxdh_vring_packed_desc *start_dp = txr[i].tx_packed_indir; - vring_desc_init_indirect_packed(start_dp, RTE_DIM(txr[i].tx_packed_indir)); + zxdh_vring_desc_init_indirect_packed(start_dp, + RTE_DIM(txr[i].tx_packed_indir)); start_dp->addr = txvq->zxdh_net_hdr_mem + i * sizeof(*txr) + offsetof(struct zxdh_tx_region, tx_hdr); /* length will be updated to actual pi hdr size when xmit pkt */ @@ -792,8 +801,8 @@ zxdh_dev_configure(struct rte_eth_dev *dev) } } - hw->has_tx_offload = tx_offload_enabled(hw); - hw->has_rx_offload = rx_offload_enabled(hw); + hw->has_tx_offload = zxdh_tx_offload_enabled(hw); + hw->has_rx_offload = zxdh_rx_offload_enabled(hw); nr_vq = dev->data->nb_rx_queues + dev->data->nb_tx_queues; if (nr_vq == hw->queue_num) @@ -881,7 +890,7 @@ zxdh_init_device(struct rte_eth_dev *eth_dev) rte_ether_addr_copy((struct rte_ether_addr *)hw->mac_addr, ð_dev->data->mac_addrs[0]); /* If host does not support both status and MSI-X then disable LSC */ - if (vtpci_with_feature(hw, ZXDH_NET_F_STATUS) && hw->use_msix != ZXDH_MSIX_NONE) + if (zxdh_pci_with_feature(hw, ZXDH_NET_F_STATUS) && hw->use_msix != ZXDH_MSIX_NONE) eth_dev->data->dev_flags |= RTE_ETH_DEV_INTR_LSC; else eth_dev->data->dev_flags &= ~RTE_ETH_DEV_INTR_LSC; @@ -913,6 +922,181 @@ zxdh_agent_comm(struct rte_eth_dev *eth_dev, struct zxdh_hw *hw) return 0; } +static int +zxdh_np_dtb_res_init(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_bar_offset_params param = {0}; + struct zxdh_bar_offset_res res = {0}; + int ret = 0; + + if 
(g_dtb_data.init_done) { + PMD_DRV_LOG(DEBUG, "DTB res already init done, dev %s no need to init", + dev->device->name); + return 0; + } + g_dtb_data.queueid = ZXDH_INVALID_DTBQUE; + g_dtb_data.bind_device = dev; + g_dtb_data.dev_refcnt++; + g_dtb_data.init_done = 1; + + ZXDH_DEV_INIT_CTRL_T *dpp_ctrl = rte_zmalloc(NULL, sizeof(*dpp_ctrl) + + sizeof(ZXDH_DTB_ADDR_INFO_T) * 256, 0); + if (dpp_ctrl == NULL) { + PMD_DRV_LOG(ERR, "dev %s cannot allocate memory for dpp_ctrl", dev->device->name); + ret = -ENOMEM; + goto free_res; + } + dpp_ctrl->queue_id = 0xff; + dpp_ctrl->vport = hw->vport.vport; + dpp_ctrl->vector = ZXDH_MSIX_INTR_DTB_VEC; + strlcpy(dpp_ctrl->port_name, dev->device->name, sizeof(dpp_ctrl->port_name)); + dpp_ctrl->pcie_vir_addr = (uint32_t)hw->bar_addr[0]; + + param.pcie_id = hw->pcie_id; + param.virt_addr = hw->bar_addr[0] + ZXDH_CTRLCH_OFFSET; + param.type = ZXDH_URI_NP; + + ret = zxdh_get_bar_offset(&param, &res); + if (ret) { + PMD_DRV_LOG(ERR, "dev %s get np bar offset failed", dev->device->name); + goto free_res; + } + dpp_ctrl->np_bar_len = res.bar_length; + dpp_ctrl->np_bar_offset = res.bar_offset; + + if (!g_dtb_data.dtb_table_conf_mz) { + const struct rte_memzone *conf_mz = rte_memzone_reserve_aligned("zxdh_dtb_table_conf_mz", + ZXDH_DTB_TABLE_CONF_SIZE, SOCKET_ID_ANY, 0, RTE_CACHE_LINE_SIZE); + + if (conf_mz == NULL) { + PMD_DRV_LOG(ERR, + "dev %s cannot allocate memory for dtb table conf", + dev->device->name); + ret = -ENOMEM; + goto free_res; + } + dpp_ctrl->down_vir_addr = conf_mz->addr_64; + dpp_ctrl->down_phy_addr = conf_mz->iova; + g_dtb_data.dtb_table_conf_mz = conf_mz; + } + + if (!g_dtb_data.dtb_table_dump_mz) { + const struct rte_memzone *dump_mz = rte_memzone_reserve_aligned("zxdh_dtb_table_dump_mz", + ZXDH_DTB_TABLE_DUMP_SIZE, SOCKET_ID_ANY, 0, RTE_CACHE_LINE_SIZE); + + if (dump_mz == NULL) { + PMD_DRV_LOG(ERR, + "dev %s cannot allocate memory for dtb table dump", + dev->device->name); + ret = -ENOMEM; + goto free_res; + } + dpp_ctrl->dump_vir_addr = dump_mz->addr_64; + dpp_ctrl->dump_phy_addr = dump_mz->iova; + g_dtb_data.dtb_table_dump_mz = dump_mz; + } + + ret = zxdh_np_host_init(0, dpp_ctrl); + if (ret) { + PMD_DRV_LOG(ERR, "dev %s dpp host np init failed, ret %d", dev->device->name, ret); + goto free_res; + } + + PMD_DRV_LOG(DEBUG, "dev %s dpp host np init ok, dtb queue %d", + dev->device->name, dpp_ctrl->queue_id); + g_dtb_data.queueid = dpp_ctrl->queue_id; + rte_free(dpp_ctrl); + return 0; + +free_res: + rte_free(dpp_ctrl); + return ret; +} + +static int +zxdh_init_shared_data(void) +{ + const struct rte_memzone *mz; + int ret = 0; + + rte_spinlock_lock(&zxdh_shared_data_lock); + if (zxdh_shared_data == NULL) { + if (rte_eal_process_type() == RTE_PROC_PRIMARY) { + /* Allocate shared memory. */ + mz = rte_memzone_reserve(ZXDH_PMD_SHARED_DATA_MZ, + sizeof(*zxdh_shared_data), SOCKET_ID_ANY, 0); + if (mz == NULL) { + PMD_DRV_LOG(ERR, "Cannot allocate zxdh shared data"); + ret = -rte_errno; + goto error; + } + zxdh_shared_data = mz->addr; + memset(zxdh_shared_data, 0, sizeof(*zxdh_shared_data)); + rte_spinlock_init(&zxdh_shared_data->lock); + } else { /* Lookup allocated shared memory. 
*/ + mz = rte_memzone_lookup(ZXDH_PMD_SHARED_DATA_MZ); + if (mz == NULL) { + PMD_DRV_LOG(ERR, "Cannot attach zxdh shared data"); + ret = -rte_errno; + goto error; + } + zxdh_shared_data = mz->addr; + } + } + +error: + rte_spinlock_unlock(&zxdh_shared_data_lock); + return ret; +} + +static int +zxdh_init_once(void) +{ + int ret = 0; + + if (zxdh_init_shared_data()) + return -1; + + struct zxdh_shared_data *sd = zxdh_shared_data; + rte_spinlock_lock(&sd->lock); + if (rte_eal_process_type() == RTE_PROC_SECONDARY) { + if (!sd->init_done) { + ++sd->secondary_cnt; + sd->init_done = true; + } + goto out; + } + /* RTE_PROC_PRIMARY */ + if (!sd->init_done) + sd->init_done = true; + sd->dev_refcnt++; + +out: + rte_spinlock_unlock(&sd->lock); + return ret; +} + +static int +zxdh_np_init(struct rte_eth_dev *eth_dev) +{ + struct zxdh_hw *hw = eth_dev->data->dev_private; + int ret = 0; + + if (hw->is_pf) { + ret = zxdh_np_dtb_res_init(eth_dev); + if (ret) { + PMD_DRV_LOG(ERR, "np dtb init failed, ret:%d ", ret); + return ret; + } + } + if (zxdh_shared_data != NULL) + zxdh_shared_data->np_init_done = 1; + + PMD_DRV_LOG(DEBUG, "np init ok "); + return 0; +} + static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) { @@ -950,6 +1134,10 @@ zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) hw->is_pf = 1; } + ret = zxdh_init_once(); + if (ret != 0) + goto err_zxdh_init; + ret = zxdh_init_device(eth_dev); if (ret < 0) goto err_zxdh_init; @@ -977,6 +1165,10 @@ zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) if (ret != 0) goto err_zxdh_init; + ret = zxdh_np_init(eth_dev); + if (ret) + goto err_zxdh_init; + ret = zxdh_configure_intr(eth_dev); if (ret != 0) goto err_zxdh_init; diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index 7658cbb461..b1f398b28e 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -35,6 +35,12 @@ #define ZXDH_MBUF_BURST_SZ 64 +#define ZXDH_MAX_BASE_DTB_TABLE_COUNT 30 +#define ZXDH_DTB_TABLE_DUMP_SIZE (32 * (16 + 16 * 1024)) +#define ZXDH_DTB_TABLE_CONF_SIZE (32 * (16 + 16 * 1024)) + +#define ZXDH_MAX_NAME_LEN 32 + union zxdh_virport_num { uint16_t vport; struct { @@ -89,6 +95,30 @@ struct zxdh_hw { uint8_t has_rx_offload; }; +struct zxdh_dtb_shared_data { + uint8_t init_done; + char name[ZXDH_MAX_NAME_LEN]; + uint16_t queueid; + uint16_t vport; + uint32_t vector; + const struct rte_memzone *dtb_table_conf_mz; + const struct rte_memzone *dtb_table_dump_mz; + const struct rte_memzone *dtb_table_bulk_dump_mz[ZXDH_MAX_BASE_DTB_TABLE_COUNT]; + struct rte_eth_dev *bind_device; + uint32_t dev_refcnt; +}; + +/* Shared data between primary and secondary processes. */ +struct zxdh_shared_data { + rte_spinlock_t lock; /* Global spinlock for primary and secondary processes. */ + int32_t init_done; /* Whether primary has done initialization. */ + unsigned int secondary_cnt; /* Number of secondary processes init'd. 
*/ + + int32_t np_init_done; + uint32_t dev_refcnt; + struct zxdh_dtb_shared_data *dtb_data; +}; + uint16_t zxdh_vport_to_vfid(union zxdh_virport_num v); #endif /* ZXDH_ETHDEV_H */ diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c index 53cf972f86..dd7a518a51 100644 --- a/drivers/net/zxdh/zxdh_msg.c +++ b/drivers/net/zxdh/zxdh_msg.c @@ -1035,3 +1035,47 @@ zxdh_bar_irq_recv(uint8_t src, uint8_t dst, uint64_t virt_addr, void *dev) rte_free(recved_msg); return ZXDH_BAR_MSG_OK; } + +int zxdh_get_bar_offset(struct zxdh_bar_offset_params *paras, + struct zxdh_bar_offset_res *res) +{ + uint16_t check_token; + uint16_t sum_res; + int ret; + + if (!paras) + return ZXDH_BAR_MSG_ERR_NULL; + + struct zxdh_offset_get_msg send_msg = { + .pcie_id = paras->pcie_id, + .type = paras->type, + }; + struct zxdh_pci_bar_msg in = { + .payload_addr = &send_msg, + .payload_len = sizeof(send_msg), + .virt_addr = paras->virt_addr, + .src = ZXDH_MSG_CHAN_END_PF, + .dst = ZXDH_MSG_CHAN_END_RISC, + .module_id = ZXDH_BAR_MODULE_OFFSET_GET, + .src_pcieid = paras->pcie_id, + }; + struct zxdh_bar_recv_msg recv_msg = {0}; + struct zxdh_msg_recviver_mem result = { + .recv_buffer = &recv_msg, + .buffer_len = sizeof(recv_msg), + }; + ret = zxdh_bar_chan_sync_msg_send(&in, &result); + if (ret != ZXDH_BAR_MSG_OK) + return -ret; + + check_token = recv_msg.offset_reps.check; + sum_res = zxdh_bar_get_sum((uint8_t *)&send_msg, sizeof(send_msg)); + + if (check_token != sum_res) { + PMD_MSG_LOG(ERR, "expect token: 0x%x, get token: 0x%x", sum_res, check_token); + return ZXDH_BAR_MSG_ERR_REPLY; + } + res->bar_offset = recv_msg.offset_reps.offset; + res->bar_length = recv_msg.offset_reps.length; + return ZXDH_BAR_MSG_OK; +} diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index 530ee406b1..fbc79e8f9d 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -131,6 +131,26 @@ enum ZXDH_TBL_MSG_TYPE { ZXDH_TBL_TYPE_NON, }; +enum pciebar_layout_type { + ZXDH_URI_VQM = 0, + ZXDH_URI_SPINLOCK = 1, + ZXDH_URI_FWCAP = 2, + ZXDH_URI_FWSHR = 3, + ZXDH_URI_DRS_SEC = 4, + ZXDH_URI_RSV = 5, + ZXDH_URI_CTRLCH = 6, + ZXDH_URI_1588 = 7, + ZXDH_URI_QBV = 8, + ZXDH_URI_MACPCS = 9, + ZXDH_URI_RDMA = 10, + ZXDH_URI_MNP = 11, + ZXDH_URI_MSPM = 12, + ZXDH_URI_MVQM = 13, + ZXDH_URI_MDPI = 14, + ZXDH_URI_NP = 15, + ZXDH_URI_MAX, +}; + struct zxdh_msix_para { uint16_t pcie_id; uint16_t vector_risc; @@ -174,6 +194,17 @@ struct zxdh_bar_offset_reps { uint32_t length; } __rte_packed; +struct zxdh_bar_offset_params { + uint64_t virt_addr; /* Bar space control space virtual address */ + uint16_t pcie_id; + uint16_t type; /* Module types corresponding to PCIBAR planning */ +}; + +struct zxdh_bar_offset_res { + uint32_t bar_offset; + uint32_t bar_length; +}; + struct zxdh_bar_recv_msg { uint8_t reps_ok; uint16_t reps_len; @@ -204,9 +235,15 @@ struct zxdh_bar_msg_header { uint16_t dst_pcieid; /* used in PF-->VF */ }; +struct zxdh_offset_get_msg { + uint16_t pcie_id; + uint16_t type; +}; + typedef int (*zxdh_bar_chan_msg_recv_callback)(void *pay_load, uint16_t len, void *reps_buffer, uint16_t *reps_len, void *dev); +int zxdh_get_bar_offset(struct zxdh_bar_offset_params *paras, struct zxdh_bar_offset_res *res); int zxdh_msg_chan_init(void); int zxdh_bar_msg_chan_exit(void); int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev); diff --git a/drivers/net/zxdh/zxdh_np.c b/drivers/net/zxdh/zxdh_np.c new file mode 100644 index 0000000000..e44d7ff501 --- /dev/null +++ b/drivers/net/zxdh/zxdh_np.c @@ -0,0 +1,340 @@ 
+/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#include <stdlib.h> +#include <string.h> + +#include <rte_common.h> +#include <rte_log.h> +#include <rte_debug.h> +#include <rte_malloc.h> + +#include "zxdh_np.h" +#include "zxdh_logs.h" + +static uint64_t g_np_bar_offset; +static ZXDH_DEV_MGR_T g_dev_mgr; +static ZXDH_SDT_MGR_T g_sdt_mgr; +ZXDH_PPU_CLS_BITMAP_T g_ppu_cls_bit_map[ZXDH_DEV_CHANNEL_MAX]; +ZXDH_DTB_MGR_T *p_dpp_dtb_mgr[ZXDH_DEV_CHANNEL_MAX]; + +#define ZXDH_SDT_MGR_PTR_GET() (&g_sdt_mgr) +#define ZXDH_SDT_SOFT_TBL_GET(id) (g_sdt_mgr.sdt_tbl_array[id]) + +#define ZXDH_COMM_CHECK_DEV_POINT(dev_id, point)\ +do {\ + if (NULL == (point)) {\ + PMD_DRV_LOG(ERR, "dev: %d ZXIC %s:%d[Error:POINT NULL] !"\ + "FUNCTION : %s!", (dev_id), __FILE__, __LINE__, __func__);\ + RTE_ASSERT(0);\ + } \ +} while (0) + +#define ZXDH_COMM_CHECK_DEV_RC(dev_id, rc, becall)\ +do {\ + if ((rc) != 0) {\ + PMD_DRV_LOG(ERR, "dev: %d ZXIC %s:%d !"\ + "-- %s Call %s Fail!", (dev_id), __FILE__, __LINE__, __func__, becall);\ + RTE_ASSERT(0);\ + } \ +} while (0) + +#define ZXDH_COMM_CHECK_POINT_NO_ASSERT(point)\ +do {\ + if ((point) == NULL) {\ + PMD_DRV_LOG(ERR, "ZXIC %s:%d[Error:POINT NULL] ! FUNCTION : %s!",\ + __FILE__, __LINE__, __func__);\ + } \ +} while (0) + +#define ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, becall)\ +do {\ + if ((rc) != 0) {\ + PMD_DRV_LOG(ERR, "ZXIC %s:%d !-- %s Call %s"\ + " Fail!", __FILE__, __LINE__, __func__, becall);\ + } \ +} while (0) + +#define ZXDH_COMM_CHECK_RC(rc, becall)\ +do {\ + if ((rc) != 0) {\ + PMD_DRV_LOG(ERR, "ZXIC %s:%d!-- %s Call %s "\ + "Fail!", __FILE__, __LINE__, __func__, becall);\ + RTE_ASSERT(0);\ + } \ +} while (0) + +static uint32_t +zxdh_np_dev_init(void) +{ + if (g_dev_mgr.is_init) { + PMD_DRV_LOG(ERR, "Dev is already initialized."); + return 0; + } + + g_dev_mgr.device_num = 0; + g_dev_mgr.is_init = 1; + + return 0; +} + +static uint32_t +zxdh_np_dev_add(uint32_t dev_id, ZXDH_DEV_TYPE_E dev_type, + ZXDH_DEV_ACCESS_TYPE_E access_type, uint64_t pcie_addr, + uint64_t riscv_addr, uint64_t dma_vir_addr, + uint64_t dma_phy_addr) +{ + ZXDH_DEV_CFG_T *p_dev_info = NULL; + ZXDH_DEV_MGR_T *p_dev_mgr = NULL; + + p_dev_mgr = &g_dev_mgr; + if (!p_dev_mgr->is_init) { + PMD_DRV_LOG(ERR, "ErrorCode[ 0x%x]: Device Manager is not init!!!", + ZXDH_RC_DEV_MGR_NOT_INIT); + return ZXDH_RC_DEV_MGR_NOT_INIT; + } + + if (p_dev_mgr->p_dev_array[dev_id] != NULL) { + /* device is already exist. */ + PMD_DRV_LOG(ERR, "Device is added again!!!"); + p_dev_info = p_dev_mgr->p_dev_array[dev_id]; + } else { + /* device is new. 
*/ + p_dev_info = rte_malloc(NULL, sizeof(ZXDH_DEV_CFG_T), 0); + ZXDH_COMM_CHECK_DEV_POINT(dev_id, p_dev_info); + p_dev_mgr->p_dev_array[dev_id] = p_dev_info; + p_dev_mgr->device_num++; + } + + p_dev_info->device_id = dev_id; + p_dev_info->dev_type = dev_type; + p_dev_info->access_type = access_type; + p_dev_info->pcie_addr = pcie_addr; + p_dev_info->riscv_addr = riscv_addr; + p_dev_info->dma_vir_addr = dma_vir_addr; + p_dev_info->dma_phy_addr = dma_phy_addr; + + return 0; +} + +static uint32_t +zxdh_np_dev_agent_status_set(uint32_t dev_id, uint32_t agent_flag) +{ + ZXDH_DEV_MGR_T *p_dev_mgr = NULL; + ZXDH_DEV_CFG_T *p_dev_info = NULL; + + p_dev_mgr = &g_dev_mgr; + p_dev_info = p_dev_mgr->p_dev_array[dev_id]; + + if (p_dev_info == NULL) + return ZXDH_DEV_TYPE_INVALID; + p_dev_info->agent_flag = agent_flag; + + return 0; +} + +static void +zxdh_np_sdt_mgr_init(void) +{ + if (!g_sdt_mgr.is_init) { + g_sdt_mgr.channel_num = 0; + g_sdt_mgr.is_init = 1; + memset(g_sdt_mgr.sdt_tbl_array, 0, ZXDH_DEV_CHANNEL_MAX * + sizeof(ZXDH_SDT_SOFT_TABLE_T *)); + } +} + +static uint32_t +zxdh_np_sdt_mgr_create(uint32_t dev_id) +{ + ZXDH_SDT_SOFT_TABLE_T *p_sdt_tbl_temp = NULL; + ZXDH_SDT_MGR_T *p_sdt_mgr = NULL; + + p_sdt_mgr = ZXDH_SDT_MGR_PTR_GET(); + + if (ZXDH_SDT_SOFT_TBL_GET(dev_id) == NULL) { + p_sdt_tbl_temp = rte_malloc(NULL, sizeof(ZXDH_SDT_SOFT_TABLE_T), 0); + + p_sdt_tbl_temp->device_id = dev_id; + memset(p_sdt_tbl_temp->sdt_array, 0, ZXDH_DEV_SDT_ID_MAX * sizeof(ZXDH_SDT_ITEM_T)); + + ZXDH_SDT_SOFT_TBL_GET(dev_id) = p_sdt_tbl_temp; + + p_sdt_mgr->channel_num++; + } else { + PMD_DRV_LOG(ERR, "Error: %s for dev[%d]" + "is called repeatedly!", __func__, dev_id); + return 1; + } + + return 0; +} + +static uint32_t +zxdh_np_sdt_init(uint32_t dev_num, uint32_t *dev_id_array) +{ + uint32_t rc; + uint32_t i; + + zxdh_np_sdt_mgr_init(); + + for (i = 0; i < dev_num; i++) { + rc = zxdh_np_sdt_mgr_create(dev_id_array[i]); + ZXDH_COMM_CHECK_RC(rc, "zxdh_sdt_mgr_create"); + } + + return rc; +} + +static void +zxdh_np_ppu_parse_cls_bitmap(uint32_t dev_id, + uint32_t bitmap) +{ + uint32_t cls_id; + uint32_t mem_id; + uint32_t cls_use; + uint32_t instr_mem; + + for (cls_id = 0; cls_id < ZXDH_PPU_CLUSTER_NUM; cls_id++) { + cls_use = (bitmap >> cls_id) & 0x1; + g_ppu_cls_bit_map[dev_id].cls_use[cls_id] = cls_use; + } + + for (mem_id = 0; mem_id < ZXDH_PPU_INSTR_MEM_NUM; mem_id++) { + instr_mem = (bitmap >> (mem_id * 2)) & 0x3; + g_ppu_cls_bit_map[dev_id].instr_mem[mem_id] = ((instr_mem > 0) ? 
1 : 0); + } +} + +static ZXDH_DTB_MGR_T * +zxdh_np_dtb_mgr_get(uint32_t dev_id) +{ + if (dev_id >= ZXDH_DEV_CHANNEL_MAX) + return NULL; + else + return p_dpp_dtb_mgr[dev_id]; +} + +static uint32_t +zxdh_np_dtb_soft_init(uint32_t dev_id) +{ + ZXDH_DTB_MGR_T *p_dtb_mgr = NULL; + + if (dev_id >= ZXDH_DEV_CHANNEL_MAX) + return 1; + + p_dtb_mgr = zxdh_np_dtb_mgr_get(dev_id); + if (p_dtb_mgr == NULL) { + p_dpp_dtb_mgr[dev_id] = rte_zmalloc(NULL, sizeof(ZXDH_DTB_MGR_T), 0); + p_dtb_mgr = zxdh_np_dtb_mgr_get(dev_id); + if (p_dtb_mgr == NULL) + return 1; + } + + return 0; +} + +static uint32_t +zxdh_np_base_soft_init(uint32_t dev_id, ZXDH_SYS_INIT_CTRL_T *p_init_ctrl) +{ + uint32_t dev_id_array[ZXDH_DEV_CHANNEL_MAX] = {0}; + uint32_t rt; + uint32_t access_type; + uint32_t agent_flag; + + rt = zxdh_np_dev_init(); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rt, "zxdh_dev_init"); + + if (p_init_ctrl->flags & ZXDH_INIT_FLAG_ACCESS_TYPE) + access_type = ZXDH_DEV_ACCESS_TYPE_RISCV; + else + access_type = ZXDH_DEV_ACCESS_TYPE_PCIE; + + if (p_init_ctrl->flags & ZXDH_INIT_FLAG_AGENT_FLAG) + agent_flag = ZXDH_DEV_AGENT_ENABLE; + else + agent_flag = ZXDH_DEV_AGENT_DISABLE; + + rt = zxdh_np_dev_add(dev_id, + p_init_ctrl->device_type, + access_type, + p_init_ctrl->pcie_vir_baddr, + p_init_ctrl->riscv_vir_baddr, + p_init_ctrl->dma_vir_baddr, + p_init_ctrl->dma_phy_baddr); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rt, "zxdh_dev_add"); + + rt = zxdh_np_dev_agent_status_set(dev_id, agent_flag); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rt, "zxdh_dev_agent_status_set"); + + dev_id_array[0] = dev_id; + rt = zxdh_np_sdt_init(1, dev_id_array); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rt, "zxdh_sdt_init"); + + zxdh_np_ppu_parse_cls_bitmap(dev_id, ZXDH_PPU_CLS_ALL_START); + + rt = zxdh_np_dtb_soft_init(dev_id); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rt, "zxdh_dtb_soft_init"); + + return rt; +} + +static void +zxdh_np_dev_vport_set(uint32_t dev_id, uint32_t vport) +{ + ZXDH_DEV_MGR_T *p_dev_mgr = NULL; + ZXDH_DEV_CFG_T *p_dev_info = NULL; + + p_dev_mgr = &g_dev_mgr; + p_dev_info = p_dev_mgr->p_dev_array[dev_id]; + p_dev_info->vport = vport; +} + +static void +zxdh_np_dev_agent_addr_set(uint32_t dev_id, uint64_t agent_addr) +{ + ZXDH_DEV_MGR_T *p_dev_mgr = NULL; + ZXDH_DEV_CFG_T *p_dev_info = NULL; + + p_dev_mgr = &g_dev_mgr; + p_dev_info = p_dev_mgr->p_dev_array[dev_id]; + p_dev_info->agent_addr = agent_addr; +} + +static uint64_t +zxdh_np_addr_calc(uint64_t pcie_vir_baddr, uint32_t bar_offset) +{ + uint64_t np_addr; + + np_addr = ((pcie_vir_baddr + bar_offset) > ZXDH_PCIE_NP_MEM_SIZE) + ? 
(pcie_vir_baddr + bar_offset - ZXDH_PCIE_NP_MEM_SIZE) : 0; + g_np_bar_offset = bar_offset; + + return np_addr; +} + +int +zxdh_np_host_init(uint32_t dev_id, + ZXDH_DEV_INIT_CTRL_T *p_dev_init_ctrl) +{ + ZXDH_SYS_INIT_CTRL_T sys_init_ctrl = {0}; + uint32_t rc; + uint64_t agent_addr; + + ZXDH_COMM_CHECK_POINT_NO_ASSERT(p_dev_init_ctrl); + + sys_init_ctrl.flags = (ZXDH_DEV_ACCESS_TYPE_PCIE << 0) | (ZXDH_DEV_AGENT_ENABLE << 10); + sys_init_ctrl.pcie_vir_baddr = zxdh_np_addr_calc(p_dev_init_ctrl->pcie_vir_addr, + p_dev_init_ctrl->np_bar_offset); + sys_init_ctrl.device_type = ZXDH_DEV_TYPE_CHIP; + rc = zxdh_np_base_soft_init(dev_id, &sys_init_ctrl); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_base_soft_init"); + + zxdh_np_dev_vport_set(dev_id, p_dev_init_ctrl->vport); + + agent_addr = ZXDH_PCIE_AGENT_ADDR_OFFSET + p_dev_init_ctrl->pcie_vir_addr; + zxdh_np_dev_agent_addr_set(dev_id, agent_addr); + + return 0; +} diff --git a/drivers/net/zxdh/zxdh_np.h b/drivers/net/zxdh/zxdh_np.h new file mode 100644 index 0000000000..573eafe796 --- /dev/null +++ b/drivers/net/zxdh/zxdh_np.h @@ -0,0 +1,198 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2023 ZTE Corporation + */ + +#ifndef ZXDH_NP_H +#define ZXDH_NP_H + +#include <stdint.h> + +#define ZXDH_PORT_NAME_MAX (32) +#define ZXDH_DEV_CHANNEL_MAX (2) +#define ZXDH_DEV_SDT_ID_MAX (256U) +/*DTB*/ +#define ZXDH_DTB_QUEUE_ITEM_NUM_MAX (32) +#define ZXDH_DTB_QUEUE_NUM_MAX (128) + +#define ZXDH_PPU_CLS_ALL_START (0x3F) +#define ZXDH_PPU_CLUSTER_NUM (6) +#define ZXDH_PPU_INSTR_MEM_NUM (3) +#define ZXDH_SDT_CFG_LEN (2) + +#define ZXDH_RC_DEV_BASE (0x600) +#define ZXDH_RC_DEV_PARA_INVALID (ZXDH_RC_DEV_BASE | 0x0) +#define ZXDH_RC_DEV_RANGE_INVALID (ZXDH_RC_DEV_BASE | 0x1) +#define ZXDH_RC_DEV_CALL_FUNC_FAIL (ZXDH_RC_DEV_BASE | 0x2) +#define ZXDH_RC_DEV_TYPE_INVALID (ZXDH_RC_DEV_BASE | 0x3) +#define ZXDH_RC_DEV_CONNECT_FAIL (ZXDH_RC_DEV_BASE | 0x4) +#define ZXDH_RC_DEV_MSG_INVALID (ZXDH_RC_DEV_BASE | 0x5) +#define ZXDH_RC_DEV_NOT_EXIST (ZXDH_RC_DEV_BASE | 0x6) +#define ZXDH_RC_DEV_MGR_NOT_INIT (ZXDH_RC_DEV_BASE | 0x7) +#define ZXDH_RC_DEV_CFG_NOT_INIT (ZXDH_RC_DEV_BASE | 0x8) + +#define ZXDH_SYS_VF_NP_BASE_OFFSET 0 +#define ZXDH_PCIE_DTB4K_ADDR_OFFSET (0x6000) +#define ZXDH_PCIE_NP_MEM_SIZE (0x2000000) +#define ZXDH_PCIE_AGENT_ADDR_OFFSET (0x2000) + +#define ZXDH_INIT_FLAG_ACCESS_TYPE (1 << 0) +#define ZXDH_INIT_FLAG_SERDES_DOWN_TP (1 << 1) +#define ZXDH_INIT_FLAG_DDR_BACKDOOR (1 << 2) +#define ZXDH_INIT_FLAG_SA_MODE (1 << 3) +#define ZXDH_INIT_FLAG_SA_MESH (1 << 4) +#define ZXDH_INIT_FLAG_SA_SERDES_MODE (1 << 5) +#define ZXDH_INIT_FLAG_INT_DEST_MODE (1 << 6) +#define ZXDH_INIT_FLAG_LIF0_MODE (1 << 7) +#define ZXDH_INIT_FLAG_DMA_ENABLE (1 << 8) +#define ZXDH_INIT_FLAG_TM_IMEM_FLAG (1 << 9) +#define ZXDH_INIT_FLAG_AGENT_FLAG (1 << 10) + +typedef enum zxdh_module_init_e { + ZXDH_MODULE_INIT_NPPU = 0, + ZXDH_MODULE_INIT_PPU, + ZXDH_MODULE_INIT_SE, + ZXDH_MODULE_INIT_ETM, + ZXDH_MODULE_INIT_DLB, + ZXDH_MODULE_INIT_TRPG, + ZXDH_MODULE_INIT_TSN, + ZXDH_MODULE_INIT_MAX +} ZXDH_MODULE_INIT_E; + +typedef enum zxdh_dev_type_e { + ZXDH_DEV_TYPE_SIM = 0, + ZXDH_DEV_TYPE_VCS = 1, + ZXDH_DEV_TYPE_CHIP = 2, + ZXDH_DEV_TYPE_FPGA = 3, + ZXDH_DEV_TYPE_PCIE_ACC = 4, + ZXDH_DEV_TYPE_INVALID, +} ZXDH_DEV_TYPE_E; + +typedef enum zxdh_dev_access_type_e { + ZXDH_DEV_ACCESS_TYPE_PCIE = 0, + ZXDH_DEV_ACCESS_TYPE_RISCV = 1, + ZXDH_DEV_ACCESS_TYPE_INVALID, +} ZXDH_DEV_ACCESS_TYPE_E; + +typedef enum zxdh_dev_agent_flag_e { + ZXDH_DEV_AGENT_DISABLE = 0, + ZXDH_DEV_AGENT_ENABLE = 1, + 
ZXDH_DEV_AGENT_INVALID, +} ZXDH_DEV_AGENT_FLAG_E; + +typedef struct zxdh_dtb_tab_up_user_addr_t { + uint32_t user_flag; + uint64_t phy_addr; + uint64_t vir_addr; +} ZXDH_DTB_TAB_UP_USER_ADDR_T; + +typedef struct zxdh_dtb_tab_up_info_t { + uint64_t start_phy_addr; + uint64_t start_vir_addr; + uint32_t item_size; + uint32_t wr_index; + uint32_t rd_index; + uint32_t data_len[ZXDH_DTB_QUEUE_ITEM_NUM_MAX]; + ZXDH_DTB_TAB_UP_USER_ADDR_T user_addr[ZXDH_DTB_QUEUE_ITEM_NUM_MAX]; +} ZXDH_DTB_TAB_UP_INFO_T; + +typedef struct zxdh_dtb_tab_down_info_t { + uint64_t start_phy_addr; + uint64_t start_vir_addr; + uint32_t item_size; + uint32_t wr_index; + uint32_t rd_index; +} ZXDH_DTB_TAB_DOWN_INFO_T; + +typedef struct zxdh_dtb_queue_info_t { + uint32_t init_flag; + uint32_t vport; + uint32_t vector; + ZXDH_DTB_TAB_UP_INFO_T tab_up; + ZXDH_DTB_TAB_DOWN_INFO_T tab_down; +} ZXDH_DTB_QUEUE_INFO_T; + +typedef struct zxdh_dtb_mgr_t { + ZXDH_DTB_QUEUE_INFO_T queue_info[ZXDH_DTB_QUEUE_NUM_MAX]; +} ZXDH_DTB_MGR_T; + +typedef struct zxdh_ppu_cls_bitmap_t { + uint32_t cls_use[ZXDH_PPU_CLUSTER_NUM]; + uint32_t instr_mem[ZXDH_PPU_INSTR_MEM_NUM]; +} ZXDH_PPU_CLS_BITMAP_T; + +typedef struct dpp_sdt_item_t { + uint32_t valid; + uint32_t table_cfg[ZXDH_SDT_CFG_LEN]; +} ZXDH_SDT_ITEM_T; + +typedef struct dpp_sdt_soft_table_t { + uint32_t device_id; + ZXDH_SDT_ITEM_T sdt_array[ZXDH_DEV_SDT_ID_MAX]; +} ZXDH_SDT_SOFT_TABLE_T; + +typedef struct zxdh_sys_init_ctrl_t { + ZXDH_DEV_TYPE_E device_type; + uint32_t flags; + uint32_t sa_id; + uint32_t case_num; + uint32_t lif0_port_type; + uint32_t lif1_port_type; + uint64_t pcie_vir_baddr; + uint64_t riscv_vir_baddr; + uint64_t dma_vir_baddr; + uint64_t dma_phy_baddr; +} ZXDH_SYS_INIT_CTRL_T; + +typedef struct dpp_dev_cfg_t { + uint32_t device_id; + ZXDH_DEV_TYPE_E dev_type; + uint32_t chip_ver; + uint32_t access_type; + uint32_t agent_flag; + uint32_t vport; + uint64_t pcie_addr; + uint64_t riscv_addr; + uint64_t dma_vir_addr; + uint64_t dma_phy_addr; + uint64_t agent_addr; + uint32_t init_flags[ZXDH_MODULE_INIT_MAX]; +} ZXDH_DEV_CFG_T; + +typedef struct zxdh_dev_mngr_t { + uint32_t device_num; + uint32_t is_init; + ZXDH_DEV_CFG_T *p_dev_array[ZXDH_DEV_CHANNEL_MAX]; +} ZXDH_DEV_MGR_T; + +typedef struct zxdh_dtb_addr_info_t { + uint32_t sdt_no; + uint32_t size; + uint32_t phy_addr; + uint32_t vir_addr; +} ZXDH_DTB_ADDR_INFO_T; + +typedef struct zxdh_dev_init_ctrl_t { + uint32_t vport; + char port_name[ZXDH_PORT_NAME_MAX]; + uint32_t vector; + uint32_t queue_id; + uint32_t np_bar_offset; + uint32_t np_bar_len; + uint32_t pcie_vir_addr; + uint32_t down_phy_addr; + uint32_t down_vir_addr; + uint32_t dump_phy_addr; + uint32_t dump_vir_addr; + uint32_t dump_sdt_num; + ZXDH_DTB_ADDR_INFO_T dump_addr_info[]; +} ZXDH_DEV_INIT_CTRL_T; + +typedef struct zxdh_sdt_mgr_t { + uint32_t channel_num; + uint32_t is_init; + ZXDH_SDT_SOFT_TABLE_T *sdt_tbl_array[ZXDH_DEV_CHANNEL_MAX]; +} ZXDH_SDT_MGR_T; + +int zxdh_np_host_init(uint32_t dev_id, ZXDH_DEV_INIT_CTRL_T *p_dev_init_ctrl); + +#endif /* ZXDH_NP_H */ diff --git a/drivers/net/zxdh/zxdh_pci.c b/drivers/net/zxdh/zxdh_pci.c index 06d3f92b20..250e67d560 100644 --- a/drivers/net/zxdh/zxdh_pci.c +++ b/drivers/net/zxdh/zxdh_pci.c @@ -159,7 +159,7 @@ zxdh_setup_queue(struct zxdh_hw *hw, struct zxdh_virtqueue *vq) desc_addr = vq->vq_ring_mem; avail_addr = desc_addr + vq->vq_nentries * sizeof(struct zxdh_vring_desc); - if (vtpci_packed_queue(vq->hw)) { + if (zxdh_pci_packed_queue(vq->hw)) { used_addr = RTE_ALIGN_CEIL((avail_addr + sizeof(struct 
zxdh_vring_packed_desc_event)), ZXDH_PCI_VRING_ALIGN); diff --git a/drivers/net/zxdh/zxdh_pci.h b/drivers/net/zxdh/zxdh_pci.h index ed6fd89742..d6487a574f 100644 --- a/drivers/net/zxdh/zxdh_pci.h +++ b/drivers/net/zxdh/zxdh_pci.h @@ -114,15 +114,15 @@ struct zxdh_pci_common_cfg { }; static inline int32_t -vtpci_with_feature(struct zxdh_hw *hw, uint64_t bit) +zxdh_pci_with_feature(struct zxdh_hw *hw, uint64_t bit) { return (hw->guest_features & (1ULL << bit)) != 0; } static inline int32_t -vtpci_packed_queue(struct zxdh_hw *hw) +zxdh_pci_packed_queue(struct zxdh_hw *hw) { - return vtpci_with_feature(hw, ZXDH_F_RING_PACKED); + return zxdh_pci_with_feature(hw, ZXDH_F_RING_PACKED); } struct zxdh_pci_ops { diff --git a/drivers/net/zxdh/zxdh_queue.c b/drivers/net/zxdh/zxdh_queue.c index 462a88b23c..b4ef90ea36 100644 --- a/drivers/net/zxdh/zxdh_queue.c +++ b/drivers/net/zxdh/zxdh_queue.c @@ -13,7 +13,7 @@ #include "zxdh_msg.h" struct rte_mbuf * -zxdh_virtqueue_detach_unused(struct zxdh_virtqueue *vq) +zxdh_queue_detach_unused(struct zxdh_virtqueue *vq) { struct rte_mbuf *cookie = NULL; int32_t idx = 0; diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h index 1088bf08fc..1304d5e4ea 100644 --- a/drivers/net/zxdh/zxdh_queue.h +++ b/drivers/net/zxdh/zxdh_queue.h @@ -206,11 +206,11 @@ struct zxdh_tx_region { }; static inline size_t -vring_size(struct zxdh_hw *hw, uint32_t num, unsigned long align) +zxdh_vring_size(struct zxdh_hw *hw, uint32_t num, unsigned long align) { size_t size; - if (vtpci_packed_queue(hw)) { + if (zxdh_pci_packed_queue(hw)) { size = num * sizeof(struct zxdh_vring_packed_desc); size += sizeof(struct zxdh_vring_packed_desc_event); size = RTE_ALIGN_CEIL(size, align); @@ -226,7 +226,7 @@ vring_size(struct zxdh_hw *hw, uint32_t num, unsigned long align) } static inline void -vring_init_packed(struct zxdh_vring_packed *vr, uint8_t *p, +zxdh_vring_init_packed(struct zxdh_vring_packed *vr, uint8_t *p, unsigned long align, uint32_t num) { vr->num = num; @@ -238,7 +238,7 @@ vring_init_packed(struct zxdh_vring_packed *vr, uint8_t *p, } static inline void -vring_desc_init_packed(struct zxdh_virtqueue *vq, int32_t n) +zxdh_vring_desc_init_packed(struct zxdh_virtqueue *vq, int32_t n) { int32_t i = 0; @@ -251,7 +251,7 @@ vring_desc_init_packed(struct zxdh_virtqueue *vq, int32_t n) } static inline void -vring_desc_init_indirect_packed(struct zxdh_vring_packed_desc *dp, int32_t n) +zxdh_vring_desc_init_indirect_packed(struct zxdh_vring_packed_desc *dp, int32_t n) { int32_t i = 0; @@ -262,7 +262,7 @@ vring_desc_init_indirect_packed(struct zxdh_vring_packed_desc *dp, int32_t n) } static inline void -virtqueue_disable_intr(struct zxdh_virtqueue *vq) +zxdh_queue_disable_intr(struct zxdh_virtqueue *vq) { if (vq->vq_packed.event_flags_shadow != ZXDH_RING_EVENT_FLAGS_DISABLE) { vq->vq_packed.event_flags_shadow = ZXDH_RING_EVENT_FLAGS_DISABLE; @@ -270,7 +270,7 @@ virtqueue_disable_intr(struct zxdh_virtqueue *vq) } } -struct rte_mbuf *zxdh_virtqueue_detach_unused(struct zxdh_virtqueue *vq); +struct rte_mbuf *zxdh_queue_detach_unused(struct zxdh_virtqueue *vq); int32_t zxdh_free_queues(struct rte_eth_dev *dev); int32_t zxdh_get_queue_type(uint16_t vtpci_queue_idx); -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 79562 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
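For reference, the caller side of zxdh_np_host_init() is zxdh_np_dtb_res_init() in the zxdh_ethdev.c hunk above. Stripped of the memzone setup for the down/dump channels, the port name/vector plumbing and most error handling, the flow reduces to the sketch below; dev_id 0 mirrors the single-device usage in the series, and the BAR offset and length come from zxdh_get_bar_offset():

#include <errno.h>
#include <rte_malloc.h>
#include "zxdh_ethdev.h"
#include "zxdh_np.h"

static int
np_host_init_sketch(struct zxdh_hw *hw, uint32_t np_bar_offset,
		uint32_t np_bar_len, uint16_t *dtb_queue)
{
	ZXDH_DEV_INIT_CTRL_T *ctrl;
	int ret;

	/* Control block with room for trailing dump address entries. */
	ctrl = rte_zmalloc(NULL, sizeof(*ctrl) +
			sizeof(ZXDH_DTB_ADDR_INFO_T) * 256, 0);
	if (ctrl == NULL)
		return -ENOMEM;

	ctrl->queue_id = 0xff;			/* request "any DTB queue" */
	ctrl->vport = hw->vport.vport;
	ctrl->pcie_vir_addr = (uint32_t)hw->bar_addr[0];
	ctrl->np_bar_offset = np_bar_offset;
	ctrl->np_bar_len = np_bar_len;

	ret = zxdh_np_host_init(0, ctrl);
	if (ret == 0)
		*dtb_queue = ctrl->queue_id;	/* g_dtb_data.queueid in the patch */
	rte_free(ctrl);
	return ret;
}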
* [PATCH v4 02/15] net/zxdh: zxdh np uninit implementation 2024-12-18 9:25 ` [PATCH v4 00/15] net/zxdh: updated net zxdh driver Junlong Wang 2024-12-18 9:25 ` [PATCH v4 01/15] net/zxdh: zxdh np init implementation Junlong Wang @ 2024-12-18 9:25 ` Junlong Wang 2024-12-18 9:25 ` [PATCH v4 03/15] net/zxdh: port tables init implementations Junlong Wang ` (12 subsequent siblings) 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-18 9:25 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 19520 bytes --] Release network processor (NP) resources on the host side. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/zxdh_ethdev.c | 48 ++++ drivers/net/zxdh/zxdh_np.c | 470 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_np.h | 107 ++++++++ 3 files changed, 625 insertions(+) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index b8f4415e00..4e114d95da 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -841,6 +841,51 @@ zxdh_dev_configure(struct rte_eth_dev *dev) return ret; } +static void +zxdh_np_dtb_data_res_free(struct zxdh_hw *hw) +{ + struct rte_eth_dev *dev = hw->eth_dev; + int ret; + int i; + + if (g_dtb_data.init_done && g_dtb_data.bind_device == dev) { + ret = zxdh_np_online_uninit(0, dev->data->name, g_dtb_data.queueid); + if (ret) + PMD_DRV_LOG(ERR, "%s zxdh_np_online_uninit failed", dev->data->name); + + if (g_dtb_data.dtb_table_conf_mz) { + rte_memzone_free(g_dtb_data.dtb_table_conf_mz); + g_dtb_data.dtb_table_conf_mz = NULL; + } + + if (g_dtb_data.dtb_table_dump_mz) { + rte_memzone_free(g_dtb_data.dtb_table_dump_mz); + g_dtb_data.dtb_table_dump_mz = NULL; + } + + for (i = 0; i < ZXDH_MAX_BASE_DTB_TABLE_COUNT; i++) { + if (g_dtb_data.dtb_table_bulk_dump_mz[i]) { + rte_memzone_free(g_dtb_data.dtb_table_bulk_dump_mz[i]); + g_dtb_data.dtb_table_bulk_dump_mz[i] = NULL; + } + } + g_dtb_data.init_done = 0; + g_dtb_data.bind_device = NULL; + } + if (zxdh_shared_data != NULL) + zxdh_shared_data->np_init_done = 0; +} + +static void +zxdh_np_uninit(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + if (!g_dtb_data.init_done && !g_dtb_data.dev_refcnt) + return; + + if (--g_dtb_data.dev_refcnt == 0) + zxdh_np_dtb_data_res_free(hw); +} + static int zxdh_dev_close(struct rte_eth_dev *dev) { @@ -848,6 +893,7 @@ zxdh_dev_close(struct rte_eth_dev *dev) int ret = 0; zxdh_intr_release(dev); + zxdh_np_uninit(dev); zxdh_pci_reset(hw); zxdh_dev_free_mbufs(dev); @@ -1010,6 +1056,7 @@ zxdh_np_dtb_res_init(struct rte_eth_dev *dev) return 0; free_res: + zxdh_np_dtb_data_res_free(hw); rte_free(dpp_ctrl); return ret; } @@ -1177,6 +1224,7 @@ zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) err_zxdh_init: zxdh_intr_release(eth_dev); + zxdh_np_uninit(eth_dev); zxdh_bar_msg_chan_exit(); rte_free(eth_dev->data->mac_addrs); eth_dev->data->mac_addrs = NULL; diff --git a/drivers/net/zxdh/zxdh_np.c b/drivers/net/zxdh/zxdh_np.c index e44d7ff501..28728b0c68 100644 --- a/drivers/net/zxdh/zxdh_np.c +++ b/drivers/net/zxdh/zxdh_np.c @@ -18,10 +18,21 @@ static ZXDH_DEV_MGR_T g_dev_mgr; static ZXDH_SDT_MGR_T g_sdt_mgr; ZXDH_PPU_CLS_BITMAP_T g_ppu_cls_bit_map[ZXDH_DEV_CHANNEL_MAX]; ZXDH_DTB_MGR_T *p_dpp_dtb_mgr[ZXDH_DEV_CHANNEL_MAX]; +ZXDH_RISCV_DTB_MGR *p_riscv_dtb_queue_mgr[ZXDH_DEV_CHANNEL_MAX]; +ZXDH_TLB_MGR_T *g_p_dpp_tlb_mgr[ZXDH_DEV_CHANNEL_MAX]; +ZXDH_REG_T g_dpp_reg_info[4]; #define ZXDH_SDT_MGR_PTR_GET() (&g_sdt_mgr) #define ZXDH_SDT_SOFT_TBL_GET(id) (g_sdt_mgr.sdt_tbl_array[id])
+#define ZXDH_COMM_MASK_BIT(_bitnum_)\ + (0x1U << (_bitnum_)) + +#define ZXDH_COMM_GET_BIT_MASK(_inttype_, _bitqnt_)\ + ((_inttype_)((_bitqnt_) < 32 ? ((0x1U << (_bitqnt_)) - 1) : 0xFFFFFFFFU)) + +#define ZXDH_REG_DATA_MAX (128) + #define ZXDH_COMM_CHECK_DEV_POINT(dev_id, point)\ do {\ if (NULL == (point)) {\ @@ -338,3 +349,462 @@ zxdh_np_host_init(uint32_t dev_id, return 0; } + +static ZXDH_RISCV_DTB_MGR * +zxdh_np_riscv_dtb_queue_mgr_get(uint32_t dev_id) +{ + if (dev_id >= ZXDH_DEV_CHANNEL_MAX) + return NULL; + else + return p_riscv_dtb_queue_mgr[dev_id]; +} + +static uint32_t +zxdh_np_riscv_dtb_mgr_queue_info_delete(uint32_t dev_id, uint32_t queue_id) +{ + ZXDH_RISCV_DTB_MGR *p_riscv_dtb_mgr = NULL; + + p_riscv_dtb_mgr = zxdh_np_riscv_dtb_queue_mgr_get(dev_id); + if (p_riscv_dtb_mgr == NULL) + return 1; + + p_riscv_dtb_mgr->queue_alloc_count--; + p_riscv_dtb_mgr->queue_user_info[queue_id].alloc_flag = 0; + p_riscv_dtb_mgr->queue_user_info[queue_id].queue_id = 0xFF; + p_riscv_dtb_mgr->queue_user_info[queue_id].vport = 0; + memset(p_riscv_dtb_mgr->queue_user_info[queue_id].user_name, 0, ZXDH_PORT_NAME_MAX); + + return 0; +} + +static uint32_t +zxdh_np_dev_get_dev_type(uint32_t dev_id) +{ + ZXDH_DEV_MGR_T *p_dev_mgr = NULL; + ZXDH_DEV_CFG_T *p_dev_info = NULL; + + p_dev_mgr = &g_dev_mgr; + p_dev_info = p_dev_mgr->p_dev_array[dev_id]; + + if (p_dev_info == NULL) + return 0xffff; + + return p_dev_info->dev_type; +} + +static uint32_t +zxdh_np_comm_read_bits(uint8_t *p_base, uint32_t base_size_bit, + uint32_t *p_data, uint32_t start_bit, uint32_t end_bit) +{ + uint32_t start_byte_index; + uint32_t end_byte_index; + uint32_t byte_num; + uint32_t buffer_size; + uint32_t len; + + if (0 != (base_size_bit % 8)) + return 1; + + if (start_bit > end_bit) + return 1; + + if (base_size_bit < end_bit) + return 1; + + len = end_bit - start_bit + 1; + buffer_size = base_size_bit / 8; + while (0 != (buffer_size & (buffer_size - 1))) + buffer_size += 1; + + *p_data = 0; + end_byte_index = (end_bit >> 3); + start_byte_index = (start_bit >> 3); + + if (start_byte_index == end_byte_index) { + *p_data = (uint32_t)(((p_base[start_byte_index] >> (7U - (end_bit & 7))) + & (0xff >> (8U - len))) & 0xff); + return 0; + } + + if (start_bit & 7) { + *p_data = (p_base[start_byte_index] & (0xff >> (start_bit & 7))) & UINT8_MAX; + start_byte_index++; + } + + for (byte_num = start_byte_index; byte_num < end_byte_index; byte_num++) { + *p_data <<= 8; + *p_data += p_base[byte_num]; + } + + *p_data <<= 1 + (end_bit & 7); + *p_data += ((p_base[byte_num & (buffer_size - 1)] & (0xff << (7 - (end_bit & 7)))) >> + (7 - (end_bit & 7))) & 0xff; + + return 0; +} + +static uint32_t +zxdh_np_comm_read_bits_ex(uint8_t *p_base, uint32_t base_size_bit, + uint32_t *p_data, uint32_t msb_start_pos, uint32_t len) +{ + uint32_t rtn; + + rtn = zxdh_np_comm_read_bits(p_base, + base_size_bit, + p_data, + (base_size_bit - 1 - msb_start_pos), + (base_size_bit - 1 - msb_start_pos + len - 1)); + return rtn; +} + +static uint32_t +zxdh_np_reg_read(uint32_t dev_id, uint32_t reg_no, + uint32_t m_offset, uint32_t n_offset, void *p_data) +{ + uint32_t p_buff[ZXDH_REG_DATA_MAX] = {0}; + ZXDH_REG_T *p_reg_info = NULL; + ZXDH_FIELD_T *p_field_info = NULL; + uint32_t rc = 0; + uint32_t i; + + if (reg_no < 4) { + p_reg_info = &g_dpp_reg_info[reg_no]; + p_field_info = p_reg_info->p_fields; + for (i = 0; i < p_reg_info->field_num; i++) { + rc = zxdh_np_comm_read_bits_ex((uint8_t *)p_buff, + p_reg_info->width * 8, + (uint32_t *)p_data + i, + p_field_info[i].msb_pos, + p_field_info[i].len); +
ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxic_comm_read_bits_ex"); + PMD_DRV_LOG(ERR, "dev_id %d(%d)(%d)is ok!", dev_id, m_offset, n_offset); + } + } + return rc; +} + +static uint32_t +zxdh_np_dtb_queue_vm_info_get(uint32_t dev_id, + uint32_t queue_id, + ZXDH_DTB_QUEUE_VM_INFO_T *p_vm_info) +{ + ZXDH_DTB4K_DTB_ENQ_CFG_EPID_V_FUNC_NUM_0_127_T vm_info = {0}; + uint32_t rc; + + rc = zxdh_np_reg_read(dev_id, ZXDH_DTB_CFG_EPID_V_FUNC_NUM, + 0, queue_id, &vm_info); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rc, "zxdh_reg_read"); + + p_vm_info->dbi_en = vm_info.dbi_en; + p_vm_info->queue_en = vm_info.queue_en; + p_vm_info->epid = vm_info.cfg_epid; + p_vm_info->vector = vm_info.cfg_vector; + p_vm_info->vfunc_num = vm_info.cfg_vfunc_num; + p_vm_info->func_num = vm_info.cfg_func_num; + p_vm_info->vfunc_active = vm_info.cfg_vfunc_active; + + return 0; +} + +static uint32_t +zxdh_np_comm_write_bits(uint8_t *p_base, uint32_t base_size_bit, + uint32_t data, uint32_t start_bit, uint32_t end_bit) +{ + uint32_t start_byte_index; + uint32_t end_byte_index; + uint8_t mask_value; + uint32_t byte_num; + uint32_t buffer_size; + + if (0 != (base_size_bit % 8)) + return 1; + + if (start_bit > end_bit) + return 1; + + if (base_size_bit < end_bit) + return 1; + + buffer_size = base_size_bit / 8; + + while (0 != (buffer_size & (buffer_size - 1))) + buffer_size += 1; + + end_byte_index = (end_bit >> 3); + start_byte_index = (start_bit >> 3); + + if (start_byte_index == end_byte_index) { + mask_value = ((0xFE << (7 - (start_bit & 7))) & 0xff); + mask_value |= (((1 << (7 - (end_bit & 7))) - 1) & 0xff); + p_base[end_byte_index] &= mask_value; + p_base[end_byte_index] |= (((data << (7 - (end_bit & 7)))) & 0xff); + return 0; + } + + if (7 != (end_bit & 7)) { + mask_value = ((0x7f >> (end_bit & 7)) & 0xff); + p_base[end_byte_index] &= mask_value; + p_base[end_byte_index] |= ((data << (7 - (end_bit & 7))) & 0xff); + end_byte_index--; + data >>= 1 + (end_bit & 7); + } + + for (byte_num = end_byte_index; byte_num > start_byte_index; byte_num--) { + p_base[byte_num & (buffer_size - 1)] = data & 0xff; + data >>= 8; + } + + mask_value = ((0xFE << (7 - (start_bit & 7))) & 0xff); + p_base[byte_num] &= mask_value; + p_base[byte_num] |= data; + + return 0; +} + +static uint32_t +zxdh_np_comm_write_bits_ex(uint8_t *p_base, + uint32_t base_size_bit, + uint32_t data, + uint32_t msb_start_pos, + uint32_t len) +{ + uint32_t rtn; + + rtn = zxdh_np_comm_write_bits(p_base, + base_size_bit, + data, + (base_size_bit - 1 - msb_start_pos), + (base_size_bit - 1 - msb_start_pos + len - 1)); + + return rtn; +} + +static uint32_t +zxdh_np_reg_write(uint32_t dev_id, uint32_t reg_no, + uint32_t m_offset, uint32_t n_offset, void *p_data) +{ + uint32_t p_buff[ZXDH_REG_DATA_MAX] = {0}; + ZXDH_REG_T *p_reg_info = NULL; + ZXDH_FIELD_T *p_field_info = NULL; + uint32_t temp_data; + uint32_t rc; + uint32_t i; + + if (reg_no < 4) { + p_reg_info = &g_dpp_reg_info[reg_no]; + p_field_info = p_reg_info->p_fields; + + for (i = 0; i < p_reg_info->field_num; i++) { + if (p_field_info[i].len <= 32) { + temp_data = *((uint32_t *)p_data + i); + rc = zxdh_np_comm_write_bits_ex((uint8_t *)p_buff, + p_reg_info->width * 8, + temp_data, + p_field_info[i].msb_pos, + p_field_info[i].len); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_comm_write_bits_ex"); + PMD_DRV_LOG(ERR, "dev_id %d(%d)(%d)is ok!", + dev_id, m_offset, n_offset); + } + } + } + + return 0; +} + +static uint32_t +zxdh_np_dtb_queue_vm_info_set(uint32_t dev_id, + uint32_t queue_id, + ZXDH_DTB_QUEUE_VM_INFO_T *p_vm_info) +{ + 
uint32_t rc = 0; + ZXDH_DTB4K_DTB_ENQ_CFG_EPID_V_FUNC_NUM_0_127_T vm_info = {0}; + + vm_info.dbi_en = p_vm_info->dbi_en; + vm_info.queue_en = p_vm_info->queue_en; + vm_info.cfg_epid = p_vm_info->epid; + vm_info.cfg_vector = p_vm_info->vector; + vm_info.cfg_vfunc_num = p_vm_info->vfunc_num; + vm_info.cfg_func_num = p_vm_info->func_num; + vm_info.cfg_vfunc_active = p_vm_info->vfunc_active; + + rc = zxdh_np_reg_write(dev_id, ZXDH_DTB_CFG_EPID_V_FUNC_NUM, + 0, queue_id, &vm_info); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rc, "zxdh_reg_write"); + + return rc; +} + +static uint32_t +zxdh_np_dtb_queue_enable_set(uint32_t dev_id, + uint32_t queue_id, + uint32_t enable) +{ + ZXDH_DTB_QUEUE_VM_INFO_T vm_info = {0}; + uint32_t rc; + + rc = zxdh_np_dtb_queue_vm_info_get(dev_id, queue_id, &vm_info); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_dtb_queue_vm_info_get"); + + vm_info.queue_en = enable; + rc = zxdh_np_dtb_queue_vm_info_set(dev_id, queue_id, &vm_info); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_dtb_queue_vm_info_set"); + + return rc; +} + +static uint32_t +zxdh_np_riscv_dpp_dtb_queue_id_release(uint32_t dev_id, + char name[ZXDH_PORT_NAME_MAX], uint32_t queue_id) +{ + ZXDH_RISCV_DTB_MGR *p_riscv_dtb_mgr = NULL; + + p_riscv_dtb_mgr = zxdh_np_riscv_dtb_queue_mgr_get(dev_id); + if (p_riscv_dtb_mgr == NULL) + return 1; + + if (zxdh_np_dev_get_dev_type(dev_id) == ZXDH_DEV_TYPE_SIM) + return 0; + + if (p_riscv_dtb_mgr->queue_user_info[queue_id].alloc_flag != 1) { + PMD_DRV_LOG(ERR, "queue %d not alloc!", queue_id); + return 2; + } + + if (strcmp(p_riscv_dtb_mgr->queue_user_info[queue_id].user_name, name) != 0) { + PMD_DRV_LOG(ERR, "queue %d name %s error!", queue_id, name); + return 3; + } + zxdh_np_dtb_queue_enable_set(dev_id, queue_id, 0); + zxdh_np_riscv_dtb_mgr_queue_info_delete(dev_id, queue_id); + + return 0; +} + +static uint32_t +zxdh_np_dtb_queue_unused_item_num_get(uint32_t dev_id, + uint32_t queue_id, + uint32_t *p_item_num) +{ + uint32_t rc; + + if (zxdh_np_dev_get_dev_type(dev_id) == ZXDH_DEV_TYPE_SIM) { + *p_item_num = 32; + return 0; + } + + rc = zxdh_np_reg_read(dev_id, ZXDH_DTB_INFO_QUEUE_BUF_SPACE, + 0, queue_id, p_item_num); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rc, "dpp_reg_read"); + return rc; +} + +static uint32_t +zxdh_np_dtb_queue_id_free(uint32_t dev_id, + uint32_t queue_id) +{ + uint32_t item_num = 0; + ZXDH_DTB_MGR_T *p_dtb_mgr = NULL; + uint32_t rc; + + p_dtb_mgr = p_dpp_dtb_mgr[dev_id]; + if (p_dtb_mgr == NULL) + return 1; + + rc = zxdh_np_dtb_queue_unused_item_num_get(dev_id, queue_id, &item_num); + + p_dtb_mgr->queue_info[queue_id].init_flag = 0; + p_dtb_mgr->queue_info[queue_id].vport = 0; + p_dtb_mgr->queue_info[queue_id].vector = 0; + + return rc; +} + +static uint32_t +zxdh_np_dtb_queue_release(uint32_t devid, + char pname[32], + uint32_t queueid) +{ + uint32_t rc; + + ZXDH_COMM_CHECK_DEV_POINT(devid, pname); + + rc = zxdh_np_riscv_dpp_dtb_queue_id_release(devid, pname, queueid); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_riscv_dpp_dtb_queue_id_release"); + + rc = zxdh_np_dtb_queue_id_free(devid, queueid); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_dtb_queue_id_free"); + + return rc; +} + +static void +zxdh_np_dtb_mgr_destroy(uint32_t dev_id) +{ + if (p_dpp_dtb_mgr[dev_id] != NULL) { + free(p_dpp_dtb_mgr[dev_id]); + p_dpp_dtb_mgr[dev_id] = NULL; + } +} + +static void +zxdh_np_tlb_mgr_destroy(uint32_t dev_id) +{ + if (g_p_dpp_tlb_mgr[dev_id] != NULL) { + free(g_p_dpp_tlb_mgr[dev_id]); + g_p_dpp_tlb_mgr[dev_id] = NULL; + } +} + +static void +zxdh_np_sdt_mgr_destroy(uint32_t dev_id) +{ + 
ZXDH_SDT_SOFT_TABLE_T *p_sdt_tbl_temp = NULL; + ZXDH_SDT_MGR_T *p_sdt_mgr = NULL; + + p_sdt_tbl_temp = ZXDH_SDT_SOFT_TBL_GET(dev_id); + p_sdt_mgr = ZXDH_SDT_MGR_PTR_GET(); + + if (p_sdt_tbl_temp != NULL) + free(p_sdt_tbl_temp); + + ZXDH_SDT_SOFT_TBL_GET(dev_id) = NULL; + + p_sdt_mgr->channel_num--; +} + +static void +zxdh_np_dev_del(uint32_t dev_id) +{ + ZXDH_DEV_CFG_T *p_dev_info = NULL; + ZXDH_DEV_MGR_T *p_dev_mgr = NULL; + + p_dev_mgr = &g_dev_mgr; + p_dev_info = p_dev_mgr->p_dev_array[dev_id]; + + if (p_dev_info != NULL) { + free(p_dev_info); + p_dev_mgr->p_dev_array[dev_id] = NULL; + p_dev_mgr->device_num--; + } +} + +int +zxdh_np_online_uninit(uint32_t dev_id, + char *port_name, + uint32_t queue_id) +{ + uint32_t rc; + + rc = zxdh_np_dtb_queue_release(dev_id, port_name, queue_id); + if (rc != 0) + PMD_DRV_LOG(ERR, "%s:dtb release error," + "port name %s queue id %d. ", __func__, port_name, queue_id); + + zxdh_np_dtb_mgr_destroy(dev_id); + zxdh_np_tlb_mgr_destroy(dev_id); + zxdh_np_sdt_mgr_destroy(dev_id); + zxdh_np_dev_del(dev_id); + + return 0; +} diff --git a/drivers/net/zxdh/zxdh_np.h b/drivers/net/zxdh/zxdh_np.h index 573eafe796..dc0e867827 100644 --- a/drivers/net/zxdh/zxdh_np.h +++ b/drivers/net/zxdh/zxdh_np.h @@ -47,6 +47,11 @@ #define ZXDH_INIT_FLAG_TM_IMEM_FLAG (1 << 9) #define ZXDH_INIT_FLAG_AGENT_FLAG (1 << 10) +#define ZXDH_ACL_TBL_ID_MIN (0) +#define ZXDH_ACL_TBL_ID_MAX (7) +#define ZXDH_ACL_TBL_ID_NUM (8U) +#define ZXDH_ACL_BLOCK_NUM (8U) + typedef enum zxdh_module_init_e { ZXDH_MODULE_INIT_NPPU = 0, ZXDH_MODULE_INIT_PPU, @@ -67,6 +72,15 @@ typedef enum zxdh_dev_type_e { ZXDH_DEV_TYPE_INVALID, } ZXDH_DEV_TYPE_E; +typedef enum zxdh_reg_info_e { + ZXDH_DTB_CFG_QUEUE_DTB_HADDR = 0, + ZXDH_DTB_CFG_QUEUE_DTB_LADDR = 1, + ZXDH_DTB_CFG_QUEUE_DTB_LEN = 2, + ZXDH_DTB_INFO_QUEUE_BUF_SPACE = 3, + ZXDH_DTB_CFG_EPID_V_FUNC_NUM = 4, + ZXDH_REG_ENUM_MAX_VALUE +} ZXDH_REG_INFO_E; + typedef enum zxdh_dev_access_type_e { ZXDH_DEV_ACCESS_TYPE_PCIE = 0, ZXDH_DEV_ACCESS_TYPE_RISCV = 1, @@ -79,6 +93,26 @@ typedef enum zxdh_dev_agent_flag_e { ZXDH_DEV_AGENT_INVALID, } ZXDH_DEV_AGENT_FLAG_E; +typedef enum zxdh_acl_pri_mode_e { + ZXDH_ACL_PRI_EXPLICIT = 1, + ZXDH_ACL_PRI_IMPLICIT, + ZXDH_ACL_PRI_SPECIFY, + ZXDH_ACL_PRI_INVALID, +} ZXDH_ACL_PRI_MODE_E; + +typedef struct zxdh_d_node { + void *data; + struct zxdh_d_node *prev; + struct zxdh_d_node *next; +} ZXDH_D_NODE; + +typedef struct zxdh_d_head { + uint32_t used; + uint32_t maxnum; + ZXDH_D_NODE *p_next; + ZXDH_D_NODE *p_prev; +} ZXDH_D_HEAD; + typedef struct zxdh_dtb_tab_up_user_addr_t { uint32_t user_flag; uint64_t phy_addr; @@ -193,6 +227,79 @@ typedef struct zxdh_sdt_mgr_t { ZXDH_SDT_SOFT_TABLE_T *sdt_tbl_array[ZXDH_DEV_CHANNEL_MAX]; } ZXDH_SDT_MGR_T; +typedef struct zxdh_riscv_dtb_queue_USER_info_t { + uint32_t alloc_flag; + uint32_t queue_id; + uint32_t vport; + char user_name[ZXDH_PORT_NAME_MAX]; +} ZXDH_RISCV_DTB_QUEUE_USER_INFO_T; + +typedef struct zxdh_riscv_dtb_mgr { + uint32_t queue_alloc_count; + uint32_t queue_index; + ZXDH_RISCV_DTB_QUEUE_USER_INFO_T queue_user_info[ZXDH_DTB_QUEUE_NUM_MAX]; +} ZXDH_RISCV_DTB_MGR; + +typedef struct zxdh_dtb_queue_vm_info_t { + uint32_t dbi_en; + uint32_t queue_en; + uint32_t epid; + uint32_t vfunc_num; + uint32_t vector; + uint32_t func_num; + uint32_t vfunc_active; +} ZXDH_DTB_QUEUE_VM_INFO_T; + +typedef struct zxdh_dtb4k_dtb_enq_cfg_epid_v_func_num_0_127_t { + uint32_t dbi_en; + uint32_t queue_en; + uint32_t cfg_epid; + uint32_t cfg_vfunc_num; + uint32_t cfg_vector; + uint32_t cfg_func_num; + 
uint32_t cfg_vfunc_active; +} ZXDH_DTB4K_DTB_ENQ_CFG_EPID_V_FUNC_NUM_0_127_T; + + +typedef uint32_t (*ZXDH_REG_WRITE)(uint32_t dev_id, uint32_t addr, uint32_t *p_data); +typedef uint32_t (*ZXDH_REG_READ)(uint32_t dev_id, uint32_t addr, uint32_t *p_data); + +typedef struct zxdh_field_t { + const char *p_name; + uint32_t flags; + uint16_t msb_pos; + + uint16_t len; + uint32_t default_value; + uint32_t default_step; +} ZXDH_FIELD_T; + +typedef struct zxdh_reg_t { + const char *reg_name; + uint32_t reg_no; + uint32_t module_no; + uint32_t flags; + uint32_t array_type; + uint32_t addr; + uint32_t width; + uint32_t m_size; + uint32_t n_size; + uint32_t m_step; + uint32_t n_step; + uint32_t field_num; + ZXDH_FIELD_T *p_fields; + + ZXDH_REG_WRITE p_write_fun; + ZXDH_REG_READ p_read_fun; +} ZXDH_REG_T; + +typedef struct zxdh_tlb_mgr_t { + uint32_t entry_num; + uint32_t va_width; + uint32_t pa_width; +} ZXDH_TLB_MGR_T; + int zxdh_np_host_init(uint32_t dev_id, ZXDH_DEV_INIT_CTRL_T *p_dev_init_ctrl); +int zxdh_np_online_uninit(uint32_t dev_id, char *port_name, uint32_t queue_id); #endif /* ZXDH_NP_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 45109 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
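One detail of this patch worth spelling out: a ZXDH_FIELD_T names a register field by the bit number of its most significant bit, counted so that bit (width * 8 - 1) is the very first bit of the register image, and zxdh_np_comm_write_bits_ex()/zxdh_np_comm_read_bits_ex() translate that into MSB-first byte and bit offsets. The toy model below reproduces that addressing with simplified bit-at-a-time loops (the driver works byte-wise); it is a sketch for illustration, not the driver's implementation.

#include <stdint.h>
#include <stdio.h>

/* Pack 'val' into a register image addressed the way ZXDH_FIELD_T is:
 * msb_pos counts down from buf_bits - 1, the first bit of the buffer. */
static void
field_write(uint8_t *buf, uint32_t buf_bits, uint32_t msb_pos,
	    uint32_t len, uint32_t val)
{
	uint32_t start = buf_bits - 1 - msb_pos; /* MSB-first absolute index */
	uint32_t i;

	for (i = 0; i < len; i++) {
		uint32_t pos = start + i;
		uint32_t shift = 7 - (pos & 7); /* within a byte, bit 0 is the MSB */
		uint32_t bit = (val >> (len - 1 - i)) & 1u;

		buf[pos >> 3] = (uint8_t)((buf[pos >> 3] & ~(1u << shift)) | (bit << shift));
	}
}

static uint32_t
field_read(const uint8_t *buf, uint32_t buf_bits, uint32_t msb_pos, uint32_t len)
{
	uint32_t start = buf_bits - 1 - msb_pos;
	uint32_t val = 0;
	uint32_t i;

	for (i = 0; i < len; i++) {
		uint32_t pos = start + i;

		val = (val << 1) | ((buf[pos >> 3] >> (7 - (pos & 7))) & 1u);
	}
	return val;
}

int
main(void)
{
	uint8_t reg[16] = {0}; /* a 128-bit register image */

	field_write(reg, 128, 127, 7, 0x55); /* a field at the top of the word */
	printf("0x%x\n", field_read(reg, 128, 127, 7)); /* prints 0x55 */
	return 0;
}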
* [PATCH v4 03/15] net/zxdh: port tables init implementations 2024-12-18 9:25 ` [PATCH v4 00/15] net/zxdh: updated net zxdh driver Junlong Wang 2024-12-18 9:25 ` [PATCH v4 01/15] net/zxdh: zxdh np init implementation Junlong Wang 2024-12-18 9:25 ` [PATCH v4 02/15] net/zxdh: zxdh np uninit implementation Junlong Wang @ 2024-12-18 9:25 ` Junlong Wang 2024-12-18 9:25 ` [PATCH v4 04/15] net/zxdh: port tables uninit implementations Junlong Wang ` (11 subsequent siblings) 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-18 9:25 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 42795 bytes --] Insert port tables on the host side. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/meson.build | 1 + drivers/net/zxdh/zxdh_ethdev.c | 24 ++ drivers/net/zxdh/zxdh_msg.c | 65 ++++ drivers/net/zxdh/zxdh_msg.h | 72 ++++ drivers/net/zxdh/zxdh_np.c | 648 ++++++++++++++++++++++++++++++++- drivers/net/zxdh/zxdh_np.h | 210 +++++++++++ drivers/net/zxdh/zxdh_pci.h | 2 + drivers/net/zxdh/zxdh_tables.c | 105 ++++++ drivers/net/zxdh/zxdh_tables.h | 148 ++++++++ 9 files changed, 1274 insertions(+), 1 deletion(-) create mode 100644 drivers/net/zxdh/zxdh_tables.c create mode 100644 drivers/net/zxdh/zxdh_tables.h diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build index ab24a3145c..5b3af87c5b 100644 --- a/drivers/net/zxdh/meson.build +++ b/drivers/net/zxdh/meson.build @@ -20,4 +20,5 @@ sources = files( 'zxdh_pci.c', 'zxdh_queue.c', 'zxdh_np.c', + 'zxdh_tables.c', ) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 4e114d95da..ff44816384 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -14,6 +14,7 @@ #include "zxdh_common.h" #include "zxdh_queue.h" #include "zxdh_np.h" +#include "zxdh_tables.h" struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; struct zxdh_shared_data *zxdh_shared_data; @@ -1144,6 +1145,25 @@ zxdh_np_init(struct rte_eth_dev *eth_dev) return 0; } +static int +zxdh_tables_init(struct rte_eth_dev *dev) +{ + int ret = 0; + + ret = zxdh_port_attr_init(dev); + if (ret) { + PMD_DRV_LOG(ERR, "zxdh_port_attr_init failed"); + return ret; + } + + ret = zxdh_panel_table_init(dev); + if (ret) { + PMD_DRV_LOG(ERR, "panel table init failed"); + return ret; + } + return ret; +} + static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) { @@ -1220,6 +1240,10 @@ zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) if (ret != 0) goto err_zxdh_init; + ret = zxdh_tables_init(eth_dev); + if (ret != 0) + goto err_zxdh_init; + return ret; err_zxdh_init: diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c index dd7a518a51..aa2e10fd45 100644 --- a/drivers/net/zxdh/zxdh_msg.c +++ b/drivers/net/zxdh/zxdh_msg.c @@ -14,6 +14,7 @@ #include "zxdh_ethdev.h" #include "zxdh_logs.h" #include "zxdh_msg.h" +#include "zxdh_pci.h" #define ZXDH_REPS_INFO_FLAG_USABLE 0x00 #define ZXDH_BAR_SEQID_NUM_MAX 256 @@ -100,6 +101,7 @@ #define ZXDH_BAR_CHAN_MSG_EMEC 1 #define ZXDH_BAR_CHAN_MSG_NO_ACK 0 #define ZXDH_BAR_CHAN_MSG_ACK 1 +#define ZXDH_MSG_REPS_OK 0xff uint8_t subchan_id_tbl[ZXDH_BAR_MSG_SRC_NUM][ZXDH_BAR_MSG_DST_NUM] = { {ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_SEND}, @@ -1079,3 +1081,66 @@ int zxdh_get_bar_offset(struct zxdh_bar_offset_params *paras, res->bar_length = recv_msg.offset_reps.length; return ZXDH_BAR_MSG_OK; } + +int zxdh_vf_send_msg_to_pf(struct rte_eth_dev *dev, void *msg_req, + uint16_t 
msg_req_len, void *reply, uint16_t reply_len) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_recviver_mem result = {0}; + struct zxdh_msg_reply_info reply_info = {0}; + int ret = 0; + + if (reply) { + RTE_ASSERT(reply_len < sizeof(struct zxdh_msg_reply_info)); + result.recv_buffer = reply; + result.buffer_len = reply_len; + } else { + result.recv_buffer = &reply_info; + result.buffer_len = sizeof(reply_info); + } + + struct zxdh_msg_reply_head *reply_head = + &(((struct zxdh_msg_reply_info *)result.recv_buffer)->reply_head); + struct zxdh_msg_reply_body *reply_body = + &(((struct zxdh_msg_reply_info *)result.recv_buffer)->reply_body); + + struct zxdh_pci_bar_msg in = { + .virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + + ZXDH_MSG_CHAN_PFVFSHARE_OFFSET), + .payload_addr = msg_req, + .payload_len = msg_req_len, + .src = ZXDH_MSG_CHAN_END_VF, + .dst = ZXDH_MSG_CHAN_END_PF, + .module_id = ZXDH_MODULE_BAR_MSG_TO_PF, + .src_pcieid = hw->pcie_id, + .dst_pcieid = ZXDH_PF_PCIE_ID(hw->pcie_id), + }; + + ret = zxdh_bar_chan_sync_msg_send(&in, &result); + if (ret != ZXDH_BAR_MSG_OK) { + PMD_MSG_LOG(ERR, + "vf[%d] send bar msg to pf failed, ret %d", hw->vport.vfid, ret); + return -1; + } + if (reply_head->flag != ZXDH_MSG_REPS_OK) { + PMD_MSG_LOG(ERR, "vf[%d] get pf reply failed: reply_head flag: 0x%x (0xff is OK), reply len %d", + hw->vport.vfid, reply_head->flag, reply_head->reps_len); + return -1; + } + if (reply_body->flag != ZXDH_REPS_SUCC) { + PMD_MSG_LOG(ERR, "vf[%d] msg processing failed", hw->vfid); + return -1; + } + return 0; +} + +void zxdh_msg_head_build(struct zxdh_hw *hw, enum zxdh_msg_type type, + struct zxdh_msg_info *msg_info) +{ + struct zxdh_msg_head *msghead = &msg_info->msg_head; + + msghead->msg_type = type; + msghead->vport = hw->vport.vport; + msghead->vf_id = hw->vport.vfid; + msghead->pcieid = hw->pcie_id; +} diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index fbc79e8f9d..b7b17b8696 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -33,6 +33,19 @@ #define ZXDH_BAR_MSG_PAYLOAD_MAX_LEN \ (ZXDH_BAR_MSG_ADDR_CHAN_INTERVAL - sizeof(struct zxdh_bar_msg_header)) +#define ZXDH_MSG_ADDR_CHAN_INTERVAL (2 * 1024) /* channel size */ +#define ZXDH_MSG_PAYLOAD_MAX_LEN \ + (ZXDH_MSG_ADDR_CHAN_INTERVAL - sizeof(struct zxdh_bar_msg_header)) + +#define ZXDH_MSG_REPLYBODY_HEAD sizeof(enum zxdh_reps_flag) +#define ZXDH_MSG_HEADER_SIZE 4 +#define ZXDH_MSG_REPLY_BODY_MAX_LEN \ + (ZXDH_MSG_PAYLOAD_MAX_LEN - sizeof(struct zxdh_msg_reply_head)) + +#define ZXDH_MSG_HEAD_LEN 8 +#define ZXDH_MSG_REQ_BODY_MAX_LEN \ + (ZXDH_MSG_PAYLOAD_MAX_LEN - ZXDH_MSG_HEAD_LEN) + enum ZXDH_DRIVER_TYPE { ZXDH_MSG_CHAN_END_MPF = 0, ZXDH_MSG_CHAN_END_PF, @@ -151,6 +164,13 @@ enum pciebar_layout_type { ZXDH_URI_MAX, }; +enum zxdh_msg_type { + ZXDH_NULL = 0, + ZXDH_VF_PORT_INIT = 1, + + ZXDH_MSG_TYPE_END, +}; + struct zxdh_msix_para { uint16_t pcie_id; uint16_t vector_risc; @@ -240,6 +260,54 @@ struct zxdh_offset_get_msg { uint16_t type; }; +struct zxdh_msg_reply_head { + uint8_t flag; + uint16_t reps_len; + uint8_t resvd; +} __rte_packed; + +enum zxdh_reps_flag { + ZXDH_REPS_FAIL, + ZXDH_REPS_SUCC = 0xaa, +} __rte_packed; + +struct zxdh_msg_reply_body { + enum zxdh_reps_flag flag; + union { + uint8_t reply_data[ZXDH_MSG_REPLY_BODY_MAX_LEN - sizeof(enum zxdh_reps_flag)]; + } __rte_packed; +} __rte_packed; + +struct zxdh_msg_reply_info { + struct zxdh_msg_reply_head reply_head; + struct zxdh_msg_reply_body reply_body; +} __rte_packed; + +struct 
zxdh_vf_init_msg { + uint8_t link_up; + uint8_t rsv; + uint16_t base_qid; + uint8_t rss_enable; +} __rte_packed; + +struct zxdh_msg_head { + enum zxdh_msg_type msg_type; + uint16_t vport; + uint16_t vf_id; + uint16_t pcieid; +} __rte_packed; + +struct zxdh_msg_info { + union { + uint8_t head_len[ZXDH_MSG_HEAD_LEN]; + struct zxdh_msg_head msg_head; + }; + union { + uint8_t datainfo[ZXDH_MSG_REQ_BODY_MAX_LEN]; + struct zxdh_vf_init_msg vf_init_msg; + } __rte_packed data; +} __rte_packed; + typedef int (*zxdh_bar_chan_msg_recv_callback)(void *pay_load, uint16_t len, void *reps_buffer, uint16_t *reps_len, void *dev); @@ -253,5 +321,9 @@ int zxdh_bar_chan_sync_msg_send(struct zxdh_pci_bar_msg *in, struct zxdh_msg_recviver_mem *result); int zxdh_bar_irq_recv(uint8_t src, uint8_t dst, uint64_t virt_addr, void *dev); +void zxdh_msg_head_build(struct zxdh_hw *hw, enum zxdh_msg_type type, + struct zxdh_msg_info *msg_info); +int zxdh_vf_send_msg_to_pf(struct rte_eth_dev *dev, void *msg_req, + uint16_t msg_req_len, void *reply, uint16_t reply_len); #endif /* ZXDH_MSG_H */ diff --git a/drivers/net/zxdh/zxdh_np.c b/drivers/net/zxdh/zxdh_np.c index 28728b0c68..db536d96e3 100644 --- a/drivers/net/zxdh/zxdh_np.c +++ b/drivers/net/zxdh/zxdh_np.c @@ -9,6 +9,7 @@ #include <rte_log.h> #include <rte_debug.h> #include <rte_malloc.h> +#include <rte_memcpy.h> #include "zxdh_np.h" #include "zxdh_logs.h" @@ -16,11 +17,14 @@ static uint64_t g_np_bar_offset; static ZXDH_DEV_MGR_T g_dev_mgr; static ZXDH_SDT_MGR_T g_sdt_mgr; +static uint32_t g_dpp_dtb_int_enable; +static uint32_t g_table_type[ZXDH_DEV_CHANNEL_MAX][ZXDH_DEV_SDT_ID_MAX]; ZXDH_PPU_CLS_BITMAP_T g_ppu_cls_bit_map[ZXDH_DEV_CHANNEL_MAX]; ZXDH_DTB_MGR_T *p_dpp_dtb_mgr[ZXDH_DEV_CHANNEL_MAX]; ZXDH_RISCV_DTB_MGR *p_riscv_dtb_queue_mgr[ZXDH_DEV_CHANNEL_MAX]; ZXDH_TLB_MGR_T *g_p_dpp_tlb_mgr[ZXDH_DEV_CHANNEL_MAX]; ZXDH_REG_T g_dpp_reg_info[4]; +ZXDH_DTB_TABLE_T g_dpp_dtb_table_info[4]; #define ZXDH_SDT_MGR_PTR_GET() (&g_sdt_mgr) #define ZXDH_SDT_SOFT_TBL_GET(id) (g_sdt_mgr.sdt_tbl_array[id]) @@ -76,6 +80,92 @@ do {\ } \ } while (0) +#define ZXDH_COMM_CHECK_POINT(point)\ +do {\ + if ((point) == NULL) {\ + PMD_DRV_LOG(ERR, "ZXIC %s:%d[Error:POINT NULL] ! 
FUNCTION : %s!",\ + __FILE__, __LINE__, __func__);\ + RTE_ASSERT(0);\ + } \ +} while (0) + + +#define ZXDH_COMM_CHECK_POINT_MEMORY_FREE(point, ptr)\ +do {\ + if ((point) == NULL) {\ + PMD_DRV_LOG(ERR, "ZXIC %s:%d[Error:POINT NULL] !"\ + "FUNCTION : %s!", __FILE__, __LINE__, __func__);\ + rte_free(ptr);\ + RTE_ASSERT(0);\ + } \ +} while (0) + +#define ZXDH_COMM_CHECK_RC_MEMORY_FREE_NO_ASSERT(rc, becall, ptr)\ +do {\ + if ((rc) != 0) {\ + PMD_DRV_LOG(ERR, "ZXICP %s:%d, %s Call"\ + " %s Fail!", __FILE__, __LINE__, __func__, becall);\ + rte_free(ptr);\ + } \ +} while (0) + +#define ZXDH_COMM_CONVERT16(w_data) \ + (((w_data) & 0xff) << 8) + +#define ZXDH_DTB_TAB_UP_VIR_ADDR_GET(DEV_ID, QUEUE_ID, INDEX) \ + ((INDEX) * p_dpp_dtb_mgr[(DEV_ID)]->queue_info[(QUEUE_ID)].tab_up.item_size) + +#define ZXDH_DTB_TAB_DOWN_VIR_ADDR_GET(DEV_ID, QUEUE_ID, INDEX) \ + ((INDEX) * p_dpp_dtb_mgr[(DEV_ID)]->queue_info[(QUEUE_ID)].tab_down.item_size) + +#define ZXDH_DTB_TAB_DOWN_WR_INDEX_GET(DEV_ID, QUEUE_ID) \ + (p_dpp_dtb_mgr[(DEV_ID)]->queue_info[(QUEUE_ID)].tab_down.wr_index) + +#define ZXDH_DTB_QUEUE_INIT_FLAG_GET(DEV_ID, QUEUE_ID) \ + (p_dpp_dtb_mgr[(DEV_ID)]->queue_info[(QUEUE_ID)].init_flag) + +static uint32_t +zxdh_np_comm_is_big_endian(void) +{ + ZXDH_ENDIAN_U c_data; + + c_data.a = 1; + + if (c_data.b == 1) + return 0; + else + return 1; +} + +static void +zxdh_np_comm_swap(uint8_t *p_uc_data, uint32_t dw_byte_len) +{ + uint16_t *p_w_tmp = NULL; + uint32_t *p_dw_tmp = NULL; + uint32_t dw_byte_num; + uint8_t uc_byte_mode; + uint32_t uc_is_big_flag; + uint32_t i; + + p_dw_tmp = (uint32_t *)(p_uc_data); + uc_is_big_flag = zxdh_np_comm_is_big_endian(); + if (uc_is_big_flag) + return; + + dw_byte_num = dw_byte_len >> 2; + uc_byte_mode = dw_byte_len % 4 & 0xff; + + for (i = 0; i < dw_byte_num; i++) { + (*p_dw_tmp) = ZXDH_COMM_CONVERT16(*p_dw_tmp); + p_dw_tmp++; + } + + if (uc_byte_mode > 1) { + p_w_tmp = (uint16_t *)(p_dw_tmp); + (*p_w_tmp) = ZXDH_COMM_CONVERT16(*p_w_tmp); + } +} + static uint32_t zxdh_np_dev_init(void) { @@ -503,7 +593,7 @@ zxdh_np_dtb_queue_vm_info_get(uint32_t dev_id, p_vm_info->func_num = vm_info.cfg_func_num; p_vm_info->vfunc_active = vm_info.cfg_vfunc_active; - return 0; + return rc; } static uint32_t @@ -808,3 +898,559 @@ zxdh_np_online_uninit(uint32_t dev_id, return 0; } + +static uint32_t +zxdh_np_sdt_tbl_type_get(uint32_t dev_id, uint32_t sdt_no) +{ + return g_table_type[dev_id][sdt_no]; +} + + +static ZXDH_DTB_TABLE_T * +zxdh_np_table_info_get(uint32_t table_type) +{ + return &g_dpp_dtb_table_info[table_type]; +} + +static uint32_t +zxdh_np_dtb_write_table_cmd(uint32_t dev_id, + ZXDH_DTB_TABLE_INFO_E table_type, + void *p_cmd_data, + void *p_cmd_buff) +{ + uint32_t field_cnt; + ZXDH_DTB_TABLE_T *p_table_info = NULL; + ZXDH_DTB_FIELD_T *p_field_info = NULL; + uint32_t temp_data; + uint32_t rc; + + ZXDH_COMM_CHECK_POINT(p_cmd_data); + ZXDH_COMM_CHECK_POINT(p_cmd_buff); + p_table_info = zxdh_np_table_info_get(table_type); + p_field_info = p_table_info->p_fields; + ZXDH_COMM_CHECK_DEV_POINT(dev_id, p_table_info); + + for (field_cnt = 0; field_cnt < p_table_info->field_num; field_cnt++) { + temp_data = *((uint32_t *)p_cmd_data + field_cnt) & ZXDH_COMM_GET_BIT_MASK(uint32_t, + p_field_info[field_cnt].len); + + rc = zxdh_np_comm_write_bits_ex((uint8_t *)p_cmd_buff, + ZXDH_DTB_TABLE_CMD_SIZE_BIT, + temp_data, + p_field_info[field_cnt].lsb_pos, + p_field_info[field_cnt].len); + + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxic_comm_write_bits"); + } + + return rc; +} + +static uint32_t 
+zxdh_np_dtb_smmu0_write_entry_data(uint32_t dev_id, + uint32_t mode, + uint32_t addr, + uint32_t *p_data, + ZXDH_DTB_ENTRY_T *p_entry) +{ + ZXDH_DTB_ERAM_TABLE_FORM_T dtb_eram_form_info = {0}; + uint32_t rc = 0; + + dtb_eram_form_info.valid = ZXDH_DTB_TABLE_VALID; + dtb_eram_form_info.type_mode = ZXDH_DTB_TABLE_MODE_ERAM; + dtb_eram_form_info.data_mode = mode; + dtb_eram_form_info.cpu_wr = 1; + dtb_eram_form_info.addr = addr; + dtb_eram_form_info.cpu_rd = 0; + dtb_eram_form_info.cpu_rd_mode = 0; + + if (ZXDH_ERAM128_OPR_128b == mode) { + p_entry->data_in_cmd_flag = 0; + p_entry->data_size = 128 / 8; + + rc = zxdh_np_dtb_write_table_cmd(dev_id, ZXDH_DTB_TABLE_ERAM_128, + &dtb_eram_form_info, p_entry->cmd); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_dtb_write_table_cmd"); + + memcpy(p_entry->data, p_data, 128 / 8); + } else if (ZXDH_ERAM128_OPR_64b == mode) { + p_entry->data_in_cmd_flag = 1; + p_entry->data_size = 64 / 8; + dtb_eram_form_info.data_l = *(p_data + 1); + dtb_eram_form_info.data_h = *(p_data); + + rc = zxdh_np_dtb_write_table_cmd(dev_id, ZXDH_DTB_TABLE_ERAM_64, + &dtb_eram_form_info, p_entry->cmd); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_dtb_write_table_cmd"); + + } else if (ZXDH_ERAM128_OPR_1b == mode) { + p_entry->data_in_cmd_flag = 1; + p_entry->data_size = 1; + dtb_eram_form_info.data_h = *(p_data); + + rc = zxdh_np_dtb_write_table_cmd(dev_id, ZXDH_DTB_TABLE_ERAM_1, + &dtb_eram_form_info, p_entry->cmd); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_dtb_write_table_cmd"); + } + + return rc; +} + +static uint32_t +zxdh_np_dtb_se_smmu0_ind_write(uint32_t dev_id, + uint32_t base_addr, + uint32_t index, + uint32_t wrt_mode, + uint32_t *p_data, + ZXDH_DTB_ENTRY_T *p_entry) +{ + uint32_t temp_idx; + uint32_t dtb_ind_addr; + uint32_t rc; + + switch (wrt_mode) { + case ZXDH_ERAM128_OPR_128b: + { + if ((0xFFFFFFFF - (base_addr)) < (index)) { + PMD_DRV_LOG(ERR, "ICM %s:%d[Error:VALUE[val0=0x%x]" + "INVALID] [val1=0x%x] ! 
FUNCTION :%s !", __FILE__, __LINE__, + base_addr, index, __func__); + + return ZXDH_PAR_CHK_INVALID_INDEX; + } + if (base_addr + index > ZXDH_SE_SMMU0_ERAM_ADDR_NUM_TOTAL - 1) { + PMD_DRV_LOG(ERR, "dpp_se_smmu0_ind_write : index out of range !"); + return 1; + } + temp_idx = index << 7; + break; + } + + case ZXDH_ERAM128_OPR_64b: + { + if ((base_addr + (index >> 1)) > ZXDH_SE_SMMU0_ERAM_ADDR_NUM_TOTAL - 1) { + PMD_DRV_LOG(ERR, "dpp_se_smmu0_ind_write : index out of range !"); + return 1; + } + temp_idx = index << 6; + break; + } + + case ZXDH_ERAM128_OPR_1b: + { + if ((base_addr + (index >> 7)) > ZXDH_SE_SMMU0_ERAM_ADDR_NUM_TOTAL - 1) { + PMD_DRV_LOG(ERR, "dpp_se_smmu0_ind_write : index out of range !"); + return 1; + } + + temp_idx = index; + } + } + + dtb_ind_addr = ((base_addr << 7) & ZXDH_ERAM128_BADDR_MASK) + temp_idx; + + PMD_DRV_LOG(INFO, " dtb eram item 1bit addr: 0x%x", dtb_ind_addr); + + rc = zxdh_np_dtb_smmu0_write_entry_data(dev_id, + wrt_mode, + dtb_ind_addr, + p_data, + p_entry); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_dtb_smmu0_write_entry_data"); + + return rc; +} + +static uint32_t +zxdh_np_eram_dtb_len_get(uint32_t mode) +{ + uint32_t dtb_len = 0; + + switch (mode) { + case ZXDH_ERAM128_OPR_128b: + { + dtb_len += 2; + break; + } + case ZXDH_ERAM128_OPR_64b: + case ZXDH_ERAM128_OPR_1b: + { + dtb_len += 1; + break; + } + default: + break; + } + + return dtb_len; +} + +static uint32_t +zxdh_np_dtb_eram_one_entry(uint32_t dev_id, + uint32_t sdt_no, + uint32_t del_en, + void *pdata, + uint32_t *p_dtb_len, + ZXDH_DTB_ENTRY_T *p_dtb_one_entry) +{ + uint32_t buff[ZXDH_SMMU0_READ_REG_MAX_NUM] = {0}; + ZXDH_SDTTBL_ERAM_T sdt_eram = {0}; + ZXDH_DTB_ERAM_ENTRY_INFO_T *peramdata = NULL; + uint32_t base_addr; + uint32_t index; + uint32_t opr_mode; + uint32_t rc; + + ZXDH_COMM_CHECK_POINT(pdata); + ZXDH_COMM_CHECK_POINT(p_dtb_one_entry); + ZXDH_COMM_CHECK_POINT(p_dtb_len); + + peramdata = (ZXDH_DTB_ERAM_ENTRY_INFO_T *)pdata; + index = peramdata->index; + base_addr = sdt_eram.eram_base_addr; + opr_mode = sdt_eram.eram_mode; + + switch (opr_mode) { + case ZXDH_ERAM128_TBL_128b: + { + opr_mode = ZXDH_ERAM128_OPR_128b; + break; + } + case ZXDH_ERAM128_TBL_64b: + { + opr_mode = ZXDH_ERAM128_OPR_64b; + break; + } + + case ZXDH_ERAM128_TBL_1b: + { + opr_mode = ZXDH_ERAM128_OPR_1b; + break; + } + } + + if (del_en) { + memset((uint8_t *)buff, 0, sizeof(buff)); + rc = zxdh_np_dtb_se_smmu0_ind_write(dev_id, + base_addr, + index, + opr_mode, + buff, + p_dtb_one_entry); + ZXDH_COMM_CHECK_DEV_RC(sdt_no, rc, "zxdh_dtb_se_smmu0_ind_write"); + } else { + rc = zxdh_np_dtb_se_smmu0_ind_write(dev_id, + base_addr, + index, + opr_mode, + peramdata->p_data, + p_dtb_one_entry); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rc, "zxdh_dtb_se_smmu0_ind_write"); + } + *p_dtb_len = zxdh_np_eram_dtb_len_get(opr_mode); + + return rc; +} + +static uint32_t +zxdh_np_dtb_data_write(uint8_t *p_data_buff, + uint32_t addr_offset, + ZXDH_DTB_ENTRY_T *entry) +{ + ZXDH_COMM_CHECK_POINT(p_data_buff); + ZXDH_COMM_CHECK_POINT(entry); + + uint8_t *p_cmd = p_data_buff + addr_offset; + uint32_t cmd_size = ZXDH_DTB_TABLE_CMD_SIZE_BIT / 8; + + uint8_t *p_data = p_cmd + cmd_size; + uint32_t data_size = entry->data_size; + + uint8_t *cmd = (uint8_t *)entry->cmd; + uint8_t *data = (uint8_t *)entry->data; + + rte_memcpy(p_cmd, cmd, cmd_size); + + if (!entry->data_in_cmd_flag) { + zxdh_np_comm_swap(data, data_size); + rte_memcpy(p_data, data, data_size); + } + + return 0; +} + +static uint32_t +zxdh_np_dtb_queue_enable_get(uint32_t dev_id, + uint32_t 
queue_id, + uint32_t *enable) +{ + uint32_t rc = 0; + ZXDH_DTB_QUEUE_VM_INFO_T vm_info = {0}; + + rc = zxdh_np_dtb_queue_vm_info_get(dev_id, queue_id, &vm_info); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_dtb_queue_vm_info_get"); + + *enable = vm_info.queue_en; + return rc; +} + +static uint32_t +zxdh_np_dtb_item_buff_wr(uint32_t dev_id, + uint32_t queue_id, + uint32_t dir_flag, + uint32_t index, + uint32_t pos, + uint32_t len, + uint32_t *p_data) +{ + uint64_t addr; + + if (dir_flag == 1) + addr = ZXDH_DTB_TAB_UP_VIR_ADDR_GET(dev_id, queue_id, index) + + ZXDH_DTB_ITEM_ACK_SIZE + pos * 4; + else + addr = ZXDH_DTB_TAB_DOWN_VIR_ADDR_GET(dev_id, queue_id, index) + + ZXDH_DTB_ITEM_ACK_SIZE + pos * 4; + + memcpy((uint8_t *)(addr), p_data, len * 4); + + return 0; +} + +static uint32_t +zxdh_np_dtb_item_ack_rd(uint32_t dev_id, + uint32_t queue_id, + uint32_t dir_flag, + uint32_t index, + uint32_t pos, + uint32_t *p_data) +{ + uint64_t addr; + uint32_t val; + + if (dir_flag == 1) + addr = ZXDH_DTB_TAB_UP_VIR_ADDR_GET(dev_id, queue_id, index) + pos * 4; + else + addr = ZXDH_DTB_TAB_DOWN_VIR_ADDR_GET(dev_id, queue_id, index) + pos * 4; + + val = *((volatile uint32_t *)(addr)); + + *p_data = val; + + return 0; +} + +static uint32_t +zxdh_np_dtb_item_ack_wr(uint32_t dev_id, + uint32_t queue_id, + uint32_t dir_flag, + uint32_t index, + uint32_t pos, + uint32_t data) +{ + uint64_t addr; + + if (dir_flag == 1) + addr = ZXDH_DTB_TAB_UP_VIR_ADDR_GET(dev_id, queue_id, index) + pos * 4; + else + addr = ZXDH_DTB_TAB_DOWN_VIR_ADDR_GET(dev_id, queue_id, index) + pos * 4; + + *((volatile uint32_t *)(addr)) = data; + + return 0; +} + +static uint32_t +zxdh_np_dtb_queue_item_info_set(uint32_t dev_id, + uint32_t queue_id, + ZXDH_DTB_QUEUE_ITEM_INFO_T *p_item_info) +{ + ZXDH_DTB_QUEUE_LEN_T dtb_len = {0}; + uint32_t rc; + + dtb_len.cfg_dtb_cmd_type = p_item_info->cmd_type; + dtb_len.cfg_dtb_cmd_int_en = p_item_info->int_en; + dtb_len.cfg_queue_dtb_len = p_item_info->data_len; + + rc = zxdh_np_reg_write(dev_id, ZXDH_DTB_CFG_QUEUE_DTB_LEN, + 0, queue_id, (void *)&dtb_len); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rc, "dpp_reg_write"); + return rc; +} + +static uint32_t +zxdh_np_dtb_tab_down_info_set(uint32_t dev_id, + uint32_t queue_id, + uint32_t int_flag, + uint32_t data_len, + uint32_t *p_data, + uint32_t *p_item_index) +{ + ZXDH_DTB_QUEUE_ITEM_INFO_T item_info = {0}; + uint32_t unused_item_num = 0; + uint32_t queue_en = 0; + uint32_t ack_vale = 0; + uint64_t phy_addr; + uint32_t item_index; + uint32_t i; + uint32_t rc; + + if (ZXDH_DTB_QUEUE_INIT_FLAG_GET(dev_id, queue_id) == 0) { + PMD_DRV_LOG(ERR, "dtb queue %d is not init.", queue_id); + return ZXDH_RC_DTB_QUEUE_IS_NOT_INIT; + } + + if (data_len % 4 != 0) + return ZXDH_RC_DTB_PARA_INVALID; + + rc = zxdh_np_dtb_queue_enable_get(dev_id, queue_id, &queue_en); + if (!queue_en) { + PMD_DRV_LOG(ERR, "the queue %d is not enable!,rc=%d", queue_id, rc); + return ZXDH_RC_DTB_QUEUE_NOT_ENABLE; + } + + rc = zxdh_np_dtb_queue_unused_item_num_get(dev_id, queue_id, &unused_item_num); + if (unused_item_num == 0) + return ZXDH_RC_DTB_QUEUE_ITEM_HW_EMPTY; + + for (i = 0; i < ZXDH_DTB_QUEUE_ITEM_NUM_MAX; i++) { + item_index = ZXDH_DTB_TAB_DOWN_WR_INDEX_GET(dev_id, queue_id) % + ZXDH_DTB_QUEUE_ITEM_NUM_MAX; + + rc = zxdh_np_dtb_item_ack_rd(dev_id, queue_id, 0, + item_index, 0, &ack_vale); + + ZXDH_DTB_TAB_DOWN_WR_INDEX_GET(dev_id, queue_id)++; + + if ((ack_vale >> 8) == ZXDH_DTB_TAB_ACK_UNUSED_MASK) + break; + } + + if (i == ZXDH_DTB_QUEUE_ITEM_NUM_MAX) + return 
ZXDH_RC_DTB_QUEUE_ITEM_SW_EMPTY; + + rc = zxdh_np_dtb_item_buff_wr(dev_id, queue_id, 0, + item_index, 0, data_len, p_data); + + rc = zxdh_np_dtb_item_ack_wr(dev_id, queue_id, 0, + item_index, 0, ZXDH_DTB_TAB_ACK_IS_USING_MASK); + + item_info.cmd_vld = 1; + item_info.cmd_type = 0; + item_info.int_en = int_flag; + item_info.data_len = data_len / 4; + phy_addr = p_dpp_dtb_mgr[dev_id]->queue_info[queue_id].tab_down.start_phy_addr + + item_index * p_dpp_dtb_mgr[dev_id]->queue_info[queue_id].tab_down.item_size; + item_info.data_hddr = ((phy_addr >> 4) >> 32) & 0xffffffff; + item_info.data_laddr = (phy_addr >> 4) & 0xffffffff; + + rc = zxdh_np_dtb_queue_item_info_set(dev_id, queue_id, &item_info); + *p_item_index = item_index; + + return rc; +} + +static uint32_t +zxdh_np_dtb_write_down_table_data(uint32_t dev_id, + uint32_t queue_id, + uint32_t down_table_len, + uint8_t *p_down_table_buff, + uint32_t *p_element_id) +{ + uint32_t rc = 0; + uint32_t dtb_interrupt_status = 0; + + dtb_interrupt_status = g_dpp_dtb_int_enable; + + rc = zxdh_np_dtb_tab_down_info_set(dev_id, + queue_id, + dtb_interrupt_status, + down_table_len / 4, + (uint32_t *)p_down_table_buff, + p_element_id); + return rc; +} + +int +zxdh_np_dtb_table_entry_write(uint32_t dev_id, + uint32_t queue_id, + uint32_t entrynum, + ZXDH_DTB_USER_ENTRY_T *down_entries) +{ + ZXDH_DTB_USER_ENTRY_T *pentry = NULL; + ZXDH_DTB_ENTRY_T dtb_one_entry = {0}; + uint8_t entry_cmd[ZXDH_DTB_TABLE_CMD_SIZE_BIT] = {0}; + uint8_t entry_data[ZXDH_ETCAM_WIDTH_MAX] = {0}; + uint8_t *p_data_buff = NULL; + uint8_t *p_data_buff_ex = NULL; + uint32_t element_id = 0xff; + uint32_t one_dtb_len = 0; + uint32_t dtb_len = 0; + uint32_t entry_index; + uint32_t sdt_no; + uint32_t tbl_type; + uint32_t addr_offset; + uint32_t max_size; + uint32_t rc; + + p_data_buff = rte_zmalloc(NULL, ZXDH_DTB_TABLE_DATA_BUFF_SIZE, 0); + ZXDH_COMM_CHECK_POINT(p_data_buff); + + p_data_buff_ex = rte_zmalloc(NULL, ZXDH_DTB_TABLE_DATA_BUFF_SIZE, 0); + ZXDH_COMM_CHECK_POINT_MEMORY_FREE(p_data_buff_ex, p_data_buff); + + dtb_one_entry.cmd = entry_cmd; + dtb_one_entry.data = entry_data; + + max_size = (ZXDH_DTB_TABLE_DATA_BUFF_SIZE / 16) - 1; + + for (entry_index = 0; entry_index < entrynum; entry_index++) { + pentry = down_entries + entry_index; + sdt_no = pentry->sdt_no; + tbl_type = zxdh_np_sdt_tbl_type_get(dev_id, sdt_no); + switch (tbl_type) { + case ZXDH_SDT_TBLT_ERAM: + { + rc = zxdh_np_dtb_eram_one_entry(dev_id, sdt_no, ZXDH_DTB_ITEM_ADD_OR_UPDATE, + pentry->p_entry_data, &one_dtb_len, &dtb_one_entry); + break; + } + default: + { + PMD_DRV_LOG(ERR, "SDT table_type[ %d ] is invalid!", tbl_type); + rte_free(p_data_buff); + rte_free(p_data_buff_ex); + return 1; + } + } + + addr_offset = dtb_len * ZXDH_DTB_LEN_POS_SETP; + dtb_len += one_dtb_len; + if (dtb_len > max_size) { + rte_free(p_data_buff); + rte_free(p_data_buff_ex); + PMD_DRV_LOG(ERR, " %s error dtb_len>%u!", __func__, + max_size); + return ZXDH_RC_DTB_DOWN_LEN_INVALID; + } + rc = zxdh_np_dtb_data_write(p_data_buff, addr_offset, &dtb_one_entry); + memset(entry_cmd, 0x0, sizeof(entry_cmd)); + memset(entry_data, 0x0, sizeof(entry_data)); + } + + if (dtb_len == 0) { + rte_free(p_data_buff); + rte_free(p_data_buff_ex); + return ZXDH_RC_DTB_DOWN_LEN_INVALID; + } + + rc = zxdh_np_dtb_write_down_table_data(dev_id, + queue_id, + dtb_len * 16, + p_data_buff, + &element_id); + rte_free(p_data_buff); + rte_free(p_data_buff_ex); + + return rc; +} diff --git a/drivers/net/zxdh/zxdh_np.h b/drivers/net/zxdh/zxdh_np.h index dc0e867827..40961c02a2 
100644 --- a/drivers/net/zxdh/zxdh_np.h +++ b/drivers/net/zxdh/zxdh_np.h @@ -7,6 +7,8 @@ #include <stdint.h> +#define ZXDH_DISABLE (0) +#define ZXDH_ENABLE (1) #define ZXDH_PORT_NAME_MAX (32) #define ZXDH_DEV_CHANNEL_MAX (2) #define ZXDH_DEV_SDT_ID_MAX (256U) @@ -52,6 +54,94 @@ #define ZXDH_ACL_TBL_ID_NUM (8U) #define ZXDH_ACL_BLOCK_NUM (8U) +#define ZXDH_SMMU0_READ_REG_MAX_NUM (4) + +#define ZXDH_DTB_ITEM_ACK_SIZE (16) +#define ZXDH_DTB_ITEM_BUFF_SIZE (16 * 1024) +#define ZXDH_DTB_ITEM_SIZE (16 + 16 * 1024) +#define ZXDH_DTB_TAB_UP_SIZE ((16 + 16 * 1024) * 32) +#define ZXDH_DTB_TAB_DOWN_SIZE ((16 + 16 * 1024) * 32) + +#define ZXDH_DTB_TAB_UP_ACK_VLD_MASK (0x555555) +#define ZXDH_DTB_TAB_DOWN_ACK_VLD_MASK (0x5a5a5a) +#define ZXDH_DTB_TAB_ACK_IS_USING_MASK (0x11111100) +#define ZXDH_DTB_TAB_ACK_UNUSED_MASK (0x0) +#define ZXDH_DTB_TAB_ACK_SUCCESS_MASK (0xff) +#define ZXDH_DTB_TAB_ACK_FAILED_MASK (0x1) +#define ZXDH_DTB_TAB_ACK_CHECK_VALUE (0x12345678) + +#define ZXDH_DTB_TAB_ACK_VLD_SHIFT (104) +#define ZXDH_DTB_TAB_ACK_STATUS_SHIFT (96) +#define ZXDH_DTB_LEN_POS_SETP (16) +#define ZXDH_DTB_ITEM_ADD_OR_UPDATE (0) +#define ZXDH_DTB_ITEM_DELETE (1) + +#define ZXDH_ETCAM_LEN_SIZE (6) +#define ZXDH_ETCAM_BLOCK_NUM (8) +#define ZXDH_ETCAM_TBLID_NUM (8) +#define ZXDH_ETCAM_RAM_NUM (8) +#define ZXDH_ETCAM_RAM_WIDTH (80U) +#define ZXDH_ETCAM_WR_MASK_MAX (((uint32_t)1 << ZXDH_ETCAM_RAM_NUM) - 1) +#define ZXDH_ETCAM_WIDTH_MIN (ZXDH_ETCAM_RAM_WIDTH) +#define ZXDH_ETCAM_WIDTH_MAX (ZXDH_ETCAM_RAM_NUM * ZXDH_ETCAM_RAM_WIDTH) + +#define ZXDH_DTB_TABLE_DATA_BUFF_SIZE (16384) +#define ZXDH_DTB_TABLE_CMD_SIZE_BIT (128) + +#define ZXDH_SE_SMMU0_ERAM_BLOCK_NUM (32) +#define ZXDH_SE_SMMU0_ERAM_ADDR_NUM_PER_BLOCK (0x4000) +#define ZXDH_SE_SMMU0_ERAM_ADDR_NUM_TOTAL \ + (ZXDH_SE_SMMU0_ERAM_BLOCK_NUM * ZXDH_SE_SMMU0_ERAM_ADDR_NUM_PER_BLOCK) + +/**errco code */ +#define ZXDH_RC_BASE (0x1000U) +#define ZXDH_PARAMETER_CHK_BASE (ZXDH_RC_BASE | 0x200) +#define ZXDH_PAR_CHK_POINT_NULL (ZXDH_PARAMETER_CHK_BASE | 0x001) +#define ZXDH_PAR_CHK_ARGIN_ZERO (ZXDH_PARAMETER_CHK_BASE | 0x002) +#define ZXDH_PAR_CHK_ARGIN_OVERFLOW (ZXDH_PARAMETER_CHK_BASE | 0x003) +#define ZXDH_PAR_CHK_ARGIN_ERROR (ZXDH_PARAMETER_CHK_BASE | 0x004) +#define ZXDH_PAR_CHK_INVALID_INDEX (ZXDH_PARAMETER_CHK_BASE | 0x005) +#define ZXDH_PAR_CHK_INVALID_RANGE (ZXDH_PARAMETER_CHK_BASE | 0x006) +#define ZXDH_PAR_CHK_INVALID_DEV_ID (ZXDH_PARAMETER_CHK_BASE | 0x007) +#define ZXDH_PAR_CHK_INVALID_PARA (ZXDH_PARAMETER_CHK_BASE | 0x008) + +#define ZXDH_ERAM128_BADDR_MASK (0x3FFFF80) + +#define ZXDH_DTB_TABLE_MODE_ERAM (0) +#define ZXDH_DTB_TABLE_MODE_DDR (1) +#define ZXDH_DTB_TABLE_MODE_ZCAM (2) +#define ZXDH_DTB_TABLE_MODE_ETCAM (3) +#define ZXDH_DTB_TABLE_MODE_MC_HASH (4) +#define ZXDH_DTB_TABLE_VALID (1) + +/* DTB module error code */ +#define ZXDH_RC_DTB_BASE (0xd00) +#define ZXDH_RC_DTB_MGR_EXIST (ZXDH_RC_DTB_BASE | 0x0) +#define ZXDH_RC_DTB_MGR_NOT_EXIST (ZXDH_RC_DTB_BASE | 0x1) +#define ZXDH_RC_DTB_QUEUE_RES_EMPTY (ZXDH_RC_DTB_BASE | 0x2) +#define ZXDH_RC_DTB_QUEUE_BUFF_SIZE_ERR (ZXDH_RC_DTB_BASE | 0x3) +#define ZXDH_RC_DTB_QUEUE_ITEM_HW_EMPTY (ZXDH_RC_DTB_BASE | 0x4) +#define ZXDH_RC_DTB_QUEUE_ITEM_SW_EMPTY (ZXDH_RC_DTB_BASE | 0x5) +#define ZXDH_RC_DTB_TAB_UP_BUFF_EMPTY (ZXDH_RC_DTB_BASE | 0x6) +#define ZXDH_RC_DTB_TAB_DOWN_BUFF_EMPTY (ZXDH_RC_DTB_BASE | 0x7) +#define ZXDH_RC_DTB_TAB_UP_TRANS_ERR (ZXDH_RC_DTB_BASE | 0x8) +#define ZXDH_RC_DTB_TAB_DOWN_TRANS_ERR (ZXDH_RC_DTB_BASE | 0x9) +#define ZXDH_RC_DTB_QUEUE_IS_WORKING (ZXDH_RC_DTB_BASE | 0xa) +#define 
ZXDH_RC_DTB_QUEUE_IS_NOT_INIT (ZXDH_RC_DTB_BASE | 0xb) +#define ZXDH_RC_DTB_MEMORY_ALLOC_ERR (ZXDH_RC_DTB_BASE | 0xc) +#define ZXDH_RC_DTB_PARA_INVALID (ZXDH_RC_DTB_BASE | 0xd) +#define ZXDH_RC_DMA_RANGE_INVALID (ZXDH_RC_DTB_BASE | 0xe) +#define ZXDH_RC_DMA_RCV_DATA_EMPTY (ZXDH_RC_DTB_BASE | 0xf) +#define ZXDH_RC_DTB_LPM_INSERT_FAIL (ZXDH_RC_DTB_BASE | 0x10) +#define ZXDH_RC_DTB_LPM_DELETE_FAIL (ZXDH_RC_DTB_BASE | 0x11) +#define ZXDH_RC_DTB_DOWN_LEN_INVALID (ZXDH_RC_DTB_BASE | 0x12) +#define ZXDH_RC_DTB_DOWN_HASH_CONFLICT (ZXDH_RC_DTB_BASE | 0x13) +#define ZXDH_RC_DTB_QUEUE_NOT_ALLOC (ZXDH_RC_DTB_BASE | 0x14) +#define ZXDH_RC_DTB_QUEUE_NAME_ERROR (ZXDH_RC_DTB_BASE | 0x15) +#define ZXDH_RC_DTB_DUMP_SIZE_SMALL (ZXDH_RC_DTB_BASE | 0x16) +#define ZXDH_RC_DTB_SEARCH_VPORT_QUEUE_ZERO (ZXDH_RC_DTB_BASE | 0x17) +#define ZXDH_RC_DTB_QUEUE_NOT_ENABLE (ZXDH_RC_DTB_BASE | 0x18) + typedef enum zxdh_module_init_e { ZXDH_MODULE_INIT_NPPU = 0, ZXDH_MODULE_INIT_PPU, @@ -299,7 +389,127 @@ typedef struct zxdh_tlb_mgr_t { uint32_t pa_width; } ZXDH_TLB_MGR_T; +typedef enum zxdh_eram128_tbl_mode_e { + ZXDH_ERAM128_TBL_1b = 0, + ZXDH_ERAM128_TBL_32b = 1, + ZXDH_ERAM128_TBL_64b = 2, + ZXDH_ERAM128_TBL_128b = 3, + ZXDH_ERAM128_TBL_2b = 4, + ZXDH_ERAM128_TBL_4b = 5, + ZXDH_ERAM128_TBL_8b = 6, + ZXDH_ERAM128_TBL_16b = 7 +} ZXDH_ERAM128_TBL_MODE_E; + +typedef enum zxdh_eram128_opr_mode_e { + ZXDH_ERAM128_OPR_128b = 0, + ZXDH_ERAM128_OPR_64b = 1, + ZXDH_ERAM128_OPR_1b = 2, + ZXDH_ERAM128_OPR_32b = 3 + +} ZXDH_ERAM128_OPR_MODE_E; + +typedef enum zxdh_dtb_table_info_e { + ZXDH_DTB_TABLE_DDR = 0, + ZXDH_DTB_TABLE_ERAM_1 = 1, + ZXDH_DTB_TABLE_ERAM_64 = 2, + ZXDH_DTB_TABLE_ERAM_128 = 3, + ZXDH_DTB_TABLE_ZCAM = 4, + ZXDH_DTB_TABLE_ETCAM = 5, + ZXDH_DTB_TABLE_MC_HASH = 6, + ZXDH_DTB_TABLE_ENUM_MAX +} ZXDH_DTB_TABLE_INFO_E; + +typedef enum zxdh_sdt_table_type_e { + ZXDH_SDT_TBLT_INVALID = 0, + ZXDH_SDT_TBLT_ERAM = 1, + ZXDH_SDT_TBLT_DDR3 = 2, + ZXDH_SDT_TBLT_HASH = 3, + ZXDH_SDT_TBLT_LPM = 4, + ZXDH_SDT_TBLT_ETCAM = 5, + ZXDH_SDT_TBLT_PORTTBL = 6, + ZXDH_SDT_TBLT_MAX = 7, +} ZXDH_SDT_TABLE_TYPE_E; + +typedef struct zxdh_dtb_lpm_entry_t { + uint32_t dtb_len0; + uint8_t *p_data_buff0; + uint32_t dtb_len1; + uint8_t *p_data_buff1; +} ZXDH_DTB_LPM_ENTRY_T; + +typedef struct zxdh_dtb_entry_t { + uint8_t *cmd; + uint8_t *data; + uint32_t data_in_cmd_flag; + uint32_t data_size; +} ZXDH_DTB_ENTRY_T; + +typedef struct zxdh_dtb_eram_table_form_t { + uint32_t valid; + uint32_t type_mode; + uint32_t data_mode; + uint32_t cpu_wr; + uint32_t cpu_rd; + uint32_t cpu_rd_mode; + uint32_t addr; + uint32_t data_h; + uint32_t data_l; +} ZXDH_DTB_ERAM_TABLE_FORM_T; + +typedef struct zxdh_sdt_tbl_eram_t { + uint32_t table_type; + uint32_t eram_mode; + uint32_t eram_base_addr; + uint32_t eram_table_depth; + uint32_t eram_clutch_en; +} ZXDH_SDTTBL_ERAM_T; + +typedef union zxdh_endian_u { + unsigned int a; + unsigned char b; +} ZXDH_ENDIAN_U; + +typedef struct zxdh_dtb_field_t { + const char *p_name; + uint16_t lsb_pos; + uint16_t len; +} ZXDH_DTB_FIELD_T; + +typedef struct zxdh_dtb_table_t { + const char *table_type; + uint32_t table_no; + uint32_t field_num; + ZXDH_DTB_FIELD_T *p_fields; +} ZXDH_DTB_TABLE_T; + +typedef struct zxdh_dtb_queue_item_info_t { + uint32_t cmd_vld; + uint32_t cmd_type; + uint32_t int_en; + uint32_t data_len; + uint32_t data_laddr; + uint32_t data_hddr; +} ZXDH_DTB_QUEUE_ITEM_INFO_T; + +typedef struct zxdh_dtb_queue_len_t { + uint32_t cfg_dtb_cmd_type; + uint32_t cfg_dtb_cmd_int_en; + uint32_t cfg_queue_dtb_len; +} 
ZXDH_DTB_QUEUE_LEN_T; + +typedef struct zxdh_dtb_eram_entry_info_t { + uint32_t index; + uint32_t *p_data; +} ZXDH_DTB_ERAM_ENTRY_INFO_T; + +typedef struct zxdh_dtb_user_entry_t { + uint32_t sdt_no; + void *p_entry_data; +} ZXDH_DTB_USER_ENTRY_T; + int zxdh_np_host_init(uint32_t dev_id, ZXDH_DEV_INIT_CTRL_T *p_dev_init_ctrl); int zxdh_np_online_uninit(uint32_t dev_id, char *port_name, uint32_t queue_id); +int zxdh_np_dtb_table_entry_write(uint32_t dev_id, uint32_t queue_id, + uint32_t entrynum, ZXDH_DTB_USER_ENTRY_T *down_entries); #endif /* ZXDH_NP_H */ diff --git a/drivers/net/zxdh/zxdh_pci.h b/drivers/net/zxdh/zxdh_pci.h index d6487a574f..e3f13cb17d 100644 --- a/drivers/net/zxdh/zxdh_pci.h +++ b/drivers/net/zxdh/zxdh_pci.h @@ -12,6 +12,8 @@ #include "zxdh_ethdev.h" +#define ZXDH_PF_PCIE_ID(pcie_id) (((pcie_id) & 0xff00) | 1 << 11) + enum zxdh_msix_status { ZXDH_MSIX_NONE = 0, ZXDH_MSIX_DISABLED = 1, diff --git a/drivers/net/zxdh/zxdh_tables.c b/drivers/net/zxdh/zxdh_tables.c new file mode 100644 index 0000000000..91376e6ec0 --- /dev/null +++ b/drivers/net/zxdh/zxdh_tables.c @@ -0,0 +1,105 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#include "zxdh_ethdev.h" +#include "zxdh_msg.h" +#include "zxdh_np.h" +#include "zxdh_tables.h" +#include "zxdh_logs.h" + +#define ZXDH_SDT_VPORT_ATT_TABLE 1 +#define ZXDH_SDT_PANEL_ATT_TABLE 2 + +int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr) +{ + int ret = 0; + + ZXDH_DTB_ERAM_ENTRY_INFO_T entry = {vfid, (uint32_t *)port_attr}; + ZXDH_DTB_USER_ENTRY_T user_entry_write = {ZXDH_SDT_VPORT_ATT_TABLE, (void *)&entry}; + + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &user_entry_write); + if (ret != 0) + PMD_DRV_LOG(ERR, "write vport_att failed, vfid:%d", vfid); + + return ret; +} + +int +zxdh_port_attr_init(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_port_attr_table port_attr = {0}; + struct zxdh_msg_info msg_info = {0}; + int ret; + + if (hw->is_pf) { + port_attr.hit_flag = 1; + port_attr.phy_port = hw->phyport; + port_attr.pf_vfid = zxdh_vport_to_vfid(hw->vport); + port_attr.rss_enable = 0; + + port_attr.mtu = dev->data->mtu; + port_attr.mtu_enable = 1; + port_attr.is_up = 0; + if (!port_attr.rss_enable) + port_attr.port_base_qid = 0; + + ret = zxdh_set_port_attr(hw->vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "write port_attr failed"); + ret = -1; + } + } else { + struct zxdh_vf_init_msg *vf_init_msg = &msg_info.data.vf_init_msg; + + zxdh_msg_head_build(hw, ZXDH_VF_PORT_INIT, &msg_info); + vf_init_msg->link_up = 1; + vf_init_msg->base_qid = 0; + vf_init_msg->rss_enable = 0; + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "vf port_init failed"); + ret = -1; + } + } + return ret; +} + +int zxdh_panel_table_init(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + int ret; + + if (!hw->is_pf) + return 0; + + struct zxdh_panel_table panel; + + memset(&panel, 0, sizeof(panel)); + panel.hit_flag = 1; + panel.pf_vfid = zxdh_vport_to_vfid(hw->vport); + panel.mtu_enable = 1; + panel.mtu = dev->data->mtu; + + ZXDH_DTB_ERAM_ENTRY_INFO_T panel_entry = { + .index = hw->phyport, + .p_data = (uint32_t *)&panel + }; + ZXDH_DTB_USER_ENTRY_T entry = { + .sdt_no = ZXDH_SDT_PANEL_ATT_TABLE, + .p_entry_data = (void *)&panel_entry + }; + ret = 
zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, g_dtb_data.queueid, 1, &entry); + + if (ret) { + PMD_DRV_LOG(ERR, "Insert eram-panel failed, code:%u", ret); + ret = -1; + } + + return ret; +} diff --git a/drivers/net/zxdh/zxdh_tables.h b/drivers/net/zxdh/zxdh_tables.h new file mode 100644 index 0000000000..5d34af2f05 --- /dev/null +++ b/drivers/net/zxdh/zxdh_tables.h @@ -0,0 +1,148 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#ifndef ZXDH_TABLES_H +#define ZXDH_TABLES_H + +#include <stdint.h> + +extern struct zxdh_dtb_shared_data g_dtb_data; + +#define ZXDH_DEVICE_NO 0 + +struct zxdh_port_attr_table { +#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN + uint8_t byte4_rsv1: 1; + uint8_t ingress_meter_enable: 1; + uint8_t egress_meter_enable: 1; + uint8_t byte4_rsv2: 2; + uint8_t fd_enable: 1; + uint8_t vepa_enable: 1; + uint8_t spoof_check_enable: 1; + + uint8_t inline_sec_offload: 1; + uint8_t ovs_enable: 1; + uint8_t lag_enable: 1; + uint8_t is_passthrough: 1; + uint8_t is_vf: 1; + uint8_t virtion_version: 2; + uint8_t virtio_enable: 1; + + uint8_t accelerator_offload_flag: 1; + uint8_t lro_offload: 1; + uint8_t ip_fragment_offload: 1; + uint8_t tcp_udp_checksum_offload: 1; + uint8_t ip_checksum_offload: 1; + uint8_t outer_ip_checksum_offload: 1; + uint8_t is_up: 1; + uint8_t rsv1: 1; + + uint8_t rsv3 : 1; + uint8_t rdma_offload_enable: 1; + uint8_t vlan_filter_enable: 1; + uint8_t vlan_strip_offload: 1; + uint8_t qinq_valn_strip_offload: 1; + uint8_t rss_enable: 1; + uint8_t mtu_enable: 1; + uint8_t hit_flag: 1; + + uint16_t mtu; + + uint16_t port_base_qid : 12; + uint16_t hash_search_index : 3; + uint16_t rsv: 1; + + uint8_t rss_hash_factor; + + uint8_t hash_alg: 4; + uint8_t phy_port: 4; + + uint16_t lag_id : 3; + uint16_t pf_vfid : 11; + uint16_t ingress_tm_enable : 1; + uint16_t egress_tm_enable : 1; + + uint16_t tpid; + + uint16_t vhca : 10; + uint16_t uplink_port : 6; +#else + uint8_t rsv3 : 1; + uint8_t rdma_offload_enable: 1; + uint8_t vlan_filter_enable: 1; + uint8_t vlan_strip_offload: 1; + uint8_t qinq_valn_strip_offload: 1; + uint8_t rss_enable: 1; + uint8_t mtu_enable: 1; + uint8_t hit_flag: 1; + + uint8_t accelerator_offload_flag: 1; + uint8_t lro_offload: 1; + uint8_t ip_fragment_offload: 1; + uint8_t tcp_udp_checksum_offload: 1; + uint8_t ip_checksum_offload: 1; + uint8_t outer_ip_checksum_offload: 1; + uint8_t is_up: 1; + uint8_t rsv1: 1; + + uint8_t inline_sec_offload: 1; + uint8_t ovs_enable: 1; + uint8_t lag_enable: 1; + uint8_t is_passthrough: 1; + uint8_t is_vf: 1; + uint8_t virtion_version: 2; + uint8_t virtio_enable: 1; + + uint8_t byte4_rsv1: 1; + uint8_t ingress_meter_enable: 1; + uint8_t egress_meter_enable: 1; + uint8_t byte4_rsv2: 2; + uint8_t fd_enable: 1; + uint8_t vepa_enable: 1; + uint8_t spoof_check_enable: 1; + + uint16_t port_base_qid : 12; + uint16_t hash_search_index : 3; + uint16_t rsv: 1; + + uint16_t mtu; + + uint16_t lag_id : 3; + uint16_t pf_vfid : 11; + uint16_t ingress_tm_enable : 1; + uint16_t egress_tm_enable : 1; + + uint8_t hash_alg: 4; + uint8_t phy_port: 4; + + uint8_t rss_hash_factor; + + uint16_t tpid; + + uint16_t vhca : 10; + uint16_t uplink_port : 6; +#endif +}; + +struct zxdh_panel_table { + uint16_t port_vfid_1588 : 11, + rsv2 : 5; + uint16_t pf_vfid : 11, + rsv1 : 1, + enable_1588_tc : 2, + trust_mode : 1, + hit_flag : 1; + uint32_t mtu : 16, + mtu_enable : 1, + rsv : 3, + tm_base_queue : 12; + uint32_t rsv_1; + uint32_t rsv_2; +}; /* 16B */ + +int zxdh_port_attr_init(struct rte_eth_dev *dev); 
+int zxdh_panel_table_init(struct rte_eth_dev *dev); +int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); + +#endif /* ZXDH_TABLES_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 100335 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
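The DTB write path above always wraps a table update in two levels of descriptor: an ERAM entry carrying the row index plus a pointer to the raw payload, inside a user entry naming the target software-defined table (SDT). A rough, untested sketch of that composition (not part of the series; ZXDH_SDT_VPORT_ATT_TABLE is file-local to zxdh_tables.c, so a stand-in macro is used here):

#include <stdint.h>
#include "zxdh_np.h"      /* ZXDH_DTB_*_T, zxdh_np_dtb_table_entry_write() */
#include "zxdh_tables.h"  /* struct zxdh_port_attr_table, ZXDH_DEVICE_NO */

#define SKETCH_SDT_VPORT_ATT_TABLE 1  /* mirrors the file-local macro in zxdh_tables.c */

static int
sketch_write_vport_entry(uint32_t queue_id, uint16_t vfid,
		struct zxdh_port_attr_table *attr)
{
	/* inner descriptor: ERAM row index plus a pointer to the raw payload */
	ZXDH_DTB_ERAM_ENTRY_INFO_T eram = {
		.index  = vfid,
		.p_data = (uint32_t *)attr,
	};
	/* outer descriptor: which software-defined table (SDT) is targeted */
	ZXDH_DTB_USER_ENTRY_T user = {
		.sdt_no       = SKETCH_SDT_VPORT_ATT_TABLE,
		.p_entry_data = &eram,
	};

	return zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, queue_id, 1, &user);
}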
* [PATCH v4 04/15] net/zxdh: port tables unint implementations 2024-12-18 9:25 ` [PATCH v4 00/15] net/zxdh: updated net zxdh driver Junlong Wang ` (2 preceding siblings ...) 2024-12-18 9:25 ` [PATCH v4 03/15] net/zxdh: port tables init implementations Junlong Wang @ 2024-12-18 9:25 ` Junlong Wang 2024-12-18 9:25 ` [PATCH v4 05/15] net/zxdh: rx/tx queue setup and intr enable Junlong Wang ` (10 subsequent siblings) 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-18 9:25 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 8641 bytes --] delete port tables in host. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/zxdh_ethdev.c | 18 ++++++ drivers/net/zxdh/zxdh_msg.h | 1 + drivers/net/zxdh/zxdh_np.c | 103 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_np.h | 9 +++ drivers/net/zxdh/zxdh_tables.c | 33 ++++++++++- drivers/net/zxdh/zxdh_tables.h | 1 + 6 files changed, 164 insertions(+), 1 deletion(-) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index ff44816384..717a1d2b0b 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -887,12 +887,30 @@ zxdh_np_uninit(struct rte_eth_dev *dev) zxdh_np_dtb_data_res_free(hw); } +static int +zxdh_tables_uninit(struct rte_eth_dev *dev) +{ + int ret; + + ret = zxdh_port_attr_uninit(dev); + if (ret) + PMD_DRV_LOG(ERR, "zxdh_port_attr_uninit failed"); + + return ret; +} + static int zxdh_dev_close(struct rte_eth_dev *dev) { struct zxdh_hw *hw = dev->data->dev_private; int ret = 0; + ret = zxdh_tables_uninit(dev); + if (ret != 0) { + PMD_DRV_LOG(ERR, "%s :tables uninit %s failed ", __func__, dev->device->name); + return -1; + } + zxdh_intr_release(dev); zxdh_np_uninit(dev); zxdh_pci_reset(hw); diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index b7b17b8696..613ca71170 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -167,6 +167,7 @@ enum pciebar_layout_type { enum zxdh_msg_type { ZXDH_NULL = 0, ZXDH_VF_PORT_INIT = 1, + ZXDH_VF_PORT_UNINIT = 2, ZXDH_MSG_TYPE_END, }; diff --git a/drivers/net/zxdh/zxdh_np.c b/drivers/net/zxdh/zxdh_np.c index db536d96e3..99a7dc11b4 100644 --- a/drivers/net/zxdh/zxdh_np.c +++ b/drivers/net/zxdh/zxdh_np.c @@ -25,6 +25,7 @@ ZXDH_RISCV_DTB_MGR *p_riscv_dtb_queue_mgr[ZXDH_DEV_CHANNEL_MAX]; ZXDH_TLB_MGR_T *g_p_dpp_tlb_mgr[ZXDH_DEV_CHANNEL_MAX]; ZXDH_REG_T g_dpp_reg_info[4]; ZXDH_DTB_TABLE_T g_dpp_dtb_table_info[4]; +ZXDH_SDT_TBL_DATA_T g_sdt_info[ZXDH_DEV_CHANNEL_MAX][ZXDH_DEV_SDT_ID_MAX]; #define ZXDH_SDT_MGR_PTR_GET() (&g_sdt_mgr) #define ZXDH_SDT_SOFT_TBL_GET(id) (g_sdt_mgr.sdt_tbl_array[id]) @@ -1454,3 +1455,105 @@ zxdh_np_dtb_table_entry_write(uint32_t dev_id, return rc; } + +static uint32_t +zxdh_np_sdt_tbl_data_get(uint32_t dev_id, uint32_t sdt_no, ZXDH_SDT_TBL_DATA_T *p_sdt_data) +{ + uint32_t rc = 0; + + p_sdt_data->data_high32 = g_sdt_info[dev_id][sdt_no].data_high32; + p_sdt_data->data_low32 = g_sdt_info[dev_id][sdt_no].data_low32; + + return rc; +} + +int +zxdh_np_dtb_table_entry_delete(uint32_t dev_id, + uint32_t queue_id, + uint32_t entrynum, + ZXDH_DTB_USER_ENTRY_T *delete_entries) +{ + ZXDH_SDT_TBL_DATA_T sdt_tbl = {0}; + ZXDH_DTB_USER_ENTRY_T *pentry = NULL; + ZXDH_DTB_ENTRY_T dtb_one_entry = {0}; + uint8_t entry_cmd[ZXDH_DTB_TABLE_CMD_SIZE_BIT / 8] = {0}; + uint8_t entry_data[ZXDH_ETCAM_WIDTH_MAX / 8] = {0}; + uint8_t *p_data_buff = NULL; + uint8_t *p_data_buff_ex = NULL; + uint32_t tbl_type = 0; 
+ uint32_t element_id = 0xff; + uint32_t one_dtb_len = 0; + uint32_t dtb_len = 0; + uint32_t entry_index; + uint32_t sdt_no; + uint32_t addr_offset; + uint32_t max_size; + uint32_t rc; + + ZXDH_COMM_CHECK_POINT(delete_entries); + + p_data_buff = rte_calloc(NULL, 1, ZXDH_DTB_TABLE_DATA_BUFF_SIZE, 0); + ZXDH_COMM_CHECK_POINT(p_data_buff); + + p_data_buff_ex = rte_calloc(NULL, 1, ZXDH_DTB_TABLE_DATA_BUFF_SIZE, 0); + ZXDH_COMM_CHECK_POINT_MEMORY_FREE(p_data_buff_ex, p_data_buff); + + dtb_one_entry.cmd = entry_cmd; + dtb_one_entry.data = entry_data; + + max_size = (ZXDH_DTB_TABLE_DATA_BUFF_SIZE / 16) - 1; + + for (entry_index = 0; entry_index < entrynum; entry_index++) { + pentry = delete_entries + entry_index; + + sdt_no = pentry->sdt_no; + rc = zxdh_np_sdt_tbl_data_get(dev_id, sdt_no, &sdt_tbl); + switch (tbl_type) { + case ZXDH_SDT_TBLT_ERAM: + { + rc = zxdh_np_dtb_eram_one_entry(dev_id, sdt_no, ZXDH_DTB_ITEM_DELETE, + pentry->p_entry_data, &one_dtb_len, &dtb_one_entry); + break; + } + default: + { + PMD_DRV_LOG(ERR, "SDT table_type[ %d ] is invalid!", tbl_type); + rte_free(p_data_buff); + rte_free(p_data_buff_ex); + return 1; + } + } + + addr_offset = dtb_len * ZXDH_DTB_LEN_POS_SETP; + dtb_len += one_dtb_len; + if (dtb_len > max_size) { + rte_free(p_data_buff); + rte_free(p_data_buff_ex); + PMD_DRV_LOG(ERR, " %s error dtb_len>%u!", __func__, + max_size); + return ZXDH_RC_DTB_DOWN_LEN_INVALID; + } + + rc = zxdh_np_dtb_data_write(p_data_buff, addr_offset, &dtb_one_entry); + memset(entry_cmd, 0x0, sizeof(entry_cmd)); + memset(entry_data, 0x0, sizeof(entry_data)); + } + + if (dtb_len == 0) { + rte_free(p_data_buff); + rte_free(p_data_buff_ex); + return ZXDH_RC_DTB_DOWN_LEN_INVALID; + } + + rc = zxdh_np_dtb_write_down_table_data(dev_id, + queue_id, + dtb_len * 16, + p_data_buff_ex, + &element_id); + rte_free(p_data_buff); + ZXDH_COMM_CHECK_RC_MEMORY_FREE_NO_ASSERT(rc, + "dpp_dtb_write_down_table_data", p_data_buff_ex); + + rte_free(p_data_buff_ex); + return 0; +} diff --git a/drivers/net/zxdh/zxdh_np.h b/drivers/net/zxdh/zxdh_np.h index 40961c02a2..42a652dd6b 100644 --- a/drivers/net/zxdh/zxdh_np.h +++ b/drivers/net/zxdh/zxdh_np.h @@ -20,6 +20,8 @@ #define ZXDH_PPU_CLUSTER_NUM (6) #define ZXDH_PPU_INSTR_MEM_NUM (3) #define ZXDH_SDT_CFG_LEN (2) +#define ZXDH_SDT_H_TBL_TYPE_BT_POS (29) +#define ZXDH_SDT_H_TBL_TYPE_BT_LEN (3) #define ZXDH_RC_DEV_BASE (0x600) #define ZXDH_RC_DEV_PARA_INVALID (ZXDH_RC_DEV_BASE | 0x0) @@ -507,9 +509,16 @@ typedef struct zxdh_dtb_user_entry_t { void *p_entry_data; } ZXDH_DTB_USER_ENTRY_T; +typedef struct zxdh_sdt_tbl_data_t { + uint32_t data_high32; + uint32_t data_low32; +} ZXDH_SDT_TBL_DATA_T; + int zxdh_np_host_init(uint32_t dev_id, ZXDH_DEV_INIT_CTRL_T *p_dev_init_ctrl); int zxdh_np_online_uninit(uint32_t dev_id, char *port_name, uint32_t queue_id); int zxdh_np_dtb_table_entry_write(uint32_t dev_id, uint32_t queue_id, uint32_t entrynum, ZXDH_DTB_USER_ENTRY_T *down_entries); +int zxdh_np_dtb_table_entry_delete(uint32_t dev_id, uint32_t queue_id, + uint32_t entrynum, ZXDH_DTB_USER_ENTRY_T *delete_entries); #endif /* ZXDH_NP_H */ diff --git a/drivers/net/zxdh/zxdh_tables.c b/drivers/net/zxdh/zxdh_tables.c index 91376e6ec0..9fd184e612 100644 --- a/drivers/net/zxdh/zxdh_tables.c +++ b/drivers/net/zxdh/zxdh_tables.c @@ -11,7 +11,8 @@ #define ZXDH_SDT_VPORT_ATT_TABLE 1 #define ZXDH_SDT_PANEL_ATT_TABLE 2 -int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr) +int +zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr) { int ret = 
0; @@ -70,6 +71,36 @@ zxdh_port_attr_init(struct rte_eth_dev *dev) return ret; }; +int +zxdh_port_attr_uninit(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_info msg_info = {0}; + struct zxdh_port_attr_table port_attr = {0}; + int ret = 0; + + if (hw->is_pf == 1) { + ZXDH_DTB_ERAM_ENTRY_INFO_T port_attr_entry = {hw->vfid, (uint32_t *)&port_attr}; + ZXDH_DTB_USER_ENTRY_T entry = { + .sdt_no = ZXDH_SDT_VPORT_ATT_TABLE, + .p_entry_data = (void *)&port_attr_entry + }; + ret = zxdh_np_dtb_table_entry_delete(ZXDH_DEVICE_NO, g_dtb_data.queueid, 1, &entry); + if (ret) { + PMD_DRV_LOG(ERR, "delete port attr table failed"); + ret = -1; + } + } else { + zxdh_msg_head_build(hw, ZXDH_VF_PORT_UNINIT, &msg_info); + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "vf port tables uninit failed"); + ret = -1; + } + } + return ret; +} + int zxdh_panel_table_init(struct rte_eth_dev *dev) { struct zxdh_hw *hw = dev->data->dev_private; diff --git a/drivers/net/zxdh/zxdh_tables.h b/drivers/net/zxdh/zxdh_tables.h index 5d34af2f05..5e9b36faee 100644 --- a/drivers/net/zxdh/zxdh_tables.h +++ b/drivers/net/zxdh/zxdh_tables.h @@ -144,5 +144,6 @@ struct zxdh_panel_table { int zxdh_port_attr_init(struct rte_eth_dev *dev); int zxdh_panel_table_init(struct rte_eth_dev *dev); int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); +int zxdh_port_attr_uninit(struct rte_eth_dev *dev); #endif /* ZXDH_TABLES_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 18675 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
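Deletion reuses the same two-level descriptor and only swaps the entry point; zxdh_port_attr_uninit() above passes a zeroed row as the payload because the delete command only needs the index to locate the ERAM entry. An illustrative, untested sketch of that teardown shape:

#include <stdint.h>
#include "zxdh_np.h"      /* ZXDH_DTB_*_T, zxdh_np_dtb_table_entry_delete() */
#include "zxdh_tables.h"  /* struct zxdh_port_attr_table, ZXDH_DEVICE_NO */

static int
sketch_delete_vport_entry(uint32_t queue_id, uint16_t vfid)
{
	/* zeroed scratch row: only the index matters for a delete */
	struct zxdh_port_attr_table zeroed = {0};
	ZXDH_DTB_ERAM_ENTRY_INFO_T eram = { vfid, (uint32_t *)&zeroed };
	ZXDH_DTB_USER_ENTRY_T user = { 1 /* vport SDT, per zxdh_tables.c */, &eram };

	return zxdh_np_dtb_table_entry_delete(ZXDH_DEVICE_NO, queue_id, 1, &user);
}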
* [PATCH v4 05/15] net/zxdh: rx/tx queue setup and intr enable 2024-12-18 9:25 ` [PATCH v4 00/15] net/zxdh: updated net zxdh driver Junlong Wang ` (3 preceding siblings ...) 2024-12-18 9:25 ` [PATCH v4 04/15] net/zxdh: port tables unint implementations Junlong Wang @ 2024-12-18 9:25 ` Junlong Wang 2024-12-18 9:25 ` [PATCH v4 06/15] net/zxdh: dev start/stop ops implementations Junlong Wang ` (9 subsequent siblings) 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-18 9:25 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 7822 bytes --] rx/tx queue setup and intr enable implementations. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/zxdh_ethdev.c | 4 + drivers/net/zxdh/zxdh_queue.c | 149 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_queue.h | 33 ++++++++ 3 files changed, 186 insertions(+) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 717a1d2b0b..521d7ed433 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -933,6 +933,10 @@ static const struct eth_dev_ops zxdh_eth_dev_ops = { .dev_configure = zxdh_dev_configure, .dev_close = zxdh_dev_close, .dev_infos_get = zxdh_dev_infos_get, + .rx_queue_setup = zxdh_dev_rx_queue_setup, + .tx_queue_setup = zxdh_dev_tx_queue_setup, + .rx_queue_intr_enable = zxdh_dev_rx_queue_intr_enable, + .rx_queue_intr_disable = zxdh_dev_rx_queue_intr_disable, }; static int32_t diff --git a/drivers/net/zxdh/zxdh_queue.c b/drivers/net/zxdh/zxdh_queue.c index b4ef90ea36..af21f046ad 100644 --- a/drivers/net/zxdh/zxdh_queue.c +++ b/drivers/net/zxdh/zxdh_queue.c @@ -12,6 +12,11 @@ #include "zxdh_common.h" #include "zxdh_msg.h" +#define ZXDH_MBUF_MIN_SIZE sizeof(struct zxdh_net_hdr_dl) +#define ZXDH_MBUF_SIZE_4K 4096 +#define ZXDH_RX_FREE_THRESH 32 +#define ZXDH_TX_FREE_THRESH 32 + struct rte_mbuf * zxdh_queue_detach_unused(struct zxdh_virtqueue *vq) { @@ -125,3 +130,147 @@ zxdh_free_queues(struct rte_eth_dev *dev) return 0; } + +static int +zxdh_check_mempool(struct rte_mempool *mp, uint16_t offset, uint16_t min_length) +{ + uint16_t data_room_size; + + if (mp == NULL) + return -EINVAL; + data_room_size = rte_pktmbuf_data_room_size(mp); + if (data_room_size < offset + min_length) { + PMD_RX_LOG(ERR, + "%s mbuf_data_room_size %u < %u (%u + %u)", + mp->name, data_room_size, + offset + min_length, offset, min_length); + return -EINVAL; + } + return 0; +} + +int32_t +zxdh_dev_rx_queue_setup(struct rte_eth_dev *dev, + uint16_t queue_idx, + uint16_t nb_desc, + uint32_t socket_id __rte_unused, + const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mp) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint16_t vtpci_logic_qidx = 2 * queue_idx + ZXDH_RQ_QUEUE_IDX; + struct zxdh_virtqueue *vq = hw->vqs[vtpci_logic_qidx]; + int32_t ret = 0; + + if (rx_conf->rx_deferred_start) { + PMD_RX_LOG(ERR, "Rx deferred start is not supported"); + return -EINVAL; + } + uint16_t rx_free_thresh = rx_conf->rx_free_thresh; + + if (rx_free_thresh == 0) + rx_free_thresh = RTE_MIN(vq->vq_nentries / 4, ZXDH_RX_FREE_THRESH); + + /* rx_free_thresh must be multiples of four. */ + if (rx_free_thresh & 0x3) { + PMD_RX_LOG(ERR, "(rx_free_thresh=%u port=%u queue=%u)", + rx_free_thresh, dev->data->port_id, queue_idx); + return -EINVAL; + } + /* rx_free_thresh must be less than the number of RX entries */ + if (rx_free_thresh >= vq->vq_nentries) { + PMD_RX_LOG(ERR, "RX entries (%u). 
(rx_free_thresh=%u port=%u queue=%u)", + vq->vq_nentries, rx_free_thresh, dev->data->port_id, queue_idx); + return -EINVAL; + } + vq->vq_free_thresh = rx_free_thresh; + nb_desc = ZXDH_QUEUE_DEPTH; + + vq->vq_free_cnt = RTE_MIN(vq->vq_free_cnt, nb_desc); + struct zxdh_virtnet_rx *rxvq = &vq->rxq; + + rxvq->queue_id = vtpci_logic_qidx; + + int mbuf_min_size = ZXDH_MBUF_MIN_SIZE; + + if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) + mbuf_min_size = ZXDH_MBUF_SIZE_4K; + + ret = zxdh_check_mempool(mp, RTE_PKTMBUF_HEADROOM, mbuf_min_size); + if (ret != 0) { + PMD_RX_LOG(ERR, + "rxq setup but mpool size too small(<%d) failed", mbuf_min_size); + return -EINVAL; + } + rxvq->mpool = mp; + if (queue_idx < dev->data->nb_rx_queues) + dev->data->rx_queues[queue_idx] = rxvq; + + return 0; +} + +int32_t +zxdh_dev_tx_queue_setup(struct rte_eth_dev *dev, + uint16_t queue_idx, + uint16_t nb_desc, + uint32_t socket_id __rte_unused, + const struct rte_eth_txconf *tx_conf) +{ + uint16_t vtpci_logic_qidx = 2 * queue_idx + ZXDH_TQ_QUEUE_IDX; + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_virtqueue *vq = hw->vqs[vtpci_logic_qidx]; + struct zxdh_virtnet_tx *txvq = NULL; + uint16_t tx_free_thresh = 0; + + if (tx_conf->tx_deferred_start) { + PMD_TX_LOG(ERR, "Tx deferred start is not supported"); + return -EINVAL; + } + + nb_desc = ZXDH_QUEUE_DEPTH; + + vq->vq_free_cnt = RTE_MIN(vq->vq_free_cnt, nb_desc); + + txvq = &vq->txq; + txvq->queue_id = vtpci_logic_qidx; + + tx_free_thresh = tx_conf->tx_free_thresh; + if (tx_free_thresh == 0) + tx_free_thresh = RTE_MIN(vq->vq_nentries / 4, ZXDH_TX_FREE_THRESH); + + /* tx_free_thresh must be less than the number of TX entries minus 3 */ + if (tx_free_thresh >= (vq->vq_nentries - 3)) { + PMD_TX_LOG(ERR, "TX entries - 3 (%u). (tx_free_thresh=%u port=%u queue=%u)", + vq->vq_nentries - 3, tx_free_thresh, dev->data->port_id, queue_idx); + return -EINVAL; + } + + vq->vq_free_thresh = tx_free_thresh; + + if (queue_idx < dev->data->nb_tx_queues) + dev->data->tx_queues[queue_idx] = txvq; + + return 0; +} + +int32_t +zxdh_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_virtnet_rx *rxvq = dev->data->rx_queues[queue_id]; + struct zxdh_virtqueue *vq = rxvq->vq; + + zxdh_queue_enable_intr(vq); + zxdh_mb(hw->weak_barriers); + return 0; +} + +int32_t +zxdh_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id) +{ + struct zxdh_virtnet_rx *rxvq = dev->data->rx_queues[queue_id]; + struct zxdh_virtqueue *vq = rxvq->vq; + + zxdh_queue_disable_intr(vq); + return 0; +} diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h index 1304d5e4ea..2f602d894f 100644 --- a/drivers/net/zxdh/zxdh_queue.h +++ b/drivers/net/zxdh/zxdh_queue.h @@ -8,6 +8,7 @@ #include <stdint.h> #include <rte_common.h> +#include <rte_atomic.h> #include "zxdh_ethdev.h" #include "zxdh_rxtx.h" @@ -30,6 +31,7 @@ enum { ZXDH_VTNET_RQ = 0, ZXDH_VTNET_TQ = 1 }; #define ZXDH_RING_EVENT_FLAGS_DESC 0x2 #define ZXDH_VQ_RING_DESC_CHAIN_END 32768 +#define ZXDH_QUEUE_DEPTH 1024 /* * ring descriptors: 16 bytes. 
@@ -270,8 +272,39 @@ zxdh_queue_disable_intr(struct zxdh_virtqueue *vq) } } +static inline void +zxdh_queue_enable_intr(struct zxdh_virtqueue *vq) +{ + if (vq->vq_packed.event_flags_shadow == ZXDH_RING_EVENT_FLAGS_DISABLE) { + vq->vq_packed.event_flags_shadow = ZXDH_RING_EVENT_FLAGS_ENABLE; + vq->vq_packed.ring.driver->desc_event_flags = vq->vq_packed.event_flags_shadow; + } +} + +static inline void +zxdh_mb(uint8_t weak_barriers) +{ + if (weak_barriers) + rte_atomic_thread_fence(rte_memory_order_seq_cst); + else + rte_mb(); +} + struct rte_mbuf *zxdh_queue_detach_unused(struct zxdh_virtqueue *vq); int32_t zxdh_free_queues(struct rte_eth_dev *dev); int32_t zxdh_get_queue_type(uint16_t vtpci_queue_idx); +int32_t zxdh_dev_tx_queue_setup(struct rte_eth_dev *dev, + uint16_t queue_idx, + uint16_t nb_desc, + uint32_t socket_id __rte_unused, + const struct rte_eth_txconf *tx_conf); +int32_t zxdh_dev_rx_queue_setup(struct rte_eth_dev *dev, + uint16_t queue_idx, + uint16_t nb_desc, + uint32_t socket_id __rte_unused, + const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mp); +int32_t zxdh_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id); +int32_t zxdh_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id); #endif /* ZXDH_QUEUE_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 17339 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
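The rx_free_thresh rules enforced in zxdh_dev_rx_queue_setup() above can be restated compactly. The following standalone sketch (not driver code; 32 stands in for ZXDH_RX_FREE_THRESH) returns the effective threshold or -EINVAL:

#include <errno.h>
#include <stdint.h>
#include <rte_common.h>  /* RTE_MIN */

static int
sketch_rx_free_thresh(uint16_t requested, uint16_t vq_nentries)
{
	uint16_t thresh = requested;

	if (thresh == 0)            /* 0 means "pick a default" */
		thresh = RTE_MIN(vq_nentries / 4, 32);
	if (thresh & 0x3)           /* must be a multiple of four */
		return -EINVAL;
	if (thresh >= vq_nentries)  /* must leave room in the ring */
		return -EINVAL;
	return thresh;
}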
* [PATCH v4 06/15] net/zxdh: dev start/stop ops implementations 2024-12-18 9:25 ` [PATCH v4 00/15] net/zxdh: updated net zxdh driver Junlong Wang ` (4 preceding siblings ...) 2024-12-18 9:25 ` [PATCH v4 05/15] net/zxdh: rx/tx queue setup and intr enable Junlong Wang @ 2024-12-18 9:25 ` Junlong Wang 2024-12-21 0:51 ` Stephen Hemminger 2024-12-18 9:25 ` [PATCH v4 07/15] net/zxdh: provided dev simple tx implementations Junlong Wang ` (8 subsequent siblings) 14 siblings, 1 reply; 225+ messages in thread From: Junlong Wang @ 2024-12-18 9:25 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 13892 bytes --] dev start/stop implementations, start/stop the rx/tx queues. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- doc/guides/nics/features/zxdh.ini | 2 + doc/guides/nics/zxdh.rst | 2 + drivers/net/zxdh/zxdh_ethdev.c | 61 +++++++++++++++++++++ drivers/net/zxdh/zxdh_pci.c | 24 ++++++++ drivers/net/zxdh/zxdh_pci.h | 1 + drivers/net/zxdh/zxdh_queue.c | 91 +++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_queue.h | 69 +++++++++++++++++++++++ drivers/net/zxdh/zxdh_rxtx.h | 14 ++--- 8 files changed, 256 insertions(+), 8 deletions(-) diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index 05c8091ed7..7b72be5f25 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -7,3 +7,5 @@ Linux = Y x86-64 = Y ARMv8 = Y +SR-IOV = Y +Multiprocess aware = Y diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index 2144753d75..eb970a888f 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -18,6 +18,8 @@ Features Features of the ZXDH PMD are: - Multi arch support: x86_64, ARMv8. +- Multiple queues for TX and RX +- SR-IOV VF Driver compilation and testing diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 521d7ed433..59ee942bdd 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -899,12 +899,35 @@ zxdh_tables_uninit(struct rte_eth_dev *dev) return ret; } +static int +zxdh_dev_stop(struct rte_eth_dev *dev) +{ + int ret = 0; + + if (dev->data->dev_started == 0) + return 0; + + ret = zxdh_intr_disable(dev); + if (ret) { + PMD_DRV_LOG(ERR, "intr disable failed"); + return -EINVAL; + } + + return 0; +} + static int zxdh_dev_close(struct rte_eth_dev *dev) { struct zxdh_hw *hw = dev->data->dev_private; int ret = 0; + ret = zxdh_dev_stop(dev); + if (ret != 0) { + PMD_DRV_LOG(ERR, " stop port %s failed.", dev->device->name); + return -1; + } + ret = zxdh_tables_uninit(dev); if (ret != 0) { PMD_DRV_LOG(ERR, "%s :tables uninit %s failed ", __func__, dev->device->name); @@ -928,9 +951,47 @@ zxdh_dev_close(struct rte_eth_dev *dev) return ret; } +static int +zxdh_dev_start(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_virtqueue *vq; + int32_t ret; + uint16_t logic_qidx; + uint16_t i; + + for (i = 0; i < dev->data->nb_rx_queues; i++) { + logic_qidx = 2 * i + ZXDH_RQ_QUEUE_IDX; + ret = zxdh_dev_rx_queue_setup_finish(dev, logic_qidx); + if (ret < 0) + return ret; + } + ret = zxdh_intr_enable(dev); + if (ret) { + PMD_DRV_LOG(ERR, "interrupt enable failed"); + return -EINVAL; + } + + for (i = 0; i < dev->data->nb_rx_queues; i++) { + logic_qidx = 2 * i + ZXDH_RQ_QUEUE_IDX; + vq = hw->vqs[logic_qidx]; + /* Flush the old packets */ + zxdh_queue_rxvq_flush(vq); + zxdh_queue_notify(vq); + } + for (i = 0; i < dev->data->nb_tx_queues; i++) { + logic_qidx = 2 * i + 
ZXDH_TQ_QUEUE_IDX; + vq = hw->vqs[logic_qidx]; + zxdh_queue_notify(vq); + } + return 0; +} + /* dev_ops for zxdh, bare necessities for basic operation */ static const struct eth_dev_ops zxdh_eth_dev_ops = { .dev_configure = zxdh_dev_configure, + .dev_start = zxdh_dev_start, + .dev_stop = zxdh_dev_stop, .dev_close = zxdh_dev_close, .dev_infos_get = zxdh_dev_infos_get, .rx_queue_setup = zxdh_dev_rx_queue_setup, diff --git a/drivers/net/zxdh/zxdh_pci.c b/drivers/net/zxdh/zxdh_pci.c index 250e67d560..83164a5c79 100644 --- a/drivers/net/zxdh/zxdh_pci.c +++ b/drivers/net/zxdh/zxdh_pci.c @@ -202,6 +202,29 @@ zxdh_del_queue(struct zxdh_hw *hw, struct zxdh_virtqueue *vq) rte_write16(0, &hw->common_cfg->queue_enable); } +static void +zxdh_notify_queue(struct zxdh_hw *hw, struct zxdh_virtqueue *vq) +{ + uint32_t notify_data = 0; + + if (!zxdh_pci_with_feature(hw, ZXDH_F_NOTIFICATION_DATA)) { + rte_write16(vq->vq_queue_index, vq->notify_addr); + return; + } + + if (zxdh_pci_with_feature(hw, ZXDH_F_RING_PACKED)) { + notify_data = ((uint32_t)(!!(vq->vq_packed.cached_flags & + ZXDH_VRING_PACKED_DESC_F_AVAIL)) << 31) | + ((uint32_t)vq->vq_avail_idx << 16) | + vq->vq_queue_index; + } else { + notify_data = ((uint32_t)vq->vq_avail_idx << 16) | vq->vq_queue_index; + } + PMD_DRV_LOG(DEBUG, "queue:%d notify_data 0x%x notify_addr 0x%p", + vq->vq_queue_index, notify_data, vq->notify_addr); + rte_write32(notify_data, vq->notify_addr); +} + const struct zxdh_pci_ops zxdh_dev_pci_ops = { .read_dev_cfg = zxdh_read_dev_config, .write_dev_cfg = zxdh_write_dev_config, @@ -216,6 +239,7 @@ const struct zxdh_pci_ops zxdh_dev_pci_ops = { .set_queue_num = zxdh_set_queue_num, .setup_queue = zxdh_setup_queue, .del_queue = zxdh_del_queue, + .notify_queue = zxdh_notify_queue, }; uint8_t diff --git a/drivers/net/zxdh/zxdh_pci.h b/drivers/net/zxdh/zxdh_pci.h index e3f13cb17d..5c5f72b90e 100644 --- a/drivers/net/zxdh/zxdh_pci.h +++ b/drivers/net/zxdh/zxdh_pci.h @@ -144,6 +144,7 @@ struct zxdh_pci_ops { int32_t (*setup_queue)(struct zxdh_hw *hw, struct zxdh_virtqueue *vq); void (*del_queue)(struct zxdh_hw *hw, struct zxdh_virtqueue *vq); + void (*notify_queue)(struct zxdh_hw *hw, struct zxdh_virtqueue *vq); }; struct zxdh_hw_internal { diff --git a/drivers/net/zxdh/zxdh_queue.c b/drivers/net/zxdh/zxdh_queue.c index af21f046ad..8c8f2605f6 100644 --- a/drivers/net/zxdh/zxdh_queue.c +++ b/drivers/net/zxdh/zxdh_queue.c @@ -274,3 +274,94 @@ zxdh_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id) zxdh_queue_disable_intr(vq); return 0; } + +int32_t zxdh_enqueue_recv_refill_packed(struct zxdh_virtqueue *vq, + struct rte_mbuf **cookie, uint16_t num) +{ + struct zxdh_vring_packed_desc *start_dp = vq->vq_packed.ring.desc; + struct zxdh_hw *hw = vq->hw; + struct zxdh_vq_desc_extra *dxp; + uint16_t flags = vq->vq_packed.cached_flags; + int32_t i; + uint16_t idx; + + for (i = 0; i < num; i++) { + idx = vq->vq_avail_idx; + dxp = &vq->vq_descx[idx]; + dxp->cookie = (void *)cookie[i]; + dxp->ndescs = 1; + /* rx pkt fill in data_off */ + start_dp[idx].addr = rte_mbuf_iova_get(cookie[i]) + RTE_PKTMBUF_HEADROOM; + start_dp[idx].len = cookie[i]->buf_len - RTE_PKTMBUF_HEADROOM; + vq->vq_desc_head_idx = dxp->next; + if (vq->vq_desc_head_idx == ZXDH_VQ_RING_DESC_CHAIN_END) + vq->vq_desc_tail_idx = vq->vq_desc_head_idx; + zxdh_queue_store_flags_packed(&start_dp[idx], flags, hw->weak_barriers); + if (++vq->vq_avail_idx >= vq->vq_nentries) { + vq->vq_avail_idx -= vq->vq_nentries; + vq->vq_packed.cached_flags ^= 
ZXDH_VRING_PACKED_DESC_F_AVAIL_USED; + flags = vq->vq_packed.cached_flags; + } + } + vq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt - num); + return 0; +} + +int32_t zxdh_dev_rx_queue_setup_finish(struct rte_eth_dev *dev, uint16_t logic_qidx) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_virtqueue *vq = hw->vqs[logic_qidx]; + struct zxdh_virtnet_rx *rxvq = &vq->rxq; + uint16_t desc_idx; + int32_t error = 0; + + /* Allocate blank mbufs for the each rx descriptor */ + memset(&rxvq->fake_mbuf, 0, sizeof(rxvq->fake_mbuf)); + for (desc_idx = 0; desc_idx < ZXDH_MBUF_BURST_SZ; desc_idx++) + vq->sw_ring[vq->vq_nentries + desc_idx] = &rxvq->fake_mbuf; + + while (!zxdh_queue_full(vq)) { + uint16_t free_cnt = vq->vq_free_cnt; + + free_cnt = RTE_MIN(ZXDH_MBUF_BURST_SZ, free_cnt); + struct rte_mbuf *new_pkts[free_cnt]; + + if (likely(rte_pktmbuf_alloc_bulk(rxvq->mpool, new_pkts, free_cnt) == 0)) { + error = zxdh_enqueue_recv_refill_packed(vq, new_pkts, free_cnt); + if (unlikely(error)) { + int32_t i; + for (i = 0; i < free_cnt; i++) + rte_pktmbuf_free(new_pkts[i]); + } + } else { + PMD_DRV_LOG(ERR, "port %d rxq %d allocated bufs from %s failed", + hw->port_id, logic_qidx, rxvq->mpool->name); + break; + } + } + return 0; +} + +void zxdh_queue_rxvq_flush(struct zxdh_virtqueue *vq) +{ + struct zxdh_vq_desc_extra *dxp = NULL; + uint16_t i = 0; + struct zxdh_vring_packed_desc *descs = vq->vq_packed.ring.desc; + int32_t cnt = 0; + + i = vq->vq_used_cons_idx; + while (zxdh_desc_used(&descs[i], vq) && cnt++ < vq->vq_nentries) { + dxp = &vq->vq_descx[descs[i].id]; + if (dxp->cookie != NULL) { + rte_pktmbuf_free(dxp->cookie); + dxp->cookie = NULL; + } + vq->vq_free_cnt++; + vq->vq_used_cons_idx++; + if (vq->vq_used_cons_idx >= vq->vq_nentries) { + vq->vq_used_cons_idx -= vq->vq_nentries; + vq->vq_packed.used_wrap_counter ^= 1; + } + i = vq->vq_used_cons_idx; + } +} diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h index 2f602d894f..6513aec3f0 100644 --- a/drivers/net/zxdh/zxdh_queue.h +++ b/drivers/net/zxdh/zxdh_queue.h @@ -25,6 +25,11 @@ enum { ZXDH_VTNET_RQ = 0, ZXDH_VTNET_TQ = 1 }; #define ZXDH_VRING_DESC_F_WRITE 2 /* This flag means the descriptor was made available by the driver */ #define ZXDH_VRING_PACKED_DESC_F_AVAIL (1 << (7)) +#define ZXDH_VRING_PACKED_DESC_F_USED (1 << (15)) + +/* Frequently used combinations */ +#define ZXDH_VRING_PACKED_DESC_F_AVAIL_USED \ + (ZXDH_VRING_PACKED_DESC_F_AVAIL | ZXDH_VRING_PACKED_DESC_F_USED) #define ZXDH_RING_EVENT_FLAGS_ENABLE 0x0 #define ZXDH_RING_EVENT_FLAGS_DISABLE 0x1 @@ -33,6 +38,9 @@ enum { ZXDH_VTNET_RQ = 0, ZXDH_VTNET_TQ = 1 }; #define ZXDH_VQ_RING_DESC_CHAIN_END 32768 #define ZXDH_QUEUE_DEPTH 1024 +#define ZXDH_RQ_QUEUE_IDX 0 +#define ZXDH_TQ_QUEUE_IDX 1 + /* * ring descriptors: 16 bytes. * These can chain together via "next". 
@@ -290,6 +298,63 @@ zxdh_mb(uint8_t weak_barriers) rte_mb(); } +static inline int32_t +zxdh_queue_full(const struct zxdh_virtqueue *vq) +{ + return (vq->vq_free_cnt == 0); +} + +static inline void +zxdh_queue_store_flags_packed(struct zxdh_vring_packed_desc *dp, + uint16_t flags, uint8_t weak_barriers) + { + if (weak_barriers) { + #ifdef RTE_ARCH_X86_64 + rte_io_wmb(); + dp->flags = flags; + #else + rte_atomic_store_explicit(&dp->flags, flags, rte_memory_order_release); + #endif + } else { + rte_io_wmb(); + dp->flags = flags; + } +} + +static inline uint16_t +zxdh_queue_fetch_flags_packed(struct zxdh_vring_packed_desc *dp, + uint8_t weak_barriers) + { + uint16_t flags; + if (weak_barriers) { + #ifdef RTE_ARCH_X86_64 + flags = dp->flags; + rte_io_rmb(); + #else + flags = rte_atomic_load_explicit(&dp->flags, rte_memory_order_acquire); + #endif + } else { + flags = dp->flags; + rte_io_rmb(); + } + + return flags; +} + +static inline int32_t +zxdh_desc_used(struct zxdh_vring_packed_desc *desc, struct zxdh_virtqueue *vq) +{ + uint16_t flags = zxdh_queue_fetch_flags_packed(desc, vq->hw->weak_barriers); + uint16_t used = !!(flags & ZXDH_VRING_PACKED_DESC_F_USED); + uint16_t avail = !!(flags & ZXDH_VRING_PACKED_DESC_F_AVAIL); + return avail == used && used == vq->vq_packed.used_wrap_counter; +} + +static inline void zxdh_queue_notify(struct zxdh_virtqueue *vq) +{ + ZXDH_VTPCI_OPS(vq->hw)->notify_queue(vq->hw, vq); +} + struct rte_mbuf *zxdh_queue_detach_unused(struct zxdh_virtqueue *vq); int32_t zxdh_free_queues(struct rte_eth_dev *dev); int32_t zxdh_get_queue_type(uint16_t vtpci_queue_idx); @@ -306,5 +371,9 @@ int32_t zxdh_dev_rx_queue_setup(struct rte_eth_dev *dev, struct rte_mempool *mp); int32_t zxdh_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id); int32_t zxdh_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id); +int32_t zxdh_dev_rx_queue_setup_finish(struct rte_eth_dev *dev, uint16_t logic_qidx); +void zxdh_queue_rxvq_flush(struct zxdh_virtqueue *vq); +int32_t zxdh_enqueue_recv_refill_packed(struct zxdh_virtqueue *vq, + struct rte_mbuf **cookie, uint16_t num); #endif /* ZXDH_QUEUE_H */ diff --git a/drivers/net/zxdh/zxdh_rxtx.h b/drivers/net/zxdh/zxdh_rxtx.h index de9353b223..8c7f734805 100644 --- a/drivers/net/zxdh/zxdh_rxtx.h +++ b/drivers/net/zxdh/zxdh_rxtx.h @@ -20,21 +20,19 @@ struct zxdh_virtnet_stats { uint64_t size_bins[8]; }; -struct zxdh_virtnet_rx { +struct __rte_cache_aligned zxdh_virtnet_rx { struct zxdh_virtqueue *vq; - - /* dummy mbuf, for wraparound when processing RX ring. */ - struct rte_mbuf fake_mbuf; - uint64_t mbuf_initializer; /* value to init mbufs. */ struct rte_mempool *mpool; /* mempool for mbuf allocation */ uint16_t queue_id; /* DPDK queue index. */ uint16_t port_id; /* Device port identifier. */ struct zxdh_virtnet_stats stats; const struct rte_memzone *mz; /* mem zone to populate RX ring. */ -} __rte_packed; + /* dummy mbuf, for wraparound when processing RX ring. */ + struct rte_mbuf fake_mbuf; +}; -struct zxdh_virtnet_tx { +struct __rte_cache_aligned zxdh_virtnet_tx { struct zxdh_virtqueue *vq; const struct rte_memzone *zxdh_net_hdr_mz; /* memzone to populate hdr. */ rte_iova_t zxdh_net_hdr_mem; /* hdr for each xmit packet */ @@ -42,6 +40,6 @@ struct zxdh_virtnet_tx { uint16_t port_id; /* Device port identifier. */ struct zxdh_virtnet_stats stats; const struct rte_memzone *mz; /* mem zone to populate TX ring. 
*/ -} __rte_packed; +}; #endif /* ZXDH_RXTX_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 31827 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
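The ownership test behind zxdh_queue_rxvq_flush() and the cleanup loops is the standard packed-ring convention: a descriptor has been consumed by the device once its AVAIL and USED bits agree with each other and with the queue's current used wrap counter. A minimal restatement of zxdh_desc_used(), using stand-in flag macros with the values from zxdh_queue.h:

#include <stdint.h>

#define SKETCH_F_AVAIL (1 << 7)   /* ZXDH_VRING_PACKED_DESC_F_AVAIL */
#define SKETCH_F_USED  (1 << 15)  /* ZXDH_VRING_PACKED_DESC_F_USED  */

static inline int
sketch_desc_is_used(uint16_t flags, uint16_t used_wrap_counter)
{
	uint16_t avail = !!(flags & SKETCH_F_AVAIL);
	uint16_t used  = !!(flags & SKETCH_F_USED);

	/* device has written the descriptor back in the current wrap */
	return avail == used && used == used_wrap_counter;
}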
* Re: [PATCH v4 06/15] net/zxdh: dev start/stop ops implementations 2024-12-18 9:25 ` [PATCH v4 06/15] net/zxdh: dev start/stop ops implementations Junlong Wang @ 2024-12-21 0:51 ` Stephen Hemminger 0 siblings, 0 replies; 225+ messages in thread From: Stephen Hemminger @ 2024-12-21 0:51 UTC (permalink / raw) To: Junlong Wang; +Cc: dev On Wed, 18 Dec 2024 17:25:53 +0800 Junlong Wang <wang.junlong1@zte.com.cn> wrote: > +static void > +zxdh_notify_queue(struct zxdh_hw *hw, struct zxdh_virtqueue *vq) > +{ > + uint32_t notify_data = 0; > + > + if (!zxdh_pci_with_feature(hw, ZXDH_F_NOTIFICATION_DATA)) { > + rte_write16(vq->vq_queue_index, vq->notify_addr); > + return; > + } > + > + if (zxdh_pci_with_feature(hw, ZXDH_F_RING_PACKED)) { > + notify_data = ((uint32_t)(!!(vq->vq_packed.cached_flags & > + ZXDH_VRING_PACKED_DESC_F_AVAIL)) << 31) | > + ((uint32_t)vq->vq_avail_idx << 16) | > + vq->vq_queue_index; > + } else { > + notify_data = ((uint32_t)vq->vq_avail_idx << 16) | vq->vq_queue_index; > + } > + PMD_DRV_LOG(DEBUG, "queue:%d notify_data 0x%x notify_addr 0x%p", > + vq->vq_queue_index, notify_data, vq->notify_addr); > + rte_write32(notify_data, vq->notify_addr); > +} Looks like the notify_data part could be simplified to: notify_data = ((uint32_t)vq->vq_avail_idx << 16) | vq->vq_queue_index; if (zxdh_pci_with_feature(hw, ZXDH_F_RING_PACKED) && (vq->vq_packed.cached_flags & ZXDH_VRING_PACKED_DESC_F_AVAIL)) notify_data |= RTE_BIT32(31); ^ permalink raw reply [flat|nested] 225+ messages in thread
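Folding that suggestion back into the v4 function could look like the following untested sketch (RTE_BIT32 comes from <rte_bitops.h>; the debug log line is dropped for brevity):

static void
zxdh_notify_queue(struct zxdh_hw *hw, struct zxdh_virtqueue *vq)
{
	uint32_t notify_data;

	if (!zxdh_pci_with_feature(hw, ZXDH_F_NOTIFICATION_DATA)) {
		rte_write16(vq->vq_queue_index, vq->notify_addr);
		return;
	}

	/* bits 15:0 queue index, bits 30:16 avail index, bit 31 avail wrap flag */
	notify_data = ((uint32_t)vq->vq_avail_idx << 16) | vq->vq_queue_index;
	if (zxdh_pci_with_feature(hw, ZXDH_F_RING_PACKED) &&
			(vq->vq_packed.cached_flags & ZXDH_VRING_PACKED_DESC_F_AVAIL))
		notify_data |= RTE_BIT32(31);

	rte_write32(notify_data, vq->notify_addr);
}

The rewrite makes the notify_data layout explicit and collapses the packed/split cases into one expression plus a single conditional bit.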
* [PATCH v4 07/15] net/zxdh: provided dev simple tx implementations 2024-12-18 9:25 ` [PATCH v4 00/15] net/zxdh: updated net zxdh driver Junlong Wang ` (5 preceding siblings ...) 2024-12-18 9:25 ` [PATCH v4 06/15] net/zxdh: dev start/stop ops implementations Junlong Wang @ 2024-12-18 9:25 ` Junlong Wang 2024-12-18 9:25 ` [PATCH v4 08/15] net/zxdh: provided dev simple rx implementations Junlong Wang ` (7 subsequent siblings) 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-18 9:25 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 18451 bytes --] provided dev simple tx implementations. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/meson.build | 1 + drivers/net/zxdh/zxdh_ethdev.c | 21 ++ drivers/net/zxdh/zxdh_queue.h | 26 ++- drivers/net/zxdh/zxdh_rxtx.c | 396 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_rxtx.h | 4 + 5 files changed, 447 insertions(+), 1 deletion(-) create mode 100644 drivers/net/zxdh/zxdh_rxtx.c diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build index 5b3af87c5b..20b2cf484a 100644 --- a/drivers/net/zxdh/meson.build +++ b/drivers/net/zxdh/meson.build @@ -21,4 +21,5 @@ sources = files( 'zxdh_queue.c', 'zxdh_np.c', 'zxdh_tables.c', + 'zxdh_rxtx.c', ) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 59ee942bdd..b6057edeaf 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -15,6 +15,7 @@ #include "zxdh_queue.h" #include "zxdh_np.h" #include "zxdh_tables.h" +#include "zxdh_rxtx.h" struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; struct zxdh_shared_data *zxdh_shared_data; @@ -951,6 +952,25 @@ zxdh_dev_close(struct rte_eth_dev *dev) return ret; } +static int32_t +zxdh_set_rxtx_funcs(struct rte_eth_dev *eth_dev) +{ + struct zxdh_hw *hw = eth_dev->data->dev_private; + + if (!zxdh_pci_packed_queue(hw)) { + PMD_DRV_LOG(ERR, " port %u not support packed queue", eth_dev->data->port_id); + return -1; + } + if (!zxdh_pci_with_feature(hw, ZXDH_NET_F_MRG_RXBUF)) { + PMD_DRV_LOG(ERR, " port %u not support rx mergeable", eth_dev->data->port_id); + return -1; + } + eth_dev->tx_pkt_prepare = zxdh_xmit_pkts_prepare; + eth_dev->tx_pkt_burst = &zxdh_xmit_pkts_packed; + + return 0; +} + static int zxdh_dev_start(struct rte_eth_dev *dev) { @@ -966,6 +986,7 @@ zxdh_dev_start(struct rte_eth_dev *dev) if (ret < 0) return ret; } + zxdh_set_rxtx_funcs(dev); ret = zxdh_intr_enable(dev); if (ret) { PMD_DRV_LOG(ERR, "interrupt enable failed"); diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h index 6513aec3f0..9343df81ac 100644 --- a/drivers/net/zxdh/zxdh_queue.h +++ b/drivers/net/zxdh/zxdh_queue.h @@ -21,8 +21,15 @@ enum { ZXDH_VTNET_RQ = 0, ZXDH_VTNET_TQ = 1 }; #define ZXDH_TQ_QUEUE_IDX 1 #define ZXDH_MAX_TX_INDIRECT 8 +/* This marks a buffer as continuing via the next field. */ +#define ZXDH_VRING_DESC_F_NEXT 1 + /* This marks a buffer as write-only (otherwise read-only). */ -#define ZXDH_VRING_DESC_F_WRITE 2 +#define ZXDH_VRING_DESC_F_WRITE 2 + +/* This means the buffer contains a list of buffer descriptors. 
*/ +#define ZXDH_VRING_DESC_F_INDIRECT 4 + /* This flag means the descriptor was made available by the driver */ #define ZXDH_VRING_PACKED_DESC_F_AVAIL (1 << (7)) #define ZXDH_VRING_PACKED_DESC_F_USED (1 << (15)) @@ -35,11 +42,17 @@ enum { ZXDH_VTNET_RQ = 0, ZXDH_VTNET_TQ = 1 }; #define ZXDH_RING_EVENT_FLAGS_DISABLE 0x1 #define ZXDH_RING_EVENT_FLAGS_DESC 0x2 +#define ZXDH_RING_F_INDIRECT_DESC 28 + #define ZXDH_VQ_RING_DESC_CHAIN_END 32768 #define ZXDH_QUEUE_DEPTH 1024 #define ZXDH_RQ_QUEUE_IDX 0 #define ZXDH_TQ_QUEUE_IDX 1 +#define ZXDH_TYPE_HDR_SIZE sizeof(struct zxdh_type_hdr) +#define ZXDH_PI_HDR_SIZE sizeof(struct zxdh_pi_hdr) +#define ZXDH_DL_NET_HDR_SIZE sizeof(struct zxdh_net_hdr_dl) +#define ZXDH_UL_NET_HDR_SIZE sizeof(struct zxdh_net_hdr_ul) /* * ring descriptors: 16 bytes. @@ -355,6 +368,17 @@ static inline void zxdh_queue_notify(struct zxdh_virtqueue *vq) ZXDH_VTPCI_OPS(vq->hw)->notify_queue(vq->hw, vq); } +static inline int32_t +zxdh_queue_kick_prepare_packed(struct zxdh_virtqueue *vq) +{ + uint16_t flags = 0; + + zxdh_mb(vq->hw->weak_barriers); + flags = vq->vq_packed.ring.device->desc_event_flags; + + return (flags != ZXDH_RING_EVENT_FLAGS_DISABLE); +} + struct rte_mbuf *zxdh_queue_detach_unused(struct zxdh_virtqueue *vq); int32_t zxdh_free_queues(struct rte_eth_dev *dev); int32_t zxdh_get_queue_type(uint16_t vtpci_queue_idx); diff --git a/drivers/net/zxdh/zxdh_rxtx.c b/drivers/net/zxdh/zxdh_rxtx.c new file mode 100644 index 0000000000..10034a0e98 --- /dev/null +++ b/drivers/net/zxdh/zxdh_rxtx.c @@ -0,0 +1,396 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#include <stdint.h> +#include <stdalign.h> + +#include <rte_net.h> + +#include "zxdh_logs.h" +#include "zxdh_pci.h" +#include "zxdh_queue.h" + +#define ZXDH_PKT_FORM_CPU 0x20 /* 1-cpu 0-np */ +#define ZXDH_NO_IP_FRAGMENT 0x2000 /* ip fragment flag */ +#define ZXDH_NO_IPID_UPDATE 0x4000 /* ipid update flag */ + +#define ZXDH_PI_L3TYPE_IP 0x00 +#define ZXDH_PI_L3TYPE_IPV6 0x40 +#define ZXDH_PI_L3TYPE_NOIP 0x80 +#define ZXDH_PI_L3TYPE_RSV 0xC0 +#define ZXDH_PI_L3TYPE_MASK 0xC0 + +#define ZXDH_PCODE_MASK 0x1F +#define ZXDH_PCODE_IP_PKT_TYPE 0x01 +#define ZXDH_PCODE_TCP_PKT_TYPE 0x02 +#define ZXDH_PCODE_UDP_PKT_TYPE 0x03 +#define ZXDH_PCODE_NO_IP_PKT_TYPE 0x09 +#define ZXDH_PCODE_NO_REASSMBLE_TCP_PKT_TYPE 0x0C + +#define ZXDH_TX_MAX_SEGS 31 +#define ZXDH_RX_MAX_SEGS 31 + +static void +zxdh_xmit_cleanup_inorder_packed(struct zxdh_virtqueue *vq, int32_t num) +{ + uint16_t used_idx = 0; + uint16_t id = 0; + uint16_t curr_id = 0; + uint16_t free_cnt = 0; + uint16_t size = vq->vq_nentries; + struct zxdh_vring_packed_desc *desc = vq->vq_packed.ring.desc; + struct zxdh_vq_desc_extra *dxp = NULL; + + used_idx = vq->vq_used_cons_idx; + /* desc_is_used has a load-acquire or rte_io_rmb inside + * and wait for used desc in virtqueue. 
+ */ + while (num > 0 && zxdh_desc_used(&desc[used_idx], vq)) { + id = desc[used_idx].id; + do { + curr_id = used_idx; + dxp = &vq->vq_descx[used_idx]; + used_idx += dxp->ndescs; + free_cnt += dxp->ndescs; + num -= dxp->ndescs; + if (used_idx >= size) { + used_idx -= size; + vq->vq_packed.used_wrap_counter ^= 1; + } + if (dxp->cookie != NULL) { + rte_pktmbuf_free(dxp->cookie); + dxp->cookie = NULL; + } + } while (curr_id != id); + } + vq->vq_used_cons_idx = used_idx; + vq->vq_free_cnt += free_cnt; +} + +static void +zxdh_ring_free_id_packed(struct zxdh_virtqueue *vq, uint16_t id) +{ + struct zxdh_vq_desc_extra *dxp = NULL; + + dxp = &vq->vq_descx[id]; + vq->vq_free_cnt += dxp->ndescs; + + if (vq->vq_desc_tail_idx == ZXDH_VQ_RING_DESC_CHAIN_END) + vq->vq_desc_head_idx = id; + else + vq->vq_descx[vq->vq_desc_tail_idx].next = id; + + vq->vq_desc_tail_idx = id; + dxp->next = ZXDH_VQ_RING_DESC_CHAIN_END; +} + +static void +zxdh_xmit_cleanup_normal_packed(struct zxdh_virtqueue *vq, int32_t num) +{ + uint16_t used_idx = 0; + uint16_t id = 0; + uint16_t size = vq->vq_nentries; + struct zxdh_vring_packed_desc *desc = vq->vq_packed.ring.desc; + struct zxdh_vq_desc_extra *dxp = NULL; + + used_idx = vq->vq_used_cons_idx; + /* desc_is_used has a load-acquire or rte_io_rmb inside + * and wait for used desc in virtqueue. + */ + while (num-- && zxdh_desc_used(&desc[used_idx], vq)) { + id = desc[used_idx].id; + dxp = &vq->vq_descx[id]; + vq->vq_used_cons_idx += dxp->ndescs; + if (vq->vq_used_cons_idx >= size) { + vq->vq_used_cons_idx -= size; + vq->vq_packed.used_wrap_counter ^= 1; + } + zxdh_ring_free_id_packed(vq, id); + if (dxp->cookie != NULL) { + rte_pktmbuf_free(dxp->cookie); + dxp->cookie = NULL; + } + used_idx = vq->vq_used_cons_idx; + } +} + +static void +zxdh_xmit_cleanup_packed(struct zxdh_virtqueue *vq, int32_t num, int32_t in_order) +{ + if (in_order) + zxdh_xmit_cleanup_inorder_packed(vq, num); + else + zxdh_xmit_cleanup_normal_packed(vq, num); +} + +static uint8_t +zxdh_xmit_get_ptype(struct rte_mbuf *m) +{ + uint8_t pcode = ZXDH_PCODE_NO_IP_PKT_TYPE; + uint8_t l3_ptype = ZXDH_PI_L3TYPE_NOIP; + + if ((m->packet_type & RTE_PTYPE_INNER_L3_MASK) == RTE_PTYPE_INNER_L3_IPV4 || + ((!(m->packet_type & RTE_PTYPE_TUNNEL_MASK)) && + (m->packet_type & RTE_PTYPE_L3_MASK) == RTE_PTYPE_L3_IPV4)) { + l3_ptype = ZXDH_PI_L3TYPE_IP; + pcode = ZXDH_PCODE_IP_PKT_TYPE; + } else if ((m->packet_type & RTE_PTYPE_INNER_L3_MASK) == RTE_PTYPE_INNER_L3_IPV6 || + ((!(m->packet_type & RTE_PTYPE_TUNNEL_MASK)) && + (m->packet_type & RTE_PTYPE_L3_MASK) == RTE_PTYPE_L3_IPV6)) { + l3_ptype = ZXDH_PI_L3TYPE_IPV6; + pcode = ZXDH_PCODE_IP_PKT_TYPE; + } else { + goto end; + } + + if ((m->packet_type & RTE_PTYPE_INNER_L4_MASK) == RTE_PTYPE_INNER_L4_TCP || + ((!(m->packet_type & RTE_PTYPE_TUNNEL_MASK)) && + (m->packet_type & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_TCP)) + pcode = ZXDH_PCODE_TCP_PKT_TYPE; + else if ((m->packet_type & RTE_PTYPE_INNER_L4_MASK) == RTE_PTYPE_INNER_L4_UDP || + ((!(m->packet_type & RTE_PTYPE_TUNNEL_MASK)) && + (m->packet_type & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_UDP)) + pcode = ZXDH_PCODE_UDP_PKT_TYPE; + +end: + return l3_ptype | ZXDH_PKT_FORM_CPU | pcode; +} + +static void zxdh_xmit_fill_net_hdr(struct rte_mbuf *cookie, + struct zxdh_net_hdr_dl *hdr) +{ + uint16_t pkt_flag_lw16 = ZXDH_NO_IPID_UPDATE; + uint16_t l3_offset; + uint32_t ol_flag = 0; + + hdr->pi_hdr.pkt_flag_lw16 = rte_be_to_cpu_16(pkt_flag_lw16); + + hdr->pi_hdr.pkt_type = zxdh_xmit_get_ptype(cookie); + l3_offset = ZXDH_DL_NET_HDR_SIZE + 
cookie->outer_l2_len + + cookie->outer_l3_len + cookie->l2_len; + hdr->pi_hdr.l3_offset = rte_be_to_cpu_16(l3_offset); + hdr->pi_hdr.l4_offset = rte_be_to_cpu_16(l3_offset + cookie->l3_len); + + hdr->pd_hdr.ol_flag = rte_be_to_cpu_32(ol_flag); +} + +static inline void zxdh_enqueue_xmit_packed_fast(struct zxdh_virtnet_tx *txvq, + struct rte_mbuf *cookie, int32_t in_order) +{ + struct zxdh_virtqueue *vq = txvq->vq; + uint16_t id = in_order ? vq->vq_avail_idx : vq->vq_desc_head_idx; + struct zxdh_vq_desc_extra *dxp = &vq->vq_descx[id]; + uint16_t flags = vq->vq_packed.cached_flags; + struct zxdh_net_hdr_dl *hdr = NULL; + + dxp->ndescs = 1; + dxp->cookie = cookie; + hdr = rte_pktmbuf_mtod_offset(cookie, struct zxdh_net_hdr_dl *, -ZXDH_DL_NET_HDR_SIZE); + zxdh_xmit_fill_net_hdr(cookie, hdr); + + uint16_t idx = vq->vq_avail_idx; + struct zxdh_vring_packed_desc *dp = &vq->vq_packed.ring.desc[idx]; + + dp->addr = rte_pktmbuf_iova(cookie) - ZXDH_DL_NET_HDR_SIZE; + dp->len = cookie->data_len + ZXDH_DL_NET_HDR_SIZE; + dp->id = id; + if (++vq->vq_avail_idx >= vq->vq_nentries) { + vq->vq_avail_idx -= vq->vq_nentries; + vq->vq_packed.cached_flags ^= ZXDH_VRING_PACKED_DESC_F_AVAIL_USED; + } + vq->vq_free_cnt--; + if (!in_order) { + vq->vq_desc_head_idx = dxp->next; + if (vq->vq_desc_head_idx == ZXDH_VQ_RING_DESC_CHAIN_END) + vq->vq_desc_tail_idx = ZXDH_VQ_RING_DESC_CHAIN_END; + } + zxdh_queue_store_flags_packed(dp, flags, vq->hw->weak_barriers); +} + +static inline void zxdh_enqueue_xmit_packed(struct zxdh_virtnet_tx *txvq, + struct rte_mbuf *cookie, + uint16_t needed, + int32_t use_indirect, + int32_t in_order) +{ + struct zxdh_tx_region *txr = txvq->zxdh_net_hdr_mz->addr; + struct zxdh_virtqueue *vq = txvq->vq; + struct zxdh_vring_packed_desc *start_dp = vq->vq_packed.ring.desc; + void *hdr = NULL; + uint16_t head_idx = vq->vq_avail_idx; + uint16_t idx = head_idx; + uint16_t prev = head_idx; + uint16_t head_flags = cookie->next ? ZXDH_VRING_DESC_F_NEXT : 0; + uint16_t seg_num = cookie->nb_segs; + uint16_t id = in_order ? vq->vq_avail_idx : vq->vq_desc_head_idx; + struct zxdh_vring_packed_desc *head_dp = &vq->vq_packed.ring.desc[idx]; + struct zxdh_vq_desc_extra *dxp = &vq->vq_descx[id]; + + dxp->ndescs = needed; + dxp->cookie = cookie; + head_flags |= vq->vq_packed.cached_flags; + /* if offload disabled, it is not zeroed below, do it now */ + + if (use_indirect) { + /** + * setup tx ring slot to point to indirect + * descriptor list stored in reserved region. + * the first slot in indirect ring is already + * preset to point to the header in reserved region + **/ + start_dp[idx].addr = + txvq->zxdh_net_hdr_mem + RTE_PTR_DIFF(&txr[idx].tx_packed_indir, txr); + start_dp[idx].len = (seg_num + 1) * sizeof(struct zxdh_vring_packed_desc); + /* Packed descriptor id needs to be restored when inorder. */ + if (in_order) + start_dp[idx].id = idx; + + /* reset flags for indirect desc */ + head_flags = ZXDH_VRING_DESC_F_INDIRECT; + head_flags |= vq->vq_packed.cached_flags; + hdr = (void *)&txr[idx].tx_hdr; + /* loop below will fill in rest of the indirect elements */ + start_dp = txr[idx].tx_packed_indir; + start_dp->len = ZXDH_DL_NET_HDR_SIZE; /* update actual net or type hdr size */ + idx = 1; + } else { + /* setup first tx ring slot to point to header stored in reserved region. 
*/ + start_dp[idx].addr = txvq->zxdh_net_hdr_mem + RTE_PTR_DIFF(&txr[idx].tx_hdr, txr); + start_dp[idx].len = ZXDH_DL_NET_HDR_SIZE; + head_flags |= ZXDH_VRING_DESC_F_NEXT; + hdr = (void *)&txr[idx].tx_hdr; + idx++; + if (idx >= vq->vq_nentries) { + idx -= vq->vq_nentries; + vq->vq_packed.cached_flags ^= ZXDH_VRING_PACKED_DESC_F_AVAIL_USED; + } + } + zxdh_xmit_fill_net_hdr(cookie, (struct zxdh_net_hdr_dl *)hdr); + + do { + start_dp[idx].addr = rte_pktmbuf_iova(cookie); + start_dp[idx].len = cookie->data_len; + if (likely(idx != head_idx)) { + uint16_t flags = cookie->next ? ZXDH_VRING_DESC_F_NEXT : 0; + flags |= vq->vq_packed.cached_flags; + start_dp[idx].flags = flags; + } + prev = idx; + idx++; + if (idx >= vq->vq_nentries) { + idx -= vq->vq_nentries; + vq->vq_packed.cached_flags ^= ZXDH_VRING_PACKED_DESC_F_AVAIL_USED; + } + } while ((cookie = cookie->next) != NULL); + start_dp[prev].id = id; + if (use_indirect) { + idx = head_idx; + if (++idx >= vq->vq_nentries) { + idx -= vq->vq_nentries; + vq->vq_packed.cached_flags ^= ZXDH_VRING_PACKED_DESC_F_AVAIL_USED; + } + } + vq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt - needed); + vq->vq_avail_idx = idx; + if (!in_order) { + vq->vq_desc_head_idx = dxp->next; + if (vq->vq_desc_head_idx == ZXDH_VQ_RING_DESC_CHAIN_END) + vq->vq_desc_tail_idx = ZXDH_VQ_RING_DESC_CHAIN_END; + } + zxdh_queue_store_flags_packed(head_dp, head_flags, vq->hw->weak_barriers); +} + +uint16_t +zxdh_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) +{ + struct zxdh_virtnet_tx *txvq = tx_queue; + struct zxdh_virtqueue *vq = txvq->vq; + struct zxdh_hw *hw = vq->hw; + uint16_t nb_tx = 0; + + bool in_order = zxdh_pci_with_feature(hw, ZXDH_F_IN_ORDER); + + if (nb_pkts > vq->vq_free_cnt) + zxdh_xmit_cleanup_packed(vq, nb_pkts - vq->vq_free_cnt, in_order); + for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) { + struct rte_mbuf *txm = tx_pkts[nb_tx]; + int32_t can_push = 0; + int32_t use_indirect = 0; + int32_t slots = 0; + int32_t need = 0; + + /* optimize ring usage */ + if ((zxdh_pci_with_feature(hw, ZXDH_F_ANY_LAYOUT) || + zxdh_pci_with_feature(hw, ZXDH_F_VERSION_1)) && + rte_mbuf_refcnt_read(txm) == 1 && + RTE_MBUF_DIRECT(txm) && + txm->nb_segs == 1 && + rte_pktmbuf_headroom(txm) >= ZXDH_DL_NET_HDR_SIZE && + rte_is_aligned(rte_pktmbuf_mtod(txm, char *), + alignof(struct zxdh_net_hdr_dl))) { + can_push = 1; + } else if (zxdh_pci_with_feature(hw, ZXDH_RING_F_INDIRECT_DESC) && + txm->nb_segs < ZXDH_MAX_TX_INDIRECT) { + use_indirect = 1; + } + /** + * How many main ring entries are needed to this Tx? + * indirect => 1 + * any_layout => number of segments + * default => number of segments + 1 + **/ + slots = use_indirect ? 1 : (txm->nb_segs + !can_push); + need = slots - vq->vq_free_cnt; + /* Positive value indicates it need free vring descriptors */ + if (unlikely(need > 0)) { + zxdh_xmit_cleanup_packed(vq, need, in_order); + need = slots - vq->vq_free_cnt; + if (unlikely(need > 0)) { + PMD_TX_LOG(ERR, "port[ep:%d, pf:%d, vf:%d, vfid:%d, pcieid:%d], queue:%d[pch:%d]. 
No free desc to xmit", + hw->vport.epid, hw->vport.pfid, hw->vport.vfid, + hw->vfid, hw->pcie_id, txvq->queue_id, + hw->channel_context[txvq->queue_id].ph_chno); + break; + } + } + /* Enqueue Packet buffers */ + if (can_push) + zxdh_enqueue_xmit_packed_fast(txvq, txm, in_order); + else + zxdh_enqueue_xmit_packed(txvq, txm, slots, use_indirect, in_order); + } + if (likely(nb_tx)) { + if (unlikely(zxdh_queue_kick_prepare_packed(vq))) { + zxdh_queue_notify(vq); + PMD_TX_LOG(DEBUG, "Notified backend after xmit"); + } + } + return nb_tx; +} + +uint16_t zxdh_xmit_pkts_prepare(void *tx_queue __rte_unused, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts) +{ + uint16_t nb_tx; + + for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) { + struct rte_mbuf *m = tx_pkts[nb_tx]; + int32_t error; + +#ifdef RTE_LIBRTE_ETHDEV_DEBUG + error = rte_validate_tx_offload(m); + if (unlikely(error)) { + rte_errno = -error; + break; + } +#endif + + error = rte_net_intel_cksum_prepare(m); + if (unlikely(error)) { + rte_errno = -error; + break; + } + } + return nb_tx; +} diff --git a/drivers/net/zxdh/zxdh_rxtx.h b/drivers/net/zxdh/zxdh_rxtx.h index 8c7f734805..0a02d319b2 100644 --- a/drivers/net/zxdh/zxdh_rxtx.h +++ b/drivers/net/zxdh/zxdh_rxtx.h @@ -42,4 +42,8 @@ struct __rte_cache_aligned zxdh_virtnet_tx { const struct rte_memzone *mz; /* mem zone to populate TX ring. */ }; +uint16_t zxdh_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); +uint16_t zxdh_xmit_pkts_prepare(void *tx_queue __rte_unused, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts); + #endif /* ZXDH_RXTX_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 45245 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
* [PATCH v4 08/15] net/zxdh: provided dev simple rx implementations 2024-12-18 9:25 ` [PATCH v4 00/15] net/zxdh: updated net zxdh driver Junlong Wang ` (6 preceding siblings ...) 2024-12-18 9:25 ` [PATCH v4 07/15] net/zxdh: provided dev simple tx implementations Junlong Wang @ 2024-12-18 9:25 ` Junlong Wang 2024-12-18 9:25 ` [PATCH v4 09/15] net/zxdh: link info update, set link up/down Junlong Wang ` (6 subsequent siblings) 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-18 9:25 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 11243 bytes --] provided dev simple rx implementations. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- doc/guides/nics/features/zxdh.ini | 1 + doc/guides/nics/zxdh.rst | 1 + drivers/net/zxdh/zxdh_ethdev.c | 1 + drivers/net/zxdh/zxdh_rxtx.c | 313 ++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_rxtx.h | 2 + 5 files changed, 318 insertions(+) diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index 7b72be5f25..bb44e93fad 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -9,3 +9,4 @@ x86-64 = Y ARMv8 = Y SR-IOV = Y Multiprocess aware = Y +Scattered Rx = Y diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index eb970a888f..f42db9c1f1 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -20,6 +20,7 @@ Features of the ZXDH PMD are: - Multi arch support: x86_64, ARMv8. - Multiple queues for TX and RX - SR-IOV VF +- Scattered and gather for TX and RX Driver compilation and testing diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index b6057edeaf..0d63129d8d 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -967,6 +967,7 @@ zxdh_set_rxtx_funcs(struct rte_eth_dev *eth_dev) } eth_dev->tx_pkt_prepare = zxdh_xmit_pkts_prepare; eth_dev->tx_pkt_burst = &zxdh_xmit_pkts_packed; + eth_dev->rx_pkt_burst = &zxdh_recv_pkts_packed; return 0; } diff --git a/drivers/net/zxdh/zxdh_rxtx.c b/drivers/net/zxdh/zxdh_rxtx.c index 10034a0e98..06290d48bb 100644 --- a/drivers/net/zxdh/zxdh_rxtx.c +++ b/drivers/net/zxdh/zxdh_rxtx.c @@ -31,6 +31,93 @@ #define ZXDH_TX_MAX_SEGS 31 #define ZXDH_RX_MAX_SEGS 31 +uint32_t zxdh_outer_l2_type[16] = { + 0, + RTE_PTYPE_L2_ETHER, + RTE_PTYPE_L2_ETHER_TIMESYNC, + RTE_PTYPE_L2_ETHER_ARP, + RTE_PTYPE_L2_ETHER_LLDP, + RTE_PTYPE_L2_ETHER_NSH, + RTE_PTYPE_L2_ETHER_VLAN, + RTE_PTYPE_L2_ETHER_QINQ, + RTE_PTYPE_L2_ETHER_PPPOE, + RTE_PTYPE_L2_ETHER_FCOE, + RTE_PTYPE_L2_ETHER_MPLS, +}; + +uint32_t zxdh_outer_l3_type[16] = { + 0, + RTE_PTYPE_L3_IPV4, + RTE_PTYPE_L3_IPV4_EXT, + RTE_PTYPE_L3_IPV6, + RTE_PTYPE_L3_IPV4_EXT_UNKNOWN, + RTE_PTYPE_L3_IPV6_EXT, + RTE_PTYPE_L3_IPV6_EXT_UNKNOWN, +}; + +uint32_t zxdh_outer_l4_type[16] = { + 0, + RTE_PTYPE_L4_TCP, + RTE_PTYPE_L4_UDP, + RTE_PTYPE_L4_FRAG, + RTE_PTYPE_L4_SCTP, + RTE_PTYPE_L4_ICMP, + RTE_PTYPE_L4_NONFRAG, + RTE_PTYPE_L4_IGMP, +}; + +uint32_t zxdh_tunnel_type[16] = { + 0, + RTE_PTYPE_TUNNEL_IP, + RTE_PTYPE_TUNNEL_GRE, + RTE_PTYPE_TUNNEL_VXLAN, + RTE_PTYPE_TUNNEL_NVGRE, + RTE_PTYPE_TUNNEL_GENEVE, + RTE_PTYPE_TUNNEL_GRENAT, + RTE_PTYPE_TUNNEL_GTPC, + RTE_PTYPE_TUNNEL_GTPU, + RTE_PTYPE_TUNNEL_ESP, + RTE_PTYPE_TUNNEL_L2TP, + RTE_PTYPE_TUNNEL_VXLAN_GPE, + RTE_PTYPE_TUNNEL_MPLS_IN_GRE, + RTE_PTYPE_TUNNEL_MPLS_IN_UDP, +}; + +uint32_t zxdh_inner_l2_type[16] = { + 0, + RTE_PTYPE_INNER_L2_ETHER, + 0, + 0, + 0, + 0, + RTE_PTYPE_INNER_L2_ETHER_VLAN, + 
RTE_PTYPE_INNER_L2_ETHER_QINQ, + 0, + 0, + 0, +}; + +uint32_t zxdh_inner_l3_type[16] = { + 0, + RTE_PTYPE_INNER_L3_IPV4, + RTE_PTYPE_INNER_L3_IPV4_EXT, + RTE_PTYPE_INNER_L3_IPV6, + RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN, + RTE_PTYPE_INNER_L3_IPV6_EXT, + RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN, +}; + +uint32_t zxdh_inner_l4_type[16] = { + 0, + RTE_PTYPE_INNER_L4_TCP, + RTE_PTYPE_INNER_L4_UDP, + RTE_PTYPE_INNER_L4_FRAG, + RTE_PTYPE_INNER_L4_SCTP, + RTE_PTYPE_INNER_L4_ICMP, + 0, + 0, +}; + static void zxdh_xmit_cleanup_inorder_packed(struct zxdh_virtqueue *vq, int32_t num) { @@ -394,3 +481,229 @@ uint16_t zxdh_xmit_pkts_prepare(void *tx_queue __rte_unused, struct rte_mbuf **t } return nb_tx; } + +static uint16_t zxdh_dequeue_burst_rx_packed(struct zxdh_virtqueue *vq, + struct rte_mbuf **rx_pkts, + uint32_t *len, + uint16_t num) +{ + struct zxdh_vring_packed_desc *desc = vq->vq_packed.ring.desc; + struct rte_mbuf *cookie = NULL; + uint16_t i, used_idx; + uint16_t id; + + for (i = 0; i < num; i++) { + used_idx = vq->vq_used_cons_idx; + /** + * desc_is_used has a load-acquire or rte_io_rmb inside + * and wait for used desc in virtqueue. + */ + if (!zxdh_desc_used(&desc[used_idx], vq)) + return i; + len[i] = desc[used_idx].len; + id = desc[used_idx].id; + cookie = (struct rte_mbuf *)vq->vq_descx[id].cookie; + vq->vq_descx[id].cookie = NULL; + if (unlikely(cookie == NULL)) { + PMD_RX_LOG(ERR, + "vring descriptor with no mbuf cookie at %u", vq->vq_used_cons_idx); + break; + } + rx_pkts[i] = cookie; + vq->vq_free_cnt++; + vq->vq_used_cons_idx++; + if (vq->vq_used_cons_idx >= vq->vq_nentries) { + vq->vq_used_cons_idx -= vq->vq_nentries; + vq->vq_packed.used_wrap_counter ^= 1; + } + } + return i; +} + +static int32_t zxdh_rx_update_mbuf(struct rte_mbuf *m, struct zxdh_net_hdr_ul *hdr) +{ + struct zxdh_pd_hdr_ul *pd_hdr = &hdr->pd_hdr; + struct zxdh_pi_hdr *pi_hdr = &hdr->pi_hdr; + uint32_t idx = 0; + + m->pkt_len = rte_be_to_cpu_16(pi_hdr->ul.pkt_len); + + uint16_t pkt_type_outer = rte_be_to_cpu_16(pd_hdr->pkt_type_out); + + idx = (pkt_type_outer >> 12) & 0xF; + m->packet_type = zxdh_outer_l2_type[idx]; + idx = (pkt_type_outer >> 8) & 0xF; + m->packet_type |= zxdh_outer_l3_type[idx]; + idx = (pkt_type_outer >> 4) & 0xF; + m->packet_type |= zxdh_outer_l4_type[idx]; + idx = pkt_type_outer & 0xF; + m->packet_type |= zxdh_tunnel_type[idx]; + + uint16_t pkt_type_inner = rte_be_to_cpu_16(pd_hdr->pkt_type_in); + + if (pkt_type_inner) { + idx = (pkt_type_inner >> 12) & 0xF; + m->packet_type |= zxdh_inner_l2_type[idx]; + idx = (pkt_type_inner >> 8) & 0xF; + m->packet_type |= zxdh_inner_l3_type[idx]; + idx = (pkt_type_inner >> 4) & 0xF; + m->packet_type |= zxdh_inner_l4_type[idx]; + } + + return 0; +} + +static inline void zxdh_discard_rxbuf(struct zxdh_virtqueue *vq, struct rte_mbuf *m) +{ + int32_t error = 0; + /* + * Requeue the discarded mbuf. This should always be + * successful since it was just dequeued. 
+ */ + error = zxdh_enqueue_recv_refill_packed(vq, &m, 1); + if (unlikely(error)) { + PMD_RX_LOG(ERR, "cannot enqueue discarded mbuf"); + rte_pktmbuf_free(m); + } +} + +uint16_t zxdh_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts) +{ + struct zxdh_virtnet_rx *rxvq = rx_queue; + struct zxdh_virtqueue *vq = rxvq->vq; + struct zxdh_hw *hw = vq->hw; + struct rte_eth_dev *dev = hw->eth_dev; + struct rte_mbuf *rxm = NULL; + struct rte_mbuf *prev = NULL; + uint32_t len[ZXDH_MBUF_BURST_SZ] = {0}; + struct rte_mbuf *rcv_pkts[ZXDH_MBUF_BURST_SZ] = {NULL}; + uint32_t nb_enqueued = 0; + uint32_t seg_num = 0; + uint32_t seg_res = 0; + uint16_t hdr_size = 0; + int32_t error = 0; + uint16_t nb_rx = 0; + uint16_t num = nb_pkts; + + if (unlikely(num > ZXDH_MBUF_BURST_SZ)) + num = ZXDH_MBUF_BURST_SZ; + + num = zxdh_dequeue_burst_rx_packed(vq, rcv_pkts, len, num); + uint16_t i; + uint16_t rcvd_pkt_len = 0; + + for (i = 0; i < num; i++) { + rxm = rcv_pkts[i]; + + struct zxdh_net_hdr_ul *header = + (struct zxdh_net_hdr_ul *)((char *)rxm->buf_addr + + RTE_PKTMBUF_HEADROOM); + + seg_num = header->type_hdr.num_buffers; + if (seg_num == 0) { + PMD_RX_LOG(ERR, "dequeue %d pkt, No.%d pkt seg_num is %d", num, i, seg_num); + seg_num = 1; + } + /* bit[0:6]-pd_len unit:2B */ + uint16_t pd_len = header->type_hdr.pd_len << 1; + /* Private queue only handle type hdr */ + hdr_size = pd_len; + rxm->data_off = RTE_PKTMBUF_HEADROOM + hdr_size; + rxm->nb_segs = seg_num; + rxm->ol_flags = 0; + rxm->vlan_tci = 0; + rcvd_pkt_len = (uint32_t)(len[i] - hdr_size); + rxm->data_len = (uint16_t)(len[i] - hdr_size); + rxm->port = rxvq->port_id; + rx_pkts[nb_rx] = rxm; + prev = rxm; + /* Update rte_mbuf according to pi/pd header */ + if (zxdh_rx_update_mbuf(rxm, header) < 0) { + zxdh_discard_rxbuf(vq, rxm); + continue; + } + seg_res = seg_num - 1; + /* Merge remaining segments */ + while (seg_res != 0 && i < (num - 1)) { + i++; + rxm = rcv_pkts[i]; + rxm->data_off = RTE_PKTMBUF_HEADROOM; + rxm->data_len = (uint16_t)(len[i]); + + rcvd_pkt_len += (uint32_t)(len[i]); + prev->next = rxm; + prev = rxm; + rxm->next = NULL; + seg_res -= 1; + } + + if (!seg_res) { + if (rcvd_pkt_len != rx_pkts[nb_rx]->pkt_len) { + PMD_RX_LOG(ERR, "dropped rcvd_pkt_len %d pktlen %d.", + rcvd_pkt_len, rx_pkts[nb_rx]->pkt_len); + zxdh_discard_rxbuf(vq, rx_pkts[nb_rx]); + continue; + } + nb_rx++; + } + } + /* Last packet still need merge segments */ + while (seg_res != 0) { + uint16_t rcv_cnt = RTE_MIN((uint16_t)seg_res, ZXDH_MBUF_BURST_SZ); + uint16_t extra_idx = 0; + + rcv_cnt = zxdh_dequeue_burst_rx_packed(vq, rcv_pkts, len, rcv_cnt); + if (unlikely(rcv_cnt == 0)) { + PMD_RX_LOG(ERR, "No enough segments for packet."); + rte_pktmbuf_free(rx_pkts[nb_rx]); + break; + } + while (extra_idx < rcv_cnt) { + rxm = rcv_pkts[extra_idx]; + rxm->data_off = RTE_PKTMBUF_HEADROOM; + rxm->pkt_len = (uint32_t)(len[extra_idx]); + rxm->data_len = (uint16_t)(len[extra_idx]); + prev->next = rxm; + prev = rxm; + rxm->next = NULL; + rcvd_pkt_len += len[extra_idx]; + extra_idx += 1; + } + seg_res -= rcv_cnt; + if (!seg_res) { + if (rcvd_pkt_len != rx_pkts[nb_rx]->pkt_len) { + PMD_RX_LOG(ERR, "dropped rcvd_pkt_len %d pktlen %d.", + rcvd_pkt_len, rx_pkts[nb_rx]->pkt_len); + zxdh_discard_rxbuf(vq, rx_pkts[nb_rx]); + continue; + } + nb_rx++; + } + } + + /* Allocate new mbuf for the used descriptor */ + if (likely(!zxdh_queue_full(vq))) { + /* free_cnt may include mrg descs */ + uint16_t free_cnt = vq->vq_free_cnt; + struct rte_mbuf *new_pkts[free_cnt]; + + if 
(!rte_pktmbuf_alloc_bulk(rxvq->mpool, new_pkts, free_cnt)) { + error = zxdh_enqueue_recv_refill_packed(vq, new_pkts, free_cnt); + if (unlikely(error)) { + for (i = 0; i < free_cnt; i++) + rte_pktmbuf_free(new_pkts[i]); + } + nb_enqueued += free_cnt; + } else { + dev->data->rx_mbuf_alloc_failed += free_cnt; + } + } + if (likely(nb_enqueued)) { + if (unlikely(zxdh_queue_kick_prepare_packed(vq))) { + zxdh_queue_notify(vq); + PMD_RX_LOG(DEBUG, "Notified"); + } + } + return nb_rx; +} diff --git a/drivers/net/zxdh/zxdh_rxtx.h b/drivers/net/zxdh/zxdh_rxtx.h index 0a02d319b2..cc0004324a 100644 --- a/drivers/net/zxdh/zxdh_rxtx.h +++ b/drivers/net/zxdh/zxdh_rxtx.h @@ -45,5 +45,7 @@ struct __rte_cache_aligned zxdh_virtnet_tx { uint16_t zxdh_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); uint16_t zxdh_xmit_pkts_prepare(void *tx_queue __rte_unused, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); +uint16_t zxdh_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts); #endif /* ZXDH_RXTX_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 28867 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
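For readers following the receive path above, the sketch below is a minimal application-side view of how it is exercised. It is an illustrative fragment, not part of the patch: the helper name, burst size, and the assumption of a configured, started port are additions here. rte_eth_rx_burst() dispatches to the PMD's zxdh_recv_pkts_packed(), which merges multi-segment packets and refills used descriptors before returning:

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define BURST_SZ 32

    /* Hypothetical helper (not part of the patch): poll rx queue 0 once. */
    static void
    rx_poll_once(uint16_t port_id)
    {
        struct rte_mbuf *pkts[BURST_SZ];
        /* for a zxdh port this dispatches to zxdh_recv_pkts_packed() */
        uint16_t nb = rte_eth_rx_burst(port_id, 0, pkts, BURST_SZ);
        uint16_t i;

        for (i = 0; i < nb; i++) {
            /* packet_type was filled from the pi/pd headers by the PMD */
            if ((pkts[i]->packet_type & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_TCP)
                ; /* e.g. hand TCP packets to a flow table here */
            rte_pktmbuf_free(pkts[i]);
        }
    }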
* [PATCH v4 09/15] net/zxdh: link info update, set link up/down 2024-12-18 9:25 ` [PATCH v4 00/15] net/zxdh: updated net zxdh driver Junlong Wang ` (7 preceding siblings ...) 2024-12-18 9:25 ` [PATCH v4 08/15] net/zxdh: provided dev simple rx implementations Junlong Wang @ 2024-12-18 9:25 ` Junlong Wang 2024-12-18 9:25 ` [PATCH v4 10/15] net/zxdh: mac set/add/remove ops implementations Junlong Wang ` (5 subsequent siblings) 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-18 9:25 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 23805 bytes --] Provided link info update, set link up/down, and link interrupt handling. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- doc/guides/nics/features/zxdh.ini | 2 + doc/guides/nics/zxdh.rst | 3 + drivers/net/zxdh/meson.build | 1 + drivers/net/zxdh/zxdh_ethdev.c | 14 ++- drivers/net/zxdh/zxdh_ethdev.h | 2 + drivers/net/zxdh/zxdh_ethdev_ops.c | 166 ++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev_ops.h | 14 +++ drivers/net/zxdh/zxdh_msg.c | 57 ++++++++++ drivers/net/zxdh/zxdh_msg.h | 40 +++++++ drivers/net/zxdh/zxdh_np.c | 172 ++++++++++++++++++++++++++++- drivers/net/zxdh/zxdh_np.h | 20 ++++ drivers/net/zxdh/zxdh_tables.c | 15 +++ drivers/net/zxdh/zxdh_tables.h | 6 +- 13 files changed, 503 insertions(+), 9 deletions(-) create mode 100644 drivers/net/zxdh/zxdh_ethdev_ops.c create mode 100644 drivers/net/zxdh/zxdh_ethdev_ops.h diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index bb44e93fad..7da3aaced1 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -10,3 +10,5 @@ ARMv8 = Y SR-IOV = Y Multiprocess aware = Y Scattered Rx = Y +Link status = Y +Link status event = Y diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index f42db9c1f1..fdbc3b3923 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -21,6 +21,9 @@ Features of the ZXDH PMD are: - Multiple queues for TX and RX - SR-IOV VF - Scattered and gather for TX and RX +- Link Auto-negotiation +- Link state information +- Set Link down or up Driver compilation and testing diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build index 20b2cf484a..48f8f5e1ee 100644 --- a/drivers/net/zxdh/meson.build +++ b/drivers/net/zxdh/meson.build @@ -22,4 +22,5 @@ sources = files( 'zxdh_np.c', 'zxdh_tables.c', 'zxdh_rxtx.c', + 'zxdh_ethdev_ops.c', ) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 0d63129d8d..d3876ec9b3 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -16,6 +16,7 @@ #include "zxdh_np.h" #include "zxdh_tables.h" #include "zxdh_rxtx.h" +#include "zxdh_ethdev_ops.h" struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; struct zxdh_shared_data *zxdh_shared_data; @@ -105,12 +106,18 @@ static void zxdh_devconf_intr_handler(void *param) { struct rte_eth_dev *dev = param; + struct zxdh_hw *hw = dev->data->dev_private; + + uint8_t isr = zxdh_pci_isr(hw); if (zxdh_intr_unmask(dev) < 0) PMD_DRV_LOG(ERR, "interrupt enable failed"); + if (isr & ZXDH_PCI_ISR_CONFIG) { + if (zxdh_dev_link_update(dev, 0) == 0) + rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_LSC, NULL); + } } - /* Interrupt handler triggered by NIC for handling specific interrupt.
*/ static void zxdh_fromriscv_intr_handler(void *param) @@ -1006,6 +1013,8 @@ zxdh_dev_start(struct rte_eth_dev *dev) vq = hw->vqs[logic_qidx]; zxdh_queue_notify(vq); } + zxdh_dev_set_link_up(dev); + return 0; } @@ -1020,6 +1029,9 @@ static const struct eth_dev_ops zxdh_eth_dev_ops = { .tx_queue_setup = zxdh_dev_tx_queue_setup, .rx_queue_intr_enable = zxdh_dev_rx_queue_intr_enable, .rx_queue_intr_disable = zxdh_dev_rx_queue_intr_disable, + .link_update = zxdh_dev_link_update, + .dev_set_link_up = zxdh_dev_set_link_up, + .dev_set_link_down = zxdh_dev_set_link_down, }; static int32_t diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index b1f398b28e..c0b719062c 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -72,6 +72,7 @@ struct zxdh_hw { uint64_t guest_features; uint32_t max_queue_pairs; uint32_t speed; + uint32_t speed_mode; uint32_t notify_off_multiplier; uint16_t *notify_base; uint16_t pcie_id; @@ -93,6 +94,7 @@ struct zxdh_hw { uint8_t panel_id; uint8_t has_tx_offload; uint8_t has_rx_offload; + uint8_t admin_status; }; struct zxdh_dtb_shared_data { diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.c b/drivers/net/zxdh/zxdh_ethdev_ops.c new file mode 100644 index 0000000000..5a0af98cc0 --- /dev/null +++ b/drivers/net/zxdh/zxdh_ethdev_ops.c @@ -0,0 +1,166 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#include "zxdh_ethdev.h" +#include "zxdh_pci.h" +#include "zxdh_msg.h" +#include "zxdh_ethdev_ops.h" +#include "zxdh_tables.h" +#include "zxdh_logs.h" + +static int32_t zxdh_config_port_status(struct rte_eth_dev *dev, uint16_t link_status) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_port_attr_table port_attr = {0}; + struct zxdh_msg_info msg_info = {0}; + int32_t ret = 0; + + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "write port_attr failed"); + return -1; + } + port_attr.is_up = link_status; + + ret = zxdh_set_port_attr(hw->vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "write port_attr failed"); + return -1; + } + } else { + struct zxdh_port_attr_set_msg *port_attr_msg = &msg_info.data.port_attr_msg; + + zxdh_msg_head_build(hw, ZXDH_PORT_ATTRS_SET, &msg_info); + port_attr_msg->mode = ZXDH_PORT_ATTR_IS_UP_FLAG; + port_attr_msg->value = link_status; + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_PORT_ATTR_IS_UP_FLAG); + return ret; + } + } + return ret; +} + +static int32_t +zxdh_link_info_get(struct rte_eth_dev *dev, struct rte_eth_link *link) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_info msg_info = {0}; + struct zxdh_msg_reply_info reply_info = {0}; + uint16_t status = 0; + int32_t ret = 0; + + if (zxdh_pci_with_feature(hw, ZXDH_NET_F_STATUS)) + zxdh_pci_read_dev_config(hw, offsetof(struct zxdh_net_config, status), + &status, sizeof(status)); + + link->link_status = status; + + if (status == RTE_ETH_LINK_DOWN) { + link->link_speed = RTE_ETH_SPEED_NUM_UNKNOWN; + link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX; + } else { + zxdh_agent_msg_build(hw, ZXDH_MAC_LINK_GET, &msg_info); + + ret = zxdh_send_msg_to_riscv(dev, &msg_info, sizeof(struct zxdh_msg_info), + &reply_info, sizeof(struct zxdh_msg_reply_info), + ZXDH_BAR_MODULE_MAC); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_MAC_LINK_GET); + return -1; + } 
+ link->link_speed = reply_info.reply_body.link_msg.speed; + hw->speed_mode = reply_info.reply_body.link_msg.speed_modes; + if ((reply_info.reply_body.link_msg.duplex & RTE_ETH_LINK_FULL_DUPLEX) == + RTE_ETH_LINK_FULL_DUPLEX) + link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX; + else + link->link_duplex = RTE_ETH_LINK_HALF_DUPLEX; + } + hw->speed = link->link_speed; + + return 0; +} + +static int zxdh_set_link_status(struct rte_eth_dev *dev, uint8_t link_status) +{ + uint16_t curr_link_status = dev->data->dev_link.link_status; + + struct rte_eth_link link; + struct zxdh_hw *hw = dev->data->dev_private; + int32_t ret = 0; + + if (link_status == curr_link_status) { + PMD_DRV_LOG(DEBUG, "curr_link_status %u", curr_link_status); + return 0; + } + + hw->admin_status = link_status; + ret = zxdh_link_info_get(dev, &link); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to get link status from hw"); + return ret; + } + dev->data->dev_link.link_status = hw->admin_status & link.link_status; + + if (dev->data->dev_link.link_status == RTE_ETH_LINK_UP) { + dev->data->dev_link.link_speed = link.link_speed; + dev->data->dev_link.link_duplex = link.link_duplex; + } else { + dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN; + dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX; + } + return zxdh_config_port_status(dev, dev->data->dev_link.link_status); +} + +int zxdh_dev_set_link_up(struct rte_eth_dev *dev) +{ + int ret = zxdh_set_link_status(dev, RTE_ETH_LINK_UP); + + if (ret) + PMD_DRV_LOG(ERR, "Set link up failed, code:%d", ret); + + return ret; +} + +int32_t zxdh_dev_link_update(struct rte_eth_dev *dev, int32_t wait_to_complete __rte_unused) +{ + struct rte_eth_link link; + struct zxdh_hw *hw = dev->data->dev_private; + int32_t ret = 0; + + memset(&link, 0, sizeof(link)); + link.link_duplex = hw->duplex; + link.link_speed = hw->speed; + link.link_autoneg = RTE_ETH_LINK_AUTONEG; + + ret = zxdh_link_info_get(dev, &link); + if (ret != 0) { + PMD_DRV_LOG(ERR, " Failed to get link status from hw"); + return ret; + } + link.link_status &= hw->admin_status; + if (link.link_status == RTE_ETH_LINK_DOWN) + link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN; + + ret = zxdh_config_port_status(dev, link.link_status); + if (ret != 0) { + PMD_DRV_LOG(ERR, "set port attr %d failed.", link.link_status); + return ret; + } + return rte_eth_linkstatus_set(dev, &link); +} + +int zxdh_dev_set_link_down(struct rte_eth_dev *dev) +{ + int ret = zxdh_set_link_status(dev, RTE_ETH_LINK_DOWN); + + if (ret) + PMD_DRV_LOG(ERR, "Set link down failed"); + return ret; +} diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.h b/drivers/net/zxdh/zxdh_ethdev_ops.h new file mode 100644 index 0000000000..c6d6ca56fd --- /dev/null +++ b/drivers/net/zxdh/zxdh_ethdev_ops.h @@ -0,0 +1,14 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#ifndef ZXDH_ETHDEV_OPS_H +#define ZXDH_ETHDEV_OPS_H + +#include "zxdh_ethdev.h" + +int zxdh_dev_set_link_up(struct rte_eth_dev *dev); +int zxdh_dev_set_link_down(struct rte_eth_dev *dev); +int32_t zxdh_dev_link_update(struct rte_eth_dev *dev, int32_t wait_to_complete __rte_unused); + +#endif /* ZXDH_ETHDEV_OPS_H */ diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c index aa2e10fd45..23a7ed2097 100644 --- a/drivers/net/zxdh/zxdh_msg.c +++ b/drivers/net/zxdh/zxdh_msg.c @@ -1134,6 +1134,51 @@ int zxdh_vf_send_msg_to_pf(struct rte_eth_dev *dev, void *msg_req, return 0; } +int32_t zxdh_send_msg_to_riscv(struct rte_eth_dev *dev, void *msg_req, + uint16_t 
msg_req_len, void *reply, uint16_t reply_len, + enum ZXDH_BAR_MODULE_ID module_id) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_pci_bar_msg in = {0}; + struct zxdh_msg_recviver_mem result = {0}; + struct zxdh_msg_reply_info reply_info = {0}; + + if (reply) { + RTE_ASSERT(reply_len < sizeof(zxdh_msg_reply_info)); + result.recv_buffer = reply; + result.buffer_len = reply_len; + } else { + result.recv_buffer = &reply_info; + result.buffer_len = sizeof(reply_info); + } + struct zxdh_msg_reply_head *reply_head = + &(((struct zxdh_msg_reply_info *)result.recv_buffer)->reply_head); + struct zxdh_msg_reply_body *reply_body = + &(((struct zxdh_msg_reply_info *)result.recv_buffer)->reply_body); + in.payload_addr = &msg_req; + in.payload_len = msg_req_len; + in.virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_CTRLCH_OFFSET); + in.src = hw->is_pf ? ZXDH_MSG_CHAN_END_PF : ZXDH_MSG_CHAN_END_VF; + in.dst = ZXDH_MSG_CHAN_END_RISC; + in.module_id = module_id; + in.src_pcieid = hw->pcie_id; + if (zxdh_bar_chan_sync_msg_send(&in, &result) != ZXDH_BAR_MSG_OK) { + PMD_MSG_LOG(ERR, "Failed to send sync messages or receive response"); + return -1; + } + if (reply_head->flag != ZXDH_MSG_REPS_OK) { + PMD_MSG_LOG(ERR, "vf[%d] get pf reply failed: reply_head flag : 0x%x(0xff is OK).replylen %d", + hw->vport.vfid, reply_head->flag, reply_head->reps_len); + return -1; + } + if (reply_body->flag != ZXDH_REPS_SUCC) { + PMD_MSG_LOG(ERR, "vf[%d] msg processing failed", hw->vfid); + return -1; + } + + return 0; +} + void zxdh_msg_head_build(struct zxdh_hw *hw, enum zxdh_msg_type type, struct zxdh_msg_info *msg_info) { @@ -1144,3 +1189,15 @@ void zxdh_msg_head_build(struct zxdh_hw *hw, enum zxdh_msg_type type, msghead->vf_id = hw->vport.vfid; msghead->pcieid = hw->pcie_id; } + +void zxdh_agent_msg_build(struct zxdh_hw *hw, enum zxdh_agent_msg_type type, + struct zxdh_msg_info *msg_info) +{ + struct zxdh_agent_msg_head *agent_head = &msg_info->agent_msg_head; + + agent_head->msg_type = type; + agent_head->panel_id = hw->panel_id; + agent_head->phyport = hw->phyport; + agent_head->vf_id = hw->vfid; + agent_head->pcie_id = hw->pcie_id; +} diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index 613ca71170..a78075c914 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -164,11 +164,18 @@ enum pciebar_layout_type { ZXDH_URI_MAX, }; +/* riscv msg opcodes */ +enum zxdh_agent_msg_type { + ZXDH_MAC_LINK_GET = 14, +}; + enum zxdh_msg_type { ZXDH_NULL = 0, ZXDH_VF_PORT_INIT = 1, ZXDH_VF_PORT_UNINIT = 2, + ZXDH_PORT_ATTRS_SET = 25, + ZXDH_MSG_TYPE_END, }; @@ -261,6 +268,15 @@ struct zxdh_offset_get_msg { uint16_t type; }; +struct zxdh_link_info_msg { + uint8_t autoneg; + uint8_t link_state; + uint8_t blink_enable; + uint8_t duplex; + uint32_t speed_modes; + uint32_t speed; +} __rte_packed; + struct zxdh_msg_reply_head { uint8_t flag; uint16_t reps_len; @@ -276,6 +292,7 @@ struct zxdh_msg_reply_body { enum zxdh_reps_flag flag; union { uint8_t reply_data[ZXDH_MSG_REPLY_BODY_MAX_LEN - sizeof(enum zxdh_reps_flag)]; + struct zxdh_link_info_msg link_msg; } __rte_packed; } __rte_packed; @@ -291,6 +308,12 @@ struct zxdh_vf_init_msg { uint8_t rss_enable; } __rte_packed; +struct zxdh_port_attr_set_msg { + uint32_t mode; + uint32_t value; + uint8_t allmulti_follow; +} __rte_packed; + struct zxdh_msg_head { enum zxdh_msg_type msg_type; uint16_t vport; @@ -298,14 +321,26 @@ struct zxdh_msg_head { uint16_t pcieid; } __rte_packed; +struct zxdh_agent_msg_head { + enum 
zxdh_agent_msg_type msg_type; + uint8_t panel_id; + uint8_t phyport; + uint8_t rsv; + uint16_t vf_id; + uint16_t pcie_id; +} __rte_packed; + struct zxdh_msg_info { union { uint8_t head_len[ZXDH_MSG_HEAD_LEN]; struct zxdh_msg_head msg_head; + struct zxdh_agent_msg_head agent_msg_head; }; union { uint8_t datainfo[ZXDH_MSG_REQ_BODY_MAX_LEN]; struct zxdh_vf_init_msg vf_init_msg; + struct zxdh_port_attr_set_msg port_attr_msg; + struct zxdh_link_info_msg link_msg; } __rte_packed data; } __rte_packed; @@ -326,5 +361,10 @@ void zxdh_msg_head_build(struct zxdh_hw *hw, enum zxdh_msg_type type, struct zxdh_msg_info *msg_info); int zxdh_vf_send_msg_to_pf(struct rte_eth_dev *dev, void *msg_req, uint16_t msg_req_len, void *reply, uint16_t reply_len); +void zxdh_agent_msg_build(struct zxdh_hw *hw, enum zxdh_agent_msg_type type, + struct zxdh_msg_info *msg_info); +int32_t zxdh_send_msg_to_riscv(struct rte_eth_dev *dev, void *msg_req, + uint16_t msg_req_len, void *reply, uint16_t reply_len, + enum ZXDH_BAR_MODULE_ID module_id); #endif /* ZXDH_MSG_H */ diff --git a/drivers/net/zxdh/zxdh_np.c b/drivers/net/zxdh/zxdh_np.c index 99a7dc11b4..1f06539263 100644 --- a/drivers/net/zxdh/zxdh_np.c +++ b/drivers/net/zxdh/zxdh_np.c @@ -36,6 +36,10 @@ ZXDH_SDT_TBL_DATA_T g_sdt_info[ZXDH_DEV_CHANNEL_MAX][ZXDH_DEV_SDT_ID_MAX]; #define ZXDH_COMM_GET_BIT_MASK(_inttype_, _bitqnt_)\ ((_inttype_)(((_bitqnt_) < 32))) +#define ZXDH_COMM_UINT32_GET_BITS(_uidst_, _uisrc_, _uistartpos_, _uilen_)\ + ((_uidst_) = (((_uisrc_) >> (_uistartpos_)) & \ + (ZXDH_COMM_GET_BIT_MASK(uint32_t, (_uilen_))))) + #define ZXDH_REG_DATA_MAX (128) #define ZXDH_COMM_CHECK_DEV_POINT(dev_id, point)\ @@ -1456,15 +1460,11 @@ zxdh_np_dtb_table_entry_write(uint32_t dev_id, return rc; } -static uint32_t +static void zxdh_np_sdt_tbl_data_get(uint32_t dev_id, uint32_t sdt_no, ZXDH_SDT_TBL_DATA_T *p_sdt_data) { - uint32_t rc = 0; - p_sdt_data->data_high32 = g_sdt_info[dev_id][sdt_no].data_high32; p_sdt_data->data_low32 = g_sdt_info[dev_id][sdt_no].data_low32; - - return rc; } int @@ -1507,7 +1507,7 @@ zxdh_np_dtb_table_entry_delete(uint32_t dev_id, pentry = delete_entries + entry_index; sdt_no = pentry->sdt_no; - rc = zxdh_np_sdt_tbl_data_get(dev_id, sdt_no, &sdt_tbl); + zxdh_np_sdt_tbl_data_get(dev_id, sdt_no, &sdt_tbl); switch (tbl_type) { case ZXDH_SDT_TBLT_ERAM: { @@ -1557,3 +1557,163 @@ zxdh_np_dtb_table_entry_delete(uint32_t dev_id, rte_free(p_data_buff_ex); return 0; } + +static uint32_t +zxdh_np_sdt_tbl_data_parser(uint32_t sdt_hig32, uint32_t sdt_low32, void *p_sdt_info) +{ + uint32_t tbl_type = 0; + uint32_t clutch_en = 0; + + ZXDH_SDTTBL_ERAM_T *p_sdt_eram = NULL; + ZXDH_SDTTBL_PORTTBL_T *p_sdt_porttbl = NULL; + + ZXDH_COMM_UINT32_GET_BITS(tbl_type, sdt_hig32, + ZXDH_SDT_H_TBL_TYPE_BT_POS, ZXDH_SDT_H_TBL_TYPE_BT_LEN); + ZXDH_COMM_UINT32_GET_BITS(clutch_en, sdt_low32, 0, 1); + + switch (tbl_type) { + case ZXDH_SDT_TBLT_ERAM: + { + p_sdt_eram = (ZXDH_SDTTBL_ERAM_T *)p_sdt_info; + p_sdt_eram->table_type = tbl_type; + p_sdt_eram->eram_clutch_en = clutch_en; + break; + } + + case ZXDH_SDT_TBLT_PORTTBL: + { + p_sdt_porttbl = (ZXDH_SDTTBL_PORTTBL_T *)p_sdt_info; + p_sdt_porttbl->table_type = tbl_type; + p_sdt_porttbl->porttbl_clutch_en = clutch_en; + break; + } + default: + { + PMD_DRV_LOG(ERR, "SDT table_type[ %d ] is invalid!", tbl_type); + return 1; + } + } + + return 0; +} + +static uint32_t +zxdh_np_soft_sdt_tbl_get(uint32_t dev_id, uint32_t sdt_no, void *p_sdt_info) +{ + ZXDH_SDT_TBL_DATA_T sdt_tbl = {0}; + uint32_t rc; + + 
zxdh_np_sdt_tbl_data_get(dev_id, sdt_no, &sdt_tbl); + + rc = zxdh_np_sdt_tbl_data_parser(sdt_tbl.data_high32, sdt_tbl.data_low32, p_sdt_info); + if (rc != 0) + PMD_DRV_LOG(ERR, "dpp sdt [%d] tbl_data_parser error.", sdt_no); + + return rc; +} + +static void +zxdh_np_eram_index_cal(uint32_t eram_mode, uint32_t index, + uint32_t *p_row_index, uint32_t *p_col_index) +{ + uint32_t row_index = 0; + uint32_t col_index = 0; + + switch (eram_mode) { + case ZXDH_ERAM128_TBL_128b: + { + row_index = index; + break; + } + case ZXDH_ERAM128_TBL_64b: + { + row_index = (index >> 1); + col_index = index & 0x1; + break; + } + case ZXDH_ERAM128_TBL_1b: + { + row_index = (index >> 7); + col_index = index & 0x7F; + break; + } + } + *p_row_index = row_index; + *p_col_index = col_index; +} + +static uint32_t +zxdh_np_dtb_eram_data_get(uint32_t dev_id, uint32_t queue_id, uint32_t sdt_no, + ZXDH_DTB_ERAM_ENTRY_INFO_T *p_dump_eram_entry) +{ + uint32_t index = p_dump_eram_entry->index; + uint32_t *p_data = p_dump_eram_entry->p_data; + ZXDH_SDTTBL_ERAM_T sdt_eram_info = {0}; + uint32_t temp_data[4] = {0}; + uint32_t row_index = 0; + uint32_t col_index = 0; + uint32_t rd_mode; + uint32_t rc; + + rc = zxdh_np_soft_sdt_tbl_get(queue_id, sdt_no, &sdt_eram_info); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rc, "dpp_soft_sdt_tbl_get"); + rd_mode = sdt_eram_info.eram_mode; + + zxdh_np_eram_index_cal(rd_mode, index, &row_index, &col_index); + + switch (rd_mode) { + case ZXDH_ERAM128_TBL_128b: + { + memcpy(p_data, temp_data, (128 / 8)); + break; + } + case ZXDH_ERAM128_TBL_64b: + { + memcpy(p_data, temp_data + ((1 - col_index) << 1), (64 / 8)); + break; + } + case ZXDH_ERAM128_TBL_1b: + { + ZXDH_COMM_UINT32_GET_BITS(p_data[0], *(temp_data + + (3 - col_index / 32)), (col_index % 32), 1); + break; + } + } + return rc; +} + +int +zxdh_np_dtb_table_entry_get(uint32_t dev_id, + uint32_t queue_id, + ZXDH_DTB_USER_ENTRY_T *get_entry, + uint32_t srh_mode) +{ + ZXDH_SDT_TBL_DATA_T sdt_tbl = {0}; + uint32_t tbl_type = 0; + uint32_t rc; + uint32_t sdt_no; + + sdt_no = get_entry->sdt_no; + zxdh_np_sdt_tbl_data_get(srh_mode, sdt_no, &sdt_tbl); + + ZXDH_COMM_UINT32_GET_BITS(tbl_type, sdt_tbl.data_high32, + ZXDH_SDT_H_TBL_TYPE_BT_POS, ZXDH_SDT_H_TBL_TYPE_BT_LEN); + switch (tbl_type) { + case ZXDH_SDT_TBLT_ERAM: + { + rc = zxdh_np_dtb_eram_data_get(dev_id, + queue_id, + sdt_no, + (ZXDH_DTB_ERAM_ENTRY_INFO_T *)get_entry->p_entry_data); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_dtb_eram_data_get"); + break; + } + default: + { + PMD_DRV_LOG(ERR, "SDT table_type[ %d ] is invalid!", tbl_type); + return 1; + } + } + + return 0; +} diff --git a/drivers/net/zxdh/zxdh_np.h b/drivers/net/zxdh/zxdh_np.h index 42a652dd6b..ac3931ba65 100644 --- a/drivers/net/zxdh/zxdh_np.h +++ b/drivers/net/zxdh/zxdh_np.h @@ -514,11 +514,31 @@ typedef struct zxdh_sdt_tbl_data_t { uint32_t data_low32; } ZXDH_SDT_TBL_DATA_T; +typedef struct zxdh_sdt_tbl_etcam_t { + uint32_t table_type; + uint32_t etcam_id; + uint32_t etcam_key_mode; + uint32_t etcam_table_id; + uint32_t no_as_rsp_mode; + uint32_t as_en; + uint32_t as_eram_baddr; + uint32_t as_rsp_mode; + uint32_t etcam_table_depth; + uint32_t etcam_clutch_en; +} ZXDH_SDTTBL_ETCAM_T; + +typedef struct zxdh_sdt_tbl_porttbl_t { + uint32_t table_type; + uint32_t porttbl_clutch_en; +} ZXDH_SDTTBL_PORTTBL_T; + int zxdh_np_host_init(uint32_t dev_id, ZXDH_DEV_INIT_CTRL_T *p_dev_init_ctrl); int zxdh_np_online_uninit(uint32_t dev_id, char *port_name, uint32_t queue_id); int zxdh_np_dtb_table_entry_write(uint32_t dev_id, uint32_t queue_id, 
uint32_t entrynum, ZXDH_DTB_USER_ENTRY_T *down_entries); int zxdh_np_dtb_table_entry_delete(uint32_t dev_id, uint32_t queue_id, uint32_t entrynum, ZXDH_DTB_USER_ENTRY_T *delete_entries); +int zxdh_np_dtb_table_entry_get(uint32_t dev_id, uint32_t queue_id, + ZXDH_DTB_USER_ENTRY_T *get_entry, uint32_t srh_mode); #endif /* ZXDH_NP_H */ diff --git a/drivers/net/zxdh/zxdh_tables.c b/drivers/net/zxdh/zxdh_tables.c index 9fd184e612..db0132ce3f 100644 --- a/drivers/net/zxdh/zxdh_tables.c +++ b/drivers/net/zxdh/zxdh_tables.c @@ -134,3 +134,18 @@ int zxdh_panel_table_init(struct rte_eth_dev *dev) return ret; } + +int +zxdh_get_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr) +{ + int ret; + + ZXDH_DTB_ERAM_ENTRY_INFO_T entry = {vfid, (uint32_t *)port_attr}; + ZXDH_DTB_USER_ENTRY_T user_entry_get = {ZXDH_SDT_VPORT_ATT_TABLE, &entry}; + + ret = zxdh_np_dtb_table_entry_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, &user_entry_get, 1); + if (ret != 0) + PMD_DRV_LOG(ERR, "get port_attr vfid:%d failed, ret:%d ", vfid, ret); + + return ret; +} diff --git a/drivers/net/zxdh/zxdh_tables.h b/drivers/net/zxdh/zxdh_tables.h index 5e9b36faee..8676a8b375 100644 --- a/drivers/net/zxdh/zxdh_tables.h +++ b/drivers/net/zxdh/zxdh_tables.h @@ -7,9 +7,10 @@ #include <stdint.h> -extern struct zxdh_dtb_shared_data g_dtb_data; - #define ZXDH_DEVICE_NO 0 +#define ZXDH_PORT_ATTR_IS_UP_FLAG 35 + +extern struct zxdh_dtb_shared_data g_dtb_data; struct zxdh_port_attr_table { #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN @@ -145,5 +146,6 @@ int zxdh_port_attr_init(struct rte_eth_dev *dev); int zxdh_panel_table_init(struct rte_eth_dev *dev); int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); int zxdh_port_attr_uninit(struct rte_eth_dev *dev); +int zxdh_get_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); #endif /* ZXDH_TABLES_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 51008 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
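The link ops in the patch above are plain ethdev callbacks, so a short application-side sketch may help. It is illustrative only (the function names are additions here, and a started zxdh port with intr_conf.lsc enabled is assumed): rte_eth_dev_set_link_up()/rte_eth_dev_set_link_down() land in zxdh_dev_set_link_up()/zxdh_dev_set_link_down(), and the RTE_ETH_EVENT_INTR_LSC callback fires when zxdh_devconf_intr_handler() sees ZXDH_PCI_ISR_CONFIG:

    #include <stdio.h>
    #include <rte_common.h>
    #include <rte_ethdev.h>

    /* Hypothetical LSC callback: re-reads the link through the link_update op. */
    static int
    on_link_change(uint16_t port_id, enum rte_eth_event_type event,
            void *cb_arg, void *ret_param)
    {
        struct rte_eth_link link;

        RTE_SET_USED(event);
        RTE_SET_USED(cb_arg);
        RTE_SET_USED(ret_param);
        if (rte_eth_link_get_nowait(port_id, &link) < 0) /* -> zxdh_dev_link_update() */
            return 0;
        printf("port %u link %s, %u Mbps\n", port_id,
                link.link_status == RTE_ETH_LINK_UP ? "up" : "down",
                link.link_speed);
        return 0;
    }

    static void
    link_ctrl_demo(uint16_t port_id)
    {
        rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_INTR_LSC,
                on_link_change, NULL);
        rte_eth_dev_set_link_down(port_id); /* -> zxdh_dev_set_link_down() */
        rte_eth_dev_set_link_up(port_id);   /* -> zxdh_dev_set_link_up() */
    }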
* [PATCH v4 10/15] net/zxdh: mac set/add/remove ops implementations 2024-12-18 9:25 ` [PATCH v4 00/15] net/zxdh: updated net zxdh driver Junlong Wang ` (8 preceding siblings ...) 2024-12-18 9:25 ` [PATCH v4 09/15] net/zxdh: link info update, set link up/down Junlong Wang @ 2024-12-18 9:25 ` Junlong Wang 2024-12-18 9:25 ` [PATCH v4 11/15] net/zxdh: promisc/allmulti " Junlong Wang ` (4 subsequent siblings) 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-18 9:25 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 24676 bytes --] Provided MAC set/add/remove ops. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- doc/guides/nics/features/zxdh.ini | 2 + doc/guides/nics/zxdh.rst | 2 + drivers/net/zxdh/zxdh_common.c | 24 +++ drivers/net/zxdh/zxdh_common.h | 1 + drivers/net/zxdh/zxdh_ethdev.c | 32 +++- drivers/net/zxdh/zxdh_ethdev.h | 3 + drivers/net/zxdh/zxdh_ethdev_ops.c | 233 +++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev_ops.h | 4 + drivers/net/zxdh/zxdh_msg.h | 12 ++ drivers/net/zxdh/zxdh_np.h | 5 + drivers/net/zxdh/zxdh_tables.c | 197 ++++++++++++++++++++++++ drivers/net/zxdh/zxdh_tables.h | 36 +++++ 12 files changed, 549 insertions(+), 2 deletions(-) diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index 7da3aaced1..dc09fe3453 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -12,3 +12,5 @@ Multiprocess aware = Y Scattered Rx = Y Link status = Y Link status event = Y +Unicast MAC filter = Y +Multicast MAC filter = Y diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index fdbc3b3923..e0b0776aca 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -24,6 +24,8 @@ Features of the ZXDH PMD are: - Link Auto-negotiation - Link state information - Set Link down or up +- Unicast MAC filter +- Multicast MAC filter Driver compilation and testing diff --git a/drivers/net/zxdh/zxdh_common.c b/drivers/net/zxdh/zxdh_common.c index 4f18c97ed7..75883a8897 100644 --- a/drivers/net/zxdh/zxdh_common.c +++ b/drivers/net/zxdh/zxdh_common.c @@ -256,6 +256,30 @@ zxdh_panelid_get(struct rte_eth_dev *dev, uint8_t *panelid) return ret; } +static int +zxdh_get_res_hash_id(struct zxdh_res_para *in, uint8_t *hash_id) +{ + uint8_t reps = 0; + uint16_t reps_len = 0; + + if (zxdh_get_res_info(in, ZXDH_TBL_FIELD_HASHID, &reps, &reps_len) != ZXDH_BAR_MSG_OK) + return -1; + + *hash_id = reps; + return ZXDH_BAR_MSG_OK; +} + +int32_t +zxdh_hashidx_get(struct rte_eth_dev *dev, uint8_t *hash_idx) +{ + struct zxdh_res_para param; + + zxdh_fill_res_para(dev, &param); + int32_t ret = zxdh_get_res_hash_id(&param, hash_idx); + + return ret; +} + uint32_t zxdh_read_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg) { diff --git a/drivers/net/zxdh/zxdh_common.h b/drivers/net/zxdh/zxdh_common.h index 72c29e1522..826f1fb95d 100644 --- a/drivers/net/zxdh/zxdh_common.h +++ b/drivers/net/zxdh/zxdh_common.h @@ -22,6 +22,7 @@ struct zxdh_res_para { int32_t zxdh_phyport_get(struct rte_eth_dev *dev, uint8_t *phyport); int32_t zxdh_panelid_get(struct rte_eth_dev *dev, uint8_t *pannelid); +int32_t zxdh_hashidx_get(struct rte_eth_dev *dev, uint8_t *hash_idx); uint32_t zxdh_read_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg); void zxdh_write_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg, uint32_t val); void zxdh_release_lock(struct zxdh_hw *hw); diff --git a/drivers/net/zxdh/zxdh_ethdev.c
b/drivers/net/zxdh/zxdh_ethdev.c index d3876ec9b3..85ada87cdc 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -979,6 +979,23 @@ zxdh_set_rxtx_funcs(struct rte_eth_dev *eth_dev) return 0; } +static int +zxdh_mac_config(struct rte_eth_dev *eth_dev) +{ + struct zxdh_hw *hw = eth_dev->data->dev_private; + int ret = 0; + + if (hw->is_pf) { + ret = zxdh_set_mac_table(hw->vport.vport, + &eth_dev->data->mac_addrs[0], hw->hash_search_index); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to add mac: port 0x%x", hw->vport.vport); + return ret; + } + } + return ret; +} + static int zxdh_dev_start(struct rte_eth_dev *dev) { @@ -1014,6 +1031,9 @@ zxdh_dev_start(struct rte_eth_dev *dev) vq = hw->vqs[logic_qidx]; zxdh_queue_notify(vq); } zxdh_dev_set_link_up(dev); + ret = zxdh_mac_config(hw->eth_dev); + if (ret) + PMD_DRV_LOG(ERR, "mac config failed"); return 0; } @@ -1032,6 +1052,9 @@ static const struct eth_dev_ops zxdh_eth_dev_ops = { .link_update = zxdh_dev_link_update, .dev_set_link_up = zxdh_dev_set_link_up, .dev_set_link_down = zxdh_dev_set_link_down, + .mac_addr_add = zxdh_dev_mac_addr_add, + .mac_addr_remove = zxdh_dev_mac_addr_remove, + .mac_addr_set = zxdh_dev_mac_addr_set, }; static int32_t @@ -1073,15 +1096,20 @@ zxdh_agent_comm(struct rte_eth_dev *eth_dev, struct zxdh_hw *hw) PMD_DRV_LOG(ERR, "Failed to get phyport"); return -1; } - PMD_DRV_LOG(INFO, "Get phyport success: 0x%x", hw->phyport); + PMD_DRV_LOG(DEBUG, "Get phyport success: 0x%x", hw->phyport); hw->vfid = zxdh_vport_to_vfid(hw->vport); + if (zxdh_hashidx_get(eth_dev, &hw->hash_search_index) != 0) { + PMD_DRV_LOG(ERR, "Failed to get hash idx"); + return -1; + } + if (zxdh_panelid_get(eth_dev, &hw->panel_id) != 0) { PMD_DRV_LOG(ERR, "Failed to get panel_id"); return -1; } - PMD_DRV_LOG(INFO, "Get panel id success: 0x%x", hw->panel_id); + PMD_DRV_LOG(DEBUG, "Get panel id success: 0x%x", hw->panel_id); return 0; } diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index c0b719062c..5b95cb1c2a 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -80,6 +80,8 @@ struct zxdh_hw { uint16_t port_id; uint16_t vfid; uint16_t queue_num; + uint16_t mc_num; + uint16_t uc_num; uint8_t *isr; uint8_t weak_barriers; @@ -92,6 +94,7 @@ struct zxdh_hw { uint8_t msg_chan_init; uint8_t phyport; uint8_t panel_id; + uint8_t hash_search_index; uint8_t has_tx_offload; uint8_t has_rx_offload; uint8_t admin_status; diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.c b/drivers/net/zxdh/zxdh_ethdev_ops.c index 5a0af98cc0..751f80e9b4 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.c +++ b/drivers/net/zxdh/zxdh_ethdev_ops.c @@ -164,3 +164,236 @@ int zxdh_dev_set_link_down(struct rte_eth_dev *dev) PMD_DRV_LOG(ERR, "Set link down failed"); return ret; } + +int zxdh_dev_mac_addr_set(struct rte_eth_dev *dev, struct rte_ether_addr *addr) +{ + struct zxdh_hw *hw = (struct zxdh_hw *)dev->data->dev_private; + struct rte_ether_addr *old_addr = &dev->data->mac_addrs[0]; + struct zxdh_msg_info msg_info = {0}; + uint16_t ret = 0; + + if (!rte_is_valid_assigned_ether_addr(addr)) { + PMD_DRV_LOG(ERR, "mac address is invalid!"); + return -EINVAL; + } + + if (hw->is_pf) { + ret = zxdh_del_mac_table(hw->vport.vport, old_addr, hw->hash_search_index); + if (ret) { + PMD_DRV_LOG(ERR, "mac_addr_del failed, code:%d", ret); + return ret; + } + hw->uc_num--; + + ret = zxdh_set_mac_table(hw->vport.vport, addr, hw->hash_search_index); + if (ret) { + PMD_DRV_LOG(ERR, "mac_addr_add failed, code:%d", ret); + return ret; + } + hw->uc_num++;
+ } else { + struct zxdh_mac_filter *mac_filter = &msg_info.data.mac_filter_msg; + + mac_filter->filter_flag = ZXDH_MAC_UNFILTER; + mac_filter->mac_flag = true; + rte_memcpy(&mac_filter->mac, old_addr, sizeof(struct rte_ether_addr)); + zxdh_msg_head_build(hw, ZXDH_MAC_DEL, &msg_info); + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d ", + hw->vport.vport, ZXDH_MAC_DEL); + return ret; + } + hw->uc_num--; + PMD_DRV_LOG(INFO, "Success to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_MAC_DEL); + + mac_filter->filter_flag = ZXDH_MAC_UNFILTER; + rte_memcpy(&mac_filter->mac, addr, sizeof(struct rte_ether_addr)); + zxdh_msg_head_build(hw, ZXDH_MAC_ADD, &msg_info); + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d ", + hw->vport.vport, ZXDH_MAC_ADD); + return ret; + } + hw->uc_num++; + } + rte_ether_addr_copy(addr, (struct rte_ether_addr *)hw->mac_addr); + return ret; +} + +int zxdh_dev_mac_addr_add(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr, + uint32_t index, uint32_t vmdq __rte_unused) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_info msg_info = {0}; + uint16_t i, ret; + + if (index >= ZXDH_MAX_MAC_ADDRS) { + PMD_DRV_LOG(ERR, "Add mac index (%u) is out of range", index); + return -EINVAL; + } + + for (i = 0; (i != ZXDH_MAX_MAC_ADDRS); ++i) { + if (memcmp(&dev->data->mac_addrs[i], mac_addr, sizeof(*mac_addr))) + continue; + + PMD_DRV_LOG(INFO, "MAC address already configured"); + return -EADDRINUSE; + } + + if (hw->is_pf) { + if (rte_is_unicast_ether_addr(mac_addr)) { + if (hw->uc_num < ZXDH_MAX_UC_MAC_ADDRS) { + ret = zxdh_set_mac_table(hw->vport.vport, + mac_addr, hw->hash_search_index); + if (ret) { + PMD_DRV_LOG(ERR, "mac_addr_add failed, code:%d", ret); + return ret; + } + hw->uc_num++; + } else { + PMD_DRV_LOG(ERR, "MC_MAC is out of range, MAX_MC_MAC:%d", + ZXDH_MAX_MC_MAC_ADDRS); + return -EINVAL; + } + } else { + if (hw->mc_num < ZXDH_MAX_MC_MAC_ADDRS) { + ret = zxdh_set_mac_table(hw->vport.vport, + mac_addr, hw->hash_search_index); + if (ret) { + PMD_DRV_LOG(ERR, "mac_addr_add failed, code:%d", ret); + return ret; + } + hw->mc_num++; + } else { + PMD_DRV_LOG(ERR, "MC_MAC is out of range, MAX_MC_MAC:%d", + ZXDH_MAX_MC_MAC_ADDRS); + return -EINVAL; + } + } + } else { + struct zxdh_mac_filter *mac_filter = &msg_info.data.mac_filter_msg; + + mac_filter->filter_flag = ZXDH_MAC_FILTER; + rte_memcpy(&mac_filter->mac, mac_addr, sizeof(struct rte_ether_addr)); + zxdh_msg_head_build(hw, ZXDH_MAC_ADD, &msg_info); + if (rte_is_unicast_ether_addr(mac_addr)) { + if (hw->uc_num < ZXDH_MAX_UC_MAC_ADDRS) { + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, + sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_MAC_ADD); + return ret; + } + hw->uc_num++; + } else { + PMD_DRV_LOG(ERR, "MC_MAC is out of range, MAX_MC_MAC:%d", + ZXDH_MAX_MC_MAC_ADDRS); + return -EINVAL; + } + } else { + if (hw->mc_num < ZXDH_MAX_MC_MAC_ADDRS) { + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, + sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_MAC_ADD); + return ret; + } + hw->mc_num++; + } else { + PMD_DRV_LOG(ERR, "MC_MAC is out of range, MAX_MC_MAC:%d", + ZXDH_MAX_MC_MAC_ADDRS); + return -EINVAL; + } + } + } + 
dev->data->mac_addrs[index] = *mac_addr; + return 0; +} +/** + * Fun: + */ +void zxdh_dev_mac_addr_remove(struct rte_eth_dev *dev __rte_unused, uint32_t index __rte_unused) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_info msg_info = {0}; + struct rte_ether_addr *mac_addr = &dev->data->mac_addrs[index]; + uint16_t ret = 0; + + if (index >= ZXDH_MAX_MAC_ADDRS) + return; + + if (hw->is_pf) { + if (rte_is_unicast_ether_addr(mac_addr)) { + if (hw->uc_num <= ZXDH_MAX_UC_MAC_ADDRS) { + ret = zxdh_del_mac_table(hw->vport.vport, + mac_addr, hw->hash_search_index); + if (ret) { + PMD_DRV_LOG(ERR, "mac_addr_del failed, code:%d", ret); + return; + } + hw->uc_num--; + } else { + PMD_DRV_LOG(ERR, "MC_MAC is out of range, MAX_MC_MAC:%d", + ZXDH_MAX_MC_MAC_ADDRS); + return; + } + } else { + if (hw->mc_num <= ZXDH_MAX_MC_MAC_ADDRS) { + ret = zxdh_del_mac_table(hw->vport.vport, + mac_addr, hw->hash_search_index); + if (ret) { + PMD_DRV_LOG(ERR, "mac_addr_del failed, code:%d", ret); + return; + } + hw->mc_num--; + } else { + PMD_DRV_LOG(ERR, "MC_MAC is out of range, MAX_MC_MAC:%d", + ZXDH_MAX_MC_MAC_ADDRS); + return; + } + } + } else { + struct zxdh_mac_filter *mac_filter = &msg_info.data.mac_filter_msg; + + mac_filter->filter_flag = ZXDH_MAC_FILTER; + rte_memcpy(&mac_filter->mac, mac_addr, sizeof(struct rte_ether_addr)); + zxdh_msg_head_build(hw, ZXDH_MAC_DEL, &msg_info); + if (rte_is_unicast_ether_addr(mac_addr)) { + if (hw->uc_num <= ZXDH_MAX_UC_MAC_ADDRS) { + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, + sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_MAC_DEL); + return; + } + hw->uc_num--; + } else { + PMD_DRV_LOG(ERR, "MC_MAC is out of range, MAX_MC_MAC:%d", + ZXDH_MAX_MC_MAC_ADDRS); + return; + } + } else { + if (hw->mc_num <= ZXDH_MAX_MC_MAC_ADDRS) { + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, + sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_MAC_DEL); + return; + } + hw->mc_num--; + } else { + PMD_DRV_LOG(ERR, "MC_MAC is out of range, MAX_MC_MAC:%d", + ZXDH_MAX_MC_MAC_ADDRS); + return; + } + } + } + memset(&dev->data->mac_addrs[index], 0, sizeof(struct rte_ether_addr)); +} diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.h b/drivers/net/zxdh/zxdh_ethdev_ops.h index c6d6ca56fd..4630bb70db 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.h +++ b/drivers/net/zxdh/zxdh_ethdev_ops.h @@ -10,5 +10,9 @@ int zxdh_dev_set_link_up(struct rte_eth_dev *dev); int zxdh_dev_set_link_down(struct rte_eth_dev *dev); int32_t zxdh_dev_link_update(struct rte_eth_dev *dev, int32_t wait_to_complete __rte_unused); +int zxdh_dev_mac_addr_add(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr, + uint32_t index, uint32_t vmdq); +int zxdh_dev_mac_addr_set(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr); +void zxdh_dev_mac_addr_remove(struct rte_eth_dev *dev, uint32_t index); #endif /* ZXDH_ETHDEV_OPS_H */ diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index a78075c914..44ce5d1b7f 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -46,6 +46,9 @@ #define ZXDH_MSG_REQ_BODY_MAX_LEN \ (ZXDH_MSG_PAYLOAD_MAX_LEN - ZXDH_MSG_HEAD_LEN) +#define ZXDH_MAC_FILTER 0xaa +#define ZXDH_MAC_UNFILTER 0xff + enum ZXDH_DRIVER_TYPE { ZXDH_MSG_CHAN_END_MPF = 0, ZXDH_MSG_CHAN_END_PF, @@ -173,6 +176,8 @@ enum zxdh_msg_type { ZXDH_NULL = 0, ZXDH_VF_PORT_INIT = 1, ZXDH_VF_PORT_UNINIT = 2, + ZXDH_MAC_ADD = 3, 
+ ZXDH_MAC_DEL = 4, ZXDH_PORT_ATTRS_SET = 25, @@ -314,6 +319,12 @@ struct zxdh_port_attr_set_msg { uint8_t allmulti_follow; } __rte_packed; +struct zxdh_mac_filter { + uint8_t mac_flag; + uint8_t filter_flag; + struct rte_ether_addr mac; +} __rte_packed; + struct zxdh_msg_head { enum zxdh_msg_type msg_type; uint16_t vport; @@ -341,6 +352,7 @@ struct zxdh_msg_info { struct zxdh_vf_init_msg vf_init_msg; struct zxdh_port_attr_set_msg port_attr_msg; struct zxdh_link_info_msg link_msg; + struct zxdh_mac_filter mac_filter_msg; } __rte_packed data; } __rte_packed; diff --git a/drivers/net/zxdh/zxdh_np.h b/drivers/net/zxdh/zxdh_np.h index ac3931ba65..19d1f03f59 100644 --- a/drivers/net/zxdh/zxdh_np.h +++ b/drivers/net/zxdh/zxdh_np.h @@ -532,6 +532,11 @@ typedef struct zxdh_sdt_tbl_porttbl_t { uint32_t porttbl_clutch_en; } ZXDH_SDTTBL_PORTTBL_T; +typedef struct zxdh_dtb_hash_entry_info_t { + uint8_t *p_actu_key; + uint8_t *p_rst; +} ZXDH_DTB_HASH_ENTRY_INFO_T; + int zxdh_np_host_init(uint32_t dev_id, ZXDH_DEV_INIT_CTRL_T *p_dev_init_ctrl); int zxdh_np_online_uninit(uint32_t dev_id, char *port_name, uint32_t queue_id); int zxdh_np_dtb_table_entry_write(uint32_t dev_id, uint32_t queue_id, diff --git a/drivers/net/zxdh/zxdh_tables.c b/drivers/net/zxdh/zxdh_tables.c index db0132ce3f..f5b607584d 100644 --- a/drivers/net/zxdh/zxdh_tables.c +++ b/drivers/net/zxdh/zxdh_tables.c @@ -11,6 +11,10 @@ #define ZXDH_SDT_VPORT_ATT_TABLE 1 #define ZXDH_SDT_PANEL_ATT_TABLE 2 +#define ZXDH_MAC_HASH_INDEX_BASE 64 +#define ZXDH_MAC_HASH_INDEX(index) (ZXDH_MAC_HASH_INDEX_BASE + (index)) +#define ZXDH_MC_GROUP_NUM 4 + int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr) { @@ -149,3 +153,196 @@ zxdh_get_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr) return ret; } + +int +zxdh_set_mac_table(uint16_t vport, struct rte_ether_addr *addr, uint8_t hash_search_idx) +{ + struct zxdh_mac_unicast_table unicast_table = {0}; + struct zxdh_mac_multicast_table multicast_table = {0}; + union zxdh_virport_num vport_num = (union zxdh_virport_num)vport; + uint32_t ret; + uint16_t group_id = 0; + + if (rte_is_unicast_ether_addr(addr)) { + rte_memcpy(unicast_table.key.dmac_addr, addr, sizeof(struct rte_ether_addr)); + unicast_table.entry.hit_flag = 0; + unicast_table.entry.vfid = vport_num.vfid; + + ZXDH_DTB_HASH_ENTRY_INFO_T dtb_hash_entry = { + .p_actu_key = (uint8_t *)&unicast_table.key, + .p_rst = (uint8_t *)&unicast_table.entry + }; + ZXDH_DTB_USER_ENTRY_T entry_get = { + .sdt_no = ZXDH_MAC_HASH_INDEX(hash_search_idx), + .p_entry_data = (void *)&dtb_hash_entry + }; + + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &entry_get); + if (ret) { + PMD_DRV_LOG(ERR, "Insert mac_table failed"); + return -ret; + } + } else { + for (group_id = 0; group_id < 4; group_id++) { + multicast_table.key.vf_group_id = group_id; + rte_memcpy(multicast_table.key.mac_addr, + addr, sizeof(struct rte_ether_addr)); + ZXDH_DTB_HASH_ENTRY_INFO_T dtb_hash_entry = { + .p_actu_key = (uint8_t *)&multicast_table.key, + .p_rst = (uint8_t *)&multicast_table.entry + }; + + ZXDH_DTB_USER_ENTRY_T entry_get = { + .sdt_no = ZXDH_MAC_HASH_INDEX(hash_search_idx), + .p_entry_data = (void *)&dtb_hash_entry + }; + + ret = zxdh_np_dtb_table_entry_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, + &entry_get, 1); + uint8_t index = (vport_num.vfid % 64) / 32; + if (ret == 0) { + if (vport_num.vf_flag) { + if (group_id == vport_num.vfid / 64) + multicast_table.entry.mc_bitmap[index] |= + rte_cpu_to_be_32(UINT32_C(1) << + 
(31 - index)); + } else { + if (group_id == vport_num.vfid / 64) + multicast_table.entry.mc_pf_enable = + rte_cpu_to_be_32((1 << 30)); + } + } else { + if (vport_num.vf_flag) { + if (group_id == vport_num.vfid / 64) + multicast_table.entry.mc_bitmap[index] |= + rte_cpu_to_be_32(UINT32_C(1) << + (31 - index)); + else + multicast_table.entry.mc_bitmap[index] = + false; + } else { + if (group_id == vport_num.vfid / 64) + multicast_table.entry.mc_pf_enable = + rte_cpu_to_be_32((1 << 30)); + else + multicast_table.entry.mc_pf_enable = false; + } + } + + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, g_dtb_data.queueid, + 1, &entry_get); + if (ret) { + PMD_DRV_LOG(ERR, "add mac_table failed, code:%d", ret); + return -ret; + } + } + } + return 0; +} + +int +zxdh_del_mac_table(uint16_t vport, struct rte_ether_addr *addr, uint8_t hash_search_idx) +{ + struct zxdh_mac_unicast_table unicast_table = {0}; + struct zxdh_mac_multicast_table multicast_table = {0}; + union zxdh_virport_num vport_num = (union zxdh_virport_num)vport; + uint32_t ret, del_flag = 0; + uint16_t group_id = 0; + + if (rte_is_unicast_ether_addr(addr)) { + rte_memcpy(unicast_table.key.dmac_addr, addr, sizeof(struct rte_ether_addr)); + unicast_table.entry.hit_flag = 0; + unicast_table.entry.vfid = vport_num.vfid; + + ZXDH_DTB_HASH_ENTRY_INFO_T dtb_hash_entry = { + .p_actu_key = (uint8_t *)&unicast_table.key, + .p_rst = (uint8_t *)&unicast_table.entry + }; + + ZXDH_DTB_USER_ENTRY_T entry_get = { + .sdt_no = ZXDH_MAC_HASH_INDEX(hash_search_idx), + .p_entry_data = (void *)&dtb_hash_entry + }; + + ret = zxdh_np_dtb_table_entry_delete(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &entry_get); + if (ret) { + PMD_DRV_LOG(ERR, "delete l2_fwd_hash_table failed, code:%d", ret); + return -ret; + } + } else { + multicast_table.key.vf_group_id = vport_num.vfid / 64; + rte_memcpy(multicast_table.key.mac_addr, addr, sizeof(struct rte_ether_addr)); + + ZXDH_DTB_HASH_ENTRY_INFO_T dtb_hash_entry = { + .p_actu_key = (uint8_t *)&multicast_table.key, + .p_rst = (uint8_t *)&multicast_table.entry + }; + + ZXDH_DTB_USER_ENTRY_T entry_get = { + .sdt_no = ZXDH_MAC_HASH_INDEX(hash_search_idx), + .p_entry_data = (void *)&dtb_hash_entry + }; + + ret = zxdh_np_dtb_table_entry_get(ZXDH_DEVICE_NO, + g_dtb_data.queueid, &entry_get, 1); + uint8_t index = (vport_num.vfid % 64) / 32; + if (vport_num.vf_flag) + multicast_table.entry.mc_bitmap[index] &= + ~(rte_cpu_to_be_32(UINT32_C(1) << (31 - index))); + else + multicast_table.entry.mc_pf_enable = 0; + + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &entry_get); + if (ret) { + PMD_DRV_LOG(ERR, "mac_addr_add mc_table failed, code:%d", ret); + return -ret; + } + + for (group_id = 0; group_id < ZXDH_MC_GROUP_NUM; group_id++) { + multicast_table.key.vf_group_id = group_id; + rte_memcpy(multicast_table.key.mac_addr, addr, + sizeof(struct rte_ether_addr)); + ZXDH_DTB_HASH_ENTRY_INFO_T dtb_hash_entry = { + .p_actu_key = (uint8_t *)&multicast_table.key, + .p_rst = (uint8_t *)&multicast_table.entry + }; + ZXDH_DTB_USER_ENTRY_T entry_get = { + .sdt_no = ZXDH_MAC_HASH_INDEX(hash_search_idx), + .p_entry_data = (void *)&dtb_hash_entry + }; + + ret = zxdh_np_dtb_table_entry_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, + &entry_get, 1); + if (multicast_table.entry.mc_bitmap[0] == 0 && + multicast_table.entry.mc_bitmap[1] == 0 && + multicast_table.entry.mc_pf_enable == 0) { + if (group_id == (ZXDH_MC_GROUP_NUM - 1)) + del_flag = 1; + } else { + break; + } + } + if (del_flag) { + for (group_id = 0; group_id < 
ZXDH_MC_GROUP_NUM; group_id++) { + multicast_table.key.vf_group_id = group_id; + rte_memcpy(multicast_table.key.mac_addr, addr, + sizeof(struct rte_ether_addr)); + ZXDH_DTB_HASH_ENTRY_INFO_T dtb_hash_entry = { + .p_actu_key = (uint8_t *)&multicast_table.key, + .p_rst = (uint8_t *)&multicast_table.entry + }; + ZXDH_DTB_USER_ENTRY_T entry_get = { + .sdt_no = ZXDH_MAC_HASH_INDEX(hash_search_idx), + .p_entry_data = (void *)&dtb_hash_entry + }; + + ret = zxdh_np_dtb_table_entry_delete(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &entry_get); + } + } + } + return 0; +} diff --git a/drivers/net/zxdh/zxdh_tables.h b/drivers/net/zxdh/zxdh_tables.h index 8676a8b375..f16c4923ef 100644 --- a/drivers/net/zxdh/zxdh_tables.h +++ b/drivers/net/zxdh/zxdh_tables.h @@ -142,10 +142,46 @@ struct zxdh_panel_table { uint32_t rsv_2; }; /* 16B */ +struct zxdh_mac_unicast_key { + uint16_t rsv; + uint8_t dmac_addr[6]; +}; + +struct zxdh_mac_unicast_entry { + uint8_t rsv1 : 7, + hit_flag : 1; + uint8_t rsv; + uint16_t vfid; +}; + +struct zxdh_mac_unicast_table { + struct zxdh_mac_unicast_key key; + struct zxdh_mac_unicast_entry entry; +}; + +struct zxdh_mac_multicast_key { + uint8_t rsv; + uint8_t vf_group_id; + uint8_t mac_addr[6]; +}; + +struct zxdh_mac_multicast_entry { + uint32_t mc_pf_enable; + uint32_t rsv1; + uint32_t mc_bitmap[2]; +}; + +struct zxdh_mac_multicast_table { + struct zxdh_mac_multicast_key key; + struct zxdh_mac_multicast_entry entry; +}; + int zxdh_port_attr_init(struct rte_eth_dev *dev); int zxdh_panel_table_init(struct rte_eth_dev *dev); int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); int zxdh_port_attr_uninit(struct rte_eth_dev *dev); int zxdh_get_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); +int zxdh_set_mac_table(uint16_t vport, struct rte_ether_addr *addr, uint8_t hash_search_idx); +int zxdh_del_mac_table(uint16_t vport, struct rte_ether_addr *addr, uint8_t hash_search_idx); #endif /* ZXDH_TABLES_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 68977 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
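As with the link ops, the MAC ops above are reached through standard ethdev calls. The sketch below is illustrative only (the addresses and helper name are additions here; a configured, started port is assumed). On a VF the PMD relays these operations to the PF via ZXDH_MAC_ADD/ZXDH_MAC_DEL messages; on a PF they go straight to the hardware MAC hash tables:

    #include <rte_ethdev.h>
    #include <rte_ether.h>

    /* Hypothetical helper: exercise the MAC filter ops on one port. */
    static void
    mac_filter_demo(uint16_t port_id)
    {
        /* locally administered example addresses, illustrative only */
        struct rte_ether_addr primary = {{0x02, 0x00, 0x00, 0x00, 0x00, 0x01}};
        struct rte_ether_addr extra   = {{0x02, 0x00, 0x00, 0x00, 0x00, 0x02}};

        /* replace the default (index 0) address -> zxdh_dev_mac_addr_set() */
        rte_eth_dev_default_mac_addr_set(port_id, &primary);

        /* add, then remove, a second unicast filter entry
         * -> zxdh_dev_mac_addr_add() / zxdh_dev_mac_addr_remove() */
        rte_eth_dev_mac_addr_add(port_id, &extra, 0 /* pool, unused here */);
        rte_eth_dev_mac_addr_remove(port_id, &extra);
    }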
* [PATCH v4 11/15] net/zxdh: promisc/allmulti ops implementations 2024-12-18 9:25 ` [PATCH v4 00/15] net/zxdh: updated net zxdh driver Junlong Wang ` (9 preceding siblings ...) 2024-12-18 9:25 ` [PATCH v4 10/15] net/zxdh: mac set/add/remove ops implementations Junlong Wang @ 2024-12-18 9:25 ` Junlong Wang 2024-12-18 9:25 ` [PATCH v4 12/15] net/zxdh: vlan filter/ offload " Junlong Wang ` (3 subsequent siblings) 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-18 9:25 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 18516 bytes --] Provided promiscuous/allmulticast ops. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- doc/guides/nics/features/zxdh.ini | 2 + doc/guides/nics/zxdh.rst | 2 + drivers/net/zxdh/zxdh_ethdev.c | 21 ++- drivers/net/zxdh/zxdh_ethdev.h | 2 + drivers/net/zxdh/zxdh_ethdev_ops.c | 132 +++++++++++++++++ drivers/net/zxdh/zxdh_ethdev_ops.h | 4 + drivers/net/zxdh/zxdh_msg.h | 10 ++ drivers/net/zxdh/zxdh_tables.c | 223 +++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_tables.h | 22 +++ 9 files changed, 417 insertions(+), 1 deletion(-) diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index dc09fe3453..e9b237e102 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -14,3 +14,5 @@ Link status = Y Link status event = Y Unicast MAC filter = Y Multicast MAC filter = Y +Promiscuous mode = Y +Allmulticast mode = Y diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index e0b0776aca..0399df1302 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -26,6 +26,8 @@ Features of the ZXDH PMD are: - Set Link down or up - Unicast MAC filter - Multicast MAC filter +- Promiscuous mode +- Allmulticast mode Driver compilation and testing diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 85ada87cdc..1d64b877c1 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -901,8 +901,16 @@ zxdh_tables_uninit(struct rte_eth_dev *dev) int ret; ret = zxdh_port_attr_uninit(dev); - if (ret) + if (ret) { PMD_DRV_LOG(ERR, "zxdh_port_attr_uninit failed"); + return ret; + } + + ret = zxdh_promisc_table_uninit(dev); + if (ret) { + PMD_DRV_LOG(ERR, "uninit promisc_table failed"); + return ret; + } return ret; } @@ -1055,6 +1063,10 @@ static const struct eth_dev_ops zxdh_eth_dev_ops = { .mac_addr_add = zxdh_dev_mac_addr_add, .mac_addr_remove = zxdh_dev_mac_addr_remove, .mac_addr_set = zxdh_dev_mac_addr_set, + .promiscuous_enable = zxdh_dev_promiscuous_enable, + .promiscuous_disable = zxdh_dev_promiscuous_disable, + .allmulticast_enable = zxdh_dev_allmulticast_enable, + .allmulticast_disable = zxdh_dev_allmulticast_disable, }; static int32_t @@ -1306,6 +1318,13 @@ zxdh_tables_init(struct rte_eth_dev *dev) PMD_DRV_LOG(ERR, " panel table init failed"); return ret; } + + ret = zxdh_promisc_table_init(dev); + if (ret) { + PMD_DRV_LOG(ERR, "promisc_table_init failed"); + return ret; + } + return ret; } diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index 5b95cb1c2a..3cdac5de73 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -98,6 +98,8 @@ struct zxdh_hw { uint8_t has_tx_offload; uint8_t has_rx_offload; uint8_t admin_status; + uint8_t promisc_status; + uint8_t allmulti_status; }; struct zxdh_dtb_shared_data { diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.c b/drivers/net/zxdh/zxdh_ethdev_ops.c index
751f80e9b4..aed4e6410c 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.c +++ b/drivers/net/zxdh/zxdh_ethdev_ops.c @@ -397,3 +397,135 @@ void zxdh_dev_mac_addr_remove(struct rte_eth_dev *dev __rte_unused, uint32_t ind } memset(&dev->data->mac_addrs[index], 0, sizeof(struct rte_ether_addr)); } + +int zxdh_dev_promiscuous_enable(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_info msg_info = {0}; + int16_t ret = 0; + + if (hw->promisc_status == 0) { + if (hw->is_pf) { + ret = zxdh_dev_unicast_table_set(hw, hw->vport.vport, true); + if (hw->allmulti_status == 0) + ret = zxdh_dev_multicast_table_set(hw, hw->vport.vport, true); + + } else { + struct zxdh_port_promisc_msg *promisc_msg = &msg_info.data.port_promisc_msg; + + zxdh_msg_head_build(hw, ZXDH_PORT_PROMISC_SET, &msg_info); + promisc_msg->mode = ZXDH_PROMISC_MODE; + promisc_msg->value = true; + if (hw->allmulti_status == 0) + promisc_msg->mc_follow = true; + + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_PROMISC_MODE); + return ret; + } + } + hw->promisc_status = 1; + } + return ret; +} +/** + * Fun: + */ +int zxdh_dev_promiscuous_disable(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + int16_t ret = 0; + struct zxdh_msg_info msg_info = {0}; + + if (hw->promisc_status == 1) { + if (hw->is_pf) { + ret = zxdh_dev_unicast_table_set(hw, hw->vport.vport, false); + if (hw->allmulti_status == 0) + ret = zxdh_dev_multicast_table_set(hw, hw->vport.vport, false); + + } else { + struct zxdh_port_promisc_msg *promisc_msg = &msg_info.data.port_promisc_msg; + + zxdh_msg_head_build(hw, ZXDH_PORT_PROMISC_SET, &msg_info); + promisc_msg->mode = ZXDH_PROMISC_MODE; + promisc_msg->value = false; + if (hw->allmulti_status == 0) + promisc_msg->mc_follow = true; + + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_PROMISC_MODE); + return ret; + } + } + hw->promisc_status = 0; + } + return ret; +} +/** + * Fun: + */ +int zxdh_dev_allmulticast_enable(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + int16_t ret = 0; + struct zxdh_msg_info msg_info = {0}; + + if (hw->allmulti_status == 0) { + if (hw->is_pf) { + ret = zxdh_dev_multicast_table_set(hw, hw->vport.vport, true); + } else { + struct zxdh_port_promisc_msg *promisc_msg = &msg_info.data.port_promisc_msg; + + zxdh_msg_head_build(hw, ZXDH_PORT_PROMISC_SET, &msg_info); + + promisc_msg->mode = ZXDH_ALLMULTI_MODE; + promisc_msg->value = true; + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_ALLMULTI_MODE); + return ret; + } + } + hw->allmulti_status = 1; + } + return ret; +} + +int zxdh_dev_allmulticast_disable(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + int16_t ret = 0; + struct zxdh_msg_info msg_info = {0}; + + if (hw->allmulti_status == 1) { + if (hw->is_pf) { + if (hw->promisc_status == 1) + goto end; + ret = zxdh_dev_multicast_table_set(hw, hw->vport.vport, false); + } else { + struct zxdh_port_promisc_msg *promisc_msg = &msg_info.data.port_promisc_msg; + + zxdh_msg_head_build(hw, ZXDH_PORT_PROMISC_SET, &msg_info); + if (hw->promisc_status == 1) + goto end; + promisc_msg->mode = ZXDH_ALLMULTI_MODE; + 
promisc_msg->value = false; + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_ALLMULTI_MODE); + return ret; + } + } + hw->allmulti_status = 0; + } + return ret; +end: + hw->allmulti_status = 0; + return ret; +} diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.h b/drivers/net/zxdh/zxdh_ethdev_ops.h index 4630bb70db..394ddedc0e 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.h +++ b/drivers/net/zxdh/zxdh_ethdev_ops.h @@ -14,5 +14,9 @@ int zxdh_dev_mac_addr_add(struct rte_eth_dev *dev, struct rte_ether_addr *mac_ad uint32_t index, uint32_t vmdq); int zxdh_dev_mac_addr_set(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr); void zxdh_dev_mac_addr_remove(struct rte_eth_dev *dev, uint32_t index); +int zxdh_dev_promiscuous_enable(struct rte_eth_dev *dev); +int zxdh_dev_promiscuous_disable(struct rte_eth_dev *dev); +int zxdh_dev_allmulticast_enable(struct rte_eth_dev *dev); +int zxdh_dev_allmulticast_disable(struct rte_eth_dev *dev); #endif /* ZXDH_ETHDEV_OPS_H */ diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index 44ce5d1b7f..2abf579a80 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -48,6 +48,8 @@ #define ZXDH_MAC_FILTER 0xaa #define ZXDH_MAC_UNFILTER 0xff +#define ZXDH_PROMISC_MODE 1 +#define ZXDH_ALLMULTI_MODE 2 enum ZXDH_DRIVER_TYPE { ZXDH_MSG_CHAN_END_MPF = 0, @@ -180,6 +182,7 @@ enum zxdh_msg_type { ZXDH_MAC_DEL = 4, ZXDH_PORT_ATTRS_SET = 25, + ZXDH_PORT_PROMISC_SET = 26, ZXDH_MSG_TYPE_END, }; @@ -325,6 +328,12 @@ struct zxdh_mac_filter { struct rte_ether_addr mac; } __rte_packed; +struct zxdh_port_promisc_msg { + uint8_t mode; + uint8_t value; + uint8_t mc_follow; +} __rte_packed; + struct zxdh_msg_head { enum zxdh_msg_type msg_type; uint16_t vport; @@ -353,6 +362,7 @@ struct zxdh_msg_info { struct zxdh_port_attr_set_msg port_attr_msg; struct zxdh_link_info_msg link_msg; struct zxdh_mac_filter mac_filter_msg; + struct zxdh_port_promisc_msg port_promisc_msg; } __rte_packed data; } __rte_packed; diff --git a/drivers/net/zxdh/zxdh_tables.c b/drivers/net/zxdh/zxdh_tables.c index f5b607584d..45aeb3e3e4 100644 --- a/drivers/net/zxdh/zxdh_tables.c +++ b/drivers/net/zxdh/zxdh_tables.c @@ -10,10 +10,15 @@ #define ZXDH_SDT_VPORT_ATT_TABLE 1 #define ZXDH_SDT_PANEL_ATT_TABLE 2 +#define ZXDH_SDT_BROCAST_ATT_TABLE 6 +#define ZXDH_SDT_UNICAST_ATT_TABLE 10 +#define ZXDH_SDT_MULTICAST_ATT_TABLE 11 #define ZXDH_MAC_HASH_INDEX_BASE 64 #define ZXDH_MAC_HASH_INDEX(index) (ZXDH_MAC_HASH_INDEX_BASE + (index)) #define ZXDH_MC_GROUP_NUM 4 +#define ZXDH_BASE_VFID 1152 +#define ZXDH_TABLE_HIT_FLAG 128 int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr) @@ -346,3 +351,221 @@ zxdh_del_mac_table(uint16_t vport, struct rte_ether_addr *addr, uint8_t hash_se } return 0; } + +int +zxdh_promisc_table_init(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint32_t ret, vf_group_id = 0; + struct zxdh_brocast_table brocast_table = {0}; + struct zxdh_unitcast_table uc_table = {0}; + struct zxdh_multicast_table mc_table = {0}; + + if (!hw->is_pf) + return 0; + + for (; vf_group_id < 4; vf_group_id++) { + brocast_table.flag = rte_be_to_cpu_32(ZXDH_TABLE_HIT_FLAG); + ZXDH_DTB_ERAM_ENTRY_INFO_T eram_brocast_entry = { + .index = ((hw->vfid - ZXDH_BASE_VFID) << 2) + vf_group_id, + .p_data = (uint32_t *)&brocast_table + }; + ZXDH_DTB_USER_ENTRY_T entry_brocast = { + .sdt_no = ZXDH_SDT_BROCAST_ATT_TABLE, 
+ .p_entry_data = (void *)&eram_brocast_entry + }; + + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &entry_brocast); + if (ret) { + PMD_DRV_LOG(ERR, "write brocast table failed"); + return ret; + } + + uc_table.uc_flood_pf_enable = rte_be_to_cpu_32(ZXDH_TABLE_HIT_FLAG); + ZXDH_DTB_ERAM_ENTRY_INFO_T eram_uc_entry = { + .index = ((hw->vfid - ZXDH_BASE_VFID) << 2) + vf_group_id, + .p_data = (uint32_t *)&uc_table + }; + ZXDH_DTB_USER_ENTRY_T entry_unicast = { + .sdt_no = ZXDH_SDT_UNICAST_ATT_TABLE, + .p_entry_data = (void *)&eram_uc_entry + }; + + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &entry_unicast); + if (ret) { + PMD_DRV_LOG(ERR, "write unicast table failed"); + return ret; + } + + mc_table.mc_flood_pf_enable = rte_be_to_cpu_32(ZXDH_TABLE_HIT_FLAG); + ZXDH_DTB_ERAM_ENTRY_INFO_T eram_mc_entry = { + .index = ((hw->vfid - ZXDH_BASE_VFID) << 2) + vf_group_id, + .p_data = (uint32_t *)&mc_table + }; + ZXDH_DTB_USER_ENTRY_T entry_multicast = { + .sdt_no = ZXDH_SDT_MULTICAST_ATT_TABLE, + .p_entry_data = (void *)&eram_mc_entry + }; + + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, g_dtb_data.queueid, + 1, &entry_multicast); + if (ret) { + PMD_DRV_LOG(ERR, "write multicast table failed"); + return ret; + } + } + + return ret; +} + +int +zxdh_promisc_table_uninit(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint32_t ret, vf_group_id = 0; + struct zxdh_brocast_table brocast_table = {0}; + struct zxdh_unitcast_table uc_table = {0}; + struct zxdh_multicast_table mc_table = {0}; + + if (!hw->is_pf) + return 0; + + for (; vf_group_id < 4; vf_group_id++) { + brocast_table.flag = rte_be_to_cpu_32(ZXDH_TABLE_HIT_FLAG); + ZXDH_DTB_ERAM_ENTRY_INFO_T eram_brocast_entry = { + .index = ((hw->vfid - ZXDH_BASE_VFID) << 2) + vf_group_id, + .p_data = (uint32_t *)&brocast_table + }; + ZXDH_DTB_USER_ENTRY_T entry_brocast = { + .sdt_no = ZXDH_SDT_BROCAST_ATT_TABLE, + .p_entry_data = (void *)&eram_brocast_entry + }; + + ret = zxdh_np_dtb_table_entry_delete(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &entry_brocast); + if (ret) { + PMD_DRV_LOG(ERR, "write brocast table failed"); + return ret; + } + + uc_table.uc_flood_pf_enable = rte_be_to_cpu_32(ZXDH_TABLE_HIT_FLAG); + ZXDH_DTB_ERAM_ENTRY_INFO_T eram_uc_entry = { + .index = ((hw->vfid - ZXDH_BASE_VFID) << 2) + vf_group_id, + .p_data = (uint32_t *)&uc_table + }; + ZXDH_DTB_USER_ENTRY_T entry_unicast = { + .sdt_no = ZXDH_SDT_UNICAST_ATT_TABLE, + .p_entry_data = (void *)&eram_uc_entry + }; + + ret = zxdh_np_dtb_table_entry_delete(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &entry_unicast); + if (ret) { + PMD_DRV_LOG(ERR, "write unicast table failed"); + return ret; + } + + mc_table.mc_flood_pf_enable = rte_be_to_cpu_32(ZXDH_TABLE_HIT_FLAG); + ZXDH_DTB_ERAM_ENTRY_INFO_T eram_mc_entry = { + .index = ((hw->vfid - ZXDH_BASE_VFID) << 2) + vf_group_id, + .p_data = (uint32_t *)&mc_table + }; + ZXDH_DTB_USER_ENTRY_T entry_multicast = { + .sdt_no = ZXDH_SDT_MULTICAST_ATT_TABLE, + .p_entry_data = (void *)&eram_mc_entry + }; + + ret = zxdh_np_dtb_table_entry_delete(ZXDH_DEVICE_NO, g_dtb_data.queueid, + 1, &entry_multicast); + if (ret) { + PMD_DRV_LOG(ERR, "write multicast table failed"); + return ret; + } + } + + return ret; +} + +int +zxdh_dev_unicast_table_set(struct zxdh_hw *hw, uint16_t vport, bool enable) +{ + int16_t ret = 0; + struct zxdh_unitcast_table uc_table = {0}; + union zxdh_virport_num vport_num = (union zxdh_virport_num)vport; + + ZXDH_DTB_ERAM_ENTRY_INFO_T 
uc_table_entry = { + .index = ((hw->vfid - ZXDH_BASE_VFID) << 2) + vport_num.vfid / 64, + .p_data = (uint32_t *)&uc_table + }; + ZXDH_DTB_USER_ENTRY_T entry = { + .sdt_no = ZXDH_SDT_UNICAST_ATT_TABLE, + .p_entry_data = (void *)&uc_table_entry + }; + + ret = zxdh_np_dtb_table_entry_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, &entry, 1); + if (ret) { + PMD_DRV_LOG(ERR, "unicast_table_get_failed:%d", hw->vfid); + return -ret; + } + + if (vport_num.vf_flag) { + if (enable) + uc_table.bitmap[(vport_num.vfid % 64) / 32] |= + UINT32_C(1) << (31 - (vport_num.vfid % 64) % 32); + else + uc_table.bitmap[(vport_num.vfid % 64) / 32] &= + ~(UINT32_C(1) << (31 - (vport_num.vfid % 64) % 32)); + } else { + uc_table.uc_flood_pf_enable = rte_be_to_cpu_32(ZXDH_TABLE_HIT_FLAG + (enable << 6)); + } + + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, g_dtb_data.queueid, 1, &entry); + if (ret) { + PMD_DRV_LOG(ERR, "unicast_table_set_failed:%d", hw->vfid); + return -ret; + } + return 0; +} + +int +zxdh_dev_multicast_table_set(struct zxdh_hw *hw, uint16_t vport, bool enable) +{ + int16_t ret = 0; + struct zxdh_multicast_table mc_table = {0}; + union zxdh_virport_num vport_num = (union zxdh_virport_num)vport; + + ZXDH_DTB_ERAM_ENTRY_INFO_T mc_table_entry = { + .index = ((hw->vfid - ZXDH_BASE_VFID) << 2) + vport_num.vfid / 64, + .p_data = (uint32_t *)&mc_table + }; + ZXDH_DTB_USER_ENTRY_T entry = { + .sdt_no = ZXDH_SDT_MULTICAST_ATT_TABLE, + .p_entry_data = (void *)&mc_table_entry + }; + + ret = zxdh_np_dtb_table_entry_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, &entry, 1); + if (ret) { + PMD_DRV_LOG(ERR, "allmulti_table_get_failed:%d", hw->vfid); + return -ret; + } + + if (vport_num.vf_flag) { + if (enable) + mc_table.bitmap[(vport_num.vfid % 64) / 32] |= + UINT32_C(1) << (31 - (vport_num.vfid % 64) % 32); + else + mc_table.bitmap[(vport_num.vfid % 64) / 32] &= + ~(UINT32_C(1) << (31 - (vport_num.vfid % 64) % 32)); + + } else { + mc_table.mc_flood_pf_enable = rte_be_to_cpu_32(ZXDH_TABLE_HIT_FLAG + (enable << 6)); + } + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, g_dtb_data.queueid, 1, &entry); + if (ret) { + PMD_DRV_LOG(ERR, "allmulti_table_set_failed:%d", hw->vfid); + return -ret; + } + return 0; +} diff --git a/drivers/net/zxdh/zxdh_tables.h b/drivers/net/zxdh/zxdh_tables.h index f16c4923ef..fb30c8f32e 100644 --- a/drivers/net/zxdh/zxdh_tables.h +++ b/drivers/net/zxdh/zxdh_tables.h @@ -176,6 +176,24 @@ struct zxdh_mac_multicast_table { struct zxdh_mac_multicast_entry entry; }; +struct zxdh_brocast_table { + uint32_t flag; + uint32_t rsv; + uint32_t bitmap[2]; +}; + +struct zxdh_unitcast_table { + uint32_t uc_flood_pf_enable; + uint32_t rsv; + uint32_t bitmap[2]; +}; + +struct zxdh_multicast_table { + uint32_t mc_flood_pf_enable; + uint32_t rsv; + uint32_t bitmap[2]; +}; + int zxdh_port_attr_init(struct rte_eth_dev *dev); int zxdh_panel_table_init(struct rte_eth_dev *dev); int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); @@ -183,5 +201,9 @@ int zxdh_port_attr_uninit(struct rte_eth_dev *dev); int zxdh_get_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); int zxdh_set_mac_table(uint16_t vport, struct rte_ether_addr *addr, uint8_t hash_search_idx); int zxdh_del_mac_table(uint16_t vport, struct rte_ether_addr *addr, uint8_t hash_search_idx); +int zxdh_promisc_table_init(struct rte_eth_dev *dev); +int zxdh_promisc_table_uninit(struct rte_eth_dev *dev); +int zxdh_dev_unicast_table_set(struct zxdh_hw *hw, uint16_t vport, bool enable); +int 
zxdh_dev_multicast_table_set(struct zxdh_hw *hw, uint16_t vport, bool enable); #endif /* ZXDH_TABLES_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 45419 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
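A note on the flood-bitmap layout used by the unicast/multicast attribute tables in the patch above: each ERAM entry carries the enable bits for 64 VFs in two 32-bit words, the entry is selected by vfid / 64 on top of the PF's base index, and bits are assigned MSB-first within each word. Below is a minimal standalone sketch of that indexing, mirroring the arithmetic in zxdh_dev_unicast_table_set(); the helper name is illustrative and not part of the patch.

#include <stdint.h>
#include <stdio.h>

/* Mirror of the bitmap math in zxdh_dev_unicast_table_set():
 * (vfid % 64) / 32 picks one of the two 32-bit words of the entry,
 * and the bit is counted from the most-significant end of that word. */
static void set_vf_flood_bit(uint32_t bitmap[2], uint16_t vfid, int enable)
{
	uint32_t word = (vfid % 64) / 32;       /* which word of the entry */
	uint32_t bit = 31 - ((vfid % 64) % 32); /* MSB-first bit position  */

	if (enable)
		bitmap[word] |= UINT32_C(1) << bit;
	else
		bitmap[word] &= ~(UINT32_C(1) << bit);
}

int main(void)
{
	uint32_t bitmap[2] = {0, 0};

	set_vf_flood_bit(bitmap, 33, 1);
	printf("%08x %08x\n", bitmap[0], bitmap[1]); /* 00000000 40000000 */
	return 0;
}

For VF 33 this lands in word 1, bit 30, which is the same bit the patch toggles before writing the entry back through zxdh_np_dtb_table_entry_write().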
* [PATCH v4 12/15] net/zxdh: vlan filter/ offload ops implementations 2024-12-18 9:25 ` [PATCH v4 00/15] net/zxdh: updated net zxdh driver Junlong Wang ` (10 preceding siblings ...) 2024-12-18 9:25 ` [PATCH v4 11/15] net/zxdh: promisc/allmulti " Junlong Wang @ 2024-12-18 9:25 ` Junlong Wang 2024-12-18 9:26 ` [PATCH v4 13/15] net/zxdh: rss hash config/update, reta update/get Junlong Wang ` (2 subsequent siblings) 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-18 9:25 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 20602 bytes --] provided vlan filter, vlan offload ops. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- doc/guides/nics/features/zxdh.ini | 3 + doc/guides/nics/zxdh.rst | 3 + drivers/net/zxdh/zxdh_ethdev.c | 40 +++++- drivers/net/zxdh/zxdh_ethdev_ops.c | 223 +++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev_ops.h | 2 + drivers/net/zxdh/zxdh_msg.h | 22 +++ drivers/net/zxdh/zxdh_rxtx.c | 18 +++ drivers/net/zxdh/zxdh_tables.c | 99 +++++++++++++ drivers/net/zxdh/zxdh_tables.h | 10 +- 9 files changed, 417 insertions(+), 3 deletions(-) diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index e9b237e102..6fb006c2da 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -16,3 +16,6 @@ Unicast MAC filter = Y Multicast MAC filter = Y Promiscuous mode = Y Allmulticast mode = Y +VLAN filter = Y +VLAN offload = Y +QinQ offload = Y diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index 0399df1302..3a7585d123 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -28,6 +28,9 @@ Features of the ZXDH PMD are: - Multicast MAC filter - Promiscuous mode - Multicast mode +- VLAN filter and VLAN offload +- VLAN stripping and inserting +- QINQ stripping and inserting Driver compilation and testing diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 1d64b877c1..cc32b467a9 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -758,6 +758,34 @@ zxdh_alloc_queues(struct rte_eth_dev *dev, uint16_t nr_vq) return 0; } +static int +zxdh_vlan_offload_configure(struct rte_eth_dev *dev) +{ + int ret; + int mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK | RTE_ETH_QINQ_STRIP_MASK; + + ret = zxdh_dev_vlan_offload_set(dev, mask); + if (ret) { + PMD_DRV_LOG(ERR, "vlan offload set error"); + return -1; + } + + return 0; +} + +static int +zxdh_dev_conf_offload(struct rte_eth_dev *dev) +{ + int ret = 0; + + ret = zxdh_vlan_offload_configure(dev); + if (ret) { + PMD_DRV_LOG(ERR, "zxdh_vlan_offload_configure failed"); + return ret; + } + + return 0; +} static int32_t zxdh_dev_configure(struct rte_eth_dev *dev) @@ -815,7 +843,7 @@ zxdh_dev_configure(struct rte_eth_dev *dev) nr_vq = dev->data->nb_rx_queues + dev->data->nb_tx_queues; if (nr_vq == hw->queue_num) - return 0; + goto end; PMD_DRV_LOG(DEBUG, "queue changed need reset "); /* Reset the device although not necessary at startup */ @@ -847,6 +875,8 @@ zxdh_dev_configure(struct rte_eth_dev *dev) zxdh_pci_reinit_complete(hw); +end: + zxdh_dev_conf_offload(dev); return ret; } @@ -1067,6 +1097,8 @@ static const struct eth_dev_ops zxdh_eth_dev_ops = { .promiscuous_disable = zxdh_dev_promiscuous_disable, .allmulticast_enable = zxdh_dev_allmulticast_enable, .allmulticast_disable = zxdh_dev_allmulticast_disable, + .vlan_filter_set = zxdh_dev_vlan_filter_set, + .vlan_offload_set = 
zxdh_dev_vlan_offload_set, }; static int32_t @@ -1325,6 +1357,12 @@ zxdh_tables_init(struct rte_eth_dev *dev) return ret; } + ret = zxdh_vlan_filter_table_init(dev); + if (ret) { + PMD_DRV_LOG(ERR, " vlan filter table init failed"); + return ret; + } + return ret; } diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.c b/drivers/net/zxdh/zxdh_ethdev_ops.c index aed4e6410c..94c5e6dbc8 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.c +++ b/drivers/net/zxdh/zxdh_ethdev_ops.c @@ -2,6 +2,8 @@ * Copyright(c) 2024 ZTE Corporation */ +#include <rte_malloc.h> + #include "zxdh_ethdev.h" #include "zxdh_pci.h" #include "zxdh_msg.h" @@ -9,6 +11,8 @@ #include "zxdh_tables.h" #include "zxdh_logs.h" +#define ZXDH_VLAN_FILTER_GROUPS 64 + static int32_t zxdh_config_port_status(struct rte_eth_dev *dev, uint16_t link_status) { struct zxdh_hw *hw = dev->data->dev_private; @@ -529,3 +533,222 @@ int zxdh_dev_allmulticast_disable(struct rte_eth_dev *dev) hw->allmulti_status = 0; return ret; } + +int +zxdh_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on) +{ + struct zxdh_hw *hw = (struct zxdh_hw *)dev->data->dev_private; + uint16_t idx = 0; + uint16_t bit_idx = 0; + uint8_t msg_type = 0; + int ret = 0; + + vlan_id &= RTE_VLAN_ID_MASK; + if (vlan_id == 0 || vlan_id == RTE_ETHER_MAX_VLAN_ID) { + PMD_DRV_LOG(ERR, "vlan id (%d) is reserved", vlan_id); + return -EINVAL; + } + + if (dev->data->dev_started == 0) { + PMD_DRV_LOG(ERR, "vlan_filter dev not start"); + return -1; + } + + idx = vlan_id / ZXDH_VLAN_FILTER_GROUPS; + bit_idx = vlan_id % ZXDH_VLAN_FILTER_GROUPS; + + if (on) { + if (dev->data->vlan_filter_conf.ids[idx] & (1ULL << bit_idx)) { + PMD_DRV_LOG(ERR, "vlan:%d has already added.", vlan_id); + return 0; + } + msg_type = ZXDH_VLAN_FILTER_ADD; + } else { + if (!(dev->data->vlan_filter_conf.ids[idx] & (1ULL << bit_idx))) { + PMD_DRV_LOG(ERR, "vlan:%d has already deleted.", vlan_id); + return 0; + } + msg_type = ZXDH_VLAN_FILTER_DEL; + } + + if (hw->is_pf) { + ret = zxdh_vlan_filter_table_set(hw->vport.vport, vlan_id, on); + if (ret) { + PMD_DRV_LOG(ERR, "vlan_id:%d table set failed.", vlan_id); + return -1; + } + } else { + struct zxdh_msg_info msg = {0}; + zxdh_msg_head_build(hw, msg_type, &msg); + msg.data.vlan_filter_msg.vlan_id = vlan_id; + ret = zxdh_vf_send_msg_to_pf(dev, &msg, sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d ", + hw->vport.vport, msg_type); + return ret; + } + } + + if (on) + dev->data->vlan_filter_conf.ids[idx] |= (1ULL << bit_idx); + else + dev->data->vlan_filter_conf.ids[idx] &= ~(1ULL << bit_idx); + + return 0; +} + +int +zxdh_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct rte_eth_rxmode *rxmode; + struct zxdh_msg_info msg = {0}; + struct zxdh_port_attr_table port_attr = {0}; + int ret = 0; + + rxmode = &dev->data->dev_conf.rxmode; + if (mask & RTE_ETH_VLAN_FILTER_MASK) { + if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) { + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.vlan_filter_enable = true; + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "port %d vlan filter set failed", + hw->vport.vfid); + return -EAGAIN; + } + } else { + msg.data.vlan_filter_set_msg.enable = true; + zxdh_msg_head_build(hw, ZXDH_VLAN_FILTER_SET, &msg); + ret = zxdh_vf_send_msg_to_pf(hw->eth_dev, &msg, + sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, 
"port %d vlan filter set failed", + hw->vport.vfid); + return -EAGAIN; + } + } + } else { + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.vlan_filter_enable = false; + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "port %d vlan filter set failed", + hw->vport.vfid); + return -EAGAIN; + } + } else { + msg.data.vlan_filter_set_msg.enable = true; + zxdh_msg_head_build(hw, ZXDH_VLAN_FILTER_SET, &msg); + ret = zxdh_vf_send_msg_to_pf(hw->eth_dev, &msg, + sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "port %d vlan filter set failed", + hw->vport.vfid); + return -EAGAIN; + } + } + } + } + + if (mask & RTE_ETH_VLAN_STRIP_MASK) { + if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) { + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.vlan_strip_offload = true; + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "port %d vlan strip set failed", + hw->vport.vfid); + return -EAGAIN; + } + } else { + msg.data.vlan_offload_msg.enable = true; + msg.data.vlan_offload_msg.type = ZXDH_VLAN_STRIP_MSG_TYPE; + zxdh_msg_head_build(hw, ZXDH_VLAN_OFFLOAD, &msg); + ret = zxdh_vf_send_msg_to_pf(hw->eth_dev, &msg, + sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "port %d vlan strip set failed", + hw->vport.vfid); + return -EAGAIN; + } + } + } else { + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.vlan_strip_offload = false; + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "port %d vlan strip set failed", + hw->vport.vfid); + return -EAGAIN; + } + } else { + msg.data.vlan_offload_msg.enable = false; + msg.data.vlan_offload_msg.type = ZXDH_VLAN_STRIP_MSG_TYPE; + zxdh_msg_head_build(hw, ZXDH_VLAN_OFFLOAD, &msg); + ret = zxdh_vf_send_msg_to_pf(hw->eth_dev, &msg, + sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "port %d vlan strip set failed", + hw->vport.vfid); + return -EAGAIN; + } + } + } + } + + if (mask & RTE_ETH_QINQ_STRIP_MASK) { + memset(&msg, 0, sizeof(struct zxdh_msg_info)); + if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP) { + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.qinq_strip_offload = true; + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "port %d qinq offload set failed", + hw->vport.vfid); + return -EAGAIN; + } + } else { + msg.data.vlan_offload_msg.enable = true; + msg.data.vlan_offload_msg.type = ZXDH_QINQ_STRIP_MSG_TYPE; + zxdh_msg_head_build(hw, ZXDH_VLAN_OFFLOAD, &msg); + ret = zxdh_vf_send_msg_to_pf(hw->eth_dev, &msg, + sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "port %d qinq offload set failed", + hw->vport.vfid); + return -EAGAIN; + } + } + } else { + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.qinq_strip_offload = true; + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "port %d qinq offload set failed", + hw->vport.vfid); + return -EAGAIN; + } + } else { + msg.data.vlan_offload_msg.enable = false; + msg.data.vlan_offload_msg.type = ZXDH_QINQ_STRIP_MSG_TYPE; + zxdh_msg_head_build(hw, ZXDH_VLAN_OFFLOAD, &msg); + ret = zxdh_vf_send_msg_to_pf(hw->eth_dev, &msg, + sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "port %d qinq offload set failed", + hw->vport.vfid); + return -EAGAIN; + 
} + } + } + } + + return ret; +} diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.h b/drivers/net/zxdh/zxdh_ethdev_ops.h index 394ddedc0e..058d271ab3 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.h +++ b/drivers/net/zxdh/zxdh_ethdev_ops.h @@ -18,5 +18,7 @@ int zxdh_dev_promiscuous_enable(struct rte_eth_dev *dev); int zxdh_dev_promiscuous_disable(struct rte_eth_dev *dev); int zxdh_dev_allmulticast_enable(struct rte_eth_dev *dev); int zxdh_dev_allmulticast_disable(struct rte_eth_dev *dev); +int zxdh_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on); +int zxdh_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask); #endif /* ZXDH_ETHDEV_OPS_H */ diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index 2abf579a80..ec15388f7a 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -50,6 +50,8 @@ #define ZXDH_MAC_UNFILTER 0xff #define ZXDH_PROMISC_MODE 1 #define ZXDH_ALLMULTI_MODE 2 +#define ZXDH_VLAN_STRIP_MSG_TYPE 0 +#define ZXDH_QINQ_STRIP_MSG_TYPE 1 enum ZXDH_DRIVER_TYPE { ZXDH_MSG_CHAN_END_MPF = 0, @@ -180,6 +182,10 @@ enum zxdh_msg_type { ZXDH_VF_PORT_UNINIT = 2, ZXDH_MAC_ADD = 3, ZXDH_MAC_DEL = 4, + ZXDH_VLAN_FILTER_SET = 17, + ZXDH_VLAN_FILTER_ADD = 18, + ZXDH_VLAN_FILTER_DEL = 19, + ZXDH_VLAN_OFFLOAD = 21, ZXDH_PORT_ATTRS_SET = 25, ZXDH_PORT_PROMISC_SET = 26, @@ -341,6 +347,19 @@ struct zxdh_msg_head { uint16_t pcieid; } __rte_packed; +struct zxdh_vlan_filter { + uint16_t vlan_id; +}; + +struct zxdh_vlan_filter_set { + uint8_t enable; +}; + +struct zxdh_vlan_offload { + uint8_t enable; + uint8_t type; +} __rte_packed; + struct zxdh_agent_msg_head { enum zxdh_agent_msg_type msg_type; uint8_t panel_id; @@ -363,6 +382,9 @@ struct zxdh_msg_info { struct zxdh_link_info_msg link_msg; struct zxdh_mac_filter mac_filter_msg; struct zxdh_port_promisc_msg port_promisc_msg; + struct zxdh_vlan_filter vlan_filter_msg; + struct zxdh_vlan_filter_set vlan_filter_set_msg; + struct zxdh_vlan_offload vlan_offload_msg; } __rte_packed data; } __rte_packed; diff --git a/drivers/net/zxdh/zxdh_rxtx.c b/drivers/net/zxdh/zxdh_rxtx.c index 06290d48bb..0ffce50042 100644 --- a/drivers/net/zxdh/zxdh_rxtx.c +++ b/drivers/net/zxdh/zxdh_rxtx.c @@ -11,6 +11,9 @@ #include "zxdh_pci.h" #include "zxdh_queue.h" +#define ZXDH_SVLAN_TPID 0x88a8 +#define ZXDH_CVLAN_TPID 0x8100 + #define ZXDH_PKT_FORM_CPU 0x20 /* 1-cpu 0-np */ #define ZXDH_NO_IP_FRAGMENT 0x2000 /* ip fragment flag */ #define ZXDH_NO_IPID_UPDATE 0x4000 /* ipid update flag */ @@ -21,6 +24,9 @@ #define ZXDH_PI_L3TYPE_RSV 0xC0 #define ZXDH_PI_L3TYPE_MASK 0xC0 +#define ZXDH_PD_OFFLOAD_SVLAN_INSERT (1 << 14) +#define ZXDH_PD_OFFLOAD_CVLAN_INSERT (1 << 13) + #define ZXDH_PCODE_MASK 0x1F #define ZXDH_PCODE_IP_PKT_TYPE 0x01 #define ZXDH_PCODE_TCP_PKT_TYPE 0x02 @@ -259,6 +265,18 @@ static void zxdh_xmit_fill_net_hdr(struct rte_mbuf *cookie, hdr->pi_hdr.l3_offset = rte_be_to_cpu_16(l3_offset); hdr->pi_hdr.l4_offset = rte_be_to_cpu_16(l3_offset + cookie->l3_len); + if (cookie->ol_flags & RTE_MBUF_F_TX_VLAN) { + ol_flag |= ZXDH_PD_OFFLOAD_CVLAN_INSERT; + hdr->pi_hdr.vlan_id = rte_be_to_cpu_16(cookie->vlan_tci); + hdr->pd_hdr.cvlan_insert = + rte_be_to_cpu_32((ZXDH_CVLAN_TPID << 16) | cookie->vlan_tci); + } + if (cookie->ol_flags & RTE_MBUF_F_TX_QINQ) { + ol_flag |= ZXDH_PD_OFFLOAD_SVLAN_INSERT; + hdr->pd_hdr.svlan_insert = + rte_be_to_cpu_32((ZXDH_SVLAN_TPID << 16) | cookie->vlan_tci_outer); + } + hdr->pd_hdr.ol_flag = rte_be_to_cpu_32(ol_flag); } diff --git a/drivers/net/zxdh/zxdh_tables.c 
b/drivers/net/zxdh/zxdh_tables.c index 45aeb3e3e4..ca98b36da2 100644 --- a/drivers/net/zxdh/zxdh_tables.c +++ b/drivers/net/zxdh/zxdh_tables.c @@ -10,6 +10,7 @@ #define ZXDH_SDT_VPORT_ATT_TABLE 1 #define ZXDH_SDT_PANEL_ATT_TABLE 2 +#define ZXDH_SDT_VLAN_ATT_TABLE 4 #define ZXDH_SDT_BROCAST_ATT_TABLE 6 #define ZXDH_SDT_UNICAST_ATT_TABLE 10 #define ZXDH_SDT_MULTICAST_ATT_TABLE 11 @@ -19,6 +20,10 @@ #define ZXDH_MC_GROUP_NUM 4 #define ZXDH_BASE_VFID 1152 #define ZXDH_TABLE_HIT_FLAG 128 +#define ZXDH_FIRST_VLAN_GROUP_BITS 23 +#define ZXDH_VLAN_GROUP_BITS 31 +#define ZXDH_VLAN_GROUP_NUM 35 +#define ZXDH_VLAN_FILTER_VLANID_STEP 120 int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr) @@ -569,3 +574,97 @@ zxdh_dev_multicast_table_set(struct zxdh_hw *hw, uint16_t vport, bool enable) } return 0; } + +int +zxdh_vlan_filter_table_init(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_vlan_filter_table vlan_table = {0}; + int16_t ret = 0; + + if (!hw->is_pf) + return 0; + + for (uint8_t vlan_group = 0; vlan_group < ZXDH_VLAN_GROUP_NUM; vlan_group++) { + if (vlan_group == 0) { + vlan_table.vlans[0] |= (1 << ZXDH_FIRST_VLAN_GROUP_BITS); + vlan_table.vlans[0] |= (1 << ZXDH_VLAN_GROUP_BITS); + + } else { + vlan_table.vlans[0] = 0; + } + uint32_t index = (vlan_group << 11) | hw->vport.vfid; + ZXDH_DTB_ERAM_ENTRY_INFO_T entry_data = { + .index = index, + .p_data = (uint32_t *)&vlan_table + }; + ZXDH_DTB_USER_ENTRY_T user_entry = {ZXDH_SDT_VLAN_ATT_TABLE, &entry_data}; + + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &user_entry); + if (ret != 0) { + PMD_DRV_LOG(ERR, + "[vfid:%d], vlan_group:%d, init vlan filter table failed", + hw->vport.vfid, vlan_group); + ret = -1; + } + } + + return ret; +} + +int +zxdh_vlan_filter_table_set(uint16_t vport, uint16_t vlan_id, uint8_t enable) +{ + struct zxdh_vlan_filter_table vlan_table = {0}; + union zxdh_virport_num vport_num = (union zxdh_virport_num)vport; + int ret = 0; + + memset(&vlan_table, 0, sizeof(struct zxdh_vlan_filter_table)); + int table_num = vlan_id / ZXDH_VLAN_FILTER_VLANID_STEP; + uint32_t index = (table_num << 11) | vport_num.vfid; + uint16_t group = (vlan_id - table_num * ZXDH_VLAN_FILTER_VLANID_STEP) / 8 + 1; + + uint8_t val = sizeof(struct zxdh_vlan_filter_table) / sizeof(uint32_t); + uint8_t vlan_tbl_index = group / val; + uint16_t used_group = vlan_tbl_index * val; + + used_group = (used_group == 0 ? 0 : (used_group - 1)); + + ZXDH_DTB_ERAM_ENTRY_INFO_T entry_data = {index, (uint32_t *)&vlan_table}; + ZXDH_DTB_USER_ENTRY_T user_entry_get = {ZXDH_SDT_VLAN_ATT_TABLE, &entry_data}; + + ret = zxdh_np_dtb_table_entry_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, &user_entry_get, 1); + if (ret) { + PMD_DRV_LOG(ERR, "get vlan table failed"); + return -1; + } + uint16_t relative_vlan_id = vlan_id - table_num * ZXDH_VLAN_FILTER_VLANID_STEP; + uint32_t *base_group = &vlan_table.vlans[0]; + + *base_group |= 1 << 31; + base_group = &vlan_table.vlans[vlan_tbl_index]; + uint8_t valid_bits = (vlan_tbl_index == 0 ? 
+ ZXDH_FIRST_VLAN_GROUP_BITS : ZXDH_VLAN_GROUP_BITS) + 1; + + uint8_t shift_left = (valid_bits - (relative_vlan_id - used_group * 8) % valid_bits) - 1; + + if (enable) + *base_group |= 1 << shift_left; + else + *base_group &= ~(1 << shift_left); + + + ZXDH_DTB_USER_ENTRY_T user_entry_write = { + .sdt_no = ZXDH_SDT_VLAN_ATT_TABLE, + .p_entry_data = &entry_data + }; + + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &user_entry_write); + if (ret != 0) { + PMD_DRV_LOG(ERR, "write vlan table failed"); + return -1; + } + return 0; +} diff --git a/drivers/net/zxdh/zxdh_tables.h b/drivers/net/zxdh/zxdh_tables.h index fb30c8f32e..8dac8f30dd 100644 --- a/drivers/net/zxdh/zxdh_tables.h +++ b/drivers/net/zxdh/zxdh_tables.h @@ -43,7 +43,7 @@ struct zxdh_port_attr_table { uint8_t rdma_offload_enable: 1; uint8_t vlan_filter_enable: 1; uint8_t vlan_strip_offload: 1; - uint8_t qinq_valn_strip_offload: 1; + uint8_t qinq_strip_offload: 1; uint8_t rss_enable: 1; uint8_t mtu_enable: 1; uint8_t hit_flag: 1; @@ -73,7 +73,7 @@ struct zxdh_port_attr_table { uint8_t rdma_offload_enable: 1; uint8_t vlan_filter_enable: 1; uint8_t vlan_strip_offload: 1; - uint8_t qinq_valn_strip_offload: 1; + uint8_t qinq_strip_offload: 1; uint8_t rss_enable: 1; uint8_t mtu_enable: 1; uint8_t hit_flag: 1; @@ -194,6 +194,10 @@ struct zxdh_multicast_table { uint32_t bitmap[2]; }; +struct zxdh_vlan_filter_table { + uint32_t vlans[4]; +}; + int zxdh_port_attr_init(struct rte_eth_dev *dev); int zxdh_panel_table_init(struct rte_eth_dev *dev); int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); @@ -205,5 +209,7 @@ int zxdh_promisc_table_init(struct rte_eth_dev *dev); int zxdh_promisc_table_uninit(struct rte_eth_dev *dev); int zxdh_dev_unicast_table_set(struct zxdh_hw *hw, uint16_t vport, bool enable); int zxdh_dev_multicast_table_set(struct zxdh_hw *hw, uint16_t vport, bool enable); +int zxdh_vlan_filter_table_init(struct rte_eth_dev *dev); +int zxdh_vlan_filter_table_set(uint16_t vport, uint16_t vlan_id, uint8_t enable); #endif /* ZXDH_TABLES_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 54926 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
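The index arithmetic in zxdh_vlan_filter_table_set() is the densest part of this patch: VLAN IDs are packed 120 per ERAM entry (table_num selects the entry), word 0 of the entry keeps bit 31 as a valid flag and exposes only 24 data bits, while words 1-3 carry a full 32 bits each. The standalone sketch below re-derives the (word, bit) pair for a given VLAN ID with the same formulas; the function name and the worked example are illustrative only.

#include <stdint.h>
#include <stdio.h>

/* Re-derivation of the word/bit chosen by zxdh_vlan_filter_table_set();
 * constants match the patch: 120 VLANs per entry, vlans[4] per entry,
 * 23+1 usable bits in word 0 and 31+1 in the remaining words. */
static void vlan_bit_position(uint16_t vlan_id, uint32_t *word, uint32_t *bit)
{
	uint16_t table_num = vlan_id / 120;
	uint16_t relative = vlan_id - table_num * 120;
	uint16_t group = relative / 8 + 1;
	uint8_t words = 4;              /* sizeof(vlans) / sizeof(uint32_t) */
	uint8_t idx = group / words;
	uint16_t used = idx * words;
	uint8_t valid_bits;

	used = (used == 0) ? 0 : used - 1;
	valid_bits = (idx == 0 ? 23 : 31) + 1;

	*word = idx;
	*bit = (valid_bits - (relative - used * 8) % valid_bits) - 1;
}

int main(void)
{
	uint32_t word, bit;

	vlan_bit_position(100, &word, &bit);
	printf("vlan 100 -> vlans[%u], bit %u\n", word, bit); /* vlans[3], bit 19 */
	return 0;
}

For vlan_id 100 this yields vlans[3], bit 19, the bit the patch sets or clears before writing the entry back.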
* [PATCH v4 13/15] net/zxdh: rss hash config/update, reta update/get 2024-12-18 9:25 ` [PATCH v4 00/15] net/zxdh: updated net zxdh driver Junlong Wang ` (11 preceding siblings ...) 2024-12-18 9:25 ` [PATCH v4 12/15] net/zxdh: vlan filter/ offload " Junlong Wang @ 2024-12-18 9:26 ` Junlong Wang 2024-12-21 0:44 ` Stephen Hemminger 2024-12-18 9:26 ` [PATCH v4 14/15] net/zxdh: basic stats ops implementations Junlong Wang 2024-12-18 9:26 ` [PATCH v4 15/15] net/zxdh: mtu update " Junlong Wang 14 siblings, 1 reply; 225+ messages in thread From: Junlong Wang @ 2024-12-18 9:26 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 25403 bytes --] provided rss hash config/update, reta update/get ops. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- doc/guides/nics/features/zxdh.ini | 3 + doc/guides/nics/zxdh.rst | 1 + drivers/net/zxdh/zxdh_ethdev.c | 52 ++++ drivers/net/zxdh/zxdh_ethdev.h | 3 + drivers/net/zxdh/zxdh_ethdev_ops.c | 410 +++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev_ops.h | 26 ++ drivers/net/zxdh/zxdh_msg.h | 22 ++ drivers/net/zxdh/zxdh_tables.c | 82 ++++++ drivers/net/zxdh/zxdh_tables.h | 7 + 9 files changed, 606 insertions(+) diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index 6fb006c2da..415ca547d0 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -19,3 +19,6 @@ Allmulticast mode = Y VLAN filter = Y VLAN offload = Y QinQ offload = Y +RSS hash = Y +RSS reta update = Y +Inner RSS = Y diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index 3a7585d123..3cc6a1d348 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -31,6 +31,7 @@ Features of the ZXDH PMD are: - VLAN filter and VLAN offload - VLAN stripping and inserting - QINQ stripping and inserting +- Receive Side Scaling (RSS) Driver compilation and testing diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index cc32b467a9..1349559c9b 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -61,6 +61,9 @@ zxdh_dev_infos_get(struct rte_eth_dev *dev, dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TCP_LRO; dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_RSS_HASH; + dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_256; + dev_info->flow_type_rss_offloads = ZXDH_RSS_HF; + dev_info->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_MULTI_SEGS); dev_info->tx_offload_capa |= (RTE_ETH_TX_OFFLOAD_TCP_TSO | RTE_ETH_TX_OFFLOAD_UDP_TSO); @@ -784,9 +787,48 @@ zxdh_dev_conf_offload(struct rte_eth_dev *dev) return ret; } + ret = zxdh_rss_configure(dev); + if (ret) { + PMD_DRV_LOG(ERR, "rss configure failed"); + return ret; + } + return 0; } +static int +zxdh_rss_qid_config(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_port_attr_table port_attr = {0}; + struct zxdh_msg_info msg_info = {0}; + int ret = 0; + + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.port_base_qid = hw->channel_context[0].ph_chno & 0xfff; + + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "PF:%d port_base_qid insert failed", hw->vfid); + return ret; + } + } else { + struct zxdh_port_attr_set_msg *attr_msg = &msg_info.data.port_attr_msg; + + zxdh_msg_head_build(hw, ZXDH_PORT_ATTRS_SET, &msg_info); + attr_msg->mode = ZXDH_PORT_BASE_QID_FLAG; + attr_msg->value = hw->channel_context[0].ph_chno & 0xfff; + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, 
sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d ", + hw->vport.vport, ZXDH_PORT_BASE_QID_FLAG); + return ret; + } + } + return ret; +} + static int32_t zxdh_dev_configure(struct rte_eth_dev *dev) { @@ -873,6 +915,12 @@ zxdh_dev_configure(struct rte_eth_dev *dev) return -1; } + ret = zxdh_rss_qid_config(dev); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to configure base qid!"); + return -1; + } + zxdh_pci_reinit_complete(hw); end: @@ -1099,6 +1147,10 @@ static const struct eth_dev_ops zxdh_eth_dev_ops = { .allmulticast_disable = zxdh_dev_allmulticast_disable, .vlan_filter_set = zxdh_dev_vlan_filter_set, .vlan_offload_set = zxdh_dev_vlan_offload_set, + .reta_update = zxdh_dev_rss_reta_update, + .reta_query = zxdh_dev_rss_reta_query, + .rss_hash_update = zxdh_rss_hash_update, + .rss_hash_conf_get = zxdh_rss_hash_conf_get, }; static int32_t diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index 3cdac5de73..2934fa264a 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -82,6 +82,7 @@ struct zxdh_hw { uint16_t queue_num; uint16_t mc_num; uint16_t uc_num; + uint16_t *rss_reta; uint8_t *isr; uint8_t weak_barriers; @@ -100,6 +101,8 @@ struct zxdh_hw { uint8_t admin_status; uint8_t promisc_status; uint8_t allmulti_status; + uint8_t rss_enable; + uint8_t rss_init; }; struct zxdh_dtb_shared_data { diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.c b/drivers/net/zxdh/zxdh_ethdev_ops.c index 94c5e6dbc8..c12947cb4d 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.c +++ b/drivers/net/zxdh/zxdh_ethdev_ops.c @@ -3,6 +3,7 @@ */ #include <rte_malloc.h> +#include <rte_ether.h> #include "zxdh_ethdev.h" #include "zxdh_pci.h" @@ -12,6 +13,14 @@ #include "zxdh_logs.h" #define ZXDH_VLAN_FILTER_GROUPS 64 +#define ZXDH_INVALID_LOGIC_QID 0xFFFFU + +/* Supported RSS */ +#define ZXDH_RSS_HF_MASK (~(ZXDH_RSS_HF)) +#define ZXDH_HF_F5 1 +#define ZXDH_HF_F3 2 +#define ZXDH_HF_MAC_VLAN 4 +#define ZXDH_HF_ALL 0 static int32_t zxdh_config_port_status(struct rte_eth_dev *dev, uint16_t link_status) { @@ -752,3 +761,404 @@ zxdh_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask) return ret; } + +int +zxdh_dev_rss_reta_update(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_info msg = {0}; + uint16_t old_reta[RTE_ETH_RSS_RETA_SIZE_256]; + uint16_t idx; + uint16_t i; + uint16_t pos; + int ret; + + if (reta_size != RTE_ETH_RSS_RETA_SIZE_256) { + PMD_DRV_LOG(ERR, "reta_size is illegal(%u).reta_size should be 256", reta_size); + return -EINVAL; + } + if (!hw->rss_reta) { + hw->rss_reta = rte_zmalloc(NULL, RTE_ETH_RSS_RETA_SIZE_256 * sizeof(uint16_t), 4); + if (hw->rss_reta == NULL) { + PMD_DRV_LOG(ERR, "Failed to allocate RSS reta"); + return -ENOMEM; + } + } + for (idx = 0, i = 0; (i < reta_size); ++i) { + idx = i / RTE_ETH_RETA_GROUP_SIZE; + pos = i % RTE_ETH_RETA_GROUP_SIZE; + if (((reta_conf[idx].mask >> pos) & 0x1) == 0) + continue; + if (reta_conf[idx].reta[pos] > dev->data->nb_rx_queues) { + PMD_DRV_LOG(ERR, "reta table value err(%u >= %u)", + reta_conf[idx].reta[pos], dev->data->nb_rx_queues); + return -EINVAL; + } + if (hw->rss_reta[i] != reta_conf[idx].reta[pos]) + break; + } + if (i == reta_size) { + PMD_DRV_LOG(DEBUG, "reta table same with buffered table"); + return 0; + } + memcpy(old_reta, hw->rss_reta, sizeof(old_reta)); + + for (idx = 0, i = 0; i < reta_size; ++i) { + idx = i / RTE_ETH_RETA_GROUP_SIZE; 
+ pos = i % RTE_ETH_RETA_GROUP_SIZE; + if (((reta_conf[idx].mask >> pos) & 0x1) == 0) + continue; + hw->rss_reta[i] = reta_conf[idx].reta[pos]; + } + + zxdh_msg_head_build(hw, ZXDH_RSS_RETA_SET, &msg); + for (i = 0; i < reta_size; i++) + msg.data.rss_reta.reta[i] = + (hw->channel_context[hw->rss_reta[i] * 2].ph_chno); + + + if (hw->is_pf) { + ret = zxdh_rss_table_set(hw->vport.vport, &msg.data.rss_reta); + if (ret) { + PMD_DRV_LOG(ERR, "rss reta table set failed"); + return -EINVAL; + } + } else { + ret = zxdh_vf_send_msg_to_pf(dev, &msg, sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "vf rss reta table set failed"); + return -EINVAL; + } + } + return ret; +} + +static uint16_t +zxdh_hw_qid_to_logic_qid(struct rte_eth_dev *dev, uint16_t qid) +{ + struct zxdh_hw *hw = (struct zxdh_hw *)dev->data->dev_private; + uint16_t rx_queues = dev->data->nb_rx_queues; + uint16_t i; + + for (i = 0; i < rx_queues; i++) { + if (qid == hw->channel_context[i * 2].ph_chno) + return i; + } + return ZXDH_INVALID_LOGIC_QID; +} + +int +zxdh_dev_rss_reta_query(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size) +{ + struct zxdh_hw *hw = (struct zxdh_hw *)dev->data->dev_private; + struct zxdh_msg_info msg = {0}; + struct zxdh_msg_reply_info reply_msg = {0}; + uint16_t idx; + uint16_t i; + int ret = 0; + uint16_t qid_logic; + + ret = (!reta_size || reta_size > RTE_ETH_RSS_RETA_SIZE_256); + if (ret) { + PMD_DRV_LOG(ERR, "request reta size(%u) not same with buffered(%u)", + reta_size, RTE_ETH_RSS_RETA_SIZE_256); + return -EINVAL; + } + + /* Fill each entry of the table even if its bit is not set. */ + for (idx = 0, i = 0; (i != reta_size); ++i) { + idx = i / RTE_ETH_RETA_GROUP_SIZE; + reta_conf[idx].reta[i % RTE_ETH_RETA_GROUP_SIZE] = hw->rss_reta[i]; + } + + + + zxdh_msg_head_build(hw, ZXDH_RSS_RETA_GET, &msg); + + if (hw->is_pf) { + ret = zxdh_rss_table_get(hw->vport.vport, &reply_msg.reply_body.rss_reta); + if (ret) { + PMD_DRV_LOG(ERR, "rss reta table set failed"); + return -EINVAL; + } + } else { + ret = zxdh_vf_send_msg_to_pf(dev, &msg, sizeof(struct zxdh_msg_info), + &reply_msg, sizeof(struct zxdh_msg_reply_info)); + if (ret) { + PMD_DRV_LOG(ERR, "vf rss reta table get failed"); + return -EINVAL; + } + } + + struct zxdh_rss_reta *reta_table = &reply_msg.reply_body.rss_reta; + + for (idx = 0, i = 0; i < reta_size; ++i) { + idx = i / RTE_ETH_RETA_GROUP_SIZE; + + qid_logic = zxdh_hw_qid_to_logic_qid(dev, reta_table->reta[i]); + if (qid_logic == ZXDH_INVALID_LOGIC_QID) { + PMD_DRV_LOG(ERR, "rsp phy reta qid (%u) is illegal(%u)", + reta_table->reta[i], qid_logic); + return -EINVAL; + } + reta_conf[idx].reta[i % RTE_ETH_RETA_GROUP_SIZE] = qid_logic; + } + return 0; +} + +static uint32_t +zxdh_rss_hf_to_hw(uint64_t hf) +{ + uint32_t hw_hf = 0; + + if (hf & ZXDH_HF_MAC_VLAN_ETH) + hw_hf |= ZXDH_HF_MAC_VLAN; + if (hf & ZXDH_HF_F3_ETH) + hw_hf |= ZXDH_HF_F3; + if (hf & ZXDH_HF_F5_ETH) + hw_hf |= ZXDH_HF_F5; + + if (hw_hf == (ZXDH_HF_MAC_VLAN | ZXDH_HF_F3 | ZXDH_HF_F5)) + hw_hf = ZXDH_HF_ALL; + return hw_hf; +} + +static uint64_t +zxdh_rss_hf_to_eth(uint32_t hw_hf) +{ + uint64_t hf = 0; + + if (hw_hf == ZXDH_HF_ALL) + return (ZXDH_HF_MAC_VLAN_ETH | ZXDH_HF_F3_ETH | ZXDH_HF_F5_ETH); + + if (hw_hf & ZXDH_HF_MAC_VLAN) + hf |= ZXDH_HF_MAC_VLAN_ETH; + if (hw_hf & ZXDH_HF_F3) + hf |= ZXDH_HF_F3_ETH; + if (hw_hf & ZXDH_HF_F5) + hf |= ZXDH_HF_F5_ETH; + + return hf; +} + +int +zxdh_rss_hash_update(struct rte_eth_dev *dev, + struct rte_eth_rss_conf *rss_conf) +{ + 
struct zxdh_hw *hw = dev->data->dev_private; + struct rte_eth_rss_conf *old_rss_conf = &dev->data->dev_conf.rx_adv_conf.rss_conf; + struct zxdh_msg_info msg = {0}; + struct zxdh_port_attr_table port_attr = {0}; + uint32_t hw_hf_new, hw_hf_old; + int need_update_hf = 0; + int ret = 0; + + if (rss_conf->rss_hf & ZXDH_RSS_HF_MASK) { + PMD_DRV_LOG(ERR, "unsupported hash function (%08lx)", rss_conf->rss_hf); + return -EINVAL; + } + + hw_hf_new = zxdh_rss_hf_to_hw(rss_conf->rss_hf); + hw_hf_old = zxdh_rss_hf_to_hw(old_rss_conf->rss_hf); + + if ((hw_hf_new != hw_hf_old || !!rss_conf->rss_hf)) + need_update_hf = 1; + + if (need_update_hf) { + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.rss_enable = !!rss_conf->rss_hf; + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "rss enable set failed"); + return -EINVAL; + } + } else { + msg.data.rss_enable.enable = !!rss_conf->rss_hf; + zxdh_msg_head_build(hw, ZXDH_RSS_ENABLE, &msg); + ret = zxdh_vf_send_msg_to_pf(dev, &msg, + sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "rss enable set failed"); + return -EINVAL; + } + } + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.rss_hash_factor = hw_hf_new; + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "rss hash factor set failed"); + return -EINVAL; + } + } else { + msg.data.rss_hf.rss_hf = hw_hf_new; + zxdh_msg_head_build(hw, ZXDH_RSS_HF_SET, &msg); + ret = zxdh_vf_send_msg_to_pf(dev, &msg, + sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "rss hash factor set failed"); + return -EINVAL; + } + } + old_rss_conf->rss_hf = rss_conf->rss_hf; + } + + return 0; +} + +int +zxdh_rss_hash_conf_get(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf) +{ + struct zxdh_hw *hw = (struct zxdh_hw *)dev->data->dev_private; + struct rte_eth_rss_conf *old_rss_conf = &dev->data->dev_conf.rx_adv_conf.rss_conf; + struct zxdh_msg_info msg = {0}; + struct zxdh_msg_reply_info reply_msg = {0}; + struct zxdh_port_attr_table port_attr = {0}; + int ret; + uint32_t hw_hf; + + if (rss_conf == NULL) { + PMD_DRV_LOG(ERR, "rss conf is NULL"); + return -EINVAL; + } + + hw_hf = zxdh_rss_hf_to_hw(old_rss_conf->rss_hf); + rss_conf->rss_hf = zxdh_rss_hf_to_eth(hw_hf); + + zxdh_msg_head_build(hw, ZXDH_RSS_HF_GET, &msg); + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "rss hash factor get failed"); + return -EINVAL; + } + reply_msg.reply_body.rss_hf.rss_hf = port_attr.rss_hash_factor; + } else { + ret = zxdh_vf_send_msg_to_pf(dev, &msg, sizeof(struct zxdh_msg_info), + &reply_msg, sizeof(struct zxdh_msg_reply_info)); + if (ret) { + PMD_DRV_LOG(ERR, "rss hash factor get failed"); + return -EINVAL; + } + } + rss_conf->rss_hf = zxdh_rss_hf_to_eth(reply_msg.reply_body.rss_hf.rss_hf); + + return 0; +} + +static int +zxdh_get_rss_enable_conf(struct rte_eth_dev *dev) +{ + if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) + return dev->data->nb_rx_queues == 1 ? 
0 : 1; + else if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_NONE) + return 0; + + return 0; +} + +int +zxdh_rss_configure(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *dev_data = dev->data; + struct zxdh_hw *hw = (struct zxdh_hw *)dev->data->dev_private; + struct zxdh_port_attr_table port_attr = {0}; + struct zxdh_msg_info msg = {0}; + int ret = 0; + uint32_t hw_hf; + uint32_t i; + + if (dev->data->nb_rx_queues == 0) { + PMD_DRV_LOG(ERR, "port %u nb_rx_queues is 0", dev->data->port_id); + return -1; + } + + /* config rss enable */ + uint8_t curr_rss_enable = zxdh_get_rss_enable_conf(dev); + + if (hw->rss_enable != curr_rss_enable) { + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.rss_enable = curr_rss_enable; + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "rss enable set failed"); + return -EINVAL; + } + } else { + msg.data.rss_enable.enable = curr_rss_enable; + zxdh_msg_head_build(hw, ZXDH_RSS_ENABLE, &msg); + ret = zxdh_vf_send_msg_to_pf(dev, &msg, + sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "rss enable set failed"); + return -EINVAL; + } + } + hw->rss_enable = curr_rss_enable; + } + + if (curr_rss_enable && hw->rss_init == 0) { + /* config hash factor */ + dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf = ZXDH_HF_F5_ETH; + hw_hf = zxdh_rss_hf_to_hw(dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf); + memset(&msg, 0, sizeof(msg)); + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.rss_hash_factor = hw_hf; + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "rss hash factor set failed"); + return -EINVAL; + } + } else { + msg.data.rss_hf.rss_hf = hw_hf; + zxdh_msg_head_build(hw, ZXDH_RSS_HF_SET, &msg); + ret = zxdh_vf_send_msg_to_pf(dev, &msg, + sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "rss hash factor set failed"); + return -EINVAL; + } + } + hw->rss_init = 1; + } + + if (!hw->rss_reta) { + hw->rss_reta = rte_zmalloc(NULL, RTE_ETH_RSS_RETA_SIZE_256 * sizeof(uint16_t), 4); + if (hw->rss_reta == NULL) { + PMD_DRV_LOG(ERR, "alloc memory fail"); + return -1; + } + } + for (i = 0; i < RTE_ETH_RSS_RETA_SIZE_256; i++) + hw->rss_reta[i] = i % dev_data->nb_rx_queues; + + /* hw config reta */ + zxdh_msg_head_build(hw, ZXDH_RSS_RETA_SET, &msg); + for (i = 0; i < RTE_ETH_RSS_RETA_SIZE_256; i++) + msg.data.rss_reta.reta[i] = + hw->channel_context[hw->rss_reta[i] * 2].ph_chno; + + if (hw->is_pf) { + ret = zxdh_rss_table_set(hw->vport.vport, &msg.data.rss_reta); + if (ret) { + PMD_DRV_LOG(ERR, "rss reta table set failed"); + return -EINVAL; + } + } else { + ret = zxdh_vf_send_msg_to_pf(dev, &msg, sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "vf rss reta table set failed"); + return -EINVAL; + } + } + return 0; +} diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.h b/drivers/net/zxdh/zxdh_ethdev_ops.h index 058d271ab3..860716d079 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.h +++ b/drivers/net/zxdh/zxdh_ethdev_ops.h @@ -5,8 +5,25 @@ #ifndef ZXDH_ETHDEV_OPS_H #define ZXDH_ETHDEV_OPS_H +#include <rte_ether.h> + #include "zxdh_ethdev.h" +#define ZXDH_ETH_RSS_L2 RTE_ETH_RSS_L2_PAYLOAD +#define ZXDH_ETH_RSS_IP \ + (RTE_ETH_RSS_IPV4 | \ + RTE_ETH_RSS_FRAG_IPV4 | \ + RTE_ETH_RSS_IPV6 | \ + RTE_ETH_RSS_FRAG_IPV6) +#define ZXDH_ETH_RSS_TCP (RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP) +#define ZXDH_ETH_RSS_UDP (RTE_ETH_RSS_NONFRAG_IPV4_UDP | 
RTE_ETH_RSS_NONFRAG_IPV6_UDP) +#define ZXDH_ETH_RSS_SCTP (RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_NONFRAG_IPV6_SCTP) + +#define ZXDH_HF_F5_ETH (ZXDH_ETH_RSS_TCP | ZXDH_ETH_RSS_UDP | ZXDH_ETH_RSS_SCTP) +#define ZXDH_HF_F3_ETH ZXDH_ETH_RSS_IP +#define ZXDH_HF_MAC_VLAN_ETH ZXDH_ETH_RSS_L2 +#define ZXDH_RSS_HF ((ZXDH_HF_MAC_VLAN_ETH | ZXDH_HF_F3_ETH | ZXDH_HF_F5_ETH)) + int zxdh_dev_set_link_up(struct rte_eth_dev *dev); int zxdh_dev_set_link_down(struct rte_eth_dev *dev); int32_t zxdh_dev_link_update(struct rte_eth_dev *dev, int32_t wait_to_complete __rte_unused); @@ -20,5 +37,14 @@ int zxdh_dev_allmulticast_enable(struct rte_eth_dev *dev); int zxdh_dev_allmulticast_disable(struct rte_eth_dev *dev); int zxdh_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on); int zxdh_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask); +int zxdh_dev_rss_reta_update(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size); +int zxdh_dev_rss_reta_query(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size); +int zxdh_rss_hash_update(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf); +int zxdh_rss_hash_conf_get(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf); +int zxdh_rss_configure(struct rte_eth_dev *dev); #endif /* ZXDH_ETHDEV_OPS_H */ diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index ec15388f7a..45a9b10aa4 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -182,6 +182,11 @@ enum zxdh_msg_type { ZXDH_VF_PORT_UNINIT = 2, ZXDH_MAC_ADD = 3, ZXDH_MAC_DEL = 4, + ZXDH_RSS_ENABLE = 7, + ZXDH_RSS_RETA_SET = 8, + ZXDH_RSS_RETA_GET = 9, + ZXDH_RSS_HF_SET = 15, + ZXDH_RSS_HF_GET = 16, ZXDH_VLAN_FILTER_SET = 17, ZXDH_VLAN_FILTER_ADD = 18, ZXDH_VLAN_FILTER_DEL = 19, @@ -291,6 +296,14 @@ struct zxdh_link_info_msg { uint32_t speed; } __rte_packed; +struct zxdh_rss_reta { + uint32_t reta[RTE_ETH_RSS_RETA_SIZE_256]; +}; + +struct zxdh_rss_hf { + uint32_t rss_hf; +}; + struct zxdh_msg_reply_head { uint8_t flag; uint16_t reps_len; @@ -307,6 +320,8 @@ struct zxdh_msg_reply_body { union { uint8_t reply_data[ZXDH_MSG_REPLY_BODY_MAX_LEN - sizeof(enum zxdh_reps_flag)]; struct zxdh_link_info_msg link_msg; + struct zxdh_rss_hf rss_hf; + struct zxdh_rss_reta rss_reta; } __rte_packed; } __rte_packed; @@ -360,6 +375,10 @@ struct zxdh_vlan_offload { uint8_t type; } __rte_packed; +struct zxdh_rss_enable { + uint8_t enable; +}; + struct zxdh_agent_msg_head { enum zxdh_agent_msg_type msg_type; uint8_t panel_id; @@ -385,6 +404,9 @@ struct zxdh_msg_info { struct zxdh_vlan_filter vlan_filter_msg; struct zxdh_vlan_filter_set vlan_filter_set_msg; struct zxdh_vlan_offload vlan_offload_msg; + struct zxdh_rss_reta rss_reta; + struct zxdh_rss_enable rss_enable; + struct zxdh_rss_hf rss_hf; } __rte_packed data; } __rte_packed; diff --git a/drivers/net/zxdh/zxdh_tables.c b/drivers/net/zxdh/zxdh_tables.c index ca98b36da2..af148a974e 100644 --- a/drivers/net/zxdh/zxdh_tables.c +++ b/drivers/net/zxdh/zxdh_tables.c @@ -10,6 +10,7 @@ #define ZXDH_SDT_VPORT_ATT_TABLE 1 #define ZXDH_SDT_PANEL_ATT_TABLE 2 +#define ZXDH_SDT_RSS_ATT_TABLE 3 #define ZXDH_SDT_VLAN_ATT_TABLE 4 #define ZXDH_SDT_BROCAST_ATT_TABLE 6 #define ZXDH_SDT_UNICAST_ATT_TABLE 10 @@ -668,3 +669,84 @@ zxdh_vlan_filter_table_set(uint16_t vport, uint16_t vlan_id, uint8_t enable) } return 0; } + +int +zxdh_rss_table_set(uint16_t vport, struct zxdh_rss_reta *rss_reta) +{ + struct zxdh_rss_to_vqid_table rss_vqid = {0}; + 
union zxdh_virport_num vport_num = (union zxdh_virport_num)vport; + int ret = 0; + + for (uint16_t i = 0; i < RTE_ETH_RSS_RETA_SIZE_256 / 8; i++) { + for (uint16_t j = 0; j < 8; j++) { + #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN + if (j % 2 == 0) + rss_vqid.vqm_qid[j + 1] = rss_reta->reta[i * 8 + j]; + else + rss_vqid.vqm_qid[j - 1] = rss_reta->reta[i * 8 + j]; + #else + rss_vqid.vqm_qid[j] = rss_init->reta[i * 8 + j]; + #endif + } + + #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN + rss_vqid.vqm_qid[1] |= 0x8000; + #else + rss_vqid.vqm_qid[0] |= 0x8000; + #endif + ZXDH_DTB_ERAM_ENTRY_INFO_T entry = { + .index = vport_num.vfid * 32 + i, + .p_data = (uint32_t *)&rss_vqid + }; + ZXDH_DTB_USER_ENTRY_T user_entry_write = { + .sdt_no = ZXDH_SDT_RSS_ATT_TABLE, + .p_entry_data = &entry + }; + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &user_entry_write); + if (ret != 0) { + PMD_DRV_LOG(ERR, "write rss base qid failed vfid:%d", vport_num.vfid); + return ret; + } + } + return 0; +} + +int +zxdh_rss_table_get(uint16_t vport, struct zxdh_rss_reta *rss_reta) +{ + struct zxdh_rss_to_vqid_table rss_vqid = {0}; + union zxdh_virport_num vport_num = (union zxdh_virport_num)vport; + int ret = 0; + + for (uint16_t i = 0; i < RTE_ETH_RSS_RETA_SIZE_256 / 8; i++) { + ZXDH_DTB_ERAM_ENTRY_INFO_T entry = {vport_num.vfid * 32 + i, (uint32_t *)&rss_vqid}; + ZXDH_DTB_USER_ENTRY_T user_entry = {ZXDH_SDT_RSS_ATT_TABLE, &entry}; + + ret = zxdh_np_dtb_table_entry_get(ZXDH_DEVICE_NO, + g_dtb_data.queueid, &user_entry, 1); + if (ret != 0) { + PMD_DRV_LOG(ERR, "get rss tbl failed, vfid:%d", vport_num.vfid); + return -1; + } + + #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN + rss_vqid.vqm_qid[1] &= 0x7FFF; + #else + rss_vqid.vqm_qid[0] &= 0x7FFF; + #endif + uint8_t size = sizeof(struct zxdh_rss_to_vqid_table) / sizeof(uint16_t); + + for (int j = 0; j < size; j++) { + #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN + if (j % 2 == 0) + rss_reta->reta[i * 8 + j] = rss_vqid.vqm_qid[j + 1]; + else + rss_reta->reta[i * 8 + j] = rss_vqid.vqm_qid[j - 1]; + #else + rss_reta->reta[i * 8 + j] = rss_vqid.vqm_qid[j]; + #endif + } + } + return 0; +} diff --git a/drivers/net/zxdh/zxdh_tables.h b/drivers/net/zxdh/zxdh_tables.h index 8dac8f30dd..7bac39375c 100644 --- a/drivers/net/zxdh/zxdh_tables.h +++ b/drivers/net/zxdh/zxdh_tables.h @@ -8,6 +8,7 @@ #include <stdint.h> #define ZXDH_DEVICE_NO 0 +#define ZXDH_PORT_BASE_QID_FLAG 10 #define ZXDH_PORT_ATTR_IS_UP_FLAG 35 extern struct zxdh_dtb_shared_data g_dtb_data; @@ -198,6 +199,10 @@ struct zxdh_vlan_filter_table { uint32_t vlans[4]; }; +struct zxdh_rss_to_vqid_table { + uint16_t vqm_qid[8]; +}; + int zxdh_port_attr_init(struct rte_eth_dev *dev); int zxdh_panel_table_init(struct rte_eth_dev *dev); int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); @@ -211,5 +216,7 @@ int zxdh_dev_unicast_table_set(struct zxdh_hw *hw, uint16_t vport, bool enable); int zxdh_dev_multicast_table_set(struct zxdh_hw *hw, uint16_t vport, bool enable); int zxdh_vlan_filter_table_init(struct rte_eth_dev *dev); int zxdh_vlan_filter_table_set(uint16_t vport, uint16_t vlan_id, uint8_t enable); +int zxdh_rss_table_set(uint16_t vport, struct zxdh_rss_reta *rss_reta); +int zxdh_rss_table_get(uint16_t vport, struct zxdh_rss_reta *rss_reta); #endif /* ZXDH_TABLES_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 63369 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
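From the application side, the reta ops added here are reached through the generic ethdev API, and this PMD requires exactly 256 reta entries (RTE_ETH_RSS_RETA_SIZE_256). The sketch below is a hedged usage example against that generic API rather than zxdh-internal code; the round-robin fill and the port/queue values are illustrative.

#include <stdint.h>
#include <rte_ethdev.h>

/* Fill a 256-entry redirection table round-robin over the Rx queues and
 * push it to the port; everything except the 256-entry size (which
 * zxdh_dev_rss_reta_update() insists on) is illustrative. */
static int configure_zxdh_reta(uint16_t port_id, uint16_t nb_rx_queues)
{
	struct rte_eth_rss_reta_entry64 reta_conf[RTE_ETH_RSS_RETA_SIZE_256 /
						  RTE_ETH_RETA_GROUP_SIZE] = {0};
	uint16_t i;

	for (i = 0; i < RTE_ETH_RSS_RETA_SIZE_256; i++) {
		uint16_t idx = i / RTE_ETH_RETA_GROUP_SIZE;
		uint16_t pos = i % RTE_ETH_RETA_GROUP_SIZE;

		reta_conf[idx].mask |= UINT64_C(1) << pos; /* slot is valid */
		reta_conf[idx].reta[pos] = i % nb_rx_queues;
	}
	return rte_eth_dev_rss_reta_update(port_id, reta_conf,
			RTE_ETH_RSS_RETA_SIZE_256);
}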
* Re: [PATCH v4 13/15] net/zxdh: rss hash config/update, reta update/get 2024-12-18 9:26 ` [PATCH v4 13/15] net/zxdh: rss hash config/update, reta update/get Junlong Wang @ 2024-12-21 0:44 ` Stephen Hemminger 0 siblings, 0 replies; 225+ messages in thread From: Stephen Hemminger @ 2024-12-21 0:44 UTC (permalink / raw) To: Junlong Wang; +Cc: dev On Wed, 18 Dec 2024 17:26:00 +0800 Junlong Wang <wang.junlong1@zte.com.cn> wrote: > provided rss hash config/update, reta update/get ops. > > Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> > --- > doc/guides/nics/features/zxdh.ini | 3 + > doc/guides/nics/zxdh.rst | 1 + > drivers/net/zxdh/zxdh_ethdev.c | 52 ++++ > drivers/net/zxdh/zxdh_ethdev.h | 3 + > drivers/net/zxdh/zxdh_ethdev_ops.c | 410 +++++++++++++++++++++++++++++ > drivers/net/zxdh/zxdh_ethdev_ops.h | 26 ++ > drivers/net/zxdh/zxdh_msg.h | 22 ++ > drivers/net/zxdh/zxdh_tables.c | 82 ++++++ > drivers/net/zxdh/zxdh_tables.h | 7 + > 9 files changed, 606 insertions(+) > Some suggestions: > +int > +zxdh_dev_rss_reta_update(struct rte_eth_dev *dev, > + struct rte_eth_rss_reta_entry64 *reta_conf, > + uint16_t reta_size) > +{ > + struct zxdh_hw *hw = dev->data->dev_private; > + struct zxdh_msg_info msg = {0}; > + uint16_t old_reta[RTE_ETH_RSS_RETA_SIZE_256]; > + uint16_t idx; > + uint16_t i; > + uint16_t pos; > + int ret; > + > + if (reta_size != RTE_ETH_RSS_RETA_SIZE_256) { > + PMD_DRV_LOG(ERR, "reta_size is illegal(%u).reta_size should be 256", reta_size); > + return -EINVAL; > + } > + if (!hw->rss_reta) { > + hw->rss_reta = rte_zmalloc(NULL, RTE_ETH_RSS_RETA_SIZE_256 * sizeof(uint16_t), 4); This could be rte_calloc() ... > +int > +zxdh_rss_table_set(uint16_t vport, struct zxdh_rss_reta *rss_reta) > +{ > + struct zxdh_rss_to_vqid_table rss_vqid = {0}; > + union zxdh_virport_num vport_num = (union zxdh_virport_num)vport; > + int ret = 0; > + > + for (uint16_t i = 0; i < RTE_ETH_RSS_RETA_SIZE_256 / 8; i++) { > + for (uint16_t j = 0; j < 8; j++) { > + #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN > + if (j % 2 == 0) > + rss_vqid.vqm_qid[j + 1] = rss_reta->reta[i * 8 + j]; > + else > + rss_vqid.vqm_qid[j - 1] = rss_reta->reta[i * 8 + j]; > + #else > + rss_vqid.vqm_qid[j] = rss_init->reta[i * 8 + j]; > + #endif Please put #if in first column not indented. Better yet, use rte_byteorder functions to elimnate #if code pattern. > + } > + > + #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN > + rss_vqid.vqm_qid[1] |= 0x8000; > + #else > + rss_vqid.vqm_qid[0] |= 0x8000; > + #endif ^ permalink raw reply [flat|nested] 225+ messages in thread
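Both review remarks are mechanical to apply. rte_calloc() gives the zero-filled, element-count allocation directly; and because the #if blocks quoted above only reorder 16-bit queue IDs so that each 32-bit word of the table entry reads back as (qid[2k] << 16) | qid[2k+1] on either byte order, the pair can be built in host order and the conditional compilation dropped entirely. A sketch under those assumptions follows; the helper names and the 32-bit view of the 16-byte entry are illustrative, not the committed fix.

#include <stdint.h>
#include <rte_ethdev.h>
#include <rte_malloc.h>

/* First remark: rte_calloc() replaces the rte_zmalloc() call, keeping
 * the patch's 4-byte alignment and making the element count explicit. */
static uint16_t *alloc_rss_reta(void)
{
	return rte_calloc(NULL, RTE_ETH_RSS_RETA_SIZE_256,
			sizeof(uint16_t), 4);
}

/* Second remark: on both endiannesses the swapped vqm_qid[] stores end
 * up with word k == (qid[2k] << 16) | qid[2k+1] and the valid flag in
 * bit 31 of word 0 -- so build the words in host order, no #if needed.
 * 'words' is a 4 x uint32_t view of one zxdh_rss_to_vqid_table entry,
 * 'reta8' the eight reta values covered by that entry. */
static void pack_rss_entry(uint32_t words[4], const uint32_t reta8[8])
{
	int k;

	for (k = 0; k < 4; k++)
		words[k] = (reta8[2 * k] & 0xffff) << 16 |
				(reta8[2 * k + 1] & 0xffff);
	words[0] |= UINT32_C(1) << 31; /* table-entry valid flag */
}

The same identity works on the read side: zxdh_rss_table_get() can clear bit 31 of word 0 and unpack the two halves of each word without any conditional compilation.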
* [PATCH v4 14/15] net/zxdh: basic stats ops implementations 2024-12-18 9:25 ` [PATCH v4 00/15] net/zxdh: updated net zxdh driver Junlong Wang ` (12 preceding siblings ...) 2024-12-18 9:26 ` [PATCH v4 13/15] net/zxdh: rss hash config/update, reta update/get Junlong Wang @ 2024-12-18 9:26 ` Junlong Wang 2024-12-18 9:26 ` [PATCH v4 15/15] net/zxdh: mtu update " Junlong Wang 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-18 9:26 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 37376 bytes --] basic stats ops implementations. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- doc/guides/nics/features/zxdh.ini | 2 + doc/guides/nics/zxdh.rst | 1 + drivers/net/zxdh/zxdh_ethdev.c | 2 + drivers/net/zxdh/zxdh_ethdev_ops.c | 353 +++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev_ops.h | 27 +++ drivers/net/zxdh/zxdh_msg.h | 16 ++ drivers/net/zxdh/zxdh_np.c | 341 ++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_np.h | 30 +++ drivers/net/zxdh/zxdh_queue.h | 2 + drivers/net/zxdh/zxdh_rxtx.c | 83 ++++++- drivers/net/zxdh/zxdh_tables.h | 5 + 11 files changed, 859 insertions(+), 3 deletions(-) diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index 415ca547d0..98c141cf95 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -22,3 +22,5 @@ QinQ offload = Y RSS hash = Y RSS reta update = Y Inner RSS = Y +Basic stats = Y +Stats per queue = Y diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index 3cc6a1d348..c8a52b587c 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -32,6 +32,7 @@ Features of the ZXDH PMD are: - VLAN stripping and inserting - QINQ stripping and inserting - Receive Side Scaling (RSS) +- Port hardware statistics Driver compilation and testing diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 1349559c9b..a1822e1556 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -1151,6 +1151,8 @@ static const struct eth_dev_ops zxdh_eth_dev_ops = { .reta_query = zxdh_dev_rss_reta_query, .rss_hash_update = zxdh_rss_hash_update, .rss_hash_conf_get = zxdh_rss_hash_conf_get, + .stats_get = zxdh_dev_stats_get, + .stats_reset = zxdh_dev_stats_reset, }; static int32_t diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.c b/drivers/net/zxdh/zxdh_ethdev_ops.c index c12947cb4d..2c10f171aa 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.c +++ b/drivers/net/zxdh/zxdh_ethdev_ops.c @@ -11,6 +11,8 @@ #include "zxdh_ethdev_ops.h" #include "zxdh_tables.h" #include "zxdh_logs.h" +#include "zxdh_rxtx.h" +#include "zxdh_np.h" #define ZXDH_VLAN_FILTER_GROUPS 64 #define ZXDH_INVALID_LOGIC_QID 0xFFFFU @@ -22,6 +24,108 @@ #define ZXDH_HF_MAC_VLAN 4 #define ZXDH_HF_ALL 0 +struct zxdh_hw_mac_stats { + uint64_t rx_total; + uint64_t rx_pause; + uint64_t rx_unicast; + uint64_t rx_multicast; + uint64_t rx_broadcast; + uint64_t rx_vlan; + uint64_t rx_size_64; + uint64_t rx_size_65_127; + uint64_t rx_size_128_255; + uint64_t rx_size_256_511; + uint64_t rx_size_512_1023; + uint64_t rx_size_1024_1518; + uint64_t rx_size_1519_mru; + uint64_t rx_undersize; + uint64_t rx_oversize; + uint64_t rx_fragment; + uint64_t rx_jabber; + uint64_t rx_control; + uint64_t rx_eee; + + uint64_t tx_total; + uint64_t tx_pause; + uint64_t tx_unicast; + uint64_t tx_multicast; + uint64_t tx_broadcast; + uint64_t tx_vlan; + uint64_t tx_size_64; + uint64_t tx_size_65_127; + uint64_t tx_size_128_255; + 
uint64_t tx_size_256_511; + uint64_t tx_size_512_1023; + uint64_t tx_size_1024_1518; + uint64_t tx_size_1519_mtu; + uint64_t tx_undersize; + uint64_t tx_oversize; + uint64_t tx_fragment; + uint64_t tx_jabber; + uint64_t tx_control; + uint64_t tx_eee; + + uint64_t rx_error; + uint64_t rx_fcs_error; + uint64_t rx_drop; + + uint64_t tx_error; + uint64_t tx_fcs_error; + uint64_t tx_drop; + +} __rte_packed; + +struct zxdh_hw_mac_bytes { + uint64_t rx_total_bytes; + uint64_t rx_good_bytes; + uint64_t tx_total_bytes; + uint64_t tx_good_bytes; +} __rte_packed; + +struct zxdh_np_stats_data { + uint64_t n_pkts_dropped; + uint64_t n_bytes_dropped; +}; + +struct zxdh_xstats_name_off { + char name[RTE_ETH_XSTATS_NAME_SIZE]; + unsigned int offset; +}; + +static const struct zxdh_xstats_name_off zxdh_rxq_stat_strings[] = { + {"good_packets", offsetof(struct zxdh_virtnet_rx, stats.packets)}, + {"good_bytes", offsetof(struct zxdh_virtnet_rx, stats.bytes)}, + {"errors", offsetof(struct zxdh_virtnet_rx, stats.errors)}, + {"multicast_packets", offsetof(struct zxdh_virtnet_rx, stats.multicast)}, + {"broadcast_packets", offsetof(struct zxdh_virtnet_rx, stats.broadcast)}, + {"truncated_err", offsetof(struct zxdh_virtnet_rx, stats.truncated_err)}, + {"undersize_packets", offsetof(struct zxdh_virtnet_rx, stats.size_bins[0])}, + {"size_64_packets", offsetof(struct zxdh_virtnet_rx, stats.size_bins[1])}, + {"size_65_127_packets", offsetof(struct zxdh_virtnet_rx, stats.size_bins[2])}, + {"size_128_255_packets", offsetof(struct zxdh_virtnet_rx, stats.size_bins[3])}, + {"size_256_511_packets", offsetof(struct zxdh_virtnet_rx, stats.size_bins[4])}, + {"size_512_1023_packets", offsetof(struct zxdh_virtnet_rx, stats.size_bins[5])}, + {"size_1024_1518_packets", offsetof(struct zxdh_virtnet_rx, stats.size_bins[6])}, + {"size_1519_max_packets", offsetof(struct zxdh_virtnet_rx, stats.size_bins[7])}, +}; + +static const struct zxdh_xstats_name_off zxdh_txq_stat_strings[] = { + {"good_packets", offsetof(struct zxdh_virtnet_tx, stats.packets)}, + {"good_bytes", offsetof(struct zxdh_virtnet_tx, stats.bytes)}, + {"errors", offsetof(struct zxdh_virtnet_tx, stats.errors)}, + {"multicast_packets", offsetof(struct zxdh_virtnet_tx, stats.multicast)}, + {"broadcast_packets", offsetof(struct zxdh_virtnet_tx, stats.broadcast)}, + {"truncated_err", offsetof(struct zxdh_virtnet_tx, stats.truncated_err)}, + {"undersize_packets", offsetof(struct zxdh_virtnet_tx, stats.size_bins[0])}, + {"size_64_packets", offsetof(struct zxdh_virtnet_tx, stats.size_bins[1])}, + {"size_65_127_packets", offsetof(struct zxdh_virtnet_tx, stats.size_bins[2])}, + {"size_128_255_packets", offsetof(struct zxdh_virtnet_tx, stats.size_bins[3])}, + {"size_256_511_packets", offsetof(struct zxdh_virtnet_tx, stats.size_bins[4])}, + {"size_512_1023_packets", offsetof(struct zxdh_virtnet_tx, stats.size_bins[5])}, + {"size_1024_1518_packets", offsetof(struct zxdh_virtnet_tx, stats.size_bins[6])}, + {"size_1519_max_packets", offsetof(struct zxdh_virtnet_tx, stats.size_bins[7])}, +}; + static int32_t zxdh_config_port_status(struct rte_eth_dev *dev, uint16_t link_status) { struct zxdh_hw *hw = dev->data->dev_private; @@ -1162,3 +1266,252 @@ zxdh_rss_configure(struct rte_eth_dev *dev) } return 0; } + +static int32_t +zxdh_hw_vqm_stats_get(struct rte_eth_dev *dev, enum zxdh_agent_msg_type opcode, + struct zxdh_hw_vqm_stats *hw_stats) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_info msg_info = {0}; + struct zxdh_msg_reply_info reply_info = {0}; + enum 
ZXDH_BAR_MODULE_ID module_id; + int ret = 0; + + switch (opcode) { + case ZXDH_VQM_DEV_STATS_GET: + case ZXDH_VQM_QUEUE_STATS_GET: + case ZXDH_VQM_QUEUE_STATS_RESET: + module_id = ZXDH_BAR_MODULE_VQM; + break; + case ZXDH_MAC_STATS_GET: + case ZXDH_MAC_STATS_RESET: + module_id = ZXDH_BAR_MODULE_MAC; + break; + default: + PMD_DRV_LOG(ERR, "invalid opcode %u", opcode); + return -1; + } + + zxdh_agent_msg_build(hw, opcode, &msg_info); + + ret = zxdh_send_msg_to_riscv(dev, &msg_info, sizeof(struct zxdh_msg_info), + &reply_info, sizeof(struct zxdh_msg_reply_info), module_id); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to get hw stats"); + return -1; + } + struct zxdh_msg_reply_body *reply_body = &reply_info.reply_body; + + rte_memcpy(hw_stats, &reply_body->vqm_stats, sizeof(struct zxdh_hw_vqm_stats)); + return 0; +} + +static int zxdh_hw_mac_stats_get(struct rte_eth_dev *dev, + struct zxdh_hw_mac_stats *mac_stats, + struct zxdh_hw_mac_bytes *mac_bytes) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint64_t virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_MAC_OFFSET); + uint64_t stats_addr = 0; + uint64_t bytes_addr = 0; + + if (hw->speed <= RTE_ETH_SPEED_NUM_25G) { + stats_addr = virt_addr + ZXDH_MAC_STATS_OFFSET + 352 * (hw->phyport % 4); + bytes_addr = virt_addr + ZXDH_MAC_BYTES_OFFSET + 32 * (hw->phyport % 4); + } else { + stats_addr = virt_addr + ZXDH_MAC_STATS_OFFSET + 352 * 4; + bytes_addr = virt_addr + ZXDH_MAC_BYTES_OFFSET + 32 * 4; + } + + rte_memcpy(mac_stats, (void *)stats_addr, sizeof(struct zxdh_hw_mac_stats)); + rte_memcpy(mac_bytes, (void *)bytes_addr, sizeof(struct zxdh_hw_mac_bytes)); + return 0; +} + +static void zxdh_data_hi_to_lo(uint64_t *data) +{ + uint32_t n_data_hi; + uint32_t n_data_lo; + + n_data_lo = *data >> 32; + n_data_hi = *data; + *data = (uint64_t)(rte_le_to_cpu_32(n_data_hi)) << 32 | + rte_le_to_cpu_32(n_data_lo); +} + +static int zxdh_np_stats_get(struct rte_eth_dev *dev, struct zxdh_hw_np_stats *np_stats) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_np_stats_data stats_data; + uint32_t stats_id = zxdh_vport_to_vfid(hw->vport); + uint32_t idx = 0; + int ret = 0; + + idx = stats_id + ZXDH_BROAD_STATS_EGRESS_BASE; + ret = zxdh_np_dtb_stats_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, + 0, idx, (uint32_t *)&np_stats->np_tx_broadcast); + if (ret) + return ret; + zxdh_data_hi_to_lo(&np_stats->np_tx_broadcast); + + idx = stats_id + ZXDH_BROAD_STATS_INGRESS_BASE; + memset(&stats_data, 0, sizeof(stats_data)); + ret = zxdh_np_dtb_stats_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, + 0, idx, (uint32_t *)&np_stats->np_rx_broadcast); + if (ret) + return ret; + zxdh_data_hi_to_lo(&np_stats->np_rx_broadcast); + + idx = stats_id + ZXDH_MTU_STATS_EGRESS_BASE; + memset(&stats_data, 0, sizeof(stats_data)); + ret = zxdh_np_dtb_stats_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, + 1, idx, (uint32_t *)&stats_data); + if (ret) + return ret; + + np_stats->np_tx_mtu_drop_pkts = stats_data.n_pkts_dropped; + np_stats->np_tx_mtu_drop_bytes = stats_data.n_bytes_dropped; + zxdh_data_hi_to_lo(&np_stats->np_tx_mtu_drop_pkts); + zxdh_data_hi_to_lo(&np_stats->np_tx_mtu_drop_bytes); + + idx = stats_id + ZXDH_MTU_STATS_INGRESS_BASE; + memset(&stats_data, 0, sizeof(stats_data)); + ret = zxdh_np_dtb_stats_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, + 1, idx, (uint32_t *)&stats_data); + if (ret) + return ret; + np_stats->np_rx_mtu_drop_pkts = stats_data.n_pkts_dropped; + np_stats->np_rx_mtu_drop_bytes = stats_data.n_bytes_dropped; + zxdh_data_hi_to_lo(&np_stats->np_rx_mtu_drop_pkts); + 
zxdh_data_hi_to_lo(&np_stats->np_rx_mtu_drop_bytes); + + return 0; +} + +static int +zxdh_hw_np_stats_get(struct rte_eth_dev *dev, struct zxdh_hw_np_stats *np_stats) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_info msg_info = {0}; + struct zxdh_msg_reply_info reply_info = {0}; + int ret = 0; + + if (hw->is_pf) { + ret = zxdh_np_stats_get(dev, np_stats); + if (ret) { + PMD_DRV_LOG(ERR, "get np stats failed"); + return -1; + } + } else { + zxdh_msg_head_build(hw, ZXDH_GET_NP_STATS, &msg_info); + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(struct zxdh_msg_info), + &reply_info, sizeof(struct zxdh_msg_reply_info)); + if (ret) { + PMD_DRV_LOG(ERR, + "Failed to send msg: port 0x%x msg type ZXDH_PORT_METER_STAT_GET", + hw->vport.vport); + return -1; + } + memcpy(np_stats, &reply_info.reply_body.np_stats, sizeof(struct zxdh_hw_np_stats)); + } + return ret; +} + +int +zxdh_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_hw_vqm_stats vqm_stats = {0}; + struct zxdh_hw_np_stats np_stats = {0}; + struct zxdh_hw_mac_stats mac_stats = {0}; + struct zxdh_hw_mac_bytes mac_bytes = {0}; + uint32_t i = 0; + + zxdh_hw_vqm_stats_get(dev, ZXDH_VQM_DEV_STATS_GET, &vqm_stats); + if (hw->is_pf) + zxdh_hw_mac_stats_get(dev, &mac_stats, &mac_bytes); + + zxdh_hw_np_stats_get(dev, &np_stats); + + stats->ipackets = vqm_stats.rx_total; + stats->opackets = vqm_stats.tx_total; + stats->ibytes = vqm_stats.rx_bytes; + stats->obytes = vqm_stats.tx_bytes; + stats->imissed = vqm_stats.rx_drop + mac_stats.rx_drop; + stats->ierrors = vqm_stats.rx_error + mac_stats.rx_error + np_stats.np_rx_mtu_drop_pkts; + stats->oerrors = vqm_stats.tx_error + mac_stats.tx_error + np_stats.np_tx_mtu_drop_pkts; + + stats->rx_nombuf = dev->data->rx_mbuf_alloc_failed; + for (i = 0; (i < dev->data->nb_rx_queues) && (i < RTE_ETHDEV_QUEUE_STAT_CNTRS); i++) { + struct zxdh_virtnet_rx *rxvq = dev->data->rx_queues[i]; + + if (rxvq == NULL) + continue; + stats->q_ipackets[i] = *(uint64_t *)(((char *)rxvq) + + zxdh_rxq_stat_strings[0].offset); + stats->q_ibytes[i] = *(uint64_t *)(((char *)rxvq) + + zxdh_rxq_stat_strings[1].offset); + stats->q_errors[i] = *(uint64_t *)(((char *)rxvq) + + zxdh_rxq_stat_strings[2].offset); + stats->q_errors[i] += *(uint64_t *)(((char *)rxvq) + + zxdh_rxq_stat_strings[5].offset); + } + + for (i = 0; (i < dev->data->nb_tx_queues) && (i < RTE_ETHDEV_QUEUE_STAT_CNTRS); i++) { + struct zxdh_virtnet_tx *txvq = dev->data->tx_queues[i]; + + if (txvq == NULL) + continue; + stats->q_opackets[i] = *(uint64_t *)(((char *)txvq) + + zxdh_txq_stat_strings[0].offset); + stats->q_obytes[i] = *(uint64_t *)(((char *)txvq) + + zxdh_txq_stat_strings[1].offset); + stats->q_errors[i] += *(uint64_t *)(((char *)txvq) + + zxdh_txq_stat_strings[2].offset); + stats->q_errors[i] += *(uint64_t *)(((char *)txvq) + + zxdh_txq_stat_strings[5].offset); + } + return 0; +} + +static int zxdh_hw_stats_reset(struct rte_eth_dev *dev, enum zxdh_agent_msg_type opcode) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_info msg_info = {0}; + struct zxdh_msg_reply_info reply_info = {0}; + enum ZXDH_BAR_MODULE_ID module_id; + int ret = 0; + + switch (opcode) { + case ZXDH_VQM_DEV_STATS_RESET: + module_id = ZXDH_BAR_MODULE_VQM; + break; + case ZXDH_MAC_STATS_RESET: + module_id = ZXDH_BAR_MODULE_MAC; + break; + default: + PMD_DRV_LOG(ERR, "invalid opcode %u", opcode); + return -1; + } + + zxdh_agent_msg_build(hw, opcode, &msg_info); + + ret = 
zxdh_send_msg_to_riscv(dev, &msg_info, sizeof(struct zxdh_msg_info), + &reply_info, sizeof(struct zxdh_msg_reply_info), module_id); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to reset hw stats"); + return -1; + } + return 0; +} + +int zxdh_dev_stats_reset(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + zxdh_hw_stats_reset(dev, ZXDH_VQM_DEV_STATS_RESET); + if (hw->is_pf) + zxdh_hw_stats_reset(dev, ZXDH_MAC_STATS_RESET); + + return 0; +} diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.h b/drivers/net/zxdh/zxdh_ethdev_ops.h index 860716d079..f35378e691 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.h +++ b/drivers/net/zxdh/zxdh_ethdev_ops.h @@ -5,6 +5,8 @@ #ifndef ZXDH_ETHDEV_OPS_H #define ZXDH_ETHDEV_OPS_H +#include <stdint.h> + #include <rte_ether.h> #include "zxdh_ethdev.h" @@ -24,6 +26,29 @@ #define ZXDH_HF_MAC_VLAN_ETH ZXDH_ETH_RSS_L2 #define ZXDH_RSS_HF ((ZXDH_HF_MAC_VLAN_ETH | ZXDH_HF_F3_ETH | ZXDH_HF_F5_ETH)) +struct zxdh_hw_vqm_stats { + uint64_t rx_total; + uint64_t tx_total; + uint64_t rx_bytes; + uint64_t tx_bytes; + uint64_t rx_error; + uint64_t tx_error; + uint64_t rx_drop; +} __rte_packed; + +struct zxdh_hw_np_stats { + uint64_t np_rx_broadcast; + uint64_t np_tx_broadcast; + uint64_t np_rx_mtu_drop_pkts; + uint64_t np_tx_mtu_drop_pkts; + uint64_t np_rx_mtu_drop_bytes; + uint64_t np_tx_mtu_drop_bytes; + uint64_t np_rx_mtr_drop_pkts; + uint64_t np_tx_mtr_drop_pkts; + uint64_t np_rx_mtr_drop_bytes; + uint64_t np_tx_mtr_drop_bytes; +}; + int zxdh_dev_set_link_up(struct rte_eth_dev *dev); int zxdh_dev_set_link_down(struct rte_eth_dev *dev); int32_t zxdh_dev_link_update(struct rte_eth_dev *dev, int32_t wait_to_complete __rte_unused); @@ -46,5 +71,7 @@ int zxdh_dev_rss_reta_query(struct rte_eth_dev *dev, int zxdh_rss_hash_update(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf); int zxdh_rss_hash_conf_get(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf); int zxdh_rss_configure(struct rte_eth_dev *dev); +int zxdh_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats); +int zxdh_dev_stats_reset(struct rte_eth_dev *dev); #endif /* ZXDH_ETHDEV_OPS_H */ diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index 45a9b10aa4..159c8c9c71 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -9,10 +9,16 @@ #include <ethdev_driver.h> +#include "zxdh_ethdev_ops.h" + #define ZXDH_BAR0_INDEX 0 #define ZXDH_CTRLCH_OFFSET (0x2000) #define ZXDH_MSG_CHAN_PFVFSHARE_OFFSET (ZXDH_CTRLCH_OFFSET + 0x1000) +#define ZXDH_MAC_OFFSET (0x24000) +#define ZXDH_MAC_STATS_OFFSET (0x1408) +#define ZXDH_MAC_BYTES_OFFSET (0xb000) + #define ZXDH_MSIX_INTR_MSG_VEC_BASE 1 #define ZXDH_MSIX_INTR_MSG_VEC_NUM 3 #define ZXDH_MSIX_INTR_DTB_VEC (ZXDH_MSIX_INTR_MSG_VEC_BASE + ZXDH_MSIX_INTR_MSG_VEC_NUM) @@ -173,7 +179,13 @@ enum pciebar_layout_type { /* riscv msg opcodes */ enum zxdh_agent_msg_type { + ZXDH_MAC_STATS_GET = 10, + ZXDH_MAC_STATS_RESET, ZXDH_MAC_LINK_GET = 14, + ZXDH_VQM_DEV_STATS_GET = 21, + ZXDH_VQM_DEV_STATS_RESET, + ZXDH_VQM_QUEUE_STATS_GET = 24, + ZXDH_VQM_QUEUE_STATS_RESET, }; enum zxdh_msg_type { @@ -195,6 +207,8 @@ enum zxdh_msg_type { ZXDH_PORT_ATTRS_SET = 25, ZXDH_PORT_PROMISC_SET = 26, + ZXDH_GET_NP_STATS = 31, + ZXDH_MSG_TYPE_END, }; @@ -322,6 +336,8 @@ struct zxdh_msg_reply_body { struct zxdh_link_info_msg link_msg; struct zxdh_rss_hf rss_hf; struct zxdh_rss_reta rss_reta; + struct zxdh_hw_vqm_stats vqm_stats; + struct zxdh_hw_np_stats np_stats; } __rte_packed; } __rte_packed; diff --git 
a/drivers/net/zxdh/zxdh_np.c b/drivers/net/zxdh/zxdh_np.c index 1f06539263..42679635f4 100644 --- a/drivers/net/zxdh/zxdh_np.c +++ b/drivers/net/zxdh/zxdh_np.c @@ -26,6 +26,7 @@ ZXDH_TLB_MGR_T *g_p_dpp_tlb_mgr[ZXDH_DEV_CHANNEL_MAX]; ZXDH_REG_T g_dpp_reg_info[4]; ZXDH_DTB_TABLE_T g_dpp_dtb_table_info[4]; ZXDH_SDT_TBL_DATA_T g_sdt_info[ZXDH_DEV_CHANNEL_MAX][ZXDH_DEV_SDT_ID_MAX]; +ZXDH_PPU_STAT_CFG_T g_ppu_stat_cfg; #define ZXDH_SDT_MGR_PTR_GET() (&g_sdt_mgr) #define ZXDH_SDT_SOFT_TBL_GET(id) (g_sdt_mgr.sdt_tbl_array[id]) @@ -117,6 +118,18 @@ do {\ #define ZXDH_COMM_CONVERT16(w_data) \ (((w_data) & 0xff) << 8) +#define ZXDH_DTB_TAB_UP_WR_INDEX_GET(DEV_ID, QUEUE_ID) \ + (p_dpp_dtb_mgr[(DEV_ID)]->queue_info[(QUEUE_ID)].tab_up.wr_index) + +#define ZXDH_DTB_TAB_UP_USER_PHY_ADDR_FLAG_GET(DEV_ID, QUEUE_ID, INDEX) \ + (p_dpp_dtb_mgr[(DEV_ID)]->queue_info[(QUEUE_ID)].tab_up.user_addr[(INDEX)].user_flag) + +#define ZXDH_DTB_TAB_UP_USER_PHY_ADDR_GET(DEV_ID, QUEUE_ID, INDEX) \ + (p_dpp_dtb_mgr[(DEV_ID)]->queue_info[(QUEUE_ID)].tab_up.user_addr[(INDEX)].phy_addr) + +#define ZXDH_DTB_TAB_UP_DATA_LEN_GET(DEV_ID, QUEUE_ID, INDEX) \ + (p_dpp_dtb_mgr[(DEV_ID)]->queue_info[(QUEUE_ID)].tab_up.data_len[(INDEX)]) + #define ZXDH_DTB_TAB_UP_VIR_ADDR_GET(DEV_ID, QUEUE_ID, INDEX) \ ((INDEX) * p_dpp_dtb_mgr[(DEV_ID)]->queue_info[(QUEUE_ID)].tab_up.item_size) @@ -1717,3 +1730,331 @@ zxdh_np_dtb_table_entry_get(uint32_t dev_id, return 0; } + +static void +zxdh_np_stat_cfg_soft_get(uint32_t dev_id, + ZXDH_PPU_STAT_CFG_T *p_stat_cfg) +{ + ZXDH_COMM_CHECK_DEV_POINT(dev_id, p_stat_cfg); + + p_stat_cfg->ddr_base_addr = g_ppu_stat_cfg.ddr_base_addr; + p_stat_cfg->eram_baddr = g_ppu_stat_cfg.eram_baddr; + p_stat_cfg->eram_depth = g_ppu_stat_cfg.eram_depth; + p_stat_cfg->ppu_addr_offset = g_ppu_stat_cfg.ppu_addr_offset; +} + +static uint32_t +zxdh_np_dtb_tab_up_info_set(uint32_t dev_id, + uint32_t queue_id, + uint32_t item_index, + uint32_t int_flag, + uint32_t data_len, + uint32_t desc_len, + uint32_t *p_desc_data) +{ + ZXDH_DTB_QUEUE_ITEM_INFO_T item_info = {0}; + uint32_t queue_en = 0; + uint32_t rc; + + zxdh_np_dtb_queue_enable_get(dev_id, queue_id, &queue_en); + if (!queue_en) { + PMD_DRV_LOG(ERR, "the queue %d is not enable!", queue_id); + return ZXDH_RC_DTB_QUEUE_NOT_ENABLE; + } + + if (ZXDH_DTB_QUEUE_INIT_FLAG_GET(dev_id, queue_id) == 0) { + PMD_DRV_LOG(ERR, "dtb queue %d is not init.", queue_id); + return ZXDH_RC_DTB_QUEUE_IS_NOT_INIT; + } + + if (desc_len % 4 != 0) + return ZXDH_RC_DTB_PARA_INVALID; + + zxdh_np_dtb_item_buff_wr(dev_id, queue_id, ZXDH_DTB_DIR_UP_TYPE, + item_index, 0, desc_len, p_desc_data); + + ZXDH_DTB_TAB_UP_DATA_LEN_GET(dev_id, queue_id, item_index) = data_len; + + item_info.cmd_vld = 1; + item_info.cmd_type = ZXDH_DTB_DIR_UP_TYPE; + item_info.int_en = int_flag; + item_info.data_len = desc_len / 4; + + if (zxdh_np_dev_get_dev_type(dev_id) == ZXDH_DEV_TYPE_SIM) + return 0; + + rc = zxdh_np_dtb_queue_item_info_set(dev_id, queue_id, &item_info); + + return rc; +} + +static uint32_t +zxdh_np_dtb_write_dump_desc_info(uint32_t dev_id, + uint32_t queue_id, + uint32_t queue_element_id, + uint32_t *p_dump_info, + uint32_t data_len, + uint32_t desc_len, + uint32_t *p_dump_data) +{ + uint32_t dtb_interrupt_status = 0; + uint32_t rc; + + ZXDH_COMM_CHECK_POINT(p_dump_data); + rc = zxdh_np_dtb_tab_up_info_set(dev_id, + queue_id, + queue_element_id, + dtb_interrupt_status, + data_len, + desc_len, + p_dump_info); + if (rc != 0) { + PMD_DRV_LOG(ERR, "the queue %d element id %d dump" + " info set failed!", queue_id, 
queue_element_id); + zxdh_np_dtb_item_ack_wr(dev_id, queue_id, ZXDH_DTB_DIR_UP_TYPE, + queue_element_id, 0, ZXDH_DTB_TAB_ACK_UNUSED_MASK); + } + + return rc; +} + +static uint32_t +zxdh_np_dtb_tab_up_free_item_get(uint32_t dev_id, + uint32_t queue_id, + uint32_t *p_item_index) +{ + uint32_t ack_vale = 0; + uint32_t item_index = 0; + uint32_t unused_item_num = 0; + uint32_t i; + + if (ZXDH_DTB_QUEUE_INIT_FLAG_GET(dev_id, queue_id) == 0) { + PMD_DRV_LOG(ERR, "dtb queue %d is not init.", queue_id); + return ZXDH_RC_DTB_QUEUE_IS_NOT_INIT; + } + + zxdh_np_dtb_queue_unused_item_num_get(dev_id, queue_id, &unused_item_num); + + if (unused_item_num == 0) + return ZXDH_RC_DTB_QUEUE_ITEM_HW_EMPTY; + + for (i = 0; i < ZXDH_DTB_QUEUE_ITEM_NUM_MAX; i++) { + item_index = ZXDH_DTB_TAB_UP_WR_INDEX_GET(dev_id, queue_id) % + ZXDH_DTB_QUEUE_ITEM_NUM_MAX; + + zxdh_np_dtb_item_ack_rd(dev_id, queue_id, ZXDH_DTB_DIR_UP_TYPE, item_index, + 0, &ack_vale); + + ZXDH_DTB_TAB_UP_WR_INDEX_GET(dev_id, queue_id)++; + + if ((ack_vale >> 8) == ZXDH_DTB_TAB_ACK_UNUSED_MASK) + break; + } + + if (i == ZXDH_DTB_QUEUE_ITEM_NUM_MAX) + return ZXDH_RC_DTB_QUEUE_ITEM_SW_EMPTY; + + zxdh_np_dtb_item_ack_wr(dev_id, queue_id, ZXDH_DTB_DIR_UP_TYPE, item_index, + 0, ZXDH_DTB_TAB_ACK_IS_USING_MASK); + + *p_item_index = item_index; + + + return 0; +} + +static uint32_t +zxdh_np_dtb_tab_up_item_addr_get(uint32_t dev_id, + uint32_t queue_id, + uint32_t item_index, + uint32_t *p_phy_haddr, + uint32_t *p_phy_laddr) +{ + uint32_t rc = 0; + uint64_t addr; + + if (ZXDH_DTB_QUEUE_INIT_FLAG_GET(dev_id, queue_id) == 0) { + PMD_DRV_LOG(ERR, "dtb queue %d is not init.", queue_id); + return ZXDH_RC_DTB_QUEUE_IS_NOT_INIT; + } + + if (ZXDH_DTB_TAB_UP_USER_PHY_ADDR_FLAG_GET(dev_id, queue_id, item_index) == + ZXDH_DTB_TAB_UP_USER_ADDR_TYPE) + addr = ZXDH_DTB_TAB_UP_USER_PHY_ADDR_GET(dev_id, queue_id, item_index); + else + addr = ZXDH_DTB_ITEM_ACK_SIZE; + + *p_phy_haddr = (addr >> 32) & 0xffffffff; + *p_phy_laddr = addr & 0xffffffff; + + return rc; +} + +static uint32_t +zxdh_np_dtb_se_smmu0_dma_dump(uint32_t dev_id, + uint32_t queue_id, + uint32_t base_addr, + uint32_t depth, + uint32_t *p_data, + uint32_t *element_id) +{ + uint8_t form_buff[ZXDH_DTB_TABLE_CMD_SIZE_BIT / 8] = {0}; + uint32_t dump_dst_phy_haddr = 0; + uint32_t dump_dst_phy_laddr = 0; + uint32_t queue_item_index = 0; + uint32_t data_len; + uint32_t desc_len; + uint32_t rc; + + rc = zxdh_np_dtb_tab_up_free_item_get(dev_id, queue_id, &queue_item_index); + if (rc != 0) { + PMD_DRV_LOG(ERR, "dpp_dtb_tab_up_free_item_get failed = %d!", base_addr); + return ZXDH_RC_DTB_QUEUE_ITEM_SW_EMPTY; + } + + *element_id = queue_item_index; + + rc = zxdh_np_dtb_tab_up_item_addr_get(dev_id, queue_id, queue_item_index, + &dump_dst_phy_haddr, &dump_dst_phy_laddr); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_dtb_tab_up_item_addr_get"); + + data_len = depth * 128 / 32; + desc_len = ZXDH_DTB_LEN_POS_SETP / 4; + + rc = zxdh_np_dtb_write_dump_desc_info(dev_id, queue_id, queue_item_index, + (uint32_t *)form_buff, data_len, desc_len, p_data); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_dtb_write_dump_desc_info"); + + return rc; +} + +static uint32_t +zxdh_np_dtb_se_smmu0_ind_read(uint32_t dev_id, + uint32_t queue_id, + uint32_t base_addr, + uint32_t index, + uint32_t rd_mode, + uint32_t *p_data) +{ + uint32_t temp_data[4] = {0}; + uint32_t element_id = 0; + uint32_t row_index = 0; + uint32_t col_index = 0; + uint32_t eram_dump_base_addr; + uint32_t rc; + + switch (rd_mode) { + case ZXDH_ERAM128_OPR_128b: + { + row_index = 
index; + break; + } + case ZXDH_ERAM128_OPR_64b: + { + row_index = (index >> 1); + col_index = index & 0x1; + break; + } + case ZXDH_ERAM128_OPR_1b: + { + row_index = (index >> 7); + col_index = index & 0x7F; + break; + } + } + + eram_dump_base_addr = base_addr + row_index; + rc = zxdh_np_dtb_se_smmu0_dma_dump(dev_id, + queue_id, + eram_dump_base_addr, + 1, + temp_data, + &element_id); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_np_dtb_se_smmu0_dma_dump"); + + switch (rd_mode) { + case ZXDH_ERAM128_OPR_128b: + { + memcpy(p_data, temp_data, (128 / 8)); + break; + } + + case ZXDH_ERAM128_OPR_64b: + { + memcpy(p_data, temp_data + ((1 - col_index) << 1), (64 / 8)); + break; + } + + case ZXDH_ERAM128_OPR_1b: + { + ZXDH_COMM_UINT32_GET_BITS(p_data[0], *(temp_data + + (3 - col_index / 32)), (col_index % 32), 1); + break; + } + } + + return rc; +} + +static uint32_t +zxdh_np_dtb_stat_smmu0_int_read(uint32_t dev_id, + uint32_t queue_id, + uint32_t smmu0_base_addr, + ZXDH_STAT_CNT_MODE_E rd_mode, + uint32_t index, + uint32_t *p_data) +{ + uint32_t eram_rd_mode; + uint32_t rc; + + ZXDH_COMM_CHECK_DEV_POINT(dev_id, p_data); + + if (rd_mode == ZXDH_STAT_128_MODE) + eram_rd_mode = ZXDH_ERAM128_OPR_128b; + else + eram_rd_mode = ZXDH_ERAM128_OPR_64b; + + rc = zxdh_np_dtb_se_smmu0_ind_read(dev_id, + queue_id, + smmu0_base_addr, + index, + eram_rd_mode, + p_data); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_np_dtb_se_smmu0_ind_read"); + + return rc; +} + +int +zxdh_np_dtb_stats_get(uint32_t dev_id, + uint32_t queue_id, + ZXDH_STAT_CNT_MODE_E rd_mode, + uint32_t index, + uint32_t *p_data) +{ + ZXDH_PPU_STAT_CFG_T stat_cfg = {0}; + uint32_t ppu_eram_baddr; + uint32_t ppu_eram_depth; + uint32_t rc = 0; + + ZXDH_COMM_CHECK_DEV_POINT(dev_id, p_data); + + memset(&stat_cfg, 0x0, sizeof(stat_cfg)); + + zxdh_np_stat_cfg_soft_get(dev_id, &stat_cfg); + + ppu_eram_depth = stat_cfg.eram_depth; + ppu_eram_baddr = stat_cfg.eram_baddr; + + if ((index >> (ZXDH_STAT_128_MODE - rd_mode)) < ppu_eram_depth) { + rc = zxdh_np_dtb_stat_smmu0_int_read(dev_id, + queue_id, + ppu_eram_baddr, + rd_mode, + index, + p_data); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_dtb_stat_smmu0_int_read"); + } + + return rc; +} diff --git a/drivers/net/zxdh/zxdh_np.h b/drivers/net/zxdh/zxdh_np.h index 19d1f03f59..7da29cf7bd 100644 --- a/drivers/net/zxdh/zxdh_np.h +++ b/drivers/net/zxdh/zxdh_np.h @@ -432,6 +432,18 @@ typedef enum zxdh_sdt_table_type_e { ZXDH_SDT_TBLT_MAX = 7, } ZXDH_SDT_TABLE_TYPE_E; +typedef enum zxdh_dtb_dir_type_e { + ZXDH_DTB_DIR_DOWN_TYPE = 0, + ZXDH_DTB_DIR_UP_TYPE = 1, + ZXDH_DTB_DIR_TYPE_MAX, +} ZXDH_DTB_DIR_TYPE_E; + +typedef enum zxdh_dtb_tab_up_user_addr_type_e { + ZXDH_DTB_TAB_UP_NOUSER_ADDR_TYPE = 0, + ZXDH_DTB_TAB_UP_USER_ADDR_TYPE = 1, + ZXDH_DTB_TAB_UP_USER_ADDR_TYPE_MAX, +} ZXDH_DTB_TAB_UP_USER_ADDR_TYPE_E; + typedef struct zxdh_dtb_lpm_entry_t { uint32_t dtb_len0; uint8_t *p_data_buff0; @@ -537,6 +549,19 @@ typedef struct zxdh_dtb_hash_entry_info_t { uint8_t *p_rst; } ZXDH_DTB_HASH_ENTRY_INFO_T; +typedef struct zxdh_ppu_stat_cfg_t { + uint32_t eram_baddr; + uint32_t eram_depth; + uint32_t ddr_base_addr; + uint32_t ppu_addr_offset; +} ZXDH_PPU_STAT_CFG_T; + +typedef enum zxdh_stat_cnt_mode_e { + ZXDH_STAT_64_MODE = 0, + ZXDH_STAT_128_MODE = 1, + ZXDH_STAT_MAX_MODE, +} ZXDH_STAT_CNT_MODE_E; + int zxdh_np_host_init(uint32_t dev_id, ZXDH_DEV_INIT_CTRL_T *p_dev_init_ctrl); int zxdh_np_online_uninit(uint32_t dev_id, char *port_name, uint32_t queue_id); int zxdh_np_dtb_table_entry_write(uint32_t dev_id, uint32_t queue_id, @@ -545,5 
+570,10 @@ int zxdh_np_dtb_table_entry_delete(uint32_t dev_id, uint32_t queue_id, uint32_t entrynum, ZXDH_DTB_USER_ENTRY_T *delete_entries); int zxdh_np_dtb_table_entry_get(uint32_t dev_id, uint32_t queue_id, ZXDH_DTB_USER_ENTRY_T *get_entry, uint32_t srh_mode); +int zxdh_np_dtb_stats_get(uint32_t dev_id, + uint32_t queue_id, + ZXDH_STAT_CNT_MODE_E rd_mode, + uint32_t index, + uint32_t *p_data); #endif /* ZXDH_NP_H */ diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h index 9343df81ac..deb0dd891a 100644 --- a/drivers/net/zxdh/zxdh_queue.h +++ b/drivers/net/zxdh/zxdh_queue.h @@ -53,6 +53,8 @@ enum { ZXDH_VTNET_RQ = 0, ZXDH_VTNET_TQ = 1 }; #define ZXDH_PI_HDR_SIZE sizeof(struct zxdh_pi_hdr) #define ZXDH_DL_NET_HDR_SIZE sizeof(struct zxdh_net_hdr_dl) #define ZXDH_UL_NET_HDR_SIZE sizeof(struct zxdh_net_hdr_ul) +#define ZXDH_PD_HDR_SIZE_MAX 256 +#define ZXDH_PD_HDR_SIZE_MIN ZXDH_TYPE_HDR_SIZE /* * ring descriptors: 16 bytes. diff --git a/drivers/net/zxdh/zxdh_rxtx.c b/drivers/net/zxdh/zxdh_rxtx.c index 0ffce50042..27a61d46dd 100644 --- a/drivers/net/zxdh/zxdh_rxtx.c +++ b/drivers/net/zxdh/zxdh_rxtx.c @@ -406,6 +406,40 @@ static inline void zxdh_enqueue_xmit_packed(struct zxdh_virtnet_tx *txvq, zxdh_queue_store_flags_packed(head_dp, head_flags, vq->hw->weak_barriers); } +static void +zxdh_update_packet_stats(struct zxdh_virtnet_stats *stats, struct rte_mbuf *mbuf) +{ + uint32_t s = mbuf->pkt_len; + struct rte_ether_addr *ea = NULL; + + stats->bytes += s; + + if (s == 64) { + stats->size_bins[1]++; + } else if (s > 64 && s < 1024) { + uint32_t bin; + + /* count zeros, and offset into correct bin */ + bin = (sizeof(s) * 8) - rte_clz32(s) - 5; + stats->size_bins[bin]++; + } else { + if (s < 64) + stats->size_bins[0]++; + else if (s < 1519) + stats->size_bins[6]++; + else + stats->size_bins[7]++; + } + + ea = rte_pktmbuf_mtod(mbuf, struct rte_ether_addr *); + if (rte_is_multicast_ether_addr(ea)) { + if (rte_is_broadcast_ether_addr(ea)) + stats->broadcast++; + else + stats->multicast++; + } +} + uint16_t zxdh_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { @@ -459,12 +493,19 @@ zxdh_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkt break; } } + if (txm->nb_segs > ZXDH_TX_MAX_SEGS) { + PMD_TX_LOG(ERR, "%d segs dropped", txm->nb_segs); + txvq->stats.truncated_err += nb_pkts - nb_tx; + break; + } /* Enqueue Packet buffers */ if (can_push) zxdh_enqueue_xmit_packed_fast(txvq, txm, in_order); else zxdh_enqueue_xmit_packed(txvq, txm, slots, use_indirect, in_order); + zxdh_update_packet_stats(&txvq->stats, txm); } + txvq->stats.packets += nb_tx; if (likely(nb_tx)) { if (unlikely(zxdh_queue_kick_prepare_packed(vq))) { zxdh_queue_notify(vq); @@ -474,9 +515,10 @@ zxdh_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkt return nb_tx; } -uint16_t zxdh_xmit_pkts_prepare(void *tx_queue __rte_unused, struct rte_mbuf **tx_pkts, +uint16_t zxdh_xmit_pkts_prepare(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { + struct zxdh_virtnet_tx *txvq = tx_queue; uint16_t nb_tx; for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) { @@ -496,6 +538,12 @@ uint16_t zxdh_xmit_pkts_prepare(void *tx_queue __rte_unused, struct rte_mbuf **t rte_errno = -error; break; } + if (m->nb_segs > ZXDH_TX_MAX_SEGS) { + PMD_TX_LOG(ERR, "%d segs dropped", m->nb_segs); + txvq->stats.truncated_err += nb_pkts - nb_tx; + rte_errno = ENOMEM; + break; + } } return nb_tx; } @@ -571,7 +619,7 @@ static int32_t zxdh_rx_update_mbuf(struct rte_mbuf *m, 
struct zxdh_net_hdr_ul *h return 0; } -static inline void zxdh_discard_rxbuf(struct zxdh_virtqueue *vq, struct rte_mbuf *m) +static void zxdh_discard_rxbuf(struct zxdh_virtqueue *vq, struct rte_mbuf *m) { int32_t error = 0; /* @@ -613,7 +661,13 @@ uint16_t zxdh_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts, for (i = 0; i < num; i++) { rxm = rcv_pkts[i]; - + if (unlikely(len[i] < ZXDH_UL_NET_HDR_SIZE)) { + nb_enqueued++; + PMD_RX_LOG(ERR, "RX, len:%u err", len[i]); + zxdh_discard_rxbuf(vq, rxm); + rxvq->stats.errors++; + continue; + } struct zxdh_net_hdr_ul *header = (struct zxdh_net_hdr_ul *)((char *)rxm->buf_addr + RTE_PKTMBUF_HEADROOM); @@ -623,8 +677,22 @@ uint16_t zxdh_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts, PMD_RX_LOG(ERR, "dequeue %d pkt, No.%d pkt seg_num is %d", num, i, seg_num); seg_num = 1; } + if (seg_num > ZXDH_RX_MAX_SEGS) { + PMD_RX_LOG(ERR, "dequeue %d pkt, No.%d pkt seg_num is %d", num, i, seg_num); + nb_enqueued++; + zxdh_discard_rxbuf(vq, rxm); + rxvq->stats.errors++; + continue; + } /* bit[0:6]-pd_len unit:2B */ uint16_t pd_len = header->type_hdr.pd_len << 1; + if (pd_len > ZXDH_PD_HDR_SIZE_MAX || pd_len < ZXDH_PD_HDR_SIZE_MIN) { + PMD_RX_LOG(ERR, "pd_len:%d is invalid", pd_len); + nb_enqueued++; + zxdh_discard_rxbuf(vq, rxm); + rxvq->stats.errors++; + continue; + } /* Private queue only handle type hdr */ hdr_size = pd_len; rxm->data_off = RTE_PKTMBUF_HEADROOM + hdr_size; @@ -639,6 +707,7 @@ uint16_t zxdh_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts, /* Update rte_mbuf according to pi/pd header */ if (zxdh_rx_update_mbuf(rxm, header) < 0) { zxdh_discard_rxbuf(vq, rxm); + rxvq->stats.errors++; continue; } seg_res = seg_num - 1; @@ -661,8 +730,11 @@ uint16_t zxdh_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts, PMD_RX_LOG(ERR, "dropped rcvd_pkt_len %d pktlen %d.", rcvd_pkt_len, rx_pkts[nb_rx]->pkt_len); zxdh_discard_rxbuf(vq, rx_pkts[nb_rx]); + rxvq->stats.errors++; + rxvq->stats.truncated_err++; continue; } + zxdh_update_packet_stats(&rxvq->stats, rx_pkts[nb_rx]); nb_rx++; } } @@ -675,6 +747,7 @@ uint16_t zxdh_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts, if (unlikely(rcv_cnt == 0)) { PMD_RX_LOG(ERR, "No enough segments for packet."); rte_pktmbuf_free(rx_pkts[nb_rx]); + rxvq->stats.errors++; break; } while (extra_idx < rcv_cnt) { @@ -694,11 +767,15 @@ uint16_t zxdh_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts, PMD_RX_LOG(ERR, "dropped rcvd_pkt_len %d pktlen %d.", rcvd_pkt_len, rx_pkts[nb_rx]->pkt_len); zxdh_discard_rxbuf(vq, rx_pkts[nb_rx]); + rxvq->stats.errors++; + rxvq->stats.truncated_err++; continue; } + zxdh_update_packet_stats(&rxvq->stats, rx_pkts[nb_rx]); nb_rx++; } } + rxvq->stats.packets += nb_rx; /* Allocate new mbuf for the used descriptor */ if (likely(!zxdh_queue_full(vq))) { diff --git a/drivers/net/zxdh/zxdh_tables.h b/drivers/net/zxdh/zxdh_tables.h index 7bac39375c..c7da40f294 100644 --- a/drivers/net/zxdh/zxdh_tables.h +++ b/drivers/net/zxdh/zxdh_tables.h @@ -11,6 +11,11 @@ #define ZXDH_PORT_BASE_QID_FLAG 10 #define ZXDH_PORT_ATTR_IS_UP_FLAG 35 +#define ZXDH_MTU_STATS_EGRESS_BASE 0x8481 +#define ZXDH_MTU_STATS_INGRESS_BASE 0x8981 +#define ZXDH_BROAD_STATS_EGRESS_BASE 0xC902 +#define ZXDH_BROAD_STATS_INGRESS_BASE 0xD102 + extern struct zxdh_dtb_shared_data g_dtb_data; struct zxdh_port_attr_table { -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 87119 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
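As a worked example of the size-bin arithmetic in zxdh_update_packet_stats() above: for 64 < s < 1024 the expression bin = (sizeof(s) * 8) - rte_clz32(s) - 5 reduces to floor(log2(s)) - 4, so 65..127 byte packets land in size_bins[2], 128..255 in size_bins[3], and so on up to 512..1023 in size_bins[5], exactly the slots named by the zxdh_rxq_stat_strings/zxdh_txq_stat_strings tables in this patch. A minimal standalone sketch of the mapping, with GCC's __builtin_clz standing in for the EAL helper rte_clz32 and arbitrary sample lengths:

#include <stdint.h>
#include <stdio.h>

static unsigned int pkt_len_to_bin(uint32_t s)
{
	if (s < 64)
		return 0;	/* undersize_packets */
	if (s == 64)
		return 1;	/* size_64_packets */
	if (s < 1024)	/* 65..1023: floor(log2(s)) - 4 */
		return (sizeof(s) * 8) - __builtin_clz(s) - 5;
	if (s < 1519)
		return 6;	/* size_1024_1518_packets */
	return 7;		/* size_1519_max_packets */
}

int main(void)
{
	const uint32_t len[] = { 60, 64, 65, 127, 128, 511, 512, 1023, 1518, 9000 };
	unsigned int i;

	for (i = 0; i < sizeof(len) / sizeof(len[0]); i++)
		printf("pkt_len %4u -> size_bins[%u]\n", len[i], pkt_len_to_bin(len[i]));
	return 0;
}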
* [PATCH v4 15/15] net/zxdh: mtu update ops implementations 2024-12-18 9:25 ` [PATCH v4 00/15] net/zxdh: updated net zxdh driver Junlong Wang ` (13 preceding siblings ...) 2024-12-18 9:26 ` [PATCH v4 14/15] net/zxdh: basic stats ops implementations Junlong Wang @ 2024-12-18 9:26 ` Junlong Wang 2024-12-21 0:33 ` Stephen Hemminger 14 siblings, 1 reply; 225+ messages in thread From: Junlong Wang @ 2024-12-18 9:26 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 8573 bytes --] mtu update ops implementations. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- doc/guides/nics/features/zxdh.ini | 1 + doc/guides/nics/zxdh.rst | 2 + drivers/net/zxdh/zxdh_ethdev.c | 5 ++ drivers/net/zxdh/zxdh_ethdev_ops.c | 78 ++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev_ops.h | 3 ++ drivers/net/zxdh/zxdh_tables.c | 42 ++++++++++++++++ drivers/net/zxdh/zxdh_tables.h | 4 ++ 7 files changed, 135 insertions(+) diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index 98c141cf95..3561e31666 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -24,3 +24,4 @@ RSS reta update = Y Inner RSS = Y Basic stats = Y Stats per queue = Y +MTU update = Y diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index c8a52b587c..58e0c49a2e 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -33,6 +33,8 @@ Features of the ZXDH PMD are: - QINQ stripping and inserting - Receive Side Scaling (RSS) - Port hardware statistics +- MTU update +- Jumbo frames Driver compilation and testing diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index a1822e1556..2906e3be6e 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -64,6 +64,10 @@ zxdh_dev_infos_get(struct rte_eth_dev *dev, dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_256; dev_info->flow_type_rss_offloads = ZXDH_RSS_HF; + dev_info->max_mtu = ZXDH_MAX_RX_PKTLEN - RTE_ETHER_HDR_LEN - + RTE_VLAN_HLEN - ZXDH_DL_NET_HDR_SIZE; + dev_info->min_mtu = ZXDH_ETHER_MIN_MTU; + dev_info->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_MULTI_SEGS); dev_info->tx_offload_capa |= (RTE_ETH_TX_OFFLOAD_TCP_TSO | RTE_ETH_TX_OFFLOAD_UDP_TSO); @@ -1153,6 +1157,7 @@ static const struct eth_dev_ops zxdh_eth_dev_ops = { .rss_hash_conf_get = zxdh_rss_hash_conf_get, .stats_get = zxdh_dev_stats_get, .stats_reset = zxdh_dev_stats_reset, + .mtu_set = zxdh_dev_mtu_set, }; static int32_t diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.c b/drivers/net/zxdh/zxdh_ethdev_ops.c index 2c10f171aa..8e72a9a6b2 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.c +++ b/drivers/net/zxdh/zxdh_ethdev_ops.c @@ -13,6 +13,7 @@ #include "zxdh_logs.h" #include "zxdh_rxtx.h" #include "zxdh_np.h" +#include "zxdh_queue.h" #define ZXDH_VLAN_FILTER_GROUPS 64 #define ZXDH_INVALID_LOGIC_QID 0xFFFFU @@ -1515,3 +1516,80 @@ int zxdh_dev_stats_reset(struct rte_eth_dev *dev) return 0; } + +int zxdh_dev_mtu_set(struct rte_eth_dev *dev, uint16_t new_mtu) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_panel_table panel = {0}; + struct zxdh_port_attr_table vport_att = {0}; + uint16_t vfid = zxdh_vport_to_vfid(hw->vport); + uint16_t max_mtu = 0; + int ret = 0; + + max_mtu = ZXDH_MAX_RX_PKTLEN - RTE_ETHER_HDR_LEN - RTE_VLAN_HLEN - ZXDH_DL_NET_HDR_SIZE; + if (new_mtu < ZXDH_ETHER_MIN_MTU || new_mtu > max_mtu) { + PMD_DRV_LOG(ERR, "invalid mtu:%d, range[%d, %d]", + new_mtu, ZXDH_ETHER_MIN_MTU, max_mtu); + return -EINVAL; + } + + if 
(dev->data->mtu == new_mtu) + return 0; + + if (hw->is_pf) { + memset(&panel, 0, sizeof(panel)); + memset(&vport_att, 0, sizeof(vport_att)); + ret = zxdh_get_panel_attr(dev, &panel); + if (ret != 0) { + PMD_DRV_LOG(ERR, "get_panel_attr ret:%d", ret); + return -1; + } + + ret = zxdh_get_port_attr(vfid, &vport_att); + if (ret != 0) { + PMD_DRV_LOG(ERR, + "[vfid:%d] zxdh_dev_mtu, get vport dpp_ret:%d", vfid, ret); + return -1; + } + + panel.mtu = new_mtu; + panel.mtu_enable = 1; + ret = zxdh_set_panel_attr(dev, &panel); + if (ret != 0) { + PMD_DRV_LOG(ERR, "set zxdh_dev_mtu failed, ret:%u", ret); + return ret; + } + + vport_att.mtu_enable = 1; + vport_att.mtu = new_mtu; + ret = zxdh_set_port_attr(vfid, &vport_att); + if (ret != 0) { + PMD_DRV_LOG(ERR, + "[vfid:%d] zxdh_dev_mtu, set vport dpp_ret:%d", vfid, ret); + return ret; + } + } else { + struct zxdh_msg_info msg_info = {0}; + struct zxdh_port_attr_set_msg *attr_msg = &msg_info.data.port_attr_msg; + + zxdh_msg_head_build(hw, ZXDH_PORT_ATTRS_SET, &msg_info); + attr_msg->mode = ZXDH_PORT_MTU_EN_FLAG; + attr_msg->value = 1; + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d ", + hw->vport.vport, ZXDH_PORT_MTU_EN_FLAG); + return ret; + } + attr_msg->mode = ZXDH_PORT_MTU_FLAG; + attr_msg->value = new_mtu; + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d ", + hw->vport.vport, ZXDH_PORT_MTU_FLAG); + return ret; + } + } + dev->data->mtu = new_mtu; + return 0; +} diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.h b/drivers/net/zxdh/zxdh_ethdev_ops.h index f35378e691..fac6cbd5e8 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.h +++ b/drivers/net/zxdh/zxdh_ethdev_ops.h @@ -26,6 +26,8 @@ #define ZXDH_HF_MAC_VLAN_ETH ZXDH_ETH_RSS_L2 #define ZXDH_RSS_HF ((ZXDH_HF_MAC_VLAN_ETH | ZXDH_HF_F3_ETH | ZXDH_HF_F5_ETH)) +#define ZXDH_ETHER_MIN_MTU 68 + struct zxdh_hw_vqm_stats { uint64_t rx_total; uint64_t tx_total; @@ -73,5 +75,6 @@ int zxdh_rss_hash_conf_get(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss int zxdh_rss_configure(struct rte_eth_dev *dev); int zxdh_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats); int zxdh_dev_stats_reset(struct rte_eth_dev *dev); +int zxdh_dev_mtu_set(struct rte_eth_dev *dev, uint16_t new_mtu); #endif /* ZXDH_ETHDEV_OPS_H */ diff --git a/drivers/net/zxdh/zxdh_tables.c b/drivers/net/zxdh/zxdh_tables.c index af148a974e..c1b693a613 100644 --- a/drivers/net/zxdh/zxdh_tables.c +++ b/drivers/net/zxdh/zxdh_tables.c @@ -150,6 +150,48 @@ int zxdh_panel_table_init(struct rte_eth_dev *dev) return ret; } +int zxdh_get_panel_attr(struct rte_eth_dev *dev, struct zxdh_panel_table *panel_attr) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint8_t index_phy_port = hw->phyport; + + ZXDH_DTB_ERAM_ENTRY_INFO_T panel_entry = { + .index = index_phy_port, + .p_data = (uint32_t *)panel_attr + }; + ZXDH_DTB_USER_ENTRY_T entry = { + .sdt_no = ZXDH_SDT_PANEL_ATT_TABLE, + .p_entry_data = (void *)&panel_entry + }; + int ret = zxdh_np_dtb_table_entry_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, &entry, 1); + + if (ret != 0) + PMD_DRV_LOG(ERR, "get panel table failed"); + + return ret; +} + +int zxdh_set_panel_attr(struct rte_eth_dev *dev, struct zxdh_panel_table *panel_attr) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint8_t index_phy_port = hw->phyport; + + ZXDH_DTB_ERAM_ENTRY_INFO_T panel_entry = { + .index = index_phy_port, + 
.p_data = (uint32_t *)panel_attr + }; + ZXDH_DTB_USER_ENTRY_T entry = { + .sdt_no = ZXDH_SDT_PANEL_ATT_TABLE, + .p_entry_data = (void *)&panel_entry + }; + int ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, g_dtb_data.queueid, 1, &entry); + + if (ret) + PMD_DRV_LOG(ERR, "Insert panel table failed"); + + return ret; +} + int zxdh_get_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr) { diff --git a/drivers/net/zxdh/zxdh_tables.h b/drivers/net/zxdh/zxdh_tables.h index c7da40f294..adedf3d0d3 100644 --- a/drivers/net/zxdh/zxdh_tables.h +++ b/drivers/net/zxdh/zxdh_tables.h @@ -8,8 +8,10 @@ #include <stdint.h> #define ZXDH_DEVICE_NO 0 +#define ZXDH_PORT_MTU_FLAG 9 #define ZXDH_PORT_BASE_QID_FLAG 10 #define ZXDH_PORT_ATTR_IS_UP_FLAG 35 +#define ZXDH_PORT_MTU_EN_FLAG 42 #define ZXDH_MTU_STATS_EGRESS_BASE 0x8481 #define ZXDH_MTU_STATS_INGRESS_BASE 0x8981 @@ -223,5 +225,7 @@ int zxdh_vlan_filter_table_init(struct rte_eth_dev *dev); int zxdh_vlan_filter_table_set(uint16_t vport, uint16_t vlan_id, uint8_t enable); int zxdh_rss_table_set(uint16_t vport, struct zxdh_rss_reta *rss_reta); int zxdh_rss_table_get(uint16_t vport, struct zxdh_rss_reta *rss_reta); +int zxdh_get_panel_attr(struct rte_eth_dev *dev, struct zxdh_panel_table *panel_attr); +int zxdh_set_panel_attr(struct rte_eth_dev *dev, struct zxdh_panel_table *panel_attr); #endif /* ZXDH_TABLES_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 18747 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
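A brief usage sketch of the new .mtu_set op as an application reaches it through the generic API: rte_eth_dev_set_mtu() checks new_mtu against the min_mtu/max_mtu that zxdh_dev_infos_get() now advertises before the PMD callback runs, a point the review that follows picks up. The port id and error handling here are illustrative only:

#include <stdio.h>
#include <rte_ethdev.h>

static int zxdh_try_max_mtu(uint16_t port_id)
{
	struct rte_eth_dev_info info;
	int ret;

	ret = rte_eth_dev_info_get(port_id, &info);
	if (ret != 0)
		return ret;

	/* info.max_mtu is filled by zxdh_dev_infos_get() as
	 * ZXDH_MAX_RX_PKTLEN - RTE_ETHER_HDR_LEN - RTE_VLAN_HLEN -
	 * ZXDH_DL_NET_HDR_SIZE, per the patch above. */
	ret = rte_eth_dev_set_mtu(port_id, info.max_mtu);
	if (ret != 0)
		printf("port %u: set_mtu(%u) failed: %d\n",
			port_id, info.max_mtu, ret);
	return ret;
}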
* Re: [PATCH v4 15/15] net/zxdh: mtu update ops implementations 2024-12-18 9:26 ` [PATCH v4 15/15] net/zxdh: mtu update " Junlong Wang @ 2024-12-21 0:33 ` Stephen Hemminger 0 siblings, 0 replies; 225+ messages in thread From: Stephen Hemminger @ 2024-12-21 0:33 UTC (permalink / raw) To: Junlong Wang; +Cc: dev On Wed, 18 Dec 2024 17:26:02 +0800 Junlong Wang <wang.junlong1@zte.com.cn> wrote: > +int zxdh_dev_mtu_set(struct rte_eth_dev *dev, uint16_t new_mtu) > +{ > + struct zxdh_hw *hw = dev->data->dev_private; > + struct zxdh_panel_table panel = {0}; > + struct zxdh_port_attr_table vport_att = {0}; > + uint16_t vfid = zxdh_vport_to_vfid(hw->vport); > + uint16_t max_mtu = 0; > + int ret = 0; useless initializations. > + > + max_mtu = ZXDH_MAX_RX_PKTLEN - RTE_ETHER_HDR_LEN - RTE_VLAN_HLEN - ZXDH_DL_NET_HDR_SIZE; > + if (new_mtu < ZXDH_ETHER_MIN_MTU || new_mtu > max_mtu) { > + PMD_DRV_LOG(ERR, "invalid mtu:%d, range[%d, %d]", > + new_mtu, ZXDH_ETHER_MIN_MTU, max_mtu); > + return -EINVAL; > + } These checks are redundant. See rte_ethdev.c::eth_dev_validate_mtu function. It already checks the mtu against values returned from info_get. > + > + if (dev->data->mtu == new_mtu) > + return 0; This should be done in ethdev_set_mtu but does not look like that is checked. Will look into fixing it there. ^ permalink raw reply [flat|nested] 225+ messages in thread
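For illustration, a sketch of zxdh_dev_mtu_set() with the review applied: the dead initializers are gone and the local range check is dropped in favor of the generic ethdev validation. The zxdh_mtu_set_pf()/zxdh_mtu_set_vf() helpers are hypothetical factorings of the PF and VF branches from the patch, not functions in the series; the early return on an unchanged MTU stays in the PMD for now, since (per the reply above) ethdev does not yet short-circuit that case.

#include "zxdh_ethdev.h"
#include "zxdh_ethdev_ops.h"

/* Hypothetical helpers wrapping the panel/port table writes (PF)
 * and the ZXDH_PORT_ATTRS_SET messages to the PF (VF). */
static int zxdh_mtu_set_pf(struct rte_eth_dev *dev, uint16_t new_mtu);
static int zxdh_mtu_set_vf(struct rte_eth_dev *dev, uint16_t new_mtu);

int zxdh_dev_mtu_set(struct rte_eth_dev *dev, uint16_t new_mtu)
{
	struct zxdh_hw *hw = dev->data->dev_private;
	int ret;

	/* No local range check: eth_dev_validate_mtu() has already
	 * compared new_mtu against dev_info min_mtu/max_mtu. */
	if (dev->data->mtu == new_mtu)
		return 0;

	ret = hw->is_pf ? zxdh_mtu_set_pf(dev, new_mtu)
			: zxdh_mtu_set_vf(dev, new_mtu);
	if (ret == 0)
		dev->data->mtu = new_mtu;
	return ret;
}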
* [PATCH v5 00/15] net/zxdh: updated net zxdh driver 2024-12-10 5:53 ` [PATCH v2 01/15] net/zxdh: zxdh np init implementation Junlong Wang ` (4 preceding siblings ...) 2024-12-18 9:25 ` [PATCH v4 00/15] net/zxdh: updated net zxdh driver Junlong Wang @ 2024-12-23 11:02 ` Junlong Wang 2024-12-23 11:02 ` [PATCH v5 01/15] net/zxdh: zxdh np init implementation Junlong Wang ` (16 more replies) 2024-12-26 3:37 ` [PATCH v6 " Junlong Wang 6 siblings, 17 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-23 11:02 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 2997 bytes --] V5: - Simplify the notify_data handling in the zxdh_notify_queue function. - Replace rte_zmalloc with rte_calloc in the rss_reta_update function. - Remove the unnecessary check in the mtu_set function. V4: - Resolved CI compile issues. V3: - Use rte_zmalloc and rte_calloc to avoid explicit memset. - Remove unnecessary initializations whose values are set on first use. - Change functions that always return 0 to void and drop the assertions on their return values. - Resolved some WARNING:MACRO_ARG_UNUSED issues. - Resolved some other issues. V2: - Resolved code style and github-robot build issues. V1: - Updated the net zxdh driver: provided insert/delete/get table functions and link/mac/vlan/promiscuous/rss/mtu ops. Junlong Wang (15): net/zxdh: zxdh np init implementation net/zxdh: zxdh np uninit implementation net/zxdh: port tables init implementations net/zxdh: port tables uninit implementations net/zxdh: rx/tx queue setup and intr enable net/zxdh: dev start/stop ops implementations net/zxdh: provided dev simple tx implementations net/zxdh: provided dev simple rx implementations net/zxdh: link info update, set link up/down net/zxdh: mac set/add/remove ops implementations net/zxdh: promisc/allmulti ops implementations net/zxdh: vlan filter/offload ops implementations net/zxdh: rss hash config/update, reta update/get net/zxdh: basic stats ops implementations net/zxdh: mtu update ops implementations doc/guides/nics/features/zxdh.ini | 18 + doc/guides/nics/zxdh.rst | 17 + drivers/net/zxdh/meson.build | 4 + drivers/net/zxdh/zxdh_common.c | 24 + drivers/net/zxdh/zxdh_common.h | 1 + drivers/net/zxdh/zxdh_ethdev.c | 595 +++++++- drivers/net/zxdh/zxdh_ethdev.h | 40 + drivers/net/zxdh/zxdh_ethdev_ops.c | 1573 +++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev_ops.h | 80 ++ drivers/net/zxdh/zxdh_msg.c | 169 +++ drivers/net/zxdh/zxdh_msg.h | 232 ++++ drivers/net/zxdh/zxdh_np.c | 2060 ++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_np.h | 579 ++++++++ drivers/net/zxdh/zxdh_pci.c | 23 +- drivers/net/zxdh/zxdh_pci.h | 9 +- drivers/net/zxdh/zxdh_queue.c | 242 +++- drivers/net/zxdh/zxdh_queue.h | 144 +- drivers/net/zxdh/zxdh_rxtx.c | 804 +++++++++++ drivers/net/zxdh/zxdh_rxtx.h | 20 +- drivers/net/zxdh/zxdh_tables.c | 794 +++++++++++ drivers/net/zxdh/zxdh_tables.h | 231 ++++ 21 files changed, 7613 insertions(+), 46 deletions(-) create mode 100644 drivers/net/zxdh/zxdh_ethdev_ops.c create mode 100644 drivers/net/zxdh/zxdh_ethdev_ops.h create mode 100644 drivers/net/zxdh/zxdh_np.c create mode 100644 drivers/net/zxdh/zxdh_np.h create mode 100644 drivers/net/zxdh/zxdh_rxtx.c create mode 100644 drivers/net/zxdh/zxdh_tables.c create mode 100644 drivers/net/zxdh/zxdh_tables.h -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 5670 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
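The V3 and V5 notes above mention moving to rte_zmalloc()/rte_calloc() so that a separate memset() is no longer needed, for example in the reta update path. A small sketch of that pattern, using an illustrative element type rather than the driver's actual reta structures:

#include <rte_common.h>
#include <rte_malloc.h>

struct reta_entry {
	uint16_t qid;
};

static struct reta_entry *alloc_reta(unsigned int n)
{
	/* Before: buf = rte_malloc(...); memset(buf, 0, n * sizeof(*buf));
	 * After: rte_calloc() hands back zeroed, element-counted memory. */
	return rte_calloc("zxdh_reta", n, sizeof(struct reta_entry),
			RTE_CACHE_LINE_SIZE);
}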
* [PATCH v5 01/15] net/zxdh: zxdh np init implementation 2024-12-23 11:02 ` [PATCH v5 00/15] net/zxdh: updated net zxdh driver Junlong Wang @ 2024-12-23 11:02 ` Junlong Wang 2024-12-23 11:02 ` [PATCH v5 02/15] net/zxdh: zxdh np uninit implementation Junlong Wang ` (15 subsequent siblings) 16 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-23 11:02 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 36779 bytes --] Initialize the (np) network processor's resources on the host side, and set up a channel used for table insert/get/delete operations. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/meson.build | 1 + drivers/net/zxdh/zxdh_ethdev.c | 234 +++++++++++++++++++++-- drivers/net/zxdh/zxdh_ethdev.h | 30 +++ drivers/net/zxdh/zxdh_msg.c | 44 +++++ drivers/net/zxdh/zxdh_msg.h | 37 ++++ drivers/net/zxdh/zxdh_np.c | 340 +++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_np.h | 198 +++++++++++++++++++ drivers/net/zxdh/zxdh_pci.c | 2 +- drivers/net/zxdh/zxdh_pci.h | 6 +- drivers/net/zxdh/zxdh_queue.c | 2 +- drivers/net/zxdh/zxdh_queue.h | 14 +- 11 files changed, 875 insertions(+), 33 deletions(-) create mode 100644 drivers/net/zxdh/zxdh_np.c create mode 100644 drivers/net/zxdh/zxdh_np.h diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build index c9960f4c73..ab24a3145c 100644 --- a/drivers/net/zxdh/meson.build +++ b/drivers/net/zxdh/meson.build @@ -19,4 +19,5 @@ sources = files( 'zxdh_msg.c', 'zxdh_pci.c', 'zxdh_queue.c', + 'zxdh_np.c', ) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index c786198535..b8f4415e00 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -5,6 +5,7 @@ #include <ethdev_pci.h> #include <bus_pci_driver.h> #include <rte_ethdev.h> +#include <rte_malloc.h> #include "zxdh_ethdev.h" #include "zxdh_logs.h" @@ -12,8 +13,15 @@ #include "zxdh_msg.h" #include "zxdh_common.h" #include "zxdh_queue.h" +#include "zxdh_np.h" struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; +struct zxdh_shared_data *zxdh_shared_data; +const char *ZXDH_PMD_SHARED_DATA_MZ = "zxdh_pmd_shared_data"; +rte_spinlock_t zxdh_shared_data_lock = RTE_SPINLOCK_INITIALIZER; +struct zxdh_dtb_shared_data g_dtb_data; + +#define ZXDH_INVALID_DTBQUE 0xFFFF uint16_t zxdh_vport_to_vfid(union zxdh_virport_num v) @@ -406,14 +414,14 @@ zxdh_features_update(struct zxdh_hw *hw, ZXDH_VTPCI_OPS(hw)->set_features(hw, req_features); if ((rx_offloads & (RTE_ETH_TX_OFFLOAD_UDP_CKSUM | RTE_ETH_TX_OFFLOAD_TCP_CKSUM)) && - !vtpci_with_feature(hw, ZXDH_NET_F_GUEST_CSUM)) { + !zxdh_pci_with_feature(hw, ZXDH_NET_F_GUEST_CSUM)) { PMD_DRV_LOG(ERR, "rx checksum not available on this host"); return -ENOTSUP; } if ((rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) && - (!vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO4) || - !vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO6))) { + (!zxdh_pci_with_feature(hw, ZXDH_NET_F_GUEST_TSO4) || + !zxdh_pci_with_feature(hw, ZXDH_NET_F_GUEST_TSO6))) { PMD_DRV_LOG(ERR, "Large Receive Offload not available on this host"); return -ENOTSUP; } @@ -421,20 +429,20 @@ } static bool -rx_offload_enabled(struct zxdh_hw *hw) +zxdh_rx_offload_enabled(struct zxdh_hw *hw) { - return vtpci_with_feature(hw, ZXDH_NET_F_GUEST_CSUM) || - vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO4) || - vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO6); + return zxdh_pci_with_feature(hw, ZXDH_NET_F_GUEST_CSUM) || + zxdh_pci_with_feature(hw, 
ZXDH_NET_F_GUEST_TSO4) || + zxdh_pci_with_feature(hw, ZXDH_NET_F_GUEST_TSO6); } static bool -tx_offload_enabled(struct zxdh_hw *hw) +zxdh_tx_offload_enabled(struct zxdh_hw *hw) { - return vtpci_with_feature(hw, ZXDH_NET_F_CSUM) || - vtpci_with_feature(hw, ZXDH_NET_F_HOST_TSO4) || - vtpci_with_feature(hw, ZXDH_NET_F_HOST_TSO6) || - vtpci_with_feature(hw, ZXDH_NET_F_HOST_UFO); + return zxdh_pci_with_feature(hw, ZXDH_NET_F_CSUM) || + zxdh_pci_with_feature(hw, ZXDH_NET_F_HOST_TSO4) || + zxdh_pci_with_feature(hw, ZXDH_NET_F_HOST_TSO6) || + zxdh_pci_with_feature(hw, ZXDH_NET_F_HOST_UFO); } static void @@ -466,7 +474,7 @@ zxdh_dev_free_mbufs(struct rte_eth_dev *dev) continue; PMD_DRV_LOG(DEBUG, "Before freeing %s[%d] used and unused buf", type, i); - while ((buf = zxdh_virtqueue_detach_unused(vq)) != NULL) + while ((buf = zxdh_queue_detach_unused(vq)) != NULL) rte_pktmbuf_free(buf); } } @@ -550,9 +558,9 @@ zxdh_init_vring(struct zxdh_virtqueue *vq) vq->vq_desc_tail_idx = (uint16_t)(vq->vq_nentries - 1); vq->vq_free_cnt = vq->vq_nentries; memset(vq->vq_descx, 0, sizeof(struct zxdh_vq_desc_extra) * vq->vq_nentries); - vring_init_packed(&vq->vq_packed.ring, ring_mem, ZXDH_PCI_VRING_ALIGN, size); - vring_desc_init_packed(vq, size); - virtqueue_disable_intr(vq); + zxdh_vring_init_packed(&vq->vq_packed.ring, ring_mem, ZXDH_PCI_VRING_ALIGN, size); + zxdh_vring_desc_init_packed(vq, size); + zxdh_queue_disable_intr(vq); } static int32_t @@ -621,7 +629,7 @@ zxdh_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_logic_qidx) /* * Reserve a memzone for vring elements */ - size = vring_size(hw, vq_size, ZXDH_PCI_VRING_ALIGN); + size = zxdh_vring_size(hw, vq_size, ZXDH_PCI_VRING_ALIGN); vq->vq_ring_size = RTE_ALIGN_CEIL(size, ZXDH_PCI_VRING_ALIGN); PMD_DRV_LOG(DEBUG, "vring_size: %d, rounded_vring_size: %d", size, vq->vq_ring_size); @@ -694,7 +702,8 @@ zxdh_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_logic_qidx) /* first indirect descriptor is always the tx header */ struct zxdh_vring_packed_desc *start_dp = txr[i].tx_packed_indir; - vring_desc_init_indirect_packed(start_dp, RTE_DIM(txr[i].tx_packed_indir)); + zxdh_vring_desc_init_indirect_packed(start_dp, + RTE_DIM(txr[i].tx_packed_indir)); start_dp->addr = txvq->zxdh_net_hdr_mem + i * sizeof(*txr) + offsetof(struct zxdh_tx_region, tx_hdr); /* length will be updated to actual pi hdr size when xmit pkt */ @@ -792,8 +801,8 @@ zxdh_dev_configure(struct rte_eth_dev *dev) } } - hw->has_tx_offload = tx_offload_enabled(hw); - hw->has_rx_offload = rx_offload_enabled(hw); + hw->has_tx_offload = zxdh_tx_offload_enabled(hw); + hw->has_rx_offload = zxdh_rx_offload_enabled(hw); nr_vq = dev->data->nb_rx_queues + dev->data->nb_tx_queues; if (nr_vq == hw->queue_num) @@ -881,7 +890,7 @@ zxdh_init_device(struct rte_eth_dev *eth_dev) rte_ether_addr_copy((struct rte_ether_addr *)hw->mac_addr, ð_dev->data->mac_addrs[0]); /* If host does not support both status and MSI-X then disable LSC */ - if (vtpci_with_feature(hw, ZXDH_NET_F_STATUS) && hw->use_msix != ZXDH_MSIX_NONE) + if (zxdh_pci_with_feature(hw, ZXDH_NET_F_STATUS) && hw->use_msix != ZXDH_MSIX_NONE) eth_dev->data->dev_flags |= RTE_ETH_DEV_INTR_LSC; else eth_dev->data->dev_flags &= ~RTE_ETH_DEV_INTR_LSC; @@ -913,6 +922,181 @@ zxdh_agent_comm(struct rte_eth_dev *eth_dev, struct zxdh_hw *hw) return 0; } +static int +zxdh_np_dtb_res_init(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_bar_offset_params param = {0}; + struct zxdh_bar_offset_res res = {0}; + int ret = 0; + + if 
(g_dtb_data.init_done) { + PMD_DRV_LOG(DEBUG, "DTB res already init done, dev %s no need init", + dev->device->name); + return 0; + } + g_dtb_data.queueid = ZXDH_INVALID_DTBQUE; + g_dtb_data.bind_device = dev; + g_dtb_data.dev_refcnt++; + g_dtb_data.init_done = 1; + + ZXDH_DEV_INIT_CTRL_T *dpp_ctrl = rte_zmalloc(NULL, sizeof(*dpp_ctrl) + + sizeof(ZXDH_DTB_ADDR_INFO_T) * 256, 0); + if (dpp_ctrl == NULL) { + PMD_DRV_LOG(ERR, "dev %s cannot allocate memory for dpp_ctrl", dev->device->name); + ret = -ENOMEM; + goto free_res; + } + dpp_ctrl->queue_id = 0xff; + dpp_ctrl->vport = hw->vport.vport; + dpp_ctrl->vector = ZXDH_MSIX_INTR_DTB_VEC; + strlcpy(dpp_ctrl->port_name, dev->device->name, sizeof(dpp_ctrl->port_name)); + dpp_ctrl->pcie_vir_addr = (uint32_t)hw->bar_addr[0]; + + param.pcie_id = hw->pcie_id; + param.virt_addr = hw->bar_addr[0] + ZXDH_CTRLCH_OFFSET; + param.type = ZXDH_URI_NP; + + ret = zxdh_get_bar_offset(&param, &res); + if (ret) { + PMD_DRV_LOG(ERR, "dev %s get npbar offset failed", dev->device->name); + goto free_res; + } + dpp_ctrl->np_bar_len = res.bar_length; + dpp_ctrl->np_bar_offset = res.bar_offset; + + if (!g_dtb_data.dtb_table_conf_mz) { + const struct rte_memzone *conf_mz = rte_memzone_reserve_aligned("zxdh_dtb_table_conf_mz", + ZXDH_DTB_TABLE_CONF_SIZE, SOCKET_ID_ANY, 0, RTE_CACHE_LINE_SIZE); + + if (conf_mz == NULL) { + PMD_DRV_LOG(ERR, + "dev %s cannot allocate memory for dtb table conf", + dev->device->name); + ret = -ENOMEM; + goto free_res; + } + dpp_ctrl->down_vir_addr = conf_mz->addr_64; + dpp_ctrl->down_phy_addr = conf_mz->iova; + g_dtb_data.dtb_table_conf_mz = conf_mz; + } + + if (!g_dtb_data.dtb_table_dump_mz) { + const struct rte_memzone *dump_mz = rte_memzone_reserve_aligned("zxdh_dtb_table_dump_mz", + ZXDH_DTB_TABLE_DUMP_SIZE, SOCKET_ID_ANY, 0, RTE_CACHE_LINE_SIZE); + + if (dump_mz == NULL) { + PMD_DRV_LOG(ERR, + "dev %s cannot allocate memory for dtb table dump", + dev->device->name); + ret = -ENOMEM; + goto free_res; + } + dpp_ctrl->dump_vir_addr = dump_mz->addr_64; + dpp_ctrl->dump_phy_addr = dump_mz->iova; + g_dtb_data.dtb_table_dump_mz = dump_mz; + } + + ret = zxdh_np_host_init(0, dpp_ctrl); + if (ret) { + PMD_DRV_LOG(ERR, "dev %s dpp host np init failed, ret %d", dev->device->name, ret); + goto free_res; + } + + PMD_DRV_LOG(DEBUG, "dev %s dpp host np init ok, dtb queue %d", + dev->device->name, dpp_ctrl->queue_id); + g_dtb_data.queueid = dpp_ctrl->queue_id; + rte_free(dpp_ctrl); + return 0; + +free_res: + rte_free(dpp_ctrl); + return ret; +} + +static int +zxdh_init_shared_data(void) +{ + const struct rte_memzone *mz; + int ret = 0; + + rte_spinlock_lock(&zxdh_shared_data_lock); + if (zxdh_shared_data == NULL) { + if (rte_eal_process_type() == RTE_PROC_PRIMARY) { + /* Allocate shared memory. */ + mz = rte_memzone_reserve(ZXDH_PMD_SHARED_DATA_MZ, + sizeof(*zxdh_shared_data), SOCKET_ID_ANY, 0); + if (mz == NULL) { + PMD_DRV_LOG(ERR, "Cannot allocate zxdh shared data"); + ret = -rte_errno; + goto error; + } + zxdh_shared_data = mz->addr; + memset(zxdh_shared_data, 0, sizeof(*zxdh_shared_data)); + rte_spinlock_init(&zxdh_shared_data->lock); + } else { /* Lookup allocated shared memory. 
*/ + mz = rte_memzone_lookup(ZXDH_PMD_SHARED_DATA_MZ); + if (mz == NULL) { + PMD_DRV_LOG(ERR, "Cannot attach zxdh shared data"); + ret = -rte_errno; + goto error; + } + zxdh_shared_data = mz->addr; + } + } + +error: + rte_spinlock_unlock(&zxdh_shared_data_lock); + return ret; +} + +static int +zxdh_init_once(void) +{ + int ret = 0; + + if (zxdh_init_shared_data()) + return -1; + + struct zxdh_shared_data *sd = zxdh_shared_data; + rte_spinlock_lock(&sd->lock); + if (rte_eal_process_type() == RTE_PROC_SECONDARY) { + if (!sd->init_done) { + ++sd->secondary_cnt; + sd->init_done = true; + } + goto out; + } + /* RTE_PROC_PRIMARY */ + if (!sd->init_done) + sd->init_done = true; + sd->dev_refcnt++; + +out: + rte_spinlock_unlock(&sd->lock); + return ret; +} + +static int +zxdh_np_init(struct rte_eth_dev *eth_dev) +{ + struct zxdh_hw *hw = eth_dev->data->dev_private; + int ret = 0; + + if (hw->is_pf) { + ret = zxdh_np_dtb_res_init(eth_dev); + if (ret) { + PMD_DRV_LOG(ERR, "np dtb init failed, ret:%d ", ret); + return ret; + } + } + if (zxdh_shared_data != NULL) + zxdh_shared_data->np_init_done = 1; + + PMD_DRV_LOG(DEBUG, "np init ok "); + return 0; +} + static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) { @@ -950,6 +1134,10 @@ zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) hw->is_pf = 1; } + ret = zxdh_init_once(); + if (ret != 0) + goto err_zxdh_init; + ret = zxdh_init_device(eth_dev); if (ret < 0) goto err_zxdh_init; @@ -977,6 +1165,10 @@ zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) if (ret != 0) goto err_zxdh_init; + ret = zxdh_np_init(eth_dev); + if (ret) + goto err_zxdh_init; + ret = zxdh_configure_intr(eth_dev); if (ret != 0) goto err_zxdh_init; diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index 7658cbb461..b1f398b28e 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -35,6 +35,12 @@ #define ZXDH_MBUF_BURST_SZ 64 +#define ZXDH_MAX_BASE_DTB_TABLE_COUNT 30 +#define ZXDH_DTB_TABLE_DUMP_SIZE (32 * (16 + 16 * 1024)) +#define ZXDH_DTB_TABLE_CONF_SIZE (32 * (16 + 16 * 1024)) + +#define ZXDH_MAX_NAME_LEN 32 + union zxdh_virport_num { uint16_t vport; struct { @@ -89,6 +95,30 @@ struct zxdh_hw { uint8_t has_rx_offload; }; +struct zxdh_dtb_shared_data { + uint8_t init_done; + char name[ZXDH_MAX_NAME_LEN]; + uint16_t queueid; + uint16_t vport; + uint32_t vector; + const struct rte_memzone *dtb_table_conf_mz; + const struct rte_memzone *dtb_table_dump_mz; + const struct rte_memzone *dtb_table_bulk_dump_mz[ZXDH_MAX_BASE_DTB_TABLE_COUNT]; + struct rte_eth_dev *bind_device; + uint32_t dev_refcnt; +}; + +/* Shared data between primary and secondary processes. */ +struct zxdh_shared_data { + rte_spinlock_t lock; /* Global spinlock for primary and secondary processes. */ + int32_t init_done; /* Whether primary has done initialization. */ + unsigned int secondary_cnt; /* Number of secondary processes init'd. 
*/ + + int32_t np_init_done; + uint32_t dev_refcnt; + struct zxdh_dtb_shared_data *dtb_data; +}; + uint16_t zxdh_vport_to_vfid(union zxdh_virport_num v); #endif /* ZXDH_ETHDEV_H */ diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c index 53cf972f86..dd7a518a51 100644 --- a/drivers/net/zxdh/zxdh_msg.c +++ b/drivers/net/zxdh/zxdh_msg.c @@ -1035,3 +1035,47 @@ zxdh_bar_irq_recv(uint8_t src, uint8_t dst, uint64_t virt_addr, void *dev) rte_free(recved_msg); return ZXDH_BAR_MSG_OK; } + +int zxdh_get_bar_offset(struct zxdh_bar_offset_params *paras, + struct zxdh_bar_offset_res *res) +{ + uint16_t check_token; + uint16_t sum_res; + int ret; + + if (!paras) + return ZXDH_BAR_MSG_ERR_NULL; + + struct zxdh_offset_get_msg send_msg = { + .pcie_id = paras->pcie_id, + .type = paras->type, + }; + struct zxdh_pci_bar_msg in = { + .payload_addr = &send_msg, + .payload_len = sizeof(send_msg), + .virt_addr = paras->virt_addr, + .src = ZXDH_MSG_CHAN_END_PF, + .dst = ZXDH_MSG_CHAN_END_RISC, + .module_id = ZXDH_BAR_MODULE_OFFSET_GET, + .src_pcieid = paras->pcie_id, + }; + struct zxdh_bar_recv_msg recv_msg = {0}; + struct zxdh_msg_recviver_mem result = { + .recv_buffer = &recv_msg, + .buffer_len = sizeof(recv_msg), + }; + ret = zxdh_bar_chan_sync_msg_send(&in, &result); + if (ret != ZXDH_BAR_MSG_OK) + return -ret; + + check_token = recv_msg.offset_reps.check; + sum_res = zxdh_bar_get_sum((uint8_t *)&send_msg, sizeof(send_msg)); + + if (check_token != sum_res) { + PMD_MSG_LOG(ERR, "expect token: 0x%x, get token: 0x%x", sum_res, check_token); + return ZXDH_BAR_MSG_ERR_REPLY; + } + res->bar_offset = recv_msg.offset_reps.offset; + res->bar_length = recv_msg.offset_reps.length; + return ZXDH_BAR_MSG_OK; +} diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index 530ee406b1..fbc79e8f9d 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -131,6 +131,26 @@ enum ZXDH_TBL_MSG_TYPE { ZXDH_TBL_TYPE_NON, }; +enum pciebar_layout_type { + ZXDH_URI_VQM = 0, + ZXDH_URI_SPINLOCK = 1, + ZXDH_URI_FWCAP = 2, + ZXDH_URI_FWSHR = 3, + ZXDH_URI_DRS_SEC = 4, + ZXDH_URI_RSV = 5, + ZXDH_URI_CTRLCH = 6, + ZXDH_URI_1588 = 7, + ZXDH_URI_QBV = 8, + ZXDH_URI_MACPCS = 9, + ZXDH_URI_RDMA = 10, + ZXDH_URI_MNP = 11, + ZXDH_URI_MSPM = 12, + ZXDH_URI_MVQM = 13, + ZXDH_URI_MDPI = 14, + ZXDH_URI_NP = 15, + ZXDH_URI_MAX, +}; + struct zxdh_msix_para { uint16_t pcie_id; uint16_t vector_risc; @@ -174,6 +194,17 @@ struct zxdh_bar_offset_reps { uint32_t length; } __rte_packed; +struct zxdh_bar_offset_params { + uint64_t virt_addr; /* Bar space control space virtual address */ + uint16_t pcie_id; + uint16_t type; /* Module types corresponding to PCIBAR planning */ +}; + +struct zxdh_bar_offset_res { + uint32_t bar_offset; + uint32_t bar_length; +}; + struct zxdh_bar_recv_msg { uint8_t reps_ok; uint16_t reps_len; @@ -204,9 +235,15 @@ struct zxdh_bar_msg_header { uint16_t dst_pcieid; /* used in PF-->VF */ }; +struct zxdh_offset_get_msg { + uint16_t pcie_id; + uint16_t type; +}; + typedef int (*zxdh_bar_chan_msg_recv_callback)(void *pay_load, uint16_t len, void *reps_buffer, uint16_t *reps_len, void *dev); +int zxdh_get_bar_offset(struct zxdh_bar_offset_params *paras, struct zxdh_bar_offset_res *res); int zxdh_msg_chan_init(void); int zxdh_bar_msg_chan_exit(void); int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev); diff --git a/drivers/net/zxdh/zxdh_np.c b/drivers/net/zxdh/zxdh_np.c new file mode 100644 index 0000000000..e44d7ff501 --- /dev/null +++ b/drivers/net/zxdh/zxdh_np.c @@ -0,0 +1,340 @@ 
+/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#include <stdlib.h> +#include <string.h> + +#include <rte_common.h> +#include <rte_log.h> +#include <rte_debug.h> +#include <rte_malloc.h> + +#include "zxdh_np.h" +#include "zxdh_logs.h" + +static uint64_t g_np_bar_offset; +static ZXDH_DEV_MGR_T g_dev_mgr; +static ZXDH_SDT_MGR_T g_sdt_mgr; +ZXDH_PPU_CLS_BITMAP_T g_ppu_cls_bit_map[ZXDH_DEV_CHANNEL_MAX]; +ZXDH_DTB_MGR_T *p_dpp_dtb_mgr[ZXDH_DEV_CHANNEL_MAX]; + +#define ZXDH_SDT_MGR_PTR_GET() (&g_sdt_mgr) +#define ZXDH_SDT_SOFT_TBL_GET(id) (g_sdt_mgr.sdt_tbl_array[id]) + +#define ZXDH_COMM_CHECK_DEV_POINT(dev_id, point)\ +do {\ + if (NULL == (point)) {\ + PMD_DRV_LOG(ERR, "dev: %d ZXIC %s:%d[Error:POINT NULL] !"\ + "FUNCTION : %s!", (dev_id), __FILE__, __LINE__, __func__);\ + RTE_ASSERT(0);\ + } \ +} while (0) + +#define ZXDH_COMM_CHECK_DEV_RC(dev_id, rc, becall)\ +do {\ + if ((rc) != 0) {\ + PMD_DRV_LOG(ERR, "dev: %d ZXIC %s:%d !"\ + "-- %s Call %s Fail!", (dev_id), __FILE__, __LINE__, __func__, becall);\ + RTE_ASSERT(0);\ + } \ +} while (0) + +#define ZXDH_COMM_CHECK_POINT_NO_ASSERT(point)\ +do {\ + if ((point) == NULL) {\ + PMD_DRV_LOG(ERR, "ZXIC %s:%d[Error:POINT NULL] ! FUNCTION : %s!",\ + __FILE__, __LINE__, __func__);\ + } \ +} while (0) + +#define ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, becall)\ +do {\ + if ((rc) != 0) {\ + PMD_DRV_LOG(ERR, "ZXIC %s:%d !-- %s Call %s"\ + " Fail!", __FILE__, __LINE__, __func__, becall);\ + } \ +} while (0) + +#define ZXDH_COMM_CHECK_RC(rc, becall)\ +do {\ + if ((rc) != 0) {\ + PMD_DRV_LOG(ERR, "ZXIC %s:%d!-- %s Call %s "\ + "Fail!", __FILE__, __LINE__, __func__, becall);\ + RTE_ASSERT(0);\ + } \ +} while (0) + +static uint32_t +zxdh_np_dev_init(void) +{ + if (g_dev_mgr.is_init) { + PMD_DRV_LOG(ERR, "Dev is already initialized."); + return 0; + } + + g_dev_mgr.device_num = 0; + g_dev_mgr.is_init = 1; + + return 0; +} + +static uint32_t +zxdh_np_dev_add(uint32_t dev_id, ZXDH_DEV_TYPE_E dev_type, + ZXDH_DEV_ACCESS_TYPE_E access_type, uint64_t pcie_addr, + uint64_t riscv_addr, uint64_t dma_vir_addr, + uint64_t dma_phy_addr) +{ + ZXDH_DEV_CFG_T *p_dev_info = NULL; + ZXDH_DEV_MGR_T *p_dev_mgr = NULL; + + p_dev_mgr = &g_dev_mgr; + if (!p_dev_mgr->is_init) { + PMD_DRV_LOG(ERR, "ErrorCode[ 0x%x]: Device Manager is not init!!!", + ZXDH_RC_DEV_MGR_NOT_INIT); + return ZXDH_RC_DEV_MGR_NOT_INIT; + } + + if (p_dev_mgr->p_dev_array[dev_id] != NULL) { + /* device is already exist. */ + PMD_DRV_LOG(ERR, "Device is added again!!!"); + p_dev_info = p_dev_mgr->p_dev_array[dev_id]; + } else { + /* device is new. 
*/ + p_dev_info = rte_malloc(NULL, sizeof(ZXDH_DEV_CFG_T), 0); + ZXDH_COMM_CHECK_DEV_POINT(dev_id, p_dev_info); + p_dev_mgr->p_dev_array[dev_id] = p_dev_info; + p_dev_mgr->device_num++; + } + + p_dev_info->device_id = dev_id; + p_dev_info->dev_type = dev_type; + p_dev_info->access_type = access_type; + p_dev_info->pcie_addr = pcie_addr; + p_dev_info->riscv_addr = riscv_addr; + p_dev_info->dma_vir_addr = dma_vir_addr; + p_dev_info->dma_phy_addr = dma_phy_addr; + + return 0; +} + +static uint32_t +zxdh_np_dev_agent_status_set(uint32_t dev_id, uint32_t agent_flag) +{ + ZXDH_DEV_MGR_T *p_dev_mgr = NULL; + ZXDH_DEV_CFG_T *p_dev_info = NULL; + + p_dev_mgr = &g_dev_mgr; + p_dev_info = p_dev_mgr->p_dev_array[dev_id]; + + if (p_dev_info == NULL) + return ZXDH_DEV_TYPE_INVALID; + p_dev_info->agent_flag = agent_flag; + + return 0; +} + +static void +zxdh_np_sdt_mgr_init(void) +{ + if (!g_sdt_mgr.is_init) { + g_sdt_mgr.channel_num = 0; + g_sdt_mgr.is_init = 1; + memset(g_sdt_mgr.sdt_tbl_array, 0, ZXDH_DEV_CHANNEL_MAX * + sizeof(ZXDH_SDT_SOFT_TABLE_T *)); + } +} + +static uint32_t +zxdh_np_sdt_mgr_create(uint32_t dev_id) +{ + ZXDH_SDT_SOFT_TABLE_T *p_sdt_tbl_temp = NULL; + ZXDH_SDT_MGR_T *p_sdt_mgr = NULL; + + p_sdt_mgr = ZXDH_SDT_MGR_PTR_GET(); + + if (ZXDH_SDT_SOFT_TBL_GET(dev_id) == NULL) { + p_sdt_tbl_temp = rte_malloc(NULL, sizeof(ZXDH_SDT_SOFT_TABLE_T), 0); + + p_sdt_tbl_temp->device_id = dev_id; + memset(p_sdt_tbl_temp->sdt_array, 0, ZXDH_DEV_SDT_ID_MAX * sizeof(ZXDH_SDT_ITEM_T)); + + ZXDH_SDT_SOFT_TBL_GET(dev_id) = p_sdt_tbl_temp; + + p_sdt_mgr->channel_num++; + } else { + PMD_DRV_LOG(ERR, "Error: %s for dev[%d]" + "is called repeatedly!", __func__, dev_id); + return 1; + } + + return 0; +} + +static uint32_t +zxdh_np_sdt_init(uint32_t dev_num, uint32_t *dev_id_array) +{ + uint32_t rc; + uint32_t i; + + zxdh_np_sdt_mgr_init(); + + for (i = 0; i < dev_num; i++) { + rc = zxdh_np_sdt_mgr_create(dev_id_array[i]); + ZXDH_COMM_CHECK_RC(rc, "zxdh_sdt_mgr_create"); + } + + return rc; +} + +static void +zxdh_np_ppu_parse_cls_bitmap(uint32_t dev_id, + uint32_t bitmap) +{ + uint32_t cls_id; + uint32_t mem_id; + uint32_t cls_use; + uint32_t instr_mem; + + for (cls_id = 0; cls_id < ZXDH_PPU_CLUSTER_NUM; cls_id++) { + cls_use = (bitmap >> cls_id) & 0x1; + g_ppu_cls_bit_map[dev_id].cls_use[cls_id] = cls_use; + } + + for (mem_id = 0; mem_id < ZXDH_PPU_INSTR_MEM_NUM; mem_id++) { + instr_mem = (bitmap >> (mem_id * 2)) & 0x3; + g_ppu_cls_bit_map[dev_id].instr_mem[mem_id] = ((instr_mem > 0) ? 
1 : 0); + } +} + +static ZXDH_DTB_MGR_T * +zxdh_np_dtb_mgr_get(uint32_t dev_id) +{ + if (dev_id >= ZXDH_DEV_CHANNEL_MAX) + return NULL; + else + return p_dpp_dtb_mgr[dev_id]; +} + +static uint32_t +zxdh_np_dtb_soft_init(uint32_t dev_id) +{ + ZXDH_DTB_MGR_T *p_dtb_mgr = NULL; + + if (dev_id >= ZXDH_DEV_CHANNEL_MAX) + return 1; + + p_dtb_mgr = zxdh_np_dtb_mgr_get(dev_id); + if (p_dtb_mgr == NULL) { + p_dpp_dtb_mgr[dev_id] = rte_zmalloc(NULL, sizeof(ZXDH_DTB_MGR_T), 0); + p_dtb_mgr = zxdh_np_dtb_mgr_get(dev_id); + if (p_dtb_mgr == NULL) + return 1; + } + + return 0; +} + +static uint32_t +zxdh_np_base_soft_init(uint32_t dev_id, ZXDH_SYS_INIT_CTRL_T *p_init_ctrl) +{ + uint32_t dev_id_array[ZXDH_DEV_CHANNEL_MAX] = {0}; + uint32_t rt; + uint32_t access_type; + uint32_t agent_flag; + + rt = zxdh_np_dev_init(); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rt, "zxdh_dev_init"); + + if (p_init_ctrl->flags & ZXDH_INIT_FLAG_ACCESS_TYPE) + access_type = ZXDH_DEV_ACCESS_TYPE_RISCV; + else + access_type = ZXDH_DEV_ACCESS_TYPE_PCIE; + + if (p_init_ctrl->flags & ZXDH_INIT_FLAG_AGENT_FLAG) + agent_flag = ZXDH_DEV_AGENT_ENABLE; + else + agent_flag = ZXDH_DEV_AGENT_DISABLE; + + rt = zxdh_np_dev_add(dev_id, + p_init_ctrl->device_type, + access_type, + p_init_ctrl->pcie_vir_baddr, + p_init_ctrl->riscv_vir_baddr, + p_init_ctrl->dma_vir_baddr, + p_init_ctrl->dma_phy_baddr); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rt, "zxdh_dev_add"); + + rt = zxdh_np_dev_agent_status_set(dev_id, agent_flag); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rt, "zxdh_dev_agent_status_set"); + + dev_id_array[0] = dev_id; + rt = zxdh_np_sdt_init(1, dev_id_array); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rt, "zxdh_sdt_init"); + + zxdh_np_ppu_parse_cls_bitmap(dev_id, ZXDH_PPU_CLS_ALL_START); + + rt = zxdh_np_dtb_soft_init(dev_id); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rt, "zxdh_dtb_soft_init"); + + return rt; +} + +static void +zxdh_np_dev_vport_set(uint32_t dev_id, uint32_t vport) +{ + ZXDH_DEV_MGR_T *p_dev_mgr = NULL; + ZXDH_DEV_CFG_T *p_dev_info = NULL; + + p_dev_mgr = &g_dev_mgr; + p_dev_info = p_dev_mgr->p_dev_array[dev_id]; + p_dev_info->vport = vport; +} + +static void +zxdh_np_dev_agent_addr_set(uint32_t dev_id, uint64_t agent_addr) +{ + ZXDH_DEV_MGR_T *p_dev_mgr = NULL; + ZXDH_DEV_CFG_T *p_dev_info = NULL; + + p_dev_mgr = &g_dev_mgr; + p_dev_info = p_dev_mgr->p_dev_array[dev_id]; + p_dev_info->agent_addr = agent_addr; +} + +static uint64_t +zxdh_np_addr_calc(uint64_t pcie_vir_baddr, uint32_t bar_offset) +{ + uint64_t np_addr; + + np_addr = ((pcie_vir_baddr + bar_offset) > ZXDH_PCIE_NP_MEM_SIZE) + ? 
(pcie_vir_baddr + bar_offset - ZXDH_PCIE_NP_MEM_SIZE) : 0; + g_np_bar_offset = bar_offset; + + return np_addr; +} + +int +zxdh_np_host_init(uint32_t dev_id, + ZXDH_DEV_INIT_CTRL_T *p_dev_init_ctrl) +{ + ZXDH_SYS_INIT_CTRL_T sys_init_ctrl = {0}; + uint32_t rc; + uint64_t agent_addr; + + ZXDH_COMM_CHECK_POINT_NO_ASSERT(p_dev_init_ctrl); + + sys_init_ctrl.flags = (ZXDH_DEV_ACCESS_TYPE_PCIE << 0) | (ZXDH_DEV_AGENT_ENABLE << 10); + sys_init_ctrl.pcie_vir_baddr = zxdh_np_addr_calc(p_dev_init_ctrl->pcie_vir_addr, + p_dev_init_ctrl->np_bar_offset); + sys_init_ctrl.device_type = ZXDH_DEV_TYPE_CHIP; + rc = zxdh_np_base_soft_init(dev_id, &sys_init_ctrl); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_base_soft_init"); + + zxdh_np_dev_vport_set(dev_id, p_dev_init_ctrl->vport); + + agent_addr = ZXDH_PCIE_AGENT_ADDR_OFFSET + p_dev_init_ctrl->pcie_vir_addr; + zxdh_np_dev_agent_addr_set(dev_id, agent_addr); + + return 0; +} diff --git a/drivers/net/zxdh/zxdh_np.h b/drivers/net/zxdh/zxdh_np.h new file mode 100644 index 0000000000..573eafe796 --- /dev/null +++ b/drivers/net/zxdh/zxdh_np.h @@ -0,0 +1,198 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2023 ZTE Corporation + */ + +#ifndef ZXDH_NP_H +#define ZXDH_NP_H + +#include <stdint.h> + +#define ZXDH_PORT_NAME_MAX (32) +#define ZXDH_DEV_CHANNEL_MAX (2) +#define ZXDH_DEV_SDT_ID_MAX (256U) +/*DTB*/ +#define ZXDH_DTB_QUEUE_ITEM_NUM_MAX (32) +#define ZXDH_DTB_QUEUE_NUM_MAX (128) + +#define ZXDH_PPU_CLS_ALL_START (0x3F) +#define ZXDH_PPU_CLUSTER_NUM (6) +#define ZXDH_PPU_INSTR_MEM_NUM (3) +#define ZXDH_SDT_CFG_LEN (2) + +#define ZXDH_RC_DEV_BASE (0x600) +#define ZXDH_RC_DEV_PARA_INVALID (ZXDH_RC_DEV_BASE | 0x0) +#define ZXDH_RC_DEV_RANGE_INVALID (ZXDH_RC_DEV_BASE | 0x1) +#define ZXDH_RC_DEV_CALL_FUNC_FAIL (ZXDH_RC_DEV_BASE | 0x2) +#define ZXDH_RC_DEV_TYPE_INVALID (ZXDH_RC_DEV_BASE | 0x3) +#define ZXDH_RC_DEV_CONNECT_FAIL (ZXDH_RC_DEV_BASE | 0x4) +#define ZXDH_RC_DEV_MSG_INVALID (ZXDH_RC_DEV_BASE | 0x5) +#define ZXDH_RC_DEV_NOT_EXIST (ZXDH_RC_DEV_BASE | 0x6) +#define ZXDH_RC_DEV_MGR_NOT_INIT (ZXDH_RC_DEV_BASE | 0x7) +#define ZXDH_RC_DEV_CFG_NOT_INIT (ZXDH_RC_DEV_BASE | 0x8) + +#define ZXDH_SYS_VF_NP_BASE_OFFSET 0 +#define ZXDH_PCIE_DTB4K_ADDR_OFFSET (0x6000) +#define ZXDH_PCIE_NP_MEM_SIZE (0x2000000) +#define ZXDH_PCIE_AGENT_ADDR_OFFSET (0x2000) + +#define ZXDH_INIT_FLAG_ACCESS_TYPE (1 << 0) +#define ZXDH_INIT_FLAG_SERDES_DOWN_TP (1 << 1) +#define ZXDH_INIT_FLAG_DDR_BACKDOOR (1 << 2) +#define ZXDH_INIT_FLAG_SA_MODE (1 << 3) +#define ZXDH_INIT_FLAG_SA_MESH (1 << 4) +#define ZXDH_INIT_FLAG_SA_SERDES_MODE (1 << 5) +#define ZXDH_INIT_FLAG_INT_DEST_MODE (1 << 6) +#define ZXDH_INIT_FLAG_LIF0_MODE (1 << 7) +#define ZXDH_INIT_FLAG_DMA_ENABLE (1 << 8) +#define ZXDH_INIT_FLAG_TM_IMEM_FLAG (1 << 9) +#define ZXDH_INIT_FLAG_AGENT_FLAG (1 << 10) + +typedef enum zxdh_module_init_e { + ZXDH_MODULE_INIT_NPPU = 0, + ZXDH_MODULE_INIT_PPU, + ZXDH_MODULE_INIT_SE, + ZXDH_MODULE_INIT_ETM, + ZXDH_MODULE_INIT_DLB, + ZXDH_MODULE_INIT_TRPG, + ZXDH_MODULE_INIT_TSN, + ZXDH_MODULE_INIT_MAX +} ZXDH_MODULE_INIT_E; + +typedef enum zxdh_dev_type_e { + ZXDH_DEV_TYPE_SIM = 0, + ZXDH_DEV_TYPE_VCS = 1, + ZXDH_DEV_TYPE_CHIP = 2, + ZXDH_DEV_TYPE_FPGA = 3, + ZXDH_DEV_TYPE_PCIE_ACC = 4, + ZXDH_DEV_TYPE_INVALID, +} ZXDH_DEV_TYPE_E; + +typedef enum zxdh_dev_access_type_e { + ZXDH_DEV_ACCESS_TYPE_PCIE = 0, + ZXDH_DEV_ACCESS_TYPE_RISCV = 1, + ZXDH_DEV_ACCESS_TYPE_INVALID, +} ZXDH_DEV_ACCESS_TYPE_E; + +typedef enum zxdh_dev_agent_flag_e { + ZXDH_DEV_AGENT_DISABLE = 0, + ZXDH_DEV_AGENT_ENABLE = 1, + 
ZXDH_DEV_AGENT_INVALID, +} ZXDH_DEV_AGENT_FLAG_E; + +typedef struct zxdh_dtb_tab_up_user_addr_t { + uint32_t user_flag; + uint64_t phy_addr; + uint64_t vir_addr; +} ZXDH_DTB_TAB_UP_USER_ADDR_T; + +typedef struct zxdh_dtb_tab_up_info_t { + uint64_t start_phy_addr; + uint64_t start_vir_addr; + uint32_t item_size; + uint32_t wr_index; + uint32_t rd_index; + uint32_t data_len[ZXDH_DTB_QUEUE_ITEM_NUM_MAX]; + ZXDH_DTB_TAB_UP_USER_ADDR_T user_addr[ZXDH_DTB_QUEUE_ITEM_NUM_MAX]; +} ZXDH_DTB_TAB_UP_INFO_T; + +typedef struct zxdh_dtb_tab_down_info_t { + uint64_t start_phy_addr; + uint64_t start_vir_addr; + uint32_t item_size; + uint32_t wr_index; + uint32_t rd_index; +} ZXDH_DTB_TAB_DOWN_INFO_T; + +typedef struct zxdh_dtb_queue_info_t { + uint32_t init_flag; + uint32_t vport; + uint32_t vector; + ZXDH_DTB_TAB_UP_INFO_T tab_up; + ZXDH_DTB_TAB_DOWN_INFO_T tab_down; +} ZXDH_DTB_QUEUE_INFO_T; + +typedef struct zxdh_dtb_mgr_t { + ZXDH_DTB_QUEUE_INFO_T queue_info[ZXDH_DTB_QUEUE_NUM_MAX]; +} ZXDH_DTB_MGR_T; + +typedef struct zxdh_ppu_cls_bitmap_t { + uint32_t cls_use[ZXDH_PPU_CLUSTER_NUM]; + uint32_t instr_mem[ZXDH_PPU_INSTR_MEM_NUM]; +} ZXDH_PPU_CLS_BITMAP_T; + +typedef struct dpp_sdt_item_t { + uint32_t valid; + uint32_t table_cfg[ZXDH_SDT_CFG_LEN]; +} ZXDH_SDT_ITEM_T; + +typedef struct dpp_sdt_soft_table_t { + uint32_t device_id; + ZXDH_SDT_ITEM_T sdt_array[ZXDH_DEV_SDT_ID_MAX]; +} ZXDH_SDT_SOFT_TABLE_T; + +typedef struct zxdh_sys_init_ctrl_t { + ZXDH_DEV_TYPE_E device_type; + uint32_t flags; + uint32_t sa_id; + uint32_t case_num; + uint32_t lif0_port_type; + uint32_t lif1_port_type; + uint64_t pcie_vir_baddr; + uint64_t riscv_vir_baddr; + uint64_t dma_vir_baddr; + uint64_t dma_phy_baddr; +} ZXDH_SYS_INIT_CTRL_T; + +typedef struct dpp_dev_cfg_t { + uint32_t device_id; + ZXDH_DEV_TYPE_E dev_type; + uint32_t chip_ver; + uint32_t access_type; + uint32_t agent_flag; + uint32_t vport; + uint64_t pcie_addr; + uint64_t riscv_addr; + uint64_t dma_vir_addr; + uint64_t dma_phy_addr; + uint64_t agent_addr; + uint32_t init_flags[ZXDH_MODULE_INIT_MAX]; +} ZXDH_DEV_CFG_T; + +typedef struct zxdh_dev_mngr_t { + uint32_t device_num; + uint32_t is_init; + ZXDH_DEV_CFG_T *p_dev_array[ZXDH_DEV_CHANNEL_MAX]; +} ZXDH_DEV_MGR_T; + +typedef struct zxdh_dtb_addr_info_t { + uint32_t sdt_no; + uint32_t size; + uint32_t phy_addr; + uint32_t vir_addr; +} ZXDH_DTB_ADDR_INFO_T; + +typedef struct zxdh_dev_init_ctrl_t { + uint32_t vport; + char port_name[ZXDH_PORT_NAME_MAX]; + uint32_t vector; + uint32_t queue_id; + uint32_t np_bar_offset; + uint32_t np_bar_len; + uint32_t pcie_vir_addr; + uint32_t down_phy_addr; + uint32_t down_vir_addr; + uint32_t dump_phy_addr; + uint32_t dump_vir_addr; + uint32_t dump_sdt_num; + ZXDH_DTB_ADDR_INFO_T dump_addr_info[]; +} ZXDH_DEV_INIT_CTRL_T; + +typedef struct zxdh_sdt_mgr_t { + uint32_t channel_num; + uint32_t is_init; + ZXDH_SDT_SOFT_TABLE_T *sdt_tbl_array[ZXDH_DEV_CHANNEL_MAX]; +} ZXDH_SDT_MGR_T; + +int zxdh_np_host_init(uint32_t dev_id, ZXDH_DEV_INIT_CTRL_T *p_dev_init_ctrl); + +#endif /* ZXDH_NP_H */ diff --git a/drivers/net/zxdh/zxdh_pci.c b/drivers/net/zxdh/zxdh_pci.c index 06d3f92b20..250e67d560 100644 --- a/drivers/net/zxdh/zxdh_pci.c +++ b/drivers/net/zxdh/zxdh_pci.c @@ -159,7 +159,7 @@ zxdh_setup_queue(struct zxdh_hw *hw, struct zxdh_virtqueue *vq) desc_addr = vq->vq_ring_mem; avail_addr = desc_addr + vq->vq_nentries * sizeof(struct zxdh_vring_desc); - if (vtpci_packed_queue(vq->hw)) { + if (zxdh_pci_packed_queue(vq->hw)) { used_addr = RTE_ALIGN_CEIL((avail_addr + sizeof(struct 
zxdh_vring_packed_desc_event)), ZXDH_PCI_VRING_ALIGN); diff --git a/drivers/net/zxdh/zxdh_pci.h b/drivers/net/zxdh/zxdh_pci.h index ed6fd89742..d6487a574f 100644 --- a/drivers/net/zxdh/zxdh_pci.h +++ b/drivers/net/zxdh/zxdh_pci.h @@ -114,15 +114,15 @@ struct zxdh_pci_common_cfg { }; static inline int32_t -vtpci_with_feature(struct zxdh_hw *hw, uint64_t bit) +zxdh_pci_with_feature(struct zxdh_hw *hw, uint64_t bit) { return (hw->guest_features & (1ULL << bit)) != 0; } static inline int32_t -vtpci_packed_queue(struct zxdh_hw *hw) +zxdh_pci_packed_queue(struct zxdh_hw *hw) { - return vtpci_with_feature(hw, ZXDH_F_RING_PACKED); + return zxdh_pci_with_feature(hw, ZXDH_F_RING_PACKED); } struct zxdh_pci_ops { diff --git a/drivers/net/zxdh/zxdh_queue.c b/drivers/net/zxdh/zxdh_queue.c index 462a88b23c..b4ef90ea36 100644 --- a/drivers/net/zxdh/zxdh_queue.c +++ b/drivers/net/zxdh/zxdh_queue.c @@ -13,7 +13,7 @@ #include "zxdh_msg.h" struct rte_mbuf * -zxdh_virtqueue_detach_unused(struct zxdh_virtqueue *vq) +zxdh_queue_detach_unused(struct zxdh_virtqueue *vq) { struct rte_mbuf *cookie = NULL; int32_t idx = 0; diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h index 1088bf08fc..1304d5e4ea 100644 --- a/drivers/net/zxdh/zxdh_queue.h +++ b/drivers/net/zxdh/zxdh_queue.h @@ -206,11 +206,11 @@ struct zxdh_tx_region { }; static inline size_t -vring_size(struct zxdh_hw *hw, uint32_t num, unsigned long align) +zxdh_vring_size(struct zxdh_hw *hw, uint32_t num, unsigned long align) { size_t size; - if (vtpci_packed_queue(hw)) { + if (zxdh_pci_packed_queue(hw)) { size = num * sizeof(struct zxdh_vring_packed_desc); size += sizeof(struct zxdh_vring_packed_desc_event); size = RTE_ALIGN_CEIL(size, align); @@ -226,7 +226,7 @@ vring_size(struct zxdh_hw *hw, uint32_t num, unsigned long align) } static inline void -vring_init_packed(struct zxdh_vring_packed *vr, uint8_t *p, +zxdh_vring_init_packed(struct zxdh_vring_packed *vr, uint8_t *p, unsigned long align, uint32_t num) { vr->num = num; @@ -238,7 +238,7 @@ vring_init_packed(struct zxdh_vring_packed *vr, uint8_t *p, } static inline void -vring_desc_init_packed(struct zxdh_virtqueue *vq, int32_t n) +zxdh_vring_desc_init_packed(struct zxdh_virtqueue *vq, int32_t n) { int32_t i = 0; @@ -251,7 +251,7 @@ vring_desc_init_packed(struct zxdh_virtqueue *vq, int32_t n) } static inline void -vring_desc_init_indirect_packed(struct zxdh_vring_packed_desc *dp, int32_t n) +zxdh_vring_desc_init_indirect_packed(struct zxdh_vring_packed_desc *dp, int32_t n) { int32_t i = 0; @@ -262,7 +262,7 @@ vring_desc_init_indirect_packed(struct zxdh_vring_packed_desc *dp, int32_t n) } static inline void -virtqueue_disable_intr(struct zxdh_virtqueue *vq) +zxdh_queue_disable_intr(struct zxdh_virtqueue *vq) { if (vq->vq_packed.event_flags_shadow != ZXDH_RING_EVENT_FLAGS_DISABLE) { vq->vq_packed.event_flags_shadow = ZXDH_RING_EVENT_FLAGS_DISABLE; @@ -270,7 +270,7 @@ virtqueue_disable_intr(struct zxdh_virtqueue *vq) } } -struct rte_mbuf *zxdh_virtqueue_detach_unused(struct zxdh_virtqueue *vq); +struct rte_mbuf *zxdh_queue_detach_unused(struct zxdh_virtqueue *vq); int32_t zxdh_free_queues(struct rte_eth_dev *dev); int32_t zxdh_get_queue_type(uint16_t vtpci_queue_idx); -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 79562 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
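The zxdh_init_shared_data()/zxdh_init_once() pair in the patch above follows the usual DPDK primary/secondary process idiom: the primary process reserves a named memzone and initializes the shared state in it, while secondary processes only look the memzone up and attach. A minimal stand-alone sketch of that idiom follows; the memzone name, the demo_shared struct, and the demo_shared_attach() helper are illustrative placeholders, not part of the driver:

#include <string.h>
#include <rte_eal.h>
#include <rte_memzone.h>
#include <rte_spinlock.h>

struct demo_shared {                    /* illustrative shared state */
	rte_spinlock_t lock;
	uint32_t init_done;
};

/* Hypothetical helper: create the shared state in the primary process,
 * attach to it in secondary processes.
 */
static struct demo_shared *
demo_shared_attach(void)
{
	const struct rte_memzone *mz;

	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
		mz = rte_memzone_reserve("demo_shared_mz",
				sizeof(struct demo_shared), SOCKET_ID_ANY, 0);
		if (mz == NULL)
			return NULL;    /* rte_errno holds the cause */
		memset(mz->addr, 0, sizeof(struct demo_shared));
		rte_spinlock_init(&((struct demo_shared *)mz->addr)->lock);
	} else {
		mz = rte_memzone_lookup("demo_shared_mz");
		if (mz == NULL)
			return NULL;    /* primary has not created it yet */
	}
	return mz->addr;
}

The driver wraps the same sequence in a process-local spinlock (zxdh_shared_data_lock) so that two ports probing concurrently in one process cannot both take the reserve branch.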
* [PATCH v5 02/15] net/zxdh: zxdh np uninit implementation 2024-12-23 11:02 ` [PATCH v5 00/15] net/zxdh: updated net zxdh driver Junlong Wang 2024-12-23 11:02 ` [PATCH v5 01/15] net/zxdh: zxdh np init implementation Junlong Wang @ 2024-12-23 11:02 ` Junlong Wang 2024-12-23 11:02 ` [PATCH v5 03/15] net/zxdh: port tables init implementations Junlong Wang ` (14 subsequent siblings) 16 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-23 11:02 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 19520 bytes --] Release network processor (NP) resources in the host. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/zxdh_ethdev.c | 48 ++++ drivers/net/zxdh/zxdh_np.c | 470 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_np.h | 107 ++++++++ 3 files changed, 625 insertions(+) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index b8f4415e00..4e114d95da 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -841,6 +841,51 @@ zxdh_dev_configure(struct rte_eth_dev *dev) return ret; } +static void +zxdh_np_dtb_data_res_free(struct zxdh_hw *hw) +{ + struct rte_eth_dev *dev = hw->eth_dev; + int ret; + int i; + + if (g_dtb_data.init_done && g_dtb_data.bind_device == dev) { + ret = zxdh_np_online_uninit(0, dev->data->name, g_dtb_data.queueid); + if (ret) + PMD_DRV_LOG(ERR, "%s zxdh_np_online_uninit failed", dev->data->name); + + if (g_dtb_data.dtb_table_conf_mz) + rte_memzone_free(g_dtb_data.dtb_table_conf_mz); + + if (g_dtb_data.dtb_table_dump_mz) { + rte_memzone_free(g_dtb_data.dtb_table_dump_mz); + g_dtb_data.dtb_table_dump_mz = NULL; + } + + for (i = 0; i < ZXDH_MAX_BASE_DTB_TABLE_COUNT; i++) { + if (g_dtb_data.dtb_table_bulk_dump_mz[i]) { + rte_memzone_free(g_dtb_data.dtb_table_bulk_dump_mz[i]); + g_dtb_data.dtb_table_bulk_dump_mz[i] = NULL; + } + } + g_dtb_data.init_done = 0; + g_dtb_data.bind_device = NULL; + } + if (zxdh_shared_data != NULL) + zxdh_shared_data->np_init_done = 0; +} + +static void +zxdh_np_uninit(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + if (!g_dtb_data.init_done && !g_dtb_data.dev_refcnt) + return; + + if (--g_dtb_data.dev_refcnt == 0) + zxdh_np_dtb_data_res_free(hw); +} + static int zxdh_dev_close(struct rte_eth_dev *dev) { @@ -848,6 +893,7 @@ zxdh_dev_close(struct rte_eth_dev *dev) int ret = 0; zxdh_intr_release(dev); + zxdh_np_uninit(dev); zxdh_pci_reset(hw); zxdh_dev_free_mbufs(dev); @@ -1010,6 +1056,7 @@ zxdh_np_dtb_res_init(struct rte_eth_dev *dev) return 0; free_res: + zxdh_np_dtb_data_res_free(hw); rte_free(dpp_ctrl); return ret; } @@ -1177,6 +1224,7 @@ zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) err_zxdh_init: zxdh_intr_release(eth_dev); + zxdh_np_uninit(eth_dev); zxdh_bar_msg_chan_exit(); rte_free(eth_dev->data->mac_addrs); eth_dev->data->mac_addrs = NULL; diff --git a/drivers/net/zxdh/zxdh_np.c b/drivers/net/zxdh/zxdh_np.c index e44d7ff501..28728b0c68 100644 --- a/drivers/net/zxdh/zxdh_np.c +++ b/drivers/net/zxdh/zxdh_np.c @@ -18,10 +18,21 @@ static ZXDH_DEV_MGR_T g_dev_mgr; static ZXDH_SDT_MGR_T g_sdt_mgr; ZXDH_PPU_CLS_BITMAP_T g_ppu_cls_bit_map[ZXDH_DEV_CHANNEL_MAX]; ZXDH_DTB_MGR_T *p_dpp_dtb_mgr[ZXDH_DEV_CHANNEL_MAX]; +ZXDH_RISCV_DTB_MGR *p_riscv_dtb_queue_mgr[ZXDH_DEV_CHANNEL_MAX]; +ZXDH_TLB_MGR_T *g_p_dpp_tlb_mgr[ZXDH_DEV_CHANNEL_MAX]; +ZXDH_REG_T g_dpp_reg_info[4]; #define ZXDH_SDT_MGR_PTR_GET() (&g_sdt_mgr) #define ZXDH_SDT_SOFT_TBL_GET(id)
(g_sdt_mgr.sdt_tbl_array[id]) +#define ZXDH_COMM_MASK_BIT(_bitnum_)\ + (0x1U << (_bitnum_)) + +#define ZXDH_COMM_GET_BIT_MASK(_inttype_, _bitqnt_)\ + ((_inttype_)(((_bitqnt_) < 32))) + +#define ZXDH_REG_DATA_MAX (128) + #define ZXDH_COMM_CHECK_DEV_POINT(dev_id, point)\ do {\ if (NULL == (point)) {\ @@ -338,3 +349,462 @@ zxdh_np_host_init(uint32_t dev_id, return 0; } + +static ZXDH_RISCV_DTB_MGR * +zxdh_np_riscv_dtb_queue_mgr_get(uint32_t dev_id) +{ + if (dev_id >= ZXDH_DEV_CHANNEL_MAX) + return NULL; + else + return p_riscv_dtb_queue_mgr[dev_id]; +} + +static uint32_t +zxdh_np_riscv_dtb_mgr_queue_info_delete(uint32_t dev_id, uint32_t queue_id) +{ + ZXDH_RISCV_DTB_MGR *p_riscv_dtb_mgr = NULL; + + p_riscv_dtb_mgr = zxdh_np_riscv_dtb_queue_mgr_get(dev_id); + if (p_riscv_dtb_mgr == NULL) + return 1; + + p_riscv_dtb_mgr->queue_alloc_count--; + p_riscv_dtb_mgr->queue_user_info[queue_id].alloc_flag = 0; + p_riscv_dtb_mgr->queue_user_info[queue_id].queue_id = 0xFF; + p_riscv_dtb_mgr->queue_user_info[queue_id].vport = 0; + memset(p_riscv_dtb_mgr->queue_user_info[queue_id].user_name, 0, ZXDH_PORT_NAME_MAX); + + return 0; +} + +static uint32_t +zxdh_np_dev_get_dev_type(uint32_t dev_id) +{ + ZXDH_DEV_MGR_T *p_dev_mgr = NULL; + ZXDH_DEV_CFG_T *p_dev_info = NULL; + + p_dev_mgr = &g_dev_mgr; + p_dev_info = p_dev_mgr->p_dev_array[dev_id]; + + if (p_dev_info == NULL) + return 0xffff; + + return p_dev_info->dev_type; +} + +static uint32_t +zxdh_np_comm_read_bits(uint8_t *p_base, uint32_t base_size_bit, + uint32_t *p_data, uint32_t start_bit, uint32_t end_bit) +{ + uint32_t start_byte_index; + uint32_t end_byte_index; + uint32_t byte_num; + uint32_t buffer_size; + uint32_t len; + + if (0 != (base_size_bit % 8)) + return 1; + + if (start_bit > end_bit) + return 1; + + if (base_size_bit < end_bit) + return 1; + + len = end_bit - start_bit + 1; + buffer_size = base_size_bit / 8; + while (0 != (buffer_size & (buffer_size - 1))) + buffer_size += 1; + + *p_data = 0; + end_byte_index = (end_bit >> 3); + start_byte_index = (start_bit >> 3); + + if (start_byte_index == end_byte_index) { + *p_data = (uint32_t)(((p_base[start_byte_index] >> (7U - (end_bit & 7))) + & (0xff >> (8U - len))) & 0xff); + return 0; + } + + if (start_bit & 7) { + *p_data = (p_base[start_byte_index] & (0xff >> (start_bit & 7))) & UINT8_MAX; + start_byte_index++; + } + + for (byte_num = start_byte_index; byte_num < end_byte_index; byte_num++) { + *p_data <<= 8; + *p_data += p_base[byte_num]; + } + + *p_data <<= 1 + (end_bit & 7); + *p_data += ((p_base[byte_num & (buffer_size - 1)] & (0xff << (7 - (end_bit & 7)))) >> + (7 - (end_bit & 7))) & 0xff; + + return 0; +} + +static uint32_t +zxdh_np_comm_read_bits_ex(uint8_t *p_base, uint32_t base_size_bit, + uint32_t *p_data, uint32_t msb_start_pos, uint32_t len) +{ + uint32_t rtn; + + rtn = zxdh_np_comm_read_bits(p_base, + base_size_bit, + p_data, + (base_size_bit - 1 - msb_start_pos), + (base_size_bit - 1 - msb_start_pos + len - 1)); + return rtn; +} + +static uint32_t +zxdh_np_reg_read(uint32_t dev_id, uint32_t reg_no, + uint32_t m_offset, uint32_t n_offset, void *p_data) +{ + uint32_t p_buff[ZXDH_REG_DATA_MAX] = {0}; + ZXDH_REG_T *p_reg_info = NULL; + ZXDH_FIELD_T *p_field_info = NULL; + uint32_t rc = 0; + uint32_t i; + + if (reg_no < 4) { + p_reg_info = &g_dpp_reg_info[reg_no]; + p_field_info = p_reg_info->p_fields; + for (i = 0; i < p_reg_info->field_num; i++) { + rc = zxdh_np_comm_read_bits_ex((uint8_t *)p_buff, + p_reg_info->width * 8, + (uint32_t *)p_data + i, + p_field_info[i].msb_pos, + 
p_field_info[i].len); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxic_comm_read_bits_ex"); + PMD_DRV_LOG(ERR, "dev_id %d(%d)(%d)is ok!", dev_id, m_offset, n_offset); + } + } + return rc; +} + +static uint32_t +zxdh_np_dtb_queue_vm_info_get(uint32_t dev_id, + uint32_t queue_id, + ZXDH_DTB_QUEUE_VM_INFO_T *p_vm_info) +{ + ZXDH_DTB4K_DTB_ENQ_CFG_EPID_V_FUNC_NUM_0_127_T vm_info = {0}; + uint32_t rc; + + rc = zxdh_np_reg_read(dev_id, ZXDH_DTB_CFG_EPID_V_FUNC_NUM, + 0, queue_id, &vm_info); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rc, "zxdh_reg_read"); + + p_vm_info->dbi_en = vm_info.dbi_en; + p_vm_info->queue_en = vm_info.queue_en; + p_vm_info->epid = vm_info.cfg_epid; + p_vm_info->vector = vm_info.cfg_vector; + p_vm_info->vfunc_num = vm_info.cfg_vfunc_num; + p_vm_info->func_num = vm_info.cfg_func_num; + p_vm_info->vfunc_active = vm_info.cfg_vfunc_active; + + return 0; +} + +static uint32_t +zxdh_np_comm_write_bits(uint8_t *p_base, uint32_t base_size_bit, + uint32_t data, uint32_t start_bit, uint32_t end_bit) +{ + uint32_t start_byte_index; + uint32_t end_byte_index; + uint8_t mask_value; + uint32_t byte_num; + uint32_t buffer_size; + + if (0 != (base_size_bit % 8)) + return 1; + + if (start_bit > end_bit) + return 1; + + if (base_size_bit < end_bit) + return 1; + + buffer_size = base_size_bit / 8; + + while (0 != (buffer_size & (buffer_size - 1))) + buffer_size += 1; + + end_byte_index = (end_bit >> 3); + start_byte_index = (start_bit >> 3); + + if (start_byte_index == end_byte_index) { + mask_value = ((0xFE << (7 - (start_bit & 7))) & 0xff); + mask_value |= (((1 << (7 - (end_bit & 7))) - 1) & 0xff); + p_base[end_byte_index] &= mask_value; + p_base[end_byte_index] |= (((data << (7 - (end_bit & 7)))) & 0xff); + return 0; + } + + if (7 != (end_bit & 7)) { + mask_value = ((0x7f >> (end_bit & 7)) & 0xff); + p_base[end_byte_index] &= mask_value; + p_base[end_byte_index] |= ((data << (7 - (end_bit & 7))) & 0xff); + end_byte_index--; + data >>= 1 + (end_bit & 7); + } + + for (byte_num = end_byte_index; byte_num > start_byte_index; byte_num--) { + p_base[byte_num & (buffer_size - 1)] = data & 0xff; + data >>= 8; + } + + mask_value = ((0xFE << (7 - (start_bit & 7))) & 0xff); + p_base[byte_num] &= mask_value; + p_base[byte_num] |= data; + + return 0; +} + +static uint32_t +zxdh_np_comm_write_bits_ex(uint8_t *p_base, + uint32_t base_size_bit, + uint32_t data, + uint32_t msb_start_pos, + uint32_t len) +{ + uint32_t rtn; + + rtn = zxdh_np_comm_write_bits(p_base, + base_size_bit, + data, + (base_size_bit - 1 - msb_start_pos), + (base_size_bit - 1 - msb_start_pos + len - 1)); + + return rtn; +} + +static uint32_t +zxdh_np_reg_write(uint32_t dev_id, uint32_t reg_no, + uint32_t m_offset, uint32_t n_offset, void *p_data) +{ + uint32_t p_buff[ZXDH_REG_DATA_MAX] = {0}; + ZXDH_REG_T *p_reg_info = NULL; + ZXDH_FIELD_T *p_field_info = NULL; + uint32_t temp_data; + uint32_t rc; + uint32_t i; + + if (reg_no < 4) { + p_reg_info = &g_dpp_reg_info[reg_no]; + p_field_info = p_reg_info->p_fields; + + for (i = 0; i < p_reg_info->field_num; i++) { + if (p_field_info[i].len <= 32) { + temp_data = *((uint32_t *)p_data + i); + rc = zxdh_np_comm_write_bits_ex((uint8_t *)p_buff, + p_reg_info->width * 8, + temp_data, + p_field_info[i].msb_pos, + p_field_info[i].len); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_comm_write_bits_ex"); + PMD_DRV_LOG(ERR, "dev_id %d(%d)(%d)is ok!", + dev_id, m_offset, n_offset); + } + } + } + + return 0; +} + +static uint32_t +zxdh_np_dtb_queue_vm_info_set(uint32_t dev_id, + uint32_t queue_id, + ZXDH_DTB_QUEUE_VM_INFO_T 
*p_vm_info) +{ + uint32_t rc = 0; + ZXDH_DTB4K_DTB_ENQ_CFG_EPID_V_FUNC_NUM_0_127_T vm_info = {0}; + + vm_info.dbi_en = p_vm_info->dbi_en; + vm_info.queue_en = p_vm_info->queue_en; + vm_info.cfg_epid = p_vm_info->epid; + vm_info.cfg_vector = p_vm_info->vector; + vm_info.cfg_vfunc_num = p_vm_info->vfunc_num; + vm_info.cfg_func_num = p_vm_info->func_num; + vm_info.cfg_vfunc_active = p_vm_info->vfunc_active; + + rc = zxdh_np_reg_write(dev_id, ZXDH_DTB_CFG_EPID_V_FUNC_NUM, + 0, queue_id, &vm_info); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rc, "zxdh_reg_write"); + + return rc; +} + +static uint32_t +zxdh_np_dtb_queue_enable_set(uint32_t dev_id, + uint32_t queue_id, + uint32_t enable) +{ + ZXDH_DTB_QUEUE_VM_INFO_T vm_info = {0}; + uint32_t rc; + + rc = zxdh_np_dtb_queue_vm_info_get(dev_id, queue_id, &vm_info); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_dtb_queue_vm_info_get"); + + vm_info.queue_en = enable; + rc = zxdh_np_dtb_queue_vm_info_set(dev_id, queue_id, &vm_info); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_dtb_queue_vm_info_set"); + + return rc; +} + +static uint32_t +zxdh_np_riscv_dpp_dtb_queue_id_release(uint32_t dev_id, + char name[ZXDH_PORT_NAME_MAX], uint32_t queue_id) +{ + ZXDH_RISCV_DTB_MGR *p_riscv_dtb_mgr = NULL; + + p_riscv_dtb_mgr = zxdh_np_riscv_dtb_queue_mgr_get(dev_id); + if (p_riscv_dtb_mgr == NULL) + return 1; + + if (zxdh_np_dev_get_dev_type(dev_id) == ZXDH_DEV_TYPE_SIM) + return 0; + + if (p_riscv_dtb_mgr->queue_user_info[queue_id].alloc_flag != 1) { + PMD_DRV_LOG(ERR, "queue %d not alloc!", queue_id); + return 2; + } + + if (strcmp(p_riscv_dtb_mgr->queue_user_info[queue_id].user_name, name) != 0) { + PMD_DRV_LOG(ERR, "queue %d name %s error!", queue_id, name); + return 3; + } + zxdh_np_dtb_queue_enable_set(dev_id, queue_id, 0); + zxdh_np_riscv_dtb_mgr_queue_info_delete(dev_id, queue_id); + + return 0; +} + +static uint32_t +zxdh_np_dtb_queue_unused_item_num_get(uint32_t dev_id, + uint32_t queue_id, + uint32_t *p_item_num) +{ + uint32_t rc; + + if (zxdh_np_dev_get_dev_type(dev_id) == ZXDH_DEV_TYPE_SIM) { + *p_item_num = 32; + return 0; + } + + rc = zxdh_np_reg_read(dev_id, ZXDH_DTB_INFO_QUEUE_BUF_SPACE, + 0, queue_id, p_item_num); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rc, "dpp_reg_read"); + return rc; +} + +static uint32_t +zxdh_np_dtb_queue_id_free(uint32_t dev_id, + uint32_t queue_id) +{ + uint32_t item_num = 0; + ZXDH_DTB_MGR_T *p_dtb_mgr = NULL; + uint32_t rc; + + p_dtb_mgr = p_dpp_dtb_mgr[dev_id]; + if (p_dtb_mgr == NULL) + return 1; + + rc = zxdh_np_dtb_queue_unused_item_num_get(dev_id, queue_id, &item_num); + + p_dtb_mgr->queue_info[queue_id].init_flag = 0; + p_dtb_mgr->queue_info[queue_id].vport = 0; + p_dtb_mgr->queue_info[queue_id].vector = 0; + + return rc; +} + +static uint32_t +zxdh_np_dtb_queue_release(uint32_t devid, + char pname[32], + uint32_t queueid) +{ + uint32_t rc; + + ZXDH_COMM_CHECK_DEV_POINT(devid, pname); + + rc = zxdh_np_riscv_dpp_dtb_queue_id_release(devid, pname, queueid); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_riscv_dpp_dtb_queue_id_release"); + + rc = zxdh_np_dtb_queue_id_free(devid, queueid); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_dtb_queue_id_free"); + + return rc; +} + +static void +zxdh_np_dtb_mgr_destroy(uint32_t dev_id) +{ + if (p_dpp_dtb_mgr[dev_id] != NULL) { + free(p_dpp_dtb_mgr[dev_id]); + p_dpp_dtb_mgr[dev_id] = NULL; + } +} + +static void +zxdh_np_tlb_mgr_destroy(uint32_t dev_id) +{ + if (g_p_dpp_tlb_mgr[dev_id] != NULL) { + free(g_p_dpp_tlb_mgr[dev_id]); + g_p_dpp_tlb_mgr[dev_id] = NULL; + } +} + +static void 
+zxdh_np_sdt_mgr_destroy(uint32_t dev_id) +{ + ZXDH_SDT_SOFT_TABLE_T *p_sdt_tbl_temp = NULL; + ZXDH_SDT_MGR_T *p_sdt_mgr = NULL; + + p_sdt_tbl_temp = ZXDH_SDT_SOFT_TBL_GET(dev_id); + p_sdt_mgr = ZXDH_SDT_MGR_PTR_GET(); + + if (p_sdt_tbl_temp != NULL) + free(p_sdt_tbl_temp); + + ZXDH_SDT_SOFT_TBL_GET(dev_id) = NULL; + + p_sdt_mgr->channel_num--; +} + +static void +zxdh_np_dev_del(uint32_t dev_id) +{ + ZXDH_DEV_CFG_T *p_dev_info = NULL; + ZXDH_DEV_MGR_T *p_dev_mgr = NULL; + + p_dev_mgr = &g_dev_mgr; + p_dev_info = p_dev_mgr->p_dev_array[dev_id]; + + if (p_dev_info != NULL) { + free(p_dev_info); + p_dev_mgr->p_dev_array[dev_id] = NULL; + p_dev_mgr->device_num--; + } +} + +int +zxdh_np_online_uninit(uint32_t dev_id, + char *port_name, + uint32_t queue_id) +{ + uint32_t rc; + + rc = zxdh_np_dtb_queue_release(dev_id, port_name, queue_id); + if (rc != 0) + PMD_DRV_LOG(ERR, "%s:dtb release error," + "port name %s queue id %d. ", __func__, port_name, queue_id); + + zxdh_np_dtb_mgr_destroy(dev_id); + zxdh_np_tlb_mgr_destroy(dev_id); + zxdh_np_sdt_mgr_destroy(dev_id); + zxdh_np_dev_del(dev_id); + + return 0; +} diff --git a/drivers/net/zxdh/zxdh_np.h b/drivers/net/zxdh/zxdh_np.h index 573eafe796..dc0e867827 100644 --- a/drivers/net/zxdh/zxdh_np.h +++ b/drivers/net/zxdh/zxdh_np.h @@ -47,6 +47,11 @@ #define ZXDH_INIT_FLAG_TM_IMEM_FLAG (1 << 9) #define ZXDH_INIT_FLAG_AGENT_FLAG (1 << 10) +#define ZXDH_ACL_TBL_ID_MIN (0) +#define ZXDH_ACL_TBL_ID_MAX (7) +#define ZXDH_ACL_TBL_ID_NUM (8U) +#define ZXDH_ACL_BLOCK_NUM (8U) + typedef enum zxdh_module_init_e { ZXDH_MODULE_INIT_NPPU = 0, ZXDH_MODULE_INIT_PPU, @@ -67,6 +72,15 @@ typedef enum zxdh_dev_type_e { ZXDH_DEV_TYPE_INVALID, } ZXDH_DEV_TYPE_E; +typedef enum zxdh_reg_info_e { + ZXDH_DTB_CFG_QUEUE_DTB_HADDR = 0, + ZXDH_DTB_CFG_QUEUE_DTB_LADDR = 1, + ZXDH_DTB_CFG_QUEUE_DTB_LEN = 2, + ZXDH_DTB_INFO_QUEUE_BUF_SPACE = 3, + ZXDH_DTB_CFG_EPID_V_FUNC_NUM = 4, + ZXDH_REG_ENUM_MAX_VALUE +} ZXDH_REG_INFO_E; + typedef enum zxdh_dev_access_type_e { ZXDH_DEV_ACCESS_TYPE_PCIE = 0, ZXDH_DEV_ACCESS_TYPE_RISCV = 1, @@ -79,6 +93,26 @@ typedef enum zxdh_dev_agent_flag_e { ZXDH_DEV_AGENT_INVALID, } ZXDH_DEV_AGENT_FLAG_E; +typedef enum zxdh_acl_pri_mode_e { + ZXDH_ACL_PRI_EXPLICIT = 1, + ZXDH_ACL_PRI_IMPLICIT, + ZXDH_ACL_PRI_SPECIFY, + ZXDH_ACL_PRI_INVALID, +} ZXDH_ACL_PRI_MODE_E; + +typedef struct zxdh_d_node { + void *data; + struct zxdh_d_node *prev; + struct zxdh_d_node *next; +} ZXDH_D_NODE; + +typedef struct zxdh_d_head { + uint32_t used; + uint32_t maxnum; + ZXDH_D_NODE *p_next; + ZXDH_D_NODE *p_prev; +} ZXDH_D_HEAD; + typedef struct zxdh_dtb_tab_up_user_addr_t { uint32_t user_flag; uint64_t phy_addr; @@ -193,6 +227,79 @@ typedef struct zxdh_sdt_mgr_t { ZXDH_SDT_SOFT_TABLE_T *sdt_tbl_array[ZXDH_DEV_CHANNEL_MAX]; } ZXDH_SDT_MGR_T; +typedef struct zxdh_riscv_dtb_queue_USER_info_t { + uint32_t alloc_flag; + uint32_t queue_id; + uint32_t vport; + char user_name[ZXDH_PORT_NAME_MAX]; +} ZXDH_RISCV_DTB_QUEUE_USER_INFO_T; + +typedef struct zxdh_riscv_dtb_mgr { + uint32_t queue_alloc_count; + uint32_t queue_index; + ZXDH_RISCV_DTB_QUEUE_USER_INFO_T queue_user_info[ZXDH_DTB_QUEUE_NUM_MAX]; +} ZXDH_RISCV_DTB_MGR; + +typedef struct zxdh_dtb_queue_vm_info_t { + uint32_t dbi_en; + uint32_t queue_en; + uint32_t epid; + uint32_t vfunc_num; + uint32_t vector; + uint32_t func_num; + uint32_t vfunc_active; +} ZXDH_DTB_QUEUE_VM_INFO_T; + +typedef struct zxdh_dtb4k_dtb_enq_cfg_epid_v_func_num_0_127_t { + uint32_t dbi_en; + uint32_t queue_en; + uint32_t cfg_epid; + uint32_t cfg_vfunc_num; + 
uint32_t cfg_vector; + uint32_t cfg_func_num; + uint32_t cfg_vfunc_active; +} ZXDH_DTB4K_DTB_ENQ_CFG_EPID_V_FUNC_NUM_0_127_T; + + +typedef uint32_t (*ZXDH_REG_WRITE)(uint32_t dev_id, uint32_t addr, uint32_t *p_data); +typedef uint32_t (*ZXDH_REG_READ)(uint32_t dev_id, uint32_t addr, uint32_t *p_data); + +typedef struct zxdh_field_t { + const char *p_name; + uint32_t flags; + uint16_t msb_pos; + + uint16_t len; + uint32_t default_value; + uint32_t default_step; +} ZXDH_FIELD_T; + +typedef struct zxdh_reg_t { + const char *reg_name; + uint32_t reg_no; + uint32_t module_no; + uint32_t flags; + uint32_t array_type; + uint32_t addr; + uint32_t width; + uint32_t m_size; + uint32_t n_size; + uint32_t m_step; + uint32_t n_step; + uint32_t field_num; + ZXDH_FIELD_T *p_fields; + + ZXDH_REG_WRITE p_write_fun; + ZXDH_REG_READ p_read_fun; +} ZXDH_REG_T; + +typedef struct zxdh_tlb_mgr_t { + uint32_t entry_num; + uint32_t va_width; + uint32_t pa_width; +} ZXDH_TLB_MGR_T; + int zxdh_np_host_init(uint32_t dev_id, ZXDH_DEV_INIT_CTRL_T *p_dev_init_ctrl); +int zxdh_np_online_uninit(uint32_t dev_id, char *port_name, uint32_t queue_id); #endif /* ZXDH_NP_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 45109 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
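The uninit path above is reference-counted: every port that attached to the global DTB state bumped g_dtb_data.dev_refcnt in zxdh_np_dtb_res_init(), and zxdh_np_uninit() only releases the memzones and manager structures once the last reference drops. A minimal sketch of that last-put-frees idiom, using an illustrative res_ctx holder instead of the driver's globals:

#include <stdint.h>
#include <rte_memzone.h>

struct res_ctx {                        /* illustrative resource holder */
	uint32_t refcnt;
	const struct rte_memzone *mz;
};

/* Hypothetical helper: drop one reference; free backing storage on the
 * last put only.
 */
static void
res_ctx_put(struct res_ctx *ctx)
{
	if (ctx->refcnt == 0)
		return;                 /* nothing attached, avoid underflow */
	if (--ctx->refcnt != 0)
		return;                 /* other ports still hold references */
	if (ctx->mz != NULL) {
		rte_memzone_free(ctx->mz);
		ctx->mz = NULL;         /* guard against a second put */
	}
}

The same guard appears in the patch as the early return when neither init_done nor dev_refcnt is set, which keeps zxdh_dev_close() and the zxdh_eth_dev_init() error path from double-freeing when both run.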
* [PATCH v5 03/15] net/zxdh: port tables init implementations 2024-12-23 11:02 ` [PATCH v5 00/15] net/zxdh: updated net zxdh driver Junlong Wang 2024-12-23 11:02 ` [PATCH v5 01/15] net/zxdh: zxdh np init implementation Junlong Wang @ 2024-12-23 11:02 ` Junlong Wang 2024-12-23 11:02 ` [PATCH v5 04/15] net/zxdh: port tables unint implementations Junlong Wang ` (13 subsequent siblings) 16 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-23 11:02 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 42795 bytes --] Insert port tables in the host. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/meson.build | 1 + drivers/net/zxdh/zxdh_ethdev.c | 24 ++ drivers/net/zxdh/zxdh_msg.c | 65 ++++ drivers/net/zxdh/zxdh_msg.h | 72 ++++ drivers/net/zxdh/zxdh_np.c | 648 ++++++++++++++++++++++++++++++++- drivers/net/zxdh/zxdh_np.h | 210 +++++++++++ drivers/net/zxdh/zxdh_pci.h | 2 + drivers/net/zxdh/zxdh_tables.c | 105 ++++++ drivers/net/zxdh/zxdh_tables.h | 148 ++++++++ 9 files changed, 1274 insertions(+), 1 deletion(-) create mode 100644 drivers/net/zxdh/zxdh_tables.c create mode 100644 drivers/net/zxdh/zxdh_tables.h diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build index ab24a3145c..5b3af87c5b 100644 --- a/drivers/net/zxdh/meson.build +++ b/drivers/net/zxdh/meson.build @@ -20,4 +20,5 @@ sources = files( 'zxdh_pci.c', 'zxdh_queue.c', 'zxdh_np.c', + 'zxdh_tables.c', ) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 4e114d95da..ff44816384 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -14,6 +14,7 @@ #include "zxdh_common.h" #include "zxdh_queue.h" #include "zxdh_np.h" +#include "zxdh_tables.h" struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; struct zxdh_shared_data *zxdh_shared_data; @@ -1144,6 +1145,25 @@ zxdh_np_init(struct rte_eth_dev *eth_dev) return 0; } +static int +zxdh_tables_init(struct rte_eth_dev *dev) +{ + int ret = 0; + + ret = zxdh_port_attr_init(dev); + if (ret) { + PMD_DRV_LOG(ERR, "zxdh_port_attr_init failed"); + return ret; + } + + ret = zxdh_panel_table_init(dev); + if (ret) { + PMD_DRV_LOG(ERR, "panel table init failed"); + return ret; + } + return ret; +} + static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) { @@ -1220,6 +1240,10 @@ zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) if (ret != 0) goto err_zxdh_init; + ret = zxdh_tables_init(eth_dev); + if (ret != 0) + goto err_zxdh_init; + return ret; err_zxdh_init: diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c index dd7a518a51..aa2e10fd45 100644 --- a/drivers/net/zxdh/zxdh_msg.c +++ b/drivers/net/zxdh/zxdh_msg.c @@ -14,6 +14,7 @@ #include "zxdh_ethdev.h" #include "zxdh_logs.h" #include "zxdh_msg.h" +#include "zxdh_pci.h" #define ZXDH_REPS_INFO_FLAG_USABLE 0x00 #define ZXDH_BAR_SEQID_NUM_MAX 256 @@ -100,6 +101,7 @@ #define ZXDH_BAR_CHAN_MSG_EMEC 1 #define ZXDH_BAR_CHAN_MSG_NO_ACK 0 #define ZXDH_BAR_CHAN_MSG_ACK 1 +#define ZXDH_MSG_REPS_OK 0xff uint8_t subchan_id_tbl[ZXDH_BAR_MSG_SRC_NUM][ZXDH_BAR_MSG_DST_NUM] = { {ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_SEND}, @@ -1079,3 +1081,66 @@ int zxdh_get_bar_offset(struct zxdh_bar_offset_params *paras, res->bar_length = recv_msg.offset_reps.length; return ZXDH_BAR_MSG_OK; } + +int zxdh_vf_send_msg_to_pf(struct rte_eth_dev *dev, void *msg_req, +
uint16_t msg_req_len, void *reply, uint16_t reply_len) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_recviver_mem result = {0}; + struct zxdh_msg_reply_info reply_info = {0}; + int ret = 0; + + if (reply) { + RTE_ASSERT(reply_len < sizeof(zxdh_msg_reply_info)); + result.recv_buffer = reply; + result.buffer_len = reply_len; + } else { + result.recv_buffer = &reply_info; + result.buffer_len = sizeof(reply_info); + } + + struct zxdh_msg_reply_head *reply_head = + &(((struct zxdh_msg_reply_info *)result.recv_buffer)->reply_head); + struct zxdh_msg_reply_body *reply_body = + &(((struct zxdh_msg_reply_info *)result.recv_buffer)->reply_body); + + struct zxdh_pci_bar_msg in = { + .virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + + ZXDH_MSG_CHAN_PFVFSHARE_OFFSET), + .payload_addr = msg_req, + .payload_len = msg_req_len, + .src = ZXDH_MSG_CHAN_END_VF, + .dst = ZXDH_MSG_CHAN_END_PF, + .module_id = ZXDH_MODULE_BAR_MSG_TO_PF, + .src_pcieid = hw->pcie_id, + .dst_pcieid = ZXDH_PF_PCIE_ID(hw->pcie_id), + }; + + ret = zxdh_bar_chan_sync_msg_send(&in, &result); + if (ret != ZXDH_BAR_MSG_OK) { + PMD_MSG_LOG(ERR, + "vf[%d] send bar msg to pf failed.ret %d", hw->vport.vfid, ret); + return -1; + } + if (reply_head->flag != ZXDH_MSG_REPS_OK) { + PMD_MSG_LOG(ERR, "vf[%d] get pf reply failed: reply_head flag : 0x%x(0xff is OK).replylen %d", + hw->vport.vfid, reply_head->flag, reply_head->reps_len); + return -1; + } + if (reply_body->flag != ZXDH_REPS_SUCC) { + PMD_MSG_LOG(ERR, "vf[%d] msg processing failed", hw->vfid); + return -1; + } + return 0; +} + +void zxdh_msg_head_build(struct zxdh_hw *hw, enum zxdh_msg_type type, + struct zxdh_msg_info *msg_info) +{ + struct zxdh_msg_head *msghead = &msg_info->msg_head; + + msghead->msg_type = type; + msghead->vport = hw->vport.vport; + msghead->vf_id = hw->vport.vfid; + msghead->pcieid = hw->pcie_id; +} diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index fbc79e8f9d..b7b17b8696 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -33,6 +33,19 @@ #define ZXDH_BAR_MSG_PAYLOAD_MAX_LEN \ (ZXDH_BAR_MSG_ADDR_CHAN_INTERVAL - sizeof(struct zxdh_bar_msg_header)) +#define ZXDH_MSG_ADDR_CHAN_INTERVAL (2 * 1024) /* channel size */ +#define ZXDH_MSG_PAYLOAD_MAX_LEN \ + (ZXDH_MSG_ADDR_CHAN_INTERVAL - sizeof(struct zxdh_bar_msg_header)) + +#define ZXDH_MSG_REPLYBODY_HEAD sizeof(enum zxdh_reps_flag) +#define ZXDH_MSG_HEADER_SIZE 4 +#define ZXDH_MSG_REPLY_BODY_MAX_LEN \ + (ZXDH_MSG_PAYLOAD_MAX_LEN - sizeof(struct zxdh_msg_reply_head)) + +#define ZXDH_MSG_HEAD_LEN 8 +#define ZXDH_MSG_REQ_BODY_MAX_LEN \ + (ZXDH_MSG_PAYLOAD_MAX_LEN - ZXDH_MSG_HEAD_LEN) + enum ZXDH_DRIVER_TYPE { ZXDH_MSG_CHAN_END_MPF = 0, ZXDH_MSG_CHAN_END_PF, @@ -151,6 +164,13 @@ enum pciebar_layout_type { ZXDH_URI_MAX, }; +enum zxdh_msg_type { + ZXDH_NULL = 0, + ZXDH_VF_PORT_INIT = 1, + + ZXDH_MSG_TYPE_END, +}; + struct zxdh_msix_para { uint16_t pcie_id; uint16_t vector_risc; @@ -240,6 +260,54 @@ struct zxdh_offset_get_msg { uint16_t type; }; +struct zxdh_msg_reply_head { + uint8_t flag; + uint16_t reps_len; + uint8_t resvd; +} __rte_packed; + +enum zxdh_reps_flag { + ZXDH_REPS_FAIL, + ZXDH_REPS_SUCC = 0xaa, +} __rte_packed; + +struct zxdh_msg_reply_body { + enum zxdh_reps_flag flag; + union { + uint8_t reply_data[ZXDH_MSG_REPLY_BODY_MAX_LEN - sizeof(enum zxdh_reps_flag)]; + } __rte_packed; +} __rte_packed; + +struct zxdh_msg_reply_info { + struct zxdh_msg_reply_head reply_head; + struct zxdh_msg_reply_body reply_body; +} __rte_packed; + +struct 
zxdh_vf_init_msg { + uint8_t link_up; + uint8_t rsv; + uint16_t base_qid; + uint8_t rss_enable; +} __rte_packed; + +struct zxdh_msg_head { + enum zxdh_msg_type msg_type; + uint16_t vport; + uint16_t vf_id; + uint16_t pcieid; +} __rte_packed; + +struct zxdh_msg_info { + union { + uint8_t head_len[ZXDH_MSG_HEAD_LEN]; + struct zxdh_msg_head msg_head; + }; + union { + uint8_t datainfo[ZXDH_MSG_REQ_BODY_MAX_LEN]; + struct zxdh_vf_init_msg vf_init_msg; + } __rte_packed data; +} __rte_packed; + typedef int (*zxdh_bar_chan_msg_recv_callback)(void *pay_load, uint16_t len, void *reps_buffer, uint16_t *reps_len, void *dev); @@ -253,5 +321,9 @@ int zxdh_bar_chan_sync_msg_send(struct zxdh_pci_bar_msg *in, struct zxdh_msg_recviver_mem *result); int zxdh_bar_irq_recv(uint8_t src, uint8_t dst, uint64_t virt_addr, void *dev); +void zxdh_msg_head_build(struct zxdh_hw *hw, enum zxdh_msg_type type, + struct zxdh_msg_info *msg_info); +int zxdh_vf_send_msg_to_pf(struct rte_eth_dev *dev, void *msg_req, + uint16_t msg_req_len, void *reply, uint16_t reply_len); #endif /* ZXDH_MSG_H */ diff --git a/drivers/net/zxdh/zxdh_np.c b/drivers/net/zxdh/zxdh_np.c index 28728b0c68..db536d96e3 100644 --- a/drivers/net/zxdh/zxdh_np.c +++ b/drivers/net/zxdh/zxdh_np.c @@ -9,6 +9,7 @@ #include <rte_log.h> #include <rte_debug.h> #include <rte_malloc.h> +#include <rte_memcpy.h> #include "zxdh_np.h" #include "zxdh_logs.h" @@ -16,11 +17,14 @@ static uint64_t g_np_bar_offset; static ZXDH_DEV_MGR_T g_dev_mgr; static ZXDH_SDT_MGR_T g_sdt_mgr; +static uint32_t g_dpp_dtb_int_enable; +static uint32_t g_table_type[ZXDH_DEV_CHANNEL_MAX][ZXDH_DEV_SDT_ID_MAX]; ZXDH_PPU_CLS_BITMAP_T g_ppu_cls_bit_map[ZXDH_DEV_CHANNEL_MAX]; ZXDH_DTB_MGR_T *p_dpp_dtb_mgr[ZXDH_DEV_CHANNEL_MAX]; ZXDH_RISCV_DTB_MGR *p_riscv_dtb_queue_mgr[ZXDH_DEV_CHANNEL_MAX]; ZXDH_TLB_MGR_T *g_p_dpp_tlb_mgr[ZXDH_DEV_CHANNEL_MAX]; ZXDH_REG_T g_dpp_reg_info[4]; +ZXDH_DTB_TABLE_T g_dpp_dtb_table_info[4]; #define ZXDH_SDT_MGR_PTR_GET() (&g_sdt_mgr) #define ZXDH_SDT_SOFT_TBL_GET(id) (g_sdt_mgr.sdt_tbl_array[id]) @@ -76,6 +80,92 @@ do {\ } \ } while (0) +#define ZXDH_COMM_CHECK_POINT(point)\ +do {\ + if ((point) == NULL) {\ + PMD_DRV_LOG(ERR, "ZXIC %s:%d[Error:POINT NULL] ! 
FUNCTION : %s!",\ + __FILE__, __LINE__, __func__);\ + RTE_ASSERT(0);\ + } \ +} while (0) + + +#define ZXDH_COMM_CHECK_POINT_MEMORY_FREE(point, ptr)\ +do {\ + if ((point) == NULL) {\ + PMD_DRV_LOG(ERR, "ZXIC %s:%d[Error:POINT NULL] !"\ + "FUNCTION : %s!", __FILE__, __LINE__, __func__);\ + rte_free(ptr);\ + RTE_ASSERT(0);\ + } \ +} while (0) + +#define ZXDH_COMM_CHECK_RC_MEMORY_FREE_NO_ASSERT(rc, becall, ptr)\ +do {\ + if ((rc) != 0) {\ + PMD_DRV_LOG(ERR, "ZXICP %s:%d, %s Call"\ + " %s Fail!", __FILE__, __LINE__, __func__, becall);\ + rte_free(ptr);\ + } \ +} while (0) + +#define ZXDH_COMM_CONVERT16(w_data) \ + (((w_data) & 0xff) << 8) + +#define ZXDH_DTB_TAB_UP_VIR_ADDR_GET(DEV_ID, QUEUE_ID, INDEX) \ + ((INDEX) * p_dpp_dtb_mgr[(DEV_ID)]->queue_info[(QUEUE_ID)].tab_up.item_size) + +#define ZXDH_DTB_TAB_DOWN_VIR_ADDR_GET(DEV_ID, QUEUE_ID, INDEX) \ + ((INDEX) * p_dpp_dtb_mgr[(DEV_ID)]->queue_info[(QUEUE_ID)].tab_down.item_size) + +#define ZXDH_DTB_TAB_DOWN_WR_INDEX_GET(DEV_ID, QUEUE_ID) \ + (p_dpp_dtb_mgr[(DEV_ID)]->queue_info[(QUEUE_ID)].tab_down.wr_index) + +#define ZXDH_DTB_QUEUE_INIT_FLAG_GET(DEV_ID, QUEUE_ID) \ + (p_dpp_dtb_mgr[(DEV_ID)]->queue_info[(QUEUE_ID)].init_flag) + +static uint32_t +zxdh_np_comm_is_big_endian(void) +{ + ZXDH_ENDIAN_U c_data; + + c_data.a = 1; + + if (c_data.b == 1) + return 0; + else + return 1; +} + +static void +zxdh_np_comm_swap(uint8_t *p_uc_data, uint32_t dw_byte_len) +{ + uint16_t *p_w_tmp = NULL; + uint32_t *p_dw_tmp = NULL; + uint32_t dw_byte_num; + uint8_t uc_byte_mode; + uint32_t uc_is_big_flag; + uint32_t i; + + p_dw_tmp = (uint32_t *)(p_uc_data); + uc_is_big_flag = zxdh_np_comm_is_big_endian(); + if (uc_is_big_flag) + return; + + dw_byte_num = dw_byte_len >> 2; + uc_byte_mode = dw_byte_len % 4 & 0xff; + + for (i = 0; i < dw_byte_num; i++) { + (*p_dw_tmp) = ZXDH_COMM_CONVERT16(*p_dw_tmp); + p_dw_tmp++; + } + + if (uc_byte_mode > 1) { + p_w_tmp = (uint16_t *)(p_dw_tmp); + (*p_w_tmp) = ZXDH_COMM_CONVERT16(*p_w_tmp); + } +} + static uint32_t zxdh_np_dev_init(void) { @@ -503,7 +593,7 @@ zxdh_np_dtb_queue_vm_info_get(uint32_t dev_id, p_vm_info->func_num = vm_info.cfg_func_num; p_vm_info->vfunc_active = vm_info.cfg_vfunc_active; - return 0; + return rc; } static uint32_t @@ -808,3 +898,559 @@ zxdh_np_online_uninit(uint32_t dev_id, return 0; } + +static uint32_t +zxdh_np_sdt_tbl_type_get(uint32_t dev_id, uint32_t sdt_no) +{ + return g_table_type[dev_id][sdt_no]; +} + + +static ZXDH_DTB_TABLE_T * +zxdh_np_table_info_get(uint32_t table_type) +{ + return &g_dpp_dtb_table_info[table_type]; +} + +static uint32_t +zxdh_np_dtb_write_table_cmd(uint32_t dev_id, + ZXDH_DTB_TABLE_INFO_E table_type, + void *p_cmd_data, + void *p_cmd_buff) +{ + uint32_t field_cnt; + ZXDH_DTB_TABLE_T *p_table_info = NULL; + ZXDH_DTB_FIELD_T *p_field_info = NULL; + uint32_t temp_data; + uint32_t rc; + + ZXDH_COMM_CHECK_POINT(p_cmd_data); + ZXDH_COMM_CHECK_POINT(p_cmd_buff); + p_table_info = zxdh_np_table_info_get(table_type); + p_field_info = p_table_info->p_fields; + ZXDH_COMM_CHECK_DEV_POINT(dev_id, p_table_info); + + for (field_cnt = 0; field_cnt < p_table_info->field_num; field_cnt++) { + temp_data = *((uint32_t *)p_cmd_data + field_cnt) & ZXDH_COMM_GET_BIT_MASK(uint32_t, + p_field_info[field_cnt].len); + + rc = zxdh_np_comm_write_bits_ex((uint8_t *)p_cmd_buff, + ZXDH_DTB_TABLE_CMD_SIZE_BIT, + temp_data, + p_field_info[field_cnt].lsb_pos, + p_field_info[field_cnt].len); + + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxic_comm_write_bits"); + } + + return rc; +} + +static uint32_t 
+zxdh_np_dtb_smmu0_write_entry_data(uint32_t dev_id, + uint32_t mode, + uint32_t addr, + uint32_t *p_data, + ZXDH_DTB_ENTRY_T *p_entry) +{ + ZXDH_DTB_ERAM_TABLE_FORM_T dtb_eram_form_info = {0}; + uint32_t rc = 0; + + dtb_eram_form_info.valid = ZXDH_DTB_TABLE_VALID; + dtb_eram_form_info.type_mode = ZXDH_DTB_TABLE_MODE_ERAM; + dtb_eram_form_info.data_mode = mode; + dtb_eram_form_info.cpu_wr = 1; + dtb_eram_form_info.addr = addr; + dtb_eram_form_info.cpu_rd = 0; + dtb_eram_form_info.cpu_rd_mode = 0; + + if (ZXDH_ERAM128_OPR_128b == mode) { + p_entry->data_in_cmd_flag = 0; + p_entry->data_size = 128 / 8; + + rc = zxdh_np_dtb_write_table_cmd(dev_id, ZXDH_DTB_TABLE_ERAM_128, + &dtb_eram_form_info, p_entry->cmd); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_dtb_write_table_cmd"); + + memcpy(p_entry->data, p_data, 128 / 8); + } else if (ZXDH_ERAM128_OPR_64b == mode) { + p_entry->data_in_cmd_flag = 1; + p_entry->data_size = 64 / 8; + dtb_eram_form_info.data_l = *(p_data + 1); + dtb_eram_form_info.data_h = *(p_data); + + rc = zxdh_np_dtb_write_table_cmd(dev_id, ZXDH_DTB_TABLE_ERAM_64, + &dtb_eram_form_info, p_entry->cmd); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_dtb_write_table_cmd"); + + } else if (ZXDH_ERAM128_OPR_1b == mode) { + p_entry->data_in_cmd_flag = 1; + p_entry->data_size = 1; + dtb_eram_form_info.data_h = *(p_data); + + rc = zxdh_np_dtb_write_table_cmd(dev_id, ZXDH_DTB_TABLE_ERAM_1, + &dtb_eram_form_info, p_entry->cmd); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_dtb_write_table_cmd"); + } + + return rc; +} + +static uint32_t +zxdh_np_dtb_se_smmu0_ind_write(uint32_t dev_id, + uint32_t base_addr, + uint32_t index, + uint32_t wrt_mode, + uint32_t *p_data, + ZXDH_DTB_ENTRY_T *p_entry) +{ + uint32_t temp_idx; + uint32_t dtb_ind_addr; + uint32_t rc; + + switch (wrt_mode) { + case ZXDH_ERAM128_OPR_128b: + { + if ((0xFFFFFFFF - (base_addr)) < (index)) { + PMD_DRV_LOG(ERR, "ICM %s:%d[Error:VALUE[val0=0x%x]" + "INVALID] [val1=0x%x] ! 
FUNCTION :%s !", __FILE__, __LINE__, + base_addr, index, __func__); + + return ZXDH_PAR_CHK_INVALID_INDEX; + } + if (base_addr + index > ZXDH_SE_SMMU0_ERAM_ADDR_NUM_TOTAL - 1) { + PMD_DRV_LOG(ERR, "dpp_se_smmu0_ind_write : index out of range !"); + return 1; + } + temp_idx = index << 7; + break; + } + + case ZXDH_ERAM128_OPR_64b: + { + if ((base_addr + (index >> 1)) > ZXDH_SE_SMMU0_ERAM_ADDR_NUM_TOTAL - 1) { + PMD_DRV_LOG(ERR, "dpp_se_smmu0_ind_write : index out of range !"); + return 1; + } + temp_idx = index << 6; + break; + } + + case ZXDH_ERAM128_OPR_1b: + { + if ((base_addr + (index >> 7)) > ZXDH_SE_SMMU0_ERAM_ADDR_NUM_TOTAL - 1) { + PMD_DRV_LOG(ERR, "dpp_se_smmu0_ind_write : index out of range !"); + return 1; + } + + temp_idx = index; + } + } + + dtb_ind_addr = ((base_addr << 7) & ZXDH_ERAM128_BADDR_MASK) + temp_idx; + + PMD_DRV_LOG(INFO, " dtb eram item 1bit addr: 0x%x", dtb_ind_addr); + + rc = zxdh_np_dtb_smmu0_write_entry_data(dev_id, + wrt_mode, + dtb_ind_addr, + p_data, + p_entry); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_dtb_smmu0_write_entry_data"); + + return rc; +} + +static uint32_t +zxdh_np_eram_dtb_len_get(uint32_t mode) +{ + uint32_t dtb_len = 0; + + switch (mode) { + case ZXDH_ERAM128_OPR_128b: + { + dtb_len += 2; + break; + } + case ZXDH_ERAM128_OPR_64b: + case ZXDH_ERAM128_OPR_1b: + { + dtb_len += 1; + break; + } + default: + break; + } + + return dtb_len; +} + +static uint32_t +zxdh_np_dtb_eram_one_entry(uint32_t dev_id, + uint32_t sdt_no, + uint32_t del_en, + void *pdata, + uint32_t *p_dtb_len, + ZXDH_DTB_ENTRY_T *p_dtb_one_entry) +{ + uint32_t buff[ZXDH_SMMU0_READ_REG_MAX_NUM] = {0}; + ZXDH_SDTTBL_ERAM_T sdt_eram = {0}; + ZXDH_DTB_ERAM_ENTRY_INFO_T *peramdata = NULL; + uint32_t base_addr; + uint32_t index; + uint32_t opr_mode; + uint32_t rc; + + ZXDH_COMM_CHECK_POINT(pdata); + ZXDH_COMM_CHECK_POINT(p_dtb_one_entry); + ZXDH_COMM_CHECK_POINT(p_dtb_len); + + peramdata = (ZXDH_DTB_ERAM_ENTRY_INFO_T *)pdata; + index = peramdata->index; + base_addr = sdt_eram.eram_base_addr; + opr_mode = sdt_eram.eram_mode; + + switch (opr_mode) { + case ZXDH_ERAM128_TBL_128b: + { + opr_mode = ZXDH_ERAM128_OPR_128b; + break; + } + case ZXDH_ERAM128_TBL_64b: + { + opr_mode = ZXDH_ERAM128_OPR_64b; + break; + } + + case ZXDH_ERAM128_TBL_1b: + { + opr_mode = ZXDH_ERAM128_OPR_1b; + break; + } + } + + if (del_en) { + memset((uint8_t *)buff, 0, sizeof(buff)); + rc = zxdh_np_dtb_se_smmu0_ind_write(dev_id, + base_addr, + index, + opr_mode, + buff, + p_dtb_one_entry); + ZXDH_COMM_CHECK_DEV_RC(sdt_no, rc, "zxdh_dtb_se_smmu0_ind_write"); + } else { + rc = zxdh_np_dtb_se_smmu0_ind_write(dev_id, + base_addr, + index, + opr_mode, + peramdata->p_data, + p_dtb_one_entry); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rc, "zxdh_dtb_se_smmu0_ind_write"); + } + *p_dtb_len = zxdh_np_eram_dtb_len_get(opr_mode); + + return rc; +} + +static uint32_t +zxdh_np_dtb_data_write(uint8_t *p_data_buff, + uint32_t addr_offset, + ZXDH_DTB_ENTRY_T *entry) +{ + ZXDH_COMM_CHECK_POINT(p_data_buff); + ZXDH_COMM_CHECK_POINT(entry); + + uint8_t *p_cmd = p_data_buff + addr_offset; + uint32_t cmd_size = ZXDH_DTB_TABLE_CMD_SIZE_BIT / 8; + + uint8_t *p_data = p_cmd + cmd_size; + uint32_t data_size = entry->data_size; + + uint8_t *cmd = (uint8_t *)entry->cmd; + uint8_t *data = (uint8_t *)entry->data; + + rte_memcpy(p_cmd, cmd, cmd_size); + + if (!entry->data_in_cmd_flag) { + zxdh_np_comm_swap(data, data_size); + rte_memcpy(p_data, data, data_size); + } + + return 0; +} + +static uint32_t +zxdh_np_dtb_queue_enable_get(uint32_t dev_id, + uint32_t 
queue_id, + uint32_t *enable) +{ + uint32_t rc = 0; + ZXDH_DTB_QUEUE_VM_INFO_T vm_info = {0}; + + rc = zxdh_np_dtb_queue_vm_info_get(dev_id, queue_id, &vm_info); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_dtb_queue_vm_info_get"); + + *enable = vm_info.queue_en; + return rc; +} + +static uint32_t +zxdh_np_dtb_item_buff_wr(uint32_t dev_id, + uint32_t queue_id, + uint32_t dir_flag, + uint32_t index, + uint32_t pos, + uint32_t len, + uint32_t *p_data) +{ + uint64_t addr; + + if (dir_flag == 1) + addr = ZXDH_DTB_TAB_UP_VIR_ADDR_GET(dev_id, queue_id, index) + + ZXDH_DTB_ITEM_ACK_SIZE + pos * 4; + else + addr = ZXDH_DTB_TAB_DOWN_VIR_ADDR_GET(dev_id, queue_id, index) + + ZXDH_DTB_ITEM_ACK_SIZE + pos * 4; + + memcpy((uint8_t *)(addr), p_data, len * 4); + + return 0; +} + +static uint32_t +zxdh_np_dtb_item_ack_rd(uint32_t dev_id, + uint32_t queue_id, + uint32_t dir_flag, + uint32_t index, + uint32_t pos, + uint32_t *p_data) +{ + uint64_t addr; + uint32_t val; + + if (dir_flag == 1) + addr = ZXDH_DTB_TAB_UP_VIR_ADDR_GET(dev_id, queue_id, index) + pos * 4; + else + addr = ZXDH_DTB_TAB_DOWN_VIR_ADDR_GET(dev_id, queue_id, index) + pos * 4; + + val = *((volatile uint32_t *)(addr)); + + *p_data = val; + + return 0; +} + +static uint32_t +zxdh_np_dtb_item_ack_wr(uint32_t dev_id, + uint32_t queue_id, + uint32_t dir_flag, + uint32_t index, + uint32_t pos, + uint32_t data) +{ + uint64_t addr; + + if (dir_flag == 1) + addr = ZXDH_DTB_TAB_UP_VIR_ADDR_GET(dev_id, queue_id, index) + pos * 4; + else + addr = ZXDH_DTB_TAB_DOWN_VIR_ADDR_GET(dev_id, queue_id, index) + pos * 4; + + *((volatile uint32_t *)(addr)) = data; + + return 0; +} + +static uint32_t +zxdh_np_dtb_queue_item_info_set(uint32_t dev_id, + uint32_t queue_id, + ZXDH_DTB_QUEUE_ITEM_INFO_T *p_item_info) +{ + ZXDH_DTB_QUEUE_LEN_T dtb_len = {0}; + uint32_t rc; + + dtb_len.cfg_dtb_cmd_type = p_item_info->cmd_type; + dtb_len.cfg_dtb_cmd_int_en = p_item_info->int_en; + dtb_len.cfg_queue_dtb_len = p_item_info->data_len; + + rc = zxdh_np_reg_write(dev_id, ZXDH_DTB_CFG_QUEUE_DTB_LEN, + 0, queue_id, (void *)&dtb_len); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rc, "dpp_reg_write"); + return rc; +} + +static uint32_t +zxdh_np_dtb_tab_down_info_set(uint32_t dev_id, + uint32_t queue_id, + uint32_t int_flag, + uint32_t data_len, + uint32_t *p_data, + uint32_t *p_item_index) +{ + ZXDH_DTB_QUEUE_ITEM_INFO_T item_info = {0}; + uint32_t unused_item_num = 0; + uint32_t queue_en = 0; + uint32_t ack_val = 0; + uint64_t phy_addr; + uint32_t item_index; + uint32_t i; + uint32_t rc; + + if (ZXDH_DTB_QUEUE_INIT_FLAG_GET(dev_id, queue_id) == 0) { + PMD_DRV_LOG(ERR, "dtb queue %d is not initialized.", queue_id); + return ZXDH_RC_DTB_QUEUE_IS_NOT_INIT; + } + + if (data_len % 4 != 0) + return ZXDH_RC_DTB_PARA_INVALID; + + rc = zxdh_np_dtb_queue_enable_get(dev_id, queue_id, &queue_en); + if (!queue_en) { + PMD_DRV_LOG(ERR, "queue %d is not enabled! rc=%d", queue_id, rc); + return ZXDH_RC_DTB_QUEUE_NOT_ENABLE; + } + + rc = zxdh_np_dtb_queue_unused_item_num_get(dev_id, queue_id, &unused_item_num); + if (unused_item_num == 0) + return ZXDH_RC_DTB_QUEUE_ITEM_HW_EMPTY; + + for (i = 0; i < ZXDH_DTB_QUEUE_ITEM_NUM_MAX; i++) { + item_index = ZXDH_DTB_TAB_DOWN_WR_INDEX_GET(dev_id, queue_id) % + ZXDH_DTB_QUEUE_ITEM_NUM_MAX; + + rc = zxdh_np_dtb_item_ack_rd(dev_id, queue_id, 0, + item_index, 0, &ack_val); + + ZXDH_DTB_TAB_DOWN_WR_INDEX_GET(dev_id, queue_id)++; + + if ((ack_val >> 8) == ZXDH_DTB_TAB_ACK_UNUSED_MASK) + break; + } + + if (i == ZXDH_DTB_QUEUE_ITEM_NUM_MAX) + return
ZXDH_RC_DTB_QUEUE_ITEM_SW_EMPTY; + + rc = zxdh_np_dtb_item_buff_wr(dev_id, queue_id, 0, + item_index, 0, data_len, p_data); + + rc = zxdh_np_dtb_item_ack_wr(dev_id, queue_id, 0, + item_index, 0, ZXDH_DTB_TAB_ACK_IS_USING_MASK); + + item_info.cmd_vld = 1; + item_info.cmd_type = 0; + item_info.int_en = int_flag; + item_info.data_len = data_len / 4; + phy_addr = p_dpp_dtb_mgr[dev_id]->queue_info[queue_id].tab_down.start_phy_addr + + item_index * p_dpp_dtb_mgr[dev_id]->queue_info[queue_id].tab_down.item_size; + item_info.data_hddr = ((phy_addr >> 4) >> 32) & 0xffffffff; + item_info.data_laddr = (phy_addr >> 4) & 0xffffffff; + + rc = zxdh_np_dtb_queue_item_info_set(dev_id, queue_id, &item_info); + *p_item_index = item_index; + + return rc; +} + +static uint32_t +zxdh_np_dtb_write_down_table_data(uint32_t dev_id, + uint32_t queue_id, + uint32_t down_table_len, + uint8_t *p_down_table_buff, + uint32_t *p_element_id) +{ + uint32_t rc = 0; + uint32_t dtb_interrupt_status = 0; + + dtb_interrupt_status = g_dpp_dtb_int_enable; + + rc = zxdh_np_dtb_tab_down_info_set(dev_id, + queue_id, + dtb_interrupt_status, + down_table_len / 4, + (uint32_t *)p_down_table_buff, + p_element_id); + return rc; +} + +int +zxdh_np_dtb_table_entry_write(uint32_t dev_id, + uint32_t queue_id, + uint32_t entrynum, + ZXDH_DTB_USER_ENTRY_T *down_entries) +{ + ZXDH_DTB_USER_ENTRY_T *pentry = NULL; + ZXDH_DTB_ENTRY_T dtb_one_entry = {0}; + uint8_t entry_cmd[ZXDH_DTB_TABLE_CMD_SIZE_BIT] = {0}; + uint8_t entry_data[ZXDH_ETCAM_WIDTH_MAX] = {0}; + uint8_t *p_data_buff = NULL; + uint8_t *p_data_buff_ex = NULL; + uint32_t element_id = 0xff; + uint32_t one_dtb_len = 0; + uint32_t dtb_len = 0; + uint32_t entry_index; + uint32_t sdt_no; + uint32_t tbl_type; + uint32_t addr_offset; + uint32_t max_size; + uint32_t rc; + + p_data_buff = rte_zmalloc(NULL, ZXDH_DTB_TABLE_DATA_BUFF_SIZE, 0); + ZXDH_COMM_CHECK_POINT(p_data_buff); + + p_data_buff_ex = rte_zmalloc(NULL, ZXDH_DTB_TABLE_DATA_BUFF_SIZE, 0); + ZXDH_COMM_CHECK_POINT_MEMORY_FREE(p_data_buff_ex, p_data_buff); + + dtb_one_entry.cmd = entry_cmd; + dtb_one_entry.data = entry_data; + + max_size = (ZXDH_DTB_TABLE_DATA_BUFF_SIZE / 16) - 1; + + for (entry_index = 0; entry_index < entrynum; entry_index++) { + pentry = down_entries + entry_index; + sdt_no = pentry->sdt_no; + tbl_type = zxdh_np_sdt_tbl_type_get(dev_id, sdt_no); + switch (tbl_type) { + case ZXDH_SDT_TBLT_ERAM: + { + rc = zxdh_np_dtb_eram_one_entry(dev_id, sdt_no, ZXDH_DTB_ITEM_ADD_OR_UPDATE, + pentry->p_entry_data, &one_dtb_len, &dtb_one_entry); + break; + } + default: + { + PMD_DRV_LOG(ERR, "SDT table_type[ %d ] is invalid!", tbl_type); + rte_free(p_data_buff); + rte_free(p_data_buff_ex); + return 1; + } + } + + addr_offset = dtb_len * ZXDH_DTB_LEN_POS_SETP; + dtb_len += one_dtb_len; + if (dtb_len > max_size) { + rte_free(p_data_buff); + rte_free(p_data_buff_ex); + PMD_DRV_LOG(ERR, " %s error dtb_len>%u!", __func__, + max_size); + return ZXDH_RC_DTB_DOWN_LEN_INVALID; + } + rc = zxdh_np_dtb_data_write(p_data_buff, addr_offset, &dtb_one_entry); + memset(entry_cmd, 0x0, sizeof(entry_cmd)); + memset(entry_data, 0x0, sizeof(entry_data)); + } + + if (dtb_len == 0) { + rte_free(p_data_buff); + rte_free(p_data_buff_ex); + return ZXDH_RC_DTB_DOWN_LEN_INVALID; + } + + rc = zxdh_np_dtb_write_down_table_data(dev_id, + queue_id, + dtb_len * 16, + p_data_buff, + &element_id); + rte_free(p_data_buff); + rte_free(p_data_buff_ex); + + return rc; +} diff --git a/drivers/net/zxdh/zxdh_np.h b/drivers/net/zxdh/zxdh_np.h index dc0e867827..40961c02a2 
100644 --- a/drivers/net/zxdh/zxdh_np.h +++ b/drivers/net/zxdh/zxdh_np.h @@ -7,6 +7,8 @@ #include <stdint.h> +#define ZXDH_DISABLE (0) +#define ZXDH_ENABLE (1) #define ZXDH_PORT_NAME_MAX (32) #define ZXDH_DEV_CHANNEL_MAX (2) #define ZXDH_DEV_SDT_ID_MAX (256U) @@ -52,6 +54,94 @@ #define ZXDH_ACL_TBL_ID_NUM (8U) #define ZXDH_ACL_BLOCK_NUM (8U) +#define ZXDH_SMMU0_READ_REG_MAX_NUM (4) + +#define ZXDH_DTB_ITEM_ACK_SIZE (16) +#define ZXDH_DTB_ITEM_BUFF_SIZE (16 * 1024) +#define ZXDH_DTB_ITEM_SIZE (16 + 16 * 1024) +#define ZXDH_DTB_TAB_UP_SIZE ((16 + 16 * 1024) * 32) +#define ZXDH_DTB_TAB_DOWN_SIZE ((16 + 16 * 1024) * 32) + +#define ZXDH_DTB_TAB_UP_ACK_VLD_MASK (0x555555) +#define ZXDH_DTB_TAB_DOWN_ACK_VLD_MASK (0x5a5a5a) +#define ZXDH_DTB_TAB_ACK_IS_USING_MASK (0x11111100) +#define ZXDH_DTB_TAB_ACK_UNUSED_MASK (0x0) +#define ZXDH_DTB_TAB_ACK_SUCCESS_MASK (0xff) +#define ZXDH_DTB_TAB_ACK_FAILED_MASK (0x1) +#define ZXDH_DTB_TAB_ACK_CHECK_VALUE (0x12345678) + +#define ZXDH_DTB_TAB_ACK_VLD_SHIFT (104) +#define ZXDH_DTB_TAB_ACK_STATUS_SHIFT (96) +#define ZXDH_DTB_LEN_POS_SETP (16) +#define ZXDH_DTB_ITEM_ADD_OR_UPDATE (0) +#define ZXDH_DTB_ITEM_DELETE (1) + +#define ZXDH_ETCAM_LEN_SIZE (6) +#define ZXDH_ETCAM_BLOCK_NUM (8) +#define ZXDH_ETCAM_TBLID_NUM (8) +#define ZXDH_ETCAM_RAM_NUM (8) +#define ZXDH_ETCAM_RAM_WIDTH (80U) +#define ZXDH_ETCAM_WR_MASK_MAX (((uint32_t)1 << ZXDH_ETCAM_RAM_NUM) - 1) +#define ZXDH_ETCAM_WIDTH_MIN (ZXDH_ETCAM_RAM_WIDTH) +#define ZXDH_ETCAM_WIDTH_MAX (ZXDH_ETCAM_RAM_NUM * ZXDH_ETCAM_RAM_WIDTH) + +#define ZXDH_DTB_TABLE_DATA_BUFF_SIZE (16384) +#define ZXDH_DTB_TABLE_CMD_SIZE_BIT (128) + +#define ZXDH_SE_SMMU0_ERAM_BLOCK_NUM (32) +#define ZXDH_SE_SMMU0_ERAM_ADDR_NUM_PER_BLOCK (0x4000) +#define ZXDH_SE_SMMU0_ERAM_ADDR_NUM_TOTAL \ + (ZXDH_SE_SMMU0_ERAM_BLOCK_NUM * ZXDH_SE_SMMU0_ERAM_ADDR_NUM_PER_BLOCK) + +/* error code */ +#define ZXDH_RC_BASE (0x1000U) +#define ZXDH_PARAMETER_CHK_BASE (ZXDH_RC_BASE | 0x200) +#define ZXDH_PAR_CHK_POINT_NULL (ZXDH_PARAMETER_CHK_BASE | 0x001) +#define ZXDH_PAR_CHK_ARGIN_ZERO (ZXDH_PARAMETER_CHK_BASE | 0x002) +#define ZXDH_PAR_CHK_ARGIN_OVERFLOW (ZXDH_PARAMETER_CHK_BASE | 0x003) +#define ZXDH_PAR_CHK_ARGIN_ERROR (ZXDH_PARAMETER_CHK_BASE | 0x004) +#define ZXDH_PAR_CHK_INVALID_INDEX (ZXDH_PARAMETER_CHK_BASE | 0x005) +#define ZXDH_PAR_CHK_INVALID_RANGE (ZXDH_PARAMETER_CHK_BASE | 0x006) +#define ZXDH_PAR_CHK_INVALID_DEV_ID (ZXDH_PARAMETER_CHK_BASE | 0x007) +#define ZXDH_PAR_CHK_INVALID_PARA (ZXDH_PARAMETER_CHK_BASE | 0x008) + +#define ZXDH_ERAM128_BADDR_MASK (0x3FFFF80) + +#define ZXDH_DTB_TABLE_MODE_ERAM (0) +#define ZXDH_DTB_TABLE_MODE_DDR (1) +#define ZXDH_DTB_TABLE_MODE_ZCAM (2) +#define ZXDH_DTB_TABLE_MODE_ETCAM (3) +#define ZXDH_DTB_TABLE_MODE_MC_HASH (4) +#define ZXDH_DTB_TABLE_VALID (1) + +/* DTB module error code */ +#define ZXDH_RC_DTB_BASE (0xd00) +#define ZXDH_RC_DTB_MGR_EXIST (ZXDH_RC_DTB_BASE | 0x0) +#define ZXDH_RC_DTB_MGR_NOT_EXIST (ZXDH_RC_DTB_BASE | 0x1) +#define ZXDH_RC_DTB_QUEUE_RES_EMPTY (ZXDH_RC_DTB_BASE | 0x2) +#define ZXDH_RC_DTB_QUEUE_BUFF_SIZE_ERR (ZXDH_RC_DTB_BASE | 0x3) +#define ZXDH_RC_DTB_QUEUE_ITEM_HW_EMPTY (ZXDH_RC_DTB_BASE | 0x4) +#define ZXDH_RC_DTB_QUEUE_ITEM_SW_EMPTY (ZXDH_RC_DTB_BASE | 0x5) +#define ZXDH_RC_DTB_TAB_UP_BUFF_EMPTY (ZXDH_RC_DTB_BASE | 0x6) +#define ZXDH_RC_DTB_TAB_DOWN_BUFF_EMPTY (ZXDH_RC_DTB_BASE | 0x7) +#define ZXDH_RC_DTB_TAB_UP_TRANS_ERR (ZXDH_RC_DTB_BASE | 0x8) +#define ZXDH_RC_DTB_TAB_DOWN_TRANS_ERR (ZXDH_RC_DTB_BASE | 0x9) +#define ZXDH_RC_DTB_QUEUE_IS_WORKING (ZXDH_RC_DTB_BASE | 0xa) +#define
ZXDH_RC_DTB_QUEUE_IS_NOT_INIT (ZXDH_RC_DTB_BASE | 0xb) +#define ZXDH_RC_DTB_MEMORY_ALLOC_ERR (ZXDH_RC_DTB_BASE | 0xc) +#define ZXDH_RC_DTB_PARA_INVALID (ZXDH_RC_DTB_BASE | 0xd) +#define ZXDH_RC_DMA_RANGE_INVALID (ZXDH_RC_DTB_BASE | 0xe) +#define ZXDH_RC_DMA_RCV_DATA_EMPTY (ZXDH_RC_DTB_BASE | 0xf) +#define ZXDH_RC_DTB_LPM_INSERT_FAIL (ZXDH_RC_DTB_BASE | 0x10) +#define ZXDH_RC_DTB_LPM_DELETE_FAIL (ZXDH_RC_DTB_BASE | 0x11) +#define ZXDH_RC_DTB_DOWN_LEN_INVALID (ZXDH_RC_DTB_BASE | 0x12) +#define ZXDH_RC_DTB_DOWN_HASH_CONFLICT (ZXDH_RC_DTB_BASE | 0x13) +#define ZXDH_RC_DTB_QUEUE_NOT_ALLOC (ZXDH_RC_DTB_BASE | 0x14) +#define ZXDH_RC_DTB_QUEUE_NAME_ERROR (ZXDH_RC_DTB_BASE | 0x15) +#define ZXDH_RC_DTB_DUMP_SIZE_SMALL (ZXDH_RC_DTB_BASE | 0x16) +#define ZXDH_RC_DTB_SEARCH_VPORT_QUEUE_ZERO (ZXDH_RC_DTB_BASE | 0x17) +#define ZXDH_RC_DTB_QUEUE_NOT_ENABLE (ZXDH_RC_DTB_BASE | 0x18) + typedef enum zxdh_module_init_e { ZXDH_MODULE_INIT_NPPU = 0, ZXDH_MODULE_INIT_PPU, @@ -299,7 +389,127 @@ typedef struct zxdh_tlb_mgr_t { uint32_t pa_width; } ZXDH_TLB_MGR_T; +typedef enum zxdh_eram128_tbl_mode_e { + ZXDH_ERAM128_TBL_1b = 0, + ZXDH_ERAM128_TBL_32b = 1, + ZXDH_ERAM128_TBL_64b = 2, + ZXDH_ERAM128_TBL_128b = 3, + ZXDH_ERAM128_TBL_2b = 4, + ZXDH_ERAM128_TBL_4b = 5, + ZXDH_ERAM128_TBL_8b = 6, + ZXDH_ERAM128_TBL_16b = 7 +} ZXDH_ERAM128_TBL_MODE_E; + +typedef enum zxdh_eram128_opr_mode_e { + ZXDH_ERAM128_OPR_128b = 0, + ZXDH_ERAM128_OPR_64b = 1, + ZXDH_ERAM128_OPR_1b = 2, + ZXDH_ERAM128_OPR_32b = 3 + +} ZXDH_ERAM128_OPR_MODE_E; + +typedef enum zxdh_dtb_table_info_e { + ZXDH_DTB_TABLE_DDR = 0, + ZXDH_DTB_TABLE_ERAM_1 = 1, + ZXDH_DTB_TABLE_ERAM_64 = 2, + ZXDH_DTB_TABLE_ERAM_128 = 3, + ZXDH_DTB_TABLE_ZCAM = 4, + ZXDH_DTB_TABLE_ETCAM = 5, + ZXDH_DTB_TABLE_MC_HASH = 6, + ZXDH_DTB_TABLE_ENUM_MAX +} ZXDH_DTB_TABLE_INFO_E; + +typedef enum zxdh_sdt_table_type_e { + ZXDH_SDT_TBLT_INVALID = 0, + ZXDH_SDT_TBLT_ERAM = 1, + ZXDH_SDT_TBLT_DDR3 = 2, + ZXDH_SDT_TBLT_HASH = 3, + ZXDH_SDT_TBLT_LPM = 4, + ZXDH_SDT_TBLT_ETCAM = 5, + ZXDH_SDT_TBLT_PORTTBL = 6, + ZXDH_SDT_TBLT_MAX = 7, +} ZXDH_SDT_TABLE_TYPE_E; + +typedef struct zxdh_dtb_lpm_entry_t { + uint32_t dtb_len0; + uint8_t *p_data_buff0; + uint32_t dtb_len1; + uint8_t *p_data_buff1; +} ZXDH_DTB_LPM_ENTRY_T; + +typedef struct zxdh_dtb_entry_t { + uint8_t *cmd; + uint8_t *data; + uint32_t data_in_cmd_flag; + uint32_t data_size; +} ZXDH_DTB_ENTRY_T; + +typedef struct zxdh_dtb_eram_table_form_t { + uint32_t valid; + uint32_t type_mode; + uint32_t data_mode; + uint32_t cpu_wr; + uint32_t cpu_rd; + uint32_t cpu_rd_mode; + uint32_t addr; + uint32_t data_h; + uint32_t data_l; +} ZXDH_DTB_ERAM_TABLE_FORM_T; + +typedef struct zxdh_sdt_tbl_eram_t { + uint32_t table_type; + uint32_t eram_mode; + uint32_t eram_base_addr; + uint32_t eram_table_depth; + uint32_t eram_clutch_en; +} ZXDH_SDTTBL_ERAM_T; + +typedef union zxdh_endian_u { + unsigned int a; + unsigned char b; +} ZXDH_ENDIAN_U; + +typedef struct zxdh_dtb_field_t { + const char *p_name; + uint16_t lsb_pos; + uint16_t len; +} ZXDH_DTB_FIELD_T; + +typedef struct zxdh_dtb_table_t { + const char *table_type; + uint32_t table_no; + uint32_t field_num; + ZXDH_DTB_FIELD_T *p_fields; +} ZXDH_DTB_TABLE_T; + +typedef struct zxdh_dtb_queue_item_info_t { + uint32_t cmd_vld; + uint32_t cmd_type; + uint32_t int_en; + uint32_t data_len; + uint32_t data_laddr; + uint32_t data_hddr; +} ZXDH_DTB_QUEUE_ITEM_INFO_T; + +typedef struct zxdh_dtb_queue_len_t { + uint32_t cfg_dtb_cmd_type; + uint32_t cfg_dtb_cmd_int_en; + uint32_t cfg_queue_dtb_len; +} 
ZXDH_DTB_QUEUE_LEN_T; + +typedef struct zxdh_dtb_eram_entry_info_t { + uint32_t index; + uint32_t *p_data; +} ZXDH_DTB_ERAM_ENTRY_INFO_T; + +typedef struct zxdh_dtb_user_entry_t { + uint32_t sdt_no; + void *p_entry_data; +} ZXDH_DTB_USER_ENTRY_T; + int zxdh_np_host_init(uint32_t dev_id, ZXDH_DEV_INIT_CTRL_T *p_dev_init_ctrl); int zxdh_np_online_uninit(uint32_t dev_id, char *port_name, uint32_t queue_id); +int zxdh_np_dtb_table_entry_write(uint32_t dev_id, uint32_t queue_id, + uint32_t entrynum, ZXDH_DTB_USER_ENTRY_T *down_entries); #endif /* ZXDH_NP_H */ diff --git a/drivers/net/zxdh/zxdh_pci.h b/drivers/net/zxdh/zxdh_pci.h index d6487a574f..e3f13cb17d 100644 --- a/drivers/net/zxdh/zxdh_pci.h +++ b/drivers/net/zxdh/zxdh_pci.h @@ -12,6 +12,8 @@ #include "zxdh_ethdev.h" +#define ZXDH_PF_PCIE_ID(pcie_id) (((pcie_id) & 0xff00) | 1 << 11) + enum zxdh_msix_status { ZXDH_MSIX_NONE = 0, ZXDH_MSIX_DISABLED = 1, diff --git a/drivers/net/zxdh/zxdh_tables.c b/drivers/net/zxdh/zxdh_tables.c new file mode 100644 index 0000000000..91376e6ec0 --- /dev/null +++ b/drivers/net/zxdh/zxdh_tables.c @@ -0,0 +1,105 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#include "zxdh_ethdev.h" +#include "zxdh_msg.h" +#include "zxdh_np.h" +#include "zxdh_tables.h" +#include "zxdh_logs.h" + +#define ZXDH_SDT_VPORT_ATT_TABLE 1 +#define ZXDH_SDT_PANEL_ATT_TABLE 2 + +int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr) +{ + int ret = 0; + + ZXDH_DTB_ERAM_ENTRY_INFO_T entry = {vfid, (uint32_t *)port_attr}; + ZXDH_DTB_USER_ENTRY_T user_entry_write = {ZXDH_SDT_VPORT_ATT_TABLE, (void *)&entry}; + + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &user_entry_write); + if (ret != 0) + PMD_DRV_LOG(ERR, "write vport_att failed vfid:%d failed", vfid); + + return ret; +} + +int +zxdh_port_attr_init(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_port_attr_table port_attr = {0}; + struct zxdh_msg_info msg_info = {0}; + int ret; + + if (hw->is_pf) { + port_attr.hit_flag = 1; + port_attr.phy_port = hw->phyport; + port_attr.pf_vfid = zxdh_vport_to_vfid(hw->vport); + port_attr.rss_enable = 0; + if (!hw->is_pf) + port_attr.is_vf = 1; + + port_attr.mtu = dev->data->mtu; + port_attr.mtu_enable = 1; + port_attr.is_up = 0; + if (!port_attr.rss_enable) + port_attr.port_base_qid = 0; + + ret = zxdh_set_port_attr(hw->vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "write port_attr failed"); + ret = -1; + } + } else { + struct zxdh_vf_init_msg *vf_init_msg = &msg_info.data.vf_init_msg; + + zxdh_msg_head_build(hw, ZXDH_VF_PORT_INIT, &msg_info); + msg_info.msg_head.msg_type = ZXDH_VF_PORT_INIT; + vf_init_msg->link_up = 1; + vf_init_msg->base_qid = 0; + vf_init_msg->rss_enable = 0; + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "vf port_init failed"); + ret = -1; + } + } + return ret; +}; + +int zxdh_panel_table_init(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + int ret; + + if (!hw->is_pf) + return 0; + + struct zxdh_panel_table panel; + + memset(&panel, 0, sizeof(panel)); + panel.hit_flag = 1; + panel.pf_vfid = zxdh_vport_to_vfid(hw->vport); + panel.mtu_enable = 1; + panel.mtu = dev->data->mtu; + + ZXDH_DTB_ERAM_ENTRY_INFO_T panel_entry = { + .index = hw->phyport, + .p_data = (uint32_t *)&panel + }; + ZXDH_DTB_USER_ENTRY_T entry = { + .sdt_no = ZXDH_SDT_PANEL_ATT_TABLE, + .p_entry_data = (void *)&panel_entry + }; + ret = 
zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, g_dtb_data.queueid, 1, &entry); + + if (ret) { + PMD_DRV_LOG(ERR, "Insert eram-panel failed, code:%u", ret); + ret = -1; + } + + return ret; +} diff --git a/drivers/net/zxdh/zxdh_tables.h b/drivers/net/zxdh/zxdh_tables.h new file mode 100644 index 0000000000..5d34af2f05 --- /dev/null +++ b/drivers/net/zxdh/zxdh_tables.h @@ -0,0 +1,148 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#ifndef ZXDH_TABLES_H +#define ZXDH_TABLES_H + +#include <stdint.h> + +extern struct zxdh_dtb_shared_data g_dtb_data; + +#define ZXDH_DEVICE_NO 0 + +struct zxdh_port_attr_table { +#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN + uint8_t byte4_rsv1: 1; + uint8_t ingress_meter_enable: 1; + uint8_t egress_meter_enable: 1; + uint8_t byte4_rsv2: 2; + uint8_t fd_enable: 1; + uint8_t vepa_enable: 1; + uint8_t spoof_check_enable: 1; + + uint8_t inline_sec_offload: 1; + uint8_t ovs_enable: 1; + uint8_t lag_enable: 1; + uint8_t is_passthrough: 1; + uint8_t is_vf: 1; + uint8_t virtion_version: 2; + uint8_t virtio_enable: 1; + + uint8_t accelerator_offload_flag: 1; + uint8_t lro_offload: 1; + uint8_t ip_fragment_offload: 1; + uint8_t tcp_udp_checksum_offload: 1; + uint8_t ip_checksum_offload: 1; + uint8_t outer_ip_checksum_offload: 1; + uint8_t is_up: 1; + uint8_t rsv1: 1; + + uint8_t rsv3 : 1; + uint8_t rdma_offload_enable: 1; + uint8_t vlan_filter_enable: 1; + uint8_t vlan_strip_offload: 1; + uint8_t qinq_valn_strip_offload: 1; + uint8_t rss_enable: 1; + uint8_t mtu_enable: 1; + uint8_t hit_flag: 1; + + uint16_t mtu; + + uint16_t port_base_qid : 12; + uint16_t hash_search_index : 3; + uint16_t rsv: 1; + + uint8_t rss_hash_factor; + + uint8_t hash_alg: 4; + uint8_t phy_port: 4; + + uint16_t lag_id : 3; + uint16_t pf_vfid : 11; + uint16_t ingress_tm_enable : 1; + uint16_t egress_tm_enable : 1; + + uint16_t tpid; + + uint16_t vhca : 10; + uint16_t uplink_port : 6; +#else + uint8_t rsv3 : 1; + uint8_t rdma_offload_enable: 1; + uint8_t vlan_filter_enable: 1; + uint8_t vlan_strip_offload: 1; + uint8_t qinq_valn_strip_offload: 1; + uint8_t rss_enable: 1; + uint8_t mtu_enable: 1; + uint8_t hit_flag: 1; + + uint8_t accelerator_offload_flag: 1; + uint8_t lro_offload: 1; + uint8_t ip_fragment_offload: 1; + uint8_t tcp_udp_checksum_offload: 1; + uint8_t ip_checksum_offload: 1; + uint8_t outer_ip_checksum_offload: 1; + uint8_t is_up: 1; + uint8_t rsv1: 1; + + uint8_t inline_sec_offload: 1; + uint8_t ovs_enable: 1; + uint8_t lag_enable: 1; + uint8_t is_passthrough: 1; + uint8_t is_vf: 1; + uint8_t virtion_version: 2; + uint8_t virtio_enable: 1; + + uint8_t byte4_rsv1: 1; + uint8_t ingress_meter_enable: 1; + uint8_t egress_meter_enable: 1; + uint8_t byte4_rsv2: 2; + uint8_t fd_enable: 1; + uint8_t vepa_enable: 1; + uint8_t spoof_check_enable: 1; + + uint16_t port_base_qid : 12; + uint16_t hash_search_index : 3; + uint16_t rsv: 1; + + uint16_t mtu; + + uint16_t lag_id : 3; + uint16_t pf_vfid : 11; + uint16_t ingress_tm_enable : 1; + uint16_t egress_tm_enable : 1; + + uint8_t hash_alg: 4; + uint8_t phy_port: 4; + + uint8_t rss_hash_factor; + + uint16_t tpid; + + uint16_t vhca : 10; + uint16_t uplink_port : 6; +#endif +}; + +struct zxdh_panel_table { + uint16_t port_vfid_1588 : 11, + rsv2 : 5; + uint16_t pf_vfid : 11, + rsv1 : 1, + enable_1588_tc : 2, + trust_mode : 1, + hit_flag : 1; + uint32_t mtu : 16, + mtu_enable : 1, + rsv : 3, + tm_base_queue : 12; + uint32_t rsv_1; + uint32_t rsv_2; +}; /* 16B */ + +int zxdh_port_attr_init(struct rte_eth_dev *dev); 
+int zxdh_panel_table_init(struct rte_eth_dev *dev); +int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); + +#endif /* ZXDH_TABLES_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 100335 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
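For readers tracing the download path in this patch: zxdh_np_dtb_se_smmu0_ind_write() folds an (eram_base_addr, index) pair into one flat indirect address before handing it to zxdh_np_dtb_smmu0_write_entry_data(). A minimal standalone sketch of that address math, restating the shifts from the patch (the helper name and the bare mode constants 0/1/2 are illustrative; the mask value and enum meanings are copied from zxdh_np.h):

    #include <stdint.h>

    #define ERAM128_BADDR_MASK 0x3FFFF80u   /* ZXDH_ERAM128_BADDR_MASK */

    /* base_addr counts 128-bit ERAM rows; the entry index is scaled per
     * operand width so that every mode lands on the same 128-bit row grid.
     */
    static uint32_t eram_ind_addr(uint32_t base_addr, uint32_t index, uint32_t opr_mode)
    {
        uint32_t temp_idx;

        switch (opr_mode) {
        case 0: /* ZXDH_ERAM128_OPR_128b: one full row per index */
            temp_idx = index << 7;
            break;
        case 1: /* ZXDH_ERAM128_OPR_64b: two 64-bit halves per row */
            temp_idx = index << 6;
            break;
        default: /* ZXDH_ERAM128_OPR_1b: bit index used as-is */
            temp_idx = index;
            break;
        }
        return ((base_addr << 7) & ERAM128_BADDR_MASK) + temp_idx;
    }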
* [PATCH v5 04/15] net/zxdh: port tables uninit implementations 2024-12-23 11:02 ` [PATCH v5 00/15] net/zxdh: updated net zxdh driver Junlong Wang ` (2 preceding siblings ...) 2024-12-23 11:02 ` [PATCH v5 03/15] net/zxdh: port tables init implementations Junlong Wang @ 2024-12-23 11:02 ` Junlong Wang 2024-12-23 11:02 ` [PATCH v5 05/15] net/zxdh: rx/tx queue setup and intr enable Junlong Wang ` (12 subsequent siblings) 16 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-23 11:02 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 8641 bytes --] delete port tables in the host. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/zxdh_ethdev.c | 18 ++++++ drivers/net/zxdh/zxdh_msg.h | 1 + drivers/net/zxdh/zxdh_np.c | 103 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_np.h | 9 +++ drivers/net/zxdh/zxdh_tables.c | 33 ++++++++++- drivers/net/zxdh/zxdh_tables.h | 1 + 6 files changed, 164 insertions(+), 1 deletion(-) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index ff44816384..717a1d2b0b 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -887,12 +887,30 @@ zxdh_np_uninit(struct rte_eth_dev *dev) zxdh_np_dtb_data_res_free(hw); } +static int +zxdh_tables_uninit(struct rte_eth_dev *dev) +{ + int ret; + + ret = zxdh_port_attr_uninit(dev); + if (ret) + PMD_DRV_LOG(ERR, "zxdh_port_attr_uninit failed"); + + return ret; +} + static int zxdh_dev_close(struct rte_eth_dev *dev) { struct zxdh_hw *hw = dev->data->dev_private; int ret = 0; + ret = zxdh_tables_uninit(dev); + if (ret != 0) { + PMD_DRV_LOG(ERR, "%s: tables uninit %s failed", __func__, dev->device->name); + return -1; + } + zxdh_intr_release(dev); zxdh_np_uninit(dev); zxdh_pci_reset(hw); diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index b7b17b8696..613ca71170 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -167,6 +167,7 @@ enum pciebar_layout_type { enum zxdh_msg_type { ZXDH_NULL = 0, ZXDH_VF_PORT_INIT = 1, + ZXDH_VF_PORT_UNINIT = 2, ZXDH_MSG_TYPE_END, }; diff --git a/drivers/net/zxdh/zxdh_np.c b/drivers/net/zxdh/zxdh_np.c index db536d96e3..99a7dc11b4 100644 --- a/drivers/net/zxdh/zxdh_np.c +++ b/drivers/net/zxdh/zxdh_np.c @@ -25,6 +25,7 @@ ZXDH_RISCV_DTB_MGR *p_riscv_dtb_queue_mgr[ZXDH_DEV_CHANNEL_MAX]; ZXDH_TLB_MGR_T *g_p_dpp_tlb_mgr[ZXDH_DEV_CHANNEL_MAX]; ZXDH_REG_T g_dpp_reg_info[4]; ZXDH_DTB_TABLE_T g_dpp_dtb_table_info[4]; +ZXDH_SDT_TBL_DATA_T g_sdt_info[ZXDH_DEV_CHANNEL_MAX][ZXDH_DEV_SDT_ID_MAX]; #define ZXDH_SDT_MGR_PTR_GET() (&g_sdt_mgr) #define ZXDH_SDT_SOFT_TBL_GET(id) (g_sdt_mgr.sdt_tbl_array[id]) @@ -1454,3 +1455,105 @@ zxdh_np_dtb_table_entry_write(uint32_t dev_id, return rc; } + +static uint32_t +zxdh_np_sdt_tbl_data_get(uint32_t dev_id, uint32_t sdt_no, ZXDH_SDT_TBL_DATA_T *p_sdt_data) +{ + uint32_t rc = 0; + + p_sdt_data->data_high32 = g_sdt_info[dev_id][sdt_no].data_high32; + p_sdt_data->data_low32 = g_sdt_info[dev_id][sdt_no].data_low32; + + return rc; +} + +int +zxdh_np_dtb_table_entry_delete(uint32_t dev_id, + uint32_t queue_id, + uint32_t entrynum, + ZXDH_DTB_USER_ENTRY_T *delete_entries) +{ + ZXDH_SDT_TBL_DATA_T sdt_tbl = {0}; + ZXDH_DTB_USER_ENTRY_T *pentry = NULL; + ZXDH_DTB_ENTRY_T dtb_one_entry = {0}; + uint8_t entry_cmd[ZXDH_DTB_TABLE_CMD_SIZE_BIT / 8] = {0}; + uint8_t entry_data[ZXDH_ETCAM_WIDTH_MAX / 8] = {0}; + uint8_t *p_data_buff = NULL; + uint8_t *p_data_buff_ex = NULL; + uint32_t tbl_type
= 0; + uint32_t element_id = 0xff; + uint32_t one_dtb_len = 0; + uint32_t dtb_len = 0; + uint32_t entry_index; + uint32_t sdt_no; + uint32_t addr_offset; + uint32_t max_size; + uint32_t rc; + + ZXDH_COMM_CHECK_POINT(delete_entries); + + p_data_buff = rte_calloc(NULL, 1, ZXDH_DTB_TABLE_DATA_BUFF_SIZE, 0); + ZXDH_COMM_CHECK_POINT(p_data_buff); + + p_data_buff_ex = rte_calloc(NULL, 1, ZXDH_DTB_TABLE_DATA_BUFF_SIZE, 0); + ZXDH_COMM_CHECK_POINT_MEMORY_FREE(p_data_buff_ex, p_data_buff); + + dtb_one_entry.cmd = entry_cmd; + dtb_one_entry.data = entry_data; + + max_size = (ZXDH_DTB_TABLE_DATA_BUFF_SIZE / 16) - 1; + + for (entry_index = 0; entry_index < entrynum; entry_index++) { + pentry = delete_entries + entry_index; + + sdt_no = pentry->sdt_no; + rc = zxdh_np_sdt_tbl_data_get(dev_id, sdt_no, &sdt_tbl); + /* table type lives in the high SDT word */ + tbl_type = (sdt_tbl.data_high32 >> ZXDH_SDT_H_TBL_TYPE_BT_POS) & + ((1U << ZXDH_SDT_H_TBL_TYPE_BT_LEN) - 1); + switch (tbl_type) { + case ZXDH_SDT_TBLT_ERAM: + { + rc = zxdh_np_dtb_eram_one_entry(dev_id, sdt_no, ZXDH_DTB_ITEM_DELETE, + pentry->p_entry_data, &one_dtb_len, &dtb_one_entry); + break; + } + default: + { + PMD_DRV_LOG(ERR, "SDT table_type[ %d ] is invalid!", tbl_type); + rte_free(p_data_buff); + rte_free(p_data_buff_ex); + return 1; + } + } + + addr_offset = dtb_len * ZXDH_DTB_LEN_POS_SETP; + dtb_len += one_dtb_len; + if (dtb_len > max_size) { + rte_free(p_data_buff); + rte_free(p_data_buff_ex); + PMD_DRV_LOG(ERR, " %s error dtb_len>%u!", __func__, + max_size); + return ZXDH_RC_DTB_DOWN_LEN_INVALID; + } + + rc = zxdh_np_dtb_data_write(p_data_buff, addr_offset, &dtb_one_entry); + memset(entry_cmd, 0x0, sizeof(entry_cmd)); + memset(entry_data, 0x0, sizeof(entry_data)); + } + + if (dtb_len == 0) { + rte_free(p_data_buff); + rte_free(p_data_buff_ex); + return ZXDH_RC_DTB_DOWN_LEN_INVALID; + } + + rc = zxdh_np_dtb_write_down_table_data(dev_id, + queue_id, + dtb_len * 16, + p_data_buff_ex, + &element_id); + rte_free(p_data_buff); + ZXDH_COMM_CHECK_RC_MEMORY_FREE_NO_ASSERT(rc, + "dpp_dtb_write_down_table_data", p_data_buff_ex); + + rte_free(p_data_buff_ex); + return 0; +} diff --git a/drivers/net/zxdh/zxdh_np.h b/drivers/net/zxdh/zxdh_np.h index 40961c02a2..42a652dd6b 100644 --- a/drivers/net/zxdh/zxdh_np.h +++ b/drivers/net/zxdh/zxdh_np.h @@ -20,6 +20,8 @@ #define ZXDH_PPU_CLUSTER_NUM (6) #define ZXDH_PPU_INSTR_MEM_NUM (3) #define ZXDH_SDT_CFG_LEN (2) +#define ZXDH_SDT_H_TBL_TYPE_BT_POS (29) +#define ZXDH_SDT_H_TBL_TYPE_BT_LEN (3) #define ZXDH_RC_DEV_BASE (0x600) #define ZXDH_RC_DEV_PARA_INVALID (ZXDH_RC_DEV_BASE | 0x0) @@ -507,9 +509,16 @@ typedef struct zxdh_dtb_user_entry_t { void *p_entry_data; } ZXDH_DTB_USER_ENTRY_T; +typedef struct zxdh_sdt_tbl_data_t { + uint32_t data_high32; + uint32_t data_low32; +} ZXDH_SDT_TBL_DATA_T; + int zxdh_np_host_init(uint32_t dev_id, ZXDH_DEV_INIT_CTRL_T *p_dev_init_ctrl); int zxdh_np_online_uninit(uint32_t dev_id, char *port_name, uint32_t queue_id); int zxdh_np_dtb_table_entry_write(uint32_t dev_id, uint32_t queue_id, uint32_t entrynum, ZXDH_DTB_USER_ENTRY_T *down_entries); +int zxdh_np_dtb_table_entry_delete(uint32_t dev_id, uint32_t queue_id, + uint32_t entrynum, ZXDH_DTB_USER_ENTRY_T *delete_entries); #endif /* ZXDH_NP_H */ diff --git a/drivers/net/zxdh/zxdh_tables.c b/drivers/net/zxdh/zxdh_tables.c index 91376e6ec0..9fd184e612 100644 --- a/drivers/net/zxdh/zxdh_tables.c +++ b/drivers/net/zxdh/zxdh_tables.c @@ -11,7 +11,8 @@ #define ZXDH_SDT_VPORT_ATT_TABLE 1 #define ZXDH_SDT_PANEL_ATT_TABLE 2 -int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr) +int +zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr) { int
ret = 0; @@ -70,6 +71,36 @@ zxdh_port_attr_init(struct rte_eth_dev *dev) return ret; }; +int +zxdh_port_attr_uninit(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_info msg_info = {0}; + struct zxdh_port_attr_table port_attr = {0}; + int ret = 0; + + if (hw->is_pf == 1) { + ZXDH_DTB_ERAM_ENTRY_INFO_T port_attr_entry = {hw->vfid, (uint32_t *)&port_attr}; + ZXDH_DTB_USER_ENTRY_T entry = { + .sdt_no = ZXDH_SDT_VPORT_ATT_TABLE, + .p_entry_data = (void *)&port_attr_entry + }; + ret = zxdh_np_dtb_table_entry_delete(ZXDH_DEVICE_NO, g_dtb_data.queueid, 1, &entry); + if (ret) { + PMD_DRV_LOG(ERR, "delete port attr table failed"); + ret = -1; + } + } else { + zxdh_msg_head_build(hw, ZXDH_VF_PORT_UNINIT, &msg_info); + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "vf port tables uninit failed"); + ret = -1; + } + } + return ret; +} + int zxdh_panel_table_init(struct rte_eth_dev *dev) { struct zxdh_hw *hw = dev->data->dev_private; diff --git a/drivers/net/zxdh/zxdh_tables.h b/drivers/net/zxdh/zxdh_tables.h index 5d34af2f05..5e9b36faee 100644 --- a/drivers/net/zxdh/zxdh_tables.h +++ b/drivers/net/zxdh/zxdh_tables.h @@ -144,5 +144,6 @@ struct zxdh_panel_table { int zxdh_port_attr_init(struct rte_eth_dev *dev); int zxdh_panel_table_init(struct rte_eth_dev *dev); int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); +int zxdh_port_attr_uninit(struct rte_eth_dev *dev); #endif /* ZXDH_TABLES_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 18675 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
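The delete path introduced here is driven the same way as the write path: the caller wraps a table-specific entry struct in a ZXDH_DTB_USER_ENTRY_T. A hypothetical standalone caller, modeled on zxdh_port_attr_uninit() above (sketch only; the device, queue, and table numbers are illustrative, and on a delete the driver downloads a zeroed entry, so p_data is just a placeholder while index selects the row):

    #include <stdint.h>
    #include "zxdh_np.h"

    /* Sketch: clear one ERAM-backed row through the DTB download queue.
     * sdt_no 1 mirrors ZXDH_SDT_VPORT_ATT_TABLE and dev 0 mirrors
     * ZXDH_DEVICE_NO from zxdh_tables.c.
     */
    static int example_clear_vport_row(uint16_t vfid, uint32_t queue_id)
    {
        uint32_t placeholder[4] = {0};
        ZXDH_DTB_ERAM_ENTRY_INFO_T eram = {
            .index = vfid,
            .p_data = placeholder,
        };
        ZXDH_DTB_USER_ENTRY_T entry = {
            .sdt_no = 1,
            .p_entry_data = (void *)&eram,
        };

        return zxdh_np_dtb_table_entry_delete(0, queue_id, 1, &entry);
    }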
* [PATCH v5 05/15] net/zxdh: rx/tx queue setup and intr enable 2024-12-23 11:02 ` [PATCH v5 00/15] net/zxdh: updated net zxdh driver Junlong Wang ` (3 preceding siblings ...) 2024-12-23 11:02 ` [PATCH v5 04/15] net/zxdh: port tables uninit implementations Junlong Wang @ 2024-12-23 11:02 ` Junlong Wang 2024-12-23 11:02 ` [PATCH v5 06/15] net/zxdh: dev start/stop ops implementations Junlong Wang ` (11 subsequent siblings) 16 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-23 11:02 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 7822 bytes --] rx/tx queue setup and intr enable implementations. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/zxdh_ethdev.c | 4 + drivers/net/zxdh/zxdh_queue.c | 149 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_queue.h | 33 ++++++++ 3 files changed, 186 insertions(+) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 717a1d2b0b..521d7ed433 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -933,6 +933,10 @@ static const struct eth_dev_ops zxdh_eth_dev_ops = { .dev_configure = zxdh_dev_configure, .dev_close = zxdh_dev_close, .dev_infos_get = zxdh_dev_infos_get, + .rx_queue_setup = zxdh_dev_rx_queue_setup, + .tx_queue_setup = zxdh_dev_tx_queue_setup, + .rx_queue_intr_enable = zxdh_dev_rx_queue_intr_enable, + .rx_queue_intr_disable = zxdh_dev_rx_queue_intr_disable, }; static int32_t diff --git a/drivers/net/zxdh/zxdh_queue.c b/drivers/net/zxdh/zxdh_queue.c index b4ef90ea36..af21f046ad 100644 --- a/drivers/net/zxdh/zxdh_queue.c +++ b/drivers/net/zxdh/zxdh_queue.c @@ -12,6 +12,11 @@ #include "zxdh_common.h" #include "zxdh_msg.h" +#define ZXDH_MBUF_MIN_SIZE sizeof(struct zxdh_net_hdr_dl) +#define ZXDH_MBUF_SIZE_4K 4096 +#define ZXDH_RX_FREE_THRESH 32 +#define ZXDH_TX_FREE_THRESH 32 + struct rte_mbuf * zxdh_queue_detach_unused(struct zxdh_virtqueue *vq) { @@ -125,3 +130,147 @@ zxdh_free_queues(struct rte_eth_dev *dev) return 0; } + +static int +zxdh_check_mempool(struct rte_mempool *mp, uint16_t offset, uint16_t min_length) +{ + uint16_t data_room_size; + + if (mp == NULL) + return -EINVAL; + data_room_size = rte_pktmbuf_data_room_size(mp); + if (data_room_size < offset + min_length) { + PMD_RX_LOG(ERR, + "%s mbuf_data_room_size %u < %u (%u + %u)", + mp->name, data_room_size, + offset + min_length, offset, min_length); + return -EINVAL; + } + return 0; +} + +int32_t +zxdh_dev_rx_queue_setup(struct rte_eth_dev *dev, + uint16_t queue_idx, + uint16_t nb_desc, + uint32_t socket_id __rte_unused, + const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mp) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint16_t vtpci_logic_qidx = 2 * queue_idx + ZXDH_RQ_QUEUE_IDX; + struct zxdh_virtqueue *vq = hw->vqs[vtpci_logic_qidx]; + int32_t ret = 0; + + if (rx_conf->rx_deferred_start) { + PMD_RX_LOG(ERR, "Rx deferred start is not supported"); + return -EINVAL; + } + uint16_t rx_free_thresh = rx_conf->rx_free_thresh; + + if (rx_free_thresh == 0) + rx_free_thresh = RTE_MIN(vq->vq_nentries / 4, ZXDH_RX_FREE_THRESH); + + /* rx_free_thresh must be a multiple of four. */ + if (rx_free_thresh & 0x3) { + PMD_RX_LOG(ERR, "(rx_free_thresh=%u port=%u queue=%u)", + rx_free_thresh, dev->data->port_id, queue_idx); + return -EINVAL; + } + /* rx_free_thresh must be less than the number of RX entries */ + if (rx_free_thresh >= vq->vq_nentries) { + PMD_RX_LOG(ERR, "RX entries (%u).
(rx_free_thresh=%u port=%u queue=%u)", + vq->vq_nentries, rx_free_thresh, dev->data->port_id, queue_idx); + return -EINVAL; + } + vq->vq_free_thresh = rx_free_thresh; + nb_desc = ZXDH_QUEUE_DEPTH; + + vq->vq_free_cnt = RTE_MIN(vq->vq_free_cnt, nb_desc); + struct zxdh_virtnet_rx *rxvq = &vq->rxq; + + rxvq->queue_id = vtpci_logic_qidx; + + int mbuf_min_size = ZXDH_MBUF_MIN_SIZE; + + if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) + mbuf_min_size = ZXDH_MBUF_SIZE_4K; + + ret = zxdh_check_mempool(mp, RTE_PKTMBUF_HEADROOM, mbuf_min_size); + if (ret != 0) { + PMD_RX_LOG(ERR, + "rxq setup but mpool size too small(<%d) failed", mbuf_min_size); + return -EINVAL; + } + rxvq->mpool = mp; + if (queue_idx < dev->data->nb_rx_queues) + dev->data->rx_queues[queue_idx] = rxvq; + + return 0; +} + +int32_t +zxdh_dev_tx_queue_setup(struct rte_eth_dev *dev, + uint16_t queue_idx, + uint16_t nb_desc, + uint32_t socket_id __rte_unused, + const struct rte_eth_txconf *tx_conf) +{ + uint16_t vtpci_logic_qidx = 2 * queue_idx + ZXDH_TQ_QUEUE_IDX; + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_virtqueue *vq = hw->vqs[vtpci_logic_qidx]; + struct zxdh_virtnet_tx *txvq = NULL; + uint16_t tx_free_thresh = 0; + + if (tx_conf->tx_deferred_start) { + PMD_TX_LOG(ERR, "Tx deferred start is not supported"); + return -EINVAL; + } + + nb_desc = ZXDH_QUEUE_DEPTH; + + vq->vq_free_cnt = RTE_MIN(vq->vq_free_cnt, nb_desc); + + txvq = &vq->txq; + txvq->queue_id = vtpci_logic_qidx; + + tx_free_thresh = tx_conf->tx_free_thresh; + if (tx_free_thresh == 0) + tx_free_thresh = RTE_MIN(vq->vq_nentries / 4, ZXDH_TX_FREE_THRESH); + + /* tx_free_thresh must be less than the number of TX entries minus 3 */ + if (tx_free_thresh >= (vq->vq_nentries - 3)) { + PMD_TX_LOG(ERR, "TX entries - 3 (%u). (tx_free_thresh=%u port=%u queue=%u)", + vq->vq_nentries - 3, tx_free_thresh, dev->data->port_id, queue_idx); + return -EINVAL; + } + + vq->vq_free_thresh = tx_free_thresh; + + if (queue_idx < dev->data->nb_tx_queues) + dev->data->tx_queues[queue_idx] = txvq; + + return 0; +} + +int32_t +zxdh_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_virtnet_rx *rxvq = dev->data->rx_queues[queue_id]; + struct zxdh_virtqueue *vq = rxvq->vq; + + zxdh_queue_enable_intr(vq); + zxdh_mb(hw->weak_barriers); + return 0; +} + +int32_t +zxdh_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id) +{ + struct zxdh_virtnet_rx *rxvq = dev->data->rx_queues[queue_id]; + struct zxdh_virtqueue *vq = rxvq->vq; + + zxdh_queue_disable_intr(vq); + return 0; +} diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h index 1304d5e4ea..2f602d894f 100644 --- a/drivers/net/zxdh/zxdh_queue.h +++ b/drivers/net/zxdh/zxdh_queue.h @@ -8,6 +8,7 @@ #include <stdint.h> #include <rte_common.h> +#include <rte_atomic.h> #include "zxdh_ethdev.h" #include "zxdh_rxtx.h" @@ -30,6 +31,7 @@ enum { ZXDH_VTNET_RQ = 0, ZXDH_VTNET_TQ = 1 }; #define ZXDH_RING_EVENT_FLAGS_DESC 0x2 #define ZXDH_VQ_RING_DESC_CHAIN_END 32768 +#define ZXDH_QUEUE_DEPTH 1024 /* * ring descriptors: 16 bytes. 
@@ -270,8 +272,39 @@ zxdh_queue_disable_intr(struct zxdh_virtqueue *vq) } } +static inline void +zxdh_queue_enable_intr(struct zxdh_virtqueue *vq) +{ + if (vq->vq_packed.event_flags_shadow == ZXDH_RING_EVENT_FLAGS_DISABLE) { + vq->vq_packed.event_flags_shadow = ZXDH_RING_EVENT_FLAGS_ENABLE; + vq->vq_packed.ring.driver->desc_event_flags = vq->vq_packed.event_flags_shadow; + } +} + +static inline void +zxdh_mb(uint8_t weak_barriers) +{ + if (weak_barriers) + rte_atomic_thread_fence(rte_memory_order_seq_cst); + else + rte_mb(); +} + struct rte_mbuf *zxdh_queue_detach_unused(struct zxdh_virtqueue *vq); int32_t zxdh_free_queues(struct rte_eth_dev *dev); int32_t zxdh_get_queue_type(uint16_t vtpci_queue_idx); +int32_t zxdh_dev_tx_queue_setup(struct rte_eth_dev *dev, + uint16_t queue_idx, + uint16_t nb_desc, + uint32_t socket_id __rte_unused, + const struct rte_eth_txconf *tx_conf); +int32_t zxdh_dev_rx_queue_setup(struct rte_eth_dev *dev, + uint16_t queue_idx, + uint16_t nb_desc, + uint32_t socket_id __rte_unused, + const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mp); +int32_t zxdh_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id); +int32_t zxdh_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id); #endif /* ZXDH_QUEUE_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 17339 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
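The rx_free_thresh handling in zxdh_dev_rx_queue_setup() above reduces to three rules; a small sketch an application could use when filling rte_eth_rxconf (the helper name is hypothetical; the default value comes from the patch):

    #include <stdbool.h>
    #include <stdint.h>

    /* Sketch: thresh == 0 lets the driver pick min(nentries / 4, 32)
     * (ZXDH_RX_FREE_THRESH); otherwise the value must be a multiple of
     * four and strictly below the ring depth.
     */
    static bool zxdh_rx_free_thresh_ok(uint16_t thresh, uint16_t nentries)
    {
        if (thresh == 0)
            return true;           /* driver default applies */
        if (thresh & 0x3)
            return false;          /* not a multiple of four */
        return thresh < nentries;  /* must leave entries in the ring */
    }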
* [PATCH v5 06/15] net/zxdh: dev start/stop ops implementations 2024-12-23 11:02 ` [PATCH v5 00/15] net/zxdh: updated net zxdh driver Junlong Wang ` (4 preceding siblings ...) 2024-12-23 11:02 ` [PATCH v5 05/15] net/zxdh: rx/tx queue setup and intr enable Junlong Wang @ 2024-12-23 11:02 ` Junlong Wang 2024-12-23 11:02 ` [PATCH v5 07/15] net/zxdh: provided dev simple tx implementations Junlong Wang ` (10 subsequent siblings) 16 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-23 11:02 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 14258 bytes --] dev start/stop implementations, start/stop the rx/tx queues. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- doc/guides/nics/features/zxdh.ini | 2 + doc/guides/nics/zxdh.rst | 2 + drivers/net/zxdh/zxdh_ethdev.c | 71 ++++++++++++++++++++++++ drivers/net/zxdh/zxdh_pci.c | 21 +++++++ drivers/net/zxdh/zxdh_pci.h | 1 + drivers/net/zxdh/zxdh_queue.c | 91 +++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_queue.h | 69 +++++++++++++++++++++++ drivers/net/zxdh/zxdh_rxtx.h | 14 ++--- 8 files changed, 263 insertions(+), 8 deletions(-) diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index 05c8091ed7..7b72be5f25 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -7,3 +7,5 @@ Linux = Y x86-64 = Y ARMv8 = Y +SR-IOV = Y +Multiprocess aware = Y diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index 2144753d75..eb970a888f 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -18,6 +18,8 @@ Features Features of the ZXDH PMD are: - Multi arch support: x86_64, ARMv8. +- Multiple queues for TX and RX +- SR-IOV VF Driver compilation and testing diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 521d7ed433..6e603b967e 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -899,12 +899,40 @@ zxdh_tables_uninit(struct rte_eth_dev *dev) return ret; } +static int +zxdh_dev_stop(struct rte_eth_dev *dev) +{ + uint16_t i; + int ret; + + if (dev->data->dev_started == 0) + return 0; + + ret = zxdh_intr_disable(dev); + if (ret) { + PMD_DRV_LOG(ERR, "intr disable failed"); + return ret; + } + for (i = 0; i < dev->data->nb_rx_queues; i++) + dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED; + for (i = 0; i < dev->data->nb_tx_queues; i++) + dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED; + + return 0; +} + static int zxdh_dev_close(struct rte_eth_dev *dev) { struct zxdh_hw *hw = dev->data->dev_private; int ret = 0; + ret = zxdh_dev_stop(dev); + if (ret != 0) { + PMD_DRV_LOG(ERR, " stop port %s failed.", dev->device->name); + return -1; + } + ret = zxdh_tables_uninit(dev); if (ret != 0) { PMD_DRV_LOG(ERR, "%s :tables uninit %s failed ", __func__, dev->device->name); @@ -928,9 +956,52 @@ zxdh_dev_close(struct rte_eth_dev *dev) return ret; } +static int +zxdh_dev_start(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_virtqueue *vq; + int32_t ret; + uint16_t logic_qidx; + uint16_t i; + + for (i = 0; i < dev->data->nb_rx_queues; i++) { + logic_qidx = 2 * i + ZXDH_RQ_QUEUE_IDX; + ret = zxdh_dev_rx_queue_setup_finish(dev, logic_qidx); + if (ret < 0) + return ret; + } + ret = zxdh_intr_enable(dev); + if (ret) { + PMD_DRV_LOG(ERR, "interrupt enable failed"); + return -EINVAL; + } + + for (i = 0; i < dev->data->nb_rx_queues; i++) { + logic_qidx = 2 * i + ZXDH_RQ_QUEUE_IDX; 
+ vq = hw->vqs[logic_qidx]; + /* Flush the old packets */ + zxdh_queue_rxvq_flush(vq); + zxdh_queue_notify(vq); + } + for (i = 0; i < dev->data->nb_tx_queues; i++) { + logic_qidx = 2 * i + ZXDH_TQ_QUEUE_IDX; + vq = hw->vqs[logic_qidx]; + zxdh_queue_notify(vq); + } + for (i = 0; i < dev->data->nb_rx_queues; i++) + dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED; + for (i = 0; i < dev->data->nb_tx_queues; i++) + dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED; + + return 0; +} + /* dev_ops for zxdh, bare necessities for basic operation */ static const struct eth_dev_ops zxdh_eth_dev_ops = { .dev_configure = zxdh_dev_configure, + .dev_start = zxdh_dev_start, + .dev_stop = zxdh_dev_stop, .dev_close = zxdh_dev_close, .dev_infos_get = zxdh_dev_infos_get, .rx_queue_setup = zxdh_dev_rx_queue_setup, diff --git a/drivers/net/zxdh/zxdh_pci.c b/drivers/net/zxdh/zxdh_pci.c index 250e67d560..6b2c4482b2 100644 --- a/drivers/net/zxdh/zxdh_pci.c +++ b/drivers/net/zxdh/zxdh_pci.c @@ -202,6 +202,26 @@ zxdh_del_queue(struct zxdh_hw *hw, struct zxdh_virtqueue *vq) rte_write16(0, &hw->common_cfg->queue_enable); } +static void +zxdh_notify_queue(struct zxdh_hw *hw, struct zxdh_virtqueue *vq) +{ + uint32_t notify_data = 0; + + if (!zxdh_pci_with_feature(hw, ZXDH_F_NOTIFICATION_DATA)) { + rte_write16(vq->vq_queue_index, vq->notify_addr); + return; + } + + notify_data = ((uint32_t)vq->vq_avail_idx << 16) | vq->vq_queue_index; + if (zxdh_pci_with_feature(hw, ZXDH_F_RING_PACKED) && + (vq->vq_packed.cached_flags & ZXDH_VRING_PACKED_DESC_F_AVAIL)) + notify_data |= RTE_BIT32(31); + + PMD_DRV_LOG(DEBUG, "queue:%d notify_data 0x%x notify_addr 0x%p", + vq->vq_queue_index, notify_data, vq->notify_addr); + rte_write32(notify_data, vq->notify_addr); +} + const struct zxdh_pci_ops zxdh_dev_pci_ops = { .read_dev_cfg = zxdh_read_dev_config, .write_dev_cfg = zxdh_write_dev_config, @@ -216,6 +236,7 @@ const struct zxdh_pci_ops zxdh_dev_pci_ops = { .set_queue_num = zxdh_set_queue_num, .setup_queue = zxdh_setup_queue, .del_queue = zxdh_del_queue, + .notify_queue = zxdh_notify_queue, }; uint8_t diff --git a/drivers/net/zxdh/zxdh_pci.h b/drivers/net/zxdh/zxdh_pci.h index e3f13cb17d..5c5f72b90e 100644 --- a/drivers/net/zxdh/zxdh_pci.h +++ b/drivers/net/zxdh/zxdh_pci.h @@ -144,6 +144,7 @@ struct zxdh_pci_ops { int32_t (*setup_queue)(struct zxdh_hw *hw, struct zxdh_virtqueue *vq); void (*del_queue)(struct zxdh_hw *hw, struct zxdh_virtqueue *vq); + void (*notify_queue)(struct zxdh_hw *hw, struct zxdh_virtqueue *vq); }; struct zxdh_hw_internal { diff --git a/drivers/net/zxdh/zxdh_queue.c b/drivers/net/zxdh/zxdh_queue.c index af21f046ad..8c8f2605f6 100644 --- a/drivers/net/zxdh/zxdh_queue.c +++ b/drivers/net/zxdh/zxdh_queue.c @@ -274,3 +274,94 @@ zxdh_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id) zxdh_queue_disable_intr(vq); return 0; } + +int32_t zxdh_enqueue_recv_refill_packed(struct zxdh_virtqueue *vq, + struct rte_mbuf **cookie, uint16_t num) +{ + struct zxdh_vring_packed_desc *start_dp = vq->vq_packed.ring.desc; + struct zxdh_hw *hw = vq->hw; + struct zxdh_vq_desc_extra *dxp; + uint16_t flags = vq->vq_packed.cached_flags; + int32_t i; + uint16_t idx; + + for (i = 0; i < num; i++) { + idx = vq->vq_avail_idx; + dxp = &vq->vq_descx[idx]; + dxp->cookie = (void *)cookie[i]; + dxp->ndescs = 1; + /* rx pkt fill in data_off */ + start_dp[idx].addr = rte_mbuf_iova_get(cookie[i]) + RTE_PKTMBUF_HEADROOM; + start_dp[idx].len = cookie[i]->buf_len - RTE_PKTMBUF_HEADROOM; + vq->vq_desc_head_idx = 
dxp->next; + if (vq->vq_desc_head_idx == ZXDH_VQ_RING_DESC_CHAIN_END) + vq->vq_desc_tail_idx = vq->vq_desc_head_idx; + zxdh_queue_store_flags_packed(&start_dp[idx], flags, hw->weak_barriers); + if (++vq->vq_avail_idx >= vq->vq_nentries) { + vq->vq_avail_idx -= vq->vq_nentries; + vq->vq_packed.cached_flags ^= ZXDH_VRING_PACKED_DESC_F_AVAIL_USED; + flags = vq->vq_packed.cached_flags; + } + } + vq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt - num); + return 0; +} + +int32_t zxdh_dev_rx_queue_setup_finish(struct rte_eth_dev *dev, uint16_t logic_qidx) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_virtqueue *vq = hw->vqs[logic_qidx]; + struct zxdh_virtnet_rx *rxvq = &vq->rxq; + uint16_t desc_idx; + int32_t error = 0; + + /* Allocate blank mbufs for the each rx descriptor */ + memset(&rxvq->fake_mbuf, 0, sizeof(rxvq->fake_mbuf)); + for (desc_idx = 0; desc_idx < ZXDH_MBUF_BURST_SZ; desc_idx++) + vq->sw_ring[vq->vq_nentries + desc_idx] = &rxvq->fake_mbuf; + + while (!zxdh_queue_full(vq)) { + uint16_t free_cnt = vq->vq_free_cnt; + + free_cnt = RTE_MIN(ZXDH_MBUF_BURST_SZ, free_cnt); + struct rte_mbuf *new_pkts[free_cnt]; + + if (likely(rte_pktmbuf_alloc_bulk(rxvq->mpool, new_pkts, free_cnt) == 0)) { + error = zxdh_enqueue_recv_refill_packed(vq, new_pkts, free_cnt); + if (unlikely(error)) { + int32_t i; + for (i = 0; i < free_cnt; i++) + rte_pktmbuf_free(new_pkts[i]); + } + } else { + PMD_DRV_LOG(ERR, "port %d rxq %d allocated bufs from %s failed", + hw->port_id, logic_qidx, rxvq->mpool->name); + break; + } + } + return 0; +} + +void zxdh_queue_rxvq_flush(struct zxdh_virtqueue *vq) +{ + struct zxdh_vq_desc_extra *dxp = NULL; + uint16_t i = 0; + struct zxdh_vring_packed_desc *descs = vq->vq_packed.ring.desc; + int32_t cnt = 0; + + i = vq->vq_used_cons_idx; + while (zxdh_desc_used(&descs[i], vq) && cnt++ < vq->vq_nentries) { + dxp = &vq->vq_descx[descs[i].id]; + if (dxp->cookie != NULL) { + rte_pktmbuf_free(dxp->cookie); + dxp->cookie = NULL; + } + vq->vq_free_cnt++; + vq->vq_used_cons_idx++; + if (vq->vq_used_cons_idx >= vq->vq_nentries) { + vq->vq_used_cons_idx -= vq->vq_nentries; + vq->vq_packed.used_wrap_counter ^= 1; + } + i = vq->vq_used_cons_idx; + } +} diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h index 2f602d894f..6513aec3f0 100644 --- a/drivers/net/zxdh/zxdh_queue.h +++ b/drivers/net/zxdh/zxdh_queue.h @@ -25,6 +25,11 @@ enum { ZXDH_VTNET_RQ = 0, ZXDH_VTNET_TQ = 1 }; #define ZXDH_VRING_DESC_F_WRITE 2 /* This flag means the descriptor was made available by the driver */ #define ZXDH_VRING_PACKED_DESC_F_AVAIL (1 << (7)) +#define ZXDH_VRING_PACKED_DESC_F_USED (1 << (15)) + +/* Frequently used combinations */ +#define ZXDH_VRING_PACKED_DESC_F_AVAIL_USED \ + (ZXDH_VRING_PACKED_DESC_F_AVAIL | ZXDH_VRING_PACKED_DESC_F_USED) #define ZXDH_RING_EVENT_FLAGS_ENABLE 0x0 #define ZXDH_RING_EVENT_FLAGS_DISABLE 0x1 @@ -33,6 +38,9 @@ enum { ZXDH_VTNET_RQ = 0, ZXDH_VTNET_TQ = 1 }; #define ZXDH_VQ_RING_DESC_CHAIN_END 32768 #define ZXDH_QUEUE_DEPTH 1024 +#define ZXDH_RQ_QUEUE_IDX 0 +#define ZXDH_TQ_QUEUE_IDX 1 + /* * ring descriptors: 16 bytes. * These can chain together via "next". 
@@ -290,6 +298,63 @@ zxdh_mb(uint8_t weak_barriers) rte_mb(); } +static inline int32_t +zxdh_queue_full(const struct zxdh_virtqueue *vq) +{ + return (vq->vq_free_cnt == 0); +} + +static inline void +zxdh_queue_store_flags_packed(struct zxdh_vring_packed_desc *dp, + uint16_t flags, uint8_t weak_barriers) + { + if (weak_barriers) { + #ifdef RTE_ARCH_X86_64 + rte_io_wmb(); + dp->flags = flags; + #else + rte_atomic_store_explicit(&dp->flags, flags, rte_memory_order_release); + #endif + } else { + rte_io_wmb(); + dp->flags = flags; + } +} + +static inline uint16_t +zxdh_queue_fetch_flags_packed(struct zxdh_vring_packed_desc *dp, + uint8_t weak_barriers) + { + uint16_t flags; + if (weak_barriers) { + #ifdef RTE_ARCH_X86_64 + flags = dp->flags; + rte_io_rmb(); + #else + flags = rte_atomic_load_explicit(&dp->flags, rte_memory_order_acquire); + #endif + } else { + flags = dp->flags; + rte_io_rmb(); + } + + return flags; +} + +static inline int32_t +zxdh_desc_used(struct zxdh_vring_packed_desc *desc, struct zxdh_virtqueue *vq) +{ + uint16_t flags = zxdh_queue_fetch_flags_packed(desc, vq->hw->weak_barriers); + uint16_t used = !!(flags & ZXDH_VRING_PACKED_DESC_F_USED); + uint16_t avail = !!(flags & ZXDH_VRING_PACKED_DESC_F_AVAIL); + return avail == used && used == vq->vq_packed.used_wrap_counter; +} + +static inline void zxdh_queue_notify(struct zxdh_virtqueue *vq) +{ + ZXDH_VTPCI_OPS(vq->hw)->notify_queue(vq->hw, vq); +} + struct rte_mbuf *zxdh_queue_detach_unused(struct zxdh_virtqueue *vq); int32_t zxdh_free_queues(struct rte_eth_dev *dev); int32_t zxdh_get_queue_type(uint16_t vtpci_queue_idx); @@ -306,5 +371,9 @@ int32_t zxdh_dev_rx_queue_setup(struct rte_eth_dev *dev, struct rte_mempool *mp); int32_t zxdh_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id); int32_t zxdh_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id); +int32_t zxdh_dev_rx_queue_setup_finish(struct rte_eth_dev *dev, uint16_t logic_qidx); +void zxdh_queue_rxvq_flush(struct zxdh_virtqueue *vq); +int32_t zxdh_enqueue_recv_refill_packed(struct zxdh_virtqueue *vq, + struct rte_mbuf **cookie, uint16_t num); #endif /* ZXDH_QUEUE_H */ diff --git a/drivers/net/zxdh/zxdh_rxtx.h b/drivers/net/zxdh/zxdh_rxtx.h index de9353b223..8c7f734805 100644 --- a/drivers/net/zxdh/zxdh_rxtx.h +++ b/drivers/net/zxdh/zxdh_rxtx.h @@ -20,21 +20,19 @@ struct zxdh_virtnet_stats { uint64_t size_bins[8]; }; -struct zxdh_virtnet_rx { +struct __rte_cache_aligned zxdh_virtnet_rx { struct zxdh_virtqueue *vq; - - /* dummy mbuf, for wraparound when processing RX ring. */ - struct rte_mbuf fake_mbuf; - uint64_t mbuf_initializer; /* value to init mbufs. */ struct rte_mempool *mpool; /* mempool for mbuf allocation */ uint16_t queue_id; /* DPDK queue index. */ uint16_t port_id; /* Device port identifier. */ struct zxdh_virtnet_stats stats; const struct rte_memzone *mz; /* mem zone to populate RX ring. */ -} __rte_packed; + /* dummy mbuf, for wraparound when processing RX ring. */ + struct rte_mbuf fake_mbuf; +}; -struct zxdh_virtnet_tx { +struct __rte_cache_aligned zxdh_virtnet_tx { struct zxdh_virtqueue *vq; const struct rte_memzone *zxdh_net_hdr_mz; /* memzone to populate hdr. */ rte_iova_t zxdh_net_hdr_mem; /* hdr for each xmit packet */ @@ -42,6 +40,6 @@ struct zxdh_virtnet_tx { uint16_t port_id; /* Device port identifier. */ struct zxdh_virtnet_stats stats; const struct rte_memzone *mz; /* mem zone to populate TX ring. 
*/ -} __rte_packed; +}; #endif /* ZXDH_RXTX_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 32296 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
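zxdh_notify_queue() in this patch writes a single 32-bit doorbell word when ZXDH_F_NOTIFICATION_DATA is negotiated. Its layout, restated as a standalone sketch (the helper name is hypothetical; the flag value is copied from zxdh_queue.h):

    #include <stdint.h>

    #define PACKED_DESC_F_AVAIL (1u << 7)  /* ZXDH_VRING_PACKED_DESC_F_AVAIL */

    /* Sketch: queue index in bits 15:0, avail index in bits 31:16, and,
     * for packed rings, bit 31 carrying the cached avail wrap state
     * (the RTE_BIT32(31) OR in the patch).
     */
    static uint32_t notify_word(uint16_t avail_idx, uint16_t queue_idx,
                                uint16_t cached_flags, int packed_ring)
    {
        uint32_t data = ((uint32_t)avail_idx << 16) | queue_idx;

        if (packed_ring && (cached_flags & PACKED_DESC_F_AVAIL))
            data |= UINT32_C(1) << 31;
        return data;
    }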
* [PATCH v5 07/15] net/zxdh: provided dev simple tx implementations 2024-12-23 11:02 ` [PATCH v5 00/15] net/zxdh: updated net zxdh driver Junlong Wang ` (5 preceding siblings ...) 2024-12-23 11:02 ` [PATCH v5 06/15] net/zxdh: dev start/stop ops implementations Junlong Wang @ 2024-12-23 11:02 ` Junlong Wang 2024-12-23 11:02 ` [PATCH v5 08/15] net/zxdh: provided dev simple rx implementations Junlong Wang ` (9 subsequent siblings) 16 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-23 11:02 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 18454 bytes --] provided dev simple tx implementations. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/meson.build | 1 + drivers/net/zxdh/zxdh_ethdev.c | 22 ++ drivers/net/zxdh/zxdh_queue.h | 26 ++- drivers/net/zxdh/zxdh_rxtx.c | 396 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_rxtx.h | 4 + 5 files changed, 448 insertions(+), 1 deletion(-) create mode 100644 drivers/net/zxdh/zxdh_rxtx.c diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build index 5b3af87c5b..20b2cf484a 100644 --- a/drivers/net/zxdh/meson.build +++ b/drivers/net/zxdh/meson.build @@ -21,4 +21,5 @@ sources = files( 'zxdh_queue.c', 'zxdh_np.c', 'zxdh_tables.c', + 'zxdh_rxtx.c', ) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 6e603b967e..aef77e86a0 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -15,6 +15,7 @@ #include "zxdh_queue.h" #include "zxdh_np.h" #include "zxdh_tables.h" +#include "zxdh_rxtx.h" struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; struct zxdh_shared_data *zxdh_shared_data; @@ -956,6 +957,25 @@ zxdh_dev_close(struct rte_eth_dev *dev) return ret; } +static int32_t +zxdh_set_rxtx_funcs(struct rte_eth_dev *eth_dev) +{ + struct zxdh_hw *hw = eth_dev->data->dev_private; + + if (!zxdh_pci_packed_queue(hw)) { + PMD_DRV_LOG(ERR, " port %u not support packed queue", eth_dev->data->port_id); + return -1; + } + if (!zxdh_pci_with_feature(hw, ZXDH_NET_F_MRG_RXBUF)) { + PMD_DRV_LOG(ERR, " port %u not support rx mergeable", eth_dev->data->port_id); + return -1; + } + eth_dev->tx_pkt_prepare = zxdh_xmit_pkts_prepare; + eth_dev->tx_pkt_burst = &zxdh_xmit_pkts_packed; + + return 0; +} + static int zxdh_dev_start(struct rte_eth_dev *dev) { @@ -971,6 +991,8 @@ zxdh_dev_start(struct rte_eth_dev *dev) if (ret < 0) return ret; } + + zxdh_set_rxtx_funcs(dev); ret = zxdh_intr_enable(dev); if (ret) { PMD_DRV_LOG(ERR, "interrupt enable failed"); diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h index 6513aec3f0..9343df81ac 100644 --- a/drivers/net/zxdh/zxdh_queue.h +++ b/drivers/net/zxdh/zxdh_queue.h @@ -21,8 +21,15 @@ enum { ZXDH_VTNET_RQ = 0, ZXDH_VTNET_TQ = 1 }; #define ZXDH_TQ_QUEUE_IDX 1 #define ZXDH_MAX_TX_INDIRECT 8 +/* This marks a buffer as continuing via the next field. */ +#define ZXDH_VRING_DESC_F_NEXT 1 + /* This marks a buffer as write-only (otherwise read-only). */ -#define ZXDH_VRING_DESC_F_WRITE 2 +#define ZXDH_VRING_DESC_F_WRITE 2 + +/* This means the buffer contains a list of buffer descriptors. 
*/ +#define ZXDH_VRING_DESC_F_INDIRECT 4 + /* This flag means the descriptor was made available by the driver */ #define ZXDH_VRING_PACKED_DESC_F_AVAIL (1 << (7)) #define ZXDH_VRING_PACKED_DESC_F_USED (1 << (15)) @@ -35,11 +42,17 @@ enum { ZXDH_VTNET_RQ = 0, ZXDH_VTNET_TQ = 1 }; #define ZXDH_RING_EVENT_FLAGS_DISABLE 0x1 #define ZXDH_RING_EVENT_FLAGS_DESC 0x2 +#define ZXDH_RING_F_INDIRECT_DESC 28 + #define ZXDH_VQ_RING_DESC_CHAIN_END 32768 #define ZXDH_QUEUE_DEPTH 1024 #define ZXDH_RQ_QUEUE_IDX 0 #define ZXDH_TQ_QUEUE_IDX 1 +#define ZXDH_TYPE_HDR_SIZE sizeof(struct zxdh_type_hdr) +#define ZXDH_PI_HDR_SIZE sizeof(struct zxdh_pi_hdr) +#define ZXDH_DL_NET_HDR_SIZE sizeof(struct zxdh_net_hdr_dl) +#define ZXDH_UL_NET_HDR_SIZE sizeof(struct zxdh_net_hdr_ul) /* * ring descriptors: 16 bytes. @@ -355,6 +368,17 @@ static inline void zxdh_queue_notify(struct zxdh_virtqueue *vq) ZXDH_VTPCI_OPS(vq->hw)->notify_queue(vq->hw, vq); } +static inline int32_t +zxdh_queue_kick_prepare_packed(struct zxdh_virtqueue *vq) +{ + uint16_t flags = 0; + + zxdh_mb(vq->hw->weak_barriers); + flags = vq->vq_packed.ring.device->desc_event_flags; + + return (flags != ZXDH_RING_EVENT_FLAGS_DISABLE); +} + struct rte_mbuf *zxdh_queue_detach_unused(struct zxdh_virtqueue *vq); int32_t zxdh_free_queues(struct rte_eth_dev *dev); int32_t zxdh_get_queue_type(uint16_t vtpci_queue_idx); diff --git a/drivers/net/zxdh/zxdh_rxtx.c b/drivers/net/zxdh/zxdh_rxtx.c new file mode 100644 index 0000000000..10034a0e98 --- /dev/null +++ b/drivers/net/zxdh/zxdh_rxtx.c @@ -0,0 +1,396 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#include <stdint.h> +#include <stdalign.h> + +#include <rte_net.h> + +#include "zxdh_logs.h" +#include "zxdh_pci.h" +#include "zxdh_queue.h" + +#define ZXDH_PKT_FORM_CPU 0x20 /* 1-cpu 0-np */ +#define ZXDH_NO_IP_FRAGMENT 0x2000 /* ip fragment flag */ +#define ZXDH_NO_IPID_UPDATE 0x4000 /* ipid update flag */ + +#define ZXDH_PI_L3TYPE_IP 0x00 +#define ZXDH_PI_L3TYPE_IPV6 0x40 +#define ZXDH_PI_L3TYPE_NOIP 0x80 +#define ZXDH_PI_L3TYPE_RSV 0xC0 +#define ZXDH_PI_L3TYPE_MASK 0xC0 + +#define ZXDH_PCODE_MASK 0x1F +#define ZXDH_PCODE_IP_PKT_TYPE 0x01 +#define ZXDH_PCODE_TCP_PKT_TYPE 0x02 +#define ZXDH_PCODE_UDP_PKT_TYPE 0x03 +#define ZXDH_PCODE_NO_IP_PKT_TYPE 0x09 +#define ZXDH_PCODE_NO_REASSMBLE_TCP_PKT_TYPE 0x0C + +#define ZXDH_TX_MAX_SEGS 31 +#define ZXDH_RX_MAX_SEGS 31 + +static void +zxdh_xmit_cleanup_inorder_packed(struct zxdh_virtqueue *vq, int32_t num) +{ + uint16_t used_idx = 0; + uint16_t id = 0; + uint16_t curr_id = 0; + uint16_t free_cnt = 0; + uint16_t size = vq->vq_nentries; + struct zxdh_vring_packed_desc *desc = vq->vq_packed.ring.desc; + struct zxdh_vq_desc_extra *dxp = NULL; + + used_idx = vq->vq_used_cons_idx; + /* desc_is_used has a load-acquire or rte_io_rmb inside + * and wait for used desc in virtqueue. 
+ */ + while (num > 0 && zxdh_desc_used(&desc[used_idx], vq)) { + id = desc[used_idx].id; + do { + curr_id = used_idx; + dxp = &vq->vq_descx[used_idx]; + used_idx += dxp->ndescs; + free_cnt += dxp->ndescs; + num -= dxp->ndescs; + if (used_idx >= size) { + used_idx -= size; + vq->vq_packed.used_wrap_counter ^= 1; + } + if (dxp->cookie != NULL) { + rte_pktmbuf_free(dxp->cookie); + dxp->cookie = NULL; + } + } while (curr_id != id); + } + vq->vq_used_cons_idx = used_idx; + vq->vq_free_cnt += free_cnt; +} + +static void +zxdh_ring_free_id_packed(struct zxdh_virtqueue *vq, uint16_t id) +{ + struct zxdh_vq_desc_extra *dxp = NULL; + + dxp = &vq->vq_descx[id]; + vq->vq_free_cnt += dxp->ndescs; + + if (vq->vq_desc_tail_idx == ZXDH_VQ_RING_DESC_CHAIN_END) + vq->vq_desc_head_idx = id; + else + vq->vq_descx[vq->vq_desc_tail_idx].next = id; + + vq->vq_desc_tail_idx = id; + dxp->next = ZXDH_VQ_RING_DESC_CHAIN_END; +} + +static void +zxdh_xmit_cleanup_normal_packed(struct zxdh_virtqueue *vq, int32_t num) +{ + uint16_t used_idx = 0; + uint16_t id = 0; + uint16_t size = vq->vq_nentries; + struct zxdh_vring_packed_desc *desc = vq->vq_packed.ring.desc; + struct zxdh_vq_desc_extra *dxp = NULL; + + used_idx = vq->vq_used_cons_idx; + /* desc_is_used has a load-acquire or rte_io_rmb inside + * and wait for used desc in virtqueue. + */ + while (num-- && zxdh_desc_used(&desc[used_idx], vq)) { + id = desc[used_idx].id; + dxp = &vq->vq_descx[id]; + vq->vq_used_cons_idx += dxp->ndescs; + if (vq->vq_used_cons_idx >= size) { + vq->vq_used_cons_idx -= size; + vq->vq_packed.used_wrap_counter ^= 1; + } + zxdh_ring_free_id_packed(vq, id); + if (dxp->cookie != NULL) { + rte_pktmbuf_free(dxp->cookie); + dxp->cookie = NULL; + } + used_idx = vq->vq_used_cons_idx; + } +} + +static void +zxdh_xmit_cleanup_packed(struct zxdh_virtqueue *vq, int32_t num, int32_t in_order) +{ + if (in_order) + zxdh_xmit_cleanup_inorder_packed(vq, num); + else + zxdh_xmit_cleanup_normal_packed(vq, num); +} + +static uint8_t +zxdh_xmit_get_ptype(struct rte_mbuf *m) +{ + uint8_t pcode = ZXDH_PCODE_NO_IP_PKT_TYPE; + uint8_t l3_ptype = ZXDH_PI_L3TYPE_NOIP; + + if ((m->packet_type & RTE_PTYPE_INNER_L3_MASK) == RTE_PTYPE_INNER_L3_IPV4 || + ((!(m->packet_type & RTE_PTYPE_TUNNEL_MASK)) && + (m->packet_type & RTE_PTYPE_L3_MASK) == RTE_PTYPE_L3_IPV4)) { + l3_ptype = ZXDH_PI_L3TYPE_IP; + pcode = ZXDH_PCODE_IP_PKT_TYPE; + } else if ((m->packet_type & RTE_PTYPE_INNER_L3_MASK) == RTE_PTYPE_INNER_L3_IPV6 || + ((!(m->packet_type & RTE_PTYPE_TUNNEL_MASK)) && + (m->packet_type & RTE_PTYPE_L3_MASK) == RTE_PTYPE_L3_IPV6)) { + l3_ptype = ZXDH_PI_L3TYPE_IPV6; + pcode = ZXDH_PCODE_IP_PKT_TYPE; + } else { + goto end; + } + + if ((m->packet_type & RTE_PTYPE_INNER_L4_MASK) == RTE_PTYPE_INNER_L4_TCP || + ((!(m->packet_type & RTE_PTYPE_TUNNEL_MASK)) && + (m->packet_type & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_TCP)) + pcode = ZXDH_PCODE_TCP_PKT_TYPE; + else if ((m->packet_type & RTE_PTYPE_INNER_L4_MASK) == RTE_PTYPE_INNER_L4_UDP || + ((!(m->packet_type & RTE_PTYPE_TUNNEL_MASK)) && + (m->packet_type & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_UDP)) + pcode = ZXDH_PCODE_UDP_PKT_TYPE; + +end: + return l3_ptype | ZXDH_PKT_FORM_CPU | pcode; +} + +static void zxdh_xmit_fill_net_hdr(struct rte_mbuf *cookie, + struct zxdh_net_hdr_dl *hdr) +{ + uint16_t pkt_flag_lw16 = ZXDH_NO_IPID_UPDATE; + uint16_t l3_offset; + uint32_t ol_flag = 0; + + hdr->pi_hdr.pkt_flag_lw16 = rte_be_to_cpu_16(pkt_flag_lw16); + + hdr->pi_hdr.pkt_type = zxdh_xmit_get_ptype(cookie); + l3_offset = ZXDH_DL_NET_HDR_SIZE + 
cookie->outer_l2_len + + cookie->outer_l3_len + cookie->l2_len; + hdr->pi_hdr.l3_offset = rte_be_to_cpu_16(l3_offset); + hdr->pi_hdr.l4_offset = rte_be_to_cpu_16(l3_offset + cookie->l3_len); + + hdr->pd_hdr.ol_flag = rte_be_to_cpu_32(ol_flag); +} + +static inline void zxdh_enqueue_xmit_packed_fast(struct zxdh_virtnet_tx *txvq, + struct rte_mbuf *cookie, int32_t in_order) +{ + struct zxdh_virtqueue *vq = txvq->vq; + uint16_t id = in_order ? vq->vq_avail_idx : vq->vq_desc_head_idx; + struct zxdh_vq_desc_extra *dxp = &vq->vq_descx[id]; + uint16_t flags = vq->vq_packed.cached_flags; + struct zxdh_net_hdr_dl *hdr = NULL; + + dxp->ndescs = 1; + dxp->cookie = cookie; + hdr = rte_pktmbuf_mtod_offset(cookie, struct zxdh_net_hdr_dl *, -ZXDH_DL_NET_HDR_SIZE); + zxdh_xmit_fill_net_hdr(cookie, hdr); + + uint16_t idx = vq->vq_avail_idx; + struct zxdh_vring_packed_desc *dp = &vq->vq_packed.ring.desc[idx]; + + dp->addr = rte_pktmbuf_iova(cookie) - ZXDH_DL_NET_HDR_SIZE; + dp->len = cookie->data_len + ZXDH_DL_NET_HDR_SIZE; + dp->id = id; + if (++vq->vq_avail_idx >= vq->vq_nentries) { + vq->vq_avail_idx -= vq->vq_nentries; + vq->vq_packed.cached_flags ^= ZXDH_VRING_PACKED_DESC_F_AVAIL_USED; + } + vq->vq_free_cnt--; + if (!in_order) { + vq->vq_desc_head_idx = dxp->next; + if (vq->vq_desc_head_idx == ZXDH_VQ_RING_DESC_CHAIN_END) + vq->vq_desc_tail_idx = ZXDH_VQ_RING_DESC_CHAIN_END; + } + zxdh_queue_store_flags_packed(dp, flags, vq->hw->weak_barriers); +} + +static inline void zxdh_enqueue_xmit_packed(struct zxdh_virtnet_tx *txvq, + struct rte_mbuf *cookie, + uint16_t needed, + int32_t use_indirect, + int32_t in_order) +{ + struct zxdh_tx_region *txr = txvq->zxdh_net_hdr_mz->addr; + struct zxdh_virtqueue *vq = txvq->vq; + struct zxdh_vring_packed_desc *start_dp = vq->vq_packed.ring.desc; + void *hdr = NULL; + uint16_t head_idx = vq->vq_avail_idx; + uint16_t idx = head_idx; + uint16_t prev = head_idx; + uint16_t head_flags = cookie->next ? ZXDH_VRING_DESC_F_NEXT : 0; + uint16_t seg_num = cookie->nb_segs; + uint16_t id = in_order ? vq->vq_avail_idx : vq->vq_desc_head_idx; + struct zxdh_vring_packed_desc *head_dp = &vq->vq_packed.ring.desc[idx]; + struct zxdh_vq_desc_extra *dxp = &vq->vq_descx[id]; + + dxp->ndescs = needed; + dxp->cookie = cookie; + head_flags |= vq->vq_packed.cached_flags; + /* if offload disabled, it is not zeroed below, do it now */ + + if (use_indirect) { + /** + * setup tx ring slot to point to indirect + * descriptor list stored in reserved region. + * the first slot in indirect ring is already + * preset to point to the header in reserved region + **/ + start_dp[idx].addr = + txvq->zxdh_net_hdr_mem + RTE_PTR_DIFF(&txr[idx].tx_packed_indir, txr); + start_dp[idx].len = (seg_num + 1) * sizeof(struct zxdh_vring_packed_desc); + /* Packed descriptor id needs to be restored when inorder. */ + if (in_order) + start_dp[idx].id = idx; + + /* reset flags for indirect desc */ + head_flags = ZXDH_VRING_DESC_F_INDIRECT; + head_flags |= vq->vq_packed.cached_flags; + hdr = (void *)&txr[idx].tx_hdr; + /* loop below will fill in rest of the indirect elements */ + start_dp = txr[idx].tx_packed_indir; + start_dp->len = ZXDH_DL_NET_HDR_SIZE; /* update actual net or type hdr size */ + idx = 1; + } else { + /* setup first tx ring slot to point to header stored in reserved region. 
*/ + start_dp[idx].addr = txvq->zxdh_net_hdr_mem + RTE_PTR_DIFF(&txr[idx].tx_hdr, txr); + start_dp[idx].len = ZXDH_DL_NET_HDR_SIZE; + head_flags |= ZXDH_VRING_DESC_F_NEXT; + hdr = (void *)&txr[idx].tx_hdr; + idx++; + if (idx >= vq->vq_nentries) { + idx -= vq->vq_nentries; + vq->vq_packed.cached_flags ^= ZXDH_VRING_PACKED_DESC_F_AVAIL_USED; + } + } + zxdh_xmit_fill_net_hdr(cookie, (struct zxdh_net_hdr_dl *)hdr); + + do { + start_dp[idx].addr = rte_pktmbuf_iova(cookie); + start_dp[idx].len = cookie->data_len; + if (likely(idx != head_idx)) { + uint16_t flags = cookie->next ? ZXDH_VRING_DESC_F_NEXT : 0; + flags |= vq->vq_packed.cached_flags; + start_dp[idx].flags = flags; + } + prev = idx; + idx++; + if (idx >= vq->vq_nentries) { + idx -= vq->vq_nentries; + vq->vq_packed.cached_flags ^= ZXDH_VRING_PACKED_DESC_F_AVAIL_USED; + } + } while ((cookie = cookie->next) != NULL); + start_dp[prev].id = id; + if (use_indirect) { + idx = head_idx; + if (++idx >= vq->vq_nentries) { + idx -= vq->vq_nentries; + vq->vq_packed.cached_flags ^= ZXDH_VRING_PACKED_DESC_F_AVAIL_USED; + } + } + vq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt - needed); + vq->vq_avail_idx = idx; + if (!in_order) { + vq->vq_desc_head_idx = dxp->next; + if (vq->vq_desc_head_idx == ZXDH_VQ_RING_DESC_CHAIN_END) + vq->vq_desc_tail_idx = ZXDH_VQ_RING_DESC_CHAIN_END; + } + zxdh_queue_store_flags_packed(head_dp, head_flags, vq->hw->weak_barriers); +} + +uint16_t +zxdh_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) +{ + struct zxdh_virtnet_tx *txvq = tx_queue; + struct zxdh_virtqueue *vq = txvq->vq; + struct zxdh_hw *hw = vq->hw; + uint16_t nb_tx = 0; + + bool in_order = zxdh_pci_with_feature(hw, ZXDH_F_IN_ORDER); + + if (nb_pkts > vq->vq_free_cnt) + zxdh_xmit_cleanup_packed(vq, nb_pkts - vq->vq_free_cnt, in_order); + for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) { + struct rte_mbuf *txm = tx_pkts[nb_tx]; + int32_t can_push = 0; + int32_t use_indirect = 0; + int32_t slots = 0; + int32_t need = 0; + + /* optimize ring usage */ + if ((zxdh_pci_with_feature(hw, ZXDH_F_ANY_LAYOUT) || + zxdh_pci_with_feature(hw, ZXDH_F_VERSION_1)) && + rte_mbuf_refcnt_read(txm) == 1 && + RTE_MBUF_DIRECT(txm) && + txm->nb_segs == 1 && + rte_pktmbuf_headroom(txm) >= ZXDH_DL_NET_HDR_SIZE && + rte_is_aligned(rte_pktmbuf_mtod(txm, char *), + alignof(struct zxdh_net_hdr_dl))) { + can_push = 1; + } else if (zxdh_pci_with_feature(hw, ZXDH_RING_F_INDIRECT_DESC) && + txm->nb_segs < ZXDH_MAX_TX_INDIRECT) { + use_indirect = 1; + } + /** + * How many main ring entries are needed to this Tx? + * indirect => 1 + * any_layout => number of segments + * default => number of segments + 1 + **/ + slots = use_indirect ? 1 : (txm->nb_segs + !can_push); + need = slots - vq->vq_free_cnt; + /* Positive value indicates it need free vring descriptors */ + if (unlikely(need > 0)) { + zxdh_xmit_cleanup_packed(vq, need, in_order); + need = slots - vq->vq_free_cnt; + if (unlikely(need > 0)) { + PMD_TX_LOG(ERR, "port[ep:%d, pf:%d, vf:%d, vfid:%d, pcieid:%d], queue:%d[pch:%d]. 
No free desc to xmit", + hw->vport.epid, hw->vport.pfid, hw->vport.vfid, + hw->vfid, hw->pcie_id, txvq->queue_id, + hw->channel_context[txvq->queue_id].ph_chno); + break; + } + } + /* Enqueue Packet buffers */ + if (can_push) + zxdh_enqueue_xmit_packed_fast(txvq, txm, in_order); + else + zxdh_enqueue_xmit_packed(txvq, txm, slots, use_indirect, in_order); + } + if (likely(nb_tx)) { + if (unlikely(zxdh_queue_kick_prepare_packed(vq))) { + zxdh_queue_notify(vq); + PMD_TX_LOG(DEBUG, "Notified backend after xmit"); + } + } + return nb_tx; +} + +uint16_t zxdh_xmit_pkts_prepare(void *tx_queue __rte_unused, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts) +{ + uint16_t nb_tx; + + for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) { + struct rte_mbuf *m = tx_pkts[nb_tx]; + int32_t error; + +#ifdef RTE_LIBRTE_ETHDEV_DEBUG + error = rte_validate_tx_offload(m); + if (unlikely(error)) { + rte_errno = -error; + break; + } +#endif + + error = rte_net_intel_cksum_prepare(m); + if (unlikely(error)) { + rte_errno = -error; + break; + } + } + return nb_tx; +} diff --git a/drivers/net/zxdh/zxdh_rxtx.h b/drivers/net/zxdh/zxdh_rxtx.h index 8c7f734805..0a02d319b2 100644 --- a/drivers/net/zxdh/zxdh_rxtx.h +++ b/drivers/net/zxdh/zxdh_rxtx.h @@ -42,4 +42,8 @@ struct __rte_cache_aligned zxdh_virtnet_tx { const struct rte_memzone *mz; /* mem zone to populate TX ring. */ }; +uint16_t zxdh_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); +uint16_t zxdh_xmit_pkts_prepare(void *tx_queue __rte_unused, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts); + #endif /* ZXDH_RXTX_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 45252 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
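Two details of the TX patch above are easier to see in isolation. First, the pkt_type byte that zxdh_xmit_get_ptype() writes into the PI header is a plain bit-OR of three fields: the L3 type in bits 7:6, the "came from CPU" marker in bit 5, and the protocol code in bits 4:0. The sketch below is a minimal, self-contained restatement; the constants are copied from the patch, while main() and the variable names are illustrative only.

#include <stdint.h>
#include <stdio.h>

/* Constants copied from zxdh_rxtx.c in the patch above. */
#define ZXDH_PI_L3TYPE_IP        0x00
#define ZXDH_PI_L3TYPE_IPV6      0x40
#define ZXDH_PI_L3TYPE_NOIP      0x80
#define ZXDH_PKT_FORM_CPU        0x20
#define ZXDH_PCODE_IP_PKT_TYPE   0x01
#define ZXDH_PCODE_TCP_PKT_TYPE  0x02
#define ZXDH_PCODE_UDP_PKT_TYPE  0x03

int main(void)
{
	/* TCP over IPv4: 0x00 | 0x20 | 0x02 = 0x22 */
	uint8_t t1 = ZXDH_PI_L3TYPE_IP | ZXDH_PKT_FORM_CPU | ZXDH_PCODE_TCP_PKT_TYPE;
	/* UDP over IPv6: 0x40 | 0x20 | 0x03 = 0x63 */
	uint8_t t2 = ZXDH_PI_L3TYPE_IPV6 | ZXDH_PKT_FORM_CPU | ZXDH_PCODE_UDP_PKT_TYPE;

	printf("ipv4+tcp -> 0x%02x, ipv6+udp -> 0x%02x\n", t1, t2);
	return 0;
}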
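Second, the slot accounting in zxdh_xmit_pkts_packed(): an indirect transmit consumes one ring slot, a header pushed into the mbuf headroom consumes nb_segs slots, and the default layout needs nb_segs plus one for the separate header descriptor; each wrap of vq_avail_idx past the ring size flips the cached AVAIL/USED phase bits. The toy model below restates just that bookkeeping under the assumption that a single enqueue never advances by more than one full ring; the demo_* names stand in for the driver's virtqueue fields and are not part of the driver.

#include <stdint.h>
#include <stdio.h>
#include <stdbool.h>

#define DEMO_F_AVAIL      (1u << 7)
#define DEMO_F_USED       (1u << 15)
#define DEMO_F_AVAIL_USED (DEMO_F_AVAIL | DEMO_F_USED)

struct demo_ring {
	uint16_t nentries;     /* ring size */
	uint16_t avail_idx;    /* next slot the driver will fill */
	uint16_t cached_flags; /* current avail/used phase bits */
};

/* Slots needed per mbuf, mirroring the patch:
 * indirect => 1, header pushed into headroom => nb_segs,
 * otherwise one extra slot for the separate header descriptor. */
static uint16_t demo_slots(uint16_t nb_segs, bool use_indirect, bool can_push)
{
	if (use_indirect)
		return 1;
	return nb_segs + (can_push ? 0 : 1);
}

static void demo_advance(struct demo_ring *r, uint16_t n)
{
	r->avail_idx += n;
	if (r->avail_idx >= r->nentries) {
		r->avail_idx -= r->nentries;
		r->cached_flags ^= DEMO_F_AVAIL_USED; /* phase flip on wrap */
	}
}

int main(void)
{
	struct demo_ring r = { .nentries = 8, .avail_idx = 6,
			       .cached_flags = DEMO_F_AVAIL };

	/* 3-segment mbuf, no indirect, header in its own slot => 4 slots */
	uint16_t need = demo_slots(3, false, false);

	demo_advance(&r, need);
	printf("used %u slots, avail_idx=%u, flags=0x%x\n",
	       need, r.avail_idx, r.cached_flags);
	return 0;
}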
* [PATCH v5 08/15] net/zxdh: provided dev simple rx implementations 2024-12-23 11:02 ` [PATCH v5 00/15] net/zxdh: updated net zxdh driver Junlong Wang ` (6 preceding siblings ...) 2024-12-23 11:02 ` [PATCH v5 07/15] net/zxdh: provided dev simple tx implementations Junlong Wang @ 2024-12-23 11:02 ` Junlong Wang 2024-12-23 11:02 ` [PATCH v5 09/15] net/zxdh: link info update, set link up/down Junlong Wang ` (8 subsequent siblings) 16 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-23 11:02 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 11243 bytes --] provided dev simple rx implementations. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- doc/guides/nics/features/zxdh.ini | 1 + doc/guides/nics/zxdh.rst | 1 + drivers/net/zxdh/zxdh_ethdev.c | 1 + drivers/net/zxdh/zxdh_rxtx.c | 313 ++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_rxtx.h | 2 + 5 files changed, 318 insertions(+) diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index 7b72be5f25..bb44e93fad 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -9,3 +9,4 @@ x86-64 = Y ARMv8 = Y SR-IOV = Y Multiprocess aware = Y +Scattered Rx = Y diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index eb970a888f..f42db9c1f1 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -20,6 +20,7 @@ Features of the ZXDH PMD are: - Multi arch support: x86_64, ARMv8. - Multiple queues for TX and RX - SR-IOV VF +- Scattered and gather for TX and RX Driver compilation and testing diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index aef77e86a0..bc4d2a937b 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -972,6 +972,7 @@ zxdh_set_rxtx_funcs(struct rte_eth_dev *eth_dev) } eth_dev->tx_pkt_prepare = zxdh_xmit_pkts_prepare; eth_dev->tx_pkt_burst = &zxdh_xmit_pkts_packed; + eth_dev->rx_pkt_burst = &zxdh_recv_pkts_packed; return 0; } diff --git a/drivers/net/zxdh/zxdh_rxtx.c b/drivers/net/zxdh/zxdh_rxtx.c index 10034a0e98..06290d48bb 100644 --- a/drivers/net/zxdh/zxdh_rxtx.c +++ b/drivers/net/zxdh/zxdh_rxtx.c @@ -31,6 +31,93 @@ #define ZXDH_TX_MAX_SEGS 31 #define ZXDH_RX_MAX_SEGS 31 +uint32_t zxdh_outer_l2_type[16] = { + 0, + RTE_PTYPE_L2_ETHER, + RTE_PTYPE_L2_ETHER_TIMESYNC, + RTE_PTYPE_L2_ETHER_ARP, + RTE_PTYPE_L2_ETHER_LLDP, + RTE_PTYPE_L2_ETHER_NSH, + RTE_PTYPE_L2_ETHER_VLAN, + RTE_PTYPE_L2_ETHER_QINQ, + RTE_PTYPE_L2_ETHER_PPPOE, + RTE_PTYPE_L2_ETHER_FCOE, + RTE_PTYPE_L2_ETHER_MPLS, +}; + +uint32_t zxdh_outer_l3_type[16] = { + 0, + RTE_PTYPE_L3_IPV4, + RTE_PTYPE_L3_IPV4_EXT, + RTE_PTYPE_L3_IPV6, + RTE_PTYPE_L3_IPV4_EXT_UNKNOWN, + RTE_PTYPE_L3_IPV6_EXT, + RTE_PTYPE_L3_IPV6_EXT_UNKNOWN, +}; + +uint32_t zxdh_outer_l4_type[16] = { + 0, + RTE_PTYPE_L4_TCP, + RTE_PTYPE_L4_UDP, + RTE_PTYPE_L4_FRAG, + RTE_PTYPE_L4_SCTP, + RTE_PTYPE_L4_ICMP, + RTE_PTYPE_L4_NONFRAG, + RTE_PTYPE_L4_IGMP, +}; + +uint32_t zxdh_tunnel_type[16] = { + 0, + RTE_PTYPE_TUNNEL_IP, + RTE_PTYPE_TUNNEL_GRE, + RTE_PTYPE_TUNNEL_VXLAN, + RTE_PTYPE_TUNNEL_NVGRE, + RTE_PTYPE_TUNNEL_GENEVE, + RTE_PTYPE_TUNNEL_GRENAT, + RTE_PTYPE_TUNNEL_GTPC, + RTE_PTYPE_TUNNEL_GTPU, + RTE_PTYPE_TUNNEL_ESP, + RTE_PTYPE_TUNNEL_L2TP, + RTE_PTYPE_TUNNEL_VXLAN_GPE, + RTE_PTYPE_TUNNEL_MPLS_IN_GRE, + RTE_PTYPE_TUNNEL_MPLS_IN_UDP, +}; + +uint32_t zxdh_inner_l2_type[16] = { + 0, + RTE_PTYPE_INNER_L2_ETHER, + 0, + 0, + 0, + 0, + RTE_PTYPE_INNER_L2_ETHER_VLAN, + 
RTE_PTYPE_INNER_L2_ETHER_QINQ, + 0, + 0, + 0, +}; + +uint32_t zxdh_inner_l3_type[16] = { + 0, + RTE_PTYPE_INNER_L3_IPV4, + RTE_PTYPE_INNER_L3_IPV4_EXT, + RTE_PTYPE_INNER_L3_IPV6, + RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN, + RTE_PTYPE_INNER_L3_IPV6_EXT, + RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN, +}; + +uint32_t zxdh_inner_l4_type[16] = { + 0, + RTE_PTYPE_INNER_L4_TCP, + RTE_PTYPE_INNER_L4_UDP, + RTE_PTYPE_INNER_L4_FRAG, + RTE_PTYPE_INNER_L4_SCTP, + RTE_PTYPE_INNER_L4_ICMP, + 0, + 0, +}; + static void zxdh_xmit_cleanup_inorder_packed(struct zxdh_virtqueue *vq, int32_t num) { @@ -394,3 +481,229 @@ uint16_t zxdh_xmit_pkts_prepare(void *tx_queue __rte_unused, struct rte_mbuf **t } return nb_tx; } + +static uint16_t zxdh_dequeue_burst_rx_packed(struct zxdh_virtqueue *vq, + struct rte_mbuf **rx_pkts, + uint32_t *len, + uint16_t num) +{ + struct zxdh_vring_packed_desc *desc = vq->vq_packed.ring.desc; + struct rte_mbuf *cookie = NULL; + uint16_t i, used_idx; + uint16_t id; + + for (i = 0; i < num; i++) { + used_idx = vq->vq_used_cons_idx; + /** + * desc_is_used has a load-acquire or rte_io_rmb inside + * and wait for used desc in virtqueue. + */ + if (!zxdh_desc_used(&desc[used_idx], vq)) + return i; + len[i] = desc[used_idx].len; + id = desc[used_idx].id; + cookie = (struct rte_mbuf *)vq->vq_descx[id].cookie; + vq->vq_descx[id].cookie = NULL; + if (unlikely(cookie == NULL)) { + PMD_RX_LOG(ERR, + "vring descriptor with no mbuf cookie at %u", vq->vq_used_cons_idx); + break; + } + rx_pkts[i] = cookie; + vq->vq_free_cnt++; + vq->vq_used_cons_idx++; + if (vq->vq_used_cons_idx >= vq->vq_nentries) { + vq->vq_used_cons_idx -= vq->vq_nentries; + vq->vq_packed.used_wrap_counter ^= 1; + } + } + return i; +} + +static int32_t zxdh_rx_update_mbuf(struct rte_mbuf *m, struct zxdh_net_hdr_ul *hdr) +{ + struct zxdh_pd_hdr_ul *pd_hdr = &hdr->pd_hdr; + struct zxdh_pi_hdr *pi_hdr = &hdr->pi_hdr; + uint32_t idx = 0; + + m->pkt_len = rte_be_to_cpu_16(pi_hdr->ul.pkt_len); + + uint16_t pkt_type_outer = rte_be_to_cpu_16(pd_hdr->pkt_type_out); + + idx = (pkt_type_outer >> 12) & 0xF; + m->packet_type = zxdh_outer_l2_type[idx]; + idx = (pkt_type_outer >> 8) & 0xF; + m->packet_type |= zxdh_outer_l3_type[idx]; + idx = (pkt_type_outer >> 4) & 0xF; + m->packet_type |= zxdh_outer_l4_type[idx]; + idx = pkt_type_outer & 0xF; + m->packet_type |= zxdh_tunnel_type[idx]; + + uint16_t pkt_type_inner = rte_be_to_cpu_16(pd_hdr->pkt_type_in); + + if (pkt_type_inner) { + idx = (pkt_type_inner >> 12) & 0xF; + m->packet_type |= zxdh_inner_l2_type[idx]; + idx = (pkt_type_inner >> 8) & 0xF; + m->packet_type |= zxdh_inner_l3_type[idx]; + idx = (pkt_type_inner >> 4) & 0xF; + m->packet_type |= zxdh_inner_l4_type[idx]; + } + + return 0; +} + +static inline void zxdh_discard_rxbuf(struct zxdh_virtqueue *vq, struct rte_mbuf *m) +{ + int32_t error = 0; + /* + * Requeue the discarded mbuf. This should always be + * successful since it was just dequeued. 
+ */ + error = zxdh_enqueue_recv_refill_packed(vq, &m, 1); + if (unlikely(error)) { + PMD_RX_LOG(ERR, "cannot enqueue discarded mbuf"); + rte_pktmbuf_free(m); + } +} + +uint16_t zxdh_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts) +{ + struct zxdh_virtnet_rx *rxvq = rx_queue; + struct zxdh_virtqueue *vq = rxvq->vq; + struct zxdh_hw *hw = vq->hw; + struct rte_eth_dev *dev = hw->eth_dev; + struct rte_mbuf *rxm = NULL; + struct rte_mbuf *prev = NULL; + uint32_t len[ZXDH_MBUF_BURST_SZ] = {0}; + struct rte_mbuf *rcv_pkts[ZXDH_MBUF_BURST_SZ] = {NULL}; + uint32_t nb_enqueued = 0; + uint32_t seg_num = 0; + uint32_t seg_res = 0; + uint16_t hdr_size = 0; + int32_t error = 0; + uint16_t nb_rx = 0; + uint16_t num = nb_pkts; + + if (unlikely(num > ZXDH_MBUF_BURST_SZ)) + num = ZXDH_MBUF_BURST_SZ; + + num = zxdh_dequeue_burst_rx_packed(vq, rcv_pkts, len, num); + uint16_t i; + uint16_t rcvd_pkt_len = 0; + + for (i = 0; i < num; i++) { + rxm = rcv_pkts[i]; + + struct zxdh_net_hdr_ul *header = + (struct zxdh_net_hdr_ul *)((char *)rxm->buf_addr + + RTE_PKTMBUF_HEADROOM); + + seg_num = header->type_hdr.num_buffers; + if (seg_num == 0) { + PMD_RX_LOG(ERR, "dequeue %d pkt, No.%d pkt seg_num is %d", num, i, seg_num); + seg_num = 1; + } + /* bit[0:6]-pd_len unit:2B */ + uint16_t pd_len = header->type_hdr.pd_len << 1; + /* Private queue only handle type hdr */ + hdr_size = pd_len; + rxm->data_off = RTE_PKTMBUF_HEADROOM + hdr_size; + rxm->nb_segs = seg_num; + rxm->ol_flags = 0; + rxm->vlan_tci = 0; + rcvd_pkt_len = (uint32_t)(len[i] - hdr_size); + rxm->data_len = (uint16_t)(len[i] - hdr_size); + rxm->port = rxvq->port_id; + rx_pkts[nb_rx] = rxm; + prev = rxm; + /* Update rte_mbuf according to pi/pd header */ + if (zxdh_rx_update_mbuf(rxm, header) < 0) { + zxdh_discard_rxbuf(vq, rxm); + continue; + } + seg_res = seg_num - 1; + /* Merge remaining segments */ + while (seg_res != 0 && i < (num - 1)) { + i++; + rxm = rcv_pkts[i]; + rxm->data_off = RTE_PKTMBUF_HEADROOM; + rxm->data_len = (uint16_t)(len[i]); + + rcvd_pkt_len += (uint32_t)(len[i]); + prev->next = rxm; + prev = rxm; + rxm->next = NULL; + seg_res -= 1; + } + + if (!seg_res) { + if (rcvd_pkt_len != rx_pkts[nb_rx]->pkt_len) { + PMD_RX_LOG(ERR, "dropped rcvd_pkt_len %d pktlen %d.", + rcvd_pkt_len, rx_pkts[nb_rx]->pkt_len); + zxdh_discard_rxbuf(vq, rx_pkts[nb_rx]); + continue; + } + nb_rx++; + } + } + /* Last packet still need merge segments */ + while (seg_res != 0) { + uint16_t rcv_cnt = RTE_MIN((uint16_t)seg_res, ZXDH_MBUF_BURST_SZ); + uint16_t extra_idx = 0; + + rcv_cnt = zxdh_dequeue_burst_rx_packed(vq, rcv_pkts, len, rcv_cnt); + if (unlikely(rcv_cnt == 0)) { + PMD_RX_LOG(ERR, "No enough segments for packet."); + rte_pktmbuf_free(rx_pkts[nb_rx]); + break; + } + while (extra_idx < rcv_cnt) { + rxm = rcv_pkts[extra_idx]; + rxm->data_off = RTE_PKTMBUF_HEADROOM; + rxm->pkt_len = (uint32_t)(len[extra_idx]); + rxm->data_len = (uint16_t)(len[extra_idx]); + prev->next = rxm; + prev = rxm; + rxm->next = NULL; + rcvd_pkt_len += len[extra_idx]; + extra_idx += 1; + } + seg_res -= rcv_cnt; + if (!seg_res) { + if (rcvd_pkt_len != rx_pkts[nb_rx]->pkt_len) { + PMD_RX_LOG(ERR, "dropped rcvd_pkt_len %d pktlen %d.", + rcvd_pkt_len, rx_pkts[nb_rx]->pkt_len); + zxdh_discard_rxbuf(vq, rx_pkts[nb_rx]); + continue; + } + nb_rx++; + } + } + + /* Allocate new mbuf for the used descriptor */ + if (likely(!zxdh_queue_full(vq))) { + /* free_cnt may include mrg descs */ + uint16_t free_cnt = vq->vq_free_cnt; + struct rte_mbuf *new_pkts[free_cnt]; + + if 
(!rte_pktmbuf_alloc_bulk(rxvq->mpool, new_pkts, free_cnt)) { + error = zxdh_enqueue_recv_refill_packed(vq, new_pkts, free_cnt); + if (unlikely(error)) { + for (i = 0; i < free_cnt; i++) + rte_pktmbuf_free(new_pkts[i]); + } + nb_enqueued += free_cnt; + } else { + dev->data->rx_mbuf_alloc_failed += free_cnt; + } + } + if (likely(nb_enqueued)) { + if (unlikely(zxdh_queue_kick_prepare_packed(vq))) { + zxdh_queue_notify(vq); + PMD_RX_LOG(DEBUG, "Notified"); + } + } + return nb_rx; +} diff --git a/drivers/net/zxdh/zxdh_rxtx.h b/drivers/net/zxdh/zxdh_rxtx.h index 0a02d319b2..cc0004324a 100644 --- a/drivers/net/zxdh/zxdh_rxtx.h +++ b/drivers/net/zxdh/zxdh_rxtx.h @@ -45,5 +45,7 @@ struct __rte_cache_aligned zxdh_virtnet_tx { uint16_t zxdh_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); uint16_t zxdh_xmit_pkts_prepare(void *tx_queue __rte_unused, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); +uint16_t zxdh_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts); #endif /* ZXDH_RXTX_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 28867 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
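The RX patch decodes packet types by slicing the 16-bit pkt_type_out field of the PD header into four 4-bit indexes (outer L2, outer L3, outer L4, tunnel) and OR-ing together the matching entries of the zxdh_outer_l2/l3/l4 and zxdh_tunnel lookup tables; pkt_type_in is handled the same way for the inner layers when it is non-zero. Below is a minimal sketch of that nibble decode; the tables are placeholders with only a few entries populated, not the driver's RTE_PTYPE values.

#include <stdint.h>
#include <stdio.h>

/* Stand-in lookup tables; only the indexes used in main() are populated. */
static const char *demo_l2[16]  = { "none", "ether" };
static const char *demo_l3[16]  = { "none", "ipv4", "ipv4_ext", "ipv6" };
static const char *demo_l4[16]  = { "none", "tcp", "udp", "frag" };
static const char *demo_tun[16] = { "none", "ip", "gre", "vxlan" };

/* Same bit slicing as zxdh_rx_update_mbuf(): four nibbles, high to low. */
static void demo_decode(uint16_t pkt_type_out)
{
	printf("l2=%s l3=%s l4=%s tunnel=%s\n",
	       demo_l2[(pkt_type_out >> 12) & 0xF],
	       demo_l3[(pkt_type_out >> 8) & 0xF],
	       demo_l4[(pkt_type_out >> 4) & 0xF],
	       demo_tun[pkt_type_out & 0xF]);
}

int main(void)
{
	/* ether / ipv4 / tcp / no tunnel */
	demo_decode((1u << 12) | (1u << 8) | (1u << 4));
	return 0;
}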
* [PATCH v5 09/15] net/zxdh: link info update, set link up/down 2024-12-23 11:02 ` [PATCH v5 00/15] net/zxdh: updated net zxdh driver Junlong Wang ` (7 preceding siblings ...) 2024-12-23 11:02 ` [PATCH v5 08/15] net/zxdh: provided dev simple rx implementations Junlong Wang @ 2024-12-23 11:02 ` Junlong Wang 2024-12-23 11:02 ` [PATCH v5 10/15] net/zxdh: mac set/add/remove ops implementations Junlong Wang ` (7 subsequent siblings) 16 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-23 11:02 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 24395 bytes --] provided link info update, set link up /down, and link intr. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- doc/guides/nics/features/zxdh.ini | 2 + doc/guides/nics/zxdh.rst | 3 + drivers/net/zxdh/meson.build | 1 + drivers/net/zxdh/zxdh_ethdev.c | 22 +++- drivers/net/zxdh/zxdh_ethdev.h | 2 + drivers/net/zxdh/zxdh_ethdev_ops.c | 166 ++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev_ops.h | 14 +++ drivers/net/zxdh/zxdh_msg.c | 60 ++++++++++ drivers/net/zxdh/zxdh_msg.h | 40 +++++++ drivers/net/zxdh/zxdh_np.c | 172 ++++++++++++++++++++++++++++- drivers/net/zxdh/zxdh_np.h | 20 ++++ drivers/net/zxdh/zxdh_tables.c | 15 +++ drivers/net/zxdh/zxdh_tables.h | 6 +- 13 files changed, 514 insertions(+), 9 deletions(-) create mode 100644 drivers/net/zxdh/zxdh_ethdev_ops.c create mode 100644 drivers/net/zxdh/zxdh_ethdev_ops.h diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index bb44e93fad..7da3aaced1 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -10,3 +10,5 @@ ARMv8 = Y SR-IOV = Y Multiprocess aware = Y Scattered Rx = Y +Link status = Y +Link status event = Y diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index f42db9c1f1..fdbc3b3923 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -21,6 +21,9 @@ Features of the ZXDH PMD are: - Multiple queues for TX and RX - SR-IOV VF - Scattered and gather for TX and RX +- Link Auto-negotiation +- Link state information +- Set Link down or up Driver compilation and testing diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build index 20b2cf484a..48f8f5e1ee 100644 --- a/drivers/net/zxdh/meson.build +++ b/drivers/net/zxdh/meson.build @@ -22,4 +22,5 @@ sources = files( 'zxdh_np.c', 'zxdh_tables.c', 'zxdh_rxtx.c', + 'zxdh_ethdev_ops.c', ) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index bc4d2a937b..e6056db14a 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -16,6 +16,7 @@ #include "zxdh_np.h" #include "zxdh_tables.h" #include "zxdh_rxtx.h" +#include "zxdh_ethdev_ops.h" struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; struct zxdh_shared_data *zxdh_shared_data; @@ -105,12 +106,18 @@ static void zxdh_devconf_intr_handler(void *param) { struct rte_eth_dev *dev = param; + struct zxdh_hw *hw = dev->data->dev_private; + + uint8_t isr = zxdh_pci_isr(hw); if (zxdh_intr_unmask(dev) < 0) PMD_DRV_LOG(ERR, "interrupt enable failed"); + if (isr & ZXDH_PCI_ISR_CONFIG) { + if (zxdh_dev_link_update(dev, 0) == 0) + rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_LSC, NULL); + } } - /* Interrupt handler triggered by NIC for handling specific interrupt. 
*/ static void zxdh_fromriscv_intr_handler(void *param) @@ -914,6 +921,13 @@ zxdh_dev_stop(struct rte_eth_dev *dev) PMD_DRV_LOG(ERR, "intr disable failed"); return ret; } + + ret = zxdh_dev_set_link_down(dev); + if (ret) { + PMD_DRV_LOG(ERR, "set port %s link down failed!", dev->device->name); + return ret; + } + for (i = 0; i < dev->data->nb_rx_queues; i++) dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED; for (i = 0; i < dev->data->nb_tx_queues; i++) @@ -1012,6 +1026,9 @@ zxdh_dev_start(struct rte_eth_dev *dev) vq = hw->vqs[logic_qidx]; zxdh_queue_notify(vq); } + + zxdh_dev_set_link_up(dev); + for (i = 0; i < dev->data->nb_rx_queues; i++) dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED; for (i = 0; i < dev->data->nb_tx_queues; i++) @@ -1031,6 +1048,9 @@ static const struct eth_dev_ops zxdh_eth_dev_ops = { .tx_queue_setup = zxdh_dev_tx_queue_setup, .rx_queue_intr_enable = zxdh_dev_rx_queue_intr_enable, .rx_queue_intr_disable = zxdh_dev_rx_queue_intr_disable, + .link_update = zxdh_dev_link_update, + .dev_set_link_up = zxdh_dev_set_link_up, + .dev_set_link_down = zxdh_dev_set_link_down, }; static int32_t diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index b1f398b28e..c0b719062c 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -72,6 +72,7 @@ struct zxdh_hw { uint64_t guest_features; uint32_t max_queue_pairs; uint32_t speed; + uint32_t speed_mode; uint32_t notify_off_multiplier; uint16_t *notify_base; uint16_t pcie_id; @@ -93,6 +94,7 @@ struct zxdh_hw { uint8_t panel_id; uint8_t has_tx_offload; uint8_t has_rx_offload; + uint8_t admin_status; }; struct zxdh_dtb_shared_data { diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.c b/drivers/net/zxdh/zxdh_ethdev_ops.c new file mode 100644 index 0000000000..5a0af98cc0 --- /dev/null +++ b/drivers/net/zxdh/zxdh_ethdev_ops.c @@ -0,0 +1,166 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#include "zxdh_ethdev.h" +#include "zxdh_pci.h" +#include "zxdh_msg.h" +#include "zxdh_ethdev_ops.h" +#include "zxdh_tables.h" +#include "zxdh_logs.h" + +static int32_t zxdh_config_port_status(struct rte_eth_dev *dev, uint16_t link_status) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_port_attr_table port_attr = {0}; + struct zxdh_msg_info msg_info = {0}; + int32_t ret = 0; + + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "write port_attr failed"); + return -1; + } + port_attr.is_up = link_status; + + ret = zxdh_set_port_attr(hw->vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "write port_attr failed"); + return -1; + } + } else { + struct zxdh_port_attr_set_msg *port_attr_msg = &msg_info.data.port_attr_msg; + + zxdh_msg_head_build(hw, ZXDH_PORT_ATTRS_SET, &msg_info); + port_attr_msg->mode = ZXDH_PORT_ATTR_IS_UP_FLAG; + port_attr_msg->value = link_status; + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_PORT_ATTR_IS_UP_FLAG); + return ret; + } + } + return ret; +} + +static int32_t +zxdh_link_info_get(struct rte_eth_dev *dev, struct rte_eth_link *link) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_info msg_info = {0}; + struct zxdh_msg_reply_info reply_info = {0}; + uint16_t status = 0; + int32_t ret = 0; + + if (zxdh_pci_with_feature(hw, ZXDH_NET_F_STATUS)) + zxdh_pci_read_dev_config(hw, offsetof(struct zxdh_net_config, 
status), + &status, sizeof(status)); + + link->link_status = status; + + if (status == RTE_ETH_LINK_DOWN) { + link->link_speed = RTE_ETH_SPEED_NUM_UNKNOWN; + link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX; + } else { + zxdh_agent_msg_build(hw, ZXDH_MAC_LINK_GET, &msg_info); + + ret = zxdh_send_msg_to_riscv(dev, &msg_info, sizeof(struct zxdh_msg_info), + &reply_info, sizeof(struct zxdh_msg_reply_info), + ZXDH_BAR_MODULE_MAC); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_MAC_LINK_GET); + return -1; + } + link->link_speed = reply_info.reply_body.link_msg.speed; + hw->speed_mode = reply_info.reply_body.link_msg.speed_modes; + if ((reply_info.reply_body.link_msg.duplex & RTE_ETH_LINK_FULL_DUPLEX) == + RTE_ETH_LINK_FULL_DUPLEX) + link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX; + else + link->link_duplex = RTE_ETH_LINK_HALF_DUPLEX; + } + hw->speed = link->link_speed; + + return 0; +} + +static int zxdh_set_link_status(struct rte_eth_dev *dev, uint8_t link_status) +{ + uint16_t curr_link_status = dev->data->dev_link.link_status; + + struct rte_eth_link link; + struct zxdh_hw *hw = dev->data->dev_private; + int32_t ret = 0; + + if (link_status == curr_link_status) { + PMD_DRV_LOG(DEBUG, "curr_link_status %u", curr_link_status); + return 0; + } + + hw->admin_status = link_status; + ret = zxdh_link_info_get(dev, &link); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to get link status from hw"); + return ret; + } + dev->data->dev_link.link_status = hw->admin_status & link.link_status; + + if (dev->data->dev_link.link_status == RTE_ETH_LINK_UP) { + dev->data->dev_link.link_speed = link.link_speed; + dev->data->dev_link.link_duplex = link.link_duplex; + } else { + dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN; + dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX; + } + return zxdh_config_port_status(dev, dev->data->dev_link.link_status); +} + +int zxdh_dev_set_link_up(struct rte_eth_dev *dev) +{ + int ret = zxdh_set_link_status(dev, RTE_ETH_LINK_UP); + + if (ret) + PMD_DRV_LOG(ERR, "Set link up failed, code:%d", ret); + + return ret; +} + +int32_t zxdh_dev_link_update(struct rte_eth_dev *dev, int32_t wait_to_complete __rte_unused) +{ + struct rte_eth_link link; + struct zxdh_hw *hw = dev->data->dev_private; + int32_t ret = 0; + + memset(&link, 0, sizeof(link)); + link.link_duplex = hw->duplex; + link.link_speed = hw->speed; + link.link_autoneg = RTE_ETH_LINK_AUTONEG; + + ret = zxdh_link_info_get(dev, &link); + if (ret != 0) { + PMD_DRV_LOG(ERR, " Failed to get link status from hw"); + return ret; + } + link.link_status &= hw->admin_status; + if (link.link_status == RTE_ETH_LINK_DOWN) + link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN; + + ret = zxdh_config_port_status(dev, link.link_status); + if (ret != 0) { + PMD_DRV_LOG(ERR, "set port attr %d failed.", link.link_status); + return ret; + } + return rte_eth_linkstatus_set(dev, &link); +} + +int zxdh_dev_set_link_down(struct rte_eth_dev *dev) +{ + int ret = zxdh_set_link_status(dev, RTE_ETH_LINK_DOWN); + + if (ret) + PMD_DRV_LOG(ERR, "Set link down failed"); + return ret; +} diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.h b/drivers/net/zxdh/zxdh_ethdev_ops.h new file mode 100644 index 0000000000..c6d6ca56fd --- /dev/null +++ b/drivers/net/zxdh/zxdh_ethdev_ops.h @@ -0,0 +1,14 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#ifndef ZXDH_ETHDEV_OPS_H +#define ZXDH_ETHDEV_OPS_H + +#include "zxdh_ethdev.h" + +int zxdh_dev_set_link_up(struct 
rte_eth_dev *dev); +int zxdh_dev_set_link_down(struct rte_eth_dev *dev); +int32_t zxdh_dev_link_update(struct rte_eth_dev *dev, int32_t wait_to_complete __rte_unused); + +#endif /* ZXDH_ETHDEV_OPS_H */ diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c index aa2e10fd45..a6e19bbdd8 100644 --- a/drivers/net/zxdh/zxdh_msg.c +++ b/drivers/net/zxdh/zxdh_msg.c @@ -1134,6 +1134,54 @@ int zxdh_vf_send_msg_to_pf(struct rte_eth_dev *dev, void *msg_req, return 0; } +int32_t zxdh_send_msg_to_riscv(struct rte_eth_dev *dev, void *msg_req, + uint16_t msg_req_len, void *reply, uint16_t reply_len, + enum ZXDH_BAR_MODULE_ID module_id) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_recviver_mem result = {0}; + struct zxdh_msg_reply_info reply_info = {0}; + + if (reply) { + RTE_ASSERT(reply_len < sizeof(zxdh_msg_reply_info)); + result.recv_buffer = reply; + result.buffer_len = reply_len; + } else { + result.recv_buffer = &reply_info; + result.buffer_len = sizeof(reply_info); + } + struct zxdh_msg_reply_head *reply_head = + &(((struct zxdh_msg_reply_info *)result.recv_buffer)->reply_head); + struct zxdh_msg_reply_body *reply_body = + &(((struct zxdh_msg_reply_info *)result.recv_buffer)->reply_body); + + struct zxdh_pci_bar_msg in = { + .payload_addr = &msg_req, + .payload_len = msg_req_len, + .virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_CTRLCH_OFFSET), + .src = hw->is_pf ? ZXDH_MSG_CHAN_END_PF : ZXDH_MSG_CHAN_END_VF, + .dst = ZXDH_MSG_CHAN_END_RISC, + .module_id = module_id, + .src_pcieid = hw->pcie_id, + }; + + if (zxdh_bar_chan_sync_msg_send(&in, &result) != ZXDH_BAR_MSG_OK) { + PMD_MSG_LOG(ERR, "Failed to send sync messages or receive response"); + return -1; + } + if (reply_head->flag != ZXDH_MSG_REPS_OK) { + PMD_MSG_LOG(ERR, "vf[%d] get pf reply failed: reply_head flag : 0x%x(0xff is OK).replylen %d", + hw->vport.vfid, reply_head->flag, reply_head->reps_len); + return -1; + } + if (reply_body->flag != ZXDH_REPS_SUCC) { + PMD_MSG_LOG(ERR, "vf[%d] msg processing failed", hw->vfid); + return -1; + } + + return 0; +} + void zxdh_msg_head_build(struct zxdh_hw *hw, enum zxdh_msg_type type, struct zxdh_msg_info *msg_info) { @@ -1144,3 +1192,15 @@ void zxdh_msg_head_build(struct zxdh_hw *hw, enum zxdh_msg_type type, msghead->vf_id = hw->vport.vfid; msghead->pcieid = hw->pcie_id; } + +void zxdh_agent_msg_build(struct zxdh_hw *hw, enum zxdh_agent_msg_type type, + struct zxdh_msg_info *msg_info) +{ + struct zxdh_agent_msg_head *agent_head = &msg_info->agent_msg_head; + + agent_head->msg_type = type; + agent_head->panel_id = hw->panel_id; + agent_head->phyport = hw->phyport; + agent_head->vf_id = hw->vfid; + agent_head->pcie_id = hw->pcie_id; +} diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index 613ca71170..a78075c914 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -164,11 +164,18 @@ enum pciebar_layout_type { ZXDH_URI_MAX, }; +/* riscv msg opcodes */ +enum zxdh_agent_msg_type { + ZXDH_MAC_LINK_GET = 14, +}; + enum zxdh_msg_type { ZXDH_NULL = 0, ZXDH_VF_PORT_INIT = 1, ZXDH_VF_PORT_UNINIT = 2, + ZXDH_PORT_ATTRS_SET = 25, + ZXDH_MSG_TYPE_END, }; @@ -261,6 +268,15 @@ struct zxdh_offset_get_msg { uint16_t type; }; +struct zxdh_link_info_msg { + uint8_t autoneg; + uint8_t link_state; + uint8_t blink_enable; + uint8_t duplex; + uint32_t speed_modes; + uint32_t speed; +} __rte_packed; + struct zxdh_msg_reply_head { uint8_t flag; uint16_t reps_len; @@ -276,6 +292,7 @@ struct zxdh_msg_reply_body { enum 
zxdh_reps_flag flag; union { uint8_t reply_data[ZXDH_MSG_REPLY_BODY_MAX_LEN - sizeof(enum zxdh_reps_flag)]; + struct zxdh_link_info_msg link_msg; } __rte_packed; } __rte_packed; @@ -291,6 +308,12 @@ struct zxdh_vf_init_msg { uint8_t rss_enable; } __rte_packed; +struct zxdh_port_attr_set_msg { + uint32_t mode; + uint32_t value; + uint8_t allmulti_follow; +} __rte_packed; + struct zxdh_msg_head { enum zxdh_msg_type msg_type; uint16_t vport; @@ -298,14 +321,26 @@ struct zxdh_msg_head { uint16_t pcieid; } __rte_packed; +struct zxdh_agent_msg_head { + enum zxdh_agent_msg_type msg_type; + uint8_t panel_id; + uint8_t phyport; + uint8_t rsv; + uint16_t vf_id; + uint16_t pcie_id; +} __rte_packed; + struct zxdh_msg_info { union { uint8_t head_len[ZXDH_MSG_HEAD_LEN]; struct zxdh_msg_head msg_head; + struct zxdh_agent_msg_head agent_msg_head; }; union { uint8_t datainfo[ZXDH_MSG_REQ_BODY_MAX_LEN]; struct zxdh_vf_init_msg vf_init_msg; + struct zxdh_port_attr_set_msg port_attr_msg; + struct zxdh_link_info_msg link_msg; } __rte_packed data; } __rte_packed; @@ -326,5 +361,10 @@ void zxdh_msg_head_build(struct zxdh_hw *hw, enum zxdh_msg_type type, struct zxdh_msg_info *msg_info); int zxdh_vf_send_msg_to_pf(struct rte_eth_dev *dev, void *msg_req, uint16_t msg_req_len, void *reply, uint16_t reply_len); +void zxdh_agent_msg_build(struct zxdh_hw *hw, enum zxdh_agent_msg_type type, + struct zxdh_msg_info *msg_info); +int32_t zxdh_send_msg_to_riscv(struct rte_eth_dev *dev, void *msg_req, + uint16_t msg_req_len, void *reply, uint16_t reply_len, + enum ZXDH_BAR_MODULE_ID module_id); #endif /* ZXDH_MSG_H */ diff --git a/drivers/net/zxdh/zxdh_np.c b/drivers/net/zxdh/zxdh_np.c index 99a7dc11b4..1f06539263 100644 --- a/drivers/net/zxdh/zxdh_np.c +++ b/drivers/net/zxdh/zxdh_np.c @@ -36,6 +36,10 @@ ZXDH_SDT_TBL_DATA_T g_sdt_info[ZXDH_DEV_CHANNEL_MAX][ZXDH_DEV_SDT_ID_MAX]; #define ZXDH_COMM_GET_BIT_MASK(_inttype_, _bitqnt_)\ ((_inttype_)(((_bitqnt_) < 32))) +#define ZXDH_COMM_UINT32_GET_BITS(_uidst_, _uisrc_, _uistartpos_, _uilen_)\ + ((_uidst_) = (((_uisrc_) >> (_uistartpos_)) & \ + (ZXDH_COMM_GET_BIT_MASK(uint32_t, (_uilen_))))) + #define ZXDH_REG_DATA_MAX (128) #define ZXDH_COMM_CHECK_DEV_POINT(dev_id, point)\ @@ -1456,15 +1460,11 @@ zxdh_np_dtb_table_entry_write(uint32_t dev_id, return rc; } -static uint32_t +static void zxdh_np_sdt_tbl_data_get(uint32_t dev_id, uint32_t sdt_no, ZXDH_SDT_TBL_DATA_T *p_sdt_data) { - uint32_t rc = 0; - p_sdt_data->data_high32 = g_sdt_info[dev_id][sdt_no].data_high32; p_sdt_data->data_low32 = g_sdt_info[dev_id][sdt_no].data_low32; - - return rc; } int @@ -1507,7 +1507,7 @@ zxdh_np_dtb_table_entry_delete(uint32_t dev_id, pentry = delete_entries + entry_index; sdt_no = pentry->sdt_no; - rc = zxdh_np_sdt_tbl_data_get(dev_id, sdt_no, &sdt_tbl); + zxdh_np_sdt_tbl_data_get(dev_id, sdt_no, &sdt_tbl); switch (tbl_type) { case ZXDH_SDT_TBLT_ERAM: { @@ -1557,3 +1557,163 @@ zxdh_np_dtb_table_entry_delete(uint32_t dev_id, rte_free(p_data_buff_ex); return 0; } + +static uint32_t +zxdh_np_sdt_tbl_data_parser(uint32_t sdt_hig32, uint32_t sdt_low32, void *p_sdt_info) +{ + uint32_t tbl_type = 0; + uint32_t clutch_en = 0; + + ZXDH_SDTTBL_ERAM_T *p_sdt_eram = NULL; + ZXDH_SDTTBL_PORTTBL_T *p_sdt_porttbl = NULL; + + ZXDH_COMM_UINT32_GET_BITS(tbl_type, sdt_hig32, + ZXDH_SDT_H_TBL_TYPE_BT_POS, ZXDH_SDT_H_TBL_TYPE_BT_LEN); + ZXDH_COMM_UINT32_GET_BITS(clutch_en, sdt_low32, 0, 1); + + switch (tbl_type) { + case ZXDH_SDT_TBLT_ERAM: + { + p_sdt_eram = (ZXDH_SDTTBL_ERAM_T *)p_sdt_info; + p_sdt_eram->table_type = 
tbl_type; + p_sdt_eram->eram_clutch_en = clutch_en; + break; + } + + case ZXDH_SDT_TBLT_PORTTBL: + { + p_sdt_porttbl = (ZXDH_SDTTBL_PORTTBL_T *)p_sdt_info; + p_sdt_porttbl->table_type = tbl_type; + p_sdt_porttbl->porttbl_clutch_en = clutch_en; + break; + } + default: + { + PMD_DRV_LOG(ERR, "SDT table_type[ %d ] is invalid!", tbl_type); + return 1; + } + } + + return 0; +} + +static uint32_t +zxdh_np_soft_sdt_tbl_get(uint32_t dev_id, uint32_t sdt_no, void *p_sdt_info) +{ + ZXDH_SDT_TBL_DATA_T sdt_tbl = {0}; + uint32_t rc; + + zxdh_np_sdt_tbl_data_get(dev_id, sdt_no, &sdt_tbl); + + rc = zxdh_np_sdt_tbl_data_parser(sdt_tbl.data_high32, sdt_tbl.data_low32, p_sdt_info); + if (rc != 0) + PMD_DRV_LOG(ERR, "dpp sdt [%d] tbl_data_parser error.", sdt_no); + + return rc; +} + +static void +zxdh_np_eram_index_cal(uint32_t eram_mode, uint32_t index, + uint32_t *p_row_index, uint32_t *p_col_index) +{ + uint32_t row_index = 0; + uint32_t col_index = 0; + + switch (eram_mode) { + case ZXDH_ERAM128_TBL_128b: + { + row_index = index; + break; + } + case ZXDH_ERAM128_TBL_64b: + { + row_index = (index >> 1); + col_index = index & 0x1; + break; + } + case ZXDH_ERAM128_TBL_1b: + { + row_index = (index >> 7); + col_index = index & 0x7F; + break; + } + } + *p_row_index = row_index; + *p_col_index = col_index; +} + +static uint32_t +zxdh_np_dtb_eram_data_get(uint32_t dev_id, uint32_t queue_id, uint32_t sdt_no, + ZXDH_DTB_ERAM_ENTRY_INFO_T *p_dump_eram_entry) +{ + uint32_t index = p_dump_eram_entry->index; + uint32_t *p_data = p_dump_eram_entry->p_data; + ZXDH_SDTTBL_ERAM_T sdt_eram_info = {0}; + uint32_t temp_data[4] = {0}; + uint32_t row_index = 0; + uint32_t col_index = 0; + uint32_t rd_mode; + uint32_t rc; + + rc = zxdh_np_soft_sdt_tbl_get(queue_id, sdt_no, &sdt_eram_info); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rc, "dpp_soft_sdt_tbl_get"); + rd_mode = sdt_eram_info.eram_mode; + + zxdh_np_eram_index_cal(rd_mode, index, &row_index, &col_index); + + switch (rd_mode) { + case ZXDH_ERAM128_TBL_128b: + { + memcpy(p_data, temp_data, (128 / 8)); + break; + } + case ZXDH_ERAM128_TBL_64b: + { + memcpy(p_data, temp_data + ((1 - col_index) << 1), (64 / 8)); + break; + } + case ZXDH_ERAM128_TBL_1b: + { + ZXDH_COMM_UINT32_GET_BITS(p_data[0], *(temp_data + + (3 - col_index / 32)), (col_index % 32), 1); + break; + } + } + return rc; +} + +int +zxdh_np_dtb_table_entry_get(uint32_t dev_id, + uint32_t queue_id, + ZXDH_DTB_USER_ENTRY_T *get_entry, + uint32_t srh_mode) +{ + ZXDH_SDT_TBL_DATA_T sdt_tbl = {0}; + uint32_t tbl_type = 0; + uint32_t rc; + uint32_t sdt_no; + + sdt_no = get_entry->sdt_no; + zxdh_np_sdt_tbl_data_get(srh_mode, sdt_no, &sdt_tbl); + + ZXDH_COMM_UINT32_GET_BITS(tbl_type, sdt_tbl.data_high32, + ZXDH_SDT_H_TBL_TYPE_BT_POS, ZXDH_SDT_H_TBL_TYPE_BT_LEN); + switch (tbl_type) { + case ZXDH_SDT_TBLT_ERAM: + { + rc = zxdh_np_dtb_eram_data_get(dev_id, + queue_id, + sdt_no, + (ZXDH_DTB_ERAM_ENTRY_INFO_T *)get_entry->p_entry_data); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_dtb_eram_data_get"); + break; + } + default: + { + PMD_DRV_LOG(ERR, "SDT table_type[ %d ] is invalid!", tbl_type); + return 1; + } + } + + return 0; +} diff --git a/drivers/net/zxdh/zxdh_np.h b/drivers/net/zxdh/zxdh_np.h index 42a652dd6b..ac3931ba65 100644 --- a/drivers/net/zxdh/zxdh_np.h +++ b/drivers/net/zxdh/zxdh_np.h @@ -514,11 +514,31 @@ typedef struct zxdh_sdt_tbl_data_t { uint32_t data_low32; } ZXDH_SDT_TBL_DATA_T; +typedef struct zxdh_sdt_tbl_etcam_t { + uint32_t table_type; + uint32_t etcam_id; + uint32_t etcam_key_mode; + uint32_t etcam_table_id; + 
uint32_t no_as_rsp_mode; + uint32_t as_en; + uint32_t as_eram_baddr; + uint32_t as_rsp_mode; + uint32_t etcam_table_depth; + uint32_t etcam_clutch_en; +} ZXDH_SDTTBL_ETCAM_T; + +typedef struct zxdh_sdt_tbl_porttbl_t { + uint32_t table_type; + uint32_t porttbl_clutch_en; +} ZXDH_SDTTBL_PORTTBL_T; + int zxdh_np_host_init(uint32_t dev_id, ZXDH_DEV_INIT_CTRL_T *p_dev_init_ctrl); int zxdh_np_online_uninit(uint32_t dev_id, char *port_name, uint32_t queue_id); int zxdh_np_dtb_table_entry_write(uint32_t dev_id, uint32_t queue_id, uint32_t entrynum, ZXDH_DTB_USER_ENTRY_T *down_entries); int zxdh_np_dtb_table_entry_delete(uint32_t dev_id, uint32_t queue_id, uint32_t entrynum, ZXDH_DTB_USER_ENTRY_T *delete_entries); +int zxdh_np_dtb_table_entry_get(uint32_t dev_id, uint32_t queue_id, + ZXDH_DTB_USER_ENTRY_T *get_entry, uint32_t srh_mode); #endif /* ZXDH_NP_H */ diff --git a/drivers/net/zxdh/zxdh_tables.c b/drivers/net/zxdh/zxdh_tables.c index 9fd184e612..db0132ce3f 100644 --- a/drivers/net/zxdh/zxdh_tables.c +++ b/drivers/net/zxdh/zxdh_tables.c @@ -134,3 +134,18 @@ int zxdh_panel_table_init(struct rte_eth_dev *dev) return ret; } + +int +zxdh_get_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr) +{ + int ret; + + ZXDH_DTB_ERAM_ENTRY_INFO_T entry = {vfid, (uint32_t *)port_attr}; + ZXDH_DTB_USER_ENTRY_T user_entry_get = {ZXDH_SDT_VPORT_ATT_TABLE, &entry}; + + ret = zxdh_np_dtb_table_entry_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, &user_entry_get, 1); + if (ret != 0) + PMD_DRV_LOG(ERR, "get port_attr vfid:%d failed, ret:%d ", vfid, ret); + + return ret; +} diff --git a/drivers/net/zxdh/zxdh_tables.h b/drivers/net/zxdh/zxdh_tables.h index 5e9b36faee..8676a8b375 100644 --- a/drivers/net/zxdh/zxdh_tables.h +++ b/drivers/net/zxdh/zxdh_tables.h @@ -7,9 +7,10 @@ #include <stdint.h> -extern struct zxdh_dtb_shared_data g_dtb_data; - #define ZXDH_DEVICE_NO 0 +#define ZXDH_PORT_ATTR_IS_UP_FLAG 35 + +extern struct zxdh_dtb_shared_data g_dtb_data; struct zxdh_port_attr_table { #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN @@ -145,5 +146,6 @@ int zxdh_port_attr_init(struct rte_eth_dev *dev); int zxdh_panel_table_init(struct rte_eth_dev *dev); int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); int zxdh_port_attr_uninit(struct rte_eth_dev *dev); +int zxdh_get_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); #endif /* ZXDH_TABLES_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 52659 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
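The new ERAM read path in the link patch maps a logical entry index onto a 128-bit row plus a column offset according to the entry width: 128-bit entries occupy one row each, 64-bit entries pack two per row, and 1-bit entries pack 128 per row. zxdh_np_eram_index_cal() is exactly this shift-and-mask; the standalone restatement below uses illustrative names.

#include <stdint.h>
#include <stdio.h>

enum { DEMO_128B, DEMO_64B, DEMO_1B };

/* Mirror of the row/column calculation in zxdh_np_eram_index_cal(). */
static void demo_eram_index(int mode, uint32_t index,
			    uint32_t *row, uint32_t *col)
{
	*row = 0;
	*col = 0;
	switch (mode) {
	case DEMO_128B:          /* one entry per 128-bit row */
		*row = index;
		break;
	case DEMO_64B:           /* two entries per row */
		*row = index >> 1;
		*col = index & 0x1;
		break;
	case DEMO_1B:            /* 128 one-bit entries per row */
		*row = index >> 7;
		*col = index & 0x7F;
		break;
	}
}

int main(void)
{
	uint32_t row, col;

	demo_eram_index(DEMO_1B, 200, &row, &col);
	printf("1b entry 200 -> row %u, bit %u\n", row, col); /* row 1, bit 72 */
	return 0;
}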
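One thing worth a second look in zxdh_np.c: the pre-existing ZXDH_COMM_GET_BIT_MASK(_inttype_, _bitqnt_) shown in the hunk context expands to ((_inttype_)(((_bitqnt_) < 32))), i.e. to 0 or 1, so the new ZXDH_COMM_UINT32_GET_BITS() appears to extract at most one bit for any width below 32. That is harmless for the single-bit clutch_en reads above, but it would truncate multi-bit fields such as the SDT table type if ZXDH_SDT_H_TBL_TYPE_BT_LEN is wider than one bit. For comparison, a conventional form of the helper would look like the sketch below (demo_get_bits is a hypothetical name, not the driver's).

#include <stdint.h>
#include <stdio.h>

/* Extract `len` bits of `src` starting at bit `pos`, using the usual
 * ((1u << len) - 1) width mask; len >= 32 returns the whole word. */
static uint32_t demo_get_bits(uint32_t src, uint32_t pos, uint32_t len)
{
	uint32_t mask = (len >= 32) ? 0xFFFFFFFFu : ((1u << len) - 1u);

	return (src >> pos) & mask;
}

int main(void)
{
	/* pull a 4-bit field out of bits 7:4 */
	printf("0x%x\n", demo_get_bits(0x00000A50u, 4, 4)); /* prints 0x5 */
	return 0;
}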
* [PATCH v5 10/15] net/zxdh: mac set/add/remove ops implementations 2024-12-23 11:02 ` [PATCH v5 00/15] net/zxdh: updated net zxdh driver Junlong Wang ` (8 preceding siblings ...) 2024-12-23 11:02 ` [PATCH v5 09/15] net/zxdh: link info update, set link up/down Junlong Wang @ 2024-12-23 11:02 ` Junlong Wang 2024-12-23 11:02 ` [PATCH v5 11/15] net/zxdh: promisc/allmulti " Junlong Wang ` (6 subsequent siblings) 16 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-23 11:02 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 24778 bytes --] provided mac set/add/remove ops. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- doc/guides/nics/features/zxdh.ini | 2 + doc/guides/nics/zxdh.rst | 2 + drivers/net/zxdh/zxdh_common.c | 24 +++ drivers/net/zxdh/zxdh_common.h | 1 + drivers/net/zxdh/zxdh_ethdev.c | 33 ++++- drivers/net/zxdh/zxdh_ethdev.h | 3 + drivers/net/zxdh/zxdh_ethdev_ops.c | 231 +++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev_ops.h | 4 + drivers/net/zxdh/zxdh_msg.h | 12 ++ drivers/net/zxdh/zxdh_np.h | 5 + drivers/net/zxdh/zxdh_tables.c | 197 ++++++++++++++++++++++++ drivers/net/zxdh/zxdh_tables.h | 36 +++++ 12 files changed, 548 insertions(+), 2 deletions(-) diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index 7da3aaced1..dc09fe3453 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -12,3 +12,5 @@ Multiprocess aware = Y Scattered Rx = Y Link status = Y Link status event = Y +Unicast MAC filter = Y +Multicast MAC filter = Y diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index fdbc3b3923..e0b0776aca 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -24,6 +24,8 @@ Features of the ZXDH PMD are: - Link Auto-negotiation - Link state information - Set Link down or up +- Unicast MAC filter +- Multicast MAC filter Driver compilation and testing diff --git a/drivers/net/zxdh/zxdh_common.c b/drivers/net/zxdh/zxdh_common.c index 4f18c97ed7..75883a8897 100644 --- a/drivers/net/zxdh/zxdh_common.c +++ b/drivers/net/zxdh/zxdh_common.c @@ -256,6 +256,30 @@ zxdh_panelid_get(struct rte_eth_dev *dev, uint8_t *panelid) return ret; } +static int +zxdh_get_res_hash_id(struct zxdh_res_para *in, uint8_t *hash_id) +{ + uint8_t reps = 0; + uint16_t reps_len = 0; + + if (zxdh_get_res_info(in, ZXDH_TBL_FIELD_HASHID, &reps, &reps_len) != ZXDH_BAR_MSG_OK) + return -1; + + *hash_id = reps; + return ZXDH_BAR_MSG_OK; +} + +int32_t +zxdh_hashidx_get(struct rte_eth_dev *dev, uint8_t *hash_idx) +{ + struct zxdh_res_para param; + + zxdh_fill_res_para(dev, ¶m); + int32_t ret = zxdh_get_res_hash_id(¶m, hash_idx); + + return ret; +} + uint32_t zxdh_read_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg) { diff --git a/drivers/net/zxdh/zxdh_common.h b/drivers/net/zxdh/zxdh_common.h index 72c29e1522..826f1fb95d 100644 --- a/drivers/net/zxdh/zxdh_common.h +++ b/drivers/net/zxdh/zxdh_common.h @@ -22,6 +22,7 @@ struct zxdh_res_para { int32_t zxdh_phyport_get(struct rte_eth_dev *dev, uint8_t *phyport); int32_t zxdh_panelid_get(struct rte_eth_dev *dev, uint8_t *pannelid); +int32_t zxdh_hashidx_get(struct rte_eth_dev *dev, uint8_t *hash_idx); uint32_t zxdh_read_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg); void zxdh_write_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg, uint32_t val); void zxdh_release_lock(struct zxdh_hw *hw); diff --git a/drivers/net/zxdh/zxdh_ethdev.c 
b/drivers/net/zxdh/zxdh_ethdev.c index e6056db14a..ea3c08be58 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -991,6 +991,23 @@ zxdh_set_rxtx_funcs(struct rte_eth_dev *eth_dev) return 0; } +static int +zxdh_mac_config(struct rte_eth_dev *eth_dev) +{ + struct zxdh_hw *hw = eth_dev->data->dev_private; + int ret = 0; + + if (hw->is_pf) { + ret = zxdh_set_mac_table(hw->vport.vport, + ð_dev->data->mac_addrs[0], hw->hash_search_index); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to add mac: port 0x%x", hw->vport.vport); + return ret; + } + } + return ret; +} + static int zxdh_dev_start(struct rte_eth_dev *dev) { @@ -1029,6 +1046,10 @@ zxdh_dev_start(struct rte_eth_dev *dev) zxdh_dev_set_link_up(dev); + ret = zxdh_mac_config(hw->eth_dev); + if (ret) + PMD_DRV_LOG(ERR, " mac config failed"); + for (i = 0; i < dev->data->nb_rx_queues; i++) dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED; for (i = 0; i < dev->data->nb_tx_queues; i++) @@ -1051,6 +1072,9 @@ static const struct eth_dev_ops zxdh_eth_dev_ops = { .link_update = zxdh_dev_link_update, .dev_set_link_up = zxdh_dev_set_link_up, .dev_set_link_down = zxdh_dev_set_link_down, + .mac_addr_add = zxdh_dev_mac_addr_add, + .mac_addr_remove = zxdh_dev_mac_addr_remove, + .mac_addr_set = zxdh_dev_mac_addr_set, }; static int32_t @@ -1092,15 +1116,20 @@ zxdh_agent_comm(struct rte_eth_dev *eth_dev, struct zxdh_hw *hw) PMD_DRV_LOG(ERR, "Failed to get phyport"); return -1; } - PMD_DRV_LOG(INFO, "Get phyport success: 0x%x", hw->phyport); + PMD_DRV_LOG(DEBUG, "Get phyport success: 0x%x", hw->phyport); hw->vfid = zxdh_vport_to_vfid(hw->vport); + if (zxdh_hashidx_get(eth_dev, &hw->hash_search_index) != 0) { + PMD_DRV_LOG(ERR, "Failed to get hash idx"); + return -1; + } + if (zxdh_panelid_get(eth_dev, &hw->panel_id) != 0) { PMD_DRV_LOG(ERR, "Failed to get panel_id"); return -1; } - PMD_DRV_LOG(INFO, "Get panel id success: 0x%x", hw->panel_id); + PMD_DRV_LOG(DEBUG, "Get panel id success: 0x%x", hw->panel_id); return 0; } diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index c0b719062c..5b95cb1c2a 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -80,6 +80,8 @@ struct zxdh_hw { uint16_t port_id; uint16_t vfid; uint16_t queue_num; + uint16_t mc_num; + uint16_t uc_num; uint8_t *isr; uint8_t weak_barriers; @@ -92,6 +94,7 @@ struct zxdh_hw { uint8_t msg_chan_init; uint8_t phyport; uint8_t panel_id; + uint8_t hash_search_index; uint8_t has_tx_offload; uint8_t has_rx_offload; uint8_t admin_status; diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.c b/drivers/net/zxdh/zxdh_ethdev_ops.c index 5a0af98cc0..35e37483e3 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.c +++ b/drivers/net/zxdh/zxdh_ethdev_ops.c @@ -164,3 +164,234 @@ int zxdh_dev_set_link_down(struct rte_eth_dev *dev) PMD_DRV_LOG(ERR, "Set link down failed"); return ret; } + +int zxdh_dev_mac_addr_set(struct rte_eth_dev *dev, struct rte_ether_addr *addr) +{ + struct zxdh_hw *hw = (struct zxdh_hw *)dev->data->dev_private; + struct rte_ether_addr *old_addr = &dev->data->mac_addrs[0]; + struct zxdh_msg_info msg_info = {0}; + uint16_t ret = 0; + + if (!rte_is_valid_assigned_ether_addr(addr)) { + PMD_DRV_LOG(ERR, "mac address is invalid!"); + return -EINVAL; + } + + if (hw->is_pf) { + ret = zxdh_del_mac_table(hw->vport.vport, old_addr, hw->hash_search_index); + if (ret) { + PMD_DRV_LOG(ERR, "mac_addr_add failed, code:%d", ret); + return ret; + } + hw->uc_num--; + + ret = zxdh_set_mac_table(hw->vport.vport, addr, 
hw->hash_search_index); + if (ret) { + PMD_DRV_LOG(ERR, "mac_addr_add failed, code:%d", ret); + return ret; + } + hw->uc_num++; + } else { + struct zxdh_mac_filter *mac_filter = &msg_info.data.mac_filter_msg; + + mac_filter->filter_flag = ZXDH_MAC_UNFILTER; + mac_filter->mac_flag = true; + rte_memcpy(&mac_filter->mac, old_addr, sizeof(struct rte_ether_addr)); + zxdh_msg_head_build(hw, ZXDH_MAC_DEL, &msg_info); + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d ", + hw->vport.vport, ZXDH_MAC_DEL); + return ret; + } + hw->uc_num--; + PMD_DRV_LOG(INFO, "Success to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_MAC_DEL); + + mac_filter->filter_flag = ZXDH_MAC_UNFILTER; + rte_memcpy(&mac_filter->mac, addr, sizeof(struct rte_ether_addr)); + zxdh_msg_head_build(hw, ZXDH_MAC_ADD, &msg_info); + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d ", + hw->vport.vport, ZXDH_MAC_ADD); + return ret; + } + hw->uc_num++; + } + rte_ether_addr_copy(addr, (struct rte_ether_addr *)hw->mac_addr); + return ret; +} + +int zxdh_dev_mac_addr_add(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr, + uint32_t index, uint32_t vmdq __rte_unused) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_info msg_info = {0}; + uint16_t i, ret; + + if (index >= ZXDH_MAX_MAC_ADDRS) { + PMD_DRV_LOG(ERR, "Add mac index (%u) is out of range", index); + return -EINVAL; + } + + for (i = 0; (i != ZXDH_MAX_MAC_ADDRS); ++i) { + if (memcmp(&dev->data->mac_addrs[i], mac_addr, sizeof(*mac_addr))) + continue; + + PMD_DRV_LOG(INFO, "MAC address already configured"); + return -EADDRINUSE; + } + + if (hw->is_pf) { + if (rte_is_unicast_ether_addr(mac_addr)) { + if (hw->uc_num < ZXDH_MAX_UC_MAC_ADDRS) { + ret = zxdh_set_mac_table(hw->vport.vport, + mac_addr, hw->hash_search_index); + if (ret) { + PMD_DRV_LOG(ERR, "mac_addr_add failed, code:%d", ret); + return ret; + } + hw->uc_num++; + } else { + PMD_DRV_LOG(ERR, "MC_MAC is out of range, MAX_MC_MAC:%d", + ZXDH_MAX_MC_MAC_ADDRS); + return -EINVAL; + } + } else { + if (hw->mc_num < ZXDH_MAX_MC_MAC_ADDRS) { + ret = zxdh_set_mac_table(hw->vport.vport, + mac_addr, hw->hash_search_index); + if (ret) { + PMD_DRV_LOG(ERR, "mac_addr_add failed, code:%d", ret); + return ret; + } + hw->mc_num++; + } else { + PMD_DRV_LOG(ERR, "MC_MAC is out of range, MAX_MC_MAC:%d", + ZXDH_MAX_MC_MAC_ADDRS); + return -EINVAL; + } + } + } else { + struct zxdh_mac_filter *mac_filter = &msg_info.data.mac_filter_msg; + + mac_filter->filter_flag = ZXDH_MAC_FILTER; + rte_memcpy(&mac_filter->mac, mac_addr, sizeof(struct rte_ether_addr)); + zxdh_msg_head_build(hw, ZXDH_MAC_ADD, &msg_info); + if (rte_is_unicast_ether_addr(mac_addr)) { + if (hw->uc_num < ZXDH_MAX_UC_MAC_ADDRS) { + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, + sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_MAC_ADD); + return ret; + } + hw->uc_num++; + } else { + PMD_DRV_LOG(ERR, "MC_MAC is out of range, MAX_MC_MAC:%d", + ZXDH_MAX_MC_MAC_ADDRS); + return -EINVAL; + } + } else { + if (hw->mc_num < ZXDH_MAX_MC_MAC_ADDRS) { + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, + sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_MAC_ADD); + return ret; + } + hw->mc_num++; + } else { + 
PMD_DRV_LOG(ERR, "MC_MAC is out of range, MAX_MC_MAC:%d", + ZXDH_MAX_MC_MAC_ADDRS); + return -EINVAL; + } + } + } + dev->data->mac_addrs[index] = *mac_addr; + return 0; +} + +void zxdh_dev_mac_addr_remove(struct rte_eth_dev *dev __rte_unused, uint32_t index __rte_unused) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_info msg_info = {0}; + struct rte_ether_addr *mac_addr = &dev->data->mac_addrs[index]; + uint16_t ret = 0; + + if (index >= ZXDH_MAX_MAC_ADDRS) + return; + + if (hw->is_pf) { + if (rte_is_unicast_ether_addr(mac_addr)) { + if (hw->uc_num <= ZXDH_MAX_UC_MAC_ADDRS) { + ret = zxdh_del_mac_table(hw->vport.vport, + mac_addr, hw->hash_search_index); + if (ret) { + PMD_DRV_LOG(ERR, "mac_addr_del failed, code:%d", ret); + return; + } + hw->uc_num--; + } else { + PMD_DRV_LOG(ERR, "MC_MAC is out of range, MAX_MC_MAC:%d", + ZXDH_MAX_MC_MAC_ADDRS); + return; + } + } else { + if (hw->mc_num <= ZXDH_MAX_MC_MAC_ADDRS) { + ret = zxdh_del_mac_table(hw->vport.vport, + mac_addr, hw->hash_search_index); + if (ret) { + PMD_DRV_LOG(ERR, "mac_addr_del failed, code:%d", ret); + return; + } + hw->mc_num--; + } else { + PMD_DRV_LOG(ERR, "MC_MAC is out of range, MAX_MC_MAC:%d", + ZXDH_MAX_MC_MAC_ADDRS); + return; + } + } + } else { + struct zxdh_mac_filter *mac_filter = &msg_info.data.mac_filter_msg; + + mac_filter->filter_flag = ZXDH_MAC_FILTER; + rte_memcpy(&mac_filter->mac, mac_addr, sizeof(struct rte_ether_addr)); + zxdh_msg_head_build(hw, ZXDH_MAC_DEL, &msg_info); + if (rte_is_unicast_ether_addr(mac_addr)) { + if (hw->uc_num <= ZXDH_MAX_UC_MAC_ADDRS) { + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, + sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_MAC_DEL); + return; + } + hw->uc_num--; + } else { + PMD_DRV_LOG(ERR, "MC_MAC is out of range, MAX_MC_MAC:%d", + ZXDH_MAX_MC_MAC_ADDRS); + return; + } + } else { + if (hw->mc_num <= ZXDH_MAX_MC_MAC_ADDRS) { + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, + sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_MAC_DEL); + return; + } + hw->mc_num--; + } else { + PMD_DRV_LOG(ERR, "MC_MAC is out of range, MAX_MC_MAC:%d", + ZXDH_MAX_MC_MAC_ADDRS); + return; + } + } + } + memset(&dev->data->mac_addrs[index], 0, sizeof(struct rte_ether_addr)); +} diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.h b/drivers/net/zxdh/zxdh_ethdev_ops.h index c6d6ca56fd..4630bb70db 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.h +++ b/drivers/net/zxdh/zxdh_ethdev_ops.h @@ -10,5 +10,9 @@ int zxdh_dev_set_link_up(struct rte_eth_dev *dev); int zxdh_dev_set_link_down(struct rte_eth_dev *dev); int32_t zxdh_dev_link_update(struct rte_eth_dev *dev, int32_t wait_to_complete __rte_unused); +int zxdh_dev_mac_addr_add(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr, + uint32_t index, uint32_t vmdq); +int zxdh_dev_mac_addr_set(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr); +void zxdh_dev_mac_addr_remove(struct rte_eth_dev *dev, uint32_t index); #endif /* ZXDH_ETHDEV_OPS_H */ diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index a78075c914..44ce5d1b7f 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -46,6 +46,9 @@ #define ZXDH_MSG_REQ_BODY_MAX_LEN \ (ZXDH_MSG_PAYLOAD_MAX_LEN - ZXDH_MSG_HEAD_LEN) +#define ZXDH_MAC_FILTER 0xaa +#define ZXDH_MAC_UNFILTER 0xff + enum ZXDH_DRIVER_TYPE { ZXDH_MSG_CHAN_END_MPF = 0, ZXDH_MSG_CHAN_END_PF, @@ -173,6 +176,8 @@ enum 
zxdh_msg_type { ZXDH_NULL = 0, ZXDH_VF_PORT_INIT = 1, ZXDH_VF_PORT_UNINIT = 2, + ZXDH_MAC_ADD = 3, + ZXDH_MAC_DEL = 4, ZXDH_PORT_ATTRS_SET = 25, @@ -314,6 +319,12 @@ struct zxdh_port_attr_set_msg { uint8_t allmulti_follow; } __rte_packed; +struct zxdh_mac_filter { + uint8_t mac_flag; + uint8_t filter_flag; + struct rte_ether_addr mac; +} __rte_packed; + struct zxdh_msg_head { enum zxdh_msg_type msg_type; uint16_t vport; @@ -341,6 +352,7 @@ struct zxdh_msg_info { struct zxdh_vf_init_msg vf_init_msg; struct zxdh_port_attr_set_msg port_attr_msg; struct zxdh_link_info_msg link_msg; + struct zxdh_mac_filter mac_filter_msg; } __rte_packed data; } __rte_packed; diff --git a/drivers/net/zxdh/zxdh_np.h b/drivers/net/zxdh/zxdh_np.h index ac3931ba65..19d1f03f59 100644 --- a/drivers/net/zxdh/zxdh_np.h +++ b/drivers/net/zxdh/zxdh_np.h @@ -532,6 +532,11 @@ typedef struct zxdh_sdt_tbl_porttbl_t { uint32_t porttbl_clutch_en; } ZXDH_SDTTBL_PORTTBL_T; +typedef struct zxdh_dtb_hash_entry_info_t { + uint8_t *p_actu_key; + uint8_t *p_rst; +} ZXDH_DTB_HASH_ENTRY_INFO_T; + int zxdh_np_host_init(uint32_t dev_id, ZXDH_DEV_INIT_CTRL_T *p_dev_init_ctrl); int zxdh_np_online_uninit(uint32_t dev_id, char *port_name, uint32_t queue_id); int zxdh_np_dtb_table_entry_write(uint32_t dev_id, uint32_t queue_id, diff --git a/drivers/net/zxdh/zxdh_tables.c b/drivers/net/zxdh/zxdh_tables.c index db0132ce3f..f5b607584d 100644 --- a/drivers/net/zxdh/zxdh_tables.c +++ b/drivers/net/zxdh/zxdh_tables.c @@ -11,6 +11,10 @@ #define ZXDH_SDT_VPORT_ATT_TABLE 1 #define ZXDH_SDT_PANEL_ATT_TABLE 2 +#define ZXDH_MAC_HASH_INDEX_BASE 64 +#define ZXDH_MAC_HASH_INDEX(index) (ZXDH_MAC_HASH_INDEX_BASE + (index)) +#define ZXDH_MC_GROUP_NUM 4 + int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr) { @@ -149,3 +153,196 @@ zxdh_get_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr) return ret; } + +int +zxdh_set_mac_table(uint16_t vport, struct rte_ether_addr *addr, uint8_t hash_search_idx) +{ + struct zxdh_mac_unicast_table unicast_table = {0}; + struct zxdh_mac_multicast_table multicast_table = {0}; + union zxdh_virport_num vport_num = (union zxdh_virport_num)vport; + uint32_t ret; + uint16_t group_id = 0; + + if (rte_is_unicast_ether_addr(addr)) { + rte_memcpy(unicast_table.key.dmac_addr, addr, sizeof(struct rte_ether_addr)); + unicast_table.entry.hit_flag = 0; + unicast_table.entry.vfid = vport_num.vfid; + + ZXDH_DTB_HASH_ENTRY_INFO_T dtb_hash_entry = { + .p_actu_key = (uint8_t *)&unicast_table.key, + .p_rst = (uint8_t *)&unicast_table.entry + }; + ZXDH_DTB_USER_ENTRY_T entry_get = { + .sdt_no = ZXDH_MAC_HASH_INDEX(hash_search_idx), + .p_entry_data = (void *)&dtb_hash_entry + }; + + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &entry_get); + if (ret) { + PMD_DRV_LOG(ERR, "Insert mac_table failed"); + return -ret; + } + } else { + for (group_id = 0; group_id < 4; group_id++) { + multicast_table.key.vf_group_id = group_id; + rte_memcpy(multicast_table.key.mac_addr, + addr, sizeof(struct rte_ether_addr)); + ZXDH_DTB_HASH_ENTRY_INFO_T dtb_hash_entry = { + .p_actu_key = (uint8_t *)&multicast_table.key, + .p_rst = (uint8_t *)&multicast_table.entry + }; + + ZXDH_DTB_USER_ENTRY_T entry_get = { + .sdt_no = ZXDH_MAC_HASH_INDEX(hash_search_idx), + .p_entry_data = (void *)&dtb_hash_entry + }; + + ret = zxdh_np_dtb_table_entry_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, + &entry_get, 1); + uint8_t index = (vport_num.vfid % 64) / 32; + if (ret == 0) { + if (vport_num.vf_flag) { + if (group_id == 
vport_num.vfid / 64) + multicast_table.entry.mc_bitmap[index] |= + rte_cpu_to_be_32(UINT32_C(1) << + (31 - index)); + } else { + if (group_id == vport_num.vfid / 64) + multicast_table.entry.mc_pf_enable = + rte_cpu_to_be_32((1 << 30)); + } + } else { + if (vport_num.vf_flag) { + if (group_id == vport_num.vfid / 64) + multicast_table.entry.mc_bitmap[index] |= + rte_cpu_to_be_32(UINT32_C(1) << + (31 - index)); + else + multicast_table.entry.mc_bitmap[index] = + false; + } else { + if (group_id == vport_num.vfid / 64) + multicast_table.entry.mc_pf_enable = + rte_cpu_to_be_32((1 << 30)); + else + multicast_table.entry.mc_pf_enable = false; + } + } + + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, g_dtb_data.queueid, + 1, &entry_get); + if (ret) { + PMD_DRV_LOG(ERR, "add mac_table failed, code:%d", ret); + return -ret; + } + } + } + return 0; +} + +int +zxdh_del_mac_table(uint16_t vport, struct rte_ether_addr *addr, uint8_t hash_search_idx) +{ + struct zxdh_mac_unicast_table unicast_table = {0}; + struct zxdh_mac_multicast_table multicast_table = {0}; + union zxdh_virport_num vport_num = (union zxdh_virport_num)vport; + uint32_t ret, del_flag = 0; + uint16_t group_id = 0; + + if (rte_is_unicast_ether_addr(addr)) { + rte_memcpy(unicast_table.key.dmac_addr, addr, sizeof(struct rte_ether_addr)); + unicast_table.entry.hit_flag = 0; + unicast_table.entry.vfid = vport_num.vfid; + + ZXDH_DTB_HASH_ENTRY_INFO_T dtb_hash_entry = { + .p_actu_key = (uint8_t *)&unicast_table.key, + .p_rst = (uint8_t *)&unicast_table.entry + }; + + ZXDH_DTB_USER_ENTRY_T entry_get = { + .sdt_no = ZXDH_MAC_HASH_INDEX(hash_search_idx), + .p_entry_data = (void *)&dtb_hash_entry + }; + + ret = zxdh_np_dtb_table_entry_delete(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &entry_get); + if (ret) { + PMD_DRV_LOG(ERR, "delete l2_fwd_hash_table failed, code:%d", ret); + return -ret; + } + } else { + multicast_table.key.vf_group_id = vport_num.vfid / 64; + rte_memcpy(multicast_table.key.mac_addr, addr, sizeof(struct rte_ether_addr)); + + ZXDH_DTB_HASH_ENTRY_INFO_T dtb_hash_entry = { + .p_actu_key = (uint8_t *)&multicast_table.key, + .p_rst = (uint8_t *)&multicast_table.entry + }; + + ZXDH_DTB_USER_ENTRY_T entry_get = { + .sdt_no = ZXDH_MAC_HASH_INDEX(hash_search_idx), + .p_entry_data = (void *)&dtb_hash_entry + }; + + ret = zxdh_np_dtb_table_entry_get(ZXDH_DEVICE_NO, + g_dtb_data.queueid, &entry_get, 1); + uint8_t index = (vport_num.vfid % 64) / 32; + if (vport_num.vf_flag) + multicast_table.entry.mc_bitmap[index] &= + ~(rte_cpu_to_be_32(UINT32_C(1) << (31 - index))); + else + multicast_table.entry.mc_pf_enable = 0; + + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &entry_get); + if (ret) { + PMD_DRV_LOG(ERR, "mac_addr_add mc_table failed, code:%d", ret); + return -ret; + } + + for (group_id = 0; group_id < ZXDH_MC_GROUP_NUM; group_id++) { + multicast_table.key.vf_group_id = group_id; + rte_memcpy(multicast_table.key.mac_addr, addr, + sizeof(struct rte_ether_addr)); + ZXDH_DTB_HASH_ENTRY_INFO_T dtb_hash_entry = { + .p_actu_key = (uint8_t *)&multicast_table.key, + .p_rst = (uint8_t *)&multicast_table.entry + }; + ZXDH_DTB_USER_ENTRY_T entry_get = { + .sdt_no = ZXDH_MAC_HASH_INDEX(hash_search_idx), + .p_entry_data = (void *)&dtb_hash_entry + }; + + ret = zxdh_np_dtb_table_entry_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, + &entry_get, 1); + if (multicast_table.entry.mc_bitmap[0] == 0 && + multicast_table.entry.mc_bitmap[1] == 0 && + multicast_table.entry.mc_pf_enable == 0) { + if (group_id == (ZXDH_MC_GROUP_NUM 
- 1)) + del_flag = 1; + } else { + break; + } + } + if (del_flag) { + for (group_id = 0; group_id < ZXDH_MC_GROUP_NUM; group_id++) { + multicast_table.key.vf_group_id = group_id; + rte_memcpy(multicast_table.key.mac_addr, addr, + sizeof(struct rte_ether_addr)); + ZXDH_DTB_HASH_ENTRY_INFO_T dtb_hash_entry = { + .p_actu_key = (uint8_t *)&multicast_table.key, + .p_rst = (uint8_t *)&multicast_table.entry + }; + ZXDH_DTB_USER_ENTRY_T entry_get = { + .sdt_no = ZXDH_MAC_HASH_INDEX(hash_search_idx), + .p_entry_data = (void *)&dtb_hash_entry + }; + + ret = zxdh_np_dtb_table_entry_delete(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &entry_get); + } + } + } + return 0; +} diff --git a/drivers/net/zxdh/zxdh_tables.h b/drivers/net/zxdh/zxdh_tables.h index 8676a8b375..f16c4923ef 100644 --- a/drivers/net/zxdh/zxdh_tables.h +++ b/drivers/net/zxdh/zxdh_tables.h @@ -142,10 +142,46 @@ struct zxdh_panel_table { uint32_t rsv_2; }; /* 16B */ +struct zxdh_mac_unicast_key { + uint16_t rsv; + uint8_t dmac_addr[6]; +}; + +struct zxdh_mac_unicast_entry { + uint8_t rsv1 : 7, + hit_flag : 1; + uint8_t rsv; + uint16_t vfid; +}; + +struct zxdh_mac_unicast_table { + struct zxdh_mac_unicast_key key; + struct zxdh_mac_unicast_entry entry; +}; + +struct zxdh_mac_multicast_key { + uint8_t rsv; + uint8_t vf_group_id; + uint8_t mac_addr[6]; +}; + +struct zxdh_mac_multicast_entry { + uint32_t mc_pf_enable; + uint32_t rsv1; + uint32_t mc_bitmap[2]; +}; + +struct zxdh_mac_multicast_table { + struct zxdh_mac_multicast_key key; + struct zxdh_mac_multicast_entry entry; +}; + int zxdh_port_attr_init(struct rte_eth_dev *dev); int zxdh_panel_table_init(struct rte_eth_dev *dev); int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); int zxdh_port_attr_uninit(struct rte_eth_dev *dev); int zxdh_get_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); +int zxdh_set_mac_table(uint16_t vport, struct rte_ether_addr *addr, uint8_t hash_search_idx); +int zxdh_del_mac_table(uint16_t vport, struct rte_ether_addr *addr, uint8_t hash_search_idx); #endif /* ZXDH_TABLES_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 69160 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
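For readers tracing the multicast path in zxdh_set_mac_table() above: a multicast MAC entry is replicated across ZXDH_MC_GROUP_NUM (4) vf_group_id slots of 64 VFs each, with membership recorded in a two-word big-endian bitmap per slot. A minimal standalone sketch of that (group, word, bit) mapping follows; the helper and its names are illustrative, not part of the patch:

#include <stdint.h>
#include <stdio.h>

/* Illustrative only: the VF-to-bit layout implied by the multicast
 * branch of zxdh_set_mac_table(): 4 groups x 64 VFs, two 32-bit
 * bitmap words per group, MSB-first. One bit per VF corresponds to
 * shifting by vfid % 32; note the patch as posted shifts by the
 * word index ((vfid % 64) / 32) instead.
 */
struct mc_slot {
	uint16_t group; /* vf_group_id: vfid / 64 */
	uint16_t word;  /* mc_bitmap[] index: (vfid % 64) / 32 */
	uint16_t bit;   /* bit within the word, MSB-first */
};

static struct mc_slot mc_slot_of(uint16_t vfid)
{
	struct mc_slot s;

	s.group = vfid / 64;
	s.word  = (vfid % 64) / 32;
	s.bit   = 31 - (vfid % 32);
	return s;
}

int main(void)
{
	uint16_t ids[] = { 0, 31, 32, 63, 64, 200 };
	unsigned int i;

	for (i = 0; i < sizeof(ids) / sizeof(ids[0]); i++) {
		struct mc_slot s = mc_slot_of(ids[i]);

		printf("vfid %3u -> group %u word %u bit %u\n",
		       ids[i], s.group, s.word, s.bit);
	}
	return 0;
}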
* [PATCH v5 11/15] net/zxdh: promisc/allmulti ops implementations 2024-12-23 11:02 ` [PATCH v5 00/15] net/zxdh: updated net zxdh driver Junlong Wang ` (9 preceding siblings ...) 2024-12-23 11:02 ` [PATCH v5 10/15] net/zxdh: mac set/add/remove ops implementations Junlong Wang @ 2024-12-23 11:02 ` Junlong Wang 2024-12-23 11:02 ` [PATCH v5 12/15] net/zxdh: vlan filter/ offload " Junlong Wang ` (5 subsequent siblings) 16 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-23 11:02 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 18478 bytes --] provided promiscuous/allmulticast ops. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- doc/guides/nics/features/zxdh.ini | 2 + doc/guides/nics/zxdh.rst | 2 + drivers/net/zxdh/zxdh_ethdev.c | 21 ++- drivers/net/zxdh/zxdh_ethdev.h | 2 + drivers/net/zxdh/zxdh_ethdev_ops.c | 128 +++++++++++++++++ drivers/net/zxdh/zxdh_ethdev_ops.h | 4 + drivers/net/zxdh/zxdh_msg.h | 10 ++ drivers/net/zxdh/zxdh_tables.c | 223 +++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_tables.h | 22 +++ 9 files changed, 413 insertions(+), 1 deletion(-) diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index dc09fe3453..e9b237e102 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -14,3 +14,5 @@ Link status = Y Link status event = Y Unicast MAC filter = Y Multicast MAC filter = Y +Promiscuous mode = Y +Allmulticast mode = Y diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index e0b0776aca..0399df1302 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -26,6 +26,8 @@ Features of the ZXDH PMD are: - Set Link down or up - Unicast MAC filter - Multicast MAC filter +- Promiscuous mode +- Multicast mode Driver compilation and testing diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index ea3c08be58..d4165aa80c 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -901,8 +901,16 @@ zxdh_tables_uninit(struct rte_eth_dev *dev) int ret; ret = zxdh_port_attr_uninit(dev); - if (ret) + if (ret) { PMD_DRV_LOG(ERR, "zxdh_port_attr_uninit failed"); + return ret; + } + + ret = zxdh_promisc_table_uninit(dev); + if (ret) { + PMD_DRV_LOG(ERR, "uninit promisc_table failed"); + return ret; + } return ret; } @@ -1075,6 +1083,10 @@ static const struct eth_dev_ops zxdh_eth_dev_ops = { .mac_addr_add = zxdh_dev_mac_addr_add, .mac_addr_remove = zxdh_dev_mac_addr_remove, .mac_addr_set = zxdh_dev_mac_addr_set, + .promiscuous_enable = zxdh_dev_promiscuous_enable, + .promiscuous_disable = zxdh_dev_promiscuous_disable, + .allmulticast_enable = zxdh_dev_allmulticast_enable, + .allmulticast_disable = zxdh_dev_allmulticast_disable, }; static int32_t @@ -1326,6 +1338,13 @@ zxdh_tables_init(struct rte_eth_dev *dev) PMD_DRV_LOG(ERR, " panel table init failed"); return ret; } + + ret = zxdh_promisc_table_init(dev); + if (ret) { + PMD_DRV_LOG(ERR, "promisc_table_init failed"); + return ret; + } + return ret; } diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index 5b95cb1c2a..3cdac5de73 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -98,6 +98,8 @@ struct zxdh_hw { uint8_t has_tx_offload; uint8_t has_rx_offload; uint8_t admin_status; + uint8_t promisc_status; + uint8_t allmulti_status; }; struct zxdh_dtb_shared_data { diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.c b/drivers/net/zxdh/zxdh_ethdev_ops.c index 
35e37483e3..ad3d10258c 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.c +++ b/drivers/net/zxdh/zxdh_ethdev_ops.c @@ -395,3 +395,131 @@ void zxdh_dev_mac_addr_remove(struct rte_eth_dev *dev __rte_unused, uint32_t ind } memset(&dev->data->mac_addrs[index], 0, sizeof(struct rte_ether_addr)); } + +int zxdh_dev_promiscuous_enable(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_info msg_info = {0}; + int16_t ret = 0; + + if (hw->promisc_status == 0) { + if (hw->is_pf) { + ret = zxdh_dev_unicast_table_set(hw, hw->vport.vport, true); + if (hw->allmulti_status == 0) + ret = zxdh_dev_multicast_table_set(hw, hw->vport.vport, true); + + } else { + struct zxdh_port_promisc_msg *promisc_msg = &msg_info.data.port_promisc_msg; + + zxdh_msg_head_build(hw, ZXDH_PORT_PROMISC_SET, &msg_info); + promisc_msg->mode = ZXDH_PROMISC_MODE; + promisc_msg->value = true; + if (hw->allmulti_status == 0) + promisc_msg->mc_follow = true; + + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_PROMISC_MODE); + return ret; + } + } + hw->promisc_status = 1; + } + return ret; +} + +int zxdh_dev_promiscuous_disable(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + int16_t ret = 0; + struct zxdh_msg_info msg_info = {0}; + + if (hw->promisc_status == 1) { + if (hw->is_pf) { + ret = zxdh_dev_unicast_table_set(hw, hw->vport.vport, false); + if (hw->allmulti_status == 0) + ret = zxdh_dev_multicast_table_set(hw, hw->vport.vport, false); + + } else { + struct zxdh_port_promisc_msg *promisc_msg = &msg_info.data.port_promisc_msg; + + zxdh_msg_head_build(hw, ZXDH_PORT_PROMISC_SET, &msg_info); + promisc_msg->mode = ZXDH_PROMISC_MODE; + promisc_msg->value = false; + if (hw->allmulti_status == 0) + promisc_msg->mc_follow = true; + + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_PROMISC_MODE); + return ret; + } + } + hw->promisc_status = 0; + } + return ret; +} + +int zxdh_dev_allmulticast_enable(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + int16_t ret = 0; + struct zxdh_msg_info msg_info = {0}; + + if (hw->allmulti_status == 0) { + if (hw->is_pf) { + ret = zxdh_dev_multicast_table_set(hw, hw->vport.vport, true); + } else { + struct zxdh_port_promisc_msg *promisc_msg = &msg_info.data.port_promisc_msg; + + zxdh_msg_head_build(hw, ZXDH_PORT_PROMISC_SET, &msg_info); + + promisc_msg->mode = ZXDH_ALLMULTI_MODE; + promisc_msg->value = true; + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_ALLMULTI_MODE); + return ret; + } + } + hw->allmulti_status = 1; + } + return ret; +} + +int zxdh_dev_allmulticast_disable(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + int16_t ret = 0; + struct zxdh_msg_info msg_info = {0}; + + if (hw->allmulti_status == 1) { + if (hw->is_pf) { + if (hw->promisc_status == 1) + goto end; + ret = zxdh_dev_multicast_table_set(hw, hw->vport.vport, false); + } else { + struct zxdh_port_promisc_msg *promisc_msg = &msg_info.data.port_promisc_msg; + + zxdh_msg_head_build(hw, ZXDH_PORT_PROMISC_SET, &msg_info); + if (hw->promisc_status == 1) + goto end; + promisc_msg->mode = ZXDH_ALLMULTI_MODE; + promisc_msg->value = false; + ret = 
zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_ALLMULTI_MODE); + return ret; + } + } + hw->allmulti_status = 0; + } + return ret; +end: + hw->allmulti_status = 0; + return ret; +} diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.h b/drivers/net/zxdh/zxdh_ethdev_ops.h index 4630bb70db..394ddedc0e 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.h +++ b/drivers/net/zxdh/zxdh_ethdev_ops.h @@ -14,5 +14,9 @@ int zxdh_dev_mac_addr_add(struct rte_eth_dev *dev, struct rte_ether_addr *mac_ad uint32_t index, uint32_t vmdq); int zxdh_dev_mac_addr_set(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr); void zxdh_dev_mac_addr_remove(struct rte_eth_dev *dev, uint32_t index); +int zxdh_dev_promiscuous_enable(struct rte_eth_dev *dev); +int zxdh_dev_promiscuous_disable(struct rte_eth_dev *dev); +int zxdh_dev_allmulticast_enable(struct rte_eth_dev *dev); +int zxdh_dev_allmulticast_disable(struct rte_eth_dev *dev); #endif /* ZXDH_ETHDEV_OPS_H */ diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index 44ce5d1b7f..2abf579a80 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -48,6 +48,8 @@ #define ZXDH_MAC_FILTER 0xaa #define ZXDH_MAC_UNFILTER 0xff +#define ZXDH_PROMISC_MODE 1 +#define ZXDH_ALLMULTI_MODE 2 enum ZXDH_DRIVER_TYPE { ZXDH_MSG_CHAN_END_MPF = 0, @@ -180,6 +182,7 @@ enum zxdh_msg_type { ZXDH_MAC_DEL = 4, ZXDH_PORT_ATTRS_SET = 25, + ZXDH_PORT_PROMISC_SET = 26, ZXDH_MSG_TYPE_END, }; @@ -325,6 +328,12 @@ struct zxdh_mac_filter { struct rte_ether_addr mac; } __rte_packed; +struct zxdh_port_promisc_msg { + uint8_t mode; + uint8_t value; + uint8_t mc_follow; +} __rte_packed; + struct zxdh_msg_head { enum zxdh_msg_type msg_type; uint16_t vport; @@ -353,6 +362,7 @@ struct zxdh_msg_info { struct zxdh_port_attr_set_msg port_attr_msg; struct zxdh_link_info_msg link_msg; struct zxdh_mac_filter mac_filter_msg; + struct zxdh_port_promisc_msg port_promisc_msg; } __rte_packed data; } __rte_packed; diff --git a/drivers/net/zxdh/zxdh_tables.c b/drivers/net/zxdh/zxdh_tables.c index f5b607584d..45aeb3e3e4 100644 --- a/drivers/net/zxdh/zxdh_tables.c +++ b/drivers/net/zxdh/zxdh_tables.c @@ -10,10 +10,15 @@ #define ZXDH_SDT_VPORT_ATT_TABLE 1 #define ZXDH_SDT_PANEL_ATT_TABLE 2 +#define ZXDH_SDT_BROCAST_ATT_TABLE 6 +#define ZXDH_SDT_UNICAST_ATT_TABLE 10 +#define ZXDH_SDT_MULTICAST_ATT_TABLE 11 #define ZXDH_MAC_HASH_INDEX_BASE 64 #define ZXDH_MAC_HASH_INDEX(index) (ZXDH_MAC_HASH_INDEX_BASE + (index)) #define ZXDH_MC_GROUP_NUM 4 +#define ZXDH_BASE_VFID 1152 +#define ZXDH_TABLE_HIT_FLAG 128 int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr) @@ -346,3 +351,221 @@ zxdh_del_mac_table(uint16_t vport, struct rte_ether_addr *addr, uint8_t hash_se } return 0; } + +int +zxdh_promisc_table_init(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint32_t ret, vf_group_id = 0; + struct zxdh_brocast_table brocast_table = {0}; + struct zxdh_unitcast_table uc_table = {0}; + struct zxdh_multicast_table mc_table = {0}; + + if (!hw->is_pf) + return 0; + + for (; vf_group_id < 4; vf_group_id++) { + brocast_table.flag = rte_be_to_cpu_32(ZXDH_TABLE_HIT_FLAG); + ZXDH_DTB_ERAM_ENTRY_INFO_T eram_brocast_entry = { + .index = ((hw->vfid - ZXDH_BASE_VFID) << 2) + vf_group_id, + .p_data = (uint32_t *)&brocast_table + }; + ZXDH_DTB_USER_ENTRY_T entry_brocast = { + .sdt_no = ZXDH_SDT_BROCAST_ATT_TABLE, + .p_entry_data = (void 
*)&eram_brocast_entry + }; + + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &entry_brocast); + if (ret) { + PMD_DRV_LOG(ERR, "write brocast table failed"); + return ret; + } + + uc_table.uc_flood_pf_enable = rte_be_to_cpu_32(ZXDH_TABLE_HIT_FLAG); + ZXDH_DTB_ERAM_ENTRY_INFO_T eram_uc_entry = { + .index = ((hw->vfid - ZXDH_BASE_VFID) << 2) + vf_group_id, + .p_data = (uint32_t *)&uc_table + }; + ZXDH_DTB_USER_ENTRY_T entry_unicast = { + .sdt_no = ZXDH_SDT_UNICAST_ATT_TABLE, + .p_entry_data = (void *)&eram_uc_entry + }; + + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &entry_unicast); + if (ret) { + PMD_DRV_LOG(ERR, "write unicast table failed"); + return ret; + } + + mc_table.mc_flood_pf_enable = rte_be_to_cpu_32(ZXDH_TABLE_HIT_FLAG); + ZXDH_DTB_ERAM_ENTRY_INFO_T eram_mc_entry = { + .index = ((hw->vfid - ZXDH_BASE_VFID) << 2) + vf_group_id, + .p_data = (uint32_t *)&mc_table + }; + ZXDH_DTB_USER_ENTRY_T entry_multicast = { + .sdt_no = ZXDH_SDT_MULTICAST_ATT_TABLE, + .p_entry_data = (void *)&eram_mc_entry + }; + + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, g_dtb_data.queueid, + 1, &entry_multicast); + if (ret) { + PMD_DRV_LOG(ERR, "write multicast table failed"); + return ret; + } + } + + return ret; +} + +int +zxdh_promisc_table_uninit(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint32_t ret, vf_group_id = 0; + struct zxdh_brocast_table brocast_table = {0}; + struct zxdh_unitcast_table uc_table = {0}; + struct zxdh_multicast_table mc_table = {0}; + + if (!hw->is_pf) + return 0; + + for (; vf_group_id < 4; vf_group_id++) { + brocast_table.flag = rte_be_to_cpu_32(ZXDH_TABLE_HIT_FLAG); + ZXDH_DTB_ERAM_ENTRY_INFO_T eram_brocast_entry = { + .index = ((hw->vfid - ZXDH_BASE_VFID) << 2) + vf_group_id, + .p_data = (uint32_t *)&brocast_table + }; + ZXDH_DTB_USER_ENTRY_T entry_brocast = { + .sdt_no = ZXDH_SDT_BROCAST_ATT_TABLE, + .p_entry_data = (void *)&eram_brocast_entry + }; + + ret = zxdh_np_dtb_table_entry_delete(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &entry_brocast); + if (ret) { + PMD_DRV_LOG(ERR, "write brocast table failed"); + return ret; + } + + uc_table.uc_flood_pf_enable = rte_be_to_cpu_32(ZXDH_TABLE_HIT_FLAG); + ZXDH_DTB_ERAM_ENTRY_INFO_T eram_uc_entry = { + .index = ((hw->vfid - ZXDH_BASE_VFID) << 2) + vf_group_id, + .p_data = (uint32_t *)&uc_table + }; + ZXDH_DTB_USER_ENTRY_T entry_unicast = { + .sdt_no = ZXDH_SDT_UNICAST_ATT_TABLE, + .p_entry_data = (void *)&eram_uc_entry + }; + + ret = zxdh_np_dtb_table_entry_delete(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &entry_unicast); + if (ret) { + PMD_DRV_LOG(ERR, "write unicast table failed"); + return ret; + } + + mc_table.mc_flood_pf_enable = rte_be_to_cpu_32(ZXDH_TABLE_HIT_FLAG); + ZXDH_DTB_ERAM_ENTRY_INFO_T eram_mc_entry = { + .index = ((hw->vfid - ZXDH_BASE_VFID) << 2) + vf_group_id, + .p_data = (uint32_t *)&mc_table + }; + ZXDH_DTB_USER_ENTRY_T entry_multicast = { + .sdt_no = ZXDH_SDT_MULTICAST_ATT_TABLE, + .p_entry_data = (void *)&eram_mc_entry + }; + + ret = zxdh_np_dtb_table_entry_delete(ZXDH_DEVICE_NO, g_dtb_data.queueid, + 1, &entry_multicast); + if (ret) { + PMD_DRV_LOG(ERR, "write multicast table failed"); + return ret; + } + } + + return ret; +} + +int +zxdh_dev_unicast_table_set(struct zxdh_hw *hw, uint16_t vport, bool enable) +{ + int16_t ret = 0; + struct zxdh_unitcast_table uc_table = {0}; + union zxdh_virport_num vport_num = (union zxdh_virport_num)vport; + + ZXDH_DTB_ERAM_ENTRY_INFO_T uc_table_entry = { + .index = 
((hw->vfid - ZXDH_BASE_VFID) << 2) + vport_num.vfid / 64, + .p_data = (uint32_t *)&uc_table + }; + ZXDH_DTB_USER_ENTRY_T entry = { + .sdt_no = ZXDH_SDT_UNICAST_ATT_TABLE, + .p_entry_data = (void *)&uc_table_entry + }; + + ret = zxdh_np_dtb_table_entry_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, &entry, 1); + if (ret) { + PMD_DRV_LOG(ERR, "unicast_table_get_failed:%d", hw->vfid); + return -ret; + } + + if (vport_num.vf_flag) { + if (enable) + uc_table.bitmap[(vport_num.vfid % 64) / 32] |= + UINT32_C(1) << (31 - (vport_num.vfid % 64) % 32); + else + uc_table.bitmap[(vport_num.vfid % 64) / 32] &= + ~(UINT32_C(1) << (31 - (vport_num.vfid % 64) % 32)); + } else { + uc_table.uc_flood_pf_enable = rte_be_to_cpu_32(ZXDH_TABLE_HIT_FLAG + (enable << 6)); + } + + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, g_dtb_data.queueid, 1, &entry); + if (ret) { + PMD_DRV_LOG(ERR, "unicast_table_set_failed:%d", hw->vfid); + return -ret; + } + return 0; +} + +int +zxdh_dev_multicast_table_set(struct zxdh_hw *hw, uint16_t vport, bool enable) +{ + int16_t ret = 0; + struct zxdh_multicast_table mc_table = {0}; + union zxdh_virport_num vport_num = (union zxdh_virport_num)vport; + + ZXDH_DTB_ERAM_ENTRY_INFO_T mc_table_entry = { + .index = ((hw->vfid - ZXDH_BASE_VFID) << 2) + vport_num.vfid / 64, + .p_data = (uint32_t *)&mc_table + }; + ZXDH_DTB_USER_ENTRY_T entry = { + .sdt_no = ZXDH_SDT_MULTICAST_ATT_TABLE, + .p_entry_data = (void *)&mc_table_entry + }; + + ret = zxdh_np_dtb_table_entry_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, &entry, 1); + if (ret) { + PMD_DRV_LOG(ERR, "allmulti_table_get_failed:%d", hw->vfid); + return -ret; + } + + if (vport_num.vf_flag) { + if (enable) + mc_table.bitmap[(vport_num.vfid % 64) / 32] |= + UINT32_C(1) << (31 - (vport_num.vfid % 64) % 32); + else + mc_table.bitmap[(vport_num.vfid % 64) / 32] &= + ~(UINT32_C(1) << (31 - (vport_num.vfid % 64) % 32)); + + } else { + mc_table.mc_flood_pf_enable = rte_be_to_cpu_32(ZXDH_TABLE_HIT_FLAG + (enable << 6)); + } + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, g_dtb_data.queueid, 1, &entry); + if (ret) { + PMD_DRV_LOG(ERR, "allmulti_table_set_failed:%d", hw->vfid); + return -ret; + } + return 0; +} diff --git a/drivers/net/zxdh/zxdh_tables.h b/drivers/net/zxdh/zxdh_tables.h index f16c4923ef..fb30c8f32e 100644 --- a/drivers/net/zxdh/zxdh_tables.h +++ b/drivers/net/zxdh/zxdh_tables.h @@ -176,6 +176,24 @@ struct zxdh_mac_multicast_table { struct zxdh_mac_multicast_entry entry; }; +struct zxdh_brocast_table { + uint32_t flag; + uint32_t rsv; + uint32_t bitmap[2]; +}; + +struct zxdh_unitcast_table { + uint32_t uc_flood_pf_enable; + uint32_t rsv; + uint32_t bitmap[2]; +}; + +struct zxdh_multicast_table { + uint32_t mc_flood_pf_enable; + uint32_t rsv; + uint32_t bitmap[2]; +}; + int zxdh_port_attr_init(struct rte_eth_dev *dev); int zxdh_panel_table_init(struct rte_eth_dev *dev); int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); @@ -183,5 +201,9 @@ int zxdh_port_attr_uninit(struct rte_eth_dev *dev); int zxdh_get_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); int zxdh_set_mac_table(uint16_t vport, struct rte_ether_addr *addr, uint8_t hash_search_idx); int zxdh_del_mac_table(uint16_t vport, struct rte_ether_addr *addr, uint8_t hash_search_idx); +int zxdh_promisc_table_init(struct rte_eth_dev *dev); +int zxdh_promisc_table_uninit(struct rte_eth_dev *dev); +int zxdh_dev_unicast_table_set(struct zxdh_hw *hw, uint16_t vport, bool enable); +int zxdh_dev_multicast_table_set(struct zxdh_hw *hw, uint16_t 
vport, bool enable); #endif /* ZXDH_TABLES_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 45335 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
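The unicast/multicast flood tables in this patch pack 64 VFs per ERAM entry (entry index ((vfid - ZXDH_BASE_VFID) << 2) + group), with membership bits laid out MSB-first across two 32-bit words. A self-contained sketch of the set/clear arithmetic used by zxdh_dev_unicast_table_set() and zxdh_dev_multicast_table_set(); the struct and function names here are illustrative:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Mirrors the flood-bitmap update in zxdh_dev_unicast_table_set():
 * within one ERAM entry, VF (vfid % 64) maps to bitmap[word] with
 * an MSB-first bit position inside that word.
 */
struct flood_entry {
	uint32_t flood_pf_enable;
	uint32_t rsv;
	uint32_t bitmap[2];
};

static void flood_vf_set(struct flood_entry *e, uint16_t vfid, bool enable)
{
	uint32_t word = (vfid % 64) / 32;
	uint32_t mask = UINT32_C(1) << (31 - (vfid % 64) % 32);

	if (enable)
		e->bitmap[word] |= mask;
	else
		e->bitmap[word] &= ~mask;
}

int main(void)
{
	struct flood_entry e = {0};

	flood_vf_set(&e, 5, true);  /* VF 5  -> bitmap[0], bit 26 */
	flood_vf_set(&e, 40, true); /* VF 40 -> bitmap[1], bit 23 */
	printf("bitmap[0]=0x%08x bitmap[1]=0x%08x\n",
	       e.bitmap[0], e.bitmap[1]);
	return 0;
}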
* [PATCH v5 12/15] net/zxdh: vlan filter/ offload ops implementations 2024-12-23 11:02 ` [PATCH v5 00/15] net/zxdh: updated net zxdh driver Junlong Wang ` (10 preceding siblings ...) 2024-12-23 11:02 ` [PATCH v5 11/15] net/zxdh: promisc/allmulti " Junlong Wang @ 2024-12-23 11:02 ` Junlong Wang 2024-12-23 11:02 ` [PATCH v5 13/15] net/zxdh: rss hash config/update, reta update/get Junlong Wang ` (4 subsequent siblings) 16 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-23 11:02 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 20602 bytes --] provided vlan filter, vlan offload ops. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- doc/guides/nics/features/zxdh.ini | 3 + doc/guides/nics/zxdh.rst | 3 + drivers/net/zxdh/zxdh_ethdev.c | 40 +++++- drivers/net/zxdh/zxdh_ethdev_ops.c | 223 +++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev_ops.h | 2 + drivers/net/zxdh/zxdh_msg.h | 22 +++ drivers/net/zxdh/zxdh_rxtx.c | 18 +++ drivers/net/zxdh/zxdh_tables.c | 99 +++++++++++++ drivers/net/zxdh/zxdh_tables.h | 10 +- 9 files changed, 417 insertions(+), 3 deletions(-) diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index e9b237e102..6fb006c2da 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -16,3 +16,6 @@ Unicast MAC filter = Y Multicast MAC filter = Y Promiscuous mode = Y Allmulticast mode = Y +VLAN filter = Y +VLAN offload = Y +QinQ offload = Y diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index 0399df1302..3a7585d123 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -28,6 +28,9 @@ Features of the ZXDH PMD are: - Multicast MAC filter - Promiscuous mode - Multicast mode +- VLAN filter and VLAN offload +- VLAN stripping and inserting +- QINQ stripping and inserting Driver compilation and testing diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index d4165aa80c..7809b24d8b 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -758,6 +758,34 @@ zxdh_alloc_queues(struct rte_eth_dev *dev, uint16_t nr_vq) return 0; } +static int +zxdh_vlan_offload_configure(struct rte_eth_dev *dev) +{ + int ret; + int mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK | RTE_ETH_QINQ_STRIP_MASK; + + ret = zxdh_dev_vlan_offload_set(dev, mask); + if (ret) { + PMD_DRV_LOG(ERR, "vlan offload set error"); + return -1; + } + + return 0; +} + +static int +zxdh_dev_conf_offload(struct rte_eth_dev *dev) +{ + int ret = 0; + + ret = zxdh_vlan_offload_configure(dev); + if (ret) { + PMD_DRV_LOG(ERR, "zxdh_vlan_offload_configure failed"); + return ret; + } + + return 0; +} static int32_t zxdh_dev_configure(struct rte_eth_dev *dev) @@ -815,7 +843,7 @@ zxdh_dev_configure(struct rte_eth_dev *dev) nr_vq = dev->data->nb_rx_queues + dev->data->nb_tx_queues; if (nr_vq == hw->queue_num) - return 0; + goto end; PMD_DRV_LOG(DEBUG, "queue changed need reset "); /* Reset the device although not necessary at startup */ @@ -847,6 +875,8 @@ zxdh_dev_configure(struct rte_eth_dev *dev) zxdh_pci_reinit_complete(hw); +end: + zxdh_dev_conf_offload(dev); return ret; } @@ -1087,6 +1117,8 @@ static const struct eth_dev_ops zxdh_eth_dev_ops = { .promiscuous_disable = zxdh_dev_promiscuous_disable, .allmulticast_enable = zxdh_dev_allmulticast_enable, .allmulticast_disable = zxdh_dev_allmulticast_disable, + .vlan_filter_set = zxdh_dev_vlan_filter_set, + .vlan_offload_set = 
zxdh_dev_vlan_offload_set, }; static int32_t @@ -1345,6 +1377,12 @@ zxdh_tables_init(struct rte_eth_dev *dev) return ret; } + ret = zxdh_vlan_filter_table_init(dev); + if (ret) { + PMD_DRV_LOG(ERR, " vlan filter table init failed"); + return ret; + } + return ret; } diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.c b/drivers/net/zxdh/zxdh_ethdev_ops.c index ad3d10258c..c4a1521723 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.c +++ b/drivers/net/zxdh/zxdh_ethdev_ops.c @@ -2,6 +2,8 @@ * Copyright(c) 2024 ZTE Corporation */ +#include <rte_malloc.h> + #include "zxdh_ethdev.h" #include "zxdh_pci.h" #include "zxdh_msg.h" @@ -9,6 +11,8 @@ #include "zxdh_tables.h" #include "zxdh_logs.h" +#define ZXDH_VLAN_FILTER_GROUPS 64 + static int32_t zxdh_config_port_status(struct rte_eth_dev *dev, uint16_t link_status) { struct zxdh_hw *hw = dev->data->dev_private; @@ -523,3 +527,222 @@ int zxdh_dev_allmulticast_disable(struct rte_eth_dev *dev) hw->allmulti_status = 0; return ret; } + +int +zxdh_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on) +{ + struct zxdh_hw *hw = (struct zxdh_hw *)dev->data->dev_private; + uint16_t idx = 0; + uint16_t bit_idx = 0; + uint8_t msg_type = 0; + int ret = 0; + + vlan_id &= RTE_VLAN_ID_MASK; + if (vlan_id == 0 || vlan_id == RTE_ETHER_MAX_VLAN_ID) { + PMD_DRV_LOG(ERR, "vlan id (%d) is reserved", vlan_id); + return -EINVAL; + } + + if (dev->data->dev_started == 0) { + PMD_DRV_LOG(ERR, "vlan_filter dev not start"); + return -1; + } + + idx = vlan_id / ZXDH_VLAN_FILTER_GROUPS; + bit_idx = vlan_id % ZXDH_VLAN_FILTER_GROUPS; + + if (on) { + if (dev->data->vlan_filter_conf.ids[idx] & (1ULL << bit_idx)) { + PMD_DRV_LOG(ERR, "vlan:%d has already added.", vlan_id); + return 0; + } + msg_type = ZXDH_VLAN_FILTER_ADD; + } else { + if (!(dev->data->vlan_filter_conf.ids[idx] & (1ULL << bit_idx))) { + PMD_DRV_LOG(ERR, "vlan:%d has already deleted.", vlan_id); + return 0; + } + msg_type = ZXDH_VLAN_FILTER_DEL; + } + + if (hw->is_pf) { + ret = zxdh_vlan_filter_table_set(hw->vport.vport, vlan_id, on); + if (ret) { + PMD_DRV_LOG(ERR, "vlan_id:%d table set failed.", vlan_id); + return -1; + } + } else { + struct zxdh_msg_info msg = {0}; + zxdh_msg_head_build(hw, msg_type, &msg); + msg.data.vlan_filter_msg.vlan_id = vlan_id; + ret = zxdh_vf_send_msg_to_pf(dev, &msg, sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d ", + hw->vport.vport, msg_type); + return ret; + } + } + + if (on) + dev->data->vlan_filter_conf.ids[idx] |= (1ULL << bit_idx); + else + dev->data->vlan_filter_conf.ids[idx] &= ~(1ULL << bit_idx); + + return 0; +} + +int +zxdh_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct rte_eth_rxmode *rxmode; + struct zxdh_msg_info msg = {0}; + struct zxdh_port_attr_table port_attr = {0}; + int ret = 0; + + rxmode = &dev->data->dev_conf.rxmode; + if (mask & RTE_ETH_VLAN_FILTER_MASK) { + if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) { + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.vlan_filter_enable = true; + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "port %d vlan filter set failed", + hw->vport.vfid); + return -EAGAIN; + } + } else { + msg.data.vlan_filter_set_msg.enable = true; + zxdh_msg_head_build(hw, ZXDH_VLAN_FILTER_SET, &msg); + ret = zxdh_vf_send_msg_to_pf(hw->eth_dev, &msg, + sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, 
"port %d vlan filter set failed", + hw->vport.vfid); + return -EAGAIN; + } + } + } else { + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.vlan_filter_enable = false; + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "port %d vlan filter set failed", + hw->vport.vfid); + return -EAGAIN; + } + } else { + msg.data.vlan_filter_set_msg.enable = true; + zxdh_msg_head_build(hw, ZXDH_VLAN_FILTER_SET, &msg); + ret = zxdh_vf_send_msg_to_pf(hw->eth_dev, &msg, + sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "port %d vlan filter set failed", + hw->vport.vfid); + return -EAGAIN; + } + } + } + } + + if (mask & RTE_ETH_VLAN_STRIP_MASK) { + if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) { + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.vlan_strip_offload = true; + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "port %d vlan strip set failed", + hw->vport.vfid); + return -EAGAIN; + } + } else { + msg.data.vlan_offload_msg.enable = true; + msg.data.vlan_offload_msg.type = ZXDH_VLAN_STRIP_MSG_TYPE; + zxdh_msg_head_build(hw, ZXDH_VLAN_OFFLOAD, &msg); + ret = zxdh_vf_send_msg_to_pf(hw->eth_dev, &msg, + sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "port %d vlan strip set failed", + hw->vport.vfid); + return -EAGAIN; + } + } + } else { + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.vlan_strip_offload = false; + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "port %d vlan strip set failed", + hw->vport.vfid); + return -EAGAIN; + } + } else { + msg.data.vlan_offload_msg.enable = false; + msg.data.vlan_offload_msg.type = ZXDH_VLAN_STRIP_MSG_TYPE; + zxdh_msg_head_build(hw, ZXDH_VLAN_OFFLOAD, &msg); + ret = zxdh_vf_send_msg_to_pf(hw->eth_dev, &msg, + sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "port %d vlan strip set failed", + hw->vport.vfid); + return -EAGAIN; + } + } + } + } + + if (mask & RTE_ETH_QINQ_STRIP_MASK) { + memset(&msg, 0, sizeof(struct zxdh_msg_info)); + if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP) { + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.qinq_strip_offload = true; + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "port %d qinq offload set failed", + hw->vport.vfid); + return -EAGAIN; + } + } else { + msg.data.vlan_offload_msg.enable = true; + msg.data.vlan_offload_msg.type = ZXDH_QINQ_STRIP_MSG_TYPE; + zxdh_msg_head_build(hw, ZXDH_VLAN_OFFLOAD, &msg); + ret = zxdh_vf_send_msg_to_pf(hw->eth_dev, &msg, + sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "port %d qinq offload set failed", + hw->vport.vfid); + return -EAGAIN; + } + } + } else { + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.qinq_strip_offload = true; + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "port %d qinq offload set failed", + hw->vport.vfid); + return -EAGAIN; + } + } else { + msg.data.vlan_offload_msg.enable = false; + msg.data.vlan_offload_msg.type = ZXDH_QINQ_STRIP_MSG_TYPE; + zxdh_msg_head_build(hw, ZXDH_VLAN_OFFLOAD, &msg); + ret = zxdh_vf_send_msg_to_pf(hw->eth_dev, &msg, + sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "port %d qinq offload set failed", + hw->vport.vfid); + return -EAGAIN; + 
} + } + } + } + + return ret; +} diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.h b/drivers/net/zxdh/zxdh_ethdev_ops.h index 394ddedc0e..058d271ab3 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.h +++ b/drivers/net/zxdh/zxdh_ethdev_ops.h @@ -18,5 +18,7 @@ int zxdh_dev_promiscuous_enable(struct rte_eth_dev *dev); int zxdh_dev_promiscuous_disable(struct rte_eth_dev *dev); int zxdh_dev_allmulticast_enable(struct rte_eth_dev *dev); int zxdh_dev_allmulticast_disable(struct rte_eth_dev *dev); +int zxdh_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on); +int zxdh_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask); #endif /* ZXDH_ETHDEV_OPS_H */ diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index 2abf579a80..ec15388f7a 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -50,6 +50,8 @@ #define ZXDH_MAC_UNFILTER 0xff #define ZXDH_PROMISC_MODE 1 #define ZXDH_ALLMULTI_MODE 2 +#define ZXDH_VLAN_STRIP_MSG_TYPE 0 +#define ZXDH_QINQ_STRIP_MSG_TYPE 1 enum ZXDH_DRIVER_TYPE { ZXDH_MSG_CHAN_END_MPF = 0, @@ -180,6 +182,10 @@ enum zxdh_msg_type { ZXDH_VF_PORT_UNINIT = 2, ZXDH_MAC_ADD = 3, ZXDH_MAC_DEL = 4, + ZXDH_VLAN_FILTER_SET = 17, + ZXDH_VLAN_FILTER_ADD = 18, + ZXDH_VLAN_FILTER_DEL = 19, + ZXDH_VLAN_OFFLOAD = 21, ZXDH_PORT_ATTRS_SET = 25, ZXDH_PORT_PROMISC_SET = 26, @@ -341,6 +347,19 @@ struct zxdh_msg_head { uint16_t pcieid; } __rte_packed; +struct zxdh_vlan_filter { + uint16_t vlan_id; +}; + +struct zxdh_vlan_filter_set { + uint8_t enable; +}; + +struct zxdh_vlan_offload { + uint8_t enable; + uint8_t type; +} __rte_packed; + struct zxdh_agent_msg_head { enum zxdh_agent_msg_type msg_type; uint8_t panel_id; @@ -363,6 +382,9 @@ struct zxdh_msg_info { struct zxdh_link_info_msg link_msg; struct zxdh_mac_filter mac_filter_msg; struct zxdh_port_promisc_msg port_promisc_msg; + struct zxdh_vlan_filter vlan_filter_msg; + struct zxdh_vlan_filter_set vlan_filter_set_msg; + struct zxdh_vlan_offload vlan_offload_msg; } __rte_packed data; } __rte_packed; diff --git a/drivers/net/zxdh/zxdh_rxtx.c b/drivers/net/zxdh/zxdh_rxtx.c index 06290d48bb..0ffce50042 100644 --- a/drivers/net/zxdh/zxdh_rxtx.c +++ b/drivers/net/zxdh/zxdh_rxtx.c @@ -11,6 +11,9 @@ #include "zxdh_pci.h" #include "zxdh_queue.h" +#define ZXDH_SVLAN_TPID 0x88a8 +#define ZXDH_CVLAN_TPID 0x8100 + #define ZXDH_PKT_FORM_CPU 0x20 /* 1-cpu 0-np */ #define ZXDH_NO_IP_FRAGMENT 0x2000 /* ip fragment flag */ #define ZXDH_NO_IPID_UPDATE 0x4000 /* ipid update flag */ @@ -21,6 +24,9 @@ #define ZXDH_PI_L3TYPE_RSV 0xC0 #define ZXDH_PI_L3TYPE_MASK 0xC0 +#define ZXDH_PD_OFFLOAD_SVLAN_INSERT (1 << 14) +#define ZXDH_PD_OFFLOAD_CVLAN_INSERT (1 << 13) + #define ZXDH_PCODE_MASK 0x1F #define ZXDH_PCODE_IP_PKT_TYPE 0x01 #define ZXDH_PCODE_TCP_PKT_TYPE 0x02 @@ -259,6 +265,18 @@ static void zxdh_xmit_fill_net_hdr(struct rte_mbuf *cookie, hdr->pi_hdr.l3_offset = rte_be_to_cpu_16(l3_offset); hdr->pi_hdr.l4_offset = rte_be_to_cpu_16(l3_offset + cookie->l3_len); + if (cookie->ol_flags & RTE_MBUF_F_TX_VLAN) { + ol_flag |= ZXDH_PD_OFFLOAD_CVLAN_INSERT; + hdr->pi_hdr.vlan_id = rte_be_to_cpu_16(cookie->vlan_tci); + hdr->pd_hdr.cvlan_insert = + rte_be_to_cpu_32((ZXDH_CVLAN_TPID << 16) | cookie->vlan_tci); + } + if (cookie->ol_flags & RTE_MBUF_F_TX_QINQ) { + ol_flag |= ZXDH_PD_OFFLOAD_SVLAN_INSERT; + hdr->pd_hdr.svlan_insert = + rte_be_to_cpu_32((ZXDH_SVLAN_TPID << 16) | cookie->vlan_tci_outer); + } + hdr->pd_hdr.ol_flag = rte_be_to_cpu_32(ol_flag); } diff --git a/drivers/net/zxdh/zxdh_tables.c 
b/drivers/net/zxdh/zxdh_tables.c index 45aeb3e3e4..ca98b36da2 100644 --- a/drivers/net/zxdh/zxdh_tables.c +++ b/drivers/net/zxdh/zxdh_tables.c @@ -10,6 +10,7 @@ #define ZXDH_SDT_VPORT_ATT_TABLE 1 #define ZXDH_SDT_PANEL_ATT_TABLE 2 +#define ZXDH_SDT_VLAN_ATT_TABLE 4 #define ZXDH_SDT_BROCAST_ATT_TABLE 6 #define ZXDH_SDT_UNICAST_ATT_TABLE 10 #define ZXDH_SDT_MULTICAST_ATT_TABLE 11 @@ -19,6 +20,10 @@ #define ZXDH_MC_GROUP_NUM 4 #define ZXDH_BASE_VFID 1152 #define ZXDH_TABLE_HIT_FLAG 128 +#define ZXDH_FIRST_VLAN_GROUP_BITS 23 +#define ZXDH_VLAN_GROUP_BITS 31 +#define ZXDH_VLAN_GROUP_NUM 35 +#define ZXDH_VLAN_FILTER_VLANID_STEP 120 int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr) @@ -569,3 +574,97 @@ zxdh_dev_multicast_table_set(struct zxdh_hw *hw, uint16_t vport, bool enable) } return 0; } + +int +zxdh_vlan_filter_table_init(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_vlan_filter_table vlan_table = {0}; + int16_t ret = 0; + + if (!hw->is_pf) + return 0; + + for (uint8_t vlan_group = 0; vlan_group < ZXDH_VLAN_GROUP_NUM; vlan_group++) { + if (vlan_group == 0) { + vlan_table.vlans[0] |= (1 << ZXDH_FIRST_VLAN_GROUP_BITS); + vlan_table.vlans[0] |= (1 << ZXDH_VLAN_GROUP_BITS); + + } else { + vlan_table.vlans[0] = 0; + } + uint32_t index = (vlan_group << 11) | hw->vport.vfid; + ZXDH_DTB_ERAM_ENTRY_INFO_T entry_data = { + .index = index, + .p_data = (uint32_t *)&vlan_table + }; + ZXDH_DTB_USER_ENTRY_T user_entry = {ZXDH_SDT_VLAN_ATT_TABLE, &entry_data}; + + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &user_entry); + if (ret != 0) { + PMD_DRV_LOG(ERR, + "[vfid:%d], vlan_group:%d, init vlan filter table failed", + hw->vport.vfid, vlan_group); + ret = -1; + } + } + + return ret; +} + +int +zxdh_vlan_filter_table_set(uint16_t vport, uint16_t vlan_id, uint8_t enable) +{ + struct zxdh_vlan_filter_table vlan_table = {0}; + union zxdh_virport_num vport_num = (union zxdh_virport_num)vport; + int ret = 0; + + memset(&vlan_table, 0, sizeof(struct zxdh_vlan_filter_table)); + int table_num = vlan_id / ZXDH_VLAN_FILTER_VLANID_STEP; + uint32_t index = (table_num << 11) | vport_num.vfid; + uint16_t group = (vlan_id - table_num * ZXDH_VLAN_FILTER_VLANID_STEP) / 8 + 1; + + uint8_t val = sizeof(struct zxdh_vlan_filter_table) / sizeof(uint32_t); + uint8_t vlan_tbl_index = group / val; + uint16_t used_group = vlan_tbl_index * val; + + used_group = (used_group == 0 ? 0 : (used_group - 1)); + + ZXDH_DTB_ERAM_ENTRY_INFO_T entry_data = {index, (uint32_t *)&vlan_table}; + ZXDH_DTB_USER_ENTRY_T user_entry_get = {ZXDH_SDT_VLAN_ATT_TABLE, &entry_data}; + + ret = zxdh_np_dtb_table_entry_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, &user_entry_get, 1); + if (ret) { + PMD_DRV_LOG(ERR, "get vlan table failed"); + return -1; + } + uint16_t relative_vlan_id = vlan_id - table_num * ZXDH_VLAN_FILTER_VLANID_STEP; + uint32_t *base_group = &vlan_table.vlans[0]; + + *base_group |= 1 << 31; + base_group = &vlan_table.vlans[vlan_tbl_index]; + uint8_t valid_bits = (vlan_tbl_index == 0 ? 
+ ZXDH_FIRST_VLAN_GROUP_BITS : ZXDH_VLAN_GROUP_BITS) + 1; + + uint8_t shift_left = (valid_bits - (relative_vlan_id - used_group * 8) % valid_bits) - 1; + + if (enable) + *base_group |= 1 << shift_left; + else + *base_group &= ~(1 << shift_left); + + + ZXDH_DTB_USER_ENTRY_T user_entry_write = { + .sdt_no = ZXDH_SDT_VLAN_ATT_TABLE, + .p_entry_data = &entry_data + }; + + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &user_entry_write); + if (ret != 0) { + PMD_DRV_LOG(ERR, "write vlan table failed"); + return -1; + } + return 0; +} diff --git a/drivers/net/zxdh/zxdh_tables.h b/drivers/net/zxdh/zxdh_tables.h index fb30c8f32e..8dac8f30dd 100644 --- a/drivers/net/zxdh/zxdh_tables.h +++ b/drivers/net/zxdh/zxdh_tables.h @@ -43,7 +43,7 @@ struct zxdh_port_attr_table { uint8_t rdma_offload_enable: 1; uint8_t vlan_filter_enable: 1; uint8_t vlan_strip_offload: 1; - uint8_t qinq_valn_strip_offload: 1; + uint8_t qinq_strip_offload: 1; uint8_t rss_enable: 1; uint8_t mtu_enable: 1; uint8_t hit_flag: 1; @@ -73,7 +73,7 @@ struct zxdh_port_attr_table { uint8_t rdma_offload_enable: 1; uint8_t vlan_filter_enable: 1; uint8_t vlan_strip_offload: 1; - uint8_t qinq_valn_strip_offload: 1; + uint8_t qinq_strip_offload: 1; uint8_t rss_enable: 1; uint8_t mtu_enable: 1; uint8_t hit_flag: 1; @@ -194,6 +194,10 @@ struct zxdh_multicast_table { uint32_t bitmap[2]; }; +struct zxdh_vlan_filter_table { + uint32_t vlans[4]; +}; + int zxdh_port_attr_init(struct rte_eth_dev *dev); int zxdh_panel_table_init(struct rte_eth_dev *dev); int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); @@ -205,5 +209,7 @@ int zxdh_promisc_table_init(struct rte_eth_dev *dev); int zxdh_promisc_table_uninit(struct rte_eth_dev *dev); int zxdh_dev_unicast_table_set(struct zxdh_hw *hw, uint16_t vport, bool enable); int zxdh_dev_multicast_table_set(struct zxdh_hw *hw, uint16_t vport, bool enable); +int zxdh_vlan_filter_table_init(struct rte_eth_dev *dev); +int zxdh_vlan_filter_table_set(uint16_t vport, uint16_t vlan_id, uint8_t enable); #endif /* ZXDH_TABLES_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 54926 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
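The VLAN filter layout above is dense: each ERAM row covers one vfid and a window of ZXDH_VLAN_FILTER_VLANID_STEP (120) VLAN IDs, where vlans[0] keeps bit 31 as a row-valid flag plus 24 VLAN bits below it, and vlans[1..3] carry 32 MSB-first bits each. A standalone sketch of the id-to-bit mapping, arithmetically equivalent (spot-checked, not guaranteed) to the group/shift computation in zxdh_vlan_filter_table_set(); the helper name is illustrative:

#include <stdint.h>
#include <stdio.h>

#define VLANID_STEP 120 /* VLAN ids covered per ERAM row */

/* Illustrative equivalent of the index math in
 * zxdh_vlan_filter_table_set(): one row per (window, vfid); within
 * a row, relative ids 0..23 land in vlans[0] (bits 23..0) and
 * ids 24..119 fill vlans[1..3], 32 MSB-first bits per word.
 */
static void vlan_bit_of(uint16_t vlan_id, uint16_t vfid,
			uint32_t *eram_index, uint32_t *word, uint32_t *bit)
{
	uint16_t window = vlan_id / VLANID_STEP;
	uint16_t rel = vlan_id % VLANID_STEP;

	*eram_index = ((uint32_t)window << 11) | vfid;
	*word = rel < 24 ? 0 : (rel - 24) / 32 + 1;
	*bit = rel < 24 ? 23 - rel : 31 - ((rel - 24) % 32);
}

int main(void)
{
	uint16_t ids[] = { 1, 100, 1000, 4094 };
	unsigned int i;

	for (i = 0; i < sizeof(ids) / sizeof(ids[0]); i++) {
		uint32_t idx, word, bit;

		vlan_bit_of(ids[i], 0, &idx, &word, &bit);
		printf("vlan %4u -> row 0x%05x vlans[%u] bit %u\n",
		       ids[i], idx, word, bit);
	}
	return 0;
}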
* [PATCH v5 13/15] net/zxdh: rss hash config/update, reta update/get 2024-12-23 11:02 ` [PATCH v5 00/15] net/zxdh: updated net zxdh driver Junlong Wang ` (11 preceding siblings ...) 2024-12-23 11:02 ` [PATCH v5 12/15] net/zxdh: vlan filter/ offload " Junlong Wang @ 2024-12-23 11:02 ` Junlong Wang 2024-12-23 11:02 ` [PATCH v5 14/15] net/zxdh: basic stats ops implementations Junlong Wang ` (3 subsequent siblings) 16 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-23 11:02 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 25363 bytes --] provided rss hash config/update, reta update/get ops. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- doc/guides/nics/features/zxdh.ini | 3 + doc/guides/nics/zxdh.rst | 1 + drivers/net/zxdh/zxdh_ethdev.c | 52 ++++ drivers/net/zxdh/zxdh_ethdev.h | 3 + drivers/net/zxdh/zxdh_ethdev_ops.c | 407 +++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev_ops.h | 26 ++ drivers/net/zxdh/zxdh_msg.h | 22 ++ drivers/net/zxdh/zxdh_tables.c | 82 ++++++ drivers/net/zxdh/zxdh_tables.h | 7 + 9 files changed, 603 insertions(+) diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index 6fb006c2da..415ca547d0 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -19,3 +19,6 @@ Allmulticast mode = Y VLAN filter = Y VLAN offload = Y QinQ offload = Y +RSS hash = Y +RSS reta update = Y +Inner RSS = Y diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index 3a7585d123..3cc6a1d348 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -31,6 +31,7 @@ Features of the ZXDH PMD are: - VLAN filter and VLAN offload - VLAN stripping and inserting - QINQ stripping and inserting +- Receive Side Scaling (RSS) Driver compilation and testing diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 7809b24d8b..1c04719cd4 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -61,6 +61,9 @@ zxdh_dev_infos_get(struct rte_eth_dev *dev, dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TCP_LRO; dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_RSS_HASH; + dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_256; + dev_info->flow_type_rss_offloads = ZXDH_RSS_HF; + dev_info->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_MULTI_SEGS); dev_info->tx_offload_capa |= (RTE_ETH_TX_OFFLOAD_TCP_TSO | RTE_ETH_TX_OFFLOAD_UDP_TSO); @@ -784,9 +787,48 @@ zxdh_dev_conf_offload(struct rte_eth_dev *dev) return ret; } + ret = zxdh_rss_configure(dev); + if (ret) { + PMD_DRV_LOG(ERR, "rss configure failed"); + return ret; + } + return 0; } +static int +zxdh_rss_qid_config(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_port_attr_table port_attr = {0}; + struct zxdh_msg_info msg_info = {0}; + int ret = 0; + + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.port_base_qid = hw->channel_context[0].ph_chno & 0xfff; + + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "PF:%d port_base_qid insert failed", hw->vfid); + return ret; + } + } else { + struct zxdh_port_attr_set_msg *attr_msg = &msg_info.data.port_attr_msg; + + zxdh_msg_head_build(hw, ZXDH_PORT_ATTRS_SET, &msg_info); + attr_msg->mode = ZXDH_PORT_BASE_QID_FLAG; + attr_msg->value = hw->channel_context[0].ph_chno & 0xfff; + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to 
send msg: port 0x%x msg type %d ", + hw->vport.vport, ZXDH_PORT_BASE_QID_FLAG); + return ret; + } + } + return ret; +} + static int32_t zxdh_dev_configure(struct rte_eth_dev *dev) { @@ -873,6 +915,12 @@ zxdh_dev_configure(struct rte_eth_dev *dev) return -1; } + ret = zxdh_rss_qid_config(dev); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to configure base qid!"); + return -1; + } + zxdh_pci_reinit_complete(hw); end: @@ -1119,6 +1167,10 @@ static const struct eth_dev_ops zxdh_eth_dev_ops = { .allmulticast_disable = zxdh_dev_allmulticast_disable, .vlan_filter_set = zxdh_dev_vlan_filter_set, .vlan_offload_set = zxdh_dev_vlan_offload_set, + .reta_update = zxdh_dev_rss_reta_update, + .reta_query = zxdh_dev_rss_reta_query, + .rss_hash_update = zxdh_rss_hash_update, + .rss_hash_conf_get = zxdh_rss_hash_conf_get, }; static int32_t diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index 3cdac5de73..2934fa264a 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -82,6 +82,7 @@ struct zxdh_hw { uint16_t queue_num; uint16_t mc_num; uint16_t uc_num; + uint16_t *rss_reta; uint8_t *isr; uint8_t weak_barriers; @@ -100,6 +101,8 @@ struct zxdh_hw { uint8_t admin_status; uint8_t promisc_status; uint8_t allmulti_status; + uint8_t rss_enable; + uint8_t rss_init; }; struct zxdh_dtb_shared_data { diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.c b/drivers/net/zxdh/zxdh_ethdev_ops.c index c4a1521723..d333717e87 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.c +++ b/drivers/net/zxdh/zxdh_ethdev_ops.c @@ -3,6 +3,7 @@ */ #include <rte_malloc.h> +#include <rte_ether.h> #include "zxdh_ethdev.h" #include "zxdh_pci.h" @@ -12,6 +13,14 @@ #include "zxdh_logs.h" #define ZXDH_VLAN_FILTER_GROUPS 64 +#define ZXDH_INVALID_LOGIC_QID 0xFFFFU + +/* Supported RSS */ +#define ZXDH_RSS_HF_MASK (~(ZXDH_RSS_HF)) +#define ZXDH_HF_F5 1 +#define ZXDH_HF_F3 2 +#define ZXDH_HF_MAC_VLAN 4 +#define ZXDH_HF_ALL 0 static int32_t zxdh_config_port_status(struct rte_eth_dev *dev, uint16_t link_status) { @@ -746,3 +755,401 @@ zxdh_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask) return ret; } + +int +zxdh_dev_rss_reta_update(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_info msg = {0}; + uint16_t old_reta[RTE_ETH_RSS_RETA_SIZE_256]; + uint16_t idx; + uint16_t i; + uint16_t pos; + int ret; + + if (reta_size != RTE_ETH_RSS_RETA_SIZE_256) { + PMD_DRV_LOG(ERR, "reta_size is illegal(%u).reta_size should be 256", reta_size); + return -EINVAL; + } + if (!hw->rss_reta) { + hw->rss_reta = rte_calloc(NULL, RTE_ETH_RSS_RETA_SIZE_256, sizeof(uint16_t), 0); + if (hw->rss_reta == NULL) { + PMD_DRV_LOG(ERR, "Failed to allocate RSS reta"); + return -ENOMEM; + } + } + for (idx = 0, i = 0; (i < reta_size); ++i) { + idx = i / RTE_ETH_RETA_GROUP_SIZE; + pos = i % RTE_ETH_RETA_GROUP_SIZE; + if (((reta_conf[idx].mask >> pos) & 0x1) == 0) + continue; + if (reta_conf[idx].reta[pos] > dev->data->nb_rx_queues) { + PMD_DRV_LOG(ERR, "reta table value err(%u >= %u)", + reta_conf[idx].reta[pos], dev->data->nb_rx_queues); + return -EINVAL; + } + if (hw->rss_reta[i] != reta_conf[idx].reta[pos]) + break; + } + if (i == reta_size) { + PMD_DRV_LOG(INFO, "reta table same with buffered table"); + return 0; + } + memcpy(old_reta, hw->rss_reta, sizeof(old_reta)); + + for (idx = 0, i = 0; i < reta_size; ++i) { + idx = i / RTE_ETH_RETA_GROUP_SIZE; + pos = i % RTE_ETH_RETA_GROUP_SIZE; + if (((reta_conf[idx].mask >> pos) 
& 0x1) == 0) + continue; + hw->rss_reta[i] = reta_conf[idx].reta[pos]; + } + + zxdh_msg_head_build(hw, ZXDH_RSS_RETA_SET, &msg); + for (i = 0; i < reta_size; i++) + msg.data.rss_reta.reta[i] = + (hw->channel_context[hw->rss_reta[i] * 2].ph_chno); + + if (hw->is_pf) { + ret = zxdh_rss_table_set(hw->vport.vport, &msg.data.rss_reta); + if (ret) { + PMD_DRV_LOG(ERR, "rss reta table set failed"); + return -EINVAL; + } + } else { + ret = zxdh_vf_send_msg_to_pf(dev, &msg, sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "vf rss reta table set failed"); + return -EINVAL; + } + } + return ret; +} + +static uint16_t +zxdh_hw_qid_to_logic_qid(struct rte_eth_dev *dev, uint16_t qid) +{ + struct zxdh_hw *hw = (struct zxdh_hw *)dev->data->dev_private; + uint16_t rx_queues = dev->data->nb_rx_queues; + uint16_t i; + + for (i = 0; i < rx_queues; i++) { + if (qid == hw->channel_context[i * 2].ph_chno) + return i; + } + return ZXDH_INVALID_LOGIC_QID; +} + +int +zxdh_dev_rss_reta_query(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size) +{ + struct zxdh_hw *hw = (struct zxdh_hw *)dev->data->dev_private; + struct zxdh_msg_info msg = {0}; + struct zxdh_msg_reply_info reply_msg = {0}; + uint16_t idx; + uint16_t i; + int ret = 0; + uint16_t qid_logic; + + if (!reta_size || reta_size > RTE_ETH_RSS_RETA_SIZE_256) { + PMD_DRV_LOG(ERR, "requested reta size(%u) is invalid, max supported is %u", + reta_size, RTE_ETH_RSS_RETA_SIZE_256); + return -EINVAL; + } + + /* Fill each entry of the table even if its bit is not set. */ + for (idx = 0, i = 0; (i != reta_size); ++i) { + idx = i / RTE_ETH_RETA_GROUP_SIZE; + reta_conf[idx].reta[i % RTE_ETH_RETA_GROUP_SIZE] = hw->rss_reta[i]; + } + + zxdh_msg_head_build(hw, ZXDH_RSS_RETA_GET, &msg); + + if (hw->is_pf) { + ret = zxdh_rss_table_get(hw->vport.vport, &reply_msg.reply_body.rss_reta); + if (ret) { + PMD_DRV_LOG(ERR, "rss reta table get failed"); + return -EINVAL; + } + } else { + ret = zxdh_vf_send_msg_to_pf(dev, &msg, sizeof(struct zxdh_msg_info), + &reply_msg, sizeof(struct zxdh_msg_reply_info)); + if (ret) { + PMD_DRV_LOG(ERR, "vf rss reta table get failed"); + return -EINVAL; + } + } + + struct zxdh_rss_reta *reta_table = &reply_msg.reply_body.rss_reta; + + for (idx = 0, i = 0; i < reta_size; ++i) { + idx = i / RTE_ETH_RETA_GROUP_SIZE; + + qid_logic = zxdh_hw_qid_to_logic_qid(dev, reta_table->reta[i]); + if (qid_logic == ZXDH_INVALID_LOGIC_QID) { + PMD_DRV_LOG(ERR, "rsp phy reta qid (%u) is illegal(%u)", + reta_table->reta[i], qid_logic); + return -EINVAL; + } + reta_conf[idx].reta[i % RTE_ETH_RETA_GROUP_SIZE] = qid_logic; + } + return 0; +} + +static uint32_t +zxdh_rss_hf_to_hw(uint64_t hf) +{ + uint32_t hw_hf = 0; + + if (hf & ZXDH_HF_MAC_VLAN_ETH) + hw_hf |= ZXDH_HF_MAC_VLAN; + if (hf & ZXDH_HF_F3_ETH) + hw_hf |= ZXDH_HF_F3; + if (hf & ZXDH_HF_F5_ETH) + hw_hf |= ZXDH_HF_F5; + + if (hw_hf == (ZXDH_HF_MAC_VLAN | ZXDH_HF_F3 | ZXDH_HF_F5)) + hw_hf = ZXDH_HF_ALL; + return hw_hf; +} + +static uint64_t +zxdh_rss_hf_to_eth(uint32_t hw_hf) +{ + uint64_t hf = 0; + + if (hw_hf == ZXDH_HF_ALL) + return (ZXDH_HF_MAC_VLAN_ETH | ZXDH_HF_F3_ETH | ZXDH_HF_F5_ETH); + + if (hw_hf & ZXDH_HF_MAC_VLAN) + hf |= ZXDH_HF_MAC_VLAN_ETH; + if (hw_hf & ZXDH_HF_F3) + hf |= ZXDH_HF_F3_ETH; + if (hw_hf & ZXDH_HF_F5) + hf |= ZXDH_HF_F5_ETH; + + return hf; +} + +int +zxdh_rss_hash_update(struct rte_eth_dev *dev, + struct rte_eth_rss_conf *rss_conf) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct rte_eth_rss_conf
*old_rss_conf = &dev->data->dev_conf.rx_adv_conf.rss_conf; + struct zxdh_msg_info msg = {0}; + struct zxdh_port_attr_table port_attr = {0}; + uint32_t hw_hf_new, hw_hf_old; + int need_update_hf = 0; + int ret = 0; + + if (rss_conf->rss_hf & ZXDH_RSS_HF_MASK) { + PMD_DRV_LOG(ERR, "unsupported hash function requested (0x%08lx)", rss_conf->rss_hf); + return -EINVAL; + } + + hw_hf_new = zxdh_rss_hf_to_hw(rss_conf->rss_hf); + hw_hf_old = zxdh_rss_hf_to_hw(old_rss_conf->rss_hf); + + if (hw_hf_new != hw_hf_old || rss_conf->rss_hf) + need_update_hf = 1; + + if (need_update_hf) { + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.rss_enable = !!rss_conf->rss_hf; + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "rss enable set failed"); + return -EINVAL; + } + } else { + msg.data.rss_enable.enable = !!rss_conf->rss_hf; + zxdh_msg_head_build(hw, ZXDH_RSS_ENABLE, &msg); + ret = zxdh_vf_send_msg_to_pf(dev, &msg, + sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "rss enable set failed"); + return -EINVAL; + } + } + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.rss_hash_factor = hw_hf_new; + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "rss hash factor set failed"); + return -EINVAL; + } + } else { + msg.data.rss_hf.rss_hf = hw_hf_new; + zxdh_msg_head_build(hw, ZXDH_RSS_HF_SET, &msg); + ret = zxdh_vf_send_msg_to_pf(dev, &msg, + sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "rss hash factor set failed"); + return -EINVAL; + } + } + old_rss_conf->rss_hf = rss_conf->rss_hf; + } + + return 0; +} + +int +zxdh_rss_hash_conf_get(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf) +{ + struct zxdh_hw *hw = (struct zxdh_hw *)dev->data->dev_private; + struct rte_eth_rss_conf *old_rss_conf = &dev->data->dev_conf.rx_adv_conf.rss_conf; + struct zxdh_msg_info msg = {0}; + struct zxdh_msg_reply_info reply_msg = {0}; + struct zxdh_port_attr_table port_attr = {0}; + int ret; + uint32_t hw_hf; + + if (rss_conf == NULL) { + PMD_DRV_LOG(ERR, "rss conf is NULL"); + return -EINVAL; + } + + hw_hf = zxdh_rss_hf_to_hw(old_rss_conf->rss_hf); + rss_conf->rss_hf = zxdh_rss_hf_to_eth(hw_hf); + + zxdh_msg_head_build(hw, ZXDH_RSS_HF_GET, &msg); + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "rss hash factor get failed"); + return -EINVAL; + } + reply_msg.reply_body.rss_hf.rss_hf = port_attr.rss_hash_factor; + } else { + zxdh_msg_head_build(hw, ZXDH_RSS_HF_GET, &msg); + ret = zxdh_vf_send_msg_to_pf(dev, &msg, sizeof(struct zxdh_msg_info), + &reply_msg, sizeof(struct zxdh_msg_reply_info)); + if (ret) { + PMD_DRV_LOG(ERR, "rss hash factor get failed"); + return -EINVAL; + } + } + rss_conf->rss_hf = zxdh_rss_hf_to_eth(reply_msg.reply_body.rss_hf.rss_hf); + + return 0; +} + +static int +zxdh_get_rss_enable_conf(struct rte_eth_dev *dev) +{ + if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) + return dev->data->nb_rx_queues == 1 ?
0 : 1; + else if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_NONE) + return 0; + + return 0; +} + +int +zxdh_rss_configure(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *dev_data = dev->data; + struct zxdh_hw *hw = (struct zxdh_hw *)dev->data->dev_private; + struct zxdh_port_attr_table port_attr = {0}; + struct zxdh_msg_info msg = {0}; + int ret = 0; + uint32_t hw_hf; + uint32_t i; + + if (dev->data->nb_rx_queues == 0) { + PMD_DRV_LOG(ERR, "port %u nb_rx_queues is 0", dev->data->port_id); + return -1; + } + + /* config rss enable */ + uint8_t curr_rss_enable = zxdh_get_rss_enable_conf(dev); + + if (hw->rss_enable != curr_rss_enable) { + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.rss_enable = curr_rss_enable; + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "rss enable set failed"); + return -EINVAL; + } + } else { + msg.data.rss_enable.enable = curr_rss_enable; + zxdh_msg_head_build(hw, ZXDH_RSS_ENABLE, &msg); + ret = zxdh_vf_send_msg_to_pf(dev, &msg, + sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "rss enable set failed"); + return -EINVAL; + } + } + hw->rss_enable = curr_rss_enable; + } + + if (curr_rss_enable && hw->rss_init == 0) { + /* config hash factor */ + dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf = ZXDH_HF_F5_ETH; + hw_hf = zxdh_rss_hf_to_hw(dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf); + memset(&msg, 0, sizeof(msg)); + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.rss_hash_factor = hw_hf; + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "rss hash factor set failed"); + return -EINVAL; + } + } else { + msg.data.rss_hf.rss_hf = hw_hf; + zxdh_msg_head_build(hw, ZXDH_RSS_HF_SET, &msg); + ret = zxdh_vf_send_msg_to_pf(dev, &msg, + sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "rss hash factor set failed"); + return -EINVAL; + } + } + hw->rss_init = 1; + } + + if (!hw->rss_reta) { + hw->rss_reta = rte_calloc(NULL, RTE_ETH_RSS_RETA_SIZE_256, sizeof(uint16_t), 0); + if (hw->rss_reta == NULL) { + PMD_DRV_LOG(ERR, "alloc memory fail"); + return -1; + } + } + for (i = 0; i < RTE_ETH_RSS_RETA_SIZE_256; i++) + hw->rss_reta[i] = i % dev_data->nb_rx_queues; + + /* hw config reta */ + zxdh_msg_head_build(hw, ZXDH_RSS_RETA_SET, &msg); + for (i = 0; i < RTE_ETH_RSS_RETA_SIZE_256; i++) + msg.data.rss_reta.reta[i] = + hw->channel_context[hw->rss_reta[i] * 2].ph_chno; + + if (hw->is_pf) { + ret = zxdh_rss_table_set(hw->vport.vport, &msg.data.rss_reta); + if (ret) { + PMD_DRV_LOG(ERR, "rss reta table set failed"); + return -EINVAL; + } + } else { + ret = zxdh_vf_send_msg_to_pf(dev, &msg, sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "vf rss reta table set failed"); + return -EINVAL; + } + } + return 0; +} diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.h b/drivers/net/zxdh/zxdh_ethdev_ops.h index 058d271ab3..860716d079 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.h +++ b/drivers/net/zxdh/zxdh_ethdev_ops.h @@ -5,8 +5,25 @@ #ifndef ZXDH_ETHDEV_OPS_H #define ZXDH_ETHDEV_OPS_H +#include <rte_ether.h> + #include "zxdh_ethdev.h" +#define ZXDH_ETH_RSS_L2 RTE_ETH_RSS_L2_PAYLOAD +#define ZXDH_ETH_RSS_IP \ + (RTE_ETH_RSS_IPV4 | \ + RTE_ETH_RSS_FRAG_IPV4 | \ + RTE_ETH_RSS_IPV6 | \ + RTE_ETH_RSS_FRAG_IPV6) +#define ZXDH_ETH_RSS_TCP (RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP) +#define ZXDH_ETH_RSS_UDP (RTE_ETH_RSS_NONFRAG_IPV4_UDP | 
RTE_ETH_RSS_NONFRAG_IPV6_UDP) +#define ZXDH_ETH_RSS_SCTP (RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_NONFRAG_IPV6_SCTP) + +#define ZXDH_HF_F5_ETH (ZXDH_ETH_RSS_TCP | ZXDH_ETH_RSS_UDP | ZXDH_ETH_RSS_SCTP) +#define ZXDH_HF_F3_ETH ZXDH_ETH_RSS_IP +#define ZXDH_HF_MAC_VLAN_ETH ZXDH_ETH_RSS_L2 +#define ZXDH_RSS_HF ((ZXDH_HF_MAC_VLAN_ETH | ZXDH_HF_F3_ETH | ZXDH_HF_F5_ETH)) + int zxdh_dev_set_link_up(struct rte_eth_dev *dev); int zxdh_dev_set_link_down(struct rte_eth_dev *dev); int32_t zxdh_dev_link_update(struct rte_eth_dev *dev, int32_t wait_to_complete __rte_unused); @@ -20,5 +37,14 @@ int zxdh_dev_allmulticast_enable(struct rte_eth_dev *dev); int zxdh_dev_allmulticast_disable(struct rte_eth_dev *dev); int zxdh_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on); int zxdh_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask); +int zxdh_dev_rss_reta_update(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size); +int zxdh_dev_rss_reta_query(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size); +int zxdh_rss_hash_update(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf); +int zxdh_rss_hash_conf_get(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf); +int zxdh_rss_configure(struct rte_eth_dev *dev); #endif /* ZXDH_ETHDEV_OPS_H */ diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index ec15388f7a..45a9b10aa4 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -182,6 +182,11 @@ enum zxdh_msg_type { ZXDH_VF_PORT_UNINIT = 2, ZXDH_MAC_ADD = 3, ZXDH_MAC_DEL = 4, + ZXDH_RSS_ENABLE = 7, + ZXDH_RSS_RETA_SET = 8, + ZXDH_RSS_RETA_GET = 9, + ZXDH_RSS_HF_SET = 15, + ZXDH_RSS_HF_GET = 16, ZXDH_VLAN_FILTER_SET = 17, ZXDH_VLAN_FILTER_ADD = 18, ZXDH_VLAN_FILTER_DEL = 19, @@ -291,6 +296,14 @@ struct zxdh_link_info_msg { uint32_t speed; } __rte_packed; +struct zxdh_rss_reta { + uint32_t reta[RTE_ETH_RSS_RETA_SIZE_256]; +}; + +struct zxdh_rss_hf { + uint32_t rss_hf; +}; + struct zxdh_msg_reply_head { uint8_t flag; uint16_t reps_len; @@ -307,6 +320,8 @@ struct zxdh_msg_reply_body { union { uint8_t reply_data[ZXDH_MSG_REPLY_BODY_MAX_LEN - sizeof(enum zxdh_reps_flag)]; struct zxdh_link_info_msg link_msg; + struct zxdh_rss_hf rss_hf; + struct zxdh_rss_reta rss_reta; } __rte_packed; } __rte_packed; @@ -360,6 +375,10 @@ struct zxdh_vlan_offload { uint8_t type; } __rte_packed; +struct zxdh_rss_enable { + uint8_t enable; +}; + struct zxdh_agent_msg_head { enum zxdh_agent_msg_type msg_type; uint8_t panel_id; @@ -385,6 +404,9 @@ struct zxdh_msg_info { struct zxdh_vlan_filter vlan_filter_msg; struct zxdh_vlan_filter_set vlan_filter_set_msg; struct zxdh_vlan_offload vlan_offload_msg; + struct zxdh_rss_reta rss_reta; + struct zxdh_rss_enable rss_enable; + struct zxdh_rss_hf rss_hf; } __rte_packed data; } __rte_packed; diff --git a/drivers/net/zxdh/zxdh_tables.c b/drivers/net/zxdh/zxdh_tables.c index ca98b36da2..2939d9ae8b 100644 --- a/drivers/net/zxdh/zxdh_tables.c +++ b/drivers/net/zxdh/zxdh_tables.c @@ -10,6 +10,7 @@ #define ZXDH_SDT_VPORT_ATT_TABLE 1 #define ZXDH_SDT_PANEL_ATT_TABLE 2 +#define ZXDH_SDT_RSS_ATT_TABLE 3 #define ZXDH_SDT_VLAN_ATT_TABLE 4 #define ZXDH_SDT_BROCAST_ATT_TABLE 6 #define ZXDH_SDT_UNICAST_ATT_TABLE 10 @@ -668,3 +669,84 @@ zxdh_vlan_filter_table_set(uint16_t vport, uint16_t vlan_id, uint8_t enable) } return 0; } + +int +zxdh_rss_table_set(uint16_t vport, struct zxdh_rss_reta *rss_reta) +{ + struct zxdh_rss_to_vqid_table rss_vqid = {0}; + 
union zxdh_virport_num vport_num = (union zxdh_virport_num)vport; + int ret = 0; + + for (uint16_t i = 0; i < RTE_ETH_RSS_RETA_SIZE_256 / 8; i++) { + for (uint16_t j = 0; j < 8; j++) { +#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN + if (j % 2 == 0) + rss_vqid.vqm_qid[j + 1] = rss_reta->reta[i * 8 + j]; + else + rss_vqid.vqm_qid[j - 1] = rss_reta->reta[i * 8 + j]; +#else + rss_vqid.vqm_qid[j] = rss_reta->reta[i * 8 + j]; +#endif + } + +#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN + rss_vqid.vqm_qid[1] |= 0x8000; +#else + rss_vqid.vqm_qid[0] |= 0x8000; +#endif + ZXDH_DTB_ERAM_ENTRY_INFO_T entry = { + .index = vport_num.vfid * 32 + i, + .p_data = (uint32_t *)&rss_vqid + }; + ZXDH_DTB_USER_ENTRY_T user_entry_write = { + .sdt_no = ZXDH_SDT_RSS_ATT_TABLE, + .p_entry_data = &entry + }; + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &user_entry_write); + if (ret != 0) { + PMD_DRV_LOG(ERR, "write rss reta table failed vfid:%d", vport_num.vfid); + return ret; + } + } + return 0; +} + +int +zxdh_rss_table_get(uint16_t vport, struct zxdh_rss_reta *rss_reta) +{ + struct zxdh_rss_to_vqid_table rss_vqid = {0}; + union zxdh_virport_num vport_num = (union zxdh_virport_num)vport; + int ret = 0; + + for (uint16_t i = 0; i < RTE_ETH_RSS_RETA_SIZE_256 / 8; i++) { + ZXDH_DTB_ERAM_ENTRY_INFO_T entry = {vport_num.vfid * 32 + i, (uint32_t *)&rss_vqid}; + ZXDH_DTB_USER_ENTRY_T user_entry = {ZXDH_SDT_RSS_ATT_TABLE, &entry}; + + ret = zxdh_np_dtb_table_entry_get(ZXDH_DEVICE_NO, + g_dtb_data.queueid, &user_entry, 1); + if (ret != 0) { + PMD_DRV_LOG(ERR, "get rss tbl failed, vfid:%d", vport_num.vfid); + return -1; + } + +#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN + rss_vqid.vqm_qid[1] &= 0x7FFF; +#else + rss_vqid.vqm_qid[0] &= 0x7FFF; +#endif + uint8_t size = sizeof(struct zxdh_rss_to_vqid_table) / sizeof(uint16_t); + + for (int j = 0; j < size; j++) { +#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN + if (j % 2 == 0) + rss_reta->reta[i * 8 + j] = rss_vqid.vqm_qid[j + 1]; + else + rss_reta->reta[i * 8 + j] = rss_vqid.vqm_qid[j - 1]; +#else + rss_reta->reta[i * 8 + j] = rss_vqid.vqm_qid[j]; +#endif + } + } + return 0; +} diff --git a/drivers/net/zxdh/zxdh_tables.h b/drivers/net/zxdh/zxdh_tables.h index 8dac8f30dd..7bac39375c 100644 --- a/drivers/net/zxdh/zxdh_tables.h +++ b/drivers/net/zxdh/zxdh_tables.h @@ -8,6 +8,7 @@ #include <stdint.h> #define ZXDH_DEVICE_NO 0 +#define ZXDH_PORT_BASE_QID_FLAG 10 #define ZXDH_PORT_ATTR_IS_UP_FLAG 35 extern struct zxdh_dtb_shared_data g_dtb_data; @@ -198,6 +199,10 @@ struct zxdh_vlan_filter_table { uint32_t vlans[4]; }; +struct zxdh_rss_to_vqid_table { + uint16_t vqm_qid[8]; +}; + int zxdh_port_attr_init(struct rte_eth_dev *dev); int zxdh_panel_table_init(struct rte_eth_dev *dev); int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); @@ -211,5 +216,7 @@ int zxdh_dev_unicast_table_set(struct zxdh_hw *hw, uint16_t vport, bool enable); int zxdh_dev_multicast_table_set(struct zxdh_hw *hw, uint16_t vport, bool enable); int zxdh_vlan_filter_table_init(struct rte_eth_dev *dev); int zxdh_vlan_filter_table_set(uint16_t vport, uint16_t vlan_id, uint8_t enable); +int zxdh_rss_table_set(uint16_t vport, struct zxdh_rss_reta *rss_reta); +int zxdh_rss_table_get(uint16_t vport, struct zxdh_rss_reta *rss_reta); #endif /* ZXDH_TABLES_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 62745 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
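A note on the reta layout used by zxdh_rss_table_set/get above: each eRAM row holds eight 16-bit queue ids, and on little-endian hosts the two 16-bit halves of every 32-bit word are swapped before the valid bit (0x8000) is set. A minimal standalone sketch of that swizzle; the names pack_row_le, RETA_QIDS_PER_ROW and VQM_QID_VALID_BIT are illustrative, not part of the driver:

#include <stdint.h>
#include <stdio.h>

#define RETA_QIDS_PER_ROW 8	/* eight 16-bit qids per eRAM row */
#define VQM_QID_VALID_BIT 0x8000

/* Mirrors the RTE_LITTLE_ENDIAN branch of zxdh_rss_table_set: adjacent
 * 16-bit words within each 32-bit word trade places, so qid[0] ends up
 * in row[1], which also carries the valid bit. */
static void pack_row_le(const uint16_t qid[RETA_QIDS_PER_ROW],
			uint16_t row[RETA_QIDS_PER_ROW])
{
	int j;

	for (j = 0; j < RETA_QIDS_PER_ROW; j++) {
		if (j % 2 == 0)
			row[j + 1] = qid[j];
		else
			row[j - 1] = qid[j];
	}
	row[1] |= VQM_QID_VALID_BIT;
}

int main(void)
{
	uint16_t qid[RETA_QIDS_PER_ROW] = {0, 1, 2, 3, 0, 1, 2, 3};
	uint16_t row[RETA_QIDS_PER_ROW];
	int j;

	pack_row_le(qid, row);
	for (j = 0; j < RETA_QIDS_PER_ROW; j++)
		printf("row[%d] = 0x%04x\n", j, row[j]);
	return 0;
}

zxdh_rss_table_get undoes the same swap after clearing the valid bit, which is why both functions share the j % 2 indexing.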
* [PATCH v5 14/15] net/zxdh: basic stats ops implementations 2024-12-23 11:02 ` [PATCH v5 00/15] net/zxdh: updated net zxdh driver Junlong Wang ` (12 preceding siblings ...) 2024-12-23 11:02 ` [PATCH v5 13/15] net/zxdh: rss hash config/update, reta update/get Junlong Wang @ 2024-12-23 11:02 ` Junlong Wang 2024-12-23 11:02 ` [PATCH v5 15/15] net/zxdh: mtu update " Junlong Wang ` (2 subsequent siblings) 16 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-23 11:02 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 37376 bytes --] basic stats ops implementations. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- doc/guides/nics/features/zxdh.ini | 2 + doc/guides/nics/zxdh.rst | 1 + drivers/net/zxdh/zxdh_ethdev.c | 2 + drivers/net/zxdh/zxdh_ethdev_ops.c | 353 +++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev_ops.h | 27 +++ drivers/net/zxdh/zxdh_msg.h | 16 ++ drivers/net/zxdh/zxdh_np.c | 341 ++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_np.h | 30 +++ drivers/net/zxdh/zxdh_queue.h | 2 + drivers/net/zxdh/zxdh_rxtx.c | 83 ++++++- drivers/net/zxdh/zxdh_tables.h | 5 + 11 files changed, 859 insertions(+), 3 deletions(-) diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index 415ca547d0..98c141cf95 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -22,3 +22,5 @@ QinQ offload = Y RSS hash = Y RSS reta update = Y Inner RSS = Y +Basic stats = Y +Stats per queue = Y diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index 3cc6a1d348..c8a52b587c 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -32,6 +32,7 @@ Features of the ZXDH PMD are: - VLAN stripping and inserting - QINQ stripping and inserting - Receive Side Scaling (RSS) +- Port hardware statistics Driver compilation and testing diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 1c04719cd4..d87ad15824 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -1171,6 +1171,8 @@ static const struct eth_dev_ops zxdh_eth_dev_ops = { .reta_query = zxdh_dev_rss_reta_query, .rss_hash_update = zxdh_rss_hash_update, .rss_hash_conf_get = zxdh_rss_hash_conf_get, + .stats_get = zxdh_dev_stats_get, + .stats_reset = zxdh_dev_stats_reset, }; static int32_t diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.c b/drivers/net/zxdh/zxdh_ethdev_ops.c index d333717e87..1b219bd26d 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.c +++ b/drivers/net/zxdh/zxdh_ethdev_ops.c @@ -11,6 +11,8 @@ #include "zxdh_ethdev_ops.h" #include "zxdh_tables.h" #include "zxdh_logs.h" +#include "zxdh_rxtx.h" +#include "zxdh_np.h" #define ZXDH_VLAN_FILTER_GROUPS 64 #define ZXDH_INVALID_LOGIC_QID 0xFFFFU @@ -22,6 +24,108 @@ #define ZXDH_HF_MAC_VLAN 4 #define ZXDH_HF_ALL 0 +struct zxdh_hw_mac_stats { + uint64_t rx_total; + uint64_t rx_pause; + uint64_t rx_unicast; + uint64_t rx_multicast; + uint64_t rx_broadcast; + uint64_t rx_vlan; + uint64_t rx_size_64; + uint64_t rx_size_65_127; + uint64_t rx_size_128_255; + uint64_t rx_size_256_511; + uint64_t rx_size_512_1023; + uint64_t rx_size_1024_1518; + uint64_t rx_size_1519_mru; + uint64_t rx_undersize; + uint64_t rx_oversize; + uint64_t rx_fragment; + uint64_t rx_jabber; + uint64_t rx_control; + uint64_t rx_eee; + + uint64_t tx_total; + uint64_t tx_pause; + uint64_t tx_unicast; + uint64_t tx_multicast; + uint64_t tx_broadcast; + uint64_t tx_vlan; + uint64_t tx_size_64; + uint64_t tx_size_65_127; 
+ uint64_t tx_size_128_255; + uint64_t tx_size_256_511; + uint64_t tx_size_512_1023; + uint64_t tx_size_1024_1518; + uint64_t tx_size_1519_mtu; + uint64_t tx_undersize; + uint64_t tx_oversize; + uint64_t tx_fragment; + uint64_t tx_jabber; + uint64_t tx_control; + uint64_t tx_eee; + + uint64_t rx_error; + uint64_t rx_fcs_error; + uint64_t rx_drop; + + uint64_t tx_error; + uint64_t tx_fcs_error; + uint64_t tx_drop; + +} __rte_packed; + +struct zxdh_hw_mac_bytes { + uint64_t rx_total_bytes; + uint64_t rx_good_bytes; + uint64_t tx_total_bytes; + uint64_t tx_good_bytes; +} __rte_packed; + +struct zxdh_np_stats_data { + uint64_t n_pkts_dropped; + uint64_t n_bytes_dropped; +}; + +struct zxdh_xstats_name_off { + char name[RTE_ETH_XSTATS_NAME_SIZE]; + unsigned int offset; +}; + +static const struct zxdh_xstats_name_off zxdh_rxq_stat_strings[] = { + {"good_packets", offsetof(struct zxdh_virtnet_rx, stats.packets)}, + {"good_bytes", offsetof(struct zxdh_virtnet_rx, stats.bytes)}, + {"errors", offsetof(struct zxdh_virtnet_rx, stats.errors)}, + {"multicast_packets", offsetof(struct zxdh_virtnet_rx, stats.multicast)}, + {"broadcast_packets", offsetof(struct zxdh_virtnet_rx, stats.broadcast)}, + {"truncated_err", offsetof(struct zxdh_virtnet_rx, stats.truncated_err)}, + {"undersize_packets", offsetof(struct zxdh_virtnet_rx, stats.size_bins[0])}, + {"size_64_packets", offsetof(struct zxdh_virtnet_rx, stats.size_bins[1])}, + {"size_65_127_packets", offsetof(struct zxdh_virtnet_rx, stats.size_bins[2])}, + {"size_128_255_packets", offsetof(struct zxdh_virtnet_rx, stats.size_bins[3])}, + {"size_256_511_packets", offsetof(struct zxdh_virtnet_rx, stats.size_bins[4])}, + {"size_512_1023_packets", offsetof(struct zxdh_virtnet_rx, stats.size_bins[5])}, + {"size_1024_1518_packets", offsetof(struct zxdh_virtnet_rx, stats.size_bins[6])}, + {"size_1519_max_packets", offsetof(struct zxdh_virtnet_rx, stats.size_bins[7])}, +}; + +static const struct zxdh_xstats_name_off zxdh_txq_stat_strings[] = { + {"good_packets", offsetof(struct zxdh_virtnet_tx, stats.packets)}, + {"good_bytes", offsetof(struct zxdh_virtnet_tx, stats.bytes)}, + {"errors", offsetof(struct zxdh_virtnet_tx, stats.errors)}, + {"multicast_packets", offsetof(struct zxdh_virtnet_tx, stats.multicast)}, + {"broadcast_packets", offsetof(struct zxdh_virtnet_tx, stats.broadcast)}, + {"truncated_err", offsetof(struct zxdh_virtnet_tx, stats.truncated_err)}, + {"undersize_packets", offsetof(struct zxdh_virtnet_tx, stats.size_bins[0])}, + {"size_64_packets", offsetof(struct zxdh_virtnet_tx, stats.size_bins[1])}, + {"size_65_127_packets", offsetof(struct zxdh_virtnet_tx, stats.size_bins[2])}, + {"size_128_255_packets", offsetof(struct zxdh_virtnet_tx, stats.size_bins[3])}, + {"size_256_511_packets", offsetof(struct zxdh_virtnet_tx, stats.size_bins[4])}, + {"size_512_1023_packets", offsetof(struct zxdh_virtnet_tx, stats.size_bins[5])}, + {"size_1024_1518_packets", offsetof(struct zxdh_virtnet_tx, stats.size_bins[6])}, + {"size_1519_max_packets", offsetof(struct zxdh_virtnet_tx, stats.size_bins[7])}, +}; + static int32_t zxdh_config_port_status(struct rte_eth_dev *dev, uint16_t link_status) { struct zxdh_hw *hw = dev->data->dev_private; @@ -1153,3 +1257,252 @@ zxdh_rss_configure(struct rte_eth_dev *dev) } return 0; } + +static int32_t +zxdh_hw_vqm_stats_get(struct rte_eth_dev *dev, enum zxdh_agent_msg_type opcode, + struct zxdh_hw_vqm_stats *hw_stats) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_info msg_info = {0}; + struct zxdh_msg_reply_info 
reply_info = {0}; + enum ZXDH_BAR_MODULE_ID module_id; + int ret = 0; + + switch (opcode) { + case ZXDH_VQM_DEV_STATS_GET: + case ZXDH_VQM_QUEUE_STATS_GET: + case ZXDH_VQM_QUEUE_STATS_RESET: + module_id = ZXDH_BAR_MODULE_VQM; + break; + case ZXDH_MAC_STATS_GET: + case ZXDH_MAC_STATS_RESET: + module_id = ZXDH_BAR_MODULE_MAC; + break; + default: + PMD_DRV_LOG(ERR, "invalid opcode %u", opcode); + return -1; + } + + zxdh_agent_msg_build(hw, opcode, &msg_info); + + ret = zxdh_send_msg_to_riscv(dev, &msg_info, sizeof(struct zxdh_msg_info), + &reply_info, sizeof(struct zxdh_msg_reply_info), module_id); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to get hw stats"); + return -1; + } + struct zxdh_msg_reply_body *reply_body = &reply_info.reply_body; + + rte_memcpy(hw_stats, &reply_body->vqm_stats, sizeof(struct zxdh_hw_vqm_stats)); + return 0; +} + +static int zxdh_hw_mac_stats_get(struct rte_eth_dev *dev, + struct zxdh_hw_mac_stats *mac_stats, + struct zxdh_hw_mac_bytes *mac_bytes) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint64_t virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_MAC_OFFSET); + uint64_t stats_addr = 0; + uint64_t bytes_addr = 0; + + if (hw->speed <= RTE_ETH_SPEED_NUM_25G) { + stats_addr = virt_addr + ZXDH_MAC_STATS_OFFSET + 352 * (hw->phyport % 4); + bytes_addr = virt_addr + ZXDH_MAC_BYTES_OFFSET + 32 * (hw->phyport % 4); + } else { + stats_addr = virt_addr + ZXDH_MAC_STATS_OFFSET + 352 * 4; + bytes_addr = virt_addr + ZXDH_MAC_BYTES_OFFSET + 32 * 4; + } + + rte_memcpy(mac_stats, (void *)stats_addr, sizeof(struct zxdh_hw_mac_stats)); + rte_memcpy(mac_bytes, (void *)bytes_addr, sizeof(struct zxdh_hw_mac_bytes)); + return 0; +} + +static void zxdh_data_hi_to_lo(uint64_t *data) +{ + uint32_t n_data_hi; + uint32_t n_data_lo; + + n_data_lo = *data >> 32; + n_data_hi = *data; + *data = (uint64_t)(rte_le_to_cpu_32(n_data_hi)) << 32 | + rte_le_to_cpu_32(n_data_lo); +} + +static int zxdh_np_stats_get(struct rte_eth_dev *dev, struct zxdh_hw_np_stats *np_stats) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_np_stats_data stats_data; + uint32_t stats_id = zxdh_vport_to_vfid(hw->vport); + uint32_t idx = 0; + int ret = 0; + + idx = stats_id + ZXDH_BROAD_STATS_EGRESS_BASE; + ret = zxdh_np_dtb_stats_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, + 0, idx, (uint32_t *)&np_stats->np_tx_broadcast); + if (ret) + return ret; + zxdh_data_hi_to_lo(&np_stats->np_tx_broadcast); + + idx = stats_id + ZXDH_BROAD_STATS_INGRESS_BASE; + memset(&stats_data, 0, sizeof(stats_data)); + ret = zxdh_np_dtb_stats_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, + 0, idx, (uint32_t *)&np_stats->np_rx_broadcast); + if (ret) + return ret; + zxdh_data_hi_to_lo(&np_stats->np_rx_broadcast); + + idx = stats_id + ZXDH_MTU_STATS_EGRESS_BASE; + memset(&stats_data, 0, sizeof(stats_data)); + ret = zxdh_np_dtb_stats_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, + 1, idx, (uint32_t *)&stats_data); + if (ret) + return ret; + + np_stats->np_tx_mtu_drop_pkts = stats_data.n_pkts_dropped; + np_stats->np_tx_mtu_drop_bytes = stats_data.n_bytes_dropped; + zxdh_data_hi_to_lo(&np_stats->np_tx_mtu_drop_pkts); + zxdh_data_hi_to_lo(&np_stats->np_tx_mtu_drop_bytes); + + idx = stats_id + ZXDH_MTU_STATS_INGRESS_BASE; + memset(&stats_data, 0, sizeof(stats_data)); + ret = zxdh_np_dtb_stats_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, + 1, idx, (uint32_t *)&stats_data); + if (ret) + return ret; + np_stats->np_rx_mtu_drop_pkts = stats_data.n_pkts_dropped; + np_stats->np_rx_mtu_drop_bytes = stats_data.n_bytes_dropped; + 
zxdh_data_hi_to_lo(&np_stats->np_rx_mtu_drop_pkts); + zxdh_data_hi_to_lo(&np_stats->np_rx_mtu_drop_bytes); + + return 0; +} + +static int +zxdh_hw_np_stats_get(struct rte_eth_dev *dev, struct zxdh_hw_np_stats *np_stats) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_info msg_info = {0}; + struct zxdh_msg_reply_info reply_info = {0}; + int ret = 0; + + if (hw->is_pf) { + ret = zxdh_np_stats_get(dev, np_stats); + if (ret) { + PMD_DRV_LOG(ERR, "get np stats failed"); + return -1; + } + } else { + zxdh_msg_head_build(hw, ZXDH_GET_NP_STATS, &msg_info); + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(struct zxdh_msg_info), + &reply_info, sizeof(struct zxdh_msg_reply_info)); + if (ret) { + PMD_DRV_LOG(ERR, + "Failed to send msg: port 0x%x msg type ZXDH_GET_NP_STATS", + hw->vport.vport); + return -1; + } + memcpy(np_stats, &reply_info.reply_body.np_stats, sizeof(struct zxdh_hw_np_stats)); + } + return ret; +} + +int +zxdh_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_hw_vqm_stats vqm_stats = {0}; + struct zxdh_hw_np_stats np_stats = {0}; + struct zxdh_hw_mac_stats mac_stats = {0}; + struct zxdh_hw_mac_bytes mac_bytes = {0}; + uint32_t i = 0; + + zxdh_hw_vqm_stats_get(dev, ZXDH_VQM_DEV_STATS_GET, &vqm_stats); + if (hw->is_pf) + zxdh_hw_mac_stats_get(dev, &mac_stats, &mac_bytes); + + zxdh_hw_np_stats_get(dev, &np_stats); + + stats->ipackets = vqm_stats.rx_total; + stats->opackets = vqm_stats.tx_total; + stats->ibytes = vqm_stats.rx_bytes; + stats->obytes = vqm_stats.tx_bytes; + stats->imissed = vqm_stats.rx_drop + mac_stats.rx_drop; + stats->ierrors = vqm_stats.rx_error + mac_stats.rx_error + np_stats.np_rx_mtu_drop_pkts; + stats->oerrors = vqm_stats.tx_error + mac_stats.tx_error + np_stats.np_tx_mtu_drop_pkts; + + stats->rx_nombuf = dev->data->rx_mbuf_alloc_failed; + for (i = 0; (i < dev->data->nb_rx_queues) && (i < RTE_ETHDEV_QUEUE_STAT_CNTRS); i++) { + struct zxdh_virtnet_rx *rxvq = dev->data->rx_queues[i]; + + if (rxvq == NULL) + continue; + stats->q_ipackets[i] = *(uint64_t *)(((char *)rxvq) + + zxdh_rxq_stat_strings[0].offset); + stats->q_ibytes[i] = *(uint64_t *)(((char *)rxvq) + + zxdh_rxq_stat_strings[1].offset); + stats->q_errors[i] = *(uint64_t *)(((char *)rxvq) + + zxdh_rxq_stat_strings[2].offset); + stats->q_errors[i] += *(uint64_t *)(((char *)rxvq) + + zxdh_rxq_stat_strings[5].offset); + } + + for (i = 0; (i < dev->data->nb_tx_queues) && (i < RTE_ETHDEV_QUEUE_STAT_CNTRS); i++) { + struct zxdh_virtnet_tx *txvq = dev->data->tx_queues[i]; + + if (txvq == NULL) + continue; + stats->q_opackets[i] = *(uint64_t *)(((char *)txvq) + + zxdh_txq_stat_strings[0].offset); + stats->q_obytes[i] = *(uint64_t *)(((char *)txvq) + + zxdh_txq_stat_strings[1].offset); + stats->q_errors[i] += *(uint64_t *)(((char *)txvq) + + zxdh_txq_stat_strings[2].offset); + stats->q_errors[i] += *(uint64_t *)(((char *)txvq) + + zxdh_txq_stat_strings[5].offset); + } + return 0; +} + +static int zxdh_hw_stats_reset(struct rte_eth_dev *dev, enum zxdh_agent_msg_type opcode) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_info msg_info = {0}; + struct zxdh_msg_reply_info reply_info = {0}; + enum ZXDH_BAR_MODULE_ID module_id; + int ret = 0; + + switch (opcode) { + case ZXDH_VQM_DEV_STATS_RESET: + module_id = ZXDH_BAR_MODULE_VQM; + break; + case ZXDH_MAC_STATS_RESET: + module_id = ZXDH_BAR_MODULE_MAC; + break; + default: + PMD_DRV_LOG(ERR, "invalid opcode %u", opcode); + return -1; + } + +
zxdh_agent_msg_build(hw, opcode, &msg_info); + + ret = zxdh_send_msg_to_riscv(dev, &msg_info, sizeof(struct zxdh_msg_info), + &reply_info, sizeof(struct zxdh_msg_reply_info), module_id); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to reset hw stats"); + return -1; + } + return 0; +} + +int zxdh_dev_stats_reset(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + zxdh_hw_stats_reset(dev, ZXDH_VQM_DEV_STATS_RESET); + if (hw->is_pf) + zxdh_hw_stats_reset(dev, ZXDH_MAC_STATS_RESET); + + return 0; +} diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.h b/drivers/net/zxdh/zxdh_ethdev_ops.h index 860716d079..f35378e691 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.h +++ b/drivers/net/zxdh/zxdh_ethdev_ops.h @@ -5,6 +5,8 @@ #ifndef ZXDH_ETHDEV_OPS_H #define ZXDH_ETHDEV_OPS_H +#include <stdint.h> + #include <rte_ether.h> #include "zxdh_ethdev.h" @@ -24,6 +26,29 @@ #define ZXDH_HF_MAC_VLAN_ETH ZXDH_ETH_RSS_L2 #define ZXDH_RSS_HF ((ZXDH_HF_MAC_VLAN_ETH | ZXDH_HF_F3_ETH | ZXDH_HF_F5_ETH)) +struct zxdh_hw_vqm_stats { + uint64_t rx_total; + uint64_t tx_total; + uint64_t rx_bytes; + uint64_t tx_bytes; + uint64_t rx_error; + uint64_t tx_error; + uint64_t rx_drop; +} __rte_packed; + +struct zxdh_hw_np_stats { + uint64_t np_rx_broadcast; + uint64_t np_tx_broadcast; + uint64_t np_rx_mtu_drop_pkts; + uint64_t np_tx_mtu_drop_pkts; + uint64_t np_rx_mtu_drop_bytes; + uint64_t np_tx_mtu_drop_bytes; + uint64_t np_rx_mtr_drop_pkts; + uint64_t np_tx_mtr_drop_pkts; + uint64_t np_rx_mtr_drop_bytes; + uint64_t np_tx_mtr_drop_bytes; +}; + int zxdh_dev_set_link_up(struct rte_eth_dev *dev); int zxdh_dev_set_link_down(struct rte_eth_dev *dev); int32_t zxdh_dev_link_update(struct rte_eth_dev *dev, int32_t wait_to_complete __rte_unused); @@ -46,5 +71,7 @@ int zxdh_dev_rss_reta_query(struct rte_eth_dev *dev, int zxdh_rss_hash_update(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf); int zxdh_rss_hash_conf_get(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf); int zxdh_rss_configure(struct rte_eth_dev *dev); +int zxdh_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats); +int zxdh_dev_stats_reset(struct rte_eth_dev *dev); #endif /* ZXDH_ETHDEV_OPS_H */ diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index 45a9b10aa4..159c8c9c71 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -9,10 +9,16 @@ #include <ethdev_driver.h> +#include "zxdh_ethdev_ops.h" + #define ZXDH_BAR0_INDEX 0 #define ZXDH_CTRLCH_OFFSET (0x2000) #define ZXDH_MSG_CHAN_PFVFSHARE_OFFSET (ZXDH_CTRLCH_OFFSET + 0x1000) +#define ZXDH_MAC_OFFSET (0x24000) +#define ZXDH_MAC_STATS_OFFSET (0x1408) +#define ZXDH_MAC_BYTES_OFFSET (0xb000) + #define ZXDH_MSIX_INTR_MSG_VEC_BASE 1 #define ZXDH_MSIX_INTR_MSG_VEC_NUM 3 #define ZXDH_MSIX_INTR_DTB_VEC (ZXDH_MSIX_INTR_MSG_VEC_BASE + ZXDH_MSIX_INTR_MSG_VEC_NUM) @@ -173,7 +179,13 @@ enum pciebar_layout_type { /* riscv msg opcodes */ enum zxdh_agent_msg_type { + ZXDH_MAC_STATS_GET = 10, + ZXDH_MAC_STATS_RESET, ZXDH_MAC_LINK_GET = 14, + ZXDH_VQM_DEV_STATS_GET = 21, + ZXDH_VQM_DEV_STATS_RESET, + ZXDH_VQM_QUEUE_STATS_GET = 24, + ZXDH_VQM_QUEUE_STATS_RESET, }; enum zxdh_msg_type { @@ -195,6 +207,8 @@ enum zxdh_msg_type { ZXDH_PORT_ATTRS_SET = 25, ZXDH_PORT_PROMISC_SET = 26, + ZXDH_GET_NP_STATS = 31, + ZXDH_MSG_TYPE_END, }; @@ -322,6 +336,8 @@ struct zxdh_msg_reply_body { struct zxdh_link_info_msg link_msg; struct zxdh_rss_hf rss_hf; struct zxdh_rss_reta rss_reta; + struct zxdh_hw_vqm_stats vqm_stats; + struct zxdh_hw_np_stats np_stats; } 
__rte_packed; } __rte_packed; diff --git a/drivers/net/zxdh/zxdh_np.c b/drivers/net/zxdh/zxdh_np.c index 1f06539263..42679635f4 100644 --- a/drivers/net/zxdh/zxdh_np.c +++ b/drivers/net/zxdh/zxdh_np.c @@ -26,6 +26,7 @@ ZXDH_TLB_MGR_T *g_p_dpp_tlb_mgr[ZXDH_DEV_CHANNEL_MAX]; ZXDH_REG_T g_dpp_reg_info[4]; ZXDH_DTB_TABLE_T g_dpp_dtb_table_info[4]; ZXDH_SDT_TBL_DATA_T g_sdt_info[ZXDH_DEV_CHANNEL_MAX][ZXDH_DEV_SDT_ID_MAX]; +ZXDH_PPU_STAT_CFG_T g_ppu_stat_cfg; #define ZXDH_SDT_MGR_PTR_GET() (&g_sdt_mgr) #define ZXDH_SDT_SOFT_TBL_GET(id) (g_sdt_mgr.sdt_tbl_array[id]) @@ -117,6 +118,18 @@ do {\ #define ZXDH_COMM_CONVERT16(w_data) \ (((w_data) & 0xff) << 8) +#define ZXDH_DTB_TAB_UP_WR_INDEX_GET(DEV_ID, QUEUE_ID) \ + (p_dpp_dtb_mgr[(DEV_ID)]->queue_info[(QUEUE_ID)].tab_up.wr_index) + +#define ZXDH_DTB_TAB_UP_USER_PHY_ADDR_FLAG_GET(DEV_ID, QUEUE_ID, INDEX) \ + (p_dpp_dtb_mgr[(DEV_ID)]->queue_info[(QUEUE_ID)].tab_up.user_addr[(INDEX)].user_flag) + +#define ZXDH_DTB_TAB_UP_USER_PHY_ADDR_GET(DEV_ID, QUEUE_ID, INDEX) \ + (p_dpp_dtb_mgr[(DEV_ID)]->queue_info[(QUEUE_ID)].tab_up.user_addr[(INDEX)].phy_addr) + +#define ZXDH_DTB_TAB_UP_DATA_LEN_GET(DEV_ID, QUEUE_ID, INDEX) \ + (p_dpp_dtb_mgr[(DEV_ID)]->queue_info[(QUEUE_ID)].tab_up.data_len[(INDEX)]) + #define ZXDH_DTB_TAB_UP_VIR_ADDR_GET(DEV_ID, QUEUE_ID, INDEX) \ ((INDEX) * p_dpp_dtb_mgr[(DEV_ID)]->queue_info[(QUEUE_ID)].tab_up.item_size) @@ -1717,3 +1730,331 @@ zxdh_np_dtb_table_entry_get(uint32_t dev_id, return 0; } + +static void +zxdh_np_stat_cfg_soft_get(uint32_t dev_id, + ZXDH_PPU_STAT_CFG_T *p_stat_cfg) +{ + ZXDH_COMM_CHECK_DEV_POINT(dev_id, p_stat_cfg); + + p_stat_cfg->ddr_base_addr = g_ppu_stat_cfg.ddr_base_addr; + p_stat_cfg->eram_baddr = g_ppu_stat_cfg.eram_baddr; + p_stat_cfg->eram_depth = g_ppu_stat_cfg.eram_depth; + p_stat_cfg->ppu_addr_offset = g_ppu_stat_cfg.ppu_addr_offset; +} + +static uint32_t +zxdh_np_dtb_tab_up_info_set(uint32_t dev_id, + uint32_t queue_id, + uint32_t item_index, + uint32_t int_flag, + uint32_t data_len, + uint32_t desc_len, + uint32_t *p_desc_data) +{ + ZXDH_DTB_QUEUE_ITEM_INFO_T item_info = {0}; + uint32_t queue_en = 0; + uint32_t rc; + + zxdh_np_dtb_queue_enable_get(dev_id, queue_id, &queue_en); + if (!queue_en) { + PMD_DRV_LOG(ERR, "the queue %d is not enable!", queue_id); + return ZXDH_RC_DTB_QUEUE_NOT_ENABLE; + } + + if (ZXDH_DTB_QUEUE_INIT_FLAG_GET(dev_id, queue_id) == 0) { + PMD_DRV_LOG(ERR, "dtb queue %d is not init.", queue_id); + return ZXDH_RC_DTB_QUEUE_IS_NOT_INIT; + } + + if (desc_len % 4 != 0) + return ZXDH_RC_DTB_PARA_INVALID; + + zxdh_np_dtb_item_buff_wr(dev_id, queue_id, ZXDH_DTB_DIR_UP_TYPE, + item_index, 0, desc_len, p_desc_data); + + ZXDH_DTB_TAB_UP_DATA_LEN_GET(dev_id, queue_id, item_index) = data_len; + + item_info.cmd_vld = 1; + item_info.cmd_type = ZXDH_DTB_DIR_UP_TYPE; + item_info.int_en = int_flag; + item_info.data_len = desc_len / 4; + + if (zxdh_np_dev_get_dev_type(dev_id) == ZXDH_DEV_TYPE_SIM) + return 0; + + rc = zxdh_np_dtb_queue_item_info_set(dev_id, queue_id, &item_info); + + return rc; +} + +static uint32_t +zxdh_np_dtb_write_dump_desc_info(uint32_t dev_id, + uint32_t queue_id, + uint32_t queue_element_id, + uint32_t *p_dump_info, + uint32_t data_len, + uint32_t desc_len, + uint32_t *p_dump_data) +{ + uint32_t dtb_interrupt_status = 0; + uint32_t rc; + + ZXDH_COMM_CHECK_POINT(p_dump_data); + rc = zxdh_np_dtb_tab_up_info_set(dev_id, + queue_id, + queue_element_id, + dtb_interrupt_status, + data_len, + desc_len, + p_dump_info); + if (rc != 0) { + PMD_DRV_LOG(ERR, "the queue %d element id 
%d dump" + " info set failed!", queue_id, queue_element_id); + zxdh_np_dtb_item_ack_wr(dev_id, queue_id, ZXDH_DTB_DIR_UP_TYPE, + queue_element_id, 0, ZXDH_DTB_TAB_ACK_UNUSED_MASK); + } + + return rc; +} + +static uint32_t +zxdh_np_dtb_tab_up_free_item_get(uint32_t dev_id, + uint32_t queue_id, + uint32_t *p_item_index) +{ + uint32_t ack_value = 0; + uint32_t item_index = 0; + uint32_t unused_item_num = 0; + uint32_t i; + + if (ZXDH_DTB_QUEUE_INIT_FLAG_GET(dev_id, queue_id) == 0) { + PMD_DRV_LOG(ERR, "dtb queue %d is not init.", queue_id); + return ZXDH_RC_DTB_QUEUE_IS_NOT_INIT; + } + + zxdh_np_dtb_queue_unused_item_num_get(dev_id, queue_id, &unused_item_num); + + if (unused_item_num == 0) + return ZXDH_RC_DTB_QUEUE_ITEM_HW_EMPTY; + + for (i = 0; i < ZXDH_DTB_QUEUE_ITEM_NUM_MAX; i++) { + item_index = ZXDH_DTB_TAB_UP_WR_INDEX_GET(dev_id, queue_id) % + ZXDH_DTB_QUEUE_ITEM_NUM_MAX; + + zxdh_np_dtb_item_ack_rd(dev_id, queue_id, ZXDH_DTB_DIR_UP_TYPE, item_index, + 0, &ack_value); + + ZXDH_DTB_TAB_UP_WR_INDEX_GET(dev_id, queue_id)++; + + if ((ack_value >> 8) == ZXDH_DTB_TAB_ACK_UNUSED_MASK) + break; + } + + if (i == ZXDH_DTB_QUEUE_ITEM_NUM_MAX) + return ZXDH_RC_DTB_QUEUE_ITEM_SW_EMPTY; + + zxdh_np_dtb_item_ack_wr(dev_id, queue_id, ZXDH_DTB_DIR_UP_TYPE, item_index, + 0, ZXDH_DTB_TAB_ACK_IS_USING_MASK); + + *p_item_index = item_index; + + + return 0; +} + +static uint32_t +zxdh_np_dtb_tab_up_item_addr_get(uint32_t dev_id, + uint32_t queue_id, + uint32_t item_index, + uint32_t *p_phy_haddr, + uint32_t *p_phy_laddr) +{ + uint32_t rc = 0; + uint64_t addr; + + if (ZXDH_DTB_QUEUE_INIT_FLAG_GET(dev_id, queue_id) == 0) { + PMD_DRV_LOG(ERR, "dtb queue %d is not init.", queue_id); + return ZXDH_RC_DTB_QUEUE_IS_NOT_INIT; + } + + if (ZXDH_DTB_TAB_UP_USER_PHY_ADDR_FLAG_GET(dev_id, queue_id, item_index) == + ZXDH_DTB_TAB_UP_USER_ADDR_TYPE) + addr = ZXDH_DTB_TAB_UP_USER_PHY_ADDR_GET(dev_id, queue_id, item_index); + else + addr = ZXDH_DTB_ITEM_ACK_SIZE; + + *p_phy_haddr = (addr >> 32) & 0xffffffff; + *p_phy_laddr = addr & 0xffffffff; + + return rc; +} + +static uint32_t +zxdh_np_dtb_se_smmu0_dma_dump(uint32_t dev_id, + uint32_t queue_id, + uint32_t base_addr, + uint32_t depth, + uint32_t *p_data, + uint32_t *element_id) +{ + uint8_t form_buff[ZXDH_DTB_TABLE_CMD_SIZE_BIT / 8] = {0}; + uint32_t dump_dst_phy_haddr = 0; + uint32_t dump_dst_phy_laddr = 0; + uint32_t queue_item_index = 0; + uint32_t data_len; + uint32_t desc_len; + uint32_t rc; + + rc = zxdh_np_dtb_tab_up_free_item_get(dev_id, queue_id, &queue_item_index); + if (rc != 0) { + PMD_DRV_LOG(ERR, "dpp_dtb_tab_up_free_item_get failed = %d!", rc); + return ZXDH_RC_DTB_QUEUE_ITEM_SW_EMPTY; + } + + *element_id = queue_item_index; + + rc = zxdh_np_dtb_tab_up_item_addr_get(dev_id, queue_id, queue_item_index, + &dump_dst_phy_haddr, &dump_dst_phy_laddr); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_dtb_tab_up_item_addr_get"); + + data_len = depth * 128 / 32; + desc_len = ZXDH_DTB_LEN_POS_SETP / 4; + + rc = zxdh_np_dtb_write_dump_desc_info(dev_id, queue_id, queue_item_index, + (uint32_t *)form_buff, data_len, desc_len, p_data); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_dtb_write_dump_desc_info"); + + return rc; +} + +static uint32_t +zxdh_np_dtb_se_smmu0_ind_read(uint32_t dev_id, + uint32_t queue_id, + uint32_t base_addr, + uint32_t index, + uint32_t rd_mode, + uint32_t *p_data) +{ + uint32_t temp_data[4] = {0}; + uint32_t element_id = 0; + uint32_t row_index = 0; + uint32_t col_index = 0; + uint32_t eram_dump_base_addr; + uint32_t rc; + + switch (rd_mode) { + case
ZXDH_ERAM128_OPR_128b: + { + row_index = index; + break; + } + case ZXDH_ERAM128_OPR_64b: + { + row_index = (index >> 1); + col_index = index & 0x1; + break; + } + case ZXDH_ERAM128_OPR_1b: + { + row_index = (index >> 7); + col_index = index & 0x7F; + break; + } + } + + eram_dump_base_addr = base_addr + row_index; + rc = zxdh_np_dtb_se_smmu0_dma_dump(dev_id, + queue_id, + eram_dump_base_addr, + 1, + temp_data, + &element_id); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_np_dtb_se_smmu0_dma_dump"); + + switch (rd_mode) { + case ZXDH_ERAM128_OPR_128b: + { + memcpy(p_data, temp_data, (128 / 8)); + break; + } + + case ZXDH_ERAM128_OPR_64b: + { + memcpy(p_data, temp_data + ((1 - col_index) << 1), (64 / 8)); + break; + } + + case ZXDH_ERAM128_OPR_1b: + { + ZXDH_COMM_UINT32_GET_BITS(p_data[0], *(temp_data + + (3 - col_index / 32)), (col_index % 32), 1); + break; + } + } + + return rc; +} + +static uint32_t +zxdh_np_dtb_stat_smmu0_int_read(uint32_t dev_id, + uint32_t queue_id, + uint32_t smmu0_base_addr, + ZXDH_STAT_CNT_MODE_E rd_mode, + uint32_t index, + uint32_t *p_data) +{ + uint32_t eram_rd_mode; + uint32_t rc; + + ZXDH_COMM_CHECK_DEV_POINT(dev_id, p_data); + + if (rd_mode == ZXDH_STAT_128_MODE) + eram_rd_mode = ZXDH_ERAM128_OPR_128b; + else + eram_rd_mode = ZXDH_ERAM128_OPR_64b; + + rc = zxdh_np_dtb_se_smmu0_ind_read(dev_id, + queue_id, + smmu0_base_addr, + index, + eram_rd_mode, + p_data); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_np_dtb_se_smmu0_ind_read"); + + return rc; +} + +int +zxdh_np_dtb_stats_get(uint32_t dev_id, + uint32_t queue_id, + ZXDH_STAT_CNT_MODE_E rd_mode, + uint32_t index, + uint32_t *p_data) +{ + ZXDH_PPU_STAT_CFG_T stat_cfg = {0}; + uint32_t ppu_eram_baddr; + uint32_t ppu_eram_depth; + uint32_t rc = 0; + + ZXDH_COMM_CHECK_DEV_POINT(dev_id, p_data); + + memset(&stat_cfg, 0x0, sizeof(stat_cfg)); + + zxdh_np_stat_cfg_soft_get(dev_id, &stat_cfg); + + ppu_eram_depth = stat_cfg.eram_depth; + ppu_eram_baddr = stat_cfg.eram_baddr; + + if ((index >> (ZXDH_STAT_128_MODE - rd_mode)) < ppu_eram_depth) { + rc = zxdh_np_dtb_stat_smmu0_int_read(dev_id, + queue_id, + ppu_eram_baddr, + rd_mode, + index, + p_data); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_dtb_stat_smmu0_int_read"); + } + + return rc; +} diff --git a/drivers/net/zxdh/zxdh_np.h b/drivers/net/zxdh/zxdh_np.h index 19d1f03f59..7da29cf7bd 100644 --- a/drivers/net/zxdh/zxdh_np.h +++ b/drivers/net/zxdh/zxdh_np.h @@ -432,6 +432,18 @@ typedef enum zxdh_sdt_table_type_e { ZXDH_SDT_TBLT_MAX = 7, } ZXDH_SDT_TABLE_TYPE_E; +typedef enum zxdh_dtb_dir_type_e { + ZXDH_DTB_DIR_DOWN_TYPE = 0, + ZXDH_DTB_DIR_UP_TYPE = 1, + ZXDH_DTB_DIR_TYPE_MAX, +} ZXDH_DTB_DIR_TYPE_E; + +typedef enum zxdh_dtb_tab_up_user_addr_type_e { + ZXDH_DTB_TAB_UP_NOUSER_ADDR_TYPE = 0, + ZXDH_DTB_TAB_UP_USER_ADDR_TYPE = 1, + ZXDH_DTB_TAB_UP_USER_ADDR_TYPE_MAX, +} ZXDH_DTB_TAB_UP_USER_ADDR_TYPE_E; + typedef struct zxdh_dtb_lpm_entry_t { uint32_t dtb_len0; uint8_t *p_data_buff0; @@ -537,6 +549,19 @@ typedef struct zxdh_dtb_hash_entry_info_t { uint8_t *p_rst; } ZXDH_DTB_HASH_ENTRY_INFO_T; +typedef struct zxdh_ppu_stat_cfg_t { + uint32_t eram_baddr; + uint32_t eram_depth; + uint32_t ddr_base_addr; + uint32_t ppu_addr_offset; +} ZXDH_PPU_STAT_CFG_T; + +typedef enum zxdh_stat_cnt_mode_e { + ZXDH_STAT_64_MODE = 0, + ZXDH_STAT_128_MODE = 1, + ZXDH_STAT_MAX_MODE, +} ZXDH_STAT_CNT_MODE_E; + int zxdh_np_host_init(uint32_t dev_id, ZXDH_DEV_INIT_CTRL_T *p_dev_init_ctrl); int zxdh_np_online_uninit(uint32_t dev_id, char *port_name, uint32_t queue_id); int 
zxdh_np_dtb_table_entry_write(uint32_t dev_id, uint32_t queue_id, @@ -545,5 +570,10 @@ int zxdh_np_dtb_table_entry_delete(uint32_t dev_id, uint32_t queue_id, uint32_t entrynum, ZXDH_DTB_USER_ENTRY_T *delete_entries); int zxdh_np_dtb_table_entry_get(uint32_t dev_id, uint32_t queue_id, ZXDH_DTB_USER_ENTRY_T *get_entry, uint32_t srh_mode); +int zxdh_np_dtb_stats_get(uint32_t dev_id, + uint32_t queue_id, + ZXDH_STAT_CNT_MODE_E rd_mode, + uint32_t index, + uint32_t *p_data); #endif /* ZXDH_NP_H */ diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h index 9343df81ac..deb0dd891a 100644 --- a/drivers/net/zxdh/zxdh_queue.h +++ b/drivers/net/zxdh/zxdh_queue.h @@ -53,6 +53,8 @@ enum { ZXDH_VTNET_RQ = 0, ZXDH_VTNET_TQ = 1 }; #define ZXDH_PI_HDR_SIZE sizeof(struct zxdh_pi_hdr) #define ZXDH_DL_NET_HDR_SIZE sizeof(struct zxdh_net_hdr_dl) #define ZXDH_UL_NET_HDR_SIZE sizeof(struct zxdh_net_hdr_ul) +#define ZXDH_PD_HDR_SIZE_MAX 256 +#define ZXDH_PD_HDR_SIZE_MIN ZXDH_TYPE_HDR_SIZE /* * ring descriptors: 16 bytes. diff --git a/drivers/net/zxdh/zxdh_rxtx.c b/drivers/net/zxdh/zxdh_rxtx.c index 0ffce50042..27a61d46dd 100644 --- a/drivers/net/zxdh/zxdh_rxtx.c +++ b/drivers/net/zxdh/zxdh_rxtx.c @@ -406,6 +406,40 @@ static inline void zxdh_enqueue_xmit_packed(struct zxdh_virtnet_tx *txvq, zxdh_queue_store_flags_packed(head_dp, head_flags, vq->hw->weak_barriers); } +static void +zxdh_update_packet_stats(struct zxdh_virtnet_stats *stats, struct rte_mbuf *mbuf) +{ + uint32_t s = mbuf->pkt_len; + struct rte_ether_addr *ea = NULL; + + stats->bytes += s; + + if (s == 64) { + stats->size_bins[1]++; + } else if (s > 64 && s < 1024) { + uint32_t bin; + + /* count zeros, and offset into correct bin */ + bin = (sizeof(s) * 8) - rte_clz32(s) - 5; + stats->size_bins[bin]++; + } else { + if (s < 64) + stats->size_bins[0]++; + else if (s < 1519) + stats->size_bins[6]++; + else + stats->size_bins[7]++; + } + + ea = rte_pktmbuf_mtod(mbuf, struct rte_ether_addr *); + if (rte_is_multicast_ether_addr(ea)) { + if (rte_is_broadcast_ether_addr(ea)) + stats->broadcast++; + else + stats->multicast++; + } +} + uint16_t zxdh_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { @@ -459,12 +493,19 @@ zxdh_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkt break; } } + if (txm->nb_segs > ZXDH_TX_MAX_SEGS) { + PMD_TX_LOG(ERR, "%d segs dropped", txm->nb_segs); + txvq->stats.truncated_err += nb_pkts - nb_tx; + break; + } /* Enqueue Packet buffers */ if (can_push) zxdh_enqueue_xmit_packed_fast(txvq, txm, in_order); else zxdh_enqueue_xmit_packed(txvq, txm, slots, use_indirect, in_order); + zxdh_update_packet_stats(&txvq->stats, txm); } + txvq->stats.packets += nb_tx; if (likely(nb_tx)) { if (unlikely(zxdh_queue_kick_prepare_packed(vq))) { zxdh_queue_notify(vq); @@ -474,9 +515,10 @@ zxdh_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkt return nb_tx; } -uint16_t zxdh_xmit_pkts_prepare(void *tx_queue __rte_unused, struct rte_mbuf **tx_pkts, +uint16_t zxdh_xmit_pkts_prepare(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { + struct zxdh_virtnet_tx *txvq = tx_queue; uint16_t nb_tx; for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) { @@ -496,6 +538,12 @@ uint16_t zxdh_xmit_pkts_prepare(void *tx_queue __rte_unused, struct rte_mbuf **t rte_errno = -error; break; } + if (m->nb_segs > ZXDH_TX_MAX_SEGS) { + PMD_TX_LOG(ERR, "%d segs dropped", m->nb_segs); + txvq->stats.truncated_err += nb_pkts - nb_tx; + rte_errno = ENOMEM; + break; + } } return nb_tx; } 
@@ -571,7 +619,7 @@ static int32_t zxdh_rx_update_mbuf(struct rte_mbuf *m, struct zxdh_net_hdr_ul *h return 0; } -static inline void zxdh_discard_rxbuf(struct zxdh_virtqueue *vq, struct rte_mbuf *m) +static void zxdh_discard_rxbuf(struct zxdh_virtqueue *vq, struct rte_mbuf *m) { int32_t error = 0; /* @@ -613,7 +661,13 @@ uint16_t zxdh_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts, for (i = 0; i < num; i++) { rxm = rcv_pkts[i]; - + if (unlikely(len[i] < ZXDH_UL_NET_HDR_SIZE)) { + nb_enqueued++; + PMD_RX_LOG(ERR, "RX, len:%u err", len[i]); + zxdh_discard_rxbuf(vq, rxm); + rxvq->stats.errors++; + continue; + } struct zxdh_net_hdr_ul *header = (struct zxdh_net_hdr_ul *)((char *)rxm->buf_addr + RTE_PKTMBUF_HEADROOM); @@ -623,8 +677,22 @@ uint16_t zxdh_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts, PMD_RX_LOG(ERR, "dequeue %d pkt, No.%d pkt seg_num is %d", num, i, seg_num); seg_num = 1; } + if (seg_num > ZXDH_RX_MAX_SEGS) { + PMD_RX_LOG(ERR, "dequeue %d pkt, No.%d pkt seg_num is %d", num, i, seg_num); + nb_enqueued++; + zxdh_discard_rxbuf(vq, rxm); + rxvq->stats.errors++; + continue; + } /* bit[0:6]-pd_len unit:2B */ uint16_t pd_len = header->type_hdr.pd_len << 1; + if (pd_len > ZXDH_PD_HDR_SIZE_MAX || pd_len < ZXDH_PD_HDR_SIZE_MIN) { + PMD_RX_LOG(ERR, "pd_len:%d is invalid", pd_len); + nb_enqueued++; + zxdh_discard_rxbuf(vq, rxm); + rxvq->stats.errors++; + continue; + } /* Private queue only handle type hdr */ hdr_size = pd_len; rxm->data_off = RTE_PKTMBUF_HEADROOM + hdr_size; @@ -639,6 +707,7 @@ uint16_t zxdh_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts, /* Update rte_mbuf according to pi/pd header */ if (zxdh_rx_update_mbuf(rxm, header) < 0) { zxdh_discard_rxbuf(vq, rxm); + rxvq->stats.errors++; continue; } seg_res = seg_num - 1; @@ -661,8 +730,11 @@ uint16_t zxdh_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts, PMD_RX_LOG(ERR, "dropped rcvd_pkt_len %d pktlen %d.", rcvd_pkt_len, rx_pkts[nb_rx]->pkt_len); zxdh_discard_rxbuf(vq, rx_pkts[nb_rx]); + rxvq->stats.errors++; + rxvq->stats.truncated_err++; continue; } + zxdh_update_packet_stats(&rxvq->stats, rx_pkts[nb_rx]); nb_rx++; } } @@ -675,6 +747,7 @@ uint16_t zxdh_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts, if (unlikely(rcv_cnt == 0)) { PMD_RX_LOG(ERR, "No enough segments for packet."); rte_pktmbuf_free(rx_pkts[nb_rx]); + rxvq->stats.errors++; break; } while (extra_idx < rcv_cnt) { @@ -694,11 +767,15 @@ uint16_t zxdh_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts, PMD_RX_LOG(ERR, "dropped rcvd_pkt_len %d pktlen %d.", rcvd_pkt_len, rx_pkts[nb_rx]->pkt_len); zxdh_discard_rxbuf(vq, rx_pkts[nb_rx]); + rxvq->stats.errors++; + rxvq->stats.truncated_err++; continue; } + zxdh_update_packet_stats(&rxvq->stats, rx_pkts[nb_rx]); nb_rx++; } } + rxvq->stats.packets += nb_rx; /* Allocate new mbuf for the used descriptor */ if (likely(!zxdh_queue_full(vq))) { diff --git a/drivers/net/zxdh/zxdh_tables.h b/drivers/net/zxdh/zxdh_tables.h index 7bac39375c..c7da40f294 100644 --- a/drivers/net/zxdh/zxdh_tables.h +++ b/drivers/net/zxdh/zxdh_tables.h @@ -11,6 +11,11 @@ #define ZXDH_PORT_BASE_QID_FLAG 10 #define ZXDH_PORT_ATTR_IS_UP_FLAG 35 +#define ZXDH_MTU_STATS_EGRESS_BASE 0x8481 +#define ZXDH_MTU_STATS_INGRESS_BASE 0x8981 +#define ZXDH_BROAD_STATS_EGRESS_BASE 0xC902 +#define ZXDH_BROAD_STATS_INGRESS_BASE 0xD102 + extern struct zxdh_dtb_shared_data g_dtb_data; struct zxdh_port_attr_table { -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 87119 bytes --] ^ permalink raw reply 
[flat|nested] 225+ messages in thread
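The size-bin accounting added in zxdh_update_packet_stats above uses a count-leading-zeros trick for lengths between 65 and 1023: bin = 32 - clz(len) - 5 maps 65..127 to bin 2, 128..255 to bin 3, and so on, matching the size_65_127/size_128_255/... counters in the xstats tables. A small sketch of the same mapping, with __builtin_clz standing in for rte_clz32:

#include <stdint.h>
#include <stdio.h>

/* Same bin selection as zxdh_update_packet_stats, restructured as a
 * pure function; bin numbers follow zxdh_rxq_stat_strings. */
static unsigned int size_bin(uint32_t s)
{
	if (s < 64)
		return 0;			/* undersize_packets */
	if (s == 64)
		return 1;			/* size_64_packets */
	if (s < 1024)				/* 65..1023 -> bins 2..5 */
		return (sizeof(s) * 8) - __builtin_clz(s) - 5;
	if (s < 1519)
		return 6;			/* size_1024_1518_packets */
	return 7;				/* size_1519_max_packets */
}

int main(void)
{
	static const uint32_t lens[] = {60, 64, 65, 127, 128, 1023, 1024, 1518, 9000};
	unsigned int i;

	for (i = 0; i < sizeof(lens) / sizeof(lens[0]); i++)
		printf("len %u -> bin %u\n", lens[i], size_bin(lens[i]));
	return 0;
}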
* [PATCH v5 15/15] net/zxdh: mtu update ops implementations 2024-12-23 11:02 ` [PATCH v5 00/15] net/zxdh: updated net zxdh driver Junlong Wang ` (13 preceding siblings ...) 2024-12-23 11:02 ` [PATCH v5 14/15] net/zxdh: basic stats ops implementations Junlong Wang @ 2024-12-23 11:02 ` Junlong Wang 2024-12-24 20:30 ` [PATCH v5 00/15] net/zxdh: updated net zxdh driver Stephen Hemminger 2024-12-24 20:47 ` Stephen Hemminger 16 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-23 11:02 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 8134 bytes --] mtu update ops implementations. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- doc/guides/nics/features/zxdh.ini | 1 + doc/guides/nics/zxdh.rst | 2 + drivers/net/zxdh/zxdh_ethdev.c | 5 +++ drivers/net/zxdh/zxdh_ethdev_ops.c | 65 ++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev_ops.h | 3 ++ drivers/net/zxdh/zxdh_tables.c | 42 +++++++++++++++++++ drivers/net/zxdh/zxdh_tables.h | 4 ++ 7 files changed, 122 insertions(+) diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index 98c141cf95..3561e31666 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -24,3 +24,4 @@ RSS reta update = Y Inner RSS = Y Basic stats = Y Stats per queue = Y +MTU update = Y diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index c8a52b587c..58e0c49a2e 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -33,6 +33,8 @@ Features of the ZXDH PMD are: - QINQ stripping and inserting - Receive Side Scaling (RSS) - Port hardware statistics +- MTU update +- Jumbo frames Driver compilation and testing diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index d87ad15824..e992c3f6cf 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -64,6 +64,10 @@ zxdh_dev_infos_get(struct rte_eth_dev *dev, dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_256; dev_info->flow_type_rss_offloads = ZXDH_RSS_HF; + dev_info->max_mtu = ZXDH_MAX_RX_PKTLEN - RTE_ETHER_HDR_LEN - + RTE_VLAN_HLEN - ZXDH_DL_NET_HDR_SIZE; + dev_info->min_mtu = ZXDH_ETHER_MIN_MTU; + dev_info->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_MULTI_SEGS); dev_info->tx_offload_capa |= (RTE_ETH_TX_OFFLOAD_TCP_TSO | RTE_ETH_TX_OFFLOAD_UDP_TSO); @@ -1173,6 +1177,7 @@ static const struct eth_dev_ops zxdh_eth_dev_ops = { .rss_hash_conf_get = zxdh_rss_hash_conf_get, .stats_get = zxdh_dev_stats_get, .stats_reset = zxdh_dev_stats_reset, + .mtu_set = zxdh_dev_mtu_set, }; static int32_t diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.c b/drivers/net/zxdh/zxdh_ethdev_ops.c index 1b219bd26d..495e2432c7 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.c +++ b/drivers/net/zxdh/zxdh_ethdev_ops.c @@ -13,6 +13,7 @@ #include "zxdh_logs.h" #include "zxdh_rxtx.h" #include "zxdh_np.h" +#include "zxdh_queue.h" #define ZXDH_VLAN_FILTER_GROUPS 64 #define ZXDH_INVALID_LOGIC_QID 0xFFFFU @@ -1506,3 +1507,67 @@ int zxdh_dev_stats_reset(struct rte_eth_dev *dev) return 0; } + +int zxdh_dev_mtu_set(struct rte_eth_dev *dev, uint16_t new_mtu) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_panel_table panel = {0}; + struct zxdh_port_attr_table vport_att = {0}; + uint16_t vfid = zxdh_vport_to_vfid(hw->vport); + int ret; + + if (hw->is_pf) { + ret = zxdh_get_panel_attr(dev, &panel); + if (ret != 0) { + PMD_DRV_LOG(ERR, "get_panel_attr ret:%d", ret); + return -1; + } + + ret = zxdh_get_port_attr(vfid, &vport_att); + if 
(ret != 0) { + PMD_DRV_LOG(ERR, + "[vfid:%d] zxdh_dev_mtu, get vport dpp_ret:%d", vfid, ret); + return -1; + } + + panel.mtu = new_mtu; + panel.mtu_enable = 1; + ret = zxdh_set_panel_attr(dev, &panel); + if (ret != 0) { + PMD_DRV_LOG(ERR, "set zxdh_dev_mtu failed, ret:%u", ret); + return ret; + } + + vport_att.mtu_enable = 1; + vport_att.mtu = new_mtu; + ret = zxdh_set_port_attr(vfid, &vport_att); + if (ret != 0) { + PMD_DRV_LOG(ERR, + "[vfid:%d] zxdh_dev_mtu, set vport dpp_ret:%d", vfid, ret); + return ret; + } + } else { + struct zxdh_msg_info msg_info = {0}; + struct zxdh_port_attr_set_msg *attr_msg = &msg_info.data.port_attr_msg; + + zxdh_msg_head_build(hw, ZXDH_PORT_ATTRS_SET, &msg_info); + attr_msg->mode = ZXDH_PORT_MTU_EN_FLAG; + attr_msg->value = 1; + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d ", + hw->vport.vport, ZXDH_PORT_MTU_EN_FLAG); + return ret; + } + attr_msg->mode = ZXDH_PORT_MTU_FLAG; + attr_msg->value = new_mtu; + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d ", + hw->vport.vport, ZXDH_PORT_MTU_FLAG); + return ret; + } + } + dev->data->mtu = new_mtu; + return 0; +} diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.h b/drivers/net/zxdh/zxdh_ethdev_ops.h index f35378e691..fac6cbd5e8 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.h +++ b/drivers/net/zxdh/zxdh_ethdev_ops.h @@ -26,6 +26,8 @@ #define ZXDH_HF_MAC_VLAN_ETH ZXDH_ETH_RSS_L2 #define ZXDH_RSS_HF ((ZXDH_HF_MAC_VLAN_ETH | ZXDH_HF_F3_ETH | ZXDH_HF_F5_ETH)) +#define ZXDH_ETHER_MIN_MTU 68 + struct zxdh_hw_vqm_stats { uint64_t rx_total; uint64_t tx_total; @@ -73,5 +75,6 @@ int zxdh_rss_hash_conf_get(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss int zxdh_rss_configure(struct rte_eth_dev *dev); int zxdh_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats); int zxdh_dev_stats_reset(struct rte_eth_dev *dev); +int zxdh_dev_mtu_set(struct rte_eth_dev *dev, uint16_t new_mtu); #endif /* ZXDH_ETHDEV_OPS_H */ diff --git a/drivers/net/zxdh/zxdh_tables.c b/drivers/net/zxdh/zxdh_tables.c index 2939d9ae8b..d6cbde3a21 100644 --- a/drivers/net/zxdh/zxdh_tables.c +++ b/drivers/net/zxdh/zxdh_tables.c @@ -150,6 +150,48 @@ int zxdh_panel_table_init(struct rte_eth_dev *dev) return ret; } +int zxdh_get_panel_attr(struct rte_eth_dev *dev, struct zxdh_panel_table *panel_attr) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint8_t index_phy_port = hw->phyport; + + ZXDH_DTB_ERAM_ENTRY_INFO_T panel_entry = { + .index = index_phy_port, + .p_data = (uint32_t *)panel_attr + }; + ZXDH_DTB_USER_ENTRY_T entry = { + .sdt_no = ZXDH_SDT_PANEL_ATT_TABLE, + .p_entry_data = (void *)&panel_entry + }; + int ret = zxdh_np_dtb_table_entry_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, &entry, 1); + + if (ret != 0) + PMD_DRV_LOG(ERR, "get panel table failed"); + + return ret; +} + +int zxdh_set_panel_attr(struct rte_eth_dev *dev, struct zxdh_panel_table *panel_attr) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint8_t index_phy_port = hw->phyport; + + ZXDH_DTB_ERAM_ENTRY_INFO_T panel_entry = { + .index = index_phy_port, + .p_data = (uint32_t *)panel_attr + }; + ZXDH_DTB_USER_ENTRY_T entry = { + .sdt_no = ZXDH_SDT_PANEL_ATT_TABLE, + .p_entry_data = (void *)&panel_entry + }; + int ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, g_dtb_data.queueid, 1, &entry); + + if (ret) + PMD_DRV_LOG(ERR, "Insert panel table failed"); + + return ret; 
+} + int zxdh_get_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr) { diff --git a/drivers/net/zxdh/zxdh_tables.h b/drivers/net/zxdh/zxdh_tables.h index c7da40f294..adedf3d0d3 100644 --- a/drivers/net/zxdh/zxdh_tables.h +++ b/drivers/net/zxdh/zxdh_tables.h @@ -8,8 +8,10 @@ #include <stdint.h> #define ZXDH_DEVICE_NO 0 +#define ZXDH_PORT_MTU_FLAG 9 #define ZXDH_PORT_BASE_QID_FLAG 10 #define ZXDH_PORT_ATTR_IS_UP_FLAG 35 +#define ZXDH_PORT_MTU_EN_FLAG 42 #define ZXDH_MTU_STATS_EGRESS_BASE 0x8481 #define ZXDH_MTU_STATS_INGRESS_BASE 0x8981 @@ -223,5 +225,7 @@ int zxdh_vlan_filter_table_init(struct rte_eth_dev *dev); int zxdh_vlan_filter_table_set(uint16_t vport, uint16_t vlan_id, uint8_t enable); int zxdh_rss_table_set(uint16_t vport, struct zxdh_rss_reta *rss_reta); int zxdh_rss_table_get(uint16_t vport, struct zxdh_rss_reta *rss_reta); +int zxdh_get_panel_attr(struct rte_eth_dev *dev, struct zxdh_panel_table *panel_attr); +int zxdh_set_panel_attr(struct rte_eth_dev *dev, struct zxdh_panel_table *panel_attr); #endif /* ZXDH_TABLES_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 17609 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
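How this op is reached from an application may be useful context: ethdev dispatches rte_eth_dev_set_mtu() to the PMD's mtu_set callback, bounded by the min_mtu/max_mtu values the driver reports through dev_infos_get. A minimal application-side sketch (set_port_mtu is a hypothetical helper; rte_eth_dev_info_get() and rte_eth_dev_set_mtu() are the standard ethdev calls):

#include <rte_ethdev.h>

/* Hypothetical helper: clamp a requested MTU to the range the PMD
 * reports, then apply it; ethdev routes the call to the driver's
 * .mtu_set op (zxdh_dev_mtu_set in the patch above). */
static int set_port_mtu(uint16_t port_id, uint16_t requested_mtu)
{
        struct rte_eth_dev_info dev_info;
        int ret = rte_eth_dev_info_get(port_id, &dev_info);

        if (ret != 0)
                return ret;
        if (requested_mtu < dev_info.min_mtu)
                requested_mtu = dev_info.min_mtu;
        if (requested_mtu > dev_info.max_mtu)
                requested_mtu = dev_info.max_mtu;
        return rte_eth_dev_set_mtu(port_id, requested_mtu);
}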
* Re: [PATCH v5 00/15] net/zxdh: updated net zxdh driver 2024-12-23 11:02 ` [PATCH v5 00/15] net/zxdh: updated net zxdh driver Junlong Wang ` (14 preceding siblings ...) 2024-12-23 11:02 ` [PATCH v5 15/15] net/zxdh: mtu update " Junlong Wang @ 2024-12-24 20:30 ` Stephen Hemminger 2024-12-24 20:47 ` Stephen Hemminger 16 siblings, 0 replies; 225+ messages in thread From: Stephen Hemminger @ 2024-12-24 20:30 UTC (permalink / raw) To: Junlong Wang; +Cc: dev On Mon, 23 Dec 2024 19:02:34 +0800 Junlong Wang <wang.junlong1@zte.com.cn> wrote: > V5: > - Simplify the notify_data part in the zxdh_notify_queue function. > - Replace rte_zmalloc with rte_calloc in the rss_reta_update function. > - Remove unnecessary check in mtu_set function. > > V4: > - resolved ci compile issues. > > V3: > - use rte_zmalloc and rte_calloc to avoid memset. > - remove unnecessary initialization, which first usage will set. > - adjust some function which is always return 0, changed to void > and skip the ASSERTION later. > - resolved some WARNING:MACRO_ARG_UNUSED issues. > - resolved some other issues. > > V2: > - resolve code style and github-robot build issue. > > V1: > - updated net zxdh driver > provided insert/delete/get table code funcs. > provided link/mac/vlan/promiscuous/rss/mtu ops. > > Junlong Wang (15): > net/zxdh: zxdh np init implementation > net/zxdh: zxdh np uninit implementation > net/zxdh: port tables init implementations > net/zxdh: port tables unint implementations > net/zxdh: rx/tx queue setup and intr enable > net/zxdh: dev start/stop ops implementations > net/zxdh: provided dev simple tx implementations > net/zxdh: provided dev simple rx implementations > net/zxdh: link info update, set link up/down > net/zxdh: mac set/add/remove ops implementations > net/zxdh: promisc/allmulti ops implementations > net/zxdh: vlan filter/ offload ops implementations > net/zxdh: rss hash config/update, reta update/get > net/zxdh: basic stats ops implementations > net/zxdh: mtu update ops implementations > > doc/guides/nics/features/zxdh.ini | 18 + > doc/guides/nics/zxdh.rst | 17 + > drivers/net/zxdh/meson.build | 4 + > drivers/net/zxdh/zxdh_common.c | 24 + > drivers/net/zxdh/zxdh_common.h | 1 + > drivers/net/zxdh/zxdh_ethdev.c | 595 +++++++- > drivers/net/zxdh/zxdh_ethdev.h | 40 + > drivers/net/zxdh/zxdh_ethdev_ops.c | 1573 +++++++++++++++++++++ > drivers/net/zxdh/zxdh_ethdev_ops.h | 80 ++ > drivers/net/zxdh/zxdh_msg.c | 169 +++ > drivers/net/zxdh/zxdh_msg.h | 232 ++++ > drivers/net/zxdh/zxdh_np.c | 2060 ++++++++++++++++++++++++++++ > drivers/net/zxdh/zxdh_np.h | 579 ++++++++ > drivers/net/zxdh/zxdh_pci.c | 23 +- > drivers/net/zxdh/zxdh_pci.h | 9 +- > drivers/net/zxdh/zxdh_queue.c | 242 +++- > drivers/net/zxdh/zxdh_queue.h | 144 +- > drivers/net/zxdh/zxdh_rxtx.c | 804 +++++++++++ > drivers/net/zxdh/zxdh_rxtx.h | 20 +- > drivers/net/zxdh/zxdh_tables.c | 794 +++++++++++ > drivers/net/zxdh/zxdh_tables.h | 231 ++++ > 21 files changed, 7613 insertions(+), 46 deletions(-) > create mode 100644 drivers/net/zxdh/zxdh_ethdev_ops.c > create mode 100644 drivers/net/zxdh/zxdh_ethdev_ops.h > create mode 100644 drivers/net/zxdh/zxdh_np.c > create mode 100644 drivers/net/zxdh/zxdh_np.h > create mode 100644 drivers/net/zxdh/zxdh_rxtx.c > create mode 100644 drivers/net/zxdh/zxdh_tables.c > create mode 100644 drivers/net/zxdh/zxdh_tables.h > This looks good; I saw a couple of things that could be addressed later. First, the log messages are a little inconsistent: some have blanks before or after the message, others do not.
And some end with a period but most don't. Easy to fix later. The other minor thing is that there are still some calls to free functions that first check for NULL; I can add this follow-on patch after merging to address it. diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index e992c3f6cf..942435d318 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -944,19 +944,15 @@ zxdh_np_dtb_data_res_free(struct zxdh_hw *hw) if (ret) PMD_DRV_LOG(ERR, "%s dpp_np_online_uninstall failed", dev->data->name); - if (g_dtb_data.dtb_table_conf_mz) - rte_memzone_free(g_dtb_data.dtb_table_conf_mz); + rte_memzone_free(g_dtb_data.dtb_table_conf_mz); + g_dtb_data.dtb_table_conf_mz = NULL; - if (g_dtb_data.dtb_table_dump_mz) { - rte_memzone_free(g_dtb_data.dtb_table_dump_mz); - g_dtb_data.dtb_table_dump_mz = NULL; - } + rte_memzone_free(g_dtb_data.dtb_table_dump_mz); + g_dtb_data.dtb_table_dump_mz = NULL; for (i = 0; i < ZXDH_MAX_BASE_DTB_TABLE_COUNT; i++) { - if (g_dtb_data.dtb_table_bulk_dump_mz[i]) { - rte_memzone_free(g_dtb_data.dtb_table_bulk_dump_mz[i]); - g_dtb_data.dtb_table_bulk_dump_mz[i] = NULL; - } + rte_memzone_free(g_dtb_data.dtb_table_bulk_dump_mz[i]); + g_dtb_data.dtb_table_bulk_dump_mz[i] = NULL; } g_dtb_data.init_done = 0; g_dtb_data.bind_device = NULL; @@ -1053,10 +1049,8 @@ zxdh_dev_close(struct rte_eth_dev *dev) zxdh_bar_msg_chan_exit(); - if (dev->data->mac_addrs != NULL) { - rte_free(dev->data->mac_addrs); - dev->data->mac_addrs = NULL; - } + rte_free(dev->data->mac_addrs); + dev->data->mac_addrs = NULL; return ret; } diff --git a/drivers/net/zxdh/zxdh_np.c b/drivers/net/zxdh/zxdh_np.c index 42679635f4..11ad92e78f 100644 --- a/drivers/net/zxdh/zxdh_np.c +++ b/drivers/net/zxdh/zxdh_np.c @@ -873,8 +873,7 @@ zxdh_np_sdt_mgr_destroy(uint32_t dev_id) p_sdt_tbl_temp = ZXDH_SDT_SOFT_TBL_GET(dev_id); p_sdt_mgr = ZXDH_SDT_MGR_PTR_GET(); - if (p_sdt_tbl_temp != NULL) - free(p_sdt_tbl_temp); + free(p_sdt_tbl_temp); ZXDH_SDT_SOFT_TBL_GET(dev_id) = NULL; ^ permalink raw reply [flat|nested] 225+ messages in thread
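A note on the convention applied in the diff above: rte_free(), like libc free(), is documented as a no-op on NULL, and rte_memzone_free() rejects NULL with an error code instead of crashing, so the caller-side guards are redundant. Clearing the pointer after the call keeps repeated teardown safe. A minimal sketch of the idiom (hypothetical helper and struct, not part of the driver):

#include <rte_malloc.h>

struct example_ctx {
        void *rxbuf;
        void *txbuf;
};

/* Hypothetical helper: the NULL check lives inside rte_free()
 * itself; resetting the pointer makes a second teardown harmless. */
#define FREE_AND_CLEAR(p) do { rte_free(p); (p) = NULL; } while (0)

static void example_teardown(struct example_ctx *c)
{
        FREE_AND_CLEAR(c->rxbuf);
        FREE_AND_CLEAR(c->txbuf);
}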
* Re: [PATCH v5 00/15] net/zxdh: updated net zxdh driver 2024-12-23 11:02 ` [PATCH v5 00/15] net/zxdh: updated net zxdh driver Junlong Wang ` (15 preceding siblings ...) 2024-12-24 20:30 ` [PATCH v5 00/15] net/zxdh: updated net zxdh driver Stephen Hemminger @ 2024-12-24 20:47 ` Stephen Hemminger 16 siblings, 0 replies; 225+ messages in thread From: Stephen Hemminger @ 2024-12-24 20:47 UTC (permalink / raw) To: Junlong Wang; +Cc: dev On Mon, 23 Dec 2024 19:02:34 +0800 Junlong Wang <wang.junlong1@zte.com.cn> wrote: > V5: > - Simplify the notify_data part in the zxdh_notify_queue function. > - Replace rte_zmalloc with rte_calloc in the rss_reta_update function. > - Remove unnecessary check in mtu_set function. > > V4: > - resolved ci compile issues. > > V3: > - use rte_zmalloc and rte_calloc to avoid memset. > - remove unnecessary initialization, which first usage will set. > - adjust some function which is always return 0, changed to void > and skip the ASSERTION later. > - resolved some WARNING:MACRO_ARG_UNUSED issues. > - resolved some other issues. > > V2: > - resolve code style and github-robot build issue. > > V1: > - updated net zxdh driver > provided insert/delete/get table code funcs. > provided link/mac/vlan/promiscuous/rss/mtu ops. > > Junlong Wang (15): > net/zxdh: zxdh np init implementation > net/zxdh: zxdh np uninit implementation > net/zxdh: port tables init implementations > net/zxdh: port tables unint implementations > net/zxdh: rx/tx queue setup and intr enable > net/zxdh: dev start/stop ops implementations > net/zxdh: provided dev simple tx implementations > net/zxdh: provided dev simple rx implementations > net/zxdh: link info update, set link up/down > net/zxdh: mac set/add/remove ops implementations > net/zxdh: promisc/allmulti ops implementations > net/zxdh: vlan filter/ offload ops implementations > net/zxdh: rss hash config/update, reta update/get > net/zxdh: basic stats ops implementations > net/zxdh: mtu update ops implementations > > doc/guides/nics/features/zxdh.ini | 18 + > doc/guides/nics/zxdh.rst | 17 + > drivers/net/zxdh/meson.build | 4 + > drivers/net/zxdh/zxdh_common.c | 24 + > drivers/net/zxdh/zxdh_common.h | 1 + > drivers/net/zxdh/zxdh_ethdev.c | 595 +++++++- > drivers/net/zxdh/zxdh_ethdev.h | 40 + > drivers/net/zxdh/zxdh_ethdev_ops.c | 1573 +++++++++++++++++++++ > drivers/net/zxdh/zxdh_ethdev_ops.h | 80 ++ > drivers/net/zxdh/zxdh_msg.c | 169 +++ > drivers/net/zxdh/zxdh_msg.h | 232 ++++ > drivers/net/zxdh/zxdh_np.c | 2060 ++++++++++++++++++++++++++++ > drivers/net/zxdh/zxdh_np.h | 579 ++++++++ > drivers/net/zxdh/zxdh_pci.c | 23 +- > drivers/net/zxdh/zxdh_pci.h | 9 +- > drivers/net/zxdh/zxdh_queue.c | 242 +++- > drivers/net/zxdh/zxdh_queue.h | 144 +- > drivers/net/zxdh/zxdh_rxtx.c | 804 +++++++++++ > drivers/net/zxdh/zxdh_rxtx.h | 20 +- > drivers/net/zxdh/zxdh_tables.c | 794 +++++++++++ > drivers/net/zxdh/zxdh_tables.h | 231 ++++ > 21 files changed, 7613 insertions(+), 46 deletions(-) > create mode 100644 drivers/net/zxdh/zxdh_ethdev_ops.c > create mode 100644 drivers/net/zxdh/zxdh_ethdev_ops.h > create mode 100644 drivers/net/zxdh/zxdh_np.c > create mode 100644 drivers/net/zxdh/zxdh_np.h > create mode 100644 drivers/net/zxdh/zxdh_rxtx.c > create mode 100644 drivers/net/zxdh/zxdh_tables.c > create mode 100644 drivers/net/zxdh/zxdh_tables.h > I did a build with -Waddress-of-packed-member enabled and it reports some possible issues with the driver. The virtqueue structure is marked as packed, but the embedded virtnet_rx structure is declared cache-aligned.
One solution would be to be more selective in using __rte_packed, and only apply it where needed because of mixed members or requirements of the hardware. Full warnings: [1442/2983] Compiling C object drivers/libtmp_rte_net_zxdh.a.p/net_zxdh_zxdh_queue.c.o ../drivers/net/zxdh/zxdh_queue.c: In function ‘zxdh_dev_rx_queue_setup’: ../drivers/net/zxdh/zxdh_queue.c:190:40: warning: taking address of packed member of ‘struct zxdh_virtqueue’ may result in an unaligned pointer value [-Waddress-of-packed-member] 190 | struct zxdh_virtnet_rx *rxvq = &vq->rxq; | ^~~~~~~~ ../drivers/net/zxdh/zxdh_queue.c: In function ‘zxdh_dev_tx_queue_setup’: ../drivers/net/zxdh/zxdh_queue.c:234:16: warning: taking address of packed member of ‘struct zxdh_virtqueue’ may result in an unaligned pointer value [-Waddress-of-packed-member] 234 | txvq = &vq->txq; | ^~~~~~~~ ../drivers/net/zxdh/zxdh_queue.c: In function ‘zxdh_dev_rx_queue_setup_finish’: ../drivers/net/zxdh/zxdh_queue.c:314:40: warning: taking address of packed member of ‘struct zxdh_virtqueue’ may result in an unaligned pointer value [-Waddress-of-packed-member] 314 | struct zxdh_virtnet_rx *rxvq = &vq->rxq; | ^~~~~~~~ [1449/2983] Compiling C object drivers/libtmp_rte_net_zxdh.a.p/net_zxdh_zxdh_ethdev_ops.c.o ../drivers/net/zxdh/zxdh_ethdev_ops.c: In function ‘zxdh_dev_rss_reta_update’: ../drivers/net/zxdh/zxdh_ethdev_ops.c:921:59: warning: taking address of packed member of ‘union <anonymous>’ may result in an unaligned pointer value [-Waddress-of-packed-member] 921 | ret = zxdh_rss_table_set(hw->vport.vport, &msg.data.rss_reta); | ^~~~~~~~~~~~~~~~~~ ../drivers/net/zxdh/zxdh_ethdev_ops.c: In function ‘zxdh_dev_rss_reta_query’: ../drivers/net/zxdh/zxdh_ethdev_ops.c:979:59: warning: taking address of packed member of ‘union <anonymous>’ may result in an unaligned pointer value [-Waddress-of-packed-member] 979 | ret = zxdh_rss_table_get(hw->vport.vport, &reply_msg.reply_body.rss_reta); | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ../drivers/net/zxdh/zxdh_ethdev_ops.c:993:44: warning: taking address of packed member of ‘union <anonymous>’ may result in an unaligned pointer value [-Waddress-of-packed-member] 993 | struct zxdh_rss_reta *reta_table = &reply_msg.reply_body.rss_reta; | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ../drivers/net/zxdh/zxdh_ethdev_ops.c: In function ‘zxdh_rss_configure’: ../drivers/net/zxdh/zxdh_ethdev_ops.c:1247:59: warning: taking address of packed member of ‘union <anonymous>’ may result in an unaligned pointer value [-Waddress-of-packed-member] 1247 | ret = zxdh_rss_table_set(hw->vport.vport, &msg.data.rss_reta); | ^~~~~~~~~~~~~~~~~~ ^ permalink raw reply [flat|nested] 225+ messages in thread
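The warning class above is easy to reproduce outside the driver; a standalone sketch with hypothetical types (recent GCC enables -Waddress-of-packed-member by default):

#include <stdint.h>

struct inner {
        uint64_t counter;       /* naturally 8-byte aligned */
};

struct outer {
        uint8_t tag;            /* one-byte member shifts what follows */
        struct inner in;        /* lands at offset 1 once packed */
} __attribute__((packed));

uint64_t *counter_ptr(struct outer *o)
{
        /* warning: taking address of packed member of 'struct outer'
         * may result in an unaligned pointer value */
        return &o->in.counter;
}

Dropping the packed attribute where the layout does not actually need it, as suggested above, restores natural alignment and silences the warning.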
* [PATCH v6 00/15] net/zxdh: updated net zxdh driver 2024-12-10 5:53 ` [PATCH v2 01/15] net/zxdh: zxdh np init implementation Junlong Wang ` (5 preceding siblings ...) 2024-12-23 11:02 ` [PATCH v5 00/15] net/zxdh: updated net zxdh driver Junlong Wang @ 2024-12-26 3:37 ` Junlong Wang 2024-12-26 3:37 ` [PATCH v6 01/15] net/zxdh: zxdh np init implementation Junlong Wang ` (14 more replies) 6 siblings, 15 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-26 3:37 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 3187 bytes --] V6: - Remove unnecessary __rte_packed from the virtqueue structure and others. - Remove some blanks before or after log messages, and remove trailing periods from some log messages. V5: - Simplify the notify_data part in the zxdh_notify_queue function. - Replace rte_zmalloc with rte_calloc in the rss_reta_update function. - Remove unnecessary check in mtu_set function. V4: - resolved ci compile issues. V3: - use rte_zmalloc and rte_calloc to avoid memset. - remove unnecessary initialization, which first usage will set. - adjust some function which is always return 0, changed to void and skip the ASSERTION later. - resolved some WARNING:MACRO_ARG_UNUSED issues. - resolved some other issues. V2: - resolve code style and github-robot build issue. V1: - updated net zxdh driver provided insert/delete/get table code funcs. provided link/mac/vlan/promiscuous/rss/mtu ops. Junlong Wang (15): net/zxdh: zxdh np init implementation net/zxdh: zxdh np uninit implementation net/zxdh: port tables init implementations net/zxdh: port tables unint implementations net/zxdh: rx/tx queue setup and intr enable net/zxdh: dev start/stop ops implementations net/zxdh: provided dev simple tx implementations net/zxdh: provided dev simple rx implementations net/zxdh: link info update, set link up/down net/zxdh: mac set/add/remove ops implementations net/zxdh: promisc/allmulti ops implementations net/zxdh: vlan filter/ offload ops implementations net/zxdh: rss hash config/update, reta update/get net/zxdh: basic stats ops implementations net/zxdh: mtu update ops implementations doc/guides/nics/features/zxdh.ini | 18 + doc/guides/nics/zxdh.rst | 17 + drivers/net/zxdh/meson.build | 4 + drivers/net/zxdh/zxdh_common.c | 28 +- drivers/net/zxdh/zxdh_common.h | 1 + drivers/net/zxdh/zxdh_ethdev.c | 603 +++++++- drivers/net/zxdh/zxdh_ethdev.h | 40 + drivers/net/zxdh/zxdh_ethdev_ops.c | 1573 +++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev_ops.h | 80 ++ drivers/net/zxdh/zxdh_msg.c | 205 ++- drivers/net/zxdh/zxdh_msg.h | 232 ++++ drivers/net/zxdh/zxdh_np.c | 2060 ++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_np.h | 579 ++++++++ drivers/net/zxdh/zxdh_pci.c | 27 +- drivers/net/zxdh/zxdh_pci.h | 9 +- drivers/net/zxdh/zxdh_queue.c | 242 +++- drivers/net/zxdh/zxdh_queue.h | 164 ++- drivers/net/zxdh/zxdh_rxtx.c | 804 +++++++++++ drivers/net/zxdh/zxdh_rxtx.h | 20 +- drivers/net/zxdh/zxdh_tables.c | 794 +++++++++++ drivers/net/zxdh/zxdh_tables.h | 231 ++++ 21 files changed, 7649 insertions(+), 82 deletions(-) create mode 100644 drivers/net/zxdh/zxdh_ethdev_ops.c create mode 100644 drivers/net/zxdh/zxdh_ethdev_ops.h create mode 100644 drivers/net/zxdh/zxdh_np.c create mode 100644 drivers/net/zxdh/zxdh_np.h create mode 100644 drivers/net/zxdh/zxdh_rxtx.c create mode 100644 drivers/net/zxdh/zxdh_tables.c create mode 100644 drivers/net/zxdh/zxdh_tables.h -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 6046 bytes --] ^ permalink raw reply [flat|nested]
225+ messages in thread
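The first v6 item resolves the packed-member warnings by relaxing the packing. Where packing genuinely must stay, for example a wire or hardware layout, the usual alternative is to copy the member out rather than form a pointer to it; a sketch with hypothetical types:

#include <stdint.h>
#include <string.h>

struct wire_hdr {
        uint8_t flags;
        uint32_t seq;           /* unaligned inside the packed layout */
} __attribute__((packed));

static uint32_t read_seq(const struct wire_hdr *h)
{
        uint32_t seq;

        /* memcpy() carries no alignment requirement, so this avoids
         * both the warning and any unaligned access. */
        memcpy(&seq, &h->seq, sizeof(seq));
        return seq;
}

That approach does not fit the cases flagged in the review, since the embedded queue structures are used in place, which is why removing __rte_packed was the appropriate fix here.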
* [PATCH v6 01/15] net/zxdh: zxdh np init implementation 2024-12-26 3:37 ` [PATCH v6 " Junlong Wang @ 2024-12-26 3:37 ` Junlong Wang 2024-12-26 3:37 ` [PATCH v6 02/15] net/zxdh: zxdh np uninit implementation Junlong Wang ` (13 subsequent siblings) 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-26 3:37 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 36779 bytes --] NP (network processor): initialize resources in the host, and initialize a channel for table insert/get/delete operations. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/meson.build | 1 + drivers/net/zxdh/zxdh_ethdev.c | 234 +++++++++++++++++++++-- drivers/net/zxdh/zxdh_ethdev.h | 30 +++ drivers/net/zxdh/zxdh_msg.c | 44 +++++ drivers/net/zxdh/zxdh_msg.h | 37 ++++ drivers/net/zxdh/zxdh_np.c | 340 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_np.h | 198 +++++++++++++++++++ drivers/net/zxdh/zxdh_pci.c | 2 +- drivers/net/zxdh/zxdh_pci.h | 6 +- drivers/net/zxdh/zxdh_queue.c | 2 +- drivers/net/zxdh/zxdh_queue.h | 14 +- 11 files changed, 875 insertions(+), 33 deletions(-) create mode 100644 drivers/net/zxdh/zxdh_np.c create mode 100644 drivers/net/zxdh/zxdh_np.h diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build index c9960f4c73..ab24a3145c 100644 --- a/drivers/net/zxdh/meson.build +++ b/drivers/net/zxdh/meson.build @@ -19,4 +19,5 @@ sources = files( 'zxdh_msg.c', 'zxdh_pci.c', 'zxdh_queue.c', + 'zxdh_np.c', ) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index c786198535..b8f4415e00 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -5,6 +5,7 @@ #include <ethdev_pci.h> #include <bus_pci_driver.h> #include <rte_ethdev.h> +#include <rte_malloc.h> #include "zxdh_ethdev.h" #include "zxdh_logs.h" @@ -12,8 +13,15 @@ #include "zxdh_msg.h" #include "zxdh_common.h" #include "zxdh_queue.h" +#include "zxdh_np.h" struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; +struct zxdh_shared_data *zxdh_shared_data; +const char *ZXDH_PMD_SHARED_DATA_MZ = "zxdh_pmd_shared_data"; +rte_spinlock_t zxdh_shared_data_lock = RTE_SPINLOCK_INITIALIZER; +struct zxdh_dtb_shared_data g_dtb_data; + +#define ZXDH_INVALID_DTBQUE 0xFFFF uint16_t zxdh_vport_to_vfid(union zxdh_virport_num v) @@ -406,14 +414,14 @@ zxdh_features_update(struct zxdh_hw *hw, ZXDH_VTPCI_OPS(hw)->set_features(hw, req_features); if ((rx_offloads & (RTE_ETH_TX_OFFLOAD_UDP_CKSUM | RTE_ETH_TX_OFFLOAD_TCP_CKSUM)) && - !vtpci_with_feature(hw, ZXDH_NET_F_GUEST_CSUM)) { + !zxdh_pci_with_feature(hw, ZXDH_NET_F_GUEST_CSUM)) { PMD_DRV_LOG(ERR, "rx checksum not available on this host"); return -ENOTSUP; } if ((rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) && - (!vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO4) || - !vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO6))) { + (!zxdh_pci_with_feature(hw, ZXDH_NET_F_GUEST_TSO4) || + !zxdh_pci_with_feature(hw, ZXDH_NET_F_GUEST_TSO6))) { PMD_DRV_LOG(ERR, "Large Receive Offload not available on this host"); return -ENOTSUP; } @@ -421,20 +429,20 @@ } static bool -rx_offload_enabled(struct zxdh_hw *hw) +zxdh_rx_offload_enabled(struct zxdh_hw *hw) { - return vtpci_with_feature(hw, ZXDH_NET_F_GUEST_CSUM) || - vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO4) || - vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO6); + return zxdh_pci_with_feature(hw, ZXDH_NET_F_GUEST_CSUM) || + zxdh_pci_with_feature(hw, ZXDH_NET_F_GUEST_TSO4) || +
zxdh_pci_with_feature(hw, ZXDH_NET_F_GUEST_TSO6); } static bool -tx_offload_enabled(struct zxdh_hw *hw) +zxdh_tx_offload_enabled(struct zxdh_hw *hw) { - return vtpci_with_feature(hw, ZXDH_NET_F_CSUM) || - vtpci_with_feature(hw, ZXDH_NET_F_HOST_TSO4) || - vtpci_with_feature(hw, ZXDH_NET_F_HOST_TSO6) || - vtpci_with_feature(hw, ZXDH_NET_F_HOST_UFO); + return zxdh_pci_with_feature(hw, ZXDH_NET_F_CSUM) || + zxdh_pci_with_feature(hw, ZXDH_NET_F_HOST_TSO4) || + zxdh_pci_with_feature(hw, ZXDH_NET_F_HOST_TSO6) || + zxdh_pci_with_feature(hw, ZXDH_NET_F_HOST_UFO); } static void @@ -466,7 +474,7 @@ zxdh_dev_free_mbufs(struct rte_eth_dev *dev) continue; PMD_DRV_LOG(DEBUG, "Before freeing %s[%d] used and unused buf", type, i); - while ((buf = zxdh_virtqueue_detach_unused(vq)) != NULL) + while ((buf = zxdh_queue_detach_unused(vq)) != NULL) rte_pktmbuf_free(buf); } } @@ -550,9 +558,9 @@ zxdh_init_vring(struct zxdh_virtqueue *vq) vq->vq_desc_tail_idx = (uint16_t)(vq->vq_nentries - 1); vq->vq_free_cnt = vq->vq_nentries; memset(vq->vq_descx, 0, sizeof(struct zxdh_vq_desc_extra) * vq->vq_nentries); - vring_init_packed(&vq->vq_packed.ring, ring_mem, ZXDH_PCI_VRING_ALIGN, size); - vring_desc_init_packed(vq, size); - virtqueue_disable_intr(vq); + zxdh_vring_init_packed(&vq->vq_packed.ring, ring_mem, ZXDH_PCI_VRING_ALIGN, size); + zxdh_vring_desc_init_packed(vq, size); + zxdh_queue_disable_intr(vq); } static int32_t @@ -621,7 +629,7 @@ zxdh_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_logic_qidx) /* * Reserve a memzone for vring elements */ - size = vring_size(hw, vq_size, ZXDH_PCI_VRING_ALIGN); + size = zxdh_vring_size(hw, vq_size, ZXDH_PCI_VRING_ALIGN); vq->vq_ring_size = RTE_ALIGN_CEIL(size, ZXDH_PCI_VRING_ALIGN); PMD_DRV_LOG(DEBUG, "vring_size: %d, rounded_vring_size: %d", size, vq->vq_ring_size); @@ -694,7 +702,8 @@ zxdh_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_logic_qidx) /* first indirect descriptor is always the tx header */ struct zxdh_vring_packed_desc *start_dp = txr[i].tx_packed_indir; - vring_desc_init_indirect_packed(start_dp, RTE_DIM(txr[i].tx_packed_indir)); + zxdh_vring_desc_init_indirect_packed(start_dp, + RTE_DIM(txr[i].tx_packed_indir)); start_dp->addr = txvq->zxdh_net_hdr_mem + i * sizeof(*txr) + offsetof(struct zxdh_tx_region, tx_hdr); /* length will be updated to actual pi hdr size when xmit pkt */ @@ -792,8 +801,8 @@ zxdh_dev_configure(struct rte_eth_dev *dev) } } - hw->has_tx_offload = tx_offload_enabled(hw); - hw->has_rx_offload = rx_offload_enabled(hw); + hw->has_tx_offload = zxdh_tx_offload_enabled(hw); + hw->has_rx_offload = zxdh_rx_offload_enabled(hw); nr_vq = dev->data->nb_rx_queues + dev->data->nb_tx_queues; if (nr_vq == hw->queue_num) @@ -881,7 +890,7 @@ zxdh_init_device(struct rte_eth_dev *eth_dev) rte_ether_addr_copy((struct rte_ether_addr *)hw->mac_addr, &eth_dev->data->mac_addrs[0]); /* If host does not support both status and MSI-X then disable LSC */ - if (vtpci_with_feature(hw, ZXDH_NET_F_STATUS) && hw->use_msix != ZXDH_MSIX_NONE) + if (zxdh_pci_with_feature(hw, ZXDH_NET_F_STATUS) && hw->use_msix != ZXDH_MSIX_NONE) eth_dev->data->dev_flags |= RTE_ETH_DEV_INTR_LSC; else eth_dev->data->dev_flags &= ~RTE_ETH_DEV_INTR_LSC; @@ -913,6 +922,181 @@ zxdh_agent_comm(struct rte_eth_dev *eth_dev, struct zxdh_hw *hw) return 0; } +static int +zxdh_np_dtb_res_init(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_bar_offset_params param = {0}; + struct zxdh_bar_offset_res res = {0}; + int ret = 0; + + if (g_dtb_data.init_done) { +
PMD_DRV_LOG(DEBUG, "DTB res already init done, dev %s no need init", + dev->device->name); + return 0; + } + g_dtb_data.queueid = ZXDH_INVALID_DTBQUE; + g_dtb_data.bind_device = dev; + g_dtb_data.dev_refcnt++; + g_dtb_data.init_done = 1; + + ZXDH_DEV_INIT_CTRL_T *dpp_ctrl = rte_zmalloc(NULL, sizeof(*dpp_ctrl) + + sizeof(ZXDH_DTB_ADDR_INFO_T) * 256, 0); + if (dpp_ctrl == NULL) { + PMD_DRV_LOG(ERR, "dev %s cannot allocate memory for dpp_ctrl", dev->device->name); + ret = -ENOMEM; + goto free_res; + } + dpp_ctrl->queue_id = 0xff; + dpp_ctrl->vport = hw->vport.vport; + dpp_ctrl->vector = ZXDH_MSIX_INTR_DTB_VEC; + strlcpy(dpp_ctrl->port_name, dev->device->name, sizeof(dpp_ctrl->port_name)); + dpp_ctrl->pcie_vir_addr = (uint32_t)hw->bar_addr[0]; + + param.pcie_id = hw->pcie_id; + param.virt_addr = hw->bar_addr[0] + ZXDH_CTRLCH_OFFSET; + param.type = ZXDH_URI_NP; + + ret = zxdh_get_bar_offset(&param, &res); + if (ret) { + PMD_DRV_LOG(ERR, "dev %s get npbar offset failed", dev->device->name); + goto free_res; + } + dpp_ctrl->np_bar_len = res.bar_length; + dpp_ctrl->np_bar_offset = res.bar_offset; + + if (!g_dtb_data.dtb_table_conf_mz) { + const struct rte_memzone *conf_mz = rte_memzone_reserve_aligned("zxdh_dtb_table_conf_mz", + ZXDH_DTB_TABLE_CONF_SIZE, SOCKET_ID_ANY, 0, RTE_CACHE_LINE_SIZE); + + if (conf_mz == NULL) { + PMD_DRV_LOG(ERR, + "dev %s cannot allocate memory for dtb table conf", + dev->device->name); + ret = -ENOMEM; + goto free_res; + } + dpp_ctrl->down_vir_addr = conf_mz->addr_64; + dpp_ctrl->down_phy_addr = conf_mz->iova; + g_dtb_data.dtb_table_conf_mz = conf_mz; + } + + if (!g_dtb_data.dtb_table_dump_mz) { + const struct rte_memzone *dump_mz = rte_memzone_reserve_aligned("zxdh_dtb_table_dump_mz", + ZXDH_DTB_TABLE_DUMP_SIZE, SOCKET_ID_ANY, 0, RTE_CACHE_LINE_SIZE); + + if (dump_mz == NULL) { + PMD_DRV_LOG(ERR, + "dev %s cannot allocate memory for dtb table dump", + dev->device->name); + ret = -ENOMEM; + goto free_res; + } + dpp_ctrl->dump_vir_addr = dump_mz->addr_64; + dpp_ctrl->dump_phy_addr = dump_mz->iova; + g_dtb_data.dtb_table_dump_mz = dump_mz; + } + + ret = zxdh_np_host_init(0, dpp_ctrl); + if (ret) { + PMD_DRV_LOG(ERR, "dev %s dpp host np init failed, ret %d", dev->device->name, ret); + goto free_res; + } + + PMD_DRV_LOG(DEBUG, "dev %s dpp host np init ok, dtb queue %d", + dev->device->name, dpp_ctrl->queue_id); + g_dtb_data.queueid = dpp_ctrl->queue_id; + rte_free(dpp_ctrl); + return 0; + +free_res: + rte_free(dpp_ctrl); + return ret; +} + +static int +zxdh_init_shared_data(void) +{ + const struct rte_memzone *mz; + int ret = 0; + + rte_spinlock_lock(&zxdh_shared_data_lock); + if (zxdh_shared_data == NULL) { + if (rte_eal_process_type() == RTE_PROC_PRIMARY) { + /* Allocate shared memory. */ + mz = rte_memzone_reserve(ZXDH_PMD_SHARED_DATA_MZ, + sizeof(*zxdh_shared_data), SOCKET_ID_ANY, 0); + if (mz == NULL) { + PMD_DRV_LOG(ERR, "Cannot allocate zxdh shared data"); + ret = -rte_errno; + goto error; + } + zxdh_shared_data = mz->addr; + memset(zxdh_shared_data, 0, sizeof(*zxdh_shared_data)); + rte_spinlock_init(&zxdh_shared_data->lock); + } else { /* Lookup allocated shared memory.
*/ + mz = rte_memzone_lookup(ZXDH_PMD_SHARED_DATA_MZ); + if (mz == NULL) { + PMD_DRV_LOG(ERR, "Cannot attach zxdh shared data"); + ret = -rte_errno; + goto error; + } + zxdh_shared_data = mz->addr; + } + } + +error: + rte_spinlock_unlock(&zxdh_shared_data_lock); + return ret; +} + +static int +zxdh_init_once(void) +{ + int ret = 0; + + if (zxdh_init_shared_data()) + return -1; + + struct zxdh_shared_data *sd = zxdh_shared_data; + rte_spinlock_lock(&sd->lock); + if (rte_eal_process_type() == RTE_PROC_SECONDARY) { + if (!sd->init_done) { + ++sd->secondary_cnt; + sd->init_done = true; + } + goto out; + } + /* RTE_PROC_PRIMARY */ + if (!sd->init_done) + sd->init_done = true; + sd->dev_refcnt++; + +out: + rte_spinlock_unlock(&sd->lock); + return ret; +} + +static int +zxdh_np_init(struct rte_eth_dev *eth_dev) +{ + struct zxdh_hw *hw = eth_dev->data->dev_private; + int ret = 0; + + if (hw->is_pf) { + ret = zxdh_np_dtb_res_init(eth_dev); + if (ret) { + PMD_DRV_LOG(ERR, "np dtb init failed, ret:%d", ret); + return ret; + } + } + if (zxdh_shared_data != NULL) + zxdh_shared_data->np_init_done = 1; + + PMD_DRV_LOG(DEBUG, "np init ok"); + return 0; +} + static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) { @@ -950,6 +1134,10 @@ zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) hw->is_pf = 1; } + ret = zxdh_init_once(); + if (ret != 0) + goto err_zxdh_init; + ret = zxdh_init_device(eth_dev); if (ret < 0) goto err_zxdh_init; @@ -977,6 +1165,10 @@ zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) if (ret != 0) goto err_zxdh_init; + ret = zxdh_np_init(eth_dev); + if (ret) + goto err_zxdh_init; + ret = zxdh_configure_intr(eth_dev); if (ret != 0) goto err_zxdh_init; diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index 7658cbb461..b1f398b28e 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -35,6 +35,12 @@ #define ZXDH_MBUF_BURST_SZ 64 +#define ZXDH_MAX_BASE_DTB_TABLE_COUNT 30 +#define ZXDH_DTB_TABLE_DUMP_SIZE (32 * (16 + 16 * 1024)) +#define ZXDH_DTB_TABLE_CONF_SIZE (32 * (16 + 16 * 1024)) + +#define ZXDH_MAX_NAME_LEN 32 + union zxdh_virport_num { uint16_t vport; struct { @@ -89,6 +95,30 @@ struct zxdh_hw { uint8_t has_rx_offload; }; +struct zxdh_dtb_shared_data { + uint8_t init_done; + char name[ZXDH_MAX_NAME_LEN]; + uint16_t queueid; + uint16_t vport; + uint32_t vector; + const struct rte_memzone *dtb_table_conf_mz; + const struct rte_memzone *dtb_table_dump_mz; + const struct rte_memzone *dtb_table_bulk_dump_mz[ZXDH_MAX_BASE_DTB_TABLE_COUNT]; + struct rte_eth_dev *bind_device; + uint32_t dev_refcnt; +}; + +/* Shared data between primary and secondary processes. */ +struct zxdh_shared_data { + rte_spinlock_t lock; /* Global spinlock for primary and secondary processes. */ + int32_t init_done; /* Whether primary has done initialization. */ + unsigned int secondary_cnt; /* Number of secondary processes init'd.
*/ + + int32_t np_init_done; + uint32_t dev_refcnt; + struct zxdh_dtb_shared_data *dtb_data; +}; + uint16_t zxdh_vport_to_vfid(union zxdh_virport_num v); #endif /* ZXDH_ETHDEV_H */ diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c index 53cf972f86..dd7a518a51 100644 --- a/drivers/net/zxdh/zxdh_msg.c +++ b/drivers/net/zxdh/zxdh_msg.c @@ -1035,3 +1035,47 @@ zxdh_bar_irq_recv(uint8_t src, uint8_t dst, uint64_t virt_addr, void *dev) rte_free(recved_msg); return ZXDH_BAR_MSG_OK; } + +int zxdh_get_bar_offset(struct zxdh_bar_offset_params *paras, + struct zxdh_bar_offset_res *res) +{ + uint16_t check_token; + uint16_t sum_res; + int ret; + + if (!paras) + return ZXDH_BAR_MSG_ERR_NULL; + + struct zxdh_offset_get_msg send_msg = { + .pcie_id = paras->pcie_id, + .type = paras->type, + }; + struct zxdh_pci_bar_msg in = { + .payload_addr = &send_msg, + .payload_len = sizeof(send_msg), + .virt_addr = paras->virt_addr, + .src = ZXDH_MSG_CHAN_END_PF, + .dst = ZXDH_MSG_CHAN_END_RISC, + .module_id = ZXDH_BAR_MODULE_OFFSET_GET, + .src_pcieid = paras->pcie_id, + }; + struct zxdh_bar_recv_msg recv_msg = {0}; + struct zxdh_msg_recviver_mem result = { + .recv_buffer = &recv_msg, + .buffer_len = sizeof(recv_msg), + }; + ret = zxdh_bar_chan_sync_msg_send(&in, &result); + if (ret != ZXDH_BAR_MSG_OK) + return -ret; + + check_token = recv_msg.offset_reps.check; + sum_res = zxdh_bar_get_sum((uint8_t *)&send_msg, sizeof(send_msg)); + + if (check_token != sum_res) { + PMD_MSG_LOG(ERR, "expect token: 0x%x, get token: 0x%x", sum_res, check_token); + return ZXDH_BAR_MSG_ERR_REPLY; + } + res->bar_offset = recv_msg.offset_reps.offset; + res->bar_length = recv_msg.offset_reps.length; + return ZXDH_BAR_MSG_OK; +} diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index 530ee406b1..fbc79e8f9d 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -131,6 +131,26 @@ enum ZXDH_TBL_MSG_TYPE { ZXDH_TBL_TYPE_NON, }; +enum pciebar_layout_type { + ZXDH_URI_VQM = 0, + ZXDH_URI_SPINLOCK = 1, + ZXDH_URI_FWCAP = 2, + ZXDH_URI_FWSHR = 3, + ZXDH_URI_DRS_SEC = 4, + ZXDH_URI_RSV = 5, + ZXDH_URI_CTRLCH = 6, + ZXDH_URI_1588 = 7, + ZXDH_URI_QBV = 8, + ZXDH_URI_MACPCS = 9, + ZXDH_URI_RDMA = 10, + ZXDH_URI_MNP = 11, + ZXDH_URI_MSPM = 12, + ZXDH_URI_MVQM = 13, + ZXDH_URI_MDPI = 14, + ZXDH_URI_NP = 15, + ZXDH_URI_MAX, +}; + struct zxdh_msix_para { uint16_t pcie_id; uint16_t vector_risc; @@ -174,6 +194,17 @@ struct zxdh_bar_offset_reps { uint32_t length; } __rte_packed; +struct zxdh_bar_offset_params { + uint64_t virt_addr; /* Bar space control space virtual address */ + uint16_t pcie_id; + uint16_t type; /* Module types corresponding to PCIBAR planning */ +}; + +struct zxdh_bar_offset_res { + uint32_t bar_offset; + uint32_t bar_length; +}; + struct zxdh_bar_recv_msg { uint8_t reps_ok; uint16_t reps_len; @@ -204,9 +235,15 @@ struct zxdh_bar_msg_header { uint16_t dst_pcieid; /* used in PF-->VF */ }; +struct zxdh_offset_get_msg { + uint16_t pcie_id; + uint16_t type; +}; + typedef int (*zxdh_bar_chan_msg_recv_callback)(void *pay_load, uint16_t len, void *reps_buffer, uint16_t *reps_len, void *dev); +int zxdh_get_bar_offset(struct zxdh_bar_offset_params *paras, struct zxdh_bar_offset_res *res); int zxdh_msg_chan_init(void); int zxdh_bar_msg_chan_exit(void); int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev); diff --git a/drivers/net/zxdh/zxdh_np.c b/drivers/net/zxdh/zxdh_np.c new file mode 100644 index 0000000000..e44d7ff501 --- /dev/null +++ b/drivers/net/zxdh/zxdh_np.c @@ -0,0 +1,340 @@ 
+/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#include <stdlib.h> +#include <string.h> + +#include <rte_common.h> +#include <rte_log.h> +#include <rte_debug.h> +#include <rte_malloc.h> + +#include "zxdh_np.h" +#include "zxdh_logs.h" + +static uint64_t g_np_bar_offset; +static ZXDH_DEV_MGR_T g_dev_mgr; +static ZXDH_SDT_MGR_T g_sdt_mgr; +ZXDH_PPU_CLS_BITMAP_T g_ppu_cls_bit_map[ZXDH_DEV_CHANNEL_MAX]; +ZXDH_DTB_MGR_T *p_dpp_dtb_mgr[ZXDH_DEV_CHANNEL_MAX]; + +#define ZXDH_SDT_MGR_PTR_GET() (&g_sdt_mgr) +#define ZXDH_SDT_SOFT_TBL_GET(id) (g_sdt_mgr.sdt_tbl_array[id]) + +#define ZXDH_COMM_CHECK_DEV_POINT(dev_id, point)\ +do {\ + if (NULL == (point)) {\ + PMD_DRV_LOG(ERR, "dev: %d ZXIC %s:%d[Error:POINT NULL] !"\ + "FUNCTION : %s!", (dev_id), __FILE__, __LINE__, __func__);\ + RTE_ASSERT(0);\ + } \ +} while (0) + +#define ZXDH_COMM_CHECK_DEV_RC(dev_id, rc, becall)\ +do {\ + if ((rc) != 0) {\ + PMD_DRV_LOG(ERR, "dev: %d ZXIC %s:%d !"\ + "-- %s Call %s Fail!", (dev_id), __FILE__, __LINE__, __func__, becall);\ + RTE_ASSERT(0);\ + } \ +} while (0) + +#define ZXDH_COMM_CHECK_POINT_NO_ASSERT(point)\ +do {\ + if ((point) == NULL) {\ + PMD_DRV_LOG(ERR, "ZXIC %s:%d[Error:POINT NULL] ! FUNCTION : %s!",\ + __FILE__, __LINE__, __func__);\ + } \ +} while (0) + +#define ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, becall)\ +do {\ + if ((rc) != 0) {\ + PMD_DRV_LOG(ERR, "ZXIC %s:%d !-- %s Call %s"\ + " Fail!", __FILE__, __LINE__, __func__, becall);\ + } \ +} while (0) + +#define ZXDH_COMM_CHECK_RC(rc, becall)\ +do {\ + if ((rc) != 0) {\ + PMD_DRV_LOG(ERR, "ZXIC %s:%d!-- %s Call %s "\ + "Fail!", __FILE__, __LINE__, __func__, becall);\ + RTE_ASSERT(0);\ + } \ +} while (0) + +static uint32_t +zxdh_np_dev_init(void) +{ + if (g_dev_mgr.is_init) { + PMD_DRV_LOG(ERR, "Dev is already initialized."); + return 0; + } + + g_dev_mgr.device_num = 0; + g_dev_mgr.is_init = 1; + + return 0; +} + +static uint32_t +zxdh_np_dev_add(uint32_t dev_id, ZXDH_DEV_TYPE_E dev_type, + ZXDH_DEV_ACCESS_TYPE_E access_type, uint64_t pcie_addr, + uint64_t riscv_addr, uint64_t dma_vir_addr, + uint64_t dma_phy_addr) +{ + ZXDH_DEV_CFG_T *p_dev_info = NULL; + ZXDH_DEV_MGR_T *p_dev_mgr = NULL; + + p_dev_mgr = &g_dev_mgr; + if (!p_dev_mgr->is_init) { + PMD_DRV_LOG(ERR, "ErrorCode[ 0x%x]: Device Manager is not init!!!", + ZXDH_RC_DEV_MGR_NOT_INIT); + return ZXDH_RC_DEV_MGR_NOT_INIT; + } + + if (p_dev_mgr->p_dev_array[dev_id] != NULL) { + /* device is already exist. */ + PMD_DRV_LOG(ERR, "Device is added again!!!"); + p_dev_info = p_dev_mgr->p_dev_array[dev_id]; + } else { + /* device is new. 
*/ + p_dev_info = rte_malloc(NULL, sizeof(ZXDH_DEV_CFG_T), 0); + ZXDH_COMM_CHECK_DEV_POINT(dev_id, p_dev_info); + p_dev_mgr->p_dev_array[dev_id] = p_dev_info; + p_dev_mgr->device_num++; + } + + p_dev_info->device_id = dev_id; + p_dev_info->dev_type = dev_type; + p_dev_info->access_type = access_type; + p_dev_info->pcie_addr = pcie_addr; + p_dev_info->riscv_addr = riscv_addr; + p_dev_info->dma_vir_addr = dma_vir_addr; + p_dev_info->dma_phy_addr = dma_phy_addr; + + return 0; +} + +static uint32_t +zxdh_np_dev_agent_status_set(uint32_t dev_id, uint32_t agent_flag) +{ + ZXDH_DEV_MGR_T *p_dev_mgr = NULL; + ZXDH_DEV_CFG_T *p_dev_info = NULL; + + p_dev_mgr = &g_dev_mgr; + p_dev_info = p_dev_mgr->p_dev_array[dev_id]; + + if (p_dev_info == NULL) + return ZXDH_DEV_TYPE_INVALID; + p_dev_info->agent_flag = agent_flag; + + return 0; +} + +static void +zxdh_np_sdt_mgr_init(void) +{ + if (!g_sdt_mgr.is_init) { + g_sdt_mgr.channel_num = 0; + g_sdt_mgr.is_init = 1; + memset(g_sdt_mgr.sdt_tbl_array, 0, ZXDH_DEV_CHANNEL_MAX * + sizeof(ZXDH_SDT_SOFT_TABLE_T *)); + } +} + +static uint32_t +zxdh_np_sdt_mgr_create(uint32_t dev_id) +{ + ZXDH_SDT_SOFT_TABLE_T *p_sdt_tbl_temp = NULL; + ZXDH_SDT_MGR_T *p_sdt_mgr = NULL; + + p_sdt_mgr = ZXDH_SDT_MGR_PTR_GET(); + + if (ZXDH_SDT_SOFT_TBL_GET(dev_id) == NULL) { + p_sdt_tbl_temp = rte_malloc(NULL, sizeof(ZXDH_SDT_SOFT_TABLE_T), 0); + + p_sdt_tbl_temp->device_id = dev_id; + memset(p_sdt_tbl_temp->sdt_array, 0, ZXDH_DEV_SDT_ID_MAX * sizeof(ZXDH_SDT_ITEM_T)); + + ZXDH_SDT_SOFT_TBL_GET(dev_id) = p_sdt_tbl_temp; + + p_sdt_mgr->channel_num++; + } else { + PMD_DRV_LOG(ERR, "Error: %s for dev[%d] is called repeatedly!", __func__, dev_id); + return 1; + } + + return 0; +} + +static uint32_t +zxdh_np_sdt_init(uint32_t dev_num, uint32_t *dev_id_array) +{ + uint32_t rc; + uint32_t i; + + zxdh_np_sdt_mgr_init(); + + for (i = 0; i < dev_num; i++) { + rc = zxdh_np_sdt_mgr_create(dev_id_array[i]); + ZXDH_COMM_CHECK_RC(rc, "zxdh_sdt_mgr_create"); + } + + return rc; +} + +static void +zxdh_np_ppu_parse_cls_bitmap(uint32_t dev_id, + uint32_t bitmap) +{ + uint32_t cls_id; + uint32_t mem_id; + uint32_t cls_use; + uint32_t instr_mem; + + for (cls_id = 0; cls_id < ZXDH_PPU_CLUSTER_NUM; cls_id++) { + cls_use = (bitmap >> cls_id) & 0x1; + g_ppu_cls_bit_map[dev_id].cls_use[cls_id] = cls_use; + } + + for (mem_id = 0; mem_id < ZXDH_PPU_INSTR_MEM_NUM; mem_id++) { + instr_mem = (bitmap >> (mem_id * 2)) & 0x3; + g_ppu_cls_bit_map[dev_id].instr_mem[mem_id] = ((instr_mem > 0) ?
1 : 0); + } +} + +static ZXDH_DTB_MGR_T * +zxdh_np_dtb_mgr_get(uint32_t dev_id) +{ + if (dev_id >= ZXDH_DEV_CHANNEL_MAX) + return NULL; + else + return p_dpp_dtb_mgr[dev_id]; +} + +static uint32_t +zxdh_np_dtb_soft_init(uint32_t dev_id) +{ + ZXDH_DTB_MGR_T *p_dtb_mgr = NULL; + + if (dev_id >= ZXDH_DEV_CHANNEL_MAX) + return 1; + + p_dtb_mgr = zxdh_np_dtb_mgr_get(dev_id); + if (p_dtb_mgr == NULL) { + p_dpp_dtb_mgr[dev_id] = rte_zmalloc(NULL, sizeof(ZXDH_DTB_MGR_T), 0); + p_dtb_mgr = zxdh_np_dtb_mgr_get(dev_id); + if (p_dtb_mgr == NULL) + return 1; + } + + return 0; +} + +static uint32_t +zxdh_np_base_soft_init(uint32_t dev_id, ZXDH_SYS_INIT_CTRL_T *p_init_ctrl) +{ + uint32_t dev_id_array[ZXDH_DEV_CHANNEL_MAX] = {0}; + uint32_t rt; + uint32_t access_type; + uint32_t agent_flag; + + rt = zxdh_np_dev_init(); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rt, "zxdh_dev_init"); + + if (p_init_ctrl->flags & ZXDH_INIT_FLAG_ACCESS_TYPE) + access_type = ZXDH_DEV_ACCESS_TYPE_RISCV; + else + access_type = ZXDH_DEV_ACCESS_TYPE_PCIE; + + if (p_init_ctrl->flags & ZXDH_INIT_FLAG_AGENT_FLAG) + agent_flag = ZXDH_DEV_AGENT_ENABLE; + else + agent_flag = ZXDH_DEV_AGENT_DISABLE; + + rt = zxdh_np_dev_add(dev_id, + p_init_ctrl->device_type, + access_type, + p_init_ctrl->pcie_vir_baddr, + p_init_ctrl->riscv_vir_baddr, + p_init_ctrl->dma_vir_baddr, + p_init_ctrl->dma_phy_baddr); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rt, "zxdh_dev_add"); + + rt = zxdh_np_dev_agent_status_set(dev_id, agent_flag); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rt, "zxdh_dev_agent_status_set"); + + dev_id_array[0] = dev_id; + rt = zxdh_np_sdt_init(1, dev_id_array); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rt, "zxdh_sdt_init"); + + zxdh_np_ppu_parse_cls_bitmap(dev_id, ZXDH_PPU_CLS_ALL_START); + + rt = zxdh_np_dtb_soft_init(dev_id); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rt, "zxdh_dtb_soft_init"); + + return rt; +} + +static void +zxdh_np_dev_vport_set(uint32_t dev_id, uint32_t vport) +{ + ZXDH_DEV_MGR_T *p_dev_mgr = NULL; + ZXDH_DEV_CFG_T *p_dev_info = NULL; + + p_dev_mgr = &g_dev_mgr; + p_dev_info = p_dev_mgr->p_dev_array[dev_id]; + p_dev_info->vport = vport; +} + +static void +zxdh_np_dev_agent_addr_set(uint32_t dev_id, uint64_t agent_addr) +{ + ZXDH_DEV_MGR_T *p_dev_mgr = NULL; + ZXDH_DEV_CFG_T *p_dev_info = NULL; + + p_dev_mgr = &g_dev_mgr; + p_dev_info = p_dev_mgr->p_dev_array[dev_id]; + p_dev_info->agent_addr = agent_addr; +} + +static uint64_t +zxdh_np_addr_calc(uint64_t pcie_vir_baddr, uint32_t bar_offset) +{ + uint64_t np_addr; + + np_addr = ((pcie_vir_baddr + bar_offset) > ZXDH_PCIE_NP_MEM_SIZE) + ? 
(pcie_vir_baddr + bar_offset - ZXDH_PCIE_NP_MEM_SIZE) : 0; + g_np_bar_offset = bar_offset; + + return np_addr; +} + +int +zxdh_np_host_init(uint32_t dev_id, + ZXDH_DEV_INIT_CTRL_T *p_dev_init_ctrl) +{ + ZXDH_SYS_INIT_CTRL_T sys_init_ctrl = {0}; + uint32_t rc; + uint64_t agent_addr; + + ZXDH_COMM_CHECK_POINT_NO_ASSERT(p_dev_init_ctrl); + + sys_init_ctrl.flags = (ZXDH_DEV_ACCESS_TYPE_PCIE << 0) | (ZXDH_DEV_AGENT_ENABLE << 10); + sys_init_ctrl.pcie_vir_baddr = zxdh_np_addr_calc(p_dev_init_ctrl->pcie_vir_addr, + p_dev_init_ctrl->np_bar_offset); + sys_init_ctrl.device_type = ZXDH_DEV_TYPE_CHIP; + rc = zxdh_np_base_soft_init(dev_id, &sys_init_ctrl); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_base_soft_init"); + + zxdh_np_dev_vport_set(dev_id, p_dev_init_ctrl->vport); + + agent_addr = ZXDH_PCIE_AGENT_ADDR_OFFSET + p_dev_init_ctrl->pcie_vir_addr; + zxdh_np_dev_agent_addr_set(dev_id, agent_addr); + + return 0; +} diff --git a/drivers/net/zxdh/zxdh_np.h b/drivers/net/zxdh/zxdh_np.h new file mode 100644 index 0000000000..573eafe796 --- /dev/null +++ b/drivers/net/zxdh/zxdh_np.h @@ -0,0 +1,198 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2023 ZTE Corporation + */ + +#ifndef ZXDH_NP_H +#define ZXDH_NP_H + +#include <stdint.h> + +#define ZXDH_PORT_NAME_MAX (32) +#define ZXDH_DEV_CHANNEL_MAX (2) +#define ZXDH_DEV_SDT_ID_MAX (256U) +/*DTB*/ +#define ZXDH_DTB_QUEUE_ITEM_NUM_MAX (32) +#define ZXDH_DTB_QUEUE_NUM_MAX (128) + +#define ZXDH_PPU_CLS_ALL_START (0x3F) +#define ZXDH_PPU_CLUSTER_NUM (6) +#define ZXDH_PPU_INSTR_MEM_NUM (3) +#define ZXDH_SDT_CFG_LEN (2) + +#define ZXDH_RC_DEV_BASE (0x600) +#define ZXDH_RC_DEV_PARA_INVALID (ZXDH_RC_DEV_BASE | 0x0) +#define ZXDH_RC_DEV_RANGE_INVALID (ZXDH_RC_DEV_BASE | 0x1) +#define ZXDH_RC_DEV_CALL_FUNC_FAIL (ZXDH_RC_DEV_BASE | 0x2) +#define ZXDH_RC_DEV_TYPE_INVALID (ZXDH_RC_DEV_BASE | 0x3) +#define ZXDH_RC_DEV_CONNECT_FAIL (ZXDH_RC_DEV_BASE | 0x4) +#define ZXDH_RC_DEV_MSG_INVALID (ZXDH_RC_DEV_BASE | 0x5) +#define ZXDH_RC_DEV_NOT_EXIST (ZXDH_RC_DEV_BASE | 0x6) +#define ZXDH_RC_DEV_MGR_NOT_INIT (ZXDH_RC_DEV_BASE | 0x7) +#define ZXDH_RC_DEV_CFG_NOT_INIT (ZXDH_RC_DEV_BASE | 0x8) + +#define ZXDH_SYS_VF_NP_BASE_OFFSET 0 +#define ZXDH_PCIE_DTB4K_ADDR_OFFSET (0x6000) +#define ZXDH_PCIE_NP_MEM_SIZE (0x2000000) +#define ZXDH_PCIE_AGENT_ADDR_OFFSET (0x2000) + +#define ZXDH_INIT_FLAG_ACCESS_TYPE (1 << 0) +#define ZXDH_INIT_FLAG_SERDES_DOWN_TP (1 << 1) +#define ZXDH_INIT_FLAG_DDR_BACKDOOR (1 << 2) +#define ZXDH_INIT_FLAG_SA_MODE (1 << 3) +#define ZXDH_INIT_FLAG_SA_MESH (1 << 4) +#define ZXDH_INIT_FLAG_SA_SERDES_MODE (1 << 5) +#define ZXDH_INIT_FLAG_INT_DEST_MODE (1 << 6) +#define ZXDH_INIT_FLAG_LIF0_MODE (1 << 7) +#define ZXDH_INIT_FLAG_DMA_ENABLE (1 << 8) +#define ZXDH_INIT_FLAG_TM_IMEM_FLAG (1 << 9) +#define ZXDH_INIT_FLAG_AGENT_FLAG (1 << 10) + +typedef enum zxdh_module_init_e { + ZXDH_MODULE_INIT_NPPU = 0, + ZXDH_MODULE_INIT_PPU, + ZXDH_MODULE_INIT_SE, + ZXDH_MODULE_INIT_ETM, + ZXDH_MODULE_INIT_DLB, + ZXDH_MODULE_INIT_TRPG, + ZXDH_MODULE_INIT_TSN, + ZXDH_MODULE_INIT_MAX +} ZXDH_MODULE_INIT_E; + +typedef enum zxdh_dev_type_e { + ZXDH_DEV_TYPE_SIM = 0, + ZXDH_DEV_TYPE_VCS = 1, + ZXDH_DEV_TYPE_CHIP = 2, + ZXDH_DEV_TYPE_FPGA = 3, + ZXDH_DEV_TYPE_PCIE_ACC = 4, + ZXDH_DEV_TYPE_INVALID, +} ZXDH_DEV_TYPE_E; + +typedef enum zxdh_dev_access_type_e { + ZXDH_DEV_ACCESS_TYPE_PCIE = 0, + ZXDH_DEV_ACCESS_TYPE_RISCV = 1, + ZXDH_DEV_ACCESS_TYPE_INVALID, +} ZXDH_DEV_ACCESS_TYPE_E; + +typedef enum zxdh_dev_agent_flag_e { + ZXDH_DEV_AGENT_DISABLE = 0, + ZXDH_DEV_AGENT_ENABLE = 1, + 
ZXDH_DEV_AGENT_INVALID, +} ZXDH_DEV_AGENT_FLAG_E; + +typedef struct zxdh_dtb_tab_up_user_addr_t { + uint32_t user_flag; + uint64_t phy_addr; + uint64_t vir_addr; +} ZXDH_DTB_TAB_UP_USER_ADDR_T; + +typedef struct zxdh_dtb_tab_up_info_t { + uint64_t start_phy_addr; + uint64_t start_vir_addr; + uint32_t item_size; + uint32_t wr_index; + uint32_t rd_index; + uint32_t data_len[ZXDH_DTB_QUEUE_ITEM_NUM_MAX]; + ZXDH_DTB_TAB_UP_USER_ADDR_T user_addr[ZXDH_DTB_QUEUE_ITEM_NUM_MAX]; +} ZXDH_DTB_TAB_UP_INFO_T; + +typedef struct zxdh_dtb_tab_down_info_t { + uint64_t start_phy_addr; + uint64_t start_vir_addr; + uint32_t item_size; + uint32_t wr_index; + uint32_t rd_index; +} ZXDH_DTB_TAB_DOWN_INFO_T; + +typedef struct zxdh_dtb_queue_info_t { + uint32_t init_flag; + uint32_t vport; + uint32_t vector; + ZXDH_DTB_TAB_UP_INFO_T tab_up; + ZXDH_DTB_TAB_DOWN_INFO_T tab_down; +} ZXDH_DTB_QUEUE_INFO_T; + +typedef struct zxdh_dtb_mgr_t { + ZXDH_DTB_QUEUE_INFO_T queue_info[ZXDH_DTB_QUEUE_NUM_MAX]; +} ZXDH_DTB_MGR_T; + +typedef struct zxdh_ppu_cls_bitmap_t { + uint32_t cls_use[ZXDH_PPU_CLUSTER_NUM]; + uint32_t instr_mem[ZXDH_PPU_INSTR_MEM_NUM]; +} ZXDH_PPU_CLS_BITMAP_T; + +typedef struct dpp_sdt_item_t { + uint32_t valid; + uint32_t table_cfg[ZXDH_SDT_CFG_LEN]; +} ZXDH_SDT_ITEM_T; + +typedef struct dpp_sdt_soft_table_t { + uint32_t device_id; + ZXDH_SDT_ITEM_T sdt_array[ZXDH_DEV_SDT_ID_MAX]; +} ZXDH_SDT_SOFT_TABLE_T; + +typedef struct zxdh_sys_init_ctrl_t { + ZXDH_DEV_TYPE_E device_type; + uint32_t flags; + uint32_t sa_id; + uint32_t case_num; + uint32_t lif0_port_type; + uint32_t lif1_port_type; + uint64_t pcie_vir_baddr; + uint64_t riscv_vir_baddr; + uint64_t dma_vir_baddr; + uint64_t dma_phy_baddr; +} ZXDH_SYS_INIT_CTRL_T; + +typedef struct dpp_dev_cfg_t { + uint32_t device_id; + ZXDH_DEV_TYPE_E dev_type; + uint32_t chip_ver; + uint32_t access_type; + uint32_t agent_flag; + uint32_t vport; + uint64_t pcie_addr; + uint64_t riscv_addr; + uint64_t dma_vir_addr; + uint64_t dma_phy_addr; + uint64_t agent_addr; + uint32_t init_flags[ZXDH_MODULE_INIT_MAX]; +} ZXDH_DEV_CFG_T; + +typedef struct zxdh_dev_mngr_t { + uint32_t device_num; + uint32_t is_init; + ZXDH_DEV_CFG_T *p_dev_array[ZXDH_DEV_CHANNEL_MAX]; +} ZXDH_DEV_MGR_T; + +typedef struct zxdh_dtb_addr_info_t { + uint32_t sdt_no; + uint32_t size; + uint32_t phy_addr; + uint32_t vir_addr; +} ZXDH_DTB_ADDR_INFO_T; + +typedef struct zxdh_dev_init_ctrl_t { + uint32_t vport; + char port_name[ZXDH_PORT_NAME_MAX]; + uint32_t vector; + uint32_t queue_id; + uint32_t np_bar_offset; + uint32_t np_bar_len; + uint32_t pcie_vir_addr; + uint32_t down_phy_addr; + uint32_t down_vir_addr; + uint32_t dump_phy_addr; + uint32_t dump_vir_addr; + uint32_t dump_sdt_num; + ZXDH_DTB_ADDR_INFO_T dump_addr_info[]; +} ZXDH_DEV_INIT_CTRL_T; + +typedef struct zxdh_sdt_mgr_t { + uint32_t channel_num; + uint32_t is_init; + ZXDH_SDT_SOFT_TABLE_T *sdt_tbl_array[ZXDH_DEV_CHANNEL_MAX]; +} ZXDH_SDT_MGR_T; + +int zxdh_np_host_init(uint32_t dev_id, ZXDH_DEV_INIT_CTRL_T *p_dev_init_ctrl); + +#endif /* ZXDH_NP_H */ diff --git a/drivers/net/zxdh/zxdh_pci.c b/drivers/net/zxdh/zxdh_pci.c index 06d3f92b20..250e67d560 100644 --- a/drivers/net/zxdh/zxdh_pci.c +++ b/drivers/net/zxdh/zxdh_pci.c @@ -159,7 +159,7 @@ zxdh_setup_queue(struct zxdh_hw *hw, struct zxdh_virtqueue *vq) desc_addr = vq->vq_ring_mem; avail_addr = desc_addr + vq->vq_nentries * sizeof(struct zxdh_vring_desc); - if (vtpci_packed_queue(vq->hw)) { + if (zxdh_pci_packed_queue(vq->hw)) { used_addr = RTE_ALIGN_CEIL((avail_addr + sizeof(struct 
zxdh_vring_packed_desc_event)), ZXDH_PCI_VRING_ALIGN); diff --git a/drivers/net/zxdh/zxdh_pci.h b/drivers/net/zxdh/zxdh_pci.h index ed6fd89742..d6487a574f 100644 --- a/drivers/net/zxdh/zxdh_pci.h +++ b/drivers/net/zxdh/zxdh_pci.h @@ -114,15 +114,15 @@ struct zxdh_pci_common_cfg { }; static inline int32_t -vtpci_with_feature(struct zxdh_hw *hw, uint64_t bit) +zxdh_pci_with_feature(struct zxdh_hw *hw, uint64_t bit) { return (hw->guest_features & (1ULL << bit)) != 0; } static inline int32_t -vtpci_packed_queue(struct zxdh_hw *hw) +zxdh_pci_packed_queue(struct zxdh_hw *hw) { - return vtpci_with_feature(hw, ZXDH_F_RING_PACKED); + return zxdh_pci_with_feature(hw, ZXDH_F_RING_PACKED); } struct zxdh_pci_ops { diff --git a/drivers/net/zxdh/zxdh_queue.c b/drivers/net/zxdh/zxdh_queue.c index 462a88b23c..b4ef90ea36 100644 --- a/drivers/net/zxdh/zxdh_queue.c +++ b/drivers/net/zxdh/zxdh_queue.c @@ -13,7 +13,7 @@ #include "zxdh_msg.h" struct rte_mbuf * -zxdh_virtqueue_detach_unused(struct zxdh_virtqueue *vq) +zxdh_queue_detach_unused(struct zxdh_virtqueue *vq) { struct rte_mbuf *cookie = NULL; int32_t idx = 0; diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h index 1088bf08fc..1304d5e4ea 100644 --- a/drivers/net/zxdh/zxdh_queue.h +++ b/drivers/net/zxdh/zxdh_queue.h @@ -206,11 +206,11 @@ struct zxdh_tx_region { }; static inline size_t -vring_size(struct zxdh_hw *hw, uint32_t num, unsigned long align) +zxdh_vring_size(struct zxdh_hw *hw, uint32_t num, unsigned long align) { size_t size; - if (vtpci_packed_queue(hw)) { + if (zxdh_pci_packed_queue(hw)) { size = num * sizeof(struct zxdh_vring_packed_desc); size += sizeof(struct zxdh_vring_packed_desc_event); size = RTE_ALIGN_CEIL(size, align); @@ -226,7 +226,7 @@ vring_size(struct zxdh_hw *hw, uint32_t num, unsigned long align) } static inline void -vring_init_packed(struct zxdh_vring_packed *vr, uint8_t *p, +zxdh_vring_init_packed(struct zxdh_vring_packed *vr, uint8_t *p, unsigned long align, uint32_t num) { vr->num = num; @@ -238,7 +238,7 @@ vring_init_packed(struct zxdh_vring_packed *vr, uint8_t *p, } static inline void -vring_desc_init_packed(struct zxdh_virtqueue *vq, int32_t n) +zxdh_vring_desc_init_packed(struct zxdh_virtqueue *vq, int32_t n) { int32_t i = 0; @@ -251,7 +251,7 @@ vring_desc_init_packed(struct zxdh_virtqueue *vq, int32_t n) } static inline void -vring_desc_init_indirect_packed(struct zxdh_vring_packed_desc *dp, int32_t n) +zxdh_vring_desc_init_indirect_packed(struct zxdh_vring_packed_desc *dp, int32_t n) { int32_t i = 0; @@ -262,7 +262,7 @@ vring_desc_init_indirect_packed(struct zxdh_vring_packed_desc *dp, int32_t n) } static inline void -virtqueue_disable_intr(struct zxdh_virtqueue *vq) +zxdh_queue_disable_intr(struct zxdh_virtqueue *vq) { if (vq->vq_packed.event_flags_shadow != ZXDH_RING_EVENT_FLAGS_DISABLE) { vq->vq_packed.event_flags_shadow = ZXDH_RING_EVENT_FLAGS_DISABLE; @@ -270,7 +270,7 @@ virtqueue_disable_intr(struct zxdh_virtqueue *vq) } } -struct rte_mbuf *zxdh_virtqueue_detach_unused(struct zxdh_virtqueue *vq); +struct rte_mbuf *zxdh_queue_detach_unused(struct zxdh_virtqueue *vq); int32_t zxdh_free_queues(struct rte_eth_dev *dev); int32_t zxdh_get_queue_type(uint16_t vtpci_queue_idx); -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 79562 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
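The zxdh_init_shared_data() logic in this patch follows DPDK's usual pattern for state shared between primary and secondary processes: the primary reserves a named memzone and initializes it, secondaries only look it up. Distilled into a standalone sketch (hypothetical memzone name and payload):

#include <string.h>
#include <rte_eal.h>
#include <rte_memzone.h>
#include <rte_spinlock.h>

struct shared_state {
        rte_spinlock_t lock;
        int init_done;
};

static struct shared_state *attach_shared_state(void)
{
        const struct rte_memzone *mz;

        if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
                /* The primary owns the allocation and zeroes it. */
                mz = rte_memzone_reserve("example_shared_state",
                                sizeof(struct shared_state), SOCKET_ID_ANY, 0);
                if (mz == NULL)
                        return NULL;
                memset(mz->addr, 0, sizeof(struct shared_state));
                rte_spinlock_init(&((struct shared_state *)mz->addr)->lock);
        } else {
                /* Secondaries attach to the zone the primary created. */
                mz = rte_memzone_lookup("example_shared_state");
                if (mz == NULL)
                        return NULL;
        }
        return mz->addr;
}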
* [PATCH v6 02/15] net/zxdh: zxdh np uninit implementation 2024-12-26 3:37 ` [PATCH v6 " Junlong Wang 2024-12-26 3:37 ` [PATCH v6 01/15] net/zxdh: zxdh np init implementation Junlong Wang @ 2024-12-26 3:37 ` Junlong Wang 2024-12-26 3:37 ` [PATCH v6 03/15] net/zxdh: port tables init implementations Junlong Wang ` (12 subsequent siblings) 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-26 3:37 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 19520 bytes --] NP (network processor): release resources in the host. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/zxdh_ethdev.c | 48 ++++ drivers/net/zxdh/zxdh_np.c | 470 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_np.h | 107 ++++++++ 3 files changed, 625 insertions(+) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index b8f4415e00..4e114d95da 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -841,6 +841,51 @@ zxdh_dev_configure(struct rte_eth_dev *dev) return ret; } +static void +zxdh_np_dtb_data_res_free(struct zxdh_hw *hw) +{ + struct rte_eth_dev *dev = hw->eth_dev; + int ret; + int i; + + if (g_dtb_data.init_done && g_dtb_data.bind_device == dev) { + ret = zxdh_np_online_uninit(0, dev->data->name, g_dtb_data.queueid); + if (ret) + PMD_DRV_LOG(ERR, "%s dpp_np_online_uninstall failed", dev->data->name); + + if (g_dtb_data.dtb_table_conf_mz) + rte_memzone_free(g_dtb_data.dtb_table_conf_mz); + + if (g_dtb_data.dtb_table_dump_mz) { + rte_memzone_free(g_dtb_data.dtb_table_dump_mz); + g_dtb_data.dtb_table_dump_mz = NULL; + } + + for (i = 0; i < ZXDH_MAX_BASE_DTB_TABLE_COUNT; i++) { + if (g_dtb_data.dtb_table_bulk_dump_mz[i]) { + rte_memzone_free(g_dtb_data.dtb_table_bulk_dump_mz[i]); + g_dtb_data.dtb_table_bulk_dump_mz[i] = NULL; + } + } + g_dtb_data.init_done = 0; + g_dtb_data.bind_device = NULL; + } + if (zxdh_shared_data != NULL) + zxdh_shared_data->np_init_done = 0; +} + +static void +zxdh_np_uninit(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + if (!g_dtb_data.init_done && !g_dtb_data.dev_refcnt) + return; + + if (--g_dtb_data.dev_refcnt == 0) + zxdh_np_dtb_data_res_free(hw); +} + static int zxdh_dev_close(struct rte_eth_dev *dev) { @@ -848,6 +893,7 @@ zxdh_dev_close(struct rte_eth_dev *dev) int ret = 0; zxdh_intr_release(dev); + zxdh_np_uninit(dev); zxdh_pci_reset(hw); zxdh_dev_free_mbufs(dev); @@ -1010,6 +1056,7 @@ zxdh_np_dtb_res_init(struct rte_eth_dev *dev) return 0; free_res: + zxdh_np_dtb_data_res_free(hw); rte_free(dpp_ctrl); return ret; } @@ -1177,6 +1224,7 @@ zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) err_zxdh_init: zxdh_intr_release(eth_dev); + zxdh_np_uninit(eth_dev); zxdh_bar_msg_chan_exit(); rte_free(eth_dev->data->mac_addrs); eth_dev->data->mac_addrs = NULL; diff --git a/drivers/net/zxdh/zxdh_np.c b/drivers/net/zxdh/zxdh_np.c index e44d7ff501..28728b0c68 100644 --- a/drivers/net/zxdh/zxdh_np.c +++ b/drivers/net/zxdh/zxdh_np.c @@ -18,10 +18,21 @@ static ZXDH_DEV_MGR_T g_dev_mgr; static ZXDH_SDT_MGR_T g_sdt_mgr; ZXDH_PPU_CLS_BITMAP_T g_ppu_cls_bit_map[ZXDH_DEV_CHANNEL_MAX]; ZXDH_DTB_MGR_T *p_dpp_dtb_mgr[ZXDH_DEV_CHANNEL_MAX]; +ZXDH_RISCV_DTB_MGR *p_riscv_dtb_queue_mgr[ZXDH_DEV_CHANNEL_MAX]; +ZXDH_TLB_MGR_T *g_p_dpp_tlb_mgr[ZXDH_DEV_CHANNEL_MAX]; +ZXDH_REG_T g_dpp_reg_info[4]; #define ZXDH_SDT_MGR_PTR_GET() (&g_sdt_mgr) #define ZXDH_SDT_SOFT_TBL_GET(id) (g_sdt_mgr.sdt_tbl_array[id]) +#define ZXDH_COMM_MASK_BIT(_bitnum_)\
+ (0x1U << (_bitnum_)) + +#define ZXDH_COMM_GET_BIT_MASK(_inttype_, _bitqnt_)\ + ((_inttype_)(((_bitqnt_) < 32))) + +#define ZXDH_REG_DATA_MAX (128) + #define ZXDH_COMM_CHECK_DEV_POINT(dev_id, point)\ do {\ if (NULL == (point)) {\ @@ -338,3 +349,462 @@ zxdh_np_host_init(uint32_t dev_id, return 0; } + +static ZXDH_RISCV_DTB_MGR * +zxdh_np_riscv_dtb_queue_mgr_get(uint32_t dev_id) +{ + if (dev_id >= ZXDH_DEV_CHANNEL_MAX) + return NULL; + else + return p_riscv_dtb_queue_mgr[dev_id]; +} + +static uint32_t +zxdh_np_riscv_dtb_mgr_queue_info_delete(uint32_t dev_id, uint32_t queue_id) +{ + ZXDH_RISCV_DTB_MGR *p_riscv_dtb_mgr = NULL; + + p_riscv_dtb_mgr = zxdh_np_riscv_dtb_queue_mgr_get(dev_id); + if (p_riscv_dtb_mgr == NULL) + return 1; + + p_riscv_dtb_mgr->queue_alloc_count--; + p_riscv_dtb_mgr->queue_user_info[queue_id].alloc_flag = 0; + p_riscv_dtb_mgr->queue_user_info[queue_id].queue_id = 0xFF; + p_riscv_dtb_mgr->queue_user_info[queue_id].vport = 0; + memset(p_riscv_dtb_mgr->queue_user_info[queue_id].user_name, 0, ZXDH_PORT_NAME_MAX); + + return 0; +} + +static uint32_t +zxdh_np_dev_get_dev_type(uint32_t dev_id) +{ + ZXDH_DEV_MGR_T *p_dev_mgr = NULL; + ZXDH_DEV_CFG_T *p_dev_info = NULL; + + p_dev_mgr = &g_dev_mgr; + p_dev_info = p_dev_mgr->p_dev_array[dev_id]; + + if (p_dev_info == NULL) + return 0xffff; + + return p_dev_info->dev_type; +} + +static uint32_t +zxdh_np_comm_read_bits(uint8_t *p_base, uint32_t base_size_bit, + uint32_t *p_data, uint32_t start_bit, uint32_t end_bit) +{ + uint32_t start_byte_index; + uint32_t end_byte_index; + uint32_t byte_num; + uint32_t buffer_size; + uint32_t len; + + if (0 != (base_size_bit % 8)) + return 1; + + if (start_bit > end_bit) + return 1; + + if (base_size_bit < end_bit) + return 1; + + len = end_bit - start_bit + 1; + buffer_size = base_size_bit / 8; + while (0 != (buffer_size & (buffer_size - 1))) + buffer_size += 1; + + *p_data = 0; + end_byte_index = (end_bit >> 3); + start_byte_index = (start_bit >> 3); + + if (start_byte_index == end_byte_index) { + *p_data = (uint32_t)(((p_base[start_byte_index] >> (7U - (end_bit & 7))) + & (0xff >> (8U - len))) & 0xff); + return 0; + } + + if (start_bit & 7) { + *p_data = (p_base[start_byte_index] & (0xff >> (start_bit & 7))) & UINT8_MAX; + start_byte_index++; + } + + for (byte_num = start_byte_index; byte_num < end_byte_index; byte_num++) { + *p_data <<= 8; + *p_data += p_base[byte_num]; + } + + *p_data <<= 1 + (end_bit & 7); + *p_data += ((p_base[byte_num & (buffer_size - 1)] & (0xff << (7 - (end_bit & 7)))) >> + (7 - (end_bit & 7))) & 0xff; + + return 0; +} + +static uint32_t +zxdh_np_comm_read_bits_ex(uint8_t *p_base, uint32_t base_size_bit, + uint32_t *p_data, uint32_t msb_start_pos, uint32_t len) +{ + uint32_t rtn; + + rtn = zxdh_np_comm_read_bits(p_base, + base_size_bit, + p_data, + (base_size_bit - 1 - msb_start_pos), + (base_size_bit - 1 - msb_start_pos + len - 1)); + return rtn; +} + +static uint32_t +zxdh_np_reg_read(uint32_t dev_id, uint32_t reg_no, + uint32_t m_offset, uint32_t n_offset, void *p_data) +{ + uint32_t p_buff[ZXDH_REG_DATA_MAX] = {0}; + ZXDH_REG_T *p_reg_info = NULL; + ZXDH_FIELD_T *p_field_info = NULL; + uint32_t rc = 0; + uint32_t i; + + if (reg_no < 4) { + p_reg_info = &g_dpp_reg_info[reg_no]; + p_field_info = p_reg_info->p_fields; + for (i = 0; i < p_reg_info->field_num; i++) { + rc = zxdh_np_comm_read_bits_ex((uint8_t *)p_buff, + p_reg_info->width * 8, + (uint32_t *)p_data + i, + p_field_info[i].msb_pos, + p_field_info[i].len); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, 
"zxic_comm_read_bits_ex"); + PMD_DRV_LOG(ERR, "dev_id %d(%d)(%d)is ok!", dev_id, m_offset, n_offset); + } + } + return rc; +} + +static uint32_t +zxdh_np_dtb_queue_vm_info_get(uint32_t dev_id, + uint32_t queue_id, + ZXDH_DTB_QUEUE_VM_INFO_T *p_vm_info) +{ + ZXDH_DTB4K_DTB_ENQ_CFG_EPID_V_FUNC_NUM_0_127_T vm_info = {0}; + uint32_t rc; + + rc = zxdh_np_reg_read(dev_id, ZXDH_DTB_CFG_EPID_V_FUNC_NUM, + 0, queue_id, &vm_info); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rc, "zxdh_reg_read"); + + p_vm_info->dbi_en = vm_info.dbi_en; + p_vm_info->queue_en = vm_info.queue_en; + p_vm_info->epid = vm_info.cfg_epid; + p_vm_info->vector = vm_info.cfg_vector; + p_vm_info->vfunc_num = vm_info.cfg_vfunc_num; + p_vm_info->func_num = vm_info.cfg_func_num; + p_vm_info->vfunc_active = vm_info.cfg_vfunc_active; + + return 0; +} + +static uint32_t +zxdh_np_comm_write_bits(uint8_t *p_base, uint32_t base_size_bit, + uint32_t data, uint32_t start_bit, uint32_t end_bit) +{ + uint32_t start_byte_index; + uint32_t end_byte_index; + uint8_t mask_value; + uint32_t byte_num; + uint32_t buffer_size; + + if (0 != (base_size_bit % 8)) + return 1; + + if (start_bit > end_bit) + return 1; + + if (base_size_bit < end_bit) + return 1; + + buffer_size = base_size_bit / 8; + + while (0 != (buffer_size & (buffer_size - 1))) + buffer_size += 1; + + end_byte_index = (end_bit >> 3); + start_byte_index = (start_bit >> 3); + + if (start_byte_index == end_byte_index) { + mask_value = ((0xFE << (7 - (start_bit & 7))) & 0xff); + mask_value |= (((1 << (7 - (end_bit & 7))) - 1) & 0xff); + p_base[end_byte_index] &= mask_value; + p_base[end_byte_index] |= (((data << (7 - (end_bit & 7)))) & 0xff); + return 0; + } + + if (7 != (end_bit & 7)) { + mask_value = ((0x7f >> (end_bit & 7)) & 0xff); + p_base[end_byte_index] &= mask_value; + p_base[end_byte_index] |= ((data << (7 - (end_bit & 7))) & 0xff); + end_byte_index--; + data >>= 1 + (end_bit & 7); + } + + for (byte_num = end_byte_index; byte_num > start_byte_index; byte_num--) { + p_base[byte_num & (buffer_size - 1)] = data & 0xff; + data >>= 8; + } + + mask_value = ((0xFE << (7 - (start_bit & 7))) & 0xff); + p_base[byte_num] &= mask_value; + p_base[byte_num] |= data; + + return 0; +} + +static uint32_t +zxdh_np_comm_write_bits_ex(uint8_t *p_base, + uint32_t base_size_bit, + uint32_t data, + uint32_t msb_start_pos, + uint32_t len) +{ + uint32_t rtn; + + rtn = zxdh_np_comm_write_bits(p_base, + base_size_bit, + data, + (base_size_bit - 1 - msb_start_pos), + (base_size_bit - 1 - msb_start_pos + len - 1)); + + return rtn; +} + +static uint32_t +zxdh_np_reg_write(uint32_t dev_id, uint32_t reg_no, + uint32_t m_offset, uint32_t n_offset, void *p_data) +{ + uint32_t p_buff[ZXDH_REG_DATA_MAX] = {0}; + ZXDH_REG_T *p_reg_info = NULL; + ZXDH_FIELD_T *p_field_info = NULL; + uint32_t temp_data; + uint32_t rc; + uint32_t i; + + if (reg_no < 4) { + p_reg_info = &g_dpp_reg_info[reg_no]; + p_field_info = p_reg_info->p_fields; + + for (i = 0; i < p_reg_info->field_num; i++) { + if (p_field_info[i].len <= 32) { + temp_data = *((uint32_t *)p_data + i); + rc = zxdh_np_comm_write_bits_ex((uint8_t *)p_buff, + p_reg_info->width * 8, + temp_data, + p_field_info[i].msb_pos, + p_field_info[i].len); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_comm_write_bits_ex"); + PMD_DRV_LOG(ERR, "dev_id %d(%d)(%d)is ok!", + dev_id, m_offset, n_offset); + } + } + } + + return 0; +} + +static uint32_t +zxdh_np_dtb_queue_vm_info_set(uint32_t dev_id, + uint32_t queue_id, + ZXDH_DTB_QUEUE_VM_INFO_T *p_vm_info) +{ + uint32_t rc = 0; + 
ZXDH_DTB4K_DTB_ENQ_CFG_EPID_V_FUNC_NUM_0_127_T vm_info = {0}; + + vm_info.dbi_en = p_vm_info->dbi_en; + vm_info.queue_en = p_vm_info->queue_en; + vm_info.cfg_epid = p_vm_info->epid; + vm_info.cfg_vector = p_vm_info->vector; + vm_info.cfg_vfunc_num = p_vm_info->vfunc_num; + vm_info.cfg_func_num = p_vm_info->func_num; + vm_info.cfg_vfunc_active = p_vm_info->vfunc_active; + + rc = zxdh_np_reg_write(dev_id, ZXDH_DTB_CFG_EPID_V_FUNC_NUM, + 0, queue_id, &vm_info); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rc, "zxdh_reg_write"); + + return rc; +} + +static uint32_t +zxdh_np_dtb_queue_enable_set(uint32_t dev_id, + uint32_t queue_id, + uint32_t enable) +{ + ZXDH_DTB_QUEUE_VM_INFO_T vm_info = {0}; + uint32_t rc; + + rc = zxdh_np_dtb_queue_vm_info_get(dev_id, queue_id, &vm_info); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_dtb_queue_vm_info_get"); + + vm_info.queue_en = enable; + rc = zxdh_np_dtb_queue_vm_info_set(dev_id, queue_id, &vm_info); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_dtb_queue_vm_info_set"); + + return rc; +} + +static uint32_t +zxdh_np_riscv_dpp_dtb_queue_id_release(uint32_t dev_id, + char name[ZXDH_PORT_NAME_MAX], uint32_t queue_id) +{ + ZXDH_RISCV_DTB_MGR *p_riscv_dtb_mgr = NULL; + + p_riscv_dtb_mgr = zxdh_np_riscv_dtb_queue_mgr_get(dev_id); + if (p_riscv_dtb_mgr == NULL) + return 1; + + if (zxdh_np_dev_get_dev_type(dev_id) == ZXDH_DEV_TYPE_SIM) + return 0; + + if (p_riscv_dtb_mgr->queue_user_info[queue_id].alloc_flag != 1) { + PMD_DRV_LOG(ERR, "queue %d not alloc!", queue_id); + return 2; + } + + if (strcmp(p_riscv_dtb_mgr->queue_user_info[queue_id].user_name, name) != 0) { + PMD_DRV_LOG(ERR, "queue %d name %s error!", queue_id, name); + return 3; + } + zxdh_np_dtb_queue_enable_set(dev_id, queue_id, 0); + zxdh_np_riscv_dtb_mgr_queue_info_delete(dev_id, queue_id); + + return 0; +} + +static uint32_t +zxdh_np_dtb_queue_unused_item_num_get(uint32_t dev_id, + uint32_t queue_id, + uint32_t *p_item_num) +{ + uint32_t rc; + + if (zxdh_np_dev_get_dev_type(dev_id) == ZXDH_DEV_TYPE_SIM) { + *p_item_num = 32; + return 0; + } + + rc = zxdh_np_reg_read(dev_id, ZXDH_DTB_INFO_QUEUE_BUF_SPACE, + 0, queue_id, p_item_num); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rc, "dpp_reg_read"); + return rc; +} + +static uint32_t +zxdh_np_dtb_queue_id_free(uint32_t dev_id, + uint32_t queue_id) +{ + uint32_t item_num = 0; + ZXDH_DTB_MGR_T *p_dtb_mgr = NULL; + uint32_t rc; + + p_dtb_mgr = p_dpp_dtb_mgr[dev_id]; + if (p_dtb_mgr == NULL) + return 1; + + rc = zxdh_np_dtb_queue_unused_item_num_get(dev_id, queue_id, &item_num); + + p_dtb_mgr->queue_info[queue_id].init_flag = 0; + p_dtb_mgr->queue_info[queue_id].vport = 0; + p_dtb_mgr->queue_info[queue_id].vector = 0; + + return rc; +} + +static uint32_t +zxdh_np_dtb_queue_release(uint32_t devid, + char pname[32], + uint32_t queueid) +{ + uint32_t rc; + + ZXDH_COMM_CHECK_DEV_POINT(devid, pname); + + rc = zxdh_np_riscv_dpp_dtb_queue_id_release(devid, pname, queueid); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_riscv_dpp_dtb_queue_id_release"); + + rc = zxdh_np_dtb_queue_id_free(devid, queueid); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_dtb_queue_id_free"); + + return rc; +} + +static void +zxdh_np_dtb_mgr_destroy(uint32_t dev_id) +{ + if (p_dpp_dtb_mgr[dev_id] != NULL) { + free(p_dpp_dtb_mgr[dev_id]); + p_dpp_dtb_mgr[dev_id] = NULL; + } +} + +static void +zxdh_np_tlb_mgr_destroy(uint32_t dev_id) +{ + if (g_p_dpp_tlb_mgr[dev_id] != NULL) { + free(g_p_dpp_tlb_mgr[dev_id]); + g_p_dpp_tlb_mgr[dev_id] = NULL; + } +} + +static void +zxdh_np_sdt_mgr_destroy(uint32_t dev_id) +{ + 
ZXDH_SDT_SOFT_TABLE_T *p_sdt_tbl_temp = NULL; + ZXDH_SDT_MGR_T *p_sdt_mgr = NULL; + + p_sdt_tbl_temp = ZXDH_SDT_SOFT_TBL_GET(dev_id); + p_sdt_mgr = ZXDH_SDT_MGR_PTR_GET(); + + if (p_sdt_tbl_temp != NULL) + free(p_sdt_tbl_temp); + + ZXDH_SDT_SOFT_TBL_GET(dev_id) = NULL; + + p_sdt_mgr->channel_num--; +} + +static void +zxdh_np_dev_del(uint32_t dev_id) +{ + ZXDH_DEV_CFG_T *p_dev_info = NULL; + ZXDH_DEV_MGR_T *p_dev_mgr = NULL; + + p_dev_mgr = &g_dev_mgr; + p_dev_info = p_dev_mgr->p_dev_array[dev_id]; + + if (p_dev_info != NULL) { + free(p_dev_info); + p_dev_mgr->p_dev_array[dev_id] = NULL; + p_dev_mgr->device_num--; + } +} + +int +zxdh_np_online_uninit(uint32_t dev_id, + char *port_name, + uint32_t queue_id) +{ + uint32_t rc; + + rc = zxdh_np_dtb_queue_release(dev_id, port_name, queue_id); + if (rc != 0) + PMD_DRV_LOG(ERR, "%s:dtb release error," + "port name %s queue id %d. ", __func__, port_name, queue_id); + + zxdh_np_dtb_mgr_destroy(dev_id); + zxdh_np_tlb_mgr_destroy(dev_id); + zxdh_np_sdt_mgr_destroy(dev_id); + zxdh_np_dev_del(dev_id); + + return 0; +} diff --git a/drivers/net/zxdh/zxdh_np.h b/drivers/net/zxdh/zxdh_np.h index 573eafe796..dc0e867827 100644 --- a/drivers/net/zxdh/zxdh_np.h +++ b/drivers/net/zxdh/zxdh_np.h @@ -47,6 +47,11 @@ #define ZXDH_INIT_FLAG_TM_IMEM_FLAG (1 << 9) #define ZXDH_INIT_FLAG_AGENT_FLAG (1 << 10) +#define ZXDH_ACL_TBL_ID_MIN (0) +#define ZXDH_ACL_TBL_ID_MAX (7) +#define ZXDH_ACL_TBL_ID_NUM (8U) +#define ZXDH_ACL_BLOCK_NUM (8U) + typedef enum zxdh_module_init_e { ZXDH_MODULE_INIT_NPPU = 0, ZXDH_MODULE_INIT_PPU, @@ -67,6 +72,15 @@ typedef enum zxdh_dev_type_e { ZXDH_DEV_TYPE_INVALID, } ZXDH_DEV_TYPE_E; +typedef enum zxdh_reg_info_e { + ZXDH_DTB_CFG_QUEUE_DTB_HADDR = 0, + ZXDH_DTB_CFG_QUEUE_DTB_LADDR = 1, + ZXDH_DTB_CFG_QUEUE_DTB_LEN = 2, + ZXDH_DTB_INFO_QUEUE_BUF_SPACE = 3, + ZXDH_DTB_CFG_EPID_V_FUNC_NUM = 4, + ZXDH_REG_ENUM_MAX_VALUE +} ZXDH_REG_INFO_E; + typedef enum zxdh_dev_access_type_e { ZXDH_DEV_ACCESS_TYPE_PCIE = 0, ZXDH_DEV_ACCESS_TYPE_RISCV = 1, @@ -79,6 +93,26 @@ typedef enum zxdh_dev_agent_flag_e { ZXDH_DEV_AGENT_INVALID, } ZXDH_DEV_AGENT_FLAG_E; +typedef enum zxdh_acl_pri_mode_e { + ZXDH_ACL_PRI_EXPLICIT = 1, + ZXDH_ACL_PRI_IMPLICIT, + ZXDH_ACL_PRI_SPECIFY, + ZXDH_ACL_PRI_INVALID, +} ZXDH_ACL_PRI_MODE_E; + +typedef struct zxdh_d_node { + void *data; + struct zxdh_d_node *prev; + struct zxdh_d_node *next; +} ZXDH_D_NODE; + +typedef struct zxdh_d_head { + uint32_t used; + uint32_t maxnum; + ZXDH_D_NODE *p_next; + ZXDH_D_NODE *p_prev; +} ZXDH_D_HEAD; + typedef struct zxdh_dtb_tab_up_user_addr_t { uint32_t user_flag; uint64_t phy_addr; @@ -193,6 +227,79 @@ typedef struct zxdh_sdt_mgr_t { ZXDH_SDT_SOFT_TABLE_T *sdt_tbl_array[ZXDH_DEV_CHANNEL_MAX]; } ZXDH_SDT_MGR_T; +typedef struct zxdh_riscv_dtb_queue_USER_info_t { + uint32_t alloc_flag; + uint32_t queue_id; + uint32_t vport; + char user_name[ZXDH_PORT_NAME_MAX]; +} ZXDH_RISCV_DTB_QUEUE_USER_INFO_T; + +typedef struct zxdh_riscv_dtb_mgr { + uint32_t queue_alloc_count; + uint32_t queue_index; + ZXDH_RISCV_DTB_QUEUE_USER_INFO_T queue_user_info[ZXDH_DTB_QUEUE_NUM_MAX]; +} ZXDH_RISCV_DTB_MGR; + +typedef struct zxdh_dtb_queue_vm_info_t { + uint32_t dbi_en; + uint32_t queue_en; + uint32_t epid; + uint32_t vfunc_num; + uint32_t vector; + uint32_t func_num; + uint32_t vfunc_active; +} ZXDH_DTB_QUEUE_VM_INFO_T; + +typedef struct zxdh_dtb4k_dtb_enq_cfg_epid_v_func_num_0_127_t { + uint32_t dbi_en; + uint32_t queue_en; + uint32_t cfg_epid; + uint32_t cfg_vfunc_num; + uint32_t cfg_vector; + uint32_t cfg_func_num; + 
uint32_t cfg_vfunc_active; +} ZXDH_DTB4K_DTB_ENQ_CFG_EPID_V_FUNC_NUM_0_127_T; + + +typedef uint32_t (*ZXDH_REG_WRITE)(uint32_t dev_id, uint32_t addr, uint32_t *p_data); +typedef uint32_t (*ZXDH_REG_READ)(uint32_t dev_id, uint32_t addr, uint32_t *p_data); + +typedef struct zxdh_field_t { + const char *p_name; + uint32_t flags; + uint16_t msb_pos; + + uint16_t len; + uint32_t default_value; + uint32_t default_step; +} ZXDH_FIELD_T; + +typedef struct zxdh_reg_t { + const char *reg_name; + uint32_t reg_no; + uint32_t module_no; + uint32_t flags; + uint32_t array_type; + uint32_t addr; + uint32_t width; + uint32_t m_size; + uint32_t n_size; + uint32_t m_step; + uint32_t n_step; + uint32_t field_num; + ZXDH_FIELD_T *p_fields; + + ZXDH_REG_WRITE p_write_fun; + ZXDH_REG_READ p_read_fun; +} ZXDH_REG_T; + +typedef struct zxdh_tlb_mgr_t { + uint32_t entry_num; + uint32_t va_width; + uint32_t pa_width; +} ZXDH_TLB_MGR_T; + int zxdh_np_host_init(uint32_t dev_id, ZXDH_DEV_INIT_CTRL_T *p_dev_init_ctrl); +int zxdh_np_online_uninit(uint32_t dev_id, char *port_name, uint32_t queue_id); #endif /* ZXDH_NP_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 45109 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
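The bit helpers in this patch (zxdh_np_comm_read_bits/write_bits and their _ex wrappers) name a field by the position of its most significant bit counted from the LSB of the whole buffer, plus a length, and the buffer itself is addressed MSB-first (absolute bit 0 is bit 7 of byte 0). Below is a minimal standalone sketch of just that addressing convention; it is a simplified reimplementation for illustration, not the driver code, and the field position and test value are made up.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Set absolute bits [start_bit, end_bit] (MSB-first layout) to 'data'. */
static void set_bits(uint8_t *buf, uint32_t start_bit, uint32_t end_bit,
		     uint32_t data)
{
	uint32_t i;

	for (i = 0; i <= end_bit - start_bit; i++) {
		uint32_t abs_bit = end_bit - i;       /* LSB of data fills end_bit */
		uint32_t shift = 7u - (abs_bit & 7u); /* MSB-first inside a byte */

		if (data & (1u << i))
			buf[abs_bit >> 3] |= (uint8_t)(1u << shift);
		else
			buf[abs_bit >> 3] &= (uint8_t)~(1u << shift);
	}
}

/* Same index translation as the *_ex() wrappers in the patch:
 * msb_pos is LSB-relative, so the absolute start is buf_bits - 1 - msb_pos. */
static void write_field(uint8_t *buf, uint32_t buf_bits,
			uint32_t msb_pos, uint32_t len, uint32_t data)
{
	uint32_t start = buf_bits - 1 - msb_pos;

	set_bits(buf, start, start + len - 1, data);
}

int main(void)
{
	uint8_t cmd[16];	/* one 128-bit command word, as in the DTB path */

	memset(cmd, 0, sizeof(cmd));
	/* Hypothetical 8-bit field whose MSB sits at bit 127: it lands in
	 * byte 0, the most significant byte of the command. */
	write_field(cmd, 128, 127, 8, 0xAB);
	printf("byte0 = 0x%02X\n", cmd[0]);	/* prints 0xAB */
	return 0;
}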
* [PATCH v6 03/15] net/zxdh: port tables init implementations 2024-12-26 3:37 ` [PATCH v6 " Junlong Wang 2024-12-26 3:37 ` [PATCH v6 01/15] net/zxdh: zxdh np init implementation Junlong Wang 2024-12-26 3:37 ` [PATCH v6 02/15] net/zxdh: zxdh np uninit implementation Junlong Wang @ 2024-12-26 3:37 ` Junlong Wang 2024-12-26 3:37 ` [PATCH v6 04/15] net/zxdh: port tables unint implementations Junlong Wang ` (11 subsequent siblings) 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-26 3:37 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 42795 bytes --] Insert port tables on the host. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/meson.build | 1 + drivers/net/zxdh/zxdh_ethdev.c | 24 ++ drivers/net/zxdh/zxdh_msg.c | 65 ++++ drivers/net/zxdh/zxdh_msg.h | 72 ++++ drivers/net/zxdh/zxdh_np.c | 648 ++++++++++++++++++++++++++++++++- drivers/net/zxdh/zxdh_np.h | 210 +++++++++++ drivers/net/zxdh/zxdh_pci.h | 2 + drivers/net/zxdh/zxdh_tables.c | 105 ++++++ drivers/net/zxdh/zxdh_tables.h | 148 ++++++++ 9 files changed, 1274 insertions(+), 1 deletion(-) create mode 100644 drivers/net/zxdh/zxdh_tables.c create mode 100644 drivers/net/zxdh/zxdh_tables.h diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build index ab24a3145c..5b3af87c5b 100644 --- a/drivers/net/zxdh/meson.build +++ b/drivers/net/zxdh/meson.build @@ -20,4 +20,5 @@ sources = files( 'zxdh_pci.c', 'zxdh_queue.c', 'zxdh_np.c', + 'zxdh_tables.c', ) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 4e114d95da..ff44816384 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -14,6 +14,7 @@ #include "zxdh_common.h" #include "zxdh_queue.h" #include "zxdh_np.h" +#include "zxdh_tables.h" struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; struct zxdh_shared_data *zxdh_shared_data; @@ -1144,6 +1145,25 @@ zxdh_np_init(struct rte_eth_dev *eth_dev) return 0; } +static int +zxdh_tables_init(struct rte_eth_dev *dev) +{ + int ret = 0; + + ret = zxdh_port_attr_init(dev); + if (ret) { + PMD_DRV_LOG(ERR, "zxdh_port_attr_init failed"); + return ret; + } + + ret = zxdh_panel_table_init(dev); + if (ret) { + PMD_DRV_LOG(ERR, " panel table init failed"); + return ret; + } + return ret; +} + static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) { @@ -1220,6 +1240,10 @@ zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) if (ret != 0) goto err_zxdh_init; + ret = zxdh_tables_init(eth_dev); + if (ret != 0) + goto err_zxdh_init; + return ret; err_zxdh_init: diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c index dd7a518a51..aa2e10fd45 100644 --- a/drivers/net/zxdh/zxdh_msg.c +++ b/drivers/net/zxdh/zxdh_msg.c @@ -14,6 +14,7 @@ #include "zxdh_ethdev.h" #include "zxdh_logs.h" #include "zxdh_msg.h" +#include "zxdh_pci.h" #define ZXDH_REPS_INFO_FLAG_USABLE 0x00 #define ZXDH_BAR_SEQID_NUM_MAX 256 @@ -100,6 +101,7 @@ #define ZXDH_BAR_CHAN_MSG_EMEC 1 #define ZXDH_BAR_CHAN_MSG_NO_ACK 0 #define ZXDH_BAR_CHAN_MSG_ACK 1 +#define ZXDH_MSG_REPS_OK 0xff uint8_t subchan_id_tbl[ZXDH_BAR_MSG_SRC_NUM][ZXDH_BAR_MSG_DST_NUM] = { {ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_SEND}, @@ -1079,3 +1081,66 @@ int zxdh_get_bar_offset(struct zxdh_bar_offset_params *paras, res->bar_length = recv_msg.offset_reps.length; return ZXDH_BAR_MSG_OK; } + +int zxdh_vf_send_msg_to_pf(struct rte_eth_dev *dev, void *msg_req, + uint16_t msg_req_len, void *reply, uint16_t
reply_len) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_recviver_mem result = {0}; + struct zxdh_msg_reply_info reply_info = {0}; + int ret = 0; + + if (reply) { + RTE_ASSERT(reply_len < sizeof(zxdh_msg_reply_info)); + result.recv_buffer = reply; + result.buffer_len = reply_len; + } else { + result.recv_buffer = &reply_info; + result.buffer_len = sizeof(reply_info); + } + + struct zxdh_msg_reply_head *reply_head = + &(((struct zxdh_msg_reply_info *)result.recv_buffer)->reply_head); + struct zxdh_msg_reply_body *reply_body = + &(((struct zxdh_msg_reply_info *)result.recv_buffer)->reply_body); + + struct zxdh_pci_bar_msg in = { + .virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + + ZXDH_MSG_CHAN_PFVFSHARE_OFFSET), + .payload_addr = msg_req, + .payload_len = msg_req_len, + .src = ZXDH_MSG_CHAN_END_VF, + .dst = ZXDH_MSG_CHAN_END_PF, + .module_id = ZXDH_MODULE_BAR_MSG_TO_PF, + .src_pcieid = hw->pcie_id, + .dst_pcieid = ZXDH_PF_PCIE_ID(hw->pcie_id), + }; + + ret = zxdh_bar_chan_sync_msg_send(&in, &result); + if (ret != ZXDH_BAR_MSG_OK) { + PMD_MSG_LOG(ERR, + "vf[%d] send bar msg to pf failed.ret %d", hw->vport.vfid, ret); + return -1; + } + if (reply_head->flag != ZXDH_MSG_REPS_OK) { + PMD_MSG_LOG(ERR, "vf[%d] get pf reply failed: reply_head flag : 0x%x(0xff is OK).replylen %d", + hw->vport.vfid, reply_head->flag, reply_head->reps_len); + return -1; + } + if (reply_body->flag != ZXDH_REPS_SUCC) { + PMD_MSG_LOG(ERR, "vf[%d] msg processing failed", hw->vfid); + return -1; + } + return 0; +} + +void zxdh_msg_head_build(struct zxdh_hw *hw, enum zxdh_msg_type type, + struct zxdh_msg_info *msg_info) +{ + struct zxdh_msg_head *msghead = &msg_info->msg_head; + + msghead->msg_type = type; + msghead->vport = hw->vport.vport; + msghead->vf_id = hw->vport.vfid; + msghead->pcieid = hw->pcie_id; +} diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index fbc79e8f9d..b7b17b8696 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -33,6 +33,19 @@ #define ZXDH_BAR_MSG_PAYLOAD_MAX_LEN \ (ZXDH_BAR_MSG_ADDR_CHAN_INTERVAL - sizeof(struct zxdh_bar_msg_header)) +#define ZXDH_MSG_ADDR_CHAN_INTERVAL (2 * 1024) /* channel size */ +#define ZXDH_MSG_PAYLOAD_MAX_LEN \ + (ZXDH_MSG_ADDR_CHAN_INTERVAL - sizeof(struct zxdh_bar_msg_header)) + +#define ZXDH_MSG_REPLYBODY_HEAD sizeof(enum zxdh_reps_flag) +#define ZXDH_MSG_HEADER_SIZE 4 +#define ZXDH_MSG_REPLY_BODY_MAX_LEN \ + (ZXDH_MSG_PAYLOAD_MAX_LEN - sizeof(struct zxdh_msg_reply_head)) + +#define ZXDH_MSG_HEAD_LEN 8 +#define ZXDH_MSG_REQ_BODY_MAX_LEN \ + (ZXDH_MSG_PAYLOAD_MAX_LEN - ZXDH_MSG_HEAD_LEN) + enum ZXDH_DRIVER_TYPE { ZXDH_MSG_CHAN_END_MPF = 0, ZXDH_MSG_CHAN_END_PF, @@ -151,6 +164,13 @@ enum pciebar_layout_type { ZXDH_URI_MAX, }; +enum zxdh_msg_type { + ZXDH_NULL = 0, + ZXDH_VF_PORT_INIT = 1, + + ZXDH_MSG_TYPE_END, +}; + struct zxdh_msix_para { uint16_t pcie_id; uint16_t vector_risc; @@ -240,6 +260,54 @@ struct zxdh_offset_get_msg { uint16_t type; }; +struct zxdh_msg_reply_head { + uint8_t flag; + uint16_t reps_len; + uint8_t resvd; +} __rte_packed; + +enum zxdh_reps_flag { + ZXDH_REPS_FAIL, + ZXDH_REPS_SUCC = 0xaa, +} __rte_packed; + +struct zxdh_msg_reply_body { + enum zxdh_reps_flag flag; + union { + uint8_t reply_data[ZXDH_MSG_REPLY_BODY_MAX_LEN - sizeof(enum zxdh_reps_flag)]; + } __rte_packed; +} __rte_packed; + +struct zxdh_msg_reply_info { + struct zxdh_msg_reply_head reply_head; + struct zxdh_msg_reply_body reply_body; +} __rte_packed; + +struct zxdh_vf_init_msg { + uint8_t link_up; + 
uint8_t rsv; + uint16_t base_qid; + uint8_t rss_enable; +} __rte_packed; + +struct zxdh_msg_head { + enum zxdh_msg_type msg_type; + uint16_t vport; + uint16_t vf_id; + uint16_t pcieid; +} __rte_packed; + +struct zxdh_msg_info { + union { + uint8_t head_len[ZXDH_MSG_HEAD_LEN]; + struct zxdh_msg_head msg_head; + }; + union { + uint8_t datainfo[ZXDH_MSG_REQ_BODY_MAX_LEN]; + struct zxdh_vf_init_msg vf_init_msg; + } __rte_packed data; +} __rte_packed; + typedef int (*zxdh_bar_chan_msg_recv_callback)(void *pay_load, uint16_t len, void *reps_buffer, uint16_t *reps_len, void *dev); @@ -253,5 +321,9 @@ int zxdh_bar_chan_sync_msg_send(struct zxdh_pci_bar_msg *in, struct zxdh_msg_recviver_mem *result); int zxdh_bar_irq_recv(uint8_t src, uint8_t dst, uint64_t virt_addr, void *dev); +void zxdh_msg_head_build(struct zxdh_hw *hw, enum zxdh_msg_type type, + struct zxdh_msg_info *msg_info); +int zxdh_vf_send_msg_to_pf(struct rte_eth_dev *dev, void *msg_req, + uint16_t msg_req_len, void *reply, uint16_t reply_len); #endif /* ZXDH_MSG_H */ diff --git a/drivers/net/zxdh/zxdh_np.c b/drivers/net/zxdh/zxdh_np.c index 28728b0c68..db536d96e3 100644 --- a/drivers/net/zxdh/zxdh_np.c +++ b/drivers/net/zxdh/zxdh_np.c @@ -9,6 +9,7 @@ #include <rte_log.h> #include <rte_debug.h> #include <rte_malloc.h> +#include <rte_memcpy.h> #include "zxdh_np.h" #include "zxdh_logs.h" @@ -16,11 +17,14 @@ static uint64_t g_np_bar_offset; static ZXDH_DEV_MGR_T g_dev_mgr; static ZXDH_SDT_MGR_T g_sdt_mgr; +static uint32_t g_dpp_dtb_int_enable; +static uint32_t g_table_type[ZXDH_DEV_CHANNEL_MAX][ZXDH_DEV_SDT_ID_MAX]; ZXDH_PPU_CLS_BITMAP_T g_ppu_cls_bit_map[ZXDH_DEV_CHANNEL_MAX]; ZXDH_DTB_MGR_T *p_dpp_dtb_mgr[ZXDH_DEV_CHANNEL_MAX]; ZXDH_RISCV_DTB_MGR *p_riscv_dtb_queue_mgr[ZXDH_DEV_CHANNEL_MAX]; ZXDH_TLB_MGR_T *g_p_dpp_tlb_mgr[ZXDH_DEV_CHANNEL_MAX]; ZXDH_REG_T g_dpp_reg_info[4]; +ZXDH_DTB_TABLE_T g_dpp_dtb_table_info[4]; #define ZXDH_SDT_MGR_PTR_GET() (&g_sdt_mgr) #define ZXDH_SDT_SOFT_TBL_GET(id) (g_sdt_mgr.sdt_tbl_array[id]) @@ -76,6 +80,92 @@ do {\ } \ } while (0) +#define ZXDH_COMM_CHECK_POINT(point)\ +do {\ + if ((point) == NULL) {\ + PMD_DRV_LOG(ERR, "ZXIC %s:%d[Error:POINT NULL] ! 
FUNCTION : %s!",\ + __FILE__, __LINE__, __func__);\ + RTE_ASSERT(0);\ + } \ +} while (0) + + +#define ZXDH_COMM_CHECK_POINT_MEMORY_FREE(point, ptr)\ +do {\ + if ((point) == NULL) {\ + PMD_DRV_LOG(ERR, "ZXIC %s:%d[Error:POINT NULL] !"\ + "FUNCTION : %s!", __FILE__, __LINE__, __func__);\ + rte_free(ptr);\ + RTE_ASSERT(0);\ + } \ +} while (0) + +#define ZXDH_COMM_CHECK_RC_MEMORY_FREE_NO_ASSERT(rc, becall, ptr)\ +do {\ + if ((rc) != 0) {\ + PMD_DRV_LOG(ERR, "ZXICP %s:%d, %s Call"\ + " %s Fail!", __FILE__, __LINE__, __func__, becall);\ + rte_free(ptr);\ + } \ +} while (0) + +#define ZXDH_COMM_CONVERT16(w_data) \ + (((w_data) & 0xff) << 8) + +#define ZXDH_DTB_TAB_UP_VIR_ADDR_GET(DEV_ID, QUEUE_ID, INDEX) \ + ((INDEX) * p_dpp_dtb_mgr[(DEV_ID)]->queue_info[(QUEUE_ID)].tab_up.item_size) + +#define ZXDH_DTB_TAB_DOWN_VIR_ADDR_GET(DEV_ID, QUEUE_ID, INDEX) \ + ((INDEX) * p_dpp_dtb_mgr[(DEV_ID)]->queue_info[(QUEUE_ID)].tab_down.item_size) + +#define ZXDH_DTB_TAB_DOWN_WR_INDEX_GET(DEV_ID, QUEUE_ID) \ + (p_dpp_dtb_mgr[(DEV_ID)]->queue_info[(QUEUE_ID)].tab_down.wr_index) + +#define ZXDH_DTB_QUEUE_INIT_FLAG_GET(DEV_ID, QUEUE_ID) \ + (p_dpp_dtb_mgr[(DEV_ID)]->queue_info[(QUEUE_ID)].init_flag) + +static uint32_t +zxdh_np_comm_is_big_endian(void) +{ + ZXDH_ENDIAN_U c_data; + + c_data.a = 1; + + if (c_data.b == 1) + return 0; + else + return 1; +} + +static void +zxdh_np_comm_swap(uint8_t *p_uc_data, uint32_t dw_byte_len) +{ + uint16_t *p_w_tmp = NULL; + uint32_t *p_dw_tmp = NULL; + uint32_t dw_byte_num; + uint8_t uc_byte_mode; + uint32_t uc_is_big_flag; + uint32_t i; + + p_dw_tmp = (uint32_t *)(p_uc_data); + uc_is_big_flag = zxdh_np_comm_is_big_endian(); + if (uc_is_big_flag) + return; + + dw_byte_num = dw_byte_len >> 2; + uc_byte_mode = dw_byte_len % 4 & 0xff; + + for (i = 0; i < dw_byte_num; i++) { + (*p_dw_tmp) = ZXDH_COMM_CONVERT16(*p_dw_tmp); + p_dw_tmp++; + } + + if (uc_byte_mode > 1) { + p_w_tmp = (uint16_t *)(p_dw_tmp); + (*p_w_tmp) = ZXDH_COMM_CONVERT16(*p_w_tmp); + } +} + static uint32_t zxdh_np_dev_init(void) { @@ -503,7 +593,7 @@ zxdh_np_dtb_queue_vm_info_get(uint32_t dev_id, p_vm_info->func_num = vm_info.cfg_func_num; p_vm_info->vfunc_active = vm_info.cfg_vfunc_active; - return 0; + return rc; } static uint32_t @@ -808,3 +898,559 @@ zxdh_np_online_uninit(uint32_t dev_id, return 0; } + +static uint32_t +zxdh_np_sdt_tbl_type_get(uint32_t dev_id, uint32_t sdt_no) +{ + return g_table_type[dev_id][sdt_no]; +} + + +static ZXDH_DTB_TABLE_T * +zxdh_np_table_info_get(uint32_t table_type) +{ + return &g_dpp_dtb_table_info[table_type]; +} + +static uint32_t +zxdh_np_dtb_write_table_cmd(uint32_t dev_id, + ZXDH_DTB_TABLE_INFO_E table_type, + void *p_cmd_data, + void *p_cmd_buff) +{ + uint32_t field_cnt; + ZXDH_DTB_TABLE_T *p_table_info = NULL; + ZXDH_DTB_FIELD_T *p_field_info = NULL; + uint32_t temp_data; + uint32_t rc; + + ZXDH_COMM_CHECK_POINT(p_cmd_data); + ZXDH_COMM_CHECK_POINT(p_cmd_buff); + p_table_info = zxdh_np_table_info_get(table_type); + p_field_info = p_table_info->p_fields; + ZXDH_COMM_CHECK_DEV_POINT(dev_id, p_table_info); + + for (field_cnt = 0; field_cnt < p_table_info->field_num; field_cnt++) { + temp_data = *((uint32_t *)p_cmd_data + field_cnt) & ZXDH_COMM_GET_BIT_MASK(uint32_t, + p_field_info[field_cnt].len); + + rc = zxdh_np_comm_write_bits_ex((uint8_t *)p_cmd_buff, + ZXDH_DTB_TABLE_CMD_SIZE_BIT, + temp_data, + p_field_info[field_cnt].lsb_pos, + p_field_info[field_cnt].len); + + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxic_comm_write_bits"); + } + + return rc; +} + +static uint32_t 
+zxdh_np_dtb_smmu0_write_entry_data(uint32_t dev_id, + uint32_t mode, + uint32_t addr, + uint32_t *p_data, + ZXDH_DTB_ENTRY_T *p_entry) +{ + ZXDH_DTB_ERAM_TABLE_FORM_T dtb_eram_form_info = {0}; + uint32_t rc = 0; + + dtb_eram_form_info.valid = ZXDH_DTB_TABLE_VALID; + dtb_eram_form_info.type_mode = ZXDH_DTB_TABLE_MODE_ERAM; + dtb_eram_form_info.data_mode = mode; + dtb_eram_form_info.cpu_wr = 1; + dtb_eram_form_info.addr = addr; + dtb_eram_form_info.cpu_rd = 0; + dtb_eram_form_info.cpu_rd_mode = 0; + + if (ZXDH_ERAM128_OPR_128b == mode) { + p_entry->data_in_cmd_flag = 0; + p_entry->data_size = 128 / 8; + + rc = zxdh_np_dtb_write_table_cmd(dev_id, ZXDH_DTB_TABLE_ERAM_128, + &dtb_eram_form_info, p_entry->cmd); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_dtb_write_table_cmd"); + + memcpy(p_entry->data, p_data, 128 / 8); + } else if (ZXDH_ERAM128_OPR_64b == mode) { + p_entry->data_in_cmd_flag = 1; + p_entry->data_size = 64 / 8; + dtb_eram_form_info.data_l = *(p_data + 1); + dtb_eram_form_info.data_h = *(p_data); + + rc = zxdh_np_dtb_write_table_cmd(dev_id, ZXDH_DTB_TABLE_ERAM_64, + &dtb_eram_form_info, p_entry->cmd); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_dtb_write_table_cmd"); + + } else if (ZXDH_ERAM128_OPR_1b == mode) { + p_entry->data_in_cmd_flag = 1; + p_entry->data_size = 1; + dtb_eram_form_info.data_h = *(p_data); + + rc = zxdh_np_dtb_write_table_cmd(dev_id, ZXDH_DTB_TABLE_ERAM_1, + &dtb_eram_form_info, p_entry->cmd); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_dtb_write_table_cmd"); + } + + return rc; +} + +static uint32_t +zxdh_np_dtb_se_smmu0_ind_write(uint32_t dev_id, + uint32_t base_addr, + uint32_t index, + uint32_t wrt_mode, + uint32_t *p_data, + ZXDH_DTB_ENTRY_T *p_entry) +{ + uint32_t temp_idx; + uint32_t dtb_ind_addr; + uint32_t rc; + + switch (wrt_mode) { + case ZXDH_ERAM128_OPR_128b: + { + if ((0xFFFFFFFF - (base_addr)) < (index)) { + PMD_DRV_LOG(ERR, "ICM %s:%d[Error:VALUE[val0=0x%x]" + "INVALID] [val1=0x%x] ! 
FUNCTION :%s !", __FILE__, __LINE__, + base_addr, index, __func__); + + return ZXDH_PAR_CHK_INVALID_INDEX; + } + if (base_addr + index > ZXDH_SE_SMMU0_ERAM_ADDR_NUM_TOTAL - 1) { + PMD_DRV_LOG(ERR, "dpp_se_smmu0_ind_write : index out of range !"); + return 1; + } + temp_idx = index << 7; + break; + } + + case ZXDH_ERAM128_OPR_64b: + { + if ((base_addr + (index >> 1)) > ZXDH_SE_SMMU0_ERAM_ADDR_NUM_TOTAL - 1) { + PMD_DRV_LOG(ERR, "dpp_se_smmu0_ind_write : index out of range !"); + return 1; + } + temp_idx = index << 6; + break; + } + + case ZXDH_ERAM128_OPR_1b: + { + if ((base_addr + (index >> 7)) > ZXDH_SE_SMMU0_ERAM_ADDR_NUM_TOTAL - 1) { + PMD_DRV_LOG(ERR, "dpp_se_smmu0_ind_write : index out of range !"); + return 1; + } + + temp_idx = index; + } + } + + dtb_ind_addr = ((base_addr << 7) & ZXDH_ERAM128_BADDR_MASK) + temp_idx; + + PMD_DRV_LOG(INFO, " dtb eram item 1bit addr: 0x%x", dtb_ind_addr); + + rc = zxdh_np_dtb_smmu0_write_entry_data(dev_id, + wrt_mode, + dtb_ind_addr, + p_data, + p_entry); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_dtb_smmu0_write_entry_data"); + + return rc; +} + +static uint32_t +zxdh_np_eram_dtb_len_get(uint32_t mode) +{ + uint32_t dtb_len = 0; + + switch (mode) { + case ZXDH_ERAM128_OPR_128b: + { + dtb_len += 2; + break; + } + case ZXDH_ERAM128_OPR_64b: + case ZXDH_ERAM128_OPR_1b: + { + dtb_len += 1; + break; + } + default: + break; + } + + return dtb_len; +} + +static uint32_t +zxdh_np_dtb_eram_one_entry(uint32_t dev_id, + uint32_t sdt_no, + uint32_t del_en, + void *pdata, + uint32_t *p_dtb_len, + ZXDH_DTB_ENTRY_T *p_dtb_one_entry) +{ + uint32_t buff[ZXDH_SMMU0_READ_REG_MAX_NUM] = {0}; + ZXDH_SDTTBL_ERAM_T sdt_eram = {0}; + ZXDH_DTB_ERAM_ENTRY_INFO_T *peramdata = NULL; + uint32_t base_addr; + uint32_t index; + uint32_t opr_mode; + uint32_t rc; + + ZXDH_COMM_CHECK_POINT(pdata); + ZXDH_COMM_CHECK_POINT(p_dtb_one_entry); + ZXDH_COMM_CHECK_POINT(p_dtb_len); + + peramdata = (ZXDH_DTB_ERAM_ENTRY_INFO_T *)pdata; + index = peramdata->index; + base_addr = sdt_eram.eram_base_addr; + opr_mode = sdt_eram.eram_mode; + + switch (opr_mode) { + case ZXDH_ERAM128_TBL_128b: + { + opr_mode = ZXDH_ERAM128_OPR_128b; + break; + } + case ZXDH_ERAM128_TBL_64b: + { + opr_mode = ZXDH_ERAM128_OPR_64b; + break; + } + + case ZXDH_ERAM128_TBL_1b: + { + opr_mode = ZXDH_ERAM128_OPR_1b; + break; + } + } + + if (del_en) { + memset((uint8_t *)buff, 0, sizeof(buff)); + rc = zxdh_np_dtb_se_smmu0_ind_write(dev_id, + base_addr, + index, + opr_mode, + buff, + p_dtb_one_entry); + ZXDH_COMM_CHECK_DEV_RC(sdt_no, rc, "zxdh_dtb_se_smmu0_ind_write"); + } else { + rc = zxdh_np_dtb_se_smmu0_ind_write(dev_id, + base_addr, + index, + opr_mode, + peramdata->p_data, + p_dtb_one_entry); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rc, "zxdh_dtb_se_smmu0_ind_write"); + } + *p_dtb_len = zxdh_np_eram_dtb_len_get(opr_mode); + + return rc; +} + +static uint32_t +zxdh_np_dtb_data_write(uint8_t *p_data_buff, + uint32_t addr_offset, + ZXDH_DTB_ENTRY_T *entry) +{ + ZXDH_COMM_CHECK_POINT(p_data_buff); + ZXDH_COMM_CHECK_POINT(entry); + + uint8_t *p_cmd = p_data_buff + addr_offset; + uint32_t cmd_size = ZXDH_DTB_TABLE_CMD_SIZE_BIT / 8; + + uint8_t *p_data = p_cmd + cmd_size; + uint32_t data_size = entry->data_size; + + uint8_t *cmd = (uint8_t *)entry->cmd; + uint8_t *data = (uint8_t *)entry->data; + + rte_memcpy(p_cmd, cmd, cmd_size); + + if (!entry->data_in_cmd_flag) { + zxdh_np_comm_swap(data, data_size); + rte_memcpy(p_data, data, data_size); + } + + return 0; +} + +static uint32_t +zxdh_np_dtb_queue_enable_get(uint32_t dev_id, + uint32_t 
queue_id, + uint32_t *enable) +{ + uint32_t rc = 0; + ZXDH_DTB_QUEUE_VM_INFO_T vm_info = {0}; + + rc = zxdh_np_dtb_queue_vm_info_get(dev_id, queue_id, &vm_info); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_dtb_queue_vm_info_get"); + + *enable = vm_info.queue_en; + return rc; +} + +static uint32_t +zxdh_np_dtb_item_buff_wr(uint32_t dev_id, + uint32_t queue_id, + uint32_t dir_flag, + uint32_t index, + uint32_t pos, + uint32_t len, + uint32_t *p_data) +{ + uint64_t addr; + + if (dir_flag == 1) + addr = ZXDH_DTB_TAB_UP_VIR_ADDR_GET(dev_id, queue_id, index) + + ZXDH_DTB_ITEM_ACK_SIZE + pos * 4; + else + addr = ZXDH_DTB_TAB_DOWN_VIR_ADDR_GET(dev_id, queue_id, index) + + ZXDH_DTB_ITEM_ACK_SIZE + pos * 4; + + memcpy((uint8_t *)(addr), p_data, len * 4); + + return 0; +} + +static uint32_t +zxdh_np_dtb_item_ack_rd(uint32_t dev_id, + uint32_t queue_id, + uint32_t dir_flag, + uint32_t index, + uint32_t pos, + uint32_t *p_data) +{ + uint64_t addr; + uint32_t val; + + if (dir_flag == 1) + addr = ZXDH_DTB_TAB_UP_VIR_ADDR_GET(dev_id, queue_id, index) + pos * 4; + else + addr = ZXDH_DTB_TAB_DOWN_VIR_ADDR_GET(dev_id, queue_id, index) + pos * 4; + + val = *((volatile uint32_t *)(addr)); + + *p_data = val; + + return 0; +} + +static uint32_t +zxdh_np_dtb_item_ack_wr(uint32_t dev_id, + uint32_t queue_id, + uint32_t dir_flag, + uint32_t index, + uint32_t pos, + uint32_t data) +{ + uint64_t addr; + + if (dir_flag == 1) + addr = ZXDH_DTB_TAB_UP_VIR_ADDR_GET(dev_id, queue_id, index) + pos * 4; + else + addr = ZXDH_DTB_TAB_DOWN_VIR_ADDR_GET(dev_id, queue_id, index) + pos * 4; + + *((volatile uint32_t *)(addr)) = data; + + return 0; +} + +static uint32_t +zxdh_np_dtb_queue_item_info_set(uint32_t dev_id, + uint32_t queue_id, + ZXDH_DTB_QUEUE_ITEM_INFO_T *p_item_info) +{ + ZXDH_DTB_QUEUE_LEN_T dtb_len = {0}; + uint32_t rc; + + dtb_len.cfg_dtb_cmd_type = p_item_info->cmd_type; + dtb_len.cfg_dtb_cmd_int_en = p_item_info->int_en; + dtb_len.cfg_queue_dtb_len = p_item_info->data_len; + + rc = zxdh_np_reg_write(dev_id, ZXDH_DTB_CFG_QUEUE_DTB_LEN, + 0, queue_id, (void *)&dtb_len); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rc, "dpp_reg_write"); + return rc; +} + +static uint32_t +zxdh_np_dtb_tab_down_info_set(uint32_t dev_id, + uint32_t queue_id, + uint32_t int_flag, + uint32_t data_len, + uint32_t *p_data, + uint32_t *p_item_index) +{ + ZXDH_DTB_QUEUE_ITEM_INFO_T item_info = {0}; + uint32_t unused_item_num = 0; + uint32_t queue_en = 0; + uint32_t ack_vale = 0; + uint64_t phy_addr; + uint32_t item_index; + uint32_t i; + uint32_t rc; + + if (ZXDH_DTB_QUEUE_INIT_FLAG_GET(dev_id, queue_id) == 0) { + PMD_DRV_LOG(ERR, "dtb queue %d is not init.", queue_id); + return ZXDH_RC_DTB_QUEUE_IS_NOT_INIT; + } + + if (data_len % 4 != 0) + return ZXDH_RC_DTB_PARA_INVALID; + + rc = zxdh_np_dtb_queue_enable_get(dev_id, queue_id, &queue_en); + if (!queue_en) { + PMD_DRV_LOG(ERR, "the queue %d is not enable!,rc=%d", queue_id, rc); + return ZXDH_RC_DTB_QUEUE_NOT_ENABLE; + } + + rc = zxdh_np_dtb_queue_unused_item_num_get(dev_id, queue_id, &unused_item_num); + if (unused_item_num == 0) + return ZXDH_RC_DTB_QUEUE_ITEM_HW_EMPTY; + + for (i = 0; i < ZXDH_DTB_QUEUE_ITEM_NUM_MAX; i++) { + item_index = ZXDH_DTB_TAB_DOWN_WR_INDEX_GET(dev_id, queue_id) % + ZXDH_DTB_QUEUE_ITEM_NUM_MAX; + + rc = zxdh_np_dtb_item_ack_rd(dev_id, queue_id, 0, + item_index, 0, &ack_vale); + + ZXDH_DTB_TAB_DOWN_WR_INDEX_GET(dev_id, queue_id)++; + + if ((ack_vale >> 8) == ZXDH_DTB_TAB_ACK_UNUSED_MASK) + break; + } + + if (i == ZXDH_DTB_QUEUE_ITEM_NUM_MAX) + return 
ZXDH_RC_DTB_QUEUE_ITEM_SW_EMPTY; + + rc = zxdh_np_dtb_item_buff_wr(dev_id, queue_id, 0, + item_index, 0, data_len, p_data); + + rc = zxdh_np_dtb_item_ack_wr(dev_id, queue_id, 0, + item_index, 0, ZXDH_DTB_TAB_ACK_IS_USING_MASK); + + item_info.cmd_vld = 1; + item_info.cmd_type = 0; + item_info.int_en = int_flag; + item_info.data_len = data_len / 4; + phy_addr = p_dpp_dtb_mgr[dev_id]->queue_info[queue_id].tab_down.start_phy_addr + + item_index * p_dpp_dtb_mgr[dev_id]->queue_info[queue_id].tab_down.item_size; + item_info.data_hddr = ((phy_addr >> 4) >> 32) & 0xffffffff; + item_info.data_laddr = (phy_addr >> 4) & 0xffffffff; + + rc = zxdh_np_dtb_queue_item_info_set(dev_id, queue_id, &item_info); + *p_item_index = item_index; + + return rc; +} + +static uint32_t +zxdh_np_dtb_write_down_table_data(uint32_t dev_id, + uint32_t queue_id, + uint32_t down_table_len, + uint8_t *p_down_table_buff, + uint32_t *p_element_id) +{ + uint32_t rc = 0; + uint32_t dtb_interrupt_status = 0; + + dtb_interrupt_status = g_dpp_dtb_int_enable; + + rc = zxdh_np_dtb_tab_down_info_set(dev_id, + queue_id, + dtb_interrupt_status, + down_table_len / 4, + (uint32_t *)p_down_table_buff, + p_element_id); + return rc; +} + +int +zxdh_np_dtb_table_entry_write(uint32_t dev_id, + uint32_t queue_id, + uint32_t entrynum, + ZXDH_DTB_USER_ENTRY_T *down_entries) +{ + ZXDH_DTB_USER_ENTRY_T *pentry = NULL; + ZXDH_DTB_ENTRY_T dtb_one_entry = {0}; + uint8_t entry_cmd[ZXDH_DTB_TABLE_CMD_SIZE_BIT] = {0}; + uint8_t entry_data[ZXDH_ETCAM_WIDTH_MAX] = {0}; + uint8_t *p_data_buff = NULL; + uint8_t *p_data_buff_ex = NULL; + uint32_t element_id = 0xff; + uint32_t one_dtb_len = 0; + uint32_t dtb_len = 0; + uint32_t entry_index; + uint32_t sdt_no; + uint32_t tbl_type; + uint32_t addr_offset; + uint32_t max_size; + uint32_t rc; + + p_data_buff = rte_zmalloc(NULL, ZXDH_DTB_TABLE_DATA_BUFF_SIZE, 0); + ZXDH_COMM_CHECK_POINT(p_data_buff); + + p_data_buff_ex = rte_zmalloc(NULL, ZXDH_DTB_TABLE_DATA_BUFF_SIZE, 0); + ZXDH_COMM_CHECK_POINT_MEMORY_FREE(p_data_buff_ex, p_data_buff); + + dtb_one_entry.cmd = entry_cmd; + dtb_one_entry.data = entry_data; + + max_size = (ZXDH_DTB_TABLE_DATA_BUFF_SIZE / 16) - 1; + + for (entry_index = 0; entry_index < entrynum; entry_index++) { + pentry = down_entries + entry_index; + sdt_no = pentry->sdt_no; + tbl_type = zxdh_np_sdt_tbl_type_get(dev_id, sdt_no); + switch (tbl_type) { + case ZXDH_SDT_TBLT_ERAM: + { + rc = zxdh_np_dtb_eram_one_entry(dev_id, sdt_no, ZXDH_DTB_ITEM_ADD_OR_UPDATE, + pentry->p_entry_data, &one_dtb_len, &dtb_one_entry); + break; + } + default: + { + PMD_DRV_LOG(ERR, "SDT table_type[ %d ] is invalid!", tbl_type); + rte_free(p_data_buff); + rte_free(p_data_buff_ex); + return 1; + } + } + + addr_offset = dtb_len * ZXDH_DTB_LEN_POS_SETP; + dtb_len += one_dtb_len; + if (dtb_len > max_size) { + rte_free(p_data_buff); + rte_free(p_data_buff_ex); + PMD_DRV_LOG(ERR, " %s error dtb_len>%u!", __func__, + max_size); + return ZXDH_RC_DTB_DOWN_LEN_INVALID; + } + rc = zxdh_np_dtb_data_write(p_data_buff, addr_offset, &dtb_one_entry); + memset(entry_cmd, 0x0, sizeof(entry_cmd)); + memset(entry_data, 0x0, sizeof(entry_data)); + } + + if (dtb_len == 0) { + rte_free(p_data_buff); + rte_free(p_data_buff_ex); + return ZXDH_RC_DTB_DOWN_LEN_INVALID; + } + + rc = zxdh_np_dtb_write_down_table_data(dev_id, + queue_id, + dtb_len * 16, + p_data_buff, + &element_id); + rte_free(p_data_buff); + rte_free(p_data_buff_ex); + + return rc; +} diff --git a/drivers/net/zxdh/zxdh_np.h b/drivers/net/zxdh/zxdh_np.h index dc0e867827..40961c02a2 
100644 --- a/drivers/net/zxdh/zxdh_np.h +++ b/drivers/net/zxdh/zxdh_np.h @@ -7,6 +7,8 @@ #include <stdint.h> +#define ZXDH_DISABLE (0) +#define ZXDH_ENABLE (1) #define ZXDH_PORT_NAME_MAX (32) #define ZXDH_DEV_CHANNEL_MAX (2) #define ZXDH_DEV_SDT_ID_MAX (256U) @@ -52,6 +54,94 @@ #define ZXDH_ACL_TBL_ID_NUM (8U) #define ZXDH_ACL_BLOCK_NUM (8U) +#define ZXDH_SMMU0_READ_REG_MAX_NUM (4) + +#define ZXDH_DTB_ITEM_ACK_SIZE (16) +#define ZXDH_DTB_ITEM_BUFF_SIZE (16 * 1024) +#define ZXDH_DTB_ITEM_SIZE (16 + 16 * 1024) +#define ZXDH_DTB_TAB_UP_SIZE ((16 + 16 * 1024) * 32) +#define ZXDH_DTB_TAB_DOWN_SIZE ((16 + 16 * 1024) * 32) + +#define ZXDH_DTB_TAB_UP_ACK_VLD_MASK (0x555555) +#define ZXDH_DTB_TAB_DOWN_ACK_VLD_MASK (0x5a5a5a) +#define ZXDH_DTB_TAB_ACK_IS_USING_MASK (0x11111100) +#define ZXDH_DTB_TAB_ACK_UNUSED_MASK (0x0) +#define ZXDH_DTB_TAB_ACK_SUCCESS_MASK (0xff) +#define ZXDH_DTB_TAB_ACK_FAILED_MASK (0x1) +#define ZXDH_DTB_TAB_ACK_CHECK_VALUE (0x12345678) + +#define ZXDH_DTB_TAB_ACK_VLD_SHIFT (104) +#define ZXDH_DTB_TAB_ACK_STATUS_SHIFT (96) +#define ZXDH_DTB_LEN_POS_SETP (16) +#define ZXDH_DTB_ITEM_ADD_OR_UPDATE (0) +#define ZXDH_DTB_ITEM_DELETE (1) + +#define ZXDH_ETCAM_LEN_SIZE (6) +#define ZXDH_ETCAM_BLOCK_NUM (8) +#define ZXDH_ETCAM_TBLID_NUM (8) +#define ZXDH_ETCAM_RAM_NUM (8) +#define ZXDH_ETCAM_RAM_WIDTH (80U) +#define ZXDH_ETCAM_WR_MASK_MAX (((uint32_t)1 << ZXDH_ETCAM_RAM_NUM) - 1) +#define ZXDH_ETCAM_WIDTH_MIN (ZXDH_ETCAM_RAM_WIDTH) +#define ZXDH_ETCAM_WIDTH_MAX (ZXDH_ETCAM_RAM_NUM * ZXDH_ETCAM_RAM_WIDTH) + +#define ZXDH_DTB_TABLE_DATA_BUFF_SIZE (16384) +#define ZXDH_DTB_TABLE_CMD_SIZE_BIT (128) + +#define ZXDH_SE_SMMU0_ERAM_BLOCK_NUM (32) +#define ZXDH_SE_SMMU0_ERAM_ADDR_NUM_PER_BLOCK (0x4000) +#define ZXDH_SE_SMMU0_ERAM_ADDR_NUM_TOTAL \ + (ZXDH_SE_SMMU0_ERAM_BLOCK_NUM * ZXDH_SE_SMMU0_ERAM_ADDR_NUM_PER_BLOCK) + +/**errco code */ +#define ZXDH_RC_BASE (0x1000U) +#define ZXDH_PARAMETER_CHK_BASE (ZXDH_RC_BASE | 0x200) +#define ZXDH_PAR_CHK_POINT_NULL (ZXDH_PARAMETER_CHK_BASE | 0x001) +#define ZXDH_PAR_CHK_ARGIN_ZERO (ZXDH_PARAMETER_CHK_BASE | 0x002) +#define ZXDH_PAR_CHK_ARGIN_OVERFLOW (ZXDH_PARAMETER_CHK_BASE | 0x003) +#define ZXDH_PAR_CHK_ARGIN_ERROR (ZXDH_PARAMETER_CHK_BASE | 0x004) +#define ZXDH_PAR_CHK_INVALID_INDEX (ZXDH_PARAMETER_CHK_BASE | 0x005) +#define ZXDH_PAR_CHK_INVALID_RANGE (ZXDH_PARAMETER_CHK_BASE | 0x006) +#define ZXDH_PAR_CHK_INVALID_DEV_ID (ZXDH_PARAMETER_CHK_BASE | 0x007) +#define ZXDH_PAR_CHK_INVALID_PARA (ZXDH_PARAMETER_CHK_BASE | 0x008) + +#define ZXDH_ERAM128_BADDR_MASK (0x3FFFF80) + +#define ZXDH_DTB_TABLE_MODE_ERAM (0) +#define ZXDH_DTB_TABLE_MODE_DDR (1) +#define ZXDH_DTB_TABLE_MODE_ZCAM (2) +#define ZXDH_DTB_TABLE_MODE_ETCAM (3) +#define ZXDH_DTB_TABLE_MODE_MC_HASH (4) +#define ZXDH_DTB_TABLE_VALID (1) + +/* DTB module error code */ +#define ZXDH_RC_DTB_BASE (0xd00) +#define ZXDH_RC_DTB_MGR_EXIST (ZXDH_RC_DTB_BASE | 0x0) +#define ZXDH_RC_DTB_MGR_NOT_EXIST (ZXDH_RC_DTB_BASE | 0x1) +#define ZXDH_RC_DTB_QUEUE_RES_EMPTY (ZXDH_RC_DTB_BASE | 0x2) +#define ZXDH_RC_DTB_QUEUE_BUFF_SIZE_ERR (ZXDH_RC_DTB_BASE | 0x3) +#define ZXDH_RC_DTB_QUEUE_ITEM_HW_EMPTY (ZXDH_RC_DTB_BASE | 0x4) +#define ZXDH_RC_DTB_QUEUE_ITEM_SW_EMPTY (ZXDH_RC_DTB_BASE | 0x5) +#define ZXDH_RC_DTB_TAB_UP_BUFF_EMPTY (ZXDH_RC_DTB_BASE | 0x6) +#define ZXDH_RC_DTB_TAB_DOWN_BUFF_EMPTY (ZXDH_RC_DTB_BASE | 0x7) +#define ZXDH_RC_DTB_TAB_UP_TRANS_ERR (ZXDH_RC_DTB_BASE | 0x8) +#define ZXDH_RC_DTB_TAB_DOWN_TRANS_ERR (ZXDH_RC_DTB_BASE | 0x9) +#define ZXDH_RC_DTB_QUEUE_IS_WORKING (ZXDH_RC_DTB_BASE | 0xa) +#define 
ZXDH_RC_DTB_QUEUE_IS_NOT_INIT (ZXDH_RC_DTB_BASE | 0xb) +#define ZXDH_RC_DTB_MEMORY_ALLOC_ERR (ZXDH_RC_DTB_BASE | 0xc) +#define ZXDH_RC_DTB_PARA_INVALID (ZXDH_RC_DTB_BASE | 0xd) +#define ZXDH_RC_DMA_RANGE_INVALID (ZXDH_RC_DTB_BASE | 0xe) +#define ZXDH_RC_DMA_RCV_DATA_EMPTY (ZXDH_RC_DTB_BASE | 0xf) +#define ZXDH_RC_DTB_LPM_INSERT_FAIL (ZXDH_RC_DTB_BASE | 0x10) +#define ZXDH_RC_DTB_LPM_DELETE_FAIL (ZXDH_RC_DTB_BASE | 0x11) +#define ZXDH_RC_DTB_DOWN_LEN_INVALID (ZXDH_RC_DTB_BASE | 0x12) +#define ZXDH_RC_DTB_DOWN_HASH_CONFLICT (ZXDH_RC_DTB_BASE | 0x13) +#define ZXDH_RC_DTB_QUEUE_NOT_ALLOC (ZXDH_RC_DTB_BASE | 0x14) +#define ZXDH_RC_DTB_QUEUE_NAME_ERROR (ZXDH_RC_DTB_BASE | 0x15) +#define ZXDH_RC_DTB_DUMP_SIZE_SMALL (ZXDH_RC_DTB_BASE | 0x16) +#define ZXDH_RC_DTB_SEARCH_VPORT_QUEUE_ZERO (ZXDH_RC_DTB_BASE | 0x17) +#define ZXDH_RC_DTB_QUEUE_NOT_ENABLE (ZXDH_RC_DTB_BASE | 0x18) + typedef enum zxdh_module_init_e { ZXDH_MODULE_INIT_NPPU = 0, ZXDH_MODULE_INIT_PPU, @@ -299,7 +389,127 @@ typedef struct zxdh_tlb_mgr_t { uint32_t pa_width; } ZXDH_TLB_MGR_T; +typedef enum zxdh_eram128_tbl_mode_e { + ZXDH_ERAM128_TBL_1b = 0, + ZXDH_ERAM128_TBL_32b = 1, + ZXDH_ERAM128_TBL_64b = 2, + ZXDH_ERAM128_TBL_128b = 3, + ZXDH_ERAM128_TBL_2b = 4, + ZXDH_ERAM128_TBL_4b = 5, + ZXDH_ERAM128_TBL_8b = 6, + ZXDH_ERAM128_TBL_16b = 7 +} ZXDH_ERAM128_TBL_MODE_E; + +typedef enum zxdh_eram128_opr_mode_e { + ZXDH_ERAM128_OPR_128b = 0, + ZXDH_ERAM128_OPR_64b = 1, + ZXDH_ERAM128_OPR_1b = 2, + ZXDH_ERAM128_OPR_32b = 3 + +} ZXDH_ERAM128_OPR_MODE_E; + +typedef enum zxdh_dtb_table_info_e { + ZXDH_DTB_TABLE_DDR = 0, + ZXDH_DTB_TABLE_ERAM_1 = 1, + ZXDH_DTB_TABLE_ERAM_64 = 2, + ZXDH_DTB_TABLE_ERAM_128 = 3, + ZXDH_DTB_TABLE_ZCAM = 4, + ZXDH_DTB_TABLE_ETCAM = 5, + ZXDH_DTB_TABLE_MC_HASH = 6, + ZXDH_DTB_TABLE_ENUM_MAX +} ZXDH_DTB_TABLE_INFO_E; + +typedef enum zxdh_sdt_table_type_e { + ZXDH_SDT_TBLT_INVALID = 0, + ZXDH_SDT_TBLT_ERAM = 1, + ZXDH_SDT_TBLT_DDR3 = 2, + ZXDH_SDT_TBLT_HASH = 3, + ZXDH_SDT_TBLT_LPM = 4, + ZXDH_SDT_TBLT_ETCAM = 5, + ZXDH_SDT_TBLT_PORTTBL = 6, + ZXDH_SDT_TBLT_MAX = 7, +} ZXDH_SDT_TABLE_TYPE_E; + +typedef struct zxdh_dtb_lpm_entry_t { + uint32_t dtb_len0; + uint8_t *p_data_buff0; + uint32_t dtb_len1; + uint8_t *p_data_buff1; +} ZXDH_DTB_LPM_ENTRY_T; + +typedef struct zxdh_dtb_entry_t { + uint8_t *cmd; + uint8_t *data; + uint32_t data_in_cmd_flag; + uint32_t data_size; +} ZXDH_DTB_ENTRY_T; + +typedef struct zxdh_dtb_eram_table_form_t { + uint32_t valid; + uint32_t type_mode; + uint32_t data_mode; + uint32_t cpu_wr; + uint32_t cpu_rd; + uint32_t cpu_rd_mode; + uint32_t addr; + uint32_t data_h; + uint32_t data_l; +} ZXDH_DTB_ERAM_TABLE_FORM_T; + +typedef struct zxdh_sdt_tbl_eram_t { + uint32_t table_type; + uint32_t eram_mode; + uint32_t eram_base_addr; + uint32_t eram_table_depth; + uint32_t eram_clutch_en; +} ZXDH_SDTTBL_ERAM_T; + +typedef union zxdh_endian_u { + unsigned int a; + unsigned char b; +} ZXDH_ENDIAN_U; + +typedef struct zxdh_dtb_field_t { + const char *p_name; + uint16_t lsb_pos; + uint16_t len; +} ZXDH_DTB_FIELD_T; + +typedef struct zxdh_dtb_table_t { + const char *table_type; + uint32_t table_no; + uint32_t field_num; + ZXDH_DTB_FIELD_T *p_fields; +} ZXDH_DTB_TABLE_T; + +typedef struct zxdh_dtb_queue_item_info_t { + uint32_t cmd_vld; + uint32_t cmd_type; + uint32_t int_en; + uint32_t data_len; + uint32_t data_laddr; + uint32_t data_hddr; +} ZXDH_DTB_QUEUE_ITEM_INFO_T; + +typedef struct zxdh_dtb_queue_len_t { + uint32_t cfg_dtb_cmd_type; + uint32_t cfg_dtb_cmd_int_en; + uint32_t cfg_queue_dtb_len; +} 
ZXDH_DTB_QUEUE_LEN_T; + +typedef struct zxdh_dtb_eram_entry_info_t { + uint32_t index; + uint32_t *p_data; +} ZXDH_DTB_ERAM_ENTRY_INFO_T; + +typedef struct zxdh_dtb_user_entry_t { + uint32_t sdt_no; + void *p_entry_data; +} ZXDH_DTB_USER_ENTRY_T; + int zxdh_np_host_init(uint32_t dev_id, ZXDH_DEV_INIT_CTRL_T *p_dev_init_ctrl); int zxdh_np_online_uninit(uint32_t dev_id, char *port_name, uint32_t queue_id); +int zxdh_np_dtb_table_entry_write(uint32_t dev_id, uint32_t queue_id, + uint32_t entrynum, ZXDH_DTB_USER_ENTRY_T *down_entries); #endif /* ZXDH_NP_H */ diff --git a/drivers/net/zxdh/zxdh_pci.h b/drivers/net/zxdh/zxdh_pci.h index d6487a574f..e3f13cb17d 100644 --- a/drivers/net/zxdh/zxdh_pci.h +++ b/drivers/net/zxdh/zxdh_pci.h @@ -12,6 +12,8 @@ #include "zxdh_ethdev.h" +#define ZXDH_PF_PCIE_ID(pcie_id) (((pcie_id) & 0xff00) | 1 << 11) + enum zxdh_msix_status { ZXDH_MSIX_NONE = 0, ZXDH_MSIX_DISABLED = 1, diff --git a/drivers/net/zxdh/zxdh_tables.c b/drivers/net/zxdh/zxdh_tables.c new file mode 100644 index 0000000000..91376e6ec0 --- /dev/null +++ b/drivers/net/zxdh/zxdh_tables.c @@ -0,0 +1,105 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#include "zxdh_ethdev.h" +#include "zxdh_msg.h" +#include "zxdh_np.h" +#include "zxdh_tables.h" +#include "zxdh_logs.h" + +#define ZXDH_SDT_VPORT_ATT_TABLE 1 +#define ZXDH_SDT_PANEL_ATT_TABLE 2 + +int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr) +{ + int ret = 0; + + ZXDH_DTB_ERAM_ENTRY_INFO_T entry = {vfid, (uint32_t *)port_attr}; + ZXDH_DTB_USER_ENTRY_T user_entry_write = {ZXDH_SDT_VPORT_ATT_TABLE, (void *)&entry}; + + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &user_entry_write); + if (ret != 0) + PMD_DRV_LOG(ERR, "write vport_att failed vfid:%d failed", vfid); + + return ret; +} + +int +zxdh_port_attr_init(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_port_attr_table port_attr = {0}; + struct zxdh_msg_info msg_info = {0}; + int ret; + + if (hw->is_pf) { + port_attr.hit_flag = 1; + port_attr.phy_port = hw->phyport; + port_attr.pf_vfid = zxdh_vport_to_vfid(hw->vport); + port_attr.rss_enable = 0; + if (!hw->is_pf) + port_attr.is_vf = 1; + + port_attr.mtu = dev->data->mtu; + port_attr.mtu_enable = 1; + port_attr.is_up = 0; + if (!port_attr.rss_enable) + port_attr.port_base_qid = 0; + + ret = zxdh_set_port_attr(hw->vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "write port_attr failed"); + ret = -1; + } + } else { + struct zxdh_vf_init_msg *vf_init_msg = &msg_info.data.vf_init_msg; + + zxdh_msg_head_build(hw, ZXDH_VF_PORT_INIT, &msg_info); + msg_info.msg_head.msg_type = ZXDH_VF_PORT_INIT; + vf_init_msg->link_up = 1; + vf_init_msg->base_qid = 0; + vf_init_msg->rss_enable = 0; + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "vf port_init failed"); + ret = -1; + } + } + return ret; +}; + +int zxdh_panel_table_init(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + int ret; + + if (!hw->is_pf) + return 0; + + struct zxdh_panel_table panel; + + memset(&panel, 0, sizeof(panel)); + panel.hit_flag = 1; + panel.pf_vfid = zxdh_vport_to_vfid(hw->vport); + panel.mtu_enable = 1; + panel.mtu = dev->data->mtu; + + ZXDH_DTB_ERAM_ENTRY_INFO_T panel_entry = { + .index = hw->phyport, + .p_data = (uint32_t *)&panel + }; + ZXDH_DTB_USER_ENTRY_T entry = { + .sdt_no = ZXDH_SDT_PANEL_ATT_TABLE, + .p_entry_data = (void *)&panel_entry + }; + ret = 
zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, g_dtb_data.queueid, 1, &entry); + + if (ret) { + PMD_DRV_LOG(ERR, "Insert eram-panel failed, code:%u", ret); + ret = -1; + } + + return ret; +} diff --git a/drivers/net/zxdh/zxdh_tables.h b/drivers/net/zxdh/zxdh_tables.h new file mode 100644 index 0000000000..5d34af2f05 --- /dev/null +++ b/drivers/net/zxdh/zxdh_tables.h @@ -0,0 +1,148 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#ifndef ZXDH_TABLES_H +#define ZXDH_TABLES_H + +#include <stdint.h> + +extern struct zxdh_dtb_shared_data g_dtb_data; + +#define ZXDH_DEVICE_NO 0 + +struct zxdh_port_attr_table { +#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN + uint8_t byte4_rsv1: 1; + uint8_t ingress_meter_enable: 1; + uint8_t egress_meter_enable: 1; + uint8_t byte4_rsv2: 2; + uint8_t fd_enable: 1; + uint8_t vepa_enable: 1; + uint8_t spoof_check_enable: 1; + + uint8_t inline_sec_offload: 1; + uint8_t ovs_enable: 1; + uint8_t lag_enable: 1; + uint8_t is_passthrough: 1; + uint8_t is_vf: 1; + uint8_t virtion_version: 2; + uint8_t virtio_enable: 1; + + uint8_t accelerator_offload_flag: 1; + uint8_t lro_offload: 1; + uint8_t ip_fragment_offload: 1; + uint8_t tcp_udp_checksum_offload: 1; + uint8_t ip_checksum_offload: 1; + uint8_t outer_ip_checksum_offload: 1; + uint8_t is_up: 1; + uint8_t rsv1: 1; + + uint8_t rsv3 : 1; + uint8_t rdma_offload_enable: 1; + uint8_t vlan_filter_enable: 1; + uint8_t vlan_strip_offload: 1; + uint8_t qinq_valn_strip_offload: 1; + uint8_t rss_enable: 1; + uint8_t mtu_enable: 1; + uint8_t hit_flag: 1; + + uint16_t mtu; + + uint16_t port_base_qid : 12; + uint16_t hash_search_index : 3; + uint16_t rsv: 1; + + uint8_t rss_hash_factor; + + uint8_t hash_alg: 4; + uint8_t phy_port: 4; + + uint16_t lag_id : 3; + uint16_t pf_vfid : 11; + uint16_t ingress_tm_enable : 1; + uint16_t egress_tm_enable : 1; + + uint16_t tpid; + + uint16_t vhca : 10; + uint16_t uplink_port : 6; +#else + uint8_t rsv3 : 1; + uint8_t rdma_offload_enable: 1; + uint8_t vlan_filter_enable: 1; + uint8_t vlan_strip_offload: 1; + uint8_t qinq_valn_strip_offload: 1; + uint8_t rss_enable: 1; + uint8_t mtu_enable: 1; + uint8_t hit_flag: 1; + + uint8_t accelerator_offload_flag: 1; + uint8_t lro_offload: 1; + uint8_t ip_fragment_offload: 1; + uint8_t tcp_udp_checksum_offload: 1; + uint8_t ip_checksum_offload: 1; + uint8_t outer_ip_checksum_offload: 1; + uint8_t is_up: 1; + uint8_t rsv1: 1; + + uint8_t inline_sec_offload: 1; + uint8_t ovs_enable: 1; + uint8_t lag_enable: 1; + uint8_t is_passthrough: 1; + uint8_t is_vf: 1; + uint8_t virtion_version: 2; + uint8_t virtio_enable: 1; + + uint8_t byte4_rsv1: 1; + uint8_t ingress_meter_enable: 1; + uint8_t egress_meter_enable: 1; + uint8_t byte4_rsv2: 2; + uint8_t fd_enable: 1; + uint8_t vepa_enable: 1; + uint8_t spoof_check_enable: 1; + + uint16_t port_base_qid : 12; + uint16_t hash_search_index : 3; + uint16_t rsv: 1; + + uint16_t mtu; + + uint16_t lag_id : 3; + uint16_t pf_vfid : 11; + uint16_t ingress_tm_enable : 1; + uint16_t egress_tm_enable : 1; + + uint8_t hash_alg: 4; + uint8_t phy_port: 4; + + uint8_t rss_hash_factor; + + uint16_t tpid; + + uint16_t vhca : 10; + uint16_t uplink_port : 6; +#endif +}; + +struct zxdh_panel_table { + uint16_t port_vfid_1588 : 11, + rsv2 : 5; + uint16_t pf_vfid : 11, + rsv1 : 1, + enable_1588_tc : 2, + trust_mode : 1, + hit_flag : 1; + uint32_t mtu : 16, + mtu_enable : 1, + rsv : 3, + tm_base_queue : 12; + uint32_t rsv_1; + uint32_t rsv_2; +}; /* 16B */ + +int zxdh_port_attr_init(struct rte_eth_dev *dev); 
+int zxdh_panel_table_init(struct rte_eth_dev *dev); +int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); + +#endif /* ZXDH_TABLES_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 100335 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
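The table writers added in zxdh_tables.c all follow the same shape: wrap a row index and a pointer to the row data in a ZXDH_DTB_ERAM_ENTRY_INFO_T, point a ZXDH_DTB_USER_ENTRY_T at it via the table's SDT number, and hand that to zxdh_np_dtb_table_entry_write(). A condensed caller-side sketch of that pattern, modeled on zxdh_set_port_attr() above; MY_SDT_NO is a hypothetical placeholder (real values such as ZXDH_SDT_VPORT_ATT_TABLE == 1 come from the NP profile), and dev_id 0 mirrors ZXDH_DEVICE_NO.

#include <stdint.h>
#include "zxdh_np.h"

#define MY_SDT_NO 1	/* hypothetical SDT id, for illustration only */

static int
write_one_eram_row(uint32_t queue_id, uint32_t row_index, uint32_t row[4])
{
	/* For an ERAM SDT the user entry carries an index plus up to
	 * 128 bits of row data (4 x 32-bit words). */
	ZXDH_DTB_ERAM_ENTRY_INFO_T eram_entry = {
		.index = row_index,
		.p_data = row,
	};
	ZXDH_DTB_USER_ENTRY_T user_entry = {
		.sdt_no = MY_SDT_NO,
		.p_entry_data = &eram_entry,
	};

	/* Packs the DTB command + data, claims a free item slot in the
	 * queue's download ring and kicks the hardware. */
	return zxdh_np_dtb_table_entry_write(0, queue_id, 1, &user_entry);
}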
* [PATCH v6 04/15] net/zxdh: port tables unint implementations 2024-12-26 3:37 ` [PATCH v6 " Junlong Wang ` (2 preceding siblings ...) 2024-12-26 3:37 ` [PATCH v6 03/15] net/zxdh: port tables init implementations Junlong Wang @ 2024-12-26 3:37 ` Junlong Wang 2024-12-26 3:37 ` [PATCH v6 05/15] net/zxdh: rx/tx queue setup and intr enable Junlong Wang ` (10 subsequent siblings) 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-26 3:37 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 8641 bytes --] Delete port tables on the host. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/zxdh_ethdev.c | 18 ++++ drivers/net/zxdh/zxdh_msg.h | 1 + drivers/net/zxdh/zxdh_np.c | 103 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_np.h | 9 +++ drivers/net/zxdh/zxdh_tables.c | 33 ++++++++++- drivers/net/zxdh/zxdh_tables.h | 1 + 6 files changed, 164 insertions(+), 1 deletion(-) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index ff44816384..717a1d2b0b 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -887,12 +887,30 @@ zxdh_np_uninit(struct rte_eth_dev *dev) zxdh_np_dtb_data_res_free(hw); } +static int +zxdh_tables_uninit(struct rte_eth_dev *dev) +{ + int ret; + + ret = zxdh_port_attr_uninit(dev); + if (ret) + PMD_DRV_LOG(ERR, "zxdh_port_attr_uninit failed"); + + return ret; +} + static int zxdh_dev_close(struct rte_eth_dev *dev) { struct zxdh_hw *hw = dev->data->dev_private; int ret = 0; + ret = zxdh_tables_uninit(dev); + if (ret != 0) { + PMD_DRV_LOG(ERR, "%s :tables uninit %s failed ", __func__, dev->device->name); + return -1; + } + zxdh_intr_release(dev); zxdh_np_uninit(dev); zxdh_pci_reset(hw); diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index b7b17b8696..613ca71170 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -167,6 +167,7 @@ enum zxdh_msg_type { ZXDH_NULL = 0, ZXDH_VF_PORT_INIT = 1, + ZXDH_VF_PORT_UNINIT = 2, ZXDH_MSG_TYPE_END, }; diff --git a/drivers/net/zxdh/zxdh_np.c b/drivers/net/zxdh/zxdh_np.c index db536d96e3..99a7dc11b4 100644 --- a/drivers/net/zxdh/zxdh_np.c +++ b/drivers/net/zxdh/zxdh_np.c @@ -25,6 +25,7 @@ ZXDH_RISCV_DTB_MGR *p_riscv_dtb_queue_mgr[ZXDH_DEV_CHANNEL_MAX]; ZXDH_TLB_MGR_T *g_p_dpp_tlb_mgr[ZXDH_DEV_CHANNEL_MAX]; ZXDH_REG_T g_dpp_reg_info[4]; ZXDH_DTB_TABLE_T g_dpp_dtb_table_info[4]; +ZXDH_SDT_TBL_DATA_T g_sdt_info[ZXDH_DEV_CHANNEL_MAX][ZXDH_DEV_SDT_ID_MAX]; #define ZXDH_SDT_MGR_PTR_GET() (&g_sdt_mgr) #define ZXDH_SDT_SOFT_TBL_GET(id) (g_sdt_mgr.sdt_tbl_array[id]) @@ -1454,3 +1455,105 @@ zxdh_np_dtb_table_entry_write(uint32_t dev_id, return rc; } + +static uint32_t +zxdh_np_sdt_tbl_data_get(uint32_t dev_id, uint32_t sdt_no, ZXDH_SDT_TBL_DATA_T *p_sdt_data) +{ + uint32_t rc = 0; + + p_sdt_data->data_high32 = g_sdt_info[dev_id][sdt_no].data_high32; + p_sdt_data->data_low32 = g_sdt_info[dev_id][sdt_no].data_low32; + + return rc; +} + +int +zxdh_np_dtb_table_entry_delete(uint32_t dev_id, + uint32_t queue_id, + uint32_t entrynum, + ZXDH_DTB_USER_ENTRY_T *delete_entries) +{ + ZXDH_SDT_TBL_DATA_T sdt_tbl = {0}; + ZXDH_DTB_USER_ENTRY_T *pentry = NULL; + ZXDH_DTB_ENTRY_T dtb_one_entry = {0}; + uint8_t entry_cmd[ZXDH_DTB_TABLE_CMD_SIZE_BIT / 8] = {0}; + uint8_t entry_data[ZXDH_ETCAM_WIDTH_MAX / 8] = {0}; + uint8_t *p_data_buff = NULL; + uint8_t *p_data_buff_ex = NULL; + uint32_t tbl_type = 0; + uint32_t element_id = 0xff; +
uint32_t one_dtb_len = 0; + uint32_t dtb_len = 0; + uint32_t entry_index; + uint32_t sdt_no; + uint32_t addr_offset; + uint32_t max_size; + uint32_t rc; + + ZXDH_COMM_CHECK_POINT(delete_entries); + + p_data_buff = rte_calloc(NULL, 1, ZXDH_DTB_TABLE_DATA_BUFF_SIZE, 0); + ZXDH_COMM_CHECK_POINT(p_data_buff); + + p_data_buff_ex = rte_calloc(NULL, 1, ZXDH_DTB_TABLE_DATA_BUFF_SIZE, 0); + ZXDH_COMM_CHECK_POINT_MEMORY_FREE(p_data_buff_ex, p_data_buff); + + dtb_one_entry.cmd = entry_cmd; + dtb_one_entry.data = entry_data; + + max_size = (ZXDH_DTB_TABLE_DATA_BUFF_SIZE / 16) - 1; + + for (entry_index = 0; entry_index < entrynum; entry_index++) { + pentry = delete_entries + entry_index; + + sdt_no = pentry->sdt_no; + rc = zxdh_np_sdt_tbl_data_get(dev_id, sdt_no, &sdt_tbl); + switch (tbl_type) { + case ZXDH_SDT_TBLT_ERAM: + { + rc = zxdh_np_dtb_eram_one_entry(dev_id, sdt_no, ZXDH_DTB_ITEM_DELETE, + pentry->p_entry_data, &one_dtb_len, &dtb_one_entry); + break; + } + default: + { + PMD_DRV_LOG(ERR, "SDT table_type[ %d ] is invalid!", tbl_type); + rte_free(p_data_buff); + rte_free(p_data_buff_ex); + return 1; + } + } + + addr_offset = dtb_len * ZXDH_DTB_LEN_POS_SETP; + dtb_len += one_dtb_len; + if (dtb_len > max_size) { + rte_free(p_data_buff); + rte_free(p_data_buff_ex); + PMD_DRV_LOG(ERR, " %s error dtb_len>%u!", __func__, + max_size); + return ZXDH_RC_DTB_DOWN_LEN_INVALID; + } + + rc = zxdh_np_dtb_data_write(p_data_buff, addr_offset, &dtb_one_entry); + memset(entry_cmd, 0x0, sizeof(entry_cmd)); + memset(entry_data, 0x0, sizeof(entry_data)); + } + + if (dtb_len == 0) { + rte_free(p_data_buff); + rte_free(p_data_buff_ex); + return ZXDH_RC_DTB_DOWN_LEN_INVALID; + } + + rc = zxdh_np_dtb_write_down_table_data(dev_id, + queue_id, + dtb_len * 16, + p_data_buff_ex, + &element_id); + rte_free(p_data_buff); + ZXDH_COMM_CHECK_RC_MEMORY_FREE_NO_ASSERT(rc, + "dpp_dtb_write_down_table_data", p_data_buff_ex); + + rte_free(p_data_buff_ex); + return 0; +} diff --git a/drivers/net/zxdh/zxdh_np.h b/drivers/net/zxdh/zxdh_np.h index 40961c02a2..42a652dd6b 100644 --- a/drivers/net/zxdh/zxdh_np.h +++ b/drivers/net/zxdh/zxdh_np.h @@ -20,6 +20,8 @@ #define ZXDH_PPU_CLUSTER_NUM (6) #define ZXDH_PPU_INSTR_MEM_NUM (3) #define ZXDH_SDT_CFG_LEN (2) +#define ZXDH_SDT_H_TBL_TYPE_BT_POS (29) +#define ZXDH_SDT_H_TBL_TYPE_BT_LEN (3) #define ZXDH_RC_DEV_BASE (0x600) #define ZXDH_RC_DEV_PARA_INVALID (ZXDH_RC_DEV_BASE | 0x0) @@ -507,9 +509,16 @@ typedef struct zxdh_dtb_user_entry_t { void *p_entry_data; } ZXDH_DTB_USER_ENTRY_T; +typedef struct zxdh_sdt_tbl_data_t { + uint32_t data_high32; + uint32_t data_low32; +} ZXDH_SDT_TBL_DATA_T; + int zxdh_np_host_init(uint32_t dev_id, ZXDH_DEV_INIT_CTRL_T *p_dev_init_ctrl); int zxdh_np_online_uninit(uint32_t dev_id, char *port_name, uint32_t queue_id); int zxdh_np_dtb_table_entry_write(uint32_t dev_id, uint32_t queue_id, uint32_t entrynum, ZXDH_DTB_USER_ENTRY_T *down_entries); +int zxdh_np_dtb_table_entry_delete(uint32_t dev_id, uint32_t queue_id, + uint32_t entrynum, ZXDH_DTB_USER_ENTRY_T *delete_entries); #endif /* ZXDH_NP_H */ diff --git a/drivers/net/zxdh/zxdh_tables.c b/drivers/net/zxdh/zxdh_tables.c index 91376e6ec0..9fd184e612 100644 --- a/drivers/net/zxdh/zxdh_tables.c +++ b/drivers/net/zxdh/zxdh_tables.c @@ -11,7 +11,8 @@ #define ZXDH_SDT_VPORT_ATT_TABLE 1 #define ZXDH_SDT_PANEL_ATT_TABLE 2 -int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr) +int +zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr) { int ret = 0; @@ -70,6 +71,36 @@ 
zxdh_port_attr_init(struct rte_eth_dev *dev) return ret; }; +int +zxdh_port_attr_uninit(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_info msg_info = {0}; + struct zxdh_port_attr_table port_attr = {0}; + int ret = 0; + + if (hw->is_pf == 1) { + ZXDH_DTB_ERAM_ENTRY_INFO_T port_attr_entry = {hw->vfid, (uint32_t *)&port_attr}; + ZXDH_DTB_USER_ENTRY_T entry = { + .sdt_no = ZXDH_SDT_VPORT_ATT_TABLE, + .p_entry_data = (void *)&port_attr_entry + }; + ret = zxdh_np_dtb_table_entry_delete(ZXDH_DEVICE_NO, g_dtb_data.queueid, 1, &entry); + if (ret) { + PMD_DRV_LOG(ERR, "delete port attr table failed"); + ret = -1; + } + } else { + zxdh_msg_head_build(hw, ZXDH_VF_PORT_UNINIT, &msg_info); + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "vf port tables uninit failed"); + ret = -1; + } + } + return ret; +} + int zxdh_panel_table_init(struct rte_eth_dev *dev) { struct zxdh_hw *hw = dev->data->dev_private; diff --git a/drivers/net/zxdh/zxdh_tables.h b/drivers/net/zxdh/zxdh_tables.h index 5d34af2f05..5e9b36faee 100644 --- a/drivers/net/zxdh/zxdh_tables.h +++ b/drivers/net/zxdh/zxdh_tables.h @@ -144,5 +144,6 @@ struct zxdh_panel_table { int zxdh_port_attr_init(struct rte_eth_dev *dev); int zxdh_panel_table_init(struct rte_eth_dev *dev); int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); +int zxdh_port_attr_uninit(struct rte_eth_dev *dev); #endif /* ZXDH_TABLES_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 18675 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
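The uninit flow above is the mirror image of table init: the PF deletes its own vport-attribute entry from the eRAM table through the DTB queue, while a VF sends ZXDH_VF_PORT_UNINIT to its PF over the BAR message channel. A minimal caller sketch of the new delete API, modeled directly on zxdh_port_attr_uninit() (the helper name is hypothetical, and ZXDH_SDT_VPORT_ATT_TABLE is re-declared here because the driver defines it privately in zxdh_tables.c):

    #include <stdint.h>
    #include "zxdh_np.h"

    #define ZXDH_SDT_VPORT_ATT_TABLE 1 /* private define in zxdh_tables.c */

    /* Delete one vport attribute entry; returns 0 on success. */
    static int example_vport_attr_delete(uint32_t dev_id, uint32_t dtb_queue_id,
                                         uint16_t vfid, uint32_t *attr_data)
    {
        ZXDH_DTB_ERAM_ENTRY_INFO_T eram_entry = {vfid, attr_data};
        ZXDH_DTB_USER_ENTRY_T entry = {
            .sdt_no = ZXDH_SDT_VPORT_ATT_TABLE,
            .p_entry_data = (void *)&eram_entry,
        };

        return zxdh_np_dtb_table_entry_delete(dev_id, dtb_queue_id, 1, &entry);
    }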
* [PATCH v6 05/15] net/zxdh: rx/tx queue setup and intr enable 2024-12-26 3:37 ` [PATCH v6 " Junlong Wang ` (3 preceding siblings ...) 2024-12-26 3:37 ` [PATCH v6 04/15] net/zxdh: port tables uninit implementations Junlong Wang @ 2024-12-26 3:37 ` Junlong Wang 2024-12-26 3:37 ` [PATCH v6 06/15] net/zxdh: dev start/stop ops implementations Junlong Wang ` (9 subsequent siblings) 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-26 3:37 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 7822 bytes --] rx/tx queue setup and intr enable implementations. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/zxdh_ethdev.c | 4 + drivers/net/zxdh/zxdh_queue.c | 149 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_queue.h | 33 ++++++++ 3 files changed, 186 insertions(+)
diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 717a1d2b0b..521d7ed433 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -933,6 +933,10 @@ static const struct eth_dev_ops zxdh_eth_dev_ops = { .dev_configure = zxdh_dev_configure, .dev_close = zxdh_dev_close, .dev_infos_get = zxdh_dev_infos_get, + .rx_queue_setup = zxdh_dev_rx_queue_setup, + .tx_queue_setup = zxdh_dev_tx_queue_setup, + .rx_queue_intr_enable = zxdh_dev_rx_queue_intr_enable, + .rx_queue_intr_disable = zxdh_dev_rx_queue_intr_disable, }; static int32_t
diff --git a/drivers/net/zxdh/zxdh_queue.c b/drivers/net/zxdh/zxdh_queue.c index b4ef90ea36..af21f046ad 100644 --- a/drivers/net/zxdh/zxdh_queue.c +++ b/drivers/net/zxdh/zxdh_queue.c @@ -12,6 +12,11 @@ #include "zxdh_common.h" #include "zxdh_msg.h" +#define ZXDH_MBUF_MIN_SIZE sizeof(struct zxdh_net_hdr_dl) +#define ZXDH_MBUF_SIZE_4K 4096 +#define ZXDH_RX_FREE_THRESH 32 +#define ZXDH_TX_FREE_THRESH 32 + struct rte_mbuf * zxdh_queue_detach_unused(struct zxdh_virtqueue *vq) { @@ -125,3 +130,147 @@ zxdh_free_queues(struct rte_eth_dev *dev) return 0; } + +static int +zxdh_check_mempool(struct rte_mempool *mp, uint16_t offset, uint16_t min_length) +{ + uint16_t data_room_size; + + if (mp == NULL) + return -EINVAL; + data_room_size = rte_pktmbuf_data_room_size(mp); + if (data_room_size < offset + min_length) { + PMD_RX_LOG(ERR, + "%s mbuf_data_room_size %u < %u (%u + %u)", + mp->name, data_room_size, + offset + min_length, offset, min_length); + return -EINVAL; + } + return 0; +} + +int32_t +zxdh_dev_rx_queue_setup(struct rte_eth_dev *dev, + uint16_t queue_idx, + uint16_t nb_desc, + uint32_t socket_id __rte_unused, + const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mp) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint16_t vtpci_logic_qidx = 2 * queue_idx + ZXDH_RQ_QUEUE_IDX; + struct zxdh_virtqueue *vq = hw->vqs[vtpci_logic_qidx]; + int32_t ret = 0; + + if (rx_conf->rx_deferred_start) { + PMD_RX_LOG(ERR, "Rx deferred start is not supported"); + return -EINVAL; + } + uint16_t rx_free_thresh = rx_conf->rx_free_thresh; + + if (rx_free_thresh == 0) + rx_free_thresh = RTE_MIN(vq->vq_nentries / 4, ZXDH_RX_FREE_THRESH); + + /* rx_free_thresh must be a multiple of four. */ + if (rx_free_thresh & 0x3) { + PMD_RX_LOG(ERR, "(rx_free_thresh=%u port=%u queue=%u)", + rx_free_thresh, dev->data->port_id, queue_idx); + return -EINVAL; + } + /* rx_free_thresh must be less than the number of RX entries */ + if (rx_free_thresh >= vq->vq_nentries) { + PMD_RX_LOG(ERR, "RX entries (%u).
(rx_free_thresh=%u port=%u queue=%u)", + vq->vq_nentries, rx_free_thresh, dev->data->port_id, queue_idx); + return -EINVAL; + } + vq->vq_free_thresh = rx_free_thresh; + nb_desc = ZXDH_QUEUE_DEPTH; + + vq->vq_free_cnt = RTE_MIN(vq->vq_free_cnt, nb_desc); + struct zxdh_virtnet_rx *rxvq = &vq->rxq; + + rxvq->queue_id = vtpci_logic_qidx; + + int mbuf_min_size = ZXDH_MBUF_MIN_SIZE; + + if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) + mbuf_min_size = ZXDH_MBUF_SIZE_4K; + + ret = zxdh_check_mempool(mp, RTE_PKTMBUF_HEADROOM, mbuf_min_size); + if (ret != 0) { + PMD_RX_LOG(ERR, + "rxq setup but mpool size too small(<%d) failed", mbuf_min_size); + return -EINVAL; + } + rxvq->mpool = mp; + if (queue_idx < dev->data->nb_rx_queues) + dev->data->rx_queues[queue_idx] = rxvq; + + return 0; +} + +int32_t +zxdh_dev_tx_queue_setup(struct rte_eth_dev *dev, + uint16_t queue_idx, + uint16_t nb_desc, + uint32_t socket_id __rte_unused, + const struct rte_eth_txconf *tx_conf) +{ + uint16_t vtpci_logic_qidx = 2 * queue_idx + ZXDH_TQ_QUEUE_IDX; + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_virtqueue *vq = hw->vqs[vtpci_logic_qidx]; + struct zxdh_virtnet_tx *txvq = NULL; + uint16_t tx_free_thresh = 0; + + if (tx_conf->tx_deferred_start) { + PMD_TX_LOG(ERR, "Tx deferred start is not supported"); + return -EINVAL; + } + + nb_desc = ZXDH_QUEUE_DEPTH; + + vq->vq_free_cnt = RTE_MIN(vq->vq_free_cnt, nb_desc); + + txvq = &vq->txq; + txvq->queue_id = vtpci_logic_qidx; + + tx_free_thresh = tx_conf->tx_free_thresh; + if (tx_free_thresh == 0) + tx_free_thresh = RTE_MIN(vq->vq_nentries / 4, ZXDH_TX_FREE_THRESH); + + /* tx_free_thresh must be less than the number of TX entries minus 3 */ + if (tx_free_thresh >= (vq->vq_nentries - 3)) { + PMD_TX_LOG(ERR, "TX entries - 3 (%u). (tx_free_thresh=%u port=%u queue=%u)", + vq->vq_nentries - 3, tx_free_thresh, dev->data->port_id, queue_idx); + return -EINVAL; + } + + vq->vq_free_thresh = tx_free_thresh; + + if (queue_idx < dev->data->nb_tx_queues) + dev->data->tx_queues[queue_idx] = txvq; + + return 0; +} + +int32_t +zxdh_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_virtnet_rx *rxvq = dev->data->rx_queues[queue_id]; + struct zxdh_virtqueue *vq = rxvq->vq; + + zxdh_queue_enable_intr(vq); + zxdh_mb(hw->weak_barriers); + return 0; +} + +int32_t +zxdh_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id) +{ + struct zxdh_virtnet_rx *rxvq = dev->data->rx_queues[queue_id]; + struct zxdh_virtqueue *vq = rxvq->vq; + + zxdh_queue_disable_intr(vq); + return 0; +} diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h index 1304d5e4ea..2f602d894f 100644 --- a/drivers/net/zxdh/zxdh_queue.h +++ b/drivers/net/zxdh/zxdh_queue.h @@ -8,6 +8,7 @@ #include <stdint.h> #include <rte_common.h> +#include <rte_atomic.h> #include "zxdh_ethdev.h" #include "zxdh_rxtx.h" @@ -30,6 +31,7 @@ enum { ZXDH_VTNET_RQ = 0, ZXDH_VTNET_TQ = 1 }; #define ZXDH_RING_EVENT_FLAGS_DESC 0x2 #define ZXDH_VQ_RING_DESC_CHAIN_END 32768 +#define ZXDH_QUEUE_DEPTH 1024 /* * ring descriptors: 16 bytes. 
@@ -270,8 +272,39 @@ zxdh_queue_disable_intr(struct zxdh_virtqueue *vq) } } +static inline void +zxdh_queue_enable_intr(struct zxdh_virtqueue *vq) +{ + if (vq->vq_packed.event_flags_shadow == ZXDH_RING_EVENT_FLAGS_DISABLE) { + vq->vq_packed.event_flags_shadow = ZXDH_RING_EVENT_FLAGS_ENABLE; + vq->vq_packed.ring.driver->desc_event_flags = vq->vq_packed.event_flags_shadow; + } +} + +static inline void +zxdh_mb(uint8_t weak_barriers) +{ + if (weak_barriers) + rte_atomic_thread_fence(rte_memory_order_seq_cst); + else + rte_mb(); +} + struct rte_mbuf *zxdh_queue_detach_unused(struct zxdh_virtqueue *vq); int32_t zxdh_free_queues(struct rte_eth_dev *dev); int32_t zxdh_get_queue_type(uint16_t vtpci_queue_idx); +int32_t zxdh_dev_tx_queue_setup(struct rte_eth_dev *dev, + uint16_t queue_idx, + uint16_t nb_desc, + uint32_t socket_id __rte_unused, + const struct rte_eth_txconf *tx_conf); +int32_t zxdh_dev_rx_queue_setup(struct rte_eth_dev *dev, + uint16_t queue_idx, + uint16_t nb_desc, + uint32_t socket_id __rte_unused, + const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mp); +int32_t zxdh_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id); +int32_t zxdh_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id); #endif /* ZXDH_QUEUE_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 17339 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
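Both setup callbacks validate the free-threshold parameters before accepting a queue: rx_free_thresh must be a nonzero multiple of four below the ring size, and tx_free_thresh must stay below the ring size minus three. From an application they are reached through the standard ethdev API; a minimal sketch (port id, queue depth, and thresholds are illustrative, and the mempool is assumed to be created elsewhere):

    #include <rte_ethdev.h>
    #include <rte_lcore.h>
    #include <rte_mempool.h>

    static int example_queue_setup(uint16_t port_id, struct rte_mempool *mp)
    {
        struct rte_eth_rxconf rxconf = { .rx_free_thresh = 32 }; /* multiple of 4, < ring size */
        struct rte_eth_txconf txconf = { .tx_free_thresh = 32 }; /* < ring size - 3 */
        int ret;

        /* The PMD pins the ring depth to ZXDH_QUEUE_DEPTH (1024) internally. */
        ret = rte_eth_rx_queue_setup(port_id, 0, 1024, rte_socket_id(), &rxconf, mp);
        if (ret < 0)
            return ret;
        return rte_eth_tx_queue_setup(port_id, 0, 1024, rte_socket_id(), &txconf);
    }

Note that deferred start is rejected for both directions, so neither rxconf nor txconf may set the *_deferred_start flag.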
* [PATCH v6 06/15] net/zxdh: dev start/stop ops implementations 2024-12-26 3:37 ` [PATCH v6 " Junlong Wang ` (4 preceding siblings ...) 2024-12-26 3:37 ` [PATCH v6 05/15] net/zxdh: rx/tx queue setup and intr enable Junlong Wang @ 2024-12-26 3:37 ` Junlong Wang 2024-12-26 3:37 ` [PATCH v6 07/15] net/zxdh: provided dev simple tx implementations Junlong Wang ` (8 subsequent siblings) 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-26 3:37 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 14258 bytes --] dev start/stop implementations, start/stop the rx/tx queues. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- doc/guides/nics/features/zxdh.ini | 2 + doc/guides/nics/zxdh.rst | 2 + drivers/net/zxdh/zxdh_ethdev.c | 71 ++++++++++++++++++++++++ drivers/net/zxdh/zxdh_pci.c | 21 +++++++ drivers/net/zxdh/zxdh_pci.h | 1 + drivers/net/zxdh/zxdh_queue.c | 91 +++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_queue.h | 69 +++++++++++++++++++++++ drivers/net/zxdh/zxdh_rxtx.h | 14 ++--- 8 files changed, 263 insertions(+), 8 deletions(-) diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index 05c8091ed7..7b72be5f25 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -7,3 +7,5 @@ Linux = Y x86-64 = Y ARMv8 = Y +SR-IOV = Y +Multiprocess aware = Y diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index 2144753d75..eb970a888f 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -18,6 +18,8 @@ Features Features of the ZXDH PMD are: - Multi arch support: x86_64, ARMv8. +- Multiple queues for TX and RX +- SR-IOV VF Driver compilation and testing diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 521d7ed433..6e603b967e 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -899,12 +899,40 @@ zxdh_tables_uninit(struct rte_eth_dev *dev) return ret; } +static int +zxdh_dev_stop(struct rte_eth_dev *dev) +{ + uint16_t i; + int ret; + + if (dev->data->dev_started == 0) + return 0; + + ret = zxdh_intr_disable(dev); + if (ret) { + PMD_DRV_LOG(ERR, "intr disable failed"); + return ret; + } + for (i = 0; i < dev->data->nb_rx_queues; i++) + dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED; + for (i = 0; i < dev->data->nb_tx_queues; i++) + dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED; + + return 0; +} + static int zxdh_dev_close(struct rte_eth_dev *dev) { struct zxdh_hw *hw = dev->data->dev_private; int ret = 0; + ret = zxdh_dev_stop(dev); + if (ret != 0) { + PMD_DRV_LOG(ERR, " stop port %s failed.", dev->device->name); + return -1; + } + ret = zxdh_tables_uninit(dev); if (ret != 0) { PMD_DRV_LOG(ERR, "%s :tables uninit %s failed ", __func__, dev->device->name); @@ -928,9 +956,52 @@ zxdh_dev_close(struct rte_eth_dev *dev) return ret; } +static int +zxdh_dev_start(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_virtqueue *vq; + int32_t ret; + uint16_t logic_qidx; + uint16_t i; + + for (i = 0; i < dev->data->nb_rx_queues; i++) { + logic_qidx = 2 * i + ZXDH_RQ_QUEUE_IDX; + ret = zxdh_dev_rx_queue_setup_finish(dev, logic_qidx); + if (ret < 0) + return ret; + } + ret = zxdh_intr_enable(dev); + if (ret) { + PMD_DRV_LOG(ERR, "interrupt enable failed"); + return -EINVAL; + } + + for (i = 0; i < dev->data->nb_rx_queues; i++) { + logic_qidx = 2 * i + ZXDH_RQ_QUEUE_IDX; + vq = hw->vqs[logic_qidx]; + /* Flush the 
old packets */ + zxdh_queue_rxvq_flush(vq); + zxdh_queue_notify(vq); + } + for (i = 0; i < dev->data->nb_tx_queues; i++) { + logic_qidx = 2 * i + ZXDH_TQ_QUEUE_IDX; + vq = hw->vqs[logic_qidx]; + zxdh_queue_notify(vq); + } + for (i = 0; i < dev->data->nb_rx_queues; i++) + dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED; + for (i = 0; i < dev->data->nb_tx_queues; i++) + dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED; + + return 0; +} + /* dev_ops for zxdh, bare necessities for basic operation */ static const struct eth_dev_ops zxdh_eth_dev_ops = { .dev_configure = zxdh_dev_configure, + .dev_start = zxdh_dev_start, + .dev_stop = zxdh_dev_stop, .dev_close = zxdh_dev_close, .dev_infos_get = zxdh_dev_infos_get, .rx_queue_setup = zxdh_dev_rx_queue_setup, diff --git a/drivers/net/zxdh/zxdh_pci.c b/drivers/net/zxdh/zxdh_pci.c index 250e67d560..6b2c4482b2 100644 --- a/drivers/net/zxdh/zxdh_pci.c +++ b/drivers/net/zxdh/zxdh_pci.c @@ -202,6 +202,26 @@ zxdh_del_queue(struct zxdh_hw *hw, struct zxdh_virtqueue *vq) rte_write16(0, &hw->common_cfg->queue_enable); } +static void +zxdh_notify_queue(struct zxdh_hw *hw, struct zxdh_virtqueue *vq) +{ + uint32_t notify_data = 0; + + if (!zxdh_pci_with_feature(hw, ZXDH_F_NOTIFICATION_DATA)) { + rte_write16(vq->vq_queue_index, vq->notify_addr); + return; + } + + notify_data = ((uint32_t)vq->vq_avail_idx << 16) | vq->vq_queue_index; + if (zxdh_pci_with_feature(hw, ZXDH_F_RING_PACKED) && + (vq->vq_packed.cached_flags & ZXDH_VRING_PACKED_DESC_F_AVAIL)) + notify_data |= RTE_BIT32(31); + + PMD_DRV_LOG(DEBUG, "queue:%d notify_data 0x%x notify_addr 0x%p", + vq->vq_queue_index, notify_data, vq->notify_addr); + rte_write32(notify_data, vq->notify_addr); +} + const struct zxdh_pci_ops zxdh_dev_pci_ops = { .read_dev_cfg = zxdh_read_dev_config, .write_dev_cfg = zxdh_write_dev_config, @@ -216,6 +236,7 @@ const struct zxdh_pci_ops zxdh_dev_pci_ops = { .set_queue_num = zxdh_set_queue_num, .setup_queue = zxdh_setup_queue, .del_queue = zxdh_del_queue, + .notify_queue = zxdh_notify_queue, }; uint8_t diff --git a/drivers/net/zxdh/zxdh_pci.h b/drivers/net/zxdh/zxdh_pci.h index e3f13cb17d..5c5f72b90e 100644 --- a/drivers/net/zxdh/zxdh_pci.h +++ b/drivers/net/zxdh/zxdh_pci.h @@ -144,6 +144,7 @@ struct zxdh_pci_ops { int32_t (*setup_queue)(struct zxdh_hw *hw, struct zxdh_virtqueue *vq); void (*del_queue)(struct zxdh_hw *hw, struct zxdh_virtqueue *vq); + void (*notify_queue)(struct zxdh_hw *hw, struct zxdh_virtqueue *vq); }; struct zxdh_hw_internal { diff --git a/drivers/net/zxdh/zxdh_queue.c b/drivers/net/zxdh/zxdh_queue.c index af21f046ad..8c8f2605f6 100644 --- a/drivers/net/zxdh/zxdh_queue.c +++ b/drivers/net/zxdh/zxdh_queue.c @@ -274,3 +274,94 @@ zxdh_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id) zxdh_queue_disable_intr(vq); return 0; } + +int32_t zxdh_enqueue_recv_refill_packed(struct zxdh_virtqueue *vq, + struct rte_mbuf **cookie, uint16_t num) +{ + struct zxdh_vring_packed_desc *start_dp = vq->vq_packed.ring.desc; + struct zxdh_hw *hw = vq->hw; + struct zxdh_vq_desc_extra *dxp; + uint16_t flags = vq->vq_packed.cached_flags; + int32_t i; + uint16_t idx; + + for (i = 0; i < num; i++) { + idx = vq->vq_avail_idx; + dxp = &vq->vq_descx[idx]; + dxp->cookie = (void *)cookie[i]; + dxp->ndescs = 1; + /* rx pkt fill in data_off */ + start_dp[idx].addr = rte_mbuf_iova_get(cookie[i]) + RTE_PKTMBUF_HEADROOM; + start_dp[idx].len = cookie[i]->buf_len - RTE_PKTMBUF_HEADROOM; + vq->vq_desc_head_idx = dxp->next; + if (vq->vq_desc_head_idx == 
ZXDH_VQ_RING_DESC_CHAIN_END) + vq->vq_desc_tail_idx = vq->vq_desc_head_idx; + zxdh_queue_store_flags_packed(&start_dp[idx], flags, hw->weak_barriers); + if (++vq->vq_avail_idx >= vq->vq_nentries) { + vq->vq_avail_idx -= vq->vq_nentries; + vq->vq_packed.cached_flags ^= ZXDH_VRING_PACKED_DESC_F_AVAIL_USED; + flags = vq->vq_packed.cached_flags; + } + } + vq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt - num); + return 0; +} + +int32_t zxdh_dev_rx_queue_setup_finish(struct rte_eth_dev *dev, uint16_t logic_qidx) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_virtqueue *vq = hw->vqs[logic_qidx]; + struct zxdh_virtnet_rx *rxvq = &vq->rxq; + uint16_t desc_idx; + int32_t error = 0; + + /* Allocate blank mbufs for the each rx descriptor */ + memset(&rxvq->fake_mbuf, 0, sizeof(rxvq->fake_mbuf)); + for (desc_idx = 0; desc_idx < ZXDH_MBUF_BURST_SZ; desc_idx++) + vq->sw_ring[vq->vq_nentries + desc_idx] = &rxvq->fake_mbuf; + + while (!zxdh_queue_full(vq)) { + uint16_t free_cnt = vq->vq_free_cnt; + + free_cnt = RTE_MIN(ZXDH_MBUF_BURST_SZ, free_cnt); + struct rte_mbuf *new_pkts[free_cnt]; + + if (likely(rte_pktmbuf_alloc_bulk(rxvq->mpool, new_pkts, free_cnt) == 0)) { + error = zxdh_enqueue_recv_refill_packed(vq, new_pkts, free_cnt); + if (unlikely(error)) { + int32_t i; + for (i = 0; i < free_cnt; i++) + rte_pktmbuf_free(new_pkts[i]); + } + } else { + PMD_DRV_LOG(ERR, "port %d rxq %d allocated bufs from %s failed", + hw->port_id, logic_qidx, rxvq->mpool->name); + break; + } + } + return 0; +} + +void zxdh_queue_rxvq_flush(struct zxdh_virtqueue *vq) +{ + struct zxdh_vq_desc_extra *dxp = NULL; + uint16_t i = 0; + struct zxdh_vring_packed_desc *descs = vq->vq_packed.ring.desc; + int32_t cnt = 0; + + i = vq->vq_used_cons_idx; + while (zxdh_desc_used(&descs[i], vq) && cnt++ < vq->vq_nentries) { + dxp = &vq->vq_descx[descs[i].id]; + if (dxp->cookie != NULL) { + rte_pktmbuf_free(dxp->cookie); + dxp->cookie = NULL; + } + vq->vq_free_cnt++; + vq->vq_used_cons_idx++; + if (vq->vq_used_cons_idx >= vq->vq_nentries) { + vq->vq_used_cons_idx -= vq->vq_nentries; + vq->vq_packed.used_wrap_counter ^= 1; + } + i = vq->vq_used_cons_idx; + } +} diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h index 2f602d894f..6513aec3f0 100644 --- a/drivers/net/zxdh/zxdh_queue.h +++ b/drivers/net/zxdh/zxdh_queue.h @@ -25,6 +25,11 @@ enum { ZXDH_VTNET_RQ = 0, ZXDH_VTNET_TQ = 1 }; #define ZXDH_VRING_DESC_F_WRITE 2 /* This flag means the descriptor was made available by the driver */ #define ZXDH_VRING_PACKED_DESC_F_AVAIL (1 << (7)) +#define ZXDH_VRING_PACKED_DESC_F_USED (1 << (15)) + +/* Frequently used combinations */ +#define ZXDH_VRING_PACKED_DESC_F_AVAIL_USED \ + (ZXDH_VRING_PACKED_DESC_F_AVAIL | ZXDH_VRING_PACKED_DESC_F_USED) #define ZXDH_RING_EVENT_FLAGS_ENABLE 0x0 #define ZXDH_RING_EVENT_FLAGS_DISABLE 0x1 @@ -33,6 +38,9 @@ enum { ZXDH_VTNET_RQ = 0, ZXDH_VTNET_TQ = 1 }; #define ZXDH_VQ_RING_DESC_CHAIN_END 32768 #define ZXDH_QUEUE_DEPTH 1024 +#define ZXDH_RQ_QUEUE_IDX 0 +#define ZXDH_TQ_QUEUE_IDX 1 + /* * ring descriptors: 16 bytes. * These can chain together via "next". 
@@ -290,6 +298,63 @@ zxdh_mb(uint8_t weak_barriers) rte_mb(); } +static inline int32_t +zxdh_queue_full(const struct zxdh_virtqueue *vq) +{ + return (vq->vq_free_cnt == 0); +} + +static inline void +zxdh_queue_store_flags_packed(struct zxdh_vring_packed_desc *dp, + uint16_t flags, uint8_t weak_barriers) + { + if (weak_barriers) { + #ifdef RTE_ARCH_X86_64 + rte_io_wmb(); + dp->flags = flags; + #else + rte_atomic_store_explicit(&dp->flags, flags, rte_memory_order_release); + #endif + } else { + rte_io_wmb(); + dp->flags = flags; + } +} + +static inline uint16_t +zxdh_queue_fetch_flags_packed(struct zxdh_vring_packed_desc *dp, + uint8_t weak_barriers) + { + uint16_t flags; + if (weak_barriers) { + #ifdef RTE_ARCH_X86_64 + flags = dp->flags; + rte_io_rmb(); + #else + flags = rte_atomic_load_explicit(&dp->flags, rte_memory_order_acquire); + #endif + } else { + flags = dp->flags; + rte_io_rmb(); + } + + return flags; +} + +static inline int32_t +zxdh_desc_used(struct zxdh_vring_packed_desc *desc, struct zxdh_virtqueue *vq) +{ + uint16_t flags = zxdh_queue_fetch_flags_packed(desc, vq->hw->weak_barriers); + uint16_t used = !!(flags & ZXDH_VRING_PACKED_DESC_F_USED); + uint16_t avail = !!(flags & ZXDH_VRING_PACKED_DESC_F_AVAIL); + return avail == used && used == vq->vq_packed.used_wrap_counter; +} + +static inline void zxdh_queue_notify(struct zxdh_virtqueue *vq) +{ + ZXDH_VTPCI_OPS(vq->hw)->notify_queue(vq->hw, vq); +} + struct rte_mbuf *zxdh_queue_detach_unused(struct zxdh_virtqueue *vq); int32_t zxdh_free_queues(struct rte_eth_dev *dev); int32_t zxdh_get_queue_type(uint16_t vtpci_queue_idx); @@ -306,5 +371,9 @@ int32_t zxdh_dev_rx_queue_setup(struct rte_eth_dev *dev, struct rte_mempool *mp); int32_t zxdh_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id); int32_t zxdh_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id); +int32_t zxdh_dev_rx_queue_setup_finish(struct rte_eth_dev *dev, uint16_t logic_qidx); +void zxdh_queue_rxvq_flush(struct zxdh_virtqueue *vq); +int32_t zxdh_enqueue_recv_refill_packed(struct zxdh_virtqueue *vq, + struct rte_mbuf **cookie, uint16_t num); #endif /* ZXDH_QUEUE_H */ diff --git a/drivers/net/zxdh/zxdh_rxtx.h b/drivers/net/zxdh/zxdh_rxtx.h index de9353b223..8c7f734805 100644 --- a/drivers/net/zxdh/zxdh_rxtx.h +++ b/drivers/net/zxdh/zxdh_rxtx.h @@ -20,21 +20,19 @@ struct zxdh_virtnet_stats { uint64_t size_bins[8]; }; -struct zxdh_virtnet_rx { +struct __rte_cache_aligned zxdh_virtnet_rx { struct zxdh_virtqueue *vq; - - /* dummy mbuf, for wraparound when processing RX ring. */ - struct rte_mbuf fake_mbuf; - uint64_t mbuf_initializer; /* value to init mbufs. */ struct rte_mempool *mpool; /* mempool for mbuf allocation */ uint16_t queue_id; /* DPDK queue index. */ uint16_t port_id; /* Device port identifier. */ struct zxdh_virtnet_stats stats; const struct rte_memzone *mz; /* mem zone to populate RX ring. */ -} __rte_packed; + /* dummy mbuf, for wraparound when processing RX ring. */ + struct rte_mbuf fake_mbuf; +}; -struct zxdh_virtnet_tx { +struct __rte_cache_aligned zxdh_virtnet_tx { struct zxdh_virtqueue *vq; const struct rte_memzone *zxdh_net_hdr_mz; /* memzone to populate hdr. */ rte_iova_t zxdh_net_hdr_mem; /* hdr for each xmit packet */ @@ -42,6 +40,6 @@ struct zxdh_virtnet_tx { uint16_t port_id; /* Device port identifier. */ struct zxdh_virtnet_stats stats; const struct rte_memzone *mz; /* mem zone to populate TX ring. 
*/ -} __rte_packed; +}; #endif /* ZXDH_RXTX_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 32296 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
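zxdh_notify_queue() above follows the virtio notification-data convention: without ZXDH_F_NOTIFICATION_DATA only the queue index is written, otherwise a 32-bit word that also carries the ring position. A self-contained restatement of how that word is packed (layout taken from the function itself; the helper name is illustrative):

    #include <stdint.h>
    #include <stdbool.h>

    /* bits 0..15 : queue index
     * bits 16..30: next avail index (vq_avail_idx)
     * bit  31    : packed-ring avail wrap flag (ZXDH_VRING_PACKED_DESC_F_AVAIL)
     */
    static inline uint32_t example_notify_word(uint16_t queue_index,
                                               uint16_t avail_idx, bool avail_wrap)
    {
        uint32_t data = ((uint32_t)avail_idx << 16) | queue_index;

        if (avail_wrap)
            data |= UINT32_C(1) << 31;
        return data;
    }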
* [PATCH v6 07/15] net/zxdh: provided dev simple tx implementations 2024-12-26 3:37 ` [PATCH v6 " Junlong Wang ` (5 preceding siblings ...) 2024-12-26 3:37 ` [PATCH v6 06/15] net/zxdh: dev start/stop ops implementations Junlong Wang @ 2024-12-26 3:37 ` Junlong Wang 2024-12-26 3:37 ` [PATCH v6 08/15] net/zxdh: provided dev simple rx implementations Junlong Wang ` (7 subsequent siblings) 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-26 3:37 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 18454 bytes --] provided dev simple tx implementations. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/meson.build | 1 + drivers/net/zxdh/zxdh_ethdev.c | 22 ++ drivers/net/zxdh/zxdh_queue.h | 26 ++- drivers/net/zxdh/zxdh_rxtx.c | 396 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_rxtx.h | 4 + 5 files changed, 448 insertions(+), 1 deletion(-) create mode 100644 drivers/net/zxdh/zxdh_rxtx.c diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build index 5b3af87c5b..20b2cf484a 100644 --- a/drivers/net/zxdh/meson.build +++ b/drivers/net/zxdh/meson.build @@ -21,4 +21,5 @@ sources = files( 'zxdh_queue.c', 'zxdh_np.c', 'zxdh_tables.c', + 'zxdh_rxtx.c', ) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 6e603b967e..aef77e86a0 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -15,6 +15,7 @@ #include "zxdh_queue.h" #include "zxdh_np.h" #include "zxdh_tables.h" +#include "zxdh_rxtx.h" struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; struct zxdh_shared_data *zxdh_shared_data; @@ -956,6 +957,25 @@ zxdh_dev_close(struct rte_eth_dev *dev) return ret; } +static int32_t +zxdh_set_rxtx_funcs(struct rte_eth_dev *eth_dev) +{ + struct zxdh_hw *hw = eth_dev->data->dev_private; + + if (!zxdh_pci_packed_queue(hw)) { + PMD_DRV_LOG(ERR, " port %u not support packed queue", eth_dev->data->port_id); + return -1; + } + if (!zxdh_pci_with_feature(hw, ZXDH_NET_F_MRG_RXBUF)) { + PMD_DRV_LOG(ERR, " port %u not support rx mergeable", eth_dev->data->port_id); + return -1; + } + eth_dev->tx_pkt_prepare = zxdh_xmit_pkts_prepare; + eth_dev->tx_pkt_burst = &zxdh_xmit_pkts_packed; + + return 0; +} + static int zxdh_dev_start(struct rte_eth_dev *dev) { @@ -971,6 +991,8 @@ zxdh_dev_start(struct rte_eth_dev *dev) if (ret < 0) return ret; } + + zxdh_set_rxtx_funcs(dev); ret = zxdh_intr_enable(dev); if (ret) { PMD_DRV_LOG(ERR, "interrupt enable failed"); diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h index 6513aec3f0..9343df81ac 100644 --- a/drivers/net/zxdh/zxdh_queue.h +++ b/drivers/net/zxdh/zxdh_queue.h @@ -21,8 +21,15 @@ enum { ZXDH_VTNET_RQ = 0, ZXDH_VTNET_TQ = 1 }; #define ZXDH_TQ_QUEUE_IDX 1 #define ZXDH_MAX_TX_INDIRECT 8 +/* This marks a buffer as continuing via the next field. */ +#define ZXDH_VRING_DESC_F_NEXT 1 + /* This marks a buffer as write-only (otherwise read-only). */ -#define ZXDH_VRING_DESC_F_WRITE 2 +#define ZXDH_VRING_DESC_F_WRITE 2 + +/* This means the buffer contains a list of buffer descriptors. 
*/ +#define ZXDH_VRING_DESC_F_INDIRECT 4 + /* This flag means the descriptor was made available by the driver */ #define ZXDH_VRING_PACKED_DESC_F_AVAIL (1 << (7)) #define ZXDH_VRING_PACKED_DESC_F_USED (1 << (15)) @@ -35,11 +42,17 @@ enum { ZXDH_VTNET_RQ = 0, ZXDH_VTNET_TQ = 1 }; #define ZXDH_RING_EVENT_FLAGS_DISABLE 0x1 #define ZXDH_RING_EVENT_FLAGS_DESC 0x2 +#define ZXDH_RING_F_INDIRECT_DESC 28 + #define ZXDH_VQ_RING_DESC_CHAIN_END 32768 #define ZXDH_QUEUE_DEPTH 1024 #define ZXDH_RQ_QUEUE_IDX 0 #define ZXDH_TQ_QUEUE_IDX 1 +#define ZXDH_TYPE_HDR_SIZE sizeof(struct zxdh_type_hdr) +#define ZXDH_PI_HDR_SIZE sizeof(struct zxdh_pi_hdr) +#define ZXDH_DL_NET_HDR_SIZE sizeof(struct zxdh_net_hdr_dl) +#define ZXDH_UL_NET_HDR_SIZE sizeof(struct zxdh_net_hdr_ul) /* * ring descriptors: 16 bytes. @@ -355,6 +368,17 @@ static inline void zxdh_queue_notify(struct zxdh_virtqueue *vq) ZXDH_VTPCI_OPS(vq->hw)->notify_queue(vq->hw, vq); } +static inline int32_t +zxdh_queue_kick_prepare_packed(struct zxdh_virtqueue *vq) +{ + uint16_t flags = 0; + + zxdh_mb(vq->hw->weak_barriers); + flags = vq->vq_packed.ring.device->desc_event_flags; + + return (flags != ZXDH_RING_EVENT_FLAGS_DISABLE); +} + struct rte_mbuf *zxdh_queue_detach_unused(struct zxdh_virtqueue *vq); int32_t zxdh_free_queues(struct rte_eth_dev *dev); int32_t zxdh_get_queue_type(uint16_t vtpci_queue_idx); diff --git a/drivers/net/zxdh/zxdh_rxtx.c b/drivers/net/zxdh/zxdh_rxtx.c new file mode 100644 index 0000000000..10034a0e98 --- /dev/null +++ b/drivers/net/zxdh/zxdh_rxtx.c @@ -0,0 +1,396 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#include <stdint.h> +#include <stdalign.h> + +#include <rte_net.h> + +#include "zxdh_logs.h" +#include "zxdh_pci.h" +#include "zxdh_queue.h" + +#define ZXDH_PKT_FORM_CPU 0x20 /* 1-cpu 0-np */ +#define ZXDH_NO_IP_FRAGMENT 0x2000 /* ip fragment flag */ +#define ZXDH_NO_IPID_UPDATE 0x4000 /* ipid update flag */ + +#define ZXDH_PI_L3TYPE_IP 0x00 +#define ZXDH_PI_L3TYPE_IPV6 0x40 +#define ZXDH_PI_L3TYPE_NOIP 0x80 +#define ZXDH_PI_L3TYPE_RSV 0xC0 +#define ZXDH_PI_L3TYPE_MASK 0xC0 + +#define ZXDH_PCODE_MASK 0x1F +#define ZXDH_PCODE_IP_PKT_TYPE 0x01 +#define ZXDH_PCODE_TCP_PKT_TYPE 0x02 +#define ZXDH_PCODE_UDP_PKT_TYPE 0x03 +#define ZXDH_PCODE_NO_IP_PKT_TYPE 0x09 +#define ZXDH_PCODE_NO_REASSMBLE_TCP_PKT_TYPE 0x0C + +#define ZXDH_TX_MAX_SEGS 31 +#define ZXDH_RX_MAX_SEGS 31 + +static void +zxdh_xmit_cleanup_inorder_packed(struct zxdh_virtqueue *vq, int32_t num) +{ + uint16_t used_idx = 0; + uint16_t id = 0; + uint16_t curr_id = 0; + uint16_t free_cnt = 0; + uint16_t size = vq->vq_nentries; + struct zxdh_vring_packed_desc *desc = vq->vq_packed.ring.desc; + struct zxdh_vq_desc_extra *dxp = NULL; + + used_idx = vq->vq_used_cons_idx; + /* desc_is_used has a load-acquire or rte_io_rmb inside + * and wait for used desc in virtqueue. 
+ */ + while (num > 0 && zxdh_desc_used(&desc[used_idx], vq)) { + id = desc[used_idx].id; + do { + curr_id = used_idx; + dxp = &vq->vq_descx[used_idx]; + used_idx += dxp->ndescs; + free_cnt += dxp->ndescs; + num -= dxp->ndescs; + if (used_idx >= size) { + used_idx -= size; + vq->vq_packed.used_wrap_counter ^= 1; + } + if (dxp->cookie != NULL) { + rte_pktmbuf_free(dxp->cookie); + dxp->cookie = NULL; + } + } while (curr_id != id); + } + vq->vq_used_cons_idx = used_idx; + vq->vq_free_cnt += free_cnt; +} + +static void +zxdh_ring_free_id_packed(struct zxdh_virtqueue *vq, uint16_t id) +{ + struct zxdh_vq_desc_extra *dxp = NULL; + + dxp = &vq->vq_descx[id]; + vq->vq_free_cnt += dxp->ndescs; + + if (vq->vq_desc_tail_idx == ZXDH_VQ_RING_DESC_CHAIN_END) + vq->vq_desc_head_idx = id; + else + vq->vq_descx[vq->vq_desc_tail_idx].next = id; + + vq->vq_desc_tail_idx = id; + dxp->next = ZXDH_VQ_RING_DESC_CHAIN_END; +} + +static void +zxdh_xmit_cleanup_normal_packed(struct zxdh_virtqueue *vq, int32_t num) +{ + uint16_t used_idx = 0; + uint16_t id = 0; + uint16_t size = vq->vq_nentries; + struct zxdh_vring_packed_desc *desc = vq->vq_packed.ring.desc; + struct zxdh_vq_desc_extra *dxp = NULL; + + used_idx = vq->vq_used_cons_idx; + /* desc_is_used has a load-acquire or rte_io_rmb inside + * and wait for used desc in virtqueue. + */ + while (num-- && zxdh_desc_used(&desc[used_idx], vq)) { + id = desc[used_idx].id; + dxp = &vq->vq_descx[id]; + vq->vq_used_cons_idx += dxp->ndescs; + if (vq->vq_used_cons_idx >= size) { + vq->vq_used_cons_idx -= size; + vq->vq_packed.used_wrap_counter ^= 1; + } + zxdh_ring_free_id_packed(vq, id); + if (dxp->cookie != NULL) { + rte_pktmbuf_free(dxp->cookie); + dxp->cookie = NULL; + } + used_idx = vq->vq_used_cons_idx; + } +} + +static void +zxdh_xmit_cleanup_packed(struct zxdh_virtqueue *vq, int32_t num, int32_t in_order) +{ + if (in_order) + zxdh_xmit_cleanup_inorder_packed(vq, num); + else + zxdh_xmit_cleanup_normal_packed(vq, num); +} + +static uint8_t +zxdh_xmit_get_ptype(struct rte_mbuf *m) +{ + uint8_t pcode = ZXDH_PCODE_NO_IP_PKT_TYPE; + uint8_t l3_ptype = ZXDH_PI_L3TYPE_NOIP; + + if ((m->packet_type & RTE_PTYPE_INNER_L3_MASK) == RTE_PTYPE_INNER_L3_IPV4 || + ((!(m->packet_type & RTE_PTYPE_TUNNEL_MASK)) && + (m->packet_type & RTE_PTYPE_L3_MASK) == RTE_PTYPE_L3_IPV4)) { + l3_ptype = ZXDH_PI_L3TYPE_IP; + pcode = ZXDH_PCODE_IP_PKT_TYPE; + } else if ((m->packet_type & RTE_PTYPE_INNER_L3_MASK) == RTE_PTYPE_INNER_L3_IPV6 || + ((!(m->packet_type & RTE_PTYPE_TUNNEL_MASK)) && + (m->packet_type & RTE_PTYPE_L3_MASK) == RTE_PTYPE_L3_IPV6)) { + l3_ptype = ZXDH_PI_L3TYPE_IPV6; + pcode = ZXDH_PCODE_IP_PKT_TYPE; + } else { + goto end; + } + + if ((m->packet_type & RTE_PTYPE_INNER_L4_MASK) == RTE_PTYPE_INNER_L4_TCP || + ((!(m->packet_type & RTE_PTYPE_TUNNEL_MASK)) && + (m->packet_type & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_TCP)) + pcode = ZXDH_PCODE_TCP_PKT_TYPE; + else if ((m->packet_type & RTE_PTYPE_INNER_L4_MASK) == RTE_PTYPE_INNER_L4_UDP || + ((!(m->packet_type & RTE_PTYPE_TUNNEL_MASK)) && + (m->packet_type & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_UDP)) + pcode = ZXDH_PCODE_UDP_PKT_TYPE; + +end: + return l3_ptype | ZXDH_PKT_FORM_CPU | pcode; +} + +static void zxdh_xmit_fill_net_hdr(struct rte_mbuf *cookie, + struct zxdh_net_hdr_dl *hdr) +{ + uint16_t pkt_flag_lw16 = ZXDH_NO_IPID_UPDATE; + uint16_t l3_offset; + uint32_t ol_flag = 0; + + hdr->pi_hdr.pkt_flag_lw16 = rte_be_to_cpu_16(pkt_flag_lw16); + + hdr->pi_hdr.pkt_type = zxdh_xmit_get_ptype(cookie); + l3_offset = ZXDH_DL_NET_HDR_SIZE + 
cookie->outer_l2_len + + cookie->outer_l3_len + cookie->l2_len; + hdr->pi_hdr.l3_offset = rte_be_to_cpu_16(l3_offset); + hdr->pi_hdr.l4_offset = rte_be_to_cpu_16(l3_offset + cookie->l3_len); + + hdr->pd_hdr.ol_flag = rte_be_to_cpu_32(ol_flag); +} + +static inline void zxdh_enqueue_xmit_packed_fast(struct zxdh_virtnet_tx *txvq, + struct rte_mbuf *cookie, int32_t in_order) +{ + struct zxdh_virtqueue *vq = txvq->vq; + uint16_t id = in_order ? vq->vq_avail_idx : vq->vq_desc_head_idx; + struct zxdh_vq_desc_extra *dxp = &vq->vq_descx[id]; + uint16_t flags = vq->vq_packed.cached_flags; + struct zxdh_net_hdr_dl *hdr = NULL; + + dxp->ndescs = 1; + dxp->cookie = cookie; + hdr = rte_pktmbuf_mtod_offset(cookie, struct zxdh_net_hdr_dl *, -ZXDH_DL_NET_HDR_SIZE); + zxdh_xmit_fill_net_hdr(cookie, hdr); + + uint16_t idx = vq->vq_avail_idx; + struct zxdh_vring_packed_desc *dp = &vq->vq_packed.ring.desc[idx]; + + dp->addr = rte_pktmbuf_iova(cookie) - ZXDH_DL_NET_HDR_SIZE; + dp->len = cookie->data_len + ZXDH_DL_NET_HDR_SIZE; + dp->id = id; + if (++vq->vq_avail_idx >= vq->vq_nentries) { + vq->vq_avail_idx -= vq->vq_nentries; + vq->vq_packed.cached_flags ^= ZXDH_VRING_PACKED_DESC_F_AVAIL_USED; + } + vq->vq_free_cnt--; + if (!in_order) { + vq->vq_desc_head_idx = dxp->next; + if (vq->vq_desc_head_idx == ZXDH_VQ_RING_DESC_CHAIN_END) + vq->vq_desc_tail_idx = ZXDH_VQ_RING_DESC_CHAIN_END; + } + zxdh_queue_store_flags_packed(dp, flags, vq->hw->weak_barriers); +} + +static inline void zxdh_enqueue_xmit_packed(struct zxdh_virtnet_tx *txvq, + struct rte_mbuf *cookie, + uint16_t needed, + int32_t use_indirect, + int32_t in_order) +{ + struct zxdh_tx_region *txr = txvq->zxdh_net_hdr_mz->addr; + struct zxdh_virtqueue *vq = txvq->vq; + struct zxdh_vring_packed_desc *start_dp = vq->vq_packed.ring.desc; + void *hdr = NULL; + uint16_t head_idx = vq->vq_avail_idx; + uint16_t idx = head_idx; + uint16_t prev = head_idx; + uint16_t head_flags = cookie->next ? ZXDH_VRING_DESC_F_NEXT : 0; + uint16_t seg_num = cookie->nb_segs; + uint16_t id = in_order ? vq->vq_avail_idx : vq->vq_desc_head_idx; + struct zxdh_vring_packed_desc *head_dp = &vq->vq_packed.ring.desc[idx]; + struct zxdh_vq_desc_extra *dxp = &vq->vq_descx[id]; + + dxp->ndescs = needed; + dxp->cookie = cookie; + head_flags |= vq->vq_packed.cached_flags; + /* if offload disabled, it is not zeroed below, do it now */ + + if (use_indirect) { + /** + * setup tx ring slot to point to indirect + * descriptor list stored in reserved region. + * the first slot in indirect ring is already + * preset to point to the header in reserved region + **/ + start_dp[idx].addr = + txvq->zxdh_net_hdr_mem + RTE_PTR_DIFF(&txr[idx].tx_packed_indir, txr); + start_dp[idx].len = (seg_num + 1) * sizeof(struct zxdh_vring_packed_desc); + /* Packed descriptor id needs to be restored when inorder. */ + if (in_order) + start_dp[idx].id = idx; + + /* reset flags for indirect desc */ + head_flags = ZXDH_VRING_DESC_F_INDIRECT; + head_flags |= vq->vq_packed.cached_flags; + hdr = (void *)&txr[idx].tx_hdr; + /* loop below will fill in rest of the indirect elements */ + start_dp = txr[idx].tx_packed_indir; + start_dp->len = ZXDH_DL_NET_HDR_SIZE; /* update actual net or type hdr size */ + idx = 1; + } else { + /* setup first tx ring slot to point to header stored in reserved region. 
*/ + start_dp[idx].addr = txvq->zxdh_net_hdr_mem + RTE_PTR_DIFF(&txr[idx].tx_hdr, txr); + start_dp[idx].len = ZXDH_DL_NET_HDR_SIZE; + head_flags |= ZXDH_VRING_DESC_F_NEXT; + hdr = (void *)&txr[idx].tx_hdr; + idx++; + if (idx >= vq->vq_nentries) { + idx -= vq->vq_nentries; + vq->vq_packed.cached_flags ^= ZXDH_VRING_PACKED_DESC_F_AVAIL_USED; + } + } + zxdh_xmit_fill_net_hdr(cookie, (struct zxdh_net_hdr_dl *)hdr); + + do { + start_dp[idx].addr = rte_pktmbuf_iova(cookie); + start_dp[idx].len = cookie->data_len; + if (likely(idx != head_idx)) { + uint16_t flags = cookie->next ? ZXDH_VRING_DESC_F_NEXT : 0; + flags |= vq->vq_packed.cached_flags; + start_dp[idx].flags = flags; + } + prev = idx; + idx++; + if (idx >= vq->vq_nentries) { + idx -= vq->vq_nentries; + vq->vq_packed.cached_flags ^= ZXDH_VRING_PACKED_DESC_F_AVAIL_USED; + } + } while ((cookie = cookie->next) != NULL); + start_dp[prev].id = id; + if (use_indirect) { + idx = head_idx; + if (++idx >= vq->vq_nentries) { + idx -= vq->vq_nentries; + vq->vq_packed.cached_flags ^= ZXDH_VRING_PACKED_DESC_F_AVAIL_USED; + } + } + vq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt - needed); + vq->vq_avail_idx = idx; + if (!in_order) { + vq->vq_desc_head_idx = dxp->next; + if (vq->vq_desc_head_idx == ZXDH_VQ_RING_DESC_CHAIN_END) + vq->vq_desc_tail_idx = ZXDH_VQ_RING_DESC_CHAIN_END; + } + zxdh_queue_store_flags_packed(head_dp, head_flags, vq->hw->weak_barriers); +} + +uint16_t +zxdh_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) +{ + struct zxdh_virtnet_tx *txvq = tx_queue; + struct zxdh_virtqueue *vq = txvq->vq; + struct zxdh_hw *hw = vq->hw; + uint16_t nb_tx = 0; + + bool in_order = zxdh_pci_with_feature(hw, ZXDH_F_IN_ORDER); + + if (nb_pkts > vq->vq_free_cnt) + zxdh_xmit_cleanup_packed(vq, nb_pkts - vq->vq_free_cnt, in_order); + for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) { + struct rte_mbuf *txm = tx_pkts[nb_tx]; + int32_t can_push = 0; + int32_t use_indirect = 0; + int32_t slots = 0; + int32_t need = 0; + + /* optimize ring usage */ + if ((zxdh_pci_with_feature(hw, ZXDH_F_ANY_LAYOUT) || + zxdh_pci_with_feature(hw, ZXDH_F_VERSION_1)) && + rte_mbuf_refcnt_read(txm) == 1 && + RTE_MBUF_DIRECT(txm) && + txm->nb_segs == 1 && + rte_pktmbuf_headroom(txm) >= ZXDH_DL_NET_HDR_SIZE && + rte_is_aligned(rte_pktmbuf_mtod(txm, char *), + alignof(struct zxdh_net_hdr_dl))) { + can_push = 1; + } else if (zxdh_pci_with_feature(hw, ZXDH_RING_F_INDIRECT_DESC) && + txm->nb_segs < ZXDH_MAX_TX_INDIRECT) { + use_indirect = 1; + } + /** + * How many main ring entries are needed to this Tx? + * indirect => 1 + * any_layout => number of segments + * default => number of segments + 1 + **/ + slots = use_indirect ? 1 : (txm->nb_segs + !can_push); + need = slots - vq->vq_free_cnt; + /* Positive value indicates it need free vring descriptors */ + if (unlikely(need > 0)) { + zxdh_xmit_cleanup_packed(vq, need, in_order); + need = slots - vq->vq_free_cnt; + if (unlikely(need > 0)) { + PMD_TX_LOG(ERR, "port[ep:%d, pf:%d, vf:%d, vfid:%d, pcieid:%d], queue:%d[pch:%d]. 
No free desc to xmit", + hw->vport.epid, hw->vport.pfid, hw->vport.vfid, + hw->vfid, hw->pcie_id, txvq->queue_id, + hw->channel_context[txvq->queue_id].ph_chno); + break; + } + } + /* Enqueue Packet buffers */ + if (can_push) + zxdh_enqueue_xmit_packed_fast(txvq, txm, in_order); + else + zxdh_enqueue_xmit_packed(txvq, txm, slots, use_indirect, in_order); + } + if (likely(nb_tx)) { + if (unlikely(zxdh_queue_kick_prepare_packed(vq))) { + zxdh_queue_notify(vq); + PMD_TX_LOG(DEBUG, "Notified backend after xmit"); + } + } + return nb_tx; +} + +uint16_t zxdh_xmit_pkts_prepare(void *tx_queue __rte_unused, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts) +{ + uint16_t nb_tx; + + for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) { + struct rte_mbuf *m = tx_pkts[nb_tx]; + int32_t error; + +#ifdef RTE_LIBRTE_ETHDEV_DEBUG + error = rte_validate_tx_offload(m); + if (unlikely(error)) { + rte_errno = -error; + break; + } +#endif + + error = rte_net_intel_cksum_prepare(m); + if (unlikely(error)) { + rte_errno = -error; + break; + } + } + return nb_tx; +} diff --git a/drivers/net/zxdh/zxdh_rxtx.h b/drivers/net/zxdh/zxdh_rxtx.h index 8c7f734805..0a02d319b2 100644 --- a/drivers/net/zxdh/zxdh_rxtx.h +++ b/drivers/net/zxdh/zxdh_rxtx.h @@ -42,4 +42,8 @@ struct __rte_cache_aligned zxdh_virtnet_tx { const struct rte_memzone *mz; /* mem zone to populate TX ring. */ }; +uint16_t zxdh_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); +uint16_t zxdh_xmit_pkts_prepare(void *tx_queue __rte_unused, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts); + #endif /* ZXDH_RXTX_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 45252 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
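Before enqueueing, zxdh_xmit_pkts_packed() sizes how many main-ring descriptors the mbuf chain will consume and reclaims used descriptors when the ring runs short. A self-contained restatement of that accounting (the helper name is illustrative):

    #include <stdint.h>

    /* indirect descriptor -> 1 slot (the chain lives in the reserved region);
     * header pushed inline (can_push) -> one slot per segment;
     * otherwise -> one extra slot for the separate zxdh_net_hdr_dl header.
     */
    static inline int32_t example_tx_slots(int32_t use_indirect, int32_t can_push,
                                           uint16_t nb_segs)
    {
        return use_indirect ? 1 : (int32_t)nb_segs + !can_push;
    }

A positive (slots - vq_free_cnt) difference is then passed to zxdh_xmit_cleanup_packed() as the number of used descriptors that must be reclaimed before the packet fits.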
* [PATCH v6 08/15] net/zxdh: provided dev simple rx implementations 2024-12-26 3:37 ` [PATCH v6 " Junlong Wang ` (6 preceding siblings ...) 2024-12-26 3:37 ` [PATCH v6 07/15] net/zxdh: provided dev simple tx implementations Junlong Wang @ 2024-12-26 3:37 ` Junlong Wang 2024-12-26 3:37 ` [PATCH v6 09/15] net/zxdh: link info update, set link up/down Junlong Wang ` (6 subsequent siblings) 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-26 3:37 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 11243 bytes --] provided dev simple rx implementations. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- doc/guides/nics/features/zxdh.ini | 1 + doc/guides/nics/zxdh.rst | 1 + drivers/net/zxdh/zxdh_ethdev.c | 1 + drivers/net/zxdh/zxdh_rxtx.c | 313 ++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_rxtx.h | 2 + 5 files changed, 318 insertions(+) diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index 7b72be5f25..bb44e93fad 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -9,3 +9,4 @@ x86-64 = Y ARMv8 = Y SR-IOV = Y Multiprocess aware = Y +Scattered Rx = Y diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index eb970a888f..f42db9c1f1 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -20,6 +20,7 @@ Features of the ZXDH PMD are: - Multi arch support: x86_64, ARMv8. - Multiple queues for TX and RX - SR-IOV VF +- Scattered and gather for TX and RX Driver compilation and testing diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index aef77e86a0..bc4d2a937b 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -972,6 +972,7 @@ zxdh_set_rxtx_funcs(struct rte_eth_dev *eth_dev) } eth_dev->tx_pkt_prepare = zxdh_xmit_pkts_prepare; eth_dev->tx_pkt_burst = &zxdh_xmit_pkts_packed; + eth_dev->rx_pkt_burst = &zxdh_recv_pkts_packed; return 0; } diff --git a/drivers/net/zxdh/zxdh_rxtx.c b/drivers/net/zxdh/zxdh_rxtx.c index 10034a0e98..06290d48bb 100644 --- a/drivers/net/zxdh/zxdh_rxtx.c +++ b/drivers/net/zxdh/zxdh_rxtx.c @@ -31,6 +31,93 @@ #define ZXDH_TX_MAX_SEGS 31 #define ZXDH_RX_MAX_SEGS 31 +uint32_t zxdh_outer_l2_type[16] = { + 0, + RTE_PTYPE_L2_ETHER, + RTE_PTYPE_L2_ETHER_TIMESYNC, + RTE_PTYPE_L2_ETHER_ARP, + RTE_PTYPE_L2_ETHER_LLDP, + RTE_PTYPE_L2_ETHER_NSH, + RTE_PTYPE_L2_ETHER_VLAN, + RTE_PTYPE_L2_ETHER_QINQ, + RTE_PTYPE_L2_ETHER_PPPOE, + RTE_PTYPE_L2_ETHER_FCOE, + RTE_PTYPE_L2_ETHER_MPLS, +}; + +uint32_t zxdh_outer_l3_type[16] = { + 0, + RTE_PTYPE_L3_IPV4, + RTE_PTYPE_L3_IPV4_EXT, + RTE_PTYPE_L3_IPV6, + RTE_PTYPE_L3_IPV4_EXT_UNKNOWN, + RTE_PTYPE_L3_IPV6_EXT, + RTE_PTYPE_L3_IPV6_EXT_UNKNOWN, +}; + +uint32_t zxdh_outer_l4_type[16] = { + 0, + RTE_PTYPE_L4_TCP, + RTE_PTYPE_L4_UDP, + RTE_PTYPE_L4_FRAG, + RTE_PTYPE_L4_SCTP, + RTE_PTYPE_L4_ICMP, + RTE_PTYPE_L4_NONFRAG, + RTE_PTYPE_L4_IGMP, +}; + +uint32_t zxdh_tunnel_type[16] = { + 0, + RTE_PTYPE_TUNNEL_IP, + RTE_PTYPE_TUNNEL_GRE, + RTE_PTYPE_TUNNEL_VXLAN, + RTE_PTYPE_TUNNEL_NVGRE, + RTE_PTYPE_TUNNEL_GENEVE, + RTE_PTYPE_TUNNEL_GRENAT, + RTE_PTYPE_TUNNEL_GTPC, + RTE_PTYPE_TUNNEL_GTPU, + RTE_PTYPE_TUNNEL_ESP, + RTE_PTYPE_TUNNEL_L2TP, + RTE_PTYPE_TUNNEL_VXLAN_GPE, + RTE_PTYPE_TUNNEL_MPLS_IN_GRE, + RTE_PTYPE_TUNNEL_MPLS_IN_UDP, +}; + +uint32_t zxdh_inner_l2_type[16] = { + 0, + RTE_PTYPE_INNER_L2_ETHER, + 0, + 0, + 0, + 0, + RTE_PTYPE_INNER_L2_ETHER_VLAN, + RTE_PTYPE_INNER_L2_ETHER_QINQ, + 0, + 0, + 0, +}; + 
+uint32_t zxdh_inner_l3_type[16] = { + 0, + RTE_PTYPE_INNER_L3_IPV4, + RTE_PTYPE_INNER_L3_IPV4_EXT, + RTE_PTYPE_INNER_L3_IPV6, + RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN, + RTE_PTYPE_INNER_L3_IPV6_EXT, + RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN, +}; + +uint32_t zxdh_inner_l4_type[16] = { + 0, + RTE_PTYPE_INNER_L4_TCP, + RTE_PTYPE_INNER_L4_UDP, + RTE_PTYPE_INNER_L4_FRAG, + RTE_PTYPE_INNER_L4_SCTP, + RTE_PTYPE_INNER_L4_ICMP, + 0, + 0, +}; + static void zxdh_xmit_cleanup_inorder_packed(struct zxdh_virtqueue *vq, int32_t num) { @@ -394,3 +481,229 @@ uint16_t zxdh_xmit_pkts_prepare(void *tx_queue __rte_unused, struct rte_mbuf **t } return nb_tx; } + +static uint16_t zxdh_dequeue_burst_rx_packed(struct zxdh_virtqueue *vq, + struct rte_mbuf **rx_pkts, + uint32_t *len, + uint16_t num) +{ + struct zxdh_vring_packed_desc *desc = vq->vq_packed.ring.desc; + struct rte_mbuf *cookie = NULL; + uint16_t i, used_idx; + uint16_t id; + + for (i = 0; i < num; i++) { + used_idx = vq->vq_used_cons_idx; + /** + * desc_is_used has a load-acquire or rte_io_rmb inside + * and wait for used desc in virtqueue. + */ + if (!zxdh_desc_used(&desc[used_idx], vq)) + return i; + len[i] = desc[used_idx].len; + id = desc[used_idx].id; + cookie = (struct rte_mbuf *)vq->vq_descx[id].cookie; + vq->vq_descx[id].cookie = NULL; + if (unlikely(cookie == NULL)) { + PMD_RX_LOG(ERR, + "vring descriptor with no mbuf cookie at %u", vq->vq_used_cons_idx); + break; + } + rx_pkts[i] = cookie; + vq->vq_free_cnt++; + vq->vq_used_cons_idx++; + if (vq->vq_used_cons_idx >= vq->vq_nentries) { + vq->vq_used_cons_idx -= vq->vq_nentries; + vq->vq_packed.used_wrap_counter ^= 1; + } + } + return i; +} + +static int32_t zxdh_rx_update_mbuf(struct rte_mbuf *m, struct zxdh_net_hdr_ul *hdr) +{ + struct zxdh_pd_hdr_ul *pd_hdr = &hdr->pd_hdr; + struct zxdh_pi_hdr *pi_hdr = &hdr->pi_hdr; + uint32_t idx = 0; + + m->pkt_len = rte_be_to_cpu_16(pi_hdr->ul.pkt_len); + + uint16_t pkt_type_outer = rte_be_to_cpu_16(pd_hdr->pkt_type_out); + + idx = (pkt_type_outer >> 12) & 0xF; + m->packet_type = zxdh_outer_l2_type[idx]; + idx = (pkt_type_outer >> 8) & 0xF; + m->packet_type |= zxdh_outer_l3_type[idx]; + idx = (pkt_type_outer >> 4) & 0xF; + m->packet_type |= zxdh_outer_l4_type[idx]; + idx = pkt_type_outer & 0xF; + m->packet_type |= zxdh_tunnel_type[idx]; + + uint16_t pkt_type_inner = rte_be_to_cpu_16(pd_hdr->pkt_type_in); + + if (pkt_type_inner) { + idx = (pkt_type_inner >> 12) & 0xF; + m->packet_type |= zxdh_inner_l2_type[idx]; + idx = (pkt_type_inner >> 8) & 0xF; + m->packet_type |= zxdh_inner_l3_type[idx]; + idx = (pkt_type_inner >> 4) & 0xF; + m->packet_type |= zxdh_inner_l4_type[idx]; + } + + return 0; +} + +static inline void zxdh_discard_rxbuf(struct zxdh_virtqueue *vq, struct rte_mbuf *m) +{ + int32_t error = 0; + /* + * Requeue the discarded mbuf. This should always be + * successful since it was just dequeued. 
+ */ + error = zxdh_enqueue_recv_refill_packed(vq, &m, 1); + if (unlikely(error)) { + PMD_RX_LOG(ERR, "cannot enqueue discarded mbuf"); + rte_pktmbuf_free(m); + } +} + +uint16_t zxdh_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts) +{ + struct zxdh_virtnet_rx *rxvq = rx_queue; + struct zxdh_virtqueue *vq = rxvq->vq; + struct zxdh_hw *hw = vq->hw; + struct rte_eth_dev *dev = hw->eth_dev; + struct rte_mbuf *rxm = NULL; + struct rte_mbuf *prev = NULL; + uint32_t len[ZXDH_MBUF_BURST_SZ] = {0}; + struct rte_mbuf *rcv_pkts[ZXDH_MBUF_BURST_SZ] = {NULL}; + uint32_t nb_enqueued = 0; + uint32_t seg_num = 0; + uint32_t seg_res = 0; + uint16_t hdr_size = 0; + int32_t error = 0; + uint16_t nb_rx = 0; + uint16_t num = nb_pkts; + + if (unlikely(num > ZXDH_MBUF_BURST_SZ)) + num = ZXDH_MBUF_BURST_SZ; + + num = zxdh_dequeue_burst_rx_packed(vq, rcv_pkts, len, num); + uint16_t i; + uint16_t rcvd_pkt_len = 0; + + for (i = 0; i < num; i++) { + rxm = rcv_pkts[i]; + + struct zxdh_net_hdr_ul *header = + (struct zxdh_net_hdr_ul *)((char *)rxm->buf_addr + + RTE_PKTMBUF_HEADROOM); + + seg_num = header->type_hdr.num_buffers; + if (seg_num == 0) { + PMD_RX_LOG(ERR, "dequeue %d pkt, No.%d pkt seg_num is %d", num, i, seg_num); + seg_num = 1; + } + /* bit[0:6]-pd_len unit:2B */ + uint16_t pd_len = header->type_hdr.pd_len << 1; + /* Private queue only handle type hdr */ + hdr_size = pd_len; + rxm->data_off = RTE_PKTMBUF_HEADROOM + hdr_size; + rxm->nb_segs = seg_num; + rxm->ol_flags = 0; + rxm->vlan_tci = 0; + rcvd_pkt_len = (uint32_t)(len[i] - hdr_size); + rxm->data_len = (uint16_t)(len[i] - hdr_size); + rxm->port = rxvq->port_id; + rx_pkts[nb_rx] = rxm; + prev = rxm; + /* Update rte_mbuf according to pi/pd header */ + if (zxdh_rx_update_mbuf(rxm, header) < 0) { + zxdh_discard_rxbuf(vq, rxm); + continue; + } + seg_res = seg_num - 1; + /* Merge remaining segments */ + while (seg_res != 0 && i < (num - 1)) { + i++; + rxm = rcv_pkts[i]; + rxm->data_off = RTE_PKTMBUF_HEADROOM; + rxm->data_len = (uint16_t)(len[i]); + + rcvd_pkt_len += (uint32_t)(len[i]); + prev->next = rxm; + prev = rxm; + rxm->next = NULL; + seg_res -= 1; + } + + if (!seg_res) { + if (rcvd_pkt_len != rx_pkts[nb_rx]->pkt_len) { + PMD_RX_LOG(ERR, "dropped rcvd_pkt_len %d pktlen %d.", + rcvd_pkt_len, rx_pkts[nb_rx]->pkt_len); + zxdh_discard_rxbuf(vq, rx_pkts[nb_rx]); + continue; + } + nb_rx++; + } + } + /* Last packet still need merge segments */ + while (seg_res != 0) { + uint16_t rcv_cnt = RTE_MIN((uint16_t)seg_res, ZXDH_MBUF_BURST_SZ); + uint16_t extra_idx = 0; + + rcv_cnt = zxdh_dequeue_burst_rx_packed(vq, rcv_pkts, len, rcv_cnt); + if (unlikely(rcv_cnt == 0)) { + PMD_RX_LOG(ERR, "No enough segments for packet."); + rte_pktmbuf_free(rx_pkts[nb_rx]); + break; + } + while (extra_idx < rcv_cnt) { + rxm = rcv_pkts[extra_idx]; + rxm->data_off = RTE_PKTMBUF_HEADROOM; + rxm->pkt_len = (uint32_t)(len[extra_idx]); + rxm->data_len = (uint16_t)(len[extra_idx]); + prev->next = rxm; + prev = rxm; + rxm->next = NULL; + rcvd_pkt_len += len[extra_idx]; + extra_idx += 1; + } + seg_res -= rcv_cnt; + if (!seg_res) { + if (rcvd_pkt_len != rx_pkts[nb_rx]->pkt_len) { + PMD_RX_LOG(ERR, "dropped rcvd_pkt_len %d pktlen %d.", + rcvd_pkt_len, rx_pkts[nb_rx]->pkt_len); + zxdh_discard_rxbuf(vq, rx_pkts[nb_rx]); + continue; + } + nb_rx++; + } + } + + /* Allocate new mbuf for the used descriptor */ + if (likely(!zxdh_queue_full(vq))) { + /* free_cnt may include mrg descs */ + uint16_t free_cnt = vq->vq_free_cnt; + struct rte_mbuf *new_pkts[free_cnt]; + + if 
(!rte_pktmbuf_alloc_bulk(rxvq->mpool, new_pkts, free_cnt)) { + error = zxdh_enqueue_recv_refill_packed(vq, new_pkts, free_cnt); + if (unlikely(error)) { + for (i = 0; i < free_cnt; i++) + rte_pktmbuf_free(new_pkts[i]); + } + nb_enqueued += free_cnt; + } else { + dev->data->rx_mbuf_alloc_failed += free_cnt; + } + } + if (likely(nb_enqueued)) { + if (unlikely(zxdh_queue_kick_prepare_packed(vq))) { + zxdh_queue_notify(vq); + PMD_RX_LOG(DEBUG, "Notified"); + } + } + return nb_rx; +} diff --git a/drivers/net/zxdh/zxdh_rxtx.h b/drivers/net/zxdh/zxdh_rxtx.h index 0a02d319b2..cc0004324a 100644 --- a/drivers/net/zxdh/zxdh_rxtx.h +++ b/drivers/net/zxdh/zxdh_rxtx.h @@ -45,5 +45,7 @@ struct __rte_cache_aligned zxdh_virtnet_tx { uint16_t zxdh_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); uint16_t zxdh_xmit_pkts_prepare(void *tx_queue __rte_unused, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); +uint16_t zxdh_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts); #endif /* ZXDH_RXTX_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 28867 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
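The receive path recovers the mbuf packet type from the two 16-bit words in the uplink PD header: each word splits into four 4-bit indexes into the lookup tables declared at the top of zxdh_rxtx.c. A self-contained sketch of the outer-header decode performed by zxdh_rx_update_mbuf() (the tables are passed in so the snippet stands alone; the helper name is illustrative):

    #include <stdint.h>

    /* pkt_type_out nibble layout (host order after rte_be_to_cpu_16):
     * [15:12] outer L2, [11:8] outer L3, [7:4] outer L4, [3:0] tunnel.
     */
    static uint32_t example_decode_outer_ptype(uint16_t pkt_type_out,
                                               const uint32_t l2[16], const uint32_t l3[16],
                                               const uint32_t l4[16], const uint32_t tunnel[16])
    {
        return l2[(pkt_type_out >> 12) & 0xF] |
               l3[(pkt_type_out >> 8) & 0xF] |
               l4[(pkt_type_out >> 4) & 0xF] |
               tunnel[pkt_type_out & 0xF];
    }

The inner word, pkt_type_in, is decoded the same way when it is nonzero, OR-ing the RTE_PTYPE_INNER_* values on top.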
* [PATCH v6 09/15] net/zxdh: link info update, set link up/down 2024-12-26 3:37 ` [PATCH v6 " Junlong Wang ` (7 preceding siblings ...) 2024-12-26 3:37 ` [PATCH v6 08/15] net/zxdh: provided dev simple rx implementations Junlong Wang @ 2024-12-26 3:37 ` Junlong Wang 2024-12-26 3:37 ` [PATCH v6 10/15] net/zxdh: mac set/add/remove ops implementations Junlong Wang ` (5 subsequent siblings) 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-26 3:37 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 24395 bytes --] provided link info update, set link up/down, and link interrupt handling. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- doc/guides/nics/features/zxdh.ini | 2 + doc/guides/nics/zxdh.rst | 3 + drivers/net/zxdh/meson.build | 1 + drivers/net/zxdh/zxdh_ethdev.c | 22 +++- drivers/net/zxdh/zxdh_ethdev.h | 2 + drivers/net/zxdh/zxdh_ethdev_ops.c | 166 ++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev_ops.h | 14 +++ drivers/net/zxdh/zxdh_msg.c | 60 ++++++++++ drivers/net/zxdh/zxdh_msg.h | 40 +++++++ drivers/net/zxdh/zxdh_np.c | 172 ++++++++++++++++++++++++++++- drivers/net/zxdh/zxdh_np.h | 20 ++++ drivers/net/zxdh/zxdh_tables.c | 15 +++ drivers/net/zxdh/zxdh_tables.h | 6 +- 13 files changed, 514 insertions(+), 9 deletions(-) create mode 100644 drivers/net/zxdh/zxdh_ethdev_ops.c create mode 100644 drivers/net/zxdh/zxdh_ethdev_ops.h
diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index bb44e93fad..7da3aaced1 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -10,3 +10,5 @@ ARMv8 = Y SR-IOV = Y Multiprocess aware = Y Scattered Rx = Y +Link status = Y +Link status event = Y
diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index f42db9c1f1..fdbc3b3923 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -21,6 +21,9 @@ Features of the ZXDH PMD are: - Multiple queues for TX and RX - SR-IOV VF - Scattered and gather for TX and RX +- Link Auto-negotiation +- Link state information +- Set Link down or up Driver compilation and testing
diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build index 20b2cf484a..48f8f5e1ee 100644 --- a/drivers/net/zxdh/meson.build +++ b/drivers/net/zxdh/meson.build @@ -22,4 +22,5 @@ sources = files( 'zxdh_np.c', 'zxdh_tables.c', 'zxdh_rxtx.c', + 'zxdh_ethdev_ops.c', )
diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index bc4d2a937b..e6056db14a 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -16,6 +16,7 @@ #include "zxdh_np.h" #include "zxdh_tables.h" #include "zxdh_rxtx.h" +#include "zxdh_ethdev_ops.h" struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; struct zxdh_shared_data *zxdh_shared_data; @@ -105,12 +106,18 @@ static void zxdh_devconf_intr_handler(void *param) { struct rte_eth_dev *dev = param; + struct zxdh_hw *hw = dev->data->dev_private; + + uint8_t isr = zxdh_pci_isr(hw); if (zxdh_intr_unmask(dev) < 0) PMD_DRV_LOG(ERR, "interrupt enable failed"); + if (isr & ZXDH_PCI_ISR_CONFIG) { + if (zxdh_dev_link_update(dev, 0) == 0) + rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_LSC, NULL); + } } - /* Interrupt handler triggered by NIC for handling specific interrupt.
*/ static void zxdh_fromriscv_intr_handler(void *param) @@ -914,6 +921,13 @@ zxdh_dev_stop(struct rte_eth_dev *dev) PMD_DRV_LOG(ERR, "intr disable failed"); return ret; } + + ret = zxdh_dev_set_link_down(dev); + if (ret) { + PMD_DRV_LOG(ERR, "set port %s link down failed!", dev->device->name); + return ret; + } + for (i = 0; i < dev->data->nb_rx_queues; i++) dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED; for (i = 0; i < dev->data->nb_tx_queues; i++) @@ -1012,6 +1026,9 @@ zxdh_dev_start(struct rte_eth_dev *dev) vq = hw->vqs[logic_qidx]; zxdh_queue_notify(vq); } + + zxdh_dev_set_link_up(dev); + for (i = 0; i < dev->data->nb_rx_queues; i++) dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED; for (i = 0; i < dev->data->nb_tx_queues; i++) @@ -1031,6 +1048,9 @@ static const struct eth_dev_ops zxdh_eth_dev_ops = { .tx_queue_setup = zxdh_dev_tx_queue_setup, .rx_queue_intr_enable = zxdh_dev_rx_queue_intr_enable, .rx_queue_intr_disable = zxdh_dev_rx_queue_intr_disable, + .link_update = zxdh_dev_link_update, + .dev_set_link_up = zxdh_dev_set_link_up, + .dev_set_link_down = zxdh_dev_set_link_down, }; static int32_t diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index b1f398b28e..c0b719062c 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -72,6 +72,7 @@ struct zxdh_hw { uint64_t guest_features; uint32_t max_queue_pairs; uint32_t speed; + uint32_t speed_mode; uint32_t notify_off_multiplier; uint16_t *notify_base; uint16_t pcie_id; @@ -93,6 +94,7 @@ struct zxdh_hw { uint8_t panel_id; uint8_t has_tx_offload; uint8_t has_rx_offload; + uint8_t admin_status; }; struct zxdh_dtb_shared_data { diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.c b/drivers/net/zxdh/zxdh_ethdev_ops.c new file mode 100644 index 0000000000..5a0af98cc0 --- /dev/null +++ b/drivers/net/zxdh/zxdh_ethdev_ops.c @@ -0,0 +1,166 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#include "zxdh_ethdev.h" +#include "zxdh_pci.h" +#include "zxdh_msg.h" +#include "zxdh_ethdev_ops.h" +#include "zxdh_tables.h" +#include "zxdh_logs.h" + +static int32_t zxdh_config_port_status(struct rte_eth_dev *dev, uint16_t link_status) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_port_attr_table port_attr = {0}; + struct zxdh_msg_info msg_info = {0}; + int32_t ret = 0; + + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "write port_attr failed"); + return -1; + } + port_attr.is_up = link_status; + + ret = zxdh_set_port_attr(hw->vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "write port_attr failed"); + return -1; + } + } else { + struct zxdh_port_attr_set_msg *port_attr_msg = &msg_info.data.port_attr_msg; + + zxdh_msg_head_build(hw, ZXDH_PORT_ATTRS_SET, &msg_info); + port_attr_msg->mode = ZXDH_PORT_ATTR_IS_UP_FLAG; + port_attr_msg->value = link_status; + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_PORT_ATTR_IS_UP_FLAG); + return ret; + } + } + return ret; +} + +static int32_t +zxdh_link_info_get(struct rte_eth_dev *dev, struct rte_eth_link *link) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_info msg_info = {0}; + struct zxdh_msg_reply_info reply_info = {0}; + uint16_t status = 0; + int32_t ret = 0; + + if (zxdh_pci_with_feature(hw, ZXDH_NET_F_STATUS)) + zxdh_pci_read_dev_config(hw, offsetof(struct zxdh_net_config, 
status), + &status, sizeof(status)); + + link->link_status = status; + + if (status == RTE_ETH_LINK_DOWN) { + link->link_speed = RTE_ETH_SPEED_NUM_UNKNOWN; + link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX; + } else { + zxdh_agent_msg_build(hw, ZXDH_MAC_LINK_GET, &msg_info); + + ret = zxdh_send_msg_to_riscv(dev, &msg_info, sizeof(struct zxdh_msg_info), + &reply_info, sizeof(struct zxdh_msg_reply_info), + ZXDH_BAR_MODULE_MAC); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_MAC_LINK_GET); + return -1; + } + link->link_speed = reply_info.reply_body.link_msg.speed; + hw->speed_mode = reply_info.reply_body.link_msg.speed_modes; + if ((reply_info.reply_body.link_msg.duplex & RTE_ETH_LINK_FULL_DUPLEX) == + RTE_ETH_LINK_FULL_DUPLEX) + link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX; + else + link->link_duplex = RTE_ETH_LINK_HALF_DUPLEX; + } + hw->speed = link->link_speed; + + return 0; +} + +static int zxdh_set_link_status(struct rte_eth_dev *dev, uint8_t link_status) +{ + uint16_t curr_link_status = dev->data->dev_link.link_status; + + struct rte_eth_link link; + struct zxdh_hw *hw = dev->data->dev_private; + int32_t ret = 0; + + if (link_status == curr_link_status) { + PMD_DRV_LOG(DEBUG, "curr_link_status %u", curr_link_status); + return 0; + } + + hw->admin_status = link_status; + ret = zxdh_link_info_get(dev, &link); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to get link status from hw"); + return ret; + } + dev->data->dev_link.link_status = hw->admin_status & link.link_status; + + if (dev->data->dev_link.link_status == RTE_ETH_LINK_UP) { + dev->data->dev_link.link_speed = link.link_speed; + dev->data->dev_link.link_duplex = link.link_duplex; + } else { + dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN; + dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX; + } + return zxdh_config_port_status(dev, dev->data->dev_link.link_status); +} + +int zxdh_dev_set_link_up(struct rte_eth_dev *dev) +{ + int ret = zxdh_set_link_status(dev, RTE_ETH_LINK_UP); + + if (ret) + PMD_DRV_LOG(ERR, "Set link up failed, code:%d", ret); + + return ret; +} + +int32_t zxdh_dev_link_update(struct rte_eth_dev *dev, int32_t wait_to_complete __rte_unused) +{ + struct rte_eth_link link; + struct zxdh_hw *hw = dev->data->dev_private; + int32_t ret = 0; + + memset(&link, 0, sizeof(link)); + link.link_duplex = hw->duplex; + link.link_speed = hw->speed; + link.link_autoneg = RTE_ETH_LINK_AUTONEG; + + ret = zxdh_link_info_get(dev, &link); + if (ret != 0) { + PMD_DRV_LOG(ERR, " Failed to get link status from hw"); + return ret; + } + link.link_status &= hw->admin_status; + if (link.link_status == RTE_ETH_LINK_DOWN) + link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN; + + ret = zxdh_config_port_status(dev, link.link_status); + if (ret != 0) { + PMD_DRV_LOG(ERR, "set port attr %d failed.", link.link_status); + return ret; + } + return rte_eth_linkstatus_set(dev, &link); +} + +int zxdh_dev_set_link_down(struct rte_eth_dev *dev) +{ + int ret = zxdh_set_link_status(dev, RTE_ETH_LINK_DOWN); + + if (ret) + PMD_DRV_LOG(ERR, "Set link down failed"); + return ret; +} diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.h b/drivers/net/zxdh/zxdh_ethdev_ops.h new file mode 100644 index 0000000000..c6d6ca56fd --- /dev/null +++ b/drivers/net/zxdh/zxdh_ethdev_ops.h @@ -0,0 +1,14 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#ifndef ZXDH_ETHDEV_OPS_H +#define ZXDH_ETHDEV_OPS_H + +#include "zxdh_ethdev.h" + +int zxdh_dev_set_link_up(struct 
rte_eth_dev *dev); +int zxdh_dev_set_link_down(struct rte_eth_dev *dev); +int32_t zxdh_dev_link_update(struct rte_eth_dev *dev, int32_t wait_to_complete __rte_unused); + +#endif /* ZXDH_ETHDEV_OPS_H */ diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c index aa2e10fd45..a6e19bbdd8 100644 --- a/drivers/net/zxdh/zxdh_msg.c +++ b/drivers/net/zxdh/zxdh_msg.c @@ -1134,6 +1134,54 @@ int zxdh_vf_send_msg_to_pf(struct rte_eth_dev *dev, void *msg_req, return 0; } +int32_t zxdh_send_msg_to_riscv(struct rte_eth_dev *dev, void *msg_req, + uint16_t msg_req_len, void *reply, uint16_t reply_len, + enum ZXDH_BAR_MODULE_ID module_id) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_recviver_mem result = {0}; + struct zxdh_msg_reply_info reply_info = {0}; + + if (reply) { + RTE_ASSERT(reply_len < sizeof(zxdh_msg_reply_info)); + result.recv_buffer = reply; + result.buffer_len = reply_len; + } else { + result.recv_buffer = &reply_info; + result.buffer_len = sizeof(reply_info); + } + struct zxdh_msg_reply_head *reply_head = + &(((struct zxdh_msg_reply_info *)result.recv_buffer)->reply_head); + struct zxdh_msg_reply_body *reply_body = + &(((struct zxdh_msg_reply_info *)result.recv_buffer)->reply_body); + + struct zxdh_pci_bar_msg in = { + .payload_addr = &msg_req, + .payload_len = msg_req_len, + .virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_CTRLCH_OFFSET), + .src = hw->is_pf ? ZXDH_MSG_CHAN_END_PF : ZXDH_MSG_CHAN_END_VF, + .dst = ZXDH_MSG_CHAN_END_RISC, + .module_id = module_id, + .src_pcieid = hw->pcie_id, + }; + + if (zxdh_bar_chan_sync_msg_send(&in, &result) != ZXDH_BAR_MSG_OK) { + PMD_MSG_LOG(ERR, "Failed to send sync messages or receive response"); + return -1; + } + if (reply_head->flag != ZXDH_MSG_REPS_OK) { + PMD_MSG_LOG(ERR, "vf[%d] get pf reply failed: reply_head flag : 0x%x(0xff is OK).replylen %d", + hw->vport.vfid, reply_head->flag, reply_head->reps_len); + return -1; + } + if (reply_body->flag != ZXDH_REPS_SUCC) { + PMD_MSG_LOG(ERR, "vf[%d] msg processing failed", hw->vfid); + return -1; + } + + return 0; +} + void zxdh_msg_head_build(struct zxdh_hw *hw, enum zxdh_msg_type type, struct zxdh_msg_info *msg_info) { @@ -1144,3 +1192,15 @@ void zxdh_msg_head_build(struct zxdh_hw *hw, enum zxdh_msg_type type, msghead->vf_id = hw->vport.vfid; msghead->pcieid = hw->pcie_id; } + +void zxdh_agent_msg_build(struct zxdh_hw *hw, enum zxdh_agent_msg_type type, + struct zxdh_msg_info *msg_info) +{ + struct zxdh_agent_msg_head *agent_head = &msg_info->agent_msg_head; + + agent_head->msg_type = type; + agent_head->panel_id = hw->panel_id; + agent_head->phyport = hw->phyport; + agent_head->vf_id = hw->vfid; + agent_head->pcie_id = hw->pcie_id; +} diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index 613ca71170..a78075c914 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -164,11 +164,18 @@ enum pciebar_layout_type { ZXDH_URI_MAX, }; +/* riscv msg opcodes */ +enum zxdh_agent_msg_type { + ZXDH_MAC_LINK_GET = 14, +}; + enum zxdh_msg_type { ZXDH_NULL = 0, ZXDH_VF_PORT_INIT = 1, ZXDH_VF_PORT_UNINIT = 2, + ZXDH_PORT_ATTRS_SET = 25, + ZXDH_MSG_TYPE_END, }; @@ -261,6 +268,15 @@ struct zxdh_offset_get_msg { uint16_t type; }; +struct zxdh_link_info_msg { + uint8_t autoneg; + uint8_t link_state; + uint8_t blink_enable; + uint8_t duplex; + uint32_t speed_modes; + uint32_t speed; +} __rte_packed; + struct zxdh_msg_reply_head { uint8_t flag; uint16_t reps_len; @@ -276,6 +292,7 @@ struct zxdh_msg_reply_body { enum 
zxdh_reps_flag flag; union { uint8_t reply_data[ZXDH_MSG_REPLY_BODY_MAX_LEN - sizeof(enum zxdh_reps_flag)]; + struct zxdh_link_info_msg link_msg; } __rte_packed; } __rte_packed; @@ -291,6 +308,12 @@ struct zxdh_vf_init_msg { uint8_t rss_enable; } __rte_packed; +struct zxdh_port_attr_set_msg { + uint32_t mode; + uint32_t value; + uint8_t allmulti_follow; +} __rte_packed; + struct zxdh_msg_head { enum zxdh_msg_type msg_type; uint16_t vport; @@ -298,14 +321,26 @@ struct zxdh_msg_head { uint16_t pcieid; } __rte_packed; +struct zxdh_agent_msg_head { + enum zxdh_agent_msg_type msg_type; + uint8_t panel_id; + uint8_t phyport; + uint8_t rsv; + uint16_t vf_id; + uint16_t pcie_id; +} __rte_packed; + struct zxdh_msg_info { union { uint8_t head_len[ZXDH_MSG_HEAD_LEN]; struct zxdh_msg_head msg_head; + struct zxdh_agent_msg_head agent_msg_head; }; union { uint8_t datainfo[ZXDH_MSG_REQ_BODY_MAX_LEN]; struct zxdh_vf_init_msg vf_init_msg; + struct zxdh_port_attr_set_msg port_attr_msg; + struct zxdh_link_info_msg link_msg; } __rte_packed data; } __rte_packed; @@ -326,5 +361,10 @@ void zxdh_msg_head_build(struct zxdh_hw *hw, enum zxdh_msg_type type, struct zxdh_msg_info *msg_info); int zxdh_vf_send_msg_to_pf(struct rte_eth_dev *dev, void *msg_req, uint16_t msg_req_len, void *reply, uint16_t reply_len); +void zxdh_agent_msg_build(struct zxdh_hw *hw, enum zxdh_agent_msg_type type, + struct zxdh_msg_info *msg_info); +int32_t zxdh_send_msg_to_riscv(struct rte_eth_dev *dev, void *msg_req, + uint16_t msg_req_len, void *reply, uint16_t reply_len, + enum ZXDH_BAR_MODULE_ID module_id); #endif /* ZXDH_MSG_H */ diff --git a/drivers/net/zxdh/zxdh_np.c b/drivers/net/zxdh/zxdh_np.c index 99a7dc11b4..1f06539263 100644 --- a/drivers/net/zxdh/zxdh_np.c +++ b/drivers/net/zxdh/zxdh_np.c @@ -36,6 +36,10 @@ ZXDH_SDT_TBL_DATA_T g_sdt_info[ZXDH_DEV_CHANNEL_MAX][ZXDH_DEV_SDT_ID_MAX]; #define ZXDH_COMM_GET_BIT_MASK(_inttype_, _bitqnt_)\ ((_inttype_)(((_bitqnt_) < 32))) +#define ZXDH_COMM_UINT32_GET_BITS(_uidst_, _uisrc_, _uistartpos_, _uilen_)\ + ((_uidst_) = (((_uisrc_) >> (_uistartpos_)) & \ + (ZXDH_COMM_GET_BIT_MASK(uint32_t, (_uilen_))))) + #define ZXDH_REG_DATA_MAX (128) #define ZXDH_COMM_CHECK_DEV_POINT(dev_id, point)\ @@ -1456,15 +1460,11 @@ zxdh_np_dtb_table_entry_write(uint32_t dev_id, return rc; } -static uint32_t +static void zxdh_np_sdt_tbl_data_get(uint32_t dev_id, uint32_t sdt_no, ZXDH_SDT_TBL_DATA_T *p_sdt_data) { - uint32_t rc = 0; - p_sdt_data->data_high32 = g_sdt_info[dev_id][sdt_no].data_high32; p_sdt_data->data_low32 = g_sdt_info[dev_id][sdt_no].data_low32; - - return rc; } int @@ -1507,7 +1507,7 @@ zxdh_np_dtb_table_entry_delete(uint32_t dev_id, pentry = delete_entries + entry_index; sdt_no = pentry->sdt_no; - rc = zxdh_np_sdt_tbl_data_get(dev_id, sdt_no, &sdt_tbl); + zxdh_np_sdt_tbl_data_get(dev_id, sdt_no, &sdt_tbl); switch (tbl_type) { case ZXDH_SDT_TBLT_ERAM: { @@ -1557,3 +1557,163 @@ zxdh_np_dtb_table_entry_delete(uint32_t dev_id, rte_free(p_data_buff_ex); return 0; } + +static uint32_t +zxdh_np_sdt_tbl_data_parser(uint32_t sdt_hig32, uint32_t sdt_low32, void *p_sdt_info) +{ + uint32_t tbl_type = 0; + uint32_t clutch_en = 0; + + ZXDH_SDTTBL_ERAM_T *p_sdt_eram = NULL; + ZXDH_SDTTBL_PORTTBL_T *p_sdt_porttbl = NULL; + + ZXDH_COMM_UINT32_GET_BITS(tbl_type, sdt_hig32, + ZXDH_SDT_H_TBL_TYPE_BT_POS, ZXDH_SDT_H_TBL_TYPE_BT_LEN); + ZXDH_COMM_UINT32_GET_BITS(clutch_en, sdt_low32, 0, 1); + + switch (tbl_type) { + case ZXDH_SDT_TBLT_ERAM: + { + p_sdt_eram = (ZXDH_SDTTBL_ERAM_T *)p_sdt_info; + p_sdt_eram->table_type = 
tbl_type; + p_sdt_eram->eram_clutch_en = clutch_en; + break; + } + + case ZXDH_SDT_TBLT_PORTTBL: + { + p_sdt_porttbl = (ZXDH_SDTTBL_PORTTBL_T *)p_sdt_info; + p_sdt_porttbl->table_type = tbl_type; + p_sdt_porttbl->porttbl_clutch_en = clutch_en; + break; + } + default: + { + PMD_DRV_LOG(ERR, "SDT table_type[ %d ] is invalid!", tbl_type); + return 1; + } + } + + return 0; +} + +static uint32_t +zxdh_np_soft_sdt_tbl_get(uint32_t dev_id, uint32_t sdt_no, void *p_sdt_info) +{ + ZXDH_SDT_TBL_DATA_T sdt_tbl = {0}; + uint32_t rc; + + zxdh_np_sdt_tbl_data_get(dev_id, sdt_no, &sdt_tbl); + + rc = zxdh_np_sdt_tbl_data_parser(sdt_tbl.data_high32, sdt_tbl.data_low32, p_sdt_info); + if (rc != 0) + PMD_DRV_LOG(ERR, "dpp sdt [%d] tbl_data_parser error.", sdt_no); + + return rc; +} + +static void +zxdh_np_eram_index_cal(uint32_t eram_mode, uint32_t index, + uint32_t *p_row_index, uint32_t *p_col_index) +{ + uint32_t row_index = 0; + uint32_t col_index = 0; + + switch (eram_mode) { + case ZXDH_ERAM128_TBL_128b: + { + row_index = index; + break; + } + case ZXDH_ERAM128_TBL_64b: + { + row_index = (index >> 1); + col_index = index & 0x1; + break; + } + case ZXDH_ERAM128_TBL_1b: + { + row_index = (index >> 7); + col_index = index & 0x7F; + break; + } + } + *p_row_index = row_index; + *p_col_index = col_index; +} + +static uint32_t +zxdh_np_dtb_eram_data_get(uint32_t dev_id, uint32_t queue_id, uint32_t sdt_no, + ZXDH_DTB_ERAM_ENTRY_INFO_T *p_dump_eram_entry) +{ + uint32_t index = p_dump_eram_entry->index; + uint32_t *p_data = p_dump_eram_entry->p_data; + ZXDH_SDTTBL_ERAM_T sdt_eram_info = {0}; + uint32_t temp_data[4] = {0}; + uint32_t row_index = 0; + uint32_t col_index = 0; + uint32_t rd_mode; + uint32_t rc; + + rc = zxdh_np_soft_sdt_tbl_get(queue_id, sdt_no, &sdt_eram_info); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rc, "dpp_soft_sdt_tbl_get"); + rd_mode = sdt_eram_info.eram_mode; + + zxdh_np_eram_index_cal(rd_mode, index, &row_index, &col_index); + + switch (rd_mode) { + case ZXDH_ERAM128_TBL_128b: + { + memcpy(p_data, temp_data, (128 / 8)); + break; + } + case ZXDH_ERAM128_TBL_64b: + { + memcpy(p_data, temp_data + ((1 - col_index) << 1), (64 / 8)); + break; + } + case ZXDH_ERAM128_TBL_1b: + { + ZXDH_COMM_UINT32_GET_BITS(p_data[0], *(temp_data + + (3 - col_index / 32)), (col_index % 32), 1); + break; + } + } + return rc; +} + +int +zxdh_np_dtb_table_entry_get(uint32_t dev_id, + uint32_t queue_id, + ZXDH_DTB_USER_ENTRY_T *get_entry, + uint32_t srh_mode) +{ + ZXDH_SDT_TBL_DATA_T sdt_tbl = {0}; + uint32_t tbl_type = 0; + uint32_t rc; + uint32_t sdt_no; + + sdt_no = get_entry->sdt_no; + zxdh_np_sdt_tbl_data_get(srh_mode, sdt_no, &sdt_tbl); + + ZXDH_COMM_UINT32_GET_BITS(tbl_type, sdt_tbl.data_high32, + ZXDH_SDT_H_TBL_TYPE_BT_POS, ZXDH_SDT_H_TBL_TYPE_BT_LEN); + switch (tbl_type) { + case ZXDH_SDT_TBLT_ERAM: + { + rc = zxdh_np_dtb_eram_data_get(dev_id, + queue_id, + sdt_no, + (ZXDH_DTB_ERAM_ENTRY_INFO_T *)get_entry->p_entry_data); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_dtb_eram_data_get"); + break; + } + default: + { + PMD_DRV_LOG(ERR, "SDT table_type[ %d ] is invalid!", tbl_type); + return 1; + } + } + + return 0; +} diff --git a/drivers/net/zxdh/zxdh_np.h b/drivers/net/zxdh/zxdh_np.h index 42a652dd6b..ac3931ba65 100644 --- a/drivers/net/zxdh/zxdh_np.h +++ b/drivers/net/zxdh/zxdh_np.h @@ -514,11 +514,31 @@ typedef struct zxdh_sdt_tbl_data_t { uint32_t data_low32; } ZXDH_SDT_TBL_DATA_T; +typedef struct zxdh_sdt_tbl_etcam_t { + uint32_t table_type; + uint32_t etcam_id; + uint32_t etcam_key_mode; + uint32_t etcam_table_id; + 
uint32_t no_as_rsp_mode; + uint32_t as_en; + uint32_t as_eram_baddr; + uint32_t as_rsp_mode; + uint32_t etcam_table_depth; + uint32_t etcam_clutch_en; +} ZXDH_SDTTBL_ETCAM_T; + +typedef struct zxdh_sdt_tbl_porttbl_t { + uint32_t table_type; + uint32_t porttbl_clutch_en; +} ZXDH_SDTTBL_PORTTBL_T; + int zxdh_np_host_init(uint32_t dev_id, ZXDH_DEV_INIT_CTRL_T *p_dev_init_ctrl); int zxdh_np_online_uninit(uint32_t dev_id, char *port_name, uint32_t queue_id); int zxdh_np_dtb_table_entry_write(uint32_t dev_id, uint32_t queue_id, uint32_t entrynum, ZXDH_DTB_USER_ENTRY_T *down_entries); int zxdh_np_dtb_table_entry_delete(uint32_t dev_id, uint32_t queue_id, uint32_t entrynum, ZXDH_DTB_USER_ENTRY_T *delete_entries); +int zxdh_np_dtb_table_entry_get(uint32_t dev_id, uint32_t queue_id, + ZXDH_DTB_USER_ENTRY_T *get_entry, uint32_t srh_mode); #endif /* ZXDH_NP_H */ diff --git a/drivers/net/zxdh/zxdh_tables.c b/drivers/net/zxdh/zxdh_tables.c index 9fd184e612..db0132ce3f 100644 --- a/drivers/net/zxdh/zxdh_tables.c +++ b/drivers/net/zxdh/zxdh_tables.c @@ -134,3 +134,18 @@ int zxdh_panel_table_init(struct rte_eth_dev *dev) return ret; } + +int +zxdh_get_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr) +{ + int ret; + + ZXDH_DTB_ERAM_ENTRY_INFO_T entry = {vfid, (uint32_t *)port_attr}; + ZXDH_DTB_USER_ENTRY_T user_entry_get = {ZXDH_SDT_VPORT_ATT_TABLE, &entry}; + + ret = zxdh_np_dtb_table_entry_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, &user_entry_get, 1); + if (ret != 0) + PMD_DRV_LOG(ERR, "get port_attr vfid:%d failed, ret:%d ", vfid, ret); + + return ret; +} diff --git a/drivers/net/zxdh/zxdh_tables.h b/drivers/net/zxdh/zxdh_tables.h index 5e9b36faee..8676a8b375 100644 --- a/drivers/net/zxdh/zxdh_tables.h +++ b/drivers/net/zxdh/zxdh_tables.h @@ -7,9 +7,10 @@ #include <stdint.h> -extern struct zxdh_dtb_shared_data g_dtb_data; - #define ZXDH_DEVICE_NO 0 +#define ZXDH_PORT_ATTR_IS_UP_FLAG 35 + +extern struct zxdh_dtb_shared_data g_dtb_data; struct zxdh_port_attr_table { #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN @@ -145,5 +146,6 @@ int zxdh_port_attr_init(struct rte_eth_dev *dev); int zxdh_panel_table_init(struct rte_eth_dev *dev); int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); int zxdh_port_attr_uninit(struct rte_eth_dev *dev); +int zxdh_get_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); #endif /* ZXDH_TABLES_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 52659 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
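The link ops added in this patch surface through the standard ethdev API. Below is a
minimal application-side sketch (not part of the patch) that arms the LSC event which
zxdh_devconf_intr_handler() now raises and reads the state cached by
zxdh_dev_link_update(). The lsc_event_cb()/enable_lsc() names and the error handling
are illustrative assumptions only.

/* Minimal sketch: consuming the link_update / LSC support this patch adds.
 * Assumes the port is bound to the zxdh PMD and otherwise configured. */
#include <stdio.h>
#include <rte_ethdev.h>

static int
lsc_event_cb(uint16_t port_id, enum rte_eth_event_type event,
	     void *cb_arg __rte_unused, void *ret_param __rte_unused)
{
	struct rte_eth_link link;

	if (event != RTE_ETH_EVENT_INTR_LSC)
		return 0;

	/* Non-blocking read of the state cached by zxdh_dev_link_update() */
	if (rte_eth_link_get_nowait(port_id, &link) == 0)
		printf("port %u link %s, speed %u Mbps\n", port_id,
		       link.link_status == RTE_ETH_LINK_UP ? "up" : "down",
		       link.link_speed);
	return 0;
}

static int
enable_lsc(uint16_t port_id, struct rte_eth_conf *conf)
{
	/* intr_conf.lsc = 1 is what routes the config-change ISR seen in
	 * zxdh_devconf_intr_handler() to RTE_ETH_EVENT_INTR_LSC. */
	conf->intr_conf.lsc = 1;
	return rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_INTR_LSC,
					     lsc_event_cb, NULL);
}

rte_eth_dev_set_link_up()/rte_eth_dev_set_link_down() then reach the new
zxdh_dev_set_link_up()/zxdh_dev_set_link_down() ops directly.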
* [PATCH v6 10/15] net/zxdh: mac set/add/remove ops implementations 2024-12-26 3:37 ` [PATCH v6 " Junlong Wang ` (8 preceding siblings ...) 2024-12-26 3:37 ` [PATCH v6 09/15] net/zxdh: link info update, set link up/down Junlong Wang @ 2024-12-26 3:37 ` Junlong Wang 2024-12-26 3:37 ` [PATCH v6 11/15] net/zxdh: promisc/allmulti " Junlong Wang ` (4 subsequent siblings) 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-26 3:37 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 24778 bytes --] provided mac set/add/remove ops. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- doc/guides/nics/features/zxdh.ini | 2 + doc/guides/nics/zxdh.rst | 2 + drivers/net/zxdh/zxdh_common.c | 24 +++ drivers/net/zxdh/zxdh_common.h | 1 + drivers/net/zxdh/zxdh_ethdev.c | 33 ++++- drivers/net/zxdh/zxdh_ethdev.h | 3 + drivers/net/zxdh/zxdh_ethdev_ops.c | 231 +++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev_ops.h | 4 + drivers/net/zxdh/zxdh_msg.h | 12 ++ drivers/net/zxdh/zxdh_np.h | 5 + drivers/net/zxdh/zxdh_tables.c | 197 ++++++++++++++++++++++++ drivers/net/zxdh/zxdh_tables.h | 36 +++++ 12 files changed, 548 insertions(+), 2 deletions(-) diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index 7da3aaced1..dc09fe3453 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -12,3 +12,5 @@ Multiprocess aware = Y Scattered Rx = Y Link status = Y Link status event = Y +Unicast MAC filter = Y +Multicast MAC filter = Y diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index fdbc3b3923..e0b0776aca 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -24,6 +24,8 @@ Features of the ZXDH PMD are: - Link Auto-negotiation - Link state information - Set Link down or up +- Unicast MAC filter +- Multicast MAC filter Driver compilation and testing diff --git a/drivers/net/zxdh/zxdh_common.c b/drivers/net/zxdh/zxdh_common.c index 4f18c97ed7..75883a8897 100644 --- a/drivers/net/zxdh/zxdh_common.c +++ b/drivers/net/zxdh/zxdh_common.c @@ -256,6 +256,30 @@ zxdh_panelid_get(struct rte_eth_dev *dev, uint8_t *panelid) return ret; } +static int +zxdh_get_res_hash_id(struct zxdh_res_para *in, uint8_t *hash_id) +{ + uint8_t reps = 0; + uint16_t reps_len = 0; + + if (zxdh_get_res_info(in, ZXDH_TBL_FIELD_HASHID, &reps, &reps_len) != ZXDH_BAR_MSG_OK) + return -1; + + *hash_id = reps; + return ZXDH_BAR_MSG_OK; +} + +int32_t +zxdh_hashidx_get(struct rte_eth_dev *dev, uint8_t *hash_idx) +{ + struct zxdh_res_para param; + + zxdh_fill_res_para(dev, ¶m); + int32_t ret = zxdh_get_res_hash_id(¶m, hash_idx); + + return ret; +} + uint32_t zxdh_read_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg) { diff --git a/drivers/net/zxdh/zxdh_common.h b/drivers/net/zxdh/zxdh_common.h index 72c29e1522..826f1fb95d 100644 --- a/drivers/net/zxdh/zxdh_common.h +++ b/drivers/net/zxdh/zxdh_common.h @@ -22,6 +22,7 @@ struct zxdh_res_para { int32_t zxdh_phyport_get(struct rte_eth_dev *dev, uint8_t *phyport); int32_t zxdh_panelid_get(struct rte_eth_dev *dev, uint8_t *pannelid); +int32_t zxdh_hashidx_get(struct rte_eth_dev *dev, uint8_t *hash_idx); uint32_t zxdh_read_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg); void zxdh_write_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg, uint32_t val); void zxdh_release_lock(struct zxdh_hw *hw); diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index e6056db14a..ea3c08be58 
100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -991,6 +991,23 @@ zxdh_set_rxtx_funcs(struct rte_eth_dev *eth_dev) return 0; } +static int +zxdh_mac_config(struct rte_eth_dev *eth_dev) +{ + struct zxdh_hw *hw = eth_dev->data->dev_private; + int ret = 0; + + if (hw->is_pf) { + ret = zxdh_set_mac_table(hw->vport.vport, + ð_dev->data->mac_addrs[0], hw->hash_search_index); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to add mac: port 0x%x", hw->vport.vport); + return ret; + } + } + return ret; +} + static int zxdh_dev_start(struct rte_eth_dev *dev) { @@ -1029,6 +1046,10 @@ zxdh_dev_start(struct rte_eth_dev *dev) zxdh_dev_set_link_up(dev); + ret = zxdh_mac_config(hw->eth_dev); + if (ret) + PMD_DRV_LOG(ERR, " mac config failed"); + for (i = 0; i < dev->data->nb_rx_queues; i++) dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED; for (i = 0; i < dev->data->nb_tx_queues; i++) @@ -1051,6 +1072,9 @@ static const struct eth_dev_ops zxdh_eth_dev_ops = { .link_update = zxdh_dev_link_update, .dev_set_link_up = zxdh_dev_set_link_up, .dev_set_link_down = zxdh_dev_set_link_down, + .mac_addr_add = zxdh_dev_mac_addr_add, + .mac_addr_remove = zxdh_dev_mac_addr_remove, + .mac_addr_set = zxdh_dev_mac_addr_set, }; static int32_t @@ -1092,15 +1116,20 @@ zxdh_agent_comm(struct rte_eth_dev *eth_dev, struct zxdh_hw *hw) PMD_DRV_LOG(ERR, "Failed to get phyport"); return -1; } - PMD_DRV_LOG(INFO, "Get phyport success: 0x%x", hw->phyport); + PMD_DRV_LOG(DEBUG, "Get phyport success: 0x%x", hw->phyport); hw->vfid = zxdh_vport_to_vfid(hw->vport); + if (zxdh_hashidx_get(eth_dev, &hw->hash_search_index) != 0) { + PMD_DRV_LOG(ERR, "Failed to get hash idx"); + return -1; + } + if (zxdh_panelid_get(eth_dev, &hw->panel_id) != 0) { PMD_DRV_LOG(ERR, "Failed to get panel_id"); return -1; } - PMD_DRV_LOG(INFO, "Get panel id success: 0x%x", hw->panel_id); + PMD_DRV_LOG(DEBUG, "Get panel id success: 0x%x", hw->panel_id); return 0; } diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index c0b719062c..5b95cb1c2a 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -80,6 +80,8 @@ struct zxdh_hw { uint16_t port_id; uint16_t vfid; uint16_t queue_num; + uint16_t mc_num; + uint16_t uc_num; uint8_t *isr; uint8_t weak_barriers; @@ -92,6 +94,7 @@ struct zxdh_hw { uint8_t msg_chan_init; uint8_t phyport; uint8_t panel_id; + uint8_t hash_search_index; uint8_t has_tx_offload; uint8_t has_rx_offload; uint8_t admin_status; diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.c b/drivers/net/zxdh/zxdh_ethdev_ops.c index 5a0af98cc0..35e37483e3 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.c +++ b/drivers/net/zxdh/zxdh_ethdev_ops.c @@ -164,3 +164,234 @@ int zxdh_dev_set_link_down(struct rte_eth_dev *dev) PMD_DRV_LOG(ERR, "Set link down failed"); return ret; } + +int zxdh_dev_mac_addr_set(struct rte_eth_dev *dev, struct rte_ether_addr *addr) +{ + struct zxdh_hw *hw = (struct zxdh_hw *)dev->data->dev_private; + struct rte_ether_addr *old_addr = &dev->data->mac_addrs[0]; + struct zxdh_msg_info msg_info = {0}; + uint16_t ret = 0; + + if (!rte_is_valid_assigned_ether_addr(addr)) { + PMD_DRV_LOG(ERR, "mac address is invalid!"); + return -EINVAL; + } + + if (hw->is_pf) { + ret = zxdh_del_mac_table(hw->vport.vport, old_addr, hw->hash_search_index); + if (ret) { + PMD_DRV_LOG(ERR, "mac_addr_add failed, code:%d", ret); + return ret; + } + hw->uc_num--; + + ret = zxdh_set_mac_table(hw->vport.vport, addr, hw->hash_search_index); + if (ret) { + PMD_DRV_LOG(ERR, "mac_addr_add 
failed, code:%d", ret); + return ret; + } + hw->uc_num++; + } else { + struct zxdh_mac_filter *mac_filter = &msg_info.data.mac_filter_msg; + + mac_filter->filter_flag = ZXDH_MAC_UNFILTER; + mac_filter->mac_flag = true; + rte_memcpy(&mac_filter->mac, old_addr, sizeof(struct rte_ether_addr)); + zxdh_msg_head_build(hw, ZXDH_MAC_DEL, &msg_info); + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d ", + hw->vport.vport, ZXDH_MAC_DEL); + return ret; + } + hw->uc_num--; + PMD_DRV_LOG(INFO, "Success to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_MAC_DEL); + + mac_filter->filter_flag = ZXDH_MAC_UNFILTER; + rte_memcpy(&mac_filter->mac, addr, sizeof(struct rte_ether_addr)); + zxdh_msg_head_build(hw, ZXDH_MAC_ADD, &msg_info); + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d ", + hw->vport.vport, ZXDH_MAC_ADD); + return ret; + } + hw->uc_num++; + } + rte_ether_addr_copy(addr, (struct rte_ether_addr *)hw->mac_addr); + return ret; +} + +int zxdh_dev_mac_addr_add(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr, + uint32_t index, uint32_t vmdq __rte_unused) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_info msg_info = {0}; + uint16_t i, ret; + + if (index >= ZXDH_MAX_MAC_ADDRS) { + PMD_DRV_LOG(ERR, "Add mac index (%u) is out of range", index); + return -EINVAL; + } + + for (i = 0; (i != ZXDH_MAX_MAC_ADDRS); ++i) { + if (memcmp(&dev->data->mac_addrs[i], mac_addr, sizeof(*mac_addr))) + continue; + + PMD_DRV_LOG(INFO, "MAC address already configured"); + return -EADDRINUSE; + } + + if (hw->is_pf) { + if (rte_is_unicast_ether_addr(mac_addr)) { + if (hw->uc_num < ZXDH_MAX_UC_MAC_ADDRS) { + ret = zxdh_set_mac_table(hw->vport.vport, + mac_addr, hw->hash_search_index); + if (ret) { + PMD_DRV_LOG(ERR, "mac_addr_add failed, code:%d", ret); + return ret; + } + hw->uc_num++; + } else { + PMD_DRV_LOG(ERR, "MC_MAC is out of range, MAX_MC_MAC:%d", + ZXDH_MAX_MC_MAC_ADDRS); + return -EINVAL; + } + } else { + if (hw->mc_num < ZXDH_MAX_MC_MAC_ADDRS) { + ret = zxdh_set_mac_table(hw->vport.vport, + mac_addr, hw->hash_search_index); + if (ret) { + PMD_DRV_LOG(ERR, "mac_addr_add failed, code:%d", ret); + return ret; + } + hw->mc_num++; + } else { + PMD_DRV_LOG(ERR, "MC_MAC is out of range, MAX_MC_MAC:%d", + ZXDH_MAX_MC_MAC_ADDRS); + return -EINVAL; + } + } + } else { + struct zxdh_mac_filter *mac_filter = &msg_info.data.mac_filter_msg; + + mac_filter->filter_flag = ZXDH_MAC_FILTER; + rte_memcpy(&mac_filter->mac, mac_addr, sizeof(struct rte_ether_addr)); + zxdh_msg_head_build(hw, ZXDH_MAC_ADD, &msg_info); + if (rte_is_unicast_ether_addr(mac_addr)) { + if (hw->uc_num < ZXDH_MAX_UC_MAC_ADDRS) { + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, + sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_MAC_ADD); + return ret; + } + hw->uc_num++; + } else { + PMD_DRV_LOG(ERR, "MC_MAC is out of range, MAX_MC_MAC:%d", + ZXDH_MAX_MC_MAC_ADDRS); + return -EINVAL; + } + } else { + if (hw->mc_num < ZXDH_MAX_MC_MAC_ADDRS) { + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, + sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_MAC_ADD); + return ret; + } + hw->mc_num++; + } else { + PMD_DRV_LOG(ERR, "MC_MAC is out of range, MAX_MC_MAC:%d", + 
ZXDH_MAX_MC_MAC_ADDRS); + return -EINVAL; + } + } + } + dev->data->mac_addrs[index] = *mac_addr; + return 0; +} + +void zxdh_dev_mac_addr_remove(struct rte_eth_dev *dev __rte_unused, uint32_t index __rte_unused) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_info msg_info = {0}; + struct rte_ether_addr *mac_addr = &dev->data->mac_addrs[index]; + uint16_t ret = 0; + + if (index >= ZXDH_MAX_MAC_ADDRS) + return; + + if (hw->is_pf) { + if (rte_is_unicast_ether_addr(mac_addr)) { + if (hw->uc_num <= ZXDH_MAX_UC_MAC_ADDRS) { + ret = zxdh_del_mac_table(hw->vport.vport, + mac_addr, hw->hash_search_index); + if (ret) { + PMD_DRV_LOG(ERR, "mac_addr_del failed, code:%d", ret); + return; + } + hw->uc_num--; + } else { + PMD_DRV_LOG(ERR, "MC_MAC is out of range, MAX_MC_MAC:%d", + ZXDH_MAX_MC_MAC_ADDRS); + return; + } + } else { + if (hw->mc_num <= ZXDH_MAX_MC_MAC_ADDRS) { + ret = zxdh_del_mac_table(hw->vport.vport, + mac_addr, hw->hash_search_index); + if (ret) { + PMD_DRV_LOG(ERR, "mac_addr_del failed, code:%d", ret); + return; + } + hw->mc_num--; + } else { + PMD_DRV_LOG(ERR, "MC_MAC is out of range, MAX_MC_MAC:%d", + ZXDH_MAX_MC_MAC_ADDRS); + return; + } + } + } else { + struct zxdh_mac_filter *mac_filter = &msg_info.data.mac_filter_msg; + + mac_filter->filter_flag = ZXDH_MAC_FILTER; + rte_memcpy(&mac_filter->mac, mac_addr, sizeof(struct rte_ether_addr)); + zxdh_msg_head_build(hw, ZXDH_MAC_DEL, &msg_info); + if (rte_is_unicast_ether_addr(mac_addr)) { + if (hw->uc_num <= ZXDH_MAX_UC_MAC_ADDRS) { + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, + sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_MAC_DEL); + return; + } + hw->uc_num--; + } else { + PMD_DRV_LOG(ERR, "MC_MAC is out of range, MAX_MC_MAC:%d", + ZXDH_MAX_MC_MAC_ADDRS); + return; + } + } else { + if (hw->mc_num <= ZXDH_MAX_MC_MAC_ADDRS) { + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, + sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_MAC_DEL); + return; + } + hw->mc_num--; + } else { + PMD_DRV_LOG(ERR, "MC_MAC is out of range, MAX_MC_MAC:%d", + ZXDH_MAX_MC_MAC_ADDRS); + return; + } + } + } + memset(&dev->data->mac_addrs[index], 0, sizeof(struct rte_ether_addr)); +} diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.h b/drivers/net/zxdh/zxdh_ethdev_ops.h index c6d6ca56fd..4630bb70db 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.h +++ b/drivers/net/zxdh/zxdh_ethdev_ops.h @@ -10,5 +10,9 @@ int zxdh_dev_set_link_up(struct rte_eth_dev *dev); int zxdh_dev_set_link_down(struct rte_eth_dev *dev); int32_t zxdh_dev_link_update(struct rte_eth_dev *dev, int32_t wait_to_complete __rte_unused); +int zxdh_dev_mac_addr_add(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr, + uint32_t index, uint32_t vmdq); +int zxdh_dev_mac_addr_set(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr); +void zxdh_dev_mac_addr_remove(struct rte_eth_dev *dev, uint32_t index); #endif /* ZXDH_ETHDEV_OPS_H */ diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index a78075c914..44ce5d1b7f 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -46,6 +46,9 @@ #define ZXDH_MSG_REQ_BODY_MAX_LEN \ (ZXDH_MSG_PAYLOAD_MAX_LEN - ZXDH_MSG_HEAD_LEN) +#define ZXDH_MAC_FILTER 0xaa +#define ZXDH_MAC_UNFILTER 0xff + enum ZXDH_DRIVER_TYPE { ZXDH_MSG_CHAN_END_MPF = 0, ZXDH_MSG_CHAN_END_PF, @@ -173,6 +176,8 @@ enum zxdh_msg_type { ZXDH_NULL = 0, ZXDH_VF_PORT_INIT = 1, 
ZXDH_VF_PORT_UNINIT = 2, + ZXDH_MAC_ADD = 3, + ZXDH_MAC_DEL = 4, ZXDH_PORT_ATTRS_SET = 25, @@ -314,6 +319,12 @@ struct zxdh_port_attr_set_msg { uint8_t allmulti_follow; } __rte_packed; +struct zxdh_mac_filter { + uint8_t mac_flag; + uint8_t filter_flag; + struct rte_ether_addr mac; +} __rte_packed; + struct zxdh_msg_head { enum zxdh_msg_type msg_type; uint16_t vport; @@ -341,6 +352,7 @@ struct zxdh_msg_info { struct zxdh_vf_init_msg vf_init_msg; struct zxdh_port_attr_set_msg port_attr_msg; struct zxdh_link_info_msg link_msg; + struct zxdh_mac_filter mac_filter_msg; } __rte_packed data; } __rte_packed; diff --git a/drivers/net/zxdh/zxdh_np.h b/drivers/net/zxdh/zxdh_np.h index ac3931ba65..19d1f03f59 100644 --- a/drivers/net/zxdh/zxdh_np.h +++ b/drivers/net/zxdh/zxdh_np.h @@ -532,6 +532,11 @@ typedef struct zxdh_sdt_tbl_porttbl_t { uint32_t porttbl_clutch_en; } ZXDH_SDTTBL_PORTTBL_T; +typedef struct zxdh_dtb_hash_entry_info_t { + uint8_t *p_actu_key; + uint8_t *p_rst; +} ZXDH_DTB_HASH_ENTRY_INFO_T; + int zxdh_np_host_init(uint32_t dev_id, ZXDH_DEV_INIT_CTRL_T *p_dev_init_ctrl); int zxdh_np_online_uninit(uint32_t dev_id, char *port_name, uint32_t queue_id); int zxdh_np_dtb_table_entry_write(uint32_t dev_id, uint32_t queue_id, diff --git a/drivers/net/zxdh/zxdh_tables.c b/drivers/net/zxdh/zxdh_tables.c index db0132ce3f..f5b607584d 100644 --- a/drivers/net/zxdh/zxdh_tables.c +++ b/drivers/net/zxdh/zxdh_tables.c @@ -11,6 +11,10 @@ #define ZXDH_SDT_VPORT_ATT_TABLE 1 #define ZXDH_SDT_PANEL_ATT_TABLE 2 +#define ZXDH_MAC_HASH_INDEX_BASE 64 +#define ZXDH_MAC_HASH_INDEX(index) (ZXDH_MAC_HASH_INDEX_BASE + (index)) +#define ZXDH_MC_GROUP_NUM 4 + int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr) { @@ -149,3 +153,196 @@ zxdh_get_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr) return ret; } + +int +zxdh_set_mac_table(uint16_t vport, struct rte_ether_addr *addr, uint8_t hash_search_idx) +{ + struct zxdh_mac_unicast_table unicast_table = {0}; + struct zxdh_mac_multicast_table multicast_table = {0}; + union zxdh_virport_num vport_num = (union zxdh_virport_num)vport; + uint32_t ret; + uint16_t group_id = 0; + + if (rte_is_unicast_ether_addr(addr)) { + rte_memcpy(unicast_table.key.dmac_addr, addr, sizeof(struct rte_ether_addr)); + unicast_table.entry.hit_flag = 0; + unicast_table.entry.vfid = vport_num.vfid; + + ZXDH_DTB_HASH_ENTRY_INFO_T dtb_hash_entry = { + .p_actu_key = (uint8_t *)&unicast_table.key, + .p_rst = (uint8_t *)&unicast_table.entry + }; + ZXDH_DTB_USER_ENTRY_T entry_get = { + .sdt_no = ZXDH_MAC_HASH_INDEX(hash_search_idx), + .p_entry_data = (void *)&dtb_hash_entry + }; + + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &entry_get); + if (ret) { + PMD_DRV_LOG(ERR, "Insert mac_table failed"); + return -ret; + } + } else { + for (group_id = 0; group_id < 4; group_id++) { + multicast_table.key.vf_group_id = group_id; + rte_memcpy(multicast_table.key.mac_addr, + addr, sizeof(struct rte_ether_addr)); + ZXDH_DTB_HASH_ENTRY_INFO_T dtb_hash_entry = { + .p_actu_key = (uint8_t *)&multicast_table.key, + .p_rst = (uint8_t *)&multicast_table.entry + }; + + ZXDH_DTB_USER_ENTRY_T entry_get = { + .sdt_no = ZXDH_MAC_HASH_INDEX(hash_search_idx), + .p_entry_data = (void *)&dtb_hash_entry + }; + + ret = zxdh_np_dtb_table_entry_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, + &entry_get, 1); + uint8_t index = (vport_num.vfid % 64) / 32; + if (ret == 0) { + if (vport_num.vf_flag) { + if (group_id == vport_num.vfid / 64) + 
multicast_table.entry.mc_bitmap[index] |= + rte_cpu_to_be_32(UINT32_C(1) << + (31 - index)); + } else { + if (group_id == vport_num.vfid / 64) + multicast_table.entry.mc_pf_enable = + rte_cpu_to_be_32((1 << 30)); + } + } else { + if (vport_num.vf_flag) { + if (group_id == vport_num.vfid / 64) + multicast_table.entry.mc_bitmap[index] |= + rte_cpu_to_be_32(UINT32_C(1) << + (31 - index)); + else + multicast_table.entry.mc_bitmap[index] = + false; + } else { + if (group_id == vport_num.vfid / 64) + multicast_table.entry.mc_pf_enable = + rte_cpu_to_be_32((1 << 30)); + else + multicast_table.entry.mc_pf_enable = false; + } + } + + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, g_dtb_data.queueid, + 1, &entry_get); + if (ret) { + PMD_DRV_LOG(ERR, "add mac_table failed, code:%d", ret); + return -ret; + } + } + } + return 0; +} + +int +zxdh_del_mac_table(uint16_t vport, struct rte_ether_addr *addr, uint8_t hash_search_idx) +{ + struct zxdh_mac_unicast_table unicast_table = {0}; + struct zxdh_mac_multicast_table multicast_table = {0}; + union zxdh_virport_num vport_num = (union zxdh_virport_num)vport; + uint32_t ret, del_flag = 0; + uint16_t group_id = 0; + + if (rte_is_unicast_ether_addr(addr)) { + rte_memcpy(unicast_table.key.dmac_addr, addr, sizeof(struct rte_ether_addr)); + unicast_table.entry.hit_flag = 0; + unicast_table.entry.vfid = vport_num.vfid; + + ZXDH_DTB_HASH_ENTRY_INFO_T dtb_hash_entry = { + .p_actu_key = (uint8_t *)&unicast_table.key, + .p_rst = (uint8_t *)&unicast_table.entry + }; + + ZXDH_DTB_USER_ENTRY_T entry_get = { + .sdt_no = ZXDH_MAC_HASH_INDEX(hash_search_idx), + .p_entry_data = (void *)&dtb_hash_entry + }; + + ret = zxdh_np_dtb_table_entry_delete(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &entry_get); + if (ret) { + PMD_DRV_LOG(ERR, "delete l2_fwd_hash_table failed, code:%d", ret); + return -ret; + } + } else { + multicast_table.key.vf_group_id = vport_num.vfid / 64; + rte_memcpy(multicast_table.key.mac_addr, addr, sizeof(struct rte_ether_addr)); + + ZXDH_DTB_HASH_ENTRY_INFO_T dtb_hash_entry = { + .p_actu_key = (uint8_t *)&multicast_table.key, + .p_rst = (uint8_t *)&multicast_table.entry + }; + + ZXDH_DTB_USER_ENTRY_T entry_get = { + .sdt_no = ZXDH_MAC_HASH_INDEX(hash_search_idx), + .p_entry_data = (void *)&dtb_hash_entry + }; + + ret = zxdh_np_dtb_table_entry_get(ZXDH_DEVICE_NO, + g_dtb_data.queueid, &entry_get, 1); + uint8_t index = (vport_num.vfid % 64) / 32; + if (vport_num.vf_flag) + multicast_table.entry.mc_bitmap[index] &= + ~(rte_cpu_to_be_32(UINT32_C(1) << (31 - index))); + else + multicast_table.entry.mc_pf_enable = 0; + + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &entry_get); + if (ret) { + PMD_DRV_LOG(ERR, "mac_addr_add mc_table failed, code:%d", ret); + return -ret; + } + + for (group_id = 0; group_id < ZXDH_MC_GROUP_NUM; group_id++) { + multicast_table.key.vf_group_id = group_id; + rte_memcpy(multicast_table.key.mac_addr, addr, + sizeof(struct rte_ether_addr)); + ZXDH_DTB_HASH_ENTRY_INFO_T dtb_hash_entry = { + .p_actu_key = (uint8_t *)&multicast_table.key, + .p_rst = (uint8_t *)&multicast_table.entry + }; + ZXDH_DTB_USER_ENTRY_T entry_get = { + .sdt_no = ZXDH_MAC_HASH_INDEX(hash_search_idx), + .p_entry_data = (void *)&dtb_hash_entry + }; + + ret = zxdh_np_dtb_table_entry_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, + &entry_get, 1); + if (multicast_table.entry.mc_bitmap[0] == 0 && + multicast_table.entry.mc_bitmap[1] == 0 && + multicast_table.entry.mc_pf_enable == 0) { + if (group_id == (ZXDH_MC_GROUP_NUM - 1)) + del_flag = 1; 
+ } else { + break; + } + } + if (del_flag) { + for (group_id = 0; group_id < ZXDH_MC_GROUP_NUM; group_id++) { + multicast_table.key.vf_group_id = group_id; + rte_memcpy(multicast_table.key.mac_addr, addr, + sizeof(struct rte_ether_addr)); + ZXDH_DTB_HASH_ENTRY_INFO_T dtb_hash_entry = { + .p_actu_key = (uint8_t *)&multicast_table.key, + .p_rst = (uint8_t *)&multicast_table.entry + }; + ZXDH_DTB_USER_ENTRY_T entry_get = { + .sdt_no = ZXDH_MAC_HASH_INDEX(hash_search_idx), + .p_entry_data = (void *)&dtb_hash_entry + }; + + ret = zxdh_np_dtb_table_entry_delete(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &entry_get); + } + } + } + return 0; +} diff --git a/drivers/net/zxdh/zxdh_tables.h b/drivers/net/zxdh/zxdh_tables.h index 8676a8b375..f16c4923ef 100644 --- a/drivers/net/zxdh/zxdh_tables.h +++ b/drivers/net/zxdh/zxdh_tables.h @@ -142,10 +142,46 @@ struct zxdh_panel_table { uint32_t rsv_2; }; /* 16B */ +struct zxdh_mac_unicast_key { + uint16_t rsv; + uint8_t dmac_addr[6]; +}; + +struct zxdh_mac_unicast_entry { + uint8_t rsv1 : 7, + hit_flag : 1; + uint8_t rsv; + uint16_t vfid; +}; + +struct zxdh_mac_unicast_table { + struct zxdh_mac_unicast_key key; + struct zxdh_mac_unicast_entry entry; +}; + +struct zxdh_mac_multicast_key { + uint8_t rsv; + uint8_t vf_group_id; + uint8_t mac_addr[6]; +}; + +struct zxdh_mac_multicast_entry { + uint32_t mc_pf_enable; + uint32_t rsv1; + uint32_t mc_bitmap[2]; +}; + +struct zxdh_mac_multicast_table { + struct zxdh_mac_multicast_key key; + struct zxdh_mac_multicast_entry entry; +}; + int zxdh_port_attr_init(struct rte_eth_dev *dev); int zxdh_panel_table_init(struct rte_eth_dev *dev); int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); int zxdh_port_attr_uninit(struct rte_eth_dev *dev); int zxdh_get_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); +int zxdh_set_mac_table(uint16_t vport, struct rte_ether_addr *addr, uint8_t hash_search_idx); +int zxdh_del_mac_table(uint16_t vport, struct rte_ether_addr *addr, uint8_t hash_search_idx); #endif /* ZXDH_TABLES_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 69160 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
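The MAC ops in this patch are reached through rte_eth_dev_mac_addr_add()/remove() and
rte_eth_dev_default_mac_addr_set(). A minimal application-side sketch follows (not part
of the patch); demo_mac_filters() and the addresses in it are illustrative assumptions.
As the patch shows, the PMD accounts unicast and multicast entries separately
(uc_num/mc_num) and rejects duplicate adds with -EADDRINUSE.

#include <rte_ethdev.h>
#include <rte_ether.h>

static int
demo_mac_filters(uint16_t port_id)
{
	struct rte_ether_addr uc   = {{0x00, 0x11, 0x22, 0x33, 0x44, 0x55}};
	struct rte_ether_addr mc   = {{0x01, 0x00, 0x5e, 0x00, 0x00, 0x01}};
	struct rte_ether_addr dflt = {{0x00, 0x11, 0x22, 0x33, 0x44, 0x66}};
	int ret;

	/* Reaches zxdh_dev_mac_addr_add(); unicast vs. multicast is
	 * decided per address and counted against uc_num/mc_num. */
	ret = rte_eth_dev_mac_addr_add(port_id, &uc, 0);
	if (ret != 0)
		return ret;
	ret = rte_eth_dev_mac_addr_add(port_id, &mc, 0);
	if (ret != 0)
		return ret;

	/* Reaches zxdh_dev_mac_addr_set(): the old mac_addrs[0] entry is
	 * deleted from the hash table before the new one is inserted. */
	ret = rte_eth_dev_default_mac_addr_set(port_id, &dflt);
	if (ret != 0)
		return ret;

	/* Reaches zxdh_dev_mac_addr_remove() for the slot holding mc. */
	return rte_eth_dev_mac_addr_remove(port_id, &mc);
}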
* [PATCH v6 11/15] net/zxdh: promisc/allmulti ops implementations 2024-12-26 3:37 ` [PATCH v6 " Junlong Wang ` (9 preceding siblings ...) 2024-12-26 3:37 ` [PATCH v6 10/15] net/zxdh: mac set/add/remove ops implementations Junlong Wang @ 2024-12-26 3:37 ` Junlong Wang 2024-12-26 3:37 ` [PATCH v6 12/15] net/zxdh: vlan filter/ offload " Junlong Wang ` (3 subsequent siblings) 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-26 3:37 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 18478 bytes --] provided promiscuous/allmulticast ops. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- doc/guides/nics/features/zxdh.ini | 2 + doc/guides/nics/zxdh.rst | 2 + drivers/net/zxdh/zxdh_ethdev.c | 21 ++- drivers/net/zxdh/zxdh_ethdev.h | 2 + drivers/net/zxdh/zxdh_ethdev_ops.c | 128 +++++++++++++++++ drivers/net/zxdh/zxdh_ethdev_ops.h | 4 + drivers/net/zxdh/zxdh_msg.h | 10 ++ drivers/net/zxdh/zxdh_tables.c | 223 +++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_tables.h | 22 +++ 9 files changed, 413 insertions(+), 1 deletion(-) diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index dc09fe3453..e9b237e102 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -14,3 +14,5 @@ Link status = Y Link status event = Y Unicast MAC filter = Y Multicast MAC filter = Y +Promiscuous mode = Y +Allmulticast mode = Y diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index e0b0776aca..0399df1302 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -26,6 +26,8 @@ Features of the ZXDH PMD are: - Set Link down or up - Unicast MAC filter - Multicast MAC filter +- Promiscuous mode +- Multicast mode Driver compilation and testing diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index ea3c08be58..d4165aa80c 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -901,8 +901,16 @@ zxdh_tables_uninit(struct rte_eth_dev *dev) int ret; ret = zxdh_port_attr_uninit(dev); - if (ret) + if (ret) { PMD_DRV_LOG(ERR, "zxdh_port_attr_uninit failed"); + return ret; + } + + ret = zxdh_promisc_table_uninit(dev); + if (ret) { + PMD_DRV_LOG(ERR, "uninit promisc_table failed"); + return ret; + } return ret; } @@ -1075,6 +1083,10 @@ static const struct eth_dev_ops zxdh_eth_dev_ops = { .mac_addr_add = zxdh_dev_mac_addr_add, .mac_addr_remove = zxdh_dev_mac_addr_remove, .mac_addr_set = zxdh_dev_mac_addr_set, + .promiscuous_enable = zxdh_dev_promiscuous_enable, + .promiscuous_disable = zxdh_dev_promiscuous_disable, + .allmulticast_enable = zxdh_dev_allmulticast_enable, + .allmulticast_disable = zxdh_dev_allmulticast_disable, }; static int32_t @@ -1326,6 +1338,13 @@ zxdh_tables_init(struct rte_eth_dev *dev) PMD_DRV_LOG(ERR, " panel table init failed"); return ret; } + + ret = zxdh_promisc_table_init(dev); + if (ret) { + PMD_DRV_LOG(ERR, "promisc_table_init failed"); + return ret; + } + return ret; } diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index 5b95cb1c2a..3cdac5de73 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -98,6 +98,8 @@ struct zxdh_hw { uint8_t has_tx_offload; uint8_t has_rx_offload; uint8_t admin_status; + uint8_t promisc_status; + uint8_t allmulti_status; }; struct zxdh_dtb_shared_data { diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.c b/drivers/net/zxdh/zxdh_ethdev_ops.c index 35e37483e3..ad3d10258c 100644 --- 
a/drivers/net/zxdh/zxdh_ethdev_ops.c +++ b/drivers/net/zxdh/zxdh_ethdev_ops.c @@ -395,3 +395,131 @@ void zxdh_dev_mac_addr_remove(struct rte_eth_dev *dev __rte_unused, uint32_t ind } memset(&dev->data->mac_addrs[index], 0, sizeof(struct rte_ether_addr)); } + +int zxdh_dev_promiscuous_enable(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_info msg_info = {0}; + int16_t ret = 0; + + if (hw->promisc_status == 0) { + if (hw->is_pf) { + ret = zxdh_dev_unicast_table_set(hw, hw->vport.vport, true); + if (hw->allmulti_status == 0) + ret = zxdh_dev_multicast_table_set(hw, hw->vport.vport, true); + + } else { + struct zxdh_port_promisc_msg *promisc_msg = &msg_info.data.port_promisc_msg; + + zxdh_msg_head_build(hw, ZXDH_PORT_PROMISC_SET, &msg_info); + promisc_msg->mode = ZXDH_PROMISC_MODE; + promisc_msg->value = true; + if (hw->allmulti_status == 0) + promisc_msg->mc_follow = true; + + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_PROMISC_MODE); + return ret; + } + } + hw->promisc_status = 1; + } + return ret; +} + +int zxdh_dev_promiscuous_disable(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + int16_t ret = 0; + struct zxdh_msg_info msg_info = {0}; + + if (hw->promisc_status == 1) { + if (hw->is_pf) { + ret = zxdh_dev_unicast_table_set(hw, hw->vport.vport, false); + if (hw->allmulti_status == 0) + ret = zxdh_dev_multicast_table_set(hw, hw->vport.vport, false); + + } else { + struct zxdh_port_promisc_msg *promisc_msg = &msg_info.data.port_promisc_msg; + + zxdh_msg_head_build(hw, ZXDH_PORT_PROMISC_SET, &msg_info); + promisc_msg->mode = ZXDH_PROMISC_MODE; + promisc_msg->value = false; + if (hw->allmulti_status == 0) + promisc_msg->mc_follow = true; + + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_PROMISC_MODE); + return ret; + } + } + hw->promisc_status = 0; + } + return ret; +} + +int zxdh_dev_allmulticast_enable(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + int16_t ret = 0; + struct zxdh_msg_info msg_info = {0}; + + if (hw->allmulti_status == 0) { + if (hw->is_pf) { + ret = zxdh_dev_multicast_table_set(hw, hw->vport.vport, true); + } else { + struct zxdh_port_promisc_msg *promisc_msg = &msg_info.data.port_promisc_msg; + + zxdh_msg_head_build(hw, ZXDH_PORT_PROMISC_SET, &msg_info); + + promisc_msg->mode = ZXDH_ALLMULTI_MODE; + promisc_msg->value = true; + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_ALLMULTI_MODE); + return ret; + } + } + hw->allmulti_status = 1; + } + return ret; +} + +int zxdh_dev_allmulticast_disable(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + int16_t ret = 0; + struct zxdh_msg_info msg_info = {0}; + + if (hw->allmulti_status == 1) { + if (hw->is_pf) { + if (hw->promisc_status == 1) + goto end; + ret = zxdh_dev_multicast_table_set(hw, hw->vport.vport, false); + } else { + struct zxdh_port_promisc_msg *promisc_msg = &msg_info.data.port_promisc_msg; + + zxdh_msg_head_build(hw, ZXDH_PORT_PROMISC_SET, &msg_info); + if (hw->promisc_status == 1) + goto end; + promisc_msg->mode = ZXDH_ALLMULTI_MODE; + promisc_msg->value = false; + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, 
sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_ALLMULTI_MODE); + return ret; + } + } + hw->allmulti_status = 0; + } + return ret; +end: + hw->allmulti_status = 0; + return ret; +} diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.h b/drivers/net/zxdh/zxdh_ethdev_ops.h index 4630bb70db..394ddedc0e 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.h +++ b/drivers/net/zxdh/zxdh_ethdev_ops.h @@ -14,5 +14,9 @@ int zxdh_dev_mac_addr_add(struct rte_eth_dev *dev, struct rte_ether_addr *mac_ad uint32_t index, uint32_t vmdq); int zxdh_dev_mac_addr_set(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr); void zxdh_dev_mac_addr_remove(struct rte_eth_dev *dev, uint32_t index); +int zxdh_dev_promiscuous_enable(struct rte_eth_dev *dev); +int zxdh_dev_promiscuous_disable(struct rte_eth_dev *dev); +int zxdh_dev_allmulticast_enable(struct rte_eth_dev *dev); +int zxdh_dev_allmulticast_disable(struct rte_eth_dev *dev); #endif /* ZXDH_ETHDEV_OPS_H */ diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index 44ce5d1b7f..2abf579a80 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -48,6 +48,8 @@ #define ZXDH_MAC_FILTER 0xaa #define ZXDH_MAC_UNFILTER 0xff +#define ZXDH_PROMISC_MODE 1 +#define ZXDH_ALLMULTI_MODE 2 enum ZXDH_DRIVER_TYPE { ZXDH_MSG_CHAN_END_MPF = 0, @@ -180,6 +182,7 @@ enum zxdh_msg_type { ZXDH_MAC_DEL = 4, ZXDH_PORT_ATTRS_SET = 25, + ZXDH_PORT_PROMISC_SET = 26, ZXDH_MSG_TYPE_END, }; @@ -325,6 +328,12 @@ struct zxdh_mac_filter { struct rte_ether_addr mac; } __rte_packed; +struct zxdh_port_promisc_msg { + uint8_t mode; + uint8_t value; + uint8_t mc_follow; +} __rte_packed; + struct zxdh_msg_head { enum zxdh_msg_type msg_type; uint16_t vport; @@ -353,6 +362,7 @@ struct zxdh_msg_info { struct zxdh_port_attr_set_msg port_attr_msg; struct zxdh_link_info_msg link_msg; struct zxdh_mac_filter mac_filter_msg; + struct zxdh_port_promisc_msg port_promisc_msg; } __rte_packed data; } __rte_packed; diff --git a/drivers/net/zxdh/zxdh_tables.c b/drivers/net/zxdh/zxdh_tables.c index f5b607584d..45aeb3e3e4 100644 --- a/drivers/net/zxdh/zxdh_tables.c +++ b/drivers/net/zxdh/zxdh_tables.c @@ -10,10 +10,15 @@ #define ZXDH_SDT_VPORT_ATT_TABLE 1 #define ZXDH_SDT_PANEL_ATT_TABLE 2 +#define ZXDH_SDT_BROCAST_ATT_TABLE 6 +#define ZXDH_SDT_UNICAST_ATT_TABLE 10 +#define ZXDH_SDT_MULTICAST_ATT_TABLE 11 #define ZXDH_MAC_HASH_INDEX_BASE 64 #define ZXDH_MAC_HASH_INDEX(index) (ZXDH_MAC_HASH_INDEX_BASE + (index)) #define ZXDH_MC_GROUP_NUM 4 +#define ZXDH_BASE_VFID 1152 +#define ZXDH_TABLE_HIT_FLAG 128 int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr) @@ -346,3 +351,221 @@ zxdh_del_mac_table(uint16_t vport, struct rte_ether_addr *addr, uint8_t hash_se } return 0; } + +int +zxdh_promisc_table_init(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint32_t ret, vf_group_id = 0; + struct zxdh_brocast_table brocast_table = {0}; + struct zxdh_unitcast_table uc_table = {0}; + struct zxdh_multicast_table mc_table = {0}; + + if (!hw->is_pf) + return 0; + + for (; vf_group_id < 4; vf_group_id++) { + brocast_table.flag = rte_be_to_cpu_32(ZXDH_TABLE_HIT_FLAG); + ZXDH_DTB_ERAM_ENTRY_INFO_T eram_brocast_entry = { + .index = ((hw->vfid - ZXDH_BASE_VFID) << 2) + vf_group_id, + .p_data = (uint32_t *)&brocast_table + }; + ZXDH_DTB_USER_ENTRY_T entry_brocast = { + .sdt_no = ZXDH_SDT_BROCAST_ATT_TABLE, + .p_entry_data = (void *)&eram_brocast_entry + }; + + ret = 
zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &entry_brocast); + if (ret) { + PMD_DRV_LOG(ERR, "write brocast table failed"); + return ret; + } + + uc_table.uc_flood_pf_enable = rte_be_to_cpu_32(ZXDH_TABLE_HIT_FLAG); + ZXDH_DTB_ERAM_ENTRY_INFO_T eram_uc_entry = { + .index = ((hw->vfid - ZXDH_BASE_VFID) << 2) + vf_group_id, + .p_data = (uint32_t *)&uc_table + }; + ZXDH_DTB_USER_ENTRY_T entry_unicast = { + .sdt_no = ZXDH_SDT_UNICAST_ATT_TABLE, + .p_entry_data = (void *)&eram_uc_entry + }; + + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &entry_unicast); + if (ret) { + PMD_DRV_LOG(ERR, "write unicast table failed"); + return ret; + } + + mc_table.mc_flood_pf_enable = rte_be_to_cpu_32(ZXDH_TABLE_HIT_FLAG); + ZXDH_DTB_ERAM_ENTRY_INFO_T eram_mc_entry = { + .index = ((hw->vfid - ZXDH_BASE_VFID) << 2) + vf_group_id, + .p_data = (uint32_t *)&mc_table + }; + ZXDH_DTB_USER_ENTRY_T entry_multicast = { + .sdt_no = ZXDH_SDT_MULTICAST_ATT_TABLE, + .p_entry_data = (void *)&eram_mc_entry + }; + + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, g_dtb_data.queueid, + 1, &entry_multicast); + if (ret) { + PMD_DRV_LOG(ERR, "write multicast table failed"); + return ret; + } + } + + return ret; +} + +int +zxdh_promisc_table_uninit(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint32_t ret, vf_group_id = 0; + struct zxdh_brocast_table brocast_table = {0}; + struct zxdh_unitcast_table uc_table = {0}; + struct zxdh_multicast_table mc_table = {0}; + + if (!hw->is_pf) + return 0; + + for (; vf_group_id < 4; vf_group_id++) { + brocast_table.flag = rte_be_to_cpu_32(ZXDH_TABLE_HIT_FLAG); + ZXDH_DTB_ERAM_ENTRY_INFO_T eram_brocast_entry = { + .index = ((hw->vfid - ZXDH_BASE_VFID) << 2) + vf_group_id, + .p_data = (uint32_t *)&brocast_table + }; + ZXDH_DTB_USER_ENTRY_T entry_brocast = { + .sdt_no = ZXDH_SDT_BROCAST_ATT_TABLE, + .p_entry_data = (void *)&eram_brocast_entry + }; + + ret = zxdh_np_dtb_table_entry_delete(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &entry_brocast); + if (ret) { + PMD_DRV_LOG(ERR, "write brocast table failed"); + return ret; + } + + uc_table.uc_flood_pf_enable = rte_be_to_cpu_32(ZXDH_TABLE_HIT_FLAG); + ZXDH_DTB_ERAM_ENTRY_INFO_T eram_uc_entry = { + .index = ((hw->vfid - ZXDH_BASE_VFID) << 2) + vf_group_id, + .p_data = (uint32_t *)&uc_table + }; + ZXDH_DTB_USER_ENTRY_T entry_unicast = { + .sdt_no = ZXDH_SDT_UNICAST_ATT_TABLE, + .p_entry_data = (void *)&eram_uc_entry + }; + + ret = zxdh_np_dtb_table_entry_delete(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &entry_unicast); + if (ret) { + PMD_DRV_LOG(ERR, "write unicast table failed"); + return ret; + } + + mc_table.mc_flood_pf_enable = rte_be_to_cpu_32(ZXDH_TABLE_HIT_FLAG); + ZXDH_DTB_ERAM_ENTRY_INFO_T eram_mc_entry = { + .index = ((hw->vfid - ZXDH_BASE_VFID) << 2) + vf_group_id, + .p_data = (uint32_t *)&mc_table + }; + ZXDH_DTB_USER_ENTRY_T entry_multicast = { + .sdt_no = ZXDH_SDT_MULTICAST_ATT_TABLE, + .p_entry_data = (void *)&eram_mc_entry + }; + + ret = zxdh_np_dtb_table_entry_delete(ZXDH_DEVICE_NO, g_dtb_data.queueid, + 1, &entry_multicast); + if (ret) { + PMD_DRV_LOG(ERR, "write multicast table failed"); + return ret; + } + } + + return ret; +} + +int +zxdh_dev_unicast_table_set(struct zxdh_hw *hw, uint16_t vport, bool enable) +{ + int16_t ret = 0; + struct zxdh_unitcast_table uc_table = {0}; + union zxdh_virport_num vport_num = (union zxdh_virport_num)vport; + + ZXDH_DTB_ERAM_ENTRY_INFO_T uc_table_entry = { + .index = ((hw->vfid - ZXDH_BASE_VFID) << 2) + 
vport_num.vfid / 64, + .p_data = (uint32_t *)&uc_table + }; + ZXDH_DTB_USER_ENTRY_T entry = { + .sdt_no = ZXDH_SDT_UNICAST_ATT_TABLE, + .p_entry_data = (void *)&uc_table_entry + }; + + ret = zxdh_np_dtb_table_entry_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, &entry, 1); + if (ret) { + PMD_DRV_LOG(ERR, "unicast_table_get_failed:%d", hw->vfid); + return -ret; + } + + if (vport_num.vf_flag) { + if (enable) + uc_table.bitmap[(vport_num.vfid % 64) / 32] |= + UINT32_C(1) << (31 - (vport_num.vfid % 64) % 32); + else + uc_table.bitmap[(vport_num.vfid % 64) / 32] &= + ~(UINT32_C(1) << (31 - (vport_num.vfid % 64) % 32)); + } else { + uc_table.uc_flood_pf_enable = rte_be_to_cpu_32(ZXDH_TABLE_HIT_FLAG + (enable << 6)); + } + + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, g_dtb_data.queueid, 1, &entry); + if (ret) { + PMD_DRV_LOG(ERR, "unicast_table_set_failed:%d", hw->vfid); + return -ret; + } + return 0; +} + +int +zxdh_dev_multicast_table_set(struct zxdh_hw *hw, uint16_t vport, bool enable) +{ + int16_t ret = 0; + struct zxdh_multicast_table mc_table = {0}; + union zxdh_virport_num vport_num = (union zxdh_virport_num)vport; + + ZXDH_DTB_ERAM_ENTRY_INFO_T mc_table_entry = { + .index = ((hw->vfid - ZXDH_BASE_VFID) << 2) + vport_num.vfid / 64, + .p_data = (uint32_t *)&mc_table + }; + ZXDH_DTB_USER_ENTRY_T entry = { + .sdt_no = ZXDH_SDT_MULTICAST_ATT_TABLE, + .p_entry_data = (void *)&mc_table_entry + }; + + ret = zxdh_np_dtb_table_entry_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, &entry, 1); + if (ret) { + PMD_DRV_LOG(ERR, "allmulti_table_get_failed:%d", hw->vfid); + return -ret; + } + + if (vport_num.vf_flag) { + if (enable) + mc_table.bitmap[(vport_num.vfid % 64) / 32] |= + UINT32_C(1) << (31 - (vport_num.vfid % 64) % 32); + else + mc_table.bitmap[(vport_num.vfid % 64) / 32] &= + ~(UINT32_C(1) << (31 - (vport_num.vfid % 64) % 32)); + + } else { + mc_table.mc_flood_pf_enable = rte_be_to_cpu_32(ZXDH_TABLE_HIT_FLAG + (enable << 6)); + } + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, g_dtb_data.queueid, 1, &entry); + if (ret) { + PMD_DRV_LOG(ERR, "allmulti_table_set_failed:%d", hw->vfid); + return -ret; + } + return 0; +} diff --git a/drivers/net/zxdh/zxdh_tables.h b/drivers/net/zxdh/zxdh_tables.h index f16c4923ef..fb30c8f32e 100644 --- a/drivers/net/zxdh/zxdh_tables.h +++ b/drivers/net/zxdh/zxdh_tables.h @@ -176,6 +176,24 @@ struct zxdh_mac_multicast_table { struct zxdh_mac_multicast_entry entry; }; +struct zxdh_brocast_table { + uint32_t flag; + uint32_t rsv; + uint32_t bitmap[2]; +}; + +struct zxdh_unitcast_table { + uint32_t uc_flood_pf_enable; + uint32_t rsv; + uint32_t bitmap[2]; +}; + +struct zxdh_multicast_table { + uint32_t mc_flood_pf_enable; + uint32_t rsv; + uint32_t bitmap[2]; +}; + int zxdh_port_attr_init(struct rte_eth_dev *dev); int zxdh_panel_table_init(struct rte_eth_dev *dev); int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); @@ -183,5 +201,9 @@ int zxdh_port_attr_uninit(struct rte_eth_dev *dev); int zxdh_get_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); int zxdh_set_mac_table(uint16_t vport, struct rte_ether_addr *addr, uint8_t hash_search_idx); int zxdh_del_mac_table(uint16_t vport, struct rte_ether_addr *addr, uint8_t hash_search_idx); +int zxdh_promisc_table_init(struct rte_eth_dev *dev); +int zxdh_promisc_table_uninit(struct rte_eth_dev *dev); +int zxdh_dev_unicast_table_set(struct zxdh_hw *hw, uint16_t vport, bool enable); +int zxdh_dev_multicast_table_set(struct zxdh_hw *hw, uint16_t vport, bool enable); #endif /* 
ZXDH_TABLES_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 45335 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
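The flood-table updates above index one eRAM row per 64-VF group, four groups per PF (hence the ((hw->vfid - ZXDH_BASE_VFID) << 2) + vfid / 64 index), and store the per-VF flags MSB-first: bit 31 of a 32-bit word corresponds to the lowest vfid in that word. A minimal standalone sketch of that bitmap arithmetic, with an illustrative helper name that is not part of the driver:

/*
 * MSB-first per-VF bitmap math used by the unicast/multicast flood
 * tables: each 64-VF group occupies two 32-bit words, and the lowest
 * vfid in a word maps to bit 31. Helper name is hypothetical.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

static void vf_bitmap_set(uint32_t bitmap[2], uint16_t vfid, int enable)
{
	uint16_t slot = (vfid % 64) / 32;                       /* word within the group */
	uint32_t mask = UINT32_C(1) << (31 - (vfid % 64) % 32); /* MSB-first bit */

	if (enable)
		bitmap[slot] |= mask;
	else
		bitmap[slot] &= ~mask;
}

int main(void)
{
	uint32_t bitmap[2] = {0, 0};

	vf_bitmap_set(bitmap, 0, 1);  /* lowest vfid of the group -> bit 31 of word 0 */
	vf_bitmap_set(bitmap, 33, 1); /* vfid 33 -> bit 30 of word 1 */
	printf("%08" PRIx32 " %08" PRIx32 "\n", bitmap[0], bitmap[1]); /* 80000000 40000000 */
	return 0;
}

For the PF itself the helpers leave the bitmap alone and flip the flood-enable flag instead, writing ZXDH_TABLE_HIT_FLAG + (enable << 6) into the first word of the entry.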
* [PATCH v6 12/15] net/zxdh: vlan filter/ offload ops implementations 2024-12-26 3:37 ` [PATCH v6 " Junlong Wang ` (10 preceding siblings ...) 2024-12-26 3:37 ` [PATCH v6 11/15] net/zxdh: promisc/allmulti " Junlong Wang @ 2024-12-26 3:37 ` Junlong Wang 2024-12-26 3:37 ` [PATCH v6 13/15] net/zxdh: rss hash config/update, reta update/get Junlong Wang ` (2 subsequent siblings) 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-26 3:37 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 20602 bytes --] provided vlan filter, vlan offload ops. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- doc/guides/nics/features/zxdh.ini | 3 + doc/guides/nics/zxdh.rst | 3 + drivers/net/zxdh/zxdh_ethdev.c | 40 +++++- drivers/net/zxdh/zxdh_ethdev_ops.c | 223 +++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev_ops.h | 2 + drivers/net/zxdh/zxdh_msg.h | 22 +++ drivers/net/zxdh/zxdh_rxtx.c | 18 +++ drivers/net/zxdh/zxdh_tables.c | 99 +++++++++++++ drivers/net/zxdh/zxdh_tables.h | 10 +- 9 files changed, 417 insertions(+), 3 deletions(-) diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index e9b237e102..6fb006c2da 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -16,3 +16,6 @@ Unicast MAC filter = Y Multicast MAC filter = Y Promiscuous mode = Y Allmulticast mode = Y +VLAN filter = Y +VLAN offload = Y +QinQ offload = Y diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index 0399df1302..3a7585d123 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -28,6 +28,9 @@ Features of the ZXDH PMD are: - Multicast MAC filter - Promiscuous mode - Multicast mode +- VLAN filter and VLAN offload +- VLAN stripping and inserting +- QINQ stripping and inserting Driver compilation and testing diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index d4165aa80c..7809b24d8b 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -758,6 +758,34 @@ zxdh_alloc_queues(struct rte_eth_dev *dev, uint16_t nr_vq) return 0; } +static int +zxdh_vlan_offload_configure(struct rte_eth_dev *dev) +{ + int ret; + int mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK | RTE_ETH_QINQ_STRIP_MASK; + + ret = zxdh_dev_vlan_offload_set(dev, mask); + if (ret) { + PMD_DRV_LOG(ERR, "vlan offload set error"); + return -1; + } + + return 0; +} + +static int +zxdh_dev_conf_offload(struct rte_eth_dev *dev) +{ + int ret = 0; + + ret = zxdh_vlan_offload_configure(dev); + if (ret) { + PMD_DRV_LOG(ERR, "zxdh_vlan_offload_configure failed"); + return ret; + } + + return 0; +} static int32_t zxdh_dev_configure(struct rte_eth_dev *dev) @@ -815,7 +843,7 @@ zxdh_dev_configure(struct rte_eth_dev *dev) nr_vq = dev->data->nb_rx_queues + dev->data->nb_tx_queues; if (nr_vq == hw->queue_num) - return 0; + goto end; PMD_DRV_LOG(DEBUG, "queue changed need reset "); /* Reset the device although not necessary at startup */ @@ -847,6 +875,8 @@ zxdh_dev_configure(struct rte_eth_dev *dev) zxdh_pci_reinit_complete(hw); +end: + zxdh_dev_conf_offload(dev); return ret; } @@ -1087,6 +1117,8 @@ static const struct eth_dev_ops zxdh_eth_dev_ops = { .promiscuous_disable = zxdh_dev_promiscuous_disable, .allmulticast_enable = zxdh_dev_allmulticast_enable, .allmulticast_disable = zxdh_dev_allmulticast_disable, + .vlan_filter_set = zxdh_dev_vlan_filter_set, + .vlan_offload_set = zxdh_dev_vlan_offload_set, }; static int32_t @@ -1345,6 
+1377,12 @@ zxdh_tables_init(struct rte_eth_dev *dev) return ret; } + ret = zxdh_vlan_filter_table_init(dev); + if (ret) { + PMD_DRV_LOG(ERR, " vlan filter table init failed"); + return ret; + } + return ret; } diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.c b/drivers/net/zxdh/zxdh_ethdev_ops.c index ad3d10258c..c4a1521723 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.c +++ b/drivers/net/zxdh/zxdh_ethdev_ops.c @@ -2,6 +2,8 @@ * Copyright(c) 2024 ZTE Corporation */ +#include <rte_malloc.h> + #include "zxdh_ethdev.h" #include "zxdh_pci.h" #include "zxdh_msg.h" @@ -9,6 +11,8 @@ #include "zxdh_tables.h" #include "zxdh_logs.h" +#define ZXDH_VLAN_FILTER_GROUPS 64 + static int32_t zxdh_config_port_status(struct rte_eth_dev *dev, uint16_t link_status) { struct zxdh_hw *hw = dev->data->dev_private; @@ -523,3 +527,222 @@ int zxdh_dev_allmulticast_disable(struct rte_eth_dev *dev) hw->allmulti_status = 0; return ret; } + +int +zxdh_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on) +{ + struct zxdh_hw *hw = (struct zxdh_hw *)dev->data->dev_private; + uint16_t idx = 0; + uint16_t bit_idx = 0; + uint8_t msg_type = 0; + int ret = 0; + + vlan_id &= RTE_VLAN_ID_MASK; + if (vlan_id == 0 || vlan_id == RTE_ETHER_MAX_VLAN_ID) { + PMD_DRV_LOG(ERR, "vlan id (%d) is reserved", vlan_id); + return -EINVAL; + } + + if (dev->data->dev_started == 0) { + PMD_DRV_LOG(ERR, "vlan_filter dev not start"); + return -1; + } + + idx = vlan_id / ZXDH_VLAN_FILTER_GROUPS; + bit_idx = vlan_id % ZXDH_VLAN_FILTER_GROUPS; + + if (on) { + if (dev->data->vlan_filter_conf.ids[idx] & (1ULL << bit_idx)) { + PMD_DRV_LOG(ERR, "vlan:%d has already added.", vlan_id); + return 0; + } + msg_type = ZXDH_VLAN_FILTER_ADD; + } else { + if (!(dev->data->vlan_filter_conf.ids[idx] & (1ULL << bit_idx))) { + PMD_DRV_LOG(ERR, "vlan:%d has already deleted.", vlan_id); + return 0; + } + msg_type = ZXDH_VLAN_FILTER_DEL; + } + + if (hw->is_pf) { + ret = zxdh_vlan_filter_table_set(hw->vport.vport, vlan_id, on); + if (ret) { + PMD_DRV_LOG(ERR, "vlan_id:%d table set failed.", vlan_id); + return -1; + } + } else { + struct zxdh_msg_info msg = {0}; + zxdh_msg_head_build(hw, msg_type, &msg); + msg.data.vlan_filter_msg.vlan_id = vlan_id; + ret = zxdh_vf_send_msg_to_pf(dev, &msg, sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d ", + hw->vport.vport, msg_type); + return ret; + } + } + + if (on) + dev->data->vlan_filter_conf.ids[idx] |= (1ULL << bit_idx); + else + dev->data->vlan_filter_conf.ids[idx] &= ~(1ULL << bit_idx); + + return 0; +} + +int +zxdh_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct rte_eth_rxmode *rxmode; + struct zxdh_msg_info msg = {0}; + struct zxdh_port_attr_table port_attr = {0}; + int ret = 0; + + rxmode = &dev->data->dev_conf.rxmode; + if (mask & RTE_ETH_VLAN_FILTER_MASK) { + if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) { + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.vlan_filter_enable = true; + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "port %d vlan filter set failed", + hw->vport.vfid); + return -EAGAIN; + } + } else { + msg.data.vlan_filter_set_msg.enable = true; + zxdh_msg_head_build(hw, ZXDH_VLAN_FILTER_SET, &msg); + ret = zxdh_vf_send_msg_to_pf(hw->eth_dev, &msg, + sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "port %d vlan filter set failed", + hw->vport.vfid); + 
return -EAGAIN; + } + } + } else { + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.vlan_filter_enable = false; + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "port %d vlan filter set failed", + hw->vport.vfid); + return -EAGAIN; + } + } else { + msg.data.vlan_filter_set_msg.enable = false; + zxdh_msg_head_build(hw, ZXDH_VLAN_FILTER_SET, &msg); + ret = zxdh_vf_send_msg_to_pf(hw->eth_dev, &msg, + sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "port %d vlan filter set failed", + hw->vport.vfid); + return -EAGAIN; + } + } + } + } + + if (mask & RTE_ETH_VLAN_STRIP_MASK) { + if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) { + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.vlan_strip_offload = true; + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "port %d vlan strip set failed", + hw->vport.vfid); + return -EAGAIN; + } + } else { + msg.data.vlan_offload_msg.enable = true; + msg.data.vlan_offload_msg.type = ZXDH_VLAN_STRIP_MSG_TYPE; + zxdh_msg_head_build(hw, ZXDH_VLAN_OFFLOAD, &msg); + ret = zxdh_vf_send_msg_to_pf(hw->eth_dev, &msg, + sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "port %d vlan strip set failed", + hw->vport.vfid); + return -EAGAIN; + } + } + } else { + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.vlan_strip_offload = false; + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "port %d vlan strip set failed", + hw->vport.vfid); + return -EAGAIN; + } + } else { + msg.data.vlan_offload_msg.enable = false; + msg.data.vlan_offload_msg.type = ZXDH_VLAN_STRIP_MSG_TYPE; + zxdh_msg_head_build(hw, ZXDH_VLAN_OFFLOAD, &msg); + ret = zxdh_vf_send_msg_to_pf(hw->eth_dev, &msg, + sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "port %d vlan strip set failed", + hw->vport.vfid); + return -EAGAIN; + } + } + } + } + + if (mask & RTE_ETH_QINQ_STRIP_MASK) { + memset(&msg, 0, sizeof(struct zxdh_msg_info)); + if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP) { + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.qinq_strip_offload = true; + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "port %d qinq offload set failed", + hw->vport.vfid); + return -EAGAIN; + } + } else { + msg.data.vlan_offload_msg.enable = true; + msg.data.vlan_offload_msg.type = ZXDH_QINQ_STRIP_MSG_TYPE; + zxdh_msg_head_build(hw, ZXDH_VLAN_OFFLOAD, &msg); + ret = zxdh_vf_send_msg_to_pf(hw->eth_dev, &msg, + sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "port %d qinq offload set failed", + hw->vport.vfid); + return -EAGAIN; + } + } + } else { + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.qinq_strip_offload = false; + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "port %d qinq offload set failed", + hw->vport.vfid); + return -EAGAIN; + } + } else { + msg.data.vlan_offload_msg.enable = false; + msg.data.vlan_offload_msg.type = ZXDH_QINQ_STRIP_MSG_TYPE; + zxdh_msg_head_build(hw, ZXDH_VLAN_OFFLOAD, &msg); + ret = zxdh_vf_send_msg_to_pf(hw->eth_dev, &msg, + sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "port %d qinq offload set failed", + hw->vport.vfid); + return -EAGAIN; + } + } + } + } + + return ret; +} diff --git 
a/drivers/net/zxdh/zxdh_ethdev_ops.h b/drivers/net/zxdh/zxdh_ethdev_ops.h index 394ddedc0e..058d271ab3 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.h +++ b/drivers/net/zxdh/zxdh_ethdev_ops.h @@ -18,5 +18,7 @@ int zxdh_dev_promiscuous_enable(struct rte_eth_dev *dev); int zxdh_dev_promiscuous_disable(struct rte_eth_dev *dev); int zxdh_dev_allmulticast_enable(struct rte_eth_dev *dev); int zxdh_dev_allmulticast_disable(struct rte_eth_dev *dev); +int zxdh_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on); +int zxdh_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask); #endif /* ZXDH_ETHDEV_OPS_H */ diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index 2abf579a80..ec15388f7a 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -50,6 +50,8 @@ #define ZXDH_MAC_UNFILTER 0xff #define ZXDH_PROMISC_MODE 1 #define ZXDH_ALLMULTI_MODE 2 +#define ZXDH_VLAN_STRIP_MSG_TYPE 0 +#define ZXDH_QINQ_STRIP_MSG_TYPE 1 enum ZXDH_DRIVER_TYPE { ZXDH_MSG_CHAN_END_MPF = 0, @@ -180,6 +182,10 @@ enum zxdh_msg_type { ZXDH_VF_PORT_UNINIT = 2, ZXDH_MAC_ADD = 3, ZXDH_MAC_DEL = 4, + ZXDH_VLAN_FILTER_SET = 17, + ZXDH_VLAN_FILTER_ADD = 18, + ZXDH_VLAN_FILTER_DEL = 19, + ZXDH_VLAN_OFFLOAD = 21, ZXDH_PORT_ATTRS_SET = 25, ZXDH_PORT_PROMISC_SET = 26, @@ -341,6 +347,19 @@ struct zxdh_msg_head { uint16_t pcieid; } __rte_packed; +struct zxdh_vlan_filter { + uint16_t vlan_id; +}; + +struct zxdh_vlan_filter_set { + uint8_t enable; +}; + +struct zxdh_vlan_offload { + uint8_t enable; + uint8_t type; +} __rte_packed; + struct zxdh_agent_msg_head { enum zxdh_agent_msg_type msg_type; uint8_t panel_id; @@ -363,6 +382,9 @@ struct zxdh_msg_info { struct zxdh_link_info_msg link_msg; struct zxdh_mac_filter mac_filter_msg; struct zxdh_port_promisc_msg port_promisc_msg; + struct zxdh_vlan_filter vlan_filter_msg; + struct zxdh_vlan_filter_set vlan_filter_set_msg; + struct zxdh_vlan_offload vlan_offload_msg; } __rte_packed data; } __rte_packed; diff --git a/drivers/net/zxdh/zxdh_rxtx.c b/drivers/net/zxdh/zxdh_rxtx.c index 06290d48bb..0ffce50042 100644 --- a/drivers/net/zxdh/zxdh_rxtx.c +++ b/drivers/net/zxdh/zxdh_rxtx.c @@ -11,6 +11,9 @@ #include "zxdh_pci.h" #include "zxdh_queue.h" +#define ZXDH_SVLAN_TPID 0x88a8 +#define ZXDH_CVLAN_TPID 0x8100 + #define ZXDH_PKT_FORM_CPU 0x20 /* 1-cpu 0-np */ #define ZXDH_NO_IP_FRAGMENT 0x2000 /* ip fragment flag */ #define ZXDH_NO_IPID_UPDATE 0x4000 /* ipid update flag */ @@ -21,6 +24,9 @@ #define ZXDH_PI_L3TYPE_RSV 0xC0 #define ZXDH_PI_L3TYPE_MASK 0xC0 +#define ZXDH_PD_OFFLOAD_SVLAN_INSERT (1 << 14) +#define ZXDH_PD_OFFLOAD_CVLAN_INSERT (1 << 13) + #define ZXDH_PCODE_MASK 0x1F #define ZXDH_PCODE_IP_PKT_TYPE 0x01 #define ZXDH_PCODE_TCP_PKT_TYPE 0x02 @@ -259,6 +265,18 @@ static void zxdh_xmit_fill_net_hdr(struct rte_mbuf *cookie, hdr->pi_hdr.l3_offset = rte_be_to_cpu_16(l3_offset); hdr->pi_hdr.l4_offset = rte_be_to_cpu_16(l3_offset + cookie->l3_len); + if (cookie->ol_flags & RTE_MBUF_F_TX_VLAN) { + ol_flag |= ZXDH_PD_OFFLOAD_CVLAN_INSERT; + hdr->pi_hdr.vlan_id = rte_be_to_cpu_16(cookie->vlan_tci); + hdr->pd_hdr.cvlan_insert = + rte_be_to_cpu_32((ZXDH_CVLAN_TPID << 16) | cookie->vlan_tci); + } + if (cookie->ol_flags & RTE_MBUF_F_TX_QINQ) { + ol_flag |= ZXDH_PD_OFFLOAD_SVLAN_INSERT; + hdr->pd_hdr.svlan_insert = + rte_be_to_cpu_32((ZXDH_SVLAN_TPID << 16) | cookie->vlan_tci_outer); + } + hdr->pd_hdr.ol_flag = rte_be_to_cpu_32(ol_flag); } diff --git a/drivers/net/zxdh/zxdh_tables.c b/drivers/net/zxdh/zxdh_tables.c index 45aeb3e3e4..ca98b36da2 100644 
--- a/drivers/net/zxdh/zxdh_tables.c +++ b/drivers/net/zxdh/zxdh_tables.c @@ -10,6 +10,7 @@ #define ZXDH_SDT_VPORT_ATT_TABLE 1 #define ZXDH_SDT_PANEL_ATT_TABLE 2 +#define ZXDH_SDT_VLAN_ATT_TABLE 4 #define ZXDH_SDT_BROCAST_ATT_TABLE 6 #define ZXDH_SDT_UNICAST_ATT_TABLE 10 #define ZXDH_SDT_MULTICAST_ATT_TABLE 11 @@ -19,6 +20,10 @@ #define ZXDH_MC_GROUP_NUM 4 #define ZXDH_BASE_VFID 1152 #define ZXDH_TABLE_HIT_FLAG 128 +#define ZXDH_FIRST_VLAN_GROUP_BITS 23 +#define ZXDH_VLAN_GROUP_BITS 31 +#define ZXDH_VLAN_GROUP_NUM 35 +#define ZXDH_VLAN_FILTER_VLANID_STEP 120 int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr) @@ -569,3 +574,97 @@ zxdh_dev_multicast_table_set(struct zxdh_hw *hw, uint16_t vport, bool enable) } return 0; } + +int +zxdh_vlan_filter_table_init(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_vlan_filter_table vlan_table = {0}; + int16_t ret = 0; + + if (!hw->is_pf) + return 0; + + for (uint8_t vlan_group = 0; vlan_group < ZXDH_VLAN_GROUP_NUM; vlan_group++) { + if (vlan_group == 0) { + vlan_table.vlans[0] |= (1 << ZXDH_FIRST_VLAN_GROUP_BITS); + vlan_table.vlans[0] |= (1 << ZXDH_VLAN_GROUP_BITS); + + } else { + vlan_table.vlans[0] = 0; + } + uint32_t index = (vlan_group << 11) | hw->vport.vfid; + ZXDH_DTB_ERAM_ENTRY_INFO_T entry_data = { + .index = index, + .p_data = (uint32_t *)&vlan_table + }; + ZXDH_DTB_USER_ENTRY_T user_entry = {ZXDH_SDT_VLAN_ATT_TABLE, &entry_data}; + + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &user_entry); + if (ret != 0) { + PMD_DRV_LOG(ERR, + "[vfid:%d], vlan_group:%d, init vlan filter table failed", + hw->vport.vfid, vlan_group); + ret = -1; + } + } + + return ret; +} + +int +zxdh_vlan_filter_table_set(uint16_t vport, uint16_t vlan_id, uint8_t enable) +{ + struct zxdh_vlan_filter_table vlan_table = {0}; + union zxdh_virport_num vport_num = (union zxdh_virport_num)vport; + int ret = 0; + + memset(&vlan_table, 0, sizeof(struct zxdh_vlan_filter_table)); + int table_num = vlan_id / ZXDH_VLAN_FILTER_VLANID_STEP; + uint32_t index = (table_num << 11) | vport_num.vfid; + uint16_t group = (vlan_id - table_num * ZXDH_VLAN_FILTER_VLANID_STEP) / 8 + 1; + + uint8_t val = sizeof(struct zxdh_vlan_filter_table) / sizeof(uint32_t); + uint8_t vlan_tbl_index = group / val; + uint16_t used_group = vlan_tbl_index * val; + + used_group = (used_group == 0 ? 0 : (used_group - 1)); + + ZXDH_DTB_ERAM_ENTRY_INFO_T entry_data = {index, (uint32_t *)&vlan_table}; + ZXDH_DTB_USER_ENTRY_T user_entry_get = {ZXDH_SDT_VLAN_ATT_TABLE, &entry_data}; + + ret = zxdh_np_dtb_table_entry_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, &user_entry_get, 1); + if (ret) { + PMD_DRV_LOG(ERR, "get vlan table failed"); + return -1; + } + uint16_t relative_vlan_id = vlan_id - table_num * ZXDH_VLAN_FILTER_VLANID_STEP; + uint32_t *base_group = &vlan_table.vlans[0]; + + *base_group |= 1 << 31; + base_group = &vlan_table.vlans[vlan_tbl_index]; + uint8_t valid_bits = (vlan_tbl_index == 0 ? 
+ ZXDH_FIRST_VLAN_GROUP_BITS : ZXDH_VLAN_GROUP_BITS) + 1; + + uint8_t shift_left = (valid_bits - (relative_vlan_id - used_group * 8) % valid_bits) - 1; + + if (enable) + *base_group |= 1 << shift_left; + else + *base_group &= ~(1 << shift_left); + + + ZXDH_DTB_USER_ENTRY_T user_entry_write = { + .sdt_no = ZXDH_SDT_VLAN_ATT_TABLE, + .p_entry_data = &entry_data + }; + + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &user_entry_write); + if (ret != 0) { + PMD_DRV_LOG(ERR, "write vlan table failed"); + return -1; + } + return 0; +} diff --git a/drivers/net/zxdh/zxdh_tables.h b/drivers/net/zxdh/zxdh_tables.h index fb30c8f32e..8dac8f30dd 100644 --- a/drivers/net/zxdh/zxdh_tables.h +++ b/drivers/net/zxdh/zxdh_tables.h @@ -43,7 +43,7 @@ struct zxdh_port_attr_table { uint8_t rdma_offload_enable: 1; uint8_t vlan_filter_enable: 1; uint8_t vlan_strip_offload: 1; - uint8_t qinq_valn_strip_offload: 1; + uint8_t qinq_strip_offload: 1; uint8_t rss_enable: 1; uint8_t mtu_enable: 1; uint8_t hit_flag: 1; @@ -73,7 +73,7 @@ struct zxdh_port_attr_table { uint8_t rdma_offload_enable: 1; uint8_t vlan_filter_enable: 1; uint8_t vlan_strip_offload: 1; - uint8_t qinq_valn_strip_offload: 1; + uint8_t qinq_strip_offload: 1; uint8_t rss_enable: 1; uint8_t mtu_enable: 1; uint8_t hit_flag: 1; @@ -194,6 +194,10 @@ struct zxdh_multicast_table { uint32_t bitmap[2]; }; +struct zxdh_vlan_filter_table { + uint32_t vlans[4]; +}; + int zxdh_port_attr_init(struct rte_eth_dev *dev); int zxdh_panel_table_init(struct rte_eth_dev *dev); int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); @@ -205,5 +209,7 @@ int zxdh_promisc_table_init(struct rte_eth_dev *dev); int zxdh_promisc_table_uninit(struct rte_eth_dev *dev); int zxdh_dev_unicast_table_set(struct zxdh_hw *hw, uint16_t vport, bool enable); int zxdh_dev_multicast_table_set(struct zxdh_hw *hw, uint16_t vport, bool enable); +int zxdh_vlan_filter_table_init(struct rte_eth_dev *dev); +int zxdh_vlan_filter_table_set(uint16_t vport, uint16_t vlan_id, uint8_t enable); #endif /* ZXDH_TABLES_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 54926 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
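zxdh_dev_vlan_filter_set above keeps a software shadow of the hardware filter in dev->data->vlan_filter_conf.ids[] (one bit per VLAN ID, grouped into 64-bit words), so repeated add or delete requests return early and the DTB table is only programmed when a bit actually flips: directly on the PF, or via ZXDH_VLAN_FILTER_ADD/DEL messages from a VF. A minimal sketch of that bookkeeping, assuming the same 64-ID grouping; the helper names are illustrative, not the driver's:

/*
 * Shadow bookkeeping for the VLAN filter: one bit per VLAN ID across
 * 64-bit words, mirroring dev->data->vlan_filter_conf.ids[].
 * Helper names are hypothetical.
 */
#include <stdbool.h>
#include <stdint.h>

#define VLAN_GROUP_BITS   64                      /* matches ZXDH_VLAN_FILTER_GROUPS */
#define VLAN_SHADOW_WORDS (4096 / VLAN_GROUP_BITS)

static bool vlan_shadow_test(const uint64_t ids[VLAN_SHADOW_WORDS], uint16_t vlan_id)
{
	return (ids[vlan_id / VLAN_GROUP_BITS] >> (vlan_id % VLAN_GROUP_BITS)) & 1;
}

static void vlan_shadow_update(uint64_t ids[VLAN_SHADOW_WORDS], uint16_t vlan_id, bool on)
{
	uint64_t mask = UINT64_C(1) << (vlan_id % VLAN_GROUP_BITS);

	if (on)
		ids[vlan_id / VLAN_GROUP_BITS] |= mask;
	else
		ids[vlan_id / VLAN_GROUP_BITS] &= ~mask;
}

The early-return check in the op amounts to vlan_shadow_test() against the requested state, which is why adding an already-present VLAN ID is logged but still treated as success.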
* [PATCH v6 13/15] net/zxdh: rss hash config/update, reta update/get 2024-12-26 3:37 ` [PATCH v6 " Junlong Wang ` (11 preceding siblings ...) 2024-12-26 3:37 ` [PATCH v6 12/15] net/zxdh: vlan filter/ offload " Junlong Wang @ 2024-12-26 3:37 ` Junlong Wang 2024-12-26 3:37 ` [PATCH v6 14/15] net/zxdh: basic stats ops implementations Junlong Wang 2024-12-26 3:37 ` [PATCH v6 15/15] net/zxdh: mtu update " Junlong Wang 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-26 3:37 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 25363 bytes --] provided rss hash config/update, reta update/get ops. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- doc/guides/nics/features/zxdh.ini | 3 + doc/guides/nics/zxdh.rst | 1 + drivers/net/zxdh/zxdh_ethdev.c | 52 ++++ drivers/net/zxdh/zxdh_ethdev.h | 3 + drivers/net/zxdh/zxdh_ethdev_ops.c | 407 +++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev_ops.h | 26 ++ drivers/net/zxdh/zxdh_msg.h | 22 ++ drivers/net/zxdh/zxdh_tables.c | 82 ++++++ drivers/net/zxdh/zxdh_tables.h | 7 + 9 files changed, 603 insertions(+) diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index 6fb006c2da..415ca547d0 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -19,3 +19,6 @@ Allmulticast mode = Y VLAN filter = Y VLAN offload = Y QinQ offload = Y +RSS hash = Y +RSS reta update = Y +Inner RSS = Y diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index 3a7585d123..3cc6a1d348 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -31,6 +31,7 @@ Features of the ZXDH PMD are: - VLAN filter and VLAN offload - VLAN stripping and inserting - QINQ stripping and inserting +- Receive Side Scaling (RSS) Driver compilation and testing diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 7809b24d8b..1c04719cd4 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -61,6 +61,9 @@ zxdh_dev_infos_get(struct rte_eth_dev *dev, dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TCP_LRO; dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_RSS_HASH; + dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_256; + dev_info->flow_type_rss_offloads = ZXDH_RSS_HF; + dev_info->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_MULTI_SEGS); dev_info->tx_offload_capa |= (RTE_ETH_TX_OFFLOAD_TCP_TSO | RTE_ETH_TX_OFFLOAD_UDP_TSO); @@ -784,9 +787,48 @@ zxdh_dev_conf_offload(struct rte_eth_dev *dev) return ret; } + ret = zxdh_rss_configure(dev); + if (ret) { + PMD_DRV_LOG(ERR, "rss configure failed"); + return ret; + } + return 0; } +static int +zxdh_rss_qid_config(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_port_attr_table port_attr = {0}; + struct zxdh_msg_info msg_info = {0}; + int ret = 0; + + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.port_base_qid = hw->channel_context[0].ph_chno & 0xfff; + + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "PF:%d port_base_qid insert failed", hw->vfid); + return ret; + } + } else { + struct zxdh_port_attr_set_msg *attr_msg = &msg_info.data.port_attr_msg; + + zxdh_msg_head_build(hw, ZXDH_PORT_ATTRS_SET, &msg_info); + attr_msg->mode = ZXDH_PORT_BASE_QID_FLAG; + attr_msg->value = hw->channel_context[0].ph_chno & 0xfff; + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to 
send msg: port 0x%x msg type %d ", + hw->vport.vport, ZXDH_PORT_BASE_QID_FLAG); + return ret; + } + } + return ret; +} + static int32_t zxdh_dev_configure(struct rte_eth_dev *dev) { @@ -873,6 +915,12 @@ zxdh_dev_configure(struct rte_eth_dev *dev) return -1; } + ret = zxdh_rss_qid_config(dev); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to configure base qid!"); + return -1; + } + zxdh_pci_reinit_complete(hw); end: @@ -1119,6 +1167,10 @@ static const struct eth_dev_ops zxdh_eth_dev_ops = { .allmulticast_disable = zxdh_dev_allmulticast_disable, .vlan_filter_set = zxdh_dev_vlan_filter_set, .vlan_offload_set = zxdh_dev_vlan_offload_set, + .reta_update = zxdh_dev_rss_reta_update, + .reta_query = zxdh_dev_rss_reta_query, + .rss_hash_update = zxdh_rss_hash_update, + .rss_hash_conf_get = zxdh_rss_hash_conf_get, }; static int32_t diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index 3cdac5de73..2934fa264a 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -82,6 +82,7 @@ struct zxdh_hw { uint16_t queue_num; uint16_t mc_num; uint16_t uc_num; + uint16_t *rss_reta; uint8_t *isr; uint8_t weak_barriers; @@ -100,6 +101,8 @@ struct zxdh_hw { uint8_t admin_status; uint8_t promisc_status; uint8_t allmulti_status; + uint8_t rss_enable; + uint8_t rss_init; }; struct zxdh_dtb_shared_data { diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.c b/drivers/net/zxdh/zxdh_ethdev_ops.c index c4a1521723..d333717e87 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.c +++ b/drivers/net/zxdh/zxdh_ethdev_ops.c @@ -3,6 +3,7 @@ */ #include <rte_malloc.h> +#include <rte_ether.h> #include "zxdh_ethdev.h" #include "zxdh_pci.h" @@ -12,6 +13,14 @@ #include "zxdh_logs.h" #define ZXDH_VLAN_FILTER_GROUPS 64 +#define ZXDH_INVALID_LOGIC_QID 0xFFFFU + +/* Supported RSS */ +#define ZXDH_RSS_HF_MASK (~(ZXDH_RSS_HF)) +#define ZXDH_HF_F5 1 +#define ZXDH_HF_F3 2 +#define ZXDH_HF_MAC_VLAN 4 +#define ZXDH_HF_ALL 0 static int32_t zxdh_config_port_status(struct rte_eth_dev *dev, uint16_t link_status) { @@ -746,3 +755,401 @@ zxdh_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask) return ret; } + +int +zxdh_dev_rss_reta_update(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_info msg = {0}; + uint16_t old_reta[RTE_ETH_RSS_RETA_SIZE_256]; + uint16_t idx; + uint16_t i; + uint16_t pos; + int ret; + + if (reta_size != RTE_ETH_RSS_RETA_SIZE_256) { + PMD_DRV_LOG(ERR, "reta_size is illegal(%u).reta_size should be 256", reta_size); + return -EINVAL; + } + if (!hw->rss_reta) { + hw->rss_reta = rte_calloc(NULL, RTE_ETH_RSS_RETA_SIZE_256, sizeof(uint16_t), 0); + if (hw->rss_reta == NULL) { + PMD_DRV_LOG(ERR, "Failed to allocate RSS reta"); + return -ENOMEM; + } + } + for (idx = 0, i = 0; (i < reta_size); ++i) { + idx = i / RTE_ETH_RETA_GROUP_SIZE; + pos = i % RTE_ETH_RETA_GROUP_SIZE; + if (((reta_conf[idx].mask >> pos) & 0x1) == 0) + continue; + if (reta_conf[idx].reta[pos] > dev->data->nb_rx_queues) { + PMD_DRV_LOG(ERR, "reta table value err(%u >= %u)", + reta_conf[idx].reta[pos], dev->data->nb_rx_queues); + return -EINVAL; + } + if (hw->rss_reta[i] != reta_conf[idx].reta[pos]) + break; + } + if (i == reta_size) { + PMD_DRV_LOG(INFO, "reta table same with buffered table"); + return 0; + } + memcpy(old_reta, hw->rss_reta, sizeof(old_reta)); + + for (idx = 0, i = 0; i < reta_size; ++i) { + idx = i / RTE_ETH_RETA_GROUP_SIZE; + pos = i % RTE_ETH_RETA_GROUP_SIZE; + if (((reta_conf[idx].mask >> pos) 
& 0x1) == 0) + continue; + hw->rss_reta[i] = reta_conf[idx].reta[pos]; + } + + zxdh_msg_head_build(hw, ZXDH_RSS_RETA_SET, &msg); + for (i = 0; i < reta_size; i++) + msg.data.rss_reta.reta[i] = + (hw->channel_context[hw->rss_reta[i] * 2].ph_chno); + + if (hw->is_pf) { + ret = zxdh_rss_table_set(hw->vport.vport, &msg.data.rss_reta); + if (ret) { + PMD_DRV_LOG(ERR, "rss reta table set failed"); + return -EINVAL; + } + } else { + ret = zxdh_vf_send_msg_to_pf(dev, &msg, sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "vf rss reta table set failed"); + return -EINVAL; + } + } + return ret; +} + +static uint16_t +zxdh_hw_qid_to_logic_qid(struct rte_eth_dev *dev, uint16_t qid) +{ + struct zxdh_hw *hw = (struct zxdh_hw *)dev->data->dev_private; + uint16_t rx_queues = dev->data->nb_rx_queues; + uint16_t i; + + for (i = 0; i < rx_queues; i++) { + if (qid == hw->channel_context[i * 2].ph_chno) + return i; + } + return ZXDH_INVALID_LOGIC_QID; +} + +int +zxdh_dev_rss_reta_query(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size) +{ + struct zxdh_hw *hw = (struct zxdh_hw *)dev->data->dev_private; + struct zxdh_msg_info msg = {0}; + struct zxdh_msg_reply_info reply_msg = {0}; + uint16_t idx; + uint16_t i; + int ret = 0; + uint16_t qid_logic; + + ret = (!reta_size || reta_size > RTE_ETH_RSS_RETA_SIZE_256); + if (ret) { + PMD_DRV_LOG(ERR, "request reta size(%u) not same with buffered(%u)", + reta_size, RTE_ETH_RSS_RETA_SIZE_256); + return -EINVAL; + } + + /* Fill each entry of the table even if its bit is not set. */ + for (idx = 0, i = 0; (i != reta_size); ++i) { + idx = i / RTE_ETH_RETA_GROUP_SIZE; + reta_conf[idx].reta[i % RTE_ETH_RETA_GROUP_SIZE] = hw->rss_reta[i]; + } + + zxdh_msg_head_build(hw, ZXDH_RSS_RETA_GET, &msg); + + if (hw->is_pf) { + ret = zxdh_rss_table_get(hw->vport.vport, &reply_msg.reply_body.rss_reta); + if (ret) { + PMD_DRV_LOG(ERR, "rss reta table set failed"); + return -EINVAL; + } + } else { + ret = zxdh_vf_send_msg_to_pf(dev, &msg, sizeof(struct zxdh_msg_info), + &reply_msg, sizeof(struct zxdh_msg_reply_info)); + if (ret) { + PMD_DRV_LOG(ERR, "vf rss reta table get failed"); + return -EINVAL; + } + } + + struct zxdh_rss_reta *reta_table = &reply_msg.reply_body.rss_reta; + + for (idx = 0, i = 0; i < reta_size; ++i) { + idx = i / RTE_ETH_RETA_GROUP_SIZE; + + qid_logic = zxdh_hw_qid_to_logic_qid(dev, reta_table->reta[i]); + if (qid_logic == ZXDH_INVALID_LOGIC_QID) { + PMD_DRV_LOG(ERR, "rsp phy reta qid (%u) is illegal(%u)", + reta_table->reta[i], qid_logic); + return -EINVAL; + } + reta_conf[idx].reta[i % RTE_ETH_RETA_GROUP_SIZE] = qid_logic; + } + return 0; +} + +static uint32_t +zxdh_rss_hf_to_hw(uint64_t hf) +{ + uint32_t hw_hf = 0; + + if (hf & ZXDH_HF_MAC_VLAN_ETH) + hw_hf |= ZXDH_HF_MAC_VLAN; + if (hf & ZXDH_HF_F3_ETH) + hw_hf |= ZXDH_HF_F3; + if (hf & ZXDH_HF_F5_ETH) + hw_hf |= ZXDH_HF_F5; + + if (hw_hf == (ZXDH_HF_MAC_VLAN | ZXDH_HF_F3 | ZXDH_HF_F5)) + hw_hf = ZXDH_HF_ALL; + return hw_hf; +} + +static uint64_t +zxdh_rss_hf_to_eth(uint32_t hw_hf) +{ + uint64_t hf = 0; + + if (hw_hf == ZXDH_HF_ALL) + return (ZXDH_HF_MAC_VLAN_ETH | ZXDH_HF_F3_ETH | ZXDH_HF_F5_ETH); + + if (hw_hf & ZXDH_HF_MAC_VLAN) + hf |= ZXDH_HF_MAC_VLAN_ETH; + if (hw_hf & ZXDH_HF_F3) + hf |= ZXDH_HF_F3_ETH; + if (hw_hf & ZXDH_HF_F5) + hf |= ZXDH_HF_F5_ETH; + + return hf; +} + +int +zxdh_rss_hash_update(struct rte_eth_dev *dev, + struct rte_eth_rss_conf *rss_conf) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct rte_eth_rss_conf 
*old_rss_conf = &dev->data->dev_conf.rx_adv_conf.rss_conf; + struct zxdh_msg_info msg = {0}; + struct zxdh_port_attr_table port_attr = {0}; + uint32_t hw_hf_new, hw_hf_old; + int need_update_hf = 0; + int ret = 0; + + ret = rss_conf->rss_hf & ZXDH_RSS_HF_MASK; + if (ret) { + PMD_DRV_LOG(ERR, "Not support some hash function (%08lx)", rss_conf->rss_hf); + return -EINVAL; + } + + hw_hf_new = zxdh_rss_hf_to_hw(rss_conf->rss_hf); + hw_hf_old = zxdh_rss_hf_to_hw(old_rss_conf->rss_hf); + + if ((hw_hf_new != hw_hf_old || !!rss_conf->rss_hf)) + need_update_hf = 1; + + if (need_update_hf) { + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.rss_enable = !!rss_conf->rss_hf; + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "rss enable set failed"); + return -EINVAL; + } + } else { + msg.data.rss_enable.enable = !!rss_conf->rss_hf; + zxdh_msg_head_build(hw, ZXDH_RSS_ENABLE, &msg); + ret = zxdh_vf_send_msg_to_pf(dev, &msg, + sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "rss enable set failed"); + return -EINVAL; + } + } + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.rss_hash_factor = hw_hf_new; + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "rss hash factor set failed"); + return -EINVAL; + } + } else { + msg.data.rss_hf.rss_hf = hw_hf_new; + zxdh_msg_head_build(hw, ZXDH_RSS_HF_SET, &msg); + ret = zxdh_vf_send_msg_to_pf(dev, &msg, + sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "rss hash factor set failed"); + return -EINVAL; + } + } + old_rss_conf->rss_hf = rss_conf->rss_hf; + } + + return 0; +} + +int +zxdh_rss_hash_conf_get(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf) +{ + struct zxdh_hw *hw = (struct zxdh_hw *)dev->data->dev_private; + struct rte_eth_rss_conf *old_rss_conf = &dev->data->dev_conf.rx_adv_conf.rss_conf; + struct zxdh_msg_info msg = {0}; + struct zxdh_msg_reply_info reply_msg = {0}; + struct zxdh_port_attr_table port_attr = {0}; + int ret; + uint32_t hw_hf; + + if (rss_conf == NULL) { + PMD_DRV_LOG(ERR, "rss conf is NULL"); + return -ENOMEM; + } + + hw_hf = zxdh_rss_hf_to_hw(old_rss_conf->rss_hf); + rss_conf->rss_hf = zxdh_rss_hf_to_eth(hw_hf); + + zxdh_msg_head_build(hw, ZXDH_RSS_HF_GET, &msg); + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "rss hash factor set failed"); + return -EINVAL; + } + reply_msg.reply_body.rss_hf.rss_hf = port_attr.rss_hash_factor; + } else { + zxdh_msg_head_build(hw, ZXDH_RSS_HF_SET, &msg); + ret = zxdh_vf_send_msg_to_pf(dev, &msg, sizeof(struct zxdh_msg_info), + &reply_msg, sizeof(struct zxdh_msg_reply_info)); + if (ret) { + PMD_DRV_LOG(ERR, "rss hash factor set failed"); + return -EINVAL; + } + } + rss_conf->rss_hf = zxdh_rss_hf_to_eth(reply_msg.reply_body.rss_hf.rss_hf); + + return 0; +} + +static int +zxdh_get_rss_enable_conf(struct rte_eth_dev *dev) +{ + if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) + return dev->data->nb_rx_queues == 1 ? 
0 : 1; + else if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_NONE) + return 0; + + return 0; +} + +int +zxdh_rss_configure(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *dev_data = dev->data; + struct zxdh_hw *hw = (struct zxdh_hw *)dev->data->dev_private; + struct zxdh_port_attr_table port_attr = {0}; + struct zxdh_msg_info msg = {0}; + int ret = 0; + uint32_t hw_hf; + uint32_t i; + + if (dev->data->nb_rx_queues == 0) { + PMD_DRV_LOG(ERR, "port %u nb_rx_queues is 0", dev->data->port_id); + return -1; + } + + /* config rss enable */ + uint8_t curr_rss_enable = zxdh_get_rss_enable_conf(dev); + + if (hw->rss_enable != curr_rss_enable) { + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.rss_enable = curr_rss_enable; + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "rss enable set failed"); + return -EINVAL; + } + } else { + msg.data.rss_enable.enable = curr_rss_enable; + zxdh_msg_head_build(hw, ZXDH_RSS_ENABLE, &msg); + ret = zxdh_vf_send_msg_to_pf(dev, &msg, + sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "rss enable set failed"); + return -EINVAL; + } + } + hw->rss_enable = curr_rss_enable; + } + + if (curr_rss_enable && hw->rss_init == 0) { + /* config hash factor */ + dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf = ZXDH_HF_F5_ETH; + hw_hf = zxdh_rss_hf_to_hw(dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf); + memset(&msg, 0, sizeof(msg)); + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.rss_hash_factor = hw_hf; + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "rss hash factor set failed"); + return -EINVAL; + } + } else { + msg.data.rss_hf.rss_hf = hw_hf; + zxdh_msg_head_build(hw, ZXDH_RSS_HF_SET, &msg); + ret = zxdh_vf_send_msg_to_pf(dev, &msg, + sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "rss hash factor set failed"); + return -EINVAL; + } + } + hw->rss_init = 1; + } + + if (!hw->rss_reta) { + hw->rss_reta = rte_calloc(NULL, RTE_ETH_RSS_RETA_SIZE_256, sizeof(uint16_t), 0); + if (hw->rss_reta == NULL) { + PMD_DRV_LOG(ERR, "alloc memory fail"); + return -1; + } + } + for (i = 0; i < RTE_ETH_RSS_RETA_SIZE_256; i++) + hw->rss_reta[i] = i % dev_data->nb_rx_queues; + + /* hw config reta */ + zxdh_msg_head_build(hw, ZXDH_RSS_RETA_SET, &msg); + for (i = 0; i < RTE_ETH_RSS_RETA_SIZE_256; i++) + msg.data.rss_reta.reta[i] = + hw->channel_context[hw->rss_reta[i] * 2].ph_chno; + + if (hw->is_pf) { + ret = zxdh_rss_table_set(hw->vport.vport, &msg.data.rss_reta); + if (ret) { + PMD_DRV_LOG(ERR, "rss reta table set failed"); + return -EINVAL; + } + } else { + ret = zxdh_vf_send_msg_to_pf(dev, &msg, sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "vf rss reta table set failed"); + return -EINVAL; + } + } + return 0; +} diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.h b/drivers/net/zxdh/zxdh_ethdev_ops.h index 058d271ab3..860716d079 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.h +++ b/drivers/net/zxdh/zxdh_ethdev_ops.h @@ -5,8 +5,25 @@ #ifndef ZXDH_ETHDEV_OPS_H #define ZXDH_ETHDEV_OPS_H +#include <rte_ether.h> + #include "zxdh_ethdev.h" +#define ZXDH_ETH_RSS_L2 RTE_ETH_RSS_L2_PAYLOAD +#define ZXDH_ETH_RSS_IP \ + (RTE_ETH_RSS_IPV4 | \ + RTE_ETH_RSS_FRAG_IPV4 | \ + RTE_ETH_RSS_IPV6 | \ + RTE_ETH_RSS_FRAG_IPV6) +#define ZXDH_ETH_RSS_TCP (RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP) +#define ZXDH_ETH_RSS_UDP (RTE_ETH_RSS_NONFRAG_IPV4_UDP | 
RTE_ETH_RSS_NONFRAG_IPV6_UDP) +#define ZXDH_ETH_RSS_SCTP (RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_NONFRAG_IPV6_SCTP) + +#define ZXDH_HF_F5_ETH (ZXDH_ETH_RSS_TCP | ZXDH_ETH_RSS_UDP | ZXDH_ETH_RSS_SCTP) +#define ZXDH_HF_F3_ETH ZXDH_ETH_RSS_IP +#define ZXDH_HF_MAC_VLAN_ETH ZXDH_ETH_RSS_L2 +#define ZXDH_RSS_HF ((ZXDH_HF_MAC_VLAN_ETH | ZXDH_HF_F3_ETH | ZXDH_HF_F5_ETH)) + int zxdh_dev_set_link_up(struct rte_eth_dev *dev); int zxdh_dev_set_link_down(struct rte_eth_dev *dev); int32_t zxdh_dev_link_update(struct rte_eth_dev *dev, int32_t wait_to_complete __rte_unused); @@ -20,5 +37,14 @@ int zxdh_dev_allmulticast_enable(struct rte_eth_dev *dev); int zxdh_dev_allmulticast_disable(struct rte_eth_dev *dev); int zxdh_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on); int zxdh_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask); +int zxdh_dev_rss_reta_update(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size); +int zxdh_dev_rss_reta_query(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size); +int zxdh_rss_hash_update(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf); +int zxdh_rss_hash_conf_get(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf); +int zxdh_rss_configure(struct rte_eth_dev *dev); #endif /* ZXDH_ETHDEV_OPS_H */ diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index ec15388f7a..45a9b10aa4 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -182,6 +182,11 @@ enum zxdh_msg_type { ZXDH_VF_PORT_UNINIT = 2, ZXDH_MAC_ADD = 3, ZXDH_MAC_DEL = 4, + ZXDH_RSS_ENABLE = 7, + ZXDH_RSS_RETA_SET = 8, + ZXDH_RSS_RETA_GET = 9, + ZXDH_RSS_HF_SET = 15, + ZXDH_RSS_HF_GET = 16, ZXDH_VLAN_FILTER_SET = 17, ZXDH_VLAN_FILTER_ADD = 18, ZXDH_VLAN_FILTER_DEL = 19, @@ -291,6 +296,14 @@ struct zxdh_link_info_msg { uint32_t speed; } __rte_packed; +struct zxdh_rss_reta { + uint32_t reta[RTE_ETH_RSS_RETA_SIZE_256]; +}; + +struct zxdh_rss_hf { + uint32_t rss_hf; +}; + struct zxdh_msg_reply_head { uint8_t flag; uint16_t reps_len; @@ -307,6 +320,8 @@ struct zxdh_msg_reply_body { union { uint8_t reply_data[ZXDH_MSG_REPLY_BODY_MAX_LEN - sizeof(enum zxdh_reps_flag)]; struct zxdh_link_info_msg link_msg; + struct zxdh_rss_hf rss_hf; + struct zxdh_rss_reta rss_reta; } __rte_packed; } __rte_packed; @@ -360,6 +375,10 @@ struct zxdh_vlan_offload { uint8_t type; } __rte_packed; +struct zxdh_rss_enable { + uint8_t enable; +}; + struct zxdh_agent_msg_head { enum zxdh_agent_msg_type msg_type; uint8_t panel_id; @@ -385,6 +404,9 @@ struct zxdh_msg_info { struct zxdh_vlan_filter vlan_filter_msg; struct zxdh_vlan_filter_set vlan_filter_set_msg; struct zxdh_vlan_offload vlan_offload_msg; + struct zxdh_rss_reta rss_reta; + struct zxdh_rss_enable rss_enable; + struct zxdh_rss_hf rss_hf; } __rte_packed data; } __rte_packed; diff --git a/drivers/net/zxdh/zxdh_tables.c b/drivers/net/zxdh/zxdh_tables.c index ca98b36da2..2939d9ae8b 100644 --- a/drivers/net/zxdh/zxdh_tables.c +++ b/drivers/net/zxdh/zxdh_tables.c @@ -10,6 +10,7 @@ #define ZXDH_SDT_VPORT_ATT_TABLE 1 #define ZXDH_SDT_PANEL_ATT_TABLE 2 +#define ZXDH_SDT_RSS_ATT_TABLE 3 #define ZXDH_SDT_VLAN_ATT_TABLE 4 #define ZXDH_SDT_BROCAST_ATT_TABLE 6 #define ZXDH_SDT_UNICAST_ATT_TABLE 10 @@ -668,3 +669,84 @@ zxdh_vlan_filter_table_set(uint16_t vport, uint16_t vlan_id, uint8_t enable) } return 0; } + +int +zxdh_rss_table_set(uint16_t vport, struct zxdh_rss_reta *rss_reta) +{ + struct zxdh_rss_to_vqid_table rss_vqid = {0}; + 
union zxdh_virport_num vport_num = (union zxdh_virport_num)vport; + int ret = 0; + + for (uint16_t i = 0; i < RTE_ETH_RSS_RETA_SIZE_256 / 8; i++) { + for (uint16_t j = 0; j < 8; j++) { +#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN + if (j % 2 == 0) + rss_vqid.vqm_qid[j + 1] = rss_reta->reta[i * 8 + j]; + else + rss_vqid.vqm_qid[j - 1] = rss_reta->reta[i * 8 + j]; +#else + rss_vqid.vqm_qid[j] = rss_reta->reta[i * 8 + j]; +#endif + } + +#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN + rss_vqid.vqm_qid[1] |= 0x8000; +#else + rss_vqid.vqm_qid[0] |= 0x8000; +#endif + ZXDH_DTB_ERAM_ENTRY_INFO_T entry = { + .index = vport_num.vfid * 32 + i, + .p_data = (uint32_t *)&rss_vqid + }; + ZXDH_DTB_USER_ENTRY_T user_entry_write = { + .sdt_no = ZXDH_SDT_RSS_ATT_TABLE, + .p_entry_data = &entry + }; + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &user_entry_write); + if (ret != 0) { + PMD_DRV_LOG(ERR, "write rss base qid failed vfid:%d", vport_num.vfid); + return ret; + } + } + return 0; +} + +int +zxdh_rss_table_get(uint16_t vport, struct zxdh_rss_reta *rss_reta) +{ + struct zxdh_rss_to_vqid_table rss_vqid = {0}; + union zxdh_virport_num vport_num = (union zxdh_virport_num)vport; + int ret = 0; + + for (uint16_t i = 0; i < RTE_ETH_RSS_RETA_SIZE_256 / 8; i++) { + ZXDH_DTB_ERAM_ENTRY_INFO_T entry = {vport_num.vfid * 32 + i, (uint32_t *)&rss_vqid}; + ZXDH_DTB_USER_ENTRY_T user_entry = {ZXDH_SDT_RSS_ATT_TABLE, &entry}; + + ret = zxdh_np_dtb_table_entry_get(ZXDH_DEVICE_NO, + g_dtb_data.queueid, &user_entry, 1); + if (ret != 0) { + PMD_DRV_LOG(ERR, "get rss tbl failed, vfid:%d", vport_num.vfid); + return -1; + } + +#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN + rss_vqid.vqm_qid[1] &= 0x7FFF; +#else + rss_vqid.vqm_qid[0] &= 0x7FFF; +#endif + uint8_t size = sizeof(struct zxdh_rss_to_vqid_table) / sizeof(uint16_t); + + for (int j = 0; j < size; j++) { +#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN + if (j % 2 == 0) + rss_reta->reta[i * 8 + j] = rss_vqid.vqm_qid[j + 1]; + else + rss_reta->reta[i * 8 + j] = rss_vqid.vqm_qid[j - 1]; +#else + rss_reta->reta[i * 8 + j] = rss_vqid.vqm_qid[j]; +#endif + } + } + return 0; +} diff --git a/drivers/net/zxdh/zxdh_tables.h b/drivers/net/zxdh/zxdh_tables.h index 8dac8f30dd..7bac39375c 100644 --- a/drivers/net/zxdh/zxdh_tables.h +++ b/drivers/net/zxdh/zxdh_tables.h @@ -8,6 +8,7 @@ #include <stdint.h> #define ZXDH_DEVICE_NO 0 +#define ZXDH_PORT_BASE_QID_FLAG 10 #define ZXDH_PORT_ATTR_IS_UP_FLAG 35 extern struct zxdh_dtb_shared_data g_dtb_data; @@ -198,6 +199,10 @@ struct zxdh_vlan_filter_table { uint32_t vlans[4]; }; +struct zxdh_rss_to_vqid_table { + uint16_t vqm_qid[8]; +}; + int zxdh_port_attr_init(struct rte_eth_dev *dev); int zxdh_panel_table_init(struct rte_eth_dev *dev); int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); @@ -211,5 +216,7 @@ int zxdh_dev_unicast_table_set(struct zxdh_hw *hw, uint16_t vport, bool enable); int zxdh_dev_multicast_table_set(struct zxdh_hw *hw, uint16_t vport, bool enable); int zxdh_vlan_filter_table_init(struct rte_eth_dev *dev); int zxdh_vlan_filter_table_set(uint16_t vport, uint16_t vlan_id, uint8_t enable); +int zxdh_rss_table_set(uint16_t vport, struct zxdh_rss_reta *rss_reta); +int zxdh_rss_table_get(uint16_t vport, struct zxdh_rss_reta *rss_reta); #endif /* ZXDH_TABLES_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 62745 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
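On little-endian hosts zxdh_rss_table_set above swaps every pair of 16-bit queue IDs before writing a RETA row, because the eRAM entry is consumed as 32-bit words, and it marks the row valid by setting bit 15 of the first logical entry; zxdh_rss_table_get undoes both steps. A self-contained sketch of the packing side, assuming a little-endian host and using a stand-in struct for zxdh_rss_to_vqid_table:

/*
 * Pack eight RETA entries into one row, mirroring the little-endian
 * branch of zxdh_rss_table_set: 16-bit IDs are swapped pairwise so
 * the 32-bit eRAM words come out in hardware order, and bit 15 of
 * the first logical entry is the valid flag. The struct name is a
 * stand-in, not the driver's.
 */
#include <stdint.h>

struct rss_vqid_row {
	uint16_t vqm_qid[8];
};

static void pack_reta_row_le(struct rss_vqid_row *row, const uint32_t reta[8])
{
	for (int j = 0; j < 8; j++) {
		if (j % 2 == 0)
			row->vqm_qid[j + 1] = (uint16_t)reta[j]; /* even slot -> odd position */
		else
			row->vqm_qid[j - 1] = (uint16_t)reta[j]; /* odd slot -> even position */
	}
	row->vqm_qid[1] |= 0x8000; /* valid flag on the first logical entry */
}

Each VF owns 32 such rows (index = vfid * 32 + i), which covers the 256-entry indirection table eight entries at a time.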
* [PATCH v6 14/15] net/zxdh: basic stats ops implementations 2024-12-26 3:37 ` [PATCH v6 " Junlong Wang ` (12 preceding siblings ...) 2024-12-26 3:37 ` [PATCH v6 13/15] net/zxdh: rss hash config/update, reta update/get Junlong Wang @ 2024-12-26 3:37 ` Junlong Wang 2024-12-26 3:37 ` [PATCH v6 15/15] net/zxdh: mtu update " Junlong Wang 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-26 3:37 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 37376 bytes --] basic stats ops implementations. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- doc/guides/nics/features/zxdh.ini | 2 + doc/guides/nics/zxdh.rst | 1 + drivers/net/zxdh/zxdh_ethdev.c | 2 + drivers/net/zxdh/zxdh_ethdev_ops.c | 353 +++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev_ops.h | 27 +++ drivers/net/zxdh/zxdh_msg.h | 16 ++ drivers/net/zxdh/zxdh_np.c | 341 ++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_np.h | 30 +++ drivers/net/zxdh/zxdh_queue.h | 2 + drivers/net/zxdh/zxdh_rxtx.c | 83 ++++++- drivers/net/zxdh/zxdh_tables.h | 5 + 11 files changed, 859 insertions(+), 3 deletions(-) diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index 415ca547d0..98c141cf95 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -22,3 +22,5 @@ QinQ offload = Y RSS hash = Y RSS reta update = Y Inner RSS = Y +Basic stats = Y +Stats per queue = Y diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index 3cc6a1d348..c8a52b587c 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -32,6 +32,7 @@ Features of the ZXDH PMD are: - VLAN stripping and inserting - QINQ stripping and inserting - Receive Side Scaling (RSS) +- Port hardware statistics Driver compilation and testing diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 1c04719cd4..d87ad15824 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -1171,6 +1171,8 @@ static const struct eth_dev_ops zxdh_eth_dev_ops = { .reta_query = zxdh_dev_rss_reta_query, .rss_hash_update = zxdh_rss_hash_update, .rss_hash_conf_get = zxdh_rss_hash_conf_get, + .stats_get = zxdh_dev_stats_get, + .stats_reset = zxdh_dev_stats_reset, }; static int32_t diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.c b/drivers/net/zxdh/zxdh_ethdev_ops.c index d333717e87..1b219bd26d 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.c +++ b/drivers/net/zxdh/zxdh_ethdev_ops.c @@ -11,6 +11,8 @@ #include "zxdh_ethdev_ops.h" #include "zxdh_tables.h" #include "zxdh_logs.h" +#include "zxdh_rxtx.h" +#include "zxdh_np.h" #define ZXDH_VLAN_FILTER_GROUPS 64 #define ZXDH_INVALID_LOGIC_QID 0xFFFFU @@ -22,6 +24,108 @@ #define ZXDH_HF_MAC_VLAN 4 #define ZXDH_HF_ALL 0 +struct zxdh_hw_mac_stats { + uint64_t rx_total; + uint64_t rx_pause; + uint64_t rx_unicast; + uint64_t rx_multicast; + uint64_t rx_broadcast; + uint64_t rx_vlan; + uint64_t rx_size_64; + uint64_t rx_size_65_127; + uint64_t rx_size_128_255; + uint64_t rx_size_256_511; + uint64_t rx_size_512_1023; + uint64_t rx_size_1024_1518; + uint64_t rx_size_1519_mru; + uint64_t rx_undersize; + uint64_t rx_oversize; + uint64_t rx_fragment; + uint64_t rx_jabber; + uint64_t rx_control; + uint64_t rx_eee; + + uint64_t tx_total; + uint64_t tx_pause; + uint64_t tx_unicast; + uint64_t tx_multicast; + uint64_t tx_broadcast; + uint64_t tx_vlan; + uint64_t tx_size_64; + uint64_t tx_size_65_127; + uint64_t tx_size_128_255; + uint64_t tx_size_256_511; + uint64_t 
tx_size_512_1023; + uint64_t tx_size_1024_1518; + uint64_t tx_size_1519_mtu; + uint64_t tx_undersize; + uint64_t tx_oversize; + uint64_t tx_fragment; + uint64_t tx_jabber; + uint64_t tx_control; + uint64_t tx_eee; + + uint64_t rx_error; + uint64_t rx_fcs_error; + uint64_t rx_drop; + + uint64_t tx_error; + uint64_t tx_fcs_error; + uint64_t tx_drop; + +} __rte_packed; + +struct zxdh_hw_mac_bytes { + uint64_t rx_total_bytes; + uint64_t rx_good_bytes; + uint64_t tx_total_bytes; + uint64_t tx_good_bytes; +} __rte_packed; + +struct zxdh_np_stats_data { + uint64_t n_pkts_dropped; + uint64_t n_bytes_dropped; +}; + +struct zxdh_xstats_name_off { + char name[RTE_ETH_XSTATS_NAME_SIZE]; + unsigned int offset; +}; + +static const struct zxdh_xstats_name_off zxdh_rxq_stat_strings[] = { + {"good_packets", offsetof(struct zxdh_virtnet_rx, stats.packets)}, + {"good_bytes", offsetof(struct zxdh_virtnet_rx, stats.bytes)}, + {"errors", offsetof(struct zxdh_virtnet_rx, stats.errors)}, + {"multicast_packets", offsetof(struct zxdh_virtnet_rx, stats.multicast)}, + {"broadcast_packets", offsetof(struct zxdh_virtnet_rx, stats.broadcast)}, + {"truncated_err", offsetof(struct zxdh_virtnet_rx, stats.truncated_err)}, + {"undersize_packets", offsetof(struct zxdh_virtnet_rx, stats.size_bins[0])}, + {"size_64_packets", offsetof(struct zxdh_virtnet_rx, stats.size_bins[1])}, + {"size_65_127_packets", offsetof(struct zxdh_virtnet_rx, stats.size_bins[2])}, + {"size_128_255_packets", offsetof(struct zxdh_virtnet_rx, stats.size_bins[3])}, + {"size_256_511_packets", offsetof(struct zxdh_virtnet_rx, stats.size_bins[4])}, + {"size_512_1023_packets", offsetof(struct zxdh_virtnet_rx, stats.size_bins[5])}, + {"size_1024_1518_packets", offsetof(struct zxdh_virtnet_rx, stats.size_bins[6])}, + {"size_1519_max_packets", offsetof(struct zxdh_virtnet_rx, stats.size_bins[7])}, +}; + +static const struct zxdh_xstats_name_off zxdh_txq_stat_strings[] = { + {"good_packets", offsetof(struct zxdh_virtnet_tx, stats.packets)}, + {"good_bytes", offsetof(struct zxdh_virtnet_tx, stats.bytes)}, + {"errors", offsetof(struct zxdh_virtnet_tx, stats.errors)}, + {"multicast_packets", offsetof(struct zxdh_virtnet_tx, stats.multicast)}, + {"broadcast_packets", offsetof(struct zxdh_virtnet_tx, stats.broadcast)}, + {"truncated_err", offsetof(struct zxdh_virtnet_tx, stats.truncated_err)}, + {"undersize_packets", offsetof(struct zxdh_virtnet_tx, stats.size_bins[0])}, + {"size_64_packets", offsetof(struct zxdh_virtnet_tx, stats.size_bins[1])}, + {"size_65_127_packets", offsetof(struct zxdh_virtnet_tx, stats.size_bins[2])}, + {"size_128_255_packets", offsetof(struct zxdh_virtnet_tx, stats.size_bins[3])}, + {"size_256_511_packets", offsetof(struct zxdh_virtnet_tx, stats.size_bins[4])}, + {"size_512_1023_packets", offsetof(struct zxdh_virtnet_tx, stats.size_bins[5])}, + {"size_1024_1518_packets", offsetof(struct zxdh_virtnet_tx, stats.size_bins[6])}, + {"size_1519_max_packets", offsetof(struct zxdh_virtnet_tx, stats.size_bins[7])}, +}; + static int32_t zxdh_config_port_status(struct rte_eth_dev *dev, uint16_t link_status) { struct zxdh_hw *hw = dev->data->dev_private; @@ -1153,3 +1257,252 @@ zxdh_rss_configure(struct rte_eth_dev *dev) } return 0; } + +static int32_t +zxdh_hw_vqm_stats_get(struct rte_eth_dev *dev, enum zxdh_agent_msg_type opcode, + struct zxdh_hw_vqm_stats *hw_stats) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_info msg_info = {0}; + struct zxdh_msg_reply_info reply_info = {0}; + enum ZXDH_BAR_MODULE_ID module_id; + int ret = 0; 
+ + switch (opcode) { + case ZXDH_VQM_DEV_STATS_GET: + case ZXDH_VQM_QUEUE_STATS_GET: + case ZXDH_VQM_QUEUE_STATS_RESET: + module_id = ZXDH_BAR_MODULE_VQM; + break; + case ZXDH_MAC_STATS_GET: + case ZXDH_MAC_STATS_RESET: + module_id = ZXDH_BAR_MODULE_MAC; + break; + default: + PMD_DRV_LOG(ERR, "invalid opcode %u", opcode); + return -1; + } + + zxdh_agent_msg_build(hw, opcode, &msg_info); + + ret = zxdh_send_msg_to_riscv(dev, &msg_info, sizeof(struct zxdh_msg_info), + &reply_info, sizeof(struct zxdh_msg_reply_info), module_id); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to get hw stats"); + return -1; + } + struct zxdh_msg_reply_body *reply_body = &reply_info.reply_body; + + rte_memcpy(hw_stats, &reply_body->vqm_stats, sizeof(struct zxdh_hw_vqm_stats)); + return 0; +} + +static int zxdh_hw_mac_stats_get(struct rte_eth_dev *dev, + struct zxdh_hw_mac_stats *mac_stats, + struct zxdh_hw_mac_bytes *mac_bytes) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint64_t virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_MAC_OFFSET); + uint64_t stats_addr = 0; + uint64_t bytes_addr = 0; + + if (hw->speed <= RTE_ETH_SPEED_NUM_25G) { + stats_addr = virt_addr + ZXDH_MAC_STATS_OFFSET + 352 * (hw->phyport % 4); + bytes_addr = virt_addr + ZXDH_MAC_BYTES_OFFSET + 32 * (hw->phyport % 4); + } else { + stats_addr = virt_addr + ZXDH_MAC_STATS_OFFSET + 352 * 4; + bytes_addr = virt_addr + ZXDH_MAC_BYTES_OFFSET + 32 * 4; + } + + rte_memcpy(mac_stats, (void *)stats_addr, sizeof(struct zxdh_hw_mac_stats)); + rte_memcpy(mac_bytes, (void *)bytes_addr, sizeof(struct zxdh_hw_mac_bytes)); + return 0; +} + +static void zxdh_data_hi_to_lo(uint64_t *data) +{ + uint32_t n_data_hi; + uint32_t n_data_lo; + + n_data_lo = *data >> 32; + n_data_hi = *data; + *data = (uint64_t)(rte_le_to_cpu_32(n_data_hi)) << 32 | + rte_le_to_cpu_32(n_data_lo); +} + +static int zxdh_np_stats_get(struct rte_eth_dev *dev, struct zxdh_hw_np_stats *np_stats) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_np_stats_data stats_data; + uint32_t stats_id = zxdh_vport_to_vfid(hw->vport); + uint32_t idx = 0; + int ret = 0; + + idx = stats_id + ZXDH_BROAD_STATS_EGRESS_BASE; + ret = zxdh_np_dtb_stats_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, + 0, idx, (uint32_t *)&np_stats->np_tx_broadcast); + if (ret) + return ret; + zxdh_data_hi_to_lo(&np_stats->np_tx_broadcast); + + idx = stats_id + ZXDH_BROAD_STATS_INGRESS_BASE; + memset(&stats_data, 0, sizeof(stats_data)); + ret = zxdh_np_dtb_stats_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, + 0, idx, (uint32_t *)&np_stats->np_rx_broadcast); + if (ret) + return ret; + zxdh_data_hi_to_lo(&np_stats->np_rx_broadcast); + + idx = stats_id + ZXDH_MTU_STATS_EGRESS_BASE; + memset(&stats_data, 0, sizeof(stats_data)); + ret = zxdh_np_dtb_stats_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, + 1, idx, (uint32_t *)&stats_data); + if (ret) + return ret; + + np_stats->np_tx_mtu_drop_pkts = stats_data.n_pkts_dropped; + np_stats->np_tx_mtu_drop_bytes = stats_data.n_bytes_dropped; + zxdh_data_hi_to_lo(&np_stats->np_tx_mtu_drop_pkts); + zxdh_data_hi_to_lo(&np_stats->np_tx_mtu_drop_bytes); + + idx = stats_id + ZXDH_MTU_STATS_INGRESS_BASE; + memset(&stats_data, 0, sizeof(stats_data)); + ret = zxdh_np_dtb_stats_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, + 1, idx, (uint32_t *)&stats_data); + if (ret) + return ret; + np_stats->np_rx_mtu_drop_pkts = stats_data.n_pkts_dropped; + np_stats->np_rx_mtu_drop_bytes = stats_data.n_bytes_dropped; + zxdh_data_hi_to_lo(&np_stats->np_rx_mtu_drop_pkts); + 
zxdh_data_hi_to_lo(&np_stats->np_rx_mtu_drop_bytes); + + return 0; +} + +static int +zxdh_hw_np_stats_get(struct rte_eth_dev *dev, struct zxdh_hw_np_stats *np_stats) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_info msg_info = {0}; + struct zxdh_msg_reply_info reply_info = {0}; + int ret = 0; + + if (hw->is_pf) { + ret = zxdh_np_stats_get(dev, np_stats); + if (ret) { + PMD_DRV_LOG(ERR, "get np stats failed"); + return -1; + } + } else { + zxdh_msg_head_build(hw, ZXDH_GET_NP_STATS, &msg_info); + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(struct zxdh_msg_info), + &reply_info, sizeof(struct zxdh_msg_reply_info)); + if (ret) { + PMD_DRV_LOG(ERR, + "Failed to send msg: port 0x%x msg type ZXDH_PORT_METER_STAT_GET", + hw->vport.vport); + return -1; + } + memcpy(np_stats, &reply_info.reply_body.np_stats, sizeof(struct zxdh_hw_np_stats)); + } + return ret; +} + +int +zxdh_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_hw_vqm_stats vqm_stats = {0}; + struct zxdh_hw_np_stats np_stats = {0}; + struct zxdh_hw_mac_stats mac_stats = {0}; + struct zxdh_hw_mac_bytes mac_bytes = {0}; + uint32_t i = 0; + + zxdh_hw_vqm_stats_get(dev, ZXDH_VQM_DEV_STATS_GET, &vqm_stats); + if (hw->is_pf) + zxdh_hw_mac_stats_get(dev, &mac_stats, &mac_bytes); + + zxdh_hw_np_stats_get(dev, &np_stats); + + stats->ipackets = vqm_stats.rx_total; + stats->opackets = vqm_stats.tx_total; + stats->ibytes = vqm_stats.rx_bytes; + stats->obytes = vqm_stats.tx_bytes; + stats->imissed = vqm_stats.rx_drop + mac_stats.rx_drop; + stats->ierrors = vqm_stats.rx_error + mac_stats.rx_error + np_stats.np_rx_mtu_drop_pkts; + stats->oerrors = vqm_stats.tx_error + mac_stats.tx_error + np_stats.np_tx_mtu_drop_pkts; + + stats->rx_nombuf = dev->data->rx_mbuf_alloc_failed; + for (i = 0; (i < dev->data->nb_rx_queues) && (i < RTE_ETHDEV_QUEUE_STAT_CNTRS); i++) { + struct zxdh_virtnet_rx *rxvq = dev->data->rx_queues[i]; + + if (rxvq == NULL) + continue; + stats->q_ipackets[i] = *(uint64_t *)(((char *)rxvq) + + zxdh_rxq_stat_strings[0].offset); + stats->q_ibytes[i] = *(uint64_t *)(((char *)rxvq) + + zxdh_rxq_stat_strings[1].offset); + stats->q_errors[i] = *(uint64_t *)(((char *)rxvq) + + zxdh_rxq_stat_strings[2].offset); + stats->q_errors[i] += *(uint64_t *)(((char *)rxvq) + + zxdh_rxq_stat_strings[5].offset); + } + + for (i = 0; (i < dev->data->nb_tx_queues) && (i < RTE_ETHDEV_QUEUE_STAT_CNTRS); i++) { + struct zxdh_virtnet_tx *txvq = dev->data->tx_queues[i]; + + if (txvq == NULL) + continue; + stats->q_opackets[i] = *(uint64_t *)(((char *)txvq) + + zxdh_txq_stat_strings[0].offset); + stats->q_obytes[i] = *(uint64_t *)(((char *)txvq) + + zxdh_txq_stat_strings[1].offset); + stats->q_errors[i] += *(uint64_t *)(((char *)txvq) + + zxdh_txq_stat_strings[2].offset); + stats->q_errors[i] += *(uint64_t *)(((char *)txvq) + + zxdh_txq_stat_strings[5].offset); + } + return 0; +} + +static int zxdh_hw_stats_reset(struct rte_eth_dev *dev, enum zxdh_agent_msg_type opcode) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_info msg_info = {0}; + struct zxdh_msg_reply_info reply_info = {0}; + enum ZXDH_BAR_MODULE_ID module_id; + int ret = 0; + + switch (opcode) { + case ZXDH_VQM_DEV_STATS_RESET: + module_id = ZXDH_BAR_MODULE_VQM; + break; + case ZXDH_MAC_STATS_RESET: + module_id = ZXDH_BAR_MODULE_MAC; + break; + default: + PMD_DRV_LOG(ERR, "invalid opcode %u", opcode); + return -1; + } + + zxdh_agent_msg_build(hw, opcode, &msg_info); + + ret = 
zxdh_send_msg_to_riscv(dev, &msg_info, sizeof(struct zxdh_msg_info), + &reply_info, sizeof(struct zxdh_msg_reply_info), module_id); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to reset hw stats"); + return -1; + } + return 0; +} + +int zxdh_dev_stats_reset(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + zxdh_hw_stats_reset(dev, ZXDH_VQM_DEV_STATS_RESET); + if (hw->is_pf) + zxdh_hw_stats_reset(dev, ZXDH_MAC_STATS_RESET); + + return 0; +} diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.h b/drivers/net/zxdh/zxdh_ethdev_ops.h index 860716d079..f35378e691 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.h +++ b/drivers/net/zxdh/zxdh_ethdev_ops.h @@ -5,6 +5,8 @@ #ifndef ZXDH_ETHDEV_OPS_H #define ZXDH_ETHDEV_OPS_H +#include <stdint.h> + #include <rte_ether.h> #include "zxdh_ethdev.h" @@ -24,6 +26,29 @@ #define ZXDH_HF_MAC_VLAN_ETH ZXDH_ETH_RSS_L2 #define ZXDH_RSS_HF ((ZXDH_HF_MAC_VLAN_ETH | ZXDH_HF_F3_ETH | ZXDH_HF_F5_ETH)) +struct zxdh_hw_vqm_stats { + uint64_t rx_total; + uint64_t tx_total; + uint64_t rx_bytes; + uint64_t tx_bytes; + uint64_t rx_error; + uint64_t tx_error; + uint64_t rx_drop; +} __rte_packed; + +struct zxdh_hw_np_stats { + uint64_t np_rx_broadcast; + uint64_t np_tx_broadcast; + uint64_t np_rx_mtu_drop_pkts; + uint64_t np_tx_mtu_drop_pkts; + uint64_t np_rx_mtu_drop_bytes; + uint64_t np_tx_mtu_drop_bytes; + uint64_t np_rx_mtr_drop_pkts; + uint64_t np_tx_mtr_drop_pkts; + uint64_t np_rx_mtr_drop_bytes; + uint64_t np_tx_mtr_drop_bytes; +}; + int zxdh_dev_set_link_up(struct rte_eth_dev *dev); int zxdh_dev_set_link_down(struct rte_eth_dev *dev); int32_t zxdh_dev_link_update(struct rte_eth_dev *dev, int32_t wait_to_complete __rte_unused); @@ -46,5 +71,7 @@ int zxdh_dev_rss_reta_query(struct rte_eth_dev *dev, int zxdh_rss_hash_update(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf); int zxdh_rss_hash_conf_get(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf); int zxdh_rss_configure(struct rte_eth_dev *dev); +int zxdh_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats); +int zxdh_dev_stats_reset(struct rte_eth_dev *dev); #endif /* ZXDH_ETHDEV_OPS_H */ diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index 45a9b10aa4..159c8c9c71 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -9,10 +9,16 @@ #include <ethdev_driver.h> +#include "zxdh_ethdev_ops.h" + #define ZXDH_BAR0_INDEX 0 #define ZXDH_CTRLCH_OFFSET (0x2000) #define ZXDH_MSG_CHAN_PFVFSHARE_OFFSET (ZXDH_CTRLCH_OFFSET + 0x1000) +#define ZXDH_MAC_OFFSET (0x24000) +#define ZXDH_MAC_STATS_OFFSET (0x1408) +#define ZXDH_MAC_BYTES_OFFSET (0xb000) + #define ZXDH_MSIX_INTR_MSG_VEC_BASE 1 #define ZXDH_MSIX_INTR_MSG_VEC_NUM 3 #define ZXDH_MSIX_INTR_DTB_VEC (ZXDH_MSIX_INTR_MSG_VEC_BASE + ZXDH_MSIX_INTR_MSG_VEC_NUM) @@ -173,7 +179,13 @@ enum pciebar_layout_type { /* riscv msg opcodes */ enum zxdh_agent_msg_type { + ZXDH_MAC_STATS_GET = 10, + ZXDH_MAC_STATS_RESET, ZXDH_MAC_LINK_GET = 14, + ZXDH_VQM_DEV_STATS_GET = 21, + ZXDH_VQM_DEV_STATS_RESET, + ZXDH_VQM_QUEUE_STATS_GET = 24, + ZXDH_VQM_QUEUE_STATS_RESET, }; enum zxdh_msg_type { @@ -195,6 +207,8 @@ enum zxdh_msg_type { ZXDH_PORT_ATTRS_SET = 25, ZXDH_PORT_PROMISC_SET = 26, + ZXDH_GET_NP_STATS = 31, + ZXDH_MSG_TYPE_END, }; @@ -322,6 +336,8 @@ struct zxdh_msg_reply_body { struct zxdh_link_info_msg link_msg; struct zxdh_rss_hf rss_hf; struct zxdh_rss_reta rss_reta; + struct zxdh_hw_vqm_stats vqm_stats; + struct zxdh_hw_np_stats np_stats; } __rte_packed; } __rte_packed; diff --git 
a/drivers/net/zxdh/zxdh_np.c b/drivers/net/zxdh/zxdh_np.c index 1f06539263..42679635f4 100644 --- a/drivers/net/zxdh/zxdh_np.c +++ b/drivers/net/zxdh/zxdh_np.c @@ -26,6 +26,7 @@ ZXDH_TLB_MGR_T *g_p_dpp_tlb_mgr[ZXDH_DEV_CHANNEL_MAX]; ZXDH_REG_T g_dpp_reg_info[4]; ZXDH_DTB_TABLE_T g_dpp_dtb_table_info[4]; ZXDH_SDT_TBL_DATA_T g_sdt_info[ZXDH_DEV_CHANNEL_MAX][ZXDH_DEV_SDT_ID_MAX]; +ZXDH_PPU_STAT_CFG_T g_ppu_stat_cfg; #define ZXDH_SDT_MGR_PTR_GET() (&g_sdt_mgr) #define ZXDH_SDT_SOFT_TBL_GET(id) (g_sdt_mgr.sdt_tbl_array[id]) @@ -117,6 +118,18 @@ do {\ #define ZXDH_COMM_CONVERT16(w_data) \ (((w_data) & 0xff) << 8) +#define ZXDH_DTB_TAB_UP_WR_INDEX_GET(DEV_ID, QUEUE_ID) \ + (p_dpp_dtb_mgr[(DEV_ID)]->queue_info[(QUEUE_ID)].tab_up.wr_index) + +#define ZXDH_DTB_TAB_UP_USER_PHY_ADDR_FLAG_GET(DEV_ID, QUEUE_ID, INDEX) \ + (p_dpp_dtb_mgr[(DEV_ID)]->queue_info[(QUEUE_ID)].tab_up.user_addr[(INDEX)].user_flag) + +#define ZXDH_DTB_TAB_UP_USER_PHY_ADDR_GET(DEV_ID, QUEUE_ID, INDEX) \ + (p_dpp_dtb_mgr[(DEV_ID)]->queue_info[(QUEUE_ID)].tab_up.user_addr[(INDEX)].phy_addr) + +#define ZXDH_DTB_TAB_UP_DATA_LEN_GET(DEV_ID, QUEUE_ID, INDEX) \ + (p_dpp_dtb_mgr[(DEV_ID)]->queue_info[(QUEUE_ID)].tab_up.data_len[(INDEX)]) + #define ZXDH_DTB_TAB_UP_VIR_ADDR_GET(DEV_ID, QUEUE_ID, INDEX) \ ((INDEX) * p_dpp_dtb_mgr[(DEV_ID)]->queue_info[(QUEUE_ID)].tab_up.item_size) @@ -1717,3 +1730,331 @@ zxdh_np_dtb_table_entry_get(uint32_t dev_id, return 0; } + +static void +zxdh_np_stat_cfg_soft_get(uint32_t dev_id, + ZXDH_PPU_STAT_CFG_T *p_stat_cfg) +{ + ZXDH_COMM_CHECK_DEV_POINT(dev_id, p_stat_cfg); + + p_stat_cfg->ddr_base_addr = g_ppu_stat_cfg.ddr_base_addr; + p_stat_cfg->eram_baddr = g_ppu_stat_cfg.eram_baddr; + p_stat_cfg->eram_depth = g_ppu_stat_cfg.eram_depth; + p_stat_cfg->ppu_addr_offset = g_ppu_stat_cfg.ppu_addr_offset; +} + +static uint32_t +zxdh_np_dtb_tab_up_info_set(uint32_t dev_id, + uint32_t queue_id, + uint32_t item_index, + uint32_t int_flag, + uint32_t data_len, + uint32_t desc_len, + uint32_t *p_desc_data) +{ + ZXDH_DTB_QUEUE_ITEM_INFO_T item_info = {0}; + uint32_t queue_en = 0; + uint32_t rc; + + zxdh_np_dtb_queue_enable_get(dev_id, queue_id, &queue_en); + if (!queue_en) { + PMD_DRV_LOG(ERR, "the queue %d is not enable!", queue_id); + return ZXDH_RC_DTB_QUEUE_NOT_ENABLE; + } + + if (ZXDH_DTB_QUEUE_INIT_FLAG_GET(dev_id, queue_id) == 0) { + PMD_DRV_LOG(ERR, "dtb queue %d is not init.", queue_id); + return ZXDH_RC_DTB_QUEUE_IS_NOT_INIT; + } + + if (desc_len % 4 != 0) + return ZXDH_RC_DTB_PARA_INVALID; + + zxdh_np_dtb_item_buff_wr(dev_id, queue_id, ZXDH_DTB_DIR_UP_TYPE, + item_index, 0, desc_len, p_desc_data); + + ZXDH_DTB_TAB_UP_DATA_LEN_GET(dev_id, queue_id, item_index) = data_len; + + item_info.cmd_vld = 1; + item_info.cmd_type = ZXDH_DTB_DIR_UP_TYPE; + item_info.int_en = int_flag; + item_info.data_len = desc_len / 4; + + if (zxdh_np_dev_get_dev_type(dev_id) == ZXDH_DEV_TYPE_SIM) + return 0; + + rc = zxdh_np_dtb_queue_item_info_set(dev_id, queue_id, &item_info); + + return rc; +} + +static uint32_t +zxdh_np_dtb_write_dump_desc_info(uint32_t dev_id, + uint32_t queue_id, + uint32_t queue_element_id, + uint32_t *p_dump_info, + uint32_t data_len, + uint32_t desc_len, + uint32_t *p_dump_data) +{ + uint32_t dtb_interrupt_status = 0; + uint32_t rc; + + ZXDH_COMM_CHECK_POINT(p_dump_data); + rc = zxdh_np_dtb_tab_up_info_set(dev_id, + queue_id, + queue_element_id, + dtb_interrupt_status, + data_len, + desc_len, + p_dump_info); + if (rc != 0) { + PMD_DRV_LOG(ERR, "the queue %d element id %d dump" + " info set failed!", queue_id, 
queue_element_id); + zxdh_np_dtb_item_ack_wr(dev_id, queue_id, ZXDH_DTB_DIR_UP_TYPE, + queue_element_id, 0, ZXDH_DTB_TAB_ACK_UNUSED_MASK); + } + + return rc; +} + +static uint32_t +zxdh_np_dtb_tab_up_free_item_get(uint32_t dev_id, + uint32_t queue_id, + uint32_t *p_item_index) +{ + uint32_t ack_vale = 0; + uint32_t item_index = 0; + uint32_t unused_item_num = 0; + uint32_t i; + + if (ZXDH_DTB_QUEUE_INIT_FLAG_GET(dev_id, queue_id) == 0) { + PMD_DRV_LOG(ERR, "dtb queue %d is not init.", queue_id); + return ZXDH_RC_DTB_QUEUE_IS_NOT_INIT; + } + + zxdh_np_dtb_queue_unused_item_num_get(dev_id, queue_id, &unused_item_num); + + if (unused_item_num == 0) + return ZXDH_RC_DTB_QUEUE_ITEM_HW_EMPTY; + + for (i = 0; i < ZXDH_DTB_QUEUE_ITEM_NUM_MAX; i++) { + item_index = ZXDH_DTB_TAB_UP_WR_INDEX_GET(dev_id, queue_id) % + ZXDH_DTB_QUEUE_ITEM_NUM_MAX; + + zxdh_np_dtb_item_ack_rd(dev_id, queue_id, ZXDH_DTB_DIR_UP_TYPE, item_index, + 0, &ack_vale); + + ZXDH_DTB_TAB_UP_WR_INDEX_GET(dev_id, queue_id)++; + + if ((ack_vale >> 8) == ZXDH_DTB_TAB_ACK_UNUSED_MASK) + break; + } + + if (i == ZXDH_DTB_QUEUE_ITEM_NUM_MAX) + return ZXDH_RC_DTB_QUEUE_ITEM_SW_EMPTY; + + zxdh_np_dtb_item_ack_wr(dev_id, queue_id, ZXDH_DTB_DIR_UP_TYPE, item_index, + 0, ZXDH_DTB_TAB_ACK_IS_USING_MASK); + + *p_item_index = item_index; + + + return 0; +} + +static uint32_t +zxdh_np_dtb_tab_up_item_addr_get(uint32_t dev_id, + uint32_t queue_id, + uint32_t item_index, + uint32_t *p_phy_haddr, + uint32_t *p_phy_laddr) +{ + uint32_t rc = 0; + uint64_t addr; + + if (ZXDH_DTB_QUEUE_INIT_FLAG_GET(dev_id, queue_id) == 0) { + PMD_DRV_LOG(ERR, "dtb queue %d is not init.", queue_id); + return ZXDH_RC_DTB_QUEUE_IS_NOT_INIT; + } + + if (ZXDH_DTB_TAB_UP_USER_PHY_ADDR_FLAG_GET(dev_id, queue_id, item_index) == + ZXDH_DTB_TAB_UP_USER_ADDR_TYPE) + addr = ZXDH_DTB_TAB_UP_USER_PHY_ADDR_GET(dev_id, queue_id, item_index); + else + addr = ZXDH_DTB_ITEM_ACK_SIZE; + + *p_phy_haddr = (addr >> 32) & 0xffffffff; + *p_phy_laddr = addr & 0xffffffff; + + return rc; +} + +static uint32_t +zxdh_np_dtb_se_smmu0_dma_dump(uint32_t dev_id, + uint32_t queue_id, + uint32_t base_addr, + uint32_t depth, + uint32_t *p_data, + uint32_t *element_id) +{ + uint8_t form_buff[ZXDH_DTB_TABLE_CMD_SIZE_BIT / 8] = {0}; + uint32_t dump_dst_phy_haddr = 0; + uint32_t dump_dst_phy_laddr = 0; + uint32_t queue_item_index = 0; + uint32_t data_len; + uint32_t desc_len; + uint32_t rc; + + rc = zxdh_np_dtb_tab_up_free_item_get(dev_id, queue_id, &queue_item_index); + if (rc != 0) { + PMD_DRV_LOG(ERR, "dpp_dtb_tab_up_free_item_get failed = %d!", base_addr); + return ZXDH_RC_DTB_QUEUE_ITEM_SW_EMPTY; + } + + *element_id = queue_item_index; + + rc = zxdh_np_dtb_tab_up_item_addr_get(dev_id, queue_id, queue_item_index, + &dump_dst_phy_haddr, &dump_dst_phy_laddr); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_dtb_tab_up_item_addr_get"); + + data_len = depth * 128 / 32; + desc_len = ZXDH_DTB_LEN_POS_SETP / 4; + + rc = zxdh_np_dtb_write_dump_desc_info(dev_id, queue_id, queue_item_index, + (uint32_t *)form_buff, data_len, desc_len, p_data); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_dtb_write_dump_desc_info"); + + return rc; +} + +static uint32_t +zxdh_np_dtb_se_smmu0_ind_read(uint32_t dev_id, + uint32_t queue_id, + uint32_t base_addr, + uint32_t index, + uint32_t rd_mode, + uint32_t *p_data) +{ + uint32_t temp_data[4] = {0}; + uint32_t element_id = 0; + uint32_t row_index = 0; + uint32_t col_index = 0; + uint32_t eram_dump_base_addr; + uint32_t rc; + + switch (rd_mode) { + case ZXDH_ERAM128_OPR_128b: + { + row_index = 
index; + break; + } + case ZXDH_ERAM128_OPR_64b: + { + row_index = (index >> 1); + col_index = index & 0x1; + break; + } + case ZXDH_ERAM128_OPR_1b: + { + row_index = (index >> 7); + col_index = index & 0x7F; + break; + } + } + + eram_dump_base_addr = base_addr + row_index; + rc = zxdh_np_dtb_se_smmu0_dma_dump(dev_id, + queue_id, + eram_dump_base_addr, + 1, + temp_data, + &element_id); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_np_dtb_se_smmu0_dma_dump"); + + switch (rd_mode) { + case ZXDH_ERAM128_OPR_128b: + { + memcpy(p_data, temp_data, (128 / 8)); + break; + } + + case ZXDH_ERAM128_OPR_64b: + { + memcpy(p_data, temp_data + ((1 - col_index) << 1), (64 / 8)); + break; + } + + case ZXDH_ERAM128_OPR_1b: + { + ZXDH_COMM_UINT32_GET_BITS(p_data[0], *(temp_data + + (3 - col_index / 32)), (col_index % 32), 1); + break; + } + } + + return rc; +} + +static uint32_t +zxdh_np_dtb_stat_smmu0_int_read(uint32_t dev_id, + uint32_t queue_id, + uint32_t smmu0_base_addr, + ZXDH_STAT_CNT_MODE_E rd_mode, + uint32_t index, + uint32_t *p_data) +{ + uint32_t eram_rd_mode; + uint32_t rc; + + ZXDH_COMM_CHECK_DEV_POINT(dev_id, p_data); + + if (rd_mode == ZXDH_STAT_128_MODE) + eram_rd_mode = ZXDH_ERAM128_OPR_128b; + else + eram_rd_mode = ZXDH_ERAM128_OPR_64b; + + rc = zxdh_np_dtb_se_smmu0_ind_read(dev_id, + queue_id, + smmu0_base_addr, + index, + eram_rd_mode, + p_data); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_np_dtb_se_smmu0_ind_read"); + + return rc; +} + +int +zxdh_np_dtb_stats_get(uint32_t dev_id, + uint32_t queue_id, + ZXDH_STAT_CNT_MODE_E rd_mode, + uint32_t index, + uint32_t *p_data) +{ + ZXDH_PPU_STAT_CFG_T stat_cfg = {0}; + uint32_t ppu_eram_baddr; + uint32_t ppu_eram_depth; + uint32_t rc = 0; + + ZXDH_COMM_CHECK_DEV_POINT(dev_id, p_data); + + memset(&stat_cfg, 0x0, sizeof(stat_cfg)); + + zxdh_np_stat_cfg_soft_get(dev_id, &stat_cfg); + + ppu_eram_depth = stat_cfg.eram_depth; + ppu_eram_baddr = stat_cfg.eram_baddr; + + if ((index >> (ZXDH_STAT_128_MODE - rd_mode)) < ppu_eram_depth) { + rc = zxdh_np_dtb_stat_smmu0_int_read(dev_id, + queue_id, + ppu_eram_baddr, + rd_mode, + index, + p_data); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_dtb_stat_smmu0_int_read"); + } + + return rc; +} diff --git a/drivers/net/zxdh/zxdh_np.h b/drivers/net/zxdh/zxdh_np.h index 19d1f03f59..7da29cf7bd 100644 --- a/drivers/net/zxdh/zxdh_np.h +++ b/drivers/net/zxdh/zxdh_np.h @@ -432,6 +432,18 @@ typedef enum zxdh_sdt_table_type_e { ZXDH_SDT_TBLT_MAX = 7, } ZXDH_SDT_TABLE_TYPE_E; +typedef enum zxdh_dtb_dir_type_e { + ZXDH_DTB_DIR_DOWN_TYPE = 0, + ZXDH_DTB_DIR_UP_TYPE = 1, + ZXDH_DTB_DIR_TYPE_MAX, +} ZXDH_DTB_DIR_TYPE_E; + +typedef enum zxdh_dtb_tab_up_user_addr_type_e { + ZXDH_DTB_TAB_UP_NOUSER_ADDR_TYPE = 0, + ZXDH_DTB_TAB_UP_USER_ADDR_TYPE = 1, + ZXDH_DTB_TAB_UP_USER_ADDR_TYPE_MAX, +} ZXDH_DTB_TAB_UP_USER_ADDR_TYPE_E; + typedef struct zxdh_dtb_lpm_entry_t { uint32_t dtb_len0; uint8_t *p_data_buff0; @@ -537,6 +549,19 @@ typedef struct zxdh_dtb_hash_entry_info_t { uint8_t *p_rst; } ZXDH_DTB_HASH_ENTRY_INFO_T; +typedef struct zxdh_ppu_stat_cfg_t { + uint32_t eram_baddr; + uint32_t eram_depth; + uint32_t ddr_base_addr; + uint32_t ppu_addr_offset; +} ZXDH_PPU_STAT_CFG_T; + +typedef enum zxdh_stat_cnt_mode_e { + ZXDH_STAT_64_MODE = 0, + ZXDH_STAT_128_MODE = 1, + ZXDH_STAT_MAX_MODE, +} ZXDH_STAT_CNT_MODE_E; + int zxdh_np_host_init(uint32_t dev_id, ZXDH_DEV_INIT_CTRL_T *p_dev_init_ctrl); int zxdh_np_online_uninit(uint32_t dev_id, char *port_name, uint32_t queue_id); int zxdh_np_dtb_table_entry_write(uint32_t dev_id, uint32_t queue_id, @@ -545,5 
+570,10 @@ int zxdh_np_dtb_table_entry_delete(uint32_t dev_id, uint32_t queue_id, uint32_t entrynum, ZXDH_DTB_USER_ENTRY_T *delete_entries); int zxdh_np_dtb_table_entry_get(uint32_t dev_id, uint32_t queue_id, ZXDH_DTB_USER_ENTRY_T *get_entry, uint32_t srh_mode); +int zxdh_np_dtb_stats_get(uint32_t dev_id, + uint32_t queue_id, + ZXDH_STAT_CNT_MODE_E rd_mode, + uint32_t index, + uint32_t *p_data); #endif /* ZXDH_NP_H */ diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h index 9343df81ac..deb0dd891a 100644 --- a/drivers/net/zxdh/zxdh_queue.h +++ b/drivers/net/zxdh/zxdh_queue.h @@ -53,6 +53,8 @@ enum { ZXDH_VTNET_RQ = 0, ZXDH_VTNET_TQ = 1 }; #define ZXDH_PI_HDR_SIZE sizeof(struct zxdh_pi_hdr) #define ZXDH_DL_NET_HDR_SIZE sizeof(struct zxdh_net_hdr_dl) #define ZXDH_UL_NET_HDR_SIZE sizeof(struct zxdh_net_hdr_ul) +#define ZXDH_PD_HDR_SIZE_MAX 256 +#define ZXDH_PD_HDR_SIZE_MIN ZXDH_TYPE_HDR_SIZE /* * ring descriptors: 16 bytes. diff --git a/drivers/net/zxdh/zxdh_rxtx.c b/drivers/net/zxdh/zxdh_rxtx.c index 0ffce50042..27a61d46dd 100644 --- a/drivers/net/zxdh/zxdh_rxtx.c +++ b/drivers/net/zxdh/zxdh_rxtx.c @@ -406,6 +406,40 @@ static inline void zxdh_enqueue_xmit_packed(struct zxdh_virtnet_tx *txvq, zxdh_queue_store_flags_packed(head_dp, head_flags, vq->hw->weak_barriers); } +static void +zxdh_update_packet_stats(struct zxdh_virtnet_stats *stats, struct rte_mbuf *mbuf) +{ + uint32_t s = mbuf->pkt_len; + struct rte_ether_addr *ea = NULL; + + stats->bytes += s; + + if (s == 64) { + stats->size_bins[1]++; + } else if (s > 64 && s < 1024) { + uint32_t bin; + + /* count zeros, and offset into correct bin */ + bin = (sizeof(s) * 8) - rte_clz32(s) - 5; + stats->size_bins[bin]++; + } else { + if (s < 64) + stats->size_bins[0]++; + else if (s < 1519) + stats->size_bins[6]++; + else + stats->size_bins[7]++; + } + + ea = rte_pktmbuf_mtod(mbuf, struct rte_ether_addr *); + if (rte_is_multicast_ether_addr(ea)) { + if (rte_is_broadcast_ether_addr(ea)) + stats->broadcast++; + else + stats->multicast++; + } +} + uint16_t zxdh_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { @@ -459,12 +493,19 @@ zxdh_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkt break; } } + if (txm->nb_segs > ZXDH_TX_MAX_SEGS) { + PMD_TX_LOG(ERR, "%d segs dropped", txm->nb_segs); + txvq->stats.truncated_err += nb_pkts - nb_tx; + break; + } /* Enqueue Packet buffers */ if (can_push) zxdh_enqueue_xmit_packed_fast(txvq, txm, in_order); else zxdh_enqueue_xmit_packed(txvq, txm, slots, use_indirect, in_order); + zxdh_update_packet_stats(&txvq->stats, txm); } + txvq->stats.packets += nb_tx; if (likely(nb_tx)) { if (unlikely(zxdh_queue_kick_prepare_packed(vq))) { zxdh_queue_notify(vq); @@ -474,9 +515,10 @@ zxdh_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkt return nb_tx; } -uint16_t zxdh_xmit_pkts_prepare(void *tx_queue __rte_unused, struct rte_mbuf **tx_pkts, +uint16_t zxdh_xmit_pkts_prepare(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { + struct zxdh_virtnet_tx *txvq = tx_queue; uint16_t nb_tx; for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) { @@ -496,6 +538,12 @@ uint16_t zxdh_xmit_pkts_prepare(void *tx_queue __rte_unused, struct rte_mbuf **t rte_errno = -error; break; } + if (m->nb_segs > ZXDH_TX_MAX_SEGS) { + PMD_TX_LOG(ERR, "%d segs dropped", m->nb_segs); + txvq->stats.truncated_err += nb_pkts - nb_tx; + rte_errno = ENOMEM; + break; + } } return nb_tx; } @@ -571,7 +619,7 @@ static int32_t zxdh_rx_update_mbuf(struct rte_mbuf *m, 
struct zxdh_net_hdr_ul *h return 0; } -static inline void zxdh_discard_rxbuf(struct zxdh_virtqueue *vq, struct rte_mbuf *m) +static void zxdh_discard_rxbuf(struct zxdh_virtqueue *vq, struct rte_mbuf *m) { int32_t error = 0; /* @@ -613,7 +661,13 @@ uint16_t zxdh_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts, for (i = 0; i < num; i++) { rxm = rcv_pkts[i]; - + if (unlikely(len[i] < ZXDH_UL_NET_HDR_SIZE)) { + nb_enqueued++; + PMD_RX_LOG(ERR, "RX, len:%u err", len[i]); + zxdh_discard_rxbuf(vq, rxm); + rxvq->stats.errors++; + continue; + } struct zxdh_net_hdr_ul *header = (struct zxdh_net_hdr_ul *)((char *)rxm->buf_addr + RTE_PKTMBUF_HEADROOM); @@ -623,8 +677,22 @@ uint16_t zxdh_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts, PMD_RX_LOG(ERR, "dequeue %d pkt, No.%d pkt seg_num is %d", num, i, seg_num); seg_num = 1; } + if (seg_num > ZXDH_RX_MAX_SEGS) { + PMD_RX_LOG(ERR, "dequeue %d pkt, No.%d pkt seg_num is %d", num, i, seg_num); + nb_enqueued++; + zxdh_discard_rxbuf(vq, rxm); + rxvq->stats.errors++; + continue; + } /* bit[0:6]-pd_len unit:2B */ uint16_t pd_len = header->type_hdr.pd_len << 1; + if (pd_len > ZXDH_PD_HDR_SIZE_MAX || pd_len < ZXDH_PD_HDR_SIZE_MIN) { + PMD_RX_LOG(ERR, "pd_len:%d is invalid", pd_len); + nb_enqueued++; + zxdh_discard_rxbuf(vq, rxm); + rxvq->stats.errors++; + continue; + } /* Private queue only handle type hdr */ hdr_size = pd_len; rxm->data_off = RTE_PKTMBUF_HEADROOM + hdr_size; @@ -639,6 +707,7 @@ uint16_t zxdh_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts, /* Update rte_mbuf according to pi/pd header */ if (zxdh_rx_update_mbuf(rxm, header) < 0) { zxdh_discard_rxbuf(vq, rxm); + rxvq->stats.errors++; continue; } seg_res = seg_num - 1; @@ -661,8 +730,11 @@ uint16_t zxdh_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts, PMD_RX_LOG(ERR, "dropped rcvd_pkt_len %d pktlen %d.", rcvd_pkt_len, rx_pkts[nb_rx]->pkt_len); zxdh_discard_rxbuf(vq, rx_pkts[nb_rx]); + rxvq->stats.errors++; + rxvq->stats.truncated_err++; continue; } + zxdh_update_packet_stats(&rxvq->stats, rx_pkts[nb_rx]); nb_rx++; } } @@ -675,6 +747,7 @@ uint16_t zxdh_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts, if (unlikely(rcv_cnt == 0)) { PMD_RX_LOG(ERR, "No enough segments for packet."); rte_pktmbuf_free(rx_pkts[nb_rx]); + rxvq->stats.errors++; break; } while (extra_idx < rcv_cnt) { @@ -694,11 +767,15 @@ uint16_t zxdh_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts, PMD_RX_LOG(ERR, "dropped rcvd_pkt_len %d pktlen %d.", rcvd_pkt_len, rx_pkts[nb_rx]->pkt_len); zxdh_discard_rxbuf(vq, rx_pkts[nb_rx]); + rxvq->stats.errors++; + rxvq->stats.truncated_err++; continue; } + zxdh_update_packet_stats(&rxvq->stats, rx_pkts[nb_rx]); nb_rx++; } } + rxvq->stats.packets += nb_rx; /* Allocate new mbuf for the used descriptor */ if (likely(!zxdh_queue_full(vq))) { diff --git a/drivers/net/zxdh/zxdh_tables.h b/drivers/net/zxdh/zxdh_tables.h index 7bac39375c..c7da40f294 100644 --- a/drivers/net/zxdh/zxdh_tables.h +++ b/drivers/net/zxdh/zxdh_tables.h @@ -11,6 +11,11 @@ #define ZXDH_PORT_BASE_QID_FLAG 10 #define ZXDH_PORT_ATTR_IS_UP_FLAG 35 +#define ZXDH_MTU_STATS_EGRESS_BASE 0x8481 +#define ZXDH_MTU_STATS_INGRESS_BASE 0x8981 +#define ZXDH_BROAD_STATS_EGRESS_BASE 0xC902 +#define ZXDH_BROAD_STATS_INGRESS_BASE 0xD102 + extern struct zxdh_dtb_shared_data g_dtb_data; struct zxdh_port_attr_table { -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 87119 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
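For reference, the per-queue and aggregate counters wired up in the stats patch above are reached through the generic ethdev stats API rather than any zxdh-specific entry point. Below is a minimal, hypothetical caller (not part of the patch; it assumes the port has already been configured and started, and dump_port_stats is an illustrative name):

#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>

static void
dump_port_stats(uint16_t port_id)
{
	struct rte_eth_stats stats;
	uint16_t q;

	if (rte_eth_stats_get(port_id, &stats) != 0) {
		printf("port %u: failed to read stats\n", port_id);
		return;
	}
	/* Aggregate counters come from the VQM block, plus MAC and NP
	 * drop counters when the port is a PF.
	 */
	printf("port %u: rx %" PRIu64 " pkts/%" PRIu64 " bytes, tx %" PRIu64
	       " pkts/%" PRIu64 " bytes\n", port_id, stats.ipackets,
	       stats.ibytes, stats.opackets, stats.obytes);
	printf("  imissed %" PRIu64 ", ierrors %" PRIu64 ", oerrors %" PRIu64
	       ", rx_nombuf %" PRIu64 "\n", stats.imissed, stats.ierrors,
	       stats.oerrors, stats.rx_nombuf);
	/* Per-queue counters are only maintained for the first
	 * RTE_ETHDEV_QUEUE_STAT_CNTRS queues, the same bound
	 * zxdh_dev_stats_get() applies when filling them.
	 */
	for (q = 0; q < RTE_ETHDEV_QUEUE_STAT_CNTRS; q++)
		printf("  q%u: rx %" PRIu64 ", tx %" PRIu64 ", err %" PRIu64 "\n",
		       q, stats.q_ipackets[q], stats.q_opackets[q],
		       stats.q_errors[q]);

	/* Clears the VQM counters and, on a PF, the MAC counters too. */
	rte_eth_stats_reset(port_id);
}

rte_eth_stats_reset() maps onto zxdh_dev_stats_reset() above, so a VF clears only its VQM counters while a PF also resets the MAC block.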
* [PATCH v6 15/15] net/zxdh: mtu update ops implementations 2024-12-26 3:37 ` [PATCH v6 " Junlong Wang ` (13 preceding siblings ...) 2024-12-26 3:37 ` [PATCH v6 14/15] net/zxdh: basic stats ops implementations Junlong Wang @ 2024-12-26 3:37 ` Junlong Wang 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-26 3:37 UTC (permalink / raw) To: stephen; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 33457 bytes --] mtu update ops implementations. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- doc/guides/nics/features/zxdh.ini | 1 + doc/guides/nics/zxdh.rst | 2 + drivers/net/zxdh/zxdh_common.c | 4 +- drivers/net/zxdh/zxdh_ethdev.c | 33 +++++++----- drivers/net/zxdh/zxdh_ethdev_ops.c | 87 ++++++++++++++++++++++++++---- drivers/net/zxdh/zxdh_ethdev_ops.h | 3 ++ drivers/net/zxdh/zxdh_msg.c | 36 ++++++------- drivers/net/zxdh/zxdh_np.c | 26 ++++----- drivers/net/zxdh/zxdh_pci.c | 4 +- drivers/net/zxdh/zxdh_queue.h | 20 +++---- drivers/net/zxdh/zxdh_rxtx.c | 8 +-- drivers/net/zxdh/zxdh_tables.c | 46 +++++++++++++++- drivers/net/zxdh/zxdh_tables.h | 4 ++ 13 files changed, 198 insertions(+), 76 deletions(-) diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index 98c141cf95..3561e31666 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -24,3 +24,4 @@ RSS reta update = Y Inner RSS = Y Basic stats = Y Stats per queue = Y +MTU update = Y diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index c8a52b587c..58e0c49a2e 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -33,6 +33,8 @@ Features of the ZXDH PMD are: - QINQ stripping and inserting - Receive Side Scaling (RSS) - Port hardware statistics +- MTU update +- Jumbo frames Driver compilation and testing diff --git a/drivers/net/zxdh/zxdh_common.c b/drivers/net/zxdh/zxdh_common.c index 75883a8897..3e9c044fe7 100644 --- a/drivers/net/zxdh/zxdh_common.c +++ b/drivers/net/zxdh/zxdh_common.c @@ -216,7 +216,7 @@ zxdh_get_res_info(struct zxdh_res_para *dev, uint8_t field, uint8_t *res, uint16 if (ret != ZXDH_BAR_MSG_OK) { PMD_DRV_LOG(ERR, - "send sync_msg failed. pcieid: 0x%x, ret: %d.", dev->pcie_id, ret); + "send sync_msg failed. pcieid: 0x%x, ret: %d", dev->pcie_id, ret); return ret; } struct zxdh_tbl_msg_reps_header *tbl_reps = @@ -224,7 +224,7 @@ zxdh_get_res_info(struct zxdh_res_para *dev, uint8_t field, uint8_t *res, uint16 if (tbl_reps->check != ZXDH_TBL_MSG_PRO_SUCCESS) { PMD_DRV_LOG(ERR, - "get resource_field failed. pcieid: 0x%x, ret: %d.", dev->pcie_id, ret); + "get resource_field failed. 
pcieid: 0x%x, ret: %d", dev->pcie_id, ret); return ret; } *len = tbl_reps->len; diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index d87ad15824..c177b1c6d7 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -64,6 +64,10 @@ zxdh_dev_infos_get(struct rte_eth_dev *dev, dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_256; dev_info->flow_type_rss_offloads = ZXDH_RSS_HF; + dev_info->max_mtu = ZXDH_MAX_RX_PKTLEN - RTE_ETHER_HDR_LEN - + RTE_VLAN_HLEN - ZXDH_DL_NET_HDR_SIZE; + dev_info->min_mtu = ZXDH_ETHER_MIN_MTU; + dev_info->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_MULTI_SEGS); dev_info->tx_offload_capa |= (RTE_ETH_TX_OFFLOAD_TCP_TSO | RTE_ETH_TX_OFFLOAD_UDP_TSO); @@ -255,7 +259,7 @@ zxdh_setup_risc_interrupts(struct rte_eth_dev *dev) uint8_t i; if (!hw->risc_intr) { - PMD_DRV_LOG(ERR, " to allocate risc_intr"); + PMD_DRV_LOG(ERR, "to allocate risc_intr"); hw->risc_intr = rte_zmalloc("risc_intr", ZXDH_MSIX_INTR_MSG_VEC_NUM * sizeof(struct rte_intr_handle), 0); if (hw->risc_intr == NULL) { @@ -821,7 +825,7 @@ zxdh_rss_qid_config(struct rte_eth_dev *dev) attr_msg->value = hw->channel_context[0].ph_chno & 0xfff; ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); if (ret) { - PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d ", + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", hw->vport.vport, ZXDH_PORT_BASE_QID_FLAG); return ret; } @@ -887,7 +891,7 @@ zxdh_dev_configure(struct rte_eth_dev *dev) if (nr_vq == hw->queue_num) goto end; - PMD_DRV_LOG(DEBUG, "queue changed need reset "); + PMD_DRV_LOG(DEBUG, "queue changed need reset"); /* Reset the device although not necessary at startup */ zxdh_pci_reset(hw); @@ -1030,13 +1034,13 @@ zxdh_dev_close(struct rte_eth_dev *dev) ret = zxdh_dev_stop(dev); if (ret != 0) { - PMD_DRV_LOG(ERR, " stop port %s failed.", dev->device->name); + PMD_DRV_LOG(ERR, "stop port %s failed", dev->device->name); return -1; } ret = zxdh_tables_uninit(dev); if (ret != 0) { - PMD_DRV_LOG(ERR, "%s :tables uninit %s failed ", __func__, dev->device->name); + PMD_DRV_LOG(ERR, "%s :tables uninit %s failed", __func__, dev->device->name); return -1; } @@ -1063,11 +1067,11 @@ zxdh_set_rxtx_funcs(struct rte_eth_dev *eth_dev) struct zxdh_hw *hw = eth_dev->data->dev_private; if (!zxdh_pci_packed_queue(hw)) { - PMD_DRV_LOG(ERR, " port %u not support packed queue", eth_dev->data->port_id); + PMD_DRV_LOG(ERR, "port %u not support packed queue", eth_dev->data->port_id); return -1; } if (!zxdh_pci_with_feature(hw, ZXDH_NET_F_MRG_RXBUF)) { - PMD_DRV_LOG(ERR, " port %u not support rx mergeable", eth_dev->data->port_id); + PMD_DRV_LOG(ERR, "port %u not support rx mergeable", eth_dev->data->port_id); return -1; } eth_dev->tx_pkt_prepare = zxdh_xmit_pkts_prepare; @@ -1134,7 +1138,7 @@ zxdh_dev_start(struct rte_eth_dev *dev) ret = zxdh_mac_config(hw->eth_dev); if (ret) - PMD_DRV_LOG(ERR, " mac config failed"); + PMD_DRV_LOG(ERR, "mac config failed"); for (i = 0; i < dev->data->nb_rx_queues; i++) dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED; @@ -1173,6 +1177,7 @@ static const struct eth_dev_ops zxdh_eth_dev_ops = { .rss_hash_conf_get = zxdh_rss_hash_conf_get, .stats_get = zxdh_dev_stats_get, .stats_reset = zxdh_dev_stats_reset, + .mtu_set = zxdh_dev_mtu_set, }; static int32_t @@ -1184,7 +1189,7 @@ zxdh_init_device(struct rte_eth_dev *eth_dev) ret = zxdh_read_pci_caps(pci_dev, hw); if (ret) { - PMD_DRV_LOG(ERR, "port 0x%x pci caps read failed .", hw->port_id); + PMD_DRV_LOG(ERR, "port 0x%x 
pci caps read failed", hw->port_id); goto err; } @@ -1397,14 +1402,14 @@ zxdh_np_init(struct rte_eth_dev *eth_dev) if (hw->is_pf) { ret = zxdh_np_dtb_res_init(eth_dev); if (ret) { - PMD_DRV_LOG(ERR, "np dtb init failed, ret:%d ", ret); + PMD_DRV_LOG(ERR, "np dtb init failed, ret:%d", ret); return ret; } } if (zxdh_shared_data != NULL) zxdh_shared_data->np_init_done = 1; - PMD_DRV_LOG(DEBUG, "np init ok "); + PMD_DRV_LOG(DEBUG, "np init ok"); return 0; } @@ -1421,7 +1426,7 @@ zxdh_tables_init(struct rte_eth_dev *dev) ret = zxdh_panel_table_init(dev); if (ret) { - PMD_DRV_LOG(ERR, " panel table init failed"); + PMD_DRV_LOG(ERR, "panel table init failed"); return ret; } @@ -1433,7 +1438,7 @@ zxdh_tables_init(struct rte_eth_dev *dev) ret = zxdh_vlan_filter_table_init(dev); if (ret) { - PMD_DRV_LOG(ERR, " vlan filter table init failed"); + PMD_DRV_LOG(ERR, "vlan filter table init failed"); return ret; } @@ -1461,7 +1466,7 @@ zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) memset(hw, 0, sizeof(*hw)); hw->bar_addr[0] = (uint64_t)pci_dev->mem_resource[0].addr; if (hw->bar_addr[0] == 0) { - PMD_DRV_LOG(ERR, "Bad mem resource."); + PMD_DRV_LOG(ERR, "Bad mem resource"); return -EIO; } diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.c b/drivers/net/zxdh/zxdh_ethdev_ops.c index 1b219bd26d..cb0211083f 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.c +++ b/drivers/net/zxdh/zxdh_ethdev_ops.c @@ -13,6 +13,7 @@ #include "zxdh_logs.h" #include "zxdh_rxtx.h" #include "zxdh_np.h" +#include "zxdh_queue.h" #define ZXDH_VLAN_FILTER_GROUPS 64 #define ZXDH_INVALID_LOGIC_QID 0xFFFFU @@ -136,7 +137,7 @@ static int32_t zxdh_config_port_status(struct rte_eth_dev *dev, uint16_t link_st if (hw->is_pf) { ret = zxdh_get_port_attr(hw->vfid, &port_attr); if (ret) { - PMD_DRV_LOG(ERR, "write port_attr failed"); + PMD_DRV_LOG(ERR, "get port_attr failed"); return -1; } port_attr.is_up = link_status; @@ -258,7 +259,7 @@ int32_t zxdh_dev_link_update(struct rte_eth_dev *dev, int32_t wait_to_complete _ ret = zxdh_link_info_get(dev, &link); if (ret != 0) { - PMD_DRV_LOG(ERR, " Failed to get link status from hw"); + PMD_DRV_LOG(ERR, "Failed to get link status from hw"); return ret; } link.link_status &= hw->admin_status; @@ -267,7 +268,7 @@ int32_t zxdh_dev_link_update(struct rte_eth_dev *dev, int32_t wait_to_complete _ ret = zxdh_config_port_status(dev, link.link_status); if (ret != 0) { - PMD_DRV_LOG(ERR, "set port attr %d failed.", link.link_status); + PMD_DRV_LOG(ERR, "set port attr %d failed", link.link_status); return ret; } return rte_eth_linkstatus_set(dev, &link); @@ -317,7 +318,7 @@ int zxdh_dev_mac_addr_set(struct rte_eth_dev *dev, struct rte_ether_addr *addr) zxdh_msg_head_build(hw, ZXDH_MAC_DEL, &msg_info); ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); if (ret) { - PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d ", + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", hw->vport.vport, ZXDH_MAC_DEL); return ret; } @@ -330,7 +331,7 @@ int zxdh_dev_mac_addr_set(struct rte_eth_dev *dev, struct rte_ether_addr *addr) zxdh_msg_head_build(hw, ZXDH_MAC_ADD, &msg_info); ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); if (ret) { - PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d ", + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", hw->vport.vport, ZXDH_MAC_ADD); return ret; } @@ -666,13 +667,13 @@ zxdh_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on) if (on) { if (dev->data->vlan_filter_conf.ids[idx] & (1ULL << 
bit_idx)) { - PMD_DRV_LOG(ERR, "vlan:%d has already added.", vlan_id); + PMD_DRV_LOG(ERR, "vlan:%d has already added", vlan_id); return 0; } msg_type = ZXDH_VLAN_FILTER_ADD; } else { if (!(dev->data->vlan_filter_conf.ids[idx] & (1ULL << bit_idx))) { - PMD_DRV_LOG(ERR, "vlan:%d has already deleted.", vlan_id); + PMD_DRV_LOG(ERR, "vlan:%d has already deleted", vlan_id); return 0; } msg_type = ZXDH_VLAN_FILTER_DEL; @@ -681,7 +682,7 @@ zxdh_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on) if (hw->is_pf) { ret = zxdh_vlan_filter_table_set(hw->vport.vport, vlan_id, on); if (ret) { - PMD_DRV_LOG(ERR, "vlan_id:%d table set failed.", vlan_id); + PMD_DRV_LOG(ERR, "vlan_id:%d table set failed", vlan_id); return -1; } } else { @@ -690,7 +691,7 @@ zxdh_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on) msg.data.vlan_filter_msg.vlan_id = vlan_id; ret = zxdh_vf_send_msg_to_pf(dev, &msg, sizeof(struct zxdh_msg_info), NULL, 0); if (ret) { - PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d ", + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", hw->vport.vport, msg_type); return ret; } @@ -1399,8 +1400,8 @@ zxdh_hw_np_stats_get(struct rte_eth_dev *dev, struct zxdh_hw_np_stats *np_stats &reply_info, sizeof(struct zxdh_msg_reply_info)); if (ret) { PMD_DRV_LOG(ERR, - "Failed to send msg: port 0x%x msg type ZXDH_PORT_METER_STAT_GET", - hw->vport.vport); + "%s Failed to send msg: port 0x%x msg type", + __func__, hw->vport.vport); return -1; } memcpy(np_stats, &reply_info.reply_body.np_stats, sizeof(struct zxdh_hw_np_stats)); @@ -1506,3 +1507,67 @@ int zxdh_dev_stats_reset(struct rte_eth_dev *dev) return 0; } + +int zxdh_dev_mtu_set(struct rte_eth_dev *dev, uint16_t new_mtu) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_panel_table panel = {0}; + struct zxdh_port_attr_table vport_att = {0}; + uint16_t vfid = zxdh_vport_to_vfid(hw->vport); + int ret; + + if (hw->is_pf) { + ret = zxdh_get_panel_attr(dev, &panel); + if (ret != 0) { + PMD_DRV_LOG(ERR, "get_panel_attr failed ret:%d", ret); + return ret; + } + + ret = zxdh_get_port_attr(vfid, &vport_att); + if (ret != 0) { + PMD_DRV_LOG(ERR, + "[vfid:%d] zxdh_dev_mtu, get vport failed ret:%d", vfid, ret); + return ret; + } + + panel.mtu = new_mtu; + panel.mtu_enable = 1; + ret = zxdh_set_panel_attr(dev, &panel); + if (ret != 0) { + PMD_DRV_LOG(ERR, "set zxdh_dev_mtu failed, ret:%u", ret); + return ret; + } + + vport_att.mtu_enable = 1; + vport_att.mtu = new_mtu; + ret = zxdh_set_port_attr(vfid, &vport_att); + if (ret != 0) { + PMD_DRV_LOG(ERR, + "[vfid:%d] zxdh_dev_mtu, set vport failed ret:%d", vfid, ret); + return ret; + } + } else { + struct zxdh_msg_info msg_info = {0}; + struct zxdh_port_attr_set_msg *attr_msg = &msg_info.data.port_attr_msg; + + zxdh_msg_head_build(hw, ZXDH_PORT_ATTRS_SET, &msg_info); + attr_msg->mode = ZXDH_PORT_MTU_EN_FLAG; + attr_msg->value = 1; + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_PORT_MTU_EN_FLAG); + return ret; + } + attr_msg->mode = ZXDH_PORT_MTU_FLAG; + attr_msg->value = new_mtu; + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_PORT_MTU_FLAG); + return ret; + } + } + dev->data->mtu = new_mtu; + return 0; +} diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.h b/drivers/net/zxdh/zxdh_ethdev_ops.h index 
f35378e691..fac6cbd5e8 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.h +++ b/drivers/net/zxdh/zxdh_ethdev_ops.h @@ -26,6 +26,8 @@ #define ZXDH_HF_MAC_VLAN_ETH ZXDH_ETH_RSS_L2 #define ZXDH_RSS_HF ((ZXDH_HF_MAC_VLAN_ETH | ZXDH_HF_F3_ETH | ZXDH_HF_F5_ETH)) +#define ZXDH_ETHER_MIN_MTU 68 + struct zxdh_hw_vqm_stats { uint64_t rx_total; uint64_t tx_total; @@ -73,5 +75,6 @@ int zxdh_rss_hash_conf_get(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss int zxdh_rss_configure(struct rte_eth_dev *dev); int zxdh_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats); int zxdh_dev_stats_reset(struct rte_eth_dev *dev); +int zxdh_dev_mtu_set(struct rte_eth_dev *dev, uint16_t new_mtu); #endif /* ZXDH_ETHDEV_OPS_H */ diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c index a6e19bbdd8..9471c74fac 100644 --- a/drivers/net/zxdh/zxdh_msg.c +++ b/drivers/net/zxdh/zxdh_msg.c @@ -417,30 +417,30 @@ zxdh_bar_chan_send_para_check(struct zxdh_pci_bar_msg *in, uint8_t dst_index = 0; if (in == NULL || result == NULL) { - PMD_MSG_LOG(ERR, "send para ERR: null para."); + PMD_MSG_LOG(ERR, "send para ERR: null para"); return ZXDH_BAR_MSG_ERR_NULL_PARA; } src_index = zxdh_bar_msg_src_index_trans(in->src); dst_index = zxdh_bar_msg_dst_index_trans(in->dst); if (src_index == ZXDH_BAR_MSG_SRC_ERR || dst_index == ZXDH_BAR_MSG_DST_ERR) { - PMD_MSG_LOG(ERR, "send para ERR: chan doesn't exist."); + PMD_MSG_LOG(ERR, "send para ERR: chan doesn't exist"); return ZXDH_BAR_MSG_ERR_TYPE; } if (in->module_id >= ZXDH_BAR_MSG_MODULE_NUM) { - PMD_MSG_LOG(ERR, "send para ERR: invalid module_id: %d.", in->module_id); + PMD_MSG_LOG(ERR, "send para ERR: invalid module_id: %d", in->module_id); return ZXDH_BAR_MSG_ERR_MODULE; } if (in->payload_addr == NULL) { - PMD_MSG_LOG(ERR, "send para ERR: null message."); + PMD_MSG_LOG(ERR, "send para ERR: null message"); return ZXDH_BAR_MSG_ERR_BODY_NULL; } if (in->payload_len > ZXDH_BAR_MSG_PAYLOAD_MAX_LEN) { - PMD_MSG_LOG(ERR, "send para ERR: len %d is too long.", in->payload_len); + PMD_MSG_LOG(ERR, "send para ERR: len %d is too long", in->payload_len); return ZXDH_BAR_MSG_ERR_LEN; } if (in->virt_addr == 0 || result->recv_buffer == NULL) { - PMD_MSG_LOG(ERR, "send para ERR: virt_addr or recv_buffer is NULL."); + PMD_MSG_LOG(ERR, "send para ERR: virt_addr or recv_buffer is NULL"); return ZXDH_BAR_MSG_ERR_VIRTADDR_NULL; } if (result->buffer_len < ZXDH_REPS_HEADER_PAYLOAD_OFFSET) @@ -508,13 +508,13 @@ zxdh_bar_chan_lock(uint8_t src, uint8_t dst, uint16_t src_pcieid, uint64_t virt_ uint8_t dst_index = zxdh_bar_msg_dst_index_trans(dst); if (src_index == ZXDH_BAR_MSG_SRC_ERR || dst_index == ZXDH_BAR_MSG_DST_ERR) { - PMD_MSG_LOG(ERR, "lock ERR: chan doesn't exist."); + PMD_MSG_LOG(ERR, "lock ERR: chan doesn't exist"); return ZXDH_BAR_MSG_ERR_TYPE; } ret = zxdh_bar_hard_lock(src_pcieid, dst, virt_addr); if (ret != 0) - PMD_MSG_LOG(ERR, "dev: 0x%x failed to lock.", src_pcieid); + PMD_MSG_LOG(ERR, "dev: 0x%x failed to lock", src_pcieid); return ret; } @@ -526,7 +526,7 @@ zxdh_bar_chan_unlock(uint8_t src, uint8_t dst, uint16_t src_pcieid, uint64_t vir uint8_t dst_index = zxdh_bar_msg_dst_index_trans(dst); if (src_index == ZXDH_BAR_MSG_SRC_ERR || dst_index == ZXDH_BAR_MSG_DST_ERR) { - PMD_MSG_LOG(ERR, "unlock ERR: chan doesn't exist."); + PMD_MSG_LOG(ERR, "unlock ERR: chan doesn't exist"); return ZXDH_BAR_MSG_ERR_TYPE; } @@ -775,7 +775,7 @@ zxdh_bar_chan_sync_msg_send(struct zxdh_pci_bar_msg *in, struct zxdh_msg_recvive if (time_out_cnt == ZXDH_BAR_MSG_TIMEOUT_TH && valid != 
ZXDH_BAR_MSG_CHAN_USABLE) { zxdh_bar_chan_msg_valid_set(subchan_addr, ZXDH_BAR_MSG_CHAN_USABLE); zxdh_bar_chan_msg_poltag_set(subchan_addr, 0); - PMD_MSG_LOG(ERR, "BAR MSG ERR: chan type time out."); + PMD_MSG_LOG(ERR, "BAR MSG ERR: chan type time out"); ret = ZXDH_BAR_MSG_ERR_TIME_OUT; } else { ret = zxdh_bar_chan_sync_msg_reps_get(subchan_addr, @@ -843,11 +843,11 @@ zxdh_bar_chan_enable(struct zxdh_msix_para *para, uint16_t *vport) sum_res = zxdh_bar_get_sum((uint8_t *)&msix_msg, sizeof(msix_msg)); if (check_token != sum_res) { - PMD_MSG_LOG(ERR, "expect token: 0x%x, get token: 0x%x.", sum_res, check_token); + PMD_MSG_LOG(ERR, "expect token: 0x%x, get token: 0x%x", sum_res, check_token); return ZXDH_BAR_MSG_ERR_REPLY; } *vport = recv_msg.msix_reps.vport; - PMD_MSG_LOG(DEBUG, "vport of pcieid: 0x%x get success.", para->pcie_id); + PMD_MSG_LOG(DEBUG, "vport of pcieid: 0x%x get success", para->pcie_id); return ZXDH_BAR_MSG_OK; } @@ -967,19 +967,19 @@ zxdh_bar_chan_msg_header_check(struct zxdh_bar_msg_header *msg_header) uint8_t module_id = 0; if (msg_header->valid != ZXDH_BAR_MSG_CHAN_USED) { - PMD_MSG_LOG(ERR, "recv header ERR: valid label is not used."); + PMD_MSG_LOG(ERR, "recv header ERR: valid label is not used"); return ZXDH_BAR_MSG_ERR_MODULE; } module_id = msg_header->module_id; if (module_id >= (uint8_t)ZXDH_BAR_MSG_MODULE_NUM) { - PMD_MSG_LOG(ERR, "recv header ERR: invalid module_id: %u.", module_id); + PMD_MSG_LOG(ERR, "recv header ERR: invalid module_id: %u", module_id); return ZXDH_BAR_MSG_ERR_MODULE; } len = msg_header->len; if (len > ZXDH_BAR_MSG_PAYLOAD_MAX_LEN) { - PMD_MSG_LOG(ERR, "recv header ERR: invalid mesg len: %u.", len); + PMD_MSG_LOG(ERR, "recv header ERR: invalid mesg len: %u", len); return ZXDH_BAR_MSG_ERR_LEN; } if (msg_recv_func_tbl[msg_header->module_id] == NULL) { @@ -1001,7 +1001,7 @@ zxdh_bar_irq_recv(uint8_t src, uint8_t dst, uint64_t virt_addr, void *dev) recv_addr = zxdh_recv_addr_get(src, dst, virt_addr); if (recv_addr == 0) { - PMD_MSG_LOG(ERR, "invalid driver type(src:%u, dst:%u).", src, dst); + PMD_MSG_LOG(ERR, "invalid driver type(src:%u, dst:%u)", src, dst); return -1; } @@ -1009,13 +1009,13 @@ zxdh_bar_irq_recv(uint8_t src, uint8_t dst, uint64_t virt_addr, void *dev) ret = zxdh_bar_chan_msg_header_check(&msg_header); if (ret != ZXDH_BAR_MSG_OK) { - PMD_MSG_LOG(ERR, "recv msg_head err, ret: %u.", ret); + PMD_MSG_LOG(ERR, "recv msg_head err, ret: %u", ret); return -1; } recved_msg = rte_malloc(NULL, msg_header.len, 0); if (recved_msg == NULL) { - PMD_MSG_LOG(ERR, "malloc temp buff failed."); + PMD_MSG_LOG(ERR, "malloc temp buff failed"); return -1; } zxdh_bar_chan_msg_payload_get(recv_addr, recved_msg, msg_header.len); diff --git a/drivers/net/zxdh/zxdh_np.c b/drivers/net/zxdh/zxdh_np.c index 42679635f4..74c0fc3ad7 100644 --- a/drivers/net/zxdh/zxdh_np.c +++ b/drivers/net/zxdh/zxdh_np.c @@ -188,7 +188,7 @@ static uint32_t zxdh_np_dev_init(void) { if (g_dev_mgr.is_init) { - PMD_DRV_LOG(ERR, "Dev is already initialized."); + PMD_DRV_LOG(ERR, "Dev is already initialized"); return 0; } @@ -209,14 +209,14 @@ zxdh_np_dev_add(uint32_t dev_id, ZXDH_DEV_TYPE_E dev_type, p_dev_mgr = &g_dev_mgr; if (!p_dev_mgr->is_init) { - PMD_DRV_LOG(ERR, "ErrorCode[ 0x%x]: Device Manager is not init!!!", + PMD_DRV_LOG(ERR, "ErrorCode[ 0x%x]: Device Manager is not init!", ZXDH_RC_DEV_MGR_NOT_INIT); return ZXDH_RC_DEV_MGR_NOT_INIT; } if (p_dev_mgr->p_dev_array[dev_id] != NULL) { /* device is already exist. 
*/ - PMD_DRV_LOG(ERR, "Device is added again!!!"); + PMD_DRV_LOG(ERR, "Device is added again!"); p_dev_info = p_dev_mgr->p_dev_array[dev_id]; } else { /* device is new. */ @@ -907,7 +907,7 @@ zxdh_np_online_uninit(uint32_t dev_id, rc = zxdh_np_dtb_queue_release(dev_id, port_name, queue_id); if (rc != 0) PMD_DRV_LOG(ERR, "%s:dtb release error," - "port name %s queue id %d. ", __func__, port_name, queue_id); + "port name %s queue id %d", __func__, port_name, queue_id); zxdh_np_dtb_mgr_destroy(dev_id); zxdh_np_tlb_mgr_destroy(dev_id); @@ -1030,7 +1030,7 @@ zxdh_np_dtb_se_smmu0_ind_write(uint32_t dev_id, case ZXDH_ERAM128_OPR_128b: { if ((0xFFFFFFFF - (base_addr)) < (index)) { - PMD_DRV_LOG(ERR, "ICM %s:%d[Error:VALUE[val0=0x%x]" + PMD_DRV_LOG(ERR, "ICM %s:%d[Error:VALUE[val0=0x%x]" "INVALID] [val1=0x%x] ! FUNCTION :%s !", __FILE__, __LINE__, base_addr, index, __func__); @@ -1067,7 +1067,7 @@ zxdh_np_dtb_se_smmu0_ind_write(uint32_t dev_id, dtb_ind_addr = ((base_addr << 7) & ZXDH_ERAM128_BADDR_MASK) + temp_idx; - PMD_DRV_LOG(INFO, " dtb eram item 1bit addr: 0x%x", dtb_ind_addr); + PMD_DRV_LOG(INFO, "dtb eram item 1bit addr: 0x%x", dtb_ind_addr); rc = zxdh_np_dtb_smmu0_write_entry_data(dev_id, wrt_mode, @@ -1314,7 +1314,7 @@ zxdh_np_dtb_tab_down_info_set(uint32_t dev_id, uint32_t rc; if (ZXDH_DTB_QUEUE_INIT_FLAG_GET(dev_id, queue_id) == 0) { - PMD_DRV_LOG(ERR, "dtb queue %d is not init.", queue_id); + PMD_DRV_LOG(ERR, "dtb queue %d is not init", queue_id); return ZXDH_RC_DTB_QUEUE_IS_NOT_INIT; } @@ -1447,7 +1447,7 @@ zxdh_np_dtb_table_entry_write(uint32_t dev_id, if (dtb_len > max_size) { rte_free(p_data_buff); rte_free(p_data_buff_ex); - PMD_DRV_LOG(ERR, " %s error dtb_len>%u!", __func__, + PMD_DRV_LOG(ERR, "%s error dtb_len > %u!", __func__, max_size); return ZXDH_RC_DTB_DOWN_LEN_INVALID; } @@ -1542,7 +1542,7 @@ zxdh_np_dtb_table_entry_delete(uint32_t dev_id, if (dtb_len > max_size) { rte_free(p_data_buff); rte_free(p_data_buff_ex); - PMD_DRV_LOG(ERR, " %s error dtb_len>%u!", __func__, + PMD_DRV_LOG(ERR, "%s error dtb_len>%u!", __func__, max_size); return ZXDH_RC_DTB_DOWN_LEN_INVALID; } @@ -1620,7 +1620,7 @@ zxdh_np_soft_sdt_tbl_get(uint32_t dev_id, uint32_t sdt_no, void *p_sdt_info) rc = zxdh_np_sdt_tbl_data_parser(sdt_tbl.data_high32, sdt_tbl.data_low32, p_sdt_info); if (rc != 0) - PMD_DRV_LOG(ERR, "dpp sdt [%d] tbl_data_parser error.", sdt_no); + PMD_DRV_LOG(ERR, "dpp sdt [%d] tbl_data_parser error", sdt_no); return rc; } @@ -1763,7 +1763,7 @@ zxdh_np_dtb_tab_up_info_set(uint32_t dev_id, } if (ZXDH_DTB_QUEUE_INIT_FLAG_GET(dev_id, queue_id) == 0) { - PMD_DRV_LOG(ERR, "dtb queue %d is not init.", queue_id); + PMD_DRV_LOG(ERR, "dtb queue %d is not init", queue_id); return ZXDH_RC_DTB_QUEUE_IS_NOT_INIT; } @@ -1829,7 +1829,7 @@ zxdh_np_dtb_tab_up_free_item_get(uint32_t dev_id, uint32_t i; if (ZXDH_DTB_QUEUE_INIT_FLAG_GET(dev_id, queue_id) == 0) { - PMD_DRV_LOG(ERR, "dtb queue %d is not init.", queue_id); + PMD_DRV_LOG(ERR, "dtb queue %d is not init", queue_id); return ZXDH_RC_DTB_QUEUE_IS_NOT_INIT; } @@ -1874,7 +1874,7 @@ zxdh_np_dtb_tab_up_item_addr_get(uint32_t dev_id, uint64_t addr; if (ZXDH_DTB_QUEUE_INIT_FLAG_GET(dev_id, queue_id) == 0) { - PMD_DRV_LOG(ERR, "dtb queue %d is not init.", queue_id); + PMD_DRV_LOG(ERR, "dtb queue %d is not init", queue_id); return ZXDH_RC_DTB_QUEUE_IS_NOT_INIT; } diff --git a/drivers/net/zxdh/zxdh_pci.c b/drivers/net/zxdh/zxdh_pci.c index 6b2c4482b2..959b1b8cff 100644 --- a/drivers/net/zxdh/zxdh_pci.c +++ b/drivers/net/zxdh/zxdh_pci.c @@ -254,7 +254,7 @@ 
zxdh_pci_get_features(struct zxdh_hw *hw) void zxdh_pci_reset(struct zxdh_hw *hw) { - PMD_DRV_LOG(INFO, "port %u device start reset, just wait...", hw->port_id); + PMD_DRV_LOG(INFO, "port %u device start reset, just wait", hw->port_id); uint32_t retry = 0; ZXDH_VTPCI_OPS(hw)->set_status(hw, ZXDH_CONFIG_STATUS_RESET); @@ -379,7 +379,7 @@ zxdh_read_pci_caps(struct rte_pci_device *dev, struct zxdh_hw *hw) } if (hw->common_cfg == NULL || hw->notify_base == NULL || hw->dev_cfg == NULL || hw->isr == NULL) { - PMD_DRV_LOG(ERR, "no zxdh pci device found."); + PMD_DRV_LOG(ERR, "no zxdh pci device found"); return -1; } return 0; diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h index deb0dd891a..1a1994d965 100644 --- a/drivers/net/zxdh/zxdh_queue.h +++ b/drivers/net/zxdh/zxdh_queue.h @@ -65,7 +65,7 @@ struct zxdh_vring_desc { uint32_t len; /* Length. */ uint16_t flags; /* The flags as indicated above. */ uint16_t next; /* We chain unused descriptors via this. */ -} __rte_packed; +}; struct zxdh_vring_used_elem { /* Index of start of used descriptor chain. */ @@ -78,38 +78,38 @@ struct zxdh_vring_used { uint16_t flags; uint16_t idx; struct zxdh_vring_used_elem ring[]; -} __rte_packed; +}; struct zxdh_vring_avail { uint16_t flags; uint16_t idx; uint16_t ring[]; -} __rte_packed; +}; struct zxdh_vring_packed_desc { uint64_t addr; uint32_t len; uint16_t id; uint16_t flags; -} __rte_packed; +}; struct zxdh_vring_packed_desc_event { uint16_t desc_event_off_wrap; uint16_t desc_event_flags; -} __rte_packed; +}; struct zxdh_vring_packed { uint32_t num; struct zxdh_vring_packed_desc *desc; struct zxdh_vring_packed_desc_event *driver; struct zxdh_vring_packed_desc_event *device; -} __rte_packed; +}; struct zxdh_vq_desc_extra { void *cookie; uint16_t ndescs; uint16_t next; -} __rte_packed; +}; struct zxdh_virtqueue { struct zxdh_hw *hw; /* < zxdh_hw structure pointer. */ @@ -121,7 +121,7 @@ struct zxdh_virtqueue { uint16_t cached_flags; /* < cached flags for descs */ uint16_t event_flags_shadow; uint16_t rsv1; - } __rte_packed vq_packed; + } vq_packed; uint16_t vq_used_cons_idx; /* < last consumed descriptor */ uint16_t vq_nentries; /* < vring desc numbers */ uint16_t vq_free_cnt; /* < num of desc available */ @@ -154,7 +154,7 @@ struct zxdh_virtqueue { uint16_t *notify_addr; struct rte_mbuf **sw_ring; /* < RX software ring. 
*/ struct zxdh_vq_desc_extra vq_descx[]; -} __rte_packed; +}; struct zxdh_type_hdr { uint8_t port; /* bit[0:1] 00-np 01-DRS 10-DTP */ @@ -227,7 +227,7 @@ struct zxdh_tx_region { union { struct zxdh_vring_desc tx_indir[ZXDH_MAX_TX_INDIRECT]; struct zxdh_vring_packed_desc tx_packed_indir[ZXDH_MAX_TX_INDIRECT]; - } __rte_packed; + }; }; static inline size_t diff --git a/drivers/net/zxdh/zxdh_rxtx.c b/drivers/net/zxdh/zxdh_rxtx.c index 27a61d46dd..6353d496f2 100644 --- a/drivers/net/zxdh/zxdh_rxtx.c +++ b/drivers/net/zxdh/zxdh_rxtx.c @@ -494,7 +494,7 @@ zxdh_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkt } } if (txm->nb_segs > ZXDH_TX_MAX_SEGS) { - PMD_TX_LOG(ERR, "%d segs dropped", txm->nb_segs); + PMD_TX_LOG(ERR, "%d segs dropped", txm->nb_segs); txvq->stats.truncated_err += nb_pkts - nb_tx; break; } @@ -727,7 +727,7 @@ uint16_t zxdh_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts, if (!seg_res) { if (rcvd_pkt_len != rx_pkts[nb_rx]->pkt_len) { - PMD_RX_LOG(ERR, "dropped rcvd_pkt_len %d pktlen %d.", + PMD_RX_LOG(ERR, "dropped rcvd_pkt_len %d pktlen %d", rcvd_pkt_len, rx_pkts[nb_rx]->pkt_len); zxdh_discard_rxbuf(vq, rx_pkts[nb_rx]); rxvq->stats.errors++; @@ -745,7 +745,7 @@ uint16_t zxdh_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts, rcv_cnt = zxdh_dequeue_burst_rx_packed(vq, rcv_pkts, len, rcv_cnt); if (unlikely(rcv_cnt == 0)) { - PMD_RX_LOG(ERR, "No enough segments for packet."); + PMD_RX_LOG(ERR, "No enough segments for packet"); rte_pktmbuf_free(rx_pkts[nb_rx]); rxvq->stats.errors++; break; @@ -764,7 +764,7 @@ uint16_t zxdh_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts, seg_res -= rcv_cnt; if (!seg_res) { if (rcvd_pkt_len != rx_pkts[nb_rx]->pkt_len) { - PMD_RX_LOG(ERR, "dropped rcvd_pkt_len %d pktlen %d.", + PMD_RX_LOG(ERR, "dropped rcvd_pkt_len %d pktlen %d", rcvd_pkt_len, rx_pkts[nb_rx]->pkt_len); zxdh_discard_rxbuf(vq, rx_pkts[nb_rx]); rxvq->stats.errors++; diff --git a/drivers/net/zxdh/zxdh_tables.c b/drivers/net/zxdh/zxdh_tables.c index 2939d9ae8b..fe228e2227 100644 --- a/drivers/net/zxdh/zxdh_tables.c +++ b/drivers/net/zxdh/zxdh_tables.c @@ -150,6 +150,48 @@ int zxdh_panel_table_init(struct rte_eth_dev *dev) return ret; } +int zxdh_get_panel_attr(struct rte_eth_dev *dev, struct zxdh_panel_table *panel_attr) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint8_t index_phy_port = hw->phyport; + + ZXDH_DTB_ERAM_ENTRY_INFO_T panel_entry = { + .index = index_phy_port, + .p_data = (uint32_t *)panel_attr + }; + ZXDH_DTB_USER_ENTRY_T entry = { + .sdt_no = ZXDH_SDT_PANEL_ATT_TABLE, + .p_entry_data = (void *)&panel_entry + }; + int ret = zxdh_np_dtb_table_entry_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, &entry, 1); + + if (ret != 0) + PMD_DRV_LOG(ERR, "get panel table failed"); + + return ret; +} + +int zxdh_set_panel_attr(struct rte_eth_dev *dev, struct zxdh_panel_table *panel_attr) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint8_t index_phy_port = hw->phyport; + + ZXDH_DTB_ERAM_ENTRY_INFO_T panel_entry = { + .index = index_phy_port, + .p_data = (uint32_t *)panel_attr + }; + ZXDH_DTB_USER_ENTRY_T entry = { + .sdt_no = ZXDH_SDT_PANEL_ATT_TABLE, + .p_entry_data = (void *)&panel_entry + }; + int ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, g_dtb_data.queueid, 1, &entry); + + if (ret) + PMD_DRV_LOG(ERR, "Insert panel table failed"); + + return ret; +} + int zxdh_get_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr) { @@ -160,7 +202,7 @@ zxdh_get_port_attr(uint16_t vfid, struct zxdh_port_attr_table 
*port_attr) ret = zxdh_np_dtb_table_entry_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, &user_entry_get, 1); if (ret != 0) - PMD_DRV_LOG(ERR, "get port_attr vfid:%d failed, ret:%d ", vfid, ret); + PMD_DRV_LOG(ERR, "get port_attr vfid:%d failed, ret:%d", vfid, ret); return ret; } @@ -529,7 +571,7 @@ zxdh_dev_unicast_table_set(struct zxdh_hw *hw, uint16_t vport, bool enable) ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, g_dtb_data.queueid, 1, &entry); if (ret) { - PMD_DRV_LOG(ERR, "unicast_table_set_failed:%d", hw->vfid); + PMD_DRV_LOG(ERR, "unicast_table_set_failed:%d", hw->vfid); return -ret; } return 0; diff --git a/drivers/net/zxdh/zxdh_tables.h b/drivers/net/zxdh/zxdh_tables.h index c7da40f294..adedf3d0d3 100644 --- a/drivers/net/zxdh/zxdh_tables.h +++ b/drivers/net/zxdh/zxdh_tables.h @@ -8,8 +8,10 @@ #include <stdint.h> #define ZXDH_DEVICE_NO 0 +#define ZXDH_PORT_MTU_FLAG 9 #define ZXDH_PORT_BASE_QID_FLAG 10 #define ZXDH_PORT_ATTR_IS_UP_FLAG 35 +#define ZXDH_PORT_MTU_EN_FLAG 42 #define ZXDH_MTU_STATS_EGRESS_BASE 0x8481 #define ZXDH_MTU_STATS_INGRESS_BASE 0x8981 @@ -223,5 +225,7 @@ int zxdh_vlan_filter_table_init(struct rte_eth_dev *dev); int zxdh_vlan_filter_table_set(uint16_t vport, uint16_t vlan_id, uint8_t enable); int zxdh_rss_table_set(uint16_t vport, struct zxdh_rss_reta *rss_reta); int zxdh_rss_table_get(uint16_t vport, struct zxdh_rss_reta *rss_reta); +int zxdh_get_panel_attr(struct rte_eth_dev *dev, struct zxdh_panel_table *panel_attr); +int zxdh_set_panel_attr(struct rte_eth_dev *dev, struct zxdh_panel_table *panel_attr); #endif /* ZXDH_TABLES_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 79618 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
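On the application side, the new mtu_set op is reached through rte_eth_dev_set_mtu(), and the min_mtu/max_mtu limits that zxdh_dev_infos_get() now advertises can be checked first. A small sketch under those assumptions (set_port_mtu_checked is an illustrative helper, not part of the series):

#include <errno.h>
#include <rte_ethdev.h>

static int
set_port_mtu_checked(uint16_t port_id, uint16_t mtu)
{
	struct rte_eth_dev_info dev_info;
	int ret;

	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret != 0)
		return ret;

	/* zxdh advertises min_mtu = 68 (ZXDH_ETHER_MIN_MTU) and a
	 * max_mtu derived from ZXDH_MAX_RX_PKTLEN minus the Ethernet,
	 * VLAN and DL net header lengths.
	 */
	if (mtu < dev_info.min_mtu || mtu > dev_info.max_mtu)
		return -EINVAL;

	/* Dispatches to zxdh_dev_mtu_set(): a PF writes the panel and
	 * port-attribute tables directly, while a VF forwards the two
	 * attribute updates to its PF over the message channel.
	 */
	return rte_eth_dev_set_mtu(port_id, mtu);
}

The explicit range check is belt-and-braces: the ethdev layer performs a similar validation against the driver-reported limits before invoking the driver op.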
* [PATCH v2 02/15] net/zxdh: zxdh np uninit implementation 2024-12-10 5:53 ` [PATCH v2 00/15] net/zxdh: updated net zxdh driver Junlong Wang 2024-12-10 5:53 ` [PATCH v2 01/15] net/zxdh: zxdh np init implementation Junlong Wang @ 2024-12-10 5:53 ` Junlong Wang 2024-12-13 19:38 ` Stephen Hemminger ` (2 more replies) 2024-12-10 5:53 ` [PATCH v2 03/15] net/zxdh: port tables init implementations Junlong Wang ` (12 subsequent siblings) 14 siblings, 3 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-10 5:53 UTC (permalink / raw) To: ferruh.yigit; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 20413 bytes --] Release network processor (NP) resources in the host. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/zxdh_ethdev.c | 48 ++++ drivers/net/zxdh/zxdh_np.c | 494 ++++++++++++++++++++++++++++++++- drivers/net/zxdh/zxdh_np.h | 107 +++++++ 3 files changed, 647 insertions(+), 2 deletions(-) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index c54d1f6669..b28ea4ae6f 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -841,6 +841,51 @@ zxdh_dev_configure(struct rte_eth_dev *dev) return ret; } +static void +zxdh_np_dtb_data_res_free(struct zxdh_hw *hw) +{ + struct rte_eth_dev *dev = hw->eth_dev; + int ret = 0; + int i = 0; + + if (g_dtb_data.init_done && g_dtb_data.bind_device == dev) { + ret = zxdh_np_online_uninit(0, dev->data->name, g_dtb_data.queueid); + if (ret) + PMD_DRV_LOG(ERR, "%s dpp_np_online_uninstall failed", dev->data->name); + + if (g_dtb_data.dtb_table_conf_mz) + rte_memzone_free(g_dtb_data.dtb_table_conf_mz); + + if (g_dtb_data.dtb_table_dump_mz) { + rte_memzone_free(g_dtb_data.dtb_table_dump_mz); + g_dtb_data.dtb_table_dump_mz = NULL; + } + + for (i = 0; i < ZXDH_MAX_BASE_DTB_TABLE_COUNT; i++) { + if (g_dtb_data.dtb_table_bulk_dump_mz[i]) { + rte_memzone_free(g_dtb_data.dtb_table_bulk_dump_mz[i]); + g_dtb_data.dtb_table_bulk_dump_mz[i] = NULL; + } + } + g_dtb_data.init_done = 0; + g_dtb_data.bind_device = NULL; + } + if (zxdh_shared_data != NULL) + zxdh_shared_data->np_init_done = 0; +} + +static void +zxdh_np_uninit(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + if (!g_dtb_data.init_done && !g_dtb_data.dev_refcnt) + return; + + if (--g_dtb_data.dev_refcnt == 0) + zxdh_np_dtb_data_res_free(hw); +} + static int zxdh_dev_close(struct rte_eth_dev *dev) { @@ -848,6 +893,7 @@ zxdh_dev_close(struct rte_eth_dev *dev) int ret = 0; zxdh_intr_release(dev); + zxdh_np_uninit(dev); zxdh_pci_reset(hw); zxdh_dev_free_mbufs(dev); @@ -1013,6 +1059,7 @@ zxdh_np_dtb_res_init(struct rte_eth_dev *dev) return 0; free_res: + zxdh_np_dtb_data_res_free(hw); rte_free(dpp_ctrl); return ret; } @@ -1181,6 +1228,7 @@ zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) err_zxdh_init: zxdh_intr_release(eth_dev); + zxdh_np_uninit(eth_dev); zxdh_bar_msg_chan_exit(); rte_free(eth_dev->data->mac_addrs); eth_dev->data->mac_addrs = NULL; diff --git a/drivers/net/zxdh/zxdh_np.c b/drivers/net/zxdh/zxdh_np.c index 9c50039fb1..454252cffc 100644 --- a/drivers/net/zxdh/zxdh_np.c +++ b/drivers/net/zxdh/zxdh_np.c @@ -12,15 +12,26 @@ #include "zxdh_logs.h" static uint64_t g_np_bar_offset; -static ZXDH_DEV_MGR_T g_dev_mgr = {0}; -static ZXDH_SDT_MGR_T g_sdt_mgr = {0}; +static ZXDH_DEV_MGR_T g_dev_mgr; +static ZXDH_SDT_MGR_T g_sdt_mgr; ZXDH_PPU_CLS_BITMAP_T g_ppu_cls_bit_map[ZXDH_DEV_CHANNEL_MAX]; ZXDH_DTB_MGR_T *p_dpp_dtb_mgr[ZXDH_DEV_CHANNEL_MAX] = {NULL}; +ZXDH_RISCV_DTB_MGR 
*p_riscv_dtb_queue_mgr[ZXDH_DEV_CHANNEL_MAX] = {NULL}; +ZXDH_TLB_MGR_T *g_p_dpp_tlb_mgr[ZXDH_DEV_CHANNEL_MAX] = {NULL}; +ZXDH_REG_T g_dpp_reg_info[4]; #define ZXDH_COMM_ASSERT(x) assert(x) #define ZXDH_SDT_MGR_PTR_GET() (&g_sdt_mgr) #define ZXDH_SDT_SOFT_TBL_GET(id) (g_sdt_mgr.sdt_tbl_array[id]) +#define ZXDH_COMM_MASK_BIT(_bitnum_)\ + (0x1U << (_bitnum_)) + +#define ZXDH_COMM_GET_BIT_MASK(_inttype_, _bitqnt_)\ + ((_inttype_)(((_bitqnt_) < 32))) + +#define ZXDH_REG_DATA_MAX (128) + #define ZXDH_COMM_CHECK_DEV_POINT(dev_id, point)\ do {\ if (NULL == (point)) {\ @@ -345,3 +356,482 @@ zxdh_np_host_init(uint32_t dev_id, ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_dev_agent_addr_set"); return 0; } + +static ZXDH_RISCV_DTB_MGR * +zxdh_np_riscv_dtb_queue_mgr_get(uint32_t dev_id) +{ + if (dev_id >= ZXDH_DEV_CHANNEL_MAX) + return NULL; + else + return p_riscv_dtb_queue_mgr[dev_id]; +} + +static uint32_t +zxdh_np_riscv_dtb_mgr_queue_info_delete(uint32_t dev_id, uint32_t queue_id) +{ + ZXDH_RISCV_DTB_MGR *p_riscv_dtb_mgr = NULL; + + p_riscv_dtb_mgr = zxdh_np_riscv_dtb_queue_mgr_get(dev_id); + if (p_riscv_dtb_mgr == NULL) + return 1; + + p_riscv_dtb_mgr->queue_alloc_count--; + p_riscv_dtb_mgr->queue_user_info[queue_id].alloc_flag = 0; + p_riscv_dtb_mgr->queue_user_info[queue_id].queue_id = 0xFF; + p_riscv_dtb_mgr->queue_user_info[queue_id].vport = 0; + memset(p_riscv_dtb_mgr->queue_user_info[queue_id].user_name, 0, ZXDH_PORT_NAME_MAX); + + return 0; +} + +static uint32_t +zxdh_np_dev_get_dev_type(uint32_t dev_id) +{ + ZXDH_DEV_MGR_T *p_dev_mgr = NULL; + ZXDH_DEV_CFG_T *p_dev_info = NULL; + + p_dev_mgr = &g_dev_mgr; + p_dev_info = p_dev_mgr->p_dev_array[dev_id]; + + if (p_dev_info == NULL) + return 0xffff; + + return p_dev_info->dev_type; +} + +static uint32_t +zxdh_np_comm_read_bits(uint8_t *p_base, uint32_t base_size_bit, + uint32_t *p_data, uint32_t start_bit, uint32_t end_bit) +{ + uint32_t len = 0; + uint32_t start_byte_index = 0; + uint32_t end_byte_index = 0; + uint32_t byte_num = 0; + uint32_t buffer_size = 0; + + if (0 != (base_size_bit % 8)) + return 1; + + if (start_bit > end_bit) + return 1; + + if (base_size_bit < end_bit) + return 1; + + len = end_bit - start_bit + 1; + buffer_size = base_size_bit / 8; + while (0 != (buffer_size & (buffer_size - 1))) + buffer_size += 1; + + *p_data = 0; + end_byte_index = (end_bit >> 3); + start_byte_index = (start_bit >> 3); + + if (start_byte_index == end_byte_index) { + *p_data = (uint32_t)(((p_base[start_byte_index] >> (7U - (end_bit & 7))) + & (0xff >> (8U - len))) & 0xff); + return 0; + } + + if (start_bit & 7) { + *p_data = (p_base[start_byte_index] & (0xff >> (start_bit & 7))) & UINT8_MAX; + start_byte_index++; + } + + for (byte_num = start_byte_index; byte_num < end_byte_index; byte_num++) { + *p_data <<= 8; + *p_data += p_base[byte_num]; + } + + *p_data <<= 1 + (end_bit & 7); + *p_data += ((p_base[byte_num & (buffer_size - 1)] & (0xff << (7 - (end_bit & 7)))) >> + (7 - (end_bit & 7))) & 0xff; + + return 0; +} + +static uint32_t +zxdh_np_comm_read_bits_ex(uint8_t *p_base, uint32_t base_size_bit, + uint32_t *p_data, uint32_t msb_start_pos, uint32_t len) +{ + uint32_t rtn = 0; + + rtn = zxdh_np_comm_read_bits(p_base, + base_size_bit, + p_data, + (base_size_bit - 1 - msb_start_pos), + (base_size_bit - 1 - msb_start_pos + len - 1)); + return rtn; +} + +static uint32_t +zxdh_np_reg_read(uint32_t dev_id, uint32_t reg_no, + uint32_t m_offset, uint32_t n_offset, void *p_data) +{ + uint32_t rc = 0; + uint32_t i = 0; + uint32_t p_buff[ZXDH_REG_DATA_MAX] = {0}; + 
ZXDH_REG_T *p_reg_info = NULL; + ZXDH_FIELD_T *p_field_info = NULL; + + if (reg_no < 4) { + p_reg_info = &g_dpp_reg_info[reg_no]; + p_field_info = p_reg_info->p_fields; + for (i = 0; i < p_reg_info->field_num; i++) { + rc = zxdh_np_comm_read_bits_ex((uint8_t *)p_buff, + p_reg_info->width * 8, + (uint32_t *)p_data + i, + p_field_info[i].msb_pos, + p_field_info[i].len); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxic_comm_read_bits_ex"); + PMD_DRV_LOG(ERR, "dev_id %d(%d)(%d)is ok!", dev_id, m_offset, n_offset); + } + } + return 0; +} + +static uint32_t +zxdh_np_dtb_queue_vm_info_get(uint32_t dev_id, + uint32_t queue_id, + ZXDH_DTB_QUEUE_VM_INFO_T *p_vm_info) +{ + uint32_t rc = 0; + + ZXDH_DTB4K_DTB_ENQ_CFG_EPID_V_FUNC_NUM_0_127_T vm_info = {0}; + + rc = zxdh_np_reg_read(dev_id, ZXDH_DTB_CFG_EPID_V_FUNC_NUM, + 0, queue_id, &vm_info); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rc, "zxdh_reg_read"); + + p_vm_info->dbi_en = vm_info.dbi_en; + p_vm_info->queue_en = vm_info.queue_en; + p_vm_info->epid = vm_info.cfg_epid; + p_vm_info->vector = vm_info.cfg_vector; + p_vm_info->vfunc_num = vm_info.cfg_vfunc_num; + p_vm_info->func_num = vm_info.cfg_func_num; + p_vm_info->vfunc_active = vm_info.cfg_vfunc_active; + + return 0; +} + +static uint32_t +zxdh_np_comm_write_bits(uint8_t *p_base, uint32_t base_size_bit, + uint32_t data, uint32_t start_bit, uint32_t end_bit) +{ + uint32_t start_byte_index = 0; + uint32_t end_byte_index = 0; + uint8_t mask_value = 0; + uint32_t byte_num = 0; + uint32_t buffer_size = 0; + + if (0 != (base_size_bit % 8)) + return 1; + + if (start_bit > end_bit) + return 1; + + if (base_size_bit < end_bit) + return 1; + + buffer_size = base_size_bit / 8; + + while (0 != (buffer_size & (buffer_size - 1))) + buffer_size += 1; + + end_byte_index = (end_bit >> 3); + start_byte_index = (start_bit >> 3); + + if (start_byte_index == end_byte_index) { + mask_value = ((0xFE << (7 - (start_bit & 7))) & 0xff); + mask_value |= (((1 << (7 - (end_bit & 7))) - 1) & 0xff); + p_base[end_byte_index] &= mask_value; + p_base[end_byte_index] |= (((data << (7 - (end_bit & 7)))) & 0xff); + return 0; + } + + if (7 != (end_bit & 7)) { + mask_value = ((0x7f >> (end_bit & 7)) & 0xff); + p_base[end_byte_index] &= mask_value; + p_base[end_byte_index] |= ((data << (7 - (end_bit & 7))) & 0xff); + end_byte_index--; + data >>= 1 + (end_bit & 7); + } + + for (byte_num = end_byte_index; byte_num > start_byte_index; byte_num--) { + p_base[byte_num & (buffer_size - 1)] = data & 0xff; + data >>= 8; + } + + mask_value = ((0xFE << (7 - (start_bit & 7))) & 0xff); + p_base[byte_num] &= mask_value; + p_base[byte_num] |= data; + + return 0; +} + +static uint32_t +zxdh_np_comm_write_bits_ex(uint8_t *p_base, + uint32_t base_size_bit, + uint32_t data, + uint32_t msb_start_pos, + uint32_t len) +{ + uint32_t rtn = 0; + + rtn = zxdh_np_comm_write_bits(p_base, + base_size_bit, + data, + (base_size_bit - 1 - msb_start_pos), + (base_size_bit - 1 - msb_start_pos + len - 1)); + + return rtn; +} + +static uint32_t +zxdh_np_reg_write(uint32_t dev_id, uint32_t reg_no, + uint32_t m_offset, uint32_t n_offset, void *p_data) +{ + uint32_t rc = 0; + uint32_t i = 0; + uint32_t p_buff[ZXDH_REG_DATA_MAX] = {0}; + uint32_t temp_data = 0; + ZXDH_REG_T *p_reg_info = NULL; + ZXDH_FIELD_T *p_field_info = NULL; + + if (reg_no < 4) { + p_reg_info = &g_dpp_reg_info[reg_no]; + p_field_info = p_reg_info->p_fields; + + for (i = 0; i < p_reg_info->field_num; i++) { + if (p_field_info[i].len <= 32) { + temp_data = *((uint32_t *)p_data + i); + rc = 
zxdh_np_comm_write_bits_ex((uint8_t *)p_buff, + p_reg_info->width * 8, + temp_data, + p_field_info[i].msb_pos, + p_field_info[i].len); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_comm_write_bits_ex"); + PMD_DRV_LOG(ERR, "dev_id %d(%d)(%d)is ok!", + dev_id, m_offset, n_offset); + } + } + } + + return 0; +} + +static uint32_t +zxdh_np_dtb_queue_vm_info_set(uint32_t dev_id, + uint32_t queue_id, + ZXDH_DTB_QUEUE_VM_INFO_T *p_vm_info) +{ + uint32_t rc = 0; + ZXDH_DTB4K_DTB_ENQ_CFG_EPID_V_FUNC_NUM_0_127_T vm_info = {0}; + + vm_info.dbi_en = p_vm_info->dbi_en; + vm_info.queue_en = p_vm_info->queue_en; + vm_info.cfg_epid = p_vm_info->epid; + vm_info.cfg_vector = p_vm_info->vector; + vm_info.cfg_vfunc_num = p_vm_info->vfunc_num; + vm_info.cfg_func_num = p_vm_info->func_num; + vm_info.cfg_vfunc_active = p_vm_info->vfunc_active; + + rc = zxdh_np_reg_write(dev_id, ZXDH_DTB_CFG_EPID_V_FUNC_NUM, + 0, queue_id, &vm_info); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rc, "zxdh_reg_write"); + + return 0; +} + +static uint32_t +zxdh_np_dtb_queue_enable_set(uint32_t dev_id, + uint32_t queue_id, + uint32_t enable) +{ + uint32_t rc = 0; + ZXDH_DTB_QUEUE_VM_INFO_T vm_info = {0}; + + rc = zxdh_np_dtb_queue_vm_info_get(dev_id, queue_id, &vm_info); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_dtb_queue_vm_info_get"); + + vm_info.queue_en = enable; + rc = zxdh_np_dtb_queue_vm_info_set(dev_id, queue_id, &vm_info); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_dtb_queue_vm_info_set"); + + return rc; +} + +static uint32_t +zxdh_np_riscv_dpp_dtb_queue_id_release(uint32_t dev_id, + char name[ZXDH_PORT_NAME_MAX], uint32_t queue_id) +{ + ZXDH_RISCV_DTB_MGR *p_riscv_dtb_mgr = NULL; + + p_riscv_dtb_mgr = zxdh_np_riscv_dtb_queue_mgr_get(dev_id); + if (p_riscv_dtb_mgr == NULL) + return 1; + + if (zxdh_np_dev_get_dev_type(dev_id) == ZXDH_DEV_TYPE_SIM) + return 0; + + if (p_riscv_dtb_mgr->queue_user_info[queue_id].alloc_flag != 1) { + PMD_DRV_LOG(ERR, "queue %d not alloc!", queue_id); + return 2; + } + + if (strcmp(p_riscv_dtb_mgr->queue_user_info[queue_id].user_name, name) != 0) { + PMD_DRV_LOG(ERR, "queue %d name %s error!", queue_id, name); + return 3; + } + zxdh_np_dtb_queue_enable_set(dev_id, queue_id, 0); + zxdh_np_riscv_dtb_mgr_queue_info_delete(dev_id, queue_id); + + return 0; +} + +static uint32_t +zxdh_np_dtb_queue_unused_item_num_get(uint32_t dev_id, + uint32_t queue_id, + uint32_t *p_item_num) +{ + uint32_t rc = 0; + + if (zxdh_np_dev_get_dev_type(dev_id) == ZXDH_DEV_TYPE_SIM) { + *p_item_num = 32; + return 0; + } + + rc = zxdh_np_reg_read(dev_id, ZXDH_DTB_INFO_QUEUE_BUF_SPACE, + 0, queue_id, p_item_num); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rc, "dpp_reg_read"); + return 0; +} + +static uint32_t +zxdh_np_dtb_queue_id_free(uint32_t dev_id, + uint32_t queue_id) +{ + uint32_t rc = 0; + uint32_t item_num = 0; + ZXDH_DTB_MGR_T *p_dtb_mgr = NULL; + + p_dtb_mgr = p_dpp_dtb_mgr[dev_id]; + if (p_dtb_mgr == NULL) + return 1; + + rc = zxdh_np_dtb_queue_unused_item_num_get(dev_id, queue_id, &item_num); + + p_dtb_mgr->queue_info[queue_id].init_flag = 0; + p_dtb_mgr->queue_info[queue_id].vport = 0; + p_dtb_mgr->queue_info[queue_id].vector = 0; + + return rc; +} + +static uint32_t +zxdh_np_dtb_queue_release(uint32_t devid, + char pname[32], + uint32_t queueid) +{ + uint32_t rc = 0; + + ZXDH_COMM_CHECK_DEV_POINT(devid, pname); + + rc = zxdh_np_riscv_dpp_dtb_queue_id_release(devid, pname, queueid); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_riscv_dpp_dtb_queue_id_release"); + + rc = zxdh_np_dtb_queue_id_free(devid, queueid); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, 
"zxdh_dtb_queue_id_free"); + + return rc; +} + +static uint32_t +zxdh_np_dtb_mgr_destroy(uint32_t dev_id) +{ + if (p_dpp_dtb_mgr[dev_id] != NULL) { + free(p_dpp_dtb_mgr[dev_id]); + p_dpp_dtb_mgr[dev_id] = NULL; + } + + return 0; +} + +static uint32_t +zxdh_np_tlb_mgr_destroy(uint32_t dev_id) +{ + if (g_p_dpp_tlb_mgr[dev_id] != NULL) { + free(g_p_dpp_tlb_mgr[dev_id]); + g_p_dpp_tlb_mgr[dev_id] = NULL; + } + + return 0; +} + +static uint32_t +zxdh_np_sdt_mgr_destroy(uint32_t dev_id) +{ + ZXDH_SDT_SOFT_TABLE_T *p_sdt_tbl_temp = NULL; + ZXDH_SDT_MGR_T *p_sdt_mgr = NULL; + + p_sdt_tbl_temp = ZXDH_SDT_SOFT_TBL_GET(dev_id); + p_sdt_mgr = ZXDH_SDT_MGR_PTR_GET(); + + if (p_sdt_tbl_temp != NULL) + free(p_sdt_tbl_temp); + + ZXDH_SDT_SOFT_TBL_GET(dev_id) = NULL; + + p_sdt_mgr->channel_num--; + + return 0; +} + +static uint32_t +zxdh_np_dev_del(uint32_t dev_id) +{ + ZXDH_DEV_CFG_T *p_dev_info = NULL; + ZXDH_DEV_MGR_T *p_dev_mgr = NULL; + + p_dev_mgr = &g_dev_mgr; + p_dev_info = p_dev_mgr->p_dev_array[dev_id]; + + if (p_dev_info != NULL) { + free(p_dev_info); + p_dev_mgr->p_dev_array[dev_id] = NULL; + p_dev_mgr->device_num--; + } + + return 0; +} + +int +zxdh_np_online_uninit(uint32_t dev_id, + char *port_name, + uint32_t queue_id) +{ + uint32_t rc = 0; + + rc = zxdh_np_dtb_queue_release(dev_id, port_name, queue_id); + if (rc != 0) + PMD_DRV_LOG(ERR, "%s:dtb release error," + "port name %s queue id %d. ", __func__, port_name, queue_id); + + rc = zxdh_np_dtb_mgr_destroy(dev_id); + if (rc != 0) + PMD_DRV_LOG(ERR, "zxdh_dtb_mgr_destroy error!"); + + rc = zxdh_np_tlb_mgr_destroy(dev_id); + if (rc != 0) + PMD_DRV_LOG(ERR, "zxdh_tlb_mgr_destroy error!"); + + rc = zxdh_np_sdt_mgr_destroy(dev_id); + if (rc != 0) + PMD_DRV_LOG(ERR, "zxdh_sdt_mgr_destroy error!"); + + rc = zxdh_np_dev_del(dev_id); + if (rc != 0) + PMD_DRV_LOG(ERR, "zxdh_dev_del error!"); + + return 0; +} diff --git a/drivers/net/zxdh/zxdh_np.h b/drivers/net/zxdh/zxdh_np.h index 573eafe796..dc0e867827 100644 --- a/drivers/net/zxdh/zxdh_np.h +++ b/drivers/net/zxdh/zxdh_np.h @@ -47,6 +47,11 @@ #define ZXDH_INIT_FLAG_TM_IMEM_FLAG (1 << 9) #define ZXDH_INIT_FLAG_AGENT_FLAG (1 << 10) +#define ZXDH_ACL_TBL_ID_MIN (0) +#define ZXDH_ACL_TBL_ID_MAX (7) +#define ZXDH_ACL_TBL_ID_NUM (8U) +#define ZXDH_ACL_BLOCK_NUM (8U) + typedef enum zxdh_module_init_e { ZXDH_MODULE_INIT_NPPU = 0, ZXDH_MODULE_INIT_PPU, @@ -67,6 +72,15 @@ typedef enum zxdh_dev_type_e { ZXDH_DEV_TYPE_INVALID, } ZXDH_DEV_TYPE_E; +typedef enum zxdh_reg_info_e { + ZXDH_DTB_CFG_QUEUE_DTB_HADDR = 0, + ZXDH_DTB_CFG_QUEUE_DTB_LADDR = 1, + ZXDH_DTB_CFG_QUEUE_DTB_LEN = 2, + ZXDH_DTB_INFO_QUEUE_BUF_SPACE = 3, + ZXDH_DTB_CFG_EPID_V_FUNC_NUM = 4, + ZXDH_REG_ENUM_MAX_VALUE +} ZXDH_REG_INFO_E; + typedef enum zxdh_dev_access_type_e { ZXDH_DEV_ACCESS_TYPE_PCIE = 0, ZXDH_DEV_ACCESS_TYPE_RISCV = 1, @@ -79,6 +93,26 @@ typedef enum zxdh_dev_agent_flag_e { ZXDH_DEV_AGENT_INVALID, } ZXDH_DEV_AGENT_FLAG_E; +typedef enum zxdh_acl_pri_mode_e { + ZXDH_ACL_PRI_EXPLICIT = 1, + ZXDH_ACL_PRI_IMPLICIT, + ZXDH_ACL_PRI_SPECIFY, + ZXDH_ACL_PRI_INVALID, +} ZXDH_ACL_PRI_MODE_E; + +typedef struct zxdh_d_node { + void *data; + struct zxdh_d_node *prev; + struct zxdh_d_node *next; +} ZXDH_D_NODE; + +typedef struct zxdh_d_head { + uint32_t used; + uint32_t maxnum; + ZXDH_D_NODE *p_next; + ZXDH_D_NODE *p_prev; +} ZXDH_D_HEAD; + typedef struct zxdh_dtb_tab_up_user_addr_t { uint32_t user_flag; uint64_t phy_addr; @@ -193,6 +227,79 @@ typedef struct zxdh_sdt_mgr_t { ZXDH_SDT_SOFT_TABLE_T *sdt_tbl_array[ZXDH_DEV_CHANNEL_MAX]; } 
ZXDH_SDT_MGR_T; +typedef struct zxdh_riscv_dtb_queue_USER_info_t { + uint32_t alloc_flag; + uint32_t queue_id; + uint32_t vport; + char user_name[ZXDH_PORT_NAME_MAX]; +} ZXDH_RISCV_DTB_QUEUE_USER_INFO_T; + +typedef struct zxdh_riscv_dtb_mgr { + uint32_t queue_alloc_count; + uint32_t queue_index; + ZXDH_RISCV_DTB_QUEUE_USER_INFO_T queue_user_info[ZXDH_DTB_QUEUE_NUM_MAX]; +} ZXDH_RISCV_DTB_MGR; + +typedef struct zxdh_dtb_queue_vm_info_t { + uint32_t dbi_en; + uint32_t queue_en; + uint32_t epid; + uint32_t vfunc_num; + uint32_t vector; + uint32_t func_num; + uint32_t vfunc_active; +} ZXDH_DTB_QUEUE_VM_INFO_T; + +typedef struct zxdh_dtb4k_dtb_enq_cfg_epid_v_func_num_0_127_t { + uint32_t dbi_en; + uint32_t queue_en; + uint32_t cfg_epid; + uint32_t cfg_vfunc_num; + uint32_t cfg_vector; + uint32_t cfg_func_num; + uint32_t cfg_vfunc_active; +} ZXDH_DTB4K_DTB_ENQ_CFG_EPID_V_FUNC_NUM_0_127_T; + + +typedef uint32_t (*ZXDH_REG_WRITE)(uint32_t dev_id, uint32_t addr, uint32_t *p_data); +typedef uint32_t (*ZXDH_REG_READ)(uint32_t dev_id, uint32_t addr, uint32_t *p_data); + +typedef struct zxdh_field_t { + const char *p_name; + uint32_t flags; + uint16_t msb_pos; + + uint16_t len; + uint32_t default_value; + uint32_t default_step; +} ZXDH_FIELD_T; + +typedef struct zxdh_reg_t { + const char *reg_name; + uint32_t reg_no; + uint32_t module_no; + uint32_t flags; + uint32_t array_type; + uint32_t addr; + uint32_t width; + uint32_t m_size; + uint32_t n_size; + uint32_t m_step; + uint32_t n_step; + uint32_t field_num; + ZXDH_FIELD_T *p_fields; + + ZXDH_REG_WRITE p_write_fun; + ZXDH_REG_READ p_read_fun; +} ZXDH_REG_T; + +typedef struct zxdh_tlb_mgr_t { + uint32_t entry_num; + uint32_t va_width; + uint32_t pa_width; +} ZXDH_TLB_MGR_T; + int zxdh_np_host_init(uint32_t dev_id, ZXDH_DEV_INIT_CTRL_T *p_dev_init_ctrl); +int zxdh_np_online_uninit(uint32_t dev_id, char *port_name, uint32_t queue_id); #endif /* ZXDH_NP_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 47737 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
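A note on the bit-addressing convention in the patch above: zxdh_np_comm_read_bits_ex() and zxdh_np_comm_write_bits_ex() name a register field by its MSB position counted from the least-significant end of the whole buffer (bit 0 is the buffer's lowest bit, bit base_size_bit-1 its highest), then translate that into the top-down (start_bit, end_bit) pair the byte-walking helpers consume. Below is a standalone sketch of the same-byte fast path only; the function and variable names are illustrative, not the driver's:

#include <stdint.h>
#include <stdio.h>

/* Mirrors the single-byte fast path of zxdh_np_comm_read_bits():
 * msb_pos counts from the buffer's least-significant bit, so the
 * top nibble of a 16-bit buffer has msb_pos == 15.
 */
static uint32_t read_field(const uint8_t *base, uint32_t size_bit,
		uint32_t msb_pos, uint32_t len)
{
	uint32_t start_bit = size_bit - 1 - msb_pos; /* top-down index */
	uint32_t end_bit = start_bit + len - 1;      /* inclusive */
	uint32_t byte = start_bit >> 3;              /* same-byte case only */

	return (base[byte] >> (7 - (end_bit & 7))) & ((1U << len) - 1);
}

int main(void)
{
	uint8_t buf[2] = { 0xA5, 0x00 };             /* 1010 0101 ... */

	printf("0x%X\n", (unsigned)read_field(buf, 16, 15, 4)); /* top nibble: 0xA */
	printf("0x%X\n", (unsigned)read_field(buf, 16, 11, 4)); /* next nibble: 0x5 */
	return 0;
}
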
* Re: [PATCH v2 02/15] net/zxdh: zxdh np uninit implementation 2024-12-10 5:53 ` [PATCH v2 02/15] net/zxdh: zxdh np uninit implementation Junlong Wang @ 2024-12-13 19:38 ` Stephen Hemminger 2024-12-13 19:41 ` Stephen Hemminger 2024-12-13 19:41 ` Stephen Hemminger 2 siblings, 0 replies; 225+ messages in thread From: Stephen Hemminger @ 2024-12-13 19:38 UTC (permalink / raw) To: Junlong Wang; +Cc: ferruh.yigit, dev On Tue, 10 Dec 2024 13:53:20 +0800 Junlong Wang <wang.junlong1@zte.com.cn> wrote: > (np)network processor release resources in host. > > Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> > --- > drivers/net/zxdh/zxdh_ethdev.c | 48 ++++ > drivers/net/zxdh/zxdh_np.c | 494 ++++++++++++++++++++++++++++++++- > drivers/net/zxdh/zxdh_np.h | 107 +++++++ > 3 files changed, 647 insertions(+), 2 deletions(-) > > diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c > index c54d1f6669..b28ea4ae6f 100644 > --- a/drivers/net/zxdh/zxdh_ethdev.c > +++ b/drivers/net/zxdh/zxdh_ethdev.c > @@ -841,6 +841,51 @@ zxdh_dev_configure(struct rte_eth_dev *dev) > return ret; > } > > +static void > +zxdh_np_dtb_data_res_free(struct zxdh_hw *hw) > +{ > + struct rte_eth_dev *dev = hw->eth_dev; > + int ret = 0; > + int i = 0; > + Why initialize these variables (ret and i)? They are set immediately later in the loop. Programmers are often taught to always initialize all variables, but doing so defeats the checking in modern compilers that are able to detect variables that are used uninitialized. ^ permalink raw reply [flat|nested] 225+ messages in thread
* Re: [PATCH v2 02/15] net/zxdh: zxdh np uninit implementation 2024-12-10 5:53 ` [PATCH v2 02/15] net/zxdh: zxdh np uninit implementation Junlong Wang 2024-12-13 19:38 ` Stephen Hemminger @ 2024-12-13 19:41 ` Stephen Hemminger 2024-12-13 19:41 ` Stephen Hemminger 2 siblings, 0 replies; 225+ messages in thread From: Stephen Hemminger @ 2024-12-13 19:41 UTC (permalink / raw) To: Junlong Wang; +Cc: ferruh.yigit, dev On Tue, 10 Dec 2024 13:53:20 +0800 Junlong Wang <wang.junlong1@zte.com.cn> wrote: > +static uint32_t > +zxdh_np_comm_read_bits(uint8_t *p_base, uint32_t base_size_bit, > + uint32_t *p_data, uint32_t start_bit, uint32_t end_bit) > +{ > + uint32_t len = 0; > + uint32_t start_byte_index = 0; > + uint32_t end_byte_index = 0; > + uint32_t byte_num = 0; > + uint32_t buffer_size = 0; > + > + if (0 != (base_size_bit % 8)) > + return 1; > + > + if (start_bit > end_bit) > + return 1; > + > + if (base_size_bit < end_bit) > + return 1; > + > + len = end_bit - start_bit + 1; Another case of the "initialize everything" style. len is set to zero when declared then first use assigns it. ^ permalink raw reply [flat|nested] 225+ messages in thread
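Both reviews above flag the same pattern. A minimal standalone sketch (hypothetical code, not from the patch) of what the redundant "= 0" costs: with the variable left uninitialized, GCC/Clang with -Wall can report that the n == 0 path returns an uninitialized value; writing "int ret = 0;" silences that diagnostic and the bug survives.

#include <stdio.h>

static int release_queue(int q) { return q == 3 ? -1 : 0; }

static int release_all(int n)
{
	int ret;	/* deliberately NOT pre-initialized */
	int i;

	for (i = 0; i < n; i++) {
		ret = release_queue(i);
		if (ret != 0)
			break;
	}
	return ret;	/* compiler can warn: uninitialized when n == 0 */
}

int main(void)
{
	printf("%d\n", release_all(4));
	return 0;
}
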
* Re: [PATCH v2 02/15] net/zxdh: zxdh np uninit implementation 2024-12-10 5:53 ` [PATCH v2 02/15] net/zxdh: zxdh np uninit implementation Junlong Wang 2024-12-13 19:38 ` Stephen Hemminger 2024-12-13 19:41 ` Stephen Hemminger @ 2024-12-13 19:41 ` Stephen Hemminger 2 siblings, 0 replies; 225+ messages in thread From: Stephen Hemminger @ 2024-12-13 19:41 UTC (permalink / raw) To: Junlong Wang; +Cc: ferruh.yigit, dev On Tue, 10 Dec 2024 13:53:20 +0800 Junlong Wang <wang.junlong1@zte.com.cn> wrote: > diff --git a/drivers/net/zxdh/zxdh_np.c b/drivers/net/zxdh/zxdh_np.c > index 9c50039fb1..454252cffc 100644 > --- a/drivers/net/zxdh/zxdh_np.c > +++ b/drivers/net/zxdh/zxdh_np.c > @@ -12,15 +12,26 @@ > #include "zxdh_logs.h" > > static uint64_t g_np_bar_offset; > -static ZXDH_DEV_MGR_T g_dev_mgr = {0}; > -static ZXDH_SDT_MGR_T g_sdt_mgr = {0}; > +static ZXDH_DEV_MGR_T g_dev_mgr; > +static ZXDH_SDT_MGR_T g_sdt_mgr; > ZXDH_PPU_CLS_BITMAP_T g_ppu_cls_bit_map[ZXDH_DEV_CHANNEL_MAX]; > ZXDH_DTB_MGR_T *p_dpp_dtb_mgr[ZXDH_DEV_CHANNEL_MAX] = {NULL}; > +ZXDH_RISCV_DTB_MGR *p_riscv_dtb_queue_mgr[ZXDH_DEV_CHANNEL_MAX] = {NULL}; > +ZXDH_TLB_MGR_T *g_p_dpp_tlb_mgr[ZXDH_DEV_CHANNEL_MAX] = {NULL} Don't think you need to initialize these to NULL since in C global variables get initialized to 0 automatically. ^ permalink raw reply [flat|nested] 225+ messages in thread
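For reference, the rule cited here is C11 6.7.9p10: objects with static storage duration are implicitly zero-initialized (pointers become null pointers), so the explicit "= {NULL}" initializers on the globals are redundant. A short illustration:

#include <stddef.h>
#include <assert.h>

static int *g_ptrs[4];		/* implicitly all NULL */
static unsigned int g_count;	/* implicitly 0 */

int main(void)
{
	assert(g_ptrs[0] == NULL && g_count == 0);
	return 0;
}
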
* [PATCH v2 03/15] net/zxdh: port tables init implementations 2024-12-10 5:53 ` [PATCH v2 00/15] net/zxdh: updated net zxdh driver Junlong Wang 2024-12-10 5:53 ` [PATCH v2 01/15] net/zxdh: zxdh np init implementation Junlong Wang 2024-12-10 5:53 ` [PATCH v2 02/15] net/zxdh: zxdh np uninit implementation Junlong Wang @ 2024-12-10 5:53 ` Junlong Wang 2024-12-13 19:42 ` Stephen Hemminger 2024-12-10 5:53 ` [PATCH v2 04/15] net/zxdh: port tables unint implementations Junlong Wang ` (11 subsequent siblings) 14 siblings, 1 reply; 225+ messages in thread From: Junlong Wang @ 2024-12-10 5:53 UTC (permalink / raw) To: ferruh.yigit; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 43284 bytes --] insert port tables in host. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/meson.build | 1 + drivers/net/zxdh/zxdh_ethdev.c | 23 ++ drivers/net/zxdh/zxdh_msg.c | 63 ++++ drivers/net/zxdh/zxdh_msg.h | 72 ++++ drivers/net/zxdh/zxdh_np.c | 662 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_np.h | 210 +++++++++++ drivers/net/zxdh/zxdh_pci.h | 2 + drivers/net/zxdh/zxdh_tables.c | 104 ++++++ drivers/net/zxdh/zxdh_tables.h | 148 ++++++++ 9 files changed, 1285 insertions(+) create mode 100644 drivers/net/zxdh/zxdh_tables.c create mode 100644 drivers/net/zxdh/zxdh_tables.h diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build index ab24a3145c..5b3af87c5b 100644 --- a/drivers/net/zxdh/meson.build +++ b/drivers/net/zxdh/meson.build @@ -20,4 +20,5 @@ sources = files( 'zxdh_pci.c', 'zxdh_queue.c', 'zxdh_np.c', + 'zxdh_tables.c', ) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index b28ea4ae6f..8a9ca87183 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -14,6 +14,7 @@ #include "zxdh_common.h" #include "zxdh_queue.h" #include "zxdh_np.h" +#include "zxdh_tables.h" struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; struct zxdh_shared_data *zxdh_shared_data; @@ -1147,6 +1148,24 @@ zxdh_np_init(struct rte_eth_dev *eth_dev) return 0; } +static int +zxdh_tables_init(struct rte_eth_dev *dev) +{ + int ret = 0; + + ret = zxdh_port_attr_init(dev); + if (ret) { + PMD_DRV_LOG(ERR, "zxdh_port_attr_init failed"); + return ret; + } + + ret = zxdh_panel_table_init(dev); + if (ret) { + PMD_DRV_LOG(ERR, " panel table init failed"); + return ret; + } + return ret; +} static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) @@ -1224,6 +1243,10 @@ zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) if (ret != 0) goto err_zxdh_init; + ret = zxdh_tables_init(eth_dev); + if (ret != 0) + goto err_zxdh_init; + return ret; err_zxdh_init: diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c index a0a005b178..1aed979de3 100644 --- a/drivers/net/zxdh/zxdh_msg.c +++ b/drivers/net/zxdh/zxdh_msg.c @@ -14,6 +14,7 @@ #include "zxdh_ethdev.h" #include "zxdh_logs.h" #include "zxdh_msg.h" +#include "zxdh_pci.h" #define ZXDH_REPS_INFO_FLAG_USABLE 0x00 #define ZXDH_BAR_SEQID_NUM_MAX 256 @@ -100,6 +101,7 @@ #define ZXDH_BAR_CHAN_MSG_EMEC 1 #define ZXDH_BAR_CHAN_MSG_NO_ACK 0 #define ZXDH_BAR_CHAN_MSG_ACK 1 +#define ZXDH_MSG_REPS_OK 0xff uint8_t subchan_id_tbl[ZXDH_BAR_MSG_SRC_NUM][ZXDH_BAR_MSG_DST_NUM] = { {ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_SEND}, @@ -1080,3 +1082,64 @@ int zxdh_get_bar_offset(struct zxdh_bar_offset_params *paras, res->bar_length = recv_msg.offset_reps.length; return ZXDH_BAR_MSG_OK; } + +int zxdh_vf_send_msg_to_pf(struct rte_eth_dev *dev, void 
*msg_req, + uint16_t msg_req_len, void *reply, uint16_t reply_len) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_pci_bar_msg in = {0}; + struct zxdh_msg_recviver_mem result = {0}; + struct zxdh_msg_reply_info reply_info = {0}; + int ret = 0; + + if (reply) { + RTE_ASSERT(reply_len < sizeof(zxdh_msg_reply_info)); + result.recv_buffer = reply; + result.buffer_len = reply_len; + } else { + result.recv_buffer = &reply_info; + result.buffer_len = sizeof(reply_info); + } + + struct zxdh_msg_reply_head *reply_head = + &(((struct zxdh_msg_reply_info *)result.recv_buffer)->reply_head); + struct zxdh_msg_reply_body *reply_body = + &(((struct zxdh_msg_reply_info *)result.recv_buffer)->reply_body); + + in.virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_MSG_CHAN_PFVFSHARE_OFFSET); + in.payload_addr = msg_req; + in.payload_len = msg_req_len; + in.src = ZXDH_MSG_CHAN_END_VF; + in.dst = ZXDH_MSG_CHAN_END_PF; + in.module_id = ZXDH_MODULE_BAR_MSG_TO_PF; + in.src_pcieid = hw->pcie_id; + in.dst_pcieid = ZXDH_PF_PCIE_ID(hw->pcie_id); + + ret = zxdh_bar_chan_sync_msg_send(&in, &result); + if (ret != ZXDH_BAR_MSG_OK) { + PMD_MSG_LOG(ERR, + "vf[%d] send bar msg to pf failed.ret %d", hw->vport.vfid, ret); + return -EAGAIN; + } + if (reply_head->flag != ZXDH_MSG_REPS_OK) { + PMD_MSG_LOG(ERR, "vf[%d] get pf reply failed: reply_head flag : 0x%x(0xff is OK).replylen %d", + hw->vport.vfid, reply_head->flag, reply_head->reps_len); + return -EAGAIN; + } + if (reply_body->flag != ZXDH_REPS_SUCC) { + PMD_MSG_LOG(ERR, "vf[%d] msg processing failed", hw->vfid); + return -EAGAIN; + } + return 0; +} + +void zxdh_msg_head_build(struct zxdh_hw *hw, enum zxdh_msg_type type, + struct zxdh_msg_info *msg_info) +{ + struct zxdh_msg_head *msghead = &msg_info->msg_head; + + msghead->msg_type = type; + msghead->vport = hw->vport.vport; + msghead->vf_id = hw->vport.vfid; + msghead->pcieid = hw->pcie_id; +} diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index fbc79e8f9d..35ed5d1a1c 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -33,6 +33,19 @@ #define ZXDH_BAR_MSG_PAYLOAD_MAX_LEN \ (ZXDH_BAR_MSG_ADDR_CHAN_INTERVAL - sizeof(struct zxdh_bar_msg_header)) +#define ZXDH_MSG_ADDR_CHAN_INTERVAL (2 * 1024) /* channel size */ +#define ZXDH_MSG_PAYLOAD_MAX_LEN \ + (ZXDH_MSG_ADDR_CHAN_INTERVAL - sizeof(struct zxdh_bar_msg_header)) + +#define ZXDH_MSG_REPLYBODY_HEAD sizeof(enum zxdh_reps_flag) +#define ZXDH_MSG_HEADER_SIZE 4 +#define ZXDH_MSG_REPLY_BODY_MAX_LEN \ + (ZXDH_MSG_PAYLOAD_MAX_LEN - sizeof(struct zxdh_msg_reply_head)) + +#define ZXDH_MSG_HEAD_LEN 8 +#define ZXDH_MSG_REQ_BODY_MAX_LEN \ + (ZXDH_MSG_PAYLOAD_MAX_LEN - ZXDH_MSG_HEAD_LEN) + enum ZXDH_DRIVER_TYPE { ZXDH_MSG_CHAN_END_MPF = 0, ZXDH_MSG_CHAN_END_PF, @@ -151,6 +164,13 @@ enum pciebar_layout_type { ZXDH_URI_MAX, }; +enum zxdh_msg_type { + ZXDH_NULL = 0, + ZXDH_VF_PORT_INIT = 1, + + ZXDH_MSG_TYPE_END, +} __rte_packed; + struct zxdh_msix_para { uint16_t pcie_id; uint16_t vector_risc; @@ -240,6 +260,54 @@ struct zxdh_offset_get_msg { uint16_t type; }; +struct zxdh_msg_reply_head { + uint8_t flag; + uint16_t reps_len; + uint8_t resvd; +} __rte_packed; + +enum zxdh_reps_flag { + ZXDH_REPS_FAIL, + ZXDH_REPS_SUCC = 0xaa, +} __rte_packed; + +struct zxdh_msg_reply_body { + enum zxdh_reps_flag flag; + union { + uint8_t reply_data[ZXDH_MSG_REPLY_BODY_MAX_LEN - sizeof(enum zxdh_reps_flag)]; + } __rte_packed; +} __rte_packed; + +struct zxdh_msg_reply_info { + struct zxdh_msg_reply_head reply_head; + struct 
zxdh_msg_reply_body reply_body; +} __rte_packed; + +struct zxdh_vf_init_msg { + uint8_t link_up; + uint8_t rsv; + uint16_t base_qid; + uint8_t rss_enable; +} __rte_packed; + +struct zxdh_msg_head { + enum zxdh_msg_type msg_type; + uint16_t vport; + uint16_t vf_id; + uint16_t pcieid; +} __rte_packed; + +struct zxdh_msg_info { + union { + uint8_t head_len[ZXDH_MSG_HEAD_LEN]; + struct zxdh_msg_head msg_head; + }; + union { + uint8_t datainfo[ZXDH_MSG_REQ_BODY_MAX_LEN]; + struct zxdh_vf_init_msg vf_init_msg; + } __rte_packed data; +} __rte_packed; + typedef int (*zxdh_bar_chan_msg_recv_callback)(void *pay_load, uint16_t len, void *reps_buffer, uint16_t *reps_len, void *dev); @@ -253,5 +321,9 @@ int zxdh_bar_chan_sync_msg_send(struct zxdh_pci_bar_msg *in, struct zxdh_msg_recviver_mem *result); int zxdh_bar_irq_recv(uint8_t src, uint8_t dst, uint64_t virt_addr, void *dev); +void zxdh_msg_head_build(struct zxdh_hw *hw, enum zxdh_msg_type type, + struct zxdh_msg_info *msg_info); +int zxdh_vf_send_msg_to_pf(struct rte_eth_dev *dev, void *msg_req, + uint16_t msg_req_len, void *reply, uint16_t reply_len); #endif /* ZXDH_MSG_H */ diff --git a/drivers/net/zxdh/zxdh_np.c b/drivers/net/zxdh/zxdh_np.c index 454252cffc..736994643d 100644 --- a/drivers/net/zxdh/zxdh_np.c +++ b/drivers/net/zxdh/zxdh_np.c @@ -7,6 +7,8 @@ #include <rte_common.h> #include <rte_log.h> +#include <rte_malloc.h> +#include <rte_memcpy.h> #include "zxdh_np.h" #include "zxdh_logs.h" @@ -14,11 +16,14 @@ static uint64_t g_np_bar_offset; static ZXDH_DEV_MGR_T g_dev_mgr; static ZXDH_SDT_MGR_T g_sdt_mgr; +static uint32_t g_dpp_dtb_int_enable; +static uint32_t g_table_type[ZXDH_DEV_CHANNEL_MAX][ZXDH_DEV_SDT_ID_MAX]; ZXDH_PPU_CLS_BITMAP_T g_ppu_cls_bit_map[ZXDH_DEV_CHANNEL_MAX]; ZXDH_DTB_MGR_T *p_dpp_dtb_mgr[ZXDH_DEV_CHANNEL_MAX] = {NULL}; ZXDH_RISCV_DTB_MGR *p_riscv_dtb_queue_mgr[ZXDH_DEV_CHANNEL_MAX] = {NULL}; ZXDH_TLB_MGR_T *g_p_dpp_tlb_mgr[ZXDH_DEV_CHANNEL_MAX] = {NULL}; ZXDH_REG_T g_dpp_reg_info[4]; +ZXDH_DTB_TABLE_T g_dpp_dtb_table_info[4]; #define ZXDH_COMM_ASSERT(x) assert(x) #define ZXDH_SDT_MGR_PTR_GET() (&g_sdt_mgr) @@ -75,6 +80,98 @@ do {\ } \ } while (0) +#define ZXDH_COMM_CHECK_POINT(point)\ +do {\ + if ((point) == NULL) {\ + PMD_DRV_LOG(ERR, "ZXIC %s:%d[Error:POINT NULL] ! 
FUNCTION : %s!",\ + __FILE__, __LINE__, __func__);\ + ZXDH_COMM_ASSERT(0);\ + } \ +} while (0) + + +#define ZXDH_COMM_CHECK_POINT_MEMORY_FREE(point, ptr)\ +do {\ + if ((point) == NULL) {\ + PMD_DRV_LOG(ERR, "ZXIC %s:%d[Error:POINT NULL] !"\ + "FUNCTION : %s!", __FILE__, __LINE__, __func__);\ + rte_free(ptr);\ + ZXDH_COMM_ASSERT(0);\ + } \ +} while (0) + +#define ZXDH_COMM_CHECK_RC_MEMORY_FREE_NO_ASSERT(rc, becall, ptr)\ +do {\ + if ((rc) != 0) {\ + PMD_DRV_LOG(ERR, "ZXICP %s:%d, %s Call"\ + " %s Fail!", __FILE__, __LINE__, __func__, becall);\ + rte_free(ptr);\ + } \ +} while (0) + +#define ZXDH_COMM_CONVERT16(w_data) \ + (((w_data) & 0xff) << 8) + +#define ZXDH_DTB_TAB_UP_VIR_ADDR_GET(DEV_ID, QUEUE_ID, INDEX) \ + ((INDEX) * p_dpp_dtb_mgr[(DEV_ID)]->queue_info[(QUEUE_ID)].tab_up.item_size) + +#define ZXDH_DTB_TAB_DOWN_VIR_ADDR_GET(DEV_ID, QUEUE_ID, INDEX) \ + ((INDEX) * p_dpp_dtb_mgr[(DEV_ID)]->queue_info[(QUEUE_ID)].tab_down.item_size) + +#define ZXDH_DTB_TAB_DOWN_WR_INDEX_GET(DEV_ID, QUEUE_ID) \ + (p_dpp_dtb_mgr[(DEV_ID)]->queue_info[(QUEUE_ID)].tab_down.wr_index) + +#define ZXDH_DTB_TAB_DOWN_PHY_ADDR_GET(DEV_ID, QUEUE_ID, INDEX) \ + (p_dpp_dtb_mgr[(DEV_ID)]->queue_info[(QUEUE_ID)].tab_down.start_phy_addr) + +#define ZXDH_DTB_QUEUE_INIT_FLAG_GET(DEV_ID, QUEUE_ID) \ + (p_dpp_dtb_mgr[(DEV_ID)]->queue_info[(QUEUE_ID)].init_flag) + +static uint32_t +zxdh_np_comm_is_big_endian(void) +{ + ZXDH_ENDIAN_U c_data; + + c_data.a = 1; + + if (c_data.b == 1) + return 0; + else + return 1; +} + +static void +zxdh_np_comm_swap(uint8_t *p_uc_data, uint32_t dw_byte_len) +{ + uint32_t dw_byte_num = 0; + uint8_t uc_byte_mode = 0; + uint32_t uc_is_big_flag = 0; + uint32_t i = 0; + uint16_t *p_w_tmp = NULL; + uint32_t *p_dw_tmp = NULL; + + + p_dw_tmp = (uint32_t *)(p_uc_data); + + uc_is_big_flag = zxdh_np_comm_is_big_endian(); + + if (uc_is_big_flag) + return; + + dw_byte_num = dw_byte_len >> 2; + uc_byte_mode = dw_byte_len % 4 & 0xff; + + for (i = 0; i < dw_byte_num; i++) { + (*p_dw_tmp) = ZXDH_COMM_CONVERT16(*p_dw_tmp); + p_dw_tmp++; + } + + if (uc_byte_mode > 1) { + p_w_tmp = (uint16_t *)(p_dw_tmp); + (*p_w_tmp) = ZXDH_COMM_CONVERT16(*p_w_tmp); + } +} + static uint32_t zxdh_np_dev_init(void) { @@ -835,3 +932,568 @@ zxdh_np_online_uninit(uint32_t dev_id, return 0; } + +static uint32_t +zxdh_np_sdt_tbl_type_get(uint32_t dev_id, uint32_t sdt_no) +{ + return g_table_type[dev_id][sdt_no]; +} + + +static ZXDH_DTB_TABLE_T * +zxdh_np_table_info_get(uint32_t table_type) +{ + return &g_dpp_dtb_table_info[table_type]; +} + +static uint32_t +zxdh_np_dtb_write_table_cmd(uint32_t dev_id, + ZXDH_DTB_TABLE_INFO_E table_type, + void *p_cmd_data, + void *p_cmd_buff) +{ + uint32_t rc = 0; + uint32_t field_cnt = 0; + ZXDH_DTB_TABLE_T *p_table_info; + ZXDH_DTB_FIELD_T *p_field_info = NULL; + uint32_t temp_data = 0; + + ZXDH_COMM_CHECK_POINT(p_cmd_data); + ZXDH_COMM_CHECK_POINT(p_cmd_buff); + p_table_info = zxdh_np_table_info_get(table_type); + p_field_info = p_table_info->p_fields; + ZXDH_COMM_CHECK_DEV_POINT(dev_id, p_table_info); + + for (field_cnt = 0; field_cnt < p_table_info->field_num; field_cnt++) { + temp_data = *((uint32_t *)p_cmd_data + field_cnt) & ZXDH_COMM_GET_BIT_MASK(uint32_t, + p_field_info[field_cnt].len); + + rc = zxdh_np_comm_write_bits_ex((uint8_t *)p_cmd_buff, + ZXDH_DTB_TABLE_CMD_SIZE_BIT, + temp_data, + p_field_info[field_cnt].lsb_pos, + p_field_info[field_cnt].len); + + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxic_comm_write_bits"); + } + + return 0; +} + +static uint32_t 
+zxdh_np_dtb_smmu0_write_entry_data(uint32_t dev_id, + uint32_t mode, + uint32_t addr, + uint32_t *p_data, + ZXDH_DTB_ENTRY_T *p_entry) +{ + uint32_t rc = 0; + ZXDH_DTB_ERAM_TABLE_FORM_T dtb_eram_form_info = {0}; + + dtb_eram_form_info.valid = ZXDH_DTB_TABLE_VALID; + dtb_eram_form_info.type_mode = ZXDH_DTB_TABLE_MODE_ERAM; + dtb_eram_form_info.data_mode = mode; + dtb_eram_form_info.cpu_wr = 1; + dtb_eram_form_info.addr = addr; + dtb_eram_form_info.cpu_rd = 0; + dtb_eram_form_info.cpu_rd_mode = 0; + + if (ZXDH_ERAM128_OPR_128b == mode) { + p_entry->data_in_cmd_flag = 0; + p_entry->data_size = 128 / 8; + + rc = zxdh_np_dtb_write_table_cmd(dev_id, ZXDH_DTB_TABLE_ERAM_128, + &dtb_eram_form_info, p_entry->cmd); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_dtb_write_table_cmd"); + + memcpy(p_entry->data, p_data, 128 / 8); + } else if (ZXDH_ERAM128_OPR_64b == mode) { + p_entry->data_in_cmd_flag = 1; + p_entry->data_size = 64 / 8; + dtb_eram_form_info.data_l = *(p_data + 1); + dtb_eram_form_info.data_h = *(p_data); + + rc = zxdh_np_dtb_write_table_cmd(dev_id, ZXDH_DTB_TABLE_ERAM_64, + &dtb_eram_form_info, p_entry->cmd); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_dtb_write_table_cmd"); + + } else if (ZXDH_ERAM128_OPR_1b == mode) { + p_entry->data_in_cmd_flag = 1; + p_entry->data_size = 1; + dtb_eram_form_info.data_h = *(p_data); + + rc = zxdh_np_dtb_write_table_cmd(dev_id, ZXDH_DTB_TABLE_ERAM_1, + &dtb_eram_form_info, p_entry->cmd); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_dtb_write_table_cmd"); + } + + return 0; +} + +static uint32_t +zxdh_np_dtb_se_smmu0_ind_write(uint32_t dev_id, + uint32_t base_addr, + uint32_t index, + uint32_t wrt_mode, + uint32_t *p_data, + ZXDH_DTB_ENTRY_T *p_entry) +{ + uint32_t rc = 0; + uint32_t temp_idx = 0; + uint32_t dtb_ind_addr = 0; + + switch (wrt_mode) { + case ZXDH_ERAM128_OPR_128b: + { + if ((0xFFFFFFFF - (base_addr)) < (index)) { + PMD_DRV_LOG(ERR, "ICM %s:%d[Error:VALUE[val0=0x%x]" + "INVALID] [val1=0x%x] ! 
FUNCTION :%s !", __FILE__, __LINE__, + base_addr, index, __func__); + + return ZXDH_PAR_CHK_INVALID_INDEX; + } + if (base_addr + index > ZXDH_SE_SMMU0_ERAM_ADDR_NUM_TOTAL - 1) { + PMD_DRV_LOG(ERR, "dpp_se_smmu0_ind_write : index out of range !"); + return 1; + } + temp_idx = index << 7; + break; + } + + case ZXDH_ERAM128_OPR_64b: + { + if ((base_addr + (index >> 1)) > ZXDH_SE_SMMU0_ERAM_ADDR_NUM_TOTAL - 1) { + PMD_DRV_LOG(ERR, "dpp_se_smmu0_ind_write : index out of range !"); + return 1; + } + temp_idx = index << 6; + break; + } + + case ZXDH_ERAM128_OPR_1b: + { + if ((base_addr + (index >> 7)) > ZXDH_SE_SMMU0_ERAM_ADDR_NUM_TOTAL - 1) { + PMD_DRV_LOG(ERR, "dpp_se_smmu0_ind_write : index out of range !"); + return 1; + } + + temp_idx = index; + } + } + + dtb_ind_addr = ((base_addr << 7) & ZXDH_ERAM128_BADDR_MASK) + temp_idx; + + PMD_DRV_LOG(INFO, " dtb eram item 1bit addr: 0x%x", dtb_ind_addr); + + rc = zxdh_np_dtb_smmu0_write_entry_data(dev_id, + wrt_mode, + dtb_ind_addr, + p_data, + p_entry); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_dtb_smmu0_write_entry_data"); + + return 0; +} + +static uint32_t +zxdh_np_eram_dtb_len_get(uint32_t mode) +{ + uint32_t dtb_len = 0; + + switch (mode) { + case ZXDH_ERAM128_OPR_128b: + { + dtb_len += 2; + break; + } + case ZXDH_ERAM128_OPR_64b: + case ZXDH_ERAM128_OPR_1b: + { + dtb_len += 1; + break; + } + default: + break; + } + + return dtb_len; +} + +static uint32_t +zxdh_np_dtb_eram_one_entry(uint32_t dev_id, + uint32_t sdt_no, + uint32_t del_en, + void *pdata, + uint32_t *p_dtb_len, + ZXDH_DTB_ENTRY_T *p_dtb_one_entry) +{ + uint32_t rc = 0; + uint32_t base_addr = 0; + uint32_t index = 0; + uint32_t opr_mode = 0; + uint32_t buff[ZXDH_SMMU0_READ_REG_MAX_NUM] = {0}; + + ZXDH_SDTTBL_ERAM_T sdt_eram = {0}; + ZXDH_DTB_ERAM_ENTRY_INFO_T *peramdata = NULL; + + ZXDH_COMM_CHECK_POINT(pdata); + ZXDH_COMM_CHECK_POINT(p_dtb_one_entry); + ZXDH_COMM_CHECK_POINT(p_dtb_len); + + peramdata = (ZXDH_DTB_ERAM_ENTRY_INFO_T *)pdata; + index = peramdata->index; + base_addr = sdt_eram.eram_base_addr; + opr_mode = sdt_eram.eram_mode; + + switch (opr_mode) { + case ZXDH_ERAM128_TBL_128b: + { + opr_mode = ZXDH_ERAM128_OPR_128b; + break; + } + case ZXDH_ERAM128_TBL_64b: + { + opr_mode = ZXDH_ERAM128_OPR_64b; + break; + } + + case ZXDH_ERAM128_TBL_1b: + { + opr_mode = ZXDH_ERAM128_OPR_1b; + break; + } + } + + if (del_en) { + memset((uint8_t *)buff, 0, sizeof(buff)); + rc = zxdh_np_dtb_se_smmu0_ind_write(dev_id, + base_addr, + index, + opr_mode, + buff, + p_dtb_one_entry); + ZXDH_COMM_CHECK_DEV_RC(sdt_no, rc, "zxdh_dtb_se_smmu0_ind_write"); + } else { + rc = zxdh_np_dtb_se_smmu0_ind_write(dev_id, + base_addr, + index, + opr_mode, + peramdata->p_data, + p_dtb_one_entry); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rc, "zxdh_dtb_se_smmu0_ind_write"); + } + *p_dtb_len = zxdh_np_eram_dtb_len_get(opr_mode); + + return 0; +} + +static uint32_t +zxdh_np_dtb_data_write(uint8_t *p_data_buff, + uint32_t addr_offset, + ZXDH_DTB_ENTRY_T *entry) +{ + ZXDH_COMM_CHECK_POINT(p_data_buff); + ZXDH_COMM_CHECK_POINT(entry); + + uint8_t *p_cmd = p_data_buff + addr_offset; + uint32_t cmd_size = ZXDH_DTB_TABLE_CMD_SIZE_BIT / 8; + + uint8_t *p_data = p_cmd + cmd_size; + uint32_t data_size = entry->data_size; + + uint8_t *cmd = (uint8_t *)entry->cmd; + uint8_t *data = (uint8_t *)entry->data; + + rte_memcpy(p_cmd, cmd, cmd_size); + + if (!entry->data_in_cmd_flag) { + zxdh_np_comm_swap(data, data_size); + rte_memcpy(p_data, data, data_size); + } + + return 0; +} + +static uint32_t +zxdh_np_dtb_queue_enable_get(uint32_t 
dev_id, + uint32_t queue_id, + uint32_t *enable) +{ + uint32_t rc = 0; + ZXDH_DTB_QUEUE_VM_INFO_T vm_info = {0}; + + rc = zxdh_np_dtb_queue_vm_info_get(dev_id, queue_id, &vm_info); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_dtb_queue_vm_info_get"); + + *enable = vm_info.queue_en; + return rc; +} + +static uint32_t +zxdh_np_dtb_item_buff_wr(uint32_t dev_id, + uint32_t queue_id, + uint32_t dir_flag, + uint32_t index, + uint32_t pos, + uint32_t len, + uint32_t *p_data) +{ + uint64_t addr = 0; + + if (dir_flag == 1) + addr = ZXDH_DTB_TAB_UP_VIR_ADDR_GET(dev_id, queue_id, index) + + ZXDH_DTB_ITEM_ACK_SIZE + pos * 4; + else + addr = ZXDH_DTB_TAB_DOWN_VIR_ADDR_GET(dev_id, queue_id, index) + + ZXDH_DTB_ITEM_ACK_SIZE + pos * 4; + + memcpy((uint8_t *)(addr), p_data, len * 4); + + return 0; +} + +static uint32_t +zxdh_np_dtb_item_ack_rd(uint32_t dev_id, + uint32_t queue_id, + uint32_t dir_flag, + uint32_t index, + uint32_t pos, + uint32_t *p_data) +{ + uint64_t addr = 0; + uint32_t val = 0; + + if (dir_flag == 1) + addr = ZXDH_DTB_TAB_UP_VIR_ADDR_GET(dev_id, queue_id, index) + pos * 4; + else + addr = ZXDH_DTB_TAB_DOWN_VIR_ADDR_GET(dev_id, queue_id, index) + pos * 4; + + val = *((volatile uint32_t *)(addr)); + + *p_data = val; + + return 0; +} + +static uint32_t +zxdh_np_dtb_item_ack_wr(uint32_t dev_id, + uint32_t queue_id, + uint32_t dir_flag, + uint32_t index, + uint32_t pos, + uint32_t data) +{ + uint64_t addr = 0; + + if (dir_flag == 1) + addr = ZXDH_DTB_TAB_UP_VIR_ADDR_GET(dev_id, queue_id, index) + pos * 4; + else + addr = ZXDH_DTB_TAB_DOWN_VIR_ADDR_GET(dev_id, queue_id, index) + pos * 4; + + *((volatile uint32_t *)(addr)) = data; + + return 0; +} + +static uint32_t +zxdh_np_dtb_queue_item_info_set(uint32_t dev_id, + uint32_t queue_id, + ZXDH_DTB_QUEUE_ITEM_INFO_T *p_item_info) +{ + uint32_t rc = 0; + ZXDH_DTB_QUEUE_LEN_T dtb_len = {0}; + + dtb_len.cfg_dtb_cmd_type = p_item_info->cmd_type; + dtb_len.cfg_dtb_cmd_int_en = p_item_info->int_en; + dtb_len.cfg_queue_dtb_len = p_item_info->data_len; + + rc = zxdh_np_reg_write(dev_id, ZXDH_DTB_CFG_QUEUE_DTB_LEN, + 0, queue_id, (void *)&dtb_len); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rc, "dpp_reg_write"); + return 0; +} + +static uint32_t +zxdh_np_dtb_tab_down_info_set(uint32_t dev_id, + uint32_t queue_id, + uint32_t int_flag, + uint32_t data_len, + uint32_t *p_data, + uint32_t *p_item_index) +{ + uint32_t rc = 0; + uint32_t i = 0; + uint32_t queue_en = 0; + uint32_t ack_vale = 0; + uint32_t item_index = 0; + uint32_t unused_item_num = 0; + ZXDH_DTB_QUEUE_ITEM_INFO_T item_info = {0}; + + if (ZXDH_DTB_QUEUE_INIT_FLAG_GET(dev_id, queue_id) == 0) { + PMD_DRV_LOG(ERR, "dtb queue %d is not init.", queue_id); + return ZXDH_RC_DTB_QUEUE_IS_NOT_INIT; + } + + if (data_len % 4 != 0) + return ZXDH_RC_DTB_PARA_INVALID; + + rc = zxdh_np_dtb_queue_enable_get(dev_id, queue_id, &queue_en); + if (!queue_en) { + PMD_DRV_LOG(ERR, "the queue %d is not enable!,rc=%d", queue_id, rc); + return ZXDH_RC_DTB_QUEUE_NOT_ENABLE; + } + + rc = zxdh_np_dtb_queue_unused_item_num_get(dev_id, queue_id, &unused_item_num); + if (unused_item_num == 0) + return ZXDH_RC_DTB_QUEUE_ITEM_HW_EMPTY; + + for (i = 0; i < ZXDH_DTB_QUEUE_ITEM_NUM_MAX; i++) { + item_index = ZXDH_DTB_TAB_DOWN_WR_INDEX_GET(dev_id, queue_id) % + ZXDH_DTB_QUEUE_ITEM_NUM_MAX; + + rc = zxdh_np_dtb_item_ack_rd(dev_id, queue_id, 0, + item_index, 0, &ack_vale); + + ZXDH_DTB_TAB_DOWN_WR_INDEX_GET(dev_id, queue_id)++; + + if ((ack_vale >> 8) == ZXDH_DTB_TAB_ACK_UNUSED_MASK) + break; + } + + if (i == ZXDH_DTB_QUEUE_ITEM_NUM_MAX) + return 
ZXDH_RC_DTB_QUEUE_ITEM_SW_EMPTY; + + rc = zxdh_np_dtb_item_buff_wr(dev_id, queue_id, 0, + item_index, 0, data_len, p_data); + + rc = zxdh_np_dtb_item_ack_wr(dev_id, queue_id, 0, + item_index, 0, ZXDH_DTB_TAB_ACK_IS_USING_MASK); + + item_info.cmd_vld = 1; + item_info.cmd_type = 0; + item_info.int_en = int_flag; + item_info.data_len = data_len / 4; + item_info.data_hddr = ((ZXDH_DTB_TAB_DOWN_PHY_ADDR_GET(dev_id, queue_id, + item_index) >> 4) >> 32) & 0xffffffff; + item_info.data_laddr = (ZXDH_DTB_TAB_DOWN_PHY_ADDR_GET(dev_id, queue_id, + item_index) >> 4) & 0xffffffff; + + rc = zxdh_np_dtb_queue_item_info_set(dev_id, queue_id, &item_info); + *p_item_index = item_index; + + return 0; +} + +static uint32_t +zxdh_np_dtb_write_down_table_data(uint32_t dev_id, + uint32_t queue_id, + uint32_t down_table_len, + uint8_t *p_down_table_buff, + uint32_t *p_element_id) +{ + uint32_t rc = 0; + uint32_t dtb_interrupt_status = 0; + + dtb_interrupt_status = g_dpp_dtb_int_enable; + + rc = zxdh_np_dtb_tab_down_info_set(dev_id, + queue_id, + dtb_interrupt_status, + down_table_len / 4, + (uint32_t *)p_down_table_buff, + p_element_id); + return rc; +} + +int +zxdh_np_dtb_table_entry_write(uint32_t dev_id, + uint32_t queue_id, + uint32_t entrynum, + ZXDH_DTB_USER_ENTRY_T *down_entries) +{ + uint32_t rc = 0; + uint32_t entry_index = 0; + uint32_t sdt_no = 0; + uint32_t tbl_type = 0; + uint32_t element_id = 0xff; + uint32_t one_dtb_len = 0; + uint32_t dtb_len = 0; + uint32_t addr_offset = 0; + uint32_t max_size = 0; + uint8_t *p_data_buff = NULL; + + uint8_t *p_data_buff_ex = NULL; + ZXDH_DTB_LPM_ENTRY_T lpm_entry = {0}; + + uint8_t entry_cmd[ZXDH_DTB_TABLE_CMD_SIZE_BIT] = {0}; + uint8_t entry_data[ZXDH_ETCAM_WIDTH_MAX] = {0}; + ZXDH_DTB_USER_ENTRY_T *pentry = NULL; + ZXDH_DTB_ENTRY_T dtb_one_entry = {0}; + + p_data_buff = (uint8_t *)rte_malloc(NULL, ZXDH_DTB_TABLE_DATA_BUFF_SIZE, 0); + ZXDH_COMM_CHECK_POINT(p_data_buff); + memset(p_data_buff, 0, ZXDH_DTB_TABLE_DATA_BUFF_SIZE); + + p_data_buff_ex = (uint8_t *)rte_malloc(NULL, ZXDH_DTB_TABLE_DATA_BUFF_SIZE, 0); + ZXDH_COMM_CHECK_POINT_MEMORY_FREE(p_data_buff_ex, p_data_buff); + memset(p_data_buff_ex, 0, ZXDH_DTB_TABLE_DATA_BUFF_SIZE); + + memset((uint8_t *)&lpm_entry, 0x0, sizeof(ZXDH_DTB_LPM_ENTRY_T)); + memset((uint8_t *)&dtb_one_entry, 0x0, sizeof(ZXDH_DTB_ENTRY_T)); + memset(entry_cmd, 0x0, sizeof(entry_cmd)); + memset(entry_data, 0x0, sizeof(entry_data)); + dtb_one_entry.cmd = entry_cmd; + dtb_one_entry.data = entry_data; + + max_size = (ZXDH_DTB_TABLE_DATA_BUFF_SIZE / 16) - 1; + + for (entry_index = 0; entry_index < entrynum; entry_index++) { + pentry = down_entries + entry_index; + sdt_no = pentry->sdt_no; + tbl_type = zxdh_np_sdt_tbl_type_get(dev_id, sdt_no); + switch (tbl_type) { + case ZXDH_SDT_TBLT_ERAM: + { + rc = zxdh_np_dtb_eram_one_entry(dev_id, sdt_no, ZXDH_DTB_ITEM_ADD_OR_UPDATE, + pentry->p_entry_data, &one_dtb_len, &dtb_one_entry); + break; + } + default: + { + PMD_DRV_LOG(ERR, "SDT table_type[ %d ] is invalid!", tbl_type); + rte_free(p_data_buff); + rte_free(p_data_buff_ex); + return 1; + } + } + + addr_offset = dtb_len * ZXDH_DTB_LEN_POS_SETP; + dtb_len += one_dtb_len; + if (dtb_len > max_size) { + rte_free(p_data_buff); + rte_free(p_data_buff_ex); + PMD_DRV_LOG(ERR, " %s error dtb_len>%u!", __func__, + max_size); + return ZXDH_RC_DTB_DOWN_LEN_INVALID; + } + rc = zxdh_np_dtb_data_write(p_data_buff, addr_offset, &dtb_one_entry); + memset(entry_cmd, 0x0, sizeof(entry_cmd)); + memset(entry_data, 0x0, sizeof(entry_data)); + } + + if (dtb_len == 0) { 
+ rte_free(p_data_buff); + rte_free(p_data_buff_ex); + return ZXDH_RC_DTB_DOWN_LEN_INVALID; + } + + rc = zxdh_np_dtb_write_down_table_data(dev_id, + queue_id, + dtb_len * 16, + p_data_buff, + &element_id); + rte_free(p_data_buff); + rte_free(p_data_buff_ex); + + return rc; +} diff --git a/drivers/net/zxdh/zxdh_np.h b/drivers/net/zxdh/zxdh_np.h index dc0e867827..40961c02a2 100644 --- a/drivers/net/zxdh/zxdh_np.h +++ b/drivers/net/zxdh/zxdh_np.h @@ -7,6 +7,8 @@ #include <stdint.h> +#define ZXDH_DISABLE (0) +#define ZXDH_ENABLE (1) #define ZXDH_PORT_NAME_MAX (32) #define ZXDH_DEV_CHANNEL_MAX (2) #define ZXDH_DEV_SDT_ID_MAX (256U) @@ -52,6 +54,94 @@ #define ZXDH_ACL_TBL_ID_NUM (8U) #define ZXDH_ACL_BLOCK_NUM (8U) +#define ZXDH_SMMU0_READ_REG_MAX_NUM (4) + +#define ZXDH_DTB_ITEM_ACK_SIZE (16) +#define ZXDH_DTB_ITEM_BUFF_SIZE (16 * 1024) +#define ZXDH_DTB_ITEM_SIZE (16 + 16 * 1024) +#define ZXDH_DTB_TAB_UP_SIZE ((16 + 16 * 1024) * 32) +#define ZXDH_DTB_TAB_DOWN_SIZE ((16 + 16 * 1024) * 32) + +#define ZXDH_DTB_TAB_UP_ACK_VLD_MASK (0x555555) +#define ZXDH_DTB_TAB_DOWN_ACK_VLD_MASK (0x5a5a5a) +#define ZXDH_DTB_TAB_ACK_IS_USING_MASK (0x11111100) +#define ZXDH_DTB_TAB_ACK_UNUSED_MASK (0x0) +#define ZXDH_DTB_TAB_ACK_SUCCESS_MASK (0xff) +#define ZXDH_DTB_TAB_ACK_FAILED_MASK (0x1) +#define ZXDH_DTB_TAB_ACK_CHECK_VALUE (0x12345678) + +#define ZXDH_DTB_TAB_ACK_VLD_SHIFT (104) +#define ZXDH_DTB_TAB_ACK_STATUS_SHIFT (96) +#define ZXDH_DTB_LEN_POS_SETP (16) +#define ZXDH_DTB_ITEM_ADD_OR_UPDATE (0) +#define ZXDH_DTB_ITEM_DELETE (1) + +#define ZXDH_ETCAM_LEN_SIZE (6) +#define ZXDH_ETCAM_BLOCK_NUM (8) +#define ZXDH_ETCAM_TBLID_NUM (8) +#define ZXDH_ETCAM_RAM_NUM (8) +#define ZXDH_ETCAM_RAM_WIDTH (80U) +#define ZXDH_ETCAM_WR_MASK_MAX (((uint32_t)1 << ZXDH_ETCAM_RAM_NUM) - 1) +#define ZXDH_ETCAM_WIDTH_MIN (ZXDH_ETCAM_RAM_WIDTH) +#define ZXDH_ETCAM_WIDTH_MAX (ZXDH_ETCAM_RAM_NUM * ZXDH_ETCAM_RAM_WIDTH) + +#define ZXDH_DTB_TABLE_DATA_BUFF_SIZE (16384) +#define ZXDH_DTB_TABLE_CMD_SIZE_BIT (128) + +#define ZXDH_SE_SMMU0_ERAM_BLOCK_NUM (32) +#define ZXDH_SE_SMMU0_ERAM_ADDR_NUM_PER_BLOCK (0x4000) +#define ZXDH_SE_SMMU0_ERAM_ADDR_NUM_TOTAL \ + (ZXDH_SE_SMMU0_ERAM_BLOCK_NUM * ZXDH_SE_SMMU0_ERAM_ADDR_NUM_PER_BLOCK) + +/**errco code */ +#define ZXDH_RC_BASE (0x1000U) +#define ZXDH_PARAMETER_CHK_BASE (ZXDH_RC_BASE | 0x200) +#define ZXDH_PAR_CHK_POINT_NULL (ZXDH_PARAMETER_CHK_BASE | 0x001) +#define ZXDH_PAR_CHK_ARGIN_ZERO (ZXDH_PARAMETER_CHK_BASE | 0x002) +#define ZXDH_PAR_CHK_ARGIN_OVERFLOW (ZXDH_PARAMETER_CHK_BASE | 0x003) +#define ZXDH_PAR_CHK_ARGIN_ERROR (ZXDH_PARAMETER_CHK_BASE | 0x004) +#define ZXDH_PAR_CHK_INVALID_INDEX (ZXDH_PARAMETER_CHK_BASE | 0x005) +#define ZXDH_PAR_CHK_INVALID_RANGE (ZXDH_PARAMETER_CHK_BASE | 0x006) +#define ZXDH_PAR_CHK_INVALID_DEV_ID (ZXDH_PARAMETER_CHK_BASE | 0x007) +#define ZXDH_PAR_CHK_INVALID_PARA (ZXDH_PARAMETER_CHK_BASE | 0x008) + +#define ZXDH_ERAM128_BADDR_MASK (0x3FFFF80) + +#define ZXDH_DTB_TABLE_MODE_ERAM (0) +#define ZXDH_DTB_TABLE_MODE_DDR (1) +#define ZXDH_DTB_TABLE_MODE_ZCAM (2) +#define ZXDH_DTB_TABLE_MODE_ETCAM (3) +#define ZXDH_DTB_TABLE_MODE_MC_HASH (4) +#define ZXDH_DTB_TABLE_VALID (1) + +/* DTB module error code */ +#define ZXDH_RC_DTB_BASE (0xd00) +#define ZXDH_RC_DTB_MGR_EXIST (ZXDH_RC_DTB_BASE | 0x0) +#define ZXDH_RC_DTB_MGR_NOT_EXIST (ZXDH_RC_DTB_BASE | 0x1) +#define ZXDH_RC_DTB_QUEUE_RES_EMPTY (ZXDH_RC_DTB_BASE | 0x2) +#define ZXDH_RC_DTB_QUEUE_BUFF_SIZE_ERR (ZXDH_RC_DTB_BASE | 0x3) +#define ZXDH_RC_DTB_QUEUE_ITEM_HW_EMPTY (ZXDH_RC_DTB_BASE | 0x4) +#define 
ZXDH_RC_DTB_QUEUE_ITEM_SW_EMPTY (ZXDH_RC_DTB_BASE | 0x5) +#define ZXDH_RC_DTB_TAB_UP_BUFF_EMPTY (ZXDH_RC_DTB_BASE | 0x6) +#define ZXDH_RC_DTB_TAB_DOWN_BUFF_EMPTY (ZXDH_RC_DTB_BASE | 0x7) +#define ZXDH_RC_DTB_TAB_UP_TRANS_ERR (ZXDH_RC_DTB_BASE | 0x8) +#define ZXDH_RC_DTB_TAB_DOWN_TRANS_ERR (ZXDH_RC_DTB_BASE | 0x9) +#define ZXDH_RC_DTB_QUEUE_IS_WORKING (ZXDH_RC_DTB_BASE | 0xa) +#define ZXDH_RC_DTB_QUEUE_IS_NOT_INIT (ZXDH_RC_DTB_BASE | 0xb) +#define ZXDH_RC_DTB_MEMORY_ALLOC_ERR (ZXDH_RC_DTB_BASE | 0xc) +#define ZXDH_RC_DTB_PARA_INVALID (ZXDH_RC_DTB_BASE | 0xd) +#define ZXDH_RC_DMA_RANGE_INVALID (ZXDH_RC_DTB_BASE | 0xe) +#define ZXDH_RC_DMA_RCV_DATA_EMPTY (ZXDH_RC_DTB_BASE | 0xf) +#define ZXDH_RC_DTB_LPM_INSERT_FAIL (ZXDH_RC_DTB_BASE | 0x10) +#define ZXDH_RC_DTB_LPM_DELETE_FAIL (ZXDH_RC_DTB_BASE | 0x11) +#define ZXDH_RC_DTB_DOWN_LEN_INVALID (ZXDH_RC_DTB_BASE | 0x12) +#define ZXDH_RC_DTB_DOWN_HASH_CONFLICT (ZXDH_RC_DTB_BASE | 0x13) +#define ZXDH_RC_DTB_QUEUE_NOT_ALLOC (ZXDH_RC_DTB_BASE | 0x14) +#define ZXDH_RC_DTB_QUEUE_NAME_ERROR (ZXDH_RC_DTB_BASE | 0x15) +#define ZXDH_RC_DTB_DUMP_SIZE_SMALL (ZXDH_RC_DTB_BASE | 0x16) +#define ZXDH_RC_DTB_SEARCH_VPORT_QUEUE_ZERO (ZXDH_RC_DTB_BASE | 0x17) +#define ZXDH_RC_DTB_QUEUE_NOT_ENABLE (ZXDH_RC_DTB_BASE | 0x18) + typedef enum zxdh_module_init_e { ZXDH_MODULE_INIT_NPPU = 0, ZXDH_MODULE_INIT_PPU, @@ -299,7 +389,127 @@ typedef struct zxdh_tlb_mgr_t { uint32_t pa_width; } ZXDH_TLB_MGR_T; +typedef enum zxdh_eram128_tbl_mode_e { + ZXDH_ERAM128_TBL_1b = 0, + ZXDH_ERAM128_TBL_32b = 1, + ZXDH_ERAM128_TBL_64b = 2, + ZXDH_ERAM128_TBL_128b = 3, + ZXDH_ERAM128_TBL_2b = 4, + ZXDH_ERAM128_TBL_4b = 5, + ZXDH_ERAM128_TBL_8b = 6, + ZXDH_ERAM128_TBL_16b = 7 +} ZXDH_ERAM128_TBL_MODE_E; + +typedef enum zxdh_eram128_opr_mode_e { + ZXDH_ERAM128_OPR_128b = 0, + ZXDH_ERAM128_OPR_64b = 1, + ZXDH_ERAM128_OPR_1b = 2, + ZXDH_ERAM128_OPR_32b = 3 + +} ZXDH_ERAM128_OPR_MODE_E; + +typedef enum zxdh_dtb_table_info_e { + ZXDH_DTB_TABLE_DDR = 0, + ZXDH_DTB_TABLE_ERAM_1 = 1, + ZXDH_DTB_TABLE_ERAM_64 = 2, + ZXDH_DTB_TABLE_ERAM_128 = 3, + ZXDH_DTB_TABLE_ZCAM = 4, + ZXDH_DTB_TABLE_ETCAM = 5, + ZXDH_DTB_TABLE_MC_HASH = 6, + ZXDH_DTB_TABLE_ENUM_MAX +} ZXDH_DTB_TABLE_INFO_E; + +typedef enum zxdh_sdt_table_type_e { + ZXDH_SDT_TBLT_INVALID = 0, + ZXDH_SDT_TBLT_ERAM = 1, + ZXDH_SDT_TBLT_DDR3 = 2, + ZXDH_SDT_TBLT_HASH = 3, + ZXDH_SDT_TBLT_LPM = 4, + ZXDH_SDT_TBLT_ETCAM = 5, + ZXDH_SDT_TBLT_PORTTBL = 6, + ZXDH_SDT_TBLT_MAX = 7, +} ZXDH_SDT_TABLE_TYPE_E; + +typedef struct zxdh_dtb_lpm_entry_t { + uint32_t dtb_len0; + uint8_t *p_data_buff0; + uint32_t dtb_len1; + uint8_t *p_data_buff1; +} ZXDH_DTB_LPM_ENTRY_T; + +typedef struct zxdh_dtb_entry_t { + uint8_t *cmd; + uint8_t *data; + uint32_t data_in_cmd_flag; + uint32_t data_size; +} ZXDH_DTB_ENTRY_T; + +typedef struct zxdh_dtb_eram_table_form_t { + uint32_t valid; + uint32_t type_mode; + uint32_t data_mode; + uint32_t cpu_wr; + uint32_t cpu_rd; + uint32_t cpu_rd_mode; + uint32_t addr; + uint32_t data_h; + uint32_t data_l; +} ZXDH_DTB_ERAM_TABLE_FORM_T; + +typedef struct zxdh_sdt_tbl_eram_t { + uint32_t table_type; + uint32_t eram_mode; + uint32_t eram_base_addr; + uint32_t eram_table_depth; + uint32_t eram_clutch_en; +} ZXDH_SDTTBL_ERAM_T; + +typedef union zxdh_endian_u { + unsigned int a; + unsigned char b; +} ZXDH_ENDIAN_U; + +typedef struct zxdh_dtb_field_t { + const char *p_name; + uint16_t lsb_pos; + uint16_t len; +} ZXDH_DTB_FIELD_T; + +typedef struct zxdh_dtb_table_t { + const char *table_type; + uint32_t table_no; + uint32_t field_num; + 
ZXDH_DTB_FIELD_T *p_fields; +} ZXDH_DTB_TABLE_T; + +typedef struct zxdh_dtb_queue_item_info_t { + uint32_t cmd_vld; + uint32_t cmd_type; + uint32_t int_en; + uint32_t data_len; + uint32_t data_laddr; + uint32_t data_hddr; +} ZXDH_DTB_QUEUE_ITEM_INFO_T; + +typedef struct zxdh_dtb_queue_len_t { + uint32_t cfg_dtb_cmd_type; + uint32_t cfg_dtb_cmd_int_en; + uint32_t cfg_queue_dtb_len; +} ZXDH_DTB_QUEUE_LEN_T; + +typedef struct zxdh_dtb_eram_entry_info_t { + uint32_t index; + uint32_t *p_data; +} ZXDH_DTB_ERAM_ENTRY_INFO_T; + +typedef struct zxdh_dtb_user_entry_t { + uint32_t sdt_no; + void *p_entry_data; +} ZXDH_DTB_USER_ENTRY_T; + int zxdh_np_host_init(uint32_t dev_id, ZXDH_DEV_INIT_CTRL_T *p_dev_init_ctrl); int zxdh_np_online_uninit(uint32_t dev_id, char *port_name, uint32_t queue_id); +int zxdh_np_dtb_table_entry_write(uint32_t dev_id, uint32_t queue_id, + uint32_t entrynum, ZXDH_DTB_USER_ENTRY_T *down_entries); #endif /* ZXDH_NP_H */ diff --git a/drivers/net/zxdh/zxdh_pci.h b/drivers/net/zxdh/zxdh_pci.h index d6487a574f..e3f13cb17d 100644 --- a/drivers/net/zxdh/zxdh_pci.h +++ b/drivers/net/zxdh/zxdh_pci.h @@ -12,6 +12,8 @@ #include "zxdh_ethdev.h" +#define ZXDH_PF_PCIE_ID(pcie_id) (((pcie_id) & 0xff00) | 1 << 11) + enum zxdh_msix_status { ZXDH_MSIX_NONE = 0, ZXDH_MSIX_DISABLED = 1, diff --git a/drivers/net/zxdh/zxdh_tables.c b/drivers/net/zxdh/zxdh_tables.c new file mode 100644 index 0000000000..4284fefe3a --- /dev/null +++ b/drivers/net/zxdh/zxdh_tables.c @@ -0,0 +1,104 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#include "zxdh_ethdev.h" +#include "zxdh_msg.h" +#include "zxdh_np.h" +#include "zxdh_tables.h" +#include "zxdh_logs.h" + +#define ZXDH_SDT_VPORT_ATT_TABLE 1 +#define ZXDH_SDT_PANEL_ATT_TABLE 2 + +int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr) +{ + int ret = 0; + + ZXDH_DTB_ERAM_ENTRY_INFO_T entry = {vfid, (uint32_t *)port_attr}; + ZXDH_DTB_USER_ENTRY_T user_entry_write = {ZXDH_SDT_VPORT_ATT_TABLE, (void *)&entry}; + + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &user_entry_write); + if (ret != 0) + PMD_DRV_LOG(ERR, "write vport_att failed vfid:%d failed", vfid); + + return ret; +} + +int +zxdh_port_attr_init(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_port_attr_table port_attr = {0}; + struct zxdh_msg_info msg_info = {0}; + int ret = 0; + + if (hw->is_pf) { + port_attr.hit_flag = 1; + port_attr.phy_port = hw->phyport; + port_attr.pf_vfid = zxdh_vport_to_vfid(hw->vport); + port_attr.rss_enable = 0; + if (!hw->is_pf) + port_attr.is_vf = 1; + + port_attr.mtu = dev->data->mtu; + port_attr.mtu_enable = 1; + port_attr.is_up = 0; + if (!port_attr.rss_enable) + port_attr.port_base_qid = 0; + + ret = zxdh_set_port_attr(hw->vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "write port_attr failed"); + return -EAGAIN; + } + } else { + struct zxdh_vf_init_msg *vf_init_msg = &msg_info.data.vf_init_msg; + + zxdh_msg_head_build(hw, ZXDH_VF_PORT_INIT, &msg_info); + msg_info.msg_head.msg_type = ZXDH_VF_PORT_INIT; + vf_init_msg->link_up = 1; + vf_init_msg->base_qid = 0; + vf_init_msg->rss_enable = 0; + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "vf port_init failed"); + return -EAGAIN; + } + } + return ret; +}; + +int zxdh_panel_table_init(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + if (!hw->is_pf) + return 0; + + struct zxdh_panel_table panel; + 
+ memset(&panel, 0, sizeof(panel)); + panel.hit_flag = 1; + panel.pf_vfid = zxdh_vport_to_vfid(hw->vport); + panel.mtu_enable = 1; + panel.mtu = dev->data->mtu; + + ZXDH_DTB_ERAM_ENTRY_INFO_T panel_entry = { + .index = hw->phyport, + .p_data = (uint32_t *)&panel + }; + ZXDH_DTB_USER_ENTRY_T entry = { + .sdt_no = ZXDH_SDT_PANEL_ATT_TABLE, + .p_entry_data = (void *)&panel_entry + }; + int ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, g_dtb_data.queueid, 1, &entry); + + if (ret) { + PMD_DRV_LOG(ERR, "Insert eram-panel failed, code:%u", ret); + return -EAGAIN; + } + + return ret; +} diff --git a/drivers/net/zxdh/zxdh_tables.h b/drivers/net/zxdh/zxdh_tables.h new file mode 100644 index 0000000000..5d34af2f05 --- /dev/null +++ b/drivers/net/zxdh/zxdh_tables.h @@ -0,0 +1,148 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#ifndef ZXDH_TABLES_H +#define ZXDH_TABLES_H + +#include <stdint.h> + +extern struct zxdh_dtb_shared_data g_dtb_data; + +#define ZXDH_DEVICE_NO 0 + +struct zxdh_port_attr_table { +#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN + uint8_t byte4_rsv1: 1; + uint8_t ingress_meter_enable: 1; + uint8_t egress_meter_enable: 1; + uint8_t byte4_rsv2: 2; + uint8_t fd_enable: 1; + uint8_t vepa_enable: 1; + uint8_t spoof_check_enable: 1; + + uint8_t inline_sec_offload: 1; + uint8_t ovs_enable: 1; + uint8_t lag_enable: 1; + uint8_t is_passthrough: 1; + uint8_t is_vf: 1; + uint8_t virtion_version: 2; + uint8_t virtio_enable: 1; + + uint8_t accelerator_offload_flag: 1; + uint8_t lro_offload: 1; + uint8_t ip_fragment_offload: 1; + uint8_t tcp_udp_checksum_offload: 1; + uint8_t ip_checksum_offload: 1; + uint8_t outer_ip_checksum_offload: 1; + uint8_t is_up: 1; + uint8_t rsv1: 1; + + uint8_t rsv3 : 1; + uint8_t rdma_offload_enable: 1; + uint8_t vlan_filter_enable: 1; + uint8_t vlan_strip_offload: 1; + uint8_t qinq_valn_strip_offload: 1; + uint8_t rss_enable: 1; + uint8_t mtu_enable: 1; + uint8_t hit_flag: 1; + + uint16_t mtu; + + uint16_t port_base_qid : 12; + uint16_t hash_search_index : 3; + uint16_t rsv: 1; + + uint8_t rss_hash_factor; + + uint8_t hash_alg: 4; + uint8_t phy_port: 4; + + uint16_t lag_id : 3; + uint16_t pf_vfid : 11; + uint16_t ingress_tm_enable : 1; + uint16_t egress_tm_enable : 1; + + uint16_t tpid; + + uint16_t vhca : 10; + uint16_t uplink_port : 6; +#else + uint8_t rsv3 : 1; + uint8_t rdma_offload_enable: 1; + uint8_t vlan_filter_enable: 1; + uint8_t vlan_strip_offload: 1; + uint8_t qinq_valn_strip_offload: 1; + uint8_t rss_enable: 1; + uint8_t mtu_enable: 1; + uint8_t hit_flag: 1; + + uint8_t accelerator_offload_flag: 1; + uint8_t lro_offload: 1; + uint8_t ip_fragment_offload: 1; + uint8_t tcp_udp_checksum_offload: 1; + uint8_t ip_checksum_offload: 1; + uint8_t outer_ip_checksum_offload: 1; + uint8_t is_up: 1; + uint8_t rsv1: 1; + + uint8_t inline_sec_offload: 1; + uint8_t ovs_enable: 1; + uint8_t lag_enable: 1; + uint8_t is_passthrough: 1; + uint8_t is_vf: 1; + uint8_t virtion_version: 2; + uint8_t virtio_enable: 1; + + uint8_t byte4_rsv1: 1; + uint8_t ingress_meter_enable: 1; + uint8_t egress_meter_enable: 1; + uint8_t byte4_rsv2: 2; + uint8_t fd_enable: 1; + uint8_t vepa_enable: 1; + uint8_t spoof_check_enable: 1; + + uint16_t port_base_qid : 12; + uint16_t hash_search_index : 3; + uint16_t rsv: 1; + + uint16_t mtu; + + uint16_t lag_id : 3; + uint16_t pf_vfid : 11; + uint16_t ingress_tm_enable : 1; + uint16_t egress_tm_enable : 1; + + uint8_t hash_alg: 4; + uint8_t phy_port: 4; + + uint8_t rss_hash_factor; + + uint16_t tpid; + + 
uint16_t vhca : 10; + uint16_t uplink_port : 6; +#endif +}; + +struct zxdh_panel_table { + uint16_t port_vfid_1588 : 11, + rsv2 : 5; + uint16_t pf_vfid : 11, + rsv1 : 1, + enable_1588_tc : 2, + trust_mode : 1, + hit_flag : 1; + uint32_t mtu : 16, + mtu_enable : 1, + rsv : 3, + tm_base_queue : 12; + uint32_t rsv_1; + uint32_t rsv_2; +}; /* 16B */ + +int zxdh_port_attr_init(struct rte_eth_dev *dev); +int zxdh_panel_table_init(struct rte_eth_dev *dev); +int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); + +#endif /* ZXDH_TABLES_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 101018 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
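On why zxdh_port_attr_table above declares its bit-fields twice, mirrored under RTE_BYTE_ORDER: C leaves bit-field allocation order within a storage unit implementation-defined, and common ABIs allocate from the low bit on little-endian targets and from the high bit on big-endian ones, so reversing the declaration order keeps the same byte layout for the hardware table either way. A minimal sketch (an illustrative struct, not the driver's):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct attr {
	uint8_t hit_flag : 1;	/* lands in bit 0 on LE ABIs, bit 7 on BE */
	uint8_t mtu_enable : 1;
	uint8_t rsv : 6;
};

int main(void)
{
	struct attr a = { .hit_flag = 1 };
	uint8_t raw;

	memcpy(&raw, &a, 1);
	/* prints 0x01 on little-endian ABIs; a big-endian ABI would yield
	 * 0x80 unless the declaration order were mirrored as above */
	printf("0x%02x\n", raw);
	return 0;
}
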
* Re: [PATCH v2 03/15] net/zxdh: port tables init implementations 2024-12-10 5:53 ` [PATCH v2 03/15] net/zxdh: port tables init implementations Junlong Wang @ 2024-12-13 19:42 ` Stephen Hemminger 0 siblings, 0 replies; 225+ messages in thread From: Stephen Hemminger @ 2024-12-13 19:42 UTC (permalink / raw) To: Junlong Wang; +Cc: ferruh.yigit, dev On Tue, 10 Dec 2024 13:53:21 +0800 Junlong Wang <wang.junlong1@zte.com.cn> wrote: > insert port tables in host. > > Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> > --- > drivers/net/zxdh/meson.build | 1 + > drivers/net/zxdh/zxdh_ethdev.c | 23 ++ > drivers/net/zxdh/zxdh_msg.c | 63 ++++ > drivers/net/zxdh/zxdh_msg.h | 72 ++++ > drivers/net/zxdh/zxdh_np.c | 662 +++++++++++++++++++++++++++++++++ > drivers/net/zxdh/zxdh_np.h | 210 +++++++++++ > drivers/net/zxdh/zxdh_pci.h | 2 + > drivers/net/zxdh/zxdh_tables.c | 104 ++++++ > drivers/net/zxdh/zxdh_tables.h | 148 ++++++++ > 9 files changed, 1285 insertions(+) > create mode 100644 drivers/net/zxdh/zxdh_tables.c > create mode 100644 drivers/net/zxdh/zxdh_tables.h Is this a problem? ### [PATCH] net/zxdh: port tables init implementations WARNING:MACRO_ARG_UNUSED: Argument 'INDEX' is not used in function-like macro #329: FILE: drivers/net/zxdh/zxdh_np.c:124: +#define ZXDH_DTB_TAB_DOWN_PHY_ADDR_GET(DEV_ID, QUEUE_ID, INDEX) \ + (p_dpp_dtb_mgr[(DEV_ID)]->queue_info[(QUEUE_ID)].tab_down.start_phy_addr) total: 0 errors, 1 warnings, 0 checks, 1396 lines checked ^ permalink raw reply [flat|nested] 225+ messages in thread
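For reference, one common way to quiet a MACRO_ARG_UNUSED checkpatch warning when the argument is kept for interface symmetry is to consume it in a comma expression; this is a sketch only, not necessarily the fix taken in a later revision:

	#define ZXDH_DTB_TAB_DOWN_PHY_ADDR_GET(DEV_ID, QUEUE_ID, INDEX) \
		((void)(INDEX), \
		 p_dpp_dtb_mgr[(DEV_ID)]->queue_info[(QUEUE_ID)].tab_down.start_phy_addr)

The simpler alternative is to drop INDEX from the parameter list until a patch actually needs it.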
* [PATCH v2 04/15] net/zxdh: port tables unint implementations 2024-12-10 5:53 ` [PATCH v2 00/15] net/zxdh: updated net zxdh driver Junlong Wang ` (2 preceding siblings ...) 2024-12-10 5:53 ` [PATCH v2 03/15] net/zxdh: port tables init implementations Junlong Wang @ 2024-12-10 5:53 ` Junlong Wang 2024-12-13 19:45 ` Stephen Hemminger 2024-12-13 19:48 ` Stephen Hemminger 2024-12-10 5:53 ` [PATCH v2 05/15] net/zxdh: rx/tx queue setup and intr enable Junlong Wang ` (10 subsequent siblings) 14 siblings, 2 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-10 5:53 UTC (permalink / raw) To: ferruh.yigit; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 9137 bytes --] delete port tables in host. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/zxdh_ethdev.c | 19 ++++++ drivers/net/zxdh/zxdh_msg.h | 1 + drivers/net/zxdh/zxdh_np.c | 113 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_np.h | 9 +++ drivers/net/zxdh/zxdh_tables.c | 36 ++++++++++- drivers/net/zxdh/zxdh_tables.h | 1 + 6 files changed, 177 insertions(+), 2 deletions(-) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 8a9ca87183..a72319758a 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -887,12 +887,31 @@ zxdh_np_uninit(struct rte_eth_dev *dev) zxdh_np_dtb_data_res_free(hw); } +static int +zxdh_tables_uninit(struct rte_eth_dev *dev) +{ + int ret = 0; + + ret = zxdh_port_attr_uninit(dev); + if (ret) { + PMD_DRV_LOG(ERR, "zxdh_port_attr_uninit failed"); + return ret; + } + return ret; +} + static int zxdh_dev_close(struct rte_eth_dev *dev) { struct zxdh_hw *hw = dev->data->dev_private; int ret = 0; + ret = zxdh_tables_uninit(dev); + if (ret != 0) { + PMD_DRV_LOG(ERR, "%s :tables uninit %s failed ", __func__, dev->device->name); + return -1; + } + zxdh_intr_release(dev); zxdh_np_uninit(dev); zxdh_pci_reset(hw); diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index 35ed5d1a1c..9997417f28 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -167,6 +167,7 @@ enum pciebar_layout_type { enum zxdh_msg_type { ZXDH_NULL = 0, ZXDH_VF_PORT_INIT = 1, + ZXDH_VF_PORT_UNINIT = 2, ZXDH_MSG_TYPE_END, } __rte_packed; diff --git a/drivers/net/zxdh/zxdh_np.c b/drivers/net/zxdh/zxdh_np.c index 736994643d..1c1c3cbbcc 100644 --- a/drivers/net/zxdh/zxdh_np.c +++ b/drivers/net/zxdh/zxdh_np.c @@ -24,6 +24,7 @@ ZXDH_RISCV_DTB_MGR *p_riscv_dtb_queue_mgr[ZXDH_DEV_CHANNEL_MAX] = {NULL}; ZXDH_TLB_MGR_T *g_p_dpp_tlb_mgr[ZXDH_DEV_CHANNEL_MAX] = {NULL}; ZXDH_REG_T g_dpp_reg_info[4]; ZXDH_DTB_TABLE_T g_dpp_dtb_table_info[4]; +ZXDH_SDT_TBL_DATA_T g_sdt_info[ZXDH_DEV_CHANNEL_MAX][ZXDH_DEV_SDT_ID_MAX]; #define ZXDH_COMM_ASSERT(x) assert(x) #define ZXDH_SDT_MGR_PTR_GET() (&g_sdt_mgr) @@ -1497,3 +1498,115 @@ zxdh_np_dtb_table_entry_write(uint32_t dev_id, return rc; } + +static uint32_t +zxdh_np_sdt_tbl_data_get(uint32_t dev_id, uint32_t sdt_no, ZXDH_SDT_TBL_DATA_T *p_sdt_data) +{ + uint32_t rc = 0; + + p_sdt_data->data_high32 = g_sdt_info[dev_id][sdt_no].data_high32; + p_sdt_data->data_low32 = g_sdt_info[dev_id][sdt_no].data_low32; + + return rc; +} + +int +zxdh_np_dtb_table_entry_delete(uint32_t dev_id, + uint32_t queue_id, + uint32_t entrynum, + ZXDH_DTB_USER_ENTRY_T *delete_entries) +{ + uint32_t rc = 0; + uint32_t entry_index = 0; + uint32_t sdt_no = 0; + uint32_t tbl_type = 0; + uint32_t element_id = 0xff; + uint32_t one_dtb_len = 0; + uint32_t dtb_len = 0; + uint32_t addr_offset = 0; + 
uint32_t max_size = 0; + uint8_t *p_data_buff = NULL; + uint8_t *p_data_buff_ex = NULL; + ZXDH_DTB_LPM_ENTRY_T lpm_entry = {0}; + + uint8_t entry_cmd[ZXDH_DTB_TABLE_CMD_SIZE_BIT / 8] = {0}; + uint8_t entry_data[ZXDH_ETCAM_WIDTH_MAX / 8] = {0}; + ZXDH_SDT_TBL_DATA_T sdt_tbl = {0}; + ZXDH_DTB_USER_ENTRY_T *pentry = NULL; + ZXDH_DTB_ENTRY_T dtb_one_entry = {0}; + + ZXDH_COMM_CHECK_POINT(delete_entries); + + p_data_buff = (uint8_t *)rte_malloc(NULL, ZXDH_DTB_TABLE_DATA_BUFF_SIZE, 0); + ZXDH_COMM_CHECK_POINT(p_data_buff); + memset(p_data_buff, 0, ZXDH_DTB_TABLE_DATA_BUFF_SIZE); + + p_data_buff_ex = + (uint8_t *)rte_malloc(NULL, ZXDH_DTB_TABLE_DATA_BUFF_SIZE * sizeof(uint8_t), 0); + memset(p_data_buff_ex, 0, ZXDH_DTB_TABLE_DATA_BUFF_SIZE); + + memset((uint8_t *)&lpm_entry, 0x0, sizeof(ZXDH_DTB_LPM_ENTRY_T)); + + memset((uint8_t *)&dtb_one_entry, 0x0, sizeof(ZXDH_DTB_ENTRY_T)); + memset(entry_cmd, 0x0, sizeof(entry_cmd)); + memset(entry_data, 0x0, sizeof(entry_data)); + dtb_one_entry.cmd = entry_cmd; + dtb_one_entry.data = entry_data; + + max_size = (ZXDH_DTB_TABLE_DATA_BUFF_SIZE / 16) - 1; + + for (entry_index = 0; entry_index < entrynum; entry_index++) { + pentry = delete_entries + entry_index; + + sdt_no = pentry->sdt_no; + rc = zxdh_np_sdt_tbl_data_get(dev_id, sdt_no, &sdt_tbl); + switch (tbl_type) { + case ZXDH_SDT_TBLT_ERAM: + { + rc = zxdh_np_dtb_eram_one_entry(dev_id, sdt_no, ZXDH_DTB_ITEM_DELETE, + pentry->p_entry_data, &one_dtb_len, &dtb_one_entry); + break; + } + + default: + { + PMD_DRV_LOG(ERR, "SDT table_type[ %d ] is invalid!", tbl_type); + rte_free(p_data_buff); + rte_free(p_data_buff_ex); + return 1; + } + } + + addr_offset = dtb_len * ZXDH_DTB_LEN_POS_SETP; + dtb_len += one_dtb_len; + if (dtb_len > max_size) { + rte_free(p_data_buff); + rte_free(p_data_buff_ex); + PMD_DRV_LOG(ERR, " %s error dtb_len>%u!", __func__, + max_size); + return ZXDH_RC_DTB_DOWN_LEN_INVALID; + } + + rc = zxdh_np_dtb_data_write(p_data_buff, addr_offset, &dtb_one_entry); + memset(entry_cmd, 0x0, sizeof(entry_cmd)); + memset(entry_data, 0x0, sizeof(entry_data)); + } + + if (dtb_len == 0) { + rte_free(p_data_buff); + rte_free(p_data_buff_ex); + return ZXDH_RC_DTB_DOWN_LEN_INVALID; + } + + rc = zxdh_np_dtb_write_down_table_data(dev_id, + queue_id, + dtb_len * 16, + p_data_buff, + &element_id); + rte_free(p_data_buff); + ZXDH_COMM_CHECK_RC_MEMORY_FREE_NO_ASSERT(rc, + "dpp_dtb_write_down_table_data", p_data_buff_ex); + + rte_free(p_data_buff_ex); + return 0; +} diff --git a/drivers/net/zxdh/zxdh_np.h b/drivers/net/zxdh/zxdh_np.h index 40961c02a2..42a652dd6b 100644 --- a/drivers/net/zxdh/zxdh_np.h +++ b/drivers/net/zxdh/zxdh_np.h @@ -20,6 +20,8 @@ #define ZXDH_PPU_CLUSTER_NUM (6) #define ZXDH_PPU_INSTR_MEM_NUM (3) #define ZXDH_SDT_CFG_LEN (2) +#define ZXDH_SDT_H_TBL_TYPE_BT_POS (29) +#define ZXDH_SDT_H_TBL_TYPE_BT_LEN (3) #define ZXDH_RC_DEV_BASE (0x600) #define ZXDH_RC_DEV_PARA_INVALID (ZXDH_RC_DEV_BASE | 0x0) @@ -507,9 +509,16 @@ typedef struct zxdh_dtb_user_entry_t { void *p_entry_data; } ZXDH_DTB_USER_ENTRY_T; +typedef struct zxdh_sdt_tbl_data_t { + uint32_t data_high32; + uint32_t data_low32; +} ZXDH_SDT_TBL_DATA_T; + int zxdh_np_host_init(uint32_t dev_id, ZXDH_DEV_INIT_CTRL_T *p_dev_init_ctrl); int zxdh_np_online_uninit(uint32_t dev_id, char *port_name, uint32_t queue_id); int zxdh_np_dtb_table_entry_write(uint32_t dev_id, uint32_t queue_id, uint32_t entrynum, ZXDH_DTB_USER_ENTRY_T *down_entries); +int zxdh_np_dtb_table_entry_delete(uint32_t dev_id, uint32_t queue_id, + uint32_t entrynum, 
ZXDH_DTB_USER_ENTRY_T *delete_entries); #endif /* ZXDH_NP_H */ diff --git a/drivers/net/zxdh/zxdh_tables.c b/drivers/net/zxdh/zxdh_tables.c index 4284fefe3a..f0d9fab37b 100644 --- a/drivers/net/zxdh/zxdh_tables.c +++ b/drivers/net/zxdh/zxdh_tables.c @@ -11,7 +11,8 @@ #define ZXDH_SDT_VPORT_ATT_TABLE 1 #define ZXDH_SDT_PANEL_ATT_TABLE 2 -int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr) +int +zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr) { int ret = 0; @@ -70,7 +71,38 @@ zxdh_port_attr_init(struct rte_eth_dev *dev) return ret; }; -int zxdh_panel_table_init(struct rte_eth_dev *dev) +int +zxdh_port_attr_uninit(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_info msg_info = {0}; + struct zxdh_port_attr_table port_attr = {0}; + int ret = 0; + + if (hw->is_pf == 1) { + ZXDH_DTB_ERAM_ENTRY_INFO_T port_attr_entry = {hw->vfid, (uint32_t *)&port_attr}; + ZXDH_DTB_USER_ENTRY_T entry = { + .sdt_no = ZXDH_SDT_VPORT_ATT_TABLE, + .p_entry_data = (void *)&port_attr_entry + }; + ret = zxdh_np_dtb_table_entry_delete(ZXDH_DEVICE_NO, g_dtb_data.queueid, 1, &entry); + if (ret) { + PMD_DRV_LOG(ERR, "delete port attr table failed"); + return -ret; + } + } else { + zxdh_msg_head_build(hw, ZXDH_VF_PORT_UNINIT, &msg_info); + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "vf port tables uninit failed"); + return -ret; + } + } + return ret; +} + +int +zxdh_panel_table_init(struct rte_eth_dev *dev) { struct zxdh_hw *hw = dev->data->dev_private; diff --git a/drivers/net/zxdh/zxdh_tables.h b/drivers/net/zxdh/zxdh_tables.h index 5d34af2f05..5e9b36faee 100644 --- a/drivers/net/zxdh/zxdh_tables.h +++ b/drivers/net/zxdh/zxdh_tables.h @@ -144,5 +144,6 @@ struct zxdh_panel_table { int zxdh_port_attr_init(struct rte_eth_dev *dev); int zxdh_panel_table_init(struct rte_eth_dev *dev); int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); +int zxdh_port_attr_uninit(struct rte_eth_dev *dev); #endif /* ZXDH_TABLES_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 19734 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
* Re: [PATCH v2 04/15] net/zxdh: port tables unint implementations 2024-12-10 5:53 ` [PATCH v2 04/15] net/zxdh: port tables unint implementations Junlong Wang @ 2024-12-13 19:45 ` Stephen Hemminger 2024-12-13 19:48 ` Stephen Hemminger 1 sibling, 0 replies; 225+ messages in thread From: Stephen Hemminger @ 2024-12-13 19:45 UTC (permalink / raw) To: Junlong Wang; +Cc: ferruh.yigit, dev On Tue, 10 Dec 2024 13:53:22 +0800 Junlong Wang <wang.junlong1@zte.com.cn> wrote: > delete port tables in host. > > Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> > --- > drivers/net/zxdh/zxdh_ethdev.c | 19 ++++++ > drivers/net/zxdh/zxdh_msg.h | 1 + > drivers/net/zxdh/zxdh_np.c | 113 +++++++++++++++++++++++++++++++++ > drivers/net/zxdh/zxdh_np.h | 9 +++ > drivers/net/zxdh/zxdh_tables.c | 36 ++++++++++- > drivers/net/zxdh/zxdh_tables.h | 1 + > 6 files changed, 177 insertions(+), 2 deletions(-) > > diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c > index 8a9ca87183..a72319758a 100644 > --- a/drivers/net/zxdh/zxdh_ethdev.c > +++ b/drivers/net/zxdh/zxdh_ethdev.c > @@ -887,12 +887,31 @@ zxdh_np_uninit(struct rte_eth_dev *dev) > zxdh_np_dtb_data_res_free(hw); > } > > +static int > +zxdh_tables_uninit(struct rte_eth_dev *dev) > +{ > + int ret = 0; > + > + ret = zxdh_port_attr_uninit(dev); > + if (ret) { > + PMD_DRV_LOG(ERR, "zxdh_port_attr_uninit failed"); > + return ret; > + } > + return ret; > +} > + Could be simplified to: static int zxdh_tables_uninit(struct rte_eth_dev *dev) { int ret; ret = zxdh_port_attr_uninit(dev); if (ret) PMD_DRV_LOG(ERR, "zxdh_port_attr_uninit failed"); return ret; } ^ permalink raw reply [flat|nested] 225+ messages in thread
* Re: [PATCH v2 04/15] net/zxdh: port tables unint implementations 2024-12-10 5:53 ` [PATCH v2 04/15] net/zxdh: port tables unint implementations Junlong Wang 2024-12-13 19:45 ` Stephen Hemminger @ 2024-12-13 19:48 ` Stephen Hemminger 1 sibling, 0 replies; 225+ messages in thread From: Stephen Hemminger @ 2024-12-13 19:48 UTC (permalink / raw) To: Junlong Wang; +Cc: ferruh.yigit, dev On Tue, 10 Dec 2024 13:53:22 +0800 Junlong Wang <wang.junlong1@zte.com.cn> wrote: > delete port tables in host. > > Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> > --- > drivers/net/zxdh/zxdh_ethdev.c | 19 ++++++ > drivers/net/zxdh/zxdh_msg.h | 1 + > drivers/net/zxdh/zxdh_np.c | 113 +++++++++++++++++++++++++++++++++ > drivers/net/zxdh/zxdh_np.h | 9 +++ > drivers/net/zxdh/zxdh_tables.c | 36 ++++++++++- > drivers/net/zxdh/zxdh_tables.h | 1 + > 6 files changed, 177 insertions(+), 2 deletions(-) > > diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c > index 8a9ca87183..a72319758a 100644 > --- a/drivers/net/zxdh/zxdh_ethdev.c > +++ b/drivers/net/zxdh/zxdh_ethdev.c > @@ -887,12 +887,31 @@ zxdh_np_uninit(struct rte_eth_dev *dev) > zxdh_np_dtb_data_res_free(hw); > } > > +static int > +zxdh_tables_uninit(struct rte_eth_dev *dev) > +{ > + int ret = 0; > + > + ret = zxdh_port_attr_uninit(dev); > + if (ret) { > + PMD_DRV_LOG(ERR, "zxdh_port_attr_uninit failed"); > + return ret; > + } > + return ret; > +} > + > static int > zxdh_dev_close(struct rte_eth_dev *dev) > { > struct zxdh_hw *hw = dev->data->dev_private; > int ret = 0; > > + ret = zxdh_tables_uninit(dev); > + if (ret != 0) { > + PMD_DRV_LOG(ERR, "%s :tables uninit %s failed ", __func__, dev->device->name); > + return -1; > + } > + > zxdh_intr_release(dev); > zxdh_np_uninit(dev); > zxdh_pci_reset(hw); > diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h > index 35ed5d1a1c..9997417f28 100644 > --- a/drivers/net/zxdh/zxdh_msg.h > +++ b/drivers/net/zxdh/zxdh_msg.h > @@ -167,6 +167,7 @@ enum pciebar_layout_type { > enum zxdh_msg_type { > ZXDH_NULL = 0, > ZXDH_VF_PORT_INIT = 1, > + ZXDH_VF_PORT_UNINIT = 2, > > ZXDH_MSG_TYPE_END, > } __rte_packed; > diff --git a/drivers/net/zxdh/zxdh_np.c b/drivers/net/zxdh/zxdh_np.c > index 736994643d..1c1c3cbbcc 100644 > --- a/drivers/net/zxdh/zxdh_np.c > +++ b/drivers/net/zxdh/zxdh_np.c > @@ -24,6 +24,7 @@ ZXDH_RISCV_DTB_MGR *p_riscv_dtb_queue_mgr[ZXDH_DEV_CHANNEL_MAX] = {NULL}; > ZXDH_TLB_MGR_T *g_p_dpp_tlb_mgr[ZXDH_DEV_CHANNEL_MAX] = {NULL}; > ZXDH_REG_T g_dpp_reg_info[4]; > ZXDH_DTB_TABLE_T g_dpp_dtb_table_info[4]; > +ZXDH_SDT_TBL_DATA_T g_sdt_info[ZXDH_DEV_CHANNEL_MAX][ZXDH_DEV_SDT_ID_MAX]; > > #define ZXDH_COMM_ASSERT(x) assert(x) > #define ZXDH_SDT_MGR_PTR_GET() (&g_sdt_mgr) > @@ -1497,3 +1498,115 @@ zxdh_np_dtb_table_entry_write(uint32_t dev_id, > > return rc; > } > + > +static uint32_t > +zxdh_np_sdt_tbl_data_get(uint32_t dev_id, uint32_t sdt_no, ZXDH_SDT_TBL_DATA_T *p_sdt_data) > +{ > + uint32_t rc = 0; > + > + p_sdt_data->data_high32 = g_sdt_info[dev_id][sdt_no].data_high32; > + p_sdt_data->data_low32 = g_sdt_info[dev_id][sdt_no].data_low32; > + > + return rc; > +} > + > +int > +zxdh_np_dtb_table_entry_delete(uint32_t dev_id, > + uint32_t queue_id, > + uint32_t entrynum, > + ZXDH_DTB_USER_ENTRY_T *delete_entries) > +{ > + uint32_t rc = 0; > + uint32_t entry_index = 0; > + uint32_t sdt_no = 0; > + uint32_t tbl_type = 0; > + uint32_t element_id = 0xff; > + uint32_t one_dtb_len = 0; > + uint32_t dtb_len = 0; > + uint32_t addr_offset = 0; > + uint32_t max_size = 0; > + 
uint8_t *p_data_buff = NULL; > + uint8_t *p_data_buff_ex = NULL; > + ZXDH_DTB_LPM_ENTRY_T lpm_entry = {0}; > + > + uint8_t entry_cmd[ZXDH_DTB_TABLE_CMD_SIZE_BIT / 8] = {0}; > + uint8_t entry_data[ZXDH_ETCAM_WIDTH_MAX / 8] = {0}; > + ZXDH_SDT_TBL_DATA_T sdt_tbl = {0}; > + ZXDH_DTB_USER_ENTRY_T *pentry = NULL; > + ZXDH_DTB_ENTRY_T dtb_one_entry = {0}; > + > + ZXDH_COMM_CHECK_POINT(delete_entries); > + > + p_data_buff = (uint8_t *)rte_malloc(NULL, ZXDH_DTB_TABLE_DATA_BUFF_SIZE, 0); cast not needed. > + ZXDH_COMM_CHECK_POINT(p_data_buff); > + memset(p_data_buff, 0, ZXDH_DTB_TABLE_DATA_BUFF_SIZE); If you use rte_calloc() the memset is not needed. > + > + p_data_buff_ex = > + (uint8_t *)rte_malloc(NULL, ZXDH_DTB_TABLE_DATA_BUFF_SIZE * sizeof(uint8_t), 0); cast not needed. > + memset(p_data_buff_ex, 0, ZXDH_DTB_TABLE_DATA_BUFF_SIZE); use rte_calloc instead. > + > + memset((uint8_t *)&lpm_entry, 0x0, sizeof(ZXDH_DTB_LPM_ENTRY_T)); unnecessary, you already initialized this. > + memset((uint8_t *)&dtb_one_entry, 0x0, sizeof(ZXDH_DTB_ENTRY_T)); Also already initialized. > + memset(entry_cmd, 0x0, sizeof(entry_cmd)); > + memset(entry_data, 0x0, sizeof(entry_data)); ditto ^ permalink raw reply [flat|nested] 225+ messages in thread
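Folding those comments together, the allocation block might shrink to something like the sketch below. rte_calloc() returns zeroed memory, so the casts and memsets disappear; note also that the original never NULL-checks p_data_buff_ex, so a check is added here (the error value is illustrative):

	p_data_buff = rte_calloc(NULL, 1, ZXDH_DTB_TABLE_DATA_BUFF_SIZE, 0);
	ZXDH_COMM_CHECK_POINT(p_data_buff);

	p_data_buff_ex = rte_calloc(NULL, 1, ZXDH_DTB_TABLE_DATA_BUFF_SIZE, 0);
	if (p_data_buff_ex == NULL) {
		rte_free(p_data_buff);
		return ZXDH_RC_DEV_PARA_INVALID; /* illustrative error code */
	}

The stack objects (lpm_entry, dtb_one_entry, entry_cmd, entry_data) are already zeroed by their = {0} initializers, so the four trailing memsets can go as well.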
* [PATCH v2 05/15] net/zxdh: rx/tx queue setup and intr enable 2024-12-10 5:53 ` [PATCH v2 00/15] net/zxdh: updated net zxdh driver Junlong Wang ` (3 preceding siblings ...) 2024-12-10 5:53 ` [PATCH v2 04/15] net/zxdh: port tables unint implementations Junlong Wang @ 2024-12-10 5:53 ` Junlong Wang 2024-12-10 5:53 ` [PATCH v2 06/15] net/zxdh: dev start/stop ops implementations Junlong Wang ` (9 subsequent siblings) 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-10 5:53 UTC (permalink / raw) To: ferruh.yigit; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 7822 bytes --] rx/tx queue setup and intr enable implementations. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/zxdh_ethdev.c | 4 + drivers/net/zxdh/zxdh_queue.c | 149 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_queue.h | 33 ++++++++ 3 files changed, 186 insertions(+) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index a72319758a..9f5a9116bb 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -934,6 +934,10 @@ static const struct eth_dev_ops zxdh_eth_dev_ops = { .dev_configure = zxdh_dev_configure, .dev_close = zxdh_dev_close, .dev_infos_get = zxdh_dev_infos_get, + .rx_queue_setup = zxdh_dev_rx_queue_setup, + .tx_queue_setup = zxdh_dev_tx_queue_setup, + .rx_queue_intr_enable = zxdh_dev_rx_queue_intr_enable, + .rx_queue_intr_disable = zxdh_dev_rx_queue_intr_disable, }; static int32_t diff --git a/drivers/net/zxdh/zxdh_queue.c b/drivers/net/zxdh/zxdh_queue.c index b4ef90ea36..af21f046ad 100644 --- a/drivers/net/zxdh/zxdh_queue.c +++ b/drivers/net/zxdh/zxdh_queue.c @@ -12,6 +12,11 @@ #include "zxdh_common.h" #include "zxdh_msg.h" +#define ZXDH_MBUF_MIN_SIZE sizeof(struct zxdh_net_hdr_dl) +#define ZXDH_MBUF_SIZE_4K 4096 +#define ZXDH_RX_FREE_THRESH 32 +#define ZXDH_TX_FREE_THRESH 32 + struct rte_mbuf * zxdh_queue_detach_unused(struct zxdh_virtqueue *vq) { @@ -125,3 +130,147 @@ zxdh_free_queues(struct rte_eth_dev *dev) return 0; } + +static int +zxdh_check_mempool(struct rte_mempool *mp, uint16_t offset, uint16_t min_length) +{ + uint16_t data_room_size; + + if (mp == NULL) + return -EINVAL; + data_room_size = rte_pktmbuf_data_room_size(mp); + if (data_room_size < offset + min_length) { + PMD_RX_LOG(ERR, + "%s mbuf_data_room_size %u < %u (%u + %u)", + mp->name, data_room_size, + offset + min_length, offset, min_length); + return -EINVAL; + } + return 0; +} + +int32_t +zxdh_dev_rx_queue_setup(struct rte_eth_dev *dev, + uint16_t queue_idx, + uint16_t nb_desc, + uint32_t socket_id __rte_unused, + const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mp) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint16_t vtpci_logic_qidx = 2 * queue_idx + ZXDH_RQ_QUEUE_IDX; + struct zxdh_virtqueue *vq = hw->vqs[vtpci_logic_qidx]; + int32_t ret = 0; + + if (rx_conf->rx_deferred_start) { + PMD_RX_LOG(ERR, "Rx deferred start is not supported"); + return -EINVAL; + } + uint16_t rx_free_thresh = rx_conf->rx_free_thresh; + + if (rx_free_thresh == 0) + rx_free_thresh = RTE_MIN(vq->vq_nentries / 4, ZXDH_RX_FREE_THRESH); + + /* rx_free_thresh must be multiples of four. */ + if (rx_free_thresh & 0x3) { + PMD_RX_LOG(ERR, "(rx_free_thresh=%u port=%u queue=%u)", + rx_free_thresh, dev->data->port_id, queue_idx); + return -EINVAL; + } + /* rx_free_thresh must be less than the number of RX entries */ + if (rx_free_thresh >= vq->vq_nentries) { + PMD_RX_LOG(ERR, "RX entries (%u). 
(rx_free_thresh=%u port=%u queue=%u)", + vq->vq_nentries, rx_free_thresh, dev->data->port_id, queue_idx); + return -EINVAL; + } + vq->vq_free_thresh = rx_free_thresh; + nb_desc = ZXDH_QUEUE_DEPTH; + + vq->vq_free_cnt = RTE_MIN(vq->vq_free_cnt, nb_desc); + struct zxdh_virtnet_rx *rxvq = &vq->rxq; + + rxvq->queue_id = vtpci_logic_qidx; + + int mbuf_min_size = ZXDH_MBUF_MIN_SIZE; + + if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) + mbuf_min_size = ZXDH_MBUF_SIZE_4K; + + ret = zxdh_check_mempool(mp, RTE_PKTMBUF_HEADROOM, mbuf_min_size); + if (ret != 0) { + PMD_RX_LOG(ERR, + "rxq setup but mpool size too small(<%d) failed", mbuf_min_size); + return -EINVAL; + } + rxvq->mpool = mp; + if (queue_idx < dev->data->nb_rx_queues) + dev->data->rx_queues[queue_idx] = rxvq; + + return 0; +} + +int32_t +zxdh_dev_tx_queue_setup(struct rte_eth_dev *dev, + uint16_t queue_idx, + uint16_t nb_desc, + uint32_t socket_id __rte_unused, + const struct rte_eth_txconf *tx_conf) +{ + uint16_t vtpci_logic_qidx = 2 * queue_idx + ZXDH_TQ_QUEUE_IDX; + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_virtqueue *vq = hw->vqs[vtpci_logic_qidx]; + struct zxdh_virtnet_tx *txvq = NULL; + uint16_t tx_free_thresh = 0; + + if (tx_conf->tx_deferred_start) { + PMD_TX_LOG(ERR, "Tx deferred start is not supported"); + return -EINVAL; + } + + nb_desc = ZXDH_QUEUE_DEPTH; + + vq->vq_free_cnt = RTE_MIN(vq->vq_free_cnt, nb_desc); + + txvq = &vq->txq; + txvq->queue_id = vtpci_logic_qidx; + + tx_free_thresh = tx_conf->tx_free_thresh; + if (tx_free_thresh == 0) + tx_free_thresh = RTE_MIN(vq->vq_nentries / 4, ZXDH_TX_FREE_THRESH); + + /* tx_free_thresh must be less than the number of TX entries minus 3 */ + if (tx_free_thresh >= (vq->vq_nentries - 3)) { + PMD_TX_LOG(ERR, "TX entries - 3 (%u). (tx_free_thresh=%u port=%u queue=%u)", + vq->vq_nentries - 3, tx_free_thresh, dev->data->port_id, queue_idx); + return -EINVAL; + } + + vq->vq_free_thresh = tx_free_thresh; + + if (queue_idx < dev->data->nb_tx_queues) + dev->data->tx_queues[queue_idx] = txvq; + + return 0; +} + +int32_t +zxdh_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_virtnet_rx *rxvq = dev->data->rx_queues[queue_id]; + struct zxdh_virtqueue *vq = rxvq->vq; + + zxdh_queue_enable_intr(vq); + zxdh_mb(hw->weak_barriers); + return 0; +} + +int32_t +zxdh_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id) +{ + struct zxdh_virtnet_rx *rxvq = dev->data->rx_queues[queue_id]; + struct zxdh_virtqueue *vq = rxvq->vq; + + zxdh_queue_disable_intr(vq); + return 0; +} diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h index 1304d5e4ea..2f602d894f 100644 --- a/drivers/net/zxdh/zxdh_queue.h +++ b/drivers/net/zxdh/zxdh_queue.h @@ -8,6 +8,7 @@ #include <stdint.h> #include <rte_common.h> +#include <rte_atomic.h> #include "zxdh_ethdev.h" #include "zxdh_rxtx.h" @@ -30,6 +31,7 @@ enum { ZXDH_VTNET_RQ = 0, ZXDH_VTNET_TQ = 1 }; #define ZXDH_RING_EVENT_FLAGS_DESC 0x2 #define ZXDH_VQ_RING_DESC_CHAIN_END 32768 +#define ZXDH_QUEUE_DEPTH 1024 /* * ring descriptors: 16 bytes. 
@@ -270,8 +272,39 @@ zxdh_queue_disable_intr(struct zxdh_virtqueue *vq) } } +static inline void +zxdh_queue_enable_intr(struct zxdh_virtqueue *vq) +{ + if (vq->vq_packed.event_flags_shadow == ZXDH_RING_EVENT_FLAGS_DISABLE) { + vq->vq_packed.event_flags_shadow = ZXDH_RING_EVENT_FLAGS_DISABLE; + vq->vq_packed.ring.driver->desc_event_flags = vq->vq_packed.event_flags_shadow; + } +} + +static inline void +zxdh_mb(uint8_t weak_barriers) +{ + if (weak_barriers) + rte_atomic_thread_fence(rte_memory_order_seq_cst); + else + rte_mb(); +} + struct rte_mbuf *zxdh_queue_detach_unused(struct zxdh_virtqueue *vq); int32_t zxdh_free_queues(struct rte_eth_dev *dev); int32_t zxdh_get_queue_type(uint16_t vtpci_queue_idx); +int32_t zxdh_dev_tx_queue_setup(struct rte_eth_dev *dev, + uint16_t queue_idx, + uint16_t nb_desc, + uint32_t socket_id __rte_unused, + const struct rte_eth_txconf *tx_conf); +int32_t zxdh_dev_rx_queue_setup(struct rte_eth_dev *dev, + uint16_t queue_idx, + uint16_t nb_desc, + uint32_t socket_id __rte_unused, + const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mp); +int32_t zxdh_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id); +int32_t zxdh_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id); #endif /* ZXDH_QUEUE_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 17339 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
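A note on zxdh_check_mempool() in this patch: it requires the pool's data room to cover RTE_PKTMBUF_HEADROOM plus a minimum payload, and that minimum grows from the net-header size to 4096 bytes (ZXDH_MBUF_SIZE_4K) when RTE_ETH_RX_OFFLOAD_TCP_LRO is enabled. A pool that passes the stricter LRO case could be created as in this sketch (pool name and counts are illustrative):

	struct rte_mempool *mp = rte_pktmbuf_pool_create("zxdh_rx_pool",
			8192,                        /* number of mbufs */
			256,                         /* per-lcore cache */
			0,                           /* private data size */
			RTE_PKTMBUF_HEADROOM + 4096, /* data room, LRO-safe */
			rte_socket_id());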
* [PATCH v2 06/15] net/zxdh: dev start/stop ops implementations 2024-12-10 5:53 ` [PATCH v2 00/15] net/zxdh: updated net zxdh driver Junlong Wang ` (4 preceding siblings ...) 2024-12-10 5:53 ` [PATCH v2 05/15] net/zxdh: rx/tx queue setup and intr enable Junlong Wang @ 2024-12-10 5:53 ` Junlong Wang 2024-12-13 21:05 ` Stephen Hemminger 2024-12-10 5:53 ` [PATCH v2 07/15] net/zxdh: provided dev simple tx implementations Junlong Wang ` (8 subsequent siblings) 14 siblings, 1 reply; 225+ messages in thread From: Junlong Wang @ 2024-12-10 5:53 UTC (permalink / raw) To: ferruh.yigit; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 12215 bytes --] dev start/stop implementations, start/stop the rx/tx queues. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- doc/guides/nics/features/zxdh.ini | 2 + doc/guides/nics/zxdh.rst | 2 + drivers/net/zxdh/zxdh_ethdev.c | 61 +++++++++++++++++++++ drivers/net/zxdh/zxdh_pci.c | 24 ++++++++ drivers/net/zxdh/zxdh_pci.h | 1 + drivers/net/zxdh/zxdh_queue.c | 91 +++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_queue.h | 69 +++++++++++++++++++++++ 7 files changed, 250 insertions(+) diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index 05c8091ed7..7b72be5f25 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -7,3 +7,5 @@ Linux = Y x86-64 = Y ARMv8 = Y +SR-IOV = Y +Multiprocess aware = Y diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index 2144753d75..eb970a888f 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -18,6 +18,8 @@ Features Features of the ZXDH PMD are: - Multi arch support: x86_64, ARMv8. +- Multiple queues for TX and RX +- SR-IOV VF Driver compilation and testing diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 9f5a9116bb..8b26d56394 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -900,12 +900,35 @@ zxdh_tables_uninit(struct rte_eth_dev *dev) return ret; } +static int +zxdh_dev_stop(struct rte_eth_dev *dev) +{ + int ret = 0; + + if (dev->data->dev_started == 0) + return 0; + + ret = zxdh_intr_disable(dev); + if (ret) { + PMD_RX_LOG(ERR, "intr disable failed"); + return -EAGAIN; + } + + return 0; +} + static int zxdh_dev_close(struct rte_eth_dev *dev) { struct zxdh_hw *hw = dev->data->dev_private; int ret = 0; + ret = zxdh_dev_stop(dev); + if (ret != 0) { + PMD_DRV_LOG(ERR, " stop port %s failed.", dev->device->name); + return -1; + } + ret = zxdh_tables_uninit(dev); if (ret != 0) { PMD_DRV_LOG(ERR, "%s :tables uninit %s failed ", __func__, dev->device->name); @@ -929,9 +952,47 @@ zxdh_dev_close(struct rte_eth_dev *dev) return ret; } +static int +zxdh_dev_start(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_virtqueue *vq; + int32_t ret; + uint16_t logic_qidx; + uint16_t i; + + for (i = 0; i < dev->data->nb_rx_queues; i++) { + logic_qidx = 2 * i + ZXDH_RQ_QUEUE_IDX; + ret = zxdh_dev_rx_queue_setup_finish(dev, logic_qidx); + if (ret < 0) + return ret; + } + ret = zxdh_intr_enable(dev); + if (ret) { + PMD_DRV_LOG(ERR, "interrupt enable failed"); + return -EIO; + } + + for (i = 0; i < dev->data->nb_rx_queues; i++) { + logic_qidx = 2 * i + ZXDH_RQ_QUEUE_IDX; + vq = hw->vqs[logic_qidx]; + /* Flush the old packets */ + zxdh_queue_rxvq_flush(vq); + zxdh_queue_notify(vq); + } + for (i = 0; i < dev->data->nb_tx_queues; i++) { + logic_qidx = 2 * i + ZXDH_TQ_QUEUE_IDX; + vq = hw->vqs[logic_qidx]; + 
zxdh_queue_notify(vq); + } + return 0; +} + /* dev_ops for zxdh, bare necessities for basic operation */ static const struct eth_dev_ops zxdh_eth_dev_ops = { .dev_configure = zxdh_dev_configure, + .dev_start = zxdh_dev_start, + .dev_stop = zxdh_dev_stop, .dev_close = zxdh_dev_close, .dev_infos_get = zxdh_dev_infos_get, .rx_queue_setup = zxdh_dev_rx_queue_setup, diff --git a/drivers/net/zxdh/zxdh_pci.c b/drivers/net/zxdh/zxdh_pci.c index 250e67d560..83164a5c79 100644 --- a/drivers/net/zxdh/zxdh_pci.c +++ b/drivers/net/zxdh/zxdh_pci.c @@ -202,6 +202,29 @@ zxdh_del_queue(struct zxdh_hw *hw, struct zxdh_virtqueue *vq) rte_write16(0, &hw->common_cfg->queue_enable); } +static void +zxdh_notify_queue(struct zxdh_hw *hw, struct zxdh_virtqueue *vq) +{ + uint32_t notify_data = 0; + + if (!zxdh_pci_with_feature(hw, ZXDH_F_NOTIFICATION_DATA)) { + rte_write16(vq->vq_queue_index, vq->notify_addr); + return; + } + + if (zxdh_pci_with_feature(hw, ZXDH_F_RING_PACKED)) { + notify_data = ((uint32_t)(!!(vq->vq_packed.cached_flags & + ZXDH_VRING_PACKED_DESC_F_AVAIL)) << 31) | + ((uint32_t)vq->vq_avail_idx << 16) | + vq->vq_queue_index; + } else { + notify_data = ((uint32_t)vq->vq_avail_idx << 16) | vq->vq_queue_index; + } + PMD_DRV_LOG(DEBUG, "queue:%d notify_data 0x%x notify_addr 0x%p", + vq->vq_queue_index, notify_data, vq->notify_addr); + rte_write32(notify_data, vq->notify_addr); +} + const struct zxdh_pci_ops zxdh_dev_pci_ops = { .read_dev_cfg = zxdh_read_dev_config, .write_dev_cfg = zxdh_write_dev_config, @@ -216,6 +239,7 @@ const struct zxdh_pci_ops zxdh_dev_pci_ops = { .set_queue_num = zxdh_set_queue_num, .setup_queue = zxdh_setup_queue, .del_queue = zxdh_del_queue, + .notify_queue = zxdh_notify_queue, }; uint8_t diff --git a/drivers/net/zxdh/zxdh_pci.h b/drivers/net/zxdh/zxdh_pci.h index e3f13cb17d..5c5f72b90e 100644 --- a/drivers/net/zxdh/zxdh_pci.h +++ b/drivers/net/zxdh/zxdh_pci.h @@ -144,6 +144,7 @@ struct zxdh_pci_ops { int32_t (*setup_queue)(struct zxdh_hw *hw, struct zxdh_virtqueue *vq); void (*del_queue)(struct zxdh_hw *hw, struct zxdh_virtqueue *vq); + void (*notify_queue)(struct zxdh_hw *hw, struct zxdh_virtqueue *vq); }; struct zxdh_hw_internal { diff --git a/drivers/net/zxdh/zxdh_queue.c b/drivers/net/zxdh/zxdh_queue.c index af21f046ad..8c8f2605f6 100644 --- a/drivers/net/zxdh/zxdh_queue.c +++ b/drivers/net/zxdh/zxdh_queue.c @@ -274,3 +274,94 @@ zxdh_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id) zxdh_queue_disable_intr(vq); return 0; } + +int32_t zxdh_enqueue_recv_refill_packed(struct zxdh_virtqueue *vq, + struct rte_mbuf **cookie, uint16_t num) +{ + struct zxdh_vring_packed_desc *start_dp = vq->vq_packed.ring.desc; + struct zxdh_hw *hw = vq->hw; + struct zxdh_vq_desc_extra *dxp; + uint16_t flags = vq->vq_packed.cached_flags; + int32_t i; + uint16_t idx; + + for (i = 0; i < num; i++) { + idx = vq->vq_avail_idx; + dxp = &vq->vq_descx[idx]; + dxp->cookie = (void *)cookie[i]; + dxp->ndescs = 1; + /* rx pkt fill in data_off */ + start_dp[idx].addr = rte_mbuf_iova_get(cookie[i]) + RTE_PKTMBUF_HEADROOM; + start_dp[idx].len = cookie[i]->buf_len - RTE_PKTMBUF_HEADROOM; + vq->vq_desc_head_idx = dxp->next; + if (vq->vq_desc_head_idx == ZXDH_VQ_RING_DESC_CHAIN_END) + vq->vq_desc_tail_idx = vq->vq_desc_head_idx; + zxdh_queue_store_flags_packed(&start_dp[idx], flags, hw->weak_barriers); + if (++vq->vq_avail_idx >= vq->vq_nentries) { + vq->vq_avail_idx -= vq->vq_nentries; + vq->vq_packed.cached_flags ^= ZXDH_VRING_PACKED_DESC_F_AVAIL_USED; + flags = vq->vq_packed.cached_flags; + 
} + } + vq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt - num); + return 0; +} + +int32_t zxdh_dev_rx_queue_setup_finish(struct rte_eth_dev *dev, uint16_t logic_qidx) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_virtqueue *vq = hw->vqs[logic_qidx]; + struct zxdh_virtnet_rx *rxvq = &vq->rxq; + uint16_t desc_idx; + int32_t error = 0; + + /* Allocate blank mbufs for the each rx descriptor */ + memset(&rxvq->fake_mbuf, 0, sizeof(rxvq->fake_mbuf)); + for (desc_idx = 0; desc_idx < ZXDH_MBUF_BURST_SZ; desc_idx++) + vq->sw_ring[vq->vq_nentries + desc_idx] = &rxvq->fake_mbuf; + + while (!zxdh_queue_full(vq)) { + uint16_t free_cnt = vq->vq_free_cnt; + + free_cnt = RTE_MIN(ZXDH_MBUF_BURST_SZ, free_cnt); + struct rte_mbuf *new_pkts[free_cnt]; + + if (likely(rte_pktmbuf_alloc_bulk(rxvq->mpool, new_pkts, free_cnt) == 0)) { + error = zxdh_enqueue_recv_refill_packed(vq, new_pkts, free_cnt); + if (unlikely(error)) { + int32_t i; + for (i = 0; i < free_cnt; i++) + rte_pktmbuf_free(new_pkts[i]); + } + } else { + PMD_DRV_LOG(ERR, "port %d rxq %d allocated bufs from %s failed", + hw->port_id, logic_qidx, rxvq->mpool->name); + break; + } + } + return 0; +} + +void zxdh_queue_rxvq_flush(struct zxdh_virtqueue *vq) +{ + struct zxdh_vq_desc_extra *dxp = NULL; + uint16_t i = 0; + struct zxdh_vring_packed_desc *descs = vq->vq_packed.ring.desc; + int32_t cnt = 0; + + i = vq->vq_used_cons_idx; + while (zxdh_desc_used(&descs[i], vq) && cnt++ < vq->vq_nentries) { + dxp = &vq->vq_descx[descs[i].id]; + if (dxp->cookie != NULL) { + rte_pktmbuf_free(dxp->cookie); + dxp->cookie = NULL; + } + vq->vq_free_cnt++; + vq->vq_used_cons_idx++; + if (vq->vq_used_cons_idx >= vq->vq_nentries) { + vq->vq_used_cons_idx -= vq->vq_nentries; + vq->vq_packed.used_wrap_counter ^= 1; + } + i = vq->vq_used_cons_idx; + } +} diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h index 2f602d894f..6513aec3f0 100644 --- a/drivers/net/zxdh/zxdh_queue.h +++ b/drivers/net/zxdh/zxdh_queue.h @@ -25,6 +25,11 @@ enum { ZXDH_VTNET_RQ = 0, ZXDH_VTNET_TQ = 1 }; #define ZXDH_VRING_DESC_F_WRITE 2 /* This flag means the descriptor was made available by the driver */ #define ZXDH_VRING_PACKED_DESC_F_AVAIL (1 << (7)) +#define ZXDH_VRING_PACKED_DESC_F_USED (1 << (15)) + +/* Frequently used combinations */ +#define ZXDH_VRING_PACKED_DESC_F_AVAIL_USED \ + (ZXDH_VRING_PACKED_DESC_F_AVAIL | ZXDH_VRING_PACKED_DESC_F_USED) #define ZXDH_RING_EVENT_FLAGS_ENABLE 0x0 #define ZXDH_RING_EVENT_FLAGS_DISABLE 0x1 @@ -33,6 +38,9 @@ enum { ZXDH_VTNET_RQ = 0, ZXDH_VTNET_TQ = 1 }; #define ZXDH_VQ_RING_DESC_CHAIN_END 32768 #define ZXDH_QUEUE_DEPTH 1024 +#define ZXDH_RQ_QUEUE_IDX 0 +#define ZXDH_TQ_QUEUE_IDX 1 + /* * ring descriptors: 16 bytes. * These can chain together via "next". 
@@ -290,6 +298,63 @@ zxdh_mb(uint8_t weak_barriers) rte_mb(); } +static inline int32_t +zxdh_queue_full(const struct zxdh_virtqueue *vq) +{ + return (vq->vq_free_cnt == 0); +} + +static inline void +zxdh_queue_store_flags_packed(struct zxdh_vring_packed_desc *dp, + uint16_t flags, uint8_t weak_barriers) + { + if (weak_barriers) { + #ifdef RTE_ARCH_X86_64 + rte_io_wmb(); + dp->flags = flags; + #else + rte_atomic_store_explicit(&dp->flags, flags, rte_memory_order_release); + #endif + } else { + rte_io_wmb(); + dp->flags = flags; + } +} + +static inline uint16_t +zxdh_queue_fetch_flags_packed(struct zxdh_vring_packed_desc *dp, + uint8_t weak_barriers) + { + uint16_t flags; + if (weak_barriers) { + #ifdef RTE_ARCH_X86_64 + flags = dp->flags; + rte_io_rmb(); + #else + flags = rte_atomic_load_explicit(&dp->flags, rte_memory_order_acquire); + #endif + } else { + flags = dp->flags; + rte_io_rmb(); + } + + return flags; +} + +static inline int32_t +zxdh_desc_used(struct zxdh_vring_packed_desc *desc, struct zxdh_virtqueue *vq) +{ + uint16_t flags = zxdh_queue_fetch_flags_packed(desc, vq->hw->weak_barriers); + uint16_t used = !!(flags & ZXDH_VRING_PACKED_DESC_F_USED); + uint16_t avail = !!(flags & ZXDH_VRING_PACKED_DESC_F_AVAIL); + return avail == used && used == vq->vq_packed.used_wrap_counter; +} + +static inline void zxdh_queue_notify(struct zxdh_virtqueue *vq) +{ + ZXDH_VTPCI_OPS(vq->hw)->notify_queue(vq->hw, vq); +} + struct rte_mbuf *zxdh_queue_detach_unused(struct zxdh_virtqueue *vq); int32_t zxdh_free_queues(struct rte_eth_dev *dev); int32_t zxdh_get_queue_type(uint16_t vtpci_queue_idx); @@ -306,5 +371,9 @@ int32_t zxdh_dev_rx_queue_setup(struct rte_eth_dev *dev, struct rte_mempool *mp); int32_t zxdh_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id); int32_t zxdh_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id); +int32_t zxdh_dev_rx_queue_setup_finish(struct rte_eth_dev *dev, uint16_t logic_qidx); +void zxdh_queue_rxvq_flush(struct zxdh_virtqueue *vq); +int32_t zxdh_enqueue_recv_refill_packed(struct zxdh_virtqueue *vq, + struct rte_mbuf **cookie, uint16_t num); #endif /* ZXDH_QUEUE_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 27708 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
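A side note on zxdh_notify_queue() in this patch: when ZXDH_F_NOTIFICATION_DATA is negotiated on a packed ring, the 32-bit word it writes follows the VIRTIO 1.1 notification-data layout, which can be decoded for debugging as in this sketch:

	uint16_t vqn       = notify_data & 0xFFFF;         /* queue index */
	uint16_t next_off  = (notify_data >> 16) & 0x7FFF; /* next avail index */
	uint8_t  next_wrap = notify_data >> 31;            /* avail wrap flag */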
* Re: [PATCH v2 06/15] net/zxdh: dev start/stop ops implementations 2024-12-10 5:53 ` [PATCH v2 06/15] net/zxdh: dev start/stop ops implementations Junlong Wang @ 2024-12-13 21:05 ` Stephen Hemminger 0 siblings, 0 replies; 225+ messages in thread From: Stephen Hemminger @ 2024-12-13 21:05 UTC (permalink / raw) To: Junlong Wang; +Cc: ferruh.yigit, dev On Tue, 10 Dec 2024 13:53:24 +0800 Junlong Wang <wang.junlong1@zte.com.cn> wrote: > dev start/stop implementations, start/stop the rx/tx queues. > > Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> > --- If I re-enable the warnings for format and unaligned packed then this shows up: ../drivers/net/zxdh/zxdh_queue.c: In function ‘zxdh_dev_rx_queue_setup_finish’: ../drivers/net/zxdh/zxdh_queue.c:321:59: warning: taking address of packed member of ‘struct zxdh_virtnet_rx’ may result in an unaligned pointer value [-Waddress-of-packed-member] 321 | vq->sw_ring[vq->vq_nentries + desc_idx] = &rxvq->fake_mbuf; | ^~~~~~~~~~~~~~~~ The problem is that the driver is using __rte_packed on structures like zxdh_virtnet_rx. Unlike some other OSes, DPDK best practice is to use packed only where required by the hardware. Please don't use __rte_packed unless needed. To save space, it makes sense to reorder structure members to fill holes and put hot members together for caching. You should put fake_mbuf at the end and mark it with __rte_cache_aligned. ^ permalink raw reply [flat|nested] 225+ messages in thread
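Applying that suggestion, the receive-queue structure might end up shaped like the sketch below (field set abridged from this series; the actual respin may differ):

	struct zxdh_virtnet_rx {
		struct zxdh_virtqueue *vq;
		struct rte_mempool *mpool;    /* mempool for mbuf allocation */
		uint16_t queue_id;
		uint16_t port_id;
		const struct rte_memzone *mz; /* mem zone to populate RX ring */
		/* fake_mbuf only terminates sw_ring, so it moves to the tail,
		 * cache aligned, and __rte_packed is dropped from the struct */
		struct rte_mbuf fake_mbuf __rte_cache_aligned;
	};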
* [PATCH v2 07/15] net/zxdh: provided dev simple tx implementations 2024-12-10 5:53 ` [PATCH v2 00/15] net/zxdh: updated net zxdh driver Junlong Wang ` (5 preceding siblings ...) 2024-12-10 5:53 ` [PATCH v2 06/15] net/zxdh: dev start/stop ops implementations Junlong Wang @ 2024-12-10 5:53 ` Junlong Wang 2024-12-10 5:53 ` [PATCH v2 08/15] net/zxdh: provided dev simple rx implementations Junlong Wang ` (7 subsequent siblings) 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-10 5:53 UTC (permalink / raw) To: ferruh.yigit; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 18441 bytes --] provided dev simple tx implementations. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/meson.build | 1 + drivers/net/zxdh/zxdh_ethdev.c | 20 ++ drivers/net/zxdh/zxdh_queue.h | 26 ++- drivers/net/zxdh/zxdh_rxtx.c | 396 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_rxtx.h | 4 + 5 files changed, 446 insertions(+), 1 deletion(-) create mode 100644 drivers/net/zxdh/zxdh_rxtx.c diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build index 5b3af87c5b..20b2cf484a 100644 --- a/drivers/net/zxdh/meson.build +++ b/drivers/net/zxdh/meson.build @@ -21,4 +21,5 @@ sources = files( 'zxdh_queue.c', 'zxdh_np.c', 'zxdh_tables.c', + 'zxdh_rxtx.c', ) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 8b26d56394..cbaa2de3ca 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -15,6 +15,7 @@ #include "zxdh_queue.h" #include "zxdh_np.h" #include "zxdh_tables.h" +#include "zxdh_rxtx.h" struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; struct zxdh_shared_data *zxdh_shared_data; @@ -952,6 +953,24 @@ zxdh_dev_close(struct rte_eth_dev *dev) return ret; } +static int32_t +zxdh_set_rxtx_funcs(struct rte_eth_dev *eth_dev) +{ + struct zxdh_hw *hw = eth_dev->data->dev_private; + + if (!zxdh_pci_packed_queue(hw)) { + PMD_DRV_LOG(ERR, " port %u not support packed queue", eth_dev->data->port_id); + return -1; + } + if (!zxdh_pci_with_feature(hw, ZXDH_NET_F_MRG_RXBUF)) { + PMD_DRV_LOG(ERR, " port %u not support rx mergeable", eth_dev->data->port_id); + return -1; + } + eth_dev->tx_pkt_prepare = zxdh_xmit_pkts_prepare; + eth_dev->tx_pkt_burst = &zxdh_xmit_pkts_packed; + return 0; +} + static int zxdh_dev_start(struct rte_eth_dev *dev) { @@ -967,6 +986,7 @@ zxdh_dev_start(struct rte_eth_dev *dev) if (ret < 0) return ret; } + zxdh_set_rxtx_funcs(dev); ret = zxdh_intr_enable(dev); if (ret) { PMD_DRV_LOG(ERR, "interrupt enable failed"); diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h index 6513aec3f0..9343df81ac 100644 --- a/drivers/net/zxdh/zxdh_queue.h +++ b/drivers/net/zxdh/zxdh_queue.h @@ -21,8 +21,15 @@ enum { ZXDH_VTNET_RQ = 0, ZXDH_VTNET_TQ = 1 }; #define ZXDH_TQ_QUEUE_IDX 1 #define ZXDH_MAX_TX_INDIRECT 8 +/* This marks a buffer as continuing via the next field. */ +#define ZXDH_VRING_DESC_F_NEXT 1 + /* This marks a buffer as write-only (otherwise read-only). */ -#define ZXDH_VRING_DESC_F_WRITE 2 +#define ZXDH_VRING_DESC_F_WRITE 2 + +/* This means the buffer contains a list of buffer descriptors. 
*/ +#define ZXDH_VRING_DESC_F_INDIRECT 4 + /* This flag means the descriptor was made available by the driver */ #define ZXDH_VRING_PACKED_DESC_F_AVAIL (1 << (7)) #define ZXDH_VRING_PACKED_DESC_F_USED (1 << (15)) @@ -35,11 +42,17 @@ enum { ZXDH_VTNET_RQ = 0, ZXDH_VTNET_TQ = 1 }; #define ZXDH_RING_EVENT_FLAGS_DISABLE 0x1 #define ZXDH_RING_EVENT_FLAGS_DESC 0x2 +#define ZXDH_RING_F_INDIRECT_DESC 28 + #define ZXDH_VQ_RING_DESC_CHAIN_END 32768 #define ZXDH_QUEUE_DEPTH 1024 #define ZXDH_RQ_QUEUE_IDX 0 #define ZXDH_TQ_QUEUE_IDX 1 +#define ZXDH_TYPE_HDR_SIZE sizeof(struct zxdh_type_hdr) +#define ZXDH_PI_HDR_SIZE sizeof(struct zxdh_pi_hdr) +#define ZXDH_DL_NET_HDR_SIZE sizeof(struct zxdh_net_hdr_dl) +#define ZXDH_UL_NET_HDR_SIZE sizeof(struct zxdh_net_hdr_ul) /* * ring descriptors: 16 bytes. @@ -355,6 +368,17 @@ static inline void zxdh_queue_notify(struct zxdh_virtqueue *vq) ZXDH_VTPCI_OPS(vq->hw)->notify_queue(vq->hw, vq); } +static inline int32_t +zxdh_queue_kick_prepare_packed(struct zxdh_virtqueue *vq) +{ + uint16_t flags = 0; + + zxdh_mb(vq->hw->weak_barriers); + flags = vq->vq_packed.ring.device->desc_event_flags; + + return (flags != ZXDH_RING_EVENT_FLAGS_DISABLE); +} + struct rte_mbuf *zxdh_queue_detach_unused(struct zxdh_virtqueue *vq); int32_t zxdh_free_queues(struct rte_eth_dev *dev); int32_t zxdh_get_queue_type(uint16_t vtpci_queue_idx); diff --git a/drivers/net/zxdh/zxdh_rxtx.c b/drivers/net/zxdh/zxdh_rxtx.c new file mode 100644 index 0000000000..10034a0e98 --- /dev/null +++ b/drivers/net/zxdh/zxdh_rxtx.c @@ -0,0 +1,396 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#include <stdint.h> +#include <stdalign.h> + +#include <rte_net.h> + +#include "zxdh_logs.h" +#include "zxdh_pci.h" +#include "zxdh_queue.h" + +#define ZXDH_PKT_FORM_CPU 0x20 /* 1-cpu 0-np */ +#define ZXDH_NO_IP_FRAGMENT 0x2000 /* ip fragment flag */ +#define ZXDH_NO_IPID_UPDATE 0x4000 /* ipid update flag */ + +#define ZXDH_PI_L3TYPE_IP 0x00 +#define ZXDH_PI_L3TYPE_IPV6 0x40 +#define ZXDH_PI_L3TYPE_NOIP 0x80 +#define ZXDH_PI_L3TYPE_RSV 0xC0 +#define ZXDH_PI_L3TYPE_MASK 0xC0 + +#define ZXDH_PCODE_MASK 0x1F +#define ZXDH_PCODE_IP_PKT_TYPE 0x01 +#define ZXDH_PCODE_TCP_PKT_TYPE 0x02 +#define ZXDH_PCODE_UDP_PKT_TYPE 0x03 +#define ZXDH_PCODE_NO_IP_PKT_TYPE 0x09 +#define ZXDH_PCODE_NO_REASSMBLE_TCP_PKT_TYPE 0x0C + +#define ZXDH_TX_MAX_SEGS 31 +#define ZXDH_RX_MAX_SEGS 31 + +static void +zxdh_xmit_cleanup_inorder_packed(struct zxdh_virtqueue *vq, int32_t num) +{ + uint16_t used_idx = 0; + uint16_t id = 0; + uint16_t curr_id = 0; + uint16_t free_cnt = 0; + uint16_t size = vq->vq_nentries; + struct zxdh_vring_packed_desc *desc = vq->vq_packed.ring.desc; + struct zxdh_vq_desc_extra *dxp = NULL; + + used_idx = vq->vq_used_cons_idx; + /* desc_is_used has a load-acquire or rte_io_rmb inside + * and wait for used desc in virtqueue. 
+ */ + while (num > 0 && zxdh_desc_used(&desc[used_idx], vq)) { + id = desc[used_idx].id; + do { + curr_id = used_idx; + dxp = &vq->vq_descx[used_idx]; + used_idx += dxp->ndescs; + free_cnt += dxp->ndescs; + num -= dxp->ndescs; + if (used_idx >= size) { + used_idx -= size; + vq->vq_packed.used_wrap_counter ^= 1; + } + if (dxp->cookie != NULL) { + rte_pktmbuf_free(dxp->cookie); + dxp->cookie = NULL; + } + } while (curr_id != id); + } + vq->vq_used_cons_idx = used_idx; + vq->vq_free_cnt += free_cnt; +} + +static void +zxdh_ring_free_id_packed(struct zxdh_virtqueue *vq, uint16_t id) +{ + struct zxdh_vq_desc_extra *dxp = NULL; + + dxp = &vq->vq_descx[id]; + vq->vq_free_cnt += dxp->ndescs; + + if (vq->vq_desc_tail_idx == ZXDH_VQ_RING_DESC_CHAIN_END) + vq->vq_desc_head_idx = id; + else + vq->vq_descx[vq->vq_desc_tail_idx].next = id; + + vq->vq_desc_tail_idx = id; + dxp->next = ZXDH_VQ_RING_DESC_CHAIN_END; +} + +static void +zxdh_xmit_cleanup_normal_packed(struct zxdh_virtqueue *vq, int32_t num) +{ + uint16_t used_idx = 0; + uint16_t id = 0; + uint16_t size = vq->vq_nentries; + struct zxdh_vring_packed_desc *desc = vq->vq_packed.ring.desc; + struct zxdh_vq_desc_extra *dxp = NULL; + + used_idx = vq->vq_used_cons_idx; + /* desc_is_used has a load-acquire or rte_io_rmb inside + * and wait for used desc in virtqueue. + */ + while (num-- && zxdh_desc_used(&desc[used_idx], vq)) { + id = desc[used_idx].id; + dxp = &vq->vq_descx[id]; + vq->vq_used_cons_idx += dxp->ndescs; + if (vq->vq_used_cons_idx >= size) { + vq->vq_used_cons_idx -= size; + vq->vq_packed.used_wrap_counter ^= 1; + } + zxdh_ring_free_id_packed(vq, id); + if (dxp->cookie != NULL) { + rte_pktmbuf_free(dxp->cookie); + dxp->cookie = NULL; + } + used_idx = vq->vq_used_cons_idx; + } +} + +static void +zxdh_xmit_cleanup_packed(struct zxdh_virtqueue *vq, int32_t num, int32_t in_order) +{ + if (in_order) + zxdh_xmit_cleanup_inorder_packed(vq, num); + else + zxdh_xmit_cleanup_normal_packed(vq, num); +} + +static uint8_t +zxdh_xmit_get_ptype(struct rte_mbuf *m) +{ + uint8_t pcode = ZXDH_PCODE_NO_IP_PKT_TYPE; + uint8_t l3_ptype = ZXDH_PI_L3TYPE_NOIP; + + if ((m->packet_type & RTE_PTYPE_INNER_L3_MASK) == RTE_PTYPE_INNER_L3_IPV4 || + ((!(m->packet_type & RTE_PTYPE_TUNNEL_MASK)) && + (m->packet_type & RTE_PTYPE_L3_MASK) == RTE_PTYPE_L3_IPV4)) { + l3_ptype = ZXDH_PI_L3TYPE_IP; + pcode = ZXDH_PCODE_IP_PKT_TYPE; + } else if ((m->packet_type & RTE_PTYPE_INNER_L3_MASK) == RTE_PTYPE_INNER_L3_IPV6 || + ((!(m->packet_type & RTE_PTYPE_TUNNEL_MASK)) && + (m->packet_type & RTE_PTYPE_L3_MASK) == RTE_PTYPE_L3_IPV6)) { + l3_ptype = ZXDH_PI_L3TYPE_IPV6; + pcode = ZXDH_PCODE_IP_PKT_TYPE; + } else { + goto end; + } + + if ((m->packet_type & RTE_PTYPE_INNER_L4_MASK) == RTE_PTYPE_INNER_L4_TCP || + ((!(m->packet_type & RTE_PTYPE_TUNNEL_MASK)) && + (m->packet_type & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_TCP)) + pcode = ZXDH_PCODE_TCP_PKT_TYPE; + else if ((m->packet_type & RTE_PTYPE_INNER_L4_MASK) == RTE_PTYPE_INNER_L4_UDP || + ((!(m->packet_type & RTE_PTYPE_TUNNEL_MASK)) && + (m->packet_type & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_UDP)) + pcode = ZXDH_PCODE_UDP_PKT_TYPE; + +end: + return l3_ptype | ZXDH_PKT_FORM_CPU | pcode; +} + +static void zxdh_xmit_fill_net_hdr(struct rte_mbuf *cookie, + struct zxdh_net_hdr_dl *hdr) +{ + uint16_t pkt_flag_lw16 = ZXDH_NO_IPID_UPDATE; + uint16_t l3_offset; + uint32_t ol_flag = 0; + + hdr->pi_hdr.pkt_flag_lw16 = rte_be_to_cpu_16(pkt_flag_lw16); + + hdr->pi_hdr.pkt_type = zxdh_xmit_get_ptype(cookie); + l3_offset = ZXDH_DL_NET_HDR_SIZE + 
cookie->outer_l2_len + + cookie->outer_l3_len + cookie->l2_len; + hdr->pi_hdr.l3_offset = rte_be_to_cpu_16(l3_offset); + hdr->pi_hdr.l4_offset = rte_be_to_cpu_16(l3_offset + cookie->l3_len); + + hdr->pd_hdr.ol_flag = rte_be_to_cpu_32(ol_flag); +} + +static inline void zxdh_enqueue_xmit_packed_fast(struct zxdh_virtnet_tx *txvq, + struct rte_mbuf *cookie, int32_t in_order) +{ + struct zxdh_virtqueue *vq = txvq->vq; + uint16_t id = in_order ? vq->vq_avail_idx : vq->vq_desc_head_idx; + struct zxdh_vq_desc_extra *dxp = &vq->vq_descx[id]; + uint16_t flags = vq->vq_packed.cached_flags; + struct zxdh_net_hdr_dl *hdr = NULL; + + dxp->ndescs = 1; + dxp->cookie = cookie; + hdr = rte_pktmbuf_mtod_offset(cookie, struct zxdh_net_hdr_dl *, -ZXDH_DL_NET_HDR_SIZE); + zxdh_xmit_fill_net_hdr(cookie, hdr); + + uint16_t idx = vq->vq_avail_idx; + struct zxdh_vring_packed_desc *dp = &vq->vq_packed.ring.desc[idx]; + + dp->addr = rte_pktmbuf_iova(cookie) - ZXDH_DL_NET_HDR_SIZE; + dp->len = cookie->data_len + ZXDH_DL_NET_HDR_SIZE; + dp->id = id; + if (++vq->vq_avail_idx >= vq->vq_nentries) { + vq->vq_avail_idx -= vq->vq_nentries; + vq->vq_packed.cached_flags ^= ZXDH_VRING_PACKED_DESC_F_AVAIL_USED; + } + vq->vq_free_cnt--; + if (!in_order) { + vq->vq_desc_head_idx = dxp->next; + if (vq->vq_desc_head_idx == ZXDH_VQ_RING_DESC_CHAIN_END) + vq->vq_desc_tail_idx = ZXDH_VQ_RING_DESC_CHAIN_END; + } + zxdh_queue_store_flags_packed(dp, flags, vq->hw->weak_barriers); +} + +static inline void zxdh_enqueue_xmit_packed(struct zxdh_virtnet_tx *txvq, + struct rte_mbuf *cookie, + uint16_t needed, + int32_t use_indirect, + int32_t in_order) +{ + struct zxdh_tx_region *txr = txvq->zxdh_net_hdr_mz->addr; + struct zxdh_virtqueue *vq = txvq->vq; + struct zxdh_vring_packed_desc *start_dp = vq->vq_packed.ring.desc; + void *hdr = NULL; + uint16_t head_idx = vq->vq_avail_idx; + uint16_t idx = head_idx; + uint16_t prev = head_idx; + uint16_t head_flags = cookie->next ? ZXDH_VRING_DESC_F_NEXT : 0; + uint16_t seg_num = cookie->nb_segs; + uint16_t id = in_order ? vq->vq_avail_idx : vq->vq_desc_head_idx; + struct zxdh_vring_packed_desc *head_dp = &vq->vq_packed.ring.desc[idx]; + struct zxdh_vq_desc_extra *dxp = &vq->vq_descx[id]; + + dxp->ndescs = needed; + dxp->cookie = cookie; + head_flags |= vq->vq_packed.cached_flags; + /* if offload disabled, it is not zeroed below, do it now */ + + if (use_indirect) { + /** + * setup tx ring slot to point to indirect + * descriptor list stored in reserved region. + * the first slot in indirect ring is already + * preset to point to the header in reserved region + **/ + start_dp[idx].addr = + txvq->zxdh_net_hdr_mem + RTE_PTR_DIFF(&txr[idx].tx_packed_indir, txr); + start_dp[idx].len = (seg_num + 1) * sizeof(struct zxdh_vring_packed_desc); + /* Packed descriptor id needs to be restored when inorder. */ + if (in_order) + start_dp[idx].id = idx; + + /* reset flags for indirect desc */ + head_flags = ZXDH_VRING_DESC_F_INDIRECT; + head_flags |= vq->vq_packed.cached_flags; + hdr = (void *)&txr[idx].tx_hdr; + /* loop below will fill in rest of the indirect elements */ + start_dp = txr[idx].tx_packed_indir; + start_dp->len = ZXDH_DL_NET_HDR_SIZE; /* update actual net or type hdr size */ + idx = 1; + } else { + /* setup first tx ring slot to point to header stored in reserved region. 
*/ + start_dp[idx].addr = txvq->zxdh_net_hdr_mem + RTE_PTR_DIFF(&txr[idx].tx_hdr, txr); + start_dp[idx].len = ZXDH_DL_NET_HDR_SIZE; + head_flags |= ZXDH_VRING_DESC_F_NEXT; + hdr = (void *)&txr[idx].tx_hdr; + idx++; + if (idx >= vq->vq_nentries) { + idx -= vq->vq_nentries; + vq->vq_packed.cached_flags ^= ZXDH_VRING_PACKED_DESC_F_AVAIL_USED; + } + } + zxdh_xmit_fill_net_hdr(cookie, (struct zxdh_net_hdr_dl *)hdr); + + do { + start_dp[idx].addr = rte_pktmbuf_iova(cookie); + start_dp[idx].len = cookie->data_len; + if (likely(idx != head_idx)) { + uint16_t flags = cookie->next ? ZXDH_VRING_DESC_F_NEXT : 0; + flags |= vq->vq_packed.cached_flags; + start_dp[idx].flags = flags; + } + prev = idx; + idx++; + if (idx >= vq->vq_nentries) { + idx -= vq->vq_nentries; + vq->vq_packed.cached_flags ^= ZXDH_VRING_PACKED_DESC_F_AVAIL_USED; + } + } while ((cookie = cookie->next) != NULL); + start_dp[prev].id = id; + if (use_indirect) { + idx = head_idx; + if (++idx >= vq->vq_nentries) { + idx -= vq->vq_nentries; + vq->vq_packed.cached_flags ^= ZXDH_VRING_PACKED_DESC_F_AVAIL_USED; + } + } + vq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt - needed); + vq->vq_avail_idx = idx; + if (!in_order) { + vq->vq_desc_head_idx = dxp->next; + if (vq->vq_desc_head_idx == ZXDH_VQ_RING_DESC_CHAIN_END) + vq->vq_desc_tail_idx = ZXDH_VQ_RING_DESC_CHAIN_END; + } + zxdh_queue_store_flags_packed(head_dp, head_flags, vq->hw->weak_barriers); +} + +uint16_t +zxdh_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) +{ + struct zxdh_virtnet_tx *txvq = tx_queue; + struct zxdh_virtqueue *vq = txvq->vq; + struct zxdh_hw *hw = vq->hw; + uint16_t nb_tx = 0; + + bool in_order = zxdh_pci_with_feature(hw, ZXDH_F_IN_ORDER); + + if (nb_pkts > vq->vq_free_cnt) + zxdh_xmit_cleanup_packed(vq, nb_pkts - vq->vq_free_cnt, in_order); + for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) { + struct rte_mbuf *txm = tx_pkts[nb_tx]; + int32_t can_push = 0; + int32_t use_indirect = 0; + int32_t slots = 0; + int32_t need = 0; + + /* optimize ring usage */ + if ((zxdh_pci_with_feature(hw, ZXDH_F_ANY_LAYOUT) || + zxdh_pci_with_feature(hw, ZXDH_F_VERSION_1)) && + rte_mbuf_refcnt_read(txm) == 1 && + RTE_MBUF_DIRECT(txm) && + txm->nb_segs == 1 && + rte_pktmbuf_headroom(txm) >= ZXDH_DL_NET_HDR_SIZE && + rte_is_aligned(rte_pktmbuf_mtod(txm, char *), + alignof(struct zxdh_net_hdr_dl))) { + can_push = 1; + } else if (zxdh_pci_with_feature(hw, ZXDH_RING_F_INDIRECT_DESC) && + txm->nb_segs < ZXDH_MAX_TX_INDIRECT) { + use_indirect = 1; + } + /** + * How many main ring entries are needed to this Tx? + * indirect => 1 + * any_layout => number of segments + * default => number of segments + 1 + **/ + slots = use_indirect ? 1 : (txm->nb_segs + !can_push); + need = slots - vq->vq_free_cnt; + /* Positive value indicates it need free vring descriptors */ + if (unlikely(need > 0)) { + zxdh_xmit_cleanup_packed(vq, need, in_order); + need = slots - vq->vq_free_cnt; + if (unlikely(need > 0)) { + PMD_TX_LOG(ERR, "port[ep:%d, pf:%d, vf:%d, vfid:%d, pcieid:%d], queue:%d[pch:%d]. 
No free desc to xmit", + hw->vport.epid, hw->vport.pfid, hw->vport.vfid, + hw->vfid, hw->pcie_id, txvq->queue_id, + hw->channel_context[txvq->queue_id].ph_chno); + break; + } + } + /* Enqueue Packet buffers */ + if (can_push) + zxdh_enqueue_xmit_packed_fast(txvq, txm, in_order); + else + zxdh_enqueue_xmit_packed(txvq, txm, slots, use_indirect, in_order); + } + if (likely(nb_tx)) { + if (unlikely(zxdh_queue_kick_prepare_packed(vq))) { + zxdh_queue_notify(vq); + PMD_TX_LOG(DEBUG, "Notified backend after xmit"); + } + } + return nb_tx; +} + +uint16_t zxdh_xmit_pkts_prepare(void *tx_queue __rte_unused, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts) +{ + uint16_t nb_tx; + + for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) { + struct rte_mbuf *m = tx_pkts[nb_tx]; + int32_t error; + +#ifdef RTE_LIBRTE_ETHDEV_DEBUG + error = rte_validate_tx_offload(m); + if (unlikely(error)) { + rte_errno = -error; + break; + } +#endif + + error = rte_net_intel_cksum_prepare(m); + if (unlikely(error)) { + rte_errno = -error; + break; + } + } + return nb_tx; +} diff --git a/drivers/net/zxdh/zxdh_rxtx.h b/drivers/net/zxdh/zxdh_rxtx.h index de9353b223..e07e01e821 100644 --- a/drivers/net/zxdh/zxdh_rxtx.h +++ b/drivers/net/zxdh/zxdh_rxtx.h @@ -44,4 +44,8 @@ struct zxdh_virtnet_tx { const struct rte_memzone *mz; /* mem zone to populate TX ring. */ } __rte_packed; +uint16_t zxdh_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); +uint16_t zxdh_xmit_pkts_prepare(void *tx_queue __rte_unused, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts); + #endif /* ZXDH_RXTX_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 45231 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
* [PATCH v2 08/15] net/zxdh: provided dev simple rx implementations 2024-12-10 5:53 ` [PATCH v2 00/15] net/zxdh: updated net zxdh driver Junlong Wang ` (6 preceding siblings ...) 2024-12-10 5:53 ` [PATCH v2 07/15] net/zxdh: provided dev simple tx implementations Junlong Wang @ 2024-12-10 5:53 ` Junlong Wang 2024-12-10 5:53 ` [PATCH v2 09/15] net/zxdh: link info update, set link up/down Junlong Wang ` (6 subsequent siblings) 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-10 5:53 UTC (permalink / raw) To: ferruh.yigit; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 11226 bytes --] provided dev simple rx implementations. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- doc/guides/nics/features/zxdh.ini | 1 + doc/guides/nics/zxdh.rst | 1 + drivers/net/zxdh/zxdh_ethdev.c | 2 + drivers/net/zxdh/zxdh_rxtx.c | 313 ++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_rxtx.h | 2 + 5 files changed, 319 insertions(+) diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index 7b72be5f25..bb44e93fad 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -9,3 +9,4 @@ x86-64 = Y ARMv8 = Y SR-IOV = Y Multiprocess aware = Y +Scattered Rx = Y diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index eb970a888f..f42db9c1f1 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -20,6 +20,7 @@ Features of the ZXDH PMD are: - Multi arch support: x86_64, ARMv8. - Multiple queues for TX and RX - SR-IOV VF +- Scattered and gather for TX and RX Driver compilation and testing diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index cbaa2de3ca..acf11adb9e 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -968,6 +968,8 @@ zxdh_set_rxtx_funcs(struct rte_eth_dev *eth_dev) } eth_dev->tx_pkt_prepare = zxdh_xmit_pkts_prepare; eth_dev->tx_pkt_burst = &zxdh_xmit_pkts_packed; + eth_dev->rx_pkt_burst = &zxdh_recv_pkts_packed; + return 0; } diff --git a/drivers/net/zxdh/zxdh_rxtx.c b/drivers/net/zxdh/zxdh_rxtx.c index 10034a0e98..06290d48bb 100644 --- a/drivers/net/zxdh/zxdh_rxtx.c +++ b/drivers/net/zxdh/zxdh_rxtx.c @@ -31,6 +31,93 @@ #define ZXDH_TX_MAX_SEGS 31 #define ZXDH_RX_MAX_SEGS 31 +uint32_t zxdh_outer_l2_type[16] = { + 0, + RTE_PTYPE_L2_ETHER, + RTE_PTYPE_L2_ETHER_TIMESYNC, + RTE_PTYPE_L2_ETHER_ARP, + RTE_PTYPE_L2_ETHER_LLDP, + RTE_PTYPE_L2_ETHER_NSH, + RTE_PTYPE_L2_ETHER_VLAN, + RTE_PTYPE_L2_ETHER_QINQ, + RTE_PTYPE_L2_ETHER_PPPOE, + RTE_PTYPE_L2_ETHER_FCOE, + RTE_PTYPE_L2_ETHER_MPLS, +}; + +uint32_t zxdh_outer_l3_type[16] = { + 0, + RTE_PTYPE_L3_IPV4, + RTE_PTYPE_L3_IPV4_EXT, + RTE_PTYPE_L3_IPV6, + RTE_PTYPE_L3_IPV4_EXT_UNKNOWN, + RTE_PTYPE_L3_IPV6_EXT, + RTE_PTYPE_L3_IPV6_EXT_UNKNOWN, +}; + +uint32_t zxdh_outer_l4_type[16] = { + 0, + RTE_PTYPE_L4_TCP, + RTE_PTYPE_L4_UDP, + RTE_PTYPE_L4_FRAG, + RTE_PTYPE_L4_SCTP, + RTE_PTYPE_L4_ICMP, + RTE_PTYPE_L4_NONFRAG, + RTE_PTYPE_L4_IGMP, +}; + +uint32_t zxdh_tunnel_type[16] = { + 0, + RTE_PTYPE_TUNNEL_IP, + RTE_PTYPE_TUNNEL_GRE, + RTE_PTYPE_TUNNEL_VXLAN, + RTE_PTYPE_TUNNEL_NVGRE, + RTE_PTYPE_TUNNEL_GENEVE, + RTE_PTYPE_TUNNEL_GRENAT, + RTE_PTYPE_TUNNEL_GTPC, + RTE_PTYPE_TUNNEL_GTPU, + RTE_PTYPE_TUNNEL_ESP, + RTE_PTYPE_TUNNEL_L2TP, + RTE_PTYPE_TUNNEL_VXLAN_GPE, + RTE_PTYPE_TUNNEL_MPLS_IN_GRE, + RTE_PTYPE_TUNNEL_MPLS_IN_UDP, +}; + +uint32_t zxdh_inner_l2_type[16] = { + 0, + RTE_PTYPE_INNER_L2_ETHER, + 0, + 0, + 0, + 0, + RTE_PTYPE_INNER_L2_ETHER_VLAN, + 
RTE_PTYPE_INNER_L2_ETHER_QINQ, + 0, + 0, + 0, +}; + +uint32_t zxdh_inner_l3_type[16] = { + 0, + RTE_PTYPE_INNER_L3_IPV4, + RTE_PTYPE_INNER_L3_IPV4_EXT, + RTE_PTYPE_INNER_L3_IPV6, + RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN, + RTE_PTYPE_INNER_L3_IPV6_EXT, + RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN, +}; + +uint32_t zxdh_inner_l4_type[16] = { + 0, + RTE_PTYPE_INNER_L4_TCP, + RTE_PTYPE_INNER_L4_UDP, + RTE_PTYPE_INNER_L4_FRAG, + RTE_PTYPE_INNER_L4_SCTP, + RTE_PTYPE_INNER_L4_ICMP, + 0, + 0, +}; + static void zxdh_xmit_cleanup_inorder_packed(struct zxdh_virtqueue *vq, int32_t num) { @@ -394,3 +481,229 @@ uint16_t zxdh_xmit_pkts_prepare(void *tx_queue __rte_unused, struct rte_mbuf **t } return nb_tx; } + +static uint16_t zxdh_dequeue_burst_rx_packed(struct zxdh_virtqueue *vq, + struct rte_mbuf **rx_pkts, + uint32_t *len, + uint16_t num) +{ + struct zxdh_vring_packed_desc *desc = vq->vq_packed.ring.desc; + struct rte_mbuf *cookie = NULL; + uint16_t i, used_idx; + uint16_t id; + + for (i = 0; i < num; i++) { + used_idx = vq->vq_used_cons_idx; + /** + * desc_is_used has a load-acquire or rte_io_rmb inside + * and wait for used desc in virtqueue. + */ + if (!zxdh_desc_used(&desc[used_idx], vq)) + return i; + len[i] = desc[used_idx].len; + id = desc[used_idx].id; + cookie = (struct rte_mbuf *)vq->vq_descx[id].cookie; + vq->vq_descx[id].cookie = NULL; + if (unlikely(cookie == NULL)) { + PMD_RX_LOG(ERR, + "vring descriptor with no mbuf cookie at %u", vq->vq_used_cons_idx); + break; + } + rx_pkts[i] = cookie; + vq->vq_free_cnt++; + vq->vq_used_cons_idx++; + if (vq->vq_used_cons_idx >= vq->vq_nentries) { + vq->vq_used_cons_idx -= vq->vq_nentries; + vq->vq_packed.used_wrap_counter ^= 1; + } + } + return i; +} + +static int32_t zxdh_rx_update_mbuf(struct rte_mbuf *m, struct zxdh_net_hdr_ul *hdr) +{ + struct zxdh_pd_hdr_ul *pd_hdr = &hdr->pd_hdr; + struct zxdh_pi_hdr *pi_hdr = &hdr->pi_hdr; + uint32_t idx = 0; + + m->pkt_len = rte_be_to_cpu_16(pi_hdr->ul.pkt_len); + + uint16_t pkt_type_outer = rte_be_to_cpu_16(pd_hdr->pkt_type_out); + + idx = (pkt_type_outer >> 12) & 0xF; + m->packet_type = zxdh_outer_l2_type[idx]; + idx = (pkt_type_outer >> 8) & 0xF; + m->packet_type |= zxdh_outer_l3_type[idx]; + idx = (pkt_type_outer >> 4) & 0xF; + m->packet_type |= zxdh_outer_l4_type[idx]; + idx = pkt_type_outer & 0xF; + m->packet_type |= zxdh_tunnel_type[idx]; + + uint16_t pkt_type_inner = rte_be_to_cpu_16(pd_hdr->pkt_type_in); + + if (pkt_type_inner) { + idx = (pkt_type_inner >> 12) & 0xF; + m->packet_type |= zxdh_inner_l2_type[idx]; + idx = (pkt_type_inner >> 8) & 0xF; + m->packet_type |= zxdh_inner_l3_type[idx]; + idx = (pkt_type_inner >> 4) & 0xF; + m->packet_type |= zxdh_inner_l4_type[idx]; + } + + return 0; +} + +static inline void zxdh_discard_rxbuf(struct zxdh_virtqueue *vq, struct rte_mbuf *m) +{ + int32_t error = 0; + /* + * Requeue the discarded mbuf. This should always be + * successful since it was just dequeued. 
+ */ + error = zxdh_enqueue_recv_refill_packed(vq, &m, 1); + if (unlikely(error)) { + PMD_RX_LOG(ERR, "cannot enqueue discarded mbuf"); + rte_pktmbuf_free(m); + } +} + +uint16_t zxdh_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts) +{ + struct zxdh_virtnet_rx *rxvq = rx_queue; + struct zxdh_virtqueue *vq = rxvq->vq; + struct zxdh_hw *hw = vq->hw; + struct rte_eth_dev *dev = hw->eth_dev; + struct rte_mbuf *rxm = NULL; + struct rte_mbuf *prev = NULL; + uint32_t len[ZXDH_MBUF_BURST_SZ] = {0}; + struct rte_mbuf *rcv_pkts[ZXDH_MBUF_BURST_SZ] = {NULL}; + uint32_t nb_enqueued = 0; + uint32_t seg_num = 0; + uint32_t seg_res = 0; + uint16_t hdr_size = 0; + int32_t error = 0; + uint16_t nb_rx = 0; + uint16_t num = nb_pkts; + + if (unlikely(num > ZXDH_MBUF_BURST_SZ)) + num = ZXDH_MBUF_BURST_SZ; + + num = zxdh_dequeue_burst_rx_packed(vq, rcv_pkts, len, num); + uint16_t i; + uint16_t rcvd_pkt_len = 0; + + for (i = 0; i < num; i++) { + rxm = rcv_pkts[i]; + + struct zxdh_net_hdr_ul *header = + (struct zxdh_net_hdr_ul *)((char *)rxm->buf_addr + + RTE_PKTMBUF_HEADROOM); + + seg_num = header->type_hdr.num_buffers; + if (seg_num == 0) { + PMD_RX_LOG(ERR, "dequeue %d pkt, No.%d pkt seg_num is %d", num, i, seg_num); + seg_num = 1; + } + /* bit[0:6]-pd_len unit:2B */ + uint16_t pd_len = header->type_hdr.pd_len << 1; + /* Private queue only handle type hdr */ + hdr_size = pd_len; + rxm->data_off = RTE_PKTMBUF_HEADROOM + hdr_size; + rxm->nb_segs = seg_num; + rxm->ol_flags = 0; + rxm->vlan_tci = 0; + rcvd_pkt_len = (uint32_t)(len[i] - hdr_size); + rxm->data_len = (uint16_t)(len[i] - hdr_size); + rxm->port = rxvq->port_id; + rx_pkts[nb_rx] = rxm; + prev = rxm; + /* Update rte_mbuf according to pi/pd header */ + if (zxdh_rx_update_mbuf(rxm, header) < 0) { + zxdh_discard_rxbuf(vq, rxm); + continue; + } + seg_res = seg_num - 1; + /* Merge remaining segments */ + while (seg_res != 0 && i < (num - 1)) { + i++; + rxm = rcv_pkts[i]; + rxm->data_off = RTE_PKTMBUF_HEADROOM; + rxm->data_len = (uint16_t)(len[i]); + + rcvd_pkt_len += (uint32_t)(len[i]); + prev->next = rxm; + prev = rxm; + rxm->next = NULL; + seg_res -= 1; + } + + if (!seg_res) { + if (rcvd_pkt_len != rx_pkts[nb_rx]->pkt_len) { + PMD_RX_LOG(ERR, "dropped rcvd_pkt_len %d pktlen %d.", + rcvd_pkt_len, rx_pkts[nb_rx]->pkt_len); + zxdh_discard_rxbuf(vq, rx_pkts[nb_rx]); + continue; + } + nb_rx++; + } + } + /* Last packet still need merge segments */ + while (seg_res != 0) { + uint16_t rcv_cnt = RTE_MIN((uint16_t)seg_res, ZXDH_MBUF_BURST_SZ); + uint16_t extra_idx = 0; + + rcv_cnt = zxdh_dequeue_burst_rx_packed(vq, rcv_pkts, len, rcv_cnt); + if (unlikely(rcv_cnt == 0)) { + PMD_RX_LOG(ERR, "No enough segments for packet."); + rte_pktmbuf_free(rx_pkts[nb_rx]); + break; + } + while (extra_idx < rcv_cnt) { + rxm = rcv_pkts[extra_idx]; + rxm->data_off = RTE_PKTMBUF_HEADROOM; + rxm->pkt_len = (uint32_t)(len[extra_idx]); + rxm->data_len = (uint16_t)(len[extra_idx]); + prev->next = rxm; + prev = rxm; + rxm->next = NULL; + rcvd_pkt_len += len[extra_idx]; + extra_idx += 1; + } + seg_res -= rcv_cnt; + if (!seg_res) { + if (rcvd_pkt_len != rx_pkts[nb_rx]->pkt_len) { + PMD_RX_LOG(ERR, "dropped rcvd_pkt_len %d pktlen %d.", + rcvd_pkt_len, rx_pkts[nb_rx]->pkt_len); + zxdh_discard_rxbuf(vq, rx_pkts[nb_rx]); + continue; + } + nb_rx++; + } + } + + /* Allocate new mbuf for the used descriptor */ + if (likely(!zxdh_queue_full(vq))) { + /* free_cnt may include mrg descs */ + uint16_t free_cnt = vq->vq_free_cnt; + struct rte_mbuf *new_pkts[free_cnt]; + + if 
(!rte_pktmbuf_alloc_bulk(rxvq->mpool, new_pkts, free_cnt)) { + error = zxdh_enqueue_recv_refill_packed(vq, new_pkts, free_cnt); + if (unlikely(error)) { + for (i = 0; i < free_cnt; i++) + rte_pktmbuf_free(new_pkts[i]); + } + nb_enqueued += free_cnt; + } else { + dev->data->rx_mbuf_alloc_failed += free_cnt; + } + } + if (likely(nb_enqueued)) { + if (unlikely(zxdh_queue_kick_prepare_packed(vq))) { + zxdh_queue_notify(vq); + PMD_RX_LOG(DEBUG, "Notified"); + } + } + return nb_rx; +} diff --git a/drivers/net/zxdh/zxdh_rxtx.h b/drivers/net/zxdh/zxdh_rxtx.h index e07e01e821..6c1c132479 100644 --- a/drivers/net/zxdh/zxdh_rxtx.h +++ b/drivers/net/zxdh/zxdh_rxtx.h @@ -47,5 +47,7 @@ struct zxdh_virtnet_tx { uint16_t zxdh_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); uint16_t zxdh_xmit_pkts_prepare(void *tx_queue __rte_unused, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); +uint16_t zxdh_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts); #endif /* ZXDH_RXTX_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 28849 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
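The seven lookup tables at the top of this patch drive a nibble-at-a-time ptype decode in zxdh_rx_update_mbuf(): for the outer word, bits 15:12 index the L2 table, 11:8 the L3 table, 7:4 the L4 table and 3:0 the tunnel table, and the results are OR-ed into mbuf->packet_type. A self-contained sketch of that decode, with the tables shortened to the entries the example needs (the helper name is illustrative):

#include <stdint.h>
#include <rte_mbuf_ptype.h>

static const uint32_t l2_tbl[16] = { 0, RTE_PTYPE_L2_ETHER };
static const uint32_t l3_tbl[16] = { 0, RTE_PTYPE_L3_IPV4 };
static const uint32_t l4_tbl[16] = { 0, RTE_PTYPE_L4_TCP, RTE_PTYPE_L4_UDP };
static const uint32_t tun_tbl[16] = { 0, RTE_PTYPE_TUNNEL_IP };

/* One nibble per protocol layer, OR-ed together as in the driver. */
static uint32_t
decode_outer_ptype(uint16_t w)
{
	return l2_tbl[(w >> 12) & 0xF] | l3_tbl[(w >> 8) & 0xF] |
	       l4_tbl[(w >> 4) & 0xF] | tun_tbl[w & 0xF];
}

/* Example: decode_outer_ptype(0x1120) yields
 * RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP (no tunnel). */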
* [PATCH v2 09/15] net/zxdh: link info update, set link up/down 2024-12-10 5:53 ` [PATCH v2 00/15] net/zxdh: updated net zxdh driver Junlong Wang ` (7 preceding siblings ...) 2024-12-10 5:53 ` [PATCH v2 08/15] net/zxdh: provided dev simple rx implementations Junlong Wang @ 2024-12-10 5:53 ` Junlong Wang 2024-12-13 19:57 ` Stephen Hemminger 2024-12-13 20:08 ` Stephen Hemminger 2024-12-10 5:53 ` [PATCH v2 10/15] net/zxdh: mac set/add/remove ops implementations Junlong Wang ` (5 subsequent siblings) 14 siblings, 2 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-10 5:53 UTC (permalink / raw) To: ferruh.yigit; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 23602 bytes --] provided link info update, set link up /down, and link intr. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- doc/guides/nics/features/zxdh.ini | 2 + doc/guides/nics/zxdh.rst | 3 + drivers/net/zxdh/meson.build | 1 + drivers/net/zxdh/zxdh_ethdev.c | 13 ++ drivers/net/zxdh/zxdh_ethdev.h | 2 + drivers/net/zxdh/zxdh_ethdev_ops.c | 166 ++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev_ops.h | 14 +++ drivers/net/zxdh/zxdh_msg.c | 56 +++++++++ drivers/net/zxdh/zxdh_msg.h | 40 +++++++ drivers/net/zxdh/zxdh_np.c | 183 +++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_np.h | 20 ++++ drivers/net/zxdh/zxdh_tables.c | 15 +++ drivers/net/zxdh/zxdh_tables.h | 3 + 13 files changed, 518 insertions(+) create mode 100644 drivers/net/zxdh/zxdh_ethdev_ops.c create mode 100644 drivers/net/zxdh/zxdh_ethdev_ops.h diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index bb44e93fad..7da3aaced1 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -10,3 +10,5 @@ ARMv8 = Y SR-IOV = Y Multiprocess aware = Y Scattered Rx = Y +Link status = Y +Link status event = Y diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index f42db9c1f1..fdbc3b3923 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -21,6 +21,9 @@ Features of the ZXDH PMD are: - Multiple queues for TX and RX - SR-IOV VF - Scattered and gather for TX and RX +- Link Auto-negotiation +- Link state information +- Set Link down or up Driver compilation and testing diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build index 20b2cf484a..48f8f5e1ee 100644 --- a/drivers/net/zxdh/meson.build +++ b/drivers/net/zxdh/meson.build @@ -22,4 +22,5 @@ sources = files( 'zxdh_np.c', 'zxdh_tables.c', 'zxdh_rxtx.c', + 'zxdh_ethdev_ops.c', ) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index acf11adb9e..3636da2184 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -16,6 +16,7 @@ #include "zxdh_np.h" #include "zxdh_tables.h" #include "zxdh_rxtx.h" +#include "zxdh_ethdev_ops.h" struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; struct zxdh_shared_data *zxdh_shared_data; @@ -105,9 +106,16 @@ static void zxdh_devconf_intr_handler(void *param) { struct rte_eth_dev *dev = param; + struct zxdh_hw *hw = dev->data->dev_private; + + uint8_t isr = zxdh_pci_isr(hw); if (zxdh_intr_unmask(dev) < 0) PMD_DRV_LOG(ERR, "interrupt enable failed"); + if (isr & ZXDH_PCI_ISR_CONFIG) { + if (zxdh_dev_link_update(dev, 0) == 0) + rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_LSC, NULL); + } } @@ -1007,6 +1015,8 @@ zxdh_dev_start(struct rte_eth_dev *dev) vq = hw->vqs[logic_qidx]; zxdh_queue_notify(vq); } + zxdh_dev_set_link_up(dev); + return 0; } @@ -1021,6 +1031,9 @@ static const struct 
eth_dev_ops zxdh_eth_dev_ops = { .tx_queue_setup = zxdh_dev_tx_queue_setup, .rx_queue_intr_enable = zxdh_dev_rx_queue_intr_enable, .rx_queue_intr_disable = zxdh_dev_rx_queue_intr_disable, + .link_update = zxdh_dev_link_update, + .dev_set_link_up = zxdh_dev_set_link_up, + .dev_set_link_down = zxdh_dev_set_link_down, }; static int32_t diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index 78b1edd5a4..ac8fd2c294 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -69,6 +69,7 @@ struct zxdh_hw { uint64_t guest_features; uint32_t max_queue_pairs; uint32_t speed; + uint32_t speed_mode; uint32_t notify_off_multiplier; uint16_t *notify_base; uint16_t pcie_id; @@ -90,6 +91,7 @@ struct zxdh_hw { uint8_t panel_id; uint8_t has_tx_offload; uint8_t has_rx_offload; + uint8_t admin_status; }; struct zxdh_dtb_shared_data { diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.c b/drivers/net/zxdh/zxdh_ethdev_ops.c new file mode 100644 index 0000000000..635868c4c0 --- /dev/null +++ b/drivers/net/zxdh/zxdh_ethdev_ops.c @@ -0,0 +1,166 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#include "zxdh_ethdev.h" +#include "zxdh_pci.h" +#include "zxdh_msg.h" +#include "zxdh_ethdev_ops.h" +#include "zxdh_tables.h" +#include "zxdh_logs.h" + +static int32_t zxdh_config_port_status(struct rte_eth_dev *dev, uint16_t link_status) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_port_attr_table port_attr = {0}; + struct zxdh_msg_info msg_info = {0}; + int32_t ret = 0; + + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "write port_attr failed"); + return -EAGAIN; + } + port_attr.is_up = link_status; + + ret = zxdh_set_port_attr(hw->vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "write port_attr failed"); + return -EAGAIN; + } + } else { + struct zxdh_port_attr_set_msg *port_attr_msg = &msg_info.data.port_attr_msg; + + zxdh_msg_head_build(hw, ZXDH_PORT_ATTRS_SET, &msg_info); + port_attr_msg->mode = ZXDH_PORT_ATTR_IS_UP_FLAG; + port_attr_msg->value = link_status; + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_PORT_ATTR_IS_UP_FLAG); + return ret; + } + } + return ret; +} + +static int32_t +zxdh_link_info_get(struct rte_eth_dev *dev, struct rte_eth_link *link) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_info msg_info = {0}; + struct zxdh_msg_reply_info reply_info = {0}; + uint16_t status = 0; + int32_t ret = 0; + + if (zxdh_pci_with_feature(hw, ZXDH_NET_F_STATUS)) + zxdh_pci_read_dev_config(hw, offsetof(struct zxdh_net_config, status), + &status, sizeof(status)); + + link->link_status = status; + + if (status == RTE_ETH_LINK_DOWN) { + link->link_speed = RTE_ETH_SPEED_NUM_UNKNOWN; + link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX; + } else { + zxdh_agent_msg_build(hw, ZXDH_MAC_LINK_GET, &msg_info); + + ret = zxdh_send_msg_to_riscv(dev, &msg_info, sizeof(struct zxdh_msg_info), + &reply_info, sizeof(struct zxdh_msg_reply_info), + ZXDH_BAR_MODULE_MAC); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_MAC_LINK_GET); + return -EAGAIN; + } + link->link_speed = reply_info.reply_body.link_msg.speed; + hw->speed_mode = reply_info.reply_body.link_msg.speed_modes; + if ((reply_info.reply_body.link_msg.duplex & RTE_ETH_LINK_FULL_DUPLEX) == + RTE_ETH_LINK_FULL_DUPLEX) + 
link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX; + else + link->link_duplex = RTE_ETH_LINK_HALF_DUPLEX; + } + hw->speed = link->link_speed; + + return 0; +} + +static int zxdh_set_link_status(struct rte_eth_dev *dev, uint8_t link_status) +{ + uint16_t curr_link_status = dev->data->dev_link.link_status; + + struct rte_eth_link link; + struct zxdh_hw *hw = dev->data->dev_private; + int32_t ret = 0; + + if (link_status == curr_link_status) { + PMD_DRV_LOG(INFO, "curr_link_status %u", curr_link_status); + return 0; + } + + hw->admin_status = link_status; + ret = zxdh_link_info_get(dev, &link); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to get link status from hw"); + return ret; + } + dev->data->dev_link.link_status = hw->admin_status & link.link_status; + + if (dev->data->dev_link.link_status == RTE_ETH_LINK_UP) { + dev->data->dev_link.link_speed = link.link_speed; + dev->data->dev_link.link_duplex = link.link_duplex; + } else { + dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN; + dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX; + } + return zxdh_config_port_status(dev, dev->data->dev_link.link_status); +} + +int zxdh_dev_set_link_up(struct rte_eth_dev *dev) +{ + int ret = zxdh_set_link_status(dev, RTE_ETH_LINK_UP); + + if (ret) + PMD_DRV_LOG(ERR, "Set link up failed, code:%d", ret); + + return ret; +} + +int32_t zxdh_dev_link_update(struct rte_eth_dev *dev, int32_t wait_to_complete __rte_unused) +{ + struct rte_eth_link link; + struct zxdh_hw *hw = dev->data->dev_private; + int32_t ret = 0; + + memset(&link, 0, sizeof(link)); + link.link_duplex = hw->duplex; + link.link_speed = hw->speed; + link.link_autoneg = RTE_ETH_LINK_AUTONEG; + + ret = zxdh_link_info_get(dev, &link); + if (ret != 0) { + PMD_DRV_LOG(ERR, " Failed to get link status from hw"); + return ret; + } + link.link_status &= hw->admin_status; + if (link.link_status == RTE_ETH_LINK_DOWN) + link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN; + + ret = zxdh_config_port_status(dev, link.link_status); + if (ret != 0) { + PMD_DRV_LOG(ERR, "set port attr %d failed.", link.link_status); + return ret; + } + return rte_eth_linkstatus_set(dev, &link); +} + +int zxdh_dev_set_link_down(struct rte_eth_dev *dev) +{ + int ret = zxdh_set_link_status(dev, RTE_ETH_LINK_DOWN); + + if (ret) + PMD_DRV_LOG(ERR, "Set link down failed"); + return ret; +} diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.h b/drivers/net/zxdh/zxdh_ethdev_ops.h new file mode 100644 index 0000000000..c6d6ca56fd --- /dev/null +++ b/drivers/net/zxdh/zxdh_ethdev_ops.h @@ -0,0 +1,14 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#ifndef ZXDH_ETHDEV_OPS_H +#define ZXDH_ETHDEV_OPS_H + +#include "zxdh_ethdev.h" + +int zxdh_dev_set_link_up(struct rte_eth_dev *dev); +int zxdh_dev_set_link_down(struct rte_eth_dev *dev); +int32_t zxdh_dev_link_update(struct rte_eth_dev *dev, int32_t wait_to_complete __rte_unused); + +#endif /* ZXDH_ETHDEV_OPS_H */ diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c index 1aed979de3..b984f32fdc 100644 --- a/drivers/net/zxdh/zxdh_msg.c +++ b/drivers/net/zxdh/zxdh_msg.c @@ -1133,6 +1133,50 @@ int zxdh_vf_send_msg_to_pf(struct rte_eth_dev *dev, void *msg_req, return 0; } +int32_t zxdh_send_msg_to_riscv(struct rte_eth_dev *dev, void *msg_req, + uint16_t msg_req_len, void *reply, uint16_t reply_len, + enum ZXDH_BAR_MODULE_ID module_id) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_pci_bar_msg in = {0}; + struct zxdh_msg_recviver_mem result = {0}; + struct 
zxdh_msg_reply_info reply_info = {0}; + + if (reply) { + RTE_ASSERT(reply_len < sizeof(zxdh_msg_reply_info)); + result.recv_buffer = reply; + result.buffer_len = reply_len; + } else { + result.recv_buffer = &reply_info; + result.buffer_len = sizeof(reply_info); + } + struct zxdh_msg_reply_head *reply_head = + &(((struct zxdh_msg_reply_info *)result.recv_buffer)->reply_head); + struct zxdh_msg_reply_body *reply_body = + &(((struct zxdh_msg_reply_info *)result.recv_buffer)->reply_body); + in.payload_addr = &msg_req; + in.payload_len = msg_req_len; + in.virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_CTRLCH_OFFSET); + in.src = hw->is_pf ? ZXDH_MSG_CHAN_END_PF : ZXDH_MSG_CHAN_END_VF; + in.dst = ZXDH_MSG_CHAN_END_RISC; + in.module_id = module_id; + in.src_pcieid = hw->pcie_id; + if (zxdh_bar_chan_sync_msg_send(&in, &result) != ZXDH_BAR_MSG_OK) { + PMD_MSG_LOG(ERR, "Failed to send sync messages or receive response"); + return -EAGAIN; + } + if (reply_head->flag != ZXDH_MSG_REPS_OK) { + PMD_MSG_LOG(ERR, "vf[%d] get pf reply failed: reply_head flag : 0x%x(0xff is OK).replylen %d", + hw->vport.vfid, reply_head->flag, reply_head->reps_len); + return -EAGAIN; + } + if (reply_body->flag != ZXDH_REPS_SUCC) { + PMD_MSG_LOG(ERR, "vf[%d] msg processing failed", hw->vfid); + return -EAGAIN; + } + return 0; +} + void zxdh_msg_head_build(struct zxdh_hw *hw, enum zxdh_msg_type type, struct zxdh_msg_info *msg_info) { @@ -1143,3 +1187,15 @@ void zxdh_msg_head_build(struct zxdh_hw *hw, enum zxdh_msg_type type, msghead->vf_id = hw->vport.vfid; msghead->pcieid = hw->pcie_id; } + +void zxdh_agent_msg_build(struct zxdh_hw *hw, enum zxdh_agent_msg_type type, + struct zxdh_msg_info *msg_info) +{ + struct zxdh_agent_msg_head *agent_head = &msg_info->agent_msg_head; + + agent_head->msg_type = type; + agent_head->panel_id = hw->panel_id; + agent_head->phyport = hw->phyport; + agent_head->vf_id = hw->vfid; + agent_head->pcie_id = hw->pcie_id; +} diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index 9997417f28..069e8b74d7 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -164,11 +164,18 @@ enum pciebar_layout_type { ZXDH_URI_MAX, }; +/* riscv msg opcodes */ +enum zxdh_agent_msg_type { + ZXDH_MAC_LINK_GET = 14, +} __rte_packed; + enum zxdh_msg_type { ZXDH_NULL = 0, ZXDH_VF_PORT_INIT = 1, ZXDH_VF_PORT_UNINIT = 2, + ZXDH_PORT_ATTRS_SET = 25, + ZXDH_MSG_TYPE_END, } __rte_packed; @@ -261,6 +268,15 @@ struct zxdh_offset_get_msg { uint16_t type; }; +struct zxdh_link_info_msg { + uint8_t autoneg; + uint8_t link_state; + uint8_t blink_enable; + uint8_t duplex; + uint32_t speed_modes; + uint32_t speed; +} __rte_packed; + struct zxdh_msg_reply_head { uint8_t flag; uint16_t reps_len; @@ -276,6 +292,7 @@ struct zxdh_msg_reply_body { enum zxdh_reps_flag flag; union { uint8_t reply_data[ZXDH_MSG_REPLY_BODY_MAX_LEN - sizeof(enum zxdh_reps_flag)]; + struct zxdh_link_info_msg link_msg; } __rte_packed; } __rte_packed; @@ -291,6 +308,12 @@ struct zxdh_vf_init_msg { uint8_t rss_enable; } __rte_packed; +struct zxdh_port_attr_set_msg { + uint32_t mode; + uint32_t value; + uint8_t allmulti_follow; +} __rte_packed; + struct zxdh_msg_head { enum zxdh_msg_type msg_type; uint16_t vport; @@ -298,14 +321,26 @@ struct zxdh_msg_head { uint16_t pcieid; } __rte_packed; +struct zxdh_agent_msg_head { + enum zxdh_agent_msg_type msg_type; + uint8_t panel_id; + uint8_t phyport; + uint8_t rsv; + uint16_t vf_id; + uint16_t pcie_id; +} __rte_packed; + struct zxdh_msg_info { union { uint8_t 
head_len[ZXDH_MSG_HEAD_LEN]; struct zxdh_msg_head msg_head; + struct zxdh_agent_msg_head agent_msg_head; }; union { uint8_t datainfo[ZXDH_MSG_REQ_BODY_MAX_LEN]; struct zxdh_vf_init_msg vf_init_msg; + struct zxdh_port_attr_set_msg port_attr_msg; + struct zxdh_link_info_msg link_msg; } __rte_packed data; } __rte_packed; @@ -326,5 +361,10 @@ void zxdh_msg_head_build(struct zxdh_hw *hw, enum zxdh_msg_type type, struct zxdh_msg_info *msg_info); int zxdh_vf_send_msg_to_pf(struct rte_eth_dev *dev, void *msg_req, uint16_t msg_req_len, void *reply, uint16_t reply_len); +void zxdh_agent_msg_build(struct zxdh_hw *hw, enum zxdh_agent_msg_type type, + struct zxdh_msg_info *msg_info); +int32_t zxdh_send_msg_to_riscv(struct rte_eth_dev *dev, void *msg_req, + uint16_t msg_req_len, void *reply, uint16_t reply_len, + enum ZXDH_BAR_MODULE_ID module_id); #endif /* ZXDH_MSG_H */ diff --git a/drivers/net/zxdh/zxdh_np.c b/drivers/net/zxdh/zxdh_np.c index 1c1c3cbbcc..4e32de0151 100644 --- a/drivers/net/zxdh/zxdh_np.c +++ b/drivers/net/zxdh/zxdh_np.c @@ -36,6 +36,16 @@ ZXDH_SDT_TBL_DATA_T g_sdt_info[ZXDH_DEV_CHANNEL_MAX][ZXDH_DEV_SDT_ID_MAX]; #define ZXDH_COMM_GET_BIT_MASK(_inttype_, _bitqnt_)\ ((_inttype_)(((_bitqnt_) < 32))) +#define ZXDH_COMM_UINT32_GET_BITS(_uidst_, _uisrc_, _uistartpos_, _uilen_)\ + ((_uidst_) = (((_uisrc_) >> (_uistartpos_)) & \ + (ZXDH_COMM_GET_BIT_MASK(uint32_t, (_uilen_))))) + +#define ZXDH_COMM_UINT32_WRITE_BITS(_uidst_, _uisrc_, _uistartpos_, _uilen_)\ + (((_uidst_) & ~(ZXDH_COMM_GET_BIT_MASK(uint32_t, (_uilen_)) << (_uistartpos_)))) + +#define ZXDH_COMM_CONVERT32(dw_data) \ + (((dw_data) & 0xff) << 24) + #define ZXDH_REG_DATA_MAX (128) #define ZXDH_COMM_CHECK_DEV_POINT(dev_id, point)\ @@ -1610,3 +1620,176 @@ zxdh_np_dtb_table_entry_delete(uint32_t dev_id, rte_free(p_data_buff_ex); return 0; } + +static uint32_t +zxdh_np_sdt_tbl_data_parser(uint32_t sdt_hig32, uint32_t sdt_low32, void *p_sdt_info) +{ + uint32_t tbl_type = 0; + uint32_t clutch_en = 0; + + ZXDH_SDTTBL_ERAM_T *p_sdt_eram = NULL; + ZXDH_SDTTBL_PORTTBL_T *p_sdt_porttbl = NULL; + + + ZXDH_COMM_UINT32_GET_BITS(tbl_type, sdt_hig32, + ZXDH_SDT_H_TBL_TYPE_BT_POS, ZXDH_SDT_H_TBL_TYPE_BT_LEN); + ZXDH_COMM_UINT32_GET_BITS(clutch_en, sdt_low32, 0, 1); + + switch (tbl_type) { + case ZXDH_SDT_TBLT_ERAM: + { + p_sdt_eram = (ZXDH_SDTTBL_ERAM_T *)p_sdt_info; + p_sdt_eram->table_type = tbl_type; + p_sdt_eram->eram_clutch_en = clutch_en; + break; + } + + case ZXDH_SDT_TBLT_PORTTBL: + { + p_sdt_porttbl = (ZXDH_SDTTBL_PORTTBL_T *)p_sdt_info; + p_sdt_porttbl->table_type = tbl_type; + p_sdt_porttbl->porttbl_clutch_en = clutch_en; + break; + } + default: + { + PMD_DRV_LOG(ERR, "SDT table_type[ %d ] is invalid!", tbl_type); + return 1; + } + } + + return 0; +} + +static uint32_t +zxdh_np_soft_sdt_tbl_get(uint32_t dev_id, uint32_t sdt_no, void *p_sdt_info) +{ + uint32_t rc = 0; + ZXDH_SDT_TBL_DATA_T sdt_tbl = {0}; + + rc = zxdh_np_sdt_tbl_data_get(dev_id, sdt_no, &sdt_tbl); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_sdt_tbl_data_get"); + + rc = zxdh_np_sdt_tbl_data_parser(sdt_tbl.data_high32, sdt_tbl.data_low32, p_sdt_info); + + if (rc != 0) + PMD_DRV_LOG(ERR, "dpp sdt [%d] tbl_data_parser error.", sdt_no); + + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_sdt_tbl_data_parser"); + + return rc; +} + +static uint32_t +zxdh_np_eram_index_cal(uint32_t eram_mode, uint32_t index, + uint32_t *p_row_index, uint32_t *p_col_index) +{ + uint32_t rc = 0; + uint32_t row_index = 0; + uint32_t col_index = 0; + + switch (eram_mode) { + case ZXDH_ERAM128_TBL_128b: + { + 
row_index = index; + break; + } + case ZXDH_ERAM128_TBL_64b: + { + row_index = (index >> 1); + col_index = index & 0x1; + break; + } + case ZXDH_ERAM128_TBL_1b: + { + row_index = (index >> 7); + col_index = index & 0x7F; + break; + } + } + *p_row_index = row_index; + *p_col_index = col_index; + + return rc; +} + +static uint32_t +zxdh_np_dtb_eram_data_get(uint32_t dev_id, uint32_t queue_id, uint32_t sdt_no, + ZXDH_DTB_ERAM_ENTRY_INFO_T *p_dump_eram_entry) +{ + uint32_t rc = 0; + uint32_t rd_mode = 0; + uint32_t row_index = 0; + uint32_t col_index = 0; + uint32_t temp_data[4] = {0}; + uint32_t index = p_dump_eram_entry->index; + uint32_t *p_data = p_dump_eram_entry->p_data; + + ZXDH_SDTTBL_ERAM_T sdt_eram_info = {0}; + + rc = zxdh_np_soft_sdt_tbl_get(queue_id, sdt_no, &sdt_eram_info); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rc, "dpp_soft_sdt_tbl_get"); + rd_mode = sdt_eram_info.eram_mode; + + rc = zxdh_np_eram_index_cal(rd_mode, index, &row_index, &col_index); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dtb_eram_index_cal"); + + switch (rd_mode) { + case ZXDH_ERAM128_TBL_128b: + { + memcpy(p_data, temp_data, (128 / 8)); + break; + } + + case ZXDH_ERAM128_TBL_64b: + { + memcpy(p_data, temp_data + ((1 - col_index) << 1), (64 / 8)); + break; + } + + case ZXDH_ERAM128_TBL_1b: + { + ZXDH_COMM_UINT32_GET_BITS(p_data[0], *(temp_data + + (3 - col_index / 32)), (col_index % 32), 1); + break; + } + } + return rc; +} + +int +zxdh_np_dtb_table_entry_get(uint32_t dev_id, + uint32_t queue_id, + ZXDH_DTB_USER_ENTRY_T *get_entry, + uint32_t srh_mode) +{ + uint32_t rc = 0; + uint32_t sdt_no = 0; + uint32_t tbl_type = 0; + ZXDH_SDT_TBL_DATA_T sdt_tbl = {0}; + + memset(&sdt_tbl, 0x0, sizeof(ZXDH_SDT_TBL_DATA_T)); + sdt_no = get_entry->sdt_no; + rc = zxdh_np_sdt_tbl_data_get(srh_mode, sdt_no, &sdt_tbl); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_sdt_tbl_data_get"); + ZXDH_COMM_UINT32_GET_BITS(tbl_type, sdt_tbl.data_high32, + ZXDH_SDT_H_TBL_TYPE_BT_POS, ZXDH_SDT_H_TBL_TYPE_BT_LEN); + switch (tbl_type) { + case ZXDH_SDT_TBLT_ERAM: + { + rc = zxdh_np_dtb_eram_data_get(dev_id, + queue_id, + sdt_no, + (ZXDH_DTB_ERAM_ENTRY_INFO_T *)get_entry->p_entry_data); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_dtb_eram_data_get"); + break; + } + default: + { + PMD_DRV_LOG(ERR, "SDT table_type[ %d ] is invalid!", tbl_type); + return 1; + } + } + + return 0; +} diff --git a/drivers/net/zxdh/zxdh_np.h b/drivers/net/zxdh/zxdh_np.h index 42a652dd6b..ac3931ba65 100644 --- a/drivers/net/zxdh/zxdh_np.h +++ b/drivers/net/zxdh/zxdh_np.h @@ -514,11 +514,31 @@ typedef struct zxdh_sdt_tbl_data_t { uint32_t data_low32; } ZXDH_SDT_TBL_DATA_T; +typedef struct zxdh_sdt_tbl_etcam_t { + uint32_t table_type; + uint32_t etcam_id; + uint32_t etcam_key_mode; + uint32_t etcam_table_id; + uint32_t no_as_rsp_mode; + uint32_t as_en; + uint32_t as_eram_baddr; + uint32_t as_rsp_mode; + uint32_t etcam_table_depth; + uint32_t etcam_clutch_en; +} ZXDH_SDTTBL_ETCAM_T; + +typedef struct zxdh_sdt_tbl_porttbl_t { + uint32_t table_type; + uint32_t porttbl_clutch_en; +} ZXDH_SDTTBL_PORTTBL_T; + int zxdh_np_host_init(uint32_t dev_id, ZXDH_DEV_INIT_CTRL_T *p_dev_init_ctrl); int zxdh_np_online_uninit(uint32_t dev_id, char *port_name, uint32_t queue_id); int zxdh_np_dtb_table_entry_write(uint32_t dev_id, uint32_t queue_id, uint32_t entrynum, ZXDH_DTB_USER_ENTRY_T *down_entries); int zxdh_np_dtb_table_entry_delete(uint32_t dev_id, uint32_t queue_id, uint32_t entrynum, ZXDH_DTB_USER_ENTRY_T *delete_entries); +int zxdh_np_dtb_table_entry_get(uint32_t dev_id, uint32_t queue_id, + 
ZXDH_DTB_USER_ENTRY_T *get_entry, uint32_t srh_mode); #endif /* ZXDH_NP_H */ diff --git a/drivers/net/zxdh/zxdh_tables.c b/drivers/net/zxdh/zxdh_tables.c index f0d9fab37b..d0aa8b3533 100644 --- a/drivers/net/zxdh/zxdh_tables.c +++ b/drivers/net/zxdh/zxdh_tables.c @@ -11,6 +11,21 @@ #define ZXDH_SDT_VPORT_ATT_TABLE 1 #define ZXDH_SDT_PANEL_ATT_TABLE 2 +int +zxdh_get_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr) +{ + int ret = 0; + + ZXDH_DTB_ERAM_ENTRY_INFO_T entry = {vfid, (uint32_t *)port_attr}; + ZXDH_DTB_USER_ENTRY_T user_entry_get = {ZXDH_SDT_VPORT_ATT_TABLE, &entry}; + + ret = zxdh_np_dtb_table_entry_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, &user_entry_get, 1); + if (ret != 0) + PMD_DRV_LOG(ERR, "get port_attr vfid:%d failed, ret:%d ", vfid, ret); + + return ret; +} + int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr) { diff --git a/drivers/net/zxdh/zxdh_tables.h b/drivers/net/zxdh/zxdh_tables.h index 5e9b36faee..c3c852d768 100644 --- a/drivers/net/zxdh/zxdh_tables.h +++ b/drivers/net/zxdh/zxdh_tables.h @@ -7,6 +7,8 @@ #include <stdint.h> +#define ZXDH_PORT_ATTR_IS_UP_FLAG 35 + extern struct zxdh_dtb_shared_data g_dtb_data; #define ZXDH_DEVICE_NO 0 @@ -145,5 +147,6 @@ int zxdh_port_attr_init(struct rte_eth_dev *dev); int zxdh_panel_table_init(struct rte_eth_dev *dev); int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); int zxdh_port_attr_uninit(struct rte_eth_dev *dev); +int zxdh_get_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); #endif /* ZXDH_TABLES_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 50695 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
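On the consumer side, the RTE_ETH_EVENT_INTR_LSC event raised from zxdh_devconf_intr_handler() in this patch reaches applications through the standard ethdev callback machinery. A minimal sketch of that consumer, assuming only the public API (the callback body and its printf are illustrative):

#include <stdio.h>
#include <rte_ethdev.h>

static int
on_lsc(uint16_t port_id, enum rte_eth_event_type event __rte_unused,
       void *cb_arg __rte_unused, void *ret_param __rte_unused)
{
	struct rte_eth_link link;

	/* Non-blocking read: the PMD's link_update has already run. */
	if (rte_eth_link_get_nowait(port_id, &link) == 0)
		printf("port %u: link %s, %u Mbps\n", port_id,
		       link.link_status == RTE_ETH_LINK_UP ? "up" : "down",
		       link.link_speed);
	return 0;
}

/* Enable LSC interrupts with dev_conf.intr_conf.lsc = 1, then register:
 *	rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_INTR_LSC,
 *				      on_lsc, NULL);
 */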
* Re: [PATCH v2 09/15] net/zxdh: link info update, set link up/down 2024-12-10 5:53 ` [PATCH v2 09/15] net/zxdh: link info update, set link up/down Junlong Wang @ 2024-12-13 19:57 ` Stephen Hemminger 2024-12-13 20:08 ` Stephen Hemminger 1 sibling, 0 replies; 225+ messages in thread From: Stephen Hemminger @ 2024-12-13 19:57 UTC (permalink / raw) To: Junlong Wang; +Cc: ferruh.yigit, dev On Tue, 10 Dec 2024 13:53:27 +0800 Junlong Wang <wang.junlong1@zte.com.cn> wrote: > provided link info update, set link up /down, > and link intr. > > Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> > --- > doc/guides/nics/features/zxdh.ini | 2 + > doc/guides/nics/zxdh.rst | 3 + > drivers/net/zxdh/meson.build | 1 + > drivers/net/zxdh/zxdh_ethdev.c | 13 ++ > drivers/net/zxdh/zxdh_ethdev.h | 2 + > drivers/net/zxdh/zxdh_ethdev_ops.c | 166 ++++++++++++++++++++++++++ > drivers/net/zxdh/zxdh_ethdev_ops.h | 14 +++ > drivers/net/zxdh/zxdh_msg.c | 56 +++++++++ > drivers/net/zxdh/zxdh_msg.h | 40 +++++++ > drivers/net/zxdh/zxdh_np.c | 183 +++++++++++++++++++++++++++++ > drivers/net/zxdh/zxdh_np.h | 20 ++++ > drivers/net/zxdh/zxdh_tables.c | 15 +++ > drivers/net/zxdh/zxdh_tables.h | 3 + > 13 files changed, 518 insertions(+) > create mode 100644 drivers/net/zxdh/zxdh_ethdev_ops.c > create mode 100644 drivers/net/zxdh/zxdh_ethdev_ops.h WARNING:MACRO_ARG_UNUSED: Argument '_uisrc_' is not used in function-like macro #486: FILE: drivers/net/zxdh/zxdh_np.c:43: +#define ZXDH_COMM_UINT32_WRITE_BITS(_uidst_, _uisrc_, _uistartpos_, _uilen_)\ + (((_uidst_) & ~(ZXDH_COMM_GET_BIT_MASK(uint32_t, (_uilen_)) << (_uistartpos_)))) total: 0 errors, 1 warnings, 0 checks, 664 lines checked ^ permalink raw reply [flat|nested] 225+ messages in thread
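The warning is substantive, not stylistic: as posted, ZXDH_COMM_UINT32_WRITE_BITS clears the len-bit field at _uistartpos_ but never merges _uisrc_ into it, so the "write" half of the read-modify-write is missing. A conventional shape that uses every argument looks like the sketch below (macro names are illustrative, not the driver's eventual fix):

#include <stdint.h>

/* Mask covering the low 'len' bits, valid for 1 <= len <= 32. */
#define BIT_MASK32(len) \
	(((len) >= 32) ? UINT32_C(0xFFFFFFFF) : ((UINT32_C(1) << (len)) - 1))

/* Clear the len-bit field at startpos in dst, then insert src there. */
#define UINT32_WRITE_BITS(dst, src, startpos, len) \
	((dst) = (((dst) & ~(BIT_MASK32(len) << (startpos))) | \
		  (((uint32_t)(src) & BIT_MASK32(len)) << (startpos))))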
* Re: [PATCH v2 09/15] net/zxdh: link info update, set link up/down 2024-12-10 5:53 ` [PATCH v2 09/15] net/zxdh: link info update, set link up/down Junlong Wang 2024-12-13 19:57 ` Stephen Hemminger @ 2024-12-13 20:08 ` Stephen Hemminger 1 sibling, 0 replies; 225+ messages in thread From: Stephen Hemminger @ 2024-12-13 20:08 UTC (permalink / raw) To: Junlong Wang; +Cc: ferruh.yigit, dev On Tue, 10 Dec 2024 13:53:27 +0800 Junlong Wang <wang.junlong1@zte.com.cn> wrote: > provided link info update, set link up /down, > and link intr. > > Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> > --- > doc/guides/nics/features/zxdh.ini | 2 + > doc/guides/nics/zxdh.rst | 3 + > drivers/net/zxdh/meson.build | 1 + > drivers/net/zxdh/zxdh_ethdev.c | 13 ++ > drivers/net/zxdh/zxdh_ethdev.h | 2 + > drivers/net/zxdh/zxdh_ethdev_ops.c | 166 ++++++++++++++++++++++++++ > drivers/net/zxdh/zxdh_ethdev_ops.h | 14 +++ > drivers/net/zxdh/zxdh_msg.c | 56 +++++++++ > drivers/net/zxdh/zxdh_msg.h | 40 +++++++ > drivers/net/zxdh/zxdh_np.c | 183 +++++++++++++++++++++++++++++ > drivers/net/zxdh/zxdh_np.h | 20 ++++ > drivers/net/zxdh/zxdh_tables.c | 15 +++ > drivers/net/zxdh/zxdh_tables.h | 3 + > 13 files changed, 518 insertions(+) > create mode 100644 drivers/net/zxdh/zxdh_ethdev_ops.c > create mode 100644 drivers/net/zxdh/zxdh_ethdev_ops.h > > diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini > index bb44e93fad..7da3aaced1 100644 > --- a/doc/guides/nics/features/zxdh.ini > +++ b/doc/guides/nics/features/zxdh.ini > @@ -10,3 +10,5 @@ ARMv8 = Y > SR-IOV = Y > Multiprocess aware = Y > Scattered Rx = Y > +Link status = Y > +Link status event = Y > diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst > index f42db9c1f1..fdbc3b3923 100644 > --- a/doc/guides/nics/zxdh.rst > +++ b/doc/guides/nics/zxdh.rst > @@ -21,6 +21,9 @@ Features of the ZXDH PMD are: > - Multiple queues for TX and RX > - SR-IOV VF > - Scattered and gather for TX and RX > +- Link Auto-negotiation > +- Link state information > +- Set Link down or up > > > Driver compilation and testing > diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build > index 20b2cf484a..48f8f5e1ee 100644 > --- a/drivers/net/zxdh/meson.build > +++ b/drivers/net/zxdh/meson.build > @@ -22,4 +22,5 @@ sources = files( > 'zxdh_np.c', > 'zxdh_tables.c', > 'zxdh_rxtx.c', > + 'zxdh_ethdev_ops.c', > ) > diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c > index acf11adb9e..3636da2184 100644 > --- a/drivers/net/zxdh/zxdh_ethdev.c > +++ b/drivers/net/zxdh/zxdh_ethdev.c > @@ -16,6 +16,7 @@ > #include "zxdh_np.h" > #include "zxdh_tables.h" > #include "zxdh_rxtx.h" > +#include "zxdh_ethdev_ops.h" > > struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; > struct zxdh_shared_data *zxdh_shared_data; > @@ -105,9 +106,16 @@ static void > zxdh_devconf_intr_handler(void *param) > { > struct rte_eth_dev *dev = param; > + struct zxdh_hw *hw = dev->data->dev_private; > + > + uint8_t isr = zxdh_pci_isr(hw); > > if (zxdh_intr_unmask(dev) < 0) > PMD_DRV_LOG(ERR, "interrupt enable failed"); > + if (isr & ZXDH_PCI_ISR_CONFIG) { > + if (zxdh_dev_link_update(dev, 0) == 0) > + rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_LSC, NULL); > + } > } > > > @@ -1007,6 +1015,8 @@ zxdh_dev_start(struct rte_eth_dev *dev) > vq = hw->vqs[logic_qidx]; > zxdh_queue_notify(vq); > } > + zxdh_dev_set_link_up(dev); > + > return 0; > } > > @@ -1021,6 +1031,9 @@ static const struct eth_dev_ops zxdh_eth_dev_ops = { > .tx_queue_setup = 
zxdh_dev_tx_queue_setup, > .rx_queue_intr_enable = zxdh_dev_rx_queue_intr_enable, > .rx_queue_intr_disable = zxdh_dev_rx_queue_intr_disable, > + .link_update = zxdh_dev_link_update, > + .dev_set_link_up = zxdh_dev_set_link_up, > + .dev_set_link_down = zxdh_dev_set_link_down, > }; > > static int32_t > diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h > index 78b1edd5a4..ac8fd2c294 100644 > --- a/drivers/net/zxdh/zxdh_ethdev.h > +++ b/drivers/net/zxdh/zxdh_ethdev.h > @@ -69,6 +69,7 @@ struct zxdh_hw { > uint64_t guest_features; > uint32_t max_queue_pairs; > uint32_t speed; > + uint32_t speed_mode; > uint32_t notify_off_multiplier; > uint16_t *notify_base; > uint16_t pcie_id; > @@ -90,6 +91,7 @@ struct zxdh_hw { > uint8_t panel_id; > uint8_t has_tx_offload; > uint8_t has_rx_offload; > + uint8_t admin_status; > }; > > struct zxdh_dtb_shared_data { > diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.c b/drivers/net/zxdh/zxdh_ethdev_ops.c > new file mode 100644 > index 0000000000..635868c4c0 > --- /dev/null > +++ b/drivers/net/zxdh/zxdh_ethdev_ops.c > @@ -0,0 +1,166 @@ > +/* SPDX-License-Identifier: BSD-3-Clause > + * Copyright(c) 2024 ZTE Corporation > + */ > + > +#include "zxdh_ethdev.h" > +#include "zxdh_pci.h" > +#include "zxdh_msg.h" > +#include "zxdh_ethdev_ops.h" > +#include "zxdh_tables.h" > +#include "zxdh_logs.h" > + > +static int32_t zxdh_config_port_status(struct rte_eth_dev *dev, uint16_t link_status) > +{ > + struct zxdh_hw *hw = dev->data->dev_private; > + struct zxdh_port_attr_table port_attr = {0}; > + struct zxdh_msg_info msg_info = {0}; > + int32_t ret = 0; > + > + if (hw->is_pf) { > + ret = zxdh_get_port_attr(hw->vfid, &port_attr); > + if (ret) { > + PMD_DRV_LOG(ERR, "write port_attr failed"); > + return -EAGAIN; > + } > + port_attr.is_up = link_status; > + > + ret = zxdh_set_port_attr(hw->vfid, &port_attr); > + if (ret) { > + PMD_DRV_LOG(ERR, "write port_attr failed"); > + return -EAGAIN; > + } > + } else { > + struct zxdh_port_attr_set_msg *port_attr_msg = &msg_info.data.port_attr_msg; > + > + zxdh_msg_head_build(hw, ZXDH_PORT_ATTRS_SET, &msg_info); > + port_attr_msg->mode = ZXDH_PORT_ATTR_IS_UP_FLAG; > + port_attr_msg->value = link_status; > + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); > + if (ret) { > + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", > + hw->vport.vport, ZXDH_PORT_ATTR_IS_UP_FLAG); > + return ret; > + } > + } > + return ret; > +} > + > +static int32_t > +zxdh_link_info_get(struct rte_eth_dev *dev, struct rte_eth_link *link) > +{ > + struct zxdh_hw *hw = dev->data->dev_private; > + struct zxdh_msg_info msg_info = {0}; > + struct zxdh_msg_reply_info reply_info = {0}; > + uint16_t status = 0; > + int32_t ret = 0; > + > + if (zxdh_pci_with_feature(hw, ZXDH_NET_F_STATUS)) > + zxdh_pci_read_dev_config(hw, offsetof(struct zxdh_net_config, status), > + &status, sizeof(status)); > + > + link->link_status = status; > + > + if (status == RTE_ETH_LINK_DOWN) { > + link->link_speed = RTE_ETH_SPEED_NUM_UNKNOWN; > + link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX; > + } else { > + zxdh_agent_msg_build(hw, ZXDH_MAC_LINK_GET, &msg_info); > + > + ret = zxdh_send_msg_to_riscv(dev, &msg_info, sizeof(struct zxdh_msg_info), > + &reply_info, sizeof(struct zxdh_msg_reply_info), > + ZXDH_BAR_MODULE_MAC); > + if (ret) { > + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", > + hw->vport.vport, ZXDH_MAC_LINK_GET); > + return -EAGAIN; > + } Not sure if EAGAIN is best choice here. 
Does send_msg return an error code already? The problem with EAGAIN is that it implies the application should retry and will then have a chance to succeed. ^ permalink raw reply	[flat|nested] 225+ messages in thread
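The hazard behind this comment is retry semantics: zxdh_link_info_get() in the patch maps any channel failure to -EAGAIN, and a caller that takes EAGAIN at face value will retry a fault that may never clear. A sketch making that concrete (the retry helper is hypothetical, and zxdh_link_info_get() is static in the patch, declared here only so the illustration compiles):

#include <errno.h>
#include <ethdev_driver.h>

/* Forward declaration for illustration only; in the patch this function
 * is static int32_t inside zxdh_ethdev_ops.c. */
int32_t zxdh_link_info_get(struct rte_eth_dev *dev, struct rte_eth_link *link);

/* Hypothetical caller honoring EAGAIN semantics: if -EAGAIN really
 * meant a hard BAR-channel failure, this loop spins to no effect. */
static int
get_link_with_retry(struct rte_eth_dev *dev, struct rte_eth_link *link,
		    int max_tries)
{
	int ret;

	do {
		ret = zxdh_link_info_get(dev, link);
	} while (ret == -EAGAIN && --max_tries > 0);

	return ret; /* propagating the channel's own code avoids the spin */
}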
* [PATCH v2 10/15] net/zxdh: mac set/add/remove ops implementations 2024-12-10 5:53 ` [PATCH v2 00/15] net/zxdh: updated net zxdh driver Junlong Wang ` (8 preceding siblings ...) 2024-12-10 5:53 ` [PATCH v2 09/15] net/zxdh: link info update, set link up/down Junlong Wang @ 2024-12-10 5:53 ` Junlong Wang 2024-12-10 5:53 ` [PATCH v2 11/15] net/zxdh: promisc/allmulti " Junlong Wang ` (4 subsequent siblings) 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-10 5:53 UTC (permalink / raw) To: ferruh.yigit; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 24838 bytes --] provided mac set/add/remove ops. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- doc/guides/nics/features/zxdh.ini | 2 + doc/guides/nics/zxdh.rst | 2 + drivers/net/zxdh/zxdh_common.c | 24 +++ drivers/net/zxdh/zxdh_common.h | 1 + drivers/net/zxdh/zxdh_ethdev.c | 32 +++- drivers/net/zxdh/zxdh_ethdev.h | 3 + drivers/net/zxdh/zxdh_ethdev_ops.c | 233 +++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev_ops.h | 4 + drivers/net/zxdh/zxdh_msg.h | 14 +- drivers/net/zxdh/zxdh_np.h | 5 + drivers/net/zxdh/zxdh_tables.c | 197 ++++++++++++++++++++++++ drivers/net/zxdh/zxdh_tables.h | 36 +++++ 12 files changed, 550 insertions(+), 3 deletions(-) diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index 7da3aaced1..dc09fe3453 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -12,3 +12,5 @@ Multiprocess aware = Y Scattered Rx = Y Link status = Y Link status event = Y +Unicast MAC filter = Y +Multicast MAC filter = Y diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index fdbc3b3923..e0b0776aca 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -24,6 +24,8 @@ Features of the ZXDH PMD are: - Link Auto-negotiation - Link state information - Set Link down or up +- Unicast MAC filter +- Multicast MAC filter Driver compilation and testing diff --git a/drivers/net/zxdh/zxdh_common.c b/drivers/net/zxdh/zxdh_common.c index 4f18c97ed7..75883a8897 100644 --- a/drivers/net/zxdh/zxdh_common.c +++ b/drivers/net/zxdh/zxdh_common.c @@ -256,6 +256,30 @@ zxdh_panelid_get(struct rte_eth_dev *dev, uint8_t *panelid) return ret; } +static int +zxdh_get_res_hash_id(struct zxdh_res_para *in, uint8_t *hash_id) +{ + uint8_t reps = 0; + uint16_t reps_len = 0; + + if (zxdh_get_res_info(in, ZXDH_TBL_FIELD_HASHID, &reps, &reps_len) != ZXDH_BAR_MSG_OK) + return -1; + + *hash_id = reps; + return ZXDH_BAR_MSG_OK; +} + +int32_t +zxdh_hashidx_get(struct rte_eth_dev *dev, uint8_t *hash_idx) +{ + struct zxdh_res_para param; + + zxdh_fill_res_para(dev, ¶m); + int32_t ret = zxdh_get_res_hash_id(¶m, hash_idx); + + return ret; +} + uint32_t zxdh_read_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg) { diff --git a/drivers/net/zxdh/zxdh_common.h b/drivers/net/zxdh/zxdh_common.h index 72c29e1522..826f1fb95d 100644 --- a/drivers/net/zxdh/zxdh_common.h +++ b/drivers/net/zxdh/zxdh_common.h @@ -22,6 +22,7 @@ struct zxdh_res_para { int32_t zxdh_phyport_get(struct rte_eth_dev *dev, uint8_t *phyport); int32_t zxdh_panelid_get(struct rte_eth_dev *dev, uint8_t *pannelid); +int32_t zxdh_hashidx_get(struct rte_eth_dev *dev, uint8_t *hash_idx); uint32_t zxdh_read_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg); void zxdh_write_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg, uint32_t val); void zxdh_release_lock(struct zxdh_hw *hw); diff --git a/drivers/net/zxdh/zxdh_ethdev.c 
b/drivers/net/zxdh/zxdh_ethdev.c index 3636da2184..e9000dc59f 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -981,6 +981,23 @@ zxdh_set_rxtx_funcs(struct rte_eth_dev *eth_dev) return 0; } +static int +zxdh_mac_config(struct rte_eth_dev *eth_dev) +{ + struct zxdh_hw *hw = eth_dev->data->dev_private; + int ret = 0; + + if (hw->is_pf) { + ret = zxdh_set_mac_table(hw->vport.vport, + ð_dev->data->mac_addrs[0], hw->hash_search_index); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to add mac: port 0x%x", hw->vport.vport); + return ret; + } + } + return ret; +} + static int zxdh_dev_start(struct rte_eth_dev *dev) { @@ -1016,6 +1033,9 @@ zxdh_dev_start(struct rte_eth_dev *dev) zxdh_queue_notify(vq); } zxdh_dev_set_link_up(dev); + ret = zxdh_mac_config(hw->eth_dev); + if (ret) + PMD_DRV_LOG(ERR, " mac config failed"); return 0; } @@ -1034,6 +1054,9 @@ static const struct eth_dev_ops zxdh_eth_dev_ops = { .link_update = zxdh_dev_link_update, .dev_set_link_up = zxdh_dev_set_link_up, .dev_set_link_down = zxdh_dev_set_link_down, + .mac_addr_add = zxdh_dev_mac_addr_add, + .mac_addr_remove = zxdh_dev_mac_addr_remove, + .mac_addr_set = zxdh_dev_mac_addr_set, }; static int32_t @@ -1075,15 +1098,20 @@ zxdh_agent_comm(struct rte_eth_dev *eth_dev, struct zxdh_hw *hw) PMD_DRV_LOG(ERR, "Failed to get phyport"); return -1; } - PMD_DRV_LOG(INFO, "Get phyport success: 0x%x", hw->phyport); + PMD_DRV_LOG(DEBUG, "Get phyport success: 0x%x", hw->phyport); hw->vfid = zxdh_vport_to_vfid(hw->vport); + if (zxdh_hashidx_get(eth_dev, &hw->hash_search_index) != 0) { + PMD_DRV_LOG(ERR, "Failed to get hash idx"); + return -1; + } + if (zxdh_panelid_get(eth_dev, &hw->panel_id) != 0) { PMD_DRV_LOG(ERR, "Failed to get panel_id"); return -1; } - PMD_DRV_LOG(INFO, "Get panel id success: 0x%x", hw->panel_id); + PMD_DRV_LOG(DEBUG, "Get panel id success: 0x%x", hw->panel_id); return 0; } diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index ac8fd2c294..afdb4636ee 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -77,6 +77,8 @@ struct zxdh_hw { uint16_t port_id; uint16_t vfid; uint16_t queue_num; + uint16_t mc_num; + uint16_t uc_num; uint8_t *isr; uint8_t weak_barriers; @@ -89,6 +91,7 @@ struct zxdh_hw { uint8_t msg_chan_init; uint8_t phyport; uint8_t panel_id; + uint8_t hash_search_index; uint8_t has_tx_offload; uint8_t has_rx_offload; uint8_t admin_status; diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.c b/drivers/net/zxdh/zxdh_ethdev_ops.c index 635868c4c0..2374250868 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.c +++ b/drivers/net/zxdh/zxdh_ethdev_ops.c @@ -164,3 +164,236 @@ int zxdh_dev_set_link_down(struct rte_eth_dev *dev) PMD_DRV_LOG(ERR, "Set link down failed"); return ret; } + +int zxdh_dev_mac_addr_set(struct rte_eth_dev *dev, struct rte_ether_addr *addr) +{ + struct zxdh_hw *hw = (struct zxdh_hw *)dev->data->dev_private; + struct rte_ether_addr *old_addr = &dev->data->mac_addrs[0]; + struct zxdh_msg_info msg_info = {0}; + uint16_t ret = 0; + + if (!rte_is_valid_assigned_ether_addr(addr)) { + PMD_DRV_LOG(ERR, "mac address is invalid!"); + return -EINVAL; + } + + if (hw->is_pf) { + ret = zxdh_del_mac_table(hw->vport.vport, old_addr, hw->hash_search_index); + if (ret) { + PMD_DRV_LOG(ERR, "mac_addr_add failed, code:%d", ret); + return -ret; + } + hw->uc_num--; + + ret = zxdh_set_mac_table(hw->vport.vport, addr, hw->hash_search_index); + if (ret) { + PMD_DRV_LOG(ERR, "mac_addr_add failed, code:%d", ret); + return -ret; + } + 
hw->uc_num++; + } else { + struct zxdh_mac_filter *mac_filter = &msg_info.data.mac_filter_msg; + + mac_filter->filter_flag = ZXDH_MAC_UNFILTER; + mac_filter->mac_flag = true; + rte_memcpy(&mac_filter->mac, old_addr, sizeof(struct rte_ether_addr)); + zxdh_msg_head_build(hw, ZXDH_MAC_DEL, &msg_info); + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d ", + hw->vport.vport, ZXDH_MAC_DEL); + return ret; + } + hw->uc_num--; + PMD_DRV_LOG(INFO, "Success to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_MAC_DEL); + + mac_filter->filter_flag = ZXDH_MAC_UNFILTER; + rte_memcpy(&mac_filter->mac, addr, sizeof(struct rte_ether_addr)); + zxdh_msg_head_build(hw, ZXDH_MAC_ADD, &msg_info); + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d ", + hw->vport.vport, ZXDH_MAC_ADD); + return ret; + } + hw->uc_num++; + } + rte_ether_addr_copy(addr, (struct rte_ether_addr *)hw->mac_addr); + return ret; +} + +int zxdh_dev_mac_addr_add(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr, + uint32_t index, uint32_t vmdq __rte_unused) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_info msg_info = {0}; + uint16_t i, ret; + + if (index >= ZXDH_MAX_MAC_ADDRS) { + PMD_DRV_LOG(ERR, "Add mac index (%u) is out of range", index); + return -EINVAL; + } + + for (i = 0; (i != ZXDH_MAX_MAC_ADDRS); ++i) { + if (memcmp(&dev->data->mac_addrs[i], mac_addr, sizeof(*mac_addr))) + continue; + + PMD_DRV_LOG(INFO, "MAC address already configured"); + return -EADDRINUSE; + } + + if (hw->is_pf) { + if (rte_is_unicast_ether_addr(mac_addr)) { + if (hw->uc_num < ZXDH_MAX_UC_MAC_ADDRS) { + ret = zxdh_set_mac_table(hw->vport.vport, + mac_addr, hw->hash_search_index); + if (ret) { + PMD_DRV_LOG(ERR, "mac_addr_add failed, code:%d", ret); + return -ret; + } + hw->uc_num++; + } else { + PMD_DRV_LOG(ERR, "MC_MAC is out of range, MAX_MC_MAC:%d", + ZXDH_MAX_MC_MAC_ADDRS); + return -EINVAL; + } + } else { + if (hw->mc_num < ZXDH_MAX_MC_MAC_ADDRS) { + ret = zxdh_set_mac_table(hw->vport.vport, + mac_addr, hw->hash_search_index); + if (ret) { + PMD_DRV_LOG(ERR, "mac_addr_add failed, code:%d", ret); + return -ret; + } + hw->mc_num++; + } else { + PMD_DRV_LOG(ERR, "MC_MAC is out of range, MAX_MC_MAC:%d", + ZXDH_MAX_MC_MAC_ADDRS); + return -EINVAL; + } + } + } else { + struct zxdh_mac_filter *mac_filter = &msg_info.data.mac_filter_msg; + + mac_filter->filter_flag = ZXDH_MAC_FILTER; + rte_memcpy(&mac_filter->mac, mac_addr, sizeof(struct rte_ether_addr)); + zxdh_msg_head_build(hw, ZXDH_MAC_ADD, &msg_info); + if (rte_is_unicast_ether_addr(mac_addr)) { + if (hw->uc_num < ZXDH_MAX_UC_MAC_ADDRS) { + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, + sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_MAC_ADD); + return -ret; + } + hw->uc_num++; + } else { + PMD_DRV_LOG(ERR, "MC_MAC is out of range, MAX_MC_MAC:%d", + ZXDH_MAX_MC_MAC_ADDRS); + return -EINVAL; + } + } else { + if (hw->mc_num < ZXDH_MAX_MC_MAC_ADDRS) { + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, + sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_MAC_ADD); + return -ret; + } + hw->mc_num++; + } else { + PMD_DRV_LOG(ERR, "MC_MAC is out of range, MAX_MC_MAC:%d", + ZXDH_MAX_MC_MAC_ADDRS); + return -EINVAL; + } + } + } + 
dev->data->mac_addrs[index] = *mac_addr; + return 0; +} +/** + * Fun: + */ +void zxdh_dev_mac_addr_remove(struct rte_eth_dev *dev __rte_unused, uint32_t index __rte_unused) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_info msg_info = {0}; + struct rte_ether_addr *mac_addr = &dev->data->mac_addrs[index]; + uint16_t ret = 0; + + if (index >= ZXDH_MAX_MAC_ADDRS) + return; + + if (hw->is_pf) { + if (rte_is_unicast_ether_addr(mac_addr)) { + if (hw->uc_num <= ZXDH_MAX_UC_MAC_ADDRS) { + ret = zxdh_del_mac_table(hw->vport.vport, + mac_addr, hw->hash_search_index); + if (ret) { + PMD_DRV_LOG(ERR, "mac_addr_del failed, code:%d", ret); + return; + } + hw->uc_num--; + } else { + PMD_DRV_LOG(ERR, "MC_MAC is out of range, MAX_MC_MAC:%d", + ZXDH_MAX_MC_MAC_ADDRS); + return; + } + } else { + if (hw->mc_num <= ZXDH_MAX_MC_MAC_ADDRS) { + ret = zxdh_del_mac_table(hw->vport.vport, + mac_addr, hw->hash_search_index); + if (ret) { + PMD_DRV_LOG(ERR, "mac_addr_del failed, code:%d", ret); + return; + } + hw->mc_num--; + } else { + PMD_DRV_LOG(ERR, "MC_MAC is out of range, MAX_MC_MAC:%d", + ZXDH_MAX_MC_MAC_ADDRS); + return; + } + } + } else { + struct zxdh_mac_filter *mac_filter = &msg_info.data.mac_filter_msg; + + mac_filter->filter_flag = ZXDH_MAC_FILTER; + rte_memcpy(&mac_filter->mac, mac_addr, sizeof(struct rte_ether_addr)); + zxdh_msg_head_build(hw, ZXDH_MAC_DEL, &msg_info); + if (rte_is_unicast_ether_addr(mac_addr)) { + if (hw->uc_num <= ZXDH_MAX_UC_MAC_ADDRS) { + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, + sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_MAC_DEL); + return; + } + hw->uc_num--; + } else { + PMD_DRV_LOG(ERR, "MC_MAC is out of range, MAX_MC_MAC:%d", + ZXDH_MAX_MC_MAC_ADDRS); + return; + } + } else { + if (hw->mc_num <= ZXDH_MAX_MC_MAC_ADDRS) { + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, + sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_MAC_DEL); + return; + } + hw->mc_num--; + } else { + PMD_DRV_LOG(ERR, "MC_MAC is out of range, MAX_MC_MAC:%d", + ZXDH_MAX_MC_MAC_ADDRS); + return; + } + } + } + memset(&dev->data->mac_addrs[index], 0, sizeof(struct rte_ether_addr)); +} diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.h b/drivers/net/zxdh/zxdh_ethdev_ops.h index c6d6ca56fd..4630bb70db 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.h +++ b/drivers/net/zxdh/zxdh_ethdev_ops.h @@ -10,5 +10,9 @@ int zxdh_dev_set_link_up(struct rte_eth_dev *dev); int zxdh_dev_set_link_down(struct rte_eth_dev *dev); int32_t zxdh_dev_link_update(struct rte_eth_dev *dev, int32_t wait_to_complete __rte_unused); +int zxdh_dev_mac_addr_add(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr, + uint32_t index, uint32_t vmdq); +int zxdh_dev_mac_addr_set(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr); +void zxdh_dev_mac_addr_remove(struct rte_eth_dev *dev, uint32_t index); #endif /* ZXDH_ETHDEV_OPS_H */ diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index 069e8b74d7..56018103bd 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -42,10 +42,13 @@ #define ZXDH_MSG_REPLY_BODY_MAX_LEN \ (ZXDH_MSG_PAYLOAD_MAX_LEN - sizeof(struct zxdh_msg_reply_head)) -#define ZXDH_MSG_HEAD_LEN 8 +#define ZXDH_MSG_HEAD_LEN 8 #define ZXDH_MSG_REQ_BODY_MAX_LEN \ (ZXDH_MSG_PAYLOAD_MAX_LEN - ZXDH_MSG_HEAD_LEN) +#define ZXDH_MAC_FILTER 0xaa +#define ZXDH_MAC_UNFILTER 0xff + enum ZXDH_DRIVER_TYPE { 
ZXDH_MSG_CHAN_END_MPF = 0, ZXDH_MSG_CHAN_END_PF, @@ -173,6 +176,8 @@ enum zxdh_msg_type { ZXDH_NULL = 0, ZXDH_VF_PORT_INIT = 1, ZXDH_VF_PORT_UNINIT = 2, + ZXDH_MAC_ADD = 3, + ZXDH_MAC_DEL = 4, ZXDH_PORT_ATTRS_SET = 25, @@ -314,6 +319,12 @@ struct zxdh_port_attr_set_msg { uint8_t allmulti_follow; } __rte_packed; +struct zxdh_mac_filter { + uint8_t mac_flag; + uint8_t filter_flag; + struct rte_ether_addr mac; +} __rte_packed; + struct zxdh_msg_head { enum zxdh_msg_type msg_type; uint16_t vport; @@ -341,6 +352,7 @@ struct zxdh_msg_info { struct zxdh_vf_init_msg vf_init_msg; struct zxdh_port_attr_set_msg port_attr_msg; struct zxdh_link_info_msg link_msg; + struct zxdh_mac_filter mac_filter_msg; } __rte_packed data; } __rte_packed; diff --git a/drivers/net/zxdh/zxdh_np.h b/drivers/net/zxdh/zxdh_np.h index ac3931ba65..19d1f03f59 100644 --- a/drivers/net/zxdh/zxdh_np.h +++ b/drivers/net/zxdh/zxdh_np.h @@ -532,6 +532,11 @@ typedef struct zxdh_sdt_tbl_porttbl_t { uint32_t porttbl_clutch_en; } ZXDH_SDTTBL_PORTTBL_T; +typedef struct zxdh_dtb_hash_entry_info_t { + uint8_t *p_actu_key; + uint8_t *p_rst; +} ZXDH_DTB_HASH_ENTRY_INFO_T; + int zxdh_np_host_init(uint32_t dev_id, ZXDH_DEV_INIT_CTRL_T *p_dev_init_ctrl); int zxdh_np_online_uninit(uint32_t dev_id, char *port_name, uint32_t queue_id); int zxdh_np_dtb_table_entry_write(uint32_t dev_id, uint32_t queue_id, diff --git a/drivers/net/zxdh/zxdh_tables.c b/drivers/net/zxdh/zxdh_tables.c index d0aa8b3533..8b1d1b91ce 100644 --- a/drivers/net/zxdh/zxdh_tables.c +++ b/drivers/net/zxdh/zxdh_tables.c @@ -11,6 +11,10 @@ #define ZXDH_SDT_VPORT_ATT_TABLE 1 #define ZXDH_SDT_PANEL_ATT_TABLE 2 +#define ZXDH_MAC_HASH_INDEX_BASE 64 +#define ZXDH_MAC_HASH_INDEX(index) (ZXDH_MAC_HASH_INDEX_BASE + (index)) +#define ZXDH_MC_GROUP_NUM 4 + int zxdh_get_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr) { @@ -149,3 +153,196 @@ zxdh_panel_table_init(struct rte_eth_dev *dev) return ret; } + +int +zxdh_set_mac_table(uint16_t vport, struct rte_ether_addr *addr, uint8_t hash_search_idx) +{ + struct zxdh_mac_unicast_table unicast_table = {0}; + struct zxdh_mac_multicast_table multicast_table = {0}; + union zxdh_virport_num vport_num = (union zxdh_virport_num)vport; + uint32_t ret; + uint16_t group_id = 0; + + if (rte_is_unicast_ether_addr(addr)) { + rte_memcpy(unicast_table.key.dmac_addr, addr, sizeof(struct rte_ether_addr)); + unicast_table.entry.hit_flag = 0; + unicast_table.entry.vfid = vport_num.vfid; + + ZXDH_DTB_HASH_ENTRY_INFO_T dtb_hash_entry = { + .p_actu_key = (uint8_t *)&unicast_table.key, + .p_rst = (uint8_t *)&unicast_table.entry + }; + ZXDH_DTB_USER_ENTRY_T entry_get = { + .sdt_no = ZXDH_MAC_HASH_INDEX(hash_search_idx), + .p_entry_data = (void *)&dtb_hash_entry + }; + + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &entry_get); + if (ret) { + PMD_DRV_LOG(ERR, "Insert mac_table failed"); + return -ret; + } + } else { + for (group_id = 0; group_id < 4; group_id++) { + multicast_table.key.vf_group_id = group_id; + rte_memcpy(multicast_table.key.mac_addr, + addr, sizeof(struct rte_ether_addr)); + ZXDH_DTB_HASH_ENTRY_INFO_T dtb_hash_entry = { + .p_actu_key = (uint8_t *)&multicast_table.key, + .p_rst = (uint8_t *)&multicast_table.entry + }; + + ZXDH_DTB_USER_ENTRY_T entry_get = { + .sdt_no = ZXDH_MAC_HASH_INDEX(hash_search_idx), + .p_entry_data = (void *)&dtb_hash_entry + }; + + ret = zxdh_np_dtb_table_entry_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, + &entry_get, 1); + uint8_t index = (vport_num.vfid % 64) / 32; + if (ret == 0) { 
+ if (vport_num.vf_flag) { + if (group_id == vport_num.vfid / 64) + multicast_table.entry.mc_bitmap[index] |= + rte_cpu_to_be_32(UINT32_C(1) << + (31 - (vport_num.vfid % 64) % 32)); + } else { + if (group_id == vport_num.vfid / 64) + multicast_table.entry.mc_pf_enable = + rte_cpu_to_be_32((1 << 30)); + } + } else { + if (vport_num.vf_flag) { + if (group_id == vport_num.vfid / 64) + multicast_table.entry.mc_bitmap[index] |= + rte_cpu_to_be_32(UINT32_C(1) << + (31 - (vport_num.vfid % 64) % 32)); + else + multicast_table.entry.mc_bitmap[index] = + 0; + } else { + if (group_id == vport_num.vfid / 64) + multicast_table.entry.mc_pf_enable = + rte_cpu_to_be_32((1 << 30)); + else + multicast_table.entry.mc_pf_enable = 0; + } + } + + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, g_dtb_data.queueid, + 1, &entry_get); + if (ret) { + PMD_DRV_LOG(ERR, "add mac_table failed, code:%d", ret); + return -ret; + } + } + } + return 0; +} + +int +zxdh_del_mac_table(uint16_t vport, struct rte_ether_addr *addr, uint8_t hash_search_idx) +{ + struct zxdh_mac_unicast_table unicast_table = {0}; + struct zxdh_mac_multicast_table multicast_table = {0}; + union zxdh_virport_num vport_num = (union zxdh_virport_num)vport; + uint32_t ret, del_flag = 0; + uint16_t group_id = 0; + + if (rte_is_unicast_ether_addr(addr)) { + rte_memcpy(unicast_table.key.dmac_addr, addr, sizeof(struct rte_ether_addr)); + unicast_table.entry.hit_flag = 0; + unicast_table.entry.vfid = vport_num.vfid; + + ZXDH_DTB_HASH_ENTRY_INFO_T dtb_hash_entry = { + .p_actu_key = (uint8_t *)&unicast_table.key, + .p_rst = (uint8_t *)&unicast_table.entry + }; + + ZXDH_DTB_USER_ENTRY_T entry_get = { + .sdt_no = ZXDH_MAC_HASH_INDEX(hash_search_idx), + .p_entry_data = (void *)&dtb_hash_entry + }; + + ret = zxdh_np_dtb_table_entry_delete(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &entry_get); + if (ret) { + PMD_DRV_LOG(ERR, "delete l2_fwd_hash_table failed, code:%d", ret); + return -ret; + } + } else { + multicast_table.key.vf_group_id = vport_num.vfid / 64; + rte_memcpy(multicast_table.key.mac_addr, addr, sizeof(struct rte_ether_addr)); + + ZXDH_DTB_HASH_ENTRY_INFO_T dtb_hash_entry = { + .p_actu_key = (uint8_t *)&multicast_table.key, + .p_rst = (uint8_t *)&multicast_table.entry + }; + + ZXDH_DTB_USER_ENTRY_T entry_get = { + .sdt_no = ZXDH_MAC_HASH_INDEX(hash_search_idx), + .p_entry_data = (void *)&dtb_hash_entry + }; + + ret = zxdh_np_dtb_table_entry_get(ZXDH_DEVICE_NO, + g_dtb_data.queueid, &entry_get, 1); + uint8_t index = (vport_num.vfid % 64) / 32; + if (vport_num.vf_flag) + multicast_table.entry.mc_bitmap[index] &= + ~(rte_cpu_to_be_32(UINT32_C(1) << (31 - (vport_num.vfid % 64) % 32))); + else + multicast_table.entry.mc_pf_enable = 0; + + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &entry_get); + if (ret) { + PMD_DRV_LOG(ERR, "mac_addr_del mc_table failed, code:%d", ret); + return -ret; + } + + for (group_id = 0; group_id < ZXDH_MC_GROUP_NUM; group_id++) { + multicast_table.key.vf_group_id = group_id; + rte_memcpy(multicast_table.key.mac_addr, addr, + sizeof(struct rte_ether_addr)); + ZXDH_DTB_HASH_ENTRY_INFO_T dtb_hash_entry = { + .p_actu_key = (uint8_t *)&multicast_table.key, + .p_rst = (uint8_t *)&multicast_table.entry + }; + ZXDH_DTB_USER_ENTRY_T entry_get = { + .sdt_no = ZXDH_MAC_HASH_INDEX(hash_search_idx), + .p_entry_data = (void *)&dtb_hash_entry + }; + + ret = zxdh_np_dtb_table_entry_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, + &entry_get, 1); + if (multicast_table.entry.mc_bitmap[0] == 0 && + multicast_table.entry.mc_bitmap[1] == 0 && + multicast_table.entry.mc_pf_enable
== 0) { + if (group_id == (ZXDH_MC_GROUP_NUM - 1)) + del_flag = 1; + } else { + break; + } + } + if (del_flag) { + for (group_id = 0; group_id < ZXDH_MC_GROUP_NUM; group_id++) { + multicast_table.key.vf_group_id = group_id; + rte_memcpy(multicast_table.key.mac_addr, addr, + sizeof(struct rte_ether_addr)); + ZXDH_DTB_HASH_ENTRY_INFO_T dtb_hash_entry = { + .p_actu_key = (uint8_t *)&multicast_table.key, + .p_rst = (uint8_t *)&multicast_table.entry + }; + ZXDH_DTB_USER_ENTRY_T entry_get = { + .sdt_no = ZXDH_MAC_HASH_INDEX(hash_search_idx), + .p_entry_data = (void *)&dtb_hash_entry + }; + + ret = zxdh_np_dtb_table_entry_delete(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &entry_get); + } + } + } + return 0; +} diff --git a/drivers/net/zxdh/zxdh_tables.h b/drivers/net/zxdh/zxdh_tables.h index c3c852d768..caa8aebdd2 100644 --- a/drivers/net/zxdh/zxdh_tables.h +++ b/drivers/net/zxdh/zxdh_tables.h @@ -143,10 +143,46 @@ struct zxdh_panel_table { uint32_t rsv_2; }; /* 16B */ +struct zxdh_mac_unicast_key { + uint16_t rsv; + uint8_t dmac_addr[6]; +}; + +struct zxdh_mac_unicast_entry { + uint8_t rsv1 : 7, + hit_flag : 1; + uint8_t rsv; + uint16_t vfid; +}; + +struct zxdh_mac_unicast_table { + struct zxdh_mac_unicast_key key; + struct zxdh_mac_unicast_entry entry; +}; + +struct zxdh_mac_multicast_key { + uint8_t rsv; + uint8_t vf_group_id; + uint8_t mac_addr[6]; +}; + +struct zxdh_mac_multicast_entry { + uint32_t mc_pf_enable; + uint32_t rsv1; + uint32_t mc_bitmap[2]; +}; + +struct zxdh_mac_multicast_table { + struct zxdh_mac_multicast_key key; + struct zxdh_mac_multicast_entry entry; +}; + int zxdh_port_attr_init(struct rte_eth_dev *dev); int zxdh_panel_table_init(struct rte_eth_dev *dev); int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); int zxdh_port_attr_uninit(struct rte_eth_dev *dev); int zxdh_get_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); +int zxdh_set_mac_table(uint16_t vport, struct rte_ether_addr *addr, uint8_t hash_search_idx); +int zxdh_del_mac_table(uint16_t vport, struct rte_ether_addr *addr, uint8_t hash_search_idx); #endif /* ZXDH_TABLES_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 69306 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
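For reference, the multicast branch of zxdh_set_mac_table()/zxdh_del_mac_table() above spreads the member VFs of one multicast MAC over ZXDH_MC_GROUP_NUM (4) hash entries, keyed by vf_group_id, with 64 member bits per entry held MSB-first in the two 32-bit words of mc_bitmap[]. Below is a minimal C sketch of that addressing math, assuming the same MSB-first bit order the flood tables use later in this series; the struct and helper names are illustrative only, not part of the driver:

#include <stdint.h>

/* Illustrative only: locate the (group, word, bit) that represents one
 * member VF in the zxdh multicast MAC table. 'group' selects which of
 * the four hash entries (key.vf_group_id) holds the VF; 'word' and
 * 'bit' select its position inside that entry's mc_bitmap[2].
 */
struct zxdh_mc_bit_pos {
	uint8_t group; /* 0..3: hash entry, key.vf_group_id */
	uint8_t word;  /* 0..1: index into entry.mc_bitmap[] */
	uint8_t bit;   /* 0..31: shift before rte_cpu_to_be_32() */
};

static inline struct zxdh_mc_bit_pos
zxdh_mc_bit_pos_of(uint16_t vfid)
{
	struct zxdh_mc_bit_pos pos;

	pos.group = vfid / 64;       /* 64 member VFs per hash entry */
	pos.word = (vfid % 64) / 32; /* two 32-bit words per entry */
	pos.bit = 31 - (vfid % 32);  /* MSB-first inside a word */
	return pos;
}

For example, vfid 70 maps to group 1, word 0, bit 25, so the add path ORs rte_cpu_to_be_32(UINT32_C(1) << 25) into mc_bitmap[0] of the group-1 entry and the delete path clears the same bit; a PF is represented by the separate mc_pf_enable word (bit 30) of its group's entry rather than by a bitmap bit.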
* [PATCH v2 11/15] net/zxdh: promisc/allmulti ops implementations 2024-12-10 5:53 ` [PATCH v2 00/15] net/zxdh: updated net zxdh driver Junlong Wang ` (9 preceding siblings ...) 2024-12-10 5:53 ` [PATCH v2 10/15] net/zxdh: mac set/add/remove ops implementations Junlong Wang @ 2024-12-10 5:53 ` Junlong Wang 2024-12-10 5:53 ` [PATCH v2 12/15] net/zxdh: vlan filter/ offload " Junlong Wang ` (3 subsequent siblings) 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-10 5:53 UTC (permalink / raw) To: ferruh.yigit; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 18413 bytes --] provided promiscuous/allmulticast ops. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- doc/guides/nics/features/zxdh.ini | 2 + doc/guides/nics/zxdh.rst | 2 + drivers/net/zxdh/zxdh_ethdev.c | 18 +++ drivers/net/zxdh/zxdh_ethdev.h | 2 + drivers/net/zxdh/zxdh_ethdev_ops.c | 132 +++++++++++++++++ drivers/net/zxdh/zxdh_ethdev_ops.h | 4 + drivers/net/zxdh/zxdh_msg.h | 10 ++ drivers/net/zxdh/zxdh_tables.c | 219 +++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_tables.h | 22 +++ 9 files changed, 411 insertions(+) diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index dc09fe3453..e9b237e102 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -14,3 +14,5 @@ Link status = Y Link status event = Y Unicast MAC filter = Y Multicast MAC filter = Y +Promiscuous mode = Y +Allmulticast mode = Y diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index e0b0776aca..0399df1302 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -26,6 +26,8 @@ Features of the ZXDH PMD are: - Set Link down or up - Unicast MAC filter - Multicast MAC filter +- Promiscuous mode +- Multicast mode Driver compilation and testing diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index e9000dc59f..a92a113f25 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -906,6 +906,13 @@ zxdh_tables_uninit(struct rte_eth_dev *dev) PMD_DRV_LOG(ERR, "zxdh_port_attr_uninit failed"); return ret; } + + ret = zxdh_promisc_table_uninit(dev); + if (ret) { + PMD_DRV_LOG(ERR, "uninit promisc_table failed"); + return ret; + } + return ret; } @@ -1057,6 +1064,10 @@ static const struct eth_dev_ops zxdh_eth_dev_ops = { .mac_addr_add = zxdh_dev_mac_addr_add, .mac_addr_remove = zxdh_dev_mac_addr_remove, .mac_addr_set = zxdh_dev_mac_addr_set, + .promiscuous_enable = zxdh_dev_promiscuous_enable, + .promiscuous_disable = zxdh_dev_promiscuous_disable, + .allmulticast_enable = zxdh_dev_allmulticast_enable, + .allmulticast_disable = zxdh_dev_allmulticast_disable, }; static int32_t @@ -1311,6 +1322,13 @@ zxdh_tables_init(struct rte_eth_dev *dev) PMD_DRV_LOG(ERR, " panel table init failed"); return ret; } + + ret = zxdh_promisc_table_init(dev); + if (ret) { + PMD_DRV_LOG(ERR, "promisc_table_init failed"); + return ret; + } + return ret; } diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index afdb4636ee..1c1a4b58ce 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -95,6 +95,8 @@ struct zxdh_hw { uint8_t has_tx_offload; uint8_t has_rx_offload; uint8_t admin_status; + uint8_t promisc_status; + uint8_t allmulti_status; }; struct zxdh_dtb_shared_data { diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.c b/drivers/net/zxdh/zxdh_ethdev_ops.c index 2374250868..2655b035eb 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.c +++ 
b/drivers/net/zxdh/zxdh_ethdev_ops.c @@ -397,3 +397,135 @@ void zxdh_dev_mac_addr_remove(struct rte_eth_dev *dev __rte_unused, uint32_t ind } memset(&dev->data->mac_addrs[index], 0, sizeof(struct rte_ether_addr)); } + +int zxdh_dev_promiscuous_enable(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_info msg_info = {0}; + int16_t ret = 0; + + if (hw->promisc_status == 0) { + if (hw->is_pf) { + ret = zxdh_dev_unicast_table_set(hw, hw->vport.vport, true); + if (hw->allmulti_status == 0) + ret = zxdh_dev_multicast_table_set(hw, hw->vport.vport, true); + + } else { + struct zxdh_port_promisc_msg *promisc_msg = &msg_info.data.port_promisc_msg; + + zxdh_msg_head_build(hw, ZXDH_PORT_PROMISC_SET, &msg_info); + promisc_msg->mode = ZXDH_PROMISC_MODE; + promisc_msg->value = true; + if (hw->allmulti_status == 0) + promisc_msg->mc_follow = true; + + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_PROMISC_MODE); + return ret; + } + } + hw->promisc_status = 1; + } + return ret; +} +/** + * Disable promiscuous mode. + */ +int zxdh_dev_promiscuous_disable(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + int16_t ret = 0; + struct zxdh_msg_info msg_info = {0}; + + if (hw->promisc_status == 1) { + if (hw->is_pf) { + ret = zxdh_dev_unicast_table_set(hw, hw->vport.vport, false); + if (hw->allmulti_status == 0) + ret = zxdh_dev_multicast_table_set(hw, hw->vport.vport, false); + + } else { + struct zxdh_port_promisc_msg *promisc_msg = &msg_info.data.port_promisc_msg; + + zxdh_msg_head_build(hw, ZXDH_PORT_PROMISC_SET, &msg_info); + promisc_msg->mode = ZXDH_PROMISC_MODE; + promisc_msg->value = false; + if (hw->allmulti_status == 0) + promisc_msg->mc_follow = true; + + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_PROMISC_MODE); + return ret; + } + } + hw->promisc_status = 0; + } + return ret; +} +/** + * Enable allmulticast mode. + */ +int zxdh_dev_allmulticast_enable(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + int16_t ret = 0; + struct zxdh_msg_info msg_info = {0}; + + if (hw->allmulti_status == 0) { + if (hw->is_pf) { + ret = zxdh_dev_multicast_table_set(hw, hw->vport.vport, true); + } else { + struct zxdh_port_promisc_msg *promisc_msg = &msg_info.data.port_promisc_msg; + + zxdh_msg_head_build(hw, ZXDH_PORT_PROMISC_SET, &msg_info); + + promisc_msg->mode = ZXDH_ALLMULTI_MODE; + promisc_msg->value = true; + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_ALLMULTI_MODE); + return ret; + } + } + hw->allmulti_status = 1; + } + return ret; +} + +int zxdh_dev_allmulticast_disable(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + int16_t ret = 0; + struct zxdh_msg_info msg_info = {0}; + + if (hw->allmulti_status == 1) { + if (hw->is_pf) { + if (hw->promisc_status == 1) + goto end; + ret = zxdh_dev_multicast_table_set(hw, hw->vport.vport, false); + } else { + struct zxdh_port_promisc_msg *promisc_msg = &msg_info.data.port_promisc_msg; + + zxdh_msg_head_build(hw, ZXDH_PORT_PROMISC_SET, &msg_info); + if (hw->promisc_status == 1) + goto end; + promisc_msg->mode = ZXDH_ALLMULTI_MODE; + promisc_msg->value = false; + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info,
sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_ALLMULTI_MODE); + return ret; + } + } + hw->allmulti_status = 0; + } + return ret; +end: + hw->allmulti_status = 0; + return ret; +} diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.h b/drivers/net/zxdh/zxdh_ethdev_ops.h index 4630bb70db..394ddedc0e 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.h +++ b/drivers/net/zxdh/zxdh_ethdev_ops.h @@ -14,5 +14,9 @@ int zxdh_dev_mac_addr_add(struct rte_eth_dev *dev, struct rte_ether_addr *mac_ad uint32_t index, uint32_t vmdq); int zxdh_dev_mac_addr_set(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr); void zxdh_dev_mac_addr_remove(struct rte_eth_dev *dev, uint32_t index); +int zxdh_dev_promiscuous_enable(struct rte_eth_dev *dev); +int zxdh_dev_promiscuous_disable(struct rte_eth_dev *dev); +int zxdh_dev_allmulticast_enable(struct rte_eth_dev *dev); +int zxdh_dev_allmulticast_disable(struct rte_eth_dev *dev); #endif /* ZXDH_ETHDEV_OPS_H */ diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index 56018103bd..f6d54b1df7 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -48,6 +48,8 @@ #define ZXDH_MAC_FILTER 0xaa #define ZXDH_MAC_UNFILTER 0xff +#define ZXDH_PROMISC_MODE 1 +#define ZXDH_ALLMULTI_MODE 2 enum ZXDH_DRIVER_TYPE { ZXDH_MSG_CHAN_END_MPF = 0, @@ -180,6 +182,7 @@ enum zxdh_msg_type { ZXDH_MAC_DEL = 4, ZXDH_PORT_ATTRS_SET = 25, + ZXDH_PORT_PROMISC_SET = 26, ZXDH_MSG_TYPE_END, } __rte_packed; @@ -325,6 +328,12 @@ struct zxdh_mac_filter { struct rte_ether_addr mac; } __rte_packed; +struct zxdh_port_promisc_msg { + uint8_t mode; + uint8_t value; + uint8_t mc_follow; +} __rte_packed; + struct zxdh_msg_head { enum zxdh_msg_type msg_type; uint16_t vport; @@ -353,6 +362,7 @@ struct zxdh_msg_info { struct zxdh_port_attr_set_msg port_attr_msg; struct zxdh_link_info_msg link_msg; struct zxdh_mac_filter mac_filter_msg; + struct zxdh_port_promisc_msg port_promisc_msg; } __rte_packed data; } __rte_packed; diff --git a/drivers/net/zxdh/zxdh_tables.c b/drivers/net/zxdh/zxdh_tables.c index 8b1d1b91ce..f5d12e50fb 100644 --- a/drivers/net/zxdh/zxdh_tables.c +++ b/drivers/net/zxdh/zxdh_tables.c @@ -10,10 +10,15 @@ #define ZXDH_SDT_VPORT_ATT_TABLE 1 #define ZXDH_SDT_PANEL_ATT_TABLE 2 +#define ZXDH_SDT_BROCAST_ATT_TABLE 6 +#define ZXDH_SDT_UNICAST_ATT_TABLE 10 +#define ZXDH_SDT_MULTICAST_ATT_TABLE 11 #define ZXDH_MAC_HASH_INDEX_BASE 64 #define ZXDH_MAC_HASH_INDEX(index) (ZXDH_MAC_HASH_INDEX_BASE + (index)) #define ZXDH_MC_GROUP_NUM 4 +#define ZXDH_BASE_VFID 1152 +#define ZXDH_TABLE_HIT_FLAG 128 int zxdh_get_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr) @@ -346,3 +351,217 @@ zxdh_del_mac_table(uint16_t vport, struct rte_ether_addr *addr, uint8_t hash_se } return 0; } + +int zxdh_promisc_table_init(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint32_t ret, vf_group_id = 0; + struct zxdh_brocast_table brocast_table = {0}; + struct zxdh_unitcast_table uc_table = {0}; + struct zxdh_multicast_table mc_table = {0}; + + if (!hw->is_pf) + return 0; + + for (; vf_group_id < 4; vf_group_id++) { + brocast_table.flag = rte_be_to_cpu_32(ZXDH_TABLE_HIT_FLAG); + ZXDH_DTB_ERAM_ENTRY_INFO_T eram_brocast_entry = { + .index = ((hw->vfid - ZXDH_BASE_VFID) << 2) + vf_group_id, + .p_data = (uint32_t *)&brocast_table + }; + ZXDH_DTB_USER_ENTRY_T entry_brocast = { + .sdt_no = ZXDH_SDT_BROCAST_ATT_TABLE, + .p_entry_data = (void *)&eram_brocast_entry + }; + + ret = 
zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &entry_brocast); + if (ret) { + PMD_DRV_LOG(ERR, "write brocast table failed"); + return ret; + } + + uc_table.uc_flood_pf_enable = rte_be_to_cpu_32(ZXDH_TABLE_HIT_FLAG); + ZXDH_DTB_ERAM_ENTRY_INFO_T eram_uc_entry = { + .index = ((hw->vfid - ZXDH_BASE_VFID) << 2) + vf_group_id, + .p_data = (uint32_t *)&uc_table + }; + ZXDH_DTB_USER_ENTRY_T entry_unicast = { + .sdt_no = ZXDH_SDT_UNICAST_ATT_TABLE, + .p_entry_data = (void *)&eram_uc_entry + }; + + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &entry_unicast); + if (ret) { + PMD_DRV_LOG(ERR, "write unicast table failed"); + return ret; + } + + mc_table.mc_flood_pf_enable = rte_be_to_cpu_32(ZXDH_TABLE_HIT_FLAG); + ZXDH_DTB_ERAM_ENTRY_INFO_T eram_mc_entry = { + .index = ((hw->vfid - ZXDH_BASE_VFID) << 2) + vf_group_id, + .p_data = (uint32_t *)&mc_table + }; + ZXDH_DTB_USER_ENTRY_T entry_multicast = { + .sdt_no = ZXDH_SDT_MULTICAST_ATT_TABLE, + .p_entry_data = (void *)&eram_mc_entry + }; + + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, g_dtb_data.queueid, + 1, &entry_multicast); + if (ret) { + PMD_DRV_LOG(ERR, "write multicast table failed"); + return ret; + } + } + + return ret; +} + +int zxdh_promisc_table_uninit(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint32_t ret, vf_group_id = 0; + struct zxdh_brocast_table brocast_table = {0}; + struct zxdh_unitcast_table uc_table = {0}; + struct zxdh_multicast_table mc_table = {0}; + + if (!hw->is_pf) + return 0; + + for (; vf_group_id < 4; vf_group_id++) { + brocast_table.flag = rte_be_to_cpu_32(ZXDH_TABLE_HIT_FLAG); + ZXDH_DTB_ERAM_ENTRY_INFO_T eram_brocast_entry = { + .index = ((hw->vfid - ZXDH_BASE_VFID) << 2) + vf_group_id, + .p_data = (uint32_t *)&brocast_table + }; + ZXDH_DTB_USER_ENTRY_T entry_brocast = { + .sdt_no = ZXDH_SDT_BROCAST_ATT_TABLE, + .p_entry_data = (void *)&eram_brocast_entry + }; + + ret = zxdh_np_dtb_table_entry_delete(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &entry_brocast); + if (ret) { + PMD_DRV_LOG(ERR, "delete brocast table failed"); + return ret; + } + + uc_table.uc_flood_pf_enable = rte_be_to_cpu_32(ZXDH_TABLE_HIT_FLAG); + ZXDH_DTB_ERAM_ENTRY_INFO_T eram_uc_entry = { + .index = ((hw->vfid - ZXDH_BASE_VFID) << 2) + vf_group_id, + .p_data = (uint32_t *)&uc_table + }; + ZXDH_DTB_USER_ENTRY_T entry_unicast = { + .sdt_no = ZXDH_SDT_UNICAST_ATT_TABLE, + .p_entry_data = (void *)&eram_uc_entry + }; + + ret = zxdh_np_dtb_table_entry_delete(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &entry_unicast); + if (ret) { + PMD_DRV_LOG(ERR, "delete unicast table failed"); + return ret; + } + + mc_table.mc_flood_pf_enable = rte_be_to_cpu_32(ZXDH_TABLE_HIT_FLAG); + ZXDH_DTB_ERAM_ENTRY_INFO_T eram_mc_entry = { + .index = ((hw->vfid - ZXDH_BASE_VFID) << 2) + vf_group_id, + .p_data = (uint32_t *)&mc_table + }; + ZXDH_DTB_USER_ENTRY_T entry_multicast = { + .sdt_no = ZXDH_SDT_MULTICAST_ATT_TABLE, + .p_entry_data = (void *)&eram_mc_entry + }; + + ret = zxdh_np_dtb_table_entry_delete(ZXDH_DEVICE_NO, g_dtb_data.queueid, + 1, &entry_multicast); + if (ret) { + PMD_DRV_LOG(ERR, "delete multicast table failed"); + return ret; + } + } + + return ret; +} + +int zxdh_dev_unicast_table_set(struct zxdh_hw *hw, uint16_t vport, bool enable) +{ + int16_t ret = 0; + struct zxdh_unitcast_table uc_table = {0}; + union zxdh_virport_num vport_num = (union zxdh_virport_num)vport; + + ZXDH_DTB_ERAM_ENTRY_INFO_T uc_table_entry = { + .index = ((hw->vfid - ZXDH_BASE_VFID) << 2) +
vport_num.vfid / 64, + .p_data = (uint32_t *)&uc_table + }; + ZXDH_DTB_USER_ENTRY_T entry = { + .sdt_no = ZXDH_SDT_UNICAST_ATT_TABLE, + .p_entry_data = (void *)&uc_table_entry + }; + + ret = zxdh_np_dtb_table_entry_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, &entry, 1); + if (ret) { + PMD_DRV_LOG(ERR, "unicast_table_get_failed:%d", hw->vfid); + return -ret; + } + + if (vport_num.vf_flag) { + if (enable) + uc_table.bitmap[(vport_num.vfid % 64) / 32] |= + UINT32_C(1) << (31 - (vport_num.vfid % 64) % 32); + else + uc_table.bitmap[(vport_num.vfid % 64) / 32] &= + ~(UINT32_C(1) << (31 - (vport_num.vfid % 64) % 32)); + } else { + uc_table.uc_flood_pf_enable = rte_be_to_cpu_32(ZXDH_TABLE_HIT_FLAG + (enable << 6)); + } + + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, g_dtb_data.queueid, 1, &entry); + if (ret) { + PMD_DRV_LOG(ERR, "unicast_table_set_failed:%d", hw->vfid); + return -ret; + } + return 0; +} + +int zxdh_dev_multicast_table_set(struct zxdh_hw *hw, uint16_t vport, bool enable) +{ + int16_t ret = 0; + struct zxdh_multicast_table mc_table = {0}; + union zxdh_virport_num vport_num = (union zxdh_virport_num)vport; + + ZXDH_DTB_ERAM_ENTRY_INFO_T mc_table_entry = { + .index = ((hw->vfid - ZXDH_BASE_VFID) << 2) + vport_num.vfid / 64, + .p_data = (uint32_t *)&mc_table + }; + ZXDH_DTB_USER_ENTRY_T entry = { + .sdt_no = ZXDH_SDT_MULTICAST_ATT_TABLE, + .p_entry_data = (void *)&mc_table_entry + }; + + ret = zxdh_np_dtb_table_entry_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, &entry, 1); + if (ret) { + PMD_DRV_LOG(ERR, "allmulti_table_get_failed:%d", hw->vfid); + return -ret; + } + + if (vport_num.vf_flag) { + if (enable) + mc_table.bitmap[(vport_num.vfid % 64) / 32] |= + UINT32_C(1) << (31 - (vport_num.vfid % 64) % 32); + else + mc_table.bitmap[(vport_num.vfid % 64) / 32] &= + ~(UINT32_C(1) << (31 - (vport_num.vfid % 64) % 32)); + + } else { + mc_table.mc_flood_pf_enable = rte_be_to_cpu_32(ZXDH_TABLE_HIT_FLAG + (enable << 6)); + } + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, g_dtb_data.queueid, 1, &entry); + if (ret) { + PMD_DRV_LOG(ERR, "allmulti_table_set_failed:%d", hw->vfid); + return -ret; + } + return 0; +} diff --git a/drivers/net/zxdh/zxdh_tables.h b/drivers/net/zxdh/zxdh_tables.h index caa8aebdd2..88cdff0053 100644 --- a/drivers/net/zxdh/zxdh_tables.h +++ b/drivers/net/zxdh/zxdh_tables.h @@ -177,12 +177,34 @@ struct zxdh_mac_multicast_table { struct zxdh_mac_multicast_entry entry; }; +struct zxdh_brocast_table { + uint32_t flag; + uint32_t rsv; + uint32_t bitmap[2]; +}; + +struct zxdh_unitcast_table { + uint32_t uc_flood_pf_enable; + uint32_t rsv; + uint32_t bitmap[2]; +}; + +struct zxdh_multicast_table { + uint32_t mc_flood_pf_enable; + uint32_t rsv; + uint32_t bitmap[2]; +}; + int zxdh_port_attr_init(struct rte_eth_dev *dev); int zxdh_panel_table_init(struct rte_eth_dev *dev); int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); int zxdh_port_attr_uninit(struct rte_eth_dev *dev); +int zxdh_promisc_table_init(struct rte_eth_dev *dev); +int zxdh_promisc_table_uninit(struct rte_eth_dev *dev); int zxdh_get_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); int zxdh_set_mac_table(uint16_t vport, struct rte_ether_addr *addr, uint8_t hash_search_idx); int zxdh_del_mac_table(uint16_t vport, struct rte_ether_addr *addr, uint8_t hash_search_idx); +int zxdh_dev_unicast_table_set(struct zxdh_hw *hw, uint16_t vport, bool enable); +int zxdh_dev_multicast_table_set(struct zxdh_hw *hw, uint16_t vport, bool enable); #endif /* ZXDH_TABLES_H */ -- 2.27.0 [-- 
Attachment #1.1.2: Type: text/html , Size: 45156 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
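For reference, the broadcast/unicast/multicast flood tables that zxdh_promisc_table_init() populates and zxdh_dev_unicast_table_set()/zxdh_dev_multicast_table_set() update above share one ERAM layout: a PF owns the four consecutive entries starting at (hw->vfid - ZXDH_BASE_VFID) << 2, one entry per group of 64 member VFs, and each member VF occupies one MSB-first bit of that entry's bitmap[2]. A minimal sketch of the index math, assuming hw->vfid is the PF's absolute vfid and the member vfid is the per-PF index the patch uses; the helper names are illustrative only:

#include <stdint.h>

#define ZXDH_BASE_VFID 1152 /* as defined in zxdh_tables.c above */

/* Illustrative only: ERAM entry index holding one member VF's bit. */
static inline uint32_t
zxdh_flood_entry_index(uint16_t pf_vfid, uint16_t member_vfid)
{
	/* Four entries per PF, one per 64-VF group of the member. */
	return ((uint32_t)(pf_vfid - ZXDH_BASE_VFID) << 2) + member_vfid / 64;
}

/* Illustrative only: word and mask for the member's bit in bitmap[2]. */
static inline uint32_t
zxdh_flood_member_mask(uint16_t member_vfid, uint8_t *word)
{
	*word = (member_vfid % 64) / 32;
	return UINT32_C(1) << (31 - (member_vfid % 64) % 32);
}

With pf_vfid 1152 and member vfid 130, the member lands in entry 2, bitmap[0], bit 29, which is exactly the |=/&= pattern the patch applies; the PF itself is flooded through the uc_flood_pf_enable/mc_flood_pf_enable word instead of a bitmap bit.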
* [PATCH v2 12/15] net/zxdh: vlan filter/ offload ops implementations 2024-12-10 5:53 ` [PATCH v2 00/15] net/zxdh: updated net zxdh driver Junlong Wang ` (10 preceding siblings ...) 2024-12-10 5:53 ` [PATCH v2 11/15] net/zxdh: promisc/allmulti " Junlong Wang @ 2024-12-10 5:53 ` Junlong Wang 2024-12-10 5:53 ` [PATCH v2 13/15] net/zxdh: rss hash config/update, reta update/get Junlong Wang ` (2 subsequent siblings) 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-10 5:53 UTC (permalink / raw) To: ferruh.yigit; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 20663 bytes --] provided vlan filter, vlan offload ops. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- doc/guides/nics/features/zxdh.ini | 3 + doc/guides/nics/zxdh.rst | 3 + drivers/net/zxdh/zxdh_ethdev.c | 40 +++++- drivers/net/zxdh/zxdh_ethdev_ops.c | 221 +++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev_ops.h | 2 + drivers/net/zxdh/zxdh_msg.h | 22 +++ drivers/net/zxdh/zxdh_rxtx.c | 18 +++ drivers/net/zxdh/zxdh_tables.c | 97 +++++++++++++ drivers/net/zxdh/zxdh_tables.h | 10 +- 9 files changed, 413 insertions(+), 3 deletions(-) diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index e9b237e102..6fb006c2da 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -16,3 +16,6 @@ Unicast MAC filter = Y Multicast MAC filter = Y Promiscuous mode = Y Allmulticast mode = Y +VLAN filter = Y +VLAN offload = Y +QinQ offload = Y diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index 0399df1302..3a7585d123 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -28,6 +28,9 @@ Features of the ZXDH PMD are: - Multicast MAC filter - Promiscuous mode - Multicast mode +- VLAN filter and VLAN offload +- VLAN stripping and inserting +- QINQ stripping and inserting Driver compilation and testing diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index a92a113f25..8a8f588956 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -759,6 +759,34 @@ zxdh_alloc_queues(struct rte_eth_dev *dev, uint16_t nr_vq) return 0; } +static int +zxdh_vlan_offload_configure(struct rte_eth_dev *dev) +{ + int ret; + int mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK | RTE_ETH_QINQ_STRIP_MASK; + + ret = zxdh_dev_vlan_offload_set(dev, mask); + if (ret) { + PMD_DRV_LOG(ERR, "vlan offload set error"); + return -1; + } + + return 0; +} + +static int +zxdh_dev_conf_offload(struct rte_eth_dev *dev) +{ + int ret = 0; + + ret = zxdh_vlan_offload_configure(dev); + if (ret) { + PMD_DRV_LOG(ERR, "zxdh_vlan_offload_configure failed"); + return ret; + } + + return 0; +} static int32_t zxdh_dev_configure(struct rte_eth_dev *dev) @@ -816,7 +844,7 @@ zxdh_dev_configure(struct rte_eth_dev *dev) nr_vq = dev->data->nb_rx_queues + dev->data->nb_tx_queues; if (nr_vq == hw->queue_num) - return 0; + goto end; PMD_DRV_LOG(DEBUG, "queue changed need reset "); /* Reset the device although not necessary at startup */ @@ -848,6 +876,8 @@ zxdh_dev_configure(struct rte_eth_dev *dev) zxdh_pci_reinit_complete(hw); +end: + zxdh_dev_conf_offload(dev); return ret; } @@ -1068,6 +1098,8 @@ static const struct eth_dev_ops zxdh_eth_dev_ops = { .promiscuous_disable = zxdh_dev_promiscuous_disable, .allmulticast_enable = zxdh_dev_allmulticast_enable, .allmulticast_disable = zxdh_dev_allmulticast_disable, + .vlan_filter_set = zxdh_dev_vlan_filter_set, + .vlan_offload_set = 
zxdh_dev_vlan_offload_set, }; static int32_t @@ -1329,6 +1361,12 @@ zxdh_tables_init(struct rte_eth_dev *dev) return ret; } + ret = zxdh_vlan_filter_table_init(dev); + if (ret) { + PMD_DRV_LOG(ERR, "vlan filter table init failed"); + return ret; + } + return ret; } diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.c b/drivers/net/zxdh/zxdh_ethdev_ops.c index 2655b035eb..aae4b90eb4 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.c +++ b/drivers/net/zxdh/zxdh_ethdev_ops.c @@ -2,6 +2,8 @@ * Copyright(c) 2024 ZTE Corporation */ +#include <rte_malloc.h> + #include "zxdh_ethdev.h" #include "zxdh_pci.h" #include "zxdh_msg.h" @@ -9,6 +11,8 @@ #include "zxdh_tables.h" #include "zxdh_logs.h" +#define ZXDH_VLAN_FILTER_GROUPS 64 + static int32_t zxdh_config_port_status(struct rte_eth_dev *dev, uint16_t link_status) { struct zxdh_hw *hw = dev->data->dev_private; @@ -529,3 +533,220 @@ int zxdh_dev_allmulticast_disable(struct rte_eth_dev *dev) hw->allmulti_status = 0; return ret; } + +int zxdh_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on) +{ + struct zxdh_hw *hw = (struct zxdh_hw *)dev->data->dev_private; + uint16_t idx = 0; + uint16_t bit_idx = 0; + uint8_t msg_type = 0; + int ret = 0; + + vlan_id &= RTE_VLAN_ID_MASK; + if (vlan_id == 0 || vlan_id == RTE_ETHER_MAX_VLAN_ID) { + PMD_DRV_LOG(ERR, "vlan id (%d) is reserved", vlan_id); + return -EINVAL; + } + + if (dev->data->dev_started == 0) { + PMD_DRV_LOG(ERR, "vlan filter: device not started"); + return -1; + } + + idx = vlan_id / ZXDH_VLAN_FILTER_GROUPS; + bit_idx = vlan_id % ZXDH_VLAN_FILTER_GROUPS; + + if (on) { + if (dev->data->vlan_filter_conf.ids[idx] & (1ULL << bit_idx)) { + PMD_DRV_LOG(ERR, "vlan:%d has already been added.", vlan_id); + return 0; + } + msg_type = ZXDH_VLAN_FILTER_ADD; + } else { + if (!(dev->data->vlan_filter_conf.ids[idx] & (1ULL << bit_idx))) { + PMD_DRV_LOG(ERR, "vlan:%d has already been deleted.", vlan_id); + return 0; + } + msg_type = ZXDH_VLAN_FILTER_DEL; + } + + if (hw->is_pf) { + ret = zxdh_vlan_filter_table_set(hw->vport.vport, vlan_id, on); + if (ret) { + PMD_DRV_LOG(ERR, "vlan_id:%d table set failed.", vlan_id); + return -1; + } + } else { + struct zxdh_msg_info msg = {0}; + zxdh_msg_head_build(hw, msg_type, &msg); + msg.data.vlan_filter_msg.vlan_id = vlan_id; + ret = zxdh_vf_send_msg_to_pf(dev, &msg, sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d ", + hw->vport.vport, msg_type); + return ret; + } + } + + if (on) + dev->data->vlan_filter_conf.ids[idx] |= (1ULL << bit_idx); + else + dev->data->vlan_filter_conf.ids[idx] &= ~(1ULL << bit_idx); + + return 0; +} + +int zxdh_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct rte_eth_rxmode *rxmode; + struct zxdh_msg_info msg = {0}; + struct zxdh_port_attr_table port_attr = {0}; + int ret = 0; + + rxmode = &dev->data->dev_conf.rxmode; + if (mask & RTE_ETH_VLAN_FILTER_MASK) { + if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) { + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.vlan_filter_enable = true; + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "port %d vlan filter set failed", + hw->vport.vfid); + return -EAGAIN; + } + } else { + msg.data.vlan_filter_set_msg.enable = true; + zxdh_msg_head_build(hw, ZXDH_VLAN_FILTER_SET, &msg); + ret = zxdh_vf_send_msg_to_pf(hw->eth_dev, &msg, + sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "port
%d vlan filter set failed", + hw->vport.vfid); + return -EAGAIN; + } + } + } else { + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.vlan_filter_enable = false; + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "port %d vlan filter set failed", + hw->vport.vfid); + return -EAGAIN; + } + } else { + msg.data.vlan_filter_set_msg.enable = true; + zxdh_msg_head_build(hw, ZXDH_VLAN_FILTER_SET, &msg); + ret = zxdh_vf_send_msg_to_pf(hw->eth_dev, &msg, + sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "port %d vlan filter set failed", + hw->vport.vfid); + return -EAGAIN; + } + } + } + } + + if (mask & RTE_ETH_VLAN_STRIP_MASK) { + if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) { + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.vlan_strip_offload = true; + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "port %d vlan strip set failed", + hw->vport.vfid); + return -EAGAIN; + } + } else { + msg.data.vlan_offload_msg.enable = true; + msg.data.vlan_offload_msg.type = ZXDH_VLAN_STRIP_MSG_TYPE; + zxdh_msg_head_build(hw, ZXDH_VLAN_OFFLOAD, &msg); + ret = zxdh_vf_send_msg_to_pf(hw->eth_dev, &msg, + sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "port %d vlan strip set failed", + hw->vport.vfid); + return -EAGAIN; + } + } + } else { + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.vlan_strip_offload = false; + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "port %d vlan strip set failed", + hw->vport.vfid); + return -EAGAIN; + } + } else { + msg.data.vlan_offload_msg.enable = false; + msg.data.vlan_offload_msg.type = ZXDH_VLAN_STRIP_MSG_TYPE; + zxdh_msg_head_build(hw, ZXDH_VLAN_OFFLOAD, &msg); + ret = zxdh_vf_send_msg_to_pf(hw->eth_dev, &msg, + sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "port %d vlan strip set failed", + hw->vport.vfid); + return -EAGAIN; + } + } + } + } + + if (mask & RTE_ETH_QINQ_STRIP_MASK) { + memset(&msg, 0, sizeof(struct zxdh_msg_info)); + if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP) { + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.qinq_strip_offload = true; + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "port %d qinq offload set failed", + hw->vport.vfid); + return -EAGAIN; + } + } else { + msg.data.vlan_offload_msg.enable = true; + msg.data.vlan_offload_msg.type = ZXDH_QINQ_STRIP_MSG_TYPE; + zxdh_msg_head_build(hw, ZXDH_VLAN_OFFLOAD, &msg); + ret = zxdh_vf_send_msg_to_pf(hw->eth_dev, &msg, + sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "port %d qinq offload set failed", + hw->vport.vfid); + return -EAGAIN; + } + } + } else { + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.qinq_strip_offload = true; + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "port %d qinq offload set failed", + hw->vport.vfid); + return -EAGAIN; + } + } else { + msg.data.vlan_offload_msg.enable = false; + msg.data.vlan_offload_msg.type = ZXDH_QINQ_STRIP_MSG_TYPE; + zxdh_msg_head_build(hw, ZXDH_VLAN_OFFLOAD, &msg); + ret = zxdh_vf_send_msg_to_pf(hw->eth_dev, &msg, + sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "port %d qinq offload set failed", + hw->vport.vfid); + return -EAGAIN; + } + } 
+ } + } + + return ret; +} diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.h b/drivers/net/zxdh/zxdh_ethdev_ops.h index 394ddedc0e..058d271ab3 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.h +++ b/drivers/net/zxdh/zxdh_ethdev_ops.h @@ -18,5 +18,7 @@ int zxdh_dev_promiscuous_enable(struct rte_eth_dev *dev); int zxdh_dev_promiscuous_disable(struct rte_eth_dev *dev); int zxdh_dev_allmulticast_enable(struct rte_eth_dev *dev); int zxdh_dev_allmulticast_disable(struct rte_eth_dev *dev); +int zxdh_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on); +int zxdh_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask); #endif /* ZXDH_ETHDEV_OPS_H */ diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index f6d54b1df7..b1da47fc47 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -50,6 +50,8 @@ #define ZXDH_MAC_UNFILTER 0xff #define ZXDH_PROMISC_MODE 1 #define ZXDH_ALLMULTI_MODE 2 +#define ZXDH_VLAN_STRIP_MSG_TYPE 0 +#define ZXDH_QINQ_STRIP_MSG_TYPE 1 enum ZXDH_DRIVER_TYPE { ZXDH_MSG_CHAN_END_MPF = 0, @@ -180,6 +182,10 @@ enum zxdh_msg_type { ZXDH_VF_PORT_UNINIT = 2, ZXDH_MAC_ADD = 3, ZXDH_MAC_DEL = 4, + ZXDH_VLAN_FILTER_SET = 17, + ZXDH_VLAN_FILTER_ADD = 18, + ZXDH_VLAN_FILTER_DEL = 19, + ZXDH_VLAN_OFFLOAD = 21, ZXDH_PORT_ATTRS_SET = 25, ZXDH_PORT_PROMISC_SET = 26, @@ -341,6 +347,19 @@ struct zxdh_msg_head { uint16_t pcieid; } __rte_packed; +struct zxdh_vlan_filter { + uint16_t vlan_id; +}; + +struct zxdh_vlan_filter_set { + uint8_t enable; +}; + +struct zxdh_vlan_offload { + uint8_t enable; + uint8_t type; +} __rte_packed; + struct zxdh_agent_msg_head { enum zxdh_agent_msg_type msg_type; uint8_t panel_id; @@ -363,6 +382,9 @@ struct zxdh_msg_info { struct zxdh_link_info_msg link_msg; struct zxdh_mac_filter mac_filter_msg; struct zxdh_port_promisc_msg port_promisc_msg; + struct zxdh_vlan_filter vlan_filter_msg; + struct zxdh_vlan_filter_set vlan_filter_set_msg; + struct zxdh_vlan_offload vlan_offload_msg; } __rte_packed data; } __rte_packed; diff --git a/drivers/net/zxdh/zxdh_rxtx.c b/drivers/net/zxdh/zxdh_rxtx.c index 06290d48bb..0ffce50042 100644 --- a/drivers/net/zxdh/zxdh_rxtx.c +++ b/drivers/net/zxdh/zxdh_rxtx.c @@ -11,6 +11,9 @@ #include "zxdh_pci.h" #include "zxdh_queue.h" +#define ZXDH_SVLAN_TPID 0x88a8 +#define ZXDH_CVLAN_TPID 0x8100 + #define ZXDH_PKT_FORM_CPU 0x20 /* 1-cpu 0-np */ #define ZXDH_NO_IP_FRAGMENT 0x2000 /* ip fragment flag */ #define ZXDH_NO_IPID_UPDATE 0x4000 /* ipid update flag */ @@ -21,6 +24,9 @@ #define ZXDH_PI_L3TYPE_RSV 0xC0 #define ZXDH_PI_L3TYPE_MASK 0xC0 +#define ZXDH_PD_OFFLOAD_SVLAN_INSERT (1 << 14) +#define ZXDH_PD_OFFLOAD_CVLAN_INSERT (1 << 13) + #define ZXDH_PCODE_MASK 0x1F #define ZXDH_PCODE_IP_PKT_TYPE 0x01 #define ZXDH_PCODE_TCP_PKT_TYPE 0x02 @@ -259,6 +265,18 @@ static void zxdh_xmit_fill_net_hdr(struct rte_mbuf *cookie, hdr->pi_hdr.l3_offset = rte_be_to_cpu_16(l3_offset); hdr->pi_hdr.l4_offset = rte_be_to_cpu_16(l3_offset + cookie->l3_len); + if (cookie->ol_flags & RTE_MBUF_F_TX_VLAN) { + ol_flag |= ZXDH_PD_OFFLOAD_CVLAN_INSERT; + hdr->pi_hdr.vlan_id = rte_be_to_cpu_16(cookie->vlan_tci); + hdr->pd_hdr.cvlan_insert = + rte_be_to_cpu_32((ZXDH_CVLAN_TPID << 16) | cookie->vlan_tci); + } + if (cookie->ol_flags & RTE_MBUF_F_TX_QINQ) { + ol_flag |= ZXDH_PD_OFFLOAD_SVLAN_INSERT; + hdr->pd_hdr.svlan_insert = + rte_be_to_cpu_32((ZXDH_SVLAN_TPID << 16) | cookie->vlan_tci_outer); + } + hdr->pd_hdr.ol_flag = rte_be_to_cpu_32(ol_flag); } diff --git a/drivers/net/zxdh/zxdh_tables.c 
b/drivers/net/zxdh/zxdh_tables.c index f5d12e50fb..f5327495d4 100644 --- a/drivers/net/zxdh/zxdh_tables.c +++ b/drivers/net/zxdh/zxdh_tables.c @@ -10,6 +10,7 @@ #define ZXDH_SDT_VPORT_ATT_TABLE 1 #define ZXDH_SDT_PANEL_ATT_TABLE 2 +#define ZXDH_SDT_VLAN_ATT_TABLE 4 #define ZXDH_SDT_BROCAST_ATT_TABLE 6 #define ZXDH_SDT_UNICAST_ATT_TABLE 10 #define ZXDH_SDT_MULTICAST_ATT_TABLE 11 @@ -19,6 +20,10 @@ #define ZXDH_MC_GROUP_NUM 4 #define ZXDH_BASE_VFID 1152 #define ZXDH_TABLE_HIT_FLAG 128 +#define ZXDH_FIRST_VLAN_GROUP_BITS 23 +#define ZXDH_VLAN_GROUP_BITS 31 +#define ZXDH_VLAN_GROUP_NUM 35 +#define ZXDH_VLAN_FILTER_VLANID_STEP 120 int zxdh_get_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr) @@ -565,3 +570,95 @@ int zxdh_dev_multicast_table_set(struct zxdh_hw *hw, uint16_t vport, bool enable } return 0; } + +int zxdh_vlan_filter_table_init(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_vlan_filter_table vlan_table = {0}; + int16_t ret = 0; + + if (!hw->is_pf) + return 0; + + for (uint8_t vlan_group = 0; vlan_group < ZXDH_VLAN_GROUP_NUM; vlan_group++) { + if (vlan_group == 0) { + vlan_table.vlans[0] |= (1 << ZXDH_FIRST_VLAN_GROUP_BITS); + vlan_table.vlans[0] |= (1 << ZXDH_VLAN_GROUP_BITS); + + } else { + vlan_table.vlans[0] = 0; + } + uint32_t index = (vlan_group << 11) | hw->vport.vfid; + ZXDH_DTB_ERAM_ENTRY_INFO_T entry_data = { + .index = index, + .p_data = (uint32_t *)&vlan_table + }; + ZXDH_DTB_USER_ENTRY_T user_entry = {ZXDH_SDT_VLAN_ATT_TABLE, &entry_data}; + + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &user_entry); + if (ret != 0) { + PMD_DRV_LOG(ERR, + "[vfid:%d], vlan_group:%d, init vlan filter table failed", + hw->vport.vfid, vlan_group); + ret = -1; + } + } + + return ret; +} + +int zxdh_vlan_filter_table_set(uint16_t vport, uint16_t vlan_id, uint8_t enable) +{ + struct zxdh_vlan_filter_table vlan_table = {0}; + union zxdh_virport_num vport_num = (union zxdh_virport_num)vport; + int ret = 0; + + memset(&vlan_table, 0, sizeof(struct zxdh_vlan_filter_table)); + int table_num = vlan_id / ZXDH_VLAN_FILTER_VLANID_STEP; + uint32_t index = (table_num << 11) | vport_num.vfid; + uint16_t group = (vlan_id - table_num * ZXDH_VLAN_FILTER_VLANID_STEP) / 8 + 1; + + uint8_t val = sizeof(struct zxdh_vlan_filter_table) / sizeof(uint32_t); + uint8_t vlan_tbl_index = group / val; + uint16_t used_group = vlan_tbl_index * val; + + used_group = (used_group == 0 ? 0 : (used_group - 1)); + + ZXDH_DTB_ERAM_ENTRY_INFO_T entry_data = {index, (uint32_t *)&vlan_table}; + ZXDH_DTB_USER_ENTRY_T user_entry_get = {ZXDH_SDT_VLAN_ATT_TABLE, &entry_data}; + + ret = zxdh_np_dtb_table_entry_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, &user_entry_get, 1); + if (ret) { + PMD_DRV_LOG(ERR, "get vlan table failed"); + return -1; + } + uint16_t relative_vlan_id = vlan_id - table_num * ZXDH_VLAN_FILTER_VLANID_STEP; + uint32_t *base_group = &vlan_table.vlans[0]; + + *base_group |= 1 << 31; + base_group = &vlan_table.vlans[vlan_tbl_index]; + uint8_t valid_bits = (vlan_tbl_index == 0 ? 
+ ZXDH_FIRST_VLAN_GROUP_BITS : ZXDH_VLAN_GROUP_BITS) + 1; + + uint8_t shift_left = (valid_bits - (relative_vlan_id - used_group * 8) % valid_bits) - 1; + + if (enable) + *base_group |= 1 << shift_left; + else + *base_group &= ~(1 << shift_left); + + + ZXDH_DTB_USER_ENTRY_T user_entry_write = { + .sdt_no = ZXDH_SDT_VLAN_ATT_TABLE, + .p_entry_data = &entry_data + }; + + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &user_entry_write); + if (ret != 0) { + PMD_DRV_LOG(ERR, "write vlan table failed"); + return -1; + } + return 0; +} diff --git a/drivers/net/zxdh/zxdh_tables.h b/drivers/net/zxdh/zxdh_tables.h index 88cdff0053..3b69410a50 100644 --- a/drivers/net/zxdh/zxdh_tables.h +++ b/drivers/net/zxdh/zxdh_tables.h @@ -44,7 +44,7 @@ struct zxdh_port_attr_table { uint8_t rdma_offload_enable: 1; uint8_t vlan_filter_enable: 1; uint8_t vlan_strip_offload: 1; - uint8_t qinq_valn_strip_offload: 1; + uint8_t qinq_strip_offload: 1; uint8_t rss_enable: 1; uint8_t mtu_enable: 1; uint8_t hit_flag: 1; @@ -74,7 +74,7 @@ struct zxdh_port_attr_table { uint8_t rdma_offload_enable: 1; uint8_t vlan_filter_enable: 1; uint8_t vlan_strip_offload: 1; - uint8_t qinq_valn_strip_offload: 1; + uint8_t qinq_strip_offload: 1; uint8_t rss_enable: 1; uint8_t mtu_enable: 1; uint8_t hit_flag: 1; @@ -195,6 +195,10 @@ struct zxdh_multicast_table { uint32_t bitmap[2]; }; +struct zxdh_vlan_filter_table { + uint32_t vlans[4]; +}; + int zxdh_port_attr_init(struct rte_eth_dev *dev); int zxdh_panel_table_init(struct rte_eth_dev *dev); int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); @@ -206,5 +210,7 @@ int zxdh_set_mac_table(uint16_t vport, struct rte_ether_addr *addr, uint8_t has int zxdh_del_mac_table(uint16_t vport, struct rte_ether_addr *addr, uint8_t hash_search_idx); int zxdh_dev_unicast_table_set(struct zxdh_hw *hw, uint16_t vport, bool enable); int zxdh_dev_multicast_table_set(struct zxdh_hw *hw, uint16_t vport, bool enable); +int zxdh_vlan_filter_table_init(struct rte_eth_dev *dev); +int zxdh_vlan_filter_table_set(uint16_t vport, uint16_t vlan_id, uint8_t enable); #endif /* ZXDH_TABLES_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 55046 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
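For reference, the VLAN filter table that zxdh_vlan_filter_table_init()/zxdh_vlan_filter_table_set() manage above packs ZXDH_VLAN_FILTER_VLANID_STEP (120) VLAN IDs into each 16-byte ERAM entry: 24 usable bits in vlans[0] (bit 31 is reserved as the entry's valid flag) plus three full 32-bit words, so the 4096 VLAN IDs of one vfid span ZXDH_VLAN_GROUP_NUM (35) entries. A minimal sketch restating the patch's shift_left computation in flat form; the struct and helper names are illustrative only, not part of the driver:

#include <stdint.h>

#define ZXDH_VLANS_PER_ENTRY 120 /* 24 bits in word 0 + 3 * 32 bits */
#define ZXDH_FIRST_WORD_VLANS 24 /* word 0 keeps bit 31 as valid flag */

struct zxdh_vlan_bit_pos {
	uint32_t index; /* ERAM index: (entry_no << 11) | vfid */
	uint8_t word;   /* 0..3: index into vlans[] */
	uint8_t bit;    /* shift for the (1 << bit) mask */
};

static inline struct zxdh_vlan_bit_pos
zxdh_vlan_bit_pos_of(uint16_t vfid, uint16_t vlan_id)
{
	struct zxdh_vlan_bit_pos pos;
	uint16_t entry_no = vlan_id / ZXDH_VLANS_PER_ENTRY;
	uint16_t rel = vlan_id % ZXDH_VLANS_PER_ENTRY;

	pos.index = ((uint32_t)entry_no << 11) | vfid;
	if (rel < ZXDH_FIRST_WORD_VLANS) {
		pos.word = 0;
		pos.bit = ZXDH_FIRST_WORD_VLANS - 1 - rel; /* bits 23..0 */
	} else {
		pos.word = 1 + (rel - ZXDH_FIRST_WORD_VLANS) / 32;
		pos.bit = 31 - (rel - ZXDH_FIRST_WORD_VLANS) % 32;
	}
	return pos;
}

For example, VLAN 130 on vfid 0 yields ERAM index 1 << 11, word 0, bit 13, matching what the patch's group/used_group/valid_bits arithmetic produces; zxdh_vlan_filter_table_set() additionally ORs 1 << 31 into vlans[0] as the valid flag on every update.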
* [PATCH v2 13/15] net/zxdh: rss hash config/update, reta update/get 2024-12-10 5:53 ` [PATCH v2 00/15] net/zxdh: updated net zxdh driver Junlong Wang ` (11 preceding siblings ...) 2024-12-10 5:53 ` [PATCH v2 12/15] net/zxdh: vlan filter/ offload " Junlong Wang @ 2024-12-10 5:53 ` Junlong Wang 2024-12-10 5:53 ` [PATCH v2 14/15] net/zxdh: basic stats ops implementations Junlong Wang 2024-12-10 5:53 ` [PATCH v2 15/15] net/zxdh: mtu update " Junlong Wang 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-10 5:53 UTC (permalink / raw) To: ferruh.yigit; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 25550 bytes --] provided rss hash config/update, reta update/get ops. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- doc/guides/nics/features/zxdh.ini | 3 + doc/guides/nics/zxdh.rst | 1 + drivers/net/zxdh/zxdh_ethdev.c | 53 +++- drivers/net/zxdh/zxdh_ethdev.h | 3 + drivers/net/zxdh/zxdh_ethdev_ops.c | 410 +++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev_ops.h | 26 ++ drivers/net/zxdh/zxdh_msg.h | 22 ++ drivers/net/zxdh/zxdh_tables.c | 82 ++++++ drivers/net/zxdh/zxdh_tables.h | 7 + 9 files changed, 606 insertions(+), 1 deletion(-) diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index 6fb006c2da..415ca547d0 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -19,3 +19,6 @@ Allmulticast mode = Y VLAN filter = Y VLAN offload = Y QinQ offload = Y +RSS hash = Y +RSS reta update = Y +Inner RSS = Y diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index 3a7585d123..3cc6a1d348 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -31,6 +31,7 @@ Features of the ZXDH PMD are: - VLAN filter and VLAN offload - VLAN stripping and inserting - QINQ stripping and inserting +- Receive Side Scaling (RSS) Driver compilation and testing diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 8a8f588956..71c95d9bda 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -61,6 +61,9 @@ zxdh_dev_infos_get(struct rte_eth_dev *dev, dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TCP_LRO; dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_RSS_HASH; + dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_256; + dev_info->flow_type_rss_offloads = ZXDH_RSS_HF; + dev_info->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_MULTI_SEGS); dev_info->tx_offload_capa |= (RTE_ETH_TX_OFFLOAD_TCP_TSO | RTE_ETH_TX_OFFLOAD_UDP_TSO); @@ -781,13 +784,52 @@ zxdh_dev_conf_offload(struct rte_eth_dev *dev) ret = zxdh_vlan_offload_configure(dev); if (ret) { - PMD_DRV_LOG(ERR, "zxdh_vlan_offload_configure failed"); + PMD_DRV_LOG(ERR, "vlan offload configure failed"); + return ret; + } + + ret = zxdh_rss_configure(dev); + if (ret) { + PMD_DRV_LOG(ERR, "rss configure failed"); return ret; } return 0; } +static int +zxdh_rss_qid_config(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_port_attr_table port_attr = {0}; + struct zxdh_msg_info msg_info = {0}; + int ret = 0; + + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.port_base_qid = hw->channel_context[0].ph_chno & 0xfff; + + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "PF:%d port_base_qid insert failed", hw->vfid); + return ret; + } + } else { + struct zxdh_port_attr_set_msg *attr_msg = &msg_info.data.port_attr_msg; + + zxdh_msg_head_build(hw, ZXDH_PORT_ATTRS_SET, &msg_info); + 
attr_msg->mode = ZXDH_PORT_BASE_QID_FLAG; + attr_msg->value = hw->channel_context[0].ph_chno & 0xfff; + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d ", + hw->vport.vport, ZXDH_PORT_BASE_QID_FLAG); + return ret; + } + } + return ret; +} + static int32_t zxdh_dev_configure(struct rte_eth_dev *dev) { @@ -874,6 +916,11 @@ zxdh_dev_configure(struct rte_eth_dev *dev) return -1; } + ret = zxdh_rss_qid_config(dev); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to configure base qid!"); + return -1; + } zxdh_pci_reinit_complete(hw); end: @@ -1100,6 +1147,10 @@ static const struct eth_dev_ops zxdh_eth_dev_ops = { .allmulticast_disable = zxdh_dev_allmulticast_disable, .vlan_filter_set = zxdh_dev_vlan_filter_set, .vlan_offload_set = zxdh_dev_vlan_offload_set, + .reta_update = zxdh_dev_rss_reta_update, + .reta_query = zxdh_dev_rss_reta_query, + .rss_hash_update = zxdh_rss_hash_update, + .rss_hash_conf_get = zxdh_rss_hash_conf_get, }; static int32_t diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index 1c1a4b58ce..c53c2ddeb7 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -79,6 +79,7 @@ struct zxdh_hw { uint16_t queue_num; uint16_t mc_num; uint16_t uc_num; + uint16_t *rss_reta; uint8_t *isr; uint8_t weak_barriers; @@ -97,6 +98,8 @@ struct zxdh_hw { uint8_t admin_status; uint8_t promisc_status; uint8_t allmulti_status; + uint8_t rss_enable; + uint8_t rss_init; }; struct zxdh_dtb_shared_data { diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.c b/drivers/net/zxdh/zxdh_ethdev_ops.c index aae4b90eb4..290e6cfaee 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.c +++ b/drivers/net/zxdh/zxdh_ethdev_ops.c @@ -3,6 +3,7 @@ */ #include <rte_malloc.h> +#include <rte_ether.h> #include "zxdh_ethdev.h" #include "zxdh_pci.h" @@ -12,6 +13,14 @@ #include "zxdh_logs.h" #define ZXDH_VLAN_FILTER_GROUPS 64 +#define ZXDH_INVALID_LOGIC_QID 0xFFFFU + +/* Supported RSS */ +#define ZXDH_RSS_HF_MASK (~(ZXDH_RSS_HF)) +#define ZXDH_HF_F5 1 +#define ZXDH_HF_F3 2 +#define ZXDH_HF_MAC_VLAN 4 +#define ZXDH_HF_ALL 0 static int32_t zxdh_config_port_status(struct rte_eth_dev *dev, uint16_t link_status) { @@ -750,3 +759,404 @@ int zxdh_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask) return ret; } + +int +zxdh_dev_rss_reta_update(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_info msg = {0}; + uint16_t old_reta[RTE_ETH_RSS_RETA_SIZE_256]; + uint16_t idx; + uint16_t i; + uint16_t pos; + int ret; + + if (reta_size != RTE_ETH_RSS_RETA_SIZE_256) { + PMD_DRV_LOG(ERR, "reta_size is illegal(%u), reta_size should be 256", reta_size); + return -EINVAL; + } + if (!hw->rss_reta) { + hw->rss_reta = rte_zmalloc(NULL, RTE_ETH_RSS_RETA_SIZE_256 * sizeof(uint16_t), 4); + if (hw->rss_reta == NULL) { + PMD_DRV_LOG(ERR, "Failed to allocate RSS reta"); + return -ENOMEM; + } + } + for (idx = 0, i = 0; (i < reta_size); ++i) { + idx = i / RTE_ETH_RETA_GROUP_SIZE; + pos = i % RTE_ETH_RETA_GROUP_SIZE; + if (((reta_conf[idx].mask >> pos) & 0x1) == 0) + continue; + if (reta_conf[idx].reta[pos] >= dev->data->nb_rx_queues) { + PMD_DRV_LOG(ERR, "reta table value err(%u >= %u)", + reta_conf[idx].reta[pos], dev->data->nb_rx_queues); + return -EINVAL; + } + if (hw->rss_reta[i] != reta_conf[idx].reta[pos]) + break; + } + if (i == reta_size) { + PMD_DRV_LOG(DEBUG, "reta table same with buffered table");
return 0; + } + memcpy(old_reta, hw->rss_reta, sizeof(old_reta)); + + for (idx = 0, i = 0; i < reta_size; ++i) { + idx = i / RTE_ETH_RETA_GROUP_SIZE; + pos = i % RTE_ETH_RETA_GROUP_SIZE; + if (((reta_conf[idx].mask >> pos) & 0x1) == 0) + continue; + hw->rss_reta[i] = reta_conf[idx].reta[pos]; + } + + zxdh_msg_head_build(hw, ZXDH_RSS_RETA_SET, &msg); + for (i = 0; i < reta_size; i++) + msg.data.rss_reta.reta[i] = + (hw->channel_context[hw->rss_reta[i] * 2].ph_chno); + + + if (hw->is_pf) { + ret = zxdh_rss_table_set(hw->vport.vport, &msg.data.rss_reta); + if (ret) { + PMD_DRV_LOG(ERR, "rss reta table set failed"); + return -EINVAL; + } + } else { + ret = zxdh_vf_send_msg_to_pf(dev, &msg, sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "vf rss reta table set failed"); + return -EINVAL; + } + } + return ret; +} + +static uint16_t +zxdh_hw_qid_to_logic_qid(struct rte_eth_dev *dev, uint16_t qid) +{ + struct zxdh_hw *hw = (struct zxdh_hw *)dev->data->dev_private; + uint16_t rx_queues = dev->data->nb_rx_queues; + uint16_t i; + + for (i = 0; i < rx_queues; i++) { + if (qid == hw->channel_context[i * 2].ph_chno) + return i; + } + return ZXDH_INVALID_LOGIC_QID; +} + +int +zxdh_dev_rss_reta_query(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size) +{ + struct zxdh_hw *hw = (struct zxdh_hw *)dev->data->dev_private; + struct zxdh_msg_info msg = {0}; + struct zxdh_msg_reply_info reply_msg = {0}; + uint16_t idx; + uint16_t i; + int ret = 0; + uint16_t qid_logic; + + ret = (!reta_size || reta_size > RTE_ETH_RSS_RETA_SIZE_256); + if (ret) { + PMD_DRV_LOG(ERR, "requested reta size(%u) is invalid, must be 1 to %u", + reta_size, RTE_ETH_RSS_RETA_SIZE_256); + return -EINVAL; + } + + /* Fill each entry of the table even if its bit is not set.
*/ + for (idx = 0, i = 0; (i != reta_size); ++i) { + idx = i / RTE_ETH_RETA_GROUP_SIZE; + reta_conf[idx].reta[i % RTE_ETH_RETA_GROUP_SIZE] = hw->rss_reta[i]; + } + + + + zxdh_msg_head_build(hw, ZXDH_RSS_RETA_GET, &msg); + + if (hw->is_pf) { + ret = zxdh_rss_table_get(hw->vport.vport, &reply_msg.reply_body.rss_reta); + if (ret) { + PMD_DRV_LOG(ERR, "rss reta table get failed"); + return -EINVAL; + } + } else { + ret = zxdh_vf_send_msg_to_pf(dev, &msg, sizeof(struct zxdh_msg_info), + &reply_msg, sizeof(struct zxdh_msg_reply_info)); + if (ret) { + PMD_DRV_LOG(ERR, "vf rss reta table get failed"); + return -EINVAL; + } + } + + struct zxdh_rss_reta *reta_table = &reply_msg.reply_body.rss_reta; + + for (idx = 0, i = 0; i < reta_size; ++i) { + idx = i / RTE_ETH_RETA_GROUP_SIZE; + + qid_logic = zxdh_hw_qid_to_logic_qid(dev, reta_table->reta[i]); + if (qid_logic == ZXDH_INVALID_LOGIC_QID) { + PMD_DRV_LOG(ERR, "rsp phy reta qid (%u) is illegal(%u)", + reta_table->reta[i], qid_logic); + return -EINVAL; + } + reta_conf[idx].reta[i % RTE_ETH_RETA_GROUP_SIZE] = qid_logic; + } + return 0; +} + +static uint32_t +zxdh_rss_hf_to_hw(uint64_t hf) +{ + uint32_t hw_hf = 0; + + if (hf & ZXDH_HF_MAC_VLAN_ETH) + hw_hf |= ZXDH_HF_MAC_VLAN; + if (hf & ZXDH_HF_F3_ETH) + hw_hf |= ZXDH_HF_F3; + if (hf & ZXDH_HF_F5_ETH) + hw_hf |= ZXDH_HF_F5; + + if (hw_hf == (ZXDH_HF_MAC_VLAN | ZXDH_HF_F3 | ZXDH_HF_F5)) + hw_hf = ZXDH_HF_ALL; + return hw_hf; +} + +static uint64_t +zxdh_rss_hf_to_eth(uint32_t hw_hf) +{ + uint64_t hf = 0; + + if (hw_hf == ZXDH_HF_ALL) + return (ZXDH_HF_MAC_VLAN_ETH | ZXDH_HF_F3_ETH | ZXDH_HF_F5_ETH); + + if (hw_hf & ZXDH_HF_MAC_VLAN) + hf |= ZXDH_HF_MAC_VLAN_ETH; + if (hw_hf & ZXDH_HF_F3) + hf |= ZXDH_HF_F3_ETH; + if (hw_hf & ZXDH_HF_F5) + hf |= ZXDH_HF_F5_ETH; + + return hf; +} + +int +zxdh_rss_hash_update(struct rte_eth_dev *dev, + struct rte_eth_rss_conf *rss_conf) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct rte_eth_rss_conf *old_rss_conf = &dev->data->dev_conf.rx_adv_conf.rss_conf; + struct zxdh_msg_info msg = {0}; + struct zxdh_port_attr_table port_attr = {0}; + uint32_t hw_hf_new, hw_hf_old; + int need_update_hf = 0; + int ret = 0; + + ret = rss_conf->rss_hf & ZXDH_RSS_HF_MASK; + if (ret) { + PMD_DRV_LOG(ERR, "unsupported hash function (%08lx)", rss_conf->rss_hf); + return -EINVAL; + } + + hw_hf_new = zxdh_rss_hf_to_hw(rss_conf->rss_hf); + hw_hf_old = zxdh_rss_hf_to_hw(old_rss_conf->rss_hf); + + if ((hw_hf_new != hw_hf_old || !!rss_conf->rss_hf)) + need_update_hf = 1; + + if (need_update_hf) { + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.rss_enable = !!rss_conf->rss_hf; + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "rss enable set failed"); + return -EINVAL; + } + } else { + msg.data.rss_enable.enable = !!rss_conf->rss_hf; + zxdh_msg_head_build(hw, ZXDH_RSS_ENABLE, &msg); + ret = zxdh_vf_send_msg_to_pf(dev, &msg, + sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "rss enable set failed"); + return -EINVAL; + } + } + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.rss_hash_factor = hw_hf_new; + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "rss hash factor set failed"); + return -EINVAL; + } + } else { + msg.data.rss_hf.rss_hf = hw_hf_new; + zxdh_msg_head_build(hw, ZXDH_RSS_HF_SET, &msg); + ret = zxdh_vf_send_msg_to_pf(dev, &msg, + sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { +
PMD_DRV_LOG(ERR, "rss hash factor set failed"); + return -EINVAL; + } + } + old_rss_conf->rss_hf = rss_conf->rss_hf; + } + + return 0; +} + +int +zxdh_rss_hash_conf_get(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf) +{ + struct zxdh_hw *hw = (struct zxdh_hw *)dev->data->dev_private; + struct rte_eth_rss_conf *old_rss_conf = &dev->data->dev_conf.rx_adv_conf.rss_conf; + struct zxdh_msg_info msg = {0}; + struct zxdh_msg_reply_info reply_msg = {0}; + struct zxdh_port_attr_table port_attr = {0}; + int ret; + uint32_t hw_hf; + + if (rss_conf == NULL) { + PMD_DRV_LOG(ERR, "rss conf is NULL"); + return -ENOMEM; + } + + hw_hf = zxdh_rss_hf_to_hw(old_rss_conf->rss_hf); + rss_conf->rss_hf = zxdh_rss_hf_to_eth(hw_hf); + + zxdh_msg_head_build(hw, ZXDH_RSS_HF_GET, &msg); + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "rss hash factor set failed"); + return -EINVAL; + } + reply_msg.reply_body.rss_hf.rss_hf = port_attr.rss_hash_factor; + } else { + zxdh_msg_head_build(hw, ZXDH_RSS_HF_SET, &msg); + ret = zxdh_vf_send_msg_to_pf(dev, &msg, sizeof(struct zxdh_msg_info), + &reply_msg, sizeof(struct zxdh_msg_reply_info)); + if (ret) { + PMD_DRV_LOG(ERR, "rss hash factor set failed"); + return -EINVAL; + } + } + rss_conf->rss_hf = zxdh_rss_hf_to_eth(reply_msg.reply_body.rss_hf.rss_hf); + + return 0; +} + +static int +zxdh_get_rss_enable_conf(struct rte_eth_dev *dev) +{ + if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) + return dev->data->nb_rx_queues == 1 ? 0 : 1; + else if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_NONE) + return 0; + + return 0; +} + +int +zxdh_rss_configure(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *dev_data = dev->data; + struct zxdh_hw *hw = (struct zxdh_hw *)dev->data->dev_private; + struct zxdh_port_attr_table port_attr = {0}; + struct zxdh_msg_info msg = {0}; + int ret = 0; + uint32_t hw_hf; + uint32_t i; + + if (dev->data->nb_rx_queues == 0) { + PMD_DRV_LOG(ERR, "port %u nb_rx_queues is 0", dev->data->port_id); + return -1; + } + + /* config rss enable */ + uint8_t curr_rss_enable = zxdh_get_rss_enable_conf(dev); + + if (hw->rss_enable != curr_rss_enable) { + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.rss_enable = curr_rss_enable; + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "rss enable set failed"); + return -EINVAL; + } + } else { + msg.data.rss_enable.enable = curr_rss_enable; + zxdh_msg_head_build(hw, ZXDH_RSS_ENABLE, &msg); + ret = zxdh_vf_send_msg_to_pf(dev, &msg, + sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "rss enable set failed"); + return -EINVAL; + } + } + hw->rss_enable = curr_rss_enable; + } + + if (curr_rss_enable && hw->rss_init == 0) { + /* config hash factor */ + dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf = ZXDH_HF_F5_ETH; + hw_hf = zxdh_rss_hf_to_hw(dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf); + memset(&msg, 0, sizeof(msg)); + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.rss_hash_factor = hw_hf; + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "rss hash factor set failed"); + return -EINVAL; + } + } else { + msg.data.rss_hf.rss_hf = hw_hf; + zxdh_msg_head_build(hw, ZXDH_RSS_HF_SET, &msg); + ret = zxdh_vf_send_msg_to_pf(dev, &msg, + sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "rss hash factor set failed"); + return -EINVAL; + } + } + 
hw->rss_init = 1; + } + + if (!hw->rss_reta) { + hw->rss_reta = rte_zmalloc(NULL, RTE_ETH_RSS_RETA_SIZE_256 * sizeof(uint16_t), 4); + if (hw->rss_reta == NULL) { + PMD_DRV_LOG(ERR, "alloc memory fail"); + return -1; + } + } + for (i = 0; i < RTE_ETH_RSS_RETA_SIZE_256; i++) + hw->rss_reta[i] = i % dev_data->nb_rx_queues; + + /* hw config reta */ + zxdh_msg_head_build(hw, ZXDH_RSS_RETA_SET, &msg); + for (i = 0; i < RTE_ETH_RSS_RETA_SIZE_256; i++) + msg.data.rss_reta.reta[i] = + hw->channel_context[hw->rss_reta[i] * 2].ph_chno; + + if (hw->is_pf) { + ret = zxdh_rss_table_set(hw->vport.vport, &msg.data.rss_reta); + if (ret) { + PMD_DRV_LOG(ERR, "rss reta table set failed"); + return -EINVAL; + } + } else { + ret = zxdh_vf_send_msg_to_pf(dev, &msg, sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "vf rss reta table set failed"); + return -EINVAL; + } + } + return 0; +} diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.h b/drivers/net/zxdh/zxdh_ethdev_ops.h index 058d271ab3..860716d079 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.h +++ b/drivers/net/zxdh/zxdh_ethdev_ops.h @@ -5,8 +5,25 @@ #ifndef ZXDH_ETHDEV_OPS_H #define ZXDH_ETHDEV_OPS_H +#include <rte_ether.h> + #include "zxdh_ethdev.h" +#define ZXDH_ETH_RSS_L2 RTE_ETH_RSS_L2_PAYLOAD +#define ZXDH_ETH_RSS_IP \ + (RTE_ETH_RSS_IPV4 | \ + RTE_ETH_RSS_FRAG_IPV4 | \ + RTE_ETH_RSS_IPV6 | \ + RTE_ETH_RSS_FRAG_IPV6) +#define ZXDH_ETH_RSS_TCP (RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP) +#define ZXDH_ETH_RSS_UDP (RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP) +#define ZXDH_ETH_RSS_SCTP (RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_NONFRAG_IPV6_SCTP) + +#define ZXDH_HF_F5_ETH (ZXDH_ETH_RSS_TCP | ZXDH_ETH_RSS_UDP | ZXDH_ETH_RSS_SCTP) +#define ZXDH_HF_F3_ETH ZXDH_ETH_RSS_IP +#define ZXDH_HF_MAC_VLAN_ETH ZXDH_ETH_RSS_L2 +#define ZXDH_RSS_HF ((ZXDH_HF_MAC_VLAN_ETH | ZXDH_HF_F3_ETH | ZXDH_HF_F5_ETH)) + int zxdh_dev_set_link_up(struct rte_eth_dev *dev); int zxdh_dev_set_link_down(struct rte_eth_dev *dev); int32_t zxdh_dev_link_update(struct rte_eth_dev *dev, int32_t wait_to_complete __rte_unused); @@ -20,5 +37,14 @@ int zxdh_dev_allmulticast_enable(struct rte_eth_dev *dev); int zxdh_dev_allmulticast_disable(struct rte_eth_dev *dev); int zxdh_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on); int zxdh_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask); +int zxdh_dev_rss_reta_update(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size); +int zxdh_dev_rss_reta_query(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size); +int zxdh_rss_hash_update(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf); +int zxdh_rss_hash_conf_get(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf); +int zxdh_rss_configure(struct rte_eth_dev *dev); #endif /* ZXDH_ETHDEV_OPS_H */ diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index b1da47fc47..44fada3877 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -182,6 +182,11 @@ enum zxdh_msg_type { ZXDH_VF_PORT_UNINIT = 2, ZXDH_MAC_ADD = 3, ZXDH_MAC_DEL = 4, + ZXDH_RSS_ENABLE = 7, + ZXDH_RSS_RETA_SET = 8, + ZXDH_RSS_RETA_GET = 9, + ZXDH_RSS_HF_SET = 15, + ZXDH_RSS_HF_GET = 16, ZXDH_VLAN_FILTER_SET = 17, ZXDH_VLAN_FILTER_ADD = 18, ZXDH_VLAN_FILTER_DEL = 19, @@ -291,6 +296,14 @@ struct zxdh_link_info_msg { uint32_t speed; } __rte_packed; +struct zxdh_rss_reta { + uint32_t reta[RTE_ETH_RSS_RETA_SIZE_256]; +}; + +struct 
zxdh_rss_hf { + uint32_t rss_hf; +}; + struct zxdh_msg_reply_head { uint8_t flag; uint16_t reps_len; @@ -307,6 +320,8 @@ struct zxdh_msg_reply_body { union { uint8_t reply_data[ZXDH_MSG_REPLY_BODY_MAX_LEN - sizeof(enum zxdh_reps_flag)]; struct zxdh_link_info_msg link_msg; + struct zxdh_rss_hf rss_hf; + struct zxdh_rss_reta rss_reta; } __rte_packed; } __rte_packed; @@ -360,6 +375,10 @@ struct zxdh_vlan_offload { uint8_t type; } __rte_packed; +struct zxdh_rss_enable { + uint8_t enable; +}; + struct zxdh_agent_msg_head { enum zxdh_agent_msg_type msg_type; uint8_t panel_id; @@ -385,6 +404,9 @@ struct zxdh_msg_info { struct zxdh_vlan_filter vlan_filter_msg; struct zxdh_vlan_filter_set vlan_filter_set_msg; struct zxdh_vlan_offload vlan_offload_msg; + struct zxdh_rss_reta rss_reta; + struct zxdh_rss_enable rss_enable; + struct zxdh_rss_hf rss_hf; } __rte_packed data; } __rte_packed; diff --git a/drivers/net/zxdh/zxdh_tables.c b/drivers/net/zxdh/zxdh_tables.c index f5327495d4..83de96b24d 100644 --- a/drivers/net/zxdh/zxdh_tables.c +++ b/drivers/net/zxdh/zxdh_tables.c @@ -10,6 +10,7 @@ #define ZXDH_SDT_VPORT_ATT_TABLE 1 #define ZXDH_SDT_PANEL_ATT_TABLE 2 +#define ZXDH_SDT_RSS_ATT_TABLE 3 #define ZXDH_SDT_VLAN_ATT_TABLE 4 #define ZXDH_SDT_BROCAST_ATT_TABLE 6 #define ZXDH_SDT_UNICAST_ATT_TABLE 10 @@ -662,3 +663,84 @@ int zxdh_vlan_filter_table_set(uint16_t vport, uint16_t vlan_id, uint8_t enable) } return 0; } + +int +zxdh_rss_table_set(uint16_t vport, struct zxdh_rss_reta *rss_reta) +{ + struct zxdh_rss_to_vqid_table rss_vqid = {0}; + union zxdh_virport_num vport_num = (union zxdh_virport_num)vport; + int ret = 0; + + for (uint16_t i = 0; i < RTE_ETH_RSS_RETA_SIZE_256 / 8; i++) { + for (uint16_t j = 0; j < 8; j++) { + #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN + if (j % 2 == 0) + rss_vqid.vqm_qid[j + 1] = rss_reta->reta[i * 8 + j]; + else + rss_vqid.vqm_qid[j - 1] = rss_reta->reta[i * 8 + j]; + #else + rss_vqid.vqm_qid[j] = rss_reta->reta[i * 8 + j]; + #endif + } + + #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN + rss_vqid.vqm_qid[1] |= 0x8000; + #else + rss_vqid.vqm_qid[0] |= 0x8000; + #endif + ZXDH_DTB_ERAM_ENTRY_INFO_T entry = { + .index = vport_num.vfid * 32 + i, + .p_data = (uint32_t *)&rss_vqid + }; + ZXDH_DTB_USER_ENTRY_T user_entry_write = { + .sdt_no = ZXDH_SDT_RSS_ATT_TABLE, + .p_entry_data = &entry + }; + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &user_entry_write); + if (ret != 0) { + PMD_DRV_LOG(ERR, "write rss base qid failed vfid:%d", vport_num.vfid); + return ret; + } + } + return 0; +} + +int +zxdh_rss_table_get(uint16_t vport, struct zxdh_rss_reta *rss_reta) +{ + struct zxdh_rss_to_vqid_table rss_vqid = {0}; + union zxdh_virport_num vport_num = (union zxdh_virport_num)vport; + int ret = 0; + + for (uint16_t i = 0; i < RTE_ETH_RSS_RETA_SIZE_256 / 8; i++) { + ZXDH_DTB_ERAM_ENTRY_INFO_T entry = {vport_num.vfid * 32 + i, (uint32_t *)&rss_vqid}; + ZXDH_DTB_USER_ENTRY_T user_entry = {ZXDH_SDT_RSS_ATT_TABLE, &entry}; + + ret = zxdh_np_dtb_table_entry_get(ZXDH_DEVICE_NO, + g_dtb_data.queueid, &user_entry, 1); + if (ret != 0) { + PMD_DRV_LOG(ERR, "get rss tbl failed, vfid:%d", vport_num.vfid); + return -1; + } + + #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN + rss_vqid.vqm_qid[1] &= 0x7FFF; + #else + rss_vqid.vqm_qid[0] &= 0x7FFF; + #endif + uint8_t size = sizeof(struct zxdh_rss_to_vqid_table) / sizeof(uint16_t); + + for (int j = 0; j < size; j++) { + #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN + if (j % 2 == 0) + rss_reta->reta[i * 8 + j] = rss_vqid.vqm_qid[j + 1]; + else 
+ rss_reta->reta[i * 8 + j] = rss_vqid.vqm_qid[j - 1]; + #else + rss_reta->reta[i * 8 + j] = rss_vqid.vqm_qid[j]; + #endif + } + } + return 0; +} diff --git a/drivers/net/zxdh/zxdh_tables.h b/drivers/net/zxdh/zxdh_tables.h index 3b69410a50..abfe3d5b01 100644 --- a/drivers/net/zxdh/zxdh_tables.h +++ b/drivers/net/zxdh/zxdh_tables.h @@ -7,6 +7,7 @@ #include <stdint.h> +#define ZXDH_PORT_BASE_QID_FLAG 10 #define ZXDH_PORT_ATTR_IS_UP_FLAG 35 extern struct zxdh_dtb_shared_data g_dtb_data; @@ -199,6 +200,10 @@ struct zxdh_vlan_filter_table { uint32_t vlans[4]; }; +struct zxdh_rss_to_vqid_table { + uint16_t vqm_qid[8]; +}; + int zxdh_port_attr_init(struct rte_eth_dev *dev); int zxdh_panel_table_init(struct rte_eth_dev *dev); int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); @@ -212,5 +217,7 @@ int zxdh_dev_unicast_table_set(struct zxdh_hw *hw, uint16_t vport, bool enable); int zxdh_dev_multicast_table_set(struct zxdh_hw *hw, uint16_t vport, bool enable); int zxdh_vlan_filter_table_init(struct rte_eth_dev *dev); int zxdh_vlan_filter_table_set(uint16_t vport, uint16_t vlan_id, uint8_t enable); +int zxdh_rss_table_set(uint16_t vport, struct zxdh_rss_reta *rss_reta); +int zxdh_rss_table_get(uint16_t vport, struct zxdh_rss_reta *rss_reta); #endif /* ZXDH_TABLES_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 63652 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
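The subtle part of the RETA patch above is the byte-order handling in zxdh_rss_table_set()/zxdh_rss_table_get(): on little-endian hosts each pair of 16-bit queue ids is swapped inside its 32-bit word before the row is written to eRAM, and the lane holding the row's first entry carries a 0x8000 valid flag. Below is a minimal standalone sketch of just that packing step, assuming a little-endian host; the struct name, sample values and main() harness are illustrative stand-ins, not driver code.

#include <stdint.h>
#include <stdio.h>

/* stands in for struct zxdh_rss_to_vqid_table: one eRAM row = 8 queue ids */
struct rss_row { uint16_t vqm_qid[8]; };

/* little-endian row packing, mirroring zxdh_rss_table_set() */
static void pack_row_le(const uint32_t reta8[8], struct rss_row *row)
{
	int j;

	for (j = 0; j < 8; j++) {
		if (j % 2 == 0)		/* even entry lands in the odd lane ... */
			row->vqm_qid[j + 1] = (uint16_t)reta8[j];
		else			/* ... odd entry lands in the even lane */
			row->vqm_qid[j - 1] = (uint16_t)reta8[j];
	}
	row->vqm_qid[1] |= 0x8000;	/* valid flag, as in the patch */
}

int main(void)
{
	const uint32_t reta8[8] = {0, 1, 2, 3, 0, 1, 2, 3};
	struct rss_row row = {{0}};
	int j;

	pack_row_le(reta8, &row);
	for (j = 0; j < 8; j++)
		printf("lane %d: 0x%04x\n", j, row.vqm_qid[j]);
	return 0;
}

zxdh_rss_table_get() applies the same swap after clearing the valid bit, so a table set followed by a get returns the original logical ordering.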
* [PATCH v2 14/15] net/zxdh: basic stats ops implementations 2024-12-10 5:53 ` [PATCH v2 00/15] net/zxdh: updated net zxdh driver Junlong Wang ` (12 preceding siblings ...) 2024-12-10 5:53 ` [PATCH v2 13/15] net/zxdh: rss hash config/update, reta update/get Junlong Wang @ 2024-12-10 5:53 ` Junlong Wang 2024-12-10 5:53 ` [PATCH v2 15/15] net/zxdh: mtu update " Junlong Wang 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-10 5:53 UTC (permalink / raw) To: ferruh.yigit; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 37632 bytes --] basic stats ops implementations. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- doc/guides/nics/features/zxdh.ini | 2 + doc/guides/nics/zxdh.rst | 1 + drivers/net/zxdh/zxdh_ethdev.c | 2 + drivers/net/zxdh/zxdh_ethdev_ops.c | 353 +++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev_ops.h | 27 +++ drivers/net/zxdh/zxdh_msg.h | 16 ++ drivers/net/zxdh/zxdh_np.c | 349 ++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_np.h | 30 +++ drivers/net/zxdh/zxdh_queue.h | 2 + drivers/net/zxdh/zxdh_rxtx.c | 85 ++++++- drivers/net/zxdh/zxdh_tables.h | 9 +- 11 files changed, 870 insertions(+), 6 deletions(-) diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index 415ca547d0..98c141cf95 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -22,3 +22,5 @@ QinQ offload = Y RSS hash = Y RSS reta update = Y Inner RSS = Y +Basic stats = Y +Stats per queue = Y diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index 3cc6a1d348..c8a52b587c 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -32,6 +32,7 @@ Features of the ZXDH PMD are: - VLAN stripping and inserting - QINQ stripping and inserting - Receive Side Scaling (RSS) +- Port hardware statistics Driver compilation and testing diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 71c95d9bda..1238bb048d 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -1151,6 +1151,8 @@ static const struct eth_dev_ops zxdh_eth_dev_ops = { .reta_query = zxdh_dev_rss_reta_query, .rss_hash_update = zxdh_rss_hash_update, .rss_hash_conf_get = zxdh_rss_hash_conf_get, + .stats_get = zxdh_dev_stats_get, + .stats_reset = zxdh_dev_stats_reset, }; static int32_t diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.c b/drivers/net/zxdh/zxdh_ethdev_ops.c index 290e6cfaee..9c131cc56d 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.c +++ b/drivers/net/zxdh/zxdh_ethdev_ops.c @@ -11,6 +11,8 @@ #include "zxdh_ethdev_ops.h" #include "zxdh_tables.h" #include "zxdh_logs.h" +#include "zxdh_rxtx.h" +#include "zxdh_np.h" #define ZXDH_VLAN_FILTER_GROUPS 64 #define ZXDH_INVALID_LOGIC_QID 0xFFFFU @@ -22,6 +24,108 @@ #define ZXDH_HF_MAC_VLAN 4 #define ZXDH_HF_ALL 0 +struct zxdh_hw_mac_stats { + uint64_t rx_total; + uint64_t rx_pause; + uint64_t rx_unicast; + uint64_t rx_multicast; + uint64_t rx_broadcast; + uint64_t rx_vlan; + uint64_t rx_size_64; + uint64_t rx_size_65_127; + uint64_t rx_size_128_255; + uint64_t rx_size_256_511; + uint64_t rx_size_512_1023; + uint64_t rx_size_1024_1518; + uint64_t rx_size_1519_mru; + uint64_t rx_undersize; + uint64_t rx_oversize; + uint64_t rx_fragment; + uint64_t rx_jabber; + uint64_t rx_control; + uint64_t rx_eee; + + uint64_t tx_total; + uint64_t tx_pause; + uint64_t tx_unicast; + uint64_t tx_multicast; + uint64_t tx_broadcast; + uint64_t tx_vlan; + uint64_t tx_size_64; + uint64_t tx_size_65_127; + uint64_t 
tx_size_128_255; + uint64_t tx_size_256_511; + uint64_t tx_size_512_1023; + uint64_t tx_size_1024_1518; + uint64_t tx_size_1519_mtu; + uint64_t tx_undersize; + uint64_t tx_oversize; + uint64_t tx_fragment; + uint64_t tx_jabber; + uint64_t tx_control; + uint64_t tx_eee; + + uint64_t rx_error; + uint64_t rx_fcs_error; + uint64_t rx_drop; + + uint64_t tx_error; + uint64_t tx_fcs_error; + uint64_t tx_drop; + +} __rte_packed; + +struct zxdh_hw_mac_bytes { + uint64_t rx_total_bytes; + uint64_t rx_good_bytes; + uint64_t tx_total_bytes; + uint64_t tx_good_bytes; +} __rte_packed; + +struct zxdh_np_stats_data { + uint64_t n_pkts_dropped; + uint64_t n_bytes_dropped; +}; + +struct zxdh_xstats_name_off { + char name[RTE_ETH_XSTATS_NAME_SIZE]; + unsigned int offset; +}; + +static const struct zxdh_xstats_name_off zxdh_rxq_stat_strings[] = { + {"good_packets", offsetof(struct zxdh_virtnet_rx, stats.packets)}, + {"good_bytes", offsetof(struct zxdh_virtnet_rx, stats.bytes)}, + {"errors", offsetof(struct zxdh_virtnet_rx, stats.errors)}, + {"multicast_packets", offsetof(struct zxdh_virtnet_rx, stats.multicast)}, + {"broadcast_packets", offsetof(struct zxdh_virtnet_rx, stats.broadcast)}, + {"truncated_err", offsetof(struct zxdh_virtnet_rx, stats.truncated_err)}, + {"undersize_packets", offsetof(struct zxdh_virtnet_rx, stats.size_bins[0])}, + {"size_64_packets", offsetof(struct zxdh_virtnet_rx, stats.size_bins[1])}, + {"size_65_127_packets", offsetof(struct zxdh_virtnet_rx, stats.size_bins[2])}, + {"size_128_255_packets", offsetof(struct zxdh_virtnet_rx, stats.size_bins[3])}, + {"size_256_511_packets", offsetof(struct zxdh_virtnet_rx, stats.size_bins[4])}, + {"size_512_1023_packets", offsetof(struct zxdh_virtnet_rx, stats.size_bins[5])}, + {"size_1024_1518_packets", offsetof(struct zxdh_virtnet_rx, stats.size_bins[6])}, + {"size_1519_max_packets", offsetof(struct zxdh_virtnet_rx, stats.size_bins[7])}, +}; + +static const struct zxdh_xstats_name_off zxdh_txq_stat_strings[] = { + {"good_packets", offsetof(struct zxdh_virtnet_tx, stats.packets)}, + {"good_bytes", offsetof(struct zxdh_virtnet_tx, stats.bytes)}, + {"errors", offsetof(struct zxdh_virtnet_tx, stats.errors)}, + {"multicast_packets", offsetof(struct zxdh_virtnet_tx, stats.multicast)}, + {"broadcast_packets", offsetof(struct zxdh_virtnet_tx, stats.broadcast)}, + {"truncated_err", offsetof(struct zxdh_virtnet_tx, stats.truncated_err)}, + {"undersize_packets", offsetof(struct zxdh_virtnet_tx, stats.size_bins[0])}, + {"size_64_packets", offsetof(struct zxdh_virtnet_tx, stats.size_bins[1])}, + {"size_65_127_packets", offsetof(struct zxdh_virtnet_tx, stats.size_bins[2])}, + {"size_128_255_packets", offsetof(struct zxdh_virtnet_tx, stats.size_bins[3])}, + {"size_256_511_packets", offsetof(struct zxdh_virtnet_tx, stats.size_bins[4])}, + {"size_512_1023_packets", offsetof(struct zxdh_virtnet_tx, stats.size_bins[5])}, + {"size_1024_1518_packets", offsetof(struct zxdh_virtnet_tx, stats.size_bins[6])}, + {"size_1519_max_packets", offsetof(struct zxdh_virtnet_tx, stats.size_bins[7])}, +}; + static int32_t zxdh_config_port_status(struct rte_eth_dev *dev, uint16_t link_status) { struct zxdh_hw *hw = dev->data->dev_private; @@ -1160,3 +1264,252 @@ zxdh_rss_configure(struct rte_eth_dev *dev) } return 0; } + +static int32_t +zxdh_hw_vqm_stats_get(struct rte_eth_dev *dev, enum zxdh_agent_msg_type opcode, + struct zxdh_hw_vqm_stats *hw_stats) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_info msg_info = {0}; + struct zxdh_msg_reply_info reply_info = 
{0}; + enum ZXDH_BAR_MODULE_ID module_id; + int ret = 0; + + switch (opcode) { + case ZXDH_VQM_DEV_STATS_GET: + case ZXDH_VQM_QUEUE_STATS_GET: + case ZXDH_VQM_QUEUE_STATS_RESET: + module_id = ZXDH_BAR_MODULE_VQM; + break; + case ZXDH_MAC_STATS_GET: + case ZXDH_MAC_STATS_RESET: + module_id = ZXDH_BAR_MODULE_MAC; + break; + default: + PMD_DRV_LOG(ERR, "invalid opcode %u", opcode); + return -1; + } + + zxdh_agent_msg_build(hw, opcode, &msg_info); + + ret = zxdh_send_msg_to_riscv(dev, &msg_info, sizeof(struct zxdh_msg_info), + &reply_info, sizeof(struct zxdh_msg_reply_info), module_id); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to get hw stats"); + return -EAGAIN; + } + struct zxdh_msg_reply_body *reply_body = &reply_info.reply_body; + + rte_memcpy(hw_stats, &reply_body->vqm_stats, sizeof(struct zxdh_hw_vqm_stats)); + return 0; +} + +static int zxdh_hw_mac_stats_get(struct rte_eth_dev *dev, + struct zxdh_hw_mac_stats *mac_stats, + struct zxdh_hw_mac_bytes *mac_bytes) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint64_t virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_MAC_OFFSET); + uint64_t stats_addr = 0; + uint64_t bytes_addr = 0; + + if (hw->speed <= RTE_ETH_SPEED_NUM_25G) { + stats_addr = virt_addr + ZXDH_MAC_STATS_OFFSET + 352 * (hw->phyport % 4); + bytes_addr = virt_addr + ZXDH_MAC_BYTES_OFFSET + 32 * (hw->phyport % 4); + } else { + stats_addr = virt_addr + ZXDH_MAC_STATS_OFFSET + 352 * 4; + bytes_addr = virt_addr + ZXDH_MAC_BYTES_OFFSET + 32 * 4; + } + + rte_memcpy(mac_stats, (void *)stats_addr, sizeof(struct zxdh_hw_mac_stats)); + rte_memcpy(mac_bytes, (void *)bytes_addr, sizeof(struct zxdh_hw_mac_bytes)); + return 0; +} + +static void zxdh_data_hi_to_lo(uint64_t *data) +{ + uint32_t n_data_hi; + uint32_t n_data_lo; + + n_data_lo = *data >> 32; + n_data_hi = *data; + *data = (uint64_t)(rte_le_to_cpu_32(n_data_hi)) << 32 | + rte_le_to_cpu_32(n_data_lo); +} + +static int zxdh_np_stats_get(struct rte_eth_dev *dev, struct zxdh_hw_np_stats *np_stats) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_np_stats_data stats_data; + uint32_t stats_id = zxdh_vport_to_vfid(hw->vport); + uint32_t idx = 0; + int ret = 0; + + idx = stats_id + ZXDH_BROAD_STATS_EGRESS_BASE; + ret = zxdh_np_dtb_stats_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, + 0, idx, (uint32_t *)&np_stats->np_tx_broadcast); + if (ret) + return ret; + zxdh_data_hi_to_lo(&np_stats->np_tx_broadcast); + + idx = stats_id + ZXDH_BROAD_STATS_INGRESS_BASE; + memset(&stats_data, 0, sizeof(stats_data)); + ret = zxdh_np_dtb_stats_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, + 0, idx, (uint32_t *)&np_stats->np_rx_broadcast); + if (ret) + return ret; + zxdh_data_hi_to_lo(&np_stats->np_rx_broadcast); + + idx = stats_id + ZXDH_MTU_STATS_EGRESS_BASE; + memset(&stats_data, 0, sizeof(stats_data)); + ret = zxdh_np_dtb_stats_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, + 1, idx, (uint32_t *)&stats_data); + if (ret) + return ret; + + np_stats->np_tx_mtu_drop_pkts = stats_data.n_pkts_dropped; + np_stats->np_tx_mtu_drop_bytes = stats_data.n_bytes_dropped; + zxdh_data_hi_to_lo(&np_stats->np_tx_mtu_drop_pkts); + zxdh_data_hi_to_lo(&np_stats->np_tx_mtu_drop_bytes); + + idx = stats_id + ZXDH_MTU_STATS_INGRESS_BASE; + memset(&stats_data, 0, sizeof(stats_data)); + ret = zxdh_np_dtb_stats_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, + 1, idx, (uint32_t *)&stats_data); + if (ret) + return ret; + np_stats->np_rx_mtu_drop_pkts = stats_data.n_pkts_dropped; + np_stats->np_rx_mtu_drop_bytes = stats_data.n_bytes_dropped; + 
zxdh_data_hi_to_lo(&np_stats->np_rx_mtu_drop_pkts); + zxdh_data_hi_to_lo(&np_stats->np_rx_mtu_drop_bytes); + + return 0; +} + +static int +zxdh_hw_np_stats_get(struct rte_eth_dev *dev, struct zxdh_hw_np_stats *np_stats) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_info msg_info = {0}; + struct zxdh_msg_reply_info reply_info = {0}; + int ret = 0; + + if (hw->is_pf) { + ret = zxdh_np_stats_get(dev, np_stats); + if (ret) { + PMD_DRV_LOG(ERR, "get np stats failed"); + return -1; + } + } else { + zxdh_msg_head_build(hw, ZXDH_GET_NP_STATS, &msg_info); + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(struct zxdh_msg_info), + &reply_info, sizeof(struct zxdh_msg_reply_info)); + if (ret) { + PMD_DRV_LOG(ERR, + "Failed to send msg: port 0x%x msg type ZXDH_GET_NP_STATS", + hw->vport.vport); + return -1; + } + memcpy(np_stats, &reply_info.reply_body.np_stats, sizeof(struct zxdh_hw_np_stats)); + } + return ret; +} + +int +zxdh_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_hw_vqm_stats vqm_stats = {0}; + struct zxdh_hw_np_stats np_stats = {0}; + struct zxdh_hw_mac_stats mac_stats = {0}; + struct zxdh_hw_mac_bytes mac_bytes = {0}; + uint32_t i = 0; + + zxdh_hw_vqm_stats_get(dev, ZXDH_VQM_DEV_STATS_GET, &vqm_stats); + if (hw->is_pf) + zxdh_hw_mac_stats_get(dev, &mac_stats, &mac_bytes); + + zxdh_hw_np_stats_get(dev, &np_stats); + + stats->ipackets = vqm_stats.rx_total; + stats->opackets = vqm_stats.tx_total; + stats->ibytes = vqm_stats.rx_bytes; + stats->obytes = vqm_stats.tx_bytes; + stats->imissed = vqm_stats.rx_drop + mac_stats.rx_drop; + stats->ierrors = vqm_stats.rx_error + mac_stats.rx_error + np_stats.np_rx_mtu_drop_pkts; + stats->oerrors = vqm_stats.tx_error + mac_stats.tx_error + np_stats.np_tx_mtu_drop_pkts; + + stats->rx_nombuf = dev->data->rx_mbuf_alloc_failed; + for (i = 0; (i < dev->data->nb_rx_queues) && (i < RTE_ETHDEV_QUEUE_STAT_CNTRS); i++) { + struct zxdh_virtnet_rx *rxvq = dev->data->rx_queues[i]; + + if (rxvq == NULL) + continue; + stats->q_ipackets[i] = *(uint64_t *)(((char *)rxvq) + + zxdh_rxq_stat_strings[0].offset); + stats->q_ibytes[i] = *(uint64_t *)(((char *)rxvq) + + zxdh_rxq_stat_strings[1].offset); + stats->q_errors[i] = *(uint64_t *)(((char *)rxvq) + + zxdh_rxq_stat_strings[2].offset); + stats->q_errors[i] += *(uint64_t *)(((char *)rxvq) + + zxdh_rxq_stat_strings[5].offset); + } + + for (i = 0; (i < dev->data->nb_tx_queues) && (i < RTE_ETHDEV_QUEUE_STAT_CNTRS); i++) { + struct zxdh_virtnet_tx *txvq = dev->data->tx_queues[i]; + + if (txvq == NULL) + continue; + stats->q_opackets[i] = *(uint64_t *)(((char *)txvq) + + zxdh_txq_stat_strings[0].offset); + stats->q_obytes[i] = *(uint64_t *)(((char *)txvq) + + zxdh_txq_stat_strings[1].offset); + stats->q_errors[i] += *(uint64_t *)(((char *)txvq) + + zxdh_txq_stat_strings[2].offset); + stats->q_errors[i] += *(uint64_t *)(((char *)txvq) + + zxdh_txq_stat_strings[5].offset); + } + return 0; +} + +static int zxdh_hw_stats_reset(struct rte_eth_dev *dev, enum zxdh_agent_msg_type opcode) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_info msg_info = {0}; + struct zxdh_msg_reply_info reply_info = {0}; + enum ZXDH_BAR_MODULE_ID module_id; + int ret = 0; + + switch (opcode) { + case ZXDH_VQM_DEV_STATS_RESET: + module_id = ZXDH_BAR_MODULE_VQM; + break; + case ZXDH_MAC_STATS_RESET: + module_id = ZXDH_BAR_MODULE_MAC; + break; + default: + PMD_DRV_LOG(ERR, "invalid opcode %u", opcode); + return -1; + } + + 
zxdh_agent_msg_build(hw, opcode, &msg_info); + + ret = zxdh_send_msg_to_riscv(dev, &msg_info, sizeof(struct zxdh_msg_info), + &reply_info, sizeof(struct zxdh_msg_reply_info), module_id); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to reset hw stats"); + return -EAGAIN; + } + return 0; +} + +int zxdh_dev_stats_reset(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + zxdh_hw_stats_reset(dev, ZXDH_VQM_DEV_STATS_RESET); + if (hw->is_pf) + zxdh_hw_stats_reset(dev, ZXDH_MAC_STATS_RESET); + + return 0; +} diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.h b/drivers/net/zxdh/zxdh_ethdev_ops.h index 860716d079..f35378e691 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.h +++ b/drivers/net/zxdh/zxdh_ethdev_ops.h @@ -5,6 +5,8 @@ #ifndef ZXDH_ETHDEV_OPS_H #define ZXDH_ETHDEV_OPS_H +#include <stdint.h> + #include <rte_ether.h> #include "zxdh_ethdev.h" @@ -24,6 +26,29 @@ #define ZXDH_HF_MAC_VLAN_ETH ZXDH_ETH_RSS_L2 #define ZXDH_RSS_HF ((ZXDH_HF_MAC_VLAN_ETH | ZXDH_HF_F3_ETH | ZXDH_HF_F5_ETH)) +struct zxdh_hw_vqm_stats { + uint64_t rx_total; + uint64_t tx_total; + uint64_t rx_bytes; + uint64_t tx_bytes; + uint64_t rx_error; + uint64_t tx_error; + uint64_t rx_drop; +} __rte_packed; + +struct zxdh_hw_np_stats { + uint64_t np_rx_broadcast; + uint64_t np_tx_broadcast; + uint64_t np_rx_mtu_drop_pkts; + uint64_t np_tx_mtu_drop_pkts; + uint64_t np_rx_mtu_drop_bytes; + uint64_t np_tx_mtu_drop_bytes; + uint64_t np_rx_mtr_drop_pkts; + uint64_t np_tx_mtr_drop_pkts; + uint64_t np_rx_mtr_drop_bytes; + uint64_t np_tx_mtr_drop_bytes; +}; + int zxdh_dev_set_link_up(struct rte_eth_dev *dev); int zxdh_dev_set_link_down(struct rte_eth_dev *dev); int32_t zxdh_dev_link_update(struct rte_eth_dev *dev, int32_t wait_to_complete __rte_unused); @@ -46,5 +71,7 @@ int zxdh_dev_rss_reta_query(struct rte_eth_dev *dev, int zxdh_rss_hash_update(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf); int zxdh_rss_hash_conf_get(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf); int zxdh_rss_configure(struct rte_eth_dev *dev); +int zxdh_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats); +int zxdh_dev_stats_reset(struct rte_eth_dev *dev); #endif /* ZXDH_ETHDEV_OPS_H */ diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index 44fada3877..ced0e7e2a1 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -9,10 +9,16 @@ #include <ethdev_driver.h> +#include "zxdh_ethdev_ops.h" + #define ZXDH_BAR0_INDEX 0 #define ZXDH_CTRLCH_OFFSET (0x2000) #define ZXDH_MSG_CHAN_PFVFSHARE_OFFSET (ZXDH_CTRLCH_OFFSET + 0x1000) +#define ZXDH_MAC_OFFSET (0x24000) +#define ZXDH_MAC_STATS_OFFSET (0x1408) +#define ZXDH_MAC_BYTES_OFFSET (0xb000) + #define ZXDH_MSIX_INTR_MSG_VEC_BASE 1 #define ZXDH_MSIX_INTR_MSG_VEC_NUM 3 #define ZXDH_MSIX_INTR_DTB_VEC (ZXDH_MSIX_INTR_MSG_VEC_BASE + ZXDH_MSIX_INTR_MSG_VEC_NUM) @@ -173,7 +179,13 @@ enum pciebar_layout_type { /* riscv msg opcodes */ enum zxdh_agent_msg_type { + ZXDH_MAC_STATS_GET = 10, + ZXDH_MAC_STATS_RESET, ZXDH_MAC_LINK_GET = 14, + ZXDH_VQM_DEV_STATS_GET = 21, + ZXDH_VQM_DEV_STATS_RESET, + ZXDH_VQM_QUEUE_STATS_GET = 24, + ZXDH_VQM_QUEUE_STATS_RESET, } __rte_packed; enum zxdh_msg_type { @@ -195,6 +207,8 @@ enum zxdh_msg_type { ZXDH_PORT_ATTRS_SET = 25, ZXDH_PORT_PROMISC_SET = 26, + ZXDH_GET_NP_STATS = 31, + ZXDH_MSG_TYPE_END, } __rte_packed; @@ -322,6 +336,8 @@ struct zxdh_msg_reply_body { struct zxdh_link_info_msg link_msg; struct zxdh_rss_hf rss_hf; struct zxdh_rss_reta rss_reta; + struct zxdh_hw_vqm_stats vqm_stats; + struct 
zxdh_hw_np_stats np_stats; } __rte_packed; } __rte_packed; diff --git a/drivers/net/zxdh/zxdh_np.c b/drivers/net/zxdh/zxdh_np.c index 4e32de0151..d707df729d 100644 --- a/drivers/net/zxdh/zxdh_np.c +++ b/drivers/net/zxdh/zxdh_np.c @@ -25,6 +25,7 @@ ZXDH_TLB_MGR_T *g_p_dpp_tlb_mgr[ZXDH_DEV_CHANNEL_MAX] = {NULL}; ZXDH_REG_T g_dpp_reg_info[4]; ZXDH_DTB_TABLE_T g_dpp_dtb_table_info[4]; ZXDH_SDT_TBL_DATA_T g_sdt_info[ZXDH_DEV_CHANNEL_MAX][ZXDH_DEV_SDT_ID_MAX]; +ZXDH_PPU_STAT_CFG_T g_ppu_stat_cfg = {0}; #define ZXDH_COMM_ASSERT(x) assert(x) #define ZXDH_SDT_MGR_PTR_GET() (&g_sdt_mgr) @@ -46,6 +47,18 @@ ZXDH_SDT_TBL_DATA_T g_sdt_info[ZXDH_DEV_CHANNEL_MAX][ZXDH_DEV_SDT_ID_MAX]; #define ZXDH_COMM_CONVERT32(dw_data) \ (((dw_data) & 0xff) << 24) +#define ZXDH_DTB_TAB_UP_WR_INDEX_GET(DEV_ID, QUEUE_ID) \ + (p_dpp_dtb_mgr[(DEV_ID)]->queue_info[(QUEUE_ID)].tab_up.wr_index) + +#define ZXDH_DTB_TAB_UP_USER_PHY_ADDR_FLAG_GET(DEV_ID, QUEUE_ID, INDEX) \ + (p_dpp_dtb_mgr[(DEV_ID)]->queue_info[(QUEUE_ID)].tab_up.user_addr[(INDEX)].user_flag) + +#define ZXDH_DTB_TAB_UP_USER_PHY_ADDR_GET(DEV_ID, QUEUE_ID, INDEX) \ + (p_dpp_dtb_mgr[(DEV_ID)]->queue_info[(QUEUE_ID)].tab_up.user_addr[(INDEX)].phy_addr) + +#define ZXDH_DTB_TAB_UP_DATA_LEN_GET(DEV_ID, QUEUE_ID, INDEX) \ + (p_dpp_dtb_mgr[(DEV_ID)]->queue_info[(QUEUE_ID)].tab_up.data_len[(INDEX)]) + #define ZXDH_REG_DATA_MAX (128) #define ZXDH_COMM_CHECK_DEV_POINT(dev_id, point)\ @@ -1793,3 +1806,339 @@ zxdh_np_dtb_table_entry_get(uint32_t dev_id, return 0; } + +static uint32_t +zxdh_np_stat_cfg_soft_get(uint32_t dev_id, + ZXDH_PPU_STAT_CFG_T *p_stat_cfg) +{ + ZXDH_COMM_CHECK_DEV_POINT(dev_id, p_stat_cfg); + + p_stat_cfg->ddr_base_addr = g_ppu_stat_cfg.ddr_base_addr; + p_stat_cfg->eram_baddr = g_ppu_stat_cfg.eram_baddr; + p_stat_cfg->eram_depth = g_ppu_stat_cfg.eram_depth; + p_stat_cfg->ppu_addr_offset = g_ppu_stat_cfg.ppu_addr_offset; + + return 0; +} + +static uint32_t +zxdh_np_dtb_tab_up_info_set(uint32_t dev_id, + uint32_t queue_id, + uint32_t item_index, + uint32_t int_flag, + uint32_t data_len, + uint32_t desc_len, + uint32_t *p_desc_data) +{ + uint32_t queue_en = 0; + ZXDH_DTB_QUEUE_ITEM_INFO_T item_info = {0}; + + zxdh_np_dtb_queue_enable_get(dev_id, queue_id, &queue_en); + if (!queue_en) { + PMD_DRV_LOG(ERR, "the queue %d is not enable!", queue_id); + return ZXDH_RC_DTB_QUEUE_NOT_ENABLE; + } + + if (ZXDH_DTB_QUEUE_INIT_FLAG_GET(dev_id, queue_id) == 0) { + PMD_DRV_LOG(ERR, "dtb queue %d is not init.", queue_id); + return ZXDH_RC_DTB_QUEUE_IS_NOT_INIT; + } + + if (desc_len % 4 != 0) + return ZXDH_RC_DTB_PARA_INVALID; + + zxdh_np_dtb_item_buff_wr(dev_id, queue_id, ZXDH_DTB_DIR_UP_TYPE, + item_index, 0, desc_len, p_desc_data); + + ZXDH_DTB_TAB_UP_DATA_LEN_GET(dev_id, queue_id, item_index) = data_len; + + item_info.cmd_vld = 1; + item_info.cmd_type = ZXDH_DTB_DIR_UP_TYPE; + item_info.int_en = int_flag; + item_info.data_len = desc_len / 4; + + if (zxdh_np_dev_get_dev_type(dev_id) == ZXDH_DEV_TYPE_SIM) + return 0; + + zxdh_np_dtb_queue_item_info_set(dev_id, queue_id, &item_info); + + return 0; +} + +static uint32_t +zxdh_np_dtb_write_dump_desc_info(uint32_t dev_id, + uint32_t queue_id, + uint32_t queue_element_id, + uint32_t *p_dump_info, + uint32_t data_len, + uint32_t desc_len, + uint32_t *p_dump_data) +{ + uint32_t rc = 0; + uint32_t dtb_interrupt_status = 0; + + ZXDH_COMM_CHECK_POINT(p_dump_data); + rc = zxdh_np_dtb_tab_up_info_set(dev_id, + queue_id, + queue_element_id, + dtb_interrupt_status, + data_len, + desc_len, + p_dump_info); + if (rc != 0) { + 
PMD_DRV_LOG(ERR, "the queue %d element id %d dump" + " info set failed!", queue_id, queue_element_id); + zxdh_np_dtb_item_ack_wr(dev_id, queue_id, ZXDH_DTB_DIR_UP_TYPE, + queue_element_id, 0, ZXDH_DTB_TAB_ACK_UNUSED_MASK); + } + + return 0; +} + +static uint32_t +zxdh_np_dtb_tab_up_free_item_get(uint32_t dev_id, + uint32_t queue_id, + uint32_t *p_item_index) +{ + uint32_t i = 0; + uint32_t ack_vale = 0; + uint32_t item_index = 0; + uint32_t unused_item_num = 0; + + if (ZXDH_DTB_QUEUE_INIT_FLAG_GET(dev_id, queue_id) == 0) { + PMD_DRV_LOG(ERR, "dtb queue %d is not init.", queue_id); + return ZXDH_RC_DTB_QUEUE_IS_NOT_INIT; + } + + zxdh_np_dtb_queue_unused_item_num_get(dev_id, queue_id, &unused_item_num); + + if (unused_item_num == 0) + return ZXDH_RC_DTB_QUEUE_ITEM_HW_EMPTY; + + for (i = 0; i < ZXDH_DTB_QUEUE_ITEM_NUM_MAX; i++) { + item_index = ZXDH_DTB_TAB_UP_WR_INDEX_GET(dev_id, queue_id) % + ZXDH_DTB_QUEUE_ITEM_NUM_MAX; + + zxdh_np_dtb_item_ack_rd(dev_id, queue_id, ZXDH_DTB_DIR_UP_TYPE, item_index, + 0, &ack_vale); + + ZXDH_DTB_TAB_UP_WR_INDEX_GET(dev_id, queue_id)++; + + if ((ack_vale >> 8) == ZXDH_DTB_TAB_ACK_UNUSED_MASK) + break; + } + + if (i == ZXDH_DTB_QUEUE_ITEM_NUM_MAX) + return ZXDH_RC_DTB_QUEUE_ITEM_SW_EMPTY; + + zxdh_np_dtb_item_ack_wr(dev_id, queue_id, ZXDH_DTB_DIR_UP_TYPE, item_index, + 0, ZXDH_DTB_TAB_ACK_IS_USING_MASK); + + *p_item_index = item_index; + + + return 0; +} + +static uint32_t +zxdh_np_dtb_tab_up_item_addr_get(uint32_t dev_id, + uint32_t queue_id, + uint32_t item_index, + uint32_t *p_phy_haddr, + uint32_t *p_phy_laddr) +{ + uint64_t addr = 0; + + if (ZXDH_DTB_QUEUE_INIT_FLAG_GET(dev_id, queue_id) == 0) { + PMD_DRV_LOG(ERR, "dtb queue %d is not init.", queue_id); + return ZXDH_RC_DTB_QUEUE_IS_NOT_INIT; + } + + if (ZXDH_DTB_TAB_UP_USER_PHY_ADDR_FLAG_GET(dev_id, queue_id, item_index) == + ZXDH_DTB_TAB_UP_USER_ADDR_TYPE) + addr = ZXDH_DTB_TAB_UP_USER_PHY_ADDR_GET(dev_id, queue_id, item_index); + else + addr = ZXDH_DTB_ITEM_ACK_SIZE; + + *p_phy_haddr = (addr >> 32) & 0xffffffff; + *p_phy_laddr = addr & 0xffffffff; + + return 0; +} + +static uint32_t +zxdh_np_dtb_se_smmu0_dma_dump(uint32_t dev_id, + uint32_t queue_id, + uint32_t base_addr, + uint32_t depth, + uint32_t *p_data, + uint32_t *element_id) +{ + uint32_t rc = 0; + uint32_t dump_dst_phy_haddr = 0; + uint32_t dump_dst_phy_laddr = 0; + uint32_t queue_item_index = 0; + uint32_t data_len = 0; + uint32_t desc_len = 0; + + uint8_t form_buff[ZXDH_DTB_TABLE_CMD_SIZE_BIT / 8] = {0}; + + rc = zxdh_np_dtb_tab_up_free_item_get(dev_id, queue_id, &queue_item_index); + if (rc != 0) { + PMD_DRV_LOG(ERR, "dpp_dtb_tab_up_free_item_get failed = %d!", base_addr); + return ZXDH_RC_DTB_QUEUE_ITEM_SW_EMPTY; + } + + *element_id = queue_item_index; + + rc = zxdh_np_dtb_tab_up_item_addr_get(dev_id, queue_id, queue_item_index, + &dump_dst_phy_haddr, &dump_dst_phy_laddr); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_dtb_tab_up_item_addr_get"); + + data_len = depth * 128 / 32; + desc_len = ZXDH_DTB_LEN_POS_SETP / 4; + + + rc = zxdh_np_dtb_write_dump_desc_info(dev_id, queue_id, queue_item_index, + (uint32_t *)form_buff, data_len, desc_len, p_data); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_dtb_write_dump_desc_info"); + + return 0; +} + +static uint32_t +zxdh_np_dtb_se_smmu0_ind_read(uint32_t dev_id, + uint32_t queue_id, + uint32_t base_addr, + uint32_t index, + uint32_t rd_mode, + uint32_t *p_data) +{ + uint32_t rc = 0; + + uint32_t row_index = 0; + uint32_t col_index = 0; + uint32_t temp_data[4] = {0}; + uint32_t eram_dump_base_addr = 0; + 
uint32_t element_id = 0; + + switch (rd_mode) { + case ZXDH_ERAM128_OPR_128b: + { + row_index = index; + break; + } + + case ZXDH_ERAM128_OPR_64b: + { + row_index = (index >> 1); + col_index = index & 0x1; + break; + } + + case ZXDH_ERAM128_OPR_1b: + { + row_index = (index >> 7); + col_index = index & 0x7F; + break; + } + } + + eram_dump_base_addr = base_addr + row_index; + + rc = zxdh_np_dtb_se_smmu0_dma_dump(dev_id, + queue_id, + eram_dump_base_addr, + 1, + temp_data, + &element_id); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_np_dtb_se_smmu0_dma_dump"); + + switch (rd_mode) { + case ZXDH_ERAM128_OPR_128b: + { + memcpy(p_data, temp_data, (128 / 8)); + break; + } + + case ZXDH_ERAM128_OPR_64b: + { + memcpy(p_data, temp_data + ((1 - col_index) << 1), (64 / 8)); + break; + } + + case ZXDH_ERAM128_OPR_1b: + { + ZXDH_COMM_UINT32_GET_BITS(p_data[0], *(temp_data + + (3 - col_index / 32)), (col_index % 32), 1); + break; + } + } + + return rc; +} + +static uint32_t +zxdh_np_dtb_stat_smmu0_int_read(uint32_t dev_id, + uint32_t queue_id, + uint32_t smmu0_base_addr, + ZXDH_STAT_CNT_MODE_E rd_mode, + uint32_t index, + uint32_t *p_data) +{ + uint32_t rc = 0; + + uint32_t eram_rd_mode = 0; + + ZXDH_COMM_CHECK_DEV_POINT(dev_id, p_data); + + if (rd_mode == ZXDH_STAT_128_MODE) + eram_rd_mode = ZXDH_ERAM128_OPR_128b; + else + eram_rd_mode = ZXDH_ERAM128_OPR_64b; + + rc = zxdh_np_dtb_se_smmu0_ind_read(dev_id, + queue_id, + smmu0_base_addr, + index, + eram_rd_mode, + p_data); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_np_dtb_se_smmu0_ind_read"); + + return rc; +} + +int +zxdh_np_dtb_stats_get(uint32_t dev_id, + uint32_t queue_id, + ZXDH_STAT_CNT_MODE_E rd_mode, + uint32_t index, + uint32_t *p_data) +{ + uint32_t rc = 0; + uint32_t ppu_eram_baddr = 0; + uint32_t ppu_eram_depth = 0; + ZXDH_PPU_STAT_CFG_T stat_cfg = {0}; + + ZXDH_COMM_CHECK_DEV_POINT(dev_id, p_data); + + memset(&stat_cfg, 0x0, sizeof(stat_cfg)); + + rc = zxdh_np_stat_cfg_soft_get(dev_id, &stat_cfg); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rc, "zxdh_stat_cfg_soft_get"); + + ppu_eram_depth = stat_cfg.eram_depth; + ppu_eram_baddr = stat_cfg.eram_baddr; + + if ((index >> (ZXDH_STAT_128_MODE - rd_mode)) < ppu_eram_depth) { + rc = zxdh_np_dtb_stat_smmu0_int_read(dev_id, + queue_id, + ppu_eram_baddr, + rd_mode, + index, + p_data); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_dtb_stat_smmu0_int_read"); + } + + return rc; +} diff --git a/drivers/net/zxdh/zxdh_np.h b/drivers/net/zxdh/zxdh_np.h index 19d1f03f59..7da29cf7bd 100644 --- a/drivers/net/zxdh/zxdh_np.h +++ b/drivers/net/zxdh/zxdh_np.h @@ -432,6 +432,18 @@ typedef enum zxdh_sdt_table_type_e { ZXDH_SDT_TBLT_MAX = 7, } ZXDH_SDT_TABLE_TYPE_E; +typedef enum zxdh_dtb_dir_type_e { + ZXDH_DTB_DIR_DOWN_TYPE = 0, + ZXDH_DTB_DIR_UP_TYPE = 1, + ZXDH_DTB_DIR_TYPE_MAX, +} ZXDH_DTB_DIR_TYPE_E; + +typedef enum zxdh_dtb_tab_up_user_addr_type_e { + ZXDH_DTB_TAB_UP_NOUSER_ADDR_TYPE = 0, + ZXDH_DTB_TAB_UP_USER_ADDR_TYPE = 1, + ZXDH_DTB_TAB_UP_USER_ADDR_TYPE_MAX, +} ZXDH_DTB_TAB_UP_USER_ADDR_TYPE_E; + typedef struct zxdh_dtb_lpm_entry_t { uint32_t dtb_len0; uint8_t *p_data_buff0; @@ -537,6 +549,19 @@ typedef struct zxdh_dtb_hash_entry_info_t { uint8_t *p_rst; } ZXDH_DTB_HASH_ENTRY_INFO_T; +typedef struct zxdh_ppu_stat_cfg_t { + uint32_t eram_baddr; + uint32_t eram_depth; + uint32_t ddr_base_addr; + uint32_t ppu_addr_offset; +} ZXDH_PPU_STAT_CFG_T; + +typedef enum zxdh_stat_cnt_mode_e { + ZXDH_STAT_64_MODE = 0, + ZXDH_STAT_128_MODE = 1, + ZXDH_STAT_MAX_MODE, +} ZXDH_STAT_CNT_MODE_E; + int zxdh_np_host_init(uint32_t dev_id, 
ZXDH_DEV_INIT_CTRL_T *p_dev_init_ctrl); int zxdh_np_online_uninit(uint32_t dev_id, char *port_name, uint32_t queue_id); int zxdh_np_dtb_table_entry_write(uint32_t dev_id, uint32_t queue_id, @@ -545,5 +570,10 @@ int zxdh_np_dtb_table_entry_delete(uint32_t dev_id, uint32_t queue_id, uint32_t entrynum, ZXDH_DTB_USER_ENTRY_T *delete_entries); int zxdh_np_dtb_table_entry_get(uint32_t dev_id, uint32_t queue_id, ZXDH_DTB_USER_ENTRY_T *get_entry, uint32_t srh_mode); +int zxdh_np_dtb_stats_get(uint32_t dev_id, + uint32_t queue_id, + ZXDH_STAT_CNT_MODE_E rd_mode, + uint32_t index, + uint32_t *p_data); #endif /* ZXDH_NP_H */ diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h index 9343df81ac..deb0dd891a 100644 --- a/drivers/net/zxdh/zxdh_queue.h +++ b/drivers/net/zxdh/zxdh_queue.h @@ -53,6 +53,8 @@ enum { ZXDH_VTNET_RQ = 0, ZXDH_VTNET_TQ = 1 }; #define ZXDH_PI_HDR_SIZE sizeof(struct zxdh_pi_hdr) #define ZXDH_DL_NET_HDR_SIZE sizeof(struct zxdh_net_hdr_dl) #define ZXDH_UL_NET_HDR_SIZE sizeof(struct zxdh_net_hdr_ul) +#define ZXDH_PD_HDR_SIZE_MAX 256 +#define ZXDH_PD_HDR_SIZE_MIN ZXDH_TYPE_HDR_SIZE /* * ring descriptors: 16 bytes. diff --git a/drivers/net/zxdh/zxdh_rxtx.c b/drivers/net/zxdh/zxdh_rxtx.c index 0ffce50042..23c6f5191c 100644 --- a/drivers/net/zxdh/zxdh_rxtx.c +++ b/drivers/net/zxdh/zxdh_rxtx.c @@ -406,6 +406,40 @@ static inline void zxdh_enqueue_xmit_packed(struct zxdh_virtnet_tx *txvq, zxdh_queue_store_flags_packed(head_dp, head_flags, vq->hw->weak_barriers); } +static void +zxdh_update_packet_stats(struct zxdh_virtnet_stats *stats, struct rte_mbuf *mbuf) +{ + uint32_t s = mbuf->pkt_len; + struct rte_ether_addr *ea = NULL; + + stats->bytes += s; + + if (s == 64) { + stats->size_bins[1]++; + } else if (s > 64 && s < 1024) { + uint32_t bin; + + /* count zeros, and offset into correct bin */ + bin = (sizeof(s) * 8) - rte_clz32(s) - 5; + stats->size_bins[bin]++; + } else { + if (s < 64) + stats->size_bins[0]++; + else if (s < 1519) + stats->size_bins[6]++; + else + stats->size_bins[7]++; + } + + ea = rte_pktmbuf_mtod(mbuf, struct rte_ether_addr *); + if (rte_is_multicast_ether_addr(ea)) { + if (rte_is_broadcast_ether_addr(ea)) + stats->broadcast++; + else + stats->multicast++; + } +} + uint16_t zxdh_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { @@ -459,12 +493,19 @@ zxdh_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkt break; } } + if (txm->nb_segs > ZXDH_TX_MAX_SEGS) { + PMD_TX_LOG(ERR, "%d segs dropped", txm->nb_segs); + txvq->stats.truncated_err += nb_pkts - nb_tx; + break; + } /* Enqueue Packet buffers */ if (can_push) zxdh_enqueue_xmit_packed_fast(txvq, txm, in_order); else zxdh_enqueue_xmit_packed(txvq, txm, slots, use_indirect, in_order); + zxdh_update_packet_stats(&txvq->stats, txm); } + txvq->stats.packets += nb_tx; if (likely(nb_tx)) { if (unlikely(zxdh_queue_kick_prepare_packed(vq))) { zxdh_queue_notify(vq); @@ -474,9 +515,10 @@ zxdh_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkt return nb_tx; } -uint16_t zxdh_xmit_pkts_prepare(void *tx_queue __rte_unused, struct rte_mbuf **tx_pkts, +uint16_t zxdh_xmit_pkts_prepare(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { + struct zxdh_virtnet_tx *txvq = tx_queue; uint16_t nb_tx; for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) { @@ -496,6 +538,12 @@ uint16_t zxdh_xmit_pkts_prepare(void *tx_queue __rte_unused, struct rte_mbuf **t rte_errno = -error; break; } + if (m->nb_segs > ZXDH_TX_MAX_SEGS) { + PMD_TX_LOG(ERR, "%d segs 
dropped", m->nb_segs); + txvq->stats.truncated_err += nb_pkts - nb_tx; + rte_errno = ENOMEM; + break; + } } return nb_tx; } @@ -571,7 +619,7 @@ static int32_t zxdh_rx_update_mbuf(struct rte_mbuf *m, struct zxdh_net_hdr_ul *h return 0; } -static inline void zxdh_discard_rxbuf(struct zxdh_virtqueue *vq, struct rte_mbuf *m) +static void zxdh_discard_rxbuf(struct zxdh_virtqueue *vq, struct rte_mbuf *m) { int32_t error = 0; /* @@ -613,7 +661,13 @@ uint16_t zxdh_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts, for (i = 0; i < num; i++) { rxm = rcv_pkts[i]; - + if (unlikely(len[i] < ZXDH_UL_NET_HDR_SIZE)) { + nb_enqueued++; + PMD_RX_LOG(ERR, "RX, len:%u err", len[i]); + zxdh_discard_rxbuf(vq, rxm); + rxvq->stats.errors++; + continue; + } struct zxdh_net_hdr_ul *header = (struct zxdh_net_hdr_ul *)((char *)rxm->buf_addr + RTE_PKTMBUF_HEADROOM); @@ -623,8 +677,22 @@ uint16_t zxdh_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts, PMD_RX_LOG(ERR, "dequeue %d pkt, No.%d pkt seg_num is %d", num, i, seg_num); seg_num = 1; } - /* bit[0:6]-pd_len unit:2B */ + if (seg_num > ZXDH_RX_MAX_SEGS) { + PMD_RX_LOG(ERR, "dequeue %d pkt, No.%d pkt seg_num is %d", num, i, seg_num); + nb_enqueued++; + zxdh_discard_rxbuf(vq, rxm); + rxvq->stats.errors++; + continue; + } uint16_t pd_len = header->type_hdr.pd_len << 1; + + if (pd_len > ZXDH_PD_HDR_SIZE_MAX || pd_len < ZXDH_PD_HDR_SIZE_MIN) { + PMD_RX_LOG(ERR, "pd_len:%d is invalid", pd_len); + nb_enqueued++; + zxdh_discard_rxbuf(vq, rxm); + rxvq->stats.errors++; + continue; + } /* Private queue only handle type hdr */ hdr_size = pd_len; rxm->data_off = RTE_PKTMBUF_HEADROOM + hdr_size; @@ -639,6 +707,7 @@ uint16_t zxdh_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts, /* Update rte_mbuf according to pi/pd header */ if (zxdh_rx_update_mbuf(rxm, header) < 0) { zxdh_discard_rxbuf(vq, rxm); + rxvq->stats.errors++; continue; } seg_res = seg_num - 1; @@ -661,8 +730,11 @@ uint16_t zxdh_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts, PMD_RX_LOG(ERR, "dropped rcvd_pkt_len %d pktlen %d.", rcvd_pkt_len, rx_pkts[nb_rx]->pkt_len); zxdh_discard_rxbuf(vq, rx_pkts[nb_rx]); + rxvq->stats.errors++; + rxvq->stats.truncated_err++; continue; } + zxdh_update_packet_stats(&rxvq->stats, rx_pkts[nb_rx]); nb_rx++; } } @@ -675,6 +747,7 @@ uint16_t zxdh_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts, if (unlikely(rcv_cnt == 0)) { PMD_RX_LOG(ERR, "No enough segments for packet."); rte_pktmbuf_free(rx_pkts[nb_rx]); + rxvq->stats.errors++; break; } while (extra_idx < rcv_cnt) { @@ -694,11 +767,15 @@ uint16_t zxdh_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts, PMD_RX_LOG(ERR, "dropped rcvd_pkt_len %d pktlen %d.", rcvd_pkt_len, rx_pkts[nb_rx]->pkt_len); zxdh_discard_rxbuf(vq, rx_pkts[nb_rx]); + rxvq->stats.errors++; + rxvq->stats.truncated_err++; continue; } + zxdh_update_packet_stats(&rxvq->stats, rx_pkts[nb_rx]); nb_rx++; } } + rxvq->stats.packets += nb_rx; /* Allocate new mbuf for the used descriptor */ if (likely(!zxdh_queue_full(vq))) { diff --git a/drivers/net/zxdh/zxdh_tables.h b/drivers/net/zxdh/zxdh_tables.h index abfe3d5b01..e697744c23 100644 --- a/drivers/net/zxdh/zxdh_tables.h +++ b/drivers/net/zxdh/zxdh_tables.h @@ -7,8 +7,13 @@ #include <stdint.h> -#define ZXDH_PORT_BASE_QID_FLAG 10 -#define ZXDH_PORT_ATTR_IS_UP_FLAG 35 +#define ZXDH_PORT_BASE_QID_FLAG 10 +#define ZXDH_PORT_ATTR_IS_UP_FLAG 35 + +#define ZXDH_MTU_STATS_EGRESS_BASE 0x8481 +#define ZXDH_MTU_STATS_INGRESS_BASE 0x8981 +#define ZXDH_BROAD_STATS_EGRESS_BASE 0xC902 
+#define ZXDH_BROAD_STATS_INGRESS_BASE 0xD102 extern struct zxdh_dtb_shared_data g_dtb_data; -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 87748 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
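The size histogram kept by zxdh_update_packet_stats() in the stats patch above relies on a count-leading-zeros trick: for 64 < s < 1024, bin = (sizeof(s) * 8) - clz32(s) - 5 equals floor(log2(s)) - 4, so 65..127 fall in bin 2, 128..255 in bin 3, 256..511 in bin 4 and 512..1023 in bin 5, with the remaining ranges handled by explicit comparisons. A standalone check of that mapping, using the GCC/Clang builtin __builtin_clz in place of DPDK's rte_clz32 (the main() harness is illustrative, not driver code):

#include <stdint.h>
#include <stdio.h>

/* same bin selection as zxdh_update_packet_stats() */
static unsigned int size_bin(uint32_t s)
{
	if (s == 64)
		return 1;
	if (s > 64 && s < 1024)	/* count zeros, offset into correct bin */
		return (sizeof(s) * 8) - (unsigned int)__builtin_clz(s) - 5;
	if (s < 64)
		return 0;
	return s < 1519 ? 6 : 7;
}

int main(void)
{
	const uint32_t samples[] = {60, 64, 65, 127, 128, 255, 256,
				    511, 512, 1023, 1024, 1518, 1519};
	unsigned int i;

	for (i = 0; i < sizeof(samples) / sizeof(samples[0]); i++)
		printf("len %4u -> size_bins[%u]\n", samples[i], size_bin(samples[i]));
	return 0;
}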
* [PATCH v2 15/15] net/zxdh: mtu update ops implementations 2024-12-10 5:53 ` [PATCH v2 00/15] net/zxdh: updated net zxdh driver Junlong Wang ` (13 preceding siblings ...) 2024-12-10 5:53 ` [PATCH v2 14/15] net/zxdh: basic stats ops implementations Junlong Wang @ 2024-12-10 5:53 ` Junlong Wang 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-10 5:53 UTC (permalink / raw) To: ferruh.yigit; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 8505 bytes --] mtu update ops implementations. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- doc/guides/nics/features/zxdh.ini | 1 + doc/guides/nics/zxdh.rst | 2 + drivers/net/zxdh/zxdh_ethdev.c | 4 ++ drivers/net/zxdh/zxdh_ethdev_ops.c | 78 ++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev_ops.h | 3 ++ drivers/net/zxdh/zxdh_tables.c | 42 ++++++++++++++++ drivers/net/zxdh/zxdh_tables.h | 4 ++ 7 files changed, 134 insertions(+) diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index 98c141cf95..3561e31666 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -24,3 +24,4 @@ RSS reta update = Y Inner RSS = Y Basic stats = Y Stats per queue = Y +MTU update = Y diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index c8a52b587c..58e0c49a2e 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -33,6 +33,8 @@ Features of the ZXDH PMD are: - QINQ stripping and inserting - Receive Side Scaling (RSS) - Port hardware statistics +- MTU update +- Jumbo frames Driver compilation and testing diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 1238bb048d..72db51973d 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -63,6 +63,9 @@ zxdh_dev_infos_get(struct rte_eth_dev *dev, dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_256; dev_info->flow_type_rss_offloads = ZXDH_RSS_HF; + dev_info->max_mtu = ZXDH_MAX_RX_PKTLEN - RTE_ETHER_HDR_LEN - + RTE_VLAN_HLEN - ZXDH_DL_NET_HDR_SIZE; + dev_info->min_mtu = ZXDH_ETHER_MIN_MTU; dev_info->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_MULTI_SEGS); dev_info->tx_offload_capa |= (RTE_ETH_TX_OFFLOAD_TCP_TSO | @@ -1153,6 +1156,7 @@ static const struct eth_dev_ops zxdh_eth_dev_ops = { .rss_hash_conf_get = zxdh_rss_hash_conf_get, .stats_get = zxdh_dev_stats_get, .stats_reset = zxdh_dev_stats_reset, + .mtu_set = zxdh_dev_mtu_set, }; static int32_t diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.c b/drivers/net/zxdh/zxdh_ethdev_ops.c index 9c131cc56d..bea27a2a57 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.c +++ b/drivers/net/zxdh/zxdh_ethdev_ops.c @@ -13,6 +13,7 @@ #include "zxdh_logs.h" #include "zxdh_rxtx.h" #include "zxdh_np.h" +#include "zxdh_queue.h" #define ZXDH_VLAN_FILTER_GROUPS 64 #define ZXDH_INVALID_LOGIC_QID 0xFFFFU @@ -1513,3 +1514,80 @@ int zxdh_dev_stats_reset(struct rte_eth_dev *dev) return 0; } + +int zxdh_dev_mtu_set(struct rte_eth_dev *dev, uint16_t new_mtu) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_panel_table panel = {0}; + struct zxdh_port_attr_table vport_att = {0}; + uint16_t vfid = zxdh_vport_to_vfid(hw->vport); + uint16_t max_mtu = 0; + int ret = 0; + + max_mtu = ZXDH_MAX_RX_PKTLEN - RTE_ETHER_HDR_LEN - RTE_VLAN_HLEN - ZXDH_DL_NET_HDR_SIZE; + if (new_mtu < ZXDH_ETHER_MIN_MTU || new_mtu > max_mtu) { + PMD_DRV_LOG(ERR, "invalid mtu:%d, range[%d, %d]", + new_mtu, ZXDH_ETHER_MIN_MTU, max_mtu); + return -EINVAL; + } + + if (dev->data->mtu == new_mtu) + return 0; + + if (hw->is_pf) { 
+ memset(&panel, 0, sizeof(panel)); + memset(&vport_att, 0, sizeof(vport_att)); + ret = zxdh_get_panel_attr(dev, &panel); + if (ret != 0) { + PMD_DRV_LOG(ERR, "get_panel_attr ret:%d", ret); + return -1; + } + + ret = zxdh_get_port_attr(vfid, &vport_att); + if (ret != 0) { + PMD_DRV_LOG(ERR, + "[vfid:%d] zxdh_dev_mtu, get vport dpp_ret:%d", vfid, ret); + return -1; + } + + panel.mtu = new_mtu; + panel.mtu_enable = 1; + ret = zxdh_set_panel_attr(dev, &panel); + if (ret != 0) { + PMD_DRV_LOG(ERR, "set zxdh_dev_mtu failed, ret:%u", ret); + return ret; + } + + vport_att.mtu_enable = 1; + vport_att.mtu = new_mtu; + ret = zxdh_set_port_attr(vfid, &vport_att); + if (ret != 0) { + PMD_DRV_LOG(ERR, + "[vfid:%d] zxdh_dev_mtu, set vport dpp_ret:%d", vfid, ret); + return ret; + } + } else { + struct zxdh_msg_info msg_info = {0}; + struct zxdh_port_attr_set_msg *attr_msg = &msg_info.data.port_attr_msg; + + zxdh_msg_head_build(hw, ZXDH_PORT_ATTRS_SET, &msg_info); + attr_msg->mode = ZXDH_PORT_MTU_EN_FLAG; + attr_msg->value = 1; + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d ", + hw->vport.vport, ZXDH_PORT_MTU_EN_FLAG); + return ret; + } + attr_msg->mode = ZXDH_PORT_MTU_FLAG; + attr_msg->value = new_mtu; + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d ", + hw->vport.vport, ZXDH_PORT_MTU_FLAG); + return ret; + } + } + dev->data->mtu = new_mtu; + return 0; +} diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.h b/drivers/net/zxdh/zxdh_ethdev_ops.h index f35378e691..fac6cbd5e8 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.h +++ b/drivers/net/zxdh/zxdh_ethdev_ops.h @@ -26,6 +26,8 @@ #define ZXDH_HF_MAC_VLAN_ETH ZXDH_ETH_RSS_L2 #define ZXDH_RSS_HF ((ZXDH_HF_MAC_VLAN_ETH | ZXDH_HF_F3_ETH | ZXDH_HF_F5_ETH)) +#define ZXDH_ETHER_MIN_MTU 68 + struct zxdh_hw_vqm_stats { uint64_t rx_total; uint64_t tx_total; @@ -73,5 +75,6 @@ int zxdh_rss_hash_conf_get(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss int zxdh_rss_configure(struct rte_eth_dev *dev); int zxdh_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats); int zxdh_dev_stats_reset(struct rte_eth_dev *dev); +int zxdh_dev_mtu_set(struct rte_eth_dev *dev, uint16_t new_mtu); #endif /* ZXDH_ETHDEV_OPS_H */ diff --git a/drivers/net/zxdh/zxdh_tables.c b/drivers/net/zxdh/zxdh_tables.c index 83de96b24d..bf6f9f6b2a 100644 --- a/drivers/net/zxdh/zxdh_tables.c +++ b/drivers/net/zxdh/zxdh_tables.c @@ -165,6 +165,48 @@ zxdh_panel_table_init(struct rte_eth_dev *dev) return ret; } +int zxdh_get_panel_attr(struct rte_eth_dev *dev, struct zxdh_panel_table *panel_attr) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint8_t index_phy_port = hw->phyport; + + ZXDH_DTB_ERAM_ENTRY_INFO_T panel_entry = { + .index = index_phy_port, + .p_data = (uint32_t *)panel_attr + }; + ZXDH_DTB_USER_ENTRY_T entry = { + .sdt_no = ZXDH_SDT_PANEL_ATT_TABLE, + .p_entry_data = (void *)&panel_entry + }; + int ret = zxdh_np_dtb_table_entry_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, &entry, 1); + + if (ret != 0) + PMD_DRV_LOG(ERR, "get panel table failed"); + + return ret; +} + +int zxdh_set_panel_attr(struct rte_eth_dev *dev, struct zxdh_panel_table *panel_attr) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint8_t index_phy_port = hw->phyport; + + ZXDH_DTB_ERAM_ENTRY_INFO_T panel_entry = { + .index = index_phy_port, + .p_data = (uint32_t *)panel_attr + }; + ZXDH_DTB_USER_ENTRY_T entry 
= { + .sdt_no = ZXDH_SDT_PANEL_ATT_TABLE, + .p_entry_data = (void *)&panel_entry + }; + int ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, g_dtb_data.queueid, 1, &entry); + + if (ret) + PMD_DRV_LOG(ERR, "Insert panel table failed"); + + return ret; +} + int zxdh_set_mac_table(uint16_t vport, struct rte_ether_addr *addr, uint8_t hash_search_idx) { diff --git a/drivers/net/zxdh/zxdh_tables.h b/drivers/net/zxdh/zxdh_tables.h index e697744c23..f7591cd948 100644 --- a/drivers/net/zxdh/zxdh_tables.h +++ b/drivers/net/zxdh/zxdh_tables.h @@ -7,8 +7,10 @@ #include <stdint.h> +#define ZXDH_PORT_MTU_FLAG 9 #define ZXDH_PORT_BASE_QID_FLAG 10 #define ZXDH_PORT_ATTR_IS_UP_FLAG 35 +#define ZXDH_PORT_MTU_EN_FLAG 42 #define ZXDH_MTU_STATS_EGRESS_BASE 0x8481 #define ZXDH_MTU_STATS_INGRESS_BASE 0x8981 @@ -224,5 +226,7 @@ int zxdh_vlan_filter_table_init(struct rte_eth_dev *dev); int zxdh_vlan_filter_table_set(uint16_t vport, uint16_t vlan_id, uint8_t enable); int zxdh_rss_table_set(uint16_t vport, struct zxdh_rss_reta *rss_reta); int zxdh_rss_table_get(uint16_t vport, struct zxdh_rss_reta *rss_reta); +int zxdh_get_panel_attr(struct rte_eth_dev *dev, struct zxdh_panel_table *panel_attr); +int zxdh_set_panel_attr(struct rte_eth_dev *dev, struct zxdh_panel_table *panel_attr); #endif /* ZXDH_TABLES_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 18467 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
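The bounds in zxdh_dev_mtu_set() above are plain frame-budget arithmetic: the largest MTU is the hardware receive limit minus the Ethernet header (RTE_ETHER_HDR_LEN, 14 bytes), one VLAN tag (RTE_VLAN_HLEN, 4 bytes) and the driver's downlink net header, with ZXDH_ETHER_MIN_MTU (68) as the floor. A standalone sketch of the check follows; the numeric values of ZXDH_MAX_RX_PKTLEN and ZXDH_DL_NET_HDR_SIZE are defined elsewhere in the driver and not shown in this patch, so the ones below are placeholders, not the real constants.

#include <stdint.h>
#include <stdio.h>

#define DEMO_MAX_RX_PKTLEN	14000u	/* placeholder for ZXDH_MAX_RX_PKTLEN */
#define DEMO_ETHER_HDR_LEN	14u	/* RTE_ETHER_HDR_LEN */
#define DEMO_VLAN_HLEN		4u	/* RTE_VLAN_HLEN */
#define DEMO_DL_NET_HDR_SIZE	32u	/* placeholder for ZXDH_DL_NET_HDR_SIZE */
#define DEMO_ETHER_MIN_MTU	68u	/* ZXDH_ETHER_MIN_MTU */

/* same range test as zxdh_dev_mtu_set() */
static int mtu_in_range(uint16_t new_mtu)
{
	uint16_t max_mtu = DEMO_MAX_RX_PKTLEN - DEMO_ETHER_HDR_LEN -
			DEMO_VLAN_HLEN - DEMO_DL_NET_HDR_SIZE;

	return new_mtu >= DEMO_ETHER_MIN_MTU && new_mtu <= max_mtu;
}

int main(void)
{
	printf("mtu 1500: %s\n", mtu_in_range(1500) ? "ok" : "rejected");
	printf("mtu 60: %s\n", mtu_in_range(60) ? "ok" : "rejected");
	printf("mtu 9000: %s\n", mtu_in_range(9000) ? "ok" : "rejected");
	return 0;
}

On a PF the accepted MTU is written into both the panel table and the vport attribute table; a VF instead asks the PF over the message channel with two ZXDH_PORT_ATTRS_SET messages, first the enable flag and then the value.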
* [PATCH v1 02/15] net/zxdh: zxdh np uninit implementation 2024-12-06 5:57 ` [PATCH v1 00/15] net/zxdh: updated " Junlong Wang 2024-12-06 5:57 ` [PATCH v1 01/15] net/zxdh: zxdh np init implementation Junlong Wang @ 2024-12-06 5:57 ` Junlong Wang 2024-12-06 5:57 ` [PATCH v1 03/15] net/zxdh: port tables init implementations Junlong Wang ` (12 subsequent siblings) 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-06 5:57 UTC (permalink / raw) To: ferruh.yigit; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 20347 bytes --] Release the (np) network processor resources held in the host. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/zxdh_ethdev.c | 48 ++++ drivers/net/zxdh/zxdh_np.c | 490 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_np.h | 107 +++++++ 3 files changed, 645 insertions(+) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 740e579da8..df5b8b7d55 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -841,6 +841,51 @@ zxdh_dev_configure(struct rte_eth_dev *dev) return ret; } +static void +zxdh_np_dtb_data_res_free(struct zxdh_hw *hw) +{ + struct rte_eth_dev *dev = hw->eth_dev; + int ret = 0; + int i = 0; + + if (g_dtb_data.init_done && g_dtb_data.bind_device == dev) { + ret = zxdh_np_online_uninit(0, dev->data->name, g_dtb_data.queueid); + if (ret) + PMD_DRV_LOG(ERR, "%s dpp_np_online_uninstall failed", dev->data->name); + + if (g_dtb_data.dtb_table_conf_mz) + rte_memzone_free(g_dtb_data.dtb_table_conf_mz); + + if (g_dtb_data.dtb_table_dump_mz) { + rte_memzone_free(g_dtb_data.dtb_table_dump_mz); + g_dtb_data.dtb_table_dump_mz = NULL; + } + + for (i = 0; i < ZXDH_MAX_BASE_DTB_TABLE_COUNT; i++) { + if (g_dtb_data.dtb_table_bulk_dump_mz[i]) { + rte_memzone_free(g_dtb_data.dtb_table_bulk_dump_mz[i]); + g_dtb_data.dtb_table_bulk_dump_mz[i] = NULL; + } + } + g_dtb_data.init_done = 0; + g_dtb_data.bind_device = NULL; + } + if (zxdh_shared_data != NULL) + zxdh_shared_data->np_init_done = 0; +} + +static void +zxdh_np_uninit(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + if (!g_dtb_data.init_done && !g_dtb_data.dev_refcnt) + return; + + if (--g_dtb_data.dev_refcnt == 0) + zxdh_np_dtb_data_res_free(hw); +} + static int zxdh_dev_close(struct rte_eth_dev *dev) { @@ -848,6 +893,7 @@ zxdh_dev_close(struct rte_eth_dev *dev) int ret = 0; zxdh_intr_release(dev); + zxdh_np_uninit(dev); zxdh_pci_reset(hw); zxdh_dev_free_mbufs(dev); @@ -1013,6 +1059,7 @@ zxdh_np_dtb_res_init(struct rte_eth_dev *dev) return 0; free_res: + zxdh_np_dtb_data_res_free(hw); rte_free(dpp_ctrl); return ret; } @@ -1179,6 +1226,7 @@ zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) err_zxdh_init: zxdh_intr_release(eth_dev); + zxdh_np_uninit(eth_dev); zxdh_bar_msg_chan_exit(); rte_free(eth_dev->data->mac_addrs); eth_dev->data->mac_addrs = NULL; diff --git a/drivers/net/zxdh/zxdh_np.c b/drivers/net/zxdh/zxdh_np.c index 9c50039fb1..a603c88049 100644 --- a/drivers/net/zxdh/zxdh_np.c +++ b/drivers/net/zxdh/zxdh_np.c @@ -16,11 +16,22 @@ static ZXDH_DEV_MGR_T g_dev_mgr = {0}; static ZXDH_SDT_MGR_T g_sdt_mgr = {0}; ZXDH_PPU_CLS_BITMAP_T g_ppu_cls_bit_map[ZXDH_DEV_CHANNEL_MAX]; ZXDH_DTB_MGR_T *p_dpp_dtb_mgr[ZXDH_DEV_CHANNEL_MAX] = {NULL}; +ZXDH_RISCV_DTB_MGR *p_riscv_dtb_queue_mgr[ZXDH_DEV_CHANNEL_MAX] = {NULL}; +ZXDH_TLB_MGR_T *g_p_dpp_tlb_mgr[ZXDH_DEV_CHANNEL_MAX] = {NULL}; +ZXDH_REG_T g_dpp_reg_info[4] = {0}; #define ZXDH_COMM_ASSERT(x) assert(x) #define 
ZXDH_SDT_MGR_PTR_GET() (&g_sdt_mgr) #define ZXDH_SDT_SOFT_TBL_GET(id) (g_sdt_mgr.sdt_tbl_array[id]) +#define ZXDH_COMM_MASK_BIT(_bitnum_)\ + (0x1U << (_bitnum_)) + +#define ZXDH_COMM_GET_BIT_MASK(_inttype_, _bitqnt_)\ + ((_inttype_)(((_bitqnt_) < 32))) + +#define ZXDH_REG_DATA_MAX (128) + #define ZXDH_COMM_CHECK_DEV_POINT(dev_id, point)\ do {\ if (NULL == (point)) {\ @@ -345,3 +356,482 @@ zxdh_np_host_init(uint32_t dev_id, ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_dev_agent_addr_set"); return 0; } + +static ZXDH_RISCV_DTB_MGR * +zxdh_np_riscv_dtb_queue_mgr_get(uint32_t dev_id) +{ + if (dev_id >= ZXDH_DEV_CHANNEL_MAX) + return NULL; + else + return p_riscv_dtb_queue_mgr[dev_id]; +} + +static uint32_t +zxdh_np_riscv_dtb_mgr_queue_info_delete(uint32_t dev_id, uint32_t queue_id) +{ + ZXDH_RISCV_DTB_MGR *p_riscv_dtb_mgr = NULL; + + p_riscv_dtb_mgr = zxdh_np_riscv_dtb_queue_mgr_get(dev_id); + if (p_riscv_dtb_mgr == NULL) + return 1; + + p_riscv_dtb_mgr->queue_alloc_count--; + p_riscv_dtb_mgr->queue_user_info[queue_id].alloc_flag = 0; + p_riscv_dtb_mgr->queue_user_info[queue_id].queue_id = 0xFF; + p_riscv_dtb_mgr->queue_user_info[queue_id].vport = 0; + memset(p_riscv_dtb_mgr->queue_user_info[queue_id].user_name, 0, ZXDH_PORT_NAME_MAX); + + return 0; +} + +static uint32_t +zxdh_np_dev_get_dev_type(uint32_t dev_id) +{ + ZXDH_DEV_MGR_T *p_dev_mgr = NULL; + ZXDH_DEV_CFG_T *p_dev_info = NULL; + + p_dev_mgr = &g_dev_mgr; + p_dev_info = p_dev_mgr->p_dev_array[dev_id]; + + if (p_dev_info == NULL) + return 0xffff; + + return p_dev_info->dev_type; +} + +static uint32_t +zxdh_np_comm_read_bits(uint8_t *p_base, uint32_t base_size_bit, + uint32_t *p_data, uint32_t start_bit, uint32_t end_bit) +{ + uint32_t len = 0; + uint32_t start_byte_index = 0; + uint32_t end_byte_index = 0; + uint32_t byte_num = 0; + uint32_t buffer_size = 0; + + if (0 != (base_size_bit % 8)) + return 1; + + if (start_bit > end_bit) + return 1; + + if (base_size_bit < end_bit) + return 1; + + len = end_bit - start_bit + 1; + buffer_size = base_size_bit / 8; + while (0 != (buffer_size & (buffer_size - 1))) + buffer_size += 1; + + *p_data = 0; + end_byte_index = (end_bit >> 3); + start_byte_index = (start_bit >> 3); + + if (start_byte_index == end_byte_index) { + *p_data = (uint32_t)(((p_base[start_byte_index] >> (7U - (end_bit & 7))) + & (0xff >> (8U - len))) & 0xff); + return 0; + } + + if (start_bit & 7) { + *p_data = (p_base[start_byte_index] & (0xff >> (start_bit & 7))) & UINT8_MAX; + start_byte_index++; + } + + for (byte_num = start_byte_index; byte_num < end_byte_index; byte_num++) { + *p_data <<= 8; + *p_data += p_base[byte_num]; + } + + *p_data <<= 1 + (end_bit & 7); + *p_data += ((p_base[byte_num & (buffer_size - 1)] & (0xff << (7 - (end_bit & 7)))) >> + (7 - (end_bit & 7))) & 0xff; + + return 0; +} + +static uint32_t +zxdh_np_comm_read_bits_ex(uint8_t *p_base, uint32_t base_size_bit, + uint32_t *p_data, uint32_t msb_start_pos, uint32_t len) +{ + uint32_t rtn = 0; + + rtn = zxdh_np_comm_read_bits(p_base, + base_size_bit, + p_data, + (base_size_bit - 1 - msb_start_pos), + (base_size_bit - 1 - msb_start_pos + len - 1)); + return rtn; +} + +static uint32_t +zxdh_np_reg_read(uint32_t dev_id, uint32_t reg_no, + uint32_t m_offset, uint32_t n_offset, void *p_data) +{ + uint32_t rc = 0; + uint32_t i = 0; + uint32_t p_buff[ZXDH_REG_DATA_MAX] = {0}; + ZXDH_REG_T *p_reg_info = NULL; + ZXDH_FIELD_T *p_field_info = NULL; + + if (reg_no < 4) { + p_reg_info = &g_dpp_reg_info[reg_no]; + p_field_info = p_reg_info->p_fields; + for (i = 0; i < 
p_reg_info->field_num; i++) { + rc = zxdh_np_comm_read_bits_ex((uint8_t *)p_buff, + p_reg_info->width * 8, + (uint32_t *)p_data + i, + p_field_info[i].msb_pos, + p_field_info[i].len); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxic_comm_read_bits_ex"); + PMD_DRV_LOG(ERR, "dev_id %d(%d)(%d)is ok!", dev_id, m_offset, n_offset); + } + } + return 0; +} + +static uint32_t +zxdh_np_dtb_queue_vm_info_get(uint32_t dev_id, + uint32_t queue_id, + ZXDH_DTB_QUEUE_VM_INFO_T *p_vm_info) +{ + uint32_t rc = 0; + + ZXDH_DTB4K_DTB_ENQ_CFG_EPID_V_FUNC_NUM_0_127_T vm_info = {0}; + + rc = zxdh_np_reg_read(dev_id, ZXDH_DTB_CFG_EPID_V_FUNC_NUM, + 0, queue_id, &vm_info); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rc, "zxdh_reg_read"); + + p_vm_info->dbi_en = vm_info.dbi_en; + p_vm_info->queue_en = vm_info.queue_en; + p_vm_info->epid = vm_info.cfg_epid; + p_vm_info->vector = vm_info.cfg_vector; + p_vm_info->vfunc_num = vm_info.cfg_vfunc_num; + p_vm_info->func_num = vm_info.cfg_func_num; + p_vm_info->vfunc_active = vm_info.cfg_vfunc_active; + + return 0; +} + +static uint32_t +zxdh_np_comm_write_bits(uint8_t *p_base, uint32_t base_size_bit, + uint32_t data, uint32_t start_bit, uint32_t end_bit) +{ + uint32_t start_byte_index = 0; + uint32_t end_byte_index = 0; + uint8_t mask_value = 0; + uint32_t byte_num = 0; + uint32_t buffer_size = 0; + + if (0 != (base_size_bit % 8)) + return 1; + + if (start_bit > end_bit) + return 1; + + if (base_size_bit < end_bit) + return 1; + + buffer_size = base_size_bit / 8; + + while (0 != (buffer_size & (buffer_size - 1))) + buffer_size += 1; + + end_byte_index = (end_bit >> 3); + start_byte_index = (start_bit >> 3); + + if (start_byte_index == end_byte_index) { + mask_value = ((0xFE << (7 - (start_bit & 7))) & 0xff); + mask_value |= (((1 << (7 - (end_bit & 7))) - 1) & 0xff); + p_base[end_byte_index] &= mask_value; + p_base[end_byte_index] |= (((data << (7 - (end_bit & 7)))) & 0xff); + return 0; + } + + if (7 != (end_bit & 7)) { + mask_value = ((0x7f >> (end_bit & 7)) & 0xff); + p_base[end_byte_index] &= mask_value; + p_base[end_byte_index] |= ((data << (7 - (end_bit & 7))) & 0xff); + end_byte_index--; + data >>= 1 + (end_bit & 7); + } + + for (byte_num = end_byte_index; byte_num > start_byte_index; byte_num--) { + p_base[byte_num & (buffer_size - 1)] = data & 0xff; + data >>= 8; + } + + mask_value = ((0xFE << (7 - (start_bit & 7))) & 0xff); + p_base[byte_num] &= mask_value; + p_base[byte_num] |= data; + + return 0; +} + +static uint32_t +zxdh_np_comm_write_bits_ex(uint8_t *p_base, + uint32_t base_size_bit, + uint32_t data, + uint32_t msb_start_pos, + uint32_t len) +{ + uint32_t rtn = 0; + + rtn = zxdh_np_comm_write_bits(p_base, + base_size_bit, + data, + (base_size_bit - 1 - msb_start_pos), + (base_size_bit - 1 - msb_start_pos + len - 1)); + + return rtn; +} + +static uint32_t +zxdh_np_reg_write(uint32_t dev_id, uint32_t reg_no, + uint32_t m_offset, uint32_t n_offset, void *p_data) +{ + uint32_t rc = 0; + uint32_t i = 0; + uint32_t p_buff[ZXDH_REG_DATA_MAX] = {0}; + uint32_t temp_data = 0; + ZXDH_REG_T *p_reg_info = NULL; + ZXDH_FIELD_T *p_field_info = NULL; + + if (reg_no < 4) { + p_reg_info = &g_dpp_reg_info[reg_no]; + p_field_info = p_reg_info->p_fields; + + for (i = 0; i < p_reg_info->field_num; i++) { + if (p_field_info[i].len <= 32) { + temp_data = *((uint32_t *)p_data + i); + rc = zxdh_np_comm_write_bits_ex((uint8_t *)p_buff, + p_reg_info->width * 8, + temp_data, + p_field_info[i].msb_pos, + p_field_info[i].len); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_comm_write_bits_ex"); + PMD_DRV_LOG(ERR, 
"dev_id %d(%d)(%d)is ok!", + dev_id, m_offset, n_offset); + } + } + } + + return 0; +} + +static uint32_t +zxdh_np_dtb_queue_vm_info_set(uint32_t dev_id, + uint32_t queue_id, + ZXDH_DTB_QUEUE_VM_INFO_T *p_vm_info) +{ + uint32_t rc = 0; + ZXDH_DTB4K_DTB_ENQ_CFG_EPID_V_FUNC_NUM_0_127_T vm_info = {0}; + + vm_info.dbi_en = p_vm_info->dbi_en; + vm_info.queue_en = p_vm_info->queue_en; + vm_info.cfg_epid = p_vm_info->epid; + vm_info.cfg_vector = p_vm_info->vector; + vm_info.cfg_vfunc_num = p_vm_info->vfunc_num; + vm_info.cfg_func_num = p_vm_info->func_num; + vm_info.cfg_vfunc_active = p_vm_info->vfunc_active; + + rc = zxdh_np_reg_write(dev_id, ZXDH_DTB_CFG_EPID_V_FUNC_NUM, + 0, queue_id, &vm_info); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rc, "zxdh_reg_write"); + + return 0; +} + +static uint32_t +zxdh_np_dtb_queue_enable_set(uint32_t dev_id, + uint32_t queue_id, + uint32_t enable) +{ + uint32_t rc = 0; + ZXDH_DTB_QUEUE_VM_INFO_T vm_info = {0}; + + rc = zxdh_np_dtb_queue_vm_info_get(dev_id, queue_id, &vm_info); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_dtb_queue_vm_info_get"); + + vm_info.queue_en = enable; + rc = zxdh_np_dtb_queue_vm_info_set(dev_id, queue_id, &vm_info); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_dtb_queue_vm_info_set"); + + return rc; +} + +static uint32_t +zxdh_np_riscv_dpp_dtb_queue_id_release(uint32_t dev_id, + char pName[ZXDH_PORT_NAME_MAX], uint32_t queue_id) +{ + ZXDH_RISCV_DTB_MGR *p_riscv_dtb_mgr = NULL; + + p_riscv_dtb_mgr = zxdh_np_riscv_dtb_queue_mgr_get(dev_id); + if (p_riscv_dtb_mgr == NULL) + return 1; + + if (zxdh_np_dev_get_dev_type(dev_id) == ZXDH_DEV_TYPE_SIM) + return 0; + + if (p_riscv_dtb_mgr->queue_user_info[queue_id].alloc_flag != 1) { + PMD_DRV_LOG(ERR, "queue %d not alloc!", queue_id); + return 2; + } + + if (strcmp(p_riscv_dtb_mgr->queue_user_info[queue_id].user_name, pName) != 0) { + PMD_DRV_LOG(ERR, "queue %d name %s error!", queue_id, pName); + return 3; + } + zxdh_np_dtb_queue_enable_set(dev_id, queue_id, 0); + zxdh_np_riscv_dtb_mgr_queue_info_delete(dev_id, queue_id); + + return 0; +} + +static uint32_t +zxdh_np_dtb_queue_unused_item_num_get(uint32_t dev_id, + uint32_t queue_id, + uint32_t *p_item_num) +{ + uint32_t rc = 0; + + if (zxdh_np_dev_get_dev_type(dev_id) == ZXDH_DEV_TYPE_SIM) { + *p_item_num = 32; + return 0; + } + + rc = zxdh_np_reg_read(dev_id, ZXDH_DTB_INFO_QUEUE_BUF_SPACE, + 0, queue_id, p_item_num); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rc, "dpp_reg_read"); + return 0; +} + +static uint32_t +zxdh_np_dtb_queue_id_free(uint32_t dev_id, + uint32_t queue_id) +{ + uint32_t rc = 0; + uint32_t item_num = 0; + ZXDH_DTB_MGR_T *p_dtb_mgr = NULL; + + p_dtb_mgr = p_dpp_dtb_mgr[dev_id]; + if (p_dtb_mgr == NULL) + return 1; + + rc = zxdh_np_dtb_queue_unused_item_num_get(dev_id, queue_id, &item_num); + + p_dtb_mgr->queue_info[queue_id].init_flag = 0; + p_dtb_mgr->queue_info[queue_id].vport = 0; + p_dtb_mgr->queue_info[queue_id].vector = 0; + + return rc; +} + +static uint32_t +zxdh_np_dtb_queue_release(uint32_t devid, + char pname[32], + uint32_t queueid) +{ + uint32_t rc = 0; + + ZXDH_COMM_CHECK_DEV_POINT(devid, pname); + + rc = zxdh_np_riscv_dpp_dtb_queue_id_release(devid, pname, queueid); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_riscv_dpp_dtb_queue_id_release"); + + rc = zxdh_np_dtb_queue_id_free(devid, queueid); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_dtb_queue_id_free"); + + return rc; +} + +static uint32_t +zxdh_np_dtb_mgr_destroy(uint32_t dev_id) +{ + if (p_dpp_dtb_mgr[dev_id] != NULL) { + free(p_dpp_dtb_mgr[dev_id]); + p_dpp_dtb_mgr[dev_id] = NULL; + } + 
+ return 0; +} + +static uint32_t +zxdh_np_tlb_mgr_destroy(uint32_t dev_id) +{ + if (g_p_dpp_tlb_mgr[dev_id] != NULL) { + free(g_p_dpp_tlb_mgr[dev_id]); + g_p_dpp_tlb_mgr[dev_id] = NULL; + } + + return 0; +} + +static uint32_t +zxdh_np_sdt_mgr_destroy(uint32_t dev_id) +{ + ZXDH_SDT_SOFT_TABLE_T *p_sdt_tbl_temp = NULL; + ZXDH_SDT_MGR_T *p_sdt_mgr = NULL; + + p_sdt_tbl_temp = ZXDH_SDT_SOFT_TBL_GET(dev_id); + p_sdt_mgr = ZXDH_SDT_MGR_PTR_GET(); + + if (p_sdt_tbl_temp != NULL) + free(p_sdt_tbl_temp); + + ZXDH_SDT_SOFT_TBL_GET(dev_id) = NULL; + + p_sdt_mgr->channel_num--; + + return 0; +} + +static uint32_t +zxdh_np_dev_del(uint32_t dev_id) +{ + ZXDH_DEV_CFG_T *p_dev_info = NULL; + ZXDH_DEV_MGR_T *p_dev_mgr = NULL; + + p_dev_mgr = &g_dev_mgr; + p_dev_info = p_dev_mgr->p_dev_array[dev_id]; + + if (p_dev_info != NULL) { + free(p_dev_info); + p_dev_mgr->p_dev_array[dev_id] = NULL; + p_dev_mgr->device_num--; + } + + return 0; +} + +int +zxdh_np_online_uninit(uint32_t dev_id, + char *port_name, + uint32_t queue_id) +{ + uint32_t rc = 0; + + rc = zxdh_np_dtb_queue_release(dev_id, port_name, queue_id); + if (rc != 0) + PMD_DRV_LOG(ERR, "%s:dtb release error," + "port name %s queue id %d. ", __func__, port_name, queue_id); + + rc = zxdh_np_dtb_mgr_destroy(dev_id); + if (rc != 0) + PMD_DRV_LOG(ERR, "zxdh_dtb_mgr_destroy error!"); + + rc = zxdh_np_tlb_mgr_destroy(dev_id); + if (rc != 0) + PMD_DRV_LOG(ERR, "zxdh_tlb_mgr_destroy error!"); + + rc = zxdh_np_sdt_mgr_destroy(dev_id); + if (rc != 0) + PMD_DRV_LOG(ERR, "zxdh_sdt_mgr_destroy error!"); + + rc = zxdh_np_dev_del(dev_id); + if (rc != 0) + PMD_DRV_LOG(ERR, "zxdh_dev_del error!"); + + return 0; +} diff --git a/drivers/net/zxdh/zxdh_np.h b/drivers/net/zxdh/zxdh_np.h index 573eafe796..dc0e867827 100644 --- a/drivers/net/zxdh/zxdh_np.h +++ b/drivers/net/zxdh/zxdh_np.h @@ -47,6 +47,11 @@ #define ZXDH_INIT_FLAG_TM_IMEM_FLAG (1 << 9) #define ZXDH_INIT_FLAG_AGENT_FLAG (1 << 10) +#define ZXDH_ACL_TBL_ID_MIN (0) +#define ZXDH_ACL_TBL_ID_MAX (7) +#define ZXDH_ACL_TBL_ID_NUM (8U) +#define ZXDH_ACL_BLOCK_NUM (8U) + typedef enum zxdh_module_init_e { ZXDH_MODULE_INIT_NPPU = 0, ZXDH_MODULE_INIT_PPU, @@ -67,6 +72,15 @@ typedef enum zxdh_dev_type_e { ZXDH_DEV_TYPE_INVALID, } ZXDH_DEV_TYPE_E; +typedef enum zxdh_reg_info_e { + ZXDH_DTB_CFG_QUEUE_DTB_HADDR = 0, + ZXDH_DTB_CFG_QUEUE_DTB_LADDR = 1, + ZXDH_DTB_CFG_QUEUE_DTB_LEN = 2, + ZXDH_DTB_INFO_QUEUE_BUF_SPACE = 3, + ZXDH_DTB_CFG_EPID_V_FUNC_NUM = 4, + ZXDH_REG_ENUM_MAX_VALUE +} ZXDH_REG_INFO_E; + typedef enum zxdh_dev_access_type_e { ZXDH_DEV_ACCESS_TYPE_PCIE = 0, ZXDH_DEV_ACCESS_TYPE_RISCV = 1, @@ -79,6 +93,26 @@ typedef enum zxdh_dev_agent_flag_e { ZXDH_DEV_AGENT_INVALID, } ZXDH_DEV_AGENT_FLAG_E; +typedef enum zxdh_acl_pri_mode_e { + ZXDH_ACL_PRI_EXPLICIT = 1, + ZXDH_ACL_PRI_IMPLICIT, + ZXDH_ACL_PRI_SPECIFY, + ZXDH_ACL_PRI_INVALID, +} ZXDH_ACL_PRI_MODE_E; + +typedef struct zxdh_d_node { + void *data; + struct zxdh_d_node *prev; + struct zxdh_d_node *next; +} ZXDH_D_NODE; + +typedef struct zxdh_d_head { + uint32_t used; + uint32_t maxnum; + ZXDH_D_NODE *p_next; + ZXDH_D_NODE *p_prev; +} ZXDH_D_HEAD; + typedef struct zxdh_dtb_tab_up_user_addr_t { uint32_t user_flag; uint64_t phy_addr; @@ -193,6 +227,79 @@ typedef struct zxdh_sdt_mgr_t { ZXDH_SDT_SOFT_TABLE_T *sdt_tbl_array[ZXDH_DEV_CHANNEL_MAX]; } ZXDH_SDT_MGR_T; +typedef struct zxdh_riscv_dtb_queue_USER_info_t { + uint32_t alloc_flag; + uint32_t queue_id; + uint32_t vport; + char user_name[ZXDH_PORT_NAME_MAX]; +} ZXDH_RISCV_DTB_QUEUE_USER_INFO_T; + +typedef struct 
zxdh_riscv_dtb_mgr { + uint32_t queue_alloc_count; + uint32_t queue_index; + ZXDH_RISCV_DTB_QUEUE_USER_INFO_T queue_user_info[ZXDH_DTB_QUEUE_NUM_MAX]; +} ZXDH_RISCV_DTB_MGR; + +typedef struct zxdh_dtb_queue_vm_info_t { + uint32_t dbi_en; + uint32_t queue_en; + uint32_t epid; + uint32_t vfunc_num; + uint32_t vector; + uint32_t func_num; + uint32_t vfunc_active; +} ZXDH_DTB_QUEUE_VM_INFO_T; + +typedef struct zxdh_dtb4k_dtb_enq_cfg_epid_v_func_num_0_127_t { + uint32_t dbi_en; + uint32_t queue_en; + uint32_t cfg_epid; + uint32_t cfg_vfunc_num; + uint32_t cfg_vector; + uint32_t cfg_func_num; + uint32_t cfg_vfunc_active; +} ZXDH_DTB4K_DTB_ENQ_CFG_EPID_V_FUNC_NUM_0_127_T; + + +typedef uint32_t (*ZXDH_REG_WRITE)(uint32_t dev_id, uint32_t addr, uint32_t *p_data); +typedef uint32_t (*ZXDH_REG_READ)(uint32_t dev_id, uint32_t addr, uint32_t *p_data); + +typedef struct zxdh_field_t { + const char *p_name; + uint32_t flags; + uint16_t msb_pos; + + uint16_t len; + uint32_t default_value; + uint32_t default_step; +} ZXDH_FIELD_T; + +typedef struct zxdh_reg_t { + const char *reg_name; + uint32_t reg_no; + uint32_t module_no; + uint32_t flags; + uint32_t array_type; + uint32_t addr; + uint32_t width; + uint32_t m_size; + uint32_t n_size; + uint32_t m_step; + uint32_t n_step; + uint32_t field_num; + ZXDH_FIELD_T *p_fields; + + ZXDH_REG_WRITE p_write_fun; + ZXDH_REG_READ p_read_fun; +} ZXDH_REG_T; + +typedef struct zxdh_tlb_mgr_t { + uint32_t entry_num; + uint32_t va_width; + uint32_t pa_width; +} ZXDH_TLB_MGR_T; + int zxdh_np_host_init(uint32_t dev_id, ZXDH_DEV_INIT_CTRL_T *p_dev_init_ctrl); +int zxdh_np_online_uninit(uint32_t dev_id, char *port_name, uint32_t queue_id); #endif /* ZXDH_NP_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 49163 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
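A note on the bit-field helpers introduced in this patch: zxdh_np_comm_read_bits() and zxdh_np_comm_write_bits() address the buffer MSB-first, i.e. bit index 0 is the most significant bit of byte 0, while the *_ex wrappers take the usual register-style field description (the field's MSB position counted from the least significant end of the buffer) and convert it with (base_size_bit - 1 - msb_start_pos). The following standalone sketch reproduces the core MSB-first convention; it is illustrative only, not part of the patch, and every name in it is hypothetical:

/* demo_msb_bits.c: standalone illustration of the MSB-first bit packing
 * convention used by zxdh_np_comm_write_bits(); hypothetical example,
 * not driver code. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Write 'len' bits of 'value' into buf, where bit position 0 is the
 * most significant bit of buf[0] (MSB-first addressing). */
static void set_bits_msb_first(uint8_t *buf, uint32_t msb_start,
			       uint32_t len, uint32_t value)
{
	uint32_t i;

	for (i = 0; i < len; i++) {
		uint32_t bitpos = msb_start + i;	/* absolute bit index */
		uint32_t byte = bitpos >> 3;
		uint32_t shift = 7 - (bitpos & 7);	/* MSB-first within a byte */
		uint32_t bit = (value >> (len - 1 - i)) & 1;

		buf[byte] = (uint8_t)((buf[byte] & ~(1U << shift)) | (bit << shift));
	}
}

int main(void)
{
	uint8_t buf[4];

	memset(buf, 0, sizeof(buf));
	/* Place the 12-bit value 0xABC so that it spans bits 4..15 of the
	 * buffer in MSB-first terms. */
	set_bits_msb_first(buf, 4, 12, 0xABC);
	printf("%02x %02x %02x %02x\n", buf[0], buf[1], buf[2], buf[3]);
	/* Prints: 0a bc 00 00 */
	return 0;
}

The value lands in the low nibble of byte 0 and all of byte 1, which matches what zxdh_np_comm_write_bits() produces for start_bit 4 / end_bit 15; the _ex wrapper would reach the same span on a 32-bit buffer with msb_start_pos 27, len 12, since 32 - 1 - 27 = 4.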
* [PATCH v1 03/15] net/zxdh: port tables init implementations 2024-12-06 5:57 ` [PATCH v1 00/15] net/zxdh: updated " Junlong Wang 2024-12-06 5:57 ` [PATCH v1 01/15] net/zxdh: zxdh np init implementation Junlong Wang 2024-12-06 5:57 ` [PATCH v1 02/15] net/zxdh: zxdh np uninit implementation Junlong Wang @ 2024-12-06 5:57 ` Junlong Wang 2024-12-06 5:57 ` [PATCH v1 04/15] net/zxdh: port tables uninit implementations Junlong Wang ` (11 subsequent siblings) 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-06 5:57 UTC (permalink / raw) To: ferruh.yigit; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 43912 bytes --] Insert port tables in the host. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/meson.build | 1 + drivers/net/zxdh/zxdh_ethdev.c | 24 ++ drivers/net/zxdh/zxdh_msg.c | 63 ++++ drivers/net/zxdh/zxdh_msg.h | 72 ++++ drivers/net/zxdh/zxdh_np.c | 666 ++++++++++++++++++++++++++++++++- drivers/net/zxdh/zxdh_np.h | 212 ++++++++++- drivers/net/zxdh/zxdh_pci.h | 2 + drivers/net/zxdh/zxdh_tables.c | 104 +++++ drivers/net/zxdh/zxdh_tables.h | 148 ++++++++ 9 files changed, 1289 insertions(+), 3 deletions(-) create mode 100644 drivers/net/zxdh/zxdh_tables.c create mode 100644 drivers/net/zxdh/zxdh_tables.h diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build index ab24a3145c..5b3af87c5b 100644 --- a/drivers/net/zxdh/meson.build +++ b/drivers/net/zxdh/meson.build @@ -20,4 +20,5 @@ sources = files( 'zxdh_pci.c', 'zxdh_queue.c', 'zxdh_np.c', + 'zxdh_tables.c', ) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index df5b8b7d55..9f3a5bcf9c 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -14,6 +14,7 @@ #include "zxdh_common.h" #include "zxdh_queue.h" #include "zxdh_np.h" +#include "zxdh_tables.h" struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; struct zxdh_shared_data *zxdh_shared_data; @@ -1146,6 +1147,25 @@ zxdh_np_init(struct rte_eth_dev *eth_dev) return 0; } +static int +zxdh_tables_init(struct rte_eth_dev *dev) +{ + int ret = 0; + + ret = zxdh_port_attr_init(dev); + if (ret) { + PMD_DRV_LOG(ERR, "zxdh_port_attr_init failed"); + return ret; + } + + ret = zxdh_panel_table_init(dev); + if (ret) { + PMD_DRV_LOG(ERR, "panel table init failed"); + return ret; + } + return ret; +} + static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) { @@ -1222,6 +1242,10 @@ zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) if (ret != 0) goto err_zxdh_init; + ret = zxdh_tables_init(eth_dev); + if (ret != 0) + goto err_zxdh_init; + return ret; err_zxdh_init: diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c index a0a005b178..1aed979de3 100644 --- a/drivers/net/zxdh/zxdh_msg.c +++ b/drivers/net/zxdh/zxdh_msg.c @@ -14,6 +14,7 @@ #include "zxdh_ethdev.h" #include "zxdh_logs.h" #include "zxdh_msg.h" +#include "zxdh_pci.h" #define ZXDH_REPS_INFO_FLAG_USABLE 0x00 #define ZXDH_BAR_SEQID_NUM_MAX 256 @@ -100,6 +101,7 @@ #define ZXDH_BAR_CHAN_MSG_EMEC 1 #define ZXDH_BAR_CHAN_MSG_NO_ACK 0 #define ZXDH_BAR_CHAN_MSG_ACK 1 +#define ZXDH_MSG_REPS_OK 0xff uint8_t subchan_id_tbl[ZXDH_BAR_MSG_SRC_NUM][ZXDH_BAR_MSG_DST_NUM] = { {ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_SEND}, @@ -1080,3 +1082,64 @@ int zxdh_get_bar_offset(struct zxdh_bar_offset_params *paras, res->bar_length = recv_msg.offset_reps.length; return ZXDH_BAR_MSG_OK; } + +int zxdh_vf_send_msg_to_pf(struct rte_eth_dev *dev, void *msg_req, + uint16_t 
msg_req_len, void *reply, uint16_t reply_len) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_pci_bar_msg in = {0}; + struct zxdh_msg_recviver_mem result = {0}; + struct zxdh_msg_reply_info reply_info = {0}; + int ret = 0; + + if (reply) { + RTE_ASSERT(reply_len < sizeof(zxdh_msg_reply_info)); + result.recv_buffer = reply; + result.buffer_len = reply_len; + } else { + result.recv_buffer = &reply_info; + result.buffer_len = sizeof(reply_info); + } + + struct zxdh_msg_reply_head *reply_head = + &(((struct zxdh_msg_reply_info *)result.recv_buffer)->reply_head); + struct zxdh_msg_reply_body *reply_body = + &(((struct zxdh_msg_reply_info *)result.recv_buffer)->reply_body); + + in.virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_MSG_CHAN_PFVFSHARE_OFFSET); + in.payload_addr = msg_req; + in.payload_len = msg_req_len; + in.src = ZXDH_MSG_CHAN_END_VF; + in.dst = ZXDH_MSG_CHAN_END_PF; + in.module_id = ZXDH_MODULE_BAR_MSG_TO_PF; + in.src_pcieid = hw->pcie_id; + in.dst_pcieid = ZXDH_PF_PCIE_ID(hw->pcie_id); + + ret = zxdh_bar_chan_sync_msg_send(&in, &result); + if (ret != ZXDH_BAR_MSG_OK) { + PMD_MSG_LOG(ERR, + "vf[%d] send bar msg to pf failed.ret %d", hw->vport.vfid, ret); + return -EAGAIN; + } + if (reply_head->flag != ZXDH_MSG_REPS_OK) { + PMD_MSG_LOG(ERR, "vf[%d] get pf reply failed: reply_head flag : 0x%x(0xff is OK).replylen %d", + hw->vport.vfid, reply_head->flag, reply_head->reps_len); + return -EAGAIN; + } + if (reply_body->flag != ZXDH_REPS_SUCC) { + PMD_MSG_LOG(ERR, "vf[%d] msg processing failed", hw->vfid); + return -EAGAIN; + } + return 0; +} + +void zxdh_msg_head_build(struct zxdh_hw *hw, enum zxdh_msg_type type, + struct zxdh_msg_info *msg_info) +{ + struct zxdh_msg_head *msghead = &msg_info->msg_head; + + msghead->msg_type = type; + msghead->vport = hw->vport.vport; + msghead->vf_id = hw->vport.vfid; + msghead->pcieid = hw->pcie_id; +} diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index fbc79e8f9d..35ed5d1a1c 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -33,6 +33,19 @@ #define ZXDH_BAR_MSG_PAYLOAD_MAX_LEN \ (ZXDH_BAR_MSG_ADDR_CHAN_INTERVAL - sizeof(struct zxdh_bar_msg_header)) +#define ZXDH_MSG_ADDR_CHAN_INTERVAL (2 * 1024) /* channel size */ +#define ZXDH_MSG_PAYLOAD_MAX_LEN \ + (ZXDH_MSG_ADDR_CHAN_INTERVAL - sizeof(struct zxdh_bar_msg_header)) + +#define ZXDH_MSG_REPLYBODY_HEAD sizeof(enum zxdh_reps_flag) +#define ZXDH_MSG_HEADER_SIZE 4 +#define ZXDH_MSG_REPLY_BODY_MAX_LEN \ + (ZXDH_MSG_PAYLOAD_MAX_LEN - sizeof(struct zxdh_msg_reply_head)) + +#define ZXDH_MSG_HEAD_LEN 8 +#define ZXDH_MSG_REQ_BODY_MAX_LEN \ + (ZXDH_MSG_PAYLOAD_MAX_LEN - ZXDH_MSG_HEAD_LEN) + enum ZXDH_DRIVER_TYPE { ZXDH_MSG_CHAN_END_MPF = 0, ZXDH_MSG_CHAN_END_PF, @@ -151,6 +164,13 @@ enum pciebar_layout_type { ZXDH_URI_MAX, }; +enum zxdh_msg_type { + ZXDH_NULL = 0, + ZXDH_VF_PORT_INIT = 1, + + ZXDH_MSG_TYPE_END, +} __rte_packed; + struct zxdh_msix_para { uint16_t pcie_id; uint16_t vector_risc; @@ -240,6 +260,54 @@ struct zxdh_offset_get_msg { uint16_t type; }; +struct zxdh_msg_reply_head { + uint8_t flag; + uint16_t reps_len; + uint8_t resvd; +} __rte_packed; + +enum zxdh_reps_flag { + ZXDH_REPS_FAIL, + ZXDH_REPS_SUCC = 0xaa, +} __rte_packed; + +struct zxdh_msg_reply_body { + enum zxdh_reps_flag flag; + union { + uint8_t reply_data[ZXDH_MSG_REPLY_BODY_MAX_LEN - sizeof(enum zxdh_reps_flag)]; + } __rte_packed; +} __rte_packed; + +struct zxdh_msg_reply_info { + struct zxdh_msg_reply_head reply_head; + struct zxdh_msg_reply_body 
reply_body; +} __rte_packed; + +struct zxdh_vf_init_msg { + uint8_t link_up; + uint8_t rsv; + uint16_t base_qid; + uint8_t rss_enable; +} __rte_packed; + +struct zxdh_msg_head { + enum zxdh_msg_type msg_type; + uint16_t vport; + uint16_t vf_id; + uint16_t pcieid; +} __rte_packed; + +struct zxdh_msg_info { + union { + uint8_t head_len[ZXDH_MSG_HEAD_LEN]; + struct zxdh_msg_head msg_head; + }; + union { + uint8_t datainfo[ZXDH_MSG_REQ_BODY_MAX_LEN]; + struct zxdh_vf_init_msg vf_init_msg; + } __rte_packed data; +} __rte_packed; + typedef int (*zxdh_bar_chan_msg_recv_callback)(void *pay_load, uint16_t len, void *reps_buffer, uint16_t *reps_len, void *dev); @@ -253,5 +321,9 @@ int zxdh_bar_chan_sync_msg_send(struct zxdh_pci_bar_msg *in, struct zxdh_msg_recviver_mem *result); int zxdh_bar_irq_recv(uint8_t src, uint8_t dst, uint64_t virt_addr, void *dev); +void zxdh_msg_head_build(struct zxdh_hw *hw, enum zxdh_msg_type type, + struct zxdh_msg_info *msg_info); +int zxdh_vf_send_msg_to_pf(struct rte_eth_dev *dev, void *msg_req, + uint16_t msg_req_len, void *reply, uint16_t reply_len); #endif /* ZXDH_MSG_H */ diff --git a/drivers/net/zxdh/zxdh_np.c b/drivers/net/zxdh/zxdh_np.c index a603c88049..6b8168da6f 100644 --- a/drivers/net/zxdh/zxdh_np.c +++ b/drivers/net/zxdh/zxdh_np.c @@ -7,18 +7,23 @@ #include <rte_common.h> #include <rte_log.h> +#include <rte_malloc.h> +#include <rte_memcpy.h> #include "zxdh_np.h" #include "zxdh_logs.h" static uint64_t g_np_bar_offset; -static ZXDH_DEV_MGR_T g_dev_mgr = {0}; -static ZXDH_SDT_MGR_T g_sdt_mgr = {0}; +static ZXDH_DEV_MGR_T g_dev_mgr; +static ZXDH_SDT_MGR_T g_sdt_mgr; +static uint32_t g_dpp_dtb_int_enable; +static uint32_t g_table_type[ZXDH_DEV_CHANNEL_MAX][ZXDH_DEV_SDT_ID_MAX]; ZXDH_PPU_CLS_BITMAP_T g_ppu_cls_bit_map[ZXDH_DEV_CHANNEL_MAX]; ZXDH_DTB_MGR_T *p_dpp_dtb_mgr[ZXDH_DEV_CHANNEL_MAX] = {NULL}; ZXDH_RISCV_DTB_MGR *p_riscv_dtb_queue_mgr[ZXDH_DEV_CHANNEL_MAX] = {NULL}; ZXDH_TLB_MGR_T *g_p_dpp_tlb_mgr[ZXDH_DEV_CHANNEL_MAX] = {NULL}; ZXDH_REG_T g_dpp_reg_info[4] = {0}; +ZXDH_DTB_TABLE_T g_dpp_dtb_table_info[4] = {0}; #define ZXDH_COMM_ASSERT(x) assert(x) #define ZXDH_SDT_MGR_PTR_GET() (&g_sdt_mgr) @@ -75,6 +80,98 @@ do {\ } \ } while (0) +#define ZXDH_COMM_CHECK_POINT(point)\ +do {\ + if ((point) == NULL) {\ + PMD_DRV_LOG(ERR, "ZXIC %s:%d[Error:POINT NULL] ! 
FUNCTION : %s!",\ + __FILE__, __LINE__, __func__);\ + ZXDH_COMM_ASSERT(0);\ + } \ +} while (0) + + +#define ZXDH_COMM_CHECK_POINT_MEMORY_FREE(point, ptr)\ +do {\ + if ((point) == NULL) {\ + PMD_DRV_LOG(ERR, "ZXIC %s:%d[Error:POINT NULL] !"\ + "FUNCTION : %s!", __FILE__, __LINE__, __func__);\ + rte_free(ptr);\ + ZXDH_COMM_ASSERT(0);\ + } \ +} while (0) + +#define ZXDH_COMM_CHECK_RC_MEMORY_FREE_NO_ASSERT(rc, becall, ptr)\ +do {\ + if ((rc) != 0) {\ + PMD_DRV_LOG(ERR, "ZXICP %s:%d, %s Call"\ + " %s Fail!", __FILE__, __LINE__, __func__, becall);\ + rte_free(ptr);\ + } \ +} while (0) + +#define ZXDH_COMM_CONVERT16(w_data) \ + (((w_data) & 0xff) << 8) + +#define ZXDH_DTB_TAB_UP_VIR_ADDR_GET(DEV_ID, QUEUE_ID, INDEX) \ + ((INDEX) * p_dpp_dtb_mgr[(DEV_ID)]->queue_info[(QUEUE_ID)].tab_up.item_size) + +#define ZXDH_DTB_TAB_DOWN_VIR_ADDR_GET(DEV_ID, QUEUE_ID, INDEX) \ + ((INDEX) * p_dpp_dtb_mgr[(DEV_ID)]->queue_info[(QUEUE_ID)].tab_down.item_size) + +#define ZXDH_DTB_TAB_DOWN_WR_INDEX_GET(DEV_ID, QUEUE_ID) \ + (p_dpp_dtb_mgr[(DEV_ID)]->queue_info[(QUEUE_ID)].tab_down.wr_index) + +#define ZXDH_DTB_TAB_DOWN_PHY_ADDR_GET(DEV_ID, QUEUE_ID, INDEX) \ + (p_dpp_dtb_mgr[(DEV_ID)]->queue_info[(QUEUE_ID)].tab_down.start_phy_addr) + +#define ZXDH_DTB_QUEUE_INIT_FLAG_GET(DEV_ID, QUEUE_ID) \ + (p_dpp_dtb_mgr[(DEV_ID)]->queue_info[(QUEUE_ID)].init_flag) + +static uint32_t +zxdh_np_comm_is_big_endian(void) +{ + ZXDH_ENDIAN_U c_data; + + c_data.a = 1; + + if (c_data.b == 1) + return 0; + else + return 1; +} + +static void +zxdh_np_comm_swap(uint8_t *p_uc_data, uint32_t dw_byte_len) +{ + uint32_t dw_byte_num = 0; + uint8_t uc_byte_mode = 0; + uint32_t uc_is_big_flag = 0; + uint32_t i = 0; + uint16_t *p_w_tmp = NULL; + uint32_t *p_dw_tmp = NULL; + + + p_dw_tmp = (uint32_t *)(p_uc_data); + + uc_is_big_flag = zxdh_np_comm_is_big_endian(); + + if (uc_is_big_flag) + return; + + dw_byte_num = dw_byte_len >> 2; + uc_byte_mode = dw_byte_len % 4 & 0xff; + + for (i = 0; i < dw_byte_num; i++) { + (*p_dw_tmp) = ZXDH_COMM_CONVERT16(*p_dw_tmp); + p_dw_tmp++; + } + + if (uc_byte_mode > 1) { + p_w_tmp = (uint16_t *)(p_dw_tmp); + (*p_w_tmp) = ZXDH_COMM_CONVERT16(*p_w_tmp); + } +} + static uint32_t zxdh_np_dev_init(void) { @@ -835,3 +932,568 @@ zxdh_np_online_uninit(uint32_t dev_id, return 0; } + +static uint32_t +zxdh_np_sdt_tbl_type_get(uint32_t dev_id, uint32_t sdt_no) +{ + return g_table_type[dev_id][sdt_no]; +} + + +static ZXDH_DTB_TABLE_T * +zxdh_np_table_info_get(uint32_t table_type) +{ + return &g_dpp_dtb_table_info[table_type]; +} + +static uint32_t +zxdh_np_dtb_write_table_cmd(uint32_t dev_id, + ZXDH_DTB_TABLE_INFO_E table_type, + void *p_cmd_data, + void *p_cmd_buff) +{ + uint32_t rc = 0; + uint32_t field_cnt = 0; + ZXDH_DTB_TABLE_T *p_table_info; + ZXDH_DTB_FIELD_T *p_field_info = NULL; + uint32_t temp_data = 0; + + ZXDH_COMM_CHECK_POINT(p_cmd_data); + ZXDH_COMM_CHECK_POINT(p_cmd_buff); + p_table_info = zxdh_np_table_info_get(table_type); + p_field_info = p_table_info->p_fields; + ZXDH_COMM_CHECK_DEV_POINT(dev_id, p_table_info); + + for (field_cnt = 0; field_cnt < p_table_info->field_num; field_cnt++) { + temp_data = *((uint32_t *)p_cmd_data + field_cnt) & ZXDH_COMM_GET_BIT_MASK(uint32_t, + p_field_info[field_cnt].len); + + rc = zxdh_np_comm_write_bits_ex((uint8_t *)p_cmd_buff, + ZXDH_DTB_TABLE_CMD_SIZE_BIT, + temp_data, + p_field_info[field_cnt].lsb_pos, + p_field_info[field_cnt].len); + + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxic_comm_write_bits"); + } + + return 0; +} + +static uint32_t 
+zxdh_np_dtb_smmu0_write_entry_data(uint32_t dev_id, + uint32_t mode, + uint32_t addr, + uint32_t *p_data, + ZXDH_DTB_ENTRY_T *p_entry) +{ + uint32_t rc = 0; + ZXDH_DTB_ERAM_TABLE_FORM_T dtb_eram_form_info = {0}; + + dtb_eram_form_info.valid = ZXDH_DTB_TABLE_VALID; + dtb_eram_form_info.type_mode = ZXDH_DTB_TABLE_MODE_ERAM; + dtb_eram_form_info.data_mode = mode; + dtb_eram_form_info.cpu_wr = 1; + dtb_eram_form_info.addr = addr; + dtb_eram_form_info.cpu_rd = 0; + dtb_eram_form_info.cpu_rd_mode = 0; + + if (ZXDH_ERAM128_OPR_128b == mode) { + p_entry->data_in_cmd_flag = 0; + p_entry->data_size = 128 / 8; + + rc = zxdh_np_dtb_write_table_cmd(dev_id, ZXDH_DTB_TABLE_ERAM_128, + &dtb_eram_form_info, p_entry->cmd); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_dtb_write_table_cmd"); + + memcpy(p_entry->data, p_data, 128 / 8); + } else if (ZXDH_ERAM128_OPR_64b == mode) { + p_entry->data_in_cmd_flag = 1; + p_entry->data_size = 64 / 8; + dtb_eram_form_info.data_l = *(p_data + 1); + dtb_eram_form_info.data_h = *(p_data); + + rc = zxdh_np_dtb_write_table_cmd(dev_id, ZXDH_DTB_TABLE_ERAM_64, + &dtb_eram_form_info, p_entry->cmd); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_dtb_write_table_cmd"); + + } else if (ZXDH_ERAM128_OPR_1b == mode) { + p_entry->data_in_cmd_flag = 1; + p_entry->data_size = 1; + dtb_eram_form_info.data_h = *(p_data); + + rc = zxdh_np_dtb_write_table_cmd(dev_id, ZXDH_DTB_TABLE_ERAM_1, + &dtb_eram_form_info, p_entry->cmd); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_dtb_write_table_cmd"); + } + + return 0; +} + +static uint32_t +zxdh_np_dtb_se_smmu0_ind_write(uint32_t dev_id, + uint32_t base_addr, + uint32_t index, + uint32_t wrt_mode, + uint32_t *p_data, + ZXDH_DTB_ENTRY_T *p_entry) +{ + uint32_t rc = 0; + uint32_t temp_idx = 0; + uint32_t dtb_ind_addr = 0; + + switch (wrt_mode) { + case ZXDH_ERAM128_OPR_128b: + { + if ((0xFFFFFFFF - (base_addr)) < (index)) { + PMD_DRV_LOG(ERR, "ICM %s:%d[Error:VALUE[val0=0x%x]" + "INVALID] [val1=0x%x] ! 
FUNCTION :%s !", __FILE__, __LINE__, + base_addr, index, __func__); + + return ZXDH_PAR_CHK_INVALID_INDEX; + } + if (base_addr + index > ZXDH_SE_SMMU0_ERAM_ADDR_NUM_TOTAL - 1) { + PMD_DRV_LOG(ERR, "dpp_se_smmu0_ind_write : index out of range !"); + return 1; + } + temp_idx = index << 7; + break; + } + + case ZXDH_ERAM128_OPR_64b: + { + if ((base_addr + (index >> 1)) > ZXDH_SE_SMMU0_ERAM_ADDR_NUM_TOTAL - 1) { + PMD_DRV_LOG(ERR, "dpp_se_smmu0_ind_write : index out of range !"); + return 1; + } + temp_idx = index << 6; + break; + } + + case ZXDH_ERAM128_OPR_1b: + { + if ((base_addr + (index >> 7)) > ZXDH_SE_SMMU0_ERAM_ADDR_NUM_TOTAL - 1) { + PMD_DRV_LOG(ERR, "dpp_se_smmu0_ind_write : index out of range !"); + return 1; + } + + temp_idx = index; + } + } + + dtb_ind_addr = ((base_addr << 7) & ZXDH_ERAM128_BADDR_MASK) + temp_idx; + + PMD_DRV_LOG(INFO, " dtb eram item 1bit addr: 0x%x", dtb_ind_addr); + + rc = zxdh_np_dtb_smmu0_write_entry_data(dev_id, + wrt_mode, + dtb_ind_addr, + p_data, + p_entry); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_dtb_smmu0_write_entry_data"); + + return 0; +} + +static uint32_t +zxdh_np_eram_dtb_len_get(uint32_t mode) +{ + uint32_t dtb_len = 0; + + switch (mode) { + case ZXDH_ERAM128_OPR_128b: + { + dtb_len += 2; + break; + } + case ZXDH_ERAM128_OPR_64b: + case ZXDH_ERAM128_OPR_1b: + { + dtb_len += 1; + break; + } + default: + break; + } + + return dtb_len; +} + +static uint32_t +zxdh_np_dtb_eram_one_entry(uint32_t dev_id, + uint32_t sdt_no, + uint32_t del_en, + void *pdata, + uint32_t *p_dtb_len, + ZXDH_DTB_ENTRY_T *p_dtb_one_entry) +{ + uint32_t rc = 0; + uint32_t base_addr = 0; + uint32_t index = 0; + uint32_t opr_mode = 0; + uint32_t buff[ZXDH_SMMU0_READ_REG_MAX_NUM] = {0}; + + ZXDH_SDTTBL_ERAM_T sdt_eram = {0}; + ZXDH_DTB_ERAM_ENTRY_INFO_T *peramdata = NULL; + + ZXDH_COMM_CHECK_POINT(pdata); + ZXDH_COMM_CHECK_POINT(p_dtb_one_entry); + ZXDH_COMM_CHECK_POINT(p_dtb_len); + + peramdata = (ZXDH_DTB_ERAM_ENTRY_INFO_T *)pdata; + index = peramdata->index; + base_addr = sdt_eram.eram_base_addr; + opr_mode = sdt_eram.eram_mode; + + switch (opr_mode) { + case ZXDH_ERAM128_TBL_128b: + { + opr_mode = ZXDH_ERAM128_OPR_128b; + break; + } + case ZXDH_ERAM128_TBL_64b: + { + opr_mode = ZXDH_ERAM128_OPR_64b; + break; + } + + case ZXDH_ERAM128_TBL_1b: + { + opr_mode = ZXDH_ERAM128_OPR_1b; + break; + } + } + + if (del_en) { + memset((uint8_t *)buff, 0, sizeof(buff)); + rc = zxdh_np_dtb_se_smmu0_ind_write(dev_id, + base_addr, + index, + opr_mode, + buff, + p_dtb_one_entry); + ZXDH_COMM_CHECK_DEV_RC(sdt_no, rc, "zxdh_dtb_se_smmu0_ind_write"); + } else { + rc = zxdh_np_dtb_se_smmu0_ind_write(dev_id, + base_addr, + index, + opr_mode, + peramdata->p_data, + p_dtb_one_entry); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rc, "zxdh_dtb_se_smmu0_ind_write"); + } + *p_dtb_len = zxdh_np_eram_dtb_len_get(opr_mode); + + return 0; +} + +static uint32_t +zxdh_np_dtb_data_write(uint8_t *p_data_buff, + uint32_t addr_offset, + ZXDH_DTB_ENTRY_T *entry) +{ + ZXDH_COMM_CHECK_POINT(p_data_buff); + ZXDH_COMM_CHECK_POINT(entry); + + uint8_t *p_cmd = p_data_buff + addr_offset; + uint32_t cmd_size = ZXDH_DTB_TABLE_CMD_SIZE_BIT / 8; + + uint8_t *p_data = p_cmd + cmd_size; + uint32_t data_size = entry->data_size; + + uint8_t *cmd = (uint8_t *)entry->cmd; + uint8_t *data = (uint8_t *)entry->data; + + rte_memcpy(p_cmd, cmd, cmd_size); + + if (!entry->data_in_cmd_flag) { + zxdh_np_comm_swap(data, data_size); + rte_memcpy(p_data, data, data_size); + } + + return 0; +} + +static uint32_t +zxdh_np_dtb_queue_enable_get(uint32_t 
dev_id, + uint32_t queue_id, + uint32_t *enable) +{ + uint32_t rc = 0; + ZXDH_DTB_QUEUE_VM_INFO_T vm_info = {0}; + + rc = zxdh_np_dtb_queue_vm_info_get(dev_id, queue_id, &vm_info); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_dtb_queue_vm_info_get"); + + *enable = vm_info.queue_en; + return rc; +} + +static uint32_t +zxdh_np_dtb_item_buff_wr(uint32_t dev_id, + uint32_t queue_id, + uint32_t dir_flag, + uint32_t index, + uint32_t pos, + uint32_t len, + uint32_t *p_data) +{ + uint64_t addr = 0; + + if (dir_flag == 1) + addr = ZXDH_DTB_TAB_UP_VIR_ADDR_GET(dev_id, queue_id, index) + + ZXDH_DTB_ITEM_ACK_SIZE + pos * 4; + else + addr = ZXDH_DTB_TAB_DOWN_VIR_ADDR_GET(dev_id, queue_id, index) + + ZXDH_DTB_ITEM_ACK_SIZE + pos * 4; + + memcpy((uint8_t *)(addr), p_data, len * 4); + + return 0; +} + +static uint32_t +zxdh_np_dtb_item_ack_rd(uint32_t dev_id, + uint32_t queue_id, + uint32_t dir_flag, + uint32_t index, + uint32_t pos, + uint32_t *p_data) +{ + uint64_t addr = 0; + uint32_t val = 0; + + if (dir_flag == 1) + addr = ZXDH_DTB_TAB_UP_VIR_ADDR_GET(dev_id, queue_id, index) + pos * 4; + else + addr = ZXDH_DTB_TAB_DOWN_VIR_ADDR_GET(dev_id, queue_id, index) + pos * 4; + + val = *((volatile uint32_t *)(addr)); + + *p_data = val; + + return 0; +} + +static uint32_t +zxdh_np_dtb_item_ack_wr(uint32_t dev_id, + uint32_t queue_id, + uint32_t dir_flag, + uint32_t index, + uint32_t pos, + uint32_t data) +{ + uint64_t addr = 0; + + if (dir_flag == 1) + addr = ZXDH_DTB_TAB_UP_VIR_ADDR_GET(dev_id, queue_id, index) + pos * 4; + else + addr = ZXDH_DTB_TAB_DOWN_VIR_ADDR_GET(dev_id, queue_id, index) + pos * 4; + + *((volatile uint32_t *)(addr)) = data; + + return 0; +} + +static uint32_t +zxdh_np_dtb_queue_item_info_set(uint32_t dev_id, + uint32_t queue_id, + ZXDH_DTB_QUEUE_ITEM_INFO_T *p_item_info) +{ + uint32_t rc = 0; + ZXDH_DTB_QUEUE_LEN_T dtb_len = {0}; + + dtb_len.cfg_dtb_cmd_type = p_item_info->cmd_type; + dtb_len.cfg_dtb_cmd_int_en = p_item_info->int_en; + dtb_len.cfg_queue_dtb_len = p_item_info->data_len; + + rc = zxdh_np_reg_write(dev_id, ZXDH_DTB_CFG_QUEUE_DTB_LEN, + 0, queue_id, (void *)&dtb_len); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rc, "dpp_reg_write"); + return 0; +} + +static uint32_t +zxdh_np_dtb_tab_down_info_set(uint32_t dev_id, + uint32_t queue_id, + uint32_t int_flag, + uint32_t data_len, + uint32_t *p_data, + uint32_t *p_item_index) +{ + uint32_t rc = 0; + uint32_t i = 0; + uint32_t queue_en = 0; + uint32_t ack_vale = 0; + uint32_t item_index = 0; + uint32_t unused_item_num = 0; + ZXDH_DTB_QUEUE_ITEM_INFO_T item_info = {0}; + + if (ZXDH_DTB_QUEUE_INIT_FLAG_GET(dev_id, queue_id) == 0) { + PMD_DRV_LOG(ERR, "dtb queue %d is not init.", queue_id); + return ZXDH_RC_DTB_QUEUE_IS_NOT_INIT; + } + + if (data_len % 4 != 0) + return ZXDH_RC_DTB_PARA_INVALID; + + rc = zxdh_np_dtb_queue_enable_get(dev_id, queue_id, &queue_en); + if (!queue_en) { + PMD_DRV_LOG(ERR, "the queue %d is not enable!,rc=%d", queue_id, rc); + return ZXDH_RC_DTB_QUEUE_NOT_ENABLE; + } + + rc = zxdh_np_dtb_queue_unused_item_num_get(dev_id, queue_id, &unused_item_num); + if (unused_item_num == 0) + return ZXDH_RC_DTB_QUEUE_ITEM_HW_EMPTY; + + for (i = 0; i < ZXDH_DTB_QUEUE_ITEM_NUM_MAX; i++) { + item_index = ZXDH_DTB_TAB_DOWN_WR_INDEX_GET(dev_id, queue_id) % + ZXDH_DTB_QUEUE_ITEM_NUM_MAX; + + rc = zxdh_np_dtb_item_ack_rd(dev_id, queue_id, 0, + item_index, 0, &ack_vale); + + ZXDH_DTB_TAB_DOWN_WR_INDEX_GET(dev_id, queue_id)++; + + if ((ack_vale >> 8) == ZXDH_DTB_TAB_ACK_UNUSED_MASK) + break; + } + + if (i == ZXDH_DTB_QUEUE_ITEM_NUM_MAX) + return 
ZXDH_RC_DTB_QUEUE_ITEM_SW_EMPTY; + + rc = zxdh_np_dtb_item_buff_wr(dev_id, queue_id, 0, + item_index, 0, data_len, p_data); + + rc = zxdh_np_dtb_item_ack_wr(dev_id, queue_id, 0, + item_index, 0, ZXDH_DTB_TAB_ACK_IS_USING_MASK); + + item_info.cmd_vld = 1; + item_info.cmd_type = 0; + item_info.int_en = int_flag; + item_info.data_len = data_len / 4; + item_info.data_hddr = ((ZXDH_DTB_TAB_DOWN_PHY_ADDR_GET(dev_id, queue_id, + item_index) >> 4) >> 32) & 0xffffffff; + item_info.data_laddr = (ZXDH_DTB_TAB_DOWN_PHY_ADDR_GET(dev_id, queue_id, + item_index) >> 4) & 0xffffffff; + + rc = zxdh_np_dtb_queue_item_info_set(dev_id, queue_id, &item_info); + *p_item_index = item_index; + + return 0; +} + +static uint32_t +zxdh_np_dtb_write_down_table_data(uint32_t dev_id, + uint32_t queue_id, + uint32_t down_table_len, + uint8_t *p_down_table_buff, + uint32_t *p_element_id) +{ + uint32_t rc = 0; + uint32_t dtb_interrupt_status = 0; + + dtb_interrupt_status = g_dpp_dtb_int_enable; + + rc = zxdh_np_dtb_tab_down_info_set(dev_id, + queue_id, + dtb_interrupt_status, + down_table_len / 4, + (uint32_t *)p_down_table_buff, + p_element_id); + return rc; +} + +int +zxdh_np_dtb_table_entry_write(uint32_t dev_id, + uint32_t queue_id, + uint32_t entrynum, + ZXDH_DTB_USER_ENTRY_T *down_entries) +{ + uint32_t rc = 0; + uint32_t entry_index = 0; + uint32_t sdt_no = 0; + uint32_t tbl_type = 0; + uint32_t element_id = 0xff; + uint32_t one_dtb_len = 0; + uint32_t dtb_len = 0; + uint32_t addr_offset = 0; + uint32_t max_size = 0; + uint8_t *p_data_buff = NULL; + + uint8_t *p_data_buff_ex = NULL; + ZXDH_DTB_LPM_ENTRY_T lpm_entry = {0}; + + uint8_t entry_cmd[ZXDH_DTB_TABLE_CMD_SIZE_BIT] = {0}; + uint8_t entry_data[ZXDH_ETCAM_WIDTH_MAX] = {0}; + ZXDH_DTB_USER_ENTRY_T *pentry = NULL; + ZXDH_DTB_ENTRY_T dtb_one_entry = {0}; + + p_data_buff = (uint8_t *)rte_malloc(NULL, ZXDH_DTB_TABLE_DATA_BUFF_SIZE, 0); + ZXDH_COMM_CHECK_POINT(p_data_buff); + memset(p_data_buff, 0, ZXDH_DTB_TABLE_DATA_BUFF_SIZE); + + p_data_buff_ex = (uint8_t *)rte_malloc(NULL, ZXDH_DTB_TABLE_DATA_BUFF_SIZE, 0); + ZXDH_COMM_CHECK_POINT_MEMORY_FREE(p_data_buff_ex, p_data_buff); + memset(p_data_buff_ex, 0, ZXDH_DTB_TABLE_DATA_BUFF_SIZE); + + memset((uint8_t *)&lpm_entry, 0x0, sizeof(ZXDH_DTB_LPM_ENTRY_T)); + memset((uint8_t *)&dtb_one_entry, 0x0, sizeof(ZXDH_DTB_ENTRY_T)); + memset(entry_cmd, 0x0, sizeof(entry_cmd)); + memset(entry_data, 0x0, sizeof(entry_data)); + dtb_one_entry.cmd = entry_cmd; + dtb_one_entry.data = entry_data; + + max_size = (ZXDH_DTB_TABLE_DATA_BUFF_SIZE / 16) - 1; + + for (entry_index = 0; entry_index < entrynum; entry_index++) { + pentry = down_entries + entry_index; + sdt_no = pentry->sdt_no; + tbl_type = zxdh_np_sdt_tbl_type_get(dev_id, sdt_no); + switch (tbl_type) { + case ZXDH_SDT_TBLT_ERAM: + { + rc = zxdh_np_dtb_eram_one_entry(dev_id, sdt_no, ZXDH_DTB_ITEM_ADD_OR_UPDATE, + pentry->p_entry_data, &one_dtb_len, &dtb_one_entry); + break; + } + default: + { + PMD_DRV_LOG(ERR, "SDT table_type[ %d ] is invalid!", tbl_type); + rte_free(p_data_buff); + rte_free(p_data_buff_ex); + return 1; + } + } + + addr_offset = dtb_len * ZXDH_DTB_LEN_POS_SETP; + dtb_len += one_dtb_len; + if (dtb_len > max_size) { + rte_free(p_data_buff); + rte_free(p_data_buff_ex); + PMD_DRV_LOG(ERR, " %s error dtb_len>%u!", __func__, + max_size); + return ZXDH_RC_DTB_DOWN_LEN_INVALID; + } + rc = zxdh_np_dtb_data_write(p_data_buff, addr_offset, &dtb_one_entry); + memset(entry_cmd, 0x0, sizeof(entry_cmd)); + memset(entry_data, 0x0, sizeof(entry_data)); + } + + if (dtb_len == 0) { 
+ rte_free(p_data_buff); + rte_free(p_data_buff_ex); + return ZXDH_RC_DTB_DOWN_LEN_INVALID; + } + + rc = zxdh_np_dtb_write_down_table_data(dev_id, + queue_id, + dtb_len * 16, + p_data_buff, + &element_id); + rte_free(p_data_buff); + rte_free(p_data_buff_ex); + + return rc; +} diff --git a/drivers/net/zxdh/zxdh_np.h b/drivers/net/zxdh/zxdh_np.h index dc0e867827..02c27df887 100644 --- a/drivers/net/zxdh/zxdh_np.h +++ b/drivers/net/zxdh/zxdh_np.h @@ -7,6 +7,9 @@ #include <stdint.h> +#define ZXDH_DISABLE (0) +#define ZXDH_ENABLE (1) + #define ZXDH_PORT_NAME_MAX (32) #define ZXDH_DEV_CHANNEL_MAX (2) #define ZXDH_DEV_SDT_ID_MAX (256U) @@ -52,6 +55,94 @@ #define ZXDH_ACL_TBL_ID_NUM (8U) #define ZXDH_ACL_BLOCK_NUM (8U) +#define ZXDH_SMMU0_READ_REG_MAX_NUM (4) + +#define ZXDH_DTB_ITEM_ACK_SIZE (16) +#define ZXDH_DTB_ITEM_BUFF_SIZE (16 * 1024) +#define ZXDH_DTB_ITEM_SIZE (16 + 16 * 1024) +#define ZXDH_DTB_TAB_UP_SIZE ((16 + 16 * 1024) * 32) +#define ZXDH_DTB_TAB_DOWN_SIZE ((16 + 16 * 1024) * 32) + +#define ZXDH_DTB_TAB_UP_ACK_VLD_MASK (0x555555) +#define ZXDH_DTB_TAB_DOWN_ACK_VLD_MASK (0x5a5a5a) +#define ZXDH_DTB_TAB_ACK_IS_USING_MASK (0x11111100) +#define ZXDH_DTB_TAB_ACK_UNUSED_MASK (0x0) +#define ZXDH_DTB_TAB_ACK_SUCCESS_MASK (0xff) +#define ZXDH_DTB_TAB_ACK_FAILED_MASK (0x1) +#define ZXDH_DTB_TAB_ACK_CHECK_VALUE (0x12345678) + +#define ZXDH_DTB_TAB_ACK_VLD_SHIFT (104) +#define ZXDH_DTB_TAB_ACK_STATUS_SHIFT (96) +#define ZXDH_DTB_LEN_POS_SETP (16) +#define ZXDH_DTB_ITEM_ADD_OR_UPDATE (0) +#define ZXDH_DTB_ITEM_DELETE (1) + +#define ZXDH_ETCAM_LEN_SIZE (6) +#define ZXDH_ETCAM_BLOCK_NUM (8) +#define ZXDH_ETCAM_TBLID_NUM (8) +#define ZXDH_ETCAM_RAM_NUM (8) +#define ZXDH_ETCAM_RAM_WIDTH (80U) +#define ZXDH_ETCAM_WR_MASK_MAX (((uint32_t)1 << ZXDH_ETCAM_RAM_NUM) - 1) +#define ZXDH_ETCAM_WIDTH_MIN (ZXDH_ETCAM_RAM_WIDTH) +#define ZXDH_ETCAM_WIDTH_MAX (ZXDH_ETCAM_RAM_NUM * ZXDH_ETCAM_RAM_WIDTH) + +#define ZXDH_DTB_TABLE_DATA_BUFF_SIZE (uint16_t)(1024 * 16) +#define ZXDH_DTB_TABLE_CMD_SIZE_BIT (128) + +#define ZXDH_SE_SMMU0_ERAM_BLOCK_NUM (32) +#define ZXDH_SE_SMMU0_ERAM_ADDR_NUM_PER_BLOCK (0x4000) +#define ZXDH_SE_SMMU0_ERAM_ADDR_NUM_TOTAL \ + (ZXDH_SE_SMMU0_ERAM_BLOCK_NUM * ZXDH_SE_SMMU0_ERAM_ADDR_NUM_PER_BLOCK) + +/**errco code */ +#define ZXDH_RC_BASE (0x1000U) +#define ZXDH_PARAMETER_CHK_BASE (ZXDH_RC_BASE | 0x200) +#define ZXDH_PAR_CHK_POINT_NULL (ZXDH_PARAMETER_CHK_BASE | 0x001) +#define ZXDH_PAR_CHK_ARGIN_ZERO (ZXDH_PARAMETER_CHK_BASE | 0x002) +#define ZXDH_PAR_CHK_ARGIN_OVERFLOW (ZXDH_PARAMETER_CHK_BASE | 0x003) +#define ZXDH_PAR_CHK_ARGIN_ERROR (ZXDH_PARAMETER_CHK_BASE | 0x004) +#define ZXDH_PAR_CHK_INVALID_INDEX (ZXDH_PARAMETER_CHK_BASE | 0x005) +#define ZXDH_PAR_CHK_INVALID_RANGE (ZXDH_PARAMETER_CHK_BASE | 0x006) +#define ZXDH_PAR_CHK_INVALID_DEV_ID (ZXDH_PARAMETER_CHK_BASE | 0x007) +#define ZXDH_PAR_CHK_INVALID_PARA (ZXDH_PARAMETER_CHK_BASE | 0x008) + +#define ZXDH_ERAM128_BADDR_MASK (0x3FFFF80) + +#define ZXDH_DTB_TABLE_MODE_ERAM (0) +#define ZXDH_DTB_TABLE_MODE_DDR (1) +#define ZXDH_DTB_TABLE_MODE_ZCAM (2) +#define ZXDH_DTB_TABLE_MODE_ETCAM (3) +#define ZXDH_DTB_TABLE_MODE_MC_HASH (4) +#define ZXDH_DTB_TABLE_VALID (1) + +/* DTB module error code */ +#define ZXDH_RC_DTB_BASE (0xd00) +#define ZXDH_RC_DTB_MGR_EXIST (ZXDH_RC_DTB_BASE | 0x0) +#define ZXDH_RC_DTB_MGR_NOT_EXIST (ZXDH_RC_DTB_BASE | 0x1) +#define ZXDH_RC_DTB_QUEUE_RES_EMPTY (ZXDH_RC_DTB_BASE | 0x2) +#define ZXDH_RC_DTB_QUEUE_BUFF_SIZE_ERR (ZXDH_RC_DTB_BASE | 0x3) +#define ZXDH_RC_DTB_QUEUE_ITEM_HW_EMPTY (ZXDH_RC_DTB_BASE | 0x4) 
+#define ZXDH_RC_DTB_QUEUE_ITEM_SW_EMPTY (ZXDH_RC_DTB_BASE | 0x5) +#define ZXDH_RC_DTB_TAB_UP_BUFF_EMPTY (ZXDH_RC_DTB_BASE | 0x6) +#define ZXDH_RC_DTB_TAB_DOWN_BUFF_EMPTY (ZXDH_RC_DTB_BASE | 0x7) +#define ZXDH_RC_DTB_TAB_UP_TRANS_ERR (ZXDH_RC_DTB_BASE | 0x8) +#define ZXDH_RC_DTB_TAB_DOWN_TRANS_ERR (ZXDH_RC_DTB_BASE | 0x9) +#define ZXDH_RC_DTB_QUEUE_IS_WORKING (ZXDH_RC_DTB_BASE | 0xa) +#define ZXDH_RC_DTB_QUEUE_IS_NOT_INIT (ZXDH_RC_DTB_BASE | 0xb) +#define ZXDH_RC_DTB_MEMORY_ALLOC_ERR (ZXDH_RC_DTB_BASE | 0xc) +#define ZXDH_RC_DTB_PARA_INVALID (ZXDH_RC_DTB_BASE | 0xd) +#define ZXDH_RC_DMA_RANGE_INVALID (ZXDH_RC_DTB_BASE | 0xe) +#define ZXDH_RC_DMA_RCV_DATA_EMPTY (ZXDH_RC_DTB_BASE | 0xf) +#define ZXDH_RC_DTB_LPM_INSERT_FAIL (ZXDH_RC_DTB_BASE | 0x10) +#define ZXDH_RC_DTB_LPM_DELETE_FAIL (ZXDH_RC_DTB_BASE | 0x11) +#define ZXDH_RC_DTB_DOWN_LEN_INVALID (ZXDH_RC_DTB_BASE | 0x12) +#define ZXDH_RC_DTB_DOWN_HASH_CONFLICT (ZXDH_RC_DTB_BASE | 0x13) +#define ZXDH_RC_DTB_QUEUE_NOT_ALLOC (ZXDH_RC_DTB_BASE | 0x14) +#define ZXDH_RC_DTB_QUEUE_NAME_ERROR (ZXDH_RC_DTB_BASE | 0x15) +#define ZXDH_RC_DTB_DUMP_SIZE_SMALL (ZXDH_RC_DTB_BASE | 0x16) +#define ZXDH_RC_DTB_SEARCH_VPORT_QUEUE_ZERO (ZXDH_RC_DTB_BASE | 0x17) +#define ZXDH_RC_DTB_QUEUE_NOT_ENABLE (ZXDH_RC_DTB_BASE | 0x18) + typedef enum zxdh_module_init_e { ZXDH_MODULE_INIT_NPPU = 0, ZXDH_MODULE_INIT_PPU, @@ -260,7 +351,6 @@ typedef struct zxdh_dtb4k_dtb_enq_cfg_epid_v_func_num_0_127_t { uint32_t cfg_vfunc_active; } ZXDH_DTB4K_DTB_ENQ_CFG_EPID_V_FUNC_NUM_0_127_T; - typedef uint32_t (*ZXDH_REG_WRITE)(uint32_t dev_id, uint32_t addr, uint32_t *p_data); typedef uint32_t (*ZXDH_REG_READ)(uint32_t dev_id, uint32_t addr, uint32_t *p_data); @@ -299,7 +389,127 @@ typedef struct zxdh_tlb_mgr_t { uint32_t pa_width; } ZXDH_TLB_MGR_T; +typedef enum zxdh_eram128_tbl_mode_e { + ZXDH_ERAM128_TBL_1b = 0, + ZXDH_ERAM128_TBL_32b = 1, + ZXDH_ERAM128_TBL_64b = 2, + ZXDH_ERAM128_TBL_128b = 3, + ZXDH_ERAM128_TBL_2b = 4, + ZXDH_ERAM128_TBL_4b = 5, + ZXDH_ERAM128_TBL_8b = 6, + ZXDH_ERAM128_TBL_16b = 7 +} ZXDH_ERAM128_TBL_MODE_E; + +typedef enum zxdh_eram128_opr_mode_e { + ZXDH_ERAM128_OPR_128b = 0, + ZXDH_ERAM128_OPR_64b = 1, + ZXDH_ERAM128_OPR_1b = 2, + ZXDH_ERAM128_OPR_32b = 3 + +} ZXDH_ERAM128_OPR_MODE_E; + +typedef enum zxdh_dtb_table_info_e { + ZXDH_DTB_TABLE_DDR = 0, + ZXDH_DTB_TABLE_ERAM_1 = 1, + ZXDH_DTB_TABLE_ERAM_64 = 2, + ZXDH_DTB_TABLE_ERAM_128 = 3, + ZXDH_DTB_TABLE_ZCAM = 4, + ZXDH_DTB_TABLE_ETCAM = 5, + ZXDH_DTB_TABLE_MC_HASH = 6, + ZXDH_DTB_TABLE_ENUM_MAX +} ZXDH_DTB_TABLE_INFO_E; + +typedef enum zxdh_sdt_table_type_e { + ZXDH_SDT_TBLT_INVALID = 0, + ZXDH_SDT_TBLT_ERAM = 1, + ZXDH_SDT_TBLT_DDR3 = 2, + ZXDH_SDT_TBLT_HASH = 3, + ZXDH_SDT_TBLT_LPM = 4, + ZXDH_SDT_TBLT_ETCAM = 5, + ZXDH_SDT_TBLT_PORTTBL = 6, + ZXDH_SDT_TBLT_MAX = 7, +} ZXDH_SDT_TABLE_TYPE_E; + +typedef struct zxdh_dtb_lpm_entry_t { + uint32_t dtb_len0; + uint8_t *p_data_buff0; + uint32_t dtb_len1; + uint8_t *p_data_buff1; +} ZXDH_DTB_LPM_ENTRY_T; + +typedef struct zxdh_dtb_entry_t { + uint8_t *cmd; + uint8_t *data; + uint32_t data_in_cmd_flag; + uint32_t data_size; +} ZXDH_DTB_ENTRY_T; + +typedef struct zxdh_dtb_eram_table_form_t { + uint32_t valid; + uint32_t type_mode; + uint32_t data_mode; + uint32_t cpu_wr; + uint32_t cpu_rd; + uint32_t cpu_rd_mode; + uint32_t addr; + uint32_t data_h; + uint32_t data_l; +} ZXDH_DTB_ERAM_TABLE_FORM_T; + +typedef struct zxdh_sdt_tbl_eram_t { + uint32_t table_type; + uint32_t eram_mode; + uint32_t eram_base_addr; + uint32_t eram_table_depth; + uint32_t eram_clutch_en; 
+} ZXDH_SDTTBL_ERAM_T; + +typedef union zxdh_endian_u { + unsigned int a; + unsigned char b; +} ZXDH_ENDIAN_U; + +typedef struct zxdh_dtb_field_t { + const char *p_name; + uint16_t lsb_pos; + uint16_t len; +} ZXDH_DTB_FIELD_T; + +typedef struct zxdh_dtb_table_t { + const char *table_type; + uint32_t table_no; + uint32_t field_num; + ZXDH_DTB_FIELD_T *p_fields; +} ZXDH_DTB_TABLE_T; + +typedef struct zxdh_dtb_queue_item_info_t { + uint32_t cmd_vld; + uint32_t cmd_type; + uint32_t int_en; + uint32_t data_len; + uint32_t data_laddr; + uint32_t data_hddr; +} ZXDH_DTB_QUEUE_ITEM_INFO_T; + +typedef struct zxdh_dtb_queue_len_t { + uint32_t cfg_dtb_cmd_type; + uint32_t cfg_dtb_cmd_int_en; + uint32_t cfg_queue_dtb_len; +} ZXDH_DTB_QUEUE_LEN_T; + +typedef struct zxdh_dtb_eram_entry_info_t { + uint32_t index; + uint32_t *p_data; +} ZXDH_DTB_ERAM_ENTRY_INFO_T; + +typedef struct zxdh_dtb_user_entry_t { + uint32_t sdt_no; + void *p_entry_data; +} ZXDH_DTB_USER_ENTRY_T; + int zxdh_np_host_init(uint32_t dev_id, ZXDH_DEV_INIT_CTRL_T *p_dev_init_ctrl); int zxdh_np_online_uninit(uint32_t dev_id, char *port_name, uint32_t queue_id); +int zxdh_np_dtb_table_entry_write(uint32_t dev_id, uint32_t queue_id, + uint32_t entrynum, ZXDH_DTB_USER_ENTRY_T *down_entries); #endif /* ZXDH_NP_H */ diff --git a/drivers/net/zxdh/zxdh_pci.h b/drivers/net/zxdh/zxdh_pci.h index d6487a574f..e3f13cb17d 100644 --- a/drivers/net/zxdh/zxdh_pci.h +++ b/drivers/net/zxdh/zxdh_pci.h @@ -12,6 +12,8 @@ #include "zxdh_ethdev.h" +#define ZXDH_PF_PCIE_ID(pcie_id) (((pcie_id) & 0xff00) | 1 << 11) + enum zxdh_msix_status { ZXDH_MSIX_NONE = 0, ZXDH_MSIX_DISABLED = 1, diff --git a/drivers/net/zxdh/zxdh_tables.c b/drivers/net/zxdh/zxdh_tables.c new file mode 100644 index 0000000000..4284fefe3a --- /dev/null +++ b/drivers/net/zxdh/zxdh_tables.c @@ -0,0 +1,104 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#include "zxdh_ethdev.h" +#include "zxdh_msg.h" +#include "zxdh_np.h" +#include "zxdh_tables.h" +#include "zxdh_logs.h" + +#define ZXDH_SDT_VPORT_ATT_TABLE 1 +#define ZXDH_SDT_PANEL_ATT_TABLE 2 + +int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr) +{ + int ret = 0; + + ZXDH_DTB_ERAM_ENTRY_INFO_T entry = {vfid, (uint32_t *)port_attr}; + ZXDH_DTB_USER_ENTRY_T user_entry_write = {ZXDH_SDT_VPORT_ATT_TABLE, (void *)&entry}; + + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &user_entry_write); + if (ret != 0) + PMD_DRV_LOG(ERR, "write vport_att failed vfid:%d failed", vfid); + + return ret; +} + +int +zxdh_port_attr_init(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_port_attr_table port_attr = {0}; + struct zxdh_msg_info msg_info = {0}; + int ret = 0; + + if (hw->is_pf) { + port_attr.hit_flag = 1; + port_attr.phy_port = hw->phyport; + port_attr.pf_vfid = zxdh_vport_to_vfid(hw->vport); + port_attr.rss_enable = 0; + if (!hw->is_pf) + port_attr.is_vf = 1; + + port_attr.mtu = dev->data->mtu; + port_attr.mtu_enable = 1; + port_attr.is_up = 0; + if (!port_attr.rss_enable) + port_attr.port_base_qid = 0; + + ret = zxdh_set_port_attr(hw->vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "write port_attr failed"); + return -EAGAIN; + } + } else { + struct zxdh_vf_init_msg *vf_init_msg = &msg_info.data.vf_init_msg; + + zxdh_msg_head_build(hw, ZXDH_VF_PORT_INIT, &msg_info); + msg_info.msg_head.msg_type = ZXDH_VF_PORT_INIT; + vf_init_msg->link_up = 1; + vf_init_msg->base_qid = 0; + vf_init_msg->rss_enable = 0; + ret = 
zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "vf port_init failed"); + return -EAGAIN; + } + } + return ret; +}; + +int zxdh_panel_table_init(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + if (!hw->is_pf) + return 0; + + struct zxdh_panel_table panel; + + memset(&panel, 0, sizeof(panel)); + panel.hit_flag = 1; + panel.pf_vfid = zxdh_vport_to_vfid(hw->vport); + panel.mtu_enable = 1; + panel.mtu = dev->data->mtu; + + ZXDH_DTB_ERAM_ENTRY_INFO_T panel_entry = { + .index = hw->phyport, + .p_data = (uint32_t *)&panel + }; + ZXDH_DTB_USER_ENTRY_T entry = { + .sdt_no = ZXDH_SDT_PANEL_ATT_TABLE, + .p_entry_data = (void *)&panel_entry + }; + int ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, g_dtb_data.queueid, 1, &entry); + + if (ret) { + PMD_DRV_LOG(ERR, "Insert eram-panel failed, code:%u", ret); + return -EAGAIN; + } + + return ret; +} diff --git a/drivers/net/zxdh/zxdh_tables.h b/drivers/net/zxdh/zxdh_tables.h new file mode 100644 index 0000000000..5d34af2f05 --- /dev/null +++ b/drivers/net/zxdh/zxdh_tables.h @@ -0,0 +1,148 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#ifndef ZXDH_TABLES_H +#define ZXDH_TABLES_H + +#include <stdint.h> + +extern struct zxdh_dtb_shared_data g_dtb_data; + +#define ZXDH_DEVICE_NO 0 + +struct zxdh_port_attr_table { +#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN + uint8_t byte4_rsv1: 1; + uint8_t ingress_meter_enable: 1; + uint8_t egress_meter_enable: 1; + uint8_t byte4_rsv2: 2; + uint8_t fd_enable: 1; + uint8_t vepa_enable: 1; + uint8_t spoof_check_enable: 1; + + uint8_t inline_sec_offload: 1; + uint8_t ovs_enable: 1; + uint8_t lag_enable: 1; + uint8_t is_passthrough: 1; + uint8_t is_vf: 1; + uint8_t virtion_version: 2; + uint8_t virtio_enable: 1; + + uint8_t accelerator_offload_flag: 1; + uint8_t lro_offload: 1; + uint8_t ip_fragment_offload: 1; + uint8_t tcp_udp_checksum_offload: 1; + uint8_t ip_checksum_offload: 1; + uint8_t outer_ip_checksum_offload: 1; + uint8_t is_up: 1; + uint8_t rsv1: 1; + + uint8_t rsv3 : 1; + uint8_t rdma_offload_enable: 1; + uint8_t vlan_filter_enable: 1; + uint8_t vlan_strip_offload: 1; + uint8_t qinq_valn_strip_offload: 1; + uint8_t rss_enable: 1; + uint8_t mtu_enable: 1; + uint8_t hit_flag: 1; + + uint16_t mtu; + + uint16_t port_base_qid : 12; + uint16_t hash_search_index : 3; + uint16_t rsv: 1; + + uint8_t rss_hash_factor; + + uint8_t hash_alg: 4; + uint8_t phy_port: 4; + + uint16_t lag_id : 3; + uint16_t pf_vfid : 11; + uint16_t ingress_tm_enable : 1; + uint16_t egress_tm_enable : 1; + + uint16_t tpid; + + uint16_t vhca : 10; + uint16_t uplink_port : 6; +#else + uint8_t rsv3 : 1; + uint8_t rdma_offload_enable: 1; + uint8_t vlan_filter_enable: 1; + uint8_t vlan_strip_offload: 1; + uint8_t qinq_valn_strip_offload: 1; + uint8_t rss_enable: 1; + uint8_t mtu_enable: 1; + uint8_t hit_flag: 1; + + uint8_t accelerator_offload_flag: 1; + uint8_t lro_offload: 1; + uint8_t ip_fragment_offload: 1; + uint8_t tcp_udp_checksum_offload: 1; + uint8_t ip_checksum_offload: 1; + uint8_t outer_ip_checksum_offload: 1; + uint8_t is_up: 1; + uint8_t rsv1: 1; + + uint8_t inline_sec_offload: 1; + uint8_t ovs_enable: 1; + uint8_t lag_enable: 1; + uint8_t is_passthrough: 1; + uint8_t is_vf: 1; + uint8_t virtion_version: 2; + uint8_t virtio_enable: 1; + + uint8_t byte4_rsv1: 1; + uint8_t ingress_meter_enable: 1; + uint8_t egress_meter_enable: 1; + uint8_t byte4_rsv2: 2; + uint8_t fd_enable: 1; + uint8_t vepa_enable: 1; + uint8_t 
spoof_check_enable: 1; + + uint16_t port_base_qid : 12; + uint16_t hash_search_index : 3; + uint16_t rsv: 1; + + uint16_t mtu; + + uint16_t lag_id : 3; + uint16_t pf_vfid : 11; + uint16_t ingress_tm_enable : 1; + uint16_t egress_tm_enable : 1; + + uint8_t hash_alg: 4; + uint8_t phy_port: 4; + + uint8_t rss_hash_factor; + + uint16_t tpid; + + uint16_t vhca : 10; + uint16_t uplink_port : 6; +#endif +}; + +struct zxdh_panel_table { + uint16_t port_vfid_1588 : 11, + rsv2 : 5; + uint16_t pf_vfid : 11, + rsv1 : 1, + enable_1588_tc : 2, + trust_mode : 1, + hit_flag : 1; + uint32_t mtu : 16, + mtu_enable : 1, + rsv : 3, + tm_base_queue : 12; + uint32_t rsv_1; + uint32_t rsv_2; +}; /* 16B */ + +int zxdh_port_attr_init(struct rte_eth_dev *dev); +int zxdh_panel_table_init(struct rte_eth_dev *dev); +int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); + +#endif /* ZXDH_TABLES_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 105367 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
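To see how the new write path is driven end to end: zxdh_set_port_attr() above wraps a vfid-indexed ZXDH_DTB_ERAM_ENTRY_INFO_T in a ZXDH_DTB_USER_ENTRY_T and pushes it through zxdh_np_dtb_table_entry_write(), which serializes the entry into a download-queue item and rings the DTB queue. A minimal caller sketch along the same lines, assuming an SDT number already configured as a 128-bit ERAM table (MY_SDT_NO is a placeholder for illustration, not a driver constant):

/* Hypothetical usage sketch for zxdh_np_dtb_table_entry_write(); the SDT
 * number and payload layout are assumptions, not driver definitions. */
#include <stdint.h>
#include "zxdh_np.h"

#define MY_SDT_NO 1	/* placeholder: an SDT set up as ZXDH_SDT_TBLT_ERAM, 128-bit mode */

static int write_one_eram_entry(uint32_t queue_id, uint32_t index)
{
	uint32_t data[4] = {0};	/* 128 bits of entry payload */
	ZXDH_DTB_ERAM_ENTRY_INFO_T eram = {
		.index = index,	/* entry index inside the ERAM table */
		.p_data = data,
	};
	ZXDH_DTB_USER_ENTRY_T entry = {
		.sdt_no = MY_SDT_NO,
		.p_entry_data = (void *)&eram,
	};

	/* dev_id 0, one entry; returns 0 on success, a ZXDH_RC_DTB_* code otherwise. */
	return zxdh_np_dtb_table_entry_write(0, queue_id, 1, &entry);
}

Because zxdh_np_dtb_table_entry_write() accumulates dtb_len across entries before calling zxdh_np_dtb_write_down_table_data(), several user entries can be batched into a single download-queue item by passing entrynum > 1 with an array of descriptors.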
* [PATCH v1 04/15] net/zxdh: port tables uninit implementations 2024-12-06 5:57 ` [PATCH v1 00/15] net/zxdh: updated " Junlong Wang ` (2 preceding siblings ...) 2024-12-06 5:57 ` [PATCH v1 03/15] net/zxdh: port tables init implementations Junlong Wang @ 2024-12-06 5:57 ` Junlong Wang 2024-12-06 5:57 ` [PATCH v1 05/15] net/zxdh: rx/tx queue setup and intr enable Junlong Wang ` (10 subsequent siblings) 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-06 5:57 UTC (permalink / raw) To: ferruh.yigit; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 8858 bytes --] Delete port tables in the host. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/zxdh_ethdev.c | 19 ++++ drivers/net/zxdh/zxdh_msg.h | 1 + drivers/net/zxdh/zxdh_np.c | 113 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_np.h | 9 +++ drivers/net/zxdh/zxdh_tables.c | 33 +++++++++- drivers/net/zxdh/zxdh_tables.h | 1 + 6 files changed, 175 insertions(+), 1 deletion(-) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 9f3a5bcf9c..63eac7781c 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -842,6 +842,19 @@ zxdh_dev_configure(struct rte_eth_dev *dev) return ret; } +static int +zxdh_tables_uninit(struct rte_eth_dev *dev) +{ + int ret = 0; + + ret = zxdh_port_attr_uninit(dev); + if (ret) { + PMD_DRV_LOG(ERR, "zxdh_port_attr_uninit failed"); + return ret; + } + return ret; +} + static void zxdh_np_dtb_data_res_free(struct zxdh_hw *hw) { @@ -893,6 +906,12 @@ zxdh_dev_close(struct rte_eth_dev *dev) struct zxdh_hw *hw = dev->data->dev_private; int ret = 0; + ret = zxdh_tables_uninit(dev); + if (ret != 0) { + PMD_DRV_LOG(ERR, "%s: uninit port %s failed", __func__, dev->device->name); + return -1; + } + zxdh_intr_release(dev); zxdh_np_uninit(dev); zxdh_pci_reset(hw); diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index 35ed5d1a1c..9997417f28 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -167,6 +167,7 @@ enum zxdh_msg_type { ZXDH_NULL = 0, ZXDH_VF_PORT_INIT = 1, + ZXDH_VF_PORT_UNINIT = 2, ZXDH_MSG_TYPE_END, } __rte_packed; diff --git a/drivers/net/zxdh/zxdh_np.c b/drivers/net/zxdh/zxdh_np.c index 6b8168da6f..242a6901ed 100644 --- a/drivers/net/zxdh/zxdh_np.c +++ b/drivers/net/zxdh/zxdh_np.c @@ -22,6 +22,7 @@ ZXDH_PPU_CLS_BITMAP_T g_ppu_cls_bit_map[ZXDH_DEV_CHANNEL_MAX]; ZXDH_DTB_MGR_T *p_dpp_dtb_mgr[ZXDH_DEV_CHANNEL_MAX] = {NULL}; ZXDH_RISCV_DTB_MGR *p_riscv_dtb_queue_mgr[ZXDH_DEV_CHANNEL_MAX] = {NULL}; ZXDH_TLB_MGR_T *g_p_dpp_tlb_mgr[ZXDH_DEV_CHANNEL_MAX] = {NULL}; +ZXDH_SDT_TBL_DATA_T g_sdt_info[ZXDH_DEV_CHANNEL_MAX][ZXDH_DEV_SDT_ID_MAX]; ZXDH_REG_T g_dpp_reg_info[4] = {0}; ZXDH_DTB_TABLE_T g_dpp_dtb_table_info[4] = {0}; @@ -1497,3 +1498,115 @@ zxdh_np_dtb_table_entry_write(uint32_t dev_id, return rc; } + +static uint32_t +zxdh_np_sdt_tbl_data_get(uint32_t dev_id, uint32_t sdt_no, ZXDH_SDT_TBL_DATA_T *p_sdt_data) +{ + uint32_t rc = 0; + + p_sdt_data->data_high32 = g_sdt_info[dev_id][sdt_no].data_high32; + p_sdt_data->data_low32 = g_sdt_info[dev_id][sdt_no].data_low32; + + return rc; +} + +int +zxdh_np_dtb_table_entry_delete(uint32_t dev_id, + uint32_t queue_id, + uint32_t entrynum, + ZXDH_DTB_USER_ENTRY_T *delete_entries) +{ + uint32_t rc = 0; + uint32_t entry_index = 0; + uint32_t sdt_no = 0; + uint32_t tbl_type = 0; + uint32_t element_id = 0xff; + uint32_t one_dtb_len = 0; + uint32_t dtb_len = 0; + uint32_t addr_offset 
= 0; + uint32_t max_size = 0; + uint8_t *p_data_buff = NULL; + uint8_t *p_data_buff_ex = NULL; + ZXDH_DTB_LPM_ENTRY_T lpm_entry = {0}; + + uint8_t entry_cmd[ZXDH_DTB_TABLE_CMD_SIZE_BIT / 8] = {0}; + uint8_t entry_data[ZXDH_ETCAM_WIDTH_MAX / 8] = {0}; + ZXDH_SDT_TBL_DATA_T sdt_tbl = {0}; + ZXDH_DTB_USER_ENTRY_T *pentry = NULL; + ZXDH_DTB_ENTRY_T dtb_one_entry = {0}; + + ZXDH_COMM_CHECK_POINT(delete_entries); + + p_data_buff = (uint8_t *)rte_malloc(NULL, ZXDH_DTB_TABLE_DATA_BUFF_SIZE, 0); + ZXDH_COMM_CHECK_POINT(p_data_buff); + memset(p_data_buff, 0, ZXDH_DTB_TABLE_DATA_BUFF_SIZE); + + p_data_buff_ex = + (uint8_t *)rte_malloc(NULL, ZXDH_DTB_TABLE_DATA_BUFF_SIZE * sizeof(uint8_t), 0); + ZXDH_COMM_CHECK_POINT_MEMORY_FREE(p_data_buff_ex, p_data_buff); + memset(p_data_buff_ex, 0, ZXDH_DTB_TABLE_DATA_BUFF_SIZE); + + memset((uint8_t *)&lpm_entry, 0x0, sizeof(ZXDH_DTB_LPM_ENTRY_T)); + + memset((uint8_t *)&dtb_one_entry, 0x0, sizeof(ZXDH_DTB_ENTRY_T)); + memset(entry_cmd, 0x0, sizeof(entry_cmd)); + memset(entry_data, 0x0, sizeof(entry_data)); + dtb_one_entry.cmd = entry_cmd; + dtb_one_entry.data = entry_data; + + max_size = (ZXDH_DTB_TABLE_DATA_BUFF_SIZE / 16) - 1; + + for (entry_index = 0; entry_index < entrynum; entry_index++) { + pentry = delete_entries + entry_index; + + sdt_no = pentry->sdt_no; + rc = zxdh_np_sdt_tbl_data_get(dev_id, sdt_no, &sdt_tbl); + tbl_type = (sdt_tbl.data_high32 >> ZXDH_SDT_H_TBL_TYPE_BT_POS) & + ZXDH_COMM_GET_BIT_MASK(uint32_t, ZXDH_SDT_H_TBL_TYPE_BT_LEN); + switch (tbl_type) { + case ZXDH_SDT_TBLT_ERAM: + { + rc = zxdh_np_dtb_eram_one_entry(dev_id, sdt_no, ZXDH_DTB_ITEM_DELETE, + pentry->p_entry_data, &one_dtb_len, &dtb_one_entry); + break; + } + + default: + { + PMD_DRV_LOG(ERR, "SDT table_type[ %d ] is invalid!", tbl_type); + rte_free(p_data_buff); + rte_free(p_data_buff_ex); + return 1; + } + } + + addr_offset = dtb_len * ZXDH_DTB_LEN_POS_SETP; + dtb_len += one_dtb_len; + if (dtb_len > max_size) { + rte_free(p_data_buff); + rte_free(p_data_buff_ex); + PMD_DRV_LOG(ERR, " %s error dtb_len>%u!", __func__, + max_size); + return ZXDH_RC_DTB_DOWN_LEN_INVALID; + } + + rc = zxdh_np_dtb_data_write(p_data_buff, addr_offset, &dtb_one_entry); + memset(entry_cmd, 0x0, sizeof(entry_cmd)); + memset(entry_data, 0x0, sizeof(entry_data)); + } + + if (dtb_len == 0) { + rte_free(p_data_buff); + rte_free(p_data_buff_ex); + return ZXDH_RC_DTB_DOWN_LEN_INVALID; + } + + rc = zxdh_np_dtb_write_down_table_data(dev_id, + queue_id, + dtb_len * 16, + p_data_buff, + &element_id); + rte_free(p_data_buff); + ZXDH_COMM_CHECK_RC_MEMORY_FREE_NO_ASSERT(rc, + "dpp_dtb_write_down_table_data", p_data_buff_ex); + + rte_free(p_data_buff_ex); + return 0; +} diff --git a/drivers/net/zxdh/zxdh_np.h b/drivers/net/zxdh/zxdh_np.h index 02c27df887..3cb9580254 100644 --- a/drivers/net/zxdh/zxdh_np.h +++ b/drivers/net/zxdh/zxdh_np.h @@ -54,6 +54,8 @@ #define ZXDH_ACL_TBL_ID_MAX (7) #define ZXDH_ACL_TBL_ID_NUM (8U) #define ZXDH_ACL_BLOCK_NUM (8U) +#define ZXDH_SDT_H_TBL_TYPE_BT_POS (29) +#define ZXDH_SDT_H_TBL_TYPE_BT_LEN (3) #define ZXDH_SMMU0_READ_REG_MAX_NUM (4) @@ -507,9 +509,16 @@ typedef struct zxdh_dtb_user_entry_t { void *p_entry_data; } ZXDH_DTB_USER_ENTRY_T; +typedef struct zxdh_sdt_tbl_data_t { + uint32_t data_high32; + uint32_t data_low32; +} ZXDH_SDT_TBL_DATA_T; + int zxdh_np_host_init(uint32_t dev_id, ZXDH_DEV_INIT_CTRL_T *p_dev_init_ctrl); int zxdh_np_online_uninit(uint32_t dev_id, char *port_name, uint32_t queue_id); int zxdh_np_dtb_table_entry_write(uint32_t dev_id, uint32_t queue_id, uint32_t entrynum, ZXDH_DTB_USER_ENTRY_T *down_entries); +int zxdh_np_dtb_table_entry_delete(uint32_t dev_id, uint32_t queue_id, + uint32_t entrynum, ZXDH_DTB_USER_ENTRY_T *delete_entries); #endif /* ZXDH_NP_H 
*/ diff --git a/drivers/net/zxdh/zxdh_tables.c b/drivers/net/zxdh/zxdh_tables.c index 4284fefe3a..e28823c657 100644 --- a/drivers/net/zxdh/zxdh_tables.c +++ b/drivers/net/zxdh/zxdh_tables.c @@ -70,7 +70,38 @@ zxdh_port_attr_init(struct rte_eth_dev *dev) return ret; }; -int zxdh_panel_table_init(struct rte_eth_dev *dev) +int +zxdh_port_attr_uninit(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_info msg_info = {0}; + struct zxdh_port_attr_table port_attr = {0}; + int ret = 0; + + if (hw->is_pf == 1) { + ZXDH_DTB_ERAM_ENTRY_INFO_T port_attr_entry = {hw->vfid, (uint32_t *)&port_attr}; + ZXDH_DTB_USER_ENTRY_T entry = { + .sdt_no = ZXDH_SDT_VPORT_ATT_TABLE, + .p_entry_data = (void *)&port_attr_entry + }; + ret = zxdh_np_dtb_table_entry_delete(ZXDH_DEVICE_NO, g_dtb_data.queueid, 1, &entry); + if (ret) { + PMD_DRV_LOG(ERR, "delete port attr table failed"); + return -ret; + } + } else { + zxdh_msg_head_build(hw, ZXDH_VF_PORT_UNINIT, &msg_info); + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "vf port tables uninit failed"); + return -ret; + } + } + return ret; +} + +int +zxdh_panel_table_init(struct rte_eth_dev *dev) { struct zxdh_hw *hw = dev->data->dev_private; diff --git a/drivers/net/zxdh/zxdh_tables.h b/drivers/net/zxdh/zxdh_tables.h index 5d34af2f05..5e9b36faee 100644 --- a/drivers/net/zxdh/zxdh_tables.h +++ b/drivers/net/zxdh/zxdh_tables.h @@ -144,5 +144,6 @@ struct zxdh_panel_table { int zxdh_port_attr_init(struct rte_eth_dev *dev); int zxdh_panel_table_init(struct rte_eth_dev *dev); int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); +int zxdh_port_attr_uninit(struct rte_eth_dev *dev); #endif /* ZXDH_TABLES_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 19388 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
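A note on zxdh_np_dtb_table_entry_delete() above: the switch keys on tbl_type, which is initialized to 0 and never assigned from the SDT data returned by zxdh_np_sdt_tbl_data_get(), so branch selection does not actually depend on the fetched entry. The patch does add the ZXDH_SDT_H_TBL_TYPE_BT_POS/LEN macros (a 3-bit field at bit 29 of the high word), which suggests the type is meant to be extracted from data_high32. A minimal sketch of that extraction, assuming the usual "LEN bits starting at POS" convention (the helper name is illustrative, not from the patch):

    /* Sketch only: derive the SDT table type from the high word using
     * the bit-field macros added to zxdh_np.h above. Assumes the field
     * occupies bits [POS + LEN - 1 : POS] of data_high32. */
    static inline uint32_t
    zxdh_sdt_tbl_type_get(const ZXDH_SDT_TBL_DATA_T *p_sdt_data)
    {
            return (p_sdt_data->data_high32 >> ZXDH_SDT_H_TBL_TYPE_BT_POS) &
                   ((1U << ZXDH_SDT_H_TBL_TYPE_BT_LEN) - 1);
    }

With such a helper, "tbl_type = zxdh_sdt_tbl_type_get(&sdt_tbl);" before the switch would make the ZXDH_SDT_TBLT_ERAM case depend on the SDT entry rather than on the zero initializer.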
* [PATCH v1 05/15] net/zxdh: rx/tx queue setup and intr enable 2024-12-06 5:57 ` [PATCH v1 00/15] net/zxdh: updated " Junlong Wang ` (3 preceding siblings ...) 2024-12-06 5:57 ` [PATCH v1 04/15] net/zxdh: port tables unint implementations Junlong Wang @ 2024-12-06 5:57 ` Junlong Wang 2024-12-06 5:57 ` [PATCH v1 06/15] net/zxdh: dev start/stop ops implementations Junlong Wang ` (9 subsequent siblings) 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-06 5:57 UTC (permalink / raw) To: ferruh.yigit; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 7822 bytes --] rx/tx queue setup and intr enable implementations. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/zxdh_ethdev.c | 4 + drivers/net/zxdh/zxdh_queue.c | 149 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_queue.h | 33 ++++++++ 3 files changed, 186 insertions(+) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 63eac7781c..f123e05ccf 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -934,6 +934,10 @@ static const struct eth_dev_ops zxdh_eth_dev_ops = { .dev_configure = zxdh_dev_configure, .dev_close = zxdh_dev_close, .dev_infos_get = zxdh_dev_infos_get, + .rx_queue_setup = zxdh_dev_rx_queue_setup, + .tx_queue_setup = zxdh_dev_tx_queue_setup, + .rx_queue_intr_enable = zxdh_dev_rx_queue_intr_enable, + .rx_queue_intr_disable = zxdh_dev_rx_queue_intr_disable, }; static int32_t diff --git a/drivers/net/zxdh/zxdh_queue.c b/drivers/net/zxdh/zxdh_queue.c index b4ef90ea36..af21f046ad 100644 --- a/drivers/net/zxdh/zxdh_queue.c +++ b/drivers/net/zxdh/zxdh_queue.c @@ -12,6 +12,11 @@ #include "zxdh_common.h" #include "zxdh_msg.h" +#define ZXDH_MBUF_MIN_SIZE sizeof(struct zxdh_net_hdr_dl) +#define ZXDH_MBUF_SIZE_4K 4096 +#define ZXDH_RX_FREE_THRESH 32 +#define ZXDH_TX_FREE_THRESH 32 + struct rte_mbuf * zxdh_queue_detach_unused(struct zxdh_virtqueue *vq) { @@ -125,3 +130,147 @@ zxdh_free_queues(struct rte_eth_dev *dev) return 0; } + +static int +zxdh_check_mempool(struct rte_mempool *mp, uint16_t offset, uint16_t min_length) +{ + uint16_t data_room_size; + + if (mp == NULL) + return -EINVAL; + data_room_size = rte_pktmbuf_data_room_size(mp); + if (data_room_size < offset + min_length) { + PMD_RX_LOG(ERR, + "%s mbuf_data_room_size %u < %u (%u + %u)", + mp->name, data_room_size, + offset + min_length, offset, min_length); + return -EINVAL; + } + return 0; +} + +int32_t +zxdh_dev_rx_queue_setup(struct rte_eth_dev *dev, + uint16_t queue_idx, + uint16_t nb_desc, + uint32_t socket_id __rte_unused, + const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mp) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint16_t vtpci_logic_qidx = 2 * queue_idx + ZXDH_RQ_QUEUE_IDX; + struct zxdh_virtqueue *vq = hw->vqs[vtpci_logic_qidx]; + int32_t ret = 0; + + if (rx_conf->rx_deferred_start) { + PMD_RX_LOG(ERR, "Rx deferred start is not supported"); + return -EINVAL; + } + uint16_t rx_free_thresh = rx_conf->rx_free_thresh; + + if (rx_free_thresh == 0) + rx_free_thresh = RTE_MIN(vq->vq_nentries / 4, ZXDH_RX_FREE_THRESH); + + /* rx_free_thresh must be multiples of four. */ + if (rx_free_thresh & 0x3) { + PMD_RX_LOG(ERR, "(rx_free_thresh=%u port=%u queue=%u)", + rx_free_thresh, dev->data->port_id, queue_idx); + return -EINVAL; + } + /* rx_free_thresh must be less than the number of RX entries */ + if (rx_free_thresh >= vq->vq_nentries) { + PMD_RX_LOG(ERR, "RX entries (%u). 
(rx_free_thresh=%u port=%u queue=%u)", + vq->vq_nentries, rx_free_thresh, dev->data->port_id, queue_idx); + return -EINVAL; + } + vq->vq_free_thresh = rx_free_thresh; + nb_desc = ZXDH_QUEUE_DEPTH; + + vq->vq_free_cnt = RTE_MIN(vq->vq_free_cnt, nb_desc); + struct zxdh_virtnet_rx *rxvq = &vq->rxq; + + rxvq->queue_id = vtpci_logic_qidx; + + int mbuf_min_size = ZXDH_MBUF_MIN_SIZE; + + if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) + mbuf_min_size = ZXDH_MBUF_SIZE_4K; + + ret = zxdh_check_mempool(mp, RTE_PKTMBUF_HEADROOM, mbuf_min_size); + if (ret != 0) { + PMD_RX_LOG(ERR, + "rxq setup but mpool size too small(<%d) failed", mbuf_min_size); + return -EINVAL; + } + rxvq->mpool = mp; + if (queue_idx < dev->data->nb_rx_queues) + dev->data->rx_queues[queue_idx] = rxvq; + + return 0; +} + +int32_t +zxdh_dev_tx_queue_setup(struct rte_eth_dev *dev, + uint16_t queue_idx, + uint16_t nb_desc, + uint32_t socket_id __rte_unused, + const struct rte_eth_txconf *tx_conf) +{ + uint16_t vtpci_logic_qidx = 2 * queue_idx + ZXDH_TQ_QUEUE_IDX; + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_virtqueue *vq = hw->vqs[vtpci_logic_qidx]; + struct zxdh_virtnet_tx *txvq = NULL; + uint16_t tx_free_thresh = 0; + + if (tx_conf->tx_deferred_start) { + PMD_TX_LOG(ERR, "Tx deferred start is not supported"); + return -EINVAL; + } + + nb_desc = ZXDH_QUEUE_DEPTH; + + vq->vq_free_cnt = RTE_MIN(vq->vq_free_cnt, nb_desc); + + txvq = &vq->txq; + txvq->queue_id = vtpci_logic_qidx; + + tx_free_thresh = tx_conf->tx_free_thresh; + if (tx_free_thresh == 0) + tx_free_thresh = RTE_MIN(vq->vq_nentries / 4, ZXDH_TX_FREE_THRESH); + + /* tx_free_thresh must be less than the number of TX entries minus 3 */ + if (tx_free_thresh >= (vq->vq_nentries - 3)) { + PMD_TX_LOG(ERR, "TX entries - 3 (%u). (tx_free_thresh=%u port=%u queue=%u)", + vq->vq_nentries - 3, tx_free_thresh, dev->data->port_id, queue_idx); + return -EINVAL; + } + + vq->vq_free_thresh = tx_free_thresh; + + if (queue_idx < dev->data->nb_tx_queues) + dev->data->tx_queues[queue_idx] = txvq; + + return 0; +} + +int32_t +zxdh_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_virtnet_rx *rxvq = dev->data->rx_queues[queue_id]; + struct zxdh_virtqueue *vq = rxvq->vq; + + zxdh_queue_enable_intr(vq); + zxdh_mb(hw->weak_barriers); + return 0; +} + +int32_t +zxdh_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id) +{ + struct zxdh_virtnet_rx *rxvq = dev->data->rx_queues[queue_id]; + struct zxdh_virtqueue *vq = rxvq->vq; + + zxdh_queue_disable_intr(vq); + return 0; +} diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h index 1304d5e4ea..2f602d894f 100644 --- a/drivers/net/zxdh/zxdh_queue.h +++ b/drivers/net/zxdh/zxdh_queue.h @@ -8,6 +8,7 @@ #include <stdint.h> #include <rte_common.h> +#include <rte_atomic.h> #include "zxdh_ethdev.h" #include "zxdh_rxtx.h" @@ -30,6 +31,7 @@ enum { ZXDH_VTNET_RQ = 0, ZXDH_VTNET_TQ = 1 }; #define ZXDH_RING_EVENT_FLAGS_DESC 0x2 #define ZXDH_VQ_RING_DESC_CHAIN_END 32768 +#define ZXDH_QUEUE_DEPTH 1024 /* * ring descriptors: 16 bytes. 
@@ -270,8 +272,39 @@ zxdh_queue_disable_intr(struct zxdh_virtqueue *vq) } } +static inline void +zxdh_queue_enable_intr(struct zxdh_virtqueue *vq) +{ + if (vq->vq_packed.event_flags_shadow == ZXDH_RING_EVENT_FLAGS_DISABLE) { + vq->vq_packed.event_flags_shadow = ZXDH_RING_EVENT_FLAGS_DISABLE; + vq->vq_packed.ring.driver->desc_event_flags = vq->vq_packed.event_flags_shadow; + } +} + +static inline void +zxdh_mb(uint8_t weak_barriers) +{ + if (weak_barriers) + rte_atomic_thread_fence(rte_memory_order_seq_cst); + else + rte_mb(); +} + struct rte_mbuf *zxdh_queue_detach_unused(struct zxdh_virtqueue *vq); int32_t zxdh_free_queues(struct rte_eth_dev *dev); int32_t zxdh_get_queue_type(uint16_t vtpci_queue_idx); +int32_t zxdh_dev_tx_queue_setup(struct rte_eth_dev *dev, + uint16_t queue_idx, + uint16_t nb_desc, + uint32_t socket_id __rte_unused, + const struct rte_eth_txconf *tx_conf); +int32_t zxdh_dev_rx_queue_setup(struct rte_eth_dev *dev, + uint16_t queue_idx, + uint16_t nb_desc, + uint32_t socket_id __rte_unused, + const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mp); +int32_t zxdh_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id); +int32_t zxdh_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id); #endif /* ZXDH_QUEUE_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 17339 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
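A note on the queue setup above: zxdh_check_mempool() requires the mempool's data room to cover the headroom plus a minimum payload, and requesting RTE_ETH_RX_OFFLOAD_TCP_LRO raises that minimum to 4 KB (ZXDH_MBUF_SIZE_4K, a file-local define in zxdh_queue.c). The rx_free_thresh from rx_conf must additionally be a multiple of four and smaller than the ring size, and both setup functions override the caller's nb_desc with ZXDH_QUEUE_DEPTH (1024), so the requested descriptor count is effectively fixed. A caller-side sketch of a pool that satisfies the LRO check (pool name and element counts are illustrative only):

    /* Sketch: an rx mempool sized so zxdh_check_mempool() passes when
     * RTE_ETH_RX_OFFLOAD_TCP_LRO is requested, i.e. data room of at
     * least RTE_PKTMBUF_HEADROOM + 4096 bytes. */
    struct rte_mempool *mp = rte_pktmbuf_pool_create("zxdh_rx_pool",
                    8192,                           /* number of mbufs */
                    256,                            /* per-lcore cache */
                    0,                              /* private area size */
                    RTE_PKTMBUF_HEADROOM + 4096,    /* data room size */
                    rte_socket_id());
    if (mp == NULL)
            rte_exit(EXIT_FAILURE, "rx pool allocation failed");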
* [PATCH v1 06/15] net/zxdh: dev start/stop ops implementations 2024-12-06 5:57 ` [PATCH v1 00/15] net/zxdh: updated " Junlong Wang ` (4 preceding siblings ...) 2024-12-06 5:57 ` [PATCH v1 05/15] net/zxdh: rx/tx queue setup and intr enable Junlong Wang @ 2024-12-06 5:57 ` Junlong Wang 2024-12-06 5:57 ` [PATCH v1 07/15] net/zxdh: provided dev simple tx implementations Junlong Wang ` (8 subsequent siblings) 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-06 5:57 UTC (permalink / raw) To: ferruh.yigit; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 12191 bytes --] dev start/stop implementations, start/stop the rx/tx queues. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- doc/guides/nics/features/zxdh.ini | 2 + doc/guides/nics/zxdh.rst | 2 + drivers/net/zxdh/zxdh_ethdev.c | 61 ++++++++++++++++++++ drivers/net/zxdh/zxdh_pci.c | 24 ++++++++ drivers/net/zxdh/zxdh_pci.h | 1 + drivers/net/zxdh/zxdh_queue.c | 93 +++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_queue.h | 68 ++++++++++++++++++++++ 7 files changed, 251 insertions(+) diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index 05c8091ed7..874541c589 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -7,3 +7,5 @@ Linux = Y x86-64 = Y ARMv8 = Y +SR-IOV = Y +Multiprocess = Y diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index 2144753d75..eb970a888f 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -18,6 +18,8 @@ Features Features of the ZXDH PMD are: - Multi arch support: x86_64, ARMv8. +- Multiple queues for TX and RX +- SR-IOV VF Driver compilation and testing diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index f123e05ccf..a9c0d083fe 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -900,12 +900,35 @@ zxdh_np_uninit(struct rte_eth_dev *dev) zxdh_np_dtb_data_res_free(hw); } +static int +zxdh_dev_stop(struct rte_eth_dev *dev) +{ + int ret = 0; + + if (dev->data->dev_started == 0) + return 0; + + ret = zxdh_intr_disable(dev); + if (ret) { + PMD_DRV_LOG(ERR, "intr disable failed"); + return -1; + } + + return 0; +} + static int zxdh_dev_close(struct rte_eth_dev *dev) { struct zxdh_hw *hw = dev->data->dev_private; int ret = 0; + ret = zxdh_dev_stop(dev); + if (ret != 0) { + PMD_DRV_LOG(ERR, "%s stop port %s failed.", __func__, dev->device->name); + return -1; + } + ret = zxdh_tables_uninit(dev); if (ret != 0) { PMD_DRV_LOG(ERR, "%s :unint port %s failed ", __func__, dev->device->name); @@ -929,9 +952,47 @@ zxdh_dev_close(struct rte_eth_dev *dev) return ret; } +static int +zxdh_dev_start(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_virtqueue *vq; + int32_t ret; + uint16_t logic_qidx; + uint16_t i; + + for (i = 0; i < dev->data->nb_rx_queues; i++) { + logic_qidx = 2 * i + ZXDH_RQ_QUEUE_IDX; + ret = zxdh_dev_rx_queue_setup_finish(dev, logic_qidx); + if (ret < 0) + return ret; + } + ret = zxdh_intr_enable(dev); + if (ret) { + PMD_DRV_LOG(ERR, "interrupt enable failed"); + return -EIO; + } + + for (i = 0; i < dev->data->nb_rx_queues; i++) { + logic_qidx = 2 * i + ZXDH_RQ_QUEUE_IDX; + vq = hw->vqs[logic_qidx]; + /* Flush the old packets */ + zxdh_queue_rxvq_flush(vq); + zxdh_queue_notify(vq); + } + for (i = 0; i < dev->data->nb_tx_queues; i++) { + logic_qidx = 2 * i + ZXDH_TQ_QUEUE_IDX; + vq = hw->vqs[logic_qidx]; + zxdh_queue_notify(vq); + } + return 0; +} + /* 
dev_ops for zxdh, bare necessities for basic operation */ static const struct eth_dev_ops zxdh_eth_dev_ops = { .dev_configure = zxdh_dev_configure, + .dev_start = zxdh_dev_start, + .dev_stop = zxdh_dev_stop, .dev_close = zxdh_dev_close, .dev_infos_get = zxdh_dev_infos_get, .rx_queue_setup = zxdh_dev_rx_queue_setup, diff --git a/drivers/net/zxdh/zxdh_pci.c b/drivers/net/zxdh/zxdh_pci.c index 250e67d560..83164a5c79 100644 --- a/drivers/net/zxdh/zxdh_pci.c +++ b/drivers/net/zxdh/zxdh_pci.c @@ -202,6 +202,29 @@ zxdh_del_queue(struct zxdh_hw *hw, struct zxdh_virtqueue *vq) rte_write16(0, &hw->common_cfg->queue_enable); } +static void +zxdh_notify_queue(struct zxdh_hw *hw, struct zxdh_virtqueue *vq) +{ + uint32_t notify_data = 0; + + if (!zxdh_pci_with_feature(hw, ZXDH_F_NOTIFICATION_DATA)) { + rte_write16(vq->vq_queue_index, vq->notify_addr); + return; + } + + if (zxdh_pci_with_feature(hw, ZXDH_F_RING_PACKED)) { + notify_data = ((uint32_t)(!!(vq->vq_packed.cached_flags & + ZXDH_VRING_PACKED_DESC_F_AVAIL)) << 31) | + ((uint32_t)vq->vq_avail_idx << 16) | + vq->vq_queue_index; + } else { + notify_data = ((uint32_t)vq->vq_avail_idx << 16) | vq->vq_queue_index; + } + PMD_DRV_LOG(DEBUG, "queue:%d notify_data 0x%x notify_addr 0x%p", + vq->vq_queue_index, notify_data, vq->notify_addr); + rte_write32(notify_data, vq->notify_addr); +} + const struct zxdh_pci_ops zxdh_dev_pci_ops = { .read_dev_cfg = zxdh_read_dev_config, .write_dev_cfg = zxdh_write_dev_config, @@ -216,6 +239,7 @@ const struct zxdh_pci_ops zxdh_dev_pci_ops = { .set_queue_num = zxdh_set_queue_num, .setup_queue = zxdh_setup_queue, .del_queue = zxdh_del_queue, + .notify_queue = zxdh_notify_queue, }; uint8_t diff --git a/drivers/net/zxdh/zxdh_pci.h b/drivers/net/zxdh/zxdh_pci.h index e3f13cb17d..5c5f72b90e 100644 --- a/drivers/net/zxdh/zxdh_pci.h +++ b/drivers/net/zxdh/zxdh_pci.h @@ -144,6 +144,7 @@ struct zxdh_pci_ops { int32_t (*setup_queue)(struct zxdh_hw *hw, struct zxdh_virtqueue *vq); void (*del_queue)(struct zxdh_hw *hw, struct zxdh_virtqueue *vq); + void (*notify_queue)(struct zxdh_hw *hw, struct zxdh_virtqueue *vq); }; struct zxdh_hw_internal { diff --git a/drivers/net/zxdh/zxdh_queue.c b/drivers/net/zxdh/zxdh_queue.c index af21f046ad..d45fd78dad 100644 --- a/drivers/net/zxdh/zxdh_queue.c +++ b/drivers/net/zxdh/zxdh_queue.c @@ -274,3 +274,96 @@ zxdh_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id) zxdh_queue_disable_intr(vq); return 0; } + +int32_t +zxdh_enqueue_recv_refill_packed(struct zxdh_virtqueue *vq, struct rte_mbuf **cookie, uint16_t num) +{ + struct zxdh_vring_packed_desc *start_dp = vq->vq_packed.ring.desc; + struct zxdh_hw *hw = vq->hw; + struct zxdh_vq_desc_extra *dxp; + uint16_t flags = vq->vq_packed.cached_flags; + int32_t i; + uint16_t idx; + + for (i = 0; i < num; i++) { + idx = vq->vq_avail_idx; + dxp = &vq->vq_descx[idx]; + dxp->cookie = (void *)cookie[i]; + dxp->ndescs = 1; + /* rx pkt fill in data_off */ + start_dp[idx].addr = rte_mbuf_iova_get(cookie[i]) + RTE_PKTMBUF_HEADROOM; + start_dp[idx].len = cookie[i]->buf_len - RTE_PKTMBUF_HEADROOM; + vq->vq_desc_head_idx = dxp->next; + if (vq->vq_desc_head_idx == ZXDH_VQ_RING_DESC_CHAIN_END) + vq->vq_desc_tail_idx = vq->vq_desc_head_idx; + zxdh_queue_store_flags_packed(&start_dp[idx], flags, hw->weak_barriers); + if (++vq->vq_avail_idx >= vq->vq_nentries) { + vq->vq_avail_idx -= vq->vq_nentries; + vq->vq_packed.cached_flags ^= ZXDH_VRING_PACKED_DESC_F_AVAIL_USED; + flags = vq->vq_packed.cached_flags; + } + } + vq->vq_free_cnt = 
(uint16_t)(vq->vq_free_cnt - num); + return 0; +} + +int32_t +zxdh_dev_rx_queue_setup_finish(struct rte_eth_dev *dev, uint16_t logic_qidx) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_virtqueue *vq = hw->vqs[logic_qidx]; + struct zxdh_virtnet_rx *rxvq = &vq->rxq; + uint16_t desc_idx; + int32_t error = 0; + + /* Allocate blank mbufs for the each rx descriptor */ + memset(&rxvq->fake_mbuf, 0, sizeof(rxvq->fake_mbuf)); + for (desc_idx = 0; desc_idx < ZXDH_MBUF_BURST_SZ; desc_idx++) + vq->sw_ring[vq->vq_nentries + desc_idx] = &rxvq->fake_mbuf; + + while (!zxdh_queue_full(vq)) { + uint16_t free_cnt = vq->vq_free_cnt; + + free_cnt = RTE_MIN(ZXDH_MBUF_BURST_SZ, free_cnt); + struct rte_mbuf *new_pkts[free_cnt]; + + if (likely(rte_pktmbuf_alloc_bulk(rxvq->mpool, new_pkts, free_cnt) == 0)) { + error = zxdh_enqueue_recv_refill_packed(vq, new_pkts, free_cnt); + if (unlikely(error)) { + int32_t i; + for (i = 0; i < free_cnt; i++) + rte_pktmbuf_free(new_pkts[i]); + } + } else { + PMD_DRV_LOG(ERR, "port %d rxq %d allocated bufs from %s failed", + hw->port_id, logic_qidx, rxvq->mpool->name); + break; + } + } + return 0; +} + +void +zxdh_queue_rxvq_flush(struct zxdh_virtqueue *vq) +{ + struct zxdh_vq_desc_extra *dxp = NULL; + uint16_t i = 0; + struct zxdh_vring_packed_desc *descs = vq->vq_packed.ring.desc; + int32_t cnt = 0; + + i = vq->vq_used_cons_idx; + while (zxdh_desc_used(&descs[i], vq) && cnt++ < vq->vq_nentries) { + dxp = &vq->vq_descx[descs[i].id]; + if (dxp->cookie != NULL) { + rte_pktmbuf_free(dxp->cookie); + dxp->cookie = NULL; + } + vq->vq_free_cnt++; + vq->vq_used_cons_idx++; + if (vq->vq_used_cons_idx >= vq->vq_nentries) { + vq->vq_used_cons_idx -= vq->vq_nentries; + vq->vq_packed.used_wrap_counter ^= 1; + } + i = vq->vq_used_cons_idx; + } +} diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h index 2f602d894f..343ab60c1a 100644 --- a/drivers/net/zxdh/zxdh_queue.h +++ b/drivers/net/zxdh/zxdh_queue.h @@ -25,6 +25,11 @@ enum { ZXDH_VTNET_RQ = 0, ZXDH_VTNET_TQ = 1 }; #define ZXDH_VRING_DESC_F_WRITE 2 /* This flag means the descriptor was made available by the driver */ #define ZXDH_VRING_PACKED_DESC_F_AVAIL (1 << (7)) +#define ZXDH_VRING_PACKED_DESC_F_USED (1 << (15)) + +/* Frequently used combinations */ +#define ZXDH_VRING_PACKED_DESC_F_AVAIL_USED \ + (ZXDH_VRING_PACKED_DESC_F_AVAIL | ZXDH_VRING_PACKED_DESC_F_USED) #define ZXDH_RING_EVENT_FLAGS_ENABLE 0x0 #define ZXDH_RING_EVENT_FLAGS_DISABLE 0x1 @@ -32,6 +37,8 @@ enum { ZXDH_VTNET_RQ = 0, ZXDH_VTNET_TQ = 1 }; #define ZXDH_VQ_RING_DESC_CHAIN_END 32768 #define ZXDH_QUEUE_DEPTH 1024 +#define ZXDH_RQ_QUEUE_IDX 0 +#define ZXDH_TQ_QUEUE_IDX 1 /* * ring descriptors: 16 bytes. 
@@ -290,6 +297,63 @@ zxdh_mb(uint8_t weak_barriers) rte_mb(); } +static inline int32_t +zxdh_queue_full(const struct zxdh_virtqueue *vq) +{ + return (vq->vq_free_cnt == 0); +} + +static inline void +zxdh_queue_store_flags_packed(struct zxdh_vring_packed_desc *dp, + uint16_t flags, uint8_t weak_barriers) +{ + if (weak_barriers) { + #ifdef RTE_ARCH_X86_64 + rte_io_wmb(); + dp->flags = flags; + #else + rte_atomic_store_explicit(&dp->flags, flags, rte_memory_order_release); + #endif + } else { + rte_io_wmb(); + dp->flags = flags; + } +} + +static inline uint16_t +zxdh_queue_fetch_flags_packed(struct zxdh_vring_packed_desc *dp, + uint8_t weak_barriers) +{ + uint16_t flags; + if (weak_barriers) { + #ifdef RTE_ARCH_X86_64 + flags = dp->flags; + rte_io_rmb(); + #else + flags = rte_atomic_load_explicit(&dp->flags, rte_memory_order_acquire); + #endif + } else { + flags = dp->flags; + rte_io_rmb(); + } + + return flags; +} + +static inline int32_t +zxdh_desc_used(struct zxdh_vring_packed_desc *desc, struct zxdh_virtqueue *vq) +{ + uint16_t flags = zxdh_queue_fetch_flags_packed(desc, vq->hw->weak_barriers); + uint16_t used = !!(flags & ZXDH_VRING_PACKED_DESC_F_USED); + uint16_t avail = !!(flags & ZXDH_VRING_PACKED_DESC_F_AVAIL); + return avail == used && used == vq->vq_packed.used_wrap_counter; +} + +static inline void zxdh_queue_notify(struct zxdh_virtqueue *vq) +{ + ZXDH_VTPCI_OPS(vq->hw)->notify_queue(vq->hw, vq); +} + struct rte_mbuf *zxdh_queue_detach_unused(struct zxdh_virtqueue *vq); int32_t zxdh_free_queues(struct rte_eth_dev *dev); int32_t zxdh_get_queue_type(uint16_t vtpci_queue_idx); @@ -306,5 +370,9 @@ int32_t zxdh_dev_rx_queue_setup(struct rte_eth_dev *dev, struct rte_mempool *mp); int32_t zxdh_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id); int32_t zxdh_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id); +int32_t zxdh_dev_rx_queue_setup_finish(struct rte_eth_dev *dev, uint16_t logic_qidx); +void zxdh_queue_rxvq_flush(struct zxdh_virtqueue *vq); +int32_t zxdh_enqueue_recv_refill_packed(struct zxdh_virtqueue *vq, + struct rte_mbuf **cookie, uint16_t num); #endif /* ZXDH_QUEUE_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 27567 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
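A note on zxdh_notify_queue() above: when ZXDH_F_NOTIFICATION_DATA is negotiated, the doorbell write carries a 32-bit payload instead of the bare queue index; for packed rings it packs the queue index, the next available descriptor index and the avail wrap flag into one word. A decoding sketch for illustration (the helper is hypothetical; only the device side actually consumes this layout):

    /* Sketch: unpack the packed-ring doorbell value built in
     * zxdh_notify_queue():
     *   bits [15:0]  - vq_queue_index
     *   bits [30:16] - vq_avail_idx (next available descriptor)
     *   bit  31      - ZXDH_VRING_PACKED_DESC_F_AVAIL wrap flag
     */
    static inline void
    zxdh_decode_notify_data(uint32_t notify_data, uint16_t *queue_index,
                    uint16_t *avail_off, uint8_t *avail_wrap)
    {
            *queue_index = notify_data & 0xFFFF;
            *avail_off = (notify_data >> 16) & 0x7FFF;
            *avail_wrap = (uint8_t)(notify_data >> 31);
    }

Without ZXDH_F_RING_PACKED the payload degenerates to the avail index in the high half and the queue index in the low half, matching the else branch above.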
* [PATCH v1 07/15] net/zxdh: provided dev simple tx implementations 2024-12-06 5:57 ` [PATCH v1 00/15] net/zxdh: updated " Junlong Wang ` (5 preceding siblings ...) 2024-12-06 5:57 ` [PATCH v1 06/15] net/zxdh: dev start/stop ops implementations Junlong Wang @ 2024-12-06 5:57 ` Junlong Wang 2024-12-06 5:57 ` [PATCH v1 08/15] net/zxdh: provided dev simple rx implementations Junlong Wang ` (7 subsequent siblings) 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-06 5:57 UTC (permalink / raw) To: ferruh.yigit; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 18410 bytes --] provided dev simple tx implementations. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/meson.build | 1 + drivers/net/zxdh/zxdh_ethdev.c | 20 ++ drivers/net/zxdh/zxdh_queue.h | 25 +++ drivers/net/zxdh/zxdh_rxtx.c | 395 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_rxtx.h | 4 + 5 files changed, 445 insertions(+) create mode 100644 drivers/net/zxdh/zxdh_rxtx.c diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build index 5b3af87c5b..20b2cf484a 100644 --- a/drivers/net/zxdh/meson.build +++ b/drivers/net/zxdh/meson.build @@ -21,4 +21,5 @@ sources = files( 'zxdh_queue.c', 'zxdh_np.c', 'zxdh_tables.c', + 'zxdh_rxtx.c', ) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index a9c0d083fe..c32de633db 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -15,6 +15,7 @@ #include "zxdh_queue.h" #include "zxdh_np.h" #include "zxdh_tables.h" +#include "zxdh_rxtx.h" struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; struct zxdh_shared_data *zxdh_shared_data; @@ -952,6 +953,24 @@ zxdh_dev_close(struct rte_eth_dev *dev) return ret; } +static int32_t +zxdh_set_rxtx_funcs(struct rte_eth_dev *eth_dev) +{ + struct zxdh_hw *hw = eth_dev->data->dev_private; + + if (!zxdh_pci_packed_queue(hw)) { + PMD_DRV_LOG(ERR, " port %u not support packed queue", eth_dev->data->port_id); + return -1; + } + if (!zxdh_pci_with_feature(hw, ZXDH_NET_F_MRG_RXBUF)) { + PMD_DRV_LOG(ERR, " port %u not support rx mergeable", eth_dev->data->port_id); + return -1; + } + eth_dev->tx_pkt_prepare = zxdh_xmit_pkts_prepare; + eth_dev->tx_pkt_burst = &zxdh_xmit_pkts_packed; + return 0; +} + static int zxdh_dev_start(struct rte_eth_dev *dev) { @@ -967,6 +986,7 @@ zxdh_dev_start(struct rte_eth_dev *dev) if (ret < 0) return ret; } + zxdh_set_rxtx_funcs(dev); ret = zxdh_intr_enable(dev); if (ret) { PMD_DRV_LOG(ERR, "interrupt enable failed"); diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h index 343ab60c1a..1bd292e235 100644 --- a/drivers/net/zxdh/zxdh_queue.h +++ b/drivers/net/zxdh/zxdh_queue.h @@ -21,6 +21,15 @@ enum { ZXDH_VTNET_RQ = 0, ZXDH_VTNET_TQ = 1 }; #define ZXDH_TQ_QUEUE_IDX 1 #define ZXDH_MAX_TX_INDIRECT 8 +/* This marks a buffer as continuing via the next field. */ +#define ZXDH_VRING_DESC_F_NEXT 1 + +/* This marks a buffer as write-only (otherwise read-only). */ +#define ZXDH_VRING_DESC_F_WRITE 2 + +/* This means the buffer contains a list of buffer descriptors. */ +#define ZXDH_VRING_DESC_F_INDIRECT 4 + /* This marks a buffer as write-only (otherwise read-only). 
*/ #define ZXDH_VRING_DESC_F_WRITE 2 /* This flag means the descriptor was made available by the driver */ @@ -34,11 +43,16 @@ enum { ZXDH_VTNET_RQ = 0, ZXDH_VTNET_TQ = 1 }; #define ZXDH_RING_EVENT_FLAGS_ENABLE 0x0 #define ZXDH_RING_EVENT_FLAGS_DISABLE 0x1 #define ZXDH_RING_EVENT_FLAGS_DESC 0x2 +#define ZXDH_RING_F_INDIRECT_DESC 28 #define ZXDH_VQ_RING_DESC_CHAIN_END 32768 #define ZXDH_QUEUE_DEPTH 1024 #define ZXDH_RQ_QUEUE_IDX 0 #define ZXDH_TQ_QUEUE_IDX 1 +#define ZXDH_TYPE_HDR_SIZE sizeof(struct zxdh_type_hdr) +#define ZXDH_PI_HDR_SIZE sizeof(struct zxdh_pi_hdr) +#define ZXDH_DL_NET_HDR_SIZE sizeof(struct zxdh_net_hdr_dl) +#define ZXDH_UL_NET_HDR_SIZE sizeof(struct zxdh_net_hdr_ul) /* * ring descriptors: 16 bytes. @@ -354,6 +368,17 @@ static inline void zxdh_queue_notify(struct zxdh_virtqueue *vq) ZXDH_VTPCI_OPS(vq->hw)->notify_queue(vq->hw, vq); } +static inline int32_t +zxdh_queue_kick_prepare_packed(struct zxdh_virtqueue *vq) +{ + uint16_t flags = 0; + + zxdh_mb(vq->hw->weak_barriers); + flags = vq->vq_packed.ring.device->desc_event_flags; + + return (flags != ZXDH_RING_EVENT_FLAGS_DISABLE); +} + struct rte_mbuf *zxdh_queue_detach_unused(struct zxdh_virtqueue *vq); int32_t zxdh_free_queues(struct rte_eth_dev *dev); int32_t zxdh_get_queue_type(uint16_t vtpci_queue_idx); diff --git a/drivers/net/zxdh/zxdh_rxtx.c b/drivers/net/zxdh/zxdh_rxtx.c new file mode 100644 index 0000000000..01e9b19798 --- /dev/null +++ b/drivers/net/zxdh/zxdh_rxtx.c @@ -0,0 +1,395 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#include <stdint.h> +#include <stdalign.h> + +#include <rte_net.h> + +#include "zxdh_logs.h" +#include "zxdh_pci.h" +#include "zxdh_queue.h" + +#define ZXDH_PKT_FORM_CPU 0x20 /* 1-cpu 0-np */ +#define ZXDH_NO_IP_FRAGMENT 0x2000 /* ip fragment flag */ +#define ZXDH_NO_IPID_UPDATE 0x4000 /* ipid update flag */ + +#define ZXDH_PI_L3TYPE_IP 0x00 +#define ZXDH_PI_L3TYPE_IPV6 0x40 +#define ZXDH_PI_L3TYPE_NOIP 0x80 +#define ZXDH_PI_L3TYPE_RSV 0xC0 +#define ZXDH_PI_L3TYPE_MASK 0xC0 + +#define ZXDH_PCODE_MASK 0x1F +#define ZXDH_PCODE_IP_PKT_TYPE 0x01 +#define ZXDH_PCODE_TCP_PKT_TYPE 0x02 +#define ZXDH_PCODE_UDP_PKT_TYPE 0x03 +#define ZXDH_PCODE_NO_IP_PKT_TYPE 0x09 +#define ZXDH_PCODE_NO_REASSMBLE_TCP_PKT_TYPE 0x0C + +#define ZXDH_TX_MAX_SEGS 31 +#define ZXDH_RX_MAX_SEGS 31 + +static void +zxdh_xmit_cleanup_inorder_packed(struct zxdh_virtqueue *vq, int32_t num) +{ + uint16_t used_idx = 0; + uint16_t id = 0; + uint16_t curr_id = 0; + uint16_t free_cnt = 0; + uint16_t size = vq->vq_nentries; + struct zxdh_vring_packed_desc *desc = vq->vq_packed.ring.desc; + struct zxdh_vq_desc_extra *dxp = NULL; + + used_idx = vq->vq_used_cons_idx; + /* desc_is_used has a load-acquire or rte_io_rmb inside + * and wait for used desc in virtqueue. 
+ */ + while (num > 0 && zxdh_desc_used(&desc[used_idx], vq)) { + id = desc[used_idx].id; + do { + curr_id = used_idx; + dxp = &vq->vq_descx[used_idx]; + used_idx += dxp->ndescs; + free_cnt += dxp->ndescs; + num -= dxp->ndescs; + if (used_idx >= size) { + used_idx -= size; + vq->vq_packed.used_wrap_counter ^= 1; + } + if (dxp->cookie != NULL) { + rte_pktmbuf_free(dxp->cookie); + dxp->cookie = NULL; + } + } while (curr_id != id); + } + vq->vq_used_cons_idx = used_idx; + vq->vq_free_cnt += free_cnt; +} + +static void +zxdh_ring_free_id_packed(struct zxdh_virtqueue *vq, uint16_t id) +{ + struct zxdh_vq_desc_extra *dxp = NULL; + + dxp = &vq->vq_descx[id]; + vq->vq_free_cnt += dxp->ndescs; + + if (vq->vq_desc_tail_idx == ZXDH_VQ_RING_DESC_CHAIN_END) + vq->vq_desc_head_idx = id; + else + vq->vq_descx[vq->vq_desc_tail_idx].next = id; + + vq->vq_desc_tail_idx = id; + dxp->next = ZXDH_VQ_RING_DESC_CHAIN_END; +} + +static void +zxdh_xmit_cleanup_normal_packed(struct zxdh_virtqueue *vq, int32_t num) +{ + uint16_t used_idx = 0; + uint16_t id = 0; + uint16_t size = vq->vq_nentries; + struct zxdh_vring_packed_desc *desc = vq->vq_packed.ring.desc; + struct zxdh_vq_desc_extra *dxp = NULL; + + used_idx = vq->vq_used_cons_idx; + /* desc_is_used has a load-acquire or rte_io_rmb inside + * and wait for used desc in virtqueue. + */ + while (num-- && zxdh_desc_used(&desc[used_idx], vq)) { + id = desc[used_idx].id; + dxp = &vq->vq_descx[id]; + vq->vq_used_cons_idx += dxp->ndescs; + if (vq->vq_used_cons_idx >= size) { + vq->vq_used_cons_idx -= size; + vq->vq_packed.used_wrap_counter ^= 1; + } + zxdh_ring_free_id_packed(vq, id); + if (dxp->cookie != NULL) { + rte_pktmbuf_free(dxp->cookie); + dxp->cookie = NULL; + } + used_idx = vq->vq_used_cons_idx; + } +} + +static void +zxdh_xmit_cleanup_packed(struct zxdh_virtqueue *vq, int32_t num, int32_t in_order) +{ + if (in_order) + zxdh_xmit_cleanup_inorder_packed(vq, num); + else + zxdh_xmit_cleanup_normal_packed(vq, num); +} + +static uint8_t +zxdh_xmit_get_ptype(struct rte_mbuf *m) +{ + uint8_t pcode = ZXDH_PCODE_NO_IP_PKT_TYPE; + uint8_t l3_ptype = ZXDH_PI_L3TYPE_NOIP; + + if ((m->packet_type & RTE_PTYPE_INNER_L3_MASK) == RTE_PTYPE_INNER_L3_IPV4 || + ((!(m->packet_type & RTE_PTYPE_TUNNEL_MASK)) && + (m->packet_type & RTE_PTYPE_L3_MASK) == RTE_PTYPE_L3_IPV4)) { + l3_ptype = ZXDH_PI_L3TYPE_IP; + pcode = ZXDH_PCODE_IP_PKT_TYPE; + } else if ((m->packet_type & RTE_PTYPE_INNER_L3_MASK) == RTE_PTYPE_INNER_L3_IPV6 || + ((!(m->packet_type & RTE_PTYPE_TUNNEL_MASK)) && + (m->packet_type & RTE_PTYPE_L3_MASK) == RTE_PTYPE_L3_IPV6)) { + l3_ptype = ZXDH_PI_L3TYPE_IPV6; + pcode = ZXDH_PCODE_IP_PKT_TYPE; + } else { + goto end; + } + if ((m->packet_type & RTE_PTYPE_INNER_L4_MASK) == RTE_PTYPE_INNER_L4_TCP || + ((!(m->packet_type & RTE_PTYPE_TUNNEL_MASK)) && + (m->packet_type & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_TCP)) + pcode = ZXDH_PCODE_TCP_PKT_TYPE; + else if ((m->packet_type & RTE_PTYPE_INNER_L4_MASK) == RTE_PTYPE_INNER_L4_UDP || + ((!(m->packet_type & RTE_PTYPE_TUNNEL_MASK)) && + (m->packet_type & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_UDP)) + pcode = ZXDH_PCODE_UDP_PKT_TYPE; + +end: + return l3_ptype | ZXDH_PKT_FORM_CPU | pcode; +} + +static void zxdh_xmit_fill_net_hdr(struct rte_mbuf *cookie, + struct zxdh_net_hdr_dl *hdr) +{ + uint16_t pkt_flag_lw16 = ZXDH_NO_IPID_UPDATE; + uint16_t l3_offset; + uint32_t ol_flag = 0; + + hdr->pi_hdr.pkt_flag_lw16 = rte_be_to_cpu_16(pkt_flag_lw16); + + hdr->pi_hdr.pkt_type = zxdh_xmit_get_ptype(cookie); + l3_offset = ZXDH_DL_NET_HDR_SIZE + 
cookie->outer_l2_len + + cookie->outer_l3_len + cookie->l2_len; + hdr->pi_hdr.l3_offset = rte_be_to_cpu_16(l3_offset); + hdr->pi_hdr.l4_offset = rte_be_to_cpu_16(l3_offset + cookie->l3_len); + + hdr->pd_hdr.ol_flag = rte_be_to_cpu_32(ol_flag); +} + +static inline void zxdh_enqueue_xmit_packed_fast(struct zxdh_virtnet_tx *txvq, + struct rte_mbuf *cookie, int32_t in_order) +{ + struct zxdh_virtqueue *vq = txvq->vq; + uint16_t id = in_order ? vq->vq_avail_idx : vq->vq_desc_head_idx; + struct zxdh_vq_desc_extra *dxp = &vq->vq_descx[id]; + uint16_t flags = vq->vq_packed.cached_flags; + struct zxdh_net_hdr_dl *hdr = NULL; + + dxp->ndescs = 1; + dxp->cookie = cookie; + hdr = rte_pktmbuf_mtod_offset(cookie, struct zxdh_net_hdr_dl *, -ZXDH_DL_NET_HDR_SIZE); + zxdh_xmit_fill_net_hdr(cookie, hdr); + + uint16_t idx = vq->vq_avail_idx; + struct zxdh_vring_packed_desc *dp = &vq->vq_packed.ring.desc[idx]; + + dp->addr = rte_pktmbuf_iova(cookie) - ZXDH_DL_NET_HDR_SIZE; + dp->len = cookie->data_len + ZXDH_DL_NET_HDR_SIZE; + dp->id = id; + if (++vq->vq_avail_idx >= vq->vq_nentries) { + vq->vq_avail_idx -= vq->vq_nentries; + vq->vq_packed.cached_flags ^= ZXDH_VRING_PACKED_DESC_F_AVAIL_USED; + } + vq->vq_free_cnt--; + if (!in_order) { + vq->vq_desc_head_idx = dxp->next; + if (vq->vq_desc_head_idx == ZXDH_VQ_RING_DESC_CHAIN_END) + vq->vq_desc_tail_idx = ZXDH_VQ_RING_DESC_CHAIN_END; + } + zxdh_queue_store_flags_packed(dp, flags, vq->hw->weak_barriers); +} + +static inline void zxdh_enqueue_xmit_packed(struct zxdh_virtnet_tx *txvq, + struct rte_mbuf *cookie, + uint16_t needed, + int32_t use_indirect, + int32_t in_order) +{ + struct zxdh_tx_region *txr = txvq->zxdh_net_hdr_mz->addr; + struct zxdh_virtqueue *vq = txvq->vq; + struct zxdh_vring_packed_desc *start_dp = vq->vq_packed.ring.desc; + void *hdr = NULL; + uint16_t head_idx = vq->vq_avail_idx; + uint16_t idx = head_idx; + uint16_t prev = head_idx; + uint16_t head_flags = cookie->next ? ZXDH_VRING_DESC_F_NEXT : 0; + uint16_t seg_num = cookie->nb_segs; + uint16_t id = in_order ? vq->vq_avail_idx : vq->vq_desc_head_idx; + struct zxdh_vring_packed_desc *head_dp = &vq->vq_packed.ring.desc[idx]; + struct zxdh_vq_desc_extra *dxp = &vq->vq_descx[id]; + + dxp->ndescs = needed; + dxp->cookie = cookie; + head_flags |= vq->vq_packed.cached_flags; + /* if offload disabled, it is not zeroed below, do it now */ + + if (use_indirect) { + /** + * setup tx ring slot to point to indirect + * descriptor list stored in reserved region. + * the first slot in indirect ring is already + * preset to point to the header in reserved region + **/ + start_dp[idx].addr = + txvq->zxdh_net_hdr_mem + RTE_PTR_DIFF(&txr[idx].tx_packed_indir, txr); + start_dp[idx].len = (seg_num + 1) * sizeof(struct zxdh_vring_packed_desc); + /* Packed descriptor id needs to be restored when inorder. */ + if (in_order) + start_dp[idx].id = idx; + + /* reset flags for indirect desc */ + head_flags = ZXDH_VRING_DESC_F_INDIRECT; + head_flags |= vq->vq_packed.cached_flags; + hdr = (void *)&txr[idx].tx_hdr; + /* loop below will fill in rest of the indirect elements */ + start_dp = txr[idx].tx_packed_indir; + start_dp->len = ZXDH_DL_NET_HDR_SIZE; /* update actual net or type hdr size */ + idx = 1; + } else { + /* setup first tx ring slot to point to header stored in reserved region. 
*/ + start_dp[idx].addr = txvq->zxdh_net_hdr_mem + RTE_PTR_DIFF(&txr[idx].tx_hdr, txr); + start_dp[idx].len = ZXDH_DL_NET_HDR_SIZE; + head_flags |= ZXDH_VRING_DESC_F_NEXT; + hdr = (void *)&txr[idx].tx_hdr; + idx++; + if (idx >= vq->vq_nentries) { + idx -= vq->vq_nentries; + vq->vq_packed.cached_flags ^= ZXDH_VRING_PACKED_DESC_F_AVAIL_USED; + } + } + zxdh_xmit_fill_net_hdr(cookie, (struct zxdh_net_hdr_dl *)hdr); + + do { + start_dp[idx].addr = rte_pktmbuf_iova(cookie); + start_dp[idx].len = cookie->data_len; + if (likely(idx != head_idx)) { + uint16_t flags = cookie->next ? ZXDH_VRING_DESC_F_NEXT : 0; + flags |= vq->vq_packed.cached_flags; + start_dp[idx].flags = flags; + } + prev = idx; + idx++; + if (idx >= vq->vq_nentries) { + idx -= vq->vq_nentries; + vq->vq_packed.cached_flags ^= ZXDH_VRING_PACKED_DESC_F_AVAIL_USED; + } + } while ((cookie = cookie->next) != NULL); + start_dp[prev].id = id; + if (use_indirect) { + idx = head_idx; + if (++idx >= vq->vq_nentries) { + idx -= vq->vq_nentries; + vq->vq_packed.cached_flags ^= ZXDH_VRING_PACKED_DESC_F_AVAIL_USED; + } + } + vq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt - needed); + vq->vq_avail_idx = idx; + if (!in_order) { + vq->vq_desc_head_idx = dxp->next; + if (vq->vq_desc_head_idx == ZXDH_VQ_RING_DESC_CHAIN_END) + vq->vq_desc_tail_idx = ZXDH_VQ_RING_DESC_CHAIN_END; + } + zxdh_queue_store_flags_packed(head_dp, head_flags, vq->hw->weak_barriers); +} + +uint16_t +zxdh_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) +{ + struct zxdh_virtnet_tx *txvq = tx_queue; + struct zxdh_virtqueue *vq = txvq->vq; + struct zxdh_hw *hw = vq->hw; + uint16_t nb_tx = 0; + + bool in_order = zxdh_pci_with_feature(hw, ZXDH_F_IN_ORDER); + + if (nb_pkts > vq->vq_free_cnt) + zxdh_xmit_cleanup_packed(vq, nb_pkts - vq->vq_free_cnt, in_order); + for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) { + struct rte_mbuf *txm = tx_pkts[nb_tx]; + int32_t can_push = 0; + int32_t use_indirect = 0; + int32_t slots = 0; + int32_t need = 0; + + /* optimize ring usage */ + if ((zxdh_pci_with_feature(hw, ZXDH_F_ANY_LAYOUT) || + zxdh_pci_with_feature(hw, ZXDH_F_VERSION_1)) && + rte_mbuf_refcnt_read(txm) == 1 && + RTE_MBUF_DIRECT(txm) && + txm->nb_segs == 1 && + rte_pktmbuf_headroom(txm) >= ZXDH_DL_NET_HDR_SIZE && + rte_is_aligned(rte_pktmbuf_mtod(txm, char *), + alignof(struct zxdh_net_hdr_dl))) { + can_push = 1; + } else if (zxdh_pci_with_feature(hw, ZXDH_RING_F_INDIRECT_DESC) && + txm->nb_segs < ZXDH_MAX_TX_INDIRECT) { + use_indirect = 1; + } + /** + * How many main ring entries are needed to this Tx? + * indirect => 1 + * any_layout => number of segments + * default => number of segments + 1 + **/ + slots = use_indirect ? 1 : (txm->nb_segs + !can_push); + need = slots - vq->vq_free_cnt; + /* Positive value indicates it need free vring descriptors */ + if (unlikely(need > 0)) { + zxdh_xmit_cleanup_packed(vq, need, in_order); + need = slots - vq->vq_free_cnt; + if (unlikely(need > 0)) { + PMD_TX_LOG(ERR, "port[ep:%d, pf:%d, vf:%d, vfid:%d, pcieid:%d], que:%d[pch:%d]. 
No free tx desc to xmit", + hw->vport.epid, hw->vport.pfid, hw->vport.vfid, + hw->vfid, hw->pcie_id, txvq->queue_id, + hw->channel_context[txvq->queue_id].ph_chno); + break; + } + } + /* Enqueue Packet buffers */ + if (can_push) + zxdh_enqueue_xmit_packed_fast(txvq, txm, in_order); + else + zxdh_enqueue_xmit_packed(txvq, txm, slots, use_indirect, in_order); + } + if (likely(nb_tx)) { + if (unlikely(zxdh_queue_kick_prepare_packed(vq))) { + zxdh_queue_notify(vq); + PMD_TX_LOG(DEBUG, "Notified backend after xmit"); + } + } + return nb_tx; +} + +uint16_t zxdh_xmit_pkts_prepare(void *tx_queue __rte_unused, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts) +{ + uint16_t nb_tx; + + for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) { + struct rte_mbuf *m = tx_pkts[nb_tx]; + int32_t error; + +#ifdef RTE_LIBRTE_ETHDEV_DEBUG + error = rte_validate_tx_offload(m); + if (unlikely(error)) { + rte_errno = -error; + break; + } +#endif + + error = rte_net_intel_cksum_prepare(m); + if (unlikely(error)) { + rte_errno = -error; + break; + } + } + return nb_tx; +} diff --git a/drivers/net/zxdh/zxdh_rxtx.h b/drivers/net/zxdh/zxdh_rxtx.h index de9353b223..e07e01e821 100644 --- a/drivers/net/zxdh/zxdh_rxtx.h +++ b/drivers/net/zxdh/zxdh_rxtx.h @@ -44,4 +44,8 @@ struct zxdh_virtnet_tx { const struct rte_memzone *mz; /* mem zone to populate TX ring. */ } __rte_packed; +uint16_t zxdh_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); +uint16_t zxdh_xmit_pkts_prepare(void *tx_queue __rte_unused, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts); + #endif /* ZXDH_RXTX_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 45120 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
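Two details of the tx path above are worth spelling out. First, the main-ring descriptor budget per mbuf chain is one slot on the indirect path, nb_segs when the net header can be pushed into the mbuf headroom, and nb_segs + 1 otherwise, which is exactly the "slots" computation in zxdh_xmit_pkts_packed(). Second, the PI header packet-type byte built by zxdh_xmit_get_ptype() is an OR of an L3 type, the CPU-origin flag and a protocol code; two worked examples using the constants defined at the top of zxdh_rxtx.c:

    /* Illustration only, values taken from the defines above:
     * TCP over plain IPv4:
     *   ZXDH_PI_L3TYPE_IP (0x00) | ZXDH_PKT_FORM_CPU (0x20)
     *       | ZXDH_PCODE_TCP_PKT_TYPE (0x02)  => 0x22
     * UDP over plain IPv6:
     *   ZXDH_PI_L3TYPE_IPV6 (0x40) | ZXDH_PKT_FORM_CPU (0x20)
     *       | ZXDH_PCODE_UDP_PKT_TYPE (0x03)  => 0x63
     */
    uint8_t ptype_tcp_v4 = 0x00 | 0x20 | 0x02;      /* 0x22 */
    uint8_t ptype_udp_v6 = 0x40 | 0x20 | 0x03;      /* 0x63 */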
* [PATCH v1 08/15] net/zxdh: provided dev simple rx implementations 2024-12-06 5:57 ` [PATCH v1 00/15] net/zxdh: updated " Junlong Wang ` (6 preceding siblings ...) 2024-12-06 5:57 ` [PATCH v1 07/15] net/zxdh: provided dev simple tx implementations Junlong Wang @ 2024-12-06 5:57 ` Junlong Wang 2024-12-06 5:57 ` [PATCH v1 09/15] net/zxdh: link info update, set link up/down Junlong Wang ` (6 subsequent siblings) 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-06 5:57 UTC (permalink / raw) To: ferruh.yigit; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 11211 bytes --] provided dev simple rx implementations. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- doc/guides/nics/features/zxdh.ini | 1 + doc/guides/nics/zxdh.rst | 1 + drivers/net/zxdh/zxdh_ethdev.c | 2 + drivers/net/zxdh/zxdh_rxtx.c | 311 ++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_rxtx.h | 2 + 5 files changed, 317 insertions(+) diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index 874541c589..85c5c8fd32 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -9,3 +9,4 @@ x86-64 = Y ARMv8 = Y SR-IOV = Y Multiprocess = Y +Scattered Rx = Y \ No newline at end of file diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index eb970a888f..f42db9c1f1 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -20,6 +20,7 @@ Features of the ZXDH PMD are: - Multi arch support: x86_64, ARMv8. - Multiple queues for TX and RX - SR-IOV VF +- Scattered and gather for TX and RX Driver compilation and testing diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index c32de633db..226b9d6b67 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -968,6 +968,8 @@ zxdh_set_rxtx_funcs(struct rte_eth_dev *eth_dev) } eth_dev->tx_pkt_prepare = zxdh_xmit_pkts_prepare; eth_dev->tx_pkt_burst = &zxdh_xmit_pkts_packed; + eth_dev->rx_pkt_burst = &zxdh_recv_pkts_packed; + return 0; } diff --git a/drivers/net/zxdh/zxdh_rxtx.c b/drivers/net/zxdh/zxdh_rxtx.c index 01e9b19798..07ef708112 100644 --- a/drivers/net/zxdh/zxdh_rxtx.c +++ b/drivers/net/zxdh/zxdh_rxtx.c @@ -31,6 +31,93 @@ #define ZXDH_TX_MAX_SEGS 31 #define ZXDH_RX_MAX_SEGS 31 +uint32_t zxdh_outer_l2_type[16] = { + 0, + RTE_PTYPE_L2_ETHER, + RTE_PTYPE_L2_ETHER_TIMESYNC, + RTE_PTYPE_L2_ETHER_ARP, + RTE_PTYPE_L2_ETHER_LLDP, + RTE_PTYPE_L2_ETHER_NSH, + RTE_PTYPE_L2_ETHER_VLAN, + RTE_PTYPE_L2_ETHER_QINQ, + RTE_PTYPE_L2_ETHER_PPPOE, + RTE_PTYPE_L2_ETHER_FCOE, + RTE_PTYPE_L2_ETHER_MPLS, +}; + +uint32_t zxdh_outer_l3_type[16] = { + 0, + RTE_PTYPE_L3_IPV4, + RTE_PTYPE_L3_IPV4_EXT, + RTE_PTYPE_L3_IPV6, + RTE_PTYPE_L3_IPV4_EXT_UNKNOWN, + RTE_PTYPE_L3_IPV6_EXT, + RTE_PTYPE_L3_IPV6_EXT_UNKNOWN, +}; + +uint32_t zxdh_outer_l4_type[16] = { + 0, + RTE_PTYPE_L4_TCP, + RTE_PTYPE_L4_UDP, + RTE_PTYPE_L4_FRAG, + RTE_PTYPE_L4_SCTP, + RTE_PTYPE_L4_ICMP, + RTE_PTYPE_L4_NONFRAG, + RTE_PTYPE_L4_IGMP, +}; + +uint32_t zxdh_tunnel_type[16] = { + 0, + RTE_PTYPE_TUNNEL_IP, + RTE_PTYPE_TUNNEL_GRE, + RTE_PTYPE_TUNNEL_VXLAN, + RTE_PTYPE_TUNNEL_NVGRE, + RTE_PTYPE_TUNNEL_GENEVE, + RTE_PTYPE_TUNNEL_GRENAT, + RTE_PTYPE_TUNNEL_GTPC, + RTE_PTYPE_TUNNEL_GTPU, + RTE_PTYPE_TUNNEL_ESP, + RTE_PTYPE_TUNNEL_L2TP, + RTE_PTYPE_TUNNEL_VXLAN_GPE, + RTE_PTYPE_TUNNEL_MPLS_IN_GRE, + RTE_PTYPE_TUNNEL_MPLS_IN_UDP, +}; + +uint32_t zxdh_inner_l2_type[16] = { + 0, + RTE_PTYPE_INNER_L2_ETHER, + 0, + 0, + 0, + 0, + RTE_PTYPE_INNER_L2_ETHER_VLAN, + 
RTE_PTYPE_INNER_L2_ETHER_QINQ, + 0, + 0, + 0, +}; + +uint32_t zxdh_inner_l3_type[16] = { + 0, + RTE_PTYPE_INNER_L3_IPV4, + RTE_PTYPE_INNER_L3_IPV4_EXT, + RTE_PTYPE_INNER_L3_IPV6, + RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN, + RTE_PTYPE_INNER_L3_IPV6_EXT, + RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN, +}; + +uint32_t zxdh_inner_l4_type[16] = { + 0, + RTE_PTYPE_INNER_L4_TCP, + RTE_PTYPE_INNER_L4_UDP, + RTE_PTYPE_INNER_L4_FRAG, + RTE_PTYPE_INNER_L4_SCTP, + RTE_PTYPE_INNER_L4_ICMP, + 0, + 0, +}; + static void zxdh_xmit_cleanup_inorder_packed(struct zxdh_virtqueue *vq, int32_t num) { @@ -393,3 +480,227 @@ uint16_t zxdh_xmit_pkts_prepare(void *tx_queue __rte_unused, struct rte_mbuf **t } return nb_tx; } + +static uint16_t zxdh_dequeue_burst_rx_packed(struct zxdh_virtqueue *vq, + struct rte_mbuf **rx_pkts, + uint32_t *len, + uint16_t num) +{ + struct zxdh_vring_packed_desc *desc = vq->vq_packed.ring.desc; + struct rte_mbuf *cookie = NULL; + uint16_t i, used_idx; + uint16_t id; + + for (i = 0; i < num; i++) { + used_idx = vq->vq_used_cons_idx; + /** + * desc_is_used has a load-acquire or rte_io_rmb inside + * and wait for used desc in virtqueue. + */ + if (!zxdh_desc_used(&desc[used_idx], vq)) + return i; + len[i] = desc[used_idx].len; + id = desc[used_idx].id; + cookie = (struct rte_mbuf *)vq->vq_descx[id].cookie; + vq->vq_descx[id].cookie = NULL; + if (unlikely(cookie == NULL)) { + PMD_RX_LOG(ERR, + "vring descriptor with no mbuf cookie at %u", vq->vq_used_cons_idx); + break; + } + rx_pkts[i] = cookie; + vq->vq_free_cnt++; + vq->vq_used_cons_idx++; + if (vq->vq_used_cons_idx >= vq->vq_nentries) { + vq->vq_used_cons_idx -= vq->vq_nentries; + vq->vq_packed.used_wrap_counter ^= 1; + } + } + return i; +} + +static int32_t zxdh_rx_update_mbuf(struct rte_mbuf *m, struct zxdh_net_hdr_ul *hdr) +{ + struct zxdh_pd_hdr_ul *pd_hdr = &hdr->pd_hdr; + struct zxdh_pi_hdr *pi_hdr = &hdr->pi_hdr; + uint32_t idx = 0; + + m->pkt_len = rte_be_to_cpu_16(pi_hdr->ul.pkt_len); + + uint16_t pkt_type_outer = rte_be_to_cpu_16(pd_hdr->pkt_type_out); + + idx = (pkt_type_outer >> 12) & 0xF; + m->packet_type = zxdh_outer_l2_type[idx]; + idx = (pkt_type_outer >> 8) & 0xF; + m->packet_type |= zxdh_outer_l3_type[idx]; + idx = (pkt_type_outer >> 4) & 0xF; + m->packet_type |= zxdh_outer_l4_type[idx]; + idx = pkt_type_outer & 0xF; + m->packet_type |= zxdh_tunnel_type[idx]; + + uint16_t pkt_type_inner = rte_be_to_cpu_16(pd_hdr->pkt_type_in); + + if (pkt_type_inner) { + idx = (pkt_type_inner >> 12) & 0xF; + m->packet_type |= zxdh_inner_l2_type[idx]; + idx = (pkt_type_inner >> 8) & 0xF; + m->packet_type |= zxdh_inner_l3_type[idx]; + idx = (pkt_type_inner >> 4) & 0xF; + m->packet_type |= zxdh_inner_l4_type[idx]; + } + + return 0; +} + +static inline void zxdh_discard_rxbuf(struct zxdh_virtqueue *vq, struct rte_mbuf *m) +{ + int32_t error = 0; + /* + * Requeue the discarded mbuf. This should always be + * successful since it was just dequeued. 
+ */ + error = zxdh_enqueue_recv_refill_packed(vq, &m, 1); + if (unlikely(error)) { + PMD_RX_LOG(ERR, "cannot enqueue discarded mbuf"); + rte_pktmbuf_free(m); + } +} + +uint16_t zxdh_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts) +{ + struct zxdh_virtnet_rx *rxvq = rx_queue; + struct zxdh_virtqueue *vq = rxvq->vq; + struct zxdh_hw *hw = vq->hw; + struct rte_eth_dev *dev = hw->eth_dev; + struct rte_mbuf *rxm = NULL; + struct rte_mbuf *prev = NULL; + uint32_t len[ZXDH_MBUF_BURST_SZ] = {0}; + struct rte_mbuf *rcv_pkts[ZXDH_MBUF_BURST_SZ] = {NULL}; + uint32_t nb_enqueued = 0; + uint32_t seg_num = 0; + uint32_t seg_res = 0; + uint16_t hdr_size = 0; + int32_t error = 0; + uint16_t nb_rx = 0; + uint16_t num = nb_pkts; + + if (unlikely(num > ZXDH_MBUF_BURST_SZ)) + num = ZXDH_MBUF_BURST_SZ; + + num = zxdh_dequeue_burst_rx_packed(vq, rcv_pkts, len, num); + uint16_t i; + uint16_t rcvd_pkt_len = 0; + + for (i = 0; i < num; i++) { + rxm = rcv_pkts[i]; + + struct zxdh_net_hdr_ul *header = + (struct zxdh_net_hdr_ul *)((char *)rxm->buf_addr + + RTE_PKTMBUF_HEADROOM); + + seg_num = header->type_hdr.num_buffers; + if (seg_num == 0) { + PMD_RX_LOG(ERR, "dequeue %d pkt, No.%d pkt seg_num is %d", num, i, seg_num); + seg_num = 1; + } + /* bit[0:6]-pd_len unit:2B */ + uint16_t pd_len = header->type_hdr.pd_len << 1; + /* Private queue only handle type hdr */ + hdr_size = pd_len; + rxm->data_off = RTE_PKTMBUF_HEADROOM + hdr_size; + rxm->nb_segs = seg_num; + rxm->ol_flags = 0; + rxm->vlan_tci = 0; + rcvd_pkt_len = (uint32_t)(len[i] - hdr_size); + rxm->data_len = (uint16_t)(len[i] - hdr_size); + rxm->port = rxvq->port_id; + rx_pkts[nb_rx] = rxm; + prev = rxm; + /* Update rte_mbuf according to pi/pd header */ + if (zxdh_rx_update_mbuf(rxm, header) < 0) { + zxdh_discard_rxbuf(vq, rxm); + continue; + } + seg_res = seg_num - 1; + /* Merge remaining segments */ + while (seg_res != 0 && i < (num - 1)) { + i++; + rxm = rcv_pkts[i]; + rxm->data_off = RTE_PKTMBUF_HEADROOM; + rxm->data_len = (uint16_t)(len[i]); + + rcvd_pkt_len += (uint32_t)(len[i]); + prev->next = rxm; + prev = rxm; + rxm->next = NULL; + seg_res -= 1; + } + + if (!seg_res) { + if (rcvd_pkt_len != rx_pkts[nb_rx]->pkt_len) { + PMD_RX_LOG(ERR, "dropped rcvd_pkt_len %d pktlen %d.", + rcvd_pkt_len, rx_pkts[nb_rx]->pkt_len); + zxdh_discard_rxbuf(vq, rx_pkts[nb_rx]); + continue; + } + nb_rx++; + } + } + /* Last packet still need merge segments */ + while (seg_res != 0) { + uint16_t rcv_cnt = RTE_MIN((uint16_t)seg_res, ZXDH_MBUF_BURST_SZ); + uint16_t extra_idx = 0; + + rcv_cnt = zxdh_dequeue_burst_rx_packed(vq, rcv_pkts, len, rcv_cnt); + if (unlikely(rcv_cnt == 0)) { + PMD_RX_LOG(ERR, "No enough segments for packet."); + rte_pktmbuf_free(rx_pkts[nb_rx]); + break; + } + while (extra_idx < rcv_cnt) { + rxm = rcv_pkts[extra_idx]; + rxm->data_off = RTE_PKTMBUF_HEADROOM; + rxm->pkt_len = (uint32_t)(len[extra_idx]); + rxm->data_len = (uint16_t)(len[extra_idx]); + prev->next = rxm; + prev = rxm; + rxm->next = NULL; + rcvd_pkt_len += len[extra_idx]; + extra_idx += 1; + } + seg_res -= rcv_cnt; + if (!seg_res) { + if (rcvd_pkt_len != rx_pkts[nb_rx]->pkt_len) { + PMD_RX_LOG(ERR, "dropped rcvd_pkt_len %d pktlen %d.", + rcvd_pkt_len, rx_pkts[nb_rx]->pkt_len); + zxdh_discard_rxbuf(vq, rx_pkts[nb_rx]); + continue; + } + nb_rx++; + } + } + + /* Allocate new mbuf for the used descriptor */ + if (likely(!zxdh_queue_full(vq))) { + /* free_cnt may include mrg descs */ + uint16_t free_cnt = vq->vq_free_cnt; + struct rte_mbuf *new_pkts[free_cnt]; + + if 
(!rte_pktmbuf_alloc_bulk(rxvq->mpool, new_pkts, free_cnt)) { + error = zxdh_enqueue_recv_refill_packed(vq, new_pkts, free_cnt); + if (unlikely(error)) { + for (i = 0; i < free_cnt; i++) + rte_pktmbuf_free(new_pkts[i]); + } + nb_enqueued += free_cnt; + } else { + dev->data->rx_mbuf_alloc_failed += free_cnt; + } + } + if (likely(nb_enqueued)) { + if (unlikely(zxdh_queue_kick_prepare_packed(vq))) + zxdh_queue_notify(vq); + } + return nb_rx; +} diff --git a/drivers/net/zxdh/zxdh_rxtx.h b/drivers/net/zxdh/zxdh_rxtx.h index e07e01e821..6c1c132479 100644 --- a/drivers/net/zxdh/zxdh_rxtx.h +++ b/drivers/net/zxdh/zxdh_rxtx.h @@ -47,5 +47,7 @@ struct zxdh_virtnet_tx { uint16_t zxdh_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); uint16_t zxdh_xmit_pkts_prepare(void *tx_queue __rte_unused, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); +uint16_t zxdh_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts); #endif /* ZXDH_RXTX_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 28750 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
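A note on zxdh_rx_update_mbuf() above: the big-endian pkt_type_out word from the uplink PD header is split into four nibbles that index the lookup tables defined at the top of this patch (outer L2, outer L3, outer L4, tunnel, from high nibble to low). A worked decode of a hypothetical word value:

    /* Illustration only: decode pkt_type_out = 0x1120 the way
     * zxdh_rx_update_mbuf() does.
     *   [15:12] = 0x1 -> zxdh_outer_l2_type[1] = RTE_PTYPE_L2_ETHER
     *   [11:8]  = 0x1 -> zxdh_outer_l3_type[1] = RTE_PTYPE_L3_IPV4
     *   [7:4]   = 0x2 -> zxdh_outer_l4_type[2] = RTE_PTYPE_L4_UDP
     *   [3:0]   = 0x0 -> zxdh_tunnel_type[0]   = 0 (no tunnel)
     */
    uint16_t pkt_type_outer = 0x1120;
    uint32_t ptype = zxdh_outer_l2_type[(pkt_type_outer >> 12) & 0xF] |
                     zxdh_outer_l3_type[(pkt_type_outer >> 8) & 0xF] |
                     zxdh_outer_l4_type[(pkt_type_outer >> 4) & 0xF] |
                     zxdh_tunnel_type[pkt_type_outer & 0xF];
    /* ptype == RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP */

The inner-type word, when non-zero, is decoded similarly through the zxdh_inner_* tables (three nibbles, no tunnel field).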
* [PATCH v1 09/15] net/zxdh: link info update, set link up/down 2024-12-06 5:57 ` [PATCH v1 00/15] net/zxdh: updated " Junlong Wang ` (7 preceding siblings ...) 2024-12-06 5:57 ` [PATCH v1 08/15] net/zxdh: provided dev simple rx implementations Junlong Wang @ 2024-12-06 5:57 ` Junlong Wang 2024-12-06 5:57 ` [PATCH v1 10/15] net/zxdh: mac set/add/remove ops implementations Junlong Wang ` (5 subsequent siblings) 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-06 5:57 UTC (permalink / raw) To: ferruh.yigit; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 24032 bytes --] provided link info update, set link up /down, and link intr. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- doc/guides/nics/features/zxdh.ini | 4 +- doc/guides/nics/zxdh.rst | 3 + drivers/net/zxdh/meson.build | 1 + drivers/net/zxdh/zxdh_ethdev.c | 13 ++ drivers/net/zxdh/zxdh_ethdev.h | 2 + drivers/net/zxdh/zxdh_ethdev_ops.c | 166 ++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev_ops.h | 14 +++ drivers/net/zxdh/zxdh_msg.c | 58 ++++++++- drivers/net/zxdh/zxdh_msg.h | 41 +++++++ drivers/net/zxdh/zxdh_np.c | 183 +++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_np.h | 20 ++++ drivers/net/zxdh/zxdh_tables.c | 14 +++ drivers/net/zxdh/zxdh_tables.h | 2 + 13 files changed, 519 insertions(+), 2 deletions(-) create mode 100644 drivers/net/zxdh/zxdh_ethdev_ops.c create mode 100644 drivers/net/zxdh/zxdh_ethdev_ops.h diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index 85c5c8fd32..f052fde413 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -9,4 +9,6 @@ x86-64 = Y ARMv8 = Y SR-IOV = Y Multiprocess = Y -Scattered Rx = Y \ No newline at end of file +Scattered Rx = Y +Link status = Y +Link status event = Y diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index f42db9c1f1..fdbc3b3923 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -21,6 +21,9 @@ Features of the ZXDH PMD are: - Multiple queues for TX and RX - SR-IOV VF - Scattered and gather for TX and RX +- Link Auto-negotiation +- Link state information +- Set Link down or up Driver compilation and testing diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build index 20b2cf484a..48f8f5e1ee 100644 --- a/drivers/net/zxdh/meson.build +++ b/drivers/net/zxdh/meson.build @@ -22,4 +22,5 @@ sources = files( 'zxdh_np.c', 'zxdh_tables.c', 'zxdh_rxtx.c', + 'zxdh_ethdev_ops.c', ) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 226b9d6b67..57ee2f7c55 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -16,6 +16,7 @@ #include "zxdh_np.h" #include "zxdh_tables.h" #include "zxdh_rxtx.h" +#include "zxdh_ethdev_ops.h" struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; struct zxdh_shared_data *zxdh_shared_data; @@ -105,9 +106,16 @@ static void zxdh_devconf_intr_handler(void *param) { struct rte_eth_dev *dev = param; + struct zxdh_hw *hw = dev->data->dev_private; + + uint8_t isr = zxdh_pci_isr(hw); if (zxdh_intr_unmask(dev) < 0) PMD_DRV_LOG(ERR, "interrupt enable failed"); + if (isr & ZXDH_PCI_ISR_CONFIG) { + if (zxdh_dev_link_update(dev, 0) == 0) + rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_LSC, NULL); + } } @@ -1007,6 +1015,8 @@ zxdh_dev_start(struct rte_eth_dev *dev) vq = hw->vqs[logic_qidx]; zxdh_queue_notify(vq); } + zxdh_dev_set_link_up(dev); + return 0; } @@ -1021,6 +1031,9 @@ static const struct eth_dev_ops 
zxdh_eth_dev_ops = { .tx_queue_setup = zxdh_dev_tx_queue_setup, .rx_queue_intr_enable = zxdh_dev_rx_queue_intr_enable, .rx_queue_intr_disable = zxdh_dev_rx_queue_intr_disable, + .link_update = zxdh_dev_link_update, + .dev_set_link_up = zxdh_dev_set_link_up, + .dev_set_link_down = zxdh_dev_set_link_down, }; static int32_t diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index 6fdb5fb767..cf2bc207e9 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -70,6 +70,7 @@ struct zxdh_hw { uint64_t guest_features; uint32_t max_queue_pairs; uint32_t speed; + uint32_t speed_mode; uint32_t notify_off_multiplier; uint16_t *notify_base; uint16_t pcie_id; @@ -91,6 +92,7 @@ struct zxdh_hw { uint8_t panel_id; uint8_t has_tx_offload; uint8_t has_rx_offload; + uint8_t admin_status; }; struct zxdh_dtb_shared_data { diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.c b/drivers/net/zxdh/zxdh_ethdev_ops.c new file mode 100644 index 0000000000..635868c4c0 --- /dev/null +++ b/drivers/net/zxdh/zxdh_ethdev_ops.c @@ -0,0 +1,166 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#include "zxdh_ethdev.h" +#include "zxdh_pci.h" +#include "zxdh_msg.h" +#include "zxdh_ethdev_ops.h" +#include "zxdh_tables.h" +#include "zxdh_logs.h" + +static int32_t zxdh_config_port_status(struct rte_eth_dev *dev, uint16_t link_status) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_port_attr_table port_attr = {0}; + struct zxdh_msg_info msg_info = {0}; + int32_t ret = 0; + + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "write port_attr failed"); + return -EAGAIN; + } + port_attr.is_up = link_status; + + ret = zxdh_set_port_attr(hw->vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "write port_attr failed"); + return -EAGAIN; + } + } else { + struct zxdh_port_attr_set_msg *port_attr_msg = &msg_info.data.port_attr_msg; + + zxdh_msg_head_build(hw, ZXDH_PORT_ATTRS_SET, &msg_info); + port_attr_msg->mode = ZXDH_PORT_ATTR_IS_UP_FLAG; + port_attr_msg->value = link_status; + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_PORT_ATTR_IS_UP_FLAG); + return ret; + } + } + return ret; +} + +static int32_t +zxdh_link_info_get(struct rte_eth_dev *dev, struct rte_eth_link *link) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_info msg_info = {0}; + struct zxdh_msg_reply_info reply_info = {0}; + uint16_t status = 0; + int32_t ret = 0; + + if (zxdh_pci_with_feature(hw, ZXDH_NET_F_STATUS)) + zxdh_pci_read_dev_config(hw, offsetof(struct zxdh_net_config, status), + &status, sizeof(status)); + + link->link_status = status; + + if (status == RTE_ETH_LINK_DOWN) { + link->link_speed = RTE_ETH_SPEED_NUM_UNKNOWN; + link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX; + } else { + zxdh_agent_msg_build(hw, ZXDH_MAC_LINK_GET, &msg_info); + + ret = zxdh_send_msg_to_riscv(dev, &msg_info, sizeof(struct zxdh_msg_info), + &reply_info, sizeof(struct zxdh_msg_reply_info), + ZXDH_BAR_MODULE_MAC); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_MAC_LINK_GET); + return -EAGAIN; + } + link->link_speed = reply_info.reply_body.link_msg.speed; + hw->speed_mode = reply_info.reply_body.link_msg.speed_modes; + if ((reply_info.reply_body.link_msg.duplex & RTE_ETH_LINK_FULL_DUPLEX) == + RTE_ETH_LINK_FULL_DUPLEX) + link->link_duplex = 
RTE_ETH_LINK_FULL_DUPLEX; + else + link->link_duplex = RTE_ETH_LINK_HALF_DUPLEX; + } + hw->speed = link->link_speed; + + return 0; +} + +static int zxdh_set_link_status(struct rte_eth_dev *dev, uint8_t link_status) +{ + uint16_t curr_link_status = dev->data->dev_link.link_status; + + struct rte_eth_link link; + struct zxdh_hw *hw = dev->data->dev_private; + int32_t ret = 0; + + if (link_status == curr_link_status) { + PMD_DRV_LOG(INFO, "curr_link_status %u", curr_link_status); + return 0; + } + + hw->admin_status = link_status; + ret = zxdh_link_info_get(dev, &link); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to get link status from hw"); + return ret; + } + dev->data->dev_link.link_status = hw->admin_status & link.link_status; + + if (dev->data->dev_link.link_status == RTE_ETH_LINK_UP) { + dev->data->dev_link.link_speed = link.link_speed; + dev->data->dev_link.link_duplex = link.link_duplex; + } else { + dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN; + dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX; + } + return zxdh_config_port_status(dev, dev->data->dev_link.link_status); +} + +int zxdh_dev_set_link_up(struct rte_eth_dev *dev) +{ + int ret = zxdh_set_link_status(dev, RTE_ETH_LINK_UP); + + if (ret) + PMD_DRV_LOG(ERR, "Set link up failed, code:%d", ret); + + return ret; +} + +int32_t zxdh_dev_link_update(struct rte_eth_dev *dev, int32_t wait_to_complete __rte_unused) +{ + struct rte_eth_link link; + struct zxdh_hw *hw = dev->data->dev_private; + int32_t ret = 0; + + memset(&link, 0, sizeof(link)); + link.link_duplex = hw->duplex; + link.link_speed = hw->speed; + link.link_autoneg = RTE_ETH_LINK_AUTONEG; + + ret = zxdh_link_info_get(dev, &link); + if (ret != 0) { + PMD_DRV_LOG(ERR, " Failed to get link status from hw"); + return ret; + } + link.link_status &= hw->admin_status; + if (link.link_status == RTE_ETH_LINK_DOWN) + link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN; + + ret = zxdh_config_port_status(dev, link.link_status); + if (ret != 0) { + PMD_DRV_LOG(ERR, "set port attr %d failed.", link.link_status); + return ret; + } + return rte_eth_linkstatus_set(dev, &link); +} + +int zxdh_dev_set_link_down(struct rte_eth_dev *dev) +{ + int ret = zxdh_set_link_status(dev, RTE_ETH_LINK_DOWN); + + if (ret) + PMD_DRV_LOG(ERR, "Set link down failed"); + return ret; +} diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.h b/drivers/net/zxdh/zxdh_ethdev_ops.h new file mode 100644 index 0000000000..c6d6ca56fd --- /dev/null +++ b/drivers/net/zxdh/zxdh_ethdev_ops.h @@ -0,0 +1,14 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#ifndef ZXDH_ETHDEV_OPS_H +#define ZXDH_ETHDEV_OPS_H + +#include "zxdh_ethdev.h" + +int zxdh_dev_set_link_up(struct rte_eth_dev *dev); +int zxdh_dev_set_link_down(struct rte_eth_dev *dev); +int32_t zxdh_dev_link_update(struct rte_eth_dev *dev, int32_t wait_to_complete __rte_unused); + +#endif /* ZXDH_ETHDEV_OPS_H */ diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c index 1aed979de3..be7bf46728 100644 --- a/drivers/net/zxdh/zxdh_msg.c +++ b/drivers/net/zxdh/zxdh_msg.c @@ -1083,7 +1083,7 @@ int zxdh_get_bar_offset(struct zxdh_bar_offset_params *paras, return ZXDH_BAR_MSG_OK; } -int zxdh_vf_send_msg_to_pf(struct rte_eth_dev *dev, void *msg_req, +int zxdh_vf_send_msg_to_pf(struct rte_eth_dev *dev, void *msg_req, uint16_t msg_req_len, void *reply, uint16_t reply_len) { struct zxdh_hw *hw = dev->data->dev_private; @@ -1133,6 +1133,50 @@ int zxdh_vf_send_msg_to_pf(struct rte_eth_dev *dev, void *msg_req, 
return 0; } +int32_t zxdh_send_msg_to_riscv(struct rte_eth_dev *dev, void *msg_req, + uint16_t msg_req_len, void *reply, uint16_t reply_len, + enum ZXDH_BAR_MODULE_ID module_id) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_pci_bar_msg in = {0}; + struct zxdh_msg_recviver_mem result = {0}; + struct zxdh_msg_reply_info reply_info = {0}; + + if (reply) { + RTE_ASSERT(reply_len < sizeof(struct zxdh_msg_reply_info)); + result.recv_buffer = reply; + result.buffer_len = reply_len; + } else { + result.recv_buffer = &reply_info; + result.buffer_len = sizeof(reply_info); + } + struct zxdh_msg_reply_head *reply_head = + &(((struct zxdh_msg_reply_info *)result.recv_buffer)->reply_head); + struct zxdh_msg_reply_body *reply_body = + &(((struct zxdh_msg_reply_info *)result.recv_buffer)->reply_body); + in.payload_addr = msg_req; + in.payload_len = msg_req_len; + in.virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_CTRLCH_OFFSET); + in.src = hw->is_pf ? ZXDH_MSG_CHAN_END_PF : ZXDH_MSG_CHAN_END_VF; + in.dst = ZXDH_MSG_CHAN_END_RISC; + in.module_id = module_id; + in.src_pcieid = hw->pcie_id; + if (zxdh_bar_chan_sync_msg_send(&in, &result) != ZXDH_BAR_MSG_OK) { + PMD_MSG_LOG(ERR, "Failed to send sync messages or receive response"); + return -EAGAIN; + } + if (reply_head->flag != ZXDH_MSG_REPS_OK) { + PMD_MSG_LOG(ERR, "vf[%d] get pf reply failed: reply_head flag: 0x%x (0xff is OK), reps_len %d", + hw->vport.vfid, reply_head->flag, reply_head->reps_len); + return -EAGAIN; + } + if (reply_body->flag != ZXDH_REPS_SUCC) { + PMD_MSG_LOG(ERR, "vf[%d] msg processing failed", hw->vfid); + return -EAGAIN; + } + return 0; +} + void zxdh_msg_head_build(struct zxdh_hw *hw, enum zxdh_msg_type type, struct zxdh_msg_info *msg_info) { @@ -1143,3 +1187,15 @@ void zxdh_msg_head_build(struct zxdh_hw *hw, enum zxdh_msg_type type, msghead->vf_id = hw->vport.vfid; msghead->pcieid = hw->pcie_id; } + +void zxdh_agent_msg_build(struct zxdh_hw *hw, enum zxdh_agent_msg_type type, + struct zxdh_msg_info *msg_info) +{ + struct zxdh_agent_msg_head *agent_head = &msg_info->agent_msg_head; + + agent_head->msg_type = type; + agent_head->panel_id = hw->panel_id; + agent_head->phyport = hw->phyport; + agent_head->vf_id = hw->vfid; + agent_head->pcie_id = hw->pcie_id; +} diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index 9997417f28..66c337443b 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -164,11 +164,18 @@ enum pciebar_layout_type { ZXDH_URI_MAX, }; +/* riscv msg opcodes */ +enum zxdh_agent_msg_type { + ZXDH_MAC_LINK_GET = 14, +} __rte_packed; + enum zxdh_msg_type { ZXDH_NULL = 0, ZXDH_VF_PORT_INIT = 1, ZXDH_VF_PORT_UNINIT = 2, + ZXDH_PORT_ATTRS_SET = 25, + ZXDH_MSG_TYPE_END, } __rte_packed; @@ -261,6 +268,16 @@ struct zxdh_offset_get_msg { uint16_t type; }; +struct zxdh_link_info_msg { + uint8_t autoneg; + uint8_t link_state; + uint8_t blink_enable; + uint8_t duplex; + uint32_t speed_modes; + uint32_t speed; +} __rte_packed; + + struct zxdh_msg_reply_head { uint8_t flag; uint16_t reps_len; @@ -276,6 +293,7 @@ struct zxdh_msg_reply_body { enum zxdh_reps_flag flag; union { uint8_t reply_data[ZXDH_MSG_REPLY_BODY_MAX_LEN - sizeof(enum zxdh_reps_flag)]; + struct zxdh_link_info_msg link_msg; } __rte_packed; } __rte_packed; @@ -291,6 +309,21 @@ struct zxdh_vf_init_msg { uint8_t rss_enable; } __rte_packed; +struct zxdh_port_attr_set_msg { + uint32_t mode; + uint32_t value; + uint8_t allmulti_follow; +} __rte_packed; + +struct zxdh_agent_msg_head { + enum
zxdh_agent_msg_type msg_type; + uint8_t panel_id; + uint8_t phyport; + uint8_t rsv; + uint16_t vf_id; + uint16_t pcie_id; +} __rte_packed; + struct zxdh_msg_head { enum zxdh_msg_type msg_type; uint16_t vport; @@ -302,10 +335,13 @@ struct zxdh_msg_info { union { uint8_t head_len[ZXDH_MSG_HEAD_LEN]; struct zxdh_msg_head msg_head; + struct zxdh_agent_msg_head agent_msg_head; }; union { uint8_t datainfo[ZXDH_MSG_REQ_BODY_MAX_LEN]; struct zxdh_vf_init_msg vf_init_msg; + struct zxdh_port_attr_set_msg port_attr_msg; + struct zxdh_link_info_msg link_msg; } __rte_packed data; } __rte_packed; @@ -326,5 +362,10 @@ void zxdh_msg_head_build(struct zxdh_hw *hw, enum zxdh_msg_type type, struct zxdh_msg_info *msg_info); int zxdh_vf_send_msg_to_pf(struct rte_eth_dev *dev, void *msg_req, uint16_t msg_req_len, void *reply, uint16_t reply_len); +void zxdh_agent_msg_build(struct zxdh_hw *hw, enum zxdh_agent_msg_type type, + struct zxdh_msg_info *msg_info); +int32_t zxdh_send_msg_to_riscv(struct rte_eth_dev *dev, void *msg_req, + uint16_t msg_req_len, void *reply, uint16_t reply_len, + enum ZXDH_BAR_MODULE_ID module_id); #endif /* ZXDH_MSG_H */ diff --git a/drivers/net/zxdh/zxdh_np.c b/drivers/net/zxdh/zxdh_np.c index 242a6901ed..2a4d38b846 100644 --- a/drivers/net/zxdh/zxdh_np.c +++ b/drivers/net/zxdh/zxdh_np.c @@ -36,6 +36,16 @@ ZXDH_DTB_TABLE_T g_dpp_dtb_table_info[4] = {0}; #define ZXDH_COMM_GET_BIT_MASK(_inttype_, _bitqnt_)\ ((_inttype_)(((_bitqnt_) < 32))) +#define ZXDH_COMM_UINT32_GET_BITS(_uidst_, _uisrc_, _uistartpos_, _uilen_)\ + ((_uidst_) = (((_uisrc_) >> (_uistartpos_)) & \ + (ZXDH_COMM_GET_BIT_MASK(uint32_t, (_uilen_))))) + +#define ZXDH_COMM_UINT32_WRITE_BITS(_uidst_, _uisrc_, _uistartpos_, _uilen_)\ + (((_uidst_) & ~(ZXDH_COMM_GET_BIT_MASK(uint32_t, (_uilen_)) << (_uistartpos_)))) + +#define ZXDH_COMM_CONVERT32(dw_data) \ + (((dw_data) & 0xff) << 24) + #define ZXDH_REG_DATA_MAX (128) #define ZXDH_COMM_CHECK_DEV_POINT(dev_id, point)\ @@ -1610,3 +1620,176 @@ zxdh_np_dtb_table_entry_delete(uint32_t dev_id, rte_free(p_data_buff_ex); return 0; } + +static uint32_t +zxdh_np_sdt_tbl_data_parser(uint32_t sdt_hig32, uint32_t sdt_low32, void *p_sdt_info) +{ + uint32_t tbl_type = 0; + uint32_t clutch_en = 0; + + ZXDH_SDTTBL_ERAM_T *p_sdt_eram = NULL; + ZXDH_SDTTBL_PORTTBL_T *p_sdt_porttbl = NULL; + + + ZXDH_COMM_UINT32_GET_BITS(tbl_type, sdt_hig32, + ZXDH_SDT_H_TBL_TYPE_BT_POS, ZXDH_SDT_H_TBL_TYPE_BT_LEN); + ZXDH_COMM_UINT32_GET_BITS(clutch_en, sdt_low32, 0, 1); + + switch (tbl_type) { + case ZXDH_SDT_TBLT_ERAM: + { + p_sdt_eram = (ZXDH_SDTTBL_ERAM_T *)p_sdt_info; + p_sdt_eram->table_type = tbl_type; + p_sdt_eram->eram_clutch_en = clutch_en; + break; + } + + case ZXDH_SDT_TBLT_PORTTBL: + { + p_sdt_porttbl = (ZXDH_SDTTBL_PORTTBL_T *)p_sdt_info; + p_sdt_porttbl->table_type = tbl_type; + p_sdt_porttbl->porttbl_clutch_en = clutch_en; + break; + } + default: + { + PMD_DRV_LOG(ERR, "SDT table_type[ %d ] is invalid!", tbl_type); + return 1; + } + } + + return 0; +} + +static uint32_t +zxdh_np_soft_sdt_tbl_get(uint32_t dev_id, uint32_t sdt_no, void *p_sdt_info) +{ + uint32_t rc = 0; + ZXDH_SDT_TBL_DATA_T sdt_tbl = {0}; + + rc = zxdh_np_sdt_tbl_data_get(dev_id, sdt_no, &sdt_tbl); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_sdt_tbl_data_get"); + + rc = zxdh_np_sdt_tbl_data_parser(sdt_tbl.data_high32, sdt_tbl.data_low32, p_sdt_info); + + if (rc != 0) + PMD_DRV_LOG(ERR, "dpp sdt [%d] tbl_data_parser error.", sdt_no); + + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_sdt_tbl_data_parser"); + + return rc; +} + +static uint32_t 
+zxdh_np_eram_index_cal(uint32_t eram_mode, uint32_t index, + uint32_t *p_row_index, uint32_t *p_col_index) +{ + uint32_t rc = 0; + uint32_t row_index = 0; + uint32_t col_index = 0; + + switch (eram_mode) { + case ZXDH_ERAM128_TBL_128b: + { + row_index = index; + break; + } + case ZXDH_ERAM128_TBL_64b: + { + row_index = (index >> 1); + col_index = index & 0x1; + break; + } + case ZXDH_ERAM128_TBL_1b: + { + row_index = (index >> 7); + col_index = index & 0x7F; + break; + } + } + *p_row_index = row_index; + *p_col_index = col_index; + + return rc; +} + +static uint32_t +zxdh_np_dtb_eram_data_get(uint32_t dev_id, uint32_t queue_id, uint32_t sdt_no, + ZXDH_DTB_ERAM_ENTRY_INFO_T *p_dump_eram_entry) +{ + uint32_t rc = 0; + uint32_t rd_mode = 0; + uint32_t row_index = 0; + uint32_t col_index = 0; + uint32_t temp_data[4] = {0}; + uint32_t index = p_dump_eram_entry->index; + uint32_t *p_data = p_dump_eram_entry->p_data; + + ZXDH_SDTTBL_ERAM_T sdt_eram_info = {0}; + + rc = zxdh_np_soft_sdt_tbl_get(queue_id, sdt_no, &sdt_eram_info); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rc, "dpp_soft_sdt_tbl_get"); + rd_mode = sdt_eram_info.eram_mode; + + rc = zxdh_np_eram_index_cal(rd_mode, index, &row_index, &col_index); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dtb_eram_index_cal"); + + switch (rd_mode) { + case ZXDH_ERAM128_TBL_128b: + { + memcpy(p_data, temp_data, (128 / 8)); + break; + } + + case ZXDH_ERAM128_TBL_64b: + { + memcpy(p_data, temp_data + ((1 - col_index) << 1), (64 / 8)); + break; + } + + case ZXDH_ERAM128_TBL_1b: + { + ZXDH_COMM_UINT32_GET_BITS(p_data[0], *(temp_data + + (3 - col_index / 32)), (col_index % 32), 1); + break; + } + } + return rc; +} + +int +zxdh_np_dtb_table_entry_get(uint32_t dev_id, + uint32_t queue_id, + ZXDH_DTB_USER_ENTRY_T *get_entry, + uint32_t srh_mode) +{ + uint32_t rc = 0; + uint32_t sdt_no = 0; + uint32_t tbl_type = 0; + ZXDH_SDT_TBL_DATA_T sdt_tbl = {0}; + + memset(&sdt_tbl, 0x0, sizeof(ZXDH_SDT_TBL_DATA_T)); + sdt_no = get_entry->sdt_no; + rc = zxdh_np_sdt_tbl_data_get(srh_mode, sdt_no, &sdt_tbl); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_sdt_tbl_data_get"); + ZXDH_COMM_UINT32_GET_BITS(tbl_type, sdt_tbl.data_high32, + ZXDH_SDT_H_TBL_TYPE_BT_POS, ZXDH_SDT_H_TBL_TYPE_BT_LEN); + switch (tbl_type) { + case ZXDH_SDT_TBLT_ERAM: + { + rc = zxdh_np_dtb_eram_data_get(dev_id, + queue_id, + sdt_no, + (ZXDH_DTB_ERAM_ENTRY_INFO_T *)get_entry->p_entry_data); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_dtb_eram_data_get"); + break; + } + default: + { + PMD_DRV_LOG(ERR, "SDT table_type[ %d ] is invalid!", tbl_type); + return 1; + } + } + + return 0; +} diff --git a/drivers/net/zxdh/zxdh_np.h b/drivers/net/zxdh/zxdh_np.h index 3cb9580254..3a7a830d7d 100644 --- a/drivers/net/zxdh/zxdh_np.h +++ b/drivers/net/zxdh/zxdh_np.h @@ -514,11 +514,31 @@ typedef struct zxdh_sdt_tbl_data_t { uint32_t data_low32; } ZXDH_SDT_TBL_DATA_T; +typedef struct zxdh_sdt_tbl_etcam_t { + uint32_t table_type; + uint32_t etcam_id; + uint32_t etcam_key_mode; + uint32_t etcam_table_id; + uint32_t no_as_rsp_mode; + uint32_t as_en; + uint32_t as_eram_baddr; + uint32_t as_rsp_mode; + uint32_t etcam_table_depth; + uint32_t etcam_clutch_en; +} ZXDH_SDTTBL_ETCAM_T; + +typedef struct zxdh_sdt_tbl_porttbl_t { + uint32_t table_type; + uint32_t porttbl_clutch_en; +} ZXDH_SDTTBL_PORTTBL_T; + int zxdh_np_host_init(uint32_t dev_id, ZXDH_DEV_INIT_CTRL_T *p_dev_init_ctrl); int zxdh_np_online_uninit(uint32_t dev_id, char *port_name, uint32_t queue_id); int zxdh_np_dtb_table_entry_write(uint32_t dev_id, uint32_t queue_id, uint32_t entrynum, 
ZXDH_DTB_USER_ENTRY_T *down_entries); int zxdh_np_dtb_table_entry_delete(uint32_t dev_id, uint32_t queue_id, uint32_t entrynum, ZXDH_DTB_USER_ENTRY_T *delete_entries); +int zxdh_np_dtb_table_entry_get(uint32_t dev_id, uint32_t queue_id, + ZXDH_DTB_USER_ENTRY_T *get_entry, uint32_t srh_mode); #endif /* ZXDH_NP_H */ diff --git a/drivers/net/zxdh/zxdh_tables.c b/drivers/net/zxdh/zxdh_tables.c index e28823c657..15098e723d 100644 --- a/drivers/net/zxdh/zxdh_tables.c +++ b/drivers/net/zxdh/zxdh_tables.c @@ -26,6 +26,20 @@ int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr) return ret; } +int zxdh_get_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr) +{ + int ret = 0; + + ZXDH_DTB_ERAM_ENTRY_INFO_T entry = {vfid, (uint32_t *)port_attr}; + ZXDH_DTB_USER_ENTRY_T user_entry_get = {ZXDH_SDT_VPORT_ATT_TABLE, &entry}; + + ret = zxdh_np_dtb_table_entry_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, &user_entry_get, 1); + if (ret != 0) + PMD_DRV_LOG(ERR, "get port_attr vfid:%d failed, ret:%d ", vfid, ret); + + return ret; +} + int zxdh_port_attr_init(struct rte_eth_dev *dev) { diff --git a/drivers/net/zxdh/zxdh_tables.h b/drivers/net/zxdh/zxdh_tables.h index 5e9b36faee..7f592beb3c 100644 --- a/drivers/net/zxdh/zxdh_tables.h +++ b/drivers/net/zxdh/zxdh_tables.h @@ -10,6 +10,7 @@ extern struct zxdh_dtb_shared_data g_dtb_data; #define ZXDH_DEVICE_NO 0 +#define ZXDH_PORT_ATTR_IS_UP_FLAG 35 struct zxdh_port_attr_table { #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN @@ -144,6 +145,7 @@ struct zxdh_panel_table { int zxdh_port_attr_init(struct rte_eth_dev *dev); int zxdh_panel_table_init(struct rte_eth_dev *dev); int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); +int zxdh_get_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); int zxdh_port_attr_uninit(struct rte_eth_dev *dev); #endif /* ZXDH_TABLES_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 51575 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
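For context on how the LSC plumbing in this patch is consumed: zxdh_devconf_intr_handler() reads the ISR, refreshes the link via zxdh_dev_link_update(), and then fires rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_LSC, NULL). An application picks that up through the generic ethdev callback API. A minimal application-side sketch, not part of the patch (the names lsc_event_callback/register_lsc are illustrative, and it assumes dev_conf.intr_conf.lsc = 1 was set at configure time):

	#include <stdio.h>
	#include <rte_ethdev.h>

	static int
	lsc_event_callback(uint16_t port_id, enum rte_eth_event_type event,
			void *cb_arg __rte_unused, void *ret_param __rte_unused)
	{
		struct rte_eth_link link;
		char link_str[RTE_ETH_LINK_MAX_STR_LEN];

		if (event != RTE_ETH_EVENT_INTR_LSC)
			return 0;
		/* _nowait variant: never block inside the event callback */
		rte_eth_link_get_nowait(port_id, &link);
		rte_eth_link_to_str(link_str, sizeof(link_str), &link);
		printf("port %u: %s\n", port_id, link_str);
		return 0;
	}

	static int
	register_lsc(uint16_t port_id)
	{
		return rte_eth_dev_callback_register(port_id,
				RTE_ETH_EVENT_INTR_LSC, lsc_event_callback, NULL);
	}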
* [PATCH v1 10/15] net/zxdh: mac set/add/remove ops implementations 2024-12-06 5:57 ` [PATCH v1 00/15] net/zxdh: updated " Junlong Wang ` (8 preceding siblings ...) 2024-12-06 5:57 ` [PATCH v1 09/15] net/zxdh: link info update, set link up/down Junlong Wang @ 2024-12-06 5:57 ` Junlong Wang 2024-12-06 5:57 ` [PATCH v1 11/15] net/zxdh: promiscuous/allmulticast " Junlong Wang ` (4 subsequent siblings) 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-06 5:57 UTC (permalink / raw) To: ferruh.yigit; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 24234 bytes --] provided mac set/add/remove ops. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- doc/guides/nics/features/zxdh.ini | 2 + doc/guides/nics/zxdh.rst | 2 + drivers/net/zxdh/zxdh_common.c | 24 +++ drivers/net/zxdh/zxdh_common.h | 1 + drivers/net/zxdh/zxdh_ethdev.c | 28 ++++ drivers/net/zxdh/zxdh_ethdev.h | 3 + drivers/net/zxdh/zxdh_ethdev_ops.c | 233 +++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev_ops.h | 4 + drivers/net/zxdh/zxdh_msg.h | 11 ++ drivers/net/zxdh/zxdh_np.h | 5 + drivers/net/zxdh/zxdh_tables.c | 196 ++++++++++++++++++++++++ drivers/net/zxdh/zxdh_tables.h | 36 +++++ 12 files changed, 545 insertions(+) diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index f052fde413..d5f3bac917 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -12,3 +12,5 @@ Multiprocess = Y Scattered Rx = Y Link status = Y Link status event = Y +Unicast MAC filter = Y +Multicast MAC filter = Y \ No newline at end of file diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index fdbc3b3923..e0b0776aca 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -24,6 +24,8 @@ Features of the ZXDH PMD are: - Link Auto-negotiation - Link state information - Set Link down or up +- Unicast MAC filter +- Multicast MAC filter Driver compilation and testing diff --git a/drivers/net/zxdh/zxdh_common.c b/drivers/net/zxdh/zxdh_common.c index 4f18c97ed7..75883a8897 100644 --- a/drivers/net/zxdh/zxdh_common.c +++ b/drivers/net/zxdh/zxdh_common.c @@ -256,6 +256,30 @@ zxdh_panelid_get(struct rte_eth_dev *dev, uint8_t *panelid) return ret; } +static int +zxdh_get_res_hash_id(struct zxdh_res_para *in, uint8_t *hash_id) +{ + uint8_t reps = 0; + uint16_t reps_len = 0; + + if (zxdh_get_res_info(in, ZXDH_TBL_FIELD_HASHID, &reps, &reps_len) != ZXDH_BAR_MSG_OK) + return -1; + + *hash_id = reps; + return ZXDH_BAR_MSG_OK; +} + +int32_t +zxdh_hashidx_get(struct rte_eth_dev *dev, uint8_t *hash_idx) +{ + struct zxdh_res_para param; + + zxdh_fill_res_para(dev, ¶m); + int32_t ret = zxdh_get_res_hash_id(¶m, hash_idx); + + return ret; +} + uint32_t zxdh_read_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg) { diff --git a/drivers/net/zxdh/zxdh_common.h b/drivers/net/zxdh/zxdh_common.h index 72c29e1522..826f1fb95d 100644 --- a/drivers/net/zxdh/zxdh_common.h +++ b/drivers/net/zxdh/zxdh_common.h @@ -22,6 +22,7 @@ struct zxdh_res_para { int32_t zxdh_phyport_get(struct rte_eth_dev *dev, uint8_t *phyport); int32_t zxdh_panelid_get(struct rte_eth_dev *dev, uint8_t *pannelid); +int32_t zxdh_hashidx_get(struct rte_eth_dev *dev, uint8_t *hash_idx); uint32_t zxdh_read_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg); void zxdh_write_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg, uint32_t val); void zxdh_release_lock(struct zxdh_hw *hw); diff --git a/drivers/net/zxdh/zxdh_ethdev.c 
b/drivers/net/zxdh/zxdh_ethdev.c index 57ee2f7c55..ad3eb85676 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -981,6 +981,23 @@ zxdh_set_rxtx_funcs(struct rte_eth_dev *eth_dev) return 0; } +static int +zxdh_mac_config(struct rte_eth_dev *eth_dev) +{ + struct zxdh_hw *hw = eth_dev->data->dev_private; + int ret = 0; + + if (hw->is_pf) { + ret = zxdh_set_mac_table(hw->vport.vport, + ð_dev->data->mac_addrs[0], hw->hash_search_index); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to add mac: port 0x%x", hw->vport.vport); + return ret; + } + } + return ret; +} + static int zxdh_dev_start(struct rte_eth_dev *dev) { @@ -1016,6 +1033,9 @@ zxdh_dev_start(struct rte_eth_dev *dev) zxdh_queue_notify(vq); } zxdh_dev_set_link_up(dev); + ret = zxdh_mac_config(hw->eth_dev); + if (ret) + PMD_DRV_LOG(ERR, " mac config failed"); return 0; } @@ -1034,6 +1054,9 @@ static const struct eth_dev_ops zxdh_eth_dev_ops = { .link_update = zxdh_dev_link_update, .dev_set_link_up = zxdh_dev_set_link_up, .dev_set_link_down = zxdh_dev_set_link_down, + .mac_addr_add = zxdh_dev_mac_addr_add, + .mac_addr_remove = zxdh_dev_mac_addr_remove, + .mac_addr_set = zxdh_dev_mac_addr_set, }; static int32_t @@ -1079,6 +1102,11 @@ zxdh_agent_comm(struct rte_eth_dev *eth_dev, struct zxdh_hw *hw) hw->vfid = zxdh_vport_to_vfid(hw->vport); + if (zxdh_hashidx_get(eth_dev, &hw->hash_search_index) != 0) { + PMD_DRV_LOG(ERR, "Failed to get hash idx"); + return -1; + } + if (zxdh_panelid_get(eth_dev, &hw->panel_id) != 0) { PMD_DRV_LOG(ERR, "Failed to get panel_id"); return -1; diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index cf2bc207e9..3306fdfa99 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -78,6 +78,8 @@ struct zxdh_hw { uint16_t port_id; uint16_t vfid; uint16_t queue_num; + uint16_t mc_num; + uint16_t uc_num; uint8_t *isr; uint8_t weak_barriers; @@ -90,6 +92,7 @@ struct zxdh_hw { uint8_t msg_chan_init; uint8_t phyport; uint8_t panel_id; + uint8_t hash_search_index; uint8_t has_tx_offload; uint8_t has_rx_offload; uint8_t admin_status; diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.c b/drivers/net/zxdh/zxdh_ethdev_ops.c index 635868c4c0..d1d232b411 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.c +++ b/drivers/net/zxdh/zxdh_ethdev_ops.c @@ -164,3 +164,236 @@ int zxdh_dev_set_link_down(struct rte_eth_dev *dev) PMD_DRV_LOG(ERR, "Set link down failed"); return ret; } + +int zxdh_dev_mac_addr_set(struct rte_eth_dev *dev, struct rte_ether_addr *addr) +{ + struct zxdh_hw *hw = (struct zxdh_hw *)dev->data->dev_private; + struct rte_ether_addr *old_addr = &dev->data->mac_addrs[0]; + struct zxdh_msg_info msg_info = {0}; + uint16_t ret = 0; + + if (!rte_is_valid_assigned_ether_addr(addr)) { + PMD_DRV_LOG(ERR, "mac address is invalid!"); + return -EINVAL; + } + + if (hw->is_pf) { + ret = zxdh_del_mac_table(hw->vport.vport, old_addr, hw->hash_search_index); + if (ret) { + PMD_DRV_LOG(ERR, "mac_addr_add failed, code:%d", ret); + return -ret; + } + hw->uc_num--; + + ret = zxdh_set_mac_table(hw->vport.vport, addr, hw->hash_search_index); + if (ret) { + PMD_DRV_LOG(ERR, "mac_addr_add failed, code:%d", ret); + return -ret; + } + hw->uc_num++; + } else { + struct zxdh_mac_filter *mac_filter = &msg_info.data.mac_filter_msg; + + mac_filter->filter_flag = ZXDH_MAC_UNFILTER; + mac_filter->mac_flag = true; + rte_memcpy(&mac_filter->mac, old_addr, sizeof(struct rte_ether_addr)); + zxdh_msg_head_build(hw, ZXDH_MAC_DEL, &msg_info); + ret = zxdh_vf_send_msg_to_pf(dev, 
&msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d ", + hw->vport.vport, ZXDH_MAC_DEL); + return ret; + } + hw->uc_num--; + PMD_DRV_LOG(INFO, "Success to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_MAC_DEL); + + mac_filter->filter_flag = ZXDH_MAC_UNFILTER; + rte_memcpy(&mac_filter->mac, addr, sizeof(struct rte_ether_addr)); + zxdh_msg_head_build(hw, ZXDH_MAC_ADD, &msg_info); + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d ", + hw->vport.vport, ZXDH_MAC_ADD); + return ret; + } + hw->uc_num++; + } + rte_ether_addr_copy(addr, (struct rte_ether_addr *)hw->mac_addr); + return ret; +} + +int zxdh_dev_mac_addr_add(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr, + uint32_t index, uint32_t vmdq __rte_unused) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_info msg_info = {0}; + uint16_t i, ret; + + if (index >= ZXDH_MAX_MAC_ADDRS) { + PMD_DRV_LOG(ERR, "Add mac index (%u) is out of range", index); + return -EINVAL; + } + + for (i = 0; (i != ZXDH_MAX_MAC_ADDRS); ++i) { + if (memcmp(&dev->data->mac_addrs[i], mac_addr, sizeof(*mac_addr))) + continue; + + PMD_DRV_LOG(INFO, "MAC address already configured"); + return -EADDRINUSE; + } + + if (hw->is_pf) { + if (rte_is_unicast_ether_addr(mac_addr)) { + if (hw->uc_num < ZXDH_MAX_UC_MAC_ADDRS) { + ret = zxdh_set_mac_table(hw->vport.vport, + mac_addr, hw->hash_search_index); + if (ret) { + PMD_DRV_LOG(ERR, "mac_addr_add failed, code:%d", ret); + return -ret; + } + hw->uc_num++; + } else { + PMD_DRV_LOG(ERR, "MC_MAC is out of range, MAX_MC_MAC:%d", + ZXDH_MAX_MC_MAC_ADDRS); + return -EINVAL; + } + } else { + if (hw->mc_num < ZXDH_MAX_MC_MAC_ADDRS) { + ret = zxdh_set_mac_table(hw->vport.vport, + mac_addr, hw->hash_search_index); + if (ret) { + PMD_DRV_LOG(ERR, "mac_addr_add failed, code:%d", ret); + return -ret; + } + hw->mc_num++; + } else { + PMD_DRV_LOG(ERR, "MC_MAC is out of range, MAX_MC_MAC:%d", + ZXDH_MAX_MC_MAC_ADDRS); + return -EINVAL; + } + } + } else { + struct zxdh_mac_filter *mac_filter = &msg_info.data.mac_filter_msg; + + mac_filter->filter_flag = ZXDH_MAC_FILTER; + rte_memcpy(&mac_filter->mac, mac_addr, sizeof(struct rte_ether_addr)); + zxdh_msg_head_build(hw, ZXDH_MAC_ADD, &msg_info); + if (rte_is_unicast_ether_addr(mac_addr)) { + if (hw->uc_num < ZXDH_MAX_UC_MAC_ADDRS) { + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, + sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_MAC_ADD); + return -ret; + } + hw->uc_num++; + } else { + PMD_DRV_LOG(ERR, "MC_MAC is out of range, MAX_MC_MAC:%d", + ZXDH_MAX_MC_MAC_ADDRS); + return -EINVAL; + } + } else { + if (hw->mc_num < ZXDH_MAX_MC_MAC_ADDRS) { + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, + sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_MAC_ADD); + return -ret; + } + hw->mc_num++; + } else { + PMD_DRV_LOG(ERR, "MC_MAC is out of range, MAX_MC_MAC:%d", + ZXDH_MAX_MC_MAC_ADDRS); + return -EINVAL; + } + } + } + dev->data->mac_addrs[index] = *mac_addr; + return 0; +} +/** + * Fun: + */ +void zxdh_dev_mac_addr_remove(struct rte_eth_dev *dev __rte_unused, uint32_t index __rte_unused) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_info msg_info = {0}; + struct rte_ether_addr *mac_addr = &dev->data->mac_addrs[index]; + uint16_t 
ret = 0; + + if (index >= ZXDH_MAX_MAC_ADDRS) + return; + + if (hw->is_pf) { + if (rte_is_unicast_ether_addr(mac_addr)) { + if (hw->uc_num <= ZXDH_MAX_UC_MAC_ADDRS) { + ret = zxdh_del_mac_table(hw->vport.vport, + mac_addr, hw->hash_search_index); + if (ret) { + PMD_DRV_LOG(ERR, "mac_addr_del failed, code:%d", ret); + return; + } + hw->uc_num--; + } else { + PMD_DRV_LOG(ERR, "MC_MAC is out of range, MAX_MC_MAC:%d", + ZXDH_MAX_MC_MAC_ADDRS); + return; + } + } else { + if (hw->mc_num <= ZXDH_MAX_MC_MAC_ADDRS) { + ret = zxdh_del_mac_table(hw->vport.vport, + mac_addr, hw->hash_search_index); + if (ret) { + PMD_DRV_LOG(ERR, "mac_addr_del failed, code:%d", ret); + return; + } + hw->mc_num--; + } else { + PMD_DRV_LOG(ERR, "MC_MAC is out of range, MAX_MC_MAC:%d", + ZXDH_MAX_MC_MAC_ADDRS); + return; + } + } + } else { + struct zxdh_mac_filter *mac_filter = &msg_info.data.mac_filter_msg; + + mac_filter->filter_flag = ZXDH_MAC_FILTER; + rte_memcpy(&mac_filter->mac, mac_addr, sizeof(struct rte_ether_addr)); + zxdh_msg_head_build(hw, ZXDH_MAC_DEL, &msg_info); + if (rte_is_unicast_ether_addr(mac_addr)) { + if (hw->uc_num <= ZXDH_MAX_UC_MAC_ADDRS) { + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, + sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_MAC_DEL); + return; + } + hw->uc_num--; + } else { + PMD_DRV_LOG(ERR, "MC_MAC is out of range, MAX_MC_MAC:%d", + ZXDH_MAX_MC_MAC_ADDRS); + return; + } + } else { + if (hw->mc_num <= ZXDH_MAX_MC_MAC_ADDRS) { + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, + sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_MAC_DEL); + return; + } + hw->mc_num--; + } else { + PMD_DRV_LOG(ERR, "MC_MAC is out of range, MAX_MC_MAC:%d", + ZXDH_MAX_MC_MAC_ADDRS); + return; + } + } + } + memset(&dev->data->mac_addrs[index], 0, sizeof(struct rte_ether_addr)); +} diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.h b/drivers/net/zxdh/zxdh_ethdev_ops.h index c6d6ca56fd..4630bb70db 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.h +++ b/drivers/net/zxdh/zxdh_ethdev_ops.h @@ -10,5 +10,9 @@ int zxdh_dev_set_link_up(struct rte_eth_dev *dev); int zxdh_dev_set_link_down(struct rte_eth_dev *dev); int32_t zxdh_dev_link_update(struct rte_eth_dev *dev, int32_t wait_to_complete __rte_unused); +int zxdh_dev_mac_addr_add(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr, + uint32_t index, uint32_t vmdq); +int zxdh_dev_mac_addr_set(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr); +void zxdh_dev_mac_addr_remove(struct rte_eth_dev *dev, uint32_t index); #endif /* ZXDH_ETHDEV_OPS_H */ diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index 66c337443b..5b4af7d841 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -45,6 +45,8 @@ #define ZXDH_MSG_HEAD_LEN 8 #define ZXDH_MSG_REQ_BODY_MAX_LEN \ (ZXDH_MSG_PAYLOAD_MAX_LEN - ZXDH_MSG_HEAD_LEN) +#define ZXDH_MAC_FILTER 0xaa +#define ZXDH_MAC_UNFILTER 0xff enum ZXDH_DRIVER_TYPE { ZXDH_MSG_CHAN_END_MPF = 0, @@ -173,6 +175,8 @@ enum zxdh_msg_type { ZXDH_NULL = 0, ZXDH_VF_PORT_INIT = 1, ZXDH_VF_PORT_UNINIT = 2, + ZXDH_MAC_ADD = 3, + ZXDH_MAC_DEL = 4, ZXDH_PORT_ATTRS_SET = 25, @@ -315,6 +319,12 @@ struct zxdh_port_attr_set_msg { uint8_t allmulti_follow; } __rte_packed; +struct zxdh_mac_filter { + uint8_t mac_flag; + uint8_t filter_flag; + struct rte_ether_addr mac; +} __rte_packed; + struct zxdh_agent_msg_head { enum zxdh_agent_msg_type msg_type; uint8_t 
panel_id; @@ -342,6 +352,7 @@ struct zxdh_msg_info { struct zxdh_vf_init_msg vf_init_msg; struct zxdh_port_attr_set_msg port_attr_msg; struct zxdh_link_info_msg link_msg; + struct zxdh_mac_filter mac_filter_msg; } __rte_packed data; } __rte_packed; diff --git a/drivers/net/zxdh/zxdh_np.h b/drivers/net/zxdh/zxdh_np.h index 3a7a830d7d..7295b709ce 100644 --- a/drivers/net/zxdh/zxdh_np.h +++ b/drivers/net/zxdh/zxdh_np.h @@ -509,6 +509,11 @@ typedef struct zxdh_dtb_user_entry_t { void *p_entry_data; } ZXDH_DTB_USER_ENTRY_T; +typedef struct zxdh_dtb_hash_entry_info_t { + uint8_t *p_actu_key; + uint8_t *p_rst; +} ZXDH_DTB_HASH_ENTRY_INFO_T; + typedef struct zxdh_sdt_tbl_data_t { uint32_t data_high32; uint32_t data_low32; diff --git a/drivers/net/zxdh/zxdh_tables.c b/drivers/net/zxdh/zxdh_tables.c index 15098e723d..117f3cf12e 100644 --- a/drivers/net/zxdh/zxdh_tables.c +++ b/drivers/net/zxdh/zxdh_tables.c @@ -11,6 +11,10 @@ #define ZXDH_SDT_VPORT_ATT_TABLE 1 #define ZXDH_SDT_PANEL_ATT_TABLE 2 +#define ZXDH_MAC_HASH_INDEX_BASE 64 +#define ZXDH_MAC_HASH_INDEX(index) (ZXDH_MAC_HASH_INDEX_BASE + (index)) +#define ZXDH_MC_GROUP_NUM 4 + int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr) { int ret = 0; @@ -147,3 +151,195 @@ zxdh_panel_table_init(struct rte_eth_dev *dev) return ret; } + +int +zxdh_set_mac_table(uint16_t vport, struct rte_ether_addr *addr, uint8_t hash_search_idx) +{ + struct zxdh_mac_unicast_table unicast_table = {0}; + struct zxdh_mac_multicast_table multicast_table = {0}; + union zxdh_virport_num vport_num = (union zxdh_virport_num)vport; + uint32_t ret; + uint16_t group_id = 0; + + if (rte_is_unicast_ether_addr(addr)) { + rte_memcpy(unicast_table.key.dmac_addr, addr, sizeof(struct rte_ether_addr)); + unicast_table.entry.hit_flag = 0; + unicast_table.entry.vfid = vport_num.vfid; + + ZXDH_DTB_HASH_ENTRY_INFO_T dtb_hash_entry = { + .p_actu_key = (uint8_t *)&unicast_table.key, + .p_rst = (uint8_t *)&unicast_table.entry + }; + ZXDH_DTB_USER_ENTRY_T entry_get = { + .sdt_no = ZXDH_MAC_HASH_INDEX(hash_search_idx), + .p_entry_data = (void *)&dtb_hash_entry + }; + + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &entry_get); + if (ret) { + PMD_DRV_LOG(ERR, "Insert mac_table failed"); + return -ret; + } + } else { + for (group_id = 0; group_id < 4; group_id++) { + multicast_table.key.vf_group_id = group_id; + rte_memcpy(multicast_table.key.mac_addr, addr, + sizeof(struct rte_ether_addr)); + ZXDH_DTB_HASH_ENTRY_INFO_T dtb_hash_entry = { + .p_actu_key = (uint8_t *)&multicast_table.key, + .p_rst = (uint8_t *)&multicast_table.entry + }; + + ZXDH_DTB_USER_ENTRY_T entry_get = { + .sdt_no = ZXDH_MAC_HASH_INDEX(hash_search_idx), + .p_entry_data = (void *)&dtb_hash_entry + }; + + ret = zxdh_np_dtb_table_entry_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, + &entry_get, 1); + uint8_t index = (vport_num.vfid % 64) / 32; + if (ret == 0) { + if (vport_num.vf_flag) { + if (group_id == vport_num.vfid / 64) + multicast_table.entry.mc_bitmap[index] |= + rte_cpu_to_be_32(UINT32_C(1) << + (31 - (vport_num.vfid % 64) % 32)); + } else { + if (group_id == vport_num.vfid / 64) + multicast_table.entry.mc_pf_enable = + rte_cpu_to_be_32((1 << 30)); + } + } else { + if (vport_num.vf_flag) { + if (group_id == vport_num.vfid / 64) + multicast_table.entry.mc_bitmap[index] |= + rte_cpu_to_be_32(UINT32_C(1) << + (31 - (vport_num.vfid % 64) % 32)); + else + multicast_table.entry.mc_bitmap[index] = false; + } else { + if (group_id == vport_num.vfid / 64) + 
multicast_table.entry.mc_pf_enable = + rte_cpu_to_be_32((1 << 30)); + else + multicast_table.entry.mc_pf_enable = false; + } + } + + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, g_dtb_data.queueid, + 1, &entry_get); + if (ret) { + PMD_DRV_LOG(ERR, "add mac_table failed, code:%d", ret); + return -ret; + } + } + } + return 0; +} + +int +zxdh_del_mac_table(uint16_t vport, struct rte_ether_addr *addr, uint8_t hash_search_idx) +{ + struct zxdh_mac_unicast_table unicast_table = {0}; + struct zxdh_mac_multicast_table multicast_table = {0}; + union zxdh_virport_num vport_num = (union zxdh_virport_num)vport; + uint32_t ret, del_flag = 0; + uint16_t group_id = 0; + + if (rte_is_unicast_ether_addr(addr)) { + rte_memcpy(unicast_table.key.dmac_addr, addr, sizeof(struct rte_ether_addr)); + unicast_table.entry.hit_flag = 0; + unicast_table.entry.vfid = vport_num.vfid; + + ZXDH_DTB_HASH_ENTRY_INFO_T dtb_hash_entry = { + .p_actu_key = (uint8_t *)&unicast_table.key, + .p_rst = (uint8_t *)&unicast_table.entry + }; + + ZXDH_DTB_USER_ENTRY_T entry_get = { + .sdt_no = ZXDH_MAC_HASH_INDEX(hash_search_idx), + .p_entry_data = (void *)&dtb_hash_entry + }; + + ret = zxdh_np_dtb_table_entry_delete(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &entry_get); + if (ret) { + PMD_DRV_LOG(ERR, "delete l2_fwd_hash_table failed, code:%d", ret); + return -ret; + } + } else { + multicast_table.key.vf_group_id = vport_num.vfid / 64; + rte_memcpy(multicast_table.key.mac_addr, addr, sizeof(struct rte_ether_addr)); + + ZXDH_DTB_HASH_ENTRY_INFO_T dtb_hash_entry = { + .p_actu_key = (uint8_t *)&multicast_table.key, + .p_rst = (uint8_t *)&multicast_table.entry + }; + + ZXDH_DTB_USER_ENTRY_T entry_get = { + .sdt_no = ZXDH_MAC_HASH_INDEX(hash_search_idx), + .p_entry_data = (void *)&dtb_hash_entry + }; + uint8_t index = (vport_num.vfid % 64) / 32; + + ret = zxdh_np_dtb_table_entry_get(ZXDH_DEVICE_NO, + g_dtb_data.queueid, &entry_get, 1); + if (vport_num.vf_flag) + multicast_table.entry.mc_bitmap[index] &= + ~(rte_cpu_to_be_32(UINT32_C(1) << + (31 - (vport_num.vfid % 64) % 32))); + else + multicast_table.entry.mc_pf_enable = 0; + + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &entry_get); + if (ret) { + PMD_DRV_LOG(ERR, "mac_addr_add mc_table failed, code:%d", ret); + return -ret; + } + + for (group_id = 0; group_id < ZXDH_MC_GROUP_NUM; group_id++) { + multicast_table.key.vf_group_id = group_id; + rte_memcpy(multicast_table.key.mac_addr, addr, + sizeof(struct rte_ether_addr)); + ZXDH_DTB_HASH_ENTRY_INFO_T dtb_hash_entry = { + .p_actu_key = (uint8_t *)&multicast_table.key, + .p_rst = (uint8_t *)&multicast_table.entry + }; + ZXDH_DTB_USER_ENTRY_T entry_get = { + .sdt_no = ZXDH_MAC_HASH_INDEX(hash_search_idx), + .p_entry_data = (void *)&dtb_hash_entry + }; + + ret = zxdh_np_dtb_table_entry_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, + &entry_get, 1); + if (multicast_table.entry.mc_bitmap[0] == 0 && + multicast_table.entry.mc_bitmap[1] == 0 && + multicast_table.entry.mc_pf_enable == 0) { + if (group_id == (ZXDH_MC_GROUP_NUM - 1)) + del_flag = 1; + } else { + break; + } + } + if (del_flag) { + for (group_id = 0; group_id < ZXDH_MC_GROUP_NUM; group_id++) { + multicast_table.key.vf_group_id = group_id; + rte_memcpy(multicast_table.key.mac_addr, addr, + sizeof(struct rte_ether_addr)); + ZXDH_DTB_HASH_ENTRY_INFO_T dtb_hash_entry = { + .p_actu_key = (uint8_t *)&multicast_table.key, + .p_rst = (uint8_t *)&multicast_table.entry + }; + ZXDH_DTB_USER_ENTRY_T entry_get = { + .sdt_no = ZXDH_MAC_HASH_INDEX(hash_search_idx), + 
.p_entry_data = (void *)&dtb_hash_entry + }; + ret = zxdh_np_dtb_table_entry_delete(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &entry_get); + } + } + } + return 0; +} diff --git a/drivers/net/zxdh/zxdh_tables.h b/drivers/net/zxdh/zxdh_tables.h index 7f592beb3c..a99eb2bec6 100644 --- a/drivers/net/zxdh/zxdh_tables.h +++ b/drivers/net/zxdh/zxdh_tables.h @@ -142,10 +142,46 @@ struct zxdh_panel_table { uint32_t rsv_2; }; /* 16B */ +struct zxdh_mac_unicast_key { + uint16_t rsv; + uint8_t dmac_addr[6]; +}; + +struct zxdh_mac_unicast_entry { + uint8_t rsv1 : 7, + hit_flag : 1; + uint8_t rsv; + uint16_t vfid; +}; + +struct zxdh_mac_unicast_table { + struct zxdh_mac_unicast_key key; + struct zxdh_mac_unicast_entry entry; +}; + +struct zxdh_mac_multicast_key { + uint8_t rsv; + uint8_t vf_group_id; + uint8_t mac_addr[6]; +}; + +struct zxdh_mac_multicast_entry { + uint32_t mc_pf_enable; + uint32_t rsv1; + uint32_t mc_bitmap[2]; +}; + +struct zxdh_mac_multicast_table { + struct zxdh_mac_multicast_key key; + struct zxdh_mac_multicast_entry entry; +}; + int zxdh_port_attr_init(struct rte_eth_dev *dev); int zxdh_panel_table_init(struct rte_eth_dev *dev); int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); int zxdh_get_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); int zxdh_port_attr_uninit(struct rte_eth_dev *dev); +int zxdh_set_mac_table(uint16_t vport, struct rte_ether_addr *addr, uint8_t hash_search_idx); +int zxdh_del_mac_table(uint16_t vport, struct rte_ether_addr *addr, uint8_t hash_search_idx); #endif /* ZXDH_TABLES_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 67842 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
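A note on the bitmap arithmetic that zxdh_set_mac_table() and zxdh_del_mac_table() repeat inline: each multicast hash entry covers a group of 64 VFs, stored as two 32-bit mc_bitmap words with VF bits packed MSB-first. A standalone sketch of the decomposition (the helper name vfid_to_mc_bit is illustrative only, not part of the patch):

	#include <stdint.h>

	/* Map a vfid onto the (group, word, bit) coordinates used by the
	 * multicast table: group selects the vf_group_id entry, word the
	 * mc_bitmap[] element, bit the MSB-first position in that word.
	 */
	static inline void
	vfid_to_mc_bit(uint16_t vfid, uint16_t *group, uint8_t *word, uint8_t *bit)
	{
		*group = vfid / 64;
		*word = (vfid % 64) / 32;
		*bit = 31 - (vfid % 64) % 32;
	}

For example, vfid 70 lands in group 1, word 0, bit 25, so membership is set with mc_bitmap[0] |= rte_cpu_to_be_32(UINT32_C(1) << 25); the rte_cpu_to_be_32() presumably matches the byte order the NP hardware expects when it walks the bitmap.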
* [PATCH v1 11/15] net/zxdh: promiscuous/allmulticast ops implementations 2024-12-06 5:57 ` [PATCH v1 00/15] net/zxdh: updated " Junlong Wang ` (9 preceding siblings ...) 2024-12-06 5:57 ` [PATCH v1 10/15] net/zxdh: mac set/add/remove ops implementations Junlong Wang @ 2024-12-06 5:57 ` Junlong Wang 2024-12-06 5:57 ` [PATCH v1 12/15] net/zxdh: vlan filter, vlan offload " Junlong Wang ` (4 subsequent siblings) 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-06 5:57 UTC (permalink / raw) To: ferruh.yigit; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 18603 bytes --] provided promiscuous/allmulticast ops. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- doc/guides/nics/features/zxdh.ini | 4 +- doc/guides/nics/zxdh.rst | 2 + drivers/net/zxdh/zxdh_ethdev.c | 15 ++ drivers/net/zxdh/zxdh_ethdev.h | 2 + drivers/net/zxdh/zxdh_ethdev_ops.c | 132 +++++++++++++++++ drivers/net/zxdh/zxdh_ethdev_ops.h | 4 + drivers/net/zxdh/zxdh_msg.h | 10 ++ drivers/net/zxdh/zxdh_tables.c | 219 +++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_tables.h | 22 +++ 9 files changed, 409 insertions(+), 1 deletion(-) diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index d5f3bac917..38b715aa7c 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -13,4 +13,6 @@ Scattered Rx = Y Link status = Y Link status event = Y Unicast MAC filter = Y -Multicast MAC filter = Y \ No newline at end of file +Multicast MAC filter = Y +Promiscuous mode = Y +Allmulticast mode = Y \ No newline at end of file diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index e0b0776aca..0399df1302 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -26,6 +26,8 @@ Features of the ZXDH PMD are: - Set Link down or up - Unicast MAC filter - Multicast MAC filter +- Promiscuous mode +- Allmulticast mode Driver compilation and testing diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index ad3eb85676..b2c3b42176 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -861,6 +861,11 @@ zxdh_tables_uninit(struct rte_eth_dev *dev) PMD_DRV_LOG(ERR, "zxdh_port_attr_uninit failed"); return ret; } + ret = zxdh_promisc_table_uninit(dev); + if (ret) { + PMD_DRV_LOG(ERR, "del promisc_table failed, code:%d", ret); + return ret; + } return ret; } @@ -1057,6 +1062,10 @@ static const struct eth_dev_ops zxdh_eth_dev_ops = { .mac_addr_add = zxdh_dev_mac_addr_add, .mac_addr_remove = zxdh_dev_mac_addr_remove, .mac_addr_set = zxdh_dev_mac_addr_set, + .promiscuous_enable = zxdh_dev_promiscuous_enable, + .promiscuous_disable = zxdh_dev_promiscuous_disable, + .allmulticast_enable = zxdh_dev_allmulticast_enable, + .allmulticast_disable = zxdh_dev_allmulticast_disable, }; static int32_t @@ -1310,6 +1319,12 @@ zxdh_tables_init(struct rte_eth_dev *dev) PMD_DRV_LOG(ERR, " panel table init failed"); return ret; } + ret = zxdh_promisc_table_init(dev); + if (ret) { + PMD_DRV_LOG(ERR, "promisc_table_init failed"); + return ret; + } + return ret; } diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index 3306fdfa99..76c5a37dfa 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -96,6 +96,8 @@ struct zxdh_hw { uint8_t has_tx_offload; uint8_t has_rx_offload; uint8_t admin_status; + uint8_t promisc_status; + uint8_t allmulti_status; }; struct zxdh_dtb_shared_data { diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.c
b/drivers/net/zxdh/zxdh_ethdev_ops.c index d1d232b411..c8c54c07f1 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.c +++ b/drivers/net/zxdh/zxdh_ethdev_ops.c @@ -397,3 +397,135 @@ void zxdh_dev_mac_addr_remove(struct rte_eth_dev *dev __rte_unused, uint32_t ind } memset(&dev->data->mac_addrs[index], 0, sizeof(struct rte_ether_addr)); } + +int zxdh_dev_promiscuous_enable(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_info msg_info = {0}; + int16_t ret = 0; + + if (hw->promisc_status == 0) { + if (hw->is_pf) { + ret = zxdh_dev_unicast_table_set(hw, hw->vport.vport, true); + if (hw->allmulti_status == 0) + ret = zxdh_dev_multicast_table_set(hw, hw->vport.vport, true); + + } else { + struct zxdh_port_promisc_msg *promisc_msg = &msg_info.data.port_promisc_msg; + + zxdh_msg_head_build(hw, ZXDH_PORT_PROMISC_SET, &msg_info); + promisc_msg->mode = ZXDH_PROMISC_MODE; + promisc_msg->value = true; + if (hw->allmulti_status == 0) + promisc_msg->mc_follow = true; + + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_PROMISC_MODE); + return ret; + } + } + hw->promisc_status = 1; + } + return ret; +} +/** + * Fun: + */ +int zxdh_dev_promiscuous_disable(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + int16_t ret = 0; + struct zxdh_msg_info msg_info = {0}; + + if (hw->promisc_status == 1) { + if (hw->is_pf) { + ret = zxdh_dev_unicast_table_set(hw, hw->vport.vport, false); + if (hw->allmulti_status == 0) + ret = zxdh_dev_multicast_table_set(hw, hw->vport.vport, false); + + } else { + struct zxdh_port_promisc_msg *promisc_msg = &msg_info.data.port_promisc_msg; + + zxdh_msg_head_build(hw, ZXDH_PORT_PROMISC_SET, &msg_info); + promisc_msg->mode = ZXDH_PROMISC_MODE; + promisc_msg->value = false; + if (hw->allmulti_status == 0) + promisc_msg->mc_follow = true; + + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_PROMISC_MODE); + return ret; + } + } + hw->promisc_status = 0; + } + return ret; +} +/** + * Fun: + */ +int zxdh_dev_allmulticast_enable(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + int16_t ret = 0; + struct zxdh_msg_info msg_info = {0}; + + if (hw->allmulti_status == 0) { + if (hw->is_pf) { + ret = zxdh_dev_multicast_table_set(hw, hw->vport.vport, true); + } else { + struct zxdh_port_promisc_msg *promisc_msg = &msg_info.data.port_promisc_msg; + + zxdh_msg_head_build(hw, ZXDH_PORT_PROMISC_SET, &msg_info); + + promisc_msg->mode = ZXDH_ALLMULTI_MODE; + promisc_msg->value = true; + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_ALLMULTI_MODE); + return ret; + } + } + hw->allmulti_status = 1; + } + return ret; +} + +int zxdh_dev_allmulticast_disable(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + int16_t ret = 0; + struct zxdh_msg_info msg_info = {0}; + + if (hw->allmulti_status == 1) { + if (hw->is_pf) { + if (hw->promisc_status == 1) + goto end; + ret = zxdh_dev_multicast_table_set(hw, hw->vport.vport, false); + } else { + struct zxdh_port_promisc_msg *promisc_msg = &msg_info.data.port_promisc_msg; + + zxdh_msg_head_build(hw, ZXDH_PORT_PROMISC_SET, &msg_info); + if (hw->promisc_status == 1) + goto end; + 
promisc_msg->mode = ZXDH_ALLMULTI_MODE; + promisc_msg->value = false; + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d", + hw->vport.vport, ZXDH_ALLMULTI_MODE); + return ret; + } + } + hw->allmulti_status = 0; + } + return ret; +end: + hw->allmulti_status = 0; + return ret; +} diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.h b/drivers/net/zxdh/zxdh_ethdev_ops.h index 4630bb70db..394ddedc0e 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.h +++ b/drivers/net/zxdh/zxdh_ethdev_ops.h @@ -14,5 +14,9 @@ int zxdh_dev_mac_addr_add(struct rte_eth_dev *dev, struct rte_ether_addr *mac_ad uint32_t index, uint32_t vmdq); int zxdh_dev_mac_addr_set(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr); void zxdh_dev_mac_addr_remove(struct rte_eth_dev *dev, uint32_t index); +int zxdh_dev_promiscuous_enable(struct rte_eth_dev *dev); +int zxdh_dev_promiscuous_disable(struct rte_eth_dev *dev); +int zxdh_dev_allmulticast_enable(struct rte_eth_dev *dev); +int zxdh_dev_allmulticast_disable(struct rte_eth_dev *dev); #endif /* ZXDH_ETHDEV_OPS_H */ diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index 5b4af7d841..002314ef19 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -47,6 +47,8 @@ (ZXDH_MSG_PAYLOAD_MAX_LEN - ZXDH_MSG_HEAD_LEN) #define ZXDH_MAC_FILTER 0xaa #define ZXDH_MAC_UNFILTER 0xff +#define ZXDH_PROMISC_MODE 1 +#define ZXDH_ALLMULTI_MODE 2 enum ZXDH_DRIVER_TYPE { ZXDH_MSG_CHAN_END_MPF = 0, @@ -179,6 +181,7 @@ enum zxdh_msg_type { ZXDH_MAC_DEL = 4, ZXDH_PORT_ATTRS_SET = 25, + ZXDH_PORT_PROMISC_SET = 26, ZXDH_MSG_TYPE_END, } __rte_packed; @@ -325,6 +328,12 @@ struct zxdh_mac_filter { struct rte_ether_addr mac; } __rte_packed; +struct zxdh_port_promisc_msg { + uint8_t mode; + uint8_t value; + uint8_t mc_follow; +} __rte_packed; + struct zxdh_agent_msg_head { enum zxdh_agent_msg_type msg_type; uint8_t panel_id; @@ -353,6 +362,7 @@ struct zxdh_msg_info { struct zxdh_port_attr_set_msg port_attr_msg; struct zxdh_link_info_msg link_msg; struct zxdh_mac_filter mac_filter_msg; + struct zxdh_port_promisc_msg port_promisc_msg; } __rte_packed data; } __rte_packed; diff --git a/drivers/net/zxdh/zxdh_tables.c b/drivers/net/zxdh/zxdh_tables.c index 117f3cf12e..788df41d40 100644 --- a/drivers/net/zxdh/zxdh_tables.c +++ b/drivers/net/zxdh/zxdh_tables.c @@ -10,10 +10,15 @@ #define ZXDH_SDT_VPORT_ATT_TABLE 1 #define ZXDH_SDT_PANEL_ATT_TABLE 2 +#define ZXDH_SDT_BROCAST_ATT_TABLE 6 +#define ZXDH_SDT_UNICAST_ATT_TABLE 10 +#define ZXDH_SDT_MULTICAST_ATT_TABLE 11 #define ZXDH_MAC_HASH_INDEX_BASE 64 #define ZXDH_MAC_HASH_INDEX(index) (ZXDH_MAC_HASH_INDEX_BASE + (index)) #define ZXDH_MC_GROUP_NUM 4 +#define ZXDH_BASE_VFID 1152 +#define ZXDH_TABLE_HIT_FLAG 128 int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr) { @@ -343,3 +348,217 @@ zxdh_del_mac_table(uint16_t vport, struct rte_ether_addr *addr, uint8_t hash_se } return 0; } + +int zxdh_promisc_table_init(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint32_t ret, vf_group_id = 0; + struct zxdh_brocast_table brocast_table = {0}; + struct zxdh_unitcast_table uc_table = {0}; + struct zxdh_multicast_table mc_table = {0}; + + if (!hw->is_pf) + return 0; + + for (; vf_group_id < 4; vf_group_id++) { + brocast_table.flag = rte_be_to_cpu_32(ZXDH_TABLE_HIT_FLAG); + ZXDH_DTB_ERAM_ENTRY_INFO_T eram_brocast_entry = { + .index = ((hw->vfid - ZXDH_BASE_VFID) << 2) + vf_group_id, + 
.p_data = (uint32_t *)&brocast_table + }; + ZXDH_DTB_USER_ENTRY_T entry_brocast = { + .sdt_no = ZXDH_SDT_BROCAST_ATT_TABLE, + .p_entry_data = (void *)&eram_brocast_entry + }; + + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &entry_brocast); + if (ret) { + PMD_DRV_LOG(ERR, "write brocast table failed"); + return ret; + } + + uc_table.uc_flood_pf_enable = rte_be_to_cpu_32(ZXDH_TABLE_HIT_FLAG); + ZXDH_DTB_ERAM_ENTRY_INFO_T eram_uc_entry = { + .index = ((hw->vfid - ZXDH_BASE_VFID) << 2) + vf_group_id, + .p_data = (uint32_t *)&uc_table + }; + ZXDH_DTB_USER_ENTRY_T entry_unicast = { + .sdt_no = ZXDH_SDT_UNICAST_ATT_TABLE, + .p_entry_data = (void *)&eram_uc_entry + }; + + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &entry_unicast); + if (ret) { + PMD_DRV_LOG(ERR, "write unicast table failed"); + return ret; + } + + mc_table.mc_flood_pf_enable = rte_be_to_cpu_32(ZXDH_TABLE_HIT_FLAG); + ZXDH_DTB_ERAM_ENTRY_INFO_T eram_mc_entry = { + .index = ((hw->vfid - ZXDH_BASE_VFID) << 2) + vf_group_id, + .p_data = (uint32_t *)&mc_table + }; + ZXDH_DTB_USER_ENTRY_T entry_multicast = { + .sdt_no = ZXDH_SDT_MULTICAST_ATT_TABLE, + .p_entry_data = (void *)&eram_mc_entry + }; + + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, g_dtb_data.queueid, + 1, &entry_multicast); + if (ret) { + PMD_DRV_LOG(ERR, "write multicast table failed"); + return ret; + } + } + + return ret; +} + +int zxdh_promisc_table_uninit(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint32_t ret, vf_group_id = 0; + struct zxdh_brocast_table brocast_table = {0}; + struct zxdh_unitcast_table uc_table = {0}; + struct zxdh_multicast_table mc_table = {0}; + + if (!hw->is_pf) + return 0; + + for (; vf_group_id < 4; vf_group_id++) { + brocast_table.flag = rte_be_to_cpu_32(ZXDH_TABLE_HIT_FLAG); + ZXDH_DTB_ERAM_ENTRY_INFO_T eram_brocast_entry = { + .index = ((hw->vfid - ZXDH_BASE_VFID) << 2) + vf_group_id, + .p_data = (uint32_t *)&brocast_table + }; + ZXDH_DTB_USER_ENTRY_T entry_brocast = { + .sdt_no = ZXDH_SDT_BROCAST_ATT_TABLE, + .p_entry_data = (void *)&eram_brocast_entry + }; + + ret = zxdh_np_dtb_table_entry_delete(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &entry_brocast); + if (ret) { + PMD_DRV_LOG(ERR, "write brocast table failed"); + return ret; + } + + uc_table.uc_flood_pf_enable = rte_be_to_cpu_32(ZXDH_TABLE_HIT_FLAG); + ZXDH_DTB_ERAM_ENTRY_INFO_T eram_uc_entry = { + .index = ((hw->vfid - ZXDH_BASE_VFID) << 2) + vf_group_id, + .p_data = (uint32_t *)&uc_table + }; + ZXDH_DTB_USER_ENTRY_T entry_unicast = { + .sdt_no = ZXDH_SDT_UNICAST_ATT_TABLE, + .p_entry_data = (void *)&eram_uc_entry + }; + + ret = zxdh_np_dtb_table_entry_delete(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &entry_unicast); + if (ret) { + PMD_DRV_LOG(ERR, "write unicast table failed"); + return ret; + } + + mc_table.mc_flood_pf_enable = rte_be_to_cpu_32(ZXDH_TABLE_HIT_FLAG); + ZXDH_DTB_ERAM_ENTRY_INFO_T eram_mc_entry = { + .index = ((hw->vfid - ZXDH_BASE_VFID) << 2) + vf_group_id, + .p_data = (uint32_t *)&mc_table + }; + ZXDH_DTB_USER_ENTRY_T entry_multicast = { + .sdt_no = ZXDH_SDT_MULTICAST_ATT_TABLE, + .p_entry_data = (void *)&eram_mc_entry + }; + + ret = zxdh_np_dtb_table_entry_delete(ZXDH_DEVICE_NO, g_dtb_data.queueid, + 1, &entry_multicast); + if (ret) { + PMD_DRV_LOG(ERR, "write multicast table failed"); + return ret; + } + } + + return ret; +} + +int zxdh_dev_unicast_table_set(struct zxdh_hw *hw, uint16_t vport, bool enable) +{ + int16_t ret = 0; + struct zxdh_unitcast_table 
uc_table = {0}; + union zxdh_virport_num vport_num = (union zxdh_virport_num)vport; + + ZXDH_DTB_ERAM_ENTRY_INFO_T uc_table_entry = { + .index = ((hw->vfid - ZXDH_BASE_VFID) << 2) + vport_num.vfid / 64, + .p_data = (uint32_t *)&uc_table + }; + ZXDH_DTB_USER_ENTRY_T entry = { + .sdt_no = ZXDH_SDT_UNICAST_ATT_TABLE, + .p_entry_data = (void *)&uc_table_entry + }; + + ret = zxdh_np_dtb_table_entry_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, &entry, 1); + if (ret) { + PMD_DRV_LOG(ERR, "unicast_table_get_failed:%d", hw->vfid); + return -ret; + } + + if (vport_num.vf_flag) { + if (enable) + uc_table.bitmap[(vport_num.vfid % 64) / 32] |= + UINT32_C(1) << (31 - (vport_num.vfid % 64) % 32); + else + uc_table.bitmap[(vport_num.vfid % 64) / 32] &= + ~(UINT32_C(1) << (31 - (vport_num.vfid % 64) % 32)); + } else { + uc_table.uc_flood_pf_enable = rte_be_to_cpu_32(ZXDH_TABLE_HIT_FLAG + (enable << 6)); + } + + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, g_dtb_data.queueid, 1, &entry); + if (ret) { + PMD_DRV_LOG(ERR, "unicast_table_set_failed:%d", hw->vfid); + return -ret; + } + return 0; +} + +int zxdh_dev_multicast_table_set(struct zxdh_hw *hw, uint16_t vport, bool enable) +{ + int16_t ret = 0; + struct zxdh_multicast_table mc_table = {0}; + union zxdh_virport_num vport_num = (union zxdh_virport_num)vport; + + ZXDH_DTB_ERAM_ENTRY_INFO_T mc_table_entry = { + .index = ((hw->vfid - ZXDH_BASE_VFID) << 2) + vport_num.vfid / 64, + .p_data = (uint32_t *)&mc_table + }; + ZXDH_DTB_USER_ENTRY_T entry = { + .sdt_no = ZXDH_SDT_MULTICAST_ATT_TABLE, + .p_entry_data = (void *)&mc_table_entry + }; + + ret = zxdh_np_dtb_table_entry_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, &entry, 1); + if (ret) { + PMD_DRV_LOG(ERR, "allmulti_table_get_failed:%d", hw->vfid); + return -ret; + } + + if (vport_num.vf_flag) { + if (enable) + mc_table.bitmap[(vport_num.vfid % 64) / 32] |= + UINT32_C(1) << (31 - (vport_num.vfid % 64) % 32); + else + mc_table.bitmap[(vport_num.vfid % 64) / 32] &= + ~(UINT32_C(1) << (31 - (vport_num.vfid % 64) % 32)); + + } else { + mc_table.mc_flood_pf_enable = rte_be_to_cpu_32(ZXDH_TABLE_HIT_FLAG + (enable << 6)); + } + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, g_dtb_data.queueid, 1, &entry); + if (ret) { + PMD_DRV_LOG(ERR, "allmulti_table_set_failed:%d", hw->vfid); + return -ret; + } + return 0; +} diff --git a/drivers/net/zxdh/zxdh_tables.h b/drivers/net/zxdh/zxdh_tables.h index a99eb2bec6..f5767eb2ba 100644 --- a/drivers/net/zxdh/zxdh_tables.h +++ b/drivers/net/zxdh/zxdh_tables.h @@ -176,12 +176,34 @@ struct zxdh_mac_multicast_table { struct zxdh_mac_multicast_entry entry; }; +struct zxdh_brocast_table { + uint32_t flag; + uint32_t rsv; + uint32_t bitmap[2]; +}; + +struct zxdh_unitcast_table { + uint32_t uc_flood_pf_enable; + uint32_t rsv; + uint32_t bitmap[2]; +}; + +struct zxdh_multicast_table { + uint32_t mc_flood_pf_enable; + uint32_t rsv; + uint32_t bitmap[2]; +}; + int zxdh_port_attr_init(struct rte_eth_dev *dev); int zxdh_panel_table_init(struct rte_eth_dev *dev); int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); int zxdh_get_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); int zxdh_port_attr_uninit(struct rte_eth_dev *dev); +int zxdh_promisc_table_init(struct rte_eth_dev *dev); +int zxdh_promisc_table_uninit(struct rte_eth_dev *dev); int zxdh_set_mac_table(uint16_t vport, struct rte_ether_addr *addr, uint8_t hash_search_idx); int zxdh_del_mac_table(uint16_t vport, struct rte_ether_addr *addr, uint8_t hash_search_idx); +int 
zxdh_dev_unicast_table_set(struct zxdh_hw *hw, uint16_t vport, bool enable); +int zxdh_dev_multicast_table_set(struct zxdh_hw *hw, uint16_t vport, bool enable); #endif /* ZXDH_TABLES_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 45514 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
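Note on the flood-table layout used by zxdh_dev_unicast_table_set() and zxdh_dev_multicast_table_set() above: each PF owns four ERAM rows (one per group of 64 VFs), and within a row the per-VF enable bits fill the two 32-bit bitmap words MSB-first. The following is a minimal standalone sketch of that indexing math, not part of the patch; the sample vfid values are hypothetical:

  #include <stdint.h>
  #include <stdio.h>

  #define SKETCH_BASE_VFID 1152	/* ZXDH_BASE_VFID in the patch */

  int main(void)
  {
  	uint16_t pf_vfid = SKETCH_BASE_VFID + 3;	/* hypothetical PF vfid */
  	uint16_t vf = 130;				/* hypothetical VF index within the PF */

  	uint32_t row  = ((pf_vfid - SKETCH_BASE_VFID) << 2) + vf / 64;	/* ERAM entry */
  	uint32_t word = (vf % 64) / 32;			/* bitmap[0] or bitmap[1] */
  	uint32_t bit  = 31 - (vf % 64) % 32;		/* MSB-first within the word */

  	printf("ERAM row %u, bitmap[%u], bit %u\n", row, word, bit);
  	return 0;
  }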
* [PATCH v1 12/15] net/zxdh: vlan filter, vlan offload ops implementations 2024-12-06 5:57 ` [PATCH v1 00/15] net/zxdh: updated " Junlong Wang ` (10 preceding siblings ...) 2024-12-06 5:57 ` [PATCH v1 11/15] net/zxdh: promiscuous/allmulticast " Junlong Wang @ 2024-12-06 5:57 ` Junlong Wang 2024-12-06 5:57 ` [PATCH v1 13/15] net/zxdh: rss hash config/update, reta update/get Junlong Wang ` (2 subsequent siblings) 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-06 5:57 UTC (permalink / raw) To: ferruh.yigit; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 20822 bytes --] provided vlan filter, vlan offload ops. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- doc/guides/nics/features/zxdh.ini | 4 +- doc/guides/nics/zxdh.rst | 3 + drivers/net/zxdh/zxdh_ethdev.c | 40 +++++- drivers/net/zxdh/zxdh_ethdev_ops.c | 221 +++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev_ops.h | 2 + drivers/net/zxdh/zxdh_msg.h | 22 +++ drivers/net/zxdh/zxdh_rxtx.c | 18 +++ drivers/net/zxdh/zxdh_tables.c | 94 ++++++++++++ drivers/net/zxdh/zxdh_tables.h | 10 +- 9 files changed, 410 insertions(+), 4 deletions(-) diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index 38b715aa7c..d8d0261726 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -15,4 +15,6 @@ Link status event = Y Unicast MAC filter = Y Multicast MAC filter = Y Promiscuous mode = Y -Allmulticast mode = Y \ No newline at end of file +Allmulticast mode = Y +VLAN filter = Y +VLAN offload = Y \ No newline at end of file diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index 0399df1302..3a7585d123 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -28,6 +28,9 @@ Features of the ZXDH PMD are: - Multicast MAC filter - Promiscuous mode - Multicast mode +- VLAN filter and VLAN offload +- VLAN stripping and inserting +- QINQ stripping and inserting Driver compilation and testing diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index b2c3b42176..3e6cfc1d6b 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -759,6 +759,34 @@ zxdh_alloc_queues(struct rte_eth_dev *dev, uint16_t nr_vq) return 0; } +static int +zxdh_vlan_offload_configure(struct rte_eth_dev *dev) +{ + int ret; + int mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK | RTE_ETH_QINQ_STRIP_MASK; + + ret = zxdh_dev_vlan_offload_set(dev, mask); + if (ret) { + PMD_DRV_LOG(ERR, "vlan offload set error"); + return -1; + } + + return 0; +} + +static int +zxdh_dev_conf_offload(struct rte_eth_dev *dev) +{ + int ret = 0; + + ret = zxdh_vlan_offload_configure(dev); + if (ret) { + PMD_DRV_LOG(ERR, "zxdh_vlan_offload_configure failed"); + return ret; + } + + return 0; +} static int32_t zxdh_dev_configure(struct rte_eth_dev *dev) @@ -816,7 +844,7 @@ zxdh_dev_configure(struct rte_eth_dev *dev) nr_vq = dev->data->nb_rx_queues + dev->data->nb_tx_queues; if (nr_vq == hw->queue_num) - return 0; + goto end; PMD_DRV_LOG(DEBUG, "queue changed need reset "); /* Reset the device although not necessary at startup */ @@ -848,6 +876,8 @@ zxdh_dev_configure(struct rte_eth_dev *dev) zxdh_pci_reinit_complete(hw); +end: + zxdh_dev_conf_offload(dev); return ret; } @@ -1066,6 +1096,8 @@ static const struct eth_dev_ops zxdh_eth_dev_ops = { .promiscuous_disable = zxdh_dev_promiscuous_disable, .allmulticast_enable = zxdh_dev_allmulticast_enable, .allmulticast_disable = zxdh_dev_allmulticast_disable, + 
.vlan_filter_set = zxdh_dev_vlan_filter_set, + .vlan_offload_set = zxdh_dev_vlan_offload_set, }; static int32_t @@ -1325,6 +1357,12 @@ zxdh_tables_init(struct rte_eth_dev *dev) return ret; } + ret = zxdh_vlan_filter_table_init(dev); + if (ret) { + PMD_DRV_LOG(ERR, "vlan filter table init failed"); + return ret; + } + return ret; } diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.c b/drivers/net/zxdh/zxdh_ethdev_ops.c index c8c54c07f1..094770984c 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.c +++ b/drivers/net/zxdh/zxdh_ethdev_ops.c @@ -2,6 +2,8 @@ * Copyright(c) 2024 ZTE Corporation */ +#include <rte_malloc.h> + #include "zxdh_ethdev.h" #include "zxdh_pci.h" #include "zxdh_msg.h" @@ -9,6 +11,8 @@ #include "zxdh_tables.h" #include "zxdh_logs.h" +#define ZXDH_VLAN_FILTER_GROUPS 64 + static int32_t zxdh_config_port_status(struct rte_eth_dev *dev, uint16_t link_status) { struct zxdh_hw *hw = dev->data->dev_private; @@ -529,3 +533,220 @@ int zxdh_dev_allmulticast_disable(struct rte_eth_dev *dev) hw->allmulti_status = 0; return ret; } + +int zxdh_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on) +{ + struct zxdh_hw *hw = (struct zxdh_hw *)dev->data->dev_private; + uint16_t idx = 0; + uint16_t bit_idx = 0; + uint8_t msg_type = 0; + int ret = 0; + + vlan_id &= RTE_VLAN_ID_MASK; + if (vlan_id == 0 || vlan_id == RTE_ETHER_MAX_VLAN_ID) { + PMD_DRV_LOG(ERR, "vlan id (%d) is reserved", vlan_id); + return -EINVAL; + } + + if (dev->data->dev_started == 0) { + PMD_DRV_LOG(ERR, "vlan_filter: device not started"); + return -1; + } + + idx = vlan_id / ZXDH_VLAN_FILTER_GROUPS; + bit_idx = vlan_id % ZXDH_VLAN_FILTER_GROUPS; + + if (on) { + if (dev->data->vlan_filter_conf.ids[idx] & (1ULL << bit_idx)) { + PMD_DRV_LOG(ERR, "vlan:%d has already been added.", vlan_id); + return 0; + } + msg_type = ZXDH_VLAN_FILTER_ADD; + } else { + if (!(dev->data->vlan_filter_conf.ids[idx] & (1ULL << bit_idx))) { + PMD_DRV_LOG(ERR, "vlan:%d has already been deleted.", vlan_id); + return 0; + } + msg_type = ZXDH_VLAN_FILTER_DEL; + } + + if (hw->is_pf) { + ret = zxdh_vlan_filter_table_set(hw->vport.vport, vlan_id, on); + if (ret) { + PMD_DRV_LOG(ERR, "vlan_id:%d table set failed.", vlan_id); + return -1; + } + } else { + struct zxdh_msg_info msg = {0}; + zxdh_msg_head_build(hw, msg_type, &msg); + msg.data.vlan_filter_msg.vlan_id = vlan_id; + ret = zxdh_vf_send_msg_to_pf(dev, &msg, sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d ", + hw->vport.vport, msg_type); + return ret; + } + } + + if (on) + dev->data->vlan_filter_conf.ids[idx] |= (1ULL << bit_idx); + else + dev->data->vlan_filter_conf.ids[idx] &= ~(1ULL << bit_idx); + + return 0; +} + +int zxdh_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct rte_eth_rxmode *rxmode; + struct zxdh_msg_info msg = {0}; + struct zxdh_port_attr_table port_attr = {0}; + int ret = 0; + + rxmode = &dev->data->dev_conf.rxmode; + if (mask & RTE_ETH_VLAN_FILTER_MASK) { + if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) { + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.vlan_filter_enable = true; + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "port %d vlan filter offload set failed", + hw->vport.vfid); + return -EAGAIN; + } + } else { + msg.data.vlan_filter_set_msg.enable = true; + zxdh_msg_head_build(hw, ZXDH_VLAN_FILTER_SET, &msg); + ret = zxdh_vf_send_msg_to_pf(hw->eth_dev, &msg, + 
sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "port %d vlan filter offload set failed", + hw->vport.vfid); + return -EAGAIN; + } + } + } else { + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.vlan_filter_enable = false; + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "port %d vlan filter offload set failed", + hw->vport.vfid); + return -EAGAIN; + } + } else { + msg.data.vlan_filter_set_msg.enable = false; + zxdh_msg_head_build(hw, ZXDH_VLAN_FILTER_SET, &msg); + ret = zxdh_vf_send_msg_to_pf(hw->eth_dev, &msg, + sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "port %d vlan filter offload set failed", + hw->vport.vfid); + return -EAGAIN; + } + } + } + } + + if (mask & RTE_ETH_VLAN_STRIP_MASK) { + if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) { + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.vlan_strip_offload = true; + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "port %d vlan strip offload set failed", + hw->vport.vfid); + return -EAGAIN; + } + } else { + msg.data.vlan_offload_msg.enable = true; + msg.data.vlan_offload_msg.type = ZXDH_VLAN_STRIP_MSG_TYPE; + zxdh_msg_head_build(hw, ZXDH_VLAN_OFFLOAD, &msg); + ret = zxdh_vf_send_msg_to_pf(hw->eth_dev, &msg, + sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "port %d vlan strip offload set failed", + hw->vport.vfid); + return -EAGAIN; + } + } + } else { + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.vlan_strip_offload = false; + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "port %d vlan strip offload set failed", + hw->vport.vfid); + return -EAGAIN; + } + } else { + msg.data.vlan_offload_msg.enable = false; + msg.data.vlan_offload_msg.type = ZXDH_VLAN_STRIP_MSG_TYPE; + zxdh_msg_head_build(hw, ZXDH_VLAN_OFFLOAD, &msg); + ret = zxdh_vf_send_msg_to_pf(hw->eth_dev, &msg, + sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "port %d vlan strip offload set failed", + hw->vport.vfid); + return -EAGAIN; + } + } + } + } + + if (mask & RTE_ETH_QINQ_STRIP_MASK) { + memset(&msg, 0, sizeof(struct zxdh_msg_info)); + if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP) { + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.qinq_strip_offload = true; + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "port %d qinq offload set failed", + hw->vport.vfid); + return -EAGAIN; + } + } else { + msg.data.vlan_offload_msg.enable = true; + msg.data.vlan_offload_msg.type = ZXDH_QINQ_STRIP_MSG_TYPE; + zxdh_msg_head_build(hw, ZXDH_VLAN_OFFLOAD, &msg); + ret = zxdh_vf_send_msg_to_pf(hw->eth_dev, &msg, + sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "port %d qinq offload set failed", + hw->vport.vfid); + return -EAGAIN; + } + } + } else { + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.qinq_strip_offload = false; + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "port %d qinq offload set failed", + hw->vport.vfid); + return -EAGAIN; + } + } else { + msg.data.vlan_offload_msg.enable = false; + msg.data.vlan_offload_msg.type = ZXDH_QINQ_STRIP_MSG_TYPE; + zxdh_msg_head_build(hw, ZXDH_VLAN_OFFLOAD, &msg); + ret = zxdh_vf_send_msg_to_pf(hw->eth_dev, &msg, + sizeof(struct 
zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "port %d qinq offload set failed", + hw->vport.vfid); + return -EAGAIN; + } + } + } + } + + return ret; +} diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.h b/drivers/net/zxdh/zxdh_ethdev_ops.h index 394ddedc0e..058d271ab3 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.h +++ b/drivers/net/zxdh/zxdh_ethdev_ops.h @@ -18,5 +18,7 @@ int zxdh_dev_promiscuous_enable(struct rte_eth_dev *dev); int zxdh_dev_promiscuous_disable(struct rte_eth_dev *dev); int zxdh_dev_allmulticast_enable(struct rte_eth_dev *dev); int zxdh_dev_allmulticast_disable(struct rte_eth_dev *dev); +int zxdh_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on); +int zxdh_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask); #endif /* ZXDH_ETHDEV_OPS_H */ diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index 002314ef19..bed16d31a0 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -49,6 +49,8 @@ #define ZXDH_MAC_UNFILTER 0xff #define ZXDH_PROMISC_MODE 1 #define ZXDH_ALLMULTI_MODE 2 +#define ZXDH_VLAN_STRIP_MSG_TYPE 0 +#define ZXDH_QINQ_STRIP_MSG_TYPE 1 enum ZXDH_DRIVER_TYPE { ZXDH_MSG_CHAN_END_MPF = 0, @@ -179,6 +181,10 @@ enum zxdh_msg_type { ZXDH_VF_PORT_UNINIT = 2, ZXDH_MAC_ADD = 3, ZXDH_MAC_DEL = 4, + ZXDH_VLAN_FILTER_SET = 17, + ZXDH_VLAN_FILTER_ADD = 18, + ZXDH_VLAN_FILTER_DEL = 19, + ZXDH_VLAN_OFFLOAD = 21, ZXDH_PORT_ATTRS_SET = 25, ZXDH_PORT_PROMISC_SET = 26, @@ -343,6 +349,19 @@ struct zxdh_agent_msg_head { uint16_t pcie_id; } __rte_packed; +struct zxdh_vlan_filter { + uint16_t vlan_id; +}; + +struct zxdh_vlan_filter_set { + uint8_t enable; +}; + +struct zxdh_vlan_offload { + uint8_t enable; + uint8_t type; +} __rte_packed; + struct zxdh_msg_head { enum zxdh_msg_type msg_type; uint16_t vport; @@ -363,6 +382,9 @@ struct zxdh_msg_info { struct zxdh_link_info_msg link_msg; struct zxdh_mac_filter mac_filter_msg; struct zxdh_port_promisc_msg port_promisc_msg; + struct zxdh_vlan_filter vlan_filter_msg; + struct zxdh_vlan_filter_set vlan_filter_set_msg; + struct zxdh_vlan_offload vlan_offload_msg; } __rte_packed data; } __rte_packed; diff --git a/drivers/net/zxdh/zxdh_rxtx.c b/drivers/net/zxdh/zxdh_rxtx.c index 07ef708112..be5865ac85 100644 --- a/drivers/net/zxdh/zxdh_rxtx.c +++ b/drivers/net/zxdh/zxdh_rxtx.c @@ -11,6 +11,9 @@ #include "zxdh_pci.h" #include "zxdh_queue.h" +#define ZXDH_SVLAN_TPID 0x88a8 +#define ZXDH_CVLAN_TPID 0x8100 + #define ZXDH_PKT_FORM_CPU 0x20 /* 1-cpu 0-np */ #define ZXDH_NO_IP_FRAGMENT 0x2000 /* ip fragment flag */ #define ZXDH_NO_IPID_UPDATE 0x4000 /* ipid update flag */ @@ -21,6 +24,9 @@ #define ZXDH_PI_L3TYPE_RSV 0xC0 #define ZXDH_PI_L3TYPE_MASK 0xC0 +#define ZXDH_PD_OFFLOAD_SVLAN_INSERT (1 << 14) +#define ZXDH_PD_OFFLOAD_CVLAN_INSERT (1 << 13) + #define ZXDH_PCODE_MASK 0x1F #define ZXDH_PCODE_IP_PKT_TYPE 0x01 #define ZXDH_PCODE_TCP_PKT_TYPE 0x02 @@ -258,6 +264,18 @@ static void zxdh_xmit_fill_net_hdr(struct rte_mbuf *cookie, hdr->pi_hdr.l3_offset = rte_be_to_cpu_16(l3_offset); hdr->pi_hdr.l4_offset = rte_be_to_cpu_16(l3_offset + cookie->l3_len); + if (cookie->ol_flags & RTE_MBUF_F_TX_VLAN) { + ol_flag |= ZXDH_PD_OFFLOAD_CVLAN_INSERT; + hdr->pi_hdr.vlan_id = rte_be_to_cpu_16(cookie->vlan_tci); + hdr->pd_hdr.cvlan_insert = + rte_be_to_cpu_32((ZXDH_CVLAN_TPID << 16) | cookie->vlan_tci); + } + if (cookie->ol_flags & RTE_MBUF_F_TX_QINQ) { + ol_flag |= ZXDH_PD_OFFLOAD_SVLAN_INSERT; + hdr->pd_hdr.svlan_insert = + rte_be_to_cpu_32((ZXDH_SVLAN_TPID << 16) | cookie->vlan_tci_outer); 
+ } + hdr->pd_hdr.ol_flag = rte_be_to_cpu_32(ol_flag); } diff --git a/drivers/net/zxdh/zxdh_tables.c b/drivers/net/zxdh/zxdh_tables.c index 788df41d40..93dc956597 100644 --- a/drivers/net/zxdh/zxdh_tables.c +++ b/drivers/net/zxdh/zxdh_tables.c @@ -10,6 +10,7 @@ #define ZXDH_SDT_VPORT_ATT_TABLE 1 #define ZXDH_SDT_PANEL_ATT_TABLE 2 +#define ZXDH_SDT_VLAN_ATT_TABLE 4 #define ZXDH_SDT_BROCAST_ATT_TABLE 6 #define ZXDH_SDT_UNICAST_ATT_TABLE 10 #define ZXDH_SDT_MULTICAST_ATT_TABLE 11 @@ -19,6 +20,10 @@ #define ZXDH_MC_GROUP_NUM 4 #define ZXDH_BASE_VFID 1152 #define ZXDH_TABLE_HIT_FLAG 128 +#define ZXDH_FIRST_VLAN_GROUP_BITS 23 +#define ZXDH_VLAN_GROUP_BITS 31 +#define ZXDH_VLAN_GROUP_NUM 35 +#define ZXDH_VLAN_FILTER_VLANID_STEP 120 int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr) { @@ -562,3 +567,92 @@ int zxdh_dev_multicast_table_set(struct zxdh_hw *hw, uint16_t vport, bool enable } return 0; } + +int zxdh_vlan_filter_table_init(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_vlan_filter_table vlan_table = {0}; + int16_t ret = 0; + + for (uint8_t vlan_group = 0; vlan_group < ZXDH_VLAN_GROUP_NUM; vlan_group++) { + if (vlan_group == 0) { + vlan_table.vlans[0] |= (1 << ZXDH_FIRST_VLAN_GROUP_BITS); + vlan_table.vlans[0] |= (1 << ZXDH_VLAN_GROUP_BITS); + + } else { + vlan_table.vlans[0] = 0; + } + uint32_t index = (vlan_group << 11) | hw->vport.vfid; + ZXDH_DTB_ERAM_ENTRY_INFO_T entry_data = { + .index = index, + .p_data = (uint32_t *)&vlan_table + }; + ZXDH_DTB_USER_ENTRY_T user_entry = {ZXDH_SDT_VLAN_ATT_TABLE, &entry_data}; + + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &user_entry); + if (ret != 0) { + PMD_DRV_LOG(ERR, + "[vfid:%d], vlan_group:%d, init vlan filter table failed", + hw->vport.vfid, vlan_group); + ret = -1; + } + } + return ret; +} + +int zxdh_vlan_filter_table_set(uint16_t vport, uint16_t vlan_id, uint8_t enable) +{ + struct zxdh_vlan_filter_table vlan_table = {0}; + union zxdh_virport_num vport_num = (union zxdh_virport_num)vport; + int ret = 0; + + memset(&vlan_table, 0, sizeof(struct zxdh_vlan_filter_table)); + int table_num = vlan_id / ZXDH_VLAN_FILTER_VLANID_STEP; + uint32_t index = (table_num << 11) | vport_num.vfid; + uint16_t group = (vlan_id - table_num * ZXDH_VLAN_FILTER_VLANID_STEP) / 8 + 1; + + uint8_t val = sizeof(struct zxdh_vlan_filter_table) / sizeof(uint32_t); + uint8_t vlan_tbl_index = group / + ((sizeof(struct zxdh_vlan_filter_table) / sizeof(uint32_t))); + uint16_t used_group = vlan_tbl_index * val; + + used_group = (used_group == 0 ? 0 : (used_group - 1)); + + ZXDH_DTB_ERAM_ENTRY_INFO_T entry_data = {index, (uint32_t *)&vlan_table}; + ZXDH_DTB_USER_ENTRY_T user_entry_get = {ZXDH_SDT_VLAN_ATT_TABLE, &entry_data}; + + ret = zxdh_np_dtb_table_entry_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, &user_entry_get, 1); + if (ret) { + PMD_DRV_LOG(ERR, "get vlan table failed"); + return -1; + } + uint16_t relative_vlan_id = vlan_id - table_num * ZXDH_VLAN_FILTER_VLANID_STEP; + uint32_t *base_group = &vlan_table.vlans[0]; + + *base_group |= 1 << 31; + base_group = &vlan_table.vlans[vlan_tbl_index]; + uint8_t valid_bits = (vlan_tbl_index == 0 ? 
+ ZXDH_FIRST_VLAN_GROUP_BITS : ZXDH_VLAN_GROUP_BITS) + 1; + + uint8_t shift_left = (valid_bits - (relative_vlan_id - used_group * 8) % valid_bits) - 1; + + if (enable) + *base_group |= 1 << shift_left; + else + *base_group &= ~(1 << shift_left); + + + ZXDH_DTB_USER_ENTRY_T user_entry_write = { + .sdt_no = ZXDH_SDT_VLAN_ATT_TABLE, + .p_entry_data = &entry_data + }; + + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &user_entry_write); + if (ret != 0) { + PMD_DRV_LOG(ERR, "write vlan table failed"); + return -1; + } + return 0; +} diff --git a/drivers/net/zxdh/zxdh_tables.h b/drivers/net/zxdh/zxdh_tables.h index f5767eb2ba..85e95a876f 100644 --- a/drivers/net/zxdh/zxdh_tables.h +++ b/drivers/net/zxdh/zxdh_tables.h @@ -43,7 +43,7 @@ struct zxdh_port_attr_table { uint8_t rdma_offload_enable: 1; uint8_t vlan_filter_enable: 1; uint8_t vlan_strip_offload: 1; - uint8_t qinq_valn_strip_offload: 1; + uint8_t qinq_strip_offload: 1; uint8_t rss_enable: 1; uint8_t mtu_enable: 1; uint8_t hit_flag: 1; @@ -73,7 +73,7 @@ struct zxdh_port_attr_table { uint8_t rdma_offload_enable: 1; uint8_t vlan_filter_enable: 1; uint8_t vlan_strip_offload: 1; - uint8_t qinq_valn_strip_offload: 1; + uint8_t qinq_strip_offload: 1; uint8_t rss_enable: 1; uint8_t mtu_enable: 1; uint8_t hit_flag: 1; @@ -194,6 +194,10 @@ struct zxdh_multicast_table { uint32_t bitmap[2]; }; +struct zxdh_vlan_filter_table { + uint32_t vlans[4]; +}; + int zxdh_port_attr_init(struct rte_eth_dev *dev); int zxdh_panel_table_init(struct rte_eth_dev *dev); int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); @@ -205,5 +209,7 @@ int zxdh_set_mac_table(uint16_t vport, struct rte_ether_addr *addr, uint8_t has int zxdh_del_mac_table(uint16_t vport, struct rte_ether_addr *addr, uint8_t hash_search_idx); int zxdh_dev_unicast_table_set(struct zxdh_hw *hw, uint16_t vport, bool enable); int zxdh_dev_multicast_table_set(struct zxdh_hw *hw, uint16_t vport, bool enable); +int zxdh_vlan_filter_table_init(struct rte_eth_dev *dev); +int zxdh_vlan_filter_table_set(uint16_t vport, uint16_t vlan_id, uint8_t enable); #endif /* ZXDH_TABLES_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 55248 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
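The bit-position arithmetic in zxdh_vlan_filter_table_set() above is dense, so here is a standalone sketch that re-runs it for one VLAN id and prints which ERAM row, which 32-bit word of zxdh_vlan_filter_table, and which bit end up selected. The constants are copied from the patch; the vfid is hypothetical and the sketch is illustrative only, not part of the patch:

  #include <stdint.h>
  #include <stdio.h>

  #define STEP 120	/* ZXDH_VLAN_FILTER_VLANID_STEP in the patch */

  int main(void)
  {
  	uint16_t vfid = 7;	/* hypothetical */
  	uint16_t vlan_id = 1000;

  	int table_num = vlan_id / STEP;
  	uint32_t index = (table_num << 11) | vfid;		/* ERAM row */
  	uint16_t group = (vlan_id - table_num * STEP) / 8 + 1;
  	uint8_t vlan_tbl_index = group / 4;			/* which of vlans[0..3] */
  	uint16_t used_group = vlan_tbl_index * 4;

  	used_group = (used_group == 0 ? 0 : (used_group - 1));

  	uint16_t relative_vlan_id = vlan_id - table_num * STEP;
  	uint8_t valid_bits = (vlan_tbl_index == 0 ? 23 : 31) + 1;
  	uint8_t shift_left =
  		(valid_bits - (relative_vlan_id - used_group * 8) % valid_bits) - 1;

  	printf("row %u, vlans[%u], bit %u\n", index, vlan_tbl_index, shift_left);
  	return 0;
  }

For vlan_id 1000 this prints row 16391 (table 8 of vfid 7), word vlans[1], bit 15, matching the 120-VLANs-per-row split implied by the constants.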
* [PATCH v1 13/15] net/zxdh: rss hash config/update, reta update/get 2024-12-06 5:57 ` [PATCH v1 00/15] net/zxdh: updated " Junlong Wang ` (11 preceding siblings ...) 2024-12-06 5:57 ` [PATCH v1 12/15] net/zxdh: vlan filter, vlan offload " Junlong Wang @ 2024-12-06 5:57 ` Junlong Wang 2024-12-06 5:57 ` [PATCH v1 14/15] net/zxdh: basic stats ops implementations Junlong Wang 2024-12-06 5:57 ` [PATCH v1 15/15] net/zxdh: mtu update " Junlong Wang 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-06 5:57 UTC (permalink / raw) To: ferruh.yigit; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 20871 bytes --] provided rss hash config/update, reta update/get ops. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- doc/guides/nics/features/zxdh.ini | 5 +- doc/guides/nics/zxdh.rst | 1 + drivers/net/zxdh/zxdh_ethdev.c | 43 ++++ drivers/net/zxdh/zxdh_ethdev.h | 1 + drivers/net/zxdh/zxdh_ethdev_ops.c | 316 +++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev_ops.h | 8 + drivers/net/zxdh/zxdh_msg.h | 22 ++ drivers/net/zxdh/zxdh_tables.c | 82 ++++++++ drivers/net/zxdh/zxdh_tables.h | 7 + 9 files changed, 484 insertions(+), 1 deletion(-) diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index d8d0261726..5cae08f611 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -17,4 +17,7 @@ Multicast MAC filter = Y Promiscuous mode = Y Allmulticast mode = Y VLAN filter = Y -VLAN offload = Y \ No newline at end of file +VLAN offload = Y +RSS hash = Y +RSS reta update = Y +Inner RSS = Y diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index 3a7585d123..3cc6a1d348 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -31,6 +31,7 @@ Features of the ZXDH PMD are: - VLAN filter and VLAN offload - VLAN stripping and inserting - QINQ stripping and inserting +- Receive Side Scaling (RSS) Driver compilation and testing diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 3e6cfc1d6b..87502adf74 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -788,6 +788,39 @@ zxdh_dev_conf_offload(struct rte_eth_dev *dev) return 0; } +static int +zxdh_rss_qid_config(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_port_attr_table port_attr = {0}; + struct zxdh_msg_info msg_info = {0}; + int ret = 0; + + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.port_base_qid = hw->channel_context[0].ph_chno & 0xfff; + + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "PF:%d port_base_qid insert failed", hw->vfid); + return ret; + } + } else { + struct zxdh_port_attr_set_msg *attr_msg = &msg_info.data.port_attr_msg; + + zxdh_msg_head_build(hw, ZXDH_PORT_ATTRS_SET, &msg_info); + attr_msg->mode = ZXDH_PORT_BASE_QID_FLAG; + attr_msg->value = hw->channel_context[0].ph_chno & 0xfff; + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d ", + hw->vport.vport, ZXDH_PORT_BASE_QID_FLAG); + return ret; + } + } + return ret; +} + static int32_t zxdh_dev_configure(struct rte_eth_dev *dev) { @@ -874,6 +907,12 @@ zxdh_dev_configure(struct rte_eth_dev *dev) return -1; } + ret = zxdh_rss_qid_config(dev); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to configure base qid!"); + return -1; + } + zxdh_pci_reinit_complete(hw); end: @@ -1098,6 
+1137,10 @@ static const struct eth_dev_ops zxdh_eth_dev_ops = { .allmulticast_disable = zxdh_dev_allmulticast_disable, .vlan_filter_set = zxdh_dev_vlan_filter_set, .vlan_offload_set = zxdh_dev_vlan_offload_set, + .reta_update = zxdh_dev_rss_reta_update, + .reta_query = zxdh_dev_rss_reta_query, + .rss_hash_update = zxdh_rss_hash_update, + .rss_hash_conf_get = zxdh_rss_hash_conf_get, }; static int32_t diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index 76c5a37dfa..f558a1502d 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -80,6 +80,7 @@ struct zxdh_hw { uint16_t queue_num; uint16_t mc_num; uint16_t uc_num; + uint16_t *rss_reta; uint8_t *isr; uint8_t weak_barriers; diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.c b/drivers/net/zxdh/zxdh_ethdev_ops.c index 094770984c..ef31957923 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.c +++ b/drivers/net/zxdh/zxdh_ethdev_ops.c @@ -12,6 +12,29 @@ #include "zxdh_logs.h" #define ZXDH_VLAN_FILTER_GROUPS 64 +#define ZXDH_INVALID_LOGIC_QID 0xFFFFU + +#define ZXDH_ETH_RSS_L2 RTE_ETH_RSS_L2_PAYLOAD +#define ZXDH_ETH_RSS_IP \ + (RTE_ETH_RSS_IPV4 | \ + RTE_ETH_RSS_FRAG_IPV4 | \ + RTE_ETH_RSS_IPV6 | \ + RTE_ETH_RSS_FRAG_IPV6) +#define ZXDH_ETH_RSS_TCP (RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP) +#define ZXDH_ETH_RSS_UDP (RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP) +#define ZXDH_ETH_RSS_SCTP (RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_NONFRAG_IPV6_SCTP) + +#define ZXDH_HF_F5_ETH (ZXDH_ETH_RSS_TCP | ZXDH_ETH_RSS_UDP | ZXDH_ETH_RSS_SCTP) +#define ZXDH_HF_F3_ETH ZXDH_ETH_RSS_IP +#define ZXDH_HF_MAC_VLAN_ETH ZXDH_ETH_RSS_L2 + +/* Supported RSS */ +#define ZXDH_RSS_HF ((ZXDH_HF_MAC_VLAN_ETH | ZXDH_HF_F3_ETH | ZXDH_HF_F5_ETH)) +#define ZXDH_RSS_HF_MASK (~(ZXDH_RSS_HF)) +#define ZXDH_HF_F5 1 +#define ZXDH_HF_F3 2 +#define ZXDH_HF_MAC_VLAN 4 +#define ZXDH_HF_ALL 0 static int32_t zxdh_config_port_status(struct rte_eth_dev *dev, uint16_t link_status) { @@ -750,3 +773,296 @@ + +int +zxdh_dev_rss_reta_update(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_info msg = {0}; + uint16_t old_reta[RTE_ETH_RSS_RETA_SIZE_256]; + uint16_t idx; + uint16_t i; + uint16_t pos; + int ret; + + if (reta_size != RTE_ETH_RSS_RETA_SIZE_256) { + PMD_DRV_LOG(ERR, "reta_size is illegal(%u). reta_size should be 256", reta_size); + return -EINVAL; + } + if (!hw->rss_reta) { + hw->rss_reta = rte_zmalloc(NULL, RTE_ETH_RSS_RETA_SIZE_256 * sizeof(uint16_t), 4); + if (hw->rss_reta == NULL) { + PMD_DRV_LOG(ERR, "Failed to allocate RSS reta"); + return -ENOMEM; + } + } + for (idx = 0, i = 0; (i < reta_size); ++i) { + idx = i / RTE_ETH_RETA_GROUP_SIZE; + pos = i % RTE_ETH_RETA_GROUP_SIZE; + if (((reta_conf[idx].mask >> pos) & 0x1) == 0) + continue; + if (reta_conf[idx].reta[pos] >= dev->data->nb_rx_queues) { + PMD_DRV_LOG(ERR, "reta table value err(%u >= %u)", + reta_conf[idx].reta[pos], dev->data->nb_rx_queues); + return -EINVAL; + } + if (hw->rss_reta[i] != reta_conf[idx].reta[pos]) + break; + } + if (i == reta_size) { + PMD_DRV_LOG(DEBUG, "reta table same with buffered table"); + return 0; + } + memcpy(old_reta, hw->rss_reta, sizeof(old_reta)); + + for (idx = 0, i = 0; i < reta_size; ++i) { + idx = i / RTE_ETH_RETA_GROUP_SIZE; + pos = i % RTE_ETH_RETA_GROUP_SIZE; + if (((reta_conf[idx].mask >> pos) & 0x1) == 0) + 
continue; + hw->rss_reta[i] = reta_conf[idx].reta[pos]; + } + + zxdh_msg_head_build(hw, ZXDH_RSS_RETA_SET, &msg); + for (i = 0; i < reta_size; i++) + msg.data.rss_reta.reta[i] = + (hw->channel_context[hw->rss_reta[i] * 2].ph_chno); + + + if (hw->is_pf) { + ret = zxdh_rss_table_set(hw->vport.vport, &msg.data.rss_reta); + if (ret) { + PMD_DRV_LOG(ERR, "rss reta table set failed"); + return -EINVAL; + } + } else { + ret = zxdh_vf_send_msg_to_pf(dev, &msg, sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "vf rss reta table set failed"); + return -EINVAL; + } + } + return ret; +} + +static uint16_t +zxdh_hw_qid_to_logic_qid(struct rte_eth_dev *dev, uint16_t qid) +{ + struct zxdh_hw *hw = (struct zxdh_hw *)dev->data->dev_private; + uint16_t rx_queues = dev->data->nb_rx_queues; + uint16_t i; + + for (i = 0; i < rx_queues; i++) { + if (qid == hw->channel_context[i * 2].ph_chno) + return i; + } + return ZXDH_INVALID_LOGIC_QID; +} + +int +zxdh_dev_rss_reta_query(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size) +{ + struct zxdh_hw *hw = (struct zxdh_hw *)dev->data->dev_private; + struct zxdh_msg_info msg = {0}; + struct zxdh_msg_reply_info reply_msg = {0}; + uint16_t idx; + uint16_t i; + int ret = 0; + uint16_t qid_logic; + + ret = (!reta_size || reta_size > RTE_ETH_RSS_RETA_SIZE_256); + if (ret) { + PMD_DRV_LOG(ERR, "request reta size(%u) not same with buffered(%u)", + reta_size, RTE_ETH_RSS_RETA_SIZE_256); + return -EINVAL; + } + + /* Fill each entry of the table even if its bit is not set. */ + for (idx = 0, i = 0; (i != reta_size); ++i) { + idx = i / RTE_ETH_RETA_GROUP_SIZE; + reta_conf[idx].reta[i % RTE_ETH_RETA_GROUP_SIZE] = hw->rss_reta[i]; + } + + + + zxdh_msg_head_build(hw, ZXDH_RSS_RETA_GET, &msg); + + if (hw->is_pf) { + ret = zxdh_rss_table_get(hw->vport.vport, &reply_msg.reply_body.rss_reta); + if (ret) { + PMD_DRV_LOG(ERR, "rss reta table set failed"); + return -EINVAL; + } + } else { + ret = zxdh_vf_send_msg_to_pf(dev, &msg, sizeof(struct zxdh_msg_info), + &reply_msg, sizeof(struct zxdh_msg_reply_info)); + if (ret) { + PMD_DRV_LOG(ERR, "vf rss reta table get failed"); + return -EINVAL; + } + } + + struct zxdh_rss_reta *reta_table = &reply_msg.reply_body.rss_reta; + + for (idx = 0, i = 0; i < reta_size; ++i) { + idx = i / RTE_ETH_RETA_GROUP_SIZE; + + qid_logic = zxdh_hw_qid_to_logic_qid(dev, reta_table->reta[i]); + if (qid_logic == ZXDH_INVALID_LOGIC_QID) { + PMD_DRV_LOG(ERR, "rsp phy reta qid (%u) is illegal(%u)", + reta_table->reta[i], qid_logic); + return -EINVAL; + } + reta_conf[idx].reta[i % RTE_ETH_RETA_GROUP_SIZE] = qid_logic; + } + return 0; +} + +static uint32_t +zxdh_rss_hf_to_hw(uint64_t hf) +{ + uint32_t hw_hf = 0; + + if (hf & ZXDH_HF_MAC_VLAN_ETH) + hw_hf |= ZXDH_HF_MAC_VLAN; + if (hf & ZXDH_HF_F3_ETH) + hw_hf |= ZXDH_HF_F3; + if (hf & ZXDH_HF_F5_ETH) + hw_hf |= ZXDH_HF_F5; + + if (hw_hf == (ZXDH_HF_MAC_VLAN | ZXDH_HF_F3 | ZXDH_HF_F5)) + hw_hf = ZXDH_HF_ALL; + return hw_hf; +} + +static uint64_t +zxdh_rss_hf_to_eth(uint32_t hw_hf) +{ + uint64_t hf = 0; + + if (hw_hf == ZXDH_HF_ALL) + return (ZXDH_HF_MAC_VLAN_ETH | ZXDH_HF_F3_ETH | ZXDH_HF_F5_ETH); + + if (hw_hf & ZXDH_HF_MAC_VLAN) + hf |= ZXDH_HF_MAC_VLAN_ETH; + if (hw_hf & ZXDH_HF_F3) + hf |= ZXDH_HF_F3_ETH; + if (hw_hf & ZXDH_HF_F5) + hf |= ZXDH_HF_F5_ETH; + + return hf; +} + +int +zxdh_rss_hash_update(struct rte_eth_dev *dev, + struct rte_eth_rss_conf *rss_conf) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct rte_eth_rss_conf *old_rss_conf = 
&dev->data->dev_conf.rx_adv_conf.rss_conf; + struct zxdh_msg_info msg = {0}; + struct zxdh_port_attr_table port_attr = {0}; + uint32_t hw_hf_new, hw_hf_old; + int need_update_hf = 0; + int ret = 0; + + ret = rss_conf->rss_hf & ZXDH_RSS_HF_MASK; + if (ret) { + PMD_DRV_LOG(ERR, "unsupported hash function(s) (%08lx)", rss_conf->rss_hf); + return -EINVAL; + } + + hw_hf_new = zxdh_rss_hf_to_hw(rss_conf->rss_hf); + hw_hf_old = zxdh_rss_hf_to_hw(old_rss_conf->rss_hf); + + if ((hw_hf_new != hw_hf_old || !!rss_conf->rss_hf)) + need_update_hf = 1; + + if (need_update_hf) { + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.rss_enable = !!rss_conf->rss_hf; + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "rss enable set failed"); + return -EINVAL; + } + } else { + msg.data.rss_enable.enable = !!rss_conf->rss_hf; + zxdh_msg_head_build(hw, ZXDH_RSS_ENABLE, &msg); + ret = zxdh_vf_send_msg_to_pf(dev, &msg, + sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "rss enable set failed"); + return -EINVAL; + } + } + + + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + port_attr.rss_hash_factor = hw_hf_new; + ret = zxdh_set_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "rss hash factor set failed"); + return -EINVAL; + } + } else { + msg.data.rss_hf.rss_hf = hw_hf_new; + zxdh_msg_head_build(hw, ZXDH_RSS_HF_SET, &msg); + ret = zxdh_vf_send_msg_to_pf(dev, &msg, + sizeof(struct zxdh_msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "rss hash factor set failed"); + return -EINVAL; + } + } + old_rss_conf->rss_hf = rss_conf->rss_hf; + } + + return 0; +} + +int +zxdh_rss_hash_conf_get(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf) +{ + struct zxdh_hw *hw = (struct zxdh_hw *)dev->data->dev_private; + struct rte_eth_rss_conf *old_rss_conf = &dev->data->dev_conf.rx_adv_conf.rss_conf; + struct zxdh_msg_info msg = {0}; + struct zxdh_msg_reply_info reply_msg = {0}; + struct zxdh_port_attr_table port_attr = {0}; + int ret; + uint32_t hw_hf; + + if (rss_conf == NULL) { + PMD_DRV_LOG(ERR, "rss conf is NULL"); + return -ENOMEM; + } + + hw_hf = zxdh_rss_hf_to_hw(old_rss_conf->rss_hf); + rss_conf->rss_hf = zxdh_rss_hf_to_eth(hw_hf); + + zxdh_msg_head_build(hw, ZXDH_RSS_HF_GET, &msg); + if (hw->is_pf) { + ret = zxdh_get_port_attr(hw->vport.vfid, &port_attr); + if (ret) { + PMD_DRV_LOG(ERR, "rss hash factor get failed"); + return -EINVAL; + } + reply_msg.reply_body.rss_hf.rss_hf = port_attr.rss_hash_factor; + } else { + zxdh_msg_head_build(hw, ZXDH_RSS_HF_GET, &msg); + ret = zxdh_vf_send_msg_to_pf(dev, &msg, sizeof(struct zxdh_msg_info), + &reply_msg, sizeof(struct zxdh_msg_reply_info)); + if (ret) { + PMD_DRV_LOG(ERR, "rss hash factor get failed"); + return -EINVAL; + } + } + rss_conf->rss_hf = zxdh_rss_hf_to_eth(reply_msg.reply_body.rss_hf.rss_hf); + + return 0; +} diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.h b/drivers/net/zxdh/zxdh_ethdev_ops.h index 058d271ab3..ef89c0d325 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.h +++ b/drivers/net/zxdh/zxdh_ethdev_ops.h @@ -20,5 +20,13 @@ int zxdh_dev_allmulticast_enable(struct rte_eth_dev *dev); int zxdh_dev_allmulticast_disable(struct rte_eth_dev *dev); int zxdh_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on); int zxdh_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask); +int zxdh_dev_rss_reta_update(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size); 
+int zxdh_dev_rss_reta_query(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size); +int zxdh_rss_hash_update(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf); +int zxdh_rss_hash_conf_get(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf); #endif /* ZXDH_ETHDEV_OPS_H */ diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index bed16d31a0..57092abe92 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -181,6 +181,11 @@ enum zxdh_msg_type { ZXDH_VF_PORT_UNINIT = 2, ZXDH_MAC_ADD = 3, ZXDH_MAC_DEL = 4, + ZXDH_RSS_ENABLE = 7, + ZXDH_RSS_RETA_SET = 8, + ZXDH_RSS_RETA_GET = 9, + ZXDH_RSS_HF_SET = 15, + ZXDH_RSS_HF_GET = 16, ZXDH_VLAN_FILTER_SET = 17, ZXDH_VLAN_FILTER_ADD = 18, ZXDH_VLAN_FILTER_DEL = 19, @@ -290,6 +295,14 @@ struct zxdh_link_info_msg { uint32_t speed; } __rte_packed; +struct zxdh_rss_reta { + uint32_t reta[RTE_ETH_RSS_RETA_SIZE_256]; +}; + +struct zxdh_rss_hf { + uint32_t rss_hf; +}; + struct zxdh_msg_reply_head { uint8_t flag; @@ -307,6 +320,8 @@ struct zxdh_msg_reply_body { union { uint8_t reply_data[ZXDH_MSG_REPLY_BODY_MAX_LEN - sizeof(enum zxdh_reps_flag)]; struct zxdh_link_info_msg link_msg; + struct zxdh_rss_hf rss_hf; + struct zxdh_rss_reta rss_reta; } __rte_packed; } __rte_packed; @@ -362,6 +377,10 @@ struct zxdh_vlan_offload { uint8_t type; } __rte_packed; +struct zxdh_rss_enable { + uint8_t enable; +}; + struct zxdh_msg_head { enum zxdh_msg_type msg_type; uint16_t vport; @@ -385,6 +404,9 @@ struct zxdh_msg_info { struct zxdh_vlan_filter vlan_filter_msg; struct zxdh_vlan_filter_set vlan_filter_set_msg; struct zxdh_vlan_offload vlan_offload_msg; + struct zxdh_rss_reta rss_reta; + struct zxdh_rss_enable rss_enable; + struct zxdh_rss_hf rss_hf; } __rte_packed data; } __rte_packed; diff --git a/drivers/net/zxdh/zxdh_tables.c b/drivers/net/zxdh/zxdh_tables.c index 93dc956597..e8e483a02a 100644 --- a/drivers/net/zxdh/zxdh_tables.c +++ b/drivers/net/zxdh/zxdh_tables.c @@ -10,6 +10,7 @@ #define ZXDH_SDT_VPORT_ATT_TABLE 1 #define ZXDH_SDT_PANEL_ATT_TABLE 2 +#define ZXDH_SDT_RSS_ATT_TABLE 3 #define ZXDH_SDT_VLAN_ATT_TABLE 4 #define ZXDH_SDT_BROCAST_ATT_TABLE 6 #define ZXDH_SDT_UNICAST_ATT_TABLE 10 @@ -656,3 +657,84 @@ int zxdh_vlan_filter_table_set(uint16_t vport, uint16_t vlan_id, uint8_t enable) } return 0; } + +int +zxdh_rss_table_set(uint16_t vport, struct zxdh_rss_reta *rss_reta) +{ + struct zxdh_rss_to_vqid_table rss_vqid = {0}; + union zxdh_virport_num vport_num = (union zxdh_virport_num)vport; + int ret = 0; + + for (uint16_t i = 0; i < RTE_ETH_RSS_RETA_SIZE_256 / 8; i++) { + for (uint16_t j = 0; j < 8; j++) { + #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN + if (j % 2 == 0) + rss_vqid.vqm_qid[j + 1] = rss_reta->reta[i * 8 + j]; + else + rss_vqid.vqm_qid[j - 1] = rss_reta->reta[i * 8 + j]; + #else + rss_vqid.vqm_qid[j] = rss_reta->reta[i * 8 + j]; + #endif + } + + #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN + rss_vqid.vqm_qid[1] |= 0x8000; + #else + rss_vqid.vqm_qid[0] |= 0x8000; + #endif + ZXDH_DTB_ERAM_ENTRY_INFO_T entry = { + .index = vport_num.vfid * 32 + i, + .p_data = (uint32_t *)&rss_vqid + }; + ZXDH_DTB_USER_ENTRY_T user_entry_write = { + .sdt_no = ZXDH_SDT_RSS_ATT_TABLE, + .p_entry_data = &entry + }; + ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, + g_dtb_data.queueid, 1, &user_entry_write); + if (ret != 0) { + PMD_DRV_LOG(ERR, "write rss base qid failed vfid:%d", vport_num.vfid); + return ret; + } + } + return 0; +} + +int +zxdh_rss_table_get(uint16_t vport, struct 
zxdh_rss_reta *rss_reta) +{ + struct zxdh_rss_to_vqid_table rss_vqid = {0}; + union zxdh_virport_num vport_num = (union zxdh_virport_num)vport; + int ret = 0; + + for (uint16_t i = 0; i < RTE_ETH_RSS_RETA_SIZE_256 / 8; i++) { + ZXDH_DTB_ERAM_ENTRY_INFO_T entry = {vport_num.vfid * 32 + i, (uint32_t *)&rss_vqid}; + ZXDH_DTB_USER_ENTRY_T user_entry = {ZXDH_SDT_RSS_ATT_TABLE, &entry}; + + ret = zxdh_np_dtb_table_entry_get(ZXDH_DEVICE_NO, + g_dtb_data.queueid, &user_entry, 1); + if (ret != 0) { + PMD_DRV_LOG(ERR, "get rss tbl failed, vfid:%d", vport_num.vfid); + return -1; + } + + #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN + rss_vqid.vqm_qid[1] &= 0x7FFF; + #else + rss_vqid.vqm_qid[0] &= 0x7FFF; + #endif + uint8_t size = sizeof(struct zxdh_rss_to_vqid_table) / sizeof(uint16_t); + + for (int j = 0; j < size; j++) { + #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN + if (j % 2 == 0) + rss_reta->reta[i * 8 + j] = rss_vqid.vqm_qid[j + 1]; + else + rss_reta->reta[i * 8 + j] = rss_vqid.vqm_qid[j - 1]; + #else + rss_reta->reta[i * 8 + j] = rss_vqid.vqm_qid[j]; + #endif + } + } + return 0; +} diff --git a/drivers/net/zxdh/zxdh_tables.h b/drivers/net/zxdh/zxdh_tables.h index 85e95a876f..649ede33e8 100644 --- a/drivers/net/zxdh/zxdh_tables.h +++ b/drivers/net/zxdh/zxdh_tables.h @@ -10,6 +10,7 @@ extern struct zxdh_dtb_shared_data g_dtb_data; #define ZXDH_DEVICE_NO 0 +#define ZXDH_PORT_BASE_QID_FLAG 10 #define ZXDH_PORT_ATTR_IS_UP_FLAG 35 struct zxdh_port_attr_table { @@ -198,6 +199,10 @@ struct zxdh_vlan_filter_table { uint32_t vlans[4]; }; +struct zxdh_rss_to_vqid_table { + uint16_t vqm_qid[8]; +}; + int zxdh_port_attr_init(struct rte_eth_dev *dev); int zxdh_panel_table_init(struct rte_eth_dev *dev); int zxdh_set_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr); @@ -211,5 +216,7 @@ int zxdh_dev_unicast_table_set(struct zxdh_hw *hw, uint16_t vport, bool enable); int zxdh_dev_multicast_table_set(struct zxdh_hw *hw, uint16_t vport, bool enable); int zxdh_vlan_filter_table_init(struct rte_eth_dev *dev); int zxdh_vlan_filter_table_set(uint16_t vport, uint16_t vlan_id, uint8_t enable); +int zxdh_rss_table_set(uint16_t vport, struct zxdh_rss_reta *rss_reta); +int zxdh_rss_table_get(uint16_t vport, struct zxdh_rss_reta *rss_reta); #endif /* ZXDH_TABLES_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 50882 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
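One subtlety in zxdh_rss_table_set()/zxdh_rss_table_get() above is the 16-bit lane swap on little-endian hosts: each 32-bit ERAM word carries two queue ids, and swapping each even/odd pair keeps the ids in reta order from the device's big-endian point of view. Below is a minimal standalone sketch of that swap, not part of the patch; the queue ids are made up:

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
  	uint16_t reta[8] = {10, 11, 12, 13, 14, 15, 16, 17};	/* hypothetical queue ids */
  	uint16_t vqm_qid[8];
  	int j;

  	for (j = 0; j < 8; j++)
  		vqm_qid[j ^ 1] = reta[j];	/* j+1 for even j, j-1 for odd j, as in the patch */

  	vqm_qid[1] |= 0x8000;			/* "valid" flag, little-endian word position */

  	for (j = 0; j < 8; j++)
  		printf("vqm_qid[%d] = 0x%04x\n", j, vqm_qid[j]);
  	return 0;
  }

zxdh_rss_table_get() undoes the same pairing after masking the 0x8000 flag back off, so a reta written on one host byte order reads back identically.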
* [PATCH v1 14/15] net/zxdh: basic stats ops implementations 2024-12-06 5:57 ` [PATCH v1 00/15] net/zxdh: updated " Junlong Wang ` (12 preceding siblings ...) 2024-12-06 5:57 ` [PATCH v1 13/15] net/zxdh: rss hash config/update, reta update/get Junlong Wang @ 2024-12-06 5:57 ` Junlong Wang 2024-12-06 5:57 ` [PATCH v1 15/15] net/zxdh: mtu update " Junlong Wang 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-06 5:57 UTC (permalink / raw) To: ferruh.yigit; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 37282 bytes --] basic stats ops implementations. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- doc/guides/nics/features/zxdh.ini | 2 + doc/guides/nics/zxdh.rst | 1 + drivers/net/zxdh/zxdh_ethdev.c | 2 + drivers/net/zxdh/zxdh_ethdev_ops.c | 353 +++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev_ops.h | 27 +++ drivers/net/zxdh/zxdh_msg.h | 15 ++ drivers/net/zxdh/zxdh_np.c | 349 ++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_np.h | 30 +++ drivers/net/zxdh/zxdh_queue.h | 2 + drivers/net/zxdh/zxdh_rxtx.c | 82 ++++++- drivers/net/zxdh/zxdh_tables.h | 5 + 11 files changed, 866 insertions(+), 2 deletions(-) diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index 5cae08f611..39c2473652 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -21,3 +21,5 @@ VLAN offload = Y RSS hash = Y RSS reta update = Y Inner RSS = Y +Basic stats = Y +Stats per queue = Y \ No newline at end of file diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index 3cc6a1d348..c8a52b587c 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -32,6 +32,7 @@ Features of the ZXDH PMD are: - VLAN stripping and inserting - QINQ stripping and inserting - Receive Side Scaling (RSS) +- Port hardware statistics Driver compilation and testing diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 87502adf74..82f81d1ded 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -1141,6 +1141,8 @@ static const struct eth_dev_ops zxdh_eth_dev_ops = { .reta_query = zxdh_dev_rss_reta_query, .rss_hash_update = zxdh_rss_hash_update, .rss_hash_conf_get = zxdh_rss_hash_conf_get, + .stats_get = zxdh_dev_stats_get, + .stats_reset = zxdh_dev_stats_reset, }; static int32_t diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.c b/drivers/net/zxdh/zxdh_ethdev_ops.c index ef31957923..6156c94f2c 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.c +++ b/drivers/net/zxdh/zxdh_ethdev_ops.c @@ -10,6 +10,8 @@ #include "zxdh_ethdev_ops.h" #include "zxdh_tables.h" #include "zxdh_logs.h" +#include "zxdh_rxtx.h" +#include "zxdh_np.h" #define ZXDH_VLAN_FILTER_GROUPS 64 #define ZXDH_INVALID_LOGIC_QID 0xFFFFU @@ -36,6 +38,108 @@ #define ZXDH_HF_MAC_VLAN 4 #define ZXDH_HF_ALL 0 +struct zxdh_hw_mac_stats { + uint64_t rx_total; + uint64_t rx_pause; + uint64_t rx_unicast; + uint64_t rx_multicast; + uint64_t rx_broadcast; + uint64_t rx_vlan; + uint64_t rx_size_64; + uint64_t rx_size_65_127; + uint64_t rx_size_128_255; + uint64_t rx_size_256_511; + uint64_t rx_size_512_1023; + uint64_t rx_size_1024_1518; + uint64_t rx_size_1519_mru; + uint64_t rx_undersize; + uint64_t rx_oversize; + uint64_t rx_fragment; + uint64_t rx_jabber; + uint64_t rx_control; + uint64_t rx_eee; + + uint64_t tx_total; + uint64_t tx_pause; + uint64_t tx_unicast; + uint64_t tx_multicast; + uint64_t tx_broadcast; + uint64_t tx_vlan; + uint64_t tx_size_64; + uint64_t tx_size_65_127; + uint64_t 
tx_size_128_255; + uint64_t tx_size_256_511; + uint64_t tx_size_512_1023; + uint64_t tx_size_1024_1518; + uint64_t tx_size_1519_mtu; + uint64_t tx_undersize; + uint64_t tx_oversize; + uint64_t tx_fragment; + uint64_t tx_jabber; + uint64_t tx_control; + uint64_t tx_eee; + + uint64_t rx_error; + uint64_t rx_fcs_error; + uint64_t rx_drop; + + uint64_t tx_error; + uint64_t tx_fcs_error; + uint64_t tx_drop; + +} __rte_packed; + +struct zxdh_hw_mac_bytes { + uint64_t rx_total_bytes; + uint64_t rx_good_bytes; + uint64_t tx_total_bytes; + uint64_t tx_good_bytes; +} __rte_packed; + +struct zxdh_np_stats_data { + uint64_t n_pkts_dropped; + uint64_t n_bytes_dropped; +}; + +struct zxdh_xstats_name_off { + char name[RTE_ETH_XSTATS_NAME_SIZE]; + unsigned int offset; +}; + +static const struct zxdh_xstats_name_off zxdh_rxq_stat_strings[] = { + {"good_packets", offsetof(struct zxdh_virtnet_rx, stats.packets)}, + {"good_bytes", offsetof(struct zxdh_virtnet_rx, stats.bytes)}, + {"errors", offsetof(struct zxdh_virtnet_rx, stats.errors)}, + {"multicast_packets", offsetof(struct zxdh_virtnet_rx, stats.multicast)}, + {"broadcast_packets", offsetof(struct zxdh_virtnet_rx, stats.broadcast)}, + {"truncated_err", offsetof(struct zxdh_virtnet_rx, stats.truncated_err)}, + {"undersize_packets", offsetof(struct zxdh_virtnet_rx, stats.size_bins[0])}, + {"size_64_packets", offsetof(struct zxdh_virtnet_rx, stats.size_bins[1])}, + {"size_65_127_packets", offsetof(struct zxdh_virtnet_rx, stats.size_bins[2])}, + {"size_128_255_packets", offsetof(struct zxdh_virtnet_rx, stats.size_bins[3])}, + {"size_256_511_packets", offsetof(struct zxdh_virtnet_rx, stats.size_bins[4])}, + {"size_512_1023_packets", offsetof(struct zxdh_virtnet_rx, stats.size_bins[5])}, + {"size_1024_1518_packets", offsetof(struct zxdh_virtnet_rx, stats.size_bins[6])}, + {"size_1519_max_packets", offsetof(struct zxdh_virtnet_rx, stats.size_bins[7])}, +}; + +static const struct zxdh_xstats_name_off zxdh_txq_stat_strings[] = { + {"good_packets", offsetof(struct zxdh_virtnet_tx, stats.packets)}, + {"good_bytes", offsetof(struct zxdh_virtnet_tx, stats.bytes)}, + {"errors", offsetof(struct zxdh_virtnet_tx, stats.errors)}, + {"multicast_packets", offsetof(struct zxdh_virtnet_tx, stats.multicast)}, + {"broadcast_packets", offsetof(struct zxdh_virtnet_tx, stats.broadcast)}, + {"truncated_err", offsetof(struct zxdh_virtnet_tx, stats.truncated_err)}, + {"undersize_packets", offsetof(struct zxdh_virtnet_tx, stats.size_bins[0])}, + {"size_64_packets", offsetof(struct zxdh_virtnet_tx, stats.size_bins[1])}, + {"size_65_127_packets", offsetof(struct zxdh_virtnet_tx, stats.size_bins[2])}, + {"size_128_255_packets", offsetof(struct zxdh_virtnet_tx, stats.size_bins[3])}, + {"size_256_511_packets", offsetof(struct zxdh_virtnet_tx, stats.size_bins[4])}, + {"size_512_1023_packets", offsetof(struct zxdh_virtnet_tx, stats.size_bins[5])}, + {"size_1024_1518_packets", offsetof(struct zxdh_virtnet_tx, stats.size_bins[6])}, + {"size_1519_max_packets", offsetof(struct zxdh_virtnet_tx, stats.size_bins[7])}, +}; + static int32_t zxdh_config_port_status(struct rte_eth_dev *dev, uint16_t link_status) { struct zxdh_hw *hw = dev->data->dev_private; @@ -1066,3 +1170,252 @@ zxdh_rss_hash_conf_get(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_con return 0; } + +static int32_t +zxdh_hw_vqm_stats_get(struct rte_eth_dev *dev, enum zxdh_agent_msg_type opcode, + struct zxdh_hw_vqm_stats *hw_stats) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_info msg_info = {0}; + 
struct zxdh_msg_reply_info reply_info = {0}; + enum ZXDH_BAR_MODULE_ID module_id; + int ret = 0; + + switch (opcode) { + case ZXDH_VQM_DEV_STATS_GET: + case ZXDH_VQM_QUEUE_STATS_GET: + case ZXDH_VQM_QUEUE_STATS_RESET: + module_id = ZXDH_BAR_MODULE_VQM; + break; + case ZXDH_MAC_STATS_GET: + case ZXDH_MAC_STATS_RESET: + module_id = ZXDH_BAR_MODULE_MAC; + break; + default: + PMD_DRV_LOG(ERR, "invalid opcode %u", opcode); + return -1; + } + + zxdh_agent_msg_build(hw, opcode, &msg_info); + + ret = zxdh_send_msg_to_riscv(dev, &msg_info, sizeof(struct zxdh_msg_info), + &reply_info, sizeof(struct zxdh_msg_reply_info), module_id); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to get hw stats"); + return -EAGAIN; + } + struct zxdh_msg_reply_body *reply_body = &reply_info.reply_body; + + rte_memcpy(hw_stats, &reply_body->vqm_stats, sizeof(struct zxdh_hw_vqm_stats)); + return 0; +} + +static int zxdh_hw_mac_stats_get(struct rte_eth_dev *dev, + struct zxdh_hw_mac_stats *mac_stats, + struct zxdh_hw_mac_bytes *mac_bytes) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint64_t virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_MAC_OFFSET); + uint64_t stats_addr = 0; + uint64_t bytes_addr = 0; + + if (hw->speed <= RTE_ETH_SPEED_NUM_25G) { + stats_addr = virt_addr + ZXDH_MAC_STATS_OFFSET + 352 * (hw->phyport % 4); + bytes_addr = virt_addr + ZXDH_MAC_BYTES_OFFSET + 32 * (hw->phyport % 4); + } else { + stats_addr = virt_addr + ZXDH_MAC_STATS_OFFSET + 352 * 4; + bytes_addr = virt_addr + ZXDH_MAC_BYTES_OFFSET + 32 * 4; + } + + rte_memcpy(mac_stats, (void *)stats_addr, sizeof(struct zxdh_hw_mac_stats)); + rte_memcpy(mac_bytes, (void *)bytes_addr, sizeof(struct zxdh_hw_mac_bytes)); + return 0; +} + +static void zxdh_data_hi_to_lo(uint64_t *data) +{ + uint32_t n_data_hi; + uint32_t n_data_lo; + + n_data_lo = *data >> 32; + n_data_hi = *data; + *data = (uint64_t)(rte_le_to_cpu_32(n_data_hi)) << 32 | + rte_le_to_cpu_32(n_data_lo); +} + +static int zxdh_np_stats_get(struct rte_eth_dev *dev, struct zxdh_hw_np_stats *np_stats) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_np_stats_data stats_data; + uint32_t stats_id = zxdh_vport_to_vfid(hw->vport); + uint32_t idx = 0; + int ret = 0; + + idx = stats_id + ZXDH_BROAD_STATS_EGRESS_BASE; + ret = zxdh_np_dtb_stats_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, + 0, idx, (uint32_t *)&np_stats->np_tx_broadcast); + if (ret) + return ret; + zxdh_data_hi_to_lo(&np_stats->np_tx_broadcast); + + idx = stats_id + ZXDH_BROAD_STATS_INGRESS_BASE; + memset(&stats_data, 0, sizeof(stats_data)); + ret = zxdh_np_dtb_stats_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, + 0, idx, (uint32_t *)&np_stats->np_rx_broadcast); + if (ret) + return ret; + zxdh_data_hi_to_lo(&np_stats->np_rx_broadcast); + + idx = stats_id + ZXDH_MTU_STATS_EGRESS_BASE; + memset(&stats_data, 0, sizeof(stats_data)); + ret = zxdh_np_dtb_stats_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, + 1, idx, (uint32_t *)&stats_data); + if (ret) + return ret; + + np_stats->np_tx_mtu_drop_pkts = stats_data.n_pkts_dropped; + np_stats->np_tx_mtu_drop_bytes = stats_data.n_bytes_dropped; + zxdh_data_hi_to_lo(&np_stats->np_tx_mtu_drop_pkts); + zxdh_data_hi_to_lo(&np_stats->np_tx_mtu_drop_bytes); + + idx = stats_id + ZXDH_MTU_STATS_INGRESS_BASE; + memset(&stats_data, 0, sizeof(stats_data)); + ret = zxdh_np_dtb_stats_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, + 1, idx, (uint32_t *)&stats_data); + if (ret) + return ret; + np_stats->np_rx_mtu_drop_pkts = stats_data.n_pkts_dropped; + np_stats->np_rx_mtu_drop_bytes = 
stats_data.n_bytes_dropped; + zxdh_data_hi_to_lo(&np_stats->np_rx_mtu_drop_pkts); + zxdh_data_hi_to_lo(&np_stats->np_rx_mtu_drop_bytes); + + return 0; +} + +static int +zxdh_hw_np_stats_get(struct rte_eth_dev *dev, struct zxdh_hw_np_stats *np_stats) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_info msg_info = {0}; + struct zxdh_msg_reply_info reply_info = {0}; + int ret = 0; + + if (hw->is_pf) { + ret = zxdh_np_stats_get(dev, np_stats); + if (ret) { + PMD_DRV_LOG(ERR, "get np stats failed"); + return -1; + } + } else { + zxdh_msg_head_build(hw, ZXDH_GET_NP_STATS, &msg_info); + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(struct zxdh_msg_info), + &reply_info, sizeof(struct zxdh_msg_reply_info)); + if (ret) { + PMD_DRV_LOG(ERR, + "Failed to send msg: port 0x%x msg type ZXDH_PORT_METER_STAT_GET", + hw->vport.vport); + return -1; + } + memcpy(np_stats, &reply_info.reply_body.np_stats, sizeof(struct zxdh_hw_np_stats)); + } + return ret; +} + +int +zxdh_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_hw_vqm_stats vqm_stats = {0}; + struct zxdh_hw_np_stats np_stats = {0}; + struct zxdh_hw_mac_stats mac_stats = {0}; + struct zxdh_hw_mac_bytes mac_bytes = {0}; + uint32_t i = 0; + + zxdh_hw_vqm_stats_get(dev, ZXDH_VQM_DEV_STATS_GET, &vqm_stats); + if (hw->is_pf) + zxdh_hw_mac_stats_get(dev, &mac_stats, &mac_bytes); + + zxdh_hw_np_stats_get(dev, &np_stats); + + stats->ipackets = vqm_stats.rx_total; + stats->opackets = vqm_stats.tx_total; + stats->ibytes = vqm_stats.rx_bytes; + stats->obytes = vqm_stats.tx_bytes; + stats->imissed = vqm_stats.rx_drop + mac_stats.rx_drop; + stats->ierrors = vqm_stats.rx_error + mac_stats.rx_error + np_stats.np_rx_mtu_drop_pkts; + stats->oerrors = vqm_stats.tx_error + mac_stats.tx_error + np_stats.np_tx_mtu_drop_pkts; + + stats->rx_nombuf = dev->data->rx_mbuf_alloc_failed; + for (i = 0; (i < dev->data->nb_rx_queues) && (i < RTE_ETHDEV_QUEUE_STAT_CNTRS); i++) { + struct zxdh_virtnet_rx *rxvq = dev->data->rx_queues[i]; + + if (rxvq == NULL) + continue; + stats->q_ipackets[i] = *(uint64_t *)(((char *)rxvq) + + zxdh_rxq_stat_strings[0].offset); + stats->q_ibytes[i] = *(uint64_t *)(((char *)rxvq) + + zxdh_rxq_stat_strings[1].offset); + stats->q_errors[i] = *(uint64_t *)(((char *)rxvq) + + zxdh_rxq_stat_strings[2].offset); + stats->q_errors[i] += *(uint64_t *)(((char *)rxvq) + + zxdh_rxq_stat_strings[5].offset); + } + + for (i = 0; (i < dev->data->nb_tx_queues) && (i < RTE_ETHDEV_QUEUE_STAT_CNTRS); i++) { + struct zxdh_virtnet_tx *txvq = dev->data->tx_queues[i]; + + if (txvq == NULL) + continue; + stats->q_opackets[i] = *(uint64_t *)(((char *)txvq) + + zxdh_txq_stat_strings[0].offset); + stats->q_obytes[i] = *(uint64_t *)(((char *)txvq) + + zxdh_txq_stat_strings[1].offset); + stats->q_errors[i] += *(uint64_t *)(((char *)txvq) + + zxdh_txq_stat_strings[2].offset); + stats->q_errors[i] += *(uint64_t *)(((char *)txvq) + + zxdh_txq_stat_strings[5].offset); + } + return 0; +} + +static int zxdh_hw_stats_reset(struct rte_eth_dev *dev, enum zxdh_agent_msg_type opcode) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msg_info msg_info = {0}; + struct zxdh_msg_reply_info reply_info = {0}; + enum ZXDH_BAR_MODULE_ID module_id; + int ret = 0; + + switch (opcode) { + case ZXDH_VQM_DEV_STATS_RESET: + module_id = ZXDH_BAR_MODULE_VQM; + break; + case ZXDH_MAC_STATS_RESET: + module_id = ZXDH_BAR_MODULE_MAC; + break; + default: + PMD_DRV_LOG(ERR, "invalid opcode %u", 
opcode); + return -1; + } + + zxdh_agent_msg_build(hw, opcode, &msg_info); + + ret = zxdh_send_msg_to_riscv(dev, &msg_info, sizeof(struct zxdh_msg_info), + &reply_info, sizeof(struct zxdh_msg_reply_info), module_id); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to reset hw stats"); + return -EAGAIN; + } + return 0; +} + +int zxdh_dev_stats_reset(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + zxdh_hw_stats_reset(dev, ZXDH_VQM_DEV_STATS_RESET); + if (hw->is_pf) + zxdh_hw_stats_reset(dev, ZXDH_MAC_STATS_RESET); + + return 0; +} diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.h b/drivers/net/zxdh/zxdh_ethdev_ops.h index ef89c0d325..dad84934fc 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.h +++ b/drivers/net/zxdh/zxdh_ethdev_ops.h @@ -5,8 +5,33 @@ #ifndef ZXDH_ETHDEV_OPS_H #define ZXDH_ETHDEV_OPS_H +#include <stdint.h> + #include "zxdh_ethdev.h" +struct zxdh_hw_vqm_stats { + uint64_t rx_total; + uint64_t tx_total; + uint64_t rx_bytes; + uint64_t tx_bytes; + uint64_t rx_error; + uint64_t tx_error; + uint64_t rx_drop; +} __rte_packed; + +struct zxdh_hw_np_stats { + uint64_t np_rx_broadcast; + uint64_t np_tx_broadcast; + uint64_t np_rx_mtu_drop_pkts; + uint64_t np_tx_mtu_drop_pkts; + uint64_t np_rx_mtu_drop_bytes; + uint64_t np_tx_mtu_drop_bytes; + uint64_t np_rx_mtr_drop_pkts; + uint64_t np_tx_mtr_drop_pkts; + uint64_t np_rx_mtr_drop_bytes; + uint64_t np_tx_mtr_drop_bytes; +}; + int zxdh_dev_set_link_up(struct rte_eth_dev *dev); int zxdh_dev_set_link_down(struct rte_eth_dev *dev); int32_t zxdh_dev_link_update(struct rte_eth_dev *dev, int32_t wait_to_complete __rte_unused); @@ -28,5 +53,7 @@ int zxdh_dev_rss_reta_query(struct rte_eth_dev *dev, uint16_t reta_size); int zxdh_rss_hash_update(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf); int zxdh_rss_hash_conf_get(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf); +int zxdh_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats); +int zxdh_dev_stats_reset(struct rte_eth_dev *dev); #endif /* ZXDH_ETHDEV_OPS_H */ diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index 57092abe92..5530fc70a7 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -9,8 +9,13 @@ #include <ethdev_driver.h> +#include "zxdh_ethdev_ops.h" + #define ZXDH_BAR0_INDEX 0 #define ZXDH_CTRLCH_OFFSET (0x2000) +#define ZXDH_MAC_OFFSET (0x24000) +#define ZXDH_MAC_STATS_OFFSET (0x1408) +#define ZXDH_MAC_BYTES_OFFSET (0xb000) #define ZXDH_MSG_CHAN_PFVFSHARE_OFFSET (ZXDH_CTRLCH_OFFSET + 0x1000) #define ZXDH_MSIX_INTR_MSG_VEC_BASE 1 @@ -172,7 +177,13 @@ enum pciebar_layout_type { /* riscv msg opcodes */ enum zxdh_agent_msg_type { + ZXDH_MAC_STATS_GET = 10, + ZXDH_MAC_STATS_RESET, ZXDH_MAC_LINK_GET = 14, + ZXDH_VQM_DEV_STATS_GET = 21, + ZXDH_VQM_DEV_STATS_RESET, + ZXDH_VQM_QUEUE_STATS_GET = 24, + ZXDH_VQM_QUEUE_STATS_RESET, } __rte_packed; enum zxdh_msg_type { @@ -194,6 +205,8 @@ enum zxdh_msg_type { ZXDH_PORT_ATTRS_SET = 25, ZXDH_PORT_PROMISC_SET = 26, + ZXDH_GET_NP_STATS = 31, + ZXDH_MSG_TYPE_END, } __rte_packed; @@ -322,6 +335,8 @@ struct zxdh_msg_reply_body { struct zxdh_link_info_msg link_msg; struct zxdh_rss_hf rss_hf; struct zxdh_rss_reta rss_reta; + struct zxdh_hw_vqm_stats vqm_stats; + struct zxdh_hw_np_stats np_stats; } __rte_packed; } __rte_packed; diff --git a/drivers/net/zxdh/zxdh_np.c b/drivers/net/zxdh/zxdh_np.c index 2a4d38b846..34b7732105 100644 --- a/drivers/net/zxdh/zxdh_np.c +++ b/drivers/net/zxdh/zxdh_np.c @@ -25,6 +25,7 @@ ZXDH_TLB_MGR_T 
*g_p_dpp_tlb_mgr[ZXDH_DEV_CHANNEL_MAX] = {NULL}; ZXDH_SDT_TBL_DATA_T g_sdt_info[ZXDH_DEV_CHANNEL_MAX][ZXDH_DEV_SDT_ID_MAX]; ZXDH_REG_T g_dpp_reg_info[4] = {0}; ZXDH_DTB_TABLE_T g_dpp_dtb_table_info[4] = {0}; +ZXDH_PPU_STAT_CFG_T g_ppu_stat_cfg = {0}; #define ZXDH_COMM_ASSERT(x) assert(x) #define ZXDH_SDT_MGR_PTR_GET() (&g_sdt_mgr) @@ -46,6 +47,18 @@ ZXDH_DTB_TABLE_T g_dpp_dtb_table_info[4] = {0}; #define ZXDH_COMM_CONVERT32(dw_data) \ (((dw_data) & 0xff) << 24) +#define ZXDH_DTB_TAB_UP_WR_INDEX_GET(DEV_ID, QUEUE_ID) \ + (p_dpp_dtb_mgr[(DEV_ID)]->queue_info[(QUEUE_ID)].tab_up.wr_index) + +#define ZXDH_DTB_TAB_UP_USER_PHY_ADDR_FLAG_GET(DEV_ID, QUEUE_ID, INDEX) \ + (p_dpp_dtb_mgr[(DEV_ID)]->queue_info[(QUEUE_ID)].tab_up.user_addr[(INDEX)].user_flag) + +#define ZXDH_DTB_TAB_UP_USER_PHY_ADDR_GET(DEV_ID, QUEUE_ID, INDEX) \ + (p_dpp_dtb_mgr[(DEV_ID)]->queue_info[(QUEUE_ID)].tab_up.user_addr[(INDEX)].phy_addr) + +#define ZXDH_DTB_TAB_UP_DATA_LEN_GET(DEV_ID, QUEUE_ID, INDEX) \ + (p_dpp_dtb_mgr[(DEV_ID)]->queue_info[(QUEUE_ID)].tab_up.data_len[(INDEX)]) + #define ZXDH_REG_DATA_MAX (128) #define ZXDH_COMM_CHECK_DEV_POINT(dev_id, point)\ @@ -1793,3 +1806,339 @@ zxdh_np_dtb_table_entry_get(uint32_t dev_id, return 0; } + +static uint32_t +zxdh_np_stat_cfg_soft_get(uint32_t dev_id, + ZXDH_PPU_STAT_CFG_T *p_stat_cfg) +{ + ZXDH_COMM_CHECK_DEV_POINT(dev_id, p_stat_cfg); + + p_stat_cfg->ddr_base_addr = g_ppu_stat_cfg.ddr_base_addr; + p_stat_cfg->eram_baddr = g_ppu_stat_cfg.eram_baddr; + p_stat_cfg->eram_depth = g_ppu_stat_cfg.eram_depth; + p_stat_cfg->ppu_addr_offset = g_ppu_stat_cfg.ppu_addr_offset; + + return 0; +} + +static uint32_t +zxdh_np_dtb_tab_up_info_set(uint32_t dev_id, + uint32_t queue_id, + uint32_t item_index, + uint32_t int_flag, + uint32_t data_len, + uint32_t desc_len, + uint32_t *p_desc_data) +{ + uint32_t queue_en = 0; + ZXDH_DTB_QUEUE_ITEM_INFO_T item_info = {0}; + + zxdh_np_dtb_queue_enable_get(dev_id, queue_id, &queue_en); + if (!queue_en) { + PMD_DRV_LOG(ERR, "the queue %d is not enabled!", queue_id); + return ZXDH_RC_DTB_QUEUE_NOT_ENABLE; + } + + if (ZXDH_DTB_QUEUE_INIT_FLAG_GET(dev_id, queue_id) == 0) { + PMD_DRV_LOG(ERR, "dtb queue %d is not initialized.", queue_id); + return ZXDH_RC_DTB_QUEUE_IS_NOT_INIT; + } + + if (desc_len % 4 != 0) + return ZXDH_RC_DTB_PARA_INVALID; + + zxdh_np_dtb_item_buff_wr(dev_id, queue_id, ZXDH_DTB_DIR_UP_TYPE, + item_index, 0, desc_len, p_desc_data); + + ZXDH_DTB_TAB_UP_DATA_LEN_GET(dev_id, queue_id, item_index) = data_len; + + item_info.cmd_vld = 1; + item_info.cmd_type = ZXDH_DTB_DIR_UP_TYPE; + item_info.int_en = int_flag; + item_info.data_len = desc_len / 4; + + if (zxdh_np_dev_get_dev_type(dev_id) == ZXDH_DEV_TYPE_SIM) + return 0; + + zxdh_np_dtb_queue_item_info_set(dev_id, queue_id, &item_info); + + return 0; +} + +static uint32_t +zxdh_np_dtb_write_dump_desc_info(uint32_t dev_id, + uint32_t queue_id, + uint32_t queue_element_id, + uint32_t *p_dump_info, + uint32_t data_len, + uint32_t desc_len, + uint32_t *p_dump_data) +{ + uint32_t rc = 0; + uint32_t dtb_interrupt_status = 0; + + ZXDH_COMM_CHECK_POINT(p_dump_data); + rc = zxdh_np_dtb_tab_up_info_set(dev_id, + queue_id, + queue_element_id, + dtb_interrupt_status, + data_len, + desc_len, + p_dump_info); + if (rc != 0) { + PMD_DRV_LOG(ERR, "the queue %d element id %d dump" + " info set failed!", queue_id, queue_element_id); + zxdh_np_dtb_item_ack_wr(dev_id, queue_id, ZXDH_DTB_DIR_UP_TYPE, + queue_element_id, 0, ZXDH_DTB_TAB_ACK_UNUSED_MASK); + } + + return rc; +} + +static uint32_t 
+zxdh_np_dtb_tab_up_free_item_get(uint32_t dev_id, + uint32_t queue_id, + uint32_t *p_item_index) +{ + uint32_t i = 0; + uint32_t ack_value = 0; + uint32_t item_index = 0; + uint32_t unused_item_num = 0; + + if (ZXDH_DTB_QUEUE_INIT_FLAG_GET(dev_id, queue_id) == 0) { + PMD_DRV_LOG(ERR, "dtb queue %d is not initialized.", queue_id); + return ZXDH_RC_DTB_QUEUE_IS_NOT_INIT; + } + + zxdh_np_dtb_queue_unused_item_num_get(dev_id, queue_id, &unused_item_num); + + if (unused_item_num == 0) + return ZXDH_RC_DTB_QUEUE_ITEM_HW_EMPTY; + + for (i = 0; i < ZXDH_DTB_QUEUE_ITEM_NUM_MAX; i++) { + item_index = ZXDH_DTB_TAB_UP_WR_INDEX_GET(dev_id, queue_id) % + ZXDH_DTB_QUEUE_ITEM_NUM_MAX; + + zxdh_np_dtb_item_ack_rd(dev_id, queue_id, ZXDH_DTB_DIR_UP_TYPE, item_index, + 0, &ack_value); + + ZXDH_DTB_TAB_UP_WR_INDEX_GET(dev_id, queue_id)++; + + if ((ack_value >> 8) == ZXDH_DTB_TAB_ACK_UNUSED_MASK) + break; + } + + if (i == ZXDH_DTB_QUEUE_ITEM_NUM_MAX) + return ZXDH_RC_DTB_QUEUE_ITEM_SW_EMPTY; + + zxdh_np_dtb_item_ack_wr(dev_id, queue_id, ZXDH_DTB_DIR_UP_TYPE, item_index, + 0, ZXDH_DTB_TAB_ACK_IS_USING_MASK); + + *p_item_index = item_index; + + + return 0; +} + +static uint32_t +zxdh_np_dtb_tab_up_item_addr_get(uint32_t dev_id, + uint32_t queue_id, + uint32_t item_index, + uint32_t *p_phy_haddr, + uint32_t *p_phy_laddr) +{ + uint64_t addr = 0; + + if (ZXDH_DTB_QUEUE_INIT_FLAG_GET(dev_id, queue_id) == 0) { + PMD_DRV_LOG(ERR, "dtb queue %d is not initialized.", queue_id); + return ZXDH_RC_DTB_QUEUE_IS_NOT_INIT; + } + + if (ZXDH_DTB_TAB_UP_USER_PHY_ADDR_FLAG_GET(dev_id, queue_id, item_index) == + ZXDH_DTB_TAB_UP_USER_ADDR_TYPE) + addr = ZXDH_DTB_TAB_UP_USER_PHY_ADDR_GET(dev_id, queue_id, item_index); + else + addr = ZXDH_DTB_ITEM_ACK_SIZE; + + *p_phy_haddr = (addr >> 32) & 0xffffffff; + *p_phy_laddr = addr & 0xffffffff; + + return 0; +} + +static uint32_t +zxdh_np_dtb_se_smmu0_dma_dump(uint32_t dev_id, + uint32_t queue_id, + uint32_t base_addr, + uint32_t depth, + uint32_t *p_data, + uint32_t *element_id) +{ + uint32_t rc = 0; + uint32_t dump_dst_phy_haddr = 0; + uint32_t dump_dst_phy_laddr = 0; + uint32_t queue_item_index = 0; + uint32_t data_len = 0; + uint32_t desc_len = 0; + + uint8_t form_buff[ZXDH_DTB_TABLE_CMD_SIZE_BIT / 8] = {0}; + + rc = zxdh_np_dtb_tab_up_free_item_get(dev_id, queue_id, &queue_item_index); + if (rc != 0) { + PMD_DRV_LOG(ERR, "dpp_dtb_tab_up_free_item_get failed = %d, base_addr = %u!", rc, base_addr); + return ZXDH_RC_DTB_QUEUE_ITEM_SW_EMPTY; + } + + *element_id = queue_item_index; + + rc = zxdh_np_dtb_tab_up_item_addr_get(dev_id, queue_id, queue_item_index, + &dump_dst_phy_haddr, &dump_dst_phy_laddr); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_dtb_tab_up_item_addr_get"); + + data_len = depth * 128 / 32; + desc_len = ZXDH_DTB_LEN_POS_SETP / 4; + + + rc = zxdh_np_dtb_write_dump_desc_info(dev_id, queue_id, queue_item_index, + (uint32_t *)form_buff, data_len, desc_len, p_data); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_dtb_write_dump_desc_info"); + + return 0; +} + +static uint32_t +zxdh_np_dtb_se_smmu0_ind_read(uint32_t dev_id, + uint32_t queue_id, + uint32_t base_addr, + uint32_t index, + uint32_t rd_mode, + uint32_t *p_data) +{ + uint32_t rc = 0; + + uint32_t row_index = 0; + uint32_t col_index = 0; + uint32_t temp_data[4] = {0}; + uint32_t eram_dump_base_addr = 0; + uint32_t element_id = 0; + + switch (rd_mode) { + case ZXDH_ERAM128_OPR_128b: + { + row_index = index; + break; + } + + case ZXDH_ERAM128_OPR_64b: + { + row_index = (index >> 1); + col_index = index & 0x1; + break; + } + + case ZXDH_ERAM128_OPR_1b: + { + row_index = 
(index >> 7); + col_index = index & 0x7F; + break; + } + } + + eram_dump_base_addr = base_addr + row_index; + + rc = zxdh_np_dtb_se_smmu0_dma_dump(dev_id, + queue_id, + eram_dump_base_addr, + 1, + temp_data, + &element_id); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_np_dtb_se_smmu0_dma_dump"); + + switch (rd_mode) { + case ZXDH_ERAM128_OPR_128b: + { + memcpy(p_data, temp_data, (128 / 8)); + break; + } + + case ZXDH_ERAM128_OPR_64b: + { + memcpy(p_data, temp_data + ((1 - col_index) << 1), (64 / 8)); + break; + } + + case ZXDH_ERAM128_OPR_1b: + { + ZXDH_COMM_UINT32_GET_BITS(p_data[0], *(temp_data + + (3 - col_index / 32)), (col_index % 32), 1); + break; + } + } + + return rc; +} + +static uint32_t +zxdh_np_dtb_stat_smmu0_int_read(uint32_t dev_id, + uint32_t queue_id, + uint32_t smmu0_base_addr, + ZXDH_STAT_CNT_MODE_E rd_mode, + uint32_t index, + uint32_t *p_data) +{ + uint32_t rc = 0; + + uint32_t eram_rd_mode = 0; + + ZXDH_COMM_CHECK_DEV_POINT(dev_id, p_data); + + if (rd_mode == ZXDH_STAT_128_MODE) + eram_rd_mode = ZXDH_ERAM128_OPR_128b; + else + eram_rd_mode = ZXDH_ERAM128_OPR_64b; + + rc = zxdh_np_dtb_se_smmu0_ind_read(dev_id, + queue_id, + smmu0_base_addr, + index, + eram_rd_mode, + p_data); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "zxdh_np_dtb_se_smmu0_ind_read"); + + return rc; +} + +int +zxdh_np_dtb_stats_get(uint32_t dev_id, + uint32_t queue_id, + ZXDH_STAT_CNT_MODE_E rd_mode, + uint32_t index, + uint32_t *p_data) +{ + uint32_t rc = 0; + uint32_t ppu_eram_baddr = 0; + uint32_t ppu_eram_depth = 0; + ZXDH_PPU_STAT_CFG_T stat_cfg = {0}; + + ZXDH_COMM_CHECK_DEV_POINT(dev_id, p_data); + + memset(&stat_cfg, 0x0, sizeof(stat_cfg)); + + rc = zxdh_np_stat_cfg_soft_get(dev_id, &stat_cfg); + ZXDH_COMM_CHECK_DEV_RC(dev_id, rc, "zxdh_stat_cfg_soft_get"); + + ppu_eram_depth = stat_cfg.eram_depth; + ppu_eram_baddr = stat_cfg.eram_baddr; + + if ((index >> (ZXDH_STAT_128_MODE - rd_mode)) < ppu_eram_depth) { + rc = zxdh_np_dtb_stat_smmu0_int_read(dev_id, + queue_id, + ppu_eram_baddr, + rd_mode, + index, + p_data); + ZXDH_COMM_CHECK_RC_NO_ASSERT(rc, "dpp_dtb_stat_smmu0_int_read"); + } + + return rc; +} diff --git a/drivers/net/zxdh/zxdh_np.h b/drivers/net/zxdh/zxdh_np.h index 7295b709ce..fd59a46491 100644 --- a/drivers/net/zxdh/zxdh_np.h +++ b/drivers/net/zxdh/zxdh_np.h @@ -432,6 +432,18 @@ typedef enum zxdh_sdt_table_type_e { ZXDH_SDT_TBLT_MAX = 7, } ZXDH_SDT_TABLE_TYPE_E; +typedef enum zxdh_dtb_dir_type_e { + ZXDH_DTB_DIR_DOWN_TYPE = 0, + ZXDH_DTB_DIR_UP_TYPE = 1, + ZXDH_DTB_DIR_TYPE_MAX, +} ZXDH_DTB_DIR_TYPE_E; + +typedef enum zxdh_dtb_tab_up_user_addr_type_e { + ZXDH_DTB_TAB_UP_NOUSER_ADDR_TYPE = 0, + ZXDH_DTB_TAB_UP_USER_ADDR_TYPE = 1, + ZXDH_DTB_TAB_UP_USER_ADDR_TYPE_MAX, +} ZXDH_DTB_TAB_UP_USER_ADDR_TYPE_E; + typedef struct zxdh_dtb_lpm_entry_t { uint32_t dtb_len0; uint8_t *p_data_buff0; @@ -537,6 +549,19 @@ typedef struct zxdh_sdt_tbl_porttbl_t { uint32_t porttbl_clutch_en; } ZXDH_SDTTBL_PORTTBL_T; +typedef struct zxdh_ppu_stat_cfg_t { + uint32_t eram_baddr; + uint32_t eram_depth; + uint32_t ddr_base_addr; + uint32_t ppu_addr_offset; +} ZXDH_PPU_STAT_CFG_T; + +typedef enum zxdh_stat_cnt_mode_e { + ZXDH_STAT_64_MODE = 0, + ZXDH_STAT_128_MODE = 1, + ZXDH_STAT_MAX_MODE, +} ZXDH_STAT_CNT_MODE_E; + int zxdh_np_host_init(uint32_t dev_id, ZXDH_DEV_INIT_CTRL_T *p_dev_init_ctrl); int zxdh_np_online_uninit(uint32_t dev_id, char *port_name, uint32_t queue_id); int zxdh_np_dtb_table_entry_write(uint32_t dev_id, uint32_t queue_id, @@ -545,5 +570,10 @@ int zxdh_np_dtb_table_entry_delete(uint32_t dev_id, uint32_t 
queue_id, uint32_t entrynum, ZXDH_DTB_USER_ENTRY_T *delete_entries); int zxdh_np_dtb_table_entry_get(uint32_t dev_id, uint32_t queue_id, ZXDH_DTB_USER_ENTRY_T *get_entry, uint32_t srh_mode); +int zxdh_np_dtb_stats_get(uint32_t dev_id, + uint32_t queue_id, + ZXDH_STAT_CNT_MODE_E rd_mode, + uint32_t index, + uint32_t *p_data); #endif /* ZXDH_NP_H */ diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h index 1bd292e235..af616d115b 100644 --- a/drivers/net/zxdh/zxdh_queue.h +++ b/drivers/net/zxdh/zxdh_queue.h @@ -53,6 +53,8 @@ enum { ZXDH_VTNET_RQ = 0, ZXDH_VTNET_TQ = 1 }; #define ZXDH_PI_HDR_SIZE sizeof(struct zxdh_pi_hdr) #define ZXDH_DL_NET_HDR_SIZE sizeof(struct zxdh_net_hdr_dl) #define ZXDH_UL_NET_HDR_SIZE sizeof(struct zxdh_net_hdr_ul) +#define ZXDH_PD_HDR_SIZE_MAX 256 +#define ZXDH_PD_HDR_SIZE_MIN ZXDH_TYPE_HDR_SIZE /* * ring descriptors: 16 bytes. diff --git a/drivers/net/zxdh/zxdh_rxtx.c b/drivers/net/zxdh/zxdh_rxtx.c index be5865ac85..c7a765a881 100644 --- a/drivers/net/zxdh/zxdh_rxtx.c +++ b/drivers/net/zxdh/zxdh_rxtx.c @@ -405,6 +405,40 @@ static inline void zxdh_enqueue_xmit_packed(struct zxdh_virtnet_tx *txvq, zxdh_queue_store_flags_packed(head_dp, head_flags, vq->hw->weak_barriers); } +static void +zxdh_update_packet_stats(struct zxdh_virtnet_stats *stats, struct rte_mbuf *mbuf) +{ + uint32_t s = mbuf->pkt_len; + struct rte_ether_addr *ea = NULL; + + stats->bytes += s; + + if (s == 64) { + stats->size_bins[1]++; + } else if (s > 64 && s < 1024) { + uint32_t bin; + + /* count zeros, and offset into correct bin */ + bin = (sizeof(s) * 8) - rte_clz32(s) - 5; + stats->size_bins[bin]++; + } else { + if (s < 64) + stats->size_bins[0]++; + else if (s < 1519) + stats->size_bins[6]++; + else + stats->size_bins[7]++; + } + + ea = rte_pktmbuf_mtod(mbuf, struct rte_ether_addr *); + if (rte_is_multicast_ether_addr(ea)) { + if (rte_is_broadcast_ether_addr(ea)) + stats->broadcast++; + else + stats->multicast++; + } +} + uint16_t zxdh_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { @@ -458,12 +492,19 @@ zxdh_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkt break; } } + if (txm->nb_segs > ZXDH_TX_MAX_SEGS) { + PMD_TX_LOG(ERR, "%d segs dropped", txm->nb_segs); + txvq->stats.truncated_err += nb_pkts - nb_tx; + break; + } /* Enqueue Packet buffers */ if (can_push) zxdh_enqueue_xmit_packed_fast(txvq, txm, in_order); else zxdh_enqueue_xmit_packed(txvq, txm, slots, use_indirect, in_order); + zxdh_update_packet_stats(&txvq->stats, txm); } + txvq->stats.packets += nb_tx; if (likely(nb_tx)) { if (unlikely(zxdh_queue_kick_prepare_packed(vq))) { zxdh_queue_notify(vq); @@ -473,9 +514,10 @@ zxdh_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkt return nb_tx; } -uint16_t zxdh_xmit_pkts_prepare(void *tx_queue __rte_unused, struct rte_mbuf **tx_pkts, +uint16_t zxdh_xmit_pkts_prepare(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) { + struct zxdh_virtnet_tx *txvq = tx_queue; uint16_t nb_tx; for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) { @@ -495,6 +537,12 @@ uint16_t zxdh_xmit_pkts_prepare(void *tx_queue __rte_unused, struct rte_mbuf **t rte_errno = -error; break; } + if (m->nb_segs > ZXDH_TX_MAX_SEGS) { + PMD_TX_LOG(ERR, "%d segs dropped", m->nb_segs); + txvq->stats.truncated_err += nb_pkts - nb_tx; + rte_errno = ENOMEM; + break; + } } return nb_tx; } @@ -570,7 +618,7 @@ static int32_t zxdh_rx_update_mbuf(struct rte_mbuf *m, struct zxdh_net_hdr_ul *h return 0; } -static inline void 
zxdh_discard_rxbuf(struct zxdh_virtqueue *vq, struct rte_mbuf *m) +static void zxdh_discard_rxbuf(struct zxdh_virtqueue *vq, struct rte_mbuf *m) { int32_t error = 0; /* @@ -612,6 +660,13 @@ uint16_t zxdh_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts, for (i = 0; i < num; i++) { rxm = rcv_pkts[i]; + if (unlikely(len[i] < ZXDH_UL_NET_HDR_SIZE)) { + nb_enqueued++; + PMD_RX_LOG(ERR, "RX, len:%u err", len[i]); + zxdh_discard_rxbuf(vq, rxm); + rxvq->stats.errors++; + continue; + } struct zxdh_net_hdr_ul *header = (struct zxdh_net_hdr_ul *)((char *)rxm->buf_addr + @@ -622,8 +677,22 @@ uint16_t zxdh_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts, PMD_RX_LOG(ERR, "dequeue %d pkt, No.%d pkt seg_num is %d", num, i, seg_num); seg_num = 1; } + if (seg_num > ZXDH_RX_MAX_SEGS) { + PMD_RX_LOG(ERR, "dequeue %d pkt, No.%d pkt seg_num is %d", num, i, seg_num); + nb_enqueued++; + zxdh_discard_rxbuf(vq, rxm); + rxvq->stats.errors++; + continue; + } /* bit[0:6]-pd_len unit:2B */ uint16_t pd_len = header->type_hdr.pd_len << 1; + if (pd_len > ZXDH_PD_HDR_SIZE_MAX || pd_len < ZXDH_PD_HDR_SIZE_MIN) { + PMD_RX_LOG(ERR, "pd_len:%d is invalid", pd_len); + nb_enqueued++; + zxdh_discard_rxbuf(vq, rxm); + rxvq->stats.errors++; + continue; + } /* Private queue only handle type hdr */ hdr_size = pd_len; rxm->data_off = RTE_PKTMBUF_HEADROOM + hdr_size; @@ -638,6 +707,7 @@ uint16_t zxdh_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts, /* Update rte_mbuf according to pi/pd header */ if (zxdh_rx_update_mbuf(rxm, header) < 0) { zxdh_discard_rxbuf(vq, rxm); + rxvq->stats.errors++; continue; } seg_res = seg_num - 1; @@ -660,8 +730,11 @@ uint16_t zxdh_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts, PMD_RX_LOG(ERR, "dropped rcvd_pkt_len %d pktlen %d.", rcvd_pkt_len, rx_pkts[nb_rx]->pkt_len); zxdh_discard_rxbuf(vq, rx_pkts[nb_rx]); + rxvq->stats.errors++; + rxvq->stats.truncated_err++; continue; } + zxdh_update_packet_stats(&rxvq->stats, rx_pkts[nb_rx]); nb_rx++; } } @@ -674,6 +747,7 @@ uint16_t zxdh_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts, if (unlikely(rcv_cnt == 0)) { PMD_RX_LOG(ERR, "Not enough segments for packet."); rte_pktmbuf_free(rx_pkts[nb_rx]); + rxvq->stats.errors++; break; } while (extra_idx < rcv_cnt) { @@ -693,11 +767,15 @@ uint16_t zxdh_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts, PMD_RX_LOG(ERR, "dropped rcvd_pkt_len %d pktlen %d.", rcvd_pkt_len, rx_pkts[nb_rx]->pkt_len); zxdh_discard_rxbuf(vq, rx_pkts[nb_rx]); + rxvq->stats.errors++; + rxvq->stats.truncated_err++; continue; } + zxdh_update_packet_stats(&rxvq->stats, rx_pkts[nb_rx]); nb_rx++; } } + rxvq->stats.packets += nb_rx; /* Allocate new mbuf for the used descriptor */ if (likely(!zxdh_queue_full(vq))) { diff --git a/drivers/net/zxdh/zxdh_tables.h b/drivers/net/zxdh/zxdh_tables.h index 649ede33e8..675c7871ae 100644 --- a/drivers/net/zxdh/zxdh_tables.h +++ b/drivers/net/zxdh/zxdh_tables.h @@ -13,6 +13,11 @@ extern struct zxdh_dtb_shared_data g_dtb_data; #define ZXDH_PORT_BASE_QID_FLAG 10 #define ZXDH_PORT_ATTR_IS_UP_FLAG 35 +#define ZXDH_MTU_STATS_EGRESS_BASE 0x8481 +#define ZXDH_MTU_STATS_INGRESS_BASE 0x8981 +#define ZXDH_BROAD_STATS_EGRESS_BASE 0xC902 +#define ZXDH_BROAD_STATS_INGRESS_BASE 0xD102 + struct zxdh_port_attr_table { #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN uint8_t byte4_rsv1: 1; -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 87155 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
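
The size-bin update in zxdh_update_packet_stats() above is compact but non-obvious: for 64 < s < 1024 the bin index is derived from the bit width of the packet length. Below is a minimal standalone sketch of that mapping, for illustration only; rte_clz32() is modeled with GCC/Clang's __builtin_clz(), and the bin boundaries are read off the patch code rather than stated anywhere in it.

    #include <stdint.h>
    #include <stdio.h>

    /* size_bins layout implied by zxdh_update_packet_stats():
     * [0] <64  [1] ==64  [2] 65-127  [3] 128-255  [4] 256-511
     * [5] 512-1023  [6] 1024-1518  [7] >=1519
     */
    static unsigned int size_to_bin(uint32_t s)
    {
        if (s == 64)
            return 1;
        if (s > 64 && s < 1024)
            /* (32 - clz) is the bit width of s; widths 7..10 map to bins 2..5 */
            return (sizeof(s) * 8) - __builtin_clz(s) - 5;
        if (s < 64)
            return 0;
        return (s < 1519) ? 6 : 7;
    }

    int main(void)
    {
        const uint32_t samples[] = {60, 64, 65, 127, 128, 1023, 1024, 1518, 1519};
        unsigned int i;

        for (i = 0; i < sizeof(samples) / sizeof(samples[0]); i++)
            printf("pkt_len %u -> size_bins[%u]\n", samples[i], size_to_bin(samples[i]));
        return 0;
    }

Bins 2 through 5 therefore cover 65-127, 128-255, 256-511 and 512-1023 bytes, so every packet length lands in exactly one counter.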
* [PATCH v1 15/15] net/zxdh: mtu update ops implementations 2024-12-06 5:57 ` [PATCH v1 00/15] net/zxdh: updated " Junlong Wang ` (13 preceding siblings ...) 2024-12-06 5:57 ` [PATCH v1 14/15] net/zxdh: basic stats ops implementations Junlong Wang @ 2024-12-06 5:57 ` Junlong Wang 14 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-12-06 5:57 UTC (permalink / raw) To: ferruh.yigit; +Cc: dev, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 8244 bytes --] mtu update ops implementations. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- doc/guides/nics/features/zxdh.ini | 3 +- doc/guides/nics/zxdh.rst | 2 + drivers/net/zxdh/zxdh_ethdev.c | 1 + drivers/net/zxdh/zxdh_ethdev_ops.c | 79 ++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev_ops.h | 1 + drivers/net/zxdh/zxdh_tables.c | 42 ++++++++++++++++ drivers/net/zxdh/zxdh_tables.h | 4 ++ 7 files changed, 131 insertions(+), 1 deletion(-) diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini index 39c2473652..b9bdb73ddf 100644 --- a/doc/guides/nics/features/zxdh.ini +++ b/doc/guides/nics/features/zxdh.ini @@ -22,4 +22,5 @@ RSS hash = Y RSS reta update = Y Inner RSS = Y Basic stats = Y -Stats per queue = Y \ No newline at end of file +Stats per queue = Y +MTU update = Y \ No newline at end of file diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst index c8a52b587c..58e0c49a2e 100644 --- a/doc/guides/nics/zxdh.rst +++ b/doc/guides/nics/zxdh.rst @@ -33,6 +33,8 @@ Features of the ZXDH PMD are: - QINQ stripping and inserting - Receive Side Scaling (RSS) - Port hardware statistics +- MTU update +- Jumbo frames Driver compilation and testing diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 82f81d1ded..8f39f41c4e 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -1143,6 +1143,7 @@ static const struct eth_dev_ops zxdh_eth_dev_ops = { .rss_hash_conf_get = zxdh_rss_hash_conf_get, .stats_get = zxdh_dev_stats_get, .stats_reset = zxdh_dev_stats_reset, + .mtu_set = zxdh_dev_mtu_set, }; static int32_t diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.c b/drivers/net/zxdh/zxdh_ethdev_ops.c index 6156c94f2c..cca16001f7 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.c +++ b/drivers/net/zxdh/zxdh_ethdev_ops.c @@ -12,6 +12,7 @@ #include "zxdh_logs.h" #include "zxdh_rxtx.h" #include "zxdh_np.h" +#include "zxdh_queue.h" #define ZXDH_VLAN_FILTER_GROUPS 64 #define ZXDH_INVALID_LOGIC_QID 0xFFFFU @@ -37,6 +38,7 @@ #define ZXDH_HF_F3 2 #define ZXDH_HF_MAC_VLAN 4 #define ZXDH_HF_ALL 0 +#define ZXDH_ETHER_MIN_MTU 68 struct zxdh_hw_mac_stats { uint64_t rx_total; @@ -1419,3 +1421,80 @@ int zxdh_dev_stats_reset(struct rte_eth_dev *dev) return 0; } + +int zxdh_dev_mtu_set(struct rte_eth_dev *dev, uint16_t new_mtu) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_panel_table panel = {0}; + struct zxdh_port_attr_table vport_att = {0}; + uint16_t vfid = zxdh_vport_to_vfid(hw->vport); + uint16_t max_mtu = 0; + int ret = 0; + + max_mtu = ZXDH_MAX_RX_PKTLEN - RTE_ETHER_HDR_LEN - RTE_VLAN_HLEN - ZXDH_DL_NET_HDR_SIZE; + if (new_mtu < ZXDH_ETHER_MIN_MTU || new_mtu > max_mtu) { + PMD_DRV_LOG(ERR, "invalid mtu:%d, range[%d, %d]", + new_mtu, ZXDH_ETHER_MIN_MTU, max_mtu); + return -EINVAL; + } + + if (dev->data->mtu == new_mtu) + return 0; + + if (hw->is_pf) { + memset(&panel, 0, sizeof(panel)); + memset(&vport_att, 0, sizeof(vport_att)); + ret = zxdh_get_panel_attr(dev, &panel); + if (ret != 0) { + PMD_DRV_LOG(ERR, 
"get_panel_attr ret:%d", ret); + return -1; + } + + ret = zxdh_get_port_attr(vfid, &vport_att); + if (ret != 0) { + PMD_DRV_LOG(ERR, + "[vfid:%d] zxdh_dev_mtu, get vport dpp_ret:%d", vfid, ret); + return -1; + } + + panel.mtu = new_mtu; + panel.mtu_enable = 1; + ret = zxdh_set_panel_attr(dev, &panel); + if (ret != 0) { + PMD_DRV_LOG(ERR, "set zxdh_dev_mtu failed, ret:%u", ret); + return ret; + } + + vport_att.mtu_enable = 1; + vport_att.mtu = new_mtu; + ret = zxdh_set_port_attr(vfid, &vport_att); + if (ret != 0) { + PMD_DRV_LOG(ERR, + "[vfid:%d] zxdh_dev_mtu, set vport dpp_ret:%d", vfid, ret); + return ret; + } + } else { + struct zxdh_msg_info msg_info = {0}; + struct zxdh_port_attr_set_msg *attr_msg = &msg_info.data.port_attr_msg; + + zxdh_msg_head_build(hw, ZXDH_PORT_ATTRS_SET, &msg_info); + attr_msg->mode = ZXDH_PORT_MTU_EN_FLAG; + attr_msg->value = 1; + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d ", + hw->vport.vport, ZXDH_PORT_MTU_EN_FLAG); + return ret; + } + attr_msg->mode = ZXDH_PORT_MTU_FLAG; + attr_msg->value = new_mtu; + ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d ", + hw->vport.vport, ZXDH_PORT_MTU_FLAG); + return ret; + } + } + dev->data->mtu = new_mtu; + return 0; +} diff --git a/drivers/net/zxdh/zxdh_ethdev_ops.h b/drivers/net/zxdh/zxdh_ethdev_ops.h index dad84934fc..3f37c35178 100644 --- a/drivers/net/zxdh/zxdh_ethdev_ops.h +++ b/drivers/net/zxdh/zxdh_ethdev_ops.h @@ -55,5 +55,6 @@ int zxdh_rss_hash_update(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_c int zxdh_rss_hash_conf_get(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf); int zxdh_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats); int zxdh_dev_stats_reset(struct rte_eth_dev *dev); +int zxdh_dev_mtu_set(struct rte_eth_dev *dev, uint16_t new_mtu); #endif /* ZXDH_ETHDEV_OPS_H */ diff --git a/drivers/net/zxdh/zxdh_tables.c b/drivers/net/zxdh/zxdh_tables.c index e8e483a02a..6587c868c7 100644 --- a/drivers/net/zxdh/zxdh_tables.c +++ b/drivers/net/zxdh/zxdh_tables.c @@ -55,6 +55,48 @@ int zxdh_get_port_attr(uint16_t vfid, struct zxdh_port_attr_table *port_attr) return ret; } +int zxdh_get_panel_attr(struct rte_eth_dev *dev, struct zxdh_panel_table *panel_att) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint8_t index_phy_port = hw->phyport; + + ZXDH_DTB_ERAM_ENTRY_INFO_T panel_entry = { + .index = index_phy_port, + .p_data = (uint32_t *)panel_att + }; + ZXDH_DTB_USER_ENTRY_T entry = { + .sdt_no = ZXDH_SDT_PANEL_ATT_TABLE, + .p_entry_data = (void *)&panel_entry + }; + int ret = zxdh_np_dtb_table_entry_get(ZXDH_DEVICE_NO, g_dtb_data.queueid, &entry, 1); + + if (ret != 0) + PMD_DRV_LOG(ERR, "get eram-panel failed, ret:%d ", ret); + + return ret; +} + +int zxdh_set_panel_attr(struct rte_eth_dev *dev, struct zxdh_panel_table *panel_att) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint8_t index_phy_port = hw->phyport; + + ZXDH_DTB_ERAM_ENTRY_INFO_T panel_entry = { + .index = index_phy_port, + .p_data = (uint32_t *)panel_att + }; + ZXDH_DTB_USER_ENTRY_T entry = { + .sdt_no = ZXDH_SDT_PANEL_ATT_TABLE, + .p_entry_data = (void *)&panel_entry + }; + int ret = zxdh_np_dtb_table_entry_write(ZXDH_DEVICE_NO, g_dtb_data.queueid, 1, &entry); + + if (ret) + PMD_DRV_LOG(ERR, "Insert eram-panel failed, code:%u", ret); + + return ret; +} + int zxdh_port_attr_init(struct rte_eth_dev *dev) 
{ diff --git a/drivers/net/zxdh/zxdh_tables.h b/drivers/net/zxdh/zxdh_tables.h index 675c7871ae..d176ec2ed3 100644 --- a/drivers/net/zxdh/zxdh_tables.h +++ b/drivers/net/zxdh/zxdh_tables.h @@ -10,8 +10,10 @@ extern struct zxdh_dtb_shared_data g_dtb_data; #define ZXDH_DEVICE_NO 0 +#define ZXDH_PORT_MTU_FLAG 9 #define ZXDH_PORT_BASE_QID_FLAG 10 #define ZXDH_PORT_ATTR_IS_UP_FLAG 35 +#define ZXDH_PORT_MTU_EN_FLAG 42 #define ZXDH_MTU_STATS_EGRESS_BASE 0x8481 #define ZXDH_MTU_STATS_INGRESS_BASE 0x8981 @@ -223,5 +225,7 @@ int zxdh_vlan_filter_table_init(struct rte_eth_dev *dev); int zxdh_vlan_filter_table_set(uint16_t vport, uint16_t vlan_id, uint8_t enable); int zxdh_rss_table_set(uint16_t vport, struct zxdh_rss_reta *rss_reta); int zxdh_rss_table_get(uint16_t vport, struct zxdh_rss_reta *rss_reta); +int zxdh_get_panel_attr(struct rte_eth_dev *dev, struct zxdh_panel_table *panel_att); +int zxdh_set_panel_attr(struct rte_eth_dev *dev, struct zxdh_panel_table *panel_att); #endif /* ZXDH_TABLES_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 18213 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
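
The bounds check at the top of zxdh_dev_mtu_set() above derives its upper limit from the hardware receive length minus the Ethernet, VLAN and zxdh downlink headers. Here is a standalone sketch of just that range check; ZXDH_MAX_RX_PKTLEN and ZXDH_DL_NET_HDR_SIZE are driver constants whose values are not visible in this patch, so the numbers below are placeholders for illustration only.

    #include <stdint.h>
    #include <stdio.h>

    #define ETHER_HDR_LEN    14U    /* RTE_ETHER_HDR_LEN */
    #define VLAN_HLEN        4U     /* RTE_VLAN_HLEN */
    #define MIN_MTU          68U    /* ZXDH_ETHER_MIN_MTU from the patch */
    #define MAX_RX_PKTLEN    9600U  /* placeholder for ZXDH_MAX_RX_PKTLEN */
    #define DL_NET_HDR_SIZE  32U    /* placeholder for ZXDH_DL_NET_HDR_SIZE */

    /* Mirrors the validation at the top of zxdh_dev_mtu_set(). */
    static int mtu_range_check(uint16_t new_mtu)
    {
        uint16_t max_mtu = MAX_RX_PKTLEN - ETHER_HDR_LEN - VLAN_HLEN - DL_NET_HDR_SIZE;

        if (new_mtu < MIN_MTU || new_mtu > max_mtu) {
            fprintf(stderr, "invalid mtu:%u, range[%u, %u]\n",
                    (unsigned int)new_mtu, (unsigned int)MIN_MTU, (unsigned int)max_mtu);
            return -1; /* the driver returns -EINVAL here */
        }
        return 0;
    }

    int main(void)
    {
        printf("mtu 1500 -> %d, mtu 60 -> %d, mtu 9600 -> %d\n",
               mtu_range_check(1500), mtu_range_check(60), mtu_range_check(9600));
        return 0;
    }

After the check passes, a PF programs both the panel table and the vport attribute table, while a VF instead forwards two ZXDH_PORT_ATTRS_SET messages to its PF (ZXDH_PORT_MTU_EN_FLAG first, then ZXDH_PORT_MTU_FLAG carrying the value).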
* [PATCH v10 02/10] net/zxdh: add logging implementation 2024-11-04 11:58 ` [PATCH v10 00/10] net/zxdh: introduce net zxdh driver Junlong Wang 2024-11-04 11:58 ` [PATCH v10 01/10] net/zxdh: add zxdh ethdev pmd driver Junlong Wang @ 2024-11-04 11:58 ` Junlong Wang 2024-11-04 11:58 ` [PATCH v10 03/10] net/zxdh: add zxdh device pci init implementation Junlong Wang ` (9 subsequent siblings) 11 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-11-04 11:58 UTC (permalink / raw) To: ferruh.yigit; +Cc: dev, wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 3107 bytes --] Add zxdh logging implementation. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/zxdh_ethdev.c | 14 ++++++++++++-- drivers/net/zxdh/zxdh_logs.h | 34 ++++++++++++++++++++++++++++++++++ 2 files changed, 46 insertions(+), 2 deletions(-) create mode 100644 drivers/net/zxdh/zxdh_logs.h diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 8689e56309..b64dddc91e 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -7,6 +7,7 @@ #include <rte_ethdev.h> #include "zxdh_ethdev.h" +#include "zxdh_logs.h" static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) @@ -20,13 +21,18 @@ zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) /* Allocate memory for storing MAC addresses */ eth_dev->data->mac_addrs = rte_zmalloc("zxdh_mac", ZXDH_MAX_MAC_ADDRS * RTE_ETHER_ADDR_LEN, 0); - if (eth_dev->data->mac_addrs == NULL) + if (eth_dev->data->mac_addrs == NULL) { + PMD_DRV_LOG(ERR, "Failed to allocate %d bytes store MAC addresses", + ZXDH_MAX_MAC_ADDRS * RTE_ETHER_ADDR_LEN); return -ENOMEM; + } memset(hw, 0, sizeof(*hw)); hw->bar_addr[0] = (uint64_t)pci_dev->mem_resource[0].addr; - if (hw->bar_addr[0] == 0) + if (hw->bar_addr[0] == 0) { + PMD_DRV_LOG(ERR, "Bad mem resource."); return -EIO; + } hw->device_id = pci_dev->id.device_id; hw->port_id = eth_dev->data->port_id; @@ -95,3 +101,7 @@ static struct rte_pci_driver zxdh_pmd = { RTE_PMD_REGISTER_PCI(net_zxdh, zxdh_pmd); RTE_PMD_REGISTER_PCI_TABLE(net_zxdh, pci_id_zxdh_map); RTE_PMD_REGISTER_KMOD_DEP(net_zxdh, "* vfio-pci"); +RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_driver, driver, NOTICE); +RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_rx, rx, NOTICE); +RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_tx, tx, NOTICE); +RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_msg, msg, NOTICE); diff --git a/drivers/net/zxdh/zxdh_logs.h b/drivers/net/zxdh/zxdh_logs.h new file mode 100644 index 0000000000..53838e313b --- /dev/null +++ b/drivers/net/zxdh/zxdh_logs.h @@ -0,0 +1,34 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#ifndef ZXDH_LOGS_H +#define ZXDH_LOGS_H + +#include <rte_log.h> + +extern int zxdh_logtype_driver; +#define RTE_LOGTYPE_ZXDH_DRIVER zxdh_logtype_driver +#define PMD_DRV_LOG(level, ...) \ + RTE_LOG_LINE_PREFIX(level, ZXDH_DRIVER, "zxdh %s(): ", \ + __func__, __VA_ARGS__) + +extern int zxdh_logtype_rx; +#define RTE_LOGTYPE_ZXDH_RX zxdh_logtype_rx +#define PMD_RX_LOG(level, ...) \ + RTE_LOG_LINE_PREFIX(level, ZXDH_RX, "zxdh rx %s(): ", \ + __func__, __VA_ARGS__) + +extern int zxdh_logtype_tx; +#define RTE_LOGTYPE_ZXDH_TX zxdh_logtype_tx +#define PMD_TX_LOG(level, ...) \ + RTE_LOG_LINE_PREFIX(level, ZXDH_TX, "zxdh tx %s(): ", \ + __func__, __VA_ARGS__) + +extern int zxdh_logtype_msg; +#define RTE_LOGTYPE_ZXDH_MSG zxdh_logtype_msg +#define PMD_MSG_LOG(level, ...) 
\ + RTE_LOG_LINE_PREFIX(level, ZXDH_MSG, "zxdh msg %s(): ", \ + __func__, __VA_ARGS__) + +#endif /* ZXDH_LOGS_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 5654 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
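
A short usage sketch for the macros above: call sites pass only a level, a format string and its arguments, and RTE_LOG_LINE_PREFIX stamps in the calling function's name. The zxdh_example() function below is hypothetical, and the runtime logtype name quoted after it is an assumption (RTE_LOG_REGISTER_SUFFIX appends the suffix to the driver's default logtype, which for this PMD should come out as pmd.net.zxdh).

    #include "zxdh_logs.h"

    /* Hypothetical call site inside the driver. */
    static void zxdh_example(uint16_t port_id)
    {
        /* ERR is above the default NOTICE threshold, so this always prints:
         * "zxdh zxdh_example(): port 0 not ready" */
        PMD_DRV_LOG(ERR, "port %u not ready", port_id);

        /* DEBUG is below the default threshold; raise the level at runtime to see it */
        PMD_RX_LOG(DEBUG, "queue 0: polled %d used descriptors", 32);
    }

Per-component levels can then be raised at startup without rebuilding, e.g. --log-level=pmd.net.zxdh.rx:debug for the datapath RX messages.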
* [PATCH v10 03/10] net/zxdh: add zxdh device pci init implementation 2024-11-04 11:58 ` [PATCH v10 00/10] net/zxdh: introduce net zxdh driver Junlong Wang 2024-11-04 11:58 ` [PATCH v10 01/10] net/zxdh: add zxdh ethdev pmd driver Junlong Wang 2024-11-04 11:58 ` [PATCH v10 02/10] net/zxdh: add logging implementation Junlong Wang @ 2024-11-04 11:58 ` Junlong Wang 2024-11-04 11:58 ` [PATCH v10 04/10] net/zxdh: add msg chan and msg hwlock init Junlong Wang ` (8 subsequent siblings) 11 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-11-04 11:58 UTC (permalink / raw) To: ferruh.yigit; +Cc: dev, wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 17055 bytes --] Add device pci init implementation to obtain PCI capabilities and read configuration, etc. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/meson.build | 1 + drivers/net/zxdh/zxdh_ethdev.c | 44 +++++ drivers/net/zxdh/zxdh_ethdev.h | 18 ++- drivers/net/zxdh/zxdh_pci.c | 283 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_pci.h | 138 ++++++++++++++++ 5 files changed, 483 insertions(+), 1 deletion(-) create mode 100644 drivers/net/zxdh/zxdh_pci.c create mode 100644 drivers/net/zxdh/zxdh_pci.h diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build index 932fb1c835..7db4e7bc71 100644 --- a/drivers/net/zxdh/meson.build +++ b/drivers/net/zxdh/meson.build @@ -15,4 +15,5 @@ endif sources = files( 'zxdh_ethdev.c', + 'zxdh_pci.c', ) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index b64dddc91e..ae20e00317 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -8,6 +8,41 @@ #include "zxdh_ethdev.h" #include "zxdh_logs.h" +#include "zxdh_pci.h" + +struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; + +static int32_t +zxdh_init_device(struct rte_eth_dev *eth_dev) +{ + struct zxdh_hw *hw = eth_dev->data->dev_private; + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); + int ret = 0; + + ret = zxdh_read_pci_caps(pci_dev, hw); + if (ret) { + PMD_DRV_LOG(ERR, "port 0x%x pci caps read failed.", hw->port_id); + goto err; + } + + zxdh_hw_internal[hw->port_id].zxdh_vtpci_ops = &zxdh_dev_pci_ops; + zxdh_pci_reset(hw); + zxdh_get_pci_dev_config(hw); + + rte_ether_addr_copy((struct rte_ether_addr *)hw->mac_addr, &eth_dev->data->mac_addrs[0]); + + /* If host does not support both status and MSI-X then disable LSC */ + if (vtpci_with_feature(hw, ZXDH_NET_F_STATUS) && hw->use_msix != ZXDH_MSIX_NONE) + eth_dev->data->dev_flags |= RTE_ETH_DEV_INTR_LSC; + else + eth_dev->data->dev_flags &= ~RTE_ETH_DEV_INTR_LSC; + + return 0; + +err: + PMD_DRV_LOG(ERR, "port %d init device failed", eth_dev->data->port_id); + return ret; +} static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) @@ -46,6 +81,15 @@ zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) hw->is_pf = 1; } + ret = zxdh_init_device(eth_dev); + if (ret < 0) + goto err_zxdh_init; + + return ret; + +err_zxdh_init: + rte_free(eth_dev->data->mac_addrs); + eth_dev->data->mac_addrs = NULL; return ret; } diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index a11e3624a9..a22ac15065 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -5,6 +5,7 @@ #ifndef ZXDH_ETHDEV_H #define ZXDH_ETHDEV_H +#include <rte_ether.h> #include "ethdev_driver.h" #ifdef __cplusplus @@ -24,15 +25,30 @@ extern "C" { #define ZXDH_MAX_MAC_ADDRS (ZXDH_MAX_UC_MAC_ADDRS + ZXDH_MAX_MC_MAC_ADDRS) #define ZXDH_NUM_BARS 2 
+#define ZXDH_RX_QUEUES_MAX 128U +#define ZXDH_TX_QUEUES_MAX 128U struct zxdh_hw { struct rte_eth_dev *eth_dev; - uint64_t bar_addr[ZXDH_NUM_BARS]; + struct zxdh_pci_common_cfg *common_cfg; + struct zxdh_net_config *dev_cfg; + uint64_t bar_addr[ZXDH_NUM_BARS]; + uint64_t host_features; + uint64_t guest_features; + uint32_t max_queue_pairs; uint32_t speed; + uint32_t notify_off_multiplier; + uint16_t *notify_base; + uint16_t pcie_id; uint16_t device_id; uint16_t port_id; + uint8_t *isr; + uint8_t weak_barriers; + uint8_t use_msix; + uint8_t mac_addr[RTE_ETHER_ADDR_LEN]; + uint8_t duplex; uint8_t is_pf; }; diff --git a/drivers/net/zxdh/zxdh_pci.c b/drivers/net/zxdh/zxdh_pci.c new file mode 100644 index 0000000000..68785aa03e --- /dev/null +++ b/drivers/net/zxdh/zxdh_pci.c @@ -0,0 +1,283 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#include <stdint.h> +#include <unistd.h> + +#include <rte_io.h> +#include <rte_bus.h> +#include <rte_pci.h> +#include <rte_common.h> +#include <rte_cycles.h> + +#include "zxdh_ethdev.h" +#include "zxdh_pci.h" +#include "zxdh_logs.h" + +#define ZXDH_PMD_DEFAULT_GUEST_FEATURES \ + (1ULL << ZXDH_NET_F_MRG_RXBUF | \ + 1ULL << ZXDH_NET_F_STATUS | \ + 1ULL << ZXDH_NET_F_MQ | \ + 1ULL << ZXDH_F_ANY_LAYOUT | \ + 1ULL << ZXDH_F_VERSION_1 | \ + 1ULL << ZXDH_F_RING_PACKED | \ + 1ULL << ZXDH_F_IN_ORDER | \ + 1ULL << ZXDH_F_NOTIFICATION_DATA | \ + 1ULL << ZXDH_NET_F_MAC) + +static void +zxdh_read_dev_config(struct zxdh_hw *hw, size_t offset, + void *dst, int32_t length) +{ + int32_t i = 0; + uint8_t *p = NULL; + uint8_t old_gen = 0; + uint8_t new_gen = 0; + + do { + old_gen = rte_read8(&hw->common_cfg->config_generation); + + p = dst; + for (i = 0; i < length; i++) + *p++ = rte_read8((uint8_t *)hw->dev_cfg + offset + i); + + new_gen = rte_read8(&hw->common_cfg->config_generation); + } while (old_gen != new_gen); +} + +static void +zxdh_write_dev_config(struct zxdh_hw *hw, size_t offset, + const void *src, int32_t length) +{ + int32_t i = 0; + const uint8_t *p = src; + + for (i = 0; i < length; i++) + rte_write8((*p++), (((uint8_t *)hw->dev_cfg) + offset + i)); +} + +static uint8_t +zxdh_get_status(struct zxdh_hw *hw) +{ + return rte_read8(&hw->common_cfg->device_status); +} + +static void +zxdh_set_status(struct zxdh_hw *hw, uint8_t status) +{ + rte_write8(status, &hw->common_cfg->device_status); +} + +static uint64_t +zxdh_get_features(struct zxdh_hw *hw) +{ + uint32_t features_lo = 0; + uint32_t features_hi = 0; + + rte_write32(0, &hw->common_cfg->device_feature_select); + features_lo = rte_read32(&hw->common_cfg->device_feature); + + rte_write32(1, &hw->common_cfg->device_feature_select); + features_hi = rte_read32(&hw->common_cfg->device_feature); + + return ((uint64_t)features_hi << 32) | features_lo; +} + +static void +zxdh_set_features(struct zxdh_hw *hw, uint64_t features) +{ + rte_write32(0, &hw->common_cfg->guest_feature_select); + rte_write32(features & ((1ULL << 32) - 1), &hw->common_cfg->guest_feature); + rte_write32(1, &hw->common_cfg->guest_feature_select); + rte_write32(features >> 32, &hw->common_cfg->guest_feature); +} + +const struct zxdh_pci_ops zxdh_dev_pci_ops = { + .read_dev_cfg = zxdh_read_dev_config, + .write_dev_cfg = zxdh_write_dev_config, + .get_status = zxdh_get_status, + .set_status = zxdh_set_status, + .get_features = zxdh_get_features, + .set_features = zxdh_set_features, +}; + +uint64_t +zxdh_pci_get_features(struct zxdh_hw *hw) +{ + return ZXDH_VTPCI_OPS(hw)->get_features(hw); +} + +void 
+zxdh_pci_reset(struct zxdh_hw *hw) +{ + PMD_DRV_LOG(INFO, "port %u device start reset, just wait...", hw->port_id); + uint32_t retry = 0; + + ZXDH_VTPCI_OPS(hw)->set_status(hw, ZXDH_CONFIG_STATUS_RESET); + /* Flush status write and wait device ready max 3 seconds. */ + while (ZXDH_VTPCI_OPS(hw)->get_status(hw) != ZXDH_CONFIG_STATUS_RESET) { + ++retry; + rte_delay_ms(1); + } + PMD_DRV_LOG(INFO, "port %u device reset %u ms done", hw->port_id, retry); +} + +static void +*get_cfg_addr(struct rte_pci_device *dev, struct zxdh_pci_cap *cap) +{ + uint8_t bar = cap->bar; + uint32_t length = cap->length; + uint32_t offset = cap->offset; + + if (bar >= PCI_MAX_RESOURCE) { + PMD_DRV_LOG(ERR, "invalid bar: %u", bar); + return NULL; + } + if (offset + length < offset) { + PMD_DRV_LOG(ERR, "offset(%u) + length(%u) overflows", offset, length); + return NULL; + } + if (offset + length > dev->mem_resource[bar].len) { + PMD_DRV_LOG(ERR, "invalid cap: overflows bar space"); + return NULL; + } + uint8_t *base = dev->mem_resource[bar].addr; + + if (base == NULL) { + PMD_DRV_LOG(ERR, "bar %u base addr is NULL", bar); + return NULL; + } + return base + offset; +} + +int32_t +zxdh_read_pci_caps(struct rte_pci_device *dev, struct zxdh_hw *hw) +{ + struct zxdh_pci_cap cap; + uint8_t pos = 0; + int32_t ret = 0; + + if (dev->mem_resource[0].addr == NULL) { + PMD_DRV_LOG(ERR, "bar0 base addr is NULL"); + return -1; + } + + hw->use_msix = zxdh_pci_msix_detect(dev); + + pos = rte_pci_find_capability(dev, RTE_PCI_CAP_ID_VNDR); + while (pos) { + ret = rte_pci_read_config(dev, &cap, sizeof(cap), pos); + if (ret != sizeof(cap)) { + PMD_DRV_LOG(ERR, "failed to read pci cap at pos: %x ret %d", pos, ret); + break; + } + if (cap.cap_vndr != RTE_PCI_CAP_ID_VNDR) { + PMD_DRV_LOG(DEBUG, "[%2x] skipping non VNDR cap id: %02x", + pos, cap.cap_vndr); + goto next; + } + PMD_DRV_LOG(DEBUG, "[%2x] cfg type: %u, bar: %u, offset: %04x, len: %u", + pos, cap.cfg_type, cap.bar, cap.offset, cap.length); + + switch (cap.cfg_type) { + case ZXDH_PCI_CAP_COMMON_CFG: + hw->common_cfg = get_cfg_addr(dev, &cap); + break; + case ZXDH_PCI_CAP_NOTIFY_CFG: { + ret = rte_pci_read_config(dev, &hw->notify_off_multiplier, + 4, pos + sizeof(cap)); + if (ret != 4) + PMD_DRV_LOG(ERR, + "failed to read notify_off_multiplier, ret %d", ret); + else + hw->notify_base = get_cfg_addr(dev, &cap); + break; + } + case ZXDH_PCI_CAP_DEVICE_CFG: + hw->dev_cfg = get_cfg_addr(dev, &cap); + break; + case ZXDH_PCI_CAP_ISR_CFG: + hw->isr = get_cfg_addr(dev, &cap); + break; + case ZXDH_PCI_CAP_PCI_CFG: { + hw->pcie_id = *(uint16_t *)&cap.padding[1]; + PMD_DRV_LOG(DEBUG, "get pcie id 0x%x", hw->pcie_id); + + if ((hw->pcie_id >> 11) & 0x1) /* PF */ { + PMD_DRV_LOG(DEBUG, "EP %u PF %u", + hw->pcie_id >> 12, (hw->pcie_id >> 8) & 0x7); + } else { /* VF */ + PMD_DRV_LOG(DEBUG, "EP %u PF %u VF %u", + hw->pcie_id >> 12, + (hw->pcie_id >> 8) & 0x7, + hw->pcie_id & 0xff); + } + break; + } + } +next: + pos = cap.cap_next; + } + if (hw->common_cfg == NULL || hw->notify_base == NULL || + hw->dev_cfg == NULL || hw->isr == NULL) { + PMD_DRV_LOG(ERR, "no zxdh pci device found."); + return -1; + } + return 0; +} + +void +zxdh_pci_read_dev_config(struct zxdh_hw *hw, size_t offset, void *dst, int32_t length) +{ + ZXDH_VTPCI_OPS(hw)->read_dev_cfg(hw, offset, dst, length); +} + +void +zxdh_get_pci_dev_config(struct zxdh_hw *hw) +{ + uint64_t guest_features = 0; + uint64_t nego_features = 0; + uint32_t max_queue_pairs = 0; + + hw->host_features = zxdh_pci_get_features(hw); + + guest_features = 
(uint64_t)ZXDH_PMD_DEFAULT_GUEST_FEATURES; + nego_features = guest_features & hw->host_features; + + hw->guest_features = nego_features; + + if (hw->guest_features & (1ULL << ZXDH_NET_F_MAC)) { + zxdh_pci_read_dev_config(hw, offsetof(struct zxdh_net_config, mac), + &hw->mac_addr, RTE_ETHER_ADDR_LEN); + } else { + rte_eth_random_addr(&hw->mac_addr[0]); + } + + zxdh_pci_read_dev_config(hw, offsetof(struct zxdh_net_config, max_virtqueue_pairs), + &max_queue_pairs, sizeof(max_queue_pairs)); + + if (max_queue_pairs == 0) + hw->max_queue_pairs = ZXDH_RX_QUEUES_MAX; + else + hw->max_queue_pairs = RTE_MIN(ZXDH_RX_QUEUES_MAX, max_queue_pairs); + PMD_DRV_LOG(DEBUG, "set max queue pairs %d", hw->max_queue_pairs); +} + +enum zxdh_msix_status zxdh_pci_msix_detect(struct rte_pci_device *dev) +{ + uint16_t flags = 0; + uint8_t pos = 0; + int16_t ret = 0; + + pos = rte_pci_find_capability(dev, RTE_PCI_CAP_ID_MSIX); + + if (pos > 0) { + ret = rte_pci_read_config(dev, &flags, 2, pos + RTE_PCI_MSIX_FLAGS); + if (ret == 2 && flags & RTE_PCI_MSIX_FLAGS_ENABLE) + return ZXDH_MSIX_ENABLED; + else + return ZXDH_MSIX_DISABLED; + } + return ZXDH_MSIX_NONE; +} diff --git a/drivers/net/zxdh/zxdh_pci.h b/drivers/net/zxdh/zxdh_pci.h new file mode 100644 index 0000000000..7905911a34 --- /dev/null +++ b/drivers/net/zxdh/zxdh_pci.h @@ -0,0 +1,138 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#ifndef ZXDH_PCI_H +#define ZXDH_PCI_H + +#include <stdint.h> +#include <stdbool.h> + +#include <bus_pci_driver.h> + +#include "zxdh_ethdev.h" + +#ifdef __cplusplus +extern "C" { +#endif + +enum zxdh_msix_status { + ZXDH_MSIX_NONE = 0, + ZXDH_MSIX_DISABLED = 1, + ZXDH_MSIX_ENABLED = 2 +}; + +#define ZXDH_NET_F_MAC 5 /* Host has given MAC address. */ +#define ZXDH_NET_F_MRG_RXBUF 15 /* Host can merge receive buffers. */ +#define ZXDH_NET_F_STATUS 16 /* zxdh_net_config.status available */ +#define ZXDH_NET_F_MQ 22 /* Device supports Receive Flow Steering */ +#define ZXDH_F_ANY_LAYOUT 27 /* Can the device handle any descriptor layout */ +#define ZXDH_F_VERSION_1 32 +#define ZXDH_F_RING_PACKED 34 +#define ZXDH_F_IN_ORDER 35 +#define ZXDH_F_NOTIFICATION_DATA 38 + +#define ZXDH_PCI_CAP_COMMON_CFG 1 /* Common configuration */ +#define ZXDH_PCI_CAP_NOTIFY_CFG 2 /* Notifications */ +#define ZXDH_PCI_CAP_ISR_CFG 3 /* ISR Status */ +#define ZXDH_PCI_CAP_DEVICE_CFG 4 /* Device specific configuration */ +#define ZXDH_PCI_CAP_PCI_CFG 5 /* PCI configuration access */ + +/* Status byte for guest to report progress. */ +#define ZXDH_CONFIG_STATUS_RESET 0x00 +#define ZXDH_CONFIG_STATUS_ACK 0x01 +#define ZXDH_CONFIG_STATUS_DRIVER 0x02 +#define ZXDH_CONFIG_STATUS_DRIVER_OK 0x04 +#define ZXDH_CONFIG_STATUS_FEATURES_OK 0x08 +#define ZXDH_CONFIG_STATUS_DEV_NEED_RESET 0x40 +#define ZXDH_CONFIG_STATUS_FAILED 0x80 + +struct zxdh_net_config { + /* The config defining mac address (if ZXDH_NET_F_MAC) */ + uint8_t mac[RTE_ETHER_ADDR_LEN]; + /* See ZXDH_NET_F_STATUS and ZXDH_NET_S_* above */ + uint16_t status; + uint16_t max_virtqueue_pairs; + uint16_t mtu; + uint32_t speed; + uint8_t duplex; +} __rte_packed; + +/* This is the PCI capability header: */ +struct zxdh_pci_cap { + uint8_t cap_vndr; /* Generic PCI field: PCI_CAP_ID_VNDR */ + uint8_t cap_next; /* Generic PCI field: next ptr. */ + uint8_t cap_len; /* Generic PCI field: capability length */ + uint8_t cfg_type; /* Identifies the structure. */ + uint8_t bar; /* Where to find it. */ + uint8_t padding[3]; /* Pad to full dword. 
*/ + uint32_t offset; /* Offset within bar. */ + uint32_t length; /* Length of the structure, in bytes. */ +}; + +/* Fields in ZXDH_PCI_CAP_COMMON_CFG: */ +struct zxdh_pci_common_cfg { + /* About the whole device. */ + uint32_t device_feature_select; /* read-write */ + uint32_t device_feature; /* read-only */ + uint32_t guest_feature_select; /* read-write */ + uint32_t guest_feature; /* read-write */ + uint16_t msix_config; /* read-write */ + uint16_t num_queues; /* read-only */ + uint8_t device_status; /* read-write */ + uint8_t config_generation; /* read-only */ + + /* About a specific virtqueue. */ + uint16_t queue_select; /* read-write */ + uint16_t queue_size; /* read-write, power of 2. */ + uint16_t queue_msix_vector; /* read-write */ + uint16_t queue_enable; /* read-write */ + uint16_t queue_notify_off; /* read-only */ + uint32_t queue_desc_lo; /* read-write */ + uint32_t queue_desc_hi; /* read-write */ + uint32_t queue_avail_lo; /* read-write */ + uint32_t queue_avail_hi; /* read-write */ + uint32_t queue_used_lo; /* read-write */ + uint32_t queue_used_hi; /* read-write */ +}; + +static inline int32_t vtpci_with_feature(struct zxdh_hw *hw, uint64_t bit) +{ + return (hw->guest_features & (1ULL << bit)) != 0; +} + +struct zxdh_pci_ops { + void (*read_dev_cfg)(struct zxdh_hw *hw, size_t offset, void *dst, int32_t len); + void (*write_dev_cfg)(struct zxdh_hw *hw, size_t offset, const void *src, int32_t len); + + uint8_t (*get_status)(struct zxdh_hw *hw); + void (*set_status)(struct zxdh_hw *hw, uint8_t status); + + uint64_t (*get_features)(struct zxdh_hw *hw); + void (*set_features)(struct zxdh_hw *hw, uint64_t features); +}; + +struct zxdh_hw_internal { + const struct zxdh_pci_ops *zxdh_vtpci_ops; +}; + +#define ZXDH_VTPCI_OPS(hw) (zxdh_hw_internal[(hw)->port_id].zxdh_vtpci_ops) + +extern struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; +extern const struct zxdh_pci_ops zxdh_dev_pci_ops; + +void zxdh_pci_reset(struct zxdh_hw *hw); +void zxdh_pci_read_dev_config(struct zxdh_hw *hw, size_t offset, + void *dst, int32_t length); + +int32_t zxdh_read_pci_caps(struct rte_pci_device *dev, struct zxdh_hw *hw); +void zxdh_get_pci_dev_config(struct zxdh_hw *hw); + +uint64_t zxdh_pci_get_features(struct zxdh_hw *hw); +enum zxdh_msix_status zxdh_pci_msix_detect(struct rte_pci_device *dev); + +#ifdef __cplusplus } #endif + +#endif /* ZXDH_PCI_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 38507 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
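
One detail worth pausing on in zxdh_read_dev_config() above is the config_generation retry loop: the device bumps the generation counter whenever it changes device config space, so a byte-by-byte copy that straddled an update is detected and redone. Below is a simplified model of the protocol using plain volatile memory in place of rte_read8() over BAR space; it illustrates the pattern only and is not the MMIO-safe driver code.

    #include <stddef.h>
    #include <stdint.h>

    struct dev_cfg_model {
        volatile uint8_t config_generation; /* bumped by the device on every change */
        volatile uint8_t bytes[64];         /* device-owned configuration space */
    };

    /* Copy len bytes starting at offset; retry if the generation moved mid-copy. */
    static void stable_cfg_read(const struct dev_cfg_model *cfg,
                                size_t offset, void *dst, size_t len)
    {
        uint8_t old_gen, new_gen;
        uint8_t *p;
        size_t i;

        do {
            old_gen = cfg->config_generation;

            p = dst;
            for (i = 0; i < len; i++)
                p[i] = cfg->bytes[offset + i]; /* rte_read8() in the driver */

            new_gen = cfg->config_generation;
        } while (old_gen != new_gen); /* torn read: device updated during the copy */
    }

    int main(void)
    {
        struct dev_cfg_model cfg = { .config_generation = 0, .bytes = {0} };
        uint8_t mac[6];

        stable_cfg_read(&cfg, 0, mac, sizeof(mac));
        return 0;
    }

Without the loop, a multi-byte field such as the 6-byte MAC address could be assembled from two different device updates.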
* [PATCH v10 04/10] net/zxdh: add msg chan and msg hwlock init 2024-11-04 11:58 ` [PATCH v10 00/10] net/zxdh: introduce net zxdh driver Junlong Wang ` (2 preceding siblings ...) 2024-11-04 11:58 ` [PATCH v10 03/10] net/zxdh: add zxdh device pci init implementation Junlong Wang @ 2024-11-04 11:58 ` Junlong Wang 2024-11-04 11:58 ` [PATCH v10 05/10] net/zxdh: add msg chan enable implementation Junlong Wang ` (7 subsequent siblings) 11 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-11-04 11:58 UTC (permalink / raw) To: ferruh.yigit; +Cc: dev, wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 9101 bytes --] Add msg channel and hwlock init implementation Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/meson.build | 1 + drivers/net/zxdh/zxdh_ethdev.c | 15 +++ drivers/net/zxdh/zxdh_ethdev.h | 1 + drivers/net/zxdh/zxdh_msg.c | 170 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_msg.h | 67 +++++++++++++ 5 files changed, 254 insertions(+) create mode 100644 drivers/net/zxdh/zxdh_msg.c create mode 100644 drivers/net/zxdh/zxdh_msg.h diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build index 7db4e7bc71..2e0c8fddae 100644 --- a/drivers/net/zxdh/meson.build +++ b/drivers/net/zxdh/meson.build @@ -16,4 +16,5 @@ endif sources = files( 'zxdh_ethdev.c', 'zxdh_pci.c', + 'zxdh_msg.c', ) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index ae20e00317..bb7c1253c9 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -9,6 +9,7 @@ #include "zxdh_ethdev.h" #include "zxdh_logs.h" #include "zxdh_pci.h" +#include "zxdh_msg.h" struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; @@ -85,9 +86,23 @@ zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) if (ret < 0) goto err_zxdh_init; + ret = zxdh_msg_chan_init(); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to init bar msg chan"); + goto err_zxdh_init; + } + hw->msg_chan_init = 1; + + ret = zxdh_msg_chan_hwlock_init(eth_dev); + if (ret != 0) { + PMD_DRV_LOG(ERR, "zxdh_msg_chan_hwlock_init failed ret %d", ret); + goto err_zxdh_init; + } + return ret; err_zxdh_init: + zxdh_bar_msg_chan_exit(); rte_free(eth_dev->data->mac_addrs); eth_dev->data->mac_addrs = NULL; return ret; diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index a22ac15065..20ead56e44 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -51,6 +51,7 @@ struct zxdh_hw { uint8_t duplex; uint8_t is_pf; + uint8_t msg_chan_init; }; #ifdef __cplusplus diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c new file mode 100644 index 0000000000..319a9cab57 --- /dev/null +++ b/drivers/net/zxdh/zxdh_msg.c @@ -0,0 +1,170 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#include <stdbool.h> + +#include <rte_common.h> +#include <rte_memcpy.h> +#include <rte_spinlock.h> +#include <rte_cycles.h> +#include <inttypes.h> +#include <rte_malloc.h> + +#include "zxdh_ethdev.h" +#include "zxdh_logs.h" +#include "zxdh_msg.h" + +#define ZXDH_REPS_INFO_FLAG_USABLE 0x00 +#define ZXDH_BAR_SEQID_NUM_MAX 256 + +#define ZXDH_PCIEID_IS_PF_MASK (0x0800) +#define ZXDH_PCIEID_PF_IDX_MASK (0x0700) +#define ZXDH_PCIEID_VF_IDX_MASK (0x00ff) +#define ZXDH_PCIEID_EP_IDX_MASK (0x7000) +/* PCIEID bit field offset */ +#define ZXDH_PCIEID_PF_IDX_OFFSET (8) +#define ZXDH_PCIEID_EP_IDX_OFFSET (12) + +#define ZXDH_MULTIPLY_BY_8(x) ((x) << 3) +#define ZXDH_MULTIPLY_BY_32(x) ((x) << 
5) +#define ZXDH_MULTIPLY_BY_256(x) ((x) << 8) + +#define ZXDH_MAX_EP_NUM (4) +#define ZXDH_MAX_HARD_SPINLOCK_NUM (511) + +#define ZXDH_BAR0_SPINLOCK_OFFSET (0x4000) +#define ZXDH_FW_SHRD_OFFSET (0x5000) +#define ZXDH_FW_SHRD_INNER_HW_LABEL_PAT (0x800) +#define ZXDH_HW_LABEL_OFFSET \ + (ZXDH_FW_SHRD_OFFSET + ZXDH_FW_SHRD_INNER_HW_LABEL_PAT) + +struct zxdh_dev_stat { + bool is_mpf_scanned; + bool is_res_init; + int16_t dev_cnt; /* probe cnt */ +}; + +struct zxdh_seqid_item { + void *reps_addr; + uint16_t id; + uint16_t buffer_len; + uint16_t flag; +}; + +struct zxdh_seqid_ring { + uint16_t cur_id; + rte_spinlock_t lock; + struct zxdh_seqid_item reps_info_tbl[ZXDH_BAR_SEQID_NUM_MAX]; +}; + +static struct zxdh_dev_stat g_dev_stat; +static struct zxdh_seqid_ring g_seqid_ring; +static rte_spinlock_t chan_lock; + +static uint16_t +zxdh_pcie_id_to_hard_lock(uint16_t src_pcieid, uint8_t dst) +{ + uint16_t lock_id = 0; + uint16_t pf_idx = (src_pcieid & ZXDH_PCIEID_PF_IDX_MASK) >> ZXDH_PCIEID_PF_IDX_OFFSET; + uint16_t ep_idx = (src_pcieid & ZXDH_PCIEID_EP_IDX_MASK) >> ZXDH_PCIEID_EP_IDX_OFFSET; + + switch (dst) { + /* msg to risc */ + case ZXDH_MSG_CHAN_END_RISC: + lock_id = ZXDH_MULTIPLY_BY_8(ep_idx) + pf_idx; + break; + /* msg to pf/vf */ + case ZXDH_MSG_CHAN_END_VF: + case ZXDH_MSG_CHAN_END_PF: + lock_id = ZXDH_MULTIPLY_BY_8(ep_idx) + pf_idx + + ZXDH_MULTIPLY_BY_8(1 + ZXDH_MAX_EP_NUM); + break; + default: + lock_id = 0; + break; + } + if (lock_id >= ZXDH_MAX_HARD_SPINLOCK_NUM) + lock_id = 0; + + return lock_id; +} + +static void +label_write(uint64_t label_lock_addr, uint32_t lock_id, uint16_t value) +{ + *(volatile uint16_t *)(label_lock_addr + lock_id * 2) = value; +} + +static void +spinlock_write(uint64_t virt_lock_addr, uint32_t lock_id, uint8_t data) +{ + *(volatile uint8_t *)((uint64_t)virt_lock_addr + (uint64_t)lock_id) = data; +} + +static int32_t +zxdh_spinlock_unlock(uint32_t virt_lock_id, uint64_t virt_addr, uint64_t label_addr) +{ + label_write((uint64_t)label_addr, virt_lock_id, 0); + spinlock_write(virt_addr, virt_lock_id, 0); + return 0; +} + +/** + * Fun: PF init hard_spinlock addr + */ +static int +bar_chan_pf_init_spinlock(uint16_t pcie_id, uint64_t bar_base_addr) +{ + int lock_id = zxdh_pcie_id_to_hard_lock(pcie_id, ZXDH_MSG_CHAN_END_RISC); + + zxdh_spinlock_unlock(lock_id, bar_base_addr + ZXDH_BAR0_SPINLOCK_OFFSET, + bar_base_addr + ZXDH_HW_LABEL_OFFSET); + lock_id = zxdh_pcie_id_to_hard_lock(pcie_id, ZXDH_MSG_CHAN_END_VF); + zxdh_spinlock_unlock(lock_id, bar_base_addr + ZXDH_BAR0_SPINLOCK_OFFSET, + bar_base_addr + ZXDH_HW_LABEL_OFFSET); + return 0; +} + +int +zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + if (!hw->is_pf) + return 0; + return bar_chan_pf_init_spinlock(hw->pcie_id, (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX])); +} + +int +zxdh_msg_chan_init(void) +{ + uint16_t seq_id = 0; + + g_dev_stat.dev_cnt++; + if (g_dev_stat.is_res_init) + return ZXDH_BAR_MSG_OK; + + rte_spinlock_init(&chan_lock); + g_seqid_ring.cur_id = 0; + rte_spinlock_init(&g_seqid_ring.lock); + + for (seq_id = 0; seq_id < ZXDH_BAR_SEQID_NUM_MAX; seq_id++) { + struct zxdh_seqid_item *reps_info = &g_seqid_ring.reps_info_tbl[seq_id]; + + reps_info->id = seq_id; + reps_info->flag = ZXDH_REPS_INFO_FLAG_USABLE; + } + g_dev_stat.is_res_init = true; + return ZXDH_BAR_MSG_OK; +} + +int +zxdh_bar_msg_chan_exit(void) +{ + if (!g_dev_stat.is_res_init || (--g_dev_stat.dev_cnt > 0)) + return ZXDH_BAR_MSG_OK; + + g_dev_stat.is_res_init = false; + return 
ZXDH_BAR_MSG_OK; +} diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h new file mode 100644 index 0000000000..64e1ad0e7f --- /dev/null +++ b/drivers/net/zxdh/zxdh_msg.h @@ -0,0 +1,67 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#ifndef ZXDH_MSG_H +#define ZXDH_MSG_H + +#include <stdint.h> + +#include <ethdev_driver.h> + +#ifdef __cplusplus +extern "C" { +#endif + +#define ZXDH_BAR0_INDEX 0 + +enum ZXDH_DRIVER_TYPE { + ZXDH_MSG_CHAN_END_MPF = 0, + ZXDH_MSG_CHAN_END_PF, + ZXDH_MSG_CHAN_END_VF, + ZXDH_MSG_CHAN_END_RISC, +}; + +enum ZXDH_BAR_MSG_RTN { + ZXDH_BAR_MSG_OK = 0, + ZXDH_BAR_MSG_ERR_MSGID, + ZXDH_BAR_MSG_ERR_NULL, + ZXDH_BAR_MSG_ERR_TYPE, /* Message type exception */ + ZXDH_BAR_MSG_ERR_MODULE, /* Module ID exception */ + ZXDH_BAR_MSG_ERR_BODY_NULL, /* Message body exception */ + ZXDH_BAR_MSG_ERR_LEN, /* Message length exception */ + ZXDH_BAR_MSG_ERR_TIME_OUT, /* Message sending length too long */ + ZXDH_BAR_MSG_ERR_NOT_READY, /* Abnormal message sending conditions*/ + ZXDH_BAR_MEG_ERR_NULL_FUNC, /* Empty receive processing function pointer*/ + ZXDH_BAR_MSG_ERR_REPEAT_REGISTER, /* Module duplicate registration*/ + ZXDH_BAR_MSG_ERR_UNGISTER, /* Repeated deregistration*/ + /* + * The sending interface parameter boundary structure pointer is empty + */ + ZXDH_BAR_MSG_ERR_NULL_PARA, + ZXDH_BAR_MSG_ERR_REPSBUFF_LEN, /* The length of reps_buff is too short*/ + /* + * Unable to find the corresponding message processing function for this module + */ + ZXDH_BAR_MSG_ERR_MODULE_NOEXIST, + /* + * The virtual address in the parameters passed in by the sending interface is empty + */ + ZXDH_BAR_MSG_ERR_VIRTADDR_NULL, + ZXDH_BAR_MSG_ERR_REPLY, /* sync msg resp_error */ + ZXDH_BAR_MSG_ERR_MPF_NOT_SCANNED, + ZXDH_BAR_MSG_ERR_KERNEL_READY, + ZXDH_BAR_MSG_ERR_USR_RET_ERR, + ZXDH_BAR_MSG_ERR_ERR_PCIEID, + ZXDH_BAR_MSG_ERR_SOCKET, /* netlink sockte err */ +}; + +int zxdh_msg_chan_init(void); +int zxdh_bar_msg_chan_exit(void); +int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev); + +#ifdef __cplusplus +} +#endif + +#endif /* ZXDH_MSG_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 17551 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
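
zxdh_spinlock_unlock() above releases a hardware lock by clearing its 16-bit owner label and then the lock byte itself; the matching acquire loop (added in the next patch of this series) polls a "read to lock" register, where reading a free lock byte returns 0 and takes the lock as a side effect, after which the owner id is written to the label. Below is a small C11 model of that protocol; atomic_exchange() stands in for the hardware read-to-lock behaviour, so this is an illustration, not driver code.

    #include <stdatomic.h>
    #include <stdint.h>

    #define MAX_TRIES 1000 /* ZXDH_MAX_HARD_SPINLOCK_ASK_TIMES in the driver */

    static int model_spinlock_lock(atomic_uchar *lock_byte, uint16_t *label,
                                   uint16_t primary_id)
    {
        unsigned int tries;

        for (tries = 0; tries < MAX_TRIES; tries++) {
            /* hardware: a plain read of a free lock returns 0 and locks it;
             * modeled here with an atomic exchange */
            if (atomic_exchange(lock_byte, 1) == 0) {
                *label = primary_id; /* record the owner, as label_write() does */
                return 0;
            }
            /* the driver sleeps ZXDH_SPINLOCK_POLLING_SPAN_US (100 us) here */
        }
        return -1; /* timed out */
    }

    static void model_spinlock_unlock(atomic_uchar *lock_byte, uint16_t *label)
    {
        *label = 0;                 /* label_write(..., 0) */
        atomic_store(lock_byte, 0); /* spinlock_write(..., 0) */
    }

    int main(void)
    {
        atomic_uchar lock = 0;
        uint16_t label = 0;

        if (model_spinlock_lock(&lock, &label, 0x1234) == 0)
            model_spinlock_unlock(&lock, &label);
        return 0;
    }

zxdh_pcie_id_to_hard_lock() above maps each requester's PCIe id onto one of 511 hardware lock slots, placing channels toward the RISC core and channels toward PF/VF in disjoint lock ranges.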
* [PATCH v10 05/10] net/zxdh: add msg chan enable implementation 2024-11-04 11:58 ` [PATCH v10 00/10] net/zxdh: introduce net zxdh driver Junlong Wang ` (3 preceding siblings ...) 2024-11-04 11:58 ` [PATCH v10 04/10] net/zxdh: add msg chan and msg hwlock init Junlong Wang @ 2024-11-04 11:58 ` Junlong Wang 2024-11-04 11:58 ` [PATCH v10 06/10] net/zxdh: add zxdh get device backend infos Junlong Wang ` (6 subsequent siblings) 11 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-11-04 11:58 UTC (permalink / raw) To: ferruh.yigit; +Cc: dev, wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 27937 bytes --] Add msg chan enable implementation to support sending messages to the backend (device side) and getting information back. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/zxdh_ethdev.c | 6 + drivers/net/zxdh/zxdh_ethdev.h | 12 + drivers/net/zxdh/zxdh_msg.c | 657 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_msg.h | 128 +++++++ 4 files changed, 803 insertions(+) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index bb7c1253c9..105c18f9e0 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -99,6 +99,12 @@ zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) goto err_zxdh_init; } + ret = zxdh_msg_chan_enable(eth_dev); + if (ret != 0) { + PMD_DRV_LOG(ERR, "zxdh_msg_bar_chan_enable failed ret %d", ret); + goto err_zxdh_init; + } + return ret; err_zxdh_init: diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index 20ead56e44..7434cc15d7 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -28,10 +28,22 @@ extern "C" { #define ZXDH_RX_QUEUES_MAX 128U #define ZXDH_TX_QUEUES_MAX 128U +union zxdh_virport_num { + uint16_t vport; + struct { + uint16_t vfid:8; + uint16_t pfid:3; + uint16_t vf_flag:1; + uint16_t epid:3; + uint16_t direct_flag:1; + }; +}; + struct zxdh_hw { struct rte_eth_dev *eth_dev; struct zxdh_pci_common_cfg *common_cfg; struct zxdh_net_config *dev_cfg; + union zxdh_virport_num vport; uint64_t bar_addr[ZXDH_NUM_BARS]; uint64_t host_features; diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c index 319a9cab57..336ba217d3 100644 --- a/drivers/net/zxdh/zxdh_msg.c +++ b/drivers/net/zxdh/zxdh_msg.c @@ -17,6 +17,7 @@ #define ZXDH_REPS_INFO_FLAG_USABLE 0x00 #define ZXDH_BAR_SEQID_NUM_MAX 256 +#define ZXDH_REPS_INFO_FLAG_USED 0xa0 #define ZXDH_PCIEID_IS_PF_MASK (0x0800) #define ZXDH_PCIEID_PF_IDX_MASK (0x0700) @@ -33,12 +34,85 @@ #define ZXDH_MAX_EP_NUM (4) #define ZXDH_MAX_HARD_SPINLOCK_NUM (511) +#define ZXDH_LOCK_PRIMARY_ID_MASK (0x8000) +/* bar offset */ +#define ZXDH_BAR0_CHAN_RISC_OFFSET (0x2000) +#define ZXDH_BAR0_CHAN_PFVF_OFFSET (0x3000) #define ZXDH_BAR0_SPINLOCK_OFFSET (0x4000) #define ZXDH_FW_SHRD_OFFSET (0x5000) #define ZXDH_FW_SHRD_INNER_HW_LABEL_PAT (0x800) #define ZXDH_HW_LABEL_OFFSET \ (ZXDH_FW_SHRD_OFFSET + ZXDH_FW_SHRD_INNER_HW_LABEL_PAT) +#define ZXDH_CHAN_RISC_SPINLOCK_OFFSET \ + (ZXDH_BAR0_SPINLOCK_OFFSET - ZXDH_BAR0_CHAN_RISC_OFFSET) +#define ZXDH_CHAN_PFVF_SPINLOCK_OFFSET \ + (ZXDH_BAR0_SPINLOCK_OFFSET - ZXDH_BAR0_CHAN_PFVF_OFFSET) +#define ZXDH_CHAN_RISC_LABEL_OFFSET \ + (ZXDH_HW_LABEL_OFFSET - ZXDH_BAR0_CHAN_RISC_OFFSET) +#define ZXDH_CHAN_PFVF_LABEL_OFFSET \ + (ZXDH_HW_LABEL_OFFSET - ZXDH_BAR0_CHAN_PFVF_OFFSET) + +#define ZXDH_REPS_HEADER_LEN_OFFSET 1 +#define ZXDH_REPS_HEADER_PAYLOAD_OFFSET 4 +#define ZXDH_REPS_HEADER_REPLYED 0xff + +#define ZXDH_BAR_MSG_CHAN_USABLE 0 +#define 
ZXDH_BAR_MSG_CHAN_USED 1 + +#define ZXDH_BAR_MSG_POL_MASK (0x10) +#define ZXDH_BAR_MSG_POL_OFFSET (4) + +#define ZXDH_BAR_ALIGN_WORD_MASK 0xfffffffc +#define ZXDH_BAR_MSG_VALID_MASK 1 +#define ZXDH_BAR_MSG_VALID_OFFSET 0 + +#define ZXDH_BAR_PF_NUM 7 +#define ZXDH_BAR_VF_NUM 256 +#define ZXDH_BAR_INDEX_PF_TO_VF 0 +#define ZXDH_BAR_INDEX_MPF_TO_MPF 0xff +#define ZXDH_BAR_INDEX_MPF_TO_PFVF 0 +#define ZXDH_BAR_INDEX_PFVF_TO_MPF 0 + +#define ZXDH_MAX_HARD_SPINLOCK_ASK_TIMES (1000) +#define ZXDH_SPINLOCK_POLLING_SPAN_US (100) + +#define ZXDH_BAR_MSG_SRC_NUM 3 +#define ZXDH_BAR_MSG_SRC_MPF 0 +#define ZXDH_BAR_MSG_SRC_PF 1 +#define ZXDH_BAR_MSG_SRC_VF 2 +#define ZXDH_BAR_MSG_SRC_ERR 0xff +#define ZXDH_BAR_MSG_DST_NUM 3 +#define ZXDH_BAR_MSG_DST_RISC 0 +#define ZXDH_BAR_MSG_DST_MPF 2 +#define ZXDH_BAR_MSG_DST_PFVF 1 +#define ZXDH_BAR_MSG_DST_ERR 0xff + +#define ZXDH_LOCK_TYPE_HARD (1) +#define ZXDH_LOCK_TYPE_SOFT (0) +#define ZXDH_BAR_INDEX_TO_RISC 0 + +#define ZXDH_BAR_CHAN_INDEX_SEND 0 +#define ZXDH_BAR_CHAN_INDEX_RECV 1 + +uint8_t subchan_id_tbl[ZXDH_BAR_MSG_SRC_NUM][ZXDH_BAR_MSG_DST_NUM] = { + {ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_SEND}, + {ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_RECV}, + {ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_RECV, ZXDH_BAR_CHAN_INDEX_RECV} +}; + +uint8_t chan_id_tbl[ZXDH_BAR_MSG_SRC_NUM][ZXDH_BAR_MSG_DST_NUM] = { + {ZXDH_BAR_INDEX_TO_RISC, ZXDH_BAR_INDEX_MPF_TO_PFVF, ZXDH_BAR_INDEX_MPF_TO_MPF}, + {ZXDH_BAR_INDEX_TO_RISC, ZXDH_BAR_INDEX_PF_TO_VF, ZXDH_BAR_INDEX_PFVF_TO_MPF}, + {ZXDH_BAR_INDEX_TO_RISC, ZXDH_BAR_INDEX_PF_TO_VF, ZXDH_BAR_INDEX_PFVF_TO_MPF} +}; + +uint8_t lock_type_tbl[ZXDH_BAR_MSG_SRC_NUM][ZXDH_BAR_MSG_DST_NUM] = { + {ZXDH_LOCK_TYPE_HARD, ZXDH_LOCK_TYPE_HARD, ZXDH_LOCK_TYPE_HARD}, + {ZXDH_LOCK_TYPE_SOFT, ZXDH_LOCK_TYPE_SOFT, ZXDH_LOCK_TYPE_HARD}, + {ZXDH_LOCK_TYPE_HARD, ZXDH_LOCK_TYPE_HARD, ZXDH_LOCK_TYPE_HARD} +}; + struct zxdh_dev_stat { bool is_mpf_scanned; bool is_res_init; @@ -60,6 +134,7 @@ struct zxdh_seqid_ring { static struct zxdh_dev_stat g_dev_stat; static struct zxdh_seqid_ring g_seqid_ring; +static uint8_t tmp_msg_header[ZXDH_BAR_MSG_ADDR_CHAN_INTERVAL]; static rte_spinlock_t chan_lock; static uint16_t @@ -102,6 +177,35 @@ spinlock_write(uint64_t virt_lock_addr, uint32_t lock_id, uint8_t data) *(volatile uint8_t *)((uint64_t)virt_lock_addr + (uint64_t)lock_id) = data; } +static uint8_t +spinlock_read(uint64_t virt_lock_addr, uint32_t lock_id) +{ + return *(volatile uint8_t *)((uint64_t)virt_lock_addr + (uint64_t)lock_id); +} + +static int32_t +zxdh_spinlock_lock(uint32_t virt_lock_id, uint64_t virt_addr, + uint64_t label_addr, uint16_t primary_id) +{ + uint32_t lock_rd_cnt = 0; + + do { + /* read to lock */ + uint8_t spl_val = spinlock_read(virt_addr, virt_lock_id); + + if (spl_val == 0) { + label_write((uint64_t)label_addr, virt_lock_id, primary_id); + break; + } + rte_delay_us_block(ZXDH_SPINLOCK_POLLING_SPAN_US); + lock_rd_cnt++; + } while (lock_rd_cnt < ZXDH_MAX_HARD_SPINLOCK_ASK_TIMES); + if (lock_rd_cnt >= ZXDH_MAX_HARD_SPINLOCK_ASK_TIMES) + return -1; + + return 0; +} + static int32_t zxdh_spinlock_unlock(uint32_t virt_lock_id, uint64_t virt_addr, uint64_t label_addr) { @@ -168,3 +272,556 @@ zxdh_bar_msg_chan_exit(void) g_dev_stat.is_res_init = false; return ZXDH_BAR_MSG_OK; } + +static int +zxdh_bar_chan_msgid_allocate(uint16_t *msgid) +{ + struct zxdh_seqid_item *seqid_reps_info = NULL; + + rte_spinlock_lock(&g_seqid_ring.lock); + uint16_t g_id = g_seqid_ring.cur_id; 
+ uint16_t count = 0; + int rc = 0; + + do { + count++; + ++g_id; + g_id %= ZXDH_BAR_SEQID_NUM_MAX; + seqid_reps_info = &g_seqid_ring.reps_info_tbl[g_id]; + } while ((seqid_reps_info->flag != ZXDH_REPS_INFO_FLAG_USABLE) && + (count < ZXDH_BAR_SEQID_NUM_MAX)); + + if (count >= ZXDH_BAR_SEQID_NUM_MAX) { + rc = -1; + goto out; + } + seqid_reps_info->flag = ZXDH_REPS_INFO_FLAG_USED; + g_seqid_ring.cur_id = g_id; + *msgid = g_id; + rc = ZXDH_BAR_MSG_OK; + +out: + rte_spinlock_unlock(&g_seqid_ring.lock); + return rc; +} + +static uint16_t +zxdh_bar_chan_save_recv_info(struct zxdh_msg_recviver_mem *result, uint16_t *msg_id) +{ + int ret = zxdh_bar_chan_msgid_allocate(msg_id); + + if (ret != ZXDH_BAR_MSG_OK) + return ZXDH_BAR_MSG_ERR_MSGID; + + PMD_MSG_LOG(DEBUG, "allocate msg_id: %u", *msg_id); + struct zxdh_seqid_item *reps_info = &g_seqid_ring.reps_info_tbl[*msg_id]; + + reps_info->reps_addr = result->recv_buffer; + reps_info->buffer_len = result->buffer_len; + return ZXDH_BAR_MSG_OK; +} + +static uint8_t +zxdh_bar_msg_src_index_trans(uint8_t src) +{ + uint8_t src_index = 0; + + switch (src) { + case ZXDH_MSG_CHAN_END_MPF: + src_index = ZXDH_BAR_MSG_SRC_MPF; + break; + case ZXDH_MSG_CHAN_END_PF: + src_index = ZXDH_BAR_MSG_SRC_PF; + break; + case ZXDH_MSG_CHAN_END_VF: + src_index = ZXDH_BAR_MSG_SRC_VF; + break; + default: + src_index = ZXDH_BAR_MSG_SRC_ERR; + break; + } + return src_index; +} + +static uint8_t +zxdh_bar_msg_dst_index_trans(uint8_t dst) +{ + uint8_t dst_index = 0; + + switch (dst) { + case ZXDH_MSG_CHAN_END_MPF: + dst_index = ZXDH_BAR_MSG_DST_MPF; + break; + case ZXDH_MSG_CHAN_END_PF: + dst_index = ZXDH_BAR_MSG_DST_PFVF; + break; + case ZXDH_MSG_CHAN_END_VF: + dst_index = ZXDH_BAR_MSG_DST_PFVF; + break; + case ZXDH_MSG_CHAN_END_RISC: + dst_index = ZXDH_BAR_MSG_DST_RISC; + break; + default: + dst_index = ZXDH_BAR_MSG_SRC_ERR; + break; + } + return dst_index; +} + +static int +zxdh_bar_chan_send_para_check(struct zxdh_pci_bar_msg *in, + struct zxdh_msg_recviver_mem *result) +{ + uint8_t src_index = 0; + uint8_t dst_index = 0; + + if (in == NULL || result == NULL) { + PMD_MSG_LOG(ERR, "send para ERR: null para."); + return ZXDH_BAR_MSG_ERR_NULL_PARA; + } + src_index = zxdh_bar_msg_src_index_trans(in->src); + dst_index = zxdh_bar_msg_dst_index_trans(in->dst); + + if (src_index == ZXDH_BAR_MSG_SRC_ERR || dst_index == ZXDH_BAR_MSG_DST_ERR) { + PMD_MSG_LOG(ERR, "send para ERR: chan doesn't exist."); + return ZXDH_BAR_MSG_ERR_TYPE; + } + if (in->module_id >= ZXDH_BAR_MSG_MODULE_NUM) { + PMD_MSG_LOG(ERR, "send para ERR: invalid module_id: %d.", in->module_id); + return ZXDH_BAR_MSG_ERR_MODULE; + } + if (in->payload_addr == NULL) { + PMD_MSG_LOG(ERR, "send para ERR: null message."); + return ZXDH_BAR_MSG_ERR_BODY_NULL; + } + if (in->payload_len > ZXDH_BAR_MSG_PAYLOAD_MAX_LEN) { + PMD_MSG_LOG(ERR, "send para ERR: len %d is too long.", in->payload_len); + return ZXDH_BAR_MSG_ERR_LEN; + } + if (in->virt_addr == 0 || result->recv_buffer == NULL) { + PMD_MSG_LOG(ERR, "send para ERR: virt_addr or recv_buffer is NULL."); + return ZXDH_BAR_MSG_ERR_VIRTADDR_NULL; + } + if (result->buffer_len < ZXDH_REPS_HEADER_PAYLOAD_OFFSET) + PMD_MSG_LOG(ERR, "recv buffer len is short than minimal 4 bytes"); + + return ZXDH_BAR_MSG_OK; +} + +static uint64_t +zxdh_subchan_addr_cal(uint64_t virt_addr, uint8_t chan_id, uint8_t subchan_id) +{ + return virt_addr + (2 * chan_id + subchan_id) * ZXDH_BAR_MSG_ADDR_CHAN_INTERVAL; +} + +static uint16_t +zxdh_bar_chan_subchan_addr_get(struct zxdh_pci_bar_msg *in, uint64_t 
*subchan_addr) +{ + uint8_t src_index = zxdh_bar_msg_src_index_trans(in->src); + uint8_t dst_index = zxdh_bar_msg_dst_index_trans(in->dst); + uint16_t chan_id = chan_id_tbl[src_index][dst_index]; + uint16_t subchan_id = subchan_id_tbl[src_index][dst_index]; + + *subchan_addr = zxdh_subchan_addr_cal(in->virt_addr, chan_id, subchan_id); + return ZXDH_BAR_MSG_OK; +} + +static int +zxdh_bar_hard_lock(uint16_t src_pcieid, uint8_t dst, uint64_t virt_addr) +{ + int ret = 0; + uint16_t lockid = zxdh_pcie_id_to_hard_lock(src_pcieid, dst); + + PMD_MSG_LOG(DEBUG, "dev pcieid: 0x%x lock, get hardlockid: %u", src_pcieid, lockid); + if (dst == ZXDH_MSG_CHAN_END_RISC) + ret = zxdh_spinlock_lock(lockid, virt_addr + ZXDH_CHAN_RISC_SPINLOCK_OFFSET, + virt_addr + ZXDH_CHAN_RISC_LABEL_OFFSET, + src_pcieid | ZXDH_LOCK_PRIMARY_ID_MASK); + else + ret = zxdh_spinlock_lock(lockid, virt_addr + ZXDH_CHAN_PFVF_SPINLOCK_OFFSET, + virt_addr + ZXDH_CHAN_PFVF_LABEL_OFFSET, + src_pcieid | ZXDH_LOCK_PRIMARY_ID_MASK); + + return ret; +} + +static void +zxdh_bar_hard_unlock(uint16_t src_pcieid, uint8_t dst, uint64_t virt_addr) +{ + uint16_t lockid = zxdh_pcie_id_to_hard_lock(src_pcieid, dst); + + PMD_MSG_LOG(DEBUG, "dev pcieid: 0x%x unlock, get hardlockid: %u", src_pcieid, lockid); + if (dst == ZXDH_MSG_CHAN_END_RISC) + zxdh_spinlock_unlock(lockid, virt_addr + ZXDH_CHAN_RISC_SPINLOCK_OFFSET, + virt_addr + ZXDH_CHAN_RISC_LABEL_OFFSET); + else + zxdh_spinlock_unlock(lockid, virt_addr + ZXDH_CHAN_PFVF_SPINLOCK_OFFSET, + virt_addr + ZXDH_CHAN_PFVF_LABEL_OFFSET); +} + +static int +zxdh_bar_chan_lock(uint8_t src, uint8_t dst, uint16_t src_pcieid, uint64_t virt_addr) +{ + int ret = 0; + uint8_t src_index = zxdh_bar_msg_src_index_trans(src); + uint8_t dst_index = zxdh_bar_msg_dst_index_trans(dst); + + if (src_index == ZXDH_BAR_MSG_SRC_ERR || dst_index == ZXDH_BAR_MSG_DST_ERR) { + PMD_MSG_LOG(ERR, "lock ERR: chan doesn't exist."); + return ZXDH_BAR_MSG_ERR_TYPE; + } + + ret = zxdh_bar_hard_lock(src_pcieid, dst, virt_addr); + if (ret != 0) + PMD_MSG_LOG(ERR, "dev: 0x%x failed to lock.", src_pcieid); + + return ret; +} + +static int +zxdh_bar_chan_unlock(uint8_t src, uint8_t dst, uint16_t src_pcieid, uint64_t virt_addr) +{ + uint8_t src_index = zxdh_bar_msg_src_index_trans(src); + uint8_t dst_index = zxdh_bar_msg_dst_index_trans(dst); + + if (src_index == ZXDH_BAR_MSG_SRC_ERR || dst_index == ZXDH_BAR_MSG_DST_ERR) { + PMD_MSG_LOG(ERR, "unlock ERR: chan doesn't exist."); + return ZXDH_BAR_MSG_ERR_TYPE; + } + + zxdh_bar_hard_unlock(src_pcieid, dst, virt_addr); + + return ZXDH_BAR_MSG_OK; +} + +static void +zxdh_bar_chan_msgid_free(uint16_t msg_id) +{ + struct zxdh_seqid_item *seqid_reps_info = &g_seqid_ring.reps_info_tbl[msg_id]; + + rte_spinlock_lock(&g_seqid_ring.lock); + seqid_reps_info->flag = ZXDH_REPS_INFO_FLAG_USABLE; + PMD_MSG_LOG(DEBUG, "free msg_id: %u", msg_id); + rte_spinlock_unlock(&g_seqid_ring.lock); +} + +static int +zxdh_bar_chan_reg_write(uint64_t subchan_addr, uint32_t offset, uint32_t data) +{ + uint32_t algin_offset = (offset & ZXDH_BAR_ALIGN_WORD_MASK); + + if (unlikely(algin_offset >= ZXDH_BAR_MSG_ADDR_CHAN_INTERVAL)) { + PMD_MSG_LOG(ERR, "algin_offset exceeds channel size!"); + return -1; + } + *(uint32_t *)(subchan_addr + algin_offset) = data; + return 0; +} + +static int +zxdh_bar_chan_reg_read(uint64_t subchan_addr, uint32_t offset, uint32_t *pdata) +{ + uint32_t algin_offset = (offset & ZXDH_BAR_ALIGN_WORD_MASK); + + if (unlikely(algin_offset >= ZXDH_BAR_MSG_ADDR_CHAN_INTERVAL)) { + PMD_MSG_LOG(ERR, 
"algin_offset exceeds channel size!"); + return -1; + } + *pdata = *(uint32_t *)(subchan_addr + algin_offset); + return 0; +} + +static uint16_t +zxdh_bar_chan_msg_header_set(uint64_t subchan_addr, + struct zxdh_bar_msg_header *msg_header) +{ + uint32_t *data = (uint32_t *)msg_header; + uint16_t i; + + for (i = 0; i < (ZXDH_BAR_MSG_PLAYLOAD_OFFSET >> 2); i++) + zxdh_bar_chan_reg_write(subchan_addr, i * 4, *(data + i)); + + return ZXDH_BAR_MSG_OK; +} + +static uint16_t +zxdh_bar_chan_msg_header_get(uint64_t subchan_addr, + struct zxdh_bar_msg_header *msg_header) +{ + uint32_t *data = (uint32_t *)msg_header; + uint16_t i; + + for (i = 0; i < (ZXDH_BAR_MSG_PLAYLOAD_OFFSET >> 2); i++) + zxdh_bar_chan_reg_read(subchan_addr, i * 4, data + i); + + return ZXDH_BAR_MSG_OK; +} + +static uint16_t +zxdh_bar_chan_msg_payload_set(uint64_t subchan_addr, uint8_t *msg, uint16_t len) +{ + uint32_t *data = (uint32_t *)msg; + uint32_t count = (len >> 2); + uint32_t remain = (len & 0x3); + uint32_t remain_data = 0; + uint32_t i; + + for (i = 0; i < count; i++) + zxdh_bar_chan_reg_write(subchan_addr, 4 * i + + ZXDH_BAR_MSG_PLAYLOAD_OFFSET, *(data + i)); + if (remain) { + for (i = 0; i < remain; i++) + remain_data |= *((uint8_t *)(msg + len - remain + i)) << (8 * i); + + zxdh_bar_chan_reg_write(subchan_addr, 4 * count + + ZXDH_BAR_MSG_PLAYLOAD_OFFSET, remain_data); + } + return ZXDH_BAR_MSG_OK; +} + +static uint16_t +zxdh_bar_chan_msg_payload_get(uint64_t subchan_addr, uint8_t *msg, uint16_t len) +{ + uint32_t *data = (uint32_t *)msg; + uint32_t count = (len >> 2); + uint32_t remain_data = 0; + uint32_t remain = (len & 0x3); + uint32_t i; + + for (i = 0; i < count; i++) + zxdh_bar_chan_reg_read(subchan_addr, 4 * i + + ZXDH_BAR_MSG_PLAYLOAD_OFFSET, (data + i)); + if (remain) { + zxdh_bar_chan_reg_read(subchan_addr, 4 * count + + ZXDH_BAR_MSG_PLAYLOAD_OFFSET, &remain_data); + for (i = 0; i < remain; i++) + *((uint8_t *)(msg + (len - remain + i))) = remain_data >> (8 * i); + } + return ZXDH_BAR_MSG_OK; +} + +static uint16_t +zxdh_bar_chan_msg_valid_set(uint64_t subchan_addr, uint8_t valid_label) +{ + uint32_t data; + + zxdh_bar_chan_reg_read(subchan_addr, ZXDH_BAR_MSG_VALID_OFFSET, &data); + data &= (~ZXDH_BAR_MSG_VALID_MASK); + data |= (uint32_t)valid_label; + zxdh_bar_chan_reg_write(subchan_addr, ZXDH_BAR_MSG_VALID_OFFSET, data); + return ZXDH_BAR_MSG_OK; +} + +static uint16_t +zxdh_bar_chan_msg_send(uint64_t subchan_addr, void *payload_addr, + uint16_t payload_len, struct zxdh_bar_msg_header *msg_header) +{ + uint16_t ret = 0; + ret = zxdh_bar_chan_msg_header_set(subchan_addr, msg_header); + + ret = zxdh_bar_chan_msg_header_get(subchan_addr, + (struct zxdh_bar_msg_header *)tmp_msg_header); + + ret = zxdh_bar_chan_msg_payload_set(subchan_addr, + (uint8_t *)(payload_addr), payload_len); + + ret = zxdh_bar_chan_msg_payload_get(subchan_addr, + tmp_msg_header, payload_len); + + ret = zxdh_bar_chan_msg_valid_set(subchan_addr, ZXDH_BAR_MSG_CHAN_USED); + return ret; +} + +static uint16_t +zxdh_bar_msg_valid_stat_get(uint64_t subchan_addr) +{ + uint32_t data; + + zxdh_bar_chan_reg_read(subchan_addr, ZXDH_BAR_MSG_VALID_OFFSET, &data); + if (ZXDH_BAR_MSG_CHAN_USABLE == (data & ZXDH_BAR_MSG_VALID_MASK)) + return ZXDH_BAR_MSG_CHAN_USABLE; + + return ZXDH_BAR_MSG_CHAN_USED; +} + +static uint16_t +zxdh_bar_chan_msg_poltag_set(uint64_t subchan_addr, uint8_t label) +{ + uint32_t data; + + zxdh_bar_chan_reg_read(subchan_addr, ZXDH_BAR_MSG_VALID_OFFSET, &data); + data &= (~(uint32_t)ZXDH_BAR_MSG_POL_MASK); + data |= 
((uint32_t)label << ZXDH_BAR_MSG_POL_OFFSET); + zxdh_bar_chan_reg_write(subchan_addr, ZXDH_BAR_MSG_VALID_OFFSET, data); + return ZXDH_BAR_MSG_OK; +} + +static uint16_t +zxdh_bar_chan_sync_msg_reps_get(uint64_t subchan_addr, + uint64_t recv_buffer, uint16_t buffer_len) +{ + struct zxdh_bar_msg_header msg_header = {0}; + uint16_t msg_id = 0; + uint16_t msg_len = 0; + + zxdh_bar_chan_msg_header_get(subchan_addr, &msg_header); + msg_id = msg_header.msg_id; + struct zxdh_seqid_item *reps_info = &g_seqid_ring.reps_info_tbl[msg_id]; + + if (reps_info->flag != ZXDH_REPS_INFO_FLAG_USED) { + PMD_MSG_LOG(ERR, "msg_id %u unused", msg_id); + return ZXDH_BAR_MSG_ERR_REPLY; + } + msg_len = msg_header.len; + + if (msg_len > buffer_len - 4) { + PMD_MSG_LOG(ERR, "recv buffer len is: %u, but reply msg len is: %u", + buffer_len, msg_len + 4); + return ZXDH_BAR_MSG_ERR_REPSBUFF_LEN; + } + uint8_t *recv_msg = (uint8_t *)recv_buffer; + + zxdh_bar_chan_msg_payload_get(subchan_addr, + recv_msg + ZXDH_REPS_HEADER_PAYLOAD_OFFSET, msg_len); + *(uint16_t *)(recv_msg + ZXDH_REPS_HEADER_LEN_OFFSET) = msg_len; + *recv_msg = ZXDH_REPS_HEADER_REPLYED; /* set reps's valid */ + return ZXDH_BAR_MSG_OK; +} + +int +zxdh_bar_chan_sync_msg_send(struct zxdh_pci_bar_msg *in, struct zxdh_msg_recviver_mem *result) +{ + struct zxdh_bar_msg_header msg_header = {0}; + uint16_t seq_id = 0; + uint64_t subchan_addr = 0; + uint32_t time_out_cnt = 0; + uint16_t valid = 0; + int ret = 0; + + ret = zxdh_bar_chan_send_para_check(in, result); + if (ret != ZXDH_BAR_MSG_OK) + goto exit; + + ret = zxdh_bar_chan_save_recv_info(result, &seq_id); + if (ret != ZXDH_BAR_MSG_OK) + goto exit; + + zxdh_bar_chan_subchan_addr_get(in, &subchan_addr); + + msg_header.sync = ZXDH_BAR_CHAN_MSG_SYNC; + msg_header.emec = in->emec; + msg_header.usr = 0; + msg_header.rsv = 0; + msg_header.module_id = in->module_id; + msg_header.len = in->payload_len; + msg_header.msg_id = seq_id; + msg_header.src_pcieid = in->src_pcieid; + msg_header.dst_pcieid = in->dst_pcieid; + + ret = zxdh_bar_chan_lock(in->src, in->dst, in->src_pcieid, in->virt_addr); + if (ret != ZXDH_BAR_MSG_OK) { + zxdh_bar_chan_msgid_free(seq_id); + goto exit; + } + zxdh_bar_chan_msg_send(subchan_addr, in->payload_addr, in->payload_len, &msg_header); + + do { + rte_delay_us_block(ZXDH_BAR_MSG_POLLING_SPAN); + valid = zxdh_bar_msg_valid_stat_get(subchan_addr); + ++time_out_cnt; + } while ((time_out_cnt < ZXDH_BAR_MSG_TIMEOUT_TH) && (valid == ZXDH_BAR_MSG_CHAN_USED)); + + if (time_out_cnt == ZXDH_BAR_MSG_TIMEOUT_TH && valid != ZXDH_BAR_MSG_CHAN_USABLE) { + zxdh_bar_chan_msg_valid_set(subchan_addr, ZXDH_BAR_MSG_CHAN_USABLE); + zxdh_bar_chan_msg_poltag_set(subchan_addr, 0); + PMD_MSG_LOG(ERR, "BAR MSG ERR: chan type time out."); + ret = ZXDH_BAR_MSG_ERR_TIME_OUT; + } else { + ret = zxdh_bar_chan_sync_msg_reps_get(subchan_addr, + (uint64_t)result->recv_buffer, result->buffer_len); + } + zxdh_bar_chan_msgid_free(seq_id); + zxdh_bar_chan_unlock(in->src, in->dst, in->src_pcieid, in->virt_addr); + +exit: + return ret; +} + +static int +zxdh_bar_get_sum(uint8_t *ptr, uint8_t len) +{ + uint64_t sum = 0; + int idx; + + for (idx = 0; idx < len; idx++) + sum += *(ptr + idx); + + return (uint16_t)sum; +} + +static int +zxdh_bar_chan_enable(struct zxdh_msix_para *para, uint16_t *vport) +{ + struct zxdh_bar_recv_msg recv_msg = {0}; + int ret = 0; + int check_token = 0; + int sum_res = 0; + + if (!para) + return ZXDH_BAR_MSG_ERR_NULL; + + struct zxdh_msix_msg msix_msg = { + .pcie_id = para->pcie_id, + .vector_risc = 
para->vector_risc, + .vector_pfvf = para->vector_pfvf, + .vector_mpf = para->vector_mpf, + }; + struct zxdh_pci_bar_msg in = { + .virt_addr = para->virt_addr, + .payload_addr = &msix_msg, + .payload_len = sizeof(msix_msg), + .emec = 0, + .src = para->driver_type, + .dst = ZXDH_MSG_CHAN_END_RISC, + .module_id = ZXDH_BAR_MODULE_MISX, + .src_pcieid = para->pcie_id, + .dst_pcieid = 0, + .usr = 0, + }; + + struct zxdh_msg_recviver_mem result = { + .recv_buffer = &recv_msg, + .buffer_len = sizeof(recv_msg), + }; + + ret = zxdh_bar_chan_sync_msg_send(&in, &result); + if (ret != ZXDH_BAR_MSG_OK) + return -ret; + + check_token = recv_msg.msix_reps.check; + sum_res = zxdh_bar_get_sum((uint8_t *)&msix_msg, sizeof(msix_msg)); + + if (check_token != sum_res) { + PMD_MSG_LOG(ERR, "expect token: 0x%x, get token: 0x%x.", sum_res, check_token); + return ZXDH_BAR_MSG_ERR_REPLY; + } + *vport = recv_msg.msix_reps.vport; + PMD_MSG_LOG(DEBUG, "vport of pcieid: 0x%x get success.", para->pcie_id); + return ZXDH_BAR_MSG_OK; +} + +int +zxdh_msg_chan_enable(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msix_para misx_info = { + .vector_risc = ZXDH_MSIX_FROM_RISCV, + .vector_pfvf = ZXDH_MSIX_FROM_PFVF, + .vector_mpf = ZXDH_MSIX_FROM_MPF, + .pcie_id = hw->pcie_id, + .driver_type = hw->is_pf ? ZXDH_MSG_CHAN_END_PF : ZXDH_MSG_CHAN_END_VF, + .virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_CTRLCH_OFFSET), + }; + + return zxdh_bar_chan_enable(&misx_info, &hw->vport.vport); +} diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index 64e1ad0e7f..83e2fe6d5b 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -14,6 +14,21 @@ extern "C" { #endif #define ZXDH_BAR0_INDEX 0 +#define ZXDH_CTRLCH_OFFSET (0x2000) + +#define ZXDH_MSIX_INTR_MSG_VEC_BASE 1 + +#define ZXDH_BAR_MSG_POLLING_SPAN 100 +#define ZXDH_BAR_MSG_POLL_CNT_PER_MS (1 * 1000 / ZXDH_BAR_MSG_POLLING_SPAN) +#define ZXDH_BAR_MSG_POLL_CNT_PER_S (1 * 1000 * 1000 / ZXDH_BAR_MSG_POLLING_SPAN) +#define ZXDH_BAR_MSG_TIMEOUT_TH (10 * 1000 * 1000 / ZXDH_BAR_MSG_POLLING_SPAN) + +#define ZXDH_BAR_CHAN_MSG_SYNC 0 + +#define ZXDH_BAR_MSG_ADDR_CHAN_INTERVAL (2 * 1024) /* channel size */ +#define ZXDH_BAR_MSG_PLAYLOAD_OFFSET (sizeof(struct zxdh_bar_msg_header)) +#define ZXDH_BAR_MSG_PAYLOAD_MAX_LEN \ + (ZXDH_BAR_MSG_ADDR_CHAN_INTERVAL - sizeof(struct zxdh_bar_msg_header)) enum ZXDH_DRIVER_TYPE { ZXDH_MSG_CHAN_END_MPF = 0, @@ -22,6 +37,13 @@ enum ZXDH_DRIVER_TYPE { ZXDH_MSG_CHAN_END_RISC, }; +enum ZXDH_MSG_VEC { + ZXDH_MSIX_FROM_PFVF = ZXDH_MSIX_INTR_MSG_VEC_BASE, + ZXDH_MSIX_FROM_MPF, + ZXDH_MSIX_FROM_RISCV, + ZXDH_MSG_VEC_NUM, +}; + enum ZXDH_BAR_MSG_RTN { ZXDH_BAR_MSG_OK = 0, ZXDH_BAR_MSG_ERR_MSGID, @@ -56,10 +78,116 @@ enum ZXDH_BAR_MSG_RTN { ZXDH_BAR_MSG_ERR_SOCKET, /* netlink sockte err */ }; +enum ZXDH_BAR_MODULE_ID { + ZXDH_BAR_MODULE_DBG = 0, /* 0: debug */ + ZXDH_BAR_MODULE_TBL, /* 1: resource table */ + ZXDH_BAR_MODULE_MISX, /* 2: config msix */ + ZXDH_BAR_MODULE_SDA, /* 3: */ + ZXDH_BAR_MODULE_RDMA, /* 4: */ + ZXDH_BAR_MODULE_DEMO, /* 5: channel test */ + ZXDH_BAR_MODULE_SMMU, /* 6: */ + ZXDH_BAR_MODULE_MAC, /* 7: mac rx/tx stats */ + ZXDH_BAR_MODULE_VDPA, /* 8: vdpa live migration */ + ZXDH_BAR_MODULE_VQM, /* 9: vqm live migration */ + ZXDH_BAR_MODULE_NP, /* 10: vf msg callback np */ + ZXDH_BAR_MODULE_VPORT, /* 11: get vport */ + ZXDH_BAR_MODULE_BDF, /* 12: get bdf */ + ZXDH_BAR_MODULE_RISC_READY, /* 13: */ + ZXDH_BAR_MODULE_REVERSE, /* 14: byte stream reverse */ + 
ZXDH_BAR_MDOULE_NVME, /* 15: */ + ZXDH_BAR_MDOULE_NPSDK, /* 16: */ + ZXDH_BAR_MODULE_NP_TODO, /* 17: */ + ZXDH_MODULE_BAR_MSG_TO_PF, /* 18: */ + ZXDH_MODULE_BAR_MSG_TO_VF, /* 19: */ + + ZXDH_MODULE_FLASH = 32, + ZXDH_BAR_MODULE_OFFSET_GET = 33, + ZXDH_BAR_EVENT_OVS_WITH_VCB = 36, + + ZXDH_BAR_MSG_MODULE_NUM = 100, +}; + +struct zxdh_msix_para { + uint16_t pcie_id; + uint16_t vector_risc; + uint16_t vector_pfvf; + uint16_t vector_mpf; + uint64_t virt_addr; + uint16_t driver_type; /* refer to DRIVER_TYPE */ +}; + +struct zxdh_msix_msg { + uint16_t pcie_id; + uint16_t vector_risc; + uint16_t vector_pfvf; + uint16_t vector_mpf; +}; + +struct zxdh_pci_bar_msg { + uint64_t virt_addr; /* bar addr */ + void *payload_addr; + uint16_t payload_len; + uint16_t emec; + uint16_t src; /* refer to BAR_DRIVER_TYPE */ + uint16_t dst; /* refer to BAR_DRIVER_TYPE */ + uint16_t module_id; + uint16_t src_pcieid; + uint16_t dst_pcieid; + uint16_t usr; +}; + +struct zxdh_bar_msix_reps { + uint16_t pcie_id; + uint16_t check; + uint16_t vport; + uint16_t rsv; +} __rte_packed; + +struct zxdh_bar_offset_reps { + uint16_t check; + uint16_t rsv; + uint32_t offset; + uint32_t length; +} __rte_packed; + +struct zxdh_bar_recv_msg { + uint8_t reps_ok; + uint16_t reps_len; + uint8_t rsv; + union { + struct zxdh_bar_msix_reps msix_reps; + struct zxdh_bar_offset_reps offset_reps; + } __rte_packed; +} __rte_packed; + +struct zxdh_msg_recviver_mem { + void *recv_buffer; /* first 4B is head, followed by payload */ + uint64_t buffer_len; +}; + +struct zxdh_bar_msg_header { + uint8_t valid : 1; /* used by __bar_chan_msg_valid_set/get */ + uint8_t sync : 1; + uint8_t emec : 1; /* emergency */ + uint8_t ack : 1; /* ack msg */ + uint8_t poll : 1; + uint8_t usr : 1; + uint8_t rsv; + uint16_t module_id; + uint16_t len; + uint16_t msg_id; + uint16_t src_pcieid; + uint16_t dst_pcieid; /* used in PF-->VF */ +}; + int zxdh_msg_chan_init(void); int zxdh_bar_msg_chan_exit(void); int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev); +int zxdh_msg_chan_enable(struct rte_eth_dev *dev); +int zxdh_bar_chan_sync_msg_send(struct zxdh_pci_bar_msg *in, + struct zxdh_msg_recviver_mem *result); + #ifdef __cplusplus } #endif -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 58746 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
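To illustrate the caller's side of the sync channel added in this patch, a hedged sketch of one send/receive exchange; the module choice (ZXDH_BAR_MODULE_DEMO, the "channel test" module), the request bytes, and the buffer sizes are assumptions, while the reply layout (valid byte at offset 0, 16-bit length at offset 1, payload from offset 4) mirrors zxdh_bar_chan_sync_msg_reps_get() above:

/* Hypothetical caller sketch, assuming zxdh_msg.h/zxdh_ethdev.h from this series. */
static int example_sync_send(struct zxdh_hw *hw)
{
	uint8_t req[8] = {0x5a};	/* arbitrary request payload */
	uint8_t rsp[4 + 64] = {0};	/* 4-byte reply header plus payload room */
	struct zxdh_pci_bar_msg in = {
		.virt_addr = hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_CTRLCH_OFFSET,
		.payload_addr = req,
		.payload_len = sizeof(req),
		.src = hw->is_pf ? ZXDH_MSG_CHAN_END_PF : ZXDH_MSG_CHAN_END_VF,
		.dst = ZXDH_MSG_CHAN_END_RISC,
		.module_id = ZXDH_BAR_MODULE_DEMO,	/* "channel test" module */
		.src_pcieid = hw->pcie_id,
	};
	struct zxdh_msg_recviver_mem result = {
		.recv_buffer = rsp,
		.buffer_len = sizeof(rsp),
	};

	if (zxdh_bar_chan_sync_msg_send(&in, &result) != ZXDH_BAR_MSG_OK)
		return -1;

	/* On success the reply is stamped in place: rsp[0] holds 0xff
	 * (ZXDH_REPS_HEADER_REPLYED), the uint16_t payload length sits at
	 * offset 1, and the payload itself starts at offset 4.
	 */
	return rsp[0] == 0xff ? 0 : -1;
}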
* [PATCH v10 06/10] net/zxdh: add zxdh get device backend infos 2024-11-04 11:58 ` [PATCH v10 00/10] net/zxdh: introduce net zxdh driver Junlong Wang ` (4 preceding siblings ...) 2024-11-04 11:58 ` [PATCH v10 05/10] net/zxdh: add msg chan enable implementation Junlong Wang @ 2024-11-04 11:58 ` Junlong Wang 2024-11-04 11:58 ` [PATCH v10 07/10] net/zxdh: add configure zxdh intr implementation Junlong Wang ` (5 subsequent siblings) 11 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-11-04 11:58 UTC (permalink / raw) To: ferruh.yigit; +Cc: dev, wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 11892 bytes --] Add zxdh get device backend infos, use msg chan to send msg get. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/meson.build | 1 + drivers/net/zxdh/zxdh_common.c | 256 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_common.h | 30 ++++ drivers/net/zxdh/zxdh_ethdev.c | 37 +++++ drivers/net/zxdh/zxdh_ethdev.h | 5 + drivers/net/zxdh/zxdh_msg.h | 21 +++ 6 files changed, 350 insertions(+) create mode 100644 drivers/net/zxdh/zxdh_common.c create mode 100644 drivers/net/zxdh/zxdh_common.h diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build index 2e0c8fddae..a16db47f89 100644 --- a/drivers/net/zxdh/meson.build +++ b/drivers/net/zxdh/meson.build @@ -17,4 +17,5 @@ sources = files( 'zxdh_ethdev.c', 'zxdh_pci.c', 'zxdh_msg.c', + 'zxdh_common.c', ) diff --git a/drivers/net/zxdh/zxdh_common.c b/drivers/net/zxdh/zxdh_common.c new file mode 100644 index 0000000000..0d7ea4535d --- /dev/null +++ b/drivers/net/zxdh/zxdh_common.c @@ -0,0 +1,256 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#include <stdint.h> +#include <string.h> + +#include <ethdev_driver.h> +#include <rte_malloc.h> +#include <rte_memcpy.h> + +#include "zxdh_ethdev.h" +#include "zxdh_logs.h" +#include "zxdh_msg.h" +#include "zxdh_common.h" + +#define ZXDH_MSG_RSP_SIZE_MAX 512 + +#define ZXDH_COMMON_TABLE_READ 0 +#define ZXDH_COMMON_TABLE_WRITE 1 + +#define ZXDH_COMMON_FIELD_PHYPORT 6 + +#define ZXDH_RSC_TBL_CONTENT_LEN_MAX (257 * 2) + +#define ZXDH_REPS_HEADER_OFFSET 4 +#define ZXDH_TBL_MSG_PRO_SUCCESS 0xaa + +struct zxdh_common_msg { + uint8_t type; /* 0:read table 1:write table */ + uint8_t field; + uint16_t pcie_id; + uint16_t slen; /* Data length for write table */ + uint16_t reserved; +} __rte_packed; + +struct zxdh_common_rsp_hdr { + uint8_t rsp_status; + uint16_t rsp_len; + uint8_t reserved; + uint8_t payload_status; + uint8_t rsv; + uint16_t payload_len; +} __rte_packed; + +struct zxdh_tbl_msg_header { + uint8_t type; + uint8_t field; + uint16_t pcieid; + uint16_t slen; + uint16_t rsv; +}; + +struct zxdh_tbl_msg_reps_header { + uint8_t check; + uint8_t rsv; + uint16_t len; +}; + +static int32_t +zxdh_fill_common_msg(struct zxdh_hw *hw, struct zxdh_pci_bar_msg *desc, + uint8_t type, uint8_t field, + void *buff, uint16_t buff_size) +{ + uint64_t msg_len = sizeof(struct zxdh_common_msg) + buff_size; + + desc->payload_addr = rte_zmalloc(NULL, msg_len, 0); + if (unlikely(desc->payload_addr == NULL)) { + PMD_DRV_LOG(ERR, "Failed to allocate msg_data"); + return -ENOMEM; + } + memset(desc->payload_addr, 0, msg_len); + desc->payload_len = msg_len; + struct zxdh_common_msg *msg_data = (struct zxdh_common_msg *)desc->payload_addr; + + msg_data->type = type; + msg_data->field = field; + msg_data->pcie_id = hw->pcie_id; + msg_data->slen = buff_size; + if (buff_size != 0) + rte_memcpy(msg_data + 1, 
buff, buff_size); + + return 0; +} + +static int32_t +zxdh_send_command(struct zxdh_hw *hw, struct zxdh_pci_bar_msg *desc, + enum ZXDH_BAR_MODULE_ID module_id, + struct zxdh_msg_recviver_mem *msg_rsp) +{ + desc->virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_CTRLCH_OFFSET); + desc->src = hw->is_pf ? ZXDH_MSG_CHAN_END_PF : ZXDH_MSG_CHAN_END_VF; + desc->dst = ZXDH_MSG_CHAN_END_RISC; + desc->module_id = module_id; + desc->src_pcieid = hw->pcie_id; + + msg_rsp->buffer_len = ZXDH_MSG_RSP_SIZE_MAX; + msg_rsp->recv_buffer = rte_zmalloc(NULL, msg_rsp->buffer_len, 0); + if (unlikely(msg_rsp->recv_buffer == NULL)) { + PMD_DRV_LOG(ERR, "Failed to allocate messages response"); + return -ENOMEM; + } + + if (zxdh_bar_chan_sync_msg_send(desc, msg_rsp) != ZXDH_BAR_MSG_OK) { + PMD_DRV_LOG(ERR, "Failed to send sync messages or receive response"); + rte_free(msg_rsp->recv_buffer); + return -1; + } + + return 0; +} + +static int32_t +zxdh_common_rsp_check(struct zxdh_msg_recviver_mem *msg_rsp, + void *buff, uint16_t len) +{ + struct zxdh_common_rsp_hdr *rsp_hdr = (struct zxdh_common_rsp_hdr *)msg_rsp->recv_buffer; + + if (rsp_hdr->payload_status != 0xaa || rsp_hdr->payload_len != len) { + PMD_DRV_LOG(ERR, "Common response is invalid, status:0x%x rsp_len:%d", + rsp_hdr->payload_status, rsp_hdr->payload_len); + return -1; + } + if (len != 0) + rte_memcpy(buff, rsp_hdr + 1, len); + + return 0; +} + +static int32_t +zxdh_common_table_read(struct zxdh_hw *hw, uint8_t field, + void *buff, uint16_t buff_size) +{ + struct zxdh_msg_recviver_mem msg_rsp; + struct zxdh_pci_bar_msg desc; + int32_t ret = 0; + + if (!hw->msg_chan_init) { + PMD_DRV_LOG(ERR, "Bar messages channel not initialized"); + return -1; + } + + ret = zxdh_fill_common_msg(hw, &desc, ZXDH_COMMON_TABLE_READ, field, NULL, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to fill common msg"); + return ret; + } + + ret = zxdh_send_command(hw, &desc, ZXDH_BAR_MODULE_TBL, &msg_rsp); + if (ret != 0) + goto free_msg_data; + + ret = zxdh_common_rsp_check(&msg_rsp, buff, buff_size); + if (ret != 0) + goto free_rsp_data; + +free_rsp_data: + rte_free(msg_rsp.recv_buffer); +free_msg_data: + rte_free(desc.payload_addr); + return ret; +} + +int32_t +zxdh_phyport_get(struct rte_eth_dev *dev, uint8_t *phyport) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + int32_t ret = zxdh_common_table_read(hw, ZXDH_COMMON_FIELD_PHYPORT, + (void *)phyport, sizeof(*phyport)); + return ret; +} + +static inline void +zxdh_fill_res_para(struct rte_eth_dev *dev, struct zxdh_res_para *param) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + param->pcie_id = hw->pcie_id; + param->virt_addr = hw->bar_addr[0] + ZXDH_CTRLCH_OFFSET; + param->src_type = ZXDH_BAR_MODULE_TBL; +} + +static int +zxdh_get_res_info(struct zxdh_res_para *dev, uint8_t field, uint8_t *res, uint16_t *len) +{ + struct zxdh_pci_bar_msg in = {0}; + uint8_t recv_buf[ZXDH_RSC_TBL_CONTENT_LEN_MAX + 8] = {0}; + int ret = 0; + + if (!res || !dev) + return ZXDH_BAR_MSG_ERR_NULL; + + struct zxdh_tbl_msg_header tbl_msg = { + .type = ZXDH_TBL_TYPE_READ, + .field = field, + .pcieid = dev->pcie_id, + .slen = 0, + .rsv = 0, + }; + + in.virt_addr = dev->virt_addr; + in.payload_addr = &tbl_msg; + in.payload_len = sizeof(tbl_msg); + in.src = dev->src_type; + in.dst = ZXDH_MSG_CHAN_END_RISC; + in.module_id = ZXDH_BAR_MODULE_TBL; + in.src_pcieid = dev->pcie_id; + + struct zxdh_msg_recviver_mem result = { + .recv_buffer = recv_buf, + .buffer_len = sizeof(recv_buf), + }; + ret = zxdh_bar_chan_sync_msg_send(&in, 
&result); + + if (ret != ZXDH_BAR_MSG_OK) { + PMD_DRV_LOG(ERR, + "send sync_msg failed. pcieid: 0x%x, ret: %d.", dev->pcie_id, ret); + return ret; + } + struct zxdh_tbl_msg_reps_header *tbl_reps = + (struct zxdh_tbl_msg_reps_header *)(recv_buf + ZXDH_REPS_HEADER_OFFSET); + + if (tbl_reps->check != ZXDH_TBL_MSG_PRO_SUCCESS) { + PMD_DRV_LOG(ERR, + "get resource_field failed. pcieid: 0x%x, ret: %d.", dev->pcie_id, ret); + return ret; + } + *len = tbl_reps->len; + rte_memcpy(res, (recv_buf + ZXDH_REPS_HEADER_OFFSET + + sizeof(struct zxdh_tbl_msg_reps_header)), *len); + return ret; +} + +static int +zxdh_get_res_panel_id(struct zxdh_res_para *in, uint8_t *panel_id) +{ + uint8_t reps = 0; + uint16_t reps_len = 0; + + if (zxdh_get_res_info(in, ZXDH_TBL_FIELD_PNLID, &reps, &reps_len) != ZXDH_BAR_MSG_OK) + return -1; + + *panel_id = reps; + return ZXDH_BAR_MSG_OK; +} + +int32_t +zxdh_panelid_get(struct rte_eth_dev *dev, uint8_t *panelid) +{ + struct zxdh_res_para param; + + zxdh_fill_res_para(dev, ¶m); + int32_t ret = zxdh_get_res_panel_id(¶m, panelid); + return ret; +} diff --git a/drivers/net/zxdh/zxdh_common.h b/drivers/net/zxdh/zxdh_common.h new file mode 100644 index 0000000000..ba29ca1dad --- /dev/null +++ b/drivers/net/zxdh/zxdh_common.h @@ -0,0 +1,30 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#ifndef ZXDH_COMMON_H +#define ZXDH_COMMON_H + +#include <stdint.h> +#include <rte_ethdev.h> + +#include "zxdh_ethdev.h" + +#ifdef __cplusplus +extern "C" { +#endif + +struct zxdh_res_para { + uint64_t virt_addr; + uint16_t pcie_id; + uint16_t src_type; /* refer to BAR_DRIVER_TYPE */ +}; + +int32_t zxdh_phyport_get(struct rte_eth_dev *dev, uint8_t *phyport); +int32_t zxdh_panelid_get(struct rte_eth_dev *dev, uint8_t *pannelid); + +#ifdef __cplusplus +} +#endif + +#endif /* ZXDH_COMMON_H */ diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 105c18f9e0..da5ac3ccd1 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -10,9 +10,22 @@ #include "zxdh_logs.h" #include "zxdh_pci.h" #include "zxdh_msg.h" +#include "zxdh_common.h" struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; +uint16_t +zxdh_vport_to_vfid(union zxdh_virport_num v) +{ + /* epid > 4 is local soft queue. 
return 1192 */ + if (v.epid > 4) + return 1192; + if (v.vf_flag) + return v.epid * 256 + v.vfid; + else + return (v.epid * 8 + v.pfid) + 1152; +} + static int32_t zxdh_init_device(struct rte_eth_dev *eth_dev) { @@ -45,6 +58,26 @@ zxdh_init_device(struct rte_eth_dev *eth_dev) return ret; } +static int +zxdh_agent_comm(struct rte_eth_dev *eth_dev, struct zxdh_hw *hw) +{ + if (zxdh_phyport_get(eth_dev, &hw->phyport) != 0) { + PMD_DRV_LOG(ERR, "Failed to get phyport"); + return -1; + } + PMD_DRV_LOG(INFO, "Get phyport success: 0x%x", hw->phyport); + + hw->vfid = zxdh_vport_to_vfid(hw->vport); + + if (zxdh_panelid_get(eth_dev, &hw->panel_id) != 0) { + PMD_DRV_LOG(ERR, "Failed to get panel_id"); + return -1; + } + PMD_DRV_LOG(INFO, "Get panel id success: 0x%x", hw->panel_id); + + return 0; +} + static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) { @@ -105,6 +138,10 @@ zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) goto err_zxdh_init; } + ret = zxdh_agent_comm(eth_dev, hw); + if (ret != 0) + goto err_zxdh_init; + return ret; err_zxdh_init: diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index 7434cc15d7..7b7bb16be8 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -55,6 +55,7 @@ struct zxdh_hw { uint16_t pcie_id; uint16_t device_id; uint16_t port_id; + uint16_t vfid; uint8_t *isr; uint8_t weak_barriers; @@ -64,8 +65,12 @@ struct zxdh_hw { uint8_t duplex; uint8_t is_pf; uint8_t msg_chan_init; + uint8_t phyport; + uint8_t panel_id; }; +uint16_t zxdh_vport_to_vfid(union zxdh_virport_num v); + #ifdef __cplusplus } #endif diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index 83e2fe6d5b..b2beedec64 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -107,6 +107,27 @@ enum ZXDH_BAR_MODULE_ID { ZXDH_BAR_MSG_MODULE_NUM = 100, }; +enum ZXDH_RES_TBL_FILED { + ZXDH_TBL_FIELD_PCIEID = 0, + ZXDH_TBL_FIELD_BDF = 1, + ZXDH_TBL_FIELD_MSGCH = 2, + ZXDH_TBL_FIELD_DATACH = 3, + ZXDH_TBL_FIELD_VPORT = 4, + ZXDH_TBL_FIELD_PNLID = 5, + ZXDH_TBL_FIELD_PHYPORT = 6, + ZXDH_TBL_FIELD_SERDES_NUM = 7, + ZXDH_TBL_FIELD_NP_PORT = 8, + ZXDH_TBL_FIELD_SPEED = 9, + ZXDH_TBL_FIELD_HASHID = 10, + ZXDH_TBL_FIELD_NON, +}; + +enum ZXDH_TBL_MSG_TYPE { + ZXDH_TBL_TYPE_READ, + ZXDH_TBL_TYPE_WRITE, + ZXDH_TBL_TYPE_NON, +}; + struct zxdh_msix_para { uint16_t pcie_id; uint16_t vector_risc; -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 25121 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
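A quick standalone check of the vport-to-vfid arithmetic introduced above; the union is copied from zxdh_ethdev.h in this series, and the sample vport values are invented for illustration:

/* Standalone replay of zxdh_vport_to_vfid()'s arithmetic -- illustrative only. */
#include <stdint.h>
#include <stdio.h>

union zxdh_virport_num {
	uint16_t vport;
	struct {
		uint16_t vfid:8;
		uint16_t pfid:3;
		uint16_t vf_flag:1;
		uint16_t epid:3;
		uint16_t direct_flag:1;
	};
};

static uint16_t vport_to_vfid(union zxdh_virport_num v)
{
	if (v.epid > 4)
		return 1192;				/* local soft queue */
	if (v.vf_flag)
		return v.epid * 256 + v.vfid;		/* VFs: 256 per EP */
	return (v.epid * 8 + v.pfid) + 1152;		/* PFs: 8 per EP, from 1152 */
}

int main(void)
{
	union zxdh_virport_num pf = { .epid = 1, .pfid = 2, .vf_flag = 0 };
	union zxdh_virport_num vf = { .epid = 1, .vfid = 10, .vf_flag = 1 };

	printf("pf vfid=%u\n", (unsigned)vport_to_vfid(pf));	/* 1*8 + 2 + 1152 = 1162 */
	printf("vf vfid=%u\n", (unsigned)vport_to_vfid(vf));	/* 1*256 + 10 = 266 */
	return 0;
}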
* [PATCH v10 07/10] net/zxdh: add configure zxdh intr implementation 2024-11-04 11:58 ` [PATCH v10 00/10] net/zxdh: introduce net zxdh driver Junlong Wang ` (5 preceding siblings ...) 2024-11-04 11:58 ` [PATCH v10 06/10] net/zxdh: add zxdh get device backend infos Junlong Wang @ 2024-11-04 11:58 ` Junlong Wang 2024-11-04 11:58 ` [PATCH v10 08/10] net/zxdh: add zxdh dev infos get ops Junlong Wang ` (4 subsequent siblings) 11 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-11-04 11:58 UTC (permalink / raw) To: ferruh.yigit; +Cc: dev, wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 29425 bytes --] configure zxdh intr include risc,dtb. and release intr. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/zxdh_ethdev.c | 315 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev.h | 6 + drivers/net/zxdh/zxdh_msg.c | 210 ++++++++++++++++++++++ drivers/net/zxdh/zxdh_msg.h | 16 +- drivers/net/zxdh/zxdh_pci.c | 31 ++++ drivers/net/zxdh/zxdh_pci.h | 11 ++ drivers/net/zxdh/zxdh_queue.h | 110 ++++++++++++ drivers/net/zxdh/zxdh_rxtx.h | 55 ++++++ 8 files changed, 752 insertions(+), 2 deletions(-) create mode 100644 drivers/net/zxdh/zxdh_queue.h create mode 100644 drivers/net/zxdh/zxdh_rxtx.h diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index da5ac3ccd1..1a3658e74b 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -11,6 +11,7 @@ #include "zxdh_pci.h" #include "zxdh_msg.h" #include "zxdh_common.h" +#include "zxdh_queue.h" struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; @@ -26,6 +27,315 @@ zxdh_vport_to_vfid(union zxdh_virport_num v) return (v.epid * 8 + v.pfid) + 1152; } +static void +zxdh_queues_unbind_intr(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + int32_t i; + + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + ZXDH_VTPCI_OPS(hw)->set_queue_irq(hw, hw->vqs[i * 2], ZXDH_MSI_NO_VECTOR); + ZXDH_VTPCI_OPS(hw)->set_queue_irq(hw, hw->vqs[i * 2 + 1], ZXDH_MSI_NO_VECTOR); + } +} + + +static int32_t +zxdh_intr_unmask(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + if (rte_intr_ack(dev->intr_handle) < 0) + return -1; + + hw->use_msix = zxdh_pci_msix_detect(RTE_ETH_DEV_TO_PCI(dev)); + + return 0; +} + +static void +zxdh_devconf_intr_handler(void *param) +{ + struct rte_eth_dev *dev = param; + + if (zxdh_intr_unmask(dev) < 0) + PMD_DRV_LOG(ERR, "interrupt enable failed"); +} + + +/* Interrupt handler triggered by NIC for handling specific interrupt. */ +static void +zxdh_fromriscv_intr_handler(void *param) +{ + struct rte_eth_dev *dev = param; + struct zxdh_hw *hw = dev->data->dev_private; + uint64_t virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_CTRLCH_OFFSET); + + if (hw->is_pf) { + PMD_DRV_LOG(DEBUG, "zxdh_risc2pf_intr_handler"); + zxdh_bar_irq_recv(ZXDH_MSG_CHAN_END_RISC, ZXDH_MSG_CHAN_END_PF, virt_addr, dev); + } else { + PMD_DRV_LOG(DEBUG, "zxdh_riscvf_intr_handler"); + zxdh_bar_irq_recv(ZXDH_MSG_CHAN_END_RISC, ZXDH_MSG_CHAN_END_VF, virt_addr, dev); + } +} + +/* Interrupt handler triggered by NIC for handling specific interrupt. 
*/ +static void +zxdh_frompfvf_intr_handler(void *param) +{ + struct rte_eth_dev *dev = param; + struct zxdh_hw *hw = dev->data->dev_private; + uint64_t virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + + ZXDH_MSG_CHAN_PFVFSHARE_OFFSET); + + if (hw->is_pf) { + PMD_DRV_LOG(DEBUG, "zxdh_vf2pf_intr_handler"); + zxdh_bar_irq_recv(ZXDH_MSG_CHAN_END_VF, ZXDH_MSG_CHAN_END_PF, virt_addr, dev); + } else { + PMD_DRV_LOG(DEBUG, "zxdh_pf2vf_intr_handler"); + zxdh_bar_irq_recv(ZXDH_MSG_CHAN_END_PF, ZXDH_MSG_CHAN_END_VF, virt_addr, dev); + } +} + +static void +zxdh_intr_cb_reg(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) + rte_intr_callback_unregister(dev->intr_handle, zxdh_devconf_intr_handler, dev); + + /* register callback to update dev config intr */ + rte_intr_callback_register(dev->intr_handle, zxdh_devconf_intr_handler, dev); + /* Register rsic_v to pf interrupt callback */ + struct rte_intr_handle *tmp = hw->risc_intr + + (ZXDH_MSIX_FROM_PFVF - ZXDH_MSIX_INTR_MSG_VEC_BASE); + + rte_intr_callback_register(tmp, zxdh_frompfvf_intr_handler, dev); + + tmp = hw->risc_intr + (ZXDH_MSIX_FROM_RISCV - ZXDH_MSIX_INTR_MSG_VEC_BASE); + rte_intr_callback_register(tmp, zxdh_fromriscv_intr_handler, dev); +} + +static void +zxdh_intr_cb_unreg(struct rte_eth_dev *dev) +{ + if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) + rte_intr_callback_unregister(dev->intr_handle, zxdh_devconf_intr_handler, dev); + + struct zxdh_hw *hw = dev->data->dev_private; + + /* register callback to update dev config intr */ + rte_intr_callback_unregister(dev->intr_handle, zxdh_devconf_intr_handler, dev); + /* Register rsic_v to pf interrupt callback */ + struct rte_intr_handle *tmp = hw->risc_intr + + (ZXDH_MSIX_FROM_PFVF - ZXDH_MSIX_INTR_MSG_VEC_BASE); + + rte_intr_callback_unregister(tmp, zxdh_frompfvf_intr_handler, dev); + tmp = hw->risc_intr + (ZXDH_MSIX_FROM_RISCV - ZXDH_MSIX_INTR_MSG_VEC_BASE); + rte_intr_callback_unregister(tmp, zxdh_fromriscv_intr_handler, dev); +} + +static int32_t +zxdh_intr_disable(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + if (!hw->intr_enabled) + return 0; + + zxdh_intr_cb_unreg(dev); + if (rte_intr_disable(dev->intr_handle) < 0) + return -1; + + hw->intr_enabled = 0; + return 0; +} + +static int32_t +zxdh_intr_enable(struct rte_eth_dev *dev) +{ + int ret = 0; + struct zxdh_hw *hw = dev->data->dev_private; + + if (!hw->intr_enabled) { + zxdh_intr_cb_reg(dev); + ret = rte_intr_enable(dev->intr_handle); + if (unlikely(ret)) + PMD_DRV_LOG(ERR, "Failed to enable %s intr", dev->data->name); + + hw->intr_enabled = 1; + } + return ret; +} + +static int32_t +zxdh_intr_release(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) + ZXDH_VTPCI_OPS(hw)->set_config_irq(hw, ZXDH_MSI_NO_VECTOR); + + zxdh_queues_unbind_intr(dev); + zxdh_intr_disable(dev); + + rte_intr_efd_disable(dev->intr_handle); + rte_intr_vec_list_free(dev->intr_handle); + rte_free(hw->risc_intr); + hw->risc_intr = NULL; + rte_free(hw->dtb_intr); + hw->dtb_intr = NULL; + return 0; +} + +static int32_t +zxdh_setup_risc_interrupts(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint8_t i; + + if (!hw->risc_intr) { + PMD_DRV_LOG(ERR, " to allocate risc_intr"); + hw->risc_intr = rte_zmalloc("risc_intr", + ZXDH_MSIX_INTR_MSG_VEC_NUM * sizeof(struct rte_intr_handle), 0); + if (hw->risc_intr == NULL) { + PMD_DRV_LOG(ERR, "Failed to 
allocate risc_intr"); + return -ENOMEM; + } + } + + for (i = 0; i < ZXDH_MSIX_INTR_MSG_VEC_NUM; i++) { + if (dev->intr_handle->efds[i] < 0) { + PMD_DRV_LOG(ERR, "[%u]risc interrupt fd is invalid", i); + rte_free(hw->risc_intr); + hw->risc_intr = NULL; + return -1; + } + + struct rte_intr_handle *intr_handle = hw->risc_intr + i; + + intr_handle->fd = dev->intr_handle->efds[i]; + intr_handle->type = dev->intr_handle->type; + } + + return 0; +} + +static int32_t +zxdh_setup_dtb_interrupts(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + if (!hw->dtb_intr) { + hw->dtb_intr = rte_zmalloc("dtb_intr", sizeof(struct rte_intr_handle), 0); + if (hw->dtb_intr == NULL) { + PMD_DRV_LOG(ERR, "Failed to allocate dtb_intr"); + return -ENOMEM; + } + } + + if (dev->intr_handle->efds[ZXDH_MSIX_INTR_DTB_VEC - 1] < 0) { + PMD_DRV_LOG(ERR, "[%d]dtb interrupt fd is invalid", ZXDH_MSIX_INTR_DTB_VEC - 1); + rte_free(hw->dtb_intr); + hw->dtb_intr = NULL; + return -1; + } + hw->dtb_intr->fd = dev->intr_handle->efds[ZXDH_MSIX_INTR_DTB_VEC - 1]; + hw->dtb_intr->type = dev->intr_handle->type; + return 0; +} + +static int32_t +zxdh_queues_bind_intr(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + int32_t i; + uint16_t vec; + + if (!dev->data->dev_conf.intr_conf.rxq) { + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + vec = ZXDH_VTPCI_OPS(hw)->set_queue_irq(hw, + hw->vqs[i * 2], ZXDH_MSI_NO_VECTOR); + PMD_DRV_LOG(DEBUG, "vq%d irq set 0x%x, get 0x%x", + i * 2, ZXDH_MSI_NO_VECTOR, vec); + } + } else { + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + vec = ZXDH_VTPCI_OPS(hw)->set_queue_irq(hw, + hw->vqs[i * 2], i + ZXDH_QUEUE_INTR_VEC_BASE); + PMD_DRV_LOG(DEBUG, "vq%d irq set %d, get %d", + i * 2, i + ZXDH_QUEUE_INTR_VEC_BASE, vec); + } + } + /* mask all txq intr */ + for (i = 0; i < dev->data->nb_tx_queues; ++i) { + vec = ZXDH_VTPCI_OPS(hw)->set_queue_irq(hw, + hw->vqs[(i * 2) + 1], ZXDH_MSI_NO_VECTOR); + PMD_DRV_LOG(DEBUG, "vq%d irq set 0x%x, get 0x%x", + (i * 2) + 1, ZXDH_MSI_NO_VECTOR, vec); + } + return 0; +} + +static int32_t +zxdh_configure_intr(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + int32_t ret = 0; + + if (!rte_intr_cap_multiple(dev->intr_handle)) { + PMD_DRV_LOG(ERR, "Multiple intr vector not supported"); + return -ENOTSUP; + } + zxdh_intr_release(dev); + uint8_t nb_efd = ZXDH_MSIX_INTR_DTB_VEC_NUM + ZXDH_MSIX_INTR_MSG_VEC_NUM; + + if (dev->data->dev_conf.intr_conf.rxq) + nb_efd += dev->data->nb_rx_queues; + + if (rte_intr_efd_enable(dev->intr_handle, nb_efd)) { + PMD_DRV_LOG(ERR, "Fail to create eventfd"); + return -1; + } + + if (rte_intr_vec_list_alloc(dev->intr_handle, "intr_vec", + hw->max_queue_pairs + ZXDH_INTR_NONQUE_NUM)) { + PMD_DRV_LOG(ERR, "Failed to allocate %u rxq vectors", + hw->max_queue_pairs + ZXDH_INTR_NONQUE_NUM); + return -ENOMEM; + } + PMD_DRV_LOG(DEBUG, "allocate %u rxq vectors", dev->intr_handle->vec_list_size); + if (zxdh_setup_risc_interrupts(dev) != 0) { + PMD_DRV_LOG(ERR, "Error setting up rsic_v interrupts!"); + ret = -1; + goto free_intr_vec; + } + if (zxdh_setup_dtb_interrupts(dev) != 0) { + PMD_DRV_LOG(ERR, "Error setting up dtb interrupts!"); + ret = -1; + goto free_intr_vec; + } + + if (zxdh_queues_bind_intr(dev) < 0) { + PMD_DRV_LOG(ERR, "Failed to bind queue/interrupt"); + ret = -1; + goto free_intr_vec; + } + + if (zxdh_intr_enable(dev) < 0) { + PMD_DRV_LOG(ERR, "interrupt enable failed"); + ret = -1; + goto free_intr_vec; + } + return 0; + +free_intr_vec: + 
zxdh_intr_release(dev); + return ret; +} + static int32_t zxdh_init_device(struct rte_eth_dev *eth_dev) { @@ -142,9 +452,14 @@ zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) if (ret != 0) goto err_zxdh_init; + ret = zxdh_configure_intr(eth_dev); + if (ret != 0) + goto err_zxdh_init; + return ret; err_zxdh_init: + zxdh_intr_release(eth_dev); zxdh_bar_msg_chan_exit(); rte_free(eth_dev->data->mac_addrs); eth_dev->data->mac_addrs = NULL; diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index 7b7bb16be8..65726f3a20 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -7,6 +7,8 @@ #include <rte_ether.h> #include "ethdev_driver.h" +#include <rte_interrupts.h> +#include <eal_interrupts.h> #ifdef __cplusplus extern "C" { @@ -43,6 +45,9 @@ struct zxdh_hw { struct rte_eth_dev *eth_dev; struct zxdh_pci_common_cfg *common_cfg; struct zxdh_net_config *dev_cfg; + struct rte_intr_handle *risc_intr; + struct rte_intr_handle *dtb_intr; + struct zxdh_virtqueue **vqs; union zxdh_virport_num vport; uint64_t bar_addr[ZXDH_NUM_BARS]; @@ -59,6 +64,7 @@ struct zxdh_hw { uint8_t *isr; uint8_t weak_barriers; + uint8_t intr_enabled; uint8_t use_msix; uint8_t mac_addr[RTE_ETHER_ADDR_LEN]; diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c index 336ba217d3..53cf972f86 100644 --- a/drivers/net/zxdh/zxdh_msg.c +++ b/drivers/net/zxdh/zxdh_msg.c @@ -95,6 +95,12 @@ #define ZXDH_BAR_CHAN_INDEX_SEND 0 #define ZXDH_BAR_CHAN_INDEX_RECV 1 +#define ZXDH_BAR_CHAN_MSG_SYNC 0 +#define ZXDH_BAR_CHAN_MSG_NO_EMEC 0 +#define ZXDH_BAR_CHAN_MSG_EMEC 1 +#define ZXDH_BAR_CHAN_MSG_NO_ACK 0 +#define ZXDH_BAR_CHAN_MSG_ACK 1 + uint8_t subchan_id_tbl[ZXDH_BAR_MSG_SRC_NUM][ZXDH_BAR_MSG_DST_NUM] = { {ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_SEND}, {ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_RECV}, @@ -137,6 +143,39 @@ static struct zxdh_seqid_ring g_seqid_ring; static uint8_t tmp_msg_header[ZXDH_BAR_MSG_ADDR_CHAN_INTERVAL]; static rte_spinlock_t chan_lock; +zxdh_bar_chan_msg_recv_callback msg_recv_func_tbl[ZXDH_BAR_MSG_MODULE_NUM]; + +static inline const char +*zxdh_module_id_name(int val) +{ + switch (val) { + case ZXDH_BAR_MODULE_DBG: return "ZXDH_BAR_MODULE_DBG"; + case ZXDH_BAR_MODULE_TBL: return "ZXDH_BAR_MODULE_TBL"; + case ZXDH_BAR_MODULE_MISX: return "ZXDH_BAR_MODULE_MISX"; + case ZXDH_BAR_MODULE_SDA: return "ZXDH_BAR_MODULE_SDA"; + case ZXDH_BAR_MODULE_RDMA: return "ZXDH_BAR_MODULE_RDMA"; + case ZXDH_BAR_MODULE_DEMO: return "ZXDH_BAR_MODULE_DEMO"; + case ZXDH_BAR_MODULE_SMMU: return "ZXDH_BAR_MODULE_SMMU"; + case ZXDH_BAR_MODULE_MAC: return "ZXDH_BAR_MODULE_MAC"; + case ZXDH_BAR_MODULE_VDPA: return "ZXDH_BAR_MODULE_VDPA"; + case ZXDH_BAR_MODULE_VQM: return "ZXDH_BAR_MODULE_VQM"; + case ZXDH_BAR_MODULE_NP: return "ZXDH_BAR_MODULE_NP"; + case ZXDH_BAR_MODULE_VPORT: return "ZXDH_BAR_MODULE_VPORT"; + case ZXDH_BAR_MODULE_BDF: return "ZXDH_BAR_MODULE_BDF"; + case ZXDH_BAR_MODULE_RISC_READY: return "ZXDH_BAR_MODULE_RISC_READY"; + case ZXDH_BAR_MODULE_REVERSE: return "ZXDH_BAR_MODULE_REVERSE"; + case ZXDH_BAR_MDOULE_NVME: return "ZXDH_BAR_MDOULE_NVME"; + case ZXDH_BAR_MDOULE_NPSDK: return "ZXDH_BAR_MDOULE_NPSDK"; + case ZXDH_BAR_MODULE_NP_TODO: return "ZXDH_BAR_MODULE_NP_TODO"; + case ZXDH_MODULE_BAR_MSG_TO_PF: return "ZXDH_MODULE_BAR_MSG_TO_PF"; + case ZXDH_MODULE_BAR_MSG_TO_VF: return "ZXDH_MODULE_BAR_MSG_TO_VF"; + case ZXDH_MODULE_FLASH: return "ZXDH_MODULE_FLASH"; + case ZXDH_BAR_MODULE_OFFSET_GET: return 
"ZXDH_BAR_MODULE_OFFSET_GET"; + case ZXDH_BAR_EVENT_OVS_WITH_VCB: return "ZXDH_BAR_EVENT_OVS_WITH_VCB"; + default: return "NA"; + } +} + static uint16_t zxdh_pcie_id_to_hard_lock(uint16_t src_pcieid, uint8_t dst) { @@ -825,3 +864,174 @@ zxdh_msg_chan_enable(struct rte_eth_dev *dev) return zxdh_bar_chan_enable(&misx_info, &hw->vport.vport); } + +static uint64_t +zxdh_recv_addr_get(uint8_t src_type, uint8_t dst_type, uint64_t virt_addr) +{ + uint8_t chan_id = 0; + uint8_t subchan_id = 0; + uint8_t src = 0; + uint8_t dst = 0; + + src = zxdh_bar_msg_dst_index_trans(src_type); + dst = zxdh_bar_msg_src_index_trans(dst_type); + if (src == ZXDH_BAR_MSG_SRC_ERR || dst == ZXDH_BAR_MSG_DST_ERR) + return 0; + + chan_id = chan_id_tbl[dst][src]; + subchan_id = 1 - subchan_id_tbl[dst][src]; + + return zxdh_subchan_addr_cal(virt_addr, chan_id, subchan_id); +} + +static void +zxdh_bar_msg_ack_async_msg_proc(struct zxdh_bar_msg_header *msg_header, + uint8_t *receiver_buff) +{ + struct zxdh_seqid_item *reps_info = &g_seqid_ring.reps_info_tbl[msg_header->msg_id]; + + if (reps_info->flag != ZXDH_REPS_INFO_FLAG_USED) { + PMD_MSG_LOG(ERR, "msg_id: %u is released", msg_header->msg_id); + return; + } + if (msg_header->len > reps_info->buffer_len - 4) { + PMD_MSG_LOG(ERR, "reps_buf_len is %u, but reps_msg_len is %u", + reps_info->buffer_len, msg_header->len + 4); + goto free_id; + } + uint8_t *reps_buffer = (uint8_t *)reps_info->reps_addr; + + rte_memcpy(reps_buffer + 4, receiver_buff, msg_header->len); + *(uint16_t *)(reps_buffer + 1) = msg_header->len; + *(uint8_t *)(reps_info->reps_addr) = ZXDH_REPS_HEADER_REPLYED; + +free_id: + zxdh_bar_chan_msgid_free(msg_header->msg_id); +} + +static void +zxdh_bar_msg_sync_msg_proc(uint64_t reply_addr, + struct zxdh_bar_msg_header *msg_header, + uint8_t *receiver_buff, void *dev) +{ + uint16_t reps_len = 0; + uint8_t *reps_buffer = NULL; + + reps_buffer = rte_malloc(NULL, ZXDH_BAR_MSG_PAYLOAD_MAX_LEN, 0); + if (reps_buffer == NULL) + return; + + zxdh_bar_chan_msg_recv_callback recv_func = msg_recv_func_tbl[msg_header->module_id]; + + recv_func(receiver_buff, msg_header->len, reps_buffer, &reps_len, dev); + msg_header->ack = ZXDH_BAR_CHAN_MSG_ACK; + msg_header->len = reps_len; + zxdh_bar_chan_msg_header_set(reply_addr, msg_header); + zxdh_bar_chan_msg_payload_set(reply_addr, reps_buffer, reps_len); + zxdh_bar_chan_msg_valid_set(reply_addr, ZXDH_BAR_MSG_CHAN_USABLE); + rte_free(reps_buffer); +} + +static uint64_t +zxdh_reply_addr_get(uint8_t sync, uint8_t src_type, + uint8_t dst_type, uint64_t virt_addr) +{ + uint64_t recv_rep_addr = 0; + uint8_t chan_id = 0; + uint8_t subchan_id = 0; + uint8_t src = 0; + uint8_t dst = 0; + + src = zxdh_bar_msg_dst_index_trans(src_type); + dst = zxdh_bar_msg_src_index_trans(dst_type); + if (src == ZXDH_BAR_MSG_SRC_ERR || dst == ZXDH_BAR_MSG_DST_ERR) + return 0; + + chan_id = chan_id_tbl[dst][src]; + subchan_id = 1 - subchan_id_tbl[dst][src]; + + if (sync == ZXDH_BAR_CHAN_MSG_SYNC) + recv_rep_addr = zxdh_subchan_addr_cal(virt_addr, chan_id, subchan_id); + else + recv_rep_addr = zxdh_subchan_addr_cal(virt_addr, chan_id, 1 - subchan_id); + + return recv_rep_addr; +} + +static uint16_t +zxdh_bar_chan_msg_header_check(struct zxdh_bar_msg_header *msg_header) +{ + uint16_t len = 0; + uint8_t module_id = 0; + + if (msg_header->valid != ZXDH_BAR_MSG_CHAN_USED) { + PMD_MSG_LOG(ERR, "recv header ERR: valid label is not used."); + return ZXDH_BAR_MSG_ERR_MODULE; + } + module_id = msg_header->module_id; + + if (module_id >= 
(uint8_t)ZXDH_BAR_MSG_MODULE_NUM) { + PMD_MSG_LOG(ERR, "recv header ERR: invalid module_id: %u.", module_id); + return ZXDH_BAR_MSG_ERR_MODULE; + } + len = msg_header->len; + + if (len > ZXDH_BAR_MSG_PAYLOAD_MAX_LEN) { + PMD_MSG_LOG(ERR, "recv header ERR: invalid mesg len: %u.", len); + return ZXDH_BAR_MSG_ERR_LEN; + } + if (msg_recv_func_tbl[msg_header->module_id] == NULL) { + PMD_MSG_LOG(ERR, "recv header ERR: module:%s(%u) doesn't register", + zxdh_module_id_name(module_id), module_id); + return ZXDH_BAR_MSG_ERR_MODULE_NOEXIST; + } + return ZXDH_BAR_MSG_OK; +} + +int +zxdh_bar_irq_recv(uint8_t src, uint8_t dst, uint64_t virt_addr, void *dev) +{ + struct zxdh_bar_msg_header msg_header = {0}; + uint64_t recv_addr = 0; + uint64_t reps_addr = 0; + uint16_t ret = 0; + uint8_t *recved_msg = NULL; + + recv_addr = zxdh_recv_addr_get(src, dst, virt_addr); + if (recv_addr == 0) { + PMD_MSG_LOG(ERR, "invalid driver type(src:%u, dst:%u).", src, dst); + return -1; + } + + zxdh_bar_chan_msg_header_get(recv_addr, &msg_header); + ret = zxdh_bar_chan_msg_header_check(&msg_header); + + if (ret != ZXDH_BAR_MSG_OK) { + PMD_MSG_LOG(ERR, "recv msg_head err, ret: %u.", ret); + return -1; + } + + recved_msg = rte_malloc(NULL, msg_header.len, 0); + if (recved_msg == NULL) { + PMD_MSG_LOG(ERR, "malloc temp buff failed."); + return -1; + } + zxdh_bar_chan_msg_payload_get(recv_addr, recved_msg, msg_header.len); + + reps_addr = zxdh_reply_addr_get(msg_header.sync, src, dst, virt_addr); + + if (msg_header.sync == ZXDH_BAR_CHAN_MSG_SYNC) { + zxdh_bar_msg_sync_msg_proc(reps_addr, &msg_header, recved_msg, dev); + goto exit; + } + zxdh_bar_chan_msg_valid_set(recv_addr, ZXDH_BAR_MSG_CHAN_USABLE); + if (msg_header.ack == ZXDH_BAR_CHAN_MSG_ACK) { + zxdh_bar_msg_ack_async_msg_proc(&msg_header, recved_msg); + goto exit; + } + return 0; + +exit: + rte_free(recved_msg); + return ZXDH_BAR_MSG_OK; +} diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index b2beedec64..5742000a3b 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -13,10 +13,17 @@ extern "C" { #endif -#define ZXDH_BAR0_INDEX 0 -#define ZXDH_CTRLCH_OFFSET (0x2000) +#define ZXDH_BAR0_INDEX 0 +#define ZXDH_CTRLCH_OFFSET (0x2000) +#define ZXDH_MSG_CHAN_PFVFSHARE_OFFSET (ZXDH_CTRLCH_OFFSET + 0x1000) #define ZXDH_MSIX_INTR_MSG_VEC_BASE 1 +#define ZXDH_MSIX_INTR_MSG_VEC_NUM 3 +#define ZXDH_MSIX_INTR_DTB_VEC (ZXDH_MSIX_INTR_MSG_VEC_BASE + ZXDH_MSIX_INTR_MSG_VEC_NUM) +#define ZXDH_MSIX_INTR_DTB_VEC_NUM 1 +#define ZXDH_INTR_NONQUE_NUM (ZXDH_MSIX_INTR_MSG_VEC_NUM + ZXDH_MSIX_INTR_DTB_VEC_NUM + 1) +#define ZXDH_QUEUE_INTR_VEC_BASE (ZXDH_MSIX_INTR_DTB_VEC + ZXDH_MSIX_INTR_DTB_VEC_NUM) +#define ZXDH_QUEUE_INTR_VEC_NUM 256 #define ZXDH_BAR_MSG_POLLING_SPAN 100 #define ZXDH_BAR_MSG_POLL_CNT_PER_MS (1 * 1000 / ZXDH_BAR_MSG_POLLING_SPAN) @@ -201,6 +208,9 @@ struct zxdh_bar_msg_header { uint16_t dst_pcieid; /* used in PF-->VF */ }; +typedef int (*zxdh_bar_chan_msg_recv_callback)(void *pay_load, uint16_t len, + void *reps_buffer, uint16_t *reps_len, void *dev); + int zxdh_msg_chan_init(void); int zxdh_bar_msg_chan_exit(void); int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev); @@ -209,6 +219,8 @@ int zxdh_msg_chan_enable(struct rte_eth_dev *dev); int zxdh_bar_chan_sync_msg_send(struct zxdh_pci_bar_msg *in, struct zxdh_msg_recviver_mem *result); +int zxdh_bar_irq_recv(uint8_t src, uint8_t dst, uint64_t virt_addr, void *dev); + #ifdef __cplusplus } #endif diff --git a/drivers/net/zxdh/zxdh_pci.c b/drivers/net/zxdh/zxdh_pci.c index 
68785aa03e..8e7a9c1213 100644 --- a/drivers/net/zxdh/zxdh_pci.c +++ b/drivers/net/zxdh/zxdh_pci.c @@ -14,6 +14,7 @@ #include "zxdh_ethdev.h" #include "zxdh_pci.h" #include "zxdh_logs.h" +#include "zxdh_queue.h" #define ZXDH_PMD_DEFAULT_GUEST_FEATURES \ (1ULL << ZXDH_NET_F_MRG_RXBUF | \ @@ -93,6 +94,27 @@ zxdh_set_features(struct zxdh_hw *hw, uint64_t features) rte_write32(features >> 32, &hw->common_cfg->guest_feature); } +static uint16_t +zxdh_set_config_irq(struct zxdh_hw *hw, uint16_t vec) +{ + rte_write16(vec, &hw->common_cfg->msix_config); + return rte_read16(&hw->common_cfg->msix_config); +} + +static uint16_t +zxdh_set_queue_irq(struct zxdh_hw *hw, struct zxdh_virtqueue *vq, uint16_t vec) +{ + rte_write16(vq->vq_queue_index, &hw->common_cfg->queue_select); + rte_write16(vec, &hw->common_cfg->queue_msix_vector); + return rte_read16(&hw->common_cfg->queue_msix_vector); +} + +static uint8_t +zxdh_get_isr(struct zxdh_hw *hw) +{ + return rte_read8(hw->isr); +} + const struct zxdh_pci_ops zxdh_dev_pci_ops = { .read_dev_cfg = zxdh_read_dev_config, .write_dev_cfg = zxdh_write_dev_config, @@ -100,8 +122,17 @@ const struct zxdh_pci_ops zxdh_dev_pci_ops = { .set_status = zxdh_set_status, .get_features = zxdh_get_features, .set_features = zxdh_set_features, + .set_queue_irq = zxdh_set_queue_irq, + .set_config_irq = zxdh_set_config_irq, + .get_isr = zxdh_get_isr, }; +uint8_t +zxdh_pci_isr(struct zxdh_hw *hw) +{ + return ZXDH_VTPCI_OPS(hw)->get_isr(hw); +} + uint16_t zxdh_pci_get_features(struct zxdh_hw *hw) { diff --git a/drivers/net/zxdh/zxdh_pci.h b/drivers/net/zxdh/zxdh_pci.h index 7905911a34..41e47d5d3b 100644 --- a/drivers/net/zxdh/zxdh_pci.h +++ b/drivers/net/zxdh/zxdh_pci.h @@ -22,6 +22,13 @@ enum zxdh_msix_status { ZXDH_MSIX_ENABLED = 2 }; +/* The bit of the ISR which indicates a device has an interrupt. */ +#define ZXDH_PCI_ISR_INTR 0x1 +/* The bit of the ISR which indicates a device configuration change. */ +#define ZXDH_PCI_ISR_CONFIG 0x2 +/* Vector value used to disable MSI for queue. */ +#define ZXDH_MSI_NO_VECTOR 0x7F + #define ZXDH_NET_F_MAC 5 /* Host has given MAC address. */ #define ZXDH_NET_F_MRG_RXBUF 15 /* Host can merge receive buffers. */ #define ZXDH_NET_F_STATUS 16 /* zxdh_net_config.status available */ @@ -110,6 +117,9 @@ struct zxdh_pci_ops { uint64_t (*get_features)(struct zxdh_hw *hw); void (*set_features)(struct zxdh_hw *hw, uint64_t features); + uint16_t (*set_queue_irq)(struct zxdh_hw *hw, struct zxdh_virtqueue *vq, uint16_t vec); + uint16_t (*set_config_irq)(struct zxdh_hw *hw, uint16_t vec); + uint8_t (*get_isr)(struct zxdh_hw *hw); }; struct zxdh_hw_internal { @@ -130,6 +140,7 @@ void zxdh_get_pci_dev_config(struct zxdh_hw *hw); uint16_t zxdh_pci_get_features(struct zxdh_hw *hw); enum zxdh_msix_status zxdh_pci_msix_detect(struct rte_pci_device *dev); +uint8_t zxdh_pci_isr(struct zxdh_hw *hw); #ifdef __cplusplus } diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h new file mode 100644 index 0000000000..9c790cd9d3 --- /dev/null +++ b/drivers/net/zxdh/zxdh_queue.h @@ -0,0 +1,110 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#ifndef ZXDH_QUEUE_H +#define ZXDH_QUEUE_H + +#include <stdint.h> + +#include <rte_common.h> + +#include "zxdh_ethdev.h" +#include "zxdh_rxtx.h" + +#ifdef __cplusplus +extern "C" { +#endif + +/* + * ring descriptors: 16 bytes. + * These can chain together via "next". + */ +struct zxdh_vring_desc { + uint64_t addr; /* Address (guest-physical). */ + uint32_t len; /* Length. 
*/ + uint16_t flags; /* The flags as indicated above. */ + uint16_t next; /* We chain unused descriptors via this. */ +} __rte_packed; + +struct zxdh_vring_avail { + uint16_t flags; + uint16_t idx; + uint16_t ring[]; +} __rte_packed; + +struct zxdh_vring_packed_desc { + uint64_t addr; + uint32_t len; + uint16_t id; + uint16_t flags; +} __rte_packed; + +struct zxdh_vring_packed_desc_event { + uint16_t desc_event_off_wrap; + uint16_t desc_event_flags; +} __rte_packed; + +struct zxdh_vring_packed { + uint32_t num; + struct zxdh_vring_packed_desc *desc; + struct zxdh_vring_packed_desc_event *driver; + struct zxdh_vring_packed_desc_event *device; +} __rte_packed; + +struct zxdh_vq_desc_extra { + void *cookie; + uint16_t ndescs; + uint16_t next; +} __rte_packed; + +struct zxdh_virtqueue { + struct zxdh_hw *hw; /* < zxdh_hw structure pointer. */ + struct { + /* vring keeping descs and events */ + struct zxdh_vring_packed ring; + uint8_t used_wrap_counter; + uint8_t rsv; + uint16_t cached_flags; /* < cached flags for descs */ + uint16_t event_flags_shadow; + uint16_t rsv1; + } __rte_packed vq_packed; + uint16_t vq_used_cons_idx; /* < last consumed descriptor */ + uint16_t vq_nentries; /* < vring desc numbers */ + uint16_t vq_free_cnt; /* < num of desc available */ + uint16_t vq_avail_idx; /* < sync until needed */ + uint16_t vq_free_thresh; /* < free threshold */ + uint16_t rsv2; + + void *vq_ring_virt_mem; /* < linear address of vring */ + uint32_t vq_ring_size; + + union { + struct zxdh_virtnet_rx rxq; + struct zxdh_virtnet_tx txq; + }; + + /* + * physical address of vring, or virtual address + */ + rte_iova_t vq_ring_mem; + + /* + * Head of the free chain in the descriptor table. If + * there are no free descriptors, this will be set to + * VQ_RING_DESC_CHAIN_END. + */ + uint16_t vq_desc_head_idx; + uint16_t vq_desc_tail_idx; + uint16_t vq_queue_index; /* < PCI queue index */ + uint16_t offset; /* < relative offset to obtain addr in mbuf */ + uint16_t *notify_addr; + struct rte_mbuf **sw_ring; /* < RX software ring. */ + struct zxdh_vq_desc_extra vq_descx[]; +} __rte_packed; + +#ifdef __cplusplus +} +#endif + +#endif /* ZXDH_QUEUE_H */ diff --git a/drivers/net/zxdh/zxdh_rxtx.h b/drivers/net/zxdh/zxdh_rxtx.h new file mode 100644 index 0000000000..7d4b5481ec --- /dev/null +++ b/drivers/net/zxdh/zxdh_rxtx.h @@ -0,0 +1,55 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#ifndef ZXDH_RXTX_H +#define ZXDH_RXTX_H + +#include <stdint.h> + +#include <rte_common.h> +#include <rte_mbuf_core.h> + +#ifdef __cplusplus +extern "C" { +#endif + +struct zxdh_virtnet_stats { + uint64_t packets; + uint64_t bytes; + uint64_t errors; + uint64_t multicast; + uint64_t broadcast; + uint64_t truncated_err; + uint64_t size_bins[8]; +}; + +struct zxdh_virtnet_rx { + struct zxdh_virtqueue *vq; + + /* dummy mbuf, for wraparound when processing RX ring. */ + struct rte_mbuf fake_mbuf; + + uint64_t mbuf_initializer; /* value to init mbufs. */ + struct rte_mempool *mpool; /* mempool for mbuf allocation */ + uint16_t queue_id; /* DPDK queue index. */ + uint16_t port_id; /* Device port identifier. */ + struct zxdh_virtnet_stats stats; + const struct rte_memzone *mz; /* mem zone to populate RX ring. */ +} __rte_packed; + +struct zxdh_virtnet_tx { + struct zxdh_virtqueue *vq; + const struct rte_memzone *zxdh_net_hdr_mz; /* memzone to populate hdr. */ + rte_iova_t zxdh_net_hdr_mem; /* hdr for each xmit packet */ + uint16_t queue_id; /* DPDK queue index. 
*/ + uint16_t port_id; /* Device port identifier. */ + struct zxdh_virtnet_stats stats; + const struct rte_memzone *mz; /* mem zone to populate TX ring. */ +} __rte_packed; + +#ifdef __cplusplus +} +#endif + +#endif /* ZXDH_RXTX_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 62557 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
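The size_bins[8] field in zxdh_virtnet_stats above implies a per-queue packet-size histogram, but this series does not yet include the datapath code that fills it. Below is only a sketch of how such a histogram could be updated per packet, assuming the bucket layout the virtio PMD conventionally uses (runts, exactly 64B, power-of-two ranges up to 1023B, standard frames, jumbo); the function name is illustrative and not part of the patch:

#include <stdint.h>

/* Sketch only: bucket a packet length into one of 8 histogram bins. */
static void
sketch_update_size_bins(uint64_t size_bins[8], uint32_t pkt_len)
{
	uint32_t bin;

	if (pkt_len == 64)                        /* bin 1: exactly 64B */
		bin = 1;
	else if (pkt_len > 64 && pkt_len < 1024)  /* bins 2..5: 65..1023B */
		bin = (sizeof(pkt_len) * 8) - __builtin_clz(pkt_len) - 5;
	else if (pkt_len < 64)                    /* bin 0: runts */
		bin = 0;
	else if (pkt_len < 1519)                  /* bin 6: 1024..1518B */
		bin = 6;
	else                                      /* bin 7: jumbo frames */
		bin = 7;
	size_bins[bin]++;
}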
* [PATCH v10 08/10] net/zxdh: add zxdh dev infos get ops 2024-11-04 11:58 ` [PATCH v10 00/10] net/zxdh: introduce net zxdh driver Junlong Wang ` (6 preceding siblings ...) 2024-11-04 11:58 ` [PATCH v10 07/10] net/zxdh: add configure zxdh intr implementation Junlong Wang @ 2024-11-04 11:58 ` Junlong Wang 2024-11-04 11:58 ` [PATCH v10 09/10] net/zxdh: add zxdh dev configure ops Junlong Wang ` (3 subsequent siblings) 11 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-11-04 11:58 UTC (permalink / raw) To: ferruh.yigit; +Cc: dev, wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 3415 bytes --] Add support for zxdh infos get. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/zxdh_ethdev.c | 45 +++++++++++++++++++++++++++++++++- drivers/net/zxdh/zxdh_ethdev.h | 2 ++ 2 files changed, 46 insertions(+), 1 deletion(-) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 1a3658e74b..11ec5dc34f 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -27,6 +27,44 @@ zxdh_vport_to_vfid(union zxdh_virport_num v) return (v.epid * 8 + v.pfid) + 1152; } +static int32_t +zxdh_dev_infos_get(struct rte_eth_dev *dev, + struct rte_eth_dev_info *dev_info) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + dev_info->speed_capa = rte_eth_speed_bitflag(hw->speed, RTE_ETH_LINK_FULL_DUPLEX); + dev_info->max_rx_queues = RTE_MIN(hw->max_queue_pairs, ZXDH_RX_QUEUES_MAX); + dev_info->max_tx_queues = RTE_MIN(hw->max_queue_pairs, ZXDH_TX_QUEUES_MAX); + dev_info->min_rx_bufsize = ZXDH_MIN_RX_BUFSIZE; + dev_info->max_rx_pktlen = ZXDH_MAX_RX_PKTLEN; + dev_info->max_mac_addrs = ZXDH_MAX_MAC_ADDRS; + dev_info->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_VLAN_STRIP | + RTE_ETH_RX_OFFLOAD_VLAN_FILTER | + RTE_ETH_RX_OFFLOAD_QINQ_STRIP); + dev_info->rx_offload_capa |= (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_RX_OFFLOAD_UDP_CKSUM | + RTE_ETH_RX_OFFLOAD_TCP_CKSUM | + RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM); + dev_info->rx_offload_capa |= (RTE_ETH_RX_OFFLOAD_SCATTER); + dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TCP_LRO; + dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_RSS_HASH; + + dev_info->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_MULTI_SEGS); + dev_info->tx_offload_capa |= (RTE_ETH_TX_OFFLOAD_TCP_TSO | + RTE_ETH_TX_OFFLOAD_UDP_TSO); + dev_info->tx_offload_capa |= (RTE_ETH_TX_OFFLOAD_VLAN_INSERT | + RTE_ETH_TX_OFFLOAD_QINQ_INSERT | + RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO); + dev_info->tx_offload_capa |= (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_UDP_CKSUM | + RTE_ETH_TX_OFFLOAD_TCP_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM); + + return 0; +} + static void zxdh_queues_unbind_intr(struct rte_eth_dev *dev) { @@ -336,6 +374,11 @@ zxdh_configure_intr(struct rte_eth_dev *dev) return ret; } +/* dev_ops for zxdh, bare necessities for basic operation */ +static const struct eth_dev_ops zxdh_eth_dev_ops = { + .dev_infos_get = zxdh_dev_infos_get, +}; + static int32_t zxdh_init_device(struct rte_eth_dev *eth_dev) { @@ -395,7 +438,7 @@ zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) struct zxdh_hw *hw = eth_dev->data->dev_private; int ret = 0; - eth_dev->dev_ops = NULL; + eth_dev->dev_ops = &zxdh_eth_dev_ops; /* Allocate memory for storing MAC addresses */ eth_dev->data->mac_addrs = rte_zmalloc("zxdh_mac", diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index 65726f3a20..89c5a9bb5f 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ 
b/drivers/net/zxdh/zxdh_ethdev.h @@ -29,6 +29,8 @@ extern "C" { #define ZXDH_NUM_BARS 2 #define ZXDH_RX_QUEUES_MAX 128U #define ZXDH_TX_QUEUES_MAX 128U +#define ZXDH_MIN_RX_BUFSIZE 64 +#define ZXDH_MAX_RX_PKTLEN 14000U union zxdh_virport_num { uint16_t vport; -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 7385 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
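The limits and offload flags filled in by zxdh_dev_infos_get() are consumed through the generic ethdev API. A short application-side sketch (not part of the patch; rte_eth_dev_info_get() is the standard entry point that dispatches to this new op):

#include <stdio.h>
#include <rte_ethdev.h>

static int
show_zxdh_limits(uint16_t port_id)
{
	struct rte_eth_dev_info info;
	int ret = rte_eth_dev_info_get(port_id, &info);

	if (ret != 0)
		return ret;
	/* bounded by ZXDH_RX/TX_QUEUES_MAX and ZXDH_MAX_RX_PKTLEN above */
	printf("port %u: max_rxq=%u max_txq=%u max_rx_pktlen=%u\n",
		port_id, info.max_rx_queues, info.max_tx_queues,
		info.max_rx_pktlen);
	if (info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_TCP_LRO)
		printf("port %u: LRO supported\n", port_id);
	return 0;
}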
* [PATCH v10 09/10] net/zxdh: add zxdh dev configure ops 2024-11-04 11:58 ` [PATCH v10 00/10] net/zxdh: introduce net zxdh driver Junlong Wang ` (7 preceding siblings ...) 2024-11-04 11:58 ` [PATCH v10 08/10] net/zxdh: add zxdh dev infos get ops Junlong Wang @ 2024-11-04 11:58 ` Junlong Wang 2024-11-04 11:58 ` [PATCH v10 10/10] net/zxdh: add zxdh dev close ops Junlong Wang ` (2 subsequent siblings) 11 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-11-04 11:58 UTC (permalink / raw) To: ferruh.yigit; +Cc: dev, wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 38653 bytes --] Provide zxdh dev configure ops for queue check, reset, resource allocation, etc. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/meson.build | 1 + drivers/net/zxdh/zxdh_common.c | 144 +++++++++++ drivers/net/zxdh/zxdh_common.h | 11 + drivers/net/zxdh/zxdh_ethdev.c | 459 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev.h | 16 ++ drivers/net/zxdh/zxdh_pci.c | 106 ++++++++ drivers/net/zxdh/zxdh_pci.h | 30 ++- drivers/net/zxdh/zxdh_queue.c | 127 +++++++++ drivers/net/zxdh/zxdh_queue.h | 175 +++++++++++++ 9 files changed, 1068 insertions(+), 1 deletion(-) create mode 100644 drivers/net/zxdh/zxdh_queue.c diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build index a16db47f89..b96aa5a27e 100644 --- a/drivers/net/zxdh/meson.build +++ b/drivers/net/zxdh/meson.build @@ -18,4 +18,5 @@ sources = files( 'zxdh_pci.c', 'zxdh_msg.c', 'zxdh_common.c', + 'zxdh_queue.c', ) diff --git a/drivers/net/zxdh/zxdh_common.c b/drivers/net/zxdh/zxdh_common.c index 0d7ea4535d..4f18c97ed7 100644 --- a/drivers/net/zxdh/zxdh_common.c +++ b/drivers/net/zxdh/zxdh_common.c @@ -20,6 +20,7 @@ #define ZXDH_COMMON_TABLE_WRITE 1 #define ZXDH_COMMON_FIELD_PHYPORT 6 +#define ZXDH_COMMON_FIELD_DATACH 3 #define ZXDH_RSC_TBL_CONTENT_LEN_MAX (257 * 2) @@ -254,3 +255,146 @@ zxdh_panelid_get(struct rte_eth_dev *dev, uint8_t *panelid) int32_t ret = zxdh_get_res_panel_id(&param, panelid); return ret; } + +uint32_t +zxdh_read_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint64_t baseaddr = (uint64_t)(hw->bar_addr[bar]); + uint32_t val = *((volatile uint32_t *)(baseaddr + reg)); + return val; +} + +void +zxdh_write_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg, uint32_t val) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint64_t baseaddr = (uint64_t)(hw->bar_addr[bar]); + *((volatile uint32_t *)(baseaddr + reg)) = val; +} + +static bool +zxdh_try_lock(struct zxdh_hw *hw) +{ + uint32_t var = zxdh_read_comm_reg((uint64_t)hw->common_cfg, ZXDH_VF_LOCK_REG); + + /* check whether lock is used */ + if (!(var & ZXDH_VF_LOCK_ENABLE_MASK)) + return false; + + return true; +} + +int32_t +zxdh_timedlock(struct zxdh_hw *hw, uint32_t us) +{ + uint16_t timeout = 0; + + while ((timeout++) < ZXDH_ACQUIRE_CHANNEL_NUM_MAX) { + rte_delay_us_block(us); + /* acquire hw lock */ + if (!zxdh_try_lock(hw)) { + PMD_DRV_LOG(ERR, "Failed to acquire hw lock, retry: %d", timeout); + continue; + } + break; + } + if (timeout >= ZXDH_ACQUIRE_CHANNEL_NUM_MAX) { + PMD_DRV_LOG(ERR, "Failed to acquire channel"); + return -1; + } + return 0; +} + +void +zxdh_release_lock(struct zxdh_hw *hw) +{ + uint32_t var = zxdh_read_comm_reg((uint64_t)hw->common_cfg, ZXDH_VF_LOCK_REG); + + if (var & ZXDH_VF_LOCK_ENABLE_MASK) { + var &= ~ZXDH_VF_LOCK_ENABLE_MASK; + zxdh_write_comm_reg((uint64_t)hw->common_cfg, ZXDH_VF_LOCK_REG, var); + } 
+} + +uint32_t +zxdh_read_comm_reg(uint64_t pci_comm_cfg_baseaddr, uint32_t reg) +{ + uint32_t val = *((volatile uint32_t *)(pci_comm_cfg_baseaddr + reg)); + return val; +} + +void +zxdh_write_comm_reg(uint64_t pci_comm_cfg_baseaddr, uint32_t reg, uint32_t val) +{ + *((volatile uint32_t *)(pci_comm_cfg_baseaddr + reg)) = val; +} + +static int32_t +zxdh_common_table_write(struct zxdh_hw *hw, uint8_t field, + void *buff, uint16_t buff_size) +{ + struct zxdh_pci_bar_msg desc; + struct zxdh_msg_recviver_mem msg_rsp; + int32_t ret = 0; + + if (!hw->msg_chan_init) { + PMD_DRV_LOG(ERR, "Bar messages channel not initialized"); + return -1; + } + if (buff_size != 0 && buff == NULL) { + PMD_DRV_LOG(ERR, "Buff is invalid"); + return -1; + } + + ret = zxdh_fill_common_msg(hw, &desc, ZXDH_COMMON_TABLE_WRITE, + field, buff, buff_size); + + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to fill common msg"); + return ret; + } + + ret = zxdh_send_command(hw, &desc, ZXDH_BAR_MODULE_TBL, &msg_rsp); + if (ret != 0) + goto free_msg_data; + + ret = zxdh_common_rsp_check(&msg_rsp, NULL, 0); + if (ret != 0) + goto free_rsp_data; + +free_rsp_data: + rte_free(msg_rsp.recv_buffer); +free_msg_data: + rte_free(desc.payload_addr); + return ret; +} + +int32_t +zxdh_datach_set(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint16_t buff_size = (hw->queue_num + 1) * 2; + int32_t ret = 0; + uint16_t i; + + void *buff = rte_zmalloc(NULL, buff_size, 0); + if (unlikely(buff == NULL)) { + PMD_DRV_LOG(ERR, "Failed to allocate buff"); + return -ENOMEM; + } + memset(buff, 0, buff_size); + uint16_t *pdata = (uint16_t *)buff; + *pdata++ = hw->queue_num; + + for (i = 0; i < hw->queue_num; i++) + *(pdata + i) = hw->channel_context[i].ph_chno; + + ret = zxdh_common_table_write(hw, ZXDH_COMMON_FIELD_DATACH, + (void *)buff, buff_size); + if (ret != 0) + PMD_DRV_LOG(ERR, "Failed to setup data channel of common table"); + + rte_free(buff); + return ret; +} diff --git a/drivers/net/zxdh/zxdh_common.h b/drivers/net/zxdh/zxdh_common.h index ba29ca1dad..4a06da9495 100644 --- a/drivers/net/zxdh/zxdh_common.h +++ b/drivers/net/zxdh/zxdh_common.h @@ -14,6 +14,10 @@ extern "C" { #endif +#define ZXDH_VF_LOCK_REG 0x90 +#define ZXDH_VF_LOCK_ENABLE_MASK 0x1 +#define ZXDH_ACQUIRE_CHANNEL_NUM_MAX 10 + struct zxdh_res_para { uint64_t virt_addr; uint16_t pcie_id; @@ -22,6 +26,13 @@ struct zxdh_res_para { int32_t zxdh_phyport_get(struct rte_eth_dev *dev, uint8_t *phyport); int32_t zxdh_panelid_get(struct rte_eth_dev *dev, uint8_t *pannelid); +uint32_t zxdh_read_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg); +void zxdh_write_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg, uint32_t val); +void zxdh_release_lock(struct zxdh_hw *hw); +int32_t zxdh_timedlock(struct zxdh_hw *hw, uint32_t us); +uint32_t zxdh_read_comm_reg(uint64_t pci_comm_cfg_baseaddr, uint32_t reg); +void zxdh_write_comm_reg(uint64_t pci_comm_cfg_baseaddr, uint32_t reg, uint32_t val); +int32_t zxdh_datach_set(struct rte_eth_dev *dev); #ifdef __cplusplus } diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 11ec5dc34f..54e51a31fa 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -374,8 +374,467 @@ zxdh_configure_intr(struct rte_eth_dev *dev) return ret; } +static int32_t +zxdh_features_update(struct zxdh_hw *hw, + const struct rte_eth_rxmode *rxmode, + const struct rte_eth_txmode *txmode) +{ + uint64_t rx_offloads = rxmode->offloads; + uint64_t tx_offloads = txmode->offloads; 
+ uint64_t req_features = hw->guest_features; + + if (rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM)) + req_features |= (1ULL << ZXDH_NET_F_GUEST_CSUM); + + if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) + req_features |= (1ULL << ZXDH_NET_F_GUEST_TSO4) | + (1ULL << ZXDH_NET_F_GUEST_TSO6); + + if (tx_offloads & (RTE_ETH_TX_OFFLOAD_UDP_CKSUM | RTE_ETH_TX_OFFLOAD_TCP_CKSUM)) + req_features |= (1ULL << ZXDH_NET_F_CSUM); + + if (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO) + req_features |= (1ULL << ZXDH_NET_F_HOST_TSO4) | + (1ULL << ZXDH_NET_F_HOST_TSO6); + + if (tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_TSO) + req_features |= (1ULL << ZXDH_NET_F_HOST_UFO); + + req_features = req_features & hw->host_features; + hw->guest_features = req_features; + + ZXDH_VTPCI_OPS(hw)->set_features(hw, req_features); + + if ((rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM)) && + !vtpci_with_feature(hw, ZXDH_NET_F_GUEST_CSUM)) { + PMD_DRV_LOG(ERR, "rx checksum not available on this host"); + return -ENOTSUP; + } + + if ((rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) && + (!vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO4) || + !vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO6))) { + PMD_DRV_LOG(ERR, "Large Receive Offload not available on this host"); + return -ENOTSUP; + } + return 0; +} + +static bool +rx_offload_enabled(struct zxdh_hw *hw) +{ + return vtpci_with_feature(hw, ZXDH_NET_F_GUEST_CSUM) || + vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO4) || + vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO6); +} + +static bool +tx_offload_enabled(struct zxdh_hw *hw) +{ + return vtpci_with_feature(hw, ZXDH_NET_F_CSUM) || + vtpci_with_feature(hw, ZXDH_NET_F_HOST_TSO4) || + vtpci_with_feature(hw, ZXDH_NET_F_HOST_TSO6) || + vtpci_with_feature(hw, ZXDH_NET_F_HOST_UFO); +} + +static void +zxdh_dev_free_mbufs(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint16_t nr_vq = hw->queue_num; + uint32_t i = 0; + + const char *type = NULL; + struct zxdh_virtqueue *vq = NULL; + struct rte_mbuf *buf = NULL; + int32_t queue_type = 0; + + if (hw->vqs == NULL) + return; + + for (i = 0; i < nr_vq; i++) { + vq = hw->vqs[i]; + if (!vq) + continue; + + queue_type = zxdh_get_queue_type(i); + if (queue_type == ZXDH_VTNET_RQ) + type = "rxq"; + else if (queue_type == ZXDH_VTNET_TQ) + type = "txq"; + else + continue; + PMD_DRV_LOG(DEBUG, "Before freeing %s[%d] used and unused buf", type, i); + + while ((buf = zxdh_virtqueue_detach_unused(vq)) != NULL) + rte_pktmbuf_free(buf); + } +} + +static int32_t +zxdh_get_available_channel(struct rte_eth_dev *dev, uint8_t queue_type) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint16_t base = (queue_type == ZXDH_VTNET_RQ) ? 
0 : 1; + uint16_t i = 0; + uint16_t j = 0; + uint16_t done = 0; + int32_t ret = 0; + + ret = zxdh_timedlock(hw, 1000); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to acquire hw lock, timeout"); + return -1; + } + + /* Iterate COI table and find free channel */ + for (i = ZXDH_QUEUES_BASE / 32; i < ZXDH_TOTAL_QUEUES_NUM / 32; i++) { + uint32_t addr = ZXDH_QUERES_SHARE_BASE + (i * sizeof(uint32_t)); + uint32_t var = zxdh_read_bar_reg(dev, ZXDH_BAR0_INDEX, addr); + + for (j = base; j < 32; j += 2) { + /* Got the available channel & update COI table */ + if ((var & (1 << j)) == 0) { + var |= (1 << j); + zxdh_write_bar_reg(dev, ZXDH_BAR0_INDEX, addr, var); + done = 1; + break; + } + } + if (done) + break; + } + zxdh_release_lock(hw); + /* check for no channel condition */ + if (done != 1) { + PMD_DRV_LOG(ERR, "No available queues"); + return -1; + } + /* return the available channel ID */ + return (i * 32) + j; +} + +static int32_t +zxdh_acquire_channel(struct rte_eth_dev *dev, uint16_t lch) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + if (hw->channel_context[lch].valid == 1) { + PMD_DRV_LOG(DEBUG, "Logical channel:%u already acquired physical channel:%u", + lch, hw->channel_context[lch].ph_chno); + return hw->channel_context[lch].ph_chno; + } + int32_t pch = zxdh_get_available_channel(dev, zxdh_get_queue_type(lch)); + + if (pch < 0) { + PMD_DRV_LOG(ERR, "Failed to acquire channel"); + return -1; + } + hw->channel_context[lch].ph_chno = (uint16_t)pch; + hw->channel_context[lch].valid = 1; + PMD_DRV_LOG(DEBUG, "Acquire channel success lch:%u --> pch:%d", lch, pch); + return 0; +} + +static void +zxdh_init_vring(struct zxdh_virtqueue *vq) +{ + int32_t size = vq->vq_nentries; + uint8_t *ring_mem = vq->vq_ring_virt_mem; + + memset(ring_mem, 0, vq->vq_ring_size); + + vq->vq_used_cons_idx = 0; + vq->vq_desc_head_idx = 0; + vq->vq_avail_idx = 0; + vq->vq_desc_tail_idx = (uint16_t)(vq->vq_nentries - 1); + vq->vq_free_cnt = vq->vq_nentries; + memset(vq->vq_descx, 0, sizeof(struct zxdh_vq_desc_extra) * vq->vq_nentries); + vring_init_packed(&vq->vq_packed.ring, ring_mem, ZXDH_PCI_VRING_ALIGN, size); + vring_desc_init_packed(vq, size); + virtqueue_disable_intr(vq); +} + +static int32_t +zxdh_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_logic_qidx) +{ + char vq_name[ZXDH_VIRTQUEUE_MAX_NAME_SZ] = {0}; + char vq_hdr_name[ZXDH_VIRTQUEUE_MAX_NAME_SZ] = {0}; + const struct rte_memzone *mz = NULL; + const struct rte_memzone *hdr_mz = NULL; + uint32_t size = 0; + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_virtnet_rx *rxvq = NULL; + struct zxdh_virtnet_tx *txvq = NULL; + struct zxdh_virtqueue *vq = NULL; + size_t sz_hdr_mz = 0; + void *sw_ring = NULL; + int32_t queue_type = zxdh_get_queue_type(vtpci_logic_qidx); + int32_t numa_node = dev->device->numa_node; + uint16_t vtpci_phy_qidx = 0; + uint32_t vq_size = 0; + int32_t ret = 0; + + if (hw->channel_context[vtpci_logic_qidx].valid == 0) { + PMD_DRV_LOG(ERR, "lch %d is invalid", vtpci_logic_qidx); + return -EINVAL; + } + vtpci_phy_qidx = hw->channel_context[vtpci_logic_qidx].ph_chno; + + PMD_DRV_LOG(DEBUG, "vtpci_logic_qidx:%d setting up physical queue: %u on NUMA node %d", + vtpci_logic_qidx, vtpci_phy_qidx, numa_node); + + vq_size = ZXDH_QUEUE_DEPTH; + + if (ZXDH_VTPCI_OPS(hw)->set_queue_num != NULL) + ZXDH_VTPCI_OPS(hw)->set_queue_num(hw, vtpci_phy_qidx, vq_size); + + snprintf(vq_name, sizeof(vq_name), "port%d_vq%d", dev->data->port_id, vtpci_phy_qidx); + + size = RTE_ALIGN_CEIL(sizeof(*vq) + vq_size * sizeof(struct zxdh_vq_desc_extra), + 
RTE_CACHE_LINE_SIZE); + if (queue_type == ZXDH_VTNET_TQ) { + /* + * For each xmit packet, allocate a zxdh_net_hdr + * and indirect ring elements + */ + sz_hdr_mz = vq_size * sizeof(struct zxdh_tx_region); + } + + vq = rte_zmalloc_socket(vq_name, size, RTE_CACHE_LINE_SIZE, numa_node); + if (vq == NULL) { + PMD_DRV_LOG(ERR, "can not allocate vq"); + return -ENOMEM; + } + hw->vqs[vtpci_logic_qidx] = vq; + + vq->hw = hw; + vq->vq_queue_index = vtpci_phy_qidx; + vq->vq_nentries = vq_size; + + vq->vq_packed.used_wrap_counter = 1; + vq->vq_packed.cached_flags = ZXDH_VRING_PACKED_DESC_F_AVAIL; + vq->vq_packed.event_flags_shadow = 0; + if (queue_type == ZXDH_VTNET_RQ) + vq->vq_packed.cached_flags |= ZXDH_VRING_DESC_F_WRITE; + + /* + * Reserve a memzone for vring elements + */ + size = vring_size(hw, vq_size, ZXDH_PCI_VRING_ALIGN); + vq->vq_ring_size = RTE_ALIGN_CEIL(size, ZXDH_PCI_VRING_ALIGN); + PMD_DRV_LOG(DEBUG, "vring_size: %d, rounded_vring_size: %d", size, vq->vq_ring_size); + + mz = rte_memzone_reserve_aligned(vq_name, vq->vq_ring_size, + numa_node, RTE_MEMZONE_IOVA_CONTIG, + ZXDH_PCI_VRING_ALIGN); + if (mz == NULL) { + if (rte_errno == EEXIST) + mz = rte_memzone_lookup(vq_name); + if (mz == NULL) { + ret = -ENOMEM; + goto fail_q_alloc; + } + } + + memset(mz->addr, 0, mz->len); + + vq->vq_ring_mem = mz->iova; + vq->vq_ring_virt_mem = mz->addr; + + zxdh_init_vring(vq); + + if (sz_hdr_mz) { + snprintf(vq_hdr_name, sizeof(vq_hdr_name), "port%d_vq%d_hdr", + dev->data->port_id, vtpci_phy_qidx); + hdr_mz = rte_memzone_reserve_aligned(vq_hdr_name, sz_hdr_mz, + numa_node, RTE_MEMZONE_IOVA_CONTIG, + RTE_CACHE_LINE_SIZE); + if (hdr_mz == NULL) { + if (rte_errno == EEXIST) + hdr_mz = rte_memzone_lookup(vq_hdr_name); + if (hdr_mz == NULL) { + ret = -ENOMEM; + goto fail_q_alloc; + } + } + } + + if (queue_type == ZXDH_VTNET_RQ) { + size_t sz_sw = (ZXDH_MBUF_BURST_SZ + vq_size) * sizeof(vq->sw_ring[0]); + + sw_ring = rte_zmalloc_socket("sw_ring", sz_sw, RTE_CACHE_LINE_SIZE, numa_node); + if (!sw_ring) { + PMD_DRV_LOG(ERR, "can not allocate RX soft ring"); + ret = -ENOMEM; + goto fail_q_alloc; + } + + vq->sw_ring = sw_ring; + rxvq = &vq->rxq; + rxvq->vq = vq; + rxvq->port_id = dev->data->port_id; + rxvq->mz = mz; + } else { /* queue_type == VTNET_TQ */ + txvq = &vq->txq; + txvq->vq = vq; + txvq->port_id = dev->data->port_id; + txvq->mz = mz; + txvq->zxdh_net_hdr_mz = hdr_mz; + txvq->zxdh_net_hdr_mem = hdr_mz->iova; + } + + vq->offset = offsetof(struct rte_mbuf, buf_iova); + if (queue_type == ZXDH_VTNET_TQ) { + struct zxdh_tx_region *txr = hdr_mz->addr; + uint32_t i; + + memset(txr, 0, vq_size * sizeof(*txr)); + for (i = 0; i < vq_size; i++) { + /* first indirect descriptor is always the tx header */ + struct zxdh_vring_packed_desc *start_dp = txr[i].tx_packed_indir; + + vring_desc_init_indirect_packed(start_dp, RTE_DIM(txr[i].tx_packed_indir)); + start_dp->addr = txvq->zxdh_net_hdr_mem + i * sizeof(*txr) + + offsetof(struct zxdh_tx_region, tx_hdr); + /* length will be updated to actual pi hdr size when xmit pkt */ + start_dp->len = 0; + } + } + if (ZXDH_VTPCI_OPS(hw)->setup_queue(hw, vq) < 0) { + PMD_DRV_LOG(ERR, "setup_queue failed"); + return -EINVAL; + } + return 0; +fail_q_alloc: + rte_free(sw_ring); + rte_memzone_free(hdr_mz); + rte_memzone_free(mz); + rte_free(vq); + return ret; +} + +static int32_t +zxdh_alloc_queues(struct rte_eth_dev *dev, uint16_t nr_vq) +{ + uint16_t lch; + struct zxdh_hw *hw = dev->data->dev_private; + + hw->vqs = rte_zmalloc(NULL, sizeof(struct zxdh_virtqueue *) * nr_vq, 0); + 
if (!hw->vqs) { + PMD_DRV_LOG(ERR, "Failed to allocate vqs"); + return -ENOMEM; + } + for (lch = 0; lch < nr_vq; lch++) { + if (zxdh_acquire_channel(dev, lch) < 0) { + PMD_DRV_LOG(ERR, "Failed to acquire the channels"); + zxdh_free_queues(dev); + return -1; + } + if (zxdh_init_queue(dev, lch) < 0) { + PMD_DRV_LOG(ERR, "Failed to alloc virtio queue"); + zxdh_free_queues(dev); + return -1; + } + } + return 0; +} + + +static int32_t +zxdh_dev_configure(struct rte_eth_dev *dev) +{ + const struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode; + const struct rte_eth_txmode *txmode = &dev->data->dev_conf.txmode; + struct zxdh_hw *hw = dev->data->dev_private; + uint32_t nr_vq = 0; + int32_t ret = 0; + + if (dev->data->nb_rx_queues != dev->data->nb_tx_queues) { + PMD_DRV_LOG(ERR, "nb_rx_queues=%d and nb_tx_queues=%d not equal!", + dev->data->nb_rx_queues, dev->data->nb_tx_queues); + return -EINVAL; + } + if ((dev->data->nb_rx_queues + dev->data->nb_tx_queues) >= ZXDH_QUEUES_NUM_MAX) { + PMD_DRV_LOG(ERR, "nb_rx_queues=%d + nb_tx_queues=%d must < (%d)!", + dev->data->nb_rx_queues, dev->data->nb_tx_queues, + ZXDH_QUEUES_NUM_MAX); + return -EINVAL; + } + if (rxmode->mq_mode != RTE_ETH_MQ_RX_RSS && rxmode->mq_mode != RTE_ETH_MQ_RX_NONE) { + PMD_DRV_LOG(ERR, "Unsupported Rx multi queue mode %d", rxmode->mq_mode); + return -EINVAL; + } + + if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) { + PMD_DRV_LOG(ERR, "Unsupported Tx multi queue mode %d", txmode->mq_mode); + return -EINVAL; + } + + ret = zxdh_features_update(hw, rxmode, txmode); + if (ret < 0) + return ret; + + /* check if lsc interrupt feature is enabled */ + if (dev->data->dev_conf.intr_conf.lsc) { + if (!(dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)) { + PMD_DRV_LOG(ERR, "link status not supported by host"); + return -ENOTSUP; + } + } + + hw->has_tx_offload = tx_offload_enabled(hw); + hw->has_rx_offload = rx_offload_enabled(hw); + + nr_vq = dev->data->nb_rx_queues + dev->data->nb_tx_queues; + if (nr_vq == hw->queue_num) + return 0; + + PMD_DRV_LOG(DEBUG, "queue number changed, need reset"); + /* Reset the device although not necessary at startup */ + zxdh_pci_reset(hw); + + /* Tell the host we've noticed this device. */ + zxdh_pci_set_status(hw, ZXDH_CONFIG_STATUS_ACK); + + /* Tell the host we know how to drive the device. 
*/ + zxdh_pci_set_status(hw, ZXDH_CONFIG_STATUS_DRIVER); + /* The queue needs to be released when reconfiguring*/ + if (hw->vqs != NULL) { + zxdh_dev_free_mbufs(dev); + zxdh_free_queues(dev); + } + + hw->queue_num = nr_vq; + ret = zxdh_alloc_queues(dev, nr_vq); + if (ret < 0) + return ret; + + zxdh_datach_set(dev); + + if (zxdh_configure_intr(dev) < 0) { + PMD_DRV_LOG(ERR, "Failed to configure interrupt"); + zxdh_free_queues(dev); + return -1; + } + + zxdh_pci_reinit_complete(hw); + + return ret; +} + /* dev_ops for zxdh, bare necessities for basic operation */ static const struct eth_dev_ops zxdh_eth_dev_ops = { + .dev_configure = zxdh_dev_configure, .dev_infos_get = zxdh_dev_infos_get, }; diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index 89c5a9bb5f..28e78b0086 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -31,6 +31,13 @@ extern "C" { #define ZXDH_TX_QUEUES_MAX 128U #define ZXDH_MIN_RX_BUFSIZE 64 #define ZXDH_MAX_RX_PKTLEN 14000U +#define ZXDH_QUEUE_DEPTH 1024 +#define ZXDH_QUEUES_BASE 0 +#define ZXDH_TOTAL_QUEUES_NUM 4096 +#define ZXDH_QUEUES_NUM_MAX 256 +#define ZXDH_QUERES_SHARE_BASE (0x5000) + +#define ZXDH_MBUF_BURST_SZ 64 union zxdh_virport_num { uint16_t vport; @@ -43,6 +50,11 @@ union zxdh_virport_num { }; }; +struct zxdh_chnl_context { + uint16_t valid; + uint16_t ph_chno; +}; + struct zxdh_hw { struct rte_eth_dev *eth_dev; struct zxdh_pci_common_cfg *common_cfg; @@ -50,6 +62,7 @@ struct zxdh_hw { struct rte_intr_handle *risc_intr; struct rte_intr_handle *dtb_intr; struct zxdh_virtqueue **vqs; + struct zxdh_chnl_context channel_context[ZXDH_QUEUES_NUM_MAX]; union zxdh_virport_num vport; uint64_t bar_addr[ZXDH_NUM_BARS]; @@ -63,6 +76,7 @@ struct zxdh_hw { uint16_t device_id; uint16_t port_id; uint16_t vfid; + uint16_t queue_num; uint8_t *isr; uint8_t weak_barriers; @@ -75,6 +89,8 @@ struct zxdh_hw { uint8_t msg_chan_init; uint8_t phyport; uint8_t panel_id; + uint8_t has_tx_offload; + uint8_t has_rx_offload; }; uint16_t zxdh_vport_to_vfid(union zxdh_virport_num v); diff --git a/drivers/net/zxdh/zxdh_pci.c b/drivers/net/zxdh/zxdh_pci.c index 8e7a9c1213..06d3f92b20 100644 --- a/drivers/net/zxdh/zxdh_pci.c +++ b/drivers/net/zxdh/zxdh_pci.c @@ -115,6 +115,93 @@ zxdh_get_isr(struct zxdh_hw *hw) return rte_read8(hw->isr); } +static uint16_t +zxdh_get_queue_num(struct zxdh_hw *hw, uint16_t queue_id) +{ + rte_write16(queue_id, &hw->common_cfg->queue_select); + return rte_read16(&hw->common_cfg->queue_size); +} + +static void +zxdh_set_queue_num(struct zxdh_hw *hw, uint16_t queue_id, uint16_t vq_size) +{ + rte_write16(queue_id, &hw->common_cfg->queue_select); + rte_write16(vq_size, &hw->common_cfg->queue_size); +} + +static int32_t +check_vq_phys_addr_ok(struct zxdh_virtqueue *vq) +{ + if ((vq->vq_ring_mem + vq->vq_ring_size - 1) >> (ZXDH_PCI_QUEUE_ADDR_SHIFT + 32)) { + PMD_DRV_LOG(ERR, "vring address shouldn't be above 16TB!"); + return 0; + } + return 1; +} + +static inline void +io_write64_twopart(uint64_t val, uint32_t *lo, uint32_t *hi) +{ + rte_write32(val & ((1ULL << 32) - 1), lo); + rte_write32(val >> 32, hi); +} + +static int32_t +zxdh_setup_queue(struct zxdh_hw *hw, struct zxdh_virtqueue *vq) +{ + uint64_t desc_addr = 0; + uint64_t avail_addr = 0; + uint64_t used_addr = 0; + uint16_t notify_off = 0; + + if (!check_vq_phys_addr_ok(vq)) + return -1; + + desc_addr = vq->vq_ring_mem; + avail_addr = desc_addr + vq->vq_nentries * sizeof(struct zxdh_vring_desc); + if (vtpci_packed_queue(vq->hw)) { + used_addr = 
RTE_ALIGN_CEIL((avail_addr + + sizeof(struct zxdh_vring_packed_desc_event)), + ZXDH_PCI_VRING_ALIGN); + } else { + used_addr = RTE_ALIGN_CEIL(avail_addr + offsetof(struct zxdh_vring_avail, + ring[vq->vq_nentries]), ZXDH_PCI_VRING_ALIGN); + } + + rte_write16(vq->vq_queue_index, &hw->common_cfg->queue_select); + + io_write64_twopart(desc_addr, &hw->common_cfg->queue_desc_lo, + &hw->common_cfg->queue_desc_hi); + io_write64_twopart(avail_addr, &hw->common_cfg->queue_avail_lo, + &hw->common_cfg->queue_avail_hi); + io_write64_twopart(used_addr, &hw->common_cfg->queue_used_lo, + &hw->common_cfg->queue_used_hi); + + notify_off = rte_read16(&hw->common_cfg->queue_notify_off); /* default 0 */ + notify_off = 0; + vq->notify_addr = (void *)((uint8_t *)hw->notify_base + + notify_off * hw->notify_off_multiplier); + + rte_write16(1, &hw->common_cfg->queue_enable); + + return 0; +} + +static void +zxdh_del_queue(struct zxdh_hw *hw, struct zxdh_virtqueue *vq) +{ + rte_write16(vq->vq_queue_index, &hw->common_cfg->queue_select); + + io_write64_twopart(0, &hw->common_cfg->queue_desc_lo, + &hw->common_cfg->queue_desc_hi); + io_write64_twopart(0, &hw->common_cfg->queue_avail_lo, + &hw->common_cfg->queue_avail_hi); + io_write64_twopart(0, &hw->common_cfg->queue_used_lo, + &hw->common_cfg->queue_used_hi); + + rte_write16(0, &hw->common_cfg->queue_enable); +} + const struct zxdh_pci_ops zxdh_dev_pci_ops = { .read_dev_cfg = zxdh_read_dev_config, .write_dev_cfg = zxdh_write_dev_config, @@ -125,6 +212,10 @@ const struct zxdh_pci_ops zxdh_dev_pci_ops = { .set_queue_irq = zxdh_set_queue_irq, .set_config_irq = zxdh_set_config_irq, .get_isr = zxdh_get_isr, + .get_queue_num = zxdh_get_queue_num, + .set_queue_num = zxdh_set_queue_num, + .setup_queue = zxdh_setup_queue, + .del_queue = zxdh_del_queue, }; uint8_t @@ -154,6 +245,21 @@ zxdh_pci_reset(struct zxdh_hw *hw) PMD_DRV_LOG(INFO, "port %u device reset %u ms done", hw->port_id, retry); } +void +zxdh_pci_reinit_complete(struct zxdh_hw *hw) +{ + zxdh_pci_set_status(hw, ZXDH_CONFIG_STATUS_DRIVER_OK); +} + +void +zxdh_pci_set_status(struct zxdh_hw *hw, uint8_t status) +{ + if (status != ZXDH_CONFIG_STATUS_RESET) + status |= ZXDH_VTPCI_OPS(hw)->get_status(hw); + + ZXDH_VTPCI_OPS(hw)->set_status(hw, status); +} + static void *get_cfg_addr(struct rte_pci_device *dev, struct zxdh_pci_cap *cap) { diff --git a/drivers/net/zxdh/zxdh_pci.h b/drivers/net/zxdh/zxdh_pci.h index 41e47d5d3b..2e7aa9c410 100644 --- a/drivers/net/zxdh/zxdh_pci.h +++ b/drivers/net/zxdh/zxdh_pci.h @@ -29,7 +29,20 @@ enum zxdh_msix_status { /* Vector value used to disable MSI for queue. */ #define ZXDH_MSI_NO_VECTOR 0x7F +#define ZXDH_PCI_VRING_ALIGN 4096 + +#define ZXDH_NET_F_CSUM 0 /* Host handles pkts w/ partial csum */ +#define ZXDH_NET_F_GUEST_CSUM 1 /* Guest handles pkts w/ partial csum */ +#define ZXDH_NET_F_MTU 3 /* Initial MTU advice. */ #define ZXDH_NET_F_MAC 5 /* Host has given MAC address. */ +#define ZXDH_NET_F_GUEST_TSO4 7 /* Guest can handle TSOv4 in. */ +#define ZXDH_NET_F_GUEST_TSO6 8 /* Guest can handle TSOv6 in. */ +#define ZXDH_NET_F_GUEST_ECN 9 /* Guest can handle TSO[6] w/ ECN in. */ +#define ZXDH_NET_F_GUEST_UFO 10 /* Guest can handle UFO in. */ + +#define ZXDH_NET_F_HOST_UFO 14 /* Host can handle UFO in. */ +#define ZXDH_NET_F_HOST_TSO4 11 /* Host can handle TSOv4 in. */ +#define ZXDH_NET_F_HOST_TSO6 12 /* Host can handle TSOv6 in. */ #define ZXDH_NET_F_MRG_RXBUF 15 /* Host can merge receive buffers. 
*/ #define ZXDH_NET_F_STATUS 16 /* zxdh_net_config.status available */ #define ZXDH_NET_F_MQ 22 /* Device supports Receive Flow Steering */ @@ -53,6 +66,7 @@ enum zxdh_msix_status { #define ZXDH_CONFIG_STATUS_FEATURES_OK 0x08 #define ZXDH_CONFIG_STATUS_DEV_NEED_RESET 0x40 #define ZXDH_CONFIG_STATUS_FAILED 0x80 +#define ZXDH_PCI_QUEUE_ADDR_SHIFT 12 struct zxdh_net_config { /* The config defining mac address (if ZXDH_NET_F_MAC) */ @@ -103,11 +117,18 @@ struct zxdh_pci_common_cfg { uint32_t queue_used_hi; /* read-write */ }; -static inline int32_t vtpci_with_feature(struct zxdh_hw *hw, uint64_t bit) +static inline int32_t +vtpci_with_feature(struct zxdh_hw *hw, uint64_t bit) { return (hw->guest_features & (1ULL << bit)) != 0; } +static inline int32_t +vtpci_packed_queue(struct zxdh_hw *hw) +{ + return vtpci_with_feature(hw, ZXDH_F_RING_PACKED); +} + struct zxdh_pci_ops { void (*read_dev_cfg)(struct zxdh_hw *hw, size_t offset, void *dst, int32_t len); void (*write_dev_cfg)(struct zxdh_hw *hw, size_t offset, const void *src, int32_t len); @@ -120,6 +141,11 @@ struct zxdh_pci_ops { uint16_t (*set_queue_irq)(struct zxdh_hw *hw, struct zxdh_virtqueue *vq, uint16_t vec); uint16_t (*set_config_irq)(struct zxdh_hw *hw, uint16_t vec); uint8_t (*get_isr)(struct zxdh_hw *hw); + uint16_t (*get_queue_num)(struct zxdh_hw *hw, uint16_t queue_id); + void (*set_queue_num)(struct zxdh_hw *hw, uint16_t queue_id, uint16_t vq_size); + + int32_t (*setup_queue)(struct zxdh_hw *hw, struct zxdh_virtqueue *vq); + void (*del_queue)(struct zxdh_hw *hw, struct zxdh_virtqueue *vq); }; struct zxdh_hw_internal { @@ -141,6 +167,8 @@ void zxdh_get_pci_dev_config(struct zxdh_hw *hw); uint16_t zxdh_pci_get_features(struct zxdh_hw *hw); enum zxdh_msix_status zxdh_pci_msix_detect(struct rte_pci_device *dev); uint8_t zxdh_pci_isr(struct zxdh_hw *hw); +void zxdh_pci_reinit_complete(struct zxdh_hw *hw); +void zxdh_pci_set_status(struct zxdh_hw *hw, uint8_t status); #ifdef __cplusplus } diff --git a/drivers/net/zxdh/zxdh_queue.c b/drivers/net/zxdh/zxdh_queue.c new file mode 100644 index 0000000000..462a88b23c --- /dev/null +++ b/drivers/net/zxdh/zxdh_queue.c @@ -0,0 +1,127 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#include <stdint.h> +#include <rte_malloc.h> +#include <rte_mbuf.h> + +#include "zxdh_queue.h" +#include "zxdh_logs.h" +#include "zxdh_pci.h" +#include "zxdh_common.h" +#include "zxdh_msg.h" + +struct rte_mbuf * +zxdh_virtqueue_detach_unused(struct zxdh_virtqueue *vq) +{ + struct rte_mbuf *cookie = NULL; + int32_t idx = 0; + + if (vq == NULL) + return NULL; + + for (idx = 0; idx < vq->vq_nentries; idx++) { + cookie = vq->vq_descx[idx].cookie; + if (cookie != NULL) { + vq->vq_descx[idx].cookie = NULL; + return cookie; + } + } + return NULL; +} + +static int32_t +zxdh_release_channel(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint16_t nr_vq = hw->queue_num; + uint32_t var = 0; + uint32_t addr = 0; + uint32_t widx = 0; + uint32_t bidx = 0; + uint16_t pch = 0; + uint16_t lch = 0; + int32_t ret = 0; + + ret = zxdh_timedlock(hw, 1000); + if (ret) { + PMD_DRV_LOG(ERR, "Acquiring hw lock got failed, timeout"); + return -1; + } + + for (lch = 0; lch < nr_vq; lch++) { + if (hw->channel_context[lch].valid == 0) { + PMD_DRV_LOG(DEBUG, "Logic channel %d does not need to release", lch); + continue; + } + + pch = hw->channel_context[lch].ph_chno; + widx = pch / 32; + bidx = pch % 32; + + addr = ZXDH_QUERES_SHARE_BASE + (widx * sizeof(uint32_t)); + var = 
zxdh_read_bar_reg(dev, ZXDH_BAR0_INDEX, addr); + var &= ~(1 << bidx); + zxdh_write_bar_reg(dev, ZXDH_BAR0_INDEX, addr, var); + + hw->channel_context[lch].valid = 0; + hw->channel_context[lch].ph_chno = 0; + } + + zxdh_release_lock(hw); + + return 0; +} + +int32_t +zxdh_get_queue_type(uint16_t vtpci_queue_idx) +{ + if (vtpci_queue_idx % 2 == 0) + return ZXDH_VTNET_RQ; + else + return ZXDH_VTNET_TQ; +} + +int32_t +zxdh_free_queues(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint16_t nr_vq = hw->queue_num; + struct zxdh_virtqueue *vq = NULL; + int32_t queue_type = 0; + uint16_t i = 0; + + if (hw->vqs == NULL) + return 0; + + if (zxdh_release_channel(dev) < 0) { + PMD_DRV_LOG(ERR, "Failed to clear coi table"); + return -1; + } + + for (i = 0; i < nr_vq; i++) { + vq = hw->vqs[i]; + if (vq == NULL) + continue; + + ZXDH_VTPCI_OPS(hw)->del_queue(hw, vq); + queue_type = zxdh_get_queue_type(i); + if (queue_type == ZXDH_VTNET_RQ) { + rte_free(vq->sw_ring); + rte_memzone_free(vq->rxq.mz); + } else if (queue_type == ZXDH_VTNET_TQ) { + rte_memzone_free(vq->txq.mz); + rte_memzone_free(vq->txq.zxdh_net_hdr_mz); + } + + rte_free(vq); + hw->vqs[i] = NULL; + PMD_DRV_LOG(DEBUG, "Release to queue %d success!", i); + } + + rte_free(hw->vqs); + hw->vqs = NULL; + + return 0; +} diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h index 9c790cd9d3..686cabfef1 100644 --- a/drivers/net/zxdh/zxdh_queue.h +++ b/drivers/net/zxdh/zxdh_queue.h @@ -11,11 +11,30 @@ #include "zxdh_ethdev.h" #include "zxdh_rxtx.h" +#include "zxdh_pci.h" #ifdef __cplusplus extern "C" { #endif +enum { ZXDH_VTNET_RQ = 0, ZXDH_VTNET_TQ = 1 }; + +#define ZXDH_VIRTQUEUE_MAX_NAME_SZ 32 +#define ZXDH_RQ_QUEUE_IDX 0 +#define ZXDH_TQ_QUEUE_IDX 1 +#define ZXDH_MAX_TX_INDIRECT 8 + +/* This marks a buffer as write-only (otherwise read-only). */ +#define ZXDH_VRING_DESC_F_WRITE 2 +/* This flag means the descriptor was made available by the driver */ +#define ZXDH_VRING_PACKED_DESC_F_AVAIL (1 << (7)) + +#define ZXDH_RING_EVENT_FLAGS_ENABLE 0x0 +#define ZXDH_RING_EVENT_FLAGS_DISABLE 0x1 +#define ZXDH_RING_EVENT_FLAGS_DESC 0x2 + +#define ZXDH_VQ_RING_DESC_CHAIN_END 32768 + /* * ring descriptors: 16 bytes. * These can chain together via "next". @@ -27,6 +46,19 @@ struct zxdh_vring_desc { uint16_t next; /* We chain unused descriptors via this. */ } __rte_packed; +struct zxdh_vring_used_elem { + /* Index of start of used descriptor chain. */ + uint32_t id; + /* Total length of the descriptor chain which was written to. 
*/ + uint32_t len; +}; + +struct zxdh_vring_used { + uint16_t flags; + uint16_t idx; + struct zxdh_vring_used_elem ring[]; +} __rte_packed; + struct zxdh_vring_avail { uint16_t flags; uint16_t idx; @@ -103,6 +135,149 @@ struct zxdh_virtqueue { struct zxdh_vq_desc_extra vq_descx[]; } __rte_packed; +struct zxdh_type_hdr { + uint8_t port; /* bit[0:1] 00-np 01-DRS 10-DTP */ + uint8_t pd_len; + uint8_t num_buffers; + uint8_t reserved; +} __rte_packed; /* 4B */ + +struct zxdh_pi_hdr { + uint8_t pi_len; + uint8_t pkt_type; + uint16_t vlan_id; + uint32_t ipv6_extend; + uint16_t l3_offset; + uint16_t l4_offset; + uint8_t phy_port; + uint8_t pkt_flag_hi8; + uint16_t pkt_flag_lw16; + union { + struct { + uint64_t sa_idx; + uint8_t reserved_8[8]; + } dl; + struct { + uint32_t lro_flag; + uint32_t lro_mss; + uint16_t err_code; + uint16_t pm_id; + uint16_t pkt_len; + uint8_t reserved[2]; + } ul; + }; +} __rte_packed; /* 32B */ + +struct zxdh_pd_hdr_dl { + uint32_t ol_flag; + uint8_t tag_idx; + uint8_t tag_data; + uint16_t dst_vfid; + uint32_t svlan_insert; + uint32_t cvlan_insert; +} __rte_packed; /* 16B */ + +struct zxdh_net_hdr_dl { + struct zxdh_type_hdr type_hdr; /* 4B */ + struct zxdh_pi_hdr pi_hdr; /* 32B */ + struct zxdh_pd_hdr_dl pd_hdr; /* 16B */ +} __rte_packed; + +struct zxdh_pd_hdr_ul { + uint32_t pkt_flag; + uint32_t rss_hash; + uint32_t fd; + uint32_t striped_vlan_tci; + uint8_t tag_idx; + uint8_t tag_data; + uint16_t src_vfid; + uint16_t pkt_type_out; + uint16_t pkt_type_in; +} __rte_packed; /* 24B */ + +struct zxdh_net_hdr_ul { + struct zxdh_type_hdr type_hdr; /* 4B */ + struct zxdh_pi_hdr pi_hdr; /* 32B */ + struct zxdh_pd_hdr_ul pd_hdr; /* 24B */ +} __rte_packed; /* 60B */ + +struct zxdh_tx_region { + struct zxdh_net_hdr_dl tx_hdr; + union { + struct zxdh_vring_desc tx_indir[ZXDH_MAX_TX_INDIRECT]; + struct zxdh_vring_packed_desc tx_packed_indir[ZXDH_MAX_TX_INDIRECT]; + } __rte_packed; +}; + +static inline size_t +vring_size(struct zxdh_hw *hw, uint32_t num, unsigned long align) +{ + size_t size; + + if (vtpci_packed_queue(hw)) { + size = num * sizeof(struct zxdh_vring_packed_desc); + size += sizeof(struct zxdh_vring_packed_desc_event); + size = RTE_ALIGN_CEIL(size, align); + size += sizeof(struct zxdh_vring_packed_desc_event); + return size; + } + + size = num * sizeof(struct zxdh_vring_desc); + size += sizeof(struct zxdh_vring_avail) + (num * sizeof(uint16_t)); + size = RTE_ALIGN_CEIL(size, align); + size += sizeof(struct zxdh_vring_used) + (num * sizeof(struct zxdh_vring_used_elem)); + return size; +} + +static inline void +vring_init_packed(struct zxdh_vring_packed *vr, uint8_t *p, + unsigned long align, uint32_t num) +{ + vr->num = num; + vr->desc = (struct zxdh_vring_packed_desc *)p; + vr->driver = (struct zxdh_vring_packed_desc_event *)(p + + vr->num * sizeof(struct zxdh_vring_packed_desc)); + vr->device = (struct zxdh_vring_packed_desc_event *)RTE_ALIGN_CEIL(((uintptr_t)vr->driver + + sizeof(struct zxdh_vring_packed_desc_event)), align); +} + +static inline void +vring_desc_init_packed(struct zxdh_virtqueue *vq, int32_t n) +{ + int32_t i = 0; + + for (i = 0; i < n - 1; i++) { + vq->vq_packed.ring.desc[i].id = i; + vq->vq_descx[i].next = i + 1; + } + vq->vq_packed.ring.desc[i].id = i; + vq->vq_descx[i].next = ZXDH_VQ_RING_DESC_CHAIN_END; +} + +static inline void +vring_desc_init_indirect_packed(struct zxdh_vring_packed_desc *dp, int32_t n) +{ + int32_t i = 0; + + for (i = 0; i < n; i++) { + dp[i].id = (uint16_t)i; + dp[i].flags = ZXDH_VRING_DESC_F_WRITE; + } +} + +static inline 
void +virtqueue_disable_intr(struct zxdh_virtqueue *vq) +{ + if (vq->vq_packed.event_flags_shadow != ZXDH_RING_EVENT_FLAGS_DISABLE) { + vq->vq_packed.event_flags_shadow = ZXDH_RING_EVENT_FLAGS_DISABLE; + vq->vq_packed.ring.driver->desc_event_flags = vq->vq_packed.event_flags_shadow; + } +} + +struct rte_mbuf *zxdh_virtqueue_detach_unused(struct zxdh_virtqueue *vq); +int32_t zxdh_free_queues(struct rte_eth_dev *dev); +int32_t zxdh_get_queue_type(uint16_t vtpci_queue_idx); + #ifdef __cplusplus } #endif -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 89242 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
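zxdh_dev_configure() above rejects unequal Rx/Tx queue counts and any mq_mode other than RSS/NONE on Rx and NONE on Tx. An application-side configuration that passes those checks would look like the sketch below (illustrative only; the queue counts are arbitrary as long as they are equal and the total stays below ZXDH_QUEUES_NUM_MAX):

#include <rte_ethdev.h>

static int
configure_zxdh_port(uint16_t port_id)
{
	struct rte_eth_conf conf = {
		.rxmode = { .mq_mode = RTE_ETH_MQ_RX_RSS },
		.txmode = { .mq_mode = RTE_ETH_MQ_TX_NONE },
	};
	/* nb_rx_queues must equal nb_tx_queues, see zxdh_dev_configure() */
	const uint16_t nb_rxq = 4, nb_txq = 4;

	return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}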
* [PATCH v10 10/10] net/zxdh: add zxdh dev close ops 2024-11-04 11:58 ` [PATCH v10 00/10] net/zxdh: introduce net zxdh driver Junlong Wang ` (8 preceding siblings ...) 2024-11-04 11:58 ` [PATCH v10 09/10] net/zxdh: add zxdh dev configure ops Junlong Wang @ 2024-11-04 11:58 ` Junlong Wang 2024-11-06 0:40 ` [PATCH v10 00/10] net/zxdh: introduce net zxdh driver Ferruh Yigit 2024-11-12 2:49 ` Junlong Wang 11 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-11-04 11:58 UTC (permalink / raw) To: ferruh.yigit; +Cc: dev, wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 1486 bytes --] Provide zxdh dev close ops to release resources. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/zxdh_ethdev.c | 31 +++++++++++++++++++++++-------- 1 file changed, 23 insertions(+), 8 deletions(-) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 54e51a31fa..c786198535 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -832,9 +832,32 @@ zxdh_dev_configure(struct rte_eth_dev *dev) return ret; } +static int +zxdh_dev_close(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + int ret = 0; + + zxdh_intr_release(dev); + zxdh_pci_reset(hw); + + zxdh_dev_free_mbufs(dev); + zxdh_free_queues(dev); + + zxdh_bar_msg_chan_exit(); + + if (dev->data->mac_addrs != NULL) { + rte_free(dev->data->mac_addrs); + dev->data->mac_addrs = NULL; + } + + return ret; +} + /* dev_ops for zxdh, bare necessities for basic operation */ static const struct eth_dev_ops zxdh_eth_dev_ops = { .dev_configure = zxdh_dev_configure, + .dev_close = zxdh_dev_close, .dev_infos_get = zxdh_dev_infos_get, }; @@ -977,14 +1000,6 @@ zxdh_eth_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, zxdh_eth_dev_init); } -static int -zxdh_dev_close(struct rte_eth_dev *dev __rte_unused) -{ - int ret = 0; - - return ret; -} - static int zxdh_eth_dev_uninit(struct rte_eth_dev *eth_dev) { -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 3132 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
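With .dev_close wired up, the usual ethdev teardown order applies from the application side; a brief sketch (not from the patch; note that dev_start/dev_stop are not implemented by this series yet, so rte_eth_dev_stop() may fail until a later series adds them):

#include <stdio.h>
#include <rte_ethdev.h>

static void
shutdown_zxdh_port(uint16_t port_id)
{
	/* stop first; zxdh dev_start/dev_stop land in a later series */
	int ret = rte_eth_dev_stop(port_id);

	if (ret != 0)
		printf("port %u stop failed: %d\n", port_id, ret);
	ret = rte_eth_dev_close(port_id);	/* ends up in zxdh_dev_close() */
	if (ret != 0)
		printf("port %u close failed: %d\n", port_id, ret);
}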
* Re: [PATCH v10 00/10] net/zxdh: introduce net zxdh driver 2024-11-04 11:58 ` [PATCH v10 00/10] net/zxdh: introduce net zxdh driver Junlong Wang ` (9 preceding siblings ...) 2024-11-04 11:58 ` [PATCH v10 10/10] net/zxdh: add zxdh dev close ops Junlong Wang @ 2024-11-06 0:40 ` Ferruh Yigit 2024-11-07 9:28 ` Ferruh Yigit 2024-11-12 2:49 ` Junlong Wang 11 siblings, 1 reply; 225+ messages in thread From: Ferruh Yigit @ 2024-11-06 0:40 UTC (permalink / raw) To: Junlong Wang; +Cc: dev, wang.yong19 On 11/4/2024 11:58 AM, Junlong Wang wrote: > v10: > - > move zxdh under Wind River in MAINTAINERS and add myself as the maintainer > and add experimental into MAINTAINERS/driver file,elease notes. > - changed DPDK syntax is to have return value in a separate line. > - Add a keyword in log types for distinguished. > - using regular comments (non doxygen syntax). > - fix other issues. > > v9: > - fix 'v8 3/9' patch use PCI bus API, > and common PCI constants according to David Marchand's comments. > > v8: > - fix flexible arrays、Waddress-of-packed-member error. > - all structs、enum、define ,etc use zxdh/ZXDH_ prefixed. > - use zxdh_try/release_lock,and move loop into zxdh_timedlock, > make hardware lock follow spinlock pattern. > > v7: > - add release notes and modify zxdh.rst issues. > - avoid use pthread and use rte_spinlock_lock. > - using the prefix ZXDH_ before some definitions. > - resole issues according to thomas's comments. > > v6: > - Resolve ci/intel compilation issues. > - fix meson.build indentation in earlier patch. > > V5: > - split driver into multiple patches,part of the zxdh driver, > later provide dev start/stop,queue_setup,npsdk_init,mac,vlan,rss ,etc. > - fix errors reported by scripts. > - move the product link in zxdh.rst. > - fix meson check use RTE_ARCH_X86_64/RTE_ARCH_ARM64. > - modify other comments according to Ferruh's comments. > > Junlong Wang (10): > net/zxdh: add zxdh ethdev pmd driver > net/zxdh: add logging implementation > net/zxdh: add zxdh device pci init implementation > net/zxdh: add msg chan and msg hwlock init > net/zxdh: add msg chan enable implementation > net/zxdh: add zxdh get device backend infos > net/zxdh: add configure zxdh intr implementation > net/zxdh: add zxdh dev infos get ops > net/zxdh: add zxdh dev configure ops > net/zxdh: add zxdh dev close ops > For series, Acked-by: Ferruh Yigit <ferruh.yigit@amd.com> Series applied to dpdk-next-net/main, thanks. ^ permalink raw reply [flat|nested] 225+ messages in thread
* Re: [PATCH v10 00/10] net/zxdh: introduce net zxdh driver 2024-11-06 0:40 ` [PATCH v10 00/10] net/zxdh: introduce net zxdh driver Ferruh Yigit @ 2024-11-07 9:28 ` Ferruh Yigit 2024-11-07 9:58 ` Ferruh Yigit 0 siblings, 1 reply; 225+ messages in thread From: Ferruh Yigit @ 2024-11-07 9:28 UTC (permalink / raw) To: Junlong Wang; +Cc: dev, wang.yong19 On 11/6/2024 12:40 AM, Ferruh Yigit wrote: > On 11/4/2024 11:58 AM, Junlong Wang wrote: >> v10: >> - >> move zxdh under Wind River in MAINTAINERS and add myself as the maintainer >> and add experimental into MAINTAINERS/driver file,elease notes. >> - changed DPDK syntax is to have return value in a separate line. >> - Add a keyword in log types for distinguished. >> - using regular comments (non doxygen syntax). >> - fix other issues. >> >> v9: >> - fix 'v8 3/9' patch use PCI bus API, >> and common PCI constants according to David Marchand's comments. >> >> v8: >> - fix flexible arrays、Waddress-of-packed-member error. >> - all structs、enum、define ,etc use zxdh/ZXDH_ prefixed. >> - use zxdh_try/release_lock,and move loop into zxdh_timedlock, >> make hardware lock follow spinlock pattern. >> >> v7: >> - add release notes and modify zxdh.rst issues. >> - avoid use pthread and use rte_spinlock_lock. >> - using the prefix ZXDH_ before some definitions. >> - resole issues according to thomas's comments. >> >> v6: >> - Resolve ci/intel compilation issues. >> - fix meson.build indentation in earlier patch. >> >> V5: >> - split driver into multiple patches,part of the zxdh driver, >> later provide dev start/stop,queue_setup,npsdk_init,mac,vlan,rss ,etc. >> - fix errors reported by scripts. >> - move the product link in zxdh.rst. >> - fix meson check use RTE_ARCH_X86_64/RTE_ARCH_ARM64. >> - modify other comments according to Ferruh's comments. >> >> Junlong Wang (10): >> net/zxdh: add zxdh ethdev pmd driver >> net/zxdh: add logging implementation >> net/zxdh: add zxdh device pci init implementation >> net/zxdh: add msg chan and msg hwlock init >> net/zxdh: add msg chan enable implementation >> net/zxdh: add zxdh get device backend infos >> net/zxdh: add configure zxdh intr implementation >> net/zxdh: add zxdh dev infos get ops >> net/zxdh: add zxdh dev configure ops >> net/zxdh: add zxdh dev close ops >> > > For series, > Acked-by: Ferruh Yigit <ferruh.yigit@amd.com> > > Series applied to dpdk-next-net/main, thanks. > Hi Junlong, It seems we missed to mark driver as experimental, I will update it in next-net. ^ permalink raw reply [flat|nested] 225+ messages in thread
* Re: [PATCH v10 00/10] net/zxdh: introduce net zxdh driver 2024-11-07 9:28 ` Ferruh Yigit @ 2024-11-07 9:58 ` Ferruh Yigit 0 siblings, 0 replies; 225+ messages in thread From: Ferruh Yigit @ 2024-11-07 9:58 UTC (permalink / raw) To: Junlong Wang; +Cc: dev, wang.yong19 On 11/7/2024 9:28 AM, Ferruh Yigit wrote: > On 11/6/2024 12:40 AM, Ferruh Yigit wrote: >> On 11/4/2024 11:58 AM, Junlong Wang wrote: >>> v10: >>> - >>> move zxdh under Wind River in MAINTAINERS and add myself as the maintainer >>> and add experimental into MAINTAINERS/driver file,elease notes. >>> - changed DPDK syntax is to have return value in a separate line. >>> - Add a keyword in log types for distinguished. >>> - using regular comments (non doxygen syntax). >>> - fix other issues. >>> >>> v9: >>> - fix 'v8 3/9' patch use PCI bus API, >>> and common PCI constants according to David Marchand's comments. >>> >>> v8: >>> - fix flexible arrays、Waddress-of-packed-member error. >>> - all structs、enum、define ,etc use zxdh/ZXDH_ prefixed. >>> - use zxdh_try/release_lock,and move loop into zxdh_timedlock, >>> make hardware lock follow spinlock pattern. >>> >>> v7: >>> - add release notes and modify zxdh.rst issues. >>> - avoid use pthread and use rte_spinlock_lock. >>> - using the prefix ZXDH_ before some definitions. >>> - resole issues according to thomas's comments. >>> >>> v6: >>> - Resolve ci/intel compilation issues. >>> - fix meson.build indentation in earlier patch. >>> >>> V5: >>> - split driver into multiple patches,part of the zxdh driver, >>> later provide dev start/stop,queue_setup,npsdk_init,mac,vlan,rss ,etc. >>> - fix errors reported by scripts. >>> - move the product link in zxdh.rst. >>> - fix meson check use RTE_ARCH_X86_64/RTE_ARCH_ARM64. >>> - modify other comments according to Ferruh's comments. >>> >>> Junlong Wang (10): >>> net/zxdh: add zxdh ethdev pmd driver >>> net/zxdh: add logging implementation >>> net/zxdh: add zxdh device pci init implementation >>> net/zxdh: add msg chan and msg hwlock init >>> net/zxdh: add msg chan enable implementation >>> net/zxdh: add zxdh get device backend infos >>> net/zxdh: add configure zxdh intr implementation >>> net/zxdh: add zxdh dev infos get ops >>> net/zxdh: add zxdh dev configure ops >>> net/zxdh: add zxdh dev close ops >>> >> >> For series, >> Acked-by: Ferruh Yigit <ferruh.yigit@amd.com> >> >> Series applied to dpdk-next-net/main, thanks. >> > > Hi Junlong, > > It seems we missed to mark driver as experimental, I will update it in > next-net. > Following applied: diff --git a/MAINTAINERS b/MAINTAINERS index 9a812b3632b7..e6e53e39683a 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -1058,7 +1058,7 @@ F: drivers/net/avp/ F: doc/guides/nics/avp.rst F: doc/guides/nics/features/avp.ini -ZTE zxdh +ZTE zxdh - EXPERIMENTAL M: Junlong Wang <wang.junlong1@zte.com.cn> M: Lijie Shan <shan.lijie@zte.com.cn> F: drivers/net/zxdh/ ^ permalink raw reply [flat|nested] 225+ messages in thread
* Re: [PATCH v10 00/10] net/zxdh: introduce net zxdh driver 2024-11-04 11:58 ` [PATCH v10 00/10] net/zxdh: introduce net zxdh driver Junlong Wang ` (10 preceding siblings ...) 2024-11-06 0:40 ` [PATCH v10 00/10] net/zxdh: introduce net zxdh driver Ferruh Yigit @ 2024-11-12 2:49 ` Junlong Wang 11 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-11-12 2:49 UTC (permalink / raw) To: thomas, ferruh.yigit; +Cc: dev, shan.lijie, wang.yong19 [-- Attachment #1.1.1: Type: text/plain, Size: 609 bytes --] > >On 11/6/2024 12:40 AM, Ferruh Yigit wrote: > >> For series, > >> Acked-by: Ferruh Yigit <ferruh.yigit@amd.com> > >> > >> Series applied to dpdk-next-net/main, thanks. > >> > > > Hi Junlong, > > > It seems we missed to mark driver as experimental, I will update it in > > next-net. > > Sorry, I'm too careless, I will pay more attention next time. > Thank you very much. > I'm removing the useless #ifdef __cplusplus while pulling in main, > as we are trying to clean them in the repo. OK, Thank you for helping with the modifications. I will pay attention in next submission. [-- Attachment #1.1.2: Type: text/html , Size: 1310 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
* [PATCH v9 2/9] net/zxdh: add logging implementation 2024-11-01 6:21 ` [PATCH v9 0/9] net/zxdh: introduce net zxdh driver Junlong Wang 2024-11-01 6:21 ` [PATCH v9 1/9] net/zxdh: add zxdh ethdev pmd driver Junlong Wang @ 2024-11-01 6:21 ` Junlong Wang 2024-11-02 1:02 ` Ferruh Yigit 2024-11-04 2:44 ` [v9,2/9] " Junlong Wang 2024-11-01 6:21 ` [PATCH v9 3/9] net/zxdh: add zxdh device pci init implementation Junlong Wang ` (10 subsequent siblings) 12 siblings, 2 replies; 225+ messages in thread From: Junlong Wang @ 2024-11-01 6:21 UTC (permalink / raw) To: dev; +Cc: wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 3417 bytes --] Add zxdh logging implementation. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/zxdh_ethdev.c | 15 +++++++++++-- drivers/net/zxdh/zxdh_logs.h | 40 ++++++++++++++++++++++++++++++++++ 2 files changed, 53 insertions(+), 2 deletions(-) create mode 100644 drivers/net/zxdh/zxdh_logs.h diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 5b6c9ec1bf..c911284423 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -7,6 +7,7 @@ #include <rte_ethdev.h> #include "zxdh_ethdev.h" +#include "zxdh_logs.h" static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) { @@ -19,13 +20,18 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) /* Allocate memory for storing MAC addresses */ eth_dev->data->mac_addrs = rte_zmalloc("zxdh_mac", ZXDH_MAX_MAC_ADDRS * RTE_ETHER_ADDR_LEN, 0); - if (eth_dev->data->mac_addrs == NULL) + if (eth_dev->data->mac_addrs == NULL) { + PMD_INIT_LOG(ERR, "Failed to allocate %d bytes store MAC addresses", + ZXDH_MAX_MAC_ADDRS * RTE_ETHER_ADDR_LEN); return -ENOMEM; + } memset(hw, 0, sizeof(*hw)); hw->bar_addr[0] = (uint64_t)pci_dev->mem_resource[0].addr; - if (hw->bar_addr[0] == 0) + if (hw->bar_addr[0] == 0) { + PMD_INIT_LOG(ERR, "Bad mem resource."); return -EIO; + } hw->device_id = pci_dev->id.device_id; hw->port_id = eth_dev->data->port_id; @@ -90,3 +96,8 @@ static struct rte_pci_driver zxdh_pmd = { RTE_PMD_REGISTER_PCI(net_zxdh, zxdh_pmd); RTE_PMD_REGISTER_PCI_TABLE(net_zxdh, pci_id_zxdh_map); RTE_PMD_REGISTER_KMOD_DEP(net_zxdh, "* vfio-pci"); +RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_init, init, NOTICE); +RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_driver, driver, NOTICE); +RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_rx, rx, NOTICE); +RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_tx, tx, NOTICE); +RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_msg, msg, NOTICE); diff --git a/drivers/net/zxdh/zxdh_logs.h b/drivers/net/zxdh/zxdh_logs.h new file mode 100644 index 0000000000..a8a6a3135b --- /dev/null +++ b/drivers/net/zxdh/zxdh_logs.h @@ -0,0 +1,40 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#ifndef ZXDH_LOGS_H +#define ZXDH_LOGS_H + +#include <rte_log.h> + +extern int zxdh_logtype_init; +#define RTE_LOGTYPE_ZXDH_INIT zxdh_logtype_init +#define PMD_INIT_LOG(level, ...) \ + RTE_LOG_LINE_PREFIX(level, ZXDH_INIT, "offload_zxdh %s(): ", \ + __func__, __VA_ARGS__) + +extern int zxdh_logtype_driver; +#define RTE_LOGTYPE_ZXDH_DRIVER zxdh_logtype_driver +#define PMD_DRV_LOG(level, ...) \ + RTE_LOG_LINE_PREFIX(level, ZXDH_DRIVER, "offload_zxdh %s(): ", \ + __func__, __VA_ARGS__) + +extern int zxdh_logtype_rx; +#define RTE_LOGTYPE_ZXDH_RX zxdh_logtype_rx +#define PMD_RX_LOG(level, ...) 
\ + RTE_LOG_LINE_PREFIX(level, ZXDH_RX, "offload_zxdh %s(): ", \ + __func__, __VA_ARGS__) + +extern int zxdh_logtype_tx; +#define RTE_LOGTYPE_ZXDH_TX zxdh_logtype_tx +#define PMD_TX_LOG(level, ...) \ + RTE_LOG_LINE_PREFIX(level, ZXDH_TX, "offload_zxdh %s(): ", \ + __func__, __VA_ARGS__) + +extern int zxdh_logtype_msg; +#define RTE_LOGTYPE_ZXDH_MSG zxdh_logtype_msg +#define PMD_MSG_LOG(level, ...) \ + RTE_LOG_LINE_PREFIX(level, ZXDH_MSG, "offload_zxdh %s(): ", \ + __func__, __VA_ARGS__) + +#endif /* ZXDH_LOGS_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 6146 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
* Re: [PATCH v9 2/9] net/zxdh: add logging implementation 2024-11-01 6:21 ` [PATCH v9 2/9] net/zxdh: add logging implementation Junlong Wang @ 2024-11-02 1:02 ` Ferruh Yigit 2024-11-04 2:44 ` [v9,2/9] " Junlong Wang 1 sibling, 0 replies; 225+ messages in thread From: Ferruh Yigit @ 2024-11-02 1:02 UTC (permalink / raw) To: Junlong Wang, dev; +Cc: wang.yong19 On 11/1/2024 6:21 AM, Junlong Wang wrote: > Add zxdh logging implementation. > > Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> > <...> > +extern int zxdh_logtype_init; > +#define RTE_LOGTYPE_ZXDH_INIT zxdh_logtype_init > +#define PMD_INIT_LOG(level, ...) \ > + RTE_LOG_LINE_PREFIX(level, ZXDH_INIT, "offload_zxdh %s(): ", \ > Are you sure you want the "offload_zxdh" prefix for each log, instead of the shorter 'zxdh' one? > + __func__, __VA_ARGS__) > + > +extern int zxdh_logtype_driver; > +#define RTE_LOGTYPE_ZXDH_DRIVER zxdh_logtype_driver > +#define PMD_DRV_LOG(level, ...) \ > + RTE_LOG_LINE_PREFIX(level, ZXDH_DRIVER, "offload_zxdh %s(): ", \ > All log types seem to use the same prefix, which is OK, but just a reminder in case you want to distinguish them. ^ permalink raw reply [flat|nested] 225+ messages in thread
* [v9,2/9] net/zxdh: add logging implementation 2024-11-01 6:21 ` [PATCH v9 2/9] net/zxdh: add logging implementation Junlong Wang 2024-11-02 1:02 ` Ferruh Yigit @ 2024-11-04 2:44 ` Junlong Wang 1 sibling, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-11-04 2:44 UTC (permalink / raw) To: ferruh.yigit; +Cc: dev [-- Attachment #1.1.1: Type: text/plain, Size: 645 bytes --] > Are you sure you want the "offload_zxdh" prefix for each log, instead of > the shorter 'zxdh' one? >> + __func__, __VA_ARGS__) >> + >> +extern int zxdh_logtype_driver; >> +#define RTE_LOGTYPE_ZXDH_DRIVER zxdh_logtype_driver >> +#define PMD_DRV_LOG(level, ...) \ >> + RTE_LOG_LINE_PREFIX(level, ZXDH_DRIVER, "offload_zxdh %s(): ", \ >> > All log types seem to use the same prefix, which is OK, but just a reminder > in case you want to distinguish them. Thank you for your comments. We will change it to the shorter 'zxdh' prefix and add a keyword to distinguish the log types, for example "zxdh msg %s(): ", "zxdh rx %s(): " and "zxdh tx %s(): ". [-- Attachment #1.1.2: Type: text/html, Size: 1412 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
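For illustration, a minimal sketch of what the revised macros could look like after the change agreed above. This is a sketch only: the "zxdh msg"/"zxdh rx"/"zxdh tx" prefixes are the ones proposed in the reply, and the form that actually lands in the next revision may differ.

#include <rte_log.h>

extern int zxdh_logtype_msg;
#define RTE_LOGTYPE_ZXDH_MSG zxdh_logtype_msg
#define PMD_MSG_LOG(level, ...) \
	RTE_LOG_LINE_PREFIX(level, ZXDH_MSG, "zxdh msg %s(): ", \
		__func__, __VA_ARGS__)

extern int zxdh_logtype_rx;
#define RTE_LOGTYPE_ZXDH_RX zxdh_logtype_rx
#define PMD_RX_LOG(level, ...) \
	RTE_LOG_LINE_PREFIX(level, ZXDH_RX, "zxdh rx %s(): ", \
		__func__, __VA_ARGS__)

Each log type keeps its own registered logtype (the RTE_LOG_REGISTER_SUFFIX calls in the patch stay unchanged); only the prefix string passed to RTE_LOG_LINE_PREFIX varies per type.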
* [PATCH v9 3/9] net/zxdh: add zxdh device pci init implementation 2024-11-01 6:21 ` [PATCH v9 0/9] net/zxdh: introduce net zxdh driver Junlong Wang 2024-11-01 6:21 ` [PATCH v9 1/9] net/zxdh: add zxdh ethdev pmd driver Junlong Wang 2024-11-01 6:21 ` [PATCH v9 2/9] net/zxdh: add logging implementation Junlong Wang @ 2024-11-01 6:21 ` Junlong Wang 2024-11-02 1:01 ` Ferruh Yigit 2024-11-01 6:21 ` [PATCH v9 4/9] net/zxdh: add msg chan and msg hwlock init Junlong Wang ` (9 subsequent siblings) 12 siblings, 1 reply; 225+ messages in thread From: Junlong Wang @ 2024-11-01 6:21 UTC (permalink / raw) To: dev; +Cc: wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 22214 bytes --] Add device pci init implementation, to obtain PCI capability and read configuration, etc. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/meson.build | 1 + drivers/net/zxdh/zxdh_ethdev.c | 43 +++++ drivers/net/zxdh/zxdh_ethdev.h | 18 ++- drivers/net/zxdh/zxdh_pci.c | 278 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_pci.h | 138 ++++++++++++++++ drivers/net/zxdh/zxdh_queue.h | 109 +++++++++++++ drivers/net/zxdh/zxdh_rxtx.h | 55 +++++++ 7 files changed, 641 insertions(+), 1 deletion(-) create mode 100644 drivers/net/zxdh/zxdh_pci.c create mode 100644 drivers/net/zxdh/zxdh_pci.h create mode 100644 drivers/net/zxdh/zxdh_queue.h create mode 100644 drivers/net/zxdh/zxdh_rxtx.h diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build index 932fb1c835..7db4e7bc71 100644 --- a/drivers/net/zxdh/meson.build +++ b/drivers/net/zxdh/meson.build @@ -15,4 +15,5 @@ endif sources = files( 'zxdh_ethdev.c', + 'zxdh_pci.c', ) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index c911284423..5c747882a7 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -8,6 +8,40 @@ #include "zxdh_ethdev.h" #include "zxdh_logs.h" +#include "zxdh_pci.h" + +struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; + +static int32_t zxdh_init_device(struct rte_eth_dev *eth_dev) +{ + struct zxdh_hw *hw = eth_dev->data->dev_private; + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); + int ret = 0; + + ret = zxdh_read_pci_caps(pci_dev, hw); + if (ret) { + PMD_INIT_LOG(ERR, "port 0x%x pci caps read failed .", hw->port_id); + goto err; + } + + zxdh_hw_internal[hw->port_id].zxdh_vtpci_ops = &zxdh_dev_pci_ops; + zxdh_pci_reset(hw); + zxdh_get_pci_dev_config(hw); + + rte_ether_addr_copy((struct rte_ether_addr *)hw->mac_addr, ð_dev->data->mac_addrs[0]); + + /* If host does not support both status and MSI-X then disable LSC */ + if (vtpci_with_feature(hw, ZXDH_NET_F_STATUS) && hw->use_msix != ZXDH_MSIX_NONE) + eth_dev->data->dev_flags |= RTE_ETH_DEV_INTR_LSC; + else + eth_dev->data->dev_flags &= ~RTE_ETH_DEV_INTR_LSC; + + return 0; + +err: + PMD_INIT_LOG(ERR, "port %d init device failed", eth_dev->data->port_id); + return ret; +} static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) { @@ -45,6 +79,15 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) hw->is_pf = 1; } + ret = zxdh_init_device(eth_dev); + if (ret < 0) + goto err_zxdh_init; + + return ret; + +err_zxdh_init: + rte_free(eth_dev->data->mac_addrs); + eth_dev->data->mac_addrs = NULL; return ret; } diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index a11e3624a9..a22ac15065 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -5,6 +5,7 @@ #ifndef ZXDH_ETHDEV_H #define ZXDH_ETHDEV_H +#include 
<rte_ether.h> #include "ethdev_driver.h" #ifdef __cplusplus @@ -24,15 +25,30 @@ extern "C" { #define ZXDH_MAX_MAC_ADDRS (ZXDH_MAX_UC_MAC_ADDRS + ZXDH_MAX_MC_MAC_ADDRS) #define ZXDH_NUM_BARS 2 +#define ZXDH_RX_QUEUES_MAX 128U +#define ZXDH_TX_QUEUES_MAX 128U struct zxdh_hw { struct rte_eth_dev *eth_dev; - uint64_t bar_addr[ZXDH_NUM_BARS]; + struct zxdh_pci_common_cfg *common_cfg; + struct zxdh_net_config *dev_cfg; + uint64_t bar_addr[ZXDH_NUM_BARS]; + uint64_t host_features; + uint64_t guest_features; + uint32_t max_queue_pairs; uint32_t speed; + uint32_t notify_off_multiplier; + uint16_t *notify_base; + uint16_t pcie_id; uint16_t device_id; uint16_t port_id; + uint8_t *isr; + uint8_t weak_barriers; + uint8_t use_msix; + uint8_t mac_addr[RTE_ETHER_ADDR_LEN]; + uint8_t duplex; uint8_t is_pf; }; diff --git a/drivers/net/zxdh/zxdh_pci.c b/drivers/net/zxdh/zxdh_pci.c new file mode 100644 index 0000000000..a88d620f30 --- /dev/null +++ b/drivers/net/zxdh/zxdh_pci.c @@ -0,0 +1,278 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#include <stdint.h> +#include <unistd.h> + +#include <rte_io.h> +#include <rte_bus.h> +#include <rte_pci.h> +#include <rte_common.h> +#include <rte_cycles.h> + +#include "zxdh_ethdev.h" +#include "zxdh_pci.h" +#include "zxdh_logs.h" +#include "zxdh_queue.h" + +#define ZXDH_PMD_DEFAULT_GUEST_FEATURES \ + (1ULL << ZXDH_NET_F_MRG_RXBUF | \ + 1ULL << ZXDH_NET_F_STATUS | \ + 1ULL << ZXDH_NET_F_MQ | \ + 1ULL << ZXDH_F_ANY_LAYOUT | \ + 1ULL << ZXDH_F_VERSION_1 | \ + 1ULL << ZXDH_F_RING_PACKED | \ + 1ULL << ZXDH_F_IN_ORDER | \ + 1ULL << ZXDH_F_NOTIFICATION_DATA | \ + 1ULL << ZXDH_NET_F_MAC) + +static void zxdh_read_dev_config(struct zxdh_hw *hw, + size_t offset, + void *dst, + int32_t length) +{ + int32_t i = 0; + uint8_t *p = NULL; + uint8_t old_gen = 0; + uint8_t new_gen = 0; + + do { + old_gen = rte_read8(&hw->common_cfg->config_generation); + + p = dst; + for (i = 0; i < length; i++) + *p++ = rte_read8((uint8_t *)hw->dev_cfg + offset + i); + + new_gen = rte_read8(&hw->common_cfg->config_generation); + } while (old_gen != new_gen); +} + +static void zxdh_write_dev_config(struct zxdh_hw *hw, + size_t offset, + const void *src, + int32_t length) +{ + int32_t i = 0; + const uint8_t *p = src; + + for (i = 0; i < length; i++) + rte_write8((*p++), (((uint8_t *)hw->dev_cfg) + offset + i)); +} + +static uint8_t zxdh_get_status(struct zxdh_hw *hw) +{ + return rte_read8(&hw->common_cfg->device_status); +} + +static void zxdh_set_status(struct zxdh_hw *hw, uint8_t status) +{ + rte_write8(status, &hw->common_cfg->device_status); +} + +static uint64_t zxdh_get_features(struct zxdh_hw *hw) +{ + uint32_t features_lo = 0; + uint32_t features_hi = 0; + + rte_write32(0, &hw->common_cfg->device_feature_select); + features_lo = rte_read32(&hw->common_cfg->device_feature); + + rte_write32(1, &hw->common_cfg->device_feature_select); + features_hi = rte_read32(&hw->common_cfg->device_feature); + + return ((uint64_t)features_hi << 32) | features_lo; +} + +static void zxdh_set_features(struct zxdh_hw *hw, uint64_t features) +{ + rte_write32(0, &hw->common_cfg->guest_feature_select); + rte_write32(features & ((1ULL << 32) - 1), &hw->common_cfg->guest_feature); + rte_write32(1, &hw->common_cfg->guest_feature_select); + rte_write32(features >> 32, &hw->common_cfg->guest_feature); +} + +const struct zxdh_pci_ops zxdh_dev_pci_ops = { + .read_dev_cfg = zxdh_read_dev_config, + .write_dev_cfg = zxdh_write_dev_config, + .get_status = zxdh_get_status, + .set_status = 
zxdh_set_status, + .get_features = zxdh_get_features, + .set_features = zxdh_set_features, +}; + +uint16_t zxdh_pci_get_features(struct zxdh_hw *hw) +{ + return ZXDH_VTPCI_OPS(hw)->get_features(hw); +} + +void zxdh_pci_reset(struct zxdh_hw *hw) +{ + PMD_INIT_LOG(INFO, "port %u device start reset, just wait...", hw->port_id); + uint32_t retry = 0; + + ZXDH_VTPCI_OPS(hw)->set_status(hw, ZXDH_CONFIG_STATUS_RESET); + /* Flush status write and wait device ready max 3 seconds. */ + while (ZXDH_VTPCI_OPS(hw)->get_status(hw) != ZXDH_CONFIG_STATUS_RESET) { + ++retry; + rte_delay_ms(1); + } + PMD_INIT_LOG(INFO, "port %u device reset %u ms done", hw->port_id, retry); +} + +static void *get_cfg_addr(struct rte_pci_device *dev, struct zxdh_pci_cap *cap) +{ + uint8_t bar = cap->bar; + uint32_t length = cap->length; + uint32_t offset = cap->offset; + + if (bar >= PCI_MAX_RESOURCE) { + PMD_INIT_LOG(ERR, "invalid bar: %u", bar); + return NULL; + } + if (offset + length < offset) { + PMD_INIT_LOG(ERR, "offset(%u) + length(%u) overflows", offset, length); + return NULL; + } + if (offset + length > dev->mem_resource[bar].len) { + PMD_INIT_LOG(ERR, "invalid cap: overflows bar space"); + return NULL; + } + uint8_t *base = dev->mem_resource[bar].addr; + + if (base == NULL) { + PMD_INIT_LOG(ERR, "bar %u base addr is NULL", bar); + return NULL; + } + return base + offset; +} + +int32_t zxdh_read_pci_caps(struct rte_pci_device *dev, struct zxdh_hw *hw) +{ + struct zxdh_pci_cap cap; + uint8_t pos = 0; + int32_t ret = 0; + + if (dev->mem_resource[0].addr == NULL) { + PMD_INIT_LOG(ERR, "bar0 base addr is NULL"); + return -1; + } + + hw->use_msix = zxdh_pci_msix_detect(dev); + + pos = rte_pci_find_capability(dev, RTE_PCI_CAP_ID_VNDR); + while (pos) { + ret = rte_pci_read_config(dev, &cap, sizeof(cap), pos); + if (ret != sizeof(cap)) { + PMD_INIT_LOG(ERR, "failed to read pci cap at pos: %x ret %d", pos, ret); + break; + } + if (cap.cap_vndr != RTE_PCI_CAP_ID_VNDR) { + PMD_INIT_LOG(DEBUG, "[%2x] skipping non VNDR cap id: %02x", + pos, cap.cap_vndr); + goto next; + } + PMD_INIT_LOG(DEBUG, "[%2x] cfg type: %u, bar: %u, offset: %04x, len: %u", + pos, cap.cfg_type, cap.bar, cap.offset, cap.length); + + switch (cap.cfg_type) { + case ZXDH_PCI_CAP_COMMON_CFG: + hw->common_cfg = get_cfg_addr(dev, &cap); + break; + case ZXDH_PCI_CAP_NOTIFY_CFG: { + ret = rte_pci_read_config(dev, &hw->notify_off_multiplier, + 4, pos + sizeof(cap)); + if (ret != 4) + PMD_INIT_LOG(ERR, + "failed to read notify_off_multiplier, ret %d", ret); + else + hw->notify_base = get_cfg_addr(dev, &cap); + break; + } + case ZXDH_PCI_CAP_DEVICE_CFG: + hw->dev_cfg = get_cfg_addr(dev, &cap); + break; + case ZXDH_PCI_CAP_ISR_CFG: + hw->isr = get_cfg_addr(dev, &cap); + break; + case ZXDH_PCI_CAP_PCI_CFG: { + hw->pcie_id = *(uint16_t *)&cap.padding[1]; + PMD_INIT_LOG(DEBUG, "get pcie id 0x%x", hw->pcie_id); + + if ((hw->pcie_id >> 11) & 0x1) /* PF */ { + PMD_INIT_LOG(DEBUG, "EP %u PF %u", + hw->pcie_id >> 12, (hw->pcie_id >> 8) & 0x7); + } else { /* VF */ + PMD_INIT_LOG(DEBUG, "EP %u PF %u VF %u", + hw->pcie_id >> 12, + (hw->pcie_id >> 8) & 0x7, + hw->pcie_id & 0xff); + } + break; + } + } +next: + pos = cap.cap_next; + } + if (hw->common_cfg == NULL || hw->notify_base == NULL || + hw->dev_cfg == NULL || hw->isr == NULL) { + PMD_INIT_LOG(ERR, "no zxdh pci device found."); + return -1; + } + return 0; +} + +void zxdh_pci_read_dev_config(struct zxdh_hw *hw, size_t offset, void *dst, int32_t length) +{ + ZXDH_VTPCI_OPS(hw)->read_dev_cfg(hw, offset, dst, length); +} + 
+int32_t zxdh_get_pci_dev_config(struct zxdh_hw *hw) +{ + uint64_t guest_features = 0; + uint64_t nego_features = 0; + uint32_t max_queue_pairs = 0; + + hw->host_features = zxdh_pci_get_features(hw); + + guest_features = (uint64_t)ZXDH_PMD_DEFAULT_GUEST_FEATURES; + nego_features = guest_features & hw->host_features; + + hw->guest_features = nego_features; + + if (hw->guest_features & (1ULL << ZXDH_NET_F_MAC)) { + zxdh_pci_read_dev_config(hw, offsetof(struct zxdh_net_config, mac), + &hw->mac_addr, RTE_ETHER_ADDR_LEN); + } else { + rte_eth_random_addr(&hw->mac_addr[0]); + } + + zxdh_pci_read_dev_config(hw, offsetof(struct zxdh_net_config, max_virtqueue_pairs), + &max_queue_pairs, sizeof(max_queue_pairs)); + + if (max_queue_pairs == 0) + hw->max_queue_pairs = ZXDH_RX_QUEUES_MAX; + else + hw->max_queue_pairs = RTE_MIN(ZXDH_RX_QUEUES_MAX, max_queue_pairs); + PMD_INIT_LOG(DEBUG, "set max queue pairs %d", hw->max_queue_pairs); + + return 0; +} + +enum zxdh_msix_status zxdh_pci_msix_detect(struct rte_pci_device *dev) +{ + uint16_t flags = 0; + uint8_t pos = 0; + int16_t ret = 0; + + pos = rte_pci_find_capability(dev, RTE_PCI_CAP_ID_MSIX); + + if (pos > 0) { + ret = rte_pci_read_config(dev, &flags, 2, pos + RTE_PCI_MSIX_FLAGS); + if (ret == 2 && flags & RTE_PCI_MSIX_FLAGS_ENABLE) + return ZXDH_MSIX_ENABLED; + else + return ZXDH_MSIX_DISABLED; + } + return ZXDH_MSIX_NONE; +} diff --git a/drivers/net/zxdh/zxdh_pci.h b/drivers/net/zxdh/zxdh_pci.h new file mode 100644 index 0000000000..ff656f28e6 --- /dev/null +++ b/drivers/net/zxdh/zxdh_pci.h @@ -0,0 +1,138 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#ifndef ZXDH_PCI_H +#define ZXDH_PCI_H + +#include <stdint.h> +#include <stdbool.h> + +#include <bus_pci_driver.h> + +#include "zxdh_ethdev.h" + +#ifdef __cplusplus +extern "C" { +#endif + +enum zxdh_msix_status { + ZXDH_MSIX_NONE = 0, + ZXDH_MSIX_DISABLED = 1, + ZXDH_MSIX_ENABLED = 2 +}; + +#define ZXDH_NET_F_MAC 5 /* Host has given MAC address. */ +#define ZXDH_NET_F_MRG_RXBUF 15 /* Host can merge receive buffers. */ +#define ZXDH_NET_F_STATUS 16 /* zxdh_net_config.status available */ +#define ZXDH_NET_F_MQ 22 /* Device supports Receive Flow Steering */ +#define ZXDH_F_ANY_LAYOUT 27 /* Can the device handle any descriptor layout */ +#define ZXDH_F_VERSION_1 32 +#define ZXDH_F_RING_PACKED 34 +#define ZXDH_F_IN_ORDER 35 +#define ZXDH_F_NOTIFICATION_DATA 38 + +#define ZXDH_PCI_CAP_COMMON_CFG 1 /* Common configuration */ +#define ZXDH_PCI_CAP_NOTIFY_CFG 2 /* Notifications */ +#define ZXDH_PCI_CAP_ISR_CFG 3 /* ISR Status */ +#define ZXDH_PCI_CAP_DEVICE_CFG 4 /* Device specific configuration */ +#define ZXDH_PCI_CAP_PCI_CFG 5 /* PCI configuration access */ + +/* Status byte for guest to report progress. */ +#define ZXDH_CONFIG_STATUS_RESET 0x00 +#define ZXDH_CONFIG_STATUS_ACK 0x01 +#define ZXDH_CONFIG_STATUS_DRIVER 0x02 +#define ZXDH_CONFIG_STATUS_DRIVER_OK 0x04 +#define ZXDH_CONFIG_STATUS_FEATURES_OK 0x08 +#define ZXDH_CONFIG_STATUS_DEV_NEED_RESET 0x40 +#define ZXDH_CONFIG_STATUS_FAILED 0x80 + +struct zxdh_net_config { + /* The config defining mac address (if ZXDH_NET_F_MAC) */ + uint8_t mac[RTE_ETHER_ADDR_LEN]; + /* See ZXDH_NET_F_STATUS and ZXDH_NET_S_* above */ + uint16_t status; + uint16_t max_virtqueue_pairs; + uint16_t mtu; + uint32_t speed; + uint8_t duplex; +} __rte_packed; + +/* This is the PCI capability header: */ +struct zxdh_pci_cap { + uint8_t cap_vndr; /* Generic PCI field: PCI_CAP_ID_VNDR */ + uint8_t cap_next; /* Generic PCI field: next ptr. 
*/ + uint8_t cap_len; /* Generic PCI field: capability length */ + uint8_t cfg_type; /* Identifies the structure. */ + uint8_t bar; /* Where to find it. */ + uint8_t padding[3]; /* Pad to full dword. */ + uint32_t offset; /* Offset within bar. */ + uint32_t length; /* Length of the structure, in bytes. */ +}; + +/* Fields in ZXDH_PCI_CAP_COMMON_CFG: */ +struct zxdh_pci_common_cfg { + /* About the whole device. */ + uint32_t device_feature_select; /* read-write */ + uint32_t device_feature; /* read-only */ + uint32_t guest_feature_select; /* read-write */ + uint32_t guest_feature; /* read-write */ + uint16_t msix_config; /* read-write */ + uint16_t num_queues; /* read-only */ + uint8_t device_status; /* read-write */ + uint8_t config_generation; /* read-only */ + + /* About a specific virtqueue. */ + uint16_t queue_select; /* read-write */ + uint16_t queue_size; /* read-write, power of 2. */ + uint16_t queue_msix_vector; /* read-write */ + uint16_t queue_enable; /* read-write */ + uint16_t queue_notify_off; /* read-only */ + uint32_t queue_desc_lo; /* read-write */ + uint32_t queue_desc_hi; /* read-write */ + uint32_t queue_avail_lo; /* read-write */ + uint32_t queue_avail_hi; /* read-write */ + uint32_t queue_used_lo; /* read-write */ + uint32_t queue_used_hi; /* read-write */ +}; + +static inline int32_t vtpci_with_feature(struct zxdh_hw *hw, uint64_t bit) +{ + return (hw->guest_features & (1ULL << bit)) != 0; +} + +struct zxdh_pci_ops { + void (*read_dev_cfg)(struct zxdh_hw *hw, size_t offset, void *dst, int32_t len); + void (*write_dev_cfg)(struct zxdh_hw *hw, size_t offset, const void *src, int32_t len); + + uint8_t (*get_status)(struct zxdh_hw *hw); + void (*set_status)(struct zxdh_hw *hw, uint8_t status); + + uint64_t (*get_features)(struct zxdh_hw *hw); + void (*set_features)(struct zxdh_hw *hw, uint64_t features); +}; + +struct zxdh_hw_internal { + const struct zxdh_pci_ops *zxdh_vtpci_ops; +}; + +#define ZXDH_VTPCI_OPS(hw) (zxdh_hw_internal[(hw)->port_id].zxdh_vtpci_ops) + +extern struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; +extern const struct zxdh_pci_ops zxdh_dev_pci_ops; + +void zxdh_pci_reset(struct zxdh_hw *hw); +void zxdh_pci_read_dev_config(struct zxdh_hw *hw, size_t offset, + void *dst, int32_t length); + +int32_t zxdh_read_pci_caps(struct rte_pci_device *dev, struct zxdh_hw *hw); +int32_t zxdh_get_pci_dev_config(struct zxdh_hw *hw); + +uint16_t zxdh_pci_get_features(struct zxdh_hw *hw); +enum zxdh_msix_status zxdh_pci_msix_detect(struct rte_pci_device *dev); + +#ifdef __cplusplus +} +#endif + +#endif /* ZXDH_PCI_H */ diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h new file mode 100644 index 0000000000..0b6f48adf9 --- /dev/null +++ b/drivers/net/zxdh/zxdh_queue.h @@ -0,0 +1,109 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#ifndef ZXDH_QUEUE_H +#define ZXDH_QUEUE_H + +#include <stdint.h> + +#include <rte_common.h> + +#include "zxdh_ethdev.h" +#include "zxdh_rxtx.h" + +#ifdef __cplusplus +extern "C" { +#endif + +/** ring descriptors: 16 bytes. + * These can chain together via "next". + **/ +struct zxdh_vring_desc { + uint64_t addr; /* Address (guest-physical). */ + uint32_t len; /* Length. */ + uint16_t flags; /* The flags as indicated above. */ + uint16_t next; /* We chain unused descriptors via this. 
*/ +} __rte_packed; + +struct zxdh_vring_avail { + uint16_t flags; + uint16_t idx; + uint16_t ring[]; +} __rte_packed; + +struct zxdh_vring_packed_desc { + uint64_t addr; + uint32_t len; + uint16_t id; + uint16_t flags; +} __rte_packed; + +struct zxdh_vring_packed_desc_event { + uint16_t desc_event_off_wrap; + uint16_t desc_event_flags; +} __rte_packed; + +struct zxdh_vring_packed { + uint32_t num; + struct zxdh_vring_packed_desc *desc; + struct zxdh_vring_packed_desc_event *driver; + struct zxdh_vring_packed_desc_event *device; +} __rte_packed; + +struct zxdh_vq_desc_extra { + void *cookie; + uint16_t ndescs; + uint16_t next; +} __rte_packed; + +struct zxdh_virtqueue { + struct zxdh_hw *hw; /**< zxdh_hw structure pointer. */ + struct { + /**< vring keeping descs and events */ + struct zxdh_vring_packed ring; + uint8_t used_wrap_counter; + uint8_t rsv; + uint16_t cached_flags; /**< cached flags for descs */ + uint16_t event_flags_shadow; + uint16_t rsv1; + } __rte_packed vq_packed; + uint16_t vq_used_cons_idx; /**< last consumed descriptor */ + uint16_t vq_nentries; /**< vring desc numbers */ + uint16_t vq_free_cnt; /**< num of desc available */ + uint16_t vq_avail_idx; /**< sync until needed */ + uint16_t vq_free_thresh; /**< free threshold */ + uint16_t rsv2; + + void *vq_ring_virt_mem; /**< linear address of vring*/ + uint32_t vq_ring_size; + + union { + struct zxdh_virtnet_rx rxq; + struct zxdh_virtnet_tx txq; + }; + + /** < physical address of vring, + * or virtual address + **/ + rte_iova_t vq_ring_mem; + + /** + * Head of the free chain in the descriptor table. If + * there are no free descriptors, this will be set to + * VQ_RING_DESC_CHAIN_END. + **/ + uint16_t vq_desc_head_idx; + uint16_t vq_desc_tail_idx; + uint16_t vq_queue_index; /**< PCI queue index */ + uint16_t offset; /**< relative offset to obtain addr in mbuf */ + uint16_t *notify_addr; + struct rte_mbuf **sw_ring; /**< RX software ring. */ + struct zxdh_vq_desc_extra vq_descx[]; +} __rte_packed; + +#ifdef __cplusplus +} +#endif + +#endif /* ZXDH_QUEUE_H */ diff --git a/drivers/net/zxdh/zxdh_rxtx.h b/drivers/net/zxdh/zxdh_rxtx.h new file mode 100644 index 0000000000..7d4b5481ec --- /dev/null +++ b/drivers/net/zxdh/zxdh_rxtx.h @@ -0,0 +1,55 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#ifndef ZXDH_RXTX_H +#define ZXDH_RXTX_H + +#include <stdint.h> + +#include <rte_common.h> +#include <rte_mbuf_core.h> + +#ifdef __cplusplus +extern "C" { +#endif + +struct zxdh_virtnet_stats { + uint64_t packets; + uint64_t bytes; + uint64_t errors; + uint64_t multicast; + uint64_t broadcast; + uint64_t truncated_err; + uint64_t size_bins[8]; +}; + +struct zxdh_virtnet_rx { + struct zxdh_virtqueue *vq; + + /* dummy mbuf, for wraparound when processing RX ring. */ + struct rte_mbuf fake_mbuf; + + uint64_t mbuf_initializer; /* value to init mbufs. */ + struct rte_mempool *mpool; /* mempool for mbuf allocation */ + uint16_t queue_id; /* DPDK queue index. */ + uint16_t port_id; /* Device port identifier. */ + struct zxdh_virtnet_stats stats; + const struct rte_memzone *mz; /* mem zone to populate RX ring. */ +} __rte_packed; + +struct zxdh_virtnet_tx { + struct zxdh_virtqueue *vq; + const struct rte_memzone *zxdh_net_hdr_mz; /* memzone to populate hdr. */ + rte_iova_t zxdh_net_hdr_mem; /* hdr for each xmit packet */ + uint16_t queue_id; /* DPDK queue index. */ + uint16_t port_id; /* Device port identifier. 
*/ + struct zxdh_virtnet_stats stats; + const struct rte_memzone *mz; /* mem zone to populate TX ring. */ +} __rte_packed; + +#ifdef __cplusplus +} +#endif + +#endif /* ZXDH_RXTX_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 50871 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
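To make the feature-negotiation flow in this patch concrete: zxdh_get_pci_dev_config() stores guest_features & hw->host_features into hw->guest_features, and individual bits are then tested with vtpci_with_feature(). A small usage sketch follows (illustrative only, not part of the series; it assumes the two driver headers introduced by this patch):

#include "zxdh_ethdev.h"
#include "zxdh_pci.h"

/* Hypothetical helper: decide whether the packed ring layout was
 * negotiated, based on the bits stored by zxdh_get_pci_dev_config().
 */
static inline int zxdh_uses_packed_ring(struct zxdh_hw *hw)
{
	return vtpci_with_feature(hw, ZXDH_F_VERSION_1) &&
	       vtpci_with_feature(hw, ZXDH_F_RING_PACKED);
}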
* Re: [PATCH v9 3/9] net/zxdh: add zxdh device pci init implementation 2024-11-01 6:21 ` [PATCH v9 3/9] net/zxdh: add zxdh device pci init implementation Junlong Wang @ 2024-11-02 1:01 ` Ferruh Yigit 0 siblings, 0 replies; 225+ messages in thread From: Ferruh Yigit @ 2024-11-02 1:01 UTC (permalink / raw) To: Junlong Wang, dev; +Cc: wang.yong19 On 11/1/2024 6:21 AM, Junlong Wang wrote: > Add device pci init implementation, > to obtain PCI capability and read configuration, etc. > > Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> > --- > drivers/net/zxdh/meson.build | 1 + > drivers/net/zxdh/zxdh_ethdev.c | 43 +++++ > drivers/net/zxdh/zxdh_ethdev.h | 18 ++- > drivers/net/zxdh/zxdh_pci.c | 278 +++++++++++++++++++++++++++++++++ > drivers/net/zxdh/zxdh_pci.h | 138 ++++++++++++++++ > drivers/net/zxdh/zxdh_queue.h | 109 +++++++++++++ > drivers/net/zxdh/zxdh_rxtx.h | 55 +++++++ > zxdh_queue.h & zxdh_rxtx.h seem not to be used in this patch, and indeed they are not related to the PCI init, or to PCI at all. Can you add them in the patch where they are used? <...> > +#define ZXDH_PMD_DEFAULT_GUEST_FEATURES \ > + (1ULL << ZXDH_NET_F_MRG_RXBUF | \ > + 1ULL << ZXDH_NET_F_STATUS | \ > + 1ULL << ZXDH_NET_F_MQ | \ > + 1ULL << ZXDH_F_ANY_LAYOUT | \ > + 1ULL << ZXDH_F_VERSION_1 | \ > + 1ULL << ZXDH_F_RING_PACKED | \ > + 1ULL << ZXDH_F_IN_ORDER | \ > + 1ULL << ZXDH_F_NOTIFICATION_DATA | \ > + 1ULL << ZXDH_NET_F_MAC) > + > +static void zxdh_read_dev_config(struct zxdh_hw *hw, > + size_t offset, > + void *dst, > + int32_t length) > Syntax issue: with tab stop = 8, the above lines look wrong, can you please check? <...> > +struct zxdh_virtqueue { > + struct zxdh_hw *hw; /**< zxdh_hw structure pointer. */ > + struct { > + /**< vring keeping descs and events */ > + struct zxdh_vring_packed ring; > + uint8_t used_wrap_counter; > + uint8_t rsv; > + uint16_t cached_flags; /**< cached flags for descs */ > + uint16_t event_flags_shadow; > + uint16_t rsv1; > + } __rte_packed vq_packed; > + uint16_t vq_used_cons_idx; /**< last consumed descriptor */ > + uint16_t vq_nentries; /**< vring desc numbers */ > + uint16_t vq_free_cnt; /**< num of desc available */ > + uint16_t vq_avail_idx; /**< sync until needed */ > + uint16_t vq_free_thresh; /**< free threshold */ > + uint16_t rsv2; > + > + void *vq_ring_virt_mem; /**< linear address of vring*/ > + uint32_t vq_ring_size; > + > + union { > + struct zxdh_virtnet_rx rxq; > + struct zxdh_virtnet_tx txq; > + }; > + > + /** < physical address of vring, > + * or virtual address > + **/ > '/**' is doxygen syntax; I don't know if you are using doxygen. But '/**<' is doxygen syntax to say the comment is for the code before it, not after, so it is used wrongly above. If you don't have a specific reason, why not use regular comments (non doxygen syntax), leave the first line empty, and terminate with '*/', so the above becomes: /* * physical address of vring, or virtual address */ This comment applies to all comments across the series; the usage looks mixed. ^ permalink raw reply [flat|nested] 225+ messages in thread
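For readers unfamiliar with the distinction raised above, a short illustration (not taken from the series) of the two doxygen forms next to a plain comment:

#include <stdint.h>

struct example {
	uint32_t a; /**< doxygen: documents the member BEFORE the marker, i.e. 'a' */
	/** doxygen: documents the member AFTER the comment, i.e. 'b' */
	uint32_t b;
	/*
	 * plain (non doxygen) comment, in the style the review suggests:
	 * first line empty, terminated with a normal close
	 */
	uint32_t c;
};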
* [PATCH v9 4/9] net/zxdh: add msg chan and msg hwlock init 2024-11-01 6:21 ` [PATCH v9 0/9] net/zxdh: introduce net zxdh driver Junlong Wang ` (2 preceding siblings ...) 2024-11-01 6:21 ` [PATCH v9 3/9] net/zxdh: add zxdh device pci init implementation Junlong Wang @ 2024-11-01 6:21 ` Junlong Wang 2024-11-02 1:00 ` Ferruh Yigit 2024-11-04 2:47 ` Junlong Wang 2024-11-01 6:21 ` [PATCH v9 5/9] net/zxdh: add msg chan enable implementation Junlong Wang ` (8 subsequent siblings) 12 siblings, 2 replies; 225+ messages in thread From: Junlong Wang @ 2024-11-01 6:21 UTC (permalink / raw) To: dev; +Cc: wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 9083 bytes --] Add msg channel and hwlock init implementation. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/meson.build | 1 + drivers/net/zxdh/zxdh_ethdev.c | 15 +++ drivers/net/zxdh/zxdh_ethdev.h | 1 + drivers/net/zxdh/zxdh_msg.c | 161 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_msg.h | 67 ++++++++++++++ 5 files changed, 245 insertions(+) create mode 100644 drivers/net/zxdh/zxdh_msg.c create mode 100644 drivers/net/zxdh/zxdh_msg.h diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build index 7db4e7bc71..2e0c8fddae 100644 --- a/drivers/net/zxdh/meson.build +++ b/drivers/net/zxdh/meson.build @@ -16,4 +16,5 @@ endif sources = files( 'zxdh_ethdev.c', 'zxdh_pci.c', + 'zxdh_msg.c', ) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 5c747882a7..da454cdff3 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -9,6 +9,7 @@ #include "zxdh_ethdev.h" #include "zxdh_logs.h" #include "zxdh_pci.h" +#include "zxdh_msg.h" struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; @@ -83,9 +84,23 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) if (ret < 0) goto err_zxdh_init; + ret = zxdh_msg_chan_init(); + if (ret != 0) { + PMD_INIT_LOG(ERR, "Failed to init bar msg chan"); + goto err_zxdh_init; + } + hw->msg_chan_init = 1; + + ret = zxdh_msg_chan_hwlock_init(eth_dev); + if (ret != 0) { + PMD_INIT_LOG(ERR, "zxdh_msg_chan_hwlock_init failed ret %d", ret); + goto err_zxdh_init; + } + return ret; err_zxdh_init: + zxdh_bar_msg_chan_exit(); rte_free(eth_dev->data->mac_addrs); eth_dev->data->mac_addrs = NULL; return ret; diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index a22ac15065..20ead56e44 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -51,6 +51,7 @@ struct zxdh_hw { uint8_t duplex; uint8_t is_pf; + uint8_t msg_chan_init; }; #ifdef __cplusplus diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c new file mode 100644 index 0000000000..9dcf99f1f7 --- /dev/null +++ b/drivers/net/zxdh/zxdh_msg.c @@ -0,0 +1,161 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#include <stdbool.h> + +#include <rte_common.h> +#include <rte_memcpy.h> +#include <rte_spinlock.h> +#include <rte_cycles.h> +#include <inttypes.h> +#include <rte_malloc.h> + +#include "zxdh_ethdev.h" +#include "zxdh_logs.h" +#include "zxdh_msg.h" + +#define ZXDH_REPS_INFO_FLAG_USABLE 0x00 +#define ZXDH_BAR_SEQID_NUM_MAX 256 + +#define ZXDH_PCIEID_IS_PF_MASK (0x0800) +#define ZXDH_PCIEID_PF_IDX_MASK (0x0700) +#define ZXDH_PCIEID_VF_IDX_MASK (0x00ff) +#define ZXDH_PCIEID_EP_IDX_MASK (0x7000) +/* PCIEID bit field offset */ +#define ZXDH_PCIEID_PF_IDX_OFFSET (8) +#define ZXDH_PCIEID_EP_IDX_OFFSET (12) + +#define ZXDH_MULTIPLY_BY_8(x) ((x) 
<< 3) +#define ZXDH_MULTIPLY_BY_32(x) ((x) << 5) +#define ZXDH_MULTIPLY_BY_256(x) ((x) << 8) + +#define ZXDH_MAX_EP_NUM (4) +#define ZXDH_MAX_HARD_SPINLOCK_NUM (511) + +#define ZXDH_BAR0_SPINLOCK_OFFSET (0x4000) +#define ZXDH_FW_SHRD_OFFSET (0x5000) +#define ZXDH_FW_SHRD_INNER_HW_LABEL_PAT (0x800) +#define ZXDH_HW_LABEL_OFFSET \ + (ZXDH_FW_SHRD_OFFSET + ZXDH_FW_SHRD_INNER_HW_LABEL_PAT) + +struct zxdh_dev_stat { + bool is_mpf_scanned; + bool is_res_init; + int16_t dev_cnt; /* probe cnt */ +}; +struct zxdh_dev_stat g_dev_stat = {0}; + +struct zxdh_seqid_item { + void *reps_addr; + uint16_t id; + uint16_t buffer_len; + uint16_t flag; +}; + +struct zxdh_seqid_ring { + uint16_t cur_id; + rte_spinlock_t lock; + struct zxdh_seqid_item reps_info_tbl[ZXDH_BAR_SEQID_NUM_MAX]; +}; +struct zxdh_seqid_ring g_seqid_ring = {0}; + +static uint16_t pcie_id_to_hard_lock(uint16_t src_pcieid, uint8_t dst) +{ + uint16_t lock_id = 0; + uint16_t pf_idx = (src_pcieid & ZXDH_PCIEID_PF_IDX_MASK) >> ZXDH_PCIEID_PF_IDX_OFFSET; + uint16_t ep_idx = (src_pcieid & ZXDH_PCIEID_EP_IDX_MASK) >> ZXDH_PCIEID_EP_IDX_OFFSET; + + switch (dst) { + /* msg to risc */ + case ZXDH_MSG_CHAN_END_RISC: + lock_id = ZXDH_MULTIPLY_BY_8(ep_idx) + pf_idx; + break; + /* msg to pf/vf */ + case ZXDH_MSG_CHAN_END_VF: + case ZXDH_MSG_CHAN_END_PF: + lock_id = ZXDH_MULTIPLY_BY_8(ep_idx) + pf_idx + + ZXDH_MULTIPLY_BY_8(1 + ZXDH_MAX_EP_NUM); + break; + default: + lock_id = 0; + break; + } + if (lock_id >= ZXDH_MAX_HARD_SPINLOCK_NUM) + lock_id = 0; + + return lock_id; +} + +static void label_write(uint64_t label_lock_addr, uint32_t lock_id, uint16_t value) +{ + *(volatile uint16_t *)(label_lock_addr + lock_id * 2) = value; +} + +static void spinlock_write(uint64_t virt_lock_addr, uint32_t lock_id, uint8_t data) +{ + *(volatile uint8_t *)((uint64_t)virt_lock_addr + (uint64_t)lock_id) = data; +} + +static int32_t zxdh_spinlock_unlock(uint32_t virt_lock_id, uint64_t virt_addr, uint64_t label_addr) +{ + label_write((uint64_t)label_addr, virt_lock_id, 0); + spinlock_write(virt_addr, virt_lock_id, 0); + return 0; +} + +/** + * Fun: PF init hard_spinlock addr + */ +static int bar_chan_pf_init_spinlock(uint16_t pcie_id, uint64_t bar_base_addr) +{ + int lock_id = pcie_id_to_hard_lock(pcie_id, ZXDH_MSG_CHAN_END_RISC); + + zxdh_spinlock_unlock(lock_id, bar_base_addr + ZXDH_BAR0_SPINLOCK_OFFSET, + bar_base_addr + ZXDH_HW_LABEL_OFFSET); + lock_id = pcie_id_to_hard_lock(pcie_id, ZXDH_MSG_CHAN_END_VF); + zxdh_spinlock_unlock(lock_id, bar_base_addr + ZXDH_BAR0_SPINLOCK_OFFSET, + bar_base_addr + ZXDH_HW_LABEL_OFFSET); + return 0; +} + +int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + if (!hw->is_pf) + return 0; + return bar_chan_pf_init_spinlock(hw->pcie_id, (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX])); +} + +static rte_spinlock_t chan_lock; +int zxdh_msg_chan_init(void) +{ + uint16_t seq_id = 0; + + g_dev_stat.dev_cnt++; + if (g_dev_stat.is_res_init) + return ZXDH_BAR_MSG_OK; + + rte_spinlock_init(&chan_lock); + g_seqid_ring.cur_id = 0; + rte_spinlock_init(&g_seqid_ring.lock); + + for (seq_id = 0; seq_id < ZXDH_BAR_SEQID_NUM_MAX; seq_id++) { + struct zxdh_seqid_item *reps_info = &g_seqid_ring.reps_info_tbl[seq_id]; + + reps_info->id = seq_id; + reps_info->flag = ZXDH_REPS_INFO_FLAG_USABLE; + } + g_dev_stat.is_res_init = true; + return ZXDH_BAR_MSG_OK; +} + +int zxdh_bar_msg_chan_exit(void) +{ + if (!g_dev_stat.is_res_init || (--g_dev_stat.dev_cnt > 0)) + return ZXDH_BAR_MSG_OK; + + g_dev_stat.is_res_init = 
false; + return ZXDH_BAR_MSG_OK; +} diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h new file mode 100644 index 0000000000..a0b46c900a --- /dev/null +++ b/drivers/net/zxdh/zxdh_msg.h @@ -0,0 +1,67 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#ifndef ZXDH_MSG_H +#define ZXDH_MSG_H + +#include <stdint.h> + +#include <ethdev_driver.h> + +#ifdef __cplusplus +extern "C" { +#endif + +#define ZXDH_BAR0_INDEX 0 + +enum ZXDH_DRIVER_TYPE { + ZXDH_MSG_CHAN_END_MPF = 0, + ZXDH_MSG_CHAN_END_PF, + ZXDH_MSG_CHAN_END_VF, + ZXDH_MSG_CHAN_END_RISC, +}; + +enum ZXDH_BAR_MSG_RTN { + ZXDH_BAR_MSG_OK = 0, + ZXDH_BAR_MSG_ERR_MSGID, + ZXDH_BAR_MSG_ERR_NULL, + ZXDH_BAR_MSG_ERR_TYPE, /* Message type exception */ + ZXDH_BAR_MSG_ERR_MODULE, /* Module ID exception */ + ZXDH_BAR_MSG_ERR_BODY_NULL, /* Message body exception */ + ZXDH_BAR_MSG_ERR_LEN, /* Message length exception */ + ZXDH_BAR_MSG_ERR_TIME_OUT, /* Message sending length too long */ + ZXDH_BAR_MSG_ERR_NOT_READY, /* Abnormal message sending conditions*/ + ZXDH_BAR_MEG_ERR_NULL_FUNC, /* Empty receive processing function pointer*/ + ZXDH_BAR_MSG_ERR_REPEAT_REGISTER, /* Module duplicate registration*/ + ZXDH_BAR_MSG_ERR_UNGISTER, /* Repeated deregistration*/ + /** + * The sending interface parameter boundary structure pointer is empty + */ + ZXDH_BAR_MSG_ERR_NULL_PARA, + ZXDH_BAR_MSG_ERR_REPSBUFF_LEN, /* The length of reps_buff is too short*/ + /** + * Unable to find the corresponding message processing function for this module + */ + ZXDH_BAR_MSG_ERR_MODULE_NOEXIST, + /** + * The virtual address in the parameters passed in by the sending interface is empty + */ + ZXDH_BAR_MSG_ERR_VIRTADDR_NULL, + ZXDH_BAR_MSG_ERR_REPLY, /* sync msg resp_error */ + ZXDH_BAR_MSG_ERR_MPF_NOT_SCANNED, + ZXDH_BAR_MSG_ERR_KERNEL_READY, + ZXDH_BAR_MSG_ERR_USR_RET_ERR, + ZXDH_BAR_MSG_ERR_ERR_PCIEID, + ZXDH_BAR_MSG_ERR_SOCKET, /* netlink sockte err */ +}; + +int zxdh_msg_chan_init(void); +int zxdh_bar_msg_chan_exit(void); +int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev); + +#ifdef __cplusplus +} +#endif + +#endif /* ZXDH_MSG_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 17557 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
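As a worked example of the pcie_id_to_hard_lock() mapping used by bar_chan_pf_init_spinlock() above (the pcie_id value is chosen purely for illustration):

/* Assume pcie_id = 0x0900, i.e. PF flag (0x0800) set, EP 0, PF 1:
 *   ep_idx = (0x0900 & ZXDH_PCIEID_EP_IDX_MASK) >> 12 = 0
 *   pf_idx = (0x0900 & ZXDH_PCIEID_PF_IDX_MASK) >> 8  = 1
 * dst = ZXDH_MSG_CHAN_END_RISC:
 *   lock_id = 8 * ep_idx + pf_idx                             = 1
 * dst = ZXDH_MSG_CHAN_END_PF / ZXDH_MSG_CHAN_END_VF:
 *   lock_id = 8 * ep_idx + pf_idx + 8 * (1 + ZXDH_MAX_EP_NUM) = 41
 * Both stay below ZXDH_MAX_HARD_SPINLOCK_NUM (511), so they are used
 * as-is, and the PF releases both hard spinlocks during init.
 */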
* Re: [PATCH v9 4/9] net/zxdh: add msg chan and msg hwlock init 2024-11-01 6:21 ` [PATCH v9 4/9] net/zxdh: add msg chan and msg hwlock init Junlong Wang @ 2024-11-02 1:00 ` Ferruh Yigit 2024-11-04 2:47 ` Junlong Wang 1 sibling, 0 replies; 225+ messages in thread From: Ferruh Yigit @ 2024-11-02 1:00 UTC (permalink / raw) To: Junlong Wang, dev; +Cc: wang.yong19 On 11/1/2024 6:21 AM, Junlong Wang wrote: > Add msg channel and hwlock init implementation. > > Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> > <...> > @@ -83,9 +84,23 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) > if (ret < 0) > goto err_zxdh_init; > > + ret = zxdh_msg_chan_init(); > + if (ret != 0) { > + PMD_INIT_LOG(ERR, "Failed to init bar msg chan"); > + goto err_zxdh_init; > + } > + hw->msg_chan_init = 1; > + > + ret = zxdh_msg_chan_hwlock_init(eth_dev); > + if (ret != 0) { > + PMD_INIT_LOG(ERR, "zxdh_msg_chan_hwlock_init failed ret %d", ret); > + goto err_zxdh_init; > + } > + > return ret; > > err_zxdh_init: > + zxdh_bar_msg_chan_exit(); > Should 'zxdh_bar_msg_chan_exit()' be called during zxdh_eth_dev_uninit()? <...> > + > +struct zxdh_dev_stat { > + bool is_mpf_scanned; > + bool is_res_init; > + int16_t dev_cnt; /* probe cnt */ > +}; > +struct zxdh_dev_stat g_dev_stat = {0}; > Is there a reason not to make this global variable 'static'? Please remember, when a DPDK application is compiled, it links in the whole application plus the other drivers and libraries; if there really is a good reason for a global, please keep all global variables within the scope of the driver. And there is no need to initialize a global variable to 0; that is done by default. > + > +struct zxdh_seqid_item { > + void *reps_addr; > + uint16_t id; > + uint16_t buffer_len; > + uint16_t flag; > +}; > + > +struct zxdh_seqid_ring { > + uint16_t cur_id; > + rte_spinlock_t lock; > + struct zxdh_seqid_item reps_info_tbl[ZXDH_BAR_SEQID_NUM_MAX]; > +}; > +struct zxdh_seqid_ring g_seqid_ring = {0}; > + ditto <...> > +/** > + * Fun: PF init hard_spinlock addr > + */ > +static int bar_chan_pf_init_spinlock(uint16_t pcie_id, uint64_t bar_base_addr) > +{ > + int lock_id = pcie_id_to_hard_lock(pcie_id, ZXDH_MSG_CHAN_END_RISC); > + > + zxdh_spinlock_unlock(lock_id, bar_base_addr + ZXDH_BAR0_SPINLOCK_OFFSET, > + bar_base_addr + ZXDH_HW_LABEL_OFFSET); > + lock_id = pcie_id_to_hard_lock(pcie_id, ZXDH_MSG_CHAN_END_VF); > + zxdh_spinlock_unlock(lock_id, bar_base_addr + ZXDH_BAR0_SPINLOCK_OFFSET, > + bar_base_addr + ZXDH_HW_LABEL_OFFSET); > + return 0; > +} > + > +int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev) > +{ > + struct zxdh_hw *hw = dev->data->dev_private; > + > + if (!hw->is_pf) > + return 0; > + return bar_chan_pf_init_spinlock(hw->pcie_id, (uint64_t)(hw- >>bar_addr[ZXDH_BAR0_INDEX])); > +} > + > +static rte_spinlock_t chan_lock; > Please move global variables to the top of the file; otherwise it is very easy to miss them. ^ permalink raw reply [flat|nested] 225+ messages in thread
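A minimal sketch of the revision the comments above ask for (illustrative; the actual next version may differ): the module state becomes file-scoped static, drops the redundant '= {0}' initializers, and the spinlock moves up next to the other globals at the top of zxdh_msg.c:

/* top of zxdh_msg.c: all module state static, zero-initialized by default */
static rte_spinlock_t chan_lock;
static struct zxdh_dev_stat g_dev_stat;
static struct zxdh_seqid_ring g_seqid_ring;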
* Re:[v9,4/9] net/zxdh: add msg chan and msg hwlock init 2024-11-01 6:21 ` [PATCH v9 4/9] net/zxdh: add msg chan and msg hwlock init Junlong Wang 2024-11-02 1:00 ` Ferruh Yigit @ 2024-11-04 2:47 ` Junlong Wang 1 sibling, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-11-04 2:47 UTC (permalink / raw) To: ferruh.yigit; +Cc: dev [-- Attachment #1.1.1: Type: text/plain, Size: 229 bytes --] >> err_zxdh_init: >> + zxdh_bar_msg_chan_exit(); >> > Should 'zxdh_bar_msg_chan_exit()' be called during zxdh_eth_dev_uninit()? Yes, it should; I forgot it. I will put it in zxdh_eth_dev_uninit(). Thanks. [-- Attachment #1.1.2: Type: text/html, Size: 475 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
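A rough sketch of where that call would land (a hypothetical zxdh_eth_dev_uninit(), not taken from this series; only the zxdh_bar_msg_chan_exit() call is what the reply above commits to):

static int zxdh_eth_dev_uninit(struct rte_eth_dev *eth_dev __rte_unused)
{
	/* Release the BAR msg channel state acquired in zxdh_eth_dev_init().
	 * zxdh_bar_msg_chan_exit() is refcounted via g_dev_stat.dev_cnt, so
	 * it only tears the resources down when the last port goes away.
	 */
	zxdh_bar_msg_chan_exit();
	return 0;
}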
* [PATCH v9 5/9] net/zxdh: add msg chan enable implementation 2024-11-01 6:21 ` [PATCH v9 0/9] net/zxdh: introduce net zxdh driver Junlong Wang ` (3 preceding siblings ...) 2024-11-01 6:21 ` [PATCH v9 4/9] net/zxdh: add msg chan and msg hwlock init Junlong Wang @ 2024-11-01 6:21 ` Junlong Wang 2024-11-01 6:21 ` [PATCH v9 6/9] net/zxdh: add zxdh get device backend infos Junlong Wang ` (7 subsequent siblings) 12 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-11-01 6:21 UTC (permalink / raw) To: dev; +Cc: wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 29161 bytes --] Add msg chan enable implementation to support sending messages to the backend (device side) to get information. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/zxdh_ethdev.c | 6 + drivers/net/zxdh/zxdh_ethdev.h | 12 + drivers/net/zxdh/zxdh_msg.c | 647 ++++++++++++++++++++++++++++++++- drivers/net/zxdh/zxdh_msg.h | 131 ++++++- 4 files changed, 790 insertions(+), 6 deletions(-) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index da454cdff3..21255b2190 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -97,6 +97,12 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) goto err_zxdh_init; } + ret = zxdh_msg_chan_enable(eth_dev); + if (ret != 0) { + PMD_INIT_LOG(ERR, "zxdh_msg_bar_chan_enable failed ret %d", ret); + goto err_zxdh_init; + } + return ret; err_zxdh_init: diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index 20ead56e44..7434cc15d7 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -28,10 +28,22 @@ extern "C" { #define ZXDH_RX_QUEUES_MAX 128U #define ZXDH_TX_QUEUES_MAX 128U +union zxdh_virport_num { + uint16_t vport; + struct { + uint16_t vfid:8; + uint16_t pfid:3; + uint16_t vf_flag:1; + uint16_t epid:3; + uint16_t direct_flag:1; + }; +}; + struct zxdh_hw { struct rte_eth_dev *eth_dev; struct zxdh_pci_common_cfg *common_cfg; struct zxdh_net_config *dev_cfg; + union zxdh_virport_num vport; uint64_t bar_addr[ZXDH_NUM_BARS]; uint64_t host_features; diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c index 9dcf99f1f7..4f6607fcb0 100644 --- a/drivers/net/zxdh/zxdh_msg.c +++ b/drivers/net/zxdh/zxdh_msg.c @@ -17,6 +17,7 @@ #define ZXDH_REPS_INFO_FLAG_USABLE 0x00 #define ZXDH_BAR_SEQID_NUM_MAX 256 +#define ZXDH_REPS_INFO_FLAG_USED 0xa0 #define ZXDH_PCIEID_IS_PF_MASK (0x0800) #define ZXDH_PCIEID_PF_IDX_MASK (0x0700) #define ZXDH_PCIEID_VF_IDX_MASK (0x00ff) #define ZXDH_PCIEID_EP_IDX_MASK (0x7000) /* PCIEID bit field offset */ #define ZXDH_PCIEID_PF_IDX_OFFSET (8) #define ZXDH_PCIEID_EP_IDX_OFFSET (12) #define ZXDH_MULTIPLY_BY_8(x) ((x)
ZXDH_BAR_MSG_CHAN_USED 1 + +#define ZXDH_BAR_MSG_POL_MASK (0x10) +#define ZXDH_BAR_MSG_POL_OFFSET (4) + +#define ZXDH_BAR_ALIGN_WORD_MASK 0xfffffffc +#define ZXDH_BAR_MSG_VALID_MASK 1 +#define ZXDH_BAR_MSG_VALID_OFFSET 0 + +#define ZXDH_BAR_PF_NUM 7 +#define ZXDH_BAR_VF_NUM 256 +#define ZXDH_BAR_INDEX_PF_TO_VF 0 +#define ZXDH_BAR_INDEX_MPF_TO_MPF 0xff +#define ZXDH_BAR_INDEX_MPF_TO_PFVF 0 +#define ZXDH_BAR_INDEX_PFVF_TO_MPF 0 + +#define ZXDH_MAX_HARD_SPINLOCK_ASK_TIMES (1000) +#define ZXDH_SPINLOCK_POLLING_SPAN_US (100) + +#define ZXDH_BAR_MSG_SRC_NUM 3 +#define ZXDH_BAR_MSG_SRC_MPF 0 +#define ZXDH_BAR_MSG_SRC_PF 1 +#define ZXDH_BAR_MSG_SRC_VF 2 +#define ZXDH_BAR_MSG_SRC_ERR 0xff +#define ZXDH_BAR_MSG_DST_NUM 3 +#define ZXDH_BAR_MSG_DST_RISC 0 +#define ZXDH_BAR_MSG_DST_MPF 2 +#define ZXDH_BAR_MSG_DST_PFVF 1 +#define ZXDH_BAR_MSG_DST_ERR 0xff + +#define ZXDH_LOCK_TYPE_HARD (1) +#define ZXDH_LOCK_TYPE_SOFT (0) +#define ZXDH_BAR_INDEX_TO_RISC 0 + +#define ZXDH_BAR_CHAN_INDEX_SEND 0 +#define ZXDH_BAR_CHAN_INDEX_RECV 1 + +uint8_t subchan_id_tbl[ZXDH_BAR_MSG_SRC_NUM][ZXDH_BAR_MSG_DST_NUM] = { + {ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_SEND}, + {ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_RECV}, + {ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_RECV, ZXDH_BAR_CHAN_INDEX_RECV} +}; + +uint8_t chan_id_tbl[ZXDH_BAR_MSG_SRC_NUM][ZXDH_BAR_MSG_DST_NUM] = { + {ZXDH_BAR_INDEX_TO_RISC, ZXDH_BAR_INDEX_MPF_TO_PFVF, ZXDH_BAR_INDEX_MPF_TO_MPF}, + {ZXDH_BAR_INDEX_TO_RISC, ZXDH_BAR_INDEX_PF_TO_VF, ZXDH_BAR_INDEX_PFVF_TO_MPF}, + {ZXDH_BAR_INDEX_TO_RISC, ZXDH_BAR_INDEX_PF_TO_VF, ZXDH_BAR_INDEX_PFVF_TO_MPF} +}; + +uint8_t lock_type_tbl[ZXDH_BAR_MSG_SRC_NUM][ZXDH_BAR_MSG_DST_NUM] = { + {ZXDH_LOCK_TYPE_HARD, ZXDH_LOCK_TYPE_HARD, ZXDH_LOCK_TYPE_HARD}, + {ZXDH_LOCK_TYPE_SOFT, ZXDH_LOCK_TYPE_SOFT, ZXDH_LOCK_TYPE_HARD}, + {ZXDH_LOCK_TYPE_HARD, ZXDH_LOCK_TYPE_HARD, ZXDH_LOCK_TYPE_HARD} +}; + struct zxdh_dev_stat { - bool is_mpf_scanned; - bool is_res_init; + uint8_t is_mpf_scanned; + uint8_t is_res_init; int16_t dev_cnt; /* probe cnt */ }; struct zxdh_dev_stat g_dev_stat = {0}; @@ -60,7 +134,9 @@ struct zxdh_seqid_ring { }; struct zxdh_seqid_ring g_seqid_ring = {0}; -static uint16_t pcie_id_to_hard_lock(uint16_t src_pcieid, uint8_t dst) +static uint8_t tmp_msg_header[ZXDH_BAR_MSG_ADDR_CHAN_INTERVAL]; + +static uint16_t zxdh_pcie_id_to_hard_lock(uint16_t src_pcieid, uint8_t dst) { uint16_t lock_id = 0; uint16_t pf_idx = (src_pcieid & ZXDH_PCIEID_PF_IDX_MASK) >> ZXDH_PCIEID_PF_IDX_OFFSET; @@ -97,6 +173,33 @@ static void spinlock_write(uint64_t virt_lock_addr, uint32_t lock_id, uint8_t da *(volatile uint8_t *)((uint64_t)virt_lock_addr + (uint64_t)lock_id) = data; } +static uint8_t spinlock_read(uint64_t virt_lock_addr, uint32_t lock_id) +{ + return *(volatile uint8_t *)((uint64_t)virt_lock_addr + (uint64_t)lock_id); +} + +static int32_t zxdh_spinlock_lock(uint32_t virt_lock_id, uint64_t virt_addr, + uint64_t label_addr, uint16_t primary_id) +{ + uint32_t lock_rd_cnt = 0; + + do { + /* read to lock */ + uint8_t spl_val = spinlock_read(virt_addr, virt_lock_id); + + if (spl_val == 0) { + label_write((uint64_t)label_addr, virt_lock_id, primary_id); + break; + } + rte_delay_us_block(ZXDH_SPINLOCK_POLLING_SPAN_US); + lock_rd_cnt++; + } while (lock_rd_cnt < ZXDH_MAX_HARD_SPINLOCK_ASK_TIMES); + if (lock_rd_cnt >= ZXDH_MAX_HARD_SPINLOCK_ASK_TIMES) + return -1; + + return 0; +} + static int32_t zxdh_spinlock_unlock(uint32_t virt_lock_id, uint64_t virt_addr, uint64_t 
label_addr) { label_write((uint64_t)label_addr, virt_lock_id, 0); @@ -109,11 +212,11 @@ static int32_t zxdh_spinlock_unlock(uint32_t virt_lock_id, uint64_t virt_addr, u */ static int bar_chan_pf_init_spinlock(uint16_t pcie_id, uint64_t bar_base_addr) { - int lock_id = pcie_id_to_hard_lock(pcie_id, ZXDH_MSG_CHAN_END_RISC); + int lock_id = zxdh_pcie_id_to_hard_lock(pcie_id, ZXDH_MSG_CHAN_END_RISC); zxdh_spinlock_unlock(lock_id, bar_base_addr + ZXDH_BAR0_SPINLOCK_OFFSET, bar_base_addr + ZXDH_HW_LABEL_OFFSET); - lock_id = pcie_id_to_hard_lock(pcie_id, ZXDH_MSG_CHAN_END_VF); + lock_id = zxdh_pcie_id_to_hard_lock(pcie_id, ZXDH_MSG_CHAN_END_VF); zxdh_spinlock_unlock(lock_id, bar_base_addr + ZXDH_BAR0_SPINLOCK_OFFSET, bar_base_addr + ZXDH_HW_LABEL_OFFSET); return 0; @@ -159,3 +262,537 @@ int zxdh_bar_msg_chan_exit(void) g_dev_stat.is_res_init = false; return ZXDH_BAR_MSG_OK; } + +static int zxdh_bar_chan_msgid_allocate(uint16_t *msgid) +{ + struct zxdh_seqid_item *seqid_reps_info = NULL; + + rte_spinlock_lock(&g_seqid_ring.lock); + uint16_t g_id = g_seqid_ring.cur_id; + uint16_t count = 0; + int rc = 0; + + do { + count++; + ++g_id; + g_id %= ZXDH_BAR_SEQID_NUM_MAX; + seqid_reps_info = &g_seqid_ring.reps_info_tbl[g_id]; + } while ((seqid_reps_info->flag != ZXDH_REPS_INFO_FLAG_USABLE) && + (count < ZXDH_BAR_SEQID_NUM_MAX)); + + if (count >= ZXDH_BAR_SEQID_NUM_MAX) { + rc = -1; + goto out; + } + seqid_reps_info->flag = ZXDH_REPS_INFO_FLAG_USED; + g_seqid_ring.cur_id = g_id; + *msgid = g_id; + rc = ZXDH_BAR_MSG_OK; + +out: + rte_spinlock_unlock(&g_seqid_ring.lock); + return rc; +} + +static uint16_t zxdh_bar_chan_save_recv_info(struct zxdh_msg_recviver_mem *result, uint16_t *msg_id) +{ + int ret = zxdh_bar_chan_msgid_allocate(msg_id); + + if (ret != ZXDH_BAR_MSG_OK) + return ZXDH_BAR_MSG_ERR_MSGID; + + PMD_MSG_LOG(DEBUG, "allocate msg_id: %u", *msg_id); + struct zxdh_seqid_item *reps_info = &g_seqid_ring.reps_info_tbl[*msg_id]; + + reps_info->reps_addr = result->recv_buffer; + reps_info->buffer_len = result->buffer_len; + return ZXDH_BAR_MSG_OK; +} + +static uint8_t zxdh_bar_msg_src_index_trans(uint8_t src) +{ + uint8_t src_index = 0; + + switch (src) { + case ZXDH_MSG_CHAN_END_MPF: + src_index = ZXDH_BAR_MSG_SRC_MPF; + break; + case ZXDH_MSG_CHAN_END_PF: + src_index = ZXDH_BAR_MSG_SRC_PF; + break; + case ZXDH_MSG_CHAN_END_VF: + src_index = ZXDH_BAR_MSG_SRC_VF; + break; + default: + src_index = ZXDH_BAR_MSG_SRC_ERR; + break; + } + return src_index; +} + +static uint8_t zxdh_bar_msg_dst_index_trans(uint8_t dst) +{ + uint8_t dst_index = 0; + + switch (dst) { + case ZXDH_MSG_CHAN_END_MPF: + dst_index = ZXDH_BAR_MSG_DST_MPF; + break; + case ZXDH_MSG_CHAN_END_PF: + dst_index = ZXDH_BAR_MSG_DST_PFVF; + break; + case ZXDH_MSG_CHAN_END_VF: + dst_index = ZXDH_BAR_MSG_DST_PFVF; + break; + case ZXDH_MSG_CHAN_END_RISC: + dst_index = ZXDH_BAR_MSG_DST_RISC; + break; + default: + dst_index = ZXDH_BAR_MSG_SRC_ERR; + break; + } + return dst_index; +} + +static int zxdh_bar_chan_send_para_check(struct zxdh_pci_bar_msg *in, + struct zxdh_msg_recviver_mem *result) +{ + uint8_t src_index = 0; + uint8_t dst_index = 0; + + if (in == NULL || result == NULL) { + PMD_MSG_LOG(ERR, "send para ERR: null para."); + return ZXDH_BAR_MSG_ERR_NULL_PARA; + } + src_index = zxdh_bar_msg_src_index_trans(in->src); + dst_index = zxdh_bar_msg_dst_index_trans(in->dst); + + if (src_index == ZXDH_BAR_MSG_SRC_ERR || dst_index == ZXDH_BAR_MSG_DST_ERR) { + PMD_MSG_LOG(ERR, "send para ERR: chan doesn't exist."); + return ZXDH_BAR_MSG_ERR_TYPE; + } 
+ if (in->module_id >= ZXDH_BAR_MSG_MODULE_NUM) { + PMD_MSG_LOG(ERR, "send para ERR: invalid module_id: %d.", in->module_id); + return ZXDH_BAR_MSG_ERR_MODULE; + } + if (in->payload_addr == NULL) { + PMD_MSG_LOG(ERR, "send para ERR: null message."); + return ZXDH_BAR_MSG_ERR_BODY_NULL; + } + if (in->payload_len > ZXDH_BAR_MSG_PAYLOAD_MAX_LEN) { + PMD_MSG_LOG(ERR, "send para ERR: len %d is too long.", in->payload_len); + return ZXDH_BAR_MSG_ERR_LEN; + } + if (in->virt_addr == 0 || result->recv_buffer == NULL) { + PMD_MSG_LOG(ERR, "send para ERR: virt_addr or recv_buffer is NULL."); + return ZXDH_BAR_MSG_ERR_VIRTADDR_NULL; + } + if (result->buffer_len < ZXDH_REPS_HEADER_PAYLOAD_OFFSET) + PMD_MSG_LOG(ERR, "recv buffer len is short than minimal 4 bytes"); + + return ZXDH_BAR_MSG_OK; +} + +static uint64_t zxdh_subchan_addr_cal(uint64_t virt_addr, uint8_t chan_id, uint8_t subchan_id) +{ + return virt_addr + (2 * chan_id + subchan_id) * ZXDH_BAR_MSG_ADDR_CHAN_INTERVAL; +} + +static uint16_t zxdh_bar_chan_subchan_addr_get(struct zxdh_pci_bar_msg *in, uint64_t *subchan_addr) +{ + uint8_t src_index = zxdh_bar_msg_src_index_trans(in->src); + uint8_t dst_index = zxdh_bar_msg_dst_index_trans(in->dst); + uint16_t chan_id = chan_id_tbl[src_index][dst_index]; + uint16_t subchan_id = subchan_id_tbl[src_index][dst_index]; + + *subchan_addr = zxdh_subchan_addr_cal(in->virt_addr, chan_id, subchan_id); + return ZXDH_BAR_MSG_OK; +} + +static int zxdh_bar_hard_lock(uint16_t src_pcieid, uint8_t dst, uint64_t virt_addr) +{ + int ret = 0; + uint16_t lockid = zxdh_pcie_id_to_hard_lock(src_pcieid, dst); + + PMD_MSG_LOG(DEBUG, "dev pcieid: 0x%x lock, get hardlockid: %u", src_pcieid, lockid); + if (dst == ZXDH_MSG_CHAN_END_RISC) + ret = zxdh_spinlock_lock(lockid, virt_addr + ZXDH_CHAN_RISC_SPINLOCK_OFFSET, + virt_addr + ZXDH_CHAN_RISC_LABEL_OFFSET, + src_pcieid | ZXDH_LOCK_PRIMARY_ID_MASK); + else + ret = zxdh_spinlock_lock(lockid, virt_addr + ZXDH_CHAN_PFVF_SPINLOCK_OFFSET, + virt_addr + ZXDH_CHAN_PFVF_LABEL_OFFSET, + src_pcieid | ZXDH_LOCK_PRIMARY_ID_MASK); + + return ret; +} + +static void zxdh_bar_hard_unlock(uint16_t src_pcieid, uint8_t dst, uint64_t virt_addr) +{ + uint16_t lockid = zxdh_pcie_id_to_hard_lock(src_pcieid, dst); + + PMD_MSG_LOG(DEBUG, "dev pcieid: 0x%x unlock, get hardlockid: %u", src_pcieid, lockid); + if (dst == ZXDH_MSG_CHAN_END_RISC) + zxdh_spinlock_unlock(lockid, virt_addr + ZXDH_CHAN_RISC_SPINLOCK_OFFSET, + virt_addr + ZXDH_CHAN_RISC_LABEL_OFFSET); + else + zxdh_spinlock_unlock(lockid, virt_addr + ZXDH_CHAN_PFVF_SPINLOCK_OFFSET, + virt_addr + ZXDH_CHAN_PFVF_LABEL_OFFSET); +} + +static int zxdh_bar_chan_lock(uint8_t src, uint8_t dst, uint16_t src_pcieid, uint64_t virt_addr) +{ + int ret = 0; + uint8_t src_index = zxdh_bar_msg_src_index_trans(src); + uint8_t dst_index = zxdh_bar_msg_dst_index_trans(dst); + + if (src_index == ZXDH_BAR_MSG_SRC_ERR || dst_index == ZXDH_BAR_MSG_DST_ERR) { + PMD_MSG_LOG(ERR, "lock ERR: chan doesn't exist."); + return ZXDH_BAR_MSG_ERR_TYPE; + } + + ret = zxdh_bar_hard_lock(src_pcieid, dst, virt_addr); + if (ret != 0) + PMD_MSG_LOG(ERR, "dev: 0x%x failed to lock.", src_pcieid); + + return ret; +} + +static int zxdh_bar_chan_unlock(uint8_t src, uint8_t dst, uint16_t src_pcieid, uint64_t virt_addr) +{ + uint8_t src_index = zxdh_bar_msg_src_index_trans(src); + uint8_t dst_index = zxdh_bar_msg_dst_index_trans(dst); + + if (src_index == ZXDH_BAR_MSG_SRC_ERR || dst_index == ZXDH_BAR_MSG_DST_ERR) { + PMD_MSG_LOG(ERR, "unlock ERR: chan doesn't exist."); + return 
ZXDH_BAR_MSG_ERR_TYPE; + } + + zxdh_bar_hard_unlock(src_pcieid, dst, virt_addr); + + return ZXDH_BAR_MSG_OK; +} + +static void zxdh_bar_chan_msgid_free(uint16_t msg_id) +{ + struct zxdh_seqid_item *seqid_reps_info = &g_seqid_ring.reps_info_tbl[msg_id]; + + rte_spinlock_lock(&g_seqid_ring.lock); + seqid_reps_info->flag = ZXDH_REPS_INFO_FLAG_USABLE; + PMD_MSG_LOG(DEBUG, "free msg_id: %u", msg_id); + rte_spinlock_unlock(&g_seqid_ring.lock); +} + +static int zxdh_bar_chan_reg_write(uint64_t subchan_addr, uint32_t offset, uint32_t data) +{ + uint32_t algin_offset = (offset & ZXDH_BAR_ALIGN_WORD_MASK); + + if (unlikely(algin_offset >= ZXDH_BAR_MSG_ADDR_CHAN_INTERVAL)) { + PMD_MSG_LOG(ERR, "algin_offset exceeds channel size!"); + return -1; + } + *(uint32_t *)(subchan_addr + algin_offset) = data; + return 0; +} + +static int zxdh_bar_chan_reg_read(uint64_t subchan_addr, uint32_t offset, uint32_t *pdata) +{ + uint32_t algin_offset = (offset & ZXDH_BAR_ALIGN_WORD_MASK); + + if (unlikely(algin_offset >= ZXDH_BAR_MSG_ADDR_CHAN_INTERVAL)) { + PMD_MSG_LOG(ERR, "algin_offset exceeds channel size!"); + return -1; + } + *pdata = *(uint32_t *)(subchan_addr + algin_offset); + return 0; +} + +static uint16_t zxdh_bar_chan_msg_header_set(uint64_t subchan_addr, + struct zxdh_bar_msg_header *msg_header) +{ + uint32_t *data = (uint32_t *)msg_header; + uint16_t idx; + + for (idx = 0; idx < (ZXDH_BAR_MSG_PLAYLOAD_OFFSET >> 2); idx++) + zxdh_bar_chan_reg_write(subchan_addr, idx * 4, *(data + idx)); + + return ZXDH_BAR_MSG_OK; +} + +static uint16_t zxdh_bar_chan_msg_header_get(uint64_t subchan_addr, + struct zxdh_bar_msg_header *msg_header) +{ + uint32_t *data = (uint32_t *)msg_header; + uint16_t idx; + + for (idx = 0; idx < (ZXDH_BAR_MSG_PLAYLOAD_OFFSET >> 2); idx++) + zxdh_bar_chan_reg_read(subchan_addr, idx * 4, data + idx); + + return ZXDH_BAR_MSG_OK; +} + +static uint16_t zxdh_bar_chan_msg_payload_set(uint64_t subchan_addr, uint8_t *msg, uint16_t len) +{ + uint32_t *data = (uint32_t *)msg; + uint32_t count = (len >> 2); + uint32_t ix; + + for (ix = 0; ix < count; ix++) + zxdh_bar_chan_reg_write(subchan_addr, 4 * ix + + ZXDH_BAR_MSG_PLAYLOAD_OFFSET, *(data + ix)); + + uint32_t remain = (len & 0x3); + + if (remain) { + uint32_t remain_data = 0; + + for (ix = 0; ix < remain; ix++) + remain_data |= *((uint8_t *)(msg + len - remain + ix)) << (8 * ix); + + zxdh_bar_chan_reg_write(subchan_addr, 4 * count + + ZXDH_BAR_MSG_PLAYLOAD_OFFSET, remain_data); + } + return ZXDH_BAR_MSG_OK; +} + +static uint16_t zxdh_bar_chan_msg_payload_get(uint64_t subchan_addr, uint8_t *msg, uint16_t len) +{ + uint32_t *data = (uint32_t *)msg; + uint32_t count = (len >> 2); + uint32_t ix; + + for (ix = 0; ix < count; ix++) + zxdh_bar_chan_reg_read(subchan_addr, 4 * ix + + ZXDH_BAR_MSG_PLAYLOAD_OFFSET, (data + ix)); + + uint32_t remain = (len & 0x3); + + if (remain) { + uint32_t remain_data = 0; + + zxdh_bar_chan_reg_read(subchan_addr, 4 * count + + ZXDH_BAR_MSG_PLAYLOAD_OFFSET, &remain_data); + for (ix = 0; ix < remain; ix++) + *((uint8_t *)(msg + (len - remain + ix))) = remain_data >> (8 * ix); + } + return ZXDH_BAR_MSG_OK; +} + +static uint16_t zxdh_bar_chan_msg_valid_set(uint64_t subchan_addr, uint8_t valid_label) +{ + uint32_t data; + + zxdh_bar_chan_reg_read(subchan_addr, ZXDH_BAR_MSG_VALID_OFFSET, &data); + data &= (~ZXDH_BAR_MSG_VALID_MASK); + data |= (uint32_t)valid_label; + zxdh_bar_chan_reg_write(subchan_addr, ZXDH_BAR_MSG_VALID_OFFSET, data); + return ZXDH_BAR_MSG_OK; +} + +static uint16_t zxdh_bar_chan_msg_send(uint64_t 
subchan_addr, + void *payload_addr, + uint16_t payload_len, + struct zxdh_bar_msg_header *msg_header) +{ + uint16_t ret = 0; + ret = zxdh_bar_chan_msg_header_set(subchan_addr, msg_header); + + ret = zxdh_bar_chan_msg_header_get(subchan_addr, + (struct zxdh_bar_msg_header *)tmp_msg_header); + + ret = zxdh_bar_chan_msg_payload_set(subchan_addr, + (uint8_t *)(payload_addr), payload_len); + + ret = zxdh_bar_chan_msg_payload_get(subchan_addr, + tmp_msg_header, payload_len); + + ret = zxdh_bar_chan_msg_valid_set(subchan_addr, ZXDH_BAR_MSG_CHAN_USED); + return ret; +} + +static uint16_t zxdh_bar_msg_valid_stat_get(uint64_t subchan_addr) +{ + uint32_t data; + + zxdh_bar_chan_reg_read(subchan_addr, ZXDH_BAR_MSG_VALID_OFFSET, &data); + if (ZXDH_BAR_MSG_CHAN_USABLE == (data & ZXDH_BAR_MSG_VALID_MASK)) + return ZXDH_BAR_MSG_CHAN_USABLE; + + return ZXDH_BAR_MSG_CHAN_USED; +} + +static uint16_t zxdh_bar_chan_msg_poltag_set(uint64_t subchan_addr, uint8_t label) +{ + uint32_t data; + + zxdh_bar_chan_reg_read(subchan_addr, ZXDH_BAR_MSG_VALID_OFFSET, &data); + data &= (~(uint32_t)ZXDH_BAR_MSG_POL_MASK); + data |= ((uint32_t)label << ZXDH_BAR_MSG_POL_OFFSET); + zxdh_bar_chan_reg_write(subchan_addr, ZXDH_BAR_MSG_VALID_OFFSET, data); + return ZXDH_BAR_MSG_OK; +} + +static uint16_t zxdh_bar_chan_sync_msg_reps_get(uint64_t subchan_addr, + uint64_t recv_buffer, uint16_t buffer_len) +{ + struct zxdh_bar_msg_header msg_header = {0}; + uint16_t msg_id = 0; + uint16_t msg_len = 0; + + zxdh_bar_chan_msg_header_get(subchan_addr, &msg_header); + msg_id = msg_header.msg_id; + struct zxdh_seqid_item *reps_info = &g_seqid_ring.reps_info_tbl[msg_id]; + + if (reps_info->flag != ZXDH_REPS_INFO_FLAG_USED) { + PMD_MSG_LOG(ERR, "msg_id %u unused", msg_id); + return ZXDH_BAR_MSG_ERR_REPLY; + } + msg_len = msg_header.len; + + if (msg_len > buffer_len - 4) { + PMD_MSG_LOG(ERR, "recv buffer len is: %u, but reply msg len is: %u", + buffer_len, msg_len + 4); + return ZXDH_BAR_MSG_ERR_REPSBUFF_LEN; + } + uint8_t *recv_msg = (uint8_t *)recv_buffer; + + zxdh_bar_chan_msg_payload_get(subchan_addr, + recv_msg + ZXDH_REPS_HEADER_PAYLOAD_OFFSET, msg_len); + *(uint16_t *)(recv_msg + ZXDH_REPS_HEADER_LEN_OFFSET) = msg_len; + *recv_msg = ZXDH_REPS_HEADER_REPLYED; /* set reps's valid */ + return ZXDH_BAR_MSG_OK; +} + +int zxdh_bar_chan_sync_msg_send(struct zxdh_pci_bar_msg *in, struct zxdh_msg_recviver_mem *result) +{ + struct zxdh_bar_msg_header msg_header = {0}; + uint16_t seq_id = 0; + uint64_t subchan_addr = 0; + uint32_t time_out_cnt = 0; + uint16_t valid = 0; + int ret = 0; + + ret = zxdh_bar_chan_send_para_check(in, result); + if (ret != ZXDH_BAR_MSG_OK) + goto exit; + + ret = zxdh_bar_chan_save_recv_info(result, &seq_id); + if (ret != ZXDH_BAR_MSG_OK) + goto exit; + + zxdh_bar_chan_subchan_addr_get(in, &subchan_addr); + + msg_header.sync = ZXDH_BAR_CHAN_MSG_SYNC; + msg_header.emec = in->emec; + msg_header.usr = 0; + msg_header.rsv = 0; + msg_header.module_id = in->module_id; + msg_header.len = in->payload_len; + msg_header.msg_id = seq_id; + msg_header.src_pcieid = in->src_pcieid; + msg_header.dst_pcieid = in->dst_pcieid; + + ret = zxdh_bar_chan_lock(in->src, in->dst, in->src_pcieid, in->virt_addr); + if (ret != ZXDH_BAR_MSG_OK) { + zxdh_bar_chan_msgid_free(seq_id); + goto exit; + } + zxdh_bar_chan_msg_send(subchan_addr, in->payload_addr, in->payload_len, &msg_header); + + do { + rte_delay_us_block(ZXDH_BAR_MSG_POLLING_SPAN); + valid = zxdh_bar_msg_valid_stat_get(subchan_addr); + ++time_out_cnt; + } while ((time_out_cnt < 
ZXDH_BAR_MSG_TIMEOUT_TH) && (valid == ZXDH_BAR_MSG_CHAN_USED)); + + if (time_out_cnt == ZXDH_BAR_MSG_TIMEOUT_TH && valid != ZXDH_BAR_MSG_CHAN_USABLE) { + zxdh_bar_chan_msg_valid_set(subchan_addr, ZXDH_BAR_MSG_CHAN_USABLE); + zxdh_bar_chan_msg_poltag_set(subchan_addr, 0); + PMD_MSG_LOG(ERR, "BAR MSG ERR: chan type time out."); + ret = ZXDH_BAR_MSG_ERR_TIME_OUT; + } else { + ret = zxdh_bar_chan_sync_msg_reps_get(subchan_addr, + (uint64_t)result->recv_buffer, result->buffer_len); + } + zxdh_bar_chan_msgid_free(seq_id); + zxdh_bar_chan_unlock(in->src, in->dst, in->src_pcieid, in->virt_addr); + +exit: + return ret; +} + +static int bar_get_sum(uint8_t *ptr, uint8_t len) +{ + uint64_t sum = 0; + int idx; + + for (idx = 0; idx < len; idx++) + sum += *(ptr + idx); + + return (uint16_t)sum; +} + +static int zxdh_bar_chan_enable(struct zxdh_msix_para *para, uint16_t *vport) +{ + struct zxdh_bar_recv_msg recv_msg = {0}; + int ret = 0; + int check_token = 0; + int sum_res = 0; + + if (!para) + return ZXDH_BAR_MSG_ERR_NULL; + + struct zxdh_msix_msg msix_msg = { + .pcie_id = para->pcie_id, + .vector_risc = para->vector_risc, + .vector_pfvf = para->vector_pfvf, + .vector_mpf = para->vector_mpf, + }; + struct zxdh_pci_bar_msg in = { + .virt_addr = para->virt_addr, + .payload_addr = &msix_msg, + .payload_len = sizeof(msix_msg), + .emec = 0, + .src = para->driver_type, + .dst = ZXDH_MSG_CHAN_END_RISC, + .module_id = ZXDH_BAR_MODULE_MISX, + .src_pcieid = para->pcie_id, + .dst_pcieid = 0, + .usr = 0, + }; + + struct zxdh_msg_recviver_mem result = { + .recv_buffer = &recv_msg, + .buffer_len = sizeof(recv_msg), + }; + + ret = zxdh_bar_chan_sync_msg_send(&in, &result); + if (ret != ZXDH_BAR_MSG_OK) + return -ret; + + check_token = recv_msg.msix_reps.check; + sum_res = bar_get_sum((uint8_t *)&msix_msg, sizeof(msix_msg)); + + if (check_token != sum_res) { + PMD_MSG_LOG(ERR, "expect token: 0x%x, get token: 0x%x.", sum_res, check_token); + return ZXDH_BAR_MSG_ERR_REPLY; + } + *vport = recv_msg.msix_reps.vport; + PMD_MSG_LOG(DEBUG, "vport of pcieid: 0x%x get success.", para->pcie_id); + return ZXDH_BAR_MSG_OK; +} + +int zxdh_msg_chan_enable(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msix_para misx_info = { + .vector_risc = ZXDH_MSIX_FROM_RISCV, + .vector_pfvf = ZXDH_MSIX_FROM_PFVF, + .vector_mpf = ZXDH_MSIX_FROM_MPF, + .pcie_id = hw->pcie_id, + .driver_type = hw->is_pf ? 
ZXDH_MSG_CHAN_END_PF : ZXDH_MSG_CHAN_END_VF, + .virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_CTRLCH_OFFSET), + }; + + return zxdh_bar_chan_enable(&misx_info, &hw->vport.vport); +} diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index a0b46c900a..bbacbbc45e 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -13,7 +13,22 @@ extern "C" { #endif -#define ZXDH_BAR0_INDEX 0 +#define ZXDH_BAR0_INDEX 0 +#define ZXDH_CTRLCH_OFFSET (0x2000) + +#define ZXDH_MSIX_INTR_MSG_VEC_BASE 1 + +#define ZXDH_BAR_MSG_POLLING_SPAN 100 +#define ZXDH_BAR_MSG_POLL_CNT_PER_MS (1 * 1000 / ZXDH_BAR_MSG_POLLING_SPAN) +#define ZXDH_BAR_MSG_POLL_CNT_PER_S (1 * 1000 * 1000 / ZXDH_BAR_MSG_POLLING_SPAN) +#define ZXDH_BAR_MSG_TIMEOUT_TH (10 * 1000 * 1000 / ZXDH_BAR_MSG_POLLING_SPAN) + +#define ZXDH_BAR_CHAN_MSG_SYNC 0 + +#define ZXDH_BAR_MSG_ADDR_CHAN_INTERVAL (2 * 1024) /* channel size */ +#define ZXDH_BAR_MSG_PLAYLOAD_OFFSET (sizeof(struct zxdh_bar_msg_header)) +#define ZXDH_BAR_MSG_PAYLOAD_MAX_LEN \ + (ZXDH_BAR_MSG_ADDR_CHAN_INTERVAL - sizeof(struct zxdh_bar_msg_header)) enum ZXDH_DRIVER_TYPE { ZXDH_MSG_CHAN_END_MPF = 0, @@ -22,6 +37,13 @@ enum ZXDH_DRIVER_TYPE { ZXDH_MSG_CHAN_END_RISC, }; +enum ZXDH_MSG_VEC { + ZXDH_MSIX_FROM_PFVF = ZXDH_MSIX_INTR_MSG_VEC_BASE, + ZXDH_MSIX_FROM_MPF, + ZXDH_MSIX_FROM_RISCV, + ZXDH_MSG_VEC_NUM, +}; + enum ZXDH_BAR_MSG_RTN { ZXDH_BAR_MSG_OK = 0, ZXDH_BAR_MSG_ERR_MSGID, @@ -56,10 +78,117 @@ enum ZXDH_BAR_MSG_RTN { ZXDH_BAR_MSG_ERR_SOCKET, /* netlink sockte err */ }; +enum ZXDH_BAR_MODULE_ID { + ZXDH_BAR_MODULE_DBG = 0, /* 0: debug */ + ZXDH_BAR_MODULE_TBL, /* 1: resource table */ + ZXDH_BAR_MODULE_MISX, /* 2: config msix */ + ZXDH_BAR_MODULE_SDA, /* 3: */ + ZXDH_BAR_MODULE_RDMA, /* 4: */ + ZXDH_BAR_MODULE_DEMO, /* 5: channel test */ + ZXDH_BAR_MODULE_SMMU, /* 6: */ + ZXDH_BAR_MODULE_MAC, /* 7: mac rx/tx stats */ + ZXDH_BAR_MODULE_VDPA, /* 8: vdpa live migration */ + ZXDH_BAR_MODULE_VQM, /* 9: vqm live migration */ + ZXDH_BAR_MODULE_NP, /* 10: vf msg callback np */ + ZXDH_BAR_MODULE_VPORT, /* 11: get vport */ + ZXDH_BAR_MODULE_BDF, /* 12: get bdf */ + ZXDH_BAR_MODULE_RISC_READY, /* 13: */ + ZXDH_BAR_MODULE_REVERSE, /* 14: byte stream reverse */ + ZXDH_BAR_MDOULE_NVME, /* 15: */ + ZXDH_BAR_MDOULE_NPSDK, /* 16: */ + ZXDH_BAR_MODULE_NP_TODO, /* 17: */ + ZXDH_MODULE_BAR_MSG_TO_PF, /* 18: */ + ZXDH_MODULE_BAR_MSG_TO_VF, /* 19: */ + + ZXDH_MODULE_FLASH = 32, + ZXDH_BAR_MODULE_OFFSET_GET = 33, + ZXDH_BAR_EVENT_OVS_WITH_VCB = 36, + + ZXDH_BAR_MSG_MODULE_NUM = 100, +}; + +struct zxdh_msix_para { + uint16_t pcie_id; + uint16_t vector_risc; + uint16_t vector_pfvf; + uint16_t vector_mpf; + uint64_t virt_addr; + uint16_t driver_type; /* refer to DRIVER_TYPE */ +}; + +struct zxdh_msix_msg { + uint16_t pcie_id; + uint16_t vector_risc; + uint16_t vector_pfvf; + uint16_t vector_mpf; +}; + +struct zxdh_pci_bar_msg { + uint64_t virt_addr; /* bar addr */ + void *payload_addr; + uint16_t payload_len; + uint16_t emec; + uint16_t src; /* refer to BAR_DRIVER_TYPE */ + uint16_t dst; /* refer to BAR_DRIVER_TYPE */ + uint16_t module_id; + uint16_t src_pcieid; + uint16_t dst_pcieid; + uint16_t usr; +}; + +struct zxdh_bar_msix_reps { + uint16_t pcie_id; + uint16_t check; + uint16_t vport; + uint16_t rsv; +} __rte_packed; + +struct zxdh_bar_offset_reps { + uint16_t check; + uint16_t rsv; + uint32_t offset; + uint32_t length; +} __rte_packed; + +struct zxdh_bar_recv_msg { + uint8_t reps_ok; + uint16_t reps_len; + uint8_t rsv; + /* */ + union { + struct 
zxdh_bar_msix_reps msix_reps; + struct zxdh_bar_offset_reps offset_reps; + } __rte_packed; +} __rte_packed; + +struct zxdh_msg_recviver_mem { + void *recv_buffer; /* first 4B is head, followed by payload */ + uint64_t buffer_len; +}; + +struct zxdh_bar_msg_header { + uint8_t valid : 1; /* used by __bar_chan_msg_valid_set/get */ + uint8_t sync : 1; + uint8_t emec : 1; /* emergency */ + uint8_t ack : 1; /* ack msg */ + uint8_t poll : 1; + uint8_t usr : 1; + uint8_t rsv; + uint16_t module_id; + uint16_t len; + uint16_t msg_id; + uint16_t src_pcieid; + uint16_t dst_pcieid; /* used in PF-->VF */ +}; + int zxdh_msg_chan_init(void); int zxdh_bar_msg_chan_exit(void); int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev); +int zxdh_msg_chan_enable(struct rte_eth_dev *dev); +int zxdh_bar_chan_sync_msg_send(struct zxdh_pci_bar_msg *in, + struct zxdh_msg_recviver_mem *result); + #ifdef __cplusplus } #endif -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 61773 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
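For reference, a minimal caller-side sketch of the sync-channel API defined above. This is not part of the submitted patch: the helper name, BAR address and payload are placeholders, and ZXDH_BAR_MODULE_DEMO is chosen only because the enum documents it as the channel-test module.

#include <stdint.h>
#include "zxdh_msg.h"

/* Hypothetical helper: send a small sync request to the RISC side and
 * wait for the reply over the BAR message channel. */
static int zxdh_demo_chan_ping(uint64_t chan_virt_addr, uint16_t pcie_id)
{
	uint8_t req[4] = {0x01, 0x02, 0x03, 0x04};	/* placeholder payload */
	uint8_t rsp[64] = {0};	/* first 4B is reply header, then payload */
	struct zxdh_pci_bar_msg in = {
		.virt_addr = chan_virt_addr,	/* e.g. BAR0 + ZXDH_CTRLCH_OFFSET */
		.payload_addr = req,
		.payload_len = sizeof(req),
		.src = ZXDH_MSG_CHAN_END_PF,
		.dst = ZXDH_MSG_CHAN_END_RISC,
		.module_id = ZXDH_BAR_MODULE_DEMO,	/* channel test module */
		.src_pcieid = pcie_id,
	};
	struct zxdh_msg_recviver_mem result = {
		.recv_buffer = rsp,
		.buffer_len = sizeof(rsp),
	};

	if (zxdh_bar_chan_sync_msg_send(&in, &result) != ZXDH_BAR_MSG_OK)
		return -1;

	/* On success the driver stamps rsp[0] with ZXDH_REPS_HEADER_REPLYED
	 * and stores the reply length at ZXDH_REPS_HEADER_LEN_OFFSET. */
	return 0;
}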
* [PATCH v9 6/9] net/zxdh: add zxdh get device backend infos 2024-11-01 6:21 ` [PATCH v9 0/9] net/zxdh: introduce net zxdh driver Junlong Wang ` (4 preceding siblings ...) 2024-11-01 6:21 ` [PATCH v9 5/9] net/zxdh: add msg chan enable implementation Junlong Wang @ 2024-11-01 6:21 ` Junlong Wang 2024-11-02 1:06 ` Ferruh Yigit 2024-11-04 3:30 ` [v9,6/9] " Junlong Wang 2024-11-01 6:21 ` [PATCH v9 7/9] net/zxdh: add configure zxdh intr implementation Junlong Wang ` (6 subsequent siblings) 12 siblings, 2 replies; 225+ messages in thread From: Junlong Wang @ 2024-11-01 6:21 UTC (permalink / raw) To: dev; +Cc: wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 12094 bytes --] Add zxdh get device backend infos, use msg chan to send msg get. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/meson.build | 1 + drivers/net/zxdh/zxdh_common.c | 250 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_common.h | 30 ++++ drivers/net/zxdh/zxdh_ethdev.c | 35 +++++ drivers/net/zxdh/zxdh_ethdev.h | 5 + drivers/net/zxdh/zxdh_msg.h | 21 +++ 6 files changed, 342 insertions(+) create mode 100644 drivers/net/zxdh/zxdh_common.c create mode 100644 drivers/net/zxdh/zxdh_common.h diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build index 2e0c8fddae..a16db47f89 100644 --- a/drivers/net/zxdh/meson.build +++ b/drivers/net/zxdh/meson.build @@ -17,4 +17,5 @@ sources = files( 'zxdh_ethdev.c', 'zxdh_pci.c', 'zxdh_msg.c', + 'zxdh_common.c', ) diff --git a/drivers/net/zxdh/zxdh_common.c b/drivers/net/zxdh/zxdh_common.c new file mode 100644 index 0000000000..34749588d5 --- /dev/null +++ b/drivers/net/zxdh/zxdh_common.c @@ -0,0 +1,250 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#include <stdint.h> +#include <string.h> + +#include <ethdev_driver.h> +#include <rte_malloc.h> +#include <rte_memcpy.h> + +#include "zxdh_ethdev.h" +#include "zxdh_logs.h" +#include "zxdh_msg.h" +#include "zxdh_common.h" + +#define ZXDH_MSG_RSP_SIZE_MAX 512 + +#define ZXDH_COMMON_TABLE_READ 0 +#define ZXDH_COMMON_TABLE_WRITE 1 + +#define ZXDH_COMMON_FIELD_PHYPORT 6 + +#define ZXDH_RSC_TBL_CONTENT_LEN_MAX (257 * 2) + +#define ZXDH_REPS_HEADER_OFFSET 4 +#define ZXDH_TBL_MSG_PRO_SUCCESS 0xaa + +struct zxdh_common_msg { + uint8_t type; /* 0:read table 1:write table */ + uint8_t field; + uint16_t pcie_id; + uint16_t slen; /* Data length for write table */ + uint16_t reserved; +} __rte_packed; + +struct zxdh_common_rsp_hdr { + uint8_t rsp_status; + uint16_t rsp_len; + uint8_t reserved; + uint8_t payload_status; + uint8_t rsv; + uint16_t payload_len; +} __rte_packed; + +struct zxdh_tbl_msg_header { + uint8_t type; /* r/w */ + uint8_t field; + uint16_t pcieid; + uint16_t slen; + uint16_t rsv; +}; +struct zxdh_tbl_msg_reps_header { + uint8_t check; + uint8_t rsv; + uint16_t len; +}; + +static int32_t zxdh_fill_common_msg(struct zxdh_hw *hw, + struct zxdh_pci_bar_msg *desc, + uint8_t type, + uint8_t field, + void *buff, + uint16_t buff_size) +{ + uint64_t msg_len = sizeof(struct zxdh_common_msg) + buff_size; + + desc->payload_addr = rte_zmalloc(NULL, msg_len, 0); + if (unlikely(desc->payload_addr == NULL)) { + PMD_DRV_LOG(ERR, "Failed to allocate msg_data"); + return -ENOMEM; + } + memset(desc->payload_addr, 0, msg_len); + desc->payload_len = msg_len; + struct zxdh_common_msg *msg_data = (struct zxdh_common_msg *)desc->payload_addr; + + msg_data->type = type; + msg_data->field = field; + msg_data->pcie_id = hw->pcie_id; + msg_data->slen = 
buff_size; + if (buff_size != 0) + rte_memcpy(msg_data + 1, buff, buff_size); + + return 0; +} + +static int32_t zxdh_send_command(struct zxdh_hw *hw, + struct zxdh_pci_bar_msg *desc, + enum ZXDH_BAR_MODULE_ID module_id, + struct zxdh_msg_recviver_mem *msg_rsp) +{ + desc->virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_CTRLCH_OFFSET); + desc->src = hw->is_pf ? ZXDH_MSG_CHAN_END_PF : ZXDH_MSG_CHAN_END_VF; + desc->dst = ZXDH_MSG_CHAN_END_RISC; + desc->module_id = module_id; + desc->src_pcieid = hw->pcie_id; + + msg_rsp->buffer_len = ZXDH_MSG_RSP_SIZE_MAX; + msg_rsp->recv_buffer = rte_zmalloc(NULL, msg_rsp->buffer_len, 0); + if (unlikely(msg_rsp->recv_buffer == NULL)) { + PMD_DRV_LOG(ERR, "Failed to allocate messages response"); + return -ENOMEM; + } + + if (zxdh_bar_chan_sync_msg_send(desc, msg_rsp) != ZXDH_BAR_MSG_OK) { + PMD_DRV_LOG(ERR, "Failed to send sync messages or receive response"); + rte_free(msg_rsp->recv_buffer); + return -1; + } + + return 0; +} + +static int32_t zxdh_common_rsp_check(struct zxdh_msg_recviver_mem *msg_rsp, + void *buff, uint16_t len) +{ + struct zxdh_common_rsp_hdr *rsp_hdr = (struct zxdh_common_rsp_hdr *)msg_rsp->recv_buffer; + + if (rsp_hdr->payload_status != 0xaa || rsp_hdr->payload_len != len) { + PMD_DRV_LOG(ERR, "Common response is invalid, status:0x%x rsp_len:%d", + rsp_hdr->payload_status, rsp_hdr->payload_len); + return -1; + } + if (len != 0) + rte_memcpy(buff, rsp_hdr + 1, len); + + return 0; +} + +static int32_t zxdh_common_table_read(struct zxdh_hw *hw, uint8_t field, + void *buff, uint16_t buff_size) +{ + struct zxdh_msg_recviver_mem msg_rsp; + struct zxdh_pci_bar_msg desc; + int32_t ret = 0; + + if (!hw->msg_chan_init) { + PMD_DRV_LOG(ERR, "Bar messages channel not initialized"); + return -1; + } + + ret = zxdh_fill_common_msg(hw, &desc, ZXDH_COMMON_TABLE_READ, field, NULL, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to fill common msg"); + return ret; + } + + ret = zxdh_send_command(hw, &desc, ZXDH_BAR_MODULE_TBL, &msg_rsp); + if (ret != 0) + goto free_msg_data; + + ret = zxdh_common_rsp_check(&msg_rsp, buff, buff_size); + if (ret != 0) + goto free_rsp_data; + +free_rsp_data: + rte_free(msg_rsp.recv_buffer); +free_msg_data: + rte_free(desc.payload_addr); + return ret; +} + +int32_t zxdh_phyport_get(struct rte_eth_dev *dev, uint8_t *phyport) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + int32_t ret = zxdh_common_table_read(hw, ZXDH_COMMON_FIELD_PHYPORT, + (void *)phyport, sizeof(*phyport)); + return ret; +} + +static inline void zxdh_fill_res_para(struct rte_eth_dev *dev, struct zxdh_res_para *param) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + param->pcie_id = hw->pcie_id; + param->virt_addr = hw->bar_addr[0] + ZXDH_CTRLCH_OFFSET; + param->src_type = ZXDH_BAR_MODULE_TBL; +} + +static int zxdh_get_res_info(struct zxdh_res_para *dev, uint8_t field, uint8_t *res, uint16_t *len) +{ + struct zxdh_pci_bar_msg in = {0}; + uint8_t recv_buf[ZXDH_RSC_TBL_CONTENT_LEN_MAX + 8] = {0}; + int ret = 0; + + if (!res || !dev) + return ZXDH_BAR_MSG_ERR_NULL; + + struct zxdh_tbl_msg_header tbl_msg = { + .type = ZXDH_TBL_TYPE_READ, + .field = field, + .pcieid = dev->pcie_id, + .slen = 0, + .rsv = 0, + }; + + in.virt_addr = dev->virt_addr; + in.payload_addr = &tbl_msg; + in.payload_len = sizeof(tbl_msg); + in.src = dev->src_type; + in.dst = ZXDH_MSG_CHAN_END_RISC; + in.module_id = ZXDH_BAR_MODULE_TBL; + in.src_pcieid = dev->pcie_id; + + struct zxdh_msg_recviver_mem result = { + .recv_buffer = recv_buf, + .buffer_len = 
sizeof(recv_buf), + }; + ret = zxdh_bar_chan_sync_msg_send(&in, &result); + + if (ret != ZXDH_BAR_MSG_OK) { + PMD_DRV_LOG(ERR, + "send sync_msg failed. pcieid: 0x%x, ret: %d.", dev->pcie_id, ret); + return ret; + } + struct zxdh_tbl_msg_reps_header *tbl_reps = + (struct zxdh_tbl_msg_reps_header *)(recv_buf + ZXDH_REPS_HEADER_OFFSET); + + if (tbl_reps->check != ZXDH_TBL_MSG_PRO_SUCCESS) { + PMD_DRV_LOG(ERR, + "get resource_field failed. pcieid: 0x%x, ret: %d.", dev->pcie_id, ret); + return ret; + } + *len = tbl_reps->len; + rte_memcpy(res, (recv_buf + ZXDH_REPS_HEADER_OFFSET + + sizeof(struct zxdh_tbl_msg_reps_header)), *len); + return ret; +} + +static int zxdh_get_res_panel_id(struct zxdh_res_para *in, uint8_t *panel_id) +{ + uint8_t reps = 0; + uint16_t reps_len = 0; + + if (zxdh_get_res_info(in, ZXDH_TBL_FIELD_PNLID, &reps, &reps_len) != ZXDH_BAR_MSG_OK) + return -1; + + *panel_id = reps; + return ZXDH_BAR_MSG_OK; +} + +int32_t zxdh_panelid_get(struct rte_eth_dev *dev, uint8_t *panelid) +{ + struct zxdh_res_para param; + + zxdh_fill_res_para(dev, ¶m); + int32_t ret = zxdh_get_res_panel_id(¶m, panelid); + return ret; +} diff --git a/drivers/net/zxdh/zxdh_common.h b/drivers/net/zxdh/zxdh_common.h new file mode 100644 index 0000000000..ba29ca1dad --- /dev/null +++ b/drivers/net/zxdh/zxdh_common.h @@ -0,0 +1,30 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#ifndef ZXDH_COMMON_H +#define ZXDH_COMMON_H + +#include <stdint.h> +#include <rte_ethdev.h> + +#include "zxdh_ethdev.h" + +#ifdef __cplusplus +extern "C" { +#endif + +struct zxdh_res_para { + uint64_t virt_addr; + uint16_t pcie_id; + uint16_t src_type; /* refer to BAR_DRIVER_TYPE */ +}; + +int32_t zxdh_phyport_get(struct rte_eth_dev *dev, uint8_t *phyport); +int32_t zxdh_panelid_get(struct rte_eth_dev *dev, uint8_t *pannelid); + +#ifdef __cplusplus +} +#endif + +#endif /* ZXDH_COMMON_H */ diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 21255b2190..23af69fece 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -10,9 +10,21 @@ #include "zxdh_logs.h" #include "zxdh_pci.h" #include "zxdh_msg.h" +#include "zxdh_common.h" struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; +uint16_t zxdh_vport_to_vfid(union zxdh_virport_num v) +{ + /* epid > 4 is local soft queue. 
return 1192 */ + if (v.epid > 4) + return 1192; + if (v.vf_flag) + return v.epid * 256 + v.vfid; + else + return (v.epid * 8 + v.pfid) + 1152; +} + static int32_t zxdh_init_device(struct rte_eth_dev *eth_dev) { struct zxdh_hw *hw = eth_dev->data->dev_private; @@ -44,6 +56,25 @@ static int32_t zxdh_init_device(struct rte_eth_dev *eth_dev) return ret; } +static int zxdh_agent_comm(struct rte_eth_dev *eth_dev, struct zxdh_hw *hw) +{ + if (zxdh_phyport_get(eth_dev, &hw->phyport) != 0) { + PMD_INIT_LOG(ERR, "Failed to get phyport"); + return -1; + } + PMD_INIT_LOG(INFO, "Get phyport success: 0x%x", hw->phyport); + + hw->vfid = zxdh_vport_to_vfid(hw->vport); + + if (zxdh_panelid_get(eth_dev, &hw->panel_id) != 0) { + PMD_INIT_LOG(ERR, "Failed to get panel_id"); + return -1; + } + PMD_INIT_LOG(INFO, "Get panel id success: 0x%x", hw->panel_id); + + return 0; +} + static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) { struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); @@ -103,6 +134,10 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) goto err_zxdh_init; } + ret = zxdh_agent_comm(eth_dev, hw); + if (ret != 0) + goto err_zxdh_init; + return ret; err_zxdh_init: diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index 7434cc15d7..7b7bb16be8 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -55,6 +55,7 @@ struct zxdh_hw { uint16_t pcie_id; uint16_t device_id; uint16_t port_id; + uint16_t vfid; uint8_t *isr; uint8_t weak_barriers; @@ -64,8 +65,12 @@ struct zxdh_hw { uint8_t duplex; uint8_t is_pf; uint8_t msg_chan_init; + uint8_t phyport; + uint8_t panel_id; }; +uint16_t zxdh_vport_to_vfid(union zxdh_virport_num v); + #ifdef __cplusplus } #endif diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index bbacbbc45e..49a7d23014 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -107,6 +107,27 @@ enum ZXDH_BAR_MODULE_ID { ZXDH_BAR_MSG_MODULE_NUM = 100, }; +enum ZXDH_RES_TBL_FILED { + ZXDH_TBL_FIELD_PCIEID = 0, + ZXDH_TBL_FIELD_BDF = 1, + ZXDH_TBL_FIELD_MSGCH = 2, + ZXDH_TBL_FIELD_DATACH = 3, + ZXDH_TBL_FIELD_VPORT = 4, + ZXDH_TBL_FIELD_PNLID = 5, + ZXDH_TBL_FIELD_PHYPORT = 6, + ZXDH_TBL_FIELD_SERDES_NUM = 7, + ZXDH_TBL_FIELD_NP_PORT = 8, + ZXDH_TBL_FIELD_SPEED = 9, + ZXDH_TBL_FIELD_HASHID = 10, + ZXDH_TBL_FIELD_NON, +}; + +enum ZXDH_TBL_MSG_TYPE { + ZXDH_TBL_TYPE_READ, + ZXDH_TBL_TYPE_WRITE, + ZXDH_TBL_TYPE_NON, +}; + struct zxdh_msix_para { uint16_t pcie_id; uint16_t vector_risc; -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 26243 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
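The zxdh_vport_to_vfid() mapping above folds VFs, PFs and the local soft queue into one vfid space. A standalone replica of the arithmetic (plain parameters instead of the union, whose bit layout lives in the ethdev header) makes the cases explicit; this is an illustration, not driver code:

#include <stdint.h>
#include <stdio.h>

/* Mirror of the mapping: VF ids are epid * 256 + vfid, PF ids are
 * offset to start at 1152, and epid > 4 collapses to soft queue 1192. */
static uint16_t demo_vport_to_vfid(uint8_t epid, uint8_t vf_flag,
		uint8_t pfid, uint8_t vfid)
{
	if (epid > 4)
		return 1192;			/* local soft queue */
	if (vf_flag)
		return epid * 256 + vfid;	/* VF case */
	return (epid * 8 + pfid) + 1152;	/* PF case, 8 PFs per epid */
}

int main(void)
{
	printf("epid 0, PF 0 -> %u\n", demo_vport_to_vfid(0, 0, 0, 0)); /* 1152 */
	printf("epid 1, VF 3 -> %u\n", demo_vport_to_vfid(1, 1, 0, 3)); /* 259 */
	printf("epid 5       -> %u\n", demo_vport_to_vfid(5, 0, 0, 0)); /* 1192 */
	return 0;
}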
* Re: [PATCH v9 6/9] net/zxdh: add zxdh get device backend infos 2024-11-01 6:21 ` [PATCH v9 6/9] net/zxdh: add zxdh get device backend infos Junlong Wang @ 2024-11-02 1:06 ` Ferruh Yigit 2024-11-04 3:30 ` [v9,6/9] " Junlong Wang 1 sibling, 0 replies; 225+ messages in thread From: Ferruh Yigit @ 2024-11-02 1:06 UTC (permalink / raw) To: Junlong Wang, dev; +Cc: wang.yong19 On 11/1/2024 6:21 AM, Junlong Wang wrote: > Add zxdh get device backend infos, > use msg chan to send msg get. > > Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> > <...> > +uint16_t zxdh_vport_to_vfid(union zxdh_virport_num v) > +{ > + /* epid > 4 is local soft queue. return 1192 */ > + if (v.epid > 4) > + return 1192; > + if (v.vf_flag) > + return v.epid * 256 + v.vfid; > + else > + return (v.epid * 8 + v.pfid) + 1152; > +} > Is there a reason not to make this function static? That way you can get rid of the declaration in the header file. ^ permalink raw reply [flat|nested] 225+ messages in thread
* Re: [v9,6/9] net/zxdh: add zxdh get device backend infos 2024-11-01 6:21 ` [PATCH v9 6/9] net/zxdh: add zxdh get device backend infos Junlong Wang 2024-11-02 1:06 ` Ferruh Yigit @ 2024-11-04 3:30 ` Junlong Wang 1 sibling, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-11-04 3:30 UTC (permalink / raw) To: ferruh.yigit; +Cc: dev [-- Attachment #1.1.1: Type: text/plain, Size: 562 bytes --] >> +uint16_t zxdh_vport_to_vfid(union zxdh_virport_num v) >> +{ >> + /* epid > 4 is local soft queue. return 1192 */ >> + if (v.epid > 4) >> + return 1192; >> + if (v.vf_flag) >> + return v.epid * 256 + v.vfid; >> + else >> + return (v.epid * 8 + v.pfid) + 1152; >> +} >> > Is there a reason not to make this function static? That way you can get rid > of the declaration in the header file. It will be used by functions in other files in a subsequent submission; therefore, this function is not made static. Thanks! [-- Attachment #1.1.2: Type: text/html , Size: 1312 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
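The choice being weighed in this exchange is plain C linkage; a two-file sketch with generic names (nothing here is from the driver) shows the trade-off:

/* a.h -- a prototype is needed while other translation units call it */
int shared_fn(int x);

/* a.c */
#include "a.h"

int shared_fn(int x)
{
	return x + 1;	/* external linkage: visible to the whole program */
}

/* b.c -- a second file can now call it through the header */
#include "a.h"

int use_it(void)
{
	return shared_fn(41);
}

/* If a.c were the only caller, 'static int shared_fn(int x)' would keep
 * the symbol private to a.c and the header prototype could be dropped,
 * which is the simplification the review suggests. */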
* [PATCH v9 7/9] net/zxdh: add configure zxdh intr implementation 2024-11-01 6:21 ` [PATCH v9 0/9] net/zxdh: introduce net zxdh driver Junlong Wang ` (5 preceding siblings ...) 2024-11-01 6:21 ` [PATCH v9 6/9] net/zxdh: add zxdh get device backend infos Junlong Wang @ 2024-11-01 6:21 ` Junlong Wang 2024-11-02 1:07 ` Ferruh Yigit 2024-11-01 6:21 ` [PATCH v9 8/9] net/zxdh: add zxdh dev infos get ops Junlong Wang ` (5 subsequent siblings) 12 siblings, 1 reply; 225+ messages in thread From: Junlong Wang @ 2024-11-01 6:21 UTC (permalink / raw) To: dev; +Cc: wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 24069 bytes --] configure zxdh intr include risc,dtb. and release intr. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/zxdh_ethdev.c | 301 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev.h | 6 + drivers/net/zxdh/zxdh_msg.c | 188 ++++++++++++++++++++ drivers/net/zxdh/zxdh_msg.h | 16 +- drivers/net/zxdh/zxdh_pci.c | 26 +++ drivers/net/zxdh/zxdh_pci.h | 11 ++ 6 files changed, 546 insertions(+), 2 deletions(-) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 23af69fece..2f8190a428 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -11,6 +11,7 @@ #include "zxdh_pci.h" #include "zxdh_msg.h" #include "zxdh_common.h" +#include "zxdh_queue.h" struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; @@ -25,6 +26,301 @@ uint16_t zxdh_vport_to_vfid(union zxdh_virport_num v) return (v.epid * 8 + v.pfid) + 1152; } +static void zxdh_queues_unbind_intr(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + int32_t i; + + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + ZXDH_VTPCI_OPS(hw)->set_queue_irq(hw, hw->vqs[i * 2], ZXDH_MSI_NO_VECTOR); + ZXDH_VTPCI_OPS(hw)->set_queue_irq(hw, hw->vqs[i * 2 + 1], ZXDH_MSI_NO_VECTOR); + } +} + + +static int32_t zxdh_intr_unmask(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + if (rte_intr_ack(dev->intr_handle) < 0) + return -1; + + hw->use_msix = zxdh_pci_msix_detect(RTE_ETH_DEV_TO_PCI(dev)); + + return 0; +} + +static void zxdh_devconf_intr_handler(void *param) +{ + struct rte_eth_dev *dev = param; + + if (zxdh_intr_unmask(dev) < 0) + PMD_DRV_LOG(ERR, "interrupt enable failed"); +} + + +/* Interrupt handler triggered by NIC for handling specific interrupt. */ +static void zxdh_fromriscv_intr_handler(void *param) +{ + struct rte_eth_dev *dev = param; + struct zxdh_hw *hw = dev->data->dev_private; + uint64_t virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_CTRLCH_OFFSET); + + if (hw->is_pf) { + PMD_INIT_LOG(DEBUG, "zxdh_risc2pf_intr_handler"); + zxdh_bar_irq_recv(ZXDH_MSG_CHAN_END_RISC, ZXDH_MSG_CHAN_END_PF, virt_addr, dev); + } else { + PMD_INIT_LOG(DEBUG, "zxdh_riscvf_intr_handler"); + zxdh_bar_irq_recv(ZXDH_MSG_CHAN_END_RISC, ZXDH_MSG_CHAN_END_VF, virt_addr, dev); + } +} + +/* Interrupt handler triggered by NIC for handling specific interrupt. 
*/ +static void zxdh_frompfvf_intr_handler(void *param) +{ + struct rte_eth_dev *dev = param; + struct zxdh_hw *hw = dev->data->dev_private; + uint64_t virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + + ZXDH_MSG_CHAN_PFVFSHARE_OFFSET); + + if (hw->is_pf) { + PMD_INIT_LOG(DEBUG, "zxdh_vf2pf_intr_handler"); + zxdh_bar_irq_recv(ZXDH_MSG_CHAN_END_VF, ZXDH_MSG_CHAN_END_PF, virt_addr, dev); + } else { + PMD_INIT_LOG(DEBUG, "zxdh_pf2vf_intr_handler"); + zxdh_bar_irq_recv(ZXDH_MSG_CHAN_END_PF, ZXDH_MSG_CHAN_END_VF, virt_addr, dev); + } +} + +static void zxdh_intr_cb_reg(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) + rte_intr_callback_unregister(dev->intr_handle, zxdh_devconf_intr_handler, dev); + + /* register callback to update dev config intr */ + rte_intr_callback_register(dev->intr_handle, zxdh_devconf_intr_handler, dev); + /* Register rsic_v to pf interrupt callback */ + struct rte_intr_handle *tmp = hw->risc_intr + + (ZXDH_MSIX_FROM_PFVF - ZXDH_MSIX_INTR_MSG_VEC_BASE); + + rte_intr_callback_register(tmp, zxdh_frompfvf_intr_handler, dev); + + tmp = hw->risc_intr + (ZXDH_MSIX_FROM_RISCV - ZXDH_MSIX_INTR_MSG_VEC_BASE); + rte_intr_callback_register(tmp, zxdh_fromriscv_intr_handler, dev); +} + +static void zxdh_intr_cb_unreg(struct rte_eth_dev *dev) +{ + if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) + rte_intr_callback_unregister(dev->intr_handle, zxdh_devconf_intr_handler, dev); + + struct zxdh_hw *hw = dev->data->dev_private; + + /* register callback to update dev config intr */ + rte_intr_callback_unregister(dev->intr_handle, zxdh_devconf_intr_handler, dev); + /* Register rsic_v to pf interrupt callback */ + struct rte_intr_handle *tmp = hw->risc_intr + + (ZXDH_MSIX_FROM_PFVF - ZXDH_MSIX_INTR_MSG_VEC_BASE); + + rte_intr_callback_unregister(tmp, zxdh_frompfvf_intr_handler, dev); + tmp = hw->risc_intr + (ZXDH_MSIX_FROM_RISCV - ZXDH_MSIX_INTR_MSG_VEC_BASE); + rte_intr_callback_unregister(tmp, zxdh_fromriscv_intr_handler, dev); +} + +static int32_t zxdh_intr_disable(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + if (!hw->intr_enabled) + return 0; + + zxdh_intr_cb_unreg(dev); + if (rte_intr_disable(dev->intr_handle) < 0) + return -1; + + hw->intr_enabled = 0; + return 0; +} + +static int32_t zxdh_intr_enable(struct rte_eth_dev *dev) +{ + int ret = 0; + struct zxdh_hw *hw = dev->data->dev_private; + + if (!hw->intr_enabled) { + zxdh_intr_cb_reg(dev); + ret = rte_intr_enable(dev->intr_handle); + if (unlikely(ret)) + PMD_INIT_LOG(ERR, "Failed to enable %s intr", dev->data->name); + + hw->intr_enabled = 1; + } + return ret; +} + +static int32_t zxdh_intr_release(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) + ZXDH_VTPCI_OPS(hw)->set_config_irq(hw, ZXDH_MSI_NO_VECTOR); + + zxdh_queues_unbind_intr(dev); + zxdh_intr_disable(dev); + + rte_intr_efd_disable(dev->intr_handle); + rte_intr_vec_list_free(dev->intr_handle); + rte_free(hw->risc_intr); + hw->risc_intr = NULL; + rte_free(hw->dtb_intr); + hw->dtb_intr = NULL; + return 0; +} + +static int32_t zxdh_setup_risc_interrupts(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint8_t i; + + if (!hw->risc_intr) { + PMD_INIT_LOG(ERR, " to allocate risc_intr"); + hw->risc_intr = rte_zmalloc("risc_intr", + ZXDH_MSIX_INTR_MSG_VEC_NUM * sizeof(struct rte_intr_handle), 0); + if (hw->risc_intr == NULL) { + PMD_INIT_LOG(ERR, "Failed to 
allocate risc_intr"); + return -ENOMEM; + } + } + + for (i = 0; i < ZXDH_MSIX_INTR_MSG_VEC_NUM; i++) { + if (dev->intr_handle->efds[i] < 0) { + PMD_INIT_LOG(ERR, "[%u]risc interrupt fd is invalid", i); + rte_free(hw->risc_intr); + hw->risc_intr = NULL; + return -1; + } + + struct rte_intr_handle *intr_handle = hw->risc_intr + i; + + intr_handle->fd = dev->intr_handle->efds[i]; + intr_handle->type = dev->intr_handle->type; + } + + return 0; +} + +static int32_t zxdh_setup_dtb_interrupts(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + if (!hw->dtb_intr) { + hw->dtb_intr = rte_zmalloc("dtb_intr", sizeof(struct rte_intr_handle), 0); + if (hw->dtb_intr == NULL) { + PMD_INIT_LOG(ERR, "Failed to allocate dtb_intr"); + return -ENOMEM; + } + } + + if (dev->intr_handle->efds[ZXDH_MSIX_INTR_DTB_VEC - 1] < 0) { + PMD_INIT_LOG(ERR, "[%d]dtb interrupt fd is invalid", ZXDH_MSIX_INTR_DTB_VEC - 1); + rte_free(hw->dtb_intr); + hw->dtb_intr = NULL; + return -1; + } + hw->dtb_intr->fd = dev->intr_handle->efds[ZXDH_MSIX_INTR_DTB_VEC - 1]; + hw->dtb_intr->type = dev->intr_handle->type; + return 0; +} + +static int32_t zxdh_queues_bind_intr(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + int32_t i; + uint16_t vec; + + if (!dev->data->dev_conf.intr_conf.rxq) { + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + vec = ZXDH_VTPCI_OPS(hw)->set_queue_irq(hw, + hw->vqs[i * 2], ZXDH_MSI_NO_VECTOR); + PMD_INIT_LOG(DEBUG, "vq%d irq set 0x%x, get 0x%x", + i * 2, ZXDH_MSI_NO_VECTOR, vec); + } + } else { + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + vec = ZXDH_VTPCI_OPS(hw)->set_queue_irq(hw, + hw->vqs[i * 2], i + ZXDH_QUEUE_INTR_VEC_BASE); + PMD_INIT_LOG(DEBUG, "vq%d irq set %d, get %d", + i * 2, i + ZXDH_QUEUE_INTR_VEC_BASE, vec); + } + } + /* mask all txq intr */ + for (i = 0; i < dev->data->nb_tx_queues; ++i) { + vec = ZXDH_VTPCI_OPS(hw)->set_queue_irq(hw, + hw->vqs[(i * 2) + 1], ZXDH_MSI_NO_VECTOR); + PMD_INIT_LOG(DEBUG, "vq%d irq set 0x%x, get 0x%x", + (i * 2) + 1, ZXDH_MSI_NO_VECTOR, vec); + } + return 0; +} + +static int32_t zxdh_configure_intr(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + int32_t ret = 0; + + if (!rte_intr_cap_multiple(dev->intr_handle)) { + PMD_INIT_LOG(ERR, "Multiple intr vector not supported"); + return -ENOTSUP; + } + zxdh_intr_release(dev); + uint8_t nb_efd = ZXDH_MSIX_INTR_DTB_VEC_NUM + ZXDH_MSIX_INTR_MSG_VEC_NUM; + + if (dev->data->dev_conf.intr_conf.rxq) + nb_efd += dev->data->nb_rx_queues; + + if (rte_intr_efd_enable(dev->intr_handle, nb_efd)) { + PMD_INIT_LOG(ERR, "Fail to create eventfd"); + return -1; + } + + if (rte_intr_vec_list_alloc(dev->intr_handle, "intr_vec", + hw->max_queue_pairs + ZXDH_INTR_NONQUE_NUM)) { + PMD_INIT_LOG(ERR, "Failed to allocate %u rxq vectors", + hw->max_queue_pairs + ZXDH_INTR_NONQUE_NUM); + return -ENOMEM; + } + PMD_INIT_LOG(DEBUG, "allocate %u rxq vectors", dev->intr_handle->vec_list_size); + if (zxdh_setup_risc_interrupts(dev) != 0) { + PMD_INIT_LOG(ERR, "Error setting up rsic_v interrupts!"); + ret = -1; + goto free_intr_vec; + } + if (zxdh_setup_dtb_interrupts(dev) != 0) { + PMD_INIT_LOG(ERR, "Error setting up dtb interrupts!"); + ret = -1; + goto free_intr_vec; + } + + if (zxdh_queues_bind_intr(dev) < 0) { + PMD_INIT_LOG(ERR, "Failed to bind queue/interrupt"); + ret = -1; + goto free_intr_vec; + } + + if (zxdh_intr_enable(dev) < 0) { + PMD_DRV_LOG(ERR, "interrupt enable failed"); + ret = -1; + goto free_intr_vec; + } + return 0; + +free_intr_vec: + 
zxdh_intr_release(dev); + return ret; +} + static int32_t zxdh_init_device(struct rte_eth_dev *eth_dev) { struct zxdh_hw *hw = eth_dev->data->dev_private; @@ -138,9 +434,14 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) if (ret != 0) goto err_zxdh_init; + ret = zxdh_configure_intr(eth_dev); + if (ret != 0) + goto err_zxdh_init; + return ret; err_zxdh_init: + zxdh_intr_release(eth_dev); zxdh_bar_msg_chan_exit(); rte_free(eth_dev->data->mac_addrs); eth_dev->data->mac_addrs = NULL; diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index 7b7bb16be8..65726f3a20 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -7,6 +7,8 @@ #include <rte_ether.h> #include "ethdev_driver.h" +#include <rte_interrupts.h> +#include <eal_interrupts.h> #ifdef __cplusplus extern "C" { @@ -43,6 +45,9 @@ struct zxdh_hw { struct rte_eth_dev *eth_dev; struct zxdh_pci_common_cfg *common_cfg; struct zxdh_net_config *dev_cfg; + struct rte_intr_handle *risc_intr; + struct rte_intr_handle *dtb_intr; + struct zxdh_virtqueue **vqs; union zxdh_virport_num vport; uint64_t bar_addr[ZXDH_NUM_BARS]; @@ -59,6 +64,7 @@ struct zxdh_hw { uint8_t *isr; uint8_t weak_barriers; + uint8_t intr_enabled; uint8_t use_msix; uint8_t mac_addr[RTE_ETHER_ADDR_LEN]; diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c index 4f6607fcb0..851bab8b36 100644 --- a/drivers/net/zxdh/zxdh_msg.c +++ b/drivers/net/zxdh/zxdh_msg.c @@ -95,6 +95,12 @@ #define ZXDH_BAR_CHAN_INDEX_SEND 0 #define ZXDH_BAR_CHAN_INDEX_RECV 1 +#define ZXDH_BAR_CHAN_MSG_SYNC 0 +#define ZXDH_BAR_CHAN_MSG_NO_EMEC 0 +#define ZXDH_BAR_CHAN_MSG_EMEC 1 +#define ZXDH_BAR_CHAN_MSG_NO_ACK 0 +#define ZXDH_BAR_CHAN_MSG_ACK 1 + uint8_t subchan_id_tbl[ZXDH_BAR_MSG_SRC_NUM][ZXDH_BAR_MSG_DST_NUM] = { {ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_SEND}, {ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_RECV}, @@ -136,6 +142,36 @@ struct zxdh_seqid_ring g_seqid_ring = {0}; static uint8_t tmp_msg_header[ZXDH_BAR_MSG_ADDR_CHAN_INTERVAL]; +static inline const char *zxdh_module_id_name(int val) +{ + switch (val) { + case ZXDH_BAR_MODULE_DBG: return "ZXDH_BAR_MODULE_DBG"; + case ZXDH_BAR_MODULE_TBL: return "ZXDH_BAR_MODULE_TBL"; + case ZXDH_BAR_MODULE_MISX: return "ZXDH_BAR_MODULE_MISX"; + case ZXDH_BAR_MODULE_SDA: return "ZXDH_BAR_MODULE_SDA"; + case ZXDH_BAR_MODULE_RDMA: return "ZXDH_BAR_MODULE_RDMA"; + case ZXDH_BAR_MODULE_DEMO: return "ZXDH_BAR_MODULE_DEMO"; + case ZXDH_BAR_MODULE_SMMU: return "ZXDH_BAR_MODULE_SMMU"; + case ZXDH_BAR_MODULE_MAC: return "ZXDH_BAR_MODULE_MAC"; + case ZXDH_BAR_MODULE_VDPA: return "ZXDH_BAR_MODULE_VDPA"; + case ZXDH_BAR_MODULE_VQM: return "ZXDH_BAR_MODULE_VQM"; + case ZXDH_BAR_MODULE_NP: return "ZXDH_BAR_MODULE_NP"; + case ZXDH_BAR_MODULE_VPORT: return "ZXDH_BAR_MODULE_VPORT"; + case ZXDH_BAR_MODULE_BDF: return "ZXDH_BAR_MODULE_BDF"; + case ZXDH_BAR_MODULE_RISC_READY: return "ZXDH_BAR_MODULE_RISC_READY"; + case ZXDH_BAR_MODULE_REVERSE: return "ZXDH_BAR_MODULE_REVERSE"; + case ZXDH_BAR_MDOULE_NVME: return "ZXDH_BAR_MDOULE_NVME"; + case ZXDH_BAR_MDOULE_NPSDK: return "ZXDH_BAR_MDOULE_NPSDK"; + case ZXDH_BAR_MODULE_NP_TODO: return "ZXDH_BAR_MODULE_NP_TODO"; + case ZXDH_MODULE_BAR_MSG_TO_PF: return "ZXDH_MODULE_BAR_MSG_TO_PF"; + case ZXDH_MODULE_BAR_MSG_TO_VF: return "ZXDH_MODULE_BAR_MSG_TO_VF"; + case ZXDH_MODULE_FLASH: return "ZXDH_MODULE_FLASH"; + case ZXDH_BAR_MODULE_OFFSET_GET: return "ZXDH_BAR_MODULE_OFFSET_GET"; + case 
ZXDH_BAR_EVENT_OVS_WITH_VCB: return "ZXDH_BAR_EVENT_OVS_WITH_VCB"; + default: return "NA"; + } +} + static uint16_t zxdh_pcie_id_to_hard_lock(uint16_t src_pcieid, uint8_t dst) { uint16_t lock_id = 0; @@ -796,3 +832,155 @@ int zxdh_msg_chan_enable(struct rte_eth_dev *dev) return zxdh_bar_chan_enable(&misx_info, &hw->vport.vport); } + +static uint64_t zxdh_recv_addr_get(uint8_t src_type, uint8_t dst_type, uint64_t virt_addr) +{ + uint8_t src = zxdh_bar_msg_dst_index_trans(src_type); + uint8_t dst = zxdh_bar_msg_src_index_trans(dst_type); + + if (src == ZXDH_BAR_MSG_SRC_ERR || dst == ZXDH_BAR_MSG_DST_ERR) + return 0; + + uint8_t chan_id = chan_id_tbl[dst][src]; + uint8_t subchan_id = 1 - subchan_id_tbl[dst][src]; + + return zxdh_subchan_addr_cal(virt_addr, chan_id, subchan_id); +} + +static void zxdh_bar_msg_ack_async_msg_proc(struct zxdh_bar_msg_header *msg_header, + uint8_t *receiver_buff) +{ + struct zxdh_seqid_item *reps_info = &g_seqid_ring.reps_info_tbl[msg_header->msg_id]; + + if (reps_info->flag != ZXDH_REPS_INFO_FLAG_USED) { + PMD_MSG_LOG(ERR, "msg_id: %u is released", msg_header->msg_id); + return; + } + if (msg_header->len > reps_info->buffer_len - 4) { + PMD_MSG_LOG(ERR, "reps_buf_len is %u, but reps_msg_len is %u", + reps_info->buffer_len, msg_header->len + 4); + goto free_id; + } + uint8_t *reps_buffer = (uint8_t *)reps_info->reps_addr; + + rte_memcpy(reps_buffer + 4, receiver_buff, msg_header->len); + *(uint16_t *)(reps_buffer + 1) = msg_header->len; + *(uint8_t *)(reps_info->reps_addr) = ZXDH_REPS_HEADER_REPLYED; + +free_id: + zxdh_bar_chan_msgid_free(msg_header->msg_id); +} + +zxdh_bar_chan_msg_recv_callback msg_recv_func_tbl[ZXDH_BAR_MSG_MODULE_NUM]; +static void zxdh_bar_msg_sync_msg_proc(uint64_t reply_addr, + struct zxdh_bar_msg_header *msg_header, + uint8_t *receiver_buff, void *dev) +{ + uint8_t *reps_buffer = rte_malloc(NULL, ZXDH_BAR_MSG_PAYLOAD_MAX_LEN, 0); + + if (reps_buffer == NULL) + return; + + zxdh_bar_chan_msg_recv_callback recv_func = msg_recv_func_tbl[msg_header->module_id]; + uint16_t reps_len = 0; + + recv_func(receiver_buff, msg_header->len, reps_buffer, &reps_len, dev); + msg_header->ack = ZXDH_BAR_CHAN_MSG_ACK; + msg_header->len = reps_len; + zxdh_bar_chan_msg_header_set(reply_addr, msg_header); + zxdh_bar_chan_msg_payload_set(reply_addr, reps_buffer, reps_len); + zxdh_bar_chan_msg_valid_set(reply_addr, ZXDH_BAR_MSG_CHAN_USABLE); + rte_free(reps_buffer); +} + +static uint64_t zxdh_reply_addr_get(uint8_t sync, uint8_t src_type, + uint8_t dst_type, uint64_t virt_addr) +{ + uint8_t src = zxdh_bar_msg_dst_index_trans(src_type); + uint8_t dst = zxdh_bar_msg_src_index_trans(dst_type); + + if (src == ZXDH_BAR_MSG_SRC_ERR || dst == ZXDH_BAR_MSG_DST_ERR) + return 0; + + uint8_t chan_id = chan_id_tbl[dst][src]; + uint8_t subchan_id = 1 - subchan_id_tbl[dst][src]; + uint64_t recv_rep_addr; + + if (sync == ZXDH_BAR_CHAN_MSG_SYNC) + recv_rep_addr = zxdh_subchan_addr_cal(virt_addr, chan_id, subchan_id); + else + recv_rep_addr = zxdh_subchan_addr_cal(virt_addr, chan_id, 1 - subchan_id); + + return recv_rep_addr; +} + +static uint16_t zxdh_bar_chan_msg_header_check(struct zxdh_bar_msg_header *msg_header) +{ + if (msg_header->valid != ZXDH_BAR_MSG_CHAN_USED) { + PMD_MSG_LOG(ERR, "recv header ERR: valid label is not used."); + return ZXDH_BAR_MSG_ERR_MODULE; + } + uint8_t module_id = msg_header->module_id; + + if (module_id >= (uint8_t)ZXDH_BAR_MSG_MODULE_NUM) { + PMD_MSG_LOG(ERR, "recv header ERR: invalid module_id: %u.", module_id); + return ZXDH_BAR_MSG_ERR_MODULE; + 
} + uint16_t len = msg_header->len; + + if (len > ZXDH_BAR_MSG_PAYLOAD_MAX_LEN) { + PMD_MSG_LOG(ERR, "recv header ERR: invalid mesg len: %u.", len); + return ZXDH_BAR_MSG_ERR_LEN; + } + if (msg_recv_func_tbl[msg_header->module_id] == NULL) { + PMD_MSG_LOG(ERR, "recv header ERR: module:%s(%u) doesn't register", + zxdh_module_id_name(module_id), module_id); + return ZXDH_BAR_MSG_ERR_MODULE_NOEXIST; + } + return ZXDH_BAR_MSG_OK; +} + +int zxdh_bar_irq_recv(uint8_t src, uint8_t dst, uint64_t virt_addr, void *dev) +{ + struct zxdh_bar_msg_header msg_header = {0}; + uint64_t recv_addr = 0; + uint16_t ret = 0; + + recv_addr = zxdh_recv_addr_get(src, dst, virt_addr); + if (recv_addr == 0) { + PMD_MSG_LOG(ERR, "invalid driver type(src:%u, dst:%u).", src, dst); + return -1; + } + + zxdh_bar_chan_msg_header_get(recv_addr, &msg_header); + ret = zxdh_bar_chan_msg_header_check(&msg_header); + + if (ret != ZXDH_BAR_MSG_OK) { + PMD_MSG_LOG(ERR, "recv msg_head err, ret: %u.", ret); + return -1; + } + + uint8_t *recved_msg = rte_malloc(NULL, msg_header.len, 0); + if (recved_msg == NULL) { + PMD_MSG_LOG(ERR, "malloc temp buff failed."); + return -1; + } + zxdh_bar_chan_msg_payload_get(recv_addr, recved_msg, msg_header.len); + + uint64_t reps_addr = zxdh_reply_addr_get(msg_header.sync, src, dst, virt_addr); + + if (msg_header.sync == ZXDH_BAR_CHAN_MSG_SYNC) { + zxdh_bar_msg_sync_msg_proc(reps_addr, &msg_header, recved_msg, dev); + goto exit; + } + zxdh_bar_chan_msg_valid_set(recv_addr, ZXDH_BAR_MSG_CHAN_USABLE); + if (msg_header.ack == ZXDH_BAR_CHAN_MSG_ACK) { + zxdh_bar_msg_ack_async_msg_proc(&msg_header, recved_msg); + goto exit; + } + return 0; + +exit: + rte_free(recved_msg); + return ZXDH_BAR_MSG_OK; +} diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index 49a7d23014..0d032d306c 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -13,10 +13,17 @@ extern "C" { #endif -#define ZXDH_BAR0_INDEX 0 -#define ZXDH_CTRLCH_OFFSET (0x2000) +#define ZXDH_BAR0_INDEX 0 +#define ZXDH_CTRLCH_OFFSET (0x2000) +#define ZXDH_MSG_CHAN_PFVFSHARE_OFFSET (ZXDH_CTRLCH_OFFSET + 0x1000) #define ZXDH_MSIX_INTR_MSG_VEC_BASE 1 +#define ZXDH_MSIX_INTR_MSG_VEC_NUM 3 +#define ZXDH_MSIX_INTR_DTB_VEC (ZXDH_MSIX_INTR_MSG_VEC_BASE + ZXDH_MSIX_INTR_MSG_VEC_NUM) +#define ZXDH_MSIX_INTR_DTB_VEC_NUM 1 +#define ZXDH_INTR_NONQUE_NUM (ZXDH_MSIX_INTR_MSG_VEC_NUM + ZXDH_MSIX_INTR_DTB_VEC_NUM + 1) +#define ZXDH_QUEUE_INTR_VEC_BASE (ZXDH_MSIX_INTR_DTB_VEC + ZXDH_MSIX_INTR_DTB_VEC_NUM) +#define ZXDH_QUEUE_INTR_VEC_NUM 256 #define ZXDH_BAR_MSG_POLLING_SPAN 100 #define ZXDH_BAR_MSG_POLL_CNT_PER_MS (1 * 1000 / ZXDH_BAR_MSG_POLLING_SPAN) @@ -202,6 +209,9 @@ struct zxdh_bar_msg_header { uint16_t dst_pcieid; /* used in PF-->VF */ }; +typedef int (*zxdh_bar_chan_msg_recv_callback)(void *pay_load, uint16_t len, + void *reps_buffer, uint16_t *reps_len, void *dev); + int zxdh_msg_chan_init(void); int zxdh_bar_msg_chan_exit(void); int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev); @@ -210,6 +220,8 @@ int zxdh_msg_chan_enable(struct rte_eth_dev *dev); int zxdh_bar_chan_sync_msg_send(struct zxdh_pci_bar_msg *in, struct zxdh_msg_recviver_mem *result); +int zxdh_bar_irq_recv(uint8_t src, uint8_t dst, uint64_t virt_addr, void *dev); + #ifdef __cplusplus } #endif diff --git a/drivers/net/zxdh/zxdh_pci.c b/drivers/net/zxdh/zxdh_pci.c index a88d620f30..65164c86b7 100644 --- a/drivers/net/zxdh/zxdh_pci.c +++ b/drivers/net/zxdh/zxdh_pci.c @@ -92,6 +92,24 @@ static void zxdh_set_features(struct zxdh_hw *hw, uint64_t 
features) rte_write32(features >> 32, &hw->common_cfg->guest_feature); } +static uint16_t zxdh_set_config_irq(struct zxdh_hw *hw, uint16_t vec) +{ + rte_write16(vec, &hw->common_cfg->msix_config); + return rte_read16(&hw->common_cfg->msix_config); +} + +static uint16_t zxdh_set_queue_irq(struct zxdh_hw *hw, struct zxdh_virtqueue *vq, uint16_t vec) +{ + rte_write16(vq->vq_queue_index, &hw->common_cfg->queue_select); + rte_write16(vec, &hw->common_cfg->queue_msix_vector); + return rte_read16(&hw->common_cfg->queue_msix_vector); +} + +static uint8_t zxdh_get_isr(struct zxdh_hw *hw) +{ + return rte_read8(hw->isr); +} + const struct zxdh_pci_ops zxdh_dev_pci_ops = { .read_dev_cfg = zxdh_read_dev_config, .write_dev_cfg = zxdh_write_dev_config, @@ -99,8 +117,16 @@ const struct zxdh_pci_ops zxdh_dev_pci_ops = { .set_status = zxdh_set_status, .get_features = zxdh_get_features, .set_features = zxdh_set_features, + .set_queue_irq = zxdh_set_queue_irq, + .set_config_irq = zxdh_set_config_irq, + .get_isr = zxdh_get_isr, }; +uint8_t zxdh_pci_isr(struct zxdh_hw *hw) +{ + return ZXDH_VTPCI_OPS(hw)->get_isr(hw); +} + uint16_t zxdh_pci_get_features(struct zxdh_hw *hw) { return ZXDH_VTPCI_OPS(hw)->get_features(hw); diff --git a/drivers/net/zxdh/zxdh_pci.h b/drivers/net/zxdh/zxdh_pci.h index ff656f28e6..f362658ba6 100644 --- a/drivers/net/zxdh/zxdh_pci.h +++ b/drivers/net/zxdh/zxdh_pci.h @@ -22,6 +22,13 @@ enum zxdh_msix_status { ZXDH_MSIX_ENABLED = 2 }; +/* The bit of the ISR which indicates a device has an interrupt. */ +#define ZXDH_PCI_ISR_INTR 0x1 +/* The bit of the ISR which indicates a device configuration change. */ +#define ZXDH_PCI_ISR_CONFIG 0x2 +/* Vector value used to disable MSI for queue. */ +#define ZXDH_MSI_NO_VECTOR 0x7F + #define ZXDH_NET_F_MAC 5 /* Host has given MAC address. */ #define ZXDH_NET_F_MRG_RXBUF 15 /* Host can merge receive buffers. */ #define ZXDH_NET_F_STATUS 16 /* zxdh_net_config.status available */ @@ -110,6 +117,9 @@ struct zxdh_pci_ops { uint64_t (*get_features)(struct zxdh_hw *hw); void (*set_features)(struct zxdh_hw *hw, uint64_t features); + uint16_t (*set_queue_irq)(struct zxdh_hw *hw, struct zxdh_virtqueue *vq, uint16_t vec); + uint16_t (*set_config_irq)(struct zxdh_hw *hw, uint16_t vec); + uint8_t (*get_isr)(struct zxdh_hw *hw); }; struct zxdh_hw_internal { @@ -130,6 +140,7 @@ int32_t zxdh_get_pci_dev_config(struct zxdh_hw *hw); uint16_t zxdh_pci_get_features(struct zxdh_hw *hw); enum zxdh_msix_status zxdh_pci_msix_detect(struct rte_pci_device *dev); +uint8_t zxdh_pci_isr(struct zxdh_hw *hw); #ifdef __cplusplus } -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 50793 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
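Taken together, the new get_isr callback and the ZXDH_PCI_ISR_* bits let a handler tell a configuration change apart from a queue interrupt. A hypothetical dispatcher sketch, assuming (as on virtio-style devices) that the ISR register is read-to-clear:

#include <stdint.h>
#include <ethdev_driver.h>

#include "zxdh_ethdev.h"
#include "zxdh_pci.h"

/* Hypothetical, not part of the patch: read the ISR once and branch
 * on the bits defined in zxdh_pci.h. */
static void zxdh_demo_isr_dispatch(struct rte_eth_dev *dev)
{
	struct zxdh_hw *hw = dev->data->dev_private;
	uint8_t isr = zxdh_pci_isr(hw);		/* single read-to-clear access */

	if (isr & ZXDH_PCI_ISR_CONFIG) {
		/* device configuration changed, e.g. link status update */
	}
	if (isr & ZXDH_PCI_ISR_INTR) {
		/* at least one queue has an interrupt pending */
	}
}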
* Re: [PATCH v9 7/9] net/zxdh: add configure zxdh intr implementation 2024-11-01 6:21 ` [PATCH v9 7/9] net/zxdh: add configure zxdh intr implementation Junlong Wang @ 2024-11-02 1:07 ` Ferruh Yigit 0 siblings, 0 replies; 225+ messages in thread From: Ferruh Yigit @ 2024-11-02 1:07 UTC (permalink / raw) To: Junlong Wang, dev; +Cc: wang.yong19 On 11/1/2024 6:21 AM, Junlong Wang wrote: > configure zxdh intr include risc,dtb. and release intr. > > Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> > <...> > +static int32_t zxdh_configure_intr(struct rte_eth_dev *dev) > +{ > + struct zxdh_hw *hw = dev->data->dev_private; > + int32_t ret = 0; > + > + if (!rte_intr_cap_multiple(dev->intr_handle)) { > + PMD_INIT_LOG(ERR, "Multiple intr vector not supported"); > + return -ENOTSUP; > + } > + zxdh_intr_release(dev); > + uint8_t nb_efd = ZXDH_MSIX_INTR_DTB_VEC_NUM + ZXDH_MSIX_INTR_MSG_VEC_NUM; > + > + if (dev->data->dev_conf.intr_conf.rxq) > + nb_efd += dev->data->nb_rx_queues; > + > + if (rte_intr_efd_enable(dev->intr_handle, nb_efd)) { > + PMD_INIT_LOG(ERR, "Fail to create eventfd"); > + return -1; > + } > + > + if (rte_intr_vec_list_alloc(dev->intr_handle, "intr_vec", > + hw->max_queue_pairs + ZXDH_INTR_NONQUE_NUM)) { > + PMD_INIT_LOG(ERR, "Failed to allocate %u rxq vectors", > + hw->max_queue_pairs + ZXDH_INTR_NONQUE_NUM); > + return -ENOMEM; > + } > + PMD_INIT_LOG(DEBUG, "allocate %u rxq vectors", dev->intr_handle->vec_list_size); > + if (zxdh_setup_risc_interrupts(dev) != 0) { > + PMD_INIT_LOG(ERR, "Error setting up rsic_v interrupts!"); > + ret = -1; > + goto free_intr_vec; > + } > + if (zxdh_setup_dtb_interrupts(dev) != 0) { > + PMD_INIT_LOG(ERR, "Error setting up dtb interrupts!"); > + ret = -1; > + goto free_intr_vec; > + } > + > + if (zxdh_queues_bind_intr(dev) < 0) { > + PMD_INIT_LOG(ERR, "Failed to bind queue/interrupt"); > + ret = -1; > + goto free_intr_vec; > + } > + > + if (zxdh_intr_enable(dev) < 0) { > + PMD_DRV_LOG(ERR, "interrupt enable failed"); > + ret = -1; > + goto free_intr_vec; > + } One of the above logs uses 'PMD_INIT_LOG()' and the other 'PMD_DRV_LOG()'; how do you differentiate? Do you really need two different log types, init and driver? (I understand the need for the others, Rx, Tx & msg, but still, less is easier) ^ permalink raw reply [flat|nested] 225+ messages in thread
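The split usually exists so that each stage can be silenced or made verbose independently at runtime. Below is a sketch of the common PMD pattern behind such macros; zxdh_logs.h is part of this series but is not quoted in this thread, so the names follow convention and are assumptions, not the actual file contents:

#include <rte_log.h>

extern int zxdh_logtype_init;
extern int zxdh_logtype_driver;

#define PMD_INIT_LOG(level, fmt, ...) \
	rte_log(RTE_LOG_ ## level, zxdh_logtype_init, \
		"ZXDH %s(): " fmt "\n", __func__, ##__VA_ARGS__)

#define PMD_DRV_LOG(level, fmt, ...) \
	rte_log(RTE_LOG_ ## level, zxdh_logtype_driver, \
		"ZXDH %s(): " fmt "\n", __func__, ##__VA_ARGS__)

/* Registered once in a .c file; each logtype can then be tuned
 * separately, e.g. --log-level='pmd.net.zxdh.init:debug'. */
RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_init, init, NOTICE);
RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_driver, driver, NOTICE);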
* [PATCH v9 8/9] net/zxdh: add zxdh dev infos get ops 2024-11-01 6:21 ` [PATCH v9 0/9] net/zxdh: introduce net zxdh driver Junlong Wang ` (6 preceding siblings ...) 2024-11-01 6:21 ` [PATCH v9 7/9] net/zxdh: add configure zxdh intr implementation Junlong Wang @ 2024-11-01 6:21 ` Junlong Wang 2024-11-01 6:21 ` [PATCH v9 9/9] net/zxdh: add zxdh dev configure ops Junlong Wang ` (4 subsequent siblings) 12 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-11-01 6:21 UTC (permalink / raw) To: dev; +Cc: wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 3547 bytes --] Add support for zxdh infos get. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/zxdh_ethdev.c | 44 +++++++++++++++++++++++++++++++++- drivers/net/zxdh/zxdh_ethdev.h | 2 ++ 2 files changed, 45 insertions(+), 1 deletion(-) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 2f8190a428..bbdbda3457 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -26,6 +26,43 @@ uint16_t zxdh_vport_to_vfid(union zxdh_virport_num v) return (v.epid * 8 + v.pfid) + 1152; } +static int32_t zxdh_dev_infos_get(struct rte_eth_dev *dev, + struct rte_eth_dev_info *dev_info) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + dev_info->speed_capa = rte_eth_speed_bitflag(hw->speed, RTE_ETH_LINK_FULL_DUPLEX); + dev_info->max_rx_queues = RTE_MIN(hw->max_queue_pairs, ZXDH_RX_QUEUES_MAX); + dev_info->max_tx_queues = RTE_MIN(hw->max_queue_pairs, ZXDH_TX_QUEUES_MAX); + dev_info->min_rx_bufsize = ZXDH_MIN_RX_BUFSIZE; + dev_info->max_rx_pktlen = ZXDH_MAX_RX_PKTLEN; + dev_info->max_mac_addrs = ZXDH_MAX_MAC_ADDRS; + dev_info->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_VLAN_STRIP | + RTE_ETH_RX_OFFLOAD_VLAN_FILTER | + RTE_ETH_RX_OFFLOAD_QINQ_STRIP); + dev_info->rx_offload_capa |= (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_RX_OFFLOAD_UDP_CKSUM | + RTE_ETH_RX_OFFLOAD_TCP_CKSUM | + RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM); + dev_info->rx_offload_capa |= (RTE_ETH_RX_OFFLOAD_SCATTER); + dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TCP_LRO; + dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_RSS_HASH; + + dev_info->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_MULTI_SEGS); + dev_info->tx_offload_capa |= (RTE_ETH_TX_OFFLOAD_TCP_TSO | + RTE_ETH_TX_OFFLOAD_UDP_TSO); + dev_info->tx_offload_capa |= (RTE_ETH_TX_OFFLOAD_VLAN_INSERT | + RTE_ETH_TX_OFFLOAD_QINQ_INSERT | + RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO); + dev_info->tx_offload_capa |= (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_UDP_CKSUM | + RTE_ETH_TX_OFFLOAD_TCP_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM); + + return 0; +} + static void zxdh_queues_unbind_intr(struct rte_eth_dev *dev) { struct zxdh_hw *hw = dev->data->dev_private; @@ -321,6 +358,11 @@ static int32_t zxdh_configure_intr(struct rte_eth_dev *dev) return ret; } +/* dev_ops for zxdh, bare necessities for basic operation */ +static const struct eth_dev_ops zxdh_eth_dev_ops = { + .dev_infos_get = zxdh_dev_infos_get, +}; + static int32_t zxdh_init_device(struct rte_eth_dev *eth_dev) { struct zxdh_hw *hw = eth_dev->data->dev_private; @@ -377,7 +419,7 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) struct zxdh_hw *hw = eth_dev->data->dev_private; int ret = 0; - eth_dev->dev_ops = NULL; + eth_dev->dev_ops = &zxdh_eth_dev_ops; /* Allocate memory for storing MAC addresses */ eth_dev->data->mac_addrs = rte_zmalloc("zxdh_mac", diff --git a/drivers/net/zxdh/zxdh_ethdev.h 
b/drivers/net/zxdh/zxdh_ethdev.h index 65726f3a20..89c5a9bb5f 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -29,6 +29,8 @@ extern "C" { #define ZXDH_NUM_BARS 2 #define ZXDH_RX_QUEUES_MAX 128U #define ZXDH_TX_QUEUES_MAX 128U +#define ZXDH_MIN_RX_BUFSIZE 64 +#define ZXDH_MAX_RX_PKTLEN 14000U union zxdh_virport_num { uint16_t vport; -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 7720 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
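On the application side these capabilities surface through the standard ethdev query. A small hypothetical check against the limits and offloads reported above (demo_check_lro is a placeholder name; the ethdev API calls are standard):

#include <errno.h>
#include <stdio.h>
#include <rte_ethdev.h>

/* Hypothetical: verify the port advertises LRO before enabling it. */
static int demo_check_lro(uint16_t port_id)
{
	struct rte_eth_dev_info info;
	int ret = rte_eth_dev_info_get(port_id, &info);

	if (ret != 0)
		return ret;

	printf("rxq max %u, txq max %u, max pktlen %u\n",
		(unsigned int)info.max_rx_queues,
		(unsigned int)info.max_tx_queues,
		(unsigned int)info.max_rx_pktlen);

	return (info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_TCP_LRO) ?
		0 : -ENOTSUP;
}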
* [PATCH v9 9/9] net/zxdh: add zxdh dev configure ops 2024-11-01 6:21 ` [PATCH v9 0/9] net/zxdh: introduce net zxdh driver Junlong Wang ` (7 preceding siblings ...) 2024-11-01 6:21 ` [PATCH v9 8/9] net/zxdh: add zxdh dev infos get ops Junlong Wang @ 2024-11-01 6:21 ` Junlong Wang 2024-11-02 0:56 ` [PATCH v9 0/9] net/zxdh: introduce net zxdh driver Ferruh Yigit ` (3 subsequent siblings) 12 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-11-01 6:21 UTC (permalink / raw) To: dev; +Cc: wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 38556 bytes --] provided zxdh dev configure ops for queue check,reset,alloc resources,etc. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/meson.build | 1 + drivers/net/zxdh/zxdh_common.c | 135 ++++++++++ drivers/net/zxdh/zxdh_common.h | 12 + drivers/net/zxdh/zxdh_ethdev.c | 449 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev.h | 16 ++ drivers/net/zxdh/zxdh_pci.c | 98 +++++++ drivers/net/zxdh/zxdh_pci.h | 26 ++ drivers/net/zxdh/zxdh_queue.c | 123 +++++++++ drivers/net/zxdh/zxdh_queue.h | 172 +++++++++++++ 9 files changed, 1032 insertions(+) create mode 100644 drivers/net/zxdh/zxdh_queue.c diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build index a16db47f89..b96aa5a27e 100644 --- a/drivers/net/zxdh/meson.build +++ b/drivers/net/zxdh/meson.build @@ -18,4 +18,5 @@ sources = files( 'zxdh_pci.c', 'zxdh_msg.c', 'zxdh_common.c', + 'zxdh_queue.c', ) diff --git a/drivers/net/zxdh/zxdh_common.c b/drivers/net/zxdh/zxdh_common.c index 34749588d5..9535791c94 100644 --- a/drivers/net/zxdh/zxdh_common.c +++ b/drivers/net/zxdh/zxdh_common.c @@ -20,6 +20,7 @@ #define ZXDH_COMMON_TABLE_WRITE 1 #define ZXDH_COMMON_FIELD_PHYPORT 6 +#define ZXDH_COMMON_FIELD_DATACH 3 #define ZXDH_RSC_TBL_CONTENT_LEN_MAX (257 * 2) @@ -248,3 +249,137 @@ int32_t zxdh_panelid_get(struct rte_eth_dev *dev, uint8_t *panelid) int32_t ret = zxdh_get_res_panel_id(¶m, panelid); return ret; } + +uint32_t zxdh_read_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint64_t baseaddr = (uint64_t)(hw->bar_addr[bar]); + uint32_t val = *((volatile uint32_t *)(baseaddr + reg)); + return val; +} + +void zxdh_write_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg, uint32_t val) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint64_t baseaddr = (uint64_t)(hw->bar_addr[bar]); + *((volatile uint32_t *)(baseaddr + reg)) = val; +} + +static bool zxdh_try_lock(struct zxdh_hw *hw) +{ + uint32_t var = zxdh_read_comm_reg((uint64_t)hw->common_cfg, ZXDH_VF_LOCK_REG); + + /* check whether lock is used */ + if (!(var & ZXDH_VF_LOCK_ENABLE_MASK)) + return false; + + return true; +} + +int32_t zxdh_timedlock(struct zxdh_hw *hw, uint32_t us) +{ + uint16_t timeout = 0; + + while ((timeout++) < ZXDH_ACQUIRE_CHANNEL_NUM_MAX) { + rte_delay_us_block(us); + /* acquire hw lock */ + if (!zxdh_try_lock(hw)) { + PMD_INIT_LOG(ERR, "Acquiring hw lock got failed, timeout: %d", timeout); + continue; + } + break; + } + if (timeout >= ZXDH_ACQUIRE_CHANNEL_NUM_MAX) { + PMD_INIT_LOG(ERR, "Failed to acquire channel"); + return -1; + } + return 0; +} + +void zxdh_release_lock(struct zxdh_hw *hw) +{ + uint32_t var = zxdh_read_comm_reg((uint64_t)hw->common_cfg, ZXDH_VF_LOCK_REG); + + if (var & ZXDH_VF_LOCK_ENABLE_MASK) { + var &= ~ZXDH_VF_LOCK_ENABLE_MASK; + zxdh_write_comm_reg((uint64_t)hw->common_cfg, ZXDH_VF_LOCK_REG, var); + } +} + +uint32_t zxdh_read_comm_reg(uint64_t 
pci_comm_cfg_baseaddr, uint32_t reg) +{ + uint32_t val = *((volatile uint32_t *)(pci_comm_cfg_baseaddr + reg)); + return val; +} + +void zxdh_write_comm_reg(uint64_t pci_comm_cfg_baseaddr, uint32_t reg, uint32_t val) +{ + *((volatile uint32_t *)(pci_comm_cfg_baseaddr + reg)) = val; +} + +static int32_t zxdh_common_table_write(struct zxdh_hw *hw, uint8_t field, + void *buff, uint16_t buff_size) +{ + struct zxdh_pci_bar_msg desc; + struct zxdh_msg_recviver_mem msg_rsp; + int32_t ret = 0; + + if (!hw->msg_chan_init) { + PMD_DRV_LOG(ERR, "Bar messages channel not initialized"); + return -1; + } + if (buff_size != 0 && buff == NULL) { + PMD_DRV_LOG(ERR, "Buff is invalid"); + return -1; + } + + ret = zxdh_fill_common_msg(hw, &desc, ZXDH_COMMON_TABLE_WRITE, + field, buff, buff_size); + + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to fill common msg"); + return ret; + } + + ret = zxdh_send_command(hw, &desc, ZXDH_BAR_MODULE_TBL, &msg_rsp); + if (ret != 0) + goto free_msg_data; + + ret = zxdh_common_rsp_check(&msg_rsp, NULL, 0); + if (ret != 0) + goto free_rsp_data; + +free_rsp_data: + rte_free(msg_rsp.recv_buffer); +free_msg_data: + rte_free(desc.payload_addr); + return ret; +} + +int32_t zxdh_datach_set(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint16_t buff_size = (hw->queue_num + 1) * 2; + void *buff = rte_zmalloc(NULL, buff_size, 0); + + if (unlikely(buff == NULL)) { + PMD_DRV_LOG(ERR, "Failed to allocate buff"); + return -ENOMEM; + } + memset(buff, 0, buff_size); + uint16_t *pdata = (uint16_t *)buff; + *pdata++ = hw->queue_num; + uint16_t i; + + for (i = 0; i < hw->queue_num; i++) + *(pdata + i) = hw->channel_context[i].ph_chno; + + int32_t ret = zxdh_common_table_write(hw, ZXDH_COMMON_FIELD_DATACH, + (void *)buff, buff_size); + + if (ret != 0) + PMD_DRV_LOG(ERR, "Failed to setup data channel of common table"); + + rte_free(buff); + return ret; +} diff --git a/drivers/net/zxdh/zxdh_common.h b/drivers/net/zxdh/zxdh_common.h index ba29ca1dad..a26f0d8d6f 100644 --- a/drivers/net/zxdh/zxdh_common.h +++ b/drivers/net/zxdh/zxdh_common.h @@ -14,6 +14,10 @@ extern "C" { #endif +#define ZXDH_VF_LOCK_REG 0x90 +#define ZXDH_VF_LOCK_ENABLE_MASK 0x1 +#define ZXDH_ACQUIRE_CHANNEL_NUM_MAX 10 + struct zxdh_res_para { uint64_t virt_addr; uint16_t pcie_id; @@ -23,6 +27,14 @@ struct zxdh_res_para { int32_t zxdh_phyport_get(struct rte_eth_dev *dev, uint8_t *phyport); int32_t zxdh_panelid_get(struct rte_eth_dev *dev, uint8_t *pannelid); +uint32_t zxdh_read_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg); +void zxdh_write_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg, uint32_t val); +void zxdh_release_lock(struct zxdh_hw *hw); +int32_t zxdh_timedlock(struct zxdh_hw *hw, uint32_t us); +uint32_t zxdh_read_comm_reg(uint64_t pci_comm_cfg_baseaddr, uint32_t reg); +void zxdh_write_comm_reg(uint64_t pci_comm_cfg_baseaddr, uint32_t reg, uint32_t val); +int32_t zxdh_datach_set(struct rte_eth_dev *dev); + #ifdef __cplusplus } #endif diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index bbdbda3457..41d2fdfead 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -358,8 +358,457 @@ static int32_t zxdh_configure_intr(struct rte_eth_dev *dev) return ret; } +static int32_t zxdh_features_update(struct zxdh_hw *hw, + const struct rte_eth_rxmode *rxmode, + const struct rte_eth_txmode *txmode) +{ + uint64_t rx_offloads = rxmode->offloads; + uint64_t tx_offloads = txmode->offloads; + uint64_t req_features = 
hw->guest_features; + + if (rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM)) + req_features |= (1ULL << ZXDH_NET_F_GUEST_CSUM); + + if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) + req_features |= (1ULL << ZXDH_NET_F_GUEST_TSO4) | + (1ULL << ZXDH_NET_F_GUEST_TSO6); + + if (tx_offloads & (RTE_ETH_TX_OFFLOAD_UDP_CKSUM | RTE_ETH_TX_OFFLOAD_TCP_CKSUM)) + req_features |= (1ULL << ZXDH_NET_F_CSUM); + + if (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO) + req_features |= (1ULL << ZXDH_NET_F_HOST_TSO4) | + (1ULL << ZXDH_NET_F_HOST_TSO6); + + if (tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_TSO) + req_features |= (1ULL << ZXDH_NET_F_HOST_UFO); + + req_features = req_features & hw->host_features; + hw->guest_features = req_features; + + ZXDH_VTPCI_OPS(hw)->set_features(hw, req_features); + + if ((rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM)) && + !vtpci_with_feature(hw, ZXDH_NET_F_GUEST_CSUM)) { + PMD_DRV_LOG(ERR, "rx checksum not available on this host"); + return -ENOTSUP; + } + + if ((rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) && + (!vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO4) || + !vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO6))) { + PMD_DRV_LOG(ERR, "Large Receive Offload not available on this host"); + return -ENOTSUP; + } + return 0; +} + +static bool rx_offload_enabled(struct zxdh_hw *hw) +{ + return vtpci_with_feature(hw, ZXDH_NET_F_GUEST_CSUM) || + vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO4) || + vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO6); +} + +static bool tx_offload_enabled(struct zxdh_hw *hw) +{ + return vtpci_with_feature(hw, ZXDH_NET_F_CSUM) || + vtpci_with_feature(hw, ZXDH_NET_F_HOST_TSO4) || + vtpci_with_feature(hw, ZXDH_NET_F_HOST_TSO6) || + vtpci_with_feature(hw, ZXDH_NET_F_HOST_UFO); +} + +static void zxdh_dev_free_mbufs(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint16_t nr_vq = hw->queue_num; + uint32_t i = 0; + + const char *type = NULL; + struct zxdh_virtqueue *vq = NULL; + struct rte_mbuf *buf = NULL; + int32_t queue_type = 0; + + if (hw->vqs == NULL) + return; + + for (i = 0; i < nr_vq; i++) { + vq = hw->vqs[i]; + if (!vq) + continue; + + queue_type = zxdh_get_queue_type(i); + if (queue_type == ZXDH_VTNET_RQ) + type = "rxq"; + else if (queue_type == ZXDH_VTNET_TQ) + type = "txq"; + else + continue; + PMD_INIT_LOG(DEBUG, "Before freeing %s[%d] used and unused buf", type, i); + + while ((buf = zxdh_virtqueue_detach_unused(vq)) != NULL) + rte_pktmbuf_free(buf); + } +} + +static int32_t zxdh_get_available_channel(struct rte_eth_dev *dev, uint8_t queue_type) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint16_t base = (queue_type == ZXDH_VTNET_RQ) ?
0 : 1; + uint16_t i = 0; + uint16_t j = 0; + uint16_t done = 0; + int32_t ret = 0; + + ret = zxdh_timedlock(hw, 1000); + if (ret) { + PMD_INIT_LOG(ERR, "Failed to acquire hw lock, timed out"); + return -1; + } + + /* Iterate COI table and find free channel */ + for (i = ZXDH_QUEUES_BASE / 32; i < ZXDH_TOTAL_QUEUES_NUM / 32; i++) { + uint32_t addr = ZXDH_QUERES_SHARE_BASE + (i * sizeof(uint32_t)); + uint32_t var = zxdh_read_bar_reg(dev, ZXDH_BAR0_INDEX, addr); + + for (j = base; j < 32; j += 2) { + /* Got the available channel & update COI table */ + if ((var & (1 << j)) == 0) { + var |= (1 << j); + zxdh_write_bar_reg(dev, ZXDH_BAR0_INDEX, addr, var); + done = 1; + break; + } + } + if (done) + break; + } + zxdh_release_lock(hw); + /* check for no channel condition */ + if (done != 1) { + PMD_INIT_LOG(ERR, "No available queues"); + return -1; + } + /* return available channel ID */ + return (i * 32) + j; +} + +static int32_t zxdh_acquire_channel(struct rte_eth_dev *dev, uint16_t lch) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + if (hw->channel_context[lch].valid == 1) { + PMD_INIT_LOG(DEBUG, "Logical channel:%u already acquired physical channel:%u", + lch, hw->channel_context[lch].ph_chno); + return hw->channel_context[lch].ph_chno; + } + int32_t pch = zxdh_get_available_channel(dev, zxdh_get_queue_type(lch)); + + if (pch < 0) { + PMD_INIT_LOG(ERR, "Failed to acquire channel"); + return -1; + } + hw->channel_context[lch].ph_chno = (uint16_t)pch; + hw->channel_context[lch].valid = 1; + PMD_INIT_LOG(DEBUG, "Acquire channel success lch:%u --> pch:%d", lch, pch); + return 0; +} + +static void zxdh_init_vring(struct zxdh_virtqueue *vq) +{ + int32_t size = vq->vq_nentries; + uint8_t *ring_mem = vq->vq_ring_virt_mem; + + memset(ring_mem, 0, vq->vq_ring_size); + + vq->vq_used_cons_idx = 0; + vq->vq_desc_head_idx = 0; + vq->vq_avail_idx = 0; + vq->vq_desc_tail_idx = (uint16_t)(vq->vq_nentries - 1); + vq->vq_free_cnt = vq->vq_nentries; + memset(vq->vq_descx, 0, sizeof(struct zxdh_vq_desc_extra) * vq->vq_nentries); + vring_init_packed(&vq->vq_packed.ring, ring_mem, ZXDH_PCI_VRING_ALIGN, size); + vring_desc_init_packed(vq, size); + virtqueue_disable_intr(vq); +} + +static int32_t zxdh_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_logic_qidx) +{ + char vq_name[ZXDH_VIRTQUEUE_MAX_NAME_SZ] = {0}; + char vq_hdr_name[ZXDH_VIRTQUEUE_MAX_NAME_SZ] = {0}; + const struct rte_memzone *mz = NULL; + const struct rte_memzone *hdr_mz = NULL; + uint32_t size = 0; + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_virtnet_rx *rxvq = NULL; + struct zxdh_virtnet_tx *txvq = NULL; + struct zxdh_virtqueue *vq = NULL; + size_t sz_hdr_mz = 0; + void *sw_ring = NULL; + int32_t queue_type = zxdh_get_queue_type(vtpci_logic_qidx); + int32_t numa_node = dev->device->numa_node; + uint16_t vtpci_phy_qidx = 0; + uint32_t vq_size = 0; + int32_t ret = 0; + + if (hw->channel_context[vtpci_logic_qidx].valid == 0) { + PMD_INIT_LOG(ERR, "lch %d is invalid", vtpci_logic_qidx); + return -EINVAL; + } + vtpci_phy_qidx = hw->channel_context[vtpci_logic_qidx].ph_chno; + + PMD_INIT_LOG(DEBUG, "vtpci_logic_qidx:%d setting up physical queue: %u on NUMA node %d", + vtpci_logic_qidx, vtpci_phy_qidx, numa_node); + + vq_size = ZXDH_QUEUE_DEPTH; + + if (ZXDH_VTPCI_OPS(hw)->set_queue_num != NULL) + ZXDH_VTPCI_OPS(hw)->set_queue_num(hw, vtpci_phy_qidx, vq_size); + + snprintf(vq_name, sizeof(vq_name), "port%d_vq%d", dev->data->port_id, vtpci_phy_qidx); + + size = RTE_ALIGN_CEIL(sizeof(*vq) + vq_size * sizeof(struct zxdh_vq_desc_extra),
RTE_CACHE_LINE_SIZE); + if (queue_type == ZXDH_VTNET_TQ) { + /* + * For each xmit packet, allocate a zxdh_net_hdr + * and indirect ring elements + */ + sz_hdr_mz = vq_size * sizeof(struct zxdh_tx_region); + } + + vq = rte_zmalloc_socket(vq_name, size, RTE_CACHE_LINE_SIZE, numa_node); + if (vq == NULL) { + PMD_INIT_LOG(ERR, "cannot allocate vq"); + return -ENOMEM; + } + hw->vqs[vtpci_logic_qidx] = vq; + + vq->hw = hw; + vq->vq_queue_index = vtpci_phy_qidx; + vq->vq_nentries = vq_size; + + vq->vq_packed.used_wrap_counter = 1; + vq->vq_packed.cached_flags = ZXDH_VRING_PACKED_DESC_F_AVAIL; + vq->vq_packed.event_flags_shadow = 0; + if (queue_type == ZXDH_VTNET_RQ) + vq->vq_packed.cached_flags |= ZXDH_VRING_DESC_F_WRITE; + + /* + * Reserve a memzone for vring elements + */ + size = vring_size(hw, vq_size, ZXDH_PCI_VRING_ALIGN); + vq->vq_ring_size = RTE_ALIGN_CEIL(size, ZXDH_PCI_VRING_ALIGN); + PMD_INIT_LOG(DEBUG, "vring_size: %d, rounded_vring_size: %d", size, vq->vq_ring_size); + + mz = rte_memzone_reserve_aligned(vq_name, vq->vq_ring_size, + numa_node, RTE_MEMZONE_IOVA_CONTIG, + ZXDH_PCI_VRING_ALIGN); + if (mz == NULL) { + if (rte_errno == EEXIST) + mz = rte_memzone_lookup(vq_name); + if (mz == NULL) { + ret = -ENOMEM; + goto fail_q_alloc; + } + } + + memset(mz->addr, 0, mz->len); + + vq->vq_ring_mem = mz->iova; + vq->vq_ring_virt_mem = mz->addr; + + zxdh_init_vring(vq); + + if (sz_hdr_mz) { + snprintf(vq_hdr_name, sizeof(vq_hdr_name), "port%d_vq%d_hdr", + dev->data->port_id, vtpci_phy_qidx); + hdr_mz = rte_memzone_reserve_aligned(vq_hdr_name, sz_hdr_mz, + numa_node, RTE_MEMZONE_IOVA_CONTIG, + RTE_CACHE_LINE_SIZE); + if (hdr_mz == NULL) { + if (rte_errno == EEXIST) + hdr_mz = rte_memzone_lookup(vq_hdr_name); + if (hdr_mz == NULL) { + ret = -ENOMEM; + goto fail_q_alloc; + } + } + } + + if (queue_type == ZXDH_VTNET_RQ) { + size_t sz_sw = (ZXDH_MBUF_BURST_SZ + vq_size) * sizeof(vq->sw_ring[0]); + + sw_ring = rte_zmalloc_socket("sw_ring", sz_sw, RTE_CACHE_LINE_SIZE, numa_node); + if (!sw_ring) { + PMD_INIT_LOG(ERR, "cannot allocate RX soft ring"); + ret = -ENOMEM; + goto fail_q_alloc; + } + + vq->sw_ring = sw_ring; + rxvq = &vq->rxq; + rxvq->vq = vq; + rxvq->port_id = dev->data->port_id; + rxvq->mz = mz; + } else { /* queue_type == ZXDH_VTNET_TQ */ + txvq = &vq->txq; + txvq->vq = vq; + txvq->port_id = dev->data->port_id; + txvq->mz = mz; + txvq->zxdh_net_hdr_mz = hdr_mz; + txvq->zxdh_net_hdr_mem = hdr_mz->iova; + } + + vq->offset = offsetof(struct rte_mbuf, buf_iova); + if (queue_type == ZXDH_VTNET_TQ) { + struct zxdh_tx_region *txr = hdr_mz->addr; + uint32_t i; + + memset(txr, 0, vq_size * sizeof(*txr)); + for (i = 0; i < vq_size; i++) { + /* first indirect descriptor is always the tx header */ + struct zxdh_vring_packed_desc *start_dp = txr[i].tx_packed_indir; + + vring_desc_init_indirect_packed(start_dp, RTE_DIM(txr[i].tx_packed_indir)); + start_dp->addr = txvq->zxdh_net_hdr_mem + i * sizeof(*txr) + + offsetof(struct zxdh_tx_region, tx_hdr); + /* length will be updated to actual pi hdr size when xmit pkt */ + start_dp->len = 0; + } + } + if (ZXDH_VTPCI_OPS(hw)->setup_queue(hw, vq) < 0) { + PMD_INIT_LOG(ERR, "setup_queue failed"); + ret = -EINVAL; + goto fail_q_alloc; + } + return 0; +fail_q_alloc: + rte_free(sw_ring); + rte_memzone_free(hdr_mz); + rte_memzone_free(mz); + rte_free(vq); + hw->vqs[vtpci_logic_qidx] = NULL; + return ret; +} + +static int32_t zxdh_alloc_queues(struct rte_eth_dev *dev, uint16_t nr_vq) +{ + uint16_t lch; + struct zxdh_hw *hw = dev->data->dev_private; + + hw->vqs = rte_zmalloc(NULL, sizeof(struct zxdh_virtqueue *) * nr_vq, 0);
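+	/* hw->vqs holds one pointer per logical queue: even indexes are Rx
+	 * queues, odd indexes Tx queues (see zxdh_get_queue_type()); each
+	 * slot is filled in by zxdh_init_queue() below.
+	 */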
+ if (!hw->vqs) { + PMD_INIT_LOG(ERR, "Failed to allocate vqs"); + return -ENOMEM; + } + for (lch = 0; lch < nr_vq; lch++) { + if (zxdh_acquire_channel(dev, lch) < 0) { + PMD_INIT_LOG(ERR, "Failed to acquire the channels"); + zxdh_free_queues(dev); + return -1; + } + if (zxdh_init_queue(dev, lch) < 0) { + PMD_INIT_LOG(ERR, "Failed to allocate virtio queue"); + zxdh_free_queues(dev); + return -1; + } + } + return 0; +} + +static int32_t zxdh_dev_configure(struct rte_eth_dev *dev) +{ + const struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode; + const struct rte_eth_txmode *txmode = &dev->data->dev_conf.txmode; + struct zxdh_hw *hw = dev->data->dev_private; + uint32_t nr_vq = 0; + int32_t ret = 0; + + if (dev->data->nb_rx_queues != dev->data->nb_tx_queues) { + PMD_INIT_LOG(ERR, "nb_rx_queues=%d and nb_tx_queues=%d not equal!", + dev->data->nb_rx_queues, dev->data->nb_tx_queues); + return -EINVAL; + } + if ((dev->data->nb_rx_queues + dev->data->nb_tx_queues) >= ZXDH_QUEUES_NUM_MAX) { + PMD_INIT_LOG(ERR, "nb_rx_queues=%d + nb_tx_queues=%d must be < %d!", + dev->data->nb_rx_queues, dev->data->nb_tx_queues, + ZXDH_QUEUES_NUM_MAX); + return -EINVAL; + } + if (rxmode->mq_mode != RTE_ETH_MQ_RX_RSS && rxmode->mq_mode != RTE_ETH_MQ_RX_NONE) { + PMD_DRV_LOG(ERR, "Unsupported Rx multi queue mode %d", rxmode->mq_mode); + return -EINVAL; + } + + if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) { + PMD_DRV_LOG(ERR, "Unsupported Tx multi queue mode %d", txmode->mq_mode); + return -EINVAL; + } + + ret = zxdh_features_update(hw, rxmode, txmode); + if (ret < 0) + return ret; + + /* check if lsc interrupt feature is enabled */ + if (dev->data->dev_conf.intr_conf.lsc) { + if (!(dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)) { + PMD_DRV_LOG(ERR, "link status not supported by host"); + return -ENOTSUP; + } + } + + hw->has_tx_offload = tx_offload_enabled(hw); + hw->has_rx_offload = rx_offload_enabled(hw); + + nr_vq = dev->data->nb_rx_queues + dev->data->nb_tx_queues; + if (nr_vq == hw->queue_num) + return 0; + + PMD_DRV_LOG(DEBUG, "queue number changed, reset needed"); + /* Reset the device although not necessary at startup */ + zxdh_pci_reset(hw); + + /* Tell the host we've noticed this device. */ + zxdh_pci_set_status(hw, ZXDH_CONFIG_STATUS_ACK); + + /* Tell the host we know how to drive the device. 
*/ + zxdh_pci_set_status(hw, ZXDH_CONFIG_STATUS_DRIVER); + /* The queues need to be released when reconfiguring */ + if (hw->vqs != NULL) { + zxdh_dev_free_mbufs(dev); + zxdh_free_queues(dev); + } + + hw->queue_num = nr_vq; + ret = zxdh_alloc_queues(dev, nr_vq); + if (ret < 0) + return ret; + + ret = zxdh_datach_set(dev); + if (ret != 0) { + zxdh_free_queues(dev); + return ret; + } + + if (zxdh_configure_intr(dev) < 0) { + PMD_INIT_LOG(ERR, "Failed to configure interrupt"); + zxdh_free_queues(dev); + return -1; + } + + zxdh_pci_reinit_complete(hw); + + return ret; +} + /* dev_ops for zxdh, bare necessities for basic operation */ static const struct eth_dev_ops zxdh_eth_dev_ops = { + .dev_configure = zxdh_dev_configure, .dev_infos_get = zxdh_dev_infos_get, }; diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index 89c5a9bb5f..28e78b0086 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -31,6 +31,13 @@ extern "C" { #define ZXDH_TX_QUEUES_MAX 128U #define ZXDH_MIN_RX_BUFSIZE 64 #define ZXDH_MAX_RX_PKTLEN 14000U +#define ZXDH_QUEUE_DEPTH 1024 +#define ZXDH_QUEUES_BASE 0 +#define ZXDH_TOTAL_QUEUES_NUM 4096 +#define ZXDH_QUEUES_NUM_MAX 256 +#define ZXDH_QUERES_SHARE_BASE (0x5000) + +#define ZXDH_MBUF_BURST_SZ 64 union zxdh_virport_num { uint16_t vport; @@ -43,6 +50,11 @@ union zxdh_virport_num { }; }; +struct zxdh_chnl_context { + uint16_t valid; + uint16_t ph_chno; +}; + struct zxdh_hw { struct rte_eth_dev *eth_dev; struct zxdh_pci_common_cfg *common_cfg; @@ -50,6 +62,7 @@ struct zxdh_hw { struct rte_intr_handle *risc_intr; struct rte_intr_handle *dtb_intr; struct zxdh_virtqueue **vqs; + struct zxdh_chnl_context channel_context[ZXDH_QUEUES_NUM_MAX]; union zxdh_virport_num vport; uint64_t bar_addr[ZXDH_NUM_BARS]; @@ -63,6 +76,7 @@ struct zxdh_hw { uint16_t device_id; uint16_t port_id; uint16_t vfid; + uint16_t queue_num; uint8_t *isr; uint8_t weak_barriers; @@ -75,6 +89,8 @@ struct zxdh_hw { uint8_t msg_chan_init; uint8_t phyport; uint8_t panel_id; + uint8_t has_tx_offload; + uint8_t has_rx_offload; }; uint16_t zxdh_vport_to_vfid(union zxdh_virport_num v); diff --git a/drivers/net/zxdh/zxdh_pci.c b/drivers/net/zxdh/zxdh_pci.c index 65164c86b7..165d9f49a3 100644 --- a/drivers/net/zxdh/zxdh_pci.c +++ b/drivers/net/zxdh/zxdh_pci.c @@ -110,6 +110,87 @@ static uint8_t zxdh_get_isr(struct zxdh_hw *hw) return rte_read8(hw->isr); } +static uint16_t zxdh_get_queue_num(struct zxdh_hw *hw, uint16_t queue_id) +{ + rte_write16(queue_id, &hw->common_cfg->queue_select); + return rte_read16(&hw->common_cfg->queue_size); +} + +static void zxdh_set_queue_num(struct zxdh_hw *hw, uint16_t queue_id, uint16_t vq_size) +{ + rte_write16(queue_id, &hw->common_cfg->queue_select); + rte_write16(vq_size, &hw->common_cfg->queue_size); +} + +static int32_t check_vq_phys_addr_ok(struct zxdh_virtqueue *vq) +{ + if ((vq->vq_ring_mem + vq->vq_ring_size - 1) >> (ZXDH_PCI_QUEUE_ADDR_SHIFT + 32)) { + PMD_INIT_LOG(ERR, "vring address shouldn't be above 16TB!"); + return 0; + } + return 1; +} + +static inline void io_write64_twopart(uint64_t val, uint32_t *lo, uint32_t *hi) +{ + rte_write32(val & ((1ULL << 32) - 1), lo); + rte_write32(val >> 32, hi); +} + +static int32_t zxdh_setup_queue(struct zxdh_hw *hw, struct zxdh_virtqueue *vq) +{ + uint64_t desc_addr = 0; + uint64_t avail_addr = 0; + uint64_t used_addr = 0; + uint16_t notify_off = 0; + + if (!check_vq_phys_addr_ok(vq)) + return -1; + + desc_addr = vq->vq_ring_mem; + avail_addr = desc_addr + vq->vq_nentries * sizeof(struct zxdh_vring_desc); + if (vtpci_packed_queue(vq->hw)) {
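+		/* Packed ring layout (see vring_init_packed()): the descriptor
+		 * table is followed by the driver and device event suppression
+		 * structures, so avail_addr is reused for the driver area and
+		 * used_addr for the aligned device area.
+		 */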
used_addr = RTE_ALIGN_CEIL((avail_addr + + sizeof(struct zxdh_vring_packed_desc_event)), + ZXDH_PCI_VRING_ALIGN); + } else { + used_addr = RTE_ALIGN_CEIL(avail_addr + offsetof(struct zxdh_vring_avail, + ring[vq->vq_nentries]), ZXDH_PCI_VRING_ALIGN); + } + + rte_write16(vq->vq_queue_index, &hw->common_cfg->queue_select); + + io_write64_twopart(desc_addr, &hw->common_cfg->queue_desc_lo, + &hw->common_cfg->queue_desc_hi); + io_write64_twopart(avail_addr, &hw->common_cfg->queue_avail_lo, + &hw->common_cfg->queue_avail_hi); + io_write64_twopart(used_addr, &hw->common_cfg->queue_used_lo, + &hw->common_cfg->queue_used_hi); + + notify_off = rte_read16(&hw->common_cfg->queue_notify_off); /* default 0 */ + notify_off = 0; + vq->notify_addr = (void *)((uint8_t *)hw->notify_base + + notify_off * hw->notify_off_multiplier); + + rte_write16(1, &hw->common_cfg->queue_enable); + + return 0; +} + +static void zxdh_del_queue(struct zxdh_hw *hw, struct zxdh_virtqueue *vq) +{ + rte_write16(vq->vq_queue_index, &hw->common_cfg->queue_select); + + io_write64_twopart(0, &hw->common_cfg->queue_desc_lo, + &hw->common_cfg->queue_desc_hi); + io_write64_twopart(0, &hw->common_cfg->queue_avail_lo, + &hw->common_cfg->queue_avail_hi); + io_write64_twopart(0, &hw->common_cfg->queue_used_lo, + &hw->common_cfg->queue_used_hi); + + rte_write16(0, &hw->common_cfg->queue_enable); +} + const struct zxdh_pci_ops zxdh_dev_pci_ops = { .read_dev_cfg = zxdh_read_dev_config, .write_dev_cfg = zxdh_write_dev_config, @@ -120,6 +201,10 @@ const struct zxdh_pci_ops zxdh_dev_pci_ops = { .set_queue_irq = zxdh_set_queue_irq, .set_config_irq = zxdh_set_config_irq, .get_isr = zxdh_get_isr, + .get_queue_num = zxdh_get_queue_num, + .set_queue_num = zxdh_set_queue_num, + .setup_queue = zxdh_setup_queue, + .del_queue = zxdh_del_queue, }; uint8_t zxdh_pci_isr(struct zxdh_hw *hw) @@ -146,6 +231,19 @@ void zxdh_pci_reset(struct zxdh_hw *hw) PMD_INIT_LOG(INFO, "port %u device reset %u ms done", hw->port_id, retry); } +void zxdh_pci_reinit_complete(struct zxdh_hw *hw) +{ + zxdh_pci_set_status(hw, ZXDH_CONFIG_STATUS_DRIVER_OK); +} + +void zxdh_pci_set_status(struct zxdh_hw *hw, uint8_t status) +{ + if (status != ZXDH_CONFIG_STATUS_RESET) + status |= ZXDH_VTPCI_OPS(hw)->get_status(hw); + + ZXDH_VTPCI_OPS(hw)->set_status(hw, status); +} + static void *get_cfg_addr(struct rte_pci_device *dev, struct zxdh_pci_cap *cap) { uint8_t bar = cap->bar; diff --git a/drivers/net/zxdh/zxdh_pci.h b/drivers/net/zxdh/zxdh_pci.h index f362658ba6..e86667357b 100644 --- a/drivers/net/zxdh/zxdh_pci.h +++ b/drivers/net/zxdh/zxdh_pci.h @@ -29,7 +29,20 @@ enum zxdh_msix_status { /* Vector value used to disable MSI for queue. */ #define ZXDH_MSI_NO_VECTOR 0x7F +#define ZXDH_PCI_VRING_ALIGN 4096 + +#define ZXDH_NET_F_CSUM 0 /* Host handles pkts w/ partial csum */ +#define ZXDH_NET_F_GUEST_CSUM 1 /* Guest handles pkts w/ partial csum */ +#define ZXDH_NET_F_MTU 3 /* Initial MTU advice. */ #define ZXDH_NET_F_MAC 5 /* Host has given MAC address. */ +#define ZXDH_NET_F_GUEST_TSO4 7 /* Guest can handle TSOv4 in. */ +#define ZXDH_NET_F_GUEST_TSO6 8 /* Guest can handle TSOv6 in. */ +#define ZXDH_NET_F_GUEST_ECN 9 /* Guest can handle TSO[6] w/ ECN in. */ +#define ZXDH_NET_F_GUEST_UFO 10 /* Guest can handle UFO in. */ + +#define ZXDH_NET_F_HOST_UFO 14 /* Host can handle UFO in. */ +#define ZXDH_NET_F_HOST_TSO4 11 /* Host can handle TSOv4 in. */ +#define ZXDH_NET_F_HOST_TSO6 12 /* Host can handle TSOv6 in. */ #define ZXDH_NET_F_MRG_RXBUF 15 /* Host can merge receive buffers. 
*/ #define ZXDH_NET_F_STATUS 16 /* zxdh_net_config.status available */ #define ZXDH_NET_F_MQ 22 /* Device supports Receive Flow Steering */ @@ -53,6 +66,7 @@ enum zxdh_msix_status { #define ZXDH_CONFIG_STATUS_FEATURES_OK 0x08 #define ZXDH_CONFIG_STATUS_DEV_NEED_RESET 0x40 #define ZXDH_CONFIG_STATUS_FAILED 0x80 +#define ZXDH_PCI_QUEUE_ADDR_SHIFT 12 struct zxdh_net_config { /* The config defining mac address (if ZXDH_NET_F_MAC) */ @@ -108,6 +122,11 @@ static inline int32_t vtpci_with_feature(struct zxdh_hw *hw, uint64_t bit) return (hw->guest_features & (1ULL << bit)) != 0; } +static inline int32_t vtpci_packed_queue(struct zxdh_hw *hw) +{ + return vtpci_with_feature(hw, ZXDH_F_RING_PACKED); +} + struct zxdh_pci_ops { void (*read_dev_cfg)(struct zxdh_hw *hw, size_t offset, void *dst, int32_t len); void (*write_dev_cfg)(struct zxdh_hw *hw, size_t offset, const void *src, int32_t len); @@ -120,6 +139,11 @@ struct zxdh_pci_ops { uint16_t (*set_queue_irq)(struct zxdh_hw *hw, struct zxdh_virtqueue *vq, uint16_t vec); uint16_t (*set_config_irq)(struct zxdh_hw *hw, uint16_t vec); uint8_t (*get_isr)(struct zxdh_hw *hw); + uint16_t (*get_queue_num)(struct zxdh_hw *hw, uint16_t queue_id); + void (*set_queue_num)(struct zxdh_hw *hw, uint16_t queue_id, uint16_t vq_size); + + int32_t (*setup_queue)(struct zxdh_hw *hw, struct zxdh_virtqueue *vq); + void (*del_queue)(struct zxdh_hw *hw, struct zxdh_virtqueue *vq); }; struct zxdh_hw_internal { @@ -141,6 +165,8 @@ int32_t zxdh_get_pci_dev_config(struct zxdh_hw *hw); uint16_t zxdh_pci_get_features(struct zxdh_hw *hw); enum zxdh_msix_status zxdh_pci_msix_detect(struct rte_pci_device *dev); uint8_t zxdh_pci_isr(struct zxdh_hw *hw); +void zxdh_pci_reinit_complete(struct zxdh_hw *hw); +void zxdh_pci_set_status(struct zxdh_hw *hw, uint8_t status); #ifdef __cplusplus } diff --git a/drivers/net/zxdh/zxdh_queue.c b/drivers/net/zxdh/zxdh_queue.c new file mode 100644 index 0000000000..2978a9f272 --- /dev/null +++ b/drivers/net/zxdh/zxdh_queue.c @@ -0,0 +1,123 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#include <stdint.h> +#include <rte_malloc.h> +#include <rte_mbuf.h> + +#include "zxdh_queue.h" +#include "zxdh_logs.h" +#include "zxdh_pci.h" +#include "zxdh_common.h" +#include "zxdh_msg.h" + +struct rte_mbuf *zxdh_virtqueue_detach_unused(struct zxdh_virtqueue *vq) +{ + struct rte_mbuf *cookie = NULL; + int32_t idx = 0; + + if (vq == NULL) + return NULL; + + for (idx = 0; idx < vq->vq_nentries; idx++) { + cookie = vq->vq_descx[idx].cookie; + if (cookie != NULL) { + vq->vq_descx[idx].cookie = NULL; + return cookie; + } + } + return NULL; +} + +static int32_t zxdh_release_channel(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint16_t nr_vq = hw->queue_num; + uint32_t var = 0; + uint32_t addr = 0; + uint32_t widx = 0; + uint32_t bidx = 0; + uint16_t pch = 0; + uint16_t lch = 0; + int32_t ret = 0; + + ret = zxdh_timedlock(hw, 1000); + if (ret) { + PMD_INIT_LOG(ERR, "Failed to acquire hw lock, timed out"); + return -1; + } + + for (lch = 0; lch < nr_vq; lch++) { + if (hw->channel_context[lch].valid == 0) { + PMD_INIT_LOG(DEBUG, "Logical channel %d does not need to be released", lch); + continue; + } + + pch = hw->channel_context[lch].ph_chno; + widx = pch / 32; + bidx = pch % 32; + + addr = ZXDH_QUERES_SHARE_BASE + (widx * sizeof(uint32_t)); + var = zxdh_read_bar_reg(dev, ZXDH_BAR0_INDEX, addr); + var &= ~(1 << bidx); + zxdh_write_bar_reg(dev, ZXDH_BAR0_INDEX, addr, var); + +
hw->channel_context[lch].valid = 0; + hw->channel_context[lch].ph_chno = 0; + } + + zxdh_release_lock(hw); + + return 0; +} + +int32_t zxdh_get_queue_type(uint16_t vtpci_queue_idx) +{ + if (vtpci_queue_idx % 2 == 0) + return ZXDH_VTNET_RQ; + else + return ZXDH_VTNET_TQ; +} + +int32_t zxdh_free_queues(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint16_t nr_vq = hw->queue_num; + struct zxdh_virtqueue *vq = NULL; + int32_t queue_type = 0; + uint16_t i = 0; + + if (hw->vqs == NULL) + return 0; + + if (zxdh_release_channel(dev) < 0) { + PMD_INIT_LOG(ERR, "Failed to clear COI table"); + return -1; + } + + for (i = 0; i < nr_vq; i++) { + vq = hw->vqs[i]; + if (vq == NULL) + continue; + + ZXDH_VTPCI_OPS(hw)->del_queue(hw, vq); + queue_type = zxdh_get_queue_type(i); + if (queue_type == ZXDH_VTNET_RQ) { + rte_free(vq->sw_ring); + rte_memzone_free(vq->rxq.mz); + } else if (queue_type == ZXDH_VTNET_TQ) { + rte_memzone_free(vq->txq.mz); + rte_memzone_free(vq->txq.zxdh_net_hdr_mz); + } + + rte_free(vq); + hw->vqs[i] = NULL; + PMD_INIT_LOG(DEBUG, "Released queue %d successfully", i); + } + + rte_free(hw->vqs); + hw->vqs = NULL; + + return 0; +} diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h index 0b6f48adf9..683c4e7980 100644 --- a/drivers/net/zxdh/zxdh_queue.h +++ b/drivers/net/zxdh/zxdh_queue.h @@ -11,11 +11,30 @@ #include "zxdh_ethdev.h" #include "zxdh_rxtx.h" +#include "zxdh_pci.h" #ifdef __cplusplus extern "C" { #endif +enum { ZXDH_VTNET_RQ = 0, ZXDH_VTNET_TQ = 1 }; + +#define ZXDH_VIRTQUEUE_MAX_NAME_SZ 32 +#define ZXDH_RQ_QUEUE_IDX 0 +#define ZXDH_TQ_QUEUE_IDX 1 +#define ZXDH_MAX_TX_INDIRECT 8 + +/* This marks a buffer as write-only (otherwise read-only). */ +#define ZXDH_VRING_DESC_F_WRITE 2 +/* This flag means the descriptor was made available by the driver */ +#define ZXDH_VRING_PACKED_DESC_F_AVAIL (1 << (7)) + +#define ZXDH_RING_EVENT_FLAGS_ENABLE 0x0 +#define ZXDH_RING_EVENT_FLAGS_DISABLE 0x1 +#define ZXDH_RING_EVENT_FLAGS_DESC 0x2 + +#define ZXDH_VQ_RING_DESC_CHAIN_END 32768 + /** ring descriptors: 16 bytes. * These can chain together via "next". **/ @@ -26,6 +45,19 @@ struct zxdh_vring_desc { uint16_t next; /* We chain unused descriptors via this. */ } __rte_packed; +struct zxdh_vring_used_elem { + /* Index of start of used descriptor chain. */ + uint32_t id; + /* Total length of the descriptor chain which was written to. 
*/ + uint32_t len; +}; + +struct zxdh_vring_used { + uint16_t flags; + uint16_t idx; + struct zxdh_vring_used_elem ring[]; +} __rte_packed; + struct zxdh_vring_avail { uint16_t flags; uint16_t idx; @@ -102,6 +134,146 @@ struct zxdh_virtqueue { struct zxdh_vq_desc_extra vq_descx[]; } __rte_packed; +struct zxdh_type_hdr { + uint8_t port; /* bit[0:1] 00-np 01-DRS 10-DTP */ + uint8_t pd_len; + uint8_t num_buffers; + uint8_t reserved; +} __rte_packed; /* 4B */ + +struct zxdh_pi_hdr { + uint8_t pi_len; + uint8_t pkt_type; + uint16_t vlan_id; + uint32_t ipv6_extend; + uint16_t l3_offset; + uint16_t l4_offset; + uint8_t phy_port; + uint8_t pkt_flag_hi8; + uint16_t pkt_flag_lw16; + union { + struct { + uint64_t sa_idx; + uint8_t reserved_8[8]; + } dl; + struct { + uint32_t lro_flag; + uint32_t lro_mss; + uint16_t err_code; + uint16_t pm_id; + uint16_t pkt_len; + uint8_t reserved[2]; + } ul; + }; +} __rte_packed; /* 32B */ + +struct zxdh_pd_hdr_dl { + uint32_t ol_flag; + uint8_t tag_idx; + uint8_t tag_data; + uint16_t dst_vfid; + uint32_t svlan_insert; + uint32_t cvlan_insert; +} __rte_packed; /* 16B */ + +struct zxdh_net_hdr_dl { + struct zxdh_type_hdr type_hdr; /* 4B */ + struct zxdh_pi_hdr pi_hdr; /* 32B */ + struct zxdh_pd_hdr_dl pd_hdr; /* 16B */ +} __rte_packed; + +struct zxdh_pd_hdr_ul { + uint32_t pkt_flag; + uint32_t rss_hash; + uint32_t fd; + uint32_t striped_vlan_tci; + /* ovs */ + uint8_t tag_idx; + uint8_t tag_data; + uint16_t src_vfid; + /* */ + uint16_t pkt_type_out; + uint16_t pkt_type_in; +} __rte_packed; /* 24B */ + +struct zxdh_net_hdr_ul { + struct zxdh_type_hdr type_hdr; /* 4B */ + struct zxdh_pi_hdr pi_hdr; /* 32B */ + struct zxdh_pd_hdr_ul pd_hdr; /* 24B */ +} __rte_packed; /* 60B */ + +struct zxdh_tx_region { + struct zxdh_net_hdr_dl tx_hdr; + union { + struct zxdh_vring_desc tx_indir[ZXDH_MAX_TX_INDIRECT]; + struct zxdh_vring_packed_desc tx_packed_indir[ZXDH_MAX_TX_INDIRECT]; + } __rte_packed; +}; + +static inline size_t vring_size(struct zxdh_hw *hw, uint32_t num, unsigned long align) +{ + size_t size; + + if (vtpci_packed_queue(hw)) { + size = num * sizeof(struct zxdh_vring_packed_desc); + size += sizeof(struct zxdh_vring_packed_desc_event); + size = RTE_ALIGN_CEIL(size, align); + size += sizeof(struct zxdh_vring_packed_desc_event); + return size; + } + + size = num * sizeof(struct zxdh_vring_desc); + size += sizeof(struct zxdh_vring_avail) + (num * sizeof(uint16_t)); + size = RTE_ALIGN_CEIL(size, align); + size += sizeof(struct zxdh_vring_used) + (num * sizeof(struct zxdh_vring_used_elem)); + return size; +} + +static inline void vring_init_packed(struct zxdh_vring_packed *vr, uint8_t *p, + unsigned long align, uint32_t num) +{ + vr->num = num; + vr->desc = (struct zxdh_vring_packed_desc *)p; + vr->driver = (struct zxdh_vring_packed_desc_event *)(p + + vr->num * sizeof(struct zxdh_vring_packed_desc)); + vr->device = (struct zxdh_vring_packed_desc_event *)RTE_ALIGN_CEIL(((uintptr_t)vr->driver + + sizeof(struct zxdh_vring_packed_desc_event)), align); +} + +static inline void vring_desc_init_packed(struct zxdh_virtqueue *vq, int32_t n) +{ + int32_t i = 0; + + for (i = 0; i < n - 1; i++) { + vq->vq_packed.ring.desc[i].id = i; + vq->vq_descx[i].next = i + 1; + } + vq->vq_packed.ring.desc[i].id = i; + vq->vq_descx[i].next = ZXDH_VQ_RING_DESC_CHAIN_END; +} + +static inline void vring_desc_init_indirect_packed(struct zxdh_vring_packed_desc *dp, int32_t n) +{ + int32_t i = 0; + + for (i = 0; i < n; i++) { + dp[i].id = (uint16_t)i; + dp[i].flags = ZXDH_VRING_DESC_F_WRITE; + } +} + 
+static inline void virtqueue_disable_intr(struct zxdh_virtqueue *vq) +{ + if (vq->vq_packed.event_flags_shadow != ZXDH_RING_EVENT_FLAGS_DISABLE) { + vq->vq_packed.event_flags_shadow = ZXDH_RING_EVENT_FLAGS_DISABLE; + vq->vq_packed.ring.driver->desc_event_flags = vq->vq_packed.event_flags_shadow; + } +} + +struct rte_mbuf *zxdh_virtqueue_detach_unused(struct zxdh_virtqueue *vq); +int32_t zxdh_free_queues(struct rte_eth_dev *dev); +int32_t zxdh_get_queue_type(uint16_t vtpci_queue_idx); + #ifdef __cplusplus } #endif -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 89354 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
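Note on how the checks in zxdh_dev_configure() above are reached: they run inside the generic rte_eth_dev_configure() call. A minimal application-side sketch, not part of the patch — the port id, queue count and the mostly-zeroed rte_eth_conf below are illustrative assumptions:

	#include <rte_ethdev.h>

	/* Configure a probed zxdh port so the PMD checks pass: equal Rx/Tx
	 * queue counts, Rx mq_mode RSS or NONE, Tx mq_mode NONE, and
	 * nb_rx + nb_tx below ZXDH_QUEUES_NUM_MAX (256).
	 */
	static int configure_zxdh_port(uint16_t port_id, uint16_t nb_queues)
	{
		struct rte_eth_conf conf = {
			.rxmode = { .mq_mode = RTE_ETH_MQ_RX_NONE },
			.txmode = { .mq_mode = RTE_ETH_MQ_TX_NONE },
		};

		return rte_eth_dev_configure(port_id, nb_queues, nb_queues, &conf);
	}

Each logical queue acquired this way is then mapped to a free physical channel through the COI table walk in zxdh_get_available_channel().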
* Re: [PATCH v9 0/9] net/zxdh: introduce net zxdh driver 2024-11-01 6:21 ` [PATCH v9 0/9] net/zxdh: introduce net zxdh driver Junlong Wang ` (8 preceding siblings ...) 2024-11-01 6:21 ` [PATCH v9 9/9] net/zxdh: add zxdh dev configure ops Junlong Wang @ 2024-11-02 0:56 ` Ferruh Yigit 2024-11-04 2:42 ` Junlong Wang ` (2 subsequent siblings) 12 siblings, 0 replies; 225+ messages in thread From: Ferruh Yigit @ 2024-11-02 0:56 UTC (permalink / raw) To: Junlong Wang, dev; +Cc: wang.yong19 On 11/1/2024 6:21 AM, Junlong Wang wrote: > v9: > - fix 'v8 3/9' patch use PCI bus API, > and common PCI constants according to David Marchand's comments. > > v8: > - fix flexible arrays、Waddress-of-packed-member error. > - all structs、enum、define ,etc use zxdh/ZXDH_ prefixed. > - use zxdh_try/release_lock,and move loop into zxdh_timedlock, > make hardware lock follow spinlock pattern. > > v7: > - add release notes and modify zxdh.rst issues. > - avoid use pthread and use rte_spinlock_lock. > - using the prefix ZXDH_ before some definitions. > - resole issues according to thomas's comments. > > v6: > - Resolve ci/intel compilation issues. > - fix meson.build indentation in earlier patch. > > V5: > - split driver into multiple patches,part of the zxdh driver, > later provide dev start/stop,queue_setup,npsdk_init,mac,vlan,rss ,etc. > - fix errors reported by scripts. > - move the product link in zxdh.rst. > - fix meson check use RTE_ARCH_X86_64/RTE_ARCH_ARM64. > - modify other comments according to Ferruh's comments. > > Junlong Wang (9): > net/zxdh: add zxdh ethdev pmd driver > net/zxdh: add logging implementation > net/zxdh: add zxdh device pci init implementation > net/zxdh: add msg chan and msg hwlock init > net/zxdh: add msg chan enable implementation > net/zxdh: add zxdh get device backend infos > net/zxdh: add configure zxdh intr implementation > net/zxdh: add zxdh dev infos get ops > net/zxdh: add zxdh dev configure ops > Hi Junlong, I can see not all of the eth_dev_ops implemented, and datapath not implemented, so driver is not functional right now. What happens if you want to run testpmd with the current state of the driver, I assume it crashes? And what is the plan for the driver? Are you planning to upstream remaining support in this release or in future releases? As the driver is not functional yet, to set the expectation right for the users, I suggest marking driver as experimental in the maintainers file and document the restrictions in the driver documentation, also clarify this in the release notes update, what do you think? ^ permalink raw reply [flat|nested] 225+ messages in thread
* [PATCH v9 0/9] net/zxdh: introduce net zxdh driver 2024-11-01 6:21 ` [PATCH v9 0/9] net/zxdh: introduce net zxdh driver Junlong Wang ` (9 preceding siblings ...) 2024-11-02 0:56 ` [PATCH v9 0/9] net/zxdh: introduce net zxdh driver Ferruh Yigit @ 2024-11-04 2:42 ` Junlong Wang 2024-11-04 8:46 ` Ferruh Yigit 2024-11-04 11:46 ` Junlong Wang 2024-11-05 9:39 ` Junlong Wang 12 siblings, 1 reply; 225+ messages in thread From: Junlong Wang @ 2024-11-04 2:42 UTC (permalink / raw) To: ferruh.yigit; +Cc: dev [-- Attachment #1.1.1: Type: text/plain, Size: 1393 bytes --] >> net/zxdh: add zxdh get device backend infos >> net/zxdh: add configure zxdh intr implementation >> net/zxdh: add zxdh dev infos get ops >> net/zxdh: add zxdh dev configure ops >> > Hi Junlong, > I can see not all of the eth_dev_ops implemented, and datapath not > implemented, so driver is not functional right now. > What happens if you want to run testpmd with the current state of the > driver, I assume it crashes? > And what is the plan for the driver? Are you planning to upstream > remaining support in this release or in future releases? > As the driver is not functional yet, to set the expectation right for > the users, I suggest marking driver as experimental in the maintainers > file and document the restrictions in the driver documentation, also > clarify this in the release notes update, what do you think? Hi Ferruh, The complete driver functionality has been implemented. At present, it is being integrated in batches. The PMD integrated so far does not crash, but it cannot yet support all PMD functions. We plan to integrate the driver in this release and finally provide a complete and usable driver. In the current phase, we will mark the driver as experimental in the MAINTAINERS file, document the restrictions in the driver documentation, and also clarify this in the release notes update. Thanks. [-- Attachment #1.1.2: Type: text/html , Size: 2778 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
* Re: [PATCH v9 0/9] net/zxdh: introduce net zxdh driver 2024-11-04 2:42 ` Junlong Wang @ 2024-11-04 8:46 ` Ferruh Yigit 2024-11-04 9:52 ` David Marchand 0 siblings, 1 reply; 225+ messages in thread From: Ferruh Yigit @ 2024-11-04 8:46 UTC (permalink / raw) To: Junlong Wang, Aaron Conole Cc: dev, dpdklab, David Marchand, Stephen Hemminger, Thomas Monjalon On 11/4/2024 2:42 AM, Junlong Wang wrote: >>> net/zxdh: add zxdh get device backend infos >>> net/zxdh: add configure zxdh intr implementation >>> net/zxdh: add zxdh dev infos get ops >>> net/zxdh: add zxdh dev configure ops >>> > >> Hi Junlong, > >> I can see not all of the eth_dev_ops implemented, and datapath not >> implemented, so driver is not functional right now. > >> What happens if you want to run testpmd with the current state of the >> driver, I assume it crashes? > >> And what is the plan for the driver? Are you planning to upstream >> remaining support in this release or in future releases? > >> As the driver is not functional yet, to set the expectation right for >> the users, I suggest marking driver as experimental in the maintainers >> file and document the restrictions in the driver documentation, also >> clarify this in the release notes update, what do you think? > > Hi Ferruh, > The complete driver function has been implemented. At present, the driver function is integrated in batches. > The integrated PMD does not cause the crash but cannot support all PMD functions.We plan to integrate the driver in this version and finally provide a complete and available driver. > In the current phase, we will marking driver as experimental in the maintainers file and document the restrictions in the driver documentation, also clarify this in the release notes update. > Thanks. Sounds good, thanks. Btw, build should be fine after each patch, but in this patch series there are warnings in some early patches, can you please fix them in next version? @Aaron, @David, what do you think to add patch by patch build to the CI? I hit the same issue in multiple series in this release. ^ permalink raw reply [flat|nested] 225+ messages in thread
* Re: [PATCH v9 0/9] net/zxdh: introduce net zxdh driver 2024-11-04 8:46 ` Ferruh Yigit @ 2024-11-04 9:52 ` David Marchand 0 siblings, 0 replies; 225+ messages in thread From: David Marchand @ 2024-11-04 9:52 UTC (permalink / raw) To: Ferruh Yigit, Aaron Conole Cc: Junlong Wang, dev, dpdklab, Stephen Hemminger, Thomas Monjalon On Mon, Nov 4, 2024 at 9:46 AM Ferruh Yigit <ferruh.yigit@amd.com> wrote: > > On 11/4/2024 2:42 AM, Junlong Wang wrote: > >>> net/zxdh: add zxdh get device backend infos > >>> net/zxdh: add configure zxdh intr implementation > >>> net/zxdh: add zxdh dev infos get ops > >>> net/zxdh: add zxdh dev configure ops > >>> > > > >> Hi Junlong, > > > >> I can see not all of the eth_dev_ops implemented, and datapath not > >> implemented, so driver is not functional right now. > > > >> What happens if you want to run testpmd with the current state of the > >> driver, I assume it crashes? > > > >> And what is the plan for the driver? Are you planning to upstream > >> remaining support in this release or in future releases? > > > >> As the driver is not functional yet, to set the expectation right for > >> the users, I suggest marking driver as experimental in the maintainers > >> file and document the restrictions in the driver documentation, also > >> clarify this in the release notes update, what do you think? > > > > Hi Ferruh, > > The complete driver function has been implemented. At present, the driver function is integrated in batches. > > The integrated PMD does not cause the crash but cannot support all PMD functions.We plan to integrate the driver in this version and finally provide a complete and available driver. > > In the current phase, we will marking driver as experimental in the maintainers file and document the restrictions in the driver documentation, also clarify this in the release notes update. > > Thanks. > > Sounds good, thanks. > > > Btw, build should be fine after each patch, but in this patch series > there are warnings in some early patches, can you please fix them in > next version? > > @Aaron, @David, what do you think to add patch by patch build to the CI? > I hit the same issue in multiple series in this release. I would love to have patch by patch tests too, but I don't think UNH can handle such load. We could limit at doing this patch by patch in GHA / ovsrobot only (which I think it is done for OVS CI). -- David Marchand ^ permalink raw reply [flat|nested] 225+ messages in thread
* [PATCH v9 0/9] net/zxdh: introduce net zxdh driver 2024-11-01 6:21 ` [PATCH v9 0/9] net/zxdh: introduce net zxdh driver Junlong Wang ` (10 preceding siblings ...) 2024-11-04 2:42 ` Junlong Wang @ 2024-11-04 11:46 ` Junlong Wang 2024-11-04 22:47 ` Thomas Monjalon 2024-11-05 9:39 ` Junlong Wang 12 siblings, 1 reply; 225+ messages in thread From: Junlong Wang @ 2024-11-04 11:46 UTC (permalink / raw) To: ferruh.yigit, aconole; +Cc: dev, dpdklab, stephen, thomas [-- Attachment #1.1.1: Type: text/plain, Size: 1953 bytes --] > >>> net/zxdh: add zxdh get device backend infos > >>> net/zxdh: add configure zxdh intr implementation > >>> net/zxdh: add zxdh dev infos get ops > >>> net/zxdh: add zxdh dev configure ops > >>> > > > >> Hi Junlong, > > > >> I can see not all of the eth_dev_ops implemented, and datapath not > >> implemented, so driver is not functional right now. > > > >> What happens if you want to run testpmd with the current state of the > >> driver, I assume it crashes? > > > >> And what is the plan for the driver? Are you planning to upstream > >> remaining support in this release or in future releases? > > > >> As the driver is not functional yet, to set the expectation right for > >> the users, I suggest marking driver as experimental in the maintainers > >> file and document the restrictions in the driver documentation, also > >> clarify this in the release notes update, what do you think? > > > > Hi Ferruh, > > The complete driver function has been implemented. At present, the driver function is integrated in batches. > > The integrated PMD does not cause the crash but cannot support all PMD functions.We plan to integrate the driver in this version and finally provide a complete and available driver. > > In the current phase, we will marking driver as experimental in the maintainers file and document the restrictions in the driver documentation, also clarify this in the release notes update. > > Thanks. > > Sounds good, thanks. > > > Btw, build should be fine after each patch, but in this patch series > there are warnings in some early patches, can you please fix them in > next version? > > @Aaron, @David, what do you think to add patch by patch build to the CI? > I hit the same issue in multiple series in this release. Sure. But I cannot see these Warnings in the [v9] net/zxdh CI/check Description. What do I need to do to see these Warnings? Thanks. [-- Attachment #1.1.2: Type: text/html , Size: 4141 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
* Re: [PATCH v9 0/9] net/zxdh: introduce net zxdh driver 2024-11-04 11:46 ` Junlong Wang @ 2024-11-04 22:47 ` Thomas Monjalon 0 siblings, 0 replies; 225+ messages in thread From: Thomas Monjalon @ 2024-11-04 22:47 UTC (permalink / raw) To: ferruh.yigit, aconole, Junlong Wang; +Cc: dev, dpdklab, stephen 04/11/2024 12:46, Junlong Wang: > > >>> net/zxdh: add zxdh get device backend infos > > >>> net/zxdh: add configure zxdh intr implementation > > >>> net/zxdh: add zxdh dev infos get ops > > >>> net/zxdh: add zxdh dev configure ops > > >>> > > > > > >> Hi Junlong, > > > > > >> I can see not all of the eth_dev_ops implemented, and datapath not > > >> implemented, so driver is not functional right now. > > > > > >> What happens if you want to run testpmd with the current state of the > > >> driver, I assume it crashes? > > > > > >> And what is the plan for the driver? Are you planning to upstream > > >> remaining support in this release or in future releases? > > > > > >> As the driver is not functional yet, to set the expectation right for > > >> the users, I suggest marking driver as experimental in the maintainers > > >> file and document the restrictions in the driver documentation, also > > >> clarify this in the release notes update, what do you think? > > > > > > Hi Ferruh, > > > The complete driver function has been implemented. At present, the driver function is integrated in batches. > > > The integrated PMD does not cause the crash but cannot support all PMD functions.We plan to integrate the driver in this version and finally provide a complete and available driver. > > > In the current phase, we will marking driver as experimental in the maintainers file and document the restrictions in the driver documentation, also clarify this in the release notes update. > > > Thanks. > > > > Sounds good, thanks. > > > > > > Btw, build should be fine after each patch, but in this patch series > > there are warnings in some early patches, can you please fix them in > > next version? > > > > @Aaron, @David, what do you think to add patch by patch build to the CI? > > I hit the same issue in multiple series in this release. > > Sure. But I cannot see these Warnings in the [v9] net/zxdh CI/check Description. > What do I need to do to see these Warnings? You need to test compilation after each patch locally. You can use "git rebase -i --exec ninja" for instance. ^ permalink raw reply [flat|nested] 225+ messages in thread
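A concrete form of that loop, assuming the tree was configured beforehand with a meson build directory named 'build' (the directory name and base branch are placeholders):

	meson setup build -Dwerror=true
	git rebase -i --exec 'ninja -C build' origin/main

With -Dwerror=true, compiler warnings fail the build, so the rebase stops at the first patch that introduces one and the per-patch warnings become visible locally.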
* [PATCH v9 0/9] net/zxdh: introduce net zxdh driver 2024-11-01 6:21 ` [PATCH v9 0/9] net/zxdh: introduce net zxdh driver Junlong Wang ` (11 preceding siblings ...) 2024-11-04 11:46 ` Junlong Wang @ 2024-11-05 9:39 ` Junlong Wang 2024-11-06 0:38 ` Ferruh Yigit 12 siblings, 1 reply; 225+ messages in thread From: Junlong Wang @ 2024-11-05 9:39 UTC (permalink / raw) To: ferruh.yigit, thomas, aconole; +Cc: dev, dpdklab, stephen [-- Attachment #1.1.1: Type: text/plain, Size: 2194 bytes --] > > > >> Hi Junlong, > > > > > > > >> I can see not all of the eth_dev_ops implemented, and datapath not > > > >> implemented, so driver is not functional right now. > > > > > > > >> What happens if you want to run testpmd with the current state of the > > > >> driver, I assume it crashes? > > > > > > > >> And what is the plan for the driver? Are you planning to upstream > > > >> remaining support in this release or in future releases? > > > > > > > >> As the driver is not functional yet, to set the expectation right for > > > >> the users, I suggest marking driver as experimental in the maintainers > > > >> file and document the restrictions in the driver documentation, also > > > >> clarify this in the release notes update, what do you think? > > > > > > > > > Hi Ferruh, > > > > The complete driver function has been implemented. At present, the driver function is integrated in batches. > > > > The integrated PMD does not cause the crash but cannot support all PMD functions.We plan to integrate the driver in this version and finally provide a complete and available driver. > > > > In the current phase, we will marking driver as experimental in the maintainers file and document the restrictions in the driver documentation, also clarify this in the release notes update. > > > > Thanks. > > > > > > Sounds good, thanks. > > > > > > > > > Btw, build should be fine after each patch, but in this patch series > > > there are warnings in some early patches, can you please fix them in > > > next version? > > > > > > @Aaron, @David, what do you think to add patch by patch build to the CI? > > > I hit the same issue in multiple series in this release. > > > > Sure. But I cannot see these Warnings in the [v9] net/zxdh CI/check Description. > > What do I need to do to see these Warnings? > You need to test compilation after each patch locally. > You can use "git rebase -i --exec ninja" for instance. Sorry, I have tried multiple times to use 'git rebase -i --exec ninja' locally after each patch, but I couldn't find the warnings. I have no idea how to solve it. Is there any other way to see the warnings? [-- Attachment #1.1.2: Type: text/html , Size: 4975 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
* Re: [PATCH v9 0/9] net/zxdh: introduce net zxdh driver 2024-11-05 9:39 ` Junlong Wang @ 2024-11-06 0:38 ` Ferruh Yigit 0 siblings, 0 replies; 225+ messages in thread From: Ferruh Yigit @ 2024-11-06 0:38 UTC (permalink / raw) To: Junlong Wang, thomas, aconole; +Cc: dev, dpdklab, stephen On 11/5/2024 9:39 AM, Junlong Wang wrote: >> > > >> Hi Junlong, >> > > > >> > > >> I can see not all of the eth_dev_ops implemented, and datapath not >> > > >> implemented, so driver is not functional right now. >> > > > >> > > >> What happens if you want to run testpmd with the current state of the >> > > >> driver, I assume it crashes? >> > > > >> > > >> And what is the plan for the driver? Are you planning to upstream >> > > >> remaining support in this release or in future releases? >> > > > >> > > >> As the driver is not functional yet, to set the expectation right for >> > > >> the users, I suggest marking driver as experimental in the maintainers >> > > >> file and document the restrictions in the driver documentation, also >> > > >> clarify this in the release notes update, what do you think? >> > > > > >> > > > Hi Ferruh, >> > > > The complete driver function has been implemented. At present, the driver function is integrated in batches. >> > > > The integrated PMD does not cause the crash but cannot support all PMD functions.We plan to integrate the driver in this version and finally provide a complete and available driver. >> > > > In the current phase, we will marking driver as experimental in the maintainers file and document the restrictions in the driver documentation, also clarify this in the release notes update. >> > > > Thanks. >> > > >> > > Sounds good, thanks. >> > > >> > > >> > > Btw, build should be fine after each patch, but in this patch series >> > > there are warnings in some early patches, can you please fix them in >> > > next version? >> > > >> > > @Aaron, @David, what do you think to add patch by patch build to the CI? >> > > I hit the same issue in multiple series in this release. >> > >> > Sure. But I cannot see these Warnings in the [v9] net/zxdh CI/ > check Description. >> > What do I need to do to see these Warnings? > >> You need to test compilation after each patch locally. >> You can use "git rebase -i --exec ninja" for instance. > > Sorry, I have tried multiple times to use 'git rebase -i -- > exec ninja' locally after each patch, but I couldn't find the warnings. > I have no idea how to solve it. Is there any other way to see the warnings? > Hi Junlong, I can't reproduce it, and v10 also looks good, so I will proceed with the driver. From my logs I can see patch by path fails in 'net/r8169', not this driver, probably I confused them as I reviewed them next to each other, sorry about confusion caused. ^ permalink raw reply [flat|nested] 225+ messages in thread
* [PATCH v8 2/9] net/zxdh: add logging implementation 2024-10-30 9:01 ` [PATCH v8 0/9] net/zxdh: introduce net zxdh driver Junlong Wang 2024-10-30 9:01 ` [PATCH v8 1/9] net/zxdh: add zxdh ethdev pmd driver Junlong Wang @ 2024-10-30 9:01 ` Junlong Wang 2024-10-30 9:01 ` [PATCH v8 3/9] net/zxdh: add zxdh device pci init implementation Junlong Wang ` (6 subsequent siblings) 8 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-10-30 9:01 UTC (permalink / raw) To: dev; +Cc: wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 3417 bytes --] Add zxdh logging implementation. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/zxdh_ethdev.c | 15 +++++++++++-- drivers/net/zxdh/zxdh_logs.h | 40 ++++++++++++++++++++++++++++++++++ 2 files changed, 53 insertions(+), 2 deletions(-) create mode 100644 drivers/net/zxdh/zxdh_logs.h diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 5b6c9ec1bf..c911284423 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -7,6 +7,7 @@ #include <rte_ethdev.h> #include "zxdh_ethdev.h" +#include "zxdh_logs.h" static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) { @@ -19,13 +20,18 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) /* Allocate memory for storing MAC addresses */ eth_dev->data->mac_addrs = rte_zmalloc("zxdh_mac", ZXDH_MAX_MAC_ADDRS * RTE_ETHER_ADDR_LEN, 0); - if (eth_dev->data->mac_addrs == NULL) + if (eth_dev->data->mac_addrs == NULL) { + PMD_INIT_LOG(ERR, "Failed to allocate %d bytes store MAC addresses", + ZXDH_MAX_MAC_ADDRS * RTE_ETHER_ADDR_LEN); return -ENOMEM; + } memset(hw, 0, sizeof(*hw)); hw->bar_addr[0] = (uint64_t)pci_dev->mem_resource[0].addr; - if (hw->bar_addr[0] == 0) + if (hw->bar_addr[0] == 0) { + PMD_INIT_LOG(ERR, "Bad mem resource."); return -EIO; + } hw->device_id = pci_dev->id.device_id; hw->port_id = eth_dev->data->port_id; @@ -90,3 +96,8 @@ static struct rte_pci_driver zxdh_pmd = { RTE_PMD_REGISTER_PCI(net_zxdh, zxdh_pmd); RTE_PMD_REGISTER_PCI_TABLE(net_zxdh, pci_id_zxdh_map); RTE_PMD_REGISTER_KMOD_DEP(net_zxdh, "* vfio-pci"); +RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_init, init, NOTICE); +RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_driver, driver, NOTICE); +RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_rx, rx, NOTICE); +RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_tx, tx, NOTICE); +RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_msg, msg, NOTICE); diff --git a/drivers/net/zxdh/zxdh_logs.h b/drivers/net/zxdh/zxdh_logs.h new file mode 100644 index 0000000000..a8a6a3135b --- /dev/null +++ b/drivers/net/zxdh/zxdh_logs.h @@ -0,0 +1,40 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#ifndef ZXDH_LOGS_H +#define ZXDH_LOGS_H + +#include <rte_log.h> + +extern int zxdh_logtype_init; +#define RTE_LOGTYPE_ZXDH_INIT zxdh_logtype_init +#define PMD_INIT_LOG(level, ...) \ + RTE_LOG_LINE_PREFIX(level, ZXDH_INIT, "offload_zxdh %s(): ", \ + __func__, __VA_ARGS__) + +extern int zxdh_logtype_driver; +#define RTE_LOGTYPE_ZXDH_DRIVER zxdh_logtype_driver +#define PMD_DRV_LOG(level, ...) \ + RTE_LOG_LINE_PREFIX(level, ZXDH_DRIVER, "offload_zxdh %s(): ", \ + __func__, __VA_ARGS__) + +extern int zxdh_logtype_rx; +#define RTE_LOGTYPE_ZXDH_RX zxdh_logtype_rx +#define PMD_RX_LOG(level, ...) \ + RTE_LOG_LINE_PREFIX(level, ZXDH_RX, "offload_zxdh %s(): ", \ + __func__, __VA_ARGS__) + +extern int zxdh_logtype_tx; +#define RTE_LOGTYPE_ZXDH_TX zxdh_logtype_tx +#define PMD_TX_LOG(level, ...) 
\ + RTE_LOG_LINE_PREFIX(level, ZXDH_TX, "offload_zxdh %s(): ", \ + __func__, __VA_ARGS__) + +extern int zxdh_logtype_msg; +#define RTE_LOGTYPE_ZXDH_MSG zxdh_logtype_msg +#define PMD_MSG_LOG(level, ...) \ + RTE_LOG_LINE_PREFIX(level, ZXDH_MSG, "offload_zxdh %s(): ", \ + __func__, __VA_ARGS__) + +#endif /* ZXDH_LOGS_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 6146 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
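As a usage note, the log types registered above are controlled at run time through the EAL --log-level option. Assuming the conventional 'pmd.net.zxdh' prefix that RTE_LOG_REGISTER_SUFFIX derives for a driver under drivers/net/zxdh (the PCI address below is a placeholder):

	dpdk-testpmd -l 0-3 -a 0000:01:00.0 --log-level='pmd.net.zxdh.init:debug' -- -i

This raises only the init log type to DEBUG; the driver, rx, tx and msg types can be selected the same way, or all at once with 'pmd.net.zxdh.*:debug'.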
* [PATCH v8 3/9] net/zxdh: add zxdh device pci init implementation 2024-10-30 9:01 ` [PATCH v8 0/9] net/zxdh: introduce net zxdh driver Junlong Wang 2024-10-30 9:01 ` [PATCH v8 1/9] net/zxdh: add zxdh ethdev pmd driver Junlong Wang 2024-10-30 9:01 ` [PATCH v8 2/9] net/zxdh: add logging implementation Junlong Wang @ 2024-10-30 9:01 ` Junlong Wang 2024-10-30 14:55 ` David Marchand 2024-10-30 9:01 ` [PATCH v8 4/9] net/zxdh: add msg chan and msg hwlock init Junlong Wang ` (5 subsequent siblings) 8 siblings, 1 reply; 225+ messages in thread From: Junlong Wang @ 2024-10-30 9:01 UTC (permalink / raw) To: dev; +Cc: wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 22959 bytes --] Add device pci init implementation to obtain PCI capabilities and read device configuration, etc. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/meson.build | 1 + drivers/net/zxdh/zxdh_ethdev.c | 43 +++++ drivers/net/zxdh/zxdh_ethdev.h | 21 ++- drivers/net/zxdh/zxdh_pci.c | 285 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_pci.h | 151 +++++++++++++++++ drivers/net/zxdh/zxdh_queue.h | 105 ++++++++++++ drivers/net/zxdh/zxdh_rxtx.h | 51 ++++++ 7 files changed, 655 insertions(+), 2 deletions(-) create mode 100644 drivers/net/zxdh/zxdh_pci.c create mode 100644 drivers/net/zxdh/zxdh_pci.h create mode 100644 drivers/net/zxdh/zxdh_queue.h create mode 100644 drivers/net/zxdh/zxdh_rxtx.h diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build index 932fb1c835..7db4e7bc71 100644 --- a/drivers/net/zxdh/meson.build +++ b/drivers/net/zxdh/meson.build @@ -15,4 +15,5 @@ endif sources = files( 'zxdh_ethdev.c', + 'zxdh_pci.c', ) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index c911284423..8877855965 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -8,6 +8,40 @@ #include "zxdh_ethdev.h" #include "zxdh_logs.h" +#include "zxdh_pci.h" + +struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; + +static int32_t zxdh_init_device(struct rte_eth_dev *eth_dev) +{ + struct zxdh_hw *hw = eth_dev->data->dev_private; + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); + int ret = 0; + + ret = zxdh_read_pci_caps(pci_dev, hw); + if (ret) { + PMD_INIT_LOG(ERR, "port 0x%x pci caps read failed.", hw->port_id); + goto err; + } + + zxdh_hw_internal[hw->port_id].zxdh_vtpci_ops = &zxdh_dev_pci_ops; + zxdh_vtpci_reset(hw); + zxdh_get_pci_dev_config(hw); + + rte_ether_addr_copy((struct rte_ether_addr *)hw->mac_addr, &eth_dev->data->mac_addrs[0]); + + /* If host does not support both status and MSI-X then disable LSC */ + if (vtpci_with_feature(hw, ZXDH_NET_F_STATUS) && hw->use_msix != ZXDH_MSIX_NONE) + eth_dev->data->dev_flags |= RTE_ETH_DEV_INTR_LSC; + else + eth_dev->data->dev_flags &= ~RTE_ETH_DEV_INTR_LSC; + + return 0; + +err: + PMD_INIT_LOG(ERR, "port %d init device failed", eth_dev->data->port_id); + return ret; +} static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) { @@ -45,6 +79,15 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) hw->is_pf = 1; } + ret = zxdh_init_device(eth_dev); + if (ret < 0) + goto err_zxdh_init; + + return ret; + +err_zxdh_init: + rte_free(eth_dev->data->mac_addrs); + eth_dev->data->mac_addrs = NULL; return ret; } diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index 93375aea11..8be5af6aeb 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -5,6 +5,8 @@ #ifndef ZXDH_ETHDEV_H #define ZXDH_ETHDEV_H
+#include <rte_ether.h> + #include "ethdev_driver.h" #ifdef __cplusplus @@ -24,15 +26,30 @@ extern "C" { #define ZXDH_MAX_MAC_ADDRS (ZXDH_MAX_UC_MAC_ADDRS + ZXDH_MAX_MC_MAC_ADDRS) #define ZXDH_NUM_BARS 2 +#define ZXDH_RX_QUEUES_MAX 128U +#define ZXDH_TX_QUEUES_MAX 128U struct zxdh_hw { struct rte_eth_dev *eth_dev; - uint64_t bar_addr[ZXDH_NUM_BARS]; + struct zxdh_pci_common_cfg *common_cfg; + struct zxdh_net_config *dev_cfg; - uint32_t speed; + uint64_t bar_addr[ZXDH_NUM_BARS]; + uint64_t host_features; + uint64_t guest_features; + uint32_t max_queue_pairs; + uint32_t speed; + uint32_t notify_off_multiplier; + uint16_t *notify_base; + uint16_t pcie_id; uint16_t device_id; uint16_t port_id; + uint8_t *isr; + uint8_t weak_barriers; + uint8_t use_msix; + uint8_t mac_addr[RTE_ETHER_ADDR_LEN]; + uint8_t duplex; uint8_t is_pf; }; diff --git a/drivers/net/zxdh/zxdh_pci.c b/drivers/net/zxdh/zxdh_pci.c new file mode 100644 index 0000000000..8fcab6e888 --- /dev/null +++ b/drivers/net/zxdh/zxdh_pci.c @@ -0,0 +1,285 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#include <stdint.h> +#include <unistd.h> + +#include <rte_io.h> +#include <rte_bus.h> +#include <rte_pci.h> +#include <rte_common.h> +#include <rte_cycles.h> + +#include "zxdh_ethdev.h" +#include "zxdh_pci.h" +#include "zxdh_logs.h" +#include "zxdh_queue.h" + +#define ZXDH_PMD_DEFAULT_GUEST_FEATURES \ + (1ULL << ZXDH_NET_F_MRG_RXBUF | \ + 1ULL << ZXDH_NET_F_STATUS | \ + 1ULL << ZXDH_NET_F_MQ | \ + 1ULL << ZXDH_F_ANY_LAYOUT | \ + 1ULL << ZXDH_F_VERSION_1 | \ + 1ULL << ZXDH_F_RING_PACKED | \ + 1ULL << ZXDH_F_IN_ORDER | \ + 1ULL << ZXDH_F_NOTIFICATION_DATA | \ + 1ULL << ZXDH_NET_F_MAC) + +static void zxdh_read_dev_config(struct zxdh_hw *hw, + size_t offset, + void *dst, + int32_t length) +{ + int32_t i = 0; + uint8_t *p = NULL; + uint8_t old_gen = 0; + uint8_t new_gen = 0; + + do { + old_gen = rte_read8(&hw->common_cfg->config_generation); + + p = dst; + for (i = 0; i < length; i++) + *p++ = rte_read8((uint8_t *)hw->dev_cfg + offset + i); + + new_gen = rte_read8(&hw->common_cfg->config_generation); + } while (old_gen != new_gen); +} + +static void zxdh_write_dev_config(struct zxdh_hw *hw, + size_t offset, + const void *src, + int32_t length) +{ + int32_t i = 0; + const uint8_t *p = src; + + for (i = 0; i < length; i++) + rte_write8((*p++), (((uint8_t *)hw->dev_cfg) + offset + i)); +} + +static uint8_t zxdh_get_status(struct zxdh_hw *hw) +{ + return rte_read8(&hw->common_cfg->device_status); +} + +static void zxdh_set_status(struct zxdh_hw *hw, uint8_t status) +{ + rte_write8(status, &hw->common_cfg->device_status); +} + +static uint64_t zxdh_get_features(struct zxdh_hw *hw) +{ + uint32_t features_lo = 0; + uint32_t features_hi = 0; + + rte_write32(0, &hw->common_cfg->device_feature_select); + features_lo = rte_read32(&hw->common_cfg->device_feature); + + rte_write32(1, &hw->common_cfg->device_feature_select); + features_hi = rte_read32(&hw->common_cfg->device_feature); + + return ((uint64_t)features_hi << 32) | features_lo; +} + +static void zxdh_set_features(struct zxdh_hw *hw, uint64_t features) +{ + rte_write32(0, &hw->common_cfg->guest_feature_select); + rte_write32(features & ((1ULL << 32) - 1), &hw->common_cfg->guest_feature); + rte_write32(1, &hw->common_cfg->guest_feature_select); + rte_write32(features >> 32, &hw->common_cfg->guest_feature); +} + +const struct zxdh_pci_ops zxdh_dev_pci_ops = { + .read_dev_cfg = zxdh_read_dev_config, + .write_dev_cfg = zxdh_write_dev_config, + .get_status = 
zxdh_get_status, + .set_status = zxdh_set_status, + .get_features = zxdh_get_features, + .set_features = zxdh_set_features, +}; + +uint64_t zxdh_vtpci_get_features(struct zxdh_hw *hw) +{ + return ZXDH_VTPCI_OPS(hw)->get_features(hw); +} + +void zxdh_vtpci_reset(struct zxdh_hw *hw) +{ + PMD_INIT_LOG(INFO, "port %u device reset started, waiting...", hw->port_id); + uint32_t retry = 0; + + ZXDH_VTPCI_OPS(hw)->set_status(hw, ZXDH_CONFIG_STATUS_RESET); + /* Flush status write and wait until the device reports reset complete. */ + while (ZXDH_VTPCI_OPS(hw)->get_status(hw) != ZXDH_CONFIG_STATUS_RESET) { + ++retry; + rte_delay_ms(1); + } + PMD_INIT_LOG(INFO, "port %u device reset %u ms done", hw->port_id, retry); +} + +static void *get_cfg_addr(struct rte_pci_device *dev, struct zxdh_pci_cap *cap) +{ + uint8_t bar = cap->bar; + uint32_t length = cap->length; + uint32_t offset = cap->offset; + + if (bar >= PCI_MAX_RESOURCE) { + PMD_INIT_LOG(ERR, "invalid bar: %u", bar); + return NULL; + } + if (offset + length < offset) { + PMD_INIT_LOG(ERR, "offset(%u) + length(%u) overflows", offset, length); + return NULL; + } + if (offset + length > dev->mem_resource[bar].len) { + PMD_INIT_LOG(ERR, "invalid cap: overflows bar space"); + return NULL; + } + uint8_t *base = dev->mem_resource[bar].addr; + + if (base == NULL) { + PMD_INIT_LOG(ERR, "bar %u base addr is NULL", bar); + return NULL; + } + return base + offset; +} + +int32_t zxdh_read_pci_caps(struct rte_pci_device *dev, struct zxdh_hw *hw) +{ + uint8_t pos = 0; + int32_t ret = 0; + + if (dev->mem_resource[0].addr == NULL) { + PMD_INIT_LOG(ERR, "bar0 base addr is NULL"); + return -1; + } + + ret = rte_pci_read_config(dev, &pos, 1, ZXDH_PCI_CAPABILITY_LIST); + + if (ret != 1) { + PMD_INIT_LOG(DEBUG, "failed to read pci capability list, ret %d", ret); + return -1; + } + while (pos) { + struct zxdh_pci_cap cap; + + ret = rte_pci_read_config(dev, &cap, 2, pos); + if (ret != 2) { + PMD_INIT_LOG(DEBUG, "failed to read pci cap at pos: %x ret %d", pos, ret); + break; + } + if (cap.cap_vndr == ZXDH_PCI_CAP_ID_MSIX) { + /** + * Transitional devices would also have this capability, + * that's why we also check if msix is enabled. + * 1st byte is cap ID; 2nd byte is the position of next cap; + * next two bytes are the flags. + */ + uint16_t flags = 0; + + ret = rte_pci_read_config(dev, &flags, sizeof(flags), pos + 2); + if (ret != sizeof(flags)) { + PMD_INIT_LOG(ERR, "failed to read pci cap at pos: %x ret %d", + pos + 2, ret); + break; + } + hw->use_msix = (flags & ZXDH_PCI_MSIX_ENABLE) ?
+ ZXDH_MSIX_ENABLED : ZXDH_MSIX_DISABLED; + } + if (cap.cap_vndr != ZXDH_PCI_CAP_ID_VNDR) { + PMD_INIT_LOG(DEBUG, "[%2x] skipping non VNDR cap id: %02x", + pos, cap.cap_vndr); + goto next; + } + ret = rte_pci_read_config(dev, &cap, sizeof(cap), pos); + if (ret != sizeof(cap)) { + PMD_INIT_LOG(ERR, "failed to read pci cap at pos: %x ret %d", pos, ret); + break; + } + PMD_INIT_LOG(DEBUG, "[%2x] cfg type: %u, bar: %u, offset: %04x, len: %u", + pos, cap.cfg_type, cap.bar, cap.offset, cap.length); + switch (cap.cfg_type) { + case ZXDH_PCI_CAP_COMMON_CFG: + hw->common_cfg = get_cfg_addr(dev, &cap); + break; + case ZXDH_PCI_CAP_NOTIFY_CFG: { + ret = rte_pci_read_config(dev, &hw->notify_off_multiplier, + 4, pos + sizeof(cap)); + if (ret != 4) + PMD_INIT_LOG(ERR, + "failed to read notify_off_multiplier, ret %d", ret); + else + hw->notify_base = get_cfg_addr(dev, &cap); + break; + } + case ZXDH_PCI_CAP_DEVICE_CFG: + hw->dev_cfg = get_cfg_addr(dev, &cap); + break; + case ZXDH_PCI_CAP_ISR_CFG: + hw->isr = get_cfg_addr(dev, &cap); + break; + case ZXDH_PCI_CAP_PCI_CFG: { + hw->pcie_id = *(uint16_t *)&cap.padding[1]; + PMD_INIT_LOG(DEBUG, "get pcie id 0x%x", hw->pcie_id); + uint16_t pcie_id = hw->pcie_id; + + if ((pcie_id >> 11) & 0x1) /* PF */ { + PMD_INIT_LOG(DEBUG, "EP %u PF %u", + pcie_id >> 12, (pcie_id >> 8) & 0x7); + } else { /* VF */ + PMD_INIT_LOG(DEBUG, "EP %u PF %u VF %u", + pcie_id >> 12, (pcie_id >> 8) & 0x7, pcie_id & 0xff); + } + break; + } + } +next: + pos = cap.cap_next; + } + if (hw->common_cfg == NULL || hw->notify_base == NULL || + hw->dev_cfg == NULL || hw->isr == NULL) { + PMD_INIT_LOG(ERR, "no zxdh pci device found."); + return -1; + } + return 0; +} + +void zxdh_vtpci_read_dev_config(struct zxdh_hw *hw, size_t offset, void *dst, int32_t length) +{ + ZXDH_VTPCI_OPS(hw)->read_dev_cfg(hw, offset, dst, length); +} + +int32_t zxdh_get_pci_dev_config(struct zxdh_hw *hw) +{ + uint64_t guest_features = 0; + uint64_t nego_features = 0; + uint32_t max_queue_pairs = 0; + + hw->host_features = zxdh_vtpci_get_features(hw); + + guest_features = (uint64_t)ZXDH_PMD_DEFAULT_GUEST_FEATURES; + nego_features = guest_features & hw->host_features; + + hw->guest_features = nego_features; + + if (hw->guest_features & (1ULL << ZXDH_NET_F_MAC)) { + zxdh_vtpci_read_dev_config(hw, offsetof(struct zxdh_net_config, mac), + &hw->mac_addr, RTE_ETHER_ADDR_LEN); + } else { + rte_eth_random_addr(&hw->mac_addr[0]); + } + + zxdh_vtpci_read_dev_config(hw, offsetof(struct zxdh_net_config, max_virtqueue_pairs), + &max_queue_pairs, sizeof(max_queue_pairs)); + + if (max_queue_pairs == 0) + hw->max_queue_pairs = ZXDH_RX_QUEUES_MAX; + else + hw->max_queue_pairs = RTE_MIN(ZXDH_RX_QUEUES_MAX, max_queue_pairs); + PMD_INIT_LOG(DEBUG, "set max queue pairs %d", hw->max_queue_pairs); + + return 0; +} diff --git a/drivers/net/zxdh/zxdh_pci.h b/drivers/net/zxdh/zxdh_pci.h new file mode 100644 index 0000000000..bb5ae64ddf --- /dev/null +++ b/drivers/net/zxdh/zxdh_pci.h @@ -0,0 +1,151 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#ifndef ZXDH_PCI_H +#define ZXDH_PCI_H + +#include <stdint.h> +#include <stdbool.h> + +#include <bus_pci_driver.h> + +#include "zxdh_ethdev.h" + +#ifdef __cplusplus +extern "C" { +#endif + +enum zxdh_msix_status { + ZXDH_MSIX_NONE = 0, + ZXDH_MSIX_DISABLED = 1, + ZXDH_MSIX_ENABLED = 2 +}; + +#define ZXDH_PCI_CAPABILITY_LIST 0x34 +#define ZXDH_PCI_CAP_ID_VNDR 0x09 +#define ZXDH_PCI_CAP_ID_MSIX 0x11 + +#define ZXDH_PCI_MSIX_ENABLE 0x8000 + +#define ZXDH_NET_F_MAC 5 
/* Host has given MAC address. */ +#define ZXDH_NET_F_MRG_RXBUF 15 /* Host can merge receive buffers. */ +#define ZXDH_NET_F_STATUS 16 /* zxdh_net_config.status available */ +#define ZXDH_NET_F_MQ 22 /* Device supports Receive Flow Steering */ +#define ZXDH_F_ANY_LAYOUT 27 /* Can the device handle any descriptor layout */ +#define ZXDH_F_VERSION_1 32 +#define ZXDH_F_RING_PACKED 34 +#define ZXDH_F_IN_ORDER 35 +#define ZXDH_F_NOTIFICATION_DATA 38 + +#define ZXDH_PCI_CAP_COMMON_CFG 1 /* Common configuration */ +#define ZXDH_PCI_CAP_NOTIFY_CFG 2 /* Notifications */ +#define ZXDH_PCI_CAP_ISR_CFG 3 /* ISR Status */ +#define ZXDH_PCI_CAP_DEVICE_CFG 4 /* Device specific configuration */ +#define ZXDH_PCI_CAP_PCI_CFG 5 /* PCI configuration access */ + +/* Status byte for guest to report progress. */ +#define ZXDH_CONFIG_STATUS_RESET 0x00 +#define ZXDH_CONFIG_STATUS_ACK 0x01 +#define ZXDH_CONFIG_STATUS_DRIVER 0x02 +#define ZXDH_CONFIG_STATUS_DRIVER_OK 0x04 +#define ZXDH_CONFIG_STATUS_FEATURES_OK 0x08 +#define ZXDH_CONFIG_STATUS_DEV_NEED_RESET 0x40 +#define ZXDH_CONFIG_STATUS_FAILED 0x80 + +struct zxdh_net_config { + /* The config defining mac address (if ZXDH_NET_F_MAC) */ + uint8_t mac[RTE_ETHER_ADDR_LEN]; + /* See ZXDH_NET_F_STATUS and ZXDH_NET_S_* above */ + uint16_t status; + uint16_t max_virtqueue_pairs; + uint16_t mtu; + /* + * speed, in units of 1Mb. All values 0 to INT_MAX are legal. + * Any other value stands for unknown. + */ + uint32_t speed; + /* 0x00 - half duplex + * 0x01 - full duplex + * Any other value stands for unknown. + */ + uint8_t duplex; +} __rte_packed; + +/* This is the PCI capability header: */ +struct zxdh_pci_cap { + uint8_t cap_vndr; /* Generic PCI field: PCI_CAP_ID_VNDR */ + uint8_t cap_next; /* Generic PCI field: next ptr. */ + uint8_t cap_len; /* Generic PCI field: capability length */ + uint8_t cfg_type; /* Identifies the structure. */ + uint8_t bar; /* Where to find it. */ + uint8_t padding[3]; /* Pad to full dword. */ + uint32_t offset; /* Offset within bar. */ + uint32_t length; /* Length of the structure, in bytes. */ +}; + +/* Fields in ZXDH_PCI_CAP_COMMON_CFG: */ +struct zxdh_pci_common_cfg { + /* About the whole device. */ + uint32_t device_feature_select; /* read-write */ + uint32_t device_feature; /* read-only */ + uint32_t guest_feature_select; /* read-write */ + uint32_t guest_feature; /* read-write */ + uint16_t msix_config; /* read-write */ + uint16_t num_queues; /* read-only */ + uint8_t device_status; /* read-write */ + uint8_t config_generation; /* read-only */ + + /* About a specific virtqueue. */ + uint16_t queue_select; /* read-write */ + uint16_t queue_size; /* read-write, power of 2. 
*/ + uint16_t queue_msix_vector; /* read-write */ + uint16_t queue_enable; /* read-write */ + uint16_t queue_notify_off; /* read-only */ + uint32_t queue_desc_lo; /* read-write */ + uint32_t queue_desc_hi; /* read-write */ + uint32_t queue_avail_lo; /* read-write */ + uint32_t queue_avail_hi; /* read-write */ + uint32_t queue_used_lo; /* read-write */ + uint32_t queue_used_hi; /* read-write */ +}; + +static inline int32_t vtpci_with_feature(struct zxdh_hw *hw, uint64_t bit) +{ + return (hw->guest_features & (1ULL << bit)) != 0; +} + +struct zxdh_pci_ops { + void (*read_dev_cfg)(struct zxdh_hw *hw, size_t offset, void *dst, int32_t len); + void (*write_dev_cfg)(struct zxdh_hw *hw, size_t offset, const void *src, int32_t len); + + uint8_t (*get_status)(struct zxdh_hw *hw); + void (*set_status)(struct zxdh_hw *hw, uint8_t status); + + uint64_t (*get_features)(struct zxdh_hw *hw); + void (*set_features)(struct zxdh_hw *hw, uint64_t features); +}; + +struct zxdh_hw_internal { + const struct zxdh_pci_ops *zxdh_vtpci_ops; +}; + +#define ZXDH_VTPCI_OPS(hw) (zxdh_hw_internal[(hw)->port_id].zxdh_vtpci_ops) + +extern struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; +extern const struct zxdh_pci_ops zxdh_dev_pci_ops; + +void zxdh_vtpci_reset(struct zxdh_hw *hw); +void zxdh_vtpci_read_dev_config(struct zxdh_hw *hw, size_t offset, + void *dst, int32_t length); + +int32_t zxdh_read_pci_caps(struct rte_pci_device *dev, struct zxdh_hw *hw); +int32_t zxdh_get_pci_dev_config(struct zxdh_hw *hw); + +uint64_t zxdh_vtpci_get_features(struct zxdh_hw *hw); + +#ifdef __cplusplus +} +#endif + +#endif /* ZXDH_PCI_H */ diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h new file mode 100644 index 0000000000..fd73f14e2d --- /dev/null +++ b/drivers/net/zxdh/zxdh_queue.h @@ -0,0 +1,105 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#ifndef ZXDH_QUEUE_H +#define ZXDH_QUEUE_H + +#include <stdint.h> + +#include <rte_common.h> + +#include "zxdh_ethdev.h" +#include "zxdh_rxtx.h" + +#ifdef __cplusplus +extern "C" { +#endif + +/** ring descriptors: 16 bytes. + * These can chain together via "next". + **/ +struct zxdh_vring_desc { + uint64_t addr; /* Address (guest-physical). */ + uint32_t len; /* Length. */ + uint16_t flags; /* The flags as indicated above. */ + uint16_t next; /* We chain unused descriptors via this. */ +} __rte_packed; + +struct zxdh_vring_avail { + uint16_t flags; + uint16_t idx; + uint16_t ring[]; +} __rte_packed; + +struct zxdh_vring_packed_desc { + uint64_t addr; + uint32_t len; + uint16_t id; + uint16_t flags; +} __rte_packed; + +struct zxdh_vring_packed_desc_event { + uint16_t desc_event_off_wrap; + uint16_t desc_event_flags; +} __rte_packed; + +struct zxdh_vring_packed { + uint32_t num; + struct zxdh_vring_packed_desc *desc; + struct zxdh_vring_packed_desc_event *driver; + struct zxdh_vring_packed_desc_event *device; +} __rte_packed; + +struct zxdh_vq_desc_extra { + void *cookie; + uint16_t ndescs; + uint16_t next; +} __rte_packed; + +struct zxdh_virtqueue { + struct zxdh_hw *hw; /**< zxdh_hw structure pointer.
*/ + struct { + /**< vring keeping descs and events */ + struct zxdh_vring_packed ring; + uint8_t used_wrap_counter; + uint8_t rsv; + uint16_t cached_flags; /**< cached flags for descs */ + uint16_t event_flags_shadow; + uint16_t rsv1; + } __rte_packed vq_packed; + uint16_t vq_used_cons_idx; /**< last consumed descriptor */ + uint16_t vq_nentries; /**< vring desc numbers */ + uint16_t vq_free_cnt; /**< num of desc available */ + uint16_t vq_avail_idx; /**< sync until needed */ + uint16_t vq_free_thresh; /**< free threshold */ + uint16_t rsv2; + + void *vq_ring_virt_mem; /**< linear address of vring */ + uint32_t vq_ring_size; + + union { + struct zxdh_virtnet_rx rxq; + struct zxdh_virtnet_tx txq; + }; + + /**< physical address of vring, + * or virtual address for virtio_user. + **/ + rte_iova_t vq_ring_mem; + + /** + * Head of the free chain in the descriptor table. If + * there are no free descriptors, this will be set to + * VQ_RING_DESC_CHAIN_END. + **/ + uint16_t vq_desc_head_idx; + uint16_t vq_desc_tail_idx; + uint16_t vq_queue_index; /**< PCI queue index */ + uint16_t offset; /**< relative offset to obtain addr in mbuf */ + uint16_t *notify_addr; + struct rte_mbuf **sw_ring; /**< RX software ring. */ + struct zxdh_vq_desc_extra vq_descx[]; +} __rte_packed; + +#endif /* ZXDH_QUEUE_H */ diff --git a/drivers/net/zxdh/zxdh_rxtx.h b/drivers/net/zxdh/zxdh_rxtx.h new file mode 100644 index 0000000000..ccac7e7834 --- /dev/null +++ b/drivers/net/zxdh/zxdh_rxtx.h @@ -0,0 +1,51 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#ifndef ZXDH_RXTX_H +#define ZXDH_RXTX_H + +#include <stdint.h> + +#include <rte_common.h> +#include <rte_mbuf_core.h> + +#ifdef __cplusplus +extern "C" { +#endif + +struct zxdh_virtnet_stats { + uint64_t packets; + uint64_t bytes; + uint64_t errors; + uint64_t multicast; + uint64_t broadcast; + uint64_t truncated_err; + uint64_t size_bins[8]; +}; + +struct zxdh_virtnet_rx { + struct zxdh_virtqueue *vq; + + /* dummy mbuf, for wraparound when processing RX ring. */ + struct rte_mbuf fake_mbuf; + + uint64_t mbuf_initializer; /* value to init mbufs. */ + struct rte_mempool *mpool; /* mempool for mbuf allocation */ + uint16_t queue_id; /* DPDK queue index. */ + uint16_t port_id; /* Device port identifier. */ + struct zxdh_virtnet_stats stats; + const struct rte_memzone *mz; /* mem zone to populate RX ring. */ +} __rte_packed; + +struct zxdh_virtnet_tx { + struct zxdh_virtqueue *vq; + const struct rte_memzone *virtio_net_hdr_mz; /* memzone to populate hdr. */ + rte_iova_t virtio_net_hdr_mem; /* hdr for each xmit packet */ + uint16_t queue_id; /* DPDK queue index. */ + uint16_t port_id; /* Device port identifier. */ + struct zxdh_virtnet_stats stats; + const struct rte_memzone *mz; /* mem zone to populate TX ring. */ +} __rte_packed; + +#endif /* ZXDH_RXTX_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 53777 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
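The byte-wise zxdh_read_dev_config() in this patch guards against torn reads with the config_generation counter: if the device updates its config space mid-copy, the generation changes and the copy is retried. A self-contained sketch of that pattern (generic illustration with an assumed register layout, not driver code):

    /* Torn-read guard: retry the byte-wise copy until the generation
     * counter is the same before and after, i.e. the snapshot is
     * consistent. */
    #include <stdint.h>

    struct dev_cfg_regs {
        volatile uint8_t generation;  /* bumped by device on config change */
        volatile uint8_t cfg[64];     /* device-specific config bytes */
    };

    static void cfg_snapshot(struct dev_cfg_regs *r, uint8_t *dst, int len)
    {
        uint8_t before, after;

        do {
            before = r->generation;        /* generation before the copy */
            for (int i = 0; i < len; i++)
                dst[i] = r->cfg[i];        /* byte-wise MMIO-style reads */
            after = r->generation;         /* unchanged => copy is atomic */
        } while (before != after);
    }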
* Re: [PATCH v8 3/9] net/zxdh: add zxdh device pci init implementation 2024-10-30 9:01 ` [PATCH v8 3/9] net/zxdh: add zxdh device pci init implementation Junlong Wang @ 2024-10-30 14:55 ` David Marchand 0 siblings, 0 replies; 225+ messages in thread From: David Marchand @ 2024-10-30 14:55 UTC (permalink / raw) To: Junlong Wang Cc: dev, wang.yong19, Maxime Coquelin, Stephen Hemminger, Thomas Monjalon, Ferruh Yigit On Wed, Oct 30, 2024 at 10:07 AM Junlong Wang <wang.junlong1@zte.com.cn> wrote: > > Add device pci init implementation, > to obtain PCI capability and read configuration, etc. This title with PCI intrigued me, so I had a look at this patch. Please use the PCI bus API and don't redefine common PCI constants/helpers. To make it easier for you, I recommend having a look at: $ git show a10b6e53fe baa9c55009 7bb1168d98 -- drivers/net/virtio/virtio_pci.c -- David Marchand ^ permalink raw reply [flat|nested] 225+ messages in thread
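For reference, the virtio rework in the commits cited above replaces the hand-rolled capability walk with the generic helpers exported by the PCI library. A hedged sketch of the equivalent MSI-X detection (helper and constant names as found in recent DPDK, e.g. rte_pci_find_capability() and RTE_PCI_CAP_ID_MSIX; verify against the target release):

    /* Sketch: detect MSI-X state via the PCI bus API instead of
     * redefining ZXDH_PCI_CAPABILITY_LIST / ZXDH_PCI_CAP_ID_MSIX etc. */
    #include <rte_pci.h>

    static enum zxdh_msix_status zxdh_msix_detect(struct rte_pci_device *dev)
    {
        uint16_t flags;
        off_t pos;

        pos = rte_pci_find_capability(dev, RTE_PCI_CAP_ID_MSIX);
        if (pos > 0 && rte_pci_read_config(dev, &flags, sizeof(flags),
                pos + RTE_PCI_MSIX_FLAGS) == sizeof(flags))
            return (flags & RTE_PCI_MSIX_FLAGS_ENABLE) ?
                ZXDH_MSIX_ENABLED : ZXDH_MSIX_DISABLED;
        return ZXDH_MSIX_NONE;
    }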
* [PATCH v8 4/9] net/zxdh: add msg chan and msg hwlock init 2024-10-30 9:01 ` [PATCH v8 0/9] net/zxdh: introduce net zxdh driver Junlong Wang ` (2 preceding siblings ...) 2024-10-30 9:01 ` [PATCH v8 3/9] net/zxdh: add zxdh device pci init implementation Junlong Wang @ 2024-10-30 9:01 ` Junlong Wang 2024-10-30 9:01 ` [PATCH v8 5/9] net/zxdh: add msg chan enable implementation Junlong Wang ` (4 subsequent siblings) 8 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-10-30 9:01 UTC (permalink / raw) To: dev; +Cc: wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 9083 bytes --] Add msg channel and hwlock init implementation. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/meson.build | 1 + drivers/net/zxdh/zxdh_ethdev.c | 15 +++ drivers/net/zxdh/zxdh_ethdev.h | 1 + drivers/net/zxdh/zxdh_msg.c | 161 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_msg.h | 67 ++++++++++++++ 5 files changed, 245 insertions(+) create mode 100644 drivers/net/zxdh/zxdh_msg.c create mode 100644 drivers/net/zxdh/zxdh_msg.h diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build index 7db4e7bc71..2e0c8fddae 100644 --- a/drivers/net/zxdh/meson.build +++ b/drivers/net/zxdh/meson.build @@ -16,4 +16,5 @@ endif sources = files( 'zxdh_ethdev.c', 'zxdh_pci.c', + 'zxdh_msg.c', ) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 8877855965..2dcf144fc9 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -9,6 +9,7 @@ #include "zxdh_ethdev.h" #include "zxdh_logs.h" #include "zxdh_pci.h" +#include "zxdh_msg.h" struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; @@ -83,9 +84,23 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) if (ret < 0) goto err_zxdh_init; + ret = zxdh_msg_chan_init(); + if (ret != 0) { + PMD_INIT_LOG(ERR, "Failed to init bar msg chan"); + goto err_zxdh_init; + } + hw->msg_chan_init = 1; + + ret = zxdh_msg_chan_hwlock_init(eth_dev); + if (ret != 0) { + PMD_INIT_LOG(ERR, "zxdh_msg_chan_hwlock_init failed ret %d", ret); + goto err_zxdh_init; + } + return ret; err_zxdh_init: + zxdh_bar_msg_chan_exit(); rte_free(eth_dev->data->mac_addrs); eth_dev->data->mac_addrs = NULL; return ret; diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index 8be5af6aeb..5902704923 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -52,6 +52,7 @@ struct zxdh_hw { uint8_t duplex; uint8_t is_pf; + uint8_t msg_chan_init; }; #ifdef __cplusplus diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c new file mode 100644 index 0000000000..9dcf99f1f7 --- /dev/null +++ b/drivers/net/zxdh/zxdh_msg.c @@ -0,0 +1,161 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#include <stdbool.h> + +#include <rte_common.h> +#include <rte_memcpy.h> +#include <rte_spinlock.h> +#include <rte_cycles.h> +#include <inttypes.h> +#include <rte_malloc.h> + +#include "zxdh_ethdev.h" +#include "zxdh_logs.h" +#include "zxdh_msg.h" + +#define ZXDH_REPS_INFO_FLAG_USABLE 0x00 +#define ZXDH_BAR_SEQID_NUM_MAX 256 + +#define ZXDH_PCIEID_IS_PF_MASK (0x0800) +#define ZXDH_PCIEID_PF_IDX_MASK (0x0700) +#define ZXDH_PCIEID_VF_IDX_MASK (0x00ff) +#define ZXDH_PCIEID_EP_IDX_MASK (0x7000) +/* PCIEID bit field offset */ +#define ZXDH_PCIEID_PF_IDX_OFFSET (8) +#define ZXDH_PCIEID_EP_IDX_OFFSET (12) + +#define ZXDH_MULTIPLY_BY_8(x) ((x) << 3) +#define ZXDH_MULTIPLY_BY_32(x) ((x) << 5) +#define 
ZXDH_MULTIPLY_BY_256(x) ((x) << 8) + +#define ZXDH_MAX_EP_NUM (4) +#define ZXDH_MAX_HARD_SPINLOCK_NUM (511) + +#define ZXDH_BAR0_SPINLOCK_OFFSET (0x4000) +#define ZXDH_FW_SHRD_OFFSET (0x5000) +#define ZXDH_FW_SHRD_INNER_HW_LABEL_PAT (0x800) +#define ZXDH_HW_LABEL_OFFSET \ + (ZXDH_FW_SHRD_OFFSET + ZXDH_FW_SHRD_INNER_HW_LABEL_PAT) + +struct zxdh_dev_stat { + bool is_mpf_scanned; + bool is_res_init; + int16_t dev_cnt; /* probe cnt */ +}; +struct zxdh_dev_stat g_dev_stat = {0}; + +struct zxdh_seqid_item { + void *reps_addr; + uint16_t id; + uint16_t buffer_len; + uint16_t flag; +}; + +struct zxdh_seqid_ring { + uint16_t cur_id; + rte_spinlock_t lock; + struct zxdh_seqid_item reps_info_tbl[ZXDH_BAR_SEQID_NUM_MAX]; +}; +struct zxdh_seqid_ring g_seqid_ring = {0}; + +static uint16_t pcie_id_to_hard_lock(uint16_t src_pcieid, uint8_t dst) +{ + uint16_t lock_id = 0; + uint16_t pf_idx = (src_pcieid & ZXDH_PCIEID_PF_IDX_MASK) >> ZXDH_PCIEID_PF_IDX_OFFSET; + uint16_t ep_idx = (src_pcieid & ZXDH_PCIEID_EP_IDX_MASK) >> ZXDH_PCIEID_EP_IDX_OFFSET; + + switch (dst) { + /* msg to risc */ + case ZXDH_MSG_CHAN_END_RISC: + lock_id = ZXDH_MULTIPLY_BY_8(ep_idx) + pf_idx; + break; + /* msg to pf/vf */ + case ZXDH_MSG_CHAN_END_VF: + case ZXDH_MSG_CHAN_END_PF: + lock_id = ZXDH_MULTIPLY_BY_8(ep_idx) + pf_idx + + ZXDH_MULTIPLY_BY_8(1 + ZXDH_MAX_EP_NUM); + break; + default: + lock_id = 0; + break; + } + if (lock_id >= ZXDH_MAX_HARD_SPINLOCK_NUM) + lock_id = 0; + + return lock_id; +} + +static void label_write(uint64_t label_lock_addr, uint32_t lock_id, uint16_t value) +{ + *(volatile uint16_t *)(label_lock_addr + lock_id * 2) = value; +} + +static void spinlock_write(uint64_t virt_lock_addr, uint32_t lock_id, uint8_t data) +{ + *(volatile uint8_t *)((uint64_t)virt_lock_addr + (uint64_t)lock_id) = data; +} + +static int32_t zxdh_spinlock_unlock(uint32_t virt_lock_id, uint64_t virt_addr, uint64_t label_addr) +{ + label_write((uint64_t)label_addr, virt_lock_id, 0); + spinlock_write(virt_addr, virt_lock_id, 0); + return 0; +} + +/** + * Fun: PF init hard_spinlock addr + */ +static int bar_chan_pf_init_spinlock(uint16_t pcie_id, uint64_t bar_base_addr) +{ + int lock_id = pcie_id_to_hard_lock(pcie_id, ZXDH_MSG_CHAN_END_RISC); + + zxdh_spinlock_unlock(lock_id, bar_base_addr + ZXDH_BAR0_SPINLOCK_OFFSET, + bar_base_addr + ZXDH_HW_LABEL_OFFSET); + lock_id = pcie_id_to_hard_lock(pcie_id, ZXDH_MSG_CHAN_END_VF); + zxdh_spinlock_unlock(lock_id, bar_base_addr + ZXDH_BAR0_SPINLOCK_OFFSET, + bar_base_addr + ZXDH_HW_LABEL_OFFSET); + return 0; +} + +int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + if (!hw->is_pf) + return 0; + return bar_chan_pf_init_spinlock(hw->pcie_id, (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX])); +} + +static rte_spinlock_t chan_lock; +int zxdh_msg_chan_init(void) +{ + uint16_t seq_id = 0; + + g_dev_stat.dev_cnt++; + if (g_dev_stat.is_res_init) + return ZXDH_BAR_MSG_OK; + + rte_spinlock_init(&chan_lock); + g_seqid_ring.cur_id = 0; + rte_spinlock_init(&g_seqid_ring.lock); + + for (seq_id = 0; seq_id < ZXDH_BAR_SEQID_NUM_MAX; seq_id++) { + struct zxdh_seqid_item *reps_info = &g_seqid_ring.reps_info_tbl[seq_id]; + + reps_info->id = seq_id; + reps_info->flag = ZXDH_REPS_INFO_FLAG_USABLE; + } + g_dev_stat.is_res_init = true; + return ZXDH_BAR_MSG_OK; +} + +int zxdh_bar_msg_chan_exit(void) +{ + if (!g_dev_stat.is_res_init || (--g_dev_stat.dev_cnt > 0)) + return ZXDH_BAR_MSG_OK; + + g_dev_stat.is_res_init = false; + return ZXDH_BAR_MSG_OK; +} diff --git 
a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h new file mode 100644 index 0000000000..a0b46c900a --- /dev/null +++ b/drivers/net/zxdh/zxdh_msg.h @@ -0,0 +1,67 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#ifndef ZXDH_MSG_H +#define ZXDH_MSG_H + +#include <stdint.h> + +#include <ethdev_driver.h> + +#ifdef __cplusplus +extern "C" { +#endif + +#define ZXDH_BAR0_INDEX 0 + +enum ZXDH_DRIVER_TYPE { + ZXDH_MSG_CHAN_END_MPF = 0, + ZXDH_MSG_CHAN_END_PF, + ZXDH_MSG_CHAN_END_VF, + ZXDH_MSG_CHAN_END_RISC, +}; + +enum ZXDH_BAR_MSG_RTN { + ZXDH_BAR_MSG_OK = 0, + ZXDH_BAR_MSG_ERR_MSGID, + ZXDH_BAR_MSG_ERR_NULL, + ZXDH_BAR_MSG_ERR_TYPE, /* Message type exception */ + ZXDH_BAR_MSG_ERR_MODULE, /* Module ID exception */ + ZXDH_BAR_MSG_ERR_BODY_NULL, /* Message body exception */ + ZXDH_BAR_MSG_ERR_LEN, /* Message length exception */ + ZXDH_BAR_MSG_ERR_TIME_OUT, /* Message sending timed out */ + ZXDH_BAR_MSG_ERR_NOT_READY, /* Abnormal message sending conditions */ + ZXDH_BAR_MEG_ERR_NULL_FUNC, /* Empty receive processing function pointer */ + ZXDH_BAR_MSG_ERR_REPEAT_REGISTER, /* Module duplicate registration */ + ZXDH_BAR_MSG_ERR_UNGISTER, /* Repeated deregistration */ + /** + * The sending interface parameter boundary structure pointer is empty + */ + ZXDH_BAR_MSG_ERR_NULL_PARA, + ZXDH_BAR_MSG_ERR_REPSBUFF_LEN, /* The length of reps_buff is too short */ + /** + * Unable to find the corresponding message processing function for this module + */ + ZXDH_BAR_MSG_ERR_MODULE_NOEXIST, + /** + * The virtual address in the parameters passed in by the sending interface is empty
 */ + ZXDH_BAR_MSG_ERR_VIRTADDR_NULL, + ZXDH_BAR_MSG_ERR_REPLY, /* sync msg resp_error */ + ZXDH_BAR_MSG_ERR_MPF_NOT_SCANNED, + ZXDH_BAR_MSG_ERR_KERNEL_READY, + ZXDH_BAR_MSG_ERR_USR_RET_ERR, + ZXDH_BAR_MSG_ERR_ERR_PCIEID, + ZXDH_BAR_MSG_ERR_SOCKET, /* netlink socket err */ +}; + +int zxdh_msg_chan_init(void); +int zxdh_bar_msg_chan_exit(void); +int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev); + +#ifdef __cplusplus +} +#endif + +#endif /* ZXDH_MSG_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 17557 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
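To make the lock-slot mapping in pcie_id_to_hard_lock() concrete, here is a worked example reproduced standalone with an illustrative pcie_id (the value 0x1200 is assumed, not taken from the patch):

    /* Worked example of the hard spinlock slot mapping: ep/pf indices are
     * decoded from pcie_id bit-fields and packed into one of the 511
     * hardware spinlock slots shared with the firmware. */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint16_t pcie_id = 0x1200;                   /* assumed: EP 1, PF 2 */
        uint16_t pf_idx = (pcie_id & 0x0700) >> 8;   /* = 2 */
        uint16_t ep_idx = (pcie_id & 0x7000) >> 12;  /* = 1 */

        /* dst = ZXDH_MSG_CHAN_END_RISC */
        printf("to RISC : %u\n", (ep_idx << 3) + pf_idx);          /* 10 */
        /* dst = ZXDH_MSG_CHAN_END_PF/VF adds 8 * (1 + ZXDH_MAX_EP_NUM) */
        printf("to PF/VF: %u\n", (ep_idx << 3) + pf_idx + 8 * 5);  /* 50 */
        return 0;  /* both stay below the 511-slot limit, so no clamping */
    }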
* [PATCH v8 5/9] net/zxdh: add msg chan enable implementation 2024-10-30 9:01 ` [PATCH v8 0/9] net/zxdh: introduce net zxdh driver Junlong Wang ` (3 preceding siblings ...) 2024-10-30 9:01 ` [PATCH v8 4/9] net/zxdh: add msg chan and msg hwlock init Junlong Wang @ 2024-10-30 9:01 ` Junlong Wang 2024-10-30 9:01 ` [PATCH v8 6/9] net/zxdh: add zxdh get device backend infos Junlong Wang ` (3 subsequent siblings) 8 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-10-30 9:01 UTC (permalink / raw) To: dev; +Cc: wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 28128 bytes --] Add msg chan enable implementation to support sending msgs to the backend (device side) to get infos. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/zxdh_ethdev.c | 6 + drivers/net/zxdh/zxdh_ethdev.h | 12 + drivers/net/zxdh/zxdh_msg.c | 641 ++++++++++++++++++++++++++++++++- drivers/net/zxdh/zxdh_msg.h | 129 +++++++ 4 files changed, 785 insertions(+), 3 deletions(-) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 2dcf144fc9..a729344288 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -97,6 +97,12 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) goto err_zxdh_init; } + ret = zxdh_msg_chan_enable(eth_dev); + if (ret != 0) { + PMD_INIT_LOG(ERR, "zxdh_msg_bar_chan_enable failed ret %d", ret); + goto err_zxdh_init; + } + return ret; err_zxdh_init: diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index 5902704923..bed1334690 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -29,10 +29,22 @@ extern "C" { #define ZXDH_RX_QUEUES_MAX 128U #define ZXDH_TX_QUEUES_MAX 128U +union zxdh_virport_num { + uint16_t vport; + struct { + uint16_t vfid:8; + uint16_t pfid:3; + uint16_t vf_flag:1; + uint16_t epid:3; + uint16_t direct_flag:1; + }; +}; + struct zxdh_hw { struct rte_eth_dev *eth_dev; struct zxdh_pci_common_cfg *common_cfg; struct zxdh_net_config *dev_cfg; + union zxdh_virport_num vport; uint64_t bar_addr[ZXDH_NUM_BARS]; uint64_t host_features; diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c index 9dcf99f1f7..1bf72a9b7c 100644 --- a/drivers/net/zxdh/zxdh_msg.c +++ b/drivers/net/zxdh/zxdh_msg.c @@ -8,7 +8,6 @@ #include <rte_memcpy.h> #include <rte_spinlock.h> #include <rte_cycles.h> -#include <inttypes.h> #include <rte_malloc.h> #include "zxdh_ethdev.h" @@ -17,6 +16,7 @@ #define ZXDH_REPS_INFO_FLAG_USABLE 0x00 #define ZXDH_BAR_SEQID_NUM_MAX 256 +#define ZXDH_REPS_INFO_FLAG_USED 0xa0 #define ZXDH_PCIEID_IS_PF_MASK (0x0800) #define ZXDH_PCIEID_PF_IDX_MASK (0x0700) @@ -33,15 +33,88 @@ #define ZXDH_MAX_EP_NUM (4) #define ZXDH_MAX_HARD_SPINLOCK_NUM (511) +#define LOCK_PRIMARY_ID_MASK (0x8000) +/* bar offset */ +#define ZXDH_BAR0_CHAN_RISC_OFFSET (0x2000) +#define ZXDH_BAR0_CHAN_PFVF_OFFSET (0x3000) #define ZXDH_BAR0_SPINLOCK_OFFSET (0x4000) #define ZXDH_FW_SHRD_OFFSET (0x5000) #define ZXDH_FW_SHRD_INNER_HW_LABEL_PAT (0x800) #define ZXDH_HW_LABEL_OFFSET \ (ZXDH_FW_SHRD_OFFSET + ZXDH_FW_SHRD_INNER_HW_LABEL_PAT) +#define ZXDH_CHAN_RISC_SPINLOCK_OFFSET \ + (ZXDH_BAR0_SPINLOCK_OFFSET - ZXDH_BAR0_CHAN_RISC_OFFSET) +#define ZXDH_CHAN_PFVF_SPINLOCK_OFFSET \ + (ZXDH_BAR0_SPINLOCK_OFFSET - ZXDH_BAR0_CHAN_PFVF_OFFSET) +#define ZXDH_CHAN_RISC_LABEL_OFFSET \ + (ZXDH_HW_LABEL_OFFSET - ZXDH_BAR0_CHAN_RISC_OFFSET) +#define ZXDH_CHAN_PFVF_LABEL_OFFSET \ + (ZXDH_HW_LABEL_OFFSET - ZXDH_BAR0_CHAN_PFVF_OFFSET) + +#define
ZXDH_REPS_HEADER_LEN_OFFSET 1 +#define ZXDH_REPS_HEADER_PAYLOAD_OFFSET 4 +#define ZXDH_REPS_HEADER_REPLYED 0xff + +#define ZXDH_BAR_MSG_CHAN_USABLE 0 +#define ZXDH_BAR_MSG_CHAN_USED 1 + +#define ZXDH_BAR_MSG_POL_MASK (0x10) +#define ZXDH_BAR_MSG_POL_OFFSET (4) + +#define ZXDH_BAR_ALIGN_WORD_MASK 0xfffffffc +#define ZXDH_BAR_MSG_VALID_MASK 1 +#define ZXDH_BAR_MSG_VALID_OFFSET 0 + +#define ZXDH_BAR_PF_NUM 7 +#define ZXDH_BAR_VF_NUM 256 +#define ZXDH_BAR_INDEX_PF_TO_VF 0 +#define ZXDH_BAR_INDEX_MPF_TO_MPF 0xff +#define ZXDH_BAR_INDEX_MPF_TO_PFVF 0 +#define ZXDH_BAR_INDEX_PFVF_TO_MPF 0 + +#define ZXDH_MAX_HARD_SPINLOCK_ASK_TIMES (1000) +#define ZXDH_SPINLOCK_POLLING_SPAN_US (100) + +#define ZXDH_BAR_MSG_SRC_NUM 3 +#define ZXDH_BAR_MSG_SRC_MPF 0 +#define ZXDH_BAR_MSG_SRC_PF 1 +#define ZXDH_BAR_MSG_SRC_VF 2 +#define ZXDH_BAR_MSG_SRC_ERR 0xff +#define ZXDH_BAR_MSG_DST_NUM 3 +#define ZXDH_BAR_MSG_DST_RISC 0 +#define ZXDH_BAR_MSG_DST_MPF 2 +#define ZXDH_BAR_MSG_DST_PFVF 1 +#define ZXDH_BAR_MSG_DST_ERR 0xff + +#define ZXDH_LOCK_TYPE_HARD (1) +#define ZXDH_LOCK_TYPE_SOFT (0) +#define ZXDH_BAR_INDEX_TO_RISC 0 + +#define ZXDH_BAR_CHAN_INDEX_SEND 0 +#define ZXDH_BAR_CHAN_INDEX_RECV 1 + +uint8_t subchan_id_tbl[ZXDH_BAR_MSG_SRC_NUM][ZXDH_BAR_MSG_DST_NUM] = { + {ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_SEND}, + {ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_RECV}, + {ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_RECV, ZXDH_BAR_CHAN_INDEX_RECV} +}; + +uint8_t chan_id_tbl[ZXDH_BAR_MSG_SRC_NUM][ZXDH_BAR_MSG_DST_NUM] = { + {ZXDH_BAR_INDEX_TO_RISC, ZXDH_BAR_INDEX_MPF_TO_PFVF, ZXDH_BAR_INDEX_MPF_TO_MPF}, + {ZXDH_BAR_INDEX_TO_RISC, ZXDH_BAR_INDEX_PF_TO_VF, ZXDH_BAR_INDEX_PFVF_TO_MPF}, + {ZXDH_BAR_INDEX_TO_RISC, ZXDH_BAR_INDEX_PF_TO_VF, ZXDH_BAR_INDEX_PFVF_TO_MPF} +}; + +uint8_t lock_type_tbl[ZXDH_BAR_MSG_SRC_NUM][ZXDH_BAR_MSG_DST_NUM] = { + {ZXDH_LOCK_TYPE_HARD, ZXDH_LOCK_TYPE_HARD, ZXDH_LOCK_TYPE_HARD}, + {ZXDH_LOCK_TYPE_SOFT, ZXDH_LOCK_TYPE_SOFT, ZXDH_LOCK_TYPE_HARD}, + {ZXDH_LOCK_TYPE_HARD, ZXDH_LOCK_TYPE_HARD, ZXDH_LOCK_TYPE_HARD} +}; + struct zxdh_dev_stat { - bool is_mpf_scanned; - bool is_res_init; + uint8_t is_mpf_scanned; + uint8_t is_res_init; int16_t dev_cnt; /* probe cnt */ }; struct zxdh_dev_stat g_dev_stat = {0}; @@ -97,6 +170,33 @@ static void spinlock_write(uint64_t virt_lock_addr, uint32_t lock_id, uint8_t da *(volatile uint8_t *)((uint64_t)virt_lock_addr + (uint64_t)lock_id) = data; } +static uint8_t spinlock_read(uint64_t virt_lock_addr, uint32_t lock_id) +{ + return *(volatile uint8_t *)((uint64_t)virt_lock_addr + (uint64_t)lock_id); +} + +static int32_t zxdh_spinlock_lock(uint32_t virt_lock_id, uint64_t virt_addr, + uint64_t label_addr, uint16_t primary_id) +{ + uint32_t lock_rd_cnt = 0; + + do { + /* read to lock */ + uint8_t spl_val = spinlock_read(virt_addr, virt_lock_id); + + if (spl_val == 0) { + label_write((uint64_t)label_addr, virt_lock_id, primary_id); + break; + } + rte_delay_us_block(ZXDH_SPINLOCK_POLLING_SPAN_US); + lock_rd_cnt++; + } while (lock_rd_cnt < ZXDH_MAX_HARD_SPINLOCK_ASK_TIMES); + if (lock_rd_cnt >= ZXDH_MAX_HARD_SPINLOCK_ASK_TIMES) + return -1; + + return 0; +} + static int32_t zxdh_spinlock_unlock(uint32_t virt_lock_id, uint64_t virt_addr, uint64_t label_addr) { label_write((uint64_t)label_addr, virt_lock_id, 0); @@ -159,3 +259,538 @@ int zxdh_bar_msg_chan_exit(void) g_dev_stat.is_res_init = false; return ZXDH_BAR_MSG_OK; } + +static int zxdh_bar_chan_msgid_allocate(uint16_t *msgid) +{ + struct zxdh_seqid_item 
*seqid_reps_info = NULL; + + rte_spinlock_lock(&g_seqid_ring.lock); + uint16_t g_id = g_seqid_ring.cur_id; + uint16_t count = 0; + int rc = 0; + + do { + count++; + ++g_id; + g_id %= ZXDH_BAR_SEQID_NUM_MAX; + seqid_reps_info = &g_seqid_ring.reps_info_tbl[g_id]; + } while ((seqid_reps_info->flag != ZXDH_REPS_INFO_FLAG_USABLE) && + (count < ZXDH_BAR_SEQID_NUM_MAX)); + + if (count >= ZXDH_BAR_SEQID_NUM_MAX) { + rc = -1; + goto out; + } + seqid_reps_info->flag = ZXDH_REPS_INFO_FLAG_USED; + g_seqid_ring.cur_id = g_id; + *msgid = g_id; + rc = ZXDH_BAR_MSG_OK; + +out: + rte_spinlock_unlock(&g_seqid_ring.lock); + return rc; +} + +static uint16_t zxdh_bar_chan_save_recv_info(struct zxdh_msg_recviver_mem *result, uint16_t *msg_id) +{ + int ret = zxdh_bar_chan_msgid_allocate(msg_id); + + if (ret != ZXDH_BAR_MSG_OK) + return ZXDH_BAR_MSG_ERR_MSGID; + + PMD_MSG_LOG(DEBUG, "allocate msg_id: %u", *msg_id); + struct zxdh_seqid_item *reps_info = &g_seqid_ring.reps_info_tbl[*msg_id]; + + reps_info->reps_addr = result->recv_buffer; + reps_info->buffer_len = result->buffer_len; + return ZXDH_BAR_MSG_OK; +} + +static uint8_t zxdh_bar_msg_src_index_trans(uint8_t src) +{ + uint8_t src_index = 0; + + switch (src) { + case ZXDH_MSG_CHAN_END_MPF: + src_index = ZXDH_BAR_MSG_SRC_MPF; + break; + case ZXDH_MSG_CHAN_END_PF: + src_index = ZXDH_BAR_MSG_SRC_PF; + break; + case ZXDH_MSG_CHAN_END_VF: + src_index = ZXDH_BAR_MSG_SRC_VF; + break; + default: + src_index = ZXDH_BAR_MSG_SRC_ERR; + break; + } + return src_index; +} + +static uint8_t zxdh_bar_msg_dst_index_trans(uint8_t dst) +{ + uint8_t dst_index = 0; + + switch (dst) { + case ZXDH_MSG_CHAN_END_MPF: + dst_index = ZXDH_BAR_MSG_DST_MPF; + break; + case ZXDH_MSG_CHAN_END_PF: + dst_index = ZXDH_BAR_MSG_DST_PFVF; + break; + case ZXDH_MSG_CHAN_END_VF: + dst_index = ZXDH_BAR_MSG_DST_PFVF; + break; + case ZXDH_MSG_CHAN_END_RISC: + dst_index = ZXDH_BAR_MSG_DST_RISC; + break; + default: + dst_index = ZXDH_BAR_MSG_SRC_ERR; + break; + } + return dst_index; +} + +static int zxdh_bar_chan_send_para_check(struct zxdh_pci_bar_msg *in, + struct zxdh_msg_recviver_mem *result) +{ + uint8_t src_index = 0; + uint8_t dst_index = 0; + + if (in == NULL || result == NULL) { + PMD_MSG_LOG(ERR, "send para ERR: null para."); + return ZXDH_BAR_MSG_ERR_NULL_PARA; + } + src_index = zxdh_bar_msg_src_index_trans(in->src); + dst_index = zxdh_bar_msg_dst_index_trans(in->dst); + + if (src_index == ZXDH_BAR_MSG_SRC_ERR || dst_index == ZXDH_BAR_MSG_DST_ERR) { + PMD_MSG_LOG(ERR, "send para ERR: chan doesn't exist."); + return ZXDH_BAR_MSG_ERR_TYPE; + } + if (in->module_id >= ZXDH_BAR_MSG_MODULE_NUM) { + PMD_MSG_LOG(ERR, "send para ERR: invalid module_id: %d.", in->module_id); + return ZXDH_BAR_MSG_ERR_MODULE; + } + if (in->payload_addr == NULL) { + PMD_MSG_LOG(ERR, "send para ERR: null message."); + return ZXDH_BAR_MSG_ERR_BODY_NULL; + } + if (in->payload_len > ZXDH_BAR_MSG_PAYLOAD_MAX_LEN) { + PMD_MSG_LOG(ERR, "send para ERR: len %d is too long.", in->payload_len); + return ZXDH_BAR_MSG_ERR_LEN; + } + if (in->virt_addr == 0 || result->recv_buffer == NULL) { + PMD_MSG_LOG(ERR, "send para ERR: virt_addr or recv_buffer is NULL."); + return ZXDH_BAR_MSG_ERR_VIRTADDR_NULL; + } + if (result->buffer_len < ZXDH_REPS_HEADER_PAYLOAD_OFFSET) + PMD_MSG_LOG(ERR, "recv buffer len is shorter than the minimal 4 bytes"); + + return ZXDH_BAR_MSG_OK; +} + +static uint64_t zxdh_subchan_addr_cal(uint64_t virt_addr, uint8_t chan_id, uint8_t subchan_id) +{ + return virt_addr + (2 * chan_id + subchan_id) *
ZXDH_BAR_MSG_ADDR_CHAN_INTERVAL; +} + +static uint16_t zxdh_bar_chan_subchan_addr_get(struct zxdh_pci_bar_msg *in, uint64_t *subchan_addr) +{ + uint8_t src_index = zxdh_bar_msg_src_index_trans(in->src); + uint8_t dst_index = zxdh_bar_msg_dst_index_trans(in->dst); + uint16_t chan_id = chan_id_tbl[src_index][dst_index]; + uint16_t subchan_id = subchan_id_tbl[src_index][dst_index]; + + *subchan_addr = zxdh_subchan_addr_cal(in->virt_addr, chan_id, subchan_id); + return ZXDH_BAR_MSG_OK; +} + +static int zxdh_bar_hard_lock(uint16_t src_pcieid, uint8_t dst, uint64_t virt_addr) +{ + int ret = 0; + uint16_t lockid = pcie_id_to_hard_lock(src_pcieid, dst); + + PMD_MSG_LOG(DEBUG, "dev pcieid: 0x%x lock, get hardlockid: %u", src_pcieid, lockid); + if (dst == ZXDH_MSG_CHAN_END_RISC) + ret = zxdh_spinlock_lock(lockid, virt_addr + ZXDH_CHAN_RISC_SPINLOCK_OFFSET, + virt_addr + ZXDH_CHAN_RISC_LABEL_OFFSET, + src_pcieid | LOCK_PRIMARY_ID_MASK); + else + ret = zxdh_spinlock_lock(lockid, virt_addr + ZXDH_CHAN_PFVF_SPINLOCK_OFFSET, + virt_addr + ZXDH_CHAN_PFVF_LABEL_OFFSET, + src_pcieid | LOCK_PRIMARY_ID_MASK); + + return ret; +} + +static void zxdh_bar_hard_unlock(uint16_t src_pcieid, uint8_t dst, uint64_t virt_addr) +{ + uint16_t lockid = pcie_id_to_hard_lock(src_pcieid, dst); + + PMD_MSG_LOG(DEBUG, "dev pcieid: 0x%x unlock, get hardlockid: %u", src_pcieid, lockid); + if (dst == ZXDH_MSG_CHAN_END_RISC) + zxdh_spinlock_unlock(lockid, virt_addr + ZXDH_CHAN_RISC_SPINLOCK_OFFSET, + virt_addr + ZXDH_CHAN_RISC_LABEL_OFFSET); + else + zxdh_spinlock_unlock(lockid, virt_addr + ZXDH_CHAN_PFVF_SPINLOCK_OFFSET, + virt_addr + ZXDH_CHAN_PFVF_LABEL_OFFSET); +} + +static int zxdh_bar_chan_lock(uint8_t src, uint8_t dst, uint16_t src_pcieid, uint64_t virt_addr) +{ + int ret = 0; + uint8_t src_index = zxdh_bar_msg_src_index_trans(src); + uint8_t dst_index = zxdh_bar_msg_dst_index_trans(dst); + + if (src_index == ZXDH_BAR_MSG_SRC_ERR || dst_index == ZXDH_BAR_MSG_DST_ERR) { + PMD_MSG_LOG(ERR, "lock ERR: chan doesn't exist."); + return ZXDH_BAR_MSG_ERR_TYPE; + } + + ret = zxdh_bar_hard_lock(src_pcieid, dst, virt_addr); + if (ret != 0) + PMD_MSG_LOG(ERR, "dev: 0x%x failed to lock.", src_pcieid); + + return ret; +} + +static int zxdh_bar_chan_unlock(uint8_t src, uint8_t dst, uint16_t src_pcieid, uint64_t virt_addr) +{ + uint8_t src_index = zxdh_bar_msg_src_index_trans(src); + uint8_t dst_index = zxdh_bar_msg_dst_index_trans(dst); + + if (src_index == ZXDH_BAR_MSG_SRC_ERR || dst_index == ZXDH_BAR_MSG_DST_ERR) { + PMD_MSG_LOG(ERR, "unlock ERR: chan doesn't exist."); + return ZXDH_BAR_MSG_ERR_TYPE; + } + + zxdh_bar_hard_unlock(src_pcieid, dst, virt_addr); + + return ZXDH_BAR_MSG_OK; +} + +static void zxdh_bar_chan_msgid_free(uint16_t msg_id) +{ + struct zxdh_seqid_item *seqid_reps_info = &g_seqid_ring.reps_info_tbl[msg_id]; + + rte_spinlock_lock(&g_seqid_ring.lock); + seqid_reps_info->flag = ZXDH_REPS_INFO_FLAG_USABLE; + PMD_MSG_LOG(DEBUG, "free msg_id: %u", msg_id); + rte_spinlock_unlock(&g_seqid_ring.lock); +} + +static int zxdh_bar_chan_reg_write(uint64_t subchan_addr, uint32_t offset, uint32_t data) +{ + uint32_t algin_offset = (offset & ZXDH_BAR_ALIGN_WORD_MASK); + + if (unlikely(algin_offset >= ZXDH_BAR_MSG_ADDR_CHAN_INTERVAL)) { + PMD_MSG_LOG(ERR, "algin_offset exceeds channel size!"); + return -1; + } + *(uint32_t *)(subchan_addr + algin_offset) = data; + return 0; +} + +static int zxdh_bar_chan_reg_read(uint64_t subchan_addr, uint32_t offset, uint32_t *pdata) +{ + uint32_t algin_offset = (offset & ZXDH_BAR_ALIGN_WORD_MASK); + 
+ if (unlikely(algin_offset >= ZXDH_BAR_MSG_ADDR_CHAN_INTERVAL)) { + PMD_MSG_LOG(ERR, "algin_offset exceeds channel size!"); + return -1; + } + *pdata = *(uint32_t *)(subchan_addr + algin_offset); + return 0; +} + +static uint16_t zxdh_bar_chan_msg_header_set(uint64_t subchan_addr, + struct zxdh_bar_msg_header *msg_header) +{ + uint32_t *data = (uint32_t *)msg_header; + uint16_t idx; + + for (idx = 0; idx < (ZXDH_BAR_MSG_PLAYLOAD_OFFSET >> 2); idx++) + zxdh_bar_chan_reg_write(subchan_addr, idx * 4, *(data + idx)); + + return ZXDH_BAR_MSG_OK; +} + +static uint16_t zxdh_bar_chan_msg_header_get(uint64_t subchan_addr, + struct zxdh_bar_msg_header *msg_header) +{ + uint32_t *data = (uint32_t *)msg_header; + uint16_t idx; + + for (idx = 0; idx < (ZXDH_BAR_MSG_PLAYLOAD_OFFSET >> 2); idx++) + zxdh_bar_chan_reg_read(subchan_addr, idx * 4, data + idx); + + return ZXDH_BAR_MSG_OK; +} + +static uint16_t zxdh_bar_chan_msg_payload_set(uint64_t subchan_addr, uint8_t *msg, uint16_t len) +{ + uint32_t *data = (uint32_t *)msg; + uint32_t count = (len >> 2); + uint32_t ix; + + for (ix = 0; ix < count; ix++) + zxdh_bar_chan_reg_write(subchan_addr, 4 * ix + + ZXDH_BAR_MSG_PLAYLOAD_OFFSET, *(data + ix)); + + uint32_t remain = (len & 0x3); + + if (remain) { + uint32_t remain_data = 0; + + for (ix = 0; ix < remain; ix++) + remain_data |= *((uint8_t *)(msg + len - remain + ix)) << (8 * ix); + + zxdh_bar_chan_reg_write(subchan_addr, 4 * count + + ZXDH_BAR_MSG_PLAYLOAD_OFFSET, remain_data); + } + return ZXDH_BAR_MSG_OK; +} + +static uint16_t zxdh_bar_chan_msg_payload_get(uint64_t subchan_addr, uint8_t *msg, uint16_t len) +{ + uint32_t *data = (uint32_t *)msg; + uint32_t count = (len >> 2); + uint32_t ix; + + for (ix = 0; ix < count; ix++) + zxdh_bar_chan_reg_read(subchan_addr, 4 * ix + + ZXDH_BAR_MSG_PLAYLOAD_OFFSET, (data + ix)); + + uint32_t remain = (len & 0x3); + + if (remain) { + uint32_t remain_data = 0; + + zxdh_bar_chan_reg_read(subchan_addr, 4 * count + + ZXDH_BAR_MSG_PLAYLOAD_OFFSET, &remain_data); + for (ix = 0; ix < remain; ix++) + *((uint8_t *)(msg + (len - remain + ix))) = remain_data >> (8 * ix); + } + return ZXDH_BAR_MSG_OK; +} + +static uint16_t zxdh_bar_chan_msg_valid_set(uint64_t subchan_addr, uint8_t valid_label) +{ + uint32_t data; + + zxdh_bar_chan_reg_read(subchan_addr, ZXDH_BAR_MSG_VALID_OFFSET, &data); + data &= (~ZXDH_BAR_MSG_VALID_MASK); + data |= (uint32_t)valid_label; + zxdh_bar_chan_reg_write(subchan_addr, ZXDH_BAR_MSG_VALID_OFFSET, data); + return ZXDH_BAR_MSG_OK; +} + +static uint8_t temp_msg[ZXDH_BAR_MSG_ADDR_CHAN_INTERVAL]; +static uint16_t zxdh_bar_chan_msg_send(uint64_t subchan_addr, + void *payload_addr, + uint16_t payload_len, + struct zxdh_bar_msg_header *msg_header) +{ + uint16_t ret = 0; + ret = zxdh_bar_chan_msg_header_set(subchan_addr, msg_header); + + ret = zxdh_bar_chan_msg_header_get(subchan_addr, + (struct zxdh_bar_msg_header *)temp_msg); + + ret = zxdh_bar_chan_msg_payload_set(subchan_addr, + (uint8_t *)(payload_addr), payload_len); + + ret = zxdh_bar_chan_msg_payload_get(subchan_addr, + temp_msg, payload_len); + + ret = zxdh_bar_chan_msg_valid_set(subchan_addr, ZXDH_BAR_MSG_CHAN_USED); + return ret; +} + +static uint16_t zxdh_bar_msg_valid_stat_get(uint64_t subchan_addr) +{ + uint32_t data; + + zxdh_bar_chan_reg_read(subchan_addr, ZXDH_BAR_MSG_VALID_OFFSET, &data); + if (ZXDH_BAR_MSG_CHAN_USABLE == (data & ZXDH_BAR_MSG_VALID_MASK)) + return ZXDH_BAR_MSG_CHAN_USABLE; + + return ZXDH_BAR_MSG_CHAN_USED; +} + +static uint16_t zxdh_bar_chan_msg_poltag_set(uint64_t 
subchan_addr, uint8_t label) +{ + uint32_t data; + + zxdh_bar_chan_reg_read(subchan_addr, ZXDH_BAR_MSG_VALID_OFFSET, &data); + data &= (~(uint32_t)ZXDH_BAR_MSG_POL_MASK); + data |= ((uint32_t)label << ZXDH_BAR_MSG_POL_OFFSET); + zxdh_bar_chan_reg_write(subchan_addr, ZXDH_BAR_MSG_VALID_OFFSET, data); + return ZXDH_BAR_MSG_OK; +} + +static uint16_t zxdh_bar_chan_sync_msg_reps_get(uint64_t subchan_addr, + uint64_t recv_buffer, uint16_t buffer_len) +{ + struct zxdh_bar_msg_header msg_header = {0}; + uint16_t msg_id = 0; + uint16_t msg_len = 0; + + zxdh_bar_chan_msg_header_get(subchan_addr, &msg_header); + msg_id = msg_header.msg_id; + struct zxdh_seqid_item *reps_info = &g_seqid_ring.reps_info_tbl[msg_id]; + + if (reps_info->flag != ZXDH_REPS_INFO_FLAG_USED) { + PMD_MSG_LOG(ERR, "msg_id %u unused", msg_id); + return ZXDH_BAR_MSG_ERR_REPLY; + } + msg_len = msg_header.len; + + if (msg_len > buffer_len - 4) { + PMD_MSG_LOG(ERR, "recv buffer len is: %u, but reply msg len is: %u", + buffer_len, msg_len + 4); + return ZXDH_BAR_MSG_ERR_REPSBUFF_LEN; + } + uint8_t *recv_msg = (uint8_t *)recv_buffer; + + zxdh_bar_chan_msg_payload_get(subchan_addr, + recv_msg + ZXDH_REPS_HEADER_PAYLOAD_OFFSET, msg_len); + *(uint16_t *)(recv_msg + ZXDH_REPS_HEADER_LEN_OFFSET) = msg_len; + *recv_msg = ZXDH_REPS_HEADER_REPLYED; /* set reps's valid */ + return ZXDH_BAR_MSG_OK; +} + +int zxdh_bar_chan_sync_msg_send(struct zxdh_pci_bar_msg *in, struct zxdh_msg_recviver_mem *result) +{ + struct zxdh_bar_msg_header msg_header = {0}; + uint16_t seq_id = 0; + uint64_t subchan_addr = 0; + uint32_t time_out_cnt = 0; + uint16_t valid = 0; + int ret = 0; + + ret = zxdh_bar_chan_send_para_check(in, result); + if (ret != ZXDH_BAR_MSG_OK) + goto exit; + + ret = zxdh_bar_chan_save_recv_info(result, &seq_id); + if (ret != ZXDH_BAR_MSG_OK) + goto exit; + + zxdh_bar_chan_subchan_addr_get(in, &subchan_addr); + + msg_header.sync = ZXDH_BAR_CHAN_MSG_SYNC; + msg_header.emec = in->emec; + msg_header.usr = 0; + msg_header.rsv = 0; + msg_header.module_id = in->module_id; + msg_header.len = in->payload_len; + msg_header.msg_id = seq_id; + msg_header.src_pcieid = in->src_pcieid; + msg_header.dst_pcieid = in->dst_pcieid; + + ret = zxdh_bar_chan_lock(in->src, in->dst, in->src_pcieid, in->virt_addr); + if (ret != ZXDH_BAR_MSG_OK) { + zxdh_bar_chan_msgid_free(seq_id); + goto exit; + } + zxdh_bar_chan_msg_send(subchan_addr, in->payload_addr, in->payload_len, &msg_header); + + do { + rte_delay_us_block(ZXDH_BAR_MSG_POLLING_SPAN); + valid = zxdh_bar_msg_valid_stat_get(subchan_addr); + ++time_out_cnt; + } while ((time_out_cnt < ZXDH_BAR_MSG_TIMEOUT_TH) && (valid == ZXDH_BAR_MSG_CHAN_USED)); + + if (time_out_cnt == ZXDH_BAR_MSG_TIMEOUT_TH && valid != ZXDH_BAR_MSG_CHAN_USABLE) { + zxdh_bar_chan_msg_valid_set(subchan_addr, ZXDH_BAR_MSG_CHAN_USABLE); + zxdh_bar_chan_msg_poltag_set(subchan_addr, 0); + PMD_MSG_LOG(ERR, "BAR MSG ERR: chan type time out."); + ret = ZXDH_BAR_MSG_ERR_TIME_OUT; + } else { + ret = zxdh_bar_chan_sync_msg_reps_get(subchan_addr, + (uint64_t)result->recv_buffer, result->buffer_len); + } + zxdh_bar_chan_msgid_free(seq_id); + zxdh_bar_chan_unlock(in->src, in->dst, in->src_pcieid, in->virt_addr); + +exit: + return ret; +} + +static int bar_get_sum(uint8_t *ptr, uint8_t len) +{ + uint64_t sum = 0; + int idx; + + for (idx = 0; idx < len; idx++) + sum += *(ptr + idx); + + return (uint16_t)sum; +} + +static int zxdh_bar_chan_enable(struct zxdh_msix_para *para, uint16_t *vport) +{ + struct zxdh_bar_recv_msg recv_msg = {0}; + int ret = 0; + int 
check_token = 0; + int sum_res = 0; + + if (!para) + return ZXDH_BAR_MSG_ERR_NULL; + + struct zxdh_msix_msg msix_msg = { + .pcie_id = para->pcie_id, + .vector_risc = para->vector_risc, + .vector_pfvf = para->vector_pfvf, + .vector_mpf = para->vector_mpf, + }; + struct zxdh_pci_bar_msg in = { + .virt_addr = para->virt_addr, + .payload_addr = &msix_msg, + .payload_len = sizeof(msix_msg), + .emec = 0, + .src = para->driver_type, + .dst = ZXDH_MSG_CHAN_END_RISC, + .module_id = ZXDH_BAR_MODULE_MISX, + .src_pcieid = para->pcie_id, + .dst_pcieid = 0, + .usr = 0, + }; + + struct zxdh_msg_recviver_mem result = { + .recv_buffer = &recv_msg, + .buffer_len = sizeof(recv_msg), + }; + + ret = zxdh_bar_chan_sync_msg_send(&in, &result); + if (ret != ZXDH_BAR_MSG_OK) + return -ret; + + check_token = recv_msg.msix_reps.check; + sum_res = bar_get_sum((uint8_t *)&msix_msg, sizeof(msix_msg)); + + if (check_token != sum_res) { + PMD_MSG_LOG(ERR, "expect token: 0x%x, get token: 0x%x.", sum_res, check_token); + return ZXDH_BAR_MSG_ERR_REPLY; + } + *vport = recv_msg.msix_reps.vport; + PMD_MSG_LOG(DEBUG, "vport of pcieid: 0x%x get success.", para->pcie_id); + return ZXDH_BAR_MSG_OK; +} + +int zxdh_msg_chan_enable(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_msix_para misx_info = { + .vector_risc = ZXDH_MSIX_FROM_RISCV, + .vector_pfvf = ZXDH_MSIX_FROM_PFVF, + .vector_mpf = ZXDH_MSIX_FROM_MPF, + .pcie_id = hw->pcie_id, + .driver_type = hw->is_pf ? ZXDH_MSG_CHAN_END_PF : ZXDH_MSG_CHAN_END_VF, + .virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_CTRLCH_OFFSET), + }; + + return zxdh_bar_chan_enable(&misx_info, &hw->vport.vport); +} diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index a0b46c900a..7fbab4b214 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -14,6 +14,21 @@ extern "C" { #endif #define ZXDH_BAR0_INDEX 0 +#define ZXDH_CTRLCH_OFFSET (0x2000) + +#define ZXDH_MSIX_INTR_MSG_VEC_BASE 1 + +#define ZXDH_BAR_MSG_POLLING_SPAN 100 +#define ZXDH_BAR_MSG_POLL_CNT_PER_MS (1 * 1000 / ZXDH_BAR_MSG_POLLING_SPAN) +#define ZXDH_BAR_MSG_POLL_CNT_PER_S (1 * 1000 * 1000 / ZXDH_BAR_MSG_POLLING_SPAN) +#define ZXDH_BAR_MSG_TIMEOUT_TH (10 * 1000 * 1000 / ZXDH_BAR_MSG_POLLING_SPAN) + +#define ZXDH_BAR_CHAN_MSG_SYNC 0 + +#define ZXDH_BAR_MSG_ADDR_CHAN_INTERVAL (2 * 1024) /* channel size */ +#define ZXDH_BAR_MSG_PLAYLOAD_OFFSET (sizeof(struct zxdh_bar_msg_header)) +#define ZXDH_BAR_MSG_PAYLOAD_MAX_LEN \ + (ZXDH_BAR_MSG_ADDR_CHAN_INTERVAL - sizeof(struct zxdh_bar_msg_header)) enum ZXDH_DRIVER_TYPE { ZXDH_MSG_CHAN_END_MPF = 0, @@ -22,6 +37,13 @@ enum ZXDH_DRIVER_TYPE { ZXDH_MSG_CHAN_END_RISC, }; +enum ZXDH_MSG_VEC { + ZXDH_MSIX_FROM_PFVF = ZXDH_MSIX_INTR_MSG_VEC_BASE, + ZXDH_MSIX_FROM_MPF, + ZXDH_MSIX_FROM_RISCV, + ZXDH_MSG_VEC_NUM, +}; + enum ZXDH_BAR_MSG_RTN { ZXDH_BAR_MSG_OK = 0, ZXDH_BAR_MSG_ERR_MSGID, @@ -56,10 +78,117 @@ enum ZXDH_BAR_MSG_RTN { ZXDH_BAR_MSG_ERR_SOCKET, /* netlink sockte err */ }; +enum zxdh_bar_module_id { + ZXDH_BAR_MODULE_DBG = 0, /* 0: debug */ + ZXDH_BAR_MODULE_TBL, /* 1: resource table */ + ZXDH_BAR_MODULE_MISX, /* 2: config msix */ + ZXDH_BAR_MODULE_SDA, /* 3: */ + ZXDH_BAR_MODULE_RDMA, /* 4: */ + ZXDH_BAR_MODULE_DEMO, /* 5: channel test */ + ZXDH_BAR_MODULE_SMMU, /* 6: */ + ZXDH_BAR_MODULE_MAC, /* 7: mac rx/tx stats */ + ZXDH_BAR_MODULE_VDPA, /* 8: vdpa live migration */ + ZXDH_BAR_MODULE_VQM, /* 9: vqm live migration */ + ZXDH_BAR_MODULE_NP, /* 10: vf msg callback np */ + ZXDH_BAR_MODULE_VPORT, /* 11: 
get vport */ + ZXDH_BAR_MODULE_BDF, /* 12: get bdf */ + ZXDH_BAR_MODULE_RISC_READY, /* 13: */ + ZXDH_BAR_MODULE_REVERSE, /* 14: byte stream reverse */ + ZXDH_BAR_MDOULE_NVME, /* 15: */ + ZXDH_BAR_MDOULE_NPSDK, /* 16: */ + ZXDH_BAR_MODULE_NP_TODO, /* 17: */ + ZXDH_MODULE_BAR_MSG_TO_PF, /* 18: */ + ZXDH_MODULE_BAR_MSG_TO_VF, /* 19: */ + + ZXDH_MODULE_FLASH = 32, + ZXDH_BAR_MODULE_OFFSET_GET = 33, + ZXDH_BAR_EVENT_OVS_WITH_VCB = 36, + + ZXDH_BAR_MSG_MODULE_NUM = 100, +}; + +struct zxdh_msix_para { + uint16_t pcie_id; + uint16_t vector_risc; + uint16_t vector_pfvf; + uint16_t vector_mpf; + uint64_t virt_addr; + uint16_t driver_type; /* refer to DRIVER_TYPE */ +}; + +struct zxdh_msix_msg { + uint16_t pcie_id; + uint16_t vector_risc; + uint16_t vector_pfvf; + uint16_t vector_mpf; +}; + +struct zxdh_pci_bar_msg { + uint64_t virt_addr; /* bar addr */ + void *payload_addr; + uint16_t payload_len; + uint16_t emec; + uint16_t src; /* refer to BAR_DRIVER_TYPE */ + uint16_t dst; /* refer to BAR_DRIVER_TYPE */ + uint16_t module_id; + uint16_t src_pcieid; + uint16_t dst_pcieid; + uint16_t usr; +}; + +struct zxdh_bar_msix_reps { + uint16_t pcie_id; + uint16_t check; + uint16_t vport; + uint16_t rsv; +} __rte_packed; + +struct zxdh_bar_offset_reps { + uint16_t check; + uint16_t rsv; + uint32_t offset; + uint32_t length; +} __rte_packed; + +struct zxdh_bar_recv_msg { + uint8_t reps_ok; + uint16_t reps_len; + uint8_t rsv; + /* */ + union { + struct zxdh_bar_msix_reps msix_reps; + struct zxdh_bar_offset_reps offset_reps; + } __rte_packed; +} __rte_packed; + +struct zxdh_msg_recviver_mem { + void *recv_buffer; /* first 4B is head, followed by payload */ + uint64_t buffer_len; +}; + +struct zxdh_bar_msg_header { + uint8_t valid : 1; /* used by __bar_chan_msg_valid_set/get */ + uint8_t sync : 1; + uint8_t emec : 1; /* emergency */ + uint8_t ack : 1; /* ack msg */ + uint8_t poll : 1; + uint8_t usr : 1; + uint8_t rsv; + uint16_t module_id; + uint16_t len; + uint16_t msg_id; + uint16_t src_pcieid; + uint16_t dst_pcieid; /* used in PF-->VF */ +}; + int zxdh_msg_chan_init(void); int zxdh_bar_msg_chan_exit(void); int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev); +int zxdh_msg_chan_enable(struct rte_eth_dev *dev); +int zxdh_bar_chan_sync_msg_send(struct zxdh_pci_bar_msg *in, + struct zxdh_msg_recviver_mem *result); + #ifdef __cplusplus } #endif -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 59853 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
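A note on the sync-send path above: it couples a bounded polling loop with a byte-sum token handshake. Below is a minimal standalone sketch of both pieces, assuming only the ZXDH_BAR_MSG_* macro values quoted from zxdh_msg.h in this patch (the helper name is illustrative, not driver API):

#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Values as defined in zxdh_msg.h above. */
#define ZXDH_BAR_MSG_POLLING_SPAN 100 /* microseconds blocked per poll */
#define ZXDH_BAR_MSG_TIMEOUT_TH (10 * 1000 * 1000 / ZXDH_BAR_MSG_POLLING_SPAN)

/* Worst case before ZXDH_BAR_MSG_ERR_TIME_OUT: 100000 polls * 100 us = 10 s. */
static_assert(ZXDH_BAR_MSG_TIMEOUT_TH * ZXDH_BAR_MSG_POLLING_SPAN == 10 * 1000 * 1000,
	"sync send blocks for at most 10 seconds");

/* Same arithmetic as bar_get_sum(): the firmware echoes the low 16 bits of
 * the byte sum of the request in the reply's check field, and
 * zxdh_bar_chan_enable() compares that token before trusting the returned
 * vport. */
static uint16_t example_bar_token(const uint8_t *ptr, size_t len)
{
	uint64_t sum = 0;
	size_t idx;

	for (idx = 0; idx < len; idx++)
		sum += ptr[idx];
	return (uint16_t)sum;
}
/* e.g. a three-byte request {0x12, 0x34, 0xff} yields token 0x0145. */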
* [PATCH v8 6/9] net/zxdh: add zxdh get device backend infos 2024-10-30 9:01 ` [PATCH v8 0/9] net/zxdh: introduce net zxdh driver Junlong Wang ` (4 preceding siblings ...) 2024-10-30 9:01 ` [PATCH v8 5/9] net/zxdh: add msg chan enable implementation Junlong Wang @ 2024-10-30 9:01 ` Junlong Wang 2024-10-30 9:01 ` [PATCH v8 7/9] net/zxdh: add configure zxdh intr implementation Junlong Wang ` (2 subsequent siblings) 8 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-10-30 9:01 UTC (permalink / raw) To: dev; +Cc: wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 16207 bytes --] Add zxdh get device backend infos, use msg chan to send msg get. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/meson.build | 1 + drivers/net/zxdh/zxdh_common.c | 250 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_common.h | 30 ++++ drivers/net/zxdh/zxdh_ethdev.c | 35 +++++ drivers/net/zxdh/zxdh_ethdev.h | 5 + drivers/net/zxdh/zxdh_msg.c | 17 +-- drivers/net/zxdh/zxdh_msg.h | 21 +++ drivers/net/zxdh/zxdh_queue.h | 4 + drivers/net/zxdh/zxdh_rxtx.h | 4 + 9 files changed, 359 insertions(+), 8 deletions(-) create mode 100644 drivers/net/zxdh/zxdh_common.c create mode 100644 drivers/net/zxdh/zxdh_common.h diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build index 2e0c8fddae..a16db47f89 100644 --- a/drivers/net/zxdh/meson.build +++ b/drivers/net/zxdh/meson.build @@ -17,4 +17,5 @@ sources = files( 'zxdh_ethdev.c', 'zxdh_pci.c', 'zxdh_msg.c', + 'zxdh_common.c', ) diff --git a/drivers/net/zxdh/zxdh_common.c b/drivers/net/zxdh/zxdh_common.c new file mode 100644 index 0000000000..0cb5380c5e --- /dev/null +++ b/drivers/net/zxdh/zxdh_common.c @@ -0,0 +1,250 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#include <stdint.h> +#include <string.h> + +#include <ethdev_driver.h> +#include <rte_malloc.h> +#include <rte_memcpy.h> + +#include "zxdh_ethdev.h" +#include "zxdh_logs.h" +#include "zxdh_msg.h" +#include "zxdh_common.h" + +#define ZXDH_MSG_RSP_SIZE_MAX 512 + +#define ZXDH_COMMON_TABLE_READ 0 +#define ZXDH_COMMON_TABLE_WRITE 1 + +#define ZXDH_COMMON_FIELD_PHYPORT 6 + +#define ZXDH_RSC_TBL_CONTENT_LEN_MAX (257 * 2) + +#define ZXDH_REPS_HEADER_OFFSET 4 +#define ZXDH_TBL_MSG_PRO_SUCCESS 0xaa + +struct zxdh_common_msg { + uint8_t type; /* 0:read table 1:write table */ + uint8_t field; + uint16_t pcie_id; + uint16_t slen; /* Data length for write table */ + uint16_t reserved; +} __rte_packed; + +struct zxdh_common_rsp_hdr { + uint8_t rsp_status; + uint16_t rsp_len; + uint8_t reserved; + uint8_t payload_status; + uint8_t rsv; + uint16_t payload_len; +} __rte_packed; + +struct zxdh_tbl_msg_header { + uint8_t type; /* r/w */ + uint8_t field; + uint16_t pcieid; + uint16_t slen; + uint16_t rsv; +}; +struct zxdh_tbl_msg_reps_header { + uint8_t check; + uint8_t rsv; + uint16_t len; +}; + +static int32_t zxdh_fill_common_msg(struct zxdh_hw *hw, + struct zxdh_pci_bar_msg *desc, + uint8_t type, + uint8_t field, + void *buff, + uint16_t buff_size) +{ + uint64_t msg_len = sizeof(struct zxdh_common_msg) + buff_size; + + desc->payload_addr = rte_zmalloc(NULL, msg_len, 0); + if (unlikely(desc->payload_addr == NULL)) { + PMD_DRV_LOG(ERR, "Failed to allocate msg_data"); + return -ENOMEM; + } + memset(desc->payload_addr, 0, msg_len); + desc->payload_len = msg_len; + struct zxdh_common_msg *msg_data = (struct zxdh_common_msg *)desc->payload_addr; + + msg_data->type = type; + msg_data->field = field; + 
msg_data->pcie_id = hw->pcie_id; + msg_data->slen = buff_size; + if (buff_size != 0) + rte_memcpy(msg_data + 1, buff, buff_size); + + return 0; +} + +static int32_t zxdh_send_command(struct zxdh_hw *hw, + struct zxdh_pci_bar_msg *desc, + enum zxdh_bar_module_id module_id, + struct zxdh_msg_recviver_mem *msg_rsp) +{ + desc->virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_CTRLCH_OFFSET); + desc->src = hw->is_pf ? ZXDH_MSG_CHAN_END_PF : ZXDH_MSG_CHAN_END_VF; + desc->dst = ZXDH_MSG_CHAN_END_RISC; + desc->module_id = module_id; + desc->src_pcieid = hw->pcie_id; + + msg_rsp->buffer_len = ZXDH_MSG_RSP_SIZE_MAX; + msg_rsp->recv_buffer = rte_zmalloc(NULL, msg_rsp->buffer_len, 0); + if (unlikely(msg_rsp->recv_buffer == NULL)) { + PMD_DRV_LOG(ERR, "Failed to allocate messages response"); + return -ENOMEM; + } + + if (zxdh_bar_chan_sync_msg_send(desc, msg_rsp) != ZXDH_BAR_MSG_OK) { + PMD_DRV_LOG(ERR, "Failed to send sync messages or receive response"); + rte_free(msg_rsp->recv_buffer); + return -1; + } + + return 0; +} + +static int32_t zxdh_common_rsp_check(struct zxdh_msg_recviver_mem *msg_rsp, + void *buff, uint16_t len) +{ + struct zxdh_common_rsp_hdr *rsp_hdr = (struct zxdh_common_rsp_hdr *)msg_rsp->recv_buffer; + + if (rsp_hdr->payload_status != 0xaa || rsp_hdr->payload_len != len) { + PMD_DRV_LOG(ERR, "Common response is invalid, status:0x%x rsp_len:%d", + rsp_hdr->payload_status, rsp_hdr->payload_len); + return -1; + } + if (len != 0) + rte_memcpy(buff, rsp_hdr + 1, len); + + return 0; +} + +static int32_t zxdh_common_table_read(struct zxdh_hw *hw, uint8_t field, + void *buff, uint16_t buff_size) +{ + struct zxdh_msg_recviver_mem msg_rsp; + struct zxdh_pci_bar_msg desc; + int32_t ret = 0; + + if (!hw->msg_chan_init) { + PMD_DRV_LOG(ERR, "Bar messages channel not initialized"); + return -1; + } + + ret = zxdh_fill_common_msg(hw, &desc, ZXDH_COMMON_TABLE_READ, field, NULL, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to fill common msg"); + return ret; + } + + ret = zxdh_send_command(hw, &desc, ZXDH_BAR_MODULE_TBL, &msg_rsp); + if (ret != 0) + goto free_msg_data; + + ret = zxdh_common_rsp_check(&msg_rsp, buff, buff_size); + if (ret != 0) + goto free_rsp_data; + +free_rsp_data: + rte_free(msg_rsp.recv_buffer); +free_msg_data: + rte_free(desc.payload_addr); + return ret; +} + +int32_t zxdh_phyport_get(struct rte_eth_dev *dev, uint8_t *phyport) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + int32_t ret = zxdh_common_table_read(hw, ZXDH_COMMON_FIELD_PHYPORT, + (void *)phyport, sizeof(*phyport)); + return ret; +} + +static inline void zxdh_fill_res_para(struct rte_eth_dev *dev, struct zxdh_res_para *param) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + param->pcie_id = hw->pcie_id; + param->virt_addr = hw->bar_addr[0] + ZXDH_CTRLCH_OFFSET; + param->src_type = ZXDH_BAR_MODULE_TBL; +} + +static int zxdh_get_res_info(struct zxdh_res_para *dev, uint8_t field, uint8_t *res, uint16_t *len) +{ + struct zxdh_pci_bar_msg in = {0}; + uint8_t recv_buf[ZXDH_RSC_TBL_CONTENT_LEN_MAX + 8] = {0}; + int ret = 0; + + if (!res || !dev) + return ZXDH_BAR_MSG_ERR_NULL; + + struct zxdh_tbl_msg_header tbl_msg = { + .type = ZXDH_TBL_TYPE_READ, + .field = field, + .pcieid = dev->pcie_id, + .slen = 0, + .rsv = 0, + }; + + in.virt_addr = dev->virt_addr; + in.payload_addr = &tbl_msg; + in.payload_len = sizeof(tbl_msg); + in.src = dev->src_type; + in.dst = ZXDH_MSG_CHAN_END_RISC; + in.module_id = ZXDH_BAR_MODULE_TBL; + in.src_pcieid = dev->pcie_id; + + struct zxdh_msg_recviver_mem result = { + 
.recv_buffer = recv_buf, + .buffer_len = sizeof(recv_buf), + }; + ret = zxdh_bar_chan_sync_msg_send(&in, &result); + + if (ret != ZXDH_BAR_MSG_OK) { + PMD_DRV_LOG(ERR, + "send sync_msg failed. pcieid: 0x%x, ret: %d.", dev->pcie_id, ret); + return ret; + } + struct zxdh_tbl_msg_reps_header *tbl_reps = + (struct zxdh_tbl_msg_reps_header *)(recv_buf + ZXDH_REPS_HEADER_OFFSET); + + if (tbl_reps->check != ZXDH_TBL_MSG_PRO_SUCCESS) { + PMD_DRV_LOG(ERR, + "get resource_field failed. pcieid: 0x%x, ret: %d.", dev->pcie_id, ret); + return ret; + } + *len = tbl_reps->len; + rte_memcpy(res, (recv_buf + ZXDH_REPS_HEADER_OFFSET + + sizeof(struct zxdh_tbl_msg_reps_header)), *len); + return ret; +} + +static int zxdh_get_res_panel_id(struct zxdh_res_para *in, uint8_t *panel_id) +{ + uint8_t reps = 0; + uint16_t reps_len = 0; + + if (zxdh_get_res_info(in, ZXDH_TBL_FIELD_PNLID, &reps, &reps_len) != ZXDH_BAR_MSG_OK) + return -1; + + *panel_id = reps; + return ZXDH_BAR_MSG_OK; +} + +int32_t zxdh_pannelid_get(struct rte_eth_dev *dev, uint8_t *pannelid) +{ + struct zxdh_res_para param; + + zxdh_fill_res_para(dev, &param); + int32_t ret = zxdh_get_res_panel_id(&param, pannelid); + return ret; +} diff --git a/drivers/net/zxdh/zxdh_common.h b/drivers/net/zxdh/zxdh_common.h new file mode 100644 index 0000000000..f098ae4cf9 --- /dev/null +++ b/drivers/net/zxdh/zxdh_common.h @@ -0,0 +1,30 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#ifndef ZXDH_COMMON_H +#define ZXDH_COMMON_H + +#include <stdint.h> +#include <rte_ethdev.h> + +#include "zxdh_ethdev.h" + +#ifdef __cplusplus +extern "C" { +#endif + +struct zxdh_res_para { + uint64_t virt_addr; + uint16_t pcie_id; + uint16_t src_type; /* refer to BAR_DRIVER_TYPE */ +}; + +int32_t zxdh_phyport_get(struct rte_eth_dev *dev, uint8_t *phyport); +int32_t zxdh_pannelid_get(struct rte_eth_dev *dev, uint8_t *pannelid); + +#ifdef __cplusplus +} +#endif + +#endif /* ZXDH_COMMON_H */ diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index a729344288..8d9df218ce 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -10,9 +10,21 @@ #include "zxdh_logs.h" #include "zxdh_pci.h" #include "zxdh_msg.h" +#include "zxdh_common.h" struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; +uint16_t zxdh_vport_to_vfid(union zxdh_virport_num v) +{ + /* epid > 4 is local soft queue. 
return 1192 */ + if (v.epid > 4) + return 1192; + if (v.vf_flag) + return v.epid * 256 + v.vfid; + else + return (v.epid * 8 + v.pfid) + 1152; +} + static int32_t zxdh_init_device(struct rte_eth_dev *eth_dev) { struct zxdh_hw *hw = eth_dev->data->dev_private; @@ -44,6 +56,25 @@ static int32_t zxdh_init_device(struct rte_eth_dev *eth_dev) return ret; } +static int zxdh_agent_comm(struct rte_eth_dev *eth_dev, struct zxdh_hw *hw) +{ + if (zxdh_phyport_get(eth_dev, &hw->phyport) != 0) { + PMD_INIT_LOG(ERR, "Failed to get phyport"); + return -1; + } + PMD_INIT_LOG(INFO, "Get phyport success: 0x%x", hw->phyport); + + hw->vfid = zxdh_vport_to_vfid(hw->vport); + + if (zxdh_pannelid_get(eth_dev, &hw->panel_id) != 0) { + PMD_INIT_LOG(ERR, "Failed to get panel_id"); + return -1; + } + PMD_INIT_LOG(INFO, "Get panel id success: 0x%x", hw->panel_id); + + return 0; +} + static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) { struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); @@ -103,6 +134,10 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) goto err_zxdh_init; } + ret = zxdh_agent_comm(eth_dev, hw); + if (ret != 0) + goto err_zxdh_init; + return ret; err_zxdh_init: diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index bed1334690..1ee8dd744c 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -56,6 +56,7 @@ struct zxdh_hw { uint16_t pcie_id; uint16_t device_id; uint16_t port_id; + uint16_t vfid; uint8_t *isr; uint8_t weak_barriers; @@ -65,8 +66,12 @@ struct zxdh_hw { uint8_t duplex; uint8_t is_pf; uint8_t msg_chan_init; + uint8_t phyport; + uint8_t panel_id; }; +uint16_t zxdh_vport_to_vfid(union zxdh_virport_num v); + #ifdef __cplusplus } #endif diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c index 1bf72a9b7c..4105daf5c6 100644 --- a/drivers/net/zxdh/zxdh_msg.c +++ b/drivers/net/zxdh/zxdh_msg.c @@ -133,7 +133,9 @@ struct zxdh_seqid_ring { }; struct zxdh_seqid_ring g_seqid_ring = {0}; -static uint16_t pcie_id_to_hard_lock(uint16_t src_pcieid, uint8_t dst) +static uint8_t tmp_msg_header[ZXDH_BAR_MSG_ADDR_CHAN_INTERVAL]; + +static uint16_t zxdh_pcie_id_to_hard_lock(uint16_t src_pcieid, uint8_t dst) { uint16_t lock_id = 0; uint16_t pf_idx = (src_pcieid & ZXDH_PCIEID_PF_IDX_MASK) >> ZXDH_PCIEID_PF_IDX_OFFSET; @@ -209,11 +211,11 @@ static int32_t zxdh_spinlock_unlock(uint32_t virt_lock_id, uint64_t virt_addr, u */ static int bar_chan_pf_init_spinlock(uint16_t pcie_id, uint64_t bar_base_addr) { - int lock_id = pcie_id_to_hard_lock(pcie_id, ZXDH_MSG_CHAN_END_RISC); + int lock_id = zxdh_pcie_id_to_hard_lock(pcie_id, ZXDH_MSG_CHAN_END_RISC); zxdh_spinlock_unlock(lock_id, bar_base_addr + ZXDH_BAR0_SPINLOCK_OFFSET, bar_base_addr + ZXDH_HW_LABEL_OFFSET); - lock_id = pcie_id_to_hard_lock(pcie_id, ZXDH_MSG_CHAN_END_VF); + lock_id = zxdh_pcie_id_to_hard_lock(pcie_id, ZXDH_MSG_CHAN_END_VF); zxdh_spinlock_unlock(lock_id, bar_base_addr + ZXDH_BAR0_SPINLOCK_OFFSET, bar_base_addr + ZXDH_HW_LABEL_OFFSET); return 0; @@ -409,7 +411,7 @@ static uint16_t zxdh_bar_chan_subchan_addr_get(struct zxdh_pci_bar_msg *in, uint static int zxdh_bar_hard_lock(uint16_t src_pcieid, uint8_t dst, uint64_t virt_addr) { int ret = 0; - uint16_t lockid = pcie_id_to_hard_lock(src_pcieid, dst); + uint16_t lockid = zxdh_pcie_id_to_hard_lock(src_pcieid, dst); PMD_MSG_LOG(DEBUG, "dev pcieid: 0x%x lock, get hardlockid: %u", src_pcieid, lockid); if (dst == ZXDH_MSG_CHAN_END_RISC) @@ -426,7 +428,7 @@ static int zxdh_bar_hard_lock(uint16_t 
src_pcieid, uint8_t dst, uint64_t virt_ad static void zxdh_bar_hard_unlock(uint16_t src_pcieid, uint8_t dst, uint64_t virt_addr) { - uint16_t lockid = pcie_id_to_hard_lock(src_pcieid, dst); + uint16_t lockid = zxdh_pcie_id_to_hard_lock(src_pcieid, dst); PMD_MSG_LOG(DEBUG, "dev pcieid: 0x%x unlock, get hardlockid: %u", src_pcieid, lockid); if (dst == ZXDH_MSG_CHAN_END_RISC) @@ -586,7 +588,6 @@ static uint16_t zxdh_bar_chan_msg_valid_set(uint64_t subchan_addr, uint8_t valid return ZXDH_BAR_MSG_OK; } -static uint8_t temp_msg[ZXDH_BAR_MSG_ADDR_CHAN_INTERVAL]; static uint16_t zxdh_bar_chan_msg_send(uint64_t subchan_addr, void *payload_addr, uint16_t payload_len, @@ -596,13 +597,13 @@ static uint16_t zxdh_bar_chan_msg_send(uint64_t subchan_addr, ret = zxdh_bar_chan_msg_header_set(subchan_addr, msg_header); ret = zxdh_bar_chan_msg_header_get(subchan_addr, - (struct zxdh_bar_msg_header *)temp_msg); + (struct zxdh_bar_msg_header *)tmp_msg_header); ret = zxdh_bar_chan_msg_payload_set(subchan_addr, (uint8_t *)(payload_addr), payload_len); ret = zxdh_bar_chan_msg_payload_get(subchan_addr, - temp_msg, payload_len); + tmp_msg_header, payload_len); ret = zxdh_bar_chan_msg_valid_set(subchan_addr, ZXDH_BAR_MSG_CHAN_USED); return ret; diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index 7fbab4b214..7da60ee189 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -107,6 +107,27 @@ enum zxdh_bar_module_id { ZXDH_BAR_MSG_MODULE_NUM = 100, }; +enum ZXDH_RES_TBL_FILED { + ZXDH_TBL_FIELD_PCIEID = 0, + ZXDH_TBL_FIELD_BDF = 1, + ZXDH_TBL_FIELD_MSGCH = 2, + ZXDH_TBL_FIELD_DATACH = 3, + ZXDH_TBL_FIELD_VPORT = 4, + ZXDH_TBL_FIELD_PNLID = 5, + ZXDH_TBL_FIELD_PHYPORT = 6, + ZXDH_TBL_FIELD_SERDES_NUM = 7, + ZXDH_TBL_FIELD_NP_PORT = 8, + ZXDH_TBL_FIELD_SPEED = 9, + ZXDH_TBL_FIELD_HASHID = 10, + ZXDH_TBL_FIELD_NON, +}; + +enum ZXDH_TBL_MSG_TYPE { + ZXDH_TBL_TYPE_READ, + ZXDH_TBL_TYPE_WRITE, + ZXDH_TBL_TYPE_NON, +}; + struct zxdh_msix_para { uint16_t pcie_id; uint16_t vector_risc; diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h index fd73f14e2d..66f37ec612 100644 --- a/drivers/net/zxdh/zxdh_queue.h +++ b/drivers/net/zxdh/zxdh_queue.h @@ -102,4 +102,8 @@ struct zxdh_virtqueue { struct zxdh_vq_desc_extra vq_descx[]; } __rte_packed; +#ifdef __cplusplus +} +#endif + #endif /* ZXDH_QUEUE_H */ diff --git a/drivers/net/zxdh/zxdh_rxtx.h b/drivers/net/zxdh/zxdh_rxtx.h index ccac7e7834..31b1c8f0a5 100644 --- a/drivers/net/zxdh/zxdh_rxtx.h +++ b/drivers/net/zxdh/zxdh_rxtx.h @@ -48,4 +48,8 @@ struct zxdh_virtnet_tx { const struct rte_memzone *mz; /* mem zone to populate TX ring. */ } __rte_packed; +#ifdef __cplusplus +} +#endif + #endif /* ZXDH_RXTX_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 33972 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
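zxdh_vport_to_vfid() in this patch folds PFs and VFs into one flat vfid space. A self-contained sketch of the same mapping, taking the decoded fields as plain arguments so it does not depend on the exact bitfield layout of union zxdh_virport_num (the function name is illustrative):

#include <stdbool.h>
#include <stdint.h>

static uint16_t example_vport_to_vfid(uint8_t epid, bool vf_flag,
		uint8_t pfid, uint16_t vfid)
{
	if (epid > 4)
		return 1192; /* local soft queue */
	if (vf_flag)
		return epid * 256 + vfid; /* VF region */
	return (epid * 8 + pfid) + 1152; /* PF region: 1152..1191 */
}

/*
 * Worked values:
 *   example_vport_to_vfid(0, true, 0, 5)  == 5
 *   example_vport_to_vfid(1, true, 0, 3)  == 259
 *   example_vport_to_vfid(2, false, 1, 0) == 1169
 *   example_vport_to_vfid(5, false, 0, 0) == 1192
 */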
* [PATCH v8 7/9] net/zxdh: add configure zxdh intr implementation 2024-10-30 9:01 ` [PATCH v8 0/9] net/zxdh: introduce net zxdh driver Junlong Wang ` (5 preceding siblings ...) 2024-10-30 9:01 ` [PATCH v8 6/9] net/zxdh: add zxdh get device backend infos Junlong Wang @ 2024-10-30 9:01 ` Junlong Wang 2024-10-30 9:01 ` [PATCH v8 8/9] net/zxdh: add zxdh dev infos get ops Junlong Wang 2024-10-30 9:01 ` [PATCH v8 9/9] net/zxdh: add zxdh dev configure ops Junlong Wang 8 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-10-30 9:01 UTC (permalink / raw) To: dev; +Cc: wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 24868 bytes --] configure zxdh intr include risc,dtb. and release intr. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/zxdh_ethdev.c | 300 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev.h | 8 + drivers/net/zxdh/zxdh_msg.c | 188 +++++++++++++++++++++ drivers/net/zxdh/zxdh_msg.h | 13 ++ drivers/net/zxdh/zxdh_pci.c | 62 +++++++ drivers/net/zxdh/zxdh_pci.h | 12 ++ 6 files changed, 583 insertions(+) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 8d9df218ce..5963aed949 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -25,6 +25,301 @@ uint16_t zxdh_vport_to_vfid(union zxdh_virport_num v) return (v.epid * 8 + v.pfid) + 1152; } +static void zxdh_queues_unbind_intr(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + int32_t i; + + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + ZXDH_VTPCI_OPS(hw)->set_queue_irq(hw, hw->vqs[i * 2], ZXDH_MSI_NO_VECTOR); + ZXDH_VTPCI_OPS(hw)->set_queue_irq(hw, hw->vqs[i * 2 + 1], ZXDH_MSI_NO_VECTOR); + } +} + + +static int32_t zxdh_intr_unmask(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + if (rte_intr_ack(dev->intr_handle) < 0) + return -1; + + hw->use_msix = zxdh_vtpci_msix_detect(RTE_ETH_DEV_TO_PCI(dev)); + + return 0; +} + +static void zxdh_devconf_intr_handler(void *param) +{ + struct rte_eth_dev *dev = param; + + if (zxdh_intr_unmask(dev) < 0) + PMD_DRV_LOG(ERR, "interrupt enable failed"); +} + + +/* Interrupt handler triggered by NIC for handling specific interrupt. */ +static void zxdh_fromriscv_intr_handler(void *param) +{ + struct rte_eth_dev *dev = param; + struct zxdh_hw *hw = dev->data->dev_private; + uint64_t virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_CTRLCH_OFFSET); + + if (hw->is_pf) { + PMD_INIT_LOG(DEBUG, "zxdh_risc2pf_intr_handler"); + zxdh_bar_irq_recv(ZXDH_MSG_CHAN_END_RISC, ZXDH_MSG_CHAN_END_PF, virt_addr, dev); + } else { + PMD_INIT_LOG(DEBUG, "zxdh_riscvf_intr_handler"); + zxdh_bar_irq_recv(ZXDH_MSG_CHAN_END_RISC, ZXDH_MSG_CHAN_END_VF, virt_addr, dev); + } +} + +/* Interrupt handler triggered by NIC for handling specific interrupt. 
*/ +static void zxdh_frompfvf_intr_handler(void *param) +{ + struct rte_eth_dev *dev = param; + struct zxdh_hw *hw = dev->data->dev_private; + uint64_t virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + + ZXDH_MSG_CHAN_PFVFSHARE_OFFSET); + + if (hw->is_pf) { + PMD_INIT_LOG(DEBUG, "zxdh_vf2pf_intr_handler"); + zxdh_bar_irq_recv(ZXDH_MSG_CHAN_END_VF, ZXDH_MSG_CHAN_END_PF, virt_addr, dev); + } else { + PMD_INIT_LOG(DEBUG, "zxdh_pf2vf_intr_handler"); + zxdh_bar_irq_recv(ZXDH_MSG_CHAN_END_PF, ZXDH_MSG_CHAN_END_VF, virt_addr, dev); + } +} + +static void zxdh_intr_cb_reg(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) + rte_intr_callback_unregister(dev->intr_handle, zxdh_devconf_intr_handler, dev); + + /* register callback to update dev config intr */ + rte_intr_callback_register(dev->intr_handle, zxdh_devconf_intr_handler, dev); + /* Register rsic_v to pf interrupt callback */ + struct rte_intr_handle *tmp = hw->risc_intr + + (ZXDH_MSIX_FROM_PFVF - ZXDH_MSIX_INTR_MSG_VEC_BASE); + + rte_intr_callback_register(tmp, zxdh_frompfvf_intr_handler, dev); + + tmp = hw->risc_intr + (ZXDH_MSIX_FROM_RISCV - ZXDH_MSIX_INTR_MSG_VEC_BASE); + rte_intr_callback_register(tmp, zxdh_fromriscv_intr_handler, dev); +} + +static void zxdh_intr_cb_unreg(struct rte_eth_dev *dev) +{ + if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) + rte_intr_callback_unregister(dev->intr_handle, zxdh_devconf_intr_handler, dev); + + struct zxdh_hw *hw = dev->data->dev_private; + + /* register callback to update dev config intr */ + rte_intr_callback_unregister(dev->intr_handle, zxdh_devconf_intr_handler, dev); + /* Register rsic_v to pf interrupt callback */ + struct rte_intr_handle *tmp = hw->risc_intr + + (ZXDH_MSIX_FROM_PFVF - ZXDH_MSIX_INTR_MSG_VEC_BASE); + + rte_intr_callback_unregister(tmp, zxdh_frompfvf_intr_handler, dev); + tmp = hw->risc_intr + (ZXDH_MSIX_FROM_RISCV - ZXDH_MSIX_INTR_MSG_VEC_BASE); + rte_intr_callback_unregister(tmp, zxdh_fromriscv_intr_handler, dev); +} + +static int32_t zxdh_intr_disable(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + if (!hw->intr_enabled) + return 0; + + zxdh_intr_cb_unreg(dev); + if (rte_intr_disable(dev->intr_handle) < 0) + return -1; + + hw->intr_enabled = 0; + return 0; +} + +static int32_t zxdh_intr_enable(struct rte_eth_dev *dev) +{ + int ret = 0; + struct zxdh_hw *hw = dev->data->dev_private; + + if (!hw->intr_enabled) { + zxdh_intr_cb_reg(dev); + ret = rte_intr_enable(dev->intr_handle); + if (unlikely(ret)) + PMD_INIT_LOG(ERR, "Failed to enable %s intr", dev->data->name); + + hw->intr_enabled = 1; + } + return ret; +} + +static int32_t zxdh_intr_release(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) + ZXDH_VTPCI_OPS(hw)->set_config_irq(hw, ZXDH_MSI_NO_VECTOR); + + zxdh_queues_unbind_intr(dev); + zxdh_intr_disable(dev); + + rte_intr_efd_disable(dev->intr_handle); + rte_intr_vec_list_free(dev->intr_handle); + rte_free(hw->risc_intr); + hw->risc_intr = NULL; + rte_free(hw->dtb_intr); + hw->dtb_intr = NULL; + return 0; +} + +static int32_t zxdh_setup_risc_interrupts(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint8_t i; + + if (!hw->risc_intr) { + PMD_INIT_LOG(ERR, " to allocate risc_intr"); + hw->risc_intr = rte_zmalloc("risc_intr", + ZXDH_MSIX_INTR_MSG_VEC_NUM * sizeof(struct rte_intr_handle), 0); + if (hw->risc_intr == NULL) { + PMD_INIT_LOG(ERR, "Failed to 
allocate risc_intr"); + return -ENOMEM; + } + } + + for (i = 0; i < ZXDH_MSIX_INTR_MSG_VEC_NUM; i++) { + if (dev->intr_handle->efds[i] < 0) { + PMD_INIT_LOG(ERR, "[%u]risc interrupt fd is invalid", i); + rte_free(hw->risc_intr); + hw->risc_intr = NULL; + return -1; + } + + struct rte_intr_handle *intr_handle = hw->risc_intr + i; + + intr_handle->fd = dev->intr_handle->efds[i]; + intr_handle->type = dev->intr_handle->type; + } + + return 0; +} + +static int32_t zxdh_setup_dtb_interrupts(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + if (!hw->dtb_intr) { + hw->dtb_intr = rte_zmalloc("dtb_intr", sizeof(struct rte_intr_handle), 0); + if (hw->dtb_intr == NULL) { + PMD_INIT_LOG(ERR, "Failed to allocate dtb_intr"); + return -ENOMEM; + } + } + + if (dev->intr_handle->efds[ZXDH_MSIX_INTR_DTB_VEC - 1] < 0) { + PMD_INIT_LOG(ERR, "[%d]dtb interrupt fd is invalid", ZXDH_MSIX_INTR_DTB_VEC - 1); + rte_free(hw->dtb_intr); + hw->dtb_intr = NULL; + return -1; + } + hw->dtb_intr->fd = dev->intr_handle->efds[ZXDH_MSIX_INTR_DTB_VEC - 1]; + hw->dtb_intr->type = dev->intr_handle->type; + return 0; +} + +static int32_t zxdh_queues_bind_intr(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + int32_t i; + uint16_t vec; + + if (!dev->data->dev_conf.intr_conf.rxq) { + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + vec = ZXDH_VTPCI_OPS(hw)->set_queue_irq(hw, + hw->vqs[i * 2], ZXDH_MSI_NO_VECTOR); + PMD_INIT_LOG(DEBUG, "vq%d irq set 0x%x, get 0x%x", + i * 2, ZXDH_MSI_NO_VECTOR, vec); + } + } else { + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + vec = ZXDH_VTPCI_OPS(hw)->set_queue_irq(hw, + hw->vqs[i * 2], i + ZXDH_QUEUE_INTR_VEC_BASE); + PMD_INIT_LOG(DEBUG, "vq%d irq set %d, get %d", + i * 2, i + ZXDH_QUEUE_INTR_VEC_BASE, vec); + } + } + /* mask all txq intr */ + for (i = 0; i < dev->data->nb_tx_queues; ++i) { + vec = ZXDH_VTPCI_OPS(hw)->set_queue_irq(hw, + hw->vqs[(i * 2) + 1], ZXDH_MSI_NO_VECTOR); + PMD_INIT_LOG(DEBUG, "vq%d irq set 0x%x, get 0x%x", + (i * 2) + 1, ZXDH_MSI_NO_VECTOR, vec); + } + return 0; +} + +static int32_t zxdh_configure_intr(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + int32_t ret = 0; + + if (!rte_intr_cap_multiple(dev->intr_handle)) { + PMD_INIT_LOG(ERR, "Multiple intr vector not supported"); + return -ENOTSUP; + } + zxdh_intr_release(dev); + uint8_t nb_efd = ZXDH_MSIX_INTR_DTB_VEC_NUM + ZXDH_MSIX_INTR_MSG_VEC_NUM; + + if (dev->data->dev_conf.intr_conf.rxq) + nb_efd += dev->data->nb_rx_queues; + + if (rte_intr_efd_enable(dev->intr_handle, nb_efd)) { + PMD_INIT_LOG(ERR, "Fail to create eventfd"); + return -1; + } + + if (rte_intr_vec_list_alloc(dev->intr_handle, "intr_vec", + hw->max_queue_pairs + ZXDH_INTR_NONQUE_NUM)) { + PMD_INIT_LOG(ERR, "Failed to allocate %u rxq vectors", + hw->max_queue_pairs + ZXDH_INTR_NONQUE_NUM); + return -ENOMEM; + } + PMD_INIT_LOG(DEBUG, "allocate %u rxq vectors", dev->intr_handle->vec_list_size); + if (zxdh_setup_risc_interrupts(dev) != 0) { + PMD_INIT_LOG(ERR, "Error setting up rsic_v interrupts!"); + ret = -1; + goto free_intr_vec; + } + if (zxdh_setup_dtb_interrupts(dev) != 0) { + PMD_INIT_LOG(ERR, "Error setting up dtb interrupts!"); + ret = -1; + goto free_intr_vec; + } + + if (zxdh_queues_bind_intr(dev) < 0) { + PMD_INIT_LOG(ERR, "Failed to bind queue/interrupt"); + ret = -1; + goto free_intr_vec; + } + + if (zxdh_intr_enable(dev) < 0) { + PMD_DRV_LOG(ERR, "interrupt enable failed"); + ret = -1; + goto free_intr_vec; + } + return 0; + +free_intr_vec: + 
zxdh_intr_release(dev); + return ret; +} + static int32_t zxdh_init_device(struct rte_eth_dev *eth_dev) { struct zxdh_hw *hw = eth_dev->data->dev_private; @@ -138,9 +433,14 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) if (ret != 0) goto err_zxdh_init; + ret = zxdh_configure_intr(eth_dev); + if (ret != 0) + goto err_zxdh_init; + return ret; err_zxdh_init: + zxdh_intr_release(eth_dev); zxdh_bar_msg_chan_exit(); rte_free(eth_dev->data->mac_addrs); eth_dev->data->mac_addrs = NULL; diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index 1ee8dd744c..0a7b574477 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -8,6 +8,10 @@ #include <rte_ether.h> #include "ethdev_driver.h" +#include <rte_interrupts.h> +#include <eal_interrupts.h> + +#include "zxdh_queue.h" #ifdef __cplusplus extern "C" { @@ -44,6 +48,9 @@ struct zxdh_hw { struct rte_eth_dev *eth_dev; struct zxdh_pci_common_cfg *common_cfg; struct zxdh_net_config *dev_cfg; + struct rte_intr_handle *risc_intr; + struct rte_intr_handle *dtb_intr; + struct zxdh_virtqueue **vqs; union zxdh_virport_num vport; uint64_t bar_addr[ZXDH_NUM_BARS]; @@ -60,6 +67,7 @@ struct zxdh_hw { uint8_t *isr; uint8_t weak_barriers; + uint8_t intr_enabled; uint8_t use_msix; uint8_t mac_addr[RTE_ETHER_ADDR_LEN]; diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c index 4105daf5c6..71c199ec2e 100644 --- a/drivers/net/zxdh/zxdh_msg.c +++ b/drivers/net/zxdh/zxdh_msg.c @@ -94,6 +94,12 @@ #define ZXDH_BAR_CHAN_INDEX_SEND 0 #define ZXDH_BAR_CHAN_INDEX_RECV 1 +#define ZXDH_BAR_CHAN_MSG_SYNC 0 +#define ZXDH_BAR_CHAN_MSG_NO_EMEC 0 +#define ZXDH_BAR_CHAN_MSG_EMEC 1 +#define ZXDH_BAR_CHAN_MSG_NO_ACK 0 +#define ZXDH_BAR_CHAN_MSG_ACK 1 + uint8_t subchan_id_tbl[ZXDH_BAR_MSG_SRC_NUM][ZXDH_BAR_MSG_DST_NUM] = { {ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_SEND}, {ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_RECV}, @@ -135,6 +141,36 @@ struct zxdh_seqid_ring g_seqid_ring = {0}; static uint8_t tmp_msg_header[ZXDH_BAR_MSG_ADDR_CHAN_INTERVAL]; +static inline const char *zxdh_module_id_name(int val) +{ + switch (val) { + case ZXDH_BAR_MODULE_DBG: return "ZXDH_BAR_MODULE_DBG"; + case ZXDH_BAR_MODULE_TBL: return "ZXDH_BAR_MODULE_TBL"; + case ZXDH_BAR_MODULE_MISX: return "ZXDH_BAR_MODULE_MISX"; + case ZXDH_BAR_MODULE_SDA: return "ZXDH_BAR_MODULE_SDA"; + case ZXDH_BAR_MODULE_RDMA: return "ZXDH_BAR_MODULE_RDMA"; + case ZXDH_BAR_MODULE_DEMO: return "ZXDH_BAR_MODULE_DEMO"; + case ZXDH_BAR_MODULE_SMMU: return "ZXDH_BAR_MODULE_SMMU"; + case ZXDH_BAR_MODULE_MAC: return "ZXDH_BAR_MODULE_MAC"; + case ZXDH_BAR_MODULE_VDPA: return "ZXDH_BAR_MODULE_VDPA"; + case ZXDH_BAR_MODULE_VQM: return "ZXDH_BAR_MODULE_VQM"; + case ZXDH_BAR_MODULE_NP: return "ZXDH_BAR_MODULE_NP"; + case ZXDH_BAR_MODULE_VPORT: return "ZXDH_BAR_MODULE_VPORT"; + case ZXDH_BAR_MODULE_BDF: return "ZXDH_BAR_MODULE_BDF"; + case ZXDH_BAR_MODULE_RISC_READY: return "ZXDH_BAR_MODULE_RISC_READY"; + case ZXDH_BAR_MODULE_REVERSE: return "ZXDH_BAR_MODULE_REVERSE"; + case ZXDH_BAR_MDOULE_NVME: return "ZXDH_BAR_MDOULE_NVME"; + case ZXDH_BAR_MDOULE_NPSDK: return "ZXDH_BAR_MDOULE_NPSDK"; + case ZXDH_BAR_MODULE_NP_TODO: return "ZXDH_BAR_MODULE_NP_TODO"; + case ZXDH_MODULE_BAR_MSG_TO_PF: return "ZXDH_MODULE_BAR_MSG_TO_PF"; + case ZXDH_MODULE_BAR_MSG_TO_VF: return "ZXDH_MODULE_BAR_MSG_TO_VF"; + case ZXDH_MODULE_FLASH: return "ZXDH_MODULE_FLASH"; + case ZXDH_BAR_MODULE_OFFSET_GET: return 
"ZXDH_BAR_MODULE_OFFSET_GET"; + case ZXDH_BAR_EVENT_OVS_WITH_VCB: return "ZXDH_BAR_EVENT_OVS_WITH_VCB"; + default: return "NA"; + } +} + static uint16_t zxdh_pcie_id_to_hard_lock(uint16_t src_pcieid, uint8_t dst) { uint16_t lock_id = 0; @@ -795,3 +831,155 @@ int zxdh_msg_chan_enable(struct rte_eth_dev *dev) return zxdh_bar_chan_enable(&misx_info, &hw->vport.vport); } + +static uint64_t zxdh_recv_addr_get(uint8_t src_type, uint8_t dst_type, uint64_t virt_addr) +{ + uint8_t src = zxdh_bar_msg_dst_index_trans(src_type); + uint8_t dst = zxdh_bar_msg_src_index_trans(dst_type); + + if (src == ZXDH_BAR_MSG_SRC_ERR || dst == ZXDH_BAR_MSG_DST_ERR) + return 0; + + uint8_t chan_id = chan_id_tbl[dst][src]; + uint8_t subchan_id = 1 - subchan_id_tbl[dst][src]; + + return zxdh_subchan_addr_cal(virt_addr, chan_id, subchan_id); +} + +static void zxdh_bar_msg_ack_async_msg_proc(struct zxdh_bar_msg_header *msg_header, + uint8_t *receiver_buff) +{ + struct zxdh_seqid_item *reps_info = &g_seqid_ring.reps_info_tbl[msg_header->msg_id]; + + if (reps_info->flag != ZXDH_REPS_INFO_FLAG_USED) { + PMD_MSG_LOG(ERR, "msg_id: %u is released", msg_header->msg_id); + return; + } + if (msg_header->len > reps_info->buffer_len - 4) { + PMD_MSG_LOG(ERR, "reps_buf_len is %u, but reps_msg_len is %u", + reps_info->buffer_len, msg_header->len + 4); + goto free_id; + } + uint8_t *reps_buffer = (uint8_t *)reps_info->reps_addr; + + rte_memcpy(reps_buffer + 4, receiver_buff, msg_header->len); + *(uint16_t *)(reps_buffer + 1) = msg_header->len; + *(uint8_t *)(reps_info->reps_addr) = ZXDH_REPS_HEADER_REPLYED; + +free_id: + zxdh_bar_chan_msgid_free(msg_header->msg_id); +} + +zxdh_bar_chan_msg_recv_callback msg_recv_func_tbl[ZXDH_BAR_MSG_MODULE_NUM]; +static void zxdh_bar_msg_sync_msg_proc(uint64_t reply_addr, + struct zxdh_bar_msg_header *msg_header, + uint8_t *receiver_buff, void *dev) +{ + uint8_t *reps_buffer = rte_malloc(NULL, ZXDH_BAR_MSG_PAYLOAD_MAX_LEN, 0); + + if (reps_buffer == NULL) + return; + + zxdh_bar_chan_msg_recv_callback recv_func = msg_recv_func_tbl[msg_header->module_id]; + uint16_t reps_len = 0; + + recv_func(receiver_buff, msg_header->len, reps_buffer, &reps_len, dev); + msg_header->ack = ZXDH_BAR_CHAN_MSG_ACK; + msg_header->len = reps_len; + zxdh_bar_chan_msg_header_set(reply_addr, msg_header); + zxdh_bar_chan_msg_payload_set(reply_addr, reps_buffer, reps_len); + zxdh_bar_chan_msg_valid_set(reply_addr, ZXDH_BAR_MSG_CHAN_USABLE); + rte_free(reps_buffer); +} + +static uint64_t zxdh_reply_addr_get(uint8_t sync, uint8_t src_type, + uint8_t dst_type, uint64_t virt_addr) +{ + uint8_t src = zxdh_bar_msg_dst_index_trans(src_type); + uint8_t dst = zxdh_bar_msg_src_index_trans(dst_type); + + if (src == ZXDH_BAR_MSG_SRC_ERR || dst == ZXDH_BAR_MSG_DST_ERR) + return 0; + + uint8_t chan_id = chan_id_tbl[dst][src]; + uint8_t subchan_id = 1 - subchan_id_tbl[dst][src]; + uint64_t recv_rep_addr; + + if (sync == ZXDH_BAR_CHAN_MSG_SYNC) + recv_rep_addr = zxdh_subchan_addr_cal(virt_addr, chan_id, subchan_id); + else + recv_rep_addr = zxdh_subchan_addr_cal(virt_addr, chan_id, 1 - subchan_id); + + return recv_rep_addr; +} + +static uint16_t zxdh_bar_chan_msg_header_check(struct zxdh_bar_msg_header *msg_header) +{ + if (msg_header->valid != ZXDH_BAR_MSG_CHAN_USED) { + PMD_MSG_LOG(ERR, "recv header ERR: valid label is not used."); + return ZXDH_BAR_MSG_ERR_MODULE; + } + uint8_t module_id = msg_header->module_id; + + if (module_id >= (uint8_t)ZXDH_BAR_MSG_MODULE_NUM) { + PMD_MSG_LOG(ERR, "recv header ERR: invalid module_id: %u.", module_id); 
+ return ZXDH_BAR_MSG_ERR_MODULE; + } + uint16_t len = msg_header->len; + + if (len > ZXDH_BAR_MSG_PAYLOAD_MAX_LEN) { + PMD_MSG_LOG(ERR, "recv header ERR: invalid mesg len: %u.", len); + return ZXDH_BAR_MSG_ERR_LEN; + } + if (msg_recv_func_tbl[msg_header->module_id] == NULL) { + PMD_MSG_LOG(ERR, "recv header ERR: module:%s(%u) doesn't register", + zxdh_module_id_name(module_id), module_id); + return ZXDH_BAR_MSG_ERR_MODULE_NOEXIST; + } + return ZXDH_BAR_MSG_OK; +} + +int zxdh_bar_irq_recv(uint8_t src, uint8_t dst, uint64_t virt_addr, void *dev) +{ + struct zxdh_bar_msg_header msg_header = {0}; + uint64_t recv_addr = 0; + uint16_t ret = 0; + + recv_addr = zxdh_recv_addr_get(src, dst, virt_addr); + if (recv_addr == 0) { + PMD_MSG_LOG(ERR, "invalid driver type(src:%u, dst:%u).", src, dst); + return -1; + } + + zxdh_bar_chan_msg_header_get(recv_addr, &msg_header); + ret = zxdh_bar_chan_msg_header_check(&msg_header); + + if (ret != ZXDH_BAR_MSG_OK) { + PMD_MSG_LOG(ERR, "recv msg_head err, ret: %u.", ret); + return -1; + } + + uint8_t *recved_msg = rte_malloc(NULL, msg_header.len, 0); + if (recved_msg == NULL) { + PMD_MSG_LOG(ERR, "malloc temp buff failed."); + return -1; + } + zxdh_bar_chan_msg_payload_get(recv_addr, recved_msg, msg_header.len); + + uint64_t reps_addr = zxdh_reply_addr_get(msg_header.sync, src, dst, virt_addr); + + if (msg_header.sync == ZXDH_BAR_CHAN_MSG_SYNC) { + zxdh_bar_msg_sync_msg_proc(reps_addr, &msg_header, recved_msg, dev); + goto exit; + } + zxdh_bar_chan_msg_valid_set(recv_addr, ZXDH_BAR_MSG_CHAN_USABLE); + if (msg_header.ack == ZXDH_BAR_CHAN_MSG_ACK) { + zxdh_bar_msg_ack_async_msg_proc(&msg_header, recved_msg); + goto exit; + } + return 0; + +exit: + rte_free(recved_msg); + return ZXDH_BAR_MSG_OK; +} diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index 7da60ee189..691a847462 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -15,8 +15,15 @@ extern "C" { #define ZXDH_BAR0_INDEX 0 #define ZXDH_CTRLCH_OFFSET (0x2000) +#define ZXDH_MSG_CHAN_PFVFSHARE_OFFSET (ZXDH_CTRLCH_OFFSET + 0x1000) #define ZXDH_MSIX_INTR_MSG_VEC_BASE 1 +#define ZXDH_MSIX_INTR_MSG_VEC_NUM 3 +#define ZXDH_MSIX_INTR_DTB_VEC (ZXDH_MSIX_INTR_MSG_VEC_BASE + ZXDH_MSIX_INTR_MSG_VEC_NUM) +#define ZXDH_MSIX_INTR_DTB_VEC_NUM 1 +#define ZXDH_INTR_NONQUE_NUM (ZXDH_MSIX_INTR_MSG_VEC_NUM + ZXDH_MSIX_INTR_DTB_VEC_NUM + 1) +#define ZXDH_QUEUE_INTR_VEC_BASE (ZXDH_MSIX_INTR_DTB_VEC + ZXDH_MSIX_INTR_DTB_VEC_NUM) +#define ZXDH_QUEUE_INTR_VEC_NUM 256 #define ZXDH_BAR_MSG_POLLING_SPAN 100 #define ZXDH_BAR_MSG_POLL_CNT_PER_MS (1 * 1000 / ZXDH_BAR_MSG_POLLING_SPAN) @@ -202,6 +209,10 @@ struct zxdh_bar_msg_header { uint16_t dst_pcieid; /* used in PF-->VF */ }; +typedef int (*zxdh_bar_chan_msg_recv_callback)(void *pay_load, uint16_t len, + void *reps_buffer, uint16_t *reps_len, void *dev); + + int zxdh_msg_chan_init(void); int zxdh_bar_msg_chan_exit(void); int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev); @@ -210,6 +221,8 @@ int zxdh_msg_chan_enable(struct rte_eth_dev *dev); int zxdh_bar_chan_sync_msg_send(struct zxdh_pci_bar_msg *in, struct zxdh_msg_recviver_mem *result); +int zxdh_bar_irq_recv(uint8_t src, uint8_t dst, uint64_t virt_addr, void *dev); + #ifdef __cplusplus } #endif diff --git a/drivers/net/zxdh/zxdh_pci.c b/drivers/net/zxdh/zxdh_pci.c index 8fcab6e888..5bd0df11da 100644 --- a/drivers/net/zxdh/zxdh_pci.c +++ b/drivers/net/zxdh/zxdh_pci.c @@ -92,6 +92,24 @@ static void zxdh_set_features(struct zxdh_hw *hw, uint64_t features) rte_write32(features >> 32, 
&hw->common_cfg->guest_feature); } +static uint16_t zxdh_set_config_irq(struct zxdh_hw *hw, uint16_t vec) +{ + rte_write16(vec, &hw->common_cfg->msix_config); + return rte_read16(&hw->common_cfg->msix_config); +} + +static uint16_t zxdh_set_queue_irq(struct zxdh_hw *hw, struct zxdh_virtqueue *vq, uint16_t vec) +{ + rte_write16(vq->vq_queue_index, &hw->common_cfg->queue_select); + rte_write16(vec, &hw->common_cfg->queue_msix_vector); + return rte_read16(&hw->common_cfg->queue_msix_vector); +} + +static uint8_t zxdh_get_isr(struct zxdh_hw *hw) +{ + return rte_read8(hw->isr); +} + const struct zxdh_pci_ops zxdh_dev_pci_ops = { .read_dev_cfg = zxdh_read_dev_config, .write_dev_cfg = zxdh_write_dev_config, @@ -99,8 +117,16 @@ const struct zxdh_pci_ops zxdh_dev_pci_ops = { .set_status = zxdh_set_status, .get_features = zxdh_get_features, .set_features = zxdh_set_features, + .set_queue_irq = zxdh_set_queue_irq, + .set_config_irq = zxdh_set_config_irq, + .get_isr = zxdh_get_isr, }; +uint8_t zxdh_vtpci_isr(struct zxdh_hw *hw) +{ + return ZXDH_VTPCI_OPS(hw)->get_isr(hw); +} + uint16_t zxdh_vtpci_get_features(struct zxdh_hw *hw) { return ZXDH_VTPCI_OPS(hw)->get_features(hw); @@ -283,3 +309,39 @@ int32_t zxdh_get_pci_dev_config(struct zxdh_hw *hw) return 0; } + +enum zxdh_msix_status zxdh_vtpci_msix_detect(struct rte_pci_device *dev) +{ + uint8_t pos = 0; + int32_t ret = rte_pci_read_config(dev, &pos, 1, ZXDH_PCI_CAPABILITY_LIST); + + if (ret != 1) { + PMD_INIT_LOG(ERR, "failed to read pci capability list, ret %d", ret); + return ZXDH_MSIX_NONE; + } + while (pos) { + uint8_t cap[2] = {0}; + + ret = rte_pci_read_config(dev, cap, sizeof(cap), pos); + if (ret != sizeof(cap)) { + PMD_INIT_LOG(ERR, "failed to read pci cap at pos: %x ret %d", pos, ret); + break; + } + if (cap[0] == ZXDH_PCI_CAP_ID_MSIX) { + uint16_t flags = 0; + + ret = rte_pci_read_config(dev, &flags, sizeof(flags), pos + sizeof(cap)); + if (ret != sizeof(flags)) { + PMD_INIT_LOG(ERR, + "failed to read pci cap at pos: %x ret %d", pos + 2, ret); + break; + } + if (flags & ZXDH_PCI_MSIX_ENABLE) + return ZXDH_MSIX_ENABLED; + else + return ZXDH_MSIX_DISABLED; + } + pos = cap[1]; + } + return ZXDH_MSIX_NONE; +} diff --git a/drivers/net/zxdh/zxdh_pci.h b/drivers/net/zxdh/zxdh_pci.h index bb5ae64ddf..55d8c5449c 100644 --- a/drivers/net/zxdh/zxdh_pci.h +++ b/drivers/net/zxdh/zxdh_pci.h @@ -22,6 +22,13 @@ enum zxdh_msix_status { ZXDH_MSIX_ENABLED = 2 }; +/* The bit of the ISR which indicates a device has an interrupt. */ +#define ZXDH_PCI_ISR_INTR 0x1 +/* The bit of the ISR which indicates a device configuration change. */ +#define ZXDH_PCI_ISR_CONFIG 0x2 +/* Vector value used to disable MSI for queue. 
*/ +#define ZXDH_MSI_NO_VECTOR 0x7F + #define ZXDH_PCI_CAPABILITY_LIST 0x34 #define ZXDH_PCI_CAP_ID_VNDR 0x09 #define ZXDH_PCI_CAP_ID_MSIX 0x11 @@ -124,6 +131,9 @@ struct zxdh_pci_ops { uint64_t (*get_features)(struct zxdh_hw *hw); void (*set_features)(struct zxdh_hw *hw, uint64_t features); + uint16_t (*set_queue_irq)(struct zxdh_hw *hw, struct zxdh_virtqueue *vq, uint16_t vec); + uint16_t (*set_config_irq)(struct zxdh_hw *hw, uint16_t vec); + uint8_t (*get_isr)(struct zxdh_hw *hw); }; struct zxdh_hw_internal { @@ -143,6 +153,8 @@ int32_t zxdh_read_pci_caps(struct rte_pci_device *dev, struct zxdh_hw *hw); int32_t zxdh_get_pci_dev_config(struct zxdh_hw *hw); uint16_t zxdh_vtpci_get_features(struct zxdh_hw *hw); +uint8_t zxdh_vtpci_isr(struct zxdh_hw *hw); +enum zxdh_msix_status zxdh_vtpci_msix_detect(struct rte_pci_device *dev); #ifdef __cplusplus } -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 53446 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
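The interrupt macros added to zxdh_msg.h in this patch imply a fixed MSI-X vector layout. A sketch with the values worked out; the vector-0 role follows the usual config/LSC convention (the devconf handler is registered on dev->intr_handle) rather than anything stated explicitly in the patch:

#include <stdint.h>

/*
 * vec 0                          : device config / link state
 * vec 1 (ZXDH_MSIX_FROM_PFVF)    : BAR message channel, PF <-> VF
 * vec 2 (ZXDH_MSIX_FROM_MPF)     : BAR message channel, from MPF
 * vec 3 (ZXDH_MSIX_FROM_RISCV)   : BAR message channel, from RISC-V
 * vec 4 (ZXDH_MSIX_INTR_DTB_VEC) : DTB
 * vec 5+q (ZXDH_QUEUE_INTR_VEC_BASE + q) : RX queue q, when intr_conf.rxq is set
 *
 * TX queues stay parked on ZXDH_MSI_NO_VECTOR (0x7F), as done in
 * zxdh_queues_bind_intr().
 */
static inline uint16_t example_rxq_to_vec(uint16_t q)
{
	return 5 + q; /* ZXDH_QUEUE_INTR_VEC_BASE + q */
}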
* [PATCH v8 8/9] net/zxdh: add zxdh dev infos get ops 2024-10-30 9:01 ` [PATCH v8 0/9] net/zxdh: introduce net zxdh driver Junlong Wang ` (6 preceding siblings ...) 2024-10-30 9:01 ` [PATCH v8 7/9] net/zxdh: add configure zxdh intr implementation Junlong Wang @ 2024-10-30 9:01 ` Junlong Wang 2024-10-30 9:01 ` [PATCH v8 9/9] net/zxdh: add zxdh dev configure ops Junlong Wang 8 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-10-30 9:01 UTC (permalink / raw) To: dev; +Cc: wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 3513 bytes --] Add support for zxdh infos get. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/zxdh_ethdev.c | 44 +++++++++++++++++++++++++++++++++- drivers/net/zxdh/zxdh_ethdev.h | 3 +++ 2 files changed, 46 insertions(+), 1 deletion(-) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 5963aed949..bf0d9b7b3a 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -25,6 +25,43 @@ uint16_t zxdh_vport_to_vfid(union zxdh_virport_num v) return (v.epid * 8 + v.pfid) + 1152; } +static int32_t zxdh_dev_infos_get(struct rte_eth_dev *dev, + struct rte_eth_dev_info *dev_info) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + dev_info->speed_capa = rte_eth_speed_bitflag(hw->speed, RTE_ETH_LINK_FULL_DUPLEX); + dev_info->max_rx_queues = RTE_MIN(hw->max_queue_pairs, ZXDH_RX_QUEUES_MAX); + dev_info->max_tx_queues = RTE_MIN(hw->max_queue_pairs, ZXDH_TX_QUEUES_MAX); + dev_info->min_rx_bufsize = ZXDH_MIN_RX_BUFSIZE; + dev_info->max_rx_pktlen = ZXDH_MAX_RX_PKTLEN; + dev_info->max_mac_addrs = ZXDH_MAX_MAC_ADDRS; + dev_info->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_VLAN_STRIP | + RTE_ETH_RX_OFFLOAD_VLAN_FILTER | + RTE_ETH_RX_OFFLOAD_QINQ_STRIP); + dev_info->rx_offload_capa |= (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_RX_OFFLOAD_UDP_CKSUM | + RTE_ETH_RX_OFFLOAD_TCP_CKSUM | + RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM); + dev_info->rx_offload_capa |= (RTE_ETH_RX_OFFLOAD_SCATTER); + dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TCP_LRO; + dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_RSS_HASH; + + dev_info->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_MULTI_SEGS); + dev_info->tx_offload_capa |= (RTE_ETH_TX_OFFLOAD_TCP_TSO | + RTE_ETH_TX_OFFLOAD_UDP_TSO); + dev_info->tx_offload_capa |= (RTE_ETH_TX_OFFLOAD_VLAN_INSERT | + RTE_ETH_TX_OFFLOAD_QINQ_INSERT | + RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO); + dev_info->tx_offload_capa |= (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_UDP_CKSUM | + RTE_ETH_TX_OFFLOAD_TCP_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM); + + return 0; +} + static void zxdh_queues_unbind_intr(struct rte_eth_dev *dev) { struct zxdh_hw *hw = dev->data->dev_private; @@ -320,6 +357,11 @@ static int32_t zxdh_configure_intr(struct rte_eth_dev *dev) return ret; } +/* dev_ops for zxdh, bare necessities for basic operation */ +static const struct eth_dev_ops zxdh_eth_dev_ops = { + .dev_infos_get = zxdh_dev_infos_get, +}; + static int32_t zxdh_init_device(struct rte_eth_dev *eth_dev) { struct zxdh_hw *hw = eth_dev->data->dev_private; @@ -376,7 +418,7 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) struct zxdh_hw *hw = eth_dev->data->dev_private; int ret = 0; - eth_dev->dev_ops = NULL; + eth_dev->dev_ops = &zxdh_eth_dev_ops; /* Allocate memory for storing MAC addresses */ eth_dev->data->mac_addrs = rte_zmalloc("zxdh_mac", diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index 
0a7b574477..78f5ca6c00 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -33,6 +33,9 @@ extern "C" { #define ZXDH_RX_QUEUES_MAX 128U #define ZXDH_TX_QUEUES_MAX 128U +#define ZXDH_MIN_RX_BUFSIZE 64 +#define ZXDH_MAX_RX_PKTLEN 14000U + union zxdh_virport_num { uint16_t vport; struct { -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 7588 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
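From the application side, the limits wired up above surface through the standard ethdev query. A minimal usage sketch, assuming the port is a probed zxdh device (rte_eth_dev_info_get() is the regular DPDK API, not something added by this series):

#include <stdio.h>
#include <rte_ethdev.h>

static int example_query_zxdh_limits(uint16_t port_id)
{
	struct rte_eth_dev_info info;
	int ret = rte_eth_dev_info_get(port_id, &info);

	if (ret != 0)
		return ret;
	/* With the values above: up to 128 RX/TX queues, 64 B minimum RX
	 * buffer size, 14000 B maximum RX packet length. */
	printf("rxq max %u, txq max %u, min rx buf %u, max rx pktlen %u\n",
			info.max_rx_queues, info.max_tx_queues,
			info.min_rx_bufsize, info.max_rx_pktlen);
	return 0;
}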
* [PATCH v8 9/9] net/zxdh: add zxdh dev configure ops 2024-10-30 9:01 ` [PATCH v8 0/9] net/zxdh: introduce net zxdh driver Junlong Wang ` (7 preceding siblings ...) 2024-10-30 9:01 ` [PATCH v8 8/9] net/zxdh: add zxdh dev infos get ops Junlong Wang @ 2024-10-30 9:01 ` Junlong Wang 8 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-10-30 9:01 UTC (permalink / raw) To: dev; +Cc: wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 40973 bytes --] provided zxdh dev configure ops for queue check,reset,alloc resources,etc. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/meson.build | 1 + drivers/net/zxdh/zxdh_common.c | 135 ++++++++++ drivers/net/zxdh/zxdh_common.h | 12 + drivers/net/zxdh/zxdh_ethdev.c | 450 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev.h | 34 ++- drivers/net/zxdh/zxdh_pci.c | 98 +++++++ drivers/net/zxdh/zxdh_pci.h | 37 ++- drivers/net/zxdh/zxdh_queue.c | 123 +++++++++ drivers/net/zxdh/zxdh_queue.h | 173 +++++++++++++ drivers/net/zxdh/zxdh_rxtx.h | 4 +- 10 files changed, 1051 insertions(+), 16 deletions(-) create mode 100644 drivers/net/zxdh/zxdh_queue.c diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build index a16db47f89..b96aa5a27e 100644 --- a/drivers/net/zxdh/meson.build +++ b/drivers/net/zxdh/meson.build @@ -18,4 +18,5 @@ sources = files( 'zxdh_pci.c', 'zxdh_msg.c', 'zxdh_common.c', + 'zxdh_queue.c', ) diff --git a/drivers/net/zxdh/zxdh_common.c b/drivers/net/zxdh/zxdh_common.c index 0cb5380c5e..5ff01c418e 100644 --- a/drivers/net/zxdh/zxdh_common.c +++ b/drivers/net/zxdh/zxdh_common.c @@ -20,6 +20,7 @@ #define ZXDH_COMMON_TABLE_WRITE 1 #define ZXDH_COMMON_FIELD_PHYPORT 6 +#define ZXDH_COMMON_FIELD_DATACH 3 #define ZXDH_RSC_TBL_CONTENT_LEN_MAX (257 * 2) @@ -248,3 +249,137 @@ int32_t zxdh_pannelid_get(struct rte_eth_dev *dev, uint8_t *pannelid) int32_t ret = zxdh_get_res_panel_id(¶m, pannelid); return ret; } + +uint32_t zxdh_read_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint64_t baseaddr = (uint64_t)(hw->bar_addr[bar]); + uint32_t val = *((volatile uint32_t *)(baseaddr + reg)); + return val; +} + +void zxdh_write_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg, uint32_t val) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint64_t baseaddr = (uint64_t)(hw->bar_addr[bar]); + *((volatile uint32_t *)(baseaddr + reg)) = val; +} + +static bool zxdh_try_lock(struct zxdh_hw *hw) +{ + uint32_t var = zxdh_read_comm_reg((uint64_t)hw->common_cfg, ZXDH_VF_LOCK_REG); + + /* check whether lock is used */ + if (!(var & ZXDH_VF_LOCK_ENABLE_MASK)) + return false; + + return true; +} + +int32_t zxdh_timedlock(struct zxdh_hw *hw, uint32_t us) +{ + uint16_t timeout = 0; + + while ((timeout++) < ZXDH_ACQUIRE_CHANNEL_NUM_MAX) { + rte_delay_us_block(us); + /* acquire hw lock */ + if (!zxdh_try_lock(hw)) { + PMD_INIT_LOG(ERR, "Acquiring hw lock got failed, timeout: %d", timeout); + continue; + } + break; + } + if (timeout >= ZXDH_ACQUIRE_CHANNEL_NUM_MAX) { + PMD_INIT_LOG(ERR, "Failed to acquire channel"); + return -1; + } + return 0; +} + +void zxdh_release_lock(struct zxdh_hw *hw) +{ + uint32_t var = zxdh_read_comm_reg((uint64_t)hw->common_cfg, ZXDH_VF_LOCK_REG); + + if (var & ZXDH_VF_LOCK_ENABLE_MASK) { + var &= ~ZXDH_VF_LOCK_ENABLE_MASK; + zxdh_write_comm_reg((uint64_t)hw->common_cfg, ZXDH_VF_LOCK_REG, var); + } +} + +uint32_t zxdh_read_comm_reg(uint64_t pci_comm_cfg_baseaddr, uint32_t reg) +{ + 
uint32_t val = *((volatile uint32_t *)(pci_comm_cfg_baseaddr + reg)); + return val; +} + +void zxdh_write_comm_reg(uint64_t pci_comm_cfg_baseaddr, uint32_t reg, uint32_t val) +{ + *((volatile uint32_t *)(pci_comm_cfg_baseaddr + reg)) = val; +} + +static int32_t zxdh_common_table_write(struct zxdh_hw *hw, uint8_t field, + void *buff, uint16_t buff_size) +{ + struct zxdh_pci_bar_msg desc; + struct zxdh_msg_recviver_mem msg_rsp; + int32_t ret = 0; + + if (!hw->msg_chan_init) { + PMD_DRV_LOG(ERR, "Bar messages channel not initialized"); + return -1; + } + if (buff_size != 0 && buff == NULL) { + PMD_DRV_LOG(ERR, "Buff is invalid"); + return -1; + } + + ret = zxdh_fill_common_msg(hw, &desc, ZXDH_COMMON_TABLE_WRITE, + field, buff, buff_size); + + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to fill common msg"); + return ret; + } + + ret = zxdh_send_command(hw, &desc, ZXDH_BAR_MODULE_TBL, &msg_rsp); + if (ret != 0) + goto free_msg_data; + + ret = zxdh_common_rsp_check(&msg_rsp, NULL, 0); + if (ret != 0) + goto free_rsp_data; + +free_rsp_data: + rte_free(msg_rsp.recv_buffer); +free_msg_data: + rte_free(desc.payload_addr); + return ret; +} + +int32_t zxdh_datach_set(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint16_t buff_size = (hw->queue_num + 1) * 2; + void *buff = rte_zmalloc(NULL, buff_size, 0); + + if (unlikely(buff == NULL)) { + PMD_DRV_LOG(ERR, "Failed to allocate buff"); + return -ENOMEM; + } + memset(buff, 0, buff_size); + uint16_t *pdata = (uint16_t *)buff; + *pdata++ = hw->queue_num; + uint16_t i; + + for (i = 0; i < hw->queue_num; i++) + *(pdata + i) = hw->channel_context[i].ph_chno; + + int32_t ret = zxdh_common_table_write(hw, ZXDH_COMMON_FIELD_DATACH, + (void *)buff, buff_size); + + if (ret != 0) + PMD_DRV_LOG(ERR, "Failed to setup data channel of common table"); + + rte_free(buff); + return ret; +} diff --git a/drivers/net/zxdh/zxdh_common.h b/drivers/net/zxdh/zxdh_common.h index f098ae4cf9..a60e25b2e3 100644 --- a/drivers/net/zxdh/zxdh_common.h +++ b/drivers/net/zxdh/zxdh_common.h @@ -14,6 +14,10 @@ extern "C" { #endif +#define ZXDH_VF_LOCK_REG 0x90 +#define ZXDH_VF_LOCK_ENABLE_MASK 0x1 +#define ZXDH_ACQUIRE_CHANNEL_NUM_MAX 10 + struct zxdh_res_para { uint64_t virt_addr; uint16_t pcie_id; @@ -23,6 +27,14 @@ struct zxdh_res_para { int32_t zxdh_phyport_get(struct rte_eth_dev *dev, uint8_t *phyport); int32_t zxdh_pannelid_get(struct rte_eth_dev *dev, uint8_t *pannelid); +uint32_t zxdh_read_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg); +void zxdh_write_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg, uint32_t val); +void zxdh_release_lock(struct zxdh_hw *hw); +int32_t zxdh_timedlock(struct zxdh_hw *hw, uint32_t us); +uint32_t zxdh_read_comm_reg(uint64_t pci_comm_cfg_baseaddr, uint32_t reg); +void zxdh_write_comm_reg(uint64_t pci_comm_cfg_baseaddr, uint32_t reg, uint32_t val); +int32_t zxdh_datach_set(struct rte_eth_dev *dev); + #ifdef __cplusplus } #endif diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index bf0d9b7b3a..ceaba444d4 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -11,6 +11,7 @@ #include "zxdh_pci.h" #include "zxdh_msg.h" #include "zxdh_common.h" +#include "zxdh_queue.h" struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; @@ -357,8 +358,457 @@ static int32_t zxdh_configure_intr(struct rte_eth_dev *dev) return ret; } +static int32_t zxdh_features_update(struct zxdh_hw *hw, + const struct rte_eth_rxmode *rxmode, + const struct rte_eth_txmode 
*txmode) +{ + uint64_t rx_offloads = rxmode->offloads; + uint64_t tx_offloads = txmode->offloads; + uint64_t req_features = hw->guest_features; + + if (rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM)) + req_features |= (1ULL << ZXDH_NET_F_GUEST_CSUM); + + if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) + req_features |= (1ULL << ZXDH_NET_F_GUEST_TSO4) | + (1ULL << ZXDH_NET_F_GUEST_TSO6); + + if (tx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM)) + req_features |= (1ULL << ZXDH_NET_F_CSUM); + + if (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO) + req_features |= (1ULL << ZXDH_NET_F_HOST_TSO4) | + (1ULL << ZXDH_NET_F_HOST_TSO6); + + if (tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_TSO) + req_features |= (1ULL << ZXDH_NET_F_HOST_UFO); + + req_features = req_features & hw->host_features; + hw->guest_features = req_features; + + ZXDH_VTPCI_OPS(hw)->set_features(hw, req_features); + + if ((rx_offloads & (RTE_ETH_TX_OFFLOAD_UDP_CKSUM | RTE_ETH_TX_OFFLOAD_TCP_CKSUM)) && + !vtpci_with_feature(hw, ZXDH_NET_F_GUEST_CSUM)) { + PMD_DRV_LOG(ERR, "rx checksum not available on this host"); + return -ENOTSUP; + } + + if ((rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) && + (!vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO4) || + !vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO6))) { + PMD_DRV_LOG(ERR, "Large Receive Offload not available on this host"); + return -ENOTSUP; + } + return 0; +} + +static bool rx_offload_enabled(struct zxdh_hw *hw) +{ + return vtpci_with_feature(hw, ZXDH_NET_F_GUEST_CSUM) || + vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO4) || + vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO6); +} + +static bool tx_offload_enabled(struct zxdh_hw *hw) +{ + return vtpci_with_feature(hw, ZXDH_NET_F_CSUM) || + vtpci_with_feature(hw, ZXDH_NET_F_HOST_TSO4) || + vtpci_with_feature(hw, ZXDH_NET_F_HOST_TSO6) || + vtpci_with_feature(hw, ZXDH_NET_F_HOST_UFO); +} + +static void zxdh_dev_free_mbufs(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint16_t nr_vq = hw->queue_num; + uint32_t i = 0; + + const char *type = NULL; + struct zxdh_virtqueue *vq = NULL; + struct rte_mbuf *buf = NULL; + int32_t queue_type = 0; + + if (hw->vqs == NULL) + return; + + for (i = 0; i < nr_vq; i++) { + vq = hw->vqs[i]; + if (!vq) + continue; + + queue_type = zxdh_get_queue_type(i); + if (queue_type == ZXDH_VTNET_RQ) + type = "rxq"; + else if (queue_type == ZXDH_VTNET_TQ) + type = "txq"; + else + continue; + PMD_INIT_LOG(DEBUG, "Before freeing %s[%d] used and unused buf", type, i); + + while ((buf = zxdh_virtqueue_detach_unused(vq)) != NULL) + rte_pktmbuf_free(buf); + } +} + +static int32_t zxdh_get_available_channel(struct rte_eth_dev *dev, uint8_t queue_type) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint16_t base = (queue_type == ZXDH_VTNET_RQ) ? 
0 : 1; + uint16_t i = 0; + uint16_t j = 0; + uint16_t done = 0; + int32_t ret = 0; + + ret = zxdh_timedlock(hw, 1000); + if (ret) { + PMD_INIT_LOG(ERR, "Acquiring hw lock got failed, timeout"); + return -1; + } + + /* Iterate COI table and find free channel */ + for (i = ZXDH_QUEUES_BASE / 32; i < ZXDH_TOTAL_QUEUES_NUM / 32; i++) { + uint32_t addr = ZXDH_QUERES_SHARE_BASE + (i * sizeof(uint32_t)); + uint32_t var = zxdh_read_bar_reg(dev, ZXDH_BAR0_INDEX, addr); + + for (j = base; j < 32; j += 2) { + /* Got the available channel & update COI table */ + if ((var & (1 << j)) == 0) { + var |= (1 << j); + zxdh_write_bar_reg(dev, ZXDH_BAR0_INDEX, addr, var); + done = 1; + break; + } + } + if (done) + break; + } + zxdh_release_lock(hw); + /* check for no channel condition */ + if (done != 1) { + PMD_INIT_LOG(ERR, "NO availd queues"); + return -1; + } + /* reruen available channel ID */ + return (i * 32) + j; +} + +static int32_t zxdh_acquire_channel(struct rte_eth_dev *dev, uint16_t lch) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + if (hw->channel_context[lch].valid == 1) { + PMD_INIT_LOG(DEBUG, "Logic channel:%u already acquired Physics channel:%u", + lch, hw->channel_context[lch].ph_chno); + return hw->channel_context[lch].ph_chno; + } + int32_t pch = zxdh_get_available_channel(dev, zxdh_get_queue_type(lch)); + + if (pch < 0) { + PMD_INIT_LOG(ERR, "Failed to acquire channel"); + return -1; + } + hw->channel_context[lch].ph_chno = (uint16_t)pch; + hw->channel_context[lch].valid = 1; + PMD_INIT_LOG(DEBUG, "Acquire channel success lch:%u --> pch:%d", lch, pch); + return 0; +} + +static void zxdh_init_vring(struct zxdh_virtqueue *vq) +{ + int32_t size = vq->vq_nentries; + uint8_t *ring_mem = vq->vq_ring_virt_mem; + + memset(ring_mem, 0, vq->vq_ring_size); + + vq->vq_used_cons_idx = 0; + vq->vq_desc_head_idx = 0; + vq->vq_avail_idx = 0; + vq->vq_desc_tail_idx = (uint16_t)(vq->vq_nentries - 1); + vq->vq_free_cnt = vq->vq_nentries; + memset(vq->vq_descx, 0, sizeof(struct zxdh_vq_desc_extra) * vq->vq_nentries); + vring_init_packed(&vq->vq_packed.ring, ring_mem, ZXDH_PCI_VRING_ALIGN, size); + vring_desc_init_packed(vq, size); + virtqueue_disable_intr(vq); +} + +static int32_t zxdh_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_logic_qidx) +{ + char vq_name[ZXDH_VIRTQUEUE_MAX_NAME_SZ] = {0}; + char vq_hdr_name[ZXDH_VIRTQUEUE_MAX_NAME_SZ] = {0}; + const struct rte_memzone *mz = NULL; + const struct rte_memzone *hdr_mz = NULL; + uint32_t size = 0; + struct zxdh_hw *hw = dev->data->dev_private; + struct zxdh_virtnet_rx *rxvq = NULL; + struct zxdh_virtnet_tx *txvq = NULL; + struct zxdh_virtqueue *vq = NULL; + size_t sz_hdr_mz = 0; + void *sw_ring = NULL; + int32_t queue_type = zxdh_get_queue_type(vtpci_logic_qidx); + int32_t numa_node = dev->device->numa_node; + uint16_t vtpci_phy_qidx = 0; + uint32_t vq_size = 0; + int32_t ret = 0; + + if (hw->channel_context[vtpci_logic_qidx].valid == 0) { + PMD_INIT_LOG(ERR, "lch %d is invalid", vtpci_logic_qidx); + return -EINVAL; + } + vtpci_phy_qidx = hw->channel_context[vtpci_logic_qidx].ph_chno; + + PMD_INIT_LOG(DEBUG, "vtpci_logic_qidx :%d setting up physical queue: %u on NUMA node %d", + vtpci_logic_qidx, vtpci_phy_qidx, numa_node); + + vq_size = ZXDH_QUEUE_DEPTH; + + if (ZXDH_VTPCI_OPS(hw)->set_queue_num != NULL) + ZXDH_VTPCI_OPS(hw)->set_queue_num(hw, vtpci_phy_qidx, vq_size); + + snprintf(vq_name, sizeof(vq_name), "port%d_vq%d", dev->data->port_id, vtpci_phy_qidx); + + size = RTE_ALIGN_CEIL(sizeof(*vq) + vq_size * sizeof(struct zxdh_vq_desc_extra), + 
RTE_CACHE_LINE_SIZE); + if (queue_type == ZXDH_VTNET_TQ) { + /* + * For each xmit packet, allocate a zxdh_net_hdr + * and indirect ring elements + */ + sz_hdr_mz = vq_size * sizeof(struct zxdh_tx_region); + } + + vq = rte_zmalloc_socket(vq_name, size, RTE_CACHE_LINE_SIZE, numa_node); + if (vq == NULL) { + PMD_INIT_LOG(ERR, "can not allocate vq"); + return -ENOMEM; + } + hw->vqs[vtpci_logic_qidx] = vq; + + vq->hw = hw; + vq->vq_queue_index = vtpci_phy_qidx; + vq->vq_nentries = vq_size; + + vq->vq_packed.used_wrap_counter = 1; + vq->vq_packed.cached_flags = ZXDH_VRING_PACKED_DESC_F_AVAIL; + vq->vq_packed.event_flags_shadow = 0; + if (queue_type == ZXDH_VTNET_RQ) + vq->vq_packed.cached_flags |= ZXDH_VRING_DESC_F_WRITE; + + /* + * Reserve a memzone for vring elements + */ + size = vring_size(hw, vq_size, ZXDH_PCI_VRING_ALIGN); + vq->vq_ring_size = RTE_ALIGN_CEIL(size, ZXDH_PCI_VRING_ALIGN); + PMD_INIT_LOG(DEBUG, "vring_size: %d, rounded_vring_size: %d", size, vq->vq_ring_size); + + mz = rte_memzone_reserve_aligned(vq_name, vq->vq_ring_size, + numa_node, RTE_MEMZONE_IOVA_CONTIG, + ZXDH_PCI_VRING_ALIGN); + if (mz == NULL) { + if (rte_errno == EEXIST) + mz = rte_memzone_lookup(vq_name); + if (mz == NULL) { + ret = -ENOMEM; + goto fail_q_alloc; + } + } + + memset(mz->addr, 0, mz->len); + + vq->vq_ring_mem = mz->iova; + vq->vq_ring_virt_mem = mz->addr; + + zxdh_init_vring(vq); + + if (sz_hdr_mz) { + snprintf(vq_hdr_name, sizeof(vq_hdr_name), "port%d_vq%d_hdr", + dev->data->port_id, vtpci_phy_qidx); + hdr_mz = rte_memzone_reserve_aligned(vq_hdr_name, sz_hdr_mz, + numa_node, RTE_MEMZONE_IOVA_CONTIG, + RTE_CACHE_LINE_SIZE); + if (hdr_mz == NULL) { + if (rte_errno == EEXIST) + hdr_mz = rte_memzone_lookup(vq_hdr_name); + if (hdr_mz == NULL) { + ret = -ENOMEM; + goto fail_q_alloc; + } + } + } + + if (queue_type == ZXDH_VTNET_RQ) { + size_t sz_sw = (ZXDH_MBUF_BURST_SZ + vq_size) * sizeof(vq->sw_ring[0]); + + sw_ring = rte_zmalloc_socket("sw_ring", sz_sw, RTE_CACHE_LINE_SIZE, numa_node); + if (!sw_ring) { + PMD_INIT_LOG(ERR, "can not allocate RX soft ring"); + ret = -ENOMEM; + goto fail_q_alloc; + } + + vq->sw_ring = sw_ring; + rxvq = &vq->rxq; + rxvq->vq = vq; + rxvq->port_id = dev->data->port_id; + rxvq->mz = mz; + } else { /* queue_type == VTNET_TQ */ + txvq = &vq->txq; + txvq->vq = vq; + txvq->port_id = dev->data->port_id; + txvq->mz = mz; + txvq->zxdh_net_hdr_mz = hdr_mz; + txvq->zxdh_net_hdr_mem = hdr_mz->iova; + } + + vq->offset = offsetof(struct rte_mbuf, buf_iova); + if (queue_type == ZXDH_VTNET_TQ) { + struct zxdh_tx_region *txr = hdr_mz->addr; + uint32_t i; + + memset(txr, 0, vq_size * sizeof(*txr)); + for (i = 0; i < vq_size; i++) { + /* first indirect descriptor is always the tx header */ + struct zxdh_vring_packed_desc *start_dp = txr[i].tx_packed_indir; + + vring_desc_init_indirect_packed(start_dp, RTE_DIM(txr[i].tx_packed_indir)); + start_dp->addr = txvq->zxdh_net_hdr_mem + i * sizeof(*txr) + + offsetof(struct zxdh_tx_region, tx_hdr); + /* length will be updated to actual pi hdr size when xmit pkt */ + start_dp->len = 0; + } + } + if (ZXDH_VTPCI_OPS(hw)->setup_queue(hw, vq) < 0) { + PMD_INIT_LOG(ERR, "setup_queue failed"); + return -EINVAL; + } + return 0; +fail_q_alloc: + rte_free(sw_ring); + rte_memzone_free(hdr_mz); + rte_memzone_free(mz); + rte_free(vq); + return ret; +} + +static int32_t zxdh_alloc_queues(struct rte_eth_dev *dev, uint16_t nr_vq) +{ + uint16_t lch; + struct zxdh_hw *hw = dev->data->dev_private; + + hw->vqs = rte_zmalloc(NULL, sizeof(struct zxdh_virtqueue *) * nr_vq, 0); 
+ if (!hw->vqs) {
+ PMD_INIT_LOG(ERR, "Failed to allocate vqs");
+ return -ENOMEM;
+ }
+ for (lch = 0; lch < nr_vq; lch++) {
+ if (zxdh_acquire_channel(dev, lch) < 0) {
+ PMD_INIT_LOG(ERR, "Failed to acquire the channels");
+ zxdh_free_queues(dev);
+ return -1;
+ }
+ if (zxdh_init_queue(dev, lch) < 0) {
+ PMD_INIT_LOG(ERR, "Failed to alloc virtio queue");
+ zxdh_free_queues(dev);
+ return -1;
+ }
+ }
+ return 0;
+}
+
+
+static int32_t zxdh_dev_configure(struct rte_eth_dev *dev)
+{
+ const struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
+ const struct rte_eth_txmode *txmode = &dev->data->dev_conf.txmode;
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint32_t nr_vq = 0;
+ int32_t ret = 0;
+
+ if (dev->data->nb_rx_queues != dev->data->nb_tx_queues) {
+ PMD_INIT_LOG(ERR, "nb_rx_queues=%d and nb_tx_queues=%d not equal!",
+ dev->data->nb_rx_queues, dev->data->nb_tx_queues);
+ return -EINVAL;
+ }
+ if ((dev->data->nb_rx_queues + dev->data->nb_tx_queues) >= ZXDH_QUEUES_NUM_MAX) {
+ PMD_INIT_LOG(ERR, "nb_rx_queues=%d + nb_tx_queues=%d must be < %d!",
+ dev->data->nb_rx_queues, dev->data->nb_tx_queues,
+ ZXDH_QUEUES_NUM_MAX);
+ return -EINVAL;
+ }
+ if (rxmode->mq_mode != RTE_ETH_MQ_RX_RSS && rxmode->mq_mode != RTE_ETH_MQ_RX_NONE) {
+ PMD_DRV_LOG(ERR, "Unsupported Rx multi queue mode %d", rxmode->mq_mode);
+ return -EINVAL;
+ }
+
+ if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) {
+ PMD_DRV_LOG(ERR, "Unsupported Tx multi queue mode %d", txmode->mq_mode);
+ return -EINVAL;
+ }
+
+ ret = zxdh_features_update(hw, rxmode, txmode);
+ if (ret < 0)
+ return ret;
+
+ /* check if lsc interrupt feature is enabled */
+ if (dev->data->dev_conf.intr_conf.lsc) {
+ if (!(dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)) {
+ PMD_DRV_LOG(ERR, "link status not supported by host");
+ return -ENOTSUP;
+ }
+ }
+
+ hw->has_tx_offload = tx_offload_enabled(hw);
+ hw->has_rx_offload = rx_offload_enabled(hw);
+
+ nr_vq = dev->data->nb_rx_queues + dev->data->nb_tx_queues;
+ if (nr_vq == hw->queue_num)
+ return 0;
+
+ PMD_DRV_LOG(DEBUG, "queue number changed, need reset");
+ /* Reset the device although not necessary at startup */
+ zxdh_vtpci_reset(hw);
+
+ /* Tell the host we've noticed this device. */
+ zxdh_vtpci_set_status(hw, ZXDH_CONFIG_STATUS_ACK);
+
+ /* Tell the host we know how to drive the device.
*/ + zxdh_vtpci_set_status(hw, ZXDH_CONFIG_STATUS_DRIVER); + /* The queue needs to be released when reconfiguring*/ + if (hw->vqs != NULL) { + zxdh_dev_free_mbufs(dev); + zxdh_free_queues(dev); + } + + hw->queue_num = nr_vq; + ret = zxdh_alloc_queues(dev, nr_vq); + if (ret < 0) + return ret; + + zxdh_datach_set(dev); + + if (zxdh_configure_intr(dev) < 0) { + PMD_INIT_LOG(ERR, "Failed to configure interrupt"); + zxdh_free_queues(dev); + return -1; + } + + zxdh_vtpci_reinit_complete(hw); + + return ret; +} + /* dev_ops for zxdh, bare necessities for basic operation */ static const struct eth_dev_ops zxdh_eth_dev_ops = { + .dev_configure = zxdh_dev_configure, .dev_infos_get = zxdh_dev_infos_get, }; diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index 78f5ca6c00..49ddb505d8 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -11,8 +11,6 @@ #include <rte_interrupts.h> #include <eal_interrupts.h> -#include "zxdh_queue.h" - #ifdef __cplusplus extern "C" { #endif @@ -25,16 +23,23 @@ extern "C" { #define ZXDH_E312_PF_DEVICEID 0x8049 #define ZXDH_E312_VF_DEVICEID 0x8060 -#define ZXDH_MAX_UC_MAC_ADDRS 32 -#define ZXDH_MAX_MC_MAC_ADDRS 32 -#define ZXDH_MAX_MAC_ADDRS (ZXDH_MAX_UC_MAC_ADDRS + ZXDH_MAX_MC_MAC_ADDRS) +#define ZXDH_MAX_UC_MAC_ADDRS 32 +#define ZXDH_MAX_MC_MAC_ADDRS 32 +#define ZXDH_MAX_MAC_ADDRS (ZXDH_MAX_UC_MAC_ADDRS + ZXDH_MAX_MC_MAC_ADDRS) + +#define ZXDH_NUM_BARS 2 +#define ZXDH_RX_QUEUES_MAX 128U +#define ZXDH_TX_QUEUES_MAX 128U -#define ZXDH_NUM_BARS 2 -#define ZXDH_RX_QUEUES_MAX 128U -#define ZXDH_TX_QUEUES_MAX 128U +#define ZXDH_MIN_RX_BUFSIZE 64 +#define ZXDH_MAX_RX_PKTLEN 14000U +#define ZXDH_QUEUE_DEPTH 1024 +#define ZXDH_QUEUES_BASE 0 +#define ZXDH_TOTAL_QUEUES_NUM 4096 +#define ZXDH_QUEUES_NUM_MAX 256 +#define ZXDH_QUERES_SHARE_BASE (0x5000) -#define ZXDH_MIN_RX_BUFSIZE 64 -#define ZXDH_MAX_RX_PKTLEN 14000U +#define ZXDH_MBUF_BURST_SZ 64 union zxdh_virport_num { uint16_t vport; @@ -47,6 +52,11 @@ union zxdh_virport_num { }; }; +struct zxdh_chnl_context { + uint16_t valid; + uint16_t ph_chno; +}; + struct zxdh_hw { struct rte_eth_dev *eth_dev; struct zxdh_pci_common_cfg *common_cfg; @@ -54,6 +64,7 @@ struct zxdh_hw { struct rte_intr_handle *risc_intr; struct rte_intr_handle *dtb_intr; struct zxdh_virtqueue **vqs; + struct zxdh_chnl_context channel_context[ZXDH_QUEUES_NUM_MAX]; union zxdh_virport_num vport; uint64_t bar_addr[ZXDH_NUM_BARS]; @@ -67,6 +78,7 @@ struct zxdh_hw { uint16_t device_id; uint16_t port_id; uint16_t vfid; + uint16_t queue_num; uint8_t *isr; uint8_t weak_barriers; @@ -79,6 +91,8 @@ struct zxdh_hw { uint8_t msg_chan_init; uint8_t phyport; uint8_t panel_id; + uint8_t has_tx_offload; + uint8_t has_rx_offload; }; uint16_t zxdh_vport_to_vfid(union zxdh_virport_num v); diff --git a/drivers/net/zxdh/zxdh_pci.c b/drivers/net/zxdh/zxdh_pci.c index 5bd0df11da..f38de20baf 100644 --- a/drivers/net/zxdh/zxdh_pci.c +++ b/drivers/net/zxdh/zxdh_pci.c @@ -110,6 +110,87 @@ static uint8_t zxdh_get_isr(struct zxdh_hw *hw) return rte_read8(hw->isr); } +static uint16_t zxdh_get_queue_num(struct zxdh_hw *hw, uint16_t queue_id) +{ + rte_write16(queue_id, &hw->common_cfg->queue_select); + return rte_read16(&hw->common_cfg->queue_size); +} + +static void zxdh_set_queue_num(struct zxdh_hw *hw, uint16_t queue_id, uint16_t vq_size) +{ + rte_write16(queue_id, &hw->common_cfg->queue_select); + rte_write16(vq_size, &hw->common_cfg->queue_size); +} + +static int32_t check_vq_phys_addr_ok(struct zxdh_virtqueue *vq) +{ + if 
((vq->vq_ring_mem + vq->vq_ring_size - 1) >> (ZXDH_PCI_QUEUE_ADDR_SHIFT + 32)) { + PMD_INIT_LOG(ERR, "vring address shouldn't be above 16TB!"); + return 0; + } + return 1; +} + +static inline void io_write64_twopart(uint64_t val, uint32_t *lo, uint32_t *hi) +{ + rte_write32(val & ((1ULL << 32) - 1), lo); + rte_write32(val >> 32, hi); +} + +static int32_t zxdh_setup_queue(struct zxdh_hw *hw, struct zxdh_virtqueue *vq) +{ + uint64_t desc_addr = 0; + uint64_t avail_addr = 0; + uint64_t used_addr = 0; + uint16_t notify_off = 0; + + if (!check_vq_phys_addr_ok(vq)) + return -1; + + desc_addr = vq->vq_ring_mem; + avail_addr = desc_addr + vq->vq_nentries * sizeof(struct zxdh_vring_desc); + if (vtpci_packed_queue(vq->hw)) { + used_addr = RTE_ALIGN_CEIL((avail_addr + + sizeof(struct zxdh_vring_packed_desc_event)), + ZXDH_PCI_VRING_ALIGN); + } else { + used_addr = RTE_ALIGN_CEIL(avail_addr + offsetof(struct zxdh_vring_avail, + ring[vq->vq_nentries]), ZXDH_PCI_VRING_ALIGN); + } + + rte_write16(vq->vq_queue_index, &hw->common_cfg->queue_select); + + io_write64_twopart(desc_addr, &hw->common_cfg->queue_desc_lo, + &hw->common_cfg->queue_desc_hi); + io_write64_twopart(avail_addr, &hw->common_cfg->queue_avail_lo, + &hw->common_cfg->queue_avail_hi); + io_write64_twopart(used_addr, &hw->common_cfg->queue_used_lo, + &hw->common_cfg->queue_used_hi); + + notify_off = rte_read16(&hw->common_cfg->queue_notify_off); /* default 0 */ + notify_off = 0; + vq->notify_addr = (void *)((uint8_t *)hw->notify_base + + notify_off * hw->notify_off_multiplier); + + rte_write16(1, &hw->common_cfg->queue_enable); + + return 0; +} + +static void zxdh_del_queue(struct zxdh_hw *hw, struct zxdh_virtqueue *vq) +{ + rte_write16(vq->vq_queue_index, &hw->common_cfg->queue_select); + + io_write64_twopart(0, &hw->common_cfg->queue_desc_lo, + &hw->common_cfg->queue_desc_hi); + io_write64_twopart(0, &hw->common_cfg->queue_avail_lo, + &hw->common_cfg->queue_avail_hi); + io_write64_twopart(0, &hw->common_cfg->queue_used_lo, + &hw->common_cfg->queue_used_hi); + + rte_write16(0, &hw->common_cfg->queue_enable); +} + const struct zxdh_pci_ops zxdh_dev_pci_ops = { .read_dev_cfg = zxdh_read_dev_config, .write_dev_cfg = zxdh_write_dev_config, @@ -120,6 +201,10 @@ const struct zxdh_pci_ops zxdh_dev_pci_ops = { .set_queue_irq = zxdh_set_queue_irq, .set_config_irq = zxdh_set_config_irq, .get_isr = zxdh_get_isr, + .get_queue_num = zxdh_get_queue_num, + .set_queue_num = zxdh_set_queue_num, + .setup_queue = zxdh_setup_queue, + .del_queue = zxdh_del_queue, }; uint8_t zxdh_vtpci_isr(struct zxdh_hw *hw) @@ -146,6 +231,19 @@ void zxdh_vtpci_reset(struct zxdh_hw *hw) PMD_INIT_LOG(INFO, "port %u device reset %u ms done", hw->port_id, retry); } +void zxdh_vtpci_reinit_complete(struct zxdh_hw *hw) +{ + zxdh_vtpci_set_status(hw, ZXDH_CONFIG_STATUS_DRIVER_OK); +} + +void zxdh_vtpci_set_status(struct zxdh_hw *hw, uint8_t status) +{ + if (status != ZXDH_CONFIG_STATUS_RESET) + status |= ZXDH_VTPCI_OPS(hw)->get_status(hw); + + ZXDH_VTPCI_OPS(hw)->set_status(hw, status); +} + static void *get_cfg_addr(struct rte_pci_device *dev, struct zxdh_pci_cap *cap) { uint8_t bar = cap->bar; diff --git a/drivers/net/zxdh/zxdh_pci.h b/drivers/net/zxdh/zxdh_pci.h index 55d8c5449c..231824f3c4 100644 --- a/drivers/net/zxdh/zxdh_pci.h +++ b/drivers/net/zxdh/zxdh_pci.h @@ -8,7 +8,9 @@ #include <stdint.h> #include <stdbool.h> +#include <rte_pci.h> #include <bus_pci_driver.h> +#include <ethdev_driver.h> #include "zxdh_ethdev.h" @@ -29,13 +31,25 @@ enum zxdh_msix_status { /* Vector value 
used to disable MSI for queue. */ #define ZXDH_MSI_NO_VECTOR 0x7F -#define ZXDH_PCI_CAPABILITY_LIST 0x34 -#define ZXDH_PCI_CAP_ID_VNDR 0x09 -#define ZXDH_PCI_CAP_ID_MSIX 0x11 +#define ZXDH_PCI_CAPABILITY_LIST 0x34 +#define ZXDH_PCI_CAP_ID_VNDR 0x09 +#define ZXDH_PCI_CAP_ID_MSIX 0x11 -#define ZXDH_PCI_MSIX_ENABLE 0x8000 +#define ZXDH_PCI_MSIX_ENABLE 0x8000 +#define ZXDH_PCI_VRING_ALIGN 4096 +#define ZXDH_NET_F_CSUM 0 /* Host handles pkts w/ partial csum */ +#define ZXDH_NET_F_GUEST_CSUM 1 /* Guest handles pkts w/ partial csum */ +#define ZXDH_NET_F_MTU 3 /* Initial MTU advice. */ #define ZXDH_NET_F_MAC 5 /* Host has given MAC address. */ +#define ZXDH_NET_F_GUEST_TSO4 7 /* Guest can handle TSOv4 in. */ +#define ZXDH_NET_F_GUEST_TSO6 8 /* Guest can handle TSOv6 in. */ +#define ZXDH_NET_F_GUEST_ECN 9 /* Guest can handle TSO[6] w/ ECN in. */ +#define ZXDH_NET_F_GUEST_UFO 10 /* Guest can handle UFO in. */ + +#define ZXDH_NET_F_HOST_UFO 14 /* Host can handle UFO in. */ +#define ZXDH_NET_F_HOST_TSO4 11 /* Host can handle TSOv4 in. */ +#define ZXDH_NET_F_HOST_TSO6 12 /* Host can handle TSOv6 in. */ #define ZXDH_NET_F_MRG_RXBUF 15 /* Host can merge receive buffers. */ #define ZXDH_NET_F_STATUS 16 /* zxdh_net_config.status available */ #define ZXDH_NET_F_MQ 22 /* Device supports Receive Flow Steering */ @@ -60,6 +74,8 @@ enum zxdh_msix_status { #define ZXDH_CONFIG_STATUS_DEV_NEED_RESET 0x40 #define ZXDH_CONFIG_STATUS_FAILED 0x80 +#define ZXDH_PCI_QUEUE_ADDR_SHIFT 12 + struct zxdh_net_config { /* The config defining mac address (if ZXDH_NET_F_MAC) */ uint8_t mac[RTE_ETHER_ADDR_LEN]; @@ -122,6 +138,11 @@ static inline int32_t vtpci_with_feature(struct zxdh_hw *hw, uint64_t bit) return (hw->guest_features & (1ULL << bit)) != 0; } +static inline int32_t vtpci_packed_queue(struct zxdh_hw *hw) +{ + return vtpci_with_feature(hw, ZXDH_F_RING_PACKED); +} + struct zxdh_pci_ops { void (*read_dev_cfg)(struct zxdh_hw *hw, size_t offset, void *dst, int32_t len); void (*write_dev_cfg)(struct zxdh_hw *hw, size_t offset, const void *src, int32_t len); @@ -134,6 +155,11 @@ struct zxdh_pci_ops { uint16_t (*set_queue_irq)(struct zxdh_hw *hw, struct zxdh_virtqueue *vq, uint16_t vec); uint16_t (*set_config_irq)(struct zxdh_hw *hw, uint16_t vec); uint8_t (*get_isr)(struct zxdh_hw *hw); + uint16_t (*get_queue_num)(struct zxdh_hw *hw, uint16_t queue_id); + void (*set_queue_num)(struct zxdh_hw *hw, uint16_t queue_id, uint16_t vq_size); + + int32_t (*setup_queue)(struct zxdh_hw *hw, struct zxdh_virtqueue *vq); + void (*del_queue)(struct zxdh_hw *hw, struct zxdh_virtqueue *vq); }; struct zxdh_hw_internal { @@ -156,6 +182,9 @@ uint16_t zxdh_vtpci_get_features(struct zxdh_hw *hw); uint8_t zxdh_vtpci_isr(struct zxdh_hw *hw); enum zxdh_msix_status zxdh_vtpci_msix_detect(struct rte_pci_device *dev); +void zxdh_vtpci_reinit_complete(struct zxdh_hw *hw); +void zxdh_vtpci_set_status(struct zxdh_hw *hw, uint8_t status); + #ifdef __cplusplus } #endif diff --git a/drivers/net/zxdh/zxdh_queue.c b/drivers/net/zxdh/zxdh_queue.c new file mode 100644 index 0000000000..2978a9f272 --- /dev/null +++ b/drivers/net/zxdh/zxdh_queue.c @@ -0,0 +1,123 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#include <stdint.h> +#include <rte_malloc.h> +#include <rte_mbuf.h> + +#include "zxdh_queue.h" +#include "zxdh_logs.h" +#include "zxdh_pci.h" +#include "zxdh_common.h" +#include "zxdh_msg.h" + +struct rte_mbuf *zxdh_virtqueue_detach_unused(struct zxdh_virtqueue *vq) +{ + struct rte_mbuf *cookie = NULL; + int32_t 
idx = 0;
+
+ if (vq == NULL)
+ return NULL;
+
+ for (idx = 0; idx < vq->vq_nentries; idx++) {
+ cookie = vq->vq_descx[idx].cookie;
+ if (cookie != NULL) {
+ vq->vq_descx[idx].cookie = NULL;
+ return cookie;
+ }
+ }
+ return NULL;
+}
+
+static int32_t zxdh_release_channel(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint16_t nr_vq = hw->queue_num;
+ uint32_t var = 0;
+ uint32_t addr = 0;
+ uint32_t widx = 0;
+ uint32_t bidx = 0;
+ uint16_t pch = 0;
+ uint16_t lch = 0;
+ int32_t ret = 0;
+
+ ret = zxdh_timedlock(hw, 1000);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to acquire hw lock, timed out");
+ return -1;
+ }
+
+ for (lch = 0; lch < nr_vq; lch++) {
+ if (hw->channel_context[lch].valid == 0) {
+ PMD_INIT_LOG(DEBUG, "Logic channel %d does not need to be released", lch);
+ continue;
+ }
+
+ pch = hw->channel_context[lch].ph_chno;
+ widx = pch / 32;
+ bidx = pch % 32;
+
+ addr = ZXDH_QUERES_SHARE_BASE + (widx * sizeof(uint32_t));
+ var = zxdh_read_bar_reg(dev, ZXDH_BAR0_INDEX, addr);
+ var &= ~(1 << bidx);
+ zxdh_write_bar_reg(dev, ZXDH_BAR0_INDEX, addr, var);
+
+ hw->channel_context[lch].valid = 0;
+ hw->channel_context[lch].ph_chno = 0;
+ }
+
+ zxdh_release_lock(hw);
+
+ return 0;
+}
+
+int32_t zxdh_get_queue_type(uint16_t vtpci_queue_idx)
+{
+ if (vtpci_queue_idx % 2 == 0)
+ return ZXDH_VTNET_RQ;
+ else
+ return ZXDH_VTNET_TQ;
+}
+
+int32_t zxdh_free_queues(struct rte_eth_dev *dev)
+{
+ struct zxdh_hw *hw = dev->data->dev_private;
+ uint16_t nr_vq = hw->queue_num;
+ struct zxdh_virtqueue *vq = NULL;
+ int32_t queue_type = 0;
+ uint16_t i = 0;
+
+ if (hw->vqs == NULL)
+ return 0;
+
+ if (zxdh_release_channel(dev) < 0) {
+ PMD_INIT_LOG(ERR, "Failed to clear COI table");
+ return -1;
+ }
+
+ for (i = 0; i < nr_vq; i++) {
+ vq = hw->vqs[i];
+ if (vq == NULL)
+ continue;
+
+ ZXDH_VTPCI_OPS(hw)->del_queue(hw, vq);
+ queue_type = zxdh_get_queue_type(i);
+ if (queue_type == ZXDH_VTNET_RQ) {
+ rte_free(vq->sw_ring);
+ rte_memzone_free(vq->rxq.mz);
+ } else if (queue_type == ZXDH_VTNET_TQ) {
+ rte_memzone_free(vq->txq.mz);
+ rte_memzone_free(vq->txq.zxdh_net_hdr_mz);
+ }
+
+ rte_free(vq);
+ hw->vqs[i] = NULL;
+ PMD_INIT_LOG(DEBUG, "Released queue %d successfully!", i);
+ }
+
+ rte_free(hw->vqs);
+ hw->vqs = NULL;
+
+ return 0;
+}
diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h
index 66f37ec612..a9ba0be0d0 100644
--- a/drivers/net/zxdh/zxdh_queue.h
+++ b/drivers/net/zxdh/zxdh_queue.h
@@ -11,11 +11,30 @@
#include "zxdh_ethdev.h"
#include "zxdh_rxtx.h"
+#include "zxdh_pci.h"
#ifdef __cplusplus
extern "C" {
#endif
+enum { ZXDH_VTNET_RQ = 0, ZXDH_VTNET_TQ = 1 };
+
+#define ZXDH_VIRTQUEUE_MAX_NAME_SZ 32
+#define ZXDH_RQ_QUEUE_IDX 0
+#define ZXDH_TQ_QUEUE_IDX 1
+#define ZXDH_MAX_TX_INDIRECT 8
+
+/* This marks a buffer as write-only (otherwise read-only). */
+#define ZXDH_VRING_DESC_F_WRITE 2
+/* This flag means the descriptor was made available by the driver */
+#define ZXDH_VRING_PACKED_DESC_F_AVAIL (1 << (7))
+
+#define ZXDH_RING_EVENT_FLAGS_ENABLE 0x0
+#define ZXDH_RING_EVENT_FLAGS_DISABLE 0x1
+#define ZXDH_RING_EVENT_FLAGS_DESC 0x2
+
+#define ZXDH_VQ_RING_DESC_CHAIN_END 32768
+
/** ring descriptors: 16 bytes.
* These can chain together via "next".
**/
@@ -26,6 +45,19 @@ struct zxdh_vring_desc {
uint16_t next; /* We chain unused descriptors via this. */
} __rte_packed;
+struct zxdh_vring_used_elem {
+ /* Index of start of used descriptor chain. */
+ uint32_t id;
+ /* Total length of the descriptor chain which was written to.
*/ + uint32_t len; +}; + +struct zxdh_vring_used { + uint16_t flags; + uint16_t idx; + struct zxdh_vring_used_elem ring[]; +}; + struct zxdh_vring_avail { uint16_t flags; uint16_t idx; @@ -102,6 +134,147 @@ struct zxdh_virtqueue { struct zxdh_vq_desc_extra vq_descx[]; } __rte_packed; +struct zxdh_type_hdr { + uint8_t port; /* bit[0:1] 00-np 01-DRS 10-DTP */ + uint8_t pd_len; + uint8_t num_buffers; + uint8_t reserved; +} __rte_packed; /* 4B */ + +struct zxdh_pi_hdr { + uint8_t pi_len; + uint8_t pkt_type; + uint16_t vlan_id; + uint32_t ipv6_extend; + uint16_t l3_offset; + uint16_t l4_offset; + uint8_t phy_port; + uint8_t pkt_flag_hi8; + uint16_t pkt_flag_lw16; + union { + struct { + uint64_t sa_idx; + uint8_t reserved_8[8]; + } dl; + struct { + uint32_t lro_flag; + uint32_t lro_mss; + uint16_t err_code; + uint16_t pm_id; + uint16_t pkt_len; + uint8_t reserved[2]; + } ul; + }; +} __rte_packed; /* 32B */ + +struct zxdh_pd_hdr_dl { + uint32_t ol_flag; + uint8_t tag_idx; + uint8_t tag_data; + uint16_t dst_vfid; + uint32_t svlan_insert; + uint32_t cvlan_insert; +} __rte_packed; /* 16B */ + +struct zxdh_net_hdr_dl { + struct zxdh_type_hdr type_hdr; /* 4B */ + struct zxdh_pi_hdr pi_hdr; /* 32B */ + struct zxdh_pd_hdr_dl pd_hdr; /* 16B */ +} __rte_packed; + +struct zxdh_pd_hdr_ul { + uint32_t pkt_flag; + uint32_t rss_hash; + uint32_t fd; + uint32_t striped_vlan_tci; + /* ovs */ + uint8_t tag_idx; + uint8_t tag_data; + uint16_t src_vfid; + /* */ + uint16_t pkt_type_out; + uint16_t pkt_type_in; +} __rte_packed; /* 24B */ + +struct zxdh_net_hdr_ul { + struct zxdh_type_hdr type_hdr; /* 4B */ + struct zxdh_pi_hdr pi_hdr; /* 32B */ + struct zxdh_pd_hdr_ul pd_hdr; /* 24B */ +} __rte_packed; /* 60B */ + +struct zxdh_tx_region { + struct zxdh_net_hdr_dl tx_hdr; + union { + struct zxdh_vring_desc tx_indir[ZXDH_MAX_TX_INDIRECT]; + struct zxdh_vring_packed_desc tx_packed_indir[ZXDH_MAX_TX_INDIRECT]; + } __rte_packed; +}; + +static inline size_t vring_size(struct zxdh_hw *hw, uint32_t num, unsigned long align) +{ + size_t size; + + if (vtpci_packed_queue(hw)) { + size = num * sizeof(struct zxdh_vring_packed_desc); + size += sizeof(struct zxdh_vring_packed_desc_event); + size = RTE_ALIGN_CEIL(size, align); + size += sizeof(struct zxdh_vring_packed_desc_event); + return size; + } + + size = num * sizeof(struct zxdh_vring_desc); + size += sizeof(struct zxdh_vring_avail) + (num * sizeof(uint16_t)); + size = RTE_ALIGN_CEIL(size, align); + size += sizeof(struct zxdh_vring_used) + (num * sizeof(struct zxdh_vring_used_elem)); + return size; +} + +static inline void vring_init_packed(struct zxdh_vring_packed *vr, uint8_t *p, + unsigned long align, uint32_t num) +{ + vr->num = num; + vr->desc = (struct zxdh_vring_packed_desc *)p; + vr->driver = (struct zxdh_vring_packed_desc_event *)(p + + vr->num * sizeof(struct zxdh_vring_packed_desc)); + vr->device = (struct zxdh_vring_packed_desc_event *)RTE_ALIGN_CEIL(((uintptr_t)vr->driver + + sizeof(struct zxdh_vring_packed_desc_event)), align); +} + +static inline void vring_desc_init_packed(struct zxdh_virtqueue *vq, int32_t n) +{ + int32_t i = 0; + + for (i = 0; i < n - 1; i++) { + vq->vq_packed.ring.desc[i].id = i; + vq->vq_descx[i].next = i + 1; + } + vq->vq_packed.ring.desc[i].id = i; + vq->vq_descx[i].next = ZXDH_VQ_RING_DESC_CHAIN_END; +} + +static inline void vring_desc_init_indirect_packed(struct zxdh_vring_packed_desc *dp, int32_t n) +{ + int32_t i = 0; + + for (i = 0; i < n; i++) { + dp[i].id = (uint16_t)i; + dp[i].flags = ZXDH_VRING_DESC_F_WRITE; + } +} + +static 
inline void virtqueue_disable_intr(struct zxdh_virtqueue *vq) +{ + if (vq->vq_packed.event_flags_shadow != ZXDH_RING_EVENT_FLAGS_DISABLE) { + vq->vq_packed.event_flags_shadow = ZXDH_RING_EVENT_FLAGS_DISABLE; + vq->vq_packed.ring.driver->desc_event_flags = vq->vq_packed.event_flags_shadow; + } +} + +struct rte_mbuf *zxdh_virtqueue_detach_unused(struct zxdh_virtqueue *vq); +int32_t zxdh_free_queues(struct rte_eth_dev *dev); +int32_t zxdh_get_queue_type(uint16_t vtpci_queue_idx); + + #ifdef __cplusplus } #endif diff --git a/drivers/net/zxdh/zxdh_rxtx.h b/drivers/net/zxdh/zxdh_rxtx.h index 31b1c8f0a5..7d4b5481ec 100644 --- a/drivers/net/zxdh/zxdh_rxtx.h +++ b/drivers/net/zxdh/zxdh_rxtx.h @@ -40,8 +40,8 @@ struct zxdh_virtnet_rx { struct zxdh_virtnet_tx { struct zxdh_virtqueue *vq; - const struct rte_memzone *virtio_net_hdr_mz; /* memzone to populate hdr. */ - rte_iova_t virtio_net_hdr_mem; /* hdr for each xmit packet */ + const struct rte_memzone *zxdh_net_hdr_mz; /* memzone to populate hdr. */ + rte_iova_t zxdh_net_hdr_mem; /* hdr for each xmit packet */ uint16_t queue_id; /* DPDK queue index. */ uint16_t port_id; /* Device port identifier. */ struct zxdh_virtnet_stats stats; -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 94517 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
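A note on the channel bookkeeping in the patch above: the COI table is a bitmap in BAR0 in which each 32-bit word tracks 32 physical channels, with RX channels on even bits and TX channels on odd bits, which is why the scan steps by 2. Below is a minimal, self-contained sketch of that walk; find_free_channel, coi_table and nwords are hypothetical names, and the plain array stands in for the zxdh_read_bar_reg()/zxdh_write_bar_reg() BAR accessors:

  #include <stdint.h>

  /* Sketch of the zxdh_get_available_channel() bitmap walk: scan each
   * 32-bit COI word, stepping by 2 so RX stays on even bits and TX on
   * odd bits; claim the first clear bit and return its channel id. */
  static int find_free_channel(uint32_t *coi_table, int nwords, int is_tx)
  {
      int base = is_tx ? 1 : 0;
      int w, b;

      for (w = 0; w < nwords; w++) {
          uint32_t var = coi_table[w];        /* zxdh_read_bar_reg() on hw */

          for (b = base; b < 32; b += 2) {
              if ((var & (1u << b)) == 0) {
                  coi_table[w] = var | (1u << b); /* zxdh_write_bar_reg() */
                  return w * 32 + b;          /* physical channel id */
              }
          }
      }
      return -1; /* no free channel */
  }

The read-modify-write on the shared table is only safe because the real code holds the hardware lock taken via zxdh_timedlock() for the whole scan.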
* [PATCH v7 2/9] net/zxdh: add logging implementation 2024-10-22 12:20 ` [PATCH v7 0/9] net/zxdh: introduce net zxdh driver Junlong Wang 2024-10-22 12:20 ` [PATCH v7 1/9] net/zxdh: add zxdh ethdev pmd driver Junlong Wang @ 2024-10-22 12:20 ` Junlong Wang 2024-10-22 12:20 ` [PATCH v7 3/9] net/zxdh: add zxdh device pci init implementation Junlong Wang ` (6 subsequent siblings) 8 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-10-22 12:20 UTC (permalink / raw) To: dev; +Cc: thomas, ferruh.yigit, stephen, wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 3417 bytes --]

Add zxdh logging implementation.

Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn>
---
drivers/net/zxdh/zxdh_ethdev.c | 15 +++++++++++--
drivers/net/zxdh/zxdh_logs.h | 40 ++++++++++++++++++++++++++++++++++
2 files changed, 53 insertions(+), 2 deletions(-)
create mode 100644 drivers/net/zxdh/zxdh_logs.h

diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c
index 75d8b28cc3..7220770c01 100644
--- a/drivers/net/zxdh/zxdh_ethdev.c
+++ b/drivers/net/zxdh/zxdh_ethdev.c
@@ -7,6 +7,7 @@
#include <rte_ethdev.h>
#include "zxdh_ethdev.h"
+#include "zxdh_logs.h"
static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
{
@@ -19,13 +20,18 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
/* Allocate memory for storing MAC addresses */
eth_dev->data->mac_addrs = rte_zmalloc("zxdh_mac",
ZXDH_MAX_MAC_ADDRS * RTE_ETHER_ADDR_LEN, 0);
- if (eth_dev->data->mac_addrs == NULL)
+ if (eth_dev->data->mac_addrs == NULL) {
+ PMD_INIT_LOG(ERR, "Failed to allocate %d bytes to store MAC addresses",
+ ZXDH_MAX_MAC_ADDRS * RTE_ETHER_ADDR_LEN);
return -ENOMEM;
+ }
memset(hw, 0, sizeof(*hw));
hw->bar_addr[0] = (uint64_t)pci_dev->mem_resource[0].addr;
- if (hw->bar_addr[0] == 0)
+ if (hw->bar_addr[0] == 0) {
+ PMD_INIT_LOG(ERR, "Bad mem resource.");
return -EIO;
+ }
hw->device_id = pci_dev->id.device_id;
hw->port_id = eth_dev->data->port_id;
@@ -90,3 +96,8 @@ static struct rte_pci_driver zxdh_pmd = {
RTE_PMD_REGISTER_PCI(net_zxdh, zxdh_pmd);
RTE_PMD_REGISTER_PCI_TABLE(net_zxdh, pci_id_zxdh_map);
RTE_PMD_REGISTER_KMOD_DEP(net_zxdh, "* vfio-pci");
+RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_init, init, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_driver, driver, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_rx, rx, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_tx, tx, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_msg, msg, NOTICE);
diff --git a/drivers/net/zxdh/zxdh_logs.h b/drivers/net/zxdh/zxdh_logs.h
new file mode 100644
index 0000000000..a8a6a3135b
--- /dev/null
+++ b/drivers/net/zxdh/zxdh_logs.h
@@ -0,0 +1,40 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 ZTE Corporation
+ */
+
+#ifndef ZXDH_LOGS_H
+#define ZXDH_LOGS_H
+
+#include <rte_log.h>
+
+extern int zxdh_logtype_init;
+#define RTE_LOGTYPE_ZXDH_INIT zxdh_logtype_init
+#define PMD_INIT_LOG(level, ...) \
+ RTE_LOG_LINE_PREFIX(level, ZXDH_INIT, "offload_zxdh %s(): ", \
+ __func__, __VA_ARGS__)
+
+extern int zxdh_logtype_driver;
+#define RTE_LOGTYPE_ZXDH_DRIVER zxdh_logtype_driver
+#define PMD_DRV_LOG(level, ...) \
+ RTE_LOG_LINE_PREFIX(level, ZXDH_DRIVER, "offload_zxdh %s(): ", \
+ __func__, __VA_ARGS__)
+
+extern int zxdh_logtype_rx;
+#define RTE_LOGTYPE_ZXDH_RX zxdh_logtype_rx
+#define PMD_RX_LOG(level, ...) \
+ RTE_LOG_LINE_PREFIX(level, ZXDH_RX, "offload_zxdh %s(): ", \
+ __func__, __VA_ARGS__)
+
+extern int zxdh_logtype_tx;
+#define RTE_LOGTYPE_ZXDH_TX zxdh_logtype_tx
+#define PMD_TX_LOG(level, ...)
\ + RTE_LOG_LINE_PREFIX(level, ZXDH_TX, "offload_zxdh %s(): ", \ + __func__, __VA_ARGS__) + +extern int zxdh_logtype_msg; +#define RTE_LOGTYPE_ZXDH_MSG zxdh_logtype_msg +#define PMD_MSG_LOG(level, ...) \ + RTE_LOG_LINE_PREFIX(level, ZXDH_MSG, "offload_zxdh %s(): ", \ + __func__, __VA_ARGS__) + +#endif /* ZXDH_LOGS_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 6146 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
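For context, RTE_LOG_REGISTER_SUFFIX() creates one dynamically controllable log type per suffix; assuming the build assigns this driver the usual default logtype prefix (pmd.net.zxdh, which the patch itself does not spell out), each component can then be tuned at runtime without rebuilding. A small usage sketch of the macros above, with example_probe_step being a hypothetical caller:

  #include "zxdh_logs.h"

  /* Assuming the init logtype resolves to pmd.net.zxdh.init, it can be
   * raised with an EAL option such as:
   *   --log-level=pmd.net.zxdh.init:debug
   */
  static int example_probe_step(int port)
  {
      PMD_INIT_LOG(DEBUG, "probing port %d", port);
      /* emits: "offload_zxdh example_probe_step(): probing port 0" */
      if (port < 0) {
          PMD_INIT_LOG(ERR, "invalid port %d", port);
          return -1;
      }
      return 0;
  }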
* [PATCH v7 3/9] net/zxdh: add zxdh device pci init implementation 2024-10-22 12:20 ` [PATCH v7 0/9] net/zxdh: introduce net zxdh driver Junlong Wang 2024-10-22 12:20 ` [PATCH v7 1/9] net/zxdh: add zxdh ethdev pmd driver Junlong Wang 2024-10-22 12:20 ` [PATCH v7 2/9] net/zxdh: add logging implementation Junlong Wang @ 2024-10-22 12:20 ` Junlong Wang 2024-10-27 16:47 ` Stephen Hemminger 2024-10-27 16:47 ` Stephen Hemminger 2024-10-22 12:20 ` [PATCH v7 4/9] net/zxdh: add msg chan and msg hwlock init Junlong Wang ` (5 subsequent siblings) 8 siblings, 2 replies; 225+ messages in thread From: Junlong Wang @ 2024-10-22 12:20 UTC (permalink / raw) To: dev; +Cc: thomas, ferruh.yigit, stephen, wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 22886 bytes --] Add device pci init implementation, to obtain PCI capability and read configuration, etc. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/meson.build | 1 + drivers/net/zxdh/zxdh_ethdev.c | 43 +++++ drivers/net/zxdh/zxdh_ethdev.h | 22 ++- drivers/net/zxdh/zxdh_pci.c | 290 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_pci.h | 151 +++++++++++++++++ drivers/net/zxdh/zxdh_queue.h | 105 ++++++++++++ drivers/net/zxdh/zxdh_rxtx.h | 51 ++++++ 7 files changed, 660 insertions(+), 3 deletions(-) create mode 100644 drivers/net/zxdh/zxdh_pci.c create mode 100644 drivers/net/zxdh/zxdh_pci.h create mode 100644 drivers/net/zxdh/zxdh_queue.h create mode 100644 drivers/net/zxdh/zxdh_rxtx.h diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build index 932fb1c835..7db4e7bc71 100644 --- a/drivers/net/zxdh/meson.build +++ b/drivers/net/zxdh/meson.build @@ -15,4 +15,5 @@ endif sources = files( 'zxdh_ethdev.c', + 'zxdh_pci.c', ) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 7220770c01..f34b2af7b3 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -8,6 +8,40 @@ #include "zxdh_ethdev.h" #include "zxdh_logs.h" +#include "zxdh_pci.h" + +struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; + +static int32_t zxdh_init_device(struct rte_eth_dev *eth_dev) +{ + struct zxdh_hw *hw = eth_dev->data->dev_private; + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); + int ret = 0; + + ret = zxdh_read_pci_caps(pci_dev, hw); + if (ret) { + PMD_INIT_LOG(ERR, "port 0x%x pci caps read failed .", hw->port_id); + goto err; + } + + zxdh_hw_internal[hw->port_id].vtpci_ops = &zxdh_dev_pci_ops; + zxdh_vtpci_reset(hw); + zxdh_get_pci_dev_config(hw); + + rte_ether_addr_copy((struct rte_ether_addr *)hw->mac_addr, ð_dev->data->mac_addrs[0]); + + /* If host does not support both status and MSI-X then disable LSC */ + if (vtpci_with_feature(hw, ZXDH_NET_F_STATUS) && hw->use_msix != ZXDH_MSIX_NONE) + eth_dev->data->dev_flags |= RTE_ETH_DEV_INTR_LSC; + else + eth_dev->data->dev_flags &= ~RTE_ETH_DEV_INTR_LSC; + + return 0; + +err: + PMD_INIT_LOG(ERR, "port %d init device failed", eth_dev->data->port_id); + return ret; +} static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) { @@ -45,6 +79,15 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) hw->is_pf = 1; } + ret = zxdh_init_device(eth_dev); + if (ret < 0) + goto err_zxdh_init; + + return ret; + +err_zxdh_init: + rte_free(eth_dev->data->mac_addrs); + eth_dev->data->mac_addrs = NULL; return ret; } diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index 086f3a0cdc..e2db2508e2 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ 
b/drivers/net/zxdh/zxdh_ethdev.h @@ -5,6 +5,8 @@ #ifndef ZXDH_ETHDEV_H #define ZXDH_ETHDEV_H +#include <rte_ether.h> + #include "ethdev_driver.h" #ifdef __cplusplus @@ -23,16 +25,30 @@ extern "C" { #define ZXDH_MAX_MC_MAC_ADDRS 32 #define ZXDH_MAX_MAC_ADDRS (ZXDH_MAX_UC_MAC_ADDRS + ZXDH_MAX_MC_MAC_ADDRS) -#define ZXDH_NUM_BARS 2 +#define ZXDH_NUM_BARS 2 +#define ZXDH_RX_QUEUES_MAX 128U +#define ZXDH_TX_QUEUES_MAX 128U struct zxdh_hw { struct rte_eth_dev *eth_dev; - uint64_t bar_addr[ZXDH_NUM_BARS]; + struct zxdh_pci_common_cfg *common_cfg; + struct zxdh_net_config *dev_cfg; - uint32_t speed; + uint64_t bar_addr[ZXDH_NUM_BARS]; + uint64_t host_features; + uint64_t guest_features; + uint32_t max_queue_pairs; + uint32_t speed; + uint32_t notify_off_multiplier; + uint16_t *notify_base; + uint16_t pcie_id; uint16_t device_id; uint16_t port_id; + uint8_t *isr; + uint8_t weak_barriers; + uint8_t use_msix; + uint8_t mac_addr[RTE_ETHER_ADDR_LEN]; uint8_t duplex; uint8_t is_pf; }; diff --git a/drivers/net/zxdh/zxdh_pci.c b/drivers/net/zxdh/zxdh_pci.c new file mode 100644 index 0000000000..b0c3d0e0be --- /dev/null +++ b/drivers/net/zxdh/zxdh_pci.c @@ -0,0 +1,290 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#include <stdint.h> +#include <unistd.h> + +#ifdef RTE_EXEC_ENV_LINUX + #include <dirent.h> + #include <fcntl.h> +#endif + +#include <rte_io.h> +#include <rte_bus.h> +#include <rte_pci.h> +#include <rte_common.h> +#include <rte_cycles.h> + +#include "zxdh_ethdev.h" +#include "zxdh_pci.h" +#include "zxdh_logs.h" +#include "zxdh_queue.h" + +#define ZXDH_PMD_DEFAULT_GUEST_FEATURES \ + (1ULL << ZXDH_NET_F_MRG_RXBUF | \ + 1ULL << ZXDH_NET_F_STATUS | \ + 1ULL << ZXDH_NET_F_MQ | \ + 1ULL << ZXDH_F_ANY_LAYOUT | \ + 1ULL << ZXDH_F_VERSION_1 | \ + 1ULL << ZXDH_F_RING_PACKED | \ + 1ULL << ZXDH_F_IN_ORDER | \ + 1ULL << ZXDH_F_NOTIFICATION_DATA | \ + 1ULL << ZXDH_NET_F_MAC) + +static void zxdh_read_dev_config(struct zxdh_hw *hw, + size_t offset, + void *dst, + int32_t length) +{ + int32_t i = 0; + uint8_t *p = NULL; + uint8_t old_gen = 0; + uint8_t new_gen = 0; + + do { + old_gen = rte_read8(&hw->common_cfg->config_generation); + + p = dst; + for (i = 0; i < length; i++) + *p++ = rte_read8((uint8_t *)hw->dev_cfg + offset + i); + + new_gen = rte_read8(&hw->common_cfg->config_generation); + } while (old_gen != new_gen); +} + +static void zxdh_write_dev_config(struct zxdh_hw *hw, + size_t offset, + const void *src, + int32_t length) +{ + int32_t i = 0; + const uint8_t *p = src; + + for (i = 0; i < length; i++) + rte_write8((*p++), (((uint8_t *)hw->dev_cfg) + offset + i)); +} + +static uint8_t zxdh_get_status(struct zxdh_hw *hw) +{ + return rte_read8(&hw->common_cfg->device_status); +} + +static void zxdh_set_status(struct zxdh_hw *hw, uint8_t status) +{ + rte_write8(status, &hw->common_cfg->device_status); +} + +static uint64_t zxdh_get_features(struct zxdh_hw *hw) +{ + uint32_t features_lo = 0; + uint32_t features_hi = 0; + + rte_write32(0, &hw->common_cfg->device_feature_select); + features_lo = rte_read32(&hw->common_cfg->device_feature); + + rte_write32(1, &hw->common_cfg->device_feature_select); + features_hi = rte_read32(&hw->common_cfg->device_feature); + + return ((uint64_t)features_hi << 32) | features_lo; +} + +static void zxdh_set_features(struct zxdh_hw *hw, uint64_t features) +{ + rte_write32(0, &hw->common_cfg->guest_feature_select); + rte_write32(features & ((1ULL << 32) - 1), &hw->common_cfg->guest_feature); + rte_write32(1, 
&hw->common_cfg->guest_feature_select); + rte_write32(features >> 32, &hw->common_cfg->guest_feature); +} + +const struct zxdh_pci_ops zxdh_dev_pci_ops = { + .read_dev_cfg = zxdh_read_dev_config, + .write_dev_cfg = zxdh_write_dev_config, + .get_status = zxdh_get_status, + .set_status = zxdh_set_status, + .get_features = zxdh_get_features, + .set_features = zxdh_set_features, +}; + +uint16_t zxdh_vtpci_get_features(struct zxdh_hw *hw) +{ + return VTPCI_OPS(hw)->get_features(hw); +} + +void zxdh_vtpci_reset(struct zxdh_hw *hw) +{ + PMD_INIT_LOG(INFO, "port %u device start reset, just wait...", hw->port_id); + uint32_t retry = 0; + + VTPCI_OPS(hw)->set_status(hw, ZXDH_CONFIG_STATUS_RESET); + /* Flush status write and wait device ready max 3 seconds. */ + while (VTPCI_OPS(hw)->get_status(hw) != ZXDH_CONFIG_STATUS_RESET) { + ++retry; + rte_delay_ms(1); + } + PMD_INIT_LOG(INFO, "port %u device reset %u ms done", hw->port_id, retry); +} + +static void *get_cfg_addr(struct rte_pci_device *dev, struct zxdh_pci_cap *cap) +{ + uint8_t bar = cap->bar; + uint32_t length = cap->length; + uint32_t offset = cap->offset; + + if (bar >= PCI_MAX_RESOURCE) { + PMD_INIT_LOG(ERR, "invalid bar: %u", bar); + return NULL; + } + if (offset + length < offset) { + PMD_INIT_LOG(ERR, "offset(%u) + length(%u) overflows", offset, length); + return NULL; + } + if (offset + length > dev->mem_resource[bar].len) { + PMD_INIT_LOG(ERR, "invalid cap: overflows bar space"); + return NULL; + } + uint8_t *base = dev->mem_resource[bar].addr; + + if (base == NULL) { + PMD_INIT_LOG(ERR, "bar %u base addr is NULL", bar); + return NULL; + } + return base + offset; +} + +int32_t zxdh_read_pci_caps(struct rte_pci_device *dev, struct zxdh_hw *hw) +{ + uint8_t pos = 0; + int32_t ret = 0; + + if (dev->mem_resource[0].addr == NULL) { + PMD_INIT_LOG(ERR, "bar0 base addr is NULL"); + return -1; + } + + ret = rte_pci_read_config(dev, &pos, 1, ZXDH_PCI_CAPABILITY_LIST); + + if (ret != 1) { + PMD_INIT_LOG(DEBUG, "failed to read pci capability list, ret %d", ret); + return -1; + } + while (pos) { + struct zxdh_pci_cap cap; + + ret = rte_pci_read_config(dev, &cap, 2, pos); + if (ret != 2) { + PMD_INIT_LOG(DEBUG, "failed to read pci cap at pos: %x ret %d", pos, ret); + break; + } + if (cap.cap_vndr == ZXDH_PCI_CAP_ID_MSIX) { + /** + * Transitional devices would also have this capability, + * that's why we also check if msix is enabled. + * 1st byte is cap ID; 2nd byte is the position of next cap; + * next two bytes are the flags. + */ + uint16_t flags = 0; + + ret = rte_pci_read_config(dev, &flags, sizeof(flags), pos + 2); + if (ret != sizeof(flags)) { + PMD_INIT_LOG(ERR, "failed to read pci cap at pos: %x ret %d", + pos + 2, ret); + break; + } + hw->use_msix = (flags & ZXDH_PCI_MSIX_ENABLE) ? 
+ ZXDH_MSIX_ENABLED : ZXDH_MSIX_DISABLED; + } + if (cap.cap_vndr != ZXDH_PCI_CAP_ID_VNDR) { + PMD_INIT_LOG(DEBUG, "[%2x] skipping non VNDR cap id: %02x", + pos, cap.cap_vndr); + goto next; + } + ret = rte_pci_read_config(dev, &cap, sizeof(cap), pos); + if (ret != sizeof(cap)) { + PMD_INIT_LOG(ERR, "failed to read pci cap at pos: %x ret %d", pos, ret); + break; + } + PMD_INIT_LOG(DEBUG, "[%2x] cfg type: %u, bar: %u, offset: %04x, len: %u", + pos, cap.cfg_type, cap.bar, cap.offset, cap.length); + switch (cap.cfg_type) { + case ZXDH_PCI_CAP_COMMON_CFG: + hw->common_cfg = get_cfg_addr(dev, &cap); + break; + case ZXDH_PCI_CAP_NOTIFY_CFG: { + ret = rte_pci_read_config(dev, &hw->notify_off_multiplier, + 4, pos + sizeof(cap)); + if (ret != 4) + PMD_INIT_LOG(ERR, + "failed to read notify_off_multiplier, ret %d", ret); + else + hw->notify_base = get_cfg_addr(dev, &cap); + break; + } + case ZXDH_PCI_CAP_DEVICE_CFG: + hw->dev_cfg = get_cfg_addr(dev, &cap); + break; + case ZXDH_PCI_CAP_ISR_CFG: + hw->isr = get_cfg_addr(dev, &cap); + break; + case ZXDH_PCI_CAP_PCI_CFG: { + hw->pcie_id = *(uint16_t *)&cap.padding[1]; + PMD_INIT_LOG(DEBUG, "get pcie id 0x%x", hw->pcie_id); + uint16_t pcie_id = hw->pcie_id; + + if ((pcie_id >> 11) & 0x1) /* PF */ { + PMD_INIT_LOG(DEBUG, "EP %u PF %u", + pcie_id >> 12, (pcie_id >> 8) & 0x7); + } else { /* VF */ + PMD_INIT_LOG(DEBUG, "EP %u PF %u VF %u", + pcie_id >> 12, (pcie_id >> 8) & 0x7, pcie_id & 0xff); + } + break; + } + } +next: + pos = cap.cap_next; + } + if (hw->common_cfg == NULL || hw->notify_base == NULL || + hw->dev_cfg == NULL || hw->isr == NULL) { + PMD_INIT_LOG(ERR, "no zxdh pci device found."); + return -1; + } + return 0; +} + +void zxdh_vtpci_read_dev_config(struct zxdh_hw *hw, size_t offset, void *dst, int32_t length) +{ + VTPCI_OPS(hw)->read_dev_cfg(hw, offset, dst, length); +} + +int32_t zxdh_get_pci_dev_config(struct zxdh_hw *hw) +{ + uint64_t guest_features = 0; + uint64_t nego_features = 0; + uint32_t max_queue_pairs = 0; + + hw->host_features = zxdh_vtpci_get_features(hw); + + guest_features = (uint64_t)ZXDH_PMD_DEFAULT_GUEST_FEATURES; + nego_features = guest_features & hw->host_features; + + hw->guest_features = nego_features; + + if (hw->guest_features & (1ULL << ZXDH_NET_F_MAC)) { + zxdh_vtpci_read_dev_config(hw, offsetof(struct zxdh_net_config, mac), + &hw->mac_addr, RTE_ETHER_ADDR_LEN); + } else { + rte_eth_random_addr(&hw->mac_addr[0]); + } + + zxdh_vtpci_read_dev_config(hw, offsetof(struct zxdh_net_config, max_virtqueue_pairs), + &max_queue_pairs, sizeof(max_queue_pairs)); + + if (max_queue_pairs == 0) + hw->max_queue_pairs = ZXDH_RX_QUEUES_MAX; + else + hw->max_queue_pairs = RTE_MIN(ZXDH_RX_QUEUES_MAX, max_queue_pairs); + PMD_INIT_LOG(DEBUG, "set max queue pairs %d", hw->max_queue_pairs); + + return 0; +} diff --git a/drivers/net/zxdh/zxdh_pci.h b/drivers/net/zxdh/zxdh_pci.h new file mode 100644 index 0000000000..f67de40962 --- /dev/null +++ b/drivers/net/zxdh/zxdh_pci.h @@ -0,0 +1,151 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#ifndef ZXDH_PCI_H +#define ZXDH_PCI_H + +#include <stdint.h> +#include <stdbool.h> + +#include <bus_pci_driver.h> + +#include "zxdh_ethdev.h" + +#ifdef __cplusplus +extern "C" { +#endif + +enum zxdh_msix_status { + ZXDH_MSIX_NONE = 0, + ZXDH_MSIX_DISABLED = 1, + ZXDH_MSIX_ENABLED = 2 +}; + +#define ZXDH_PCI_CAPABILITY_LIST 0x34 +#define ZXDH_PCI_CAP_ID_VNDR 0x09 +#define ZXDH_PCI_CAP_ID_MSIX 0x11 + +#define ZXDH_PCI_MSIX_ENABLE 0x8000 + +#define ZXDH_NET_F_MAC 5 /* 
Host has given MAC address. */ +#define ZXDH_NET_F_MRG_RXBUF 15 /* Host can merge receive buffers. */ +#define ZXDH_NET_F_STATUS 16 /* zxdh_net_config.status available */ +#define ZXDH_NET_F_MQ 22 /* Device supports Receive Flow Steering */ +#define ZXDH_F_ANY_LAYOUT 27 /* Can the device handle any descriptor layout */ +#define ZXDH_F_VERSION_1 32 +#define ZXDH_F_RING_PACKED 34 +#define ZXDH_F_IN_ORDER 35 +#define ZXDH_F_NOTIFICATION_DATA 38 + +#define ZXDH_PCI_CAP_COMMON_CFG 1 /* Common configuration */ +#define ZXDH_PCI_CAP_NOTIFY_CFG 2 /* Notifications */ +#define ZXDH_PCI_CAP_ISR_CFG 3 /* ISR Status */ +#define ZXDH_PCI_CAP_DEVICE_CFG 4 /* Device specific configuration */ +#define ZXDH_PCI_CAP_PCI_CFG 5 /* PCI configuration access */ + +/* Status byte for guest to report progress. */ +#define ZXDH_CONFIG_STATUS_RESET 0x00 +#define ZXDH_CONFIG_STATUS_ACK 0x01 +#define ZXDH_CONFIG_STATUS_DRIVER 0x02 +#define ZXDH_CONFIG_STATUS_DRIVER_OK 0x04 +#define ZXDH_CONFIG_STATUS_FEATURES_OK 0x08 +#define ZXDH_CONFIG_STATUS_DEV_NEED_RESET 0x40 +#define ZXDH_CONFIG_STATUS_FAILED 0x80 + +struct zxdh_net_config { + /* The config defining mac address (if ZXDH_NET_F_MAC) */ + uint8_t mac[RTE_ETHER_ADDR_LEN]; + /* See ZXDH_NET_F_STATUS and ZXDH_NET_S_* above */ + uint16_t status; + uint16_t max_virtqueue_pairs; + uint16_t mtu; + /* + * speed, in units of 1Mb. All values 0 to INT_MAX are legal. + * Any other value stands for unknown. + */ + uint32_t speed; + /* 0x00 - half duplex + * 0x01 - full duplex + * Any other value stands for unknown. + */ + uint8_t duplex; +} __rte_packed; + +/* This is the PCI capability header: */ +struct zxdh_pci_cap { + uint8_t cap_vndr; /* Generic PCI field: PCI_CAP_ID_VNDR */ + uint8_t cap_next; /* Generic PCI field: next ptr. */ + uint8_t cap_len; /* Generic PCI field: capability length */ + uint8_t cfg_type; /* Identifies the structure. */ + uint8_t bar; /* Where to find it. */ + uint8_t padding[3]; /* Pad to full dword. */ + uint32_t offset; /* Offset within bar. */ + uint32_t length; /* Length of the structure, in bytes. */ +}; + +/* Fields in ZXDH_PCI_CAP_COMMON_CFG: */ +struct zxdh_pci_common_cfg { + /* About the whole device. */ + uint32_t device_feature_select; /* read-write */ + uint32_t device_feature; /* read-only */ + uint32_t guest_feature_select; /* read-write */ + uint32_t guest_feature; /* read-write */ + uint16_t msix_config; /* read-write */ + uint16_t num_queues; /* read-only */ + uint8_t device_status; /* read-write */ + uint8_t config_generation; /* read-only */ + + /* About a specific virtqueue. */ + uint16_t queue_select; /* read-write */ + uint16_t queue_size; /* read-write, power of 2. 
*/ + uint16_t queue_msix_vector; /* read-write */ + uint16_t queue_enable; /* read-write */ + uint16_t queue_notify_off; /* read-only */ + uint32_t queue_desc_lo; /* read-write */ + uint32_t queue_desc_hi; /* read-write */ + uint32_t queue_avail_lo; /* read-write */ + uint32_t queue_avail_hi; /* read-write */ + uint32_t queue_used_lo; /* read-write */ + uint32_t queue_used_hi; /* read-write */ +}; + +static inline int32_t vtpci_with_feature(struct zxdh_hw *hw, uint64_t bit) +{ + return (hw->guest_features & (1ULL << bit)) != 0; +} + +struct zxdh_pci_ops { + void (*read_dev_cfg)(struct zxdh_hw *hw, size_t offset, void *dst, int32_t len); + void (*write_dev_cfg)(struct zxdh_hw *hw, size_t offset, const void *src, int32_t len); + + uint8_t (*get_status)(struct zxdh_hw *hw); + void (*set_status)(struct zxdh_hw *hw, uint8_t status); + + uint64_t (*get_features)(struct zxdh_hw *hw); + void (*set_features)(struct zxdh_hw *hw, uint64_t features); +}; + +struct zxdh_hw_internal { + const struct zxdh_pci_ops *vtpci_ops; +}; + +#define VTPCI_OPS(hw) (zxdh_hw_internal[(hw)->port_id].vtpci_ops) + +extern struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; +extern const struct zxdh_pci_ops zxdh_dev_pci_ops; + +void zxdh_vtpci_reset(struct zxdh_hw *hw); +void zxdh_vtpci_read_dev_config(struct zxdh_hw *hw, size_t offset, + void *dst, int32_t length); + +int32_t zxdh_read_pci_caps(struct rte_pci_device *dev, struct zxdh_hw *hw); +int32_t zxdh_get_pci_dev_config(struct zxdh_hw *hw); + +uint16_t zxdh_vtpci_get_features(struct zxdh_hw *hw); + +#ifdef __cplusplus +} +#endif + +#endif /* ZXDH_PCI_H */ diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h new file mode 100644 index 0000000000..d93705ed7b --- /dev/null +++ b/drivers/net/zxdh/zxdh_queue.h @@ -0,0 +1,105 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#ifndef ZXDH_QUEUE_H +#define ZXDH_QUEUE_H + +#include <stdint.h> + +#include <rte_common.h> + +#include "zxdh_ethdev.h" +#include "zxdh_rxtx.h" + +#ifdef __cplusplus +extern "C" { +#endif + +/** ring descriptors: 16 bytes. + * These can chain together via "next". + **/ +struct vring_desc { + uint64_t addr; /* Address (guest-physical). */ + uint32_t len; /* Length. */ + uint16_t flags; /* The flags as indicated above. */ + uint16_t next; /* We chain unused descriptors via this. */ +}; + +struct vring_avail { + uint16_t flags; + uint16_t idx; + uint16_t ring[0]; +}; + +struct vring_packed_desc { + uint64_t addr; + uint32_t len; + uint16_t id; + uint16_t flags; +}; + +struct vring_packed_desc_event { + uint16_t desc_event_off_wrap; + uint16_t desc_event_flags; +}; + +struct vring_packed { + uint32_t num; + struct vring_packed_desc *desc; + struct vring_packed_desc_event *driver; + struct vring_packed_desc_event *device; +}; + +struct vq_desc_extra { + void *cookie; + uint16_t ndescs; + uint16_t next; +}; + +struct virtqueue { + struct zxdh_hw *hw; /**< zxdh_hw structure pointer. 
*/ + struct { + /**< vring keeping descs and events */ + struct vring_packed ring; + uint8_t used_wrap_counter; + uint8_t rsv; + uint16_t cached_flags; /**< cached flags for descs */ + uint16_t event_flags_shadow; + uint16_t rsv1; + } __rte_packed vq_packed; + uint16_t vq_used_cons_idx; /**< last consumed descriptor */ + uint16_t vq_nentries; /**< vring desc numbers */ + uint16_t vq_free_cnt; /**< num of desc available */ + uint16_t vq_avail_idx; /**< sync until needed */ + uint16_t vq_free_thresh; /**< free threshold */ + uint16_t rsv2; + + void *vq_ring_virt_mem; /**< linear address of vring*/ + uint32_t vq_ring_size; + + union { + struct virtnet_rx rxq; + struct virtnet_tx txq; + }; + + /** < physical address of vring, + * or virtual address for virtio_user. + **/ + rte_iova_t vq_ring_mem; + + /** + * Head of the free chain in the descriptor table. If + * there are no free descriptors, this will be set to + * VQ_RING_DESC_CHAIN_END. + **/ + uint16_t vq_desc_head_idx; + uint16_t vq_desc_tail_idx; + uint16_t vq_queue_index; /**< PCI queue index */ + uint16_t offset; /**< relative offset to obtain addr in mbuf */ + uint16_t *notify_addr; + struct rte_mbuf **sw_ring; /**< RX software ring. */ + struct vq_desc_extra vq_descx[0]; +}; + +#endif /* ZXDH_QUEUE_H */ diff --git a/drivers/net/zxdh/zxdh_rxtx.h b/drivers/net/zxdh/zxdh_rxtx.h new file mode 100644 index 0000000000..f12f58d84a --- /dev/null +++ b/drivers/net/zxdh/zxdh_rxtx.h @@ -0,0 +1,51 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#ifndef ZXDH_RXTX_H +#define ZXDH_RXTX_H + +#include <stdint.h> + +#include <rte_common.h> +#include <rte_mbuf_core.h> + +#ifdef __cplusplus +extern "C" { +#endif + +struct virtnet_stats { + uint64_t packets; + uint64_t bytes; + uint64_t errors; + uint64_t multicast; + uint64_t broadcast; + uint64_t truncated_err; + uint64_t size_bins[8]; /* Size bins in array as RFC 2819, undersized [0], 64 [1], etc */ +}; + +struct virtnet_rx { + struct virtqueue *vq; + + /* dummy mbuf, for wraparound when processing RX ring. */ + struct rte_mbuf fake_mbuf; + + uint64_t mbuf_initializer; /* value to init mbufs. */ + struct rte_mempool *mpool; /* mempool for mbuf allocation */ + uint16_t queue_id; /* DPDK queue index. */ + uint16_t port_id; /* Device port identifier. */ + struct virtnet_stats stats; + const struct rte_memzone *mz; /* mem zone to populate RX ring. */ +}; + +struct virtnet_tx { + struct virtqueue *vq; + const struct rte_memzone *virtio_net_hdr_mz; /* memzone to populate hdr. */ + rte_iova_t virtio_net_hdr_mem; /* hdr for each xmit packet */ + uint16_t queue_id; /* DPDK queue index. */ + uint16_t port_id; /* Device port identifier. */ + struct virtnet_stats stats; + const struct rte_memzone *mz; /* mem zone to populate TX ring. */ +}; + +#endif /* ZXDH_RXTX_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 53702 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
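One detail of the patch above worth spelling out is the feature handshake: the 64-bit feature word is exposed through a 32-bit select/data window pair, and the negotiated set is simply the intersection of what the host offers and what the PMD's default mask allows. A minimal sketch under that assumption, with struct feat_win as a simplified hypothetical stand-in for the zxdh_pci_common_cfg registers (the real code goes through rte_read32()/rte_write32()):

  #include <stdint.h>

  struct feat_win {
      volatile uint32_t select;   /* device_feature_select */
      volatile uint32_t data;     /* device_feature */
  };

  /* Read the 64-bit feature word 32 bits at a time, as
   * zxdh_get_features() does. */
  static uint64_t read_features64(struct feat_win *w)
  {
      uint64_t lo, hi;

      w->select = 0;   /* expose the low half */
      lo = w->data;
      w->select = 1;   /* expose the high half */
      hi = w->data;
      return (hi << 32) | lo;
  }

  /* Negotiation then keeps only the common bits, e.g.
   * nego = read_features64(w) & ZXDH_PMD_DEFAULT_GUEST_FEATURES; */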
* Re: [PATCH v7 3/9] net/zxdh: add zxdh device pci init implementation 2024-10-22 12:20 ` [PATCH v7 3/9] net/zxdh: add zxdh device pci init implementation Junlong Wang @ 2024-10-27 16:47 ` Stephen Hemminger 2024-10-27 16:47 ` Stephen Hemminger 1 sibling, 0 replies; 225+ messages in thread From: Stephen Hemminger @ 2024-10-27 16:47 UTC (permalink / raw) To: Junlong Wang; +Cc: dev, thomas, ferruh.yigit, wang.yong19 On Tue, 22 Oct 2024 20:20:36 +0800 Junlong Wang <wang.junlong1@zte.com.cn> wrote: > + > +#ifdef RTE_EXEC_ENV_LINUX > + #include <dirent.h> > + #include <fcntl.h> > +#endif > + Why is this necessary? Driver builds without this ^ permalink raw reply [flat|nested] 225+ messages in thread
* Re: [PATCH v7 3/9] net/zxdh: add zxdh device pci init implementation 2024-10-22 12:20 ` [PATCH v7 3/9] net/zxdh: add zxdh device pci init implementation Junlong Wang 2024-10-27 16:47 ` Stephen Hemminger @ 2024-10-27 16:47 ` Stephen Hemminger 1 sibling, 0 replies; 225+ messages in thread From: Stephen Hemminger @ 2024-10-27 16:47 UTC (permalink / raw) To: Junlong Wang; +Cc: dev, thomas, ferruh.yigit, wang.yong19 On Tue, 22 Oct 2024 20:20:36 +0800 Junlong Wang <wang.junlong1@zte.com.cn> wrote: > Add device pci init implementation, > to obtain PCI capability and read configuration, etc. > > Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> > --- > drivers/net/zxdh/meson.build | 1 + > drivers/net/zxdh/zxdh_ethdev.c | 43 +++++ > drivers/net/zxdh/zxdh_ethdev.h | 22 ++- > drivers/net/zxdh/zxdh_pci.c | 290 +++++++++++++++++++++++++++++++++ > drivers/net/zxdh/zxdh_pci.h | 151 +++++++++++++++++ > drivers/net/zxdh/zxdh_queue.h | 105 ++++++++++++ > drivers/net/zxdh/zxdh_rxtx.h | 51 ++++++ > 7 files changed, 660 insertions(+), 3 deletions(-) > create mode 100644 drivers/net/zxdh/zxdh_pci.c > create mode 100644 drivers/net/zxdh/zxdh_pci.h > create mode 100644 drivers/net/zxdh/zxdh_queue.h > create mode 100644 drivers/net/zxdh/zxdh_rxtx.h DPDK has switched from GNU special zero length arrays to the C99 standard flexible arrays. ### [PATCH] net/zxdh: add zxdh device pci init implementation ERROR:FLEXIBLE_ARRAY: Use C99 flexible arrays - see https://docs.kernel.org/process/deprecated.html#zero-length-and-one-element-arrays #620: FILE: drivers/net/zxdh/zxdh_queue.h:33: + uint16_t ring[0]; +}; ERROR:FLEXIBLE_ARRAY: Use C99 flexible arrays - see https://docs.kernel.org/process/deprecated.html#zero-length-and-one-element-arrays #690: FILE: drivers/net/zxdh/zxdh_queue.h:103: + struct vq_desc_extra vq_descx[0]; +}; total: 2 errors, 0 warnings, 0 checks, 698 lines checked ^ permalink raw reply [flat|nested] 225+ messages in thread
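The fix the checkpatch warning asks for is mechanical: drop the GNU [0] in favor of a C99 flexible array member, which leaves sizeof() and the allocation math untouched. A small sketch for the vring_avail case (avail_alloc is a hypothetical helper, not part of the patch):

  #include <stdlib.h>
  #include <stdint.h>

  struct vring_avail {
      uint16_t flags;
      uint16_t idx;
      uint16_t ring[];  /* was: uint16_t ring[0] */
  };

  /* Header plus n ring entries -- exactly the same math as with the
   * zero-length form, since sizeof() excludes the array either way. */
  static struct vring_avail *avail_alloc(uint16_t n)
  {
      return calloc(1, sizeof(struct vring_avail) + n * sizeof(uint16_t));
  }

The same one-line change applies to vq_descx[0] in struct virtqueue.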
* [PATCH v7 4/9] net/zxdh: add msg chan and msg hwlock init 2024-10-22 12:20 ` [PATCH v7 0/9] net/zxdh: introduce net zxdh driver Junlong Wang ` (2 preceding siblings ...) 2024-10-22 12:20 ` [PATCH v7 3/9] net/zxdh: add zxdh device pci init implementation Junlong Wang @ 2024-10-22 12:20 ` Junlong Wang 2024-10-22 12:20 ` [PATCH v7 5/9] net/zxdh: add msg chan enable implementation Junlong Wang ` (4 subsequent siblings) 8 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-10-22 12:20 UTC (permalink / raw) To: dev; +Cc: thomas, ferruh.yigit, stephen, wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 9059 bytes --] Add msg channel and hwlock init implementation. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/meson.build | 1 + drivers/net/zxdh/zxdh_ethdev.c | 15 ++++ drivers/net/zxdh/zxdh_ethdev.h | 1 + drivers/net/zxdh/zxdh_msg.c | 160 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_msg.h | 67 ++++++++++++++ 5 files changed, 244 insertions(+) create mode 100644 drivers/net/zxdh/zxdh_msg.c create mode 100644 drivers/net/zxdh/zxdh_msg.h diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build index 7db4e7bc71..2e0c8fddae 100644 --- a/drivers/net/zxdh/meson.build +++ b/drivers/net/zxdh/meson.build @@ -16,4 +16,5 @@ endif sources = files( 'zxdh_ethdev.c', 'zxdh_pci.c', + 'zxdh_msg.c', ) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index f34b2af7b3..6278fd61b6 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -9,6 +9,7 @@ #include "zxdh_ethdev.h" #include "zxdh_logs.h" #include "zxdh_pci.h" +#include "zxdh_msg.h" struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; @@ -83,9 +84,23 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) if (ret < 0) goto err_zxdh_init; + ret = zxdh_msg_chan_init(); + if (ret != 0) { + PMD_INIT_LOG(ERR, "Failed to init bar msg chan"); + goto err_zxdh_init; + } + hw->msg_chan_init = 1; + + ret = zxdh_msg_chan_hwlock_init(eth_dev); + if (ret != 0) { + PMD_INIT_LOG(ERR, "zxdh_msg_chan_hwlock_init failed ret %d", ret); + goto err_zxdh_init; + } + return ret; err_zxdh_init: + zxdh_bar_msg_chan_exit(); rte_free(eth_dev->data->mac_addrs); eth_dev->data->mac_addrs = NULL; return ret; diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index e2db2508e2..e937713d71 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -51,6 +51,7 @@ struct zxdh_hw { uint8_t mac_addr[RTE_ETHER_ADDR_LEN]; uint8_t duplex; uint8_t is_pf; + uint8_t msg_chan_init; }; #ifdef __cplusplus diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c new file mode 100644 index 0000000000..51095da5a3 --- /dev/null +++ b/drivers/net/zxdh/zxdh_msg.c @@ -0,0 +1,160 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#include <stdbool.h> + +#include <rte_common.h> +#include <rte_memcpy.h> +#include <rte_spinlock.h> +#include <rte_cycles.h> +#include <rte_malloc.h> + +#include "zxdh_ethdev.h" +#include "zxdh_logs.h" +#include "zxdh_msg.h" + +#define ZXDH_REPS_INFO_FLAG_USABLE 0x00 +#define ZXDH_BAR_SEQID_NUM_MAX 256 + +#define ZXDH_PCIEID_IS_PF_MASK (0x0800) +#define ZXDH_PCIEID_PF_IDX_MASK (0x0700) +#define ZXDH_PCIEID_VF_IDX_MASK (0x00ff) +#define ZXDH_PCIEID_EP_IDX_MASK (0x7000) +/* PCIEID bit field offset */ +#define ZXDH_PCIEID_PF_IDX_OFFSET (8) +#define ZXDH_PCIEID_EP_IDX_OFFSET (12) + +#define ZXDH_MULTIPLY_BY_8(x) ((x) << 3) 
+#define ZXDH_MULTIPLY_BY_32(x) ((x) << 5) +#define ZXDH_MULTIPLY_BY_256(x) ((x) << 8) + +#define ZXDH_MAX_EP_NUM (4) +#define ZXDH_MAX_HARD_SPINLOCK_NUM (511) + +#define ZXDH_BAR0_SPINLOCK_OFFSET (0x4000) +#define ZXDH_FW_SHRD_OFFSET (0x5000) +#define ZXDH_FW_SHRD_INNER_HW_LABEL_PAT (0x800) +#define ZXDH_HW_LABEL_OFFSET \ + (ZXDH_FW_SHRD_OFFSET + ZXDH_FW_SHRD_INNER_HW_LABEL_PAT) + +struct dev_stat { + uint8_t is_mpf_scanned; + uint8_t is_res_init; + int16_t dev_cnt; /* probe cnt */ +}; +struct dev_stat g_dev_stat = {0}; + +struct seqid_item { + void *reps_addr; + uint16_t id; + uint16_t buffer_len; + uint16_t flag; +}; + +struct seqid_ring { + uint16_t cur_id; + rte_spinlock_t lock; + struct seqid_item reps_info_tbl[ZXDH_BAR_SEQID_NUM_MAX]; +}; +struct seqid_ring g_seqid_ring = {0}; + +static uint16_t pcie_id_to_hard_lock(uint16_t src_pcieid, uint8_t dst) +{ + uint16_t lock_id = 0; + uint16_t pf_idx = (src_pcieid & ZXDH_PCIEID_PF_IDX_MASK) >> ZXDH_PCIEID_PF_IDX_OFFSET; + uint16_t ep_idx = (src_pcieid & ZXDH_PCIEID_EP_IDX_MASK) >> ZXDH_PCIEID_EP_IDX_OFFSET; + + switch (dst) { + /* msg to risc */ + case ZXDH_MSG_CHAN_END_RISC: + lock_id = ZXDH_MULTIPLY_BY_8(ep_idx) + pf_idx; + break; + /* msg to pf/vf */ + case ZXDH_MSG_CHAN_END_VF: + case ZXDH_MSG_CHAN_END_PF: + lock_id = ZXDH_MULTIPLY_BY_8(ep_idx) + pf_idx + + ZXDH_MULTIPLY_BY_8(1 + ZXDH_MAX_EP_NUM); + break; + default: + lock_id = 0; + break; + } + if (lock_id >= ZXDH_MAX_HARD_SPINLOCK_NUM) + lock_id = 0; + + return lock_id; +} + +static void label_write(uint64_t label_lock_addr, uint32_t lock_id, uint16_t value) +{ + *(volatile uint16_t *)(label_lock_addr + lock_id * 2) = value; +} + +static void spinlock_write(uint64_t virt_lock_addr, uint32_t lock_id, uint8_t data) +{ + *(volatile uint8_t *)((uint64_t)virt_lock_addr + (uint64_t)lock_id) = data; +} + +static int32_t zxdh_spinlock_unlock(uint32_t virt_lock_id, uint64_t virt_addr, uint64_t label_addr) +{ + label_write((uint64_t)label_addr, virt_lock_id, 0); + spinlock_write(virt_addr, virt_lock_id, 0); + return 0; +} + +/** + * Fun: PF init hard_spinlock addr + */ +static int bar_chan_pf_init_spinlock(uint16_t pcie_id, uint64_t bar_base_addr) +{ + int lock_id = pcie_id_to_hard_lock(pcie_id, ZXDH_MSG_CHAN_END_RISC); + + zxdh_spinlock_unlock(lock_id, bar_base_addr + ZXDH_BAR0_SPINLOCK_OFFSET, + bar_base_addr + ZXDH_HW_LABEL_OFFSET); + lock_id = pcie_id_to_hard_lock(pcie_id, ZXDH_MSG_CHAN_END_VF); + zxdh_spinlock_unlock(lock_id, bar_base_addr + ZXDH_BAR0_SPINLOCK_OFFSET, + bar_base_addr + ZXDH_HW_LABEL_OFFSET); + return 0; +} + +int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + if (!hw->is_pf) + return 0; + return bar_chan_pf_init_spinlock(hw->pcie_id, (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX])); +} + +static rte_spinlock_t chan_lock; +int zxdh_msg_chan_init(void) +{ + uint16_t seq_id = 0; + + g_dev_stat.dev_cnt++; + if (g_dev_stat.is_res_init) + return ZXDH_BAR_MSG_OK; + + rte_spinlock_init(&chan_lock); + g_seqid_ring.cur_id = 0; + rte_spinlock_init(&g_seqid_ring.lock); + + for (seq_id = 0; seq_id < ZXDH_BAR_SEQID_NUM_MAX; seq_id++) { + struct seqid_item *reps_info = &g_seqid_ring.reps_info_tbl[seq_id]; + + reps_info->id = seq_id; + reps_info->flag = ZXDH_REPS_INFO_FLAG_USABLE; + } + g_dev_stat.is_res_init = true; + return ZXDH_BAR_MSG_OK; +} + +int zxdh_bar_msg_chan_exit(void) +{ + if (!g_dev_stat.is_res_init || (--g_dev_stat.dev_cnt > 0)) + return ZXDH_BAR_MSG_OK; + + g_dev_stat.is_res_init = false; + return ZXDH_BAR_MSG_OK; +} 
diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h new file mode 100644 index 0000000000..2caf2ddaea --- /dev/null +++ b/drivers/net/zxdh/zxdh_msg.h @@ -0,0 +1,67 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#ifndef ZXDH_MSG_H +#define ZXDH_MSG_H + +#include <stdint.h> + +#include <ethdev_driver.h> + +#ifdef __cplusplus +extern "C" { +#endif + +#define ZXDH_BAR0_INDEX 0 + +enum DRIVER_TYPE { + ZXDH_MSG_CHAN_END_MPF = 0, + ZXDH_MSG_CHAN_END_PF, + ZXDH_MSG_CHAN_END_VF, + ZXDH_MSG_CHAN_END_RISC, +}; + +enum BAR_MSG_RTN { + ZXDH_BAR_MSG_OK = 0, + ZXDH_BAR_MSG_ERR_MSGID, + ZXDH_BAR_MSG_ERR_NULL, + ZXDH_BAR_MSG_ERR_TYPE, /* Message type exception */ + ZXDH_BAR_MSG_ERR_MODULE, /* Module ID exception */ + ZXDH_BAR_MSG_ERR_BODY_NULL, /* Message body exception */ + ZXDH_BAR_MSG_ERR_LEN, /* Message length exception */ + ZXDH_BAR_MSG_ERR_TIME_OUT, /* Message sending length too long */ + ZXDH_BAR_MSG_ERR_NOT_READY, /* Abnormal message sending conditions*/ + ZXDH_BAR_MEG_ERR_NULL_FUNC, /* Empty receive processing function pointer*/ + ZXDH_BAR_MSG_ERR_REPEAT_REGISTER, /* Module duplicate registration*/ + ZXDH_BAR_MSG_ERR_UNGISTER, /* Repeated deregistration*/ + /** + * The sending interface parameter boundary structure pointer is empty + */ + ZXDH_BAR_MSG_ERR_NULL_PARA, + ZXDH_BAR_MSG_ERR_REPSBUFF_LEN, /* The length of reps_buff is too short*/ + /** + * Unable to find the corresponding message processing function for this module + */ + ZXDH_BAR_MSG_ERR_MODULE_NOEXIST, + /** + * The virtual address in the parameters passed in by the sending interface is empty + */ + ZXDH_BAR_MSG_ERR_VIRTADDR_NULL, + ZXDH_BAR_MSG_ERR_REPLY, /* sync msg resp_error */ + ZXDH_BAR_MSG_ERR_MPF_NOT_SCANNED, + ZXDH_BAR_MSG_ERR_KERNEL_READY, + ZXDH_BAR_MSG_ERR_USR_RET_ERR, + ZXDH_BAR_MSG_ERR_ERR_PCIEID, + ZXDH_BAR_MSG_ERR_SOCKET, /* netlink sockte err */ +}; + +int zxdh_msg_chan_init(void); +int zxdh_bar_msg_chan_exit(void); +int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev); + +#ifdef __cplusplus +} +#endif + +#endif /* ZXDH_MSG_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 17544 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
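The hardware spinlock indexing above is compact enough to deserve a worked example. pcie_id_to_hard_lock() unpacks the EP and PF indices from the PCIe ID bit fields, then folds them into a lock slot, placing PF/VF-channel locks after the 8 * (1 + ZXDH_MAX_EP_NUM) slots reserved for RISC-channel locks. Below is a self-contained sketch of the same arithmetic, using the constants from the patch; the sample PCIe ID is illustrative, not a value from real hardware.

#include <stdint.h>
#include <stdio.h>

/* Bit-field constants as defined in zxdh_msg.c above. */
#define ZXDH_PCIEID_PF_IDX_MASK   (0x0700)
#define ZXDH_PCIEID_EP_IDX_MASK   (0x7000)
#define ZXDH_PCIEID_PF_IDX_OFFSET (8)
#define ZXDH_PCIEID_EP_IDX_OFFSET (12)
#define ZXDH_MAX_EP_NUM           (4)

int main(void)
{
        uint16_t pcieid = 0x2300; /* illustrative: ep_idx = 2, pf_idx = 3 */
        uint16_t pf_idx = (pcieid & ZXDH_PCIEID_PF_IDX_MASK) >> ZXDH_PCIEID_PF_IDX_OFFSET;
        uint16_t ep_idx = (pcieid & ZXDH_PCIEID_EP_IDX_MASK) >> ZXDH_PCIEID_EP_IDX_OFFSET;

        /* msg to RISC: lock_id = ep_idx * 8 + pf_idx */
        uint16_t risc_lock = (uint16_t)((ep_idx << 3) + pf_idx);
        /* msg to PF/VF: same value shifted past the RISC slots */
        uint16_t pfvf_lock = (uint16_t)(risc_lock + ((1 + ZXDH_MAX_EP_NUM) << 3));

        /* prints: risc lock_id=19, pfvf lock_id=59 */
        printf("risc lock_id=%u, pfvf lock_id=%u\n",
               (unsigned int)risc_lock, (unsigned int)pfvf_lock);
        return 0;
}

Once a slot is chosen, zxdh_spinlock_lock() (added in the next patch) busy-polls the lock byte in BAR space every 100 us and gives up after 1000 attempts, so a lost peer cannot wedge the channel forever.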
* [PATCH v7 5/9] net/zxdh: add msg chan enable implementation 2024-10-22 12:20 ` [PATCH v7 0/9] net/zxdh: introduce net zxdh driver Junlong Wang ` (3 preceding siblings ...) 2024-10-22 12:20 ` [PATCH v7 4/9] net/zxdh: add msg chan and msg hwlock init Junlong Wang @ 2024-10-22 12:20 ` Junlong Wang 2024-10-26 17:05 ` Thomas Monjalon 2024-10-22 12:20 ` [PATCH v7 6/9] net/zxdh: add zxdh get device backend infos Junlong Wang ` (3 subsequent siblings) 8 siblings, 1 reply; 225+ messages in thread From: Junlong Wang @ 2024-10-22 12:20 UTC (permalink / raw) To: dev; +Cc: thomas, ferruh.yigit, stephen, wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 28635 bytes --] Add msg chan enable implementation to support send msg to backend(device side) get infos. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/zxdh_ethdev.c | 6 + drivers/net/zxdh/zxdh_ethdev.h | 12 + drivers/net/zxdh/zxdh_msg.c | 659 ++++++++++++++++++++++++++++++++- drivers/net/zxdh/zxdh_msg.h | 129 +++++++ 4 files changed, 794 insertions(+), 12 deletions(-) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 6278fd61b6..2a9c1939c3 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -97,6 +97,12 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) goto err_zxdh_init; } + ret = zxdh_msg_chan_enable(eth_dev); + if (ret != 0) { + PMD_INIT_LOG(ERR, "zxdh_msg_bar_chan_enable failed ret %d", ret); + goto err_zxdh_init; + } + return ret; err_zxdh_init: diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index e937713d71..4922f3d457 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -29,10 +29,22 @@ extern "C" { #define ZXDH_RX_QUEUES_MAX 128U #define ZXDH_TX_QUEUES_MAX 128U +union virport_num { + uint16_t vport; + struct { + uint16_t vfid:8; + uint16_t pfid:3; + uint16_t vf_flag:1; + uint16_t epid:3; + uint16_t direct_flag:1; + }; +}; + struct zxdh_hw { struct rte_eth_dev *eth_dev; struct zxdh_pci_common_cfg *common_cfg; struct zxdh_net_config *dev_cfg; + union virport_num vport; uint64_t bar_addr[ZXDH_NUM_BARS]; uint64_t host_features; diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c index 51095da5a3..0e6074022f 100644 --- a/drivers/net/zxdh/zxdh_msg.c +++ b/drivers/net/zxdh/zxdh_msg.c @@ -15,6 +15,8 @@ #include "zxdh_msg.h" #define ZXDH_REPS_INFO_FLAG_USABLE 0x00 +#define ZXDH_REPS_INFO_FLAG_USED 0xa0 + #define ZXDH_BAR_SEQID_NUM_MAX 256 #define ZXDH_PCIEID_IS_PF_MASK (0x0800) @@ -32,12 +34,85 @@ #define ZXDH_MAX_EP_NUM (4) #define ZXDH_MAX_HARD_SPINLOCK_NUM (511) -#define ZXDH_BAR0_SPINLOCK_OFFSET (0x4000) -#define ZXDH_FW_SHRD_OFFSET (0x5000) -#define ZXDH_FW_SHRD_INNER_HW_LABEL_PAT (0x800) +#define LOCK_PRIMARY_ID_MASK (0x8000) +/* bar offset */ +#define ZXDH_BAR0_CHAN_RISC_OFFSET (0x2000) +#define ZXDH_BAR0_CHAN_PFVF_OFFSET (0x3000) +#define ZXDH_BAR0_SPINLOCK_OFFSET (0x4000) +#define ZXDH_FW_SHRD_OFFSET (0x5000) +#define ZXDH_FW_SHRD_INNER_HW_LABEL_PAT (0x800) #define ZXDH_HW_LABEL_OFFSET \ (ZXDH_FW_SHRD_OFFSET + ZXDH_FW_SHRD_INNER_HW_LABEL_PAT) +#define ZXDH_CHAN_RISC_SPINLOCK_OFFSET \ + (ZXDH_BAR0_SPINLOCK_OFFSET - ZXDH_BAR0_CHAN_RISC_OFFSET) +#define ZXDH_CHAN_PFVF_SPINLOCK_OFFSET \ + (ZXDH_BAR0_SPINLOCK_OFFSET - ZXDH_BAR0_CHAN_PFVF_OFFSET) +#define ZXDH_CHAN_RISC_LABEL_OFFSET \ + (ZXDH_HW_LABEL_OFFSET - ZXDH_BAR0_CHAN_RISC_OFFSET) +#define ZXDH_CHAN_PFVF_LABEL_OFFSET \ + (ZXDH_HW_LABEL_OFFSET - ZXDH_BAR0_CHAN_PFVF_OFFSET) + +#define 
ZXDH_REPS_HEADER_LEN_OFFSET 1 +#define ZXDH_REPS_HEADER_PAYLOAD_OFFSET 4 +#define ZXDH_REPS_HEADER_REPLYED 0xff + +#define ZXDH_BAR_MSG_CHAN_USABLE 0 +#define ZXDH_BAR_MSG_CHAN_USED 1 + +#define ZXDH_BAR_MSG_POL_MASK (0x10) +#define ZXDH_BAR_MSG_POL_OFFSET (4) + +#define ZXDH_BAR_ALIGN_WORD_MASK 0xfffffffc +#define ZXDH_BAR_MSG_VALID_MASK 1 +#define ZXDH_BAR_MSG_VALID_OFFSET 0 + +#define ZXDH_BAR_PF_NUM 7 +#define ZXDH_BAR_VF_NUM 256 +#define ZXDH_BAR_INDEX_PF_TO_VF 0 +#define ZXDH_BAR_INDEX_MPF_TO_MPF 0xff +#define ZXDH_BAR_INDEX_MPF_TO_PFVF 0 +#define ZXDH_BAR_INDEX_PFVF_TO_MPF 0 + +#define ZXDH_MAX_HARD_SPINLOCK_ASK_TIMES (1000) +#define ZXDH_SPINLOCK_POLLING_SPAN_US (100) + +#define ZXDH_BAR_MSG_SRC_NUM 3 +#define ZXDH_BAR_MSG_SRC_MPF 0 +#define ZXDH_BAR_MSG_SRC_PF 1 +#define ZXDH_BAR_MSG_SRC_VF 2 +#define ZXDH_BAR_MSG_SRC_ERR 0xff +#define ZXDH_BAR_MSG_DST_NUM 3 +#define ZXDH_BAR_MSG_DST_RISC 0 +#define ZXDH_BAR_MSG_DST_MPF 2 +#define ZXDH_BAR_MSG_DST_PFVF 1 +#define ZXDH_BAR_MSG_DST_ERR 0xff + +#define ZXDH_LOCK_TYPE_HARD (1) +#define ZXDH_LOCK_TYPE_SOFT (0) +#define ZXDH_BAR_INDEX_TO_RISC 0 + +#define ZXDH_BAR_CHAN_INDEX_SEND 0 +#define ZXDH_BAR_CHAN_INDEX_RECV 1 + +uint8_t subchan_id_tbl[ZXDH_BAR_MSG_SRC_NUM][ZXDH_BAR_MSG_DST_NUM] = { + {ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_SEND}, + {ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_RECV}, + {ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_RECV, ZXDH_BAR_CHAN_INDEX_RECV} +}; + +uint8_t chan_id_tbl[ZXDH_BAR_MSG_SRC_NUM][ZXDH_BAR_MSG_DST_NUM] = { + {ZXDH_BAR_INDEX_TO_RISC, ZXDH_BAR_INDEX_MPF_TO_PFVF, ZXDH_BAR_INDEX_MPF_TO_MPF}, + {ZXDH_BAR_INDEX_TO_RISC, ZXDH_BAR_INDEX_PF_TO_VF, ZXDH_BAR_INDEX_PFVF_TO_MPF}, + {ZXDH_BAR_INDEX_TO_RISC, ZXDH_BAR_INDEX_PF_TO_VF, ZXDH_BAR_INDEX_PFVF_TO_MPF} +}; + +uint8_t lock_type_tbl[ZXDH_BAR_MSG_SRC_NUM][ZXDH_BAR_MSG_DST_NUM] = { + {ZXDH_LOCK_TYPE_HARD, ZXDH_LOCK_TYPE_HARD, ZXDH_LOCK_TYPE_HARD}, + {ZXDH_LOCK_TYPE_SOFT, ZXDH_LOCK_TYPE_SOFT, ZXDH_LOCK_TYPE_HARD}, + {ZXDH_LOCK_TYPE_HARD, ZXDH_LOCK_TYPE_HARD, ZXDH_LOCK_TYPE_HARD} +}; + struct dev_stat { uint8_t is_mpf_scanned; uint8_t is_res_init; @@ -96,6 +171,11 @@ static void spinlock_write(uint64_t virt_lock_addr, uint32_t lock_id, uint8_t da *(volatile uint8_t *)((uint64_t)virt_lock_addr + (uint64_t)lock_id) = data; } +static uint8_t spinlock_read(uint64_t virt_lock_addr, uint32_t lock_id) +{ + return *(volatile uint8_t *)((uint64_t)virt_lock_addr + (uint64_t)lock_id); +} + static int32_t zxdh_spinlock_unlock(uint32_t virt_lock_id, uint64_t virt_addr, uint64_t label_addr) { label_write((uint64_t)label_addr, virt_lock_id, 0); @@ -103,6 +183,28 @@ static int32_t zxdh_spinlock_unlock(uint32_t virt_lock_id, uint64_t virt_addr, u return 0; } +static int32_t zxdh_spinlock_lock(uint32_t virt_lock_id, uint64_t virt_addr, + uint64_t label_addr, uint16_t primary_id) +{ + uint32_t lock_rd_cnt = 0; + + do { + /* read to lock */ + uint8_t spl_val = spinlock_read(virt_addr, virt_lock_id); + + if (spl_val == 0) { + label_write((uint64_t)label_addr, virt_lock_id, primary_id); + break; + } + rte_delay_us_block(ZXDH_SPINLOCK_POLLING_SPAN_US); + lock_rd_cnt++; + } while (lock_rd_cnt < ZXDH_MAX_HARD_SPINLOCK_ASK_TIMES); + if (lock_rd_cnt >= ZXDH_MAX_HARD_SPINLOCK_ASK_TIMES) + return -1; + + return 0; +} + /** * Fun: PF init hard_spinlock addr */ @@ -118,15 +220,6 @@ static int bar_chan_pf_init_spinlock(uint16_t pcie_id, uint64_t bar_base_addr) return 0; } -int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev) -{ - 
struct zxdh_hw *hw = dev->data->dev_private; - - if (!hw->is_pf) - return 0; - return bar_chan_pf_init_spinlock(hw->pcie_id, (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX])); -} - static rte_spinlock_t chan_lock; int zxdh_msg_chan_init(void) { @@ -158,3 +251,545 @@ int zxdh_bar_msg_chan_exit(void) g_dev_stat.is_res_init = false; return ZXDH_BAR_MSG_OK; } + +static int zxdh_bar_chan_msgid_allocate(uint16_t *msgid) +{ + struct seqid_item *seqid_reps_info = NULL; + + rte_spinlock_lock(&g_seqid_ring.lock); + uint16_t g_id = g_seqid_ring.cur_id; + uint16_t count = 0; + int rc = 0; + + do { + count++; + ++g_id; + g_id %= ZXDH_BAR_SEQID_NUM_MAX; + seqid_reps_info = &g_seqid_ring.reps_info_tbl[g_id]; + } while ((seqid_reps_info->flag != ZXDH_REPS_INFO_FLAG_USABLE) && + (count < ZXDH_BAR_SEQID_NUM_MAX)); + + if (count >= ZXDH_BAR_SEQID_NUM_MAX) { + rc = -1; + goto out; + } + seqid_reps_info->flag = ZXDH_REPS_INFO_FLAG_USED; + g_seqid_ring.cur_id = g_id; + *msgid = g_id; + rc = ZXDH_BAR_MSG_OK; + +out: + rte_spinlock_unlock(&g_seqid_ring.lock); + return rc; +} + +static uint16_t zxdh_bar_chan_save_recv_info(struct zxdh_msg_recviver_mem *result, uint16_t *msg_id) +{ + int ret = zxdh_bar_chan_msgid_allocate(msg_id); + + if (ret != ZXDH_BAR_MSG_OK) + return ZXDH_BAR_MSG_ERR_MSGID; + + PMD_MSG_LOG(DEBUG, "allocate msg_id: %u", *msg_id); + struct seqid_item *reps_info = &g_seqid_ring.reps_info_tbl[*msg_id]; + + reps_info->reps_addr = result->recv_buffer; + reps_info->buffer_len = result->buffer_len; + return ZXDH_BAR_MSG_OK; +} + +static uint8_t zxdh_bar_msg_src_index_trans(uint8_t src) +{ + uint8_t src_index = 0; + + switch (src) { + case ZXDH_MSG_CHAN_END_MPF: + src_index = ZXDH_BAR_MSG_SRC_MPF; + break; + case ZXDH_MSG_CHAN_END_PF: + src_index = ZXDH_BAR_MSG_SRC_PF; + break; + case ZXDH_MSG_CHAN_END_VF: + src_index = ZXDH_BAR_MSG_SRC_VF; + break; + default: + src_index = ZXDH_BAR_MSG_SRC_ERR; + break; + } + return src_index; +} + +static uint8_t zxdh_bar_msg_dst_index_trans(uint8_t dst) +{ + uint8_t dst_index = 0; + + switch (dst) { + case ZXDH_MSG_CHAN_END_MPF: + dst_index = ZXDH_BAR_MSG_DST_MPF; + break; + case ZXDH_MSG_CHAN_END_PF: + dst_index = ZXDH_BAR_MSG_DST_PFVF; + break; + case ZXDH_MSG_CHAN_END_VF: + dst_index = ZXDH_BAR_MSG_DST_PFVF; + break; + case ZXDH_MSG_CHAN_END_RISC: + dst_index = ZXDH_BAR_MSG_DST_RISC; + break; + default: + dst_index = ZXDH_BAR_MSG_SRC_ERR; + break; + } + return dst_index; +} + +static int zxdh_bar_chan_send_para_check(struct zxdh_pci_bar_msg *in, + struct zxdh_msg_recviver_mem *result) +{ + uint8_t src_index = 0; + uint8_t dst_index = 0; + + if (in == NULL || result == NULL) { + PMD_MSG_LOG(ERR, "send para ERR: null para."); + return ZXDH_BAR_MSG_ERR_NULL_PARA; + } + src_index = zxdh_bar_msg_src_index_trans(in->src); + dst_index = zxdh_bar_msg_dst_index_trans(in->dst); + + if (src_index == ZXDH_BAR_MSG_SRC_ERR || dst_index == ZXDH_BAR_MSG_DST_ERR) { + PMD_MSG_LOG(ERR, "send para ERR: chan doesn't exist."); + return ZXDH_BAR_MSG_ERR_TYPE; + } + if (in->module_id >= ZXDH_BAR_MSG_MODULE_NUM) { + PMD_MSG_LOG(ERR, "send para ERR: invalid module_id: %d.", in->module_id); + return ZXDH_BAR_MSG_ERR_MODULE; + } + if (in->payload_addr == NULL) { + PMD_MSG_LOG(ERR, "send para ERR: null message."); + return ZXDH_BAR_MSG_ERR_BODY_NULL; + } + if (in->payload_len > ZXDH_BAR_MSG_PAYLOAD_MAX_LEN) { + PMD_MSG_LOG(ERR, "send para ERR: len %d is too long.", in->payload_len); + return ZXDH_BAR_MSG_ERR_LEN; + } + if (in->virt_addr == 0 || result->recv_buffer == NULL) { + PMD_MSG_LOG(ERR, "send 
para ERR: virt_addr or recv_buffer is NULL."); + return ZXDH_BAR_MSG_ERR_VIRTADDR_NULL; + } + if (result->buffer_len < ZXDH_REPS_HEADER_PAYLOAD_OFFSET) + PMD_MSG_LOG(ERR, "recv buffer len is short than minimal 4 bytes"); + + return ZXDH_BAR_MSG_OK; +} + +static uint64_t zxdh_subchan_addr_cal(uint64_t virt_addr, uint8_t chan_id, uint8_t subchan_id) +{ + return virt_addr + (2 * chan_id + subchan_id) * ZXDH_BAR_MSG_ADDR_CHAN_INTERVAL; +} + +static uint16_t zxdh_bar_chan_subchan_addr_get(struct zxdh_pci_bar_msg *in, uint64_t *subchan_addr) +{ + uint8_t src_index = zxdh_bar_msg_src_index_trans(in->src); + uint8_t dst_index = zxdh_bar_msg_dst_index_trans(in->dst); + uint16_t chan_id = chan_id_tbl[src_index][dst_index]; + uint16_t subchan_id = subchan_id_tbl[src_index][dst_index]; + + *subchan_addr = zxdh_subchan_addr_cal(in->virt_addr, chan_id, subchan_id); + return ZXDH_BAR_MSG_OK; +} + +static int zxdh_bar_hard_lock(uint16_t src_pcieid, uint8_t dst, uint64_t virt_addr) +{ + int ret = 0; + uint16_t lockid = pcie_id_to_hard_lock(src_pcieid, dst); + + PMD_MSG_LOG(DEBUG, "dev pcieid: 0x%x lock, get hardlockid: %u", src_pcieid, lockid); + if (dst == ZXDH_MSG_CHAN_END_RISC) + ret = zxdh_spinlock_lock(lockid, virt_addr + ZXDH_CHAN_RISC_SPINLOCK_OFFSET, + virt_addr + ZXDH_CHAN_RISC_LABEL_OFFSET, + src_pcieid | LOCK_PRIMARY_ID_MASK); + else + ret = zxdh_spinlock_lock(lockid, virt_addr + ZXDH_CHAN_PFVF_SPINLOCK_OFFSET, + virt_addr + ZXDH_CHAN_PFVF_LABEL_OFFSET, + src_pcieid | LOCK_PRIMARY_ID_MASK); + + return ret; +} + +static void zxdh_bar_hard_unlock(uint16_t src_pcieid, uint8_t dst, uint64_t virt_addr) +{ + uint16_t lockid = pcie_id_to_hard_lock(src_pcieid, dst); + + PMD_MSG_LOG(DEBUG, "dev pcieid: 0x%x unlock, get hardlockid: %u", src_pcieid, lockid); + if (dst == ZXDH_MSG_CHAN_END_RISC) + zxdh_spinlock_unlock(lockid, virt_addr + ZXDH_CHAN_RISC_SPINLOCK_OFFSET, + virt_addr + ZXDH_CHAN_RISC_LABEL_OFFSET); + else + zxdh_spinlock_unlock(lockid, virt_addr + ZXDH_CHAN_PFVF_SPINLOCK_OFFSET, + virt_addr + ZXDH_CHAN_PFVF_LABEL_OFFSET); +} + +static int zxdh_bar_chan_lock(uint8_t src, uint8_t dst, uint16_t src_pcieid, uint64_t virt_addr) +{ + int ret = 0; + uint8_t src_index = zxdh_bar_msg_src_index_trans(src); + uint8_t dst_index = zxdh_bar_msg_dst_index_trans(dst); + + if (src_index == ZXDH_BAR_MSG_SRC_ERR || dst_index == ZXDH_BAR_MSG_DST_ERR) { + PMD_MSG_LOG(ERR, "lock ERR: chan doesn't exist."); + return ZXDH_BAR_MSG_ERR_TYPE; + } + + ret = zxdh_bar_hard_lock(src_pcieid, dst, virt_addr); + if (ret != 0) + PMD_MSG_LOG(ERR, "dev: 0x%x failed to lock.", src_pcieid); + + return ret; +} + +static int zxdh_bar_chan_unlock(uint8_t src, uint8_t dst, uint16_t src_pcieid, uint64_t virt_addr) +{ + uint8_t src_index = zxdh_bar_msg_src_index_trans(src); + uint8_t dst_index = zxdh_bar_msg_dst_index_trans(dst); + + if (src_index == ZXDH_BAR_MSG_SRC_ERR || dst_index == ZXDH_BAR_MSG_DST_ERR) { + PMD_MSG_LOG(ERR, "unlock ERR: chan doesn't exist."); + return ZXDH_BAR_MSG_ERR_TYPE; + } + + zxdh_bar_hard_unlock(src_pcieid, dst, virt_addr); + + return ZXDH_BAR_MSG_OK; +} + +static void zxdh_bar_chan_msgid_free(uint16_t msg_id) +{ + struct seqid_item *seqid_reps_info = &g_seqid_ring.reps_info_tbl[msg_id]; + + rte_spinlock_lock(&g_seqid_ring.lock); + seqid_reps_info->flag = ZXDH_REPS_INFO_FLAG_USABLE; + PMD_MSG_LOG(DEBUG, "free msg_id: %u", msg_id); + rte_spinlock_unlock(&g_seqid_ring.lock); +} + +static int zxdh_bar_chan_reg_write(uint64_t subchan_addr, uint32_t offset, uint32_t data) +{ + uint32_t algin_offset = (offset & 
ZXDH_BAR_ALIGN_WORD_MASK); + + if (unlikely(algin_offset >= ZXDH_BAR_MSG_ADDR_CHAN_INTERVAL)) { + PMD_MSG_LOG(ERR, "algin_offset exceeds channel size!"); + return -1; + } + *(uint32_t *)(subchan_addr + algin_offset) = data; + return 0; +} + +static int zxdh_bar_chan_reg_read(uint64_t subchan_addr, uint32_t offset, uint32_t *pdata) +{ + uint32_t algin_offset = (offset & ZXDH_BAR_ALIGN_WORD_MASK); + + if (unlikely(algin_offset >= ZXDH_BAR_MSG_ADDR_CHAN_INTERVAL)) { + PMD_MSG_LOG(ERR, "algin_offset exceeds channel size!"); + return -1; + } + *pdata = *(uint32_t *)(subchan_addr + algin_offset); + return 0; +} + +static uint16_t zxdh_bar_chan_msg_header_set(uint64_t subchan_addr, + struct bar_msg_header *msg_header) +{ + uint32_t *data = (uint32_t *)msg_header; + uint16_t idx; + + for (idx = 0; idx < (ZXDH_BAR_MSG_PLAYLOAD_OFFSET >> 2); idx++) + zxdh_bar_chan_reg_write(subchan_addr, idx * 4, *(data + idx)); + + return ZXDH_BAR_MSG_OK; +} + +static uint16_t zxdh_bar_chan_msg_header_get(uint64_t subchan_addr, + struct bar_msg_header *msg_header) +{ + uint32_t *data = (uint32_t *)msg_header; + uint16_t idx; + + for (idx = 0; idx < (ZXDH_BAR_MSG_PLAYLOAD_OFFSET >> 2); idx++) + zxdh_bar_chan_reg_read(subchan_addr, idx * 4, data + idx); + + return ZXDH_BAR_MSG_OK; +} + +static uint16_t zxdh_bar_chan_msg_payload_set(uint64_t subchan_addr, uint8_t *msg, uint16_t len) +{ + uint32_t *data = (uint32_t *)msg; + uint32_t count = (len >> 2); + uint32_t ix; + + for (ix = 0; ix < count; ix++) + zxdh_bar_chan_reg_write(subchan_addr, 4 * ix + + ZXDH_BAR_MSG_PLAYLOAD_OFFSET, *(data + ix)); + + uint32_t remain = (len & 0x3); + + if (remain) { + uint32_t remain_data = 0; + + for (ix = 0; ix < remain; ix++) + remain_data |= *((uint8_t *)(msg + len - remain + ix)) << (8 * ix); + + zxdh_bar_chan_reg_write(subchan_addr, 4 * count + + ZXDH_BAR_MSG_PLAYLOAD_OFFSET, remain_data); + } + return ZXDH_BAR_MSG_OK; +} + +static uint16_t zxdh_bar_chan_msg_payload_get(uint64_t subchan_addr, uint8_t *msg, uint16_t len) +{ + uint32_t *data = (uint32_t *)msg; + uint32_t count = (len >> 2); + uint32_t ix; + + for (ix = 0; ix < count; ix++) + zxdh_bar_chan_reg_read(subchan_addr, 4 * ix + + ZXDH_BAR_MSG_PLAYLOAD_OFFSET, (data + ix)); + + uint32_t remain = (len & 0x3); + + if (remain) { + uint32_t remain_data = 0; + + zxdh_bar_chan_reg_read(subchan_addr, 4 * count + + ZXDH_BAR_MSG_PLAYLOAD_OFFSET, &remain_data); + for (ix = 0; ix < remain; ix++) + *((uint8_t *)(msg + (len - remain + ix))) = remain_data >> (8 * ix); + } + return ZXDH_BAR_MSG_OK; +} + +static uint16_t zxdh_bar_chan_msg_valid_set(uint64_t subchan_addr, uint8_t valid_label) +{ + uint32_t data; + + zxdh_bar_chan_reg_read(subchan_addr, ZXDH_BAR_MSG_VALID_OFFSET, &data); + data &= (~ZXDH_BAR_MSG_VALID_MASK); + data |= (uint32_t)valid_label; + zxdh_bar_chan_reg_write(subchan_addr, ZXDH_BAR_MSG_VALID_OFFSET, data); + return ZXDH_BAR_MSG_OK; +} + +static uint8_t temp_msg[ZXDH_BAR_MSG_ADDR_CHAN_INTERVAL]; +static uint16_t zxdh_bar_chan_msg_send(uint64_t subchan_addr, void *payload_addr, + uint16_t payload_len, struct bar_msg_header *msg_header) +{ + uint16_t ret = 0; + ret = zxdh_bar_chan_msg_header_set(subchan_addr, msg_header); + + ret = zxdh_bar_chan_msg_header_get(subchan_addr, + (struct bar_msg_header *)temp_msg); + + ret = zxdh_bar_chan_msg_payload_set(subchan_addr, + (uint8_t *)(payload_addr), payload_len); + + ret = zxdh_bar_chan_msg_payload_get(subchan_addr, + temp_msg, payload_len); + + ret = zxdh_bar_chan_msg_valid_set(subchan_addr, ZXDH_BAR_MSG_CHAN_USED); + return 
ret; +} + +static uint16_t zxdh_bar_msg_valid_stat_get(uint64_t subchan_addr) +{ + uint32_t data; + + zxdh_bar_chan_reg_read(subchan_addr, ZXDH_BAR_MSG_VALID_OFFSET, &data); + if (ZXDH_BAR_MSG_CHAN_USABLE == (data & ZXDH_BAR_MSG_VALID_MASK)) + return ZXDH_BAR_MSG_CHAN_USABLE; + + return ZXDH_BAR_MSG_CHAN_USED; +} + +static uint16_t zxdh_bar_chan_msg_poltag_set(uint64_t subchan_addr, uint8_t label) +{ + uint32_t data; + + zxdh_bar_chan_reg_read(subchan_addr, ZXDH_BAR_MSG_VALID_OFFSET, &data); + data &= (~(uint32_t)ZXDH_BAR_MSG_POL_MASK); + data |= ((uint32_t)label << ZXDH_BAR_MSG_POL_OFFSET); + zxdh_bar_chan_reg_write(subchan_addr, ZXDH_BAR_MSG_VALID_OFFSET, data); + return ZXDH_BAR_MSG_OK; +} + +static uint16_t zxdh_bar_chan_sync_msg_reps_get(uint64_t subchan_addr, + uint64_t recv_buffer, uint16_t buffer_len) +{ + struct bar_msg_header msg_header = {0}; + uint16_t msg_id = 0; + uint16_t msg_len = 0; + + zxdh_bar_chan_msg_header_get(subchan_addr, &msg_header); + msg_id = msg_header.msg_id; + struct seqid_item *reps_info = &g_seqid_ring.reps_info_tbl[msg_id]; + + if (reps_info->flag != ZXDH_REPS_INFO_FLAG_USED) { + PMD_MSG_LOG(ERR, "msg_id %u unused", msg_id); + return ZXDH_BAR_MSG_ERR_REPLY; + } + msg_len = msg_header.len; + + if (msg_len > buffer_len - 4) { + PMD_MSG_LOG(ERR, "recv buffer len is: %u, but reply msg len is: %u", + buffer_len, msg_len + 4); + return ZXDH_BAR_MSG_ERR_REPSBUFF_LEN; + } + uint8_t *recv_msg = (uint8_t *)recv_buffer; + + zxdh_bar_chan_msg_payload_get(subchan_addr, + recv_msg + ZXDH_REPS_HEADER_PAYLOAD_OFFSET, msg_len); + *(uint16_t *)(recv_msg + ZXDH_REPS_HEADER_LEN_OFFSET) = msg_len; + *recv_msg = ZXDH_REPS_HEADER_REPLYED; /* set reps's valid */ + return ZXDH_BAR_MSG_OK; +} + +int zxdh_bar_chan_sync_msg_send(struct zxdh_pci_bar_msg *in, struct zxdh_msg_recviver_mem *result) +{ + struct bar_msg_header msg_header = {0}; + uint16_t seq_id = 0; + uint64_t subchan_addr = 0; + uint32_t time_out_cnt = 0; + uint16_t valid = 0; + int ret = 0; + + ret = zxdh_bar_chan_send_para_check(in, result); + if (ret != ZXDH_BAR_MSG_OK) + goto exit; + + ret = zxdh_bar_chan_save_recv_info(result, &seq_id); + if (ret != ZXDH_BAR_MSG_OK) + goto exit; + + zxdh_bar_chan_subchan_addr_get(in, &subchan_addr); + + msg_header.sync = ZXDH_BAR_CHAN_MSG_SYNC; + msg_header.emec = in->emec; + msg_header.usr = 0; + msg_header.rsv = 0; + msg_header.module_id = in->module_id; + msg_header.len = in->payload_len; + msg_header.msg_id = seq_id; + msg_header.src_pcieid = in->src_pcieid; + msg_header.dst_pcieid = in->dst_pcieid; + + ret = zxdh_bar_chan_lock(in->src, in->dst, in->src_pcieid, in->virt_addr); + if (ret != ZXDH_BAR_MSG_OK) { + zxdh_bar_chan_msgid_free(seq_id); + goto exit; + } + zxdh_bar_chan_msg_send(subchan_addr, in->payload_addr, in->payload_len, &msg_header); + + do { + rte_delay_us_block(ZXDH_BAR_MSG_POLLING_SPAN); + valid = zxdh_bar_msg_valid_stat_get(subchan_addr); + ++time_out_cnt; + } while ((time_out_cnt < ZXDH_BAR_MSG_TIMEOUT_TH) && (valid == ZXDH_BAR_MSG_CHAN_USED)); + + if (time_out_cnt == ZXDH_BAR_MSG_TIMEOUT_TH && valid != ZXDH_BAR_MSG_CHAN_USABLE) { + zxdh_bar_chan_msg_valid_set(subchan_addr, ZXDH_BAR_MSG_CHAN_USABLE); + zxdh_bar_chan_msg_poltag_set(subchan_addr, 0); + PMD_MSG_LOG(ERR, "BAR MSG ERR: chan type time out."); + ret = ZXDH_BAR_MSG_ERR_TIME_OUT; + } else { + ret = zxdh_bar_chan_sync_msg_reps_get(subchan_addr, + (uint64_t)result->recv_buffer, result->buffer_len); + } + zxdh_bar_chan_msgid_free(seq_id); + zxdh_bar_chan_unlock(in->src, in->dst, in->src_pcieid, 
in->virt_addr); + +exit: + return ret; +} + +static int bar_get_sum(uint8_t *ptr, uint8_t len) +{ + uint64_t sum = 0; + int idx; + + for (idx = 0; idx < len; idx++) + sum += *(ptr + idx); + + return (uint16_t)sum; +} + +static int zxdh_bar_chan_enable(struct msix_para *para, uint16_t *vport) +{ + struct bar_recv_msg recv_msg = {0}; + int ret = 0; + int check_token = 0; + int sum_res = 0; + + if (!para) + return ZXDH_BAR_MSG_ERR_NULL; + + struct msix_msg msix_msg = { + .pcie_id = para->pcie_id, + .vector_risc = para->vector_risc, + .vector_pfvf = para->vector_pfvf, + .vector_mpf = para->vector_mpf, + }; + struct zxdh_pci_bar_msg in = { + .virt_addr = para->virt_addr, + .payload_addr = &msix_msg, + .payload_len = sizeof(msix_msg), + .emec = 0, + .src = para->driver_type, + .dst = ZXDH_MSG_CHAN_END_RISC, + .module_id = ZXDH_BAR_MODULE_MISX, + .src_pcieid = para->pcie_id, + .dst_pcieid = 0, + .usr = 0, + }; + + struct zxdh_msg_recviver_mem result = { + .recv_buffer = &recv_msg, + .buffer_len = sizeof(recv_msg), + }; + + ret = zxdh_bar_chan_sync_msg_send(&in, &result); + if (ret != ZXDH_BAR_MSG_OK) + return -ret; + + check_token = recv_msg.msix_reps.check; + sum_res = bar_get_sum((uint8_t *)&msix_msg, sizeof(msix_msg)); + + if (check_token != sum_res) { + PMD_MSG_LOG(ERR, "expect token: 0x%x, get token: 0x%x.", sum_res, check_token); + return ZXDH_BAR_MSG_ERR_REPLY; + } + *vport = recv_msg.msix_reps.vport; + PMD_MSG_LOG(DEBUG, "vport of pcieid: 0x%x get success.", para->pcie_id); + return ZXDH_BAR_MSG_OK; +} + +int zxdh_msg_chan_enable(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + struct msix_para misx_info = { + .vector_risc = ZXDH_MSIX_FROM_RISCV, + .vector_pfvf = ZXDH_MSIX_FROM_PFVF, + .vector_mpf = ZXDH_MSIX_FROM_MPF, + .pcie_id = hw->pcie_id, + .driver_type = hw->is_pf ? 
ZXDH_MSG_CHAN_END_PF : ZXDH_MSG_CHAN_END_VF, + .virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_CTRLCH_OFFSET), + }; + + return zxdh_bar_chan_enable(&misx_info, &hw->vport.vport); +} + +int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + if (!hw->is_pf) + return 0; + return bar_chan_pf_init_spinlock(hw->pcie_id, (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX])); +} diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index 2caf2ddaea..aa91f499ef 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -14,6 +14,21 @@ extern "C" { #endif #define ZXDH_BAR0_INDEX 0 +#define ZXDH_CTRLCH_OFFSET (0x2000) + +#define ZXDH_MSIX_INTR_MSG_VEC_BASE 1 + +#define ZXDH_BAR_MSG_POLLING_SPAN 100 +#define ZXDH_BAR_MSG_POLL_CNT_PER_MS (1 * 1000 / ZXDH_BAR_MSG_POLLING_SPAN) +#define ZXDH_BAR_MSG_POLL_CNT_PER_S (1 * 1000 * 1000 / ZXDH_BAR_MSG_POLLING_SPAN) +#define ZXDH_BAR_MSG_TIMEOUT_TH (10 * 1000 * 1000 / ZXDH_BAR_MSG_POLLING_SPAN) + +#define ZXDH_BAR_CHAN_MSG_SYNC 0 + +#define ZXDH_BAR_MSG_ADDR_CHAN_INTERVAL (2 * 1024) /* channel size */ +#define ZXDH_BAR_MSG_PLAYLOAD_OFFSET (sizeof(struct bar_msg_header)) +#define ZXDH_BAR_MSG_PAYLOAD_MAX_LEN \ + (ZXDH_BAR_MSG_ADDR_CHAN_INTERVAL - sizeof(struct bar_msg_header)) enum DRIVER_TYPE { ZXDH_MSG_CHAN_END_MPF = 0, @@ -22,6 +37,13 @@ enum DRIVER_TYPE { ZXDH_MSG_CHAN_END_RISC, }; +enum MSG_VEC { + ZXDH_MSIX_FROM_PFVF = ZXDH_MSIX_INTR_MSG_VEC_BASE, + ZXDH_MSIX_FROM_MPF, + ZXDH_MSIX_FROM_RISCV, + ZXDH_MSG_VEC_NUM, +}; + enum BAR_MSG_RTN { ZXDH_BAR_MSG_OK = 0, ZXDH_BAR_MSG_ERR_MSGID, @@ -56,10 +78,117 @@ enum BAR_MSG_RTN { ZXDH_BAR_MSG_ERR_SOCKET, /* netlink sockte err */ }; +enum bar_module_id { + ZXDH_BAR_MODULE_DBG = 0, /* 0: debug */ + ZXDH_BAR_MODULE_TBL, /* 1: resource table */ + ZXDH_BAR_MODULE_MISX, /* 2: config msix */ + ZXDH_BAR_MODULE_SDA, /* 3: */ + ZXDH_BAR_MODULE_RDMA, /* 4: */ + ZXDH_BAR_MODULE_DEMO, /* 5: channel test */ + ZXDH_BAR_MODULE_SMMU, /* 6: */ + ZXDH_BAR_MODULE_MAC, /* 7: mac rx/tx stats */ + ZXDH_BAR_MODULE_VDPA, /* 8: vdpa live migration */ + ZXDH_BAR_MODULE_VQM, /* 9: vqm live migration */ + ZXDH_BAR_MODULE_NP, /* 10: vf msg callback np */ + ZXDH_BAR_MODULE_VPORT, /* 11: get vport */ + ZXDH_BAR_MODULE_BDF, /* 12: get bdf */ + ZXDH_BAR_MODULE_RISC_READY, /* 13: */ + ZXDH_BAR_MODULE_REVERSE, /* 14: byte stream reverse */ + ZXDH_BAR_MDOULE_NVME, /* 15: */ + ZXDH_BAR_MDOULE_NPSDK, /* 16: */ + ZXDH_BAR_MODULE_NP_TODO, /* 17: */ + ZXDH_MODULE_BAR_MSG_TO_PF, /* 18: */ + ZXDH_MODULE_BAR_MSG_TO_VF, /* 19: */ + + ZXDH_MODULE_FLASH = 32, + ZXDH_BAR_MODULE_OFFSET_GET = 33, + ZXDH_BAR_EVENT_OVS_WITH_VCB = 36, + + ZXDH_BAR_MSG_MODULE_NUM = 100, +}; + +struct msix_para { + uint16_t pcie_id; + uint16_t vector_risc; + uint16_t vector_pfvf; + uint16_t vector_mpf; + uint64_t virt_addr; + uint16_t driver_type; /* refer to DRIVER_TYPE */ +}; + +struct msix_msg { + uint16_t pcie_id; + uint16_t vector_risc; + uint16_t vector_pfvf; + uint16_t vector_mpf; +}; + +struct zxdh_pci_bar_msg { + uint64_t virt_addr; /* bar addr */ + void *payload_addr; + uint16_t payload_len; + uint16_t emec; + uint16_t src; /* refer to BAR_DRIVER_TYPE */ + uint16_t dst; /* refer to BAR_DRIVER_TYPE */ + uint16_t module_id; + uint16_t src_pcieid; + uint16_t dst_pcieid; + uint16_t usr; +}; + +struct bar_msix_reps { + uint16_t pcie_id; + uint16_t check; + uint16_t vport; + uint16_t rsv; +} __rte_packed; + +struct bar_offset_reps { + uint16_t check; + uint16_t rsv; + uint32_t offset; + 
uint32_t length; +} __rte_packed; + +struct bar_recv_msg { + uint8_t reps_ok; + uint16_t reps_len; + uint8_t rsv; + /* */ + union { + struct bar_msix_reps msix_reps; + struct bar_offset_reps offset_reps; + } __rte_packed; +} __rte_packed; + +struct zxdh_msg_recviver_mem { + void *recv_buffer; /* first 4B is head, followed by payload */ + uint64_t buffer_len; +}; + +struct bar_msg_header { + uint8_t valid : 1; /* used by __bar_chan_msg_valid_set/get */ + uint8_t sync : 1; + uint8_t emec : 1; /* emergency */ + uint8_t ack : 1; /* ack msg */ + uint8_t poll : 1; + uint8_t usr : 1; + uint8_t rsv; + uint16_t module_id; + uint16_t len; + uint16_t msg_id; + uint16_t src_pcieid; + uint16_t dst_pcieid; /* used in PF-->VF */ +}; + int zxdh_msg_chan_init(void); int zxdh_bar_msg_chan_exit(void); int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev); +int zxdh_msg_chan_enable(struct rte_eth_dev *dev); +int zxdh_bar_chan_sync_msg_send(struct zxdh_pci_bar_msg *in, + struct zxdh_msg_recviver_mem *result); + #ifdef __cplusplus } #endif -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 60717 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
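The calling convention for zxdh_bar_chan_sync_msg_send() is easiest to see in isolation: the caller fills a zxdh_pci_bar_msg describing the endpoints and payload, supplies a zxdh_msg_recviver_mem for the reply, and checks the return code; the function itself handles sequence-id allocation, hardware locking, the 32-bit register writes, and polling for completion. Below is a hedged sketch of a caller, modeled on zxdh_bar_chan_enable() above; my_req, the buffer size, and the choice of ZXDH_BAR_MODULE_DEMO are illustrative, not part of the driver.

/* Hypothetical request payload, for illustration only. */
struct my_req {
        uint16_t field_a;
        uint16_t field_b;
};

static int send_example(struct zxdh_hw *hw)
{
        struct my_req req = { .field_a = 1, .field_b = 2 };
        uint8_t rsp_buf[64] = {0}; /* reply: 4-byte header, then payload */

        struct zxdh_pci_bar_msg in = {
                .virt_addr = hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_CTRLCH_OFFSET,
                .payload_addr = &req,
                .payload_len = sizeof(req),
                .src = hw->is_pf ? ZXDH_MSG_CHAN_END_PF : ZXDH_MSG_CHAN_END_VF,
                .dst = ZXDH_MSG_CHAN_END_RISC,
                .module_id = ZXDH_BAR_MODULE_DEMO, /* the "channel test" module */
                .src_pcieid = hw->pcie_id,
        };
        struct zxdh_msg_recviver_mem result = {
                .recv_buffer = rsp_buf,
                .buffer_len = sizeof(rsp_buf),
        };

        if (zxdh_bar_chan_sync_msg_send(&in, &result) != ZXDH_BAR_MSG_OK)
                return -1;

        /* On success rsp_buf[0] is 0xff (ZXDH_REPS_HEADER_REPLYED), the
         * reply length sits at offset 1, and the payload starts at offset 4,
         * matching zxdh_bar_chan_sync_msg_reps_get() above. */
        return 0;
}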
* Re: [PATCH v7 5/9] net/zxdh: add msg chan enable implementation 2024-10-22 12:20 ` [PATCH v7 5/9] net/zxdh: add msg chan enable implementation Junlong Wang @ 2024-10-26 17:05 ` Thomas Monjalon 0 siblings, 0 replies; 225+ messages in thread From: Thomas Monjalon @ 2024-10-26 17:05 UTC (permalink / raw) To: Junlong Wang; +Cc: dev, ferruh.yigit, stephen, wang.yong19 22/10/2024 14:20, Junlong Wang: > +enum MSG_VEC { > + ZXDH_MSIX_FROM_PFVF = ZXDH_MSIX_INTR_MSG_VEC_BASE, > + ZXDH_MSIX_FROM_MPF, > + ZXDH_MSIX_FROM_RISCV, > + ZXDH_MSG_VEC_NUM, > +}; > + > enum BAR_MSG_RTN { > ZXDH_BAR_MSG_OK = 0, > ZXDH_BAR_MSG_ERR_MSGID, Again these enums are uppercased and not prefixed. Please check all other structs which are not prefixed. ^ permalink raw reply [flat|nested] 225+ messages in thread
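Concretely, the rename being requested is mechanical: the enumerators already carry the ZXDH_ prefix, so only the tags (MSG_VEC, BAR_MSG_RTN, DRIVER_TYPE and friends) need the driver prefix and, per DPDK convention, a lowercase spelling. A sketch for the quoted enum; the new tag name is illustrative, not taken from a later revision of the series.

/* before */
enum MSG_VEC {
        ZXDH_MSIX_FROM_PFVF = ZXDH_MSIX_INTR_MSG_VEC_BASE,
        ZXDH_MSIX_FROM_MPF,
        ZXDH_MSIX_FROM_RISCV,
        ZXDH_MSG_VEC_NUM,
};

/* after: tag gains the driver prefix and drops the all-caps style */
enum zxdh_msg_vec {
        ZXDH_MSIX_FROM_PFVF = ZXDH_MSIX_INTR_MSG_VEC_BASE,
        ZXDH_MSIX_FROM_MPF,
        ZXDH_MSIX_FROM_RISCV,
        ZXDH_MSG_VEC_NUM,
};

Since an enum tag only ever appears in declarations, the change is a find-and-replace across the driver with no ABI impact.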
* [PATCH v7 6/9] net/zxdh: add zxdh get device backend infos 2024-10-22 12:20 ` [PATCH v7 0/9] net/zxdh: introduce net zxdh driver Junlong Wang ` (4 preceding siblings ...) 2024-10-22 12:20 ` [PATCH v7 5/9] net/zxdh: add msg chan enable implementation Junlong Wang @ 2024-10-22 12:20 ` Junlong Wang 2024-10-22 12:20 ` [PATCH v7 7/9] net/zxdh: add configure zxdh intr implementation Junlong Wang ` (2 subsequent siblings) 8 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-10-22 12:20 UTC (permalink / raw) To: dev; +Cc: thomas, ferruh.yigit, stephen, wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 13359 bytes --] Add zxdh get device backend infos, use msg chan to send msg get. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/meson.build | 1 + drivers/net/zxdh/zxdh_common.c | 249 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_common.h | 30 ++++ drivers/net/zxdh/zxdh_ethdev.c | 35 +++++ drivers/net/zxdh/zxdh_ethdev.h | 5 + drivers/net/zxdh/zxdh_msg.h | 24 +++- drivers/net/zxdh/zxdh_queue.h | 4 + drivers/net/zxdh/zxdh_rxtx.h | 4 + 8 files changed, 351 insertions(+), 1 deletion(-) create mode 100644 drivers/net/zxdh/zxdh_common.c create mode 100644 drivers/net/zxdh/zxdh_common.h diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build index 2e0c8fddae..a16db47f89 100644 --- a/drivers/net/zxdh/meson.build +++ b/drivers/net/zxdh/meson.build @@ -17,4 +17,5 @@ sources = files( 'zxdh_ethdev.c', 'zxdh_pci.c', 'zxdh_msg.c', + 'zxdh_common.c', ) diff --git a/drivers/net/zxdh/zxdh_common.c b/drivers/net/zxdh/zxdh_common.c new file mode 100644 index 0000000000..21a0cd72cf --- /dev/null +++ b/drivers/net/zxdh/zxdh_common.c @@ -0,0 +1,249 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#include <stdint.h> +#include <string.h> + +#include <ethdev_driver.h> +#include <rte_malloc.h> +#include <rte_memcpy.h> + +#include "zxdh_ethdev.h" +#include "zxdh_logs.h" +#include "zxdh_msg.h" +#include "zxdh_common.h" + +#define ZXDH_MSG_RSP_SIZE_MAX 512 + +#define ZXDH_COMMON_TABLE_READ 0 +#define ZXDH_COMMON_TABLE_WRITE 1 + +#define ZXDH_COMMON_FIELD_PHYPORT 6 + +#define ZXDH_RSC_TBL_CONTENT_LEN_MAX (257 * 2) + +#define ZXDH_REPS_HEADER_OFFSET 4 +#define ZXDH_TBL_MSG_PRO_SUCCESS 0xaa + +struct zxdh_common_msg { + uint8_t type; /* 0:read table 1:write table */ + uint8_t field; + uint16_t pcie_id; + uint16_t slen; /* Data length for write table */ + uint16_t reserved; +} __rte_packed; + +struct zxdh_common_rsp_hdr { + uint8_t rsp_status; + uint16_t rsp_len; + uint8_t reserved; + uint8_t payload_status; + uint8_t rsv; + uint16_t payload_len; +} __rte_packed; + +struct tbl_msg_header { + uint8_t type; /* r/w */ + uint8_t field; + uint16_t pcieid; + uint16_t slen; + uint16_t rsv; +}; +struct tbl_msg_reps_header { + uint8_t check; + uint8_t rsv; + uint16_t len; +}; + +static int32_t zxdh_fill_common_msg(struct zxdh_hw *hw, + struct zxdh_pci_bar_msg *desc, + uint8_t type, + uint8_t field, + void *buff, + uint16_t buff_size) +{ + uint64_t msg_len = sizeof(struct zxdh_common_msg) + buff_size; + + desc->payload_addr = rte_zmalloc(NULL, msg_len, 0); + if (unlikely(desc->payload_addr == NULL)) { + PMD_DRV_LOG(ERR, "Failed to allocate msg_data"); + return -ENOMEM; + } + memset(desc->payload_addr, 0, msg_len); + desc->payload_len = msg_len; + struct zxdh_common_msg *msg_data = (struct zxdh_common_msg *)desc->payload_addr; + + msg_data->type = type; + msg_data->field = field; + msg_data->pcie_id = 
hw->pcie_id; + msg_data->slen = buff_size; + if (buff_size != 0) + rte_memcpy(msg_data + 1, buff, buff_size); + + return 0; +} + +static int32_t zxdh_send_command(struct zxdh_hw *hw, + struct zxdh_pci_bar_msg *desc, + enum bar_module_id module_id, + struct zxdh_msg_recviver_mem *msg_rsp) +{ + desc->virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_CTRLCH_OFFSET); + desc->src = hw->is_pf ? ZXDH_MSG_CHAN_END_PF : ZXDH_MSG_CHAN_END_VF; + desc->dst = ZXDH_MSG_CHAN_END_RISC; + desc->module_id = module_id; + desc->src_pcieid = hw->pcie_id; + + msg_rsp->buffer_len = ZXDH_MSG_RSP_SIZE_MAX; + msg_rsp->recv_buffer = rte_zmalloc(NULL, msg_rsp->buffer_len, 0); + if (unlikely(msg_rsp->recv_buffer == NULL)) { + PMD_DRV_LOG(ERR, "Failed to allocate messages response"); + return -ENOMEM; + } + + if (zxdh_bar_chan_sync_msg_send(desc, msg_rsp) != ZXDH_BAR_MSG_OK) { + PMD_DRV_LOG(ERR, "Failed to send sync messages or receive response"); + rte_free(msg_rsp->recv_buffer); + return -1; + } + + return 0; +} + +static int32_t zxdh_common_rsp_check(struct zxdh_msg_recviver_mem *msg_rsp, + void *buff, uint16_t len) +{ + struct zxdh_common_rsp_hdr *rsp_hdr = (struct zxdh_common_rsp_hdr *)msg_rsp->recv_buffer; + + if (rsp_hdr->payload_status != 0xaa || rsp_hdr->payload_len != len) { + PMD_DRV_LOG(ERR, "Common response is invalid, status:0x%x rsp_len:%d", + rsp_hdr->payload_status, rsp_hdr->payload_len); + return -1; + } + if (len != 0) + rte_memcpy(buff, rsp_hdr + 1, len); + + return 0; +} + +static int32_t zxdh_common_table_read(struct zxdh_hw *hw, uint8_t field, + void *buff, uint16_t buff_size) +{ + struct zxdh_msg_recviver_mem msg_rsp; + struct zxdh_pci_bar_msg desc; + int32_t ret = 0; + + if (!hw->msg_chan_init) { + PMD_DRV_LOG(ERR, "Bar messages channel not initialized"); + return -1; + } + + ret = zxdh_fill_common_msg(hw, &desc, ZXDH_COMMON_TABLE_READ, field, NULL, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to fill common msg"); + return ret; + } + + ret = zxdh_send_command(hw, &desc, ZXDH_BAR_MODULE_TBL, &msg_rsp); + if (ret != 0) + goto free_msg_data; + + ret = zxdh_common_rsp_check(&msg_rsp, buff, buff_size); + if (ret != 0) + goto free_rsp_data; + +free_rsp_data: + rte_free(msg_rsp.recv_buffer); +free_msg_data: + rte_free(desc.payload_addr); + return ret; +} + +int32_t zxdh_phyport_get(struct rte_eth_dev *dev, uint8_t *phyport) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + int32_t ret = zxdh_common_table_read(hw, ZXDH_COMMON_FIELD_PHYPORT, + (void *)phyport, sizeof(*phyport)); + return ret; +} + +static inline void zxdh_fill_res_para(struct rte_eth_dev *dev, struct zxdh_res_para *param) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + param->pcie_id = hw->pcie_id; + param->virt_addr = hw->bar_addr[0] + ZXDH_CTRLCH_OFFSET; + param->src_type = ZXDH_BAR_MODULE_TBL; +} + +static int zxdh_get_res_info(struct zxdh_res_para *dev, uint8_t field, uint8_t *res, uint16_t *len) +{ + if (!res || !dev) + return ZXDH_BAR_MSG_ERR_NULL; + + struct tbl_msg_header tbl_msg = { + .type = ZXDH_TBL_TYPE_READ, + .field = field, + .pcieid = dev->pcie_id, + .slen = 0, + .rsv = 0, + }; + + struct zxdh_pci_bar_msg in = {0}; + + in.virt_addr = dev->virt_addr; + in.payload_addr = &tbl_msg; + in.payload_len = sizeof(tbl_msg); + in.src = dev->src_type; + in.dst = ZXDH_MSG_CHAN_END_RISC; + in.module_id = ZXDH_BAR_MODULE_TBL; + in.src_pcieid = dev->pcie_id; + + uint8_t recv_buf[ZXDH_RSC_TBL_CONTENT_LEN_MAX + 8] = {0}; + struct zxdh_msg_recviver_mem result = { + .recv_buffer = recv_buf, + .buffer_len = 
sizeof(recv_buf), + }; + int ret = zxdh_bar_chan_sync_msg_send(&in, &result); + + if (ret != ZXDH_BAR_MSG_OK) { + PMD_DRV_LOG(ERR, + "send sync_msg failed. pcieid: 0x%x, ret: %d.", dev->pcie_id, ret); + return ret; + } + struct tbl_msg_reps_header *tbl_reps = + (struct tbl_msg_reps_header *)(recv_buf + ZXDH_REPS_HEADER_OFFSET); + + if (tbl_reps->check != ZXDH_TBL_MSG_PRO_SUCCESS) { + PMD_DRV_LOG(ERR, + "get resource_field failed. pcieid: 0x%x, ret: %d.", dev->pcie_id, ret); + return ret; + } + *len = tbl_reps->len; + rte_memcpy(res, + (recv_buf + ZXDH_REPS_HEADER_OFFSET + sizeof(struct tbl_msg_reps_header)), *len); + return ret; +} + +static int zxdh_get_res_panel_id(struct zxdh_res_para *in, uint8_t *panel_id) +{ + uint8_t reps = 0; + uint16_t reps_len = 0; + + if (zxdh_get_res_info(in, ZXDH_TBL_FIELD_PNLID, &reps, &reps_len) != ZXDH_BAR_MSG_OK) + return -1; + + *panel_id = reps; + return ZXDH_BAR_MSG_OK; +} + +int32_t zxdh_pannelid_get(struct rte_eth_dev *dev, uint8_t *pannelid) +{ + struct zxdh_res_para param; + + zxdh_fill_res_para(dev, ¶m); + int32_t ret = zxdh_get_res_panel_id(¶m, pannelid); + return ret; +} diff --git a/drivers/net/zxdh/zxdh_common.h b/drivers/net/zxdh/zxdh_common.h new file mode 100644 index 0000000000..f098ae4cf9 --- /dev/null +++ b/drivers/net/zxdh/zxdh_common.h @@ -0,0 +1,30 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#ifndef ZXDH_COMMON_H +#define ZXDH_COMMON_H + +#include <stdint.h> +#include <rte_ethdev.h> + +#include "zxdh_ethdev.h" + +#ifdef __cplusplus +extern "C" { +#endif + +struct zxdh_res_para { + uint64_t virt_addr; + uint16_t pcie_id; + uint16_t src_type; /* refer to BAR_DRIVER_TYPE */ +}; + +int32_t zxdh_phyport_get(struct rte_eth_dev *dev, uint8_t *phyport); +int32_t zxdh_pannelid_get(struct rte_eth_dev *dev, uint8_t *pannelid); + +#ifdef __cplusplus +} +#endif + +#endif /* ZXDH_COMMON_H */ diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index 2a9c1939c3..ea9253dd2a 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -10,9 +10,21 @@ #include "zxdh_logs.h" #include "zxdh_pci.h" #include "zxdh_msg.h" +#include "zxdh_common.h" struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; +uint16_t vport_to_vfid(union virport_num v) +{ + /* epid > 4 is local soft queue. 
return 1192 */ + if (v.epid > 4) + return 1192; + if (v.vf_flag) + return v.epid * 256 + v.vfid; + else + return (v.epid * 8 + v.pfid) + 1152; +} + static int32_t zxdh_init_device(struct rte_eth_dev *eth_dev) { struct zxdh_hw *hw = eth_dev->data->dev_private; @@ -44,6 +56,25 @@ static int32_t zxdh_init_device(struct rte_eth_dev *eth_dev) return ret; } +static int zxdh_agent_comm(struct rte_eth_dev *eth_dev, struct zxdh_hw *hw) +{ + if (zxdh_phyport_get(eth_dev, &hw->phyport) != 0) { + PMD_INIT_LOG(ERR, "Failed to get phyport"); + return -1; + } + PMD_INIT_LOG(INFO, "Get phyport success: 0x%x", hw->phyport); + + hw->vfid = vport_to_vfid(hw->vport); + + if (zxdh_pannelid_get(eth_dev, &hw->panel_id) != 0) { + PMD_INIT_LOG(ERR, "Failed to get panel_id"); + return -1; + } + PMD_INIT_LOG(INFO, "Get panel id success: 0x%x", hw->panel_id); + + return 0; +} + static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) { struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); @@ -103,6 +134,10 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) goto err_zxdh_init; } + ret = zxdh_agent_comm(eth_dev, hw); + if (ret != 0) + goto err_zxdh_init; + return ret; err_zxdh_init: diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index 4922f3d457..bf495a083d 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -56,6 +56,7 @@ struct zxdh_hw { uint16_t pcie_id; uint16_t device_id; uint16_t port_id; + uint16_t vfid; uint8_t *isr; uint8_t weak_barriers; @@ -63,9 +64,13 @@ struct zxdh_hw { uint8_t mac_addr[RTE_ETHER_ADDR_LEN]; uint8_t duplex; uint8_t is_pf; + uint8_t phyport; + uint8_t panel_id; uint8_t msg_chan_init; }; +uint16_t vport_to_vfid(union virport_num v); + #ifdef __cplusplus } #endif diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index aa91f499ef..a3fca395c1 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -16,7 +16,7 @@ extern "C" { #define ZXDH_BAR0_INDEX 0 #define ZXDH_CTRLCH_OFFSET (0x2000) -#define ZXDH_MSIX_INTR_MSG_VEC_BASE 1 +#define ZXDH_MSIX_INTR_MSG_VEC_BASE 1 #define ZXDH_BAR_MSG_POLLING_SPAN 100 #define ZXDH_BAR_MSG_POLL_CNT_PER_MS (1 * 1000 / ZXDH_BAR_MSG_POLLING_SPAN) @@ -107,6 +107,27 @@ enum bar_module_id { ZXDH_BAR_MSG_MODULE_NUM = 100, }; +enum RES_TBL_FILED { + ZXDH_TBL_FIELD_PCIEID = 0, + ZXDH_TBL_FIELD_BDF = 1, + ZXDH_TBL_FIELD_MSGCH = 2, + ZXDH_TBL_FIELD_DATACH = 3, + ZXDH_TBL_FIELD_VPORT = 4, + ZXDH_TBL_FIELD_PNLID = 5, + ZXDH_TBL_FIELD_PHYPORT = 6, + ZXDH_TBL_FIELD_SERDES_NUM = 7, + ZXDH_TBL_FIELD_NP_PORT = 8, + ZXDH_TBL_FIELD_SPEED = 9, + ZXDH_TBL_FIELD_HASHID = 10, + ZXDH_TBL_FIELD_NON, +}; + +enum TBL_MSG_TYPE { + ZXDH_TBL_TYPE_READ, + ZXDH_TBL_TYPE_WRITE, + ZXDH_TBL_TYPE_NON, +}; + struct msix_para { uint16_t pcie_id; uint16_t vector_risc; @@ -181,6 +202,7 @@ struct bar_msg_header { uint16_t dst_pcieid; /* used in PF-->VF */ }; + int zxdh_msg_chan_init(void); int zxdh_bar_msg_chan_exit(void); int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev); diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h index d93705ed7b..7b48f4884b 100644 --- a/drivers/net/zxdh/zxdh_queue.h +++ b/drivers/net/zxdh/zxdh_queue.h @@ -102,4 +102,8 @@ struct virtqueue { struct vq_desc_extra vq_descx[0]; }; +#ifdef __cplusplus +} +#endif + #endif /* ZXDH_QUEUE_H */ diff --git a/drivers/net/zxdh/zxdh_rxtx.h b/drivers/net/zxdh/zxdh_rxtx.h index f12f58d84a..5372b910fc 100644 --- a/drivers/net/zxdh/zxdh_rxtx.h +++ b/drivers/net/zxdh/zxdh_rxtx.h @@ -48,4 +48,8 @@ 
struct virtnet_tx { const struct rte_memzone *mz; /* mem zone to populate TX ring. */ }; +#ifdef __cplusplus +} +#endif + #endif /* ZXDH_RXTX_H */ -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 27873 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
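The vport-to-vfid mapping introduced in this patch is bit-field arithmetic that benefits from a worked example: VFs land at epid * 256 + vfid, PFs at epid * 8 + pfid + 1152, and any epid above 4 collapses to the local soft-queue id 1192. Below is a standalone sketch reproducing the function with illustrative inputs; the sample vport encodings are this sketch's assumption, not values observed on hardware.

#include <stdint.h>
#include <stdio.h>

/* Same layout as union virport_num in zxdh_ethdev.h above. */
union virport_num {
        uint16_t vport;
        struct {
                uint16_t vfid:8;
                uint16_t pfid:3;
                uint16_t vf_flag:1;
                uint16_t epid:3;
                uint16_t direct_flag:1;
        };
};

static uint16_t vport_to_vfid(union virport_num v)
{
        if (v.epid > 4)
                return 1192; /* local soft queue */
        if (v.vf_flag)
                return v.epid * 256 + v.vfid;
        return (v.epid * 8 + v.pfid) + 1152;
}

int main(void)
{
        union virport_num vf = { .vf_flag = 1, .epid = 1, .vfid = 5 };
        union virport_num pf = { .vf_flag = 0, .epid = 1, .pfid = 2 };

        /* prints: vf -> 261, pf -> 1162 */
        printf("vf -> %u, pf -> %u\n",
               (unsigned int)vport_to_vfid(vf), (unsigned int)vport_to_vfid(pf));
        return 0;
}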
* [PATCH v7 7/9] net/zxdh: add configure zxdh intr implementation 2024-10-22 12:20 ` [PATCH v7 0/9] net/zxdh: introduce net zxdh driver Junlong Wang ` (5 preceding siblings ...) 2024-10-22 12:20 ` [PATCH v7 6/9] net/zxdh: add zxdh get device backend infos Junlong Wang @ 2024-10-22 12:20 ` Junlong Wang 2024-10-27 17:07 ` Stephen Hemminger 2024-10-22 12:20 ` [PATCH v7 8/9] net/zxdh: add zxdh dev infos get ops Junlong Wang 2024-10-22 12:20 ` [PATCH v7 9/9] net/zxdh: add zxdh dev configure ops Junlong Wang 8 siblings, 1 reply; 225+ messages in thread From: Junlong Wang @ 2024-10-22 12:20 UTC (permalink / raw) To: dev; +Cc: thomas, ferruh.yigit, stephen, wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 24760 bytes --] Configure zxdh interrupts, including RISC and DTB interrupts, and add interrupt release. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/zxdh_ethdev.c | 301 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev.h | 8 + drivers/net/zxdh/zxdh_msg.c | 187 ++++++++++++++++++++ drivers/net/zxdh/zxdh_msg.h | 10 ++ drivers/net/zxdh/zxdh_pci.c | 62 +++++++ drivers/net/zxdh/zxdh_pci.h | 12 ++ 6 files changed, 580 insertions(+) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index ea9253dd2a..cb8c85941a 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -25,6 +25,302 @@ uint16_t vport_to_vfid(union virport_num v) return (v.epid * 8 + v.pfid) + 1152; } +static void zxdh_queues_unbind_intr(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + int32_t i; + + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + VTPCI_OPS(hw)->set_queue_irq(hw, hw->vqs[i * 2], ZXDH_MSI_NO_VECTOR); + VTPCI_OPS(hw)->set_queue_irq(hw, hw->vqs[i * 2 + 1], ZXDH_MSI_NO_VECTOR); + } +} + + +static int32_t zxdh_intr_unmask(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + if (rte_intr_ack(dev->intr_handle) < 0) + return -1; + + hw->use_msix = zxdh_vtpci_msix_detect(RTE_ETH_DEV_TO_PCI(dev)); + + return 0; +} + +static void zxdh_devconf_intr_handler(void *param) +{ + struct rte_eth_dev *dev = param; + + if (zxdh_intr_unmask(dev) < 0) + PMD_DRV_LOG(ERR, "interrupt enable failed"); +} + + +/* Interrupt handler triggered by NIC for handling specific interrupt. */ +static void zxdh_fromriscv_intr_handler(void *param) +{ + struct rte_eth_dev *dev = param; + struct zxdh_hw *hw = dev->data->dev_private; + uint64_t virt_addr = 0; + + virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_CTRLCH_OFFSET); + if (hw->is_pf) { + PMD_INIT_LOG(DEBUG, "zxdh_risc2pf_intr_handler"); + zxdh_bar_irq_recv(ZXDH_MSG_CHAN_END_RISC, ZXDH_MSG_CHAN_END_PF, virt_addr, dev); + } else { + PMD_INIT_LOG(DEBUG, "zxdh_riscvf_intr_handler"); + zxdh_bar_irq_recv(ZXDH_MSG_CHAN_END_RISC, ZXDH_MSG_CHAN_END_VF, virt_addr, dev); + } +} + +/* Interrupt handler triggered by NIC for handling specific interrupt. 
*/ +static void zxdh_frompfvf_intr_handler(void *param) +{ + struct rte_eth_dev *dev = param; + struct zxdh_hw *hw = dev->data->dev_private; + uint64_t virt_addr = 0; + + virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_MSG_CHAN_PFVFSHARE_OFFSET); + if (hw->is_pf) { + PMD_INIT_LOG(DEBUG, "zxdh_vf2pf_intr_handler"); + zxdh_bar_irq_recv(ZXDH_MSG_CHAN_END_VF, ZXDH_MSG_CHAN_END_PF, virt_addr, dev); + } else { + PMD_INIT_LOG(DEBUG, "zxdh_pf2vf_intr_handler"); + zxdh_bar_irq_recv(ZXDH_MSG_CHAN_END_PF, ZXDH_MSG_CHAN_END_VF, virt_addr, dev); + } +} + +static void zxdh_intr_cb_reg(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) + rte_intr_callback_unregister(dev->intr_handle, zxdh_devconf_intr_handler, dev); + + /* register callback to update dev config intr */ + rte_intr_callback_register(dev->intr_handle, zxdh_devconf_intr_handler, dev); + /* Register rsic_v to pf interrupt callback */ + struct rte_intr_handle *tmp = hw->risc_intr + + (ZXDH_MSIX_FROM_PFVF - ZXDH_MSIX_INTR_MSG_VEC_BASE); + + rte_intr_callback_register(tmp, zxdh_frompfvf_intr_handler, dev); + + tmp = hw->risc_intr + (ZXDH_MSIX_FROM_RISCV - ZXDH_MSIX_INTR_MSG_VEC_BASE); + rte_intr_callback_register(tmp, zxdh_fromriscv_intr_handler, dev); +} + +static void zxdh_intr_cb_unreg(struct rte_eth_dev *dev) +{ + if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) + rte_intr_callback_unregister(dev->intr_handle, zxdh_devconf_intr_handler, dev); + + struct zxdh_hw *hw = dev->data->dev_private; + + /* register callback to update dev config intr */ + rte_intr_callback_unregister(dev->intr_handle, zxdh_devconf_intr_handler, dev); + /* Register rsic_v to pf interrupt callback */ + struct rte_intr_handle *tmp = hw->risc_intr + + (ZXDH_MSIX_FROM_PFVF - ZXDH_MSIX_INTR_MSG_VEC_BASE); + + rte_intr_callback_unregister(tmp, zxdh_frompfvf_intr_handler, dev); + tmp = hw->risc_intr + (ZXDH_MSIX_FROM_RISCV - ZXDH_MSIX_INTR_MSG_VEC_BASE); + rte_intr_callback_unregister(tmp, zxdh_fromriscv_intr_handler, dev); +} + +static int32_t zxdh_intr_disable(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + if (!hw->intr_enabled) + return 0; + + zxdh_intr_cb_unreg(dev); + if (rte_intr_disable(dev->intr_handle) < 0) + return -1; + + hw->intr_enabled = 0; + return 0; +} + +static int32_t zxdh_intr_enable(struct rte_eth_dev *dev) +{ + int ret = 0; + struct zxdh_hw *hw = dev->data->dev_private; + + if (!hw->intr_enabled) { + zxdh_intr_cb_reg(dev); + ret = rte_intr_enable(dev->intr_handle); + if (unlikely(ret)) + PMD_INIT_LOG(ERR, "Failed to enable %s intr", dev->data->name); + + hw->intr_enabled = 1; + } + return ret; +} + +static int32_t zxdh_intr_release(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) + VTPCI_OPS(hw)->set_config_irq(hw, ZXDH_MSI_NO_VECTOR); + + zxdh_queues_unbind_intr(dev); + zxdh_intr_disable(dev); + + rte_intr_efd_disable(dev->intr_handle); + rte_intr_vec_list_free(dev->intr_handle); + rte_free(hw->risc_intr); + hw->risc_intr = NULL; + rte_free(hw->dtb_intr); + hw->dtb_intr = NULL; + return 0; +} + +static int32_t zxdh_setup_risc_interrupts(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint8_t i; + + if (!hw->risc_intr) { + PMD_INIT_LOG(ERR, " to allocate risc_intr"); + hw->risc_intr = rte_zmalloc("risc_intr", + ZXDH_MSIX_INTR_MSG_VEC_NUM * sizeof(struct rte_intr_handle), 0); + if (hw->risc_intr == NULL) { + PMD_INIT_LOG(ERR, 
"Failed to allocate risc_intr"); + return -ENOMEM; + } + } + + for (i = 0; i < ZXDH_MSIX_INTR_MSG_VEC_NUM; i++) { + if (dev->intr_handle->efds[i] < 0) { + PMD_INIT_LOG(ERR, "[%u]risc interrupt fd is invalid", i); + rte_free(hw->risc_intr); + hw->risc_intr = NULL; + return -1; + } + + struct rte_intr_handle *intr_handle = hw->risc_intr + i; + + intr_handle->fd = dev->intr_handle->efds[i]; + intr_handle->type = dev->intr_handle->type; + } + + return 0; +} + +static int32_t zxdh_setup_dtb_interrupts(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + if (!hw->dtb_intr) { + hw->dtb_intr = rte_zmalloc("dtb_intr", sizeof(struct rte_intr_handle), 0); + if (hw->dtb_intr == NULL) { + PMD_INIT_LOG(ERR, "Failed to allocate dtb_intr"); + return -ENOMEM; + } + } + + if (dev->intr_handle->efds[ZXDH_MSIX_INTR_DTB_VEC - 1] < 0) { + PMD_INIT_LOG(ERR, "[%d]dtb interrupt fd is invalid", ZXDH_MSIX_INTR_DTB_VEC - 1); + rte_free(hw->dtb_intr); + hw->dtb_intr = NULL; + return -1; + } + hw->dtb_intr->fd = dev->intr_handle->efds[ZXDH_MSIX_INTR_DTB_VEC - 1]; + hw->dtb_intr->type = dev->intr_handle->type; + return 0; +} + +static int32_t zxdh_queues_bind_intr(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + int32_t i; + uint16_t vec; + + if (!dev->data->dev_conf.intr_conf.rxq) { + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + vec = VTPCI_OPS(hw)->set_queue_irq(hw, + hw->vqs[i * 2], ZXDH_MSI_NO_VECTOR); + PMD_INIT_LOG(DEBUG, "vq%d irq set 0x%x, get 0x%x", + i * 2, ZXDH_MSI_NO_VECTOR, vec); + } + } else { + for (i = 0; i < dev->data->nb_rx_queues; ++i) { + vec = VTPCI_OPS(hw)->set_queue_irq(hw, + hw->vqs[i * 2], i + ZXDH_QUEUE_INTR_VEC_BASE); + PMD_INIT_LOG(DEBUG, "vq%d irq set %d, get %d", + i * 2, i + ZXDH_QUEUE_INTR_VEC_BASE, vec); + } + } + /* mask all txq intr */ + for (i = 0; i < dev->data->nb_tx_queues; ++i) { + vec = VTPCI_OPS(hw)->set_queue_irq(hw, + hw->vqs[(i * 2) + 1], ZXDH_MSI_NO_VECTOR); + PMD_INIT_LOG(DEBUG, "vq%d irq set 0x%x, get 0x%x", + (i * 2) + 1, ZXDH_MSI_NO_VECTOR, vec); + } + return 0; +} + +static int32_t zxdh_configure_intr(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + int32_t ret = 0; + + if (!rte_intr_cap_multiple(dev->intr_handle)) { + PMD_INIT_LOG(ERR, "Multiple intr vector not supported"); + return -ENOTSUP; + } + zxdh_intr_release(dev); + uint8_t nb_efd = ZXDH_MSIX_INTR_DTB_VEC_NUM + ZXDH_MSIX_INTR_MSG_VEC_NUM; + + if (dev->data->dev_conf.intr_conf.rxq) + nb_efd += dev->data->nb_rx_queues; + + if (rte_intr_efd_enable(dev->intr_handle, nb_efd)) { + PMD_INIT_LOG(ERR, "Fail to create eventfd"); + return -1; + } + + if (rte_intr_vec_list_alloc(dev->intr_handle, "intr_vec", + hw->max_queue_pairs + ZXDH_INTR_NONQUE_NUM)) { + PMD_INIT_LOG(ERR, "Failed to allocate %u rxq vectors", + hw->max_queue_pairs + ZXDH_INTR_NONQUE_NUM); + return -ENOMEM; + } + PMD_INIT_LOG(DEBUG, "allocate %u rxq vectors", dev->intr_handle->vec_list_size); + if (zxdh_setup_risc_interrupts(dev) != 0) { + PMD_INIT_LOG(ERR, "Error setting up rsic_v interrupts!"); + ret = -1; + goto free_intr_vec; + } + if (zxdh_setup_dtb_interrupts(dev) != 0) { + PMD_INIT_LOG(ERR, "Error setting up dtb interrupts!"); + ret = -1; + goto free_intr_vec; + } + + if (zxdh_queues_bind_intr(dev) < 0) { + PMD_INIT_LOG(ERR, "Failed to bind queue/interrupt"); + ret = -1; + goto free_intr_vec; + } + + if (zxdh_intr_enable(dev) < 0) { + PMD_DRV_LOG(ERR, "interrupt enable failed"); + ret = -1; + goto free_intr_vec; + } + return 0; + +free_intr_vec: + 
zxdh_intr_release(dev); + return ret; +} + static int32_t zxdh_init_device(struct rte_eth_dev *eth_dev) { struct zxdh_hw *hw = eth_dev->data->dev_private; @@ -138,9 +434,14 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) if (ret != 0) goto err_zxdh_init; + ret = zxdh_configure_intr(eth_dev); + if (ret != 0) + goto err_zxdh_init; + return ret; err_zxdh_init: + zxdh_intr_release(eth_dev); zxdh_bar_msg_chan_exit(); rte_free(eth_dev->data->mac_addrs); eth_dev->data->mac_addrs = NULL; diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index bf495a083d..e62d46488c 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -8,6 +8,10 @@ #include <rte_ether.h> #include "ethdev_driver.h" +#include <rte_interrupts.h> +#include <eal_interrupts.h> + +#include "zxdh_queue.h" #ifdef __cplusplus extern "C" { @@ -44,6 +48,9 @@ struct zxdh_hw { struct rte_eth_dev *eth_dev; struct zxdh_pci_common_cfg *common_cfg; struct zxdh_net_config *dev_cfg; + struct rte_intr_handle *risc_intr; + struct rte_intr_handle *dtb_intr; + struct virtqueue **vqs; union virport_num vport; uint64_t bar_addr[ZXDH_NUM_BARS]; @@ -60,6 +67,7 @@ struct zxdh_hw { uint8_t *isr; uint8_t weak_barriers; + uint8_t intr_enabled; uint8_t use_msix; uint8_t mac_addr[RTE_ETHER_ADDR_LEN]; uint8_t duplex; diff --git a/drivers/net/zxdh/zxdh_msg.c b/drivers/net/zxdh/zxdh_msg.c index 0e6074022f..01411a6309 100644 --- a/drivers/net/zxdh/zxdh_msg.c +++ b/drivers/net/zxdh/zxdh_msg.c @@ -95,6 +95,12 @@ #define ZXDH_BAR_CHAN_INDEX_SEND 0 #define ZXDH_BAR_CHAN_INDEX_RECV 1 +#define ZXDH_BAR_CHAN_MSG_SYNC 0 +#define ZXDH_BAR_CHAN_MSG_NO_EMEC 0 +#define ZXDH_BAR_CHAN_MSG_EMEC 1 +#define ZXDH_BAR_CHAN_MSG_NO_ACK 0 +#define ZXDH_BAR_CHAN_MSG_ACK 1 + uint8_t subchan_id_tbl[ZXDH_BAR_MSG_SRC_NUM][ZXDH_BAR_MSG_DST_NUM] = { {ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_SEND}, {ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_SEND, ZXDH_BAR_CHAN_INDEX_RECV}, @@ -134,6 +140,36 @@ struct seqid_ring { }; struct seqid_ring g_seqid_ring = {0}; +static inline const char *module_id_name(int val) +{ + switch (val) { + case ZXDH_BAR_MODULE_DBG: return "ZXDH_BAR_MODULE_DBG"; + case ZXDH_BAR_MODULE_TBL: return "ZXDH_BAR_MODULE_TBL"; + case ZXDH_BAR_MODULE_MISX: return "ZXDH_BAR_MODULE_MISX"; + case ZXDH_BAR_MODULE_SDA: return "ZXDH_BAR_MODULE_SDA"; + case ZXDH_BAR_MODULE_RDMA: return "ZXDH_BAR_MODULE_RDMA"; + case ZXDH_BAR_MODULE_DEMO: return "ZXDH_BAR_MODULE_DEMO"; + case ZXDH_BAR_MODULE_SMMU: return "ZXDH_BAR_MODULE_SMMU"; + case ZXDH_BAR_MODULE_MAC: return "ZXDH_BAR_MODULE_MAC"; + case ZXDH_BAR_MODULE_VDPA: return "ZXDH_BAR_MODULE_VDPA"; + case ZXDH_BAR_MODULE_VQM: return "ZXDH_BAR_MODULE_VQM"; + case ZXDH_BAR_MODULE_NP: return "ZXDH_BAR_MODULE_NP"; + case ZXDH_BAR_MODULE_VPORT: return "ZXDH_BAR_MODULE_VPORT"; + case ZXDH_BAR_MODULE_BDF: return "ZXDH_BAR_MODULE_BDF"; + case ZXDH_BAR_MODULE_RISC_READY: return "ZXDH_BAR_MODULE_RISC_READY"; + case ZXDH_BAR_MODULE_REVERSE: return "ZXDH_BAR_MODULE_REVERSE"; + case ZXDH_BAR_MDOULE_NVME: return "ZXDH_BAR_MDOULE_NVME"; + case ZXDH_BAR_MDOULE_NPSDK: return "ZXDH_BAR_MDOULE_NPSDK"; + case ZXDH_BAR_MODULE_NP_TODO: return "ZXDH_BAR_MODULE_NP_TODO"; + case ZXDH_MODULE_BAR_MSG_TO_PF: return "ZXDH_MODULE_BAR_MSG_TO_PF"; + case ZXDH_MODULE_BAR_MSG_TO_VF: return "ZXDH_MODULE_BAR_MSG_TO_VF"; + case ZXDH_MODULE_FLASH: return "ZXDH_MODULE_FLASH"; + case ZXDH_BAR_MODULE_OFFSET_GET: return "ZXDH_BAR_MODULE_OFFSET_GET"; + case ZXDH_BAR_EVENT_OVS_WITH_VCB: 
return "ZXDH_BAR_EVENT_OVS_WITH_VCB"; + default: return "NA"; + } +} + static uint16_t pcie_id_to_hard_lock(uint16_t src_pcieid, uint8_t dst) { uint16_t lock_id = 0; @@ -793,3 +829,154 @@ int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev) return 0; return bar_chan_pf_init_spinlock(hw->pcie_id, (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX])); } + +static uint64_t zxdh_recv_addr_get(uint8_t src_type, uint8_t dst_type, uint64_t virt_addr) +{ + uint8_t src = zxdh_bar_msg_dst_index_trans(src_type); + uint8_t dst = zxdh_bar_msg_src_index_trans(dst_type); + + if (src == ZXDH_BAR_MSG_SRC_ERR || dst == ZXDH_BAR_MSG_DST_ERR) + return 0; + + uint8_t chan_id = chan_id_tbl[dst][src]; + uint8_t subchan_id = 1 - subchan_id_tbl[dst][src]; + + return zxdh_subchan_addr_cal(virt_addr, chan_id, subchan_id); +} + +static void zxdh_bar_msg_ack_async_msg_proc(struct bar_msg_header *msg_header, + uint8_t *receiver_buff) +{ + struct seqid_item *reps_info = &g_seqid_ring.reps_info_tbl[msg_header->msg_id]; + + if (reps_info->flag != ZXDH_REPS_INFO_FLAG_USED) { + PMD_MSG_LOG(ERR, "msg_id: %u is released", msg_header->msg_id); + return; + } + if (msg_header->len > reps_info->buffer_len - 4) { + PMD_MSG_LOG(ERR, "reps_buf_len is %u, but reps_msg_len is %u", + reps_info->buffer_len, msg_header->len + 4); + goto free_id; + } + uint8_t *reps_buffer = (uint8_t *)reps_info->reps_addr; + + rte_memcpy(reps_buffer + 4, receiver_buff, msg_header->len); + *(uint16_t *)(reps_buffer + 1) = msg_header->len; + *(uint8_t *)(reps_info->reps_addr) = ZXDH_REPS_HEADER_REPLYED; + +free_id: + zxdh_bar_chan_msgid_free(msg_header->msg_id); +} + +zxdh_bar_chan_msg_recv_callback msg_recv_func_tbl[ZXDH_BAR_MSG_MODULE_NUM]; +static void zxdh_bar_msg_sync_msg_proc(uint64_t reply_addr, struct bar_msg_header *msg_header, + uint8_t *receiver_buff, void *dev) +{ + uint8_t *reps_buffer = rte_malloc(NULL, ZXDH_BAR_MSG_PAYLOAD_MAX_LEN, 0); + + if (reps_buffer == NULL) + return; + + zxdh_bar_chan_msg_recv_callback recv_func = msg_recv_func_tbl[msg_header->module_id]; + uint16_t reps_len = 0; + + recv_func(receiver_buff, msg_header->len, reps_buffer, &reps_len, dev); + msg_header->ack = ZXDH_BAR_CHAN_MSG_ACK; + msg_header->len = reps_len; + zxdh_bar_chan_msg_header_set(reply_addr, msg_header); + zxdh_bar_chan_msg_payload_set(reply_addr, reps_buffer, reps_len); + zxdh_bar_chan_msg_valid_set(reply_addr, ZXDH_BAR_MSG_CHAN_USABLE); + rte_free(reps_buffer); +} + +static uint64_t zxdh_reply_addr_get(uint8_t sync, uint8_t src_type, + uint8_t dst_type, uint64_t virt_addr) +{ + uint8_t src = zxdh_bar_msg_dst_index_trans(src_type); + uint8_t dst = zxdh_bar_msg_src_index_trans(dst_type); + + if (src == ZXDH_BAR_MSG_SRC_ERR || dst == ZXDH_BAR_MSG_DST_ERR) + return 0; + + uint8_t chan_id = chan_id_tbl[dst][src]; + uint8_t subchan_id = 1 - subchan_id_tbl[dst][src]; + uint64_t recv_rep_addr; + + if (sync == ZXDH_BAR_CHAN_MSG_SYNC) + recv_rep_addr = zxdh_subchan_addr_cal(virt_addr, chan_id, subchan_id); + else + recv_rep_addr = zxdh_subchan_addr_cal(virt_addr, chan_id, 1 - subchan_id); + + return recv_rep_addr; +} + +static uint16_t zxdh_bar_chan_msg_header_check(struct bar_msg_header *msg_header) +{ + if (msg_header->valid != ZXDH_BAR_MSG_CHAN_USED) { + PMD_MSG_LOG(ERR, "recv header ERR: valid label is not used."); + return ZXDH_BAR_MSG_ERR_MODULE; + } + uint8_t module_id = msg_header->module_id; + + if (module_id >= (uint8_t)ZXDH_BAR_MSG_MODULE_NUM) { + PMD_MSG_LOG(ERR, "recv header ERR: invalid module_id: %u.", module_id); + return ZXDH_BAR_MSG_ERR_MODULE; + } + 
uint16_t len = msg_header->len; + + if (len > ZXDH_BAR_MSG_PAYLOAD_MAX_LEN) { + PMD_MSG_LOG(ERR, "recv header ERR: invalid msg len: %u.", len); + return ZXDH_BAR_MSG_ERR_LEN; + } + if (msg_recv_func_tbl[msg_header->module_id] == NULL) { + PMD_MSG_LOG(ERR, "recv header ERR: module:%s(%u) is not registered", + module_id_name(module_id), module_id); + return ZXDH_BAR_MSG_ERR_MODULE_NOEXIST; + } + return ZXDH_BAR_MSG_OK; +} + +int zxdh_bar_irq_recv(uint8_t src, uint8_t dst, uint64_t virt_addr, void *dev) +{ + struct bar_msg_header msg_header = {0}; + uint64_t recv_addr = 0; + uint16_t ret = 0; + + recv_addr = zxdh_recv_addr_get(src, dst, virt_addr); + if (recv_addr == 0) { + PMD_MSG_LOG(ERR, "invalid driver type(src:%u, dst:%u).", src, dst); + return -1; + } + + zxdh_bar_chan_msg_header_get(recv_addr, &msg_header); + ret = zxdh_bar_chan_msg_header_check(&msg_header); + + if (ret != ZXDH_BAR_MSG_OK) { + PMD_MSG_LOG(ERR, "recv msg_head err, ret: %u.", ret); + return -1; + } + + uint8_t *recved_msg = rte_malloc(NULL, msg_header.len, 0); + if (recved_msg == NULL) { + PMD_MSG_LOG(ERR, "malloc temp buff failed."); + return -1; + } + zxdh_bar_chan_msg_payload_get(recv_addr, recved_msg, msg_header.len); + + uint64_t reps_addr = zxdh_reply_addr_get(msg_header.sync, src, dst, virt_addr); + + if (msg_header.sync == ZXDH_BAR_CHAN_MSG_SYNC) { + zxdh_bar_msg_sync_msg_proc(reps_addr, &msg_header, recved_msg, dev); + goto exit; + } + zxdh_bar_chan_msg_valid_set(recv_addr, ZXDH_BAR_MSG_CHAN_USABLE); + if (msg_header.ack == ZXDH_BAR_CHAN_MSG_ACK) { + zxdh_bar_msg_ack_async_msg_proc(&msg_header, recved_msg); + goto exit; + } + +exit: + rte_free(recved_msg); + return ZXDH_BAR_MSG_OK; +} diff --git a/drivers/net/zxdh/zxdh_msg.h b/drivers/net/zxdh/zxdh_msg.h index a3fca395c1..14e5578033 100644 --- a/drivers/net/zxdh/zxdh_msg.h +++ b/drivers/net/zxdh/zxdh_msg.h @@ -15,8 +15,15 @@ extern "C" { #define ZXDH_BAR0_INDEX 0 #define ZXDH_CTRLCH_OFFSET (0x2000) +#define ZXDH_MSG_CHAN_PFVFSHARE_OFFSET (ZXDH_CTRLCH_OFFSET + 0x1000) #define ZXDH_MSIX_INTR_MSG_VEC_BASE 1 +#define ZXDH_MSIX_INTR_MSG_VEC_NUM 3 +#define ZXDH_MSIX_INTR_DTB_VEC (ZXDH_MSIX_INTR_MSG_VEC_BASE + ZXDH_MSIX_INTR_MSG_VEC_NUM) +#define ZXDH_MSIX_INTR_DTB_VEC_NUM 1 +#define ZXDH_INTR_NONQUE_NUM (ZXDH_MSIX_INTR_MSG_VEC_NUM + ZXDH_MSIX_INTR_DTB_VEC_NUM + 1) +#define ZXDH_QUEUE_INTR_VEC_BASE (ZXDH_MSIX_INTR_DTB_VEC + ZXDH_MSIX_INTR_DTB_VEC_NUM) +#define ZXDH_QUEUE_INTR_VEC_NUM 256 #define ZXDH_BAR_MSG_POLLING_SPAN 100 #define ZXDH_BAR_MSG_POLL_CNT_PER_MS (1 * 1000 / ZXDH_BAR_MSG_POLLING_SPAN) @@ -202,6 +209,8 @@ struct bar_msg_header { uint16_t dst_pcieid; /* used in PF-->VF */ }; +typedef int (*zxdh_bar_chan_msg_recv_callback)(void *pay_load, uint16_t len, + void *reps_buffer, uint16_t *reps_len, void *dev); int zxdh_msg_chan_init(void); int zxdh_bar_msg_chan_exit(void); @@ -210,6 +219,7 @@ int zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev); int zxdh_msg_chan_enable(struct rte_eth_dev *dev); int zxdh_bar_chan_sync_msg_send(struct zxdh_pci_bar_msg *in, struct zxdh_msg_recviver_mem *result); +int zxdh_bar_irq_recv(uint8_t src, uint8_t dst, uint64_t virt_addr, void *dev); #ifdef __cplusplus } diff --git a/drivers/net/zxdh/zxdh_pci.c b/drivers/net/zxdh/zxdh_pci.c index b0c3d0e0be..06b54b06cd 100644 --- a/drivers/net/zxdh/zxdh_pci.c +++ b/drivers/net/zxdh/zxdh_pci.c @@ -97,6 +97,24 @@ static void zxdh_set_features(struct zxdh_hw *hw, uint64_t features) rte_write32(features >> 32, &hw->common_cfg->guest_feature); } +static uint16_t 
zxdh_set_config_irq(struct zxdh_hw *hw, uint16_t vec) +{ + rte_write16(vec, &hw->common_cfg->msix_config); + return rte_read16(&hw->common_cfg->msix_config); +} + +static uint16_t zxdh_set_queue_irq(struct zxdh_hw *hw, struct virtqueue *vq, uint16_t vec) +{ + rte_write16(vq->vq_queue_index, &hw->common_cfg->queue_select); + rte_write16(vec, &hw->common_cfg->queue_msix_vector); + return rte_read16(&hw->common_cfg->queue_msix_vector); +} + +static uint8_t zxdh_get_isr(struct zxdh_hw *hw) +{ + return rte_read8(hw->isr); +} + const struct zxdh_pci_ops zxdh_dev_pci_ops = { .read_dev_cfg = zxdh_read_dev_config, .write_dev_cfg = zxdh_write_dev_config, @@ -104,8 +122,16 @@ const struct zxdh_pci_ops zxdh_dev_pci_ops = { .set_status = zxdh_set_status, .get_features = zxdh_get_features, .set_features = zxdh_set_features, + .set_queue_irq = zxdh_set_queue_irq, + .set_config_irq = zxdh_set_config_irq, + .get_isr = zxdh_get_isr, }; +uint8_t zxdh_vtpci_isr(struct zxdh_hw *hw) +{ + return VTPCI_OPS(hw)->get_isr(hw); +} + uint16_t zxdh_vtpci_get_features(struct zxdh_hw *hw) { return VTPCI_OPS(hw)->get_features(hw); @@ -288,3 +314,39 @@ int32_t zxdh_get_pci_dev_config(struct zxdh_hw *hw) return 0; } + +enum zxdh_msix_status zxdh_vtpci_msix_detect(struct rte_pci_device *dev) +{ + uint8_t pos = 0; + int32_t ret = rte_pci_read_config(dev, &pos, 1, ZXDH_PCI_CAPABILITY_LIST); + + if (ret != 1) { + PMD_INIT_LOG(ERR, "failed to read pci capability list, ret %d", ret); + return ZXDH_MSIX_NONE; + } + while (pos) { + uint8_t cap[2] = {0}; + + ret = rte_pci_read_config(dev, cap, sizeof(cap), pos); + if (ret != sizeof(cap)) { + PMD_INIT_LOG(ERR, "failed to read pci cap at pos: %x ret %d", pos, ret); + break; + } + if (cap[0] == ZXDH_PCI_CAP_ID_MSIX) { + uint16_t flags = 0; + + ret = rte_pci_read_config(dev, &flags, sizeof(flags), pos + sizeof(cap)); + if (ret != sizeof(flags)) { + PMD_INIT_LOG(ERR, + "failed to read pci cap at pos: %x ret %d", pos + 2, ret); + break; + } + if (flags & ZXDH_PCI_MSIX_ENABLE) + return ZXDH_MSIX_ENABLED; + else + return ZXDH_MSIX_DISABLED; + } + pos = cap[1]; + } + return ZXDH_MSIX_NONE; +} diff --git a/drivers/net/zxdh/zxdh_pci.h b/drivers/net/zxdh/zxdh_pci.h index f67de40962..25e34c3ba9 100644 --- a/drivers/net/zxdh/zxdh_pci.h +++ b/drivers/net/zxdh/zxdh_pci.h @@ -22,6 +22,13 @@ enum zxdh_msix_status { ZXDH_MSIX_ENABLED = 2 }; +/* The bit of the ISR which indicates a device has an interrupt. */ +#define ZXDH_PCI_ISR_INTR 0x1 +/* The bit of the ISR which indicates a device configuration change. */ +#define ZXDH_PCI_ISR_CONFIG 0x2 +/* Vector value used to disable MSI for queue. 
*/ +#define ZXDH_MSI_NO_VECTOR 0x7F + #define ZXDH_PCI_CAPABILITY_LIST 0x34 #define ZXDH_PCI_CAP_ID_VNDR 0x09 #define ZXDH_PCI_CAP_ID_MSIX 0x11 @@ -124,6 +131,9 @@ struct zxdh_pci_ops { uint64_t (*get_features)(struct zxdh_hw *hw); void (*set_features)(struct zxdh_hw *hw, uint64_t features); + uint16_t (*set_queue_irq)(struct zxdh_hw *hw, struct virtqueue *vq, uint16_t vec); + uint16_t (*set_config_irq)(struct zxdh_hw *hw, uint16_t vec); + uint8_t (*get_isr)(struct zxdh_hw *hw); }; struct zxdh_hw_internal { @@ -143,6 +153,8 @@ int32_t zxdh_read_pci_caps(struct rte_pci_device *dev, struct zxdh_hw *hw); int32_t zxdh_get_pci_dev_config(struct zxdh_hw *hw); uint16_t zxdh_vtpci_get_features(struct zxdh_hw *hw); +uint8_t zxdh_vtpci_isr(struct zxdh_hw *hw); +enum zxdh_msix_status zxdh_vtpci_msix_detect(struct rte_pci_device *dev); #ifdef __cplusplus } -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 53151 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
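For readers following the interrupt wiring in this patch, the MSI-X vector map implied by the constants above works out roughly as sketched below. This is an informal reading of the defines, not text from the patch; in particular, the use of vector 0 for device config/link-status is an assumption based on ZXDH_MSIX_INTR_MSG_VEC_BASE starting at 1 and ZXDH_INTR_NONQUE_NUM adding one extra non-queue vector.

    /* Sketch of the vector layout implied by zxdh_msg.h (assumptions noted):
     * vec 0                       device config / link-status (assumed)
     * vec 1..3                    BAR message channel (pf/vf, risc_v),
     *                             ZXDH_MSIX_INTR_MSG_VEC_BASE + i
     * vec 4                       DTB, ZXDH_MSIX_INTR_DTB_VEC
     * vec 5..(5 + nb_rx_queues-1) rx queues, i + ZXDH_QUEUE_INTR_VEC_BASE
     */
    uint8_t nb_efd = ZXDH_MSIX_INTR_DTB_VEC_NUM + ZXDH_MSIX_INTR_MSG_VEC_NUM; /* 1 + 3 */
    if (dev->data->dev_conf.intr_conf.rxq)
            nb_efd += dev->data->nb_rx_queues; /* one eventfd per rx queue */

This matches the eventfd budget requested in zxdh_configure_intr() above.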
* Re: [PATCH v7 7/9] net/zxdh: add configure zxdh intr implementation 2024-10-22 12:20 ` [PATCH v7 7/9] net/zxdh: add configure zxdh intr implementation Junlong Wang @ 2024-10-27 17:07 ` Stephen Hemminger 0 siblings, 0 replies; 225+ messages in thread From: Stephen Hemminger @ 2024-10-27 17:07 UTC (permalink / raw) To: Junlong Wang; +Cc: dev, thomas, ferruh.yigit, wang.yong19 On Tue, 22 Oct 2024 20:20:40 +0800 Junlong Wang <wang.junlong1@zte.com.cn> wrote: > +/* Interrupt handler triggered by NIC for handling specific interrupt. */ > +static void zxdh_fromriscv_intr_handler(void *param) > +{ > + struct rte_eth_dev *dev = param; > + struct zxdh_hw *hw = dev->data->dev_private; > + uint64_t virt_addr = 0; > + > + virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_CTRLCH_OFFSET); There is no need to initialize a variable like virt_addr which is set later. ^ permalink raw reply [flat|nested] 225+ messages in thread
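To illustrate the review point, a minimal sketch of the same handler with the dead store dropped; the assignment moves to the declaration, and the rest of the function (elided here, since it is not quoted above) is unchanged:

    static void zxdh_fromriscv_intr_handler(void *param)
    {
            struct rte_eth_dev *dev = param;
            struct zxdh_hw *hw = dev->data->dev_private;
            /* initialize at declaration; no "= 0" followed by an overwrite */
            uint64_t virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_CTRLCH_OFFSET);
            ...
    }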
* [PATCH v7 8/9] net/zxdh: add zxdh dev infos get ops 2024-10-22 12:20 ` [PATCH v7 0/9] net/zxdh: introduce net zxdh driver Junlong Wang ` (6 preceding siblings ...) 2024-10-22 12:20 ` [PATCH v7 7/9] net/zxdh: add configure zxdh intr implementation Junlong Wang @ 2024-10-22 12:20 ` Junlong Wang 2024-10-22 12:20 ` [PATCH v7 9/9] net/zxdh: add zxdh dev configure ops Junlong Wang 8 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-10-22 12:20 UTC (permalink / raw) To: dev; +Cc: thomas, ferruh.yigit, stephen, wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 3289 bytes --] Add support for zxdh infos get. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/zxdh_ethdev.c | 48 +++++++++++++++++++++++++++++++++- 1 file changed, 47 insertions(+), 1 deletion(-) diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index cb8c85941a..cc270d6e73 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -12,6 +12,9 @@ #include "zxdh_msg.h" #include "zxdh_common.h" +#define ZXDH_MIN_RX_BUFSIZE 64 +#define ZXDH_MAX_RX_PKTLEN 14000U + struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS]; uint16_t vport_to_vfid(union virport_num v) @@ -25,6 +28,43 @@ uint16_t vport_to_vfid(union virport_num v) return (v.epid * 8 + v.pfid) + 1152; } +static int32_t zxdh_dev_infos_get(struct rte_eth_dev *dev, + struct rte_eth_dev_info *dev_info) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + dev_info->speed_capa = rte_eth_speed_bitflag(hw->speed, RTE_ETH_LINK_FULL_DUPLEX); + dev_info->max_rx_queues = RTE_MIN(hw->max_queue_pairs, ZXDH_RX_QUEUES_MAX); + dev_info->max_tx_queues = RTE_MIN(hw->max_queue_pairs, ZXDH_TX_QUEUES_MAX); + dev_info->min_rx_bufsize = ZXDH_MIN_RX_BUFSIZE; + dev_info->max_rx_pktlen = ZXDH_MAX_RX_PKTLEN; + dev_info->max_mac_addrs = ZXDH_MAX_MAC_ADDRS; + dev_info->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_VLAN_STRIP | + RTE_ETH_RX_OFFLOAD_VLAN_FILTER | + RTE_ETH_RX_OFFLOAD_QINQ_STRIP); + dev_info->rx_offload_capa |= (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_RX_OFFLOAD_UDP_CKSUM | + RTE_ETH_RX_OFFLOAD_TCP_CKSUM | + RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM); + dev_info->rx_offload_capa |= (RTE_ETH_RX_OFFLOAD_SCATTER); + dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TCP_LRO; + dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_RSS_HASH; + + dev_info->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_MULTI_SEGS); + dev_info->tx_offload_capa |= (RTE_ETH_TX_OFFLOAD_TCP_TSO | + RTE_ETH_TX_OFFLOAD_UDP_TSO); + dev_info->tx_offload_capa |= (RTE_ETH_TX_OFFLOAD_VLAN_INSERT | + RTE_ETH_TX_OFFLOAD_QINQ_INSERT | + RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO); + dev_info->tx_offload_capa |= (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_UDP_CKSUM | + RTE_ETH_TX_OFFLOAD_TCP_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | + RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM); + + return 0; +} + static void zxdh_queues_unbind_intr(struct rte_eth_dev *dev) { struct zxdh_hw *hw = dev->data->dev_private; @@ -321,6 +361,12 @@ static int32_t zxdh_configure_intr(struct rte_eth_dev *dev) return ret; } +/* dev_ops for zxdh, bare necessities for basic operation */ +static const struct eth_dev_ops zxdh_eth_dev_ops = { + .dev_infos_get = zxdh_dev_infos_get, +}; + + static int32_t zxdh_init_device(struct rte_eth_dev *eth_dev) { struct zxdh_hw *hw = eth_dev->data->dev_private; @@ -377,7 +423,7 @@ static int zxdh_eth_dev_init(struct rte_eth_dev *eth_dev) struct zxdh_hw *hw = eth_dev->data->dev_private; int ret = 0; - eth_dev->dev_ops = NULL; + 
eth_dev->dev_ops = &zxdh_eth_dev_ops; /* Allocate memory for storing MAC addresses */ eth_dev->data->mac_addrs = rte_zmalloc("zxdh_mac", -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 7205 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
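For context, the limits and offload capabilities filled in by zxdh_dev_infos_get() are what applications retrieve through the standard ethdev call. A minimal usage sketch follows (plain DPDK API from rte_ethdev.h; nothing zxdh-specific is assumed beyond the patch above, and port_id is illustrative):

    struct rte_eth_dev_info dev_info;

    if (rte_eth_dev_info_get(port_id, &dev_info) == 0 &&
        (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_TCP_LRO))
            printf("port %u: LRO capable, max_rx_pktlen %u\n",
                   port_id, dev_info.max_rx_pktlen);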
* [PATCH v7 9/9] net/zxdh: add zxdh dev configure ops 2024-10-22 12:20 ` [PATCH v7 0/9] net/zxdh: introduce net zxdh driver Junlong Wang ` (7 preceding siblings ...) 2024-10-22 12:20 ` [PATCH v7 8/9] net/zxdh: add zxdh dev infos get ops Junlong Wang @ 2024-10-22 12:20 ` Junlong Wang 2024-10-24 11:31 ` [v7,9/9] " Junlong Wang ` (4 more replies) 8 siblings, 5 replies; 225+ messages in thread From: Junlong Wang @ 2024-10-22 12:20 UTC (permalink / raw) To: dev; +Cc: thomas, ferruh.yigit, stephen, wang.yong19, Junlong Wang [-- Attachment #1.1.1: Type: text/plain, Size: 39070 bytes --] provided zxdh dev configure ops for queue check, reset, resource allocation, etc. Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> --- drivers/net/zxdh/meson.build | 1 + drivers/net/zxdh/zxdh_common.c | 118 +++++++++ drivers/net/zxdh/zxdh_common.h | 12 + drivers/net/zxdh/zxdh_ethdev.c | 457 +++++++++++++++++++++++++++++++++ drivers/net/zxdh/zxdh_ethdev.h | 18 +- drivers/net/zxdh/zxdh_pci.c | 97 +++++++ drivers/net/zxdh/zxdh_pci.h | 29 +++ drivers/net/zxdh/zxdh_queue.c | 131 ++++++++++ drivers/net/zxdh/zxdh_queue.h | 175 ++++++++++++- 9 files changed, 1035 insertions(+), 3 deletions(-) create mode 100644 drivers/net/zxdh/zxdh_queue.c diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build index a16db47f89..b96aa5a27e 100644 --- a/drivers/net/zxdh/meson.build +++ b/drivers/net/zxdh/meson.build @@ -18,4 +18,5 @@ sources = files( 'zxdh_pci.c', 'zxdh_msg.c', 'zxdh_common.c', + 'zxdh_queue.c', ) diff --git a/drivers/net/zxdh/zxdh_common.c b/drivers/net/zxdh/zxdh_common.c index 21a0cd72cf..657f35a49c 100644 --- a/drivers/net/zxdh/zxdh_common.c +++ b/drivers/net/zxdh/zxdh_common.c @@ -20,6 +20,7 @@ #define ZXDH_COMMON_TABLE_WRITE 1 #define ZXDH_COMMON_FIELD_PHYPORT 6 +#define ZXDH_COMMON_FIELD_DATACH 3 #define ZXDH_RSC_TBL_CONTENT_LEN_MAX (257 * 2) @@ -247,3 +248,120 @@ int32_t zxdh_pannelid_get(struct rte_eth_dev *dev, uint8_t *pannelid) int32_t ret = zxdh_get_res_panel_id(&param, pannelid); return ret; } + +uint32_t zxdh_read_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint64_t baseaddr = (uint64_t)(hw->bar_addr[bar]); + uint32_t val = *((volatile uint32_t *)(baseaddr + reg)); + return val; +} + +void zxdh_write_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg, uint32_t val) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint64_t baseaddr = (uint64_t)(hw->bar_addr[bar]); + *((volatile uint32_t *)(baseaddr + reg)) = val; +} + +int32_t zxdh_acquire_lock(struct zxdh_hw *hw) +{ + uint32_t var = zxdh_read_comm_reg((uint64_t)hw->common_cfg, ZXDH_VF_LOCK_REG); + + /* check whether lock is used */ + if (!(var & ZXDH_VF_LOCK_ENABLE_MASK)) + return -1; + + return 0; +} + +int32_t zxdh_release_lock(struct zxdh_hw *hw) +{ + uint32_t var = zxdh_read_comm_reg((uint64_t)hw->common_cfg, ZXDH_VF_LOCK_REG); + + if (var & ZXDH_VF_LOCK_ENABLE_MASK) { + var &= ~ZXDH_VF_LOCK_ENABLE_MASK; + zxdh_write_comm_reg((uint64_t)hw->common_cfg, ZXDH_VF_LOCK_REG, var); + return 0; + } + + return -1; +} + +uint32_t zxdh_read_comm_reg(uint64_t pci_comm_cfg_baseaddr, uint32_t reg) +{ + uint32_t val = *((volatile uint32_t *)(pci_comm_cfg_baseaddr + reg)); + return val; +} + +void zxdh_write_comm_reg(uint64_t pci_comm_cfg_baseaddr, uint32_t reg, uint32_t val) +{ + *((volatile uint32_t *)(pci_comm_cfg_baseaddr + reg)) = val; +} + +static int32_t zxdh_common_table_write(struct zxdh_hw *hw, uint8_t field, + void *buff, uint16_t buff_size) +{ + struct 
zxdh_pci_bar_msg desc; + struct zxdh_msg_recviver_mem msg_rsp; + int32_t ret = 0; + + if (!hw->msg_chan_init) { + PMD_DRV_LOG(ERR, "Bar messages channel not initialized"); + return -1; + } + if (buff_size != 0 && buff == NULL) { + PMD_DRV_LOG(ERR, "Buff is invalid"); + return -1; + } + + ret = zxdh_fill_common_msg(hw, &desc, ZXDH_COMMON_TABLE_WRITE, + field, buff, buff_size); + + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to fill common msg"); + return ret; + } + + ret = zxdh_send_command(hw, &desc, ZXDH_BAR_MODULE_TBL, &msg_rsp); + if (ret != 0) + goto free_msg_data; + + ret = zxdh_common_rsp_check(&msg_rsp, NULL, 0); + if (ret != 0) + goto free_rsp_data; + +free_rsp_data: + rte_free(msg_rsp.recv_buffer); +free_msg_data: + rte_free(desc.payload_addr); + return ret; +} + +int32_t zxdh_datach_set(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint16_t buff_size = (hw->queue_num + 1) * 2; + void *buff = rte_zmalloc(NULL, buff_size, 0); + + if (unlikely(buff == NULL)) { + PMD_DRV_LOG(ERR, "Failed to allocate buff"); + return -ENOMEM; + } + memset(buff, 0, buff_size); + uint16_t *pdata = (uint16_t *)buff; + *pdata++ = hw->queue_num; + uint16_t i; + + for (i = 0; i < hw->queue_num; i++) + *(pdata + i) = hw->channel_context[i].ph_chno; + + int32_t ret = zxdh_common_table_write(hw, ZXDH_COMMON_FIELD_DATACH, + (void *)buff, buff_size); + + if (ret != 0) + PMD_DRV_LOG(ERR, "Failed to setup data channel of common table"); + + rte_free(buff); + return ret; +} diff --git a/drivers/net/zxdh/zxdh_common.h b/drivers/net/zxdh/zxdh_common.h index f098ae4cf9..568aba258f 100644 --- a/drivers/net/zxdh/zxdh_common.h +++ b/drivers/net/zxdh/zxdh_common.h @@ -14,6 +14,10 @@ extern "C" { #endif +#define ZXDH_VF_LOCK_REG 0x90 +#define ZXDH_VF_LOCK_ENABLE_MASK 0x1 +#define ZXDH_ACQUIRE_CHANNEL_NUM_MAX 10 + struct zxdh_res_para { uint64_t virt_addr; uint16_t pcie_id; @@ -23,6 +27,14 @@ struct zxdh_res_para { int32_t zxdh_phyport_get(struct rte_eth_dev *dev, uint8_t *phyport); int32_t zxdh_pannelid_get(struct rte_eth_dev *dev, uint8_t *pannelid); +uint32_t zxdh_read_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg); +void zxdh_write_bar_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg, uint32_t val); +int32_t zxdh_release_lock(struct zxdh_hw *hw); +int32_t zxdh_acquire_lock(struct zxdh_hw *hw); +uint32_t zxdh_read_comm_reg(uint64_t pci_comm_cfg_baseaddr, uint32_t reg); +void zxdh_write_comm_reg(uint64_t pci_comm_cfg_baseaddr, uint32_t reg, uint32_t val); +int32_t zxdh_datach_set(struct rte_eth_dev *dev); + #ifdef __cplusplus } #endif diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c index cc270d6e73..30d164bffd 100644 --- a/drivers/net/zxdh/zxdh_ethdev.c +++ b/drivers/net/zxdh/zxdh_ethdev.c @@ -11,6 +11,7 @@ #include "zxdh_pci.h" #include "zxdh_msg.h" #include "zxdh_common.h" +#include "zxdh_queue.h" #define ZXDH_MIN_RX_BUFSIZE 64 #define ZXDH_MAX_RX_PKTLEN 14000U @@ -361,8 +362,464 @@ static int32_t zxdh_configure_intr(struct rte_eth_dev *dev) return ret; } +static int32_t zxdh_features_update(struct zxdh_hw *hw, + const struct rte_eth_rxmode *rxmode, + const struct rte_eth_txmode *txmode) +{ + uint64_t rx_offloads = rxmode->offloads; + uint64_t tx_offloads = txmode->offloads; + uint64_t req_features = hw->guest_features; + + if (rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM)) + req_features |= (1ULL << ZXDH_NET_F_GUEST_CSUM); + + if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) + req_features |= (1ULL << 
ZXDH_NET_F_GUEST_TSO4) | + (1ULL << ZXDH_NET_F_GUEST_TSO6); + + if (tx_offloads & (RTE_ETH_TX_OFFLOAD_UDP_CKSUM | RTE_ETH_TX_OFFLOAD_TCP_CKSUM)) + req_features |= (1ULL << ZXDH_NET_F_CSUM); + + if (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO) + req_features |= (1ULL << ZXDH_NET_F_HOST_TSO4) | + (1ULL << ZXDH_NET_F_HOST_TSO6); + + if (tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_TSO) + req_features |= (1ULL << ZXDH_NET_F_HOST_UFO); + + req_features = req_features & hw->host_features; + hw->guest_features = req_features; + + VTPCI_OPS(hw)->set_features(hw, req_features); + + if ((rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM)) && + !vtpci_with_feature(hw, ZXDH_NET_F_GUEST_CSUM)) { + PMD_DRV_LOG(ERR, "rx checksum not available on this host"); + return -ENOTSUP; + } + + if ((rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) && + (!vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO4) || + !vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO6))) { + PMD_DRV_LOG(ERR, "Large Receive Offload not available on this host"); + return -ENOTSUP; + } + return 0; +} + +static bool rx_offload_enabled(struct zxdh_hw *hw) +{ + return vtpci_with_feature(hw, ZXDH_NET_F_GUEST_CSUM) || + vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO4) || + vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO6); +} + +static bool tx_offload_enabled(struct zxdh_hw *hw) +{ + return vtpci_with_feature(hw, ZXDH_NET_F_CSUM) || + vtpci_with_feature(hw, ZXDH_NET_F_HOST_TSO4) || + vtpci_with_feature(hw, ZXDH_NET_F_HOST_TSO6) || + vtpci_with_feature(hw, ZXDH_NET_F_HOST_UFO); +} + +static void zxdh_dev_free_mbufs(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint16_t nr_vq = hw->queue_num; + uint32_t i = 0; + + const char *type = NULL; + struct virtqueue *vq = NULL; + struct rte_mbuf *buf = NULL; + int32_t queue_type = 0; + + if (hw->vqs == NULL) + return; + + for (i = 0; i < nr_vq; i++) { + vq = hw->vqs[i]; + if (!vq) + continue; + + queue_type = zxdh_get_queue_type(i); + if (queue_type == ZXDH_VTNET_RQ) + type = "rxq"; + else if (queue_type == ZXDH_VTNET_TQ) + type = "txq"; + else + continue; + PMD_INIT_LOG(DEBUG, "Before freeing %s[%d] used and unused buf", type, i); + + while ((buf = zxdh_virtqueue_detach_unused(vq)) != NULL) + rte_pktmbuf_free(buf); + } +} + +static int32_t zxdh_get_available_channel(struct rte_eth_dev *dev, uint8_t queue_type) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint16_t base = (queue_type == ZXDH_VTNET_RQ) ? 
0 : 1; + uint16_t i = 0; + uint16_t j = 0; + uint16_t done = 0; + uint16_t timeout = 0; + + while ((timeout++) < ZXDH_ACQUIRE_CHANNEL_NUM_MAX) { + rte_delay_us_block(1000); + /* acquire hw lock */ + if (zxdh_acquire_lock(hw) < 0) { + PMD_INIT_LOG(ERR, "Failed to acquire hw lock, timeout: %d", timeout); + continue; + } + /* Iterate COI table and find free channel */ + for (i = ZXDH_QUEUES_BASE / 32; i < ZXDH_TOTAL_QUEUES_NUM / 32; i++) { + uint32_t addr = ZXDH_QUERES_SHARE_BASE + (i * sizeof(uint32_t)); + uint32_t var = zxdh_read_bar_reg(dev, ZXDH_BAR0_INDEX, addr); + + for (j = base; j < 32; j += 2) { + /* Got the available channel & update COI table */ + if ((var & (1 << j)) == 0) { + var |= (1 << j); + zxdh_write_bar_reg(dev, ZXDH_BAR0_INDEX, addr, var); + done = 1; + break; + } + } + if (done) + break; + } + break; + } + if (timeout >= ZXDH_ACQUIRE_CHANNEL_NUM_MAX) { + PMD_INIT_LOG(ERR, "Failed to acquire channel"); + return -1; + } + zxdh_release_lock(hw); + /* check for no channel condition */ + if (done != 1) { + PMD_INIT_LOG(ERR, "No available queues"); + return -1; + } + /* return available channel ID */ + return (i * 32) + j; +} + +static int32_t zxdh_acquire_channel(struct rte_eth_dev *dev, uint16_t lch) +{ + struct zxdh_hw *hw = dev->data->dev_private; + + if (hw->channel_context[lch].valid == 1) { + PMD_INIT_LOG(DEBUG, "Logic channel:%u already acquired physical channel:%u", + lch, hw->channel_context[lch].ph_chno); + return hw->channel_context[lch].ph_chno; + } + int32_t pch = zxdh_get_available_channel(dev, zxdh_get_queue_type(lch)); + + if (pch < 0) { + PMD_INIT_LOG(ERR, "Failed to acquire channel"); + return -1; + } + hw->channel_context[lch].ph_chno = (uint16_t)pch; + hw->channel_context[lch].valid = 1; + PMD_INIT_LOG(DEBUG, "Acquire channel success lch:%u --> pch:%d", lch, pch); + return 0; +} + +static void zxdh_init_vring(struct virtqueue *vq) +{ + int32_t size = vq->vq_nentries; + uint8_t *ring_mem = vq->vq_ring_virt_mem; + + memset(ring_mem, 0, vq->vq_ring_size); + + vq->vq_used_cons_idx = 0; + vq->vq_desc_head_idx = 0; + vq->vq_avail_idx = 0; + vq->vq_desc_tail_idx = (uint16_t)(vq->vq_nentries - 1); + vq->vq_free_cnt = vq->vq_nentries; + memset(vq->vq_descx, 0, sizeof(struct vq_desc_extra) * vq->vq_nentries); + vring_init_packed(&vq->vq_packed.ring, ring_mem, ZXDH_PCI_VRING_ALIGN, size); + vring_desc_init_packed(vq, size); + virtqueue_disable_intr(vq); +} + +static int32_t zxdh_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_logic_qidx) +{ + char vq_name[ZXDH_VIRTQUEUE_MAX_NAME_SZ] = {0}; + char vq_hdr_name[ZXDH_VIRTQUEUE_MAX_NAME_SZ] = {0}; + const struct rte_memzone *mz = NULL; + const struct rte_memzone *hdr_mz = NULL; + uint32_t size = 0; + struct zxdh_hw *hw = dev->data->dev_private; + struct virtnet_rx *rxvq = NULL; + struct virtnet_tx *txvq = NULL; + struct virtqueue *vq = NULL; + size_t sz_hdr_mz = 0; + void *sw_ring = NULL; + int32_t queue_type = zxdh_get_queue_type(vtpci_logic_qidx); + int32_t numa_node = dev->device->numa_node; + uint16_t vtpci_phy_qidx = 0; + uint32_t vq_size = 0; + int32_t ret = 0; + + if (hw->channel_context[vtpci_logic_qidx].valid == 0) { + PMD_INIT_LOG(ERR, "lch %d is invalid", vtpci_logic_qidx); + return -EINVAL; + } + vtpci_phy_qidx = hw->channel_context[vtpci_logic_qidx].ph_chno; + + PMD_INIT_LOG(DEBUG, "vtpci_logic_qidx:%d setting up physical queue: %u on NUMA node %d", + vtpci_logic_qidx, vtpci_phy_qidx, numa_node); + + vq_size = ZXDH_QUEUE_DEPTH; + + if (VTPCI_OPS(hw)->set_queue_num != NULL) + VTPCI_OPS(hw)->set_queue_num(hw, 
vtpci_phy_qidx, vq_size); + + snprintf(vq_name, sizeof(vq_name), "port%d_vq%d", dev->data->port_id, vtpci_phy_qidx); + + size = RTE_ALIGN_CEIL(sizeof(*vq) + vq_size * sizeof(struct vq_desc_extra), + RTE_CACHE_LINE_SIZE); + if (queue_type == ZXDH_VTNET_TQ) { + /* + * For each xmit packet, allocate a zxdh_net_hdr + * and indirect ring elements + */ + sz_hdr_mz = vq_size * sizeof(struct zxdh_tx_region); + } + + vq = rte_zmalloc_socket(vq_name, size, RTE_CACHE_LINE_SIZE, numa_node); + if (vq == NULL) { + PMD_INIT_LOG(ERR, "can not allocate vq"); + return -ENOMEM; + } + hw->vqs[vtpci_logic_qidx] = vq; + + vq->hw = hw; + vq->vq_queue_index = vtpci_phy_qidx; + vq->vq_nentries = vq_size; + + vq->vq_packed.used_wrap_counter = 1; + vq->vq_packed.cached_flags = ZXDH_VRING_PACKED_DESC_F_AVAIL; + vq->vq_packed.event_flags_shadow = 0; + if (queue_type == ZXDH_VTNET_RQ) + vq->vq_packed.cached_flags |= ZXDH_VRING_DESC_F_WRITE; + + /* + * Reserve a memzone for vring elements + */ + size = vring_size(hw, vq_size, ZXDH_PCI_VRING_ALIGN); + vq->vq_ring_size = RTE_ALIGN_CEIL(size, ZXDH_PCI_VRING_ALIGN); + PMD_INIT_LOG(DEBUG, "vring_size: %d, rounded_vring_size: %d", size, vq->vq_ring_size); + + mz = rte_memzone_reserve_aligned(vq_name, vq->vq_ring_size, + numa_node, RTE_MEMZONE_IOVA_CONTIG, + ZXDH_PCI_VRING_ALIGN); + if (mz == NULL) { + if (rte_errno == EEXIST) + mz = rte_memzone_lookup(vq_name); + if (mz == NULL) { + ret = -ENOMEM; + goto fail_q_alloc; + } + } + + memset(mz->addr, 0, mz->len); + + vq->vq_ring_mem = mz->iova; + vq->vq_ring_virt_mem = mz->addr; + + zxdh_init_vring(vq); + + if (sz_hdr_mz) { + snprintf(vq_hdr_name, sizeof(vq_hdr_name), "port%d_vq%d_hdr", + dev->data->port_id, vtpci_phy_qidx); + hdr_mz = rte_memzone_reserve_aligned(vq_hdr_name, sz_hdr_mz, + numa_node, RTE_MEMZONE_IOVA_CONTIG, + RTE_CACHE_LINE_SIZE); + if (hdr_mz == NULL) { + if (rte_errno == EEXIST) + hdr_mz = rte_memzone_lookup(vq_hdr_name); + if (hdr_mz == NULL) { + ret = -ENOMEM; + goto fail_q_alloc; + } + } + } + + if (queue_type == ZXDH_VTNET_RQ) { + size_t sz_sw = (ZXDH_MBUF_BURST_SZ + vq_size) * sizeof(vq->sw_ring[0]); + + sw_ring = rte_zmalloc_socket("sw_ring", sz_sw, RTE_CACHE_LINE_SIZE, numa_node); + if (!sw_ring) { + PMD_INIT_LOG(ERR, "can not allocate RX soft ring"); + ret = -ENOMEM; + goto fail_q_alloc; + } + + vq->sw_ring = sw_ring; + rxvq = &vq->rxq; + rxvq->vq = vq; + rxvq->port_id = dev->data->port_id; + rxvq->mz = mz; + } else { /* queue_type == VTNET_TQ */ + txvq = &vq->txq; + txvq->vq = vq; + txvq->port_id = dev->data->port_id; + txvq->mz = mz; + txvq->virtio_net_hdr_mz = hdr_mz; + txvq->virtio_net_hdr_mem = hdr_mz->iova; + } + + vq->offset = offsetof(struct rte_mbuf, buf_iova); + if (queue_type == ZXDH_VTNET_TQ) { + struct zxdh_tx_region *txr = hdr_mz->addr; + uint32_t i; + + memset(txr, 0, vq_size * sizeof(*txr)); + for (i = 0; i < vq_size; i++) { + /* first indirect descriptor is always the tx header */ + struct vring_packed_desc *start_dp = txr[i].tx_packed_indir; + + vring_desc_init_indirect_packed(start_dp, RTE_DIM(txr[i].tx_packed_indir)); + start_dp->addr = txvq->virtio_net_hdr_mem + i * sizeof(*txr) + + offsetof(struct zxdh_tx_region, tx_hdr); + /* length will be updated to actual pi hdr size when xmit pkt */ + start_dp->len = 0; + } + } + if (VTPCI_OPS(hw)->setup_queue(hw, vq) < 0) { + PMD_INIT_LOG(ERR, "setup_queue failed"); + return -EINVAL; + } + return 0; +fail_q_alloc: + rte_free(sw_ring); + rte_memzone_free(hdr_mz); + rte_memzone_free(mz); + rte_free(vq); + return ret; +} + +static int32_t 
zxdh_alloc_queues(struct rte_eth_dev *dev, uint16_t nr_vq) +{ + uint16_t lch; + struct zxdh_hw *hw = dev->data->dev_private; + + hw->vqs = rte_zmalloc(NULL, sizeof(struct virtqueue *) * nr_vq, 0); + if (!hw->vqs) { + PMD_INIT_LOG(ERR, "Failed to allocate vqs"); + return -ENOMEM; + } + for (lch = 0; lch < nr_vq; lch++) { + if (zxdh_acquire_channel(dev, lch) < 0) { + PMD_INIT_LOG(ERR, "Failed to acquire the channels"); + zxdh_free_queues(dev); + return -1; + } + if (zxdh_init_queue(dev, lch) < 0) { + PMD_INIT_LOG(ERR, "Failed to alloc virtio queue"); + zxdh_free_queues(dev); + return -1; + } + } + return 0; +} + + +static int32_t zxdh_dev_configure(struct rte_eth_dev *dev) +{ + const struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode; + const struct rte_eth_txmode *txmode = &dev->data->dev_conf.txmode; + struct zxdh_hw *hw = dev->data->dev_private; + uint32_t nr_vq = 0; + int32_t ret = 0; + + if (dev->data->nb_rx_queues != dev->data->nb_tx_queues) { + PMD_INIT_LOG(ERR, "nb_rx_queues=%d and nb_tx_queues=%d not equal!", + dev->data->nb_rx_queues, dev->data->nb_tx_queues); + return -EINVAL; + } + if ((dev->data->nb_rx_queues + dev->data->nb_tx_queues) >= ZXDH_QUEUES_NUM_MAX) { + PMD_INIT_LOG(ERR, "nb_rx_queues=%d + nb_tx_queues=%d must < (%d)!", + dev->data->nb_rx_queues, dev->data->nb_tx_queues, + ZXDH_QUEUES_NUM_MAX); + return -EINVAL; + } + if (rxmode->mq_mode != RTE_ETH_MQ_RX_RSS && rxmode->mq_mode != RTE_ETH_MQ_RX_NONE) { + PMD_DRV_LOG(ERR, "Unsupported Rx multi queue mode %d", rxmode->mq_mode); + return -EINVAL; + } + + if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) { + PMD_DRV_LOG(ERR, "Unsupported Tx multi queue mode %d", txmode->mq_mode); + return -EINVAL; + } + + ret = zxdh_features_update(hw, rxmode, txmode); + if (ret < 0) + return ret; + + /* check if lsc interrupt feature is enabled */ + if (dev->data->dev_conf.intr_conf.lsc) { + if (!(dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)) { + PMD_DRV_LOG(ERR, "link status not supported by host"); + return -ENOTSUP; + } + } + + hw->has_tx_offload = tx_offload_enabled(hw); + hw->has_rx_offload = rx_offload_enabled(hw); + + nr_vq = dev->data->nb_rx_queues + dev->data->nb_tx_queues; + if (nr_vq == hw->queue_num) + return 0; + + PMD_DRV_LOG(DEBUG, "queues changed, need reset"); + /* Reset the device although not necessary at startup */ + zxdh_vtpci_reset(hw); + + /* Tell the host we've noticed this device. */ + zxdh_vtpci_set_status(hw, ZXDH_CONFIG_STATUS_ACK); + + /* Tell the host we know how to drive the device. 
*/ + zxdh_vtpci_set_status(hw, ZXDH_CONFIG_STATUS_DRIVER); + /* The queue needs to be released when reconfiguring*/ + if (hw->vqs != NULL) { + zxdh_dev_free_mbufs(dev); + zxdh_free_queues(dev); + } + + hw->queue_num = nr_vq; + ret = zxdh_alloc_queues(dev, nr_vq); + if (ret < 0) + return ret; + + zxdh_datach_set(dev); + + if (zxdh_configure_intr(dev) < 0) { + PMD_INIT_LOG(ERR, "Failed to configure interrupt"); + zxdh_free_queues(dev); + return -1; + } + + zxdh_vtpci_reinit_complete(hw); + + return ret; +} + /* dev_ops for zxdh, bare necessities for basic operation */ static const struct eth_dev_ops zxdh_eth_dev_ops = { + .dev_configure = zxdh_dev_configure, .dev_infos_get = zxdh_dev_infos_get, }; diff --git a/drivers/net/zxdh/zxdh_ethdev.h b/drivers/net/zxdh/zxdh_ethdev.h index e62d46488c..a77db614aa 100644 --- a/drivers/net/zxdh/zxdh_ethdev.h +++ b/drivers/net/zxdh/zxdh_ethdev.h @@ -11,8 +11,6 @@ #include <rte_interrupts.h> #include <eal_interrupts.h> -#include "zxdh_queue.h" - #ifdef __cplusplus extern "C" { #endif @@ -32,6 +30,13 @@ extern "C" { #define ZXDH_NUM_BARS 2 #define ZXDH_RX_QUEUES_MAX 128U #define ZXDH_TX_QUEUES_MAX 128U +#define ZXDH_QUEUE_DEPTH 1024 +#define ZXDH_QUEUES_BASE 0 +#define ZXDH_TOTAL_QUEUES_NUM 4096 +#define ZXDH_QUEUES_NUM_MAX 256 +#define ZXDH_QUERES_SHARE_BASE (0x5000) + +#define ZXDH_MBUF_BURST_SZ 64 union virport_num { uint16_t vport; @@ -44,6 +49,11 @@ union virport_num { }; }; +struct chnl_context { + uint16_t valid; + uint16_t ph_chno; +}; + struct zxdh_hw { struct rte_eth_dev *eth_dev; struct zxdh_pci_common_cfg *common_cfg; @@ -51,6 +61,7 @@ struct zxdh_hw { struct rte_intr_handle *risc_intr; struct rte_intr_handle *dtb_intr; struct virtqueue **vqs; + struct chnl_context channel_context[ZXDH_QUEUES_NUM_MAX]; union virport_num vport; uint64_t bar_addr[ZXDH_NUM_BARS]; @@ -64,6 +75,7 @@ struct zxdh_hw { uint16_t device_id; uint16_t port_id; uint16_t vfid; + uint16_t queue_num; uint8_t *isr; uint8_t weak_barriers; @@ -75,6 +87,8 @@ struct zxdh_hw { uint8_t phyport; uint8_t panel_id; uint8_t msg_chan_init; + uint8_t has_tx_offload; + uint8_t has_rx_offload; }; uint16_t vport_to_vfid(union virport_num v); diff --git a/drivers/net/zxdh/zxdh_pci.c b/drivers/net/zxdh/zxdh_pci.c index 06b54b06cd..e4422c44a1 100644 --- a/drivers/net/zxdh/zxdh_pci.c +++ b/drivers/net/zxdh/zxdh_pci.c @@ -115,6 +115,86 @@ static uint8_t zxdh_get_isr(struct zxdh_hw *hw) return rte_read8(hw->isr); } +static uint16_t zxdh_get_queue_num(struct zxdh_hw *hw, uint16_t queue_id) +{ + rte_write16(queue_id, &hw->common_cfg->queue_select); + return rte_read16(&hw->common_cfg->queue_size); +} + +static void zxdh_set_queue_num(struct zxdh_hw *hw, uint16_t queue_id, uint16_t vq_size) +{ + rte_write16(queue_id, &hw->common_cfg->queue_select); + rte_write16(vq_size, &hw->common_cfg->queue_size); +} + +static int32_t check_vq_phys_addr_ok(struct virtqueue *vq) +{ + if ((vq->vq_ring_mem + vq->vq_ring_size - 1) >> (ZXDH_PCI_QUEUE_ADDR_SHIFT + 32)) { + PMD_INIT_LOG(ERR, "vring address shouldn't be above 16TB!"); + return 0; + } + return 1; +} + +static inline void io_write64_twopart(uint64_t val, uint32_t *lo, uint32_t *hi) +{ + rte_write32(val & ((1ULL << 32) - 1), lo); + rte_write32(val >> 32, hi); +} + +static int32_t zxdh_setup_queue(struct zxdh_hw *hw, struct virtqueue *vq) +{ + uint64_t desc_addr = 0; + uint64_t avail_addr = 0; + uint64_t used_addr = 0; + uint16_t notify_off = 0; + + if (!check_vq_phys_addr_ok(vq)) + return -1; + + desc_addr = vq->vq_ring_mem; + avail_addr = desc_addr + 
vq->vq_nentries * sizeof(struct vring_desc); + if (vtpci_packed_queue(vq->hw)) { + used_addr = RTE_ALIGN_CEIL((avail_addr + sizeof(struct vring_packed_desc_event)), + ZXDH_PCI_VRING_ALIGN); + } else { + used_addr = RTE_ALIGN_CEIL(avail_addr + offsetof(struct vring_avail, + ring[vq->vq_nentries]), ZXDH_PCI_VRING_ALIGN); + } + + rte_write16(vq->vq_queue_index, &hw->common_cfg->queue_select); + + io_write64_twopart(desc_addr, &hw->common_cfg->queue_desc_lo, + &hw->common_cfg->queue_desc_hi); + io_write64_twopart(avail_addr, &hw->common_cfg->queue_avail_lo, + &hw->common_cfg->queue_avail_hi); + io_write64_twopart(used_addr, &hw->common_cfg->queue_used_lo, + &hw->common_cfg->queue_used_hi); + + notify_off = rte_read16(&hw->common_cfg->queue_notify_off); /* default 0 */ + notify_off = 0; + vq->notify_addr = (void *)((uint8_t *)hw->notify_base + + notify_off * hw->notify_off_multiplier); + + rte_write16(1, &hw->common_cfg->queue_enable); + + return 0; +} + +static void zxdh_del_queue(struct zxdh_hw *hw, struct virtqueue *vq) +{ + rte_write16(vq->vq_queue_index, &hw->common_cfg->queue_select); + + io_write64_twopart(0, &hw->common_cfg->queue_desc_lo, + &hw->common_cfg->queue_desc_hi); + io_write64_twopart(0, &hw->common_cfg->queue_avail_lo, + &hw->common_cfg->queue_avail_hi); + io_write64_twopart(0, &hw->common_cfg->queue_used_lo, + &hw->common_cfg->queue_used_hi); + + rte_write16(0, &hw->common_cfg->queue_enable); +} + const struct zxdh_pci_ops zxdh_dev_pci_ops = { .read_dev_cfg = zxdh_read_dev_config, .write_dev_cfg = zxdh_write_dev_config, @@ -125,6 +205,10 @@ const struct zxdh_pci_ops zxdh_dev_pci_ops = { .set_queue_irq = zxdh_set_queue_irq, .set_config_irq = zxdh_set_config_irq, .get_isr = zxdh_get_isr, + .get_queue_num = zxdh_get_queue_num, + .set_queue_num = zxdh_set_queue_num, + .setup_queue = zxdh_setup_queue, + .del_queue = zxdh_del_queue, }; uint8_t zxdh_vtpci_isr(struct zxdh_hw *hw) @@ -151,6 +235,19 @@ void zxdh_vtpci_reset(struct zxdh_hw *hw) PMD_INIT_LOG(INFO, "port %u device reset %u ms done", hw->port_id, retry); } +void zxdh_vtpci_reinit_complete(struct zxdh_hw *hw) +{ + zxdh_vtpci_set_status(hw, ZXDH_CONFIG_STATUS_DRIVER_OK); +} + +void zxdh_vtpci_set_status(struct zxdh_hw *hw, uint8_t status) +{ + if (status != ZXDH_CONFIG_STATUS_RESET) + status |= VTPCI_OPS(hw)->get_status(hw); + + VTPCI_OPS(hw)->set_status(hw, status); +} + static void *get_cfg_addr(struct rte_pci_device *dev, struct zxdh_pci_cap *cap) { uint8_t bar = cap->bar; diff --git a/drivers/net/zxdh/zxdh_pci.h b/drivers/net/zxdh/zxdh_pci.h index 25e34c3ba9..4c9fea7c6d 100644 --- a/drivers/net/zxdh/zxdh_pci.h +++ b/drivers/net/zxdh/zxdh_pci.h @@ -8,7 +8,9 @@ #include <stdint.h> #include <stdbool.h> +#include <rte_pci.h> #include <bus_pci_driver.h> +#include <ethdev_driver.h> #include "zxdh_ethdev.h" @@ -34,8 +36,20 @@ enum zxdh_msix_status { #define ZXDH_PCI_CAP_ID_MSIX 0x11 #define ZXDH_PCI_MSIX_ENABLE 0x8000 +#define ZXDH_PCI_VRING_ALIGN 4096 +#define ZXDH_NET_F_CSUM 0 /* Host handles pkts w/ partial csum */ +#define ZXDH_NET_F_GUEST_CSUM 1 /* Guest handles pkts w/ partial csum */ +#define ZXDH_NET_F_MTU 3 /* Initial MTU advice. */ #define ZXDH_NET_F_MAC 5 /* Host has given MAC address. */ +#define ZXDH_NET_F_GUEST_TSO4 7 /* Guest can handle TSOv4 in. */ +#define ZXDH_NET_F_GUEST_TSO6 8 /* Guest can handle TSOv6 in. */ +#define ZXDH_NET_F_GUEST_ECN 9 /* Guest can handle TSO[6] w/ ECN in. */ +#define ZXDH_NET_F_GUEST_UFO 10 /* Guest can handle UFO in. */ + +#define ZXDH_NET_F_HOST_UFO 14 /* Host can handle UFO in. 
*/ +#define ZXDH_NET_F_HOST_TSO4 11 /* Host can handle TSOv4 in. */ +#define ZXDH_NET_F_HOST_TSO6 12 /* Host can handle TSOv6 in. */ #define ZXDH_NET_F_MRG_RXBUF 15 /* Host can merge receive buffers. */ #define ZXDH_NET_F_STATUS 16 /* zxdh_net_config.status available */ #define ZXDH_NET_F_MQ 22 /* Device supports Receive Flow Steering */ @@ -60,6 +74,8 @@ enum zxdh_msix_status { #define ZXDH_CONFIG_STATUS_DEV_NEED_RESET 0x40 #define ZXDH_CONFIG_STATUS_FAILED 0x80 +#define ZXDH_PCI_QUEUE_ADDR_SHIFT 12 + struct zxdh_net_config { /* The config defining mac address (if ZXDH_NET_F_MAC) */ uint8_t mac[RTE_ETHER_ADDR_LEN]; @@ -122,6 +138,11 @@ static inline int32_t vtpci_with_feature(struct zxdh_hw *hw, uint64_t bit) return (hw->guest_features & (1ULL << bit)) != 0; } +static inline int32_t vtpci_packed_queue(struct zxdh_hw *hw) +{ + return vtpci_with_feature(hw, ZXDH_F_RING_PACKED); +} + struct zxdh_pci_ops { void (*read_dev_cfg)(struct zxdh_hw *hw, size_t offset, void *dst, int32_t len); void (*write_dev_cfg)(struct zxdh_hw *hw, size_t offset, const void *src, int32_t len); @@ -134,6 +155,11 @@ struct zxdh_pci_ops { uint16_t (*set_queue_irq)(struct zxdh_hw *hw, struct virtqueue *vq, uint16_t vec); uint16_t (*set_config_irq)(struct zxdh_hw *hw, uint16_t vec); uint8_t (*get_isr)(struct zxdh_hw *hw); + uint16_t (*get_queue_num)(struct zxdh_hw *hw, uint16_t queue_id); + void (*set_queue_num)(struct zxdh_hw *hw, uint16_t queue_id, uint16_t vq_size); + + int32_t (*setup_queue)(struct zxdh_hw *hw, struct virtqueue *vq); + void (*del_queue)(struct zxdh_hw *hw, struct virtqueue *vq); }; struct zxdh_hw_internal { @@ -156,6 +182,9 @@ uint16_t zxdh_vtpci_get_features(struct zxdh_hw *hw); uint8_t zxdh_vtpci_isr(struct zxdh_hw *hw); enum zxdh_msix_status zxdh_vtpci_msix_detect(struct rte_pci_device *dev); +void zxdh_vtpci_reinit_complete(struct zxdh_hw *hw); +void zxdh_vtpci_set_status(struct zxdh_hw *hw, uint8_t status); + #ifdef __cplusplus } #endif diff --git a/drivers/net/zxdh/zxdh_queue.c b/drivers/net/zxdh/zxdh_queue.c new file mode 100644 index 0000000000..4f12271f66 --- /dev/null +++ b/drivers/net/zxdh/zxdh_queue.c @@ -0,0 +1,131 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 ZTE Corporation + */ + +#include <stdint.h> +#include <rte_malloc.h> +#include <rte_mbuf.h> + +#include "zxdh_queue.h" +#include "zxdh_logs.h" +#include "zxdh_pci.h" +#include "zxdh_common.h" +#include "zxdh_msg.h" + +struct rte_mbuf *zxdh_virtqueue_detach_unused(struct virtqueue *vq) +{ + struct rte_mbuf *cookie = NULL; + int32_t idx = 0; + + if (vq == NULL) + return NULL; + + for (idx = 0; idx < vq->vq_nentries; idx++) { + cookie = vq->vq_descx[idx].cookie; + if (cookie != NULL) { + vq->vq_descx[idx].cookie = NULL; + return cookie; + } + } + return NULL; +} + +static int32_t zxdh_release_channel(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint16_t nr_vq = hw->queue_num; + uint32_t var = 0; + uint32_t addr = 0; + uint32_t widx = 0; + uint32_t bidx = 0; + uint16_t pch = 0; + uint16_t lch = 0; + uint16_t timeout = 0; + + while ((timeout++) < ZXDH_ACQUIRE_CHANNEL_NUM_MAX) { + if (zxdh_acquire_lock(hw) != 0) { + PMD_INIT_LOG(ERR, + "Could not acquire lock to release channel, timeout %d", timeout); + continue; + } + break; + } + + if (timeout >= ZXDH_ACQUIRE_CHANNEL_NUM_MAX) { + PMD_INIT_LOG(ERR, "Acquire lock timeout"); + return -1; + } + + for (lch = 0; lch < nr_vq; lch++) { + if (hw->channel_context[lch].valid == 0) { + PMD_INIT_LOG(DEBUG, "Logic channel %d does not need to 
release", lch); + continue; + } + + pch = hw->channel_context[lch].ph_chno; + widx = pch / 32; + bidx = pch % 32; + + addr = ZXDH_QUERES_SHARE_BASE + (widx * sizeof(uint32_t)); + var = zxdh_read_bar_reg(dev, ZXDH_BAR0_INDEX, addr); + var &= ~(1 << bidx); + zxdh_write_bar_reg(dev, ZXDH_BAR0_INDEX, addr, var); + + hw->channel_context[lch].valid = 0; + hw->channel_context[lch].ph_chno = 0; + } + + zxdh_release_lock(hw); + + return 0; +} + +int32_t zxdh_get_queue_type(uint16_t vtpci_queue_idx) +{ + if (vtpci_queue_idx % 2 == 0) + return ZXDH_VTNET_RQ; + else + return ZXDH_VTNET_TQ; +} + +int32_t zxdh_free_queues(struct rte_eth_dev *dev) +{ + struct zxdh_hw *hw = dev->data->dev_private; + uint16_t nr_vq = hw->queue_num; + struct virtqueue *vq = NULL; + int32_t queue_type = 0; + uint16_t i = 0; + + if (hw->vqs == NULL) + return 0; + + if (zxdh_release_channel(dev) < 0) { + PMD_INIT_LOG(ERR, "Failed to clear coi table"); + return -1; + } + + for (i = 0; i < nr_vq; i++) { + vq = hw->vqs[i]; + if (vq == NULL) + continue; + + VTPCI_OPS(hw)->del_queue(hw, vq); + queue_type = zxdh_get_queue_type(i); + if (queue_type == ZXDH_VTNET_RQ) { + rte_free(vq->sw_ring); + rte_memzone_free(vq->rxq.mz); + } else if (queue_type == ZXDH_VTNET_TQ) { + rte_memzone_free(vq->txq.mz); + rte_memzone_free(vq->txq.virtio_net_hdr_mz); + } + + rte_free(vq); + hw->vqs[i] = NULL; + PMD_INIT_LOG(DEBUG, "Release to queue %d success!", i); + } + + rte_free(hw->vqs); + hw->vqs = NULL; + + return 0; +} diff --git a/drivers/net/zxdh/zxdh_queue.h b/drivers/net/zxdh/zxdh_queue.h index 7b48f4884b..4038541286 100644 --- a/drivers/net/zxdh/zxdh_queue.h +++ b/drivers/net/zxdh/zxdh_queue.h @@ -11,11 +11,30 @@ #include "zxdh_ethdev.h" #include "zxdh_rxtx.h" +#include "zxdh_pci.h" #ifdef __cplusplus extern "C" { #endif +enum { ZXDH_VTNET_RQ = 0, ZXDH_VTNET_TQ = 1 }; + +#define ZXDH_VIRTQUEUE_MAX_NAME_SZ 32 +#define ZXDH_RQ_QUEUE_IDX 0 +#define ZXDH_TQ_QUEUE_IDX 1 +#define ZXDH_MAX_TX_INDIRECT 8 + +/* This marks a buffer as write-only (otherwise read-only). */ +#define ZXDH_VRING_DESC_F_WRITE 2 +/* This flag means the descriptor was made available by the driver */ +#define ZXDH_VRING_PACKED_DESC_F_AVAIL (1 << (7)) + +#define ZXDH_RING_EVENT_FLAGS_ENABLE 0x0 +#define ZXDH_RING_EVENT_FLAGS_DISABLE 0x1 +#define ZXDH_RING_EVENT_FLAGS_DESC 0x2 + +#define ZXDH_VQ_RING_DESC_CHAIN_END 32768 + /** ring descriptors: 16 bytes. * These can chain together via "next". **/ @@ -26,6 +45,19 @@ struct vring_desc { uint16_t next; /* We chain unused descriptors via this. */ }; +struct vring_used_elem { + /* Index of start of used descriptor chain. */ + uint32_t id; + /* Total length of the descriptor chain which was written to. */ + uint32_t len; +}; + +struct vring_used { + uint16_t flags; + uint16_t idx; + struct vring_used_elem ring[0]; +}; + struct vring_avail { uint16_t flags; uint16_t idx; @@ -91,7 +123,7 @@ struct virtqueue { /** * Head of the free chain in the descriptor table. If * there are no free descriptors, this will be set to - * VQ_RING_DESC_CHAIN_END. + * ZXDH_VQ_RING_DESC_CHAIN_END. 
**/ uint16_t vq_desc_head_idx; uint16_t vq_desc_tail_idx; @@ -102,6 +134,147 @@ struct virtqueue { struct vq_desc_extra vq_descx[0]; }; +struct zxdh_type_hdr { + uint8_t port; /* bit[0:1] 00-np 01-DRS 10-DTP */ + uint8_t pd_len; + uint8_t num_buffers; + uint8_t reserved; +} __rte_packed; /* 4B */ + +struct zxdh_pi_hdr { + uint8_t pi_len; + uint8_t pkt_type; + uint16_t vlan_id; + uint32_t ipv6_extend; + uint16_t l3_offset; + uint16_t l4_offset; + uint8_t phy_port; + uint8_t pkt_flag_hi8; + uint16_t pkt_flag_lw16; + union { + struct { + uint64_t sa_idx; + uint8_t reserved_8[8]; + } dl; + struct { + uint32_t lro_flag; + uint32_t lro_mss; + uint16_t err_code; + uint16_t pm_id; + uint16_t pkt_len; + uint8_t reserved[2]; + } ul; + }; +} __rte_packed; /* 32B */ + +struct zxdh_pd_hdr_dl { + uint32_t ol_flag; + uint8_t tag_idx; + uint8_t tag_data; + uint16_t dst_vfid; + uint32_t svlan_insert; + uint32_t cvlan_insert; +} __rte_packed; /* 16B */ + +struct zxdh_net_hdr_dl { + struct zxdh_type_hdr type_hdr; /* 4B */ + struct zxdh_pi_hdr pi_hdr; /* 32B */ + struct zxdh_pd_hdr_dl pd_hdr; /* 16B */ +} __rte_packed; + +struct zxdh_pd_hdr_ul { + uint32_t pkt_flag; + uint32_t rss_hash; + uint32_t fd; + uint32_t striped_vlan_tci; + /* ovs */ + uint8_t tag_idx; + uint8_t tag_data; + uint16_t src_vfid; + /* */ + uint16_t pkt_type_out; + uint16_t pkt_type_in; +} __rte_packed; /* 24B */ + +struct zxdh_net_hdr_ul { + struct zxdh_type_hdr type_hdr; /* 4B */ + struct zxdh_pi_hdr pi_hdr; /* 32B */ + struct zxdh_pd_hdr_ul pd_hdr; /* 24B */ +} __rte_packed; /* 60B */ + +struct zxdh_tx_region { + struct zxdh_net_hdr_dl tx_hdr; + union { + struct vring_desc tx_indir[ZXDH_MAX_TX_INDIRECT]; + struct vring_packed_desc tx_packed_indir[ZXDH_MAX_TX_INDIRECT]; + } __rte_packed; +}; + +static inline size_t vring_size(struct zxdh_hw *hw, uint32_t num, unsigned long align) +{ + size_t size; + + if (vtpci_packed_queue(hw)) { + size = num * sizeof(struct vring_packed_desc); + size += sizeof(struct vring_packed_desc_event); + size = RTE_ALIGN_CEIL(size, align); + size += sizeof(struct vring_packed_desc_event); + return size; + } + + size = num * sizeof(struct vring_desc); + size += sizeof(struct vring_avail) + (num * sizeof(uint16_t)); + size = RTE_ALIGN_CEIL(size, align); + size += sizeof(struct vring_used) + (num * sizeof(struct vring_used_elem)); + return size; +} + +static inline void vring_init_packed(struct vring_packed *vr, uint8_t *p, + unsigned long align, uint32_t num) +{ + vr->num = num; + vr->desc = (struct vring_packed_desc *)p; + vr->driver = (struct vring_packed_desc_event *)(p + + vr->num * sizeof(struct vring_packed_desc)); + vr->device = (struct vring_packed_desc_event *)RTE_ALIGN_CEIL(((uintptr_t)vr->driver + + sizeof(struct vring_packed_desc_event)), align); +} + +static inline void vring_desc_init_packed(struct virtqueue *vq, int32_t n) +{ + int32_t i = 0; + + for (i = 0; i < n - 1; i++) { + vq->vq_packed.ring.desc[i].id = i; + vq->vq_descx[i].next = i + 1; + } + vq->vq_packed.ring.desc[i].id = i; + vq->vq_descx[i].next = ZXDH_VQ_RING_DESC_CHAIN_END; +} + +static inline void vring_desc_init_indirect_packed(struct vring_packed_desc *dp, int32_t n) +{ + int32_t i = 0; + + for (i = 0; i < n; i++) { + dp[i].id = (uint16_t)i; + dp[i].flags = ZXDH_VRING_DESC_F_WRITE; + } +} + +static inline void virtqueue_disable_intr(struct virtqueue *vq) +{ + if (vq->vq_packed.event_flags_shadow != ZXDH_RING_EVENT_FLAGS_DISABLE) { + vq->vq_packed.event_flags_shadow = ZXDH_RING_EVENT_FLAGS_DISABLE; + 
vq->vq_packed.ring.driver->desc_event_flags = vq->vq_packed.event_flags_shadow; + } +} + +struct rte_mbuf *zxdh_virtqueue_detach_unused(struct virtqueue *vq); +int32_t zxdh_free_queues(struct rte_eth_dev *dev); +int32_t zxdh_get_queue_type(uint16_t vtpci_queue_idx); + + #ifdef __cplusplus } #endif -- 2.27.0 [-- Attachment #1.1.2: Type: text/html , Size: 91442 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
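As a usage note for the configure op above: it is reached through the standard rte_eth_dev_configure() call, and zxdh_dev_configure() rejects unequal queue counts because each rx queue is paired with a tx queue on one channel. A minimal sketch follows (plain ethdev API; port_id and the queue count of 4 are illustrative):

    struct rte_eth_conf conf = {
            .rxmode = { .mq_mode = RTE_ETH_MQ_RX_NONE },
            .txmode = { .mq_mode = RTE_ETH_MQ_TX_NONE },
    };

    /* nb_rx_queues must equal nb_tx_queues for this PMD */
    int ret = rte_eth_dev_configure(port_id, 4, 4, &conf);
    if (ret < 0)
            rte_exit(EXIT_FAILURE, "dev_configure failed: %d\n", ret);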
* Re: [v7,9/9] net/zxdh: add zxdh dev configure ops 2024-10-22 12:20 ` [PATCH v7 9/9] net/zxdh: add zxdh dev configure ops Junlong Wang @ 2024-10-24 11:31 ` Junlong Wang 2024-10-25 9:48 ` Junlong Wang ` (3 subsequent siblings) 4 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-10-24 11:31 UTC (permalink / raw) To: dev; +Cc: thomas, ferruh.yigit, stephen, wang.yong19 [-- Attachment #1.1.1: Type: text/plain, Size: 100 bytes --] If you have time, I hope you can check whether the zxdh driver still needs to be modified. Best regards! [-- Attachment #1.1.2: Type: text/html , Size: 224 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
* Re: [v7,9/9] net/zxdh: add zxdh dev configure ops 2024-10-22 12:20 ` [PATCH v7 9/9] net/zxdh: add zxdh dev configure ops Junlong Wang 2024-10-24 11:31 ` [v7,9/9] " Junlong Wang @ 2024-10-25 9:48 ` Junlong Wang 2024-10-26 2:32 ` Junlong Wang ` (2 subsequent siblings) 4 siblings, 0 replies; 225+ messages in thread From: Junlong Wang @ 2024-10-25 9:48 UTC (permalink / raw) To: thomas, ferruh.yigit; +Cc: dev, stephen, wang.yong19 [-- Attachment #1.1.1: Type: text/plain, Size: 123 bytes --] Hi maintainers, If you have time, I hope you can check whether the zxdh driver still needs to be modified. Best regards! [-- Attachment #1.1.2: Type: text/html , Size: 257 bytes --] ^ permalink raw reply [flat|nested] 225+ messages in thread
* Re: [v7,9/9] net/zxdh: add zxdh dev configure ops
  2024-10-22 12:20 ` [PATCH v7 9/9] net/zxdh: add zxdh dev configure ops Junlong Wang
  2024-10-24 11:31 ` [v7,9/9] " Junlong Wang
  2024-10-25  9:48 ` Junlong Wang
@ 2024-10-26  2:32 ` Junlong Wang
  2024-10-27 16:40 ` [PATCH v7 9/9] " Stephen Hemminger
  2024-10-27 16:58 ` Stephen Hemminger
  4 siblings, 0 replies; 225+ messages in thread
From: Junlong Wang @ 2024-10-26 2:32 UTC (permalink / raw)
  To: ferruh.yigit; +Cc: dev, thomas, stephen, wang.yong19

[-- Attachment #1.1.1: Type: text/plain, Size: 416 bytes --]

Hi Ferruh,

I hope this message finds you well. I have made the revisions according
to your review comments, and the final version was submitted on Oct 22nd.
Since then, I have not received any feedback from the community. I would
appreciate any suggestions for further modification, or a confirmation
that the submitted driver is acceptable as it stands.

Thank you for your time and assistance.

Thanks!

[-- Attachment #1.1.2: Type: text/html, Size: 806 bytes --]

^ permalink raw reply	[flat|nested] 225+ messages in thread
* Re: [PATCH v7 9/9] net/zxdh: add zxdh dev configure ops
  2024-10-22 12:20 ` [PATCH v7 9/9] net/zxdh: add zxdh dev configure ops Junlong Wang
  ` (2 preceding siblings ...)
  2024-10-26  2:32 ` Junlong Wang
@ 2024-10-27 16:40 ` Stephen Hemminger
  2024-10-27 17:03   ` Stephen Hemminger
  2024-10-27 16:58 ` Stephen Hemminger
  4 siblings, 1 reply; 225+ messages in thread
From: Stephen Hemminger @ 2024-10-27 16:40 UTC (permalink / raw)
  To: Junlong Wang; +Cc: dev, thomas, ferruh.yigit, wang.yong19

On Tue, 22 Oct 2024 20:20:42 +0800
Junlong Wang <wang.junlong1@zte.com.cn> wrote:

> +int32_t zxdh_acquire_lock(struct zxdh_hw *hw)
> +{
> +	uint32_t var = zxdh_read_comm_reg((uint64_t)hw->common_cfg, ZXDH_VF_LOCK_REG);
> +
> +	/* check whether lock is used */
> +	if (!(var & ZXDH_VF_LOCK_ENABLE_MASK))
> +		return -1;
> +
> +	return 0;
> +}
> +
> +int32_t zxdh_release_lock(struct zxdh_hw *hw)
> +{
> +	uint32_t var = zxdh_read_comm_reg((uint64_t)hw->common_cfg, ZXDH_VF_LOCK_REG);
> +
> +	if (var & ZXDH_VF_LOCK_ENABLE_MASK) {
> +		var &= ~ZXDH_VF_LOCK_ENABLE_MASK;
> +		zxdh_write_comm_reg((uint64_t)hw->common_cfg, ZXDH_VF_LOCK_REG, var);
> +		return 0;
> +	}
> +
> +	return -1;
> +}
> +

It is your driver, so you are free to name functions as appropriate.

But it would make more sense to make the hardware lock follow the pattern
of existing spinlocks, etc.

^ permalink raw reply	[flat|nested] 225+ messages in thread
* Re: [PATCH v7 9/9] net/zxdh: add zxdh dev configure ops
  2024-10-27 16:40 ` [PATCH v7 9/9] " Stephen Hemminger
@ 2024-10-27 17:03 ` Stephen Hemminger
  0 siblings, 0 replies; 225+ messages in thread
From: Stephen Hemminger @ 2024-10-27 17:03 UTC (permalink / raw)
  To: Junlong Wang; +Cc: dev, thomas, ferruh.yigit, wang.yong19

On Sun, 27 Oct 2024 09:40:48 -0700
Stephen Hemminger <stephen@networkplumber.org> wrote:

> On Tue, 22 Oct 2024 20:20:42 +0800
> Junlong Wang <wang.junlong1@zte.com.cn> wrote:
>
> > +int32_t zxdh_acquire_lock(struct zxdh_hw *hw)
> > +{
> > +	uint32_t var = zxdh_read_comm_reg((uint64_t)hw->common_cfg, ZXDH_VF_LOCK_REG);
> > +
> > +	/* check whether lock is used */
> > +	if (!(var & ZXDH_VF_LOCK_ENABLE_MASK))
> > +		return -1;
> > +
> > +	return 0;
> > +}
> > +
> > +int32_t zxdh_release_lock(struct zxdh_hw *hw)
> > +{
> > +	uint32_t var = zxdh_read_comm_reg((uint64_t)hw->common_cfg, ZXDH_VF_LOCK_REG);
> > +
> > +	if (var & ZXDH_VF_LOCK_ENABLE_MASK) {
> > +		var &= ~ZXDH_VF_LOCK_ENABLE_MASK;
> > +		zxdh_write_comm_reg((uint64_t)hw->common_cfg, ZXDH_VF_LOCK_REG, var);
> > +		return 0;
> > +	}
> > +
> > +	return -1;
> > +}
> > +
>
> It is your driver, so you are free to name functions as appropriate.
>
> But it would make more sense to make the hardware lock follow the pattern
> of existing spinlocks, etc.

I am suggesting:
	static bool zxdh_try_lock(hw);
	void zxdh_release_lock(hw);

also, move the loop into a new function, something like:
	int zxdh_timedlock(hw, uint32_t us);

^ permalink raw reply	[flat|nested] 225+ messages in thread
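To make the suggestion concrete, one possible shape for that refactor is
sketched below. This is illustrative only, not the code that was merged: the
register helpers and the ZXDH_VF_LOCK_REG/ZXDH_VF_LOCK_ENABLE_MASK names come
from the patch, while the 100 us poll step, the -EBUSY return value, and the
header choices are assumptions.

#include <stdbool.h>
#include <errno.h>

#include <rte_cycles.h>

#include "zxdh_ethdev.h"
#include "zxdh_common.h"

/* one attempt; reading the register with the enable bit set is taken
 * to mean the hardware granted the lock to this reader */
static bool zxdh_try_lock(struct zxdh_hw *hw)
{
	uint32_t var = zxdh_read_comm_reg((uint64_t)hw->common_cfg, ZXDH_VF_LOCK_REG);

	return (var & ZXDH_VF_LOCK_ENABLE_MASK) != 0;
}

static void zxdh_release_lock(struct zxdh_hw *hw)
{
	uint32_t var = zxdh_read_comm_reg((uint64_t)hw->common_cfg, ZXDH_VF_LOCK_REG);

	var &= ~ZXDH_VF_LOCK_ENABLE_MASK;
	zxdh_write_comm_reg((uint64_t)hw->common_cfg, ZXDH_VF_LOCK_REG, var);
}

/* poll for the lock for at most @us microseconds; 0 on success */
static int zxdh_timedlock(struct zxdh_hw *hw, uint32_t us)
{
	const uint32_t step = 100; /* assumed poll interval in microseconds */

	for (;;) {
		if (zxdh_try_lock(hw))
			return 0;
		if (us < step)
			return -EBUSY;
		rte_delay_us_block(step);
		us -= step;
	}
}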
* Re: [PATCH v7 9/9] net/zxdh: add zxdh dev configure ops
  2024-10-22 12:20 ` [PATCH v7 9/9] net/zxdh: add zxdh dev configure ops Junlong Wang
  ` (3 preceding siblings ...)
  2024-10-27 16:40 ` [PATCH v7 9/9] " Stephen Hemminger
@ 2024-10-27 16:58 ` Stephen Hemminger
  4 siblings, 0 replies; 225+ messages in thread
From: Stephen Hemminger @ 2024-10-27 16:58 UTC (permalink / raw)
  To: Junlong Wang; +Cc: dev, thomas, ferruh.yigit, wang.yong19

On Tue, 22 Oct 2024 20:20:42 +0800
Junlong Wang <wang.junlong1@zte.com.cn> wrote:

> provided zxdh dev configure ops for queue
> check,reset,alloc resources,etc.
>
> Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn>
> ---
>  drivers/net/zxdh/meson.build   |   1 +
>  drivers/net/zxdh/zxdh_common.c | 118 +++++++++
>  drivers/net/zxdh/zxdh_common.h |  12 +
>  drivers/net/zxdh/zxdh_ethdev.c | 457 +++++++++++++++++++++++++++++
>  drivers/net/zxdh/zxdh_ethdev.h |  18 +-
>  drivers/net/zxdh/zxdh_pci.c    |  97 +++++++
>  drivers/net/zxdh/zxdh_pci.h    |  29 +++
>  drivers/net/zxdh/zxdh_queue.c  | 131 ++++++++++
>  drivers/net/zxdh/zxdh_queue.h  | 175 ++++++++++++-
>  9 files changed, 1035 insertions(+), 3 deletions(-)
>  create mode 100644 drivers/net/zxdh/zxdh_queue.c

In the future, DPDK wants to re-enable the GCC warning for taking the
address of a packed member. When I enable that (in config/meson.build)
this shows up.

[1478/3078] Compiling C object drivers/libtmp_rte_net_zxdh.a.p/net_zxdh_zxdh_ethdev.c.o
../drivers/net/zxdh/zxdh_ethdev.c: In function ‘zxdh_init_vring’:
../drivers/net/zxdh/zxdh_ethdev.c:541:27: warning: taking address of packed member of ‘struct <anonymous>’ may result in an unaligned pointer value [-Waddress-of-packed-member]
  541 |         vring_init_packed(&vq->vq_packed.ring, ring_mem, ZXDH_PCI_VRING_ALIGN, size);
      |                           ^~~~~~~~~~~~~~~~~~~
../drivers/net/zxdh/zxdh_ethdev.c: In function ‘zxdh_init_queue’:
../drivers/net/zxdh/zxdh_ethdev.c:682:62: warning: taking address of packed member of ‘union <anonymous>’ may result in an unaligned pointer value [-Waddress-of-packed-member]
  682 |         struct vring_packed_desc *start_dp = txr[i].tx_packed_indir;
      |                                              ^~~
[1479/3078] Compiling C object drivers/libtmp_rte_net_virtio.a.p/net_virtio_virtio_ethdev

^ permalink raw reply	[flat|nested] 225+ messages in thread
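For background, the usual way to satisfy -Waddress-of-packed-member is to
avoid forming a typed pointer into a packed object at all: copy the member
out to a naturally aligned local with memcpy() (whose void * parameters carry
no alignment requirement), operate on the copy, and copy it back. A minimal
sketch with a hypothetical packed header, not the fix that was eventually
applied to the driver:

#include <string.h>
#include <stdint.h>

struct pkt_hdr {
	uint8_t  type;
	uint32_t seq; /* misaligned inside the packed layout */
} __attribute__((packed));

static void bump_seq(struct pkt_hdr *h)
{
	uint32_t seq;

	memcpy(&seq, &h->seq, sizeof(seq)); /* alignment-safe read */
	seq++;
	memcpy(&h->seq, &seq, sizeof(seq)); /* alignment-safe write */
}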
* Re: [PATCH v4] net/zxdh: Provided zxdh basic init 2024-09-10 12:00 [PATCH v4] net/zxdh: Provided zxdh basic init Junlong Wang ` (4 preceding siblings ...) 2024-10-15 5:43 ` [PATCH v5 0/9] net/zxdh: introduce net zxdh driver Junlong Wang @ 2024-12-19 22:38 ` Stephen Hemminger 2024-12-20 1:47 ` Junlong Wang 6 siblings, 0 replies; 225+ messages in thread From: Stephen Hemminger @ 2024-12-19 22:38 UTC (permalink / raw) To: Junlong Wang; +Cc: ferruh.yigit, dev, wang.yong19 On Tue, 10 Sep 2024 20:00:20 +0800 Junlong Wang <wang.junlong1@zte.com.cn> wrote: > provided zxdh initialization of zxdh PMD driver. > include msg channel, np init and etc. > > Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn> > --- > V4: Resolve compilation issues > V3: Resolve compilation issues > V2: Resolve compilation issues and modify doc(zxdh.ini zdh.rst) > V1: Provide zxdh basic init and open source NPSDK lib > --- Overall this looks good, one test checklist item for me was to build with Gcc 14 and analyzer option. This finds bugs but can generate false positives. The output is quite verbose. It complains about this which may or may not be a real problem. If memcpy() is used instead of rte_memcpy() then the problem goes away. The issue is that inlined version rte_memcpy() will reference past the arguments as an internal optimization for small values. [1564/3222] Compiling C object drivers/libtmp_rte_net_zxdh.a.p/net_zxdh_zxdh_common.c.o In file included from ../lib/mempool/rte_mempool.h:50, from ../lib/mbuf/rte_mbuf.h:38, from ../lib/net/rte_ether.h:20, from ../lib/ethdev/rte_eth_ctrl.h:10, from ../lib/ethdev/rte_ethdev.h:1472, from ../lib/ethdev/ethdev_driver.h:21, from ../drivers/net/zxdh/zxdh_common.c:8: In function ‘rte_mov15_or_less’, inlined from ‘rte_memcpy_generic’ at ../lib/eal/x86/include/rte_memcpy.h:395:10, inlined from ‘rte_memcpy’ at ../lib/eal/x86/include/rte_memcpy.h:757:10, inlined from ‘zxdh_get_res_info’ at ../drivers/net/zxdh/zxdh_common.c:231:2: ../lib/eal/x86/include/rte_memcpy.h:82:55: warning: stack-based buffer overflow [CWE-121] [-Wanalyzer-out-of-bounds] 82 | ((struct rte_uint64_alias *)dst)->val = | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^ 83 | ((const struct rte_uint64_alias *)src)->val; | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ‘zxdh_panelid_get’: events 1-3 | |../drivers/net/zxdh/zxdh_common.c:250:1: | 239 | uint8_t reps = 0; | | ~~~~ | | | | | (2) capacity: 1 byte |...... | 250 | zxdh_panelid_get(struct rte_eth_dev *dev, uint8_t *panelid) | | ^~~~~~~~~~~~~~~~ | | | | | (1) entry to ‘zxdh_panelid_get’ |...... | 255 | int32_t ret = zxdh_get_res_panel_id(¶m, panelid); | | ~ | | | | | (3) inlined call to ‘zxdh_get_res_panel_id’ from ‘zxdh_panelid_get’ | +--> ‘zxdh_get_res_panel_id’: event 4 | | 242 | if (zxdh_get_res_info(in, ZXDH_TBL_FIELD_PNLID, &reps, &reps_len) != ZXDH_BAR_MSG_OK) | | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ | | | | | (4) calling ‘zxdh_get_res_info’ from ‘zxdh_panelid_get’ | ‘zxdh_get_res_info’: events 5-12 | | 186 | zxdh_get_res_info(struct zxdh_res_para *dev, uint8_t field, uint8_t *res, uint16_t *len) | | ^~~~~~~~~~~~~~~~~ | | | | | (5) entry to ‘zxdh_get_res_info’ |...... | 192 | if (!res || !dev) | | ~ | | | | | (6) following ‘false’ branch... |...... | 195 | struct zxdh_tbl_msg_header tbl_msg = { | | ~~~~~~~ | | | | | (7) ...to here |...... | 217 | if (ret != ZXDH_BAR_MSG_OK) { | | ~ | | | | | (8) following ‘false’ branch (when ‘ret == 0’)... |...... 
| 225 | if (tbl_reps->check != ZXDH_TBL_MSG_PRO_SUCCESS) { | | ~~~~~~~~~~~~~~~~ | | | | | | | (9) ...to here | | (10) following ‘false’ branch... |...... | 230 | *len = tbl_reps->len; | | ~~~~~~~~~~~~~ | | | | | (11) ...to here | 231 | rte_memcpy(res, (recv_buf + ZXDH_REPS_HEADER_OFFSET + | | ~ | | | | | (12) inlined call to ‘rte_memcpy’ from ‘zxdh_get_res_info’ | +--> ‘rte_memcpy’: events 13-14 | |../lib/eal/x86/include/rte_memcpy.h:754:12: | 754 | if (!(((uintptr_t)dst | (uintptr_t)src) & ALIGNMENT_MASK)) | | ^ | | | | | (13) following ‘false’ branch... |...... | 757 | return rte_memcpy_generic(dst, src, n); | | ~ | | | | | (14) inlined call to ‘rte_memcpy_generic’ from ‘rte_memcpy’ | +--> ‘rte_memcpy_generic’: events 15-17 | | 394 | if (n < 16) { | | ^ | | | | | (15) ...to here | | (16) following ‘true’ branch... | 395 | return rte_mov15_or_less(dst, src, n); | | ~ | | | | | (17) inlined call to ‘rte_mov15_or_less’ from ‘rte_memcpy_generic’ | +--> ‘rte_mov15_or_less’: events 18-21 | | 81 | if (n & 8) { | | ^ | | | | | (18) ...to here | | (19) following ‘true’ branch... | 82 | ((struct rte_uint64_alias *)dst)->val = | | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ | | | | | (21) out-of-bounds write from byte 1 till byte 7 but ‘reps’ ends at byte 1 | 83 | ((const struct rte_uint64_alias *)src)->val; | | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ | | | | | (20) ...to here | ../lib/eal/x86/include/rte_memcpy.h:82:55: note: write of 7 bytes to beyond the end of ‘reps’ 82 | ((struct rte_uint64_alias *)dst)->val = | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^ 83 | ((const struct rte_uint64_alias *)src)->val; | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ┌──────────────────────────────────────────────────────────────────────┐ │ write of ‘uint64_t’ (8 bytes) │ └──────────────────────────────────────────────────────────────────────┘ │ │ │ │ v v ┌────────────────────────┐┌────────────────────────────────────────────┐ │‘reps’ (type: ‘uint8_t’)││ after valid range │ └────────────────────────┘└────────────────────────────────────────────┘ ├───────────┬────────────┤├─────────────────────┬──────────────────────┤ │ │ ╭────────┴───────╮ ╭───────────┴──────────╮ │capacity: 1 byte│ │⚠️ overflow of 7 bytes│ ╰────────────────╯ ╰──────────────────────╯ In function ‘rte_mov15_or_less’, inlined from ‘rte_memcpy_aligned’ at ../lib/eal/x86/include/rte_memcpy.h:706:10, inlined from ‘rte_memcpy’ at ../lib/eal/x86/include/rte_memcpy.h:755:10, inlined from ‘zxdh_get_res_info’ at ../drivers/net/zxdh/zxdh_common.c:231:2: ../lib/eal/x86/include/rte_memcpy.h:82:55: warning: stack-based buffer overflow [CWE-121] [-Wanalyzer-out-of-bounds] 82 | ((struct rte_uint64_alias *)dst)->val = | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^ 83 | ((const struct rte_uint64_alias *)src)->val; | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ‘zxdh_hashidx_get’: events 1-3 | |../drivers/net/zxdh/zxdh_common.c:273:1: | 262 | uint8_t reps = 0; | | ~~~~ | | | | | (2) capacity: 1 byte |...... | 273 | zxdh_hashidx_get(struct rte_eth_dev *dev, uint8_t *hash_idx) | | ^~~~~~~~~~~~~~~~ | | | | | (1) entry to ‘zxdh_hashidx_get’ |...... 
| 278 | int32_t ret = zxdh_get_res_hash_id(¶m, hash_idx); | | ~ | | | | | (3) inlined call to ‘zxdh_get_res_hash_id’ from ‘zxdh_hashidx_get’ | +--> ‘zxdh_get_res_hash_id’: event 4 | | 265 | if (zxdh_get_res_info(in, ZXDH_TBL_FIELD_HASHID, &reps, &reps_len) != ZXDH_BAR_MSG_OK) | | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ | | | | | (4) calling ‘zxdh_get_res_info’ from ‘zxdh_hashidx_get’ | ‘zxdh_get_res_info’: events 5-12 | | 186 | zxdh_get_res_info(struct zxdh_res_para *dev, uint8_t field, uint8_t *res, uint16_t *len) | | ^~~~~~~~~~~~~~~~~ | | | | | (5) entry to ‘zxdh_get_res_info’ |...... | 192 | if (!res || !dev) | | ~ | | | | | (6) following ‘false’ branch... |...... | 195 | struct zxdh_tbl_msg_header tbl_msg = { | | ~~~~~~~ | | | | | (7) ...to here |...... | 217 | if (ret != ZXDH_BAR_MSG_OK) { | | ~ | | | | | (8) following ‘false’ branch (when ‘ret == 0’)... |...... | 225 | if (tbl_reps->check != ZXDH_TBL_MSG_PRO_SUCCESS) { | | ~~~~~~~~~~~~~~~~ | | | | | | | (9) ...to here | | (10) following ‘false’ branch... |...... | 230 | *len = tbl_reps->len; | | ~~~~~~~~~~~~~ | | | | | (11) ...to here | 231 | rte_memcpy(res, (recv_buf + ZXDH_REPS_HEADER_OFFSET + | | ~ | | | | | (12) inlined call to ‘rte_memcpy’ from ‘zxdh_get_res_info’ | +--> ‘rte_memcpy’: events 13-14 | |../lib/eal/x86/include/rte_memcpy.h:754:12: | 754 | if (!(((uintptr_t)dst | (uintptr_t)src) & ALIGNMENT_MASK)) | | ^ | | | | | (13) following ‘true’ branch... | 755 | return rte_memcpy_aligned(dst, src, n); | | ~ | | | | | (14) inlined call to ‘rte_memcpy_aligned’ from ‘rte_memcpy’ | +--> ‘rte_memcpy_aligned’: events 15-17 | | 705 | if (n < 16) { | | ^ | | | | | (15) ...to here | | (16) following ‘true’ branch... | 706 | return rte_mov15_or_less(dst, src, n); | | ~ | | | | | (17) inlined call to ‘rte_mov15_or_less’ from ‘rte_memcpy_aligned’ | +--> ‘rte_mov15_or_less’: events 18-21 | | 81 | if (n & 8) { | | ^ | | | | | (18) ...to here | | (19) following ‘true’ branch... | 82 | ((struct rte_uint64_alias *)dst)->val = | | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ | | | | | (21) out-of-bounds write from byte 1 till byte 7 but ‘reps’ ends at byte 1 | 83 | ((const struct rte_uint64_alias *)src)->val; | | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ | | | | | (20) ...to here | ../lib/eal/x86/include/rte_memcpy.h:82:55: note: write of 7 bytes to beyond the end of ‘reps’ 82 | ((struct rte_uint64_alias *)dst)->val = | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^ 83 | ((const struct rte_uint64_alias *)src)->val; | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ┌──────────────────────────────────────────────────────────────────────┐ │ write of ‘uint64_t’ (8 bytes) │ └──────────────────────────────────────────────────────────────────────┘ │ │ │ │ v v ┌────────────────────────┐┌────────────────────────────────────────────┐ │‘reps’ (type: ‘uint8_t’)││ after valid range │ └────────────────────────┘└────────────────────────────────────────────┘ ├───────────┬────────────┤├─────────────────────┬──────────────────────┤ │ │ ╭────────┴───────╮ ╭───────────┴──────────╮ │capacity: 1 byte│ │⚠️ overflow of 7 bytes│ ╰────────────────╯ ╰──────────────────────╯ ../lib/eal/x86/include/rte_memcpy.h:82:55: warning: stack-based buffer overflow [CWE-121] [-Wanalyzer-out-of-bounds] ‘zxdh_panelid_get’: events 1-3 | |../drivers/net/zxdh/zxdh_common.c:250:1: | 239 | uint8_t reps = 0; | | ~~~~ | | | | | (2) capacity: 1 byte |...... 
| 250 | zxdh_panelid_get(struct rte_eth_dev *dev, uint8_t *panelid) | | ^~~~~~~~~~~~~~~~ | | | | | (1) entry to ‘zxdh_panelid_get’ |...... | 255 | int32_t ret = zxdh_get_res_panel_id(¶m, panelid); | | ~ | | | | | (3) inlined call to ‘zxdh_get_res_panel_id’ from ‘zxdh_panelid_get’ | +--> ‘zxdh_get_res_panel_id’: event 4 | | 242 | if (zxdh_get_res_info(in, ZXDH_TBL_FIELD_PNLID, &reps, &reps_len) != ZXDH_BAR_MSG_OK) | | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ | | | | | (4) calling ‘zxdh_get_res_info’ from ‘zxdh_panelid_get’ | ‘zxdh_get_res_info’: events 5-12 | | 186 | zxdh_get_res_info(struct zxdh_res_para *dev, uint8_t field, uint8_t *res, uint16_t *len) | | ^~~~~~~~~~~~~~~~~ | | | | | (5) entry to ‘zxdh_get_res_info’ |...... | 192 | if (!res || !dev) | | ~ | | | | | (6) following ‘false’ branch... |...... | 195 | struct zxdh_tbl_msg_header tbl_msg = { | | ~~~~~~~ | | | | | (7) ...to here |...... | 217 | if (ret != ZXDH_BAR_MSG_OK) { | | ~ | | | | | (8) following ‘false’ branch (when ‘ret == 0’)... |...... | 225 | if (tbl_reps->check != ZXDH_TBL_MSG_PRO_SUCCESS) { | | ~~~~~~~~~~~~~~~~ | | | | | | | (9) ...to here | | (10) following ‘false’ branch... |...... | 230 | *len = tbl_reps->len; | | ~~~~~~~~~~~~~ | | | | | (11) ...to here | 231 | rte_memcpy(res, (recv_buf + ZXDH_REPS_HEADER_OFFSET + | | ~ | | | | | (12) inlined call to ‘rte_memcpy’ from ‘zxdh_get_res_info’ | +--> ‘rte_memcpy’: events 13-14 | |../lib/eal/x86/include/rte_memcpy.h:754:12: | 754 | if (!(((uintptr_t)dst | (uintptr_t)src) & ALIGNMENT_MASK)) | | ^ | | | | | (13) following ‘true’ branch... | 755 | return rte_memcpy_aligned(dst, src, n); | | ~ | | | | | (14) inlined call to ‘rte_memcpy_aligned’ from ‘rte_memcpy’ | +--> ‘rte_memcpy_aligned’: events 15-17 | | 705 | if (n < 16) { | | ^ | | | | | (15) ...to here | | (16) following ‘true’ branch... | 706 | return rte_mov15_or_less(dst, src, n); | | ~ | | | | | (17) inlined call to ‘rte_mov15_or_less’ from ‘rte_memcpy_aligned’ | +--> ‘rte_mov15_or_less’: events 18-21 | | 81 | if (n & 8) { | | ^ | | | | | (18) ...to here | | (19) following ‘true’ branch... 
| 82 | ((struct rte_uint64_alias *)dst)->val = | | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ | | | | | (21) out-of-bounds write from byte 1 till byte 7 but ‘reps’ ends at byte 1 | 83 | ((const struct rte_uint64_alias *)src)->val; | | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ | | | | | (20) ...to here | ../lib/eal/x86/include/rte_memcpy.h:82:55: note: write of 7 bytes to beyond the end of ‘reps’ 82 | ((struct rte_uint64_alias *)dst)->val = | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^ 83 | ((const struct rte_uint64_alias *)src)->val; | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ┌──────────────────────────────────────────────────────────────────────┐ │ write of ‘uint64_t’ (8 bytes) │ └──────────────────────────────────────────────────────────────────────┘ │ │ │ │ v v ┌────────────────────────┐┌────────────────────────────────────────────┐ │‘reps’ (type: ‘uint8_t’)││ after valid range │ └────────────────────────┘└────────────────────────────────────────────┘ ├───────────┬────────────┤├─────────────────────┬──────────────────────┤ │ │ ╭────────┴───────╮ ╭───────────┴──────────╮ │capacity: 1 byte│ │⚠️ overflow of 7 bytes│ ╰────────────────╯ ╰──────────────────────╯ In function ‘rte_mov15_or_less’, inlined from ‘rte_memcpy_generic’ at ../lib/eal/x86/include/rte_memcpy.h:395:10, inlined from ‘rte_memcpy’ at ../lib/eal/x86/include/rte_memcpy.h:757:10, inlined from ‘zxdh_get_res_info’ at ../drivers/net/zxdh/zxdh_common.c:231:2: ../lib/eal/x86/include/rte_memcpy.h:82:55: warning: stack-based buffer overflow [CWE-121] [-Wanalyzer-out-of-bounds] 82 | ((struct rte_uint64_alias *)dst)->val = | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^ 83 | ((const struct rte_uint64_alias *)src)->val; | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ‘zxdh_hashidx_get’: events 1-3 | |../drivers/net/zxdh/zxdh_common.c:273:1: | 262 | uint8_t reps = 0; | | ~~~~ | | | | | (2) capacity: 1 byte |...... | 273 | zxdh_hashidx_get(struct rte_eth_dev *dev, uint8_t *hash_idx) | | ^~~~~~~~~~~~~~~~ | | | | | (1) entry to ‘zxdh_hashidx_get’ |...... | 278 | int32_t ret = zxdh_get_res_hash_id(¶m, hash_idx); | | ~ | | | | | (3) inlined call to ‘zxdh_get_res_hash_id’ from ‘zxdh_hashidx_get’ | +--> ‘zxdh_get_res_hash_id’: event 4 | | 265 | if (zxdh_get_res_info(in, ZXDH_TBL_FIELD_HASHID, &reps, &reps_len) != ZXDH_BAR_MSG_OK) | | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ | | | | | (4) calling ‘zxdh_get_res_info’ from ‘zxdh_hashidx_get’ | ‘zxdh_get_res_info’: events 5-12 | | 186 | zxdh_get_res_info(struct zxdh_res_para *dev, uint8_t field, uint8_t *res, uint16_t *len) | | ^~~~~~~~~~~~~~~~~ | | | | | (5) entry to ‘zxdh_get_res_info’ |...... | 192 | if (!res || !dev) | | ~ | | | | | (6) following ‘false’ branch... |...... | 195 | struct zxdh_tbl_msg_header tbl_msg = { | | ~~~~~~~ | | | | | (7) ...to here |...... | 217 | if (ret != ZXDH_BAR_MSG_OK) { | | ~ | | | | | (8) following ‘false’ branch (when ‘ret == 0’)... |...... | 225 | if (tbl_reps->check != ZXDH_TBL_MSG_PRO_SUCCESS) { | | ~~~~~~~~~~~~~~~~ | | | | | | | (9) ...to here | | (10) following ‘false’ branch... |...... | 230 | *len = tbl_reps->len; | | ~~~~~~~~~~~~~ | | | | | (11) ...to here | 231 | rte_memcpy(res, (recv_buf + ZXDH_REPS_HEADER_OFFSET + | | ~ | | | | | (12) inlined call to ‘rte_memcpy’ from ‘zxdh_get_res_info’ | +--> ‘rte_memcpy’: events 13-14 | |../lib/eal/x86/include/rte_memcpy.h:754:12: | 754 | if (!(((uintptr_t)dst | (uintptr_t)src) & ALIGNMENT_MASK)) | | ^ | | | | | (13) following ‘false’ branch... |...... 
| 757 | return rte_memcpy_generic(dst, src, n); | | ~ | | | | | (14) inlined call to ‘rte_memcpy_generic’ from ‘rte_memcpy’ | +--> ‘rte_memcpy_generic’: events 15-17 | | 394 | if (n < 16) { | | ^ | | | | | (15) ...to here | | (16) following ‘true’ branch... | 395 | return rte_mov15_or_less(dst, src, n); | | ~ | | | | | (17) inlined call to ‘rte_mov15_or_less’ from ‘rte_memcpy_generic’ | +--> ‘rte_mov15_or_less’: events 18-21 | | 81 | if (n & 8) { | | ^ | | | | | (18) ...to here | | (19) following ‘true’ branch... | 82 | ((struct rte_uint64_alias *)dst)->val = | | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ | | | | | (21) out-of-bounds write from byte 1 till byte 7 but ‘reps’ ends at byte 1 | 83 | ((const struct rte_uint64_alias *)src)->val; | | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ | | | | | (20) ...to here | ../lib/eal/x86/include/rte_memcpy.h:82:55: note: write of 7 bytes to beyond the end of ‘reps’ 82 | ((struct rte_uint64_alias *)dst)->val = | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^ 83 | ((const struct rte_uint64_alias *)src)->val; | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ┌──────────────────────────────────────────────────────────────────────┐ │ write of ‘uint64_t’ (8 bytes) │ └──────────────────────────────────────────────────────────────────────┘ │ │ │ │ v v ┌────────────────────────┐┌────────────────────────────────────────────┐ │‘reps’ (type: ‘uint8_t’)││ after valid range │ └────────────────────────┘└────────────────────────────────────────────┘ ├───────────┬────────────┤├─────────────────────┬──────────────────────┤ │ │ ╭────────┴───────╮ ╭───────────┴──────────╮ │capacity: 1 byte│ │⚠️ overflow of 7 bytes│ ╰────────────────╯ ╰──────────────────────╯ ^ permalink raw reply [flat|nested] 225+ messages in thread
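The root cause in the trace above is a reply buffer declared as a single
uint8_t while the inlined rte_memcpy() may touch up to 8 bytes for small
copies. Below is a sketch of the kind of fix this points at, assuming the
helper signatures and include set match the quoted trace; the 32-byte
worst-case reply size and the _fixed suffix are hypothetical. Alternatively,
as noted above, plain memcpy() inside zxdh_get_res_info() never writes past
its length argument.

#include <stdint.h>

#include "zxdh_common.h"
#include "zxdh_msg.h"

static int zxdh_get_res_panel_id_fixed(struct zxdh_res_para *in, uint8_t *panel_id)
{
	uint8_t reps[32] = {0};           /* room for the whole reply, not 1 byte */
	uint16_t reps_len = sizeof(reps);

	/* the helper may copy up to reps_len bytes into reps */
	if (zxdh_get_res_info(in, ZXDH_TBL_FIELD_PNLID, reps, &reps_len) != ZXDH_BAR_MSG_OK)
		return -1;

	*panel_id = reps[0];              /* the panel id itself is one byte */
	return 0;
}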
* Re: [PATCH v4] net/zxdh: Provided zxdh basic init
  2024-09-10 12:00 [PATCH v4] net/zxdh: Provided zxdh basic init Junlong Wang
  ` (5 preceding siblings ...)
  2024-12-19 22:38 ` [PATCH v4] net/zxdh: Provided zxdh basic init Stephen Hemminger
@ 2024-12-20  1:47 ` Junlong Wang
  6 siblings, 0 replies; 225+ messages in thread
From: Junlong Wang @ 2024-12-20 1:47 UTC (permalink / raw)
  To: stephen; +Cc: dev

[-- Attachment #1.1.1: Type: text/plain, Size: 10404 bytes --]

In the latest v4 version I submitted on December 18th, I enabled
-Wanalyzer-out-of-bounds and compiled in a gcc 14.2 environment:

''''''
C compiler for the host machine: cc (gcc 14.2.1 "cc (GCC) 14.2.1 20241104 (Red Hat 14.2.1-6)")
Compiler for C supports arguments -Wanalyzer-out-of-bounds: YES
''''''

This issue did not occur. The patch [PATCH v4] net/zxdh: Provided zxdh
basic init was submitted three months ago, on September 9th, so I am
confused: is this issue also present in the latest v4 submission, and
do we need to solve it?

>Overall this looks good, one test checklist item for me was to build
>with Gcc 14 and analyzer option. This finds bugs but can generate false
>positives. The output is quite verbose.

>It complains about this which may or may not be a real problem.
>If memcpy() is used instead of rte_memcpy() then the problem goes away.
>The issue is that inlined version rte_memcpy() will reference past the arguments
>as an internal optimization for small values.

>[1564/3222] Compiling C object drivers/libtmp_rte_net_zxdh.a.p/net_zxdh_zxdh_common.c.o
>[snip: the full -Wanalyzer-out-of-bounds trace is quoted in Stephen's message above]

[-- Attachment #1.1.2: Type: text/html, Size: 32916 bytes --]

^ permalink raw reply	[flat|nested] 225+ messages in thread
end of thread, other threads:[~2024-12-26 3:48 UTC | newest] Thread overview: 225+ messages (download: mbox.gz / follow: Atom feed) -- links below jump to the message on this page -- 2024-09-10 12:00 [PATCH v4] net/zxdh: Provided zxdh basic init Junlong Wang 2024-09-24 1:35 ` [v4] " Junlong Wang 2024-09-25 22:39 ` [PATCH v4] " Ferruh Yigit 2024-09-26 6:49 ` [v4] " Junlong Wang 2024-10-07 21:43 ` [PATCH v4] " Stephen Hemminger 2024-10-15 5:43 ` [PATCH v5 0/9] net/zxdh: introduce net zxdh driver Junlong Wang 2024-10-15 5:43 ` [PATCH v5 1/9] net/zxdh: add zxdh ethdev pmd driver Junlong Wang 2024-10-15 5:44 ` [PATCH v5 2/9] net/zxdh: add logging implementation Junlong Wang 2024-10-15 5:44 ` [PATCH v5 3/9] net/zxdh: add zxdh device pci init implementation Junlong Wang 2024-10-15 5:44 ` [PATCH v5 4/9] net/zxdh: add msg chan and msg hwlock init Junlong Wang 2024-10-15 5:44 ` [PATCH v5 5/9] net/zxdh: add msg chan enable implementation Junlong Wang 2024-10-15 5:44 ` [PATCH v5 6/9] net/zxdh: add zxdh get device backend infos Junlong Wang 2024-10-15 5:44 ` [PATCH v5 7/9] net/zxdh: add configure zxdh intr implementation Junlong Wang 2024-10-15 5:44 ` [PATCH v5 8/9] net/zxdh: add zxdh dev infos get ops Junlong Wang 2024-10-15 5:44 ` [PATCH v5 9/9] net/zxdh: add zxdh dev configure ops Junlong Wang 2024-10-15 15:37 ` Stephen Hemminger 2024-10-15 15:57 ` Stephen Hemminger 2024-10-16 8:16 ` [PATCH v6 0/9] net/zxdh: introduce net zxdh driver Junlong Wang 2024-10-16 8:16 ` [PATCH v6 1/9] net/zxdh: add zxdh ethdev pmd driver Junlong Wang 2024-10-16 8:18 ` [PATCH v6 2/9] net/zxdh: add logging implementation Junlong Wang 2024-10-16 8:18 ` [PATCH v6 3/9] net/zxdh: add zxdh device pci init implementation Junlong Wang 2024-10-16 8:18 ` [PATCH v6 4/9] net/zxdh: add msg chan and msg hwlock init Junlong Wang 2024-10-16 8:18 ` [PATCH v6 5/9] net/zxdh: add msg chan enable implementation Junlong Wang 2024-10-21 8:50 ` Thomas Monjalon 2024-10-21 10:56 ` Junlong Wang 2024-10-16 8:18 ` [PATCH v6 6/9] net/zxdh: add zxdh get device backend infos Junlong Wang 2024-10-21 8:52 ` Thomas Monjalon 2024-10-16 8:18 ` [PATCH v6 7/9] net/zxdh: add configure zxdh intr implementation Junlong Wang 2024-10-16 8:18 ` [PATCH v6 8/9] net/zxdh: add zxdh dev infos get ops Junlong Wang 2024-10-21 8:54 ` Thomas Monjalon 2024-10-16 8:18 ` [PATCH v6 9/9] net/zxdh: add zxdh dev configure ops Junlong Wang 2024-10-18 5:18 ` [v6,9/9] " Junlong Wang 2024-10-18 6:48 ` David Marchand 2024-10-19 11:17 ` Junlong Wang 2024-10-21 9:03 ` [PATCH v6 1/9] net/zxdh: add zxdh ethdev pmd driver Thomas Monjalon 2024-10-22 12:20 ` [PATCH v7 0/9] net/zxdh: introduce net zxdh driver Junlong Wang 2024-10-22 12:20 ` [PATCH v7 1/9] net/zxdh: add zxdh ethdev pmd driver Junlong Wang 2024-10-30 9:01 ` [PATCH v8 0/9] net/zxdh: introduce net zxdh driver Junlong Wang 2024-10-30 9:01 ` [PATCH v8 1/9] net/zxdh: add zxdh ethdev pmd driver Junlong Wang 2024-11-01 6:21 ` [PATCH v9 0/9] net/zxdh: introduce net zxdh driver Junlong Wang 2024-11-01 6:21 ` [PATCH v9 1/9] net/zxdh: add zxdh ethdev pmd driver Junlong Wang 2024-11-02 0:57 ` Ferruh Yigit 2024-11-04 11:58 ` [PATCH v10 00/10] net/zxdh: introduce net zxdh driver Junlong Wang 2024-11-04 11:58 ` [PATCH v10 01/10] net/zxdh: add zxdh ethdev pmd driver Junlong Wang 2024-11-07 10:32 ` [PATCH v10 00/10] net/zxdh: introduce net zxdh driver Junlong Wang 2024-11-12 0:42 ` Thomas Monjalon 2024-12-06 5:57 ` [PATCH v1 00/15] net/zxdh: updated " Junlong Wang 2024-12-06 5:57 ` [PATCH v1 01/15] net/zxdh: zxdh np init implementation Junlong 
Wang 2024-12-10 5:53 ` [PATCH v2 00/15] net/zxdh: updated net zxdh driver Junlong Wang 2024-12-10 5:53 ` [PATCH v2 01/15] net/zxdh: zxdh np init implementation Junlong Wang 2024-12-11 16:10 ` Stephen Hemminger 2024-12-12 2:06 ` Junlong Wang 2024-12-12 3:35 ` Junlong Wang 2024-12-17 11:41 ` [PATCH v3 00/15] net/zxdh: updated net zxdh driver Junlong Wang 2024-12-17 11:41 ` [PATCH v3 01/15] net/zxdh: zxdh np init implementation Junlong Wang 2024-12-17 11:41 ` [PATCH v3 02/15] net/zxdh: zxdh np uninit implementation Junlong Wang 2024-12-17 11:41 ` [PATCH v3 03/15] net/zxdh: port tables init implementations Junlong Wang 2024-12-17 11:41 ` [PATCH v3 04/15] net/zxdh: port tables unint implementations Junlong Wang 2024-12-17 11:41 ` [PATCH v3 05/15] net/zxdh: rx/tx queue setup and intr enable Junlong Wang 2024-12-17 11:41 ` [PATCH v3 06/15] net/zxdh: dev start/stop ops implementations Junlong Wang 2024-12-17 11:41 ` [PATCH v3 07/15] net/zxdh: provided dev simple tx implementations Junlong Wang 2024-12-17 11:41 ` [PATCH v3 08/15] net/zxdh: provided dev simple rx implementations Junlong Wang 2024-12-17 11:41 ` [PATCH v3 09/15] net/zxdh: link info update, set link up/down Junlong Wang 2024-12-17 11:41 ` [PATCH v3 10/15] net/zxdh: mac set/add/remove ops implementations Junlong Wang 2024-12-17 11:41 ` [PATCH v3 11/15] net/zxdh: promisc/allmulti " Junlong Wang 2024-12-17 11:41 ` [PATCH v3 12/15] net/zxdh: vlan filter/ offload " Junlong Wang 2024-12-17 11:41 ` [PATCH v3 13/15] net/zxdh: rss hash config/update, reta update/get Junlong Wang 2024-12-17 11:41 ` [PATCH v3 14/15] net/zxdh: basic stats ops implementations Junlong Wang 2024-12-17 11:41 ` [PATCH v3 15/15] net/zxdh: mtu update " Junlong Wang 2024-12-18 9:25 ` [PATCH v4 00/15] net/zxdh: updated net zxdh driver Junlong Wang 2024-12-18 9:25 ` [PATCH v4 01/15] net/zxdh: zxdh np init implementation Junlong Wang 2024-12-18 9:25 ` [PATCH v4 02/15] net/zxdh: zxdh np uninit implementation Junlong Wang 2024-12-18 9:25 ` [PATCH v4 03/15] net/zxdh: port tables init implementations Junlong Wang 2024-12-18 9:25 ` [PATCH v4 04/15] net/zxdh: port tables unint implementations Junlong Wang 2024-12-18 9:25 ` [PATCH v4 05/15] net/zxdh: rx/tx queue setup and intr enable Junlong Wang 2024-12-18 9:25 ` [PATCH v4 06/15] net/zxdh: dev start/stop ops implementations Junlong Wang 2024-12-21 0:51 ` Stephen Hemminger 2024-12-18 9:25 ` [PATCH v4 07/15] net/zxdh: provided dev simple tx implementations Junlong Wang 2024-12-18 9:25 ` [PATCH v4 08/15] net/zxdh: provided dev simple rx implementations Junlong Wang 2024-12-18 9:25 ` [PATCH v4 09/15] net/zxdh: link info update, set link up/down Junlong Wang 2024-12-18 9:25 ` [PATCH v4 10/15] net/zxdh: mac set/add/remove ops implementations Junlong Wang 2024-12-18 9:25 ` [PATCH v4 11/15] net/zxdh: promisc/allmulti " Junlong Wang 2024-12-18 9:25 ` [PATCH v4 12/15] net/zxdh: vlan filter/ offload " Junlong Wang 2024-12-18 9:26 ` [PATCH v4 13/15] net/zxdh: rss hash config/update, reta update/get Junlong Wang 2024-12-21 0:44 ` Stephen Hemminger 2024-12-18 9:26 ` [PATCH v4 14/15] net/zxdh: basic stats ops implementations Junlong Wang 2024-12-18 9:26 ` [PATCH v4 15/15] net/zxdh: mtu update " Junlong Wang 2024-12-21 0:33 ` Stephen Hemminger 2024-12-23 11:02 ` [PATCH v5 00/15] net/zxdh: updated net zxdh driver Junlong Wang 2024-12-23 11:02 ` [PATCH v5 01/15] net/zxdh: zxdh np init implementation Junlong Wang 2024-12-23 11:02 ` [PATCH v5 02/15] net/zxdh: zxdh np uninit implementation Junlong Wang 2024-12-23 11:02 ` [PATCH v5 03/15] net/zxdh: 
port tables init implementations Junlong Wang 2024-12-23 11:02 ` [PATCH v5 04/15] net/zxdh: port tables unint implementations Junlong Wang 2024-12-23 11:02 ` [PATCH v5 05/15] net/zxdh: rx/tx queue setup and intr enable Junlong Wang 2024-12-23 11:02 ` [PATCH v5 06/15] net/zxdh: dev start/stop ops implementations Junlong Wang 2024-12-23 11:02 ` [PATCH v5 07/15] net/zxdh: provided dev simple tx implementations Junlong Wang 2024-12-23 11:02 ` [PATCH v5 08/15] net/zxdh: provided dev simple rx implementations Junlong Wang 2024-12-23 11:02 ` [PATCH v5 09/15] net/zxdh: link info update, set link up/down Junlong Wang 2024-12-23 11:02 ` [PATCH v5 10/15] net/zxdh: mac set/add/remove ops implementations Junlong Wang 2024-12-23 11:02 ` [PATCH v5 11/15] net/zxdh: promisc/allmulti " Junlong Wang 2024-12-23 11:02 ` [PATCH v5 12/15] net/zxdh: vlan filter/ offload " Junlong Wang 2024-12-23 11:02 ` [PATCH v5 13/15] net/zxdh: rss hash config/update, reta update/get Junlong Wang 2024-12-23 11:02 ` [PATCH v5 14/15] net/zxdh: basic stats ops implementations Junlong Wang 2024-12-23 11:02 ` [PATCH v5 15/15] net/zxdh: mtu update " Junlong Wang 2024-12-24 20:30 ` [PATCH v5 00/15] net/zxdh: updated net zxdh driver Stephen Hemminger 2024-12-24 20:47 ` Stephen Hemminger 2024-12-26 3:37 ` [PATCH v6 " Junlong Wang 2024-12-26 3:37 ` [PATCH v6 01/15] net/zxdh: zxdh np init implementation Junlong Wang 2024-12-26 3:37 ` [PATCH v6 02/15] net/zxdh: zxdh np uninit implementation Junlong Wang 2024-12-26 3:37 ` [PATCH v6 03/15] net/zxdh: port tables init implementations Junlong Wang 2024-12-26 3:37 ` [PATCH v6 04/15] net/zxdh: port tables unint implementations Junlong Wang 2024-12-26 3:37 ` [PATCH v6 05/15] net/zxdh: rx/tx queue setup and intr enable Junlong Wang 2024-12-26 3:37 ` [PATCH v6 06/15] net/zxdh: dev start/stop ops implementations Junlong Wang 2024-12-26 3:37 ` [PATCH v6 07/15] net/zxdh: provided dev simple tx implementations Junlong Wang 2024-12-26 3:37 ` [PATCH v6 08/15] net/zxdh: provided dev simple rx implementations Junlong Wang 2024-12-26 3:37 ` [PATCH v6 09/15] net/zxdh: link info update, set link up/down Junlong Wang 2024-12-26 3:37 ` [PATCH v6 10/15] net/zxdh: mac set/add/remove ops implementations Junlong Wang 2024-12-26 3:37 ` [PATCH v6 11/15] net/zxdh: promisc/allmulti " Junlong Wang 2024-12-26 3:37 ` [PATCH v6 12/15] net/zxdh: vlan filter/ offload " Junlong Wang 2024-12-26 3:37 ` [PATCH v6 13/15] net/zxdh: rss hash config/update, reta update/get Junlong Wang 2024-12-26 3:37 ` [PATCH v6 14/15] net/zxdh: basic stats ops implementations Junlong Wang 2024-12-26 3:37 ` [PATCH v6 15/15] net/zxdh: mtu update " Junlong Wang 2024-12-10 5:53 ` [PATCH v2 02/15] net/zxdh: zxdh np uninit implementation Junlong Wang 2024-12-13 19:38 ` Stephen Hemminger 2024-12-13 19:41 ` Stephen Hemminger 2024-12-13 19:41 ` Stephen Hemminger 2024-12-10 5:53 ` [PATCH v2 03/15] net/zxdh: port tables init implementations Junlong Wang 2024-12-13 19:42 ` Stephen Hemminger 2024-12-10 5:53 ` [PATCH v2 04/15] net/zxdh: port tables unint implementations Junlong Wang 2024-12-13 19:45 ` Stephen Hemminger 2024-12-13 19:48 ` Stephen Hemminger 2024-12-10 5:53 ` [PATCH v2 05/15] net/zxdh: rx/tx queue setup and intr enable Junlong Wang 2024-12-10 5:53 ` [PATCH v2 06/15] net/zxdh: dev start/stop ops implementations Junlong Wang 2024-12-13 21:05 ` Stephen Hemminger 2024-12-10 5:53 ` [PATCH v2 07/15] net/zxdh: provided dev simple tx implementations Junlong Wang 2024-12-10 5:53 ` [PATCH v2 08/15] net/zxdh: provided dev simple rx implementations Junlong 
Wang 2024-12-10 5:53 ` [PATCH v2 09/15] net/zxdh: link info update, set link up/down Junlong Wang 2024-12-13 19:57 ` Stephen Hemminger 2024-12-13 20:08 ` Stephen Hemminger 2024-12-10 5:53 ` [PATCH v2 10/15] net/zxdh: mac set/add/remove ops implementations Junlong Wang 2024-12-10 5:53 ` [PATCH v2 11/15] net/zxdh: promisc/allmulti " Junlong Wang 2024-12-10 5:53 ` [PATCH v2 12/15] net/zxdh: vlan filter/ offload " Junlong Wang 2024-12-10 5:53 ` [PATCH v2 13/15] net/zxdh: rss hash config/update, reta update/get Junlong Wang 2024-12-10 5:53 ` [PATCH v2 14/15] net/zxdh: basic stats ops implementations Junlong Wang 2024-12-10 5:53 ` [PATCH v2 15/15] net/zxdh: mtu update " Junlong Wang 2024-12-06 5:57 ` [PATCH v1 02/15] net/zxdh: zxdh np uninit implementation Junlong Wang 2024-12-06 5:57 ` [PATCH v1 03/15] net/zxdh: port tables init implementations Junlong Wang 2024-12-06 5:57 ` [PATCH v1 04/15] net/zxdh: port tables unint implementations Junlong Wang 2024-12-06 5:57 ` [PATCH v1 05/15] net/zxdh: rx/tx queue setup and intr enable Junlong Wang 2024-12-06 5:57 ` [PATCH v1 06/15] net/zxdh: dev start/stop ops implementations Junlong Wang 2024-12-06 5:57 ` [PATCH v1 07/15] net/zxdh: provided dev simple tx implementations Junlong Wang 2024-12-06 5:57 ` [PATCH v1 08/15] net/zxdh: provided dev simple rx implementations Junlong Wang 2024-12-06 5:57 ` [PATCH v1 09/15] net/zxdh: link info update, set link up/down Junlong Wang 2024-12-06 5:57 ` [PATCH v1 10/15] net/zxdh: mac set/add/remove ops implementations Junlong Wang 2024-12-06 5:57 ` [PATCH v1 11/15] net/zxdh: promiscuous/allmulticast " Junlong Wang 2024-12-06 5:57 ` [PATCH v1 12/15] net/zxdh: vlan filter, vlan offload " Junlong Wang 2024-12-06 5:57 ` [PATCH v1 13/15] net/zxdh: rss hash config/update, reta update/get Junlong Wang 2024-12-06 5:57 ` [PATCH v1 14/15] net/zxdh: basic stats ops implementations Junlong Wang 2024-12-06 5:57 ` [PATCH v1 15/15] net/zxdh: mtu update " Junlong Wang 2024-11-04 11:58 ` [PATCH v10 02/10] net/zxdh: add logging implementation Junlong Wang 2024-11-04 11:58 ` [PATCH v10 03/10] net/zxdh: add zxdh device pci init implementation Junlong Wang 2024-11-04 11:58 ` [PATCH v10 04/10] net/zxdh: add msg chan and msg hwlock init Junlong Wang 2024-11-04 11:58 ` [PATCH v10 05/10] net/zxdh: add msg chan enable implementation Junlong Wang 2024-11-04 11:58 ` [PATCH v10 06/10] net/zxdh: add zxdh get device backend infos Junlong Wang 2024-11-04 11:58 ` [PATCH v10 07/10] net/zxdh: add configure zxdh intr implementation Junlong Wang 2024-11-04 11:58 ` [PATCH v10 08/10] net/zxdh: add zxdh dev infos get ops Junlong Wang 2024-11-04 11:58 ` [PATCH v10 09/10] net/zxdh: add zxdh dev configure ops Junlong Wang 2024-11-04 11:58 ` [PATCH v10 10/10] net/zxdh: add zxdh dev close ops Junlong Wang 2024-11-06 0:40 ` [PATCH v10 00/10] net/zxdh: introduce net zxdh driver Ferruh Yigit 2024-11-07 9:28 ` Ferruh Yigit 2024-11-07 9:58 ` Ferruh Yigit 2024-11-12 2:49 ` Junlong Wang 2024-11-01 6:21 ` [PATCH v9 2/9] net/zxdh: add logging implementation Junlong Wang 2024-11-02 1:02 ` Ferruh Yigit 2024-11-04 2:44 ` [v9,2/9] " Junlong Wang 2024-11-01 6:21 ` [PATCH v9 3/9] net/zxdh: add zxdh device pci init implementation Junlong Wang 2024-11-02 1:01 ` Ferruh Yigit 2024-11-01 6:21 ` [PATCH v9 4/9] net/zxdh: add msg chan and msg hwlock init Junlong Wang 2024-11-02 1:00 ` Ferruh Yigit 2024-11-04 2:47 ` Junlong Wang 2024-11-01 6:21 ` [PATCH v9 5/9] net/zxdh: add msg chan enable implementation Junlong Wang 2024-11-01 6:21 ` [PATCH v9 6/9] net/zxdh: add zxdh get device backend 
infos Junlong Wang 2024-11-02 1:06 ` Ferruh Yigit 2024-11-04 3:30 ` [v9,6/9] " Junlong Wang 2024-11-01 6:21 ` [PATCH v9 7/9] net/zxdh: add configure zxdh intr implementation Junlong Wang 2024-11-02 1:07 ` Ferruh Yigit 2024-11-01 6:21 ` [PATCH v9 8/9] net/zxdh: add zxdh dev infos get ops Junlong Wang 2024-11-01 6:21 ` [PATCH v9 9/9] net/zxdh: add zxdh dev configure ops Junlong Wang 2024-11-02 0:56 ` [PATCH v9 0/9] net/zxdh: introduce net zxdh driver Ferruh Yigit 2024-11-04 2:42 ` Junlong Wang 2024-11-04 8:46 ` Ferruh Yigit 2024-11-04 9:52 ` David Marchand 2024-11-04 11:46 ` Junlong Wang 2024-11-04 22:47 ` Thomas Monjalon 2024-11-05 9:39 ` Junlong Wang 2024-11-06 0:38 ` Ferruh Yigit 2024-10-30 9:01 ` [PATCH v8 2/9] net/zxdh: add logging implementation Junlong Wang 2024-10-30 9:01 ` [PATCH v8 3/9] net/zxdh: add zxdh device pci init implementation Junlong Wang 2024-10-30 14:55 ` David Marchand 2024-10-30 9:01 ` [PATCH v8 4/9] net/zxdh: add msg chan and msg hwlock init Junlong Wang 2024-10-30 9:01 ` [PATCH v8 5/9] net/zxdh: add msg chan enable implementation Junlong Wang 2024-10-30 9:01 ` [PATCH v8 6/9] net/zxdh: add zxdh get device backend infos Junlong Wang 2024-10-30 9:01 ` [PATCH v8 7/9] net/zxdh: add configure zxdh intr implementation Junlong Wang 2024-10-30 9:01 ` [PATCH v8 8/9] net/zxdh: add zxdh dev infos get ops Junlong Wang 2024-10-30 9:01 ` [PATCH v8 9/9] net/zxdh: add zxdh dev configure ops Junlong Wang 2024-10-22 12:20 ` [PATCH v7 2/9] net/zxdh: add logging implementation Junlong Wang 2024-10-22 12:20 ` [PATCH v7 3/9] net/zxdh: add zxdh device pci init implementation Junlong Wang 2024-10-27 16:47 ` Stephen Hemminger 2024-10-27 16:47 ` Stephen Hemminger 2024-10-22 12:20 ` [PATCH v7 4/9] net/zxdh: add msg chan and msg hwlock init Junlong Wang 2024-10-22 12:20 ` [PATCH v7 5/9] net/zxdh: add msg chan enable implementation Junlong Wang 2024-10-26 17:05 ` Thomas Monjalon 2024-10-22 12:20 ` [PATCH v7 6/9] net/zxdh: add zxdh get device backend infos Junlong Wang 2024-10-22 12:20 ` [PATCH v7 7/9] net/zxdh: add configure zxdh intr implementation Junlong Wang 2024-10-27 17:07 ` Stephen Hemminger 2024-10-22 12:20 ` [PATCH v7 8/9] net/zxdh: add zxdh dev infos get ops Junlong Wang 2024-10-22 12:20 ` [PATCH v7 9/9] net/zxdh: add zxdh dev configure ops Junlong Wang 2024-10-24 11:31 ` [v7,9/9] " Junlong Wang 2024-10-25 9:48 ` Junlong Wang 2024-10-26 2:32 ` Junlong Wang 2024-10-27 16:40 ` [PATCH v7 9/9] " Stephen Hemminger 2024-10-27 17:03 ` Stephen Hemminger 2024-10-27 16:58 ` Stephen Hemminger 2024-12-19 22:38 ` [PATCH v4] net/zxdh: Provided zxdh basic init Stephen Hemminger 2024-12-20 1:47 ` Junlong Wang
This is a public inbox, see mirroring instructions for how to clone and mirror all data and code used for this inbox; as well as URLs for NNTP newsgroup(s).