* [PATCH v1 00/25] Net/SPNIC: support SPNIC into DPDK 22.03
@ 2021-12-18 2:51 Yanling Song
2021-12-18 2:51 ` [PATCH v1 01/25] drivers/net: introduce a new PMD driver Yanling Song
` (24 more replies)
0 siblings, 25 replies; 32+ messages in thread
From: Yanling Song @ 2021-12-18 2:51 UTC (permalink / raw)
To: dev; +Cc: songyl, yanling.song, yanggan, ferruh.yigit
This patch set introduces the SPNIC driver for Ramaxel's SPNxxx series NIC cards into DPDK 22.03.
Ramaxel Memory Technology is a company that supplies a wide range of electronic products,
including storage, communication and PCB products.
SPNxxx is a series of PCIe NIC cards:
SPN110: 2 ports * 25G
SPN120: 4 ports * 25G
SPN130: 2 ports * 100G
The main features of SPNIC are listed below; a minimal configuration sketch follows the list:
- TSO
- LRO
- Flow control
- SR-IOV(Partially supported)
- VLAN offload
- VLAN filter
- CRC offload
- Promiscuous mode
- RSS
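
The sketch below is for illustration only and is not part of this patch set: it shows how an
application could request a few of the offloads listed above through the standard ethdev API.
The function name example_configure and the queue counts are made up; whether a given offload
is accepted depends on the capabilities this PMD ends up reporting.

    #include <rte_ethdev.h>

    static int example_configure(uint16_t port_id)
    {
            struct rte_eth_conf conf = {
                    .rxmode = {
                            .mq_mode  = RTE_ETH_MQ_RX_RSS,
                            .offloads = RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
                                        RTE_ETH_RX_OFFLOAD_TCP_LRO,
                    },
                    .txmode = {
                            .offloads = RTE_ETH_TX_OFFLOAD_TCP_TSO,
                    },
                    .rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_IP,
            };

            /* 4 Rx and 4 Tx queues are an arbitrary choice for illustration */
            return rte_eth_dev_configure(port_id, 4, 4, &conf);
    }
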
Yanling Song (25):
drivers/net: introduce a new PMD driver
net/spnic: initialize the HW interface
net/spnic: add mbox message channel
net/spnic: introduce event queue
net/spnic: add mgmt module
net/spnic: add cmdq and work queue
net/spnic: add interface handling cmdq message
net/spnic: add hardware info initialization
net/spnic: support MAC and link event handling
net/spnic: add function info initialization
net/spnic: add queue pairs context initialization
net/spnic: support mbuf handling of Tx/Rx
net/spnic: support Rx configuration
net/spnic: add port/vport enable
net/spnic: support IO packets handling
net/spnic: add device configure/version/info
net/spnic: support RSS configuration update and get
net/spnic: support VLAN filtering and offloading
net/spnic: support promiscuous and allmulticast Rx modes
net/spnic: support flow control
net/spnic: support getting Tx/Rx queues info
net/spnic: support xstats statistics
net/spnic: support VFIO interrupt
net/spnic: support Tx/Rx queue start/stop
net/spnic: add doc infrastructure
MAINTAINERS | 6 +
doc/guides/nics/features/spnic.ini | 40 +
doc/guides/nics/spnic.rst | 61 +
drivers/net/meson.build | 1 +
drivers/net/spnic/base/meson.build | 37 +
drivers/net/spnic/base/spnic_cmd.h | 222 ++
drivers/net/spnic/base/spnic_cmdq.c | 875 ++++++
drivers/net/spnic/base/spnic_cmdq.h | 248 ++
drivers/net/spnic/base/spnic_compat.h | 188 ++
drivers/net/spnic/base/spnic_csr.h | 104 +
drivers/net/spnic/base/spnic_eqs.c | 661 +++++
drivers/net/spnic/base/spnic_eqs.h | 102 +
drivers/net/spnic/base/spnic_hw_cfg.c | 212 ++
drivers/net/spnic/base/spnic_hw_cfg.h | 125 +
drivers/net/spnic/base/spnic_hw_comm.c | 485 ++++
drivers/net/spnic/base/spnic_hw_comm.h | 204 ++
drivers/net/spnic/base/spnic_hwdev.c | 514 ++++
drivers/net/spnic/base/spnic_hwdev.h | 143 +
drivers/net/spnic/base/spnic_hwif.c | 774 ++++++
drivers/net/spnic/base/spnic_hwif.h | 155 ++
drivers/net/spnic/base/spnic_mbox.c | 1187 ++++++++
drivers/net/spnic/base/spnic_mbox.h | 202 ++
drivers/net/spnic/base/spnic_mgmt.c | 367 +++
drivers/net/spnic/base/spnic_mgmt.h | 110 +
drivers/net/spnic/base/spnic_nic_cfg.c | 1348 +++++++++
drivers/net/spnic/base/spnic_nic_cfg.h | 1110 ++++++++
drivers/net/spnic/base/spnic_nic_event.c | 194 ++
drivers/net/spnic/base/spnic_nic_event.h | 29 +
drivers/net/spnic/base/spnic_wq.c | 139 +
drivers/net/spnic/base/spnic_wq.h | 123 +
drivers/net/spnic/meson.build | 14 +
drivers/net/spnic/spnic_ethdev.c | 3231 ++++++++++++++++++++++
drivers/net/spnic/spnic_ethdev.h | 95 +
drivers/net/spnic/spnic_io.c | 738 +++++
drivers/net/spnic/spnic_io.h | 154 ++
drivers/net/spnic/spnic_rx.c | 956 +++++++
drivers/net/spnic/spnic_rx.h | 326 +++
drivers/net/spnic/spnic_tx.c | 858 ++++++
drivers/net/spnic/spnic_tx.h | 297 ++
drivers/net/spnic/version.map | 3 +
40 files changed, 16638 insertions(+)
create mode 100644 doc/guides/nics/features/spnic.ini
create mode 100644 doc/guides/nics/spnic.rst
create mode 100644 drivers/net/spnic/base/meson.build
create mode 100644 drivers/net/spnic/base/spnic_cmd.h
create mode 100644 drivers/net/spnic/base/spnic_cmdq.c
create mode 100644 drivers/net/spnic/base/spnic_cmdq.h
create mode 100644 drivers/net/spnic/base/spnic_compat.h
create mode 100644 drivers/net/spnic/base/spnic_csr.h
create mode 100644 drivers/net/spnic/base/spnic_eqs.c
create mode 100644 drivers/net/spnic/base/spnic_eqs.h
create mode 100644 drivers/net/spnic/base/spnic_hw_cfg.c
create mode 100644 drivers/net/spnic/base/spnic_hw_cfg.h
create mode 100644 drivers/net/spnic/base/spnic_hw_comm.c
create mode 100644 drivers/net/spnic/base/spnic_hw_comm.h
create mode 100644 drivers/net/spnic/base/spnic_hwdev.c
create mode 100644 drivers/net/spnic/base/spnic_hwdev.h
create mode 100644 drivers/net/spnic/base/spnic_hwif.c
create mode 100644 drivers/net/spnic/base/spnic_hwif.h
create mode 100644 drivers/net/spnic/base/spnic_mbox.c
create mode 100644 drivers/net/spnic/base/spnic_mbox.h
create mode 100644 drivers/net/spnic/base/spnic_mgmt.c
create mode 100644 drivers/net/spnic/base/spnic_mgmt.h
create mode 100644 drivers/net/spnic/base/spnic_nic_cfg.c
create mode 100644 drivers/net/spnic/base/spnic_nic_cfg.h
create mode 100644 drivers/net/spnic/base/spnic_nic_event.c
create mode 100644 drivers/net/spnic/base/spnic_nic_event.h
create mode 100644 drivers/net/spnic/base/spnic_wq.c
create mode 100644 drivers/net/spnic/base/spnic_wq.h
create mode 100644 drivers/net/spnic/meson.build
create mode 100644 drivers/net/spnic/spnic_ethdev.c
create mode 100644 drivers/net/spnic/spnic_ethdev.h
create mode 100644 drivers/net/spnic/spnic_io.c
create mode 100644 drivers/net/spnic/spnic_io.h
create mode 100644 drivers/net/spnic/spnic_rx.c
create mode 100644 drivers/net/spnic/spnic_rx.h
create mode 100644 drivers/net/spnic/spnic_tx.c
create mode 100644 drivers/net/spnic/spnic_tx.h
create mode 100644 drivers/net/spnic/version.map
--
2.27.0
* [PATCH v1 01/25] drivers/net: introduce a new PMD driver
2021-12-18 2:51 [PATCH v1 00/25] Net/SPNIC: support SPNIC into DPDK 22.03 Yanling Song
@ 2021-12-18 2:51 ` Yanling Song
2021-12-19 19:40 ` Stephen Hemminger
2021-12-18 2:51 ` [PATCH v1 02/25] net/spnic: initialize the HW interface Yanling Song
` (23 subsequent siblings)
24 siblings, 1 reply; 32+ messages in thread
From: Yanling Song @ 2021-12-18 2:51 UTC (permalink / raw)
To: dev; +Cc: songyl, yanling.song, yanggan, ferruh.yigit
Introduce a new PMD driver named spnic.
For now, the driver only implements the module entry (PCI driver
registration and log setup) without any further functionality.
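
As a minimal illustration only (not part of this patch), once the PMD below is registered an
application can verify that a port was created by this driver by checking the driver name
reported through the ethdev API; "net_spnic" matches the name used in RTE_PMD_REGISTER_PCI()
and port_is_spnic() is a made-up helper:

    #include <string.h>
    #include <rte_ethdev.h>

    static int port_is_spnic(uint16_t port_id)
    {
            struct rte_eth_dev_info info;

            if (rte_eth_dev_info_get(port_id, &info) != 0)
                    return 0;

            return strcmp(info.driver_name, "net_spnic") == 0;
    }
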
Signed-off-by: Yanling Song <songyl@ramaxel.com>
---
drivers/net/meson.build | 1 +
drivers/net/spnic/base/meson.build | 26 ++++
drivers/net/spnic/base/spnic_compat.h | 188 ++++++++++++++++++++++++++
drivers/net/spnic/meson.build | 11 ++
drivers/net/spnic/spnic_ethdev.c | 107 +++++++++++++++
drivers/net/spnic/spnic_ethdev.h | 28 ++++
drivers/net/spnic/version.map | 3 +
7 files changed, 364 insertions(+)
create mode 100644 drivers/net/spnic/base/meson.build
create mode 100644 drivers/net/spnic/base/spnic_compat.h
create mode 100644 drivers/net/spnic/meson.build
create mode 100644 drivers/net/spnic/spnic_ethdev.c
create mode 100644 drivers/net/spnic/spnic_ethdev.h
create mode 100644 drivers/net/spnic/version.map
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index 2355d1cde8..a5c715f59c 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -53,6 +53,7 @@ drivers = [
'ring',
'sfc',
'softnic',
+ 'spnic',
'tap',
'thunderx',
'txgbe',
diff --git a/drivers/net/spnic/base/meson.build b/drivers/net/spnic/base/meson.build
new file mode 100644
index 0000000000..e83a473881
--- /dev/null
+++ b/drivers/net/spnic/base/meson.build
@@ -0,0 +1,26 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2021 Ramaxel Memory Technology, Ltd
+
+sources = [
+]
+
+extra_flags = []
+# The driver runs only on 64-bit machines, suppress 32-bit pointer cast warnings
+if not dpdk_conf.get('RTE_ARCH_64')
+ extra_flags += ['-Wno-int-to-pointer-cast', '-Wno-pointer-to-int-cast']
+endif
+
+foreach flag: extra_flags
+ if cc.has_argument(flag)
+ cflags += flag
+ endif
+endforeach
+
+deps += ['hash']
+cflags += ['-DHW_CONVERT_ENDIAN']
+c_args = cflags
+
+base_lib = static_library('spnic_base', sources,
+ dependencies: [static_rte_eal, static_rte_ethdev, static_rte_bus_pci, static_rte_hash],
+ c_args: c_args)
+base_objs = base_lib.extract_all_objects()
diff --git a/drivers/net/spnic/base/spnic_compat.h b/drivers/net/spnic/base/spnic_compat.h
new file mode 100644
index 0000000000..dd0ea2a04e
--- /dev/null
+++ b/drivers/net/spnic/base/spnic_compat.h
@@ -0,0 +1,188 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Ramaxel Memory Technology, Ltd
+ */
+
+#ifndef _SPNIC_COMPAT_H_
+#define _SPNIC_COMPAT_H_
+
+#include <stdint.h>
+#include <stdbool.h>
+#include <sys/time.h>
+#include <unistd.h>
+#include <pthread.h>
+#include <rte_common.h>
+#include <rte_byteorder.h>
+#include <rte_memzone.h>
+#include <rte_memcpy.h>
+#include <rte_malloc.h>
+#include <rte_atomic.h>
+#include <rte_spinlock.h>
+#include <rte_cycles.h>
+#include <rte_log.h>
+#include <rte_config.h>
+#include <rte_io.h>
+
+typedef uint8_t u8;
+typedef int8_t s8;
+typedef uint16_t u16;
+typedef uint32_t u32;
+typedef int32_t s32;
+typedef uint64_t u64;
+
+#ifndef BIT
+#define BIT(n) (1 << (n))
+#endif
+
+#define upper_32_bits(n) ((u32)(((n) >> 16) >> 16))
+#define lower_32_bits(n) ((u32)(n))
+
+#define SPNIC_MEM_ALLOC_ALIGN_MIN 1
+
+#define SPNIC_DRIVER_NAME "spnic"
+
+extern int spnic_logtype;
+
+#define PMD_DRV_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, spnic_logtype, \
+ SPNIC_DRIVER_NAME ": " fmt "\n", ##args)
+
+/* Bit order interface */
+#define cpu_to_be16(o) rte_cpu_to_be_16(o)
+#define cpu_to_be32(o) rte_cpu_to_be_32(o)
+#define cpu_to_be64(o) rte_cpu_to_be_64(o)
+#define cpu_to_le32(o) rte_cpu_to_le_32(o)
+#define be16_to_cpu(o) rte_be_to_cpu_16(o)
+#define be32_to_cpu(o) rte_be_to_cpu_32(o)
+#define be64_to_cpu(o) rte_be_to_cpu_64(o)
+#define le32_to_cpu(o) rte_le_to_cpu_32(o)
+
+#define ARRAY_LEN(arr) ((sizeof(arr) / sizeof((arr)[0])))
+
+#ifdef CLOCK_MONOTONIC_RAW /* Defined in glibc bits/time.h */
+#define CLOCK_TYPE CLOCK_MONOTONIC_RAW
+#else
+#define CLOCK_TYPE CLOCK_MONOTONIC
+#endif
+
+#define SPNIC_MUTEX_TIMEOUT 10
+#define SPNIC_S_TO_MS_UNIT 1000
+#define SPNIC_S_TO_NS_UNIT 1000000
+
+static inline unsigned long clock_gettime_ms(void)
+{
+ struct timespec tv;
+
+ (void)clock_gettime(CLOCK_TYPE, &tv);
+
+ return (unsigned long)tv.tv_sec * SPNIC_S_TO_MS_UNIT +
+ (unsigned long)tv.tv_nsec / SPNIC_S_TO_NS_UNIT;
+}
+
+#define jiffies clock_gettime_ms()
+#define msecs_to_jiffies(ms) (ms)
+#define time_before(now, end) ((now) < (end))
+
+/**
+ * Convert data to big endian 32 bit format
+ *
+ * @param data
+ * The data to convert
+ * @param len
+ * Length of data to convert, must be a multiple of 4 bytes
+ */
+static inline void spnic_cpu_to_be32(void *data, int len)
+{
+ int i, chunk_sz = sizeof(u32);
+ u32 *mem = data;
+
+ if (!data)
+ return;
+
+ len = len / chunk_sz;
+
+ for (i = 0; i < len; i++) {
+ *mem = cpu_to_be32(*mem);
+ mem++;
+ }
+}
+
+/**
+ * Convert data from big endian 32 bit format
+ *
+ * @param data
+ * The data to convert
+ * @param len
+ * Length of data to convert, must be a multiple of 4 bytes
+ */
+static inline void spnic_be32_to_cpu(void *data, int len)
+{
+ int i, chunk_sz = sizeof(u32);
+ u32 *mem = data;
+
+ if (!data)
+ return;
+
+ len = len / chunk_sz;
+
+ for (i = 0; i < len; i++) {
+ *mem = be32_to_cpu(*mem);
+ mem++;
+ }
+}
+
+static inline u16 ilog2(u32 n)
+{
+ u16 res = 0;
+
+ while (n > 1) {
+ n >>= 1;
+ res++;
+ }
+
+ return res;
+}
+
+static inline int spnic_mutex_init(pthread_mutex_t *pthreadmutex,
+ const pthread_mutexattr_t *mattr)
+{
+ int err;
+
+ err = pthread_mutex_init(pthreadmutex, mattr);
+ if (unlikely(err))
+ PMD_DRV_LOG(ERR, "Initialize mutex failed, error: %d", err);
+
+ return err;
+}
+
+static inline int spnic_mutex_destroy(pthread_mutex_t *pthreadmutex)
+{
+ int err;
+
+ err = pthread_mutex_destroy(pthreadmutex);
+ if (unlikely(err))
+ PMD_DRV_LOG(ERR, "Destroy mutex failed, error: %d", err);
+
+ return err;
+}
+
+static inline int spnic_mutex_lock(pthread_mutex_t *pthreadmutex)
+{
+ struct timespec tout;
+ int err;
+
+ (void)clock_gettime(CLOCK_TYPE, &tout);
+
+ tout.tv_sec += SPNIC_MUTEX_TIMEOUT;
+ err = pthread_mutex_timedlock(pthreadmutex, &tout);
+ if (err)
+ PMD_DRV_LOG(ERR, "Mutex lock failed, err: %d", err);
+
+ return err;
+}
+
+static inline int spnic_mutex_unlock(pthread_mutex_t *pthreadmutex)
+{
+ return pthread_mutex_unlock(pthreadmutex);
+}
+
+#endif /* _SPNIC_COMPAT_H_ */
diff --git a/drivers/net/spnic/meson.build b/drivers/net/spnic/meson.build
new file mode 100644
index 0000000000..042d2fe6e1
--- /dev/null
+++ b/drivers/net/spnic/meson.build
@@ -0,0 +1,11 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2021 Ramaxel Memory Technology, Ltd
+
+subdir('base')
+objs = [base_objs]
+
+sources = files(
+ 'spnic_ethdev.c',
+ )
+
+includes += include_directories('base')
diff --git a/drivers/net/spnic/spnic_ethdev.c b/drivers/net/spnic/spnic_ethdev.c
new file mode 100644
index 0000000000..b06492a8e9
--- /dev/null
+++ b/drivers/net/spnic/spnic_ethdev.c
@@ -0,0 +1,107 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Ramaxel Memory Technology, Ltd
+ */
+
+#include <rte_pci.h>
+#include <rte_bus_pci.h>
+#include <ethdev_pci.h>
+#include <rte_errno.h>
+#include <rte_ether.h>
+
+#include "base/spnic_compat.h"
+#include "spnic_ethdev.h"
+
+/* Driver-specific log messages type */
+int spnic_logtype;
+
+static int spnic_func_init(struct rte_eth_dev *eth_dev)
+{
+ struct spnic_nic_dev *nic_dev = NULL;
+ struct rte_pci_device *pci_dev = NULL;
+
+ pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+
+ /* EAL is secondary and eth_dev is already created */
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+ PMD_DRV_LOG(INFO, "Initialize %s in secondary process",
+ eth_dev->data->name);
+
+ return 0;
+ }
+
+ nic_dev = SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(eth_dev);
+ snprintf(nic_dev->dev_name, sizeof(nic_dev->dev_name),
+ "spnic-%.4x:%.2x:%.2x.%x",
+ pci_dev->addr.domain, pci_dev->addr.bus,
+ pci_dev->addr.devid, pci_dev->addr.function);
+
+ rte_bit_relaxed_set32(SPNIC_DEV_INIT, &nic_dev->dev_status);
+ PMD_DRV_LOG(INFO, "Initialize %s in primary succeed",
+ eth_dev->data->name);
+
+ return 0;
+}
+
+static int spnic_dev_init(struct rte_eth_dev *eth_dev)
+{
+ struct rte_pci_device *pci_dev;
+
+ pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+
+ PMD_DRV_LOG(INFO, "Initializing spnic-%.4x:%.2x:%.2x.%x in %s process",
+ pci_dev->addr.domain, pci_dev->addr.bus,
+ pci_dev->addr.devid, pci_dev->addr.function,
+ (rte_eal_process_type() == RTE_PROC_PRIMARY) ?
+ "primary" : "secondary");
+
+ return spnic_func_init(eth_dev);
+}
+
+static int spnic_dev_uninit(struct rte_eth_dev *dev)
+{
+ struct spnic_nic_dev *nic_dev;
+
+ nic_dev = SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ rte_bit_relaxed_clear32(SPNIC_DEV_INIT, &nic_dev->dev_status);
+
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return 0;
+
+ return 0;
+}
+
+static struct rte_pci_id pci_id_spnic_map[] = {
+ { RTE_PCI_DEVICE(PCI_VENDOR_ID_RAMAXEL, SPNIC_DEV_ID_PF) },
+ { RTE_PCI_DEVICE(PCI_VENDOR_ID_RAMAXEL, SPNIC_DEV_ID_VF) },
+ {.vendor_id = 0},
+};
+
+static int spnic_pci_probe(__rte_unused struct rte_pci_driver *pci_drv,
+ struct rte_pci_device *pci_dev)
+{
+ return rte_eth_dev_pci_generic_probe(pci_dev,
+ sizeof(struct spnic_nic_dev),
+ spnic_dev_init);
+}
+
+static int spnic_pci_remove(struct rte_pci_device *pci_dev)
+{
+ return rte_eth_dev_pci_generic_remove(pci_dev, spnic_dev_uninit);
+}
+
+static struct rte_pci_driver rte_spnic_pmd = {
+ .id_table = pci_id_spnic_map,
+ .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC,
+ .probe = spnic_pci_probe,
+ .remove = spnic_pci_remove,
+};
+
+RTE_PMD_REGISTER_PCI(net_spnic, rte_spnic_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(net_spnic, pci_id_spnic_map);
+
+RTE_INIT(spnic_init_log)
+{
+ spnic_logtype = rte_log_register("pmd.net.spnic");
+ if (spnic_logtype >= 0)
+ rte_log_set_level(spnic_logtype, RTE_LOG_INFO);
+}
diff --git a/drivers/net/spnic/spnic_ethdev.h b/drivers/net/spnic/spnic_ethdev.h
new file mode 100644
index 0000000000..d4ec641d83
--- /dev/null
+++ b/drivers/net/spnic/spnic_ethdev.h
@@ -0,0 +1,28 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Ramaxel Memory Technology, Ltd
+ */
+
+#ifndef _SPNIC_ETHDEV_H_
+#define _SPNIC_ETHDEV_H_
+
+/* Vendor id */
+#define PCI_VENDOR_ID_RAMAXEL 0x1E81
+
+/* Device ids */
+#define SPNIC_DEV_ID_PF 0x9020
+#define SPNIC_DEV_ID_VF 0x9001
+
+enum spnic_dev_status {
+ SPNIC_DEV_INIT
+};
+
+#define SPNIC_DEV_NAME_LEN 32
+struct spnic_nic_dev {
+ u32 dev_status;
+ char dev_name[SPNIC_DEV_NAME_LEN];
+};
+
+#define SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev) \
+ ((struct spnic_nic_dev *)(dev)->data->dev_private)
+
+#endif /* _SPNIC_ETHDEV_H_ */
diff --git a/drivers/net/spnic/version.map b/drivers/net/spnic/version.map
new file mode 100644
index 0000000000..4a76d1d52d
--- /dev/null
+++ b/drivers/net/spnic/version.map
@@ -0,0 +1,3 @@
+DPDK_21 {
+ local: *;
+};
--
2.27.0
* [PATCH v1 02/25] net/spnic: initialize the HW interface
2021-12-18 2:51 [PATCH v1 00/25] Net/SPNIC: support SPNIC into DPDK 22.03 Yanling Song
2021-12-18 2:51 ` [PATCH v1 01/25] drivers/net: introduce a new PMD driver Yanling Song
@ 2021-12-18 2:51 ` Yanling Song
2021-12-18 2:51 ` [PATCH v1 03/25] net/spnic: add mbox message channel Yanling Song
` (22 subsequent siblings)
24 siblings, 0 replies; 32+ messages in thread
From: Yanling Song @ 2021-12-18 2:51 UTC (permalink / raw)
To: dev; +Cc: songyl, yanling.song, yanggan, ferruh.yigit
Add the HW interface register definitions and initialize the HW
interface.
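
For illustration only (not part of the patch), the snippet below shows how the attribute-register
bit-field macros added in spnic_hwif.c are meant to be used; the register value is made up and
the decoded fields are only an example:

    /* Relies on the SPNIC_AF0_* macros and the u8/u16/u32 typedefs
     * introduced by this series.
     */
    u32 attr0 = 0x10022005;    /* hypothetical SPNIC_CSR_FUNC_ATTR0_ADDR value */
    u16 func_idx  = SPNIC_AF0_GET(attr0, FUNC_GLOBAL_IDX); /* bits 0..11  -> 0x005 */
    u8  intf_idx  = SPNIC_AF0_GET(attr0, PCI_INTF_IDX);    /* bits 17..19 -> 1 */
    u8  func_type = SPNIC_AF0_GET(attr0, FUNC_TYPE);       /* bit 28 -> 1, i.e. a VF per enum func_type */
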
Signed-off-by: Yanling Song <songyl@ramaxel.com>
---
drivers/net/spnic/base/meson.build | 2 +
drivers/net/spnic/base/spnic_csr.h | 104 ++++
drivers/net/spnic/base/spnic_hwdev.c | 41 ++
drivers/net/spnic/base/spnic_hwdev.h | 29 +
drivers/net/spnic/base/spnic_hwif.c | 774 +++++++++++++++++++++++++++
drivers/net/spnic/base/spnic_hwif.h | 155 ++++++
drivers/net/spnic/spnic_ethdev.c | 66 +++
drivers/net/spnic/spnic_ethdev.h | 48 +-
8 files changed, 1212 insertions(+), 7 deletions(-)
create mode 100644 drivers/net/spnic/base/spnic_csr.h
create mode 100644 drivers/net/spnic/base/spnic_hwdev.c
create mode 100644 drivers/net/spnic/base/spnic_hwdev.h
create mode 100644 drivers/net/spnic/base/spnic_hwif.c
create mode 100644 drivers/net/spnic/base/spnic_hwif.h
diff --git a/drivers/net/spnic/base/meson.build b/drivers/net/spnic/base/meson.build
index e83a473881..edd6e94772 100644
--- a/drivers/net/spnic/base/meson.build
+++ b/drivers/net/spnic/base/meson.build
@@ -2,6 +2,8 @@
# Copyright(c) 2021 Ramaxel Memory Technology, Ltd
sources = [
+ 'spnic_hwdev.c',
+ 'spnic_hwif.c'
]
extra_flags = []
diff --git a/drivers/net/spnic/base/spnic_csr.h b/drivers/net/spnic/base/spnic_csr.h
new file mode 100644
index 0000000000..602d5de6b1
--- /dev/null
+++ b/drivers/net/spnic/base/spnic_csr.h
@@ -0,0 +1,104 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Ramaxel Memory Technology, Ltd
+ */
+
+#ifndef _SPNIC_CSR_H_
+#define _SPNIC_CSR_H_
+
+#define PCI_VENDOR_ID_RAMAXEL 0x1E81
+
+/* Device ids */
+#define SPNIC_DEV_ID_PF 0x9020
+#define SPNIC_DEV_ID_VF 0x9001
+
+/*
+ * Bit30/bit31 for bar index flag
+ * 00: bar0
+ * 01: bar1
+ * 10: bar2
+ * 11: bar3
+ */
+#define SPNIC_CFG_REGS_FLAG 0x40000000
+
+#define SPNIC_MGMT_REGS_FLAG 0xC0000000
+
+#define SPNIC_REGS_FLAG_MAKS 0x3FFFFFFF
+
+#define SPNIC_VF_CFG_REG_OFFSET 0x2000
+
+#define SPNIC_HOST_CSR_BASE_ADDR (SPNIC_MGMT_REGS_FLAG + 0x6000)
+#define SPNIC_CSR_GLOBAL_BASE_ADDR (SPNIC_MGMT_REGS_FLAG + 0x6400)
+
+/* HW interface registers */
+#define SPNIC_CSR_FUNC_ATTR0_ADDR (SPNIC_CFG_REGS_FLAG + 0x0)
+#define SPNIC_CSR_FUNC_ATTR1_ADDR (SPNIC_CFG_REGS_FLAG + 0x4)
+#define SPNIC_CSR_FUNC_ATTR2_ADDR (SPNIC_CFG_REGS_FLAG + 0x8)
+#define SPNIC_CSR_FUNC_ATTR3_ADDR (SPNIC_CFG_REGS_FLAG + 0xC)
+#define SPNIC_CSR_FUNC_ATTR4_ADDR (SPNIC_CFG_REGS_FLAG + 0x10)
+#define SPNIC_CSR_FUNC_ATTR5_ADDR (SPNIC_CFG_REGS_FLAG + 0x14)
+#define SPNIC_CSR_FUNC_ATTR6_ADDR (SPNIC_CFG_REGS_FLAG + 0x18)
+
+#define SPNIC_FUNC_CSR_MAILBOX_DATA_OFF 0x80
+#define SPNIC_FUNC_CSR_MAILBOX_CONTROL_OFF (SPNIC_CFG_REGS_FLAG + 0x0100)
+#define SPNIC_FUNC_CSR_MAILBOX_INT_OFFSET_OFF (SPNIC_CFG_REGS_FLAG + 0x0104)
+#define SPNIC_FUNC_CSR_MAILBOX_RESULT_H_OFF (SPNIC_CFG_REGS_FLAG + 0x0108)
+#define SPNIC_FUNC_CSR_MAILBOX_RESULT_L_OFF (SPNIC_CFG_REGS_FLAG + 0x010C)
+
+#define SPNIC_PPF_ELECTION_OFFSET 0x0
+#define SPNIC_MPF_ELECTION_OFFSET 0x20
+
+#define SPNIC_CSR_PPF_ELECTION_ADDR \
+ (SPNIC_HOST_CSR_BASE_ADDR + SPNIC_PPF_ELECTION_OFFSET)
+
+#define SPNIC_CSR_GLOBAL_MPF_ELECTION_ADDR \
+ (SPNIC_HOST_CSR_BASE_ADDR + SPNIC_MPF_ELECTION_OFFSET)
+
+#define SPNIC_CSR_DMA_ATTR_TBL_ADDR (SPNIC_CFG_REGS_FLAG + 0x380)
+#define SPNIC_CSR_DMA_ATTR_INDIR_IDX_ADDR (SPNIC_CFG_REGS_FLAG + 0x390)
+
+/* MSI-X registers */
+#define SPNIC_CSR_MSIX_INDIR_IDX_ADDR (SPNIC_CFG_REGS_FLAG + 0x310)
+#define SPNIC_CSR_MSIX_CTRL_ADDR (SPNIC_CFG_REGS_FLAG + 0x300)
+#define SPNIC_CSR_MSIX_CNT_ADDR (SPNIC_CFG_REGS_FLAG + 0x304)
+#define SPNIC_CSR_FUNC_MSI_CLR_WR_ADDR (SPNIC_CFG_REGS_FLAG + 0x58)
+
+#define SPNIC_MSI_CLR_INDIR_RESEND_TIMER_CLR_SHIFT 0
+#define SPNIC_MSI_CLR_INDIR_INT_MSK_SET_SHIFT 1
+#define SPNIC_MSI_CLR_INDIR_INT_MSK_CLR_SHIFT 2
+#define SPNIC_MSI_CLR_INDIR_AUTO_MSK_SET_SHIFT 3
+#define SPNIC_MSI_CLR_INDIR_AUTO_MSK_CLR_SHIFT 4
+#define SPNIC_MSI_CLR_INDIR_SIMPLE_INDIR_IDX_SHIFT 22
+
+#define SPNIC_MSI_CLR_INDIR_RESEND_TIMER_CLR_MASK 0x1U
+#define SPNIC_MSI_CLR_INDIR_INT_MSK_SET_MASK 0x1U
+#define SPNIC_MSI_CLR_INDIR_INT_MSK_CLR_MASK 0x1U
+#define SPNIC_MSI_CLR_INDIR_AUTO_MSK_SET_MASK 0x1U
+#define SPNIC_MSI_CLR_INDIR_AUTO_MSK_CLR_MASK 0x1U
+#define SPNIC_MSI_CLR_INDIR_SIMPLE_INDIR_IDX_MASK 0x3FFU
+
+#define SPNIC_MSI_CLR_INDIR_SET(val, member) \
+ (((val) & SPNIC_MSI_CLR_INDIR_##member##_MASK) << \
+ SPNIC_MSI_CLR_INDIR_##member##_SHIFT)
+
+/* EQ registers */
+#define SPNIC_AEQ_INDIR_IDX_ADDR (SPNIC_CFG_REGS_FLAG + 0x210)
+
+#define SPNIC_AEQ_MTT_OFF_BASE_ADDR (SPNIC_CFG_REGS_FLAG + 0x240)
+
+#define SPNIC_CSR_EQ_PAGE_OFF_STRIDE 8
+
+#define SPNIC_AEQ_HI_PHYS_ADDR_REG(pg_num) \
+ (SPNIC_AEQ_MTT_OFF_BASE_ADDR + \
+ (pg_num) * SPNIC_CSR_EQ_PAGE_OFF_STRIDE)
+
+#define SPNIC_AEQ_LO_PHYS_ADDR_REG(pg_num) \
+ (SPNIC_AEQ_MTT_OFF_BASE_ADDR + \
+ (pg_num) * SPNIC_CSR_EQ_PAGE_OFF_STRIDE + 4)
+
+#define SPNIC_CSR_AEQ_CTRL_0_ADDR (SPNIC_CFG_REGS_FLAG + 0x200)
+#define SPNIC_CSR_AEQ_CTRL_1_ADDR (SPNIC_CFG_REGS_FLAG + 0x204)
+#define SPNIC_CSR_AEQ_CONS_IDX_ADDR (SPNIC_CFG_REGS_FLAG + 0x208)
+#define SPNIC_CSR_AEQ_PROD_IDX_ADDR (SPNIC_CFG_REGS_FLAG + 0x20C)
+#define SPNIC_CSR_AEQ_CI_SIMPLE_INDIR_ADDR (SPNIC_CFG_REGS_FLAG + 0x50)
+
+#endif /* _SPNIC_CSR_H_ */
diff --git a/drivers/net/spnic/base/spnic_hwdev.c b/drivers/net/spnic/base/spnic_hwdev.c
new file mode 100644
index 0000000000..de73f244fd
--- /dev/null
+++ b/drivers/net/spnic/base/spnic_hwdev.c
@@ -0,0 +1,41 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Ramaxel Memory Technology, Ltd
+ */
+
+#include "spnic_compat.h"
+#include "spnic_csr.h"
+#include "spnic_hwif.h"
+#include "spnic_hwdev.h"
+
+int spnic_init_hwdev(struct spnic_hwdev *hwdev)
+{
+ int err;
+
+ hwdev->chip_fault_stats = rte_zmalloc("chip_fault_stats",
+ SPNIC_CHIP_FAULT_SIZE,
+ RTE_CACHE_LINE_SIZE);
+ if (!hwdev->chip_fault_stats) {
+ PMD_DRV_LOG(ERR, "Alloc memory for chip_fault_stats failed");
+ return -ENOMEM;
+ }
+
+ err = spnic_init_hwif(hwdev);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Initialize hwif failed");
+ goto init_hwif_err;
+ }
+
+ return 0;
+
+init_hwif_err:
+ rte_free(hwdev->chip_fault_stats);
+
+ return -EFAULT;
+}
+
+void spnic_free_hwdev(struct spnic_hwdev *hwdev)
+{
+ spnic_free_hwif(hwdev);
+
+ rte_free(hwdev->chip_fault_stats);
+}
diff --git a/drivers/net/spnic/base/spnic_hwdev.h b/drivers/net/spnic/base/spnic_hwdev.h
new file mode 100644
index 0000000000..a6cb8bc36e
--- /dev/null
+++ b/drivers/net/spnic/base/spnic_hwdev.h
@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Ramaxel Memory Technology, Ltd
+ */
+
+#ifndef _SPNIC_HWDEV_H_
+#define _SPNIC_HWDEV_H_
+
+#include <rte_ether.h>
+
+#define SPNIC_CHIP_FAULT_SIZE (110 * 1024)
+
+struct spnic_hwdev {
+ void *dev_handle; /* Pointer to spnic_nic_dev */
+ void *pci_dev; /* Pointer to rte_pci_device */
+ void *eth_dev; /* Pointer to rte_eth_dev */
+
+ uint16_t port_id;
+
+ struct spnic_hwif *hwif;
+ u8 *chip_fault_stats;
+
+ u16 max_vfs;
+ u16 link_status;
+};
+
+int spnic_init_hwdev(struct spnic_hwdev *hwdev);
+
+void spnic_free_hwdev(struct spnic_hwdev *hwdev);
+#endif /* _SPNIC_HWDEV_H_ */
diff --git a/drivers/net/spnic/base/spnic_hwif.c b/drivers/net/spnic/base/spnic_hwif.c
new file mode 100644
index 0000000000..9daaa7abd9
--- /dev/null
+++ b/drivers/net/spnic/base/spnic_hwif.c
@@ -0,0 +1,774 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Ramaxel Memory Technology, Ltd
+ */
+
+#include <rte_bus_pci.h>
+#include "spnic_compat.h"
+#include "spnic_csr.h"
+#include "spnic_hwdev.h"
+#include "spnic_hwif.h"
+
+#define WAIT_HWIF_READY_TIMEOUT 10000
+
+#define DB_IDX(db, db_base) \
+ ((u32)(((ulong)(db) - (ulong)(db_base)) / \
+ SPNIC_DB_PAGE_SIZE))
+
+#define SPNIC_AF0_FUNC_GLOBAL_IDX_SHIFT 0
+#define SPNIC_AF0_P2P_IDX_SHIFT 12
+#define SPNIC_AF0_PCI_INTF_IDX_SHIFT 17
+#define SPNIC_AF0_VF_IN_PF_SHIFT 20
+#define SPNIC_AF0_FUNC_TYPE_SHIFT 28
+
+#define SPNIC_AF0_FUNC_GLOBAL_IDX_MASK 0xFFF
+#define SPNIC_AF0_P2P_IDX_MASK 0x1F
+#define SPNIC_AF0_PCI_INTF_IDX_MASK 0x7
+#define SPNIC_AF0_VF_IN_PF_MASK 0xFF
+#define SPNIC_AF0_FUNC_TYPE_MASK 0x1
+
+#define SPNIC_AF0_GET(val, member) \
+ (((val) >> SPNIC_AF0_##member##_SHIFT) & SPNIC_AF0_##member##_MASK)
+
+#define SPNIC_AF1_PPF_IDX_SHIFT 0
+#define SPNIC_AF1_AEQS_PER_FUNC_SHIFT 8
+#define SPNIC_AF1_MGMT_INIT_STATUS_SHIFT 30
+#define SPNIC_AF1_PF_INIT_STATUS_SHIFT 31
+
+#define SPNIC_AF1_PPF_IDX_MASK 0x3F
+#define SPNIC_AF1_AEQS_PER_FUNC_MASK 0x3
+#define SPNIC_AF1_MGMT_INIT_STATUS_MASK 0x1
+#define SPNIC_AF1_PF_INIT_STATUS_MASK 0x1
+
+#define SPNIC_AF1_GET(val, member) \
+ (((val) >> SPNIC_AF1_##member##_SHIFT) & SPNIC_AF1_##member##_MASK)
+
+#define SPNIC_AF2_CEQS_PER_FUNC_SHIFT 0
+#define SPNIC_AF2_DMA_ATTR_PER_FUNC_SHIFT 9
+#define SPNIC_AF2_IRQS_PER_FUNC_SHIFT 16
+
+#define SPNIC_AF2_CEQS_PER_FUNC_MASK 0x1FF
+#define SPNIC_AF2_DMA_ATTR_PER_FUNC_MASK 0x7
+#define SPNIC_AF2_IRQS_PER_FUNC_MASK 0x7FF
+
+#define SPNIC_AF2_GET(val, member) \
+ (((val) >> SPNIC_AF2_##member##_SHIFT) & SPNIC_AF2_##member##_MASK)
+
+#define SPNIC_AF3_GLOBAL_VF_ID_OF_NXT_PF_SHIFT 0
+#define SPNIC_AF3_GLOBAL_VF_ID_OF_PF_SHIFT 16
+
+#define SPNIC_AF3_GLOBAL_VF_ID_OF_NXT_PF_MASK 0xFFF
+#define SPNIC_AF3_GLOBAL_VF_ID_OF_PF_MASK 0xFFF
+
+#define SPNIC_AF3_GET(val, member) \
+ (((val) >> SPNIC_AF3_##member##_SHIFT) & SPNIC_AF3_##member##_MASK)
+
+#define SPNIC_AF4_DOORBELL_CTRL_SHIFT 0
+#define SPNIC_AF4_DOORBELL_CTRL_MASK 0x1
+
+#define SPNIC_AF4_GET(val, member) \
+ (((val) >> SPNIC_AF4_##member##_SHIFT) & SPNIC_AF4_##member##_MASK)
+
+#define SPNIC_AF4_SET(val, member) \
+ (((val) & SPNIC_AF4_##member##_MASK) << SPNIC_AF4_##member##_SHIFT)
+
+#define SPNIC_AF4_CLEAR(val, member) \
+ ((val) & (~(SPNIC_AF4_##member##_MASK << \
+ SPNIC_AF4_##member##_SHIFT)))
+
+#define SPNIC_AF5_OUTBOUND_CTRL_SHIFT 0
+#define SPNIC_AF5_OUTBOUND_CTRL_MASK 0x1
+
+#define SPNIC_AF5_GET(val, member) \
+ (((val) >> SPNIC_AF5_##member##_SHIFT) & SPNIC_AF5_##member##_MASK)
+
+#define SPNIC_AF5_SET(val, member) \
+ (((val) & SPNIC_AF5_##member##_MASK) << SPNIC_AF5_##member##_SHIFT)
+
+#define SPNIC_AF5_CLEAR(val, member) \
+ ((val) & (~(SPNIC_AF5_##member##_MASK << \
+ SPNIC_AF5_##member##_SHIFT)))
+
+#define SPNIC_AF6_PF_STATUS_SHIFT 0
+#define SPNIC_AF6_PF_STATUS_MASK 0xFFFF
+
+#define SPNIC_AF6_SET(val, member) \
+ ((((u32)(val)) & SPNIC_AF6_##member##_MASK) << \
+ SPNIC_AF6_##member##_SHIFT)
+
+#define SPNIC_AF6_GET(val, member) \
+ (((val) >> SPNIC_AF6_##member##_SHIFT) & SPNIC_AF6_##member##_MASK)
+
+#define SPNIC_AF6_CLEAR(val, member) \
+ ((val) & (~(SPNIC_AF6_##member##_MASK << \
+ SPNIC_AF6_##member##_SHIFT)))
+
+#define SPNIC_PPF_ELECTION_IDX_SHIFT 0
+
+#define SPNIC_PPF_ELECTION_IDX_MASK 0x3F
+
+#define SPNIC_PPF_ELECTION_SET(val, member) \
+ (((val) & SPNIC_PPF_ELECTION_##member##_MASK) << \
+ SPNIC_PPF_ELECTION_##member##_SHIFT)
+
+#define SPNIC_PPF_ELECTION_GET(val, member) \
+ (((val) >> SPNIC_PPF_ELECTION_##member##_SHIFT) & \
+ SPNIC_PPF_ELECTION_##member##_MASK)
+
+#define SPNIC_PPF_ELECTION_CLEAR(val, member) \
+ ((val) & (~(SPNIC_PPF_ELECTION_##member##_MASK \
+ << SPNIC_PPF_ELECTION_##member##_SHIFT)))
+
+#define SPNIC_MPF_ELECTION_IDX_SHIFT 0
+
+#define SPNIC_MPF_ELECTION_IDX_MASK 0x1F
+
+#define SPNIC_MPF_ELECTION_SET(val, member) \
+ (((val) & SPNIC_MPF_ELECTION_##member##_MASK) << \
+ SPNIC_MPF_ELECTION_##member##_SHIFT)
+
+#define SPNIC_MPF_ELECTION_GET(val, member) \
+ (((val) >> SPNIC_MPF_ELECTION_##member##_SHIFT) & \
+ SPNIC_MPF_ELECTION_##member##_MASK)
+
+#define SPNIC_MPF_ELECTION_CLEAR(val, member) \
+ ((val) & (~(SPNIC_MPF_ELECTION_##member##_MASK \
+ << SPNIC_MPF_ELECTION_##member##_SHIFT)))
+
+#define SPNIC_GET_REG_FLAG(reg) ((reg) & (~(SPNIC_REGS_FLAG_MAKS)))
+
+#define SPNIC_GET_REG_ADDR(reg) ((reg) & (SPNIC_REGS_FLAG_MAKS))
+
+#define SPNIC_IS_VF_DEV(pdev) ((pdev)->id.device_id == SPNIC_DEV_ID_VF)
+
+u32 spnic_hwif_read_reg(struct spnic_hwif *hwif, u32 reg)
+{
+ if (SPNIC_GET_REG_FLAG(reg) == SPNIC_MGMT_REGS_FLAG)
+ return be32_to_cpu(rte_read32(hwif->mgmt_regs_base +
+ SPNIC_GET_REG_ADDR(reg)));
+ else
+ return be32_to_cpu(rte_read32(hwif->cfg_regs_base +
+ SPNIC_GET_REG_ADDR(reg)));
+}
+
+void spnic_hwif_write_reg(struct spnic_hwif *hwif, u32 reg, u32 val)
+{
+ if (SPNIC_GET_REG_FLAG(reg) == SPNIC_MGMT_REGS_FLAG)
+ rte_write32(cpu_to_be32(val),
+ hwif->mgmt_regs_base + SPNIC_GET_REG_ADDR(reg));
+ else
+ rte_write32(cpu_to_be32(val),
+ hwif->cfg_regs_base + SPNIC_GET_REG_ADDR(reg));
+}
+
+/**
+ * Check whether HW initialization has completed
+ *
+ * @param[in] hwdev
+ * The pointer to the private hardware device object
+ *
+ * @retval zero: Success
+ * @retval negative: Failure
+ */
+static int hwif_ready(struct spnic_hwdev *hwdev)
+{
+ u32 addr, attr1;
+
+ addr = SPNIC_CSR_FUNC_ATTR1_ADDR;
+ attr1 = spnic_hwif_read_reg(hwdev->hwif, addr);
+ if (attr1 == SPNIC_PCIE_LINK_DOWN)
+ return -EBUSY;
+
+ if (!SPNIC_AF1_GET(attr1, MGMT_INIT_STATUS))
+ return -EBUSY;
+
+ return 0;
+}
+
+static int wait_hwif_ready(struct spnic_hwdev *hwdev)
+{
+ ulong timeout = 0;
+
+ do {
+ if (!hwif_ready(hwdev))
+ return 0;
+
+ rte_delay_ms(1);
+ timeout++;
+ } while (timeout <= WAIT_HWIF_READY_TIMEOUT);
+
+ PMD_DRV_LOG(ERR, "Hwif is not ready");
+ return -EBUSY;
+}
+
+/**
+ * Set the attributes as members in hwif
+ *
+ * @param[in] hwif
+ * The hardware interface of a pci function device
+ * @param[in] attr0
+ * The first attribute that was read from the hw
+ * @param[in] attr1
+ * The second attribute that was read from the hw
+ * @param[in] attr2
+ * The third attribute that was read from the hw
+ * @param[in] attr3
+ * The fourth attribute that was read from the hw
+ */
+static void set_hwif_attr(struct spnic_hwif *hwif, u32 attr0, u32 attr1,
+ u32 attr2, u32 attr3)
+{
+ hwif->attr.func_global_idx = SPNIC_AF0_GET(attr0, FUNC_GLOBAL_IDX);
+ hwif->attr.port_to_port_idx = SPNIC_AF0_GET(attr0, P2P_IDX);
+ hwif->attr.pci_intf_idx = SPNIC_AF0_GET(attr0, PCI_INTF_IDX);
+ hwif->attr.vf_in_pf = SPNIC_AF0_GET(attr0, VF_IN_PF);
+ hwif->attr.func_type = SPNIC_AF0_GET(attr0, FUNC_TYPE);
+
+ hwif->attr.ppf_idx = SPNIC_AF1_GET(attr1, PPF_IDX);
+ hwif->attr.num_aeqs = BIT(SPNIC_AF1_GET(attr1, AEQS_PER_FUNC));
+
+ hwif->attr.num_ceqs = (u8)SPNIC_AF2_GET(attr2, CEQS_PER_FUNC);
+ hwif->attr.num_irqs = SPNIC_AF2_GET(attr2, IRQS_PER_FUNC);
+ hwif->attr.num_dma_attr = BIT(SPNIC_AF2_GET(attr2, DMA_ATTR_PER_FUNC));
+
+ hwif->attr.global_vf_id_of_pf = SPNIC_AF3_GET(attr3,
+ GLOBAL_VF_ID_OF_PF);
+}
+
+/**
+ * Read and set the attributes as members in hwif
+ *
+ * @param[in] hwif
+ * The hardware interface of a pci function device
+ */
+static void get_hwif_attr(struct spnic_hwif *hwif)
+{
+ u32 addr, attr0, attr1, attr2, attr3;
+
+ addr = SPNIC_CSR_FUNC_ATTR0_ADDR;
+ attr0 = spnic_hwif_read_reg(hwif, addr);
+
+ addr = SPNIC_CSR_FUNC_ATTR1_ADDR;
+ attr1 = spnic_hwif_read_reg(hwif, addr);
+
+ addr = SPNIC_CSR_FUNC_ATTR2_ADDR;
+ attr2 = spnic_hwif_read_reg(hwif, addr);
+
+ addr = SPNIC_CSR_FUNC_ATTR3_ADDR;
+ attr3 = spnic_hwif_read_reg(hwif, addr);
+
+ set_hwif_attr(hwif, attr0, attr1, attr2, attr3);
+}
+
+void spnic_set_pf_status(struct spnic_hwif *hwif, enum spnic_pf_status status)
+{
+ u32 attr6 = SPNIC_AF6_SET(status, PF_STATUS);
+ u32 addr = SPNIC_CSR_FUNC_ATTR6_ADDR;
+
+ if (hwif->attr.func_type == TYPE_VF)
+ return;
+
+ spnic_hwif_write_reg(hwif, addr, attr6);
+}
+
+enum spnic_pf_status spnic_get_pf_status(struct spnic_hwif *hwif)
+{
+ u32 attr6 = spnic_hwif_read_reg(hwif, SPNIC_CSR_FUNC_ATTR6_ADDR);
+
+ return SPNIC_AF6_GET(attr6, PF_STATUS);
+}
+
+static enum spnic_doorbell_ctrl
+spnic_get_doorbell_ctrl_status(struct spnic_hwif *hwif)
+{
+ u32 attr4 = spnic_hwif_read_reg(hwif, SPNIC_CSR_FUNC_ATTR4_ADDR);
+
+ return SPNIC_AF4_GET(attr4, DOORBELL_CTRL);
+}
+
+static enum spnic_outbound_ctrl
+spnic_get_outbound_ctrl_status(struct spnic_hwif *hwif)
+{
+ u32 attr5 = spnic_hwif_read_reg(hwif, SPNIC_CSR_FUNC_ATTR5_ADDR);
+
+ return SPNIC_AF5_GET(attr5, OUTBOUND_CTRL);
+}
+
+void spnic_enable_doorbell(struct spnic_hwif *hwif)
+{
+ u32 addr, attr4;
+
+ addr = SPNIC_CSR_FUNC_ATTR4_ADDR;
+ attr4 = spnic_hwif_read_reg(hwif, addr);
+
+ attr4 = SPNIC_AF4_CLEAR(attr4, DOORBELL_CTRL);
+ attr4 |= SPNIC_AF4_SET(ENABLE_DOORBELL, DOORBELL_CTRL);
+
+ spnic_hwif_write_reg(hwif, addr, attr4);
+}
+
+void spnic_disable_doorbell(struct spnic_hwif *hwif)
+{
+ u32 addr, attr4;
+
+ addr = SPNIC_CSR_FUNC_ATTR4_ADDR;
+ attr4 = spnic_hwif_read_reg(hwif, addr);
+
+ attr4 = SPNIC_AF4_CLEAR(attr4, DOORBELL_CTRL);
+ attr4 |= SPNIC_AF4_SET(DISABLE_DOORBELL, DOORBELL_CTRL);
+
+ spnic_hwif_write_reg(hwif, addr, attr4);
+}
+
+/**
+ * Try to set hwif as ppf and set the type of hwif in this case
+ *
+ * @param[in] hwif
+ * The hardware interface of a pci function device
+ */
+static void set_ppf(struct spnic_hwif *hwif)
+{
+ struct spnic_func_attr *attr = &hwif->attr;
+ u32 addr, val, ppf_election;
+
+ addr = SPNIC_CSR_PPF_ELECTION_ADDR;
+
+ val = spnic_hwif_read_reg(hwif, addr);
+ val = SPNIC_PPF_ELECTION_CLEAR(val, IDX);
+
+ ppf_election = SPNIC_PPF_ELECTION_SET(attr->func_global_idx, IDX);
+ val |= ppf_election;
+
+ spnic_hwif_write_reg(hwif, addr, val);
+
+ /* Check PPF */
+ val = spnic_hwif_read_reg(hwif, addr);
+
+ attr->ppf_idx = SPNIC_PPF_ELECTION_GET(val, IDX);
+ if (attr->ppf_idx == attr->func_global_idx)
+ attr->func_type = TYPE_PPF;
+}
+
+/**
+ * Get the mpf index from the hwif
+ *
+ * @param[in] hwif
+ * The hardware interface of a pci function device
+ */
+static void get_mpf(struct spnic_hwif *hwif)
+{
+ struct spnic_func_attr *attr = &hwif->attr;
+ u32 mpf_election, addr;
+
+ addr = SPNIC_CSR_GLOBAL_MPF_ELECTION_ADDR;
+
+ mpf_election = spnic_hwif_read_reg(hwif, addr);
+ attr->mpf_idx = SPNIC_MPF_ELECTION_GET(mpf_election, IDX);
+}
+
+/**
+ * Try to set hwif as mpf and set the mpf idx in hwif
+ *
+ * @param[in] hwif
+ * The hardware interface of a pci function device
+ */
+static void set_mpf(struct spnic_hwif *hwif)
+{
+ struct spnic_func_attr *attr = &hwif->attr;
+ u32 addr, val, mpf_election;
+
+ addr = SPNIC_CSR_GLOBAL_MPF_ELECTION_ADDR;
+
+ val = spnic_hwif_read_reg(hwif, addr);
+
+ val = SPNIC_MPF_ELECTION_CLEAR(val, IDX);
+ mpf_election = SPNIC_MPF_ELECTION_SET(attr->func_global_idx, IDX);
+
+ val |= mpf_election;
+ spnic_hwif_write_reg(hwif, addr, val);
+}
+
+static void init_db_area_idx(struct spnic_free_db_area *free_db_area,
+ u64 db_dwqe_len)
+{
+ u32 i, db_max_areas;
+
+ db_max_areas = (db_dwqe_len > SPNIC_DB_DWQE_SIZE) ?
+ SPNIC_DB_MAX_AREAS :
+ (u32)(db_dwqe_len / SPNIC_DB_PAGE_SIZE);
+
+ for (i = 0; i < db_max_areas; i++)
+ free_db_area->db_idx[i] = i;
+
+ free_db_area->num_free = db_max_areas;
+ free_db_area->db_max_areas = db_max_areas;
+
+ rte_spinlock_init(&free_db_area->idx_lock);
+}
+
+static int get_db_idx(struct spnic_hwif *hwif, u32 *idx)
+{
+ struct spnic_free_db_area *free_db_area = &hwif->free_db_area;
+ u32 pos;
+ u32 pg_idx;
+
+ rte_spinlock_lock(&free_db_area->idx_lock);
+
+ do {
+ if (free_db_area->num_free == 0) {
+ rte_spinlock_unlock(&free_db_area->idx_lock);
+ return -ENOMEM;
+ }
+
+ free_db_area->num_free--;
+
+ pos = free_db_area->alloc_pos++;
+ /* Doorbell max areas should be 2^n */
+ pos &= free_db_area->db_max_areas - 1;
+
+ pg_idx = free_db_area->db_idx[pos];
+
+ free_db_area->db_idx[pos] = 0xFFFFFFFF;
+ } while (pg_idx >= free_db_area->db_max_areas);
+
+ rte_spinlock_unlock(&free_db_area->idx_lock);
+
+ *idx = pg_idx;
+
+ return 0;
+}
+
+static void free_db_idx(struct spnic_hwif *hwif, u32 idx)
+{
+ struct spnic_free_db_area *free_db_area = &hwif->free_db_area;
+ u32 pos;
+
+ if (idx >= free_db_area->db_max_areas)
+ return;
+
+ rte_spinlock_lock(&free_db_area->idx_lock);
+
+ pos = free_db_area->return_pos++;
+ pos &= free_db_area->db_max_areas - 1;
+
+ free_db_area->db_idx[pos] = idx;
+
+ free_db_area->num_free++;
+
+ rte_spinlock_unlock(&free_db_area->idx_lock);
+}
+
+void spnic_free_db_addr(void *hwdev, const void *db_base,
+ __rte_unused void *dwqe_base)
+{
+ struct spnic_hwif *hwif = NULL;
+ u32 idx;
+
+ if (!hwdev || !db_base)
+ return;
+
+ hwif = ((struct spnic_hwdev *)hwdev)->hwif;
+ idx = DB_IDX(db_base, hwif->db_base);
+
+ free_db_idx(hwif, idx);
+}
+
+int spnic_alloc_db_addr(void *hwdev, void **db_base, void **dwqe_base)
+{
+ struct spnic_hwif *hwif = NULL;
+ u32 idx;
+ int err;
+
+ if (!hwdev || !db_base)
+ return -EINVAL;
+
+ hwif = ((struct spnic_hwdev *)hwdev)->hwif;
+
+ err = get_db_idx(hwif, &idx);
+ if (err)
+ return -EFAULT;
+
+ *db_base = hwif->db_base + idx * SPNIC_DB_PAGE_SIZE;
+
+ if (!dwqe_base)
+ return 0;
+
+ *dwqe_base = (u8 *)*db_base + SPNIC_DWQE_OFFSET;
+
+ return 0;
+}
+
+/**
+ * Set msix state
+ *
+ * @param[in] hwdev
+ * The pointer to the private hardware device object
+ * @param[in] msix_idx
+ * MSIX index
+ * @param[in] flag
+ * MSIX state flag, 0-enable, 1-disable
+ */
+void spnic_set_msix_state(void *hwdev, u16 msix_idx, enum spnic_msix_state flag)
+{
+ struct spnic_hwif *hwif = NULL;
+ u32 mask_bits;
+ u32 addr;
+ u8 int_msk = 1;
+
+ if (!hwdev)
+ return;
+
+ hwif = ((struct spnic_hwdev *)hwdev)->hwif;
+
+ if (flag)
+ mask_bits = SPNIC_MSI_CLR_INDIR_SET(int_msk, INT_MSK_SET);
+ else
+ mask_bits = SPNIC_MSI_CLR_INDIR_SET(int_msk, INT_MSK_CLR);
+ mask_bits = mask_bits |
+ SPNIC_MSI_CLR_INDIR_SET(msix_idx, SIMPLE_INDIR_IDX);
+
+ addr = SPNIC_CSR_FUNC_MSI_CLR_WR_ADDR;
+ spnic_hwif_write_reg(hwif, addr, mask_bits);
+}
+
+static void disable_all_msix(struct spnic_hwdev *hwdev)
+{
+ u16 num_irqs = hwdev->hwif->attr.num_irqs;
+ u16 i;
+
+ for (i = 0; i < num_irqs; i++)
+ spnic_set_msix_state(hwdev, i, SPNIC_MSIX_DISABLE);
+}
+
+void spnic_misx_intr_clear_resend_bit(void *hwdev, u16 msix_idx,
+ u8 clear_resend_en)
+{
+ struct spnic_hwif *hwif = NULL;
+ u32 msix_ctrl = 0, addr;
+
+ if (!hwdev)
+ return;
+
+ hwif = ((struct spnic_hwdev *)hwdev)->hwif;
+
+ msix_ctrl = SPNIC_MSI_CLR_INDIR_SET(msix_idx, SIMPLE_INDIR_IDX) |
+ SPNIC_MSI_CLR_INDIR_SET(clear_resend_en, RESEND_TIMER_CLR);
+
+ addr = SPNIC_CSR_FUNC_MSI_CLR_WR_ADDR;
+ spnic_hwif_write_reg(hwif, addr, msix_ctrl);
+}
+#ifdef SPNIC_RELEASE
+static int wait_until_doorbell_flush_states(struct spnic_hwif *hwif,
+ enum spnic_doorbell_ctrl states)
+{
+ enum spnic_doorbell_ctrl db_ctrl;
+ u32 cnt = 0;
+
+ if (!hwif)
+ return -EINVAL;
+
+ while (cnt < SPNIC_WAIT_DOORBELL_AND_OUTBOUND_TIMEOUT) {
+ db_ctrl = spnic_get_doorbell_ctrl_status(hwif);
+ if (db_ctrl == states)
+ return 0;
+
+ rte_delay_ms(1);
+ cnt++;
+ }
+
+ return -EFAULT;
+}
+#endif
+
+static int wait_until_doorbell_and_outbound_enabled(struct spnic_hwif *hwif)
+{
+ enum spnic_doorbell_ctrl db_ctrl;
+ enum spnic_outbound_ctrl outbound_ctrl;
+ u32 cnt = 0;
+
+ while (cnt < SPNIC_WAIT_DOORBELL_AND_OUTBOUND_TIMEOUT) {
+ db_ctrl = spnic_get_doorbell_ctrl_status(hwif);
+ outbound_ctrl = spnic_get_outbound_ctrl_status(hwif);
+ if (outbound_ctrl == ENABLE_OUTBOUND &&
+ db_ctrl == ENABLE_DOORBELL)
+ return 0;
+
+ rte_delay_ms(1);
+ cnt++;
+ }
+
+ return -EFAULT;
+}
+
+static void spnic_get_bar_addr(struct spnic_hwdev *hwdev)
+{
+ struct rte_pci_device *pci_dev = hwdev->pci_dev;
+ struct spnic_hwif *hwif = hwdev->hwif;
+ void *cfg_regs_base = NULL;
+ void *mgmt_reg_base = NULL;
+ void *intr_reg_base = NULL;
+ void *db_base = NULL;
+ int cfg_bar;
+
+ cfg_bar = SPNIC_IS_VF_DEV(pci_dev) ?
+ SPNIC_VF_PCI_CFG_REG_BAR : SPNIC_PF_PCI_CFG_REG_BAR;
+
+ cfg_regs_base = pci_dev->mem_resource[cfg_bar].addr;
+ intr_reg_base = pci_dev->mem_resource[SPNIC_PCI_INTR_REG_BAR].addr;
+ if (!SPNIC_IS_VF_DEV(pci_dev)) {
+ mgmt_reg_base =
+ pci_dev->mem_resource[SPNIC_PCI_MGMT_REG_BAR].addr;
+ }
+ db_base = pci_dev->mem_resource[SPNIC_PCI_DB_BAR].addr;
+
+ /* If function is VF, mgmt_regs_base will be NULL */
+ if (!mgmt_reg_base)
+ hwif->cfg_regs_base = (u8 *)cfg_regs_base +
+ SPNIC_VF_CFG_REG_OFFSET;
+ else
+ hwif->cfg_regs_base = cfg_regs_base;
+ hwif->intr_regs_base = intr_reg_base;
+ hwif->mgmt_regs_base = mgmt_reg_base;
+ hwif->db_base = db_base;
+ hwif->db_dwqe_len = pci_dev->mem_resource[SPNIC_PCI_DB_BAR].len;
+}
+
+/**
+ * Initialize the hw interface
+ *
+ * @param[in] hwdev
+ * The pointer to the private hardware device object
+ *
+ * @retval zero : Success
+ * @retval non-zero : Failure.
+ */
+int spnic_init_hwif(void *dev)
+{
+ struct spnic_hwdev *hwdev = NULL;
+ struct spnic_hwif *hwif;
+ int err;
+
+ hwif = rte_zmalloc("spnic_hwif", sizeof(struct spnic_hwif),
+ RTE_CACHE_LINE_SIZE);
+ if (!hwif)
+ return -ENOMEM;
+
+ hwdev = (struct spnic_hwdev *)dev;
+ hwdev->hwif = hwif;
+
+ spnic_get_bar_addr(hwdev);
+
+ init_db_area_idx(&hwif->free_db_area, hwif->db_dwqe_len);
+
+ err = wait_hwif_ready(hwdev);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Chip status is not ready");
+ goto hwif_ready_err;
+ }
+
+ get_hwif_attr(hwif);
+
+ err = wait_until_doorbell_and_outbound_enabled(hwif);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Hw doorbell/outbound is disabled");
+ goto hwif_ready_err;
+ }
+
+ if (!SPNIC_IS_VF(hwdev)) {
+ set_ppf(hwif);
+
+ if (SPNIC_IS_PPF(hwdev))
+ set_mpf(hwif);
+
+ get_mpf(hwif);
+ }
+
+ disable_all_msix(hwdev);
+ /* Disable mgmt cpu reporting any event */
+ spnic_set_pf_status(hwdev->hwif, SPNIC_PF_STATUS_INIT);
+
+ PMD_DRV_LOG(INFO, "global_func_idx: %d, func_type: %d, host_id: %d, ppf: %d, mpf: %d",
+ hwif->attr.func_global_idx, hwif->attr.func_type,
+ hwif->attr.pci_intf_idx, hwif->attr.ppf_idx,
+ hwif->attr.mpf_idx);
+
+ return 0;
+
+hwif_ready_err:
+ rte_free(hwdev->hwif);
+ hwdev->hwif = NULL;
+
+ return err;
+}
+
+/**
+ * Free the hw interface
+ *
+ * @param[in] dev
+ * The pointer to the private hardware device object
+ */
+void spnic_free_hwif(void *dev)
+{
+ struct spnic_hwdev *hwdev = (struct spnic_hwdev *)dev;
+
+ rte_free(hwdev->hwif);
+}
+
+u16 spnic_global_func_id(void *hwdev)
+{
+ struct spnic_hwif *hwif = NULL;
+
+ if (!hwdev)
+ return 0;
+
+ hwif = ((struct spnic_hwdev *)hwdev)->hwif;
+
+ return hwif->attr.func_global_idx;
+}
+
+u8 spnic_pf_id_of_vf(void *hwdev)
+{
+ struct spnic_hwif *hwif = NULL;
+
+ if (!hwdev)
+ return 0;
+
+ hwif = ((struct spnic_hwdev *)hwdev)->hwif;
+
+ return hwif->attr.port_to_port_idx;
+}
+
+u8 spnic_pcie_itf_id(void *hwdev)
+{
+ struct spnic_hwif *hwif = NULL;
+
+ if (!hwdev)
+ return 0;
+
+ hwif = ((struct spnic_hwdev *)hwdev)->hwif;
+
+ return hwif->attr.pci_intf_idx;
+}
+
+enum func_type spnic_func_type(void *hwdev)
+{
+ struct spnic_hwif *hwif = NULL;
+
+ if (!hwdev)
+ return 0;
+
+ hwif = ((struct spnic_hwdev *)hwdev)->hwif;
+
+ return hwif->attr.func_type;
+}
+
+u16 spnic_glb_pf_vf_offset(void *hwdev)
+{
+ struct spnic_hwif *hwif = NULL;
+
+ if (!hwdev)
+ return 0;
+
+ hwif = ((struct spnic_hwdev *)hwdev)->hwif;
+
+ return hwif->attr.global_vf_id_of_pf;
+}
diff --git a/drivers/net/spnic/base/spnic_hwif.h b/drivers/net/spnic/base/spnic_hwif.h
new file mode 100644
index 0000000000..6755e1377a
--- /dev/null
+++ b/drivers/net/spnic/base/spnic_hwif.h
@@ -0,0 +1,155 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Ramaxel Memory Technology, Ltd
+ */
+
+#ifndef _SPNIC_HWIF_H_
+#define _SPNIC_HWIF_H_
+
+#define SPNIC_WAIT_DOORBELL_AND_OUTBOUND_TIMEOUT 60000
+#define SPNIC_PCIE_LINK_DOWN 0xFFFFFFFF
+
+/* PCIe bar space */
+#define SPNIC_VF_PCI_CFG_REG_BAR 0
+#define SPNIC_PF_PCI_CFG_REG_BAR 1
+
+#define SPNIC_PCI_INTR_REG_BAR 2
+#define SPNIC_PCI_MGMT_REG_BAR 3 /* Only PF has mgmt bar */
+#define SPNIC_PCI_DB_BAR 4
+
+#define SPNIC_DB_DWQE_SIZE 0x00400000
+
+/* Doorbell or direct wqe page size is 4K */
+#define SPNIC_DB_PAGE_SIZE 0x00001000ULL
+#define SPNIC_DWQE_OFFSET 0x00000800ULL
+
+#define SPNIC_DB_MAX_AREAS (SPNIC_DB_DWQE_SIZE / SPNIC_DB_PAGE_SIZE)
+
+enum func_type {
+ TYPE_PF,
+ TYPE_VF,
+ TYPE_PPF,
+ TYPE_UNKNOWN
+};
+
+enum spnic_msix_state {
+ SPNIC_MSIX_ENABLE,
+ SPNIC_MSIX_DISABLE
+};
+
+struct spnic_free_db_area {
+ u32 db_idx[SPNIC_DB_MAX_AREAS];
+
+ u32 num_free;
+
+ u32 alloc_pos;
+ u32 return_pos;
+ u32 db_max_areas;
+
+ /* Spinlock for allocating doorbell area */
+ rte_spinlock_t idx_lock;
+};
+
+struct spnic_func_attr {
+ u16 func_global_idx;
+ u8 port_to_port_idx;
+ u8 pci_intf_idx;
+ u8 vf_in_pf;
+ enum func_type func_type;
+
+ u8 mpf_idx;
+
+ u8 ppf_idx;
+
+ u16 num_irqs; /* Max: 2 ^ 15 */
+ u8 num_aeqs; /* Max: 2 ^ 3 */
+ u8 num_ceqs; /* Max: 2 ^ 7 */
+
+ u8 num_dma_attr; /* Max: 2 ^ 6 */
+
+ u16 global_vf_id_of_pf;
+};
+
+struct spnic_hwif {
+ /* Configure virtual address, PF is bar1, VF is bar0/1 */
+ u8 *cfg_regs_base;
+ /* Interrupt configuration register address, PF is bar2, VF is bar2/3 */
+ u8 *intr_regs_base;
+ /* For PF bar3 virtual address, if function is VF should set NULL */
+ u8 *mgmt_regs_base;
+ u8 *db_base;
+ u64 db_dwqe_len;
+ struct spnic_free_db_area free_db_area;
+
+ struct spnic_func_attr attr;
+
+ void *pdev;
+};
+
+enum spnic_outbound_ctrl {
+ ENABLE_OUTBOUND = 0x0,
+ DISABLE_OUTBOUND = 0x1
+};
+
+enum spnic_doorbell_ctrl {
+ ENABLE_DOORBELL = 0x0,
+ DISABLE_DOORBELL = 0x1
+};
+
+enum spnic_pf_status {
+ SPNIC_PF_STATUS_INIT = 0X0,
+ SPNIC_PF_STATUS_ACTIVE_FLAG = 0x11,
+ SPNIC_PF_STATUS_FLR_START_FLAG = 0x12,
+ SPNIC_PF_STATUS_FLR_FINISH_FLAG = 0x13
+};
+
+#define SPNIC_HWIF_NUM_AEQS(hwif) ((hwif)->attr.num_aeqs)
+#define SPNIC_HWIF_NUM_IRQS(hwif) ((hwif)->attr.num_irqs)
+#define SPNIC_HWIF_GLOBAL_IDX(hwif) ((hwif)->attr.func_global_idx)
+#define SPNIC_HWIF_GLOBAL_VF_OFFSET(hwif) ((hwif)->attr.global_vf_id_of_pf)
+#define SPNIC_HWIF_PPF_IDX(hwif) ((hwif)->attr.ppf_idx)
+#define SPNIC_PCI_INTF_IDX(hwif) ((hwif)->attr.pci_intf_idx)
+
+#define SPNIC_FUNC_TYPE(dev) ((dev)->hwif->attr.func_type)
+#define SPNIC_IS_PF(dev) (SPNIC_FUNC_TYPE(dev) == TYPE_PF)
+#define SPNIC_IS_VF(dev) (SPNIC_FUNC_TYPE(dev) == TYPE_VF)
+#define SPNIC_IS_PPF(dev) (SPNIC_FUNC_TYPE(dev) == TYPE_PPF)
+
+u32 spnic_hwif_read_reg(struct spnic_hwif *hwif, u32 reg);
+
+void spnic_hwif_write_reg(struct spnic_hwif *hwif, u32 reg, u32 val);
+
+void spnic_set_msix_state(void *hwdev, u16 msix_idx,
+ enum spnic_msix_state flag);
+
+void spnic_misx_intr_clear_resend_bit(void *hwdev, u16 msix_idx,
+ u8 clear_resend_en);
+
+u16 spnic_global_func_id(void *hwdev);
+
+u8 spnic_pf_id_of_vf(void *hwdev);
+
+u8 spnic_pcie_itf_id(void *hwdev);
+
+enum func_type spnic_func_type(void *hwdev);
+
+u16 spnic_glb_pf_vf_offset(void *hwdev);
+
+void spnic_set_pf_status(struct spnic_hwif *hwif,
+ enum spnic_pf_status status);
+
+enum spnic_pf_status spnic_get_pf_status(struct spnic_hwif *hwif);
+
+int spnic_alloc_db_addr(void *hwdev, void **db_base, void **dwqe_base);
+
+void spnic_free_db_addr(void *hwdev, const void *db_base,
+ __rte_unused void *dwqe_base);
+
+void spnic_disable_doorbell(struct spnic_hwif *hwif);
+
+void spnic_enable_doorbell(struct spnic_hwif *hwif);
+
+int spnic_init_hwif(void *dev);
+
+void spnic_free_hwif(void *dev);
+
+#endif /* _SPNIC_HWIF_H_ */
diff --git a/drivers/net/spnic/spnic_ethdev.c b/drivers/net/spnic/spnic_ethdev.c
index b06492a8e9..228ed0c936 100644
--- a/drivers/net/spnic/spnic_ethdev.c
+++ b/drivers/net/spnic/spnic_ethdev.c
@@ -9,15 +9,48 @@
#include <rte_ether.h>
#include "base/spnic_compat.h"
+#include "base/spnic_csr.h"
+#include "base/spnic_hwdev.h"
+#include "base/spnic_hwif.h"
+
#include "spnic_ethdev.h"
/* Driver-specific log messages type */
int spnic_logtype;
+#define SPNIC_MAX_UC_MAC_ADDRS 128
+#define SPNIC_MAX_MC_MAC_ADDRS 128
+
+/**
+ * Close the device.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ */
+static int spnic_dev_close(struct rte_eth_dev *eth_dev)
+{
+ struct spnic_nic_dev *nic_dev =
+ SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(eth_dev);
+
+ if (rte_bit_relaxed_test_and_set32(SPNIC_DEV_CLOSE, &nic_dev->dev_status)) {
+ PMD_DRV_LOG(WARNING, "Device %s already closed",
+ nic_dev->dev_name);
+ return 0;
+ }
+
+ spnic_free_hwdev(nic_dev->hwdev);
+
+ rte_free(nic_dev->hwdev);
+ nic_dev->hwdev = NULL;
+
+ return 0;
+}
+
static int spnic_func_init(struct rte_eth_dev *eth_dev)
{
struct spnic_nic_dev *nic_dev = NULL;
struct rte_pci_device *pci_dev = NULL;
+ int err;
pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
@@ -35,11 +68,42 @@ static int spnic_func_init(struct rte_eth_dev *eth_dev)
pci_dev->addr.domain, pci_dev->addr.bus,
pci_dev->addr.devid, pci_dev->addr.function);
+ eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
+ /* Create hardware device */
+ nic_dev->hwdev = rte_zmalloc("spnic_hwdev", sizeof(*nic_dev->hwdev),
+ RTE_CACHE_LINE_SIZE);
+ if (!nic_dev->hwdev) {
+ PMD_DRV_LOG(ERR, "Allocate hwdev memory failed, dev_name: %s",
+ eth_dev->data->name);
+ err = -ENOMEM;
+ goto alloc_hwdev_mem_fail;
+ }
+ nic_dev->hwdev->pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ nic_dev->hwdev->dev_handle = nic_dev;
+ nic_dev->hwdev->eth_dev = eth_dev;
+ nic_dev->hwdev->port_id = eth_dev->data->port_id;
+
+ err = spnic_init_hwdev(nic_dev->hwdev);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Init chip hwdev failed, dev_name: %s",
+ eth_dev->data->name);
+ goto init_hwdev_fail;
+ }
+
rte_bit_relaxed_set32(SPNIC_DEV_INIT, &nic_dev->dev_status);
PMD_DRV_LOG(INFO, "Initialize %s in primary succeed",
eth_dev->data->name);
return 0;
+
+init_hwdev_fail:
+ rte_free(nic_dev->hwdev);
+ nic_dev->hwdev = NULL;
+
+alloc_hwdev_mem_fail:
+ PMD_DRV_LOG(ERR, "Initialize %s in primary failed",
+ eth_dev->data->name);
+ return err;
}
static int spnic_dev_init(struct rte_eth_dev *eth_dev)
@@ -67,6 +131,8 @@ static int spnic_dev_uninit(struct rte_eth_dev *dev)
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
return 0;
+ spnic_dev_close(dev);
+
return 0;
}
diff --git a/drivers/net/spnic/spnic_ethdev.h b/drivers/net/spnic/spnic_ethdev.h
index d4ec641d83..654234aaa4 100644
--- a/drivers/net/spnic/spnic_ethdev.h
+++ b/drivers/net/spnic/spnic_ethdev.h
@@ -5,21 +5,55 @@
#ifndef _SPNIC_ETHDEV_H_
#define _SPNIC_ETHDEV_H_
-/* Vendor id */
-#define PCI_VENDOR_ID_RAMAXEL 0x1E81
-
-/* Device ids */
-#define SPNIC_DEV_ID_PF 0x9020
-#define SPNIC_DEV_ID_VF 0x9001
+#define SPNIC_UINT32_BIT_SIZE (CHAR_BIT * sizeof(uint32_t))
+#define SPNIC_VFTA_SIZE (4096 / SPNIC_UINT32_BIT_SIZE)
+#define SPNIC_MAX_QUEUE_NUM 64
enum spnic_dev_status {
- SPNIC_DEV_INIT
+ SPNIC_DEV_INIT,
+ SPNIC_DEV_CLOSE,
+ SPNIC_DEV_START,
+ SPNIC_DEV_INTR_EN
};
#define SPNIC_DEV_NAME_LEN 32
struct spnic_nic_dev {
+ struct spnic_hwdev *hwdev; /* Hardware device */
+
+ struct spnic_txq **txqs;
+ struct spnic_rxq **rxqs;
+ struct rte_mempool *cpy_mpool;
+
+ u16 num_sqs;
+ u16 num_rqs;
+ u16 max_sqs;
+ u16 max_rqs;
+
+ u16 rx_buff_len;
+ u16 mtu_size;
+
+ u16 rss_state;
+ u8 num_rss;
+ u8 rsvd0;
+
+ u32 rx_mode;
+ u8 rx_queue_list[SPNIC_MAX_QUEUE_NUM];
+ rte_spinlock_t queue_list_lock;
+ pthread_mutex_t rx_mode_mutex;
+
+ u32 default_cos;
+ u32 rx_csum_en;
+
u32 dev_status;
+
+ bool pause_set;
+ pthread_mutex_t pause_mutuex;
+
+ struct rte_ether_addr default_addr;
+ struct rte_ether_addr *mc_list;
+
char dev_name[SPNIC_DEV_NAME_LEN];
+ u32 vfta[SPNIC_VFTA_SIZE]; /* VLAN bitmap */
};
#define SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev) \
--
2.27.0
* [PATCH v1 03/25] net/spnic: add mbox message channel
2021-12-18 2:51 [PATCH v1 00/25] Net/SPNIC: support SPNIC into DPDK 22.03 Yanling Song
2021-12-18 2:51 ` [PATCH v1 01/25] drivers/net: introduce a new PMD driver Yanling Song
2021-12-18 2:51 ` [PATCH v1 02/25] net/spnic: initialize the HW interface Yanling Song
@ 2021-12-18 2:51 ` Yanling Song
2021-12-18 2:51 ` [PATCH v1 04/25] net/spnic: introduce event queue Yanling Song
` (21 subsequent siblings)
24 siblings, 0 replies; 32+ messages in thread
From: Yanling Song @ 2021-12-18 2:51 UTC (permalink / raw)
To: dev; +Cc: songyl, yanling.song, yanggan, ferruh.yigit
This patch adds a message channel named mbox, which is used to send
messages from the PF/VF driver to the hardware, or from a VF to its PF.
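
For illustration only (not part of the patch), this is how the mailbox write-back status word
defined in spnic_mbox.c is decoded; the value of wb is made up:

    u32 wb = 0x04FE;               /* hypothetical write-back word */

    MBOX_STATUS_FINISHED(wb);      /* true: low byte is not 0x00 (NOT_FINISHED)  */
    MBOX_STATUS_SUCCESS(wb);       /* false: low byte is 0xFE, not 0xFF          */
    MBOX_STATUS_ERRCODE(wb);       /* 0x0400: error code reported by hardware    */
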
Signed-off-by: Yanling Song <songyl@ramaxel.com>
---
drivers/net/spnic/base/meson.build | 3 +-
drivers/net/spnic/base/spnic_hwdev.c | 69 ++
drivers/net/spnic/base/spnic_hwdev.h | 6 +
drivers/net/spnic/base/spnic_mbox.c | 1158 ++++++++++++++++++++++++++
drivers/net/spnic/base/spnic_mbox.h | 202 +++++
drivers/net/spnic/base/spnic_mgmt.h | 36 +
6 files changed, 1473 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/spnic/base/spnic_mbox.c
create mode 100644 drivers/net/spnic/base/spnic_mbox.h
create mode 100644 drivers/net/spnic/base/spnic_mgmt.h
diff --git a/drivers/net/spnic/base/meson.build b/drivers/net/spnic/base/meson.build
index edd6e94772..de80eef7c4 100644
--- a/drivers/net/spnic/base/meson.build
+++ b/drivers/net/spnic/base/meson.build
@@ -3,7 +3,8 @@
sources = [
'spnic_hwdev.c',
- 'spnic_hwif.c'
+ 'spnic_hwif.c',
+ 'spnic_mbox.c'
]
extra_flags = []
diff --git a/drivers/net/spnic/base/spnic_hwdev.c b/drivers/net/spnic/base/spnic_hwdev.c
index de73f244fd..bcecbaa895 100644
--- a/drivers/net/spnic/base/spnic_hwdev.c
+++ b/drivers/net/spnic/base/spnic_hwdev.c
@@ -5,8 +5,66 @@
#include "spnic_compat.h"
#include "spnic_csr.h"
#include "spnic_hwif.h"
+#include "spnic_mgmt.h"
+#include "spnic_mbox.h"
#include "spnic_hwdev.h"
+int vf_handle_pf_comm_mbox(void *handle, __rte_unused void *pri_handle,
+ __rte_unused u16 cmd, __rte_unused void *buf_in,
+ __rte_unused u16 in_size, __rte_unused void *buf_out,
+ __rte_unused u16 *out_size)
+{
+ struct spnic_hwdev *hwdev = handle;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ PMD_DRV_LOG(WARNING, "Unsupported pf mbox event %d to process", cmd);
+
+ return 0;
+}
+
+static int init_mgmt_channel(struct spnic_hwdev *hwdev)
+{
+ int err;
+
+ err = spnic_func_to_func_init(hwdev);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Init mailbox channel failed");
+ goto func_to_func_init_err;
+ }
+
+ return 0;
+
+func_to_func_init_err:
+
+ return err;
+}
+
+static void free_mgmt_channel(struct spnic_hwdev *hwdev)
+{
+ spnic_func_to_func_free(hwdev);
+}
+
+
+static int spnic_init_comm_ch(struct spnic_hwdev *hwdev)
+{
+ int err;
+
+ err = init_mgmt_channel(hwdev);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Init mgmt channel failed");
+ return err;
+ }
+
+ return 0;
+}
+
+static void spnic_uninit_comm_ch(struct spnic_hwdev *hwdev)
+{
+ free_mgmt_channel(hwdev);
+}
+
int spnic_init_hwdev(struct spnic_hwdev *hwdev)
{
int err;
@@ -25,8 +83,17 @@ int spnic_init_hwdev(struct spnic_hwdev *hwdev)
goto init_hwif_err;
}
+ err = spnic_init_comm_ch(hwdev);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Init communication channel failed");
+ goto init_comm_ch_err;
+ }
+
return 0;
+init_comm_ch_err:
+ spnic_free_hwif(hwdev);
+
init_hwif_err:
rte_free(hwdev->chip_fault_stats);
@@ -35,6 +102,8 @@ int spnic_init_hwdev(struct spnic_hwdev *hwdev)
void spnic_free_hwdev(struct spnic_hwdev *hwdev)
{
+ spnic_uninit_comm_ch(hwdev);
+
spnic_free_hwif(hwdev);
rte_free(hwdev->chip_fault_stats);
diff --git a/drivers/net/spnic/base/spnic_hwdev.h b/drivers/net/spnic/base/spnic_hwdev.h
index a6cb8bc36e..b3a8b32287 100644
--- a/drivers/net/spnic/base/spnic_hwdev.h
+++ b/drivers/net/spnic/base/spnic_hwdev.h
@@ -17,12 +17,18 @@ struct spnic_hwdev {
uint16_t port_id;
struct spnic_hwif *hwif;
+ struct spnic_mbox *func_to_func;
u8 *chip_fault_stats;
u16 max_vfs;
u16 link_status;
};
+int vf_handle_pf_comm_mbox(void *handle, __rte_unused void *pri_handle,
+ __rte_unused u16 cmd, __rte_unused void *buf_in,
+ __rte_unused u16 in_size, __rte_unused void *buf_out,
+ __rte_unused u16 *out_size);
+
int spnic_init_hwdev(struct spnic_hwdev *hwdev);
void spnic_free_hwdev(struct spnic_hwdev *hwdev);
diff --git a/drivers/net/spnic/base/spnic_mbox.c b/drivers/net/spnic/base/spnic_mbox.c
new file mode 100644
index 0000000000..d019612cef
--- /dev/null
+++ b/drivers/net/spnic/base/spnic_mbox.c
@@ -0,0 +1,1158 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Ramaxel Memory Technology, Ltd
+ */
+
+#include <rte_atomic.h>
+#include <ethdev_driver.h>
+#include "spnic_compat.h"
+#include "spnic_hwdev.h"
+#include "spnic_csr.h"
+#include "spnic_hwif.h"
+#include "spnic_mgmt.h"
+#include "spnic_mbox.h"
+
+#define SPNIC_MBOX_INT_DST_FUNC_SHIFT 0
+#define SPNIC_MBOX_INT_DST_AEQN_SHIFT 10
+#define SPNIC_MBOX_INT_SRC_RESP_AEQN_SHIFT 12
+#define SPNIC_MBOX_INT_STAT_DMA_SHIFT 14
+/* The size of data to be sent (in units of 4 bytes) */
+#define SPNIC_MBOX_INT_TX_SIZE_SHIFT 20
+/* SO_RO(strong order, relax order) */
+#define SPNIC_MBOX_INT_STAT_DMA_SO_RO_SHIFT 25
+#define SPNIC_MBOX_INT_WB_EN_SHIFT 28
+
+#define SPNIC_MBOX_INT_DST_AEQN_MASK 0x3
+#define SPNIC_MBOX_INT_SRC_RESP_AEQN_MASK 0x3
+#define SPNIC_MBOX_INT_STAT_DMA_MASK 0x3F
+#define SPNIC_MBOX_INT_TX_SIZE_MASK 0x1F
+#define SPNIC_MBOX_INT_STAT_DMA_SO_RO_MASK 0x3
+#define SPNIC_MBOX_INT_WB_EN_MASK 0x1
+
+#define SPNIC_MBOX_INT_SET(val, field) \
+ (((val) & SPNIC_MBOX_INT_##field##_MASK) << \
+ SPNIC_MBOX_INT_##field##_SHIFT)
+
+enum spnic_mbox_tx_status {
+ TX_NOT_DONE = 1,
+};
+
+#define SPNIC_MBOX_CTRL_TRIGGER_AEQE_SHIFT 0
+/* Specifies the Tx status of the message data:
+ * 0 - Tx request is done;
+ * 1 - Tx request is in progress.
+ */
+#define SPNIC_MBOX_CTRL_TX_STATUS_SHIFT 1
+#define SPNIC_MBOX_CTRL_DST_FUNC_SHIFT 16
+
+#define SPNIC_MBOX_CTRL_TRIGGER_AEQE_MASK 0x1
+#define SPNIC_MBOX_CTRL_TX_STATUS_MASK 0x1
+#define SPNIC_MBOX_CTRL_DST_FUNC_MASK 0x1FFF
+
+#define SPNIC_MBOX_CTRL_SET(val, field) \
+ (((val) & SPNIC_MBOX_CTRL_##field##_MASK) << \
+ SPNIC_MBOX_CTRL_##field##_SHIFT)
+
+#define MBOX_SEGLEN_MASK \
+ SPNIC_MSG_HEADER_SET(SPNIC_MSG_HEADER_SEG_LEN_MASK, SEG_LEN)
+
+#define MBOX_MSG_POLLING_TIMEOUT 300000
+#define SPNIC_MBOX_COMP_TIME 300000U
+
+#define MBOX_MAX_BUF_SZ 2048UL
+#define MBOX_HEADER_SZ 8
+#define SPNIC_MBOX_DATA_SIZE (MBOX_MAX_BUF_SZ - MBOX_HEADER_SZ)
+
+#define MBOX_TLP_HEADER_SZ 16
+
+/* Mbox size is 64B, 8B for mbox_header, 8B reserved */
+#define MBOX_SEG_LEN 48
+#define MBOX_SEG_LEN_ALIGN 4
+#define MBOX_WB_STATUS_LEN 16UL
+
+/* Mbox write back status is 16B, only first 4B is used */
+#define MBOX_WB_STATUS_ERRCODE_MASK 0xFFFF
+#define MBOX_WB_STATUS_MASK 0xFF
+#define MBOX_WB_ERROR_CODE_MASK 0xFF00
+#define MBOX_WB_STATUS_FINISHED_SUCCESS 0xFF
+#define MBOX_WB_STATUS_FINISHED_WITH_ERR 0xFE
+#define MBOX_WB_STATUS_NOT_FINISHED 0x00
+
+#define MBOX_STATUS_FINISHED(wb) \
+ (((wb) & MBOX_WB_STATUS_MASK) != MBOX_WB_STATUS_NOT_FINISHED)
+#define MBOX_STATUS_SUCCESS(wb) \
+ (((wb) & MBOX_WB_STATUS_MASK) == MBOX_WB_STATUS_FINISHED_SUCCESS)
+#define MBOX_STATUS_ERRCODE(wb) \
+ ((wb) & MBOX_WB_ERROR_CODE_MASK)
+
+#define SEQ_ID_START_VAL 0
+#define SEQ_ID_MAX_VAL 42
+
+#define DST_AEQ_IDX_DEFAULT_VAL 0
+#define SRC_AEQ_IDX_DEFAULT_VAL 0
+#define NO_DMA_ATTRIBUTE_VAL 0
+
+#define MBOX_MSG_NO_DATA_LEN 1
+
+#define MBOX_BODY_FROM_HDR(header) ((u8 *)(header) + MBOX_HEADER_SZ)
+#define MBOX_AREA(hwif) \
+ ((hwif)->cfg_regs_base + SPNIC_FUNC_CSR_MAILBOX_DATA_OFF)
+
+#define IS_PF_OR_PPF_SRC(src_func_idx) ((src_func_idx) < SPNIC_MAX_PF_FUNCS)
+
+#define MBOX_RESPONSE_ERROR 0x1
+#define MBOX_MSG_ID_MASK 0xF
+#define MBOX_MSG_ID(func_to_func) ((func_to_func)->send_msg_id)
+#define MBOX_MSG_ID_INC(func_to_func) ((MBOX_MSG_ID(func_to_func) + 1) & \
+ MBOX_MSG_ID_MASK)
+
+/* Max number of messages waiting to be processed for one function */
+#define SPNIC_MAX_MSG_CNT_TO_PROCESS 10
+
+enum mbox_ordering_type {
+ STRONG_ORDER,
+};
+
+enum mbox_write_back_type {
+ WRITE_BACK = 1,
+};
+
+enum mbox_aeq_trig_type {
+ NOT_TRIGGER,
+ TRIGGER,
+};
+
+static int send_mbox_to_func(struct spnic_mbox *func_to_func,
+ enum spnic_mod_type mod, u16 cmd, void *msg,
+ u16 msg_len, u16 dst_func,
+ enum spnic_msg_direction_type direction,
+ enum spnic_msg_ack_type ack_type,
+ struct mbox_msg_info *msg_info);
+static int send_tlp_mbox_to_func(struct spnic_mbox *func_to_func,
+ enum spnic_mod_type mod, u16 cmd, void *msg,
+ u16 msg_len, u16 dst_func,
+ enum spnic_msg_direction_type direction,
+ enum spnic_msg_ack_type ack_type,
+ struct mbox_msg_info *msg_info);
+
+static int recv_vf_mbox_handler(struct spnic_mbox *func_to_func,
+ struct spnic_recv_mbox *recv_mbox,
+ void *buf_out, u16 *out_size,
+ __rte_unused void *param)
+{
+ int err = 0;
+
+ switch (recv_mbox->mod) {
+ case SPNIC_MOD_COMM:
+ err = vf_handle_pf_comm_mbox(func_to_func->hwdev, func_to_func,
+ recv_mbox->cmd, recv_mbox->mbox,
+ recv_mbox->mbox_len,
+ buf_out, out_size);
+ break;
+ default:
+ PMD_DRV_LOG(ERR, "No handler, mod: %d", recv_mbox->mod);
+ err = SPNIC_MBOX_VF_CMD_ERROR;
+ break;
+ }
+
+ return err;
+}
+
+static void response_for_recv_func_mbox(struct spnic_mbox *func_to_func,
+ struct spnic_recv_mbox *recv_mbox,
+ int err, u16 out_size, u16 src_func_idx)
+{
+ struct mbox_msg_info msg_info = {0};
+
+ if (recv_mbox->ack_type == SPNIC_MSG_ACK) {
+ msg_info.msg_id = recv_mbox->msg_info.msg_id;
+ if (err)
+ msg_info.status = SPNIC_MBOX_PF_SEND_ERR;
+
+ if (IS_TLP_MBX(src_func_idx))
+ send_tlp_mbox_to_func(func_to_func, recv_mbox->mod,
+ recv_mbox->cmd,
+ recv_mbox->buf_out, out_size,
+ src_func_idx, SPNIC_MSG_RESPONSE,
+ SPNIC_MSG_NO_ACK, &msg_info);
+ else
+ send_mbox_to_func(func_to_func, recv_mbox->mod,
+ recv_mbox->cmd, recv_mbox->buf_out,
+ out_size, src_func_idx,
+ SPNIC_MSG_RESPONSE,
+ SPNIC_MSG_NO_ACK, &msg_info);
+ }
+}
+
+static void recv_func_mbox_handler(struct spnic_mbox *func_to_func,
+ struct spnic_recv_mbox *recv_mbox,
+ u16 src_func_idx, void *param)
+{
+ struct spnic_hwdev *hwdev = func_to_func->hwdev;
+ void *buf_out = recv_mbox->buf_out;
+ u16 out_size = MBOX_MAX_BUF_SZ;
+ int err = 0;
+
+ if (SPNIC_IS_VF(hwdev)) {
+ err = recv_vf_mbox_handler(func_to_func, recv_mbox, buf_out,
+ &out_size, param);
+ } else {
+ err = -EINVAL;
+ PMD_DRV_LOG(ERR, "PMD doesn't support non-VF handle mailbox message");
+ }
+
+ if (!out_size || err)
+ out_size = MBOX_MSG_NO_DATA_LEN;
+
+ if (recv_mbox->ack_type == SPNIC_MSG_ACK) {
+ response_for_recv_func_mbox(func_to_func, recv_mbox, err,
+ out_size, src_func_idx);
+ }
+}
+
+static void resp_mbox_handler(struct spnic_mbox *func_to_func,
+ struct spnic_recv_mbox *recv_mbox)
+{
+ rte_spinlock_lock(&func_to_func->mbox_lock);
+ if (recv_mbox->msg_info.msg_id == func_to_func->send_msg_id &&
+ func_to_func->event_flag == EVENT_START)
+ func_to_func->event_flag = EVENT_SUCCESS;
+ else
+ PMD_DRV_LOG(ERR, "Mbox response timeout, current send msg id(0x%x), "
+ "recv msg id(0x%x), status(0x%x)",
+ func_to_func->send_msg_id,
+ recv_mbox->msg_info.msg_id,
+ recv_mbox->msg_info.status);
+ rte_spinlock_unlock(&func_to_func->mbox_lock);
+}
+
+static bool check_mbox_segment(struct spnic_recv_mbox *recv_mbox,
+ u64 mbox_header)
+{
+ u8 seq_id, seg_len, msg_id, mod;
+ u16 src_func_idx, cmd;
+
+ seq_id = SPNIC_MSG_HEADER_GET(mbox_header, SEQID);
+ seg_len = SPNIC_MSG_HEADER_GET(mbox_header, SEG_LEN);
+ src_func_idx = SPNIC_MSG_HEADER_GET(mbox_header, SRC_GLB_FUNC_IDX);
+ msg_id = SPNIC_MSG_HEADER_GET(mbox_header, MSG_ID);
+ mod = SPNIC_MSG_HEADER_GET(mbox_header, MODULE);
+ cmd = SPNIC_MSG_HEADER_GET(mbox_header, CMD);
+
+ if (seq_id > SEQ_ID_MAX_VAL || seg_len > MBOX_SEG_LEN)
+ goto seg_err;
+
+ if (seq_id == 0) {
+ recv_mbox->seq_id = seq_id;
+ recv_mbox->msg_info.msg_id = msg_id;
+ recv_mbox->mod = mod;
+ recv_mbox->cmd = cmd;
+ } else {
+ if ((seq_id != recv_mbox->seq_id + 1) ||
+ msg_id != recv_mbox->msg_info.msg_id ||
+ mod != recv_mbox->mod || cmd != recv_mbox->cmd)
+ goto seg_err;
+
+ recv_mbox->seq_id = seq_id;
+ }
+
+ return true;
+
+seg_err:
+ PMD_DRV_LOG(ERR, "Mailbox segment check failed, src func id: 0x%x, "
+ "front seg info: seq id: 0x%x, msg id: 0x%x, mod: 0x%x, "
+ "cmd: 0x%x\n",
+ src_func_idx, recv_mbox->seq_id, recv_mbox->msg_info.msg_id,
+ recv_mbox->mod, recv_mbox->cmd);
+ PMD_DRV_LOG(ERR, "Current seg info: seg len: 0x%x, seq id: 0x%x, "
+ "msg id: 0x%x, mod: 0x%x, cmd: 0x%x\n",
+ seg_len, seq_id, msg_id, mod, cmd);
+
+ return false;
+}
+
+static int recv_mbox_handler(struct spnic_mbox *func_to_func, void *header,
+ struct spnic_recv_mbox *recv_mbox, void *param)
+{
+ u64 mbox_header = *((u64 *)header);
+ void *mbox_body = MBOX_BODY_FROM_HDR(header);
+ u16 src_func_idx;
+ int pos;
+ u8 seq_id;
+
+ seq_id = SPNIC_MSG_HEADER_GET(mbox_header, SEQID);
+ src_func_idx = SPNIC_MSG_HEADER_GET(mbox_header, SRC_GLB_FUNC_IDX);
+
+ if (!check_mbox_segment(recv_mbox, mbox_header)) {
+ recv_mbox->seq_id = SEQ_ID_MAX_VAL;
+ return SPNIC_MSG_HANDLER_RES;
+ }
+
+ pos = seq_id * MBOX_SEG_LEN;
+ memcpy((u8 *)recv_mbox->mbox + pos, mbox_body,
+ SPNIC_MSG_HEADER_GET(mbox_header, SEG_LEN));
+
+ if (!SPNIC_MSG_HEADER_GET(mbox_header, LAST))
+ return SPNIC_MSG_HANDLER_RES;
+
+ recv_mbox->cmd = SPNIC_MSG_HEADER_GET(mbox_header, CMD);
+ recv_mbox->mod = SPNIC_MSG_HEADER_GET(mbox_header, MODULE);
+ recv_mbox->mbox_len = SPNIC_MSG_HEADER_GET(mbox_header, MSG_LEN);
+ recv_mbox->ack_type = SPNIC_MSG_HEADER_GET(mbox_header, NO_ACK);
+ recv_mbox->msg_info.msg_id = SPNIC_MSG_HEADER_GET(mbox_header, MSG_ID);
+ recv_mbox->msg_info.status = SPNIC_MSG_HEADER_GET(mbox_header, STATUS);
+ recv_mbox->seq_id = SEQ_ID_MAX_VAL;
+
+ if (SPNIC_MSG_HEADER_GET(mbox_header, DIRECTION) ==
+ SPNIC_MSG_RESPONSE) {
+ resp_mbox_handler(func_to_func, recv_mbox);
+ return 0;
+ }
+
+ recv_func_mbox_handler(func_to_func, recv_mbox, src_func_idx, param);
+
+ return SPNIC_MSG_HANDLER_RES;
+}
+
+int spnic_mbox_func_aeqe_handler(void *handle, u8 *header,
+ __rte_unused u8 size, void *param)
+{
+ struct spnic_mbox *func_to_func = NULL;
+ struct spnic_recv_mbox *recv_mbox = NULL;
+ u64 mbox_header = *((u64 *)header);
+ u64 src, dir;
+
+ func_to_func = ((struct spnic_hwdev *)handle)->func_to_func;
+
+ dir = SPNIC_MSG_HEADER_GET(mbox_header, DIRECTION);
+ src = SPNIC_MSG_HEADER_GET(mbox_header, SRC_GLB_FUNC_IDX);
+
+ if (src >= SPNIC_MAX_FUNCTIONS && src != SPNIC_MGMT_SRC_ID) {
+ PMD_DRV_LOG(ERR, "Mailbox source function id: %u is invalid",
+ (u32)src);
+ return SPNIC_MSG_HANDLER_RES;
+ }
+
+ src = (src == SPNIC_MGMT_SRC_ID) ? SPNIC_MAX_FUNCTIONS : src;
+
+ recv_mbox = (dir == SPNIC_MSG_DIRECT_SEND) ?
+ &func_to_func->mbox_send[src] :
+ &func_to_func->mbox_resp[src];
+
+ return recv_mbox_handler(func_to_func, (u64 *)header, recv_mbox, param);
+}
+
+static void clear_mbox_status(struct spnic_send_mbox *mbox)
+{
+ *mbox->wb_status = 0;
+
+ /* Clear mailbox write back status */
+ rte_wmb();
+}
+
+static void mbox_copy_header(struct spnic_send_mbox *mbox, u64 *header)
+{
+ u32 *data = (u32 *)header;
+ u32 i, idx_max = MBOX_HEADER_SZ / sizeof(u32);
+
+ for (i = 0; i < idx_max; i++) {
+ rte_write32(cpu_to_be32(*(data + i)),
+ mbox->data + i * sizeof(u32));
+ }
+}
+
+#define MBOX_DMA_MSG_INIT_XOR_VAL 0x5a5a5a5a
+static u32 mbox_dma_msg_xor(u32 *data, u16 msg_len)
+{
+ u32 xor = MBOX_DMA_MSG_INIT_XOR_VAL;
+ u16 dw_len = msg_len / sizeof(u32);
+ u16 i;
+
+ for (i = 0; i < dw_len; i++)
+ xor ^= data[i];
+
+ return xor;
+}
+
+static void mbox_copy_send_data_addr(struct spnic_send_mbox *mbox, u16 seg_len)
+{
+ u32 addr_h, addr_l, xor;
+
+ xor = mbox_dma_msg_xor(mbox->sbuff_vaddr, seg_len);
+ addr_h = upper_32_bits(mbox->sbuff_paddr);
+ addr_l = lower_32_bits(mbox->sbuff_paddr);
+
+ rte_write32(cpu_to_be32(xor), mbox->data + MBOX_HEADER_SZ);
+ rte_write32(cpu_to_be32(addr_h),
+ mbox->data + MBOX_HEADER_SZ + sizeof(u32));
+ rte_write32(cpu_to_be32(addr_l),
+ mbox->data + MBOX_HEADER_SZ + 2 * sizeof(u32));
+ rte_write32(cpu_to_be32(seg_len),
+ mbox->data + MBOX_HEADER_SZ + 3 * sizeof(u32));
+ /* Reserved */
+ rte_write32(0, mbox->data + MBOX_HEADER_SZ + 4 * sizeof(u32));
+ rte_write32(0, mbox->data + MBOX_HEADER_SZ + 5 * sizeof(u32));
+}
+
+static void mbox_copy_send_data(struct spnic_send_mbox *mbox, void *seg,
+ u16 seg_len)
+{
+ u32 *data = seg;
+ u32 data_len, chk_sz = sizeof(u32);
+ u32 i, idx_max;
+
+ data_len = seg_len;
+ idx_max = RTE_ALIGN(data_len, chk_sz) / chk_sz;
+
+ for (i = 0; i < idx_max; i++) {
+ rte_write32(cpu_to_be32(*(data + i)),
+ mbox->data + MBOX_HEADER_SZ + i * sizeof(u32));
+ }
+}
+
+static void write_mbox_msg_attr(struct spnic_mbox *func_to_func,
+ u16 dst_func, u16 dst_aeqn, u16 seg_len)
+{
+ u32 mbox_int, mbox_ctrl;
+
+ /* For a VF, the function id must be self-learned by HW (PPF=1, PF=0) */
+ if (SPNIC_IS_VF(func_to_func->hwdev) &&
+ dst_func != SPNIC_MGMT_SRC_ID) {
+ if (dst_func == SPNIC_HWIF_PPF_IDX(func_to_func->hwdev->hwif))
+ dst_func = 1;
+ else
+ dst_func = 0;
+ }
+
+ mbox_int = SPNIC_MBOX_INT_SET(dst_aeqn, DST_AEQN) |
+ SPNIC_MBOX_INT_SET(0, SRC_RESP_AEQN) |
+ SPNIC_MBOX_INT_SET(NO_DMA_ATTRIBUTE_VAL, STAT_DMA) |
+ SPNIC_MBOX_INT_SET(RTE_ALIGN(seg_len + MBOX_HEADER_SZ,
+ MBOX_SEG_LEN_ALIGN) >> 2,
+ TX_SIZE) |
+ SPNIC_MBOX_INT_SET(STRONG_ORDER, STAT_DMA_SO_RO) |
+ SPNIC_MBOX_INT_SET(WRITE_BACK, WB_EN);
+
+ spnic_hwif_write_reg(func_to_func->hwdev->hwif,
+ SPNIC_FUNC_CSR_MAILBOX_INT_OFFSET_OFF, mbox_int);
+
+ rte_wmb(); /* Writing the mbox intr attributes */
+ mbox_ctrl = SPNIC_MBOX_CTRL_SET(TX_NOT_DONE, TX_STATUS);
+
+ mbox_ctrl |= SPNIC_MBOX_CTRL_SET(NOT_TRIGGER, TRIGGER_AEQE);
+
+ mbox_ctrl |= SPNIC_MBOX_CTRL_SET(dst_func, DST_FUNC);
+
+ spnic_hwif_write_reg(func_to_func->hwdev->hwif,
+ SPNIC_FUNC_CSR_MAILBOX_CONTROL_OFF, mbox_ctrl);
+}
+
+static void dump_mbox_reg(struct spnic_hwdev *hwdev)
+{
+ u32 val;
+
+ val = spnic_hwif_read_reg(hwdev->hwif,
+ SPNIC_FUNC_CSR_MAILBOX_CONTROL_OFF);
+ PMD_DRV_LOG(ERR, "Mailbox control reg: 0x%x", val);
+ val = spnic_hwif_read_reg(hwdev->hwif,
+ SPNIC_FUNC_CSR_MAILBOX_INT_OFFSET_OFF);
+ PMD_DRV_LOG(ERR, "Mailbox interrupt offset: 0x%x", val);
+}
+
+static u16 get_mbox_status(struct spnic_send_mbox *mbox)
+{
+ /* Write back is 16B, but only use first 4B */
+ u64 wb_val = be64_to_cpu(*mbox->wb_status);
+
+ rte_rmb(); /* Verify reading before check */
+
+ return (u16)(wb_val & MBOX_WB_STATUS_ERRCODE_MASK);
+}
+
+static int send_mbox_seg(struct spnic_mbox *func_to_func, u64 header,
+ u16 dst_func, void *seg, u16 seg_len,
+ __rte_unused void *msg_info)
+{
+ struct spnic_send_mbox *send_mbox = &func_to_func->send_mbox;
+ struct spnic_hwdev *hwdev = func_to_func->hwdev;
+ u8 num_aeqs = hwdev->hwif->attr.num_aeqs;
+ u16 dst_aeqn, wb_status = 0, errcode;
+ u16 seq_dir = SPNIC_MSG_HEADER_GET(header, DIRECTION);
+ u32 cnt = 0;
+
+ /* Mbox to mgmt cpu, hardware doesn't care dst aeq id */
+ if (num_aeqs >= 2)
+ dst_aeqn = (seq_dir == SPNIC_MSG_DIRECT_SEND) ?
+ SPNIC_ASYNC_MSG_AEQ : SPNIC_MBOX_RSP_MSG_AEQ;
+ else
+ dst_aeqn = 0;
+
+ clear_mbox_status(send_mbox);
+
+ mbox_copy_header(send_mbox, &header);
+
+ mbox_copy_send_data(send_mbox, seg, seg_len);
+
+ write_mbox_msg_attr(func_to_func, dst_func, dst_aeqn, seg_len);
+
+ rte_wmb(); /* Writing the mbox msg attributes */
+
+ while (cnt < MBOX_MSG_POLLING_TIMEOUT) {
+ wb_status = get_mbox_status(send_mbox);
+ if (MBOX_STATUS_FINISHED(wb_status))
+ break;
+
+ rte_delay_ms(1);
+ cnt++;
+ }
+
+ if (cnt == MBOX_MSG_POLLING_TIMEOUT) {
+ PMD_DRV_LOG(ERR, "Send mailbox segment timeout, wb status: 0x%x",
+ wb_status);
+ dump_mbox_reg(hwdev);
+ return -ETIMEDOUT;
+ }
+
+ if (!MBOX_STATUS_SUCCESS(wb_status)) {
+ PMD_DRV_LOG(ERR, "Send mailbox segment to function %d error, wb status: 0x%x",
+ dst_func, wb_status);
+ errcode = MBOX_STATUS_ERRCODE(wb_status);
+ return errcode ? errcode : -EFAULT;
+ }
+
+ return 0;
+}
+
+static int send_tlp_mbox_seg(struct spnic_mbox *func_to_func, u64 header,
+ u16 dst_func, void *seg, u16 seg_len,
+ __rte_unused void *msg_info)
+{
+ struct spnic_send_mbox *send_mbox = &func_to_func->send_mbox;
+ struct spnic_hwdev *hwdev = func_to_func->hwdev;
+ u8 num_aeqs = hwdev->hwif->attr.num_aeqs;
+ u16 dst_aeqn, wb_status = 0, errcode;
+ u16 seq_dir = SPNIC_MSG_HEADER_GET(header, DIRECTION);
+ u32 cnt = 0;
+
+ /* Mbox to mgmt cpu, hardware doesn't care dst aeq id */
+ if (num_aeqs >= 2)
+ dst_aeqn = (seq_dir == SPNIC_MSG_DIRECT_SEND) ?
+ SPNIC_ASYNC_MSG_AEQ : SPNIC_MBOX_RSP_MSG_AEQ;
+ else
+ dst_aeqn = 0;
+
+ clear_mbox_status(send_mbox);
+
+ mbox_copy_header(send_mbox, &header);
+
+ /* Copy data to DMA buffer */
+ memcpy(send_mbox->sbuff_vaddr, seg, seg_len);
+
+ /* Copy data address to mailbox ctrl csr */
+ mbox_copy_send_data_addr(send_mbox, seg_len);
+
+ /* Send tlp mailbox, needs to change the txsize to 16 */
+ write_mbox_msg_attr(func_to_func, dst_func, dst_aeqn,
+ MBOX_TLP_HEADER_SZ);
+
+ rte_wmb(); /* Writing the mbox msg attributes */
+
+ while (cnt < MBOX_MSG_POLLING_TIMEOUT) {
+ wb_status = get_mbox_status(send_mbox);
+ if (MBOX_STATUS_FINISHED(wb_status))
+ break;
+
+ rte_delay_ms(1);
+ cnt++;
+ }
+
+ if (cnt == MBOX_MSG_POLLING_TIMEOUT) {
+ PMD_DRV_LOG(ERR, "Send mailbox segment timeout, wb status: 0x%x",
+ wb_status);
+ dump_mbox_reg(hwdev);
+ return -ETIMEDOUT;
+ }
+
+ if (!MBOX_STATUS_SUCCESS(wb_status)) {
+ PMD_DRV_LOG(ERR, "Send mailbox segment to function %d error, wb status: 0x%x",
+ dst_func, wb_status);
+ errcode = MBOX_STATUS_ERRCODE(wb_status);
+ return errcode ? errcode : -EFAULT;
+ }
+
+ return 0;
+}
+
+static int send_mbox_to_func(struct spnic_mbox *func_to_func,
+ enum spnic_mod_type mod, u16 cmd, void *msg,
+ u16 msg_len, u16 dst_func,
+ enum spnic_msg_direction_type direction,
+ enum spnic_msg_ack_type ack_type,
+ struct mbox_msg_info *msg_info)
+{
+ int err = 0;
+ u32 seq_id = 0;
+ u16 seg_len = MBOX_SEG_LEN;
+ u16 rsp_aeq_id, left = msg_len;
+ u8 *msg_seg = (u8 *)msg;
+ u64 header = 0;
+
+ rsp_aeq_id = SPNIC_MBOX_RSP_MSG_AEQ;
+
+ err = spnic_mutex_lock(&func_to_func->msg_send_mutex);
+ if (err)
+ return err;
+
+ header = SPNIC_MSG_HEADER_SET(msg_len, MSG_LEN) |
+ SPNIC_MSG_HEADER_SET(mod, MODULE) |
+ SPNIC_MSG_HEADER_SET(seg_len, SEG_LEN) |
+ SPNIC_MSG_HEADER_SET(ack_type, NO_ACK) |
+ SPNIC_MSG_HEADER_SET(SPNIC_DATA_INLINE, DATA_TYPE) |
+ SPNIC_MSG_HEADER_SET(SEQ_ID_START_VAL, SEQID) |
+ SPNIC_MSG_HEADER_SET(NOT_LAST_SEGMENT, LAST) |
+ SPNIC_MSG_HEADER_SET(direction, DIRECTION) |
+ SPNIC_MSG_HEADER_SET(cmd, CMD) |
+ /* The VF's offset to its associated PF */
+ SPNIC_MSG_HEADER_SET(msg_info->msg_id, MSG_ID) |
+ SPNIC_MSG_HEADER_SET(rsp_aeq_id, AEQ_ID) |
+ SPNIC_MSG_HEADER_SET(SPNIC_MSG_FROM_MBOX, SOURCE) |
+ SPNIC_MSG_HEADER_SET(!!msg_info->status, STATUS);
+
+ while (!(SPNIC_MSG_HEADER_GET(header, LAST))) {
+ if (left <= MBOX_SEG_LEN) {
+ header &= ~MBOX_SEGLEN_MASK;
+ header |= SPNIC_MSG_HEADER_SET(left, SEG_LEN);
+ header |= SPNIC_MSG_HEADER_SET(LAST_SEGMENT, LAST);
+
+ seg_len = left;
+ }
+
+ err = send_mbox_seg(func_to_func, header, dst_func, msg_seg,
+ seg_len, msg_info);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Send mbox seg failed, seq_id: 0x%x",
+ (u8)SPNIC_MSG_HEADER_GET(header, SEQID));
+
+ goto send_err;
+ }
+
+ left -= MBOX_SEG_LEN;
+ msg_seg += MBOX_SEG_LEN;
+
+ seq_id++;
+ header &= ~(SPNIC_MSG_HEADER_SET(SPNIC_MSG_HEADER_SEQID_MASK,
+ SEQID));
+ header |= SPNIC_MSG_HEADER_SET(seq_id, SEQID);
+ }
+
+send_err:
+ (void)spnic_mutex_unlock(&func_to_func->msg_send_mutex);
+
+ return err;
+}
+
+static int send_tlp_mbox_to_func(struct spnic_mbox *func_to_func,
+ enum spnic_mod_type mod, u16 cmd, void *msg,
+ u16 msg_len, u16 dst_func,
+ enum spnic_msg_direction_type direction,
+ enum spnic_msg_ack_type ack_type,
+ struct mbox_msg_info *msg_info)
+{
+ struct spnic_hwdev *hwdev = func_to_func->hwdev;
+ u8 *msg_seg = (u8 *)msg;
+ int err = 0;
+ u16 rsp_aeq_id;
+ u64 header = 0;
+
+ rsp_aeq_id = SPNIC_MBOX_RSP_MSG_AEQ;
+
+ err = spnic_mutex_lock(&func_to_func->msg_send_mutex);
+ if (err)
+ return err;
+
+ header = SPNIC_MSG_HEADER_SET(MBOX_TLP_HEADER_SZ, MSG_LEN) |
+ SPNIC_MSG_HEADER_SET(MBOX_TLP_HEADER_SZ, SEG_LEN) |
+ SPNIC_MSG_HEADER_SET(mod, MODULE) |
+ SPNIC_MSG_HEADER_SET(LAST_SEGMENT, LAST) |
+ SPNIC_MSG_HEADER_SET(ack_type, NO_ACK) |
+ SPNIC_MSG_HEADER_SET(SPNIC_DATA_DMA, DATA_TYPE) |
+ SPNIC_MSG_HEADER_SET(SEQ_ID_START_VAL, SEQID) |
+ SPNIC_MSG_HEADER_SET(direction, DIRECTION) |
+ SPNIC_MSG_HEADER_SET(cmd, CMD) |
+ SPNIC_MSG_HEADER_SET(msg_info->msg_id, MSG_ID) |
+ SPNIC_MSG_HEADER_SET(rsp_aeq_id, AEQ_ID) |
+ SPNIC_MSG_HEADER_SET(SPNIC_MSG_FROM_MBOX, SOURCE) |
+ SPNIC_MSG_HEADER_SET(!!msg_info->status, STATUS) |
+ SPNIC_MSG_HEADER_SET(spnic_global_func_id(hwdev),
+ SRC_GLB_FUNC_IDX);
+
+ err = send_tlp_mbox_seg(func_to_func, header, dst_func, msg_seg,
+ msg_len, msg_info);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Send mbox seg failed, seq_id: 0x%x",
+ (u8)SPNIC_MSG_HEADER_GET(header, SEQID));
+ }
+
+ (void)spnic_mutex_unlock(&func_to_func->msg_send_mutex);
+
+ return err;
+}
+
+static void set_mbox_to_func_event(struct spnic_mbox *func_to_func,
+ enum mbox_event_state event_flag)
+{
+ rte_spinlock_lock(&func_to_func->mbox_lock);
+ func_to_func->event_flag = event_flag;
+ rte_spinlock_unlock(&func_to_func->mbox_lock);
+}
+
+static int spnic_mbox_to_func(struct spnic_mbox *func_to_func,
+ enum spnic_mod_type mod, u16 cmd, u16 dst_func,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size, u32 timeout)
+{
+ /* Use mbox_resp to hold the data responded from the other function */
+ struct spnic_recv_mbox *mbox_for_resp = NULL;
+ struct mbox_msg_info msg_info = {0};
+ u16 mbox_rsp_idx;
+ int err;
+
+ mbox_rsp_idx = (dst_func == SPNIC_MGMT_SRC_ID) ?
+ SPNIC_MAX_FUNCTIONS : dst_func;
+
+ mbox_for_resp = &func_to_func->mbox_resp[mbox_rsp_idx];
+
+ err = spnic_mutex_lock(&func_to_func->mbox_send_mutex);
+ if (err)
+ return err;
+
+ msg_info.msg_id = MBOX_MSG_ID_INC(func_to_func);
+ func_to_func->send_msg_id = msg_info.msg_id;
+
+ set_mbox_to_func_event(func_to_func, EVENT_START);
+
+ if (IS_TLP_MBX(dst_func))
+ err = send_tlp_mbox_to_func(func_to_func, mod, cmd, buf_in,
+ in_size, dst_func,
+ SPNIC_MSG_DIRECT_SEND,
+ SPNIC_MSG_ACK, &msg_info);
+ else
+ err = send_mbox_to_func(func_to_func, mod, cmd, buf_in,
+ in_size, dst_func,
+ SPNIC_MSG_DIRECT_SEND,
+ SPNIC_MSG_ACK, &msg_info);
+
+ if (err) {
+ PMD_DRV_LOG(ERR, "Send mailbox failed, msg_id: %d",
+ msg_info.msg_id);
+ set_mbox_to_func_event(func_to_func, EVENT_FAIL);
+ goto send_err;
+ }
+
+ if (mod != mbox_for_resp->mod || cmd != mbox_for_resp->cmd) {
+ PMD_DRV_LOG(ERR, "Invalid response mbox message, mod: 0x%x, cmd: 0x%x, expect mod: 0x%x, cmd: 0x%x, timeout0x%x\n",
+ mbox_for_resp->mod, mbox_for_resp->cmd, mod, cmd, timeout);
+ err = -EFAULT;
+ goto send_err;
+ }
+
+ if (mbox_for_resp->msg_info.status) {
+ err = mbox_for_resp->msg_info.status;
+ goto send_err;
+ }
+
+ if (buf_out && out_size) {
+ if (*out_size < mbox_for_resp->mbox_len) {
+ PMD_DRV_LOG(ERR, "Invalid response mbox message length: %d for "
+ "mod: %d cmd: %d, should less than: %d",
+ mbox_for_resp->mbox_len, mod, cmd,
+ *out_size);
+ err = -EFAULT;
+ goto send_err;
+ }
+
+ if (mbox_for_resp->mbox_len)
+ memcpy(buf_out, mbox_for_resp->mbox,
+ mbox_for_resp->mbox_len);
+
+ *out_size = mbox_for_resp->mbox_len;
+ }
+
+send_err:
+ (void)spnic_mutex_unlock(&func_to_func->mbox_send_mutex);
+
+ return err;
+}
+
+static int mbox_func_params_valid(__rte_unused struct spnic_mbox *func_to_func,
+ void *buf_in, u16 in_size)
+{
+ if (!buf_in || !in_size)
+ return -EINVAL;
+
+ if (in_size > SPNIC_MBOX_DATA_SIZE) {
+ PMD_DRV_LOG(ERR, "Mbox msg len(%d) exceed limit(%u)",
+ in_size, (u8)SPNIC_MBOX_DATA_SIZE);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int spnic_mbox_to_func_no_ack(struct spnic_hwdev *hwdev, u16 func_idx,
+ enum spnic_mod_type mod, u16 cmd,
+ void *buf_in, u16 in_size)
+{
+ struct spnic_mbox *func_to_func = hwdev->func_to_func;
+ struct mbox_msg_info msg_info = {0};
+ int err;
+
+ err = mbox_func_params_valid(hwdev->func_to_func, buf_in, in_size);
+ if (err)
+ return err;
+
+ err = spnic_mutex_lock(&func_to_func->mbox_send_mutex);
+ if (err)
+ return err;
+
+ if (IS_TLP_MBX(func_idx))
+ err = send_tlp_mbox_to_func(func_to_func, mod, cmd,
+ buf_in, in_size, func_idx,
+ SPNIC_MSG_DIRECT_SEND,
+ SPNIC_MSG_NO_ACK, &msg_info);
+ else
+ err = send_mbox_to_func(func_to_func, mod, cmd,
+ buf_in, in_size, func_idx,
+ SPNIC_MSG_DIRECT_SEND,
+ SPNIC_MSG_NO_ACK, &msg_info);
+ if (err)
+ PMD_DRV_LOG(ERR, "Send mailbox no ack failed");
+
+ (void)spnic_mutex_unlock(&func_to_func->mbox_send_mutex);
+
+ return err;
+}
+
+int spnic_send_mbox_to_mgmt(struct spnic_hwdev *hwdev, enum spnic_mod_type mod,
+ u16 cmd, void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size, u32 timeout)
+{
+ struct spnic_mbox *func_to_func = hwdev->func_to_func;
+ int err;
+
+ err = mbox_func_params_valid(func_to_func, buf_in, in_size);
+ if (err)
+ return err;
+
+ return spnic_mbox_to_func(func_to_func, mod, cmd, SPNIC_MGMT_SRC_ID,
+ buf_in, in_size, buf_out, out_size, timeout);
+}
+
+void spnic_response_mbox_to_mgmt(struct spnic_hwdev *hwdev,
+ enum spnic_mod_type mod, u16 cmd,
+ void *buf_in, u16 in_size, u16 msg_id)
+{
+ struct mbox_msg_info msg_info;
+ u16 dst_func;
+
+ msg_info.msg_id = (u8)msg_id;
+ msg_info.status = 0;
+ dst_func = SPNIC_MGMT_SRC_ID;
+
+ if (IS_TLP_MBX(dst_func))
+ send_tlp_mbox_to_func(hwdev->func_to_func, mod, cmd, buf_in,
+ in_size, SPNIC_MGMT_SRC_ID,
+ SPNIC_MSG_RESPONSE, SPNIC_MSG_NO_ACK,
+ &msg_info);
+ else
+ send_mbox_to_func(hwdev->func_to_func, mod, cmd, buf_in,
+ in_size, SPNIC_MGMT_SRC_ID,
+ SPNIC_MSG_RESPONSE, SPNIC_MSG_NO_ACK,
+ &msg_info);
+}
+
+int spnic_send_mbox_to_mgmt_no_ack(struct spnic_hwdev *hwdev,
+ enum spnic_mod_type mod, u16 cmd,
+ void *buf_in, u16 in_size)
+{
+ struct spnic_mbox *func_to_func = hwdev->func_to_func;
+ int err;
+
+ err = mbox_func_params_valid(func_to_func, buf_in, in_size);
+ if (err)
+ return err;
+
+ return spnic_mbox_to_func_no_ack(hwdev, SPNIC_MGMT_SRC_ID, mod, cmd,
+ buf_in, in_size);
+}
+
+int spnic_mbox_to_pf(struct spnic_hwdev *hwdev, enum spnic_mod_type mod,
+ u16 cmd, void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size, u32 timeout)
+{
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ err = mbox_func_params_valid(hwdev->func_to_func, buf_in, in_size);
+ if (err)
+ return err;
+
+ if (!SPNIC_IS_VF(hwdev)) {
+ PMD_DRV_LOG(ERR, "Params error, func_type: %d",
+ spnic_func_type(hwdev));
+ return -EINVAL;
+ }
+
+ return spnic_mbox_to_func(hwdev->func_to_func, mod, cmd,
+ spnic_pf_id_of_vf(hwdev), buf_in, in_size,
+ buf_out, out_size, timeout);
+}
+
+int spnic_mbox_to_vf(struct spnic_hwdev *hwdev, enum spnic_mod_type mod,
+ u16 vf_id, u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size, u32 timeout)
+{
+ struct spnic_mbox *func_to_func = NULL;
+ u16 dst_func_idx;
+ int err = 0;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ func_to_func = hwdev->func_to_func;
+ err = mbox_func_params_valid(func_to_func, buf_in, in_size);
+ if (err)
+ return err;
+
+ if (SPNIC_IS_VF(hwdev)) {
+ PMD_DRV_LOG(ERR, "Params error, func_type: %d",
+ spnic_func_type(hwdev));
+ return -EINVAL;
+ }
+
+ if (!vf_id) {
+ PMD_DRV_LOG(ERR, "VF id: %d error!", vf_id);
+ return -EINVAL;
+ }
+
+ /*
+ * vf_offset_to_pf + vf_id is the VF's global function id within
+ * this PF
+ */
+ dst_func_idx = spnic_glb_pf_vf_offset(hwdev) + vf_id;
+
+ return spnic_mbox_to_func(func_to_func, mod, cmd, dst_func_idx, buf_in,
+ in_size, buf_out, out_size, timeout);
+}
+
+static int init_mbox_info(struct spnic_recv_mbox *mbox_info,
+ int mbox_max_buf_sz)
+{
+ int err;
+
+ mbox_info->seq_id = SEQ_ID_MAX_VAL;
+
+ mbox_info->mbox = rte_zmalloc("mbox", (size_t)mbox_max_buf_sz, 1);
+ if (!mbox_info->mbox)
+ return -ENOMEM;
+
+ mbox_info->buf_out = rte_zmalloc("mbox_buf_out",
+ (size_t)mbox_max_buf_sz, 1);
+ if (!mbox_info->buf_out) {
+ err = -ENOMEM;
+ goto alloc_buf_out_err;
+ }
+
+ return 0;
+
+alloc_buf_out_err:
+ rte_free(mbox_info->mbox);
+
+ return err;
+}
+
+static void clean_mbox_info(struct spnic_recv_mbox *mbox_info)
+{
+ rte_free(mbox_info->buf_out);
+ rte_free(mbox_info->mbox);
+}
+
+static int alloc_mbox_info(struct spnic_recv_mbox *mbox_info,
+ int mbox_max_buf_sz)
+{
+ u16 func_idx, i;
+ int err;
+
+ for (func_idx = 0; func_idx < SPNIC_MAX_FUNCTIONS + 1; func_idx++) {
+ err = init_mbox_info(&mbox_info[func_idx], mbox_max_buf_sz);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Init mbox info failed");
+ goto init_mbox_info_err;
+ }
+ }
+
+ return 0;
+
+init_mbox_info_err:
+ for (i = 0; i < func_idx; i++)
+ clean_mbox_info(&mbox_info[i]);
+
+ return err;
+}
+
+static void free_mbox_info(struct spnic_recv_mbox *mbox_info)
+{
+ u16 func_idx;
+
+ for (func_idx = 0; func_idx < SPNIC_MAX_FUNCTIONS + 1; func_idx++)
+ clean_mbox_info(&mbox_info[func_idx]);
+}
+
+static void prepare_send_mbox(struct spnic_mbox *func_to_func)
+{
+ struct spnic_send_mbox *send_mbox = &func_to_func->send_mbox;
+
+ send_mbox->data = MBOX_AREA(func_to_func->hwdev->hwif);
+}
+
+static int alloc_mbox_wb_status(struct spnic_mbox *func_to_func)
+{
+ struct spnic_send_mbox *send_mbox = &func_to_func->send_mbox;
+ struct spnic_hwdev *hwdev = func_to_func->hwdev;
+ u32 addr_h, addr_l;
+
+ send_mbox->wb_mz = rte_eth_dma_zone_reserve(hwdev->eth_dev, "wb_mz", 0,
+ MBOX_WB_STATUS_LEN,
+ RTE_CACHE_LINE_SIZE,
+ SOCKET_ID_ANY);
+ if (!send_mbox->wb_mz)
+ return -ENOMEM;
+
+ send_mbox->wb_vaddr = send_mbox->wb_mz->addr;
+ send_mbox->wb_paddr = send_mbox->wb_mz->iova;
+ send_mbox->wb_status = send_mbox->wb_vaddr;
+
+ addr_h = upper_32_bits(send_mbox->wb_paddr);
+ addr_l = lower_32_bits(send_mbox->wb_paddr);
+
+ spnic_hwif_write_reg(hwdev->hwif, SPNIC_FUNC_CSR_MAILBOX_RESULT_H_OFF,
+ addr_h);
+ spnic_hwif_write_reg(hwdev->hwif, SPNIC_FUNC_CSR_MAILBOX_RESULT_L_OFF,
+ addr_l);
+
+ return 0;
+}
+
+static void free_mbox_wb_status(struct spnic_mbox *func_to_func)
+{
+ struct spnic_send_mbox *send_mbox = &func_to_func->send_mbox;
+ struct spnic_hwdev *hwdev = func_to_func->hwdev;
+
+ spnic_hwif_write_reg(hwdev->hwif,
+ SPNIC_FUNC_CSR_MAILBOX_RESULT_H_OFF, 0);
+ spnic_hwif_write_reg(hwdev->hwif,
+ SPNIC_FUNC_CSR_MAILBOX_RESULT_L_OFF, 0);
+
+ rte_memzone_free(send_mbox->wb_mz);
+}
+
+static int alloc_mbox_tlp_buffer(struct spnic_mbox *func_to_func)
+{
+ struct spnic_send_mbox *send_mbox = &func_to_func->send_mbox;
+ struct spnic_hwdev *hwdev = func_to_func->hwdev;
+
+ send_mbox->sbuff_mz = rte_eth_dma_zone_reserve(hwdev->eth_dev,
+ "sbuff_mz", 0,
+ MBOX_MAX_BUF_SZ,
+ MBOX_MAX_BUF_SZ,
+ SOCKET_ID_ANY);
+ if (!send_mbox->sbuff_mz)
+ return -ENOMEM;
+
+ send_mbox->sbuff_vaddr = send_mbox->sbuff_mz->addr;
+ send_mbox->sbuff_paddr = send_mbox->sbuff_mz->iova;
+
+ return 0;
+}
+
+static void free_mbox_tlp_buffer(struct spnic_mbox *func_to_func)
+{
+ struct spnic_send_mbox *send_mbox = &func_to_func->send_mbox;
+
+ rte_memzone_free(send_mbox->sbuff_mz);
+}
+
+int spnic_func_to_func_init(struct spnic_hwdev *hwdev)
+{
+ struct spnic_mbox *func_to_func;
+ int err;
+
+ func_to_func = rte_zmalloc("func_to_func", sizeof(*func_to_func), 1);
+ if (!func_to_func)
+ return -ENOMEM;
+
+ hwdev->func_to_func = func_to_func;
+ func_to_func->hwdev = hwdev;
+ (void)spnic_mutex_init(&func_to_func->mbox_send_mutex, NULL);
+ (void)spnic_mutex_init(&func_to_func->msg_send_mutex, NULL);
+ rte_spinlock_init(&func_to_func->mbox_lock);
+
+ err = alloc_mbox_info(func_to_func->mbox_send, MBOX_MAX_BUF_SZ);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Alloc mem for mbox_active failed");
+ goto alloc_mbox_for_send_err;
+ }
+
+ err = alloc_mbox_info(func_to_func->mbox_resp, MBOX_MAX_BUF_SZ);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Alloc mem for mbox_passive failed");
+ goto alloc_mbox_for_resp_err;
+ }
+
+ err = alloc_mbox_tlp_buffer(func_to_func);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Alloc mbox send buffer failed");
+ goto alloc_tlp_buffer_err;
+ }
+
+ err = alloc_mbox_wb_status(func_to_func);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Alloc mbox write back status failed");
+ goto alloc_wb_status_err;
+ }
+
+ prepare_send_mbox(func_to_func);
+
+ return 0;
+
+alloc_wb_status_err:
+ free_mbox_tlp_buffer(func_to_func);
+
+alloc_tlp_buffer_err:
+ free_mbox_info(func_to_func->mbox_resp);
+
+alloc_mbox_for_resp_err:
+ free_mbox_info(func_to_func->mbox_send);
+
+alloc_mbox_for_send_err:
+ (void)spnic_mutex_destroy(&func_to_func->msg_send_mutex);
+ (void)spnic_mutex_destroy(&func_to_func->mbox_send_mutex);
+ rte_free(func_to_func);
+
+ return err;
+}
+
+void spnic_func_to_func_free(struct spnic_hwdev *hwdev)
+{
+ struct spnic_mbox *func_to_func = hwdev->func_to_func;
+
+ free_mbox_wb_status(func_to_func);
+ free_mbox_tlp_buffer(func_to_func);
+ free_mbox_info(func_to_func->mbox_resp);
+ free_mbox_info(func_to_func->mbox_send);
+ (void)spnic_mutex_destroy(&func_to_func->mbox_send_mutex);
+ (void)spnic_mutex_destroy(&func_to_func->msg_send_mutex);
+
+ rte_free(func_to_func);
+}
diff --git a/drivers/net/spnic/base/spnic_mbox.h b/drivers/net/spnic/base/spnic_mbox.h
new file mode 100644
index 0000000000..446471e8f8
--- /dev/null
+++ b/drivers/net/spnic/base/spnic_mbox.h
@@ -0,0 +1,202 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Ramaxel Memory Technology, Ltd
+ */
+
+#ifndef _SPNIC_MBOX_H_
+#define _SPNIC_MBOX_H_
+
+#define SPNIC_MBOX_PF_SEND_ERR 0x1
+#define SPNIC_MBOX_PF_BUSY_ACTIVE_FW 0x2
+#define SPNIC_MBOX_VF_CMD_ERROR 0x3
+
+#define SPNIC_MGMT_SRC_ID 0x1FFF
+#define SPNIC_MAX_FUNCTIONS 4096
+#define SPNIC_MAX_PF_FUNCS 32
+
+/* Message header define */
+#define SPNIC_MSG_HEADER_SRC_GLB_FUNC_IDX_SHIFT 0
+#define SPNIC_MSG_HEADER_STATUS_SHIFT 13
+#define SPNIC_MSG_HEADER_SOURCE_SHIFT 15
+#define SPNIC_MSG_HEADER_AEQ_ID_SHIFT 16
+#define SPNIC_MSG_HEADER_MSG_ID_SHIFT 18
+#define SPNIC_MSG_HEADER_CMD_SHIFT 22
+
+#define SPNIC_MSG_HEADER_MSG_LEN_SHIFT 32
+#define SPNIC_MSG_HEADER_MODULE_SHIFT 43
+#define SPNIC_MSG_HEADER_SEG_LEN_SHIFT 48
+#define SPNIC_MSG_HEADER_NO_ACK_SHIFT 54
+#define SPNIC_MSG_HEADER_DATA_TYPE_SHIFT 55
+#define SPNIC_MSG_HEADER_SEQID_SHIFT 56
+#define SPNIC_MSG_HEADER_LAST_SHIFT 62
+#define SPNIC_MSG_HEADER_DIRECTION_SHIFT 63
+
+#define SPNIC_MSG_HEADER_CMD_MASK 0x3FF
+#define SPNIC_MSG_HEADER_MSG_ID_MASK 0xF
+#define SPNIC_MSG_HEADER_AEQ_ID_MASK 0x3
+#define SPNIC_MSG_HEADER_SOURCE_MASK 0x1
+#define SPNIC_MSG_HEADER_STATUS_MASK 0x1
+#define SPNIC_MSG_HEADER_SRC_GLB_FUNC_IDX_MASK 0x1FFF
+
+#define SPNIC_MSG_HEADER_MSG_LEN_MASK 0x7FF
+#define SPNIC_MSG_HEADER_MODULE_MASK 0x1F
+#define SPNIC_MSG_HEADER_SEG_LEN_MASK 0x3F
+#define SPNIC_MSG_HEADER_NO_ACK_MASK 0x1
+#define SPNIC_MSG_HEADER_DATA_TYPE_MASK 0x1
+#define SPNIC_MSG_HEADER_SEQID_MASK 0x3F
+#define SPNIC_MSG_HEADER_LAST_MASK 0x1
+#define SPNIC_MSG_HEADER_DIRECTION_MASK 0x1
+
+#define SPNIC_MSG_HEADER_GET(val, field) \
+ (((val) >> SPNIC_MSG_HEADER_##field##_SHIFT) & \
+ SPNIC_MSG_HEADER_##field##_MASK)
+#define SPNIC_MSG_HEADER_SET(val, field) \
+ ((u64)(((u64)(val)) & \
+ SPNIC_MSG_HEADER_##field##_MASK) << \
+ SPNIC_MSG_HEADER_##field##_SHIFT)
+
+#define IS_TLP_MBX(dst_func) ((dst_func) == SPNIC_MGMT_SRC_ID)
+
+enum spnic_msg_direction_type {
+ SPNIC_MSG_DIRECT_SEND = 0,
+ SPNIC_MSG_RESPONSE = 1
+};
+
+enum spnic_msg_segment_type {
+ NOT_LAST_SEGMENT = 0,
+ LAST_SEGMENT = 1
+};
+
+enum spnic_msg_ack_type {
+ SPNIC_MSG_ACK,
+ SPNIC_MSG_NO_ACK
+};
+
+enum spnic_data_type {
+ SPNIC_DATA_INLINE = 0,
+ SPNIC_DATA_DMA = 1
+};
+
+enum spnic_msg_src_type {
+ SPNIC_MSG_FROM_MGMT = 0,
+ SPNIC_MSG_FROM_MBOX = 1
+};
+
+enum spnic_msg_aeq_type {
+ SPNIC_ASYNC_MSG_AEQ = 0,
+ /* Which aeq the dest func or mgmt cpu uses to respond to mbox messages */
+ SPNIC_MBOX_RSP_MSG_AEQ = 1,
+ /* Which aeq the mgmt cpu uses to respond to api cmd messages */
+ SPNIC_MGMT_RSP_MSG_AEQ = 2
+};
+
+enum spnic_mbox_seg_errcode {
+ MBOX_ERRCODE_NO_ERRORS = 0,
+ /* VF sends the mailbox data to the wrong destination functions */
+ MBOX_ERRCODE_VF_TO_WRONG_FUNC = 0x100,
+ /* PPF sends the mailbox data to the wrong destination functions */
+ MBOX_ERRCODE_PPF_TO_WRONG_FUNC = 0x200,
+ /* PF sends the mailbox data to the wrong destination functions */
+ MBOX_ERRCODE_PF_TO_WRONG_FUNC = 0x300,
+ /* The mailbox data size is set to all zero */
+ MBOX_ERRCODE_ZERO_DATA_SIZE = 0x400,
+ /* The sender function attribute has not been learned by CPI hardware */
+ MBOX_ERRCODE_UNKNOWN_SRC_FUNC = 0x500,
+ /* The receiver function attr has not been learned by CPI hardware */
+ MBOX_ERRCODE_UNKNOWN_DES_FUNC = 0x600
+};
+
+struct mbox_msg_info {
+ u8 msg_id;
+ u8 status; /* Can only use 3 bit */
+};
+
+struct spnic_recv_mbox {
+ void *mbox;
+ void *buf_out;
+ u16 cmd;
+ u16 mbox_len;
+ enum spnic_mod_type mod;
+ enum spnic_msg_ack_type ack_type;
+ u8 seq_id;
+ struct mbox_msg_info msg_info;
+};
+
+struct spnic_send_mbox {
+ u8 *data;
+
+ u64 *wb_status; /* Write back status */
+
+ const struct rte_memzone *wb_mz;
+ void *wb_vaddr;
+ rte_iova_t wb_paddr;
+
+ const struct rte_memzone *sbuff_mz;
+ void *sbuff_vaddr;
+ rte_iova_t sbuff_paddr;
+};
+
+enum mbox_event_state {
+ EVENT_START = 0,
+ EVENT_FAIL,
+ EVENT_SUCCESS,
+ EVENT_TIMEOUT,
+ EVENT_END
+};
+
+enum spnic_mbox_cb_state {
+ SPNIC_VF_MBOX_CB_REG = 0,
+ SPNIC_VF_MBOX_CB_RUNNING,
+ SPNIC_PF_MBOX_CB_REG,
+ SPNIC_PF_MBOX_CB_RUNNING,
+ SPNIC_PPF_MBOX_CB_REG,
+ SPNIC_PPF_MBOX_CB_RUNNING,
+ SPNIC_PPF_TO_PF_MBOX_CB_REG,
+ SPNIC_PPF_TO_PF_MBOX_CB_RUNNIG
+};
+
+struct spnic_mbox {
+ struct spnic_hwdev *hwdev;
+
+ pthread_mutex_t mbox_send_mutex;
+ pthread_mutex_t msg_send_mutex;
+
+ struct spnic_send_mbox send_mbox;
+
+ /* Last element for mgmt */
+ struct spnic_recv_mbox mbox_resp[SPNIC_MAX_FUNCTIONS + 1];
+ struct spnic_recv_mbox mbox_send[SPNIC_MAX_FUNCTIONS + 1];
+
+ u8 send_msg_id;
+ enum mbox_event_state event_flag;
+ /* Lock for mbox event flag */
+ rte_spinlock_t mbox_lock;
+};
+
+int spnic_mbox_func_aeqe_handler(void *handle, u8 *header, __rte_unused u8 size,
+ void *param);
+
+int spnic_func_to_func_init(struct spnic_hwdev *hwdev);
+
+void spnic_func_to_func_free(struct spnic_hwdev *hwdev);
+
+int spnic_send_mbox_to_mgmt(struct spnic_hwdev *hwdev, enum spnic_mod_type mod,
+ u16 cmd, void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size, u32 timeout);
+
+void spnic_response_mbox_to_mgmt(struct spnic_hwdev *hwdev,
+ enum spnic_mod_type mod, u16 cmd,
+ void *buf_in, u16 in_size, u16 msg_id);
+
+int spnic_send_mbox_to_mgmt_no_ack(struct spnic_hwdev *hwdev,
+ enum spnic_mod_type mod, u16 cmd,
+ void *buf_in, u16 in_size);
+
+int spnic_mbox_to_pf(struct spnic_hwdev *hwdev, enum spnic_mod_type mod,
+ u16 cmd, void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size, u32 timeout);
+
+int spnic_mbox_to_vf(struct spnic_hwdev *hwdev, enum spnic_mod_type mod,
+ u16 vf_id, u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size, u32 timeout);
+
+#endif /* _SPNIC_MBOX_H_ */
diff --git a/drivers/net/spnic/base/spnic_mgmt.h b/drivers/net/spnic/base/spnic_mgmt.h
new file mode 100644
index 0000000000..37d0410473
--- /dev/null
+++ b/drivers/net/spnic/base/spnic_mgmt.h
@@ -0,0 +1,36 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Ramaxel Memory Technology, Ltd
+ */
+
+#ifndef _SPNIC_MGMT_H_
+#define _SPNIC_MGMT_H_
+
+#define SPNIC_MSG_HANDLER_RES (-1)
+
+/* Cmdq module type */
+enum spnic_mod_type {
+ SPNIC_MOD_COMM = 0, /* HW communication module */
+ SPNIC_MOD_L2NIC = 1, /* L2NIC module */
+ SPNIC_MOD_ROCE = 2,
+ SPNIC_MOD_PLOG = 3,
+ SPNIC_MOD_TOE = 4,
+ SPNIC_MOD_FLR = 5,
+ SPNIC_MOD_FC = 6,
+ SPNIC_MOD_CFGM = 7, /* Configuration module */
+ SPNIC_MOD_CQM = 8,
+ SPNIC_MOD_VSWITCH = 9,
+ COMM_MOD_FC = 10,
+ SPNIC_MOD_OVS = 11,
+ SPNIC_MOD_DSW = 12,
+ SPNIC_MOD_MIGRATE = 13,
+ SPNIC_MOD_HILINK = 14,
+ SPNIC_MOD_CRYPT = 15, /* Secure crypto module */
+ SPNIC_MOD_HW_MAX = 16, /* Hardware max module id */
+
+ /* Software module id, for PF/VF and multi-host */
+ SPNIC_MOD_SW_FUNC = 17,
+ SPNIC_MOD_IOE = 18,
+ SPNIC_MOD_MAX
+};
+
+#endif /* _SPNIC_MGMT_H_ */
--
2.27.0
^ permalink raw reply [flat|nested] 32+ messages in thread
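For reference, the segmentation scheme implemented by send_mbox_to_func() in spnic_mbox.c above reduces to the following minimal, self-contained sketch; SEG_LEN mirrors MBOX_SEG_LEN, and the helper names here are illustrative assumptions rather than driver APIs.

#include <stdio.h>
#include <string.h>

#define SEG_LEN 48 /* mirrors MBOX_SEG_LEN */

/* Hypothetical helper: split one message into 48-byte mailbox segments */
static void send_segmented(const unsigned char *msg, unsigned int msg_len)
{
	unsigned int left = msg_len, seq_id = 0;

	while (left > 0) {
		unsigned int seg = left < SEG_LEN ? left : SEG_LEN;
		int last = (left <= SEG_LEN);

		/*
		 * A real driver would write the header plus this segment to the
		 * mailbox CSR area here; the receiver copies each segment to
		 * offset seq_id * SEG_LEN and stops when the LAST flag is set.
		 */
		printf("seq %u: %u bytes%s\n", seq_id, seg, last ? " (last)" : "");

		left -= seg;
		msg += seg;
		seq_id++;
	}
}

int main(void)
{
	unsigned char msg[100];

	memset(msg, 0xab, sizeof(msg));
	send_segmented(msg, sizeof(msg)); /* 48-, 48- and 4-byte segments */
	return 0;
}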
* [PATCH v1 04/25] net/spnic: introduce event queue
2021-12-18 2:51 [PATCH v1 00/25] Net/SPNIC: support SPNIC into DPDK 22.03 Yanling Song
` (2 preceding siblings ...)
2021-12-18 2:51 ` [PATCH v1 03/25] net/spnic: add mbox message channel Yanling Song
@ 2021-12-18 2:51 ` Yanling Song
2021-12-18 2:51 ` [PATCH v1 05/25] net/spnic: add mgmt module Yanling Song
` (20 subsequent siblings)
24 siblings, 0 replies; 32+ messages in thread
From: Yanling Song @ 2021-12-18 2:51 UTC (permalink / raw)
To: dev; +Cc: songyl, yanling.song, yanggan, ferruh.yigit
This patch introduces the event queue used to receive response
messages from hardware, or from the destination function when a
source function sends a mbox message to it. This commit implements
the related data structures, initialization and the interfaces
that handle these messages.
Signed-off-by: Yanling Song <songyl@ramaxel.com>
---
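The consumer-index/wrapped-bit convention that spnic_aeq_poll_msg() below relies on can be shown with a minimal stand-alone ring sketch, assuming plain C types rather than the driver's structures: the producer flips a wrapped bit in each descriptor on every lap of the ring, so an entry is new exactly when its wrapped bit differs from the consumer's current lap state.

#include <stdbool.h>
#include <stdio.h>

#define RING_LEN 8

struct elem {
	bool wrapped; /* toggled by the producer on every lap of the ring */
	int data;
};

struct ring {
	struct elem e[RING_LEN];
	unsigned int cons_idx;
	bool wrapped; /* the consumer's view of the current lap */
};

/* Consume one element if the producer has published a new one */
static bool poll_one(struct ring *r, int *data)
{
	struct elem *cur = &r->e[r->cons_idx];

	/* Same lap as the consumer: nothing new has been written here yet */
	if (cur->wrapped == r->wrapped)
		return false;

	*data = cur->data;
	if (++r->cons_idx == RING_LEN) {
		r->cons_idx = 0;
		r->wrapped = !r->wrapped; /* consumer starts the next lap */
	}
	return true;
}

int main(void)
{
	struct ring r = { .cons_idx = 0, .wrapped = false };
	int v;

	/* Producer publishes one entry on its first lap (wrapped = true) */
	r.e[0].data = 42;
	r.e[0].wrapped = true;

	if (poll_one(&r, &v))
		printf("got %d\n", v);
	return 0;
}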
drivers/net/spnic/base/meson.build | 1 +
drivers/net/spnic/base/spnic_eqs.c | 606 +++++++++++++++++++++++++++
drivers/net/spnic/base/spnic_eqs.h | 102 +++++
drivers/net/spnic/base/spnic_hwdev.c | 44 +-
drivers/net/spnic/base/spnic_hwdev.h | 22 +
drivers/net/spnic/base/spnic_mbox.c | 20 +-
6 files changed, 790 insertions(+), 5 deletions(-)
create mode 100644 drivers/net/spnic/base/spnic_eqs.c
create mode 100644 drivers/net/spnic/base/spnic_eqs.h
diff --git a/drivers/net/spnic/base/meson.build b/drivers/net/spnic/base/meson.build
index de80eef7c4..ce7457f400 100644
--- a/drivers/net/spnic/base/meson.build
+++ b/drivers/net/spnic/base/meson.build
@@ -2,6 +2,7 @@
# Copyright(c) 2021 Ramaxel Memory Technology, Ltd
sources = [
+ 'spnic_eqs.c',
'spnic_hwdev.c',
'spnic_hwif.c',
'spnic_mbox.c'
diff --git a/drivers/net/spnic/base/spnic_eqs.c b/drivers/net/spnic/base/spnic_eqs.c
new file mode 100644
index 0000000000..7953976441
--- /dev/null
+++ b/drivers/net/spnic/base/spnic_eqs.c
@@ -0,0 +1,606 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Ramaxel Memory Technology, Ltd
+ */
+
+#include <ethdev_driver.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include "spnic_compat.h"
+#include "spnic_hwdev.h"
+#include "spnic_hwif.h"
+#include "spnic_csr.h"
+#include "spnic_eqs.h"
+#include "spnic_mgmt.h"
+#include "spnic_mbox.h"
+
+#define AEQ_CTRL_0_INTR_IDX_SHIFT 0
+#define AEQ_CTRL_0_DMA_ATTR_SHIFT 12
+#define AEQ_CTRL_0_PCI_INTF_IDX_SHIFT 20
+#define AEQ_CTRL_0_INTR_MODE_SHIFT 31
+
+#define AEQ_CTRL_0_INTR_IDX_MASK 0x3FFU
+#define AEQ_CTRL_0_DMA_ATTR_MASK 0x3FU
+#define AEQ_CTRL_0_PCI_INTF_IDX_MASK 0x7U
+#define AEQ_CTRL_0_INTR_MODE_MASK 0x1U
+
+#define AEQ_CTRL_0_SET(val, member) \
+ (((val) & AEQ_CTRL_0_##member##_MASK) << \
+ AEQ_CTRL_0_##member##_SHIFT)
+
+#define AEQ_CTRL_0_CLEAR(val, member) \
+ ((val) & (~(AEQ_CTRL_0_##member##_MASK \
+ << AEQ_CTRL_0_##member##_SHIFT)))
+
+#define AEQ_CTRL_1_LEN_SHIFT 0
+#define AEQ_CTRL_1_ELEM_SIZE_SHIFT 24
+#define AEQ_CTRL_1_PAGE_SIZE_SHIFT 28
+
+#define AEQ_CTRL_1_LEN_MASK 0x1FFFFFU
+#define AEQ_CTRL_1_ELEM_SIZE_MASK 0x3U
+#define AEQ_CTRL_1_PAGE_SIZE_MASK 0xFU
+
+#define AEQ_CTRL_1_SET(val, member) \
+ (((val) & AEQ_CTRL_1_##member##_MASK) << \
+ AEQ_CTRL_1_##member##_SHIFT)
+
+#define AEQ_CTRL_1_CLEAR(val, member) \
+ ((val) & (~(AEQ_CTRL_1_##member##_MASK \
+ << AEQ_CTRL_1_##member##_SHIFT)))
+
+#define SPNIC_EQ_PROD_IDX_MASK 0xFFFFF
+#define SPNIC_TASK_PROCESS_EQE_LIMIT 1024
+#define SPNIC_EQ_UPDATE_CI_STEP 64
+
+#define EQ_ELEM_DESC_TYPE_SHIFT 0
+#define EQ_ELEM_DESC_SRC_SHIFT 7
+#define EQ_ELEM_DESC_SIZE_SHIFT 8
+#define EQ_ELEM_DESC_WRAPPED_SHIFT 31
+
+#define EQ_ELEM_DESC_TYPE_MASK 0x7FU
+#define EQ_ELEM_DESC_SRC_MASK 0x1U
+#define EQ_ELEM_DESC_SIZE_MASK 0xFFU
+#define EQ_ELEM_DESC_WRAPPED_MASK 0x1U
+
+#define EQ_ELEM_DESC_GET(val, member) \
+ (((val) >> EQ_ELEM_DESC_##member##_SHIFT) & \
+ EQ_ELEM_DESC_##member##_MASK)
+
+#define EQ_CI_SIMPLE_INDIR_CI_SHIFT 0
+#define EQ_CI_SIMPLE_INDIR_ARMED_SHIFT 21
+#define EQ_CI_SIMPLE_INDIR_AEQ_IDX_SHIFT 30
+
+#define EQ_CI_SIMPLE_INDIR_CI_MASK 0x1FFFFFU
+#define EQ_CI_SIMPLE_INDIR_ARMED_MASK 0x1U
+#define EQ_CI_SIMPLE_INDIR_AEQ_IDX_MASK 0x3U
+
+#define EQ_CI_SIMPLE_INDIR_SET(val, member) \
+ (((val) & EQ_CI_SIMPLE_INDIR_##member##_MASK) << \
+ EQ_CI_SIMPLE_INDIR_##member##_SHIFT)
+
+#define EQ_CI_SIMPLE_INDIR_CLEAR(val, member) \
+ ((val) & (~(EQ_CI_SIMPLE_INDIR_##member##_MASK \
+ << EQ_CI_SIMPLE_INDIR_##member##_SHIFT)))
+
+#define EQ_WRAPPED(eq) ((u32)(eq)->wrapped << EQ_VALID_SHIFT)
+
+#define EQ_CONS_IDX(eq) ((eq)->cons_idx | \
+ ((u32)(eq)->wrapped << EQ_WRAPPED_SHIFT))
+#define GET_EQ_NUM_PAGES(eq, size) \
+ ((u16)(RTE_ALIGN((u32)((eq)->eq_len * (eq)->elem_size), \
+ (size)) / (size)))
+
+#define GET_EQ_NUM_ELEMS(eq, pg_size) ((pg_size) / (u32)(eq)->elem_size)
+
+#define GET_EQ_ELEMENT(eq, idx) \
+ (((u8 *)(eq)->virt_addr[(idx) / (eq)->num_elem_in_pg]) + \
+ (u32)(((idx) & ((eq)->num_elem_in_pg - 1)) * (eq)->elem_size))
+
+#define GET_AEQ_ELEM(eq, idx) ((struct spnic_aeq_elem *)\
+ GET_EQ_ELEMENT((eq), (idx)))
+
+#define GET_CURR_AEQ_ELEM(eq) GET_AEQ_ELEM((eq), (eq)->cons_idx)
+
+#define PAGE_IN_4K(page_size) ((page_size) >> 12)
+#define EQ_SET_HW_PAGE_SIZE_VAL(eq) \
+ ((u32)ilog2(PAGE_IN_4K((eq)->page_size)))
+
+#define ELEMENT_SIZE_IN_32B(eq) (((eq)->elem_size) >> 5)
+#define EQ_SET_HW_ELEM_SIZE_VAL(eq) ((u32)ilog2(ELEMENT_SIZE_IN_32B(eq)))
+
+#define AEQ_DMA_ATTR_DEFAULT 0
+
+#define EQ_MSIX_RESEND_TIMER_CLEAR 1
+
+#define EQ_WRAPPED_SHIFT 20
+
+#define EQ_VALID_SHIFT 31
+
+#define aeq_to_aeqs(eq) \
+ container_of((eq) - (eq)->q_id, struct spnic_aeqs, aeq[0])
+
+#define AEQ_MSIX_ENTRY_IDX_0 0
+
+/**
+ * Write the cons idx to hw
+ *
+ * @param[in] eq
+ * The event queue to update the cons idx
+ * @param[in] arm_state
+ * Indicate whether report interrupts when generate eq element
+ */
+static void set_eq_cons_idx(struct spnic_eq *eq, u32 arm_state)
+{
+ u32 eq_wrap_ci, val;
+ u32 addr = SPNIC_CSR_AEQ_CI_SIMPLE_INDIR_ADDR;
+
+ eq_wrap_ci = EQ_CONS_IDX(eq);
+
+ /* In the DPDK PMD, only aeq0 uses int_arm mode */
+ if (eq->q_id != 0)
+ val = EQ_CI_SIMPLE_INDIR_SET(SPNIC_EQ_NOT_ARMED, ARMED);
+ else
+ val = EQ_CI_SIMPLE_INDIR_SET(arm_state, ARMED);
+
+ val = val | EQ_CI_SIMPLE_INDIR_SET(eq_wrap_ci, CI) |
+ EQ_CI_SIMPLE_INDIR_SET(eq->q_id, AEQ_IDX);
+
+ spnic_hwif_write_reg(eq->hwdev->hwif, addr, val);
+}
+
+/**
+ * Set aeq's ctrls registers
+ *
+ * @param[in] eq
+ * The event queue for setting
+ */
+static void set_aeq_ctrls(struct spnic_eq *eq)
+{
+ struct spnic_hwif *hwif = eq->hwdev->hwif;
+ struct irq_info *eq_irq = &eq->eq_irq;
+ u32 addr, val, ctrl0, ctrl1, page_size_val, elem_size;
+ u32 pci_intf_idx = SPNIC_PCI_INTF_IDX(hwif);
+
+ /* Set ctrl0 */
+ addr = SPNIC_CSR_AEQ_CTRL_0_ADDR;
+
+ val = spnic_hwif_read_reg(hwif, addr);
+
+ val = AEQ_CTRL_0_CLEAR(val, INTR_IDX) &
+ AEQ_CTRL_0_CLEAR(val, DMA_ATTR) &
+ AEQ_CTRL_0_CLEAR(val, PCI_INTF_IDX) &
+ AEQ_CTRL_0_CLEAR(val, INTR_MODE);
+
+ ctrl0 = AEQ_CTRL_0_SET(eq_irq->msix_entry_idx, INTR_IDX) |
+ AEQ_CTRL_0_SET(AEQ_DMA_ATTR_DEFAULT, DMA_ATTR) |
+ AEQ_CTRL_0_SET(pci_intf_idx, PCI_INTF_IDX) |
+ AEQ_CTRL_0_SET(SPNIC_INTR_MODE_ARMED, INTR_MODE);
+
+ val |= ctrl0;
+
+ spnic_hwif_write_reg(hwif, addr, val);
+
+ /* Set ctrl1 */
+ addr = SPNIC_CSR_AEQ_CTRL_1_ADDR;
+
+ page_size_val = EQ_SET_HW_PAGE_SIZE_VAL(eq);
+ elem_size = EQ_SET_HW_ELEM_SIZE_VAL(eq);
+
+ ctrl1 = AEQ_CTRL_1_SET(eq->eq_len, LEN) |
+ AEQ_CTRL_1_SET(elem_size, ELEM_SIZE) |
+ AEQ_CTRL_1_SET(page_size_val, PAGE_SIZE);
+
+ spnic_hwif_write_reg(hwif, addr, ctrl1);
+}
+
+/**
+ * Initialize all the elements in the aeq
+ *
+ * @param[in] eq
+ * The event queue
+ * @param[in] init_val
+ * Value to init
+ */
+static void aeq_elements_init(struct spnic_eq *eq, u32 init_val)
+{
+ struct spnic_aeq_elem *aeqe = NULL;
+ u32 i;
+
+ for (i = 0; i < eq->eq_len; i++) {
+ aeqe = GET_AEQ_ELEM(eq, i);
+ aeqe->desc = cpu_to_be32(init_val);
+ }
+
+ rte_wmb(); /* Write the init values */
+}
+
+/**
+ * Allocate the pages for the queue
+ *
+ * @param[in] eq
+ * The event queue
+ *
+ * @retval zero : Success
+ * @retval negative : Failure.
+ */
+static int alloc_eq_pages(struct spnic_eq *eq)
+{
+ struct spnic_hwif *hwif = eq->hwdev->hwif;
+ u64 dma_addr_size, virt_addr_size, eq_mz_size;
+ u32 reg, init_val;
+ u16 pg_num, i;
+ int err;
+
+ dma_addr_size = eq->num_pages * sizeof(*eq->dma_addr);
+ virt_addr_size = eq->num_pages * sizeof(*eq->virt_addr);
+ eq_mz_size = eq->num_pages * sizeof(*eq->eq_mz);
+
+ eq->dma_addr = rte_zmalloc("eq_dma", dma_addr_size,
+ SPNIC_MEM_ALLOC_ALIGN_MIN);
+ if (!eq->dma_addr)
+ return -ENOMEM;
+
+ eq->virt_addr = rte_zmalloc("eq_va", virt_addr_size,
+ SPNIC_MEM_ALLOC_ALIGN_MIN);
+ if (!eq->virt_addr) {
+ err = -ENOMEM;
+ goto virt_addr_alloc_err;
+ }
+
+ eq->eq_mz = rte_zmalloc("eq_mz", eq_mz_size, SPNIC_MEM_ALLOC_ALIGN_MIN);
+ if (!eq->eq_mz) {
+ err = -ENOMEM;
+ goto eq_mz_alloc_err;
+ }
+
+ for (pg_num = 0; pg_num < eq->num_pages; pg_num++) {
+ eq->eq_mz[pg_num] = rte_eth_dma_zone_reserve(eq->hwdev->eth_dev,
+ "eq_mz", eq->q_id, eq->page_size,
+ eq->page_size, SOCKET_ID_ANY);
+ if (!eq->eq_mz[pg_num]) {
+ err = -ENOMEM;
+ goto dma_alloc_err;
+ }
+
+ eq->dma_addr[pg_num] = eq->eq_mz[pg_num]->iova;
+ eq->virt_addr[pg_num] = eq->eq_mz[pg_num]->addr;
+
+ reg = SPNIC_AEQ_HI_PHYS_ADDR_REG(pg_num);
+ spnic_hwif_write_reg(hwif, reg,
+ upper_32_bits(eq->dma_addr[pg_num]));
+
+ reg = SPNIC_AEQ_LO_PHYS_ADDR_REG(pg_num);
+ spnic_hwif_write_reg(hwif, reg,
+ lower_32_bits(eq->dma_addr[pg_num]));
+ }
+
+ eq->num_elem_in_pg = GET_EQ_NUM_ELEMS(eq, eq->page_size);
+ if (eq->num_elem_in_pg & (eq->num_elem_in_pg - 1)) {
+ PMD_DRV_LOG(ERR, "Number element in eq page != power of 2");
+ err = -EINVAL;
+ goto dma_alloc_err;
+ }
+ init_val = EQ_WRAPPED(eq);
+
+ aeq_elements_init(eq, init_val);
+
+ return 0;
+
+dma_alloc_err:
+ for (i = 0; i < pg_num; i++)
+ rte_memzone_free(eq->eq_mz[i]);
+
+eq_mz_alloc_err:
+ rte_free(eq->virt_addr);
+
+virt_addr_alloc_err:
+ rte_free(eq->dma_addr);
+
+ return err;
+}
+
+/**
+ * Free the pages of the queue
+ *
+ * @param[in] eq
+ * The event queue
+ */
+static void free_eq_pages(struct spnic_eq *eq)
+{
+ u16 pg_num;
+
+ for (pg_num = 0; pg_num < eq->num_pages; pg_num++)
+ rte_memzone_free(eq->eq_mz[pg_num]);
+
+ rte_free(eq->eq_mz);
+ rte_free(eq->virt_addr);
+ rte_free(eq->dma_addr);
+}
+
+static inline u32 get_page_size(struct spnic_eq *eq)
+{
+ u32 total_size;
+ u16 count, n = 0;
+
+ total_size = RTE_ALIGN((eq->eq_len * eq->elem_size),
+ SPNIC_MIN_EQ_PAGE_SIZE);
+ if (total_size <= (SPNIC_EQ_MAX_PAGES * SPNIC_MIN_EQ_PAGE_SIZE))
+ return SPNIC_MIN_EQ_PAGE_SIZE;
+
+ count = (u16)(RTE_ALIGN((total_size / SPNIC_EQ_MAX_PAGES),
+ SPNIC_MIN_EQ_PAGE_SIZE) / SPNIC_MIN_EQ_PAGE_SIZE);
+ if (!(count & (count - 1)))
+ return SPNIC_MIN_EQ_PAGE_SIZE * count;
+
+ while (count) {
+ count >>= 1;
+ n++;
+ }
+
+ return ((u32)SPNIC_MIN_EQ_PAGE_SIZE) << n;
+}
+
+/**
+ * Initialize aeq
+ *
+ * @param[in] eq
+ * The event queue
+ * @param[in] hwdev
+ * The pointer to the private hardware device object
+ * @param[in] q_id
+ * Queue id number
+ * @param[in] q_len
+ * The number of EQ elements
+ *
+ * @retval zero : Success
+ * @retval non-zero : Failure.
+ */
+static int init_aeq(struct spnic_eq *eq, struct spnic_hwdev *hwdev,
+ u16 q_id, u32 q_len)
+{
+ int err = 0;
+
+ eq->hwdev = hwdev;
+ eq->q_id = q_id;
+ eq->eq_len = q_len;
+
+ /* Indirect access should set q_id first */
+ spnic_hwif_write_reg(hwdev->hwif, SPNIC_AEQ_INDIR_IDX_ADDR, eq->q_id);
+ rte_wmb(); /* write index before config */
+
+ /* Clear eq_len to force eqe drop in hardware */
+ spnic_hwif_write_reg(eq->hwdev->hwif, SPNIC_CSR_AEQ_CTRL_1_ADDR, 0);
+ rte_wmb();
+ /* Init aeq pi to 0 before allocating aeq pages */
+ spnic_hwif_write_reg(eq->hwdev->hwif, SPNIC_CSR_AEQ_PROD_IDX_ADDR, 0);
+
+ eq->cons_idx = 0;
+ eq->wrapped = 0;
+
+ eq->elem_size = SPNIC_AEQE_SIZE;
+ eq->page_size = get_page_size(eq);
+ eq->orig_page_size = eq->page_size;
+ eq->num_pages = GET_EQ_NUM_PAGES(eq, eq->page_size);
+ if (eq->num_pages > SPNIC_EQ_MAX_PAGES) {
+ PMD_DRV_LOG(ERR, "Too many pages: %d for aeq", eq->num_pages);
+ return -EINVAL;
+ }
+
+ err = alloc_eq_pages(eq);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Allocate pages for eq failed");
+ return err;
+ }
+
+ /* Pmd driver uses AEQ_MSIX_ENTRY_IDX_0 */
+ eq->eq_irq.msix_entry_idx = AEQ_MSIX_ENTRY_IDX_0;
+ set_aeq_ctrls(eq);
+
+ set_eq_cons_idx(eq, SPNIC_EQ_ARMED);
+
+ if (eq->q_id == 0)
+ spnic_set_msix_state(hwdev, 0, SPNIC_MSIX_ENABLE);
+
+ eq->poll_retry_nr = SPNIC_RETRY_NUM;
+
+ return 0;
+}
+
+/**
+ * Remove aeq
+ *
+ * @param[in] eq
+ * The event queue
+ */
+static void remove_aeq(struct spnic_eq *eq)
+{
+ struct irq_info *entry = &eq->eq_irq;
+
+ if (eq->q_id == 0)
+ spnic_set_msix_state(eq->hwdev, entry->msix_entry_idx,
+ SPNIC_MSIX_DISABLE);
+
+ /* Indirect access should set q_id first */
+ spnic_hwif_write_reg(eq->hwdev->hwif, SPNIC_AEQ_INDIR_IDX_ADDR,
+ eq->q_id);
+
+ rte_wmb(); /* Write index before config */
+
+ /* Clear eq_len to avoid hw access host memory */
+ spnic_hwif_write_reg(eq->hwdev->hwif, SPNIC_CSR_AEQ_CTRL_1_ADDR, 0);
+
+ /* Update cons_idx to avoid invalid interrupt */
+ eq->cons_idx = spnic_hwif_read_reg(eq->hwdev->hwif,
+ SPNIC_CSR_AEQ_PROD_IDX_ADDR);
+ set_eq_cons_idx(eq, SPNIC_EQ_NOT_ARMED);
+
+ free_eq_pages(eq);
+}
+
+/**
+ * Init all aeqs
+ *
+ * @param[in] hwdev
+ * The pointer to the private hardware device object
+ *
+ * @retval zero : Success
+ * @retval non-zero : Failure.
+ */
+int spnic_aeqs_init(struct spnic_hwdev *hwdev)
+{
+ struct spnic_aeqs *aeqs = NULL;
+ u16 num_aeqs;
+ int err;
+ u16 i, q_id;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ num_aeqs = SPNIC_HWIF_NUM_AEQS(hwdev->hwif);
+ if (num_aeqs > SPNIC_MAX_AEQS) {
+ PMD_DRV_LOG(INFO, "Adjust aeq num to %d", SPNIC_MAX_AEQS);
+ num_aeqs = SPNIC_MAX_AEQS;
+ } else if (num_aeqs < SPNIC_MIN_AEQS) {
+ PMD_DRV_LOG(ERR, "PMD needs %d AEQs, Chip has %d",
+ SPNIC_MIN_AEQS, num_aeqs);
+ return -EINVAL;
+ }
+
+ aeqs = rte_zmalloc("spnic_aeqs", sizeof(*aeqs),
+ SPNIC_MEM_ALLOC_ALIGN_MIN);
+ if (!aeqs)
+ return -ENOMEM;
+
+ hwdev->aeqs = aeqs;
+ aeqs->hwdev = hwdev;
+ aeqs->num_aeqs = num_aeqs;
+
+ for (q_id = 0; q_id < num_aeqs; q_id++) {
+ err = init_aeq(&aeqs->aeq[q_id], hwdev, q_id,
+ SPNIC_DEFAULT_AEQ_LEN);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Init aeq %d failed", q_id);
+ goto init_aeq_err;
+ }
+ }
+
+ return 0;
+
+init_aeq_err:
+ for (i = 0; i < q_id; i++)
+ remove_aeq(&aeqs->aeq[i]);
+
+ rte_free(aeqs);
+ return err;
+}
+
+/**
+ * Free all aeqs
+ *
+ * @param[in] hwdev
+ * The pointer to the private hardware device object
+ */
+void spnic_aeqs_free(struct spnic_hwdev *hwdev)
+{
+ struct spnic_aeqs *aeqs = hwdev->aeqs;
+ u16 q_id;
+
+ for (q_id = 0; q_id < aeqs->num_aeqs; q_id++)
+ remove_aeq(&aeqs->aeq[q_id]);
+
+ rte_free(aeqs);
+}
+
+static int aeq_elem_handler(struct spnic_eq *eq, u32 aeqe_desc,
+ struct spnic_aeq_elem *aeqe_pos, void *param)
+{
+ enum spnic_aeq_type event;
+ u8 data[SPNIC_AEQE_DATA_SIZE];
+ u8 size;
+
+ event = EQ_ELEM_DESC_GET(aeqe_desc, TYPE);
+
+ memcpy(data, aeqe_pos->aeqe_data, SPNIC_AEQE_DATA_SIZE);
+ spnic_be32_to_cpu(data, SPNIC_AEQE_DATA_SIZE);
+ size = EQ_ELEM_DESC_GET(aeqe_desc, SIZE);
+
+ if (event == SPNIC_MBX_FROM_FUNC) {
+ return spnic_mbox_func_aeqe_handler(eq->hwdev, data, size,
+ param);
+ } else {
+ PMD_DRV_LOG(ERR, "AEQ hw event not support %d", event);
+ return -EINVAL;
+ }
+}
+
+/**
+ * Poll one or more aeqe and invoke the dedicated handler
+ *
+ * @param[in] eq
+ * The event queue
+ * @param[in] timeout
+ * 0 - Poll all aeqe in eq, used in interrupt mode,
+ * > 0 - Poll aeq until get aeqe with 'last' field set to 1,
+ * used in polling mode.
+ * @param[in] param
+ * Customized parameter
+ *
+ * @retval zero : Success
+ * @retval -EIO : Poll timeout
+ * @retval -ENODEV : Not supported by SW
+ */
+int spnic_aeq_poll_msg(struct spnic_eq *eq, u32 timeout, void *param)
+{
+ struct spnic_aeq_elem *aeqe_pos = NULL;
+ u32 aeqe_desc = 0;
+ u32 eqe_cnt = 0;
+ int err = -EFAULT;
+ int done = SPNIC_MSG_HANDLER_RES;
+ unsigned long end;
+ u16 i;
+
+ for (i = 0; ((timeout == 0) && (i < eq->eq_len)) ||
+ ((timeout > 0) && (done != 0) && (i < eq->eq_len)); i++) {
+ err = -EIO;
+ end = jiffies + msecs_to_jiffies(timeout);
+ do {
+ aeqe_pos = GET_CURR_AEQ_ELEM(eq);
+ rte_rmb();
+
+ /* Data in HW is in Big endian Format */
+ aeqe_desc = be32_to_cpu(aeqe_pos->desc);
+
+ /*
+ * HW updates wrapped bit,
+ * when it adds eq element event
+ */
+ if (EQ_ELEM_DESC_GET(aeqe_desc, WRAPPED)
+ != eq->wrapped) {
+ err = 0;
+ break;
+ }
+
+ if (timeout != 0)
+ usleep(1000);
+ } while (time_before(jiffies, end));
+
+ if (err != 0) /* Poll time out */
+ break;
+
+ done = aeq_elem_handler(eq, aeqe_desc, aeqe_pos, param);
+
+ eq->cons_idx++;
+ if (eq->cons_idx == eq->eq_len) {
+ eq->cons_idx = 0;
+ eq->wrapped = !eq->wrapped;
+ }
+
+ if (++eqe_cnt >= SPNIC_EQ_UPDATE_CI_STEP) {
+ eqe_cnt = 0;
+ set_eq_cons_idx(eq, SPNIC_EQ_NOT_ARMED);
+ }
+ }
+
+ set_eq_cons_idx(eq, SPNIC_EQ_ARMED);
+
+ return err;
+}
diff --git a/drivers/net/spnic/base/spnic_eqs.h b/drivers/net/spnic/base/spnic_eqs.h
new file mode 100644
index 0000000000..baefae58fb
--- /dev/null
+++ b/drivers/net/spnic/base/spnic_eqs.h
@@ -0,0 +1,102 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Ramaxel Memory Technology, Ltd
+ */
+
+#ifndef _SPNIC_EQS_H_
+#define _SPNIC_EQS_H_
+
+#define SPNIC_MAX_AEQS 4
+#define SPNIC_MIN_AEQS 2
+#define SPNIC_EQ_MAX_PAGES 4
+
+#define SPNIC_AEQE_SIZE 64
+
+#define SPNIC_AEQE_DESC_SIZE 4
+#define SPNIC_AEQE_DATA_SIZE \
+ (SPNIC_AEQE_SIZE - SPNIC_AEQE_DESC_SIZE)
+
+/* The Linux driver uses 1K; DPDK uses 64 */
+#define SPNIC_DEFAULT_AEQ_LEN 64
+
+#define SPNIC_MIN_EQ_PAGE_SIZE 0x1000 /* Min eq page size 4K Bytes */
+#define SPNIC_MAX_EQ_PAGE_SIZE 0x400000 /* Max eq page size 4M Bytes */
+
+#define SPNIC_MIN_AEQ_LEN 64
+#define SPNIC_MAX_AEQ_LEN \
+ ((SPNIC_MAX_EQ_PAGE_SIZE / SPNIC_AEQE_SIZE) * SPNIC_EQ_MAX_PAGES)
+
+#define EQ_IRQ_NAME_LEN 64
+
+enum spnic_eq_intr_mode {
+ SPNIC_INTR_MODE_ARMED,
+ SPNIC_INTR_MODE_ALWAYS
+};
+
+enum spnic_eq_ci_arm_state {
+ SPNIC_EQ_NOT_ARMED,
+ SPNIC_EQ_ARMED
+};
+
+struct irq_info {
+ u16 msix_entry_idx; /* IRQ corresponding index number */
+ u32 irq_id; /* The IRQ number from OS */
+};
+
+#define SPNIC_RETRY_NUM 10
+
+enum spnic_aeq_type {
+ SPNIC_HW_INTER_INT = 0,
+ SPNIC_MBX_FROM_FUNC = 1,
+ SPNIC_MSG_FROM_MGMT_CPU = 2,
+ SPNIC_API_RSP = 3,
+ SPNIC_API_CHAIN_STS = 4,
+ SPNIC_MBX_SEND_RSLT = 5,
+ SPNIC_MAX_AEQ_EVENTS
+};
+
+struct spnic_eq {
+ struct spnic_hwdev *hwdev;
+ u16 q_id;
+ u32 page_size;
+ u32 orig_page_size;
+ u32 eq_len;
+
+ u32 cons_idx;
+ u16 wrapped;
+
+ u16 elem_size;
+ u16 num_pages;
+ u32 num_elem_in_pg;
+
+ struct irq_info eq_irq;
+
+ const struct rte_memzone **eq_mz;
+ rte_iova_t *dma_addr;
+ u8 **virt_addr;
+
+ u16 poll_retry_nr;
+};
+
+struct spnic_aeq_elem {
+ u8 aeqe_data[SPNIC_AEQE_DATA_SIZE];
+ u32 desc;
+};
+
+struct spnic_aeqs {
+ struct spnic_hwdev *hwdev;
+
+ struct spnic_eq aeq[SPNIC_MAX_AEQS];
+ u16 num_aeqs;
+};
+
+int spnic_aeqs_init(struct spnic_hwdev *hwdev);
+
+void spnic_aeqs_free(struct spnic_hwdev *hwdev);
+
+void spnic_dump_aeq_info(struct spnic_hwdev *hwdev);
+
+int spnic_aeq_poll_msg(struct spnic_eq *eq, u32 timeout, void *param);
+
+void spnic_dev_handle_aeq_event(struct spnic_hwdev *hwdev, void *param);
+
+#endif /* _SPNIC_EQS_H_ */
diff --git a/drivers/net/spnic/base/spnic_hwdev.c b/drivers/net/spnic/base/spnic_hwdev.c
index bcecbaa895..e45058423c 100644
--- a/drivers/net/spnic/base/spnic_hwdev.c
+++ b/drivers/net/spnic/base/spnic_hwdev.c
@@ -5,10 +5,45 @@
#include "spnic_compat.h"
#include "spnic_csr.h"
#include "spnic_hwif.h"
+#include "spnic_eqs.h"
#include "spnic_mgmt.h"
#include "spnic_mbox.h"
#include "spnic_hwdev.h"
+typedef void (*mgmt_event_cb)(void *handle, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size);
+
+struct mgmt_event_handle {
+ u16 cmd;
+ mgmt_event_cb proc;
+};
+
+const struct mgmt_event_handle mgmt_event_proc[] = {
+};
+
+void pf_handle_mgmt_comm_event(void *handle, __rte_unused void *pri_handle,
+ u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ struct spnic_hwdev *hwdev = handle;
+ u32 i, event_num = RTE_DIM(mgmt_event_proc);
+
+ if (!hwdev)
+ return;
+
+ for (i = 0; i < event_num; i++) {
+ if (cmd == mgmt_event_proc[i].cmd) {
+ if (mgmt_event_proc[i].proc)
+ mgmt_event_proc[i].proc(handle, buf_in, in_size,
+ buf_out, out_size);
+
+ return;
+ }
+ }
+
+ PMD_DRV_LOG(WARNING, "Unsupported mgmt cpu event %d to process", cmd);
+}
+
int vf_handle_pf_comm_mbox(void *handle, __rte_unused void *pri_handle,
__rte_unused u16 cmd, __rte_unused void *buf_in,
__rte_unused u16 in_size, __rte_unused void *buf_out,
@@ -28,6 +63,12 @@ static int init_mgmt_channel(struct spnic_hwdev *hwdev)
{
int err;
+ err = spnic_aeqs_init(hwdev);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Init async event queues failed");
+ return err;
+ }
+
err = spnic_func_to_func_init(hwdev);
if (err) {
PMD_DRV_LOG(ERR, "Init mailbox channel failed");
@@ -37,6 +78,7 @@ static int init_mgmt_channel(struct spnic_hwdev *hwdev)
return 0;
func_to_func_init_err:
+ spnic_aeqs_free(hwdev);
return err;
}
@@ -44,9 +86,9 @@ static int init_mgmt_channel(struct spnic_hwdev *hwdev)
static void free_mgmt_channel(struct spnic_hwdev *hwdev)
{
spnic_func_to_func_free(hwdev);
+ spnic_aeqs_free(hwdev);
}
-
static int spnic_init_comm_ch(struct spnic_hwdev *hwdev)
{
int err;
diff --git a/drivers/net/spnic/base/spnic_hwdev.h b/drivers/net/spnic/base/spnic_hwdev.h
index b3a8b32287..a0691eed2e 100644
--- a/drivers/net/spnic/base/spnic_hwdev.h
+++ b/drivers/net/spnic/base/spnic_hwdev.h
@@ -8,6 +8,21 @@
#include <rte_ether.h>
#define SPNIC_CHIP_FAULT_SIZE (110 * 1024)
+struct cfg_mgmt_info;
+struct spnic_hwif;
+struct spnic_aeqs;
+struct spnic_mbox;
+struct spnic_msg_pf_to_mgmt;
+
+struct ffm_intr_info {
+ u8 node_id;
+ /* Error level of the interrupt source */
+ u8 err_level;
+ /* Classification by interrupt source properties */
+ u16 err_type;
+ u32 err_csr_addr;
+ u32 err_csr_value;
+};
struct spnic_hwdev {
void *dev_handle; /* Pointer to spnic_nic_dev */
@@ -18,6 +33,9 @@ struct spnic_hwdev {
struct spnic_hwif *hwif;
struct spnic_mbox *func_to_func;
+ struct cfg_mgmt_info *cfg_mgmt;
+ struct spnic_aeqs *aeqs;
+ struct spnic_msg_pf_to_mgmt *pf_to_mgmt;
u8 *chip_fault_stats;
u16 max_vfs;
@@ -29,6 +47,10 @@ int vf_handle_pf_comm_mbox(void *handle, __rte_unused void *pri_handle,
__rte_unused u16 in_size, __rte_unused void *buf_out,
__rte_unused u16 *out_size);
+void pf_handle_mgmt_comm_event(void *handle, __rte_unused void *pri_handle,
+ u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size);
+
int spnic_init_hwdev(struct spnic_hwdev *hwdev);
void spnic_free_hwdev(struct spnic_hwdev *hwdev);
diff --git a/drivers/net/spnic/base/spnic_mbox.c b/drivers/net/spnic/base/spnic_mbox.c
index d019612cef..1677bd7404 100644
--- a/drivers/net/spnic/base/spnic_mbox.c
+++ b/drivers/net/spnic/base/spnic_mbox.c
@@ -2,13 +2,13 @@
* Copyright(c) 2021 Ramaxel Memory Technology, Ltd
*/
-#include <rte_atomic.h>
#include <ethdev_driver.h>
#include "spnic_compat.h"
#include "spnic_hwdev.h"
#include "spnic_csr.h"
-#include "spnic_hwif.h"
#include "spnic_mgmt.h"
+#include "spnic_hwif.h"
+#include "spnic_eqs.h"
#include "spnic_mbox.h"
#define SPNIC_MBOX_INT_DST_FUNC_SHIFT 0
@@ -713,7 +713,9 @@ static int spnic_mbox_to_func(struct spnic_mbox *func_to_func,
/* Use mbox_resp to hole data which responsed from other function */
struct spnic_recv_mbox *mbox_for_resp = NULL;
struct mbox_msg_info msg_info = {0};
+ struct spnic_eq *aeq = NULL;
u16 mbox_rsp_idx;
+ u32 time;
int err;
mbox_rsp_idx = (dst_func == SPNIC_MGMT_SRC_ID) ?
@@ -748,9 +750,19 @@ static int spnic_mbox_to_func(struct spnic_mbox *func_to_func,
goto send_err;
}
+ time = msecs_to_jiffies(timeout ? timeout : SPNIC_MBOX_COMP_TIME);
+ aeq = &func_to_func->hwdev->aeqs->aeq[SPNIC_MBOX_RSP_MSG_AEQ];
+ err = spnic_aeq_poll_msg(aeq, time, NULL);
+ if (err) {
+ set_mbox_to_func_event(func_to_func, EVENT_TIMEOUT);
+ PMD_DRV_LOG(ERR, "Send mailbox message time out");
+ err = -ETIMEDOUT;
+ goto send_err;
+ }
+
if (mod != mbox_for_resp->mod || cmd != mbox_for_resp->cmd) {
- PMD_DRV_LOG(ERR, "Invalid response mbox message, mod: 0x%x, cmd: 0x%x, expect mod: 0x%x, cmd: 0x%x, timeout0x%x\n",
- mbox_for_resp->mod, mbox_for_resp->cmd, mod, cmd, timeout);
+ PMD_DRV_LOG(ERR, "Invalid response mbox message, mod: 0x%x, cmd: 0x%x, expect mod: 0x%x, cmd: 0x%x\n",
+ mbox_for_resp->mod, mbox_for_resp->cmd, mod, cmd);
err = -EFAULT;
goto send_err;
}
--
2.27.0
* [PATCH v1 05/25] net/spnic: add mgmt module
2021-12-18 2:51 [PATCH v1 00/25] Net/SPNIC: support SPNIC into DPDK 22.03 Yanling Song
` (3 preceding siblings ...)
2021-12-18 2:51 ` [PATCH v1 04/25] net/spnic: introduce event queue Yanling Song
@ 2021-12-18 2:51 ` Yanling Song
2021-12-18 2:51 ` [PATCH v1 06/25] net/spnic: add cmdq and work queue Yanling Song
` (19 subsequent siblings)
24 siblings, 0 replies; 32+ messages in thread
From: Yanling Song @ 2021-12-18 2:51 UTC (permalink / raw)
To: dev; +Cc: songyl, yanling.song, yanggan, ferruh.yigit
The mgmt module manages the messages generated by the hardware.
This patch implements the mgmt module initialization, the related
event processing and the message command definitions.
Signed-off-by: Yanling Song <songyl@ramaxel.com>
---
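A small standalone sketch of the {cmd, handler} table dispatch pattern that
pf_handle_mgmt_comm_event() in this patch relies on. It is not part of the
patch itself; the command id and the handler body below are made up purely
for illustration.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

typedef void (*event_cb)(const void *buf, uint16_t len);

struct event_entry {
	uint16_t cmd;
	event_cb proc;
};

/* Hypothetical handler: a real handler would parse the event payload */
static void fault_report(const void *buf, uint16_t len)
{
	(void)buf;
	printf("fault report received, %u bytes\n", len);
}

/* Table scanned for every incoming mgmt command */
static const struct event_entry event_tbl[] = {
	{ .cmd = 100, .proc = fault_report },	/* made-up command id */
};

static void dispatch(uint16_t cmd, const void *buf, uint16_t len)
{
	size_t i;

	for (i = 0; i < sizeof(event_tbl) / sizeof(event_tbl[0]); i++) {
		if (event_tbl[i].cmd == cmd) {
			event_tbl[i].proc(buf, len);
			return;
		}
	}
	printf("unsupported mgmt event %u\n", cmd);
}

int main(void)
{
	dispatch(100, "x", 1);	/* matches the table entry */
	dispatch(7, "x", 1);	/* falls through to the warning */
	return 0;
}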
drivers/net/spnic/base/meson.build | 4 +-
drivers/net/spnic/base/spnic_cmd.h | 222 ++++++++++++++
drivers/net/spnic/base/spnic_eqs.c | 46 ++-
drivers/net/spnic/base/spnic_hwdev.c | 91 +++++-
drivers/net/spnic/base/spnic_hwdev.h | 66 +++-
drivers/net/spnic/base/spnic_mbox.c | 17 ++
drivers/net/spnic/base/spnic_mgmt.c | 367 +++++++++++++++++++++++
drivers/net/spnic/base/spnic_mgmt.h | 74 +++++
drivers/net/spnic/base/spnic_nic_event.c | 171 +++++++++++
drivers/net/spnic/base/spnic_nic_event.h | 34 +++
10 files changed, 1081 insertions(+), 11 deletions(-)
create mode 100644 drivers/net/spnic/base/spnic_cmd.h
create mode 100644 drivers/net/spnic/base/spnic_mgmt.c
create mode 100644 drivers/net/spnic/base/spnic_nic_event.c
create mode 100644 drivers/net/spnic/base/spnic_nic_event.h
diff --git a/drivers/net/spnic/base/meson.build b/drivers/net/spnic/base/meson.build
index ce7457f400..3f6a060b37 100644
--- a/drivers/net/spnic/base/meson.build
+++ b/drivers/net/spnic/base/meson.build
@@ -5,7 +5,9 @@ sources = [
'spnic_eqs.c',
'spnic_hwdev.c',
'spnic_hwif.c',
- 'spnic_mbox.c'
+ 'spnic_mbox.c',
+ 'spnic_mgmt.c',
+ 'spnic_nic_event.c'
]
extra_flags = []
diff --git a/drivers/net/spnic/base/spnic_cmd.h b/drivers/net/spnic/base/spnic_cmd.h
new file mode 100644
index 0000000000..8900489eef
--- /dev/null
+++ b/drivers/net/spnic/base/spnic_cmd.h
@@ -0,0 +1,222 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Ramaxel Memory Technology, Ltd
+ */
+
+#ifndef _SPNIC_CMD_H_
+#define _SPNIC_CMD_H_
+
+#define NIC_RSS_TEMP_ID_TO_CTX_LT_IDX(tmp_id) tmp_id
+/* Begin of one temp tbl */
+#define NIC_RSS_TEMP_ID_TO_INDIR_LT_IDX(tmp_id) ((tmp_id) << 4)
+/* 4 ctx in one entry */
+#define NIC_RSS_CTX_TBL_ENTRY_SIZE 0x10
+/* Entry size = 16B, 16 entry/template */
+#define NIC_RSS_INDIR_TBL_ENTRY_SIZE 0x10
+/* Entry size = 16B, so entry_num = 256B/16B */
+#define NIC_RSS_INDIR_TBL_ENTRY_NUM 0x10
+
+#define NIC_UP_RSS_INVALID_TEMP_ID 0xFF
+#define NIC_UP_RSS_INVALID_FUNC_ID 0xFFFF
+#define NIC_UP_RSS_INVALID 0x00
+#define NIC_UP_RSS_EN 0x01
+#define NIC_UP_RSS_INVALID_GROUP_ID 0x7F
+
+#define NIC_RSS_CMD_TEMP_ALLOC 0x01
+#define NIC_RSS_CMD_TEMP_FREE 0x02
+
+#define SPNIC_RSS_TYPE_VALID_SHIFT 23
+#define SPNIC_RSS_TYPE_TCP_IPV6_EXT_SHIFT 24
+#define SPNIC_RSS_TYPE_IPV6_EXT_SHIFT 25
+#define SPNIC_RSS_TYPE_TCP_IPV6_SHIFT 26
+#define SPNIC_RSS_TYPE_IPV6_SHIFT 27
+#define SPNIC_RSS_TYPE_TCP_IPV4_SHIFT 28
+#define SPNIC_RSS_TYPE_IPV4_SHIFT 29
+#define SPNIC_RSS_TYPE_UDP_IPV6_SHIFT 30
+#define SPNIC_RSS_TYPE_UDP_IPV4_SHIFT 31
+#define SPNIC_RSS_TYPE_SET(val, member) \
+ (((u32)(val) & 0x1) << SPNIC_RSS_TYPE_##member##_SHIFT)
+
+#define SPNIC_RSS_TYPE_GET(val, member) \
+ (((u32)(val) >> SPNIC_RSS_TYPE_##member##_SHIFT) & 0x1)
+
+/* NIC CMDQ MODE */
+typedef enum spnic_ucode_cmd {
+ SPNIC_UCODE_CMD_MODIFY_QUEUE_CTX = 0,
+ SPNIC_UCODE_CMD_CLEAN_QUEUE_CONTEXT,
+ SPNIC_UCODE_CMD_ARM_SQ,
+ SPNIC_UCODE_CMD_ARM_RQ,
+ SPNIC_UCODE_CMD_SET_RSS_INDIR_TABLE,
+ SPNIC_UCODE_CMD_SET_RSS_CONTEXT_TABLE,
+ SPNIC_UCODE_CMD_GET_RSS_INDIR_TABLE,
+ SPNIC_UCODE_CMD_GET_RSS_CONTEXT_TABLE,
+ SPNIC_UCODE_CMD_SET_IQ_ENABLE,
+ SPNIC_UCODE_CMD_SET_RQ_FLUSH = 10,
+ SPNIC_UCODE_CMD_MODIFY_VLAN_CTX,
+ SPNIC_UCODE_CMD_DPI_FLOW
+} cmdq_nic_subtype_e;
+
+/*
+ * Commands between NIC and MPU
+ */
+enum spnic_cmd {
+ SPNIC_CMD_VF_REGISTER = 0, /* only for PFD and VFD */
+
+ /* FUNC CFG */
+ SPNIC_CMD_SET_FUNC_TBL = 5,
+ SPNIC_CMD_SET_VPORT_ENABLE,
+ SPNIC_CMD_SET_RX_MODE,
+ SPNIC_CMD_SQ_CI_ATTR_SET,
+ SPNIC_CMD_GET_VPORT_STAT,
+ SPNIC_CMD_CLEAN_VPORT_STAT,
+ SPNIC_CMD_CLEAR_QP_RESOURCE,
+ SPNIC_CMD_CFG_FLEX_QUEUE,
+ /* LRO CFG */
+ SPNIC_CMD_CFG_RX_LRO,
+ SPNIC_CMD_CFG_LRO_TIMER,
+ SPNIC_CMD_FEATURE_NEGO,
+
+ /* MAC & VLAN CFG */
+ SPNIC_CMD_GET_MAC = 20,
+ SPNIC_CMD_SET_MAC,
+ SPNIC_CMD_DEL_MAC,
+ SPNIC_CMD_UPDATE_MAC,
+ SPNIC_CMD_GET_ALL_DEFAULT_MAC,
+
+ SPNIC_CMD_CFG_FUNC_VLAN,
+ SPNIC_CMD_SET_VLAN_FILTER_EN,
+ SPNIC_CMD_SET_RX_VLAN_OFFLOAD,
+
+ /* SR-IOV */
+ SPNIC_CMD_CFG_VF_VLAN = 40,
+ SPNIC_CMD_SET_SPOOPCHK_STATE,
+ /* RATE LIMIT */
+ SPNIC_CMD_SET_MAX_MIN_RATE,
+
+ /* RSS CFG */
+ SPNIC_CMD_RSS_CFG = 60,
+ SPNIC_CMD_RSS_TEMP_MGR,
+ SPNIC_CMD_GET_RSS_CTX_TBL,
+ SPNIC_CMD_CFG_RSS_HASH_KEY,
+ SPNIC_CMD_CFG_RSS_HASH_ENGINE,
+ SPNIC_CMD_GET_INDIR_TBL,
+
+ /* DPI/FDIR */
+ SPNIC_CMD_ADD_TC_FLOW = 80,
+ SPNIC_CMD_DEL_TC_FLOW,
+ SPNIC_CMD_GET_TC_FLOW,
+ SPNIC_CMD_FLUSH_TCAM,
+ SPNIC_CMD_CFG_TCAM_BLOCK,
+ SPNIC_CMD_ENABLE_TCAM,
+ SPNIC_CMD_GET_TCAM_BLOCK,
+
+ /* PORT CFG */
+ SPNIC_CMD_SET_PORT_ENABLE = 100,
+ SPNIC_CMD_CFG_PAUSE_INFO,
+
+ SPNIC_CMD_SET_PORT_CAR,
+ SPNIC_CMD_SET_ER_DROP_PKT,
+
+ SPNIC_CMD_VF_COS,
+ SPNIC_CMD_SETUP_COS_MAPPING,
+ SPNIC_CMD_SET_ETS,
+ SPNIC_CMD_SET_PFC,
+
+ /* MISC */
+ SPNIC_CMD_BIOS_CFG = 120,
+ SPNIC_CMD_SET_FIRMWARE_CUSTOM_PACKETS_MSG,
+
+ /* DFX */
+ SPNIC_CMD_GET_SM_TABLE = 140,
+ SPNIC_CMD_RD_LINE_TBL,
+
+ SPNIC_CMD_MAX = 256
+};
+
+/* COMM commands between driver to MPU */
+enum spnic_mgmt_cmd {
+ MGMT_CMD_FUNC_RESET = 0,
+ MGMT_CMD_FEATURE_NEGO,
+ MGMT_CMD_FLUSH_DOORBELL,
+ MGMT_CMD_START_FLUSH,
+ MGMT_CMD_SET_FUNC_FLR,
+
+ MGMT_CMD_SET_CMDQ_CTXT = 20,
+ MGMT_CMD_SET_VAT,
+ MGMT_CMD_CFG_PAGESIZE,
+ MGMT_CMD_CFG_MSIX_CTRL_REG,
+ MGMT_CMD_SET_CEQ_CTRL_REG,
+ MGMT_CMD_SET_DMA_ATTR,
+
+ MGMT_CMD_GET_MQM_FIX_INFO = 40,
+ MGMT_CMD_SET_MQM_CFG_INFO,
+ MGMT_CMD_SET_MQM_SRCH_GPA,
+ MGMT_CMD_SET_PPF_TMR,
+ MGMT_CMD_SET_PPF_HT_GPA,
+ MGMT_CMD_SET_FUNC_TMR_BITMAT,
+
+ MGMT_CMD_GET_FW_VERSION = 60,
+ MGMT_CMD_GET_BOARD_INFO,
+ MGMT_CMD_SYNC_TIME,
+ MGMT_CMD_GET_HW_PF_INFOS,
+ MGMT_CMD_SEND_BDF_INFO,
+
+ MGMT_CMD_UPDATE_FW = 80,
+ MGMT_CMD_ACTIVE_FW,
+ MGMT_CMD_HOT_ACTIVE_FW,
+ MGMT_CMD_HOT_ACTIVE_DONE_NOTICE,
+ MGMT_CMD_SWITCH_CFG,
+ MGMT_CMD_CHECK_FLASH,
+ MGMT_CMD_CHECK_FLASH_RW,
+ MGMT_CMD_RESOURCE_CFG,
+ MGMT_CMD_UPDATE_BIOS,
+
+ MGMT_CMD_FAULT_REPORT = 100,
+ MGMT_CMD_WATCHDOG_INFO,
+ MGMT_CMD_MGMT_RESET,
+ MGMT_CMD_FFM_SET,
+
+ MGMT_CMD_GET_LOG = 120,
+ MGMT_CMD_TEMP_OP,
+ MGMT_CMD_EN_AUTO_RST_CHIP,
+ MGMT_CMD_CFG_REG,
+ MGMT_CMD_GET_CHIP_ID,
+ MGMT_CMD_SYSINFO_DFX,
+ MGMT_CMD_PCIE_DFX_NTC,
+};
+
+enum mag_cmd {
+ SERDES_CMD_PROCESS = 0,
+
+ MAG_CMD_SET_PORT_CFG = 1,
+ MAG_CMD_SET_PORT_ADAPT = 2,
+ MAG_CMD_CFG_LOOPBACK_MODE = 3,
+
+ MAG_CMD_GET_PORT_ENABLE = 5,
+ MAG_CMD_SET_PORT_ENABLE = 6,
+ MAG_CMD_GET_LINK_STATUS = 7,
+ MAG_CMD_SET_LINK_FOLLOW = 8,
+ MAG_CMD_SET_PMA_ENABLE = 9,
+ MAG_CMD_CFG_FEC_MODE = 10,
+
+ /* PHY */
+ MAG_CMD_GET_XSFP_INFO = 60,
+ MAG_CMD_SET_XSFP_ENABLE = 61,
+ MAG_CMD_GET_XSFP_PRESENT = 62,
+ MAG_CMD_SET_XSFP_RW = 63,
+ MAG_CMD_CFG_XSFP_TEMPERATURE = 64,
+
+ MAG_CMD_WIRE_EVENT = 100,
+ MAG_CMD_LINK_ERR_EVENT = 101,
+
+ MAG_CMD_EVENT_PORT_INFO = 150,
+ MAG_CMD_GET_PORT_STAT = 151,
+ MAG_CMD_CLR_PORT_STAT = 152,
+ MAG_CMD_GET_PORT_INFO = 153,
+ MAG_CMD_GET_PCS_ERR_CNT = 154,
+ MAG_CMD_GET_MAG_CNT = 155,
+ MAG_CMD_DUMP_ANTRAIN_INFO = 156,
+
+ MAG_CMD_MAX = 0xFF
+};
+
+#endif /* _SPNIC_CMD_H_ */
diff --git a/drivers/net/spnic/base/spnic_eqs.c b/drivers/net/spnic/base/spnic_eqs.c
index 7953976441..ee52252ecc 100644
--- a/drivers/net/spnic/base/spnic_eqs.c
+++ b/drivers/net/spnic/base/spnic_eqs.c
@@ -12,6 +12,7 @@
#include "spnic_eqs.h"
#include "spnic_mgmt.h"
#include "spnic_mbox.h"
+#include "spnic_nic_event.h"
#define AEQ_CTRL_0_INTR_IDX_SHIFT 0
#define AEQ_CTRL_0_DMA_ATTR_SHIFT 12
@@ -510,6 +511,39 @@ void spnic_aeqs_free(struct spnic_hwdev *hwdev)
rte_free(aeqs);
}
+void spnic_dump_aeq_info(struct spnic_hwdev *hwdev)
+{
+ struct spnic_aeq_elem *aeqe_pos = NULL;
+ struct spnic_eq *eq = NULL;
+ u32 addr, ci, pi, ctrl0, idx;
+ int q_id;
+
+ for (q_id = 0; q_id < hwdev->aeqs->num_aeqs; q_id++) {
+ eq = &hwdev->aeqs->aeq[q_id];
+ /* Indirect access should set q_id first */
+ spnic_hwif_write_reg(eq->hwdev->hwif, SPNIC_AEQ_INDIR_IDX_ADDR,
+ eq->q_id);
+ rte_wmb(); /* Write index before config */
+
+ addr = SPNIC_CSR_AEQ_CTRL_0_ADDR;
+
+ ctrl0 = spnic_hwif_read_reg(hwdev->hwif, addr);
+
+ idx = spnic_hwif_read_reg(hwdev->hwif,
+ SPNIC_AEQ_INDIR_IDX_ADDR);
+
+ addr = SPNIC_CSR_AEQ_CONS_IDX_ADDR;
+ ci = spnic_hwif_read_reg(hwdev->hwif, addr);
+ addr = SPNIC_CSR_AEQ_PROD_IDX_ADDR;
+ pi = spnic_hwif_read_reg(hwdev->hwif, addr);
+ aeqe_pos = GET_CURR_AEQ_ELEM(eq);
+ PMD_DRV_LOG(ERR, "Aeq id: %d, idx: %u, ctrl0: 0x%08x, wrap: %d,"
+ " pi: 0x%x, ci: 0x%08x, desc: 0x%x", q_id, idx,
+ ctrl0, eq->wrapped, pi, ci,
+ be32_to_cpu(aeqe_pos->desc));
+ }
+}
+
static int aeq_elem_handler(struct spnic_eq *eq, u32 aeqe_desc,
struct spnic_aeq_elem *aeqe_pos, void *param)
{
@@ -518,12 +552,22 @@ static int aeq_elem_handler(struct spnic_eq *eq, u32 aeqe_desc,
u8 size;
event = EQ_ELEM_DESC_GET(aeqe_desc, TYPE);
+ if (EQ_ELEM_DESC_GET(aeqe_desc, SRC)) {
+ /* SW event uses only the first 8B */
+ memcpy(data, aeqe_pos->aeqe_data, SPNIC_AEQE_DATA_SIZE);
+ spnic_be32_to_cpu(data, SPNIC_AEQE_DATA_SIZE);
+ /* Just support SPNIC_STATELESS_EVENT */
+ return spnic_nic_sw_aeqe_handler(eq->hwdev, event, data);
+ }
memcpy(data, aeqe_pos->aeqe_data, SPNIC_AEQE_DATA_SIZE);
spnic_be32_to_cpu(data, SPNIC_AEQE_DATA_SIZE);
size = EQ_ELEM_DESC_GET(aeqe_desc, SIZE);
- if (event == SPNIC_MBX_FROM_FUNC) {
+ if (event == SPNIC_MSG_FROM_MGMT_CPU) {
+ return spnic_mgmt_msg_aeqe_handler(eq->hwdev, data, size,
+ param);
+ } else if (event == SPNIC_MBX_FROM_FUNC) {
return spnic_mbox_func_aeqe_handler(eq->hwdev, data, size,
param);
} else {
diff --git a/drivers/net/spnic/base/spnic_hwdev.c b/drivers/net/spnic/base/spnic_hwdev.c
index e45058423c..2b5154f8a4 100644
--- a/drivers/net/spnic/base/spnic_hwdev.c
+++ b/drivers/net/spnic/base/spnic_hwdev.c
@@ -7,6 +7,7 @@
#include "spnic_hwif.h"
#include "spnic_eqs.h"
#include "spnic_mgmt.h"
+#include "spnic_cmd.h"
#include "spnic_mbox.h"
#include "spnic_hwdev.h"
@@ -18,7 +19,62 @@ struct mgmt_event_handle {
mgmt_event_cb proc;
};
+int vf_handle_pf_comm_mbox(void *handle, __rte_unused void *pri_handle,
+ __rte_unused u16 cmd, __rte_unused void *buf_in,
+ __rte_unused u16 in_size, __rte_unused void *buf_out,
+ __rte_unused u16 *out_size)
+{
+ struct spnic_hwdev *hwdev = handle;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ PMD_DRV_LOG(WARNING, "Unsupported pf mbox event %d to process", cmd);
+
+ return 0;
+}
+
+static void fault_event_handler(__rte_unused void *hwdev,
+ __rte_unused void *buf_in,
+ __rte_unused u16 in_size,
+ __rte_unused void *buf_out,
+ __rte_unused u16 *out_size)
+{
+ PMD_DRV_LOG(WARNING, "Unsupported fault event handler");
+}
+
+static void ffm_event_msg_handler(__rte_unused void *hwdev,
+ void *buf_in, u16 in_size,
+ __rte_unused void *buf_out, u16 *out_size)
+{
+ struct ffm_intr_info *intr = NULL;
+
+ if (in_size != sizeof(*intr)) {
+ PMD_DRV_LOG(ERR, "Invalid fault event report, length: %d, should be %"PRIu64" ",
+ in_size, sizeof(*intr));
+ return;
+ }
+
+ intr = buf_in;
+
+ PMD_DRV_LOG(ERR, "node_id: 0x%x, err_type: 0x%x, err_level: %d, "
+ "err_csr_addr: 0x%08x, err_csr_value: 0x%08x",
+ intr->node_id, intr->err_type, intr->err_level,
+ intr->err_csr_addr, intr->err_csr_value);
+
+ *out_size = sizeof(*intr);
+}
+
const struct mgmt_event_handle mgmt_event_proc[] = {
+ {
+ .cmd = MGMT_CMD_FAULT_REPORT,
+ .proc = fault_event_handler,
+ },
+
+ {
+ .cmd = MGMT_CMD_FFM_SET,
+ .proc = ffm_event_msg_handler,
+ },
};
void pf_handle_mgmt_comm_event(void *handle, __rte_unused void *pri_handle,
@@ -44,21 +100,30 @@ void pf_handle_mgmt_comm_event(void *handle, __rte_unused void *pri_handle,
PMD_DRV_LOG(WARNING, "Unsupported mgmt cpu event %d to process", cmd);
}
-int vf_handle_pf_comm_mbox(void *handle, __rte_unused void *pri_handle,
- __rte_unused u16 cmd, __rte_unused void *buf_in,
- __rte_unused u16 in_size, __rte_unused void *buf_out,
- __rte_unused u16 *out_size)
+static int spnic_comm_pf_to_mgmt_init(struct spnic_hwdev *hwdev)
{
- struct spnic_hwdev *hwdev = handle;
+ int err;
- if (!hwdev)
- return -EINVAL;
+ /* VF does not support sending msgs to mgmt directly */
+ if (spnic_func_type(hwdev) == TYPE_VF)
+ return 0;
- PMD_DRV_LOG(WARNING, "Unsupported pf mbox event %d to process", cmd);
+ err = spnic_pf_to_mgmt_init(hwdev);
+ if (err)
+ return err;
return 0;
}
+static void spnic_comm_pf_to_mgmt_free(struct spnic_hwdev *hwdev)
+{
+ /* VF does not support sending msgs to mgmt directly */
+ if (spnic_func_type(hwdev) == TYPE_VF)
+ return;
+
+ spnic_pf_to_mgmt_free(hwdev);
+}
+
static int init_mgmt_channel(struct spnic_hwdev *hwdev)
{
int err;
@@ -69,6 +134,12 @@ static int init_mgmt_channel(struct spnic_hwdev *hwdev)
return err;
}
+ err = spnic_comm_pf_to_mgmt_init(hwdev);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Init mgmt channel failed");
+ goto msg_init_err;
+ }
+
err = spnic_func_to_func_init(hwdev);
if (err) {
PMD_DRV_LOG(ERR, "Init mailbox channel failed");
@@ -78,6 +149,9 @@ static int init_mgmt_channel(struct spnic_hwdev *hwdev)
return 0;
func_to_func_init_err:
+ spnic_comm_pf_to_mgmt_free(hwdev);
+
+msg_init_err:
spnic_aeqs_free(hwdev);
return err;
@@ -86,6 +160,7 @@ static int init_mgmt_channel(struct spnic_hwdev *hwdev)
static void free_mgmt_channel(struct spnic_hwdev *hwdev)
{
spnic_func_to_func_free(hwdev);
+ spnic_comm_pf_to_mgmt_free(hwdev);
spnic_aeqs_free(hwdev);
}
diff --git a/drivers/net/spnic/base/spnic_hwdev.h b/drivers/net/spnic/base/spnic_hwdev.h
index a0691eed2e..4e77d776ee 100644
--- a/drivers/net/spnic/base/spnic_hwdev.h
+++ b/drivers/net/spnic/base/spnic_hwdev.h
@@ -7,7 +7,6 @@
#include <rte_ether.h>
-#define SPNIC_CHIP_FAULT_SIZE (110 * 1024)
struct cfg_mgmt_info;
struct spnic_hwif;
struct spnic_aeqs;
@@ -24,6 +23,70 @@ struct ffm_intr_info {
u32 err_csr_value;
};
+struct link_event_stats {
+ u32 link_down_stats;
+ u32 link_up_stats;
+};
+
+enum spnic_fault_err_level {
+ FAULT_LEVEL_FATAL,
+ FAULT_LEVEL_SERIOUS_RESET,
+ FAULT_LEVEL_SERIOUS_FLR,
+ FAULT_LEVEL_GENERAL,
+ FAULT_LEVEL_SUGGESTION,
+ FAULT_LEVEL_MAX
+};
+
+enum spnic_fault_type {
+ FAULT_TYPE_CHIP,
+ FAULT_TYPE_UCODE,
+ FAULT_TYPE_MEM_RD_TIMEOUT,
+ FAULT_TYPE_MEM_WR_TIMEOUT,
+ FAULT_TYPE_REG_RD_TIMEOUT,
+ FAULT_TYPE_REG_WR_TIMEOUT,
+ FAULT_TYPE_PHY_FAULT,
+ FAULT_TYPE_MAX
+};
+
+struct fault_event_stats {
+ rte_atomic32_t chip_fault_stats[22][FAULT_LEVEL_MAX];
+ rte_atomic32_t fault_type_stat[FAULT_TYPE_MAX];
+ rte_atomic32_t pcie_fault_stats;
+};
+
+struct spnic_hw_stats {
+ rte_atomic32_t heart_lost_stats;
+ struct link_event_stats link_event_stats;
+ struct fault_event_stats fault_event_stats;
+};
+
+#define SPNIC_CHIP_FAULT_SIZE (110 * 1024)
+#define MAX_DRV_BUF_SIZE 4096
+
+struct nic_cmd_chip_fault_stats {
+ u32 offset;
+ u8 chip_fault_stats[MAX_DRV_BUF_SIZE];
+};
+
+struct spnic_board_info {
+ u8 board_type;
+ u8 port_num;
+ u8 port_speed;
+ u8 pcie_width;
+ u8 host_num;
+ u8 pf_num;
+ u16 vf_total_num;
+ u8 tile_num;
+ u8 qcm_num;
+ u8 core_num;
+ u8 work_mode;
+ u8 service_mode;
+ u8 pcie_mode;
+ u8 boot_sel;
+ u8 board_id;
+ u32 cfg_addr;
+};
+
struct spnic_hwdev {
void *dev_handle; /* Pointer to spnic_nic_dev */
void *pci_dev; /* Pointer to rte_pci_device */
@@ -37,6 +100,7 @@ struct spnic_hwdev {
struct spnic_aeqs *aeqs;
struct spnic_msg_pf_to_mgmt *pf_to_mgmt;
u8 *chip_fault_stats;
+ struct spnic_hw_stats hw_stats;
u16 max_vfs;
u16 link_status;
diff --git a/drivers/net/spnic/base/spnic_mbox.c b/drivers/net/spnic/base/spnic_mbox.c
index 1677bd7404..5c39e307be 100644
--- a/drivers/net/spnic/base/spnic_mbox.c
+++ b/drivers/net/spnic/base/spnic_mbox.c
@@ -6,10 +6,12 @@
#include "spnic_compat.h"
#include "spnic_hwdev.h"
#include "spnic_csr.h"
+#include "spnic_cmd.h"
#include "spnic_mgmt.h"
#include "spnic_hwif.h"
#include "spnic_eqs.h"
#include "spnic_mbox.h"
+#include "spnic_nic_event.h"
#define SPNIC_MBOX_INT_DST_FUNC_SHIFT 0
#define SPNIC_MBOX_INT_DST_AEQN_SHIFT 10
@@ -148,6 +150,21 @@ static int recv_vf_mbox_handler(struct spnic_mbox *func_to_func,
recv_mbox->mbox_len,
buf_out, out_size);
break;
+ case SPNIC_MOD_L2NIC:
+ err = spnic_vf_event_handler(func_to_func->hwdev,
+ func_to_func->hwdev->cfg_mgmt,
+ recv_mbox->cmd, recv_mbox->mbox,
+ recv_mbox->mbox_len,
+ buf_out, out_size);
+ break;
+ case SPNIC_MOD_HILINK:
+ err = spnic_vf_mag_event_handler(func_to_func->hwdev,
+ func_to_func->hwdev->cfg_mgmt,
+ recv_mbox->cmd,
+ recv_mbox->mbox,
+ recv_mbox->mbox_len,
+ buf_out, out_size);
+ break;
default:
PMD_DRV_LOG(ERR, "No handler, mod: %d", recv_mbox->mod);
err = SPNIC_MBOX_VF_CMD_ERROR;
diff --git a/drivers/net/spnic/base/spnic_mgmt.c b/drivers/net/spnic/base/spnic_mgmt.c
new file mode 100644
index 0000000000..e202780411
--- /dev/null
+++ b/drivers/net/spnic/base/spnic_mgmt.c
@@ -0,0 +1,367 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Ramaxel Memory Technology, Ltd
+ */
+
+#include <rte_ethdev.h>
+
+#include "spnic_compat.h"
+#include "spnic_hwdev.h"
+#include "spnic_eqs.h"
+#include "spnic_mgmt.h"
+#include "spnic_mbox.h"
+#include "spnic_nic_event.h"
+
+#define SPNIC_MSG_TO_MGMT_MAX_LEN 2016
+
+#define MAX_PF_MGMT_BUF_SIZE 2048UL
+#define SEGMENT_LEN 48
+#define ASYNC_MSG_FLAG 0x20
+#define MGMT_MSG_MAX_SEQ_ID (RTE_ALIGN(SPNIC_MSG_TO_MGMT_MAX_LEN, \
+ SEGMENT_LEN) / SEGMENT_LEN)
+
+#define BUF_OUT_DEFAULT_SIZE 1
+
+#define MGMT_MSG_SIZE_MIN 20
+#define MGMT_MSG_SIZE_STEP 16
+#define MGMT_MSG_RSVD_FOR_DEV 8
+
+#define SYNC_MSG_ID_MASK 0x1F
+#define ASYNC_MSG_ID_MASK 0x1F
+
+#define SYNC_FLAG 0
+#define ASYNC_FLAG 1
+
+#define MSG_NO_RESP 0xFFFF
+
+#define MGMT_MSG_TIMEOUT 300000 /* Millisecond */
+
+int spnic_msg_to_mgmt_sync(void *hwdev, enum spnic_mod_type mod, u16 cmd,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size, u32 timeout)
+{
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ err = spnic_send_mbox_to_mgmt(hwdev, mod, cmd, buf_in, in_size,
+ buf_out, out_size, timeout);
+ return err;
+}
+
+static void send_mgmt_ack(struct spnic_msg_pf_to_mgmt *pf_to_mgmt,
+ enum spnic_mod_type mod, u16 cmd, void *buf_in,
+ u16 in_size, u16 msg_id)
+{
+ u16 buf_size;
+
+ if (!in_size)
+ buf_size = BUF_OUT_DEFAULT_SIZE;
+ else
+ buf_size = in_size;
+
+ spnic_response_mbox_to_mgmt(pf_to_mgmt->hwdev, mod, cmd, buf_in,
+ buf_size, msg_id);
+}
+
+static bool check_mgmt_seq_id_and_seg_len(struct spnic_recv_msg *recv_msg,
+ u8 seq_id, u8 seg_len, u16 msg_id)
+{
+ if (seq_id > MGMT_MSG_MAX_SEQ_ID || seg_len > SEGMENT_LEN)
+ return false;
+
+ if (seq_id == 0) {
+ recv_msg->seq_id = seq_id;
+ recv_msg->msg_id = msg_id;
+ } else {
+ if ((seq_id != recv_msg->seq_id + 1) ||
+ msg_id != recv_msg->msg_id) {
+ recv_msg->seq_id = 0;
+ return false;
+ }
+
+ recv_msg->seq_id = seq_id;
+ }
+
+ return true;
+}
+
+static void spnic_mgmt_recv_msg_handler(struct spnic_msg_pf_to_mgmt *pf_to_mgmt,
+ struct spnic_recv_msg *recv_msg,
+ __rte_unused void *param)
+{
+ void *buf_out = pf_to_mgmt->mgmt_ack_buf;
+ bool ack_first = false;
+ u16 out_size = 0;
+
+ memset(buf_out, 0, MAX_PF_MGMT_BUF_SIZE);
+
+ switch (recv_msg->mod) {
+ case SPNIC_MOD_COMM:
+ pf_handle_mgmt_comm_event(pf_to_mgmt->hwdev, pf_to_mgmt,
+ recv_msg->cmd, recv_msg->msg,
+ recv_msg->msg_len,
+ buf_out, &out_size);
+ break;
+ case SPNIC_MOD_L2NIC:
+ spnic_pf_event_handler(pf_to_mgmt->hwdev, pf_to_mgmt,
+ recv_msg->cmd, recv_msg->msg,
+ recv_msg->msg_len, buf_out, &out_size);
+ break;
+ default:
+ PMD_DRV_LOG(ERR, "Not support mod, maybe need to response, mod: %d",
+ recv_msg->mod);
+ break;
+ }
+
+ if (!ack_first && !recv_msg->async_mgmt_to_pf)
+ /* Mgmt msg is not an async msg, so send the response */
+ send_mgmt_ack(pf_to_mgmt, recv_msg->mod, recv_msg->cmd, buf_out,
+ out_size, recv_msg->msg_id);
+}
+
+/**
+ * Handle a message from the mgmt cpu
+ *
+ * @param[in] pf_to_mgmt
+ * PF to mgmt channel
+ * @param[in] recv_msg
+ * Received message details
+ * @param[in] param
+ * Customized parameter (unused)
+ *
+ * @retval 0 : When the aeqe is a response message
+ * @retval -1 : Default result, for a wrong or non-final message.
+ */
+static int recv_mgmt_msg_handler(struct spnic_msg_pf_to_mgmt *pf_to_mgmt,
+ u8 *header, struct spnic_recv_msg *recv_msg,
+ void *param)
+{
+ u64 mbox_header = *((u64 *)header);
+ void *msg_body = header + sizeof(mbox_header);
+ u8 seq_id, seq_len;
+ u32 offset;
+ u8 front_id;
+ u16 msg_id;
+
+ /* Don't need to get anything from hw when cmd is async */
+ if (SPNIC_MSG_HEADER_GET(mbox_header, DIRECTION) == SPNIC_MSG_RESPONSE)
+ return 0;
+
+ seq_len = SPNIC_MSG_HEADER_GET(mbox_header, SEG_LEN);
+ seq_id = SPNIC_MSG_HEADER_GET(mbox_header, SEQID);
+ msg_id = SPNIC_MSG_HEADER_GET(mbox_header, MSG_ID);
+ front_id = recv_msg->seq_id;
+
+ if (!check_mgmt_seq_id_and_seg_len(recv_msg, seq_id, seq_len, msg_id)) {
+ PMD_DRV_LOG(ERR, "Mgmt msg sequence id and segment length check failed, "
+ "front seq_id: 0x%x, current seq_id: 0x%x, seg len: 0x%x "
+ "front msg_id: %d, cur msg_id: %d",
+ front_id, seq_id, seq_len,
+ recv_msg->msg_id, msg_id);
+ /* Set seq_id to invalid seq_id */
+ recv_msg->seq_id = MGMT_MSG_MAX_SEQ_ID;
+ return SPNIC_MSG_HANDLER_RES;
+ }
+
+ offset = seq_id * SEGMENT_LEN;
+ memcpy((u8 *)recv_msg->msg + offset, msg_body, seq_len);
+
+ if (!SPNIC_MSG_HEADER_GET(mbox_header, LAST))
+ return SPNIC_MSG_HANDLER_RES;
+
+ recv_msg->cmd = SPNIC_MSG_HEADER_GET(mbox_header, CMD);
+ recv_msg->mod = SPNIC_MSG_HEADER_GET(mbox_header, MODULE);
+ recv_msg->async_mgmt_to_pf = SPNIC_MSG_HEADER_GET(mbox_header, NO_ACK);
+ recv_msg->msg_len = SPNIC_MSG_HEADER_GET(mbox_header, MSG_LEN);
+ recv_msg->msg_id = SPNIC_MSG_HEADER_GET(mbox_header, MSG_ID);
+ recv_msg->seq_id = MGMT_MSG_MAX_SEQ_ID;
+
+ spnic_mgmt_recv_msg_handler(pf_to_mgmt, recv_msg, param);
+
+ return SPNIC_MSG_HANDLER_RES;
+}
+
+/**
+ * Handler for a mgmt message event
+ *
+ * @param[in] hwdev
+ * The pointer to the private hardware device object
+ * @param[in] header
+ * The header of the message
+ * @param[in] size
+ * Size (unused)
+ * @param[in] param
+ * Customized parameter
+ *
+ * @retval zero : When the aeqe is a response message
+ * @retval negative : For a wrong or non-final message.
+ */
+int spnic_mgmt_msg_aeqe_handler(void *hwdev, u8 *header, u8 size, void *param)
+{
+ struct spnic_hwdev *dev = (struct spnic_hwdev *)hwdev;
+ struct spnic_msg_pf_to_mgmt *pf_to_mgmt = NULL;
+ struct spnic_recv_msg *recv_msg = NULL;
+ bool is_send_dir = false;
+
+ if ((SPNIC_MSG_HEADER_GET(*(u64 *)header, SOURCE) ==
+ SPNIC_MSG_FROM_MBOX)) {
+ return spnic_mbox_func_aeqe_handler(hwdev, header, size, param);
+ }
+
+ pf_to_mgmt = dev->pf_to_mgmt;
+
+ is_send_dir = (SPNIC_MSG_HEADER_GET(*(u64 *)header, DIRECTION) ==
+ SPNIC_MSG_DIRECT_SEND) ? true : false;
+
+ recv_msg = is_send_dir ? &pf_to_mgmt->recv_msg_from_mgmt :
+ &pf_to_mgmt->recv_resp_msg_from_mgmt;
+
+ return recv_mgmt_msg_handler(pf_to_mgmt, header, recv_msg, param);
+}
+
+/**
+ * Allocate received message memory
+ *
+ * @param[in] recv_msg
+ * Pointer that will hold the allocated data
+ *
+ * @retval zero : Success
+ * @retval negative : Failure.
+ */
+static int alloc_recv_msg(struct spnic_recv_msg *recv_msg)
+{
+ recv_msg->seq_id = MGMT_MSG_MAX_SEQ_ID;
+
+ recv_msg->msg = rte_zmalloc("recv_msg", MAX_PF_MGMT_BUF_SIZE,
+ SPNIC_MEM_ALLOC_ALIGN_MIN);
+ if (!recv_msg->msg)
+ return -ENOMEM;
+
+ return 0;
+}
+
+/**
+ * Free received message memory
+ *
+ * @param[in] recv_msg
+ * Pointer holding the previously allocated data
+ */
+static void free_recv_msg(struct spnic_recv_msg *recv_msg)
+{
+ rte_free(recv_msg->msg);
+}
+
+/**
+ * Allocate all the message buffers of PF to mgmt channel
+ *
+ * @param[in] pf_to_mgmt
+ * PF to mgmt channel
+ *
+ * @retval zero : Success
+ * @retval negative : Failure.
+ */
+static int alloc_msg_buf(struct spnic_msg_pf_to_mgmt *pf_to_mgmt)
+{
+ int err;
+
+ err = alloc_recv_msg(&pf_to_mgmt->recv_msg_from_mgmt);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Allocate recv msg failed");
+ return err;
+ }
+
+ err = alloc_recv_msg(&pf_to_mgmt->recv_resp_msg_from_mgmt);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Allocate resp recv msg failed");
+ goto alloc_msg_for_resp_err;
+ }
+
+ pf_to_mgmt->mgmt_ack_buf = rte_zmalloc("mgmt_ack_buf",
+ MAX_PF_MGMT_BUF_SIZE,
+ SPNIC_MEM_ALLOC_ALIGN_MIN);
+ if (!pf_to_mgmt->mgmt_ack_buf) {
+ err = -ENOMEM;
+ goto ack_msg_buf_err;
+ }
+
+ return 0;
+
+ack_msg_buf_err:
+ free_recv_msg(&pf_to_mgmt->recv_resp_msg_from_mgmt);
+
+alloc_msg_for_resp_err:
+ free_recv_msg(&pf_to_mgmt->recv_msg_from_mgmt);
+ return err;
+}
+
+/**
+ * Free all the message buffers of PF to mgmt channel
+ *
+ * @param[in] pf_to_mgmt
+ * PF to mgmt channel
+ */
+static void free_msg_buf(struct spnic_msg_pf_to_mgmt *pf_to_mgmt)
+{
+ rte_free(pf_to_mgmt->mgmt_ack_buf);
+ free_recv_msg(&pf_to_mgmt->recv_resp_msg_from_mgmt);
+ free_recv_msg(&pf_to_mgmt->recv_msg_from_mgmt);
+}
+
+/**
+ * Initialize PF to mgmt channel
+ *
+ * @param[in] hwdev
+ * The pointer to the private hardware device object
+ *
+ * @retval zero : Success
+ * @retval negative : Failure.
+ */
+int spnic_pf_to_mgmt_init(struct spnic_hwdev *hwdev)
+{
+ struct spnic_msg_pf_to_mgmt *pf_to_mgmt;
+ int err;
+
+ pf_to_mgmt = rte_zmalloc("pf_to_mgmt", sizeof(*pf_to_mgmt),
+ SPNIC_MEM_ALLOC_ALIGN_MIN);
+ if (!pf_to_mgmt)
+ return -ENOMEM;
+
+ hwdev->pf_to_mgmt = pf_to_mgmt;
+ pf_to_mgmt->hwdev = hwdev;
+
+ err = spnic_mutex_init(&pf_to_mgmt->sync_msg_mutex, NULL);
+ if (err)
+ goto mutex_init_err;
+
+ err = alloc_msg_buf(pf_to_mgmt);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Allocate msg buffers failed");
+ goto alloc_msg_buf_err;
+ }
+
+ return 0;
+
+alloc_msg_buf_err:
+ spnic_mutex_destroy(&pf_to_mgmt->sync_msg_mutex);
+
+mutex_init_err:
+ rte_free(pf_to_mgmt);
+
+ return err;
+}
+
+/**
+ * Free PF to mgmt channel
+ *
+ * @param[in] hwdev
+ * The pointer to the private hardware device object
+ */
+void spnic_pf_to_mgmt_free(struct spnic_hwdev *hwdev)
+{
+ struct spnic_msg_pf_to_mgmt *pf_to_mgmt = hwdev->pf_to_mgmt;
+
+ free_msg_buf(pf_to_mgmt);
+ spnic_mutex_destroy(&pf_to_mgmt->sync_msg_mutex);
+ rte_free(pf_to_mgmt);
+}
diff --git a/drivers/net/spnic/base/spnic_mgmt.h b/drivers/net/spnic/base/spnic_mgmt.h
index 37d0410473..ca820828d2 100644
--- a/drivers/net/spnic/base/spnic_mgmt.h
+++ b/drivers/net/spnic/base/spnic_mgmt.h
@@ -7,6 +7,13 @@
#define SPNIC_MSG_HANDLER_RES (-1)
+/* Structures for l2nic and mag msg to mgmt sync interface */
+struct mgmt_msg_head {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+};
+
/* Cmdq module type */
enum spnic_mod_type {
SPNIC_MOD_COMM = 0, /* HW communication module */
@@ -33,4 +40,71 @@ enum spnic_mod_type {
SPNIC_MOD_MAX
};
+typedef enum {
+ RES_TYPE_FLUSH_BIT = 0,
+ RES_TYPE_MQM,
+ RES_TYPE_SMF,
+
+ RES_TYPE_COMM = 10,
+ /* clear mbox and aeq, The RES_TYPE_COMM bit must be set */
+ RES_TYPE_COMM_MGMT_CH,
+ /* clear cmdq, The RES_TYPE_COMM bit must be set */
+ RES_TYPE_COMM_CMD_CH,
+ RES_TYPE_NIC,
+ RES_TYPE_OVS,
+ RES_TYPE_MAX = 20,
+} func_reset_flag_e;
+
+#define SPNIC_COMM_RES ((1 << RES_TYPE_COMM) | \
+ (1 << RES_TYPE_FLUSH_BIT) | \
+ (1 << RES_TYPE_MQM) | \
+ (1 << RES_TYPE_SMF) | \
+ (1 << RES_TYPE_COMM_CMD_CH))
+#define SPNIC_NIC_RES (1 << RES_TYPE_NIC)
+#define SPNIC_OVS_RES (1 << RES_TYPE_OVS)
+
+struct spnic_recv_msg {
+ void *msg;
+
+ u16 msg_len;
+ enum spnic_mod_type mod;
+ u16 cmd;
+ u8 seq_id;
+ u16 msg_id;
+ int async_mgmt_to_pf;
+};
+
+enum comm_pf_to_mgmt_event_state {
+ SEND_EVENT_UNINIT = 0,
+ SEND_EVENT_START,
+ SEND_EVENT_SUCCESS,
+ SEND_EVENT_FAIL,
+ SEND_EVENT_TIMEOUT,
+ SEND_EVENT_END
+};
+
+struct spnic_msg_pf_to_mgmt {
+ struct spnic_hwdev *hwdev;
+
+ /* Mutex for sync message */
+ pthread_mutex_t sync_msg_mutex;
+
+ void *mgmt_ack_buf;
+
+ struct spnic_recv_msg recv_msg_from_mgmt;
+ struct spnic_recv_msg recv_resp_msg_from_mgmt;
+
+ u16 sync_msg_id;
+};
+
+int spnic_mgmt_msg_aeqe_handler(void *hwdev, u8 *header, u8 size, void *param);
+
+int spnic_pf_to_mgmt_init(struct spnic_hwdev *hwdev);
+
+void spnic_pf_to_mgmt_free(struct spnic_hwdev *hwdev);
+
+int spnic_msg_to_mgmt_sync(void *hwdev, enum spnic_mod_type mod, u16 cmd,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size, u32 timeout);
+
#endif /* _SPNIC_MGMT_H_ */
diff --git a/drivers/net/spnic/base/spnic_nic_event.c b/drivers/net/spnic/base/spnic_nic_event.c
new file mode 100644
index 0000000000..07ea036d84
--- /dev/null
+++ b/drivers/net/spnic/base/spnic_nic_event.c
@@ -0,0 +1,171 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Ramaxel Memory Technology, Ltd
+ */
+
+#include <ethdev_driver.h>
+
+#include "spnic_compat.h"
+#include "spnic_cmd.h"
+#include "spnic_hwif.h"
+#include "spnic_hwdev.h"
+#include "spnic_mgmt.h"
+#include "spnic_hwdev.h"
+#include "spnic_nic_event.h"
+
+static void get_port_info(u8 link_state, struct rte_eth_link *link)
+{
+ if (!link_state) {
+ link->link_status = ETH_LINK_DOWN;
+ link->link_speed = ETH_SPEED_NUM_NONE;
+ link->link_duplex = ETH_LINK_HALF_DUPLEX;
+ link->link_autoneg = ETH_LINK_FIXED;
+ }
+}
+
+static void spnic_link_event_stats(void *dev, u8 link)
+{
+ struct spnic_hwdev *hwdev = dev;
+ struct link_event_stats *stats = &hwdev->hw_stats.link_event_stats;
+
+ if (link)
+ __atomic_fetch_add(&stats->link_up_stats, 1, __ATOMIC_RELAXED);
+ else
+ __atomic_fetch_add(&stats->link_down_stats, 1, __ATOMIC_RELAXED);
+}
+
+static void link_status_event_handler(void *hwdev, void *buf_in,
+ __rte_unused u16 in_size,
+ __rte_unused void *buf_out,
+ __rte_unused u16 *out_size)
+{
+ struct spnic_cmd_link_state *link_status = NULL;
+ struct rte_eth_link link;
+ struct spnic_hwdev *dev = hwdev;
+ int err;
+
+ link_status = buf_in;
+ PMD_DRV_LOG(INFO, "Link status report received, func_id: %d, status: %d(%s)",
+ spnic_global_func_id(hwdev), link_status->state,
+ link_status->state ? "UP" : "DOWN");
+
+ spnic_link_event_stats(hwdev, link_status->state);
+
+ /* Link event reported only after set vport enable */
+ get_port_info(link_status->state, &link);
+ err = rte_eth_linkstatus_set((struct rte_eth_dev *)(dev->eth_dev),
+ &link);
+ if (!err)
+ rte_eth_dev_callback_process(dev->eth_dev,
+ RTE_ETH_EVENT_INTR_LSC, NULL);
+}
+
+struct nic_event_handler {
+ u16 cmd;
+ void (*handler)(void *hwdev, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size);
+};
+
+struct nic_event_handler nic_cmd_handler[] = {
+};
+
+static void nic_event_handler(void *hwdev, u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ u32 i, size = ARRAY_LEN(nic_cmd_handler);
+
+ if (!hwdev)
+ return;
+
+ *out_size = 0;
+
+ for (i = 0; i < size; i++) {
+ if (cmd == nic_cmd_handler[i].cmd) {
+ nic_cmd_handler[i].handler(hwdev, buf_in, in_size,
+ buf_out, out_size);
+ break;
+ }
+ }
+
+ if (i == size)
+ PMD_DRV_LOG(WARNING,
+ "Unsupported nic event cmd(%d) to process", cmd);
+}
+
+/*
+ * VF handles mbox msgs from the ppf/pf:
+ * VF link change event
+ * VF fault report event
+ */
+int spnic_vf_event_handler(void *hwdev, __rte_unused void *pri_handle,
+ u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ nic_event_handler(hwdev, cmd, buf_in, in_size, buf_out, out_size);
+ return 0;
+}
+
+/* PF/PPF handles NIC events reported by the mgmt cpu */
+void spnic_pf_event_handler(void *hwdev, __rte_unused void *pri_handle,
+ u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ nic_event_handler(hwdev, cmd, buf_in, in_size, buf_out, out_size);
+}
+
+static struct nic_event_handler mag_cmd_handler[] = {
+ {
+ .cmd = MAG_CMD_GET_LINK_STATUS,
+ .handler = link_status_event_handler,
+ },
+};
+
+static int spnic_mag_event_handler(void *hwdev, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out,
+ u16 *out_size)
+{
+ u32 size = ARRAY_LEN(mag_cmd_handler);
+ u32 i;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ *out_size = 0;
+ for (i = 0; i < size; i++) {
+ if (cmd == mag_cmd_handler[i].cmd) {
+ mag_cmd_handler[i].handler(hwdev, buf_in, in_size,
+ buf_out, out_size);
+ break;
+ }
+ }
+
+ /* can't find this event cmd */
+ if (i == size)
+ PMD_DRV_LOG(ERR, "Unsupported mag event, cmd: %u\n", cmd);
+
+ return 0;
+}
+
+int spnic_vf_mag_event_handler(void *hwdev, void *pri_handle, u16 cmd,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size)
+{
+ return spnic_mag_event_handler(hwdev, cmd, buf_in, in_size, buf_out,
+ out_size);
+}
+
+/* PF/PPF handles hilink events reported by the mgmt cpu */
+void spnic_pf_mag_event_handler(void *hwdev, void *pri_handle, u16 cmd,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size)
+{
+ spnic_mag_event_handler(hwdev, cmd, buf_in, in_size, buf_out, out_size);
+}
+
+u8 spnic_nic_sw_aeqe_handler(__rte_unused void *hwdev, u8 event, u8 *data)
+{
+ PMD_DRV_LOG(ERR,
+ "Received nic ucode aeq event type: 0x%x, data: %"PRIu64"",
+ event, *((u64 *)data));
+
+ return 0;
+}
diff --git a/drivers/net/spnic/base/spnic_nic_event.h b/drivers/net/spnic/base/spnic_nic_event.h
new file mode 100644
index 0000000000..eb41d76a7d
--- /dev/null
+++ b/drivers/net/spnic/base/spnic_nic_event.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Ramaxel Memory Technology, Ltd
+ */
+
+#ifndef _SPNIC_NIC_EVENT_H_
+#define _SPNIC_NIC_EVENT_H_
+
+struct spnic_cmd_link_state {
+ struct mgmt_msg_head msg_head;
+
+ u8 port_id;
+ u8 state;
+ u16 rsvd1;
+};
+
+void spnic_pf_event_handler(void *hwdev, __rte_unused void *pri_handle,
+ u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size);
+
+int spnic_vf_event_handler(void *hwdev, __rte_unused void *pri_handle,
+ u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size);
+
+void spnic_pf_mag_event_handler(void *hwdev, void *pri_handle, u16 cmd,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size);
+
+int spnic_vf_mag_event_handler(void *hwdev, void *pri_handle, u16 cmd,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size);
+
+u8 spnic_nic_sw_aeqe_handler(__rte_unused void *hwdev, u8 event, u8 *data);
+
+#endif /* _SPNIC_NIC_EVENT_H_ */
--
2.27.0
* [PATCH v1 06/25] net/spnic: add cmdq and work queue
2021-12-18 2:51 [PATCH v1 00/25] Net/SPNIC: support SPNIC into DPDK 22.03 Yanling Song
` (4 preceding siblings ...)
2021-12-18 2:51 ` [PATCH v1 05/25] net/spnic: add mgmt module Yanling Song
@ 2021-12-18 2:51 ` Yanling Song
2021-12-18 2:51 ` [PATCH v1 07/25] net/spnic: add interface handling cmdq message Yanling Song
` (18 subsequent siblings)
24 siblings, 0 replies; 32+ messages in thread
From: Yanling Song @ 2021-12-18 2:51 UTC (permalink / raw)
To: dev; +Cc: songyl, yanling.song, yanggan, ferruh.yigit
This commit introduces the cmdq and the work queue, which are used to
send bulk message data (up to 2KB) to the hardware. The cmdq provides a
mechanism to encapsulate the message to be sent and to handle the
response data or status. The work queue manages the WQEs, each of which
contains the message data buffer description, control info, header info
and the response message data buffer. This patch implements the
initialization and the data structures.
Signed-off-by: Yanling Song <songyl@ramaxel.com>
---
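A small standalone sketch of the wrapped/valid-bit ring idea that the cmdq
and work queue build on (cmdq->wrapped toggling when the producer index
wraps, so stale entries can be told apart from new ones). It is not part of
the patch; the depth, field names and exact bit convention below are made
up for illustration and may differ from the hardware format.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define Q_DEPTH 4	/* made-up depth */

struct entry {
	uint32_t data;
	bool wrapped;	/* phase bit written by the producer */
};

struct ring {
	struct entry q[Q_DEPTH];
	uint32_t pi, ci;	/* producer/consumer index */
	bool prod_wrapped, cons_wrapped;
};

static void ring_init(struct ring *r)
{
	/* Entries start with the opposite phase so they read as stale */
	*r = (struct ring){ .prod_wrapped = true, .cons_wrapped = true };
}

static void produce(struct ring *r, uint32_t data)
{
	r->q[r->pi].data = data;
	r->q[r->pi].wrapped = r->prod_wrapped;
	if (++r->pi == Q_DEPTH) {	/* toggle phase on wrap */
		r->pi = 0;
		r->prod_wrapped = !r->prod_wrapped;
	}
}

static bool consume(struct ring *r, uint32_t *data)
{
	/* An entry is new only if its phase matches the expected one */
	if (r->q[r->ci].wrapped != r->cons_wrapped)
		return false;
	*data = r->q[r->ci].data;
	if (++r->ci == Q_DEPTH) {
		r->ci = 0;
		r->cons_wrapped = !r->cons_wrapped;
	}
	return true;
}

int main(void)
{
	struct ring r;
	uint32_t v;

	ring_init(&r);
	produce(&r, 42);
	printf("%d %u\n", consume(&r, &v), v);	/* 1 42: new entry */
	printf("%d\n", consume(&r, &v));	/* 0: nothing new yet */
	return 0;
}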
drivers/net/spnic/base/meson.build | 4 +-
drivers/net/spnic/base/spnic_cmdq.c | 202 ++++++++++++++++++++++
drivers/net/spnic/base/spnic_cmdq.h | 228 +++++++++++++++++++++++++
drivers/net/spnic/base/spnic_hw_comm.c | 222 ++++++++++++++++++++++++
drivers/net/spnic/base/spnic_hw_comm.h | 176 +++++++++++++++++++
drivers/net/spnic/base/spnic_hwdev.c | 215 +++++++++++++++++++++++
drivers/net/spnic/base/spnic_hwdev.h | 8 +-
drivers/net/spnic/base/spnic_wq.h | 57 +++++++
8 files changed, 1109 insertions(+), 3 deletions(-)
create mode 100644 drivers/net/spnic/base/spnic_cmdq.c
create mode 100644 drivers/net/spnic/base/spnic_cmdq.h
create mode 100644 drivers/net/spnic/base/spnic_hw_comm.c
create mode 100644 drivers/net/spnic/base/spnic_hw_comm.h
create mode 100644 drivers/net/spnic/base/spnic_wq.h
diff --git a/drivers/net/spnic/base/meson.build b/drivers/net/spnic/base/meson.build
index 3f6a060b37..5e4efac7be 100644
--- a/drivers/net/spnic/base/meson.build
+++ b/drivers/net/spnic/base/meson.build
@@ -7,7 +7,9 @@ sources = [
'spnic_hwif.c',
'spnic_mbox.c',
'spnic_mgmt.c',
- 'spnic_nic_event.c'
+ 'spnic_nic_event.c',
+ 'spnic_cmdq.c',
+ 'spnic_hw_comm.c',
]
extra_flags = []
diff --git a/drivers/net/spnic/base/spnic_cmdq.c b/drivers/net/spnic/base/spnic_cmdq.c
new file mode 100644
index 0000000000..ccfcf739a0
--- /dev/null
+++ b/drivers/net/spnic/base/spnic_cmdq.c
@@ -0,0 +1,202 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Ramaxel Memory Technology, Ltd
+ */
+
+#include <rte_mbuf.h>
+
+#include "spnic_compat.h"
+#include "spnic_hwdev.h"
+#include "spnic_hwif.h"
+#include "spnic_wq.h"
+#include "spnic_cmd.h"
+#include "spnic_mgmt.h"
+#include "spnic_cmdq.h"
+
+#define CMDQ_CTXT_CURR_WQE_PAGE_PFN_SHIFT 0
+#define CMDQ_CTXT_EQ_ID_SHIFT 53
+#define CMDQ_CTXT_CEQ_ARM_SHIFT 61
+#define CMDQ_CTXT_CEQ_EN_SHIFT 62
+#define CMDQ_CTXT_HW_BUSY_BIT_SHIFT 63
+
+#define CMDQ_CTXT_CURR_WQE_PAGE_PFN_MASK 0xFFFFFFFFFFFFF
+#define CMDQ_CTXT_EQ_ID_MASK 0xFF
+#define CMDQ_CTXT_CEQ_ARM_MASK 0x1
+#define CMDQ_CTXT_CEQ_EN_MASK 0x1
+#define CMDQ_CTXT_HW_BUSY_BIT_MASK 0x1
+
+#define CMDQ_CTXT_PAGE_INFO_SET(val, member) \
+ (((u64)(val) & CMDQ_CTXT_##member##_MASK) << CMDQ_CTXT_##member##_SHIFT)
+
+#define CMDQ_CTXT_WQ_BLOCK_PFN_SHIFT 0
+#define CMDQ_CTXT_CI_SHIFT 52
+
+#define CMDQ_CTXT_WQ_BLOCK_PFN_MASK 0xFFFFFFFFFFFFF
+#define CMDQ_CTXT_CI_MASK 0xFFF
+
+#define CMDQ_CTXT_BLOCK_INFO_SET(val, member) \
+ (((u64)(val) & CMDQ_CTXT_##member##_MASK) << CMDQ_CTXT_##member##_SHIFT)
+
+#define WAIT_CMDQ_ENABLE_TIMEOUT 300
+
+static int init_cmdq(struct spnic_cmdq *cmdq, struct spnic_hwdev *hwdev,
+ struct spnic_wq *wq, enum spnic_cmdq_type q_type)
+{
+ void *db_base = NULL;
+ int err = 0;
+ size_t errcode_size;
+ size_t cmd_infos_size;
+
+ cmdq->wq = wq;
+ cmdq->cmdq_type = q_type;
+ cmdq->wrapped = 1;
+
+ rte_spinlock_init(&cmdq->cmdq_lock);
+
+ errcode_size = wq->q_depth * sizeof(*cmdq->errcode);
+ cmdq->errcode = rte_zmalloc(NULL, errcode_size, 0);
+ if (!cmdq->errcode) {
+ PMD_DRV_LOG(ERR, "Allocate errcode for cmdq failed");
+ return -ENOMEM;
+ }
+
+ cmd_infos_size = wq->q_depth * sizeof(*cmdq->cmd_infos);
+ cmdq->cmd_infos = rte_zmalloc(NULL, cmd_infos_size, 0);
+ if (!cmdq->cmd_infos) {
+ PMD_DRV_LOG(ERR, "Allocate cmd info for cmdq failed");
+ err = -ENOMEM;
+ goto cmd_infos_err;
+ }
+
+ err = spnic_alloc_db_addr(hwdev, &db_base, NULL);
+ if (err)
+ goto alloc_db_err;
+
+ cmdq->db_base = (u8 *)db_base;
+
+ return 0;
+
+alloc_db_err:
+ rte_free(cmdq->cmd_infos);
+
+cmd_infos_err:
+ rte_free(cmdq->errcode);
+
+ return err;
+}
+
+static void free_cmdq(struct spnic_hwdev *hwdev, struct spnic_cmdq *cmdq)
+{
+ spnic_free_db_addr(hwdev, cmdq->db_base, NULL);
+ rte_free(cmdq->cmd_infos);
+ rte_free(cmdq->errcode);
+}
+
+static int spnic_set_cmdq_ctxts(struct spnic_hwdev *hwdev)
+{
+ struct spnic_cmdqs *cmdqs = hwdev->cmdqs;
+ struct spnic_cmd_cmdq_ctxt cmdq_ctxt;
+ enum spnic_cmdq_type cmdq_type;
+ u16 out_size = sizeof(cmdq_ctxt);
+ int err;
+
+ cmdq_type = SPNIC_CMDQ_SYNC;
+ for (; cmdq_type < SPNIC_MAX_CMDQ_TYPES; cmdq_type++) {
+ memset(&cmdq_ctxt, 0, sizeof(cmdq_ctxt));
+ memcpy(&cmdq_ctxt.ctxt_info, &cmdqs->cmdq[cmdq_type].cmdq_ctxt,
+ sizeof(cmdq_ctxt.ctxt_info));
+ cmdq_ctxt.func_idx = spnic_global_func_id(hwdev);
+ cmdq_ctxt.cmdq_id = cmdq_type;
+
+ err = spnic_msg_to_mgmt_sync(hwdev, SPNIC_MOD_COMM,
+ MGMT_CMD_SET_CMDQ_CTXT,
+ &cmdq_ctxt, sizeof(cmdq_ctxt),
+ &cmdq_ctxt, &out_size, 0);
+ if (err || !out_size || cmdq_ctxt.status) {
+ PMD_DRV_LOG(ERR, "Set cmdq ctxt failed, err: %d, status: 0x%x, out_size: 0x%x",
+ err, cmdq_ctxt.status, out_size);
+ return -EFAULT;
+ }
+ }
+
+ cmdqs->status |= SPNIC_CMDQ_ENABLE;
+
+ return 0;
+}
+
+int spnic_reinit_cmdq_ctxts(struct spnic_hwdev *hwdev)
+{
+ return spnic_set_cmdq_ctxts(hwdev);
+}
+
+int spnic_cmdqs_init(struct spnic_hwdev *hwdev)
+{
+ struct spnic_cmdqs *cmdqs = NULL;
+ enum spnic_cmdq_type type, cmdq_type;
+ char cmdq_pool_name[RTE_MEMPOOL_NAMESIZE];
+ int err;
+
+ cmdqs = rte_zmalloc(NULL, sizeof(*cmdqs), 0);
+ if (!cmdqs)
+ return -ENOMEM;
+
+ hwdev->cmdqs = cmdqs;
+ cmdqs->hwdev = hwdev;
+
+ memset(cmdq_pool_name, 0, RTE_MEMPOOL_NAMESIZE);
+ snprintf(cmdq_pool_name, sizeof(cmdq_pool_name), "spnic_cmdq_%u",
+ hwdev->port_id);
+
+ cmdqs->cmd_buf_pool = rte_pktmbuf_pool_create(cmdq_pool_name,
+ SPNIC_CMDQ_DEPTH * SPNIC_MAX_CMDQ_TYPES,
+ 0, 0, SPNIC_CMDQ_BUF_SIZE, rte_socket_id());
+ if (!cmdqs->cmd_buf_pool) {
+ PMD_DRV_LOG(ERR, "Create cmdq buffer pool failed");
+ err = -ENOMEM;
+ goto pool_create_err;
+ }
+
+ cmdq_type = SPNIC_CMDQ_SYNC;
+ for (; cmdq_type < SPNIC_MAX_CMDQ_TYPES; cmdq_type++) {
+ err = init_cmdq(&cmdqs->cmdq[cmdq_type], hwdev,
+ &cmdqs->saved_wqs[cmdq_type], cmdq_type);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Initialize cmdq failed");
+ goto init_cmdq_err;
+ }
+ }
+
+ err = spnic_set_cmdq_ctxts(hwdev);
+ if (err)
+ goto init_cmdq_err;
+
+ return 0;
+
+init_cmdq_err:
+ type = SPNIC_CMDQ_SYNC;
+ for (; type < cmdq_type; type++)
+ free_cmdq(hwdev, &cmdqs->cmdq[type]);
+
+ rte_mempool_free(cmdqs->cmd_buf_pool);
+
+pool_create_err:
+ rte_free(cmdqs);
+
+ return err;
+}
+
+void spnic_cmdqs_free(struct spnic_hwdev *hwdev)
+{
+ struct spnic_cmdqs *cmdqs = hwdev->cmdqs;
+ enum spnic_cmdq_type cmdq_type = SPNIC_CMDQ_SYNC;
+
+ cmdqs->status &= ~SPNIC_CMDQ_ENABLE;
+
+ for (; cmdq_type < SPNIC_MAX_CMDQ_TYPES; cmdq_type++)
+ free_cmdq(cmdqs->hwdev, &cmdqs->cmdq[cmdq_type]);
+
+ rte_mempool_free(cmdqs->cmd_buf_pool);
+
+ rte_free(cmdqs->saved_wqs);
+
+ rte_free(cmdqs);
+}
diff --git a/drivers/net/spnic/base/spnic_cmdq.h b/drivers/net/spnic/base/spnic_cmdq.h
new file mode 100644
index 0000000000..71753be6e8
--- /dev/null
+++ b/drivers/net/spnic/base/spnic_cmdq.h
@@ -0,0 +1,228 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Ramaxel Memory Technology, Ltd
+ */
+
+#ifndef _SPNIC_CMDQ_H_
+#define _SPNIC_CMDQ_H_
+
+#define SPNIC_SCMD_DATA_LEN 16
+
+/* Pmd driver uses 64, kernel l2nic uses 4096 */
+#define SPNIC_CMDQ_DEPTH 64
+
+#define SPNIC_CMDQ_BUF_SIZE 2048U
+#define SPNIC_CMDQ_BUF_HW_RSVD 8
+#define SPNIC_CMDQ_MAX_DATA_SIZE (SPNIC_CMDQ_BUF_SIZE \
+ - SPNIC_CMDQ_BUF_HW_RSVD)
+
+#define SPNIC_CEQ_ID_CMDQ 0
+
+enum cmdq_scmd_type {
+ CMDQ_SET_ARM_CMD = 2,
+};
+
+enum cmdq_wqe_type {
+ WQE_LCMD_TYPE,
+ WQE_SCMD_TYPE
+};
+
+enum ctrl_sect_len {
+ CTRL_SECT_LEN = 1,
+ CTRL_DIRECT_SECT_LEN = 2
+};
+
+enum bufdesc_len {
+ BUFDESC_LCMD_LEN = 2,
+ BUFDESC_SCMD_LEN = 3
+};
+
+enum data_format {
+ DATA_SGE,
+};
+
+enum completion_format {
+ COMPLETE_DIRECT,
+ COMPLETE_SGE
+};
+
+enum completion_request {
+ CEQ_SET = 1,
+};
+
+enum cmdq_cmd_type {
+ SYNC_CMD_DIRECT_RESP,
+ SYNC_CMD_SGE_RESP,
+ ASYNC_CMD
+};
+
+enum spnic_cmdq_type {
+ SPNIC_CMDQ_SYNC,
+ SPNIC_CMDQ_ASYNC,
+ SPNIC_MAX_CMDQ_TYPES
+};
+
+enum spnic_db_src_type {
+ SPNIC_DB_SRC_CMDQ_TYPE,
+ SPNIC_DB_SRC_L2NIC_SQ_TYPE
+};
+
+enum spnic_cmdq_db_type {
+ SPNIC_DB_SQ_RQ_TYPE,
+ SPNIC_DB_CMDQ_TYPE
+};
+
+/* Cmdq ack type */
+enum spnic_ack_type {
+ SPNIC_ACK_TYPE_CMDQ,
+ SPNIC_ACK_TYPE_SHARE_CQN,
+ SPNIC_ACK_TYPE_APP_CQN,
+
+ SPNIC_MOD_ACK_MAX = 15
+};
+
+/* Cmdq wqe ctrls */
+struct spnic_cmdq_header {
+ u32 header_info;
+ u32 saved_data;
+};
+
+struct spnic_scmd_bufdesc {
+ u32 buf_len;
+ u32 rsvd;
+ u8 data[SPNIC_SCMD_DATA_LEN];
+};
+
+struct spnic_lcmd_bufdesc {
+ u32 rsvd1;
+ u64 saved_async_buf;
+ u64 rsvd3;
+};
+
+struct spnic_cmdq_db {
+ u32 db_head;
+ u32 db_info;
+};
+
+struct spnic_status {
+ u32 status_info;
+};
+
+struct spnic_ctrl {
+ u32 ctrl_info;
+};
+
+struct spnic_sge_resp {
+ u32 rsvd;
+};
+
+struct spnic_cmdq_completion {
+ /* HW format */
+ union {
+ struct spnic_sge_resp sge_resp;
+ u64 direct_resp;
+ };
+};
+
+struct spnic_cmdq_wqe_scmd {
+ struct spnic_cmdq_header header;
+ u64 rsvd;
+ struct spnic_status status;
+ struct spnic_ctrl ctrl;
+ struct spnic_cmdq_completion completion;
+ struct spnic_scmd_bufdesc buf_desc;
+};
+
+struct spnic_cmdq_wqe_lcmd {
+ struct spnic_cmdq_header header;
+ struct spnic_status status;
+ struct spnic_ctrl ctrl;
+ struct spnic_cmdq_completion completion;
+ struct spnic_lcmd_bufdesc buf_desc;
+};
+
+struct spnic_cmdq_inline_wqe {
+ struct spnic_cmdq_wqe_scmd wqe_scmd;
+};
+
+struct spnic_cmdq_wqe {
+ /* HW format */
+ union {
+ struct spnic_cmdq_inline_wqe inline_wqe;
+ struct spnic_cmdq_wqe_lcmd wqe_lcmd;
+ };
+};
+
+struct spnic_cmdq_ctxt_info {
+ u64 curr_wqe_page_pfn;
+ u64 wq_block_pfn;
+};
+
+struct spnic_cmd_cmdq_ctxt {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u16 func_idx;
+ u8 cmdq_id;
+ u8 rsvd1[5];
+
+ struct spnic_cmdq_ctxt_info ctxt_info;
+};
+
+enum spnic_cmdq_status {
+ SPNIC_CMDQ_ENABLE = BIT(0),
+};
+
+enum spnic_cmdq_cmd_type {
+ SPNIC_CMD_TYPE_NONE,
+ SPNIC_CMD_TYPE_SET_ARM,
+ SPNIC_CMD_TYPE_DIRECT_RESP,
+ SPNIC_CMD_TYPE_SGE_RESP
+};
+
+struct spnic_cmdq_cmd_info {
+ enum spnic_cmdq_cmd_type cmd_type;
+};
+
+struct spnic_cmdq {
+ struct spnic_wq *wq;
+
+ enum spnic_cmdq_type cmdq_type;
+ int wrapped;
+
+ int *errcode;
+ u8 *db_base;
+
+ rte_spinlock_t cmdq_lock;
+
+ struct spnic_cmdq_ctxt_info cmdq_ctxt;
+
+ struct spnic_cmdq_cmd_info *cmd_infos;
+};
+
+struct spnic_cmdqs {
+ struct spnic_hwdev *hwdev;
+
+ struct rte_mempool *cmd_buf_pool;
+
+ struct spnic_wq *saved_wqs;
+
+ struct spnic_cmdq cmdq[SPNIC_MAX_CMDQ_TYPES];
+
+ u32 status;
+};
+
+struct spnic_cmd_buf {
+ void *buf;
+ uint64_t dma_addr;
+ struct rte_mbuf *mbuf;
+ u16 size;
+};
+
+int spnic_reinit_cmdq_ctxts(struct spnic_hwdev *hwdev);
+
+int spnic_cmdqs_init(struct spnic_hwdev *hwdev);
+
+void spnic_cmdqs_free(struct spnic_hwdev *hwdev);
+
+#endif /* _SPNIC_CMDQ_H_ */
diff --git a/drivers/net/spnic/base/spnic_hw_comm.c b/drivers/net/spnic/base/spnic_hw_comm.c
new file mode 100644
index 0000000000..7c58989c14
--- /dev/null
+++ b/drivers/net/spnic/base/spnic_hw_comm.c
@@ -0,0 +1,222 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Ramaxel Memory Technology, Ltd
+ */
+
+#include <ethdev_driver.h>
+#include <rte_bus_pci.h>
+#include <rte_hash.h>
+#include <rte_jhash.h>
+
+#include "spnic_compat.h"
+#include "spnic_csr.h"
+#include "spnic_hwdev.h"
+#include "spnic_hwif.h"
+#include "spnic_mgmt.h"
+#include "spnic_cmdq.h"
+#include "spnic_hw_comm.h"
+#include "spnic_cmd.h"
+
+#define SPNIC_MSIX_CNT_LLI_TIMER_SHIFT 0
+#define SPNIC_MSIX_CNT_LLI_CREDIT_SHIFT 8
+#define SPNIC_MSIX_CNT_COALESC_TIMER_SHIFT 8
+#define SPNIC_MSIX_CNT_PENDING_SHIFT 8
+#define SPNIC_MSIX_CNT_RESEND_TIMER_SHIFT 29
+
+#define SPNIC_MSIX_CNT_LLI_TIMER_MASK 0xFFU
+#define SPNIC_MSIX_CNT_LLI_CREDIT_MASK 0xFFU
+#define SPNIC_MSIX_CNT_COALESC_TIMER_MASK 0xFFU
+#define SPNIC_MSIX_CNT_PENDING_MASK 0x1FU
+#define SPNIC_MSIX_CNT_RESEND_TIMER_MASK 0x7U
+
+int spnic_get_interrupt_cfg(void *dev, struct interrupt_info *info)
+{
+ struct spnic_hwdev *hwdev = dev;
+ struct spnic_cmd_msix_config msix_cfg;
+ u16 out_size = sizeof(msix_cfg);
+ int err;
+
+ if (!hwdev || !info)
+ return -EINVAL;
+
+ memset(&msix_cfg, 0, sizeof(msix_cfg));
+ msix_cfg.func_id = spnic_global_func_id(hwdev);
+ msix_cfg.msix_index = info->msix_index;
+ msix_cfg.opcode = SPNIC_MGMT_CMD_OP_GET;
+
+ err = spnic_msg_to_mgmt_sync(hwdev, SPNIC_MOD_COMM,
+ MGMT_CMD_CFG_MSIX_CTRL_REG,
+ &msix_cfg, sizeof(msix_cfg),
+ &msix_cfg, &out_size, 0);
+ if (err || !out_size || msix_cfg.status) {
+ PMD_DRV_LOG(ERR, "Get interrupt config failed, err: %d, "
+ "status: 0x%x, out size: 0x%x",
+ err, msix_cfg.status, out_size);
+ return -EINVAL;
+ }
+
+ info->lli_credit_limit = msix_cfg.lli_credit_cnt;
+ info->lli_timer_cfg = msix_cfg.lli_tmier_cnt;
+ info->pending_limt = msix_cfg.pending_cnt;
+ info->coalesc_timer_cfg = msix_cfg.coalesct_timer_cnt;
+ info->resend_timer_cfg = msix_cfg.resend_timer_cnt;
+
+ return 0;
+}
+
+/**
+ * Set interrupt cfg
+ *
+ * @param[in] dev
+ * The pointer to the private hardware device object
+ * @param[in] info
+ * Interrupt info
+ *
+ * @retval zero : Success
+ * @retval negative : Failure.
+ */
+int spnic_set_interrupt_cfg(void *dev, struct interrupt_info info)
+{
+ struct spnic_hwdev *hwdev = dev;
+ struct spnic_cmd_msix_config msix_cfg;
+ struct interrupt_info temp_info;
+ u16 out_size = sizeof(msix_cfg);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ temp_info.msix_index = info.msix_index;
+ err = spnic_get_interrupt_cfg(hwdev, &temp_info);
+ if (err)
+ return -EIO;
+
+ memset(&msix_cfg, 0, sizeof(msix_cfg));
+ msix_cfg.func_id = spnic_global_func_id(hwdev);
+ msix_cfg.msix_index = (u16)info.msix_index;
+ msix_cfg.opcode = SPNIC_MGMT_CMD_OP_SET;
+
+ msix_cfg.lli_credit_cnt = temp_info.lli_credit_limit;
+ msix_cfg.lli_tmier_cnt = temp_info.lli_timer_cfg;
+ msix_cfg.pending_cnt = temp_info.pending_limt;
+ msix_cfg.coalesct_timer_cnt = temp_info.coalesc_timer_cfg;
+ msix_cfg.resend_timer_cnt = temp_info.resend_timer_cfg;
+
+ if (info.lli_set) {
+ msix_cfg.lli_credit_cnt = info.lli_credit_limit;
+ msix_cfg.lli_tmier_cnt = info.lli_timer_cfg;
+ }
+
+ if (info.interrupt_coalesc_set) {
+ msix_cfg.pending_cnt = info.pending_limt;
+ msix_cfg.coalesct_timer_cnt = info.coalesc_timer_cfg;
+ msix_cfg.resend_timer_cnt = info.resend_timer_cfg;
+ }
+
+ err = spnic_msg_to_mgmt_sync(hwdev, SPNIC_MOD_COMM,
+ MGMT_CMD_CFG_MSIX_CTRL_REG,
+ &msix_cfg, sizeof(msix_cfg),
+ &msix_cfg, &out_size, 0);
+ if (err || !out_size || msix_cfg.status) {
+ PMD_DRV_LOG(ERR, "Set interrupt config failed, err: %d, "
+ "status: 0x%x, out size: 0x%x",
+ err, msix_cfg.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int spnic_set_wq_page_size(void *hwdev, u16 func_idx, u32 page_size)
+{
+ struct spnic_cmd_wq_page_size page_size_info;
+ u16 out_size = sizeof(page_size_info);
+ int err;
+
+ memset(&page_size_info, 0, sizeof(page_size_info));
+ page_size_info.func_idx = func_idx;
+ page_size_info.page_size = SPNIC_PAGE_SIZE_HW(page_size);
+ page_size_info.opcode = SPNIC_MGMT_CMD_OP_SET;
+
+ err = spnic_msg_to_mgmt_sync(hwdev, SPNIC_MOD_COMM,
+ MGMT_CMD_CFG_PAGESIZE,
+ &page_size_info, sizeof(page_size_info),
+ &page_size_info, &out_size, 0);
+ if (err || !out_size || page_size_info.status) {
+ PMD_DRV_LOG(ERR, "Set wq page size failed, err: %d, "
+ "status: 0x%x, out_size: 0x%0x",
+ err, page_size_info.status, out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+int spnic_set_cmdq_depth(void *hwdev, u16 cmdq_depth)
+{
+ struct spnic_cmd_root_ctxt root_ctxt;
+ u16 out_size = sizeof(root_ctxt);
+ int err;
+
+ memset(&root_ctxt, 0, sizeof(root_ctxt));
+ root_ctxt.func_idx = spnic_global_func_id(hwdev);
+ root_ctxt.set_cmdq_depth = 1;
+ root_ctxt.cmdq_depth = (u8)ilog2(cmdq_depth);
+
+ err = spnic_msg_to_mgmt_sync(hwdev, SPNIC_MOD_COMM, MGMT_CMD_SET_VAT,
+ &root_ctxt, sizeof(root_ctxt),
+ &root_ctxt, &out_size, 0);
+ if (err || !out_size || root_ctxt.status) {
+ PMD_DRV_LOG(ERR, "Set cmdq depth failed, err: %d, status: 0x%x, out_size: 0x%x",
+ err, root_ctxt.status, out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+/**
+ * Set the dma attributes for entry
+ *
+ * @param[in] hwdev
+ * The pointer to the private hardware device object
+ * @param[in] entry_idx
+ * The entry index in the dma table
+ * @param[in] st
+ * PCIE TLP steering tag
+ * @param[in] at
+ * PCIE TLP AT field
+ * @param[in] ph
+ * PCIE TLP Processing Hint field
+ * @param[in] no_snooping
+ * PCIE TLP No snooping
+ * @param[in] tph_en
+ * PCIE TLP Processing Hint Enable
+ */
+int spnic_set_dma_attr_tbl(struct spnic_hwdev *hwdev, u32 entry_idx, u8 st,
+ u8 at, u8 ph, u8 no_snooping, u8 tph_en)
+{
+ struct comm_cmd_dma_attr_config dma_attr;
+ u16 out_size = sizeof(dma_attr);
+ int err;
+
+ memset(&dma_attr, 0, sizeof(dma_attr));
+ dma_attr.func_id = spnic_global_func_id(hwdev);
+ dma_attr.entry_idx = entry_idx;
+ dma_attr.st = st;
+ dma_attr.at = at;
+ dma_attr.ph = ph;
+ dma_attr.no_snooping = no_snooping;
+ dma_attr.tph_en = tph_en;
+
+ err = spnic_msg_to_mgmt_sync(hwdev, SPNIC_MOD_COMM,
+ MGMT_CMD_SET_DMA_ATTR,
+ &dma_attr, sizeof(dma_attr),
+ &dma_attr, &out_size, 0);
+ if (err || !out_size || dma_attr.head.status) {
+ PMD_DRV_LOG(ERR, "Failed to set dma attr, err: %d, status: 0x%x, out_size: 0x%x\n",
+ err, dma_attr.head.status, out_size);
+ return -EINVAL;
+ }
+
+ return 0;
+}
diff --git a/drivers/net/spnic/base/spnic_hw_comm.h b/drivers/net/spnic/base/spnic_hw_comm.h
new file mode 100644
index 0000000000..c905f49b7a
--- /dev/null
+++ b/drivers/net/spnic/base/spnic_hw_comm.h
@@ -0,0 +1,176 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Ramaxel Memory Technology, Ltd
+ */
+
+#ifndef _SPNIC_HW_COMM_H_
+#define _SPNIC_HW_COMM_H_
+
+#define SPNIC_MGMT_CMD_OP_GET 0
+#define SPNIC_MGMT_CMD_OP_SET 1
+
+#define SPNIC_MSIX_CNT_LLI_TIMER_SHIFT 0
+#define SPNIC_MSIX_CNT_LLI_CREDIT_SHIFT 8
+#define SPNIC_MSIX_CNT_COALESC_TIMER_SHIFT 8
+#define SPNIC_MSIX_CNT_PENDING_SHIFT 8
+#define SPNIC_MSIX_CNT_RESEND_TIMER_SHIFT 29
+
+#define SPNIC_MSIX_CNT_LLI_TIMER_MASK 0xFFU
+#define SPNIC_MSIX_CNT_LLI_CREDIT_MASK 0xFFU
+#define SPNIC_MSIX_CNT_COALESC_TIMER_MASK 0xFFU
+#define SPNIC_MSIX_CNT_PENDING_MASK 0x1FU
+#define SPNIC_MSIX_CNT_RESEND_TIMER_MASK 0x7U
+
+#define SPNIC_MSIX_CNT_SET(val, member) \
+ (((val) & SPNIC_MSIX_CNT_##member##_MASK) << \
+ SPNIC_MSIX_CNT_##member##_SHIFT)
+
+#define MSG_TO_MGMT_SYNC_RETURN_ERR(err, out_size, status) \
+ ((err) || (status) || !(out_size))
+
+struct spnic_cmd_msix_config {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u16 func_id;
+ u8 opcode;
+ u8 rsvd1;
+ u16 msix_index;
+ u8 pending_cnt;
+ u8 coalesct_timer_cnt;
+ u8 resend_timer_cnt;
+ u8 lli_tmier_cnt;
+ u8 lli_credit_cnt;
+ u8 rsvd2[5];
+};
+
+#define SPNIC_PAGE_SIZE_HW(pg_size) ((u8)ilog2((u32)((pg_size) >> 12)))
+
+struct spnic_cmd_wq_page_size {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u16 func_idx;
+ u8 opcode;
+ /*
+ * Real size is 4KB * 2^page_size, range(0~20) must be checked
+ * by driver
+ */
+ u8 page_size;
+
+ u32 rsvd1;
+};
+
+struct spnic_reset {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u16 func_id;
+ u16 rsvd1[3];
+ u64 reset_flag;
+};
+
+struct spnic_cmd_root_ctxt {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u16 func_idx;
+ u8 set_cmdq_depth;
+ u8 cmdq_depth;
+ u16 rx_buf_sz;
+ u8 lro_en;
+ u8 rsvd1;
+ u16 sq_depth;
+ u16 rq_depth;
+ u64 rsvd2;
+};
+
+enum spnic_fw_ver_type {
+ SPNIC_FW_VER_TYPE_BOOT,
+ SPNIC_FW_VER_TYPE_MPU,
+ SPNIC_FW_VER_TYPE_NPU,
+ SPNIC_FW_VER_TYPE_SMU,
+ SPNIC_FW_VER_TYPE_CFG,
+};
+
+struct comm_cmd_dma_attr_config {
+ struct mgmt_msg_head head;
+
+ u16 func_id;
+ u8 entry_idx;
+ u8 st;
+ u8 at;
+ u8 ph;
+ u8 no_snooping;
+ u8 tph_en;
+ u32 resv1;
+};
+
+#define SPNIC_FW_VERSION_LEN 16
+#define SPNIC_FW_COMPILE_TIME_LEN 20
+#define SPNIC_MGMT_VERSION_MAX_LEN 32
+struct spnic_cmd_get_fw_version {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u16 fw_type;
+ u16 rsvd1;
+ u8 ver[SPNIC_FW_VERSION_LEN];
+ u8 time[SPNIC_FW_COMPILE_TIME_LEN];
+};
+
+struct spnic_cmd_clear_doorbell {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u16 func_idx;
+ u16 rsvd1[3];
+};
+
+struct spnic_cmd_clear_resource {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u16 func_idx;
+ u16 rsvd1[3];
+};
+
+struct spnic_cmd_board_info {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ struct spnic_board_info info;
+
+ u32 rsvd1[25];
+};
+
+struct interrupt_info {
+ u32 lli_set;
+ u32 interrupt_coalesc_set;
+ u16 msix_index;
+ u8 lli_credit_limit;
+ u8 lli_timer_cfg;
+ u8 pending_limt;
+ u8 coalesc_timer_cfg;
+ u8 resend_timer_cfg;
+};
+
+int spnic_get_interrupt_cfg(void *dev, struct interrupt_info *info);
+
+int spnic_set_interrupt_cfg(void *dev, struct interrupt_info info);
+
+int spnic_set_wq_page_size(void *hwdev, u16 func_idx, u32 page_size);
+
+int spnic_set_cmdq_depth(void *hwdev, u16 cmdq_depth);
+
+int spnic_set_dma_attr_tbl(struct spnic_hwdev *hwdev, u32 entry_idx, u8 st,
+ u8 at, u8 ph, u8 no_snooping, u8 tph_en);
+
+#endif
diff --git a/drivers/net/spnic/base/spnic_hwdev.c b/drivers/net/spnic/base/spnic_hwdev.c
index 2b5154f8a4..5671cb860c 100644
--- a/drivers/net/spnic/base/spnic_hwdev.c
+++ b/drivers/net/spnic/base/spnic_hwdev.c
@@ -9,7 +9,64 @@
#include "spnic_mgmt.h"
#include "spnic_cmd.h"
#include "spnic_mbox.h"
+#include "spnic_cmdq.h"
#include "spnic_hwdev.h"
+#include "spnic_hw_comm.h"
+
+enum spnic_pcie_nosnoop {
+ SPNIC_PCIE_SNOOP = 0,
+ SPNIC_PCIE_NO_SNOOP = 1
+};
+
+enum spnic_pcie_tph {
+ SPNIC_PCIE_TPH_DISABLE = 0,
+ SPNIC_PCIE_TPH_ENABLE = 1
+};
+
+#define SPNIC_DMA_ATTR_INDIR_IDX_SHIFT 0
+
+#define SPNIC_DMA_ATTR_INDIR_IDX_MASK 0x3FF
+
+#define SPNIC_DMA_ATTR_INDIR_IDX_SET(val, member) \
+ (((u32)(val) & SPNIC_DMA_ATTR_INDIR_##member##_MASK) << \
+ SPNIC_DMA_ATTR_INDIR_##member##_SHIFT)
+
+#define SPNIC_DMA_ATTR_INDIR_IDX_CLEAR(val, member) \
+ ((val) & (~(SPNIC_DMA_ATTR_INDIR_##member##_MASK \
+ << SPNIC_DMA_ATTR_INDIR_##member##_SHIFT)))
+
+#define SPNIC_DMA_ATTR_ENTRY_ST_SHIFT 0
+#define SPNIC_DMA_ATTR_ENTRY_AT_SHIFT 8
+#define SPNIC_DMA_ATTR_ENTRY_PH_SHIFT 10
+#define SPNIC_DMA_ATTR_ENTRY_NO_SNOOPING_SHIFT 12
+#define SPNIC_DMA_ATTR_ENTRY_TPH_EN_SHIFT 13
+
+#define SPNIC_DMA_ATTR_ENTRY_ST_MASK 0xFF
+#define SPNIC_DMA_ATTR_ENTRY_AT_MASK 0x3
+#define SPNIC_DMA_ATTR_ENTRY_PH_MASK 0x3
+#define SPNIC_DMA_ATTR_ENTRY_NO_SNOOPING_MASK 0x1
+#define SPNIC_DMA_ATTR_ENTRY_TPH_EN_MASK 0x1
+
+#define SPNIC_DMA_ATTR_ENTRY_SET(val, member) \
+ (((u32)(val) & SPNIC_DMA_ATTR_ENTRY_##member##_MASK) << \
+ SPNIC_DMA_ATTR_ENTRY_##member##_SHIFT)
+
+#define SPNIC_DMA_ATTR_ENTRY_CLEAR(val, member) \
+ ((val) & (~(SPNIC_DMA_ATTR_ENTRY_##member##_MASK \
+ << SPNIC_DMA_ATTR_ENTRY_##member##_SHIFT)))
+
+#define SPNIC_PCIE_ST_DISABLE 0
+#define SPNIC_PCIE_AT_DISABLE 0
+#define SPNIC_PCIE_PH_DISABLE 0
+
+#define PCIE_MSIX_ATTR_ENTRY 0
+
+#define SPNIC_CHIP_PRESENT 1
+#define SPNIC_CHIP_ABSENT 0
+
+#define SPNIC_DEAULT_EQ_MSIX_PENDING_LIMIT 0
+#define SPNIC_DEAULT_EQ_MSIX_COALESC_TIMER_CFG 0xFF
+#define SPNIC_DEAULT_EQ_MSIX_RESEND_TIMER_CFG 7
typedef void (*mgmt_event_cb)(void *handle, void *buf_in, u16 in_size,
void *buf_out, u16 *out_size);
@@ -100,6 +157,78 @@ void pf_handle_mgmt_comm_event(void *handle, __rte_unused void *pri_handle,
PMD_DRV_LOG(WARNING, "Unsupported mgmt cpu event %d to process", cmd);
}
+/**
+ * Initialize the default dma attributes
+ *
+ * @param[in] hwdev
+ * The pointer to the private hardware device object
+ *
+ * @retval zero: Success
+ * @retval non-zero: Failure
+ */
+static int dma_attr_table_init(struct spnic_hwdev *hwdev)
+{
+ u32 addr, val, dst_attr;
+
+ /* Indirect access: the entry_idx must be set first */
+ addr = SPNIC_CSR_DMA_ATTR_INDIR_IDX_ADDR;
+ val = spnic_hwif_read_reg(hwdev->hwif, addr);
+ val = SPNIC_DMA_ATTR_INDIR_IDX_CLEAR(val, IDX);
+
+ val |= SPNIC_DMA_ATTR_INDIR_IDX_SET(PCIE_MSIX_ATTR_ENTRY, IDX);
+
+ spnic_hwif_write_reg(hwdev->hwif, addr, val);
+
+ rte_wmb(); /* Write index before config */
+
+ addr = SPNIC_CSR_DMA_ATTR_TBL_ADDR;
+ val = spnic_hwif_read_reg(hwdev->hwif, addr);
+
+ dst_attr = SPNIC_DMA_ATTR_ENTRY_SET(SPNIC_PCIE_ST_DISABLE, ST) |
+ SPNIC_DMA_ATTR_ENTRY_SET(SPNIC_PCIE_AT_DISABLE, AT) |
+ SPNIC_DMA_ATTR_ENTRY_SET(SPNIC_PCIE_PH_DISABLE, PH) |
+ SPNIC_DMA_ATTR_ENTRY_SET(SPNIC_PCIE_SNOOP, NO_SNOOPING) |
+ SPNIC_DMA_ATTR_ENTRY_SET(SPNIC_PCIE_TPH_DISABLE, TPH_EN);
+
+ if (val == dst_attr)
+ return 0;
+
+ return spnic_set_dma_attr_tbl(hwdev, PCIE_MSIX_ATTR_ENTRY,
+ SPNIC_PCIE_ST_DISABLE,
+ SPNIC_PCIE_AT_DISABLE,
+ SPNIC_PCIE_PH_DISABLE,
+ SPNIC_PCIE_SNOOP,
+ SPNIC_PCIE_TPH_DISABLE);
+}
+
+static int init_aeqs_msix_attr(struct spnic_hwdev *hwdev)
+{
+ struct spnic_aeqs *aeqs = hwdev->aeqs;
+ struct interrupt_info info = {0};
+ struct spnic_eq *eq = NULL;
+ u16 q_id;
+ int err;
+
+ info.lli_set = 0;
+ info.interrupt_coalesc_set = 1;
+ info.pending_limt = SPNIC_DEAULT_EQ_MSIX_PENDING_LIMIT;
+ info.coalesc_timer_cfg = SPNIC_DEAULT_EQ_MSIX_COALESC_TIMER_CFG;
+ info.resend_timer_cfg = SPNIC_DEAULT_EQ_MSIX_RESEND_TIMER_CFG;
+
+ for (q_id = 0; q_id < aeqs->num_aeqs; q_id++) {
+ eq = &aeqs->aeq[q_id];
+ info.msix_index = eq->eq_irq.msix_entry_idx;
+ err = spnic_set_interrupt_cfg(hwdev, info);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Set msix attr for aeq %d failed",
+ q_id);
+ return -EFAULT;
+ }
+ }
+
+ return 0;
+}
+
static int spnic_comm_pf_to_mgmt_init(struct spnic_hwdev *hwdev)
{
int err;
@@ -124,6 +253,35 @@ static void spnic_comm_pf_to_mgmt_free(struct spnic_hwdev *hwdev)
spnic_pf_to_mgmt_free(hwdev);
}
+static int spnic_comm_cmdqs_init(struct spnic_hwdev *hwdev)
+{
+ int err;
+
+ err = spnic_cmdqs_init(hwdev);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Init cmd queues failed");
+ return err;
+ }
+
+ err = spnic_set_cmdq_depth(hwdev, SPNIC_CMDQ_DEPTH);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Set cmdq depth failed");
+ goto set_cmdq_depth_err;
+ }
+
+ return 0;
+
+set_cmdq_depth_err:
+ spnic_cmdqs_free(hwdev);
+
+ return err;
+}
+
+static void spnic_comm_cmdqs_free(struct spnic_hwdev *hwdev)
+{
+ spnic_cmdqs_free(hwdev);
+}
+
static int init_mgmt_channel(struct spnic_hwdev *hwdev)
{
int err;
@@ -164,6 +322,51 @@ static void free_mgmt_channel(struct spnic_hwdev *hwdev)
spnic_aeqs_free(hwdev);
}
+#define SPNIC_DEFAULT_WQ_PAGE_SIZE 0x100000
+#define SPNIC_HW_WQ_PAGE_SIZE 0x1000
+
+static int init_cmdqs_channel(struct spnic_hwdev *hwdev)
+{
+ int err;
+
+ err = dma_attr_table_init(hwdev);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Init dma attr table failed");
+ goto dma_attr_init_err;
+ }
+
+ err = init_aeqs_msix_attr(hwdev);
+ if (err)
+ goto init_aeqs_msix_err;
+
+ /* Set default wq page_size */
+ hwdev->wq_page_size = SPNIC_DEFAULT_WQ_PAGE_SIZE;
+ err = spnic_set_wq_page_size(hwdev, spnic_global_func_id(hwdev),
+ hwdev->wq_page_size);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Set wq page size failed");
+ goto init_wq_pg_size_err;
+ }
+
+ err = spnic_comm_cmdqs_init(hwdev);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Init cmd queues failed");
+ goto cmdq_init_err;
+ }
+
+ return 0;
+
+cmdq_init_err:
+ if (SPNIC_FUNC_TYPE(hwdev) != TYPE_VF)
+ spnic_set_wq_page_size(hwdev, spnic_global_func_id(hwdev),
+ SPNIC_HW_WQ_PAGE_SIZE);
+init_wq_pg_size_err:
+init_aeqs_msix_err:
+dma_attr_init_err:
+
+ return err;
+}
+
static int spnic_init_comm_ch(struct spnic_hwdev *hwdev)
{
int err;
@@ -174,11 +377,23 @@ static int spnic_init_comm_ch(struct spnic_hwdev *hwdev)
return err;
}
+ err = init_cmdqs_channel(hwdev);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Init cmdq channel failed");
+ goto init_cmdqs_channel_err;
+ }
+
return 0;
+
+init_cmdqs_channel_err:
+ free_mgmt_channel(hwdev);
+
+ return err;
}
static void spnic_uninit_comm_ch(struct spnic_hwdev *hwdev)
{
+ spnic_comm_cmdqs_free(hwdev);
free_mgmt_channel(hwdev);
}
diff --git a/drivers/net/spnic/base/spnic_hwdev.h b/drivers/net/spnic/base/spnic_hwdev.h
index 4e77d776ee..8c581c7480 100644
--- a/drivers/net/spnic/base/spnic_hwdev.h
+++ b/drivers/net/spnic/base/spnic_hwdev.h
@@ -91,13 +91,17 @@ struct spnic_hwdev {
void *dev_handle; /* Pointer to spnic_nic_dev */
void *pci_dev; /* Pointer to rte_pci_device */
void *eth_dev; /* Pointer to rte_eth_dev */
-
+ struct spnic_hwif *hwif;
uint16_t port_id;
- struct spnic_hwif *hwif;
+ u32 wq_page_size;
+
struct spnic_mbox *func_to_func;
struct cfg_mgmt_info *cfg_mgmt;
+
+ struct spnic_cmdqs *cmdqs;
struct spnic_aeqs *aeqs;
+
struct spnic_msg_pf_to_mgmt *pf_to_mgmt;
u8 *chip_fault_stats;
struct spnic_hw_stats hw_stats;
diff --git a/drivers/net/spnic/base/spnic_wq.h b/drivers/net/spnic/base/spnic_wq.h
new file mode 100644
index 0000000000..032d45e79e
--- /dev/null
+++ b/drivers/net/spnic/base/spnic_wq.h
@@ -0,0 +1,57 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Ramaxel Memory Technology, Ltd
+ */
+
+#ifndef _SPNIC_WQ_H_
+#define _SPNIC_WQ_H_
+
+/* Use 0-level CLA, page size must be: SQ 16B(wqe) * 64k(max_q_depth) */
+#define SPNIC_DEFAULT_WQ_PAGE_SIZE 0x100000
+#define SPNIC_HW_WQ_PAGE_SIZE 0x1000
+
+#define CMDQ_BLOCKS_PER_PAGE 8
+#define CMDQ_BLOCK_SIZE 512UL
+#define CMDQ_PAGE_SIZE RTE_ALIGN((CMDQ_BLOCKS_PER_PAGE * \
+ CMDQ_BLOCK_SIZE), PAGE_SIZE)
+
+#define CMDQ_BASE_VADDR(cmdq_pages, wq) \
+ ((u64 *)(((u64)((cmdq_pages)->cmdq_page_vaddr)) \
+ + (u64)((wq)->block_idx * CMDQ_BLOCK_SIZE)))
+
+#define CMDQ_BASE_PADDR(cmdq_pages, wq) \
+ (((u64)((cmdq_pages)->cmdq_page_paddr)) \
+ + (u64)(wq)->block_idx * CMDQ_BLOCK_SIZE)
+
+#define CMDQ_BASE_ADDR(cmdq_pages, wq) \
+ ((u64 *)(((u64)((cmdq_pages)->cmdq_shadow_page_vaddr)) \
+ + (u64)((wq)->block_idx * CMDQ_BLOCK_SIZE)))
+
+#define MASKED_WQE_IDX(wq, idx) ((idx) & (wq)->mask)
+
+#define WQ_WQE_ADDR(wq, idx) ((void *)((u64)((wq)->queue_buf_vaddr) + \
+ ((idx) << (wq)->wqebb_shift)))
+
+struct spnic_wq {
+ /* The addresses are 64 bit in the HW */
+ u64 queue_buf_vaddr;
+
+ u16 q_depth;
+ u16 mask;
+ rte_atomic32_t delta;
+
+ u32 cons_idx;
+ u32 prod_idx;
+
+ u64 queue_buf_paddr;
+
+ u32 wqebb_size;
+ u32 wqebb_shift;
+
+ u32 wq_buf_size;
+
+ const struct rte_memzone *wq_mz;
+
+ u32 rsvd[5];
+};
+
+#endif /* _SPNIC_WQ_H_ :*/
--
2.27.0
* [PATCH v1 07/25] net/spnic: add interface handling cmdq message
2021-12-18 2:51 [PATCH v1 00/25] Net/SPNIC: support SPNIC into DPDK 22.03 Yanling Song
` (5 preceding siblings ...)
2021-12-18 2:51 ` [PATCH v1 06/25] net/spnic: add cmdq and work queue Yanling Song
@ 2021-12-18 2:51 ` Yanling Song
2021-12-18 2:51 ` [PATCH v1 08/25] net/spnic: add hardware info initialization Yanling Song
` (17 subsequent siblings)
24 siblings, 0 replies; 32+ messages in thread
From: Yanling Song @ 2021-12-18 2:51 UTC (permalink / raw)
To: dev; +Cc: songyl, yanling.song, yanggan, ferruh.yigit
This commit adds the cmdq_sync_cmd_direct_resp() and
cmdq_sync_cmd_detail_resp() interfaces, by which the driver can send
cmdq messages using a WQE, a data structure that describes the buffer.
Signed-off-by: Yanling Song <songyl@ramaxel.com>
---
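For reviewers, a minimal caller sketch (illustration only, not part of
this patch) of how the direct-response path could be used; the module
and command values below are placeholders and the 16-byte payload
layout is hypothetical.

static int example_cmdq_direct(void *hwdev)
{
	struct spnic_cmd_buf *buf_in;
	u64 out_param = 0;
	int err;

	buf_in = spnic_alloc_cmd_buf(hwdev);
	if (!buf_in)
		return -ENOMEM;

	/* Fill the request carried by the wqe's SGE buffer descriptor */
	memset(buf_in->buf, 0, 16);
	buf_in->size = 16;

	/* timeout = 0 selects the default cmdq timeout */
	err = spnic_cmdq_direct_resp(hwdev, SPNIC_MOD_COMM,
				     0 /* hypothetical command id */,
				     buf_in, &out_param, 0);

	spnic_free_cmd_buf(buf_in);
	return err;
}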
drivers/net/spnic/base/meson.build | 1 +
drivers/net/spnic/base/spnic_cmdq.c | 673 +++++++++++++++++++++++++
drivers/net/spnic/base/spnic_cmdq.h | 20 +
drivers/net/spnic/base/spnic_hw_comm.c | 41 ++
drivers/net/spnic/base/spnic_hwdev.c | 8 +-
drivers/net/spnic/base/spnic_hwdev.h | 13 +
drivers/net/spnic/base/spnic_wq.c | 139 +++++
drivers/net/spnic/base/spnic_wq.h | 70 ++-
8 files changed, 960 insertions(+), 5 deletions(-)
create mode 100644 drivers/net/spnic/base/spnic_wq.c
diff --git a/drivers/net/spnic/base/meson.build b/drivers/net/spnic/base/meson.build
index 5e4efac7be..da6d6ee4a2 100644
--- a/drivers/net/spnic/base/meson.build
+++ b/drivers/net/spnic/base/meson.build
@@ -10,6 +10,7 @@ sources = [
'spnic_nic_event.c',
'spnic_cmdq.c',
'spnic_hw_comm.c',
+ 'spnic_wq.c'
]
extra_flags = []
diff --git a/drivers/net/spnic/base/spnic_cmdq.c b/drivers/net/spnic/base/spnic_cmdq.c
index ccfcf739a0..3ab518eade 100644
--- a/drivers/net/spnic/base/spnic_cmdq.c
+++ b/drivers/net/spnic/base/spnic_cmdq.c
@@ -12,6 +12,71 @@
#include "spnic_mgmt.h"
#include "spnic_cmdq.h"
+#define CMDQ_CMD_TIMEOUT 300000 /* Millisecond */
+
+#define UPPER_8_BITS(data) (((data) >> 8) & 0xFF)
+#define LOWER_8_BITS(data) ((data) & 0xFF)
+
+#define CMDQ_DB_INFO_HI_PROD_IDX_SHIFT 0
+#define CMDQ_DB_INFO_HI_PROD_IDX_MASK 0xFFU
+
+#define CMDQ_DB_INFO_SET(val, member) \
+ ((((u32)(val)) & CMDQ_DB_INFO_##member##_MASK) \
+ << CMDQ_DB_INFO_##member##_SHIFT)
+#define CMDQ_DB_INFO_UPPER_32(val) ((u64)(val) << 32)
+
+#define CMDQ_DB_HEAD_QUEUE_TYPE_SHIFT 23
+#define CMDQ_DB_HEAD_CMDQ_TYPE_SHIFT 24
+#define CMDQ_DB_HEAD_SRC_TYPE_SHIFT 27
+#define CMDQ_DB_HEAD_QUEUE_TYPE_MASK 0x1U
+#define CMDQ_DB_HEAD_CMDQ_TYPE_MASK 0x7U
+#define CMDQ_DB_HEAD_SRC_TYPE_MASK 0x1FU
+#define CMDQ_DB_HEAD_SET(val, member) \
+ ((((u32)(val)) & CMDQ_DB_HEAD_##member##_MASK) << \
+ CMDQ_DB_HEAD_##member##_SHIFT)
+
+#define CMDQ_CTRL_PI_SHIFT 0
+#define CMDQ_CTRL_CMD_SHIFT 16
+#define CMDQ_CTRL_MOD_SHIFT 24
+#define CMDQ_CTRL_ACK_TYPE_SHIFT 29
+#define CMDQ_CTRL_HW_BUSY_BIT_SHIFT 31
+
+#define CMDQ_CTRL_PI_MASK 0xFFFFU
+#define CMDQ_CTRL_CMD_MASK 0xFFU
+#define CMDQ_CTRL_MOD_MASK 0x1FU
+#define CMDQ_CTRL_ACK_TYPE_MASK 0x3U
+#define CMDQ_CTRL_HW_BUSY_BIT_MASK 0x1U
+
+#define CMDQ_CTRL_SET(val, member) \
+ (((u32)(val) & CMDQ_CTRL_##member##_MASK) << CMDQ_CTRL_##member##_SHIFT)
+
+#define CMDQ_CTRL_GET(val, member) \
+ (((val) >> CMDQ_CTRL_##member##_SHIFT) & CMDQ_CTRL_##member##_MASK)
+
+#define CMDQ_WQE_HEADER_BUFDESC_LEN_SHIFT 0
+#define CMDQ_WQE_HEADER_COMPLETE_FMT_SHIFT 15
+#define CMDQ_WQE_HEADER_DATA_FMT_SHIFT 22
+#define CMDQ_WQE_HEADER_COMPLETE_REQ_SHIFT 23
+#define CMDQ_WQE_HEADER_COMPLETE_SECT_LEN_SHIFT 27
+#define CMDQ_WQE_HEADER_CTRL_LEN_SHIFT 29
+#define CMDQ_WQE_HEADER_HW_BUSY_BIT_SHIFT 31
+
+#define CMDQ_WQE_HEADER_BUFDESC_LEN_MASK 0xFFU
+#define CMDQ_WQE_HEADER_COMPLETE_FMT_MASK 0x1U
+#define CMDQ_WQE_HEADER_DATA_FMT_MASK 0x1U
+#define CMDQ_WQE_HEADER_COMPLETE_REQ_MASK 0x1U
+#define CMDQ_WQE_HEADER_COMPLETE_SECT_LEN_MASK 0x3U
+#define CMDQ_WQE_HEADER_CTRL_LEN_MASK 0x3U
+#define CMDQ_WQE_HEADER_HW_BUSY_BIT_MASK 0x1U
+
+#define CMDQ_WQE_HEADER_SET(val, member) \
+ (((u32)(val) & CMDQ_WQE_HEADER_##member##_MASK) << \
+ CMDQ_WQE_HEADER_##member##_SHIFT)
+
+#define CMDQ_WQE_HEADER_GET(val, member) \
+ (((val) >> CMDQ_WQE_HEADER_##member##_SHIFT) & \
+ CMDQ_WQE_HEADER_##member##_MASK)
+
#define CMDQ_CTXT_CURR_WQE_PAGE_PFN_SHIFT 0
#define CMDQ_CTXT_EQ_ID_SHIFT 53
#define CMDQ_CTXT_CEQ_ARM_SHIFT 61
@@ -36,8 +101,523 @@
#define CMDQ_CTXT_BLOCK_INFO_SET(val, member) \
(((u64)(val) & CMDQ_CTXT_##member##_MASK) << CMDQ_CTXT_##member##_SHIFT)
+#define SAVED_DATA_ARM_SHIFT 31
+
+#define SAVED_DATA_ARM_MASK 0x1U
+
+#define SAVED_DATA_SET(val, member) \
+ (((val) & SAVED_DATA_##member##_MASK) << SAVED_DATA_##member##_SHIFT)
+
+#define SAVED_DATA_CLEAR(val, member) \
+ ((val) & (~(SAVED_DATA_##member##_MASK << SAVED_DATA_##member##_SHIFT)))
+
+#define WQE_ERRCODE_VAL_SHIFT 0
+
+#define WQE_ERRCODE_VAL_MASK 0x7FFFFFFF
+
+#define WQE_ERRCODE_GET(val, member) \
+ (((val) >> WQE_ERRCODE_##member##_SHIFT) & WQE_ERRCODE_##member##_MASK)
+
+#define WQE_COMPLETED(ctrl_info) CMDQ_CTRL_GET(ctrl_info, HW_BUSY_BIT)
+
+#define WQE_HEADER(wqe) ((struct spnic_cmdq_header *)(wqe))
+
+#define CMDQ_DB_PI_OFF(pi) (((u16)LOWER_8_BITS(pi)) << 3)
+
+#define CMDQ_DB_ADDR(db_base, pi) (((u8 *)(db_base)) + CMDQ_DB_PI_OFF(pi))
+
+#define CMDQ_PFN(addr, page_size) ((addr) >> (ilog2(page_size)))
+
+#define FIRST_DATA_TO_WRITE_LAST sizeof(u64)
+
+#define WQE_LCMD_SIZE 64
+#define WQE_SCMD_SIZE 64
+
+#define COMPLETE_LEN 3
+
+#define CMDQ_WQEBB_SIZE 64
+#define CMDQ_WQEBB_SHIFT 6
+
+#define CMDQ_WQE_SIZE 64
+
+#define SPNIC_CMDQ_WQ_BUF_SIZE 4096
+
+#define WQE_NUM_WQEBBS(wqe_size, wq) \
+ ((u16)(RTE_ALIGN((u32)(wqe_size), (wq)->wqebb_size) / (wq)->wqebb_size))
+
+#define cmdq_to_cmdqs(cmdq) container_of((cmdq) - (cmdq)->cmdq_type, \
+ struct spnic_cmdqs, cmdq[0])
+
#define WAIT_CMDQ_ENABLE_TIMEOUT 300
+static int spnic_cmdq_poll_msg(struct spnic_cmdq *cmdq, u32 timeout);
+
+bool spnic_cmdq_idle(struct spnic_cmdq *cmdq)
+{
+ struct spnic_wq *wq = cmdq->wq;
+
+ return (__atomic_load_n(&wq->delta, __ATOMIC_RELAXED) == wq->q_depth ?
+ true : false);
+}
+
+struct spnic_cmd_buf *spnic_alloc_cmd_buf(void *hwdev)
+{
+ struct spnic_cmdqs *cmdqs = ((struct spnic_hwdev *)hwdev)->cmdqs;
+ struct spnic_cmd_buf *cmd_buf;
+
+ cmd_buf = rte_zmalloc(NULL, sizeof(*cmd_buf), 0);
+ if (!cmd_buf) {
+ PMD_DRV_LOG(ERR, "Allocate cmd buffer failed");
+ return NULL;
+ }
+
+ cmd_buf->mbuf = rte_pktmbuf_alloc(cmdqs->cmd_buf_pool);
+ if (!cmd_buf->mbuf) {
+ PMD_DRV_LOG(ERR, "Allocate cmd from the pool failed");
+ goto alloc_pci_buf_err;
+ }
+
+ cmd_buf->dma_addr = rte_mbuf_data_iova(cmd_buf->mbuf);
+ cmd_buf->buf = rte_pktmbuf_mtod(cmd_buf->mbuf, void *);
+
+ return cmd_buf;
+
+alloc_pci_buf_err:
+ rte_free(cmd_buf);
+ return NULL;
+}
+
+void spnic_free_cmd_buf(struct spnic_cmd_buf *cmd_buf)
+{
+ rte_pktmbuf_free(cmd_buf->mbuf);
+
+ rte_free(cmd_buf);
+}
+
+static u32 cmdq_wqe_size(enum cmdq_wqe_type wqe_type)
+{
+ u32 wqe_size = 0;
+
+ switch (wqe_type) {
+ case WQE_LCMD_TYPE:
+ wqe_size = WQE_LCMD_SIZE;
+ break;
+ case WQE_SCMD_TYPE:
+ wqe_size = WQE_SCMD_SIZE;
+ break;
+ default:
+ break;
+ }
+
+ return wqe_size;
+}
+
+static int cmdq_get_wqe_size(enum bufdesc_len len)
+{
+ int wqe_size = 0;
+
+ switch (len) {
+ case BUFDESC_LCMD_LEN:
+ wqe_size = WQE_LCMD_SIZE;
+ break;
+ case BUFDESC_SCMD_LEN:
+ wqe_size = WQE_SCMD_SIZE;
+ break;
+ default:
+ break;
+ }
+
+ return wqe_size;
+}
+
+static void cmdq_set_completion(struct spnic_cmdq_completion *complete,
+ struct spnic_cmd_buf *buf_out)
+{
+ struct spnic_sge_resp *sge_resp = &complete->sge_resp;
+
+ spnic_set_sge(&sge_resp->sge, buf_out->dma_addr,
+ SPNIC_CMDQ_BUF_SIZE);
+}
+
+static void cmdq_set_lcmd_bufdesc(struct spnic_cmdq_wqe_lcmd *wqe,
+ struct spnic_cmd_buf *buf_in)
+{
+ spnic_set_sge(&wqe->buf_desc.sge, buf_in->dma_addr, buf_in->size);
+}
+
+static void cmdq_set_db(struct spnic_cmdq *cmdq,
+ enum spnic_cmdq_type cmdq_type, u16 prod_idx)
+{
+ u64 db = 0;
+
+ /* Hardware will do endianness converting */
+ db = CMDQ_DB_INFO_SET(UPPER_8_BITS(prod_idx), HI_PROD_IDX);
+ db = CMDQ_DB_INFO_UPPER_32(db) |
+ CMDQ_DB_HEAD_SET(SPNIC_DB_CMDQ_TYPE, QUEUE_TYPE) |
+ CMDQ_DB_HEAD_SET(cmdq_type, CMDQ_TYPE) |
+ CMDQ_DB_HEAD_SET(SPNIC_DB_SRC_CMDQ_TYPE, SRC_TYPE);
+
+ rte_wmb(); /* Write all before the doorbell */
+
+ rte_write64(db, CMDQ_DB_ADDR(cmdq->db_base, prod_idx));
+}
+
+static void cmdq_wqe_fill(void *dst, void *src)
+{
+ memcpy((u8 *)dst + FIRST_DATA_TO_WRITE_LAST,
+ (u8 *)src + FIRST_DATA_TO_WRITE_LAST,
+ CMDQ_WQE_SIZE - FIRST_DATA_TO_WRITE_LAST);
+
+ rte_wmb(); /* The first 8 bytes should be written last */
+
+ *(u64 *)dst = *(u64 *)src;
+}
+
+static void cmdq_prepare_wqe_ctrl(struct spnic_cmdq_wqe *wqe, int wrapped,
+ enum spnic_mod_type mod, u8 cmd, u16 prod_idx,
+ enum completion_format complete_format,
+ enum data_format local_data_format,
+ enum bufdesc_len buf_len)
+{
+ struct spnic_ctrl *ctrl = NULL;
+ enum ctrl_sect_len ctrl_len;
+ struct spnic_cmdq_wqe_lcmd *wqe_lcmd = NULL;
+ struct spnic_cmdq_wqe_scmd *wqe_scmd = NULL;
+ u32 saved_data = WQE_HEADER(wqe)->saved_data;
+
+ if (local_data_format == DATA_SGE) {
+ wqe_lcmd = &wqe->wqe_lcmd;
+
+ wqe_lcmd->status.status_info = 0;
+ ctrl = &wqe_lcmd->ctrl;
+ ctrl_len = CTRL_SECT_LEN;
+ } else {
+ wqe_scmd = &wqe->inline_wqe.wqe_scmd;
+
+ wqe_scmd->status.status_info = 0;
+ ctrl = &wqe_scmd->ctrl;
+ ctrl_len = CTRL_DIRECT_SECT_LEN;
+ }
+
+ ctrl->ctrl_info = CMDQ_CTRL_SET(prod_idx, PI) |
+ CMDQ_CTRL_SET(cmd, CMD) |
+ CMDQ_CTRL_SET(mod, MOD) |
+ CMDQ_CTRL_SET(SPNIC_ACK_TYPE_CMDQ, ACK_TYPE);
+
+ WQE_HEADER(wqe)->header_info =
+ CMDQ_WQE_HEADER_SET(buf_len, BUFDESC_LEN) |
+ CMDQ_WQE_HEADER_SET(complete_format, COMPLETE_FMT) |
+ CMDQ_WQE_HEADER_SET(local_data_format, DATA_FMT) |
+ CMDQ_WQE_HEADER_SET(CEQ_SET, COMPLETE_REQ) |
+ CMDQ_WQE_HEADER_SET(COMPLETE_LEN, COMPLETE_SECT_LEN) |
+ CMDQ_WQE_HEADER_SET(ctrl_len, CTRL_LEN) |
+ CMDQ_WQE_HEADER_SET((u32)wrapped, HW_BUSY_BIT);
+
+ saved_data &= SAVED_DATA_CLEAR(saved_data, ARM);
+ if (cmd == CMDQ_SET_ARM_CMD && mod == SPNIC_MOD_COMM)
+ WQE_HEADER(wqe)->saved_data = saved_data |
+ SAVED_DATA_SET(1, ARM);
+ else
+ WQE_HEADER(wqe)->saved_data = saved_data;
+}
+
+static void cmdq_set_lcmd_wqe(struct spnic_cmdq_wqe *wqe,
+ enum cmdq_cmd_type cmd_type,
+ struct spnic_cmd_buf *buf_in,
+ struct spnic_cmd_buf *buf_out, int wrapped,
+ enum spnic_mod_type mod, u8 cmd, u16 prod_idx)
+{
+ struct spnic_cmdq_wqe_lcmd *wqe_lcmd = &wqe->wqe_lcmd;
+ enum completion_format complete_format = COMPLETE_DIRECT;
+
+ switch (cmd_type) {
+ case SYNC_CMD_DIRECT_RESP:
+ complete_format = COMPLETE_DIRECT;
+ wqe_lcmd->completion.direct_resp = 0;
+ break;
+ case SYNC_CMD_SGE_RESP:
+ if (buf_out) {
+ complete_format = COMPLETE_SGE;
+ cmdq_set_completion(&wqe_lcmd->completion, buf_out);
+ }
+ break;
+ case ASYNC_CMD:
+ complete_format = COMPLETE_DIRECT;
+ wqe_lcmd->completion.direct_resp = 0;
+ wqe_lcmd->buf_desc.saved_async_buf = (u64)(buf_in);
+ break;
+ default:
+ break;
+ }
+
+ cmdq_prepare_wqe_ctrl(wqe, wrapped, mod, cmd, prod_idx, complete_format,
+ DATA_SGE, BUFDESC_LCMD_LEN);
+
+ cmdq_set_lcmd_bufdesc(wqe_lcmd, buf_in);
+}
+
+static int cmdq_sync_cmd_direct_resp(struct spnic_cmdq *cmdq,
+ enum spnic_mod_type mod, u8 cmd,
+ struct spnic_cmd_buf *buf_in,
+ u64 *out_param, u32 timeout)
+{
+ struct spnic_wq *wq = cmdq->wq;
+ struct spnic_cmdq_wqe wqe;
+ struct spnic_cmdq_wqe *curr_wqe = NULL;
+ struct spnic_cmdq_wqe_lcmd *wqe_lcmd = NULL;
+ u16 curr_prod_idx, next_prod_idx, num_wqebbs;
+ int wrapped;
+ u32 timeo, wqe_size;
+ int err;
+
+ wqe_size = cmdq_wqe_size(WQE_LCMD_TYPE);
+ num_wqebbs = WQE_NUM_WQEBBS(wqe_size, wq);
+
+ /* Keep wrapped and doorbell index correct */
+ rte_spinlock_lock(&cmdq->cmdq_lock);
+
+ curr_wqe = spnic_get_wqe(cmdq->wq, num_wqebbs, &curr_prod_idx);
+ if (!curr_wqe) {
+ err = -EBUSY;
+ goto cmdq_unlock;
+ }
+
+ memset(&wqe, 0, sizeof(wqe));
+ wrapped = cmdq->wrapped;
+
+ next_prod_idx = curr_prod_idx + num_wqebbs;
+ if (next_prod_idx >= wq->q_depth) {
+ cmdq->wrapped = !cmdq->wrapped;
+ next_prod_idx -= wq->q_depth;
+ }
+
+ cmdq_set_lcmd_wqe(&wqe, SYNC_CMD_DIRECT_RESP, buf_in, NULL,
+ wrapped, mod, cmd, curr_prod_idx);
+
+ /* Cmdq wqe is not shadowed, so the wqe is written directly to the wq */
+ cmdq_wqe_fill(curr_wqe, &wqe);
+
+ cmdq->cmd_infos[curr_prod_idx].cmd_type = SPNIC_CMD_TYPE_DIRECT_RESP;
+
+ cmdq_set_db(cmdq, SPNIC_CMDQ_SYNC, next_prod_idx);
+
+ timeo = msecs_to_jiffies(timeout ? timeout : CMDQ_CMD_TIMEOUT);
+ err = spnic_cmdq_poll_msg(cmdq, timeo);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Cmdq poll msg ack failed, prod idx: 0x%x",
+ curr_prod_idx);
+ err = -ETIMEDOUT;
+ goto cmdq_unlock;
+ }
+
+ rte_smp_rmb(); /* Read error code after completion */
+
+ if (out_param) {
+ wqe_lcmd = &curr_wqe->wqe_lcmd;
+ *out_param = cpu_to_be64(wqe_lcmd->completion.direct_resp);
+ }
+
+ if (cmdq->errcode[curr_prod_idx])
+ err = cmdq->errcode[curr_prod_idx];
+
+cmdq_unlock:
+ rte_spinlock_unlock(&cmdq->cmdq_lock);
+
+ return err;
+}
+
+static int cmdq_sync_cmd_detail_resp(struct spnic_cmdq *cmdq,
+ enum spnic_mod_type mod, u8 cmd,
+ struct spnic_cmd_buf *buf_in,
+ struct spnic_cmd_buf *buf_out,
+ u32 timeout)
+{
+ struct spnic_wq *wq = cmdq->wq;
+ struct spnic_cmdq_wqe wqe;
+ struct spnic_cmdq_wqe *curr_wqe = NULL;
+ u16 curr_prod_idx, next_prod_idx, num_wqebbs;
+ int wrapped;
+ u32 timeo, wqe_size;
+ int err;
+
+ wqe_size = cmdq_wqe_size(WQE_LCMD_TYPE);
+ num_wqebbs = WQE_NUM_WQEBBS(wqe_size, wq);
+
+ /* Keep wrapped and doorbell index correct */
+ rte_spinlock_lock(&cmdq->cmdq_lock);
+
+ curr_wqe = spnic_get_wqe(cmdq->wq, num_wqebbs, &curr_prod_idx);
+ if (!curr_wqe) {
+ err = -EBUSY;
+ goto cmdq_unlock;
+ }
+
+ memset(&wqe, 0, sizeof(wqe));
+ wrapped = cmdq->wrapped;
+
+ next_prod_idx = curr_prod_idx + num_wqebbs;
+ if (next_prod_idx >= wq->q_depth) {
+ cmdq->wrapped = !cmdq->wrapped;
+ next_prod_idx -= wq->q_depth;
+ }
+
+ cmdq_set_lcmd_wqe(&wqe, SYNC_CMD_SGE_RESP, buf_in, buf_out,
+ wrapped, mod, cmd, curr_prod_idx);
+
+ /* Cmdq wqe is not shadowed, so the wqe is written directly to the wq */
+ cmdq_wqe_fill(curr_wqe, &wqe);
+
+ cmdq->cmd_infos[curr_prod_idx].cmd_type = SPNIC_CMD_TYPE_SGE_RESP;
+
+ cmdq_set_db(cmdq, cmdq->cmdq_type, next_prod_idx);
+
+ timeo = msecs_to_jiffies(timeout ? timeout : CMDQ_CMD_TIMEOUT);
+ err = spnic_cmdq_poll_msg(cmdq, timeo);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Cmdq poll msg ack failed, prod idx: 0x%x",
+ curr_prod_idx);
+ err = -ETIMEDOUT;
+ goto cmdq_unlock;
+ }
+
+ rte_smp_rmb(); /* Read error code after completion */
+
+ if (cmdq->errcode[curr_prod_idx])
+ err = cmdq->errcode[curr_prod_idx];
+
+cmdq_unlock:
+ rte_spinlock_unlock(&cmdq->cmdq_lock);
+
+ return err;
+}
+
+static int cmdq_params_valid(void *hwdev, struct spnic_cmd_buf *buf_in)
+{
+ if (!buf_in || !hwdev) {
+ PMD_DRV_LOG(ERR, "Invalid CMDQ buffer or hwdev is NULL");
+ return -EINVAL;
+ }
+
+ if (buf_in->size == 0 || buf_in->size > SPNIC_CMDQ_MAX_DATA_SIZE) {
+ PMD_DRV_LOG(ERR, "Invalid CMDQ buffer size: 0x%x",
+ buf_in->size);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int wait_cmdqs_enable(struct spnic_cmdqs *cmdqs)
+{
+ unsigned long end;
+
+ end = jiffies + msecs_to_jiffies(WAIT_CMDQ_ENABLE_TIMEOUT);
+ do {
+ if (cmdqs->status & SPNIC_CMDQ_ENABLE)
+ return 0;
+ } while (time_before(jiffies, end));
+
+ return -EBUSY;
+}
+
+int spnic_cmdq_direct_resp(void *hwdev, enum spnic_mod_type mod, u8 cmd,
+ struct spnic_cmd_buf *buf_in, u64 *out_param,
+ u32 timeout)
+{
+ struct spnic_cmdqs *cmdqs = ((struct spnic_hwdev *)hwdev)->cmdqs;
+ int err;
+
+ err = cmdq_params_valid(hwdev, buf_in);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Invalid cmdq parameters");
+ return err;
+ }
+
+ err = wait_cmdqs_enable(cmdqs);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Cmdq is disabled");
+ return err;
+ }
+
+ return cmdq_sync_cmd_direct_resp(&cmdqs->cmdq[SPNIC_CMDQ_SYNC],
+ mod, cmd, buf_in, out_param, timeout);
+}
+
+int spnic_cmdq_detail_resp(void *hwdev, enum spnic_mod_type mod, u8 cmd,
+ struct spnic_cmd_buf *buf_in,
+ struct spnic_cmd_buf *buf_out, u32 timeout)
+{
+ struct spnic_cmdqs *cmdqs = ((struct spnic_hwdev *)hwdev)->cmdqs;
+ int err;
+
+ err = cmdq_params_valid(hwdev, buf_in);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Invalid cmdq parameters");
+ return err;
+ }
+
+ err = wait_cmdqs_enable(cmdqs);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Cmdq is disabled");
+ return err;
+ }
+
+ return cmdq_sync_cmd_detail_resp(&cmdqs->cmdq[SPNIC_CMDQ_SYNC],
+ mod, cmd, buf_in, buf_out, timeout);
+}
+
+static void cmdq_update_errcode(struct spnic_cmdq *cmdq, u16 prod_idx,
+ int errcode)
+{
+ cmdq->errcode[prod_idx] = errcode;
+}
+
+static void clear_wqe_complete_bit(struct spnic_cmdq *cmdq,
+ struct spnic_cmdq_wqe *wqe)
+{
+ struct spnic_ctrl *ctrl = NULL;
+ u32 header_info = WQE_HEADER(wqe)->header_info;
+ int buf_len = CMDQ_WQE_HEADER_GET(header_info, BUFDESC_LEN);
+ int wqe_size = cmdq_get_wqe_size(buf_len);
+ u16 num_wqebbs;
+
+ if (wqe_size == WQE_LCMD_SIZE)
+ ctrl = &wqe->wqe_lcmd.ctrl;
+ else
+ ctrl = &wqe->inline_wqe.wqe_scmd.ctrl;
+
+ /* Clear HW busy bit */
+ ctrl->ctrl_info = 0;
+
+ rte_wmb(); /* Ensure the wqe clear is visible before releasing wqebbs */
+
+ num_wqebbs = WQE_NUM_WQEBBS(wqe_size, cmdq->wq);
+ spnic_put_wqe(cmdq->wq, num_wqebbs);
+}
+
+static void cmdq_init_queue_ctxt(struct spnic_cmdq *cmdq,
+ struct spnic_cmdq_ctxt_info *ctxt_info)
+{
+ struct spnic_wq *wq = cmdq->wq;
+ u64 wq_first_page_paddr, pfn;
+
+ u16 start_ci = (u16)(wq->cons_idx);
+
+ /* The data in the HW is in Big Endian Format */
+ wq_first_page_paddr = wq->queue_buf_paddr;
+
+ pfn = CMDQ_PFN(wq_first_page_paddr, RTE_PGSIZE_4K);
+ ctxt_info->curr_wqe_page_pfn =
+ CMDQ_CTXT_PAGE_INFO_SET(1, HW_BUSY_BIT) |
+ CMDQ_CTXT_PAGE_INFO_SET(0, CEQ_EN) |
+ CMDQ_CTXT_PAGE_INFO_SET(0, CEQ_ARM) |
+ CMDQ_CTXT_PAGE_INFO_SET(SPNIC_CEQ_ID_CMDQ, EQ_ID) |
+ CMDQ_CTXT_PAGE_INFO_SET(pfn, CURR_WQE_PAGE_PFN);
+
+ ctxt_info->wq_block_pfn = CMDQ_CTXT_BLOCK_INFO_SET(start_ci, CI) |
+ CMDQ_CTXT_BLOCK_INFO_SET(pfn, WQ_BLOCK_PFN);
+}
+
static int init_cmdq(struct spnic_cmdq *cmdq, struct spnic_hwdev *hwdev,
struct spnic_wq *wq, enum spnic_cmdq_type q_type)
{
@@ -125,6 +705,14 @@ static int spnic_set_cmdq_ctxts(struct spnic_hwdev *hwdev)
int spnic_reinit_cmdq_ctxts(struct spnic_hwdev *hwdev)
{
+ struct spnic_cmdqs *cmdqs = hwdev->cmdqs;
+ enum spnic_cmdq_type cmdq_type = SPNIC_CMDQ_SYNC;
+
+ for (; cmdq_type < SPNIC_MAX_CMDQ_TYPES; cmdq_type++) {
+ cmdqs->cmdq[cmdq_type].wrapped = 1;
+ spnic_wq_wqe_pg_clear(cmdqs->cmdq[cmdq_type].wq);
+ }
+
return spnic_set_cmdq_ctxts(hwdev);
}
@@ -132,6 +720,7 @@ int spnic_cmdqs_init(struct spnic_hwdev *hwdev)
{
struct spnic_cmdqs *cmdqs = NULL;
enum spnic_cmdq_type type, cmdq_type;
+ size_t saved_wqs_size;
char cmdq_pool_name[RTE_MEMPOOL_NAMESIZE];
int err;
@@ -142,6 +731,14 @@ int spnic_cmdqs_init(struct spnic_hwdev *hwdev)
hwdev->cmdqs = cmdqs;
cmdqs->hwdev = hwdev;
+ saved_wqs_size = SPNIC_MAX_CMDQ_TYPES * sizeof(struct spnic_wq);
+ cmdqs->saved_wqs = rte_zmalloc(NULL, saved_wqs_size, 0);
+ if (!cmdqs->saved_wqs) {
+ PMD_DRV_LOG(ERR, "Allocate saved wqs failed");
+ err = -ENOMEM;
+ goto alloc_wqs_err;
+ }
+
memset(cmdq_pool_name, 0, RTE_MEMPOOL_NAMESIZE);
snprintf(cmdq_pool_name, sizeof(cmdq_pool_name), "spnic_cmdq_%u",
hwdev->port_id);
@@ -155,6 +752,14 @@ int spnic_cmdqs_init(struct spnic_hwdev *hwdev)
goto pool_create_err;
}
+ err = spnic_cmdq_alloc(cmdqs->saved_wqs, hwdev, SPNIC_MAX_CMDQ_TYPES,
+ SPNIC_CMDQ_WQ_BUF_SIZE, CMDQ_WQEBB_SHIFT,
+ SPNIC_CMDQ_DEPTH);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Allocate cmdq failed");
+ goto cmdq_alloc_err;
+ }
+
cmdq_type = SPNIC_CMDQ_SYNC;
for (; cmdq_type < SPNIC_MAX_CMDQ_TYPES; cmdq_type++) {
err = init_cmdq(&cmdqs->cmdq[cmdq_type], hwdev,
@@ -163,6 +768,9 @@ int spnic_cmdqs_init(struct spnic_hwdev *hwdev)
PMD_DRV_LOG(ERR, "Initialize cmdq failed");
goto init_cmdq_err;
}
+
+ cmdq_init_queue_ctxt(&cmdqs->cmdq[cmdq_type],
+ &cmdqs->cmdq[cmdq_type].cmdq_ctxt);
}
err = spnic_set_cmdq_ctxts(hwdev);
@@ -176,9 +784,15 @@ int spnic_cmdqs_init(struct spnic_hwdev *hwdev)
for (; type < cmdq_type; type++)
free_cmdq(hwdev, &cmdqs->cmdq[type]);
+ spnic_cmdq_free(cmdqs->saved_wqs, SPNIC_MAX_CMDQ_TYPES);
+
+cmdq_alloc_err:
rte_mempool_free(cmdqs->cmd_buf_pool);
pool_create_err:
+ rte_free(cmdqs->saved_wqs);
+
+alloc_wqs_err:
rte_free(cmdqs);
return err;
@@ -194,9 +808,68 @@ void spnic_cmdqs_free(struct spnic_hwdev *hwdev)
for (; cmdq_type < SPNIC_MAX_CMDQ_TYPES; cmdq_type++)
free_cmdq(cmdqs->hwdev, &cmdqs->cmdq[cmdq_type]);
+ spnic_cmdq_free(cmdqs->saved_wqs, SPNIC_MAX_CMDQ_TYPES);
+
rte_mempool_free(cmdqs->cmd_buf_pool);
rte_free(cmdqs->saved_wqs);
rte_free(cmdqs);
}
+
+static int spnic_cmdq_poll_msg(struct spnic_cmdq *cmdq, u32 timeout)
+{
+ struct spnic_cmdq_wqe *wqe = NULL;
+ struct spnic_cmdq_wqe_lcmd *wqe_lcmd = NULL;
+ struct spnic_ctrl *ctrl = NULL;
+ struct spnic_cmdq_cmd_info *cmd_info = NULL;
+ u32 status_info, ctrl_info;
+ u16 ci;
+ int errcode;
+ unsigned long end;
+ int done = 0;
+ int err = 0;
+
+ wqe = spnic_read_wqe(cmdq->wq, 1, &ci);
+ if (!wqe) {
+ PMD_DRV_LOG(ERR, "No outstanding cmdq msg");
+ return -EINVAL;
+ }
+
+ cmd_info = &cmdq->cmd_infos[ci];
+ if (cmd_info->cmd_type == SPNIC_CMD_TYPE_NONE) {
+ PMD_DRV_LOG(ERR, "Cmdq msg has not been filled and send to hw, "
+ "or get TMO msg ack. cmdq ci: %u", ci);
+ return -EINVAL;
+ }
+
+ /* Only the arm bit uses an scmd wqe; this wqe is an lcmd */
+ wqe_lcmd = &wqe->wqe_lcmd;
+ ctrl = &wqe_lcmd->ctrl;
+ end = jiffies + msecs_to_jiffies(timeout);
+ do {
+ ctrl_info = ctrl->ctrl_info;
+ if (WQE_COMPLETED(ctrl_info)) {
+ done = 1;
+ break;
+ }
+
+ rte_delay_ms(1);
+ } while (time_before(jiffies, end));
+
+ if (done) {
+ status_info = wqe_lcmd->status.status_info;
+ errcode = WQE_ERRCODE_GET(status_info, VAL);
+ cmdq_update_errcode(cmdq, ci, errcode);
+ clear_wqe_complete_bit(cmdq, wqe);
+ err = 0;
+ } else {
+ PMD_DRV_LOG(ERR, "Poll cmdq msg time out, ci: %u", ci);
+ err = -ETIMEDOUT;
+ }
+
+ /* Set this cmd invalid */
+ cmd_info->cmd_type = SPNIC_CMD_TYPE_NONE;
+
+ return err;
+}
diff --git a/drivers/net/spnic/base/spnic_cmdq.h b/drivers/net/spnic/base/spnic_cmdq.h
index 71753be6e8..ffd540746b 100644
--- a/drivers/net/spnic/base/spnic_cmdq.h
+++ b/drivers/net/spnic/base/spnic_cmdq.h
@@ -93,6 +93,7 @@ struct spnic_scmd_bufdesc {
};
struct spnic_lcmd_bufdesc {
+ struct spnic_sge sge;
u32 rsvd1;
u64 saved_async_buf;
u64 rsvd3;
@@ -112,6 +113,7 @@ struct spnic_ctrl {
};
struct spnic_sge_resp {
+ struct spnic_sge sge;
u32 rsvd;
};
@@ -221,6 +223,24 @@ struct spnic_cmd_buf {
int spnic_reinit_cmdq_ctxts(struct spnic_hwdev *hwdev);
+bool spnic_cmdq_idle(struct spnic_cmdq *cmdq);
+
+struct spnic_cmd_buf *spnic_alloc_cmd_buf(void *hwdev);
+
+void spnic_free_cmd_buf(struct spnic_cmd_buf *cmd_buf);
+
+/*
+ * PF/VF sends a command to the ucode via cmdq; returns 0 on success.
+ * timeout=0 means use the default timeout.
+ */
+int spnic_cmdq_direct_resp(void *hwdev, enum spnic_mod_type mod, u8 cmd,
+ struct spnic_cmd_buf *buf_in, u64 *out_param,
+ u32 timeout);
+
+int spnic_cmdq_detail_resp(void *hwdev, enum spnic_mod_type mod, u8 cmd,
+ struct spnic_cmd_buf *buf_in,
+ struct spnic_cmd_buf *buf_out, u32 timeout);
+
int spnic_cmdqs_init(struct spnic_hwdev *hwdev);
void spnic_cmdqs_free(struct spnic_hwdev *hwdev);
diff --git a/drivers/net/spnic/base/spnic_hw_comm.c b/drivers/net/spnic/base/spnic_hw_comm.c
index 7c58989c14..48730ce7fe 100644
--- a/drivers/net/spnic/base/spnic_hw_comm.c
+++ b/drivers/net/spnic/base/spnic_hw_comm.c
@@ -11,6 +11,7 @@
#include "spnic_csr.h"
#include "spnic_hwdev.h"
#include "spnic_hwif.h"
+#include "spnic_wq.h"
#include "spnic_mgmt.h"
#include "spnic_cmdq.h"
#include "spnic_hw_comm.h"
@@ -28,6 +29,46 @@
#define SPNIC_MSIX_CNT_PENDING_MASK 0x1FU
#define SPNIC_MSIX_CNT_RESEND_TIMER_MASK 0x7U
+#define DEFAULT_RX_BUF_SIZE ((u16)0xB)
+
+enum spnic_rx_buf_size {
+ SPNIC_RX_BUF_SIZE_32B = 0x20,
+ SPNIC_RX_BUF_SIZE_64B = 0x40,
+ SPNIC_RX_BUF_SIZE_96B = 0x60,
+ SPNIC_RX_BUF_SIZE_128B = 0x80,
+ SPNIC_RX_BUF_SIZE_192B = 0xC0,
+ SPNIC_RX_BUF_SIZE_256B = 0x100,
+ SPNIC_RX_BUF_SIZE_384B = 0x180,
+ SPNIC_RX_BUF_SIZE_512B = 0x200,
+ SPNIC_RX_BUF_SIZE_768B = 0x300,
+ SPNIC_RX_BUF_SIZE_1K = 0x400,
+ SPNIC_RX_BUF_SIZE_1_5K = 0x600,
+ SPNIC_RX_BUF_SIZE_2K = 0x800,
+ SPNIC_RX_BUF_SIZE_3K = 0xC00,
+ SPNIC_RX_BUF_SIZE_4K = 0x1000,
+ SPNIC_RX_BUF_SIZE_8K = 0x2000,
+ SPNIC_RX_BUF_SIZE_16K = 0x4000,
+};
+
+const u32 spnic_hw_rx_buf_size[] = {
+ SPNIC_RX_BUF_SIZE_32B,
+ SPNIC_RX_BUF_SIZE_64B,
+ SPNIC_RX_BUF_SIZE_96B,
+ SPNIC_RX_BUF_SIZE_128B,
+ SPNIC_RX_BUF_SIZE_192B,
+ SPNIC_RX_BUF_SIZE_256B,
+ SPNIC_RX_BUF_SIZE_384B,
+ SPNIC_RX_BUF_SIZE_512B,
+ SPNIC_RX_BUF_SIZE_768B,
+ SPNIC_RX_BUF_SIZE_1K,
+ SPNIC_RX_BUF_SIZE_1_5K,
+ SPNIC_RX_BUF_SIZE_2K,
+ SPNIC_RX_BUF_SIZE_3K,
+ SPNIC_RX_BUF_SIZE_4K,
+ SPNIC_RX_BUF_SIZE_8K,
+ SPNIC_RX_BUF_SIZE_16K,
+};
+
int spnic_get_interrupt_cfg(void *dev, struct interrupt_info *info)
{
struct spnic_hwdev *hwdev = dev;
diff --git a/drivers/net/spnic/base/spnic_hwdev.c b/drivers/net/spnic/base/spnic_hwdev.c
index 5671cb860c..00b3b41d97 100644
--- a/drivers/net/spnic/base/spnic_hwdev.c
+++ b/drivers/net/spnic/base/spnic_hwdev.c
@@ -9,6 +9,7 @@
#include "spnic_mgmt.h"
#include "spnic_cmd.h"
#include "spnic_mbox.h"
+#include "spnic_wq.h"
#include "spnic_cmdq.h"
#include "spnic_hwdev.h"
#include "spnic_hw_comm.h"
@@ -322,9 +323,6 @@ static void free_mgmt_channel(struct spnic_hwdev *hwdev)
spnic_aeqs_free(hwdev);
}
-#define SPNIC_DEFAULT_WQ_PAGE_SIZE 0x100000
-#define SPNIC_HW_WQ_PAGE_SIZE 0x1000
-
static int init_cmdqs_channel(struct spnic_hwdev *hwdev)
{
int err;
@@ -394,6 +392,10 @@ static int spnic_init_comm_ch(struct spnic_hwdev *hwdev)
static void spnic_uninit_comm_ch(struct spnic_hwdev *hwdev)
{
spnic_comm_cmdqs_free(hwdev);
+
+ if (SPNIC_FUNC_TYPE(hwdev) != TYPE_VF)
+ spnic_set_wq_page_size(hwdev, spnic_global_func_id(hwdev),
+ SPNIC_HW_WQ_PAGE_SIZE);
free_mgmt_channel(hwdev);
}
diff --git a/drivers/net/spnic/base/spnic_hwdev.h b/drivers/net/spnic/base/spnic_hwdev.h
index 8c581c7480..3b055dd732 100644
--- a/drivers/net/spnic/base/spnic_hwdev.h
+++ b/drivers/net/spnic/base/spnic_hwdev.h
@@ -13,6 +13,19 @@ struct spnic_aeqs;
struct spnic_mbox;
struct spnic_msg_pf_to_mgmt;
+#define MGMT_VERSION_MAX_LEN 32
+
+enum spnic_set_arm_type {
+ SPNIC_SET_ARM_CMDQ,
+ SPNIC_SET_ARM_SQ,
+ SPNIC_SET_ARM_TYPE_NUM
+};
+
+struct spnic_page_addr {
+ void *virt_addr;
+ u64 phys_addr;
+};
+
struct ffm_intr_info {
u8 node_id;
/* Error level of the interrupt source */
diff --git a/drivers/net/spnic/base/spnic_wq.c b/drivers/net/spnic/base/spnic_wq.c
new file mode 100644
index 0000000000..fb3aa6fb3a
--- /dev/null
+++ b/drivers/net/spnic/base/spnic_wq.c
@@ -0,0 +1,139 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Ramaxel Memory Technology, Ltd
+ */
+
+#include <rte_pci.h>
+#include <rte_bus_pci.h>
+#include <ethdev_pci.h>
+#include <rte_mbuf.h>
+#include <rte_malloc.h>
+#include <rte_memcpy.h>
+#include <rte_mempool.h>
+#include <rte_errno.h>
+#include <rte_ether.h>
+
+#include "spnic_compat.h"
+#include "spnic_hwdev.h"
+#include "spnic_wq.h"
+
+static void free_wq_pages(struct spnic_wq *wq)
+{
+ rte_memzone_free(wq->wq_mz);
+
+ wq->queue_buf_paddr = 0;
+ wq->queue_buf_vaddr = 0;
+}
+
+static int alloc_wq_pages(struct spnic_hwdev *hwdev, struct spnic_wq *wq,
+ int qid)
+{
+ const struct rte_memzone *wq_mz;
+
+ wq_mz = rte_eth_dma_zone_reserve(hwdev->eth_dev, "spnic_wq_mz",
+ (uint16_t)qid, wq->wq_buf_size,
+ RTE_PGSIZE_256K, SOCKET_ID_ANY);
+ if (!wq_mz) {
+ PMD_DRV_LOG(ERR, "Allocate wq[%d] rq_mz failed", qid);
+ return -ENOMEM;
+ }
+
+ memset(wq_mz->addr, 0, wq->wq_buf_size);
+ wq->wq_mz = wq_mz;
+ wq->queue_buf_paddr = wq_mz->iova;
+ wq->queue_buf_vaddr = (u64)(u64 *)wq_mz->addr;
+
+ return 0;
+}
+
+void spnic_put_wqe(struct spnic_wq *wq, int num_wqebbs)
+{
+ wq->cons_idx += num_wqebbs;
+ __atomic_add_fetch(&wq->delta, num_wqebbs, __ATOMIC_RELAXED);
+}
+
+void *spnic_read_wqe(struct spnic_wq *wq, int num_wqebbs, u16 *cons_idx)
+{
+ u16 curr_cons_idx;
+
+ if ((__atomic_load_n(&wq->delta, __ATOMIC_RELAXED) + num_wqebbs) > wq->q_depth)
+ return NULL;
+
+ curr_cons_idx = (u16)(wq->cons_idx);
+
+ curr_cons_idx = MASKED_WQE_IDX(wq, curr_cons_idx);
+
+ *cons_idx = curr_cons_idx;
+
+ return WQ_WQE_ADDR(wq, (u32)(*cons_idx));
+}
+
+int spnic_cmdq_alloc(struct spnic_wq *wq, void *dev, int cmdq_blocks,
+ u32 wq_buf_size, u32 wqebb_shift, u16 q_depth)
+{
+ struct spnic_hwdev *hwdev = (struct spnic_hwdev *)dev;
+ int i, j;
+ int err;
+
+ /* Validate q_depth is power of 2 & wqebb_size is not 0 */
+ for (i = 0; i < cmdq_blocks; i++) {
+ wq[i].wqebb_size = 1 << wqebb_shift;
+ wq[i].wqebb_shift = wqebb_shift;
+ wq[i].wq_buf_size = wq_buf_size;
+ wq[i].q_depth = q_depth;
+
+ err = alloc_wq_pages(hwdev, &wq[i], i);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Failed to alloc CMDQ blocks");
+ goto cmdq_block_err;
+ }
+
+ wq[i].cons_idx = 0;
+ wq[i].prod_idx = 0;
+ __atomic_store_n(&(wq[i].delta), q_depth, __ATOMIC_RELAXED);
+
+ wq[i].mask = q_depth - 1;
+ }
+
+ return 0;
+
+cmdq_block_err:
+ for (j = 0; j < i; j++)
+ free_wq_pages(&wq[j]);
+
+ return err;
+}
+
+void spnic_cmdq_free(struct spnic_wq *wq, int cmdq_blocks)
+{
+ int i;
+
+ for (i = 0; i < cmdq_blocks; i++)
+ free_wq_pages(&wq[i]);
+}
+
+void spnic_wq_wqe_pg_clear(struct spnic_wq *wq)
+{
+ wq->cons_idx = 0;
+ wq->prod_idx = 0;
+
+ memset((void *)wq->queue_buf_vaddr, 0, wq->wq_buf_size);
+}
+
+void *spnic_get_wqe(struct spnic_wq *wq, int num_wqebbs, u16 *prod_idx)
+{
+ u16 curr_prod_idx;
+
+ __atomic_fetch_sub(&wq->delta, num_wqebbs, __ATOMIC_RELAXED);
+ curr_prod_idx = wq->prod_idx;
+ wq->prod_idx += num_wqebbs;
+ *prod_idx = MASKED_WQE_IDX(wq, curr_prod_idx);
+
+ return WQ_WQE_ADDR(wq, (u32)(*prod_idx));
+}
+
+void spnic_set_sge(struct spnic_sge *sge, uint64_t addr, u32 len)
+{
+ sge->hi_addr = upper_32_bits(addr);
+ sge->lo_addr = lower_32_bits(addr);
+ sge->len = len;
+}
diff --git a/drivers/net/spnic/base/spnic_wq.h b/drivers/net/spnic/base/spnic_wq.h
index 032d45e79e..776407312c 100644
--- a/drivers/net/spnic/base/spnic_wq.h
+++ b/drivers/net/spnic/base/spnic_wq.h
@@ -9,11 +9,45 @@
#define SPNIC_DEFAULT_WQ_PAGE_SIZE 0x100000
#define SPNIC_HW_WQ_PAGE_SIZE 0x1000
+#define WQS_BLOCKS_PER_PAGE 4
+
+#define WQ_SIZE(wq) ((u32)((u64)(wq)->q_depth * (wq)->wqebb_size))
+
+#define WQE_PAGE_NUM(wq, idx) (((idx) >> ((wq)->wqebbs_per_page_shift)) & \
+ ((wq)->num_q_pages - 1))
+
+#define WQE_PAGE_OFF(wq, idx) ((u64)((wq)->wqebb_size) * \
+ ((idx) & ((wq)->num_wqebbs_per_page - 1)))
+
+#define WQ_PAGE_ADDR_SIZE sizeof(u64)
+#define WQ_PAGE_ADDR_SIZE_SHIFT 3
+#define WQ_PAGE_ADDR(wq, idx) \
+ ((u8 *)(*(u64 *)((u64)((wq)->shadow_block_vaddr) + \
+ (WQE_PAGE_NUM(wq, idx) << WQ_PAGE_ADDR_SIZE_SHIFT))))
+
+#define WQ_BLOCK_SIZE 4096UL
+#define WQS_PAGE_SIZE (WQS_BLOCKS_PER_PAGE * WQ_BLOCK_SIZE)
+#define WQ_MAX_PAGES (WQ_BLOCK_SIZE >> WQ_PAGE_ADDR_SIZE_SHIFT)
+
#define CMDQ_BLOCKS_PER_PAGE 8
#define CMDQ_BLOCK_SIZE 512UL
#define CMDQ_PAGE_SIZE RTE_ALIGN((CMDQ_BLOCKS_PER_PAGE * \
CMDQ_BLOCK_SIZE), PAGE_SIZE)
+#define ADDR_4K_ALIGNED(addr) (0 == ((addr) & 0xfff))
+#define ADDR_256K_ALIGNED(addr) (0 == ((addr) & 0x3ffff))
+
+#define WQ_BASE_VADDR(wqs, wq) \
+ ((u64 *)(((u64)((wqs)->page_vaddr[(wq)->page_idx])) \
+ + (wq)->block_idx * WQ_BLOCK_SIZE))
+
+#define WQ_BASE_PADDR(wqs, wq) (((wqs)->page_paddr[(wq)->page_idx]) \
+ + (u64)(wq)->block_idx * WQ_BLOCK_SIZE)
+
+#define WQ_BASE_ADDR(wqs, wq) \
+ ((u64 *)(((u64)((wqs)->shadow_page_vaddr[(wq)->page_idx])) \
+ + (wq)->block_idx * WQ_BLOCK_SIZE))
+
#define CMDQ_BASE_VADDR(cmdq_pages, wq) \
((u64 *)(((u64)((cmdq_pages)->cmdq_page_vaddr)) \
+ (u64)((wq)->block_idx * CMDQ_BLOCK_SIZE)))
@@ -28,16 +62,33 @@
#define MASKED_WQE_IDX(wq, idx) ((idx) & (wq)->mask)
+#define WQE_SHADOW_PAGE(wq, wqe) \
+ ((u16)(((unsigned long)(wqe) - (unsigned long)(wq)->shadow_wqe) \
+ / (wq)->max_wqe_size))
+
+#define WQE_IN_RANGE(wqe, start, end) \
+ (((unsigned long)(wqe) >= (unsigned long)(start)) && \
+ ((unsigned long)(wqe) < (unsigned long)(end)))
+
+#define WQ_NUM_PAGES(num_wqs) \
+ (RTE_ALIGN((u32)(num_wqs), WQS_BLOCKS_PER_PAGE) / WQS_BLOCKS_PER_PAGE)
+
#define WQ_WQE_ADDR(wq, idx) ((void *)((u64)((wq)->queue_buf_vaddr) + \
((idx) << (wq)->wqebb_shift)))
+struct spnic_sge {
+ u32 hi_addr;
+ u32 lo_addr;
+ u32 len;
+};
+
struct spnic_wq {
/* The addresses are 64 bit in the HW */
u64 queue_buf_vaddr;
u16 q_depth;
u16 mask;
- rte_atomic32_t delta;
+ u32 delta;
u32 cons_idx;
u32 prod_idx;
@@ -54,4 +105,19 @@ struct spnic_wq {
u32 rsvd[5];
};
-#endif /* _SPNIC_WQ_H_ :*/
+void spnic_wq_wqe_pg_clear(struct spnic_wq *wq);
+
+int spnic_cmdq_alloc(struct spnic_wq *wq, void *dev, int cmdq_blocks,
+ u32 wq_buf_size, u32 wqebb_shift, u16 q_depth);
+
+void spnic_cmdq_free(struct spnic_wq *wq, int cmdq_blocks);
+
+void *spnic_get_wqe(struct spnic_wq *wq, int num_wqebbs, u16 *prod_idx);
+
+void spnic_put_wqe(struct spnic_wq *wq, int num_wqebbs);
+
+void *spnic_read_wqe(struct spnic_wq *wq, int num_wqebbs, u16 *cons_idx);
+
+void spnic_set_sge(struct spnic_sge *sge, uint64_t addr, u32 len);
+
+#endif /* _SPNIC_WQ_H_ */
--
2.27.0
* [PATCH v1 08/25] net/spnic: add hardware info initialization
2021-12-18 2:51 [PATCH v1 00/25] Net/SPNIC: support SPNIC into DPDK 22.03 Yanling Song
` (6 preceding siblings ...)
2021-12-18 2:51 ` [PATCH v1 07/25] net/spnic: add interface handling cmdq message Yanling Song
@ 2021-12-18 2:51 ` Yanling Song
2021-12-18 2:51 ` [PATCH v1 09/25] net/spnic: support MAC and link event handling Yanling Song
` (16 subsequent siblings)
24 siblings, 0 replies; 32+ messages in thread
From: Yanling Song @ 2021-12-18 2:51 UTC (permalink / raw)
To: dev; +Cc: songyl, yanling.song, yanggan, ferruh.yigit
This commit adds hardware info initialization, including
device capability initialization, common feature negotiation,
and two interfaces, spnic_get_board_info() and
spnic_get_mgmt_version(), to get the hardware info and
firmware version.
Signed-off-by: Yanling Song <songyl@ramaxel.com>
---
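For reviewers, a probe-time sketch (illustration only, not part of this
patch) of how the new query interfaces could be called once the
capabilities are initialized; spnic_board_info fields are not accessed
here because the structure layout is defined elsewhere in the series.

static int example_query_hw_info(void *hwdev)
{
	char mgmt_ver[SPNIC_MGMT_VERSION_MAX_LEN] = {0};
	struct spnic_board_info board_info;
	int err;

	memset(&board_info, 0, sizeof(board_info));

	err = spnic_get_mgmt_version(hwdev, mgmt_ver, sizeof(mgmt_ver));
	if (err)
		return err;

	err = spnic_get_board_info(hwdev, &board_info);
	if (err)
		return err;

	PMD_DRV_LOG(INFO, "Firmware version: %s", mgmt_ver);

	return 0;
}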
drivers/net/spnic/base/meson.build | 3 +-
drivers/net/spnic/base/spnic_hw_cfg.c | 157 +++++++++++++++++++++++++
drivers/net/spnic/base/spnic_hw_cfg.h | 117 ++++++++++++++++++
drivers/net/spnic/base/spnic_hw_comm.c | 121 +++++++++++++++++++
drivers/net/spnic/base/spnic_hw_comm.h | 22 ++++
drivers/net/spnic/base/spnic_hwdev.c | 70 +++++++++++
drivers/net/spnic/base/spnic_hwdev.h | 5 +
drivers/net/spnic/base/spnic_mbox.c | 8 ++
8 files changed, 502 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/spnic/base/spnic_hw_cfg.c
create mode 100644 drivers/net/spnic/base/spnic_hw_cfg.h
diff --git a/drivers/net/spnic/base/meson.build b/drivers/net/spnic/base/meson.build
index da6d6ee4a2..77a56ca41e 100644
--- a/drivers/net/spnic/base/meson.build
+++ b/drivers/net/spnic/base/meson.build
@@ -10,7 +10,8 @@ sources = [
'spnic_nic_event.c',
'spnic_cmdq.c',
'spnic_hw_comm.c',
- 'spnic_wq.c'
+ 'spnic_wq.c',
+ 'spnic_hw_cfg.c'
]
extra_flags = []
diff --git a/drivers/net/spnic/base/spnic_hw_cfg.c b/drivers/net/spnic/base/spnic_hw_cfg.c
new file mode 100644
index 0000000000..e8856ce9fe
--- /dev/null
+++ b/drivers/net/spnic/base/spnic_hw_cfg.c
@@ -0,0 +1,157 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Ramaxel Memory Technology, Ltd
+ */
+
+#include "spnic_compat.h"
+#include "spnic_mgmt.h"
+#include "spnic_mbox.h"
+#include "spnic_hwdev.h"
+#include "spnic_hwif.h"
+#include "spnic_hw_cfg.h"
+
+static void parse_pub_res_cap(struct service_cap *cap,
+ struct spnic_cfg_cmd_dev_cap *dev_cap,
+ enum func_type type)
+{
+ cap->host_id = dev_cap->host_id;
+ cap->ep_id = dev_cap->ep_id;
+ cap->er_id = dev_cap->er_id;
+ cap->port_id = dev_cap->port_id;
+
+ cap->svc_type = dev_cap->svc_cap_en;
+ cap->chip_svc_type = cap->svc_type;
+
+ cap->cos_valid_bitmap = dev_cap->valid_cos_bitmap;
+ cap->flexq_en = dev_cap->flexq_en;
+
+ cap->host_total_function = dev_cap->host_total_func;
+ cap->max_vf = 0;
+ if (type == TYPE_PF || type == TYPE_PPF) {
+ cap->max_vf = dev_cap->max_vf;
+ cap->pf_num = dev_cap->host_pf_num;
+ cap->pf_id_start = dev_cap->pf_id_start;
+ cap->vf_num = dev_cap->host_vf_num;
+ cap->vf_id_start = dev_cap->vf_id_start;
+ }
+
+ PMD_DRV_LOG(INFO, "Get public resource capability: ");
+ PMD_DRV_LOG(INFO, "host_id: 0x%x, ep_id: 0x%x, er_id: 0x%x, "
+ "port_id: 0x%x",
+ cap->host_id, cap->ep_id, cap->er_id, cap->port_id);
+ PMD_DRV_LOG(INFO, "host_total_function: 0x%x, max_vf: 0x%x",
+ cap->host_total_function, cap->max_vf);
+ PMD_DRV_LOG(INFO, "host_pf_num: 0x%x, pf_id_start: 0x%x, host_vf_num: 0x%x, vf_id_start: 0x%x",
+ cap->pf_num, cap->pf_id_start,
+ cap->vf_num, cap->vf_id_start);
+}
+
+static void parse_l2nic_res_cap(struct service_cap *cap,
+ struct spnic_cfg_cmd_dev_cap *dev_cap)
+{
+ struct nic_service_cap *nic_cap = &cap->nic_cap;
+
+ nic_cap->max_sqs = dev_cap->nic_max_sq_id + 1;
+ nic_cap->max_rqs = dev_cap->nic_max_rq_id + 1;
+
+ PMD_DRV_LOG(INFO, "L2nic resource capbility, max_sqs: 0x%x, "
+ "max_rqs: 0x%x",
+ nic_cap->max_sqs, nic_cap->max_rqs);
+}
+
+static void parse_dev_cap(struct spnic_hwdev *dev,
+ struct spnic_cfg_cmd_dev_cap *dev_cap,
+ enum func_type type)
+{
+ struct service_cap *cap = &dev->cfg_mgmt->svc_cap;
+
+ parse_pub_res_cap(cap, dev_cap, type);
+
+ if (IS_NIC_TYPE(dev))
+ parse_l2nic_res_cap(cap, dev_cap);
+}
+
+static int get_cap_from_fw(struct spnic_hwdev *hwdev, enum func_type type)
+{
+ struct spnic_cfg_cmd_dev_cap dev_cap;
+ u16 out_len = sizeof(dev_cap);
+ int err;
+
+ memset(&dev_cap, 0, sizeof(dev_cap));
+ dev_cap.func_id = spnic_global_func_id(hwdev);
+ err = spnic_msg_to_mgmt_sync(hwdev, SPNIC_MOD_CFGM,
+ SPNIC_CFG_CMD_GET_DEV_CAP,
+ &dev_cap, sizeof(dev_cap),
+ &dev_cap, &out_len, 0);
+ if (err || dev_cap.status || !out_len) {
+ PMD_DRV_LOG(ERR, "Get capability from FW failed, err: %d, "
+ "status: 0x%x, out size: 0x%x",
+ err, dev_cap.status, out_len);
+ return -EFAULT;
+ }
+
+ parse_dev_cap(hwdev, &dev_cap, type);
+ return 0;
+}
+
+static int get_dev_cap(struct spnic_hwdev *hwdev)
+{
+ enum func_type type = SPNIC_FUNC_TYPE(hwdev);
+
+ switch (type) {
+ case TYPE_PF:
+ case TYPE_PPF:
+ case TYPE_VF:
+ if (get_cap_from_fw(hwdev, type) != 0)
+ return -EFAULT;
+ break;
+ default:
+ PMD_DRV_LOG(ERR, "Unsupported PCIe function type: %d", type);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+int cfg_mbx_vf_proc_msg(void *hwdev, __rte_unused void *pri_handle, u16 cmd,
+ __rte_unused void *buf_in, __rte_unused u16 in_size,
+ __rte_unused void *buf_out, __rte_unused u16 *out_size)
+{
+ struct spnic_hwdev *dev = hwdev;
+
+ if (!dev)
+ return -EINVAL;
+
+ PMD_DRV_LOG(WARNING,
+ "Unsupported cfg mbox vf event %d to process", cmd);
+
+ return 0;
+}
+
+int spnic_init_capability(void *dev)
+{
+ struct spnic_hwdev *hwdev = (struct spnic_hwdev *)dev;
+ struct cfg_mgmt_info *cfg_mgmt = NULL;
+ int err;
+
+ cfg_mgmt = rte_zmalloc("cfg_mgmt", sizeof(*cfg_mgmt),
+ SPNIC_MEM_ALLOC_ALIGN_MIN);
+ if (!cfg_mgmt)
+ return -ENOMEM;
+
+ memset(cfg_mgmt, 0, sizeof(struct cfg_mgmt_info));
+ hwdev->cfg_mgmt = cfg_mgmt;
+ cfg_mgmt->hwdev = hwdev;
+
+ err = get_dev_cap(hwdev);
+ if (err) {
+ rte_free(cfg_mgmt);
+ hwdev->cfg_mgmt = NULL;
+ }
+
+ return err;
+}
+
+void spnic_free_capability(void *dev)
+{
+ rte_free(((struct spnic_hwdev *)dev)->cfg_mgmt);
+}
diff --git a/drivers/net/spnic/base/spnic_hw_cfg.h b/drivers/net/spnic/base/spnic_hw_cfg.h
new file mode 100644
index 0000000000..1b1b598726
--- /dev/null
+++ b/drivers/net/spnic/base/spnic_hw_cfg.h
@@ -0,0 +1,117 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Ramaxel Memory Technology, Ltd
+ */
+
+#ifndef _SPNIC_HW_CFG_H_
+#define _SPNIC_HW_CFG_H_
+
+#define CFG_MAX_CMD_TIMEOUT 30000 /* ms */
+
+#define K_UNIT BIT(10)
+#define M_UNIT BIT(20)
+#define G_UNIT BIT(30)
+
+/* Number of PFs and VFs */
+#define HOST_PF_NUM 4
+#define HOST_VF_NUM 0
+#define HOST_OQID_MASK_VAL 2
+
+#define L2NIC_SQ_DEPTH (4 * K_UNIT)
+#define L2NIC_RQ_DEPTH (4 * K_UNIT)
+
+enum intr_type {
+ INTR_TYPE_MSIX,
+ INTR_TYPE_MSI,
+ INTR_TYPE_INT,
+ INTR_TYPE_NONE
+};
+
+/* Service type relates define */
+enum cfg_svc_type_en {
+ CFG_SVC_NIC_BIT0 = (1 << 0)
+};
+
+struct nic_service_cap {
+ u16 max_sqs;
+ u16 max_rqs;
+};
+
+/* Device capability */
+struct service_cap {
+ enum cfg_svc_type_en svc_type; /* User input service type */
+ enum cfg_svc_type_en chip_svc_type; /* HW supported service type */
+
+ u8 host_id;
+ u8 ep_id;
+ u8 er_id; /* PF/VF's ER */
+ u8 port_id; /* PF/VF's physical port */
+
+ u16 host_total_function;
+ u8 pf_num;
+ u8 pf_id_start;
+ u16 vf_num; /* max number of VFs in the current host */
+ u16 vf_id_start;
+
+ u8 flexq_en;
+ u8 cos_valid_bitmap;
+ u16 max_vf; /* max number of VFs the PF supports */
+
+ struct nic_service_cap nic_cap; /* NIC capability */
+};
+
+struct cfg_mgmt_info {
+ void *hwdev;
+ struct service_cap svc_cap;
+};
+
+enum spnic_cfg_cmd {
+ SPNIC_CFG_CMD_GET_DEV_CAP = 0,
+};
+
+struct spnic_cfg_cmd_dev_cap {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u16 func_id;
+ u16 rsvd1;
+
+ /* Public resource */
+ u8 host_id;
+ u8 ep_id;
+ u8 er_id;
+ u8 port_id;
+
+ u16 host_total_func;
+ u8 host_pf_num;
+ u8 pf_id_start;
+ u16 host_vf_num;
+ u16 vf_id_start;
+ u32 rsvd_host;
+
+ u16 svc_cap_en;
+ u16 max_vf;
+ u8 flexq_en;
+ u8 valid_cos_bitmap;
+ /* Reserved for func_valid_cos_bitmap */
+ u16 rsvd_cos;
+
+ u32 rsvd[11];
+
+ /* l2nic */
+ u16 nic_max_sq_id;
+ u16 nic_max_rq_id;
+ u32 rsvd_nic[3];
+
+ u32 rsvd_glb[60];
+};
+
+#define IS_NIC_TYPE(dev) \
+ (((u32)(dev)->cfg_mgmt->svc_cap.chip_svc_type) & CFG_SVC_NIC_BIT0)
+
+int spnic_init_capability(void *dev);
+void spnic_free_capability(void *dev);
+
+int cfg_mbx_vf_proc_msg(void *hwdev, void *pri_handle, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size);
+#endif /* _SPNIC_HW_CFG_H_ */
diff --git a/drivers/net/spnic/base/spnic_hw_comm.c b/drivers/net/spnic/base/spnic_hw_comm.c
index 48730ce7fe..5cb607cf03 100644
--- a/drivers/net/spnic/base/spnic_hw_comm.c
+++ b/drivers/net/spnic/base/spnic_hw_comm.c
@@ -192,6 +192,31 @@ int spnic_set_wq_page_size(void *hwdev, u16 func_idx, u32 page_size)
return 0;
}
+int spnic_func_reset(void *hwdev, u64 reset_flag)
+{
+ struct spnic_reset func_reset;
+ struct spnic_hwif *hwif = ((struct spnic_hwdev *)hwdev)->hwif;
+ u16 out_size = sizeof(func_reset);
+ int err = 0;
+
+ PMD_DRV_LOG(INFO, "Function is reset");
+
+ memset(&func_reset, 0, sizeof(func_reset));
+ func_reset.func_id = SPNIC_HWIF_GLOBAL_IDX(hwif);
+ func_reset.reset_flag = reset_flag;
+ err = spnic_msg_to_mgmt_sync(hwdev, SPNIC_MOD_COMM, MGMT_CMD_FUNC_RESET,
+ &func_reset, sizeof(func_reset),
+ &func_reset, &out_size, 0);
+ if (err || !out_size || func_reset.status) {
+ PMD_DRV_LOG(ERR, "Reset func resources failed, err: %d, "
+ "status: 0x%x, out_size: 0x%x",
+ err, func_reset.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+
int spnic_set_cmdq_depth(void *hwdev, u16 cmdq_depth)
{
struct spnic_cmd_root_ctxt root_ctxt;
@@ -261,3 +286,99 @@ int spnic_set_dma_attr_tbl(struct spnic_hwdev *hwdev, u32 entry_idx, u8 st,
return 0;
}
+
+int spnic_get_mgmt_version(void *hwdev, char *mgmt_ver, int max_mgmt_len)
+{
+ struct spnic_cmd_get_fw_version fw_ver;
+ u16 out_size = sizeof(fw_ver);
+ int err;
+
+ if (!hwdev || !mgmt_ver)
+ return -EINVAL;
+
+ memset(&fw_ver, 0, sizeof(fw_ver));
+ fw_ver.fw_type = SPNIC_FW_VER_TYPE_MPU;
+
+ err = spnic_msg_to_mgmt_sync(hwdev, SPNIC_MOD_COMM,
+ MGMT_CMD_GET_FW_VERSION,
+ &fw_ver, sizeof(fw_ver), &fw_ver,
+ &out_size, 0);
+ if (MSG_TO_MGMT_SYNC_RETURN_ERR(err, out_size, fw_ver.status)) {
+ PMD_DRV_LOG(ERR, "Get mgmt version failed, err: %d, status: 0x%x, out size: 0x%x",
+ err, fw_ver.status, out_size);
+ return -EIO;
+ }
+
+ snprintf(mgmt_ver, max_mgmt_len, "%s", fw_ver.ver);
+
+ return 0;
+}
+
+int spnic_get_board_info(void *hwdev, struct spnic_board_info *info)
+{
+ struct spnic_cmd_board_info board_info;
+ u16 out_size = sizeof(board_info);
+ int err;
+
+ if (!hwdev || !info)
+ return -EINVAL;
+
+ memset(&board_info, 0, sizeof(board_info));
+ err = spnic_msg_to_mgmt_sync(hwdev, SPNIC_MOD_COMM,
+ MGMT_CMD_GET_BOARD_INFO,
+ &board_info, sizeof(board_info),
+ &board_info, &out_size, 0);
+ if (err || board_info.status || !out_size) {
+ PMD_DRV_LOG(ERR, "Get board info failed, err: %d, status: 0x%x, out size: 0x%x",
+ err, board_info.status, out_size);
+ return -EFAULT;
+ }
+
+ memcpy(info, &board_info.info, sizeof(*info));
+
+ return 0;
+}
+
+static int spnic_comm_features_nego(void *hwdev, u8 opcode, u64 *s_feature,
+ u16 size)
+{
+ struct comm_cmd_feature_nego feature_nego;
+ u16 out_size = sizeof(feature_nego);
+ int err;
+
+ if (!hwdev || !s_feature || size > MAX_FEATURE_QWORD)
+ return -EINVAL;
+
+ memset(&feature_nego, 0, sizeof(feature_nego));
+ feature_nego.func_id = spnic_global_func_id(hwdev);
+ feature_nego.opcode = opcode;
+ if (opcode == MGMT_MSG_CMD_OP_SET)
+ memcpy(feature_nego.s_feature, s_feature, (size * sizeof(u64)));
+
+ err = spnic_msg_to_mgmt_sync(hwdev, SPNIC_MOD_COMM,
+ MGMT_CMD_FEATURE_NEGO,
+ &feature_nego, sizeof(feature_nego),
+ &feature_nego, &out_size, 0);
+ if (err || !out_size || feature_nego.head.status) {
+ PMD_DRV_LOG(ERR, "Failed to negotiate feature, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, feature_nego.head.status, out_size);
+ return -EINVAL;
+ }
+
+ if (opcode == MGMT_MSG_CMD_OP_GET)
+ memcpy(s_feature, feature_nego.s_feature, (size * sizeof(u64)));
+
+ return 0;
+}
+
+int spnic_get_comm_features(void *hwdev, u64 *s_feature, u16 size)
+{
+ return spnic_comm_features_nego(hwdev, MGMT_MSG_CMD_OP_GET, s_feature,
+ size);
+}
+
+int spnic_set_comm_features(void *hwdev, u64 *s_feature, u16 size)
+{
+ return spnic_comm_features_nego(hwdev, MGMT_MSG_CMD_OP_SET, s_feature,
+ size);
+}
diff --git a/drivers/net/spnic/base/spnic_hw_comm.h b/drivers/net/spnic/base/spnic_hw_comm.h
index c905f49b7a..207f0aaeae 100644
--- a/drivers/net/spnic/base/spnic_hw_comm.h
+++ b/drivers/net/spnic/base/spnic_hw_comm.h
@@ -96,6 +96,18 @@ enum spnic_fw_ver_type {
SPNIC_FW_VER_TYPE_CFG,
};
+#define MGMT_MSG_CMD_OP_SET 1
+#define MGMT_MSG_CMD_OP_GET 0
+
+struct comm_cmd_feature_nego {
+ struct mgmt_msg_head head;
+
+ u16 func_id;
+ u8 opcode; /* 1: set, 0: get */
+ u8 rsvd;
+ u64 s_feature[MAX_FEATURE_QWORD];
+};
+
struct comm_cmd_dma_attr_config {
struct mgmt_msg_head head;
@@ -162,6 +174,12 @@ struct interrupt_info {
u8 resend_timer_cfg;
};
+int spnic_func_reset(void *hwdev, u64 reset_flag);
+
+int spnic_get_mgmt_version(void *hwdev, char *mgmt_ver, int max_mgmt_len);
+
+int spnic_get_board_info(void *hwdev, struct spnic_board_info *info);
+
int spnic_get_interrupt_cfg(void *dev, struct interrupt_info *info);
int spnic_set_interrupt_cfg(void *dev, struct interrupt_info info);
@@ -170,6 +188,10 @@ int spnic_set_wq_page_size(void *hwdev, u16 func_idx, u32 page_size);
int spnic_set_cmdq_depth(void *hwdev, u16 cmdq_depth);
+int spnic_get_comm_features(void *hwdev, u64 *s_feature, u16 size);
+
+int spnic_set_comm_features(void *hwdev, u64 *s_feature, u16 size);
+
int spnic_set_dma_attr_tbl(struct spnic_hwdev *hwdev, u32 entry_idx, u8 st,
u8 at, u8 ph, u8 no_snooping, u8 tph_en);
diff --git a/drivers/net/spnic/base/spnic_hwdev.c b/drivers/net/spnic/base/spnic_hwdev.c
index 00b3b41d97..bd5b1d57cb 100644
--- a/drivers/net/spnic/base/spnic_hwdev.c
+++ b/drivers/net/spnic/base/spnic_hwdev.c
@@ -11,6 +11,7 @@
#include "spnic_mbox.h"
#include "spnic_wq.h"
#include "spnic_cmdq.h"
+#include "spnic_hw_cfg.h"
#include "spnic_hwdev.h"
#include "spnic_hw_comm.h"
@@ -283,6 +284,35 @@ static void spnic_comm_cmdqs_free(struct spnic_hwdev *hwdev)
spnic_cmdqs_free(hwdev);
}
+static void spnic_sync_mgmt_func_state(struct spnic_hwdev *hwdev)
+{
+ spnic_set_pf_status(hwdev->hwif, SPNIC_PF_STATUS_ACTIVE_FLAG);
+}
+
+static int __get_func_misc_info(struct spnic_hwdev *hwdev)
+{
+ int err;
+
+ err = spnic_get_board_info(hwdev, &hwdev->board_info);
+ if (err) {
+ /* For the PF/VF of slave host, return error */
+ if (spnic_pcie_itf_id(hwdev))
+ return err;
+
+ memset(&hwdev->board_info, 0xff,
+ sizeof(struct spnic_board_info));
+ }
+
+ err = spnic_get_mgmt_version(hwdev, hwdev->mgmt_ver,
+ SPNIC_MGMT_VERSION_MAX_LEN);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Get mgmt cpu version failed");
+ return err;
+ }
+
+ return 0;
+}
+
static int init_mgmt_channel(struct spnic_hwdev *hwdev)
{
int err;
@@ -375,15 +405,31 @@ static int spnic_init_comm_ch(struct spnic_hwdev *hwdev)
return err;
}
+ err = __get_func_misc_info(hwdev);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Get function msic information failed");
+ goto get_func_info_err;
+ }
+
+ err = spnic_func_reset(hwdev, SPNIC_NIC_RES | SPNIC_COMM_RES);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Reset function failed");
+ goto func_reset_err;
+ }
+
err = init_cmdqs_channel(hwdev);
if (err) {
PMD_DRV_LOG(ERR, "Init cmdq channel failed");
goto init_cmdqs_channel_err;
}
+ spnic_sync_mgmt_func_state(hwdev);
+
return 0;
init_cmdqs_channel_err:
+func_reset_err:
+get_func_info_err:
free_mgmt_channel(hwdev);
return err;
@@ -391,11 +437,14 @@ static int spnic_init_comm_ch(struct spnic_hwdev *hwdev)
static void spnic_uninit_comm_ch(struct spnic_hwdev *hwdev)
{
+ spnic_set_pf_status(hwdev->hwif, SPNIC_PF_STATUS_INIT);
+
spnic_comm_cmdqs_free(hwdev);
if (SPNIC_FUNC_TYPE(hwdev) != TYPE_VF)
spnic_set_wq_page_size(hwdev, spnic_global_func_id(hwdev),
SPNIC_HW_WQ_PAGE_SIZE);
+
free_mgmt_channel(hwdev);
}
@@ -423,8 +472,27 @@ int spnic_init_hwdev(struct spnic_hwdev *hwdev)
goto init_comm_ch_err;
}
+ err = spnic_init_capability(hwdev);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Init capability failed");
+ goto init_cap_err;
+ }
+
+ err = spnic_set_comm_features(hwdev, hwdev->features,
+ MAX_FEATURE_QWORD);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Failed to set comm features\n");
+ goto set_feature_err;
+ }
+
return 0;
+set_feature_err:
+ spnic_free_capability(hwdev);
+
+init_cap_err:
+ spnic_uninit_comm_ch(hwdev);
+
init_comm_ch_err:
spnic_free_hwif(hwdev);
@@ -436,6 +504,8 @@ int spnic_init_hwdev(struct spnic_hwdev *hwdev)
void spnic_free_hwdev(struct spnic_hwdev *hwdev)
{
+ spnic_free_capability(hwdev);
+
spnic_uninit_comm_ch(hwdev);
spnic_free_hwif(hwdev);
diff --git a/drivers/net/spnic/base/spnic_hwdev.h b/drivers/net/spnic/base/spnic_hwdev.h
index 3b055dd732..d9dce12764 100644
--- a/drivers/net/spnic/base/spnic_hwdev.h
+++ b/drivers/net/spnic/base/spnic_hwdev.h
@@ -100,6 +100,7 @@ struct spnic_board_info {
u32 cfg_addr;
};
+#define MAX_FEATURE_QWORD 4
struct spnic_hwdev {
void *dev_handle; /* Pointer to spnic_nic_dev */
void *pci_dev; /* Pointer to rte_pci_device */
@@ -117,7 +118,11 @@ struct spnic_hwdev {
struct spnic_msg_pf_to_mgmt *pf_to_mgmt;
u8 *chip_fault_stats;
+
struct spnic_hw_stats hw_stats;
+ struct spnic_board_info board_info;
+ char mgmt_ver[MGMT_VERSION_MAX_LEN];
+ u64 features[MAX_FEATURE_QWORD];
u16 max_vfs;
u16 link_status;
diff --git a/drivers/net/spnic/base/spnic_mbox.c b/drivers/net/spnic/base/spnic_mbox.c
index 5c39e307be..cdb7d31e8c 100644
--- a/drivers/net/spnic/base/spnic_mbox.c
+++ b/drivers/net/spnic/base/spnic_mbox.c
@@ -10,6 +10,7 @@
#include "spnic_mgmt.h"
#include "spnic_hwif.h"
#include "spnic_eqs.h"
+#include "spnic_hw_cfg.h"
#include "spnic_mbox.h"
#include "spnic_nic_event.h"
@@ -150,6 +151,13 @@ static int recv_vf_mbox_handler(struct spnic_mbox *func_to_func,
recv_mbox->mbox_len,
buf_out, out_size);
break;
+ case SPNIC_MOD_CFGM:
+ err = cfg_mbx_vf_proc_msg(func_to_func->hwdev,
+ func_to_func->hwdev->cfg_mgmt,
+ recv_mbox->cmd, recv_mbox->mbox,
+ recv_mbox->mbox_len,
+ buf_out, out_size);
+ break;
case SPNIC_MOD_L2NIC:
err = spnic_vf_event_handler(func_to_func->hwdev,
func_to_func->hwdev->cfg_mgmt,
--
2.27.0
^ permalink raw reply [flat|nested] 32+ messages in thread
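A minimal usage sketch (not part of the patch above): the comm-feature helpers added to spnic_hw_comm.c follow a get/modify/set negotiation shape. The sketch below assumes the driver's internal types (u64, MAX_FEATURE_QWORD, struct spnic_hwdev) and uses a purely hypothetical feature bit name; only the two helper signatures come from the patch.

/* Illustrative only: combining spnic_get_comm_features() and
 * spnic_set_comm_features() into a full get/modify/set negotiation.
 * SPNIC_COMM_F_EXAMPLE is a hypothetical bit name; the real bit
 * layout is defined by the firmware.
 */
#define SPNIC_COMM_F_EXAMPLE	BIT(0)	/* hypothetical feature bit */

static int spnic_negotiate_comm_features(struct spnic_hwdev *hwdev)
{
	u64 feature[MAX_FEATURE_QWORD] = {0};
	int err;

	/* Read the feature qwords the management CPU supports */
	err = spnic_get_comm_features(hwdev, feature, MAX_FEATURE_QWORD);
	if (err)
		return err;

	/* Keep only the bits this PMD implements */
	feature[0] &= SPNIC_COMM_F_EXAMPLE;

	/* Commit the negotiated set and cache it in the hwdev */
	err = spnic_set_comm_features(hwdev, feature, MAX_FEATURE_QWORD);
	if (err)
		return err;

	memcpy(hwdev->features, feature, sizeof(hwdev->features));

	return 0;
}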
* [PATCH v1 09/25] net/spnic: support MAC and link event handling
2021-12-18 2:51 [PATCH v1 00/25] Net/SPNIC: support SPNIC into DPDK 22.03 Yanling Song
` (7 preceding siblings ...)
2021-12-18 2:51 ` [PATCH v1 08/25] net/spnic: add hardware info initialization Yanling Song
@ 2021-12-18 2:51 ` Yanling Song
2021-12-18 2:51 ` [PATCH v1 10/25] net/spnic: add function info initialization Yanling Song
` (15 subsequent siblings)
24 siblings, 0 replies; 32+ messages in thread
From: Yanling Song @ 2021-12-18 2:51 UTC (permalink / raw)
To: dev; +Cc: songyl, yanling.song, yanggan, ferruh.yigit
This commit adds interfaces to add/remove MAC addresses
and registers related ops to struct eth_dev_ops. Furthermore,
this commit adds callback to handle link events.
Signed-off-by: Yanling Song <songyl@ramaxel.com>
---
drivers/net/spnic/base/meson.build | 3 +-
drivers/net/spnic/base/spnic_hw_cfg.c | 12 +
drivers/net/spnic/base/spnic_hw_cfg.h | 2 +
drivers/net/spnic/base/spnic_nic_cfg.c | 291 +++++++++++++++++++
drivers/net/spnic/base/spnic_nic_cfg.h | 180 ++++++++++++
drivers/net/spnic/base/spnic_nic_event.c | 27 +-
drivers/net/spnic/base/spnic_nic_event.h | 9 +-
drivers/net/spnic/spnic_ethdev.c | 348 ++++++++++++++++++++++-
8 files changed, 861 insertions(+), 11 deletions(-)
create mode 100644 drivers/net/spnic/base/spnic_nic_cfg.c
create mode 100644 drivers/net/spnic/base/spnic_nic_cfg.h
diff --git a/drivers/net/spnic/base/meson.build b/drivers/net/spnic/base/meson.build
index 77a56ca41e..f4bb4469ae 100644
--- a/drivers/net/spnic/base/meson.build
+++ b/drivers/net/spnic/base/meson.build
@@ -11,7 +11,8 @@ sources = [
'spnic_cmdq.c',
'spnic_hw_comm.c',
'spnic_wq.c',
- 'spnic_hw_cfg.c'
+ 'spnic_hw_cfg.c',
+ 'spnic_nic_cfg.c'
]
extra_flags = []
diff --git a/drivers/net/spnic/base/spnic_hw_cfg.c b/drivers/net/spnic/base/spnic_hw_cfg.c
index e8856ce9fe..6505f48273 100644
--- a/drivers/net/spnic/base/spnic_hw_cfg.c
+++ b/drivers/net/spnic/base/spnic_hw_cfg.c
@@ -155,3 +155,15 @@ void spnic_free_capability(void *dev)
{
rte_free(((struct spnic_hwdev *)dev)->cfg_mgmt);
}
+
+u8 spnic_physical_port_id(void *hwdev)
+{
+ struct spnic_hwdev *dev = hwdev;
+
+ if (!dev) {
+ PMD_DRV_LOG(INFO, "Hwdev is NULL for getting physical port id");
+ return 0;
+ }
+
+ return dev->cfg_mgmt->svc_cap.port_id;
+}
diff --git a/drivers/net/spnic/base/spnic_hw_cfg.h b/drivers/net/spnic/base/spnic_hw_cfg.h
index 1b1b598726..9ab51f2875 100644
--- a/drivers/net/spnic/base/spnic_hw_cfg.h
+++ b/drivers/net/spnic/base/spnic_hw_cfg.h
@@ -112,6 +112,8 @@ struct spnic_cfg_cmd_dev_cap {
int spnic_init_capability(void *dev);
void spnic_free_capability(void *dev);
+u8 spnic_physical_port_id(void *hwdev);
+
int cfg_mbx_vf_proc_msg(void *hwdev, void *pri_handle, u16 cmd, void *buf_in,
u16 in_size, void *buf_out, u16 *out_size);
#endif /* _SPNIC_HW_CFG_H_ */
diff --git a/drivers/net/spnic/base/spnic_nic_cfg.c b/drivers/net/spnic/base/spnic_nic_cfg.c
new file mode 100644
index 0000000000..c47bc330a3
--- /dev/null
+++ b/drivers/net/spnic/base/spnic_nic_cfg.c
@@ -0,0 +1,291 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Ramaxel Memory Technology, Ltd
+ */
+
+#include <rte_ether.h>
+#include "spnic_compat.h"
+#include "spnic_cmd.h"
+#include "spnic_mgmt.h"
+#include "spnic_hwif.h"
+#include "spnic_mbox.h"
+#include "spnic_hwdev.h"
+#include "spnic_wq.h"
+#include "spnic_cmdq.h"
+#include "spnic_nic_cfg.h"
+#include "spnic_hw_cfg.h"
+
+struct vf_msg_handler {
+ u16 cmd;
+};
+
+const struct vf_msg_handler vf_cmd_handler[] = {
+ {
+ .cmd = SPNIC_CMD_VF_REGISTER,
+ },
+
+ {
+ .cmd = SPNIC_CMD_GET_MAC,
+ },
+
+ {
+ .cmd = SPNIC_CMD_SET_MAC,
+ },
+
+ {
+ .cmd = SPNIC_CMD_DEL_MAC,
+ },
+
+ {
+ .cmd = SPNIC_CMD_UPDATE_MAC,
+ },
+
+ {
+ .cmd = SPNIC_CMD_VF_COS,
+ },
+};
+
+static const struct vf_msg_handler vf_mag_cmd_handler[] = {
+ {
+ .cmd = MAG_CMD_GET_LINK_STATUS,
+ },
+};
+
+static int mag_msg_to_mgmt_sync(void *hwdev, u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size);
+
+int l2nic_msg_to_mgmt_sync(void *hwdev, u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ u32 i, cmd_cnt = ARRAY_LEN(vf_cmd_handler);
+ bool cmd_to_pf = false;
+
+ if (spnic_func_type(hwdev) == TYPE_VF) {
+ for (i = 0; i < cmd_cnt; i++) {
+ if (cmd == vf_cmd_handler[i].cmd)
+ cmd_to_pf = true;
+ }
+ }
+
+ if (cmd_to_pf) {
+ return spnic_mbox_to_pf(hwdev, SPNIC_MOD_L2NIC, cmd, buf_in,
+ in_size, buf_out, out_size, 0);
+ }
+
+ return spnic_msg_to_mgmt_sync(hwdev, SPNIC_MOD_L2NIC, cmd, buf_in,
+ in_size, buf_out, out_size, 0);
+}
+
+static int spnic_check_mac_info(u8 status, u16 vlan_id)
+{
+ if ((status && status != SPNIC_MGMT_STATUS_EXIST &&
+ status != SPNIC_PF_SET_VF_ALREADY) ||
+ (vlan_id & CHECK_IPSU_15BIT &&
+ status == SPNIC_MGMT_STATUS_EXIST))
+ return -EINVAL;
+
+ return 0;
+}
+
+#define VLAN_N_VID 4096
+
+int spnic_set_mac(void *hwdev, const u8 *mac_addr, u16 vlan_id, u16 func_id)
+{
+ struct spnic_port_mac_set mac_info;
+ u16 out_size = sizeof(mac_info);
+ int err;
+
+ if (!hwdev || !mac_addr)
+ return -EINVAL;
+
+ memset(&mac_info, 0, sizeof(mac_info));
+
+ if (vlan_id >= VLAN_N_VID) {
+ PMD_DRV_LOG(ERR, "Invalid VLAN number: %d", vlan_id);
+ return -EINVAL;
+ }
+
+ mac_info.func_id = func_id;
+ mac_info.vlan_id = vlan_id;
+ memmove(mac_info.mac, mac_addr, ETH_ALEN);
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, SPNIC_CMD_SET_MAC, &mac_info,
+ sizeof(mac_info), &mac_info, &out_size);
+ if (err || !out_size ||
+ spnic_check_mac_info(mac_info.msg_head.status, mac_info.vlan_id)) {
+ PMD_DRV_LOG(ERR, "Update MAC failed, err: %d, status: 0x%x, out size: 0x%x",
+ err, mac_info.msg_head.status, out_size);
+ return -EINVAL;
+ }
+
+ if (mac_info.msg_head.status == SPNIC_PF_SET_VF_ALREADY) {
+ PMD_DRV_LOG(WARNING, "PF has already set VF mac, Ignore set operation");
+ return SPNIC_PF_SET_VF_ALREADY;
+ }
+
+ if (mac_info.msg_head.status == SPNIC_MGMT_STATUS_EXIST) {
+ PMD_DRV_LOG(WARNING, "MAC is repeated. Ignore update operation");
+ return 0;
+ }
+
+ return 0;
+}
+
+int spnic_del_mac(void *hwdev, const u8 *mac_addr, u16 vlan_id, u16 func_id)
+{
+ struct spnic_port_mac_set mac_info;
+ u16 out_size = sizeof(mac_info);
+ int err;
+
+ if (!hwdev || !mac_addr)
+ return -EINVAL;
+
+ if (vlan_id >= VLAN_N_VID) {
+ PMD_DRV_LOG(ERR, "Invalid VLAN number: %d", vlan_id);
+ return -EINVAL;
+ }
+
+ memset(&mac_info, 0, sizeof(mac_info));
+ mac_info.func_id = func_id;
+ mac_info.vlan_id = vlan_id;
+ memmove(mac_info.mac, mac_addr, ETH_ALEN);
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, SPNIC_CMD_DEL_MAC, &mac_info,
+ sizeof(mac_info), &mac_info, &out_size);
+ if (err || !out_size || (mac_info.msg_head.status &&
+ mac_info.msg_head.status != SPNIC_PF_SET_VF_ALREADY)) {
+ PMD_DRV_LOG(ERR, "Delete MAC failed, err: %d, status: 0x%x, out size: 0x%x",
+ err, mac_info.msg_head.status, out_size);
+ return -EINVAL;
+ }
+
+ if (mac_info.msg_head.status == SPNIC_PF_SET_VF_ALREADY) {
+ PMD_DRV_LOG(WARNING, "PF has already set VF mac, Ignore delete operation");
+ return SPNIC_PF_SET_VF_ALREADY;
+ }
+
+ return 0;
+}
+
+int spnic_update_mac(void *hwdev, u8 *old_mac, u8 *new_mac, u16 vlan_id,
+ u16 func_id)
+{
+ struct spnic_port_mac_update mac_info;
+ u16 out_size = sizeof(mac_info);
+ int err;
+
+ if (!hwdev || !old_mac || !new_mac)
+ return -EINVAL;
+
+ if (vlan_id >= VLAN_N_VID) {
+ PMD_DRV_LOG(ERR, "Invalid VLAN number: %d", vlan_id);
+ return -EINVAL;
+ }
+
+ memset(&mac_info, 0, sizeof(mac_info));
+ mac_info.func_id = func_id;
+ mac_info.vlan_id = vlan_id;
+ memcpy(mac_info.old_mac, old_mac, ETH_ALEN);
+ memcpy(mac_info.new_mac, new_mac, ETH_ALEN);
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, SPNIC_CMD_UPDATE_MAC, &mac_info,
+ sizeof(mac_info), &mac_info, &out_size);
+ if (err || !out_size ||
+ spnic_check_mac_info(mac_info.msg_head.status, mac_info.vlan_id)) {
+ PMD_DRV_LOG(ERR, "Update MAC failed, err: %d, status: 0x%x, out size: 0x%x",
+ err, mac_info.msg_head.status, out_size);
+ return -EINVAL;
+ }
+
+ if (mac_info.msg_head.status == SPNIC_PF_SET_VF_ALREADY) {
+ PMD_DRV_LOG(WARNING, "PF has already set VF MAC. Ignore update operation");
+ return SPNIC_PF_SET_VF_ALREADY;
+ }
+
+ if (mac_info.msg_head.status == SPNIC_MGMT_STATUS_EXIST) {
+ PMD_DRV_LOG(INFO, "MAC is repeated. Ignore update operation");
+ return 0;
+ }
+
+ return 0;
+}
+
+int spnic_get_default_mac(void *hwdev, u8 *mac_addr, int ether_len)
+{
+ struct spnic_port_mac_set mac_info;
+ u16 out_size = sizeof(mac_info);
+ int err;
+
+ if (!hwdev || !mac_addr)
+ return -EINVAL;
+
+ memset(&mac_info, 0, sizeof(mac_info));
+ mac_info.func_id = spnic_global_func_id(hwdev);
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, SPNIC_CMD_GET_MAC,
+ &mac_info, sizeof(mac_info),
+ &mac_info, &out_size);
+ if (err || !out_size || mac_info.msg_head.status) {
+ PMD_DRV_LOG(ERR, "Get MAC failed, err: %d, status: 0x%x, out size: 0x%x",
+ err, mac_info.msg_head.status, out_size);
+ return -EINVAL;
+ }
+
+ memmove(mac_addr, mac_info.mac, ether_len);
+
+ return 0;
+}
+
+int spnic_get_port_info(void *hwdev, struct nic_port_info *port_info)
+{
+ struct spnic_cmd_port_info port_msg;
+ u16 out_size = sizeof(port_msg);
+ int err;
+
+ if (!hwdev || !port_info)
+ return -EINVAL;
+
+ memset(&port_msg, 0, sizeof(port_msg));
+ port_msg.port_id = spnic_physical_port_id(hwdev);
+
+ err = mag_msg_to_mgmt_sync(hwdev, MAG_CMD_GET_PORT_INFO, &port_msg,
+ sizeof(port_msg), &port_msg, &out_size);
+ if (err || !out_size || port_msg.msg_head.status) {
+ PMD_DRV_LOG(ERR, "Get port info failed, err: %d, status: 0x%x, out size: 0x%x",
+ err, port_msg.msg_head.status, out_size);
+ return -EINVAL;
+ }
+
+ port_info->autoneg_cap = port_msg.autoneg_cap;
+ port_info->autoneg_state = port_msg.autoneg_state;
+ port_info->duplex = port_msg.duplex;
+ port_info->port_type = port_msg.port_type;
+ port_info->speed = port_msg.speed;
+ port_info->fec = port_msg.fec;
+
+ return 0;
+}
+
+static int _mag_msg_to_mgmt_sync(void *hwdev, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size)
+{
+ u32 i, cmd_cnt = ARRAY_LEN(vf_mag_cmd_handler);
+
+ if (spnic_func_type(hwdev) == TYPE_VF) {
+ for (i = 0; i < cmd_cnt; i++) {
+ if (cmd == vf_mag_cmd_handler[i].cmd)
+ return spnic_mbox_to_pf(hwdev, SPNIC_MOD_HILINK,
+ cmd, buf_in, in_size,
+ buf_out, out_size, 0);
+ }
+ }
+
+ return spnic_msg_to_mgmt_sync(hwdev, SPNIC_MOD_HILINK, cmd, buf_in,
+ in_size, buf_out, out_size, 0);
+}
+
+static int mag_msg_to_mgmt_sync(void *hwdev, u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ return _mag_msg_to_mgmt_sync(hwdev, cmd, buf_in, in_size, buf_out,
+ out_size);
+}
diff --git a/drivers/net/spnic/base/spnic_nic_cfg.h b/drivers/net/spnic/base/spnic_nic_cfg.h
new file mode 100644
index 0000000000..669e982876
--- /dev/null
+++ b/drivers/net/spnic/base/spnic_nic_cfg.h
@@ -0,0 +1,180 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Ramaxel Memory Technology, Ltd
+ */
+
+#ifndef _SPNIC_NIC_CFG_H_
+#define _SPNIC_NIC_CFG_H_
+
+#ifndef ETH_ALEN
+#define ETH_ALEN 6
+#endif
+
+#define OS_VF_ID_TO_HW(os_vf_id) ((os_vf_id) + 1)
+#define HW_VF_ID_TO_OS(hw_vf_id) ((hw_vf_id) - 1)
+
+#define SPNIC_PF_SET_VF_ALREADY 0x4
+#define SPNIC_MGMT_STATUS_EXIST 0x6
+#define CHECK_IPSU_15BIT 0x8000
+
+/* Structures for port info */
+struct nic_port_info {
+ u8 port_type;
+ u8 autoneg_cap;
+ u8 autoneg_state;
+ u8 duplex;
+ u8 speed;
+ u8 fec;
+};
+
+enum spnic_link_status {
+ SPNIC_LINK_DOWN = 0,
+ SPNIC_LINK_UP
+};
+
+enum nic_media_type {
+ MEDIA_UNKNOWN = -1,
+ MEDIA_FIBRE = 0,
+ MEDIA_COPPER,
+ MEDIA_BACKPLANE
+};
+
+enum nic_speed_level {
+ LINK_SPEED_10MB = 0,
+ LINK_SPEED_100MB,
+ LINK_SPEED_1GB,
+ LINK_SPEED_10GB,
+ LINK_SPEED_25GB,
+ LINK_SPEED_40GB,
+ LINK_SPEED_100GB,
+ LINK_SPEED_LEVELS,
+};
+
+struct spnic_port_mac_set {
+ struct mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 vlan_id;
+ u16 rsvd1;
+ u8 mac[ETH_ALEN];
+};
+
+struct spnic_port_mac_update {
+ struct mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 vlan_id;
+ u16 rsvd1;
+ u8 old_mac[ETH_ALEN];
+ u16 rsvd2;
+ u8 new_mac[ETH_ALEN];
+};
+
+struct spnic_cmd_port_info {
+ struct mgmt_msg_head msg_head;
+
+ u8 port_id;
+ u8 rsvd1[3];
+ u8 port_type;
+ u8 autoneg_cap;
+ u8 autoneg_state;
+ u8 duplex;
+ u8 speed;
+ u8 fec;
+ u16 rsvd2;
+ u32 rsvd3[4];
+};
+
+struct spnic_cmd_link_state {
+ struct mgmt_msg_head msg_head;
+
+ u8 port_id;
+ u8 state;
+ u16 rsvd1;
+};
+
+int l2nic_msg_to_mgmt_sync(void *hwdev, u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size);
+
+/**
+ * Update MAC address to hardware
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev
+ * @param[in] old_mac
+ * Old MAC addr to delete
+ * @param[in] new_mac
+ * New MAC addr to update
+ * @param[in] vlan_id
+ * Vlan id
+ * @param func_id
+ * Function index
+ *
+ * @retval zero : Success
+ * @retval non-zero : Failure
+ */
+int spnic_update_mac(void *hwdev, u8 *old_mac, u8 *new_mac, u16 vlan_id,
+ u16 func_id);
+
+/**
+ * Get the default mac address
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev
+ * @param[in] mac_addr
+ * Mac address from hardware
+ * @param[in] ether_len
+ * The length of mac address
+ *
+ * @retval zero : Success
+ * @retval non-zero : Failure
+ */
+int spnic_get_default_mac(void *hwdev, u8 *mac_addr, int ether_len);
+
+/**
+ * Set mac address
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev
+ * @param[in] mac_addr
+ * Mac address from hardware
+ * @param[in] vlan_id
+ * Vlan id
+ * @param[in] func_id
+ * Function index
+ *
+ * @retval zero : Success
+ * @retval non-zero : Failure
+ */
+int spnic_set_mac(void *hwdev, const u8 *mac_addr, u16 vlan_id, u16 func_id);
+
+/**
+ * Delete MAC address
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev
+ * @param[in] mac_addr
+ * MAC address from hardware
+ * @param[in] vlan_id
+ * Vlan id
+ * @param[in] func_id
+ * Function index
+ *
+ * @retval zero : Success
+ * @retval non-zero : Failure
+ */
+int spnic_del_mac(void *hwdev, const u8 *mac_addr, u16 vlan_id, u16 func_id);
+
+/**
+ * Get port info
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev
+ * @param[out] port_info
+ * Port info, including autoneg, port type, duplex, speed and fec mode
+ *
+ * @retval zero : Success
+ * @retval non-zero : Failure
+ */
+int spnic_get_port_info(void *hwdev, struct nic_port_info *port_info);
+
+#endif /* _SPNIC_NIC_CFG_H_ */
diff --git a/drivers/net/spnic/base/spnic_nic_event.c b/drivers/net/spnic/base/spnic_nic_event.c
index 07ea036d84..1c3621171a 100644
--- a/drivers/net/spnic/base/spnic_nic_event.c
+++ b/drivers/net/spnic/base/spnic_nic_event.c
@@ -9,16 +9,39 @@
#include "spnic_hwif.h"
#include "spnic_hwdev.h"
#include "spnic_mgmt.h"
+#include "spnic_nic_cfg.h"
#include "spnic_hwdev.h"
#include "spnic_nic_event.h"
-static void get_port_info(u8 link_state, struct rte_eth_link *link)
+void get_port_info(struct spnic_hwdev *hwdev, u8 link_state,
+ struct rte_eth_link *link)
{
+ uint32_t port_speed[LINK_SPEED_LEVELS] = {ETH_SPEED_NUM_10M,
+ ETH_SPEED_NUM_100M, ETH_SPEED_NUM_1G,
+ ETH_SPEED_NUM_10G, ETH_SPEED_NUM_25G,
+ ETH_SPEED_NUM_40G, ETH_SPEED_NUM_100G};
+ struct nic_port_info port_info = {0};
+ int err;
+
if (!link_state) {
link->link_status = ETH_LINK_DOWN;
link->link_speed = ETH_SPEED_NUM_NONE;
link->link_duplex = ETH_LINK_HALF_DUPLEX;
link->link_autoneg = ETH_LINK_FIXED;
+ } else {
+ link->link_status = ETH_LINK_UP;
+
+ err = spnic_get_port_info(hwdev, &port_info);
+ if (err) {
+ link->link_speed = ETH_SPEED_NUM_NONE;
+ link->link_duplex = ETH_LINK_FULL_DUPLEX;
+ link->link_autoneg = ETH_LINK_FIXED;
+ } else {
+ link->link_speed = port_speed[port_info.speed %
+ LINK_SPEED_LEVELS];
+ link->link_duplex = port_info.duplex;
+ link->link_autoneg = port_info.autoneg_state;
+ }
}
}
@@ -51,7 +74,7 @@ static void link_status_event_handler(void *hwdev, void *buf_in,
spnic_link_event_stats(hwdev, link_status->state);
/* Link event reported only after set vport enable */
- get_port_info(link_status->state, &link);
+ get_port_info(dev, link_status->state, &link);
err = rte_eth_linkstatus_set((struct rte_eth_dev *)(dev->eth_dev),
&link);
if (!err)
diff --git a/drivers/net/spnic/base/spnic_nic_event.h b/drivers/net/spnic/base/spnic_nic_event.h
index eb41d76a7d..ac0c072887 100644
--- a/drivers/net/spnic/base/spnic_nic_event.h
+++ b/drivers/net/spnic/base/spnic_nic_event.h
@@ -5,13 +5,8 @@
#ifndef _SPNIC_NIC_EVENT_H_
#define _SPNIC_NIC_EVENT_H_
-struct spnic_cmd_link_state {
- struct mgmt_msg_head msg_head;
-
- u8 port_id;
- u8 state;
- u16 rsvd1;
-};
+void get_port_info(struct spnic_hwdev *hwdev, u8 link_state,
+ struct rte_eth_link *link);
void spnic_pf_event_handler(void *hwdev, __rte_unused void *pri_handle,
u16 cmd, void *buf_in, u16 in_size,
diff --git a/drivers/net/spnic/spnic_ethdev.c b/drivers/net/spnic/spnic_ethdev.c
index 228ed0c936..8f71280fa7 100644
--- a/drivers/net/spnic/spnic_ethdev.c
+++ b/drivers/net/spnic/spnic_ethdev.c
@@ -5,14 +5,23 @@
#include <rte_pci.h>
#include <rte_bus_pci.h>
#include <ethdev_pci.h>
+#include <rte_malloc.h>
#include <rte_errno.h>
#include <rte_ether.h>
#include "base/spnic_compat.h"
+#include "base/spnic_cmd.h"
#include "base/spnic_csr.h"
+#include "base/spnic_wq.h"
+#include "base/spnic_eqs.h"
+#include "base/spnic_mgmt.h"
+#include "base/spnic_cmdq.h"
#include "base/spnic_hwdev.h"
#include "base/spnic_hwif.h"
-
+#include "base/spnic_hw_cfg.h"
+#include "base/spnic_hw_comm.h"
+#include "base/spnic_nic_cfg.h"
+#include "base/spnic_nic_event.h"
#include "spnic_ethdev.h"
/* Driver-specific log messages type */
@@ -21,6 +30,58 @@ int spnic_logtype;
#define SPNIC_MAX_UC_MAC_ADDRS 128
#define SPNIC_MAX_MC_MAC_ADDRS 128
+static void spnic_delete_mc_addr_list(struct spnic_nic_dev *nic_dev)
+{
+ u16 func_id;
+ u32 i;
+
+ func_id = spnic_global_func_id(nic_dev->hwdev);
+
+ for (i = 0; i < SPNIC_MAX_MC_MAC_ADDRS; i++) {
+ if (rte_is_zero_ether_addr(&nic_dev->mc_list[i]))
+ break;
+
+ spnic_del_mac(nic_dev->hwdev, nic_dev->mc_list[i].addr_bytes,
+ 0, func_id);
+ memset(&nic_dev->mc_list[i], 0, sizeof(struct rte_ether_addr));
+ }
+}
+
+/**
+ * Deinit mac_vlan table in hardware.
+ *
+ * @param[in] eth_dev
+ * Pointer to ethernet device structure.
+ */
+static void spnic_deinit_mac_addr(struct rte_eth_dev *eth_dev)
+{
+ struct spnic_nic_dev *nic_dev =
+ SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(eth_dev);
+ u16 func_id = 0;
+ int err;
+ int i;
+
+ func_id = spnic_global_func_id(nic_dev->hwdev);
+
+ for (i = 0; i < SPNIC_MAX_UC_MAC_ADDRS; i++) {
+ if (rte_is_zero_ether_addr(&eth_dev->data->mac_addrs[i]))
+ continue;
+
+ err = spnic_del_mac(nic_dev->hwdev,
+ eth_dev->data->mac_addrs[i].addr_bytes,
+ 0, func_id);
+ if (err && err != SPNIC_PF_SET_VF_ALREADY)
+ PMD_DRV_LOG(ERR, "Delete mac table failed, dev_name: %s",
+ eth_dev->data->name);
+
+ memset(&eth_dev->data->mac_addrs[i], 0,
+ sizeof(struct rte_ether_addr));
+ }
+
+ /* Delete multicast mac addrs */
+ spnic_delete_mc_addr_list(nic_dev);
+}
+
/**
* Close the device.
*
@@ -38,13 +99,247 @@ static int spnic_dev_close(struct rte_eth_dev *eth_dev)
return 0;
}
+ spnic_deinit_mac_addr(eth_dev);
+ rte_free(nic_dev->mc_list);
+
+ rte_bit_relaxed_clear32(SPNIC_DEV_INTR_EN, &nic_dev->dev_status);
+
spnic_free_hwdev(nic_dev->hwdev);
+ eth_dev->dev_ops = NULL;
+
rte_free(nic_dev->hwdev);
nic_dev->hwdev = NULL;
return 0;
}
+/**
+ * Update MAC address
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] addr
+ * Pointer to MAC address
+ *
+ * @retval zero: Success
+ * @retval non-zero: Failure
+ */
+static int spnic_set_mac_addr(struct rte_eth_dev *dev,
+ struct rte_ether_addr *addr)
+{
+ struct spnic_nic_dev *nic_dev = SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ char mac_addr[RTE_ETHER_ADDR_FMT_SIZE];
+ u16 func_id;
+ int err;
+
+ if (!rte_is_valid_assigned_ether_addr(addr)) {
+ rte_ether_format_addr(mac_addr, RTE_ETHER_ADDR_FMT_SIZE, addr);
+ PMD_DRV_LOG(ERR, "Set invalid MAC address %s", mac_addr);
+ return -EINVAL;
+ }
+
+ func_id = spnic_global_func_id(nic_dev->hwdev);
+ err = spnic_update_mac(nic_dev->hwdev,
+ nic_dev->default_addr.addr_bytes,
+ addr->addr_bytes, 0, func_id);
+ if (err)
+ return err;
+
+ rte_ether_addr_copy(addr, &nic_dev->default_addr);
+ rte_ether_format_addr(mac_addr, RTE_ETHER_ADDR_FMT_SIZE,
+ &nic_dev->default_addr);
+
+ PMD_DRV_LOG(INFO, "Set new MAC address %s", mac_addr);
+
+ return 0;
+}
+
+/**
+ * Remove a MAC address.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] index
+ * MAC address index.
+ */
+static void spnic_mac_addr_remove(struct rte_eth_dev *dev, uint32_t index)
+{
+ struct spnic_nic_dev *nic_dev = SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ u16 func_id;
+ int err;
+
+ if (index >= SPNIC_MAX_UC_MAC_ADDRS) {
+ PMD_DRV_LOG(INFO, "Remove MAC index(%u) is out of range",
+ index);
+ return;
+ }
+
+ func_id = spnic_global_func_id(nic_dev->hwdev);
+ err = spnic_del_mac(nic_dev->hwdev,
+ dev->data->mac_addrs[index].addr_bytes,
+ 0, func_id);
+ if (err)
+ PMD_DRV_LOG(ERR, "Remove MAC index(%u) failed", index);
+}
+
+/**
+ * Add a MAC address.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] mac_addr
+ * MAC address to register.
+ * @param[in] index
+ * MAC address index.
+ * @param[in] vmdq
+ * VMDq pool index to associate address with (unused_).
+ *
+ * @retval zero: Success
+ * @retval non-zero: Failure
+ */
+static int spnic_mac_addr_add(struct rte_eth_dev *dev,
+ struct rte_ether_addr *mac_addr, uint32_t index,
+ __rte_unused uint32_t vmdq)
+{
+ struct spnic_nic_dev *nic_dev = SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ unsigned int i;
+ u16 func_id;
+ int err;
+
+ if (!rte_is_valid_assigned_ether_addr(mac_addr)) {
+ PMD_DRV_LOG(ERR, "Add invalid MAC address");
+ return -EINVAL;
+ }
+
+ if (index >= SPNIC_MAX_UC_MAC_ADDRS) {
+ PMD_DRV_LOG(ERR, "Add MAC index(%u) is out of range", index);
+ return -EINVAL;
+ }
+
+ /* Make sure this address isn't already configured */
+ for (i = 0; i < SPNIC_MAX_UC_MAC_ADDRS; i++) {
+ if (rte_is_same_ether_addr(mac_addr,
+ &dev->data->mac_addrs[i])) {
+ PMD_DRV_LOG(ERR, "MAC address is already configured");
+ return -EADDRINUSE;
+ }
+ }
+
+ func_id = spnic_global_func_id(nic_dev->hwdev);
+ err = spnic_set_mac(nic_dev->hwdev, mac_addr->addr_bytes, 0, func_id);
+ if (err)
+ return err;
+
+ return 0;
+}
+
+/**
+ * Set multicast MAC address
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] mc_addr_set
+ * Pointer to multicast MAC address
+ * @param[in] nb_mc_addr
+ * The number of multicast MAC address to set
+ *
+ * @retval zero: Success
+ * @retval non-zero: Failure
+ */
+static int spnic_set_mc_addr_list(struct rte_eth_dev *dev,
+ struct rte_ether_addr *mc_addr_set,
+ uint32_t nb_mc_addr)
+{
+ struct spnic_nic_dev *nic_dev = SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ char mac_addr[RTE_ETHER_ADDR_FMT_SIZE];
+ u16 func_id;
+ int err;
+ u32 i;
+
+ func_id = spnic_global_func_id(nic_dev->hwdev);
+
+ /* Delete old multi_cast addrs firstly */
+ spnic_delete_mc_addr_list(nic_dev);
+
+ if (nb_mc_addr > SPNIC_MAX_MC_MAC_ADDRS)
+ return -EINVAL;
+
+ for (i = 0; i < nb_mc_addr; i++) {
+ if (!rte_is_multicast_ether_addr(&mc_addr_set[i])) {
+ rte_ether_format_addr(mac_addr, RTE_ETHER_ADDR_FMT_SIZE,
+ &mc_addr_set[i]);
+ PMD_DRV_LOG(ERR, "Set mc MAC addr failed, addr(%s) invalid",
+ mac_addr);
+ return -EINVAL;
+ }
+ }
+
+ for (i = 0; i < nb_mc_addr; i++) {
+ err = spnic_set_mac(nic_dev->hwdev, mc_addr_set[i].addr_bytes,
+ 0, func_id);
+ if (err) {
+ spnic_delete_mc_addr_list(nic_dev);
+ return err;
+ }
+
+ rte_ether_addr_copy(&mc_addr_set[i], &nic_dev->mc_list[i]);
+ }
+
+ return 0;
+}
+static const struct eth_dev_ops spnic_pmd_ops = {
+ .mac_addr_set = spnic_set_mac_addr,
+ .mac_addr_remove = spnic_mac_addr_remove,
+ .mac_addr_add = spnic_mac_addr_add,
+ .set_mc_addr_list = spnic_set_mc_addr_list,
+};
+
+static const struct eth_dev_ops spnic_pmd_vf_ops = {
+ .mac_addr_set = spnic_set_mac_addr,
+ .mac_addr_remove = spnic_mac_addr_remove,
+ .mac_addr_add = spnic_mac_addr_add,
+ .set_mc_addr_list = spnic_set_mc_addr_list,
+};
+
+/**
+ * Init mac_vlan table in hardware.
+ *
+ * @param[in] eth_dev
+ * Pointer to ethernet device structure.
+ *
+ * @retval zero: Success
+ * @retval non-zero: Failure
+ */
+static int spnic_init_mac_table(struct rte_eth_dev *eth_dev)
+{
+ struct spnic_nic_dev *nic_dev =
+ SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(eth_dev);
+ u8 addr_bytes[RTE_ETHER_ADDR_LEN];
+ u16 func_id = 0;
+ int err = 0;
+
+ err = spnic_get_default_mac(nic_dev->hwdev, addr_bytes,
+ RTE_ETHER_ADDR_LEN);
+ if (err)
+ return err;
+
+ rte_ether_addr_copy((struct rte_ether_addr *)addr_bytes,
+ &eth_dev->data->mac_addrs[0]);
+ if (rte_is_zero_ether_addr(&eth_dev->data->mac_addrs[0]))
+ rte_eth_random_addr(eth_dev->data->mac_addrs[0].addr_bytes);
+
+ func_id = spnic_global_func_id(nic_dev->hwdev);
+ err = spnic_set_mac(nic_dev->hwdev,
+ eth_dev->data->mac_addrs[0].addr_bytes,
+ 0, func_id);
+ if (err && err != SPNIC_PF_SET_VF_ALREADY)
+ return err;
+
+ rte_ether_addr_copy(&eth_dev->data->mac_addrs[0],
+ &nic_dev->default_addr);
+
+ return 0;
+}
static int spnic_func_init(struct rte_eth_dev *eth_dev)
{
@@ -63,11 +358,37 @@ static int spnic_func_init(struct rte_eth_dev *eth_dev)
}
nic_dev = SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(eth_dev);
+ memset(nic_dev, 0, sizeof(*nic_dev));
snprintf(nic_dev->dev_name, sizeof(nic_dev->dev_name),
"spnic-%.4x:%.2x:%.2x.%x",
pci_dev->addr.domain, pci_dev->addr.bus,
pci_dev->addr.devid, pci_dev->addr.function);
+ /* Alloc mac_addrs */
+ eth_dev->data->mac_addrs = rte_zmalloc("spnic_mac",
+ SPNIC_MAX_UC_MAC_ADDRS * sizeof(struct rte_ether_addr), 0);
+ if (!eth_dev->data->mac_addrs) {
+ PMD_DRV_LOG(ERR, "Allocate %zx bytes to store MAC addresses "
+ "failed, dev_name: %s",
+ SPNIC_MAX_UC_MAC_ADDRS *
+ sizeof(struct rte_ether_addr),
+ eth_dev->data->name);
+ err = -ENOMEM;
+ goto alloc_eth_addr_fail;
+ }
+
+ nic_dev->mc_list = rte_zmalloc("spnic_mc",
+ SPNIC_MAX_MC_MAC_ADDRS * sizeof(struct rte_ether_addr), 0);
+ if (!nic_dev->mc_list) {
+ PMD_DRV_LOG(ERR, "Allocate %zx bytes to store multicast "
+ "addresses failed, dev_name: %s",
+ SPNIC_MAX_MC_MAC_ADDRS *
+ sizeof(struct rte_ether_addr),
+ eth_dev->data->name);
+ err = -ENOMEM;
+ goto alloc_mc_list_fail;
+ }
+
eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
/* Create hardware device */
nic_dev->hwdev = rte_zmalloc("spnic_hwdev", sizeof(*nic_dev->hwdev),
@@ -90,17 +411,42 @@ static int spnic_func_init(struct rte_eth_dev *eth_dev)
goto init_hwdev_fail;
}
+ if (SPNIC_FUNC_TYPE(nic_dev->hwdev) == TYPE_VF)
+ eth_dev->dev_ops = &spnic_pmd_vf_ops;
+ else
+ eth_dev->dev_ops = &spnic_pmd_ops;
+ err = spnic_init_mac_table(eth_dev);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Init mac table failed, dev_name: %s",
+ eth_dev->data->name);
+ goto init_mac_table_fail;
+ }
+
+ rte_bit_relaxed_set32(SPNIC_DEV_INTR_EN, &nic_dev->dev_status);
+
rte_bit_relaxed_set32(SPNIC_DEV_INIT, &nic_dev->dev_status);
PMD_DRV_LOG(INFO, "Initialize %s in primary succeed",
eth_dev->data->name);
return 0;
+init_mac_table_fail:
+ spnic_free_hwdev(nic_dev->hwdev);
+ eth_dev->dev_ops = NULL;
+
init_hwdev_fail:
rte_free(nic_dev->hwdev);
nic_dev->hwdev = NULL;
alloc_hwdev_mem_fail:
+ rte_free(nic_dev->mc_list);
+ nic_dev->mc_list = NULL;
+
+alloc_mc_list_fail:
+ rte_free(eth_dev->data->mac_addrs);
+ eth_dev->data->mac_addrs = NULL;
+
+alloc_eth_addr_fail:
PMD_DRV_LOG(ERR, "Initialize %s in primary failed",
eth_dev->data->name);
return err;
--
2.27.0
^ permalink raw reply [flat|nested] 32+ messages in thread
* [PATCH v1 10/25] net/spnic: add function info initialization
2021-12-18 2:51 [PATCH v1 00/25] Net/SPNIC: support SPNIC into DPDK 22.03 Yanling Song
` (8 preceding siblings ...)
2021-12-18 2:51 ` [PATCH v1 09/25] net/spnic: support MAC and link event handling Yanling Song
@ 2021-12-18 2:51 ` Yanling Song
2021-12-18 2:51 ` [PATCH v1 11/25] net/spnic: add queue pairs context initialization Yanling Song
` (14 subsequent siblings)
24 siblings, 0 replies; 32+ messages in thread
From: Yanling Song @ 2021-12-18 2:51 UTC (permalink / raw)
To: dev; +Cc: songyl, yanling.song, yanggan, ferruh.yigit
This patch mainly implements function info initialization,
including MTU, link state, port state, port info and CoS,
as well as the definitions of the corresponding data structures.
Signed-off-by: Yanling Song <songyl@ramaxel.com>
---
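A minimal usage sketch (not part of the patch), assuming the new handlers are wired into the eth_dev_ops tables: the function info configured here is driven from the standard ethdev calls. The port id and MTU value are arbitrary examples.

#include <errno.h>
#include <rte_ethdev.h>

/* Illustrative only: the generic calls that land in the handlers added
 * by this patch (spnic_dev_set_mtu, spnic_dev_set_link_up and
 * spnic_link_update), assuming they are registered in eth_dev_ops.
 */
static int example_port_bringup(uint16_t port_id)
{
	struct rte_eth_link link;
	int err;

	/* .mtu_set -> spnic_dev_set_mtu(), bounded by SPNIC_MIN/MAX_MTU_SIZE */
	err = rte_eth_dev_set_mtu(port_id, 1500);
	if (err)
		return err;

	/* .dev_set_link_up -> spnic_dev_set_link_up() (returns 0 without action for VFs) */
	err = rte_eth_dev_set_link_up(port_id);
	if (err)
		return err;

	/* .link_update -> spnic_link_update(); waiting retries for about 1s */
	err = rte_eth_link_get(port_id, &link);
	if (err)
		return err;

	return link.link_status ? 0 : -EAGAIN;
}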
drivers/net/spnic/base/spnic_hw_cfg.c | 43 +++
drivers/net/spnic/base/spnic_hw_cfg.h | 6 +
drivers/net/spnic/base/spnic_nic_cfg.c | 221 ++++++++++++++
drivers/net/spnic/base/spnic_nic_cfg.h | 213 ++++++++++++++
drivers/net/spnic/spnic_ethdev.c | 382 ++++++++++++++++++++++++-
drivers/net/spnic/spnic_ethdev.h | 22 +-
6 files changed, 876 insertions(+), 11 deletions(-)
diff --git a/drivers/net/spnic/base/spnic_hw_cfg.c b/drivers/net/spnic/base/spnic_hw_cfg.c
index 6505f48273..49e16ee89c 100644
--- a/drivers/net/spnic/base/spnic_hw_cfg.c
+++ b/drivers/net/spnic/base/spnic_hw_cfg.c
@@ -156,6 +156,49 @@ void spnic_free_capability(void *dev)
rte_free(((struct spnic_hwdev *)dev)->cfg_mgmt);
}
+/**
+ * @brief spnic_support_nic - check whether the function supports NIC
+ * @param hwdev: device pointer to hwdev
+ * @retval true: function supports NIC
+ * @retval false: function does not support NIC
+ */
+bool spnic_support_nic(void *hwdev)
+{
+ struct spnic_hwdev *dev = (struct spnic_hwdev *)hwdev;
+
+ if (!hwdev)
+ return false;
+
+ if (!IS_NIC_TYPE(dev))
+ return false;
+
+ return true;
+}
+
+u16 spnic_func_max_sqs(void *hwdev)
+{
+ struct spnic_hwdev *dev = hwdev;
+
+ if (!dev) {
+ PMD_DRV_LOG(INFO, "Hwdev is NULL for getting max_sqs");
+ return 0;
+ }
+
+ return dev->cfg_mgmt->svc_cap.nic_cap.max_sqs;
+}
+
+u16 spnic_func_max_rqs(void *hwdev)
+{
+ struct spnic_hwdev *dev = hwdev;
+
+ if (!dev) {
+ PMD_DRV_LOG(INFO, "Hwdev is NULL for getting max_rqs");
+ return 0;
+ }
+
+ return dev->cfg_mgmt->svc_cap.nic_cap.max_rqs;
+}
+
u8 spnic_physical_port_id(void *hwdev)
{
struct spnic_hwdev *dev = hwdev;
diff --git a/drivers/net/spnic/base/spnic_hw_cfg.h b/drivers/net/spnic/base/spnic_hw_cfg.h
index 9ab51f2875..5019f38ec2 100644
--- a/drivers/net/spnic/base/spnic_hw_cfg.h
+++ b/drivers/net/spnic/base/spnic_hw_cfg.h
@@ -112,8 +112,14 @@ struct spnic_cfg_cmd_dev_cap {
int spnic_init_capability(void *dev);
void spnic_free_capability(void *dev);
+u16 spnic_func_max_sqs(void *hwdev);
+u16 spnic_func_max_rqs(void *hwdev);
+
u8 spnic_physical_port_id(void *hwdev);
int cfg_mbx_vf_proc_msg(void *hwdev, void *pri_handle, u16 cmd, void *buf_in,
u16 in_size, void *buf_out, u16 *out_size);
+
+bool spnic_support_nic(void *hwdev);
+
#endif /* _SPNIC_HW_CFG_H_ */
diff --git a/drivers/net/spnic/base/spnic_nic_cfg.c b/drivers/net/spnic/base/spnic_nic_cfg.c
index c47bc330a3..886aaea384 100644
--- a/drivers/net/spnic/base/spnic_nic_cfg.c
+++ b/drivers/net/spnic/base/spnic_nic_cfg.c
@@ -265,6 +265,227 @@ int spnic_get_port_info(void *hwdev, struct nic_port_info *port_info)
return 0;
}
+
+int spnic_get_link_state(void *hwdev, u8 *link_state)
+{
+ struct spnic_cmd_link_state get_link;
+ u16 out_size = sizeof(get_link);
+ int err;
+
+ if (!hwdev || !link_state)
+ return -EINVAL;
+
+ memset(&get_link, 0, sizeof(get_link));
+ get_link.port_id = spnic_physical_port_id(hwdev);
+ err = mag_msg_to_mgmt_sync(hwdev, MAG_CMD_GET_LINK_STATUS, &get_link,
+ sizeof(get_link), &get_link, &out_size);
+ if (err || !out_size || get_link.msg_head.status) {
+ PMD_DRV_LOG(ERR, "Get link state failed, err: %d, status: 0x%x, out size: 0x%x",
+ err, get_link.msg_head.status, out_size);
+ return -EIO;
+ }
+
+ *link_state = get_link.state;
+
+ return 0;
+}
+
+int spnic_set_vport_enable(void *hwdev, bool enable)
+{
+ struct spnic_vport_state en_state;
+ u16 out_size = sizeof(en_state);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ memset(&en_state, 0, sizeof(en_state));
+ en_state.func_id = spnic_global_func_id(hwdev);
+ en_state.state = enable ? 1 : 0;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, SPNIC_CMD_SET_VPORT_ENABLE, &en_state,
+ sizeof(en_state), &en_state, &out_size);
+ if (err || !out_size || en_state.msg_head.status) {
+ PMD_DRV_LOG(ERR, "Set vport state failed, err: %d, status: 0x%x, out size: 0x%x",
+ err, en_state.msg_head.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int spnic_set_port_enable(void *hwdev, bool enable)
+{
+ struct mag_cmd_set_port_enable en_state;
+ u16 out_size = sizeof(en_state);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ if (spnic_func_type(hwdev) == TYPE_VF)
+ return 0;
+
+ memset(&en_state, 0, sizeof(en_state));
+ en_state.function_id = spnic_global_func_id(hwdev);
+ en_state.state = enable ? MAG_CMD_TX_ENABLE | MAG_CMD_RX_ENABLE :
+ MAG_CMD_PORT_DISABLE;
+
+ err = mag_msg_to_mgmt_sync(hwdev, MAG_CMD_SET_PORT_ENABLE, &en_state,
+ sizeof(en_state), &en_state, &out_size);
+ if (err || !out_size || en_state.head.status) {
+ PMD_DRV_LOG(ERR, "Set port state failed, err: %d, status: 0x%x, out size: 0x%x",
+ err, en_state.head.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+static int spnic_set_function_table(void *hwdev, u32 cfg_bitmap,
+ struct spnic_func_tbl_cfg *cfg)
+{
+ struct spnic_cmd_set_func_tbl cmd_func_tbl;
+ u16 out_size = sizeof(cmd_func_tbl);
+ int err;
+
+ memset(&cmd_func_tbl, 0, sizeof(cmd_func_tbl));
+ cmd_func_tbl.func_id = spnic_global_func_id(hwdev);
+ cmd_func_tbl.cfg_bitmap = cfg_bitmap;
+ cmd_func_tbl.tbl_cfg = *cfg;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, SPNIC_CMD_SET_FUNC_TBL,
+ &cmd_func_tbl, sizeof(cmd_func_tbl),
+ &cmd_func_tbl, &out_size);
+ if (err || cmd_func_tbl.msg_head.status || !out_size) {
+ PMD_DRV_LOG(ERR, "Set func table failed, bitmap: 0x%x, err: %d, "
+ "status: 0x%x, out size: 0x%x\n", cfg_bitmap, err,
+ cmd_func_tbl.msg_head.status, out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+int spnic_init_function_table(void *hwdev, u16 rx_buff_len)
+{
+ struct spnic_func_tbl_cfg func_tbl_cfg;
+ u32 cfg_bitmap = BIT(FUNC_CFG_INIT) | BIT(FUNC_CFG_MTU) |
+ BIT(FUNC_CFG_RX_BUF_SIZE);
+
+ memset(&func_tbl_cfg, 0, sizeof(func_tbl_cfg));
+ func_tbl_cfg.mtu = 0x3FFF; /* Default, max mtu */
+ func_tbl_cfg.rx_wqe_buf_size = rx_buff_len;
+
+ return spnic_set_function_table(hwdev, cfg_bitmap, &func_tbl_cfg);
+}
+
+int spnic_set_port_mtu(void *hwdev, u16 new_mtu)
+{
+ struct spnic_func_tbl_cfg func_tbl_cfg;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ if (new_mtu < SPNIC_MIN_MTU_SIZE) {
+ PMD_DRV_LOG(ERR, "Invalid mtu size: %ubytes, mtu size < %ubytes",
+ new_mtu, SPNIC_MIN_MTU_SIZE);
+ return -EINVAL;
+ }
+
+ if (new_mtu > SPNIC_MAX_JUMBO_FRAME_SIZE) {
+ PMD_DRV_LOG(ERR, "Invalid mtu size: %ubytes, mtu size > %ubytes",
+ new_mtu, SPNIC_MAX_JUMBO_FRAME_SIZE);
+ return -EINVAL;
+ }
+
+ memset(&func_tbl_cfg, 0, sizeof(func_tbl_cfg));
+ func_tbl_cfg.mtu = new_mtu;
+
+ return spnic_set_function_table(hwdev, BIT(FUNC_CFG_MTU),
+ &func_tbl_cfg);
+}
+
+static int spnic_vf_func_init(void *hwdev)
+{
+ struct spnic_cmd_register_vf register_info;
+ u16 out_size = sizeof(register_info);
+ int err;
+
+ if (spnic_func_type(hwdev) != TYPE_VF)
+ return 0;
+
+ memset(&register_info, 0, sizeof(register_info));
+ register_info.op_register = 1;
+ err = l2nic_msg_to_mgmt_sync(hwdev, SPNIC_CMD_VF_REGISTER,
+ &register_info, sizeof(register_info),
+ &register_info, &out_size);
+ if (err || register_info.msg_head.status || !out_size) {
+ PMD_DRV_LOG(ERR, "Register VF failed, err: %d, status: 0x%x, out size: 0x%x",
+ err, register_info.msg_head.status, out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+static int spnic_vf_func_free(void *hwdev)
+{
+ struct spnic_cmd_register_vf unregister;
+ u16 out_size = sizeof(unregister);
+ int err;
+
+ if (spnic_func_type(hwdev) != TYPE_VF)
+ return 0;
+
+ memset(&unregister, 0, sizeof(unregister));
+ unregister.op_register = 0;
+ err = l2nic_msg_to_mgmt_sync(hwdev, SPNIC_CMD_VF_REGISTER,
+ &unregister, sizeof(unregister),
+ &unregister, &out_size);
+ if (err || unregister.msg_head.status || !out_size) {
+ PMD_DRV_LOG(ERR, "Unregister VF failed, err: %d, status: 0x%x, out size: 0x%x",
+ err, unregister.msg_head.status, out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+int spnic_init_nic_hwdev(void *hwdev)
+{
+ return spnic_vf_func_init(hwdev);
+}
+
+void spnic_free_nic_hwdev(void *hwdev)
+{
+ if (!hwdev)
+ return;
+
+ spnic_vf_func_free(hwdev);
+}
+
+int spnic_vf_get_default_cos(void *hwdev, u8 *cos_id)
+{
+ struct spnic_cmd_vf_dcb_state vf_dcb;
+ u16 out_size = sizeof(vf_dcb);
+ int err;
+
+ memset(&vf_dcb, 0, sizeof(vf_dcb));
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, SPNIC_CMD_VF_COS, &vf_dcb,
+ sizeof(vf_dcb), &vf_dcb, &out_size);
+ if (err || !out_size || vf_dcb.msg_head.status) {
+ PMD_DRV_LOG(ERR, "Get VF default cos failed, err: %d, status: 0x%x, out size: 0x%x",
+ err, vf_dcb.msg_head.status, out_size);
+ return -EIO;
+ }
+
+ *cos_id = vf_dcb.state.default_cos;
+
+ return 0;
+}
+
static int _mag_msg_to_mgmt_sync(void *hwdev, u16 cmd, void *buf_in,
u16 in_size, void *buf_out, u16 *out_size)
{
diff --git a/drivers/net/spnic/base/spnic_nic_cfg.h b/drivers/net/spnic/base/spnic_nic_cfg.h
index 669e982876..98cad645d2 100644
--- a/drivers/net/spnic/base/spnic_nic_cfg.h
+++ b/drivers/net/spnic/base/spnic_nic_cfg.h
@@ -12,6 +12,26 @@
#define OS_VF_ID_TO_HW(os_vf_id) ((os_vf_id) + 1)
#define HW_VF_ID_TO_OS(hw_vf_id) ((hw_vf_id) - 1)
+#define SPNIC_DCB_UP_MAX 0x8
+
+#define SPNIC_MAX_NUM_RQ 256
+
+#define SPNIC_MAX_MTU_SIZE 9600
+#define SPNIC_MIN_MTU_SIZE 384
+
+#define SPNIC_COS_NUM_MAX 8
+
+#define SPNIC_VLAN_TAG_SIZE 4
+#define SPNIC_ETH_OVERHEAD \
+ (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + SPNIC_VLAN_TAG_SIZE * 2)
+
+#define SPNIC_MIN_FRAME_SIZE (SPNIC_MIN_MTU_SIZE + SPNIC_ETH_OVERHEAD)
+#define SPNIC_MAX_JUMBO_FRAME_SIZE (SPNIC_MAX_MTU_SIZE + SPNIC_ETH_OVERHEAD)
+
+#define SPNIC_MTU_TO_PKTLEN(mtu) ((mtu) + SPNIC_ETH_OVERHEAD)
+
+#define SPNIC_PKTLEN_TO_MTU(pktlen) ((pktlen) - SPNIC_ETH_OVERHEAD)
+
#define SPNIC_PF_SET_VF_ALREADY 0x4
#define SPNIC_MGMT_STATUS_EXIST 0x6
#define CHECK_IPSU_15BIT 0x8000
@@ -92,6 +112,114 @@ struct spnic_cmd_link_state {
u16 rsvd1;
};
+struct nic_pause_config {
+ u8 auto_neg;
+ u8 rx_pause;
+ u8 tx_pause;
+};
+
+struct spnic_cmd_pause_config {
+ struct mgmt_msg_head msg_head;
+
+ u8 port_id;
+ u8 opcode;
+ u16 rsvd1;
+ u8 auto_neg;
+ u8 rx_pause;
+ u8 tx_pause;
+ u8 rsvd2[5];
+};
+
+struct spnic_vport_state {
+ struct mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd1;
+ u8 state; /* 0--disable, 1--enable */
+ u8 rsvd2[3];
+};
+
+#define MAG_CMD_PORT_DISABLE 0x0
+#define MAG_CMD_TX_ENABLE 0x1
+#define MAG_CMD_RX_ENABLE 0x2
+/* The physical port is disabled only when all PFs of the port are set to
+ * down; if any PF is enabled, the port is enabled.
+ */
+struct mag_cmd_set_port_enable {
+ struct mgmt_msg_head head;
+
+ u16 function_id;
+ u16 rsvd0;
+
+ u8 state; /* bitmap bit0:tx_en bit1:rx_en */
+ u8 rsvd1[3];
+};
+
+struct mag_cmd_get_port_enable {
+ struct mgmt_msg_head head;
+
+ u8 port;
+ u8 state; /* bitmap bit0:tx_en bit1:rx_en */
+ u8 rsvd0[2];
+};
+
+struct spnic_cmd_clear_qp_resource {
+ struct mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd1;
+};
+
+
+enum spnic_func_tbl_cfg_bitmap {
+ FUNC_CFG_INIT,
+ FUNC_CFG_RX_BUF_SIZE,
+ FUNC_CFG_MTU,
+};
+
+struct spnic_func_tbl_cfg {
+ u16 rx_wqe_buf_size;
+ u16 mtu;
+ u32 rsvd[9];
+};
+
+struct spnic_cmd_set_func_tbl {
+ struct mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd;
+
+ u32 cfg_bitmap;
+ struct spnic_func_tbl_cfg tbl_cfg;
+};
+
+enum {
+ SPNIC_IFLA_VF_LINK_STATE_AUTO, /* Link state of the uplink */
+ SPNIC_IFLA_VF_LINK_STATE_ENABLE, /* Link always up */
+ SPNIC_IFLA_VF_LINK_STATE_DISABLE, /* Link always down */
+};
+
+struct spnic_dcb_state {
+ u8 dcb_on;
+ u8 default_cos;
+ u16 rsvd1;
+ u8 up_cos[SPNIC_DCB_UP_MAX];
+ u32 rsvd2[7];
+};
+
+struct spnic_cmd_vf_dcb_state {
+ struct mgmt_msg_head msg_head;
+
+ struct spnic_dcb_state state;
+};
+
+struct spnic_cmd_register_vf {
+ struct mgmt_msg_head msg_head;
+
+ u8 op_register; /* 0 - unregister, 1 - register */
+ u8 rsvd[39];
+};
+
int l2nic_msg_to_mgmt_sync(void *hwdev, u16 cmd, void *buf_in, u16 in_size,
void *buf_out, u16 *out_size);
@@ -164,6 +292,77 @@ int spnic_set_mac(void *hwdev, const u8 *mac_addr, u16 vlan_id, u16 func_id);
*/
int spnic_del_mac(void *hwdev, const u8 *mac_addr, u16 vlan_id, u16 func_id);
+/**
+ * Set function mtu
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev
+ * @param[in] new_mtu
+ * MTU value
+ *
+ * @retval zero : Success
+ * @retval non-zero : Failure
+ */
+int spnic_set_port_mtu(void *hwdev, u16 new_mtu);
+
+/**
+ * Set function valid status
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev
+ * @param[in] enable
+ * 0-disable, 1-enable
+ *
+ * @retval zero : Success
+ * @retval non-zero : Failure
+ */
+int spnic_set_vport_enable(void *hwdev, bool enable);
+
+/**
+ * Set port status
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev
+ * @param[in] enable
+ * 0-disable, 1-enable
+ *
+ * @retval zero : Success
+ * @retval non-zero : Failure
+ */
+int spnic_set_port_enable(void *hwdev, bool enable);
+
+/**
+ * Get link state
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev
+ * @param[out] link_state
+ * Link state, 0-link down, 1-link up
+ *
+ * @retval zero : Success
+ * @retval non-zero : Failure
+ */
+int spnic_get_link_state(void *hwdev, u8 *link_state);
+
+/**
+ * Init nic hwdev
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev
+ *
+ * @retval zero : Success
+ * @retval non-zero : Failure
+ */
+int spnic_init_nic_hwdev(void *hwdev);
+
+/**
+ * Free nic hwdev
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev
+ */
+void spnic_free_nic_hwdev(void *hwdev);
+
/**
* Get port info
*
@@ -177,4 +376,18 @@ int spnic_del_mac(void *hwdev, const u8 *mac_addr, u16 vlan_id, u16 func_id);
*/
int spnic_get_port_info(void *hwdev, struct nic_port_info *port_info);
+int spnic_init_function_table(void *hwdev, u16 rx_buff_len);
+
+/**
+ * Get VF function default cos
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev
+ * @param[out] cos_id
+ * Cos id
+ *
+ * @retval zero : Success
+ * @retval non-zero : Failure
+ */
+int spnic_vf_get_default_cos(void *hwdev, u8 *cos_id);
#endif /* _SPNIC_NIC_CFG_H_ */
diff --git a/drivers/net/spnic/spnic_ethdev.c b/drivers/net/spnic/spnic_ethdev.c
index 8f71280fa7..7f73e70df1 100644
--- a/drivers/net/spnic/spnic_ethdev.c
+++ b/drivers/net/spnic/spnic_ethdev.c
@@ -30,23 +30,106 @@ int spnic_logtype;
#define SPNIC_MAX_UC_MAC_ADDRS 128
#define SPNIC_MAX_MC_MAC_ADDRS 128
-static void spnic_delete_mc_addr_list(struct spnic_nic_dev *nic_dev)
+/**
+ * Deinit mac_vlan table in hardware.
+ *
+ * @param[in] eth_dev
+ * Pointer to ethernet device structure.
+ */
+
+/**
+ * Set ethernet device link state up.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ *
+ * @retval zero : Success
+ * @retval non-zero : Failure.
+ */
+static int spnic_dev_set_link_up(struct rte_eth_dev *dev)
{
- u16 func_id;
- u32 i;
+ struct spnic_nic_dev *nic_dev = SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ int err;
- func_id = spnic_global_func_id(nic_dev->hwdev);
+ /* Link status follow phy port status, mpu will open pma */
+ err = spnic_set_port_enable(nic_dev->hwdev, true);
+ if (err)
+ PMD_DRV_LOG(ERR, "Set MAC link up failed, dev_name: %s, port_id: %d",
+ nic_dev->dev_name, dev->data->port_id);
- for (i = 0; i < SPNIC_MAX_MC_MAC_ADDRS; i++) {
- if (rte_is_zero_ether_addr(&nic_dev->mc_list[i]))
+ return err;
+}
+
+/**
+ * Set ethernet device link state down.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ *
+ * @retval zero : Success
+ * @retval non-zero : Failure.
+ */
+static int spnic_dev_set_link_down(struct rte_eth_dev *dev)
+{
+ struct spnic_nic_dev *nic_dev = SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ int err;
+
+ /* Link status follow phy port status, mpu will close pma */
+ err = spnic_set_port_enable(nic_dev->hwdev, false);
+ if (err)
+ PMD_DRV_LOG(ERR, "Set MAC link down failed, dev_name: %s, port_id: %d",
+ nic_dev->dev_name, dev->data->port_id);
+
+ return err;
+}
+
+/**
+ * Get device physical link information.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] wait_to_complete
+ * Wait for request completion.
+ *
+ * @retval 0 : Link status changed
+ * @retval -1 : Link status not changed.
+ */
+static int spnic_link_update(struct rte_eth_dev *dev, int wait_to_complete)
+{
+#define CHECK_INTERVAL 10 /* 10ms */
+#define MAX_REPEAT_TIME 100 /* 1s (100 * 10ms) in total */
+ struct spnic_nic_dev *nic_dev = SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ struct rte_eth_link link;
+ u8 link_state;
+ unsigned int rep_cnt = MAX_REPEAT_TIME;
+ int ret;
+
+ memset(&link, 0, sizeof(link));
+ do {
+ /* Get link status information from hardware */
+ ret = spnic_get_link_state(nic_dev->hwdev, &link_state);
+ if (ret) {
+ link.link_status = ETH_LINK_DOWN;
+ link.link_speed = ETH_SPEED_NUM_NONE;
+ link.link_duplex = ETH_LINK_HALF_DUPLEX;
+ link.link_autoneg = ETH_LINK_FIXED;
+ goto out;
+ }
+
+ get_port_info(nic_dev->hwdev, link_state, &link);
+
+ if (!wait_to_complete || link.link_status)
break;
- spnic_del_mac(nic_dev->hwdev, nic_dev->mc_list[i].addr_bytes,
- 0, func_id);
- memset(&nic_dev->mc_list[i], 0, sizeof(struct rte_ether_addr));
- }
+ rte_delay_ms(CHECK_INTERVAL);
+ } while (rep_cnt--);
+
+out:
+ return rte_eth_linkstatus_set(dev, &link);
}
+static void spnic_delete_mc_addr_list(struct spnic_nic_dev *nic_dev);
+
/**
* Deinit mac_vlan table in hardware.
*
@@ -82,6 +165,122 @@ static void spnic_deinit_mac_addr(struct rte_eth_dev *eth_dev)
spnic_delete_mc_addr_list(nic_dev);
}
+static int spnic_init_sw_rxtxqs(struct spnic_nic_dev *nic_dev)
+{
+ u32 txq_size;
+ u32 rxq_size;
+
+ /* Allocate software txq array */
+ txq_size = nic_dev->max_sqs * sizeof(*nic_dev->txqs);
+ nic_dev->txqs = rte_zmalloc("spnic_txqs", txq_size,
+ RTE_CACHE_LINE_SIZE);
+ if (!nic_dev->txqs) {
+ PMD_DRV_LOG(ERR, "Allocate txqs failed");
+ return -ENOMEM;
+ }
+
+ /* Allocate software rxq array */
+ rxq_size = nic_dev->max_rqs * sizeof(*nic_dev->rxqs);
+ nic_dev->rxqs = rte_zmalloc("spnic_rxqs", rxq_size,
+ RTE_CACHE_LINE_SIZE);
+ if (!nic_dev->rxqs) {
+ /* Free txqs */
+ rte_free(nic_dev->txqs);
+ nic_dev->txqs = NULL;
+
+ PMD_DRV_LOG(ERR, "Allocate rxqs failed");
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+static void spnic_deinit_sw_rxtxqs(struct spnic_nic_dev *nic_dev)
+{
+ rte_free(nic_dev->txqs);
+ nic_dev->txqs = NULL;
+
+ rte_free(nic_dev->rxqs);
+ nic_dev->rxqs = NULL;
+}
+
+/**
+ * Start the device.
+ *
+ * Initialize the function table, rxq and txq contexts, configure rx offload,
+ * and enable vport and port to prepare for receiving packets.
+ *
+ * @param[in] eth_dev
+ * Pointer to ethernet device structure.
+ *
+ * @retval zero : Success
+ * @retval non-zero : Failure
+ */
+static int spnic_dev_start(struct rte_eth_dev *eth_dev)
+{
+ struct spnic_nic_dev *nic_dev;
+ int err;
+
+ nic_dev = SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(eth_dev);
+
+ err = spnic_init_function_table(nic_dev->hwdev, nic_dev->rx_buff_len);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Init function table failed, dev_name: %s",
+ eth_dev->data->name);
+ goto init_func_tbl_fail;
+ }
+
+ /* Set default mtu */
+ err = spnic_set_port_mtu(nic_dev->hwdev, nic_dev->mtu_size);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Set mtu_size[%d] failed, dev_name: %s",
+ nic_dev->mtu_size, eth_dev->data->name);
+ goto set_mtu_fail;
+ }
+
+
+ /* Update eth_dev link status */
+ if (eth_dev->data->dev_conf.intr_conf.lsc != 0)
+ (void)spnic_link_update(eth_dev, 0);
+
+ rte_bit_relaxed_set32(SPNIC_DEV_START, &nic_dev->dev_status);
+
+ return 0;
+
+set_mtu_fail:
+init_func_tbl_fail:
+
+ return err;
+}
+
+/**
+ * Stop the device.
+ *
+ * Stop the phy port and vport, flush pending I/O requests, clean the context
+ * configuration and free I/O resources.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ */
+static int spnic_dev_stop(struct rte_eth_dev *dev)
+{
+ struct spnic_nic_dev *nic_dev;
+ struct rte_eth_link link;
+
+ nic_dev = SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ if (!rte_bit_relaxed_test_and_clear32(SPNIC_DEV_START, &nic_dev->dev_status)) {
+ PMD_DRV_LOG(INFO, "Device %s already stopped",
+ nic_dev->dev_name);
+ return 0;
+ }
+
+ /* Clear recorded link status */
+ memset(&link, 0, sizeof(link));
+ (void)rte_eth_linkstatus_set(dev, &link);
+
+ return 0;
+}
+
/**
* Close the device.
*
@@ -99,11 +298,19 @@ static int spnic_dev_close(struct rte_eth_dev *eth_dev)
return 0;
}
+ spnic_dev_stop(eth_dev);
+
+ spnic_deinit_sw_rxtxqs(nic_dev);
spnic_deinit_mac_addr(eth_dev);
rte_free(nic_dev->mc_list);
rte_bit_relaxed_clear32(SPNIC_DEV_INTR_EN, &nic_dev->dev_status);
+
+ /* Destroy rx mode mutex */
+ spnic_mutex_destroy(&nic_dev->rx_mode_mutex);
+
+ spnic_free_nic_hwdev(nic_dev->hwdev);
spnic_free_hwdev(nic_dev->hwdev);
eth_dev->dev_ops = NULL;
@@ -113,6 +320,34 @@ static int spnic_dev_close(struct rte_eth_dev *eth_dev)
return 0;
}
+
+static int spnic_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
+{
+ struct spnic_nic_dev *nic_dev = SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ int err = 0;
+
+ PMD_DRV_LOG(INFO, "Set port mtu, port_id: %d, mtu: %d, max_pkt_len: %d",
+ dev->data->port_id, mtu, SPNIC_MTU_TO_PKTLEN(mtu));
+
+ if (mtu < SPNIC_MIN_MTU_SIZE || mtu > SPNIC_MAX_MTU_SIZE) {
+ PMD_DRV_LOG(ERR, "Invalid mtu: %d, must between %d and %d",
+ mtu, SPNIC_MIN_MTU_SIZE, SPNIC_MAX_MTU_SIZE);
+ return -EINVAL;
+ }
+
+ err = spnic_set_port_mtu(nic_dev->hwdev, mtu);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Set port mtu failed, err: %d", err);
+ return err;
+ }
+
+ /* Update max frame size */
+ dev->data->dev_conf.rxmode.mtu = SPNIC_MTU_TO_PKTLEN(mtu);
+ nic_dev->mtu_size = mtu;
+
+ return err;
+}
+
/**
* Update MAC address
*
@@ -233,6 +468,23 @@ static int spnic_mac_addr_add(struct rte_eth_dev *dev,
return 0;
}
+static void spnic_delete_mc_addr_list(struct spnic_nic_dev *nic_dev)
+{
+ u16 func_id;
+ u32 i;
+
+ func_id = spnic_global_func_id(nic_dev->hwdev);
+
+ for (i = 0; i < SPNIC_MAX_MC_MAC_ADDRS; i++) {
+ if (rte_is_zero_ether_addr(&nic_dev->mc_list[i]))
+ break;
+
+ spnic_del_mac(nic_dev->hwdev, nic_dev->mc_list[i].addr_bytes,
+ 0, func_id);
+ memset(&nic_dev->mc_list[i], 0, sizeof(struct rte_ether_addr));
+ }
+}
+
/**
* Set multicast MAC address
*
@@ -287,7 +539,15 @@ static int spnic_set_mc_addr_list(struct rte_eth_dev *dev,
return 0;
}
+
static const struct eth_dev_ops spnic_pmd_ops = {
+ .dev_set_link_up = spnic_dev_set_link_up,
+ .dev_set_link_down = spnic_dev_set_link_down,
+ .link_update = spnic_link_update,
+ .dev_start = spnic_dev_start,
+ .dev_stop = spnic_dev_stop,
+ .dev_close = spnic_dev_close,
+ .mtu_set = spnic_dev_set_mtu,
.mac_addr_set = spnic_set_mac_addr,
.mac_addr_remove = spnic_mac_addr_remove,
.mac_addr_add = spnic_mac_addr_add,
@@ -295,6 +555,11 @@ static const struct eth_dev_ops spnic_pmd_ops = {
};
static const struct eth_dev_ops spnic_pmd_vf_ops = {
+ .link_update = spnic_link_update,
+ .dev_start = spnic_dev_start,
+ .dev_stop = spnic_dev_stop,
+ .dev_close = spnic_dev_close,
+ .mtu_set = spnic_dev_set_mtu,
.mac_addr_set = spnic_set_mac_addr,
.mac_addr_remove = spnic_mac_addr_remove,
.mac_addr_add = spnic_mac_addr_add,
@@ -341,6 +606,66 @@ static int spnic_init_mac_table(struct rte_eth_dev *eth_dev)
return 0;
}
+static int spnic_pf_get_default_cos(struct spnic_hwdev *hwdev, u8 *cos_id)
+{
+ u8 default_cos = 0;
+ u8 valid_cos_bitmap;
+ u8 i;
+
+ valid_cos_bitmap = hwdev->cfg_mgmt->svc_cap.cos_valid_bitmap;
+ if (!valid_cos_bitmap) {
+ PMD_DRV_LOG(ERR, "PF has none cos to support\n");
+ return -EFAULT;
+ }
+
+ for (i = 0; i < SPNIC_COS_NUM_MAX; i++) {
+ if (valid_cos_bitmap & BIT(i))
+ /* Find max cos id as default cos */
+ default_cos = i;
+ }
+
+ *cos_id = default_cos;
+
+ return 0;
+}
+
+static int spnic_init_default_cos(struct spnic_nic_dev *nic_dev)
+{
+ u8 cos_id = 0;
+ int err;
+
+ if (!SPNIC_IS_VF(nic_dev->hwdev)) {
+ err = spnic_pf_get_default_cos(nic_dev->hwdev, &cos_id);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Get PF default cos failed, err: %d",
+ err);
+ return err;
+ }
+ } else {
+ err = spnic_vf_get_default_cos(nic_dev->hwdev, &cos_id);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Get VF default cos failed, err: %d",
+ err);
+ return err;
+ }
+ }
+
+ nic_dev->default_cos = cos_id;
+ PMD_DRV_LOG(INFO, "Default cos %d", nic_dev->default_cos);
+ return 0;
+}
+
+static int spnic_set_default_hw_feature(struct spnic_nic_dev *nic_dev)
+{
+ int err;
+
+ err = spnic_init_default_cos(nic_dev);
+ if (err)
+ return err;
+
+ return 0;
+}
+
static int spnic_func_init(struct rte_eth_dev *eth_dev)
{
struct spnic_nic_dev *nic_dev = NULL;
@@ -411,10 +736,28 @@ static int spnic_func_init(struct rte_eth_dev *eth_dev)
goto init_hwdev_fail;
}
+ nic_dev->max_sqs = spnic_func_max_sqs(nic_dev->hwdev);
+ nic_dev->max_rqs = spnic_func_max_rqs(nic_dev->hwdev);
+
if (SPNIC_FUNC_TYPE(nic_dev->hwdev) == TYPE_VF)
eth_dev->dev_ops = &spnic_pmd_vf_ops;
else
eth_dev->dev_ops = &spnic_pmd_ops;
+
+ err = spnic_init_nic_hwdev(nic_dev->hwdev);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Init nic hwdev failed, dev_name: %s",
+ eth_dev->data->name);
+ goto init_nic_hwdev_fail;
+ }
+
+ err = spnic_init_sw_rxtxqs(nic_dev);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Init sw rxqs or txqs failed, dev_name: %s",
+ eth_dev->data->name);
+ goto init_sw_rxtxqs_fail;
+ }
+
err = spnic_init_mac_table(eth_dev);
if (err) {
PMD_DRV_LOG(ERR, "Init mac table failed, dev_name: %s",
@@ -422,6 +765,16 @@ static int spnic_func_init(struct rte_eth_dev *eth_dev)
goto init_mac_table_fail;
}
+ /* Set hardware feature to default status */
+ err = spnic_set_default_hw_feature(nic_dev);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Set hw default features failed, dev_name: %s",
+ eth_dev->data->name);
+ goto set_default_feature_fail;
+ }
+
+ spnic_mutex_init(&nic_dev->rx_mode_mutex, NULL);
+
rte_bit_relaxed_set32(SPNIC_DEV_INTR_EN, &nic_dev->dev_status);
rte_bit_relaxed_set32(SPNIC_DEV_INIT, &nic_dev->dev_status);
@@ -430,7 +783,16 @@ static int spnic_func_init(struct rte_eth_dev *eth_dev)
return 0;
+set_default_feature_fail:
+ spnic_deinit_mac_addr(eth_dev);
+
init_mac_table_fail:
+ spnic_deinit_sw_rxtxqs(nic_dev);
+
+init_sw_rxtxqs_fail:
+ spnic_free_nic_hwdev(nic_dev->hwdev);
+
+init_nic_hwdev_fail:
spnic_free_hwdev(nic_dev->hwdev);
eth_dev->dev_ops = NULL;
diff --git a/drivers/net/spnic/spnic_ethdev.h b/drivers/net/spnic/spnic_ethdev.h
index 654234aaa4..321db389dc 100644
--- a/drivers/net/spnic/spnic_ethdev.h
+++ b/drivers/net/spnic/spnic_ethdev.h
@@ -4,6 +4,7 @@
#ifndef _SPNIC_ETHDEV_H_
#define _SPNIC_ETHDEV_H_
+#define SPNIC_DEV_NAME_LEN 32
#define SPNIC_UINT32_BIT_SIZE (CHAR_BIT * sizeof(uint32_t))
#define SPNIC_VFTA_SIZE (4096 / SPNIC_UINT32_BIT_SIZE)
@@ -16,7 +17,25 @@ enum spnic_dev_status {
SPNIC_DEV_INTR_EN
};
-#define SPNIC_DEV_NAME_LEN 32
+enum nic_feature_cap {
+ NIC_F_CSUM = BIT(0),
+ NIC_F_SCTP_CRC = BIT(1),
+ NIC_F_TSO = BIT(2),
+ NIC_F_LRO = BIT(3),
+ NIC_F_UFO = BIT(4),
+ NIC_F_RSS = BIT(5),
+ NIC_F_RX_VLAN_FILTER = BIT(6),
+ NIC_F_RX_VLAN_STRIP = BIT(7),
+ NIC_F_TX_VLAN_INSERT = BIT(8),
+ NIC_F_VXLAN_OFFLOAD = BIT(9),
+ NIC_F_IPSEC_OFFLOAD = BIT(10),
+ NIC_F_FDIR = BIT(11),
+ NIC_F_PROMISC = BIT(12),
+ NIC_F_ALLMULTI = BIT(13),
+};
+
+#define DEFAULT_DRV_FEATURE 0x3FFF
+
struct spnic_nic_dev {
struct spnic_hwdev *hwdev; /* Hardware device */
@@ -53,6 +72,7 @@ struct spnic_nic_dev {
struct rte_ether_addr *mc_list;
char dev_name[SPNIC_DEV_NAME_LEN];
+ u64 feature_cap;
u32 vfta[SPNIC_VFTA_SIZE]; /* VLAN bitmap */
};
--
2.27.0
* [PATCH v1 11/25] net/spnic: add queue pairs context initialization
2021-12-18 2:51 [PATCH v1 00/25] Net/SPNIC: support SPNIC into DPDK 22.03 Yanling Song
` (9 preceding siblings ...)
2021-12-18 2:51 ` [PATCH v1 10/25] net/spnic: add function info initialization Yanling Song
@ 2021-12-18 2:51 ` Yanling Song
2021-12-18 2:51 ` [PATCH v1 12/25] net/spnic: support mbuf handling of Tx/Rx Yanling Song
` (13 subsequent siblings)
24 siblings, 0 replies; 32+ messages in thread
From: Yanling Song @ 2021-12-18 2:51 UTC (permalink / raw)
To: dev; +Cc: songyl, yanling.song, yanggan, ferruh.yigit
This patch adds the initialization of Tx/Rx queue contexts
and the negotiation of NIC features.
Signed-off-by: Yanling Song <songyl@ramaxel.com>
---
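For reviewers, a rough usage sketch of the new interfaces (not part of the patch;
the wrapper name below is made up for illustration and assumes the driver's compat
typedefs and the DEFAULT_DRV_FEATURE mask from spnic_ethdev.h): the feature bitmap
is read from hardware, masked against what the PMD implements, confirmed back to
hardware, and only then are the queue-pair contexts programmed.

/* Illustrative only -- example_negotiate_and_init() does not exist in the driver */
static int example_negotiate_and_init(struct spnic_nic_dev *nic_dev)
{
	u64 features = 0;
	int err;

	/* 1. Read the feature bitmap the firmware offers */
	err = spnic_get_feature_from_hw(nic_dev->hwdev, &features, 1);
	if (err)
		return err;

	/* 2. Keep only the features this PMD implements */
	nic_dev->feature_cap = features & DEFAULT_DRV_FEATURE;

	/* 3. Confirm the agreed feature set back to hardware */
	err = spnic_set_feature_to_hw(nic_dev->hwdev, &nic_dev->feature_cap, 1);
	if (err)
		return err;

	/* 4. Program SQ/RQ contexts and the root context afterwards */
	return spnic_init_qp_ctxts(nic_dev);
}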
drivers/net/spnic/base/spnic_hw_comm.c | 101 ++++
drivers/net/spnic/base/spnic_hw_comm.h | 6 +
drivers/net/spnic/base/spnic_nic_cfg.c | 76 +++
drivers/net/spnic/base/spnic_nic_cfg.h | 65 ++-
drivers/net/spnic/meson.build | 3 +-
drivers/net/spnic/spnic_ethdev.c | 57 +-
drivers/net/spnic/spnic_io.c | 738 +++++++++++++++++++++++++
drivers/net/spnic/spnic_io.h | 154 ++++++
drivers/net/spnic/spnic_rx.h | 113 ++++
drivers/net/spnic/spnic_tx.h | 62 +++
10 files changed, 1370 insertions(+), 5 deletions(-)
create mode 100644 drivers/net/spnic/spnic_io.c
create mode 100644 drivers/net/spnic/spnic_io.h
create mode 100644 drivers/net/spnic/spnic_rx.h
create mode 100644 drivers/net/spnic/spnic_tx.h
diff --git a/drivers/net/spnic/base/spnic_hw_comm.c b/drivers/net/spnic/base/spnic_hw_comm.c
index 5cb607cf03..1c751f2403 100644
--- a/drivers/net/spnic/base/spnic_hw_comm.c
+++ b/drivers/net/spnic/base/spnic_hw_comm.c
@@ -217,6 +217,107 @@ int spnic_func_reset(void *hwdev, u64 reset_flag)
return 0;
}
+int spnic_convert_rx_buf_size(u32 rx_buf_sz, u32 *match_sz)
+{
+ u32 i, num_hw_types, best_match_sz;
+
+ if (unlikely(!match_sz || rx_buf_sz < SPNIC_RX_BUF_SIZE_32B))
+ return -EINVAL;
+
+ if (rx_buf_sz >= SPNIC_RX_BUF_SIZE_16K) {
+ best_match_sz = SPNIC_RX_BUF_SIZE_16K;
+ goto size_matched;
+ }
+
+ num_hw_types = sizeof(spnic_hw_rx_buf_size) /
+ sizeof(spnic_hw_rx_buf_size[0]);
+ best_match_sz = spnic_hw_rx_buf_size[0];
+ for (i = 0; i < num_hw_types; i++) {
+ if (rx_buf_sz == spnic_hw_rx_buf_size[i]) {
+ best_match_sz = spnic_hw_rx_buf_size[i];
+ break;
+ } else if (rx_buf_sz < spnic_hw_rx_buf_size[i]) {
+ break;
+ }
+ best_match_sz = spnic_hw_rx_buf_size[i];
+ }
+
+size_matched:
+ *match_sz = best_match_sz;
+
+ return 0;
+}
+
+static u16 get_hw_rx_buf_size(u32 rx_buf_sz)
+{
+ u16 num_hw_types = sizeof(spnic_hw_rx_buf_size) /
+ sizeof(spnic_hw_rx_buf_size[0]);
+ u16 i;
+
+ for (i = 0; i < num_hw_types; i++) {
+ if (spnic_hw_rx_buf_size[i] == rx_buf_sz)
+ return i;
+ }
+
+ PMD_DRV_LOG(WARNING, "Chip can't support rx buf size of %d", rx_buf_sz);
+
+ return DEFAULT_RX_BUF_SIZE; /* Default 2K */
+}
+
+int spnic_set_root_ctxt(void *hwdev, u32 rq_depth, u32 sq_depth, u16 rx_buf_sz)
+{
+ struct spnic_cmd_root_ctxt root_ctxt;
+ u16 out_size = sizeof(root_ctxt);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ memset(&root_ctxt, 0, sizeof(root_ctxt));
+ root_ctxt.func_idx = spnic_global_func_id(hwdev);
+ root_ctxt.set_cmdq_depth = 0;
+ root_ctxt.cmdq_depth = 0;
+ root_ctxt.lro_en = 1;
+ root_ctxt.rq_depth = (u16)ilog2(rq_depth);
+ root_ctxt.rx_buf_sz = get_hw_rx_buf_size(rx_buf_sz);
+ root_ctxt.sq_depth = (u16)ilog2(sq_depth);
+
+ err = spnic_msg_to_mgmt_sync(hwdev, SPNIC_MOD_COMM, MGMT_CMD_SET_VAT,
+ &root_ctxt, sizeof(root_ctxt),
+ &root_ctxt, &out_size, 0);
+ if (err || !out_size || root_ctxt.status) {
+ PMD_DRV_LOG(ERR, "Set root context failed, err: %d, status: 0x%x, out_size: 0x%x",
+ err, root_ctxt.status, out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+int spnic_clean_root_ctxt(void *hwdev)
+{
+ struct spnic_cmd_root_ctxt root_ctxt;
+ u16 out_size = sizeof(root_ctxt);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ memset(&root_ctxt, 0, sizeof(root_ctxt));
+ root_ctxt.func_idx = spnic_global_func_id(hwdev);
+
+ err = spnic_msg_to_mgmt_sync(hwdev, SPNIC_MOD_COMM, MGMT_CMD_SET_VAT,
+ &root_ctxt, sizeof(root_ctxt),
+ &root_ctxt, &out_size, 0);
+ if (err || !out_size || root_ctxt.status) {
+ PMD_DRV_LOG(ERR, "Clean root context failed, err: %d, status: 0x%x, out_size: 0x%x",
+ err, root_ctxt.status, out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
int spnic_set_cmdq_depth(void *hwdev, u16 cmdq_depth)
{
struct spnic_cmd_root_ctxt root_ctxt;
diff --git a/drivers/net/spnic/base/spnic_hw_comm.h b/drivers/net/spnic/base/spnic_hw_comm.h
index 207f0aaeae..f960fbb53f 100644
--- a/drivers/net/spnic/base/spnic_hw_comm.h
+++ b/drivers/net/spnic/base/spnic_hw_comm.h
@@ -180,6 +180,10 @@ int spnic_get_mgmt_version(void *hwdev, char *mgmt_ver, int max_mgmt_len);
int spnic_get_board_info(void *hwdev, struct spnic_board_info *info);
+int spnic_set_root_ctxt(void *hwdev, u32 rq_depth, u32 sq_depth, u16 rx_buf_sz);
+
+int spnic_clean_root_ctxt(void *hwdev);
+
int spnic_get_interrupt_cfg(void *dev, struct interrupt_info *info);
int spnic_set_interrupt_cfg(void *dev, struct interrupt_info info);
@@ -188,6 +192,8 @@ int spnic_set_wq_page_size(void *hwdev, u16 func_idx, u32 page_size);
int spnic_set_cmdq_depth(void *hwdev, u16 cmdq_depth);
+int spnic_convert_rx_buf_size(u32 rx_buf_sz, u32 *match_sz);
+
int spnic_get_comm_features(void *hwdev, u64 *s_feature, u16 size);
int spnic_set_comm_features(void *hwdev, u64 *s_feature, u16 size);
diff --git a/drivers/net/spnic/base/spnic_nic_cfg.c b/drivers/net/spnic/base/spnic_nic_cfg.c
index 886aaea384..25d98d67dd 100644
--- a/drivers/net/spnic/base/spnic_nic_cfg.c
+++ b/drivers/net/spnic/base/spnic_nic_cfg.c
@@ -75,6 +75,42 @@ int l2nic_msg_to_mgmt_sync(void *hwdev, u16 cmd, void *buf_in, u16 in_size,
in_size, buf_out, out_size, 0);
}
+int spnic_set_ci_table(void *hwdev, struct spnic_sq_attr *attr)
+{
+ struct spnic_cmd_cons_idx_attr cons_idx_attr;
+ u16 out_size = sizeof(cons_idx_attr);
+ int err;
+
+ if (!hwdev || !attr)
+ return -EINVAL;
+
+ memset(&cons_idx_attr, 0, sizeof(cons_idx_attr));
+ cons_idx_attr.func_idx = spnic_global_func_id(hwdev);
+ cons_idx_attr.dma_attr_off = attr->dma_attr_off;
+ cons_idx_attr.pending_limit = attr->pending_limit;
+ cons_idx_attr.coalescing_time = attr->coalescing_time;
+
+ if (attr->intr_en) {
+ cons_idx_attr.intr_en = attr->intr_en;
+ cons_idx_attr.intr_idx = attr->intr_idx;
+ }
+
+ cons_idx_attr.l2nic_sqn = attr->l2nic_sqn;
+ cons_idx_attr.ci_addr = attr->ci_dma_base;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, SPNIC_CMD_SQ_CI_ATTR_SET,
+ &cons_idx_attr, sizeof(cons_idx_attr),
+ &cons_idx_attr, &out_size);
+ if (err || !out_size || cons_idx_attr.msg_head.status) {
+ PMD_DRV_LOG(ERR, "Set ci attribute table failed, err: %d, "
+ "status: 0x%x, out_size: 0x%x",
+ err, cons_idx_attr.msg_head.status, out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
static int spnic_check_mac_info(u8 status, u16 vlan_id)
{
if ((status && status != SPNIC_MGMT_STATUS_EXIST &&
@@ -406,6 +442,46 @@ int spnic_set_port_mtu(void *hwdev, u16 new_mtu)
&func_tbl_cfg);
}
+static int nic_feature_nego(void *hwdev, u8 opcode, u64 *s_feature, u16 size)
+{
+ struct spnic_cmd_feature_nego feature_nego;
+ u16 out_size = sizeof(feature_nego);
+ int err;
+
+ if (!hwdev || !s_feature || size > MAX_FEATURE_QWORD)
+ return -EINVAL;
+
+ memset(&feature_nego, 0, sizeof(feature_nego));
+ feature_nego.func_id = spnic_global_func_id(hwdev);
+ feature_nego.opcode = opcode;
+ if (opcode == SPNIC_CMD_OP_SET)
+ memcpy(feature_nego.s_feature, s_feature, size * sizeof(u64));
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, SPNIC_CMD_FEATURE_NEGO,
+ &feature_nego, sizeof(feature_nego),
+ &feature_nego, &out_size);
+ if (err || !out_size || feature_nego.msg_head.status) {
+ PMD_DRV_LOG(ERR, "Failed to negotiate nic feature, err:%d, status: 0x%x, out_size: 0x%x\n",
+ err, feature_nego.msg_head.status, out_size);
+ return -EFAULT;
+ }
+
+ if (opcode == SPNIC_CMD_OP_GET)
+ memcpy(s_feature, feature_nego.s_feature, size * sizeof(u64));
+
+ return 0;
+}
+
+int spnic_get_feature_from_hw(void *hwdev, u64 *s_feature, u16 size)
+{
+ return nic_feature_nego(hwdev, SPNIC_CMD_OP_GET, s_feature, size);
+}
+
+int spnic_set_feature_to_hw(void *hwdev, u64 *s_feature, u16 size)
+{
+ return nic_feature_nego(hwdev, SPNIC_CMD_OP_SET, s_feature, size);
+}
+
static int spnic_vf_func_init(void *hwdev)
{
struct spnic_cmd_register_vf register_info;
diff --git a/drivers/net/spnic/base/spnic_nic_cfg.h b/drivers/net/spnic/base/spnic_nic_cfg.h
index 98cad645d2..ce9792f8ee 100644
--- a/drivers/net/spnic/base/spnic_nic_cfg.h
+++ b/drivers/net/spnic/base/spnic_nic_cfg.h
@@ -36,6 +36,15 @@
#define SPNIC_MGMT_STATUS_EXIST 0x6
#define CHECK_IPSU_15BIT 0x8000
+struct spnic_cmd_feature_nego {
+ struct mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 opcode; /* 1: set, 0: get */
+ u8 rsvd;
+ u64 s_feature[MAX_FEATURE_QWORD];
+};
+
/* Structures for port info */
struct nic_port_info {
u8 port_type;
@@ -69,6 +78,30 @@ enum nic_speed_level {
LINK_SPEED_LEVELS,
};
+struct spnic_sq_attr {
+ u8 dma_attr_off;
+ u8 pending_limit;
+ u8 coalescing_time;
+ u8 intr_en;
+ u16 intr_idx;
+ u32 l2nic_sqn;
+ u64 ci_dma_base;
+};
+
+struct spnic_cmd_cons_idx_attr {
+ struct mgmt_msg_head msg_head;
+
+ u16 func_idx;
+ u8 dma_attr_off;
+ u8 pending_limit;
+ u8 coalescing_time;
+ u8 intr_en;
+ u16 intr_idx;
+ u32 l2nic_sqn;
+ u32 rsvd;
+ u64 ci_addr;
+};
+
struct spnic_port_mac_set {
struct mgmt_msg_head msg_head;
@@ -88,7 +121,6 @@ struct spnic_port_mac_update {
u16 rsvd2;
u8 new_mac[ETH_ALEN];
};
-
struct spnic_cmd_port_info {
struct mgmt_msg_head msg_head;
@@ -193,6 +225,9 @@ struct spnic_cmd_set_func_tbl {
struct spnic_func_tbl_cfg tbl_cfg;
};
+#define SPNIC_CMD_OP_GET 0
+#define SPNIC_CMD_OP_SET 1
+
enum {
SPNIC_IFLA_VF_LINK_STATE_AUTO, /* Link state of the uplink */
SPNIC_IFLA_VF_LINK_STATE_ENABLE, /* Link always up */
@@ -223,6 +258,8 @@ struct spnic_cmd_register_vf {
int l2nic_msg_to_mgmt_sync(void *hwdev, u16 cmd, void *buf_in, u16 in_size,
void *buf_out, u16 *out_size);
+int spnic_set_ci_table(void *hwdev, struct spnic_sq_attr *attr);
+
/**
* Update MAC address to hardware
*
@@ -390,4 +427,30 @@ int spnic_init_function_table(void *hwdev, u16 rx_buff_len);
* @retval non-zero : Failure
*/
int spnic_vf_get_default_cos(void *hwdev, u8 *cos_id);
+
+/**
+ * Get service feature HW supported
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev
+ * @param[in] size
+ * s_feature's array size
+ * @param[out] s_feature
+ * s_feature HW supported
+ * @retval zero: Success
+ * @retval non-zero: Failure
+ */
+int spnic_get_feature_from_hw(void *hwdev, u64 *s_feature, u16 size);
+
+/**
+ * Set service feature driver supported to hardware
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev
+ * @param[in] s_feature
+ * Feature bitmap the driver supports
+ * @param[in] size
+ * s_feature's array size
+ *
+ * @retval zero: Success
+ * @retval non-zero: Failure
+ */
+int spnic_set_feature_to_hw(void *hwdev, u64 *s_feature, u16 size);
+
#endif /* _SPNIC_NIC_CFG_H_ */
diff --git a/drivers/net/spnic/meson.build b/drivers/net/spnic/meson.build
index 042d2fe6e1..20d5151a8d 100644
--- a/drivers/net/spnic/meson.build
+++ b/drivers/net/spnic/meson.build
@@ -6,6 +6,7 @@ objs = [base_objs]
sources = files(
'spnic_ethdev.c',
- )
+ 'spnic_io.c',
+)
includes += include_directories('base')
diff --git a/drivers/net/spnic/spnic_ethdev.c b/drivers/net/spnic/spnic_ethdev.c
index 7f73e70df1..4205ab43a4 100644
--- a/drivers/net/spnic/spnic_ethdev.c
+++ b/drivers/net/spnic/spnic_ethdev.c
@@ -22,14 +22,25 @@
#include "base/spnic_hw_comm.h"
#include "base/spnic_nic_cfg.h"
#include "base/spnic_nic_event.h"
+#include "spnic_io.h"
+#include "spnic_tx.h"
+#include "spnic_rx.h"
#include "spnic_ethdev.h"
/* Driver-specific log messages type */
int spnic_logtype;
+#define SPNIC_DEFAULT_RX_FREE_THRESH 32
+#define SPNIC_DEFAULT_TX_FREE_THRESH 32
+
#define SPNIC_MAX_UC_MAC_ADDRS 128
#define SPNIC_MAX_MC_MAC_ADDRS 128
+#define SPNIC_MAX_QUEUE_DEPTH 16384
+#define SPNIC_MIN_QUEUE_DEPTH 128
+#define SPNIC_TXD_ALIGN 1
+#define SPNIC_RXD_ALIGN 1
+
/**
* Deinit mac_vlan table in hardware.
*
@@ -219,6 +230,7 @@ static void spnic_deinit_sw_rxtxqs(struct spnic_nic_dev *nic_dev)
static int spnic_dev_start(struct rte_eth_dev *eth_dev)
{
struct spnic_nic_dev *nic_dev;
+ u64 nic_features;
int err;
nic_dev = SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(eth_dev);
@@ -230,6 +242,26 @@ static int spnic_dev_start(struct rte_eth_dev *eth_dev)
goto init_func_tbl_fail;
}
+ nic_features = spnic_get_driver_feature(nic_dev->hwdev);
+ nic_features &= DEFAULT_DRV_FEATURE;
+ spnic_update_driver_feature(nic_dev->hwdev, nic_features);
+
+ err = spnic_set_feature_to_hw(nic_dev->hwdev, &nic_dev->feature_cap, 1);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Failed to set nic features to hardware, err %d\n",
+ err);
+ goto get_feature_err;
+ }
+
+
+ /* Init txq and rxq context */
+ err = spnic_init_qp_ctxts(nic_dev);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Init qp context failed, dev_name: %s",
+ eth_dev->data->name);
+ goto init_qp_fail;
+ }
+
/* Set default mtu */
err = spnic_set_port_mtu(nic_dev->hwdev, nic_dev->mtu_size);
if (err) {
@@ -238,7 +270,6 @@ static int spnic_dev_start(struct rte_eth_dev *eth_dev)
goto set_mtu_fail;
}
-
/* Update eth_dev link status */
if (eth_dev->data->dev_conf.intr_conf.lsc != 0)
(void)spnic_link_update(eth_dev, 0);
@@ -248,6 +279,10 @@ static int spnic_dev_start(struct rte_eth_dev *eth_dev)
return 0;
set_mtu_fail:
+ spnic_free_qp_ctxts(nic_dev->hwdev);
+
+init_qp_fail:
+get_feature_err:
init_func_tbl_fail:
return err;
@@ -278,6 +313,9 @@ static int spnic_dev_stop(struct rte_eth_dev *dev)
memset(&link, 0, sizeof(link));
(void)rte_eth_linkstatus_set(dev, &link);
+ /* Clean root context */
+ spnic_free_qp_ctxts(nic_dev->hwdev);
+
return 0;
}
@@ -290,7 +328,7 @@ static int spnic_dev_stop(struct rte_eth_dev *dev)
static int spnic_dev_close(struct rte_eth_dev *eth_dev)
{
struct spnic_nic_dev *nic_dev =
- SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(eth_dev);
+ SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(eth_dev);
if (rte_bit_relaxed_test_and_set32(SPNIC_DEV_CLOSE, &nic_dev->dev_status)) {
PMD_DRV_LOG(WARNING, "Device %s already closed",
@@ -306,7 +344,6 @@ static int spnic_dev_close(struct rte_eth_dev *eth_dev)
rte_bit_relaxed_clear32(SPNIC_DEV_INTR_EN, &nic_dev->dev_status);
-
/* Destroy rx mode mutex */
spnic_mutex_destroy(&nic_dev->rx_mode_mutex);
@@ -736,6 +773,12 @@ static int spnic_func_init(struct rte_eth_dev *eth_dev)
goto init_hwdev_fail;
}
+ if (!spnic_support_nic(nic_dev->hwdev)) {
+ PMD_DRV_LOG(ERR, "Hw of %s don't support nic\n",
+ eth_dev->data->name);
+ goto init_hwdev_fail;
+ }
+
nic_dev->max_sqs = spnic_func_max_sqs(nic_dev->hwdev);
nic_dev->max_rqs = spnic_func_max_rqs(nic_dev->hwdev);
@@ -751,6 +794,13 @@ static int spnic_func_init(struct rte_eth_dev *eth_dev)
goto init_nic_hwdev_fail;
}
+ err = spnic_get_feature_from_hw(nic_dev->hwdev, &nic_dev->feature_cap, 1);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Get nic feature from hardware failed, dev_name: %s",
+ eth_dev->data->name);
+ goto get_cap_fail;
+ }
+
err = spnic_init_sw_rxtxqs(nic_dev);
if (err) {
PMD_DRV_LOG(ERR, "Init sw rxqs or txqs failed, dev_name: %s",
@@ -792,6 +842,7 @@ static int spnic_func_init(struct rte_eth_dev *eth_dev)
init_sw_rxtxqs_fail:
spnic_free_nic_hwdev(nic_dev->hwdev);
+get_cap_fail:
init_nic_hwdev_fail:
spnic_free_hwdev(nic_dev->hwdev);
eth_dev->dev_ops = NULL;
diff --git a/drivers/net/spnic/spnic_io.c b/drivers/net/spnic/spnic_io.c
new file mode 100644
index 0000000000..b4358f530a
--- /dev/null
+++ b/drivers/net/spnic/spnic_io.c
@@ -0,0 +1,738 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Ramaxel Memory Technology, Ltd
+ */
+
+#include <rte_io.h>
+#include <rte_pci.h>
+#include <rte_bus_pci.h>
+#include <ethdev_pci.h>
+#include <rte_mbuf.h>
+#include <rte_malloc.h>
+#include <rte_memcpy.h>
+#include <rte_mempool.h>
+#include <rte_errno.h>
+#include <rte_ether.h>
+#include <rte_ethdev_core.h>
+#include <ethdev_driver.h>
+
+#include "base/spnic_compat.h"
+#include "base/spnic_cmd.h"
+#include "base/spnic_wq.h"
+#include "base/spnic_mgmt.h"
+#include "base/spnic_cmdq.h"
+#include "base/spnic_hwdev.h"
+#include "base/spnic_hw_comm.h"
+#include "base/spnic_nic_cfg.h"
+#include "base/spnic_hw_cfg.h"
+#include "spnic_io.h"
+#include "spnic_tx.h"
+#include "spnic_rx.h"
+#include "spnic_ethdev.h"
+
+#define SPNIC_DEAULT_TX_CI_PENDING_LIMIT 0
+#define SPNIC_DEAULT_TX_CI_COALESCING_TIME 0
+#define SPNIC_DEAULT_DROP_THD_ON 0xFFFF
+#define SPNIC_DEAULT_DROP_THD_OFF 0
+
+#define WQ_PREFETCH_MAX 4
+#define WQ_PREFETCH_MIN 1
+#define WQ_PREFETCH_THRESHOLD 256
+
+#define SPNIC_Q_CTXT_MAX 31
+
+enum spnic_qp_ctxt_type {
+ SPNIC_QP_CTXT_TYPE_SQ,
+ SPNIC_QP_CTXT_TYPE_RQ,
+};
+
+struct spnic_qp_ctxt_header {
+ u16 num_queues;
+ u16 queue_type;
+ u16 start_qid;
+ u16 rsvd;
+};
+
+struct spnic_sq_ctxt {
+ u32 ci_pi;
+ u32 drop_mode_sp;
+ u32 wq_pfn_hi_owner;
+ u32 wq_pfn_lo;
+
+ u32 rsvd0;
+ u32 pkt_drop_thd;
+ u32 global_sq_id;
+ u32 vlan_ceq_attr;
+
+ u32 pref_cache;
+ u32 pref_ci_owner;
+ u32 pref_wq_pfn_hi_ci;
+ u32 pref_wq_pfn_lo;
+
+ u32 rsvd8;
+ u32 rsvd9;
+ u32 wq_block_pfn_hi;
+ u32 wq_block_pfn_lo;
+};
+
+struct spnic_rq_ctxt {
+ u32 ci_pi;
+ u32 ceq_attr;
+ u32 wq_pfn_hi_type_owner;
+ u32 wq_pfn_lo;
+
+ u32 rsvd[3];
+ u32 cqe_sge_len;
+
+ u32 pref_cache;
+ u32 pref_ci_owner;
+ u32 pref_wq_pfn_hi_ci;
+ u32 pref_wq_pfn_lo;
+
+ u32 pi_paddr_hi;
+ u32 pi_paddr_lo;
+ u32 wq_block_pfn_hi;
+ u32 wq_block_pfn_lo;
+};
+
+struct spnic_sq_ctxt_block {
+ struct spnic_qp_ctxt_header cmdq_hdr;
+ struct spnic_sq_ctxt sq_ctxt[SPNIC_Q_CTXT_MAX];
+};
+
+struct spnic_rq_ctxt_block {
+ struct spnic_qp_ctxt_header cmdq_hdr;
+ struct spnic_rq_ctxt rq_ctxt[SPNIC_Q_CTXT_MAX];
+};
+
+struct spnic_clean_queue_ctxt {
+ struct spnic_qp_ctxt_header cmdq_hdr;
+ u32 rsvd;
+};
+
+#define SQ_CTXT_SIZE(num_sqs) ((u16)(sizeof(struct spnic_qp_ctxt_header) \
+ + (num_sqs) * sizeof(struct spnic_sq_ctxt)))
+
+#define RQ_CTXT_SIZE(num_rqs) ((u16)(sizeof(struct spnic_qp_ctxt_header) \
+ + (num_rqs) * sizeof(struct spnic_rq_ctxt)))
+
+#define CI_IDX_HIGH_SHIFH 12
+
+#define CI_HIGN_IDX(val) ((val) >> CI_IDX_HIGH_SHIFH)
+
+#define SQ_CTXT_PI_IDX_SHIFT 0
+#define SQ_CTXT_CI_IDX_SHIFT 16
+
+#define SQ_CTXT_PI_IDX_MASK 0xFFFFU
+#define SQ_CTXT_CI_IDX_MASK 0xFFFFU
+
+#define SQ_CTXT_CI_PI_SET(val, member) (((val) & \
+ SQ_CTXT_##member##_MASK) \
+ << SQ_CTXT_##member##_SHIFT)
+
+#define SQ_CTXT_MODE_SP_FLAG_SHIFT 0
+#define SQ_CTXT_MODE_PKT_DROP_SHIFT 1
+
+#define SQ_CTXT_MODE_SP_FLAG_MASK 0x1U
+#define SQ_CTXT_MODE_PKT_DROP_MASK 0x1U
+
+#define SQ_CTXT_MODE_SET(val, member) (((val) & \
+ SQ_CTXT_MODE_##member##_MASK) \
+ << SQ_CTXT_MODE_##member##_SHIFT)
+
+#define SQ_CTXT_WQ_PAGE_HI_PFN_SHIFT 0
+#define SQ_CTXT_WQ_PAGE_OWNER_SHIFT 23
+
+#define SQ_CTXT_WQ_PAGE_HI_PFN_MASK 0xFFFFFU
+#define SQ_CTXT_WQ_PAGE_OWNER_MASK 0x1U
+
+#define SQ_CTXT_WQ_PAGE_SET(val, member) (((val) & \
+ SQ_CTXT_WQ_PAGE_##member##_MASK) \
+ << SQ_CTXT_WQ_PAGE_##member##_SHIFT)
+
+#define SQ_CTXT_PKT_DROP_THD_ON_SHIFT 0
+#define SQ_CTXT_PKT_DROP_THD_OFF_SHIFT 16
+
+#define SQ_CTXT_PKT_DROP_THD_ON_MASK 0xFFFFU
+#define SQ_CTXT_PKT_DROP_THD_OFF_MASK 0xFFFFU
+
+#define SQ_CTXT_PKT_DROP_THD_SET(val, member) (((val) & \
+ SQ_CTXT_PKT_DROP_##member##_MASK) \
+ << SQ_CTXT_PKT_DROP_##member##_SHIFT)
+
+#define SQ_CTXT_GLOBAL_SQ_ID_SHIFT 0
+
+#define SQ_CTXT_GLOBAL_SQ_ID_MASK 0x1FFFU
+
+#define SQ_CTXT_GLOBAL_QUEUE_ID_SET(val, member) (((val) & \
+ SQ_CTXT_##member##_MASK) \
+ << SQ_CTXT_##member##_SHIFT)
+
+
+#define SQ_CTXT_VLAN_TAG_SHIFT 0
+#define SQ_CTXT_VLAN_TYPE_SEL_SHIFT 16
+#define SQ_CTXT_VLAN_INSERT_MODE_SHIFT 19
+#define SQ_CTXT_VLAN_CEQ_EN_SHIFT 23
+
+#define SQ_CTXT_VLAN_TAG_MASK 0xFFFFU
+#define SQ_CTXT_VLAN_TYPE_SEL_MASK 0x7U
+#define SQ_CTXT_VLAN_INSERT_MODE_MASK 0x3U
+#define SQ_CTXT_VLAN_CEQ_EN_MASK 0x1U
+
+#define SQ_CTXT_VLAN_CEQ_SET(val, member) (((val) & \
+ SQ_CTXT_VLAN_##member##_MASK) \
+ << SQ_CTXT_VLAN_##member##_SHIFT)
+
+#define SQ_CTXT_PREF_CACHE_THRESHOLD_SHIFT 0
+#define SQ_CTXT_PREF_CACHE_MAX_SHIFT 14
+#define SQ_CTXT_PREF_CACHE_MIN_SHIFT 25
+
+#define SQ_CTXT_PREF_CACHE_THRESHOLD_MASK 0x3FFFU
+#define SQ_CTXT_PREF_CACHE_MAX_MASK 0x7FFU
+#define SQ_CTXT_PREF_CACHE_MIN_MASK 0x7FU
+
+#define SQ_CTXT_PREF_CI_HI_SHIFT 0
+#define SQ_CTXT_PREF_OWNER_SHIFT 4
+
+#define SQ_CTXT_PREF_CI_HI_MASK 0xFU
+#define SQ_CTXT_PREF_OWNER_MASK 0x1U
+
+#define SQ_CTXT_PREF_WQ_PFN_HI_SHIFT 0
+#define SQ_CTXT_PREF_CI_LOW_SHIFT 20
+
+#define SQ_CTXT_PREF_WQ_PFN_HI_MASK 0xFFFFFU
+#define SQ_CTXT_PREF_CI_LOW_MASK 0xFFFU
+
+#define SQ_CTXT_PREF_SET(val, member) (((val) & \
+ SQ_CTXT_PREF_##member##_MASK) \
+ << SQ_CTXT_PREF_##member##_SHIFT)
+
+#define SQ_CTXT_WQ_BLOCK_PFN_HI_SHIFT 0
+
+#define SQ_CTXT_WQ_BLOCK_PFN_HI_MASK 0x7FFFFFU
+
+#define SQ_CTXT_WQ_BLOCK_SET(val, member) (((val) & \
+ SQ_CTXT_WQ_BLOCK_##member##_MASK) \
+ << SQ_CTXT_WQ_BLOCK_##member##_SHIFT)
+
+#define RQ_CTXT_PI_IDX_SHIFT 0
+#define RQ_CTXT_CI_IDX_SHIFT 16
+
+#define RQ_CTXT_PI_IDX_MASK 0xFFFFU
+#define RQ_CTXT_CI_IDX_MASK 0xFFFFU
+
+#define RQ_CTXT_CI_PI_SET(val, member) (((val) & \
+ RQ_CTXT_##member##_MASK) \
+ << RQ_CTXT_##member##_SHIFT)
+
+#define RQ_CTXT_CEQ_ATTR_INTR_SHIFT 21
+#define RQ_CTXT_CEQ_ATTR_INTR_ARM_SHIFT 30
+#define RQ_CTXT_CEQ_ATTR_EN_SHIFT 31
+
+#define RQ_CTXT_CEQ_ATTR_INTR_MASK 0x3FFU
+#define RQ_CTXT_CEQ_ATTR_INTR_ARM_MASK 0x1U
+#define RQ_CTXT_CEQ_ATTR_EN_MASK 0x1U
+
+#define RQ_CTXT_CEQ_ATTR_SET(val, member) (((val) & \
+ RQ_CTXT_CEQ_ATTR_##member##_MASK) \
+ << RQ_CTXT_CEQ_ATTR_##member##_SHIFT)
+
+#define RQ_CTXT_WQ_PAGE_HI_PFN_SHIFT 0
+#define RQ_CTXT_WQ_PAGE_WQE_TYPE_SHIFT 28
+#define RQ_CTXT_WQ_PAGE_OWNER_SHIFT 31
+
+#define RQ_CTXT_WQ_PAGE_HI_PFN_MASK 0xFFFFFU
+#define RQ_CTXT_WQ_PAGE_WQE_TYPE_MASK 0x3U
+#define RQ_CTXT_WQ_PAGE_OWNER_MASK 0x1U
+
+#define RQ_CTXT_WQ_PAGE_SET(val, member) (((val) & \
+ RQ_CTXT_WQ_PAGE_##member##_MASK) << \
+ RQ_CTXT_WQ_PAGE_##member##_SHIFT)
+
+#define RQ_CTXT_CQE_LEN_SHIFT 28
+
+#define RQ_CTXT_CQE_LEN_MASK 0x3U
+
+#define RQ_CTXT_CQE_LEN_SET(val, member) (((val) & \
+ RQ_CTXT_##member##_MASK) << \
+ RQ_CTXT_##member##_SHIFT)
+
+#define RQ_CTXT_PREF_CACHE_THRESHOLD_SHIFT 0
+#define RQ_CTXT_PREF_CACHE_MAX_SHIFT 14
+#define RQ_CTXT_PREF_CACHE_MIN_SHIFT 25
+
+#define RQ_CTXT_PREF_CACHE_THRESHOLD_MASK 0x3FFFU
+#define RQ_CTXT_PREF_CACHE_MAX_MASK 0x7FFU
+#define RQ_CTXT_PREF_CACHE_MIN_MASK 0x7FU
+
+#define RQ_CTXT_PREF_CI_HI_SHIFT 0
+#define RQ_CTXT_PREF_OWNER_SHIFT 4
+
+#define RQ_CTXT_PREF_CI_HI_MASK 0xFU
+#define RQ_CTXT_PREF_OWNER_MASK 0x1U
+
+#define RQ_CTXT_PREF_WQ_PFN_HI_SHIFT 0
+#define RQ_CTXT_PREF_CI_LOW_SHIFT 20
+
+#define RQ_CTXT_PREF_WQ_PFN_HI_MASK 0xFFFFFU
+#define RQ_CTXT_PREF_CI_LOW_MASK 0xFFFU
+
+#define RQ_CTXT_PREF_SET(val, member) (((val) & \
+ RQ_CTXT_PREF_##member##_MASK) << \
+ RQ_CTXT_PREF_##member##_SHIFT)
+
+#define RQ_CTXT_WQ_BLOCK_PFN_HI_SHIFT 0
+
+#define RQ_CTXT_WQ_BLOCK_PFN_HI_MASK 0x7FFFFFU
+
+#define RQ_CTXT_WQ_BLOCK_SET(val, member) (((val) & \
+ RQ_CTXT_WQ_BLOCK_##member##_MASK) << \
+ RQ_CTXT_WQ_BLOCK_##member##_SHIFT)
+
+#define SIZE_16BYTES(size) (RTE_ALIGN((size), 16) >> 4)
+
+#define WQ_PAGE_PFN_SHIFT 12
+#define WQ_BLOCK_PFN_SHIFT 9
+
+#define WQ_PAGE_PFN(page_addr) ((page_addr) >> WQ_PAGE_PFN_SHIFT)
+#define WQ_BLOCK_PFN(page_addr) ((page_addr) >> WQ_BLOCK_PFN_SHIFT)
+
+static void
+spnic_qp_prepare_cmdq_header(struct spnic_qp_ctxt_header *qp_ctxt_hdr,
+ enum spnic_qp_ctxt_type ctxt_type,
+ u16 num_queues, u16 q_id)
+{
+ qp_ctxt_hdr->queue_type = ctxt_type;
+ qp_ctxt_hdr->num_queues = num_queues;
+ qp_ctxt_hdr->start_qid = q_id;
+ qp_ctxt_hdr->rsvd = 0;
+
+ spnic_cpu_to_be32(qp_ctxt_hdr, sizeof(*qp_ctxt_hdr));
+}
+
+static void spnic_sq_prepare_ctxt(struct spnic_txq *sq, u16 sq_id,
+ struct spnic_sq_ctxt *sq_ctxt)
+{
+ u64 wq_page_addr;
+ u64 wq_page_pfn, wq_block_pfn;
+ u32 wq_page_pfn_hi, wq_page_pfn_lo;
+ u32 wq_block_pfn_hi, wq_block_pfn_lo;
+ u16 pi_start, ci_start;
+
+ ci_start = sq->cons_idx & sq->q_mask;
+ pi_start = sq->prod_idx & sq->q_mask;
+
+ /* Read the first page from hardware table */
+ wq_page_addr = sq->queue_buf_paddr;
+
+ wq_page_pfn = WQ_PAGE_PFN(wq_page_addr);
+ wq_page_pfn_hi = upper_32_bits(wq_page_pfn);
+ wq_page_pfn_lo = lower_32_bits(wq_page_pfn);
+
+ /* Use 0-level CLA */
+ wq_block_pfn = WQ_BLOCK_PFN(wq_page_addr);
+ wq_block_pfn_hi = upper_32_bits(wq_block_pfn);
+ wq_block_pfn_lo = lower_32_bits(wq_block_pfn);
+
+ sq_ctxt->ci_pi = SQ_CTXT_CI_PI_SET(ci_start, CI_IDX) |
+ SQ_CTXT_CI_PI_SET(pi_start, PI_IDX);
+
+ sq_ctxt->drop_mode_sp = SQ_CTXT_MODE_SET(0, SP_FLAG) |
+ SQ_CTXT_MODE_SET(0, PKT_DROP);
+
+ sq_ctxt->wq_pfn_hi_owner = SQ_CTXT_WQ_PAGE_SET(wq_page_pfn_hi, HI_PFN) |
+ SQ_CTXT_WQ_PAGE_SET(1, OWNER);
+
+ sq_ctxt->wq_pfn_lo = wq_page_pfn_lo;
+
+ sq_ctxt->pkt_drop_thd =
+ SQ_CTXT_PKT_DROP_THD_SET(SPNIC_DEAULT_DROP_THD_ON, THD_ON) |
+ SQ_CTXT_PKT_DROP_THD_SET(SPNIC_DEAULT_DROP_THD_OFF, THD_OFF);
+
+ sq_ctxt->global_sq_id =
+ SQ_CTXT_GLOBAL_QUEUE_ID_SET(sq_id, GLOBAL_SQ_ID);
+
+ /* Insert c-vlan by default */
+ sq_ctxt->vlan_ceq_attr = SQ_CTXT_VLAN_CEQ_SET(0, CEQ_EN) |
+ SQ_CTXT_VLAN_CEQ_SET(1, INSERT_MODE);
+
+ sq_ctxt->rsvd0 = 0;
+
+ sq_ctxt->pref_cache = SQ_CTXT_PREF_SET(WQ_PREFETCH_MIN, CACHE_MIN) |
+ SQ_CTXT_PREF_SET(WQ_PREFETCH_MAX, CACHE_MAX) |
+ SQ_CTXT_PREF_SET(WQ_PREFETCH_THRESHOLD,
+ CACHE_THRESHOLD);
+
+ sq_ctxt->pref_ci_owner =
+ SQ_CTXT_PREF_SET(CI_HIGN_IDX(ci_start), CI_HI) |
+ SQ_CTXT_PREF_SET(1, OWNER);
+
+ sq_ctxt->pref_wq_pfn_hi_ci =
+ SQ_CTXT_PREF_SET(ci_start, CI_LOW) |
+ SQ_CTXT_PREF_SET(wq_page_pfn_hi, WQ_PFN_HI);
+
+ sq_ctxt->pref_wq_pfn_lo = wq_page_pfn_lo;
+
+ sq_ctxt->wq_block_pfn_hi =
+ SQ_CTXT_WQ_BLOCK_SET(wq_block_pfn_hi, PFN_HI);
+
+ sq_ctxt->wq_block_pfn_lo = wq_block_pfn_lo;
+
+ spnic_cpu_to_be32(sq_ctxt, sizeof(*sq_ctxt));
+}
+
+static void spnic_rq_prepare_ctxt(struct spnic_rxq *rq,
+ struct spnic_rq_ctxt *rq_ctxt)
+{
+ u64 wq_page_addr;
+ u64 wq_page_pfn, wq_block_pfn;
+ u32 wq_page_pfn_hi, wq_page_pfn_lo;
+ u32 wq_block_pfn_hi, wq_block_pfn_lo;
+ u16 pi_start, ci_start;
+ u16 wqe_type = rq->wqebb_shift - SPNIC_RQ_WQEBB_SHIFT;
+ u8 intr_disable;
+
+ /* RQ depth is in unit of 8 Bytes */
+ ci_start = (u16)((rq->cons_idx & rq->q_mask) << wqe_type);
+ pi_start = (u16)((rq->prod_idx & rq->q_mask) << wqe_type);
+
+ /* Read the first page from hardware table */
+ wq_page_addr = rq->queue_buf_paddr;
+
+ wq_page_pfn = WQ_PAGE_PFN(wq_page_addr);
+ wq_page_pfn_hi = upper_32_bits(wq_page_pfn);
+ wq_page_pfn_lo = lower_32_bits(wq_page_pfn);
+
+ /* Use 0-level CLA */
+ wq_block_pfn = WQ_BLOCK_PFN(wq_page_addr);
+
+ wq_block_pfn_hi = upper_32_bits(wq_block_pfn);
+ wq_block_pfn_lo = lower_32_bits(wq_block_pfn);
+
+ rq_ctxt->ci_pi = RQ_CTXT_CI_PI_SET(ci_start, CI_IDX) |
+ RQ_CTXT_CI_PI_SET(pi_start, PI_IDX);
+
+ intr_disable = rq->dp_intr_en ? 0 : 1;
+ rq_ctxt->ceq_attr = RQ_CTXT_CEQ_ATTR_SET(intr_disable, EN) |
+ RQ_CTXT_CEQ_ATTR_SET(0, INTR_ARM) |
+ RQ_CTXT_CEQ_ATTR_SET(rq->msix_entry_idx, INTR);
+
+ /* Use 32-byte WQE with SGE for CQE by default */
+ rq_ctxt->wq_pfn_hi_type_owner =
+ RQ_CTXT_WQ_PAGE_SET(wq_page_pfn_hi, HI_PFN) |
+ RQ_CTXT_WQ_PAGE_SET(1, OWNER);
+
+ switch (wqe_type) {
+ case SPNIC_EXTEND_RQ_WQE:
+ /* Use 32Byte WQE with SGE for CQE */
+ rq_ctxt->wq_pfn_hi_type_owner |=
+ RQ_CTXT_WQ_PAGE_SET(0, WQE_TYPE);
+ break;
+ case SPNIC_NORMAL_RQ_WQE:
+ /* Use 16Byte WQE with 32Bytes SGE for CQE */
+ rq_ctxt->wq_pfn_hi_type_owner |=
+ RQ_CTXT_WQ_PAGE_SET(2, WQE_TYPE);
+ rq_ctxt->cqe_sge_len = RQ_CTXT_CQE_LEN_SET(1, CQE_LEN);
+ break;
+ default:
+ PMD_DRV_LOG(INFO, "Invalid rq wqe type: %u", wqe_type);
+ }
+
+ rq_ctxt->wq_pfn_lo = wq_page_pfn_lo;
+
+ rq_ctxt->pref_cache =
+ RQ_CTXT_PREF_SET(WQ_PREFETCH_MIN, CACHE_MIN) |
+ RQ_CTXT_PREF_SET(WQ_PREFETCH_MAX, CACHE_MAX) |
+ RQ_CTXT_PREF_SET(WQ_PREFETCH_THRESHOLD, CACHE_THRESHOLD);
+
+ rq_ctxt->pref_ci_owner =
+ RQ_CTXT_PREF_SET(CI_HIGN_IDX(ci_start), CI_HI) |
+ RQ_CTXT_PREF_SET(1, OWNER);
+
+ rq_ctxt->pref_wq_pfn_hi_ci =
+ RQ_CTXT_PREF_SET(wq_page_pfn_hi, WQ_PFN_HI) |
+ RQ_CTXT_PREF_SET(ci_start, CI_LOW);
+
+ rq_ctxt->pref_wq_pfn_lo = wq_page_pfn_lo;
+
+ rq_ctxt->pi_paddr_hi = upper_32_bits(rq->pi_dma_addr);
+ rq_ctxt->pi_paddr_lo = lower_32_bits(rq->pi_dma_addr);
+
+ rq_ctxt->wq_block_pfn_hi =
+ RQ_CTXT_WQ_BLOCK_SET(wq_block_pfn_hi, PFN_HI);
+
+ rq_ctxt->wq_block_pfn_lo = wq_block_pfn_lo;
+
+ spnic_cpu_to_be32(rq_ctxt, sizeof(*rq_ctxt));
+}
+
+static int init_sq_ctxts(struct spnic_nic_dev *nic_dev)
+{
+ struct spnic_sq_ctxt_block *sq_ctxt_block = NULL;
+ struct spnic_sq_ctxt *sq_ctxt = NULL;
+ struct spnic_cmd_buf *cmd_buf = NULL;
+ struct spnic_txq *sq = NULL;
+ u64 out_param = 0;
+ u16 q_id, curr_id, max_ctxts, i;
+ int err = 0;
+
+ cmd_buf = spnic_alloc_cmd_buf(nic_dev->hwdev);
+ if (!cmd_buf) {
+ PMD_DRV_LOG(ERR, "Allocate cmd buf for sq ctx failed");
+ return -ENOMEM;
+ }
+
+ q_id = 0;
+ while (q_id < nic_dev->num_sqs) {
+ sq_ctxt_block = cmd_buf->buf;
+ sq_ctxt = sq_ctxt_block->sq_ctxt;
+
+ max_ctxts = (nic_dev->num_sqs - q_id) > SPNIC_Q_CTXT_MAX ?
+ SPNIC_Q_CTXT_MAX : (nic_dev->num_sqs - q_id);
+
+ spnic_qp_prepare_cmdq_header(&sq_ctxt_block->cmdq_hdr,
+ SPNIC_QP_CTXT_TYPE_SQ,
+ max_ctxts, q_id);
+
+ for (i = 0; i < max_ctxts; i++) {
+ curr_id = q_id + i;
+ sq = nic_dev->txqs[curr_id];
+ spnic_sq_prepare_ctxt(sq, curr_id, &sq_ctxt[i]);
+ }
+
+ cmd_buf->size = SQ_CTXT_SIZE(max_ctxts);
+ err = spnic_cmdq_direct_resp(nic_dev->hwdev, SPNIC_MOD_L2NIC,
+ SPNIC_UCODE_CMD_MODIFY_QUEUE_CTX,
+ cmd_buf, &out_param, 0);
+ if (err || out_param != 0) {
+ PMD_DRV_LOG(ERR, "Set SQ ctxts failed, "
+ "err: %d, out_param: %"PRIu64"",
+ err, out_param);
+
+ err = -EFAULT;
+ break;
+ }
+
+ q_id += max_ctxts;
+ }
+
+ spnic_free_cmd_buf(cmd_buf);
+ return err;
+}
+
+static int init_rq_ctxts(struct spnic_nic_dev *nic_dev)
+{
+ struct spnic_rq_ctxt_block *rq_ctxt_block = NULL;
+ struct spnic_rq_ctxt *rq_ctxt = NULL;
+ struct spnic_cmd_buf *cmd_buf = NULL;
+ struct spnic_rxq *rq = NULL;
+ u64 out_param = 0;
+ u16 q_id, curr_id, max_ctxts, i;
+ int err = 0;
+
+ cmd_buf = spnic_alloc_cmd_buf(nic_dev->hwdev);
+ if (!cmd_buf) {
+ PMD_DRV_LOG(ERR, "Allocate cmd buf for rq ctx failed");
+ return -ENOMEM;
+ }
+
+ q_id = 0;
+ while (q_id < nic_dev->num_rqs) {
+ rq_ctxt_block = cmd_buf->buf;
+ rq_ctxt = rq_ctxt_block->rq_ctxt;
+
+ max_ctxts = (nic_dev->num_rqs - q_id) > SPNIC_Q_CTXT_MAX ?
+ SPNIC_Q_CTXT_MAX : (nic_dev->num_rqs - q_id);
+
+ spnic_qp_prepare_cmdq_header(&rq_ctxt_block->cmdq_hdr,
+ SPNIC_QP_CTXT_TYPE_RQ, max_ctxts,
+ q_id);
+
+ for (i = 0; i < max_ctxts; i++) {
+ curr_id = q_id + i;
+ rq = nic_dev->rxqs[curr_id];
+
+ spnic_rq_prepare_ctxt(rq, &rq_ctxt[i]);
+ }
+
+ cmd_buf->size = RQ_CTXT_SIZE(max_ctxts);
+ err = spnic_cmdq_direct_resp(nic_dev->hwdev, SPNIC_MOD_L2NIC,
+ SPNIC_UCODE_CMD_MODIFY_QUEUE_CTX,
+ cmd_buf, &out_param, 0);
+ if (err || out_param != 0) {
+ PMD_DRV_LOG(ERR, "Set RQ ctxts failed, "
+ "err: %d, out_param: %"PRIu64"",
+ err, out_param);
+ err = -EFAULT;
+ break;
+ }
+
+ q_id += max_ctxts;
+ }
+
+ spnic_free_cmd_buf(cmd_buf);
+ return err;
+}
+
+static int clean_queue_offload_ctxt(struct spnic_nic_dev *nic_dev,
+ enum spnic_qp_ctxt_type ctxt_type)
+{
+ struct spnic_clean_queue_ctxt *ctxt_block = NULL;
+ struct spnic_cmd_buf *cmd_buf;
+ u64 out_param = 0;
+ int err;
+
+ cmd_buf = spnic_alloc_cmd_buf(nic_dev->hwdev);
+ if (!cmd_buf) {
+ PMD_DRV_LOG(ERR, "Allocate cmd buf for LRO/TSO space failed");
+ return -ENOMEM;
+ }
+
+ ctxt_block = cmd_buf->buf;
+ ctxt_block->cmdq_hdr.num_queues = nic_dev->max_sqs;
+ ctxt_block->cmdq_hdr.queue_type = ctxt_type;
+ ctxt_block->cmdq_hdr.start_qid = 0;
+
+ spnic_cpu_to_be32(ctxt_block, sizeof(*ctxt_block));
+
+ cmd_buf->size = sizeof(*ctxt_block);
+
+ err = spnic_cmdq_direct_resp(nic_dev->hwdev, SPNIC_MOD_L2NIC,
+ SPNIC_UCODE_CMD_CLEAN_QUEUE_CONTEXT,
+ cmd_buf, &out_param, 0);
+ if ((err) || (out_param)) {
+ PMD_DRV_LOG(ERR, "Clean queue offload ctxts failed, "
+ "err: %d, out_param: %"PRIu64"", err, out_param);
+ err = -EFAULT;
+ }
+
+ spnic_free_cmd_buf(cmd_buf);
+ return err;
+}
+
+static int clean_qp_offload_ctxt(struct spnic_nic_dev *nic_dev)
+{
+ /* Clean LRO/TSO context space */
+ return (clean_queue_offload_ctxt(nic_dev, SPNIC_QP_CTXT_TYPE_SQ) ||
+ clean_queue_offload_ctxt(nic_dev, SPNIC_QP_CTXT_TYPE_RQ));
+}
+
+void spnic_get_func_rx_buf_size(void *dev)
+{
+ struct spnic_nic_dev *nic_dev = (struct spnic_nic_dev *)dev;
+ struct spnic_rxq *rxq = NULL;
+ u16 q_id;
+ u16 buf_size = 0;
+
+ for (q_id = 0; q_id < nic_dev->num_rqs; q_id++) {
+ rxq = nic_dev->rxqs[q_id];
+
+ if (rxq == NULL)
+ continue;
+
+ if (q_id == 0)
+ buf_size = rxq->buf_len;
+
+ buf_size = buf_size > rxq->buf_len ? rxq->buf_len : buf_size;
+ }
+
+ nic_dev->rx_buff_len = buf_size;
+}
+
+/* Init qps ctxt and set sq ci attr and arm all sq */
+int spnic_init_qp_ctxts(void *dev)
+{
+ struct spnic_nic_dev *nic_dev = NULL;
+ struct spnic_hwdev *hwdev = NULL;
+ struct spnic_sq_attr sq_attr;
+ u32 rq_depth;
+ u16 q_id;
+ int err;
+
+ if (!dev)
+ return -EINVAL;
+
+ nic_dev = (struct spnic_nic_dev *)dev;
+ hwdev = nic_dev->hwdev;
+
+ err = init_sq_ctxts(nic_dev);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Init SQ ctxts failed");
+ return err;
+ }
+
+ err = init_rq_ctxts(nic_dev);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Init RQ ctxts failed");
+ return err;
+ }
+
+ err = clean_qp_offload_ctxt(nic_dev);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Clean qp offload ctxts failed");
+ return err;
+ }
+
+ rq_depth = ((u32)nic_dev->rxqs[0]->q_depth) <<
+ nic_dev->rxqs[0]->wqe_type;
+ err = spnic_set_root_ctxt(hwdev, rq_depth, nic_dev->txqs[0]->q_depth,
+ nic_dev->rx_buff_len);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Set root context failed");
+ return err;
+ }
+
+ for (q_id = 0; q_id < nic_dev->num_sqs; q_id++) {
+ sq_attr.ci_dma_base = nic_dev->txqs[q_id]->ci_dma_base >> 2;
+ sq_attr.pending_limit = SPNIC_DEAULT_TX_CI_PENDING_LIMIT;
+ sq_attr.coalescing_time = SPNIC_DEAULT_TX_CI_COALESCING_TIME;
+ sq_attr.intr_en = 0;
+ sq_attr.intr_idx = 0; /* Tx doesn't need intr */
+ sq_attr.l2nic_sqn = q_id;
+ sq_attr.dma_attr_off = 0;
+ err = spnic_set_ci_table(hwdev, &sq_attr);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Set ci table failed");
+ goto set_cons_idx_table_err;
+ }
+ }
+
+ return 0;
+
+set_cons_idx_table_err:
+ spnic_clean_root_ctxt(hwdev);
+ return err;
+}
+
+void spnic_free_qp_ctxts(void *hwdev)
+{
+ if (!hwdev)
+ return;
+
+ spnic_clean_root_ctxt(hwdev);
+}
+
+void spnic_update_driver_feature(void *dev, u64 s_feature)
+{
+ struct spnic_nic_dev *nic_dev = NULL;
+
+ if (!dev)
+ return;
+
+ nic_dev = (struct spnic_nic_dev *)dev;
+ nic_dev->feature_cap = s_feature;
+
+ PMD_DRV_LOG(INFO, "Update nic feature to %"PRIu64"\n",
+ nic_dev->feature_cap);
+}
+
+u64 spnic_get_driver_feature(void *dev)
+{
+ struct spnic_nic_dev *nic_dev = NULL;
+
+ if (!dev)
+ return -EINVAL;
+
+ nic_dev = (struct spnic_nic_dev *)dev;
+
+ return nic_dev->feature_cap;
+}
diff --git a/drivers/net/spnic/spnic_io.h b/drivers/net/spnic/spnic_io.h
new file mode 100644
index 0000000000..c59b2f42ec
--- /dev/null
+++ b/drivers/net/spnic/spnic_io.h
@@ -0,0 +1,154 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Ramaxel Memory Technology, Ltd
+ */
+
+#ifndef _SPNIC_IO_H_
+#define _SPNIC_IO_H_
+
+#define SPNIC_SQ_WQEBB_SHIFT 4
+#define SPNIC_RQ_WQEBB_SHIFT 3
+
+#define SPNIC_SQ_WQEBB_SIZE BIT(SPNIC_SQ_WQEBB_SHIFT)
+#define SPNIC_CQE_SIZE_SHIFT 4
+
+/* CI addr should be RTE_CACHE_LINE_SIZE (64B) aligned for performance */
+#define SPNIC_CI_Q_ADDR_SIZE 64
+
+#define CI_TABLE_SIZE(num_qps, pg_sz) \
+ (RTE_ALIGN((num_qps) * SPNIC_CI_Q_ADDR_SIZE, pg_sz))
+
+#define SPNIC_CI_VADDR(base_addr, q_id) ((u8 *)(base_addr) + \
+ (q_id) * SPNIC_CI_Q_ADDR_SIZE)
+
+#define SPNIC_CI_PADDR(base_paddr, q_id) ((base_paddr) + \
+ (q_id) * SPNIC_CI_Q_ADDR_SIZE)
+
+enum spnic_rq_wqe_type {
+ SPNIC_COMPACT_RQ_WQE,
+ SPNIC_NORMAL_RQ_WQE,
+ SPNIC_EXTEND_RQ_WQE,
+};
+
+enum spnic_queue_type {
+ SPNIC_SQ,
+ SPNIC_RQ,
+ SPNIC_MAX_QUEUE_TYPE
+};
+
+/* Doorbell info */
+struct spnic_db {
+ u32 db_info;
+ u32 pi_hi;
+};
+
+#define DB_INFO_QID_SHIFT 0
+#define DB_INFO_NON_FILTER_SHIFT 22
+#define DB_INFO_CFLAG_SHIFT 23
+#define DB_INFO_COS_SHIFT 24
+#define DB_INFO_TYPE_SHIFT 27
+
+#define DB_INFO_QID_MASK 0x1FFFU
+#define DB_INFO_NON_FILTER_MASK 0x1U
+#define DB_INFO_CFLAG_MASK 0x1U
+#define DB_INFO_COS_MASK 0x7U
+#define DB_INFO_TYPE_MASK 0x1FU
+#define DB_INFO_SET(val, member) (((u32)(val) & \
+ DB_INFO_##member##_MASK) << \
+ DB_INFO_##member##_SHIFT)
+
+#define DB_PI_LOW_MASK 0xFFU
+#define DB_PI_HIGH_MASK 0xFFU
+#define DB_PI_LOW(pi) ((pi) & DB_PI_LOW_MASK)
+#define DB_PI_HI_SHIFT 8
+#define DB_PI_HIGH(pi) (((pi) >> DB_PI_HI_SHIFT) & DB_PI_HIGH_MASK)
+#define DB_INFO_UPPER_32(val) (((u64)val) << 32)
+
+#define DB_ADDR(db_addr, pi) ((u64 *)(db_addr) + DB_PI_LOW(pi))
+#define SRC_TYPE 1
+
+/* Cflag data path */
+#define SQ_CFLAG_DP 0
+#define RQ_CFLAG_DP 1
+
+#define MASKED_QUEUE_IDX(queue, idx) ((idx) & (queue)->q_mask)
+
+#define NIC_WQE_ADDR(queue, idx) ((void *)((u64)((queue)->queue_buf_vaddr) + \
+ ((idx) << (queue)->wqebb_shift)))
+
+#define SPNIC_FLUSH_QUEUE_TIMEOUT 3000
+
+/**
+ * Write send queue doorbell
+ *
+ * @param[in] db_addr
+ * Doorbell address
+ * @param[in] q_id
+ * Send queue id
+ * @param[in] cos
+ * Send queue cos
+ * @param[in] cflag
+ * Cflag data path
+ * @param[in] pi
+ * Send queue pi
+ */
+static inline void spnic_write_db(void *db_addr, u16 q_id, int cos, u8 cflag,
+ u16 pi)
+{
+ u64 db;
+
+ /* Hardware will do the endianness conversion */
+ db = DB_PI_HIGH(pi);
+ db = DB_INFO_UPPER_32(db) | DB_INFO_SET(SRC_TYPE, TYPE) |
+ DB_INFO_SET(cflag, CFLAG) | DB_INFO_SET(cos, COS) |
+ DB_INFO_SET(q_id, QID);
+
+ rte_wmb(); /* Write all before the doorbell */
+
+ rte_write64(*((u64 *)&db), DB_ADDR(db_addr, pi));
+}
+
+void spnic_get_func_rx_buf_size(void *dev);
+
+/**
+ * Init queue pair context
+ *
+ * @param[in] dev
+ * Device pointer to nic device
+ *
+ * @retval zero: Success
+ * @retval non-zero: Failure
+ */
+int spnic_init_qp_ctxts(void *dev);
+
+/**
+ * Free queue pair context
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev
+ */
+void spnic_free_qp_ctxts(void *hwdev);
+
+/**
+ * Update service feature driver supported
+ *
+ * @param[in] dev
+ * Device pointer to nic device
+ * @param[in] s_feature
+ * s_feature driver supported
+ */
+void spnic_update_driver_feature(void *dev, u64 s_feature);
+
+/**
+ * Get service feature driver supported
+ *
+ * @param[in] dev
+ * Device pointer to nic device
+ * @return
+ * The feature bitmap supported by the driver
+ */
+u64 spnic_get_driver_feature(void *dev);
+#endif /* _SPNIC_IO_H_ */
diff --git a/drivers/net/spnic/spnic_rx.h b/drivers/net/spnic/spnic_rx.h
new file mode 100644
index 0000000000..b2f0052533
--- /dev/null
+++ b/drivers/net/spnic/spnic_rx.h
@@ -0,0 +1,113 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Ramaxel Memory Technology, Ltd
+ */
+
+#ifndef _SPNIC_RX_H_
+#define _SPNIC_RX_H_
+
+struct spnic_rxq_stats {
+ u64 packets;
+ u64 bytes;
+ u64 errors;
+ u64 csum_errors;
+ u64 other_errors;
+ u64 unlock_bp;
+ u64 dropped;
+
+ u64 rx_nombuf;
+ u64 rx_discards;
+ u64 burst_pkts;
+};
+
+struct spnic_rq_cqe {
+ u32 status;
+ u32 vlan_len;
+
+ u32 offload_type;
+ u32 hash_val;
+ u32 xid;
+ u32 decrypt_info;
+ u32 rsvd6;
+ u32 pkt_info;
+};
+
+/*
+ * Attention: please do not add any members to spnic_rx_info because the rxq bulk
+ * rearm mode writes mbufs directly into rx_info
+ */
+struct spnic_rx_info {
+ struct rte_mbuf *mbuf;
+};
+
+struct spnic_sge_sect {
+ struct spnic_sge sge;
+ u32 rsvd;
+};
+
+struct spnic_rq_extend_wqe {
+ struct spnic_sge_sect buf_desc;
+ struct spnic_sge_sect cqe_sect;
+};
+
+struct spnic_rq_normal_wqe {
+ u32 buf_hi_addr;
+ u32 buf_lo_addr;
+ u32 cqe_hi_addr;
+ u32 cqe_lo_addr;
+};
+
+struct spnic_rq_wqe {
+ union {
+ struct spnic_rq_normal_wqe normal_wqe;
+ struct spnic_rq_extend_wqe extend_wqe;
+ };
+};
+
+struct spnic_rxq {
+ struct spnic_nic_dev *nic_dev;
+
+ u16 q_id;
+ u16 q_depth;
+ u16 q_mask;
+ u16 buf_len;
+
+ u32 rx_buff_shift;
+
+ u16 rx_free_thresh;
+ u16 rxinfo_align_end;
+ u16 wqebb_shift;
+ u16 wqebb_size;
+
+ u16 wqe_type;
+ u16 cons_idx;
+ u16 prod_idx;
+ u16 delta;
+
+ u16 next_to_update;
+ u16 port_id;
+
+ const struct rte_memzone *rq_mz;
+ void *queue_buf_vaddr; /* Rq dma info */
+ rte_iova_t queue_buf_paddr;
+
+ const struct rte_memzone *pi_mz;
+ u16 *pi_virt_addr;
+ void *db_addr;
+ rte_iova_t pi_dma_addr;
+
+ struct spnic_rx_info *rx_info;
+ struct spnic_rq_cqe *rx_cqe;
+ struct rte_mempool *mb_pool;
+
+ const struct rte_memzone *cqe_mz;
+ rte_iova_t cqe_start_paddr;
+ void *cqe_start_vaddr;
+ u8 dp_intr_en;
+ u16 msix_entry_idx;
+
+ unsigned long status;
+
+ struct spnic_rxq_stats rxq_stats;
+} __rte_cache_aligned;
+
+#endif /* _SPNIC_RX_H_ */
diff --git a/drivers/net/spnic/spnic_tx.h b/drivers/net/spnic/spnic_tx.h
new file mode 100644
index 0000000000..7528b27bd9
--- /dev/null
+++ b/drivers/net/spnic/spnic_tx.h
@@ -0,0 +1,62 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Ramaxel Memory Technology, Ltd
+ */
+
+#ifndef _SPNIC_TX_H_
+#define _SPNIC_TX_H_
+
+/* Txq info */
+struct spnic_txq_stats {
+ u64 packets;
+ u64 bytes;
+ u64 tx_busy;
+ u64 off_errs;
+ u64 burst_pkts;
+ u64 sge_len0;
+ u64 mbuf_null;
+ u64 cpy_pkts;
+ u64 sge_len_too_large;
+};
+
+struct spnic_tx_info {
+ struct rte_mbuf *mbuf;
+ struct rte_mbuf *cpy_mbuf;
+ int wqebb_cnt;
+};
+
+struct spnic_txq {
+ struct spnic_nic_dev *nic_dev;
+
+ u16 q_id;
+ u16 q_depth;
+ u16 q_mask;
+ u16 wqebb_size;
+
+ u16 wqebb_shift;
+ u16 cons_idx;
+ u16 prod_idx;
+
+ u16 tx_free_thresh;
+ u16 owner; /* Used for sq */
+
+ void *db_addr;
+
+ struct spnic_tx_info *tx_info;
+
+ const struct rte_memzone *sq_mz;
+ void *queue_buf_vaddr;
+ rte_iova_t queue_buf_paddr; /* Sq dma info */
+
+ const struct rte_memzone *ci_mz;
+ void *ci_vaddr_base;
+ rte_iova_t ci_dma_base;
+
+ u64 sq_head_addr;
+ u64 sq_bot_sge_addr;
+
+ u32 cos;
+
+ struct spnic_txq_stats txq_stats;
+} __rte_cache_aligned;
+
+#endif /* _SPNIC_TX_H_ */
--
2.27.0
* [PATCH v1 12/25] net/spnic: support mbuf handling of Tx/Rx
2021-12-18 2:51 [PATCH v1 00/25] Net/SPNIC: support SPNIC into DPDK 22.03 Yanling Song
` (10 preceding siblings ...)
2021-12-18 2:51 ` [PATCH v1 11/25] net/spnic: add queue pairs context initialization Yanling Song
@ 2021-12-18 2:51 ` Yanling Song
2021-12-18 2:51 ` [PATCH v1 13/25] net/spnic: support Rx congfiguration Yanling Song
` (12 subsequent siblings)
24 siblings, 0 replies; 32+ messages in thread
From: Yanling Song @ 2021-12-18 2:51 UTC (permalink / raw)
To: dev; +Cc: songyl, yanling.song, yanggan, ferruh.yigit
This patch defines the WQE data structures from which hardware
learns the SGE and offload information of a packet. Furthermore,
this commit implements the interfaces to fill WQEs from DPDK mbufs.
Signed-off-by: Yanling Song <songyl@ramaxel.com>
---
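For reviewers unfamiliar with the WQE layout, the sketch below shows the general
idea of mapping an mbuf segment chain onto scatter-gather entries. The structure
and helper names are illustrative only and are not the driver's actual definitions
(the sketch assumes the driver's compat typedefs and the DPDK mbuf API); offload
fields taken from mbuf ol_flags live in a separate WQE section and are omitted.

/* Illustrative only -- not the driver's real SGE layout or API */
struct example_sge {
	u32 hi_addr;	/* Upper 32 bits of the buffer DMA address */
	u32 lo_addr;	/* Lower 32 bits of the buffer DMA address */
	u32 len;	/* Segment length in bytes */
};

static int example_fill_sges(struct example_sge *sges, int max_sges,
			     struct rte_mbuf *mbuf)
{
	struct rte_mbuf *seg;
	int i = 0;

	/* Each mbuf segment becomes one scatter-gather entry in the WQE */
	for (seg = mbuf; seg != NULL; seg = seg->next) {
		rte_iova_t dma = rte_mbuf_data_iova(seg);

		if (i == max_sges)
			return -1; /* Caller would fall back to a copy path */

		sges[i].hi_addr = upper_32_bits(dma);
		sges[i].lo_addr = lower_32_bits(dma);
		sges[i].len = seg->data_len;
		i++;
	}

	return i; /* Number of SGEs consumed */
}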
drivers/net/spnic/base/spnic_nic_cfg.c | 23 ++
drivers/net/spnic/base/spnic_nic_cfg.h | 23 ++
drivers/net/spnic/meson.build | 2 +
drivers/net/spnic/spnic_ethdev.c | 502 ++++++++++++++++++++++++-
drivers/net/spnic/spnic_rx.c | 302 +++++++++++++++
drivers/net/spnic/spnic_rx.h | 41 ++
drivers/net/spnic/spnic_tx.c | 334 ++++++++++++++++
drivers/net/spnic/spnic_tx.h | 228 +++++++++++
8 files changed, 1454 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/spnic/spnic_rx.c
create mode 100644 drivers/net/spnic/spnic_tx.c
diff --git a/drivers/net/spnic/base/spnic_nic_cfg.c b/drivers/net/spnic/base/spnic_nic_cfg.c
index 25d98d67dd..f6914f6f6d 100644
--- a/drivers/net/spnic/base/spnic_nic_cfg.c
+++ b/drivers/net/spnic/base/spnic_nic_cfg.c
@@ -378,6 +378,29 @@ int spnic_set_port_enable(void *hwdev, bool enable)
return 0;
}
+int spnic_flush_qps_res(void *hwdev)
+{
+ struct spnic_cmd_clear_qp_resource sq_res;
+ u16 out_size = sizeof(sq_res);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ memset(&sq_res, 0, sizeof(sq_res));
+ sq_res.func_id = spnic_global_func_id(hwdev);
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, SPNIC_CMD_CLEAR_QP_RESOURCE, &sq_res,
+ sizeof(sq_res), &sq_res, &out_size);
+ if (err || !out_size || sq_res.msg_head.status) {
+ PMD_DRV_LOG(ERR, "Clear sq resources failed, err: %d, status: 0x%x, out size: 0x%x",
+ err, sq_res.msg_head.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+
static int spnic_set_function_table(void *hwdev, u32 cfg_bitmap,
struct spnic_func_tbl_cfg *cfg)
{
diff --git a/drivers/net/spnic/base/spnic_nic_cfg.h b/drivers/net/spnic/base/spnic_nic_cfg.h
index ce9792f8ee..746c1c342d 100644
--- a/drivers/net/spnic/base/spnic_nic_cfg.h
+++ b/drivers/net/spnic/base/spnic_nic_cfg.h
@@ -255,6 +255,17 @@ struct spnic_cmd_register_vf {
u8 rsvd[39];
};
+
+struct spnic_cmd_set_rq_flush {
+ union {
+ struct {
+ u16 global_rq_id;
+ u16 local_rq_id;
+ };
+ u32 value;
+ };
+};
+
int l2nic_msg_to_mgmt_sync(void *hwdev, u16 cmd, void *buf_in, u16 in_size,
void *buf_out, u16 *out_size);
@@ -381,6 +392,18 @@ int spnic_set_port_enable(void *hwdev, bool enable);
*/
int spnic_get_link_state(void *hwdev, u8 *link_state);
+/**
+ * Flush queue pairs resource in hardware
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev
+ *
+ * @retval zero : Success
+ * @retval non-zero : Failure
+ */
+int spnic_flush_qps_res(void *hwdev);
+
+
/**
* Init nic hwdev
*
diff --git a/drivers/net/spnic/meson.build b/drivers/net/spnic/meson.build
index 20d5151a8d..cd8f316366 100644
--- a/drivers/net/spnic/meson.build
+++ b/drivers/net/spnic/meson.build
@@ -7,6 +7,8 @@ objs = [base_objs]
sources = files(
'spnic_ethdev.c',
'spnic_io.c',
+ 'spnic_rx.c',
+ 'spnic_tx.c'
)
includes += include_directories('base')
diff --git a/drivers/net/spnic/spnic_ethdev.c b/drivers/net/spnic/spnic_ethdev.c
index 4205ab43a4..27942e5d68 100644
--- a/drivers/net/spnic/spnic_ethdev.c
+++ b/drivers/net/spnic/spnic_ethdev.c
@@ -139,6 +139,468 @@ static int spnic_link_update(struct rte_eth_dev *dev, int wait_to_complete)
return rte_eth_linkstatus_set(dev, &link);
}
+static void spnic_reset_rx_queue(struct rte_eth_dev *dev)
+{
+ struct spnic_rxq *rxq = NULL;
+ struct spnic_nic_dev *nic_dev;
+ int q_id = 0;
+
+ nic_dev = SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+
+ for (q_id = 0; q_id < nic_dev->num_rqs; q_id++) {
+ rxq = nic_dev->rxqs[q_id];
+
+ rxq->cons_idx = 0;
+ rxq->prod_idx = 0;
+ rxq->delta = rxq->q_depth;
+ rxq->next_to_update = 0;
+ }
+}
+
+static void spnic_reset_tx_queue(struct rte_eth_dev *dev)
+{
+ struct spnic_nic_dev *nic_dev;
+ struct spnic_txq *txq = NULL;
+ int q_id = 0;
+
+ nic_dev = SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+
+ for (q_id = 0; q_id < nic_dev->num_sqs; q_id++) {
+ txq = nic_dev->txqs[q_id];
+
+ txq->cons_idx = 0;
+ txq->prod_idx = 0;
+ txq->owner = 1;
+
+ /* Clear hardware ci */
+ *(u16 *)txq->ci_vaddr_base = 0;
+ }
+}
+
+/**
+ * Create the receive queue.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] qid
+ * Receive queue index.
+ * @param[in] nb_desc
+ * Number of descriptors for receive queue.
+ * @param[in] socket_id
+ * Socket index on which memory must be allocated.
+ * @param rx_conf
+ * Thresholds parameters.
+ * @param mp
+ * Memory pool for buffer allocations.
+ *
+ * @retval zero : Success
+ * @retval non-zero : Failure
+ */
+static int spnic_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid,
+ uint16_t nb_desc, unsigned int socket_id,
+ const struct rte_eth_rxconf *rx_conf,
+ struct rte_mempool *mp)
+{
+ struct spnic_nic_dev *nic_dev;
+ struct spnic_rxq *rxq = NULL;
+ const struct rte_memzone *rq_mz = NULL;
+ const struct rte_memzone *cqe_mz = NULL;
+ const struct rte_memzone *pi_mz = NULL;
+ u16 rq_depth, rx_free_thresh;
+ u32 queue_buf_size, mb_buf_size;
+ void *db_addr = NULL;
+ int wqe_count;
+ u32 buf_size;
+ int err;
+
+ nic_dev = SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+
+ /* Queue depth must be a power of 2, otherwise it will be rounded up */
+ rq_depth = (nb_desc & (nb_desc - 1)) ?
+ ((u16)(1U << (ilog2(nb_desc) + 1))) : nb_desc;
+
+ /*
+ * Validate number of receive descriptors.
+ * It must not exceed hardware maximum and minimum.
+ */
+ if (rq_depth > SPNIC_MAX_QUEUE_DEPTH ||
+ rq_depth < SPNIC_MIN_QUEUE_DEPTH) {
+ PMD_DRV_LOG(ERR, "RX queue depth is out of range from %d to %d,"
+ "(nb_desc: %d, q_depth: %d, port: %d queue: %d)",
+ SPNIC_MIN_QUEUE_DEPTH, SPNIC_MAX_QUEUE_DEPTH,
+ (int)nb_desc, (int)rq_depth,
+ (int)dev->data->port_id, (int)qid);
+ return -EINVAL;
+ }
+
+ /*
+ * The RX descriptor ring will be cleaned after rxq->rx_free_thresh
+ * descriptors are used or if the number of descriptors required
+ * to receive a packet is greater than the number of free RX
+ * descriptors.
+ * The following constraints must be satisfied:
+ * - rx_free_thresh must be greater than 0.
+ * - rx_free_thresh must be less than the size of the ring minus 1.
+ * When set to zero, the default value is used.
+ */
+ rx_free_thresh = (u16)((rx_conf->rx_free_thresh) ?
+ rx_conf->rx_free_thresh : SPNIC_DEFAULT_RX_FREE_THRESH);
+ if (rx_free_thresh >= (rq_depth - 1)) {
+ PMD_DRV_LOG(ERR, "rx_free_thresh must be less than the number "
+ "of RX descriptors minus 1, rx_free_thresh: %u port: %d queue: %d)",
+ (unsigned int)rx_free_thresh,
+ (int)dev->data->port_id, (int)qid);
+ return -EINVAL;
+ }
+
+ rxq = rte_zmalloc_socket("spnic_rq", sizeof(struct spnic_rxq),
+ RTE_CACHE_LINE_SIZE, socket_id);
+ if (!rxq) {
+ PMD_DRV_LOG(ERR, "Allocate rxq[%d] failed, dev_name: %s",
+ qid, dev->data->name);
+ return -ENOMEM;
+ }
+
+ /* Init rq parameters */
+ rxq->nic_dev = nic_dev;
+ nic_dev->rxqs[qid] = rxq;
+ rxq->mb_pool = mp;
+ rxq->q_id = qid;
+ rxq->next_to_update = 0;
+ rxq->q_depth = rq_depth;
+ rxq->q_mask = rq_depth - 1;
+ rxq->delta = rq_depth;
+ rxq->cons_idx = 0;
+ rxq->prod_idx = 0;
+ rxq->wqe_type = SPNIC_NORMAL_RQ_WQE;
+ rxq->wqebb_shift = SPNIC_RQ_WQEBB_SHIFT + rxq->wqe_type;
+ rxq->wqebb_size = (u16)BIT(rxq->wqebb_shift);
+ rxq->rx_free_thresh = rx_free_thresh;
+ rxq->rxinfo_align_end = rxq->q_depth - rxq->rx_free_thresh;
+ rxq->port_id = dev->data->port_id;
+
+ /* If buf_len is used for the function table, it needs to be translated */
+ mb_buf_size = rte_pktmbuf_data_room_size(rxq->mb_pool) -
+ RTE_PKTMBUF_HEADROOM;
+ err = spnic_convert_rx_buf_size(mb_buf_size, &buf_size);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Adjust buf size failed, dev_name: %s",
+ dev->data->name);
+ goto adjust_bufsize_fail;
+ }
+
+ rxq->buf_len = buf_size;
+ rxq->rx_buff_shift = ilog2(rxq->buf_len);
+
+ pi_mz = rte_eth_dma_zone_reserve(dev, "spnic_rq_pi", qid,
+ RTE_PGSIZE_4K, RTE_CACHE_LINE_SIZE,
+ socket_id);
+ if (!pi_mz) {
+ PMD_DRV_LOG(ERR, "Allocate rxq[%d] pi_mz failed, dev_name: %s",
+ qid, dev->data->name);
+ err = -ENOMEM;
+ goto alloc_pi_mz_fail;
+ }
+ rxq->pi_mz = pi_mz;
+ rxq->pi_dma_addr = pi_mz->iova;
+ rxq->pi_virt_addr = pi_mz->addr;
+
+ /* Rxq doesn't use direct wqe */
+ err = spnic_alloc_db_addr(nic_dev->hwdev, &db_addr, NULL);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Alloc rq doorbell addr failed");
+ goto alloc_db_err_fail;
+ }
+ rxq->db_addr = db_addr;
+
+ queue_buf_size = BIT(rxq->wqebb_shift) * rq_depth;
+ rq_mz = rte_eth_dma_zone_reserve(dev, "spnic_rq_mz", qid,
+ queue_buf_size, RTE_PGSIZE_256K,
+ socket_id);
+ if (!rq_mz) {
+ PMD_DRV_LOG(ERR, "Allocate rxq[%d] rq_mz failed, dev_name: %s",
+ qid, dev->data->name);
+ err = -ENOMEM;
+ goto alloc_rq_mz_fail;
+ }
+
+ memset(rq_mz->addr, 0, queue_buf_size);
+ rxq->rq_mz = rq_mz;
+ rxq->queue_buf_paddr = rq_mz->iova;
+ rxq->queue_buf_vaddr = rq_mz->addr;
+
+ rxq->rx_info = rte_zmalloc_socket("rx_info",
+ rq_depth * sizeof(*rxq->rx_info),
+ RTE_CACHE_LINE_SIZE, socket_id);
+ if (!rxq->rx_info) {
+ PMD_DRV_LOG(ERR, "Allocate rx_info failed, dev_name: %s",
+ dev->data->name);
+ err = -ENOMEM;
+ goto alloc_rx_info_fail;
+ }
+
+ cqe_mz = rte_eth_dma_zone_reserve(dev, "spnic_cqe_mz", qid,
+ rq_depth * sizeof(*rxq->rx_cqe),
+ RTE_CACHE_LINE_SIZE, socket_id);
+ if (!cqe_mz) {
+ PMD_DRV_LOG(ERR, "Allocate cqe mem zone failed, dev_name: %s",
+ dev->data->name);
+ err = -ENOMEM;
+ goto alloc_cqe_mz_fail;
+ }
+ memset(cqe_mz->addr, 0, rq_depth * sizeof(*rxq->rx_cqe));
+ rxq->cqe_mz = cqe_mz;
+ rxq->cqe_start_paddr = cqe_mz->iova;
+ rxq->cqe_start_vaddr = cqe_mz->addr;
+ rxq->rx_cqe = (struct spnic_rq_cqe *)rxq->cqe_start_vaddr;
+
+ wqe_count = spnic_rx_fill_wqe(rxq);
+ if (wqe_count != rq_depth) {
+ PMD_DRV_LOG(ERR, "Fill rx wqe failed, wqe_count: %d, dev_name: %s",
+ wqe_count, dev->data->name);
+ err = -ENOMEM;
+ goto fill_rx_wqe_fail;
+ }
+
+ /* Record rxq pointer in rte_eth rx_queues */
+ dev->data->rx_queues[qid] = rxq;
+
+ return 0;
+
+fill_rx_wqe_fail:
+ rte_memzone_free(rxq->cqe_mz);
+alloc_cqe_mz_fail:
+ rte_free(rxq->rx_info);
+
+alloc_rx_info_fail:
+ rte_memzone_free(rxq->rq_mz);
+
+alloc_rq_mz_fail:
+ spnic_free_db_addr(nic_dev->hwdev, rxq->db_addr, NULL);
+
+alloc_db_err_fail:
+ rte_memzone_free(rxq->pi_mz);
+
+alloc_pi_mz_fail:
+adjust_bufsize_fail:
+
+ rte_free(rxq);
+ nic_dev->rxqs[qid] = NULL;
+
+ return err;
+}
+
+/**
+ * Create the transmit queue.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] qid
+ * Transmit queue index.
+ * @param[in] nb_desc
+ * Number of descriptors for transmit queue.
+ * @param[in] socket_id
+ * Socket index on which memory must be allocated.
+ * @param[in] tx_conf
+ * Tx queue configuration parameters (unused).
+ *
+ * @retval zero : Success
+ * @retval non-zero : Failure
+ */
+static int spnic_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qid,
+ uint16_t nb_desc, unsigned int socket_id,
+ __rte_unused const struct rte_eth_txconf *tx_conf)
+{
+ struct spnic_nic_dev *nic_dev;
+ struct spnic_hwdev *hwdev;
+ struct spnic_txq *txq = NULL;
+ const struct rte_memzone *sq_mz = NULL;
+ const struct rte_memzone *ci_mz = NULL;
+ void *db_addr = NULL;
+ u16 sq_depth, tx_free_thresh;
+ u32 queue_buf_size;
+ int err;
+
+ nic_dev = SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ hwdev = nic_dev->hwdev;
+
+ /* Queue depth must be a power of 2, otherwise it will be aligned up */
+ sq_depth = (nb_desc & (nb_desc - 1)) ?
+ ((u16)(1U << (ilog2(nb_desc) + 1))) : nb_desc;
+
+ /*
+ * Validate number of transmit descriptors.
+ * It must not exceed hardware maximum and minimum.
+ */
+ if (sq_depth > SPNIC_MAX_QUEUE_DEPTH ||
+ sq_depth < SPNIC_MIN_QUEUE_DEPTH) {
+ PMD_DRV_LOG(ERR, "TX queue depth is out of range from %d to %d,"
+ "(nb_desc: %d, q_depth: %d, port: %d queue: %d)",
+ SPNIC_MIN_QUEUE_DEPTH, SPNIC_MAX_QUEUE_DEPTH,
+ (int)nb_desc, (int)sq_depth,
+ (int)dev->data->port_id, (int)qid);
+ return -EINVAL;
+ }
+
+ /*
+ * The TX descriptor ring will be cleaned after txq->tx_free_thresh
+ * descriptors are used or if the number of descriptors required
+ * to transmit a packet is greater than the number of free TX
+ * descriptors.
+ * The following constraints must be satisfied:
+ * -tx_free_thresh must be greater than 0.
+ * -tx_free_thresh must be less than the size of the ring minus 1.
+ * When set to zero use default values.
+ */
+ tx_free_thresh = (u16)((tx_conf->tx_free_thresh) ?
+ tx_conf->tx_free_thresh : SPNIC_DEFAULT_TX_FREE_THRESH);
+ if (tx_free_thresh >= (sq_depth - 1)) {
+ PMD_DRV_LOG(ERR, "tx_free_thresh must be less than the number of tx "
+ "descriptors minus 1, tx_free_thresh: %u port: %d queue: %d",
+ (unsigned int)tx_free_thresh,
+ (int)dev->data->port_id, (int)qid);
+ return -EINVAL;
+ }
+
+ txq = rte_zmalloc_socket("spnic_tx_queue", sizeof(struct spnic_txq),
+ RTE_CACHE_LINE_SIZE, socket_id);
+ if (!txq) {
+ PMD_DRV_LOG(ERR, "Allocate txq[%d] failed, dev_name: %s",
+ qid, dev->data->name);
+ return -ENOMEM;
+ }
+ nic_dev->txqs[qid] = txq;
+ txq->nic_dev = nic_dev;
+ txq->q_id = qid;
+ txq->q_depth = sq_depth;
+ txq->q_mask = sq_depth - 1;
+ txq->cons_idx = 0;
+ txq->prod_idx = 0;
+ txq->wqebb_shift = SPNIC_SQ_WQEBB_SHIFT;
+ txq->wqebb_size = (u16)BIT(txq->wqebb_shift);
+ txq->tx_free_thresh = tx_free_thresh;
+ txq->owner = 1;
+ txq->cos = nic_dev->default_cos;
+
+ ci_mz = rte_eth_dma_zone_reserve(dev, "spnic_sq_ci", qid,
+ SPNIC_CI_Q_ADDR_SIZE,
+ SPNIC_CI_Q_ADDR_SIZE, socket_id);
+ if (!ci_mz) {
+ PMD_DRV_LOG(ERR, "Allocate txq[%d] ci_mz failed, dev_name: %s",
+ qid, dev->data->name);
+ err = -ENOMEM;
+ goto alloc_ci_mz_fail;
+ }
+ txq->ci_mz = ci_mz;
+ txq->ci_dma_base = ci_mz->iova;
+ txq->ci_vaddr_base = ci_mz->addr;
+
+ queue_buf_size = BIT(txq->wqebb_shift) * sq_depth;
+ sq_mz = rte_eth_dma_zone_reserve(dev, "spnic_sq_mz", qid,
+ queue_buf_size, RTE_PGSIZE_256K,
+ socket_id);
+ if (!sq_mz) {
+ PMD_DRV_LOG(ERR, "Allocate txq[%d] sq_mz failed, dev_name: %s",
+ qid, dev->data->name);
+ err = -ENOMEM;
+ goto alloc_sq_mz_fail;
+ }
+ memset(sq_mz->addr, 0, queue_buf_size);
+ txq->sq_mz = sq_mz;
+ txq->queue_buf_paddr = sq_mz->iova;
+ txq->queue_buf_vaddr = sq_mz->addr;
+ txq->sq_head_addr = (u64)txq->queue_buf_vaddr;
+ txq->sq_bot_sge_addr = txq->sq_head_addr + queue_buf_size;
+
+ /* Sq doesn't use direct wqe */
+ err = spnic_alloc_db_addr(hwdev, &db_addr, NULL);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Alloc sq doorbell addr failed");
+ goto alloc_db_err_fail;
+ }
+ txq->db_addr = db_addr;
+
+ txq->tx_info = rte_zmalloc_socket("tx_info",
+ sq_depth * sizeof(*txq->tx_info),
+ RTE_CACHE_LINE_SIZE, socket_id);
+ if (!txq->tx_info) {
+ PMD_DRV_LOG(ERR, "Allocate tx_info failed, dev_name: %s",
+ dev->data->name);
+ err = -ENOMEM;
+ goto alloc_tx_info_fail;
+ }
+
+ /* Record txq pointer in rte_eth tx_queues */
+ dev->data->tx_queues[qid] = txq;
+
+ return 0;
+
+alloc_tx_info_fail:
+ spnic_free_db_addr(hwdev, txq->db_addr, NULL);
+
+alloc_db_err_fail:
+ rte_memzone_free(txq->sq_mz);
+
+alloc_sq_mz_fail:
+ rte_memzone_free(txq->ci_mz);
+
+alloc_ci_mz_fail:
+ rte_free(txq);
+
+ return err;
+}
+
+static void spnic_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
+{
+ struct spnic_rxq *rxq = dev->data->rx_queues[qid];
+ struct spnic_nic_dev *nic_dev;
+
+ if (!rxq) {
+ PMD_DRV_LOG(WARNING, "Rxq is null when release");
+ return;
+ }
+ nic_dev = rxq->nic_dev;
+
+ spnic_free_rxq_mbufs(rxq);
+
+ rte_memzone_free(rxq->cqe_mz);
+
+ rte_free(rxq->rx_info);
+
+ rte_memzone_free(rxq->rq_mz);
+
+ rte_memzone_free(rxq->pi_mz);
+
+ nic_dev->rxqs[rxq->q_id] = NULL;
+ rte_free(rxq);
+}
+
+static void spnic_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
+{
+ struct spnic_txq *txq = dev->data->tx_queues[qid];
+ struct spnic_nic_dev *nic_dev;
+
+ if (!txq) {
+ PMD_DRV_LOG(WARNING, "Txq is null when release");
+ return;
+ }
+ nic_dev = txq->nic_dev;
+
+ spnic_free_txq_mbufs(txq);
+
+ rte_free(txq->tx_info);
+ txq->tx_info = NULL;
+
+ spnic_free_db_addr(nic_dev->hwdev, txq->db_addr, NULL);
+
+ rte_memzone_free(txq->sq_mz);
+
+ rte_memzone_free(txq->ci_mz);
+
+ nic_dev->txqs[txq->q_id] = NULL;
+ rte_free(txq);
+}
+
static void spnic_delete_mc_addr_list(struct spnic_nic_dev *nic_dev);
/**
@@ -235,6 +697,7 @@ static int spnic_dev_start(struct rte_eth_dev *eth_dev)
nic_dev = SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(eth_dev);
+ spnic_get_func_rx_buf_size(nic_dev);
err = spnic_init_function_table(nic_dev->hwdev, nic_dev->rx_buff_len);
if (err) {
PMD_DRV_LOG(ERR, "Init function table failed, dev_name: %s",
@@ -253,6 +716,9 @@ static int spnic_dev_start(struct rte_eth_dev *eth_dev)
goto get_feature_err;
}
+ /* Reset rx and tx queues */
+ spnic_reset_rx_queue(eth_dev);
+ spnic_reset_tx_queue(eth_dev);
/* Init txq and rxq context */
err = spnic_init_qp_ctxts(nic_dev);
@@ -270,6 +736,15 @@ static int spnic_dev_start(struct rte_eth_dev *eth_dev)
goto set_mtu_fail;
}
+ err = spnic_start_all_rqs(eth_dev);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Set rx config failed, dev_name: %s",
+ eth_dev->data->name);
+ goto start_rqs_fail;
+ }
+
+ spnic_start_all_sqs(eth_dev);
+
/* Update eth_dev link status */
if (eth_dev->data->dev_conf.intr_conf.lsc != 0)
(void)spnic_link_update(eth_dev, 0);
@@ -278,6 +753,7 @@ static int spnic_dev_start(struct rte_eth_dev *eth_dev)
return 0;
+start_rqs_fail:
set_mtu_fail:
spnic_free_qp_ctxts(nic_dev->hwdev);
@@ -313,9 +789,17 @@ static int spnic_dev_stop(struct rte_eth_dev *dev)
memset(&link, 0, sizeof(link));
(void)rte_eth_linkstatus_set(dev, &link);
+ /* Flush pending io request */
+ spnic_flush_txqs(nic_dev);
+
+ spnic_flush_qps_res(nic_dev->hwdev);
/* Clean root context */
spnic_free_qp_ctxts(nic_dev->hwdev);
+ /* Free all tx and rx mbufs */
+ spnic_free_all_txq_mbufs(nic_dev);
+ spnic_free_all_rxq_mbufs(nic_dev);
+
return 0;
}
@@ -329,6 +813,7 @@ static int spnic_dev_close(struct rte_eth_dev *eth_dev)
{
struct spnic_nic_dev *nic_dev =
SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(eth_dev);
+ int qid;
if (rte_bit_relaxed_test_and_set32(SPNIC_DEV_CLOSE, &nic_dev->dev_status)) {
PMD_DRV_LOG(WARNING, "Device %s already closed",
@@ -338,6 +823,13 @@ static int spnic_dev_close(struct rte_eth_dev *eth_dev)
spnic_dev_stop(eth_dev);
+ /* Release io resource */
+ for (qid = 0; qid < nic_dev->num_sqs; qid++)
+ spnic_tx_queue_release(eth_dev, qid);
+
+ for (qid = 0; qid < nic_dev->num_rqs; qid++)
+ spnic_rx_queue_release(eth_dev, qid);
+
spnic_deinit_sw_rxtxqs(nic_dev);
spnic_deinit_mac_addr(eth_dev);
rte_free(nic_dev->mc_list);
@@ -581,6 +1073,10 @@ static const struct eth_dev_ops spnic_pmd_ops = {
.dev_set_link_up = spnic_dev_set_link_up,
.dev_set_link_down = spnic_dev_set_link_down,
.link_update = spnic_link_update,
+ .rx_queue_setup = spnic_rx_queue_setup,
+ .tx_queue_setup = spnic_tx_queue_setup,
+ .rx_queue_release = spnic_rx_queue_release,
+ .tx_queue_release = spnic_tx_queue_release,
.dev_start = spnic_dev_start,
.dev_stop = spnic_dev_stop,
.dev_close = spnic_dev_close,
@@ -592,8 +1088,12 @@ static const struct eth_dev_ops spnic_pmd_ops = {
};
static const struct eth_dev_ops spnic_pmd_vf_ops = {
- .link_update = spnic_link_update,
+ .rx_queue_setup = spnic_rx_queue_setup,
+ .tx_queue_setup = spnic_tx_queue_setup,
.dev_start = spnic_dev_start,
+ .link_update = spnic_link_update,
+ .rx_queue_release = spnic_rx_queue_release,
+ .tx_queue_release = spnic_tx_queue_release,
.dev_stop = spnic_dev_stop,
.dev_close = spnic_dev_close,
.mtu_set = spnic_dev_set_mtu,
diff --git a/drivers/net/spnic/spnic_rx.c b/drivers/net/spnic/spnic_rx.c
new file mode 100644
index 0000000000..20cd50c0c4
--- /dev/null
+++ b/drivers/net/spnic/spnic_rx.c
@@ -0,0 +1,302 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Ramaxel Memory Technology, Ltd
+ */
+
+#include <rte_ether.h>
+#include <rte_mbuf.h>
+#include <ethdev_driver.h>
+
+#include "base/spnic_compat.h"
+#include "base/spnic_cmd.h"
+#include "base/spnic_hwif.h"
+#include "base/spnic_hwdev.h"
+#include "base/spnic_wq.h"
+#include "base/spnic_mgmt.h"
+#include "base/spnic_nic_cfg.h"
+#include "spnic_io.h"
+#include "spnic_rx.h"
+#include "spnic_ethdev.h"
+
+/**
+ * Get receive queue wqe
+ *
+ * @param[in] rxq
+ * Receive queue
+ * @param[out] pi
+ * Return current pi
+ * @return
+ * RX wqe base address
+ */
+static inline void *spnic_get_rq_wqe(struct spnic_rxq *rxq, u16 *pi)
+{
+ *pi = MASKED_QUEUE_IDX(rxq, rxq->prod_idx);
+
+ /* Get only one rq wqe at a time */
+ rxq->prod_idx++;
+ rxq->delta--;
+
+ return NIC_WQE_ADDR(rxq, *pi);
+}
+
+/**
+ * Put receive queue wqe
+ *
+ * @param[in] rxq
+ * Receive queue
+ * @param[in] wqe_cnt
+ * Wqebb counters
+ */
+static inline void spnic_put_rq_wqe(struct spnic_rxq *rxq, u16 wqe_cnt)
+{
+ rxq->delta += wqe_cnt;
+ rxq->prod_idx -= wqe_cnt;
+}
+
+/**
+ * Get receive queue local pi
+ *
+ * @param[in] rxq
+ * Receive queue
+ * @return
+ * Receive queue local pi
+ */
+static inline u16 spnic_get_rq_local_pi(struct spnic_rxq *rxq)
+{
+ return MASKED_QUEUE_IDX(rxq, rxq->prod_idx);
+}
+
+/**
+ * Update receive queue hardware pi
+ *
+ * @param[in] rxq
+ * Receive queue
+ * @param[in] pi
+ * Receive queue pi to update
+ */
+static inline void spnic_update_rq_hw_pi(struct spnic_rxq *rxq, u16 pi)
+{
+ *rxq->pi_virt_addr = cpu_to_be16((pi & rxq->q_mask) << rxq->wqe_type);
+}
+
+int spnic_rx_fill_wqe(struct spnic_rxq *rxq)
+{
+ struct spnic_rq_wqe *rq_wqe = NULL;
+ struct spnic_nic_dev *nic_dev = rxq->nic_dev;
+ rte_iova_t cqe_dma;
+ u16 pi = 0;
+ int i;
+
+ cqe_dma = rxq->cqe_start_paddr;
+ for (i = 0; i < rxq->q_depth; i++) {
+ rq_wqe = spnic_get_rq_wqe(rxq, &pi);
+ if (!rq_wqe) {
+ PMD_DRV_LOG(ERR, "Get rq wqe failed, rxq id: %d, wqe id: %d",
+ rxq->q_id, i);
+ break;
+ }
+
+ if (rxq->wqe_type == SPNIC_EXTEND_RQ_WQE) {
+ /* Unit of cqe length is 16B */
+ spnic_set_sge(&rq_wqe->extend_wqe.cqe_sect.sge,
+ cqe_dma,
+ sizeof(struct spnic_rq_cqe) >>
+ SPNIC_CQE_SIZE_SHIFT);
+ /* Use fixed len */
+ rq_wqe->extend_wqe.buf_desc.sge.len =
+ nic_dev->rx_buff_len;
+ } else {
+ rq_wqe->normal_wqe.cqe_hi_addr = upper_32_bits(cqe_dma);
+ rq_wqe->normal_wqe.cqe_lo_addr = lower_32_bits(cqe_dma);
+ }
+
+ cqe_dma += sizeof(struct spnic_rq_cqe);
+ }
+
+ spnic_put_rq_wqe(rxq, (u16)i);
+
+ return i;
+}
+
+static struct rte_mbuf *spnic_rx_alloc_mbuf(struct spnic_rxq *rxq,
+ rte_iova_t *dma_addr)
+{
+ struct rte_mbuf *mbuf = NULL;
+
+ if (unlikely(rte_pktmbuf_alloc_bulk(rxq->mb_pool, &mbuf, 1) != 0))
+ return NULL;
+
+ *dma_addr = rte_mbuf_data_iova_default(mbuf);
+
+ return mbuf;
+}
+
+u32 spnic_rx_fill_buffers(struct spnic_rxq *rxq)
+{
+ struct spnic_rq_wqe *rq_wqe = NULL;
+ struct spnic_rx_info *rx_info = NULL;
+ struct rte_mbuf *mb = NULL;
+ rte_iova_t dma_addr;
+ int i, free_wqebbs;
+
+ free_wqebbs = rxq->delta - 1;
+ for (i = 0; i < free_wqebbs; i++) {
+ rx_info = &rxq->rx_info[rxq->next_to_update];
+
+ mb = spnic_rx_alloc_mbuf(rxq, &dma_addr);
+ if (!mb) {
+ PMD_DRV_LOG(ERR, "Alloc mbuf failed");
+ break;
+ }
+
+ rx_info->mbuf = mb;
+
+ rq_wqe = NIC_WQE_ADDR(rxq, rxq->next_to_update);
+
+ /* Fill buffer address only */
+ if (rxq->wqe_type == SPNIC_EXTEND_RQ_WQE) {
+ rq_wqe->extend_wqe.buf_desc.sge.hi_addr = upper_32_bits(dma_addr);
+ rq_wqe->extend_wqe.buf_desc.sge.lo_addr = lower_32_bits(dma_addr);
+ } else {
+ rq_wqe->normal_wqe.buf_hi_addr = upper_32_bits(dma_addr);
+ rq_wqe->normal_wqe.buf_lo_addr = lower_32_bits(dma_addr);
+ }
+
+ rxq->next_to_update = (rxq->next_to_update + 1) & rxq->q_mask;
+ }
+
+ if (likely(i > 0)) {
+ spnic_write_db(rxq->db_addr, rxq->q_id, 0, RQ_CFLAG_DP,
+ rxq->next_to_update << rxq->wqe_type);
+ /* Used when initializing the rq context, needs optimization */
+ rxq->prod_idx = rxq->next_to_update;
+ rxq->delta -= i;
+ } else {
+ PMD_DRV_LOG(ERR, "Alloc rx buffers failed, rxq_id: %d",
+ rxq->q_id);
+ }
+
+ return i;
+}
+
+void spnic_free_rxq_mbufs(struct spnic_rxq *rxq)
+{
+ struct spnic_rx_info *rx_info = NULL;
+ int free_wqebbs = spnic_get_rq_free_wqebb(rxq) + 1;
+ volatile struct spnic_rq_cqe *rx_cqe = NULL;
+ u16 ci;
+
+ while (free_wqebbs++ < rxq->q_depth) {
+ ci = spnic_get_rq_local_ci(rxq);
+
+ rx_cqe = &rxq->rx_cqe[ci];
+
+ /* Clear done bit */
+ rx_cqe->status = 0;
+
+ rx_info = &rxq->rx_info[ci];
+ rte_pktmbuf_free(rx_info->mbuf);
+ rx_info->mbuf = NULL;
+
+ spnic_update_rq_local_ci(rxq, 1);
+ }
+}
+
+void spnic_free_all_rxq_mbufs(struct spnic_nic_dev *nic_dev)
+{
+ u16 qid;
+
+ for (qid = 0; qid < nic_dev->num_rqs; qid++)
+ spnic_free_rxq_mbufs(nic_dev->rxqs[qid]);
+}
+
+static inline u32 spnic_rx_alloc_mbuf_bulk(struct spnic_rxq *rxq,
+ struct rte_mbuf **mbufs,
+ u32 exp_mbuf_cnt)
+{
+ u32 avail_cnt;
+ int err;
+
+ err = rte_pktmbuf_alloc_bulk(rxq->mb_pool, mbufs, exp_mbuf_cnt);
+ if (likely(err == 0)) {
+ avail_cnt = exp_mbuf_cnt;
+ } else {
+ avail_cnt = 0;
+ rxq->rxq_stats.rx_nombuf += exp_mbuf_cnt;
+ }
+
+ return avail_cnt;
+}
+
+static inline void spnic_rearm_rxq_mbuf(struct spnic_rxq *rxq)
+{
+ struct spnic_rq_wqe *rq_wqe = NULL;
+ struct rte_mbuf **rearm_mbufs;
+ u32 i, free_wqebbs, rearm_wqebbs, exp_wqebbs;
+ rte_iova_t dma_addr;
+ u16 pi;
+
+ /* Check free wqebb cnt for rearm */
+ free_wqebbs = spnic_get_rq_free_wqebb(rxq);
+ if (unlikely(free_wqebbs < rxq->rx_free_thresh))
+ return;
+
+ /* Get rearm mbuf array */
+ pi = spnic_get_rq_local_pi(rxq);
+ rearm_mbufs = (struct rte_mbuf **)(&rxq->rx_info[pi]);
+
+ /* Check whether rxq free wqebbs wrap around */
+ exp_wqebbs = rxq->q_depth - pi;
+ if (free_wqebbs < exp_wqebbs)
+ exp_wqebbs = free_wqebbs;
+
+ /* Alloc mbuf in bulk */
+ rearm_wqebbs = spnic_rx_alloc_mbuf_bulk(rxq, rearm_mbufs, exp_wqebbs);
+ if (unlikely(rearm_wqebbs == 0))
+ return;
+
+ /* Rearm rx mbuf */
+ rq_wqe = NIC_WQE_ADDR(rxq, pi);
+ for (i = 0; i < rearm_wqebbs; i++) {
+ dma_addr = rte_mbuf_data_iova_default(rearm_mbufs[i]);
+
+ /* Fill buffer address only */
+ if (rxq->wqe_type == SPNIC_EXTEND_RQ_WQE) {
+ rq_wqe->extend_wqe.buf_desc.sge.hi_addr = upper_32_bits(dma_addr);
+ rq_wqe->extend_wqe.buf_desc.sge.lo_addr = lower_32_bits(dma_addr);
+ } else {
+ rq_wqe->normal_wqe.buf_hi_addr = upper_32_bits(dma_addr);
+ rq_wqe->normal_wqe.buf_lo_addr = lower_32_bits(dma_addr);
+ }
+
+ rq_wqe = (struct spnic_rq_wqe *)((u64)rq_wqe +
+ rxq->wqebb_size);
+ }
+ rxq->prod_idx += rearm_wqebbs;
+ rxq->delta -= rearm_wqebbs;
+
+#ifndef SPNIC_RQ_DB
+ spnic_write_db(rxq->db_addr, rxq->q_id, 0, RQ_CFLAG_DP,
+ ((pi + rearm_wqebbs) & rxq->q_mask) << rxq->wqe_type);
+#else
+ /* Update rq hw_pi */
+ rte_wmb();
+ spnic_update_rq_hw_pi(rxq, pi + rearm_wqebbs);
+#endif
+}
+
+int spnic_start_all_rqs(struct rte_eth_dev *eth_dev)
+{
+ struct spnic_nic_dev *nic_dev = NULL;
+ struct spnic_rxq *rxq = NULL;
+ int i;
+
+ nic_dev = SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(eth_dev);
+
+ for (i = 0; i < nic_dev->num_rqs; i++) {
+ rxq = eth_dev->data->rx_queues[i];
+ spnic_rearm_rxq_mbuf(rxq);
+ eth_dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED;
+ }
+
+ return 0;
+}
diff --git a/drivers/net/spnic/spnic_rx.h b/drivers/net/spnic/spnic_rx.h
index b2f0052533..46f4e1276d 100644
--- a/drivers/net/spnic/spnic_rx.h
+++ b/drivers/net/spnic/spnic_rx.h
@@ -110,4 +110,45 @@ struct spnic_rxq {
struct spnic_rxq_stats rxq_stats;
} __rte_cache_aligned;
+int spnic_rx_fill_wqe(struct spnic_rxq *rxq);
+
+u32 spnic_rx_fill_buffers(struct spnic_rxq *rxq);
+
+void spnic_free_rxq_mbufs(struct spnic_rxq *rxq);
+
+void spnic_free_all_rxq_mbufs(struct spnic_nic_dev *nic_dev);
+
+int spnic_start_all_rqs(struct rte_eth_dev *eth_dev);
+/**
+ * Get receive queue local ci
+ *
+ * @param[in] rxq
+ * Receive queue
+ * @return
+ * Receive queue local ci
+ */
+static inline u16 spnic_get_rq_local_ci(struct spnic_rxq *rxq)
+{
+ return MASKED_QUEUE_IDX(rxq, rxq->cons_idx);
+}
+
+static inline u16 spnic_get_rq_free_wqebb(struct spnic_rxq *rxq)
+{
+ return rxq->delta - 1;
+}
+
+/**
+ * Update receive queue local ci
+ *
+ * @param[in] rxq
+ * Receive queue
+ * @param[in] wqe_cnt
+ * Wqebb counters
+ */
+static inline void spnic_update_rq_local_ci(struct spnic_rxq *rxq,
+ u16 wqe_cnt)
+{
+ rxq->cons_idx += wqe_cnt;
+ rxq->delta += wqe_cnt;
+}
#endif /* _SPNIC_RX_H_ */
diff --git a/drivers/net/spnic/spnic_tx.c b/drivers/net/spnic/spnic_tx.c
new file mode 100644
index 0000000000..d905879412
--- /dev/null
+++ b/drivers/net/spnic/spnic_tx.c
@@ -0,0 +1,334 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Ramaxel Memory Technology, Ltd
+ */
+
+#include <rte_ether.h>
+#include <rte_mbuf.h>
+#include <rte_io.h>
+#include <ethdev_driver.h>
+
+#include "base/spnic_compat.h"
+#include "base/spnic_cmd.h"
+#include "base/spnic_wq.h"
+#include "base/spnic_mgmt.h"
+#include "base/spnic_hwdev.h"
+#include "base/spnic_nic_cfg.h"
+#include "spnic_io.h"
+#include "spnic_tx.h"
+#include "spnic_ethdev.h"
+
+#define SPNIC_TX_TASK_WRAPPED 1
+#define SPNIC_TX_BD_DESC_WRAPPED 2
+
+#define TX_MSS_DEFAULT 0x3E00
+#define TX_MSS_MIN 0x50
+
+#define SPNIC_MAX_TX_FREE_BULK 64
+
+#define MAX_PAYLOAD_OFFSET 221
+
+#define SPNIC_TX_OUTER_CHECKSUM_FLAG_SET 1
+#define SPNIC_TX_OUTER_CHECKSUM_FLAG_NO_SET 0
+
+/**
+ * Get send queue free wqebb cnt
+ *
+ * @param[in] sq
+ * Send queue
+ * @return
+ * Number of free wqebb
+ */
+static inline u16 spnic_get_sq_free_wqebbs(struct spnic_txq *sq)
+{
+ return (sq->q_depth -
+ ((sq->prod_idx - sq->cons_idx + sq->q_depth) & sq->q_mask) - 1);
+}
+
+/**
+ * Update send queue local ci
+ *
+ * @param[in] sq
+ * Send queue
+ * @param[in] wqe_cnt
+ * Number of wqebb
+ */
+static inline void spnic_update_sq_local_ci(struct spnic_txq *sq, u16 wqe_cnt)
+{
+ sq->cons_idx += wqe_cnt;
+}
+
+/**
+ * Get send queue local ci
+ *
+ * @param[in] sq
+ * Send queue
+ * @return
+ * Local ci
+ */
+static inline u16 spnic_get_sq_local_ci(struct spnic_txq *sq)
+{
+ return MASKED_QUEUE_IDX(sq, sq->cons_idx);
+}
+
+/**
+ * Get send queue hardware ci
+ *
+ * @param[in] sq
+ * Send queue
+ * @return
+ * Hardware ci
+ */
+static inline u16 spnic_get_sq_hw_ci(struct spnic_txq *sq)
+{
+ return MASKED_QUEUE_IDX(sq, *(u16 *)sq->ci_vaddr_base);
+}
+
+/**
+ * Get send queue wqe
+ *
+ * @param[in] sq
+ * Send queue
+ * @param[in, out] wqe_info
+ * Wqe info: wqebb count on input; pi, owner bit and wrap state on output
+ * @return
+ * Send queue wqe base address
+ */
+static inline void *spnic_get_sq_wqe(struct spnic_txq *sq,
+ struct spnic_wqe_info *wqe_info)
+{
+ u16 cur_pi = MASKED_QUEUE_IDX(sq, sq->prod_idx);
+ u32 end_pi;
+
+ end_pi = cur_pi + wqe_info->wqebb_cnt;
+ sq->prod_idx += wqe_info->wqebb_cnt;
+
+ wqe_info->owner = sq->owner;
+ wqe_info->pi = cur_pi;
+ wqe_info->wrapped = 0;
+
+ if (unlikely(end_pi >= sq->q_depth)) {
+ sq->owner = !sq->owner;
+
+ if (likely(end_pi > sq->q_depth))
+ wqe_info->wrapped = sq->q_depth - cur_pi;
+ }
+
+ return NIC_WQE_ADDR(sq, cur_pi);
+}
+
+/**
+ * Put send queue wqe
+ *
+ * @param[in] sq
+ * Send queue
+ * @param[in] wqe_info
+ * Wqe info carrying the wqebb count and owner bit to restore
+ */
+static inline void spnic_put_sq_wqe(struct spnic_txq *sq,
+ struct spnic_wqe_info *wqe_info)
+{
+ if (wqe_info->owner != sq->owner)
+ sq->owner = wqe_info->owner;
+
+ sq->prod_idx -= wqe_info->wqebb_cnt;
+}
+
+static inline void spnic_set_wqe_combo(struct spnic_txq *txq,
+ struct spnic_sq_wqe_combo *wqe_combo,
+ struct spnic_sq_wqe *wqe,
+ struct spnic_wqe_info *wqe_info)
+{
+ wqe_combo->hdr = &wqe->compact_wqe.wqe_desc;
+
+ if (wqe_info->offload) {
+ if (wqe_info->wrapped == SPNIC_TX_TASK_WRAPPED) {
+ wqe_combo->task = (struct spnic_sq_task *)
+ (void *)txq->sq_head_addr;
+ wqe_combo->bds_head = (struct spnic_sq_bufdesc *)
+ (void *)(txq->sq_head_addr + txq->wqebb_size);
+ } else if (wqe_info->wrapped == SPNIC_TX_BD_DESC_WRAPPED) {
+ wqe_combo->task = &wqe->extend_wqe.task;
+ wqe_combo->bds_head = (struct spnic_sq_bufdesc *)
+ (void *)(txq->sq_head_addr);
+ } else {
+ wqe_combo->task = &wqe->extend_wqe.task;
+ wqe_combo->bds_head = wqe->extend_wqe.buf_desc;
+ }
+
+ wqe_combo->wqe_type = SQ_WQE_EXTENDED_TYPE;
+ wqe_combo->task_type = SQ_WQE_TASKSECT_16BYTES;
+ return;
+ }
+
+ if (wqe_info->wrapped == SPNIC_TX_TASK_WRAPPED) {
+ wqe_combo->bds_head = (struct spnic_sq_bufdesc *)
+ (void *)(txq->sq_head_addr);
+ } else {
+ wqe_combo->bds_head =
+ (struct spnic_sq_bufdesc *)(&wqe->extend_wqe.task);
+ }
+
+ if (wqe_info->wqebb_cnt > 1) {
+ wqe_combo->wqe_type = SQ_WQE_EXTENDED_TYPE;
+ wqe_combo->task_type = SQ_WQE_TASKSECT_46BITS;
+ /* This section is used for vlan insert, so it needs to be cleared */
+ wqe_combo->bds_head->rsvd = 0;
+ } else {
+ wqe_combo->wqe_type = SQ_WQE_COMPACT_TYPE;
+ }
+}
+
+void spnic_free_txq_mbufs(struct spnic_txq *txq)
+{
+ struct spnic_tx_info *tx_info = NULL;
+ u16 free_wqebbs;
+ u16 ci;
+
+ free_wqebbs = spnic_get_sq_free_wqebbs(txq) + 1;
+
+ while (free_wqebbs < txq->q_depth) {
+ ci = spnic_get_sq_local_ci(txq);
+
+ tx_info = &txq->tx_info[ci];
+
+ rte_pktmbuf_free(tx_info->mbuf);
+ spnic_update_sq_local_ci(txq, tx_info->wqebb_cnt);
+
+ free_wqebbs += tx_info->wqebb_cnt;
+ tx_info->mbuf = NULL;
+ }
+}
+
+void spnic_free_all_txq_mbufs(struct spnic_nic_dev *nic_dev)
+{
+ u16 qid;
+
+ for (qid = 0; qid < nic_dev->num_sqs; qid++)
+ spnic_free_txq_mbufs(nic_dev->txqs[qid]);
+}
+
+int spnic_start_all_sqs(struct rte_eth_dev *eth_dev)
+{
+ struct spnic_nic_dev *nic_dev = NULL;
+ int i;
+
+ nic_dev = SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(eth_dev);
+
+ for (i = 0; i < nic_dev->num_sqs; i++)
+ eth_dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED;
+
+ return 0;
+}
+
+static inline int spnic_xmit_mbuf_cleanup(struct spnic_txq *txq, u32 free_cnt)
+{
+ struct spnic_tx_info *tx_info = NULL;
+ struct rte_mbuf *mbuf = NULL;
+ struct rte_mbuf *mbuf_temp = NULL;
+ struct rte_mbuf *mbuf_free[SPNIC_MAX_TX_FREE_BULK];
+ int nb_free = 0;
+ int wqebb_cnt = 0;
+ u16 hw_ci, sw_ci, sq_mask;
+ u32 i;
+
+ hw_ci = spnic_get_sq_hw_ci(txq);
+ sw_ci = spnic_get_sq_local_ci(txq);
+ sq_mask = txq->q_mask;
+
+ for (i = 0; i < free_cnt; ++i) {
+ tx_info = &txq->tx_info[sw_ci];
+ if (hw_ci == sw_ci ||
+ (((hw_ci - sw_ci) & sq_mask) < tx_info->wqebb_cnt))
+ break;
+
+ sw_ci = (sw_ci + tx_info->wqebb_cnt) & sq_mask;
+
+ wqebb_cnt += tx_info->wqebb_cnt;
+ mbuf = tx_info->mbuf;
+
+ if (likely(mbuf->nb_segs == 1)) {
+ mbuf_temp = rte_pktmbuf_prefree_seg(mbuf);
+ tx_info->mbuf = NULL;
+
+ if (unlikely(mbuf_temp == NULL))
+ continue;
+
+ mbuf_free[nb_free++] = mbuf_temp;
+ if (unlikely(mbuf_temp->pool != mbuf_free[0]->pool ||
+ nb_free >= SPNIC_MAX_TX_FREE_BULK)) {
+ rte_mempool_put_bulk(mbuf_free[0]->pool,
+ (void **)mbuf_free, (nb_free - 1));
+ nb_free = 0;
+ mbuf_free[nb_free++] = mbuf_temp;
+ }
+ } else {
+ rte_pktmbuf_free(mbuf);
+ tx_info->mbuf = NULL;
+ }
+ }
+
+ if (nb_free > 0)
+ rte_mempool_put_bulk(mbuf_free[0]->pool, (void **)mbuf_free,
+ nb_free);
+
+ spnic_update_sq_local_ci(txq, wqebb_cnt);
+ return i;
+}
+
+static int spnic_tx_done_cleanup(void *txq, u32 free_cnt)
+{
+ struct spnic_txq *tx_queue = txq;
+ u32 try_free_cnt = !free_cnt ? tx_queue->q_depth : free_cnt;
+
+ return spnic_xmit_mbuf_cleanup(tx_queue, try_free_cnt);
+}
+
+int spnic_stop_sq(struct spnic_txq *txq)
+{
+ struct spnic_nic_dev *nic_dev = txq->nic_dev;
+ unsigned long timeout;
+ int err = -EFAULT;
+ int free_wqebbs;
+
+ timeout = msecs_to_jiffies(SPNIC_FLUSH_QUEUE_TIMEOUT) + jiffies;
+ do {
+ spnic_tx_done_cleanup(txq, 0);
+ free_wqebbs = spnic_get_sq_free_wqebbs(txq) + 1;
+ if (free_wqebbs == txq->q_depth) {
+ err = 0;
+ break;
+ }
+
+ rte_delay_us(1);
+ } while (time_before(jiffies, timeout));
+
+ if (err)
+ PMD_DRV_LOG(WARNING, "%s Wait sq empty timeout, queue_idx: %u, sw_ci: %u, "
+ "hw_ci: %u, sw_pi: %u, free_wqebbs: %u, q_depth:%u\n",
+ nic_dev->dev_name, txq->q_id,
+ spnic_get_sq_local_ci(txq),
+ spnic_get_sq_hw_ci(txq),
+ MASKED_QUEUE_IDX(txq, txq->prod_idx),
+ free_wqebbs, txq->q_depth);
+
+ return err;
+}
+
+/* Should stop transmitting any packets before calling this function */
+void spnic_flush_txqs(struct spnic_nic_dev *nic_dev)
+{
+ u16 qid;
+ int err;
+
+ for (qid = 0; qid < nic_dev->num_sqs; qid++) {
+ err = spnic_stop_sq(nic_dev->txqs[qid]);
+ if (err)
+ PMD_DRV_LOG(ERR, "Stop sq%d failed", qid);
+ }
+}
diff --git a/drivers/net/spnic/spnic_tx.h b/drivers/net/spnic/spnic_tx.h
index 7528b27bd9..d770b15c21 100644
--- a/drivers/net/spnic/spnic_tx.h
+++ b/drivers/net/spnic/spnic_tx.h
@@ -4,6 +4,224 @@
#ifndef _SPNIC_TX_H_
#define _SPNIC_TX_H_
+/* Tx offload info */
+struct spnic_tx_offload_info {
+ u8 outer_l2_len;
+ u8 outer_l3_type;
+ u16 outer_l3_len;
+
+ u8 inner_l2_len;
+ u8 inner_l3_type;
+ u16 inner_l3_len;
+
+ u8 tunnel_length;
+ u8 tunnel_type;
+ u8 inner_l4_type;
+ u8 inner_l4_len;
+
+ u16 payload_offset;
+ u8 inner_l4_tcp_udp;
+ u8 rsvd0;
+};
+
+/* tx wqe ctx */
+struct spnic_wqe_info {
+ u8 around;
+ u8 cpy_mbuf_cnt;
+ u16 sge_cnt;
+
+ u8 offload;
+ u8 rsvd0;
+ u16 payload_offset;
+
+ u8 wrapped;
+ u8 owner;
+ u16 pi;
+
+ u16 wqebb_cnt;
+ u16 rsvd1;
+
+ u32 queue_info;
+};
+
+struct spnic_sq_wqe_desc {
+ u32 ctrl_len;
+ u32 queue_info;
+ u32 hi_addr;
+ u32 lo_addr;
+};
+
+/*
+ * The engine only passes the first 12B of the TS field directly to uCode
+ * through metadata; vlan_offload is used by hardware when inserting a vlan on tx
+ */
+struct spnic_sq_task {
+ u32 pkt_info0;
+ u32 ip_identify;
+ u32 pkt_info2; /* Rsvd for ipsec spi */
+ u32 vlan_offload;
+};
+
+struct spnic_sq_bufdesc {
+ u32 len; /* 31-bit length, L2NIC only uses length[17:0] */
+ u32 rsvd;
+ u32 hi_addr;
+ u32 lo_addr;
+};
+
+struct spnic_sq_compact_wqe {
+ struct spnic_sq_wqe_desc wqe_desc;
+};
+
+struct spnic_sq_extend_wqe {
+ struct spnic_sq_wqe_desc wqe_desc;
+ struct spnic_sq_task task;
+ struct spnic_sq_bufdesc buf_desc[0];
+};
+
+struct spnic_sq_wqe {
+ union {
+ struct spnic_sq_compact_wqe compact_wqe;
+ struct spnic_sq_extend_wqe extend_wqe;
+ };
+};
+
+/* Use section pointers to support non-contiguous wqe */
+struct spnic_sq_wqe_combo {
+ struct spnic_sq_wqe_desc *hdr;
+ struct spnic_sq_task *task;
+ struct spnic_sq_bufdesc *bds_head;
+ u32 wqe_type;
+ u32 task_type;
+};
+
+/* SQ ctrl info */
+enum sq_wqe_data_format {
+ SQ_NORMAL_WQE = 0,
+};
+
+enum sq_wqe_ec_type {
+ SQ_WQE_COMPACT_TYPE = 0,
+ SQ_WQE_EXTENDED_TYPE = 1,
+};
+
+#define COMPACT_WQE_MAX_CTRL_LEN 0x3FFF
+
+enum sq_wqe_tasksect_len_type {
+ SQ_WQE_TASKSECT_46BITS = 0,
+ SQ_WQE_TASKSECT_16BYTES = 1,
+};
+
+#define SQ_CTRL_BD0_LEN_SHIFT 0
+#define SQ_CTRL_RSVD_SHIFT 18
+#define SQ_CTRL_BUFDESC_NUM_SHIFT 19
+#define SQ_CTRL_TASKSECT_LEN_SHIFT 27
+#define SQ_CTRL_DATA_FORMAT_SHIFT 28
+#define SQ_CTRL_DIRECT_SHIFT 29
+#define SQ_CTRL_EXTENDED_SHIFT 30
+#define SQ_CTRL_OWNER_SHIFT 31
+
+#define SQ_CTRL_BD0_LEN_MASK 0x3FFFFU
+#define SQ_CTRL_RSVD_MASK 0x1U
+#define SQ_CTRL_BUFDESC_NUM_MASK 0xFFU
+#define SQ_CTRL_TASKSECT_LEN_MASK 0x1U
+#define SQ_CTRL_DATA_FORMAT_MASK 0x1U
+#define SQ_CTRL_DIRECT_MASK 0x1U
+#define SQ_CTRL_EXTENDED_MASK 0x1U
+#define SQ_CTRL_OWNER_MASK 0x1U
+
+#define SQ_CTRL_SET(val, member) (((u32)(val) & \
+ SQ_CTRL_##member##_MASK) << \
+ SQ_CTRL_##member##_SHIFT)
+
+#define SQ_CTRL_GET(val, member) (((val) >> SQ_CTRL_##member##_SHIFT) \
+ & SQ_CTRL_##member##_MASK)
+
+#define SQ_CTRL_CLEAR(val, member) ((val) & \
+ (~(SQ_CTRL_##member##_MASK << \
+ SQ_CTRL_##member##_SHIFT)))
+
+#define SQ_CTRL_QUEUE_INFO_PKT_TYPE_SHIFT 0
+#define SQ_CTRL_QUEUE_INFO_PLDOFF_SHIFT 2
+#define SQ_CTRL_QUEUE_INFO_UFO_SHIFT 10
+#define SQ_CTRL_QUEUE_INFO_TSO_SHIFT 11
+#define SQ_CTRL_QUEUE_INFO_TCPUDP_CS_SHIFT 12
+#define SQ_CTRL_QUEUE_INFO_MSS_SHIFT 13
+#define SQ_CTRL_QUEUE_INFO_SCTP_SHIFT 27
+#define SQ_CTRL_QUEUE_INFO_UC_SHIFT 28
+#define SQ_CTRL_QUEUE_INFO_PRI_SHIFT 29
+
+#define SQ_CTRL_QUEUE_INFO_PKT_TYPE_MASK 0x3U
+#define SQ_CTRL_QUEUE_INFO_PLDOFF_MASK 0xFFU
+#define SQ_CTRL_QUEUE_INFO_UFO_MASK 0x1U
+#define SQ_CTRL_QUEUE_INFO_TSO_MASK 0x1U
+#define SQ_CTRL_QUEUE_INFO_TCPUDP_CS_MASK 0x1U
+#define SQ_CTRL_QUEUE_INFO_MSS_MASK 0x3FFFU
+#define SQ_CTRL_QUEUE_INFO_SCTP_MASK 0x1U
+#define SQ_CTRL_QUEUE_INFO_UC_MASK 0x1U
+#define SQ_CTRL_QUEUE_INFO_PRI_MASK 0x7U
+
+#define SQ_CTRL_QUEUE_INFO_SET(val, member) \
+ (((u32)(val) & SQ_CTRL_QUEUE_INFO_##member##_MASK) \
+ << SQ_CTRL_QUEUE_INFO_##member##_SHIFT)
+
+#define SQ_CTRL_QUEUE_INFO_GET(val, member) \
+ (((val) >> SQ_CTRL_QUEUE_INFO_##member##_SHIFT) \
+ & SQ_CTRL_QUEUE_INFO_##member##_MASK)
+
+#define SQ_CTRL_QUEUE_INFO_CLEAR(val, member) \
+ ((val) & (~(SQ_CTRL_QUEUE_INFO_##member##_MASK << \
+ SQ_CTRL_QUEUE_INFO_##member##_SHIFT)))
+
+#define SQ_TASK_INFO0_TUNNEL_FLAG_SHIFT 19
+#define SQ_TASK_INFO0_ESP_NEXT_PROTO_SHIFT 22
+#define SQ_TASK_INFO0_INNER_L4_EN_SHIFT 24
+#define SQ_TASK_INFO0_INNER_L3_EN_SHIFT 25
+#define SQ_TASK_INFO0_INNER_L4_PSEUDO_SHIFT 26
+#define SQ_TASK_INFO0_OUT_L4_EN_SHIFT 27
+#define SQ_TASK_INFO0_OUT_L3_EN_SHIFT 28
+#define SQ_TASK_INFO0_OUT_L4_PSEUDO_SHIFT 29
+#define SQ_TASK_INFO0_ESP_OFFLOAD_SHIFT 30
+#define SQ_TASK_INFO0_IPSEC_PROTO_SHIFT 31
+
+#define SQ_TASK_INFO0_TUNNEL_FLAG_MASK 0x1U
+#define SQ_TASK_INFO0_ESP_NEXT_PROTO_MASK 0x3U
+#define SQ_TASK_INFO0_INNER_L4_EN_MASK 0x1U
+#define SQ_TASK_INFO0_INNER_L3_EN_MASK 0x1U
+#define SQ_TASK_INFO0_INNER_L4_PSEUDO_MASK 0x1U
+#define SQ_TASK_INFO0_OUT_L4_EN_MASK 0x1U
+#define SQ_TASK_INFO0_OUT_L3_EN_MASK 0x1U
+#define SQ_TASK_INFO0_OUT_L4_PSEUDO_MASK 0x1U
+#define SQ_TASK_INFO0_ESP_OFFLOAD_MASK 0x1U
+#define SQ_TASK_INFO0_IPSEC_PROTO_MASK 0x1U
+
+#define SQ_TASK_INFO0_SET(val, member) \
+ (((u32)(val) & SQ_TASK_INFO0_##member##_MASK) << \
+ SQ_TASK_INFO0_##member##_SHIFT)
+#define SQ_TASK_INFO0_GET(val, member) \
+ (((val) >> SQ_TASK_INFO0_##member##_SHIFT) & \
+ SQ_TASK_INFO0_##member##_MASK)
+
+#define SQ_TASK_INFO1_SET(val, member) \
+ (((val) & SQ_TASK_INFO1_##member##_MASK) << \
+ SQ_TASK_INFO1_##member##_SHIFT)
+#define SQ_TASK_INFO1_GET(val, member) \
+ (((val) >> SQ_TASK_INFO1_##member##_SHIFT) & \
+ SQ_TASK_INFO1_##member##_MASK)
+
+#define SQ_TASK_INFO3_VLAN_TAG_SHIFT 0
+#define SQ_TASK_INFO3_VLAN_TYPE_SHIFT 16
+#define SQ_TASK_INFO3_VLAN_TAG_VALID_SHIFT 19
+
+#define SQ_TASK_INFO3_VLAN_TAG_MASK 0xFFFFU
+#define SQ_TASK_INFO3_VLAN_TYPE_MASK 0x7U
+#define SQ_TASK_INFO3_VLAN_TAG_VALID_MASK 0x1U
+
+#define SQ_TASK_INFO3_SET(val, member) \
+ (((val) & SQ_TASK_INFO3_##member##_MASK) << \
+ SQ_TASK_INFO3_##member##_SHIFT)
+#define SQ_TASK_INFO3_GET(val, member) \
+ (((val) >> SQ_TASK_INFO3_##member##_SHIFT) & \
+ SQ_TASK_INFO3_##member##_MASK)
/* Txq info */
struct spnic_txq_stats {
@@ -59,4 +277,14 @@ struct spnic_txq {
struct spnic_txq_stats txq_stats;
} __rte_cache_aligned;
+void spnic_flush_txqs(struct spnic_nic_dev *nic_dev);
+
+void spnic_free_txq_mbufs(struct spnic_txq *txq);
+
+void spnic_free_all_txq_mbufs(struct spnic_nic_dev *nic_dev);
+
+u16 spnic_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts);
+
+int spnic_stop_sq(struct spnic_txq *txq);
+int spnic_start_all_sqs(struct rte_eth_dev *eth_dev);
#endif /* _SPNIC_TX_H_ */
--
2.27.0
^ permalink raw reply [flat|nested] 32+ messages in thread
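As a rough, hypothetical illustration of how the queue setup and start paths added in this patch are driven from an application (not part of the patch itself; the port id, single queue, descriptor counts and the assumption that the port was already configured with rte_eth_dev_configure() are illustrative only):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Minimal sketch: exercises spnic_rx_queue_setup()/spnic_tx_queue_setup()
 * and spnic_dev_start() through the generic ethdev dev_ops. Assumes the
 * port was already configured with rte_eth_dev_configure().
 */
static int setup_spnic_port_queues(uint16_t port_id, struct rte_mempool *mb_pool)
{
	const uint16_t nb_rxd = 1024; /* Rounded up to a power of 2 by the PMD */
	const uint16_t nb_txd = 1024;
	int socket = rte_eth_dev_socket_id(port_id);
	int err;

	err = rte_eth_rx_queue_setup(port_id, 0, nb_rxd, socket, NULL, mb_pool);
	if (err != 0)
		return err;

	err = rte_eth_tx_queue_setup(port_id, 0, nb_txd, socket, NULL);
	if (err != 0)
		return err;

	/* dev_start resets the queues, inits QP contexts and posts Rx buffers */
	return rte_eth_dev_start(port_id);
}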
* [PATCH v1 13/25] net/spnic: support Rx configuration
2021-12-18 2:51 [PATCH v1 00/25] Net/SPNIC: support SPNIC into DPDK 22.03 Yanling Song
` (11 preceding siblings ...)
2021-12-18 2:51 ` [PATCH v1 12/25] net/spnic: support mbuf handling of Tx/Rx Yanling Song
@ 2021-12-18 2:51 ` Yanling Song
2021-12-18 2:51 ` [PATCH v1 14/25] net/spnic: add port/vport enable Yanling Song
` (11 subsequent siblings)
24 siblings, 0 replies; 32+ messages in thread
From: Yanling Song @ 2021-12-18 2:51 UTC (permalink / raw)
To: dev; +Cc: songyl, yanling.song, yanggan, ferruh.yigit
This patch adds Rx/Tx configuration support, including Rx csum offload, LRO,
RSS, VLAN filtering and VLAN offload.
Signed-off-by: Yanling Song <songyl@ramaxel.com>
---
drivers/net/spnic/base/spnic_nic_cfg.c | 525 +++++++++++++++++++++++++
drivers/net/spnic/base/spnic_nic_cfg.h | 387 ++++++++++++++++++
drivers/net/spnic/spnic_ethdev.c | 187 ++++++++-
drivers/net/spnic/spnic_ethdev.h | 2 +
drivers/net/spnic/spnic_rx.c | 221 +++++++++++
drivers/net/spnic/spnic_rx.h | 31 ++
6 files changed, 1349 insertions(+), 4 deletions(-)
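As a hedged, application-side sketch of the Rx features listed in the commit message above (not part of this patch; it uses the generic ethdev API of the DPDK 22.03 timeframe, and exact offload flag names can differ between releases):

#include <rte_ethdev.h>

/* Minimal sketch: request RSS, Rx checksum, LRO and VLAN filter/strip
 * offloads before queue setup. Real code should validate these against
 * the capabilities reported by rte_eth_dev_info_get().
 */
static int configure_spnic_rx_features(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
	struct rte_eth_conf conf = {0};

	/* Hash incoming flows across the configured Rx queues */
	conf.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
	conf.rx_adv_conf.rss_conf.rss_hf =
		RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP | RTE_ETH_RSS_UDP;

	/* Rx checksum, LRO and VLAN handling offloaded to the NIC */
	conf.rxmode.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM |
			       RTE_ETH_RX_OFFLOAD_TCP_LRO |
			       RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
			       RTE_ETH_RX_OFFLOAD_VLAN_STRIP;

	return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}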
diff --git a/drivers/net/spnic/base/spnic_nic_cfg.c b/drivers/net/spnic/base/spnic_nic_cfg.c
index f6914f6f6d..6c22c4fb3d 100644
--- a/drivers/net/spnic/base/spnic_nic_cfg.c
+++ b/drivers/net/spnic/base/spnic_nic_cfg.c
@@ -271,6 +271,37 @@ int spnic_get_default_mac(void *hwdev, u8 *mac_addr, int ether_len)
return 0;
}
+static int spnic_config_vlan(void *hwdev, u8 opcode, u16 vlan_id, u16 func_id)
+{
+ struct spnic_cmd_vlan_config vlan_info;
+ u16 out_size = sizeof(vlan_info);
+ int err;
+
+ memset(&vlan_info, 0, sizeof(vlan_info));
+ vlan_info.opcode = opcode;
+ vlan_info.func_id = func_id;
+ vlan_info.vlan_id = vlan_id;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, SPNIC_CMD_CFG_FUNC_VLAN, &vlan_info,
+ sizeof(vlan_info), &vlan_info, &out_size);
+ if (err || !out_size || vlan_info.msg_head.status) {
+ PMD_DRV_LOG(ERR, "%s vlan failed, err: %d, status: 0x%x, out size: 0x%x",
+ opcode == SPNIC_CMD_OP_ADD ? "Add" : "Delete",
+ err, vlan_info.msg_head.status, out_size);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+int spnic_del_vlan(void *hwdev, u16 vlan_id, u16 func_id)
+{
+ if (!hwdev)
+ return -EINVAL;
+
+ return spnic_config_vlan(hwdev, SPNIC_CMD_OP_DEL, vlan_id, func_id);
+}
+
int spnic_get_port_info(void *hwdev, struct nic_port_info *port_info)
{
struct spnic_cmd_port_info port_msg;
@@ -564,6 +595,500 @@ void spnic_free_nic_hwdev(void *hwdev)
spnic_vf_func_free(hwdev);
}
+int spnic_set_rx_mode(void *hwdev, u32 enable)
+{
+ struct spnic_rx_mode_config rx_mode_cfg;
+ u16 out_size = sizeof(rx_mode_cfg);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ memset(&rx_mode_cfg, 0, sizeof(rx_mode_cfg));
+ rx_mode_cfg.func_id = spnic_global_func_id(hwdev);
+ rx_mode_cfg.rx_mode = enable;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, SPNIC_CMD_SET_RX_MODE,
+ &rx_mode_cfg, sizeof(rx_mode_cfg),
+ &rx_mode_cfg, &out_size);
+ if (err || !out_size || rx_mode_cfg.msg_head.status) {
+ PMD_DRV_LOG(ERR, "Set rx mode failed, err: %d, status: 0x%x, out size: 0x%x",
+ err, rx_mode_cfg.msg_head.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int spnic_set_rx_vlan_offload(void *hwdev, u8 en)
+{
+ struct spnic_cmd_vlan_offload vlan_cfg;
+ u16 out_size = sizeof(vlan_cfg);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ memset(&vlan_cfg, 0, sizeof(vlan_cfg));
+ vlan_cfg.func_id = spnic_global_func_id(hwdev);
+ vlan_cfg.vlan_offload = en;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, SPNIC_CMD_SET_RX_VLAN_OFFLOAD,
+ &vlan_cfg, sizeof(vlan_cfg),
+ &vlan_cfg, &out_size);
+ if (err || !out_size || vlan_cfg.msg_head.status) {
+ PMD_DRV_LOG(ERR, "Set rx vlan offload failed, err: %d, status: 0x%x, out size: 0x%x",
+ err, vlan_cfg.msg_head.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int spnic_set_vlan_fliter(void *hwdev, u32 vlan_filter_ctrl)
+{
+ struct spnic_cmd_set_vlan_filter vlan_filter;
+ u16 out_size = sizeof(vlan_filter);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ memset(&vlan_filter, 0, sizeof(vlan_filter));
+ vlan_filter.func_id = spnic_global_func_id(hwdev);
+ vlan_filter.vlan_filter_ctrl = vlan_filter_ctrl;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, SPNIC_CMD_SET_VLAN_FILTER_EN,
+ &vlan_filter, sizeof(vlan_filter),
+ &vlan_filter, &out_size);
+ if (err || !out_size || vlan_filter.msg_head.status) {
+ PMD_DRV_LOG(ERR, "Failed to set vlan filter, err: %d, status: 0x%x, out size: 0x%x",
+ err, vlan_filter.msg_head.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+static int spnic_set_rx_lro(void *hwdev, u8 ipv4_en, u8 ipv6_en,
+ u8 lro_max_pkt_len)
+{
+ struct spnic_cmd_lro_config lro_cfg;
+ u16 out_size = sizeof(lro_cfg);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ memset(&lro_cfg, 0, sizeof(lro_cfg));
+ lro_cfg.func_id = spnic_global_func_id(hwdev);
+ lro_cfg.opcode = SPNIC_CMD_OP_SET;
+ lro_cfg.lro_ipv4_en = ipv4_en;
+ lro_cfg.lro_ipv6_en = ipv6_en;
+ lro_cfg.lro_max_pkt_len = lro_max_pkt_len;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, SPNIC_CMD_CFG_RX_LRO, &lro_cfg,
+ sizeof(lro_cfg), &lro_cfg, &out_size);
+ if (err || !out_size || lro_cfg.msg_head.status) {
+ PMD_DRV_LOG(ERR, "Set lro offload failed, err: %d, status: 0x%x, out size: 0x%x",
+ err, lro_cfg.msg_head.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+static int spnic_set_rx_lro_timer(void *hwdev, u32 timer_value)
+{
+ struct spnic_cmd_lro_timer lro_timer;
+ u16 out_size = sizeof(lro_timer);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ memset(&lro_timer, 0, sizeof(lro_timer));
+ lro_timer.opcode = SPNIC_CMD_OP_SET;
+ lro_timer.timer = timer_value;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, SPNIC_CMD_CFG_LRO_TIMER, &lro_timer,
+ sizeof(lro_timer), &lro_timer, &out_size);
+ if (err || !out_size || lro_timer.msg_head.status) {
+ PMD_DRV_LOG(ERR, "Set lro timer failed, err: %d, status: 0x%x, out size: 0x%x",
+ err, lro_timer.msg_head.status, out_size);
+
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int spnic_set_rx_lro_state(void *hwdev, u8 lro_en, u32 lro_timer,
+ u32 lro_max_pkt_len)
+{
+ u8 ipv4_en = 0, ipv6_en = 0;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ ipv4_en = lro_en ? 1 : 0;
+ ipv6_en = lro_en ? 1 : 0;
+
+ PMD_DRV_LOG(INFO, "Set LRO max coalesce packet size to %uK",
+ lro_max_pkt_len);
+
+ err = spnic_set_rx_lro(hwdev, ipv4_en, ipv6_en, (u8)lro_max_pkt_len);
+ if (err)
+ return err;
+
+ /* We don't set LRO timer for VF */
+ if (spnic_func_type(hwdev) == TYPE_VF)
+ return 0;
+
+ PMD_DRV_LOG(INFO, "Set LRO timer to %u", lro_timer);
+
+ return spnic_set_rx_lro_timer(hwdev, lro_timer);
+}
+
+/* RSS config */
+int spnic_rss_template_alloc(void *hwdev)
+{
+ struct spnic_rss_template_mgmt template_mgmt;
+ u16 out_size = sizeof(template_mgmt);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ memset(&template_mgmt, 0, sizeof(struct spnic_rss_template_mgmt));
+ template_mgmt.func_id = spnic_global_func_id(hwdev);
+ template_mgmt.cmd = NIC_RSS_CMD_TEMP_ALLOC;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, SPNIC_CMD_RSS_TEMP_MGR,
+ &template_mgmt, sizeof(template_mgmt),
+ &template_mgmt, &out_size);
+ if (err || !out_size || template_mgmt.msg_head.status) {
+ if (template_mgmt.msg_head.status ==
+ SPNIC_MGMT_STATUS_TABLE_FULL) {
+ PMD_DRV_LOG(ERR, "There is no more template available");
+ return -ENOSPC;
+ }
+ PMD_DRV_LOG(ERR, "Alloc rss template failed, err: %d, "
+ "status: 0x%x, out size: 0x%x",
+ err, template_mgmt.msg_head.status, out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+int spnic_rss_template_free(void *hwdev)
+{
+ struct spnic_rss_template_mgmt template_mgmt;
+ u16 out_size = sizeof(template_mgmt);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ memset(&template_mgmt, 0, sizeof(struct spnic_rss_template_mgmt));
+ template_mgmt.func_id = spnic_global_func_id(hwdev);
+ template_mgmt.cmd = NIC_RSS_CMD_TEMP_FREE;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, SPNIC_CMD_RSS_TEMP_MGR,
+ &template_mgmt, sizeof(template_mgmt),
+ &template_mgmt, &out_size);
+ if (err || !out_size || template_mgmt.msg_head.status) {
+ PMD_DRV_LOG(ERR, "Free rss template failed, err: %d, "
+ "status: 0x%x, out size: 0x%x",
+ err, template_mgmt.msg_head.status, out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+static int spnic_rss_cfg_hash_key(void *hwdev, u8 opcode, u8 *key)
+{
+ struct spnic_cmd_rss_hash_key hash_key;
+ u16 out_size = sizeof(hash_key);
+ int err;
+
+ if (!hwdev || !key)
+ return -EINVAL;
+
+ memset(&hash_key, 0, sizeof(struct spnic_cmd_rss_hash_key));
+ hash_key.func_id = spnic_global_func_id(hwdev);
+ hash_key.opcode = opcode;
+ if (opcode == SPNIC_CMD_OP_SET)
+ memcpy(hash_key.key, key, SPNIC_RSS_KEY_SIZE);
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, SPNIC_CMD_CFG_RSS_HASH_KEY,
+ &hash_key, sizeof(hash_key),
+ &hash_key, &out_size);
+ if (err || !out_size || hash_key.msg_head.status) {
+ PMD_DRV_LOG(ERR, "%s hash key failed, err: %d, "
+ "status: 0x%x, out size: 0x%x",
+ opcode == SPNIC_CMD_OP_SET ? "Set" : "Get",
+ err, hash_key.msg_head.status, out_size);
+ return -EFAULT;
+ }
+
+ if (opcode == SPNIC_CMD_OP_GET)
+ memcpy(key, hash_key.key, SPNIC_RSS_KEY_SIZE);
+
+ return 0;
+}
+
+int spnic_rss_set_hash_key(void *hwdev, u8 *key)
+{
+ if (!hwdev || !key)
+ return -EINVAL;
+
+ return spnic_rss_cfg_hash_key(hwdev, SPNIC_CMD_OP_SET, key);
+}
+
+int spnic_rss_get_hash_key(void *hwdev, u8 *key)
+{
+ if (!hwdev || !key)
+ return -EINVAL;
+
+ return spnic_rss_cfg_hash_key(hwdev, SPNIC_CMD_OP_GET, key);
+}
+
+int spnic_rss_get_indir_tbl(void *hwdev, u32 *indir_table)
+{
+ struct spnic_cmd_buf *cmd_buf = NULL;
+ u16 *indir_tbl = NULL;
+ int err, i;
+
+ if (!hwdev || !indir_table)
+ return -EINVAL;
+
+ cmd_buf = spnic_alloc_cmd_buf(hwdev);
+ if (!cmd_buf) {
+ PMD_DRV_LOG(ERR, "Allocate cmd buf failed");
+ return -ENOMEM;
+ }
+
+ cmd_buf->size = sizeof(struct nic_rss_indirect_tbl);
+ err = spnic_cmdq_detail_resp(hwdev, SPNIC_MOD_L2NIC,
+ SPNIC_UCODE_CMD_GET_RSS_INDIR_TABLE,
+ cmd_buf, cmd_buf, 0);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Get rss indir table failed");
+ spnic_free_cmd_buf(cmd_buf);
+ return err;
+ }
+
+ indir_tbl = (u16 *)cmd_buf->buf;
+ for (i = 0; i < SPNIC_RSS_INDIR_SIZE; i++)
+ indir_table[i] = *(indir_tbl + i);
+
+ spnic_free_cmd_buf(cmd_buf);
+ return 0;
+}
+
+int spnic_rss_set_indir_tbl(void *hwdev, const u32 *indir_table)
+{
+ struct nic_rss_indirect_tbl *indir_tbl = NULL;
+ struct spnic_cmd_buf *cmd_buf = NULL;
+ u32 i, size;
+ u32 *temp = NULL;
+ u64 out_param = 0;
+ int err;
+
+ if (!hwdev || !indir_table)
+ return -EINVAL;
+
+ cmd_buf = spnic_alloc_cmd_buf(hwdev);
+ if (!cmd_buf) {
+ PMD_DRV_LOG(ERR, "Allocate cmd buf failed");
+ return -ENOMEM;
+ }
+
+ cmd_buf->size = sizeof(struct nic_rss_indirect_tbl);
+ indir_tbl = (struct nic_rss_indirect_tbl *)cmd_buf->buf;
+ memset(indir_tbl, 0, sizeof(*indir_tbl));
+
+ for (i = 0; i < SPNIC_RSS_INDIR_SIZE; i++)
+ indir_tbl->entry[i] = (u16)(*(indir_table + i));
+
+ size = sizeof(indir_tbl->entry) / sizeof(u32);
+ temp = (u32 *)indir_tbl->entry;
+ for (i = 0; i < size; i++)
+ temp[i] = cpu_to_be32(temp[i]);
+
+ err = spnic_cmdq_direct_resp(hwdev, SPNIC_MOD_L2NIC,
+ SPNIC_UCODE_CMD_SET_RSS_INDIR_TABLE,
+ cmd_buf, &out_param, 0);
+ if (err || out_param != 0) {
+ PMD_DRV_LOG(ERR, "Set rss indir table failed");
+ err = -EFAULT;
+ }
+
+ spnic_free_cmd_buf(cmd_buf);
+ return err;
+}
+
+int spnic_set_rss_type(void *hwdev, struct spnic_rss_type rss_type)
+{
+ struct nic_rss_context_tbl *ctx_tbl = NULL;
+ struct spnic_cmd_buf *cmd_buf = NULL;
+ u32 ctx = 0;
+ u64 out_param = 0;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ cmd_buf = spnic_alloc_cmd_buf(hwdev);
+ if (!cmd_buf) {
+ PMD_DRV_LOG(ERR, "Allocate cmd buf failed");
+ return -ENOMEM;
+ }
+
+ ctx |= SPNIC_RSS_TYPE_SET(1, VALID) |
+ SPNIC_RSS_TYPE_SET(rss_type.ipv4, IPV4) |
+ SPNIC_RSS_TYPE_SET(rss_type.ipv6, IPV6) |
+ SPNIC_RSS_TYPE_SET(rss_type.ipv6_ext, IPV6_EXT) |
+ SPNIC_RSS_TYPE_SET(rss_type.tcp_ipv4, TCP_IPV4) |
+ SPNIC_RSS_TYPE_SET(rss_type.tcp_ipv6, TCP_IPV6) |
+ SPNIC_RSS_TYPE_SET(rss_type.tcp_ipv6_ext, TCP_IPV6_EXT) |
+ SPNIC_RSS_TYPE_SET(rss_type.udp_ipv4, UDP_IPV4) |
+ SPNIC_RSS_TYPE_SET(rss_type.udp_ipv6, UDP_IPV6);
+
+ cmd_buf->size = sizeof(struct nic_rss_context_tbl);
+ ctx_tbl = (struct nic_rss_context_tbl *)cmd_buf->buf;
+ memset(ctx_tbl, 0, sizeof(*ctx_tbl));
+ ctx_tbl->ctx = cpu_to_be32(ctx);
+
+ /* Cfg the RSS context table by command queue */
+ err = spnic_cmdq_direct_resp(hwdev, SPNIC_MOD_L2NIC,
+ SPNIC_UCODE_CMD_SET_RSS_CONTEXT_TABLE,
+ cmd_buf, &out_param, 0);
+
+ spnic_free_cmd_buf(cmd_buf);
+
+ if (err || out_param != 0) {
+ PMD_DRV_LOG(ERR, "Set rss context table failed, err: %d", err);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+int spnic_get_rss_type(void *hwdev, struct spnic_rss_type *rss_type)
+{
+ struct spnic_rss_context_table ctx_tbl;
+ u16 out_size = sizeof(ctx_tbl);
+ int err;
+
+ if (!hwdev || !rss_type)
+ return -EINVAL;
+
+ memset(&ctx_tbl, 0, sizeof(struct spnic_rss_context_table));
+ ctx_tbl.func_id = spnic_global_func_id(hwdev);
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, SPNIC_CMD_GET_RSS_CTX_TBL,
+ &ctx_tbl, sizeof(ctx_tbl),
+ &ctx_tbl, &out_size);
+ if (err || !out_size || ctx_tbl.msg_head.status) {
+ PMD_DRV_LOG(ERR, "Get hash type failed, err: %d, status: 0x%x, out size: 0x%x",
+ err, ctx_tbl.msg_head.status, out_size);
+ return -EFAULT;
+ }
+
+ rss_type->ipv4 = SPNIC_RSS_TYPE_GET(ctx_tbl.context, IPV4);
+ rss_type->ipv6 = SPNIC_RSS_TYPE_GET(ctx_tbl.context, IPV6);
+ rss_type->ipv6_ext = SPNIC_RSS_TYPE_GET(ctx_tbl.context, IPV6_EXT);
+ rss_type->tcp_ipv4 = SPNIC_RSS_TYPE_GET(ctx_tbl.context, TCP_IPV4);
+ rss_type->tcp_ipv6 = SPNIC_RSS_TYPE_GET(ctx_tbl.context, TCP_IPV6);
+ rss_type->tcp_ipv6_ext = SPNIC_RSS_TYPE_GET(ctx_tbl.context,
+ TCP_IPV6_EXT);
+ rss_type->udp_ipv4 = SPNIC_RSS_TYPE_GET(ctx_tbl.context, UDP_IPV4);
+ rss_type->udp_ipv6 = SPNIC_RSS_TYPE_GET(ctx_tbl.context, UDP_IPV6);
+
+ return 0;
+}
+
+static int spnic_rss_cfg_hash_engine(void *hwdev, u8 opcode, u8 *type)
+{
+ struct spnic_cmd_rss_engine_type hash_type;
+ u16 out_size = sizeof(hash_type);
+ int err;
+
+ if (!hwdev || !type)
+ return -EINVAL;
+
+ memset(&hash_type, 0, sizeof(struct spnic_cmd_rss_engine_type));
+ hash_type.func_id = spnic_global_func_id(hwdev);
+ hash_type.opcode = opcode;
+ if (opcode == SPNIC_CMD_OP_SET)
+ hash_type.hash_engine = *type;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, SPNIC_CMD_CFG_RSS_HASH_ENGINE,
+ &hash_type, sizeof(hash_type),
+ &hash_type, &out_size);
+ if (err || !out_size || hash_type.msg_head.status) {
+ PMD_DRV_LOG(ERR, "%s hash engine failed, err: %d, "
+ "status: 0x%x, out size: 0x%x",
+ opcode == SPNIC_CMD_OP_SET ? "Set" : "Get",
+ err, hash_type.msg_head.status, out_size);
+ return -EFAULT;
+ }
+
+ if (opcode == SPNIC_CMD_OP_GET)
+ *type = hash_type.hash_engine;
+
+ return 0;
+}
+
+int spnic_rss_get_hash_engine(void *hwdev, u8 *type)
+{
+ if (!hwdev || !type)
+ return -EINVAL;
+
+ return spnic_rss_cfg_hash_engine(hwdev, SPNIC_CMD_OP_GET, type);
+}
+
+int spnic_rss_set_hash_engine(void *hwdev, u8 type)
+{
+ if (!hwdev)
+ return -EINVAL;
+
+ return spnic_rss_cfg_hash_engine(hwdev, SPNIC_CMD_OP_SET, &type);
+}
+
+int spnic_rss_cfg(void *hwdev, u8 rss_en, u8 tc_num, u8 *prio_tc)
+{
+ struct spnic_cmd_rss_config rss_cfg;
+ u16 out_size = sizeof(rss_cfg);
+ int err;
+
+ /* Ucode requires the number of TCs to be a power of 2 */
+ if (!hwdev || !prio_tc || (tc_num & (tc_num - 1)))
+ return -EINVAL;
+
+ memset(&rss_cfg, 0, sizeof(struct spnic_cmd_rss_config));
+ rss_cfg.func_id = spnic_global_func_id(hwdev);
+ rss_cfg.rss_en = rss_en;
+ rss_cfg.rq_priority_number = tc_num ? (u8)ilog2(tc_num) : 0;
+
+ memcpy(rss_cfg.prio_tc, prio_tc, SPNIC_DCB_UP_MAX);
+ err = l2nic_msg_to_mgmt_sync(hwdev, SPNIC_CMD_RSS_CFG, &rss_cfg,
+ sizeof(rss_cfg), &rss_cfg, &out_size);
+ if (err || !out_size || rss_cfg.msg_head.status) {
+ PMD_DRV_LOG(ERR, "Set rss cfg failed, err: %d, "
+ "status: 0x%x, out size: 0x%x",
+ err, rss_cfg.msg_head.status, out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
int spnic_vf_get_default_cos(void *hwdev, u8 *cos_id)
{
struct spnic_cmd_vf_dcb_state vf_dcb;
diff --git a/drivers/net/spnic/base/spnic_nic_cfg.h b/drivers/net/spnic/base/spnic_nic_cfg.h
index 746c1c342d..3a906b4bc3 100644
--- a/drivers/net/spnic/base/spnic_nic_cfg.h
+++ b/drivers/net/spnic/base/spnic_nic_cfg.h
@@ -12,6 +12,8 @@
#define OS_VF_ID_TO_HW(os_vf_id) ((os_vf_id) + 1)
#define HW_VF_ID_TO_OS(hw_vf_id) ((hw_vf_id) - 1)
+#define SPNIC_VLAN_PRIORITY_SHIFT 13
+
#define SPNIC_DCB_UP_MAX 0x8
#define SPNIC_MAX_NUM_RQ 256
@@ -36,6 +38,38 @@
#define SPNIC_MGMT_STATUS_EXIST 0x6
#define CHECK_IPSU_15BIT 0x8000
+#define SPNIC_MGMT_STATUS_TABLE_EMPTY 0xB
+#define SPNIC_MGMT_STATUS_TABLE_FULL 0xC
+
+#define SPNIC_MGMT_CMD_UNSUPPORTED 0xFF
+
+#define SPNIC_MAX_UC_MAC_ADDRS 128
+#define SPNIC_MAX_MC_MAC_ADDRS 128
+
+/* Structures for RSS config */
+#define SPNIC_RSS_INDIR_SIZE 256
+#define SPNIC_RSS_INDIR_CMDQ_SIZE 128
+#define SPNIC_RSS_KEY_SIZE 40
+#define SPNIC_RSS_ENABLE 0x01
+#define SPNIC_RSS_DISABLE 0x00
+
+struct spnic_rss_type {
+ u8 tcp_ipv6_ext;
+ u8 ipv6_ext;
+ u8 tcp_ipv6;
+ u8 ipv6;
+ u8 tcp_ipv4;
+ u8 ipv4;
+ u8 udp_ipv6;
+ u8 udp_ipv4;
+};
+
+enum spnic_rss_hash_type {
+ SPNIC_RSS_HASH_ENGINE_TYPE_XOR = 0,
+ SPNIC_RSS_HASH_ENGINE_TYPE_TOEP,
+ SPNIC_RSS_HASH_ENGINE_TYPE_MAX,
+};
+
struct spnic_cmd_feature_nego {
struct mgmt_msg_head msg_head;
@@ -121,6 +155,29 @@ struct spnic_port_mac_update {
u16 rsvd2;
u8 new_mac[ETH_ALEN];
};
+
+#define SPNIC_CMD_OP_ADD 1
+#define SPNIC_CMD_OP_DEL 0
+
+struct spnic_cmd_vlan_config {
+ struct mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 opcode;
+ u8 rsvd1;
+ u16 vlan_id;
+ u16 rsvd2;
+};
+
+struct spnic_cmd_set_vlan_filter {
+ struct mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 resvd[2];
+ /* Bit0: vlan filter en; bit1: broadcast filter en */
+ u32 vlan_filter_ctrl;
+};
+
struct spnic_cmd_port_info {
struct mgmt_msg_head msg_head;
@@ -225,9 +282,109 @@ struct spnic_cmd_set_func_tbl {
struct spnic_func_tbl_cfg tbl_cfg;
};
+struct spnic_rx_mode_config {
+ struct mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd1;
+ u32 rx_mode;
+};
+
+struct spnic_cmd_vlan_offload {
+ struct mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 vlan_offload;
+ u8 rsvd1[5];
+};
+
#define SPNIC_CMD_OP_GET 0
#define SPNIC_CMD_OP_SET 1
+struct spnic_cmd_lro_config {
+ struct mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 opcode;
+ u8 rsvd1;
+ u8 lro_ipv4_en;
+ u8 lro_ipv6_en;
+ u8 lro_max_pkt_len; /* Unit size is 1K */
+ u8 resv2[13];
+};
+
+struct spnic_cmd_lro_timer {
+ struct mgmt_msg_head msg_head;
+
+ u8 opcode; /* 1: set timer value, 0: get timer value */
+ u8 rsvd1;
+ u16 rsvd2;
+ u32 timer;
+};
+
+struct spnic_rss_template_mgmt {
+ struct mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 cmd;
+ u8 template_id;
+ u8 rsvd1[4];
+};
+
+struct spnic_cmd_rss_hash_key {
+ struct mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 opcode;
+ u8 rsvd1;
+ u8 key[SPNIC_RSS_KEY_SIZE];
+};
+
+struct spnic_rss_indir_table {
+ struct mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd1;
+ u8 indir[SPNIC_RSS_INDIR_SIZE];
+};
+
+struct nic_rss_indirect_tbl {
+ u32 rsvd[4]; /* Make sure there are 16B before entry[] */
+ u16 entry[SPNIC_RSS_INDIR_SIZE];
+};
+
+struct nic_rss_context_tbl {
+ u32 rsvd[4];
+ u32 ctx;
+};
+
+struct spnic_rss_context_table {
+ struct mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd1;
+ u32 context;
+};
+
+struct spnic_cmd_rss_engine_type {
+ struct mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 opcode;
+ u8 hash_engine;
+ u8 rsvd1[4];
+};
+
+struct spnic_cmd_rss_config {
+ struct mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 rss_en;
+ u8 rq_priority_number;
+ u8 prio_tc[SPNIC_DCB_UP_MAX];
+ u32 rsvd1;
+};
+
enum {
SPNIC_IFLA_VF_LINK_STATE_AUTO, /* Link state of the uplink */
SPNIC_IFLA_VF_LINK_STATE_ENABLE, /* Link always up */
@@ -423,6 +580,50 @@ int spnic_init_nic_hwdev(void *hwdev);
*/
void spnic_free_nic_hwdev(void *hwdev);
+/**
+ * Set function rx mode
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev
+ * @param[in] enable
+ * Rx mode bitmask of unicast/multicast/broadcast flags, 0 to disable
+ *
+ * @retval zero : Success
+ * @retval non-zero : Failure
+ */
+int spnic_set_rx_mode(void *hwdev, u32 enable);
+
+/**
+ * Set function vlan offload valid state
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev
+ * @param[in] en
+ * Vlan offload state, 0-disable, 1-enable
+ *
+ * @retval zero : Success
+ * @retval non-zero : Failure
+ */
+int spnic_set_rx_vlan_offload(void *hwdev, u8 en);
+
+/**
+ * Set rx LRO configuration
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev
+ * @param[in] lro_en
+ * LRO enable state, 0-disable, 1-enable
+ * @param[in] lro_timer
+ * LRO aggregation timeout
+ * @param[in] lro_max_pkt_len
+ * LRO coalesced packet size (unit size is 1KB)
+ *
+ * @retval zero : Success
+ * @retval non-zero : Failure
+ */
+int spnic_set_rx_lro_state(void *hwdev, u8 lro_en, u32 lro_timer,
+ u32 lro_max_pkt_len);
+
/**
* Get port info
*
@@ -438,6 +639,192 @@ int spnic_get_port_info(void *hwdev, struct nic_port_info *port_info);
int spnic_init_function_table(void *hwdev, u16 rx_buff_len);
+/**
+ * Alloc RSS template table
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev
+ *
+ * @retval zero : Success
+ * @retval non-zero : Failure
+ */
+int spnic_rss_template_alloc(void *hwdev);
+
+/**
+ * Free RSS template table
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev
+ *
+ * @retval zero : Success
+ * @retval non-zero : Failure
+ */
+int spnic_rss_template_free(void *hwdev);
+
+/**
+ * Set RSS indirect table
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev
+ * @param[in] indir_table
+ * RSS indirect table
+ *
+ * @retval zero : Success
+ * @retval non-zero : Failure
+ */
+int spnic_rss_set_indir_tbl(void *hwdev, const u32 *indir_table);
+
+/**
+ * Get RSS indirect table
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev
+ * @param[out] indir_table
+ * RSS indirect table
+ *
+ * @retval zero : Success
+ * @retval non-zero : Failure
+ */
+int spnic_rss_get_indir_tbl(void *hwdev, u32 *indir_table);
+
+/**
+ * Set RSS type
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev
+ * @param[in] rss_type
+ * RSS type, including ipv4, tcpv4, ipv6, tcpv6, etc.
+ *
+ * @retval zero : Success
+ * @retval non-zero : Failure
+ */
+int spnic_set_rss_type(void *hwdev, struct spnic_rss_type rss_type);
+
+/**
+ * Get RSS type
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev
+ * @param[out] rss_type
+ * RSS type, including ipv4, tcpv4, ipv6, tcpv6, etc.
+ *
+ * @retval zero : Success
+ * @retval non-zero : Failure
+ */
+int spnic_get_rss_type(void *hwdev, struct spnic_rss_type *rss_type);
+
+/**
+ * Get RSS hash engine
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev
+ * @param[out] type
+ * RSS hash engine, pmd driver only supports Toeplitz
+ *
+ * @retval zero : Success
+ * @retval non-zero : Failure
+ */
+int spnic_rss_get_hash_engine(void *hwdev, u8 *type);
+
+/**
+ * Set RSS hash engine
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev
+ * @param[in] type
+ * RSS hash engine, pmd driver only supports Toeplitz
+ *
+ * @retval zero : Success
+ * @retval non-zero : Failure
+ */
+int spnic_rss_set_hash_engine(void *hwdev, u8 type);
+
+/**
+ * Set RSS configuration
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev
+ * @param[in] rss_en
+ * RSS enable flag, 0-disable, 1-enable
+ * @param[in] tc_num
+ * Number of TC
+ * @param[in] prio_tc
+ * Priority of TC
+ *
+ * @retval zero : Success
+ * @retval non-zero : Failure
+ */
+int spnic_rss_cfg(void *hwdev, u8 rss_en, u8 tc_num, u8 *prio_tc);
+
+/**
+ * Set RSS hash key
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev
+ * @param[in] key
+ * RSS hash key
+ *
+ * @retval zero : Success
+ * @retval non-zero : Failure
+ */
+int spnic_rss_set_hash_key(void *hwdev, u8 *key);
+
+/**
+ * Get RSS hash key
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev
+ * @param[out] key
+ * RSS hash key
+ *
+ * @retval zero : Success
+ * @retval non-zero : Failure
+ */
+int spnic_rss_get_hash_key(void *hwdev, u8 *key);
+
+/**
+ * Add vlan to hardware
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev
+ * @param[in] vlan_id
+ * Vlan id
+ * @param[in] func_id
+ * Function id
+ *
+ * @retval zero : Success
+ * @retval non-zero : Failure
+ */
+int spnic_add_vlan(void *hwdev, u16 vlan_id, u16 func_id);
+
+/**
+ * Delete vlan
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev
+ * @param[in] vlan_id
+ * Vlan id
+ * @param[in] func_id
+ * Function id
+ *
+ * @retval zero : Success
+ * @retval non-zero : Failure
+ */
+int spnic_del_vlan(void *hwdev, u16 vlan_id, u16 func_id);
+
+/**
+ * Set vlan filter
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev
+ * @param[in] vlan_filter_ctrl
+ * Vlan filter enable flag, 0-disable, 1-enable
+ *
+ * @retval zero : Success
+ * @retval non-zero : Failure
+ */
+int spnic_set_vlan_fliter(void *hwdev, u32 vlan_filter_ctrl);
+
/**
* Get VF function default cos
*
diff --git a/drivers/net/spnic/spnic_ethdev.c b/drivers/net/spnic/spnic_ethdev.c
index 27942e5d68..db16d4038d 100644
--- a/drivers/net/spnic/spnic_ethdev.c
+++ b/drivers/net/spnic/spnic_ethdev.c
@@ -5,7 +5,10 @@
#include <rte_pci.h>
#include <rte_bus_pci.h>
#include <ethdev_pci.h>
+#include <rte_mbuf.h>
#include <rte_malloc.h>
+#include <rte_memcpy.h>
+#include <rte_mempool.h>
#include <rte_errno.h>
#include <rte_ether.h>
@@ -27,14 +30,41 @@
#include "spnic_rx.h"
#include "spnic_ethdev.h"
-/* Driver-specific log messages type */
-int spnic_logtype;
+#define SPNIC_MIN_RX_BUF_SIZE 1024
+
+#define SPNIC_DEFAULT_BURST_SIZE 32
+#define SPNIC_DEFAULT_NB_QUEUES 1
+#define SPNIC_DEFAULT_RING_SIZE 1024
+#define SPNIC_MAX_LRO_SIZE 65536
#define SPNIC_DEFAULT_RX_FREE_THRESH 32
#define SPNIC_DEFAULT_TX_FREE_THRESH 32
-#define SPNIC_MAX_UC_MAC_ADDRS 128
-#define SPNIC_MAX_MC_MAC_ADDRS 128
+/*
+ * Vlan_id is a 12 bit number. The VFTA array is actually a 4096 bit array,
+ * organized as 128 32-bit elements. 2^5 = 32, so the lower 5 bits of vlan_id
+ * select the bit within a 32-bit element and the upper 7 bits select the
+ * VFTA array index.
+ */
+#define SPNIC_VFTA_BIT(vlan_id) (1 << ((vlan_id) & 0x1F))
+#define SPNIC_VFTA_IDX(vlan_id) ((vlan_id) >> 5)
+
+#define SPNIC_LRO_DEFAULT_COAL_PKT_SIZE 32
+#define SPNIC_LRO_DEFAULT_TIME_LIMIT 16
+#define SPNIC_LRO_UNIT_WQE_SIZE 1024 /* Bytes */
+
+/* Driver-specific log messages type */
+int spnic_logtype;
+
+enum spnic_rx_mod {
+ SPNIC_RX_MODE_UC = 1 << 0,
+ SPNIC_RX_MODE_MC = 1 << 1,
+ SPNIC_RX_MODE_BC = 1 << 2,
+ SPNIC_RX_MODE_MC_ALL = 1 << 3,
+ SPNIC_RX_MODE_PROMISC = 1 << 4,
+};
+
+#define SPNIC_DEFAULT_RX_MODE (SPNIC_RX_MODE_UC | SPNIC_RX_MODE_MC | \
+ SPNIC_RX_MODE_BC)
#define SPNIC_MAX_QUEUE_DEPTH 16384
#define SPNIC_MIN_QUEUE_DEPTH 128
@@ -638,6 +668,139 @@ static void spnic_deinit_mac_addr(struct rte_eth_dev *eth_dev)
spnic_delete_mc_addr_list(nic_dev);
}
+static int spnic_set_rxtx_configure(struct rte_eth_dev *dev)
+{
+ struct spnic_nic_dev *nic_dev = SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
+ struct rte_eth_rss_conf *rss_conf = NULL;
+ bool lro_en, vlan_filter, vlan_strip;
+ int max_lro_size, lro_max_pkt_len;
+ int err;
+
+ /* Config rx mode */
+ err = spnic_set_rx_mode(nic_dev->hwdev, SPNIC_DEFAULT_RX_MODE);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Set rx_mode: 0x%x failed",
+ SPNIC_DEFAULT_RX_MODE);
+ return err;
+ }
+ nic_dev->rx_mode = SPNIC_DEFAULT_RX_MODE;
+
+ /* Config rx checksum offload */
+ if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_CHECKSUM)
+ nic_dev->rx_csum_en = SPNIC_DEFAULT_RX_CSUM_OFFLOAD;
+
+ /* Config lro */
+ lro_en = dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO ?
+ true : false;
+ max_lro_size = dev->data->dev_conf.rxmode.max_lro_pkt_size;
+ lro_max_pkt_len = max_lro_size / SPNIC_LRO_UNIT_WQE_SIZE ?
+ max_lro_size / SPNIC_LRO_UNIT_WQE_SIZE : 1;
+
+ PMD_DRV_LOG(INFO, "max_lro_size: %d, rx_buff_len: %d, lro_max_pkt_len: %d mtu: %d",
+ max_lro_size, nic_dev->rx_buff_len, lro_max_pkt_len,
+ dev->data->dev_conf.rxmode.mtu);
+
+ err = spnic_set_rx_lro_state(nic_dev->hwdev, lro_en,
+ SPNIC_LRO_DEFAULT_TIME_LIMIT,
+ lro_max_pkt_len);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Set lro state failed, err: %d", err);
+ return err;
+ }
+
+ /* Config RSS */
+ if ((dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) &&
+ nic_dev->num_rqs > 1) {
+ rss_conf = &(dev_conf->rx_adv_conf.rss_conf);
+ err = spnic_update_rss_config(dev, rss_conf);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Set rss config failed, err: %d", err);
+ return err;
+ }
+ }
+
+ /* Config vlan filter */
+ vlan_filter = dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_FILTER ?
+ true : false;
+
+ err = spnic_set_vlan_fliter(nic_dev->hwdev, vlan_filter);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Config vlan filter failed, device: %s, port_id: %d, err: %d",
+ nic_dev->dev_name, dev->data->port_id, err);
+ return err;
+ }
+
+ /* Config vlan stripping */
+ vlan_strip = dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP ?
+ true : false;
+
+ err = spnic_set_rx_vlan_offload(nic_dev->hwdev, vlan_strip);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Config vlan strip failed, device: %s, port_id: %d, err: %d",
+ nic_dev->dev_name, dev->data->port_id, err);
+ return err;
+ }
+
+ spnic_init_rx_queue_list(nic_dev);
+
+ return 0;
+}
+
+static void spnic_remove_rxtx_configure(struct rte_eth_dev *dev)
+{
+ struct spnic_nic_dev *nic_dev = SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ u8 prio_tc[SPNIC_DCB_UP_MAX] = {0};
+
+ spnic_set_rx_mode(nic_dev->hwdev, 0);
+
+ if (nic_dev->rss_state == SPNIC_RSS_ENABLE) {
+ spnic_rss_cfg(nic_dev->hwdev, SPNIC_RSS_DISABLE, 0, prio_tc);
+ spnic_rss_template_free(nic_dev->hwdev);
+ }
+}
+
+static bool spnic_find_vlan_filter(struct spnic_nic_dev *nic_dev,
+ uint16_t vlan_id)
+{
+ u32 vid_idx, vid_bit;
+
+ vid_idx = SPNIC_VFTA_IDX(vlan_id);
+ vid_bit = SPNIC_VFTA_BIT(vlan_id);
+
+ return (nic_dev->vfta[vid_idx] & vid_bit) ? true : false;
+}
+
+static void spnic_store_vlan_filter(struct spnic_nic_dev *nic_dev,
+ u16 vlan_id, bool on)
+{
+ u32 vid_idx, vid_bit;
+
+ vid_idx = SPNIC_VFTA_IDX(vlan_id);
+ vid_bit = SPNIC_VFTA_BIT(vlan_id);
+
+ if (on)
+ nic_dev->vfta[vid_idx] |= vid_bit;
+ else
+ nic_dev->vfta[vid_idx] &= ~vid_bit;
+}
+
+static void spnic_remove_all_vlanid(struct rte_eth_dev *dev)
+{
+ struct spnic_nic_dev *nic_dev = SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ int vlan_id;
+ u16 func_id;
+
+ func_id = spnic_global_func_id(nic_dev->hwdev);
+
+ for (vlan_id = 1; vlan_id < RTE_ETHER_MAX_VLAN_ID; vlan_id++) {
+ if (spnic_find_vlan_filter(nic_dev, vlan_id)) {
+ spnic_del_vlan(nic_dev->hwdev, vlan_id, func_id);
+ spnic_store_vlan_filter(nic_dev, vlan_id, false);
+ }
+ }
+}
+
static int spnic_init_sw_rxtxqs(struct spnic_nic_dev *nic_dev)
{
u32 txq_size;
@@ -736,6 +899,14 @@ static int spnic_dev_start(struct rte_eth_dev *eth_dev)
goto set_mtu_fail;
}
+ /* Set rx configuration: rss/checksum/rxmode/lro */
+ err = spnic_set_rxtx_configure(eth_dev);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Set rx config failed, dev_name: %s",
+ eth_dev->data->name);
+ goto set_rxtx_config_fail;
+ }
+
err = spnic_start_all_rqs(eth_dev);
if (err) {
PMD_DRV_LOG(ERR, "Set rx config failed, dev_name: %s",
@@ -754,6 +925,9 @@ static int spnic_dev_start(struct rte_eth_dev *eth_dev)
return 0;
start_rqs_fail:
+ spnic_remove_rxtx_configure(eth_dev);
+
+set_rxtx_config_fail:
set_mtu_fail:
spnic_free_qp_ctxts(nic_dev->hwdev);
@@ -793,6 +967,10 @@ static int spnic_dev_stop(struct rte_eth_dev *dev)
spnic_flush_txqs(nic_dev);
spnic_flush_qps_res(nic_dev->hwdev);
+
+ /* Clean RSS table and rx_mode */
+ spnic_remove_rxtx_configure(dev);
+
/* Clean root context */
spnic_free_qp_ctxts(nic_dev->hwdev);
@@ -833,6 +1011,7 @@ static int spnic_dev_close(struct rte_eth_dev *eth_dev)
spnic_deinit_sw_rxtxqs(nic_dev);
spnic_deinit_mac_addr(eth_dev);
rte_free(nic_dev->mc_list);
+ spnic_remove_all_vlanid(eth_dev);
rte_bit_relaxed_clear32(SPNIC_DEV_INTR_EN, &nic_dev->dev_status);
diff --git a/drivers/net/spnic/spnic_ethdev.h b/drivers/net/spnic/spnic_ethdev.h
index 321db389dc..996b4e4b8f 100644
--- a/drivers/net/spnic/spnic_ethdev.h
+++ b/drivers/net/spnic/spnic_ethdev.h
@@ -63,6 +63,8 @@ struct spnic_nic_dev {
u32 default_cos;
u32 rx_csum_en;
+ u8 rss_key[SPNIC_RSS_KEY_SIZE];
+
u32 dev_status;
bool pause_set;
diff --git a/drivers/net/spnic/spnic_rx.c b/drivers/net/spnic/spnic_rx.c
index 20cd50c0c4..4d8c6c7e60 100644
--- a/drivers/net/spnic/spnic_rx.c
+++ b/drivers/net/spnic/spnic_rx.c
@@ -284,19 +284,240 @@ static inline void spnic_rearm_rxq_mbuf(struct spnic_rxq *rxq)
#endif
}
+static int spnic_init_rss_key(struct spnic_nic_dev *nic_dev,
+ struct rte_eth_rss_conf *rss_conf)
+{
+ u8 default_rss_key[SPNIC_RSS_KEY_SIZE] = {
+ 0x6d, 0x5a, 0x56, 0xda, 0x25, 0x5b, 0x0e, 0xc2,
+ 0x41, 0x67, 0x25, 0x3d, 0x43, 0xa3, 0x8f, 0xb0,
+ 0xd0, 0xca, 0x2b, 0xcb, 0xae, 0x7b, 0x30, 0xb4,
+ 0x77, 0xcb, 0x2d, 0xa3, 0x80, 0x30, 0xf2, 0x0c,
+ 0x6a, 0x42, 0xb7, 0x3b, 0xbe, 0xac, 0x01, 0xfa};
+ u8 hashkey[SPNIC_RSS_KEY_SIZE] = {0};
+ int err;
+
+ if (rss_conf->rss_key == NULL ||
+ rss_conf->rss_key_len > SPNIC_RSS_KEY_SIZE)
+ memcpy(hashkey, default_rss_key, SPNIC_RSS_KEY_SIZE);
+ else
+ memcpy(hashkey, rss_conf->rss_key, rss_conf->rss_key_len);
+
+ err = spnic_rss_set_hash_key(nic_dev->hwdev, hashkey);
+ if (err)
+ return err;
+
+ memcpy(nic_dev->rss_key, hashkey, SPNIC_RSS_KEY_SIZE);
+ return 0;
+}
+
+void spnic_add_rq_to_rx_queue_list(struct spnic_nic_dev *nic_dev,
+ u16 queue_id)
+{
+ u8 rss_queue_count = nic_dev->num_rss;
+
+ RTE_ASSERT(rss_queue_count <= (RTE_DIM(nic_dev->rx_queue_list) - 1));
+
+ nic_dev->rx_queue_list[rss_queue_count] = (u8)queue_id;
+ nic_dev->num_rss++;
+}
+
+void spnic_init_rx_queue_list(struct spnic_nic_dev *nic_dev)
+{
+ nic_dev->num_rss = 0;
+}
+
+static void spnic_fill_indir_tbl(struct spnic_nic_dev *nic_dev,
+ u32 *indir_tbl)
+{
+ u8 rss_queue_count = nic_dev->num_rss;
+ int i = 0;
+ int j;
+
+ if (rss_queue_count == 0) {
+ /* delete q_id from indir tbl */
+ for (i = 0; i < SPNIC_RSS_INDIR_SIZE; i++)
+ indir_tbl[i] = 0xFF; /* Invalid value in indir tbl */
+ } else {
+ while (i < SPNIC_RSS_INDIR_SIZE)
+ for (j = 0; (j < rss_queue_count) &&
+ (i < SPNIC_RSS_INDIR_SIZE); j++)
+ indir_tbl[i++] = nic_dev->rx_queue_list[j];
+ }
+}
+
+int spnic_refill_indir_rqid(struct spnic_rxq *rxq)
+{
+ struct spnic_nic_dev *nic_dev = rxq->nic_dev;
+ u32 *indir_tbl;
+ int err;
+
+ indir_tbl = rte_zmalloc(NULL, SPNIC_RSS_INDIR_SIZE * sizeof(u32), 0);
+ if (!indir_tbl) {
+ PMD_DRV_LOG(ERR, "Alloc indir_tbl mem failed, eth_dev:%s, queue_idx:%d\n",
+ nic_dev->dev_name, rxq->q_id);
+ return -ENOMEM;
+ }
+
+ /* Build indir tbl according to the number of rss queues */
+ spnic_fill_indir_tbl(nic_dev, indir_tbl);
+
+ err = spnic_rss_set_indir_tbl(nic_dev->hwdev, indir_tbl);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Set indrect table failed, eth_dev:%s, queue_idx:%d\n",
+ nic_dev->dev_name, rxq->q_id);
+ goto out;
+ }
+
+out:
+ rte_free(indir_tbl);
+ return err;
+}
+
+static int spnic_init_rss_type(struct spnic_nic_dev *nic_dev,
+ struct rte_eth_rss_conf *rss_conf)
+{
+ struct spnic_rss_type rss_type = {0};
+ u64 rss_hf = rss_conf->rss_hf;
+ int err;
+
+ rss_type.ipv4 = (rss_hf & (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4)) ? 1 : 0;
+ rss_type.tcp_ipv4 = (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0;
+ rss_type.ipv6 = (rss_hf & (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6)) ? 1 : 0;
+ rss_type.ipv6_ext = (rss_hf & ETH_RSS_IPV6_EX) ? 1 : 0;
+ rss_type.tcp_ipv6 = (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0;
+ rss_type.tcp_ipv6_ext = (rss_hf & ETH_RSS_IPV6_TCP_EX) ? 1 : 0;
+ rss_type.udp_ipv4 = (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0;
+ rss_type.udp_ipv6 = (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0;
+
+ err = spnic_set_rss_type(nic_dev->hwdev, rss_type);
+ return err;
+}
+
+int spnic_update_rss_config(struct rte_eth_dev *dev,
+ struct rte_eth_rss_conf *rss_conf)
+{
+ struct spnic_nic_dev *nic_dev = SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ u8 prio_tc[SPNIC_DCB_UP_MAX] = {0};
+ u8 num_tc = 0;
+ int err;
+
+ if (rss_conf->rss_hf == 0) {
+ rss_conf->rss_hf = SPNIC_RSS_OFFLOAD_ALL;
+ } else if ((rss_conf->rss_hf & SPNIC_RSS_OFFLOAD_ALL) == 0) {
+ PMD_DRV_LOG(ERR, "Does't support rss hash type: %"PRIu64"",
+ rss_conf->rss_hf);
+ return -EINVAL;
+ }
+
+ err = spnic_rss_template_alloc(nic_dev->hwdev);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Alloc rss template failed, err: %d", err);
+ return err;
+ }
+
+ err = spnic_init_rss_key(nic_dev, rss_conf);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Init rss hash key failed, err: %d", err);
+ goto init_rss_fail;
+ }
+
+ err = spnic_init_rss_type(nic_dev, rss_conf);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Init rss hash type failed, err: %d", err);
+ goto init_rss_fail;
+ }
+
+ err = spnic_rss_set_hash_engine(nic_dev->hwdev,
+ SPNIC_RSS_HASH_ENGINE_TYPE_TOEP);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Init rss hash function failed, err: %d", err);
+ goto init_rss_fail;
+ }
+
+ err = spnic_rss_cfg(nic_dev->hwdev, SPNIC_RSS_ENABLE, num_tc,
+ prio_tc);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Enable rss failed, err: %d", err);
+ goto init_rss_fail;
+ }
+
+ nic_dev->rss_state = SPNIC_RSS_ENABLE;
+ return 0;
+
+init_rss_fail:
+ if (spnic_rss_template_free(nic_dev->hwdev))
+ PMD_DRV_LOG(WARNING, "Free rss template failed");
+
+ return err;
+}
+
+static u8 spnic_find_queue_pos_by_rq_id(u8 *queues, u8 queues_count,
+ u8 queue_id)
+{
+ u8 pos;
+
+ for (pos = 0; pos < queues_count; pos++) {
+ if (queue_id == queues[pos])
+ break;
+ }
+
+ return pos;
+}
+
+void spnic_remove_rq_from_rx_queue_list(struct spnic_nic_dev *nic_dev,
+ u16 queue_id)
+{
+ u8 queue_pos;
+ u8 rss_queue_count = nic_dev->num_rss;
+
+ queue_pos = spnic_find_queue_pos_by_rq_id(nic_dev->rx_queue_list,
+ rss_queue_count,
+ (u8)queue_id);
+
+ if (queue_pos < rss_queue_count) {
+ rss_queue_count--;
+ memmove(nic_dev->rx_queue_list + queue_pos,
+ nic_dev->rx_queue_list + queue_pos + 1,
+ (rss_queue_count - queue_pos) *
+ sizeof(nic_dev->rx_queue_list[0]));
+ }
+
+ RTE_ASSERT(rss_queue_count < RTE_DIM(nic_dev->rx_queue_list));
+ nic_dev->num_rss = rss_queue_count;
+}
+
int spnic_start_all_rqs(struct rte_eth_dev *eth_dev)
{
struct spnic_nic_dev *nic_dev = NULL;
struct spnic_rxq *rxq = NULL;
+ int err = 0;
int i;
nic_dev = SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(eth_dev);
for (i = 0; i < nic_dev->num_rqs; i++) {
rxq = eth_dev->data->rx_queues[i];
+ spnic_add_rq_to_rx_queue_list(nic_dev, rxq->q_id);
spnic_rearm_rxq_mbuf(rxq);
eth_dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED;
}
+ if (nic_dev->rss_state == SPNIC_RSS_ENABLE) {
+ err = spnic_refill_indir_rqid(rxq);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Refill rq to indrect table failed, eth_dev:%s, queue_idx:%d err:%d\n",
+ rxq->nic_dev->dev_name, rxq->q_id, err);
+ goto out;
+ }
+ }
+
return 0;
+out:
+ for (i = 0; i < nic_dev->num_rqs; i++) {
+ rxq = eth_dev->data->rx_queues[i];
+ spnic_remove_rq_from_rx_queue_list(nic_dev, rxq->q_id);
+ spnic_free_rxq_mbufs(rxq);
+ eth_dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
+ }
+ return err;
}
diff --git a/drivers/net/spnic/spnic_rx.h b/drivers/net/spnic/spnic_rx.h
index 46f4e1276d..0b534f1904 100644
--- a/drivers/net/spnic/spnic_rx.h
+++ b/drivers/net/spnic/spnic_rx.h
@@ -5,6 +5,23 @@
#ifndef _SPNIC_RX_H_
#define _SPNIC_RX_H_
+#define SPNIC_DEFAULT_RX_CSUM_OFFLOAD 0xFFF
+
+#define SPNIC_RSS_OFFLOAD_ALL ( \
+ ETH_RSS_IPV4 | \
+ ETH_RSS_FRAG_IPV4 | \
+ ETH_RSS_NONFRAG_IPV4_TCP | \
+ ETH_RSS_NONFRAG_IPV4_UDP | \
+ ETH_RSS_NONFRAG_IPV4_OTHER | \
+ ETH_RSS_IPV6 | \
+ ETH_RSS_FRAG_IPV6 | \
+ ETH_RSS_NONFRAG_IPV6_TCP | \
+ ETH_RSS_NONFRAG_IPV6_UDP | \
+ ETH_RSS_NONFRAG_IPV6_OTHER | \
+ ETH_RSS_IPV6_EX | \
+ ETH_RSS_IPV6_TCP_EX | \
+ ETH_RSS_IPV6_UDP_EX)
+
struct spnic_rxq_stats {
u64 packets;
u64 bytes;
@@ -118,7 +135,21 @@ void spnic_free_rxq_mbufs(struct spnic_rxq *rxq);
void spnic_free_all_rxq_mbufs(struct spnic_nic_dev *nic_dev);
+int spnic_update_rss_config(struct rte_eth_dev *dev,
+ struct rte_eth_rss_conf *rss_conf);
+
int spnic_start_all_rqs(struct rte_eth_dev *eth_dev);
+
+void spnic_add_rq_to_rx_queue_list(struct spnic_nic_dev *nic_dev,
+ u16 queue_id);
+
+int spnic_refill_indir_rqid(struct spnic_rxq *rxq);
+
+void spnic_init_rx_queue_list(struct spnic_nic_dev *nic_dev);
+
+void spnic_remove_rq_from_rx_queue_list(struct spnic_nic_dev *nic_dev,
+ u16 queue_id);
+
/**
* Get receive queue local ci
*
--
2.27.0
^ permalink raw reply [flat|nested] 32+ messages in thread
* [PATCH v1 14/25] net/spnic: add port/vport enable
2021-12-18 2:51 [PATCH v1 00/25] Net/SPNIC: support SPNIC into DPDK 22.03 Yanling Song
` (12 preceding siblings ...)
2021-12-18 2:51 ` [PATCH v1 13/25] net/spnic: support Rx congfiguration Yanling Song
@ 2021-12-18 2:51 ` Yanling Song
2021-12-18 2:51 ` [PATCH v1 15/25] net/spnic: support IO packets handling Yanling Song
` (10 subsequent siblings)
24 siblings, 0 replies; 32+ messages in thread
From: Yanling Song @ 2021-12-18 2:51 UTC (permalink / raw)
To: dev; +Cc: songyl, yanling.song, yanggan, ferruh.yigit
This patch adds interfaces to enable the port/vport so that the hardware
will deliver received packets to the host.
Signed-off-by: Yanling Song <songyl@ramaxel.com>
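For quick orientation (a sketch only, not part of the patch), the diff below
enables the vport before the physical port on start and reverses the order on
stop. Trimmed to its essentials, the start-path ordering looks roughly like
this, assuming the driver's internal headers and a valid nic_dev:

#include "spnic_ethdev.h"          /* struct spnic_nic_dev (driver-internal, assumed) */
#include "base/spnic_nic_cfg.h"    /* spnic_set_vport_enable/spnic_set_port_enable (assumed) */

/* Sketch: enable the virtual port first, then the physical port; roll the
 * vport back on failure so no traffic reaches a half-started port. */
static int spnic_port_bringup_sketch(struct spnic_nic_dev *nic_dev)
{
	int err;

	err = spnic_set_vport_enable(nic_dev->hwdev, true);
	if (err)
		return err;

	err = spnic_set_port_enable(nic_dev->hwdev, true);
	if (err)
		(void)spnic_set_vport_enable(nic_dev->hwdev, false);

	return err;
}

The stop path in the diff below mirrors this in reverse: the physical port is
disabled first, then the vport.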
---
drivers/net/spnic/spnic_ethdev.c | 46 ++++++++++++++++++++++++++++++++
1 file changed, 46 insertions(+)
diff --git a/drivers/net/spnic/spnic_ethdev.c b/drivers/net/spnic/spnic_ethdev.c
index db16d4038d..826a34f7fc 100644
--- a/drivers/net/spnic/spnic_ethdev.c
+++ b/drivers/net/spnic/spnic_ethdev.c
@@ -855,8 +855,10 @@ static void spnic_deinit_sw_rxtxqs(struct spnic_nic_dev *nic_dev)
static int spnic_dev_start(struct rte_eth_dev *eth_dev)
{
struct spnic_nic_dev *nic_dev;
+ struct spnic_rxq *rxq = NULL;
u64 nic_features;
int err;
+ u16 i;
nic_dev = SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(eth_dev);
@@ -916,6 +918,22 @@ static int spnic_dev_start(struct rte_eth_dev *eth_dev)
spnic_start_all_sqs(eth_dev);
+ /* Open virtual port to get ready for packet receiving */
+ err = spnic_set_vport_enable(nic_dev->hwdev, true);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Enable vport failed, dev_name: %s",
+ eth_dev->data->name);
+ goto en_vport_fail;
+ }
+
+ /* Open physical port and start packet receiving */
+ err = spnic_set_port_enable(nic_dev->hwdev, true);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Enable physical port failed, dev_name: %s",
+ eth_dev->data->name);
+ goto en_port_fail;
+ }
+
/* Update eth_dev link status */
if (eth_dev->data->dev_conf.intr_conf.lsc != 0)
(void)spnic_link_update(eth_dev, 0);
@@ -924,6 +942,20 @@ static int spnic_dev_start(struct rte_eth_dev *eth_dev)
return 0;
+en_port_fail:
+ (void)spnic_set_vport_enable(nic_dev->hwdev, false);
+
+en_vport_fail:
+ /* Flush tx && rx chip resources in case the vport enable partially took effect */
+ (void)spnic_flush_qps_res(nic_dev->hwdev);
+ rte_delay_ms(100);
+ for (i = 0; i < nic_dev->num_rqs; i++) {
+ rxq = nic_dev->rxqs[i];
+ spnic_remove_rq_from_rx_queue_list(nic_dev, rxq->q_id);
+ spnic_free_rxq_mbufs(rxq);
+ eth_dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
+ eth_dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
+ }
start_rqs_fail:
spnic_remove_rxtx_configure(eth_dev);
@@ -951,6 +983,7 @@ static int spnic_dev_stop(struct rte_eth_dev *dev)
{
struct spnic_nic_dev *nic_dev;
struct rte_eth_link link;
+ int err;
nic_dev = SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
if (!rte_bit_relaxed_test_and_clear32(SPNIC_DEV_START, &nic_dev->dev_status)) {
@@ -959,6 +992,19 @@ static int spnic_dev_stop(struct rte_eth_dev *dev)
return 0;
}
+ /* Stop phy port and vport */
+ err = spnic_set_port_enable(nic_dev->hwdev, false);
+ if (err)
+ PMD_DRV_LOG(WARNING, "Disable phy port failed, error: %d, "
+ "dev_name: %s, port_id: %d", err, dev->data->name,
+ dev->data->port_id);
+
+ err = spnic_set_vport_enable(nic_dev->hwdev, false);
+ if (err)
+ PMD_DRV_LOG(WARNING, "Disable vport failed, error: %d, "
+ "dev_name: %s, port_id: %d", err, dev->data->name,
+ dev->data->port_id);
+
/* Clear recorded link status */
memset(&link, 0, sizeof(link));
(void)rte_eth_linkstatus_set(dev, &link);
--
2.27.0
^ permalink raw reply [flat|nested] 32+ messages in thread
* [PATCH v1 15/25] net/spnic: support IO packets handling
2021-12-18 2:51 [PATCH v1 00/25] Net/SPNIC: support SPNIC into DPDK 22.03 Yanling Song
` (13 preceding siblings ...)
2021-12-18 2:51 ` [PATCH v1 14/25] net/spnic: add port/vport enable Yanling Song
@ 2021-12-18 2:51 ` Yanling Song
2021-12-18 2:51 ` [PATCH v1 16/25] net/spnic: add device configure/version/info Yanling Song
` (9 subsequent siblings)
24 siblings, 0 replies; 32+ messages in thread
From: Yanling Song @ 2021-12-18 2:51 UTC (permalink / raw)
To: dev; +Cc: songyl, yanling.song, yanggan, ferruh.yigit
This patch implements rx_pkt_burst() and tx_pkt_burst()
to handle IO packets.
For Tx packets, this commit parses the ol_flags of the mbuf
and fills the offload info into the wqe so that hardware can
process the packets correctly. Furthermore, it allocates a
copy mempool to cover the case where one packet carries too
many mbuf segments.
For Rx packets, this commit fills the ol_flags of the mbuf and
rearms new mbufs and rq wqes.
Signed-off-by: Yanling Song <songyl@ramaxel.com>
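As an illustration (a sketch, not part of the patch), the burst callbacks added
here are reached through the generic ethdev API; a minimal forwarding loop on
an already-started port might look like the following, where port_id/queue_id
are assumed to be set up by the application:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SZ 32

/* Simple poll loop: spnic_recv_pkts()/spnic_xmit_pkts() are invoked
 * underneath rte_eth_rx_burst()/rte_eth_tx_burst() for spnic ports. */
static void poll_queue(uint16_t port_id, uint16_t queue_id)
{
	struct rte_mbuf *pkts[BURST_SZ];
	uint16_t nb_rx, nb_tx, i;

	nb_rx = rte_eth_rx_burst(port_id, queue_id, pkts, BURST_SZ);
	if (nb_rx == 0)
		return;

	/* Echo the packets back out; offload requests in mbuf->ol_flags
	 * (checksum, VLAN insert, TSO) are translated to wqe fields by the PMD. */
	nb_tx = rte_eth_tx_burst(port_id, queue_id, pkts, nb_rx);

	/* Free anything the Tx ring could not accept */
	for (i = nb_tx; i < nb_rx; i++)
		rte_pktmbuf_free(pkts[i]);
}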
---
drivers/net/spnic/spnic_ethdev.c | 48 +++
drivers/net/spnic/spnic_ethdev.h | 7 +
drivers/net/spnic/spnic_rx.c | 209 ++++++++++++
drivers/net/spnic/spnic_rx.h | 137 ++++++++
drivers/net/spnic/spnic_tx.c | 524 +++++++++++++++++++++++++++++++
drivers/net/spnic/spnic_tx.h | 7 +
6 files changed, 932 insertions(+)
diff --git a/drivers/net/spnic/spnic_ethdev.c b/drivers/net/spnic/spnic_ethdev.c
index 826a34f7fc..b4d20e1a6f 100644
--- a/drivers/net/spnic/spnic_ethdev.c
+++ b/drivers/net/spnic/spnic_ethdev.c
@@ -970,6 +970,32 @@ static int spnic_dev_start(struct rte_eth_dev *eth_dev)
return err;
}
+static int spnic_copy_mempool_init(struct spnic_nic_dev *nic_dev)
+{
+ nic_dev->cpy_mpool = rte_mempool_lookup(nic_dev->dev_name);
+ if (nic_dev->cpy_mpool == NULL) {
+ nic_dev->cpy_mpool =
+ rte_pktmbuf_pool_create(nic_dev->dev_name,
+ SPNIC_COPY_MEMPOOL_DEPTH, 0, 0,
+ SPNIC_COPY_MBUF_SIZE, rte_socket_id());
+ if (nic_dev->cpy_mpool == NULL) {
+ PMD_DRV_LOG(ERR, "Create copy mempool failed, errno: %d, dev_name: %s",
+ rte_errno, nic_dev->dev_name);
+ return -ENOMEM;
+ }
+ }
+
+ return 0;
+}
+
+static void spnic_copy_mempool_uninit(struct spnic_nic_dev *nic_dev)
+{
+ if (nic_dev->cpy_mpool != NULL) {
+ rte_mempool_free(nic_dev->cpy_mpool);
+ nic_dev->cpy_mpool = NULL;
+ }
+}
+
/**
* Stop the device.
*
@@ -986,6 +1012,9 @@ static int spnic_dev_stop(struct rte_eth_dev *dev)
int err;
nic_dev = SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ if (!nic_dev || !spnic_support_nic(nic_dev->hwdev))
+ return 0;
+
if (!rte_bit_relaxed_test_and_clear32(SPNIC_DEV_START, &nic_dev->dev_status)) {
PMD_DRV_LOG(INFO, "Device %s already stopped",
nic_dev->dev_name);
@@ -1014,6 +1043,11 @@ static int spnic_dev_stop(struct rte_eth_dev *dev)
spnic_flush_qps_res(nic_dev->hwdev);
+ /*
+ * After the vport is disabled, wait 100ms to ensure no more packets are sent to the host
+ */
+ rte_delay_ms(100);
+
/* Clean RSS table and rx_mode */
spnic_remove_rxtx_configure(dev);
@@ -1054,6 +1088,7 @@ static int spnic_dev_close(struct rte_eth_dev *eth_dev)
for (qid = 0; qid < nic_dev->num_rqs; qid++)
spnic_rx_queue_release(eth_dev, qid);
+ spnic_copy_mempool_uninit(nic_dev);
spnic_deinit_sw_rxtxqs(nic_dev);
spnic_deinit_mac_addr(eth_dev);
rte_free(nic_dev->mc_list);
@@ -1067,6 +1102,8 @@ static int spnic_dev_close(struct rte_eth_dev *eth_dev)
spnic_free_nic_hwdev(nic_dev->hwdev);
spnic_free_hwdev(nic_dev->hwdev);
+ eth_dev->rx_pkt_burst = NULL;
+ eth_dev->tx_pkt_burst = NULL;
eth_dev->dev_ops = NULL;
rte_free(nic_dev->hwdev);
@@ -1548,6 +1585,13 @@ static int spnic_func_init(struct rte_eth_dev *eth_dev)
goto set_default_feature_fail;
}
+ err = spnic_copy_mempool_init(nic_dev);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Create copy mempool failed, dev_name: %s",
+ eth_dev->data->name);
+ goto init_mpool_fail;
+ }
+
spnic_mutex_init(&nic_dev->rx_mode_mutex, NULL);
rte_bit_relaxed_set32(SPNIC_DEV_INTR_EN, &nic_dev->dev_status);
@@ -1558,6 +1602,7 @@ static int spnic_func_init(struct rte_eth_dev *eth_dev)
return 0;
+init_mpool_fail:
set_default_feature_fail:
spnic_deinit_mac_addr(eth_dev);
@@ -1602,6 +1647,9 @@ static int spnic_dev_init(struct rte_eth_dev *eth_dev)
(rte_eal_process_type() == RTE_PROC_PRIMARY) ?
"primary" : "secondary");
+ eth_dev->rx_pkt_burst = spnic_recv_pkts;
+ eth_dev->tx_pkt_burst = spnic_xmit_pkts;
+
return spnic_func_init(eth_dev);
}
diff --git a/drivers/net/spnic/spnic_ethdev.h b/drivers/net/spnic/spnic_ethdev.h
index 996b4e4b8f..2b59886942 100644
--- a/drivers/net/spnic/spnic_ethdev.h
+++ b/drivers/net/spnic/spnic_ethdev.h
@@ -4,6 +4,9 @@
#ifndef _SPNIC_ETHDEV_H_
#define _SPNIC_ETHDEV_H_
+
+#define SPNIC_COPY_MEMPOOL_DEPTH 128
+#define SPNIC_COPY_MBUF_SIZE 4096
#define SPNIC_DEV_NAME_LEN 32
#define SPNIC_UINT32_BIT_SIZE (CHAR_BIT * sizeof(uint32_t))
@@ -17,6 +20,10 @@ enum spnic_dev_status {
SPNIC_DEV_INTR_EN
};
+enum spnic_tx_cvlan_type {
+ SPNIC_TX_TPID0,
+};
+
enum nic_feature_cap {
NIC_F_CSUM = BIT(0),
NIC_F_SCTP_CRC = BIT(1),
diff --git a/drivers/net/spnic/spnic_rx.c b/drivers/net/spnic/spnic_rx.c
index 4d8c6c7e60..5af836ed41 100644
--- a/drivers/net/spnic/spnic_rx.c
+++ b/drivers/net/spnic/spnic_rx.c
@@ -486,6 +486,117 @@ void spnic_remove_rq_from_rx_queue_list(struct spnic_nic_dev *nic_dev,
nic_dev->num_rss = rss_queue_count;
}
+
+static inline uint64_t spnic_rx_vlan(uint32_t offload_type, uint32_t vlan_len,
+ uint16_t *vlan_tci)
+{
+ uint16_t vlan_tag;
+
+ vlan_tag = SPNIC_GET_RX_VLAN_TAG(vlan_len);
+ if (!SPNIC_GET_RX_VLAN_OFFLOAD_EN(offload_type) || vlan_tag == 0) {
+ *vlan_tci = 0;
+ return 0;
+ }
+
+ *vlan_tci = vlan_tag;
+
+ return RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
+}
+
+static inline uint64_t spnic_rx_csum(uint32_t status, struct spnic_rxq *rxq)
+{
+ struct spnic_nic_dev *nic_dev = rxq->nic_dev;
+ uint32_t csum_err;
+ uint64_t flags;
+
+ if (unlikely(!(nic_dev->rx_csum_en & SPNIC_DEFAULT_RX_CSUM_OFFLOAD)))
+ return RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN;
+
+ /* Most case checksum is ok */
+ csum_err = SPNIC_GET_RX_CSUM_ERR(status);
+ if (likely(csum_err == 0))
+ return (RTE_MBUF_F_RX_IP_CKSUM_GOOD | RTE_MBUF_F_RX_L4_CKSUM_GOOD);
+
+ /*
+ * If bypass bit is set, all other err status indications should be
+ * ignored
+ */
+ if (unlikely(csum_err & SPNIC_RX_CSUM_HW_CHECK_NONE))
+ return RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN;
+
+ flags = 0;
+
+ /* IP checksum error */
+ if (csum_err & SPNIC_RX_CSUM_IP_CSUM_ERR) {
+ flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
+ rxq->rxq_stats.errors++;
+ }
+
+ /* L4 checksum error */
+ if (csum_err & SPNIC_RX_CSUM_TCP_CSUM_ERR ||
+ csum_err & SPNIC_RX_CSUM_UDP_CSUM_ERR ||
+ csum_err & SPNIC_RX_CSUM_SCTP_CRC_ERR) {
+ flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
+ rxq->rxq_stats.errors++;
+ }
+
+ if (unlikely(csum_err == SPNIC_RX_CSUM_IPSU_OTHER_ERR))
+ rxq->rxq_stats.other_errors++;
+
+ return flags;
+}
+
+static inline uint64_t spnic_rx_rss_hash(uint32_t offload_type,
+ uint32_t rss_hash_value,
+ uint32_t *rss_hash)
+{
+ uint32_t rss_type;
+
+ rss_type = SPNIC_GET_RSS_TYPES(offload_type);
+ if (likely(rss_type != 0)) {
+ *rss_hash = rss_hash_value;
+ return RTE_MBUF_F_RX_RSS_HASH;
+ }
+
+ return 0;
+}
+
+static void spnic_recv_jumbo_pkt(struct spnic_rxq *rxq,
+ struct rte_mbuf *head_mbuf,
+ u32 remain_pkt_len)
+{
+ struct rte_mbuf *cur_mbuf = NULL;
+ struct rte_mbuf *rxm = NULL;
+ struct spnic_rx_info *rx_info = NULL;
+ u16 sw_ci, rx_buf_len = rxq->buf_len;
+ u32 pkt_len;
+
+ while (remain_pkt_len > 0) {
+ sw_ci = spnic_get_rq_local_ci(rxq);
+ rx_info = &rxq->rx_info[sw_ci];
+
+ spnic_update_rq_local_ci(rxq, 1);
+
+ pkt_len = remain_pkt_len > rx_buf_len ?
+ rx_buf_len : remain_pkt_len;
+ remain_pkt_len -= pkt_len;
+
+ cur_mbuf = rx_info->mbuf;
+ cur_mbuf->data_len = (u16)pkt_len;
+ cur_mbuf->next = NULL;
+
+ head_mbuf->pkt_len += cur_mbuf->data_len;
+ head_mbuf->nb_segs++;
+
+ if (!rxm)
+ head_mbuf->next = cur_mbuf;
+ else
+ rxm->next = cur_mbuf;
+
+ rxm = cur_mbuf;
+ }
+}
+
int spnic_start_all_rqs(struct rte_eth_dev *eth_dev)
{
struct spnic_nic_dev *nic_dev = NULL;
@@ -521,3 +632,101 @@ int spnic_start_all_rqs(struct rte_eth_dev *eth_dev)
}
return err;
}
+
+u16 spnic_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts)
+{
+ struct spnic_rxq *rxq = rx_queue;
+ struct spnic_rx_info *rx_info = NULL;
+ volatile struct spnic_rq_cqe *rx_cqe = NULL;
+ struct rte_mbuf *rxm = NULL;
+ u16 sw_ci, wqebb_cnt = 0;
+ u32 status, pkt_len, vlan_len, offload_type, hash_value;
+ u32 lro_num;
+ u64 rx_bytes = 0;
+ u16 rx_buf_len, pkts = 0;
+
+ rx_buf_len = rxq->buf_len;
+ sw_ci = spnic_get_rq_local_ci(rxq);
+
+ while (pkts < nb_pkts) {
+ rx_cqe = &rxq->rx_cqe[sw_ci];
+ status = rx_cqe->status;
+ if (!SPNIC_GET_RX_DONE(status))
+ break;
+
+ /* Make sure rx_done is read before packet length */
+ rte_rmb();
+
+ vlan_len = rx_cqe->vlan_len;
+ pkt_len = SPNIC_GET_RX_PKT_LEN(vlan_len);
+
+ rx_info = &rxq->rx_info[sw_ci];
+ rxm = rx_info->mbuf;
+
+ /* 1. Next ci point and prefetch */
+ sw_ci++;
+ sw_ci &= rxq->q_mask;
+
+ /* 2. Prefetch next mbuf first 64B */
+ rte_prefetch0(rxq->rx_info[sw_ci].mbuf);
+
+ /* 3. Jumbo frame process */
+ if (likely(pkt_len <= rx_buf_len)) {
+ rxm->data_len = pkt_len;
+ rxm->pkt_len = pkt_len;
+ wqebb_cnt++;
+ } else {
+ rxm->data_len = rx_buf_len;
+ rxm->pkt_len = rx_buf_len;
+
+ /* If receive jumbo, updating ci will be done by
+ * spnic_recv_jumbo_pkt function.
+ */
+ spnic_update_rq_local_ci(rxq, wqebb_cnt + 1);
+ wqebb_cnt = 0;
+ spnic_recv_jumbo_pkt(rxq, rxm, pkt_len - rx_buf_len);
+ sw_ci = spnic_get_rq_local_ci(rxq);
+ }
+
+ rxm->data_off = RTE_PKTMBUF_HEADROOM;
+ rxm->port = rxq->port_id;
+
+ /* 4. Rx checksum offload */
+ rxm->ol_flags |= spnic_rx_csum(status, rxq);
+
+ /* 5. Vlan offload */
+ offload_type = rx_cqe->offload_type;
+ rxm->ol_flags |= spnic_rx_vlan(offload_type, vlan_len,
+ &rxm->vlan_tci);
+ /* 6. RSS */
+ hash_value = rx_cqe->hash_val;
+ rxm->ol_flags |= spnic_rx_rss_hash(offload_type, hash_value,
+ &rxm->hash.rss);
+ /* 7. LRO */
+ lro_num = SPNIC_GET_RX_NUM_LRO(status);
+ if (unlikely(lro_num != 0)) {
+ rxm->ol_flags |= RTE_MBUF_F_RX_LRO;
+ rxm->tso_segsz = pkt_len / lro_num;
+ }
+
+ rx_cqe->status = 0;
+
+ rx_bytes += pkt_len;
+ rx_pkts[pkts++] = rxm;
+ }
+
+ if (pkts) {
+ /* 8. Update local ci */
+ spnic_update_rq_local_ci(rxq, wqebb_cnt);
+
+ /* Update packet stats */
+ rxq->rxq_stats.packets += pkts;
+ rxq->rxq_stats.bytes += rx_bytes;
+ }
+ rxq->rxq_stats.burst_pkts = pkts;
+
+ /* 9. Rearm mbuf to rxq */
+ spnic_rearm_rxq_mbuf(rxq);
+
+ return pkts;
+}
diff --git a/drivers/net/spnic/spnic_rx.h b/drivers/net/spnic/spnic_rx.h
index 0b534f1904..5ae4b5f1ab 100644
--- a/drivers/net/spnic/spnic_rx.h
+++ b/drivers/net/spnic/spnic_rx.h
@@ -5,6 +5,135 @@
#ifndef _SPNIC_RX_H_
#define _SPNIC_RX_H_
+#define RQ_CQE_OFFOLAD_TYPE_PKT_TYPE_SHIFT 0
+#define RQ_CQE_OFFOLAD_TYPE_PKT_UMBCAST_SHIFT 19
+#define RQ_CQE_OFFOLAD_TYPE_VLAN_EN_SHIFT 21
+#define RQ_CQE_OFFOLAD_TYPE_RSS_TYPE_SHIFT 24
+
+#define RQ_CQE_OFFOLAD_TYPE_PKT_TYPE_MASK 0xFFFU
+#define RQ_CQE_OFFOLAD_TYPE_PKT_UMBCAST_MASK 0x3U
+#define RQ_CQE_OFFOLAD_TYPE_VLAN_EN_MASK 0x1U
+#define RQ_CQE_OFFOLAD_TYPE_RSS_TYPE_MASK 0xFFU
+
+#define RQ_CQE_OFFOLAD_TYPE_GET(val, member) (((val) >> \
+ RQ_CQE_OFFOLAD_TYPE_##member##_SHIFT) & \
+ RQ_CQE_OFFOLAD_TYPE_##member##_MASK)
+
+#define SPNIC_GET_RX_PKT_TYPE(offload_type) \
+ RQ_CQE_OFFOLAD_TYPE_GET(offload_type, PKT_TYPE)
+
+#define SPNIC_GET_RX_PKT_UMBCAST(offload_type) \
+ RQ_CQE_OFFOLAD_TYPE_GET(offload_type, PKT_UMBCAST)
+
+#define SPNIC_GET_RX_VLAN_OFFLOAD_EN(offload_type) \
+ RQ_CQE_OFFOLAD_TYPE_GET(offload_type, VLAN_EN)
+
+#define SPNIC_GET_RSS_TYPES(offload_type) \
+ RQ_CQE_OFFOLAD_TYPE_GET(offload_type, RSS_TYPE)
+
+#define RQ_CQE_SGE_VLAN_SHIFT 0
+#define RQ_CQE_SGE_LEN_SHIFT 16
+
+#define RQ_CQE_SGE_VLAN_MASK 0xFFFFU
+#define RQ_CQE_SGE_LEN_MASK 0xFFFFU
+
+#define RQ_CQE_SGE_GET(val, member) (((val) >> \
+ RQ_CQE_SGE_##member##_SHIFT) & \
+ RQ_CQE_SGE_##member##_MASK)
+
+#define SPNIC_GET_RX_VLAN_TAG(vlan_len) RQ_CQE_SGE_GET(vlan_len, VLAN)
+
+#define SPNIC_GET_RX_PKT_LEN(vlan_len) RQ_CQE_SGE_GET(vlan_len, LEN)
+
+#define RQ_CQE_STATUS_CSUM_ERR_SHIFT 0
+#define RQ_CQE_STATUS_NUM_LRO_SHIFT 16
+#define RQ_CQE_STATUS_LRO_PUSH_SHIFT 25
+#define RQ_CQE_STATUS_LRO_ENTER_SHIFT 26
+#define RQ_CQE_STATUS_LRO_INTR_SHIFT 27
+
+#define RQ_CQE_STATUS_BP_EN_SHIFT 30
+#define RQ_CQE_STATUS_RXDONE_SHIFT 31
+#define RQ_CQE_STATUS_DECRY_PKT_SHIFT 29
+#define RQ_CQE_STATUS_FLUSH_SHIFT 28
+
+#define RQ_CQE_STATUS_CSUM_ERR_MASK 0xFFFFU
+#define RQ_CQE_STATUS_NUM_LRO_MASK 0xFFU
+#define RQ_CQE_STATUS_LRO_PUSH_MASK 0X1U
+#define RQ_CQE_STATUS_LRO_ENTER_MASK 0X1U
+#define RQ_CQE_STATUS_LRO_INTR_MASK 0X1U
+#define RQ_CQE_STATUS_BP_EN_MASK 0X1U
+#define RQ_CQE_STATUS_RXDONE_MASK 0x1U
+#define RQ_CQE_STATUS_FLUSH_MASK 0x1U
+#define RQ_CQE_STATUS_DECRY_PKT_MASK 0x1U
+
+#define RQ_CQE_STATUS_GET(val, member) (((val) >> \
+ RQ_CQE_STATUS_##member##_SHIFT) & \
+ RQ_CQE_STATUS_##member##_MASK)
+
+#define SPNIC_GET_RX_CSUM_ERR(status) RQ_CQE_STATUS_GET(status, CSUM_ERR)
+
+#define SPNIC_GET_RX_DONE(status) RQ_CQE_STATUS_GET(status, RXDONE)
+
+#define SPNIC_GET_RX_FLUSH(status) RQ_CQE_STATUS_GET(status, FLUSH)
+
+#define SPNIC_GET_RX_BP_EN(status) RQ_CQE_STATUS_GET(status, BP_EN)
+
+#define SPNIC_GET_RX_NUM_LRO(status) RQ_CQE_STATUS_GET(status, NUM_LRO)
+
+#define SPNIC_RX_IS_DECRY_PKT(status) RQ_CQE_STATUS_GET(status, DECRY_PKT)
+
+#define RQ_CQE_SUPER_CQE_EN_SHIFT 0
+#define RQ_CQE_PKT_NUM_SHIFT 1
+#define RQ_CQE_PKT_LAST_LEN_SHIFT 6
+#define RQ_CQE_PKT_FIRST_LEN_SHIFT 19
+
+#define RQ_CQE_SUPER_CQE_EN_MASK 0x1
+#define RQ_CQE_PKT_NUM_MASK 0x1FU
+#define RQ_CQE_PKT_FIRST_LEN_MASK 0x1FFFU
+#define RQ_CQE_PKT_LAST_LEN_MASK 0x1FFFU
+
+#define RQ_CQE_PKT_NUM_GET(val, member) (((val) >> \
+ RQ_CQE_PKT_##member##_SHIFT) & \
+ RQ_CQE_PKT_##member##_MASK)
+#define SPNIC_GET_RQ_CQE_PKT_NUM(pkt_info) RQ_CQE_PKT_NUM_GET(pkt_info, NUM)
+
+#define RQ_CQE_SUPER_CQE_EN_GET(val, member) (((val) >> \
+ RQ_CQE_##member##_SHIFT) & \
+ RQ_CQE_##member##_MASK)
+#define SPNIC_GET_SUPER_CQE_EN(pkt_info) \
+ RQ_CQE_SUPER_CQE_EN_GET(pkt_info, SUPER_CQE_EN)
+
+#define RQ_CQE_PKT_LEN_GET(val, member) (((val) >> \
+ RQ_CQE_PKT_##member##_SHIFT) & \
+ RQ_CQE_PKT_##member##_MASK)
+
+#define RQ_CQE_DECRY_INFO_DECRY_STATUS_SHIFT 8
+#define RQ_CQE_DECRY_INFO_ESP_NEXT_HEAD_SHIFT 0
+
+#define RQ_CQE_DECRY_INFO_DECRY_STATUS_MASK 0xFFU
+#define RQ_CQE_DECRY_INFO_ESP_NEXT_HEAD_MASK 0xFFU
+
+#define RQ_CQE_DECRY_INFO_GET(val, member) (((val) >> \
+ RQ_CQE_DECRY_INFO_##member##_SHIFT) & \
+ RQ_CQE_DECRY_INFO_##member##_MASK)
+
+#define SPNIC_GET_DECRYPT_STATUS(decry_info) \
+ RQ_CQE_DECRY_INFO_GET(decry_info, DECRY_STATUS)
+
+#define SPNIC_GET_ESP_NEXT_HEAD(decry_info) \
+ RQ_CQE_DECRY_INFO_GET(decry_info, ESP_NEXT_HEAD)
+
+/* Rx cqe checksum err */
+#define SPNIC_RX_CSUM_IP_CSUM_ERR BIT(0)
+#define SPNIC_RX_CSUM_TCP_CSUM_ERR BIT(1)
+#define SPNIC_RX_CSUM_UDP_CSUM_ERR BIT(2)
+#define SPNIC_RX_CSUM_IGMP_CSUM_ERR BIT(3)
+#define SPNIC_RX_CSUM_ICMPv4_CSUM_ERR BIT(4)
+#define SPNIC_RX_CSUM_ICMPv6_CSUM_ERR BIT(5)
+#define SPNIC_RX_CSUM_SCTP_CRC_ERR BIT(6)
+#define SPNIC_RX_CSUM_HW_CHECK_NONE BIT(7)
+#define SPNIC_RX_CSUM_IPSU_OTHER_ERR BIT(8)
+
#define SPNIC_DEFAULT_RX_CSUM_OFFLOAD 0xFFF
#define SPNIC_RSS_OFFLOAD_ALL ( \
@@ -138,8 +267,16 @@ void spnic_free_all_rxq_mbufs(struct spnic_nic_dev *nic_dev);
int spnic_update_rss_config(struct rte_eth_dev *dev,
struct rte_eth_rss_conf *rss_conf);
+int spnic_poll_rq_empty(struct spnic_rxq *rxq);
+
+void spnic_dump_cqe_status(struct spnic_rxq *rxq, u32 *cqe_done_cnt,
+ u32 *cqe_hole_cnt, u32 *head_ci,
+ u32 *head_done);
+
int spnic_start_all_rqs(struct rte_eth_dev *eth_dev);
+u16 spnic_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts);
+
void spnic_add_rq_to_rx_queue_list(struct spnic_nic_dev *nic_dev,
u16 queue_id);
diff --git a/drivers/net/spnic/spnic_tx.c b/drivers/net/spnic/spnic_tx.c
index d905879412..0772d4929f 100644
--- a/drivers/net/spnic/spnic_tx.c
+++ b/drivers/net/spnic/spnic_tx.c
@@ -30,6 +30,18 @@
#define SPNIC_TX_OUTER_CHECKSUM_FLAG_SET 1
#define SPNIC_TX_OUTER_CHECKSUM_FLAG_NO_SET 0
+#define SPNIC_TX_OFFLOAD_MASK ( \
+ SPNIC_TX_CKSUM_OFFLOAD_MASK | \
+ RTE_MBUF_F_TX_VLAN)
+
+#define SPNIC_TX_CKSUM_OFFLOAD_MASK ( \
+ RTE_MBUF_F_TX_IP_CKSUM | \
+ RTE_MBUF_F_TX_TCP_CKSUM | \
+ RTE_MBUF_F_TX_UDP_CKSUM | \
+ RTE_MBUF_F_TX_SCTP_CKSUM | \
+ RTE_MBUF_F_TX_OUTER_IP_CKSUM | \
+ RTE_MBUF_F_TX_TCP_SEG)
+
/**
* Get send queue free wqebb cnt
*
@@ -289,6 +301,518 @@ static int spnic_tx_done_cleanup(void *txq, u32 free_cnt)
return spnic_xmit_mbuf_cleanup(tx_queue, try_free_cnt);
}
+
+static inline int spnic_tx_offload_pkt_prepare(struct rte_mbuf *mbuf,
+ u16 *inner_l3_offset)
+{
+ uint64_t ol_flags = mbuf->ol_flags;
+
+ /* Only support vxlan offload */
+ if ((ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) &&
+ (!(ol_flags & RTE_MBUF_F_TX_TUNNEL_VXLAN)))
+ return -EINVAL;
+
+#ifdef RTE_LIBRTE_ETHDEV_DEBUG
+ if (rte_validate_tx_offload(mbuf) != 0)
+ return -EINVAL;
+#endif
+ if ((ol_flags & RTE_MBUF_F_TX_TUNNEL_VXLAN)) {
+ if ((ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM) ||
+ (ol_flags & RTE_MBUF_F_TX_OUTER_IPV6) ||
+ (ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
+ /*
+ * For this semantic, l2_len of mbuf means
+ * len(out_udp + vxlan + in_eth)
+ */
+ *inner_l3_offset = mbuf->l2_len + mbuf->outer_l2_len +
+ mbuf->outer_l3_len;
+ } else {
+ /*
+ * For this semantic, l2_len of mbuf means
+ * len(out_eth + out_ip + out_udp + vxlan + in_eth)
+ */
+ *inner_l3_offset = mbuf->l2_len;
+ }
+ } else {
+ /* For non-tunnel type pkts */
+ *inner_l3_offset = mbuf->l2_len;
+ }
+
+ return 0;
+}
+
+/**
+ * Set vlan offload info
+ *
+ * @param[in] task
+ * Send queue wqe task section
+ * @param[in] vlan_tag
+ * Vlan tag info
+ * @param[in] vlan_type
+ * Vlan type in hardware
+ */
+static inline void spnic_set_vlan_tx_offload(struct spnic_sq_task *task,
+ u16 vlan_tag, u8 vlan_type)
+{
+ task->vlan_offload = SQ_TASK_INFO3_SET(vlan_tag, VLAN_TAG) |
+ SQ_TASK_INFO3_SET(vlan_type, VLAN_TYPE) |
+ SQ_TASK_INFO3_SET(1U, VLAN_TAG_VALID);
+}
+
+static inline int spnic_set_tx_offload(struct rte_mbuf *mbuf,
+ struct spnic_sq_task *task,
+ struct spnic_wqe_info *wqe_info)
+{
+ uint64_t ol_flags = mbuf->ol_flags;
+ u16 pld_offset = 0;
+ u32 queue_info = 0;
+ u16 vlan_tag;
+
+ task->pkt_info0 = 0;
+ task->ip_identify = 0;
+ task->pkt_info2 = 0;
+ task->vlan_offload = 0;
+
+ /* Vlan offload */
+ if (unlikely(ol_flags & RTE_MBUF_F_TX_VLAN)) {
+ vlan_tag = mbuf->vlan_tci;
+ spnic_set_vlan_tx_offload(task, vlan_tag, SPNIC_TX_TPID0);
+ }
+
+ if (!(ol_flags & SPNIC_TX_CKSUM_OFFLOAD_MASK))
+ return 0;
+
+ /* Tso offload */
+ if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
+ pld_offset = wqe_info->payload_offset;
+ if ((pld_offset >> 1) > MAX_PAYLOAD_OFFSET)
+ return -EINVAL;
+
+ task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, INNER_L4_EN);
+ task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, INNER_L3_EN);
+
+ queue_info |= SQ_CTRL_QUEUE_INFO_SET(1U, TSO);
+ queue_info |= SQ_CTRL_QUEUE_INFO_SET(pld_offset >> 1, PLDOFF);
+
+ /* Set MSS value */
+ queue_info = SQ_CTRL_QUEUE_INFO_CLEAR(queue_info, MSS);
+ queue_info |= SQ_CTRL_QUEUE_INFO_SET(mbuf->tso_segsz, MSS);
+ } else {
+ if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
+ task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, INNER_L3_EN);
+
+ switch (ol_flags & RTE_MBUF_F_TX_L4_MASK) {
+ case RTE_MBUF_F_TX_TCP_CKSUM:
+ case RTE_MBUF_F_TX_UDP_CKSUM:
+ case RTE_MBUF_F_TX_SCTP_CKSUM:
+ task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, INNER_L4_EN);
+
+ break;
+
+ case RTE_MBUF_F_TX_L4_NO_CKSUM:
+ break;
+
+ default:
+ PMD_DRV_LOG(INFO, "not support pkt type");
+ return -EINVAL;
+ }
+ }
+
+ /* For vxlan, also can support PKT_TX_TUNNEL_GRE, etc */
+ switch (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
+ case RTE_MBUF_F_TX_TUNNEL_VXLAN:
+ task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, TUNNEL_FLAG);
+ break;
+
+ case 0:
+ break;
+
+ default:
+ /* For non UDP/GRE tunneling, drop the tunnel packet */
+ PMD_DRV_LOG(INFO, "not support tunnel pkt type");
+ return -EINVAL;
+ }
+
+ if (ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM)
+ task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, OUT_L3_EN);
+
+ wqe_info->queue_info = queue_info;
+ return 0;
+}
+
+static inline bool spnic_is_tso_sge_valid(struct rte_mbuf *mbuf,
+ struct spnic_wqe_info *wqe_info)
+{
+ u32 total_len, limit_len, checked_len, left_len, adjust_mss;
+ u32 i, max_sges, left_sges, first_len;
+ struct rte_mbuf *mbuf_head, *mbuf_pre, *mbuf_first;
+
+ left_sges = mbuf->nb_segs;
+ mbuf_head = mbuf;
+ mbuf_first = mbuf;
+
+ /* tso sge number validation */
+ if (unlikely(left_sges >= SPNIC_NONTSO_PKT_MAX_SGE)) {
+ checked_len = 0;
+ total_len = 0;
+ first_len = 0;
+ adjust_mss = mbuf->tso_segsz >= TX_MSS_MIN ?
+ mbuf->tso_segsz : TX_MSS_MIN;
+ max_sges = SPNIC_NONTSO_PKT_MAX_SGE - 1;
+ limit_len = adjust_mss + wqe_info->payload_offset;
+
+ for (i = 0; (i < max_sges) && (total_len < limit_len); i++) {
+ total_len += mbuf->data_len;
+ mbuf_pre = mbuf;
+ mbuf = mbuf->next;
+ }
+
+ while (left_sges >= SPNIC_NONTSO_PKT_MAX_SGE) {
+ if (total_len >= limit_len) {
+ /* update the limit len */
+ limit_len = adjust_mss;
+ /* update checked len */
+ checked_len += first_len;
+ /* record the first len */
+ first_len = mbuf_first->data_len;
+ /* first mbuf move to the next */
+ mbuf_first = mbuf_first->next;
+ /* update total len */
+ total_len -= first_len;
+ left_sges--;
+ i--;
+ for (; (i < max_sges) &&
+ (total_len < limit_len); i++) {
+ total_len += mbuf->data_len;
+ mbuf_pre = mbuf;
+ mbuf = mbuf->next;
+ }
+ } else {
+ /* try to copy if not valid */
+ checked_len += (total_len - mbuf_pre->data_len);
+
+ left_len = mbuf_head->pkt_len - checked_len;
+ if (left_len > SPNIC_COPY_MBUF_SIZE)
+ return false;
+ wqe_info->sge_cnt = (u16)(mbuf_head->nb_segs +
+ i - left_sges);
+ wqe_info->cpy_mbuf_cnt = 1;
+
+ return true;
+ }
+ }
+ }
+
+ wqe_info->sge_cnt = mbuf_head->nb_segs;
+ return true;
+}
+
+static inline int spnic_get_tx_offload(struct rte_mbuf *mbuf,
+ struct spnic_wqe_info *wqe_info)
+{
+ uint64_t ol_flags = mbuf->ol_flags;
+ u16 i, total_len, inner_l3_offset = 0;
+ struct rte_mbuf *mbuf_pkt = NULL;
+ int err;
+
+ wqe_info->sge_cnt = mbuf->nb_segs;
+ if (!(ol_flags & SPNIC_TX_OFFLOAD_MASK)) {
+ wqe_info->offload = 0;
+ return 0;
+ }
+
+ wqe_info->offload = 1;
+ err = spnic_tx_offload_pkt_prepare(mbuf, &inner_l3_offset);
+ if (err)
+ return err;
+
+ /* non tso mbuf */
+ if (likely(!(mbuf->ol_flags & RTE_MBUF_F_TX_TCP_SEG))) {
+ if (unlikely(mbuf->pkt_len > MAX_SINGLE_SGE_SIZE))
+ return -EINVAL;
+
+ if (likely(SPNIC_NONTSO_SEG_NUM_VALID(mbuf->nb_segs)))
+ return 0;
+
+ total_len = 0;
+ mbuf_pkt = mbuf;
+ for (i = 0; i < (SPNIC_NONTSO_PKT_MAX_SGE - 1); i++) {
+ total_len += mbuf_pkt->data_len;
+ mbuf_pkt = mbuf_pkt->next;
+ }
+
+ if ((u32)(total_len + (u16)SPNIC_COPY_MBUF_SIZE) <
+ mbuf->pkt_len)
+ return -EINVAL;
+
+ wqe_info->sge_cnt = SPNIC_NONTSO_PKT_MAX_SGE;
+ wqe_info->cpy_mbuf_cnt = 1;
+ return 0;
+ }
+
+ /* tso mbuf */
+ wqe_info->payload_offset = inner_l3_offset + mbuf->l3_len +
+ mbuf->l4_len;
+
+ if (unlikely(SPNIC_TSO_SEG_NUM_INVALID(mbuf->nb_segs)))
+ return -EINVAL;
+
+ if (unlikely(!spnic_is_tso_sge_valid(mbuf, wqe_info)))
+ return -EINVAL;
+
+ return 0;
+}
+
+static inline void spnic_set_buf_desc(struct spnic_sq_bufdesc *buf_descs,
+ rte_iova_t addr, u32 len)
+{
+ buf_descs->hi_addr = upper_32_bits(addr);
+ buf_descs->lo_addr = lower_32_bits(addr);
+ buf_descs->len = len;
+}
+
+static inline void *spnic_copy_tx_mbuf(struct spnic_nic_dev *nic_dev,
+ struct rte_mbuf *mbuf, u16 sge_cnt)
+{
+ struct rte_mbuf *dst_mbuf;
+ u32 offset = 0;
+ u16 i;
+
+ if (unlikely(!nic_dev->cpy_mpool))
+ return NULL;
+
+ dst_mbuf = rte_pktmbuf_alloc(nic_dev->cpy_mpool);
+ if (unlikely(!dst_mbuf))
+ return NULL;
+
+ dst_mbuf->data_off = 0;
+ dst_mbuf->data_len = 0;
+ for (i = 0; i < sge_cnt; i++) {
+ rte_memcpy((u8 *)dst_mbuf->buf_addr + offset,
+ (u8 *)mbuf->buf_addr + mbuf->data_off,
+ mbuf->data_len);
+ dst_mbuf->data_len += mbuf->data_len;
+ offset += mbuf->data_len;
+ mbuf = mbuf->next;
+ }
+ dst_mbuf->pkt_len = dst_mbuf->data_len;
+ return dst_mbuf;
+}
+
+static int spnic_mbuf_dma_map_sge(struct spnic_txq *txq, struct rte_mbuf *mbuf,
+ struct spnic_sq_wqe_combo *wqe_combo,
+ struct spnic_wqe_info *wqe_info)
+{
+ struct spnic_sq_wqe_desc *wqe_desc = wqe_combo->hdr;
+ struct spnic_sq_bufdesc *buf_desc = wqe_combo->bds_head;
+ uint16_t nb_segs = wqe_info->sge_cnt - wqe_info->cpy_mbuf_cnt;
+ uint16_t real_segs = mbuf->nb_segs;
+
+ rte_iova_t dma_addr;
+ u32 i;
+
+ for (i = 0; i < nb_segs; i++) {
+ if (unlikely(mbuf == NULL)) {
+ txq->txq_stats.mbuf_null++;
+ return -EINVAL;
+ }
+
+ if (unlikely(mbuf->data_len == 0)) {
+ txq->txq_stats.sge_len0++;
+ return -EINVAL;
+ }
+
+ dma_addr = rte_mbuf_data_iova(mbuf);
+ if (i == 0) {
+ if (wqe_combo->wqe_type == SQ_WQE_COMPACT_TYPE &&
+ mbuf->data_len > COMPACT_WQE_MAX_CTRL_LEN) {
+ txq->txq_stats.sge_len_too_large++;
+ return -EINVAL;
+ }
+ wqe_desc->hi_addr = upper_32_bits(dma_addr);
+ wqe_desc->lo_addr = lower_32_bits(dma_addr);
+ wqe_desc->ctrl_len = mbuf->data_len;
+ } else {
+ /*
+ * Parts of wqe is in sq bottom while parts
+ * of wqe is in sq head
+ */
+ if (unlikely(wqe_info->wrapped &&
+ (u64)buf_desc == txq->sq_bot_sge_addr))
+ buf_desc = (struct spnic_sq_bufdesc *)
+ (void *)txq->sq_head_addr;
+
+ spnic_set_buf_desc(buf_desc, dma_addr, mbuf->data_len);
+ buf_desc++;
+ }
+
+ mbuf = mbuf->next;
+ }
+
+ if (unlikely(wqe_info->cpy_mbuf_cnt != 0)) {
+ /* Copy invalid mbuf segs to a valid buffer, which costs performance */
+ txq->txq_stats.cpy_pkts += 1;
+ mbuf = spnic_copy_tx_mbuf(txq->nic_dev, mbuf,
+ real_segs - nb_segs);
+ if (unlikely(!mbuf))
+ return -EINVAL;
+
+ txq->tx_info[wqe_info->pi].cpy_mbuf = mbuf;
+
+ /* deal with the last mbuf */
+ dma_addr = rte_mbuf_data_iova(mbuf);
+ if (unlikely(mbuf->data_len == 0)) {
+ txq->txq_stats.sge_len0++;
+ return -EINVAL;
+ }
+
+ if (unlikely(wqe_info->wrapped &&
+ ((u64)buf_desc == txq->sq_bot_sge_addr)))
+ buf_desc =
+ (struct spnic_sq_bufdesc *)txq->sq_head_addr;
+
+ spnic_set_buf_desc(buf_desc, dma_addr, mbuf->data_len);
+ }
+ return 0;
+}
+
+static inline void spnic_prepare_sq_ctrl(struct spnic_sq_wqe_combo *wqe_combo,
+ struct spnic_wqe_info *wqe_info)
+{
+ struct spnic_sq_wqe_desc *wqe_desc = wqe_combo->hdr;
+
+ if (wqe_combo->wqe_type == SQ_WQE_COMPACT_TYPE) {
+ wqe_desc->ctrl_len |= SQ_CTRL_SET(SQ_NORMAL_WQE, DATA_FORMAT) |
+ SQ_CTRL_SET(wqe_combo->wqe_type, EXTENDED) |
+ SQ_CTRL_SET(wqe_info->owner, OWNER);
+ /* Compact wqe queue_info will transfer to ucode */
+ wqe_desc->queue_info = 0;
+ return;
+ }
+
+ wqe_desc->ctrl_len |= SQ_CTRL_SET(wqe_info->sge_cnt, BUFDESC_NUM) |
+ SQ_CTRL_SET(wqe_combo->task_type, TASKSECT_LEN) |
+ SQ_CTRL_SET(SQ_NORMAL_WQE, DATA_FORMAT) |
+ SQ_CTRL_SET(wqe_combo->wqe_type, EXTENDED) |
+ SQ_CTRL_SET(wqe_info->owner, OWNER);
+
+ wqe_desc->queue_info = wqe_info->queue_info;
+ wqe_desc->queue_info |= SQ_CTRL_QUEUE_INFO_SET(1U, UC);
+
+ if (!SQ_CTRL_QUEUE_INFO_GET(wqe_desc->queue_info, MSS)) {
+ wqe_desc->queue_info |=
+ SQ_CTRL_QUEUE_INFO_SET(TX_MSS_DEFAULT, MSS);
+ } else if (SQ_CTRL_QUEUE_INFO_GET(wqe_desc->queue_info, MSS) <
+ TX_MSS_MIN) {
+ /* Mss should not be less than 80 */
+ wqe_desc->queue_info =
+ SQ_CTRL_QUEUE_INFO_CLEAR(wqe_desc->queue_info, MSS);
+ wqe_desc->queue_info |= SQ_CTRL_QUEUE_INFO_SET(TX_MSS_MIN, MSS);
+ }
+}
+
+u16 spnic_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts)
+{
+ struct spnic_txq *txq = tx_queue;
+ struct spnic_tx_info *tx_info = NULL;
+ struct rte_mbuf *mbuf_pkt = NULL;
+ struct spnic_sq_wqe_combo wqe_combo = {0};
+ struct spnic_sq_wqe *sq_wqe = NULL;
+ struct spnic_wqe_info wqe_info = {0};
+ u32 offload_err, free_cnt;
+ u64 tx_bytes = 0;
+ u16 free_wqebb_cnt, nb_tx;
+ int err;
+
+ free_cnt = txq->tx_free_thresh;
+ /* Reclaim tx mbuf before xmit new packets */
+ spnic_xmit_mbuf_cleanup(txq, free_cnt);
+
+ /* Tx loop routine */
+ for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
+ mbuf_pkt = *tx_pkts++;
+ if (spnic_get_tx_offload(mbuf_pkt, &wqe_info)) {
+ txq->txq_stats.off_errs++;
+ break;
+ }
+
+ if (!wqe_info.offload)
+ /*
+ * Use extended sq wqe with small TS, which can include
+ * multi sges, or compact sq normal wqe, which just
+ * supports one sge
+ */
+ wqe_info.wqebb_cnt = mbuf_pkt->nb_segs;
+ else
+ /* Use extended sq wqe with normal TS */
+ wqe_info.wqebb_cnt = mbuf_pkt->nb_segs + 1;
+
+ free_wqebb_cnt = spnic_get_sq_free_wqebbs(txq);
+ if (unlikely(wqe_info.wqebb_cnt > free_wqebb_cnt)) {
+ /* Reclaim again */
+ spnic_xmit_mbuf_cleanup(txq, free_cnt);
+ free_wqebb_cnt = spnic_get_sq_free_wqebbs(txq);
+ if (unlikely(wqe_info.wqebb_cnt > free_wqebb_cnt)) {
+ txq->txq_stats.tx_busy += (nb_pkts - nb_tx);
+ break;
+ }
+ }
+
+ /* Get sq wqe address from wqe_page */
+ sq_wqe = spnic_get_sq_wqe(txq, &wqe_info);
+ if (unlikely(!sq_wqe)) {
+ txq->txq_stats.tx_busy++;
+ break;
+ }
+
+ /* Task or bd section may be wrapped for one wqe */
+ spnic_set_wqe_combo(txq, &wqe_combo, sq_wqe, &wqe_info);
+
+ wqe_info.queue_info = 0;
+ /* Fill tx packet offload into qsf and task field */
+ if (wqe_info.offload) {
+ offload_err = spnic_set_tx_offload(mbuf_pkt,
+ wqe_combo.task,
+ &wqe_info);
+ if (unlikely(offload_err)) {
+ spnic_put_sq_wqe(txq, &wqe_info);
+ txq->txq_stats.off_errs++;
+ break;
+ }
+ }
+
+ /* Fill sq_wqe buf_desc and bd_desc */
+ err = spnic_mbuf_dma_map_sge(txq, mbuf_pkt, &wqe_combo,
+ &wqe_info);
+ if (err) {
+ spnic_put_sq_wqe(txq, &wqe_info);
+ txq->txq_stats.off_errs++;
+ break;
+ }
+
+ /* Record tx info */
+ tx_info = &txq->tx_info[wqe_info.pi];
+ tx_info->mbuf = mbuf_pkt;
+ tx_info->wqebb_cnt = wqe_info.wqebb_cnt;
+
+ spnic_prepare_sq_ctrl(&wqe_combo, &wqe_info);
+
+ spnic_write_db(txq->db_addr, txq->q_id, txq->cos, SQ_CFLAG_DP,
+ MASKED_QUEUE_IDX(txq, txq->prod_idx));
+
+ tx_bytes += mbuf_pkt->pkt_len;
+ }
+
+ /* Update txq stats */
+ if (nb_tx) {
+ txq->txq_stats.packets += nb_tx;
+ txq->txq_stats.bytes += tx_bytes;
+ }
+ txq->txq_stats.burst_pkts = nb_tx;
+
+ return nb_tx;
+}
+
int spnic_stop_sq(struct spnic_txq *txq)
{
struct spnic_nic_dev *nic_dev = txq->nic_dev;
diff --git a/drivers/net/spnic/spnic_tx.h b/drivers/net/spnic/spnic_tx.h
index d770b15c21..4c2d587104 100644
--- a/drivers/net/spnic/spnic_tx.h
+++ b/drivers/net/spnic/spnic_tx.h
@@ -4,6 +4,13 @@
#ifndef _SPNIC_TX_H_
#define _SPNIC_TX_H_
+#define MAX_SINGLE_SGE_SIZE 65536
+#define SPNIC_NONTSO_PKT_MAX_SGE 38
+#define SPNIC_NONTSO_SEG_NUM_VALID(num) \
+ ((num) <= SPNIC_NONTSO_PKT_MAX_SGE)
+
+#define SPNIC_TSO_PKT_MAX_SGE 127
+#define SPNIC_TSO_SEG_NUM_INVALID(num) ((num) > SPNIC_TSO_PKT_MAX_SGE)
/* Tx offload info */
struct spnic_tx_offload_info {
u8 outer_l2_len;
--
2.27.0
^ permalink raw reply [flat|nested] 32+ messages in thread
* [PATCH v1 16/25] net/spnic: add device configure/version/info
2021-12-18 2:51 [PATCH v1 00/25] Net/SPNIC: support SPNIC into DPDK 22.03 Yanling Song
` (14 preceding siblings ...)
2021-12-18 2:51 ` [PATCH v1 15/25] net/spnic: support IO packets handling Yanling Song
@ 2021-12-18 2:51 ` Yanling Song
2021-12-20 0:23 ` Stephen Hemminger
2021-12-18 2:51 ` [PATCH v1 17/25] net/spnic: support RSS configuration update and get Yanling Song
` (8 subsequent siblings)
24 siblings, 1 reply; 32+ messages in thread
From: Yanling Song @ 2021-12-18 2:51 UTC (permalink / raw)
To: dev; +Cc: songyl, yanling.song, yanggan, ferruh.yigit
This commit adds the callbacks to configure the queue numbers and MTU,
as well as to query device configuration information and the firmware version.
Signed-off-by: Yanling Song <songyl@ramaxel.com>
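For context (a sketch, not part of the patch), these callbacks back the generic
ethdev calls an application already uses; assuming a port bound to this PMD,
the flow is roughly:

#include <stdio.h>
#include <rte_ethdev.h>

/* Sketch: querying and configuring a spnic port through the ethdev API.
 * spnic_dev_configure()/spnic_dev_infos_get()/spnic_fw_version_get() are
 * invoked underneath these generic calls. */
static int setup_port(uint16_t port_id)
{
	struct rte_eth_conf conf = { 0 };
	struct rte_eth_dev_info info;
	char fw_ver[64];
	int ret;

	ret = rte_eth_dev_info_get(port_id, &info);
	if (ret != 0)
		return ret;

	if (rte_eth_dev_fw_version_get(port_id, fw_ver, sizeof(fw_ver)) == 0)
		printf("port %u fw: %s\n", port_id, fw_ver);

	/* One Rx/Tx queue each; the PMD's configure callback validates the
	 * queue counts against max_rqs/max_sqs and the MTU range. */
	return rte_eth_dev_configure(port_id, 1, 1, &conf);
}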
---
drivers/net/spnic/spnic_ethdev.c | 148 ++++++++++++++++++++++++++++++-
1 file changed, 146 insertions(+), 2 deletions(-)
diff --git a/drivers/net/spnic/spnic_ethdev.c b/drivers/net/spnic/spnic_ethdev.c
index b4d20e1a6f..c97f15e871 100644
--- a/drivers/net/spnic/spnic_ethdev.c
+++ b/drivers/net/spnic/spnic_ethdev.c
@@ -71,12 +71,150 @@ enum spnic_rx_mod {
#define SPNIC_TXD_ALIGN 1
#define SPNIC_RXD_ALIGN 1
+static const struct rte_eth_desc_lim spnic_rx_desc_lim = {
+ .nb_max = SPNIC_MAX_QUEUE_DEPTH,
+ .nb_min = SPNIC_MIN_QUEUE_DEPTH,
+ .nb_align = SPNIC_RXD_ALIGN,
+};
+
+static const struct rte_eth_desc_lim spnic_tx_desc_lim = {
+ .nb_max = SPNIC_MAX_QUEUE_DEPTH,
+ .nb_min = SPNIC_MIN_QUEUE_DEPTH,
+ .nb_align = SPNIC_TXD_ALIGN,
+};
+
/**
- * Deinit mac_vlan table in hardware.
+ * Ethernet device configuration.
*
- * @param[in] eth_dev
+ * Prepare the driver for a given number of TX and RX queues, mtu size
+ * and configure RSS.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ *
+ * @retval zero : Success
+ * @retval non-zero : Failure.
+ */
+static int spnic_dev_configure(struct rte_eth_dev *dev)
+{
+ struct spnic_nic_dev *nic_dev = SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+
+ nic_dev->num_sqs = dev->data->nb_tx_queues;
+ nic_dev->num_rqs = dev->data->nb_rx_queues;
+
+ if (nic_dev->num_sqs > nic_dev->max_sqs ||
+ nic_dev->num_rqs > nic_dev->max_rqs) {
+ PMD_DRV_LOG(ERR, "num_sqs: %d or num_rqs: %d larger than max_sqs: %d or max_rqs: %d",
+ nic_dev->num_sqs, nic_dev->num_rqs,
+ nic_dev->max_sqs, nic_dev->max_rqs);
+ return -EINVAL;
+ }
+
+ /* The range of mtu is 384~9600 */
+ if (SPNIC_MTU_TO_PKTLEN(dev->data->dev_conf.rxmode.mtu) <
+ SPNIC_MIN_FRAME_SIZE ||
+ SPNIC_MTU_TO_PKTLEN(dev->data->dev_conf.rxmode.mtu) >
+ SPNIC_MAX_JUMBO_FRAME_SIZE) {
+ PMD_DRV_LOG(ERR, "Max rx pkt len out of range, mtu: %d, expect between %d and %d",
+ dev->data->dev_conf.rxmode.mtu,
+ SPNIC_MIN_FRAME_SIZE, SPNIC_MAX_JUMBO_FRAME_SIZE);
+ return -EINVAL;
+ }
+
+ nic_dev->mtu_size =
+ SPNIC_PKTLEN_TO_MTU(dev->data->dev_conf.rxmode.mtu);
+
+ if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+
+ return 0;
+}
+
+/**
+ * Get information about the device.
+ *
+ * @param[in] dev
* Pointer to ethernet device structure.
+ * @param[out] info
+ * Info structure for ethernet device.
+ *
+ * @retval zero : Success
+ * @retval non-zero : Failure.
*/
+static int spnic_dev_infos_get(struct rte_eth_dev *dev,
+ struct rte_eth_dev_info *info)
+{
+ struct spnic_nic_dev *nic_dev = SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+
+ info->max_rx_queues = nic_dev->max_rqs;
+ info->max_tx_queues = nic_dev->max_sqs;
+ info->min_rx_bufsize = SPNIC_MIN_RX_BUF_SIZE;
+ info->max_rx_pktlen = SPNIC_MAX_JUMBO_FRAME_SIZE;
+ info->max_mac_addrs = SPNIC_MAX_UC_MAC_ADDRS;
+ info->min_mtu = SPNIC_MIN_MTU_SIZE;
+ info->max_mtu = SPNIC_MAX_MTU_SIZE;
+ info->max_lro_pkt_size = SPNIC_MAX_LRO_SIZE;
+
+ info->rx_queue_offload_capa = 0;
+ info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP |
+ DEV_RX_OFFLOAD_IPV4_CKSUM |
+ DEV_RX_OFFLOAD_UDP_CKSUM |
+ DEV_RX_OFFLOAD_TCP_CKSUM |
+ DEV_RX_OFFLOAD_SCTP_CKSUM |
+ DEV_RX_OFFLOAD_VLAN_FILTER |
+ DEV_RX_OFFLOAD_SCATTER |
+ DEV_RX_OFFLOAD_TCP_LRO |
+ DEV_RX_OFFLOAD_RSS_HASH;
+
+ info->tx_queue_offload_capa = 0;
+ info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT |
+ DEV_TX_OFFLOAD_IPV4_CKSUM |
+ DEV_TX_OFFLOAD_UDP_CKSUM |
+ DEV_TX_OFFLOAD_TCP_CKSUM |
+ DEV_TX_OFFLOAD_SCTP_CKSUM |
+ DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ DEV_TX_OFFLOAD_TCP_TSO |
+ DEV_TX_OFFLOAD_MULTI_SEGS;
+
+ info->hash_key_size = SPNIC_RSS_KEY_SIZE;
+ info->reta_size = SPNIC_RSS_INDIR_SIZE;
+ info->flow_type_rss_offloads = SPNIC_RSS_OFFLOAD_ALL;
+
+ info->rx_desc_lim = spnic_rx_desc_lim;
+ info->tx_desc_lim = spnic_tx_desc_lim;
+
+ /* Driver-preferred rx/tx parameters */
+ info->default_rxportconf.burst_size = SPNIC_DEFAULT_BURST_SIZE;
+ info->default_txportconf.burst_size = SPNIC_DEFAULT_BURST_SIZE;
+ info->default_rxportconf.nb_queues = SPNIC_DEFAULT_NB_QUEUES;
+ info->default_txportconf.nb_queues = SPNIC_DEFAULT_NB_QUEUES;
+ info->default_rxportconf.ring_size = SPNIC_DEFAULT_RING_SIZE;
+ info->default_txportconf.ring_size = SPNIC_DEFAULT_RING_SIZE;
+
+ return 0;
+}
+
+static int spnic_fw_version_get(struct rte_eth_dev *dev, char *fw_version,
+ size_t fw_size)
+{
+ struct spnic_nic_dev *nic_dev = SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ char mgmt_ver[MGMT_VERSION_MAX_LEN] = { 0 };
+ int err;
+
+ err = spnic_get_mgmt_version(nic_dev->hwdev, mgmt_ver,
+ SPNIC_MGMT_VERSION_MAX_LEN);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Get fw version failed");
+ return -EIO;
+ }
+
+ if (fw_size < strlen(mgmt_ver) + 1)
+ return (strlen(mgmt_ver) + 1);
+
+ snprintf(fw_version, fw_size, "%s", mgmt_ver);
+
+ return 0;
+}
/**
* Set ethernet device link state up.
@@ -1332,6 +1470,9 @@ static int spnic_set_mc_addr_list(struct rte_eth_dev *dev,
}
static const struct eth_dev_ops spnic_pmd_ops = {
+ .dev_configure = spnic_dev_configure,
+ .dev_infos_get = spnic_dev_infos_get,
+ .fw_version_get = spnic_fw_version_get,
.dev_set_link_up = spnic_dev_set_link_up,
.dev_set_link_down = spnic_dev_set_link_down,
.link_update = spnic_link_update,
@@ -1350,6 +1491,9 @@ static const struct eth_dev_ops spnic_pmd_ops = {
};
static const struct eth_dev_ops spnic_pmd_vf_ops = {
+ .dev_configure = spnic_dev_configure,
+ .dev_infos_get = spnic_dev_infos_get,
+ .fw_version_get = spnic_fw_version_get,
.rx_queue_setup = spnic_rx_queue_setup,
.tx_queue_setup = spnic_tx_queue_setup,
.dev_start = spnic_dev_start,
--
2.27.0
^ permalink raw reply [flat|nested] 32+ messages in thread
* [PATCH v1 17/25] net/spnic: support RSS configuration update and get
2021-12-18 2:51 [PATCH v1 00/25] Net/SPNIC: support SPNIC into DPDK 22.03 Yanling Song
` (15 preceding siblings ...)
2021-12-18 2:51 ` [PATCH v1 16/25] net/spnic: add device configure/version/info Yanling Song
@ 2021-12-18 2:51 ` Yanling Song
2021-12-18 2:51 ` [PATCH v1 18/25] net/spnic: support VLAN filtering and offloading Yanling Song
` (7 subsequent siblings)
24 siblings, 0 replies; 32+ messages in thread
From: Yanling Song @ 2021-12-18 2:51 UTC (permalink / raw)
To: dev; +Cc: songyl, yanling.song, yanggan, ferruh.yigit
This commit implements rss_hash_update, rss_hash_conf_get, reta_update
and reta_query.
Signed-off-by: Yanling Song <songyl@ramaxel.com>
---
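A minimal application-side sketch driving these callbacks through the
generic ethdev API (illustration only, not part of this patch; the
256-entry table size is an assumption matching SPNIC_RSS_INDIR_SIZE):

	#include <string.h>
	#include <rte_ethdev.h>

	#define EXAMPLE_RETA_SIZE 256	/* assumed indirection table size */

	static int spnic_example_rss(uint16_t port_id)
	{
		struct rte_eth_rss_conf rss_conf = {
			.rss_key = NULL,	/* keep the current hash key */
			.rss_hf = ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP,
		};
		struct rte_eth_rss_reta_entry64 reta[EXAMPLE_RETA_SIZE /
						     RTE_RETA_GROUP_SIZE];
		int ret;

		/* Calls .rss_hash_update with the new hash types */
		ret = rte_eth_dev_rss_hash_update(port_id, &rss_conf);
		if (ret != 0)
			return ret;

		/* Calls .rss_hash_conf_get to read back what the HW applies */
		ret = rte_eth_dev_rss_hash_conf_get(port_id, &rss_conf);
		if (ret != 0)
			return ret;

		/* Calls .reta_query for the whole indirection table */
		memset(reta, 0xff, sizeof(reta));
		return rte_eth_dev_rss_reta_query(port_id, reta,
						  EXAMPLE_RETA_SIZE);
	}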
drivers/net/spnic/spnic_ethdev.c | 235 +++++++++++++++++++++++++++++++
1 file changed, 235 insertions(+)
diff --git a/drivers/net/spnic/spnic_ethdev.c b/drivers/net/spnic/spnic_ethdev.c
index c97f15e871..61342db497 100644
--- a/drivers/net/spnic/spnic_ethdev.c
+++ b/drivers/net/spnic/spnic_ethdev.c
@@ -1277,6 +1277,233 @@ static int spnic_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
return err;
}
+
+/**
+ * Update the RSS hash key and RSS hash type.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] rss_conf
+ * RSS configuration data.
+ *
+ * @retval zero: Success
+ * @retval non-zero: Failure
+ */
+static int spnic_rss_hash_update(struct rte_eth_dev *dev,
+ struct rte_eth_rss_conf *rss_conf)
+{
+ struct spnic_nic_dev *nic_dev = SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ struct spnic_rss_type rss_type = {0};
+ u64 rss_hf = rss_conf->rss_hf;
+ int err = 0;
+
+ if (nic_dev->rss_state == SPNIC_RSS_DISABLE) {
+ if (rss_hf != 0)
+ return -EINVAL;
+
+ PMD_DRV_LOG(INFO, "RSS is not enabled");
+ return 0;
+ }
+
+ if (rss_conf->rss_key_len > SPNIC_RSS_KEY_SIZE) {
+ PMD_DRV_LOG(ERR, "Invalid RSS key, rss_key_len: %d",
+ rss_conf->rss_key_len);
+ return -EINVAL;
+ }
+
+ if (rss_conf->rss_key) {
+ memcpy(nic_dev->rss_key, rss_conf->rss_key,
+ rss_conf->rss_key_len);
+ err = spnic_rss_set_hash_key(nic_dev->hwdev, nic_dev->rss_key);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Set RSS hash key failed");
+ return err;
+ }
+ }
+
+ rss_type.ipv4 = (rss_hf & (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
+ ETH_RSS_NONFRAG_IPV4_OTHER)) ? 1 : 0;
+ rss_type.tcp_ipv4 = (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0;
+ rss_type.ipv6 = (rss_hf & (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 |
+ ETH_RSS_NONFRAG_IPV6_OTHER)) ? 1 : 0;
+ rss_type.ipv6_ext = (rss_hf & ETH_RSS_IPV6_EX) ? 1 : 0;
+ rss_type.tcp_ipv6 = (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0;
+ rss_type.tcp_ipv6_ext = (rss_hf & ETH_RSS_IPV6_TCP_EX) ? 1 : 0;
+ rss_type.udp_ipv4 = (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0;
+ rss_type.udp_ipv6 = (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0;
+
+ err = spnic_set_rss_type(nic_dev->hwdev, rss_type);
+ if (err)
+ PMD_DRV_LOG(ERR, "Set RSS type failed");
+
+ return err;
+}
+
+/**
+ * Get the RSS hash configuration.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[out] rss_conf
+ * RSS configuration data.
+ *
+ * @retval zero: Success
+ * @retval non-zero: Failure
+ */
+static int spnic_rss_conf_get(struct rte_eth_dev *dev,
+ struct rte_eth_rss_conf *rss_conf)
+{
+ struct spnic_nic_dev *nic_dev = SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ struct spnic_rss_type rss_type = {0};
+ int err;
+
+ if (!rss_conf)
+ return -EINVAL;
+
+ if (nic_dev->rss_state == SPNIC_RSS_DISABLE) {
+ rss_conf->rss_hf = 0;
+ PMD_DRV_LOG(INFO, "RSS is not enabled");
+ return 0;
+ }
+
+ if (rss_conf->rss_key &&
+ rss_conf->rss_key_len >= SPNIC_RSS_KEY_SIZE) {
+ /*
+ * Get RSS key from driver to reduce the frequency of the MPU
+ * accessing the RSS memory.
+ */
+ rss_conf->rss_key_len = sizeof(nic_dev->rss_key);
+ memcpy(rss_conf->rss_key, nic_dev->rss_key,
+ rss_conf->rss_key_len);
+ }
+
+ err = spnic_get_rss_type(nic_dev->hwdev, &rss_type);
+ if (err)
+ return err;
+
+ rss_conf->rss_hf = 0;
+ rss_conf->rss_hf |= rss_type.ipv4 ? (ETH_RSS_IPV4 |
+ ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_OTHER) : 0;
+ rss_conf->rss_hf |= rss_type.tcp_ipv4 ? ETH_RSS_NONFRAG_IPV4_TCP : 0;
+ rss_conf->rss_hf |= rss_type.ipv6 ? (ETH_RSS_IPV6 |
+ ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_OTHER) : 0;
+ rss_conf->rss_hf |= rss_type.ipv6_ext ? ETH_RSS_IPV6_EX : 0;
+ rss_conf->rss_hf |= rss_type.tcp_ipv6 ? ETH_RSS_NONFRAG_IPV6_TCP : 0;
+ rss_conf->rss_hf |= rss_type.tcp_ipv6_ext ? ETH_RSS_IPV6_TCP_EX : 0;
+ rss_conf->rss_hf |= rss_type.udp_ipv4 ? ETH_RSS_NONFRAG_IPV4_UDP : 0;
+ rss_conf->rss_hf |= rss_type.udp_ipv6 ? ETH_RSS_NONFRAG_IPV6_UDP : 0;
+
+ return 0;
+}
+
+/**
+ * Get the RETA indirection table.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[out] reta_conf
+ * Pointer to RETA configuration structure array.
+ * @param[in] reta_size
+ * Size of the RETA table.
+ *
+ * @retval zero: Success
+ * @retval non-zero: Failure
+ */
+static int spnic_rss_reta_query(struct rte_eth_dev *dev,
+ struct rte_eth_rss_reta_entry64 *reta_conf,
+ uint16_t reta_size)
+{
+ struct spnic_nic_dev *nic_dev = SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ u32 indirtbl[SPNIC_RSS_INDIR_SIZE] = {0};
+ u16 idx, shift;
+ u16 i;
+ int err;
+
+ if (nic_dev->rss_state == SPNIC_RSS_DISABLE) {
+ PMD_DRV_LOG(INFO, "RSS is not enabled");
+ return 0;
+ }
+
+ if (reta_size != SPNIC_RSS_INDIR_SIZE) {
+ PMD_DRV_LOG(ERR, "Invalid reta size, reta_size: %d", reta_size);
+ return -EINVAL;
+ }
+
+ err = spnic_rss_get_indir_tbl(nic_dev->hwdev, indirtbl);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Get RSS retas table failed, error: %d",
+ err);
+ return err;
+ }
+
+ for (i = 0; i < reta_size; i++) {
+ idx = i / RTE_RETA_GROUP_SIZE;
+ shift = i % RTE_RETA_GROUP_SIZE;
+ if (reta_conf[idx].mask & (1ULL << shift))
+ reta_conf[idx].reta[shift] = (uint16_t)indirtbl[i];
+ }
+
+ return 0;
+}
+
+/**
+ * Update the RETA indirection table.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] reta_conf
+ * Pointer to RETA configuration structure array.
+ * @param[in] reta_size
+ * Size of the RETA table.
+ *
+ * @retval zero: Success
+ * @retval non-zero: Failure
+ */
+static int spnic_rss_reta_update(struct rte_eth_dev *dev,
+ struct rte_eth_rss_reta_entry64 *reta_conf,
+ uint16_t reta_size)
+{
+ struct spnic_nic_dev *nic_dev = SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ u32 indirtbl[SPNIC_RSS_INDIR_SIZE] = {0};
+ u16 idx, shift;
+ u16 i;
+ int err;
+
+ if (nic_dev->rss_state == SPNIC_RSS_DISABLE)
+ return 0;
+
+ if (reta_size != SPNIC_RSS_INDIR_SIZE) {
+ PMD_DRV_LOG(ERR, "Invalid reta size, reta_size: %d", reta_size);
+ return -EINVAL;
+ }
+
+ err = spnic_rss_get_indir_tbl(nic_dev->hwdev, indirtbl);
+ if (err)
+ return err;
+
+ /* Update RSS reta table */
+ for (i = 0; i < reta_size; i++) {
+ idx = i / RTE_RETA_GROUP_SIZE;
+ shift = i % RTE_RETA_GROUP_SIZE;
+ if (reta_conf[idx].mask & (1ULL << shift))
+ indirtbl[i] = reta_conf[idx].reta[shift];
+ }
+
+ for (i = 0; i < reta_size; i++) {
+ if (indirtbl[i] >= nic_dev->num_rqs) {
+ PMD_DRV_LOG(ERR, "Invalid reta entry, index: %d, num_rqs: %d",
+ indirtbl[i], nic_dev->num_rqs);
+ return -EFAULT;
+ }
+ }
+
+ err = spnic_rss_set_indir_tbl(nic_dev->hwdev, indirtbl);
+ if (err)
+ PMD_DRV_LOG(ERR, "Set RSS reta table failed");
+
+ return err;
+}
+
/**
* Update MAC address
*
@@ -1484,6 +1711,10 @@ static const struct eth_dev_ops spnic_pmd_ops = {
.dev_stop = spnic_dev_stop,
.dev_close = spnic_dev_close,
.mtu_set = spnic_dev_set_mtu,
+ .rss_hash_update = spnic_rss_hash_update,
+ .rss_hash_conf_get = spnic_rss_conf_get,
+ .reta_update = spnic_rss_reta_update,
+ .reta_query = spnic_rss_reta_query,
.mac_addr_set = spnic_set_mac_addr,
.mac_addr_remove = spnic_mac_addr_remove,
.mac_addr_add = spnic_mac_addr_add,
@@ -1503,6 +1734,10 @@ static const struct eth_dev_ops spnic_pmd_vf_ops = {
.dev_stop = spnic_dev_stop,
.dev_close = spnic_dev_close,
.mtu_set = spnic_dev_set_mtu,
+ .rss_hash_update = spnic_rss_hash_update,
+ .rss_hash_conf_get = spnic_rss_conf_get,
+ .reta_update = spnic_rss_reta_update,
+ .reta_query = spnic_rss_reta_query,
.mac_addr_set = spnic_set_mac_addr,
.mac_addr_remove = spnic_mac_addr_remove,
.mac_addr_add = spnic_mac_addr_add,
--
2.27.0
^ permalink raw reply [flat|nested] 32+ messages in thread
* [PATCH v1 18/25] net/spnic: support VLAN filtering and offloading
2021-12-18 2:51 [PATCH v1 00/25] Net/SPNIC: support SPNIC into DPDK 22.03 Yanling Song
` (16 preceding siblings ...)
2021-12-18 2:51 ` [PATCH v1 17/25] net/spnic: support RSS configuration update and get Yanling Song
@ 2021-12-18 2:51 ` Yanling Song
2021-12-18 2:51 ` [PATCH v1 19/25] net/spnic: support promiscuous and allmulticast Rx modes Yanling Song
` (6 subsequent siblings)
24 siblings, 0 replies; 32+ messages in thread
From: Yanling Song @ 2021-12-18 2:51 UTC (permalink / raw)
To: dev; +Cc: songyl, yanling.song, yanggan, ferruh.yigit
This commit implements vlan_filter_set() and vlan_offload_set()
to support VLAN filtering and offloading.
Signed-off-by: Yanling Song <songyl@ramaxel.com>
---
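A minimal application-side sketch of the matching ethdev calls
(illustration only, not part of this patch; the VLAN id is a placeholder):

	#include <rte_ethdev.h>

	static int spnic_example_vlan(uint16_t port_id, uint16_t vlan_id)
	{
		int ret;

		/* Calls .vlan_offload_set: enable VLAN filtering and stripping */
		ret = rte_eth_dev_set_vlan_offload(port_id,
						   ETH_VLAN_FILTER_OFFLOAD |
						   ETH_VLAN_STRIP_OFFLOAD);
		if (ret != 0)
			return ret;

		/* Calls .vlan_filter_set: accept packets tagged with vlan_id */
		return rte_eth_dev_vlan_filter(port_id, vlan_id, 1);
	}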
drivers/net/spnic/base/spnic_nic_cfg.c | 8 ++
drivers/net/spnic/spnic_ethdev.c | 121 +++++++++++++++++++++++++
2 files changed, 129 insertions(+)
diff --git a/drivers/net/spnic/base/spnic_nic_cfg.c b/drivers/net/spnic/base/spnic_nic_cfg.c
index 6c22c4fb3d..0b1198eb5f 100644
--- a/drivers/net/spnic/base/spnic_nic_cfg.c
+++ b/drivers/net/spnic/base/spnic_nic_cfg.c
@@ -294,6 +294,14 @@ static int spnic_config_vlan(void *hwdev, u8 opcode, u16 vlan_id, u16 func_id)
return 0;
}
+int spnic_add_vlan(void *hwdev, u16 vlan_id, u16 func_id)
+{
+ if (!hwdev)
+ return -EINVAL;
+
+ return spnic_config_vlan(hwdev, SPNIC_CMD_OP_ADD, vlan_id, func_id);
+}
+
int spnic_del_vlan(void *hwdev, u16 vlan_id, u16 func_id)
{
if (!hwdev)
diff --git a/drivers/net/spnic/spnic_ethdev.c b/drivers/net/spnic/spnic_ethdev.c
index 61342db497..7f6bd41c55 100644
--- a/drivers/net/spnic/spnic_ethdev.c
+++ b/drivers/net/spnic/spnic_ethdev.c
@@ -1277,6 +1277,123 @@ static int spnic_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
return err;
}
+/**
+ * Add or delete vlan id.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] vlan_id
+ * Vlan id is used to filter vlan packets
+ * @param[in] enable
+ * Add the vlan id when non-zero, otherwise delete it
+ *
+ * @retval zero: Success
+ * @retval non-zero: Failure
+ */
+static int spnic_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id,
+ int enable)
+{
+ struct spnic_nic_dev *nic_dev = SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ int err = 0;
+ u16 func_id;
+
+ if (vlan_id >= RTE_ETHER_MAX_VLAN_ID)
+ return -EINVAL;
+
+ if (vlan_id == 0)
+ return 0;
+
+ func_id = spnic_global_func_id(nic_dev->hwdev);
+
+ if (enable) {
+ /* If vlanid is already set, just return */
+ if (spnic_find_vlan_filter(nic_dev, vlan_id)) {
+ PMD_DRV_LOG(INFO, "Vlan %u has been added, device: %s",
+ vlan_id, nic_dev->dev_name);
+ return 0;
+ }
+
+ err = spnic_add_vlan(nic_dev->hwdev, vlan_id, func_id);
+ } else {
+ /* If vlanid can't be found, just return */
+ if (!spnic_find_vlan_filter(nic_dev, vlan_id)) {
+ PMD_DRV_LOG(INFO, "Vlan %u is not in the vlan filter list, device: %s",
+ vlan_id, nic_dev->dev_name);
+ return 0;
+ }
+
+ err = spnic_del_vlan(nic_dev->hwdev, vlan_id, func_id);
+ }
+
+ if (err) {
+ PMD_DRV_LOG(ERR, "%s vlan failed, func_id: %d, vlan_id: %d, err: %d",
+ enable ? "Add" : "Remove", func_id, vlan_id, err);
+ return err;
+ }
+
+ spnic_store_vlan_filter(nic_dev, vlan_id, enable);
+
+ PMD_DRV_LOG(INFO, "%s vlan %u succeed, device: %s",
+ enable ? "Add" : "Remove", vlan_id, nic_dev->dev_name);
+
+ return 0;
+}
+
+/**
+ * Enable or disable vlan offload.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] mask
+ * Definitions used for VLAN setting, vlan filter or vlan strip
+ *
+ * @retval zero: Success
+ * @retval non-zero: Failure
+ */
+static int spnic_vlan_offload_set(struct rte_eth_dev *dev, int mask)
+{
+ struct spnic_nic_dev *nic_dev = SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
+ bool on;
+ int err;
+
+ /* Enable or disable VLAN filter */
+ if (mask & ETH_VLAN_FILTER_MASK) {
+ on = (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER) ?
+ true : false;
+ err = spnic_set_vlan_fliter(nic_dev->hwdev, on);
+ if (err) {
+ PMD_DRV_LOG(ERR, "%s vlan filter failed, device: %s, port_id: %d, err: %d",
+ on ? "Enable" : "Disable",
+ nic_dev->dev_name, dev->data->port_id, err);
+ return err;
+ }
+
+ PMD_DRV_LOG(INFO, "%s vlan filter succeed, device: %s, port_id: %d",
+ on ? "Enable" : "Disable",
+ nic_dev->dev_name, dev->data->port_id);
+ }
+
+ /* Enable or disable VLAN stripping */
+ if (mask & ETH_VLAN_STRIP_MASK) {
+ on = (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) ?
+ true : false;
+ err = spnic_set_rx_vlan_offload(nic_dev->hwdev, on);
+ if (err) {
+ PMD_DRV_LOG(ERR, "%s vlan strip failed, device: %s, port_id: %d, err: %d",
+ on ? "Enable" : "Disable",
+ nic_dev->dev_name, dev->data->port_id, err);
+ return err;
+ }
+
+ PMD_DRV_LOG(INFO, "%s vlan strip succeed, device: %s, port_id: %d",
+ on ? "Enable" : "Disable",
+ nic_dev->dev_name, dev->data->port_id);
+ }
+
+ return 0;
+}
+
/**
* Update the RSS hash key and RSS hash type.
@@ -1711,6 +1828,8 @@ static const struct eth_dev_ops spnic_pmd_ops = {
.dev_stop = spnic_dev_stop,
.dev_close = spnic_dev_close,
.mtu_set = spnic_dev_set_mtu,
+ .vlan_filter_set = spnic_vlan_filter_set,
+ .vlan_offload_set = spnic_vlan_offload_set,
.rss_hash_update = spnic_rss_hash_update,
.rss_hash_conf_get = spnic_rss_conf_get,
.reta_update = spnic_rss_reta_update,
@@ -1734,6 +1853,8 @@ static const struct eth_dev_ops spnic_pmd_vf_ops = {
.dev_stop = spnic_dev_stop,
.dev_close = spnic_dev_close,
.mtu_set = spnic_dev_set_mtu,
+ .vlan_filter_set = spnic_vlan_filter_set,
+ .vlan_offload_set = spnic_vlan_offload_set,
.rss_hash_update = spnic_rss_hash_update,
.rss_hash_conf_get = spnic_rss_conf_get,
.reta_update = spnic_rss_reta_update,
--
2.27.0
^ permalink raw reply [flat|nested] 32+ messages in thread
* [PATCH v1 19/25] net/spnic: support promiscuous and allmulticast Rx modes
2021-12-18 2:51 [PATCH v1 00/25] Net/SPNIC: support SPNIC into DPDK 22.03 Yanling Song
` (17 preceding siblings ...)
2021-12-18 2:51 ` [PATCH v1 18/25] net/spnic: support VLAN filtering and offloading Yanling Song
@ 2021-12-18 2:51 ` Yanling Song
2021-12-18 2:51 ` [PATCH v1 20/25] net/spnic: support flow control Yanling Song
` (5 subsequent siblings)
24 siblings, 0 replies; 32+ messages in thread
From: Yanling Song @ 2021-12-18 2:51 UTC (permalink / raw)
To: dev; +Cc: songyl, yanling.song, yanggan, ferruh.yigit
This commit implements promiscuous_enable/disable() and
allmulticast_enable/disable() to configure promiscuous or
allmulticast Rx modes. Note: promiscuous Rx mode is only supported
by the PF.
Signed-off-by: Yanling Song <songyl@ramaxel.com>
---
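A minimal application-side sketch of the matching ethdev calls
(illustration only, not part of this patch; on a VF the promiscuous call
is expected to fail since only the PF ops register it):

	#include <rte_ethdev.h>

	static int spnic_example_rx_mode(uint16_t port_id, int is_pf)
	{
		int ret;

		/* Calls .allmulticast_enable on both PF and VF */
		ret = rte_eth_allmulticast_enable(port_id);
		if (ret != 0)
			return ret;

		/* Calls .promiscuous_enable; supported on the PF only */
		return is_pf ? rte_eth_promiscuous_enable(port_id) : 0;
	}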
drivers/net/spnic/spnic_ethdev.c | 156 +++++++++++++++++++++++++++++++
1 file changed, 156 insertions(+)
diff --git a/drivers/net/spnic/spnic_ethdev.c b/drivers/net/spnic/spnic_ethdev.c
index 7f6bd41c55..0b5ef15373 100644
--- a/drivers/net/spnic/spnic_ethdev.c
+++ b/drivers/net/spnic/spnic_ethdev.c
@@ -1394,6 +1394,156 @@ static int spnic_vlan_offload_set(struct rte_eth_dev *dev, int mask)
return 0;
}
+/**
+ * Enable allmulticast mode.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ *
+ * @retval zero: Success
+ * @retval non-zero: Failure
+ */
+static int spnic_dev_allmulticast_enable(struct rte_eth_dev *dev)
+{
+ struct spnic_nic_dev *nic_dev = SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ u32 rx_mode;
+ int err;
+
+ err = spnic_mutex_lock(&nic_dev->rx_mode_mutex);
+ if (err)
+ return err;
+
+ rx_mode = nic_dev->rx_mode | SPNIC_RX_MODE_MC_ALL;
+
+ err = spnic_set_rx_mode(nic_dev->hwdev, rx_mode);
+ if (err) {
+ (void)spnic_mutex_unlock(&nic_dev->rx_mode_mutex);
+ PMD_DRV_LOG(ERR, "Enable allmulticast failed, error: %d", err);
+ return err;
+ }
+
+ nic_dev->rx_mode = rx_mode;
+
+ (void)spnic_mutex_unlock(&nic_dev->rx_mode_mutex);
+
+ PMD_DRV_LOG(INFO, "Enable allmulticast succeed, nic_dev: %s, port_id: %d",
+ nic_dev->dev_name, dev->data->port_id);
+ return 0;
+}
+
+/**
+ * Disable allmulticast mode.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ *
+ * @retval zero: Success
+ * @retval non-zero: Failure
+ */
+static int spnic_dev_allmulticast_disable(struct rte_eth_dev *dev)
+{
+ struct spnic_nic_dev *nic_dev = SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ u32 rx_mode;
+ int err;
+
+ err = spnic_mutex_lock(&nic_dev->rx_mode_mutex);
+ if (err)
+ return err;
+
+ rx_mode = nic_dev->rx_mode & (~SPNIC_RX_MODE_MC_ALL);
+
+ err = spnic_set_rx_mode(nic_dev->hwdev, rx_mode);
+ if (err) {
+ (void)spnic_mutex_unlock(&nic_dev->rx_mode_mutex);
+ PMD_DRV_LOG(ERR, "Disable allmulticast failed, error: %d", err);
+ return err;
+ }
+
+ nic_dev->rx_mode = rx_mode;
+
+ (void)spnic_mutex_unlock(&nic_dev->rx_mode_mutex);
+
+ PMD_DRV_LOG(INFO, "Disable allmulticast succeed, nic_dev: %s, port_id: %d",
+ nic_dev->dev_name, dev->data->port_id);
+ return 0;
+}
+
+/**
+ * Enable promiscuous mode.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ *
+ * @retval zero: Success
+ * @retval non-zero: Failure
+ */
+static int spnic_dev_promiscuous_enable(struct rte_eth_dev *dev)
+{
+ struct spnic_nic_dev *nic_dev = SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ u32 rx_mode;
+ int err;
+
+ err = spnic_mutex_lock(&nic_dev->rx_mode_mutex);
+ if (err)
+ return err;
+
+ rx_mode = nic_dev->rx_mode | SPNIC_RX_MODE_PROMISC;
+
+ err = spnic_set_rx_mode(nic_dev->hwdev, rx_mode);
+ if (err) {
+ (void)spnic_mutex_unlock(&nic_dev->rx_mode_mutex);
+ PMD_DRV_LOG(ERR, "Enable promiscuous failed");
+ return err;
+ }
+
+ nic_dev->rx_mode = rx_mode;
+
+ (void)spnic_mutex_unlock(&nic_dev->rx_mode_mutex);
+
+ PMD_DRV_LOG(INFO, "Enable promiscuous, nic_dev: %s, port_id: %d, promisc: %d",
+ nic_dev->dev_name, dev->data->port_id,
+ dev->data->promiscuous);
+ return 0;
+}
+
+/**
+ * Disable promiscuous mode.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ *
+ * @retval zero: Success
+ * @retval non-zero: Failure
+ */
+static int spnic_dev_promiscuous_disable(struct rte_eth_dev *dev)
+{
+ struct spnic_nic_dev *nic_dev = SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ u32 rx_mode;
+ int err;
+
+ err = spnic_mutex_lock(&nic_dev->rx_mode_mutex);
+ if (err)
+ return err;
+
+ rx_mode = nic_dev->rx_mode & (~SPNIC_RX_MODE_PROMISC);
+
+ err = spnic_set_rx_mode(nic_dev->hwdev, rx_mode);
+ if (err) {
+ (void)spnic_mutex_unlock(&nic_dev->rx_mode_mutex);
+ PMD_DRV_LOG(ERR, "Disable promiscuous failed");
+ return err;
+ }
+
+ nic_dev->rx_mode = rx_mode;
+
+ (void)spnic_mutex_unlock(&nic_dev->rx_mode_mutex);
+
+ PMD_DRV_LOG(INFO, "Disable promiscuous, nic_dev: %s, port_id: %d, promisc: %d",
+ nic_dev->dev_name, dev->data->port_id,
+ dev->data->promiscuous);
+ return 0;
+}
+
/**
* Update the RSS hash key and RSS hash type.
@@ -1830,6 +1980,10 @@ static const struct eth_dev_ops spnic_pmd_ops = {
.mtu_set = spnic_dev_set_mtu,
.vlan_filter_set = spnic_vlan_filter_set,
.vlan_offload_set = spnic_vlan_offload_set,
+ .allmulticast_enable = spnic_dev_allmulticast_enable,
+ .allmulticast_disable = spnic_dev_allmulticast_disable,
+ .promiscuous_enable = spnic_dev_promiscuous_enable,
+ .promiscuous_disable = spnic_dev_promiscuous_disable,
.rss_hash_update = spnic_rss_hash_update,
.rss_hash_conf_get = spnic_rss_conf_get,
.reta_update = spnic_rss_reta_update,
@@ -1855,6 +2009,8 @@ static const struct eth_dev_ops spnic_pmd_vf_ops = {
.mtu_set = spnic_dev_set_mtu,
.vlan_filter_set = spnic_vlan_filter_set,
.vlan_offload_set = spnic_vlan_offload_set,
+ .allmulticast_enable = spnic_dev_allmulticast_enable,
+ .allmulticast_disable = spnic_dev_allmulticast_disable,
.rss_hash_update = spnic_rss_hash_update,
.rss_hash_conf_get = spnic_rss_conf_get,
.reta_update = spnic_rss_reta_update,
--
2.27.0
^ permalink raw reply [flat|nested] 32+ messages in thread
* [PATCH v1 20/25] net/spnic: support flow control
2021-12-18 2:51 [PATCH v1 00/25] Net/SPNIC: support SPNIC into DPDK 22.03 Yanling Song
` (18 preceding siblings ...)
2021-12-18 2:51 ` [PATCH v1 19/25] net/spnic: support promiscuous and allmulticast Rx modes Yanling Song
@ 2021-12-18 2:51 ` Yanling Song
2021-12-18 2:51 ` [PATCH v1 21/25] net/spnic: support getting Tx/Rx queues info Yanling Song
` (4 subsequent siblings)
24 siblings, 0 replies; 32+ messages in thread
From: Yanling Song @ 2021-12-18 2:51 UTC (permalink / raw)
To: dev; +Cc: songyl, yanling.song, yanggan, ferruh.yigit
This commit implements the flow_ctrl_get() and flow_ctrl_set() callbacks
to support flow control configuration through the ethdev API.
Signed-off-by: Yanling Song <songyl@ramaxel.com>
---
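A minimal application-side sketch of the matching ethdev calls
(illustration only, not part of this patch):

	#include <string.h>
	#include <rte_ethdev.h>

	static int spnic_example_pause(uint16_t port_id)
	{
		struct rte_eth_fc_conf fc_conf;
		int ret;

		/* Calls .flow_ctrl_get to read the current pause settings */
		memset(&fc_conf, 0, sizeof(fc_conf));
		ret = rte_eth_dev_flow_ctrl_get(port_id, &fc_conf);
		if (ret != 0)
			return ret;

		/* Calls .flow_ctrl_set: enable pause in both directions */
		fc_conf.mode = RTE_FC_FULL;
		return rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);
	}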
drivers/net/spnic/base/spnic_nic_cfg.c | 53 ++++++++++++++++++
drivers/net/spnic/base/spnic_nic_cfg.h | 25 +++++++++
drivers/net/spnic/spnic_ethdev.c | 77 ++++++++++++++++++++++++++
drivers/net/spnic/spnic_ethdev.h | 1 +
4 files changed, 156 insertions(+)
diff --git a/drivers/net/spnic/base/spnic_nic_cfg.c b/drivers/net/spnic/base/spnic_nic_cfg.c
index 0b1198eb5f..24336a2096 100644
--- a/drivers/net/spnic/base/spnic_nic_cfg.c
+++ b/drivers/net/spnic/base/spnic_nic_cfg.c
@@ -440,6 +440,59 @@ int spnic_flush_qps_res(void *hwdev)
return 0;
}
+static int spnic_cfg_hw_pause(void *hwdev, u8 opcode,
+ struct nic_pause_config *nic_pause)
+{
+ struct spnic_cmd_pause_config pause_info;
+ u16 out_size = sizeof(pause_info);
+ int err;
+
+ memset(&pause_info, 0, sizeof(pause_info));
+
+ pause_info.port_id = spnic_physical_port_id(hwdev);
+ pause_info.opcode = opcode;
+ if (opcode == SPNIC_CMD_OP_SET) {
+ pause_info.auto_neg = nic_pause->auto_neg;
+ pause_info.rx_pause = nic_pause->rx_pause;
+ pause_info.tx_pause = nic_pause->tx_pause;
+ }
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, SPNIC_CMD_CFG_PAUSE_INFO,
+ &pause_info, sizeof(pause_info),
+ &pause_info, &out_size);
+ if (err || !out_size || pause_info.msg_head.status) {
+ PMD_DRV_LOG(ERR, "%s pause info failed, err: %d, status: 0x%x, out size: 0x%x\n",
+ opcode == SPNIC_CMD_OP_SET ? "Set" : "Get",
+ err, pause_info.msg_head.status, out_size);
+ return -EIO;
+ }
+
+ if (opcode == SPNIC_CMD_OP_GET) {
+ nic_pause->auto_neg = pause_info.auto_neg;
+ nic_pause->rx_pause = pause_info.rx_pause;
+ nic_pause->tx_pause = pause_info.tx_pause;
+ }
+
+ return 0;
+}
+
+int spnic_set_pause_info(void *hwdev, struct nic_pause_config nic_pause)
+{
+ if (!hwdev)
+ return -EINVAL;
+
+ return spnic_cfg_hw_pause(hwdev, SPNIC_CMD_OP_SET, &nic_pause);
+}
+
+int spnic_get_pause_info(void *hwdev, struct nic_pause_config *nic_pause)
+{
+ if (!hwdev || !nic_pause)
+ return -EINVAL;
+
+
+ return spnic_cfg_hw_pause(hwdev, SPNIC_CMD_OP_GET, nic_pause);
+}
+
static int spnic_set_function_table(void *hwdev, u32 cfg_bitmap,
struct spnic_func_tbl_cfg *cfg)
{
diff --git a/drivers/net/spnic/base/spnic_nic_cfg.h b/drivers/net/spnic/base/spnic_nic_cfg.h
index 3a906b4bc3..2cdaada2ea 100644
--- a/drivers/net/spnic/base/spnic_nic_cfg.h
+++ b/drivers/net/spnic/base/spnic_nic_cfg.h
@@ -560,6 +560,31 @@ int spnic_get_link_state(void *hwdev, u8 *link_state);
*/
int spnic_flush_qps_res(void *hwdev);
+/**
+ * Set pause info
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev
+ * @param[in] nic_pause
+ * Pause info
+ *
+ * @retval zero : Success
+ * @retval non-zero : Failure
+ */
+int spnic_set_pause_info(void *hwdev, struct nic_pause_config nic_pause);
+
+/**
+ * Get pause info
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev
+ * @param[out] nic_pause
+ * Pause info
+ *
+ * @retval zero : Success
+ * @retval non-zero : Failure
+ */
+int spnic_get_pause_info(void *hwdev, struct nic_pause_config *nic_pause);
/**
* Init nic hwdev
diff --git a/drivers/net/spnic/spnic_ethdev.c b/drivers/net/spnic/spnic_ethdev.c
index 0b5ef15373..dc7fcae10f 100644
--- a/drivers/net/spnic/spnic_ethdev.c
+++ b/drivers/net/spnic/spnic_ethdev.c
@@ -1544,6 +1544,81 @@ static int spnic_dev_promiscuous_disable(struct rte_eth_dev *dev)
return 0;
}
+static int spnic_dev_flow_ctrl_get(struct rte_eth_dev *dev,
+ struct rte_eth_fc_conf *fc_conf)
+{
+ struct spnic_nic_dev *nic_dev = SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ struct nic_pause_config nic_pause;
+ int err;
+
+ err = spnic_mutex_lock(&nic_dev->pause_mutuex);
+ if (err)
+ return err;
+
+ memset(&nic_pause, 0, sizeof(nic_pause));
+ err = spnic_get_pause_info(nic_dev->hwdev, &nic_pause);
+ if (err) {
+ (void)spnic_mutex_unlock(&nic_dev->pause_mutuex);
+ return err;
+ }
+
+ if (nic_dev->pause_set || !nic_pause.auto_neg) {
+ nic_pause.rx_pause = nic_dev->nic_pause.rx_pause;
+ nic_pause.tx_pause = nic_dev->nic_pause.tx_pause;
+ }
+
+ fc_conf->autoneg = nic_pause.auto_neg;
+
+ if (nic_pause.tx_pause && nic_pause.rx_pause)
+ fc_conf->mode = RTE_FC_FULL;
+ else if (nic_pause.tx_pause)
+ fc_conf->mode = RTE_FC_TX_PAUSE;
+ else if (nic_pause.rx_pause)
+ fc_conf->mode = RTE_FC_RX_PAUSE;
+ else
+ fc_conf->mode = RTE_FC_NONE;
+
+ (void)spnic_mutex_unlock(&nic_dev->pause_mutuex);
+ return 0;
+}
+
+static int spnic_dev_flow_ctrl_set(struct rte_eth_dev *dev,
+ struct rte_eth_fc_conf *fc_conf)
+{
+ struct spnic_nic_dev *nic_dev = SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ struct nic_pause_config nic_pause;
+ int err;
+
+ err = spnic_mutex_lock(&nic_dev->pause_mutuex);
+ if (err)
+ return err;
+
+ memset(&nic_pause, 0, sizeof(nic_pause));
+ if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
+ (fc_conf->mode & RTE_FC_TX_PAUSE))
+ nic_pause.tx_pause = true;
+
+ if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
+ (fc_conf->mode & RTE_FC_RX_PAUSE))
+ nic_pause.rx_pause = true;
+
+ err = spnic_set_pause_info(nic_dev->hwdev, nic_pause);
+ if (err) {
+ (void)spnic_mutex_unlock(&nic_dev->pause_mutuex);
+ return err;
+ }
+
+ nic_dev->pause_set = true;
+ nic_dev->nic_pause.rx_pause = nic_pause.rx_pause;
+ nic_dev->nic_pause.tx_pause = nic_pause.tx_pause;
+
+ PMD_DRV_LOG(INFO, "Just support set tx or rx pause info, tx: %s, rx: %s\n",
+ nic_pause.tx_pause ? "on" : "off",
+ nic_pause.rx_pause ? "on" : "off");
+
+ (void)spnic_mutex_unlock(&nic_dev->pause_mutuex);
+ return 0;
+}
/**
* Update the RSS hash key and RSS hash type.
@@ -1984,6 +2059,8 @@ static const struct eth_dev_ops spnic_pmd_ops = {
.allmulticast_disable = spnic_dev_allmulticast_disable,
.promiscuous_enable = spnic_dev_promiscuous_enable,
.promiscuous_disable = spnic_dev_promiscuous_disable,
+ .flow_ctrl_get = spnic_dev_flow_ctrl_get,
+ .flow_ctrl_set = spnic_dev_flow_ctrl_set,
.rss_hash_update = spnic_rss_hash_update,
.rss_hash_conf_get = spnic_rss_conf_get,
.reta_update = spnic_rss_reta_update,
diff --git a/drivers/net/spnic/spnic_ethdev.h b/drivers/net/spnic/spnic_ethdev.h
index 2b59886942..be429945ac 100644
--- a/drivers/net/spnic/spnic_ethdev.h
+++ b/drivers/net/spnic/spnic_ethdev.h
@@ -76,6 +76,7 @@ struct spnic_nic_dev {
bool pause_set;
pthread_mutex_t pause_mutuex;
+ struct nic_pause_config nic_pause;
struct rte_ether_addr default_addr;
struct rte_ether_addr *mc_list;
--
2.27.0
^ permalink raw reply [flat|nested] 32+ messages in thread
* [PATCH v1 21/25] net/spnic: support getting Tx/Rx queues info
2021-12-18 2:51 [PATCH v1 00/25] Net/SPNIC: support SPNIC into DPDK 22.03 Yanling Song
` (19 preceding siblings ...)
2021-12-18 2:51 ` [PATCH v1 20/25] net/spnic: support flow control Yanling Song
@ 2021-12-18 2:51 ` Yanling Song
2021-12-18 2:51 ` [PATCH v1 22/25] net/spnic: net/spnic: support xstats statistics Yanling Song
` (3 subsequent siblings)
24 siblings, 0 replies; 32+ messages in thread
From: Yanling Song @ 2021-12-18 2:51 UTC (permalink / raw)
To: dev; +Cc: songyl, yanling.song, yanggan, ferruh.yigit
This patch implements rxq_info_get() and txq_info_get() to support
getting the queue depth and mbuf pool info of a specified Tx/Rx queue.
Signed-off-by: Yanling Song <songyl@ramaxel.com>
---
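A minimal application-side sketch of the matching ethdev calls
(illustration only, not part of this patch):

	#include <stdio.h>
	#include <rte_ethdev.h>

	static void spnic_example_queue_info(uint16_t port_id, uint16_t qid)
	{
		struct rte_eth_rxq_info rxq_info;
		struct rte_eth_txq_info txq_info;

		/* Calls .rxq_info_get: queue depth and mbuf pool */
		if (rte_eth_rx_queue_info_get(port_id, qid, &rxq_info) == 0)
			printf("rxq %u: %u descriptors, pool %s\n", qid,
			       rxq_info.nb_desc, rxq_info.mp->name);

		/* Calls .txq_info_get: queue depth only */
		if (rte_eth_tx_queue_info_get(port_id, qid, &txq_info) == 0)
			printf("txq %u: %u descriptors\n", qid,
			       txq_info.nb_desc);
	}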
drivers/net/spnic/spnic_ethdev.c | 21 +++++++++++++++++++++
1 file changed, 21 insertions(+)
diff --git a/drivers/net/spnic/spnic_ethdev.c b/drivers/net/spnic/spnic_ethdev.c
index dc7fcae10f..2cd68e0178 100644
--- a/drivers/net/spnic/spnic_ethdev.c
+++ b/drivers/net/spnic/spnic_ethdev.c
@@ -1846,6 +1846,23 @@ static int spnic_rss_reta_update(struct rte_eth_dev *dev,
return err;
}
+static void spnic_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+ struct rte_eth_rxq_info *rxq_info)
+{
+ struct spnic_rxq *rxq = dev->data->rx_queues[queue_id];
+
+ rxq_info->mp = rxq->mb_pool;
+ rxq_info->nb_desc = rxq->q_depth;
+}
+
+static void spnic_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+ struct rte_eth_txq_info *txq_qinfo)
+{
+ struct spnic_txq *txq = dev->data->tx_queues[queue_id];
+
+ txq_qinfo->nb_desc = txq->q_depth;
+}
+
/**
* Update MAC address
*
@@ -2065,6 +2082,8 @@ static const struct eth_dev_ops spnic_pmd_ops = {
.rss_hash_conf_get = spnic_rss_conf_get,
.reta_update = spnic_rss_reta_update,
.reta_query = spnic_rss_reta_query,
+ .rxq_info_get = spnic_rxq_info_get,
+ .txq_info_get = spnic_txq_info_get,
.mac_addr_set = spnic_set_mac_addr,
.mac_addr_remove = spnic_mac_addr_remove,
.mac_addr_add = spnic_mac_addr_add,
@@ -2092,6 +2111,8 @@ static const struct eth_dev_ops spnic_pmd_vf_ops = {
.rss_hash_conf_get = spnic_rss_conf_get,
.reta_update = spnic_rss_reta_update,
.reta_query = spnic_rss_reta_query,
+ .rxq_info_get = spnic_rxq_info_get,
+ .txq_info_get = spnic_txq_info_get,
.mac_addr_set = spnic_set_mac_addr,
.mac_addr_remove = spnic_mac_addr_remove,
.mac_addr_add = spnic_mac_addr_add,
--
2.27.0
^ permalink raw reply [flat|nested] 32+ messages in thread
* [PATCH v1 22/25] net/spnic: net/spnic: support xstats statistics
2021-12-18 2:51 [PATCH v1 00/25] Net/SPNIC: support SPNIC into DPDK 22.03 Yanling Song
` (20 preceding siblings ...)
2021-12-18 2:51 ` [PATCH v1 21/25] net/spnic: support getting Tx/Rx queues info Yanling Song
@ 2021-12-18 2:51 ` Yanling Song
2021-12-18 2:51 ` [PATCH v1 23/25] net/spnic: support VFIO interrupt Yanling Song
` (2 subsequent siblings)
24 siblings, 0 replies; 32+ messages in thread
From: Yanling Song @ 2021-12-18 2:51 UTC (permalink / raw)
To: dev; +Cc: songyl, yanling.song, yanggan, ferruh.yigit
This commit implements DFX statistics for the physical port, function,
Rx queues and Tx queues, including MAC statistics,
unicast/multicast/broadcast packet counters, rx_nombuf, tx_busy and so on.
Signed-off-by: Yanling Song <songyl@ramaxel.com>
---
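A minimal application-side sketch dumping the new counters through the
generic xstats API (illustration only, not part of this patch):

	#include <stdio.h>
	#include <stdlib.h>
	#include <inttypes.h>
	#include <rte_ethdev.h>

	static void spnic_example_xstats(uint16_t port_id)
	{
		struct rte_eth_xstat_name *names;
		struct rte_eth_xstat *xstats;
		int n, i;

		/* First call with NULL only returns the number of counters */
		n = rte_eth_xstats_get_names(port_id, NULL, 0);
		if (n <= 0)
			return;

		names = calloc(n, sizeof(*names));
		xstats = calloc(n, sizeof(*xstats));
		if (names != NULL && xstats != NULL &&
		    rte_eth_xstats_get_names(port_id, names, n) == n &&
		    rte_eth_xstats_get(port_id, xstats, n) == n) {
			for (i = 0; i < n; i++)
				printf("%s: %" PRIu64 "\n",
				       names[i].name, xstats[i].value);
		}

		free(names);
		free(xstats);
	}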
drivers/net/spnic/base/spnic_nic_cfg.c | 118 ++++++
drivers/net/spnic/base/spnic_nic_cfg.h | 206 +++++++++++
drivers/net/spnic/spnic_ethdev.c | 474 +++++++++++++++++++++++++
3 files changed, 798 insertions(+)
diff --git a/drivers/net/spnic/base/spnic_nic_cfg.c b/drivers/net/spnic/base/spnic_nic_cfg.c
index 24336a2096..ce77b306db 100644
--- a/drivers/net/spnic/base/spnic_nic_cfg.c
+++ b/drivers/net/spnic/base/spnic_nic_cfg.c
@@ -493,6 +493,124 @@ int spnic_get_pause_info(void *hwdev, struct nic_pause_config *nic_pause)
return spnic_cfg_hw_pause(hwdev, SPNIC_CMD_OP_GET, nic_pause);
}
+int spnic_get_vport_stats(void *hwdev, struct spnic_vport_stats *stats)
+{
+ struct spnic_port_stats_info stats_info;
+ struct spnic_cmd_vport_stats vport_stats;
+ u16 out_size = sizeof(vport_stats);
+ int err;
+
+ if (!hwdev || !stats)
+ return -EINVAL;
+
+ memset(&stats_info, 0, sizeof(stats_info));
+ memset(&vport_stats, 0, sizeof(vport_stats));
+
+ stats_info.func_id = spnic_global_func_id(hwdev);
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, SPNIC_CMD_GET_VPORT_STAT,
+ &stats_info, sizeof(stats_info),
+ &vport_stats, &out_size);
+ if (err || !out_size || vport_stats.msg_head.status) {
+ PMD_DRV_LOG(ERR, "Get function stats failed, err: %d, status: 0x%x, out size: 0x%x",
+ err, vport_stats.msg_head.status, out_size);
+ return -EIO;
+ }
+
+ memcpy(stats, &vport_stats.stats, sizeof(*stats));
+
+ return 0;
+}
+
+int spnic_get_phy_port_stats(void *hwdev, struct mag_phy_port_stats *stats)
+{
+ struct mag_cmd_get_port_stat *port_stats = NULL;
+ struct mag_cmd_port_stats_info stats_info;
+ u16 out_size = sizeof(*port_stats);
+ int err;
+
+ port_stats = rte_zmalloc("port_stats", sizeof(*port_stats), 0);
+ if (!port_stats)
+ return -ENOMEM;
+
+ memset(&stats_info, 0, sizeof(stats_info));
+ stats_info.port_id = spnic_physical_port_id(hwdev);
+
+ err = mag_msg_to_mgmt_sync(hwdev, MAG_CMD_GET_PORT_STAT,
+ &stats_info, sizeof(stats_info),
+ port_stats, &out_size);
+ if (err || !out_size || port_stats->head.status) {
+ PMD_DRV_LOG(ERR,
+ "Failed to get port statistics, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, port_stats->head.status, out_size);
+ err = -EIO;
+ goto out;
+ }
+
+ memcpy(stats, &port_stats->counter, sizeof(*stats));
+
+out:
+ rte_free(port_stats);
+
+ return err;
+}
+
+int spnic_clear_vport_stats(void *hwdev)
+{
+ struct spnic_cmd_clear_vport_stats clear_vport_stats;
+ u16 out_size = sizeof(clear_vport_stats);
+ int err;
+
+ if (!hwdev) {
+ PMD_DRV_LOG(ERR, "Hwdev is NULL");
+ return -EINVAL;
+ }
+
+ memset(&clear_vport_stats, 0, sizeof(clear_vport_stats));
+ clear_vport_stats.func_id = spnic_global_func_id(hwdev);
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, SPNIC_CMD_CLEAN_VPORT_STAT,
+ &clear_vport_stats,
+ sizeof(clear_vport_stats),
+ &clear_vport_stats, &out_size);
+ if (err || !out_size || clear_vport_stats.msg_head.status) {
+ PMD_DRV_LOG(ERR, "Clear vport stats failed, err: %d, status: 0x%x, out size: 0x%x",
+ err, clear_vport_stats.msg_head.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int spnic_clear_phy_port_stats(void *hwdev)
+{
+ struct mag_cmd_clr_port_stat *port_stats = NULL;
+ u16 out_size = sizeof(*port_stats);
+ int err;
+
+ port_stats = rte_zmalloc("port_stats", sizeof(*port_stats), 0);
+ if (!port_stats)
+ return -ENOMEM;
+
+ port_stats->port_id = spnic_physical_port_id(hwdev);
+
+ err = mag_msg_to_mgmt_sync(hwdev, MAG_CMD_GET_PORT_STAT,
+ port_stats, sizeof(*port_stats),
+ port_stats, &out_size);
+ if (err || !out_size || port_stats->head.status) {
+ PMD_DRV_LOG(ERR,
+ "Failed to get port statistics, err: %d, status: 0x%x, out size: 0x%x\n",
+ err, port_stats->head.status, out_size);
+ err = -EIO;
+ goto out;
+ }
+
+out:
+ rte_free(port_stats);
+
+ return err;
+}
+
static int spnic_set_function_table(void *hwdev, u32 cfg_bitmap,
struct spnic_func_tbl_cfg *cfg)
{
diff --git a/drivers/net/spnic/base/spnic_nic_cfg.h b/drivers/net/spnic/base/spnic_nic_cfg.h
index 2cdaada2ea..e5e4ffea4b 100644
--- a/drivers/net/spnic/base/spnic_nic_cfg.h
+++ b/drivers/net/spnic/base/spnic_nic_cfg.h
@@ -259,6 +259,160 @@ struct spnic_cmd_clear_qp_resource {
u16 rsvd1;
};
+struct spnic_port_stats_info {
+ struct mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd1;
+};
+
+struct spnic_vport_stats {
+ u64 tx_unicast_pkts_vport;
+ u64 tx_unicast_bytes_vport;
+ u64 tx_multicast_pkts_vport;
+ u64 tx_multicast_bytes_vport;
+ u64 tx_broadcast_pkts_vport;
+ u64 tx_broadcast_bytes_vport;
+
+ u64 rx_unicast_pkts_vport;
+ u64 rx_unicast_bytes_vport;
+ u64 rx_multicast_pkts_vport;
+ u64 rx_multicast_bytes_vport;
+ u64 rx_broadcast_pkts_vport;
+ u64 rx_broadcast_bytes_vport;
+
+ u64 tx_discard_vport;
+ u64 rx_discard_vport;
+ u64 tx_err_vport;
+ u64 rx_err_vport;
+};
+
+struct spnic_cmd_vport_stats {
+ struct mgmt_msg_head msg_head;
+
+ u32 stats_size;
+ u32 rsvd1;
+ struct spnic_vport_stats stats;
+ u64 rsvd2[6];
+};
+
+struct mag_phy_port_stats {
+ u64 mac_tx_fragment_pkt_num;
+ u64 mac_tx_undersize_pkt_num;
+ u64 mac_tx_undermin_pkt_num;
+ u64 mac_tx_64_oct_pkt_num;
+ u64 mac_tx_65_127_oct_pkt_num;
+ u64 mac_tx_128_255_oct_pkt_num;
+ u64 mac_tx_256_511_oct_pkt_num;
+ u64 mac_tx_512_1023_oct_pkt_num;
+ u64 mac_tx_1024_1518_oct_pkt_num;
+ u64 mac_tx_1519_2047_oct_pkt_num;
+ u64 mac_tx_2048_4095_oct_pkt_num;
+ u64 mac_tx_4096_8191_oct_pkt_num;
+ u64 mac_tx_8192_9216_oct_pkt_num;
+ u64 mac_tx_9217_12287_oct_pkt_num;
+ u64 mac_tx_12288_16383_oct_pkt_num;
+ u64 mac_tx_1519_max_bad_pkt_num;
+ u64 mac_tx_1519_max_good_pkt_num;
+ u64 mac_tx_oversize_pkt_num;
+ u64 mac_tx_jabber_pkt_num;
+ u64 mac_tx_bad_pkt_num;
+ u64 mac_tx_bad_oct_num;
+ u64 mac_tx_good_pkt_num;
+ u64 mac_tx_good_oct_num;
+ u64 mac_tx_total_pkt_num;
+ u64 mac_tx_total_oct_num;
+ u64 mac_tx_uni_pkt_num;
+ u64 mac_tx_multi_pkt_num;
+ u64 mac_tx_broad_pkt_num;
+ u64 mac_tx_pause_num;
+ u64 mac_tx_pfc_pkt_num;
+ u64 mac_tx_pfc_pri0_pkt_num;
+ u64 mac_tx_pfc_pri1_pkt_num;
+ u64 mac_tx_pfc_pri2_pkt_num;
+ u64 mac_tx_pfc_pri3_pkt_num;
+ u64 mac_tx_pfc_pri4_pkt_num;
+ u64 mac_tx_pfc_pri5_pkt_num;
+ u64 mac_tx_pfc_pri6_pkt_num;
+ u64 mac_tx_pfc_pri7_pkt_num;
+ u64 mac_tx_control_pkt_num;
+ u64 mac_tx_err_all_pkt_num;
+ u64 mac_tx_from_app_good_pkt_num;
+ u64 mac_tx_from_app_bad_pkt_num;
+
+ u64 mac_rx_fragment_pkt_num;
+ u64 mac_rx_undersize_pkt_num;
+ u64 mac_rx_undermin_pkt_num;
+ u64 mac_rx_64_oct_pkt_num;
+ u64 mac_rx_65_127_oct_pkt_num;
+ u64 mac_rx_128_255_oct_pkt_num;
+ u64 mac_rx_256_511_oct_pkt_num;
+ u64 mac_rx_512_1023_oct_pkt_num;
+ u64 mac_rx_1024_1518_oct_pkt_num;
+ u64 mac_rx_1519_2047_oct_pkt_num;
+ u64 mac_rx_2048_4095_oct_pkt_num;
+ u64 mac_rx_4096_8191_oct_pkt_num;
+ u64 mac_rx_8192_9216_oct_pkt_num;
+ u64 mac_rx_9217_12287_oct_pkt_num;
+ u64 mac_rx_12288_16383_oct_pkt_num;
+ u64 mac_rx_1519_max_bad_pkt_num;
+ u64 mac_rx_1519_max_good_pkt_num;
+ u64 mac_rx_oversize_pkt_num;
+ u64 mac_rx_jabber_pkt_num;
+ u64 mac_rx_bad_pkt_num;
+ u64 mac_rx_bad_oct_num;
+ u64 mac_rx_good_pkt_num;
+ u64 mac_rx_good_oct_num;
+ u64 mac_rx_total_pkt_num;
+ u64 mac_rx_total_oct_num;
+ u64 mac_rx_uni_pkt_num;
+ u64 mac_rx_multi_pkt_num;
+ u64 mac_rx_broad_pkt_num;
+ u64 mac_rx_pause_num;
+ u64 mac_rx_pfc_pkt_num;
+ u64 mac_rx_pfc_pri0_pkt_num;
+ u64 mac_rx_pfc_pri1_pkt_num;
+ u64 mac_rx_pfc_pri2_pkt_num;
+ u64 mac_rx_pfc_pri3_pkt_num;
+ u64 mac_rx_pfc_pri4_pkt_num;
+ u64 mac_rx_pfc_pri5_pkt_num;
+ u64 mac_rx_pfc_pri6_pkt_num;
+ u64 mac_rx_pfc_pri7_pkt_num;
+ u64 mac_rx_control_pkt_num;
+ u64 mac_rx_sym_err_pkt_num;
+ u64 mac_rx_fcs_err_pkt_num;
+ u64 mac_rx_send_app_good_pkt_num;
+ u64 mac_rx_send_app_bad_pkt_num;
+ u64 mac_rx_unfilter_pkt_num;
+};
+
+struct mag_cmd_port_stats_info {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 rsvd0[3];
+};
+
+struct mag_cmd_get_port_stat {
+ struct mgmt_msg_head head;
+
+ struct mag_phy_port_stats counter;
+ u64 rsvd1[15];
+};
+
+struct mag_cmd_clr_port_stat {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 rsvd0[3];
+};
+
+struct spnic_cmd_clear_vport_stats {
+ struct mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd;
+};
enum spnic_func_tbl_cfg_bitmap {
FUNC_CFG_INIT,
@@ -586,6 +740,58 @@ int spnic_set_pause_info(void *hwdev, struct nic_pause_config nic_pause);
*/
int spnic_get_pause_info(void *hwdev, struct nic_pause_config *nic_pause);
+/**
+ * Get function stats
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev
+ * @param[out] stats
+ * Function stats
+ *
+ * @retval zero : Success
+ * @retval non-zero : Failure
+ */
+int spnic_get_vport_stats(void *hwdev, struct spnic_vport_stats *stats);
+
+/**
+ * Get port stats
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev
+ * @param[out] stats
+ * Port stats
+ *
+ * @retval zero : Success
+ * @retval non-zero : Failure
+ */
+int spnic_get_phy_port_stats(void *hwdev, struct mag_phy_port_stats *stats);
+
+/**
+ * Clear function stats
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev
+ * @param[out] stats
+ * Function stats
+ *
+ * @retval zero : Success
+ * @retval non-zero : Failure
+ */
+int spnic_clear_vport_stats(void *hwdev);
+
+/**
+ * Clear port stats
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev
+ * @param[out] stats
+ * Port stats
+ *
+ * @retval zero : Success
+ * @retval non-zero : Failure
+ */
+int spnic_clear_phy_port_stats(void *hwdev);
+
/**
* Init nic hwdev
*
diff --git a/drivers/net/spnic/spnic_ethdev.c b/drivers/net/spnic/spnic_ethdev.c
index 2cd68e0178..4de86fd08a 100644
--- a/drivers/net/spnic/spnic_ethdev.c
+++ b/drivers/net/spnic/spnic_ethdev.c
@@ -66,6 +66,171 @@ enum spnic_rx_mod {
#define SPNIC_DEFAULT_RX_MODE (SPNIC_RX_MODE_UC | SPNIC_RX_MODE_MC | \
SPNIC_RX_MODE_BC)
+struct spnic_xstats_name_off {
+ char name[RTE_ETH_XSTATS_NAME_SIZE];
+ u32 offset;
+};
+
+#define SPNIC_FUNC_STAT(_stat_item) { \
+ .name = #_stat_item, \
+ .offset = offsetof(struct spnic_vport_stats, _stat_item) \
+}
+
+#define SPNIC_PORT_STAT(_stat_item) { \
+ .name = #_stat_item, \
+ .offset = offsetof(struct mag_phy_port_stats, _stat_item) \
+}
+
+static const struct spnic_xstats_name_off spnic_vport_stats_strings[] = {
+ SPNIC_FUNC_STAT(tx_unicast_pkts_vport),
+ SPNIC_FUNC_STAT(tx_unicast_bytes_vport),
+ SPNIC_FUNC_STAT(tx_multicast_pkts_vport),
+ SPNIC_FUNC_STAT(tx_multicast_bytes_vport),
+ SPNIC_FUNC_STAT(tx_broadcast_pkts_vport),
+ SPNIC_FUNC_STAT(tx_broadcast_bytes_vport),
+
+ SPNIC_FUNC_STAT(rx_unicast_pkts_vport),
+ SPNIC_FUNC_STAT(rx_unicast_bytes_vport),
+ SPNIC_FUNC_STAT(rx_multicast_pkts_vport),
+ SPNIC_FUNC_STAT(rx_multicast_bytes_vport),
+ SPNIC_FUNC_STAT(rx_broadcast_pkts_vport),
+ SPNIC_FUNC_STAT(rx_broadcast_bytes_vport),
+
+ SPNIC_FUNC_STAT(tx_discard_vport),
+ SPNIC_FUNC_STAT(rx_discard_vport),
+ SPNIC_FUNC_STAT(tx_err_vport),
+ SPNIC_FUNC_STAT(rx_err_vport),
+};
+
+#define SPNIC_VPORT_XSTATS_NUM (sizeof(spnic_vport_stats_strings) / \
+ sizeof(spnic_vport_stats_strings[0]))
+
+static const struct spnic_xstats_name_off spnic_phyport_stats_strings[] = {
+ SPNIC_PORT_STAT(mac_tx_fragment_pkt_num),
+ SPNIC_PORT_STAT(mac_tx_undersize_pkt_num),
+ SPNIC_PORT_STAT(mac_tx_undermin_pkt_num),
+ SPNIC_PORT_STAT(mac_tx_64_oct_pkt_num),
+ SPNIC_PORT_STAT(mac_tx_65_127_oct_pkt_num),
+ SPNIC_PORT_STAT(mac_tx_128_255_oct_pkt_num),
+ SPNIC_PORT_STAT(mac_tx_256_511_oct_pkt_num),
+ SPNIC_PORT_STAT(mac_tx_512_1023_oct_pkt_num),
+ SPNIC_PORT_STAT(mac_tx_1024_1518_oct_pkt_num),
+ SPNIC_PORT_STAT(mac_tx_1519_2047_oct_pkt_num),
+ SPNIC_PORT_STAT(mac_tx_2048_4095_oct_pkt_num),
+ SPNIC_PORT_STAT(mac_tx_4096_8191_oct_pkt_num),
+ SPNIC_PORT_STAT(mac_tx_8192_9216_oct_pkt_num),
+ SPNIC_PORT_STAT(mac_tx_9217_12287_oct_pkt_num),
+ SPNIC_PORT_STAT(mac_tx_12288_16383_oct_pkt_num),
+ SPNIC_PORT_STAT(mac_tx_1519_max_bad_pkt_num),
+ SPNIC_PORT_STAT(mac_tx_1519_max_good_pkt_num),
+ SPNIC_PORT_STAT(mac_tx_oversize_pkt_num),
+ SPNIC_PORT_STAT(mac_tx_jabber_pkt_num),
+ SPNIC_PORT_STAT(mac_tx_bad_pkt_num),
+ SPNIC_PORT_STAT(mac_tx_bad_oct_num),
+ SPNIC_PORT_STAT(mac_tx_good_pkt_num),
+ SPNIC_PORT_STAT(mac_tx_good_oct_num),
+ SPNIC_PORT_STAT(mac_tx_total_pkt_num),
+ SPNIC_PORT_STAT(mac_tx_total_oct_num),
+ SPNIC_PORT_STAT(mac_tx_uni_pkt_num),
+ SPNIC_PORT_STAT(mac_tx_multi_pkt_num),
+ SPNIC_PORT_STAT(mac_tx_broad_pkt_num),
+ SPNIC_PORT_STAT(mac_tx_pause_num),
+ SPNIC_PORT_STAT(mac_tx_pfc_pkt_num),
+ SPNIC_PORT_STAT(mac_tx_pfc_pri0_pkt_num),
+ SPNIC_PORT_STAT(mac_tx_pfc_pri1_pkt_num),
+ SPNIC_PORT_STAT(mac_tx_pfc_pri2_pkt_num),
+ SPNIC_PORT_STAT(mac_tx_pfc_pri3_pkt_num),
+ SPNIC_PORT_STAT(mac_tx_pfc_pri4_pkt_num),
+ SPNIC_PORT_STAT(mac_tx_pfc_pri5_pkt_num),
+ SPNIC_PORT_STAT(mac_tx_pfc_pri6_pkt_num),
+ SPNIC_PORT_STAT(mac_tx_pfc_pri7_pkt_num),
+ SPNIC_PORT_STAT(mac_tx_control_pkt_num),
+ SPNIC_PORT_STAT(mac_tx_err_all_pkt_num),
+ SPNIC_PORT_STAT(mac_tx_from_app_good_pkt_num),
+ SPNIC_PORT_STAT(mac_tx_from_app_bad_pkt_num),
+
+ SPNIC_PORT_STAT(mac_rx_fragment_pkt_num),
+ SPNIC_PORT_STAT(mac_rx_undersize_pkt_num),
+ SPNIC_PORT_STAT(mac_rx_undermin_pkt_num),
+ SPNIC_PORT_STAT(mac_rx_64_oct_pkt_num),
+ SPNIC_PORT_STAT(mac_rx_65_127_oct_pkt_num),
+ SPNIC_PORT_STAT(mac_rx_128_255_oct_pkt_num),
+ SPNIC_PORT_STAT(mac_rx_256_511_oct_pkt_num),
+ SPNIC_PORT_STAT(mac_rx_512_1023_oct_pkt_num),
+ SPNIC_PORT_STAT(mac_rx_1024_1518_oct_pkt_num),
+ SPNIC_PORT_STAT(mac_rx_1519_2047_oct_pkt_num),
+ SPNIC_PORT_STAT(mac_rx_2048_4095_oct_pkt_num),
+ SPNIC_PORT_STAT(mac_rx_4096_8191_oct_pkt_num),
+ SPNIC_PORT_STAT(mac_rx_8192_9216_oct_pkt_num),
+ SPNIC_PORT_STAT(mac_rx_9217_12287_oct_pkt_num),
+ SPNIC_PORT_STAT(mac_rx_12288_16383_oct_pkt_num),
+ SPNIC_PORT_STAT(mac_rx_1519_max_bad_pkt_num),
+ SPNIC_PORT_STAT(mac_rx_1519_max_good_pkt_num),
+ SPNIC_PORT_STAT(mac_rx_oversize_pkt_num),
+ SPNIC_PORT_STAT(mac_rx_jabber_pkt_num),
+ SPNIC_PORT_STAT(mac_rx_bad_pkt_num),
+ SPNIC_PORT_STAT(mac_rx_bad_oct_num),
+ SPNIC_PORT_STAT(mac_rx_good_pkt_num),
+ SPNIC_PORT_STAT(mac_rx_good_oct_num),
+ SPNIC_PORT_STAT(mac_rx_total_pkt_num),
+ SPNIC_PORT_STAT(mac_rx_total_oct_num),
+ SPNIC_PORT_STAT(mac_rx_uni_pkt_num),
+ SPNIC_PORT_STAT(mac_rx_multi_pkt_num),
+ SPNIC_PORT_STAT(mac_rx_broad_pkt_num),
+ SPNIC_PORT_STAT(mac_rx_pause_num),
+ SPNIC_PORT_STAT(mac_rx_pfc_pkt_num),
+ SPNIC_PORT_STAT(mac_rx_pfc_pri0_pkt_num),
+ SPNIC_PORT_STAT(mac_rx_pfc_pri1_pkt_num),
+ SPNIC_PORT_STAT(mac_rx_pfc_pri2_pkt_num),
+ SPNIC_PORT_STAT(mac_rx_pfc_pri3_pkt_num),
+ SPNIC_PORT_STAT(mac_rx_pfc_pri4_pkt_num),
+ SPNIC_PORT_STAT(mac_rx_pfc_pri5_pkt_num),
+ SPNIC_PORT_STAT(mac_rx_pfc_pri6_pkt_num),
+ SPNIC_PORT_STAT(mac_rx_pfc_pri7_pkt_num),
+ SPNIC_PORT_STAT(mac_rx_control_pkt_num),
+ SPNIC_PORT_STAT(mac_rx_sym_err_pkt_num),
+ SPNIC_PORT_STAT(mac_rx_fcs_err_pkt_num),
+ SPNIC_PORT_STAT(mac_rx_send_app_good_pkt_num),
+ SPNIC_PORT_STAT(mac_rx_send_app_bad_pkt_num),
+ SPNIC_PORT_STAT(mac_rx_unfilter_pkt_num)
+};
+
+#define SPNIC_PHYPORT_XSTATS_NUM (sizeof(spnic_phyport_stats_strings) / \
+ sizeof(spnic_phyport_stats_strings[0]))
+
+static const struct spnic_xstats_name_off spnic_rxq_stats_strings[] = {
+ {"rx_nombuf", offsetof(struct spnic_rxq_stats, rx_nombuf)},
+ {"burst_pkt", offsetof(struct spnic_rxq_stats, burst_pkts)},
+};
+
+#define SPNIC_RXQ_XSTATS_NUM (sizeof(spnic_rxq_stats_strings) / \
+ sizeof(spnic_rxq_stats_strings[0]))
+
+static const struct spnic_xstats_name_off spnic_txq_stats_strings[] = {
+ {"tx_busy", offsetof(struct spnic_txq_stats, tx_busy)},
+ {"offload_errors", offsetof(struct spnic_txq_stats, off_errs)},
+ {"burst_pkts", offsetof(struct spnic_txq_stats, burst_pkts)},
+ {"sge_len0", offsetof(struct spnic_txq_stats, sge_len0)},
+ {"mbuf_null", offsetof(struct spnic_txq_stats, mbuf_null)},
+};
+
+#define SPNIC_TXQ_XSTATS_NUM (sizeof(spnic_txq_stats_strings) / \
+ sizeof(spnic_txq_stats_strings[0]))
+
+static int spnic_xstats_calc_num(struct spnic_nic_dev *nic_dev)
+{
+ if (SPNIC_IS_VF(nic_dev->hwdev)) {
+ return (SPNIC_VPORT_XSTATS_NUM +
+ SPNIC_RXQ_XSTATS_NUM * nic_dev->num_rqs +
+ SPNIC_TXQ_XSTATS_NUM * nic_dev->num_sqs);
+ } else {
+ return (SPNIC_VPORT_XSTATS_NUM +
+ SPNIC_PHYPORT_XSTATS_NUM +
+ SPNIC_RXQ_XSTATS_NUM * nic_dev->num_rqs +
+ SPNIC_TXQ_XSTATS_NUM * nic_dev->num_sqs);
+ }
+}
+
#define SPNIC_MAX_QUEUE_DEPTH 16384
#define SPNIC_MIN_QUEUE_DEPTH 128
#define SPNIC_TXD_ALIGN 1
@@ -1846,6 +2011,305 @@ static int spnic_rss_reta_update(struct rte_eth_dev *dev,
return err;
}
+/**
+ * Get device generic statistics.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[out] stats
+ * Stats structure output buffer.
+ *
+ * @retval zero: Success
+ * @retval non-zero: Failure
+ */
+static int spnic_dev_stats_get(struct rte_eth_dev *dev,
+ struct rte_eth_stats *stats)
+{
+ struct spnic_nic_dev *nic_dev = SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ struct spnic_vport_stats vport_stats;
+ struct spnic_rxq *rxq = NULL;
+ struct spnic_txq *txq = NULL;
+ int i, err, q_num;
+ u64 rx_discards_pmd = 0;
+
+ err = spnic_get_vport_stats(nic_dev->hwdev, &vport_stats);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Get vport stats from fw failed, nic_dev: %s",
+ nic_dev->dev_name);
+ return err;
+ }
+
+ dev->data->rx_mbuf_alloc_failed = 0;
+
+ /* Rx queue stats */
+ q_num = (nic_dev->num_rqs < RTE_ETHDEV_QUEUE_STAT_CNTRS) ?
+ nic_dev->num_rqs : RTE_ETHDEV_QUEUE_STAT_CNTRS;
+ for (i = 0; i < q_num; i++) {
+ rxq = nic_dev->rxqs[i];
+ stats->q_ipackets[i] = rxq->rxq_stats.packets;
+ stats->q_ibytes[i] = rxq->rxq_stats.bytes;
+ stats->q_errors[i] = rxq->rxq_stats.dropped;
+
+ stats->ierrors += rxq->rxq_stats.errors;
+ rx_discards_pmd += rxq->rxq_stats.dropped;
+ dev->data->rx_mbuf_alloc_failed += rxq->rxq_stats.rx_nombuf;
+ }
+
+ /* Tx queue stats */
+ q_num = (nic_dev->num_sqs < RTE_ETHDEV_QUEUE_STAT_CNTRS) ?
+ nic_dev->num_sqs : RTE_ETHDEV_QUEUE_STAT_CNTRS;
+ for (i = 0; i < q_num; i++) {
+ txq = nic_dev->txqs[i];
+ stats->q_opackets[i] = txq->txq_stats.packets;
+ stats->q_obytes[i] = txq->txq_stats.bytes;
+ stats->oerrors += (txq->txq_stats.tx_busy +
+ txq->txq_stats.off_errs);
+ }
+
+ /* Vport stats */
+ stats->oerrors += vport_stats.tx_discard_vport;
+
+ stats->imissed = vport_stats.rx_discard_vport + rx_discards_pmd;
+
+ stats->ipackets = (vport_stats.rx_unicast_pkts_vport +
+ vport_stats.rx_multicast_pkts_vport +
+ vport_stats.rx_broadcast_pkts_vport -
+ rx_discards_pmd);
+
+ stats->opackets = (vport_stats.tx_unicast_pkts_vport +
+ vport_stats.tx_multicast_pkts_vport +
+ vport_stats.tx_broadcast_pkts_vport);
+
+ stats->ibytes = (vport_stats.rx_unicast_bytes_vport +
+ vport_stats.rx_multicast_bytes_vport +
+ vport_stats.rx_broadcast_bytes_vport);
+
+ stats->obytes = (vport_stats.tx_unicast_bytes_vport +
+ vport_stats.tx_multicast_bytes_vport +
+ vport_stats.tx_broadcast_bytes_vport);
+ return 0;
+}
+
+/**
+ * Clear device generic statistics.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ *
+ * @retval zero: Success
+ * @retval non-zero: Failure
+ */
+static int spnic_dev_stats_reset(struct rte_eth_dev *dev)
+{
+ struct spnic_nic_dev *nic_dev = SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ struct spnic_rxq *rxq = NULL;
+ struct spnic_txq *txq = NULL;
+ int qid;
+ int err;
+
+ err = spnic_clear_vport_stats(nic_dev->hwdev);
+ if (err)
+ return err;
+
+ for (qid = 0; qid < nic_dev->num_rqs; qid++) {
+ rxq = nic_dev->rxqs[qid];
+ memset(&rxq->rxq_stats, 0, sizeof(struct spnic_rxq_stats));
+ }
+
+ for (qid = 0; qid < nic_dev->num_sqs; qid++) {
+ txq = nic_dev->txqs[qid];
+ memset(&txq->txq_stats, 0, sizeof(struct spnic_txq_stats));
+ }
+
+ return 0;
+}
+
+/**
+ * Get device extended statistics.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[out] xstats
+ * Pointer to rte extended stats table.
+ * @param[in] n
+ * The size of the stats table.
+ *
+ * @retval positive: Number of extended stats on success and stats is filled
+ * @retval negative: Failure
+ */
+static int spnic_dev_xstats_get(struct rte_eth_dev *dev,
+ struct rte_eth_xstat *xstats, unsigned int n)
+{
+ struct spnic_nic_dev *nic_dev;
+ struct mag_phy_port_stats port_stats;
+ struct spnic_vport_stats vport_stats;
+ struct spnic_rxq *rxq = NULL;
+ struct spnic_rxq_stats rxq_stats;
+ struct spnic_txq *txq = NULL;
+ struct spnic_txq_stats txq_stats;
+ u16 qid;
+ u32 i;
+ int err, count;
+
+ nic_dev = SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ count = spnic_xstats_calc_num(nic_dev);
+ if ((int)n < count)
+ return count;
+
+ count = 0;
+
+ /* Get stats from rxq stats structure */
+ for (qid = 0; qid < nic_dev->num_rqs; qid++) {
+ rxq = nic_dev->rxqs[qid];
+ memcpy(&rxq_stats, &rxq->rxq_stats, sizeof(rxq->rxq_stats));
+
+ for (i = 0; i < SPNIC_RXQ_XSTATS_NUM; i++) {
+ xstats[count].value =
+ *(uint64_t *)(((char *)&rxq_stats) +
+ spnic_rxq_stats_strings[i].offset);
+ xstats[count].id = count;
+ count++;
+ }
+ }
+
+ /* Get stats from txq stats structure */
+ for (qid = 0; qid < nic_dev->num_sqs; qid++) {
+ txq = nic_dev->txqs[qid];
+ memcpy(&txq_stats, &txq->txq_stats, sizeof(txq->txq_stats));
+
+ for (i = 0; i < SPNIC_TXQ_XSTATS_NUM; i++) {
+ xstats[count].value =
+ *(uint64_t *)(((char *)&txq_stats) +
+ spnic_txq_stats_strings[i].offset);
+ xstats[count].id = count;
+ count++;
+ }
+ }
+
+ /* Get stats from vport stats structure */
+ err = spnic_get_vport_stats(nic_dev->hwdev, &vport_stats);
+ if (err)
+ return err;
+
+ for (i = 0; i < SPNIC_VPORT_XSTATS_NUM; i++) {
+ xstats[count].value =
+ *(uint64_t *)(((char *)&vport_stats) +
+ spnic_vport_stats_strings[i].offset);
+ xstats[count].id = count;
+ count++;
+ }
+
+ if (SPNIC_IS_VF(nic_dev->hwdev))
+ return count;
+
+ /* Get stats from phy port stats structure */
+ err = spnic_get_phy_port_stats(nic_dev->hwdev, &port_stats);
+ if (err)
+ return err;
+
+ for (i = 0; i < SPNIC_PHYPORT_XSTATS_NUM; i++) {
+ xstats[count].value = *(uint64_t *)(((char *)&port_stats) +
+ spnic_phyport_stats_strings[i].offset);
+ xstats[count].id = count;
+ count++;
+ }
+
+ return count;
+}
+
+/**
+ * Clear device extended statistics.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ *
+ * @retval zero: Success
+ * @retval non-zero: Failure
+ */
+static int spnic_dev_xstats_reset(struct rte_eth_dev *dev)
+{
+ struct spnic_nic_dev *nic_dev = SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ int err;
+
+ err = spnic_dev_stats_reset(dev);
+ if (err)
+ return err;
+
+ if (spnic_func_type(nic_dev->hwdev) != TYPE_VF) {
+ err = spnic_clear_phy_port_stats(nic_dev->hwdev);
+ if (err)
+ return err;
+ }
+
+ return 0;
+}
+
+/**
+ * Retrieve names of extended device statistics
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[out] xstats_names
+ * Buffer to insert names into.
+ *
+ * @return
+ * Number of xstats names.
+ */
+static int spnic_dev_xstats_get_names(struct rte_eth_dev *dev,
+ struct rte_eth_xstat_name *xstats_names,
+ __rte_unused unsigned int limit)
+{
+ struct spnic_nic_dev *nic_dev = SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ int count = 0;
+ u16 i, q_num;
+
+ if (xstats_names == NULL)
+ return spnic_xstats_calc_num(nic_dev);
+
+ /* Get pmd rxq stats name */
+ for (q_num = 0; q_num < nic_dev->num_rqs; q_num++) {
+ for (i = 0; i < SPNIC_RXQ_XSTATS_NUM; i++) {
+ snprintf(xstats_names[count].name,
+ sizeof(xstats_names[count].name),
+ "rxq%d_%s_pmd", q_num,
+ spnic_rxq_stats_strings[i].name);
+ count++;
+ }
+ }
+
+ /* Get pmd txq stats name */
+ for (q_num = 0; q_num < nic_dev->num_sqs; q_num++) {
+ for (i = 0; i < SPNIC_TXQ_XSTATS_NUM; i++) {
+ snprintf(xstats_names[count].name,
+ sizeof(xstats_names[count].name),
+ "txq%d_%s_pmd", q_num,
+ spnic_txq_stats_strings[i].name);
+ count++;
+ }
+ }
+
+ /* Get vport stats name */
+ for (i = 0; i < SPNIC_VPORT_XSTATS_NUM; i++) {
+ snprintf(xstats_names[count].name,
+ sizeof(xstats_names[count].name),
+ "%s", spnic_vport_stats_strings[i].name);
+ count++;
+ }
+
+ if (SPNIC_IS_VF(nic_dev->hwdev))
+ return count;
+
+ /* Get phy port stats name */
+ for (i = 0; i < SPNIC_PHYPORT_XSTATS_NUM; i++) {
+ snprintf(xstats_names[count].name,
+ sizeof(xstats_names[count].name),
+ "%s", spnic_phyport_stats_strings[i].name);
+ count++;
+ }
+
+ return count;
+}
+
static void spnic_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
struct rte_eth_rxq_info *rxq_info)
{
@@ -2082,6 +2546,11 @@ static const struct eth_dev_ops spnic_pmd_ops = {
.rss_hash_conf_get = spnic_rss_conf_get,
.reta_update = spnic_rss_reta_update,
.reta_query = spnic_rss_reta_query,
+ .stats_get = spnic_dev_stats_get,
+ .stats_reset = spnic_dev_stats_reset,
+ .xstats_get = spnic_dev_xstats_get,
+ .xstats_reset = spnic_dev_xstats_reset,
+ .xstats_get_names = spnic_dev_xstats_get_names,
.rxq_info_get = spnic_rxq_info_get,
.txq_info_get = spnic_txq_info_get,
.mac_addr_set = spnic_set_mac_addr,
@@ -2111,6 +2580,11 @@ static const struct eth_dev_ops spnic_pmd_vf_ops = {
.rss_hash_conf_get = spnic_rss_conf_get,
.reta_update = spnic_rss_reta_update,
.reta_query = spnic_rss_reta_query,
+ .stats_get = spnic_dev_stats_get,
+ .stats_reset = spnic_dev_stats_reset,
+ .xstats_get = spnic_dev_xstats_get,
+ .xstats_reset = spnic_dev_xstats_reset,
+ .xstats_get_names = spnic_dev_xstats_get_names,
.rxq_info_get = spnic_rxq_info_get,
.txq_info_get = spnic_txq_info_get,
.mac_addr_set = spnic_set_mac_addr,
--
2.27.0
^ permalink raw reply [flat|nested] 32+ messages in thread
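For reference, a minimal sketch of how an application could read the extended statistics added in the xstats patch above through the generic ethdev xstats API; the port id, allocation and error handling here are illustrative and not part of the driver:

#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
#include <rte_ethdev.h>

static void dump_port_xstats(uint16_t port_id)
{
	struct rte_eth_xstat *xstats = NULL;
	struct rte_eth_xstat_name *names = NULL;
	int n, i;

	/* A call with a zero-sized table returns the required count */
	n = rte_eth_xstats_get(port_id, NULL, 0);
	if (n <= 0)
		return;

	xstats = calloc(n, sizeof(*xstats));
	names = calloc(n, sizeof(*names));
	if (xstats == NULL || names == NULL)
		goto out;

	if (rte_eth_xstats_get_names(port_id, names, n) != n ||
	    rte_eth_xstats_get(port_id, xstats, n) != n)
		goto out;

	/* Per-queue counters show up as rxq<N>_*_pmd / txq<N>_*_pmd names */
	for (i = 0; i < n; i++)
		printf("%s: %" PRIu64 "\n", names[i].name, xstats[i].value);
out:
	free(xstats);
	free(names);
}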
* [PATCH v1 23/25] net/spnic: support VFIO interrupt
2021-12-18 2:51 [PATCH v1 00/25] Net/SPNIC: support SPNIC into DPDK 22.03 Yanling Song
` (21 preceding siblings ...)
2021-12-18 2:51 ` [PATCH v1 22/25] net/spnic: net/spnic: support xstats statistics Yanling Song
@ 2021-12-18 2:51 ` Yanling Song
2021-12-18 2:51 ` [PATCH v1 24/25] net/spnic: support Tx/Rx queue start/stop Yanling Song
2021-12-18 2:51 ` [PATCH v1 25/25] net/spnic: add doc infrastructure Yanling Song
24 siblings, 0 replies; 32+ messages in thread
From: Yanling Song @ 2021-12-18 2:51 UTC (permalink / raw)
To: dev; +Cc: songyl, yanling.song, yanggan, ferruh.yigit
This commit supports VFIO interrupts for Rx queues and
asynchronous events, and implements rx_queue_intr_enable()
and rx_queue_intr_disable() to enable/disable the interrupt
of a specified Rx queue.
Signed-off-by: Yanling Song <songyl@ramaxel.com>
---
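For context, a minimal sketch of how an application might consume these Rx queue interrupts through the generic ethdev/EAL interrupt API; the port/queue ids and the epoll handling below are illustrative, not taken from this patch:

#include <rte_ethdev.h>
#include <rte_interrupts.h>

static int wait_for_rx(uint16_t port_id, uint16_t queue_id)
{
	struct rte_epoll_event ev;
	int n;

	/* Register the queue interrupt with the per-thread epoll instance */
	if (rte_eth_dev_rx_intr_ctl_q(port_id, queue_id, RTE_EPOLL_PER_THREAD,
				      RTE_INTR_EVENT_ADD, NULL) != 0)
		return -1;

	/* Arm the interrupt (handled by rx_queue_intr_enable in the PMD) */
	rte_eth_dev_rx_intr_enable(port_id, queue_id);

	/* Sleep until the NIC signals new packets on this queue */
	n = rte_epoll_wait(RTE_EPOLL_PER_THREAD, &ev, 1, -1);

	/* Disarm before going back to busy polling */
	rte_eth_dev_rx_intr_disable(port_id, queue_id);

	return n > 0 ? 0 : -1;
}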
drivers/net/spnic/base/spnic_eqs.c | 11 ++
drivers/net/spnic/spnic_ethdev.c | 218 ++++++++++++++++++++++++++++-
drivers/net/spnic/spnic_ethdev.h | 3 +
drivers/net/spnic/spnic_rx.c | 2 +
4 files changed, 233 insertions(+), 1 deletion(-)
diff --git a/drivers/net/spnic/base/spnic_eqs.c b/drivers/net/spnic/base/spnic_eqs.c
index ee52252ecc..513d0329ed 100644
--- a/drivers/net/spnic/base/spnic_eqs.c
+++ b/drivers/net/spnic/base/spnic_eqs.c
@@ -12,6 +12,7 @@
#include "spnic_eqs.h"
#include "spnic_mgmt.h"
#include "spnic_mbox.h"
+#include "spnic_hw_comm.h"
#include "spnic_nic_event.h"
#define AEQ_CTRL_0_INTR_IDX_SHIFT 0
@@ -648,3 +649,13 @@ int spnic_aeq_poll_msg(struct spnic_eq *eq, u32 timeout, void *param)
return err;
}
+
+void spnic_dev_handle_aeq_event(struct spnic_hwdev *hwdev, void *param)
+{
+ struct spnic_eq *aeq = &hwdev->aeqs->aeq[0];
+
+ /* Clear resend timer cnt register */
+ spnic_misx_intr_clear_resend_bit(hwdev, aeq->eq_irq.msix_entry_idx,
+ EQ_MSIX_RESEND_TIMER_CLEAR);
+ (void)spnic_aeq_poll_msg(aeq, 0, param);
+}
diff --git a/drivers/net/spnic/spnic_ethdev.c b/drivers/net/spnic/spnic_ethdev.c
index 4de86fd08a..e4db4afdfd 100644
--- a/drivers/net/spnic/spnic_ethdev.c
+++ b/drivers/net/spnic/spnic_ethdev.c
@@ -248,6 +248,28 @@ static const struct rte_eth_desc_lim spnic_tx_desc_lim = {
.nb_align = SPNIC_TXD_ALIGN,
};
+/**
+ * Interrupt handler triggered by NIC for handling specific event
+ *
+ * @param[in] param
+ * The address of parameter (struct rte_eth_dev *) registered before
+ */
+static void spnic_dev_interrupt_handler(void *param)
+{
+ struct rte_eth_dev *dev = param;
+ struct spnic_nic_dev *nic_dev = SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+
+ if (!rte_bit_relaxed_get32(SPNIC_DEV_INTR_EN, &nic_dev->dev_status)) {
+ PMD_DRV_LOG(WARNING,
+ "Intr is disabled, ignore intr event, dev_name: %s, port_id: %d",
+ nic_dev->dev_name, dev->data->port_id);
+ return;
+ }
+
+ /* Aeq0 msg handler */
+ spnic_dev_handle_aeq_event(nic_dev->hwdev, param);
+}
+
/**
* Ethernet device configuration.
*
@@ -971,6 +993,46 @@ static void spnic_deinit_mac_addr(struct rte_eth_dev *eth_dev)
spnic_delete_mc_addr_list(nic_dev);
}
+int spnic_dev_rx_queue_intr_enable(struct rte_eth_dev *dev,
+ uint16_t queue_id)
+{
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
+ struct spnic_nic_dev *nic_dev = SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ u16 msix_intr;
+
+ if (!rte_intr_dp_is_en(intr_handle))
+ return 0;
+
+ if (queue_id >= dev->data->nb_rx_queues)
+ return -EINVAL;
+
+ msix_intr = (u16)(queue_id + RTE_INTR_VEC_RXTX_OFFSET);
+ spnic_set_msix_state(nic_dev->hwdev, msix_intr, SPNIC_MSIX_ENABLE);
+
+ return 0;
+}
+
+int spnic_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
+{
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
+ struct spnic_nic_dev *nic_dev = SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ u16 msix_intr;
+
+ if (!rte_intr_dp_is_en(intr_handle))
+ return 0;
+
+ if (queue_id >= dev->data->nb_rx_queues)
+ return -EINVAL;
+
+ msix_intr = (u16)(queue_id + RTE_INTR_VEC_RXTX_OFFSET);
+ spnic_set_msix_state(nic_dev->hwdev, msix_intr, SPNIC_MSIX_DISABLE);
+ spnic_misx_intr_clear_resend_bit(nic_dev->hwdev, msix_intr, 1);
+
+ return 0;
+}
+
static int spnic_set_rxtx_configure(struct rte_eth_dev *dev)
{
struct spnic_nic_dev *nic_dev = SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
@@ -1104,6 +1166,108 @@ static void spnic_remove_all_vlanid(struct rte_eth_dev *dev)
}
}
+static void spnic_disable_interrupt(struct rte_eth_dev *dev)
+{
+ struct spnic_nic_dev *nic_dev = SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+
+ if (!rte_bit_relaxed_get32(SPNIC_DEV_INIT, &nic_dev->dev_status))
+ return;
+
+ /* disable rte interrupt */
+ rte_intr_disable(pci_dev->intr_handle);
+ rte_intr_callback_unregister(pci_dev->intr_handle,
+ spnic_dev_interrupt_handler, (void *)dev);
+}
+
+static void spnic_enable_interrupt(struct rte_eth_dev *dev)
+{
+ struct spnic_nic_dev *nic_dev = SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+
+ if (!rte_bit_relaxed_get32(SPNIC_DEV_INIT, &nic_dev->dev_status))
+ return;
+
+ /* enable rte interrupt */
+ rte_intr_enable(pci_dev->intr_handle);
+ rte_intr_callback_register(pci_dev->intr_handle,
+ spnic_dev_interrupt_handler, (void *)dev);
+}
+#define SPNIC_TXRX_MSIX_PENDING_LIMIT 2
+#define SPNIC_TXRX_MSIX_COALESC_TIMER 2
+#define SPNIC_TXRX_MSIX_RESEND_TIMER_CFG 7
+
+static int spnic_init_rxq_msix_attr(void *hwdev, u16 msix_index)
+{
+ struct interrupt_info info = { 0 };
+ int err;
+
+ info.lli_set = 0;
+ info.interrupt_coalesc_set = 1;
+ info.pending_limt = SPNIC_TXRX_MSIX_PENDING_LIMIT;
+ info.coalesc_timer_cfg = SPNIC_TXRX_MSIX_COALESC_TIMER;
+ info.resend_timer_cfg = SPNIC_TXRX_MSIX_RESEND_TIMER_CFG;
+
+ info.msix_index = msix_index;
+ err = spnic_set_interrupt_cfg(hwdev, info);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Set msix attr failed, msix_index %d\n",
+ msix_index);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+static void spnic_deinit_rxq_intr(struct rte_eth_dev *dev)
+{
+ struct rte_intr_handle *intr_handle = dev->intr_handle;
+
+ rte_intr_efd_disable(intr_handle);
+}
+
+static int spnic_init_rxq_intr(struct rte_eth_dev *dev)
+{
+ struct rte_intr_handle *intr_handle = NULL;
+ struct spnic_nic_dev *nic_dev = NULL;
+ struct spnic_rxq *rxq = NULL;
+ u32 nb_rx_queues, i;
+ int err;
+
+ intr_handle = dev->intr_handle;
+ nic_dev = SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ if (!dev->data->dev_conf.intr_conf.rxq)
+ return 0;
+
+ if (!rte_intr_cap_multiple(intr_handle)) {
+ PMD_DRV_LOG(ERR, "Rx queue interrupts require MSI-X interrupts"
+ " (vfio-pci driver)\n");
+ return -ENOTSUP;
+ }
+
+ nb_rx_queues = dev->data->nb_rx_queues;
+ err = rte_intr_efd_enable(intr_handle, nb_rx_queues);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Failed to enable event fds for Rx queue interrupts\n");
+ return err;
+ }
+
+ for (i = 0; i < nb_rx_queues; i++) {
+ rxq = dev->data->rx_queues[i];
+ rxq->dp_intr_en = 1;
+ rxq->msix_entry_idx = (u16)(i + RTE_INTR_VEC_RXTX_OFFSET);
+
+ err = spnic_init_rxq_msix_attr(nic_dev->hwdev,
+ rxq->msix_entry_idx);
+ if (err) {
+ spnic_deinit_rxq_intr(dev);
+ return err;
+ }
+ }
+
+ return 0;
+}
+
static int spnic_init_sw_rxtxqs(struct spnic_nic_dev *nic_dev)
{
u32 txq_size;
@@ -1165,6 +1329,14 @@ static int spnic_dev_start(struct rte_eth_dev *eth_dev)
nic_dev = SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(eth_dev);
+ spnic_disable_interrupt(eth_dev);
+ err = spnic_init_rxq_intr(eth_dev);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Init rxq intr fail, eth_dev:%s",
+ eth_dev->data->name);
+ goto init_rxq_intr_fail;
+ }
+
spnic_get_func_rx_buf_size(nic_dev);
err = spnic_init_function_table(nic_dev->hwdev, nic_dev->rx_buff_len);
if (err) {
@@ -1212,6 +1384,9 @@ static int spnic_dev_start(struct rte_eth_dev *eth_dev)
goto set_rxtx_config_fail;
}
+ /* enable dev interrupt */
+ spnic_enable_interrupt(eth_dev);
+
err = spnic_start_all_rqs(eth_dev);
if (err) {
PMD_DRV_LOG(ERR, "Set rx config failed, dev_name: %s",
@@ -1256,6 +1431,7 @@ static int spnic_dev_start(struct rte_eth_dev *eth_dev)
rxq = nic_dev->rxqs[i];
spnic_remove_rq_from_rx_queue_list(nic_dev, rxq->q_id);
spnic_free_rxq_mbufs(rxq);
+ spnic_dev_rx_queue_intr_disable(eth_dev, rxq->q_id);
eth_dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
eth_dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
}
@@ -1269,6 +1445,7 @@ static int spnic_dev_start(struct rte_eth_dev *eth_dev)
init_qp_fail:
get_feature_err:
init_func_tbl_fail:
+init_rxq_intr_fail:
return err;
}
@@ -1374,7 +1551,8 @@ static int spnic_dev_close(struct rte_eth_dev *eth_dev)
{
struct spnic_nic_dev *nic_dev =
SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(eth_dev);
- int qid;
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ int qid, ret;
if (rte_bit_relaxed_test_and_set32(SPNIC_DEV_CLOSE, &nic_dev->dev_status)) {
PMD_DRV_LOG(WARNING, "Device %s already closed",
@@ -1398,6 +1576,15 @@ static int spnic_dev_close(struct rte_eth_dev *eth_dev)
spnic_remove_all_vlanid(eth_dev);
rte_bit_relaxed_clear32(SPNIC_DEV_INTR_EN, &nic_dev->dev_status);
+ spnic_set_msix_state(nic_dev->hwdev, 0, SPNIC_MSIX_DISABLE);
+ ret = rte_intr_disable(pci_dev->intr_handle);
+ if (ret)
+ PMD_DRV_LOG(ERR, "Device %s disable intr failed: %d",
+ nic_dev->dev_name, ret);
+
+ (void)rte_intr_callback_unregister(pci_dev->intr_handle,
+ spnic_dev_interrupt_handler,
+ (void *)eth_dev);
/* Destroy rx mode mutex */
spnic_mutex_destroy(&nic_dev->rx_mode_mutex);
@@ -2530,6 +2717,8 @@ static const struct eth_dev_ops spnic_pmd_ops = {
.tx_queue_setup = spnic_tx_queue_setup,
.rx_queue_release = spnic_rx_queue_release,
.tx_queue_release = spnic_tx_queue_release,
+ .rx_queue_intr_enable = spnic_dev_rx_queue_intr_enable,
+ .rx_queue_intr_disable = spnic_dev_rx_queue_intr_disable,
.dev_start = spnic_dev_start,
.dev_stop = spnic_dev_stop,
.dev_close = spnic_dev_close,
@@ -2565,6 +2754,8 @@ static const struct eth_dev_ops spnic_pmd_vf_ops = {
.fw_version_get = spnic_fw_version_get,
.rx_queue_setup = spnic_rx_queue_setup,
.tx_queue_setup = spnic_tx_queue_setup,
+ .rx_queue_intr_enable = spnic_dev_rx_queue_intr_enable,
+ .rx_queue_intr_disable = spnic_dev_rx_queue_intr_disable,
.dev_start = spnic_dev_start,
.link_update = spnic_link_update,
.rx_queue_release = spnic_rx_queue_release,
@@ -2820,6 +3011,24 @@ static int spnic_func_init(struct rte_eth_dev *eth_dev)
goto init_mpool_fail;
}
+ /* Register callback func to eal lib */
+ err = rte_intr_callback_register(pci_dev->intr_handle,
+ spnic_dev_interrupt_handler,
+ (void *)eth_dev);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Register intr callback failed, dev_name: %s",
+ eth_dev->data->name);
+ goto reg_intr_cb_fail;
+ }
+
+ /* Enable uio/vfio intr/eventfd mapping */
+ err = rte_intr_enable(pci_dev->intr_handle);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Enable rte interrupt failed, dev_name: %s",
+ eth_dev->data->name);
+ goto enable_intr_fail;
+ }
+
spnic_mutex_init(&nic_dev->rx_mode_mutex, NULL);
rte_bit_relaxed_set32(SPNIC_DEV_INTR_EN, &nic_dev->dev_status);
@@ -2830,6 +3039,13 @@ static int spnic_func_init(struct rte_eth_dev *eth_dev)
return 0;
+enable_intr_fail:
+ (void)rte_intr_callback_unregister(pci_dev->intr_handle,
+ spnic_dev_interrupt_handler,
+ (void *)eth_dev);
+
+reg_intr_cb_fail:
+ spnic_copy_mempool_uninit(nic_dev);
init_mpool_fail:
set_default_feature_fail:
spnic_deinit_mac_addr(eth_dev);
diff --git a/drivers/net/spnic/spnic_ethdev.h b/drivers/net/spnic/spnic_ethdev.h
index be429945ac..fdd0e87e62 100644
--- a/drivers/net/spnic/spnic_ethdev.h
+++ b/drivers/net/spnic/spnic_ethdev.h
@@ -89,4 +89,7 @@ struct spnic_nic_dev {
#define SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev) \
((struct spnic_nic_dev *)(dev)->data->dev_private)
+int spnic_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id);
+int spnic_dev_rx_queue_intr_disable(struct rte_eth_dev *dev,
+ uint16_t queue_id);
#endif /* _SPNIC_ETHDEV_H_ */
diff --git a/drivers/net/spnic/spnic_rx.c b/drivers/net/spnic/spnic_rx.c
index 5af836ed41..f990d10be4 100644
--- a/drivers/net/spnic/spnic_rx.c
+++ b/drivers/net/spnic/spnic_rx.c
@@ -610,6 +610,7 @@ int spnic_start_all_rqs(struct rte_eth_dev *eth_dev)
rxq = eth_dev->data->rx_queues[i];
spnic_add_rq_to_rx_queue_list(nic_dev, rxq->q_id);
spnic_rearm_rxq_mbuf(rxq);
+ spnic_dev_rx_queue_intr_enable(eth_dev, rxq->q_id);
eth_dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED;
}
@@ -628,6 +629,7 @@ int spnic_start_all_rqs(struct rte_eth_dev *eth_dev)
rxq = eth_dev->data->rx_queues[i];
spnic_remove_rq_from_rx_queue_list(nic_dev, rxq->q_id);
spnic_free_rxq_mbufs(rxq);
+ spnic_dev_rx_queue_intr_disable(eth_dev, rxq->q_id);
eth_dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
}
return err;
--
2.27.0
^ permalink raw reply [flat|nested] 32+ messages in thread
* [PATCH v1 24/25] net/spnic: support Tx/Rx queue start/stop
2021-12-18 2:51 [PATCH v1 00/25] Net/SPNIC: support SPNIC into DPDK 22.03 Yanling Song
` (22 preceding siblings ...)
2021-12-18 2:51 ` [PATCH v1 23/25] net/spnic: support VFIO interrupt Yanling Song
@ 2021-12-18 2:51 ` Yanling Song
2021-12-18 2:51 ` [PATCH v1 25/25] net/spnic: add doc infrastructure Yanling Song
24 siblings, 0 replies; 32+ messages in thread
From: Yanling Song @ 2021-12-18 2:51 UTC (permalink / raw)
To: dev; +Cc: songyl, yanling.song, yanggan, ferruh.yigit
This commit supports starting or stopping a specified Rx/Tx queue.
For an Rx queue:
When starting an rx queue, mbufs are allocated and the rq wqes are
filled with the mbuf info, then the qid is added to the RSS
indirection table. If the first rx queue is started, the valid bit
in the function table is set so that packets can be received by the host.
When stopping an rx queue, the PMD driver polls the rx queue until it
is empty and releases the mbufs, then removes the qid from the RSS
indirection table. If the last rx queue is stopped, the valid bit in
the function table is cleared.
For a Tx queue:
When stopping a tx queue, the PMD driver waits until all tx packets
are sent and then releases all mbufs.
Signed-off-by: Yanling Song <songyl@ramaxel.com>
---
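For context, a minimal sketch of the application-level queue start/stop flow these callbacks enable; the port/queue ids and the restart sequence are illustrative, not part of this patch:

#include <rte_ethdev.h>

static int restart_queue_pair(uint16_t port_id, uint16_t qid)
{
	int ret;

	/* Stop the queues: Rx is drained and its mbufs released,
	 * Tx waits for in-flight packets before freeing mbufs. */
	ret = rte_eth_dev_rx_queue_stop(port_id, qid);
	if (ret != 0)
		return ret;
	ret = rte_eth_dev_tx_queue_stop(port_id, qid);
	if (ret != 0)
		return ret;

	/* ... reconfigure or drain application state here ... */

	/* Restart: Rx mbufs are re-posted and the queue rejoins the
	 * RSS indirection table. */
	ret = rte_eth_dev_tx_queue_start(port_id, qid);
	if (ret != 0)
		return ret;
	return rte_eth_dev_rx_queue_start(port_id, qid);
}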
drivers/net/spnic/base/spnic_nic_cfg.c | 33 ++++
drivers/net/spnic/base/spnic_nic_cfg.h | 13 ++
drivers/net/spnic/spnic_ethdev.c | 82 +++++++++
drivers/net/spnic/spnic_rx.c | 222 +++++++++++++++++++++++++
drivers/net/spnic/spnic_rx.h | 4 +
5 files changed, 354 insertions(+)
diff --git a/drivers/net/spnic/base/spnic_nic_cfg.c b/drivers/net/spnic/base/spnic_nic_cfg.c
index ce77b306db..934f11a881 100644
--- a/drivers/net/spnic/base/spnic_nic_cfg.c
+++ b/drivers/net/spnic/base/spnic_nic_cfg.c
@@ -1289,6 +1289,39 @@ int spnic_vf_get_default_cos(void *hwdev, u8 *cos_id)
return 0;
}
+int spnic_set_rq_flush(void *hwdev, u16 q_id)
+{
+ struct spnic_cmd_set_rq_flush *rq_flush_msg = NULL;
+ struct spnic_cmd_buf *cmd_buf = NULL;
+ u64 out_param = EIO;
+ int err;
+
+ cmd_buf = spnic_alloc_cmd_buf(hwdev);
+ if (!cmd_buf) {
+ PMD_DRV_LOG(ERR, "Failed to allocate cmd buf\n");
+ return -ENOMEM;
+ }
+
+ cmd_buf->size = sizeof(*rq_flush_msg);
+
+ rq_flush_msg = cmd_buf->buf;
+ rq_flush_msg->local_rq_id = q_id;
+ rq_flush_msg->value = cpu_to_be32(rq_flush_msg->value);
+
+ err = spnic_cmdq_direct_resp(hwdev, SPNIC_MOD_L2NIC,
+ SPNIC_UCODE_CMD_SET_RQ_FLUSH, cmd_buf,
+ &out_param, 0);
+ if (err || out_param != 0) {
+ PMD_DRV_LOG(ERR, "Failed to set rq flush, err:%d, out_param: %"PRIu64"",
+ err, out_param);
+ err = -EFAULT;
+ }
+
+ spnic_free_cmd_buf(cmd_buf);
+
+ return err;
+}
+
static int _mag_msg_to_mgmt_sync(void *hwdev, u16 cmd, void *buf_in,
u16 in_size, void *buf_out, u16 *out_size)
{
diff --git a/drivers/net/spnic/base/spnic_nic_cfg.h b/drivers/net/spnic/base/spnic_nic_cfg.h
index e5e4ffea4b..e4b4a52d32 100644
--- a/drivers/net/spnic/base/spnic_nic_cfg.h
+++ b/drivers/net/spnic/base/spnic_nic_cfg.h
@@ -1069,6 +1069,19 @@ int spnic_set_vlan_fliter(void *hwdev, u32 vlan_filter_ctrl);
*/
int spnic_vf_get_default_cos(void *hwdev, u8 *cos_id);
+/**
+ * Flush rx queue resource
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev
+ * @param[in] q_id
+ * rx queue id
+ *
+ * @retval zero : Success
+ * @retval non-zero : Failure
+ */
+int spnic_set_rq_flush(void *hwdev, u16 q_id);
+
/**
* Get service feature HW supported
*
diff --git a/drivers/net/spnic/spnic_ethdev.c b/drivers/net/spnic/spnic_ethdev.c
index e4db4afdfd..ff50715120 100644
--- a/drivers/net/spnic/spnic_ethdev.c
+++ b/drivers/net/spnic/spnic_ethdev.c
@@ -993,6 +993,80 @@ static void spnic_deinit_mac_addr(struct rte_eth_dev *eth_dev)
spnic_delete_mc_addr_list(nic_dev);
}
+static int spnic_dev_rx_queue_start(struct rte_eth_dev *dev,
+ uint16_t rq_id)
+{
+ struct spnic_rxq *rxq = NULL;
+ int rc;
+
+ if (rq_id < dev->data->nb_rx_queues) {
+ rxq = dev->data->rx_queues[rq_id];
+
+ rc = spnic_start_rq(dev, rxq);
+ if (rc) {
+ PMD_DRV_LOG(ERR, "Start rx queue failed, eth_dev:%s, queue_idx:%d",
+ dev->data->name, rq_id);
+ return rc;
+ }
+
+ dev->data->rx_queue_state[rq_id] = RTE_ETH_QUEUE_STATE_STARTED;
+ }
+
+ return 0;
+}
+
+static int spnic_dev_rx_queue_stop(struct rte_eth_dev *dev,
+ uint16_t rq_id)
+{
+ struct spnic_rxq *rxq = NULL;
+ int rc;
+
+ if (rq_id < dev->data->nb_rx_queues) {
+ rxq = dev->data->rx_queues[rq_id];
+
+ rc = spnic_stop_rq(dev, rxq);
+ if (rc) {
+ PMD_DRV_LOG(ERR, "Stop rx queue failed, eth_dev:%s, queue_idx:%d",
+ dev->data->name, rq_id);
+ return rc;
+ }
+
+ dev->data->rx_queue_state[rq_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+ }
+
+ return 0;
+}
+
+static int spnic_dev_tx_queue_start(struct rte_eth_dev *dev,
+ uint16_t sq_id)
+{
+ PMD_DRV_LOG(INFO, "Start tx queue, eth_dev:%s, queue_idx:%d",
+ dev->data->name, sq_id);
+ dev->data->tx_queue_state[sq_id] = RTE_ETH_QUEUE_STATE_STARTED;
+ return 0;
+}
+
+static int spnic_dev_tx_queue_stop(struct rte_eth_dev *dev,
+ uint16_t sq_id)
+{
+ struct spnic_txq *txq = NULL;
+ int rc;
+
+ if (sq_id < dev->data->nb_tx_queues) {
+ txq = dev->data->tx_queues[sq_id];
+ rc = spnic_stop_sq(txq);
+ if (rc) {
+ PMD_DRV_LOG(ERR, "Stop tx queue failed, eth_dev:%s, queue_idx:%d",
+ dev->data->name, sq_id);
+ return rc;
+ }
+
+ dev->data->tx_queue_state[sq_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+ }
+
+ return 0;
+}
+
int spnic_dev_rx_queue_intr_enable(struct rte_eth_dev *dev,
uint16_t queue_id)
{
@@ -2717,6 +2791,10 @@ static const struct eth_dev_ops spnic_pmd_ops = {
.tx_queue_setup = spnic_tx_queue_setup,
.rx_queue_release = spnic_rx_queue_release,
.tx_queue_release = spnic_tx_queue_release,
+ .rx_queue_start = spnic_dev_rx_queue_start,
+ .rx_queue_stop = spnic_dev_rx_queue_stop,
+ .tx_queue_start = spnic_dev_tx_queue_start,
+ .tx_queue_stop = spnic_dev_tx_queue_stop,
.rx_queue_intr_enable = spnic_dev_rx_queue_intr_enable,
.rx_queue_intr_disable = spnic_dev_rx_queue_intr_disable,
.dev_start = spnic_dev_start,
@@ -2756,6 +2834,10 @@ static const struct eth_dev_ops spnic_pmd_vf_ops = {
.tx_queue_setup = spnic_tx_queue_setup,
.rx_queue_intr_enable = spnic_dev_rx_queue_intr_enable,
.rx_queue_intr_disable = spnic_dev_rx_queue_intr_disable,
+ .rx_queue_start = spnic_dev_rx_queue_start,
+ .rx_queue_stop = spnic_dev_rx_queue_stop,
+ .tx_queue_start = spnic_dev_tx_queue_start,
+ .tx_queue_stop = spnic_dev_tx_queue_stop,
.dev_start = spnic_dev_start,
.link_update = spnic_link_update,
.rx_queue_release = spnic_rx_queue_release,
diff --git a/drivers/net/spnic/spnic_rx.c b/drivers/net/spnic/spnic_rx.c
index f990d10be4..cba746df88 100644
--- a/drivers/net/spnic/spnic_rx.c
+++ b/drivers/net/spnic/spnic_rx.c
@@ -486,6 +486,228 @@ void spnic_remove_rq_from_rx_queue_list(struct spnic_nic_dev *nic_dev,
nic_dev->num_rss = rss_queue_count;
}
+static void spnic_rx_queue_release_mbufs(struct spnic_rxq *rxq)
+{
+ u16 sw_ci, ci_mask, free_wqebbs;
+ u16 rx_buf_len;
+ u32 status, vlan_len, pkt_len;
+ u32 pkt_left_len = 0;
+ u32 nr_released = 0;
+ struct spnic_rx_info *rx_info;
+ volatile struct spnic_rq_cqe *rx_cqe;
+
+ sw_ci = spnic_get_rq_local_ci(rxq);
+ rx_info = &rxq->rx_info[sw_ci];
+ rx_cqe = &rxq->rx_cqe[sw_ci];
+ free_wqebbs = (u16)(spnic_get_rq_free_wqebb(rxq) + 1);
+ status = rx_cqe->status;
+ ci_mask = rxq->q_mask;
+
+ while (free_wqebbs < rxq->q_depth) {
+ rx_buf_len = rxq->buf_len;
+ if (pkt_left_len != 0) {
+ /* flush continued jumbo rqe */
+ pkt_left_len = (pkt_left_len <= rx_buf_len) ? 0 :
+ (pkt_left_len - rx_buf_len);
+ } else if (SPNIC_GET_RX_FLUSH(status)) {
+ /* flush one released rqe */
+ pkt_left_len = 0;
+ } else if (SPNIC_GET_RX_DONE(status)) {
+ /* flush single packet or first jumbo rqe */
+ vlan_len = rx_cqe->vlan_len;
+ pkt_len = SPNIC_GET_RX_PKT_LEN(vlan_len);
+ pkt_left_len = (pkt_len <= rx_buf_len) ? 0 :
+ (pkt_len - rx_buf_len);
+ } else {
+ break;
+ }
+
+ rte_pktmbuf_free(rx_info->mbuf);
+
+ rx_info->mbuf = NULL;
+ rx_cqe->status = 0;
+ nr_released++;
+ free_wqebbs++;
+
+ /* see next cqe */
+ sw_ci++;
+ sw_ci &= ci_mask;
+ rx_info = &rxq->rx_info[sw_ci];
+ rx_cqe = &rxq->rx_cqe[sw_ci];
+ status = rx_cqe->status;
+ }
+
+ spnic_update_rq_local_ci(rxq, nr_released);
+}
+
+int spnic_poll_rq_empty(struct spnic_rxq *rxq)
+{
+ unsigned long timeout;
+ int free_wqebb;
+ int err = -EFAULT;
+
+ timeout = msecs_to_jiffies(SPNIC_FLUSH_QUEUE_TIMEOUT) + jiffies;
+ do {
+ free_wqebb = spnic_get_rq_free_wqebb(rxq) + 1;
+ if (free_wqebb == rxq->q_depth) {
+ err = 0;
+ break;
+ }
+ spnic_rx_queue_release_mbufs(rxq);
+ rte_delay_us(1);
+ } while (time_before(jiffies, timeout));
+
+ return err;
+}
+
+void spnic_dump_cqe_status(struct spnic_rxq *rxq, u32 *cqe_done_cnt,
+ u32 *cqe_hole_cnt, u32 *head_ci,
+ u32 *head_done)
+{
+ u16 sw_ci;
+ u16 avail_pkts = 0;
+ u16 hit_done = 0;
+ u16 cqe_hole = 0;
+ u32 status;
+ volatile struct spnic_rq_cqe *rx_cqe;
+
+ sw_ci = spnic_get_rq_local_ci(rxq);
+ rx_cqe = &rxq->rx_cqe[sw_ci];
+ status = rx_cqe->status;
+ *head_done = SPNIC_GET_RX_DONE(status);
+ *head_ci = sw_ci;
+
+ for (sw_ci = 0; sw_ci < rxq->q_depth; sw_ci++) {
+ rx_cqe = &rxq->rx_cqe[sw_ci];
+
+ /* test current ci is done */
+ status = rx_cqe->status;
+ if (!SPNIC_GET_RX_DONE(status) ||
+ !SPNIC_GET_RX_FLUSH(status)) {
+ if (hit_done) {
+ cqe_hole++;
+ hit_done = 0;
+ }
+
+ continue;
+ }
+
+ avail_pkts++;
+ hit_done = 1;
+ }
+
+ *cqe_done_cnt = avail_pkts;
+ *cqe_hole_cnt = cqe_hole;
+}
+
+int spnic_stop_rq(struct rte_eth_dev *eth_dev, struct spnic_rxq *rxq)
+{
+ struct spnic_nic_dev *nic_dev = rxq->nic_dev;
+ u32 cqe_done_cnt = 0;
+ u32 cqe_hole_cnt = 0;
+ u32 head_ci, head_done;
+ int err;
+
+ /* disable rxq intr */
+ spnic_dev_rx_queue_intr_disable(eth_dev, rxq->q_id);
+
+ /* lock dev queue switch */
+ rte_spinlock_lock(&nic_dev->queue_list_lock);
+
+ spnic_remove_rq_from_rx_queue_list(nic_dev, rxq->q_id);
+
+ if (nic_dev->rss_state == SPNIC_RSS_ENABLE) {
+ err = spnic_refill_indir_rqid(rxq);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Clear rq in indirect table failed, eth_dev:%s, queue_idx:%d\n",
+ nic_dev->dev_name, rxq->q_id);
+ spnic_add_rq_to_rx_queue_list(nic_dev, rxq->q_id);
+ goto set_indir_failed;
+ }
+ }
+
+ if (nic_dev->num_rss == 0) {
+ err = spnic_set_vport_enable(nic_dev->hwdev, false);
+ if (err) {
+ PMD_DRV_LOG(ERR, "%s Disable vport failed, rc:%d",
+ nic_dev->dev_name, err);
+ goto set_vport_failed;
+ }
+ }
+
+ /* unlock dev queue list switch */
+ rte_spinlock_unlock(&nic_dev->queue_list_lock);
+
+ /* Send flush rq cmd to uCode */
+ err = spnic_set_rq_flush(nic_dev->hwdev, rxq->q_id);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Flush rq failed, eth_dev:%s, queue_idx:%d\n",
+ nic_dev->dev_name, rxq->q_id);
+ goto rq_flush_failed;
+ }
+
+ err = spnic_poll_rq_empty(rxq);
+ if (err) {
+ spnic_dump_cqe_status(rxq, &cqe_done_cnt, &cqe_hole_cnt,
+ &head_ci, &head_done);
+ PMD_DRV_LOG(ERR, "Poll rq empty timeout, eth_dev:%s, queue_idx:%d, "
+ "mbuf_left:%d, cqe_done:%d, cqe_hole:%d, cqe[%d].done=%d\n",
+ nic_dev->dev_name, rxq->q_id,
+ rxq->q_depth - spnic_get_rq_free_wqebb(rxq),
+ cqe_done_cnt, cqe_hole_cnt, head_ci, head_done);
+ goto poll_rq_failed;
+ }
+
+ return 0;
+
+poll_rq_failed:
+rq_flush_failed:
+ rte_spinlock_lock(&nic_dev->queue_list_lock);
+set_vport_failed:
+ spnic_add_rq_to_rx_queue_list(nic_dev, rxq->q_id);
+ if (nic_dev->rss_state == SPNIC_RSS_ENABLE)
+ (void)spnic_refill_indir_rqid(rxq);
+set_indir_failed:
+ rte_spinlock_unlock(&nic_dev->queue_list_lock);
+ spnic_dev_rx_queue_intr_enable(eth_dev, rxq->q_id);
+ return err;
+}
+
+int spnic_start_rq(struct rte_eth_dev *eth_dev, struct spnic_rxq *rxq)
+{
+ struct spnic_nic_dev *nic_dev = rxq->nic_dev;
+ int err = 0;
+
+ /* lock dev queue switch */
+ rte_spinlock_lock(&nic_dev->queue_list_lock);
+
+ spnic_add_rq_to_rx_queue_list(nic_dev, rxq->q_id);
+
+ spnic_rearm_rxq_mbuf(rxq);
+
+ if (nic_dev->rss_state == SPNIC_RSS_ENABLE) {
+ err = spnic_refill_indir_rqid(rxq);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Refill rq to indrect table failed, eth_dev:%s, queue_idx:%d err:%d\n",
+ nic_dev->dev_name, rxq->q_id, err);
+ spnic_remove_rq_from_rx_queue_list(nic_dev, rxq->q_id);
+ }
+ }
+
+ if (rxq->nic_dev->num_rss == 1) {
+ err = spnic_set_vport_enable(nic_dev->hwdev, true);
+ if (err)
+ PMD_DRV_LOG(ERR, "%s enable vport failed, err:%d",
+ nic_dev->dev_name, err);
+ }
+
+ /* unlock dev queue list switch */
+ rte_spinlock_unlock(&nic_dev->queue_list_lock);
+
+ spnic_dev_rx_queue_intr_enable(eth_dev, rxq->q_id);
+
+ return err;
+}
static inline uint64_t spnic_rx_vlan(uint32_t offload_type, uint32_t vlan_len,
uint16_t *vlan_tci)
diff --git a/drivers/net/spnic/spnic_rx.h b/drivers/net/spnic/spnic_rx.h
index 5ae4b5f1ab..a876f75595 100644
--- a/drivers/net/spnic/spnic_rx.h
+++ b/drivers/net/spnic/spnic_rx.h
@@ -273,6 +273,10 @@ void spnic_dump_cqe_status(struct spnic_rxq *rxq, u32 *cqe_done_cnt,
u32 *cqe_hole_cnt, u32 *head_ci,
u32 *head_done);
+int spnic_stop_rq(struct rte_eth_dev *eth_dev, struct spnic_rxq *rxq);
+
+int spnic_start_rq(struct rte_eth_dev *eth_dev, struct spnic_rxq *rxq);
+
int spnic_start_all_rqs(struct rte_eth_dev *eth_dev);
u16 spnic_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts);
--
2.27.0
^ permalink raw reply [flat|nested] 32+ messages in thread
* [PATCH v1 25/25] net/spnic: add doc infrastructure
2021-12-18 2:51 [PATCH v1 00/25] Net/SPNIC: support SPNIC into DPDK 22.03 Yanling Song
` (23 preceding siblings ...)
2021-12-18 2:51 ` [PATCH v1 24/25] net/spnic: support Tx/Rx queue start/stop Yanling Song
@ 2021-12-18 2:51 ` Yanling Song
24 siblings, 0 replies; 32+ messages in thread
From: Yanling Song @ 2021-12-18 2:51 UTC (permalink / raw)
To: dev; +Cc: songyl, yanling.song, yanggan, ferruh.yigit
This patch adds the doc infrastructure for the spnic PMD driver.
Signed-off-by: Yanling Song <songyl@ramaxel.com>
---
MAINTAINERS | 6 +++
doc/guides/nics/features/spnic.ini | 40 ++++++++++++++++++++
doc/guides/nics/spnic.rst | 61 ++++++++++++++++++++++++++++++
3 files changed, 107 insertions(+)
create mode 100644 doc/guides/nics/features/spnic.ini
create mode 100644 doc/guides/nics/spnic.rst
diff --git a/MAINTAINERS b/MAINTAINERS
index 18d9edaf88..12f6171aef 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -919,6 +919,12 @@ F: drivers/net/qede/
F: doc/guides/nics/qede.rst
F: doc/guides/nics/features/qede*.ini
+Ramaxel SPNIC
+M: Yanling Song <songyl@ramaxel.com>
+F: drivers/net/spnic/
+F: doc/guides/nics/spnic.rst
+F: doc/guides/nics/features/spnic.ini
+
Solarflare sfc_efx
M: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
F: drivers/common/sfc_efx/
diff --git a/doc/guides/nics/features/spnic.ini b/doc/guides/nics/features/spnic.ini
new file mode 100644
index 0000000000..bdefcb451b
--- /dev/null
+++ b/doc/guides/nics/features/spnic.ini
@@ -0,0 +1,40 @@
+;
+; Supported features of 'spnic' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Speed capabilities = Y
+Link status = Y
+Link status event = Y
+Queue start/stop = Y
+MTU update = Y
+Jumbo frame = Y
+Scattered Rx = Y
+LRO = Y
+TSO = Y
+Promiscuous mode = Y
+Allmulticast mode = Y
+Unicast MAC filter = Y
+Multicast MAC filter = Y
+RSS hash = Y
+RSS key update = Y
+RSS reta update = Y
+Inner RSS = Y
+SR-IOV = Y
+Flow control = Y
+CRC offload = Y
+VLAN filter = Y
+VLAN offload = Y
+L3 checksum offload = Y
+L4 checksum offload = Y
+Inner L3 checksum = Y
+Inner L4 checksum = Y
+Basic stats = Y
+Extended stats = Y
+Stats per queue = Y
+FW version = Y
+Multiprocess aware = Y
+Linux = Y
+x86-64 = Y
+ARMv8 = Y
\ No newline at end of file
diff --git a/doc/guides/nics/spnic.rst b/doc/guides/nics/spnic.rst
new file mode 100644
index 0000000000..9305ecbb84
--- /dev/null
+++ b/doc/guides/nics/spnic.rst
@@ -0,0 +1,61 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2021 Ramaxel Memory Technology, Ltd
+
+
+SPNIC Poll Mode Driver
+======================
+
+The spnic PMD (**librte_net_spnic**) provides poll mode driver support
+for 25Gbps/100Gbps SPNxxx Network Adapters.
+
+
+Features
+--------
+
+- Multiple queues for TX and RX
+- Receive Side Scaling (RSS)
+- RSS supports IPv4, IPv6, TCPv4, TCPv6, UDPv4 and UDPv6; the inner type is used for VXLAN by default
+- MAC/VLAN filtering
+- Checksum offload
+- TSO offload
+- LRO offload
+- Promiscuous mode
+- Port hardware statistics
+- Link state information
+- Link flow control (pause frames)
+- Scatter and gather for TX and RX
+- SR-IOV - Partially supported, VFIO only
+- VLAN filter and VLAN offload
+- Allmulticast mode
+- MTU update
+- Unicast MAC filter
+- Multicast MAC filter
+- Set Link down or up
+- FW version
+- Multi arch support: x86_64, ARMv8.
+
+Prerequisites
+-------------
+
+- Learning about SPNIC using
+ `<xx card introduce>`_.
+
+- Getting the latest product documents and software supports using
+ `<Website to support>`_.
+
+- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to setup the basic DPDK environment.
+
+
+Driver compilation and testing
+------------------------------
+
+Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
+for details.
+
+It is highly recommended to upgrade the spnic driver and firmware to avoid compatibility issues,
+and to check the working mode against the latest product documents.
+
+Limitations or Known issues
+---------------------------
+Build with ICC is not supported yet.
+X86-32, Power8, ARMv7 and BSD are not supported yet.
--
2.27.0
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [PATCH v1 01/25] drivers/net: introduce a new PMD driver
2021-12-18 2:51 ` [PATCH v1 01/25] drivers/net: introduce a new PMD driver Yanling Song
@ 2021-12-19 19:40 ` Stephen Hemminger
2021-12-22 0:54 ` Yanling Song
0 siblings, 1 reply; 32+ messages in thread
From: Stephen Hemminger @ 2021-12-19 19:40 UTC (permalink / raw)
To: Yanling Song; +Cc: dev, yanling.song, yanggan, ferruh.yigit
On Sat, 18 Dec 2021 10:51:28 +0800
Yanling Song <songyl@ramaxel.com> wrote:
> +#ifdef CLOCK_MONOTONIC_RAW /* Defined in glibc bits/time.h */
> +#define CLOCK_TYPE CLOCK_MONOTONIC_RAW
> +#else
> +#define CLOCK_TYPE CLOCK_MONOTONIC
> +#endif
CLOCK_MONOTONIC_RAW was defined in Linux 2.6.28
DPDK does not support any kernels that old, so the #ifdef is not needed.
+
+static inline unsigned long clock_gettime_ms(void)
+{
+ struct timespec tv;
+
+ (void)clock_gettime(CLOCK_TYPE, &tv);
+
+ return (unsigned long)tv.tv_sec * SPNIC_S_TO_MS_UNIT +
+ (unsigned long)tv.tv_nsec / SPNIC_S_TO_NS_UNIT;
+}
If all you want is jiffie accuracy, you could use CLOCK_MONOTONIC_COARSE.
+#define jiffies clock_gettime_ms()
+#define msecs_to_jiffies(ms) (ms)
+#define time_before(now, end) ((now) < (end))
Does that simple version of the macro work right if jiffies wraps around?
Less of an issue on 64 bit platforms...
The kernel version is effectively.
#define time_before(now, end) ((long)((now) - (end)) < 0)
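To make the wrap-around concern concrete, an illustrative corner case (values chosen only for the example):

unsigned long now = ULONG_MAX - 8;  /* jiffies just before the counter wraps */
unsigned long end = now + 15;       /* a 15 ms deadline, wraps around to 6 */

/* simple form:  (now < end)               -> false, timeout appears expired at once */
/* kernel form:  ((long)((now) - (end)) < 0) -> true, correctly still before the deadline */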
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [PATCH v1 16/25] net/spnic: add device configure/version/info
2021-12-18 2:51 ` [PATCH v1 16/25] net/spnic: add device configure/version/info Yanling Song
@ 2021-12-20 0:23 ` Stephen Hemminger
2021-12-22 0:56 ` Yanling Song
0 siblings, 1 reply; 32+ messages in thread
From: Stephen Hemminger @ 2021-12-20 0:23 UTC (permalink / raw)
To: Yanling Song; +Cc: dev, yanling.song, yanggan, ferruh.yigit
On Sat, 18 Dec 2021 10:51:43 +0800
Yanling Song <songyl@ramaxel.com> wrote:
> +static int spnic_dev_configure(struct rte_eth_dev *dev)
> +{
> + struct spnic_nic_dev *nic_dev = SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
> +
> + nic_dev->num_sqs = dev->data->nb_tx_queues;
> + nic_dev->num_rqs = dev->data->nb_rx_queues;
> +
> + if (nic_dev->num_sqs > nic_dev->max_sqs ||
> + nic_dev->num_rqs > nic_dev->max_rqs) {
> + PMD_DRV_LOG(ERR, "num_sqs: %d or num_rqs: %d larger than max_sqs: %d or max_rqs: %d",
> + nic_dev->num_sqs, nic_dev->num_rqs,
> + nic_dev->max_sqs, nic_dev->max_rqs);
> + return -EINVAL;
> + }
> +
This should already be covered by checks in ethedev:dev_configure.
> + /* The range of mtu is 384~9600 */
> + if (SPNIC_MTU_TO_PKTLEN(dev->data->dev_conf.rxmode.mtu) <
> + SPNIC_MIN_FRAME_SIZE ||
> + SPNIC_MTU_TO_PKTLEN(dev->data->dev_conf.rxmode.mtu) >
> + SPNIC_MAX_JUMBO_FRAME_SIZE) {
> + PMD_DRV_LOG(ERR, "Max rx pkt len out of range, mtu: %d, expect between %d and %d",
> + dev->data->dev_conf.rxmode.mtu,
> + SPNIC_MIN_FRAME_SIZE, SPNIC_MAX_JUMBO_FRAME_SIZE);
> + return -EINVAL;
> + }
Already covered by eth_dev_validate_mtu called from ethdev dev_configure.
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [PATCH v1 01/25] drivers/net: introduce a new PMD driver
2021-12-19 19:40 ` Stephen Hemminger
@ 2021-12-22 0:54 ` Yanling Song
2021-12-22 16:55 ` Stephen Hemminger
0 siblings, 1 reply; 32+ messages in thread
From: Yanling Song @ 2021-12-22 0:54 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: dev, yanling.song, yanggan, ferruh.yigit, xuyun
On Sun, 19 Dec 2021 11:40:31 -0800
Stephen Hemminger <stephen@networkplumber.org> wrote:
> On Sat, 18 Dec 2021 10:51:28 +0800
> Yanling Song <songyl@ramaxel.com> wrote:
>
> > +#ifdef CLOCK_MONOTONIC_RAW /* Defined in glibc bits/time.h */
> > +#define CLOCK_TYPE CLOCK_MONOTONIC_RAW
> > +#else
> > +#define CLOCK_TYPE CLOCK_MONOTONIC
> > +#endif
>
> CLOCK_MONOTONIC_RAW was defined in Linux 2.6.28
> DPDK does not support any kernels that old, so the #ifdef is not
> needed.
>
OK. #ifdef will be removed in the next version.
>
> +
> +static inline unsigned long clock_gettime_ms(void)
> +{
> + struct timespec tv;
> +
> + (void)clock_gettime(CLOCK_TYPE, &tv);
> +
> + return (unsigned long)tv.tv_sec * SPNIC_S_TO_MS_UNIT +
> + (unsigned long)tv.tv_nsec / SPNIC_S_TO_NS_UNIT;
> +}
>
> If all you want is jiffie accuracy, you could use
> CLOCK_MONOTONIC_COARSE.
>
I did not get your point: CLOCK_MONOTONIC is more accurate than
CLOCK_MONOTONIC_COARSE, right?
>
> +#define jiffies clock_gettime_ms()
> +#define msecs_to_jiffies(ms) (ms)
>
> +#define time_before(now, end) ((now) < (end))
>
> Does that simple version of the macro work right if jiffies wraps
> around? Less of an issue on 64 bit platforms...
>
> The kernel version is effectively.
> #define time_before(now, end) ((long)((now) - (end)) < 0)
OK. Will be changed in the next version.
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [PATCH v1 16/25] net/spnic: add device configure/version/info
2021-12-20 0:23 ` Stephen Hemminger
@ 2021-12-22 0:56 ` Yanling Song
0 siblings, 0 replies; 32+ messages in thread
From: Yanling Song @ 2021-12-22 0:56 UTC (permalink / raw)
To: Stephen Hemminger
Cc: dev, yanling.song, yanggan, ferruh.yigit, xuyun
On Mon, 20 Dec 2021 08:23:56 +0800
Stephen Hemminger <stephen@networkplumber.org> wrote:
> On Sat, 18 Dec 2021 10:51:43 +0800
> Yanling Song <songyl@ramaxel.com> wrote:
>
> > +static int spnic_dev_configure(struct rte_eth_dev *dev)
> > +{
> > + struct spnic_nic_dev *nic_dev =
> > SPNIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev); +
> > + nic_dev->num_sqs = dev->data->nb_tx_queues;
> > + nic_dev->num_rqs = dev->data->nb_rx_queues;
> > +
> > + if (nic_dev->num_sqs > nic_dev->max_sqs ||
> > + nic_dev->num_rqs > nic_dev->max_rqs) {
> > + PMD_DRV_LOG(ERR, "num_sqs: %d or num_rqs: %d
> > larger than max_sqs: %d or max_rqs: %d",
> > + nic_dev->num_sqs, nic_dev->num_rqs,
> > + nic_dev->max_sqs, nic_dev->max_rqs);
> > + return -EINVAL;
> > + }
> > +
>
> This should already be covered by checks in ethedev:dev_configure.
OK. The check will be removed in the next version.
>
> > + /* The range of mtu is 384~9600 */
> > + if (SPNIC_MTU_TO_PKTLEN(dev->data->dev_conf.rxmode.mtu) <
> > + SPNIC_MIN_FRAME_SIZE ||
> > + SPNIC_MTU_TO_PKTLEN(dev->data->dev_conf.rxmode.mtu) >
> > + SPNIC_MAX_JUMBO_FRAME_SIZE) {
> > + PMD_DRV_LOG(ERR, "Max rx pkt len out of range,
> > mtu: %d, expect between %d and %d",
> > + dev->data->dev_conf.rxmode.mtu,
> > + SPNIC_MIN_FRAME_SIZE,
> > SPNIC_MAX_JUMBO_FRAME_SIZE);
> > + return -EINVAL;
> > + }
>
> Already covered by eth_dev_validate_mtu called from ethdev
> dev_configure.
>
OK. The check will be removed in the next version.
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [PATCH v1 01/25] drivers/net: introduce a new PMD driver
2021-12-22 0:54 ` Yanling Song
@ 2021-12-22 16:55 ` Stephen Hemminger
2021-12-23 8:10 ` Yanling Song
0 siblings, 1 reply; 32+ messages in thread
From: Stephen Hemminger @ 2021-12-22 16:55 UTC (permalink / raw)
To: Yanling Song; +Cc: dev, yanling.song, yanggan, ferruh.yigit, xuyun
On Wed, 22 Dec 2021 08:54:00 +0800
Yanling Song <songyl@ramaxel.com> wrote:
> > If all you want is jiffie accuracy, you could use
> > CLOCK_MONOTONIC_COARSE.
> >
> I did not get your point: CLOCK_MONOTONIC is more accurate than
> CLOCK_MONOTONIC_COARSE, right?
CLOCK_MONOTONIC ends up using the TSC counter and values in the
shared page (VDSO) to compute time accurately.
CLOCK_MONOTONIC_COARSE is faster and good enough if you only
want ms accuracy. It just reads a value from shared page
and avoids the TSC instruction.
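A minimal sketch of the coarse-clock variant suggested here (plain Linux/glibc clock_gettime(); the helper name and unit constants are illustrative):

#include <time.h>

static inline unsigned long clock_gettime_coarse_ms(void)
{
	struct timespec ts;

	/* Reads the ms-granularity value maintained by the kernel tick,
	 * no TSC read and no conversion beyond sec/nsec to ms. */
	(void)clock_gettime(CLOCK_MONOTONIC_COARSE, &ts);
	return (unsigned long)ts.tv_sec * 1000UL +
	       (unsigned long)ts.tv_nsec / 1000000UL;
}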
^ permalink raw reply [flat|nested] 32+ messages in thread
* Re: [PATCH v1 01/25] drivers/net: introduce a new PMD driver
2021-12-22 16:55 ` Stephen Hemminger
@ 2021-12-23 8:10 ` Yanling Song
0 siblings, 0 replies; 32+ messages in thread
From: Yanling Song @ 2021-12-23 8:10 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: dev, yanling.song, yanggan, ferruh.yigit, xuyun
On Wed, 22 Dec 2021 08:55:22 -0800
Stephen Hemminger <stephen@networkplumber.org> wrote:
> On Wed, 22 Dec 2021 08:54:00 +0800
> Yanling Song <songyl@ramaxel.com> wrote:
>
> > > If all you want is jiffie accuracy, you could use
> > > CLOCK_MONOTONIC_COARSE.
> > >
> > I did not get your point: CLOCK_MONOTONIC is more accurate than
> > CLOCK_MONOTONIC_COARSE, right?
>
> CLOCK_MONOTONIC ends up using the TSC counter and values in the
> shared page (VDSO) to compute time accurately.
>
> CLOCK_MONOTONIC_COARSE is faster and good enough if you only
> want ms accuracy. It just reads a value from shared page
> and avoids the TSC instruction.
OK. Got it. Thanks. Will be included in the next version.
^ permalink raw reply [flat|nested] 32+ messages in thread