* [RFC 01/18] net/hinic3: add intro doc for hinic3
2025-04-18 9:05 [RFC 00/18] add hinic3 PMD driver Feifei Wang
@ 2025-04-18 9:05 ` Feifei Wang
2025-04-18 9:05 ` [RFC 02/18] net/hinic3: add basic header files Feifei Wang
` (19 subsequent siblings)
20 siblings, 0 replies; 30+ messages in thread
From: Feifei Wang @ 2025-04-18 9:05 UTC (permalink / raw)
To: dev; +Cc: Feifei Wang, Yi Chen, Xin Wang
From: Feifei Wang <wangfeifei40@huawei.com>
This patch adds basic files describing the hinic3 driver.
Signed-off-by: Feifei Wang <wangfeifei40@huawei.com>
Signed-off-by: Yi Chen <chenyi221@huawei.com>
Reviewed-by: Xin Wang <wangxin679@h-partners.com>
---
.mailmap | 4 +-
MAINTAINERS | 6 +++
doc/guides/nics/hinic3.rst | 52 ++++++++++++++++++++++++++
doc/guides/nics/index.rst | 1 +
doc/guides/rel_notes/release_25_07.rst | 32 +---------------
5 files changed, 64 insertions(+), 31 deletions(-)
create mode 100644 doc/guides/nics/hinic3.rst
diff --git a/.mailmap b/.mailmap
index d8439b79ce..8c1341e783 100644
--- a/.mailmap
+++ b/.mailmap
@@ -429,7 +429,7 @@ Fang TongHao <fangtonghao@sangfor.com.cn>
Fan Zhang <fanzhang.oss@gmail.com> <roy.fan.zhang@intel.com>
Farah Smith <farah.smith@broadcom.com>
Fei Chen <chenwei.0515@bytedance.com>
-Feifei Wang <feifei.wang2@arm.com> <feifei.wang@arm.com>
+Feifei Wang <wangfeifei40@huawei.com> <feifei.wang1218@gmail.com> <feifei.wang2@arm.com> <feifei.wang@arm.com> <wff_light@vip.163.com>
Fei Qin <fei.qin@corigine.com>
Fengjiang Liu <liufengjiang.0426@bytedance.com>
Fengnan Chang <changfengnan@bytedance.com>
@@ -1718,6 +1718,7 @@ Xingguang He <xingguang.he@intel.com>
Xingyou Chen <niatlantice@gmail.com>
Xing Wang <xing_wang@realsil.com.cn>
Xinying Yu <xinying.yu@corigine.com>
+Xin Wang <wangxin679@h-partners.com>
Xin Long <longxin.xl@alibaba-inc.com>
Xi Zhang <xix.zhang@intel.com>
Xuan Ding <xuan.ding@intel.com>
@@ -1750,6 +1751,7 @@ Yelena Krivosheev <yelena@marvell.com>
Yerden Zhumabekov <e_zhumabekov@sts.kz> <yerden.zhumabekov@sts.kz>
Yevgeny Kliteynik <kliteyn@nvidia.com>
Yicai Lu <luyicai@huawei.com>
+Yi Chen <chenyi221@huawei.com>
Yiding Zhou <yidingx.zhou@intel.com>
Yi Li <liyi1@chinatelecom.cn>
Yi Liu <yi.liu@nxp.com>
diff --git a/MAINTAINERS b/MAINTAINERS
index 167cc74a15..f96a27210d 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -773,6 +773,12 @@ F: drivers/net/hinic/
F: doc/guides/nics/hinic.rst
F: doc/guides/nics/features/hinic.ini
+Huawei hinic3
+M: Feifei Wang <wangfeifei40@huawei.com>
+F: drivers/net/hinic3/
+F: doc/guides/nics/hinic3.rst
+F: doc/guides/nics/features/hinic3.ini
+
Intel Network Common Code
M: Bruce Richardson <bruce.richardson@intel.com>
T: git://dpdk.org/next/dpdk-next-net-intel
diff --git a/doc/guides/nics/hinic3.rst b/doc/guides/nics/hinic3.rst
new file mode 100644
index 0000000000..c7080c8c1d
--- /dev/null
+++ b/doc/guides/nics/hinic3.rst
@@ -0,0 +1,52 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2025 Huawei Technologies Co., Ltd
+
+
+HINIC3 Poll Mode Driver
+=======================
+
+The hinic3 PMD (**librte_net_hinic3**) provides poll mode driver support
+for 25Gbps/100Gbps/200Gbps Huawei SPx series Network Adapters.
+
+Features
+--------
+
+- Multi arch support: x86_64, ARMv8
+- Multiple queues for TX and RX
+- Receive Side Scaling (RSS)
+- Flow filtering
+- Checksum offload
+- TSO offload
+- Promiscuous mode
+- Port hardware statistics
+- Link state information
+- Link flow control
+- Scatter and gather for TX and RX
+- Allmulticast mode
+- MTU update
+- Multicast MAC filter
+- Flow API
+- Set Link down or up
+- VLAN filter and VLAN offload
+- SR-IOV - Partially supported at this point, VFIO only
+- FW version
+- LRO
+
+Prerequisites
+-------------
+
+- Learn about Huawei Hi1823 Series Intelligent NICs at
+ `<https://www.hikunpeng.com/compute/component/nic>`_.
+
+- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to setup the basic DPDK environment.
+
+
+Driver compilation and testing
+------------------------------
+
+Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
+for details.
+
+Limitations or Known issues
+---------------------------
+
+x86-32, Windows, and BSD are not supported yet.
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 10a2eca3b0..5ae4021ccb 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -33,6 +33,7 @@ Network Interface Controller Drivers
fm10k
gve
hinic
+ hinic3
hns3
i40e
ice
diff --git a/doc/guides/rel_notes/release_25_07.rst b/doc/guides/rel_notes/release_25_07.rst
index 093b85d206..1d65cf7829 100644
--- a/doc/guides/rel_notes/release_25_07.rst
+++ b/doc/guides/rel_notes/release_25_07.rst
@@ -24,37 +24,9 @@ DPDK Release 25.07
New Features
------------
-.. This section should contain new features added in this release.
- Sample format:
-
- * **Add a title in the past tense with a full stop.**
-
- Add a short 1-2 sentence description in the past tense.
- The description should be enough to allow someone scanning
- the release notes to understand the new feature.
-
- If the feature adds a lot of sub-features you can use a bullet list
- like this:
-
- * Added feature foo to do something.
- * Enhanced feature bar to do something else.
-
- Refer to the previous release notes for examples.
-
- Suggested order in release notes items:
- * Core libs (EAL, mempool, ring, mbuf, buses)
- * Device abstraction libs and PMDs (ordered alphabetically by vendor name)
- - ethdev (lib, PMDs)
- - cryptodev (lib, PMDs)
- - eventdev (lib, PMDs)
- - etc
- * Other libs
- * Apps, Examples, Tools (if significant)
-
- This section is a comment. Do not overwrite or remove it.
- Also, make sure to start the actual text at the margin.
- =======================================================
+* **Added Huawei hinic3 net driver [EXPERIMENTAL].**
+
+  Added network driver for the Huawei SPx series Network Adapters.
Removed Items
-------------
--
2.47.0.windows.2
* [RFC 02/18] net/hinic3: add basic header files
2025-04-18 9:05 [RFC 00/18] add hinic3 PMD driver Feifei Wang
2025-04-18 9:05 ` [RFC 01/18] net/hinic3: add intro doc for hinic3 Feifei Wang
@ 2025-04-18 9:05 ` Feifei Wang
2025-04-18 9:05 ` [RFC 03/18] net/hinic3: add hardware interfaces of BAR operation Feifei Wang
` (18 subsequent siblings)
20 siblings, 0 replies; 30+ messages in thread
From: Feifei Wang @ 2025-04-18 9:05 UTC (permalink / raw)
To: dev; +Cc: Xin Wang, Yi Chen, Feifei Wang
From: Xin Wang <wangxin679@h-partners.com>
Add the HW register definition header file for SP series NICs,
along with headers that define commands and basic definitions
used in the code.
Signed-off-by: Xin Wang <wangxin679@h-partners.com>
Reviewed-by: Yi Chen <chenyi221@huawei.com>
Reviewed-by: Feifei Wang <wangfeifei40@huawei.com>
---
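A minimal usage sketch of the SHIFT/MASK accessor pattern these headers
introduce (illustrative values only, not part of the patch):

	u32 rss_type = 0;

	/* Pack fields: mark the RSS type word valid and enable
	 * TCP-over-IPv4 hashing.
	 */
	rss_type |= HINIC3_RSS_TYPE_SET(1, VALID);
	rss_type |= HINIC3_RSS_TYPE_SET(1, TCP_IPV4);

	/* Unpack a single field from the packed word. */
	if (HINIC3_RSS_TYPE_GET(rss_type, TCP_IPV4))
		; /* TCP/IPv4 hashing is enabled */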
drivers/net/hinic3/base/hinic3_cmd.h | 231 ++++++++++++++++++++
drivers/net/hinic3/base/hinic3_compat.h | 266 ++++++++++++++++++++++++
drivers/net/hinic3/base/hinic3_csr.h | 108 ++++++++++
3 files changed, 605 insertions(+)
create mode 100644 drivers/net/hinic3/base/hinic3_cmd.h
create mode 100644 drivers/net/hinic3/base/hinic3_compat.h
create mode 100644 drivers/net/hinic3/base/hinic3_csr.h
diff --git a/drivers/net/hinic3/base/hinic3_cmd.h b/drivers/net/hinic3/base/hinic3_cmd.h
new file mode 100644
index 0000000000..f0e200a944
--- /dev/null
+++ b/drivers/net/hinic3/base/hinic3_cmd.h
@@ -0,0 +1,231 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Huawei Technologies Co., Ltd
+ */
+
+#ifndef _HINIC3_CMD_H_
+#define _HINIC3_CMD_H_
+
+#define NIC_RSS_TEMP_ID_TO_CTX_LT_IDX(tmp_id) tmp_id
+/* Begin of one temp tbl. */
+#define NIC_RSS_TEMP_ID_TO_INDIR_LT_IDX(tmp_id) ((tmp_id) << 4)
+/* 4 ctx in one entry. */
+#define NIC_RSS_CTX_TBL_ENTRY_SIZE 0x10
+/* Entry size = 16B, 16 entry/template. */
+#define NIC_RSS_INDIR_TBL_ENTRY_SIZE 0x10
+/* Entry size = 16B, so entry_num = 256B/16B. */
+#define NIC_RSS_INDIR_TBL_ENTRY_NUM 0x10
+
+#define NIC_UP_RSS_INVALID_TEMP_ID 0xFF
+#define NIC_UP_RSS_INVALID_FUNC_ID 0xFFFF
+#define NIC_UP_RSS_INVALID 0x00
+#define NIC_UP_RSS_EN 0x01
+#define NIC_UP_RSS_INVALID_GROUP_ID 0x7F
+
+#define NIC_RSS_CMD_TEMP_ALLOC 0x01
+#define NIC_RSS_CMD_TEMP_FREE 0x02
+
+#define HINIC3_RSS_TYPE_VALID_SHIFT 23
+#define HINIC3_RSS_TYPE_TCP_IPV6_EXT_SHIFT 24
+#define HINIC3_RSS_TYPE_IPV6_EXT_SHIFT 25
+#define HINIC3_RSS_TYPE_TCP_IPV6_SHIFT 26
+#define HINIC3_RSS_TYPE_IPV6_SHIFT 27
+#define HINIC3_RSS_TYPE_TCP_IPV4_SHIFT 28
+#define HINIC3_RSS_TYPE_IPV4_SHIFT 29
+#define HINIC3_RSS_TYPE_UDP_IPV6_SHIFT 30
+#define HINIC3_RSS_TYPE_UDP_IPV4_SHIFT 31
+#define HINIC3_RSS_TYPE_SET(val, member) \
+ (((u32)(val) & 0x1) << HINIC3_RSS_TYPE_##member##_SHIFT)
+
+#define HINIC3_RSS_TYPE_GET(val, member) \
+ (((u32)(val) >> HINIC3_RSS_TYPE_##member##_SHIFT) & 0x1)
+
+/* NIC CMDQ MODE. */
+typedef enum hinic3_ucode_cmd {
+ HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX = 0,
+ HINIC3_UCODE_CMD_CLEAN_QUEUE_CONTEXT,
+ HINIC3_UCODE_CMD_ARM_SQ,
+ HINIC3_UCODE_CMD_ARM_RQ,
+ HINIC3_UCODE_CMD_SET_RSS_INDIR_TABLE,
+ HINIC3_UCODE_CMD_SET_RSS_CONTEXT_TABLE,
+ HINIC3_UCODE_CMD_GET_RSS_INDIR_TABLE,
+ HINIC3_UCODE_CMD_GET_RSS_CONTEXT_TABLE,
+ HINIC3_UCODE_CMD_SET_IQ_ENABLE,
+ HINIC3_UCODE_CMD_SET_RQ_FLUSH = 10,
+ HINIC3_UCODE_CMD_MODIFY_VLAN_CTX,
+} cmdq_nic_subtype_e;
+
+/* Commands between NIC and MPU. */
+enum hinic3_nic_cmd {
+ /* Only for PFD and VFD. */
+ HINIC3_NIC_CMD_VF_REGISTER = 0,
+
+ /* FUNC CFG */
+ HINIC3_NIC_CMD_SET_FUNC_TBL = 5,
+ HINIC3_NIC_CMD_SET_VPORT_ENABLE,
+ HINIC3_NIC_CMD_SET_RX_MODE,
+ HINIC3_NIC_CMD_SQ_CI_ATTR_SET,
+ HINIC3_NIC_CMD_GET_VPORT_STAT,
+ HINIC3_NIC_CMD_CLEAN_VPORT_STAT,
+ HINIC3_NIC_CMD_CLEAR_QP_RESOURCE,
+ HINIC3_NIC_CMD_CFG_FLEX_QUEUE,
+ /* LRO CFG */
+ HINIC3_NIC_CMD_CFG_RX_LRO,
+ HINIC3_NIC_CMD_CFG_LRO_TIMER,
+ HINIC3_NIC_CMD_FEATURE_NEGO,
+
+ /* MAC & VLAN CFG */
+ HINIC3_NIC_CMD_GET_MAC = 20,
+ HINIC3_NIC_CMD_SET_MAC,
+ HINIC3_NIC_CMD_DEL_MAC,
+ HINIC3_NIC_CMD_UPDATE_MAC,
+ HINIC3_NIC_CMD_GET_ALL_DEFAULT_MAC,
+
+ HINIC3_NIC_CMD_CFG_FUNC_VLAN,
+ HINIC3_NIC_CMD_SET_VLAN_FILTER_EN,
+ HINIC3_NIC_CMD_SET_RX_VLAN_OFFLOAD,
+
+ /* SR-IOV */
+ HINIC3_NIC_CMD_CFG_VF_VLAN = 40,
+ HINIC3_NIC_CMD_SET_SPOOPCHK_STATE,
+ /* RATE LIMIT */
+ HINIC3_NIC_CMD_SET_MAX_MIN_RATE,
+
+ /* RSS CFG */
+ HINIC3_NIC_CMD_RSS_CFG = 60,
+ HINIC3_NIC_CMD_RSS_TEMP_MGR,
+ HINIC3_NIC_CMD_GET_RSS_CTX_TBL,
+ HINIC3_NIC_CMD_CFG_RSS_HASH_KEY,
+ HINIC3_NIC_CMD_CFG_RSS_HASH_ENGINE,
+ HINIC3_NIC_CMD_SET_RSS_CTX_TBL_INTO_FUNC,
+
+ /* FDIR */
+ HINIC3_NIC_CMD_ADD_TC_FLOW = 80,
+ HINIC3_NIC_CMD_DEL_TC_FLOW,
+ HINIC3_NIC_CMD_GET_TC_FLOW,
+ HINIC3_NIC_CMD_FLUSH_TCAM,
+ HINIC3_NIC_CMD_CFG_TCAM_BLOCK,
+ HINIC3_NIC_CMD_ENABLE_TCAM,
+ HINIC3_NIC_CMD_GET_TCAM_BLOCK,
+
+ HINIC3_NIC_CMD_SET_FDIR_STATUS = 91,
+
+ /* PORT CFG */
+ HINIC3_NIC_CMD_SET_PORT_ENABLE = 100,
+ HINIC3_NIC_CMD_CFG_PAUSE_INFO,
+
+ HINIC3_NIC_CMD_SET_PORT_CAR,
+ HINIC3_NIC_CMD_SET_ER_DROP_PKT,
+
+ HINIC3_NIC_CMD_VF_COS,
+ HINIC3_NIC_CMD_SETUP_COS_MAPPING,
+ HINIC3_NIC_CMD_SET_ETS,
+ HINIC3_NIC_CMD_SET_PFC,
+
+ /* MISC */
+ HINIC3_NIC_CMD_BIOS_CFG = 120,
+ HINIC3_NIC_CMD_SET_FIRMWARE_CUSTOM_PACKETS_MSG,
+
+ /* DFX */
+ HINIC3_NIC_CMD_GET_SM_TABLE = 140,
+ HINIC3_NIC_CMD_RD_LINE_TBL,
+
+ HINIC3_NIC_CMD_SET_VHD_CFG = 161,
+
+ HINIC3_NIC_CMD_GET_PORT_STAT = 200,
+ HINIC3_NIC_CMD_CLEAN_PORT_STAT,
+
+ HINIC3_NIC_CMD_MAX = 256
+};
+
+/* COMM commands between driver and MPU. */
+enum hinic3_mgmt_cmd {
+ HINIC3_MGMT_CMD_FUNC_RESET = 0,
+ HINIC3_MGMT_CMD_FEATURE_NEGO,
+ HINIC3_MGMT_CMD_FLUSH_DOORBELL,
+ HINIC3_MGMT_CMD_START_FLUSH,
+ HINIC3_MGMT_CMD_SET_FUNC_FLR,
+ HINIC3_MGMT_CMD_SET_FUNC_SVC_USED_STATE = 7,
+
+ HINIC3_MGMT_CMD_CFG_MSIX_NUM = 10,
+
+ HINIC3_MGMT_CMD_SET_CMDQ_CTXT = 20,
+ HINIC3_MGMT_CMD_SET_VAT,
+ HINIC3_MGMT_CMD_CFG_PAGESIZE,
+ HINIC3_MGMT_CMD_CFG_MSIX_CTRL_REG,
+ HINIC3_MGMT_CMD_SET_CEQ_CTRL_REG,
+ HINIC3_MGMT_CMD_SET_DMA_ATTR,
+
+ HINIC3_MGMT_CMD_GET_MQM_FIX_INFO = 40,
+ HINIC3_MGMT_CMD_SET_MQM_CFG_INFO,
+ HINIC3_MGMT_CMD_SET_MQM_SRCH_GPA,
+ HINIC3_MGMT_CMD_SET_PPF_TMR,
+ HINIC3_MGMT_CMD_SET_PPF_HT_GPA,
+ HINIC3_MGMT_CMD_SET_FUNC_TMR_BITMAT,
+
+ HINIC3_MGMT_CMD_GET_FW_VERSION = 60,
+ HINIC3_MGMT_CMD_GET_BOARD_INFO,
+ HINIC3_MGMT_CMD_SYNC_TIME,
+ HINIC3_MGMT_CMD_GET_HW_PF_INFOS,
+ HINIC3_MGMT_CMD_SEND_BDF_INFO,
+
+ HINIC3_MGMT_CMD_UPDATE_FW = 80,
+ HINIC3_MGMT_CMD_ACTIVE_FW,
+ HINIC3_MGMT_CMD_HOT_ACTIVE_FW,
+ HINIC3_MGMT_CMD_HOT_ACTIVE_DONE_NOTICE,
+ HINIC3_MGMT_CMD_SWITCH_CFG,
+ HINIC3_MGMT_CMD_CHECK_FLASH,
+ HINIC3_MGMT_CMD_CHECK_FLASH_RW,
+ HINIC3_MGMT_CMD_RESOURCE_CFG,
+ HINIC3_MGMT_CMD_UPDATE_BIOS,
+
+ HINIC3_MGMT_CMD_FAULT_REPORT = 100,
+ HINIC3_MGMT_CMD_WATCHDOG_INFO,
+ HINIC3_MGMT_CMD_MGMT_RESET,
+ HINIC3_MGMT_CMD_FFM_SET,
+
+ HINIC3_MGMT_CMD_GET_LOG = 120,
+ HINIC3_MGMT_CMD_TEMP_OP,
+ HINIC3_MGMT_CMD_EN_AUTO_RST_CHIP,
+ HINIC3_MGMT_CMD_CFG_REG,
+ HINIC3_MGMT_CMD_GET_CHIP_ID,
+ HINIC3_MGMT_CMD_SYSINFO_DFX,
+ HINIC3_MGMT_CMD_PCIE_DFX_NTC,
+};
+
+enum mag_cmd {
+ SERDES_CMD_PROCESS = 0,
+
+ MAG_CMD_SET_PORT_CFG = 1,
+ MAG_CMD_SET_PORT_ADAPT = 2,
+ MAG_CMD_CFG_LOOPBACK_MODE = 3,
+
+ MAG_CMD_GET_PORT_ENABLE = 5,
+ MAG_CMD_SET_PORT_ENABLE = 6,
+ MAG_CMD_GET_LINK_STATUS = 7,
+ MAG_CMD_SET_LINK_FOLLOW = 8,
+ MAG_CMD_SET_PMA_ENABLE = 9,
+ MAG_CMD_CFG_FEC_MODE = 10,
+
+ /* PHY */
+ MAG_CMD_GET_XSFP_INFO = 60,
+ MAG_CMD_SET_XSFP_ENABLE = 61,
+ MAG_CMD_GET_XSFP_PRESENT = 62,
+ /* sfp/qsfp single byte read/write, for equipment test. */
+ MAG_CMD_SET_XSFP_RW = 63,
+ MAG_CMD_CFG_XSFP_TEMPERATURE = 64,
+
+ MAG_CMD_WIRE_EVENT = 100,
+ MAG_CMD_LINK_ERR_EVENT = 101,
+
+ MAG_CMD_EVENT_PORT_INFO = 150,
+ MAG_CMD_GET_PORT_STAT = 151,
+ MAG_CMD_CLR_PORT_STAT = 152,
+ MAG_CMD_GET_PORT_INFO = 153,
+ MAG_CMD_GET_PCS_ERR_CNT = 154,
+ MAG_CMD_GET_MAG_CNT = 155,
+ MAG_CMD_DUMP_ANTRAIN_INFO = 156,
+
+ MAG_CMD_MAX = 0xFF
+};
+
+#endif /* _HINIC3_CMD_H_ */
diff --git a/drivers/net/hinic3/base/hinic3_compat.h b/drivers/net/hinic3/base/hinic3_compat.h
new file mode 100644
index 0000000000..ae2899d15c
--- /dev/null
+++ b/drivers/net/hinic3/base/hinic3_compat.h
@@ -0,0 +1,266 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Huawei Technologies Co., Ltd
+ */
+
+#ifndef _HINIC3_COMPAT_H_
+#define _HINIC3_COMPAT_H_
+
+#include <stdint.h>
+#include <stdbool.h>
+#include <sys/time.h>
+#include <unistd.h>
+#include <sys/syscall.h>
+#include <pthread.h>
+#include <ethdev_pci.h>
+#include <eal_interrupts.h>
+#include <rte_io.h>
+#include <rte_atomic.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_config.h>
+#include <rte_cycles.h>
+#include <rte_log.h>
+#include <rte_malloc.h>
+#include <rte_memcpy.h>
+#include <rte_memzone.h>
+#include <rte_spinlock.h>
+
+typedef uint8_t u8;
+typedef int8_t s8;
+typedef uint16_t u16;
+typedef uint32_t u32;
+typedef int32_t s32;
+typedef uint64_t u64;
+
+#ifndef BIT
+#define BIT(n) (1U << (n))
+#endif
+
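+/* Split the shift in two so the macro stays well defined when n is a
+ * 32-bit type (a single shift by 32 would be undefined behavior).
+ */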
+#define upper_32_bits(n) ((u32)(((n) >> 16) >> 16))
+#define lower_32_bits(n) ((u32)(n))
+
+#define HINIC3_MEM_ALLOC_ALIGN_MIN 1
+
+extern int hinic3_logtype;
+#define RTE_LOGTYPE_NET_HINIC3 hinic3_logtype
+
+#define PMD_DRV_LOG(level, ...) RTE_LOG_LINE(level, NET_HINIC3, __VA_ARGS__)
+
+/* Byte order conversion interface. */
+#define cpu_to_be16(o) rte_cpu_to_be_16(o)
+#define cpu_to_be32(o) rte_cpu_to_be_32(o)
+#define cpu_to_be64(o) rte_cpu_to_be_64(o)
+#define cpu_to_le32(o) rte_cpu_to_le_32(o)
+#define be16_to_cpu(o) rte_be_to_cpu_16(o)
+#define be32_to_cpu(o) rte_be_to_cpu_32(o)
+#define be64_to_cpu(o) rte_be_to_cpu_64(o)
+#define le32_to_cpu(o) rte_le_to_cpu_32(o)
+
+#ifdef HW_CONVERT_ENDIAN
+/* If the CSRs that enable endianness converting are configured, HW does the
+ * endianness converting for the stateless SQ ci, the fields less than 4B for
+ * the doorbell, and the fields less than 4B in the CQE data.
+ */
+#define hinic3_hw_be32(val) (val)
+#define hinic3_hw_cpu32(val) (val)
+#define hinic3_hw_cpu16(val) (val)
+#else
+#define hinic3_hw_be32(val) cpu_to_be32(val)
+#define hinic3_hw_cpu32(val) be32_to_cpu(val)
+#define hinic3_hw_cpu16(val) be16_to_cpu(val)
+#endif
+
+#define ARRAY_LEN(arr) ((int)(sizeof(arr) / sizeof((arr)[0])))
+
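+/* Convert each 4B chunk of data to HW byte order; a no-op when
+ * HW_CONVERT_ENDIAN is defined. len must be a multiple of 4B.
+ */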
+static inline void
+hinic3_hw_be32_len(void *data, int len)
+{
+ int i, chunk_sz = sizeof(u32);
+ u32 *mem = data;
+
+ if (!data)
+ return;
+
+ len = len / chunk_sz;
+
+ for (i = 0; i < len; i++) {
+ *mem = hinic3_hw_be32(*mem);
+ mem++;
+ }
+}
+
+static inline int
+hinic3_get_bit(int nr, volatile unsigned long *addr)
+{
+ RTE_ASSERT(nr < 0x20);
+
+ uint32_t mask = UINT32_C(1) << nr;
+ return (*addr) & mask;
+}
+
+static inline void
+hinic3_set_bit(unsigned int nr, volatile unsigned long *addr)
+{
+ rte_atomic_fetch_or_explicit(addr, (1UL << nr),
+ rte_memory_order_seq_cst);
+}
+
+static inline void
+hinic3_clear_bit(int nr, volatile unsigned long *addr)
+{
+ rte_atomic_fetch_and_explicit(addr, ~(1UL << nr),
+ rte_memory_order_seq_cst);
+}
+
+static inline int
+hinic3_test_and_clear_bit(int nr, volatile unsigned long *addr)
+{
+ unsigned long mask = (1UL << nr);
+
+ return (int)(rte_atomic_fetch_and_explicit(addr, ~mask,
+ rte_memory_order_seq_cst) &
+ mask);
+}
+
+static inline int
+hinic3_test_and_set_bit(int nr, volatile unsigned long *addr)
+{
+ unsigned long mask = (1UL << nr);
+
+ return (int)(rte_atomic_fetch_or_explicit(addr, mask,
+ rte_memory_order_seq_cst) &
+ mask);
+}
+
+#ifdef CLOCK_MONOTONIC_RAW /**< Defined in glibc bits/time.h. */
+#define CLOCK_TYPE CLOCK_MONOTONIC_RAW
+#else
+#define CLOCK_TYPE CLOCK_MONOTONIC
+#endif
+
+#define HINIC3_MUTEX_TIMEOUT 10
+#define HINIC3_S_TO_MS_UNIT 1000
+#define HINIC3_S_TO_NS_UNIT 1000000
+
+static inline unsigned long
+clock_gettime_ms(void)
+{
+ struct timespec tv;
+
+ (void)clock_gettime(CLOCK_TYPE, &tv);
+
+ return (unsigned long)tv.tv_sec * HINIC3_S_TO_MS_UNIT +
+ (unsigned long)tv.tv_nsec / HINIC3_S_TO_NS_UNIT;
+}
+
+#define jiffies clock_gettime_ms()
+#define msecs_to_jiffies(ms) (ms)
+#define time_before(now, end) ((now) < (end))
+
+/**
+ * Convert data to big endian 32 bit format.
+ *
+ * @param data
+ * The data to convert.
+ * @param len
+ * Length of data to convert, must be a multiple of 4B.
+ */
+static inline void
+hinic3_cpu_to_be32(void *data, int len)
+{
+ int i, chunk_sz = sizeof(u32);
+ u32 *mem = data;
+
+ if (!data)
+ return;
+
+ len = len / chunk_sz;
+
+ for (i = 0; i < len; i++) {
+ *mem = cpu_to_be32(*mem);
+ mem++;
+ }
+}
+
+/**
+ * Convert data from big endian 32 bit format.
+ *
+ * @param data
+ * The data to convert.
+ * @param len
+ * Length of data to convert, must be a multiple of 4B.
+ */
+static inline void
+hinic3_be32_to_cpu(void *data, int len)
+{
+ int i, chunk_sz = sizeof(u32);
+ u32 *mem = data;
+
+ if (!data)
+ return;
+
+ len = len / chunk_sz;
+
+ for (i = 0; i < len; i++) {
+ *mem = be32_to_cpu(*mem);
+ mem++;
+ }
+}
+
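+/* Floor of log2(n); returns 0 for n <= 1. */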
+static inline u16
+ilog2(u32 n)
+{
+ u16 res = 0;
+
+ while (n > 1) {
+ n >>= 1;
+ res++;
+ }
+
+ return res;
+}
+
+static inline int
+hinic3_mutex_init(pthread_mutex_t *pthreadmutex,
+ const pthread_mutexattr_t *mattr)
+{
+ int err;
+
+ err = pthread_mutex_init(pthreadmutex, mattr);
+ if (unlikely(err))
+ PMD_DRV_LOG(ERR, "Initialize mutex failed, error: %d", err);
+
+ return err;
+}
+
+static inline int
+hinic3_mutex_destroy(pthread_mutex_t *pthreadmutex)
+{
+ int err;
+
+ err = pthread_mutex_destroy(pthreadmutex);
+ if (unlikely(err))
+ PMD_DRV_LOG(ERR, "Destroy mutex failed, error: %d", err);
+
+ return err;
+}
+
+static inline int
+hinic3_mutex_lock(pthread_mutex_t *pthreadmutex)
+{
+ int err;
+
+ err = pthread_mutex_lock(pthreadmutex);
+ if (err)
+ PMD_DRV_LOG(ERR, "Mutex lock failed, err: %d", err);
+
+ return err;
+}
+
+static inline int
+hinic3_mutex_unlock(pthread_mutex_t *pthreadmutex)
+{
+ return pthread_mutex_unlock(pthreadmutex);
+}
+
+#endif /* _HINIC3_COMPAT_H_ */
diff --git a/drivers/net/hinic3/base/hinic3_csr.h b/drivers/net/hinic3/base/hinic3_csr.h
new file mode 100644
index 0000000000..8579794c8d
--- /dev/null
+++ b/drivers/net/hinic3/base/hinic3_csr.h
@@ -0,0 +1,108 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Huawei Technologies Co., Ltd
+ */
+
+#ifndef _HINIC3_CSR_H_
+#define _HINIC3_CSR_H_
+
+#ifdef CONFIG_SP_VID_DID
+#define PCI_VENDOR_ID_SPNIC 0x1F3F
+#define HINIC3_DEV_ID_STANDARD 0x9020
+#define HINIC3_DEV_ID_VF 0x9001
+#else
+#define PCI_VENDOR_ID_HUAWEI 0x19e5
+#define HINIC3_DEV_ID_STANDARD 0x0222
+#define HINIC3_DEV_ID_VF 0x375F
+#endif
+
+/*
+ * Bit30/bit31 for bar index flag.
+ * 00: bar0
+ * 01: bar1
+ * 10: bar2
+ * 11: bar3
+ */
+#define HINIC3_CFG_REGS_FLAG 0x40000000
+
+#define HINIC3_MGMT_REGS_FLAG 0xC0000000
+
+#define HINIC3_REGS_FLAG_MAKS 0x3FFFFFFF
+
+#define HINIC3_VF_CFG_REG_OFFSET 0x2000
+
+#define HINIC3_HOST_CSR_BASE_ADDR (HINIC3_MGMT_REGS_FLAG + 0x6000)
+#define HINIC3_CSR_GLOBAL_BASE_ADDR (HINIC3_MGMT_REGS_FLAG + 0x6400)
+
+/* HW interface registers. */
+#define HINIC3_CSR_FUNC_ATTR0_ADDR (HINIC3_CFG_REGS_FLAG + 0x0)
+#define HINIC3_CSR_FUNC_ATTR1_ADDR (HINIC3_CFG_REGS_FLAG + 0x4)
+#define HINIC3_CSR_FUNC_ATTR2_ADDR (HINIC3_CFG_REGS_FLAG + 0x8)
+#define HINIC3_CSR_FUNC_ATTR3_ADDR (HINIC3_CFG_REGS_FLAG + 0xC)
+#define HINIC3_CSR_FUNC_ATTR4_ADDR (HINIC3_CFG_REGS_FLAG + 0x10)
+#define HINIC3_CSR_FUNC_ATTR5_ADDR (HINIC3_CFG_REGS_FLAG + 0x14)
+#define HINIC3_CSR_FUNC_ATTR6_ADDR (HINIC3_CFG_REGS_FLAG + 0x18)
+
+#define HINIC3_FUNC_CSR_MAILBOX_DATA_OFF 0x80
+#define HINIC3_FUNC_CSR_MAILBOX_CONTROL_OFF (HINIC3_CFG_REGS_FLAG + 0x0100)
+#define HINIC3_FUNC_CSR_MAILBOX_INT_OFFSET_OFF (HINIC3_CFG_REGS_FLAG + 0x0104)
+#define HINIC3_FUNC_CSR_MAILBOX_RESULT_H_OFF (HINIC3_CFG_REGS_FLAG + 0x0108)
+#define HINIC3_FUNC_CSR_MAILBOX_RESULT_L_OFF (HINIC3_CFG_REGS_FLAG + 0x010C)
+
+#define HINIC3_PPF_ELECTION_OFFSET 0x0
+#define HINIC3_MPF_ELECTION_OFFSET 0x20
+
+#define HINIC3_CSR_PPF_ELECTION_ADDR \
+ (HINIC3_HOST_CSR_BASE_ADDR + HINIC3_PPF_ELECTION_OFFSET)
+
+#define HINIC3_CSR_GLOBAL_MPF_ELECTION_ADDR \
+ (HINIC3_HOST_CSR_BASE_ADDR + HINIC3_MPF_ELECTION_OFFSET)
+
+#define HINIC3_CSR_DMA_ATTR_TBL_ADDR (HINIC3_CFG_REGS_FLAG + 0x380)
+#define HINIC3_CSR_DMA_ATTR_INDIR_IDX_ADDR (HINIC3_CFG_REGS_FLAG + 0x390)
+
+/* MSI-X registers. */
+#define HINIC3_CSR_MSIX_INDIR_IDX_ADDR (HINIC3_CFG_REGS_FLAG + 0x310)
+#define HINIC3_CSR_MSIX_CTRL_ADDR (HINIC3_CFG_REGS_FLAG + 0x300)
+#define HINIC3_CSR_MSIX_CNT_ADDR (HINIC3_CFG_REGS_FLAG + 0x304)
+#define HINIC3_CSR_FUNC_MSI_CLR_WR_ADDR (HINIC3_CFG_REGS_FLAG + 0x58)
+
+#define HINIC3_MSI_CLR_INDIR_RESEND_TIMER_CLR_SHIFT 0
+#define HINIC3_MSI_CLR_INDIR_INT_MSK_SET_SHIFT 1
+#define HINIC3_MSI_CLR_INDIR_INT_MSK_CLR_SHIFT 2
+#define HINIC3_MSI_CLR_INDIR_AUTO_MSK_SET_SHIFT 3
+#define HINIC3_MSI_CLR_INDIR_AUTO_MSK_CLR_SHIFT 4
+#define HINIC3_MSI_CLR_INDIR_SIMPLE_INDIR_IDX_SHIFT 22
+
+#define HINIC3_MSI_CLR_INDIR_RESEND_TIMER_CLR_MASK 0x1U
+#define HINIC3_MSI_CLR_INDIR_INT_MSK_SET_MASK 0x1U
+#define HINIC3_MSI_CLR_INDIR_INT_MSK_CLR_MASK 0x1U
+#define HINIC3_MSI_CLR_INDIR_AUTO_MSK_SET_MASK 0x1U
+#define HINIC3_MSI_CLR_INDIR_AUTO_MSK_CLR_MASK 0x1U
+#define HINIC3_MSI_CLR_INDIR_SIMPLE_INDIR_IDX_MASK 0x3FFU
+
+#define HINIC3_MSI_CLR_INDIR_SET(val, member) \
+ (((val) & HINIC3_MSI_CLR_INDIR_##member##_MASK) \
+ << HINIC3_MSI_CLR_INDIR_##member##_SHIFT)
+
+/* EQ registers. */
+#define HINIC3_AEQ_INDIR_IDX_ADDR (HINIC3_CFG_REGS_FLAG + 0x210)
+
+#define HINIC3_AEQ_MTT_OFF_BASE_ADDR (HINIC3_CFG_REGS_FLAG + 0x240)
+
+#define HINIC3_CSR_EQ_PAGE_OFF_STRIDE 8
+
+#define HINIC3_AEQ_HI_PHYS_ADDR_REG(pg_num) \
+ (HINIC3_AEQ_MTT_OFF_BASE_ADDR + \
+ (pg_num) * HINIC3_CSR_EQ_PAGE_OFF_STRIDE)
+
+#define HINIC3_AEQ_LO_PHYS_ADDR_REG(pg_num) \
+ (HINIC3_AEQ_MTT_OFF_BASE_ADDR + \
+ (pg_num) * HINIC3_CSR_EQ_PAGE_OFF_STRIDE + 4)
+
+#define HINIC3_CSR_AEQ_CTRL_0_ADDR (HINIC3_CFG_REGS_FLAG + 0x200)
+#define HINIC3_CSR_AEQ_CTRL_1_ADDR (HINIC3_CFG_REGS_FLAG + 0x204)
+#define HINIC3_CSR_AEQ_CONS_IDX_ADDR (HINIC3_CFG_REGS_FLAG + 0x208)
+#define HINIC3_CSR_AEQ_PROD_IDX_ADDR (HINIC3_CFG_REGS_FLAG + 0x20C)
+#define HINIC3_CSR_AEQ_CI_SIMPLE_INDIR_ADDR (HINIC3_CFG_REGS_FLAG + 0x50)
+
+#endif /* _HINIC3_CSR_H_ */
--
2.47.0.windows.2
* [RFC 03/18] net/hinic3: add hardware interfaces of BAR operation
2025-04-18 9:05 [RFC 00/18] add hinic3 PMD driver Feifei Wang
2025-04-18 9:05 ` [RFC 01/18] net/hinic3: add intro doc for hinic3 Feifei Wang
2025-04-18 9:05 ` [RFC 02/18] net/hinic3: add basic header files Feifei Wang
@ 2025-04-18 9:05 ` Feifei Wang
2025-04-18 9:05 ` [RFC 04/18] net/hinic3: add support for cmdq mechanism Feifei Wang
` (17 subsequent siblings)
20 siblings, 0 replies; 30+ messages in thread
From: Feifei Wang @ 2025-04-18 9:05 UTC (permalink / raw)
To: dev; +Cc: Yi Chen, Xin Wang, Feifei Wang
From: Yi Chen <chenyi221@huawei.com>
This patch adds HW interfaces for BAR operations, including:
mapped BAR address getting, HW attribute getting, MSI-X register
operation, function type getting, and so on.
Signed-off-by: Yi Chen <chenyi221@huawei.com>
Reviewed-by: Xin Wang <wangxin679@h-partners.com>
Reviewed-by: Feifei Wang <wangfeifei40@huawei.com>
---
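A minimal sketch of how the register constants in this patch are consumed
(hypothetical caller; hwif_ready() below does the same): bits 30/31 of a
register constant select the BAR, and hinic3_hwif_read_reg() dispatches to
mgmt_regs_base or cfg_regs_base accordingly.

	/* FUNC_ATTR1 carries HINIC3_CFG_REGS_FLAG, so the read goes
	 * through cfg_regs_base; an all-ones value means the PCIe link
	 * is down.
	 */
	u32 attr1 = hinic3_hwif_read_reg(hwif, HINIC3_CSR_FUNC_ATTR1_ADDR);

	if (attr1 == HINIC3_PCIE_LINK_DOWN)
		return -EBUSY;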
drivers/net/hinic3/base/hinic3_hwif.c | 779 ++++++++++++++++++++++++++
drivers/net/hinic3/base/hinic3_hwif.h | 142 +++++
2 files changed, 921 insertions(+)
create mode 100644 drivers/net/hinic3/base/hinic3_hwif.c
create mode 100644 drivers/net/hinic3/base/hinic3_hwif.h
diff --git a/drivers/net/hinic3/base/hinic3_hwif.c b/drivers/net/hinic3/base/hinic3_hwif.c
new file mode 100644
index 0000000000..7d075693f9
--- /dev/null
+++ b/drivers/net/hinic3/base/hinic3_hwif.c
@@ -0,0 +1,779 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Huawei Technologies Co., Ltd
+ */
+
+#include <rte_bus_pci.h>
+#include "hinic3_compat.h"
+#include "hinic3_csr.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_hwif.h"
+
+#define WAIT_HWIF_READY_TIMEOUT 10000
+
+#define DB_IDX(db, db_base) \
+ ((u32)(((ulong)(db) - (ulong)(db_base)) / HINIC3_DB_PAGE_SIZE))
+
+#define HINIC3_AF0_FUNC_GLOBAL_IDX_SHIFT 0
+#define HINIC3_AF0_P2P_IDX_SHIFT 12
+#define HINIC3_AF0_PCI_INTF_IDX_SHIFT 17
+#define HINIC3_AF0_VF_IN_PF_SHIFT 20
+#define HINIC3_AF0_FUNC_TYPE_SHIFT 28
+
+#define HINIC3_AF0_FUNC_GLOBAL_IDX_MASK 0xFFF
+#define HINIC3_AF0_P2P_IDX_MASK 0x1F
+#define HINIC3_AF0_PCI_INTF_IDX_MASK 0x7
+#define HINIC3_AF0_VF_IN_PF_MASK 0xFF
+#define HINIC3_AF0_FUNC_TYPE_MASK 0x1
+
+#define HINIC3_AF0_GET(val, member) \
+ (((val) >> HINIC3_AF0_##member##_SHIFT) & HINIC3_AF0_##member##_MASK)
+
+#define HINIC3_AF1_PPF_IDX_SHIFT 0
+#define HINIC3_AF1_AEQS_PER_FUNC_SHIFT 8
+#define HINIC3_AF1_MGMT_INIT_STATUS_SHIFT 30
+#define HINIC3_AF1_PF_INIT_STATUS_SHIFT 31
+
+#define HINIC3_AF1_PPF_IDX_MASK 0x3F
+#define HINIC3_AF1_AEQS_PER_FUNC_MASK 0x3
+#define HINIC3_AF1_MGMT_INIT_STATUS_MASK 0x1
+#define HINIC3_AF1_PF_INIT_STATUS_MASK 0x1
+
+#define HINIC3_AF1_GET(val, member) \
+ (((val) >> HINIC3_AF1_##member##_SHIFT) & HINIC3_AF1_##member##_MASK)
+
+#define HINIC3_AF2_CEQS_PER_FUNC_SHIFT 0
+#define HINIC3_AF2_DMA_ATTR_PER_FUNC_SHIFT 9
+#define HINIC3_AF2_IRQS_PER_FUNC_SHIFT 16
+
+#define HINIC3_AF2_CEQS_PER_FUNC_MASK 0x1FF
+#define HINIC3_AF2_DMA_ATTR_PER_FUNC_MASK 0x7
+#define HINIC3_AF2_IRQS_PER_FUNC_MASK 0x7FF
+
+#define HINIC3_AF2_GET(val, member) \
+ (((val) >> HINIC3_AF2_##member##_SHIFT) & HINIC3_AF2_##member##_MASK)
+
+#define HINIC3_AF3_GLOBAL_VF_ID_OF_NXT_PF_SHIFT 0
+#define HINIC3_AF3_GLOBAL_VF_ID_OF_PF_SHIFT 16
+
+#define HINIC3_AF3_GLOBAL_VF_ID_OF_NXT_PF_MASK 0xFFF
+#define HINIC3_AF3_GLOBAL_VF_ID_OF_PF_MASK 0xFFF
+
+#define HINIC3_AF3_GET(val, member) \
+ (((val) >> HINIC3_AF3_##member##_SHIFT) & HINIC3_AF3_##member##_MASK)
+
+#define HINIC3_AF4_DOORBELL_CTRL_SHIFT 0
+#define HINIC3_AF4_DOORBELL_CTRL_MASK 0x1
+
+#define HINIC3_AF4_GET(val, member) \
+ (((val) >> HINIC3_AF4_##member##_SHIFT) & HINIC3_AF4_##member##_MASK)
+
+#define HINIC3_AF4_SET(val, member) \
+ (((val) & HINIC3_AF4_##member##_MASK) << HINIC3_AF4_##member##_SHIFT)
+
+#define HINIC3_AF4_CLEAR(val, member) \
+ ((val) & (~(HINIC3_AF4_##member##_MASK << HINIC3_AF4_##member##_SHIFT)))
+
+#define HINIC3_AF5_OUTBOUND_CTRL_SHIFT 0
+#define HINIC3_AF5_OUTBOUND_CTRL_MASK 0x1
+
+#define HINIC3_AF5_GET(val, member) \
+ (((val) >> HINIC3_AF5_##member##_SHIFT) & HINIC3_AF5_##member##_MASK)
+
+#define HINIC3_AF5_SET(val, member) \
+ (((val) & HINIC3_AF5_##member##_MASK) << HINIC3_AF5_##member##_SHIFT)
+
+#define HINIC3_AF5_CLEAR(val, member) \
+ ((val) & (~(HINIC3_AF5_##member##_MASK << HINIC3_AF5_##member##_SHIFT)))
+
+#define HINIC3_AF6_PF_STATUS_SHIFT 0
+#define HINIC3_AF6_PF_STATUS_MASK 0xFFFF
+
+#define HINIC3_AF6_FUNC_MAX_QUEUE_SHIFT 23
+#define HINIC3_AF6_FUNC_MAX_QUEUE_MASK 0x1FF
+
+#define HINIC3_AF6_MSIX_FLEX_EN_SHIFT 22
+#define HINIC3_AF6_MSIX_FLEX_EN_MASK 0x1
+
+#define HINIC3_AF6_SET(val, member) \
+ ((((u32)(val)) & HINIC3_AF6_##member##_MASK) \
+ << HINIC3_AF6_##member##_SHIFT)
+
+#define HINIC3_AF6_GET(val, member) \
+ (((val) >> HINIC3_AF6_##member##_SHIFT) & HINIC3_AF6_##member##_MASK)
+
+#define HINIC3_AF6_CLEAR(val, member) \
+ ((val) & (~(HINIC3_AF6_##member##_MASK << HINIC3_AF6_##member##_SHIFT)))
+
+#define HINIC3_PPF_ELECTION_IDX_SHIFT 0
+
+#define HINIC3_PPF_ELECTION_IDX_MASK 0x3F
+
+#define HINIC3_PPF_ELECTION_SET(val, member) \
+ (((val) & HINIC3_PPF_ELECTION_##member##_MASK) \
+ << HINIC3_PPF_ELECTION_##member##_SHIFT)
+
+#define HINIC3_PPF_ELECTION_GET(val, member) \
+ (((val) >> HINIC3_PPF_ELECTION_##member##_SHIFT) & \
+ HINIC3_PPF_ELECTION_##member##_MASK)
+
+#define HINIC3_PPF_ELECTION_CLEAR(val, member) \
+ ((val) & (~(HINIC3_PPF_ELECTION_##member##_MASK \
+ << HINIC3_PPF_ELECTION_##member##_SHIFT)))
+
+#define HINIC3_MPF_ELECTION_IDX_SHIFT 0
+
+#define HINIC3_MPF_ELECTION_IDX_MASK 0x1F
+
+#define HINIC3_MPF_ELECTION_SET(val, member) \
+ (((val) & HINIC3_MPF_ELECTION_##member##_MASK) \
+ << HINIC3_MPF_ELECTION_##member##_SHIFT)
+
+#define HINIC3_MPF_ELECTION_GET(val, member) \
+ (((val) >> HINIC3_MPF_ELECTION_##member##_SHIFT) & \
+ HINIC3_MPF_ELECTION_##member##_MASK)
+
+#define HINIC3_MPF_ELECTION_CLEAR(val, member) \
+ ((val) & (~(HINIC3_MPF_ELECTION_##member##_MASK \
+ << HINIC3_MPF_ELECTION_##member##_SHIFT)))
+
+#define HINIC3_GET_REG_FLAG(reg) ((reg) & (~(HINIC3_REGS_FLAG_MAKS)))
+
+#define HINIC3_GET_REG_ADDR(reg) ((reg) & (HINIC3_REGS_FLAG_MAKS))
+
+#define HINIC3_IS_VF_DEV(pdev) ((pdev)->id.device_id == HINIC3_DEV_ID_VF)
+
+u32
+hinic3_hwif_read_reg(struct hinic3_hwif *hwif, u32 reg)
+{
+ if (HINIC3_GET_REG_FLAG(reg) == HINIC3_MGMT_REGS_FLAG)
+ return be32_to_cpu(rte_read32(hwif->mgmt_regs_base +
+ HINIC3_GET_REG_ADDR(reg)));
+ else
+ return be32_to_cpu(rte_read32(hwif->cfg_regs_base +
+ HINIC3_GET_REG_ADDR(reg)));
+}
+
+void
+hinic3_hwif_write_reg(struct hinic3_hwif *hwif, u32 reg, u32 val)
+{
+ if (HINIC3_GET_REG_FLAG(reg) == HINIC3_MGMT_REGS_FLAG)
+ rte_write32(cpu_to_be32(val),
+ hwif->mgmt_regs_base + HINIC3_GET_REG_ADDR(reg));
+ else
+ rte_write32(cpu_to_be32(val),
+ hwif->cfg_regs_base + HINIC3_GET_REG_ADDR(reg));
+}
+
+/**
+ * Check whether HW initialization is complete.
+ *
+ * @param[in] hwdev
+ * The pointer to the private hardware device.
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hwif_ready(struct hinic3_hwdev *hwdev)
+{
+ u32 addr, attr1;
+
+ addr = HINIC3_CSR_FUNC_ATTR1_ADDR;
+ attr1 = hinic3_hwif_read_reg(hwdev->hwif, addr);
+ if (attr1 == HINIC3_PCIE_LINK_DOWN)
+ return -EBUSY;
+
+ if (!HINIC3_AF1_GET(attr1, MGMT_INIT_STATUS))
+ return -EBUSY;
+
+ return 0;
+}
+
+static int
+wait_hwif_ready(struct hinic3_hwdev *hwdev)
+{
+ ulong timeout = 0;
+
+ do {
+ if (!hwif_ready(hwdev))
+ return 0;
+
+ rte_delay_ms(1);
+ timeout++;
+ } while (timeout <= WAIT_HWIF_READY_TIMEOUT);
+
+ PMD_DRV_LOG(ERR, "Hwif is not ready");
+ return -EBUSY;
+}
+
+/**
+ * Set the attributes as members in hwif.
+ *
+ * @param[in] hwif
+ * The hardware interface of a pci function device.
+ * @param[in] attr0
+ * The first attribute that was read from the hw.
+ * @param[in] attr1
+ * The second attribute that was read from the hw.
+ * @param[in] attr2
+ * The third attribute that was read from the hw.
+ * @param[in] attr3
+ * The fourth attribute that was read from the hw.
+ */
+static void
+set_hwif_attr(struct hinic3_hwif *hwif, u32 attr0, u32 attr1, u32 attr2,
+ u32 attr3)
+{
+ hwif->attr.func_global_idx = HINIC3_AF0_GET(attr0, FUNC_GLOBAL_IDX);
+ hwif->attr.port_to_port_idx = HINIC3_AF0_GET(attr0, P2P_IDX);
+ hwif->attr.pci_intf_idx = HINIC3_AF0_GET(attr0, PCI_INTF_IDX);
+ hwif->attr.vf_in_pf = HINIC3_AF0_GET(attr0, VF_IN_PF);
+ hwif->attr.func_type = HINIC3_AF0_GET(attr0, FUNC_TYPE);
+
+ hwif->attr.ppf_idx = HINIC3_AF1_GET(attr1, PPF_IDX);
+ hwif->attr.num_aeqs = BIT(HINIC3_AF1_GET(attr1, AEQS_PER_FUNC));
+
+ hwif->attr.num_ceqs = (u8)HINIC3_AF2_GET(attr2, CEQS_PER_FUNC);
+ hwif->attr.num_irqs = HINIC3_AF2_GET(attr2, IRQS_PER_FUNC);
+ hwif->attr.num_dma_attr = BIT(HINIC3_AF2_GET(attr2, DMA_ATTR_PER_FUNC));
+
+ hwif->attr.global_vf_id_of_pf =
+ HINIC3_AF3_GET(attr3, GLOBAL_VF_ID_OF_PF);
+}
+
+/**
+ * Read and set the attributes as members in hwif.
+ *
+ * @param[in] hwif
+ * The hardware interface of a pci function device.
+ */
+static void
+get_hwif_attr(struct hinic3_hwif *hwif)
+{
+ u32 addr, attr0, attr1, attr2, attr3;
+
+ addr = HINIC3_CSR_FUNC_ATTR0_ADDR;
+ attr0 = hinic3_hwif_read_reg(hwif, addr);
+
+ addr = HINIC3_CSR_FUNC_ATTR1_ADDR;
+ attr1 = hinic3_hwif_read_reg(hwif, addr);
+
+ addr = HINIC3_CSR_FUNC_ATTR2_ADDR;
+ attr2 = hinic3_hwif_read_reg(hwif, addr);
+
+ addr = HINIC3_CSR_FUNC_ATTR3_ADDR;
+ attr3 = hinic3_hwif_read_reg(hwif, addr);
+
+ set_hwif_attr(hwif, attr0, attr1, attr2, attr3);
+}
+
+/**
+ * Update message signaled interrupt information.
+ *
+ * @param[in] hwif
+ * The hardware interface of a pci function device.
+ */
+void
+hinic3_update_msix_info(struct hinic3_hwif *hwif)
+{
+ u32 attr6 = hinic3_hwif_read_reg(hwif, HINIC3_CSR_FUNC_ATTR6_ADDR);
+ hwif->attr.num_queue = HINIC3_AF6_GET(attr6, FUNC_MAX_QUEUE);
+ hwif->attr.msix_flex_en = HINIC3_AF6_GET(attr6, MSIX_FLEX_EN);
+ PMD_DRV_LOG(INFO, "msix_flex_en: %u, queue msix: %u",
+ hwif->attr.msix_flex_en, hwif->attr.num_queue);
+}
+
+void
+hinic3_set_pf_status(struct hinic3_hwif *hwif, enum hinic3_pf_status status)
+{
+ u32 attr6 = hinic3_hwif_read_reg(hwif, HINIC3_CSR_FUNC_ATTR6_ADDR);
+
+ attr6 = HINIC3_AF6_CLEAR(attr6, PF_STATUS);
+ attr6 |= HINIC3_AF6_SET(status, PF_STATUS);
+
+ if (hwif->attr.func_type == TYPE_VF)
+ return;
+
+ hinic3_hwif_write_reg(hwif, HINIC3_CSR_FUNC_ATTR6_ADDR, attr6);
+}
+
+enum hinic3_pf_status
+hinic3_get_pf_status(struct hinic3_hwif *hwif)
+{
+ u32 attr6 = hinic3_hwif_read_reg(hwif, HINIC3_CSR_FUNC_ATTR6_ADDR);
+
+ return HINIC3_AF6_GET(attr6, PF_STATUS);
+}
+
+static enum hinic3_doorbell_ctrl
+hinic3_get_doorbell_ctrl_status(struct hinic3_hwif *hwif)
+{
+ u32 attr4 = hinic3_hwif_read_reg(hwif, HINIC3_CSR_FUNC_ATTR4_ADDR);
+
+ return HINIC3_AF4_GET(attr4, DOORBELL_CTRL);
+}
+
+static enum hinic3_outbound_ctrl
+hinic3_get_outbound_ctrl_status(struct hinic3_hwif *hwif)
+{
+ u32 attr5 = hinic3_hwif_read_reg(hwif, HINIC3_CSR_FUNC_ATTR5_ADDR);
+
+ return HINIC3_AF5_GET(attr5, OUTBOUND_CTRL);
+}
+
+void
+hinic3_enable_doorbell(struct hinic3_hwif *hwif)
+{
+ u32 addr, attr4;
+
+ addr = HINIC3_CSR_FUNC_ATTR4_ADDR;
+ attr4 = hinic3_hwif_read_reg(hwif, addr);
+
+ attr4 = HINIC3_AF4_CLEAR(attr4, DOORBELL_CTRL);
+ attr4 |= HINIC3_AF4_SET(ENABLE_DOORBELL, DOORBELL_CTRL);
+
+ hinic3_hwif_write_reg(hwif, addr, attr4);
+}
+
+void
+hinic3_disable_doorbell(struct hinic3_hwif *hwif)
+{
+ u32 addr, attr4;
+
+ addr = HINIC3_CSR_FUNC_ATTR4_ADDR;
+ attr4 = hinic3_hwif_read_reg(hwif, addr);
+
+ attr4 = HINIC3_AF4_CLEAR(attr4, DOORBELL_CTRL);
+ attr4 |= HINIC3_AF4_SET(DISABLE_DOORBELL, DOORBELL_CTRL);
+
+ hinic3_hwif_write_reg(hwif, addr, attr4);
+}
+
+/**
+ * Try to set hwif as ppf and set the type of hwif in this case.
+ *
+ * @param[in] hwif
+ * The hardware interface of a pci function device.
+ */
+static void
+set_ppf(struct hinic3_hwif *hwif)
+{
+ struct hinic3_func_attr *attr = &hwif->attr;
+ u32 addr, val, ppf_election;
+
+ addr = HINIC3_CSR_PPF_ELECTION_ADDR;
+
+ val = hinic3_hwif_read_reg(hwif, addr);
+ val = HINIC3_PPF_ELECTION_CLEAR(val, IDX);
+
+ ppf_election = HINIC3_PPF_ELECTION_SET(attr->func_global_idx, IDX);
+ val |= ppf_election;
+
+ hinic3_hwif_write_reg(hwif, addr, val);
+
+ /* Check PPF. */
+ val = hinic3_hwif_read_reg(hwif, addr);
+
+ attr->ppf_idx = HINIC3_PPF_ELECTION_GET(val, IDX);
+ if (attr->ppf_idx == attr->func_global_idx)
+ attr->func_type = TYPE_PPF;
+}
+
+/**
+ * Get the mpf index from the hwif.
+ *
+ * @param[in] hwif
+ * The hardware interface of a pci function device.
+ */
+static void
+get_mpf(struct hinic3_hwif *hwif)
+{
+ struct hinic3_func_attr *attr = &hwif->attr;
+ u32 mpf_election, addr;
+
+ addr = HINIC3_CSR_GLOBAL_MPF_ELECTION_ADDR;
+
+ mpf_election = hinic3_hwif_read_reg(hwif, addr);
+ attr->mpf_idx = HINIC3_MPF_ELECTION_GET(mpf_election, IDX);
+}
+
+/**
+ * Try to set hwif as mpf and set the mpf idx in hwif.
+ *
+ * @param[in] hwif
+ * The hardware interface of a pci function device.
+ */
+static void
+set_mpf(struct hinic3_hwif *hwif)
+{
+ struct hinic3_func_attr *attr = &hwif->attr;
+ u32 addr, val, mpf_election;
+
+ addr = HINIC3_CSR_GLOBAL_MPF_ELECTION_ADDR;
+
+ val = hinic3_hwif_read_reg(hwif, addr);
+
+ val = HINIC3_MPF_ELECTION_CLEAR(val, IDX);
+ mpf_election = HINIC3_MPF_ELECTION_SET(attr->func_global_idx, IDX);
+
+ val |= mpf_election;
+ hinic3_hwif_write_reg(hwif, addr, val);
+}
+
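+/**
+ * Get the doorbell address for the given queue type. Each type owns one
+ * 4K doorbell page starting at db_base.
+ */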
+int
+hinic3_alloc_db_addr(void *hwdev, void **db_base,
+ enum hinic3_db_type queue_type)
+{
+ struct hinic3_hwif *hwif = NULL;
+
+ if (!hwdev || !db_base)
+ return -EINVAL;
+
+ hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+ *db_base = hwif->db_base + queue_type * HINIC3_DB_PAGE_SIZE;
+
+ return 0;
+}
+
+void
+hinic3_set_msix_auto_mask_state(void *hwdev, u16 msix_idx,
+ enum hinic3_msix_auto_mask flag)
+{
+ struct hinic3_hwif *hwif = NULL;
+ u32 mask_bits;
+ u32 addr;
+
+ if (!hwdev)
+ return;
+
+ hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+
+ if (flag)
+ mask_bits = HINIC3_MSI_CLR_INDIR_SET(1, AUTO_MSK_SET);
+ else
+ mask_bits = HINIC3_MSI_CLR_INDIR_SET(1, AUTO_MSK_CLR);
+
+ mask_bits = mask_bits |
+ HINIC3_MSI_CLR_INDIR_SET(msix_idx, SIMPLE_INDIR_IDX);
+
+ addr = HINIC3_CSR_FUNC_MSI_CLR_WR_ADDR;
+ hinic3_hwif_write_reg(hwif, addr, mask_bits);
+}
+
+/**
+ * Set msix state.
+ *
+ * @param[in] hwdev
+ * The pointer to the private hardware device object.
+ * @param[in] msix_idx
+ * MSIX (Message Signaled Interrupts) index.
+ * @param[in] flag
+ * MSIX state flag, 0-enable, 1-disable.
+ */
+void
+hinic3_set_msix_state(void *hwdev, u16 msix_idx, enum hinic3_msix_state flag)
+{
+ struct hinic3_hwif *hwif = NULL;
+ u32 mask_bits;
+ u32 addr;
+ u8 int_msk = 1;
+
+ if (!hwdev)
+ return;
+
+ hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+
+ if (flag)
+ mask_bits = HINIC3_MSI_CLR_INDIR_SET(int_msk, INT_MSK_SET);
+ else
+ mask_bits = HINIC3_MSI_CLR_INDIR_SET(int_msk, INT_MSK_CLR);
+ mask_bits = mask_bits |
+ HINIC3_MSI_CLR_INDIR_SET(msix_idx, SIMPLE_INDIR_IDX);
+
+ addr = HINIC3_CSR_FUNC_MSI_CLR_WR_ADDR;
+ hinic3_hwif_write_reg(hwif, addr, mask_bits);
+}
+
+static void
+disable_all_msix(struct hinic3_hwdev *hwdev)
+{
+ u16 num_irqs = hwdev->hwif->attr.num_irqs;
+ u16 i;
+
+ for (i = 0; i < num_irqs; i++)
+ hinic3_set_msix_state(hwdev, i, HINIC3_MSIX_DISABLE);
+}
+
+/**
+ * Clear msix resend bit.
+ *
+ * @param[in] hwdev
+ * The pointer to the private hardware device object.
+ * @param[in] msix_idx
+ * MSIX (Message Signaled Interrupts) index.
+ * @param[in] clear_resend_en
+ * Clear resend enable flag, 1-clear.
+ */
+void
+hinic3_misx_intr_clear_resend_bit(void *hwdev, u16 msix_idx, u8 clear_resend_en)
+{
+ struct hinic3_hwif *hwif = NULL;
+ u32 msix_ctrl = 0, addr;
+
+ if (!hwdev)
+ return;
+
+ hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+
+ msix_ctrl = HINIC3_MSI_CLR_INDIR_SET(msix_idx, SIMPLE_INDIR_IDX) |
+ HINIC3_MSI_CLR_INDIR_SET(clear_resend_en, RESEND_TIMER_CLR);
+
+ addr = HINIC3_CSR_FUNC_MSI_CLR_WR_ADDR;
+ hinic3_hwif_write_reg(hwif, addr, msix_ctrl);
+}
+
+#ifdef HINIC3_RELEASE
+static int
+wait_until_doorbell_flush_states(struct hinic3_hwif *hwif,
+ enum hinic3_doorbell_ctrl states)
+{
+ enum hinic3_doorbell_ctrl db_ctrl;
+ u32 cnt = 0;
+
+ if (!hwif)
+ return -EINVAL;
+
+ while (cnt < HINIC3_WAIT_DOORBELL_AND_OUTBOUND_TIMEOUT) {
+ db_ctrl = hinic3_get_doorbell_ctrl_status(hwif);
+ if (db_ctrl == states)
+ return 0;
+
+ rte_delay_ms(1);
+ cnt++;
+ }
+
+ return -EFAULT;
+}
+#endif
+
+static int
+wait_until_doorbell_and_outbound_enabled(struct hinic3_hwif *hwif)
+{
+ enum hinic3_doorbell_ctrl db_ctrl;
+ enum hinic3_outbound_ctrl outbound_ctrl;
+ u32 cnt = 0;
+
+ while (cnt < HINIC3_WAIT_DOORBELL_AND_OUTBOUND_TIMEOUT) {
+ db_ctrl = hinic3_get_doorbell_ctrl_status(hwif);
+ outbound_ctrl = hinic3_get_outbound_ctrl_status(hwif);
+ if (outbound_ctrl == ENABLE_OUTBOUND &&
+ db_ctrl == ENABLE_DOORBELL)
+ return 0;
+
+ rte_delay_ms(1);
+ cnt++;
+ }
+
+ return -EFAULT;
+}
+
+static int
+hinic3_get_bar_addr(struct hinic3_hwdev *hwdev)
+{
+ struct rte_pci_device *pci_dev = hwdev->pci_dev;
+ struct hinic3_hwif *hwif = hwdev->hwif;
+ void *cfg_regs_base = NULL;
+ void *mgmt_reg_base = NULL;
+ void *db_base = NULL;
+ int cfg_bar;
+
+ cfg_bar = HINIC3_IS_VF_DEV(pci_dev) ? HINIC3_VF_PCI_CFG_REG_BAR
+ : HINIC3_PF_PCI_CFG_REG_BAR;
+
+ cfg_regs_base = pci_dev->mem_resource[cfg_bar].addr;
+ if (!HINIC3_IS_VF_DEV(pci_dev)) {
+ mgmt_reg_base =
+ pci_dev->mem_resource[HINIC3_PCI_MGMT_REG_BAR].addr;
+ if (mgmt_reg_base == NULL) {
+ PMD_DRV_LOG(ERR, "mgmt_reg_base addr is null");
+ return -EFAULT;
+ }
+ }
+ db_base = pci_dev->mem_resource[HINIC3_PCI_DB_BAR].addr;
+ if (cfg_regs_base == NULL) {
+ PMD_DRV_LOG(ERR,
+ "mem_resource addr is null, cfg_regs_base is NULL");
+ return -EFAULT;
+ } else if (db_base == NULL) {
+ PMD_DRV_LOG(ERR, "mem_resource addr is null, db_base is NULL");
+ return -EFAULT;
+ }
+
+ /* If function is VF, mgmt_regs_base will be NULL. */
+ if (!mgmt_reg_base)
+ hwif->cfg_regs_base =
+ (u8 *)cfg_regs_base + HINIC3_VF_CFG_REG_OFFSET;
+ else
+ hwif->cfg_regs_base = cfg_regs_base;
+ hwif->mgmt_regs_base = mgmt_reg_base;
+ hwif->db_base = db_base;
+ hwif->db_dwqe_len = pci_dev->mem_resource[HINIC3_PCI_DB_BAR].len;
+
+ return 0;
+}
+
+/**
+ * Initialize the hw interface.
+ *
+ * @param[in] dev
+ * The pointer to the private hardware device object.
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int
+hinic3_init_hwif(void *dev)
+{
+ struct hinic3_hwdev *hwdev = NULL;
+ struct hinic3_hwif *hwif;
+ int err;
+ u32 attr4, attr5;
+
+ hwif = rte_zmalloc("hinic_hwif", sizeof(struct hinic3_hwif),
+ RTE_CACHE_LINE_SIZE);
+ if (!hwif)
+ return -ENOMEM;
+
+ hwdev = (struct hinic3_hwdev *)dev;
+ hwdev->hwif = hwif;
+
+ err = hinic3_get_bar_addr(hwdev);
+ if (err != 0) {
+ PMD_DRV_LOG(ERR, "get bar addr fail");
+ goto hwif_ready_err;
+ }
+
+ err = wait_hwif_ready(hwdev);
+ if (err != 0) {
+ PMD_DRV_LOG(ERR, "Chip status is not ready");
+ goto hwif_ready_err;
+ }
+
+ get_hwif_attr(hwif);
+
+ err = wait_until_doorbell_and_outbound_enabled(hwif);
+ if (err != 0) {
+ attr4 = hinic3_hwif_read_reg(hwif, HINIC3_CSR_FUNC_ATTR4_ADDR);
+ attr5 = hinic3_hwif_read_reg(hwif, HINIC3_CSR_FUNC_ATTR5_ADDR);
+ PMD_DRV_LOG(ERR,
+ "Hw doorbell/outbound is disabled, attr4 0x%x "
+ "attr5 0x%x",
+ attr4, attr5);
+ goto hwif_ready_err;
+ }
+
+ if (!HINIC3_IS_VF(hwdev)) {
+ set_ppf(hwif);
+
+ if (HINIC3_IS_PPF(hwdev))
+ set_mpf(hwif);
+
+ get_mpf(hwif);
+ }
+
+ disable_all_msix(hwdev);
+ /* Disable mgmt cpu reporting any event. */
+ hinic3_set_pf_status(hwdev->hwif, HINIC3_PF_STATUS_INIT);
+
+ PMD_DRV_LOG(INFO,
+ "global_func_idx: %d, func_type: %d, host_id: %d, ppf: %d, "
+ "mpf: %d",
+ hwif->attr.func_global_idx, hwif->attr.func_type,
+ hwif->attr.pci_intf_idx, hwif->attr.ppf_idx,
+ hwif->attr.mpf_idx);
+
+ return 0;
+
+hwif_ready_err:
+ rte_free(hwdev->hwif);
+ hwdev->hwif = NULL;
+
+ return err;
+}
+
+/**
+ * Free the hw interface.
+ *
+ * @param[in] dev
+ * The pointer to the private hardware device.
+ */
+void
+hinic3_free_hwif(void *dev)
+{
+ struct hinic3_hwdev *hwdev = (struct hinic3_hwdev *)dev;
+
+ rte_free(hwdev->hwif);
+}
+
+u16
+hinic3_global_func_id(void *hwdev)
+{
+ struct hinic3_hwif *hwif = NULL;
+
+ if (!hwdev)
+ return 0;
+
+ hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+
+ return hwif->attr.func_global_idx;
+}
+
+u8
+hinic3_pf_id_of_vf(void *hwdev)
+{
+ struct hinic3_hwif *hwif = NULL;
+
+ if (!hwdev)
+ return 0;
+
+ hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+
+ return hwif->attr.port_to_port_idx;
+}
+
+u8
+hinic3_pcie_itf_id(void *hwdev)
+{
+ struct hinic3_hwif *hwif = NULL;
+
+ if (!hwdev)
+ return 0;
+
+ hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+
+ return hwif->attr.pci_intf_idx;
+}
+
+enum func_type
+hinic3_func_type(void *hwdev)
+{
+ struct hinic3_hwif *hwif = NULL;
+
+ if (!hwdev)
+ return 0;
+
+ hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+
+ return hwif->attr.func_type;
+}
+
+u16
+hinic3_glb_pf_vf_offset(void *hwdev)
+{
+ struct hinic3_hwif *hwif = NULL;
+
+ if (!hwdev)
+ return 0;
+
+ hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+
+ return hwif->attr.global_vf_id_of_pf;
+}
diff --git a/drivers/net/hinic3/base/hinic3_hwif.h b/drivers/net/hinic3/base/hinic3_hwif.h
new file mode 100644
index 0000000000..97d2ed99df
--- /dev/null
+++ b/drivers/net/hinic3/base/hinic3_hwif.h
@@ -0,0 +1,142 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Huawei Technologies Co., Ltd
+ */
+
+#ifndef _HINIC3_HWIF_H_
+#define _HINIC3_HWIF_H_
+
+#define HINIC3_WAIT_DOORBELL_AND_OUTBOUND_TIMEOUT 60000
+#define HINIC3_PCIE_LINK_DOWN 0xFFFFFFFF
+
+/* PCIe bar space. */
+#define HINIC3_VF_PCI_CFG_REG_BAR 0
+#define HINIC3_PF_PCI_CFG_REG_BAR 1
+
+#define HINIC3_PCI_INTR_REG_BAR 2
+#define HINIC3_PCI_MGMT_REG_BAR 3 /**< Only PF has mgmt bar. */
+#define HINIC3_PCI_DB_BAR 4
+
+/* Doorbell or direct wqe page size is 4K. */
+#define HINIC3_DB_PAGE_SIZE 0x00001000ULL
+#define HINIC3_DWQE_OFFSET 0x00000800ULL
+
+enum func_type { TYPE_PF, TYPE_VF, TYPE_PPF, TYPE_UNKNOWN };
+#define MSIX_RESEND_TIMER_CLEAR 1
+
+/* Message signaled interrupt status. */
+enum hinic3_msix_state { HINIC3_MSIX_ENABLE, HINIC3_MSIX_DISABLE };
+
+enum hinic3_msix_auto_mask {
+ HINIC3_CLR_MSIX_AUTO_MASK,
+ HINIC3_SET_MSIX_AUTO_MASK,
+};
+
+struct hinic3_func_attr {
+ u16 func_global_idx;
+ u8 port_to_port_idx;
+ u8 pci_intf_idx;
+ u8 vf_in_pf;
+ enum func_type func_type;
+
+ u8 mpf_idx;
+
+ u8 ppf_idx;
+
+ u16 num_irqs; /**< Max: 2 ^ 15. */
+ u8 num_aeqs; /**< Max: 2 ^ 3. */
+ u8 num_ceqs; /**< Max: 2 ^ 7. */
+
+ u16 num_queue; /**< Max: 2 ^ 8. */
+ u8 num_dma_attr; /**< Max: 2 ^ 6. */
+ u8 msix_flex_en;
+
+ u16 global_vf_id_of_pf;
+};
+
+/* Structure for hardware interface. */
+struct hinic3_hwif {
+ /* Configure virtual address, PF is bar1, VF is bar0/1. */
+ u8 *cfg_regs_base;
+ /* PF bar3 virtual address; NULL if the function is a VF. */
+ u8 *mgmt_regs_base;
+ u8 *db_base;
+ u64 db_dwqe_len;
+
+ struct hinic3_func_attr attr;
+
+ void *pdev;
+};
+
+enum hinic3_outbound_ctrl { ENABLE_OUTBOUND = 0x0, DISABLE_OUTBOUND = 0x1 };
+
+enum hinic3_doorbell_ctrl { ENABLE_DOORBELL = 0x0, DISABLE_DOORBELL = 0x1 };
+
+enum hinic3_pf_status {
+ HINIC3_PF_STATUS_INIT = 0x0,
+ HINIC3_PF_STATUS_ACTIVE_FLAG = 0x11,
+ HINIC3_PF_STATUS_FLR_START_FLAG = 0x12,
+ HINIC3_PF_STATUS_FLR_FINISH_FLAG = 0x13
+};
+
+/* Type of doorbell. */
+enum hinic3_db_type {
+ HINIC3_DB_TYPE_CMDQ = 0x0,
+ HINIC3_DB_TYPE_SQ = 0x1,
+ HINIC3_DB_TYPE_RQ = 0x2,
+ HINIC3_DB_TYPE_MAX = 0x3
+};
+
+/* Get the attributes of the hardware interface. */
+#define HINIC3_HWIF_NUM_AEQS(hwif) ((hwif)->attr.num_aeqs)
+#define HINIC3_HWIF_NUM_IRQS(hwif) ((hwif)->attr.num_irqs)
+#define HINIC3_HWIF_GLOBAL_IDX(hwif) ((hwif)->attr.func_global_idx)
+#define HINIC3_HWIF_GLOBAL_VF_OFFSET(hwif) ((hwif)->attr.global_vf_id_of_pf)
+#define HINIC3_HWIF_PPF_IDX(hwif) ((hwif)->attr.ppf_idx)
+#define HINIC3_PCI_INTF_IDX(hwif) ((hwif)->attr.pci_intf_idx)
+
+/* Func type judgment. */
+#define HINIC3_FUNC_TYPE(dev) ((dev)->hwif->attr.func_type)
+#define HINIC3_IS_PF(dev) (HINIC3_FUNC_TYPE(dev) == TYPE_PF)
+#define HINIC3_IS_VF(dev) (HINIC3_FUNC_TYPE(dev) == TYPE_VF)
+#define HINIC3_IS_PPF(dev) (HINIC3_FUNC_TYPE(dev) == TYPE_PPF)
+
+u32 hinic3_hwif_read_reg(struct hinic3_hwif *hwif, u32 reg);
+
+void hinic3_hwif_write_reg(struct hinic3_hwif *hwif, u32 reg, u32 val);
+
+void hinic3_set_msix_auto_mask_state(void *hwdev, u16 msix_idx,
+ enum hinic3_msix_auto_mask flag);
+
+void hinic3_set_msix_state(void *hwdev, u16 msix_idx,
+ enum hinic3_msix_state flag);
+
+void hinic3_misx_intr_clear_resend_bit(void *hwdev, u16 msix_idx,
+ u8 clear_resend_en);
+
+u16 hinic3_global_func_id(void *hwdev);
+
+u8 hinic3_pf_id_of_vf(void *hwdev);
+
+u8 hinic3_pcie_itf_id(void *hwdev);
+
+enum func_type hinic3_func_type(void *hwdev);
+
+u16 hinic3_glb_pf_vf_offset(void *hwdev);
+void hinic3_update_msix_info(struct hinic3_hwif *hwif);
+void hinic3_set_pf_status(struct hinic3_hwif *hwif,
+ enum hinic3_pf_status status);
+
+enum hinic3_pf_status hinic3_get_pf_status(struct hinic3_hwif *hwif);
+
+int hinic3_alloc_db_addr(void *hwdev, void **db_base,
+ enum hinic3_db_type queue_type);
+
+void hinic3_disable_doorbell(struct hinic3_hwif *hwif);
+
+void hinic3_enable_doorbell(struct hinic3_hwif *hwif);
+
+int hinic3_init_hwif(void *dev);
+
+void hinic3_free_hwif(void *dev);
+
+#endif /* _HINIC3_HWIF_H_ */
--
2.47.0.windows.2
* [RFC 04/18] net/hinic3: add support for cmdq mechanism
2025-04-18 9:05 [RFC 00/18] add hinic3 PMD driver Feifei Wang
` (2 preceding siblings ...)
2025-04-18 9:05 ` [RFC 03/18] net/hinic3: add hardware interfaces of BAR operation Feifei Wang
@ 2025-04-18 9:05 ` Feifei Wang
2025-04-18 9:05 ` [RFC 05/18] net/hinic3: add NIC event module Feifei Wang
` (16 subsequent siblings)
20 siblings, 0 replies; 30+ messages in thread
From: Feifei Wang @ 2025-04-18 9:05 UTC (permalink / raw)
To: dev; +Cc: Xin Wang, Feifei Wang, Yi Chen
From: Xin Wang <wangxin679@h-partners.com>
Microcode is named ucode in SP series NICs. Its main responsibility is
data transmission and reception, but it can also handle some
administration work through the cmdq mechanism. This patch introduces
the data structures, initialization, interfaces, and command sending
functions of cmdq.
Signed-off-by: Xin Wang <wangxin679@h-partners.com>
Reviewed-by: Feifei Wang <wangfeifei40@huawei.com>
Reviewed-by: Yi Chen <chenyi221@huawei.com>
---
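A minimal caller sketch for the command buffer helpers added in this patch
(hypothetical usage; the actual send routines appear further down in
hinic3_cmdq.c):

	struct hinic3_cmd_buf *buf = hinic3_alloc_cmd_buf(hwdev);

	if (buf == NULL)
		return -ENOMEM;

	/* Fill buf->buf with the command payload, set buf->size, and
	 * hand the buffer to a cmdq send routine.
	 */
	hinic3_free_cmd_buf(buf);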
drivers/net/hinic3/base/hinic3_cmdq.c | 975 ++++++++++++++++++++++++++
drivers/net/hinic3/base/hinic3_cmdq.h | 230 ++++++
2 files changed, 1205 insertions(+)
create mode 100644 drivers/net/hinic3/base/hinic3_cmdq.c
create mode 100644 drivers/net/hinic3/base/hinic3_cmdq.h
diff --git a/drivers/net/hinic3/base/hinic3_cmdq.c b/drivers/net/hinic3/base/hinic3_cmdq.c
new file mode 100644
index 0000000000..fcb3816469
--- /dev/null
+++ b/drivers/net/hinic3/base/hinic3_cmdq.c
@@ -0,0 +1,975 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Huawei Technologies Co., Ltd
+ */
+
+#include <rte_mbuf.h>
+
+#include "hinic3_compat.h"
+#include "hinic3_cmd.h"
+#include "hinic3_cmdq.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_hwif.h"
+#include "hinic3_mgmt.h"
+#include "hinic3_wq.h"
+
+#define CMDQ_CMD_TIMEOUT 5000 /**< Milliseconds. */
+
+#define UPPER_8_BITS(data) (((data) >> 8) & 0xFF)
+#define LOWER_8_BITS(data) ((data) & 0xFF)
+
+#define CMDQ_DB_INFO_HI_PROD_IDX_SHIFT 0
+#define CMDQ_DB_INFO_HI_PROD_IDX_MASK 0xFFU
+
+#define CMDQ_DB_INFO_SET(val, member) \
+ ((((u32)(val)) & CMDQ_DB_INFO_##member##_MASK) \
+ << CMDQ_DB_INFO_##member##_SHIFT)
+#define CMDQ_DB_INFO_UPPER_32(val) ((u64)(val) << 32)
+
+#define CMDQ_DB_HEAD_QUEUE_TYPE_SHIFT 23
+#define CMDQ_DB_HEAD_CMDQ_TYPE_SHIFT 24
+#define CMDQ_DB_HEAD_SRC_TYPE_SHIFT 27
+#define CMDQ_DB_HEAD_QUEUE_TYPE_MASK 0x1U
+#define CMDQ_DB_HEAD_CMDQ_TYPE_MASK 0x7U
+#define CMDQ_DB_HEAD_SRC_TYPE_MASK 0x1FU
+#define CMDQ_DB_HEAD_SET(val, member) \
+ ((((u32)(val)) & CMDQ_DB_HEAD_##member##_MASK) \
+ << CMDQ_DB_HEAD_##member##_SHIFT)
+
+#define CMDQ_CTRL_PI_SHIFT 0
+#define CMDQ_CTRL_CMD_SHIFT 16
+#define CMDQ_CTRL_MOD_SHIFT 24
+#define CMDQ_CTRL_ACK_TYPE_SHIFT 29
+#define CMDQ_CTRL_HW_BUSY_BIT_SHIFT 31
+
+#define CMDQ_CTRL_PI_MASK 0xFFFFU
+#define CMDQ_CTRL_CMD_MASK 0xFFU
+#define CMDQ_CTRL_MOD_MASK 0x1FU
+#define CMDQ_CTRL_ACK_TYPE_MASK 0x3U
+#define CMDQ_CTRL_HW_BUSY_BIT_MASK 0x1U
+
+#define CMDQ_CTRL_SET(val, member) \
+ (((u32)(val) & CMDQ_CTRL_##member##_MASK) << CMDQ_CTRL_##member##_SHIFT)
+
+#define CMDQ_CTRL_GET(val, member) \
+ (((val) >> CMDQ_CTRL_##member##_SHIFT) & CMDQ_CTRL_##member##_MASK)
+
+#define CMDQ_WQE_HEADER_BUFDESC_LEN_SHIFT 0
+#define CMDQ_WQE_HEADER_COMPLETE_FMT_SHIFT 15
+#define CMDQ_WQE_HEADER_DATA_FMT_SHIFT 22
+#define CMDQ_WQE_HEADER_COMPLETE_REQ_SHIFT 23
+#define CMDQ_WQE_HEADER_COMPLETE_SECT_LEN_SHIFT 27
+#define CMDQ_WQE_HEADER_CTRL_LEN_SHIFT 29
+#define CMDQ_WQE_HEADER_HW_BUSY_BIT_SHIFT 31
+
+#define CMDQ_WQE_HEADER_BUFDESC_LEN_MASK 0xFFU
+#define CMDQ_WQE_HEADER_COMPLETE_FMT_MASK 0x1U
+#define CMDQ_WQE_HEADER_DATA_FMT_MASK 0x1U
+#define CMDQ_WQE_HEADER_COMPLETE_REQ_MASK 0x1U
+#define CMDQ_WQE_HEADER_COMPLETE_SECT_LEN_MASK 0x3U
+#define CMDQ_WQE_HEADER_CTRL_LEN_MASK 0x3U
+#define CMDQ_WQE_HEADER_HW_BUSY_BIT_MASK 0x1U
+
+#define CMDQ_WQE_HEADER_SET(val, member) \
+ (((u32)(val) & CMDQ_WQE_HEADER_##member##_MASK) \
+ << CMDQ_WQE_HEADER_##member##_SHIFT)
+
+#define CMDQ_WQE_HEADER_GET(val, member) \
+ (((val) >> CMDQ_WQE_HEADER_##member##_SHIFT) & \
+ CMDQ_WQE_HEADER_##member##_MASK)
+
+#define CMDQ_CTXT_CURR_WQE_PAGE_PFN_SHIFT 0
+#define CMDQ_CTXT_EQ_ID_SHIFT 53
+#define CMDQ_CTXT_CEQ_ARM_SHIFT 61
+#define CMDQ_CTXT_CEQ_EN_SHIFT 62
+#define CMDQ_CTXT_HW_BUSY_BIT_SHIFT 63
+
+#define CMDQ_CTXT_CURR_WQE_PAGE_PFN_MASK 0xFFFFFFFFFFFFF
+#define CMDQ_CTXT_EQ_ID_MASK 0xFF
+#define CMDQ_CTXT_CEQ_ARM_MASK 0x1
+#define CMDQ_CTXT_CEQ_EN_MASK 0x1
+#define CMDQ_CTXT_HW_BUSY_BIT_MASK 0x1
+
+#define CMDQ_CTXT_PAGE_INFO_SET(val, member) \
+ (((u64)(val) & CMDQ_CTXT_##member##_MASK) << CMDQ_CTXT_##member##_SHIFT)
+
+#define CMDQ_CTXT_WQ_BLOCK_PFN_SHIFT 0
+#define CMDQ_CTXT_CI_SHIFT 52
+
+#define CMDQ_CTXT_WQ_BLOCK_PFN_MASK 0xFFFFFFFFFFFFF
+#define CMDQ_CTXT_CI_MASK 0xFFF
+
+#define CMDQ_CTXT_BLOCK_INFO_SET(val, member) \
+ (((u64)(val) & CMDQ_CTXT_##member##_MASK) << CMDQ_CTXT_##member##_SHIFT)
+
+#define SAVED_DATA_ARM_SHIFT 31
+
+#define SAVED_DATA_ARM_MASK 0x1U
+
+#define SAVED_DATA_SET(val, member) \
+ (((val) & SAVED_DATA_##member##_MASK) << SAVED_DATA_##member##_SHIFT)
+
+#define SAVED_DATA_CLEAR(val, member) \
+ ((val) & (~(SAVED_DATA_##member##_MASK << SAVED_DATA_##member##_SHIFT)))
+
+#define WQE_ERRCODE_VAL_SHIFT 0
+
+#define WQE_ERRCODE_VAL_MASK 0x7FFFFFFF
+
+#define WQE_ERRCODE_GET(val, member) \
+ (((val) >> WQE_ERRCODE_##member##_SHIFT) & WQE_ERRCODE_##member##_MASK)
+
+#define WQE_COMPLETED(ctrl_info) CMDQ_CTRL_GET(ctrl_info, HW_BUSY_BIT)
+
+#define WQE_HEADER(wqe) ((struct hinic3_cmdq_header *)(wqe))
+
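+/* Doorbell offset: the low 8 bits of the producer index select one of
+ * 256 8-byte doorbell slots.
+ */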
+#define CMDQ_DB_PI_OFF(pi) (((u16)LOWER_8_BITS(pi)) << 3)
+
+#define CMDQ_DB_ADDR(db_base, pi) (((u8 *)(db_base)) + CMDQ_DB_PI_OFF(pi))
+
+#define CMDQ_PFN(addr, page_size) ((addr) >> (ilog2(page_size)))
+
+#define FIRST_DATA_TO_WRITE_LAST sizeof(u64)
+
+#define WQE_LCMD_SIZE 64
+#define WQE_SCMD_SIZE 64
+
+#define COMPLETE_LEN 3
+
+#define CMDQ_WQEBB_SIZE 64
+#define CMDQ_WQEBB_SHIFT 6
+
+#define CMDQ_WQE_SIZE 64
+
+#define HINIC3_CMDQ_WQ_BUF_SIZE 4096
+
+#define WQE_NUM_WQEBBS(wqe_size, wq) \
+ ({ \
+ typeof(wq) __wq = (wq); \
+ (u16)(RTE_ALIGN((u32)(wqe_size), __wq->wqebb_size) / \
+ __wq->wqebb_size); \
+ })
+
+#define cmdq_to_cmdqs(cmdq) \
+ ({ \
+ typeof(cmdq) __cmdq = (cmdq); \
+ container_of(__cmdq - __cmdq->cmdq_type, struct hinic3_cmdqs, \
+ __cmdq[0]); \
+ })
+
+#define WAIT_CMDQ_ENABLE_TIMEOUT 300
+
+static int hinic3_cmdq_poll_msg(struct hinic3_cmdq *cmdq, u32 timeout);
+
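+/* The cmdq is idle when all WQEBBs have been returned to the work queue. */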
+bool
+hinic3_cmdq_idle(struct hinic3_cmdq *cmdq)
+{
+ struct hinic3_wq *wq = cmdq->wq;
+
+ return rte_atomic_load_explicit(&wq->delta, rte_memory_order_seq_cst) ==
+ wq->q_depth;
+}
+
+struct hinic3_cmd_buf *
+hinic3_alloc_cmd_buf(void *hwdev)
+{
+ struct hinic3_cmdqs *cmdqs = ((struct hinic3_hwdev *)hwdev)->cmdqs;
+ struct hinic3_cmd_buf *cmd_buf;
+
+ cmd_buf = rte_zmalloc(NULL, sizeof(*cmd_buf), 0);
+ if (!cmd_buf) {
+ PMD_DRV_LOG(ERR, "Allocate cmd buffer failed");
+ return NULL;
+ }
+
+ cmd_buf->mbuf = rte_pktmbuf_alloc(cmdqs->cmd_buf_pool);
+ if (!cmd_buf->mbuf) {
+ PMD_DRV_LOG(ERR, "Allocate cmd from the pool failed");
+ goto alloc_pci_buf_err;
+ }
+
+ cmd_buf->dma_addr = rte_mbuf_data_iova(cmd_buf->mbuf);
+ cmd_buf->buf = rte_pktmbuf_mtod(cmd_buf->mbuf, void *);
+
+ return cmd_buf;
+
+alloc_pci_buf_err:
+ rte_free(cmd_buf);
+ return NULL;
+}
+
+void
+hinic3_free_cmd_buf(struct hinic3_cmd_buf *cmd_buf)
+{
+ rte_pktmbuf_free(cmd_buf->mbuf);
+
+ rte_free(cmd_buf);
+}
+
+static u32
+cmdq_wqe_size(enum cmdq_wqe_type wqe_type)
+{
+ u32 wqe_size = 0;
+
+ switch (wqe_type) {
+ case WQE_LCMD_TYPE:
+ wqe_size = WQE_LCMD_SIZE;
+ break;
+ case WQE_SCMD_TYPE:
+ wqe_size = WQE_SCMD_SIZE;
+ break;
+ default:
+ break;
+ }
+
+ return wqe_size;
+}
+
+static int
+cmdq_get_wqe_size(enum bufdesc_len len)
+{
+ int wqe_size = 0;
+
+ switch (len) {
+ case BUFDESC_LCMD_LEN:
+ wqe_size = WQE_LCMD_SIZE;
+ break;
+ case BUFDESC_SCMD_LEN:
+ wqe_size = WQE_SCMD_SIZE;
+ break;
+ default:
+ break;
+ }
+
+ return wqe_size;
+}
+
+static void
+cmdq_set_completion(struct hinic3_cmdq_completion *complete,
+ struct hinic3_cmd_buf *buf_out)
+{
+ struct hinic3_sge_resp *sge_resp = &complete->sge_resp;
+
+ hinic3_set_sge(&sge_resp->sge, buf_out->dma_addr, HINIC3_CMDQ_BUF_SIZE);
+}
+
+static void
+cmdq_set_lcmd_bufdesc(struct hinic3_cmdq_wqe_lcmd *wqe,
+ struct hinic3_cmd_buf *buf_in)
+{
+ hinic3_set_sge(&wqe->buf_desc.sge, buf_in->dma_addr, buf_in->size);
+}
+
+static void
+cmdq_set_db(struct hinic3_cmdq *cmdq, enum hinic3_cmdq_type cmdq_type,
+ u16 prod_idx)
+{
+ u64 db = 0;
+
+	/* Hardware will do the endianness conversion. */
+ db = CMDQ_DB_INFO_SET(UPPER_8_BITS(prod_idx), HI_PROD_IDX);
+ db = CMDQ_DB_INFO_UPPER_32(db) |
+ CMDQ_DB_HEAD_SET(HINIC3_DB_CMDQ_TYPE, QUEUE_TYPE) |
+ CMDQ_DB_HEAD_SET(cmdq_type, CMDQ_TYPE) |
+ CMDQ_DB_HEAD_SET(HINIC3_DB_SRC_CMDQ_TYPE, SRC_TYPE);
+
+ rte_wmb(); /**< Write all before the doorbell. */
+
+ rte_write64(db, CMDQ_DB_ADDR(cmdq->db_base, prod_idx));
+}
+
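+/*
+ * Copy a prepared wqe into the wq. The first 8 bytes hold the wqe header
+ * with the hardware busy bit, so they are written last to publish the wqe
+ * only after the rest of its contents are visible.
+ */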
+static void
+cmdq_wqe_fill(void *dst, void *src)
+{
+ memcpy((void *)((u8 *)dst + FIRST_DATA_TO_WRITE_LAST),
+ (void *)((u8 *)src + FIRST_DATA_TO_WRITE_LAST),
+ CMDQ_WQE_SIZE - FIRST_DATA_TO_WRITE_LAST);
+
+ rte_wmb(); /**< The first 8 bytes should be written last. */
+
+ *(u64 *)dst = *(u64 *)src;
+}
+
+static void
+cmdq_prepare_wqe_ctrl(struct hinic3_cmdq_wqe *wqe, int wrapped,
+ enum hinic3_mod_type mod, u8 cmd, u16 prod_idx,
+ enum completion_format complete_format,
+ enum data_format local_data_format,
+ enum bufdesc_len buf_len)
+{
+ struct hinic3_ctrl *ctrl = NULL;
+ enum ctrl_sect_len ctrl_len;
+ struct hinic3_cmdq_wqe_lcmd *wqe_lcmd = NULL;
+ struct hinic3_cmdq_wqe_scmd *wqe_scmd = NULL;
+ u32 saved_data = WQE_HEADER(wqe)->saved_data;
+
+ if (local_data_format == DATA_SGE) {
+ wqe_lcmd = &wqe->wqe_lcmd;
+
+ wqe_lcmd->status.status_info = 0;
+ ctrl = &wqe_lcmd->ctrl;
+ ctrl_len = CTRL_SECT_LEN;
+ } else {
+ wqe_scmd = &wqe->inline_wqe.wqe_scmd;
+
+ wqe_scmd->status.status_info = 0;
+ ctrl = &wqe_scmd->ctrl;
+ ctrl_len = CTRL_DIRECT_SECT_LEN;
+ }
+
+ ctrl->ctrl_info = CMDQ_CTRL_SET(prod_idx, PI) |
+ CMDQ_CTRL_SET(cmd, CMD) | CMDQ_CTRL_SET(mod, MOD) |
+ CMDQ_CTRL_SET(HINIC3_ACK_TYPE_CMDQ, ACK_TYPE);
+
+ WQE_HEADER(wqe)->header_info =
+ CMDQ_WQE_HEADER_SET(buf_len, BUFDESC_LEN) |
+ CMDQ_WQE_HEADER_SET(complete_format, COMPLETE_FMT) |
+ CMDQ_WQE_HEADER_SET(local_data_format, DATA_FMT) |
+ CMDQ_WQE_HEADER_SET(CEQ_SET, COMPLETE_REQ) |
+ CMDQ_WQE_HEADER_SET(COMPLETE_LEN, COMPLETE_SECT_LEN) |
+ CMDQ_WQE_HEADER_SET(ctrl_len, CTRL_LEN) |
+ CMDQ_WQE_HEADER_SET((u32)wrapped, HW_BUSY_BIT);
+
+ saved_data &= SAVED_DATA_CLEAR(saved_data, ARM);
+ if (cmd == CMDQ_SET_ARM_CMD && mod == HINIC3_MOD_COMM)
+ WQE_HEADER(wqe)->saved_data = saved_data |
+ SAVED_DATA_SET(1, ARM);
+ else
+ WQE_HEADER(wqe)->saved_data = saved_data;
+}
+
+static void
+cmdq_set_lcmd_wqe(struct hinic3_cmdq_wqe *wqe, enum cmdq_cmd_type cmd_type,
+ struct hinic3_cmd_buf *buf_in, struct hinic3_cmd_buf *buf_out,
+ int wrapped, enum hinic3_mod_type mod, u8 cmd, u16 prod_idx)
+{
+ struct hinic3_cmdq_wqe_lcmd *wqe_lcmd = &wqe->wqe_lcmd;
+ enum completion_format complete_format = COMPLETE_DIRECT;
+
+ switch (cmd_type) {
+ case SYNC_CMD_DIRECT_RESP:
+ complete_format = COMPLETE_DIRECT;
+ wqe_lcmd->completion.direct_resp = 0;
+ break;
+ case SYNC_CMD_SGE_RESP:
+ if (buf_out) {
+ complete_format = COMPLETE_SGE;
+ cmdq_set_completion(&wqe_lcmd->completion, buf_out);
+ }
+ break;
+ case ASYNC_CMD:
+ complete_format = COMPLETE_DIRECT;
+ wqe_lcmd->completion.direct_resp = 0;
+ wqe_lcmd->buf_desc.saved_async_buf = (u64)(buf_in);
+ break;
+ default:
+ break;
+ }
+
+ cmdq_prepare_wqe_ctrl(wqe, wrapped, mod, cmd, prod_idx, complete_format,
+ DATA_SGE, BUFDESC_LCMD_LEN);
+
+ cmdq_set_lcmd_bufdesc(wqe_lcmd, buf_in);
+}
+
+/**
+ * Prepare the necessary context for the command queue and send a synchronous
+ * command with a direct response to the hardware. Completion is awaited by
+ * polling the command queue for a response.
+ *
+ * @param[in] cmdq
+ * The command queue object that represents the queue to send the command to.
+ * @param[in] mod
+ * The module type that the command belongs to.
+ * @param[in] cmd
+ * The command to be executed.
+ * @param[in] buf_in
+ * The input buffer containing the command parameters.
+ * @param[out] out_param
+ * A pointer to the location where the response data will be stored, if
+ * available.
+ * @param[in] timeout
+ * The timeout value (ms) to wait for the command completion. If zero, a default
+ * timeout will be used.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ * - -EBUSY: The command queue is busy.
+ * - -ETIMEDOUT: The command did not complete within the specified timeout.
+ */
+static int
+cmdq_sync_cmd_direct_resp(struct hinic3_cmdq *cmdq, enum hinic3_mod_type mod,
+ u8 cmd, struct hinic3_cmd_buf *buf_in, u64 *out_param,
+ u32 timeout)
+{
+ struct hinic3_wq *wq = cmdq->wq;
+ struct hinic3_cmdq_wqe wqe;
+ struct hinic3_cmdq_wqe *curr_wqe = NULL;
+ struct hinic3_cmdq_wqe_lcmd *wqe_lcmd = NULL;
+ u16 curr_prod_idx, next_prod_idx, num_wqebbs;
+ int wrapped;
+ u32 timeo, wqe_size;
+ int err;
+
+ wqe_size = cmdq_wqe_size(WQE_LCMD_TYPE);
+ num_wqebbs = WQE_NUM_WQEBBS(wqe_size, wq);
+
+ /* Keep wrapped and doorbell index correct. */
+ rte_spinlock_lock(&cmdq->cmdq_lock);
+
+ curr_wqe = hinic3_get_wqe(cmdq->wq, num_wqebbs, &curr_prod_idx);
+ if (!curr_wqe) {
+ err = -EBUSY;
+ goto cmdq_unlock;
+ }
+
+ memset(&wqe, 0, sizeof(wqe));
+ wrapped = cmdq->wrapped;
+
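+	/*
+	 * The producer index wraps at q_depth; flipping 'wrapped' lets the
+	 * hardware tell fresh wqes from stale ones via the wqe header busy bit.
+	 */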
+ next_prod_idx = curr_prod_idx + num_wqebbs;
+ if (next_prod_idx >= wq->q_depth) {
+ cmdq->wrapped = !cmdq->wrapped;
+ next_prod_idx -= wq->q_depth;
+ }
+
+ cmdq_set_lcmd_wqe(&wqe, SYNC_CMD_DIRECT_RESP, buf_in, NULL, wrapped,
+ mod, cmd, curr_prod_idx);
+
+ /* The data that is written to HW should be in Big Endian Format. */
+ hinic3_hw_be32_len(&wqe, (int)wqe_size);
+
+	/* Cmdq wqe is not shadowed; copy the prepared wqe into the wq. */
+ cmdq_wqe_fill(curr_wqe, &wqe);
+
+ cmdq->cmd_infos[curr_prod_idx].cmd_type = HINIC3_CMD_TYPE_DIRECT_RESP;
+
+ cmdq_set_db(cmdq, HINIC3_CMDQ_SYNC, next_prod_idx);
+
+ timeo = msecs_to_jiffies(timeout ? timeout : CMDQ_CMD_TIMEOUT);
+ err = hinic3_cmdq_poll_msg(cmdq, timeo);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Cmdq poll msg ack failed, prod idx: 0x%x",
+ curr_prod_idx);
+ err = -ETIMEDOUT;
+ goto cmdq_unlock;
+ }
+
+ rte_smp_rmb(); /**< Read error code after completion. */
+
+ if (out_param) {
+ wqe_lcmd = &curr_wqe->wqe_lcmd;
+ *out_param = cpu_to_be64(wqe_lcmd->completion.direct_resp);
+ }
+
+ if (cmdq->errcode[curr_prod_idx])
+ err = cmdq->errcode[curr_prod_idx];
+
+cmdq_unlock:
+ rte_spinlock_unlock(&cmdq->cmdq_lock);
+
+ return err;
+}
+
+/**
+ * Send a synchronous command with detailed response and wait for the
+ * completion.
+ *
+ * @param[in] cmdq
+ * The command queue object representing the queue to send the command to.
+ * @param[in] mod
+ * The module type that the command belongs to.
+ * @param[in] cmd
+ * The command to be executed.
+ * @param[in] buf_in
+ * The input buffer containing the parameters for the command.
+ * @param[out] buf_out
+ * The output buffer where the detailed response from the hardware will be
+ * stored.
+ * @param[in] timeout
+ * The timeout value (ms) to wait for the command completion. If zero, a default
+ * timeout will be used.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ * - -EBUSY: The command queue is busy.
+ * - -ETIMEDOUT: The command did not complete within the specified timeout.
+ */
+static int
+cmdq_sync_cmd_detail_resp(struct hinic3_cmdq *cmdq, enum hinic3_mod_type mod,
+ u8 cmd, struct hinic3_cmd_buf *buf_in,
+ struct hinic3_cmd_buf *buf_out, u32 timeout)
+{
+ struct hinic3_wq *wq = cmdq->wq;
+ struct hinic3_cmdq_wqe wqe;
+ struct hinic3_cmdq_wqe *curr_wqe = NULL;
+ u16 curr_prod_idx, next_prod_idx, num_wqebbs;
+ int wrapped;
+ u32 timeo, wqe_size;
+ int err;
+
+ wqe_size = cmdq_wqe_size(WQE_LCMD_TYPE);
+ num_wqebbs = WQE_NUM_WQEBBS(wqe_size, wq);
+
+ /* Keep wrapped and doorbell index correct. */
+ rte_spinlock_lock(&cmdq->cmdq_lock);
+
+ curr_wqe = hinic3_get_wqe(cmdq->wq, num_wqebbs, &curr_prod_idx);
+ if (!curr_wqe) {
+ err = -EBUSY;
+ goto cmdq_unlock;
+ }
+
+ memset(&wqe, 0, sizeof(wqe));
+ wrapped = cmdq->wrapped;
+
+ next_prod_idx = curr_prod_idx + num_wqebbs;
+ if (next_prod_idx >= wq->q_depth) {
+ cmdq->wrapped = !cmdq->wrapped;
+ next_prod_idx -= wq->q_depth;
+ }
+
+ cmdq_set_lcmd_wqe(&wqe, SYNC_CMD_SGE_RESP, buf_in, buf_out, wrapped,
+ mod, cmd, curr_prod_idx);
+
+ /* The data that is written to HW should be in Big Endian Format. */
+ hinic3_hw_be32_len(&wqe, (int)wqe_size);
+
+	/* Cmdq wqe is not shadowed; copy the prepared wqe into the wq. */
+ cmdq_wqe_fill(curr_wqe, &wqe);
+
+ cmdq->cmd_infos[curr_prod_idx].cmd_type = HINIC3_CMD_TYPE_SGE_RESP;
+
+ cmdq_set_db(cmdq, cmdq->cmdq_type, next_prod_idx);
+
+ timeo = msecs_to_jiffies(timeout ? timeout : CMDQ_CMD_TIMEOUT);
+ err = hinic3_cmdq_poll_msg(cmdq, timeo);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Cmdq poll msg ack failed, prod idx: 0x%x",
+ curr_prod_idx);
+ err = -ETIMEDOUT;
+ goto cmdq_unlock;
+ }
+
+ rte_smp_rmb(); /**< Read error code after completion. */
+
+ if (cmdq->errcode[curr_prod_idx])
+ err = cmdq->errcode[curr_prod_idx];
+
+cmdq_unlock:
+ rte_spinlock_unlock(&cmdq->cmdq_lock);
+
+ return err;
+}
+
+static int
+cmdq_params_valid(void *hwdev, struct hinic3_cmd_buf *buf_in)
+{
+ if (!buf_in || !hwdev) {
+ PMD_DRV_LOG(ERR, "Invalid CMDQ buffer or hwdev is NULL");
+ return -EINVAL;
+ }
+
+ if (buf_in->size == 0 || buf_in->size > HINIC3_CMDQ_BUF_SIZE) {
+ PMD_DRV_LOG(ERR, "Invalid CMDQ buffer size: 0x%x",
+ buf_in->size);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int
+wait_cmdqs_enable(struct hinic3_cmdqs *cmdqs)
+{
+ unsigned long end;
+
+ end = jiffies + msecs_to_jiffies(WAIT_CMDQ_ENABLE_TIMEOUT);
+ do {
+ if (cmdqs->status & HINIC3_CMDQ_ENABLE)
+ return 0;
+ } while (time_before(jiffies, end));
+
+ return -EBUSY;
+}
+
+int
+hinic3_cmdq_direct_resp(void *hwdev, enum hinic3_mod_type mod, u8 cmd,
+ struct hinic3_cmd_buf *buf_in, u64 *out_param,
+ u32 timeout)
+{
+ struct hinic3_cmdqs *cmdqs = ((struct hinic3_hwdev *)hwdev)->cmdqs;
+ int err;
+
+ err = cmdq_params_valid(hwdev, buf_in);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Invalid cmdq parameters");
+ return err;
+ }
+
+ err = wait_cmdqs_enable(cmdqs);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Cmdq is disabled");
+ return err;
+ }
+
+ return cmdq_sync_cmd_direct_resp(&cmdqs->cmdq[HINIC3_CMDQ_SYNC], mod,
+ cmd, buf_in, out_param, timeout);
+}
+
+int
+hinic3_cmdq_detail_resp(void *hwdev, enum hinic3_mod_type mod, u8 cmd,
+ struct hinic3_cmd_buf *buf_in,
+ struct hinic3_cmd_buf *buf_out, u32 timeout)
+{
+ struct hinic3_cmdqs *cmdqs = ((struct hinic3_hwdev *)hwdev)->cmdqs;
+ int err;
+
+ err = cmdq_params_valid(hwdev, buf_in);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Invalid cmdq parameters");
+ return err;
+ }
+
+ err = wait_cmdqs_enable(cmdqs);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Cmdq is disabled");
+ return err;
+ }
+
+ return cmdq_sync_cmd_detail_resp(&cmdqs->cmdq[HINIC3_CMDQ_SYNC], mod,
+ cmd, buf_in, buf_out, timeout);
+}
+
+static void
+cmdq_update_errcode(struct hinic3_cmdq *cmdq, u16 prod_idx, int errcode)
+{
+ cmdq->errcode[prod_idx] = errcode;
+}
+
+static void
+clear_wqe_complete_bit(struct hinic3_cmdq *cmdq, struct hinic3_cmdq_wqe *wqe)
+{
+ struct hinic3_ctrl *ctrl = NULL;
+ u32 header_info = hinic3_hw_cpu32(WQE_HEADER(wqe)->header_info);
+ int buf_len = CMDQ_WQE_HEADER_GET(header_info, BUFDESC_LEN);
+ int wqe_size = cmdq_get_wqe_size(buf_len);
+ u16 num_wqebbs;
+
+ if (wqe_size == WQE_LCMD_SIZE)
+ ctrl = &wqe->wqe_lcmd.ctrl;
+ else
+ ctrl = &wqe->inline_wqe.wqe_scmd.ctrl;
+
+ /* Clear HW busy bit. */
+ ctrl->ctrl_info = 0;
+
+	rte_wmb(); /**< Ensure the wqe is cleared before releasing wqebbs. */
+
+ num_wqebbs = WQE_NUM_WQEBBS(wqe_size, cmdq->wq);
+ hinic3_put_wqe(cmdq->wq, num_wqebbs);
+}
+
+static void
+cmdq_init_queue_ctxt(struct hinic3_cmdq *cmdq,
+ struct hinic3_cmdq_ctxt_info *ctxt_info)
+{
+ struct hinic3_wq *wq = cmdq->wq;
+ u64 wq_first_page_paddr, pfn;
+
+ u16 start_ci = (u16)(wq->cons_idx);
+
+ /* The data in the HW is in Big Endian Format. */
+ wq_first_page_paddr = wq->queue_buf_paddr;
+
+ pfn = CMDQ_PFN(wq_first_page_paddr, RTE_PGSIZE_4K);
+ ctxt_info->curr_wqe_page_pfn =
+ CMDQ_CTXT_PAGE_INFO_SET(1, HW_BUSY_BIT) |
+ CMDQ_CTXT_PAGE_INFO_SET(0, CEQ_EN) |
+ CMDQ_CTXT_PAGE_INFO_SET(0, CEQ_ARM) |
+ CMDQ_CTXT_PAGE_INFO_SET(HINIC3_CEQ_ID_CMDQ, EQ_ID) |
+ CMDQ_CTXT_PAGE_INFO_SET(pfn, CURR_WQE_PAGE_PFN);
+
+ ctxt_info->wq_block_pfn = CMDQ_CTXT_BLOCK_INFO_SET(start_ci, CI) |
+ CMDQ_CTXT_BLOCK_INFO_SET(pfn, WQ_BLOCK_PFN);
+}
+
+static int
+init_cmdq(struct hinic3_cmdq *cmdq, struct hinic3_hwdev *hwdev,
+ struct hinic3_wq *wq, enum hinic3_cmdq_type q_type)
+{
+ int err = 0;
+ size_t errcode_size;
+ size_t cmd_infos_size;
+
+ cmdq->wq = wq;
+ cmdq->cmdq_type = q_type;
+ cmdq->wrapped = 1;
+
+ rte_spinlock_init(&cmdq->cmdq_lock);
+
+ errcode_size = wq->q_depth * sizeof(*cmdq->errcode);
+ cmdq->errcode = rte_zmalloc(NULL, errcode_size, 0);
+ if (!cmdq->errcode) {
+ PMD_DRV_LOG(ERR, "Allocate errcode for cmdq failed");
+ return -ENOMEM;
+ }
+
+ cmd_infos_size = wq->q_depth * sizeof(*cmdq->cmd_infos);
+ cmdq->cmd_infos = rte_zmalloc(NULL, cmd_infos_size, 0);
+ if (!cmdq->cmd_infos) {
+ PMD_DRV_LOG(ERR, "Allocate cmd info for cmdq failed");
+ err = -ENOMEM;
+ goto cmd_infos_err;
+ }
+
+ cmdq->db_base = hwdev->cmdqs->cmdqs_db_base;
+
+ return 0;
+
+cmd_infos_err:
+ rte_free(cmdq->errcode);
+
+ return err;
+}
+
+static void
+free_cmdq(struct hinic3_cmdq *cmdq)
+{
+ rte_free(cmdq->cmd_infos);
+ rte_free(cmdq->errcode);
+}
+
+static int
+hinic3_set_cmdq_ctxts(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_cmdqs *cmdqs = hwdev->cmdqs;
+ struct hinic3_cmd_cmdq_ctxt cmdq_ctxt;
+ enum hinic3_cmdq_type cmdq_type;
+ u16 out_size = sizeof(cmdq_ctxt);
+ int err;
+
+ cmdq_type = HINIC3_CMDQ_SYNC;
+ for (; cmdq_type < HINIC3_MAX_CMDQ_TYPES; cmdq_type++) {
+ memset(&cmdq_ctxt, 0, sizeof(cmdq_ctxt));
+ memcpy((void *)&cmdq_ctxt.ctxt_info,
+ (void *)&cmdqs->cmdq[cmdq_type].cmdq_ctxt,
+ sizeof(cmdq_ctxt.ctxt_info));
+ cmdq_ctxt.func_idx = hinic3_global_func_id(hwdev);
+ cmdq_ctxt.cmdq_id = cmdq_type;
+
+ err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_COMM,
+ HINIC3_MGMT_CMD_SET_CMDQ_CTXT,
+ &cmdq_ctxt, sizeof(cmdq_ctxt),
+ &cmdq_ctxt, &out_size, 0);
+ if (err || !out_size || cmdq_ctxt.status) {
+ PMD_DRV_LOG(ERR,
+ "Set cmdq ctxt failed, err: %d, status: "
+ "0x%x, out_size: 0x%x",
+ err, cmdq_ctxt.status, out_size);
+ return -EFAULT;
+ }
+ }
+
+ cmdqs->status |= HINIC3_CMDQ_ENABLE;
+
+ return 0;
+}
+
+int
+hinic3_reinit_cmdq_ctxts(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_cmdqs *cmdqs = hwdev->cmdqs;
+ enum hinic3_cmdq_type cmdq_type = HINIC3_CMDQ_SYNC;
+
+ for (; cmdq_type < HINIC3_MAX_CMDQ_TYPES; cmdq_type++) {
+ cmdqs->cmdq[cmdq_type].wrapped = 1;
+ hinic3_wq_wqe_pg_clear(cmdqs->cmdq[cmdq_type].wq);
+ }
+
+ return hinic3_set_cmdq_ctxts(hwdev);
+}
+
+static int
+hinic3_set_cmdqs(struct hinic3_hwdev *hwdev, struct hinic3_cmdqs *cmdqs)
+{
+ void *db_base = NULL;
+ enum hinic3_cmdq_type type, cmdq_type;
+ int err;
+
+ err = hinic3_alloc_db_addr(hwdev, &db_base, HINIC3_DB_TYPE_CMDQ);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Failed to allocate doorbell address");
+ goto alloc_db_err;
+ }
+ cmdqs->cmdqs_db_base = (u8 *)db_base;
+
+ cmdq_type = HINIC3_CMDQ_SYNC;
+ for (; cmdq_type < HINIC3_MAX_CMDQ_TYPES; cmdq_type++) {
+ err = init_cmdq(&cmdqs->cmdq[cmdq_type], hwdev,
+ &cmdqs->saved_wqs[cmdq_type], cmdq_type);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Initialize cmdq failed");
+ goto init_cmdq_err;
+ }
+
+ cmdq_init_queue_ctxt(&cmdqs->cmdq[cmdq_type],
+ &cmdqs->cmdq[cmdq_type].cmdq_ctxt);
+ }
+
+ err = hinic3_set_cmdq_ctxts(hwdev);
+ if (err)
+ goto init_cmdq_err;
+
+ return 0;
+
+init_cmdq_err:
+ type = HINIC3_CMDQ_SYNC;
+ for (; type < cmdq_type; type++)
+ free_cmdq(&cmdqs->cmdq[type]);
+
+alloc_db_err:
+ hinic3_cmdq_free(cmdqs->saved_wqs, HINIC3_MAX_CMDQ_TYPES);
+	return err;
+}
+
+int
+hinic3_cmdqs_init(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_cmdqs *cmdqs = NULL;
+ size_t saved_wqs_size;
+ char cmdq_pool_name[RTE_MEMPOOL_NAMESIZE];
+ int err;
+
+ cmdqs = rte_zmalloc(NULL, sizeof(*cmdqs), 0);
+ if (!cmdqs)
+ return -ENOMEM;
+
+ hwdev->cmdqs = cmdqs;
+ cmdqs->hwdev = hwdev;
+
+ saved_wqs_size = HINIC3_MAX_CMDQ_TYPES * sizeof(struct hinic3_wq);
+ cmdqs->saved_wqs = rte_zmalloc(NULL, saved_wqs_size, 0);
+ if (!cmdqs->saved_wqs) {
+ PMD_DRV_LOG(ERR, "Allocate saved wqs failed");
+ err = -ENOMEM;
+ goto alloc_wqs_err;
+ }
+
+ memset(cmdq_pool_name, 0, RTE_MEMPOOL_NAMESIZE);
+ (void)snprintf(cmdq_pool_name, sizeof(cmdq_pool_name), "hinic3_cmdq_%u",
+ hwdev->port_id);
+
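+	/* Reserve one command buffer per wqe across all cmdq types. */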
+ cmdqs->cmd_buf_pool = rte_pktmbuf_pool_create(cmdq_pool_name,
+ HINIC3_CMDQ_DEPTH * HINIC3_MAX_CMDQ_TYPES, 0, 0,
+ HINIC3_CMDQ_BUF_SIZE, (int)rte_socket_id());
+ if (!cmdqs->cmd_buf_pool) {
+ PMD_DRV_LOG(ERR, "Create cmdq buffer pool failed");
+ err = -ENOMEM;
+ goto pool_create_err;
+ }
+
+ err = hinic3_cmdq_alloc(cmdqs->saved_wqs, hwdev, HINIC3_MAX_CMDQ_TYPES,
+ HINIC3_CMDQ_WQ_BUF_SIZE, CMDQ_WQEBB_SHIFT,
+ HINIC3_CMDQ_DEPTH);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Allocate cmdq failed");
+ goto cmdq_alloc_err;
+ }
+
+ err = hinic3_set_cmdqs(hwdev, cmdqs);
+ if (err) {
+ PMD_DRV_LOG(ERR, "set_cmdqs failed");
+ goto cmdq_alloc_err;
+ }
+ return 0;
+
+cmdq_alloc_err:
+ rte_mempool_free(cmdqs->cmd_buf_pool);
+
+pool_create_err:
+ rte_free(cmdqs->saved_wqs);
+
+alloc_wqs_err:
+ rte_free(cmdqs);
+
+ return err;
+}
+
+void
+hinic3_cmdqs_free(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_cmdqs *cmdqs = hwdev->cmdqs;
+ enum hinic3_cmdq_type cmdq_type = HINIC3_CMDQ_SYNC;
+
+ cmdqs->status &= ~HINIC3_CMDQ_ENABLE;
+
+ for (; cmdq_type < HINIC3_MAX_CMDQ_TYPES; cmdq_type++)
+ free_cmdq(&cmdqs->cmdq[cmdq_type]);
+
+ hinic3_cmdq_free(cmdqs->saved_wqs, HINIC3_MAX_CMDQ_TYPES);
+
+ rte_mempool_free(cmdqs->cmd_buf_pool);
+
+ rte_free(cmdqs->saved_wqs);
+
+ rte_free(cmdqs);
+}
+
+static int
+hinic3_cmdq_poll_msg(struct hinic3_cmdq *cmdq, u32 timeout)
+{
+ struct hinic3_cmdq_wqe *wqe = NULL;
+ struct hinic3_cmdq_wqe_lcmd *wqe_lcmd = NULL;
+ struct hinic3_ctrl *ctrl = NULL;
+ struct hinic3_cmdq_cmd_info *cmd_info = NULL;
+ u32 status_info, ctrl_info;
+ u16 ci;
+ int errcode;
+ unsigned long end;
+ int done = 0;
+ int err = 0;
+
+ wqe = hinic3_read_wqe(cmdq->wq, 1, &ci);
+ if (!wqe) {
+ PMD_DRV_LOG(ERR, "No outstanding cmdq msg");
+ return -EINVAL;
+ }
+
+ cmd_info = &cmdq->cmd_infos[ci];
+ if (cmd_info->cmd_type == HINIC3_CMD_TYPE_NONE) {
+ PMD_DRV_LOG(ERR,
+ "Cmdq msg has not been filled and send to hw, "
+ "or get TMO msg ack. cmdq ci: %u",
+ ci);
+ return -EINVAL;
+ }
+
+	/* Only the arm bit uses an scmd wqe; all other commands use lcmd wqes. */
+ wqe_lcmd = &wqe->wqe_lcmd;
+ ctrl = &wqe_lcmd->ctrl;
+ end = jiffies + msecs_to_jiffies(timeout);
+ do {
+ ctrl_info = hinic3_hw_cpu32((ctrl)->ctrl_info);
+ if (WQE_COMPLETED(ctrl_info)) {
+ done = 1;
+ break;
+ }
+
+ rte_delay_us(1);
+ } while (time_before(jiffies, end));
+
+ if (done) {
+ status_info = hinic3_hw_cpu32(wqe_lcmd->status.status_info);
+ errcode = WQE_ERRCODE_GET(status_info, VAL);
+ cmdq_update_errcode(cmdq, ci, errcode);
+ clear_wqe_complete_bit(cmdq, wqe);
+ err = 0;
+ } else {
+ PMD_DRV_LOG(ERR, "Poll cmdq msg time out, ci: %u", ci);
+ err = -ETIMEDOUT;
+ }
+
+ /* Set this cmd invalid. */
+ cmd_info->cmd_type = HINIC3_CMD_TYPE_NONE;
+
+ return err;
+}
diff --git a/drivers/net/hinic3/base/hinic3_cmdq.h b/drivers/net/hinic3/base/hinic3_cmdq.h
new file mode 100644
index 0000000000..fdff69fd51
--- /dev/null
+++ b/drivers/net/hinic3/base/hinic3_cmdq.h
@@ -0,0 +1,230 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Huawei Technologies Co., Ltd
+ */
+
+#ifndef _HINIC3_CMDQ_H_
+#define _HINIC3_CMDQ_H_
+
+#include "hinic3_mgmt.h"
+#include "hinic3_wq.h"
+
+#define HINIC3_SCMD_DATA_LEN 16
+
+/* Pmd driver uses 64, kernel l2nic uses 4096. */
+#define HINIC3_CMDQ_DEPTH 64
+
+#define HINIC3_CMDQ_BUF_SIZE 2048U
+
+#define HINIC3_CEQ_ID_CMDQ 0
+
+enum cmdq_scmd_type {
+ CMDQ_SET_ARM_CMD = 2,
+};
+
+enum cmdq_wqe_type { WQE_LCMD_TYPE, WQE_SCMD_TYPE };
+
+enum ctrl_sect_len { CTRL_SECT_LEN = 1, CTRL_DIRECT_SECT_LEN = 2 };
+
+enum bufdesc_len { BUFDESC_LCMD_LEN = 2, BUFDESC_SCMD_LEN = 3 };
+
+enum data_format {
+ DATA_SGE,
+};
+
+enum completion_format { COMPLETE_DIRECT, COMPLETE_SGE };
+
+enum completion_request {
+ CEQ_SET = 1,
+};
+
+enum cmdq_cmd_type { SYNC_CMD_DIRECT_RESP, SYNC_CMD_SGE_RESP, ASYNC_CMD };
+
+enum hinic3_cmdq_type {
+ HINIC3_CMDQ_SYNC,
+ HINIC3_CMDQ_ASYNC,
+ HINIC3_MAX_CMDQ_TYPES
+};
+
+enum hinic3_db_src_type {
+ HINIC3_DB_SRC_CMDQ_TYPE,
+ HINIC3_DB_SRC_L2NIC_SQ_TYPE
+};
+
+enum hinic3_cmdq_db_type { HINIC3_DB_SQ_RQ_TYPE, HINIC3_DB_CMDQ_TYPE };
+
+/* Cmdq ack type. */
+enum hinic3_ack_type {
+ HINIC3_ACK_TYPE_CMDQ,
+ HINIC3_ACK_TYPE_SHARE_CQN,
+ HINIC3_ACK_TYPE_APP_CQN,
+
+ HINIC3_MOD_ACK_MAX = 15
+};
+
+/* Cmdq wqe ctrls. */
+struct hinic3_cmdq_header {
+ u32 header_info;
+ u32 saved_data;
+};
+
+struct hinic3_scmd_bufdesc {
+ u32 buf_len;
+ u32 rsvd;
+ u8 data[HINIC3_SCMD_DATA_LEN];
+};
+
+struct hinic3_lcmd_bufdesc {
+ struct hinic3_sge sge;
+ u32 rsvd1;
+ u64 saved_async_buf;
+ u64 rsvd3;
+};
+
+struct hinic3_cmdq_db {
+ u32 db_head;
+ u32 db_info;
+};
+
+struct hinic3_status {
+ u32 status_info;
+};
+
+struct hinic3_ctrl {
+ u32 ctrl_info;
+};
+
+struct hinic3_sge_resp {
+ struct hinic3_sge sge;
+ u32 rsvd;
+};
+
+struct hinic3_cmdq_completion {
+ /* HW format. */
+ union {
+ struct hinic3_sge_resp sge_resp;
+ u64 direct_resp;
+ };
+};
+
+struct hinic3_cmdq_wqe_scmd {
+ struct hinic3_cmdq_header header;
+ u64 rsvd;
+ struct hinic3_status status;
+ struct hinic3_ctrl ctrl;
+ struct hinic3_cmdq_completion completion;
+ struct hinic3_scmd_bufdesc buf_desc;
+};
+
+struct hinic3_cmdq_wqe_lcmd {
+ struct hinic3_cmdq_header header;
+ struct hinic3_status status;
+ struct hinic3_ctrl ctrl;
+ struct hinic3_cmdq_completion completion;
+ struct hinic3_lcmd_bufdesc buf_desc;
+};
+
+struct hinic3_cmdq_inline_wqe {
+ struct hinic3_cmdq_wqe_scmd wqe_scmd;
+};
+
+struct hinic3_cmdq_wqe {
+ /* HW format. */
+ union {
+ struct hinic3_cmdq_inline_wqe inline_wqe;
+ struct hinic3_cmdq_wqe_lcmd wqe_lcmd;
+ };
+};
+
+struct hinic3_cmdq_ctxt_info {
+ u64 curr_wqe_page_pfn;
+ u64 wq_block_pfn;
+};
+
+struct hinic3_cmd_cmdq_ctxt {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u16 func_idx;
+ u8 cmdq_id;
+ u8 rsvd1[5];
+
+ struct hinic3_cmdq_ctxt_info ctxt_info;
+};
+
+enum hinic3_cmdq_status {
+ HINIC3_CMDQ_ENABLE = BIT(0),
+};
+
+enum hinic3_cmdq_cmd_type {
+ HINIC3_CMD_TYPE_NONE,
+ HINIC3_CMD_TYPE_SET_ARM,
+ HINIC3_CMD_TYPE_DIRECT_RESP,
+ HINIC3_CMD_TYPE_SGE_RESP
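+/* Number of wqebbs needed to hold a wqe of 'wqe_size' bytes, rounded up. */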
+};
+
+struct hinic3_cmdq_cmd_info {
+ enum hinic3_cmdq_cmd_type cmd_type;
+};
+
+struct hinic3_cmdq {
+ struct hinic3_wq *wq;
+
+ enum hinic3_cmdq_type cmdq_type;
+ int wrapped;
+
+ int *errcode;
+ u8 *db_base;
+
+ rte_spinlock_t cmdq_lock;
+
+ struct hinic3_cmdq_ctxt_info cmdq_ctxt;
+
+ struct hinic3_cmdq_cmd_info *cmd_infos;
+};
+
+struct hinic3_cmdqs {
+ struct hinic3_hwdev *hwdev;
+ u8 *cmdqs_db_base;
+
+ struct rte_mempool *cmd_buf_pool;
+
+ struct hinic3_wq *saved_wqs;
+
+ struct hinic3_cmdq cmdq[HINIC3_MAX_CMDQ_TYPES];
+
+ u32 status;
+};
+
+struct hinic3_cmd_buf {
+ void *buf;
+ uint64_t dma_addr;
+ struct rte_mbuf *mbuf;
+ u16 size;
+};
+
+int hinic3_reinit_cmdq_ctxts(struct hinic3_hwdev *hwdev);
+
+bool hinic3_cmdq_idle(struct hinic3_cmdq *cmdq);
+
+struct hinic3_cmd_buf *hinic3_alloc_cmd_buf(void *hwdev);
+
+void hinic3_free_cmd_buf(struct hinic3_cmd_buf *cmd_buf);
+
+/*
+ * PF/VF sends a command to the microcode via cmdq; returns 0 on success.
+ * If timeout is 0, the default timeout is used.
+ */
+int hinic3_cmdq_direct_resp(void *hwdev, enum hinic3_mod_type mod, u8 cmd,
+ struct hinic3_cmd_buf *buf_in, u64 *out_param,
+ u32 timeout);
+
+int hinic3_cmdq_detail_resp(void *hwdev, enum hinic3_mod_type mod, u8 cmd,
+ struct hinic3_cmd_buf *buf_in,
+ struct hinic3_cmd_buf *buf_out, u32 timeout);
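+
+/*
+ * Illustrative usage sketch (not part of the driver): issue a
+ * direct-response command. 'req', 'cmd' and 'process' are hypothetical
+ * placeholders for a caller-defined request struct, command id and
+ * result handler.
+ *
+ *	struct hinic3_cmd_buf *buf = hinic3_alloc_cmd_buf(hwdev);
+ *	u64 out = 0;
+ *
+ *	if (buf) {
+ *		memcpy(buf->buf, &req, sizeof(req));
+ *		buf->size = sizeof(req);
+ *		if (!hinic3_cmdq_direct_resp(hwdev, HINIC3_MOD_COMM, cmd,
+ *					     buf, &out, 0))
+ *			process(out);
+ *		hinic3_free_cmd_buf(buf);
+ *	}
+ */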
+
+int hinic3_cmdqs_init(struct hinic3_hwdev *hwdev);
+
+void hinic3_cmdqs_free(struct hinic3_hwdev *hwdev);
+
+#endif /* _HINIC3_CMDQ_H_ */
--
2.47.0.windows.2
* [RFC 05/18] net/hinic3: add NIC event module
2025-04-18 9:05 [RFC 00/18] add hinic3 PMD driver Feifei Wang
` (3 preceding siblings ...)
2025-04-18 9:05 ` [RFC 04/18] net/hinic3: add support for cmdq mechanism Feifei Wang
@ 2025-04-18 9:05 ` Feifei Wang
2025-04-18 9:05 ` [RFC 06/18] net/hinic3: add eq mechanism function code Feifei Wang
` (15 subsequent siblings)
20 siblings, 0 replies; 30+ messages in thread
From: Feifei Wang @ 2025-04-18 9:05 UTC (permalink / raw)
To: dev; +Cc: Xin Wang, Yi Chen, Feifei Wang
From: Xin Wang <wangxin679@h-partners.com>
Currently, there are two types of events: PF/VF connection status
and port information printing. This patch contains the related data
structures and handler code.
Signed-off-by: Xin Wang <wangxin679@h-partners.com>
Reviewed-by: Yi Chen <chenyi221@huawei.com>
Reviewed-by: Feifei Wang <wangfeifei40@huawei.com>
---
drivers/net/hinic3/base/hinic3_nic_event.c | 433 +++++++++++++++++++++
drivers/net/hinic3/base/hinic3_nic_event.h | 39 ++
2 files changed, 472 insertions(+)
create mode 100644 drivers/net/hinic3/base/hinic3_nic_event.c
create mode 100644 drivers/net/hinic3/base/hinic3_nic_event.h
diff --git a/drivers/net/hinic3/base/hinic3_nic_event.c b/drivers/net/hinic3/base/hinic3_nic_event.c
new file mode 100644
index 0000000000..14cf6f10ea
--- /dev/null
+++ b/drivers/net/hinic3/base/hinic3_nic_event.c
@@ -0,0 +1,433 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Huawei Technologies Co., Ltd
+ */
+
+#include "hinic3_compat.h"
+#include "hinic3_cmd.h"
+#include "hinic3_hwif.h"
+#include "hinic3_nic_cfg.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_nic_event.h"
+#include "hinic3_ethdev.h"
+
+static const char *g_hw_to_char_fec[HILINK_FEC_MAX_TYPE] = {
+ "not set", "rsfec", "basefec", "nofec", "llrsfec",
+};
+static const char *g_hw_to_speed_info[PORT_SPEED_UNKNOWN] = {
+ "not set", "10MB", "100MB", "1GB", "10GB",
+ "25GB", "40GB", "50GB", "100GB", "200GB",
+};
+static const char *g_hw_to_an_state_info[PORT_CFG_AN_OFF + 1] = {
+ "not set",
+ "on",
+ "off",
+};
+
+struct port_type_table {
+ u32 port_type;
+ const char *port_type_name;
+};
+
+void
+get_port_info(struct hinic3_hwdev *hwdev, u8 link_state,
+ struct rte_eth_link *link)
+{
+ uint32_t port_speed[LINK_SPEED_LEVELS] = {
+ RTE_ETH_SPEED_NUM_NONE, RTE_ETH_SPEED_NUM_10M,
+ RTE_ETH_SPEED_NUM_100M, RTE_ETH_SPEED_NUM_1G,
+ RTE_ETH_SPEED_NUM_10G, RTE_ETH_SPEED_NUM_25G,
+ RTE_ETH_SPEED_NUM_40G, RTE_ETH_SPEED_NUM_50G,
+ RTE_ETH_SPEED_NUM_100G, RTE_ETH_SPEED_NUM_200G,
+ };
+ struct nic_port_info port_info = {0};
+ int err;
+
+ if (!link_state) {
+ link->link_status = RTE_ETH_LINK_DOWN;
+ link->link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link->link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+ link->link_autoneg = RTE_ETH_LINK_FIXED;
+ } else {
+ link->link_status = RTE_ETH_LINK_UP;
+
+ err = hinic3_get_port_info(hwdev, &port_info);
+ if (err) {
+ link->link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ link->link_autoneg = RTE_ETH_LINK_FIXED;
+ } else {
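+			/* Modulo guards against out-of-range speed values. */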
+ link->link_speed =
+ port_speed[port_info.speed % LINK_SPEED_LEVELS];
+ link->link_duplex = port_info.duplex;
+ link->link_autoneg = port_info.autoneg_state;
+ }
+ }
+}
+
+static void
+hinic3_link_event_stats(void *dev, u8 link)
+{
+ struct hinic3_hwdev *hwdev = dev;
+
+ if (link)
+ rte_atomic_fetch_add_explicit(&hwdev->hw_stats.link_event_stats.link_up_stats,
+ 1, rte_memory_order_seq_cst);
+ else
+ rte_atomic_fetch_add_explicit(&hwdev->hw_stats.link_event_stats.link_down_stats,
+ 1, rte_memory_order_seq_cst);
+}
+
+static void
+hinic3_set_vport_state(struct hinic3_hwdev *hwdev,
+ struct hinic3_cmd_link_state *link_state)
+{
+ struct hinic3_nic_dev *nic_dev = NULL;
+ int err = 0;
+
+ nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV((struct rte_eth_dev *)(hwdev->eth_dev));
+
+ if (link_state->state) {
+ if (hinic3_get_bit(HINIC3_DEV_START, &nic_dev->dev_status))
+ err = hinic3_set_vport_enable(hwdev, true);
+ } else {
+ err = hinic3_set_vport_enable(hwdev, false);
+ }
+
+ if (err)
+ PMD_DRV_LOG(ERR, "Set vport status failed");
+}
+
+static void
+link_status_event_handler(void *hwdev, void *buf_in, __rte_unused u16 in_size,
+ __rte_unused void *buf_out,
+ __rte_unused u16 *out_size)
+{
+ struct hinic3_cmd_link_state *link_status = NULL;
+ struct rte_eth_link link;
+ struct hinic3_hwdev *dev = hwdev;
+ int err;
+
+ link_status = buf_in;
+ PMD_DRV_LOG(INFO,
+ "Link status report received, func_id: %d, status: %d(%s)",
+ hinic3_global_func_id(hwdev), link_status->state,
+ link_status->state ? "UP" : "DOWN");
+
+ hinic3_link_event_stats(hwdev, link_status->state);
+
+ hinic3_set_vport_state(dev, link_status);
+
+ /* Link event reported only after set vport enable. */
+ get_port_info(dev, link_status->state, &link);
+ err = rte_eth_linkstatus_set((struct rte_eth_dev *)(dev->eth_dev),
+ &link);
+ if (!err)
+ rte_eth_dev_callback_process(dev->eth_dev,
+ RTE_ETH_EVENT_INTR_LSC, NULL);
+}
+
+static const char *
+get_port_type_name(u32 type)
+{
+ int i;
+ const struct port_type_table port_optical_type_table_s[] = {
+ {LINK_PORT_UNKNOWN, "UNKNOWN"},
+ {LINK_PORT_OPTICAL_MM, "optical_sr"},
+ {LINK_PORT_OPTICAL_SM, "optical_lr"},
+ {LINK_PORT_PAS_COPPER, "copper"},
+ {LINK_PORT_ACC, "ACC"},
+ {LINK_PORT_BASET, "baset"},
+ {LINK_PORT_AOC, "AOC"},
+ {LINK_PORT_ELECTRIC, "electric"},
+ {LINK_PORT_BACKBOARD_INTERFACE, "interface"},
+ };
+
+ for (i = 0; i < ARRAY_LEN(port_optical_type_table_s); i++) {
+ if (type == port_optical_type_table_s[i].port_type)
+ return port_optical_type_table_s[i].port_type_name;
+ }
+ return "UNKNOWN TYPE";
+}
+
+static void
+get_port_type(struct mag_cmd_event_port_info *port_info, const char **port_type)
+{
+ if (port_info->port_type <= LINK_PORT_BACKBOARD_INTERFACE)
+ *port_type = get_port_type_name(port_info->port_type);
+ else
+ PMD_DRV_LOG(INFO, "Unknown port type: %u",
+ port_info->port_type);
+}
+
+static int
+get_port_temperature_power(struct mag_cmd_event_port_info *info, char *str)
+{
+ char arr[CAP_INFO_MAX_LEN];
+
+ snprintf(arr, CAP_INFO_MAX_LEN - 1, "%s, %s, Temperature: %u", str,
+ info->sfp_type ? "QSFP" : "SFP", info->cable_temp);
+ if (info->sfp_type)
+ snprintf(str, CAP_INFO_MAX_LEN - 1,
+ "%s, rx power: %uuw %uuW %uuW %uuW", arr,
+ info->power[0x0], info->power[0x1], info->power[0x2],
+ info->power[0x3]);
+ else
+ snprintf(str, CAP_INFO_MAX_LEN - 1,
+ "%s, rx power: %uuW, tx power: %uuW", arr,
+ info->power[0x0], info->power[0x1]);
+
+ return 0;
+}
+
+static void
+print_cable_info(struct mag_cmd_event_port_info *port_info)
+{
+ char tmp_str[CAP_INFO_MAX_LEN] = {0};
+ char tmp_vendor[VENDOR_MAX_LEN] = {0};
+ const char *port_type = "Unknown port type";
+ int i;
+	int err = 0;
+
+	if (port_info->gpio_insert) {
+ PMD_DRV_LOG(INFO, "Cable unpresent");
+ return;
+ }
+
+ get_port_type(port_info, &port_type);
+
+ for (i = (int)sizeof(port_info->vendor_name) - 1; i >= 0; i--) {
+ if (port_info->vendor_name[i] == ' ')
+ port_info->vendor_name[i] = '\0';
+ else
+ break;
+ }
+
+ memcpy(tmp_vendor, port_info->vendor_name,
+ sizeof(port_info->vendor_name));
+ (void)snprintf(tmp_str, CAP_INFO_MAX_LEN - 1,
+ "Vendor: %s, %s, length: %um, max_speed: %uGbps",
+ tmp_vendor, port_type, port_info->cable_length,
+ port_info->max_speed);
+
+ if (port_info->port_type == LINK_PORT_OPTICAL_MM ||
+ port_info->port_type == LINK_PORT_AOC) {
+ err = get_port_temperature_power(port_info, tmp_str);
+ if (err)
+ return;
+ }
+
+ PMD_DRV_LOG(INFO, "Cable information: %s", tmp_str);
+}
+
+static void
+print_link_info(struct mag_cmd_event_port_info *port_info)
+{
+ const char *fec = "None";
+ const char *speed = "None";
+ const char *an_state = "None";
+
+ if (port_info->fec < HILINK_FEC_MAX_TYPE)
+ fec = g_hw_to_char_fec[port_info->fec];
+ else
+ PMD_DRV_LOG(INFO, "Unknown fec type: %u", port_info->fec);
+
+ if (port_info->an_state > PORT_CFG_AN_OFF) {
+ PMD_DRV_LOG(INFO, "an_state %u is invalid",
+ port_info->an_state);
+ return;
+ }
+
+ an_state = g_hw_to_an_state_info[port_info->an_state];
+
+ if (port_info->speed >= PORT_SPEED_UNKNOWN) {
+ PMD_DRV_LOG(INFO, "speed %u is invalid", port_info->speed);
+ return;
+ }
+
+ speed = g_hw_to_speed_info[port_info->speed];
+ PMD_DRV_LOG(INFO, "Link information: speed %s, %s, autoneg %s", speed,
+ fec, an_state);
+}
+
+static void
+print_port_info(void *hwdev, struct mag_cmd_event_port_info *port_info, u8 type)
+{
+ print_cable_info(port_info);
+
+ print_link_info(port_info);
+
+ if (type == RTE_ETH_LINK_UP)
+ return;
+
+ PMD_DRV_LOG(INFO, "Function %d link down msg:",
+ hinic3_global_func_id(hwdev));
+
+ PMD_DRV_LOG(INFO,
+ "PMA ctrl: %s, tx %s, rx %s, PMA fifo reg: 0x%x, "
+ "PMA signal ok reg: 0x%x, RF/LF status reg: 0x%x",
+ port_info->pma_ctrl == 1 ? "off" : "on",
+ port_info->tx_enable ? "enable" : "disable",
+ port_info->rx_enable ? "enable" : "disable",
+ port_info->pma_fifo_reg, port_info->pma_signal_ok_reg,
+ port_info->rf_lf);
+ PMD_DRV_LOG(INFO,
+ "alos: %u, rx_los: %u, PCS 64 66b reg: 0x%x, "
+ "PCS link: 0x%x, MAC link: 0x%x, PCS_err_cnt: 0x%x",
+ port_info->alos, port_info->rx_los,
+ port_info->pcs_64_66b_reg, port_info->pcs_link,
+ port_info->pcs_mac_link, port_info->pcs_err_cnt);
+ PMD_DRV_LOG(INFO,
+ "his_link_machine_state = 0x%08x, "
+ "cur_link_machine_state = 0x%08x",
+ port_info->his_link_machine_state,
+ port_info->cur_link_machine_state);
+}
+
+static void
+port_info_event_printf(void *hwdev, void *buf_in, __rte_unused u16 in_size,
+		       void *buf_out, __rte_unused u16 *out_size)
+{
+	struct mag_cmd_event_port_info *port_info = buf_in;
+	enum hinic3_nic_event_type type = port_info->event_type;
+
+	((struct mag_cmd_event_port_info *)buf_out)->head.status = 0;
+
+	if (type < RTE_ETH_LINK_DOWN || type > RTE_ETH_LINK_UP) {
+ PMD_DRV_LOG(ERR, "Invalid hilink info report, type: %d",
+ type);
+ return;
+ }
+
+ print_port_info(hwdev, port_info, type);
+}
+
+struct nic_event_handler {
+ u16 cmd;
+ void (*handler)(void *hwdev, void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size);
+};
+
+static const struct nic_event_handler nic_cmd_handler[] = {};
+
+/**
+ * Handle NIC event based on the provided command.
+ *
+ * @param[in] hwdev
+ * Pointer to hardware device structure.
+ * @param[in] cmd
+ * The command associated with the NIC event to be handled.
+ * @param[in] buf_in
+ * The input buffer containing the event data.
+ * @param[in] in_size
+ * The size of the input buffer.
+ * @param[out] buf_out
+ * The output buffer to store the event response.
+ * @param[out] out_size
+ * The size of the output data stored in the output buffer.
+ */
+static void
+nic_event_handler(void *hwdev, u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ u32 i, size = ARRAY_LEN(nic_cmd_handler);
+
+ if (!hwdev)
+ return;
+
+ *out_size = 0;
+
+ for (i = 0; i < size; i++) {
+ if (cmd == nic_cmd_handler[i].cmd) {
+ nic_cmd_handler[i].handler(hwdev, buf_in, in_size,
+ buf_out, out_size);
+ break;
+ }
+ }
+
+ if (i == size)
+ PMD_DRV_LOG(WARNING, "Unsupported nic event cmd(%d) to process",
+ cmd);
+}
+
+/*
+ * VF handles mbox messages from the PPF/PF,
+ * e.g. VF link change events.
+ */
+int
+hinic3_vf_event_handler(void *hwdev, __rte_unused void *pri_handle, u16 cmd,
+ void *buf_in, u16 in_size, void *buf_out, u16 *out_size)
+{
+ nic_event_handler(hwdev, cmd, buf_in, in_size, buf_out, out_size);
+ return 0;
+}
+
+/* PF/PPF handles NIC events reported by the mgmt CPU. */
+void
+hinic3_pf_event_handler(void *hwdev, __rte_unused void *pri_handle, u16 cmd,
+ void *buf_in, u16 in_size, void *buf_out, u16 *out_size)
+{
+ nic_event_handler(hwdev, cmd, buf_in, in_size, buf_out, out_size);
+}
+
+static const struct nic_event_handler mag_cmd_handler[] = {
+ {
+ .cmd = MAG_CMD_GET_LINK_STATUS,
+ .handler = link_status_event_handler,
+ },
+ {
+ .cmd = MAG_CMD_EVENT_PORT_INFO,
+ .handler = port_info_event_printf,
+ },
+};
+
+static int
+hinic3_mag_event_handler(void *hwdev, u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ u32 size = ARRAY_LEN(mag_cmd_handler);
+ u32 i;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ *out_size = 0;
+ for (i = 0; i < size; i++) {
+ if (cmd == mag_cmd_handler[i].cmd) {
+ mag_cmd_handler[i].handler(hwdev, buf_in, in_size,
+ buf_out, out_size);
+ break;
+ }
+ }
+
+ /* Can't find this event cmd. */
+ if (i == size)
+ PMD_DRV_LOG(ERR, "Unsupported mag event, cmd: %u", cmd);
+
+ return 0;
+}
+
+int
+hinic3_vf_mag_event_handler(void *hwdev, __rte_unused void *pri_handle, u16 cmd,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size)
+{
+ return hinic3_mag_event_handler(hwdev, cmd, buf_in, in_size, buf_out,
+ out_size);
+}
+
+/* PF/PPF handles hilink events reported by the mgmt CPU. */
+void
+hinic3_pf_mag_event_handler(void *hwdev, __rte_unused void *pri_handle, u16 cmd,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size)
+{
+ hinic3_mag_event_handler(hwdev, cmd, buf_in, in_size, buf_out,
+ out_size);
+}
+
+u8
+hinic3_nic_sw_aeqe_handler(__rte_unused void *hwdev, u8 event, u8 *data)
+{
+ PMD_DRV_LOG(ERR,
+ "Received nic ucode aeq event type: 0x%x, data: %" PRIu64,
+ event, *((u64 *)data));
+
+ return 0;
+}
diff --git a/drivers/net/hinic3/base/hinic3_nic_event.h b/drivers/net/hinic3/base/hinic3_nic_event.h
new file mode 100644
index 0000000000..6a792e6af5
--- /dev/null
+++ b/drivers/net/hinic3/base/hinic3_nic_event.h
@@ -0,0 +1,39 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Huawei Technologies Co., Ltd
+ */
+
+#ifndef _HINIC3_NIC_EVENT_H_
+#define _HINIC3_NIC_EVENT_H_
+
+/**
+ * Get the Ethernet port link information based on the link state.
+ *
+ * @param[in] hwdev
+ * The hardware device context.
+ * @param[in] link_state
+ * The current link state (0 = down, non-zero = up).
+ * @param[out] link
+ * Pointer to the `rte_eth_link` structure.
+ */
+void get_port_info(struct hinic3_hwdev *hwdev, u8 link_state,
+ struct rte_eth_link *link);
+
+int hinic3_vf_event_handler(void *hwdev, __rte_unused void *pri_handle, u16 cmd,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size);
+
+void hinic3_pf_event_handler(void *hwdev, __rte_unused void *pri_handle,
+ u16 cmd, void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size);
+
+int hinic3_vf_mag_event_handler(void *hwdev, void *pri_handle, u16 cmd,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size);
+
+void hinic3_pf_mag_event_handler(void *hwdev, void *pri_handle, u16 cmd,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size);
+
+u8 hinic3_nic_sw_aeqe_handler(__rte_unused void *hwdev, u8 event, u8 *data);
+
+#endif /* _HINIC3_NIC_EVENT_H_ */
--
2.47.0.windows.2
* [RFC 06/18] net/hinic3: add eq mechanism function code
2025-04-18 9:05 [RFC 00/18] add hinic3 PMD driver Feifei Wang
` (4 preceding siblings ...)
2025-04-18 9:05 ` [RFC 05/18] net/hinic3: add NIC event module Feifei Wang
@ 2025-04-18 9:05 ` Feifei Wang
2025-04-18 9:05 ` [RFC 07/18] net/hinic3: add mgmt module " Feifei Wang
` (14 subsequent siblings)
20 siblings, 0 replies; 30+ messages in thread
From: Feifei Wang @ 2025-04-18 9:05 UTC (permalink / raw)
To: dev; +Cc: Yi Chen, Xin Wang, Feifei Wang
From: Yi Chen <chenyi221@huawei.com>
Eqs include aeq and ceq. An aeq is a queue for mgmt asynchronous
messages and mgmt command response messages.
This patch introduces the data structures, initialization,
and related interfaces for aeq.
Signed-off-by: Yi Chen <chenyi221@huawei.com>
Reviewed-by: Xin Wang <wangxin679@h-partners.com>
Reviewed-by: Feifei Wang <wangfeifei40@huawei.com>
---
drivers/net/hinic3/base/hinic3_eqs.c | 719 +++++++++++++++++++++++++++
drivers/net/hinic3/base/hinic3_eqs.h | 98 ++++
2 files changed, 817 insertions(+)
create mode 100644 drivers/net/hinic3/base/hinic3_eqs.c
create mode 100644 drivers/net/hinic3/base/hinic3_eqs.h
diff --git a/drivers/net/hinic3/base/hinic3_eqs.c b/drivers/net/hinic3/base/hinic3_eqs.c
new file mode 100644
index 0000000000..aa8ebc281a
--- /dev/null
+++ b/drivers/net/hinic3/base/hinic3_eqs.c
@@ -0,0 +1,719 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Huawei Technologies Co., Ltd
+ */
+
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include "hinic3_compat.h"
+#include "hinic3_csr.h"
+#include "hinic3_eqs.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_hwif.h"
+#include "hinic3_mbox.h"
+#include "hinic3_mgmt.h"
+#include "hinic3_nic_event.h"
+
+/* Indicate AEQ_CTRL_0 shift. */
+#define AEQ_CTRL_0_INTR_IDX_SHIFT 0
+#define AEQ_CTRL_0_DMA_ATTR_SHIFT 12
+#define AEQ_CTRL_0_PCI_INTF_IDX_SHIFT 20
+#define AEQ_CTRL_0_INTR_MODE_SHIFT 31
+
+/* Indicate AEQ_CTRL_0 mask. */
+#define AEQ_CTRL_0_INTR_IDX_MASK 0x3FFU
+#define AEQ_CTRL_0_DMA_ATTR_MASK 0x3FU
+#define AEQ_CTRL_0_PCI_INTF_IDX_MASK 0x7U
+#define AEQ_CTRL_0_INTR_MODE_MASK 0x1U
+
+/* Set and clear the AEQ_CTRL_0 bit fields. */
+#define AEQ_CTRL_0_SET(val, member) \
+ (((val) & AEQ_CTRL_0_##member##_MASK) << AEQ_CTRL_0_##member##_SHIFT)
+#define AEQ_CTRL_0_CLEAR(val, member) \
+ ((val) & (~(AEQ_CTRL_0_##member##_MASK << AEQ_CTRL_0_##member##_SHIFT)))
+
+/* Indicate AEQ_CTRL_1 shift. */
+#define AEQ_CTRL_1_LEN_SHIFT 0
+#define AEQ_CTRL_1_ELEM_SIZE_SHIFT 24
+#define AEQ_CTRL_1_PAGE_SIZE_SHIFT 28
+
+/* Indicate AEQ_CTRL_1 mask. */
+#define AEQ_CTRL_1_LEN_MASK 0x1FFFFFU
+#define AEQ_CTRL_1_ELEM_SIZE_MASK 0x3U
+#define AEQ_CTRL_1_PAGE_SIZE_MASK 0xFU
+
+/* Set and clear the AEQ_CTRL_1 bit fields. */
+#define AEQ_CTRL_1_SET(val, member) \
+ (((val) & AEQ_CTRL_1_##member##_MASK) << AEQ_CTRL_1_##member##_SHIFT)
+#define AEQ_CTRL_1_CLEAR(val, member) \
+ ((val) & (~(AEQ_CTRL_1_##member##_MASK << AEQ_CTRL_1_##member##_SHIFT)))
+
+#define HINIC3_EQ_PROD_IDX_MASK 0xFFFFF
+#define HINIC3_TASK_PROCESS_EQE_LIMIT 1024
+#define HINIC3_EQ_UPDATE_CI_STEP 64
+
+/* Indicate EQ_ELEM_DESC shift. */
+#define EQ_ELEM_DESC_TYPE_SHIFT 0
+#define EQ_ELEM_DESC_SRC_SHIFT 7
+#define EQ_ELEM_DESC_SIZE_SHIFT 8
+#define EQ_ELEM_DESC_WRAPPED_SHIFT 31
+
+/* Indicate EQ_ELEM_DESC mask. */
+#define EQ_ELEM_DESC_TYPE_MASK 0x7FU
+#define EQ_ELEM_DESC_SRC_MASK 0x1U
+#define EQ_ELEM_DESC_SIZE_MASK 0xFFU
+#define EQ_ELEM_DESC_WRAPPED_MASK 0x1U
+
+/* Get the EQ_ELEM_DESC bit fields. */
+#define EQ_ELEM_DESC_GET(val, member) \
+ (((val) >> EQ_ELEM_DESC_##member##_SHIFT) & \
+ EQ_ELEM_DESC_##member##_MASK)
+
+/* Indicate EQ_CI_SIMPLE_INDIR shift. */
+#define EQ_CI_SIMPLE_INDIR_CI_SHIFT 0
+#define EQ_CI_SIMPLE_INDIR_ARMED_SHIFT 21
+#define EQ_CI_SIMPLE_INDIR_AEQ_IDX_SHIFT 30
+
+/* Indicate EQ_CI_SIMPLE_INDIR mask. */
+#define EQ_CI_SIMPLE_INDIR_CI_MASK 0x1FFFFFU
+#define EQ_CI_SIMPLE_INDIR_ARMED_MASK 0x1U
+#define EQ_CI_SIMPLE_INDIR_AEQ_IDX_MASK 0x3U
+
+/* Set and clear the EQ_CI_SIMPLE_INDIR bit fields. */
+#define EQ_CI_SIMPLE_INDIR_SET(val, member) \
+ (((val) & EQ_CI_SIMPLE_INDIR_##member##_MASK) \
+ << EQ_CI_SIMPLE_INDIR_##member##_SHIFT)
+#define EQ_CI_SIMPLE_INDIR_CLEAR(val, member) \
+ ((val) & (~(EQ_CI_SIMPLE_INDIR_##member##_MASK \
+ << EQ_CI_SIMPLE_INDIR_##member##_SHIFT)))
+
+#define EQ_WRAPPED(eq) ((u32)(eq)->wrapped << EQ_VALID_SHIFT)
+
+#define EQ_CONS_IDX(eq) \
+ ({ \
+ typeof(eq) __eq = (eq); \
+ __eq->cons_idx | ((u32)__eq->wrapped << EQ_WRAPPED_SHIFT); \
+ })
+#define GET_EQ_NUM_PAGES(eq, size) \
+ ({ \
+ typeof(eq) __eq = (eq); \
+ typeof(size) __size = (size); \
+ (u16)(RTE_ALIGN((u32)(__eq->eq_len * __eq->elem_size), \
+ __size) / \
+ __size); \
+ })
+
+#define GET_EQ_NUM_ELEMS(eq, pg_size) ((pg_size) / (u32)(eq)->elem_size)
+
+#define GET_EQ_ELEMENT(eq, idx) \
+ ({ \
+ typeof(eq) __eq = (eq); \
+ typeof(idx) __idx = (idx); \
+ ((u8 *)__eq->virt_addr[__idx / __eq->num_elem_in_pg]) + \
+ (u32)((__idx & (__eq->num_elem_in_pg - 1)) * \
+ __eq->elem_size); \
+ })
+
+#define GET_AEQ_ELEM(eq, idx) \
+ ((struct hinic3_aeq_elem *)GET_EQ_ELEMENT((eq), (idx)))
+
+#define PAGE_IN_4K(page_size) ((page_size) >> 12)
+#define EQ_SET_HW_PAGE_SIZE_VAL(eq) ((u32)ilog2(PAGE_IN_4K((eq)->page_size)))
+
+#define ELEMENT_SIZE_IN_32B(eq) (((eq)->elem_size) >> 5)
+#define EQ_SET_HW_ELEM_SIZE_VAL(eq) ((u32)ilog2(ELEMENT_SIZE_IN_32B(eq)))
+
+#define AEQ_DMA_ATTR_DEFAULT 0
+
+#define EQ_WRAPPED_SHIFT 20
+
+#define EQ_VALID_SHIFT 31
+
+#define aeq_to_aeqs(eq) \
+ ({ \
+ typeof(eq) __eq = (eq); \
+ container_of(__eq - __eq->q_id, struct hinic3_aeqs, aeq[0]); \
+ })
+
+#define AEQ_MSIX_ENTRY_IDX_0 0
+
+/**
+ * Write the consumer idx to hw.
+ *
+ * @param[in] eq
+ * The event queue to update the cons idx.
+ * @param[in] arm_state
+ * Indicate whether to report an interrupt when an eq element is generated.
+ */
+static void
+set_eq_cons_idx(struct hinic3_eq *eq, u32 arm_state)
+{
+ u32 eq_wrap_ci = 0;
+ u32 val = 0;
+ u32 addr = HINIC3_CSR_AEQ_CI_SIMPLE_INDIR_ADDR;
+
+ eq_wrap_ci = EQ_CONS_IDX(eq);
+
+	/* In the DPDK PMD, only aeq0 uses int_arm mode. */
+ if (eq->q_id != 0)
+ val = EQ_CI_SIMPLE_INDIR_SET(HINIC3_EQ_NOT_ARMED, ARMED);
+ else
+ val = EQ_CI_SIMPLE_INDIR_SET(arm_state, ARMED);
+
+ val = val | EQ_CI_SIMPLE_INDIR_SET(eq_wrap_ci, CI) |
+ EQ_CI_SIMPLE_INDIR_SET(eq->q_id, AEQ_IDX);
+
+ hinic3_hwif_write_reg(eq->hwdev->hwif, addr, val);
+}
+
+/**
+ * Set aeq's ctrls registers.
+ *
+ * @param[in] eq
+ * The event queue for setting.
+ */
+static void
+set_aeq_ctrls(struct hinic3_eq *eq)
+{
+ struct hinic3_hwif *hwif = eq->hwdev->hwif;
+ struct irq_info *eq_irq = &eq->eq_irq;
+ u32 addr, val, ctrl0, ctrl1, page_size_val, elem_size;
+ u32 pci_intf_idx = HINIC3_PCI_INTF_IDX(hwif);
+
+ /* Set AEQ ctrl0. */
+ addr = HINIC3_CSR_AEQ_CTRL_0_ADDR;
+
+ val = hinic3_hwif_read_reg(hwif, addr);
+
+ val = AEQ_CTRL_0_CLEAR(val, INTR_IDX) &
+ AEQ_CTRL_0_CLEAR(val, DMA_ATTR) &
+ AEQ_CTRL_0_CLEAR(val, PCI_INTF_IDX) &
+ AEQ_CTRL_0_CLEAR(val, INTR_MODE);
+
+ ctrl0 = AEQ_CTRL_0_SET(eq_irq->msix_entry_idx, INTR_IDX) |
+ AEQ_CTRL_0_SET(AEQ_DMA_ATTR_DEFAULT, DMA_ATTR) |
+ AEQ_CTRL_0_SET(pci_intf_idx, PCI_INTF_IDX) |
+ AEQ_CTRL_0_SET(HINIC3_INTR_MODE_ARMED, INTR_MODE);
+
+ val |= ctrl0;
+
+ hinic3_hwif_write_reg(hwif, addr, val);
+
+ /* Set AEQ ctrl1. */
+ addr = HINIC3_CSR_AEQ_CTRL_1_ADDR;
+
+ page_size_val = EQ_SET_HW_PAGE_SIZE_VAL(eq);
+ elem_size = EQ_SET_HW_ELEM_SIZE_VAL(eq);
+
+ ctrl1 = AEQ_CTRL_1_SET(eq->eq_len, LEN) |
+ AEQ_CTRL_1_SET(elem_size, ELEM_SIZE) |
+ AEQ_CTRL_1_SET(page_size_val, PAGE_SIZE);
+
+ hinic3_hwif_write_reg(hwif, addr, ctrl1);
+}
+
+/**
+ * Initialize all the elements in the aeq.
+ *
+ * @param[in] eq
+ * The event queue.
+ * @param[in] init_val
+ * Value to init.
+ */
+static void
+aeq_elements_init(struct hinic3_eq *eq, u32 init_val)
+{
+ struct hinic3_aeq_elem *aeqe = NULL;
+ u32 i;
+
+ for (i = 0; i < eq->eq_len; i++) {
+ aeqe = GET_AEQ_ELEM(eq, i);
+ aeqe->desc = cpu_to_be32(init_val);
+ }
+
+ rte_wmb(); /**< Write the init values. */
+}
+
+/**
+ * Set the pages for the event queue.
+ *
+ * @param[in] eq
+ * The event queue.
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+set_eq_pages(struct hinic3_eq *eq)
+{
+ struct hinic3_hwif *hwif = eq->hwdev->hwif;
+ u32 reg, init_val;
+ u16 pg_num, i;
+ int err;
+
+ for (pg_num = 0; pg_num < eq->num_pages; pg_num++) {
+ /* Allocate memory for each page. */
+ eq->eq_mz[pg_num] = hinic3_dma_zone_reserve(eq->hwdev->eth_dev,
+ "eq_mz", eq->q_id, eq->page_size,
+ eq->page_size, SOCKET_ID_ANY);
+ if (!eq->eq_mz[pg_num]) {
+ err = -ENOMEM;
+ goto dma_alloc_err;
+ }
+
+ /* Write physical memory address and virtual memory address. */
+ eq->dma_addr[pg_num] = eq->eq_mz[pg_num]->iova;
+ eq->virt_addr[pg_num] = eq->eq_mz[pg_num]->addr;
+
+ reg = HINIC3_AEQ_HI_PHYS_ADDR_REG(pg_num);
+ hinic3_hwif_write_reg(hwif, reg,
+ upper_32_bits(eq->dma_addr[pg_num]));
+
+ reg = HINIC3_AEQ_LO_PHYS_ADDR_REG(pg_num);
+ hinic3_hwif_write_reg(hwif, reg,
+ lower_32_bits(eq->dma_addr[pg_num]));
+ }
+ /* Calculate the number of elements that can be accommodated. */
+ eq->num_elem_in_pg = GET_EQ_NUM_ELEMS(eq, eq->page_size);
+ if (eq->num_elem_in_pg & (eq->num_elem_in_pg - 1)) {
+ PMD_DRV_LOG(ERR, "Number element in eq page != power of 2");
+ err = -EINVAL;
+ goto dma_alloc_err;
+ }
+ init_val = EQ_WRAPPED(eq);
+
+ /* Initialize elements in the queue. */
+ aeq_elements_init(eq, init_val);
+
+ return 0;
+
+dma_alloc_err:
+ for (i = 0; i < pg_num; i++)
+ hinic3_memzone_free(eq->eq_mz[i]);
+
+ return err;
+}
+
+/**
+ * Allocate the pages for the event queue.
+ *
+ * @param[in] eq
+ * The event queue.
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+alloc_eq_pages(struct hinic3_eq *eq)
+{
+ u64 dma_addr_size, virt_addr_size, eq_mz_size;
+ int err;
+
+ /* Calculate the size of the memory to be allocated. */
+ dma_addr_size = eq->num_pages * sizeof(*eq->dma_addr);
+ virt_addr_size = eq->num_pages * sizeof(*eq->virt_addr);
+ eq_mz_size = eq->num_pages * sizeof(*eq->eq_mz);
+
+ eq->dma_addr = rte_zmalloc("eq_dma", dma_addr_size,
+ HINIC3_MEM_ALLOC_ALIGN_MIN);
+ if (!eq->dma_addr)
+ return -ENOMEM;
+
+ eq->virt_addr = rte_zmalloc("eq_va", virt_addr_size,
+ HINIC3_MEM_ALLOC_ALIGN_MIN);
+ if (!eq->virt_addr) {
+ err = -ENOMEM;
+ goto virt_addr_alloc_err;
+ }
+
+ eq->eq_mz =
+ rte_zmalloc("eq_mz", eq_mz_size, HINIC3_MEM_ALLOC_ALIGN_MIN);
+ if (!eq->eq_mz) {
+ err = -ENOMEM;
+ goto eq_mz_alloc_err;
+ }
+ err = set_eq_pages(eq);
+ if (err != 0)
+ goto eq_pages_err;
+
+ return 0;
+
+eq_pages_err:
+ rte_free(eq->eq_mz);
+
+eq_mz_alloc_err:
+ rte_free(eq->virt_addr);
+
+virt_addr_alloc_err:
+ rte_free(eq->dma_addr);
+
+ return err;
+}
+
+/**
+ * Free the pages of the event queue.
+ *
+ * @param[in] eq
+ * The event queue.
+ */
+static void
+free_eq_pages(struct hinic3_eq *eq)
+{
+ u16 pg_num;
+
+ for (pg_num = 0; pg_num < eq->num_pages; pg_num++)
+ hinic3_memzone_free(eq->eq_mz[pg_num]);
+
+ rte_free(eq->eq_mz);
+ rte_free(eq->virt_addr);
+ rte_free(eq->dma_addr);
+}
+
+static u32
+get_page_size(struct hinic3_eq *eq)
+{
+ u32 total_size;
+ u16 count, n = 0;
+
+ /* Total memory size. */
+ total_size = RTE_ALIGN((eq->eq_len * eq->elem_size),
+ HINIC3_MIN_EQ_PAGE_SIZE);
+ if (total_size <= (HINIC3_EQ_MAX_PAGES * HINIC3_MIN_EQ_PAGE_SIZE))
+ return HINIC3_MIN_EQ_PAGE_SIZE;
+ /* Total number of pages. */
+ count = (u16)(RTE_ALIGN((total_size / HINIC3_EQ_MAX_PAGES),
+ HINIC3_MIN_EQ_PAGE_SIZE) /
+ HINIC3_MIN_EQ_PAGE_SIZE);
+
+ /* Whether count is a power of 2. */
+ if (!(count & (count - 1)))
+ return HINIC3_MIN_EQ_PAGE_SIZE * count;
+
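+	/*
+	 * Not a power of two: the shift loop below computes
+	 * floor(log2(count)) + 1, rounding the page count up to the next
+	 * power of two.
+	 */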
+ while (count) {
+ count >>= 1;
+ n++;
+ }
+
+ return ((u32)HINIC3_MIN_EQ_PAGE_SIZE) << n;
+}
+
+/**
+ * Initialize AEQ.
+ *
+ * @param[in] eq
+ * The event queue.
+ * @param[in] hwdev
+ * The pointer to the private hardware device.
+ * @param[in] q_id
+ * Queue id number.
+ * @param[in] q_len
+ * The number of EQ elements.
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+init_aeq(struct hinic3_eq *eq, struct hinic3_hwdev *hwdev, u16 q_id, u32 q_len)
+{
+ int err = 0;
+
+ eq->hwdev = hwdev;
+ eq->q_id = q_id;
+ eq->eq_len = q_len;
+
+ /* Indirect access should set q_id first. */
+ hinic3_hwif_write_reg(hwdev->hwif, HINIC3_AEQ_INDIR_IDX_ADDR, eq->q_id);
+	rte_wmb(); /**< Write index before config. */
+
+ /* Clear eq_len to force eqe drop in hardware. */
+ hinic3_hwif_write_reg(eq->hwdev->hwif, HINIC3_CSR_AEQ_CTRL_1_ADDR, 0);
+ rte_wmb();
+ /* Init aeq pi to 0 before allocating aeq pages. */
+ hinic3_hwif_write_reg(eq->hwdev->hwif, HINIC3_CSR_AEQ_PROD_IDX_ADDR, 0);
+
+ eq->cons_idx = 0;
+ eq->wrapped = 0;
+
+ eq->elem_size = HINIC3_AEQE_SIZE;
+ eq->page_size = get_page_size(eq);
+ eq->orig_page_size = eq->page_size;
+ eq->num_pages = GET_EQ_NUM_PAGES(eq, eq->page_size);
+ if (eq->num_pages > HINIC3_EQ_MAX_PAGES) {
+ PMD_DRV_LOG(ERR, "Too many pages: %d for aeq", eq->num_pages);
+ return -EINVAL;
+ }
+
+ err = alloc_eq_pages(eq);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Allocate pages for eq failed");
+ return err;
+ }
+
+ /* Pmd driver uses AEQ_MSIX_ENTRY_IDX_0. */
+ eq->eq_irq.msix_entry_idx = AEQ_MSIX_ENTRY_IDX_0;
+ set_aeq_ctrls(eq);
+
+ set_eq_cons_idx(eq, HINIC3_EQ_ARMED);
+
+ if (eq->q_id == 0)
+ hinic3_set_msix_state(hwdev, 0, HINIC3_MSIX_ENABLE);
+
+ eq->poll_retry_nr = HINIC3_RETRY_NUM;
+
+ return 0;
+}
+
+/**
+ * Remove AEQ.
+ *
+ * @param[in] eq
+ * The event queue.
+ */
+static void
+remove_aeq(struct hinic3_eq *eq)
+{
+ struct irq_info *entry = &eq->eq_irq;
+
+ if (eq->q_id == 0)
+ hinic3_set_msix_state(eq->hwdev, entry->msix_entry_idx,
+ HINIC3_MSIX_DISABLE);
+
+ /* Indirect access should set q_id first. */
+ hinic3_hwif_write_reg(eq->hwdev->hwif, HINIC3_AEQ_INDIR_IDX_ADDR,
+ eq->q_id);
+
+ rte_wmb(); /**< Write index before config. */
+
+ /* Clear eq_len to avoid hw access host memory. */
+ hinic3_hwif_write_reg(eq->hwdev->hwif, HINIC3_CSR_AEQ_CTRL_1_ADDR, 0);
+
+ /* Update cons_idx to avoid invalid interrupt. */
+ eq->cons_idx = hinic3_hwif_read_reg(eq->hwdev->hwif,
+ HINIC3_CSR_AEQ_PROD_IDX_ADDR);
+ set_eq_cons_idx(eq, HINIC3_EQ_NOT_ARMED);
+
+ free_eq_pages(eq);
+}
+
+/**
+ * Init all AEQs.
+ *
+ * @param[in] hwdev
+ * The pointer to the private hardware device object
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int
+hinic3_aeqs_init(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_aeqs *aeqs = NULL;
+ u16 num_aeqs;
+ int err;
+ u16 i, q_id;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ num_aeqs = HINIC3_HWIF_NUM_AEQS(hwdev->hwif);
+ if (num_aeqs > HINIC3_MAX_AEQS) {
+ PMD_DRV_LOG(INFO, "Adjust aeq num to %d", HINIC3_MAX_AEQS);
+ num_aeqs = HINIC3_MAX_AEQS;
+ } else if (num_aeqs < HINIC3_MIN_AEQS) {
+ PMD_DRV_LOG(ERR, "PMD needs %d AEQs, Chip has %d",
+ HINIC3_MIN_AEQS, num_aeqs);
+ return -EINVAL;
+ }
+
+ aeqs = rte_zmalloc("hinic3_aeqs", sizeof(*aeqs),
+ HINIC3_MEM_ALLOC_ALIGN_MIN);
+ if (!aeqs)
+ return -ENOMEM;
+
+ hwdev->aeqs = aeqs;
+ aeqs->hwdev = hwdev;
+ aeqs->num_aeqs = num_aeqs;
+
+ for (q_id = 0; q_id < num_aeqs; q_id++) {
+ err = init_aeq(&aeqs->aeq[q_id], hwdev, q_id,
+ HINIC3_DEFAULT_AEQ_LEN);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Init aeq %d failed", q_id);
+ goto init_aeq_err;
+ }
+ }
+
+ return 0;
+
+init_aeq_err:
+ for (i = 0; i < q_id; i++)
+ remove_aeq(&aeqs->aeq[i]);
+
+ rte_free(aeqs);
+ return err;
+}
+
+/**
+ * Free all AEQs.
+ *
+ * @param[in] hwdev
+ * The pointer to the private hardware device.
+ */
+void
+hinic3_aeqs_free(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_aeqs *aeqs = hwdev->aeqs;
+ u16 q_id;
+
+ for (q_id = 0; q_id < aeqs->num_aeqs; q_id++)
+ remove_aeq(&aeqs->aeq[q_id]);
+
+ rte_free(aeqs);
+}
+
+void
+hinic3_dump_aeq_info(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_aeq_elem *aeqe_pos = NULL;
+ struct hinic3_eq *eq = NULL;
+ u32 addr, ci, pi, ctrl0, idx;
+ int q_id;
+
+ for (q_id = 0; q_id < hwdev->aeqs->num_aeqs; q_id++) {
+ eq = &hwdev->aeqs->aeq[q_id];
+ /* Indirect access should set q_id first. */
+ hinic3_hwif_write_reg(eq->hwdev->hwif,
+ HINIC3_AEQ_INDIR_IDX_ADDR, eq->q_id);
+ rte_wmb(); /**< Write index before config. */
+
+ addr = HINIC3_CSR_AEQ_CTRL_0_ADDR;
+
+ ctrl0 = hinic3_hwif_read_reg(hwdev->hwif, addr);
+
+ idx = hinic3_hwif_read_reg(hwdev->hwif,
+ HINIC3_AEQ_INDIR_IDX_ADDR);
+
+ addr = HINIC3_CSR_AEQ_CONS_IDX_ADDR;
+ ci = hinic3_hwif_read_reg(hwdev->hwif, addr);
+ addr = HINIC3_CSR_AEQ_PROD_IDX_ADDR;
+ pi = hinic3_hwif_read_reg(hwdev->hwif, addr);
+ aeqe_pos = GET_AEQ_ELEM(eq, eq->cons_idx);
+ PMD_DRV_LOG(ERR,
+ "Aeq id: %d, idx: %u, ctrl0: 0x%08x, wrap: %d,"
+ " pi: 0x%x, ci: 0x%08x, desc: 0x%x",
+ q_id, idx, ctrl0, eq->wrapped, pi, ci,
+ be32_to_cpu(aeqe_pos->desc));
+ }
+}
+
+static int
+aeq_elem_handler(struct hinic3_eq *eq, u32 aeqe_desc,
+ struct hinic3_aeq_elem *aeqe_pos, void *param)
+{
+ enum hinic3_aeq_type event;
+ u8 data[HINIC3_AEQE_DATA_SIZE];
+ u8 size;
+
+ event = EQ_ELEM_DESC_GET(aeqe_desc, TYPE);
+ if (EQ_ELEM_DESC_GET(aeqe_desc, SRC)) {
+ /* SW event uses only the first 8B. */
+ memcpy(data, aeqe_pos->aeqe_data, HINIC3_AEQE_DATA_SIZE);
+ hinic3_be32_to_cpu(data, HINIC3_AEQE_DATA_SIZE);
+ /* Just support HINIC3_STATELESS_EVENT. */
+ return hinic3_nic_sw_aeqe_handler(eq->hwdev, event, data);
+ }
+
+ memcpy(data, aeqe_pos->aeqe_data, HINIC3_AEQE_DATA_SIZE);
+ hinic3_be32_to_cpu(data, HINIC3_AEQE_DATA_SIZE);
+ size = EQ_ELEM_DESC_GET(aeqe_desc, SIZE);
+
+ if (event == HINIC3_MSG_FROM_MGMT_CPU) {
+ return hinic3_mgmt_msg_aeqe_handler(eq->hwdev, data, size,
+ param);
+ } else if (event == HINIC3_MBX_FROM_FUNC) {
+ return hinic3_mbox_func_aeqe_handler(eq->hwdev, data, size,
+ param);
+ } else {
+ PMD_DRV_LOG(ERR, "AEQ hw event not support %d", event);
+ return -EINVAL;
+ }
+}
+
+/**
+ * Poll one or more AEQ elements and dispatch each to its dedicated handler.
+ *
+ * @param[in] eq
+ * Pointer to the event queue.
+ * @param[in] timeout
+ * 0 - Poll all pending AEQ elements, used in interrupt mode.
+ * Greater than 0 - Poll the AEQ until an element with the 'last' field set
+ * to 1 is handled, used in polling mode.
+ * @param[in] param
+ * Customized parameter.
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int
+hinic3_aeq_poll_msg(struct hinic3_eq *eq, u32 timeout, void *param)
+{
+ struct hinic3_aeq_elem *aeqe_pos = NULL;
+ u32 aeqe_desc = 0;
+ u32 eqe_cnt = 0;
+ int err = -EFAULT;
+ int done = HINIC3_MSG_HANDLER_RES;
+ unsigned long end;
+ u16 i;
+
+ for (i = 0; ((timeout == 0) && (i < eq->eq_len)) ||
+ ((timeout > 0) && (done != 0) && (i < eq->eq_len));
+ i++) {
+ err = -EIO;
+ end = jiffies + msecs_to_jiffies(timeout);
+ do {
+ aeqe_pos = GET_AEQ_ELEM(eq, eq->cons_idx);
+ rte_rmb();
+
+ /* Data in HW is in big-endian format. */
+ aeqe_desc = be32_to_cpu(aeqe_pos->desc);
+
+ /*
+ * HW toggles the wrapped bit each time it posts a new
+ * EQ element.
+ */
+ if (EQ_ELEM_DESC_GET(aeqe_desc, WRAPPED) !=
+ eq->wrapped) {
+ err = 0;
+ break;
+ }
+
+ if (timeout != 0)
+ usleep(HINIC3_AEQE_DESC_SIZE);
+ } while (time_before(jiffies, end));
+
+ if (err != 0) /**< Poll timed out. */
+ break;
+ /* Handle the current element of the event queue. */
+ done = aeq_elem_handler(eq, aeqe_desc, aeqe_pos, param);
+
+ eq->cons_idx++;
+ if (eq->cons_idx == eq->eq_len) {
+ eq->cons_idx = 0;
+ eq->wrapped = !eq->wrapped;
+ }
+
+ if (++eqe_cnt >= HINIC3_EQ_UPDATE_CI_STEP) {
+ eqe_cnt = 0;
+ set_eq_cons_idx(eq, HINIC3_EQ_NOT_ARMED);
+ }
+ }
+ /* Set the consumer index of the event queue. */
+ set_eq_cons_idx(eq, HINIC3_EQ_ARMED);
+
+ return err;
+}
+
+void
+hinic3_dev_handle_aeq_event(struct hinic3_hwdev *hwdev, void *param)
+{
+ struct hinic3_eq *aeq = &hwdev->aeqs->aeq[0];
+
+ /* Clear resend timer cnt register. */
+ hinic3_misx_intr_clear_resend_bit(hwdev, aeq->eq_irq.msix_entry_idx,
+ MSIX_RESEND_TIMER_CLEAR);
+ (void)hinic3_aeq_poll_msg(aeq, 0, param);
+}
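As a reading aid for hinic3_aeq_poll_msg() above: the hardware toggles a wrap
flag each time it passes the end of the ring, and software treats an element
as new only while the element's hardware-written wrap bit differs from its
own copy, toggling that copy whenever cons_idx wraps. A minimal standalone
sketch of the same parity scheme (fake_elem, hw_write_elem and sw_poll_elem
are illustrative names, not driver API):

/* Minimal sketch of the AEQ wrap-bit handshake (hypothetical names). */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define RING_LEN 4

struct fake_elem { uint32_t data; bool wrap; };

static struct fake_elem ring[RING_LEN];
static uint32_t prod_idx, cons_idx;
static bool prod_wrap = true, cons_wrap = false; /* SW copy starts at 0. */

static void hw_write_elem(uint32_t data)
{
	ring[prod_idx].data = data;
	ring[prod_idx].wrap = prod_wrap; /* Written last in real HW. */
	if (++prod_idx == RING_LEN) {
		prod_idx = 0;
		prod_wrap = !prod_wrap;
	}
}

static bool sw_poll_elem(uint32_t *data)
{
	/* Element is valid only while its wrap bit differs from SW copy. */
	if (ring[cons_idx].wrap == cons_wrap)
		return false;
	*data = ring[cons_idx].data;
	if (++cons_idx == RING_LEN) {
		cons_idx = 0;
		cons_wrap = !cons_wrap; /* Same toggle as eq->wrapped. */
	}
	return true;
}

int main(void)
{
	uint32_t v;

	hw_write_elem(100);
	hw_write_elem(200);
	while (sw_poll_elem(&v))
		printf("consumed %u\n", v);
	return 0;
}

The parity test avoids a separate valid flag that software would have to
clear after every pass over the ring.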
diff --git a/drivers/net/hinic3/base/hinic3_eqs.h b/drivers/net/hinic3/base/hinic3_eqs.h
new file mode 100644
index 0000000000..7617ed9589
--- /dev/null
+++ b/drivers/net/hinic3/base/hinic3_eqs.h
@@ -0,0 +1,98 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Huawei Technologies Co., Ltd
+ */
+
+#ifndef _HINIC3_EQS_H_
+#define _HINIC3_EQS_H_
+
+#define HINIC3_MAX_AEQS 4
+#define HINIC3_MIN_AEQS 2
+#define HINIC3_EQ_MAX_PAGES 4
+
+#define HINIC3_AEQE_SIZE 64
+
+#define HINIC3_AEQE_DESC_SIZE 4
+#define HINIC3_AEQE_DATA_SIZE (HINIC3_AEQE_SIZE - HINIC3_AEQE_DESC_SIZE)
+
+/* Linux uses 1K; DPDK uses 64. */
+#define HINIC3_DEFAULT_AEQ_LEN 64
+
+#define HINIC3_MIN_EQ_PAGE_SIZE 0x1000 /**< Min eq page size 4K bytes. */
+#define HINIC3_MAX_EQ_PAGE_SIZE 0x400000 /**< Max eq page size 4M bytes. */
+
+#define HINIC3_MIN_AEQ_LEN 64
+#define HINIC3_MAX_AEQ_LEN \
+ ((HINIC3_MAX_EQ_PAGE_SIZE / HINIC3_AEQE_SIZE) * HINIC3_EQ_MAX_PAGES)
+
+#define EQ_IRQ_NAME_LEN 64
+
+enum hinic3_eq_intr_mode { HINIC3_INTR_MODE_ARMED, HINIC3_INTR_MODE_ALWAYS };
+
+enum hinic3_eq_ci_arm_state { HINIC3_EQ_NOT_ARMED, HINIC3_EQ_ARMED };
+
+/* Structure for interrupt request information. */
+struct irq_info {
+ u16 msix_entry_idx; /**< IRQ corresponding index number. */
+ u32 irq_id; /**< The IRQ number from OS. */
+};
+
+#define HINIC3_RETRY_NUM 10
+
+enum hinic3_aeq_type {
+ HINIC3_HW_INTER_INT = 0,
+ HINIC3_MBX_FROM_FUNC = 1,
+ HINIC3_MSG_FROM_MGMT_CPU = 2,
+ HINIC3_API_RSP = 3,
+ HINIC3_API_CHAIN_STS = 4,
+ HINIC3_MBX_SEND_RSLT = 5,
+ HINIC3_MAX_AEQ_EVENTS
+};
+
+/* Structure for EQ(Event Queue) information. */
+struct hinic3_eq {
+ struct hinic3_hwdev *hwdev;
+ u16 q_id;
+ u32 page_size;
+ u32 orig_page_size;
+ u32 eq_len;
+
+ u32 cons_idx;
+ u16 wrapped;
+
+ u16 elem_size;
+ u16 num_pages;
+ u32 num_elem_in_pg;
+
+ struct irq_info eq_irq;
+
+ const struct rte_memzone **eq_mz;
+ rte_iova_t *dma_addr;
+ u8 **virt_addr;
+
+ u16 poll_retry_nr;
+};
+
+struct hinic3_aeq_elem {
+ u8 aeqe_data[HINIC3_AEQE_DATA_SIZE];
+ u32 desc;
+};
+
+/* Structure for AEQs(Asynchronous Event Queues) information. */
+struct hinic3_aeqs {
+ struct hinic3_hwdev *hwdev;
+
+ struct hinic3_eq aeq[HINIC3_MAX_AEQS];
+ u16 num_aeqs;
+};
+
+int hinic3_aeqs_init(struct hinic3_hwdev *hwdev);
+
+void hinic3_aeqs_free(struct hinic3_hwdev *hwdev);
+
+void hinic3_dump_aeq_info(struct hinic3_hwdev *hwdev);
+
+int hinic3_aeq_poll_msg(struct hinic3_eq *eq, u32 timeout, void *param);
+
+void hinic3_dev_handle_aeq_event(struct hinic3_hwdev *hwdev, void *param);
+
+#endif /**< _HINIC3_EQS_H_ */
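The sizing macros above are related: with 64B elements and up to 4 pages of
at most 4MB each, HINIC3_MAX_AEQ_LEN works out to (0x400000 / 64) * 4 =
262144 entries. A reviewer's sketch of compile-time checks for those
relationships, with the constants copied from the header (the driver itself
does not carry these asserts):

/* Reviewer's sketch: sanity checks for the EQ sizing macros above. */
#include <assert.h>

#define HINIC3_AEQE_SIZE 64
#define HINIC3_EQ_MAX_PAGES 4
#define HINIC3_MAX_EQ_PAGE_SIZE 0x400000
#define HINIC3_MAX_AEQ_LEN \
	((HINIC3_MAX_EQ_PAGE_SIZE / HINIC3_AEQE_SIZE) * HINIC3_EQ_MAX_PAGES)

static_assert(HINIC3_MAX_AEQ_LEN == 262144,
	      "4 pages * 4MB / 64B elements = 256K entries");
static_assert((HINIC3_AEQE_SIZE & (HINIC3_AEQE_SIZE - 1)) == 0,
	      "element size must be a power of two for index arithmetic");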
--
2.47.0.windows.2
^ permalink raw reply [flat|nested] 30+ messages in thread
* [RFC 07/18] net/hinic3: add mgmt module function code
2025-04-18 9:05 [RFC 00/18] add hinic3 PMD driver Feifei Wang
` (5 preceding siblings ...)
2025-04-18 9:05 ` [RFC 06/18] net/hinic3: add eq mechanism function code Feifei Wang
@ 2025-04-18 9:05 ` Feifei Wang
2025-04-18 9:05 ` [RFC 08/18] net/hinic3: add module about hardware operation Feifei Wang
` (13 subsequent siblings)
20 siblings, 0 replies; 30+ messages in thread
From: Feifei Wang @ 2025-04-18 9:05 UTC (permalink / raw)
To: dev; +Cc: Yi Chen, Xin Wang, Feifei Wang
From: Yi Chen <chenyi221@huawei.com>
The mgmt module is the administration module for the chip. It is
responsible for handling administration commands from the host and is
implemented mainly on top of the AEQ mechanism. This patch adds the
related data structures, wrapper interfaces and function code.
Signed-off-by: Yi Chen <chenyi221@huawei.com>
Reviewed-by: Xin Wang <wangxin679@h-partners.com>
Reviewed-by: Feifei Wang <wangfeifei40@huawei.com>
---
drivers/net/hinic3/base/hinic3_mgmt.c | 392 ++++++++++++++++++++++++++
drivers/net/hinic3/base/hinic3_mgmt.h | 121 ++++++++
2 files changed, 513 insertions(+)
create mode 100644 drivers/net/hinic3/base/hinic3_mgmt.c
create mode 100644 drivers/net/hinic3/base/hinic3_mgmt.h
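For readers of the diff below, this is the calling convention the new sync
channel expects; in this sketch only hinic3_msg_to_mgmt_sync() and
hinic3_global_func_id() are real driver entry points, while
FAKE_CMD_GET_STATS and the two structs are hypothetical:

/* Hypothetical caller of the sync mgmt channel (sketch only). */
struct fake_stats_req { u8 status; u8 version; u8 rsvd0[6]; u16 func_id; };
struct fake_stats_rsp { u8 status; u8 version; u8 rsvd0[6]; u64 rx_pkts; };

static int query_fake_stats(struct hinic3_hwdev *hwdev, u64 *rx_pkts)
{
	struct fake_stats_req req = {0};
	struct fake_stats_rsp rsp = {0};
	u16 out_size = sizeof(rsp);
	int err;

	req.func_id = hinic3_global_func_id(hwdev);
	/* The mod/cmd pair selects the handler on the management CPU. */
	err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_L2NIC,
				      FAKE_CMD_GET_STATS, &req, sizeof(req),
				      &rsp, &out_size, 0);
	if (err || !out_size || rsp.status)
		return -EIO;

	*rx_pkts = rsp.rx_pkts;
	return 0;
}

Every command in the series follows this shape: a request struct led by the
status/version header, an out_size initialized to the reply size, and a
combined check of err, out_size and the firmware status byte.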
diff --git a/drivers/net/hinic3/base/hinic3_mgmt.c b/drivers/net/hinic3/base/hinic3_mgmt.c
new file mode 100644
index 0000000000..a755c5aa97
--- /dev/null
+++ b/drivers/net/hinic3/base/hinic3_mgmt.c
@@ -0,0 +1,392 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Huawei Technologies Co., Ltd
+ */
+#include <rte_ethdev.h>
+
+#include "hinic3_compat.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_mbox.h"
+#include "hinic3_mgmt.h"
+#include "hinic3_nic_event.h"
+
+#define HINIC3_MSG_TO_MGMT_MAX_LEN 2016
+
+#define MAX_PF_MGMT_BUF_SIZE 2048UL
+#define SEGMENT_LEN 48
+#define ASYNC_MSG_FLAG 0x20
+#define MGMT_MSG_MAX_SEQ_ID \
+ (RTE_ALIGN(HINIC3_MSG_TO_MGMT_MAX_LEN, SEGMENT_LEN) / SEGMENT_LEN)
+
+#define BUF_OUT_DEFAULT_SIZE 1
+
+#define MGMT_MSG_SIZE_MIN 20
+#define MGMT_MSG_SIZE_STEP 16
+#define MGMT_MSG_RSVD_FOR_DEV 8
+
+#define SYNC_MSG_ID_MASK 0x1F
+#define ASYNC_MSG_ID_MASK 0x1F
+
+#define SYNC_FLAG 0
+#define ASYNC_FLAG 1
+
+#define MSG_NO_RESP 0xFFFF
+
+#define MGMT_MSG_TIMEOUT 5000 /**< Millisecond. */
+
+int
+hinic3_msg_to_mgmt_sync(void *hwdev, enum hinic3_mod_type mod, u16 cmd,
+ void *buf_in, u16 in_size, void *buf_out, u16 *out_size,
+ u32 timeout)
+{
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ /* Send a mailbox message to the management. */
+ err = hinic3_send_mbox_to_mgmt(hwdev, mod, cmd, buf_in, in_size,
+ buf_out, out_size, timeout);
+ return err;
+}
+
+int
+hinic3_msg_to_mgmt_no_ack(void *hwdev, enum hinic3_mod_type mod, u16 cmd,
+ void *buf_in, u16 in_size)
+{
+ if (!hwdev)
+ return -EINVAL;
+
+ return hinic3_send_mbox_to_mgmt_no_ack(hwdev, mod, cmd, buf_in,
+ in_size);
+}
+
+static void
+send_mgmt_ack(struct hinic3_msg_pf_to_mgmt *pf_to_mgmt,
+ enum hinic3_mod_type mod, u16 cmd, void *buf_in, u16 in_size,
+ u16 msg_id)
+{
+ u16 buf_size;
+
+ if (!in_size)
+ buf_size = BUF_OUT_DEFAULT_SIZE;
+ else
+ buf_size = in_size;
+
+ hinic3_response_mbox_to_mgmt(pf_to_mgmt->hwdev, mod, cmd, buf_in,
+ buf_size, msg_id);
+}
+
+static bool
+check_mgmt_seq_id_and_seg_len(struct hinic3_recv_msg *recv_msg, u8 seq_id,
+ u8 seg_len, u16 msg_id)
+{
+ if (seq_id > MGMT_MSG_MAX_SEQ_ID || seg_len > SEGMENT_LEN)
+ return false;
+
+ if (seq_id == 0) {
+ recv_msg->seq_id = seq_id;
+ recv_msg->msg_id = msg_id;
+ } else {
+ if ((seq_id != recv_msg->seq_id + 1) ||
+ msg_id != recv_msg->msg_id) {
+ recv_msg->seq_id = 0;
+ return false;
+ }
+
+ recv_msg->seq_id = seq_id;
+ }
+
+ return true;
+}
+
+static void
+hinic3_mgmt_recv_msg_handler(struct hinic3_msg_pf_to_mgmt *pf_to_mgmt,
+ struct hinic3_recv_msg *recv_msg,
+ __rte_unused void *param)
+{
+ void *buf_out = pf_to_mgmt->mgmt_ack_buf;
+ bool ack_first = false;
+ u16 out_size = 0;
+
+ memset(buf_out, 0, MAX_PF_MGMT_BUF_SIZE);
+
+ /* Select the corresponding processing function according to the mod. */
+ switch (recv_msg->mod) {
+ case HINIC3_MOD_COMM:
+ pf_handle_mgmt_comm_event(pf_to_mgmt->hwdev,
+ pf_to_mgmt, recv_msg->cmd,
+ recv_msg->msg, recv_msg->msg_len, buf_out, &out_size);
+ break;
+ case HINIC3_MOD_L2NIC:
+ hinic3_pf_event_handler(pf_to_mgmt->hwdev, pf_to_mgmt,
+ recv_msg->cmd, recv_msg->msg,
+ recv_msg->msg_len, buf_out, &out_size);
+ break;
+ case HINIC3_MOD_HILINK:
+ hinic3_pf_mag_event_handler(pf_to_mgmt->hwdev,
+ pf_to_mgmt, recv_msg->cmd,
+ recv_msg->msg, recv_msg->msg_len, buf_out, &out_size);
+ break;
+
+ default:
+ PMD_DRV_LOG(ERR,
+ "Not support mod, maybe need to response, mod: %d",
+ recv_msg->mod);
+ break;
+ }
+
+ if (!ack_first && !recv_msg->async_mgmt_to_pf)
+ /* Mgmt sent a synchronous message; send the response. */
+ send_mgmt_ack(pf_to_mgmt, recv_msg->mod, recv_msg->cmd, buf_out,
+ out_size, recv_msg->msg_id);
+}
+
+/**
+ * Handle a received message from the mgmt channel.
+ *
+ * @param[in] pf_to_mgmt
+ * PF to mgmt channel.
+ * @param[in] header
+ * The message header, followed by the message body.
+ * @param[in] recv_msg
+ * Received message details.
+ * @param[in] param
+ * Customized parameter.
+ * @return
+ * 0 : When the AEQE is a response message.
+ * -1 : Default result for request messages, including invalid segments and
+ * segments that are not yet the last one.
+ */
+static int
+recv_mgmt_msg_handler(struct hinic3_msg_pf_to_mgmt *pf_to_mgmt, u8 *header,
+ struct hinic3_recv_msg *recv_msg, void *param)
+{
+ u64 mbox_header = *((u64 *)header);
+ void *msg_body = header + sizeof(mbox_header);
+ u8 seq_id, seq_len;
+ u32 offset;
+ u8 front_id;
+ u16 msg_id;
+
+ /* Don't need to get anything from HW for response messages. */
+ if (HINIC3_MSG_HEADER_GET(mbox_header, DIRECTION) ==
+ HINIC3_MSG_RESPONSE)
+ return 0;
+
+ seq_len = HINIC3_MSG_HEADER_GET(mbox_header, SEG_LEN);
+ seq_id = HINIC3_MSG_HEADER_GET(mbox_header, SEQID);
+ msg_id = HINIC3_MSG_HEADER_GET(mbox_header, MSG_ID);
+ front_id = recv_msg->seq_id;
+
+ /* Check the consistency between seq_id and seg_len. */
+ if (!check_mgmt_seq_id_and_seg_len(recv_msg, seq_id, seq_len, msg_id)) {
+ PMD_DRV_LOG(ERR,
+ "Mgmt msg sequence id and segment length check "
+ "failed, front seq_id: 0x%x, current seq_id: 0x%x,"
+ " seg len: 0x%x front msg_id: %d, cur msg_id: %d",
+ front_id, seq_id, seq_len, recv_msg->msg_id,
+ msg_id);
+ /* Set seq_id to invalid seq_id. */
+ recv_msg->seq_id = MGMT_MSG_MAX_SEQ_ID;
+ return HINIC3_MSG_HANDLER_RES;
+ }
+
+ offset = seq_id * SEGMENT_LEN;
+ memcpy((u8 *)recv_msg->msg + offset, msg_body, seq_len);
+
+ if (!HINIC3_MSG_HEADER_GET(mbox_header, LAST))
+ return HINIC3_MSG_HANDLER_RES;
+ /* Setting the message receiving information. */
+ recv_msg->cmd = HINIC3_MSG_HEADER_GET(mbox_header, CMD);
+ recv_msg->mod = HINIC3_MSG_HEADER_GET(mbox_header, MODULE);
+ recv_msg->async_mgmt_to_pf = HINIC3_MSG_HEADER_GET(mbox_header, NO_ACK);
+ recv_msg->msg_len = HINIC3_MSG_HEADER_GET(mbox_header, MSG_LEN);
+ recv_msg->msg_id = HINIC3_MSG_HEADER_GET(mbox_header, MSG_ID);
+ recv_msg->seq_id = MGMT_MSG_MAX_SEQ_ID;
+
+ hinic3_mgmt_recv_msg_handler(pf_to_mgmt, recv_msg, param);
+
+ return HINIC3_MSG_HANDLER_RES;
+}
+
+/**
+ * Handler for an AEQE from the mgmt channel.
+ *
+ * @param[in] hwdev
+ * The pointer to the private hardware device.
+ * @param[in] header
+ * The header of the message.
+ * @param[in] size
+ * Indicate size.
+ * @param[in] param
+ * Customized parameter.
+ * @return
+ * zero: When the AEQE is a response message.
+ * negative: When the message is invalid or not yet complete.
+ */
+int
+hinic3_mgmt_msg_aeqe_handler(void *hwdev, u8 *header, u8 size, void *param)
+{
+ struct hinic3_hwdev *dev = (struct hinic3_hwdev *)hwdev;
+ struct hinic3_msg_pf_to_mgmt *pf_to_mgmt = NULL;
+ struct hinic3_recv_msg *recv_msg = NULL;
+ bool is_send_dir = false;
+
+ /* For mbox message, invoke the mailbox processing function. */
+ if ((HINIC3_MSG_HEADER_GET(*(u64 *)header, SOURCE) ==
+ HINIC3_MSG_FROM_MBOX)) {
+ return hinic3_mbox_func_aeqe_handler(hwdev, header, size,
+ param);
+ }
+
+ pf_to_mgmt = dev->pf_to_mgmt;
+
+ is_send_dir = (HINIC3_MSG_HEADER_GET(*(u64 *)header, DIRECTION) ==
+ HINIC3_MSG_DIRECT_SEND) ? true : false;
+
+ /* Determine whether the message is a request or a response. */
+ recv_msg = is_send_dir ? &pf_to_mgmt->recv_msg_from_mgmt
+ : &pf_to_mgmt->recv_resp_msg_from_mgmt;
+
+ return recv_mgmt_msg_handler(pf_to_mgmt, header, recv_msg, param);
+}
+
+/**
+ * Allocate received message memory.
+ *
+ * @param[in] recv_msg
+ * Pointer that will hold the allocated data.
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+alloc_recv_msg(struct hinic3_recv_msg *recv_msg)
+{
+ recv_msg->seq_id = MGMT_MSG_MAX_SEQ_ID;
+
+ recv_msg->msg = rte_zmalloc("recv_msg", MAX_PF_MGMT_BUF_SIZE,
+ HINIC3_MEM_ALLOC_ALIGN_MIN);
+ if (!recv_msg->msg)
+ return -ENOMEM;
+
+ return 0;
+}
+
+static void
+free_recv_msg(struct hinic3_recv_msg *recv_msg)
+{
+ rte_free(recv_msg->msg);
+}
+
+/**
+ * Allocate all the message buffers of PF to mgmt channel.
+ *
+ * @param[in] pf_to_mgmt
+ * PF to mgmt channel.
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+alloc_msg_buf(struct hinic3_msg_pf_to_mgmt *pf_to_mgmt)
+{
+ int err;
+
+ err = alloc_recv_msg(&pf_to_mgmt->recv_msg_from_mgmt);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Allocate recv msg failed");
+ return err;
+ }
+
+ err = alloc_recv_msg(&pf_to_mgmt->recv_resp_msg_from_mgmt);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Allocate resp recv msg failed");
+ goto alloc_msg_for_resp_err;
+ }
+
+ pf_to_mgmt->mgmt_ack_buf = rte_zmalloc("mgmt_ack_buf",
+ MAX_PF_MGMT_BUF_SIZE,
+ HINIC3_MEM_ALLOC_ALIGN_MIN);
+ if (!pf_to_mgmt->mgmt_ack_buf) {
+ err = -ENOMEM;
+ goto ack_msg_buf_err;
+ }
+
+ return 0;
+
+ack_msg_buf_err:
+ free_recv_msg(&pf_to_mgmt->recv_resp_msg_from_mgmt);
+
+alloc_msg_for_resp_err:
+ free_recv_msg(&pf_to_mgmt->recv_msg_from_mgmt);
+ return err;
+}
+
+/**
+ * Free all the message buffers of PF to mgmt channel.
+ *
+ * @param[in] pf_to_mgmt
+ * PF to mgmt channel.
+ */
+static void
+free_msg_buf(struct hinic3_msg_pf_to_mgmt *pf_to_mgmt)
+{
+ rte_free(pf_to_mgmt->mgmt_ack_buf);
+ free_recv_msg(&pf_to_mgmt->recv_resp_msg_from_mgmt);
+ free_recv_msg(&pf_to_mgmt->recv_msg_from_mgmt);
+}
+
+/**
+ * Initialize PF to mgmt channel.
+ *
+ * @param[in] hwdev
+ * The pointer to the private hardware device.
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int
+hinic3_pf_to_mgmt_init(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_msg_pf_to_mgmt *pf_to_mgmt;
+ int err;
+
+ pf_to_mgmt = rte_zmalloc("pf_to_mgmt", sizeof(*pf_to_mgmt),
+ HINIC3_MEM_ALLOC_ALIGN_MIN);
+ if (!pf_to_mgmt)
+ return -ENOMEM;
+
+ hwdev->pf_to_mgmt = pf_to_mgmt;
+ pf_to_mgmt->hwdev = hwdev;
+
+ err = hinic3_mutex_init(&pf_to_mgmt->sync_msg_mutex, NULL);
+ if (err)
+ goto mutex_init_err;
+
+ err = alloc_msg_buf(pf_to_mgmt);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Allocate msg buffers failed");
+ goto alloc_msg_buf_err;
+ }
+
+ return 0;
+
+alloc_msg_buf_err:
+ hinic3_mutex_destroy(&pf_to_mgmt->sync_msg_mutex);
+
+mutex_init_err:
+ rte_free(pf_to_mgmt);
+
+ return err;
+}
+
+/**
+ * Free PF to mgmt channel.
+ *
+ * @param[in] hwdev
+ * The pointer to the private hardware device.
+ */
+void
+hinic3_pf_to_mgmt_free(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_msg_pf_to_mgmt *pf_to_mgmt = hwdev->pf_to_mgmt;
+
+ free_msg_buf(pf_to_mgmt);
+ hinic3_mutex_destroy(&pf_to_mgmt->sync_msg_mutex);
+ rte_free(pf_to_mgmt);
+}
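A note on the reassembly arithmetic in check_mgmt_seq_id_and_seg_len() and
recv_mgmt_msg_handler() above: a management message of up to 2016 bytes
arrives in 48-byte segments, so a full-length message uses seq_id 0..41 and
each segment is copied to offset seq_id * SEGMENT_LEN, while
MGMT_MSG_MAX_SEQ_ID (RTE_ALIGN(2016, 48) / 48 = 42) doubles as the "no
message in progress" marker. A standalone sketch of the same bookkeeping,
assuming strict in-order delivery:

/* Sketch of the segment bookkeeping (constants copied from above). */
#include <stdint.h>
#include <string.h>

#define SEGMENT_LEN 48
#define MAX_MSG_LEN 2016
#define MAX_SEQ_ID (MAX_MSG_LEN / SEGMENT_LEN) /* 42, used as invalid mark */

struct reasm {
	uint8_t buf[MAX_MSG_LEN];
	uint8_t expect; /* Next seq_id we will accept. */
};

/* Accept one segment; returns 0 on success, -1 on a sequence break. */
static int reasm_add(struct reasm *r, uint8_t seq_id, const uint8_t *seg,
		     uint8_t seg_len)
{
	if (seq_id >= MAX_SEQ_ID || seg_len > SEGMENT_LEN)
		return -1;
	if (seq_id == 0)
		r->expect = 0; /* First segment restarts reassembly. */
	else if (seq_id != r->expect)
		return -1; /* Out of order: drop the whole message. */

	memcpy(r->buf + (size_t)seq_id * SEGMENT_LEN, seg, seg_len);
	r->expect = seq_id + 1;
	return 0;
}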
diff --git a/drivers/net/hinic3/base/hinic3_mgmt.h b/drivers/net/hinic3/base/hinic3_mgmt.h
new file mode 100644
index 0000000000..23454773b9
--- /dev/null
+++ b/drivers/net/hinic3/base/hinic3_mgmt.h
@@ -0,0 +1,121 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Huawei Technologies Co., Ltd
+ */
+
+#ifndef _HINIC3_MGMT_H_
+#define _HINIC3_MGMT_H_
+
+#define HINIC3_MSG_HANDLER_RES (-1)
+
+struct mgmt_msg_head {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+};
+
+/* Cmdq module type. */
+enum hinic3_mod_type {
+ HINIC3_MOD_COMM = 0, /**< HW communication module. */
+ HINIC3_MOD_L2NIC = 1, /**< L2NIC module. */
+ HINIC3_MOD_ROCE = 2,
+ HINIC3_MOD_PLOG = 3,
+ HINIC3_MOD_TOE = 4,
+ HINIC3_MOD_FLR = 5,
+ HINIC3_MOD_FC = 6,
+ HINIC3_MOD_CFGM = 7, /**< Configuration module. */
+ HINIC3_MOD_CQM = 8,
+ HINIC3_MOD_VSWITCH = 9,
+ COMM_MOD_FC = 10,
+ HINIC3_MOD_OVS = 11,
+ HINIC3_MOD_DSW = 12,
+ HINIC3_MOD_MIGRATE = 13,
+ HINIC3_MOD_HILINK = 14,
+ HINIC3_MOD_CRYPT = 15, /**< Secure crypto module. */
+ HINIC3_MOD_HW_MAX = 16, /**< Hardware max module id. */
+
+ HINIC3_MOD_SW_FUNC = 17, /**< SW module for PF/VF and multi-host. */
+ HINIC3_MOD_IOE = 18,
+ HINIC3_MOD_MAX
+};
+
+typedef enum {
+ RES_TYPE_FLUSH_BIT = 0,
+ RES_TYPE_MQM,
+ RES_TYPE_SMF,
+
+ RES_TYPE_COMM = 10,
+ /* Clear mbox and aeq; the RES_TYPE_COMM bit must be set. */
+ RES_TYPE_COMM_MGMT_CH,
+ /* Clear cmdq and ceq; the RES_TYPE_COMM bit must be set. */
+ RES_TYPE_COMM_CMD_CH,
+ RES_TYPE_NIC,
+ RES_TYPE_OVS,
+ RES_TYPE_VBS,
+ RES_TYPE_ROCE,
+ RES_TYPE_FC,
+ RES_TYPE_TOE,
+ RES_TYPE_IPSEC,
+ RES_TYPE_MAX,
+} func_reset_flag_e;
+
+#define HINIC3_COMM_RES \
+ ((1 << RES_TYPE_COMM) | (1 << RES_TYPE_FLUSH_BIT) | \
+ (1 << RES_TYPE_MQM) | (1 << RES_TYPE_SMF) | \
+ (1 << RES_TYPE_COMM_CMD_CH))
+#define HINIC3_NIC_RES (1 << RES_TYPE_NIC)
+#define HINIC3_OVS_RES (1 << RES_TYPE_OVS)
+#define HINIC3_VBS_RES (1 << RES_TYPE_VBS)
+#define HINIC3_ROCE_RES (1 << RES_TYPE_ROCE)
+#define HINIC3_FC_RES (1 << RES_TYPE_FC)
+#define HINIC3_TOE_RES (1 << RES_TYPE_TOE)
+#define HINIC3_IPSEC_RES (1 << RES_TYPE_IPSEC)
+
+struct hinic3_recv_msg {
+ void *msg;
+
+ u16 msg_len;
+ enum hinic3_mod_type mod;
+ u16 cmd;
+ u8 seq_id;
+ u16 msg_id;
+ int async_mgmt_to_pf;
+};
+
+/* Indicate the event status in pf-to-management communication. */
+enum comm_pf_to_mgmt_event_state {
+ SEND_EVENT_UNINIT = 0,
+ SEND_EVENT_START,
+ SEND_EVENT_SUCCESS,
+ SEND_EVENT_FAIL,
+ SEND_EVENT_TIMEOUT,
+ SEND_EVENT_END
+};
+
+struct hinic3_msg_pf_to_mgmt {
+ struct hinic3_hwdev *hwdev;
+
+ /* Mutex for sync message. */
+ pthread_mutex_t sync_msg_mutex;
+
+ void *mgmt_ack_buf;
+
+ struct hinic3_recv_msg recv_msg_from_mgmt;
+ struct hinic3_recv_msg recv_resp_msg_from_mgmt;
+
+ u16 sync_msg_id;
+};
+
+int hinic3_mgmt_msg_aeqe_handler(void *hwdev, u8 *header, u8 size, void *param);
+
+int hinic3_pf_to_mgmt_init(struct hinic3_hwdev *hwdev);
+
+void hinic3_pf_to_mgmt_free(struct hinic3_hwdev *hwdev);
+
+int hinic3_msg_to_mgmt_sync(void *hwdev, enum hinic3_mod_type mod, u16 cmd,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size, u32 timeout);
+
+int hinic3_msg_to_mgmt_no_ack(void *hwdev, enum hinic3_mod_type mod, u16 cmd,
+ void *buf_in, u16 in_size);
+
+#endif /**< _HINIC3_MGMT_H_ */
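The func_reset_flag_e bits above compose into the masks passed to
hinic3_func_reset() (added by the following patch). HINIC3_COMM_RES, for
example, evaluates to 0x1407 (bits 0, 1, 2, 10 and 12), and RES_TYPE_NIC is
bit 13. A hedged usage sketch, assuming the driver headers are in scope:

/* Sketch: composing reset flags for a full NIC function reset. */
static int full_nic_reset(struct hinic3_hwdev *hwdev)
{
	/* HINIC3_COMM_RES == 0x1407 (bits 0,1,2,10,12); NIC is bit 13. */
	u64 reset_flag = HINIC3_COMM_RES | HINIC3_NIC_RES;

	return hinic3_func_reset(hwdev, reset_flag);
}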
--
2.47.0.windows.2
^ permalink raw reply [flat|nested] 30+ messages in thread
* [RFC 08/18] net/hinic3: add module about hardware operation
2025-04-18 9:05 [RFC 00/18] add hinic3 PMD driver Feifei Wang
` (6 preceding siblings ...)
2025-04-18 9:05 ` [RFC 07/18] net/hinic3: add mgmt module " Feifei Wang
@ 2025-04-18 9:05 ` Feifei Wang
2025-04-18 9:05 ` [RFC 09/18] net/hinic3: add a NIC business configuration module Feifei Wang
` (12 subsequent siblings)
20 siblings, 0 replies; 30+ messages in thread
From: Feifei Wang @ 2025-04-18 9:05 UTC (permalink / raw)
To: dev; +Cc: Yi Chen, Xin Wang, Feifei Wang
From: Yi Chen <chenyi221@huawei.com>
Add code and data structures for hardware operation, including
configuration, query, initialization and release.
Signed-off-by: Yi Chen <chenyi221@huawei.com>
Signed-off-by: Xin Wang <wangxin679@h-partners.com>
Reviewed-by: Feifei Wang <wangfeifei40@huawei.com>
---
drivers/net/hinic3/base/hinic3_hw_cfg.c | 240 ++++++++++
drivers/net/hinic3/base/hinic3_hw_cfg.h | 121 +++++
drivers/net/hinic3/base/hinic3_hw_comm.c | 452 ++++++++++++++++++
drivers/net/hinic3/base/hinic3_hw_comm.h | 366 +++++++++++++++
drivers/net/hinic3/base/hinic3_hwdev.c | 573 +++++++++++++++++++++++
drivers/net/hinic3/base/hinic3_hwdev.h | 177 +++++++
6 files changed, 1929 insertions(+)
create mode 100644 drivers/net/hinic3/base/hinic3_hw_cfg.c
create mode 100644 drivers/net/hinic3/base/hinic3_hw_cfg.h
create mode 100644 drivers/net/hinic3/base/hinic3_hw_comm.c
create mode 100644 drivers/net/hinic3/base/hinic3_hw_comm.h
create mode 100644 drivers/net/hinic3/base/hinic3_hwdev.c
create mode 100644 drivers/net/hinic3/base/hinic3_hwdev.h
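As a reading aid for the diff below, the probe-time capability flow the new
files implement is roughly: allocate the cfg_mgmt state, query the
management CPU for device capabilities, then read the parsed limits. A
condensed sketch using only functions from this patch (error unwinding
elided):

/* Condensed probe-time capability flow (sketch, no error unwinding). */
static int probe_caps_sketch(struct hinic3_hwdev *hwdev)
{
	int err;

	err = hinic3_init_cfg_mgmt(hwdev); /* Allocate cfg_mgmt state. */
	if (err)
		return err;

	err = hinic3_init_capability(hwdev); /* HINIC3_CFG_CMD_GET_DEV_CAP. */
	if (err)
		return err;

	PMD_DRV_LOG(INFO, "max SQs %u, max RQs %u, port %u",
		    hinic3_func_max_sqs(hwdev), hinic3_func_max_rqs(hwdev),
		    hinic3_physical_port_id(hwdev));
	return 0;
}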
diff --git a/drivers/net/hinic3/base/hinic3_hw_cfg.c b/drivers/net/hinic3/base/hinic3_hw_cfg.c
new file mode 100644
index 0000000000..ebe746a9ae
--- /dev/null
+++ b/drivers/net/hinic3/base/hinic3_hw_cfg.c
@@ -0,0 +1,240 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Huawei Technologies Co., Ltd
+ */
+
+#include "hinic3_compat.h"
+#include "hinic3_mbox.h"
+#include "hinic3_mgmt.h"
+#include "hinic3_hw_cfg.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_hwif.h"
+
+static void
+parse_pub_res_cap(struct service_cap *cap,
+ struct hinic3_cfg_cmd_dev_cap *dev_cap, enum func_type type)
+{
+ cap->host_id = dev_cap->host_id;
+ cap->ep_id = dev_cap->ep_id;
+ cap->er_id = dev_cap->er_id;
+ cap->port_id = dev_cap->port_id;
+
+ cap->svc_type = dev_cap->svc_cap_en;
+ cap->chip_svc_type = cap->svc_type;
+
+ cap->cos_valid_bitmap = dev_cap->valid_cos_bitmap;
+ cap->flexq_en = dev_cap->flexq_en;
+
+ cap->host_total_function = dev_cap->host_total_func;
+ cap->max_vf = 0;
+ if (type == TYPE_PF || type == TYPE_PPF) {
+ cap->max_vf = dev_cap->max_vf;
+ cap->pf_num = dev_cap->host_pf_num;
+ cap->pf_id_start = dev_cap->pf_id_start;
+ cap->vf_num = dev_cap->host_vf_num;
+ cap->vf_id_start = dev_cap->vf_id_start;
+ }
+
+ PMD_DRV_LOG(INFO, "Get public resource capability: ");
+ PMD_DRV_LOG(INFO,
+ "host_id: 0x%x, ep_id: 0x%x, er_id: 0x%x, port_id: 0x%x",
+ cap->host_id, cap->ep_id, cap->er_id, cap->port_id);
+ PMD_DRV_LOG(INFO, "host_total_function: 0x%x, max_vf: 0x%x",
+ cap->host_total_function, cap->max_vf);
+ PMD_DRV_LOG(INFO,
+ "host_pf_num: 0x%x, pf_id_start: 0x%x, host_vf_num: 0x%x, "
+ "vf_id_start: 0x%x",
+ cap->pf_num, cap->pf_id_start, cap->vf_num,
+ cap->vf_id_start);
+}
+
+static void
+parse_l2nic_res_cap(struct service_cap *cap,
+ struct hinic3_cfg_cmd_dev_cap *dev_cap)
+{
+ struct nic_service_cap *nic_cap = &cap->nic_cap;
+
+ nic_cap->max_sqs = dev_cap->nic_max_sq_id + 1;
+ nic_cap->max_rqs = dev_cap->nic_max_rq_id + 1;
+
+ PMD_DRV_LOG(INFO,
+ "L2nic resource capbility, "
+ "max_sqs: 0x%x, max_rqs: 0x%x",
+ nic_cap->max_sqs, nic_cap->max_rqs);
+}
+
+static void
+parse_dev_cap(struct hinic3_hwdev *dev, struct hinic3_cfg_cmd_dev_cap *dev_cap,
+ enum func_type type)
+{
+ struct service_cap *cap = &dev->cfg_mgmt->svc_cap;
+
+ parse_pub_res_cap(cap, dev_cap, type);
+
+ if (IS_NIC_TYPE(dev))
+ parse_l2nic_res_cap(cap, dev_cap);
+}
+
+static int
+get_cap_from_fw(struct hinic3_hwdev *hwdev, enum func_type type)
+{
+ struct hinic3_cfg_cmd_dev_cap dev_cap;
+ u16 out_len = sizeof(dev_cap);
+ int err;
+
+ memset(&dev_cap, 0, sizeof(dev_cap));
+ dev_cap.func_id = hinic3_global_func_id(hwdev);
+ err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_CFGM,
+ HINIC3_CFG_CMD_GET_DEV_CAP, &dev_cap,
+ sizeof(dev_cap), &dev_cap, &out_len, 0);
+ if (err || dev_cap.status || !out_len) {
+ PMD_DRV_LOG(ERR,
+ "Get capability from FW failed, "
+ "err: %d, status: 0x%x, out size: 0x%x",
+ err, dev_cap.status, out_len);
+ return -EFAULT;
+ }
+
+ parse_dev_cap(hwdev, &dev_cap, type);
+ return 0;
+}
+
+static int
+get_dev_cap(struct hinic3_hwdev *hwdev)
+{
+ enum func_type type = HINIC3_FUNC_TYPE(hwdev);
+
+ switch (type) {
+ case TYPE_PF:
+ case TYPE_PPF:
+ case TYPE_VF:
+ if (get_cap_from_fw(hwdev, type) != 0)
+ return -EFAULT;
+ break;
+ default:
+ PMD_DRV_LOG(ERR, "Unsupported PCIe function type: %d", type);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+int
+cfg_mbx_vf_proc_msg(void *hwdev, __rte_unused void *pri_handle, u16 cmd,
+ __rte_unused void *buf_in, __rte_unused u16 in_size,
+ __rte_unused void *buf_out, __rte_unused u16 *out_size)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!dev)
+ return -EINVAL;
+
+ PMD_DRV_LOG(WARNING, "Unsupported cfg mbox vf event %d to process",
+ cmd);
+
+ return 0;
+}
+
+int
+hinic3_init_cfg_mgmt(void *dev)
+{
+ struct hinic3_hwdev *hwdev = (struct hinic3_hwdev *)dev;
+ struct cfg_mgmt_info *cfg_mgmt = NULL;
+
+ cfg_mgmt = rte_zmalloc("cfg_mgmt", sizeof(*cfg_mgmt),
+ HINIC3_MEM_ALLOC_ALIGN_MIN);
+ if (!cfg_mgmt)
+ return -ENOMEM;
+
+ memset(cfg_mgmt, 0, sizeof(struct cfg_mgmt_info));
+ hwdev->cfg_mgmt = cfg_mgmt;
+ cfg_mgmt->hwdev = hwdev;
+
+ return 0;
+}
+
+int
+hinic3_init_capability(void *dev)
+{
+ struct hinic3_hwdev *hwdev = (struct hinic3_hwdev *)dev;
+
+ return get_dev_cap(hwdev);
+}
+
+void
+hinic3_deinit_cfg_mgmt(void *dev)
+{
+ rte_free(((struct hinic3_hwdev *)dev)->cfg_mgmt);
+ ((struct hinic3_hwdev *)dev)->cfg_mgmt = NULL;
+}
+
+#ifdef HINIC3_RELEASE
+static bool
+hinic3_support_nic(void *hwdev, struct nic_service_cap *cap)
+{
+ struct hinic3_hwdev *dev = (struct hinic3_hwdev *)hwdev;
+
+ if (!hwdev)
+ return false;
+
+ if (!IS_NIC_TYPE(dev))
+ return false;
+
+ if (cap)
+ memcpy(cap, &dev->cfg_mgmt->svc_cap.nic_cap, sizeof(*cap));
+
+ return true;
+}
+
+static bool
+hinic3_func_for_mgmt(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!hwdev)
+ return false;
+
+ if (dev->cfg_mgmt->svc_cap.chip_svc_type >= CFG_SVC_NIC_BIT0)
+ return false;
+ else
+ return true;
+}
+#endif
+
+u16
+hinic3_func_max_sqs(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!dev) {
+ PMD_DRV_LOG(INFO, "Hwdev is NULL for getting max_sqs");
+ return 0;
+ }
+
+ return dev->cfg_mgmt->svc_cap.nic_cap.max_sqs;
+}
+
+u16
+hinic3_func_max_rqs(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!dev) {
+ PMD_DRV_LOG(INFO, "Hwdev is NULL for getting max_rqs");
+ return 0;
+ }
+
+ return dev->cfg_mgmt->svc_cap.nic_cap.max_rqs;
+}
+
+u8
+hinic3_physical_port_id(void *hwdev)
+{
+ struct hinic3_hwdev *dev = hwdev;
+
+ if (!dev) {
+ PMD_DRV_LOG(INFO, "Hwdev is NULL for getting physical port id");
+ return 0;
+ }
+
+ return dev->cfg_mgmt->svc_cap.port_id;
+}
diff --git a/drivers/net/hinic3/base/hinic3_hw_cfg.h b/drivers/net/hinic3/base/hinic3_hw_cfg.h
new file mode 100644
index 0000000000..8ded52faa9
--- /dev/null
+++ b/drivers/net/hinic3/base/hinic3_hw_cfg.h
@@ -0,0 +1,121 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Huawei Technologies Co., Ltd
+ */
+
+#ifndef _HINIC3_HW_CFG_H_
+#define _HINIC3_HW_CFG_H_
+
+#define CFG_MAX_CMD_TIMEOUT 30000 /**< ms */
+
+#define K_UNIT BIT(10)
+#define M_UNIT BIT(20)
+#define G_UNIT BIT(30)
+
+/* Number of PFs and VFs. */
+#define HOST_PF_NUM 4
+#define HOST_VF_NUM 0
+#define HOST_OQID_MASK_VAL 2
+
+#define L2NIC_SQ_DEPTH (4 * K_UNIT)
+#define L2NIC_RQ_DEPTH (4 * K_UNIT)
+
+enum intr_type { INTR_TYPE_MSIX, INTR_TYPE_MSI, INTR_TYPE_INT, INTR_TYPE_NONE };
+
+/* Service type relates define. */
+enum cfg_svc_type_en { CFG_SVC_NIC_BIT0 = 1 };
+
+struct nic_service_cap {
+ u16 max_sqs;
+ u16 max_rqs;
+};
+
+/* Device capability. */
+struct service_cap {
+ enum cfg_svc_type_en svc_type; /**< User input service type. */
+ enum cfg_svc_type_en chip_svc_type; /**< HW supported service type. */
+
+ u8 host_id;
+ u8 ep_id;
+ u8 er_id; /**< PF/VF's ER. */
+ u8 port_id; /**< PF/VF's physical port. */
+
+ u16 host_total_function;
+ u8 pf_num;
+ u8 pf_id_start;
+ u16 vf_num; /**< Max numbers of vf in current host. */
+ u16 vf_id_start;
+
+ u8 flexq_en;
+ u8 cos_valid_bitmap;
+ u16 max_vf; /**< Max VF number that PF supported. */
+
+ struct nic_service_cap nic_cap; /**< NIC capability. */
+};
+
+struct cfg_mgmt_info {
+ void *hwdev;
+ struct service_cap svc_cap;
+};
+
+enum hinic3_cfg_cmd {
+ HINIC3_CFG_CMD_GET_DEV_CAP = 0,
+};
+
+struct hinic3_cfg_cmd_dev_cap {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u16 func_id;
+ u16 rsvd1;
+
+ /* Public resource. */
+ u8 host_id;
+ u8 ep_id;
+ u8 er_id;
+ u8 port_id;
+
+ u16 host_total_func;
+ u8 host_pf_num;
+ u8 pf_id_start;
+ u16 host_vf_num;
+ u16 vf_id_start;
+ u32 rsvd_host;
+
+ u16 svc_cap_en;
+ u16 max_vf;
+ u8 flexq_en;
+ u8 valid_cos_bitmap;
+ /* Reserved for func_valid_cos_bitmap. */
+ u16 rsvd_cos;
+
+ u32 rsvd[11];
+
+ /* l2nic */
+ u16 nic_max_sq_id;
+ u16 nic_max_rq_id;
+ u32 rsvd_nic[3];
+
+ u32 rsvd_glb[60];
+};
+
+#define IS_NIC_TYPE(dev) \
+ (((u32)(dev)->cfg_mgmt->svc_cap.chip_svc_type) & CFG_SVC_NIC_BIT0)
+
+int hinic3_init_cfg_mgmt(void *dev);
+int hinic3_init_capability(void *dev);
+void hinic3_deinit_cfg_mgmt(void *dev);
+
+u16 hinic3_func_max_sqs(void *hwdev);
+u16 hinic3_func_max_rqs(void *hwdev);
+
+u8 hinic3_physical_port_id(void *hwdev);
+
+int cfg_mbx_ppf_proc_msg(void *hwdev, void *pri_handle, u16 pf_id, u16 vf_id,
+ u16 cmd, void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size);
+
+int cfg_mbx_vf_proc_msg(void *hwdev, void *pri_handle, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size);
+
+#endif /* _HINIC3_HW_CFG_H_ */
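Since chip_svc_type is a bitmap, further service checks would follow the
IS_NIC_TYPE pattern above. A hypothetical example only, for illustration;
this RFC defines just CFG_SVC_NIC_BIT0:

/* Hypothetical extension of the service-type bitmap test. */
#define CFG_SVC_ROCE_BIT1 (1U << 1) /* Illustrative, not in this RFC. */

#define IS_ROCE_TYPE(dev) \
	(((u32)(dev)->cfg_mgmt->svc_cap.chip_svc_type) & CFG_SVC_ROCE_BIT1)

Keeping the service capability as a bitmap lets a single chip expose several
services to one function without changing the query command.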
diff --git a/drivers/net/hinic3/base/hinic3_hw_comm.c b/drivers/net/hinic3/base/hinic3_hw_comm.c
new file mode 100644
index 0000000000..d248db5b27
--- /dev/null
+++ b/drivers/net/hinic3/base/hinic3_hw_comm.c
@@ -0,0 +1,452 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Huawei Technologies Co., Ltd
+ */
+
+#include <rte_bus_pci.h>
+#include <rte_hash.h>
+#include <rte_jhash.h>
+
+#include "hinic3_compat.h"
+#include "hinic3_cmd.h"
+#include "hinic3_cmdq.h"
+#include "hinic3_hw_comm.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_hwif.h"
+#include "hinic3_mgmt.h"
+#include "hinic3_wq.h"
+
+/* Buffer sizes in hinic3_convert_rx_buf_size must be in ascending order. */
+const u32 hinic3_hw_rx_buf_size[] = {
+ HINIC3_RX_BUF_SIZE_32B,
+ HINIC3_RX_BUF_SIZE_64B,
+ HINIC3_RX_BUF_SIZE_96B,
+ HINIC3_RX_BUF_SIZE_128B,
+ HINIC3_RX_BUF_SIZE_192B,
+ HINIC3_RX_BUF_SIZE_256B,
+ HINIC3_RX_BUF_SIZE_384B,
+ HINIC3_RX_BUF_SIZE_512B,
+ HINIC3_RX_BUF_SIZE_768B,
+ HINIC3_RX_BUF_SIZE_1K,
+ HINIC3_RX_BUF_SIZE_1_5K,
+ HINIC3_RX_BUF_SIZE_2K,
+ HINIC3_RX_BUF_SIZE_3K,
+ HINIC3_RX_BUF_SIZE_4K,
+ HINIC3_RX_BUF_SIZE_8K,
+ HINIC3_RX_BUF_SIZE_16K,
+};
+
+int
+hinic3_get_interrupt_cfg(void *dev, struct interrupt_info *info)
+{
+ struct hinic3_hwdev *hwdev = dev;
+ struct hinic3_cmd_msix_config msix_cfg;
+ u16 out_size = sizeof(msix_cfg);
+ int err;
+
+ if (!hwdev || !info)
+ return -EINVAL;
+
+ memset(&msix_cfg, 0, sizeof(msix_cfg));
+ msix_cfg.func_id = hinic3_global_func_id(hwdev);
+ msix_cfg.msix_index = info->msix_index;
+ msix_cfg.opcode = HINIC3_MGMT_CMD_OP_GET;
+
+ err = hinic3_msg_to_mgmt_sync(hwdev,
+ HINIC3_MOD_COMM, HINIC3_MGMT_CMD_CFG_MSIX_CTRL_REG,
+ &msix_cfg, sizeof(msix_cfg), &msix_cfg, &out_size, 0);
+ if (err || !out_size || msix_cfg.status) {
+ PMD_DRV_LOG(ERR,
+ "Get interrupt config failed, "
+ "err: %d, status: 0x%x, out size: 0x%x",
+ err, msix_cfg.status, out_size);
+ return -EINVAL;
+ }
+
+ info->lli_credit_limit = msix_cfg.lli_credit_cnt;
+ info->lli_timer_cfg = msix_cfg.lli_tmier_cnt;
+ info->pending_limt = msix_cfg.pending_cnt;
+ info->coalesc_timer_cfg = msix_cfg.coalesct_timer_cnt;
+ info->resend_timer_cfg = msix_cfg.resend_timer_cnt;
+
+ return 0;
+}
+
+int
+hinic3_set_interrupt_cfg(void *dev, struct interrupt_info info)
+{
+ struct hinic3_hwdev *hwdev = dev;
+ struct hinic3_cmd_msix_config msix_cfg;
+ struct interrupt_info temp_info;
+ u16 out_size = sizeof(msix_cfg);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ temp_info.msix_index = info.msix_index;
+ err = hinic3_get_interrupt_cfg(hwdev, &temp_info);
+ if (err)
+ return -EIO;
+
+ memset(&msix_cfg, 0, sizeof(msix_cfg));
+ msix_cfg.func_id = hinic3_global_func_id(hwdev);
+ msix_cfg.msix_index = (u16)info.msix_index;
+ msix_cfg.opcode = HINIC3_MGMT_CMD_OP_SET;
+
+ msix_cfg.lli_credit_cnt = temp_info.lli_credit_limit;
+ msix_cfg.lli_tmier_cnt = temp_info.lli_timer_cfg;
+ msix_cfg.pending_cnt = temp_info.pending_limt;
+ msix_cfg.coalesct_timer_cnt = temp_info.coalesc_timer_cfg;
+ msix_cfg.resend_timer_cnt = temp_info.resend_timer_cfg;
+
+ if (info.lli_set) {
+ msix_cfg.lli_credit_cnt = info.lli_credit_limit;
+ msix_cfg.lli_tmier_cnt = info.lli_timer_cfg;
+ }
+
+ if (info.interrupt_coalesc_set) {
+ msix_cfg.pending_cnt = info.pending_limt;
+ msix_cfg.coalesct_timer_cnt = info.coalesc_timer_cfg;
+ msix_cfg.resend_timer_cnt = info.resend_timer_cfg;
+ }
+
+ err = hinic3_msg_to_mgmt_sync(hwdev,
+ HINIC3_MOD_COMM, HINIC3_MGMT_CMD_CFG_MSIX_CTRL_REG,
+ &msix_cfg, sizeof(msix_cfg), &msix_cfg, &out_size, 0);
+ if (err || !out_size || msix_cfg.status) {
+ PMD_DRV_LOG(ERR,
+ "Set interrupt config failed, "
+ "err: %d, status: 0x%x, out size: 0x%x",
+ err, msix_cfg.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int
+hinic3_set_wq_page_size(void *hwdev, u16 func_idx, u32 page_size)
+{
+ struct hinic3_cmd_wq_page_size page_size_info;
+ u16 out_size = sizeof(page_size_info);
+ int err;
+
+ memset(&page_size_info, 0, sizeof(page_size_info));
+ page_size_info.func_idx = func_idx;
+ page_size_info.page_size = HINIC3_PAGE_SIZE_HW(page_size);
+ page_size_info.opcode = HINIC3_MGMT_CMD_OP_SET;
+
+ err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_COMM,
+ HINIC3_MGMT_CMD_CFG_PAGESIZE,
+ &page_size_info, sizeof(page_size_info),
+ &page_size_info, &out_size, 0);
+ if (err || !out_size || page_size_info.status) {
+ PMD_DRV_LOG(ERR,
+ "Set wq page size failed, "
+ "err: %d, status: 0x%x, out_size: 0x%0x",
+ err, page_size_info.status, out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+int
+hinic3_func_reset(void *hwdev, u64 reset_flag)
+{
+ struct hinic3_reset func_reset;
+ struct hinic3_hwif *hwif = ((struct hinic3_hwdev *)hwdev)->hwif;
+ u16 out_size = sizeof(func_reset);
+ int err = 0;
+
+ PMD_DRV_LOG(INFO, "Function is reset");
+
+ memset(&func_reset, 0, sizeof(func_reset));
+ func_reset.func_id = HINIC3_HWIF_GLOBAL_IDX(hwif);
+ func_reset.reset_flag = reset_flag;
+ err = hinic3_msg_to_mgmt_sync(hwdev,
+ HINIC3_MOD_COMM, HINIC3_MGMT_CMD_FUNC_RESET, &func_reset,
+ sizeof(func_reset), &func_reset, &out_size, 0);
+ if (err || !out_size || func_reset.status) {
+ PMD_DRV_LOG(ERR,
+ "Reset func resources failed, "
+ "err: %d, status: 0x%x, out_size: 0x%x",
+ err, func_reset.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int
+hinic3_set_func_svc_used_state(void *hwdev, u16 svc_type, u8 state)
+{
+ struct comm_cmd_func_svc_used_state used_state;
+ u16 out_size = sizeof(used_state);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ memset(&used_state, 0, sizeof(used_state));
+ used_state.func_id = hinic3_global_func_id(hwdev);
+ used_state.svc_type = svc_type;
+ used_state.used_state = state;
+
+ err = hinic3_msg_to_mgmt_sync(hwdev,
+ HINIC3_MOD_COMM, HINIC3_MGMT_CMD_SET_FUNC_SVC_USED_STATE,
+ &used_state, sizeof(used_state), &used_state, &out_size, 0);
+ if (err || !out_size || used_state.status) {
+ PMD_DRV_LOG(ERR,
+ "Failed to set func service used state, "
+ "err: %d, status: 0x%x, out size: 0x%x",
+ err, used_state.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int
+hinic3_convert_rx_buf_size(u32 rx_buf_sz, u32 *match_sz)
+{
+ u32 i, num_hw_types, best_match_sz;
+
+ if (unlikely(!match_sz || rx_buf_sz < HINIC3_RX_BUF_SIZE_32B))
+ return -EINVAL;
+
+ if (rx_buf_sz >= HINIC3_RX_BUF_SIZE_16K) {
+ best_match_sz = HINIC3_RX_BUF_SIZE_16K;
+ goto size_matched;
+ }
+
+ if (rx_buf_sz >= HINIC3_RX_BUF_SIZE_4K) {
+ best_match_sz = ((rx_buf_sz >> RX_BUF_SIZE_1K_LEN)
+ << RX_BUF_SIZE_1K_LEN);
+ goto size_matched;
+ }
+
+ num_hw_types = sizeof(hinic3_hw_rx_buf_size) /
+ sizeof(hinic3_hw_rx_buf_size[0]);
+ best_match_sz = hinic3_hw_rx_buf_size[0];
+ for (i = 0; i < num_hw_types; i++) {
+ if (rx_buf_sz == hinic3_hw_rx_buf_size[i]) {
+ best_match_sz = hinic3_hw_rx_buf_size[i];
+ break;
+ } else if (rx_buf_sz < hinic3_hw_rx_buf_size[i]) {
+ break;
+ }
+ best_match_sz = hinic3_hw_rx_buf_size[i];
+ }
+
+size_matched:
+ *match_sz = best_match_sz;
+
+ return 0;
+}
+
+static u16
+get_hw_rx_buf_size(u32 rx_buf_sz)
+{
+ u16 num_hw_types = sizeof(hinic3_hw_rx_buf_size) /
+ sizeof(hinic3_hw_rx_buf_size[0]);
+ u16 i;
+
+ for (i = 0; i < num_hw_types; i++) {
+ if (hinic3_hw_rx_buf_size[i] == rx_buf_sz)
+ return i;
+ }
+
+ PMD_DRV_LOG(WARNING, "Chip can't support rx buf size of %d", rx_buf_sz);
+
+ return DEFAULT_RX_BUF_SIZE; /**< Default 2K. */
+}
+
+int
+hinic3_set_root_ctxt(void *hwdev, u32 rq_depth, u32 sq_depth, u16 rx_buf_sz)
+{
+ struct hinic3_cmd_root_ctxt root_ctxt;
+ u16 out_size = sizeof(root_ctxt);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ memset(&root_ctxt, 0, sizeof(root_ctxt));
+ root_ctxt.func_idx = hinic3_global_func_id(hwdev);
+ root_ctxt.set_cmdq_depth = 0;
+ root_ctxt.cmdq_depth = 0;
+ root_ctxt.lro_en = 1;
+ root_ctxt.rq_depth = (u16)ilog2(rq_depth);
+ root_ctxt.rx_buf_sz = get_hw_rx_buf_size(rx_buf_sz);
+ root_ctxt.sq_depth = (u16)ilog2(sq_depth);
+
+ err = hinic3_msg_to_mgmt_sync(hwdev,
+ HINIC3_MOD_COMM, HINIC3_MGMT_CMD_SET_VAT, &root_ctxt,
+ sizeof(root_ctxt), &root_ctxt, &out_size, 0);
+ if (err || !out_size || root_ctxt.status) {
+ PMD_DRV_LOG(ERR,
+ "Set root context failed, "
+ "err: %d, status: 0x%x, out_size: 0x%x",
+ err, root_ctxt.status, out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+int
+hinic3_clean_root_ctxt(void *hwdev)
+{
+ struct hinic3_cmd_root_ctxt root_ctxt;
+ u16 out_size = sizeof(root_ctxt);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ memset(&root_ctxt, 0, sizeof(root_ctxt));
+ root_ctxt.func_idx = hinic3_global_func_id(hwdev);
+
+ err = hinic3_msg_to_mgmt_sync(hwdev,
+ HINIC3_MOD_COMM, HINIC3_MGMT_CMD_SET_VAT, &root_ctxt,
+ sizeof(root_ctxt), &root_ctxt, &out_size, 0);
+ if (err || !out_size || root_ctxt.status) {
+ PMD_DRV_LOG(ERR,
+ "Clean root context failed, "
+ "err: %d, status: 0x%x, out_size: 0x%x",
+ err, root_ctxt.status, out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+int
+hinic3_set_cmdq_depth(void *hwdev, u16 cmdq_depth)
+{
+ struct hinic3_cmd_root_ctxt root_ctxt;
+ u16 out_size = sizeof(root_ctxt);
+ int err;
+
+ memset(&root_ctxt, 0, sizeof(root_ctxt));
+ root_ctxt.func_idx = hinic3_global_func_id(hwdev);
+ root_ctxt.set_cmdq_depth = 1;
+ root_ctxt.cmdq_depth = (u8)ilog2(cmdq_depth);
+
+ err = hinic3_msg_to_mgmt_sync(hwdev,
+ HINIC3_MOD_COMM, HINIC3_MGMT_CMD_SET_VAT, &root_ctxt,
+ sizeof(root_ctxt), &root_ctxt, &out_size, 0);
+ if (err || !out_size || root_ctxt.status) {
+ PMD_DRV_LOG(ERR,
+ "Set cmdq depth failed, "
+ "err: %d, status: 0x%x, out_size: 0x%x",
+ err, root_ctxt.status, out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+int
+hinic3_get_mgmt_version(void *hwdev, char *mgmt_ver, int max_mgmt_len)
+{
+ struct hinic3_cmd_get_fw_version fw_ver;
+ u16 out_size = sizeof(fw_ver);
+ int err;
+
+ if (!hwdev || !mgmt_ver)
+ return -EINVAL;
+
+ memset(&fw_ver, 0, sizeof(fw_ver));
+ fw_ver.fw_type = HINIC3_FW_VER_TYPE_MPU;
+
+ err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_COMM,
+ HINIC3_MGMT_CMD_GET_FW_VERSION, &fw_ver,
+ sizeof(fw_ver), &fw_ver, &out_size, 0);
+ if (MSG_TO_MGMT_SYNC_RETURN_ERR(err, out_size, fw_ver.status)) {
+ PMD_DRV_LOG(ERR,
+ "Get mgmt version failed, "
+ "err: %d, status: 0x%x, out size: 0x%x",
+ err, fw_ver.status, out_size);
+ return -EIO;
+ }
+
+ (void)snprintf(mgmt_ver, max_mgmt_len, "%s", fw_ver.ver);
+ return 0;
+}
+
+int
+hinic3_get_board_info(void *hwdev, struct hinic3_board_info *info)
+{
+ struct hinic3_cmd_board_info board_info;
+ u16 out_size = sizeof(board_info);
+ int err;
+
+ if (!hwdev || !info)
+ return -EINVAL;
+
+ memset(&board_info, 0, sizeof(board_info));
+ err = hinic3_msg_to_mgmt_sync(hwdev,
+ HINIC3_MOD_COMM, HINIC3_MGMT_CMD_GET_BOARD_INFO,
+ &board_info, sizeof(board_info), &board_info, &out_size, 0);
+ if (err || board_info.status || !out_size) {
+ PMD_DRV_LOG(ERR,
+ "Get board info failed, "
+ "err: %d, status: 0x%x, out size: 0x%x",
+ err, board_info.status, out_size);
+ return -EFAULT;
+ }
+
+ memcpy(info, &board_info.info, sizeof(*info));
+
+ return 0;
+}
+
+static int
+hinic3_comm_features_nego(void *hwdev, u8 opcode, u64 *s_feature, u16 size)
+{
+ struct comm_cmd_feature_nego feature_nego;
+ u16 out_size = sizeof(feature_nego);
+ int err;
+
+ if (!hwdev || !s_feature || size > COMM_MAX_FEATURE_QWORD)
+ return -EINVAL;
+
+ memset(&feature_nego, 0, sizeof(feature_nego));
+ feature_nego.func_id = hinic3_global_func_id(hwdev);
+ feature_nego.opcode = opcode;
+ if (opcode == MGMT_MSG_CMD_OP_SET)
+ memcpy(feature_nego.s_feature, s_feature, (size * sizeof(u64)));
+
+ err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_COMM,
+ HINIC3_MGMT_CMD_FEATURE_NEGO,
+ &feature_nego, sizeof(feature_nego),
+ &feature_nego, &out_size, 0);
+ if (err || !out_size || feature_nego.head.status) {
+ PMD_DRV_LOG(ERR,
+ "Failed to negotiate feature, "
+ "err: %d, status: 0x%x, out size: 0x%x",
+ err, feature_nego.head.status, out_size);
+ return -EINVAL;
+ }
+
+ if (opcode == MGMT_MSG_CMD_OP_GET)
+ memcpy(s_feature, feature_nego.s_feature, (size * sizeof(u64)));
+
+ return 0;
+}
+
+int
+hinic3_get_comm_features(void *hwdev, u64 *s_feature, u16 size)
+{
+ return hinic3_comm_features_nego(hwdev, MGMT_MSG_CMD_OP_GET, s_feature,
+ size);
+}
+
+int
+hinic3_set_comm_features(void *hwdev, u64 *s_feature, u16 size)
+{
+ return hinic3_comm_features_nego(hwdev, MGMT_MSG_CMD_OP_SET, s_feature,
+ size);
+}
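hinic3_convert_rx_buf_size() rounds a requested size down to a supported
one: below 4K it keeps the largest hinic3_hw_rx_buf_size[] entry not
exceeding the request, between 4K and 16K it truncates to a 1K multiple, and
at 16K or above it clamps to 16K. So 1700 maps to 1536 (1.5K), 5000 to 4096
and 70000 to 16384. A reviewer's self-check sketch (assumes the driver's u32
typedef and <errno.h>):

/* Self-check for the RX buffer size rounding rules (reviewer's sketch). */
#include <assert.h>
#include <errno.h>

static void check_rx_buf_rounding(void)
{
	u32 sz;

	assert(hinic3_convert_rx_buf_size(1700, &sz) == 0 && sz == 0x600);
	assert(hinic3_convert_rx_buf_size(5000, &sz) == 0 && sz == 0x1000);
	assert(hinic3_convert_rx_buf_size(70000, &sz) == 0 && sz == 0x4000);
	/* Sizes below the 32B minimum are rejected. */
	assert(hinic3_convert_rx_buf_size(16, &sz) == -EINVAL);
}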
diff --git a/drivers/net/hinic3/base/hinic3_hw_comm.h b/drivers/net/hinic3/base/hinic3_hw_comm.h
new file mode 100644
index 0000000000..a2dc7273f4
--- /dev/null
+++ b/drivers/net/hinic3/base/hinic3_hw_comm.h
@@ -0,0 +1,366 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Huawei Technologies Co., Ltd
+ */
+
+#ifndef _HINIC3_HW_COMM_H_
+#define _HINIC3_HW_COMM_H_
+
+#include "hinic3_hwdev.h"
+#include "hinic3_mgmt.h"
+#define HINIC3_MGMT_CMD_OP_GET 0
+#define HINIC3_MGMT_CMD_OP_SET 1
+
+#define HINIC3_MSIX_CNT_LLI_TIMER_SHIFT 0
+#define HINIC3_MSIX_CNT_LLI_CREDIT_SHIFT 8
+#define HINIC3_MSIX_CNT_COALESC_TIMER_SHIFT 8
+#define HINIC3_MSIX_CNT_PENDING_SHIFT 8
+#define HINIC3_MSIX_CNT_RESEND_TIMER_SHIFT 29
+
+#define HINIC3_MSIX_CNT_LLI_TIMER_MASK 0xFFU
+#define HINIC3_MSIX_CNT_LLI_CREDIT_MASK 0xFFU
+#define HINIC3_MSIX_CNT_COALESC_TIMER_MASK 0xFFU
+#define HINIC3_MSIX_CNT_PENDING_MASK 0x1FU
+#define HINIC3_MSIX_CNT_RESEND_TIMER_MASK 0x7U
+
+#define HINIC3_MSIX_CNT_SET(val, member) \
+ (((val) & HINIC3_MSIX_CNT_##member##_MASK) \
+ << HINIC3_MSIX_CNT_##member##_SHIFT)
+
+#define MSG_TO_MGMT_SYNC_RETURN_ERR(err, out_size, status) \
+ ((err) || (status) || !(out_size))
+
+#define DEFAULT_RX_BUF_SIZE ((u16)0xB)
+#define RX_BUF_SIZE_1K_LEN ((u16)0xA)
+
+enum hinic3_rx_buf_size {
+ HINIC3_RX_BUF_SIZE_32B = 0x20,
+ HINIC3_RX_BUF_SIZE_64B = 0x40,
+ HINIC3_RX_BUF_SIZE_96B = 0x60,
+ HINIC3_RX_BUF_SIZE_128B = 0x80,
+ HINIC3_RX_BUF_SIZE_192B = 0xC0,
+ HINIC3_RX_BUF_SIZE_256B = 0x100,
+ HINIC3_RX_BUF_SIZE_384B = 0x180,
+ HINIC3_RX_BUF_SIZE_512B = 0x200,
+ HINIC3_RX_BUF_SIZE_768B = 0x300,
+ HINIC3_RX_BUF_SIZE_1K = 0x400,
+ HINIC3_RX_BUF_SIZE_1_5K = 0x600,
+ HINIC3_RX_BUF_SIZE_2K = 0x800,
+ HINIC3_RX_BUF_SIZE_3K = 0xC00,
+ HINIC3_RX_BUF_SIZE_4K = 0x1000,
+ HINIC3_RX_BUF_SIZE_8K = 0x2000,
+ HINIC3_RX_BUF_SIZE_16K = 0x4000,
+};
+
+struct hinic3_cmd_msix_config {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u16 func_id;
+ u8 opcode;
+ u8 rsvd1;
+ u16 msix_index;
+ u8 pending_cnt;
+ u8 coalesct_timer_cnt;
+ u8 resend_timer_cnt;
+ u8 lli_tmier_cnt;
+ u8 lli_credit_cnt;
+ u8 rsvd2[5];
+};
+
+struct hinic3_dma_attr_table {
+ struct mgmt_msg_head head;
+
+ u16 func_id;
+ u8 entry_idx;
+ u8 st;
+ u8 at;
+ u8 ph;
+ u8 no_snooping;
+ u8 tph_en;
+ u32 resv1;
+};
+
+#define HINIC3_PAGE_SIZE_HW(pg_size) ((u8)ilog2((u32)((pg_size) >> 12)))
+
+struct hinic3_cmd_wq_page_size {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u16 func_idx;
+ u8 opcode;
+ /**
+ * Real size is 4KB * 2^page_size; the driver must check that
+ * page_size is in the range 0~20.
+ */
+ u8 page_size;
+
+ u32 rsvd1;
+};
+
+struct hinic3_reset {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u16 func_id;
+ u16 rsvd1[3];
+ u64 reset_flag;
+};
+
+struct comm_cmd_func_svc_used_state {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u16 func_id;
+ u16 svc_type;
+ u8 used_state;
+ u8 rsvd[35];
+};
+
+struct hinic3_cmd_root_ctxt {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u16 func_idx;
+ u8 set_cmdq_depth;
+ u8 cmdq_depth;
+ u16 rx_buf_sz;
+ u8 lro_en;
+ u8 rsvd1;
+ u16 sq_depth;
+ u16 rq_depth;
+ u64 rsvd2;
+};
+
+enum hinic3_fw_ver_type {
+ HINIC3_FW_VER_TYPE_BOOT,
+ HINIC3_FW_VER_TYPE_MPU,
+ HINIC3_FW_VER_TYPE_NPU,
+ HINIC3_FW_VER_TYPE_SMU,
+ HINIC3_FW_VER_TYPE_CFG,
+};
+
+#define MGMT_MSG_CMD_OP_SET 1
+#define MGMT_MSG_CMD_OP_GET 0
+
+#define COMM_MAX_FEATURE_QWORD 4
+struct comm_cmd_feature_nego {
+ struct mgmt_msg_head head;
+
+ u16 func_id;
+ u8 opcode; /**< 1: set, 0: get. */
+ u8 rsvd;
+ u64 s_feature[COMM_MAX_FEATURE_QWORD];
+};
+
+#define HINIC3_FW_VERSION_LEN 16
+#define HINIC3_FW_COMPILE_TIME_LEN 20
+#define HINIC3_MGMT_VERSION_MAX_LEN 32
+struct hinic3_cmd_get_fw_version {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u16 fw_type;
+ u16 rsvd1;
+ u8 ver[HINIC3_FW_VERSION_LEN];
+ u8 time[HINIC3_FW_COMPILE_TIME_LEN];
+};
+
+struct hinic3_cmd_clear_doorbell {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u16 func_idx;
+ u16 rsvd1[3];
+};
+
+struct hinic3_cmd_clear_resource {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u16 func_idx;
+ u16 rsvd1[3];
+};
+
+struct hinic3_cmd_board_info {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ struct hinic3_board_info info;
+
+ u32 rsvd1[23];
+};
+
+struct interrupt_info {
+ u32 lli_set;
+ u32 interrupt_coalesc_set;
+ u16 msix_index;
+ u8 lli_credit_limit;
+ u8 lli_timer_cfg;
+ u8 pending_limt;
+ u8 coalesc_timer_cfg;
+ u8 resend_timer_cfg;
+};
+
+enum cfg_msix_operation {
+ CFG_MSIX_OPERATION_FREE = 0,
+ CFG_MSIX_OPERATION_ALLOC = 1,
+};
+
+struct comm_cmd_cfg_msix_num {
+ u8 status;
+ u8 version;
+ u8 rsvd0[6];
+
+ u16 func_id;
+ u8 op_code; /**< 1: alloc, 0: free. */
+ u8 rsvd1;
+
+ u16 msix_num;
+ u16 rsvd2;
+};
+
+int hinic3_get_interrupt_cfg(void *dev, struct interrupt_info *info);
+
+/**
+ * Set interrupt cfg.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] info
+ * Interrupt info.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int hinic3_set_interrupt_cfg(void *dev, struct interrupt_info info);
+
+int hinic3_set_wq_page_size(void *hwdev, u16 func_idx, u32 page_size);
+
+/**
+ * Send a reset command to hardware.
+ *
+ * @param[in] hwdev
+ * Pointer to hardware device structure.
+ * @param[in] reset_flag
+ * The flag that specifies the reset behavior.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int hinic3_func_reset(void *hwdev, u64 reset_flag);
+
+/**
+ * Send a command to management module to set usage state of a specific service
+ * for given function.
+ *
+ * @param[in] hwdev
+ * Pointer to hardware device structure.
+ * @param[in] svc_type
+ * The service type to update.
+ * @param[in] state
+ * The state to set for the service.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int hinic3_set_func_svc_used_state(void *hwdev, u16 svc_type, u8 state);
+
+/**
+ * Adjust the requested RX buffer size to the closest valid size supported by
+ * the hardware.
+ *
+ * @param[in] rx_buf_sz
+ * The requested RX buffer size.
+ * @param[out] match_sz
+ * The closest valid RX buffer size.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int hinic3_convert_rx_buf_size(u32 rx_buf_sz, u32 *match_sz);
+
+/**
+ * Send a command to apply the settings.
+ *
+ * @param[in] hwdev
+ * Pointer to hardware device structure.
+ * @param[in] rq_depth
+ * The depth of the receive queue.
+ * @param[in] sq_depth
+ * The depth of the send queue.
+ * @param[in] rx_buf_sz
+ * The RX buffer size to set.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int hinic3_set_root_ctxt(void *hwdev, u32 rq_depth, u32 sq_depth,
+ u16 rx_buf_sz);
+
+/**
+ * Send a command to clear any previously set context.
+ *
+ * @param[in] hwdev
+ * Pointer to hardware device structure.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int hinic3_clean_root_ctxt(void *hwdev);
+
+/**
+ * Send a command to set command queue depth.
+ *
+ * @param[in] hwdev
+ * Pointer to hardware device structure.
+ * @param[in] cmdq_depth
+ * The desired depth of the command queue, converted to logarithmic value
+ * before being set.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int hinic3_set_cmdq_depth(void *hwdev, u16 cmdq_depth);
+
+/**
+ * Send a command to get firmware version.
+ *
+ * @param[in] hwdev
+ * Pointer to hardware device structure.
+ * @param[out] mgmt_ver
+ * The buffer to store the retrieved management firmware version.
+ * @param[in] max_mgmt_len
+ * The maximum length of the `mgmt_ver` buffer.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int hinic3_get_mgmt_version(void *hwdev, char *mgmt_ver, int max_mgmt_len);
+
+/**
+ * Send a command to get board information.
+ *
+ * @param[in] hwdev
+ * Pointer to hardware device structure.
+ * @param[out] info
+ * The structure to store the retrieved board information.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int hinic3_get_board_info(void *hwdev, struct hinic3_board_info *info);
+
+int hinic3_get_comm_features(void *hwdev, u64 *s_feature, u16 size);
+
+int hinic3_set_comm_features(void *hwdev, u64 *s_feature, u16 size);
+
+#endif /* _HINIC3_HW_COMM_H_ */
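HINIC3_PAGE_SIZE_HW encodes a page size as log2(size / 4KB), matching the
"real size is 4KB * 2^page_size" rule on the wq page size command: 4KB
encodes to 0, 64KB to 4 and 4MB to 10. A sketch of the round trip (relies on
the driver's ilog2 helper from hinic3_compat.h):

/* Round trip of the HW page-size encoding (sketch). */
#include <assert.h>

static void check_page_size_encoding(void)
{
	/* Encoded value n means 4KB * 2^n. */
	assert(HINIC3_PAGE_SIZE_HW(0x1000) == 0);    /* 4KB */
	assert(HINIC3_PAGE_SIZE_HW(0x10000) == 4);   /* 64KB */
	assert(HINIC3_PAGE_SIZE_HW(0x400000) == 10); /* 4MB */

	/* Decode: size = 4KB << encoded. */
	assert((0x1000u << HINIC3_PAGE_SIZE_HW(0x10000)) == 0x10000);
}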
diff --git a/drivers/net/hinic3/base/hinic3_hwdev.c b/drivers/net/hinic3/base/hinic3_hwdev.c
new file mode 100644
index 0000000000..cc36d4f353
--- /dev/null
+++ b/drivers/net/hinic3/base/hinic3_hwdev.c
@@ -0,0 +1,573 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Huawei Technologies Co., Ltd
+ */
+
+#include <rte_bus_pci.h>
+#include <rte_vfio.h>
+
+#include "hinic3_compat.h"
+#include "hinic3_cmd.h"
+#include "hinic3_cmdq.h"
+#include "hinic3_csr.h"
+#include "hinic3_eqs.h"
+#include "hinic3_hw_cfg.h"
+#include "hinic3_hw_comm.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_hwif.h"
+#include "hinic3_mbox.h"
+#include "hinic3_mgmt.h"
+#include "hinic3_wq.h"
+
+enum hinic3_pcie_nosnoop { HINIC3_PCIE_SNOOP = 0, HINIC3_PCIE_NO_SNOOP = 1 };
+
+enum hinic3_pcie_tph {
+ HINIC3_PCIE_TPH_DISABLE = 0,
+ HINIC3_PCIE_TPH_ENABLE = 1
+};
+
+#define HINIC3_DMA_ATTR_INDIR_IDX_SHIFT 0
+
+#define HINIC3_DMA_ATTR_INDIR_IDX_MASK 0x3FF
+
+#define HINIC3_DMA_ATTR_INDIR_IDX_SET(val, member) \
+ (((u32)(val) & HINIC3_DMA_ATTR_INDIR_##member##_MASK) \
+ << HINIC3_DMA_ATTR_INDIR_##member##_SHIFT)
+
+#define HINIC3_DMA_ATTR_INDIR_IDX_CLEAR(val, member) \
+ ((val) & (~(HINIC3_DMA_ATTR_INDIR_##member##_MASK \
+ << HINIC3_DMA_ATTR_INDIR_##member##_SHIFT)))
+
+#define HINIC3_DMA_ATTR_ENTRY_ST_SHIFT 0
+#define HINIC3_DMA_ATTR_ENTRY_AT_SHIFT 8
+#define HINIC3_DMA_ATTR_ENTRY_PH_SHIFT 10
+#define HINIC3_DMA_ATTR_ENTRY_NO_SNOOPING_SHIFT 12
+#define HINIC3_DMA_ATTR_ENTRY_TPH_EN_SHIFT 13
+
+#define HINIC3_DMA_ATTR_ENTRY_ST_MASK 0xFF
+#define HINIC3_DMA_ATTR_ENTRY_AT_MASK 0x3
+#define HINIC3_DMA_ATTR_ENTRY_PH_MASK 0x3
+#define HINIC3_DMA_ATTR_ENTRY_NO_SNOOPING_MASK 0x1
+#define HINIC3_DMA_ATTR_ENTRY_TPH_EN_MASK 0x1
+
+#define HINIC3_DMA_ATTR_ENTRY_SET(val, member) \
+ (((u32)(val) & HINIC3_DMA_ATTR_ENTRY_##member##_MASK) \
+ << HINIC3_DMA_ATTR_ENTRY_##member##_SHIFT)
+
+#define HINIC3_DMA_ATTR_ENTRY_CLEAR(val, member) \
+ ((val) & (~(HINIC3_DMA_ATTR_ENTRY_##member##_MASK \
+ << HINIC3_DMA_ATTR_ENTRY_##member##_SHIFT)))
+
+#define HINIC3_PCIE_ST_DISABLE 0
+#define HINIC3_PCIE_AT_DISABLE 0
+#define HINIC3_PCIE_PH_DISABLE 0
+
+#define PCIE_MSIX_ATTR_ENTRY 0
+
+#define HINIC3_CHIP_PRESENT 1
+#define HINIC3_CHIP_ABSENT 0
+
+#define HINIC3_DEFAULT_EQ_MSIX_PENDING_LIMIT 0
+#define HINIC3_DEFAULT_EQ_MSIX_COALESC_TIMER_CFG 0xFF
+#define HINIC3_DEFAULT_EQ_MSIX_RESEND_TIMER_CFG 7
+
+typedef void (*mgmt_event_cb)(void *handle, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size);
+
+struct mgmt_event_handle {
+ u16 cmd;
+ mgmt_event_cb proc;
+};
+
+bool
+hinic3_is_vfio_iommu_enable(const struct rte_eth_dev *rte_dev)
+{
+ return ((RTE_ETH_DEV_TO_PCI(rte_dev)->kdrv == RTE_PCI_KDRV_VFIO) &&
+ (rte_vfio_noiommu_is_enabled() != 1));
+}
+
+int
+vf_handle_pf_comm_mbox(void *handle, __rte_unused void *pri_handle,
+ u16 cmd, __rte_unused void *buf_in,
+ __rte_unused u16 in_size, __rte_unused void *buf_out,
+ __rte_unused u16 *out_size)
+{
+ struct hinic3_hwdev *hwdev = handle;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ PMD_DRV_LOG(WARNING, "Unsupported pf mbox event %d to process", cmd);
+
+ return 0;
+}
+
+static void
+fault_event_handler(__rte_unused void *hwdev, __rte_unused void *buf_in,
+ __rte_unused u16 in_size, __rte_unused void *buf_out,
+ __rte_unused u16 *out_size)
+{
+ PMD_DRV_LOG(WARNING, "Unsupported fault event handler");
+}
+
+static void
+ffm_event_msg_handler(__rte_unused void *hwdev, void *buf_in, u16 in_size,
+ __rte_unused void *buf_out, u16 *out_size)
+{
+ struct ffm_intr_info *intr = NULL;
+
+ if (in_size != sizeof(*intr)) {
+ PMD_DRV_LOG(ERR,
+ "Invalid fault event report, "
+ "length: %d, should be %zu",
+ in_size, sizeof(*intr));
+ return;
+ }
+
+ intr = buf_in;
+
+ PMD_DRV_LOG(ERR,
+ "node_id: 0x%x, err_type: 0x%x, err_level: %d, "
+ "err_csr_addr: 0x%08x, err_csr_value: 0x%08x",
+ intr->node_id, intr->err_type, intr->err_level,
+ intr->err_csr_addr, intr->err_csr_value);
+
+ *out_size = sizeof(*intr);
+}
+
+static const struct mgmt_event_handle mgmt_event_proc[] = {
+ {
+ .cmd = HINIC3_MGMT_CMD_FAULT_REPORT,
+ .proc = fault_event_handler,
+ },
+
+ {
+ .cmd = HINIC3_MGMT_CMD_FFM_SET,
+ .proc = ffm_event_msg_handler,
+ },
+};
+
+void
+pf_handle_mgmt_comm_event(void *handle, __rte_unused void *pri_handle, u16 cmd,
+ void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size)
+{
+ struct hinic3_hwdev *hwdev = handle;
+ u32 i, event_num = RTE_DIM(mgmt_event_proc);
+
+ if (!hwdev)
+ return;
+
+ for (i = 0; i < event_num; i++) {
+ if (cmd == mgmt_event_proc[i].cmd) {
+ if (mgmt_event_proc[i].proc)
+ mgmt_event_proc[i].proc(handle, buf_in, in_size,
+ buf_out, out_size);
+ return;
+ }
+ }
+
+ PMD_DRV_LOG(WARNING, "Unsupported mgmt cpu event %d to process", cmd);
+}
+
+static int
+set_dma_attr_entry(struct hinic3_hwdev *hwdev, u8 entry_idx, u8 st, u8 at,
+ u8 ph, enum hinic3_pcie_nosnoop no_snooping,
+ enum hinic3_pcie_tph tph_en)
+{
+ struct hinic3_dma_attr_table attr;
+ u16 out_size = sizeof(attr);
+ int err;
+
+ memset(&attr, 0, sizeof(attr));
+ attr.func_id = hinic3_global_func_id(hwdev);
+ attr.entry_idx = entry_idx;
+ attr.st = st;
+ attr.at = at;
+ attr.ph = ph;
+ attr.no_snooping = no_snooping;
+ attr.tph_en = tph_en;
+
+ err = hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_COMM,
+ HINIC3_MGMT_CMD_SET_DMA_ATTR, &attr,
+ sizeof(attr), &attr, &out_size, 0);
+ if (err || !out_size || attr.head.status) {
+ PMD_DRV_LOG(ERR,
+ "Set dma attribute failed, err: %d, status: 0x%x, "
+ "out_size: 0x%x",
+ err, attr.head.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+/**
+ * Initialize the default dma attributes.
+ *
+ * @param[in] hwdev
+ * Pointer to hardware device structure.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+dma_attr_table_init(struct hinic3_hwdev *hwdev)
+{
+ return set_dma_attr_entry(hwdev,
+ PCIE_MSIX_ATTR_ENTRY, HINIC3_PCIE_ST_DISABLE,
+ HINIC3_PCIE_AT_DISABLE, HINIC3_PCIE_PH_DISABLE,
+ HINIC3_PCIE_SNOOP, HINIC3_PCIE_TPH_DISABLE);
+}
+
+static int
+init_aeqs_msix_attr(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_aeqs *aeqs = hwdev->aeqs;
+ struct interrupt_info info = {0};
+ struct hinic3_eq *eq = NULL;
+ u16 q_id;
+ int err;
+
+ info.lli_set = 0;
+ info.interrupt_coalesc_set = 1;
+ info.pending_limt = HINIC3_DEFAULT_EQ_MSIX_PENDING_LIMIT;
+ info.coalesc_timer_cfg = HINIC3_DEFAULT_EQ_MSIX_COALESC_TIMER_CFG;
+ info.resend_timer_cfg = HINIC3_DEFAULT_EQ_MSIX_RESEND_TIMER_CFG;
+
+ for (q_id = 0; q_id < aeqs->num_aeqs; q_id++) {
+ eq = &aeqs->aeq[q_id];
+ info.msix_index = eq->eq_irq.msix_entry_idx;
+ err = hinic3_set_interrupt_cfg(hwdev, info);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Set msix attr for aeq %d failed",
+ q_id);
+ return -EFAULT;
+ }
+ }
+
+ return 0;
+}
+
+static int
+hinic3_comm_pf_to_mgmt_init(struct hinic3_hwdev *hwdev)
+{
+ int err;
+
+ /* VF does not support send msg to mgmt directly. */
+ if (hinic3_func_type(hwdev) == TYPE_VF)
+ return 0;
+
+ err = hinic3_pf_to_mgmt_init(hwdev);
+ if (err)
+ return err;
+
+ return 0;
+}
+
+static void
+hinic3_comm_pf_to_mgmt_free(struct hinic3_hwdev *hwdev)
+{
+ /* VF does not support send msg to mgmt directly. */
+ if (hinic3_func_type(hwdev) == TYPE_VF)
+ return;
+
+ hinic3_pf_to_mgmt_free(hwdev);
+}
+
+static int
+hinic3_comm_cmdqs_init(struct hinic3_hwdev *hwdev)
+{
+ int err;
+
+ err = hinic3_cmdqs_init(hwdev);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Init cmd queues failed");
+ return err;
+ }
+
+ err = hinic3_set_cmdq_depth(hwdev, HINIC3_CMDQ_DEPTH);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Set cmdq depth failed");
+ goto set_cmdq_depth_err;
+ }
+
+ return 0;
+
+set_cmdq_depth_err:
+ hinic3_cmdqs_free(hwdev);
+
+ return err;
+}
+
+static void
+hinic3_comm_cmdqs_free(struct hinic3_hwdev *hwdev)
+{
+ hinic3_cmdqs_free(hwdev);
+}
+
+static void
+hinic3_sync_mgmt_func_state(struct hinic3_hwdev *hwdev)
+{
+ hinic3_set_pf_status(hwdev->hwif, HINIC3_PF_STATUS_ACTIVE_FLAG);
+}
+
+static int
+get_func_misc_info(struct hinic3_hwdev *hwdev)
+{
+ int err;
+
+ err = hinic3_get_board_info(hwdev, &hwdev->board_info);
+ if (err) {
+ /* For the PF/VF of slave host, return error. */
+ if (hinic3_pcie_itf_id(hwdev))
+ return err;
+
+ memset(&hwdev->board_info, 0xff,
+ sizeof(struct hinic3_board_info));
+ }
+
+ err = hinic3_get_mgmt_version(hwdev, hwdev->mgmt_ver,
+ HINIC3_MGMT_VERSION_MAX_LEN);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Get mgmt cpu version failed");
+ return err;
+ }
+
+ return 0;
+}
+
+static int
+init_mgmt_channel(struct hinic3_hwdev *hwdev)
+{
+ int err;
+
+ err = hinic3_aeqs_init(hwdev);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Init async event queues failed");
+ return err;
+ }
+
+ err = hinic3_comm_pf_to_mgmt_init(hwdev);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Init mgmt channel failed");
+ goto msg_init_err;
+ }
+
+ err = hinic3_func_to_func_init(hwdev);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Init mailbox channel failed");
+ goto func_to_func_init_err;
+ }
+
+ return 0;
+
+func_to_func_init_err:
+ hinic3_comm_pf_to_mgmt_free(hwdev);
+
+msg_init_err:
+ hinic3_aeqs_free(hwdev);
+
+ return err;
+}
+
+static void
+free_mgmt_channel(struct hinic3_hwdev *hwdev)
+{
+ hinic3_func_to_func_free(hwdev);
+ hinic3_comm_pf_to_mgmt_free(hwdev);
+ hinic3_aeqs_free(hwdev);
+}
+
+static int
+init_cmdqs_channel(struct hinic3_hwdev *hwdev)
+{
+ int err;
+
+ err = dma_attr_table_init(hwdev);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Init dma attr table failed");
+ goto dma_attr_init_err;
+ }
+
+ err = init_aeqs_msix_attr(hwdev);
+ if (err)
+ goto init_aeqs_msix_err;
+
+ /* Set default wq page_size. */
+ hwdev->wq_page_size = HINIC3_DEFAULT_WQ_PAGE_SIZE;
+ err = hinic3_set_wq_page_size(hwdev, hinic3_global_func_id(hwdev),
+ hwdev->wq_page_size);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Set wq page size failed");
+ goto init_wq_pg_size_err;
+ }
+
+ err = hinic3_comm_cmdqs_init(hwdev);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Init cmd queues failed");
+ goto cmdq_init_err;
+ }
+
+ return 0;
+
+cmdq_init_err:
+ if (HINIC3_FUNC_TYPE(hwdev) != TYPE_VF)
+ hinic3_set_wq_page_size(hwdev, hinic3_global_func_id(hwdev),
+ HINIC3_HW_WQ_PAGE_SIZE);
+init_wq_pg_size_err:
+init_aeqs_msix_err:
+dma_attr_init_err:
+
+ return err;
+}
+
+static int
+hinic3_init_comm_ch(struct hinic3_hwdev *hwdev)
+{
+ int err;
+
+ err = init_mgmt_channel(hwdev);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Init mgmt channel failed");
+ return err;
+ }
+
+ err = get_func_misc_info(hwdev);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Get function msic information failed");
+ goto get_func_info_err;
+ }
+
+ err = hinic3_func_reset(hwdev, HINIC3_NIC_RES | HINIC3_COMM_RES);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Reset function failed");
+ goto func_reset_err;
+ }
+
+ err = hinic3_set_func_svc_used_state(hwdev, HINIC3_MOD_COMM, 1);
+ if (err)
+ goto set_used_state_err;
+
+ err = init_cmdqs_channel(hwdev);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Init cmdq channel failed");
+ goto init_cmdqs_channel_err;
+ }
+
+ hinic3_sync_mgmt_func_state(hwdev);
+
+ return 0;
+
+init_cmdqs_channel_err:
+ hinic3_set_func_svc_used_state(hwdev, HINIC3_MOD_COMM, 0);
+set_used_state_err:
+func_reset_err:
+get_func_info_err:
+ free_mgmt_channel(hwdev);
+
+ return err;
+}
+
+static void
+hinic3_uninit_comm_ch(struct hinic3_hwdev *hwdev)
+{
+ hinic3_set_pf_status(hwdev->hwif, HINIC3_PF_STATUS_INIT);
+
+ hinic3_comm_cmdqs_free(hwdev);
+
+ if (HINIC3_FUNC_TYPE(hwdev) != TYPE_VF)
+ hinic3_set_wq_page_size(hwdev, hinic3_global_func_id(hwdev),
+ HINIC3_HW_WQ_PAGE_SIZE);
+
+ hinic3_set_func_svc_used_state(hwdev, HINIC3_MOD_COMM, 0);
+
+ hinic3_func_to_func_free(hwdev);
+
+ hinic3_comm_pf_to_mgmt_free(hwdev);
+
+ hinic3_aeqs_free(hwdev);
+}
+
+int
+hinic3_init_hwdev(struct hinic3_hwdev *hwdev)
+{
+ int err;
+
+ hwdev->chip_fault_stats = rte_zmalloc("chip_fault_stats",
+ HINIC3_CHIP_FAULT_SIZE,
+ RTE_CACHE_LINE_SIZE);
+ if (!hwdev->chip_fault_stats) {
+ PMD_DRV_LOG(ERR, "Alloc memory for chip_fault_stats failed");
+ return -ENOMEM;
+ }
+
+ err = hinic3_init_hwif(hwdev);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Initialize hwif failed");
+ goto init_hwif_err;
+ }
+
+ err = hinic3_init_comm_ch(hwdev);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Init communication channel failed");
+ goto init_comm_ch_err;
+ }
+
+ err = hinic3_init_cfg_mgmt(hwdev);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Init cfg_mgnt failed");
+ goto init_cfg_err;
+ }
+
+ err = hinic3_init_capability(hwdev);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Init capability failed");
+ goto init_cap_err;
+ }
+
+ return 0;
+
+init_cap_err:
+ hinic3_deinit_cfg_mgmt(hwdev);
+init_cfg_err:
+ hinic3_uninit_comm_ch(hwdev);
+
+init_comm_ch_err:
+ hinic3_free_hwif(hwdev);
+
+init_hwif_err:
+ rte_free(hwdev->chip_fault_stats);
+
+ return -EFAULT;
+}
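+
+/*
+ * Usage sketch, illustrative only: a probe path would allocate the hwdev,
+ * fill in the device handles, and pair hinic3_init_hwdev() with
+ * hinic3_free_hwdev() on teardown, roughly as follows:
+ *
+ * hwdev = rte_zmalloc("hinic3_hwdev", sizeof(*hwdev), RTE_CACHE_LINE_SIZE);
+ * if (!hwdev)
+ * return -ENOMEM;
+ * hwdev->pci_dev = pci_dev;
+ * hwdev->eth_dev = eth_dev;
+ * err = hinic3_init_hwdev(hwdev);
+ * if (err) {
+ * rte_free(hwdev);
+ * return err;
+ * }
+ * ...
+ * hinic3_free_hwdev(hwdev);
+ * rte_free(hwdev);
+ */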
+
+void
+hinic3_free_hwdev(struct hinic3_hwdev *hwdev)
+{
+ hinic3_deinit_cfg_mgmt(hwdev);
+
+ hinic3_uninit_comm_ch(hwdev);
+
+ hinic3_free_hwif(hwdev);
+
+ rte_free(hwdev->chip_fault_stats);
+}
+
+#ifndef RTE_VFIO_DMA_MAP_BASE_ADDR
+#define RTE_VFIO_DMA_MAP_BASE_ADDR 0
+#endif
+const struct rte_memzone *
+hinic3_dma_zone_reserve(const void *dev, const char *ring_name,
+ uint16_t queue_id, size_t size, unsigned int align,
+ int socket_id)
+{
+ return rte_eth_dma_zone_reserve(dev, ring_name, queue_id, size, align,
+ socket_id);
+}
+
+int
+hinic3_memzone_free(const struct rte_memzone *mz)
+{
+ return rte_memzone_free(mz);
+}
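+
+/*
+ * Usage sketch, illustrative only: queue setup reserves a DMA-able ring
+ * with hinic3_dma_zone_reserve() and releases it with
+ * hinic3_memzone_free(); "sq_ring" and ring_size are example values.
+ *
+ * const struct rte_memzone *mz;
+ *
+ * mz = hinic3_dma_zone_reserve(eth_dev, "sq_ring", queue_id, ring_size,
+ * RTE_PGSIZE_4K, socket_id);
+ * if (!mz)
+ * return -ENOMEM;
+ * ...use mz->addr (virtual address) and mz->iova (DMA address)...
+ * hinic3_memzone_free(mz);
+ */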
diff --git a/drivers/net/hinic3/base/hinic3_hwdev.h b/drivers/net/hinic3/base/hinic3_hwdev.h
new file mode 100644
index 0000000000..080d1400ed
--- /dev/null
+++ b/drivers/net/hinic3/base/hinic3_hwdev.h
@@ -0,0 +1,177 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Huawei Technologies Co., Ltd
+ */
+
+#ifndef _HINIC3_HWDEV_H_
+#define _HINIC3_HWDEV_H_
+
+#include <rte_ether.h>
+#include <rte_memory.h>
+#include "hinic3_mgmt.h"
+
+struct cfg_mgmt_info;
+
+struct hinic3_hwif;
+struct hinic3_aeqs;
+struct hinic3_mbox;
+struct hinic3_msg_pf_to_mgmt;
+
+#define MGMT_VERSION_MAX_LEN 32
+
+enum hinic3_set_arm_type {
+ HINIC3_SET_ARM_CMDQ,
+ HINIC3_SET_ARM_SQ,
+ HINIC3_SET_ARM_TYPE_NUM
+};
+
+struct hinic3_page_addr {
+ void *virt_addr;
+ u64 phys_addr;
+};
+
+struct ffm_intr_info {
+ u8 node_id;
+ /* Error level of the interrupt source. */
+ u8 err_level;
+ /* Classification by interrupt source properties. */
+ u16 err_type;
+ u32 err_csr_addr;
+ u32 err_csr_value;
+};
+
+struct link_event_stats {
+ RTE_ATOMIC(int32_t) link_down_stats;
+ RTE_ATOMIC(int32_t) link_up_stats;
+};
+
+enum hinic3_fault_err_level {
+ FAULT_LEVEL_FATAL,
+ FAULT_LEVEL_SERIOUS_RESET,
+ FAULT_LEVEL_SERIOUS_FLR,
+ FAULT_LEVEL_GENERAL,
+ FAULT_LEVEL_SUGGESTION,
+ FAULT_LEVEL_MAX
+};
+
+enum hinic3_fault_type {
+ FAULT_TYPE_CHIP,
+ FAULT_TYPE_UCODE,
+ FAULT_TYPE_MEM_RD_TIMEOUT,
+ FAULT_TYPE_MEM_WR_TIMEOUT,
+ FAULT_TYPE_REG_RD_TIMEOUT,
+ FAULT_TYPE_REG_WR_TIMEOUT,
+ FAULT_TYPE_PHY_FAULT,
+ FAULT_TYPE_MAX
+};
+
+struct fault_event_stats {
+ RTE_ATOMIC(int32_t) chip_fault_stats[22][FAULT_LEVEL_MAX];
+ RTE_ATOMIC(int32_t) fault_type_stat[FAULT_TYPE_MAX];
+ RTE_ATOMIC(int32_t) pcie_fault_stats;
+};
+
+struct hinic3_hw_stats {
+ RTE_ATOMIC(int32_t) heart_lost_stats;
+ struct link_event_stats link_event_stats;
+ struct fault_event_stats fault_event_stats;
+};
+
+#define HINIC3_CHIP_FAULT_SIZE (110 * 1024)
+#define MAX_DRV_BUF_SIZE 4096
+
+struct nic_cmd_chip_fault_stats {
+ u32 offset;
+ u8 chip_fault_stats[MAX_DRV_BUF_SIZE];
+};
+
+struct hinic3_board_info {
+ u8 board_type;
+ u8 port_num;
+ u8 port_speed;
+ u8 pcie_width;
+ u8 host_num;
+ u8 pf_num;
+ u16 vf_total_num;
+ u8 tile_num;
+ u8 qcm_num;
+ u8 core_num;
+ u8 work_mode;
+ u8 service_mode;
+ u8 pcie_mode;
+ u8 boot_sel;
+ u8 board_id;
+ u32 cfg_addr;
+ u32 service_en_bitmap;
+ u8 scenes_id;
+ u8 cfg_template_id;
+ u16 rsvd0;
+};
+
+struct hinic3_hwdev {
+ void *dev_handle; /**< Pointer to hinic3_nic_dev. */
+ void *pci_dev; /**< Pointer to rte_pci_device. */
+ void *eth_dev; /**< Pointer to rte_eth_dev. */
+
+ uint16_t port_id;
+
+ u32 wq_page_size;
+
+ struct hinic3_hwif *hwif;
+ struct cfg_mgmt_info *cfg_mgmt;
+
+ struct hinic3_cmdqs *cmdqs;
+ struct hinic3_aeqs *aeqs;
+ struct hinic3_mbox *func_to_func;
+ struct hinic3_msg_pf_to_mgmt *pf_to_mgmt;
+ struct hinic3_hw_stats hw_stats;
+ u8 *chip_fault_stats;
+
+ struct hinic3_board_info board_info;
+ char mgmt_ver[MGMT_VERSION_MAX_LEN];
+
+ u16 max_vfs;
+ u16 link_status;
+};
+
+bool hinic3_is_vfio_iommu_enable(const struct rte_eth_dev *rte_dev);
+
+int vf_handle_pf_comm_mbox(void *handle, __rte_unused void *pri_handle,
+ __rte_unused u16 cmd, __rte_unused void *buf_in,
+ __rte_unused u16 in_size, __rte_unused void *buf_out,
+ __rte_unused u16 *out_size);
+
+/**
+ * Handle management communication events for the PF.
+ *
+ * Processes the event based on the command, and calls the corresponding
+ * handler if supported.
+ *
+ * @param[in] handle
+ * Pointer to the hardware device context.
+ * @param[in] cmd
+ * Command associated with the management event.
+ * @param[in] buf_in
+ * Input buffer containing event data.
+ * @param[in] in_size
+ * Size of the input buffer.
+ * @param[out] buf_out
+ * Output buffer to store event response.
+ * @param[out] out_size
+ * Size of the output buffer.
+ */
+void pf_handle_mgmt_comm_event(void *handle, __rte_unused void *pri_handle,
+ u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size);
+
+int hinic3_init_hwdev(struct hinic3_hwdev *hwdev);
+
+void hinic3_free_hwdev(struct hinic3_hwdev *hwdev);
+
+const struct rte_memzone *
+hinic3_dma_zone_reserve(const void *dev, const char *ring_name,
+ uint16_t queue_id, size_t size, unsigned int align,
+ int socket_id);
+
+int hinic3_memzone_free(const struct rte_memzone *mz);
+
+#endif /* _HINIC3_HWDEV_H_ */
--
2.47.0.windows.2
* [RFC 09/18] net/hinic3: add a NIC business configuration module
2025-04-18 9:05 [RFC 00/18] add hinic3 PMD driver Feifei Wang
` (7 preceding siblings ...)
2025-04-18 9:05 ` [RFC 08/18] net/hinic3: add module about hardware operation Feifei Wang
@ 2025-04-18 9:05 ` Feifei Wang
2025-04-18 9:05 ` [RFC 10/18] net/hinic3: add context and work queue support Feifei Wang
` (11 subsequent siblings)
20 siblings, 0 replies; 30+ messages in thread
From: Feifei Wang @ 2025-04-18 9:05 UTC (permalink / raw)
To: dev; +Cc: Yi Chen, Xin Wang, Feifei Wang
From: Yi Chen <chenyi221@huawei.com>
The NIC business configuration and query items include MAC, VLAN,
MTU, RSS and so on. These configurations and queries are handled by
the mgmt module. This patch introduces the related data structures
and functions.
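As an illustrative sketch only (not code contained in this patch), a
caller with an initialized hwdev could combine these helpers roughly as
follows, with error handling abbreviated:
	u16 func_id = hinic3_global_func_id(hwdev);
	int err;
	err = hinic3_set_port_mtu(hwdev, 1500);
	if (!err)
		err = hinic3_set_mac(hwdev, mac_addr, 0, func_id);
	if (!err)
		err = hinic3_set_vport_enable(hwdev, true);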
Signed-off-by: Yi Chen <chenyi221@huawei.com>
Reviewed-by: Xin Wang <wangxin679@h-partners.com>
Reviewed-by: Feifei Wang <wangfeifei40@huawei.com>
---
drivers/net/hinic3/base/hinic3_nic_cfg.c | 1828 ++++++++++++++++++++++
drivers/net/hinic3/base/hinic3_nic_cfg.h | 1527 ++++++++++++++++++
2 files changed, 3355 insertions(+)
create mode 100644 drivers/net/hinic3/base/hinic3_nic_cfg.c
create mode 100644 drivers/net/hinic3/base/hinic3_nic_cfg.h
diff --git a/drivers/net/hinic3/base/hinic3_nic_cfg.c b/drivers/net/hinic3/base/hinic3_nic_cfg.c
new file mode 100644
index 0000000000..b4a4edc939
--- /dev/null
+++ b/drivers/net/hinic3/base/hinic3_nic_cfg.c
@@ -0,0 +1,1828 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Huawei Technologies Co., Ltd
+ */
+
+#include <rte_ether.h>
+
+#include "hinic3_compat.h"
+#include "hinic3_cmd.h"
+#include "hinic3_cmdq.h"
+#include "hinic3_hw_cfg.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_hwif.h"
+#include "hinic3_mbox.h"
+#include "hinic3_nic_cfg.h"
+#include "hinic3_wq.h"
+
+struct vf_msg_handler {
+ u16 cmd;
+};
+
+static const struct vf_msg_handler vf_cmd_handler[] = {
+ {
+ .cmd = HINIC3_NIC_CMD_VF_REGISTER,
+ },
+
+ {
+ .cmd = HINIC3_NIC_CMD_GET_MAC,
+ },
+
+ {
+ .cmd = HINIC3_NIC_CMD_SET_MAC,
+ },
+
+ {
+ .cmd = HINIC3_NIC_CMD_DEL_MAC,
+ },
+
+ {
+ .cmd = HINIC3_NIC_CMD_UPDATE_MAC,
+ },
+
+ {
+ .cmd = HINIC3_NIC_CMD_VF_COS,
+ },
+};
+
+static const struct vf_msg_handler vf_mag_cmd_handler[] = {
+ {
+ .cmd = MAG_CMD_GET_LINK_STATUS,
+ },
+};
+
+static int mag_msg_to_mgmt_sync(void *hwdev, u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size);
+
+/**
+ * If device is VF and command is found in a predefined list, send to PF, else
+ * send to management module.
+ *
+ * @param[in] hwdev
+ * The pointer to the hardware device structure.
+ * @param[in] cmd
+ * The command to send.
+ * @param[in] buf_in
+ * The input buffer containing the request data.
+ * @param[in] in_size
+ * The size of the input buffer.
+ * @param[out] buf_out
+ * The output buffer to receive the response data.
+ * @param[out] out_size
+ * The size of the output buffer on return.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int
+l2nic_msg_to_mgmt_sync(void *hwdev, u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ u32 i, cmd_cnt = ARRAY_LEN(vf_cmd_handler);
+ bool cmd_to_pf = false;
+
+ if (hinic3_func_type(hwdev) == TYPE_VF) {
+ for (i = 0; i < cmd_cnt; i++) {
+ if (cmd == vf_cmd_handler[i].cmd) {
+ cmd_to_pf = true;
+ break;
+ }
+ }
+ }
+
+ if (cmd_to_pf) {
+ return hinic3_mbox_to_pf(hwdev, HINIC3_MOD_L2NIC, cmd, buf_in,
+ in_size, buf_out, out_size, 0);
+ }
+
+ return hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_L2NIC, cmd, buf_in,
+ in_size, buf_out, out_size, 0);
+}
+
+/**
+ * Set CI table for a SQ.
+ *
+ * Configure the CI table with attributes like CI address, pending limit,
+ * coalescing time, and optional interrupt settings for specified SQ.
+ *
+ * @param[in] hwdev
+ * Pointer to hardware device structure.
+ * @param[in] attr
+ * Attributes to configure for CI table.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int
+hinic3_set_ci_table(void *hwdev, struct hinic3_sq_attr *attr)
+{
+ struct hinic3_cmd_cons_idx_attr cons_idx_attr;
+ u16 out_size = sizeof(cons_idx_attr);
+ int err;
+
+ if (!hwdev || !attr)
+ return -EINVAL;
+
+ memset(&cons_idx_attr, 0, sizeof(cons_idx_attr));
+ cons_idx_attr.func_idx = hinic3_global_func_id(hwdev);
+ cons_idx_attr.dma_attr_off = attr->dma_attr_off;
+ cons_idx_attr.pending_limit = attr->pending_limit;
+ cons_idx_attr.coalescing_time = attr->coalescing_time;
+
+ if (attr->intr_en) {
+ cons_idx_attr.intr_en = attr->intr_en;
+ cons_idx_attr.intr_idx = attr->intr_idx;
+ }
+
+ cons_idx_attr.l2nic_sqn = attr->l2nic_sqn;
+ cons_idx_attr.ci_addr = attr->ci_dma_base;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_SQ_CI_ATTR_SET,
+ &cons_idx_attr, sizeof(cons_idx_attr),
+ &cons_idx_attr, &out_size);
+ if (err || !out_size || cons_idx_attr.msg_head.status) {
+ PMD_DRV_LOG(ERR,
+ "Set ci attribute table failed, "
+ "err: %d, status: 0x%x, out_size: 0x%x",
+ err, cons_idx_attr.msg_head.status, out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
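+
+/*
+ * Usage sketch, illustrative only: a TX queue start path fills a
+ * struct hinic3_sq_attr and hands it to hinic3_set_ci_table(). The values
+ * below are placeholders, and ci_dma_addr stands for the DMA address that
+ * receives the consumer index.
+ *
+ * struct hinic3_sq_attr attr = {0};
+ *
+ * attr.l2nic_sqn = sq_id;
+ * attr.ci_dma_base = ci_dma_addr;
+ * attr.pending_limit = 0;
+ * attr.coalescing_time = 0;
+ * attr.intr_en = 0;
+ * err = hinic3_set_ci_table(hwdev, &attr);
+ */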
+
+#define PF_SET_VF_MAC(hwdev, status) \
+ (hinic3_func_type(hwdev) == TYPE_VF && \
+ (status) == HINIC3_PF_SET_VF_ALREADY)
+
+static int
+hinic3_check_mac_info(void *hwdev, u8 status, u16 vlan_id)
+{
+ if ((status && status != HINIC3_MGMT_STATUS_EXIST) ||
+ ((vlan_id & CHECK_IPSU_15BIT) &&
+ status == HINIC3_MGMT_STATUS_EXIST)) {
+ if (PF_SET_VF_MAC(hwdev, status))
+ return 0;
+
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+#define VLAN_N_VID 4096
+
+int
+hinic3_set_mac(void *hwdev, const u8 *mac_addr, u16 vlan_id, u16 func_id)
+{
+ struct hinic3_port_mac_set mac_info;
+ u16 out_size = sizeof(mac_info);
+ int err;
+
+ if (!hwdev || !mac_addr)
+ return -EINVAL;
+
+ memset(&mac_info, 0, sizeof(mac_info));
+
+ if (vlan_id >= VLAN_N_VID) {
+ PMD_DRV_LOG(ERR, "Invalid VLAN number: %d", vlan_id);
+ return -EINVAL;
+ }
+
+ mac_info.func_id = func_id;
+ mac_info.vlan_id = vlan_id;
+ memmove(mac_info.mac, mac_addr, ETH_ALEN);
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_SET_MAC, &mac_info,
+ sizeof(mac_info), &mac_info, &out_size);
+ if (err || !out_size ||
+ hinic3_check_mac_info(hwdev, mac_info.msg_head.status,
+ mac_info.vlan_id)) {
+ PMD_DRV_LOG(ERR,
+ "Update MAC failed, "
+ "err: %d, status: 0x%x, out size: 0x%x",
+ err, mac_info.msg_head.status, out_size);
+ return -EIO;
+ }
+
+ if (PF_SET_VF_MAC(hwdev, mac_info.msg_head.status)) {
+ PMD_DRV_LOG(WARNING,
+ "PF has already set VF mac, Ignore set operation");
+ return HINIC3_PF_SET_VF_ALREADY;
+ }
+
+ if (mac_info.msg_head.status == HINIC3_MGMT_STATUS_EXIST) {
+ PMD_DRV_LOG(WARNING,
+ "MAC is repeated. Ignore update operation");
+ return 0;
+ }
+
+ return 0;
+}
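+
+/*
+ * Usage sketch, illustrative only: adding a unicast address from an ethdev
+ * callback. HINIC3_PF_SET_VF_ALREADY is tolerated because the PF may have
+ * already fixed the VF MAC.
+ *
+ * err = hinic3_set_mac(hwdev, mac_addr->addr_bytes, 0,
+ * hinic3_global_func_id(hwdev));
+ * if (err && err != HINIC3_PF_SET_VF_ALREADY)
+ * return err;
+ */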
+
+int
+hinic3_del_mac(void *hwdev, const u8 *mac_addr, u16 vlan_id, u16 func_id)
+{
+ struct hinic3_port_mac_set mac_info;
+ u16 out_size = sizeof(mac_info);
+ int err;
+
+ if (!hwdev || !mac_addr)
+ return -EINVAL;
+
+ if (vlan_id >= VLAN_N_VID) {
+ PMD_DRV_LOG(ERR, "Invalid VLAN number: %d", vlan_id);
+ return -EINVAL;
+ }
+
+ memset(&mac_info, 0, sizeof(mac_info));
+ mac_info.func_id = func_id;
+ mac_info.vlan_id = vlan_id;
+ memmove(mac_info.mac, mac_addr, ETH_ALEN);
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_DEL_MAC, &mac_info,
+ sizeof(mac_info), &mac_info, &out_size);
+ if (err || !out_size ||
+ (mac_info.msg_head.status &&
+ !PF_SET_VF_MAC(hwdev, mac_info.msg_head.status))) {
+ PMD_DRV_LOG(ERR,
+ "Delete MAC failed, "
+ "err: %d, status: 0x%x, out size: 0x%x",
+ err, mac_info.msg_head.status, out_size);
+ return -EIO;
+ }
+
+ if (PF_SET_VF_MAC(hwdev, mac_info.msg_head.status)) {
+ PMD_DRV_LOG(WARNING,
+ "PF has already set VF mac, Ignore delete operation");
+ return HINIC3_PF_SET_VF_ALREADY;
+ }
+
+ return 0;
+}
+
+int
+hinic3_update_mac(void *hwdev, u8 *old_mac, u8 *new_mac, u16 vlan_id,
+ u16 func_id)
+{
+ struct hinic3_port_mac_update mac_info;
+ u16 out_size = sizeof(mac_info);
+ int err;
+
+ if (!hwdev || !old_mac || !new_mac)
+ return -EINVAL;
+
+ if (vlan_id >= VLAN_N_VID) {
+ PMD_DRV_LOG(ERR, "Invalid VLAN number: %d", vlan_id);
+ return -EINVAL;
+ }
+
+ memset(&mac_info, 0, sizeof(mac_info));
+ mac_info.func_id = func_id;
+ mac_info.vlan_id = vlan_id;
+ memcpy(mac_info.old_mac, old_mac, ETH_ALEN);
+ memcpy(mac_info.new_mac, new_mac, ETH_ALEN);
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_UPDATE_MAC,
+ &mac_info, sizeof(mac_info), &mac_info,
+ &out_size);
+ if (err || !out_size ||
+ hinic3_check_mac_info(hwdev, mac_info.msg_head.status,
+ mac_info.vlan_id)) {
+ PMD_DRV_LOG(ERR,
+ "Update MAC failed, "
+ "err: %d, status: 0x%x, out size: 0x%x",
+ err, mac_info.msg_head.status, out_size);
+ return -EIO;
+ }
+
+ if (PF_SET_VF_MAC(hwdev, mac_info.msg_head.status)) {
+ PMD_DRV_LOG(WARNING,
+ "PF has already set VF MAC. Ignore update operation");
+ return HINIC3_PF_SET_VF_ALREADY;
+ }
+
+ if (mac_info.msg_head.status == HINIC3_MGMT_STATUS_EXIST) {
+ PMD_DRV_LOG(INFO, "MAC is repeated. Ignore update operation");
+ return 0;
+ }
+
+ return 0;
+}
+
+int
+hinic3_get_default_mac(void *hwdev, u8 *mac_addr, int ether_len)
+{
+ struct hinic3_port_mac_set mac_info;
+ u16 out_size = sizeof(mac_info);
+ int err;
+
+ if (!hwdev || !mac_addr)
+ return -EINVAL;
+
+ memset(&mac_info, 0, sizeof(mac_info));
+ mac_info.func_id = hinic3_global_func_id(hwdev);
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_GET_MAC, &mac_info,
+ sizeof(mac_info), &mac_info, &out_size);
+ if (err || !out_size || mac_info.msg_head.status) {
+ PMD_DRV_LOG(ERR,
+ "Get MAC failed, err: %d, status: 0x%x, out size: 0x%x",
+ err, mac_info.msg_head.status, out_size);
+ return -EINVAL;
+ }
+
+ memmove(mac_addr, mac_info.mac, ether_len);
+
+ return 0;
+}
+
+static int
+hinic3_config_vlan(void *hwdev, u8 opcode, u16 vlan_id, u16 func_id)
+{
+ struct hinic3_cmd_vlan_config vlan_info;
+ u16 out_size = sizeof(vlan_info);
+ int err;
+
+ memset(&vlan_info, 0, sizeof(vlan_info));
+ vlan_info.opcode = opcode;
+ vlan_info.func_id = func_id;
+ vlan_info.vlan_id = vlan_id;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_CFG_FUNC_VLAN,
+ &vlan_info, sizeof(vlan_info), &vlan_info,
+ &out_size);
+ if (err || !out_size || vlan_info.msg_head.status) {
+ PMD_DRV_LOG(ERR,
+ "%s vlan failed, err: %d, status: 0x%x, out size: 0x%x",
+ opcode == HINIC3_CMD_OP_ADD ? "Add" : "Delete", err,
+ vlan_info.msg_head.status, out_size);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+int
+hinic3_add_vlan(void *hwdev, u16 vlan_id, u16 func_id)
+{
+ if (!hwdev)
+ return -EINVAL;
+
+ return hinic3_config_vlan(hwdev, HINIC3_CMD_OP_ADD, vlan_id, func_id);
+}
+
+int
+hinic3_del_vlan(void *hwdev, u16 vlan_id, u16 func_id)
+{
+ if (!hwdev)
+ return -EINVAL;
+
+ return hinic3_config_vlan(hwdev, HINIC3_CMD_OP_DEL, vlan_id, func_id);
+}
+
+int
+hinic3_get_port_info(void *hwdev, struct nic_port_info *port_info)
+{
+ struct hinic3_cmd_port_info port_msg;
+ u16 out_size = sizeof(port_msg);
+ int err;
+
+ if (!hwdev || !port_info)
+ return -EINVAL;
+
+ memset(&port_msg, 0, sizeof(port_msg));
+ port_msg.port_id = hinic3_physical_port_id(hwdev);
+
+ err = mag_msg_to_mgmt_sync(hwdev, MAG_CMD_GET_PORT_INFO, &port_msg,
+ sizeof(port_msg), &port_msg, &out_size);
+ if (err || !out_size || port_msg.msg_head.status) {
+ PMD_DRV_LOG(ERR,
+ "Get port info failed, "
+ "err: %d, status: 0x%x, out size: 0x%x",
+ err, port_msg.msg_head.status, out_size);
+ return -EINVAL;
+ }
+
+ port_info->autoneg_cap = port_msg.autoneg_cap;
+ port_info->autoneg_state = port_msg.autoneg_state;
+ port_info->duplex = port_msg.duplex;
+ port_info->port_type = port_msg.port_type;
+ port_info->speed = port_msg.speed;
+ port_info->fec = port_msg.fec;
+
+ return 0;
+}
+
+int
+hinic3_get_link_state(void *hwdev, u8 *link_state)
+{
+ struct hinic3_cmd_link_state get_link;
+ u16 out_size = sizeof(get_link);
+ int err;
+
+ if (!hwdev || !link_state)
+ return -EINVAL;
+
+ memset(&get_link, 0, sizeof(get_link));
+ get_link.port_id = hinic3_physical_port_id(hwdev);
+ err = mag_msg_to_mgmt_sync(hwdev, MAG_CMD_GET_LINK_STATUS, &get_link,
+ sizeof(get_link), &get_link, &out_size);
+ if (err || !out_size || get_link.msg_head.status) {
+ PMD_DRV_LOG(ERR,
+ "Get link state failed, "
+ "err: %d, status: 0x%x, out size: 0x%x",
+ err, get_link.msg_head.status, out_size);
+ return -EIO;
+ }
+
+ *link_state = get_link.state;
+
+ return 0;
+}
+
+int
+hinic3_set_vport_enable(void *hwdev, bool enable)
+{
+ struct hinic3_vport_state en_state;
+ u16 out_size = sizeof(en_state);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ memset(&en_state, 0, sizeof(en_state));
+ en_state.func_id = hinic3_global_func_id(hwdev);
+ en_state.state = enable ? 1 : 0;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_SET_VPORT_ENABLE,
+ &en_state, sizeof(en_state), &en_state,
+ &out_size);
+ if (err || !out_size || en_state.msg_head.status) {
+ PMD_DRV_LOG(ERR,
+ "Set vport state failed, "
+ "err: %d, status: 0x%x, out size: 0x%x",
+ err, en_state.msg_head.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int
+hinic3_set_port_enable(void *hwdev, bool enable)
+{
+ struct mag_cmd_set_port_enable en_state;
+ u16 out_size = sizeof(en_state);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ if (hinic3_func_type(hwdev) == TYPE_VF)
+ return 0;
+
+ memset(&en_state, 0, sizeof(en_state));
+ en_state.function_id = hinic3_global_func_id(hwdev);
+ en_state.state = enable ? MAG_CMD_TX_ENABLE | MAG_CMD_RX_ENABLE
+ : MAG_CMD_PORT_DISABLE;
+
+ err = mag_msg_to_mgmt_sync(hwdev, MAG_CMD_SET_PORT_ENABLE, &en_state,
+ sizeof(en_state), &en_state, &out_size);
+ if (err || !out_size || en_state.head.status) {
+ PMD_DRV_LOG(ERR,
+ "Set port state failed, "
+ "err: %d, status: 0x%x, out size: 0x%x",
+ err, en_state.head.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int
+hinic3_flush_qps_res(void *hwdev)
+{
+ struct hinic3_cmd_clear_qp_resource sq_res;
+ u16 out_size = sizeof(sq_res);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ memset(&sq_res, 0, sizeof(sq_res));
+ sq_res.func_id = hinic3_global_func_id(hwdev);
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_CLEAR_QP_RESOURCE,
+ &sq_res, sizeof(sq_res), &sq_res,
+ &out_size);
+ if (err || !out_size || sq_res.msg_head.status) {
+ PMD_DRV_LOG(ERR,
+ "Clear sq resources failed, "
+ "err: %d, status: 0x%x, out size: 0x%x",
+ err, sq_res.msg_head.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+/**
+ * Get or set the flow control (pause frame) settings for NIC.
+ *
+ * @param[in] hwdev
+ * Pointer to the hardware device.
+ * @param[in] opcode
+ * The operation to perform. Use `HINIC3_CMD_OP_SET` to set the pause settings
+ * and `HINIC3_CMD_OP_GET` to get them.
+ * @param[out] nic_pause
+ * Pointer to the `nic_pause_config` structure. This structure contains the flow
+ * control settings (auto-negotiation, Rx pause, and Tx pause). It is updated
+ * when getting settings. When setting, the values in this structure are used.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ * - -EIO if the operation failed.
+ */
+static int
+hinic3_cfg_hw_pause(void *hwdev, u8 opcode, struct nic_pause_config *nic_pause)
+{
+ struct hinic3_cmd_pause_config pause_info = {0};
+ u16 out_size = sizeof(pause_info);
+ int err = 0;
+
+ pause_info.port_id = hinic3_physical_port_id(hwdev);
+ pause_info.opcode = opcode;
+ if (opcode == HINIC3_CMD_OP_SET) {
+ pause_info.auto_neg = nic_pause->auto_neg;
+ pause_info.rx_pause = nic_pause->rx_pause;
+ pause_info.tx_pause = nic_pause->tx_pause;
+ }
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_CFG_PAUSE_INFO,
+ &pause_info, sizeof(pause_info),
+ &pause_info, &out_size);
+ if (err || !out_size || pause_info.msg_head.status) {
+ PMD_DRV_LOG(ERR,
+ "%s pause info failed, err: %d, status: 0x%x, out "
+ "size: 0x%x",
+ opcode == HINIC3_CMD_OP_SET ? "Set" : "Get", err,
+ pause_info.msg_head.status, out_size);
+ return -EIO;
+ }
+
+ if (opcode == HINIC3_CMD_OP_GET) {
+ nic_pause->auto_neg = pause_info.auto_neg;
+ nic_pause->rx_pause = pause_info.rx_pause;
+ nic_pause->tx_pause = pause_info.tx_pause;
+ }
+
+ return 0;
+}
+
+int
+hinic3_set_pause_info(void *hwdev, struct nic_pause_config nic_pause)
+{
+ if (!hwdev)
+ return -EINVAL;
+
+ return hinic3_cfg_hw_pause(hwdev, HINIC3_CMD_OP_SET, &nic_pause);
+}
+
+int
+hinic3_get_pause_info(void *hwdev, struct nic_pause_config *nic_pause)
+{
+ if (!hwdev || !nic_pause)
+ return -EINVAL;
+
+ return hinic3_cfg_hw_pause(hwdev, HINIC3_CMD_OP_GET, nic_pause);
+}
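+
+/*
+ * Usage sketch, illustrative only: flow control updates follow a
+ * read-modify-write pattern so unrelated fields are preserved.
+ *
+ * struct nic_pause_config pause = {0};
+ *
+ * err = hinic3_get_pause_info(hwdev, &pause);
+ * if (err)
+ * return err;
+ * pause.rx_pause = 1;
+ * pause.tx_pause = 1;
+ * err = hinic3_set_pause_info(hwdev, pause);
+ */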
+
+int
+hinic3_get_vport_stats(void *hwdev, struct hinic3_vport_stats *stats)
+{
+ struct hinic3_port_stats_info stats_info;
+ struct hinic3_cmd_vport_stats vport_stats;
+ u16 out_size = sizeof(vport_stats);
+ int err;
+
+ if (!hwdev || !stats)
+ return -EINVAL;
+
+ memset(&stats_info, 0, sizeof(stats_info));
+ memset(&vport_stats, 0, sizeof(vport_stats));
+
+ stats_info.func_id = hinic3_global_func_id(hwdev);
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_GET_VPORT_STAT,
+ &stats_info, sizeof(stats_info),
+ &vport_stats, &out_size);
+ if (err || !out_size || vport_stats.msg_head.status) {
+ PMD_DRV_LOG(ERR,
+ "Get function stats failed, err: %d, status: 0x%x, "
+ "out size: 0x%x",
+ err, vport_stats.msg_head.status, out_size);
+ return -EIO;
+ }
+
+ memcpy(stats, &vport_stats.stats, sizeof(*stats));
+
+ return 0;
+}
+
+int
+hinic3_get_phy_port_stats(void *hwdev, struct mag_phy_port_stats *stats)
+{
+ struct mag_cmd_get_port_stat *port_stats = NULL;
+ struct mag_cmd_port_stats_info stats_info;
+ u16 out_size = sizeof(*port_stats);
+ int err;
+
+ port_stats = rte_zmalloc("port_stats", sizeof(*port_stats), 0);
+ if (!port_stats)
+ return -ENOMEM;
+
+ memset(&stats_info, 0, sizeof(stats_info));
+ stats_info.port_id = hinic3_physical_port_id(hwdev);
+
+ err = mag_msg_to_mgmt_sync(hwdev, MAG_CMD_GET_PORT_STAT, &stats_info,
+ sizeof(stats_info), port_stats, &out_size);
+ if (err || !out_size || port_stats->head.status) {
+ PMD_DRV_LOG(ERR,
+ "Failed to get port statistics, err: %d, status: "
+ "0x%x, out size: 0x%x",
+ err, port_stats->head.status, out_size);
+ err = -EIO;
+ goto out;
+ }
+
+ memcpy(stats, &port_stats->counter, sizeof(*stats));
+
+out:
+ rte_free(port_stats);
+
+ return err;
+}
+
+int
+hinic3_clear_vport_stats(void *hwdev)
+{
+ struct hinic3_cmd_clear_vport_stats clear_vport_stats;
+ u16 out_size = sizeof(clear_vport_stats);
+ int err;
+
+ if (!hwdev) {
+ PMD_DRV_LOG(ERR, "Hwdev is NULL");
+ return -EINVAL;
+ }
+
+ memset(&clear_vport_stats, 0, sizeof(clear_vport_stats));
+ clear_vport_stats.func_id = hinic3_global_func_id(hwdev);
+
+ err = l2nic_msg_to_mgmt_sync(hwdev,
+ HINIC3_NIC_CMD_CLEAN_VPORT_STAT, &clear_vport_stats,
+ sizeof(clear_vport_stats), &clear_vport_stats, &out_size);
+ if (err || !out_size || clear_vport_stats.msg_head.status) {
+ PMD_DRV_LOG(ERR,
+ "Clear vport stats failed, "
+ "err: %d, status: 0x%x, out size: 0x%x",
+ err, clear_vport_stats.msg_head.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int
+hinic3_clear_phy_port_stats(void *hwdev)
+{
+ struct mag_cmd_port_stats_info port_stats;
+ u16 out_size = sizeof(port_stats);
+ int err;
+
+ memset(&port_stats, 0, sizeof(port_stats));
+ port_stats.port_id = hinic3_physical_port_id(hwdev);
+
+ err = mag_msg_to_mgmt_sync(hwdev, MAG_CMD_CLR_PORT_STAT, &port_stats,
+ sizeof(port_stats), &port_stats, &out_size);
+ if (err || !out_size || port_stats.head.status) {
+ PMD_DRV_LOG(ERR,
+ "Failed to get port statistics, "
+ "err: %d, status: 0x%x, out size: 0x%x",
+ err, port_stats.head.status, out_size);
+ err = -EIO;
+ }
+
+ return err;
+}
+
+static int
+hinic3_set_function_table(void *hwdev, u32 cfg_bitmap,
+ struct hinic3_func_tbl_cfg *cfg)
+{
+ struct hinic3_cmd_set_func_tbl cmd_func_tbl;
+ u16 out_size = sizeof(cmd_func_tbl);
+ int err;
+
+ memset(&cmd_func_tbl, 0, sizeof(cmd_func_tbl));
+ cmd_func_tbl.func_id = hinic3_global_func_id(hwdev);
+ cmd_func_tbl.cfg_bitmap = cfg_bitmap;
+ cmd_func_tbl.tbl_cfg = *cfg;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_SET_FUNC_TBL,
+ &cmd_func_tbl, sizeof(cmd_func_tbl),
+ &cmd_func_tbl, &out_size);
+ if (err || cmd_func_tbl.msg_head.status || !out_size) {
+ PMD_DRV_LOG(ERR,
+ "Set func table failed, bitmap: 0x%x, err: %d, "
+ "status: 0x%x, out size: 0x%x",
+ cfg_bitmap, err, cmd_func_tbl.msg_head.status,
+ out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+int
+hinic3_init_function_table(void *hwdev, u16 rx_buff_len)
+{
+ struct hinic3_func_tbl_cfg func_tbl_cfg;
+ u32 cfg_bitmap = BIT(FUNC_CFG_INIT) | BIT(FUNC_CFG_MTU) |
+ BIT(FUNC_CFG_RX_BUF_SIZE);
+
+ memset(&func_tbl_cfg, 0, sizeof(func_tbl_cfg));
+ func_tbl_cfg.mtu = 0x3FFF; /* Default: maximum MTU. */
+ func_tbl_cfg.rx_wqe_buf_size = rx_buff_len;
+
+ return hinic3_set_function_table(hwdev, cfg_bitmap, &func_tbl_cfg);
+}
+
+int
+hinic3_set_port_mtu(void *hwdev, u16 new_mtu)
+{
+ struct hinic3_func_tbl_cfg func_tbl_cfg;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ if (new_mtu < HINIC3_MIN_MTU_SIZE) {
+ PMD_DRV_LOG(ERR,
+ "Invalid mtu size: %ubytes, mtu size < %ubytes",
+ new_mtu, HINIC3_MIN_MTU_SIZE);
+ return -EINVAL;
+ }
+
+ if (new_mtu > HINIC3_MAX_JUMBO_FRAME_SIZE) {
+ PMD_DRV_LOG(ERR,
+ "Invalid mtu size: %ubytes, mtu size > %ubytes",
+ new_mtu, HINIC3_MAX_JUMBO_FRAME_SIZE);
+ return -EINVAL;
+ }
+
+ memset(&func_tbl_cfg, 0, sizeof(func_tbl_cfg));
+ func_tbl_cfg.mtu = new_mtu;
+ return hinic3_set_function_table(hwdev, BIT(FUNC_CFG_MTU),
+ &func_tbl_cfg);
+}
+
+static int
+nic_feature_nego(void *hwdev, u8 opcode, u64 *s_feature, u16 size)
+{
+ struct hinic3_cmd_feature_nego feature_nego;
+ u16 out_size = sizeof(feature_nego);
+ int err;
+
+ if (!hwdev || !s_feature || size > MAX_FEATURE_QWORD)
+ return -EINVAL;
+
+ memset(&feature_nego, 0, sizeof(feature_nego));
+ feature_nego.func_id = hinic3_global_func_id(hwdev);
+ feature_nego.opcode = opcode;
+ if (opcode == HINIC3_CMD_OP_SET)
+ memcpy(feature_nego.s_feature, s_feature, size * sizeof(u64));
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_FEATURE_NEGO,
+ &feature_nego, sizeof(feature_nego),
+ &feature_nego, &out_size);
+ if (err || !out_size || feature_nego.msg_head.status) {
+ PMD_DRV_LOG(ERR,
+ "Failed to negotiate nic feature, err:%d, status: "
+ "0x%x, out_size: 0x%x",
+ err, feature_nego.msg_head.status, out_size);
+ return -EFAULT;
+ }
+
+ if (opcode == HINIC3_CMD_OP_GET)
+ memcpy(s_feature, feature_nego.s_feature, size * sizeof(u64));
+
+ return 0;
+}
+
+int
+hinic3_get_feature_from_hw(void *hwdev, u64 *s_feature, u16 size)
+{
+ return nic_feature_nego(hwdev, HINIC3_CMD_OP_GET, s_feature, size);
+}
+
+int
+hinic3_set_feature_to_hw(void *hwdev, u64 *s_feature, u16 size)
+{
+ return nic_feature_nego(hwdev, HINIC3_CMD_OP_SET, s_feature, size);
+}
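+
+/*
+ * Usage sketch, illustrative only: feature negotiation reads the bitmap
+ * from hardware, masks it with what the driver supports, and writes the
+ * result back. DRV_SUPPORTED_FEATURES is a hypothetical mask.
+ *
+ * u64 features = 0;
+ *
+ * err = hinic3_get_feature_from_hw(hwdev, &features, 1);
+ * if (err)
+ * return err;
+ * features &= DRV_SUPPORTED_FEATURES;
+ * err = hinic3_set_feature_to_hw(hwdev, &features, 1);
+ */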
+
+static int
+hinic3_vf_func_init(void *hwdev)
+{
+ struct hinic3_cmd_register_vf register_info;
+ u16 out_size = sizeof(register_info);
+ int err;
+
+ if (hinic3_func_type(hwdev) != TYPE_VF)
+ return 0;
+
+ memset(&register_info, 0, sizeof(register_info));
+ register_info.op_register = 1;
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_VF_REGISTER,
+ &register_info, sizeof(register_info),
+ &register_info, &out_size);
+ if (err || register_info.msg_head.status || !out_size) {
+ PMD_DRV_LOG(ERR,
+ "Register VF failed, err: %d, status: 0x%x, out "
+ "size: 0x%x",
+ err, register_info.msg_head.status, out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+static int
+hinic3_vf_func_free(void *hwdev)
+{
+ struct hinic3_cmd_register_vf unregister;
+ u16 out_size = sizeof(unregister);
+ int err;
+
+ if (hinic3_func_type(hwdev) != TYPE_VF)
+ return 0;
+
+ memset(&unregister, 0, sizeof(unregister));
+ unregister.op_register = 0;
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_VF_REGISTER,
+ &unregister, sizeof(unregister),
+ &unregister, &out_size);
+ if (err || unregister.msg_head.status || !out_size) {
+ PMD_DRV_LOG(ERR,
+ "Unregister VF failed, err: %d, status: 0x%x, out "
+ "size: 0x%x",
+ err, unregister.msg_head.status, out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+int
+hinic3_init_nic_hwdev(void *hwdev)
+{
+ return hinic3_vf_func_init(hwdev);
+}
+
+void
+hinic3_free_nic_hwdev(void *hwdev)
+{
+ if (!hwdev)
+ return;
+
+ if (hinic3_func_type(hwdev) != TYPE_VF)
+ (void)hinic3_set_link_status_follow(hwdev,
+ HINIC3_LINK_FOLLOW_DEFAULT);
+
+ hinic3_vf_func_free(hwdev);
+}
+
+int
+hinic3_set_rx_mode(void *hwdev, u32 enable)
+{
+ struct hinic3_rx_mode_config rx_mode_cfg;
+ u16 out_size = sizeof(rx_mode_cfg);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ memset(&rx_mode_cfg, 0, sizeof(rx_mode_cfg));
+ rx_mode_cfg.func_id = hinic3_global_func_id(hwdev);
+ rx_mode_cfg.rx_mode = enable;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_SET_RX_MODE,
+ &rx_mode_cfg, sizeof(rx_mode_cfg),
+ &rx_mode_cfg, &out_size);
+ if (err || !out_size || rx_mode_cfg.msg_head.status) {
+ PMD_DRV_LOG(ERR,
+ "Set rx mode failed, err: %d, status: 0x%x, out "
+ "size: 0x%x",
+ err, rx_mode_cfg.msg_head.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int
+hinic3_set_rx_vlan_offload(void *hwdev, u8 en)
+{
+ struct hinic3_cmd_vlan_offload vlan_cfg;
+ u16 out_size = sizeof(vlan_cfg);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ memset(&vlan_cfg, 0, sizeof(vlan_cfg));
+ vlan_cfg.func_id = hinic3_global_func_id(hwdev);
+ vlan_cfg.vlan_offload = en;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_SET_RX_VLAN_OFFLOAD,
+ &vlan_cfg, sizeof(vlan_cfg), &vlan_cfg,
+ &out_size);
+ if (err || !out_size || vlan_cfg.msg_head.status) {
+ PMD_DRV_LOG(ERR,
+ "Set rx vlan offload failed, err: %d, status: "
+ "0x%x, out size: 0x%x",
+ err, vlan_cfg.msg_head.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int
+hinic3_set_vlan_fliter(void *hwdev, u32 vlan_filter_ctrl)
+{
+ struct hinic3_cmd_set_vlan_filter vlan_filter;
+ u16 out_size = sizeof(vlan_filter);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ memset(&vlan_filter, 0, sizeof(vlan_filter));
+ vlan_filter.func_id = hinic3_global_func_id(hwdev);
+ vlan_filter.vlan_filter_ctrl = vlan_filter_ctrl;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_SET_VLAN_FILTER_EN,
+ &vlan_filter, sizeof(vlan_filter),
+ &vlan_filter, &out_size);
+ if (err || !out_size || vlan_filter.msg_head.status) {
+ PMD_DRV_LOG(ERR,
+ "Failed to set vlan filter, err: %d, status: 0x%x, "
+ "out size: 0x%x",
+ err, vlan_filter.msg_head.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+static int
+hinic3_set_rx_lro(void *hwdev, u8 ipv4_en, u8 ipv6_en, u8 lro_max_pkt_len)
+{
+ struct hinic3_cmd_lro_config lro_cfg = {0};
+ u16 out_size = sizeof(lro_cfg);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ lro_cfg.func_id = hinic3_global_func_id(hwdev);
+ lro_cfg.opcode = HINIC3_CMD_OP_SET;
+ lro_cfg.lro_ipv4_en = ipv4_en;
+ lro_cfg.lro_ipv6_en = ipv6_en;
+ lro_cfg.lro_max_pkt_len = lro_max_pkt_len;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_CFG_RX_LRO, &lro_cfg,
+ sizeof(lro_cfg), &lro_cfg, &out_size);
+ if (err || !out_size || lro_cfg.msg_head.status) {
+ PMD_DRV_LOG(ERR,
+ "Set lro offload failed, err: %d, status: 0x%x, "
+ "out size: 0x%x",
+ err, lro_cfg.msg_head.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+static int
+hinic3_set_rx_lro_timer(void *hwdev, u32 timer_value)
+{
+ struct hinic3_cmd_lro_timer lro_timer;
+ u16 out_size = sizeof(lro_timer);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ memset(&lro_timer, 0, sizeof(lro_timer));
+ lro_timer.opcode = HINIC3_CMD_OP_SET;
+ lro_timer.timer = timer_value;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_CFG_LRO_TIMER,
+ &lro_timer, sizeof(lro_timer), &lro_timer,
+ &out_size);
+ if (err || !out_size || lro_timer.msg_head.status) {
+ PMD_DRV_LOG(ERR,
+ "Set lro timer failed, err: %d, status: 0x%x, out "
+ "size: 0x%x",
+ err, lro_timer.msg_head.status, out_size);
+
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int
+hinic3_set_rx_lro_state(void *hwdev, u8 lro_en, u32 lro_timer,
+ u32 lro_max_pkt_len)
+{
+ u8 ipv4_en = 0, ipv6_en = 0;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ ipv4_en = lro_en ? 1 : 0;
+ ipv6_en = lro_en ? 1 : 0;
+
+ PMD_DRV_LOG(INFO, "Set LRO max coalesce packet size to %uK",
+ lro_max_pkt_len);
+
+ err = hinic3_set_rx_lro(hwdev, ipv4_en, ipv6_en, (u8)lro_max_pkt_len);
+ if (err)
+ return err;
+
+ /* We don't set LRO timer for VF */
+ if (hinic3_func_type(hwdev) == TYPE_VF)
+ return 0;
+
+ PMD_DRV_LOG(INFO, "Set LRO timer to %u", lro_timer);
+
+ return hinic3_set_rx_lro_timer(hwdev, lro_timer);
+}
+
+/* RSS config */
+int
+hinic3_rss_template_alloc(void *hwdev)
+{
+ struct hinic3_rss_template_mgmt template_mgmt;
+ u16 out_size = sizeof(template_mgmt);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ memset(&template_mgmt, 0, sizeof(struct hinic3_rss_template_mgmt));
+ template_mgmt.func_id = hinic3_global_func_id(hwdev);
+ template_mgmt.cmd = NIC_RSS_CMD_TEMP_ALLOC;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_RSS_TEMP_MGR,
+ &template_mgmt, sizeof(template_mgmt),
+ &template_mgmt, &out_size);
+ if (err || !out_size || template_mgmt.msg_head.status) {
+ if (template_mgmt.msg_head.status ==
+ HINIC3_MGMT_STATUS_TABLE_FULL) {
+ PMD_DRV_LOG(ERR, "There is no more template available");
+ return -ENOSPC;
+ }
+ PMD_DRV_LOG(ERR,
+ "Alloc rss template failed, err: %d, "
+ "status: 0x%x, out size: 0x%x",
+ err, template_mgmt.msg_head.status, out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+int
+hinic3_rss_template_free(void *hwdev)
+{
+ struct hinic3_rss_template_mgmt template_mgmt;
+ u16 out_size = sizeof(template_mgmt);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ memset(&template_mgmt, 0, sizeof(struct hinic3_rss_template_mgmt));
+ template_mgmt.func_id = hinic3_global_func_id(hwdev);
+ template_mgmt.cmd = NIC_RSS_CMD_TEMP_FREE;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_RSS_TEMP_MGR,
+ &template_mgmt, sizeof(template_mgmt),
+ &template_mgmt, &out_size);
+ if (err || !out_size || template_mgmt.msg_head.status) {
+ PMD_DRV_LOG(ERR,
+ "Free rss template failed, err: %d, "
+ "status: 0x%x, out size: 0x%x",
+ err, template_mgmt.msg_head.status, out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+static int
+hinic3_rss_cfg_hash_key(void *hwdev, u8 opcode, u8 *key, u16 key_size)
+{
+ struct hinic3_cmd_rss_hash_key hash_key;
+ u16 out_size = sizeof(hash_key);
+ int err;
+
+ if (!hwdev || !key)
+ return -EINVAL;
+
+ memset(&hash_key, 0, sizeof(struct hinic3_cmd_rss_hash_key));
+ hash_key.func_id = hinic3_global_func_id(hwdev);
+ hash_key.opcode = opcode;
+ if (opcode == HINIC3_CMD_OP_SET)
+ memcpy(hash_key.key, key, key_size);
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_CFG_RSS_HASH_KEY,
+ &hash_key, sizeof(hash_key), &hash_key,
+ &out_size);
+ if (err || !out_size || hash_key.msg_head.status) {
+ PMD_DRV_LOG(ERR,
+ "%s hash key failed, err: %d, "
+ "status: 0x%x, out size: 0x%x",
+ opcode == HINIC3_CMD_OP_SET ? "Set" : "Get", err,
+ hash_key.msg_head.status, out_size);
+ return -EFAULT;
+ }
+
+ if (opcode == HINIC3_CMD_OP_GET)
+ memcpy(key, hash_key.key, key_size);
+
+ return 0;
+}
+
+int
+hinic3_rss_set_hash_key(void *hwdev, u8 *key, u16 key_size)
+{
+ if (!hwdev || !key)
+ return -EINVAL;
+
+ return hinic3_rss_cfg_hash_key(hwdev, HINIC3_CMD_OP_SET, key, key_size);
+}
+
+int
+hinic3_rss_get_indir_tbl(void *hwdev, u32 *indir_table, u32 indir_table_size)
+{
+ struct hinic3_cmd_buf *cmd_buf = NULL;
+ u16 *indir_tbl = NULL;
+ int err;
+ u32 i;
+
+ if (!hwdev || !indir_table)
+ return -EINVAL;
+
+ cmd_buf = hinic3_alloc_cmd_buf(hwdev);
+ if (!cmd_buf) {
+ PMD_DRV_LOG(ERR, "Allocate cmd buf failed");
+ return -ENOMEM;
+ }
+
+ cmd_buf->size = sizeof(struct nic_rss_indirect_tbl);
+ err = hinic3_cmdq_detail_resp(hwdev, HINIC3_MOD_L2NIC,
+ HINIC3_UCODE_CMD_GET_RSS_INDIR_TABLE,
+ cmd_buf, cmd_buf, 0);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Get rss indir table failed");
+ hinic3_free_cmd_buf(cmd_buf);
+ return err;
+ }
+
+ indir_tbl = (u16 *)cmd_buf->buf;
+ for (i = 0; i < indir_table_size; i++)
+ indir_table[i] = *(indir_tbl + i);
+
+ hinic3_free_cmd_buf(cmd_buf);
+ return 0;
+}
+
+int
+hinic3_rss_set_indir_tbl(void *hwdev, const u32 *indir_table,
+ u32 indir_table_size)
+{
+ struct nic_rss_indirect_tbl *indir_tbl = NULL;
+ struct hinic3_cmd_buf *cmd_buf = NULL;
+ u32 i, size;
+ u32 *temp = NULL;
+ u64 out_param = 0;
+ int err;
+
+ if (!hwdev || !indir_table)
+ return -EINVAL;
+
+ cmd_buf = hinic3_alloc_cmd_buf(hwdev);
+ if (!cmd_buf) {
+ PMD_DRV_LOG(ERR, "Allocate cmd buf failed");
+ return -ENOMEM;
+ }
+
+ cmd_buf->size = sizeof(struct nic_rss_indirect_tbl);
+ indir_tbl = (struct nic_rss_indirect_tbl *)cmd_buf->buf;
+ memset(indir_tbl, 0, sizeof(*indir_tbl));
+
+ for (i = 0; i < indir_table_size; i++)
+ indir_tbl->entry[i] = (u16)(*(indir_table + i));
+
+ rte_mb();
+ size = sizeof(indir_tbl->entry) / sizeof(u32);
+ temp = (u32 *)indir_tbl->entry;
+ for (i = 0; i < size; i++)
+ temp[i] = cpu_to_be32(temp[i]);
+
+ err = hinic3_cmdq_direct_resp(hwdev, HINIC3_MOD_L2NIC,
+ HINIC3_UCODE_CMD_SET_RSS_INDIR_TABLE,
+ cmd_buf, &out_param, 0);
+ if (err || out_param != 0) {
+ PMD_DRV_LOG(ERR, "Set rss indir table failed");
+ err = -EFAULT;
+ }
+
+ hinic3_free_cmd_buf(cmd_buf);
+ return err;
+}
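+
+/*
+ * Usage sketch, illustrative only: a common default spreads the
+ * indirection table round-robin over the enabled RX queues. The table
+ * size macro name is assumed to be provided by the driver headers.
+ *
+ * u32 indir[HINIC3_RSS_INDIR_SIZE];
+ * u32 i;
+ *
+ * for (i = 0; i < HINIC3_RSS_INDIR_SIZE; i++)
+ * indir[i] = i % num_rxqs;
+ * err = hinic3_rss_set_indir_tbl(hwdev, indir, HINIC3_RSS_INDIR_SIZE);
+ */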
+
+static int
+hinic3_cmdq_set_rss_type(void *hwdev, struct hinic3_rss_type rss_type)
+{
+ struct nic_rss_context_tbl *ctx_tbl = NULL;
+ struct hinic3_cmd_buf *cmd_buf = NULL;
+ u32 ctx = 0;
+ u64 out_param = 0;
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ cmd_buf = hinic3_alloc_cmd_buf(hwdev);
+ if (!cmd_buf) {
+ PMD_DRV_LOG(ERR, "Allocate cmd buf failed");
+ return -ENOMEM;
+ }
+
+ ctx |= HINIC3_RSS_TYPE_SET(1, VALID) |
+ HINIC3_RSS_TYPE_SET(rss_type.ipv4, IPV4) |
+ HINIC3_RSS_TYPE_SET(rss_type.ipv6, IPV6) |
+ HINIC3_RSS_TYPE_SET(rss_type.ipv6_ext, IPV6_EXT) |
+ HINIC3_RSS_TYPE_SET(rss_type.tcp_ipv4, TCP_IPV4) |
+ HINIC3_RSS_TYPE_SET(rss_type.tcp_ipv6, TCP_IPV6) |
+ HINIC3_RSS_TYPE_SET(rss_type.tcp_ipv6_ext, TCP_IPV6_EXT) |
+ HINIC3_RSS_TYPE_SET(rss_type.udp_ipv4, UDP_IPV4) |
+ HINIC3_RSS_TYPE_SET(rss_type.udp_ipv6, UDP_IPV6);
+
+ cmd_buf->size = sizeof(struct nic_rss_context_tbl);
+ ctx_tbl = (struct nic_rss_context_tbl *)cmd_buf->buf;
+ memset(ctx_tbl, 0, sizeof(*ctx_tbl));
+ rte_mb();
+ ctx_tbl->ctx = cpu_to_be32(ctx);
+
+ /* Cfg the RSS context table by command queue. */
+ err = hinic3_cmdq_direct_resp(hwdev, HINIC3_MOD_L2NIC,
+ HINIC3_UCODE_CMD_SET_RSS_CONTEXT_TABLE,
+ cmd_buf, &out_param, 0);
+
+ hinic3_free_cmd_buf(cmd_buf);
+
+ if (err || out_param != 0) {
+ PMD_DRV_LOG(ERR, "Cmdq set rss context table failed, err: %d",
+ err);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+static int
+hinic3_mgmt_set_rss_type(void *hwdev, struct hinic3_rss_type rss_type)
+{
+ struct hinic3_rss_context_table ctx_tbl;
+ u32 ctx = 0;
+ u16 out_size = sizeof(ctx_tbl);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ memset(&ctx_tbl, 0, sizeof(ctx_tbl));
+ ctx_tbl.func_id = hinic3_global_func_id(hwdev);
+ ctx |= HINIC3_RSS_TYPE_SET(1, VALID) |
+ HINIC3_RSS_TYPE_SET(rss_type.ipv4, IPV4) |
+ HINIC3_RSS_TYPE_SET(rss_type.ipv6, IPV6) |
+ HINIC3_RSS_TYPE_SET(rss_type.ipv6_ext, IPV6_EXT) |
+ HINIC3_RSS_TYPE_SET(rss_type.tcp_ipv4, TCP_IPV4) |
+ HINIC3_RSS_TYPE_SET(rss_type.tcp_ipv6, TCP_IPV6) |
+ HINIC3_RSS_TYPE_SET(rss_type.tcp_ipv6_ext, TCP_IPV6_EXT) |
+ HINIC3_RSS_TYPE_SET(rss_type.udp_ipv4, UDP_IPV4) |
+ HINIC3_RSS_TYPE_SET(rss_type.udp_ipv6, UDP_IPV6);
+ ctx_tbl.context = ctx;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev,
+ HINIC3_NIC_CMD_SET_RSS_CTX_TBL_INTO_FUNC, &ctx_tbl,
+ sizeof(ctx_tbl), &ctx_tbl, &out_size);
+ if (ctx_tbl.msg_head.status == HINIC3_MGMT_CMD_UNSUPPORTED) {
+ return HINIC3_MGMT_CMD_UNSUPPORTED;
+ } else if (err || !out_size || ctx_tbl.msg_head.status) {
+ PMD_DRV_LOG(ERR,
+ "Mgmt set rss context table failed, "
+ "err: %d, status: 0x%x, out size: 0x%x",
+ err, ctx_tbl.msg_head.status, out_size);
+ return -EINVAL;
+ }
+ return 0;
+}
+
+int
+hinic3_set_rss_type(void *hwdev, struct hinic3_rss_type rss_type)
+{
+ int err;
+
+ err = hinic3_mgmt_set_rss_type(hwdev, rss_type);
+ if (err == HINIC3_MGMT_CMD_UNSUPPORTED)
+ err = hinic3_cmdq_set_rss_type(hwdev, rss_type);
+ return err;
+}
+
+int
+hinic3_get_rss_type(void *hwdev, struct hinic3_rss_type *rss_type)
+{
+ struct hinic3_rss_context_table ctx_tbl;
+ u16 out_size = sizeof(ctx_tbl);
+ int err;
+
+ if (!hwdev || !rss_type)
+ return -EINVAL;
+
+ memset(&ctx_tbl, 0, sizeof(struct hinic3_rss_context_table));
+ ctx_tbl.func_id = hinic3_global_func_id(hwdev);
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_GET_RSS_CTX_TBL,
+ &ctx_tbl, sizeof(ctx_tbl), &ctx_tbl,
+ &out_size);
+ if (err || !out_size || ctx_tbl.msg_head.status) {
+ PMD_DRV_LOG(ERR,
+ "Get hash type failed, "
+ "err: %d, status: 0x%x, out size: 0x%x",
+ err, ctx_tbl.msg_head.status, out_size);
+ return -EFAULT;
+ }
+
+ rss_type->ipv4 = HINIC3_RSS_TYPE_GET(ctx_tbl.context, IPV4);
+ rss_type->ipv6 = HINIC3_RSS_TYPE_GET(ctx_tbl.context, IPV6);
+ rss_type->ipv6_ext = HINIC3_RSS_TYPE_GET(ctx_tbl.context, IPV6_EXT);
+ rss_type->tcp_ipv4 = HINIC3_RSS_TYPE_GET(ctx_tbl.context, TCP_IPV4);
+ rss_type->tcp_ipv6 = HINIC3_RSS_TYPE_GET(ctx_tbl.context, TCP_IPV6);
+ rss_type->tcp_ipv6_ext =
+ HINIC3_RSS_TYPE_GET(ctx_tbl.context, TCP_IPV6_EXT);
+ rss_type->udp_ipv4 = HINIC3_RSS_TYPE_GET(ctx_tbl.context, UDP_IPV4);
+ rss_type->udp_ipv6 = HINIC3_RSS_TYPE_GET(ctx_tbl.context, UDP_IPV6);
+
+ return 0;
+}
+
+static int
+hinic3_rss_cfg_hash_engine(void *hwdev, u8 opcode, u8 *type)
+{
+ struct hinic3_cmd_rss_engine_type hash_type;
+ u16 out_size = sizeof(hash_type);
+ int err;
+
+ if (!hwdev || !type)
+ return -EINVAL;
+
+ memset(&hash_type, 0, sizeof(struct hinic3_cmd_rss_engine_type));
+ hash_type.func_id = hinic3_global_func_id(hwdev);
+ hash_type.opcode = opcode;
+ if (opcode == HINIC3_CMD_OP_SET)
+ hash_type.hash_engine = *type;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_CFG_RSS_HASH_ENGINE,
+ &hash_type, sizeof(hash_type), &hash_type,
+ &out_size);
+ if (err || !out_size || hash_type.msg_head.status) {
+ PMD_DRV_LOG(ERR,
+ "%s hash engine failed, "
+ "err: %d, status: 0x%x, out size: 0x%x",
+ opcode == HINIC3_CMD_OP_SET ? "Set" : "Get", err,
+ hash_type.msg_head.status, out_size);
+ return -EFAULT;
+ }
+
+ if (opcode == HINIC3_CMD_OP_GET)
+ *type = hash_type.hash_engine;
+
+ return 0;
+}
+
+int
+hinic3_rss_get_hash_engine(void *hwdev, u8 *type)
+{
+ if (!hwdev || !type)
+ return -EINVAL;
+
+ return hinic3_rss_cfg_hash_engine(hwdev, HINIC3_CMD_OP_GET, type);
+}
+
+int
+hinic3_rss_set_hash_engine(void *hwdev, u8 type)
+{
+ if (!hwdev)
+ return -EINVAL;
+
+ return hinic3_rss_cfg_hash_engine(hwdev, HINIC3_CMD_OP_SET, &type);
+}
+
+int
+hinic3_rss_cfg(void *hwdev, u8 rss_en, u8 tc_num, u8 *prio_tc)
+{
+ struct hinic3_cmd_rss_config rss_cfg;
+ u16 out_size = sizeof(rss_cfg);
+ int err;
+
+ /* Ucode requires that the number of TCs be a power of 2. */
+ if (!hwdev || !prio_tc || (tc_num & (tc_num - 1)))
+ return -EINVAL;
+
+ memset(&rss_cfg, 0, sizeof(struct hinic3_cmd_rss_config));
+ rss_cfg.func_id = hinic3_global_func_id(hwdev);
+ rss_cfg.rss_en = rss_en;
+ rss_cfg.rq_priority_number = tc_num ? (u8)ilog2(tc_num) : 0;
+
+ memcpy(rss_cfg.prio_tc, prio_tc, HINIC3_DCB_UP_MAX);
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_RSS_CFG, &rss_cfg,
+ sizeof(rss_cfg), &rss_cfg, &out_size);
+ if (err || !out_size || rss_cfg.msg_head.status) {
+ PMD_DRV_LOG(ERR,
+ "Set rss cfg failed, err: %d, "
+ "status: 0x%x, out size: 0x%x",
+ err, rss_cfg.msg_head.status, out_size);
+ return -EFAULT;
+ }
+
+ return 0;
+}
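+
+/*
+ * Usage sketch, illustrative only, of a minimal RSS bring-up order using
+ * the helpers above; preparing key/indir/rss_type/prio_tc is omitted and
+ * the hash engine constant name is an assumption:
+ *
+ * err = hinic3_rss_template_alloc(hwdev);
+ * err = err ? err : hinic3_rss_set_hash_key(hwdev, key, key_size);
+ * err = err ? err : hinic3_rss_set_indir_tbl(hwdev, indir, indir_size);
+ * err = err ? err : hinic3_set_rss_type(hwdev, rss_type);
+ * err = err ? err : hinic3_rss_set_hash_engine(hwdev,
+ * HINIC3_RSS_HASH_ENGINE_TYPE_TOEP);
+ * err = err ? err : hinic3_rss_cfg(hwdev, 1, tc_num, prio_tc);
+ */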
+
+int
+hinic3_vf_get_default_cos(void *hwdev, u8 *cos_id)
+{
+ struct hinic3_cmd_vf_dcb_state vf_dcb;
+ u16 out_size = sizeof(vf_dcb);
+ int err;
+
+ memset(&vf_dcb, 0, sizeof(vf_dcb));
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_VF_COS, &vf_dcb,
+ sizeof(vf_dcb), &vf_dcb, &out_size);
+ if (err || !out_size || vf_dcb.msg_head.status) {
+ PMD_DRV_LOG(ERR,
+ "Get VF default cos failed, "
+ "err: %d, status: 0x%x, out size: 0x%x",
+ err, vf_dcb.msg_head.status, out_size);
+ return -EIO;
+ }
+
+ *cos_id = vf_dcb.state.default_cos;
+
+ return 0;
+}
+
+/**
+ * Set the Ethernet type filtering rule for the FDIR of a NIC.
+ *
+ * @param[in] hwdev
+ * Pointer to hardware device structure.
+ * @param[in] pkt_type
+ * Indicate the packet type.
+ * @param[in] queue_id
+ * Indicate the queue id.
+ * @param[in] en
+ * Indicate whether to add or delete the rule: 1 - add; 0 - delete.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int
+hinic3_set_fdir_ethertype_filter(void *hwdev, u8 pkt_type, u16 queue_id, u8 en)
+{
+ struct hinic3_set_fdir_ethertype_rule ethertype_cmd;
+ u16 out_size = sizeof(ethertype_cmd);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ memset(&ethertype_cmd, 0,
+ sizeof(struct hinic3_set_fdir_ethertype_rule));
+ ethertype_cmd.func_id = hinic3_global_func_id(hwdev);
+ ethertype_cmd.pkt_type = pkt_type;
+ ethertype_cmd.pkt_type_en = en;
+ ethertype_cmd.qid = (u8)queue_id;
+
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_SET_FDIR_STATUS,
+ &ethertype_cmd, sizeof(ethertype_cmd),
+ &ethertype_cmd, &out_size);
+ if (err || ethertype_cmd.head.status || !out_size) {
+ PMD_DRV_LOG(ERR,
+ "set fdir ethertype rule failed, "
+ "err: %d, status: 0x%x, out size: 0x%x, func_id %d",
+ err, ethertype_cmd.head.status, out_size,
+ ethertype_cmd.func_id);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int
+hinic3_add_tcam_rule(void *hwdev, struct hinic3_tcam_cfg_rule *tcam_rule,
+ u8 tcam_rule_type)
+{
+ struct hinic3_fdir_add_rule tcam_cmd;
+ u16 out_size = sizeof(tcam_cmd);
+ int err;
+
+ if (!hwdev || !tcam_rule)
+ return -EINVAL;
+ /* Check whether the index is out of range. */
+ if (tcam_rule->index >= HINIC3_MAX_TCAM_RULES_NUM) {
+ PMD_DRV_LOG(ERR, "Tcam rules num to add is invalid");
+ return -EINVAL;
+ }
+
+ memset(&tcam_cmd, 0, sizeof(struct hinic3_fdir_add_rule));
+ tcam_cmd.func_id = hinic3_global_func_id(hwdev);
+ memcpy((void *)&tcam_cmd.rule, (void *)tcam_rule,
+ sizeof(struct hinic3_tcam_cfg_rule));
+ tcam_cmd.type = tcam_rule_type;
+
+ /* Synchronize the information to the management module. */
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_ADD_TC_FLOW,
+ &tcam_cmd, sizeof(tcam_cmd), &tcam_cmd,
+ &out_size);
+ if (err || tcam_cmd.msg_head.status || !out_size) {
+ PMD_DRV_LOG(ERR,
+ "Add tcam rule failed, "
+ "err: %d, status: 0x%x, out size: 0x%x",
+ err, tcam_cmd.msg_head.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int
+hinic3_del_tcam_rule(void *hwdev, u32 index, u8 tcam_rule_type)
+{
+ struct hinic3_fdir_del_rule tcam_cmd;
+ u16 out_size = sizeof(tcam_cmd);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+ /* Check whether the index is out of range. */
+ if (index >= HINIC3_MAX_TCAM_RULES_NUM) {
+ PMD_DRV_LOG(ERR, "Tcam rules num to del is invalid");
+ return -EINVAL;
+ }
+
+ memset(&tcam_cmd, 0, sizeof(struct hinic3_fdir_del_rule));
+ tcam_cmd.func_id = hinic3_global_func_id(hwdev);
+ tcam_cmd.index_start = index;
+ tcam_cmd.index_num = 1;
+ tcam_cmd.type = tcam_rule_type;
+
+ /* Synchronize the information to the management module. */
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_DEL_TC_FLOW,
+ &tcam_cmd, sizeof(tcam_cmd), &tcam_cmd,
+ &out_size);
+ if (err || tcam_cmd.msg_head.status || !out_size) {
+ PMD_DRV_LOG(ERR,
+ "Del tcam rule failed, "
+ "err: %d, status: 0x%x, out size: 0x%x",
+ err, tcam_cmd.msg_head.status, out_size);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+static int
+hinic3_cfg_tcam_block(void *hwdev, u8 alloc_en, u16 *index)
+{
+ struct hinic3_tcam_block tcam_block_info;
+ u16 out_size = sizeof(tcam_block_info);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+ /* Initialize the TCAM block structure. */
+ memset(&tcam_block_info, 0, sizeof(struct hinic3_tcam_block));
+ tcam_block_info.func_id = hinic3_global_func_id(hwdev);
+ tcam_block_info.alloc_en = alloc_en;
+ tcam_block_info.tcam_type = HINIC3_TCAM_BLOCK_NORMAL_TYPE;
+ tcam_block_info.tcam_block_index = *index;
+
+ /* Synchronize the information to the management module. */
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_CFG_TCAM_BLOCK,
+ &tcam_block_info, sizeof(tcam_block_info),
+ &tcam_block_info, &out_size);
+ if (err || !out_size || tcam_block_info.msg_head.status) {
+ PMD_DRV_LOG(ERR,
+ "Set tcam block failed, err: %d, status: 0x%x, out "
+ "size: 0x%x",
+ err, tcam_block_info.msg_head.status, out_size);
+ return -EIO;
+ }
+
+ /* Update the allocated TCAM block index. */
+ if (alloc_en)
+ *index = tcam_block_info.tcam_block_index;
+
+ return 0;
+}
+
+int
+hinic3_alloc_tcam_block(void *hwdev, u16 *index)
+{
+ return hinic3_cfg_tcam_block(hwdev, HINIC3_TCAM_BLOCK_ENABLE, index);
+}
+
+int
+hinic3_free_tcam_block(void *hwdev, u16 *index)
+{
+ return hinic3_cfg_tcam_block(hwdev, HINIC3_TCAM_BLOCK_DISABLE, index);
+}
+
+int
+hinic3_flush_tcam_rule(void *hwdev)
+{
+ struct hinic3_flush_tcam_rules tcam_flush;
+ u16 out_size = sizeof(tcam_flush);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ memset(&tcam_flush, 0, sizeof(struct hinic3_flush_tcam_rules));
+ tcam_flush.func_id = hinic3_global_func_id(hwdev);
+
+ err = l2nic_msg_to_mgmt_sync(hwdev,
+ HINIC3_NIC_CMD_FLUSH_TCAM, &tcam_flush,
+ sizeof(struct hinic3_flush_tcam_rules), &tcam_flush, &out_size);
+ if (tcam_flush.msg_head.status == HINIC3_MGMT_CMD_UNSUPPORTED) {
+ err = HINIC3_MGMT_CMD_UNSUPPORTED;
+ PMD_DRV_LOG(INFO,
+ "Firmware/uP doesn't support flush tcam fdir");
+ } else if (err || (!out_size) || tcam_flush.msg_head.status) {
+ PMD_DRV_LOG(ERR,
+ "Flush tcam fdir rules failed, err: %d, status: "
+ "0x%x, out size: 0x%x",
+ err, tcam_flush.msg_head.status, out_size);
+ err = -EIO;
+ }
+
+ return err;
+}
+
+int
+hinic3_set_fdir_tcam_rule_filter(void *hwdev, bool enable)
+{
+ struct hinic3_port_tcam_info port_tcam_cmd;
+ u16 out_size = sizeof(port_tcam_cmd);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+ /* Initialize the command structure. */
+ memset(&port_tcam_cmd, 0, sizeof(port_tcam_cmd));
+ port_tcam_cmd.func_id = hinic3_global_func_id(hwdev);
+ port_tcam_cmd.tcam_enable = (u8)enable;
+
+ /* Synchronize the information to the management module. */
+ err = l2nic_msg_to_mgmt_sync(hwdev, HINIC3_NIC_CMD_ENABLE_TCAM,
+ &port_tcam_cmd, sizeof(port_tcam_cmd),
+ &port_tcam_cmd, &out_size);
+ if ((port_tcam_cmd.msg_head.status != HINIC3_MGMT_CMD_UNSUPPORTED &&
+ port_tcam_cmd.msg_head.status) ||
+ err || !out_size) {
+ PMD_DRV_LOG(ERR,
+ "Set fdir tcam filter failed, err: %d, "
+ "status: 0x%x, out size: 0x%x, enable: 0x%x",
+ err, port_tcam_cmd.msg_head.status, out_size,
+ enable);
+ return -EIO;
+ }
+
+ if (port_tcam_cmd.msg_head.status == HINIC3_MGMT_CMD_UNSUPPORTED) {
+ err = HINIC3_MGMT_CMD_UNSUPPORTED;
+ PMD_DRV_LOG(WARNING,
+ "Fw doesn't support setting fdir tcam filter");
+ }
+
+ return err;
+}
+
+int
+hinic3_set_rq_flush(void *hwdev, u16 q_id)
+{
+ struct hinic3_cmd_set_rq_flush *rq_flush_msg = NULL;
+ struct hinic3_cmd_buf *cmd_buf = NULL;
+ u64 out_param = EIO;
+ int err;
+
+ cmd_buf = hinic3_alloc_cmd_buf(hwdev);
+ if (!cmd_buf) {
+ PMD_DRV_LOG(ERR, "Failed to allocate cmd buf");
+ return -ENOMEM;
+ }
+
+ cmd_buf->size = sizeof(*rq_flush_msg);
+
+ rq_flush_msg = cmd_buf->buf;
+ rq_flush_msg->local_rq_id = q_id;
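+	/*
+	 * Make the queue id store visible before the whole union is
+	 * converted to big endian below.
+	 */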
+ rte_mb();
+ rq_flush_msg->value = cpu_to_be32(rq_flush_msg->value);
+
+ err = hinic3_cmdq_direct_resp(hwdev, HINIC3_MOD_L2NIC,
+ HINIC3_UCODE_CMD_SET_RQ_FLUSH, cmd_buf,
+ &out_param, 0);
+ if (err || out_param != 0) {
+ PMD_DRV_LOG(ERR,
+ "Failed to set rq flush, err:%d, out_param:0x%" PRIx64,
+ err, out_param);
+ err = -EFAULT;
+ }
+
+ hinic3_free_cmd_buf(cmd_buf);
+
+ return err;
+}
+
+static int
+_mag_msg_to_mgmt_sync(void *hwdev, u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ u32 i, cmd_cnt = ARRAY_LEN(vf_mag_cmd_handler);
+
+ if (hinic3_func_type(hwdev) == TYPE_VF) {
+ for (i = 0; i < cmd_cnt; i++) {
+ if (cmd == vf_mag_cmd_handler[i].cmd)
+ return hinic3_mbox_to_pf(hwdev,
+ HINIC3_MOD_HILINK, cmd, buf_in,
+ in_size, buf_out, out_size, 0);
+ }
+ }
+
+ return hinic3_msg_to_mgmt_sync(hwdev, HINIC3_MOD_HILINK, cmd, buf_in,
+ in_size, buf_out, out_size, 0);
+}
+
+static int
+mag_msg_to_mgmt_sync(void *hwdev, u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size)
+{
+ return _mag_msg_to_mgmt_sync(hwdev, cmd, buf_in, in_size, buf_out,
+ out_size);
+}
+
+int
+hinic3_set_link_status_follow(void *hwdev,
+ enum hinic3_link_follow_status status)
+{
+ struct mag_cmd_set_link_follow follow;
+ u16 out_size = sizeof(follow);
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ if (status >= HINIC3_LINK_FOLLOW_STATUS_MAX) {
+ PMD_DRV_LOG(ERR, "Invalid link follow status: %d", status);
+ return -EINVAL;
+ }
+
+ memset(&follow, 0, sizeof(follow));
+ follow.function_id = hinic3_global_func_id(hwdev);
+ follow.follow = status;
+
+ err = mag_msg_to_mgmt_sync(hwdev, MAG_CMD_SET_LINK_FOLLOW, &follow,
+ sizeof(follow), &follow, &out_size);
+ if ((follow.head.status != HINIC3_MGMT_CMD_UNSUPPORTED &&
+ follow.head.status) ||
+ err || !out_size) {
+ PMD_DRV_LOG(ERR,
+ "Failed to set link status follow port status, "
+ "err: %d, status: 0x%x, out size: 0x%x",
+ err, follow.head.status, out_size);
+ return -EFAULT;
+ }
+
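+	/*
+	 * head.status is returned as-is; it may be
+	 * HINIC3_MGMT_CMD_UNSUPPORTED (0xFF) when the firmware does not
+	 * support the command.
+	 */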
+ return follow.head.status;
+}
diff --git a/drivers/net/hinic3/base/hinic3_nic_cfg.h b/drivers/net/hinic3/base/hinic3_nic_cfg.h
new file mode 100644
index 0000000000..3e8e14e405
--- /dev/null
+++ b/drivers/net/hinic3/base/hinic3_nic_cfg.h
@@ -0,0 +1,1527 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Huawei Technologies Co., Ltd
+ */
+
+#ifndef _HINIC3_NIC_CFG_H_
+#define _HINIC3_NIC_CFG_H_
+
+#include "hinic3_mgmt.h"
+
+#ifndef ETH_ALEN
+#define ETH_ALEN 6
+#endif
+
+#define OS_VF_ID_TO_HW(os_vf_id) ((os_vf_id) + 1)
+#define HW_VF_ID_TO_OS(hw_vf_id) ((hw_vf_id) - 1)
+
+#define HINIC3_VLAN_PRIORITY_SHIFT 13
+
+#define HINIC3_DCB_UP_MAX 0x8
+
+#define HINIC3_MAX_NUM_RQ 256
+
+#define HINIC3_MAX_MTU_SIZE 9600
+#define HINIC3_MIN_MTU_SIZE 256
+
+#define HINIC3_COS_NUM_MAX 8
+
+#define HINIC3_VLAN_TAG_SIZE 4
+#define HINIC3_ETH_OVERHEAD \
+ (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + HINIC3_VLAN_TAG_SIZE * 2)
+
+#define HINIC3_MIN_FRAME_SIZE (HINIC3_MIN_MTU_SIZE + HINIC3_ETH_OVERHEAD)
+#define HINIC3_MAX_JUMBO_FRAME_SIZE (HINIC3_MAX_MTU_SIZE + HINIC3_ETH_OVERHEAD)
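+
+/*
+ * For reference, HINIC3_ETH_OVERHEAD is 14 (Ethernet header) + 4 (CRC) +
+ * 2 * 4 (two VLAN tags) = 26 bytes, so HINIC3_MIN_FRAME_SIZE is 282 bytes
+ * and HINIC3_MAX_JUMBO_FRAME_SIZE is 9626 bytes.
+ */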
+
+#define HINIC3_MTU_TO_PKTLEN(mtu) (mtu)
+
+#define HINIC3_PKTLEN_TO_MTU(pktlen) (pktlen)
+
+#define HINIC3_PF_SET_VF_ALREADY 0x4
+#define HINIC3_MGMT_STATUS_EXIST 0x6
+#define CHECK_IPSU_15BIT 0x8000
+
+#define HINIC3_MGMT_STATUS_TABLE_EMPTY 0xB
+#define HINIC3_MGMT_STATUS_TABLE_FULL 0xC
+
+#define HINIC3_MGMT_CMD_UNSUPPORTED 0xFF
+
+#define HINIC3_MAX_UC_MAC_ADDRS 128
+#define HINIC3_MAX_MC_MAC_ADDRS 2048
+
+#define CAP_INFO_MAX_LEN 512
+#define VENDOR_MAX_LEN 17
+
+/* Structures for RSS config. */
+#define HINIC3_RSS_INDIR_SIZE 256
+#define HINIC3_RSS_INDIR_CMDQ_SIZE 128
+#define HINIC3_RSS_KEY_SIZE 40
+#define HINIC3_RSS_ENABLE 0x01
+#define HINIC3_RSS_DISABLE 0x00
+#define HINIC3_INVALID_QID_BASE 0xffff
+
+#ifndef ETH_SPEED_NUM_200G
+#define ETH_SPEED_NUM_200G 200000 /**< 200 Gbps. */
+#endif
+
+struct hinic3_rss_type {
+ u8 tcp_ipv6_ext;
+ u8 ipv6_ext;
+ u8 tcp_ipv6;
+ u8 ipv6;
+ u8 tcp_ipv4;
+ u8 ipv4;
+ u8 udp_ipv6;
+ u8 udp_ipv4;
+};
+
+enum hinic3_rss_hash_type {
+ HINIC3_RSS_HASH_ENGINE_TYPE_XOR = 0,
+ HINIC3_RSS_HASH_ENGINE_TYPE_TOEP,
+ HINIC3_RSS_HASH_ENGINE_TYPE_MAX,
+};
+
+#define MAX_FEATURE_QWORD 4
+struct hinic3_cmd_feature_nego {
+ struct mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 opcode; /**< 1: set, 0: get. */
+ u8 rsvd;
+ u64 s_feature[MAX_FEATURE_QWORD];
+};
+
+/* Structures for port info. */
+struct nic_port_info {
+ u8 port_type;
+ u8 autoneg_cap;
+ u8 autoneg_state;
+ u8 duplex;
+ u8 speed;
+ u8 fec;
+};
+
+enum hinic3_link_status { HINIC3_LINK_DOWN = 0, HINIC3_LINK_UP };
+
+enum nic_media_type {
+ MEDIA_UNKNOWN = -1,
+ MEDIA_FIBRE = 0,
+ MEDIA_COPPER,
+ MEDIA_BACKPLANE
+};
+
+enum nic_speed_level {
+ LINK_SPEED_NOT_SET = 0,
+ LINK_SPEED_10MB,
+ LINK_SPEED_100MB,
+ LINK_SPEED_1GB,
+ LINK_SPEED_10GB,
+ LINK_SPEED_25GB,
+ LINK_SPEED_40GB,
+ LINK_SPEED_50GB,
+ LINK_SPEED_100GB,
+ LINK_SPEED_200GB,
+ LINK_SPEED_LEVELS,
+};
+
+enum hinic3_nic_event_type {
+ EVENT_NIC_LINK_DOWN,
+ EVENT_NIC_LINK_UP,
+ EVENT_NIC_PORT_MODULE_EVENT,
+ EVENT_NIC_DCB_STATE_CHANGE,
+};
+
+enum hinic3_link_port_type {
+ LINK_PORT_UNKNOWN,
+ LINK_PORT_OPTICAL_MM,
+ LINK_PORT_OPTICAL_SM,
+ LINK_PORT_PAS_COPPER,
+ LINK_PORT_ACC,
+ LINK_PORT_BASET,
+ LINK_PORT_AOC = 0x40,
+ LINK_PORT_ELECTRIC,
+ LINK_PORT_BACKBOARD_INTERFACE,
+};
+
+enum hilink_fibre_subtype {
+ FIBRE_SUBTYPE_SR = 1,
+ FIBRE_SUBTYPE_LR,
+ FIBRE_SUBTYPE_MAX,
+};
+
+enum hilink_fec_type {
+ HILINK_FEC_NOT_SET,
+ HILINK_FEC_RSFEC,
+ HILINK_FEC_BASEFEC,
+ HILINK_FEC_NOFEC,
+ HILINK_FEC_LLRSFE,
+ HILINK_FEC_MAX_TYPE,
+};
+
+enum mag_cmd_port_an {
+ PORT_AN_NOT_SET = 0,
+ PORT_CFG_AN_ON = 1,
+ PORT_CFG_AN_OFF = 2
+};
+
+enum mag_cmd_port_speed {
+ PORT_SPEED_NOT_SET = 0,
+ PORT_SPEED_10MB = 1,
+ PORT_SPEED_100MB = 2,
+ PORT_SPEED_1GB = 3,
+ PORT_SPEED_10GB = 4,
+ PORT_SPEED_25GB = 5,
+ PORT_SPEED_40GB = 6,
+ PORT_SPEED_50GB = 7,
+ PORT_SPEED_100GB = 8,
+ PORT_SPEED_200GB = 9,
+ PORT_SPEED_UNKNOWN
+};
+
+struct hinic3_sq_attr {
+ u8 dma_attr_off;
+ u8 pending_limit;
+ u8 coalescing_time;
+ u8 intr_en;
+ u16 intr_idx;
+ u32 l2nic_sqn;
+ u64 ci_dma_base;
+};
+
+struct hinic3_cmd_cons_idx_attr {
+ struct mgmt_msg_head msg_head;
+
+ u16 func_idx;
+ u8 dma_attr_off;
+ u8 pending_limit;
+ u8 coalescing_time;
+ u8 intr_en;
+ u16 intr_idx;
+ u32 l2nic_sqn;
+ u32 rsvd;
+ u64 ci_addr;
+};
+
+struct hinic3_port_mac_set {
+ struct mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 vlan_id;
+ u16 rsvd1;
+ u8 mac[ETH_ALEN];
+};
+
+struct hinic3_port_mac_update {
+ struct mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 vlan_id;
+ u16 rsvd1;
+ u8 old_mac[ETH_ALEN];
+ u16 rsvd2;
+ u8 new_mac[ETH_ALEN];
+};
+
+struct hinic3_ppa_cfg_state_cmd {
+ struct mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 ppa_state;
+ u8 rsvd;
+};
+
+struct hinic3_ppa_cfg_mode_cmd {
+ struct mgmt_msg_head msg_head;
+
+ u16 rsvd0;
+ u8 ppa_mode;
+ u8 qpc_func_nums;
+ u16 base_qpc_func_id;
+ u16 rsvd1;
+};
+
+struct hinic3_ppa_cfg_flush_cmd {
+ struct mgmt_msg_head msg_head;
+
+ u16 rsvd0;
+ u8 flush_en; /**< 0: flush done, 1: in flush operation. */
+ u8 rsvd1;
+};
+
+struct hinic3_ppa_fdir_query_cmd {
+ struct mgmt_msg_head msg_head;
+
+ u32 index;
+ u32 rsvd;
+ u64 pkt_nums;
+ u64 pkt_bytes;
+};
+
+#define HINIC3_CMD_OP_ADD 1
+#define HINIC3_CMD_OP_DEL 0
+
+struct hinic3_cmd_vlan_config {
+ struct mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 opcode;
+ u8 rsvd1;
+ u16 vlan_id;
+ u16 rsvd2;
+};
+
+struct hinic3_cmd_set_vlan_filter {
+ struct mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 resvd[2];
+ /* Bit0: vlan filter en, bit1: broadcast filter en. */
+ u32 vlan_filter_ctrl;
+};
+
+struct hinic3_cmd_port_info {
+ struct mgmt_msg_head msg_head;
+
+ u8 port_id;
+ u8 rsvd1[3];
+ u8 port_type;
+ u8 autoneg_cap;
+ u8 autoneg_state;
+ u8 duplex;
+ u8 speed;
+ u8 fec;
+ u16 rsvd2;
+ u32 rsvd3[4];
+};
+
+struct hinic3_cmd_link_state {
+ struct mgmt_msg_head msg_head;
+
+ u8 port_id;
+ u8 state;
+ u16 rsvd1;
+};
+
+struct nic_pause_config {
+ u8 auto_neg;
+ u8 rx_pause;
+ u8 tx_pause;
+};
+
+struct hinic3_cmd_pause_config {
+ struct mgmt_msg_head msg_head;
+
+ u8 port_id;
+ u8 opcode;
+ u16 rsvd1;
+ u8 auto_neg;
+ u8 rx_pause;
+ u8 tx_pause;
+ u8 rsvd2[5];
+};
+
+struct hinic3_vport_state {
+ struct mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd1;
+ u8 state; /**< 0:disable, 1:enable. */
+ u8 rsvd2[3];
+};
+
+#define MAG_CMD_PORT_DISABLE 0x0
+#define MAG_CMD_TX_ENABLE 0x1
+#define MAG_CMD_RX_ENABLE 0x2
+/**
+ * The physical port is disabled only when all PFs of the port are set to
+ * down; if any PF is enabled, the port is enabled.
+ */
+struct mag_cmd_set_port_enable {
+ struct mgmt_msg_head head;
+	/* function_id must not exceed the maximum supported pf_id (32). */
+ u16 function_id;
+ u16 rsvd0;
+
+ /* bitmap bit0:tx_en, bit1:rx_en. */
+ u8 state;
+ u8 rsvd1[3];
+};
+
+struct mag_cmd_get_port_enable {
+ struct mgmt_msg_head head;
+
+ u8 port;
+ u8 state; /**< bitmap bit0:tx_en, bit1:rx_en. */
+ u8 rsvd0[2];
+};
+
+struct hinic3_cmd_clear_qp_resource {
+ struct mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd1;
+};
+
+struct hinic3_port_stats_info {
+ struct mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd1;
+};
+
+struct hinic3_vport_stats {
+ u64 tx_unicast_pkts_vport;
+ u64 tx_unicast_bytes_vport;
+ u64 tx_multicast_pkts_vport;
+ u64 tx_multicast_bytes_vport;
+ u64 tx_broadcast_pkts_vport;
+ u64 tx_broadcast_bytes_vport;
+
+ u64 rx_unicast_pkts_vport;
+ u64 rx_unicast_bytes_vport;
+ u64 rx_multicast_pkts_vport;
+ u64 rx_multicast_bytes_vport;
+ u64 rx_broadcast_pkts_vport;
+ u64 rx_broadcast_bytes_vport;
+
+ u64 tx_discard_vport;
+ u64 rx_discard_vport;
+ u64 tx_err_vport;
+ u64 rx_err_vport;
+};
+
+struct hinic3_cmd_vport_stats {
+ struct mgmt_msg_head msg_head;
+
+ u32 stats_size;
+ u32 rsvd1;
+ struct hinic3_vport_stats stats;
+ u64 rsvd2[6];
+};
+
+struct hinic3_phy_port_stats {
+ u64 mac_rx_total_octs_port;
+ u64 mac_tx_total_octs_port;
+ u64 mac_rx_under_frame_pkts_port;
+ u64 mac_rx_frag_pkts_port;
+ u64 mac_rx_64_oct_pkts_port;
+ u64 mac_rx_127_oct_pkts_port;
+ u64 mac_rx_255_oct_pkts_port;
+ u64 mac_rx_511_oct_pkts_port;
+ u64 mac_rx_1023_oct_pkts_port;
+ u64 mac_rx_max_oct_pkts_port;
+ u64 mac_rx_over_oct_pkts_port;
+ u64 mac_tx_64_oct_pkts_port;
+ u64 mac_tx_127_oct_pkts_port;
+ u64 mac_tx_255_oct_pkts_port;
+ u64 mac_tx_511_oct_pkts_port;
+ u64 mac_tx_1023_oct_pkts_port;
+ u64 mac_tx_max_oct_pkts_port;
+ u64 mac_tx_over_oct_pkts_port;
+ u64 mac_rx_good_pkts_port;
+ u64 mac_rx_crc_error_pkts_port;
+ u64 mac_rx_broadcast_ok_port;
+ u64 mac_rx_multicast_ok_port;
+ u64 mac_rx_mac_frame_ok_port;
+ u64 mac_rx_length_err_pkts_port;
+ u64 mac_rx_vlan_pkts_port;
+ u64 mac_rx_pause_pkts_port;
+ u64 mac_rx_unknown_mac_frame_port;
+ u64 mac_tx_good_pkts_port;
+ u64 mac_tx_broadcast_ok_port;
+ u64 mac_tx_multicast_ok_port;
+ u64 mac_tx_underrun_pkts_port;
+ u64 mac_tx_mac_frame_ok_port;
+ u64 mac_tx_vlan_pkts_port;
+ u64 mac_tx_pause_pkts_port;
+};
+
+struct mag_phy_port_stats {
+ u64 mac_tx_fragment_pkt_num;
+ u64 mac_tx_undersize_pkt_num;
+ u64 mac_tx_undermin_pkt_num;
+ u64 mac_tx_64_oct_pkt_num;
+ u64 mac_tx_65_127_oct_pkt_num;
+ u64 mac_tx_128_255_oct_pkt_num;
+ u64 mac_tx_256_511_oct_pkt_num;
+ u64 mac_tx_512_1023_oct_pkt_num;
+ u64 mac_tx_1024_1518_oct_pkt_num;
+ u64 mac_tx_1519_2047_oct_pkt_num;
+ u64 mac_tx_2048_4095_oct_pkt_num;
+ u64 mac_tx_4096_8191_oct_pkt_num;
+ u64 mac_tx_8192_9216_oct_pkt_num;
+ u64 mac_tx_9217_12287_oct_pkt_num;
+ u64 mac_tx_12288_16383_oct_pkt_num;
+ u64 mac_tx_1519_max_bad_pkt_num;
+ u64 mac_tx_1519_max_good_pkt_num;
+ u64 mac_tx_oversize_pkt_num;
+ u64 mac_tx_jabber_pkt_num;
+ u64 mac_tx_bad_pkt_num;
+ u64 mac_tx_bad_oct_num;
+ u64 mac_tx_good_pkt_num;
+ u64 mac_tx_good_oct_num;
+ u64 mac_tx_total_pkt_num;
+ u64 mac_tx_total_oct_num;
+ u64 mac_tx_uni_pkt_num;
+ u64 mac_tx_multi_pkt_num;
+ u64 mac_tx_broad_pkt_num;
+ u64 mac_tx_pause_num;
+ u64 mac_tx_pfc_pkt_num;
+ u64 mac_tx_pfc_pri0_pkt_num;
+ u64 mac_tx_pfc_pri1_pkt_num;
+ u64 mac_tx_pfc_pri2_pkt_num;
+ u64 mac_tx_pfc_pri3_pkt_num;
+ u64 mac_tx_pfc_pri4_pkt_num;
+ u64 mac_tx_pfc_pri5_pkt_num;
+ u64 mac_tx_pfc_pri6_pkt_num;
+ u64 mac_tx_pfc_pri7_pkt_num;
+ u64 mac_tx_control_pkt_num;
+ u64 mac_tx_err_all_pkt_num;
+ u64 mac_tx_from_app_good_pkt_num;
+ u64 mac_tx_from_app_bad_pkt_num;
+
+ u64 mac_rx_fragment_pkt_num;
+ u64 mac_rx_undersize_pkt_num;
+ u64 mac_rx_undermin_pkt_num;
+ u64 mac_rx_64_oct_pkt_num;
+ u64 mac_rx_65_127_oct_pkt_num;
+ u64 mac_rx_128_255_oct_pkt_num;
+ u64 mac_rx_256_511_oct_pkt_num;
+ u64 mac_rx_512_1023_oct_pkt_num;
+ u64 mac_rx_1024_1518_oct_pkt_num;
+ u64 mac_rx_1519_2047_oct_pkt_num;
+ u64 mac_rx_2048_4095_oct_pkt_num;
+ u64 mac_rx_4096_8191_oct_pkt_num;
+ u64 mac_rx_8192_9216_oct_pkt_num;
+ u64 mac_rx_9217_12287_oct_pkt_num;
+ u64 mac_rx_12288_16383_oct_pkt_num;
+ u64 mac_rx_1519_max_bad_pkt_num;
+ u64 mac_rx_1519_max_good_pkt_num;
+ u64 mac_rx_oversize_pkt_num;
+ u64 mac_rx_jabber_pkt_num;
+ u64 mac_rx_bad_pkt_num;
+ u64 mac_rx_bad_oct_num;
+ u64 mac_rx_good_pkt_num;
+ u64 mac_rx_good_oct_num;
+ u64 mac_rx_total_pkt_num;
+ u64 mac_rx_total_oct_num;
+ u64 mac_rx_uni_pkt_num;
+ u64 mac_rx_multi_pkt_num;
+ u64 mac_rx_broad_pkt_num;
+ u64 mac_rx_pause_num;
+ u64 mac_rx_pfc_pkt_num;
+ u64 mac_rx_pfc_pri0_pkt_num;
+ u64 mac_rx_pfc_pri1_pkt_num;
+ u64 mac_rx_pfc_pri2_pkt_num;
+ u64 mac_rx_pfc_pri3_pkt_num;
+ u64 mac_rx_pfc_pri4_pkt_num;
+ u64 mac_rx_pfc_pri5_pkt_num;
+ u64 mac_rx_pfc_pri6_pkt_num;
+ u64 mac_rx_pfc_pri7_pkt_num;
+ u64 mac_rx_control_pkt_num;
+ u64 mac_rx_sym_err_pkt_num;
+ u64 mac_rx_fcs_err_pkt_num;
+ u64 mac_rx_send_app_good_pkt_num;
+ u64 mac_rx_send_app_bad_pkt_num;
+ u64 mac_rx_unfilter_pkt_num;
+};
+
+struct mag_cmd_port_stats_info {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 rsvd0[3];
+};
+
+struct mag_cmd_get_port_stat {
+ struct mgmt_msg_head head;
+
+ struct mag_phy_port_stats counter;
+ u64 rsvd1[15];
+};
+
+struct param_head {
+ u8 valid_len;
+ u8 info_type;
+ u8 rsvd[2];
+};
+
+struct mag_port_link_param {
+ struct param_head head;
+
+ u8 an;
+ u8 fec;
+ u8 speed;
+ u8 rsvd0;
+
+ u32 used;
+ u32 an_fec_ability;
+ u32 an_speed_ability;
+ u32 an_pause_ability;
+};
+
+struct mag_port_wire_info {
+ struct param_head head;
+
+ u8 status;
+ u8 rsvd0[3];
+
+ u8 wire_type;
+ u8 default_fec;
+ u8 speed;
+ u8 rsvd1;
+ u32 speed_ability;
+};
+
+struct mag_port_adapt_info {
+ struct param_head head;
+
+ u32 adapt_en;
+ u32 flash_adapt;
+ u32 rsvd0[2];
+
+ u32 wire_node;
+ u32 an_en;
+ u32 speed;
+ u32 fec;
+};
+
+struct mag_port_param_info {
+ u8 parameter_cnt;
+ u8 lane_id;
+ u8 lane_num;
+ u8 rsvd0;
+
+ struct mag_port_link_param default_cfg;
+ struct mag_port_link_param bios_cfg;
+ struct mag_port_link_param tool_cfg;
+ struct mag_port_link_param final_cfg;
+
+ struct mag_port_wire_info wire_info;
+ struct mag_port_adapt_info adapt_info;
+};
+
+#define XSFP_VENDOR_NAME_LEN 16
+struct mag_cmd_event_port_info {
+ struct mgmt_msg_head head;
+
+ u8 port_id;
+ u8 event_type;
+ u8 rsvd0[2];
+
+ /* Optical Module Related. */
+ u8 vendor_name[XSFP_VENDOR_NAME_LEN];
+ u32 port_type; /**< fiber / copper. */
+ u32 port_sub_type; /**< sr / lr. */
+ u32 cable_length; /**< 1 / 3 / 5 m. */
+ u8 cable_temp; /**< Temperature. */
+ u8 max_speed; /**< Max rate of optical module. */
+ u8 sfp_type; /**< sfp / qsfp. */
+ u8 rsvd1;
+ u32 power[4]; /**< Optical power. */
+
+ u8 an_state;
+ u8 fec;
+ u16 speed;
+
+ u8 gpio_insert; /**< 0: present, 1: absent. */
+ u8 alos;
+ u8 rx_los;
+ u8 pma_ctrl;
+
+ u32 pma_fifo_reg;
+ u32 pma_signal_ok_reg;
+ u32 pcs_64_66b_reg;
+ u32 rf_lf;
+ u8 pcs_link;
+ u8 pcs_mac_link;
+ u8 tx_enable;
+ u8 rx_enable;
+ u32 pcs_err_cnt;
+
+ u8 eq_data[38];
+ u8 rsvd2[2];
+
+ u32 his_link_machine_state;
+ u32 cur_link_machine_state;
+ u8 his_machine_state_data[128];
+ u8 cur_machine_state_data[128];
+ u8 his_machine_state_length;
+ u8 cur_machine_state_length;
+
+ struct mag_port_param_info param_info;
+ u8 rsvd3[360];
+};
+
+struct hinic3_port_stats {
+ struct mgmt_msg_head msg_head;
+
+ struct hinic3_phy_port_stats stats;
+};
+
+struct hinic3_cmd_clear_vport_stats {
+ struct mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd;
+};
+
+struct hinic3_cmd_clear_port_stats {
+ struct mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd;
+};
+
+struct hinic3_cmd_qpn {
+ struct mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 base_qpn;
+};
+
+enum hinic3_func_tbl_cfg_bitmap {
+ FUNC_CFG_INIT,
+ FUNC_CFG_RX_BUF_SIZE,
+ FUNC_CFG_MTU,
+};
+
+struct hinic3_func_tbl_cfg {
+ u16 rx_wqe_buf_size;
+ u16 mtu;
+ u32 rsvd[9];
+};
+
+struct hinic3_cmd_set_func_tbl {
+ struct mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd;
+
+ u32 cfg_bitmap;
+ struct hinic3_func_tbl_cfg tbl_cfg;
+};
+
+struct hinic3_rx_mode_config {
+ struct mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd1;
+ u32 rx_mode;
+};
+
+struct hinic3_cmd_vlan_offload {
+ struct mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 vlan_offload;
+ u8 rsvd1[5];
+};
+
+#define HINIC3_CMD_OP_GET 0
+#define HINIC3_CMD_OP_SET 1
+
+struct hinic3_cmd_lro_config {
+ struct mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 opcode;
+ u8 rsvd1;
+ u8 lro_ipv4_en;
+ u8 lro_ipv6_en;
+ u8 lro_max_pkt_len; /**< Unit size is 1K. */
+ u8 resv2[13];
+};
+
+struct hinic3_cmd_lro_timer {
+ struct mgmt_msg_head msg_head;
+
+ u8 opcode; /**< 1: set timer value, 0: get timer value. */
+ u8 rsvd1;
+ u16 rsvd2;
+ u32 timer;
+};
+
+struct hinic3_rss_template_mgmt {
+ struct mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 cmd;
+ u8 template_id;
+ u8 rsvd1[4];
+};
+
+struct hinic3_cmd_rss_hash_key {
+ struct mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 opcode;
+ u8 rsvd1;
+ u8 key[HINIC3_RSS_KEY_SIZE];
+};
+
+struct hinic3_rss_indir_table {
+ struct mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd1;
+ u8 indir[HINIC3_RSS_INDIR_SIZE];
+};
+
+struct nic_rss_indirect_tbl {
+	u32 rsvd[4]; /**< Ensure entry[] starts 16B into the structure. */
+ u16 entry[HINIC3_RSS_INDIR_SIZE];
+};
+
+struct nic_rss_context_tbl {
+ u32 rsvd[4];
+ u32 ctx;
+};
+
+struct hinic3_rss_context_table {
+ struct mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd1;
+ u32 context;
+};
+
+struct hinic3_cmd_rss_engine_type {
+ struct mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 opcode;
+ u8 hash_engine;
+ u8 rsvd1[4];
+};
+
+struct hinic3_cmd_rss_config {
+ struct mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 rss_en;
+ u8 rq_priority_number;
+ u8 prio_tc[HINIC3_DCB_UP_MAX];
+ u32 rsvd1;
+};
+
+enum {
+ HINIC3_IFLA_VF_LINK_STATE_AUTO, /**< Link state of the uplink. */
+ HINIC3_IFLA_VF_LINK_STATE_ENABLE, /**< Link always up. */
+ HINIC3_IFLA_VF_LINK_STATE_DISABLE, /**< Link always down. */
+};
+
+struct hinic3_dcb_state {
+ u8 dcb_on;
+ u8 default_cos;
+ u8 trust;
+ u8 rsvd1;
+ u8 up_cos[HINIC3_DCB_UP_MAX];
+ u8 dscp2cos[64];
+ u32 rsvd2[7];
+};
+
+struct hinic3_cmd_vf_dcb_state {
+ struct mgmt_msg_head msg_head;
+
+ struct hinic3_dcb_state state;
+};
+
+struct hinic3_cmd_register_vf {
+ struct mgmt_msg_head msg_head;
+
+ u8 op_register; /* 0: unregister, 1: register. */
+ u8 rsvd[39];
+};
+
+struct hinic3_tcam_result {
+ u32 qid;
+ u32 rsvd;
+};
+
+#define HINIC3_TCAM_FLOW_KEY_SIZE 44
+#define HINIC3_MAX_TCAM_RULES_NUM 4096
+#define HINIC3_TCAM_BLOCK_ENABLE 1
+#define HINIC3_TCAM_BLOCK_DISABLE 0
+#define HINIC3_TCAM_BLOCK_NORMAL_TYPE 0
+
+struct hinic3_tcam_key_x_y {
+ u8 x[HINIC3_TCAM_FLOW_KEY_SIZE];
+ u8 y[HINIC3_TCAM_FLOW_KEY_SIZE];
+};
+
+struct hinic3_tcam_cfg_rule {
+ u32 index;
+ struct hinic3_tcam_result data;
+ struct hinic3_tcam_key_x_y key;
+};
+
+/* Define the TCAM type. */
+#define TCAM_RULE_FDIR_TYPE 0
+#define TCAM_RULE_PPA_TYPE 1
+
+struct hinic3_fdir_add_rule {
+ struct mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 type;
+ u8 rsvd;
+ struct hinic3_tcam_cfg_rule rule;
+};
+
+struct hinic3_fdir_del_rule {
+ struct mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 type;
+ u8 rsvd;
+ u32 index_start;
+ u32 index_num;
+};
+
+struct hinic3_flush_tcam_rules {
+ struct mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u16 rsvd;
+};
+
+struct hinic3_tcam_block {
+ struct mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 alloc_en; /* 0: free tcam block, 1: alloc tcam block. */
+ u8 tcam_type;
+ u16 tcam_block_index;
+ u16 rsvd;
+};
+
+struct hinic3_port_tcam_info {
+ struct mgmt_msg_head msg_head;
+
+ u16 func_id;
+ u8 tcam_enable;
+ u8 rsvd1;
+ u32 rsvd2;
+};
+
+struct hinic3_set_fdir_ethertype_rule {
+ struct mgmt_msg_head head;
+
+ u16 func_id;
+ u16 rsvd1;
+ u8 pkt_type_en;
+ u8 pkt_type;
+ u8 qid;
+ u8 rsvd2;
+};
+
+struct hinic3_cmd_set_rq_flush {
+ union {
+ struct {
+ u16 global_rq_id;
+ u16 local_rq_id;
+ };
+ u32 value;
+ };
+};
+
+enum hinic3_link_follow_status {
+ HINIC3_LINK_FOLLOW_DEFAULT,
+ HINIC3_LINK_FOLLOW_PORT,
+ HINIC3_LINK_FOLLOW_SEPARATE,
+ HINIC3_LINK_FOLLOW_STATUS_MAX,
+};
+
+struct mag_cmd_set_link_follow {
+ struct mgmt_msg_head head;
+ u16 function_id;
+ u16 rsvd0;
+ u8 follow;
+ u8 rsvd1[3];
+};
+
+int l2nic_msg_to_mgmt_sync(void *hwdev, u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size);
+
+int hinic3_set_ci_table(void *hwdev, struct hinic3_sq_attr *attr);
+
+/**
+ * Update MAC address to hardware.
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev.
+ * @param[in] old_mac
+ * Old MAC addr to delete.
+ * @param[in] new_mac
+ * New MAC addr to update.
+ * @param[in] vlan_id
+ * Vlan id.
+ * @param[in] func_id
+ * Function index.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int hinic3_update_mac(void *hwdev, u8 *old_mac, u8 *new_mac, u16 vlan_id,
+ u16 func_id);
+
+/**
+ * Get the default mac address.
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev.
+ * @param[out] mac_addr
+ * MAC address read from hardware.
+ * @param[in] ether_len
+ * The length of the MAC address buffer.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int hinic3_get_default_mac(void *hwdev, u8 *mac_addr, int ether_len);
+
+/**
+ * Set mac address.
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev.
+ * @param[in] mac_addr
+ * MAC address to set.
+ * @param[in] vlan_id
+ * Vlan id.
+ * @param[in] func_id
+ * Function index.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int hinic3_set_mac(void *hwdev, const u8 *mac_addr, u16 vlan_id, u16 func_id);
+
+/**
+ * Delete MAC address.
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev.
+ * @param[in] mac_addr
+ * MAC address to delete.
+ * @param[in] vlan_id
+ * Vlan id.
+ * @param[in] func_id
+ * Function index.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int hinic3_del_mac(void *hwdev, const u8 *mac_addr, u16 vlan_id, u16 func_id);
+
+/**
+ * Set function mtu.
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev.
+ * @param[in] new_mtu
+ * MTU value.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int hinic3_set_port_mtu(void *hwdev, u16 new_mtu);
+
+/**
+ * Set function valid status.
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev.
+ * @param[in] enable
+ * 0: disable, 1: enable.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int hinic3_set_vport_enable(void *hwdev, bool enable);
+
+/**
+ * Set port status.
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev
+ * @param[in] enable
+ * 0: disable, 1: enable.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int hinic3_set_port_enable(void *hwdev, bool enable);
+
+/**
+ * Get link state.
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev.
+ * @param[out] link_state
+ * Link state, 0: link down, 1: link up.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int hinic3_get_link_state(void *hwdev, u8 *link_state);
+
+/**
+ * Flush queue pairs resource in hardware.
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int hinic3_flush_qps_res(void *hwdev);
+
+/**
+ * Set pause info.
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev.
+ * @param[in] nic_pause
+ * Pause info.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int hinic3_set_pause_info(void *hwdev, struct nic_pause_config nic_pause);
+
+/**
+ * Get pause info.
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev.
+ * @param[out] nic_pause
+ * Pause info.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int hinic3_get_pause_info(void *hwdev, struct nic_pause_config *nic_pause);
+
+/**
+ * Get function stats.
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev.
+ * @param[out] stats
+ * Function stats.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int hinic3_get_vport_stats(void *hwdev, struct hinic3_vport_stats *stats);
+
+/**
+ * Get port stats.
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev.
+ * @param[out] stats
+ * Port stats.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int hinic3_get_phy_port_stats(void *hwdev, struct mag_phy_port_stats *stats);
+
+int hinic3_clear_vport_stats(void *hwdev);
+
+int hinic3_clear_phy_port_stats(void *hwdev);
+
+/**
+ * Init nic hwdev.
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int hinic3_init_nic_hwdev(void *hwdev);
+
+/**
+ * Free nic hwdev.
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev.
+ */
+void hinic3_free_nic_hwdev(void *hwdev);
+
+/**
+ * Set function rx mode.
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev.
+ * @param[in] enable
+ * Rx mode state, 0-disable, 1-enable.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int hinic3_set_rx_mode(void *hwdev, u32 enable);
+
+/**
+ * Set function vlan offload valid state.
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev.
+ * @param[in] en
+ * VLAN offload state, 0-disable, 1-enable.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int hinic3_set_rx_vlan_offload(void *hwdev, u8 en);
+
+/**
+ * Set rx LRO configuration.
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev.
+ * @param[in] lro_en
+ * LRO enable state, 0-disable, 1-enable.
+ * @param[in] lro_timer
+ * LRO aggregation timeout.
+ * @param[in] lro_max_pkt_len
+ * LRO coalesced packet size (unit is 1KB).
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int hinic3_set_rx_lro_state(void *hwdev, u8 lro_en, u32 lro_timer,
+ u32 lro_max_pkt_len);
+
+/**
+ * Get port info.
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev.
+ * @param[out] port_info
+ * Port info, including autoneg, port type, duplex, speed and fec mode.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int hinic3_get_port_info(void *hwdev, struct nic_port_info *port_info);
+
+int hinic3_init_function_table(void *hwdev, u16 rx_buff_len);
+
+/**
+ * Alloc RSS template table.
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int hinic3_rss_template_alloc(void *hwdev);
+
+/**
+ * Free RSS template table.
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int hinic3_rss_template_free(void *hwdev);
+
+/**
+ * Set RSS indirect table.
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev.
+ * @param[in] indir_table
+ * RSS indirect table.
+ * @param[in] indir_table_size
+ * RSS indirect table size.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int hinic3_rss_set_indir_tbl(void *hwdev, const u32 *indir_table,
+ u32 indir_table_size);
+
+/**
+ * Get RSS indirect table.
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev.
+ * @param[out] indir_table
+ * RSS indirect table.
+ * @param[in] indir_table_size
+ * RSS indirect table size.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int hinic3_rss_get_indir_tbl(void *hwdev, u32 *indir_table,
+ u32 indir_table_size);
+
+/**
+ * Set RSS type.
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev.
+ * @param[in] rss_type
+ * RSS type, including ipv4, tcpv4, ipv6, tcpv6, etc.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int hinic3_set_rss_type(void *hwdev, struct hinic3_rss_type rss_type);
+
+/**
+ * Get RSS type.
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev.
+ * @param[out] rss_type
+ * RSS type, including ipv4, tcpv4, ipv6, tcpv6, etc.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int hinic3_get_rss_type(void *hwdev, struct hinic3_rss_type *rss_type);
+
+/**
+ * Get RSS hash engine.
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev.
+ * @param[out] type
+ * RSS hash engine; the PMD only supports Toeplitz.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int hinic3_rss_get_hash_engine(void *hwdev, u8 *type);
+
+/**
+ * Set RSS hash engine.
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev.
+ * @param[in] type
+ * RSS hash engine; the PMD only supports Toeplitz.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int hinic3_rss_set_hash_engine(void *hwdev, u8 type);
+
+/**
+ * Set RSS configuration.
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev.
+ * @param[in] rss_en
+ * RSS enable flag, 0-disable, 1-enable.
+ * @param[in] tc_num
+ * Number of TC.
+ * @param[in] prio_tc
+ * Priority of TC.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int hinic3_rss_cfg(void *hwdev, u8 rss_en, u8 tc_num, u8 *prio_tc);
+
+/**
+ * Set RSS hash key.
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev.
+ * @param[in] key
+ * RSS hash key.
+ * @param[in] key_size
+ * Hash key size.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int hinic3_rss_set_hash_key(void *hwdev, u8 *key, u16 key_size);
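+
+/*
+ * A typical RSS bring-up with the functions above (a sketch, not a
+ * mandated order): hinic3_rss_template_alloc(), hinic3_rss_set_hash_key(),
+ * hinic3_rss_set_indir_tbl(), hinic3_set_rss_type(),
+ * hinic3_rss_set_hash_engine(), then hinic3_rss_cfg() to enable RSS.
+ */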
+
+/**
+ * Add vlan to hardware.
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev.
+ * @param[in] vlan_id
+ * Vlan id.
+ * @param[in] func_id
+ * Function id.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int hinic3_add_vlan(void *hwdev, u16 vlan_id, u16 func_id);
+
+/**
+ * Delete vlan.
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev.
+ * @param[in] vlan_id
+ * Vlan id.
+ * @param[in] func_id
+ * Function id.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int hinic3_del_vlan(void *hwdev, u16 vlan_id, u16 func_id);
+
+/**
+ * Set vlan filter.
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev.
+ * @param[in] vlan_filter_ctrl
+ * Vlan filter control bitmap, bit0: vlan filter enable, bit1: broadcast
+ * filter enable.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int hinic3_set_vlan_fliter(void *hwdev, u32 vlan_filter_ctrl);
+
+/**
+ * Get VF function default cos.
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev.
+ * @param[out] cos_id
+ * Cos id.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int hinic3_vf_get_default_cos(void *hwdev, u8 *cos_id);
+
+/**
+ * Add tcam rules.
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev.
+ * @param[in] tcam_rule
+ * Tcam rule, including tcam rule index, tcam action, tcam key, etc.
+ * @param[in] tcam_rule_type
+ * Tcam rule type.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int hinic3_add_tcam_rule(void *hwdev, struct hinic3_tcam_cfg_rule *tcam_rule,
+ u8 tcam_rule_type);
+
+/**
+ * Del tcam rules.
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev.
+ * @param[in] index
+ * Tcam rule index.
+ * @param[in] tcam_rule_type
+ * Tcam rule type.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int hinic3_del_tcam_rule(void *hwdev, u32 index, u8 tcam_rule_type);
+
+/**
+ * Alloc tcam block.
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev.
+ * @param[out] index
+ * Allocated tcam block index.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int hinic3_alloc_tcam_block(void *hwdev, u16 *index);
+
+/**
+ * Free tcam block.
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev.
+ * @param[in] index
+ * Tcam block index.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int hinic3_free_tcam_block(void *hwdev, u16 *index);
+
+/**
+ * Set fdir tcam function enable or disable.
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev.
+ * @param[in] enable
+ * Tcam enable flag, 1-enable, 0-disable.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int hinic3_set_fdir_tcam_rule_filter(void *hwdev, bool enable);
+
+/**
+ * Flush fdir tcam rule.
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int hinic3_flush_tcam_rule(void *hwdev);
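+
+/*
+ * A typical fdir TCAM sequence with the functions above (a sketch, not a
+ * mandated order): hinic3_set_fdir_tcam_rule_filter(hwdev, true),
+ * hinic3_alloc_tcam_block(), hinic3_add_tcam_rule() per flow, then
+ * hinic3_del_tcam_rule()/hinic3_free_tcam_block() on teardown, and
+ * hinic3_flush_tcam_rule() to drop any remaining rules.
+ */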
+
+int hinic3_set_rq_flush(void *hwdev, u16 q_id);
+
+/**
+ * Get the service features supported by hardware.
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev.
+ * @param[in] size
+ * s_feature's array size.
+ * @param[out] s_feature
+ * Features supported by hardware.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int hinic3_get_feature_from_hw(void *hwdev, u64 *s_feature, u16 size);
+
+/**
+ * Set the service features supported by the driver to hardware.
+ *
+ * @param[in] hwdev
+ * Device pointer to hwdev.
+ * @param[in] size
+ * s_feature's array size.
+ * @param[in] s_feature
+ * Features to set to hardware.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int hinic3_set_feature_to_hw(void *hwdev, u64 *s_feature, u16 size);
+
+int hinic3_set_fdir_ethertype_filter(void *hwdev, u8 pkt_type, u16 queue_id,
+ u8 en);
+
+int hinic3_set_link_status_follow(void *hwdev,
+ enum hinic3_link_follow_status status);
+#endif /* _HINIC3_NIC_CFG_H_ */
--
2.47.0.windows.2
* [RFC 10/18] net/hinic3: add context and work queue support
2025-04-18 9:05 [RFC 00/18] add hinic3 PMD driver Feifei Wang
` (8 preceding siblings ...)
2025-04-18 9:05 ` [RFC 09/18] net/hinic3: add a NIC business configuration module Feifei Wang
@ 2025-04-18 9:05 ` Feifei Wang
2025-04-18 9:05 ` [RFC 11/18] net/hinic3: add a mailbox communication module Feifei Wang
` (10 subsequent siblings)
20 siblings, 0 replies; 30+ messages in thread
From: Feifei Wang @ 2025-04-18 9:05 UTC (permalink / raw)
To: dev; +Cc: Xin Wang, Feifei Wang, Yi Chen
From: Xin Wang <wangxin679@h-partners.com>
Work queues are used for the cmdq and for TX/RX buffer descriptors.
The NIC business logic needs to configure the cmdq context and the
txq/rxq contexts. This patch adds the data structures and functions
for the work queue and its context.
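A minimal usage sketch of the work queue API added here (hypothetical
sizes and caller; only the function signatures are taken from this patch):

    struct hinic3_wq wq;
    u16 pi, ci;
    void *wqe;

    /* One cmdq block, 4KB buffer, 64B WQEBBs (shift 6), depth 64. */
    if (hinic3_cmdq_alloc(&wq, hwdev, 1, 0x1000, 6, 64))
        return -ENOMEM;

    wqe = hinic3_get_wqe(&wq, 1, &pi);   /* Produce one WQEBB. */
    /* ... fill the WQE and ring the doorbell ... */
    wqe = hinic3_read_wqe(&wq, 1, &ci);  /* Consume it on completion. */
    hinic3_put_wqe(&wq, 1);

    hinic3_cmdq_free(&wq, 1);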
Signed-off-by: Xin Wang <wangxin679@h-partners.com>
Reviewed-by: Feifei Wang <wangfeifei40@huawei.com>
Reviewed-by: Yi Chen <chenyi221@huawei.com>
---
drivers/net/hinic3/base/hinic3_wq.c | 148 ++++++++++++++++++++++++++++
drivers/net/hinic3/base/hinic3_wq.h | 109 ++++++++++++++++++++
2 files changed, 257 insertions(+)
create mode 100644 drivers/net/hinic3/base/hinic3_wq.c
create mode 100644 drivers/net/hinic3/base/hinic3_wq.h
diff --git a/drivers/net/hinic3/base/hinic3_wq.c b/drivers/net/hinic3/base/hinic3_wq.c
new file mode 100644
index 0000000000..9bccb10c9a
--- /dev/null
+++ b/drivers/net/hinic3/base/hinic3_wq.c
@@ -0,0 +1,148 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Huawei Technologies Co., Ltd
+ */
+#include <rte_bus_pci.h>
+#include <rte_errno.h>
+#include <rte_ether.h>
+#include <rte_malloc.h>
+#include <rte_mbuf.h>
+#include <rte_mempool.h>
+#include <rte_pci.h>
+
+#include "hinic3_compat.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_wq.h"
+
+static void
+free_wq_pages(struct hinic3_wq *wq)
+{
+ hinic3_memzone_free(wq->wq_mz);
+
+ wq->queue_buf_paddr = 0;
+ wq->queue_buf_vaddr = 0;
+}
+
+static int
+alloc_wq_pages(struct hinic3_hwdev *hwdev, struct hinic3_wq *wq, int qid)
+{
+ const struct rte_memzone *wq_mz;
+
+ wq_mz = hinic3_dma_zone_reserve(hwdev->eth_dev, "hinic3_wq_mz",
+ (uint16_t)qid, wq->wq_buf_size,
+ RTE_PGSIZE_256K, SOCKET_ID_ANY);
+ if (!wq_mz) {
+		PMD_DRV_LOG(ERR, "Allocate wq[%d] wq_mz failed", qid);
+ return -ENOMEM;
+ }
+
+ memset(wq_mz->addr, 0, wq->wq_buf_size);
+ wq->wq_mz = wq_mz;
+ wq->queue_buf_paddr = wq_mz->iova;
+ wq->queue_buf_vaddr = (u64)(u64 *)wq_mz->addr;
+
+ return 0;
+}
+
+void
+hinic3_put_wqe(struct hinic3_wq *wq, int num_wqebbs)
+{
+ wq->cons_idx += num_wqebbs;
+ rte_atomic_fetch_add_explicit(&wq->delta, num_wqebbs,
+ rte_memory_order_seq_cst);
+}
+
+void *
+hinic3_read_wqe(struct hinic3_wq *wq, int num_wqebbs, u16 *cons_idx)
+{
+ u16 curr_cons_idx;
+
+ if ((rte_atomic_load_explicit(&wq->delta, rte_memory_order_seq_cst) +
+ num_wqebbs) > wq->q_depth)
+ return NULL;
+
+ curr_cons_idx = (u16)(wq->cons_idx);
+
+ curr_cons_idx = MASKED_WQE_IDX(wq, curr_cons_idx);
+
+ *cons_idx = curr_cons_idx;
+
+ return WQ_WQE_ADDR(wq, (u32)(*cons_idx));
+}
+
+int
+hinic3_cmdq_alloc(struct hinic3_wq *wq, void *dev, int cmdq_blocks,
+ u32 wq_buf_size, u32 wqebb_shift, u16 q_depth)
+{
+ struct hinic3_hwdev *hwdev = (struct hinic3_hwdev *)dev;
+ int i, j;
+ int err;
+
+	/* The caller must ensure q_depth is a power of 2 and wqebb_size is not 0. */
+ for (i = 0; i < cmdq_blocks; i++) {
+ wq[i].wqebb_size = 1U << wqebb_shift;
+ wq[i].wqebb_shift = wqebb_shift;
+ wq[i].wq_buf_size = wq_buf_size;
+ wq[i].q_depth = q_depth;
+
+ err = alloc_wq_pages(hwdev, &wq[i], i);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Failed to alloc CMDQ blocks");
+ goto cmdq_block_err;
+ }
+
+ wq[i].cons_idx = 0;
+ wq[i].prod_idx = 0;
+ rte_atomic_store_explicit(&wq[i].delta, q_depth,
+ rte_memory_order_seq_cst);
+
+ wq[i].mask = q_depth - 1;
+ }
+
+ return 0;
+
+cmdq_block_err:
+ for (j = 0; j < i; j++)
+ free_wq_pages(&wq[j]);
+
+ return err;
+}
+
+void
+hinic3_cmdq_free(struct hinic3_wq *wq, int cmdq_blocks)
+{
+ int i;
+
+ for (i = 0; i < cmdq_blocks; i++)
+ free_wq_pages(&wq[i]);
+}
+
+void
+hinic3_wq_wqe_pg_clear(struct hinic3_wq *wq)
+{
+ wq->cons_idx = 0;
+ wq->prod_idx = 0;
+
+ memset((void *)wq->queue_buf_vaddr, 0, wq->wq_buf_size);
+}
+
+void *
+hinic3_get_wqe(struct hinic3_wq *wq, int num_wqebbs, u16 *prod_idx)
+{
+ u16 curr_prod_idx;
+
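+	/*
+	 * No free-space check here: the caller must ensure num_wqebbs WQEBBs
+	 * are available (hinic3_read_wqe is the checked counterpart on the
+	 * consumer side).
+	 */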
+ rte_atomic_fetch_sub_explicit(&wq->delta, num_wqebbs,
+ rte_memory_order_seq_cst);
+ curr_prod_idx = (u16)(wq->prod_idx);
+ wq->prod_idx += num_wqebbs;
+ *prod_idx = MASKED_WQE_IDX(wq, curr_prod_idx);
+
+ return WQ_WQE_ADDR(wq, (u32)(*prod_idx));
+}
+
+void
+hinic3_set_sge(struct hinic3_sge *sge, uint64_t addr, u32 len)
+{
+ sge->hi_addr = upper_32_bits(addr);
+ sge->lo_addr = lower_32_bits(addr);
+ sge->len = len;
+}
diff --git a/drivers/net/hinic3/base/hinic3_wq.h b/drivers/net/hinic3/base/hinic3_wq.h
new file mode 100644
index 0000000000..84d54c2aeb
--- /dev/null
+++ b/drivers/net/hinic3/base/hinic3_wq.h
@@ -0,0 +1,109 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Huawei Technologies Co., Ltd
+ */
+
+#ifndef _HINIC3_WQ_H_
+#define _HINIC3_WQ_H_
+
+/* Use 0-level CLA; page size must be SQ 16B (wqe) * 64K (max_q_depth). */
+#define HINIC3_DEFAULT_WQ_PAGE_SIZE 0x100000
+#define HINIC3_HW_WQ_PAGE_SIZE 0x1000
+
+#define MASKED_WQE_IDX(wq, idx) ((idx) & (wq)->mask)
+
+#define WQ_WQE_ADDR(wq, idx) \
+ ({ \
+ typeof(wq) __wq = (wq); \
+ (void *)((u64)(__wq->queue_buf_vaddr) + ((idx) << __wq->wqebb_shift)); \
+ })
+
+struct hinic3_sge {
+ u32 hi_addr;
+ u32 lo_addr;
+ u32 len;
+};
+
+struct hinic3_wq {
+ /* The addresses are 64 bit in the HW. */
+ u64 queue_buf_vaddr;
+
+ u16 q_depth;
+ u16 mask;
+ RTE_ATOMIC(int32_t)delta;
+
+ u32 cons_idx;
+ u32 prod_idx;
+
+ u64 queue_buf_paddr;
+
+ u32 wqebb_size;
+ u32 wqebb_shift;
+
+ u32 wq_buf_size;
+
+ const struct rte_memzone *wq_mz;
+
+ u32 rsvd[5];
+};
+
+void hinic3_put_wqe(struct hinic3_wq *wq, int num_wqebbs);
+
+/**
+ * Read a WQE and update CI.
+ *
+ * @param[in] wq
+ * The work queue structure.
+ * @param[in] num_wqebbs
+ * The number of work queue elements to read.
+ * @param[out] cons_idx
+ * The updated consumer index.
+ *
+ * @return
+ * The address of WQE, or NULL if not enough elements are available.
+ */
+void *hinic3_read_wqe(struct hinic3_wq *wq, int num_wqebbs, u16 *cons_idx);
+
+/**
+ * Allocate command queue blocks and initialize related parameters.
+ *
+ * @param[in] wq
+ * The cmdq->wq structure.
+ * @param[in] dev
+ * The device context for the hardware.
+ * @param[in] cmdq_blocks
+ * The number of command queue blocks to allocate.
+ * @param[in] wq_buf_size
+ * The size of each work queue buffer.
+ * @param[in] wqebb_shift
+ * The shift value for determining the work queue element size.
+ * @param[in] q_depth
+ * The depth of each command queue.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int hinic3_cmdq_alloc(struct hinic3_wq *wq, void *dev, int cmdq_blocks,
+ u32 wq_buf_size, u32 wqebb_shift, u16 q_depth);
+
+void hinic3_cmdq_free(struct hinic3_wq *wq, int cmdq_blocks);
+
+void hinic3_wq_wqe_pg_clear(struct hinic3_wq *wq);
+
+/**
+ * Get WQE and update PI.
+ *
+ * @param[in] wq
+ * The cmdq->wq structure.
+ * @param[in] num_wqebbs
+ * The number of work queue elements to allocate.
+ * @param[out] prod_idx
+ * The updated producer index, masked according to the queue size.
+ *
+ * @return
+ * The address of the work queue element.
+ */
+void *hinic3_get_wqe(struct hinic3_wq *wq, int num_wqebbs, u16 *prod_idx);
+
+void hinic3_set_sge(struct hinic3_sge *sge, uint64_t addr, u32 len);
+
+#endif /* _HINIC3_WQ_H_ */
--
2.47.0.windows.2
* [RFC 11/18] net/hinic3: add a mailbox communication module
2025-04-18 9:05 [RFC 00/18] add hinic3 PMD driver Feifei Wang
` (9 preceding siblings ...)
2025-04-18 9:05 ` [RFC 10/18] net/hinic3: add context and work queue support Feifei Wang
@ 2025-04-18 9:05 ` Feifei Wang
2025-04-18 9:05 ` [RFC 12/18] net/hinic3: add device initialization Feifei Wang
` (9 subsequent siblings)
20 siblings, 0 replies; 30+ messages in thread
From: Feifei Wang @ 2025-04-18 9:05 UTC (permalink / raw)
To: dev; +Cc: Yi Chen, Xin Wang, Feifei Wang
From: Yi Chen <chenyi221@huawei.com>
This patch adds mailbox support for the hinic3 PMD. The mailbox is
used for communication between the PF/VF driver and the MPU. It
provides the mailbox-related data structures and functions.
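For reference, a message of msg_len bytes is carried in 48-byte segments
(MBOX_SEG_LEN below), so the segment count can be sketched as follows
(hypothetical helper, not part of this patch):

    static inline u16 mbox_seg_cnt(u16 msg_len)
    {
        return (msg_len + MBOX_SEG_LEN - 1) / MBOX_SEG_LEN;
    }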
Signed-off-by: Yi Chen <chenyi221@huawei.com>
Reviewed-by: Xin Wang <wangxin679@h-partners.com>
Reviewed-by: Feifei Wang <wangfeifei40@huawei.com>
---
drivers/net/hinic3/base/hinic3_mbox.c | 1392 +++++++++++++++++++++++++
drivers/net/hinic3/base/hinic3_mbox.h | 199 ++++
2 files changed, 1591 insertions(+)
create mode 100644 drivers/net/hinic3/base/hinic3_mbox.c
create mode 100644 drivers/net/hinic3/base/hinic3_mbox.h
diff --git a/drivers/net/hinic3/base/hinic3_mbox.c b/drivers/net/hinic3/base/hinic3_mbox.c
new file mode 100644
index 0000000000..78dfee2b1c
--- /dev/null
+++ b/drivers/net/hinic3/base/hinic3_mbox.c
@@ -0,0 +1,1392 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Huawei Technologies Co., Ltd
+ */
+
+#include "hinic3_compat.h"
+#include "hinic3_csr.h"
+#include "hinic3_eqs.h"
+#include "hinic3_hw_cfg.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_hwif.h"
+#include "hinic3_mbox.h"
+#include "hinic3_mgmt.h"
+#include "hinic3_nic_event.h"
+
+#define HINIC3_MBOX_INT_DST_FUNC_SHIFT 0
+#define HINIC3_MBOX_INT_DST_AEQN_SHIFT 10
+#define HINIC3_MBOX_INT_SRC_RESP_AEQN_SHIFT 12
+#define HINIC3_MBOX_INT_STAT_DMA_SHIFT 14
+/* The size of the data to be sent (in units of 4 bytes). */
+#define HINIC3_MBOX_INT_TX_SIZE_SHIFT 20
+/* SO_RO (strong order, relaxed order). */
+#define HINIC3_MBOX_INT_STAT_DMA_SO_RO_SHIFT 25
+#define HINIC3_MBOX_INT_WB_EN_SHIFT 28
+
+#define HINIC3_MBOX_INT_DST_AEQN_MASK 0x3
+#define HINIC3_MBOX_INT_SRC_RESP_AEQN_MASK 0x3
+#define HINIC3_MBOX_INT_STAT_DMA_MASK 0x3F
+#define HINIC3_MBOX_INT_TX_SIZE_MASK 0x1F
+#define HINIC3_MBOX_INT_STAT_DMA_SO_RO_MASK 0x3
+#define HINIC3_MBOX_INT_WB_EN_MASK 0x1
+
+#define HINIC3_MBOX_INT_SET(val, field) \
+ (((val) & HINIC3_MBOX_INT_##field##_MASK) \
+ << HINIC3_MBOX_INT_##field##_SHIFT)
+
+enum hinic3_mbox_tx_status {
+ TX_NOT_DONE = 1,
+};
+
+#define HINIC3_MBOX_CTRL_TRIGGER_AEQE_SHIFT 0
+
+/*
+ * Specifies the issue request for the message data.
+ * 0 - Tx request is done;
+ * 1 - Tx request is in progress.
+ */
+#define HINIC3_MBOX_CTRL_TX_STATUS_SHIFT 1
+#define HINIC3_MBOX_CTRL_DST_FUNC_SHIFT 16
+
+#define HINIC3_MBOX_CTRL_TRIGGER_AEQE_MASK 0x1
+#define HINIC3_MBOX_CTRL_TX_STATUS_MASK 0x1
+#define HINIC3_MBOX_CTRL_DST_FUNC_MASK 0x1FFF
+
+#define HINIC3_MBOX_CTRL_SET(val, field) \
+ (((val) & HINIC3_MBOX_CTRL_##field##_MASK) \
+ << HINIC3_MBOX_CTRL_##field##_SHIFT)
+
+#define MBOX_SEGLEN_MASK \
+ HINIC3_MSG_HEADER_SET(HINIC3_MSG_HEADER_SEG_LEN_MASK, SEG_LEN)
+
+#define MBOX_MSG_POLLING_TIMEOUT 500000 /* Unit is 10us. */
+#define HINIC3_MBOX_COMP_TIME 40000U
+
+#define MBOX_MAX_BUF_SZ 2048UL
+#define MBOX_HEADER_SZ 8
+#define HINIC3_MBOX_DATA_SIZE (MBOX_MAX_BUF_SZ - MBOX_HEADER_SZ)
+
+#define MBOX_TLP_HEADER_SZ 16
+
+/* Mbox size is 64B, 8B for mbox_header, 8B reserved. */
+#define MBOX_SEG_LEN 48
+#define MBOX_SEG_LEN_ALIGN 4
+#define MBOX_WB_STATUS_LEN 16UL
+
+/* Mbox write back status is 16B, only first 4B is used. */
+#define MBOX_WB_STATUS_ERRCODE_MASK 0xFFFF
+#define MBOX_WB_STATUS_MASK 0xFF
+#define MBOX_WB_ERROR_CODE_MASK 0xFF00
+#define MBOX_WB_STATUS_FINISHED_SUCCESS 0xFF
+#define MBOX_WB_STATUS_FINISHED_WITH_ERR 0xFE
+#define MBOX_WB_STATUS_NOT_FINISHED 0x00
+
+/* Determine the write back status. */
+#define MBOX_STATUS_FINISHED(wb) \
+ (((wb) & MBOX_WB_STATUS_MASK) != MBOX_WB_STATUS_NOT_FINISHED)
+#define MBOX_STATUS_SUCCESS(wb) \
+ (((wb) & MBOX_WB_STATUS_MASK) == MBOX_WB_STATUS_FINISHED_SUCCESS)
+#define MBOX_STATUS_ERRCODE(wb) ((wb) & MBOX_WB_ERROR_CODE_MASK)
+
+/* Indicate the value related to the sequence ID. */
+#define SEQ_ID_START_VAL 0
+#define SEQ_ID_MAX_VAL 42
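+/*
+ * Segments 0..SEQ_ID_MAX_VAL of MBOX_SEG_LEN bytes each can carry up to
+ * 43 * 48 = 2064 bytes, which covers MBOX_MAX_BUF_SZ (2048 bytes).
+ */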
+
+#define DST_AEQ_IDX_DEFAULT_VAL 0
+#define SRC_AEQ_IDX_DEFAULT_VAL 0
+#define NO_DMA_ATTRIBUTE_VAL 0
+
+#define MBOX_MSG_NO_DATA_LEN 1
+
+/* Obtain the specified content of the mailbox. */
+#define MBOX_BODY_FROM_HDR(header) ((u8 *)(header) + MBOX_HEADER_SZ)
+#define MBOX_AREA(hwif) \
+ ((hwif)->cfg_regs_base + HINIC3_FUNC_CSR_MAILBOX_DATA_OFF)
+
+#define IS_PF_OR_PPF_SRC(src_func_idx) ((src_func_idx) < HINIC3_MAX_PF_FUNCS)
+
+#define MBOX_RESPONSE_ERROR 0x1
+#define MBOX_MSG_ID_MASK 0xF
+#define MBOX_MSG_ID(func_to_func) ((func_to_func)->send_msg_id)
+#define MBOX_MSG_ID_INC(func_to_func) \
+ ({ \
+ typeof(func_to_func) __func = (func_to_func); \
+ MBOX_MSG_ID(__func) = (MBOX_MSG_ID(__func) + 1) & \
+ MBOX_MSG_ID_MASK; \
+ })
+
+/* Max message counter waits to process for one function. */
+#define HINIC3_MAX_MSG_CNT_TO_PROCESS 10
+
+enum mbox_ordering_type {
+ STRONG_ORDER,
+};
+
+enum mbox_write_back_type {
+ WRITE_BACK = 1,
+};
+
+enum mbox_aeq_trig_type {
+ NOT_TRIGGER,
+ TRIGGER,
+};
+
+static int send_mbox_to_func(struct hinic3_mbox *func_to_func,
+ enum hinic3_mod_type mod, u16 cmd, void *msg,
+ u16 msg_len, u16 dst_func,
+ enum hinic3_msg_direction_type direction,
+ enum hinic3_msg_ack_type ack_type,
+ struct mbox_msg_info *msg_info);
+static int send_tlp_mbox_to_func(struct hinic3_mbox *func_to_func,
+ enum hinic3_mod_type mod, u16 cmd, void *msg,
+ u16 msg_len, u16 dst_func,
+ enum hinic3_msg_direction_type direction,
+ enum hinic3_msg_ack_type ack_type,
+ struct mbox_msg_info *msg_info);
+
+static int
+recv_vf_mbox_handler(struct hinic3_mbox *func_to_func,
+ struct hinic3_recv_mbox *recv_mbox, void *buf_out,
+ u16 *out_size, __rte_unused void *param)
+{
+ int err = 0;
+
+ /*
+ * Invoke the corresponding processing function according to the type of
+ * the received mailbox.
+ */
+ switch (recv_mbox->mod) {
+ case HINIC3_MOD_COMM:
+ err = vf_handle_pf_comm_mbox(func_to_func->hwdev, func_to_func,
+ recv_mbox->cmd, recv_mbox->mbox,
+ recv_mbox->mbox_len, buf_out,
+ out_size);
+ break;
+ case HINIC3_MOD_CFGM:
+ err = cfg_mbx_vf_proc_msg(func_to_func->hwdev,
+ func_to_func->hwdev->cfg_mgmt,
+ recv_mbox->cmd, recv_mbox->mbox, recv_mbox->mbox_len,
+ buf_out, out_size);
+ break;
+ case HINIC3_MOD_L2NIC:
+ err = hinic3_vf_event_handler(func_to_func->hwdev,
+ func_to_func->hwdev->cfg_mgmt,
+ recv_mbox->cmd, recv_mbox->mbox, recv_mbox->mbox_len,
+ buf_out, out_size);
+ break;
+ case HINIC3_MOD_HILINK:
+ err = hinic3_vf_mag_event_handler(func_to_func->hwdev,
+ func_to_func->hwdev->cfg_mgmt,
+ recv_mbox->cmd, recv_mbox->mbox, recv_mbox->mbox_len,
+ buf_out, out_size);
+ break;
+ default:
+ PMD_DRV_LOG(ERR, "No handler, mod: %d", recv_mbox->mod);
+ err = HINIC3_MBOX_VF_CMD_ERROR;
+ break;
+ }
+
+ return err;
+}
+
+/**
+ * Respond to the received message: construct a response message and send it.
+ *
+ * @param[in] func_to_func
+ * Context for inter-function communication.
+ * @param[in] recv_mbox
+ * Pointer to the received inter-function mailbox structure.
+ * @param[in] err
+ * Error Code.
+ * @param[in] out_size
+ * Output Size.
+ * @param[in] src_func_idx
+ * Index of the source function.
+ */
+static void
+response_for_recv_func_mbox(struct hinic3_mbox *func_to_func,
+ struct hinic3_recv_mbox *recv_mbox, int err,
+ u16 out_size, u16 src_func_idx)
+{
+ struct mbox_msg_info msg_info = {0};
+
+ if (recv_mbox->ack_type == HINIC3_MSG_ACK) {
+ msg_info.msg_id = recv_mbox->msg_info.msg_id;
+ if (err)
+ msg_info.status = HINIC3_MBOX_PF_SEND_ERR;
+
+ /* Select the sending function based on the packet type. */
+ if (IS_TLP_MBX(src_func_idx))
+ send_tlp_mbox_to_func(func_to_func, recv_mbox->mod,
+ recv_mbox->cmd,
+ recv_mbox->buf_out, out_size,
+ src_func_idx, HINIC3_MSG_RESPONSE,
+ HINIC3_MSG_NO_ACK, &msg_info);
+ else
+ send_mbox_to_func(func_to_func, recv_mbox->mod,
+ recv_mbox->cmd, recv_mbox->buf_out,
+ out_size, src_func_idx,
+ HINIC3_MSG_RESPONSE,
+ HINIC3_MSG_NO_ACK, &msg_info);
+ }
+}
+
+static bool
+check_func_mbox_ack_first(u8 mod)
+{
+ return mod == HINIC3_MOD_HILINK;
+}
+
+static void
+recv_func_mbox_handler(struct hinic3_mbox *func_to_func,
+ struct hinic3_recv_mbox *recv_mbox, u16 src_func_idx,
+ void *param)
+{
+ struct hinic3_hwdev *hwdev = func_to_func->hwdev;
+ void *buf_out = recv_mbox->buf_out;
+ bool ack_first = false;
+ u16 out_size = MBOX_MAX_BUF_SZ;
+ int err = 0;
+	/* Check whether the ACK must be sent before handling the message. */
+ ack_first = check_func_mbox_ack_first(recv_mbox->mod);
+ if (ack_first && recv_mbox->ack_type == HINIC3_MSG_ACK) {
+ response_for_recv_func_mbox(func_to_func, recv_mbox, err,
+ out_size, src_func_idx);
+ }
+
+	/* Process the mailbox message in the VF. */
+ if (HINIC3_IS_VF(hwdev)) {
+ err = recv_vf_mbox_handler(func_to_func, recv_mbox, buf_out,
+ &out_size, param);
+ } else {
+ err = -EINVAL;
+		PMD_DRV_LOG(ERR,
+			    "PMD doesn't support non-VF mailbox message handling");
+ }
+
+ if (!out_size || err)
+ out_size = MBOX_MSG_NO_DATA_LEN;
+
+ if (!ack_first && recv_mbox->ack_type == HINIC3_MSG_ACK) {
+ response_for_recv_func_mbox(func_to_func, recv_mbox, err,
+ out_size, src_func_idx);
+ }
+}
+
+/**
+ * Process mailbox responses from functions.
+ *
+ * @param[in] func_to_func
+ * Mailbox for inter-function communication.
+ * @param[in] recv_mbox
+ * Received mailbox message.
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+resp_mbox_handler(struct hinic3_mbox *func_to_func,
+ struct hinic3_recv_mbox *recv_mbox)
+{
+ int ret;
+ rte_spinlock_lock(&func_to_func->mbox_lock);
+ if (recv_mbox->msg_info.msg_id == func_to_func->send_msg_id &&
+ func_to_func->event_flag == EVENT_START) {
+ func_to_func->event_flag = EVENT_SUCCESS;
+ ret = 0;
+ } else {
+ PMD_DRV_LOG(ERR,
+ "Mbox response timeout, current send msg id(0x%x), "
+ "recv msg id(0x%x), status(0x%x)",
+ func_to_func->send_msg_id,
+ recv_mbox->msg_info.msg_id,
+ recv_mbox->msg_info.status);
+ ret = HINIC3_MSG_HANDLER_RES;
+ }
+ rte_spinlock_unlock(&func_to_func->mbox_lock);
+ return ret;
+}
+
+/**
+ * Check whether the received mailbox message segment is valid.
+ *
+ * @param[out] recv_mbox
+ * Received mailbox message.
+ * @param[in] mbox_header
+ * Mailbox header.
+ * @return
+ * The value true indicates valid, and the value false indicates invalid.
+ */
+static bool
+check_mbox_segment(struct hinic3_recv_mbox *recv_mbox, u64 mbox_header)
+{
+ u8 seq_id, seg_len, msg_id, mod;
+ u16 src_func_idx, cmd;
+
+ /* Get info from the mailbox header. */
+ seq_id = HINIC3_MSG_HEADER_GET(mbox_header, SEQID);
+ seg_len = HINIC3_MSG_HEADER_GET(mbox_header, SEG_LEN);
+ src_func_idx = HINIC3_MSG_HEADER_GET(mbox_header, SRC_GLB_FUNC_IDX);
+ msg_id = HINIC3_MSG_HEADER_GET(mbox_header, MSG_ID);
+ mod = HINIC3_MSG_HEADER_GET(mbox_header, MODULE);
+ cmd = HINIC3_MSG_HEADER_GET(mbox_header, CMD);
+
+ if (seq_id > SEQ_ID_MAX_VAL || seg_len > MBOX_SEG_LEN)
+ goto seg_err;
+
+	/* First segment of a new message: save its information to recv_mbox. */
+ if (seq_id == 0) {
+ recv_mbox->seq_id = seq_id;
+ recv_mbox->msg_info.msg_id = msg_id;
+ recv_mbox->mod = mod;
+ recv_mbox->cmd = cmd;
+ } else {
+ if ((seq_id != recv_mbox->seq_id + 1) ||
+ msg_id != recv_mbox->msg_info.msg_id ||
+ mod != recv_mbox->mod || cmd != recv_mbox->cmd)
+ goto seg_err;
+
+ recv_mbox->seq_id = seq_id;
+ }
+
+ return true;
+
+seg_err:
+ PMD_DRV_LOG(ERR,
+ "Mailbox segment check failed, src func id: 0x%x, "
+ "front seg info: seq id: 0x%x, msg id: 0x%x, mod: 0x%x, "
+ "cmd: 0x%x",
+ src_func_idx, recv_mbox->seq_id, recv_mbox->msg_info.msg_id,
+ recv_mbox->mod, recv_mbox->cmd);
+ PMD_DRV_LOG(ERR,
+ "Current seg info: seg len: 0x%x, seq id: 0x%x, "
+ "msg id: 0x%x, mod: 0x%x, cmd: 0x%x",
+ seg_len, seq_id, msg_id, mod, cmd);
+
+ return false;
+}
+
+static int
+recv_mbox_handler(struct hinic3_mbox *func_to_func, void *header,
+ struct hinic3_recv_mbox *recv_mbox, void *param)
+{
+ u64 mbox_header = *((u64 *)header);
+ void *mbox_body = MBOX_BODY_FROM_HDR(header);
+ u16 src_func_idx;
+ int pos;
+ u8 seq_id;
+ /* Obtain information from the mailbox header. */
+ seq_id = HINIC3_MSG_HEADER_GET(mbox_header, SEQID);
+ src_func_idx = HINIC3_MSG_HEADER_GET(mbox_header, SRC_GLB_FUNC_IDX);
+
+ if (!check_mbox_segment(recv_mbox, mbox_header)) {
+ recv_mbox->seq_id = SEQ_ID_MAX_VAL;
+ return HINIC3_MSG_HANDLER_RES;
+ }
+
+ pos = seq_id * MBOX_SEG_LEN;
+ memcpy((void *)((u8 *)recv_mbox->mbox + pos), (void *)mbox_body,
+ (size_t)HINIC3_MSG_HEADER_GET(mbox_header, SEG_LEN));
+
+ if (!HINIC3_MSG_HEADER_GET(mbox_header, LAST))
+ return HINIC3_MSG_HANDLER_RES;
+	/* Set the information of the received mailbox. */
+ recv_mbox->cmd = HINIC3_MSG_HEADER_GET(mbox_header, CMD);
+ recv_mbox->mod = HINIC3_MSG_HEADER_GET(mbox_header, MODULE);
+ recv_mbox->mbox_len = HINIC3_MSG_HEADER_GET(mbox_header, MSG_LEN);
+ recv_mbox->ack_type = HINIC3_MSG_HEADER_GET(mbox_header, NO_ACK);
+ recv_mbox->msg_info.msg_id = HINIC3_MSG_HEADER_GET(mbox_header, MSG_ID);
+ recv_mbox->msg_info.status = HINIC3_MSG_HEADER_GET(mbox_header, STATUS);
+ recv_mbox->seq_id = SEQ_ID_MAX_VAL;
+
+ /*
+ * If the received message is a response message, call the mbox response
+ * processing function.
+ */
+ if (HINIC3_MSG_HEADER_GET(mbox_header, DIRECTION) ==
+ HINIC3_MSG_RESPONSE) {
+ return resp_mbox_handler(func_to_func, recv_mbox);
+ }
+
+ recv_func_mbox_handler(func_to_func, recv_mbox, src_func_idx, param);
+ return HINIC3_MSG_HANDLER_RES;
+}
+
+static inline int
+hinic3_mbox_get_index(int func)
+{
+ return (func == HINIC3_MGMT_SRC_ID) ? HINIC3_MBOX_MPU_INDEX
+ : HINIC3_MBOX_PF_INDEX;
+}
+
+int
+hinic3_mbox_func_aeqe_handler(void *handle, u8 *header, __rte_unused u8 size,
+ void *param)
+{
+ struct hinic3_mbox *func_to_func = NULL;
+ struct hinic3_recv_mbox *recv_mbox = NULL;
+ u64 mbox_header = *((u64 *)header);
+ u64 src, dir;
+ /* Obtain the mailbox for communication between functions. */
+ func_to_func = ((struct hinic3_hwdev *)handle)->func_to_func;
+
+ dir = HINIC3_MSG_HEADER_GET(mbox_header, DIRECTION);
+ src = HINIC3_MSG_HEADER_GET(mbox_header, SRC_GLB_FUNC_IDX);
+
+ src = hinic3_mbox_get_index((int)src);
+ recv_mbox = (dir == HINIC3_MSG_DIRECT_SEND)
+ ? &func_to_func->mbox_send[src]
+ : &func_to_func->mbox_resp[src];
+	/* Process the received mailbox message. */
+ return recv_mbox_handler(func_to_func, (u64 *)header, recv_mbox, param);
+}
+
+static void
+clear_mbox_status(struct hinic3_send_mbox *mbox)
+{
+ *mbox->wb_status = 0;
+
+ /* Clear mailbox write back status. */
+ rte_wmb();
+}
+
+static void
+mbox_copy_header(struct hinic3_send_mbox *mbox, u64 *header)
+{
+ u32 *data = (u32 *)header;
+ u32 i, idx_max = MBOX_HEADER_SZ / sizeof(u32);
+
+ for (i = 0; i < idx_max; i++) {
+ rte_write32(cpu_to_be32(*(data + i)),
+ mbox->data + i * sizeof(u32));
+ }
+}
+
+#define MBOX_DMA_MSG_INIT_XOR_VAL 0x5a5a5a5a
+static u32
+mbox_dma_msg_xor(u32 *data, u16 msg_len)
+{
+ u32 xor = MBOX_DMA_MSG_INIT_XOR_VAL;
+ u16 dw_len = msg_len / sizeof(u32);
+ u16 i;
+
+ for (i = 0; i < dw_len; i++)
+ xor ^= data[i];
+
+ return xor;
+}
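
For reference, the DMA path guards the out-of-line buffer with this XOR
fold: every 32-bit word of the message is XORed into an accumulator seeded
with 0x5a5a5a5a, and the receiver recomputes the value over the DMA'd data.
A minimal standalone sketch of the same check (names are illustrative):

    #include <stdint.h>

    #define INIT_XOR_VAL 0x5a5a5a5aU

    /* XOR-fold a buffer of 32-bit words, as mbox_dma_msg_xor() does. */
    static uint32_t xor_fold(const uint32_t *data, uint16_t len_bytes)
    {
        uint32_t acc = INIT_XOR_VAL;
        uint16_t i;

        for (i = 0; i < len_bytes / sizeof(uint32_t); i++)
            acc ^= data[i];
        return acc;
    }

    /* The sender stores xor_fold(buf, len) next to the DMA address; the
     * receiver recomputes it and rejects the message on mismatch.
     */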
+
+static void
+mbox_copy_send_data_addr(struct hinic3_send_mbox *mbox, u16 seg_len)
+{
+ u32 addr_h, addr_l, xor;
+
+ xor = mbox_dma_msg_xor(mbox->sbuff_vaddr, seg_len);
+ addr_h = upper_32_bits(mbox->sbuff_paddr);
+ addr_l = lower_32_bits(mbox->sbuff_paddr);
+
+ rte_write32(cpu_to_be32(xor), mbox->data + MBOX_HEADER_SZ);
+ rte_write32(cpu_to_be32(addr_h),
+ mbox->data + MBOX_HEADER_SZ + sizeof(u32));
+ rte_write32(cpu_to_be32(addr_l),
+ mbox->data + MBOX_HEADER_SZ + 0x2 * sizeof(u32));
+ rte_write32(cpu_to_be32((u32)seg_len),
+ mbox->data + MBOX_HEADER_SZ + 0x3 * sizeof(u32));
+ /* Reserved field. */
+ rte_write32(0, mbox->data + MBOX_HEADER_SZ + 0x4 * sizeof(u32));
+ rte_write32(0, mbox->data + MBOX_HEADER_SZ + 0x5 * sizeof(u32));
+}
+
+static void
+mbox_copy_send_data(struct hinic3_send_mbox *mbox, void *seg, u16 seg_len)
+{
+ u32 *data = seg;
+ u32 data_len, chk_sz = sizeof(u32);
+ u32 i, idx_max;
+ u8 mbox_max_buf[MBOX_SEG_LEN] = {0};
+
+ /* The mbox message should be 4-byte aligned. */
+ if (seg_len % chk_sz) {
+ rte_memcpy(mbox_max_buf, seg, seg_len);
+ data = (u32 *)mbox_max_buf;
+ }
+
+ data_len = seg_len;
+ idx_max = RTE_ALIGN(data_len, chk_sz) / chk_sz;
+
+ for (i = 0; i < idx_max; i++) {
+ rte_write32(cpu_to_be32(*(data + i)),
+ mbox->data + MBOX_HEADER_SZ + i * sizeof(u32));
+ }
+}
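
For example, a 10-byte segment is first copied into the zeroed bounce
buffer above, then written out as RTE_ALIGN(10, 4) / 4 = 3 big-endian
dwords (12 bytes), so the hardware only ever sees whole, zero-padded
32-bit writes.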
+
+static void
+write_mbox_msg_attr(struct hinic3_mbox *func_to_func, u16 dst_func,
+ u16 dst_aeqn, u16 seg_len)
+{
+ u32 mbox_int, mbox_ctrl;
+
+ /* For a VF, function IDs must be self-learned by HW (PPF=1, PF=0). */
+ if (HINIC3_IS_VF(func_to_func->hwdev) &&
+ dst_func != HINIC3_MGMT_SRC_ID) {
+ if (dst_func == HINIC3_HWIF_PPF_IDX(func_to_func->hwdev->hwif))
+ dst_func = 1;
+ else
+ dst_func = 0;
+ }
+ /* Set the interrupt attribute of the mailbox. */
+ mbox_int = HINIC3_MBOX_INT_SET(dst_aeqn, DST_AEQN) |
+ HINIC3_MBOX_INT_SET(0, SRC_RESP_AEQN) |
+ HINIC3_MBOX_INT_SET(NO_DMA_ATTRIBUTE_VAL, STAT_DMA) |
+ HINIC3_MBOX_INT_SET(RTE_ALIGN(seg_len + MBOX_HEADER_SZ,
+ MBOX_SEG_LEN_ALIGN) >>
+ 2,
+ TX_SIZE) |
+ HINIC3_MBOX_INT_SET(STRONG_ORDER, STAT_DMA_SO_RO) |
+ HINIC3_MBOX_INT_SET(WRITE_BACK, WB_EN);
+
+ /* The interrupt attribute is written to the interrupt register. */
+ hinic3_hwif_write_reg(func_to_func->hwdev->hwif,
+ HINIC3_FUNC_CSR_MAILBOX_INT_OFFSET_OFF, mbox_int);
+
+ rte_wmb(); /**< Writing the mbox intr attributes */
+
+ /* Set the control attributes of the mailbox and write to register. */
+ mbox_ctrl = HINIC3_MBOX_CTRL_SET(TX_NOT_DONE, TX_STATUS);
+ mbox_ctrl |= HINIC3_MBOX_CTRL_SET(NOT_TRIGGER, TRIGGER_AEQE);
+ mbox_ctrl |= HINIC3_MBOX_CTRL_SET(dst_func, DST_FUNC);
+ hinic3_hwif_write_reg(func_to_func->hwdev->hwif,
+ HINIC3_FUNC_CSR_MAILBOX_CONTROL_OFF, mbox_ctrl);
+}
+
+/**
+ * Read the value of the mailbox register of the hardware device.
+ *
+ * @param[in] hwdev
+ * Pointer to hardware device structure.
+ */
+static void
+dump_mbox_reg(struct hinic3_hwdev *hwdev)
+{
+ u32 val;
+ /* Read the value of the MBOX control register. */
+ val = hinic3_hwif_read_reg(hwdev->hwif,
+ HINIC3_FUNC_CSR_MAILBOX_CONTROL_OFF);
+ PMD_DRV_LOG(ERR, "Mailbox control reg: 0x%x", val);
+ /* Read the value of the MBOX interrupt offset register. */
+ val = hinic3_hwif_read_reg(hwdev->hwif,
+ HINIC3_FUNC_CSR_MAILBOX_INT_OFFSET_OFF);
+ PMD_DRV_LOG(ERR, "Mailbox interrupt offset: 0x%x", val);
+}
+
+static u16
+get_mbox_status(struct hinic3_send_mbox *mbox)
+{
+ /* Write-back area is 16B, but only the first 4B are used. */
+ u64 wb_val = be64_to_cpu(*mbox->wb_status);
+
+ rte_rmb(); /**< Ensure the read completes before the check. */
+
+ return (u16)(wb_val & MBOX_WB_STATUS_ERRCODE_MASK);
+}
+
+/**
+ * Sending Mailbox Message Segment.
+ *
+ * @param[in] func_to_func
+ * Mailbox for inter-function communication.
+ * @param[in] header
+ * Mailbox header.
+ * @param[in] dst_func
+ * Indicate destination func.
+ * @param[in] seg
+ * Segment data to be sent.
+ * @param[in] seg_len
+ * Length of the segment to be sent.
+ * @param[in] msg_info
+ * Indicate the message information.
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+send_mbox_seg(struct hinic3_mbox *func_to_func, u64 header, u16 dst_func,
+ void *seg, u16 seg_len, __rte_unused void *msg_info)
+{
+ struct hinic3_send_mbox *send_mbox = &func_to_func->send_mbox;
+ struct hinic3_hwdev *hwdev = func_to_func->hwdev;
+ u8 num_aeqs = hwdev->hwif->attr.num_aeqs;
+ u16 dst_aeqn, wb_status = 0, errcode;
+ u16 seq_dir = HINIC3_MSG_HEADER_GET(header, DIRECTION);
+ u32 cnt = 0;
+
+ /* For mbox to mgmt cpu, hardware ignores the dst AEQ ID. */
+ if (num_aeqs >= 2)
+ dst_aeqn = (seq_dir == HINIC3_MSG_DIRECT_SEND)
+ ? HINIC3_ASYNC_MSG_AEQ
+ : HINIC3_MBOX_RSP_MSG_AEQ;
+ else
+ dst_aeqn = 0;
+
+ clear_mbox_status(send_mbox);
+ mbox_copy_header(send_mbox, &header);
+ mbox_copy_send_data(send_mbox, seg, seg_len);
+
+ /* Set mailbox msg seg len. */
+ write_mbox_msg_attr(func_to_func, dst_func, dst_aeqn, seg_len);
+ rte_wmb(); /**< Writing the mbox msg attributes. */
+
+ /* Wait until the status of the mailbox changes to Complete. */
+ while (cnt < MBOX_MSG_POLLING_TIMEOUT) {
+ wb_status = get_mbox_status(send_mbox);
+ if (MBOX_STATUS_FINISHED(wb_status))
+ break;
+
+ rte_delay_us(10);
+ cnt++;
+ }
+
+ if (cnt == MBOX_MSG_POLLING_TIMEOUT) {
+ PMD_DRV_LOG(ERR,
+ "Send mailbox segment timeout, wb status: 0x%x",
+ wb_status);
+ dump_mbox_reg(hwdev);
+ return -ETIMEDOUT;
+ }
+
+ if (!MBOX_STATUS_SUCCESS(wb_status)) {
+ PMD_DRV_LOG(ERR,
+ "Send mailbox segment to function %d error, wb "
+ "status: 0x%x",
+ dst_func, wb_status);
+ errcode = MBOX_STATUS_ERRCODE(wb_status);
+ return errcode ? errcode : -EFAULT;
+ }
+
+ return 0;
+}
+
+static int
+send_tlp_mbox_seg(struct hinic3_mbox *func_to_func, u64 header, u16 dst_func,
+ void *seg, u16 seg_len, __rte_unused void *msg_info)
+{
+ struct hinic3_send_mbox *send_mbox = &func_to_func->send_mbox;
+ struct hinic3_hwdev *hwdev = func_to_func->hwdev;
+ u8 num_aeqs = hwdev->hwif->attr.num_aeqs;
+ u16 dst_aeqn, errcode, wb_status = 0;
+ u16 seq_dir = HINIC3_MSG_HEADER_GET(header, DIRECTION);
+ u32 cnt = 0;
+
+ /* For mbox to mgmt cpu, hardware ignores the dst AEQ ID. */
+ if (num_aeqs >= 2)
+ dst_aeqn = (seq_dir == HINIC3_MSG_DIRECT_SEND)
+ ? HINIC3_ASYNC_MSG_AEQ
+ : HINIC3_MBOX_RSP_MSG_AEQ;
+ else
+ dst_aeqn = 0;
+
+ clear_mbox_status(send_mbox);
+ mbox_copy_header(send_mbox, &header);
+
+ /* Copy data to DMA buffer. */
+ memcpy((void *)send_mbox->sbuff_vaddr, (void *)seg, (size_t)seg_len);
+
+ /*
+ * Copy the data address to the mailbox ctrl CSR (Control and Status Register).
+ */
+ mbox_copy_send_data_addr(send_mbox, seg_len);
+
+ /* Set mailbox msg header size. */
+ write_mbox_msg_attr(func_to_func, dst_func, dst_aeqn,
+ MBOX_TLP_HEADER_SZ);
+
+ rte_wmb(); /**< Writing the mbox msg attributes. */
+
+ /* Wait until the status of the mailbox changes to Complete. */
+ while (cnt < MBOX_MSG_POLLING_TIMEOUT) {
+ wb_status = get_mbox_status(send_mbox);
+ if (MBOX_STATUS_FINISHED(wb_status))
+ break;
+
+ rte_delay_us(10);
+ cnt++;
+ }
+
+ if (cnt == MBOX_MSG_POLLING_TIMEOUT) {
+ PMD_DRV_LOG(ERR,
+ "Send mailbox segment timeout, wb status: 0x%x",
+ wb_status);
+ dump_mbox_reg(hwdev);
+ return -ETIMEDOUT;
+ }
+
+ if (!MBOX_STATUS_SUCCESS(wb_status)) {
+ PMD_DRV_LOG(ERR,
+ "Send mailbox segment to function %d error, wb "
+ "status: 0x%x",
+ dst_func, wb_status);
+ errcode = MBOX_STATUS_ERRCODE(wb_status);
+ return errcode ? errcode : -EFAULT;
+ }
+
+ return 0;
+}
+
+static int
+send_mbox_to_func(struct hinic3_mbox *func_to_func, enum hinic3_mod_type mod,
+ u16 cmd, void *msg, u16 msg_len, u16 dst_func,
+ enum hinic3_msg_direction_type direction,
+ enum hinic3_msg_ack_type ack_type,
+ struct mbox_msg_info *msg_info)
+{
+ int err = 0;
+ u32 seq_id = 0;
+ u16 seg_len = MBOX_SEG_LEN;
+ u16 rsp_aeq_id, left = msg_len;
+ u8 *msg_seg = (u8 *)msg;
+ u64 header = 0;
+
+ rsp_aeq_id = HINIC3_MBOX_RSP_MSG_AEQ;
+
+ err = hinic3_mutex_lock(&func_to_func->msg_send_mutex);
+ if (err)
+ return err;
+
+ /* Set the header message. */
+ header = HINIC3_MSG_HEADER_SET(msg_len, MSG_LEN) |
+ HINIC3_MSG_HEADER_SET(mod, MODULE) |
+ HINIC3_MSG_HEADER_SET(seg_len, SEG_LEN) |
+ HINIC3_MSG_HEADER_SET(ack_type, NO_ACK) |
+ HINIC3_MSG_HEADER_SET(HINIC3_DATA_INLINE, DATA_TYPE) |
+ HINIC3_MSG_HEADER_SET(SEQ_ID_START_VAL, SEQID) |
+ HINIC3_MSG_HEADER_SET(NOT_LAST_SEGMENT, LAST) |
+ HINIC3_MSG_HEADER_SET(direction, DIRECTION) |
+ HINIC3_MSG_HEADER_SET(cmd, CMD) |
+ /* The VF's offset to its associated PF. */
+ HINIC3_MSG_HEADER_SET(msg_info->msg_id, MSG_ID) |
+ HINIC3_MSG_HEADER_SET(rsp_aeq_id, AEQ_ID) |
+ HINIC3_MSG_HEADER_SET(HINIC3_MSG_FROM_MBOX, SOURCE) |
+ HINIC3_MSG_HEADER_SET(!!msg_info->status, STATUS);
+ /* Loop until all segments are sent. */
+ while (!(HINIC3_MSG_HEADER_GET(header, LAST))) {
+ if (left <= MBOX_SEG_LEN) {
+ header &= ~MBOX_SEGLEN_MASK;
+ header |= HINIC3_MSG_HEADER_SET(left, SEG_LEN);
+ header |= HINIC3_MSG_HEADER_SET(LAST_SEGMENT, LAST);
+
+ seg_len = left;
+ }
+
+ err = send_mbox_seg(func_to_func, header, dst_func, msg_seg,
+ seg_len, msg_info);
+ if (err) {
+ PMD_DRV_LOG(ERR,
+ "Send mbox seg failed, seq_id: 0x%" PRIx64,
+ HINIC3_MSG_HEADER_GET(header, SEQID));
+
+ goto send_err;
+ }
+
+ left -= MBOX_SEG_LEN;
+ msg_seg += MBOX_SEG_LEN;
+
+ seq_id++;
+ header &= ~(HINIC3_MSG_HEADER_SET(HINIC3_MSG_HEADER_SEQID_MASK,
+ SEQID));
+ header |= HINIC3_MSG_HEADER_SET(seq_id, SEQID);
+ }
+
+send_err:
+ (void)hinic3_mutex_unlock(&func_to_func->msg_send_mutex);
+
+ return err;
+}
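
The loop above slices an inline message into MBOX_SEG_LEN-byte segments:
seq_id counts up from SEQ_ID_START_VAL, and the LAST bit plus a shortened
SEG_LEN are patched into the header for the final piece. A sketch of the
same arithmetic, with a hypothetical 48-byte segment size:

    #include <stdint.h>
    #include <stdio.h>

    #define SEG_LEN 48 /* stand-in for MBOX_SEG_LEN */

    static void show_segments(uint16_t msg_len)
    {
        uint16_t left = msg_len, seq_id = 0;

        while (left > 0) {
            uint16_t seg = left < SEG_LEN ? left : SEG_LEN;

            printf("seq_id=%u seg_len=%u last=%d\n",
                   seq_id, seg, left <= SEG_LEN);
            left -= seg;
            seq_id++;
        }
    }

A 100-byte message thus becomes segments (0, 48), (1, 48) and (2, 4),
with only the third carrying the LAST bit.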
+
+static int
+send_tlp_mbox_to_func(struct hinic3_mbox *func_to_func,
+ enum hinic3_mod_type mod, u16 cmd, void *msg, u16 msg_len,
+ u16 dst_func, enum hinic3_msg_direction_type direction,
+ enum hinic3_msg_ack_type ack_type,
+ struct mbox_msg_info *msg_info)
+{
+ struct hinic3_hwdev *hwdev = func_to_func->hwdev;
+ u8 *msg_seg = (u8 *)msg;
+ int err = 0;
+ u16 rsp_aeq_id;
+ u64 header = 0;
+
+ rsp_aeq_id = HINIC3_MBOX_RSP_MSG_AEQ;
+
+ err = hinic3_mutex_lock(&func_to_func->msg_send_mutex);
+ if (err)
+ return err;
+
+ /* Set the header message. */
+ header = HINIC3_MSG_HEADER_SET(MBOX_TLP_HEADER_SZ, MSG_LEN) |
+ HINIC3_MSG_HEADER_SET(MBOX_TLP_HEADER_SZ, SEG_LEN) |
+ HINIC3_MSG_HEADER_SET(mod, MODULE) |
+ HINIC3_MSG_HEADER_SET(LAST_SEGMENT, LAST) |
+ HINIC3_MSG_HEADER_SET(ack_type, NO_ACK) |
+ HINIC3_MSG_HEADER_SET(HINIC3_DATA_DMA, DATA_TYPE) |
+ HINIC3_MSG_HEADER_SET(SEQ_ID_START_VAL, SEQID) |
+ HINIC3_MSG_HEADER_SET(direction, DIRECTION) |
+ HINIC3_MSG_HEADER_SET(cmd, CMD) |
+ /* The VF's offset to its associated PF. */
+ HINIC3_MSG_HEADER_SET(msg_info->msg_id, MSG_ID) |
+ HINIC3_MSG_HEADER_SET(rsp_aeq_id, AEQ_ID) |
+ HINIC3_MSG_HEADER_SET(HINIC3_MSG_FROM_MBOX, SOURCE) |
+ HINIC3_MSG_HEADER_SET(!!msg_info->status, STATUS) |
+ HINIC3_MSG_HEADER_SET(hinic3_global_func_id(hwdev),
+ SRC_GLB_FUNC_IDX);
+
+ /* Send the whole message as a single segment. */
+ err = send_tlp_mbox_seg(func_to_func, header, dst_func, msg_seg,
+ msg_len, msg_info);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Send mbox seg failed, seq_id: 0x%" PRIx64,
+ HINIC3_MSG_HEADER_GET(header, SEQID));
+ }
+
+ (void)hinic3_mutex_unlock(&func_to_func->msg_send_mutex);
+
+ return err;
+}
+
+/**
+ * Set mailbox F2F(Function to Function) event status.
+ *
+ * @param[out] func_to_func
+ * Context for inter-function communication.
+ * @param[in] event_flag
+ * Event status enumerated value.
+ */
+static void
+set_mbox_to_func_event(struct hinic3_mbox *func_to_func,
+ enum mbox_event_state event_flag)
+{
+ rte_spinlock_lock(&func_to_func->mbox_lock);
+ func_to_func->event_flag = event_flag;
+ rte_spinlock_unlock(&func_to_func->mbox_lock);
+}
+
+/**
+ * Send data from one function to another and receive responses.
+ *
+ * @param[in] func_to_func
+ * Context for inter-function communication.
+ * @param[in] mod
+ * Command queue module type.
+ * @param[in] cmd
+ * Indicate the command to be executed.
+ * @param[in] dst_func
+ * Indicate destination func.
+ * @param[in] buf_in
+ * Pointer to the input buffer.
+ * @param[in] in_size
+ * Input buffer size.
+ * @param[out] buf_out
+ * Pointer to the output buffer.
+ * @param[out] out_size
+ * Output buffer size.
+ * @param[in] timeout
+ * Timeout interval for waiting for a response.
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_mbox_to_func(struct hinic3_mbox *func_to_func, enum hinic3_mod_type mod,
+ u16 cmd, u16 dst_func, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size, u32 timeout)
+{
+ /* Use mbox_resp to hold the response data from the other function. */
+ struct hinic3_recv_mbox *mbox_for_resp = NULL;
+ struct mbox_msg_info msg_info = {0};
+ struct hinic3_eq *aeq = NULL;
+ u16 mbox_rsp_idx;
+ u32 time;
+ int err;
+
+ mbox_rsp_idx = (u16)hinic3_mbox_get_index(dst_func);
+ mbox_for_resp = &func_to_func->mbox_resp[mbox_rsp_idx];
+
+ err = hinic3_mutex_lock(&func_to_func->mbox_send_mutex);
+ if (err)
+ return err;
+
+ /* Set message ID and start event. */
+ msg_info.msg_id = MBOX_MSG_ID_INC(func_to_func);
+ set_mbox_to_func_event(func_to_func, EVENT_START);
+
+ /* Select a function to send messages based on the dst_func type. */
+ if (IS_TLP_MBX(dst_func))
+ err = send_tlp_mbox_to_func(func_to_func,
+ mod, cmd, buf_in, in_size, dst_func,
+ HINIC3_MSG_DIRECT_SEND, HINIC3_MSG_ACK, &msg_info);
+ else
+ err = send_mbox_to_func(func_to_func, mod, cmd, buf_in, in_size,
+ dst_func, HINIC3_MSG_DIRECT_SEND,
+ HINIC3_MSG_ACK, &msg_info);
+
+ if (err) {
+ PMD_DRV_LOG(ERR, "Send mailbox failed, msg_id: %d",
+ msg_info.msg_id);
+ set_mbox_to_func_event(func_to_func, EVENT_FAIL);
+ goto send_err;
+ }
+
+ /* Wait for the response message. */
+ time = msecs_to_jiffies(timeout ? timeout : HINIC3_MBOX_COMP_TIME);
+ aeq = &func_to_func->hwdev->aeqs->aeq[HINIC3_MBOX_RSP_MSG_AEQ];
+ err = hinic3_aeq_poll_msg(aeq, time, NULL);
+ if (err) {
+ set_mbox_to_func_event(func_to_func, EVENT_TIMEOUT);
+ PMD_DRV_LOG(ERR, "Send mailbox message time out");
+ err = -ETIMEDOUT;
+ goto send_err;
+ }
+
+ /* Check whether mod and command of the rsp message match the sent message. */
+ if (mod != mbox_for_resp->mod || cmd != mbox_for_resp->cmd) {
+ PMD_DRV_LOG(ERR,
+ "Invalid response mbox message, mod: 0x%x, cmd: "
+ "0x%x, expect mod: 0x%x, cmd: 0x%x",
+ mbox_for_resp->mod, mbox_for_resp->cmd, mod, cmd);
+ err = -EFAULT;
+ goto send_err;
+ }
+
+ /* Check the response status. */
+ if (mbox_for_resp->msg_info.status) {
+ err = mbox_for_resp->msg_info.status;
+ goto send_err;
+ }
+
+ /* Check whether the length of the response message is valid. */
+ if (buf_out && out_size) {
+ if (*out_size < mbox_for_resp->mbox_len) {
+ PMD_DRV_LOG(ERR,
+ "Invalid response mbox message length: %d for "
+ "mod: %d cmd: %d, should less than: %d",
+ mbox_for_resp->mbox_len, mod, cmd, *out_size);
+ err = -EFAULT;
+ goto send_err;
+ }
+
+ if (mbox_for_resp->mbox_len)
+ memcpy(buf_out, mbox_for_resp->mbox,
+ (size_t)(mbox_for_resp->mbox_len));
+
+ *out_size = mbox_for_resp->mbox_len;
+ }
+
+send_err:
+ (void)hinic3_mutex_unlock(&func_to_func->mbox_send_mutex);
+
+ return err;
+}
+
+static int
+mbox_func_params_valid(__rte_unused struct hinic3_mbox *func_to_func,
+ void *buf_in, u16 in_size)
+{
+ if (!buf_in || !in_size)
+ return -EINVAL;
+
+ if (in_size > HINIC3_MBOX_DATA_SIZE) {
+ PMD_DRV_LOG(ERR, "Mbox msg len(%d) exceed limit(%" PRIu64 ")",
+ in_size, HINIC3_MBOX_DATA_SIZE);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int
+hinic3_mbox_to_func_no_ack(struct hinic3_hwdev *hwdev, u16 func_idx,
+ enum hinic3_mod_type mod, u16 cmd, void *buf_in,
+ u16 in_size)
+{
+ struct hinic3_mbox *func_to_func = hwdev->func_to_func;
+ struct mbox_msg_info msg_info = {0};
+ int err;
+
+ err = mbox_func_params_valid(hwdev->func_to_func, buf_in, in_size);
+ if (err)
+ return err;
+
+ err = hinic3_mutex_lock(&func_to_func->mbox_send_mutex);
+ if (err)
+ return err;
+
+ if (IS_TLP_MBX(func_idx))
+ err = send_tlp_mbox_to_func(func_to_func,
+ mod, cmd, buf_in, in_size, func_idx,
+ HINIC3_MSG_DIRECT_SEND, HINIC3_MSG_NO_ACK, &msg_info);
+ else
+ err = send_mbox_to_func(func_to_func, mod, cmd, buf_in, in_size,
+ func_idx, HINIC3_MSG_DIRECT_SEND,
+ HINIC3_MSG_NO_ACK, &msg_info);
+ if (err)
+ PMD_DRV_LOG(ERR, "Send mailbox no ack failed");
+
+ (void)hinic3_mutex_unlock(&func_to_func->mbox_send_mutex);
+
+ return err;
+}
+
+int
+hinic3_send_mbox_to_mgmt(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod,
+ u16 cmd, void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size, u32 timeout)
+{
+ struct hinic3_mbox *func_to_func = hwdev->func_to_func;
+ int err;
+ /* Verify the validity of the input parameters. */
+ err = mbox_func_params_valid(func_to_func, buf_in, in_size);
+ if (err)
+ return err;
+
+ return hinic3_mbox_to_func(func_to_func, mod, cmd, HINIC3_MGMT_SRC_ID,
+ buf_in, in_size, buf_out, out_size, timeout);
+}
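
A typical caller wraps this in a small command helper: fill a request
struct, pass it as both input and output buffer, then check the driver
error plus the firmware status byte. A sketch under assumed names (the
command struct layout, module ID and command ID here are illustrative
only, not taken from the patch):

    struct fake_query_cmd {
        u8 status;
        u8 rsvd[3];
        u32 value;
    };

    static int query_value(struct hinic3_hwdev *hwdev, u32 *out)
    {
        struct fake_query_cmd cmd = {0};
        u16 out_size = sizeof(cmd);
        int err;

        err = hinic3_send_mbox_to_mgmt(hwdev,
                                       HINIC3_MOD_COMM /* assumed */,
                                       1 /* illustrative cmd id */,
                                       &cmd, sizeof(cmd), &cmd,
                                       &out_size, 0);
        if (err || cmd.status)
            return err ? err : -EIO;

        *out = cmd.value;
        return 0;
    }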
+
+void
+hinic3_response_mbox_to_mgmt(struct hinic3_hwdev *hwdev,
+ enum hinic3_mod_type mod, u16 cmd, void *buf_in,
+ u16 in_size, u16 msg_id)
+{
+ struct mbox_msg_info msg_info;
+ u16 dst_func;
+
+ msg_info.msg_id = (u8)msg_id;
+ msg_info.status = 0;
+ dst_func = HINIC3_MGMT_SRC_ID;
+
+ if (IS_TLP_MBX(dst_func))
+ send_tlp_mbox_to_func(hwdev->func_to_func, mod, cmd, buf_in,
+ in_size, HINIC3_MGMT_SRC_ID,
+ HINIC3_MSG_RESPONSE, HINIC3_MSG_NO_ACK,
+ &msg_info);
+ else
+ send_mbox_to_func(hwdev->func_to_func, mod, cmd, buf_in,
+ in_size, HINIC3_MGMT_SRC_ID,
+ HINIC3_MSG_RESPONSE, HINIC3_MSG_NO_ACK,
+ &msg_info);
+}
+
+int
+hinic3_send_mbox_to_mgmt_no_ack(struct hinic3_hwdev *hwdev,
+ enum hinic3_mod_type mod, u16 cmd, void *buf_in,
+ u16 in_size)
+{
+ struct hinic3_mbox *func_to_func = hwdev->func_to_func;
+ int err;
+
+ err = mbox_func_params_valid(func_to_func, buf_in, in_size);
+ if (err)
+ return err;
+
+ return hinic3_mbox_to_func_no_ack(hwdev, HINIC3_MGMT_SRC_ID, mod, cmd,
+ buf_in, in_size);
+}
+
+int
+hinic3_mbox_to_pf(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod, u16 cmd,
+ void *buf_in, u16 in_size, void *buf_out, u16 *out_size,
+ u32 timeout)
+{
+ int err;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ /* Check the validity of parameters. */
+ err = mbox_func_params_valid(hwdev->func_to_func, buf_in, in_size);
+ if (err)
+ return err;
+
+ if (!HINIC3_IS_VF(hwdev)) {
+ PMD_DRV_LOG(ERR, "Params error, func_type: %d",
+ hinic3_func_type(hwdev));
+ return -EINVAL;
+ }
+
+ /* Send the mailbox message to the PF. */
+ return hinic3_mbox_to_func(hwdev->func_to_func, mod, cmd,
+ hinic3_pf_id_of_vf(hwdev), buf_in, in_size,
+ buf_out, out_size, timeout);
+}
+
+int
+hinic3_mbox_to_vf(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod,
+ u16 vf_id, u16 cmd, void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size, u32 timeout)
+{
+ struct hinic3_mbox *func_to_func = NULL;
+ u16 dst_func_idx;
+ int err = 0;
+
+ if (!hwdev)
+ return -EINVAL;
+
+ func_to_func = hwdev->func_to_func;
+ err = mbox_func_params_valid(func_to_func, buf_in, in_size);
+ if (err)
+ return err;
+
+ if (HINIC3_IS_VF(hwdev)) {
+ PMD_DRV_LOG(ERR, "Params error, func_type: %d",
+ hinic3_func_type(hwdev));
+ return -EINVAL;
+ }
+
+ if (!vf_id) {
+ PMD_DRV_LOG(ERR, "VF id: %d error!", vf_id);
+ return -EINVAL;
+ }
+
+ /*
+ * vf_offset_to_pf + vf_id gives the VF's global function ID within
+ * this PF.
+ */
+ dst_func_idx = hinic3_glb_pf_vf_offset(hwdev) + vf_id;
+
+ return hinic3_mbox_to_func(func_to_func, mod, cmd, dst_func_idx, buf_in,
+ in_size, buf_out, out_size, timeout);
+}
+
+static int
+init_mbox_info(struct hinic3_recv_mbox *mbox_info, int mbox_max_buf_sz)
+{
+ int err;
+
+ mbox_info->seq_id = SEQ_ID_MAX_VAL;
+
+ mbox_info->mbox =
+ rte_zmalloc("mbox", (size_t)mbox_max_buf_sz, 1); /*lint !e571*/
+ if (!mbox_info->mbox)
+ return -ENOMEM;
+
+ mbox_info->buf_out = rte_zmalloc("mbox_buf_out",
+ (size_t)mbox_max_buf_sz, 1); /*lint !e571*/
+ if (!mbox_info->buf_out) {
+ err = -ENOMEM;
+ goto alloc_buf_out_err;
+ }
+
+ return 0;
+
+alloc_buf_out_err:
+ rte_free(mbox_info->mbox);
+
+ return err;
+}
+
+static void
+clean_mbox_info(struct hinic3_recv_mbox *mbox_info)
+{
+ rte_free(mbox_info->buf_out);
+ rte_free(mbox_info->mbox);
+}
+
+static int
+alloc_mbox_info(struct hinic3_recv_mbox *mbox_info, int mbox_max_buf_sz)
+{
+ u16 func_idx, i;
+ int err;
+
+ for (func_idx = 0; func_idx < HINIC3_MAX_FUNCTIONS + 1; func_idx++) {
+ err = init_mbox_info(&mbox_info[func_idx], mbox_max_buf_sz);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Init mbox info failed");
+ goto init_mbox_info_err;
+ }
+ }
+
+ return 0;
+
+init_mbox_info_err:
+ for (i = 0; i < func_idx; i++)
+ clean_mbox_info(&mbox_info[i]);
+
+ return err;
+}
+
+static void
+free_mbox_info(struct hinic3_recv_mbox *mbox_info)
+{
+ u16 func_idx;
+
+ for (func_idx = 0; func_idx < HINIC3_MAX_FUNCTIONS + 1; func_idx++)
+ clean_mbox_info(&mbox_info[func_idx]);
+}
+
+static void
+prepare_send_mbox(struct hinic3_mbox *func_to_func)
+{
+ struct hinic3_send_mbox *send_mbox = &func_to_func->send_mbox;
+
+ send_mbox->data = MBOX_AREA(func_to_func->hwdev->hwif);
+}
+
+/**
+ * Allocate memory for the write-back state of the mailbox and write to
+ * register.
+ *
+ * @param[in] func_to_func
+ * Context for inter-function communication.
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+alloc_mbox_wb_status(struct hinic3_mbox *func_to_func)
+{
+ struct hinic3_send_mbox *send_mbox = &func_to_func->send_mbox;
+ struct hinic3_hwdev *hwdev = func_to_func->hwdev;
+ u32 addr_h, addr_l;
+
+ /* Reserved DMA area. */
+ send_mbox->wb_mz = hinic3_dma_zone_reserve(hwdev->eth_dev,
+ "wb_mz", 0, MBOX_WB_STATUS_LEN,
+ RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
+ if (!send_mbox->wb_mz)
+ return -ENOMEM;
+
+ send_mbox->wb_vaddr = send_mbox->wb_mz->addr;
+ send_mbox->wb_paddr = send_mbox->wb_mz->iova;
+ send_mbox->wb_status = send_mbox->wb_vaddr;
+
+ addr_h = upper_32_bits(send_mbox->wb_paddr);
+ addr_l = lower_32_bits(send_mbox->wb_paddr);
+
+ /* Write info to the register. */
+ hinic3_hwif_write_reg(hwdev->hwif, HINIC3_FUNC_CSR_MAILBOX_RESULT_H_OFF,
+ addr_h);
+ hinic3_hwif_write_reg(hwdev->hwif, HINIC3_FUNC_CSR_MAILBOX_RESULT_L_OFF,
+ addr_l);
+
+ return 0;
+}
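
The 64-bit IOVA of the write-back area has to be programmed as two 32-bit
CSR writes; for example, a physical address of 0x0000001234567890 is split
by upper_32_bits()/lower_32_bits() into addr_h = 0x00000012 and
addr_l = 0x34567890.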
+
+static void
+free_mbox_wb_status(struct hinic3_mbox *func_to_func)
+{
+ struct hinic3_send_mbox *send_mbox = &func_to_func->send_mbox;
+ struct hinic3_hwdev *hwdev = func_to_func->hwdev;
+
+ hinic3_hwif_write_reg(hwdev->hwif, HINIC3_FUNC_CSR_MAILBOX_RESULT_H_OFF,
+ 0);
+ hinic3_hwif_write_reg(hwdev->hwif, HINIC3_FUNC_CSR_MAILBOX_RESULT_L_OFF,
+ 0);
+
+ hinic3_memzone_free(send_mbox->wb_mz);
+}
+
+static int
+alloc_mbox_tlp_buffer(struct hinic3_mbox *func_to_func)
+{
+ struct hinic3_send_mbox *send_mbox = &func_to_func->send_mbox;
+ struct hinic3_hwdev *hwdev = func_to_func->hwdev;
+
+ send_mbox->sbuff_mz = hinic3_dma_zone_reserve(hwdev->eth_dev,
+ "sbuff_mz", 0, MBOX_MAX_BUF_SZ, MBOX_MAX_BUF_SZ,
+ SOCKET_ID_ANY);
+ if (!send_mbox->sbuff_mz)
+ return -ENOMEM;
+
+ send_mbox->sbuff_vaddr = send_mbox->sbuff_mz->addr;
+ send_mbox->sbuff_paddr = send_mbox->sbuff_mz->iova;
+
+ return 0;
+}
+
+static void
+free_mbox_tlp_buffer(struct hinic3_mbox *func_to_func)
+{
+ struct hinic3_send_mbox *send_mbox = &func_to_func->send_mbox;
+
+ hinic3_memzone_free(send_mbox->sbuff_mz);
+}
+
+/**
+ * Initialize function to function communication.
+ *
+ * @param[in] hwdev
+ * Pointer to hardware device structure.
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int
+hinic3_func_to_func_init(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_mbox *func_to_func;
+ int err;
+
+ func_to_func = rte_zmalloc("func_to_func", sizeof(*func_to_func), 1);
+ if (!func_to_func)
+ return -ENOMEM;
+
+ hwdev->func_to_func = func_to_func;
+ func_to_func->hwdev = hwdev;
+ (void)hinic3_mutex_init(&func_to_func->mbox_send_mutex, NULL);
+ (void)hinic3_mutex_init(&func_to_func->msg_send_mutex, NULL);
+ rte_spinlock_init(&func_to_func->mbox_lock);
+
+ /* Alloc the memory required by the mailbox. */
+ err = alloc_mbox_info(func_to_func->mbox_send, MBOX_MAX_BUF_SZ);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Alloc mem for mbox_active failed");
+ goto alloc_mbox_for_send_err;
+ }
+
+ err = alloc_mbox_info(func_to_func->mbox_resp, MBOX_MAX_BUF_SZ);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Alloc mem for mbox_passive failed");
+ goto alloc_mbox_for_resp_err;
+ }
+
+ err = alloc_mbox_tlp_buffer(func_to_func);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Alloc mbox send buffer failed");
+ goto alloc_tlp_buffer_err;
+ }
+
+ err = alloc_mbox_wb_status(func_to_func);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Alloc mbox write back status failed");
+ goto alloc_wb_status_err;
+ }
+
+ prepare_send_mbox(func_to_func);
+
+ return 0;
+
+alloc_wb_status_err:
+ free_mbox_tlp_buffer(func_to_func);
+
+alloc_tlp_buffer_err:
+ free_mbox_info(func_to_func->mbox_resp);
+
+alloc_mbox_for_resp_err:
+ free_mbox_info(func_to_func->mbox_send);
+
+alloc_mbox_for_send_err:
+ (void)hinic3_mutex_destroy(&func_to_func->msg_send_mutex);
+ (void)hinic3_mutex_destroy(&func_to_func->mbox_send_mutex);
+ rte_free(func_to_func);
+
+ return err;
+}
+
+void
+hinic3_func_to_func_free(struct hinic3_hwdev *hwdev)
+{
+ struct hinic3_mbox *func_to_func = hwdev->func_to_func;
+
+ free_mbox_wb_status(func_to_func);
+ free_mbox_tlp_buffer(func_to_func);
+ free_mbox_info(func_to_func->mbox_resp);
+ free_mbox_info(func_to_func->mbox_send);
+ (void)hinic3_mutex_destroy(&func_to_func->mbox_send_mutex);
+ (void)hinic3_mutex_destroy(&func_to_func->msg_send_mutex);
+
+ rte_free(func_to_func);
+}
diff --git a/drivers/net/hinic3/base/hinic3_mbox.h b/drivers/net/hinic3/base/hinic3_mbox.h
new file mode 100644
index 0000000000..eaf315952f
--- /dev/null
+++ b/drivers/net/hinic3/base/hinic3_mbox.h
@@ -0,0 +1,199 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Huawei Technologies Co., Ltd
+ */
+
+#ifndef _HINIC3_MBOX_H_
+#define _HINIC3_MBOX_H_
+
+#include "hinic3_mgmt.h"
+
+#define HINIC3_MBOX_PF_SEND_ERR 0x1
+#define HINIC3_MBOX_PF_BUSY_ACTIVE_FW 0x2
+#define HINIC3_MBOX_VF_CMD_ERROR 0x3
+
+#define HINIC3_MGMT_SRC_ID 0x1FFF
+
+#define HINIC3_MAX_PF_FUNCS 32
+
+/* Message header define. */
+#define HINIC3_MSG_HEADER_SRC_GLB_FUNC_IDX_SHIFT 0
+#define HINIC3_MSG_HEADER_STATUS_SHIFT 13
+#define HINIC3_MSG_HEADER_SOURCE_SHIFT 15
+#define HINIC3_MSG_HEADER_AEQ_ID_SHIFT 16
+#define HINIC3_MSG_HEADER_MSG_ID_SHIFT 18
+#define HINIC3_MSG_HEADER_CMD_SHIFT 22
+
+#define HINIC3_MSG_HEADER_MSG_LEN_SHIFT 32
+#define HINIC3_MSG_HEADER_MODULE_SHIFT 43
+#define HINIC3_MSG_HEADER_SEG_LEN_SHIFT 48
+#define HINIC3_MSG_HEADER_NO_ACK_SHIFT 54
+#define HINIC3_MSG_HEADER_DATA_TYPE_SHIFT 55
+#define HINIC3_MSG_HEADER_SEQID_SHIFT 56
+#define HINIC3_MSG_HEADER_LAST_SHIFT 62
+#define HINIC3_MSG_HEADER_DIRECTION_SHIFT 63
+
+#define HINIC3_MSG_HEADER_CMD_MASK 0x3FF
+#define HINIC3_MSG_HEADER_MSG_ID_MASK 0xF
+#define HINIC3_MSG_HEADER_AEQ_ID_MASK 0x3
+#define HINIC3_MSG_HEADER_SOURCE_MASK 0x1
+#define HINIC3_MSG_HEADER_STATUS_MASK 0x1
+#define HINIC3_MSG_HEADER_SRC_GLB_FUNC_IDX_MASK 0x1FFF
+
+#define HINIC3_MSG_HEADER_MSG_LEN_MASK 0x7FF
+#define HINIC3_MSG_HEADER_MODULE_MASK 0x1F
+#define HINIC3_MSG_HEADER_SEG_LEN_MASK 0x3F
+#define HINIC3_MSG_HEADER_NO_ACK_MASK 0x1
+#define HINIC3_MSG_HEADER_DATA_TYPE_MASK 0x1
+#define HINIC3_MSG_HEADER_SEQID_MASK 0x3F
+#define HINIC3_MSG_HEADER_LAST_MASK 0x1
+#define HINIC3_MSG_HEADER_DIRECTION_MASK 0x1
+
+#define HINIC3_MSG_HEADER_GET(val, field) \
+ (((val) >> HINIC3_MSG_HEADER_##field##_SHIFT) & \
+ HINIC3_MSG_HEADER_##field##_MASK)
+#define HINIC3_MSG_HEADER_SET(val, field) \
+ ((u64)(((u64)(val)) & HINIC3_MSG_HEADER_##field##_MASK) \
+ << HINIC3_MSG_HEADER_##field##_SHIFT)
+
+#define IS_TLP_MBX(dst_func) ((dst_func) == HINIC3_MGMT_SRC_ID)
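
The 64-bit message header is a packed bitfield: SET masks a value to its
field width and shifts it into place, and GET reverses that. A minimal
sketch of composing a header and reading fields back with the macros
defined above:

    u64 hdr = HINIC3_MSG_HEADER_SET(128, MSG_LEN) |
              HINIC3_MSG_HEADER_SET(HINIC3_MSG_DIRECT_SEND, DIRECTION) |
              HINIC3_MSG_HEADER_SET(5, MSG_ID);

    u16 msg_len = (u16)HINIC3_MSG_HEADER_GET(hdr, MSG_LEN); /* 128 */
    u8 msg_id = (u8)HINIC3_MSG_HEADER_GET(hdr, MSG_ID);     /* 5 */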
+
+enum hinic3_msg_direction_type {
+ HINIC3_MSG_DIRECT_SEND = 0,
+ HINIC3_MSG_RESPONSE = 1
+};
+
+enum hinic3_msg_segment_type { NOT_LAST_SEGMENT = 0, LAST_SEGMENT = 1 };
+
+enum hinic3_msg_ack_type { HINIC3_MSG_ACK, HINIC3_MSG_NO_ACK };
+
+enum hinic3_data_type { HINIC3_DATA_INLINE = 0, HINIC3_DATA_DMA = 1 };
+
+enum hinic3_msg_src_type { HINIC3_MSG_FROM_MGMT = 0, HINIC3_MSG_FROM_MBOX = 1 };
+
+enum hinic3_msg_aeq_type {
+ HINIC3_ASYNC_MSG_AEQ = 0,
+ /* AEQ used by dst_func or mgmt cpu to respond to mbox messages. */
+ HINIC3_MBOX_RSP_MSG_AEQ = 1,
+ /* AEQ used by mgmt cpu to respond to API cmd messages. */
+ HINIC3_MGMT_RSP_MSG_AEQ = 2
+};
+
+enum hinic3_mbox_seg_errcode {
+ MBOX_ERRCODE_NO_ERRORS = 0,
+ /* VF sends the mailbox data to the wrong destination functions. */
+ MBOX_ERRCODE_VF_TO_WRONG_FUNC = 0x100,
+ /* PPF sends the mailbox data to the wrong destination functions. */
+ MBOX_ERRCODE_PPF_TO_WRONG_FUNC = 0x200,
+ /* PF sends the mailbox data to the wrong destination functions. */
+ MBOX_ERRCODE_PF_TO_WRONG_FUNC = 0x300,
+ /* The mailbox data size is set to all zero. */
+ MBOX_ERRCODE_ZERO_DATA_SIZE = 0x400,
+ /* The sender func attribute has not been learned by CPI hardware. */
+ MBOX_ERRCODE_UNKNOWN_SRC_FUNC = 0x500,
+ /* The receiver func attr has not been learned by CPI hardware. */
+ MBOX_ERRCODE_UNKNOWN_DES_FUNC = 0x600
+};
+
+enum hinic3_mbox_func_index {
+ HINIC3_MBOX_MPU_INDEX = 0,
+ HINIC3_MBOX_PF_INDEX = 1,
+ HINIC3_MAX_FUNCTIONS = 2,
+};
+
+struct mbox_msg_info {
+ u8 msg_id;
+ u8 status; /**< Only 3 bits are used. */
+};
+
+struct hinic3_recv_mbox {
+ void *mbox;
+ u16 cmd;
+ enum hinic3_mod_type mod;
+ u16 mbox_len;
+ void *buf_out;
+ enum hinic3_msg_ack_type ack_type;
+ struct mbox_msg_info msg_info;
+ u8 seq_id;
+ RTE_ATOMIC(int32_t) msg_cnt;
+};
+
+struct hinic3_send_mbox {
+ u8 *data;
+ u64 *wb_status; /**< Write back status. */
+
+ const struct rte_memzone *wb_mz;
+ void *wb_vaddr; /**< Write back virtual address. */
+ rte_iova_t wb_paddr; /**< Write back physical address. */
+
+ const struct rte_memzone *sbuff_mz;
+ void *sbuff_vaddr;
+ rte_iova_t sbuff_paddr;
+};
+
+enum mbox_event_state {
+ EVENT_START = 0,
+ EVENT_FAIL,
+ EVENT_SUCCESS,
+ EVENT_TIMEOUT,
+ EVENT_END
+};
+
+/* Execution status of the callback function. */
+enum hinic3_mbox_cb_state {
+ HINIC3_VF_MBOX_CB_REG = 0,
+ HINIC3_VF_MBOX_CB_RUNNING,
+ HINIC3_PF_MBOX_CB_REG,
+ HINIC3_PF_MBOX_CB_RUNNING,
+ HINIC3_PPF_MBOX_CB_REG,
+ HINIC3_PPF_MBOX_CB_RUNNING,
+ HINIC3_PPF_TO_PF_MBOX_CB_REG,
+ HINIC3_PPF_TO_PF_MBOX_CB_RUNNIG
+};
+
+struct hinic3_mbox {
+ struct hinic3_hwdev *hwdev;
+
+ pthread_mutex_t mbox_send_mutex;
+ pthread_mutex_t msg_send_mutex;
+
+ struct hinic3_send_mbox send_mbox;
+
+ /* Last element for mgmt. */
+ struct hinic3_recv_mbox mbox_resp[HINIC3_MAX_FUNCTIONS + 1];
+ struct hinic3_recv_mbox mbox_send[HINIC3_MAX_FUNCTIONS + 1];
+
+ u8 send_msg_id;
+ enum mbox_event_state event_flag;
+ /* Lock for mbox event flag. */
+ rte_spinlock_t mbox_lock;
+};
+
+int hinic3_mbox_func_aeqe_handler(void *handle, u8 *header,
+ __rte_unused u8 size, void *param);
+
+int hinic3_func_to_func_init(struct hinic3_hwdev *hwdev);
+
+void hinic3_func_to_func_free(struct hinic3_hwdev *hwdev);
+
+int hinic3_send_mbox_to_mgmt(struct hinic3_hwdev *hwdev,
+ enum hinic3_mod_type mod, u16 cmd, void *buf_in,
+ u16 in_size, void *buf_out, u16 *out_size,
+ u32 timeout);
+
+void hinic3_response_mbox_to_mgmt(struct hinic3_hwdev *hwdev,
+ enum hinic3_mod_type mod, u16 cmd,
+ void *buf_in, u16 in_size, u16 msg_id);
+
+int hinic3_send_mbox_to_mgmt_no_ack(struct hinic3_hwdev *hwdev,
+ enum hinic3_mod_type mod, u16 cmd,
+ void *buf_in, u16 in_size);
+
+int hinic3_mbox_to_pf(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod,
+ u16 cmd, void *buf_in, u16 in_size, void *buf_out,
+ u16 *out_size, u32 timeout);
+
+int hinic3_mbox_to_vf(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod,
+ u16 vf_id, u16 cmd, void *buf_in, u16 in_size,
+ void *buf_out, u16 *out_size, u32 timeout);
+
+#endif /* _HINIC3_MBOX_H_ */
--
2.47.0.windows.2
^ permalink raw reply [flat|nested] 30+ messages in thread
* [RFC 12/18] net/hinic3: add device initialization
2025-04-18 9:05 [RFC 00/18] add hinic3 PMD driver Feifei Wang
` (10 preceding siblings ...)
2025-04-18 9:05 ` [RFC 11/18] net/hinic3: add a mailbox communication module Feifei Wang
@ 2025-04-18 9:05 ` Feifei Wang
2025-04-18 9:05 ` [RFC 13/18] net/hinic3: add dev ops Feifei Wang
` (8 subsequent siblings)
20 siblings, 0 replies; 30+ messages in thread
From: Feifei Wang @ 2025-04-18 9:05 UTC (permalink / raw)
To: dev; +Cc: Xin Wang, Feifei Wang, Yi Chen
From: Xin Wang <wangxin679@h-partners.com>
This patch adds the data structures and functions needed for
device initialization.
Signed-off-by: Xin Wang <wangxin679@h-partners.com>
Reviewed-by: Feifei Wang <wangfeifei40@huawei.com>
Reviewed-by: Yi Chen <chenyi221@huawei.com>
---
drivers/net/hinic3/hinic3_ethdev.c | 514 +++++++++++++++++++++++++++++
drivers/net/hinic3/hinic3_ethdev.h | 119 +++++++
2 files changed, 633 insertions(+)
create mode 100644 drivers/net/hinic3/hinic3_ethdev.c
create mode 100644 drivers/net/hinic3/hinic3_ethdev.h
diff --git a/drivers/net/hinic3/hinic3_ethdev.c b/drivers/net/hinic3/hinic3_ethdev.c
new file mode 100644
index 0000000000..c4b2f5ffe4
--- /dev/null
+++ b/drivers/net/hinic3/hinic3_ethdev.c
@@ -0,0 +1,514 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Huawei Technologies Co., Ltd
+ */
+
+#include <rte_pci.h>
+#include <rte_bus_pci.h>
+#include <rte_mbuf.h>
+#include <rte_malloc.h>
+#include <rte_mempool.h>
+#include <rte_errno.h>
+#include <rte_ether.h>
+
+#include "base/hinic3_compat.h"
+#include "base/hinic3_csr.h"
+#include "base/hinic3_wq.h"
+#include "base/hinic3_eqs.h"
+#include "base/hinic3_cmdq.h"
+#include "base/hinic3_hwdev.h"
+#include "base/hinic3_hwif.h"
+#include "base/hinic3_hw_cfg.h"
+#include "base/hinic3_hw_comm.h"
+#include "base/hinic3_nic_cfg.h"
+#include "base/hinic3_nic_event.h"
+#include "hinic3_ethdev.h"
+
+/**
+ * Interrupt handler triggered by NIC for handling specific event.
+ *
+ * @param[in] param
+ * The address of the parameter (struct rte_eth_dev *) registered before.
+ */
+static void
+hinic3_dev_interrupt_handler(void *param)
+{
+ struct rte_eth_dev *dev = param;
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+
+ if (!hinic3_get_bit(HINIC3_DEV_INTR_EN, &nic_dev->dev_status)) {
+ PMD_DRV_LOG(WARNING,
+ "Intr is disabled, ignore intr event, "
+ "dev_name: %s, port_id: %d",
+ nic_dev->dev_name, dev->data->port_id);
+ return;
+ }
+
+ /* Aeq0 msg handler. */
+ hinic3_dev_handle_aeq_event(nic_dev->hwdev, param);
+}
+
+static void
+hinic3_deinit_sw_rxtxqs(struct hinic3_nic_dev *nic_dev)
+{
+ rte_free(nic_dev->txqs);
+ nic_dev->txqs = NULL;
+
+ rte_free(nic_dev->rxqs);
+ nic_dev->rxqs = NULL;
+}
+
+/**
+ * Init the mac_vlan table in hardware.
+ *
+ * @param[in] eth_dev
+ * Pointer to ethernet device structure.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_init_mac_table(struct rte_eth_dev *eth_dev)
+{
+ struct hinic3_nic_dev *nic_dev =
+ HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(eth_dev);
+ u8 addr_bytes[RTE_ETHER_ADDR_LEN];
+ u16 func_id = 0;
+ int err = 0;
+
+ err = hinic3_get_default_mac(nic_dev->hwdev, addr_bytes,
+ RTE_ETHER_ADDR_LEN);
+ if (err)
+ return err;
+
+ rte_ether_addr_copy((struct rte_ether_addr *)addr_bytes,
+ ð_dev->data->mac_addrs[0]);
+ if (rte_is_zero_ether_addr(ð_dev->data->mac_addrs[0]))
+ rte_eth_random_addr(eth_dev->data->mac_addrs[0].addr_bytes);
+
+ func_id = hinic3_global_func_id(nic_dev->hwdev);
+ err = hinic3_set_mac(nic_dev->hwdev,
+ eth_dev->data->mac_addrs[0].addr_bytes, 0,
+ func_id);
+ if (err && err != HINIC3_PF_SET_VF_ALREADY)
+ return err;
+
+ rte_ether_addr_copy(ð_dev->data->mac_addrs[0],
+ &nic_dev->default_addr);
+
+ return 0;
+}
+
+/**
+ * Deinit mac_vlan table in hardware.
+ *
+ * @param[in] eth_dev
+ * Pointer to ethernet device structure.
+ */
+static void
+hinic3_deinit_mac_addr(struct rte_eth_dev *eth_dev)
+{
+ struct hinic3_nic_dev *nic_dev =
+ HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(eth_dev);
+ u16 func_id = 0;
+ int err;
+ int i;
+
+ func_id = hinic3_global_func_id(nic_dev->hwdev);
+
+ for (i = 0; i < HINIC3_MAX_UC_MAC_ADDRS; i++) {
+ if (rte_is_zero_ether_addr(ð_dev->data->mac_addrs[i]))
+ continue;
+
+ err = hinic3_del_mac(nic_dev->hwdev,
+ eth_dev->data->mac_addrs[i].addr_bytes, 0,
+ func_id);
+ if (err && err != HINIC3_PF_SET_VF_ALREADY)
+ PMD_DRV_LOG(ERR,
+ "Delete mac table failed, dev_name: %s",
+ eth_dev->data->name);
+
+ memset(ð_dev->data->mac_addrs[i], 0,
+ sizeof(struct rte_ether_addr));
+ }
+
+ /* Delete multicast mac addrs. */
+ hinic3_delete_mc_addr_list(nic_dev);
+}
+
+/**
+ * Check the valid CoS bitmap to determine the available CoS IDs and set
+ * the default CoS ID to the highest valid one.
+ *
+ * @param[in] hwdev
+ * Pointer to hardware device structure.
+ * @param[out] cos_id
+ * Pointer to store the default CoS ID.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_pf_get_default_cos(struct hinic3_hwdev *hwdev, u8 *cos_id)
+{
+ u8 default_cos = 0;
+ u8 valid_cos_bitmap;
+ u8 i;
+
+ valid_cos_bitmap = hwdev->cfg_mgmt->svc_cap.cos_valid_bitmap;
+ if (!valid_cos_bitmap) {
+ PMD_DRV_LOG(ERR, "PF has none cos to support");
+ return -EFAULT;
+ }
+
+ for (i = 0; i < HINIC3_COS_NUM_MAX; i++) {
+ if (valid_cos_bitmap & BIT(i))
+ /* Find max cos id as default cos. */
+ default_cos = i;
+ }
+
+ *cos_id = default_cos;
+
+ return 0;
+}
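
For example, with cos_valid_bitmap = 0x2C (CoS 2, 3 and 5 valid), the loop
leaves default_cos = 5, the highest valid CoS.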
+
+static int
+hinic3_init_default_cos(struct hinic3_nic_dev *nic_dev)
+{
+ u8 cos_id = 0;
+ int err;
+
+ if (!HINIC3_IS_VF(nic_dev->hwdev)) {
+ err = hinic3_pf_get_default_cos(nic_dev->hwdev, &cos_id);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Get PF default cos failed, err: %d",
+ err);
+ return err;
+ }
+ } else {
+ err = hinic3_vf_get_default_cos(nic_dev->hwdev, &cos_id);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Get VF default cos failed, err: %d",
+ err);
+ return err;
+ }
+ }
+
+ nic_dev->default_cos = cos_id;
+ PMD_DRV_LOG(INFO, "Default cos %d", nic_dev->default_cos);
+ return 0;
+}
+
+/**
+ * Initialize Class of Service (CoS). For PF devices, it also syncs the link
+ * status with the physical port.
+ *
+ * @param[in] nic_dev
+ * Pointer to NIC device structure.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_set_default_hw_feature(struct hinic3_nic_dev *nic_dev)
+{
+ int err;
+
+ err = hinic3_init_default_cos(nic_dev);
+ if (err)
+ return err;
+
+ if (hinic3_func_type(nic_dev->hwdev) == TYPE_VF)
+ return 0;
+
+ err = hinic3_set_link_status_follow(nic_dev->hwdev,
+ HINIC3_LINK_FOLLOW_PORT);
+ if (err == HINIC3_MGMT_CMD_UNSUPPORTED)
+ PMD_DRV_LOG(WARNING, "Don't support to set link status follow "
+ "phy port status");
+ else if (err)
+ return err;
+
+ return 0;
+}
+
+/**
+ * Initialize the network function, including hardware configuration, memory
+ * allocation for data structures, MAC address setup, and interrupt enabling.
+ * It also registers interrupt callbacks and sets default hardware features.
+ * If any step fails, appropriate cleanup is performed.
+ *
+ * @param[out] eth_dev
+ * Pointer to ethernet device structure.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_func_init(struct rte_eth_dev *eth_dev)
+{
+ struct hinic3_tcam_info *tcam_info = NULL;
+ struct hinic3_nic_dev *nic_dev = NULL;
+ struct rte_pci_device *pci_dev = NULL;
+ int err;
+
+ pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+
+ /* EAL is secondary and eth_dev is already created. */
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+ PMD_DRV_LOG(INFO, "Initialize %s in secondary process",
+ eth_dev->data->name);
+
+ return 0;
+ }
+
+ nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(eth_dev);
+ memset(nic_dev, 0, sizeof(*nic_dev));
+ (void)snprintf(nic_dev->dev_name, sizeof(nic_dev->dev_name),
+ "dbdf-%.4x:%.2x:%.2x.%x", pci_dev->addr.domain,
+ pci_dev->addr.bus, pci_dev->addr.devid,
+ pci_dev->addr.function);
+
+ /* Alloc mac_addrs. */
+ eth_dev->data->mac_addrs = rte_zmalloc("hinic3_mac",
+ HINIC3_MAX_UC_MAC_ADDRS * sizeof(struct rte_ether_addr), 0);
+ if (!eth_dev->data->mac_addrs) {
+ PMD_DRV_LOG(ERR,
+ "Allocate %zx bytes to store MAC addresses "
+ "failed, dev_name: %s",
+ HINIC3_MAX_UC_MAC_ADDRS *
+ sizeof(struct rte_ether_addr),
+ eth_dev->data->name);
+ err = -ENOMEM;
+ goto alloc_eth_addr_fail;
+ }
+
+ nic_dev->mc_list = rte_zmalloc("hinic3_mc",
+ HINIC3_MAX_MC_MAC_ADDRS * sizeof(struct rte_ether_addr), 0);
+ if (!nic_dev->mc_list) {
+ PMD_DRV_LOG(ERR,
+ "Allocate %zx bytes to store multicast "
+ "addresses failed, dev_name: %s",
+ HINIC3_MAX_MC_MAC_ADDRS *
+ sizeof(struct rte_ether_addr),
+ eth_dev->data->name);
+ err = -ENOMEM;
+ goto alloc_mc_list_fail;
+ }
+
+ /* Create hardware device. */
+ nic_dev->hwdev = rte_zmalloc("hinic3_hwdev", sizeof(*nic_dev->hwdev),
+ RTE_CACHE_LINE_SIZE);
+ if (!nic_dev->hwdev) {
+ PMD_DRV_LOG(ERR, "Allocate hwdev memory failed, dev_name: %s",
+ eth_dev->data->name);
+ err = -ENOMEM;
+ goto alloc_hwdev_mem_fail;
+ }
+ nic_dev->hwdev->pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ nic_dev->hwdev->dev_handle = nic_dev;
+ nic_dev->hwdev->eth_dev = eth_dev;
+ nic_dev->hwdev->port_id = eth_dev->data->port_id;
+
+ err = hinic3_init_hwdev(nic_dev->hwdev);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Init chip hwdev failed, dev_name: %s",
+ eth_dev->data->name);
+ goto init_hwdev_fail;
+ }
+
+ nic_dev->max_sqs = hinic3_func_max_sqs(nic_dev->hwdev);
+ nic_dev->max_rqs = hinic3_func_max_rqs(nic_dev->hwdev);
+
+ err = hinic3_init_nic_hwdev(nic_dev->hwdev);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Init nic hwdev failed, dev_name: %s",
+ eth_dev->data->name);
+ goto init_nic_hwdev_fail;
+ }
+
+ err = hinic3_get_feature_from_hw(nic_dev->hwdev, &nic_dev->feature_cap,
+ 1);
+ if (err) {
+ PMD_DRV_LOG(ERR,
+ "Get nic feature from hardware failed, dev_name: %s",
+ eth_dev->data->name);
+ goto get_cap_fail;
+ }
+
+ err = hinic3_init_sw_rxtxqs(nic_dev);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Init sw rxqs or txqs failed, dev_name: %s",
+ eth_dev->data->name);
+ goto init_sw_rxtxqs_fail;
+ }
+
+ err = hinic3_init_mac_table(eth_dev);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Init mac table failed, dev_name: %s",
+ eth_dev->data->name);
+ goto init_mac_table_fail;
+ }
+
+ /* Set hardware feature to default status. */
+ err = hinic3_set_default_hw_feature(nic_dev);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Set hw default features failed, dev_name: %s",
+ eth_dev->data->name);
+ goto set_default_feature_fail;
+ }
+
+ /* Register callback func to eal lib. */
+ err = rte_intr_callback_register(PCI_DEV_TO_INTR_HANDLE(pci_dev),
+ hinic3_dev_interrupt_handler,
+ (void *)eth_dev);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Register intr callback failed, dev_name: %s",
+ eth_dev->data->name);
+ goto reg_intr_cb_fail;
+ }
+
+ /* Enable uio/vfio intr/eventfd mapping. */
+ err = rte_intr_enable(PCI_DEV_TO_INTR_HANDLE(pci_dev));
+ if (err) {
+ PMD_DRV_LOG(ERR, "Enable rte interrupt failed, dev_name: %s",
+ eth_dev->data->name);
+ goto enable_intr_fail;
+ }
+ tcam_info = &nic_dev->tcam;
+ memset(tcam_info, 0, sizeof(struct hinic3_tcam_info));
+ TAILQ_INIT(&tcam_info->tcam_list);
+ TAILQ_INIT(&tcam_info->tcam_dynamic_info.tcam_dynamic_list);
+ TAILQ_INIT(&nic_dev->filter_ethertype_list);
+ TAILQ_INIT(&nic_dev->filter_fdir_rule_list);
+
+ hinic3_mutex_init(&nic_dev->rx_mode_mutex, NULL);
+
+ hinic3_set_bit(HINIC3_DEV_INTR_EN, &nic_dev->dev_status);
+
+ hinic3_set_bit(HINIC3_DEV_INIT, &nic_dev->dev_status);
+ PMD_DRV_LOG(INFO, "Initialize %s in primary succeed",
+ eth_dev->data->name);
+
+ /* Queue xstats are filled automatically by the ethdev layer. */
+ eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
+
+ return 0;
+
+enable_intr_fail:
+ (void)rte_intr_callback_unregister(PCI_DEV_TO_INTR_HANDLE(pci_dev),
+ hinic3_dev_interrupt_handler,
+ (void *)eth_dev);
+
+reg_intr_cb_fail:
+set_default_feature_fail:
+ hinic3_deinit_mac_addr(eth_dev);
+
+init_mac_table_fail:
+ hinic3_deinit_sw_rxtxqs(nic_dev);
+
+init_sw_rxtxqs_fail:
+ hinic3_free_nic_hwdev(nic_dev->hwdev);
+
+get_cap_fail:
+init_nic_hwdev_fail:
+ hinic3_free_hwdev(nic_dev->hwdev);
+ eth_dev->dev_ops = NULL;
+ eth_dev->rx_queue_count = NULL;
+ eth_dev->rx_descriptor_status = NULL;
+ eth_dev->tx_descriptor_status = NULL;
+
+init_hwdev_fail:
+ rte_free(nic_dev->hwdev);
+ nic_dev->hwdev = NULL;
+
+alloc_hwdev_mem_fail:
+ rte_free(nic_dev->mc_list);
+ nic_dev->mc_list = NULL;
+
+alloc_mc_list_fail:
+ rte_free(eth_dev->data->mac_addrs);
+ eth_dev->data->mac_addrs = NULL;
+
+alloc_eth_addr_fail:
+ PMD_DRV_LOG(ERR, "Initialize %s in primary failed",
+ eth_dev->data->name);
+ return err;
+}
+
+static int
+hinic3_dev_init(struct rte_eth_dev *eth_dev)
+{
+ struct rte_pci_device *pci_dev;
+
+ pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+
+ PMD_DRV_LOG(INFO, "Initializing %.4x:%.2x:%.2x.%x in %s process",
+ pci_dev->addr.domain, pci_dev->addr.bus,
+ pci_dev->addr.devid, pci_dev->addr.function,
+ (rte_eal_process_type() == RTE_PROC_PRIMARY) ? "primary"
+ : "secondary");
+
+ PMD_DRV_LOG(INFO, "Network Interface pmd driver version: %s",
+ HINIC3_PMD_DRV_VERSION);
+
+ return hinic3_func_init(eth_dev);
+}
+
+static int
+hinic3_dev_uninit(struct rte_eth_dev *dev)
+{
+ struct hinic3_nic_dev *nic_dev;
+
+ nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ hinic3_clear_bit(HINIC3_DEV_INIT, &nic_dev->dev_status);
+
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return 0;
+
+ return hinic3_dev_close(dev);
+}
+
+static const struct rte_pci_id pci_id_hinic3_map[] = {
+#ifdef CONFIG_SP_VID_DID
+ {RTE_PCI_DEVICE(PCI_VENDOR_ID_SPNIC, HINIC3_DEV_ID_STANDARD)},
+ {RTE_PCI_DEVICE(PCI_VENDOR_ID_SPNIC, HINIC3_DEV_ID_VF)},
+#else
+ {RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_STANDARD)},
+ {RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_VF)},
+#endif
+
+ {.vendor_id = 0},
+};
+
+static int
+hinic3_pci_probe(__rte_unused struct rte_pci_driver *pci_drv,
+ struct rte_pci_device *pci_dev)
+{
+ return rte_eth_dev_pci_generic_probe(pci_dev,
+ sizeof(struct hinic3_nic_dev), hinic3_dev_init);
+}
+
+static int
+hinic3_pci_remove(struct rte_pci_device *pci_dev)
+{
+ return rte_eth_dev_pci_generic_remove(pci_dev, hinic3_dev_uninit);
+}
+
+static struct rte_pci_driver rte_hinic3_pmd = {
+ .id_table = pci_id_hinic3_map,
+ .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC,
+ .probe = hinic3_pci_probe,
+ .remove = hinic3_pci_remove,
+};
+
+RTE_PMD_REGISTER_PCI(net_hinic3, rte_hinic3_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(net_hinic3, pci_id_hinic3_map);
+
+RTE_INIT(hinic3_init_log)
+{
+ hinic3_logtype = rte_log_register("pmd.net.hinic3");
+ if (hinic3_logtype >= 0)
+ rte_log_set_level(hinic3_logtype, RTE_LOG_INFO);
+}
diff --git a/drivers/net/hinic3/hinic3_ethdev.h b/drivers/net/hinic3/hinic3_ethdev.h
new file mode 100644
index 0000000000..a69cf972e7
--- /dev/null
+++ b/drivers/net/hinic3/hinic3_ethdev.h
@@ -0,0 +1,119 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Huawei Technologies Co., Ltd
+ */
+
+#ifndef _HINIC3_ETHDEV_H_
+#define _HINIC3_ETHDEV_H_
+
+#include <rte_ethdev.h>
+#include <rte_ethdev_core.h>
+
+#define HINIC3_PMD_DRV_VERSION "B106"
+
+#define PCI_DEV_TO_INTR_HANDLE(pci_dev) ((pci_dev)->intr_handle)
+
+#define HINIC3_PKT_RX_L4_CKSUM_BAD RTE_MBUF_F_RX_L4_CKSUM_BAD
+#define HINIC3_PKT_RX_IP_CKSUM_BAD RTE_MBUF_F_RX_IP_CKSUM_BAD
+#define HINIC3_PKT_RX_IP_CKSUM_UNKNOWN RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN
+#define HINIC3_PKT_RX_L4_CKSUM_GOOD RTE_MBUF_F_RX_L4_CKSUM_GOOD
+#define HINIC3_PKT_RX_IP_CKSUM_GOOD RTE_MBUF_F_RX_IP_CKSUM_GOOD
+#define HINIC3_PKT_TX_TCP_SEG RTE_MBUF_F_TX_TCP_SEG
+#define HINIC3_PKT_TX_UDP_CKSUM RTE_MBUF_F_TX_UDP_CKSUM
+#define HINIC3_PKT_TX_TCP_CKSUM RTE_MBUF_F_TX_TCP_CKSUM
+#define HINIC3_PKT_TX_IP_CKSUM RTE_MBUF_F_TX_IP_CKSUM
+#define HINIC3_PKT_TX_VLAN_PKT RTE_MBUF_F_TX_VLAN
+#define HINIC3_PKT_TX_L4_MASK RTE_MBUF_F_TX_L4_MASK
+#define HINIC3_PKT_TX_SCTP_CKSUM RTE_MBUF_F_TX_SCTP_CKSUM
+#define HINIC3_PKT_TX_IPV6 RTE_MBUF_F_TX_IPV6
+#define HINIC3_PKT_TX_IPV4 RTE_MBUF_F_TX_IPV4
+#define HINIC3_PKT_RX_VLAN RTE_MBUF_F_RX_VLAN
+#define HINIC3_PKT_RX_VLAN_STRIPPED RTE_MBUF_F_RX_VLAN_STRIPPED
+#define HINIC3_PKT_RX_RSS_HASH RTE_MBUF_F_RX_RSS_HASH
+#define HINIC3_PKT_TX_TUNNEL_MASK RTE_MBUF_F_TX_TUNNEL_MASK
+#define HINIC3_PKT_TX_TUNNEL_VXLAN RTE_MBUF_F_TX_TUNNEL_VXLAN
+#define HINIC3_PKT_TX_OUTER_IP_CKSUM RTE_MBUF_F_TX_OUTER_IP_CKSUM
+#define HINIC3_PKT_TX_OUTER_IPV6 RTE_MBUF_F_TX_OUTER_IPV6
+#define HINIC3_PKT_RX_LRO RTE_MBUF_F_RX_LRO
+#define HINIC3_PKT_TX_L4_NO_CKSUM RTE_MBUF_F_TX_L4_NO_CKSUM
+
+#define HINCI3_CPY_MEMPOOL_NAME "cpy_mempool"
+/* Mbuf pool for copy invalid mbuf segs. */
+#define HINIC3_COPY_MEMPOOL_DEPTH 1024
+#define HINIC3_COPY_MEMPOOL_CACHE 128
+#define HINIC3_COPY_MBUF_SIZE 4096
+
+#define HINIC3_DEV_NAME_LEN 32
+#define DEV_STOP_DELAY_MS 100
+#define DEV_START_DELAY_MS 100
+
+#define HINIC3_UINT32_BIT_SIZE (CHAR_BIT * sizeof(uint32_t))
+#define HINIC3_VFTA_SIZE (4096 / HINIC3_UINT32_BIT_SIZE)
+#define HINIC3_MAX_QUEUE_NUM 64
+
+#define HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev) \
+ ((struct hinic3_nic_dev *)(dev)->data->dev_private)
+
+enum hinic3_dev_status {
+ HINIC3_DEV_INIT,
+ HINIC3_DEV_CLOSE,
+ HINIC3_DEV_START,
+ HINIC3_DEV_INTR_EN
+};
+
+enum hinic3_tx_cvlan_type {
+ HINIC3_TX_TPID0,
+};
+
+enum nic_feature_cap {
+ NIC_F_CSUM = BIT(0),
+ NIC_F_SCTP_CRC = BIT(1),
+ NIC_F_TSO = BIT(2),
+ NIC_F_LRO = BIT(3),
+ NIC_F_UFO = BIT(4),
+ NIC_F_RSS = BIT(5),
+ NIC_F_RX_VLAN_FILTER = BIT(6),
+ NIC_F_RX_VLAN_STRIP = BIT(7),
+ NIC_F_TX_VLAN_INSERT = BIT(8),
+ NIC_F_VXLAN_OFFLOAD = BIT(9),
+ NIC_F_IPSEC_OFFLOAD = BIT(10),
+ NIC_F_FDIR = BIT(11),
+ NIC_F_PROMISC = BIT(12),
+ NIC_F_ALLMULTI = BIT(13),
+};
+
+#define DEFAULT_DRV_FEATURE 0x3FFF
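
Each feature bit is meant to be checked against the capabilities read from
hardware (the feature_cap field of struct hinic3_nic_dev below) before the
corresponding offload is enabled; DEFAULT_DRV_FEATURE (0x3FFF) simply sets
all 14 bits. A sketch of such a check (the helper name is illustrative):

    static inline bool
    nic_feature_enabled(const struct hinic3_nic_dev *nic_dev,
                        enum nic_feature_cap bit)
    {
        return (nic_dev->feature_cap & (u64)bit) != 0;
    }

    /* e.g. only advertise TSO if nic_feature_enabled(nic_dev, NIC_F_TSO) */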
+
+struct hinic3_nic_dev {
+ struct hinic3_hwdev *hwdev; /**< Hardware device. */
+ struct hinic3_txq **txqs;
+ struct hinic3_rxq **rxqs;
+ struct rte_mempool *cpy_mpool;
+
+ u16 num_sqs;
+ u16 num_rqs;
+ u16 max_sqs;
+ u16 max_rqs;
+
+ u16 rx_buff_len;
+ u16 mtu_size;
+
+ u32 rx_mode;
+ u8 rx_queue_list[HINIC3_MAX_QUEUE_NUM];
+ rte_spinlock_t queue_list_lock;
+
+ pthread_mutex_t rx_mode_mutex;
+
+ u32 default_cos;
+ u32 rx_csum_en;
+
+ unsigned long dev_status;
+
+ struct rte_ether_addr default_addr;
+ struct rte_ether_addr *mc_list;
+
+ char dev_name[HINIC3_DEV_NAME_LEN];
+ u64 feature_cap;
+ u32 vfta[HINIC3_VFTA_SIZE]; /**< VLAN bitmap. */
+};
+
+#endif /* _HINIC3_ETHDEV_H_ */
--
2.47.0.windows.2
^ permalink raw reply [flat|nested] 30+ messages in thread
* [RFC 13/18] net/hinic3: add dev ops
2025-04-18 9:05 [RFC 00/18] add hinic3 PMD driver Feifei Wang
` (11 preceding siblings ...)
2025-04-18 9:05 ` [RFC 12/18] net/hinic3: add device initialization Feifei Wang
@ 2025-04-18 9:05 ` Feifei Wang
2025-04-18 9:06 ` [RFC 14/18] net/hinic3: add Rx/Tx functions Feifei Wang
` (7 subsequent siblings)
20 siblings, 0 replies; 30+ messages in thread
From: Feifei Wang @ 2025-04-18 9:05 UTC (permalink / raw)
To: dev; +Cc: Feifei Wang, Xin Wang, Yi Chen
From: Feifei Wang <wangfeifei40@huawei.com>
Add the device operations (dev ops) related functions.
Signed-off-by: Feifei Wang <wangfeifei40@huawei.com>
Signed-off-by: Xin Wang <wangxin679@h-partners.com>
Reviewed-by: Yi Chen <chenyi221@huawei.com>
---
drivers/net/hinic3/hinic3_ethdev.c | 2918 +++++++++++++++++++++++++++-
drivers/net/hinic3/hinic3_nic_io.c | 827 ++++++++
drivers/net/hinic3/hinic3_nic_io.h | 169 ++
drivers/net/hinic3/hinic3_rx.c | 811 ++++++++
drivers/net/hinic3/hinic3_rx.h | 356 ++++
drivers/net/hinic3/hinic3_tx.c | 274 +++
drivers/net/hinic3/hinic3_tx.h | 314 +++
7 files changed, 5652 insertions(+), 17 deletions(-)
create mode 100644 drivers/net/hinic3/hinic3_nic_io.c
create mode 100644 drivers/net/hinic3/hinic3_nic_io.h
create mode 100644 drivers/net/hinic3/hinic3_rx.c
create mode 100644 drivers/net/hinic3/hinic3_rx.h
create mode 100644 drivers/net/hinic3/hinic3_tx.c
create mode 100644 drivers/net/hinic3/hinic3_tx.h
diff --git a/drivers/net/hinic3/hinic3_ethdev.c b/drivers/net/hinic3/hinic3_ethdev.c
index c4b2f5ffe4..de380dddbb 100644
--- a/drivers/net/hinic3/hinic3_ethdev.c
+++ b/drivers/net/hinic3/hinic3_ethdev.c
@@ -21,42 +21,2917 @@
#include "base/hinic3_hw_comm.h"
#include "base/hinic3_nic_cfg.h"
#include "base/hinic3_nic_event.h"
+#include "hinic3_pmd_nic_io.h"
+#include "hinic3_pmd_tx.h"
+#include "hinic3_pmd_rx.h"
#include "hinic3_ethdev.h"
+#define HINIC3_MIN_RX_BUF_SIZE 1024
+
+#define HINIC3_DEFAULT_BURST_SIZE 32
+#define HINIC3_DEFAULT_NB_QUEUES 1
+#define HINIC3_DEFAULT_RING_SIZE 1024
+#define HINIC3_MAX_LRO_SIZE 65536
+
+#define HINIC3_DEFAULT_RX_FREE_THRESH 32
+#define HINIC3_DEFAULT_TX_FREE_THRESH 32
+
+#define HINIC3_RX_WAIT_CYCLE_THRESH 500
+
+/**
+ * Get the 32-bit VFTA bit mask for the lower 5 bits of the VLAN ID.
+ *
+ * The VLAN ID is a 12-bit number. The VFTA is effectively a 4096-bit array
+ * stored as 128 32-bit words (2^5 = 32). The lower 5 bits of the VLAN ID
+ * select the bit within a word; the upper 7 bits select the VFTA word index.
+ */
+#define HINIC3_VFTA_BIT(vlan_id) (1 << ((vlan_id) & 0x1F))
+/**
+ * Get the VFTA index from the upper 7 bits of the VLAN ID.
+ */
+#define HINIC3_VFTA_IDX(vlan_id) ((vlan_id) >> 5)
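
Together the two macros locate a VLAN's bit inside the 128-word bitmap.
A sketch of enabling and testing VLAN 1000 (vfta is the
u32 vfta[HINIC3_VFTA_SIZE] array kept in struct hinic3_nic_dev):

    u16 vlan_id = 1000; /* idx = 1000 >> 5 = 31, bit = 1000 & 0x1F = 8 */

    nic_dev->vfta[HINIC3_VFTA_IDX(vlan_id)] |= HINIC3_VFTA_BIT(vlan_id);

    bool on = !!(nic_dev->vfta[HINIC3_VFTA_IDX(vlan_id)] &
                 HINIC3_VFTA_BIT(vlan_id));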
+
+#define HINIC3_LRO_DEFAULT_TIME_LIMIT 16
+#define HINIC3_LRO_UNIT_WQE_SIZE 1024 /**< Bytes. */
+
+#define HINIC3_MAX_RX_PKT_LEN(rxmod) ((rxmod).mtu)
+int hinic3_logtype; /**< Driver-specific log messages type. */
+
+/**
+ * The different receive modes for the NIC.
+ *
+ * The receive modes are represented as bit flags that control how the
+ * NIC handles various types of network traffic.
+ */
+enum hinic3_rx_mod {
+ /* Enable unicast receive mode. */
+ HINIC3_RX_MODE_UC = 1 << 0,
+ /* Enable multicast receive mode. */
+ HINIC3_RX_MODE_MC = 1 << 1,
+ /* Enable broadcast receive mode. */
+ HINIC3_RX_MODE_BC = 1 << 2,
+ /* Enable receive mode for all multicast addresses. */
+ HINIC3_RX_MODE_MC_ALL = 1 << 3,
+ /* Enable promiscuous mode, receiving all packets. */
+ HINIC3_RX_MODE_PROMISC = 1 << 4,
+};
+
+#define HINIC3_DEFAULT_RX_MODE \
+ (HINIC3_RX_MODE_UC | HINIC3_RX_MODE_MC | HINIC3_RX_MODE_BC)
+
+struct hinic3_xstats_name_off {
+ char name[RTE_ETH_XSTATS_NAME_SIZE];
+ u32 offset;
+};
+
+#define HINIC3_FUNC_STAT(_stat_item) \
+ { \
+ .name = #_stat_item, \
+ .offset = offsetof(struct hinic3_vport_stats, _stat_item), \
+ }
+
+static const struct hinic3_xstats_name_off hinic3_vport_stats_strings[] = {
+ HINIC3_FUNC_STAT(tx_unicast_pkts_vport),
+ HINIC3_FUNC_STAT(tx_unicast_bytes_vport),
+ HINIC3_FUNC_STAT(tx_multicast_pkts_vport),
+ HINIC3_FUNC_STAT(tx_multicast_bytes_vport),
+ HINIC3_FUNC_STAT(tx_broadcast_pkts_vport),
+ HINIC3_FUNC_STAT(tx_broadcast_bytes_vport),
+
+ HINIC3_FUNC_STAT(rx_unicast_pkts_vport),
+ HINIC3_FUNC_STAT(rx_unicast_bytes_vport),
+ HINIC3_FUNC_STAT(rx_multicast_pkts_vport),
+ HINIC3_FUNC_STAT(rx_multicast_bytes_vport),
+ HINIC3_FUNC_STAT(rx_broadcast_pkts_vport),
+ HINIC3_FUNC_STAT(rx_broadcast_bytes_vport),
+
+ HINIC3_FUNC_STAT(tx_discard_vport),
+ HINIC3_FUNC_STAT(rx_discard_vport),
+ HINIC3_FUNC_STAT(tx_err_vport),
+ HINIC3_FUNC_STAT(rx_err_vport),
+};
+
+#define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))
+
+#define HINIC3_VPORT_XSTATS_NUM ARRAY_SIZE(hinic3_vport_stats_strings)
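
The name/offset tables let one generic loop service all xstats: the i-th
value is read by adding the recorded offset to the base of the filled
stats structure. A sketch, assuming the counters in
struct hinic3_vport_stats are 64-bit:

    static u64
    vport_stat_value(const struct hinic3_vport_stats *stats, u32 i)
    {
        return *(const u64 *)((const u8 *)stats +
                              hinic3_vport_stats_strings[i].offset);
    }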
+
+#define HINIC3_PORT_STAT(_stat_item) \
+ { \
+ .name = #_stat_item, \
+ .offset = offsetof(struct mag_phy_port_stats, _stat_item), \
+ }
+
+static const struct hinic3_xstats_name_off hinic3_phyport_stats_strings[] = {
+ HINIC3_PORT_STAT(mac_tx_fragment_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_undersize_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_undermin_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_64_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_65_127_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_128_255_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_256_511_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_512_1023_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_1024_1518_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_1519_2047_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_2048_4095_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_4096_8191_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_8192_9216_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_9217_12287_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_12288_16383_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_1519_max_bad_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_1519_max_good_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_oversize_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_jabber_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_bad_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_bad_oct_num),
+ HINIC3_PORT_STAT(mac_tx_good_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_good_oct_num),
+ HINIC3_PORT_STAT(mac_tx_total_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_total_oct_num),
+ HINIC3_PORT_STAT(mac_tx_uni_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_multi_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_broad_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_pause_num),
+ HINIC3_PORT_STAT(mac_tx_pfc_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_pfc_pri0_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_pfc_pri1_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_pfc_pri2_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_pfc_pri3_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_pfc_pri4_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_pfc_pri5_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_pfc_pri6_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_pfc_pri7_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_control_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_err_all_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_from_app_good_pkt_num),
+ HINIC3_PORT_STAT(mac_tx_from_app_bad_pkt_num),
+
+ HINIC3_PORT_STAT(mac_rx_fragment_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_undersize_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_undermin_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_64_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_65_127_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_128_255_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_256_511_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_512_1023_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_1024_1518_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_1519_2047_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_2048_4095_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_4096_8191_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_8192_9216_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_9217_12287_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_12288_16383_oct_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_1519_max_bad_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_1519_max_good_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_oversize_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_jabber_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_bad_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_bad_oct_num),
+ HINIC3_PORT_STAT(mac_rx_good_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_good_oct_num),
+ HINIC3_PORT_STAT(mac_rx_total_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_total_oct_num),
+ HINIC3_PORT_STAT(mac_rx_uni_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_multi_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_broad_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_pause_num),
+ HINIC3_PORT_STAT(mac_rx_pfc_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_pfc_pri0_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_pfc_pri1_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_pfc_pri2_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_pfc_pri3_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_pfc_pri4_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_pfc_pri5_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_pfc_pri6_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_pfc_pri7_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_control_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_sym_err_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_fcs_err_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_send_app_good_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_send_app_bad_pkt_num),
+ HINIC3_PORT_STAT(mac_rx_unfilter_pkt_num),
+};
+
+#define HINIC3_PHYPORT_XSTATS_NUM ARRAY_SIZE(hinic3_phyport_stats_strings)
+
+#define HINIC3_RXQ_STAT(_stat_item) \
+ { \
+ .name = #_stat_item, \
+ .offset = offsetof(struct hinic3_rxq_stats, _stat_item), \
+ }
+
+/**
+ * The name and offset field of RXQ statistic items.
+ *
+ * The inclusion of additional statistics depends on the compilation flags:
+ * - `HINIC3_XSTAT_RXBUF_INFO` enables buffer-related stats.
+ * - `HINIC3_XSTAT_PROF_RX` enables performance timing stats.
+ * - `HINIC3_XSTAT_MBUF_USE` enables memory buffer usage stats.
+ */
+static const struct hinic3_xstats_name_off hinic3_rxq_stats_strings[] = {
+ HINIC3_RXQ_STAT(rx_nombuf),
+ HINIC3_RXQ_STAT(burst_pkts),
+ HINIC3_RXQ_STAT(errors),
+ HINIC3_RXQ_STAT(csum_errors),
+ HINIC3_RXQ_STAT(other_errors),
+ HINIC3_RXQ_STAT(empty),
+
+#ifdef HINIC3_XSTAT_RXBUF_INFO
+ HINIC3_RXQ_STAT(rx_mbuf),
+ HINIC3_RXQ_STAT(rx_avail),
+ HINIC3_RXQ_STAT(rx_hole),
+#endif
+
+#ifdef HINIC3_XSTAT_PROF_RX
+ HINIC3_RXQ_STAT(app_tsc),
+ HINIC3_RXQ_STAT(pmd_tsc),
+#endif
+
+#ifdef HINIC3_XSTAT_MBUF_USE
+ HINIC3_RXQ_STAT(rx_alloc_mbuf_bytes),
+ HINIC3_RXQ_STAT(rx_free_mbuf_bytes),
+ HINIC3_RXQ_STAT(rx_left_mbuf_bytes),
+#endif
+};
+
+#define HINIC3_RXQ_XSTATS_NUM ARRAY_SIZE(hinic3_rxq_stats_strings)
+
+#define HINIC3_TXQ_STAT(_stat_item) \
+ { \
+ .name = #_stat_item, \
+ .offset = offsetof(struct hinic3_txq_stats, _stat_item), \
+ }
+
+/**
+ * The name and offset field of TXQ statistic items.
+ *
+ * The inclusion of additional statistics depends on the compilation flags:
+ * - `HINIC3_XSTAT_PROF_TX` enables performance timing stats.
+ * - `HINIC3_XSTAT_MBUF_USE` enables memory buffer usage stats.
+ */
+static const struct hinic3_xstats_name_off hinic3_txq_stats_strings[] = {
+ HINIC3_TXQ_STAT(tx_busy),
+ HINIC3_TXQ_STAT(offload_errors),
+ HINIC3_TXQ_STAT(burst_pkts),
+ HINIC3_TXQ_STAT(sge_len0),
+ HINIC3_TXQ_STAT(mbuf_null),
+
+#ifdef HINIC3_XSTAT_PROF_TX
+ HINIC3_TXQ_STAT(app_tsc),
+ HINIC3_TXQ_STAT(pmd_tsc),
+#endif
+
+#ifdef HINIC3_XSTAT_MBUF_USE
+ HINIC3_TXQ_STAT(tx_left_mbuf_bytes),
+#endif
+};
+
+#define HINIC3_TXQ_XSTATS_NUM ARRAY_SIZE(hinic3_txq_stats_strings)
+
+static int
+hinic3_xstats_calc_num(struct hinic3_nic_dev *nic_dev)
+{
+ if (HINIC3_IS_VF(nic_dev->hwdev)) {
+ return (HINIC3_VPORT_XSTATS_NUM +
+ HINIC3_RXQ_XSTATS_NUM * nic_dev->num_rqs +
+ HINIC3_TXQ_XSTATS_NUM * nic_dev->num_sqs);
+ } else {
+ return (HINIC3_VPORT_XSTATS_NUM + HINIC3_PHYPORT_XSTATS_NUM +
+ HINIC3_RXQ_XSTATS_NUM * nic_dev->num_rqs +
+ HINIC3_TXQ_XSTATS_NUM * nic_dev->num_sqs);
+ }
+}
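+/*
+ * Worked example (illustrative numbers): a PF with 4 RX and 4 TX queues
+ * exposes HINIC3_VPORT_XSTATS_NUM + HINIC3_PHYPORT_XSTATS_NUM +
+ * 4 * HINIC3_RXQ_XSTATS_NUM + 4 * HINIC3_TXQ_XSTATS_NUM entries, while a
+ * VF omits the physical-port block, presumably because the MAC-level
+ * counters belong to the PF.
+ */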
+
+#define HINIC3_MAX_QUEUE_DEPTH 16384
+#define HINIC3_MIN_QUEUE_DEPTH 128
+#define HINIC3_TXD_ALIGN 1
+#define HINIC3_RXD_ALIGN 1
+
+static const struct rte_eth_desc_lim hinic3_rx_desc_lim = {
+ .nb_max = HINIC3_MAX_QUEUE_DEPTH,
+ .nb_min = HINIC3_MIN_QUEUE_DEPTH,
+ .nb_align = HINIC3_RXD_ALIGN,
+};
+
+static const struct rte_eth_desc_lim hinic3_tx_desc_lim = {
+ .nb_max = HINIC3_MAX_QUEUE_DEPTH,
+ .nb_min = HINIC3_MIN_QUEUE_DEPTH,
+ .nb_align = HINIC3_TXD_ALIGN,
+};
+
+static void hinic3_deinit_mac_addr(struct rte_eth_dev *eth_dev);
+
+static int hinic3_copy_mempool_init(struct hinic3_nic_dev *nic_dev);
+
+static void hinic3_copy_mempool_uninit(struct hinic3_nic_dev *nic_dev);
+
+/**
+ * Interrupt handler triggered by the NIC for handling a specific event.
+ *
+ * @param[in] param
+ * The address of the parameter (struct rte_eth_dev *) registered before.
+ */
+static void
+hinic3_dev_interrupt_handler(void *param)
+{
+ struct rte_eth_dev *dev = param;
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+
+ if (!hinic3_get_bit(HINIC3_DEV_INTR_EN, &nic_dev->dev_status)) {
+ PMD_DRV_LOG(WARNING,
+ "Intr is disabled, ignore intr event, "
+ "dev_name: %s, port_id: %d",
+ nic_dev->dev_name, dev->data->port_id);
+ return;
+ }
+
+ /* Aeq0 msg handler. */
+ hinic3_dev_handle_aeq_event(nic_dev->hwdev, param);
+}
+
+/**
+ * Configure TX/RX queues, including queue number, MTU size and RSS.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_dev_configure(struct rte_eth_dev *dev)
+{
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+
+ nic_dev->num_sqs = dev->data->nb_tx_queues;
+ nic_dev->num_rqs = dev->data->nb_rx_queues;
+
+ if (nic_dev->num_sqs > nic_dev->max_sqs ||
+ nic_dev->num_rqs > nic_dev->max_rqs) {
+ PMD_DRV_LOG(ERR,
+ "num_sqs: %d or num_rqs: %d larger than "
+ "max_sqs: %d or max_rqs: %d",
+ nic_dev->num_sqs, nic_dev->num_rqs,
+ nic_dev->max_sqs, nic_dev->max_rqs);
+ return -EINVAL;
+ }
+
+ /* The valid frame length range is 384 to 9600 bytes. */
+ if (HINIC3_MAX_RX_PKT_LEN(dev->data->dev_conf.rxmode) <
+ HINIC3_MIN_FRAME_SIZE ||
+ HINIC3_MAX_RX_PKT_LEN(dev->data->dev_conf.rxmode) >
+ HINIC3_MAX_JUMBO_FRAME_SIZE) {
+ PMD_DRV_LOG(ERR,
+ "Max rx pkt len out of range, max_rx_pkt_len: %d, "
+ "expect between %d and %d",
+ HINIC3_MAX_RX_PKT_LEN(dev->data->dev_conf.rxmode),
+ HINIC3_MIN_FRAME_SIZE, HINIC3_MAX_JUMBO_FRAME_SIZE);
+ return -EINVAL;
+ }
+ nic_dev->mtu_size =
+ (u16)HINIC3_PKTLEN_TO_MTU(HINIC3_MAX_RX_PKT_LEN(dev->data->dev_conf.rxmode));
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |=
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
+
+ /* Clear fdir filter. */
+ hinic3_free_fdir_filter(dev);
+
+ return 0;
+}
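+/*
+ * Illustrative note on the conversion above: HINIC3_PKTLEN_TO_MTU() strips
+ * the L2 overhead from the frame length (assuming the usual 18 bytes of
+ * Ethernet header plus CRC, a max_rx_pkt_len of 1518 would yield an
+ * mtu_size of 1500); the exact overhead is defined by the macro itself.
+ */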
+
+/**
+ * Get information about the device.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[out] info
+ * Info structure for ethernet device.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
+{
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+
+ info->max_rx_queues = nic_dev->max_rqs;
+ info->max_tx_queues = nic_dev->max_sqs;
+ info->min_rx_bufsize = HINIC3_MIN_RX_BUF_SIZE;
+ info->max_rx_pktlen = HINIC3_MAX_JUMBO_FRAME_SIZE;
+ info->max_mac_addrs = HINIC3_MAX_UC_MAC_ADDRS;
+ info->min_mtu = HINIC3_MIN_MTU_SIZE;
+ info->max_mtu = HINIC3_MAX_MTU_SIZE;
+ info->max_lro_pkt_size = HINIC3_MAX_LRO_SIZE;
+
+ info->rx_queue_offload_capa = 0;
+ info->rx_offload_capa =
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP | RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_SCATTER | RTE_ETH_RX_OFFLOAD_TCP_LRO |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
+
+ info->tx_queue_offload_capa = 0;
+ info->tx_offload_capa =
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT | RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM | RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO | RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
+
+ info->hash_key_size = HINIC3_RSS_KEY_SIZE;
+ info->reta_size = HINIC3_RSS_INDIR_SIZE;
+ info->flow_type_rss_offloads = HINIC3_RSS_OFFLOAD_ALL;
+
+ info->rx_desc_lim = hinic3_rx_desc_lim;
+ info->tx_desc_lim = hinic3_tx_desc_lim;
+
+ /* Driver-preferred rx/tx parameters. */
+ info->default_rxportconf.burst_size = HINIC3_DEFAULT_BURST_SIZE;
+ info->default_txportconf.burst_size = HINIC3_DEFAULT_BURST_SIZE;
+ info->default_rxportconf.nb_queues = HINIC3_DEFAULT_NB_QUEUES;
+ info->default_txportconf.nb_queues = HINIC3_DEFAULT_NB_QUEUES;
+ info->default_rxportconf.ring_size = HINIC3_DEFAULT_RING_SIZE;
+ info->default_txportconf.ring_size = HINIC3_DEFAULT_RING_SIZE;
+
+ return 0;
+}
+
+static int
+hinic3_fw_version_get(struct rte_eth_dev *dev, char *fw_version, size_t fw_size)
+{
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ char mgmt_ver[MGMT_VERSION_MAX_LEN] = {0};
+ int err;
+
+ err = hinic3_get_mgmt_version(nic_dev->hwdev, mgmt_ver,
+ HINIC3_MGMT_VERSION_MAX_LEN);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Get fw version failed");
+ return -EIO;
+ }
+
+ if (fw_size < strlen(mgmt_ver) + 1)
+ return (int)(strlen(mgmt_ver) + 1);
+
+ (void)snprintf(fw_version, fw_size, "%s", mgmt_ver);
+
+ return 0;
+}
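+/*
+ * Application-side usage sketch for the callback above (standard ethdev
+ * API, shown for illustration only):
+ *
+ *   char fw_ver[32];
+ *   int ret = rte_eth_dev_fw_version_get(port_id, fw_ver, sizeof(fw_ver));
+ *   if (ret == 0)
+ *       printf("firmware: %s\n", fw_ver);
+ *   else if (ret > 0)
+ *       printf("buffer too small, need %d bytes\n", ret);
+ */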
+
+/**
+ * Set ethernet device link state up.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_dev_set_link_up(struct rte_eth_dev *dev)
+{
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ int err;
+
+ /*
+ * Enabling the vport marks the function valid in the MPU, so the
+ * device start status must be checked before enabling the vport.
+ */
+ if (hinic3_get_bit(HINIC3_DEV_START, &nic_dev->dev_status)) {
+ err = hinic3_set_vport_enable(nic_dev->hwdev, true);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Enable vport failed, dev_name: %s",
+ nic_dev->dev_name);
+ return err;
+ }
+ }
+
+ /* Link status follows the PHY port status; the MPU will open the PMA. */
+ err = hinic3_set_port_enable(nic_dev->hwdev, true);
+ if (err) {
+ PMD_DRV_LOG(ERR,
+ "Set MAC link up failed, dev_name: %s, port_id: %d",
+ nic_dev->dev_name, dev->data->port_id);
+ return err;
+ }
+
+ return 0;
+}
+
+/**
+ * Set ethernet device link state down.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_dev_set_link_down(struct rte_eth_dev *dev)
+{
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ int err;
+
+ err = hinic3_set_vport_enable(nic_dev->hwdev, false);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Disable vport failed, dev_name: %s",
+ nic_dev->dev_name);
+ return err;
+ }
+
+ /* Link status follows the PHY port status; the MPU will close the PMA. */
+ err = hinic3_set_port_enable(nic_dev->hwdev, false);
+ if (err) {
+ PMD_DRV_LOG(ERR,
+ "Set MAC link down failed, dev_name: %s, port_id: %d",
+ nic_dev->dev_name, dev->data->port_id);
+ return err;
+ }
+
+ return 0;
+}
+
+/**
+ * Get device physical link information.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] wait_to_complete
+ * Wait for request completion.
+ *
+ * @return
+ * 0 : Link status changed
+ * -1 : Link status not changed.
+ */
+static int
+hinic3_link_update(struct rte_eth_dev *dev, int wait_to_complete)
+{
+#define CHECK_INTERVAL 10 /**< 10ms. */
+#define MAX_REPEAT_TIME 100 /**< 1s (100 * 10ms) in total. */
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ struct rte_eth_link link;
+ u8 link_state;
+ unsigned int rep_cnt = MAX_REPEAT_TIME;
+ int ret;
+
+ memset(&link, 0, sizeof(link));
+ do {
+ /* Get link status information from hardware. */
+ ret = hinic3_get_link_state(nic_dev->hwdev, &link_state);
+ if (ret) {
+ link.link_status = RTE_ETH_LINK_DOWN;
+ link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+ link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+ link.link_autoneg = RTE_ETH_LINK_FIXED;
+ goto out;
+ }
+
+ get_port_info(nic_dev->hwdev, link_state, &link);
+
+ if (!wait_to_complete || link.link_status)
+ break;
+
+ rte_delay_ms(CHECK_INTERVAL);
+ } while (rep_cnt--);
+
+out:
+ return rte_eth_linkstatus_set(dev, &link);
+}
+
+/**
+ * Reset all RX queues (RXQs).
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ */
+static void
+hinic3_reset_rx_queue(struct rte_eth_dev *dev)
+{
+ struct hinic3_rxq *rxq = NULL;
+ struct hinic3_nic_dev *nic_dev;
+ int q_id = 0;
+
+ nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+
+ for (q_id = 0; q_id < nic_dev->num_rqs; q_id++) {
+ rxq = nic_dev->rxqs[q_id];
+
+ rxq->cons_idx = 0;
+ rxq->prod_idx = 0;
+ rxq->delta = rxq->q_depth;
+ rxq->next_to_update = 0;
+ }
+}
+
+/**
+ * Reset all TX queues (TXQs).
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ */
+static void
+hinic3_reset_tx_queue(struct rte_eth_dev *dev)
+{
+ struct hinic3_nic_dev *nic_dev;
+ struct hinic3_txq *txq = NULL;
+ int q_id = 0;
+
+ nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+
+ for (q_id = 0; q_id < nic_dev->num_sqs; q_id++) {
+ txq = nic_dev->txqs[q_id];
+
+ txq->cons_idx = 0;
+ txq->prod_idx = 0;
+ txq->owner = 1;
+
+ /* Clear hardware ci. */
+ *txq->ci_vaddr_base = 0;
+ }
+}
+
+/**
+ * Create the receive queue.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] qid
+ * Receive queue index.
+ * @param[in] nb_desc
+ * Number of descriptors for receive queue.
+ * @param[in] socket_id
+ * Socket index on which memory must be allocated.
+ * @param[in] rx_conf
+ * Thresholds parameters.
+ * @param[in] mp
+ * Memory pool for buffer allocations.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc,
+ unsigned int socket_id,
+ const struct rte_eth_rxconf *rx_conf,
+ struct rte_mempool *mp)
+{
+ struct hinic3_nic_dev *nic_dev;
+ struct hinic3_rxq *rxq = NULL;
+ const struct rte_memzone *rq_mz = NULL;
+ const struct rte_memzone *cqe_mz = NULL;
+ const struct rte_memzone *pi_mz = NULL;
+ u16 rq_depth, rx_free_thresh;
+ u32 queue_buf_size;
+ void *db_addr = NULL;
+ int wqe_count;
+ u32 buf_size;
+ int err;
+
+ nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+
+ /* Queue depth must be a power of 2; otherwise it is rounded up. */
+ rq_depth = (nb_desc & (nb_desc - 1))
+ ? ((u16)(1U << (ilog2(nb_desc) + 1)))
+ : nb_desc;
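+ /*
+ * E.g. nb_desc = 1000 is not a power of 2: ilog2(1000) = 9, so
+ * rq_depth becomes 1U << 10 = 1024; nb_desc = 1024 is kept as is.
+ */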
+
+ /*
+ * Validate number of receive descriptors.
+ * It must not exceed hardware maximum and minimum.
+ */
+ if (rq_depth > HINIC3_MAX_QUEUE_DEPTH ||
+ rq_depth < HINIC3_MIN_QUEUE_DEPTH) {
+ PMD_DRV_LOG(ERR,
+ "RX queue depth is out of range from %d to %d,"
+ "(nb_desc: %d, q_depth: %d, port: %d queue: %d)",
+ HINIC3_MIN_QUEUE_DEPTH, HINIC3_MAX_QUEUE_DEPTH,
+ (int)nb_desc, (int)rq_depth,
+ (int)dev->data->port_id, (int)qid);
+ return -EINVAL;
+ }
+
+ /*
+ * The RX descriptor ring will be cleaned after rxq->rx_free_thresh
+ * descriptors are used or if the number of descriptors required
+ * to receive a packet is greater than the number of free RX
+ * descriptors.
+ * The following constraints must be satisfied:
+ * - rx_free_thresh must be greater than 0.
+ * - rx_free_thresh must be less than the size of the ring minus 1.
+ * When set to zero use default values.
+ */
+ rx_free_thresh = (u16)((rx_conf->rx_free_thresh)
+ ? rx_conf->rx_free_thresh
+ : HINIC3_DEFAULT_RX_FREE_THRESH);
+ if (rx_free_thresh >= (rq_depth - 1)) {
+ PMD_DRV_LOG(ERR,
+ "rx_free_thresh must be less than the number "
+ "of RX descriptors minus 1, rx_free_thresh: %u "
+ "port: %d queue: %d)",
+ (unsigned int)rx_free_thresh,
+ (int)dev->data->port_id, (int)qid);
+
+ return -EINVAL;
+ }
+
+ rxq = rte_zmalloc_socket("hinic3_rq", sizeof(struct hinic3_rxq),
+ RTE_CACHE_LINE_SIZE, (int)socket_id);
+ if (!rxq) {
+ PMD_DRV_LOG(ERR, "Allocate rxq[%d] failed, dev_name: %s", qid,
+ dev->data->name);
+
+ return -ENOMEM;
+ }
+
+ /* Init rq parameters. */
+ rxq->nic_dev = nic_dev;
+ nic_dev->rxqs[qid] = rxq;
+ rxq->mb_pool = mp;
+ rxq->q_id = qid;
+ rxq->next_to_update = 0;
+ rxq->q_depth = rq_depth;
+ rxq->q_mask = rq_depth - 1;
+ rxq->delta = rq_depth;
+ rxq->cons_idx = 0;
+ rxq->prod_idx = 0;
+ rxq->rx_free_thresh = rx_free_thresh;
+ rxq->rxinfo_align_end = rxq->q_depth - rxq->rx_free_thresh;
+ rxq->port_id = dev->data->port_id;
+ rxq->wait_time_cycle = HINIC3_RX_WAIT_CYCLE_THRESH;
+
+ /* Convert the mbuf data room size to a buffer size supported by hardware. */
+ u16 rx_buf_size =
+ rte_pktmbuf_data_room_size(rxq->mb_pool) - RTE_PKTMBUF_HEADROOM;
+ err = hinic3_convert_rx_buf_size(rx_buf_size, &buf_size);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Adjust buf size failed, dev_name: %s",
+ dev->data->name);
+ goto adjust_bufsize_fail;
+ }
+
+ if (buf_size >= HINIC3_RX_BUF_SIZE_4K &&
+ buf_size < HINIC3_RX_BUF_SIZE_16K)
+ rxq->wqe_type = HINIC3_EXTEND_RQ_WQE;
+ else
+ rxq->wqe_type = HINIC3_NORMAL_RQ_WQE;
+
+ rxq->wqebb_shift = HINIC3_RQ_WQEBB_SHIFT + rxq->wqe_type;
+ rxq->wqebb_size = (u16)BIT(rxq->wqebb_shift);
+
+ rxq->buf_len = (u16)buf_size;
+ rxq->rx_buff_shift = ilog2(rxq->buf_len);
+
+ pi_mz = hinic3_dma_zone_reserve(dev, "hinic3_rq_pi", qid, RTE_PGSIZE_4K,
+ RTE_CACHE_LINE_SIZE, (int)socket_id);
+ if (!pi_mz) {
+ PMD_DRV_LOG(ERR, "Allocate rxq[%d] pi_mz failed, dev_name: %s",
+ qid, dev->data->name);
+ err = -ENOMEM;
+ goto alloc_pi_mz_fail;
+ }
+ rxq->pi_mz = pi_mz;
+ rxq->pi_dma_addr = pi_mz->iova;
+ rxq->pi_virt_addr = pi_mz->addr;
+
+ err = hinic3_alloc_db_addr(nic_dev->hwdev, &db_addr, HINIC3_DB_TYPE_RQ);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Alloc rq doorbell addr failed");
+ goto alloc_db_err_fail;
+ }
+ rxq->db_addr = db_addr;
+
+ queue_buf_size = BIT(rxq->wqebb_shift) * rq_depth;
+ rq_mz = hinic3_dma_zone_reserve(dev, "hinic3_rq_mz", qid,
+ queue_buf_size, RTE_PGSIZE_256K,
+ (int)socket_id);
+ if (!rq_mz) {
+ PMD_DRV_LOG(ERR, "Allocate rxq[%d] rq_mz failed, dev_name: %s",
+ qid, dev->data->name);
+ err = -ENOMEM;
+ goto alloc_rq_mz_fail;
+ }
+
+ memset(rq_mz->addr, 0, queue_buf_size);
+ rxq->rq_mz = rq_mz;
+ rxq->queue_buf_paddr = rq_mz->iova;
+ rxq->queue_buf_vaddr = rq_mz->addr;
+
+ rxq->rx_info = rte_zmalloc_socket("rx_info",
+ rq_depth * sizeof(*rxq->rx_info),
+ RTE_CACHE_LINE_SIZE, (int)socket_id);
+ if (!rxq->rx_info) {
+ PMD_DRV_LOG(ERR, "Allocate rx_info failed, dev_name: %s",
+ dev->data->name);
+ err = -ENOMEM;
+ goto alloc_rx_info_fail;
+ }
+
+ cqe_mz = hinic3_dma_zone_reserve(dev, "hinic3_cqe_mz", qid,
+ rq_depth * sizeof(*rxq->rx_cqe),
+ RTE_CACHE_LINE_SIZE, (int)socket_id);
+ if (!cqe_mz) {
+ PMD_DRV_LOG(ERR, "Allocate cqe mem zone failed, dev_name: %s",
+ dev->data->name);
+ err = -ENOMEM;
+ goto alloc_cqe_mz_fail;
+ }
+ memset(cqe_mz->addr, 0, rq_depth * sizeof(*rxq->rx_cqe));
+ rxq->cqe_mz = cqe_mz;
+ rxq->cqe_start_paddr = cqe_mz->iova;
+ rxq->cqe_start_vaddr = cqe_mz->addr;
+ rxq->rx_cqe = (struct hinic3_rq_cqe *)rxq->cqe_start_vaddr;
+
+ wqe_count = hinic3_rx_fill_wqe(rxq);
+ if (wqe_count != rq_depth) {
+ PMD_DRV_LOG(ERR,
+ "Fill rx wqe failed, wqe_count: %d, dev_name: %s",
+ wqe_count, dev->data->name);
+ err = -ENOMEM;
+ goto fill_rx_wqe_fail;
+ }
+ /* Record rxq pointer in rte_eth rx_queues. */
+ dev->data->rx_queues[qid] = rxq;
+
+ return 0;
+
+fill_rx_wqe_fail:
+ hinic3_memzone_free(rxq->cqe_mz);
+alloc_cqe_mz_fail:
+ rte_free(rxq->rx_info);
+
+alloc_rx_info_fail:
+ hinic3_memzone_free(rxq->rq_mz);
+
+alloc_rq_mz_fail:
+alloc_db_err_fail:
+ hinic3_memzone_free(rxq->pi_mz);
+
+alloc_pi_mz_fail:
+adjust_bufsize_fail:
+ rte_free(rxq);
+ nic_dev->rxqs[qid] = NULL;
+
+ return err;
+}
+
+/**
+ * Create the transmit queue.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] qid
+ * Transmit queue index.
+ * @param[in] nb_desc
+ * Number of descriptors for transmit queue.
+ * @param[in] socket_id
+ * Socket index on which memory must be allocated.
+ * @param[in] tx_conf
+ * TX queue configuration parameters.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc,
+ unsigned int socket_id,
+ const struct rte_eth_txconf *tx_conf)
+{
+ struct hinic3_nic_dev *nic_dev;
+ struct hinic3_hwdev *hwdev;
+ struct hinic3_txq *txq = NULL;
+ const struct rte_memzone *sq_mz = NULL;
+ const struct rte_memzone *ci_mz = NULL;
+ void *db_addr = NULL;
+ u16 sq_depth, tx_free_thresh;
+ u32 queue_buf_size;
+ int err;
+
+ nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ hwdev = nic_dev->hwdev;
+
+ /* Queue depth must be a power of 2; otherwise it is rounded up. */
+ sq_depth = (nb_desc & (nb_desc - 1))
+ ? ((u16)(1U << (ilog2(nb_desc) + 1)))
+ : nb_desc;
+
+ /*
+ * Validate number of transmit descriptors.
+ * It must not exceed hardware maximum and minimum.
+ */
+ if (sq_depth > HINIC3_MAX_QUEUE_DEPTH ||
+ sq_depth < HINIC3_MIN_QUEUE_DEPTH) {
+ PMD_DRV_LOG(ERR,
+ "TX queue depth is out of range from %d to %d,"
+ "(nb_desc: %d, q_depth: %d, port: %d queue: %d)",
+ HINIC3_MIN_QUEUE_DEPTH, HINIC3_MAX_QUEUE_DEPTH,
+ (int)nb_desc, (int)sq_depth,
+ (int)dev->data->port_id, (int)qid);
+ return -EINVAL;
+ }
+
+ /*
+ * The TX descriptor ring will be cleaned after txq->tx_free_thresh
+ * descriptors are used or if the number of descriptors required
+ * to transmit a packet is greater than the number of free TX
+ * descriptors.
+ * The following constraints must be satisfied:
+ * - tx_free_thresh must be greater than 0.
+ * - tx_free_thresh must be less than the size of the ring minus 1.
+ * When set to zero use default values.
+ */
+ tx_free_thresh = (u16)((tx_conf->tx_free_thresh)
+ ? tx_conf->tx_free_thresh
+ : HINIC3_DEFAULT_TX_FREE_THRESH);
+ if (tx_free_thresh >= (sq_depth - 1)) {
+ PMD_DRV_LOG(ERR,
+ "tx_free_thresh must be less than the number of tx "
+ "descriptors minus 1, tx_free_thresh: %u port: %d "
+ "queue: %d",
+ (unsigned int)tx_free_thresh,
+ (int)dev->data->port_id, (int)qid);
+ return -EINVAL;
+ }
+
+ txq = rte_zmalloc_socket("hinic3_tx_queue", sizeof(struct hinic3_txq),
+ RTE_CACHE_LINE_SIZE, (int)socket_id);
+ if (!txq) {
+ PMD_DRV_LOG(ERR, "Allocate txq[%d] failed, dev_name: %s", qid,
+ dev->data->name);
+ return -ENOMEM;
+ }
+ nic_dev->txqs[qid] = txq;
+ txq->nic_dev = nic_dev;
+ txq->q_id = qid;
+ txq->q_depth = sq_depth;
+ txq->q_mask = sq_depth - 1;
+ txq->cons_idx = 0;
+ txq->prod_idx = 0;
+ txq->wqebb_shift = HINIC3_SQ_WQEBB_SHIFT;
+ txq->wqebb_size = (u16)BIT(txq->wqebb_shift);
+ txq->tx_free_thresh = tx_free_thresh;
+ txq->owner = 1;
+ txq->cos = nic_dev->default_cos;
+
+ ci_mz = hinic3_dma_zone_reserve(dev, "hinic3_sq_ci", qid,
+ HINIC3_CI_Q_ADDR_SIZE,
+ HINIC3_CI_Q_ADDR_SIZE, (int)socket_id);
+ if (!ci_mz) {
+ PMD_DRV_LOG(ERR, "Allocate txq[%d] ci_mz failed, dev_name: %s",
+ qid, dev->data->name);
+ err = -ENOMEM;
+ goto alloc_ci_mz_fail;
+ }
+ txq->ci_mz = ci_mz;
+ txq->ci_dma_base = ci_mz->iova;
+ txq->ci_vaddr_base = (volatile u16 *)ci_mz->addr;
+
+ queue_buf_size = BIT(txq->wqebb_shift) * sq_depth;
+ sq_mz = hinic3_dma_zone_reserve(dev, "hinic3_sq_mz", qid,
+ queue_buf_size, RTE_PGSIZE_256K,
+ (int)socket_id);
+ if (!sq_mz) {
+ PMD_DRV_LOG(ERR, "Allocate txq[%d] sq_mz failed, dev_name: %s",
+ qid, dev->data->name);
+ err = -ENOMEM;
+ goto alloc_sq_mz_fail;
+ }
+ memset(sq_mz->addr, 0, queue_buf_size);
+ txq->sq_mz = sq_mz;
+ txq->queue_buf_paddr = sq_mz->iova;
+ txq->queue_buf_vaddr = sq_mz->addr;
+ txq->sq_head_addr = (u64)txq->queue_buf_vaddr;
+ txq->sq_bot_sge_addr = txq->sq_head_addr + queue_buf_size;
+
+ err = hinic3_alloc_db_addr(hwdev, &db_addr, HINIC3_DB_TYPE_SQ);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Alloc sq doorbell addr failed");
+ goto alloc_db_err_fail;
+ }
+ txq->db_addr = db_addr;
+
+ txq->tx_info = rte_zmalloc_socket("tx_info",
+ sq_depth * sizeof(*txq->tx_info),
+ RTE_CACHE_LINE_SIZE, (int)socket_id);
+ if (!txq->tx_info) {
+ PMD_DRV_LOG(ERR, "Allocate tx_info failed, dev_name: %s",
+ dev->data->name);
+ err = -ENOMEM;
+ goto alloc_tx_info_fail;
+ }
+
+ /* Record txq pointer in rte_eth tx_queues. */
+ dev->data->tx_queues[qid] = txq;
+
+ return 0;
+
+alloc_tx_info_fail:
+alloc_db_err_fail:
+ hinic3_memzone_free(txq->sq_mz);
+
+alloc_sq_mz_fail:
+ hinic3_memzone_free(txq->ci_mz);
+
+alloc_ci_mz_fail:
+ rte_free(txq);
+ return err;
+}
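+/*
+ * Application-side sketch of driving the two setup callbacks above through
+ * the standard ethdev API (illustrative values):
+ *
+ *   ret = rte_eth_rx_queue_setup(port_id, 0, 1024, rte_socket_id(),
+ *                                NULL, mbuf_pool);
+ *   if (ret == 0)
+ *       ret = rte_eth_tx_queue_setup(port_id, 0, 1024, rte_socket_id(),
+ *                                    NULL);
+ *
+ * With a zeroed or NULL conf the free thresholds fall back to
+ * HINIC3_DEFAULT_RX_FREE_THRESH / HINIC3_DEFAULT_TX_FREE_THRESH.
+ */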
+
+static void
+hinic3_rx_queue_release(struct rte_eth_dev *dev, uint16_t queue_id)
+{
+ if (dev == NULL || dev->data == NULL || dev->data->rx_queues == NULL) {
+ PMD_DRV_LOG(WARNING, "rx queue is null when release");
+ return;
+ }
+ if (queue_id >= dev->data->nb_rx_queues) {
+ PMD_DRV_LOG(WARNING, "eth_dev: %s, rx queue id: %u is illegal",
+ dev->data->name, queue_id);
+ return;
+ }
+ struct hinic3_rxq *rxq = dev->data->rx_queues[queue_id];
+ struct hinic3_nic_dev *nic_dev = NULL;
+
+ if (!rxq) {
+ PMD_DRV_LOG(WARNING, "Rxq is null when release");
+ return;
+ }
+
+ nic_dev = rxq->nic_dev;
+
+ hinic3_free_rxq_mbufs(rxq);
+
+ hinic3_memzone_free(rxq->cqe_mz);
+
+ rte_free(rxq->rx_info);
+ rxq->rx_info = NULL;
+
+ hinic3_memzone_free(rxq->rq_mz);
+
+ hinic3_memzone_free(rxq->pi_mz);
+
+ nic_dev->rxqs[rxq->q_id] = NULL;
+ rte_free(rxq);
+ dev->data->rx_queues[queue_id] = NULL;
+}
+
+static void
+hinic3_tx_queue_release(struct rte_eth_dev *dev, uint16_t queue_id)
+{
+ if (dev == NULL || dev->data == NULL || dev->data->tx_queues == NULL) {
+ PMD_DRV_LOG(WARNING, "tx queue is null when release");
+ return;
+ }
+ if (queue_id >= dev->data->nb_tx_queues) {
+ PMD_DRV_LOG(WARNING, "eth_dev: %s, tx queue id: %u is illegal",
+ dev->data->name, queue_id);
+ return;
+ }
+ struct hinic3_txq *txq = dev->data->tx_queues[queue_id];
+ struct hinic3_nic_dev *nic_dev = NULL;
+
+ if (!txq) {
+ PMD_DRV_LOG(WARNING, "Txq is null when release");
+ return;
+ }
+ PMD_DRV_LOG(INFO, "%s txq_idx:%d queue release.",
+ txq->nic_dev->dev_name, txq->q_id);
+ nic_dev = txq->nic_dev;
+
+ hinic3_free_txq_mbufs(txq);
+
+ rte_free(txq->tx_info);
+ txq->tx_info = NULL;
+
+ hinic3_memzone_free(txq->sq_mz);
+
+ hinic3_memzone_free(txq->ci_mz);
+
+ nic_dev->txqs[txq->q_id] = NULL;
+ rte_free(txq);
+ dev->data->tx_queues[queue_id] = NULL;
+}
+
+/**
+ * Start the RXQ and enable the flow director (fdir) filter for it.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] rq_id
+ * RX queue ID to be started.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rq_id)
+{
+ struct hinic3_rxq *rxq = NULL;
+ int rc;
+
+ if (rq_id < dev->data->nb_rx_queues) {
+ rxq = dev->data->rx_queues[rq_id];
+
+ rc = hinic3_start_rq(dev, rxq);
+ if (rc) {
+ PMD_DRV_LOG(ERR,
+ "Start rx queue failed, eth_dev:%s, "
+ "queue_idx:%d",
+ dev->data->name, rq_id);
+ return rc;
+ }
+
+ dev->data->rx_queue_state[rq_id] = RTE_ETH_QUEUE_STATE_STARTED;
+ }
+ rc = hinic3_enable_rxq_fdir_filter(dev, (u32)rq_id, (u32)true);
+ if (rc) {
+ PMD_DRV_LOG(ERR, "Failed to enable rq : %d fdir filter.",
+ rq_id);
+ return rc;
+ }
+ return 0;
+}
+
+/**
+ * Stop the RXQ and disable the flow director (fdir) filter for it.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] rq_id
+ * RX queue ID to be stopped.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rq_id)
+{
+ struct hinic3_rxq *rxq = NULL;
+ int rc;
+
+ if (rq_id < dev->data->nb_rx_queues) {
+ rxq = dev->data->rx_queues[rq_id];
+
+ rc = hinic3_stop_rq(dev, rxq);
+ if (rc) {
+ PMD_DRV_LOG(ERR,
+ "Stop rx queue failed, eth_dev:%s, "
+ "queue_idx:%d",
+ dev->data->name, rq_id);
+ return rc;
+ }
+
+ dev->data->rx_queue_state[rq_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+ }
+ rc = hinic3_enable_rxq_fdir_filter(dev, (u32)rq_id, (u32)false);
+ if (rc) {
+ PMD_DRV_LOG(ERR, "Failed to disable rq : %d fdir filter.",
+ rq_id);
+ return rc;
+ }
+
+ return 0;
+}
+
+static int
+hinic3_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t sq_id)
+{
+ struct hinic3_txq *txq = NULL;
+
+ PMD_DRV_LOG(INFO, "Start tx queue, eth_dev:%s, queue_idx:%d",
+ dev->data->name, sq_id);
+
+ txq = dev->data->tx_queues[sq_id];
+ HINIC3_SET_TXQ_STARTED(txq);
+ dev->data->tx_queue_state[sq_id] = RTE_ETH_QUEUE_STATE_STARTED;
+ return 0;
+}
+
+static int
+hinic3_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t sq_id)
+{
+ struct hinic3_txq *txq = NULL;
+ int rc;
+
+ if (sq_id < dev->data->nb_tx_queues) {
+ txq = dev->data->tx_queues[sq_id];
+ rc = hinic3_stop_sq(txq);
+ if (rc) {
+ PMD_DRV_LOG(ERR,
+ "Stop tx queue failed, eth_dev:%s, "
+ "queue_idx:%d",
+ dev->data->name, sq_id);
+ return rc;
+ }
+
+ HINIC3_SET_TXQ_STOPPED(txq);
+ dev->data->tx_queue_state[sq_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+ }
+
+ return 0;
+}
+
+int
+hinic3_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
+{
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+ struct rte_intr_handle *intr_handle = PCI_DEV_TO_INTR_HANDLE(pci_dev);
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ u16 msix_intr;
+
+ if (!rte_intr_dp_is_en(intr_handle) || !intr_handle->intr_vec)
+ return 0;
+
+ if (queue_id >= dev->data->nb_rx_queues)
+ return -EINVAL;
+
+ msix_intr = (u16)intr_handle->intr_vec[queue_id];
+ hinic3_set_msix_auto_mask_state(nic_dev->hwdev, msix_intr,
+ HINIC3_SET_MSIX_AUTO_MASK);
+ hinic3_set_msix_state(nic_dev->hwdev, msix_intr, HINIC3_MSIX_ENABLE);
+
+ return 0;
+}
+
+int
+hinic3_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
+{
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+ struct rte_intr_handle *intr_handle = PCI_DEV_TO_INTR_HANDLE(pci_dev);
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ u16 msix_intr;
+
+ if (!rte_intr_dp_is_en(intr_handle) || !intr_handle->intr_vec)
+ return 0;
+
+ if (queue_id >= dev->data->nb_rx_queues)
+ return -EINVAL;
+
+ msix_intr = (u16)intr_handle->intr_vec[queue_id];
+ hinic3_set_msix_auto_mask_state(nic_dev->hwdev, msix_intr,
+ HINIC3_CLR_MSIX_AUTO_MASK);
+ hinic3_set_msix_state(nic_dev->hwdev, msix_intr, HINIC3_MSIX_DISABLE);
+ hinic3_misx_intr_clear_resend_bit(nic_dev->hwdev, msix_intr,
+ MSIX_RESEND_TIMER_CLEAR);
+
+ return 0;
+}
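+/*
+ * Application-side sketch for the RX interrupt callbacks above (standard
+ * ethdev/epoll API, illustrative only; "event" is a struct
+ * rte_epoll_event):
+ *
+ *   rte_eth_dev_rx_intr_ctl_q(port_id, queue_id, RTE_EPOLL_PER_THREAD,
+ *                             RTE_INTR_EVENT_ADD, NULL);
+ *   rte_eth_dev_rx_intr_enable(port_id, queue_id);
+ *   rte_epoll_wait(RTE_EPOLL_PER_THREAD, &event, 1, timeout_ms);
+ *   rte_eth_dev_rx_intr_disable(port_id, queue_id);
+ */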
+
+static uint32_t
+hinic3_dev_rx_queue_count(__rte_unused void *rx_queue)
+{
+ return 0;
+}
+
+static int
+hinic3_dev_rx_descriptor_status(__rte_unused void *rx_queue,
+ __rte_unused uint16_t offset)
+{
+ return 0;
+}
+
+static int
+hinic3_dev_tx_descriptor_status(__rte_unused void *tx_queue,
+ __rte_unused uint16_t offset)
+{
+ return 0;
+}
+
+static int
+hinic3_set_lro(struct hinic3_nic_dev *nic_dev, struct rte_eth_conf *dev_conf)
+{
+ bool lro_en;
+ int max_lro_size, lro_max_pkt_len;
+ int err;
+
+ /* Config lro. */
+ lro_en = dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO ? true
+ : false;
+ max_lro_size = (int)(dev_conf->rxmode.max_lro_pkt_size);
+ /* Convert max_lro_size to HINIC3_LRO_UNIT_WQE_SIZE units, at least 1. */
+ lro_max_pkt_len = max_lro_size / HINIC3_LRO_UNIT_WQE_SIZE
+ ? max_lro_size / HINIC3_LRO_UNIT_WQE_SIZE
+ : 1;
+
+ PMD_DRV_LOG(INFO,
+ "max_lro_size: %d, rx_buff_len: %d, lro_max_pkt_len: %d",
+ max_lro_size, nic_dev->rx_buff_len, lro_max_pkt_len);
+ PMD_DRV_LOG(INFO, "max_rx_pkt_len: %d",
+ HINIC3_MAX_RX_PKT_LEN(dev_conf->rxmode));
+ err = hinic3_set_rx_lro_state(nic_dev->hwdev, lro_en,
+ HINIC3_LRO_DEFAULT_TIME_LIMIT,
+ lro_max_pkt_len);
+ if (err)
+ PMD_DRV_LOG(ERR, "Set lro state failed, err: %d", err);
+ return err;
+}
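+/*
+ * Worked example for the conversion above (a 1K HINIC3_LRO_UNIT_WQE_SIZE
+ * is assumed here purely for illustration): max_lro_pkt_size = 65536
+ * gives lro_max_pkt_len = 65536 / 1024 = 64 units, while any size smaller
+ * than one unit is clamped to 1 so LRO always has a non-zero limit.
+ */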
+
+static int
+hinic3_set_vlan(struct rte_eth_dev *dev, struct rte_eth_conf *dev_conf)
+{
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ bool vlan_filter, vlan_strip;
+ int err;
+
+ /* Config vlan filter. */
+ vlan_filter = dev_conf->rxmode.offloads &
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
+
+ err = hinic3_set_vlan_fliter(nic_dev->hwdev, vlan_filter);
+ if (err) {
+ PMD_DRV_LOG(ERR,
+ "Config vlan filter failed, device: %s, port_id: "
+ "%d, err: %d",
+ nic_dev->dev_name, dev->data->port_id, err);
+ return err;
+ }
+
+ /* Config vlan stripping. */
+ vlan_strip = dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
+
+ err = hinic3_set_rx_vlan_offload(nic_dev->hwdev, vlan_strip);
+ if (err) {
+ PMD_DRV_LOG(ERR,
+ "Config vlan strip failed, device: %s, port_id: "
+ "%d, err: %d",
+ nic_dev->dev_name, dev->data->port_id, err);
+ }
+
+ return err;
+}
+
+/**
+ * Configure RX mode, checksum offload, LRO, RSS, VLAN and initialize the RXQ
+ * list.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_set_rxtx_configure(struct rte_eth_dev *dev)
+{
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
+ struct rte_eth_rss_conf *rss_conf = NULL;
+ int err;
+
+ /* Config rx mode. */
+ err = hinic3_set_rx_mode(nic_dev->hwdev, HINIC3_DEFAULT_RX_MODE);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Set rx_mode: 0x%x failed",
+ HINIC3_DEFAULT_RX_MODE);
+ return err;
+ }
+ nic_dev->rx_mode = HINIC3_DEFAULT_RX_MODE;
+
+ /* Config rx checksum offload. */
+ if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
+ nic_dev->rx_csum_en = HINIC3_DEFAULT_RX_CSUM_OFFLOAD;
+
+ err = hinic3_set_lro(nic_dev, dev_conf);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Set lro failed");
+ return err;
+ }
+ /* Config RSS. */
+ if ((dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) &&
+ nic_dev->num_rqs > 1) {
+ rss_conf = &dev_conf->rx_adv_conf.rss_conf;
+ err = hinic3_update_rss_config(dev, rss_conf);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Set rss config failed, err: %d", err);
+ return err;
+ }
+ }
+
+ err = hinic3_set_vlan(dev, dev_conf);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Set vlan failed, err: %d", err);
+ return err;
+ }
+
+ hinic3_init_rx_queue_list(nic_dev);
+
+ return 0;
+}
+
+/**
+ * Disable RX mode and RSS, and free associated resources.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ */
+static void
+hinic3_remove_rxtx_configure(struct rte_eth_dev *dev)
+{
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ u8 prio_tc[HINIC3_DCB_UP_MAX] = {0};
+
+ hinic3_set_rx_mode(nic_dev->hwdev, 0);
+
+ if (nic_dev->rss_state == HINIC3_RSS_ENABLE) {
+ hinic3_rss_cfg(nic_dev->hwdev, HINIC3_RSS_DISABLE, 0, prio_tc);
+ hinic3_rss_template_free(nic_dev->hwdev);
+ nic_dev->rss_state = HINIC3_RSS_DISABLE;
+ }
+}
+
+static bool
+hinic3_find_vlan_filter(struct hinic3_nic_dev *nic_dev, uint16_t vlan_id)
+{
+ u32 vid_idx, vid_bit;
+
+ vid_idx = HINIC3_VFTA_IDX(vlan_id);
+ vid_bit = HINIC3_VFTA_BIT(vlan_id);
+
+ return (nic_dev->vfta[vid_idx] & vid_bit) ? true : false;
+}
+
+static void
+hinic3_store_vlan_filter(struct hinic3_nic_dev *nic_dev, u16 vlan_id, bool on)
+{
+ u32 vid_idx, vid_bit;
+
+ vid_idx = HINIC3_VFTA_IDX(vlan_id);
+ vid_bit = HINIC3_VFTA_BIT(vlan_id);
+
+ if (on)
+ nic_dev->vfta[vid_idx] |= vid_bit;
+ else
+ nic_dev->vfta[vid_idx] &= ~vid_bit;
+}
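+/*
+ * The vfta table is a classic VLAN bitmap. Assuming the common layout of
+ * u32 words (HINIC3_VFTA_IDX(v) == v >> 5 and
+ * HINIC3_VFTA_BIT(v) == 1 << (v & 0x1F), as in other PMDs; the driver's
+ * own definitions live elsewhere), VLAN 100 maps to word 3, bit 4:
+ *
+ *   vid_idx = 100 >> 5;        ->  3
+ *   vid_bit = 1 << (100 & 31); ->  1 << 4
+ */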
+
+static void
+hinic3_remove_all_vlanid(struct rte_eth_dev *dev)
+{
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ int vlan_id;
+ u16 func_id;
+
+ func_id = hinic3_global_func_id(nic_dev->hwdev);
+
+ for (vlan_id = 1; vlan_id < RTE_ETHER_MAX_VLAN_ID; vlan_id++) {
+ if (hinic3_find_vlan_filter(nic_dev, vlan_id)) {
+ hinic3_del_vlan(nic_dev->hwdev, vlan_id, func_id);
+ hinic3_store_vlan_filter(nic_dev, vlan_id, false);
+ }
+ }
+}
+
+static void
+hinic3_disable_interrupt(struct rte_eth_dev *dev)
+{
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+
+ if (!hinic3_get_bit(HINIC3_DEV_INIT, &nic_dev->dev_status))
+ return;
+
+ /* Disable rte interrupt. */
+ rte_intr_disable(PCI_DEV_TO_INTR_HANDLE(pci_dev));
+ rte_intr_callback_unregister(PCI_DEV_TO_INTR_HANDLE(pci_dev),
+ hinic3_dev_interrupt_handler, (void *)dev);
+}
+
+static void
+hinic3_enable_interrupt(struct rte_eth_dev *dev)
+{
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+
+ if (!hinic3_get_bit(HINIC3_DEV_INIT, &nic_dev->dev_status))
+ return;
+
+ /* Enable rte interrupt. */
+ rte_intr_enable(PCI_DEV_TO_INTR_HANDLE(pci_dev));
+ rte_intr_callback_register(PCI_DEV_TO_INTR_HANDLE(pci_dev),
+ hinic3_dev_interrupt_handler, (void *)dev);
+}
+
+#define HINIC3_RX_VEC_START RTE_INTR_VEC_RXTX_OFFSET
+
+/** Datapath interrupt MSI-X attributes. */
+#define HINIC3_TXRX_MSIX_PENDING_LIMIT 2
+#define HINIC3_TXRX_MSIX_COALESC_TIMER 2
+#define HINIC3_TXRX_MSIX_RESEND_TIMER_CFG 7
+
+static int
+hinic3_init_rxq_msix_attr(void *hwdev, u16 msix_index)
+{
+ struct interrupt_info info = {0};
+ int err;
+
+ info.lli_set = 0;
+ info.interrupt_coalesc_set = 1;
+ info.pending_limt = HINIC3_TXRX_MSIX_PENDING_LIMIT;
+ info.coalesc_timer_cfg = HINIC3_TXRX_MSIX_COALESC_TIMER;
+ info.resend_timer_cfg = HINIC3_TXRX_MSIX_RESEND_TIMER_CFG;
+
+ info.msix_index = msix_index;
+ err = hinic3_set_interrupt_cfg(hwdev, info);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Set msix attr failed, msix_index %d",
+ msix_index);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+static void
+hinic3_deinit_rxq_intr(struct rte_eth_dev *dev)
+{
+ struct rte_intr_handle *intr_handle = dev->intr_handle;
+
+ rte_intr_efd_disable(intr_handle);
+ if (intr_handle->intr_vec) {
+ rte_free(intr_handle->intr_vec);
+ intr_handle->intr_vec = NULL;
+ }
+}
+
+/**
+ * Initialize RX queue interrupts by enabling MSI-X, allocating interrupt
+ * vectors, and configuring interrupt attributes for each RX queue.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ *
+ * @return
+ * 0 on success, negative error code on failure.
+ * - -ENOTSUP if MSI-X interrupts are not supported.
+ * - Error code if enabling event file descriptors fails.
+ * - -ENOMEM if allocating interrupt vectors fails.
+ */
+static int
+hinic3_init_rxq_intr(struct rte_eth_dev *dev)
+{
+ struct rte_intr_handle *intr_handle = NULL;
+ struct hinic3_nic_dev *nic_dev = NULL;
+ struct hinic3_rxq *rxq = NULL;
+ u32 nb_rx_queues, i;
+ int err;
+
+ intr_handle = dev->intr_handle;
+ nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ if (!dev->data->dev_conf.intr_conf.rxq)
+ return 0;
+
+ if (!rte_intr_cap_multiple(intr_handle)) {
+ PMD_DRV_LOG(ERR, "Rx queue interrupts require MSI-X interrupts"
+ " (vfio-pci driver)");
+ return -ENOTSUP;
+ }
+
+ nb_rx_queues = dev->data->nb_rx_queues;
+ err = rte_intr_efd_enable(intr_handle, nb_rx_queues);
+ if (err) {
+ PMD_DRV_LOG(ERR,
+ "Failed to enable event fds for Rx queue interrupts");
+ return err;
+ }
+
+ intr_handle->intr_vec =
+ rte_zmalloc("hinic_intr_vec", nb_rx_queues * sizeof(int), 0);
+ if (intr_handle->intr_vec == NULL) {
+ PMD_DRV_LOG(ERR, "Failed to allocate intr_vec");
+ rte_intr_efd_disable(intr_handle);
+ return -ENOMEM;
+ }
+ intr_handle->vec_list_size = nb_rx_queues;
+ for (i = 0; i < nb_rx_queues; i++)
+ intr_handle->intr_vec[i] = (int)(i + HINIC3_RX_VEC_START);
+
+ for (i = 0; i < dev->data->nb_rx_queues; i++) {
+ rxq = dev->data->rx_queues[i];
+ rxq->dp_intr_en = 1;
+ rxq->msix_entry_idx = (u16)intr_handle->intr_vec[i];
+
+ err = hinic3_init_rxq_msix_attr(nic_dev->hwdev,
+ rxq->msix_entry_idx);
+ if (err) {
+ hinic3_deinit_rxq_intr(dev);
+ return err;
+ }
+ }
+
+ return 0;
+}
+
+static int
+hinic3_init_sw_rxtxqs(struct hinic3_nic_dev *nic_dev)
+{
+ u32 txq_size;
+ u32 rxq_size;
+
+ /* Allocate software txq array. */
+ txq_size = nic_dev->max_sqs * sizeof(*nic_dev->txqs);
+ nic_dev->txqs =
+ rte_zmalloc("hinic3_txqs", txq_size, RTE_CACHE_LINE_SIZE);
+ if (!nic_dev->txqs) {
+ PMD_DRV_LOG(ERR, "Allocate txqs failed");
+ return -ENOMEM;
+ }
+
+ /* Allocate software rxq array. */
+ rxq_size = nic_dev->max_rqs * sizeof(*nic_dev->rxqs);
+ nic_dev->rxqs =
+ rte_zmalloc("hinic3_rxqs", rxq_size, RTE_CACHE_LINE_SIZE);
+ if (!nic_dev->rxqs) {
+ /* Free txqs. */
+ rte_free(nic_dev->txqs);
+ nic_dev->txqs = NULL;
+
+ PMD_DRV_LOG(ERR, "Allocate rxqs failed");
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+static void
+hinic3_deinit_sw_rxtxqs(struct hinic3_nic_dev *nic_dev)
+{
+ rte_free(nic_dev->txqs);
+ nic_dev->txqs = NULL;
+
+ rte_free(nic_dev->rxqs);
+ nic_dev->rxqs = NULL;
+}
+
+static void
+hinic3_disable_queue_intr(struct rte_eth_dev *dev)
+{
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ struct rte_intr_handle *intr_handle = dev->intr_handle;
+ int msix_intr;
+ int i;
+
+ if (intr_handle->intr_vec == NULL)
+ return;
+
+ for (i = 0; i < nic_dev->num_rqs; i++) {
+ msix_intr = intr_handle->intr_vec[i];
+ hinic3_set_msix_state(nic_dev->hwdev, (u16)msix_intr,
+ HINIC3_MSIX_DISABLE);
+ hinic3_misx_intr_clear_resend_bit(nic_dev->hwdev,
+ (u16)msix_intr,
+ MSIX_RESEND_TIMER_CLEAR);
+ }
+}
+
+/**
+ * Start the device.
+ *
+ * Initialize the function table, TXQ and RXQ contexts, configure RX offload,
+ * and enable the vport and port to prepare for receiving packets.
+ *
+ * @param[in] eth_dev
+ * Pointer to ethernet device structure.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_dev_start(struct rte_eth_dev *eth_dev)
+{
+ struct hinic3_nic_dev *nic_dev = NULL;
+ u64 nic_features;
+ struct hinic3_rxq *rxq = NULL;
+ int i;
+ int err;
+
+ nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(eth_dev);
+ err = hinic3_copy_mempool_init(nic_dev);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Create copy mempool failed, dev_name: %s",
+ eth_dev->data->name);
+ goto init_mpool_fail;
+ }
+ hinic3_update_msix_info(nic_dev->hwdev->hwif);
+ hinic3_disable_interrupt(eth_dev);
+ err = hinic3_init_rxq_intr(eth_dev);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Init rxq intr fail, eth_dev:%s",
+ eth_dev->data->name);
+ goto init_rxq_intr_fail;
+ }
+
+ hinic3_get_func_rx_buf_size(nic_dev);
+ err = hinic3_init_function_table(nic_dev->hwdev, nic_dev->rx_buff_len);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Init function table failed, dev_name: %s",
+ eth_dev->data->name);
+ goto init_func_tbl_fail;
+ }
+
+ nic_features = hinic3_get_driver_feature(nic_dev);
+ /*
+ * You can update the features supported by the driver according to the
+ * scenario here.
+ */
+ nic_features &= DEFAULT_DRV_FEATURE;
+ hinic3_update_driver_feature(nic_dev, nic_features);
+
+ err = hinic3_set_feature_to_hw(nic_dev->hwdev, &nic_dev->feature_cap,
+ 1);
+ if (err) {
+ PMD_DRV_LOG(ERR,
+ "Failed to set nic features to hardware, err %d",
+ err);
+ goto get_feature_err;
+ }
+
+ /* Reset rx and tx queue. */
+ hinic3_reset_rx_queue(eth_dev);
+ hinic3_reset_tx_queue(eth_dev);
+
+ /* Init txq and rxq context. */
+ err = hinic3_init_qp_ctxts(nic_dev);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Init qp context failed, dev_name: %s",
+ eth_dev->data->name);
+ goto init_qp_fail;
+ }
+
+ /* Set default mtu. */
+ err = hinic3_set_port_mtu(nic_dev->hwdev, nic_dev->mtu_size);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Set mtu_size[%d] failed, dev_name: %s",
+ nic_dev->mtu_size, eth_dev->data->name);
+ goto set_mtu_fail;
+ }
+ eth_dev->data->mtu = nic_dev->mtu_size;
+
+ /* Set rx configuration: rss/checksum/rxmode/lro. */
+ err = hinic3_set_rxtx_configure(eth_dev);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Set rx config failed, dev_name: %s",
+ eth_dev->data->name);
+ goto set_rxtx_config_fail;
+ }
+
+ /* Enable dev interrupt. */
+ hinic3_enable_interrupt(eth_dev);
+ err = hinic3_start_all_rqs(eth_dev);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Set rx config failed, dev_name: %s",
+ eth_dev->data->name);
+ goto start_rqs_fail;
+ }
+
+ hinic3_start_all_sqs(eth_dev);
+
+ /* Open virtual port and ready to start packet receiving. */
+ err = hinic3_set_vport_enable(nic_dev->hwdev, true);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Enable vport failed, dev_name: %s",
+ eth_dev->data->name);
+ goto en_vport_fail;
+ }
+
+ /* Open physical port and start packet receiving. */
+ err = hinic3_set_port_enable(nic_dev->hwdev, true);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Enable physical port failed, dev_name: %s",
+ eth_dev->data->name);
+ goto en_port_fail;
+ }
+
+ /* Update eth_dev link status. */
+ if (eth_dev->data->dev_conf.intr_conf.lsc != 0)
+ (void)hinic3_link_update(eth_dev, 0);
+
+ hinic3_set_bit(HINIC3_DEV_START, &nic_dev->dev_status);
+
+ return 0;
+
+en_port_fail:
+ (void)hinic3_set_vport_enable(nic_dev->hwdev, false);
+
+en_vport_fail:
+ /* Flush TX/RX chip resources in case the vport enable partially took effect. */
+ (void)hinic3_flush_qps_res(nic_dev->hwdev);
+ rte_delay_ms(DEV_START_DELAY_MS);
+ for (i = 0; i < nic_dev->num_rqs; i++) {
+ rxq = nic_dev->rxqs[i];
+ hinic3_remove_rq_from_rx_queue_list(nic_dev, rxq->q_id);
+ hinic3_free_rxq_mbufs(rxq);
+ hinic3_dev_rx_queue_intr_disable(eth_dev, rxq->q_id);
+ eth_dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
+ eth_dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
+ }
+start_rqs_fail:
+ hinic3_remove_rxtx_configure(eth_dev);
+
+set_rxtx_config_fail:
+set_mtu_fail:
+ hinic3_free_qp_ctxts(nic_dev->hwdev);
+
+init_qp_fail:
+get_feature_err:
+init_func_tbl_fail:
+ hinic3_deinit_rxq_intr(eth_dev);
+init_rxq_intr_fail:
+ hinic3_copy_mempool_uninit(nic_dev);
+init_mpool_fail:
+ return err;
+}
+
+/**
+ * Look up or create a memory pool for storing packet buffers used in copy
+ * operations.
+ *
+ * @param[in] nic_dev
+ * Pointer to NIC device structure.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ * -ENOMEM if memory pool creation fails.
+ */
+static int
+hinic3_copy_mempool_init(struct hinic3_nic_dev *nic_dev)
+{
+ nic_dev->cpy_mpool = rte_mempool_lookup(HINCI3_CPY_MEMPOOL_NAME);
+ if (nic_dev->cpy_mpool == NULL) {
+ nic_dev->cpy_mpool = rte_pktmbuf_pool_create(HINCI3_CPY_MEMPOOL_NAME,
+ HINIC3_COPY_MEMPOOL_DEPTH, HINIC3_COPY_MEMPOOL_CACHE,
+ 0, HINIC3_COPY_MBUF_SIZE, (int)rte_socket_id());
+ if (nic_dev->cpy_mpool == NULL) {
+ PMD_DRV_LOG(ERR,
+ "Create copy mempool failed, errno: %d, "
+ "dev_name: %s",
+ rte_errno, HINCI3_CPY_MEMPOOL_NAME);
+ return -ENOMEM;
+ }
+ }
+
+ return 0;
+}
+
+/**
+ * Clear the reference to the copy memory pool without freeing it.
+ *
+ * @param[in] nic_dev
+ * Pointer to NIC device structure.
+ */
+static void
+hinic3_copy_mempool_uninit(struct hinic3_nic_dev *nic_dev)
+{
+ nic_dev->cpy_mpool = NULL;
+}
+
+/**
+ * Stop the device.
+ *
+ * Stop the PHY port and vport, flush pending IO requests, clean up context
+ * configuration and free IO resources.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ */
+static int
+hinic3_dev_stop(struct rte_eth_dev *dev)
+{
+ struct hinic3_nic_dev *nic_dev;
+ struct rte_eth_link link;
+ int err;
+
+ nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ if (!hinic3_test_and_clear_bit(HINIC3_DEV_START,
+ &nic_dev->dev_status)) {
+ PMD_DRV_LOG(INFO, "Device %s already stopped",
+ nic_dev->dev_name);
+ return 0;
+ }
+
+ /* Stop phy port and vport. */
+ err = hinic3_set_port_enable(nic_dev->hwdev, false);
+ if (err)
+ PMD_DRV_LOG(WARNING,
+ "Disable phy port failed, error: %d, "
+ "dev_name: %s, port_id: %d",
+ err, dev->data->name, dev->data->port_id);
+
+ err = hinic3_set_vport_enable(nic_dev->hwdev, false);
+ if (err)
+ PMD_DRV_LOG(WARNING,
+ "Disable vport failed, error: %d, "
+ "dev_name: %s, port_id: %d",
+ err, dev->data->name, dev->data->port_id);
+
+ /* Clear recorded link status. */
+ memset(&link, 0, sizeof(link));
+ (void)rte_eth_linkstatus_set(dev, &link);
+
+ /* Disable dp interrupt. */
+ hinic3_disable_queue_intr(dev);
+ hinic3_deinit_rxq_intr(dev);
+
+ /* Flush pending io request. */
+ hinic3_flush_txqs(nic_dev);
+
+ /* About 100ms after the vport is disabled, no packets are sent to the host. */
+ rte_delay_ms(DEV_STOP_DELAY_MS);
+
+ hinic3_flush_qps_res(nic_dev->hwdev);
+
+ /* Clean RSS table and rx_mode. */
+ hinic3_remove_rxtx_configure(dev);
+
+ /* Clean root context. */
+ hinic3_free_qp_ctxts(nic_dev->hwdev);
+
+ /* Free all tx and rx mbufs. */
+ hinic3_free_all_txq_mbufs(nic_dev);
+ hinic3_free_all_rxq_mbufs(nic_dev);
+
+ /* Free mempool. */
+ hinic3_copy_mempool_uninit(nic_dev);
+ return 0;
+}
+
+static void
+hinic3_dev_release(struct rte_eth_dev *eth_dev)
+{
+ struct hinic3_nic_dev *nic_dev =
+ HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(eth_dev);
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ int qid;
+
+ /* Release io resource. */
+ for (qid = 0; qid < nic_dev->num_sqs; qid++)
+ hinic3_tx_queue_release(eth_dev, qid);
+
+ for (qid = 0; qid < nic_dev->num_rqs; qid++)
+ hinic3_rx_queue_release(eth_dev, qid);
+
+ hinic3_deinit_sw_rxtxqs(nic_dev);
+
+ hinic3_deinit_mac_addr(eth_dev);
+ rte_free(nic_dev->mc_list);
+
+ hinic3_remove_all_vlanid(eth_dev);
+
+ hinic3_clear_bit(HINIC3_DEV_INTR_EN, &nic_dev->dev_status);
+ hinic3_set_msix_state(nic_dev->hwdev, 0, HINIC3_MSIX_DISABLE);
+ rte_intr_disable(PCI_DEV_TO_INTR_HANDLE(pci_dev));
+ (void)rte_intr_callback_unregister(PCI_DEV_TO_INTR_HANDLE(pci_dev),
+ hinic3_dev_interrupt_handler,
+ (void *)eth_dev);
+
+ /* Destroy rx mode mutex. */
+ hinic3_mutex_destroy(&nic_dev->rx_mode_mutex);
+
+ hinic3_free_nic_hwdev(nic_dev->hwdev);
+ hinic3_free_hwdev(nic_dev->hwdev);
+
+ eth_dev->rx_pkt_burst = NULL;
+ eth_dev->tx_pkt_burst = NULL;
+ eth_dev->dev_ops = NULL;
+ eth_dev->rx_queue_count = NULL;
+ eth_dev->rx_descriptor_status = NULL;
+ eth_dev->tx_descriptor_status = NULL;
+
+ rte_free(nic_dev->hwdev);
+ nic_dev->hwdev = NULL;
+}
+
+/**
+ * Close the device.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_dev_close(struct rte_eth_dev *eth_dev)
+{
+ struct hinic3_nic_dev *nic_dev =
+ HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(eth_dev);
+ int ret;
+
+ if (hinic3_test_and_set_bit(HINIC3_DEV_CLOSE, &nic_dev->dev_status)) {
+ PMD_DRV_LOG(WARNING, "Device %s already closed",
+ nic_dev->dev_name);
+ return 0;
+ }
+
+ ret = hinic3_dev_stop(eth_dev);
+
+ hinic3_dev_release(eth_dev);
+ return ret;
+}
+
+static int
+hinic3_dev_reset(__rte_unused struct rte_eth_dev *dev)
+{
+ return 0;
+}
+
+#define MIN_RX_BUFFER_SIZE 256
+#define MIN_RX_BUFFER_SIZE_SMALL_MODE 1518
+
+static int
+hinic3_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
+{
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ int err = 0;
+
+ PMD_DRV_LOG(INFO, "Set port mtu, port_id: %d, mtu: %d, max_pkt_len: %d",
+ dev->data->port_id, mtu, HINIC3_MTU_TO_PKTLEN(mtu));
+
+ if (mtu < HINIC3_MIN_MTU_SIZE || mtu > HINIC3_MAX_MTU_SIZE) {
+ PMD_DRV_LOG(ERR, "Invalid mtu: %d, must between %d and %d", mtu,
+ HINIC3_MIN_MTU_SIZE, HINIC3_MAX_MTU_SIZE);
+ return -EINVAL;
+ }
+
+ err = hinic3_set_port_mtu(nic_dev->hwdev, mtu);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Set port mtu failed, err: %d", err);
+ return err;
+ }
+
+ /* Update max frame size. */
+ HINIC3_MAX_RX_PKT_LEN(dev->data->dev_conf.rxmode) =
+ HINIC3_MTU_TO_PKTLEN(mtu);
+ nic_dev->mtu_size = mtu;
+ return err;
+}
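+/*
+ * Application-side usage sketch (standard ethdev API, illustrative only):
+ *
+ *   ret = rte_eth_dev_set_mtu(port_id, 9000);
+ *   if (ret != 0)
+ *       printf("set mtu failed: %d\n", ret);
+ */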
+
+/**
+ * Add or delete vlan id.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] vlan_id
+ * VLAN ID used to filter VLAN packets.
+ * @param[in] enable
+ * Non-zero to add the VLAN ID to the filter, zero to remove it.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int enable)
+{
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ int err = 0;
+ u16 func_id;
+
+ if (vlan_id >= RTE_ETHER_MAX_VLAN_ID)
+ return -EINVAL;
+
+ if (vlan_id == 0)
+ return 0;
+
+ func_id = hinic3_global_func_id(nic_dev->hwdev);
+
+ if (enable) {
+ /* If vlanid is already set, just return. */
+ if (hinic3_find_vlan_filter(nic_dev, vlan_id)) {
+ PMD_DRV_LOG(INFO, "Vlan %u has been added, device: %s",
+ vlan_id, nic_dev->dev_name);
+ return 0;
+ }
+
+ err = hinic3_add_vlan(nic_dev->hwdev, vlan_id, func_id);
+ } else {
+ /* If vlanid can't be found, just return. */
+ if (!hinic3_find_vlan_filter(nic_dev, vlan_id)) {
+ PMD_DRV_LOG(INFO,
+ "Vlan %u is not in the vlan filter list, "
+ "device: %s",
+ vlan_id, nic_dev->dev_name);
+ return 0;
+ }
+
+ err = hinic3_del_vlan(nic_dev->hwdev, vlan_id, func_id);
+ }
+
+ if (err) {
+ PMD_DRV_LOG(ERR,
+ "%s vlan failed, func_id: %d, vlan_id: %d, err: %d",
+ enable ? "Add" : "Remove", func_id, vlan_id, err);
+ return err;
+ }
+
+ hinic3_store_vlan_filter(nic_dev, vlan_id, enable);
+
+ PMD_DRV_LOG(INFO, "%s vlan %u succeed, device: %s",
+ enable ? "Add" : "Remove", vlan_id, nic_dev->dev_name);
+
+ return 0;
+}
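+/*
+ * Application-side sketch (standard ethdev API, illustrative only): add
+ * VLAN 100 to the filter and later remove it again.
+ *
+ *   rte_eth_dev_vlan_filter(port_id, 100, 1);
+ *   rte_eth_dev_vlan_filter(port_id, 100, 0);
+ */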
+
+/**
+ * Enable or disable vlan offload.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] mask
+ * VLAN offload mask selecting VLAN filter and/or VLAN strip settings.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_vlan_offload_set(struct rte_eth_dev *dev, int mask)
+{
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
+ bool on;
+ int err;
+
+ /* Enable or disable VLAN filter. */
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ on = (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
+ ? true
+ : false;
+ err = hinic3_set_vlan_fliter(nic_dev->hwdev, on);
+ if (err) {
+ PMD_DRV_LOG(ERR,
+ "%s vlan filter failed, device: %s, "
+ "port_id: %d, err: %d",
+ on ? "Enable" : "Disable",
+ nic_dev->dev_name, dev->data->port_id, err);
+ return err;
+ }
+
+ PMD_DRV_LOG(INFO,
+ "%s vlan filter succeed, device: %s, port_id: %d",
+ on ? "Enable" : "Disable", nic_dev->dev_name,
+ dev->data->port_id);
+ }
+
+ /* Enable or disable VLAN stripping. */
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ on = (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) ? true
+ : false;
+ err = hinic3_set_rx_vlan_offload(nic_dev->hwdev, on);
+ if (err) {
+ PMD_DRV_LOG(ERR,
+ "%s vlan strip failed, device: %s, "
+ "port_id: %d, err: %d",
+ on ? "Enable" : "Disable",
+ nic_dev->dev_name, dev->data->port_id, err);
+ return err;
+ }
+
+ PMD_DRV_LOG(INFO,
+ "%s vlan strip succeed, device: %s, port_id: %d",
+ on ? "Enable" : "Disable", nic_dev->dev_name,
+ dev->data->port_id);
+ }
+ return 0;
+}
+
+/**
+ * Enable allmulticast mode.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_dev_allmulticast_enable(struct rte_eth_dev *dev)
+{
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ u32 rx_mode;
+ int err;
+
+ err = hinic3_mutex_lock(&nic_dev->rx_mode_mutex);
+ if (err)
+ return err;
+
+ rx_mode = nic_dev->rx_mode | HINIC3_RX_MODE_MC_ALL;
+
+ err = hinic3_set_rx_mode(nic_dev->hwdev, rx_mode);
+ if (err) {
+ (void)hinic3_mutex_unlock(&nic_dev->rx_mode_mutex);
+ PMD_DRV_LOG(ERR, "Enable allmulticast failed, error: %d", err);
+ return err;
+ }
+
+ nic_dev->rx_mode = rx_mode;
+
+ (void)hinic3_mutex_unlock(&nic_dev->rx_mode_mutex);
+
+ PMD_DRV_LOG(INFO,
+ "Enable allmulticast succeed, nic_dev: %s, port_id: %d",
+ nic_dev->dev_name, dev->data->port_id);
+ return 0;
+}
+
+/**
+ * Disable allmulticast mode.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_dev_allmulticast_disable(struct rte_eth_dev *dev)
+{
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ u32 rx_mode;
+ int err;
+
+ err = hinic3_mutex_lock(&nic_dev->rx_mode_mutex);
+ if (err)
+ return err;
+
+ rx_mode = nic_dev->rx_mode & (~HINIC3_RX_MODE_MC_ALL);
+
+ err = hinic3_set_rx_mode(nic_dev->hwdev, rx_mode);
+ if (err) {
+ (void)hinic3_mutex_unlock(&nic_dev->rx_mode_mutex);
+ PMD_DRV_LOG(ERR, "Disable allmulticast failed, error: %d", err);
+ return err;
+ }
+
+ nic_dev->rx_mode = rx_mode;
+
+ (void)hinic3_mutex_unlock(&nic_dev->rx_mode_mutex);
+
+ PMD_DRV_LOG(INFO,
+ "Disable allmulticast succeed, nic_dev: %s, port_id: %d",
+ nic_dev->dev_name, dev->data->port_id);
+ return 0;
+}
+
+/**
+ * Get device generic statistics.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[out] stats
+ * Stats structure output buffer.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+{
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ struct hinic3_vport_stats vport_stats;
+ struct hinic3_rxq *rxq = NULL;
+ struct hinic3_txq *txq = NULL;
+ int i, err, q_num;
+ u64 rx_discards_pmd = 0;
+
+ err = hinic3_get_vport_stats(nic_dev->hwdev, &vport_stats);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Get vport stats from fw failed, nic_dev: %s",
+ nic_dev->dev_name);
+ return err;
+ }
+
+ dev->data->rx_mbuf_alloc_failed = 0;
+
+ /* Rx queue stats. */
+ q_num = (nic_dev->num_rqs < RTE_ETHDEV_QUEUE_STAT_CNTRS)
+ ? nic_dev->num_rqs
+ : RTE_ETHDEV_QUEUE_STAT_CNTRS;
+ for (i = 0; i < q_num; i++) {
+ rxq = nic_dev->rxqs[i];
+#ifdef HINIC3_XSTAT_MBUF_USE
+ rxq->rxq_stats.rx_left_mbuf_bytes =
+ rxq->rxq_stats.rx_alloc_mbuf_bytes -
+ rxq->rxq_stats.rx_free_mbuf_bytes;
+#endif
+ rxq->rxq_stats.errors = rxq->rxq_stats.csum_errors +
+ rxq->rxq_stats.other_errors;
+
+ stats->q_ipackets[i] = rxq->rxq_stats.packets;
+ stats->q_ibytes[i] = rxq->rxq_stats.bytes;
+ stats->q_errors[i] = rxq->rxq_stats.errors;
+
+ stats->ierrors += rxq->rxq_stats.errors;
+ rx_discards_pmd += rxq->rxq_stats.dropped;
+ dev->data->rx_mbuf_alloc_failed += rxq->rxq_stats.rx_nombuf;
+ }
+
+ /* Tx queue stats. */
+ q_num = (nic_dev->num_sqs < RTE_ETHDEV_QUEUE_STAT_CNTRS)
+ ? nic_dev->num_sqs
+ : RTE_ETHDEV_QUEUE_STAT_CNTRS;
+ for (i = 0; i < q_num; i++) {
+ txq = nic_dev->txqs[i];
+ stats->q_opackets[i] = txq->txq_stats.packets;
+ stats->q_obytes[i] = txq->txq_stats.bytes;
+ stats->oerrors += (txq->txq_stats.tx_busy +
+ txq->txq_stats.offload_errors);
+ }
+
+ /* Vport stats. */
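+ /*
+ * The firmware vport counters include packets the PMD later dropped,
+ * so PMD drops are reported as imissed and subtracted from ipackets
+ * to keep both views consistent.
+ */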
+ stats->oerrors += vport_stats.tx_discard_vport;
+
+ stats->imissed = vport_stats.rx_discard_vport + rx_discards_pmd;
+
+ stats->ipackets =
+ (vport_stats.rx_unicast_pkts_vport +
+ vport_stats.rx_multicast_pkts_vport +
+ vport_stats.rx_broadcast_pkts_vport - rx_discards_pmd);
+
+ stats->opackets = (vport_stats.tx_unicast_pkts_vport +
+ vport_stats.tx_multicast_pkts_vport +
+ vport_stats.tx_broadcast_pkts_vport);
+
+ stats->ibytes = (vport_stats.rx_unicast_bytes_vport +
+ vport_stats.rx_multicast_bytes_vport +
+ vport_stats.rx_broadcast_bytes_vport);
+
+ stats->obytes = (vport_stats.tx_unicast_bytes_vport +
+ vport_stats.tx_multicast_bytes_vport +
+ vport_stats.tx_broadcast_bytes_vport);
+ return 0;
+}
+
/**
- * Interrupt handler triggered by NIC for handling specific event.
+ * Clear device generic statistics.
*
- * @param[in] param
- * The address of parameter (struct rte_eth_dev *) regsitered before.
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_dev_stats_reset(struct rte_eth_dev *dev)
+{
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ struct hinic3_rxq *rxq = NULL;
+ struct hinic3_txq *txq = NULL;
+ int qid;
+ int err;
+
+ err = hinic3_clear_vport_stats(nic_dev->hwdev);
+ if (err)
+ return err;
+
+ for (qid = 0; qid < nic_dev->num_rqs; qid++) {
+ rxq = nic_dev->rxqs[qid];
+ memset(&rxq->rxq_stats, 0, sizeof(struct hinic3_rxq_stats));
+ }
+
+ for (qid = 0; qid < nic_dev->num_sqs; qid++) {
+ txq = nic_dev->txqs[qid];
+ memset(&txq->txq_stats, 0, sizeof(struct hinic3_txq_stats));
+ }
+
+ return 0;
+}
+
+/**
+ * Get device extended statistics.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[out] xstats
+ * Pointer to rte extended stats table.
+ * @param[in] n
+ * The size of the stats table.
+ *
+ * @return
+ * positive: Number of extended stats on success and stats is filled.
+ * negative: Failure.
+ */
+static int
+hinic3_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
+ unsigned int n)
+{
+ struct hinic3_nic_dev *nic_dev;
+ struct mag_phy_port_stats port_stats;
+ struct hinic3_vport_stats vport_stats;
+ struct hinic3_rxq *rxq = NULL;
+ struct hinic3_rxq_stats rxq_stats;
+ struct hinic3_txq *txq = NULL;
+ struct hinic3_txq_stats txq_stats;
+ u16 qid;
+ u32 i;
+ int err, count;
+
+ nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ count = hinic3_xstats_calc_num(nic_dev);
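+ /*
+ * Per the ethdev xstats contract, if the caller's table is too small,
+ * return the required number of entries without filling anything.
+ */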
+ if ((int)n < count)
+ return count;
+
+ count = 0;
+
+ /* Get stats from rxq stats structure. */
+ for (qid = 0; qid < nic_dev->num_rqs; qid++) {
+ rxq = nic_dev->rxqs[qid];
+
+#ifdef HINIC3_XSTAT_RXBUF_INFO
+ hinic3_get_stats(rxq);
+#endif
+
+#ifdef HINIC3_XSTAT_MBUF_USE
+ rxq->rxq_stats.rx_left_mbuf_bytes =
+ rxq->rxq_stats.rx_alloc_mbuf_bytes -
+ rxq->rxq_stats.rx_free_mbuf_bytes;
+#endif
+ rxq->rxq_stats.errors = rxq->rxq_stats.csum_errors +
+ rxq->rxq_stats.other_errors;
+
+ memcpy((void *)&rxq_stats, (void *)&rxq->rxq_stats,
+ sizeof(rxq->rxq_stats));
+
+ for (i = 0; i < HINIC3_RXQ_XSTATS_NUM; i++) {
+ xstats[count].value = *(uint64_t *)(((char *)&rxq_stats) +
+ hinic3_rxq_stats_strings[i].offset);
+ xstats[count].id = count;
+ count++;
+ }
+ }
+
+ /* Get stats from txq stats structure. */
+ for (qid = 0; qid < nic_dev->num_sqs; qid++) {
+ txq = nic_dev->txqs[qid];
+ memcpy((void *)&txq_stats, (void *)&txq->txq_stats,
+ sizeof(txq->txq_stats));
+
+ for (i = 0; i < HINIC3_TXQ_XSTATS_NUM; i++) {
+ xstats[count].value = *(uint64_t *)(((char *)&txq_stats) +
+ hinic3_txq_stats_strings[i].offset);
+ xstats[count].id = count;
+ count++;
+ }
+ }
+
+ /* Get stats from vport stats structure. */
+ err = hinic3_get_vport_stats(nic_dev->hwdev, &vport_stats);
+ if (err)
+ return err;
+
+ for (i = 0; i < HINIC3_VPORT_XSTATS_NUM; i++) {
+ xstats[count].value =
+ *(uint64_t *)(((char *)&vport_stats) +
+ hinic3_vport_stats_strings[i].offset);
+ xstats[count].id = count;
+ count++;
+ }
+
+ if (HINIC3_IS_VF(nic_dev->hwdev))
+ return count;
+
+ /* Get stats from phy port stats structure. */
+ err = hinic3_get_phy_port_stats(nic_dev->hwdev, &port_stats);
+ if (err)
+ return err;
+
+ for (i = 0; i < HINIC3_PHYPORT_XSTATS_NUM; i++) {
+ xstats[count].value =
+ *(uint64_t *)(((char *)&port_stats) +
+ hinic3_phyport_stats_strings[i].offset);
+ xstats[count].id = count;
+ count++;
+ }
+
+ return count;
+}
+
+/**
+ * Clear device extended statistics.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_dev_xstats_reset(struct rte_eth_dev *dev)
+{
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ int err;
+
+ err = hinic3_dev_stats_reset(dev);
+ if (err)
+ return err;
+
+ if (hinic3_func_type(nic_dev->hwdev) != TYPE_VF) {
+ err = hinic3_clear_phy_port_stats(nic_dev->hwdev);
+ if (err)
+ return err;
+ }
+
+ return 0;
+}
+
+/**
+ * Retrieve names of extended device statistics.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[out] xstats_names
+ * Buffer to insert names into.
+ *
+ * @return
+ * Number of xstats names.
+ */
+static int
+hinic3_dev_xstats_get_names(struct rte_eth_dev *dev,
+ struct rte_eth_xstat_name *xstats_names,
+ __rte_unused unsigned int limit)
+{
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ int count = 0;
+ u16 i, q_num;
+
+ if (xstats_names == NULL)
+ return hinic3_xstats_calc_num(nic_dev);
+
+ /* Get pmd rxq stats name. */
+ for (q_num = 0; q_num < nic_dev->num_rqs; q_num++) {
+ for (i = 0; i < HINIC3_RXQ_XSTATS_NUM; i++) {
+ snprintf(xstats_names[count].name,
+ sizeof(xstats_names[count].name),
+ "rxq%d_%s_pmd", q_num,
+ hinic3_rxq_stats_strings[i].name);
+ count++;
+ }
+ }
+
+ /* Get pmd txq stats name. */
+ for (q_num = 0; q_num < nic_dev->num_sqs; q_num++) {
+ for (i = 0; i < HINIC3_TXQ_XSTATS_NUM; i++) {
+ snprintf(xstats_names[count].name,
+ sizeof(xstats_names[count].name),
+ "txq%d_%s_pmd", q_num,
+ hinic3_txq_stats_strings[i].name);
+ count++;
+ }
+ }
+
+ /* Get vport stats name. */
+ for (i = 0; i < HINIC3_VPORT_XSTATS_NUM; i++) {
+ snprintf(xstats_names[count].name,
+ sizeof(xstats_names[count].name), "%s",
+ hinic3_vport_stats_strings[i].name);
+ count++;
+ }
+
+ if (HINIC3_IS_VF(nic_dev->hwdev))
+ return count;
+
+ /* Get phy port stats name. */
+ for (i = 0; i < HINIC3_PHYPORT_XSTATS_NUM; i++) {
+ snprintf(xstats_names[count].name,
+ sizeof(xstats_names[count].name), "%s",
+ hinic3_phyport_stats_strings[i].name);
+ count++;
+ }
+
+ return count;
+}
+
+/**
+ * Get the supported packet types of an Ethernet device.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[out] no_of_elements
+ * Number of ptype elements. Must be initialized to 0.
+ *
+ * @return
+ * On success, array of supported ptypes with no_of_elements > 0.
+ * On failure, NULL.
*/
+static const uint32_t *
+hinic3_dev_supported_ptypes_get(__rte_unused struct rte_eth_dev *dev,
+ __rte_unused size_t *no_of_elements)
+{
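+ /* Ptype parsing is not supported yet; report no supported ptypes. */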
+ return NULL;
+}
+
static void
-hinic3_dev_interrupt_handler(void *param)
+hinic3_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+ struct rte_eth_rxq_info *rxq_info)
+{
+ struct hinic3_rxq *rxq = dev->data->rx_queues[queue_id];
+
+ rxq_info->mp = rxq->mb_pool;
+ rxq_info->nb_desc = rxq->q_depth;
+}
+
+static void
+hinic3_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+ struct rte_eth_txq_info *txq_qinfo)
+{
+ struct hinic3_txq *txq = dev->data->tx_queues[queue_id];
+
+ txq_qinfo->nb_desc = txq->q_depth;
+}
+
+/**
+ * Update MAC address.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] addr
+ * Pointer to MAC address.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_set_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *addr)
{
- struct rte_eth_dev *dev = param;
struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ char mac_addr[RTE_ETHER_ADDR_FMT_SIZE];
+ u16 func_id;
+ int err;
- if (!hinic3_get_bit(HINIC3_DEV_INTR_EN, &nic_dev->dev_status)) {
- PMD_DRV_LOG(WARNING,
- "Intr is disabled, ignore intr event, "
- "dev_name: %s, port_id: %d",
- nic_dev->dev_name, dev->data->port_id);
+ if (!rte_is_valid_assigned_ether_addr(addr)) {
+ rte_ether_format_addr(mac_addr, RTE_ETHER_ADDR_FMT_SIZE, addr);
+ PMD_DRV_LOG(ERR, "Set invalid MAC address %s", mac_addr);
+ return -EINVAL;
+ }
+
+ func_id = hinic3_global_func_id(nic_dev->hwdev);
+ err = hinic3_update_mac(nic_dev->hwdev,
+ nic_dev->default_addr.addr_bytes,
+ addr->addr_bytes, 0, func_id);
+ if (err)
+ return err;
+
+ rte_ether_addr_copy(addr, &nic_dev->default_addr);
+ rte_ether_format_addr(mac_addr, RTE_ETHER_ADDR_FMT_SIZE,
+ &nic_dev->default_addr);
+
+ PMD_DRV_LOG(INFO, "Set new MAC address %s", mac_addr);
+ return 0;
+}
+
+/**
+ * Remove a MAC address.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] index
+ * MAC address index.
+ */
+static void
+hinic3_mac_addr_remove(struct rte_eth_dev *dev, uint32_t index)
+{
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ u16 func_id;
+ int err;
+
+ if (index >= HINIC3_MAX_UC_MAC_ADDRS) {
+ PMD_DRV_LOG(INFO, "Remove MAC index(%u) is out of range",
+ index);
return;
}
- /* Aeq0 msg handler. */
- hinic3_dev_handle_aeq_event(nic_dev->hwdev, param);
+ func_id = hinic3_global_func_id(nic_dev->hwdev);
+ err = hinic3_del_mac(nic_dev->hwdev,
+ dev->data->mac_addrs[index].addr_bytes, 0,
+ func_id);
+ if (err)
+ PMD_DRV_LOG(ERR, "Remove MAC index(%u) failed", index);
+}
+
+/**
+ * Add a MAC address.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] mac_addr
+ * MAC address to register.
+ * @param[in] index
+ * MAC address index.
+ * @param[in] vmdq
+ * VMDq pool index to associate address with (unused).
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_mac_addr_add(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr,
+ uint32_t index, __rte_unused uint32_t vmdq)
+{
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ unsigned int i;
+ u16 func_id;
+ int err;
+
+ if (!rte_is_valid_assigned_ether_addr(mac_addr)) {
+ PMD_DRV_LOG(ERR, "Add invalid MAC address");
+ return -EINVAL;
+ }
+
+ if (index >= HINIC3_MAX_UC_MAC_ADDRS) {
+ PMD_DRV_LOG(ERR, "Add MAC index(%u) is out of range", index);
+ return -EINVAL;
+ }
+
+ /* Make sure this address is not already configured. */
+ for (i = 0; i < HINIC3_MAX_UC_MAC_ADDRS; i++) {
+ if (rte_is_same_ether_addr(mac_addr,
+ &dev->data->mac_addrs[i])) {
+ PMD_DRV_LOG(ERR, "MAC address is already configured");
+ return -EADDRINUSE;
+ }
+ }
+
+ func_id = hinic3_global_func_id(nic_dev->hwdev);
+ err = hinic3_set_mac(nic_dev->hwdev, mac_addr->addr_bytes, 0, func_id);
+ if (err)
+ return err;
+
+ return 0;
}
+/**
+ * Delete all multicast MAC addresses from the NIC device.
+ *
+ * This function iterates over the list of multicast MAC addresses and removes
+ * each address from the NIC device by calling `hinic3_del_mac`. After each
+ * deletion, the address is reset to zero.
+ *
+ * @param[in] nic_dev
+ * Pointer to NIC device structure.
+ */
static void
-hinic3_deinit_sw_rxtxqs(struct hinic3_nic_dev *nic_dev)
+hinic3_delete_mc_addr_list(struct hinic3_nic_dev *nic_dev)
{
- rte_free(nic_dev->txqs);
- nic_dev->txqs = NULL;
+ u16 func_id;
+ u32 i;
- rte_free(nic_dev->rxqs);
- nic_dev->rxqs = NULL;
+ func_id = hinic3_global_func_id(nic_dev->hwdev);
+
+ for (i = 0; i < HINIC3_MAX_MC_MAC_ADDRS; i++) {
+ if (rte_is_zero_ether_addr(&nic_dev->mc_list[i]))
+ break;
+
+ hinic3_del_mac(nic_dev->hwdev, nic_dev->mc_list[i].addr_bytes,
+ 0, func_id);
+ memset(&nic_dev->mc_list[i], 0, sizeof(struct rte_ether_addr));
+ }
+}
+
+/**
+ * Set multicast MAC address.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] mc_addr_set
+ * Pointer to multicast MAC address.
+ * @param[in] nb_mc_addr
+ * The number of multicast MAC address to set.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_set_mc_addr_list(struct rte_eth_dev *dev,
+ struct rte_ether_addr *mc_addr_set, uint32_t nb_mc_addr)
+{
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ char mac_addr[RTE_ETHER_ADDR_FMT_SIZE];
+ u16 func_id;
+ int err;
+ u32 i;
+
+ func_id = hinic3_global_func_id(nic_dev->hwdev);
+
+ /* Delete the old multicast addresses first. */
+ hinic3_delete_mc_addr_list(nic_dev);
+
+ if (nb_mc_addr > HINIC3_MAX_MC_MAC_ADDRS)
+ return -EINVAL;
+
+ for (i = 0; i < nb_mc_addr; i++) {
+ if (!rte_is_multicast_ether_addr(&mc_addr_set[i])) {
+ rte_ether_format_addr(mac_addr, RTE_ETHER_ADDR_FMT_SIZE,
+ &mc_addr_set[i]);
+ PMD_DRV_LOG(ERR,
+ "Set mc MAC addr failed, addr(%s) invalid",
+ mac_addr);
+ return -EINVAL;
+ }
+ }
+
+ for (i = 0; i < nb_mc_addr; i++) {
+ err = hinic3_set_mac(nic_dev->hwdev, mc_addr_set[i].addr_bytes,
+ 0, func_id);
+ if (err) {
+ hinic3_delete_mc_addr_list(nic_dev);
+ return err;
+ }
+
+ rte_ether_addr_copy(&mc_addr_set[i], &nic_dev->mc_list[i]);
+ }
+
+ return 0;
+}
+
+static int
+hinic3_get_reg(__rte_unused struct rte_eth_dev *dev,
+ __rte_unused struct rte_dev_reg_info *regs)
+{
+ return 0;
}
+static const struct eth_dev_ops hinic3_pmd_ops = {
+ .dev_configure = hinic3_dev_configure,
+ .dev_infos_get = hinic3_dev_infos_get,
+ .fw_version_get = hinic3_fw_version_get,
+ .dev_set_link_up = hinic3_dev_set_link_up,
+ .dev_set_link_down = hinic3_dev_set_link_down,
+ .link_update = hinic3_link_update,
+ .rx_queue_setup = hinic3_rx_queue_setup,
+ .tx_queue_setup = hinic3_tx_queue_setup,
+ .rx_queue_release = hinic3_rx_queue_release,
+ .tx_queue_release = hinic3_tx_queue_release,
+ .rx_queue_start = hinic3_dev_rx_queue_start,
+ .rx_queue_stop = hinic3_dev_rx_queue_stop,
+ .tx_queue_start = hinic3_dev_tx_queue_start,
+ .tx_queue_stop = hinic3_dev_tx_queue_stop,
+ .rx_queue_intr_enable = hinic3_dev_rx_queue_intr_enable,
+ .rx_queue_intr_disable = hinic3_dev_rx_queue_intr_disable,
+ .dev_start = hinic3_dev_start,
+ .dev_stop = hinic3_dev_stop,
+ .dev_close = hinic3_dev_close,
+ .dev_reset = hinic3_dev_reset,
+ .mtu_set = hinic3_dev_set_mtu,
+ .vlan_filter_set = hinic3_vlan_filter_set,
+ .vlan_offload_set = hinic3_vlan_offload_set,
+ .allmulticast_enable = hinic3_dev_allmulticast_enable,
+ .allmulticast_disable = hinic3_dev_allmulticast_disable,
+ .stats_get = hinic3_dev_stats_get,
+ .stats_reset = hinic3_dev_stats_reset,
+ .xstats_get = hinic3_dev_xstats_get,
+ .xstats_reset = hinic3_dev_xstats_reset,
+ .xstats_get_names = hinic3_dev_xstats_get_names,
+ .dev_supported_ptypes_get = hinic3_dev_supported_ptypes_get,
+ .rxq_info_get = hinic3_rxq_info_get,
+ .txq_info_get = hinic3_txq_info_get,
+ .mac_addr_set = hinic3_set_mac_addr,
+ .mac_addr_remove = hinic3_mac_addr_remove,
+ .mac_addr_add = hinic3_mac_addr_add,
+ .set_mc_addr_list = hinic3_set_mc_addr_list,
+ .get_reg = hinic3_get_reg,
+};
+
+static const struct eth_dev_ops hinic3_pmd_vf_ops = {
+ .dev_configure = hinic3_dev_configure,
+ .dev_infos_get = hinic3_dev_infos_get,
+ .fw_version_get = hinic3_fw_version_get,
+ .rx_queue_setup = hinic3_rx_queue_setup,
+ .tx_queue_setup = hinic3_tx_queue_setup,
+ .rx_queue_intr_enable = hinic3_dev_rx_queue_intr_enable,
+ .rx_queue_intr_disable = hinic3_dev_rx_queue_intr_disable,
+
+ .rx_queue_start = hinic3_dev_rx_queue_start,
+ .rx_queue_stop = hinic3_dev_rx_queue_stop,
+ .tx_queue_start = hinic3_dev_tx_queue_start,
+ .tx_queue_stop = hinic3_dev_tx_queue_stop,
+
+ .dev_start = hinic3_dev_start,
+ .link_update = hinic3_link_update,
+ .rx_queue_release = hinic3_rx_queue_release,
+ .tx_queue_release = hinic3_tx_queue_release,
+ .dev_stop = hinic3_dev_stop,
+ .dev_close = hinic3_dev_close,
+ .mtu_set = hinic3_dev_set_mtu,
+ .vlan_filter_set = hinic3_vlan_filter_set,
+ .vlan_offload_set = hinic3_vlan_offload_set,
+ .allmulticast_enable = hinic3_dev_allmulticast_enable,
+ .allmulticast_disable = hinic3_dev_allmulticast_disable,
+ .stats_get = hinic3_dev_stats_get,
+ .stats_reset = hinic3_dev_stats_reset,
+ .xstats_get = hinic3_dev_xstats_get,
+ .xstats_reset = hinic3_dev_xstats_reset,
+ .xstats_get_names = hinic3_dev_xstats_get_names,
+ .rxq_info_get = hinic3_rxq_info_get,
+ .txq_info_get = hinic3_txq_info_get,
+ .mac_addr_set = hinic3_set_mac_addr,
+ .mac_addr_remove = hinic3_mac_addr_remove,
+ .mac_addr_add = hinic3_mac_addr_add,
+ .set_mc_addr_list = hinic3_set_mc_addr_list,
+};
+
/**
* Init mac_vlan table in hardwares.
*
@@ -319,6 +3194,15 @@ hinic3_func_init(struct rte_eth_dev *eth_dev)
nic_dev->max_sqs = hinic3_func_max_sqs(nic_dev->hwdev);
nic_dev->max_rqs = hinic3_func_max_rqs(nic_dev->hwdev);
+ if (HINIC3_FUNC_TYPE(nic_dev->hwdev) == TYPE_VF)
+ eth_dev->dev_ops = &hinic3_pmd_vf_ops;
+ else
+ eth_dev->dev_ops = &hinic3_pmd_ops;
+
+ eth_dev->rx_queue_count = hinic3_dev_rx_queue_count;
+ eth_dev->rx_descriptor_status = hinic3_dev_rx_descriptor_status;
+ eth_dev->tx_descriptor_status = hinic3_dev_tx_descriptor_status;
+
err = hinic3_init_nic_hwdev(nic_dev->hwdev);
if (err) {
PMD_DRV_LOG(ERR, "Init nic hwdev failed, dev_name: %s",
diff --git a/drivers/net/hinic3/hinic3_nic_io.c b/drivers/net/hinic3/hinic3_nic_io.c
new file mode 100644
index 0000000000..aba5a641bc
--- /dev/null
+++ b/drivers/net/hinic3/hinic3_nic_io.c
@@ -0,0 +1,827 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Huawei Technologies Co., Ltd
+ */
+
+#include <rte_bus_pci.h>
+#include <rte_config.h>
+#include <rte_errno.h>
+#include <rte_ether.h>
+#include <rte_io.h>
+#include <rte_malloc.h>
+#include <rte_mbuf.h>
+#include <rte_mempool.h>
+#include <rte_pci.h>
+
+#include "base/hinic3_compat.h"
+#include "base/hinic3_cmd.h"
+#include "base/hinic3_cmdq.h"
+#include "base/hinic3_hw_comm.h"
+#include "base/hinic3_nic_cfg.h"
+#include "hinic3_ethdev.h"
+#include "hinic3_nic_io.h"
+#include "hinic3_rx.h"
+#include "hinic3_tx.h"
+
+#define HINIC3_DEFAULT_TX_CI_PENDING_LIMIT 3
+#define HINIC3_DEFAULT_TX_CI_COALESCING_TIME 16
+#define HINIC3_DEFAULT_DROP_THD_ON 0xFFFF
+#define HINIC3_DEFAULT_DROP_THD_OFF 0
+
+#define WQ_PREFETCH_MAX 6
+#define WQ_PREFETCH_MIN 1
+#define WQ_PREFETCH_THRESHOLD 256
+
+#define HINIC3_Q_CTXT_MAX \
+ ((u16)(((HINIC3_CMDQ_BUF_SIZE - 8) - RTE_PKTMBUF_HEADROOM) / 64))
+
+enum hinic3_qp_ctxt_type {
+ HINIC3_QP_CTXT_TYPE_SQ,
+ HINIC3_QP_CTXT_TYPE_RQ,
+};
+
+struct hinic3_qp_ctxt_header {
+ u16 num_queues;
+ u16 queue_type;
+ u16 start_qid;
+ u16 rsvd;
+};
+
+struct hinic3_sq_ctxt {
+ u32 ci_pi;
+ u32 drop_mode_sp; /**< Packet drop mode and special flags. */
+ u32 wq_pfn_hi_owner; /**< High PFN and ownership flag. */
+ u32 wq_pfn_lo; /**< Low bits of work queue PFN. */
+
+ u32 rsvd0; /**< Reserved field 0. */
+ u32 pkt_drop_thd; /**< Packet drop threshold. */
+ u32 global_sq_id;
+ u32 vlan_ceq_attr; /**< VLAN and CEQ attributes. */
+
+ u32 pref_cache; /**< Cache prefetch settings for the queue. */
+ u32 pref_ci_owner; /**< Prefetch settings for CI and ownership. */
+ u32 pref_wq_pfn_hi_ci; /**< Prefetch settings for high PFN and CI. */
+ u32 pref_wq_pfn_lo; /**< Prefetch settings for low PFN. */
+
+ u32 rsvd8; /**< Reserved field 8. */
+ u32 rsvd9; /**< Reserved field 9. */
+ u32 wq_block_pfn_hi; /**< High bits of work queue block PFN. */
+ u32 wq_block_pfn_lo; /**< Low bits of work queue block PFN. */
+};
+
+struct hinic3_rq_ctxt {
+ u32 ci_pi;
+ u32 ceq_attr; /**< Completion event queue attributes. */
+ u32 wq_pfn_hi_type_owner; /**< High PFN, WQE type and ownership flag. */
+ u32 wq_pfn_lo; /**< Low bits of work queue PFN. */
+
+ u32 rsvd[3]; /**< Reserved field. */
+ u32 cqe_sge_len; /**< CQE scatter/gather element length. */
+
+ u32 pref_cache; /**< Cache prefetch settings for the queue. */
+ u32 pref_ci_owner; /**< Prefetch settings for CI and ownership. */
+ u32 pref_wq_pfn_hi_ci; /**< Prefetch settings for high PFN and CI. */
+ u32 pref_wq_pfn_lo; /**< Prefetch settings for low PFN. */
+
+ u32 pi_paddr_hi; /**< High 32-bits of PI DMA address. */
+ u32 pi_paddr_lo; /**< Low 32-bits of PI DMA address. */
+ u32 wq_block_pfn_hi; /**< High bits of work queue block PFN. */
+ u32 wq_block_pfn_lo; /**< Low bits of work queue block PFN. */
+};
+
+struct hinic3_sq_ctxt_block {
+ struct hinic3_qp_ctxt_header cmdq_hdr;
+ struct hinic3_sq_ctxt sq_ctxt[HINIC3_Q_CTXT_MAX];
+};
+
+struct hinic3_rq_ctxt_block {
+ struct hinic3_qp_ctxt_header cmdq_hdr;
+ struct hinic3_rq_ctxt rq_ctxt[HINIC3_Q_CTXT_MAX];
+};
+
+struct hinic3_clean_queue_ctxt {
+ struct hinic3_qp_ctxt_header cmdq_hdr;
+ u32 rsvd;
+};
+
+#define SQ_CTXT_SIZE(num_sqs) \
+ ((u16)(sizeof(struct hinic3_qp_ctxt_header) + \
+ (num_sqs) * sizeof(struct hinic3_sq_ctxt)))
+
+#define RQ_CTXT_SIZE(num_rqs) \
+ ((u16)(sizeof(struct hinic3_qp_ctxt_header) + \
+ (num_rqs) * sizeof(struct hinic3_rq_ctxt)))
+
+#define CI_IDX_HIGH_SHIFT 12
+
+#define CI_HIGH_IDX(val) ((val) >> CI_IDX_HIGH_SHIFT)
+
+#define SQ_CTXT_PI_IDX_SHIFT 0
+#define SQ_CTXT_CI_IDX_SHIFT 16
+
+#define SQ_CTXT_PI_IDX_MASK 0xFFFFU
+#define SQ_CTXT_CI_IDX_MASK 0xFFFFU
+
+#define SQ_CTXT_CI_PI_SET(val, member) \
+ (((val) & SQ_CTXT_##member##_MASK) << SQ_CTXT_##member##_SHIFT)
+
+#define SQ_CTXT_MODE_SP_FLAG_SHIFT 0
+#define SQ_CTXT_MODE_PKT_DROP_SHIFT 1
+
+#define SQ_CTXT_MODE_SP_FLAG_MASK 0x1U
+#define SQ_CTXT_MODE_PKT_DROP_MASK 0x1U
+
+#define SQ_CTXT_MODE_SET(val, member) \
+ (((val) & SQ_CTXT_MODE_##member##_MASK) \
+ << SQ_CTXT_MODE_##member##_SHIFT)
+
+#define SQ_CTXT_WQ_PAGE_HI_PFN_SHIFT 0
+#define SQ_CTXT_WQ_PAGE_OWNER_SHIFT 23
+
+#define SQ_CTXT_WQ_PAGE_HI_PFN_MASK 0xFFFFFU
+#define SQ_CTXT_WQ_PAGE_OWNER_MASK 0x1U
+
+#define SQ_CTXT_WQ_PAGE_SET(val, member) \
+ (((val) & SQ_CTXT_WQ_PAGE_##member##_MASK) \
+ << SQ_CTXT_WQ_PAGE_##member##_SHIFT)
+
+#define SQ_CTXT_PKT_DROP_THD_ON_SHIFT 0
+#define SQ_CTXT_PKT_DROP_THD_OFF_SHIFT 16
+
+#define SQ_CTXT_PKT_DROP_THD_ON_MASK 0xFFFFU
+#define SQ_CTXT_PKT_DROP_THD_OFF_MASK 0xFFFFU
+
+#define SQ_CTXT_PKT_DROP_THD_SET(val, member) \
+ (((val) & SQ_CTXT_PKT_DROP_##member##_MASK) \
+ << SQ_CTXT_PKT_DROP_##member##_SHIFT)
+
+#define SQ_CTXT_GLOBAL_SQ_ID_SHIFT 0
+
+#define SQ_CTXT_GLOBAL_SQ_ID_MASK 0x1FFFU
+
+#define SQ_CTXT_GLOBAL_QUEUE_ID_SET(val, member) \
+ (((val) & SQ_CTXT_##member##_MASK) << SQ_CTXT_##member##_SHIFT)
+
+#define SQ_CTXT_VLAN_TAG_SHIFT 0
+#define SQ_CTXT_VLAN_TYPE_SEL_SHIFT 16
+#define SQ_CTXT_VLAN_INSERT_MODE_SHIFT 19
+#define SQ_CTXT_VLAN_CEQ_EN_SHIFT 23
+
+#define SQ_CTXT_VLAN_TAG_MASK 0xFFFFU
+#define SQ_CTXT_VLAN_TYPE_SEL_MASK 0x7U
+#define SQ_CTXT_VLAN_INSERT_MODE_MASK 0x3U
+#define SQ_CTXT_VLAN_CEQ_EN_MASK 0x1U
+
+#define SQ_CTXT_VLAN_CEQ_SET(val, member) \
+ (((val) & SQ_CTXT_VLAN_##member##_MASK) \
+ << SQ_CTXT_VLAN_##member##_SHIFT)
+
+#define SQ_CTXT_PREF_CACHE_THRESHOLD_SHIFT 0
+#define SQ_CTXT_PREF_CACHE_MAX_SHIFT 14
+#define SQ_CTXT_PREF_CACHE_MIN_SHIFT 25
+
+#define SQ_CTXT_PREF_CACHE_THRESHOLD_MASK 0x3FFFU
+#define SQ_CTXT_PREF_CACHE_MAX_MASK 0x7FFU
+#define SQ_CTXT_PREF_CACHE_MIN_MASK 0x7FU
+
+#define SQ_CTXT_PREF_CI_HI_SHIFT 0
+#define SQ_CTXT_PREF_OWNER_SHIFT 4
+
+#define SQ_CTXT_PREF_CI_HI_MASK 0xFU
+#define SQ_CTXT_PREF_OWNER_MASK 0x1U
+
+#define SQ_CTXT_PREF_WQ_PFN_HI_SHIFT 0
+#define SQ_CTXT_PREF_CI_LOW_SHIFT 20
+
+#define SQ_CTXT_PREF_WQ_PFN_HI_MASK 0xFFFFFU
+#define SQ_CTXT_PREF_CI_LOW_MASK 0xFFFU
+
+#define SQ_CTXT_PREF_SET(val, member) \
+ (((val) & SQ_CTXT_PREF_##member##_MASK) \
+ << SQ_CTXT_PREF_##member##_SHIFT)
+
+#define SQ_CTXT_WQ_BLOCK_PFN_HI_SHIFT 0
+
+#define SQ_CTXT_WQ_BLOCK_PFN_HI_MASK 0x7FFFFFU
+
+#define SQ_CTXT_WQ_BLOCK_SET(val, member) \
+ (((val) & SQ_CTXT_WQ_BLOCK_##member##_MASK) \
+ << SQ_CTXT_WQ_BLOCK_##member##_SHIFT)
+
+#define RQ_CTXT_PI_IDX_SHIFT 0
+#define RQ_CTXT_CI_IDX_SHIFT 16
+
+#define RQ_CTXT_PI_IDX_MASK 0xFFFFU
+#define RQ_CTXT_CI_IDX_MASK 0xFFFFU
+
+#define RQ_CTXT_CI_PI_SET(val, member) \
+ (((val) & RQ_CTXT_##member##_MASK) << RQ_CTXT_##member##_SHIFT)
+
+#define RQ_CTXT_CEQ_ATTR_INTR_SHIFT 21
+#define RQ_CTXT_CEQ_ATTR_INTR_ARM_SHIFT 30
+#define RQ_CTXT_CEQ_ATTR_EN_SHIFT 31
+
+#define RQ_CTXT_CEQ_ATTR_INTR_MASK 0x3FFU
+#define RQ_CTXT_CEQ_ATTR_INTR_ARM_MASK 0x1U
+#define RQ_CTXT_CEQ_ATTR_EN_MASK 0x1U
+
+#define RQ_CTXT_CEQ_ATTR_SET(val, member) \
+ (((val) & RQ_CTXT_CEQ_ATTR_##member##_MASK) \
+ << RQ_CTXT_CEQ_ATTR_##member##_SHIFT)
+
+#define RQ_CTXT_WQ_PAGE_HI_PFN_SHIFT 0
+#define RQ_CTXT_WQ_PAGE_WQE_TYPE_SHIFT 28
+#define RQ_CTXT_WQ_PAGE_OWNER_SHIFT 31
+
+#define RQ_CTXT_WQ_PAGE_HI_PFN_MASK 0xFFFFFU
+#define RQ_CTXT_WQ_PAGE_WQE_TYPE_MASK 0x3U
+#define RQ_CTXT_WQ_PAGE_OWNER_MASK 0x1U
+
+#define RQ_CTXT_WQ_PAGE_SET(val, member) \
+ (((val) & RQ_CTXT_WQ_PAGE_##member##_MASK) \
+ << RQ_CTXT_WQ_PAGE_##member##_SHIFT)
+
+#define RQ_CTXT_CQE_LEN_SHIFT 28
+
+#define RQ_CTXT_CQE_LEN_MASK 0x3U
+
+#define RQ_CTXT_CQE_LEN_SET(val, member) \
+ (((val) & RQ_CTXT_##member##_MASK) << RQ_CTXT_##member##_SHIFT)
+
+#define RQ_CTXT_PREF_CACHE_THRESHOLD_SHIFT 0
+#define RQ_CTXT_PREF_CACHE_MAX_SHIFT 14
+#define RQ_CTXT_PREF_CACHE_MIN_SHIFT 25
+
+#define RQ_CTXT_PREF_CACHE_THRESHOLD_MASK 0x3FFFU
+#define RQ_CTXT_PREF_CACHE_MAX_MASK 0x7FFU
+#define RQ_CTXT_PREF_CACHE_MIN_MASK 0x7FU
+
+#define RQ_CTXT_PREF_CI_HI_SHIFT 0
+#define RQ_CTXT_PREF_OWNER_SHIFT 4
+
+#define RQ_CTXT_PREF_CI_HI_MASK 0xFU
+#define RQ_CTXT_PREF_OWNER_MASK 0x1U
+
+#define RQ_CTXT_PREF_WQ_PFN_HI_SHIFT 0
+#define RQ_CTXT_PREF_CI_LOW_SHIFT 20
+
+#define RQ_CTXT_PREF_WQ_PFN_HI_MASK 0xFFFFFU
+#define RQ_CTXT_PREF_CI_LOW_MASK 0xFFFU
+
+#define RQ_CTXT_PREF_SET(val, member) \
+ (((val) & RQ_CTXT_PREF_##member##_MASK) \
+ << RQ_CTXT_PREF_##member##_SHIFT)
+
+#define RQ_CTXT_WQ_BLOCK_PFN_HI_SHIFT 0
+
+#define RQ_CTXT_WQ_BLOCK_PFN_HI_MASK 0x7FFFFFU
+
+#define RQ_CTXT_WQ_BLOCK_SET(val, member) \
+ (((val) & RQ_CTXT_WQ_BLOCK_##member##_MASK) \
+ << RQ_CTXT_WQ_BLOCK_##member##_SHIFT)
+
+#define SIZE_16BYTES(size) (RTE_ALIGN((size), 16) >> 4)
+
+#define WQ_PAGE_PFN_SHIFT 12
+#define WQ_BLOCK_PFN_SHIFT 9
+
+#define WQ_PAGE_PFN(page_addr) ((page_addr) >> WQ_PAGE_PFN_SHIFT)
+#define WQ_BLOCK_PFN(page_addr) ((page_addr) >> WQ_BLOCK_PFN_SHIFT)
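+
+/*
+ * A WQ page PFN addresses 4 KiB units and a WQ block PFN addresses
+ * 512 B units, matching the hardware 0-level CLA format.
+ */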
+
+/**
+ * Prepare the command queue header and convert it to big-endian format.
+ *
+ * @param[out] qp_ctxt_hdr
+ * Pointer to command queue context header structure to be initialized.
+ * @param[in] ctxt_type
+ * Type of context (SQ/RQ) to be set in header.
+ * @param[in] num_queues
+ * Number of queues.
+ * @param[in] q_id
+ * Starting queue ID for this context.
+ */
+static void
+hinic3_qp_prepare_cmdq_header(struct hinic3_qp_ctxt_header *qp_ctxt_hdr,
+ enum hinic3_qp_ctxt_type ctxt_type,
+ u16 num_queues, u16 q_id)
+{
+ qp_ctxt_hdr->queue_type = ctxt_type;
+ qp_ctxt_hdr->num_queues = num_queues;
+ qp_ctxt_hdr->start_qid = q_id;
+ qp_ctxt_hdr->rsvd = 0;
+
+ rte_mb();
+
+ hinic3_cpu_to_be32(qp_ctxt_hdr, sizeof(*qp_ctxt_hdr));
+}
+
+/**
+ * Initialize context structure for specified TXQ by configuring various queue
+ * parameters (e.g., ci, pi, work queue page addresses).
+ *
+ * @param[in] sq
+ * Pointer to TXQ structure.
+ * @param[in] sq_id
+ * ID of TXQ being configured.
+ * @param[out] sq_ctxt
+ * Pointer to structure that will hold TXQ context.
+ */
+static void
+hinic3_sq_prepare_ctxt(struct hinic3_txq *sq, u16 sq_id,
+ struct hinic3_sq_ctxt *sq_ctxt)
+{
+ u64 wq_page_addr, wq_page_pfn, wq_block_pfn;
+ u32 wq_page_pfn_hi, wq_page_pfn_lo, wq_block_pfn_hi, wq_block_pfn_lo;
+ u16 pi_start, ci_start;
+
+ ci_start = sq->cons_idx & sq->q_mask;
+ pi_start = sq->prod_idx & sq->q_mask;
+
+ /* Read the first page from hardware table. */
+ wq_page_addr = sq->queue_buf_paddr;
+
+ wq_page_pfn = WQ_PAGE_PFN(wq_page_addr);
+ wq_page_pfn_hi = upper_32_bits(wq_page_pfn);
+ wq_page_pfn_lo = lower_32_bits(wq_page_pfn);
+
+ /* Use 0-level CLA. */
+ wq_block_pfn = WQ_BLOCK_PFN(wq_page_addr);
+ wq_block_pfn_hi = upper_32_bits(wq_block_pfn);
+ wq_block_pfn_lo = lower_32_bits(wq_block_pfn);
+
+ sq_ctxt->ci_pi = SQ_CTXT_CI_PI_SET(ci_start, CI_IDX) |
+ SQ_CTXT_CI_PI_SET(pi_start, PI_IDX);
+
+ sq_ctxt->drop_mode_sp = SQ_CTXT_MODE_SET(0, SP_FLAG) |
+ SQ_CTXT_MODE_SET(0, PKT_DROP);
+
+ sq_ctxt->wq_pfn_hi_owner = SQ_CTXT_WQ_PAGE_SET(wq_page_pfn_hi, HI_PFN) |
+ SQ_CTXT_WQ_PAGE_SET(1, OWNER);
+
+ sq_ctxt->wq_pfn_lo = wq_page_pfn_lo;
+
+ sq_ctxt->pkt_drop_thd =
+ SQ_CTXT_PKT_DROP_THD_SET(HINIC3_DEFAULT_DROP_THD_ON, THD_ON) |
+ SQ_CTXT_PKT_DROP_THD_SET(HINIC3_DEFAULT_DROP_THD_OFF, THD_OFF);
+
+ sq_ctxt->global_sq_id =
+ SQ_CTXT_GLOBAL_QUEUE_ID_SET(sq_id, GLOBAL_SQ_ID);
+
+ /* Insert c-vlan by default. */
+ sq_ctxt->vlan_ceq_attr = SQ_CTXT_VLAN_CEQ_SET(0, CEQ_EN) |
+ SQ_CTXT_VLAN_CEQ_SET(1, INSERT_MODE);
+
+ sq_ctxt->rsvd0 = 0;
+
+ sq_ctxt->pref_cache =
+ SQ_CTXT_PREF_SET(WQ_PREFETCH_MIN, CACHE_MIN) |
+ SQ_CTXT_PREF_SET(WQ_PREFETCH_MAX, CACHE_MAX) |
+ SQ_CTXT_PREF_SET(WQ_PREFETCH_THRESHOLD, CACHE_THRESHOLD);
+
+ sq_ctxt->pref_ci_owner =
+ SQ_CTXT_PREF_SET(CI_HIGH_IDX(ci_start), CI_HI) |
+ SQ_CTXT_PREF_SET(1, OWNER);
+
+ sq_ctxt->pref_wq_pfn_hi_ci =
+ SQ_CTXT_PREF_SET(ci_start, CI_LOW) |
+ SQ_CTXT_PREF_SET(wq_page_pfn_hi, WQ_PFN_HI);
+
+ sq_ctxt->pref_wq_pfn_lo = wq_page_pfn_lo;
+
+ sq_ctxt->wq_block_pfn_hi =
+ SQ_CTXT_WQ_BLOCK_SET(wq_block_pfn_hi, PFN_HI);
+
+ sq_ctxt->wq_block_pfn_lo = wq_block_pfn_lo;
+
+ rte_mb();
+
+ hinic3_cpu_to_be32(sq_ctxt, sizeof(*sq_ctxt));
+}
+
+/**
+ * Initialize context structure for specified RXQ by configuring various queue
+ * parameters (e.g., ci, pi, work queue page addresses).
+ *
+ * @param[in] rq
+ * Pointer to RXQ structure.
+ * @param[out] rq_ctxt
+ * Pointer to structure that will hold RXQ context.
+ */
+static void
+hinic3_rq_prepare_ctxt(struct hinic3_rxq *rq, struct hinic3_rq_ctxt *rq_ctxt)
+{
+ u64 wq_page_addr, wq_page_pfn, wq_block_pfn;
+ u32 wq_page_pfn_hi, wq_page_pfn_lo, wq_block_pfn_hi, wq_block_pfn_lo;
+ u16 pi_start, ci_start;
+ u16 wqe_type = rq->wqebb_shift - HINIC3_RQ_WQEBB_SHIFT;
+ u8 intr_disable;
+
+ /* RQ depth is in unit of 8 Bytes. */
+ ci_start = (u16)((rq->cons_idx & rq->q_mask) << wqe_type);
+ pi_start = (u16)((rq->prod_idx & rq->q_mask) << wqe_type);
+
+ /* Read the first page from hardware table. */
+ wq_page_addr = rq->queue_buf_paddr;
+
+ wq_page_pfn = WQ_PAGE_PFN(wq_page_addr);
+ wq_page_pfn_hi = upper_32_bits(wq_page_pfn);
+ wq_page_pfn_lo = lower_32_bits(wq_page_pfn);
+
+ /* Use 0-level CLA. */
+ wq_block_pfn = WQ_BLOCK_PFN(wq_page_addr);
+ wq_block_pfn_hi = upper_32_bits(wq_block_pfn);
+ wq_block_pfn_lo = lower_32_bits(wq_block_pfn);
+
+ rq_ctxt->ci_pi = RQ_CTXT_CI_PI_SET(ci_start, CI_IDX) |
+ RQ_CTXT_CI_PI_SET(pi_start, PI_IDX);
+
+ /* RQ does not need a CEQ; msix_entry_idx is set, but the mask is not enabled. */
+ intr_disable = rq->dp_intr_en ? 0 : 1;
+ rq_ctxt->ceq_attr = RQ_CTXT_CEQ_ATTR_SET(intr_disable, EN) |
+ RQ_CTXT_CEQ_ATTR_SET(0, INTR_ARM) |
+ RQ_CTXT_CEQ_ATTR_SET(rq->msix_entry_idx, INTR);
+
+ /* Use 32Byte WQE with SGE for CQE by default. */
+ rq_ctxt->wq_pfn_hi_type_owner =
+ RQ_CTXT_WQ_PAGE_SET(wq_page_pfn_hi, HI_PFN) |
+ RQ_CTXT_WQ_PAGE_SET(1, OWNER);
+
+ switch (wqe_type) {
+ case HINIC3_EXTEND_RQ_WQE:
+ /* Use 32Byte WQE with SGE for CQE. */
+ rq_ctxt->wq_pfn_hi_type_owner |=
+ RQ_CTXT_WQ_PAGE_SET(0, WQE_TYPE);
+ break;
+ case HINIC3_NORMAL_RQ_WQE:
+ /* Use 16Byte WQE with 32Bytes SGE for CQE. */
+ rq_ctxt->wq_pfn_hi_type_owner |=
+ RQ_CTXT_WQ_PAGE_SET(2, WQE_TYPE);
+ rq_ctxt->cqe_sge_len = RQ_CTXT_CQE_LEN_SET(1, CQE_LEN);
+ break;
+ default:
+ PMD_DRV_LOG(INFO, "Invalid rq wqe type: %u", wqe_type);
+ }
+
+ rq_ctxt->wq_pfn_lo = wq_page_pfn_lo;
+
+ rq_ctxt->pref_cache =
+ RQ_CTXT_PREF_SET(WQ_PREFETCH_MIN, CACHE_MIN) |
+ RQ_CTXT_PREF_SET(WQ_PREFETCH_MAX, CACHE_MAX) |
+ RQ_CTXT_PREF_SET(WQ_PREFETCH_THRESHOLD, CACHE_THRESHOLD);
+
+ rq_ctxt->pref_ci_owner =
+ RQ_CTXT_PREF_SET(CI_HIGH_IDX(ci_start), CI_HI) |
+ RQ_CTXT_PREF_SET(1, OWNER);
+
+ rq_ctxt->pref_wq_pfn_hi_ci =
+ RQ_CTXT_PREF_SET(wq_page_pfn_hi, WQ_PFN_HI) |
+ RQ_CTXT_PREF_SET(ci_start, CI_LOW);
+
+ rq_ctxt->pref_wq_pfn_lo = wq_page_pfn_lo;
+
+ rq_ctxt->pi_paddr_hi = upper_32_bits(rq->pi_dma_addr);
+ rq_ctxt->pi_paddr_lo = lower_32_bits(rq->pi_dma_addr);
+
+ rq_ctxt->wq_block_pfn_hi =
+ RQ_CTXT_WQ_BLOCK_SET(wq_block_pfn_hi, PFN_HI);
+
+ rq_ctxt->wq_block_pfn_lo = wq_block_pfn_lo;
+ rte_mb();
+
+ hinic3_cpu_to_be32(rq_ctxt, sizeof(*rq_ctxt));
+}
+
+/**
+ * Allocate a command buffer, prepare context for each SQ queue by setting
+ * various parameters, send context data to hardware. It processes SQ queues in
+ * batches, with each batch not exceeding `HINIC3_Q_CTXT_MAX` SQ contexts.
+ *
+ * @param[in] nic_dev
+ * Pointer to NIC device structure.
+ *
+ * @return
+ * 0 on success, a negative error code on failure.
+ * - -ENOMEM if the memory allocation for the command buffer fails.
+ * - -EFAULT if the hardware returns an error while processing the context data.
+ */
+static int
+init_sq_ctxts(struct hinic3_nic_dev *nic_dev)
+{
+ struct hinic3_sq_ctxt_block *sq_ctxt_block = NULL;
+ struct hinic3_sq_ctxt *sq_ctxt = NULL;
+ struct hinic3_cmd_buf *cmd_buf = NULL;
+ struct hinic3_txq *sq = NULL;
+ u64 out_param = 0;
+ u16 q_id, curr_id, max_ctxts, i;
+ int err = 0;
+
+ cmd_buf = hinic3_alloc_cmd_buf(nic_dev->hwdev);
+ if (!cmd_buf) {
+ PMD_DRV_LOG(ERR, "Allocate cmd buf for sq ctx failed");
+ return -ENOMEM;
+ }
+
+ q_id = 0;
+ while (q_id < nic_dev->num_sqs) {
+ sq_ctxt_block = cmd_buf->buf;
+ sq_ctxt = sq_ctxt_block->sq_ctxt;
+
+ max_ctxts = (nic_dev->num_sqs - q_id) > HINIC3_Q_CTXT_MAX
+ ? HINIC3_Q_CTXT_MAX
+ : (nic_dev->num_sqs - q_id);
+
+ hinic3_qp_prepare_cmdq_header(&sq_ctxt_block->cmdq_hdr,
+ HINIC3_QP_CTXT_TYPE_SQ, max_ctxts,
+ q_id);
+
+ for (i = 0; i < max_ctxts; i++) {
+ curr_id = q_id + i;
+ sq = nic_dev->txqs[curr_id];
+ hinic3_sq_prepare_ctxt(sq, curr_id, &sq_ctxt[i]);
+ }
+
+ cmd_buf->size = SQ_CTXT_SIZE(max_ctxts);
+ rte_mb();
+ err = hinic3_cmdq_direct_resp(nic_dev->hwdev, HINIC3_MOD_L2NIC,
+ HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX,
+ cmd_buf, &out_param, 0);
+ if (err || out_param != 0) {
+ PMD_DRV_LOG(ERR,
+ "Set SQ ctxts failed, "
+ "err: %d, out_param: %" PRIu64,
+ err, out_param);
+
+ err = -EFAULT;
+ break;
+ }
+
+ q_id += max_ctxts;
+ }
+
+ hinic3_free_cmd_buf(cmd_buf);
+ return err;
+}
+
+/**
+ * Initialize context for all RQ in device.
+ *
+ * @param[in] nic_dev
+ * Pointer to NIC device structure.
+ *
+ * @return
+ * 0 on success, a negative error code on failure.
+ * - -ENOMEM if the memory allocation for the command buffer fails.
+ * - -EFAULT if the hardware returns an error while processing the context data.
+ */
+static int
+init_rq_ctxts(struct hinic3_nic_dev *nic_dev)
+{
+ struct hinic3_rq_ctxt_block *rq_ctxt_block = NULL;
+ struct hinic3_rq_ctxt *rq_ctxt = NULL;
+ struct hinic3_cmd_buf *cmd_buf = NULL;
+ struct hinic3_rxq *rq = NULL;
+ u64 out_param = 0;
+ u16 q_id, curr_id, max_ctxts, i;
+ int err = 0;
+
+ cmd_buf = hinic3_alloc_cmd_buf(nic_dev->hwdev);
+ if (!cmd_buf) {
+ PMD_DRV_LOG(ERR, "Allocate cmd buf for rq ctx failed");
+ return -ENOMEM;
+ }
+
+ q_id = 0;
+ while (q_id < nic_dev->num_rqs) {
+ rq_ctxt_block = cmd_buf->buf;
+ rq_ctxt = rq_ctxt_block->rq_ctxt;
+
+ max_ctxts = (nic_dev->num_rqs - q_id) > HINIC3_Q_CTXT_MAX
+ ? HINIC3_Q_CTXT_MAX
+ : (nic_dev->num_rqs - q_id);
+
+ hinic3_qp_prepare_cmdq_header(&rq_ctxt_block->cmdq_hdr,
+ HINIC3_QP_CTXT_TYPE_RQ, max_ctxts,
+ q_id);
+
+ for (i = 0; i < max_ctxts; i++) {
+ curr_id = q_id + i;
+ rq = nic_dev->rxqs[curr_id];
+ hinic3_rq_prepare_ctxt(rq, &rq_ctxt[i]);
+ }
+
+ cmd_buf->size = RQ_CTXT_SIZE(max_ctxts);
+ rte_mb();
+ err = hinic3_cmdq_direct_resp(nic_dev->hwdev, HINIC3_MOD_L2NIC,
+ HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX,
+ cmd_buf, &out_param, 0);
+ if (err || out_param != 0) {
+ PMD_DRV_LOG(ERR,
+ "Set RQ ctxts failed, "
+ "err: %d, out_param: %" PRIu64,
+ err, out_param);
+ err = -EFAULT;
+ break;
+ }
+
+ q_id += max_ctxts;
+ }
+
+ hinic3_free_cmd_buf(cmd_buf);
+ return err;
+}
+
+/**
+ * Allocate memory for command buffer, construct related command request, send a
+ * command to hardware to clean up queue offload context.
+ *
+ * @param[in] nic_dev
+ * Pointer to NIC device structure.
+ * @param[in] ctxt_type
+ * The type of queue context to clean.
+ * The queue context type that determines which queue type to clean up.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+clean_queue_offload_ctxt(struct hinic3_nic_dev *nic_dev,
+ enum hinic3_qp_ctxt_type ctxt_type)
+{
+ struct hinic3_clean_queue_ctxt *ctxt_block = NULL;
+ struct hinic3_cmd_buf *cmd_buf;
+ u64 out_param = 0;
+ int err;
+
+ cmd_buf = hinic3_alloc_cmd_buf(nic_dev->hwdev);
+ if (!cmd_buf) {
+ PMD_DRV_LOG(ERR, "Allocate cmd buf for LRO/TSO space failed");
+ return -ENOMEM;
+ }
+
+ /* Construct related command request. */
+ ctxt_block = cmd_buf->buf;
+ /* Assume max_rqs equals max_sqs. */
+ ctxt_block->cmdq_hdr.num_queues = nic_dev->max_sqs;
+ ctxt_block->cmdq_hdr.queue_type = ctxt_type;
+ ctxt_block->cmdq_hdr.start_qid = 0;
+ /*
+ * Memory barrier to keep the context writes from being reordered
+ * by compiler optimization before the command is issued.
+ */
+ rte_mb();
+
+ hinic3_cpu_to_be32(ctxt_block, sizeof(*ctxt_block));
+
+ cmd_buf->size = sizeof(*ctxt_block);
+
+ /* Send a command to hardware to clean up queue offload context. */
+ err = hinic3_cmdq_direct_resp(nic_dev->hwdev, HINIC3_MOD_L2NIC,
+ HINIC3_UCODE_CMD_CLEAN_QUEUE_CONTEXT,
+ cmd_buf, &out_param, 0);
+ if ((err) || (out_param)) {
+ PMD_DRV_LOG(ERR,
+ "Clean queue offload ctxts failed, "
+ "err: %d, out_param: %" PRIu64,
+ err, out_param);
+ err = -EFAULT;
+ }
+
+ hinic3_free_cmd_buf(cmd_buf);
+ return err;
+}
+
+static int
+clean_qp_offload_ctxt(struct hinic3_nic_dev *nic_dev)
+{
+ /* Clean LRO/TSO context space. */
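+ /* Nonzero is returned if either the SQ or the RQ cleanup fails. */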
+ return (clean_queue_offload_ctxt(nic_dev, HINIC3_QP_CTXT_TYPE_SQ) ||
+ clean_queue_offload_ctxt(nic_dev, HINIC3_QP_CTXT_TYPE_RQ));
+}
+
+void
+hinic3_get_func_rx_buf_size(void *dev)
+{
+ struct hinic3_nic_dev *nic_dev = (struct hinic3_nic_dev *)dev;
+ struct hinic3_rxq *rxq = NULL;
+ u16 q_id;
+ u16 buf_size = 0;
+
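+ /* Track the smallest configured RX buffer length across all queues. */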
+ for (q_id = 0; q_id < nic_dev->num_rqs; q_id++) {
+ rxq = nic_dev->rxqs[q_id];
+
+ if (rxq == NULL)
+ continue;
+
+ if (q_id == 0)
+ buf_size = rxq->buf_len;
+
+ buf_size = buf_size > rxq->buf_len ? rxq->buf_len : buf_size;
+ }
+
+ nic_dev->rx_buff_len = buf_size;
+}
+
+int
+hinic3_init_qp_ctxts(void *dev)
+{
+ struct hinic3_nic_dev *nic_dev = NULL;
+ struct hinic3_hwdev *hwdev = NULL;
+ struct hinic3_sq_attr sq_attr;
+ u32 rq_depth = 0;
+ u32 sq_depth = 0;
+ u16 q_id;
+ int err;
+
+ if (!dev)
+ return -EINVAL;
+
+ nic_dev = (struct hinic3_nic_dev *)dev;
+ hwdev = nic_dev->hwdev;
+
+ err = init_sq_ctxts(nic_dev);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Init SQ ctxts failed");
+ return err;
+ }
+
+ err = init_rq_ctxts(nic_dev);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Init RQ ctxts failed");
+ return err;
+ }
+
+ err = clean_qp_offload_ctxt(nic_dev);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Clean qp offload ctxts failed");
+ return err;
+ }
+
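+ /* Root-context RQ depth accounts for the RQ WQE type, hence the shift. */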
+ if (nic_dev->num_rqs != 0)
+ rq_depth = ((u32)nic_dev->rxqs[0]->q_depth)
+ << nic_dev->rxqs[0]->wqe_type;
+
+ if (nic_dev->num_sqs != 0)
+ sq_depth = nic_dev->txqs[0]->q_depth;
+
+ err = hinic3_set_root_ctxt(hwdev, rq_depth, sq_depth,
+ nic_dev->rx_buff_len);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Set root context failed");
+ return err;
+ }
+
+ /* Configure CI tables for each SQ. */
+ for (q_id = 0; q_id < nic_dev->num_sqs; q_id++) {
+ sq_attr.ci_dma_base = nic_dev->txqs[q_id]->ci_dma_base >> 0x2;
+ sq_attr.pending_limit = HINIC3_DEFAULT_TX_CI_PENDING_LIMIT;
+ sq_attr.coalescing_time = HINIC3_DEFAULT_TX_CI_COALESCING_TIME;
+ sq_attr.intr_en = 0;
+ sq_attr.intr_idx = 0; /* Tx doesn't need interrupt. */
+ sq_attr.l2nic_sqn = q_id;
+ sq_attr.dma_attr_off = 0;
+ err = hinic3_set_ci_table(hwdev, &sq_attr);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Set ci table failed");
+ goto set_cons_idx_table_err;
+ }
+ }
+
+ return 0;
+
+set_cons_idx_table_err:
+ hinic3_clean_root_ctxt(hwdev);
+ return err;
+}
+
+void
+hinic3_free_qp_ctxts(void *hwdev)
+{
+ if (!hwdev)
+ return;
+
+ hinic3_clean_root_ctxt(hwdev);
+}
+
+void
+hinic3_update_driver_feature(void *dev, u64 s_feature)
+{
+ struct hinic3_nic_dev *nic_dev = NULL;
+
+ if (!dev)
+ return;
+
+ nic_dev = (struct hinic3_nic_dev *)dev;
+ nic_dev->feature_cap = s_feature;
+
+ PMD_DRV_LOG(INFO, "Update nic feature to 0x%" PRIx64,
+ nic_dev->feature_cap);
+}
+
+u64
+hinic3_get_driver_feature(void *dev)
+{
+ struct hinic3_nic_dev *nic_dev = NULL;
+
+ nic_dev = (struct hinic3_nic_dev *)dev;
+
+ return nic_dev->feature_cap;
+}
diff --git a/drivers/net/hinic3/hinic3_nic_io.h b/drivers/net/hinic3/hinic3_nic_io.h
new file mode 100644
index 0000000000..39ffb3c8fd
--- /dev/null
+++ b/drivers/net/hinic3/hinic3_nic_io.h
@@ -0,0 +1,169 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Huawei Technologies Co., Ltd
+ */
+
+#ifndef _HINIC3_NIC_IO_H_
+#define _HINIC3_NIC_IO_H_
+
+#define HINIC3_SQ_WQEBB_SHIFT 4
+#define HINIC3_RQ_WQEBB_SHIFT 3
+
+#define HINIC3_SQ_WQEBB_SIZE BIT(HINIC3_SQ_WQEBB_SHIFT)
+#define HINIC3_CQE_SIZE_SHIFT 4
+
+/* CI addr should be RTE_CACHE_LINE_SIZE (64B) aligned for performance. */
+#define HINIC3_CI_Q_ADDR_SIZE 64
+
+#define CI_TABLE_SIZE(num_qps, pg_sz) \
+ (RTE_ALIGN((num_qps) * HINIC3_CI_Q_ADDR_SIZE, pg_sz))
+
+#define HINIC3_CI_VADDR(base_addr, q_id) \
+ ((u8 *)(base_addr) + (q_id) * HINIC3_CI_Q_ADDR_SIZE)
+
+#define HINIC3_CI_PADDR(base_paddr, q_id) \
+ ((base_paddr) + (q_id) * HINIC3_CI_Q_ADDR_SIZE)
+
+enum hinic3_rq_wqe_type {
+ HINIC3_COMPACT_RQ_WQE,
+ HINIC3_NORMAL_RQ_WQE,
+ HINIC3_EXTEND_RQ_WQE
+};
+
+enum hinic3_queue_type {
+ HINIC3_SQ,
+ HINIC3_RQ,
+ HINIC3_MAX_QUEUE_TYPE,
+};
+
+/* Doorbell info. */
+struct hinic3_db {
+ u32 db_info;
+ u32 pi_hi;
+};
+
+#define DB_INFO_QID_SHIFT 0
+#define DB_INFO_NON_FILTER_SHIFT 22
+#define DB_INFO_CFLAG_SHIFT 23
+#define DB_INFO_COS_SHIFT 24
+#define DB_INFO_TYPE_SHIFT 27
+
+#define DB_INFO_QID_MASK 0x1FFFU
+#define DB_INFO_NON_FILTER_MASK 0x1U
+#define DB_INFO_CFLAG_MASK 0x1U
+#define DB_INFO_COS_MASK 0x7U
+#define DB_INFO_TYPE_MASK 0x1FU
+#define DB_INFO_SET(val, member) \
+ (((u32)(val) & DB_INFO_##member##_MASK) << DB_INFO_##member##_SHIFT)
+
+#define DB_PI_LOW_MASK 0xFFU
+#define DB_PI_HIGH_MASK 0xFFU
+#define DB_PI_LOW(pi) ((pi) & DB_PI_LOW_MASK)
+#define DB_PI_HI_SHIFT 8
+#define DB_PI_HIGH(pi) (((pi) >> DB_PI_HI_SHIFT) & DB_PI_HIGH_MASK)
+#define DB_INFO_UPPER_32(val) (((u64)(val)) << 32)
+
+#define DB_ADDR(db_addr, pi) ((u64 *)(db_addr) + DB_PI_LOW(pi))
+#define SRC_TYPE 1
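+
+/*
+ * Doorbell layout: the low 8 bits of the PI select the doorbell slot
+ * via DB_ADDR(), the high 8 bits of the PI live in the upper 32 bits,
+ * and queue id/cos/cflag/type are packed into db_info by DB_INFO_SET().
+ */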
+
+/* Cflag data path. */
+#define SQ_CFLAG_DP 0
+#define RQ_CFLAG_DP 1
+
+#define MASKED_QUEUE_IDX(queue, idx) ((idx) & (queue)->q_mask)
+
+#define NIC_WQE_ADDR(queue, idx) \
+ ({ \
+ typeof(queue) __queue = (queue); \
+ (void *)((u64)(__queue->queue_buf_vaddr) + \
+ ((idx) << __queue->wqebb_shift)); \
+ })
+
+/**
+ * Write send queue doorbell.
+ *
+ * @param[in] db_addr
+ * Doorbell address.
+ * @param[in] q_id
+ * Send queue id.
+ * @param[in] cos
+ * Send queue cos.
+ * @param[in] cflag
+ * Cflag data path.
+ * @param[in] pi
+ * Send queue pi.
+ */
+static inline void
+hinic3_write_db(void *db_addr, u16 q_id, int cos, u8 cflag, u16 pi)
+{
+ u64 db;
+
+ /* Hardware handles the endianness conversion. */
+ db = DB_PI_HIGH(pi);
+ db = DB_INFO_UPPER_32(db) | DB_INFO_SET(SRC_TYPE, TYPE) |
+ DB_INFO_SET(cflag, CFLAG) | DB_INFO_SET(cos, COS) |
+ DB_INFO_SET(q_id, QID);
+
+ rte_wmb(); /* Write all before the doorbell. */
+
+ rte_write64(*((u64 *)&db), DB_ADDR(db_addr, pi));
+}
+
+/**
+ * Get minimum RX buffer size for device.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ */
+void hinic3_get_func_rx_buf_size(void *dev);
+
+/**
+ * Initialize qps contexts, set SQ ci attributes, arm all SQ.
+ *
+ * Function will perform following steps:
+ * - Initialize SQ contexts.
+ * - Initialize RQ contexts.
+ * - Clean QP offload contexts of SQ and RQ.
+ * - Set root context for device.
+ * - Configure CI tables for each SQ.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int hinic3_init_qp_ctxts(void *dev);
+
+/**
+ * Free queue pair context.
+ *
+ * @param[in] hwdev
+ * Pointer to hardware device structure.
+ */
+void hinic3_free_qp_ctxts(void *hwdev);
+
+/**
+ * Update driver feature capabilities.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] s_feature
+ * Feature capabilities supported by the driver.
+ */
+void hinic3_update_driver_feature(void *dev, u64 s_feature);
+
+/**
+ * Get driver feature capabilities.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ *
+ * @return
+ * Feature capabilities of driver.
+ */
+u64 hinic3_get_driver_feature(void *dev);
+
+#endif /* _HINIC3_NIC_IO_H_ */
diff --git a/drivers/net/hinic3/hinic3_rx.c b/drivers/net/hinic3/hinic3_rx.c
new file mode 100644
index 0000000000..a1dc960236
--- /dev/null
+++ b/drivers/net/hinic3/hinic3_rx.c
@@ -0,0 +1,811 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Huawei Technologies Co., Ltd
+ */
+#include <rte_ether.h>
+#include <rte_mbuf.h>
+
+#include "base/hinic3_compat.h"
+#include "base/hinic3_pmd_hwif.h"
+#include "base/hinic3_pmd_hwdev.h"
+#include "base/hinic3_pmd_wq.h"
+#include "base/hinic3_pmd_nic_cfg.h"
+#include "hinic3_pmd_nic_io.h"
+#include "hinic3_pmd_ethdev.h"
+#include "hinic3_pmd_tx.h"
+#include "hinic3_pmd_rx.h"
+
+/**
+ * Get wqe from receive queue.
+ *
+ * @param[in] rxq
+ * Receive queue.
+ * @param[out] rq_wqe
+ * Receive queue wqe.
+ * @param[out] pi
+ * Current pi.
+ */
+static inline void
+hinic3_get_rq_wqe(struct hinic3_rxq *rxq, struct hinic3_rq_wqe **rq_wqe,
+ u16 *pi)
+{
+ *pi = MASKED_QUEUE_IDX(rxq, rxq->prod_idx);
+
+ /* Get only one rxq wqe. */
+ rxq->prod_idx++;
+ rxq->delta--;
+
+ *rq_wqe = NIC_WQE_ADDR(rxq, *pi);
+}
+
+/**
+ * Put wqe into receive queue.
+ *
+ * @param[in] rxq
+ * Receive queue.
+ * @param[in] wqe_cnt
+ * Wqebb counters.
+ */
+static inline void
+hinic3_put_rq_wqe(struct hinic3_rxq *rxq, u16 wqe_cnt)
+{
+ rxq->delta += wqe_cnt;
+ rxq->prod_idx -= wqe_cnt;
+}
+
+/**
+ * Get receive queue local pi.
+ *
+ * @param[in] rxq
+ * Receive queue.
+ * @return
+ * Receive queue local pi.
+ */
+static inline u16
+hinic3_get_rq_local_pi(struct hinic3_rxq *rxq)
+{
+ return MASKED_QUEUE_IDX(rxq, rxq->prod_idx);
+}
+
+/**
+ * Update receive queue hardware pi.
+ *
+ * @param[in] rxq
+ * Receive queue.
+ * @param[in] pi
+ * Receive queue pi to update.
+ */
+static inline void
+hinic3_update_rq_hw_pi(struct hinic3_rxq *rxq, u16 pi)
+{
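+ /* Hardware reads the PI in big-endian, shifted by the WQE type. */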
+ *rxq->pi_virt_addr =
+ (u16)cpu_to_be16((pi & rxq->q_mask) << rxq->wqe_type);
+}
+
+u16
+hinic3_rx_fill_wqe(struct hinic3_rxq *rxq)
+{
+ struct hinic3_rq_wqe *rq_wqe = NULL;
+ struct hinic3_nic_dev *nic_dev = rxq->nic_dev;
+ rte_iova_t cqe_dma;
+ u16 pi = 0;
+ u16 i;
+
+ cqe_dma = rxq->cqe_start_paddr;
+ for (i = 0; i < rxq->q_depth; i++) {
+ hinic3_get_rq_wqe(rxq, &rq_wqe, &pi);
+ if (!rq_wqe) {
+ PMD_DRV_LOG(ERR,
+ "Get rq wqe failed, rxq id: %d, wqe id: %d",
+ rxq->q_id, i);
+ break;
+ }
+
+ if (rxq->wqe_type == HINIC3_EXTEND_RQ_WQE) {
+ /* Unit of cqe length is 16B. */
+ hinic3_set_sge(&rq_wqe->extend_wqe.cqe_sect.sge,
+ cqe_dma,
+ HINIC3_CQE_LEN >> HINIC3_CQE_SIZE_SHIFT);
+ /* Use fixed len. */
+ rq_wqe->extend_wqe.buf_desc.sge.len =
+ nic_dev->rx_buff_len;
+ } else {
+ rq_wqe->normal_wqe.cqe_hi_addr = upper_32_bits(cqe_dma);
+ rq_wqe->normal_wqe.cqe_lo_addr = lower_32_bits(cqe_dma);
+ }
+
+ cqe_dma += sizeof(struct hinic3_rq_cqe);
+
+ hinic3_hw_be32_len(rq_wqe, rxq->wqebb_size);
+ }
+
+ hinic3_put_rq_wqe(rxq, i);
+
+ return i;
+}
+
+static struct rte_mbuf *
+hinic3_rx_alloc_mbuf(struct hinic3_rxq *rxq, rte_iova_t *dma_addr)
+{
+ struct rte_mbuf *mbuf = NULL;
+
+ if (unlikely(rte_pktmbuf_alloc_bulk(rxq->mb_pool, &mbuf, 1) != 0))
+ return NULL;
+
+ *dma_addr = rte_mbuf_data_iova_default(mbuf);
+#ifdef HINIC3_XSTAT_MBUF_USE
+ rxq->rxq_stats.rx_alloc_mbuf_bytes++;
+#endif
+ return mbuf;
+}
+
+#ifdef HINIC3_XSTAT_RXBUF_INFO
+static void
+hinic3_rxq_buffer_done_count(struct hinic3_rxq *rxq)
+{
+ u16 sw_ci, avail_pkts = 0, hit_done = 0, cqe_hole = 0;
+ u32 status;
+ volatile struct hinic3_rq_cqe *rx_cqe;
+
+ for (sw_ci = 0; sw_ci < rxq->q_depth; sw_ci++) {
+ rx_cqe = &rxq->rx_cqe[sw_ci];
+
+ /* Check current ci is done. */
+ status = rx_cqe->status;
+ if (!HINIC3_GET_RX_DONE(status)) {
+ if (hit_done) {
+ cqe_hole++;
+ hit_done = 0;
+ }
+ continue;
+ }
+
+ avail_pkts++;
+ hit_done = 1;
+ }
+
+ rxq->rxq_stats.rx_avail = avail_pkts;
+ rxq->rxq_stats.rx_hole = cqe_hole;
+}
+
+void
+hinic3_get_stats(struct hinic3_rxq *rxq)
+{
+ rxq->rxq_stats.rx_mbuf = rxq->q_depth - hinic3_get_rq_free_wqebb(rxq);
+
+ hinic3_rxq_buffer_done_count(rxq);
+}
+#endif
+
+u16
+hinic3_rx_fill_buffers(struct hinic3_rxq *rxq)
+{
+ struct hinic3_rq_wqe *rq_wqe = NULL;
+ struct hinic3_rx_info *rx_info = NULL;
+ struct rte_mbuf *mb = NULL;
+ rte_iova_t dma_addr;
+ u16 i, free_wqebbs;
+
+ free_wqebbs = rxq->delta - 1;
+ for (i = 0; i < free_wqebbs; i++) {
+ rx_info = &rxq->rx_info[rxq->next_to_update];
+
+ mb = hinic3_rx_alloc_mbuf(rxq, &dma_addr);
+ if (!mb) {
+ PMD_DRV_LOG(ERR, "Alloc mbuf failed");
+ break;
+ }
+
+ rx_info->mbuf = mb;
+
+ rq_wqe = NIC_WQE_ADDR(rxq, rxq->next_to_update);
+
+ /* Fill buffer address only. */
+ if (rxq->wqe_type == HINIC3_EXTEND_RQ_WQE) {
+ rq_wqe->extend_wqe.buf_desc.sge.hi_addr =
+ hinic3_hw_be32(upper_32_bits(dma_addr));
+ rq_wqe->extend_wqe.buf_desc.sge.lo_addr =
+ hinic3_hw_be32(lower_32_bits(dma_addr));
+ } else {
+ rq_wqe->normal_wqe.buf_hi_addr =
+ hinic3_hw_be32(upper_32_bits(dma_addr));
+ rq_wqe->normal_wqe.buf_lo_addr =
+ hinic3_hw_be32(lower_32_bits(dma_addr));
+ }
+
+ rxq->next_to_update = (rxq->next_to_update + 1) & rxq->q_mask;
+ }
+
+ if (likely(i > 0)) {
+#ifndef HINIC3_RQ_DB
+ hinic3_write_db(rxq->db_addr, rxq->q_id, 0, RQ_CFLAG_DP,
+ (u16)(rxq->next_to_update << rxq->wqe_type));
+ /* Initial rxq context is used; needs optimization. */
+ rxq->prod_idx = rxq->next_to_update;
+#else
+ rte_wmb();
+ rxq->prod_idx = rxq->next_to_update;
+ hinic3_update_rq_hw_pi(rxq, rxq->next_to_update);
+#endif
+ rxq->delta -= i;
+ } else {
+ PMD_DRV_LOG(ERR, "Alloc rx buffers failed, rxq_id: %d",
+ rxq->q_id);
+ }
+
+ return i;
+}
+
+void
+hinic3_free_rxq_mbufs(struct hinic3_rxq *rxq)
+{
+ struct hinic3_rx_info *rx_info = NULL;
+ int free_wqebbs = hinic3_get_rq_free_wqebb(rxq) + 1;
+ volatile struct hinic3_rq_cqe *rx_cqe = NULL;
+ u16 ci;
+
+ while (free_wqebbs++ < rxq->q_depth) {
+ ci = hinic3_get_rq_local_ci(rxq);
+
+ rx_cqe = &rxq->rx_cqe[ci];
+
+ /* Clear done bit. */
+ rx_cqe->status = 0;
+
+ rx_info = &rxq->rx_info[ci];
+ rte_pktmbuf_free(rx_info->mbuf);
+ rx_info->mbuf = NULL;
+
+ hinic3_update_rq_local_ci(rxq, 1);
+#ifdef HINIC3_XSTAT_MBUF_USE
+ rxq->rxq_stats.rx_free_mbuf_bytes++;
+#endif
+ }
+}
+
+void
+hinic3_free_all_rxq_mbufs(struct hinic3_nic_dev *nic_dev)
+{
+ u16 qid;
+
+ for (qid = 0; qid < nic_dev->num_rqs; qid++)
+ hinic3_free_rxq_mbufs(nic_dev->rxqs[qid]);
+}
+
+static u32
+hinic3_rx_alloc_mbuf_bulk(struct hinic3_rxq *rxq, struct rte_mbuf **mbufs,
+ u32 exp_mbuf_cnt)
+{
+ u32 avail_cnt;
+ int err;
+
+ err = rte_pktmbuf_alloc_bulk(rxq->mb_pool, mbufs, exp_mbuf_cnt);
+ if (likely(err == 0)) {
+ avail_cnt = exp_mbuf_cnt;
+ } else {
+ avail_cnt = 0;
+ rxq->rxq_stats.rx_nombuf += exp_mbuf_cnt;
+ }
+#ifdef HINIC3_XSTAT_MBUF_USE
+ rxq->rxq_stats.rx_alloc_mbuf_bytes += avail_cnt;
+#endif
+ return avail_cnt;
+}
+
+static int
+hinic3_rearm_rxq_mbuf(struct hinic3_rxq *rxq)
+{
+ struct hinic3_rq_wqe *rq_wqe = NULL;
+ struct rte_mbuf **rearm_mbufs;
+ u32 i, free_wqebbs, rearm_wqebbs, exp_wqebbs;
+ rte_iova_t dma_addr;
+ u16 pi;
+ struct hinic3_nic_dev *nic_dev = rxq->nic_dev;
+
+ /* Check free wqebb cnt for rearm. */
+ free_wqebbs = hinic3_get_rq_free_wqebb(rxq);
+ if (unlikely(free_wqebbs < rxq->rx_free_thresh))
+ return -ENOMEM;
+
+ /* Get rearm mbuf array. */
+ pi = hinic3_get_rq_local_pi(rxq);
+ rearm_mbufs = (struct rte_mbuf **)(&rxq->rx_info[pi]);
+
+ /* Limit the batch so the rearm does not wrap past the ring end. */
+ exp_wqebbs = rxq->q_depth - pi;
+ if (free_wqebbs < exp_wqebbs)
+ exp_wqebbs = free_wqebbs;
+
+ /* Alloc mbuf in bulk. */
+ rearm_wqebbs = hinic3_rx_alloc_mbuf_bulk(rxq, rearm_mbufs, exp_wqebbs);
+ if (unlikely(rearm_wqebbs == 0))
+ return -ENOMEM;
+
+ /* Rearm rxq mbuf. */
+ rq_wqe = NIC_WQE_ADDR(rxq, pi);
+ for (i = 0; i < rearm_wqebbs; i++) {
+ dma_addr = rte_mbuf_data_iova_default(rearm_mbufs[i]);
+
+ /* Fill buffer address only. */
+ if (rxq->wqe_type == HINIC3_EXTEND_RQ_WQE) {
+ rq_wqe->extend_wqe.buf_desc.sge.hi_addr =
+ hinic3_hw_be32(upper_32_bits(dma_addr));
+ rq_wqe->extend_wqe.buf_desc.sge.lo_addr =
+ hinic3_hw_be32(lower_32_bits(dma_addr));
+ rq_wqe->extend_wqe.buf_desc.sge.len =
+ nic_dev->rx_buff_len;
+ } else {
+ rq_wqe->normal_wqe.buf_hi_addr =
+ hinic3_hw_be32(upper_32_bits(dma_addr));
+ rq_wqe->normal_wqe.buf_lo_addr =
+ hinic3_hw_be32(lower_32_bits(dma_addr));
+ }
+
+ rq_wqe =
+ (struct hinic3_rq_wqe *)((u64)rq_wqe + rxq->wqebb_size);
+ }
+ rxq->prod_idx += rearm_wqebbs;
+ rxq->delta -= rearm_wqebbs;
+
+#ifndef HINIC3_RQ_DB
+ hinic3_write_db(rxq->db_addr, rxq->q_id, 0, RQ_CFLAG_DP,
+ ((pi + rearm_wqebbs) & rxq->q_mask) << rxq->wqe_type);
+#else
+ /* Update rxq hw_pi. */
+ rte_wmb();
+ hinic3_update_rq_hw_pi(rxq, pi + rearm_wqebbs);
+#endif
+ return 0;
+}
+
+static int
+hinic3_init_rss_key(struct hinic3_nic_dev *nic_dev,
+ struct rte_eth_rss_conf *rss_conf)
+{
+ u8 default_rss_key[HINIC3_RSS_KEY_SIZE] = {
+ 0x6d, 0x5a, 0x56, 0xda, 0x25, 0x5b, 0x0e, 0xc2,
+ 0x41, 0x67, 0x25, 0x3d, 0x43, 0xa3, 0x8f, 0xb0,
+ 0xd0, 0xca, 0x2b, 0xcb, 0xae, 0x7b, 0x30, 0xb4,
+ 0x77, 0xcb, 0x2d, 0xa3, 0x80, 0x30, 0xf2, 0x0c,
+ 0x6a, 0x42, 0xb7, 0x3b, 0xbe, 0xac, 0x01, 0xfa};
+ u8 hashkey[HINIC3_RSS_KEY_SIZE] = {0};
+ int err;
+
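+ /* Fall back to the built-in key when the user key is absent or oversized. */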
+ if (rss_conf->rss_key == NULL ||
+ rss_conf->rss_key_len > HINIC3_RSS_KEY_SIZE)
+ memcpy(hashkey, default_rss_key, HINIC3_RSS_KEY_SIZE);
+ else
+ memcpy(hashkey, rss_conf->rss_key, rss_conf->rss_key_len);
+
+ err = hinic3_rss_set_hash_key(nic_dev->hwdev, hashkey,
+ HINIC3_RSS_KEY_SIZE);
+ if (err)
+ return err;
+
+ memcpy(nic_dev->rss_key, hashkey, HINIC3_RSS_KEY_SIZE);
+ return 0;
+}
+
+void
+hinic3_add_rq_to_rx_queue_list(struct hinic3_nic_dev *nic_dev, u16 queue_id)
+{
+ u8 rss_queue_count = nic_dev->num_rss;
+
+ RTE_ASSERT(rss_queue_count <= (RTE_DIM(nic_dev->rx_queue_list) - 1));
+
+ nic_dev->rx_queue_list[rss_queue_count] = (u8)queue_id;
+ nic_dev->num_rss++;
+}
+
+void
+hinic3_init_rx_queue_list(struct hinic3_nic_dev *nic_dev)
+{
+ nic_dev->num_rss = 0;
+}
+
+static void
+hinic3_fill_indir_tbl(struct hinic3_nic_dev *nic_dev, u32 *indir_tbl)
+{
+ u8 rss_queue_count = nic_dev->num_rss;
+ int i = 0;
+ int j;
+
+ if (rss_queue_count == 0) {
+ /* Delete q_id from indir tbl. */
+ for (i = 0; i < HINIC3_RSS_INDIR_SIZE; i++)
+ /* Invalid value in indir tbl. */
+ indir_tbl[i] = 0xFFFF;
+ } else {
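+ /* Fill the indirection table round-robin across the active RSS queues. */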
+ while (i < HINIC3_RSS_INDIR_SIZE)
+ for (j = 0; (j < rss_queue_count) &&
+ (i < HINIC3_RSS_INDIR_SIZE); j++)
+ indir_tbl[i++] = nic_dev->rx_queue_list[j];
+ }
+}
+
+int
+hinic3_refill_indir_rqid(struct hinic3_rxq *rxq)
+{
+ struct hinic3_nic_dev *nic_dev = rxq->nic_dev;
+ u32 *indir_tbl;
+ int err;
+
+ indir_tbl = rte_zmalloc(NULL, HINIC3_RSS_INDIR_SIZE * sizeof(u32), 0);
+ if (!indir_tbl) {
+ PMD_DRV_LOG(ERR,
+ "Alloc indir_tbl mem failed, "
+ "eth_dev:%s, queue_idx:%d",
+ nic_dev->dev_name, rxq->q_id);
+ return -ENOMEM;
+ }
+
+ /* Build indir tbl according to the number of RSS queues. */
+ hinic3_fill_indir_tbl(nic_dev, indir_tbl);
+
+ err = hinic3_rss_set_indir_tbl(nic_dev->hwdev, indir_tbl,
+ HINIC3_RSS_INDIR_SIZE);
+ if (err) {
+ PMD_DRV_LOG(ERR,
+ "Set indrect table failed, eth_dev:%s, queue_idx:%d",
+ nic_dev->dev_name, rxq->q_id);
+ goto out;
+ }
+
+out:
+ rte_free(indir_tbl);
+ return err;
+}
+
+static int
+hinic3_init_rss_type(struct hinic3_nic_dev *nic_dev,
+ struct rte_eth_rss_conf *rss_conf)
+{
+ struct hinic3_rss_type rss_type = {0};
+ u64 rss_hf = rss_conf->rss_hf;
+ int err;
+
+ rss_type.ipv4 = (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4)) ? 1 : 0;
+ rss_type.tcp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0;
+ rss_type.ipv6 = (rss_hf & (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6)) ? 1 : 0;
+ rss_type.ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_EX) ? 1 : 0;
+ rss_type.tcp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0;
+ rss_type.tcp_ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) ? 1 : 0;
+ rss_type.udp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0;
+ rss_type.udp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0;
+
+ err = hinic3_set_rss_type(nic_dev->hwdev, rss_type);
+ return err;
+}
+
+int
+hinic3_update_rss_config(struct rte_eth_dev *dev,
+ struct rte_eth_rss_conf *rss_conf)
+{
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ u8 prio_tc[HINIC3_DCB_UP_MAX] = {0};
+ u8 num_tc = 0;
+ int err;
+
+ if (rss_conf->rss_hf == 0) {
+ rss_conf->rss_hf = HINIC3_RSS_OFFLOAD_ALL;
+ } else if ((rss_conf->rss_hf & HINIC3_RSS_OFFLOAD_ALL) == 0) {
+ PMD_DRV_LOG(ERR, "Does't support rss hash type: %" PRIu64,
+ rss_conf->rss_hf);
+ return -EINVAL;
+ }
+
+ err = hinic3_rss_template_alloc(nic_dev->hwdev);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Alloc rss template failed, err: %d", err);
+ return err;
+ }
+
+ err = hinic3_init_rss_key(nic_dev, rss_conf);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Init rss hash key failed, err: %d", err);
+ goto init_rss_fail;
+ }
+
+ err = hinic3_init_rss_type(nic_dev, rss_conf);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Init rss hash type failed, err: %d", err);
+ goto init_rss_fail;
+ }
+
+ err = hinic3_rss_set_hash_engine(nic_dev->hwdev,
+ HINIC3_RSS_HASH_ENGINE_TYPE_TOEP);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Init rss hash function failed, err: %d", err);
+ goto init_rss_fail;
+ }
+
+ err = hinic3_rss_cfg(nic_dev->hwdev, HINIC3_RSS_ENABLE, num_tc,
+ prio_tc);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Enable rss failed, err: %d", err);
+ goto init_rss_fail;
+ }
+
+ nic_dev->rss_state = HINIC3_RSS_ENABLE;
+ return 0;
+
+init_rss_fail:
+ if (hinic3_rss_template_free(nic_dev->hwdev))
+ PMD_DRV_LOG(WARNING, "Free rss template failed");
+
+ return err;
+}
+
+/**
+ * Search the given queue array to find the position of the given id.
+ * Return the queue position, or queues_count if not found.
+ */
+static u8
+hinic3_find_queue_pos_by_rq_id(u8 *queues, u8 queues_count, u8 queue_id)
+{
+ u8 pos;
+
+ for (pos = 0; pos < queues_count; pos++) {
+ if (queue_id == queues[pos])
+ break;
+ }
+
+ return pos;
+}
+
+void
+hinic3_remove_rq_from_rx_queue_list(struct hinic3_nic_dev *nic_dev,
+ u16 queue_id)
+{
+ u8 queue_pos;
+ u8 rss_queue_count = nic_dev->num_rss;
+
+ queue_pos = hinic3_find_queue_pos_by_rq_id(nic_dev->rx_queue_list,
+ rss_queue_count,
+ (u8)queue_id);
+ /*
+ * If the queue is not at the end of the list,
+ * shift the remaining queues up in the array.
+ */
+ if (queue_pos < rss_queue_count) {
+ rss_queue_count--;
+ memmove(nic_dev->rx_queue_list + queue_pos,
+ nic_dev->rx_queue_list + queue_pos + 1,
+ (rss_queue_count - queue_pos) *
+ sizeof(nic_dev->rx_queue_list[0]));
+ }
+
+ RTE_ASSERT(rss_queue_count < RTE_DIM(nic_dev->rx_queue_list));
+ nic_dev->num_rss = rss_queue_count;
+}
+
+static void
+hinic3_rx_queue_release_mbufs(struct hinic3_rxq *rxq)
+{
+ u16 sw_ci, ci_mask, free_wqebbs;
+ u16 rx_buf_len;
+ u32 status, vlan_len, pkt_len;
+ u32 pkt_left_len = 0;
+ u32 nr_released = 0;
+ struct hinic3_rx_info *rx_info;
+ volatile struct hinic3_rq_cqe *rx_cqe;
+
+ sw_ci = hinic3_get_rq_local_ci(rxq);
+ rx_info = &rxq->rx_info[sw_ci];
+ rx_cqe = &rxq->rx_cqe[sw_ci];
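+ /* The +1 below restores the WQEBB that hinic3_get_rq_free_wqebb() holds back. */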
+ free_wqebbs = hinic3_get_rq_free_wqebb(rxq) + 1;
+ status = rx_cqe->status;
+ ci_mask = rxq->q_mask;
+
+ while (free_wqebbs < rxq->q_depth) {
+ rx_buf_len = rxq->buf_len;
+ if (pkt_left_len != 0) {
+ /* Flush the continuation of a jumbo RQE. */
+ pkt_left_len = (pkt_left_len <= rx_buf_len)
+ ? 0
+ : (pkt_left_len - rx_buf_len);
+ } else if (HINIC3_GET_RX_FLUSH(status)) {
+ /* Flush one released rqe. */
+ pkt_left_len = 0;
+ } else if (HINIC3_GET_RX_DONE(status)) {
+ /* Flush single packet or first jumbo rqe. */
+ vlan_len = hinic3_hw_cpu32(rx_cqe->vlan_len);
+ pkt_len = HINIC3_GET_RX_PKT_LEN(vlan_len);
+ pkt_left_len = (pkt_len <= rx_buf_len)
+ ? 0
+ : (pkt_len - rx_buf_len);
+ } else {
+ break;
+ }
+ rte_pktmbuf_free(rx_info->mbuf);
+
+ rx_info->mbuf = NULL;
+ rx_cqe->status = 0;
+ nr_released++;
+ free_wqebbs++;
+
+ /* Update ci to next cqe. */
+ sw_ci++;
+ sw_ci &= ci_mask;
+ rx_info = &rxq->rx_info[sw_ci];
+ rx_cqe = &rxq->rx_cqe[sw_ci];
+ status = rx_cqe->status;
+ }
+
+ hinic3_update_rq_local_ci(rxq, (u16)nr_released);
+}
+
+int
+hinic3_poll_rq_empty(struct hinic3_rxq *rxq)
+{
+ unsigned long timeout;
+ int free_wqebb;
+ int err = -EFAULT;
+
+ timeout = msecs_to_jiffies(HINIC3_FLUSH_QUEUE_TIMEOUT) + jiffies;
+ do {
+ free_wqebb = hinic3_get_rq_free_wqebb(rxq) + 1;
+ if (free_wqebb == rxq->q_depth) {
+ err = 0;
+ break;
+ }
+ hinic3_rx_queue_release_mbufs(rxq);
+ rte_delay_us(1);
+ } while (time_before(jiffies, timeout));
+
+ return err;
+}
+
+void
+hinic3_dump_cqe_status(struct hinic3_rxq *rxq, u32 *cqe_done_cnt,
+ u32 *cqe_hole_cnt, u32 *head_ci, u32 *head_done)
+{
+ u16 sw_ci;
+ u16 avail_pkts = 0;
+ u16 hit_done = 0;
+ u16 cqe_hole = 0;
+ u32 status;
+ volatile struct hinic3_rq_cqe *rx_cqe;
+
+ sw_ci = hinic3_get_rq_local_ci(rxq);
+ rx_cqe = &rxq->rx_cqe[sw_ci];
+ status = rx_cqe->status;
+ *head_done = HINIC3_GET_RX_DONE(status);
+ *head_ci = sw_ci;
+
+ for (sw_ci = 0; sw_ci < rxq->q_depth; sw_ci++) {
+ rx_cqe = &rxq->rx_cqe[sw_ci];
+
+ /* Check current ci is done. */
+ status = rx_cqe->status;
+ if (!HINIC3_GET_RX_DONE(status) ||
+ !HINIC3_GET_RX_FLUSH(status)) {
+ if (hit_done) {
+ cqe_hole++;
+ hit_done = 0;
+ }
+
+ continue;
+ }
+
+ avail_pkts++;
+ hit_done = 1;
+ }
+
+ *cqe_done_cnt = avail_pkts;
+ *cqe_hole_cnt = cqe_hole;
+}
+
+int
+hinic3_stop_rq(struct rte_eth_dev *eth_dev, struct hinic3_rxq *rxq)
+{
+ struct hinic3_nic_dev *nic_dev = rxq->nic_dev;
+ u32 cqe_done_cnt = 0;
+ u32 cqe_hole_cnt = 0;
+ u32 head_ci, head_done;
+ int err;
+
+ /* Disable rxq intr. */
+ hinic3_dev_rx_queue_intr_disable(eth_dev, rxq->q_id);
+
+ /* Lock dev queue switch. */
+ rte_spinlock_lock(&nic_dev->queue_list_lock);
+
+ if (nic_dev->num_rss == 1) {
+ err = hinic3_set_vport_enable(nic_dev->hwdev, false);
+ if (err) {
+ PMD_DRV_LOG(ERR, "%s Disable vport failed, rc:%d",
+ nic_dev->dev_name, err);
+ }
+ }
+ hinic3_remove_rq_from_rx_queue_list(nic_dev, rxq->q_id);
+
+ /*
+ * If RSS is enabled, remove q_id from the RSS indirection table.
+ * If RSS is disabled, no mbuf is left in the RQ and packets will be dropped.
+ */
+ if (nic_dev->rss_state == HINIC3_RSS_ENABLE) {
+ err = hinic3_refill_indir_rqid(rxq);
+ if (err) {
+ PMD_DRV_LOG(ERR,
+ "Clear rq in indirect table failed, "
+ "eth_dev:%s, queue_idx:%d",
+ nic_dev->dev_name, rxq->q_id);
+ hinic3_add_rq_to_rx_queue_list(nic_dev, rxq->q_id);
+ goto set_indir_failed;
+ }
+ }
+
+ /* Unlock dev queue list switch. */
+ rte_spinlock_unlock(&nic_dev->queue_list_lock);
+
+ /* Send flush rxq cmd to device. */
+ err = hinic3_set_rq_flush(nic_dev->hwdev, rxq->q_id);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Flush rq failed, eth_dev:%s, queue_idx:%d",
+ nic_dev->dev_name, rxq->q_id);
+ goto rq_flush_failed;
+ }
+
+ err = hinic3_poll_rq_empty(rxq);
+ if (err) {
+ hinic3_dump_cqe_status(rxq, &cqe_done_cnt, &cqe_hole_cnt,
+ &head_ci, &head_done);
+ PMD_DRV_LOG(ERR,
+ "Poll rq empty timeout, eth_dev:%s, queue_idx:%d, "
+ "mbuf_left:%d, "
+ "cqe_done:%d, cqe_hole:%d, cqe[%d].done=%d",
+ nic_dev->dev_name, rxq->q_id,
+ rxq->q_depth - hinic3_get_rq_free_wqebb(rxq),
+ cqe_done_cnt, cqe_hole_cnt, head_ci, head_done);
+ goto poll_rq_failed;
+ }
+
+ return 0;
+
+poll_rq_failed:
+rq_flush_failed:
+ rte_spinlock_lock(&nic_dev->queue_list_lock);
+set_indir_failed:
+ hinic3_add_rq_to_rx_queue_list(nic_dev, rxq->q_id);
+ if (nic_dev->rss_state == HINIC3_RSS_ENABLE)
+ (void)hinic3_refill_indir_rqid(rxq);
+ rte_spinlock_unlock(&nic_dev->queue_list_lock);
+ hinic3_dev_rx_queue_intr_enable(eth_dev, rxq->q_id);
+ return err;
+}
+
+int
+hinic3_start_rq(struct rte_eth_dev *eth_dev, struct hinic3_rxq *rxq)
+{
+ struct hinic3_nic_dev *nic_dev = rxq->nic_dev;
+ int err = 0;
+
+ /* Lock dev queue switch. */
+ rte_spinlock_lock(&nic_dev->queue_list_lock);
+ hinic3_add_rq_to_rx_queue_list(nic_dev, rxq->q_id);
+
+ if (nic_dev->rss_state == HINIC3_RSS_ENABLE) {
+ err = hinic3_refill_indir_rqid(rxq);
+ if (err) {
+ PMD_DRV_LOG(ERR,
+ "Refill rq to indrect table failed, "
+ "eth_dev:%s, queue_idx:%d err:%d",
+ nic_dev->dev_name, rxq->q_id, err);
+ hinic3_remove_rq_from_rx_queue_list(nic_dev, rxq->q_id);
+ }
+ }
+ hinic3_rearm_rxq_mbuf(rxq);
+ if (rxq->nic_dev->num_rss == 1) {
+ err = hinic3_set_vport_enable(nic_dev->hwdev, true);
+ if (err)
+ PMD_DRV_LOG(ERR, "%s enable vport failed, err:%d",
+ nic_dev->dev_name, err);
+ }
+
+ /* Unlock dev queue list switch. */
+ rte_spinlock_unlock(&nic_dev->queue_list_lock);
+
+ hinic3_dev_rx_queue_intr_enable(eth_dev, rxq->q_id);
+
+ return err;
+}
diff --git a/drivers/net/hinic3/hinic3_rx.h b/drivers/net/hinic3/hinic3_rx.h
new file mode 100644
index 0000000000..56386b2511
--- /dev/null
+++ b/drivers/net/hinic3/hinic3_rx.h
@@ -0,0 +1,356 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Huawei Technologies Co., Ltd
+ */
+
+#ifndef _HINIC3_RX_H_
+#define _HINIC3_RX_H_
+
+#include "hinic3_wq.h"
+#include "hinic3_nic_io.h"
+
+#define RQ_CQE_OFFOLAD_TYPE_PKT_TYPE_SHIFT 0
+#define RQ_CQE_OFFOLAD_TYPE_PKT_UMBCAST_SHIFT 19
+#define RQ_CQE_OFFOLAD_TYPE_VLAN_EN_SHIFT 21
+#define RQ_CQE_OFFOLAD_TYPE_RSS_TYPE_SHIFT 24
+
+#define RQ_CQE_OFFOLAD_TYPE_PKT_TYPE_MASK 0xFFFU
+#define RQ_CQE_OFFOLAD_TYPE_PKT_UMBCAST_MASK 0x3U
+#define RQ_CQE_OFFOLAD_TYPE_VLAN_EN_MASK 0x1U
+#define RQ_CQE_OFFOLAD_TYPE_RSS_TYPE_MASK 0xFFU
+
+#define DPI_EXT_ACTION_FILED (1ULL << 32)
+
+#define RQ_CQE_OFFOLAD_TYPE_GET(val, member) \
+ (((val) >> RQ_CQE_OFFOLAD_TYPE_##member##_SHIFT) & \
+ RQ_CQE_OFFOLAD_TYPE_##member##_MASK)
+
+#define HINIC3_GET_RX_PKT_TYPE(offload_type) \
+ RQ_CQE_OFFOLAD_TYPE_GET(offload_type, PKT_TYPE)
+
+#define HINIC3_GET_RX_PKT_UMBCAST(offload_type) \
+ RQ_CQE_OFFOLAD_TYPE_GET(offload_type, PKT_UMBCAST)
+
+#define HINIC3_GET_RX_VLAN_OFFLOAD_EN(offload_type) \
+ RQ_CQE_OFFOLAD_TYPE_GET(offload_type, VLAN_EN)
+
+#define HINIC3_GET_RSS_TYPES(offload_type) \
+ RQ_CQE_OFFOLAD_TYPE_GET(offload_type, RSS_TYPE)
+
+#define RQ_CQE_SGE_VLAN_SHIFT 0
+#define RQ_CQE_SGE_LEN_SHIFT 16
+
+#define RQ_CQE_SGE_VLAN_MASK 0xFFFFU
+#define RQ_CQE_SGE_LEN_MASK 0xFFFFU
+
+#define RQ_CQE_SGE_GET(val, member) \
+ (((val) >> RQ_CQE_SGE_##member##_SHIFT) & RQ_CQE_SGE_##member##_MASK)
+
+#define HINIC3_GET_RX_VLAN_TAG(vlan_len) RQ_CQE_SGE_GET(vlan_len, VLAN)
+
+#define HINIC3_GET_RX_PKT_LEN(vlan_len) RQ_CQE_SGE_GET(vlan_len, LEN)
+
+#define RQ_CQE_STATUS_CSUM_ERR_SHIFT 0
+#define RQ_CQE_STATUS_NUM_LRO_SHIFT 16
+#define RQ_CQE_STATUS_LRO_PUSH_SHIFT 25
+#define RQ_CQE_STATUS_LRO_ENTER_SHIFT 26
+#define RQ_CQE_STATUS_LRO_INTR_SHIFT 27
+
+#define RQ_CQE_STATUS_BP_EN_SHIFT 30
+#define RQ_CQE_STATUS_RXDONE_SHIFT 31
+#define RQ_CQE_STATUS_DECRY_PKT_SHIFT 29
+#define RQ_CQE_STATUS_FLUSH_SHIFT 28
+
+#define RQ_CQE_STATUS_CSUM_ERR_MASK 0xFFFFU
+#define RQ_CQE_STATUS_NUM_LRO_MASK 0xFFU
+#define RQ_CQE_STATUS_LRO_PUSH_MASK 0X1U
+#define RQ_CQE_STATUS_LRO_ENTER_MASK 0X1U
+#define RQ_CQE_STATUS_LRO_INTR_MASK 0X1U
+#define RQ_CQE_STATUS_BP_EN_MASK 0X1U
+#define RQ_CQE_STATUS_RXDONE_MASK 0x1U
+#define RQ_CQE_STATUS_FLUSH_MASK 0x1U
+#define RQ_CQE_STATUS_DECRY_PKT_MASK 0x1U
+
+#define RQ_CQE_STATUS_GET(val, member) \
+ (((val) >> RQ_CQE_STATUS_##member##_SHIFT) & \
+ RQ_CQE_STATUS_##member##_MASK)
+
+#define HINIC3_GET_RX_CSUM_ERR(status) RQ_CQE_STATUS_GET(status, CSUM_ERR)
+
+#define HINIC3_GET_RX_DONE(status) RQ_CQE_STATUS_GET(status, RXDONE)
+
+#define HINIC3_GET_RX_FLUSH(status) RQ_CQE_STATUS_GET(status, FLUSH)
+
+#define HINIC3_GET_RX_BP_EN(status) RQ_CQE_STATUS_GET(status, BP_EN)
+
+#define HINIC3_GET_RX_NUM_LRO(status) RQ_CQE_STATUS_GET(status, NUM_LRO)
+
+#define HINIC3_RX_IS_DECRY_PKT(status) RQ_CQE_STATUS_GET(status, DECRY_PKT)
+
+#define RQ_CQE_SUPER_CQE_EN_SHIFT 0
+#define RQ_CQE_PKT_NUM_SHIFT 1
+#define RQ_CQE_PKT_LAST_LEN_SHIFT 6
+#define RQ_CQE_PKT_FIRST_LEN_SHIFT 19
+
+#define RQ_CQE_SUPER_CQE_EN_MASK 0x1
+#define RQ_CQE_PKT_NUM_MASK 0x1FU
+#define RQ_CQE_PKT_FIRST_LEN_MASK 0x1FFFU
+#define RQ_CQE_PKT_LAST_LEN_MASK 0x1FFFU
+
+#define RQ_CQE_PKT_NUM_GET(val, member) \
+ (((val) >> RQ_CQE_PKT_##member##_SHIFT) & RQ_CQE_PKT_##member##_MASK)
+#define HINIC3_GET_RQ_CQE_PKT_NUM(pkt_info) RQ_CQE_PKT_NUM_GET(pkt_info, NUM)
+
+#define RQ_CQE_SUPER_CQE_EN_GET(val, member) \
+ (((val) >> RQ_CQE_##member##_SHIFT) & RQ_CQE_##member##_MASK)
+
+#define HINIC3_GET_SUPER_CQE_EN(pkt_info) \
+ RQ_CQE_SUPER_CQE_EN_GET(pkt_info, SUPER_CQE_EN)
+
+#define RQ_CQE_PKT_LEN_GET(val, member) \
+ (((val) >> RQ_CQE_PKT_##member##_SHIFT) & RQ_CQE_PKT_##member##_MASK)
+
+#define RQ_CQE_DECRY_INFO_DECRY_STATUS_SHIFT 8
+#define RQ_CQE_DECRY_INFO_ESP_NEXT_HEAD_SHIFT 0
+
+#define RQ_CQE_DECRY_INFO_DECRY_STATUS_MASK 0xFFU
+#define RQ_CQE_DECRY_INFO_ESP_NEXT_HEAD_MASK 0xFFU
+
+#define RQ_CQE_DECRY_INFO_GET(val, member) \
+ (((val) >> RQ_CQE_DECRY_INFO_##member##_SHIFT) & \
+ RQ_CQE_DECRY_INFO_##member##_MASK)
+
+#define HINIC3_GET_DECRYPT_STATUS(decry_info) \
+ RQ_CQE_DECRY_INFO_GET(decry_info, DECRY_STATUS)
+
+#define HINIC3_GET_ESP_NEXT_HEAD(decry_info) \
+ RQ_CQE_DECRY_INFO_GET(decry_info, ESP_NEXT_HEAD)
+
+/* Rx cqe checksum err */
+#define HINIC3_RX_CSUM_IP_CSUM_ERR BIT(0)
+#define HINIC3_RX_CSUM_TCP_CSUM_ERR BIT(1)
+#define HINIC3_RX_CSUM_UDP_CSUM_ERR BIT(2)
+#define HINIC3_RX_CSUM_IGMP_CSUM_ERR BIT(3)
+#define HINIC3_RX_CSUM_ICMP_V4_CSUM_ERR BIT(4)
+#define HINIC3_RX_CSUM_ICMP_V6_CSUM_ERR BIT(5)
+#define HINIC3_RX_CSUM_SCTP_CRC_ERR BIT(6)
+#define HINIC3_RX_CSUM_HW_CHECK_NONE BIT(7)
+#define HINIC3_RX_CSUM_IPSU_OTHER_ERR BIT(8)
+
+#define HINIC3_DEFAULT_RX_CSUM_OFFLOAD 0xFFF
+#define HINIC3_CQE_LEN 32
+
+#define HINIC3_RSS_OFFLOAD_ALL ( \
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_FRAG_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+ RTE_ETH_RSS_IPV6_EX | \
+ RTE_ETH_RSS_IPV6_TCP_EX | \
+ RTE_ETH_RSS_IPV6_UDP_EX)
+
+struct hinic3_rxq_stats {
+ u64 packets;
+ u64 bytes;
+ u64 errors;
+ u64 csum_errors;
+ u64 other_errors;
+ u64 unlock_bp;
+ u64 dropped;
+
+ u64 rx_nombuf;
+ u64 rx_discards;
+ u64 burst_pkts;
+ u64 empty;
+ u64 tsc;
+#ifdef HINIC3_XSTAT_MBUF_USE
+ u64 rx_alloc_mbuf_bytes;
+ u64 rx_free_mbuf_bytes;
+ u64 rx_left_mbuf_bytes;
+#endif
+
+#ifdef HINIC3_XSTAT_RXBUF_INFO
+ u64 rx_mbuf;
+ u64 rx_avail;
+ u64 rx_hole;
+#endif
+
+#ifdef HINIC3_XSTAT_PROF_RX
+ u64 app_tsc;
+ u64 pmd_tsc;
+#endif
+};
+
+struct __rte_cache_aligned hinic3_rq_cqe {
+ u32 status;
+ u32 vlan_len;
+
+ u32 offload_type;
+ u32 hash_val;
+ u32 mark_id_0;
+ u32 mark_id_1;
+ u32 mark_id_2;
+ u32 pkt_info;
+};
+
+/**
+ * Attention: do not add any member to hinic3_rx_info, because the
+ * rxq bulk-rearm mode writes mbuf pointers directly into rx_info.
+ */
+struct hinic3_rx_info {
+ struct rte_mbuf *mbuf;
+};
+
+struct hinic3_sge_sect {
+ struct hinic3_sge sge;
+ u32 rsvd;
+};
+
+struct hinic3_rq_extend_wqe {
+ struct hinic3_sge_sect buf_desc;
+ struct hinic3_sge_sect cqe_sect;
+};
+
+struct hinic3_rq_normal_wqe {
+ u32 buf_hi_addr;
+ u32 buf_lo_addr;
+ u32 cqe_hi_addr;
+ u32 cqe_lo_addr;
+};
+
+struct hinic3_rq_wqe {
+ union {
+ struct hinic3_rq_normal_wqe normal_wqe;
+ struct hinic3_rq_extend_wqe extend_wqe;
+ };
+};
+
+struct __rte_cache_aligned hinic3_rxq {
+ struct hinic3_nic_dev *nic_dev;
+
+ u16 q_id;
+ u16 q_depth;
+ u16 q_mask;
+ u16 buf_len;
+
+ u32 rx_buff_shift;
+
+ u16 rx_free_thresh;
+ u16 rxinfo_align_end;
+ u16 wqebb_shift;
+ u16 wqebb_size;
+
+ u16 wqe_type;
+ u16 cons_idx;
+ u16 prod_idx;
+ u16 delta;
+
+ u16 next_to_update;
+ u16 port_id;
+
+ const struct rte_memzone *rq_mz;
+ void *queue_buf_vaddr; /**< rxq dma info */
+ rte_iova_t queue_buf_paddr;
+
+ const struct rte_memzone *pi_mz;
+ u16 *pi_virt_addr;
+ void *db_addr;
+ rte_iova_t pi_dma_addr;
+
+ struct hinic3_rx_info *rx_info;
+ struct hinic3_rq_cqe *rx_cqe;
+ struct rte_mempool *mb_pool;
+
+ const struct rte_memzone *cqe_mz;
+ rte_iova_t cqe_start_paddr;
+ void *cqe_start_vaddr;
+ u8 dp_intr_en;
+ u16 msix_entry_idx;
+
+ unsigned long status;
+ u64 wait_time_cycle;
+
+ struct hinic3_rxq_stats rxq_stats;
+#ifdef HINIC3_XSTAT_PROF_RX
+ uint64_t prof_rx_end_tsc; /**< Performance profiling. */
+#endif
+};
+
+u16 hinic3_rx_fill_wqe(struct hinic3_rxq *rxq);
+
+u16 hinic3_rx_fill_buffers(struct hinic3_rxq *rxq);
+
+void hinic3_free_rxq_mbufs(struct hinic3_rxq *rxq);
+
+void hinic3_free_all_rxq_mbufs(struct hinic3_nic_dev *nic_dev);
+
+int hinic3_update_rss_config(struct rte_eth_dev *dev,
+ struct rte_eth_rss_conf *rss_conf);
+
+int hinic3_poll_rq_empty(struct hinic3_rxq *rxq);
+
+void hinic3_dump_cqe_status(struct hinic3_rxq *rxq, u32 *cqe_done_cnt,
+ u32 *cqe_hole_cnt, u32 *head_ci, u32 *head_done);
+
+int hinic3_stop_rq(struct rte_eth_dev *eth_dev, struct hinic3_rxq *rxq);
+
+int hinic3_start_rq(struct rte_eth_dev *eth_dev, struct hinic3_rxq *rxq);
+
+u16 hinic3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts);
+
+void hinic3_add_rq_to_rx_queue_list(struct hinic3_nic_dev *nic_dev,
+ u16 queue_id);
+
+int hinic3_refill_indir_rqid(struct hinic3_rxq *rxq);
+
+void hinic3_init_rx_queue_list(struct hinic3_nic_dev *nic_dev);
+
+void hinic3_remove_rq_from_rx_queue_list(struct hinic3_nic_dev *nic_dev,
+ u16 queue_id);
+int hinic3_start_all_rqs(struct rte_eth_dev *eth_dev);
+
+#ifdef HINIC3_XSTAT_RXBUF_INFO
+void hinic3_get_stats(struct hinic3_rxq *rxq);
+#endif
+
+/**
+ * Get receive queue local ci.
+ *
+ * @param[in] rxq
+ * Pointer to receive queue structure.
+ * @return
+ * Receive queue local ci.
+ */
+static inline u16
+hinic3_get_rq_local_ci(struct hinic3_rxq *rxq)
+{
+ return MASKED_QUEUE_IDX(rxq, rxq->cons_idx);
+}
+
+static inline u16
+hinic3_get_rq_free_wqebb(struct hinic3_rxq *rxq)
+{
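+ /* One WQEBB is held back (delta - 1), the usual way to tell a full ring from an empty one. */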
+ return rxq->delta - 1;
+}
+
+/**
+ * Update receive queue local ci.
+ *
+ * @param[in] rxq
+ * Pointer to receive queue structure.
+ * @param[in] wqe_cnt
+ * Number of consumed WQEBBs.
+ */
+static inline void
+hinic3_update_rq_local_ci(struct hinic3_rxq *rxq, u16 wqe_cnt)
+{
+ rxq->cons_idx += wqe_cnt;
+ rxq->delta += wqe_cnt;
+}
+
+#endif /* _HINIC3_RX_H_ */
diff --git a/drivers/net/hinic3/hinic3_tx.c b/drivers/net/hinic3/hinic3_tx.c
new file mode 100644
index 0000000000..6f8c42e0c3
--- /dev/null
+++ b/drivers/net/hinic3/hinic3_tx.c
@@ -0,0 +1,274 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Huawei Technologies Co., Ltd
+ */
+
+#include <rte_ether.h>
+#include <rte_io.h>
+#include <rte_mbuf.h>
+
+#include "base/hinic3_compat.h"
+#include "base/hinic3_nic_cfg.h"
+#include "base/hinic3_hwdev.h"
+#include "hinic3_nic_io.h"
+#include "hinic3_ethdev.h"
+#include "hinic3_tx.h"
+
+#define HINIC3_TX_TASK_WRAPPED 1
+#define HINIC3_TX_BD_DESC_WRAPPED 2
+
+#define TX_MSS_DEFAULT 0x3E00
+#define TX_MSS_MIN 0x50
+
+#define HINIC3_MAX_TX_FREE_BULK 64
+
+#define MAX_PAYLOAD_OFFSET 221
+
+#define HINIC3_TX_OUTER_CHECKSUM_FLAG_SET 1
+#define HINIC3_TX_OUTER_CHECKSUM_FLAG_NO_SET 0
+
+#define HINIC3_TX_OFFLOAD_MASK \
+ (HINIC3_TX_CKSUM_OFFLOAD_MASK | HINIC3_PKT_TX_VLAN_PKT)
+
+#define HINIC3_TX_CKSUM_OFFLOAD_MASK \
+ (HINIC3_PKT_TX_IP_CKSUM | HINIC3_PKT_TX_TCP_CKSUM | \
+ HINIC3_PKT_TX_UDP_CKSUM | HINIC3_PKT_TX_SCTP_CKSUM | \
+ HINIC3_PKT_TX_OUTER_IP_CKSUM | HINIC3_PKT_TX_TCP_SEG)
+
+static inline u16
+hinic3_get_sq_free_wqebbs(struct hinic3_txq *sq)
+{
+ return ((sq->q_depth -
+ (((sq->prod_idx - sq->cons_idx) + sq->q_depth) & sq->q_mask)) -
+ 1);
+}
+
+static inline void
+hinic3_update_sq_local_ci(struct hinic3_txq *sq, u16 wqe_cnt)
+{
+ sq->cons_idx += wqe_cnt;
+}
+
+static inline u16
+hinic3_get_sq_local_ci(struct hinic3_txq *sq)
+{
+ return MASKED_QUEUE_IDX(sq, sq->cons_idx);
+}
+
+static inline u16
+hinic3_get_sq_hw_ci(struct hinic3_txq *sq)
+{
+ return MASKED_QUEUE_IDX(sq, hinic3_hw_cpu16(*sq->ci_vaddr_base));
+}
+
+int
+hinic3_start_all_sqs(struct rte_eth_dev *eth_dev)
+{
+ struct hinic3_nic_dev *nic_dev = NULL;
+ struct hinic3_txq *txq = NULL;
+ int i;
+
+ nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(eth_dev);
+
+ for (i = 0; i < nic_dev->num_sqs; i++) {
+ txq = eth_dev->data->tx_queues[i];
+ HINIC3_SET_TXQ_STARTED(txq);
+ eth_dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED;
+ }
+
+ return 0;
+}
+
+static inline void
+hinic3_free_cpy_mbuf(struct hinic3_nic_dev *nic_dev __rte_unused,
+ struct rte_mbuf *cpy_skb)
+{
+ rte_pktmbuf_free(cpy_skb);
+}
+
+/**
+ * Cleans up buffers (mbuf) in the send queue (txq) and returns these buffers to
+ * their memory pool.
+ *
+ * @param[in] txq
+ * Pointer to send queue.
+ * @param[in] free_cnt
+ * Number of mbufs to be released.
+ * @return
+ * Number of released mbufs.
+ */
+static int
+hinic3_xmit_mbuf_cleanup(struct hinic3_txq *txq, u32 free_cnt)
+{
+ struct hinic3_tx_info *tx_info = NULL;
+ struct rte_mbuf *mbuf = NULL;
+ struct rte_mbuf *mbuf_temp = NULL;
+ struct rte_mbuf *mbuf_free[HINIC3_MAX_TX_FREE_BULK];
+
+ int nb_free = 0;
+ int wqebb_cnt = 0;
+ u16 hw_ci, sw_ci, sq_mask;
+ u32 i;
+
+ hw_ci = hinic3_get_sq_hw_ci(txq);
+ sw_ci = hinic3_get_sq_local_ci(txq);
+ sq_mask = txq->q_mask;
+
+ for (i = 0; i < free_cnt; ++i) {
+ tx_info = &txq->tx_info[sw_ci];
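+ /* Stop once hardware (hw_ci) has not yet consumed all WQEBBs of this packet. */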
+ if (hw_ci == sw_ci ||
+ (((hw_ci - sw_ci) & sq_mask) < tx_info->wqebb_cnt))
+ break;
+ /*
+ * The cpy_mbuf is usually used in the large-packet
+ * scenario.
+ */
+ if (unlikely(tx_info->cpy_mbuf != NULL)) {
+ hinic3_free_cpy_mbuf(txq->nic_dev, tx_info->cpy_mbuf);
+ tx_info->cpy_mbuf = NULL;
+ }
+ sw_ci = (sw_ci + tx_info->wqebb_cnt) & sq_mask;
+
+ wqebb_cnt += tx_info->wqebb_cnt;
+ mbuf = tx_info->mbuf;
+
+ if (likely(mbuf->nb_segs == 1)) {
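+ /* Single-segment mbufs take the bulk-free fast path below. */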
+ mbuf_temp = rte_pktmbuf_prefree_seg(mbuf);
+ tx_info->mbuf = NULL;
+ if (unlikely(mbuf_temp == NULL))
+ continue;
+
+ mbuf_free[nb_free++] = mbuf_temp;
+ /*
+ * If the pools of different mbufs are different,
+ * release the mbufs of the same pool.
+ */
+ if (unlikely(mbuf_temp->pool != mbuf_free[0]->pool ||
+ nb_free >= HINIC3_MAX_TX_FREE_BULK)) {
+ rte_mempool_put_bulk(mbuf_free[0]->pool,
+ (void **)mbuf_free,
+ (nb_free - 1));
+ nb_free = 0;
+ mbuf_free[nb_free++] = mbuf_temp;
+ }
+ } else {
+ rte_pktmbuf_free(mbuf);
+ tx_info->mbuf = NULL;
+ }
+ }
+
+ if (nb_free > 0)
+ rte_mempool_put_bulk(mbuf_free[0]->pool, (void **)mbuf_free,
+ nb_free);
+
+ hinic3_update_sq_local_ci(txq, wqebb_cnt);
+
+ return i;
+}
+
+static inline void
+hinic3_tx_free_mbuf_force(struct hinic3_txq *txq __rte_unused,
+ struct rte_mbuf *mbuf)
+{
+ rte_pktmbuf_free(mbuf);
+}
+
+/**
+ * Release the mbufs and update the consumer index for the send queue.
+ *
+ * @param[in] txq
+ * Pointer to send queue.
+ */
+void
+hinic3_free_txq_mbufs(struct hinic3_txq *txq)
+{
+ struct hinic3_tx_info *tx_info = NULL;
+ u16 free_wqebbs;
+ u16 ci;
+
+ free_wqebbs = hinic3_get_sq_free_wqebbs(txq) + 1;
+
+ while (free_wqebbs < txq->q_depth) {
+ ci = hinic3_get_sq_local_ci(txq);
+
+ tx_info = &txq->tx_info[ci];
+ if (unlikely(tx_info->cpy_mbuf != NULL)) {
+ hinic3_free_cpy_mbuf(txq->nic_dev, tx_info->cpy_mbuf);
+ tx_info->cpy_mbuf = NULL;
+ }
+ hinic3_tx_free_mbuf_force(txq, tx_info->mbuf);
+ hinic3_update_sq_local_ci(txq, (u16)(tx_info->wqebb_cnt));
+
+ free_wqebbs = (u16)(free_wqebbs + tx_info->wqebb_cnt);
+ tx_info->mbuf = NULL;
+ }
+}
+
+void
+hinic3_free_all_txq_mbufs(struct hinic3_nic_dev *nic_dev)
+{
+ u16 qid;
+ for (qid = 0; qid < nic_dev->num_sqs; qid++)
+ hinic3_free_txq_mbufs(nic_dev->txqs[qid]);
+}
+
+int
+hinic3_tx_done_cleanup(void *txq, u32 free_cnt)
+{
+ struct hinic3_txq *tx_queue = txq;
+ u32 try_free_cnt = !free_cnt ? tx_queue->q_depth : free_cnt;
+
+ return hinic3_xmit_mbuf_cleanup(tx_queue, try_free_cnt);
+}
+
+int
+hinic3_stop_sq(struct hinic3_txq *txq)
+{
+ struct hinic3_nic_dev *nic_dev = txq->nic_dev;
+ unsigned long timeout;
+ int err = -EFAULT;
+ int free_wqebbs;
+
+ timeout = msecs_to_jiffies(HINIC3_FLUSH_QUEUE_TIMEOUT) + jiffies;
+ do {
+ hinic3_tx_done_cleanup(txq, 0);
+ free_wqebbs = hinic3_get_sq_free_wqebbs(txq) + 1;
+ if (free_wqebbs == txq->q_depth) {
+ err = 0;
+ break;
+ }
+
+ rte_delay_us(1);
+ } while (time_before(jiffies, timeout));
+
+ if (err)
+ PMD_DRV_LOG(WARNING,
+ "%s Wait sq empty timeout, queue_idx: %u, "
+ "sw_ci: %u, hw_ci: %u, sw_pi: %u, free_wqebbs: %u, "
+ "q_depth:%u",
+ nic_dev->dev_name, txq->q_id,
+ hinic3_get_sq_local_ci(txq),
+ hinic3_get_sq_hw_ci(txq),
+ MASKED_QUEUE_IDX(txq, txq->prod_idx), free_wqebbs,
+ txq->q_depth);
+
+ return err;
+}
+
+/**
+ * Stop all sending queues (SQs).
+ *
+ * @param[in] nic_dev
+ * Pointer to NIC device.
+ */
+void
+hinic3_flush_txqs(struct hinic3_nic_dev *nic_dev)
+{
+ u16 qid;
+ int err;
+
+ for (qid = 0; qid < nic_dev->num_sqs; qid++) {
+ err = hinic3_stop_sq(nic_dev->txqs[qid]);
+ if (err)
+ PMD_DRV_LOG(ERR, "Stop sq%d failed", qid);
+ }
+}
diff --git a/drivers/net/hinic3/hinic3_tx.h b/drivers/net/hinic3/hinic3_tx.h
new file mode 100644
index 0000000000..f4c61ea1b1
--- /dev/null
+++ b/drivers/net/hinic3/hinic3_tx.h
@@ -0,0 +1,314 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Huawei Technologies Co., Ltd
+ */
+
+#ifndef _HINIC3_TX_H_
+#define _HINIC3_TX_H_
+
+#define MAX_SINGLE_SGE_SIZE 65536
+#define HINIC3_NONTSO_PKT_MAX_SGE 38 /**< non-tso max sge 38. */
+#define HINIC3_NONTSO_SEG_NUM_VALID(num) ((num) <= HINIC3_NONTSO_PKT_MAX_SGE)
+
+#define HINIC3_TSO_PKT_MAX_SGE 127 /**< tso max sge 127. */
+#define HINIC3_TSO_SEG_NUM_INVALID(num) ((num) > HINIC3_TSO_PKT_MAX_SGE)
+
+/* Tx offload info. */
+struct hinic3_tx_offload_info {
+ u8 outer_l2_len;
+ u8 outer_l3_type;
+ u16 outer_l3_len;
+
+ u8 inner_l2_len;
+ u8 inner_l3_type;
+ u16 inner_l3_len;
+
+ u8 tunnel_length;
+ u8 tunnel_type;
+ u8 inner_l4_type;
+ u8 inner_l4_len;
+
+ u16 payload_offset;
+ u8 inner_l4_tcp_udp;
+ u8 rsvd0; /**< Reserved field. */
+};
+
+/* Tx wqe ctx. */
+struct hinic3_wqe_info {
+ u8 around; /**< Whether the WQE wraps around to the queue head. */
+ u8 cpy_mbuf_cnt;
+ u16 sge_cnt;
+
+ u8 offload;
+ u8 rsvd0; /**< Reserved field 0. */
+ u16 payload_offset;
+
+ u8 wrapped;
+ u8 owner;
+ u16 pi;
+
+ u16 wqebb_cnt;
+ u16 rsvd1; /**< Reserved field 1. */
+
+ u32 queue_info;
+};
+
+/* Descriptor for the send queue of wqe. */
+struct hinic3_sq_wqe_desc {
+ u32 ctrl_len;
+ u32 queue_info;
+ u32 hi_addr;
+ u32 lo_addr;
+};
+
+/* Describes the send queue task. */
+struct hinic3_sq_task {
+ u32 pkt_info0;
+ u32 ip_identify;
+ u32 pkt_info2;
+ u32 vlan_offload;
+};
+
+/* Descriptor that describes the transmit queue buffer. */
+struct hinic3_sq_bufdesc {
+ u32 len; /**< 31-bit length; L2NIC uses only length[17:0]. */
+ u32 rsvd; /**< Reserved field. */
+ u32 hi_addr; /**< Upper address. */
+ u32 lo_addr; /**< Lower address. */
+};
+
+/* Compact work queue entry that describes the send queue (SQ). */
+struct hinic3_sq_compact_wqe {
+ struct hinic3_sq_wqe_desc wqe_desc;
+};
+
+/* Extend work queue entry that describes the send queue (SQ). */
+struct hinic3_sq_extend_wqe {
+ struct hinic3_sq_wqe_desc wqe_desc;
+ struct hinic3_sq_task task;
+ struct hinic3_sq_bufdesc buf_desc[];
+};
+
+struct hinic3_sq_wqe {
+ union {
+ struct hinic3_sq_compact_wqe compact_wqe;
+ struct hinic3_sq_extend_wqe extend_wqe;
+ };
+};
+
+struct hinic3_sq_wqe_combo {
+ struct hinic3_sq_wqe_desc *hdr;
+ struct hinic3_sq_task *task;
+ struct hinic3_sq_bufdesc *bds_head;
+ u32 wqe_type;
+ u32 task_type;
+};
+
+enum sq_wqe_data_format {
+ SQ_NORMAL_WQE = 0,
+};
+
+/* Indicates the type of a WQE. */
+enum sq_wqe_ec_type {
+ SQ_WQE_COMPACT_TYPE = 0,
+ SQ_WQE_EXTENDED_TYPE = 1,
+};
+
+#define COMPACT_WQE_MAX_CTRL_LEN 0x3FFF
+
+/* Indicates the type of tasks with different lengths. */
+enum sq_wqe_tasksect_len_type {
+ SQ_WQE_TASKSECT_46BITS = 0,
+ SQ_WQE_TASKSECT_16BYTES = 1,
+};
+
+/** Setting and obtaining queue information */
+#define SQ_CTRL_BD0_LEN_SHIFT 0
+#define SQ_CTRL_RSVD_SHIFT 18
+#define SQ_CTRL_BUFDESC_NUM_SHIFT 19
+#define SQ_CTRL_TASKSECT_LEN_SHIFT 27
+#define SQ_CTRL_DATA_FORMAT_SHIFT 28
+#define SQ_CTRL_DIRECT_SHIFT 29
+#define SQ_CTRL_EXTENDED_SHIFT 30
+#define SQ_CTRL_OWNER_SHIFT 31
+
+#define SQ_CTRL_BD0_LEN_MASK 0x3FFFFU
+#define SQ_CTRL_RSVD_MASK 0x1U
+#define SQ_CTRL_BUFDESC_NUM_MASK 0xFFU
+#define SQ_CTRL_TASKSECT_LEN_MASK 0x1U
+#define SQ_CTRL_DATA_FORMAT_MASK 0x1U
+#define SQ_CTRL_DIRECT_MASK 0x1U
+#define SQ_CTRL_EXTENDED_MASK 0x1U
+#define SQ_CTRL_OWNER_MASK 0x1U
+
+#define SQ_CTRL_SET(val, member) \
+ (((u32)(val) & SQ_CTRL_##member##_MASK) << SQ_CTRL_##member##_SHIFT)
+#define SQ_CTRL_GET(val, member) \
+ (((val) >> SQ_CTRL_##member##_SHIFT) & SQ_CTRL_##member##_MASK)
+#define SQ_CTRL_CLEAR(val, member) \
+ ((val) & (~(SQ_CTRL_##member##_MASK << SQ_CTRL_##member##_SHIFT)))
+
+#define SQ_CTRL_QUEUE_INFO_PKT_TYPE_SHIFT 0
+#define SQ_CTRL_QUEUE_INFO_PLDOFF_SHIFT 2
+#define SQ_CTRL_QUEUE_INFO_UFO_SHIFT 10
+#define SQ_CTRL_QUEUE_INFO_TSO_SHIFT 11
+#define SQ_CTRL_QUEUE_INFO_TCPUDP_CS_SHIFT 12
+#define SQ_CTRL_QUEUE_INFO_MSS_SHIFT 13
+#define SQ_CTRL_QUEUE_INFO_SCTP_SHIFT 27
+#define SQ_CTRL_QUEUE_INFO_UC_SHIFT 28
+#define SQ_CTRL_QUEUE_INFO_PRI_SHIFT 29
+
+#define SQ_CTRL_QUEUE_INFO_PKT_TYPE_MASK 0x3U
+#define SQ_CTRL_QUEUE_INFO_PLDOFF_MASK 0xFFU
+#define SQ_CTRL_QUEUE_INFO_UFO_MASK 0x1U
+#define SQ_CTRL_QUEUE_INFO_TSO_MASK 0x1U
+#define SQ_CTRL_QUEUE_INFO_TCPUDP_CS_MASK 0x1U
+#define SQ_CTRL_QUEUE_INFO_MSS_MASK 0x3FFFU
+#define SQ_CTRL_QUEUE_INFO_SCTP_MASK 0x1U
+#define SQ_CTRL_QUEUE_INFO_UC_MASK 0x1U
+#define SQ_CTRL_QUEUE_INFO_PRI_MASK 0x7U
+
+#define SQ_CTRL_QUEUE_INFO_SET(val, member) \
+ (((u32)(val) & SQ_CTRL_QUEUE_INFO_##member##_MASK) \
+ << SQ_CTRL_QUEUE_INFO_##member##_SHIFT)
+#define SQ_CTRL_QUEUE_INFO_GET(val, member) \
+ (((val) >> SQ_CTRL_QUEUE_INFO_##member##_SHIFT) & \
+ SQ_CTRL_QUEUE_INFO_##member##_MASK)
+#define SQ_CTRL_QUEUE_INFO_CLEAR(val, member) \
+ ((val) & (~(SQ_CTRL_QUEUE_INFO_##member##_MASK \
+ << SQ_CTRL_QUEUE_INFO_##member##_SHIFT)))
+
+/* Setting and obtaining task information */
+#define SQ_TASK_INFO0_TUNNEL_FLAG_SHIFT 19
+#define SQ_TASK_INFO0_ESP_NEXT_PROTO_SHIFT 22
+#define SQ_TASK_INFO0_INNER_L4_EN_SHIFT 24
+#define SQ_TASK_INFO0_INNER_L3_EN_SHIFT 25
+#define SQ_TASK_INFO0_INNER_L4_PSEUDO_SHIFT 26
+#define SQ_TASK_INFO0_OUT_L4_EN_SHIFT 27
+#define SQ_TASK_INFO0_OUT_L3_EN_SHIFT 28
+#define SQ_TASK_INFO0_OUT_L4_PSEUDO_SHIFT 29
+#define SQ_TASK_INFO0_ESP_OFFLOAD_SHIFT 30
+#define SQ_TASK_INFO0_IPSEC_PROTO_SHIFT 31
+
+#define SQ_TASK_INFO0_TUNNEL_FLAG_MASK 0x1U
+#define SQ_TASK_INFO0_ESP_NEXT_PROTO_MASK 0x3U
+#define SQ_TASK_INFO0_INNER_L4_EN_MASK 0x1U
+#define SQ_TASK_INFO0_INNER_L3_EN_MASK 0x1U
+#define SQ_TASK_INFO0_INNER_L4_PSEUDO_MASK 0x1U
+#define SQ_TASK_INFO0_OUT_L4_EN_MASK 0x1U
+#define SQ_TASK_INFO0_OUT_L3_EN_MASK 0x1U
+#define SQ_TASK_INFO0_OUT_L4_PSEUDO_MASK 0x1U
+#define SQ_TASK_INFO0_ESP_OFFLOAD_MASK 0x1U
+#define SQ_TASK_INFO0_IPSEC_PROTO_MASK 0x1U
+
+#define SQ_TASK_INFO0_SET(val, member) \
+ (((u32)(val) & SQ_TASK_INFO0_##member##_MASK) \
+ << SQ_TASK_INFO0_##member##_SHIFT)
+#define SQ_TASK_INFO0_GET(val, member) \
+ (((val) >> SQ_TASK_INFO0_##member##_SHIFT) & \
+ SQ_TASK_INFO0_##member##_MASK)
+
+#define SQ_TASK_INFO1_SET(val, member) \
+ (((val) & SQ_TASK_INFO1_##member##_MASK) \
+ << SQ_TASK_INFO1_##member##_SHIFT)
+#define SQ_TASK_INFO1_GET(val, member) \
+ (((val) >> SQ_TASK_INFO1_##member##_SHIFT) & \
+ SQ_TASK_INFO1_##member##_MASK)
+
+#define SQ_TASK_INFO3_VLAN_TAG_SHIFT 0
+#define SQ_TASK_INFO3_VLAN_TYPE_SHIFT 16
+#define SQ_TASK_INFO3_VLAN_TAG_VALID_SHIFT 19
+
+#define SQ_TASK_INFO3_VLAN_TAG_MASK 0xFFFFU
+#define SQ_TASK_INFO3_VLAN_TYPE_MASK 0x7U
+#define SQ_TASK_INFO3_VLAN_TAG_VALID_MASK 0x1U
+
+#define SQ_TASK_INFO3_SET(val, member) \
+ (((val) & SQ_TASK_INFO3_##member##_MASK) \
+ << SQ_TASK_INFO3_##member##_SHIFT)
+#define SQ_TASK_INFO3_GET(val, member) \
+ (((val) >> SQ_TASK_INFO3_##member##_SHIFT) & \
+ SQ_TASK_INFO3_##member##_MASK)
+
+/* Defines the TX queue status. */
+enum hinic3_txq_status {
+ HINIC3_TXQ_STATUS_START = 0,
+ HINIC3_TXQ_STATUS_STOP,
+};
+
+/* Setting and obtaining status information. */
+#define HINIC3_TXQ_IS_STARTED(txq) ((txq)->status == HINIC3_TXQ_STATUS_START)
+#define HINIC3_TXQ_IS_STOPPED(txq) ((txq)->status == HINIC3_TXQ_STATUS_STOP)
+#define HINIC3_SET_TXQ_STARTED(txq) ((txq)->status = HINIC3_TXQ_STATUS_START)
+#define HINIC3_SET_TXQ_STOPPED(txq) ((txq)->status = HINIC3_TXQ_STATUS_STOP)
+
+#define HINIC3_FLUSH_QUEUE_TIMEOUT 3000
+
+/* Txq info. */
+struct hinic3_txq_stats {
+ u64 packets;
+ u64 bytes;
+ u64 tx_busy;
+ u64 offload_errors;
+ u64 burst_pkts;
+ u64 sge_len0;
+ u64 mbuf_null;
+ u64 cpy_pkts;
+ u64 sge_len_too_large;
+
+#ifdef HINIC3_XSTAT_PROF_TX
+ u64 app_tsc;
+ u64 pmd_tsc;
+#endif
+
+#ifdef HINIC3_XSTAT_MBUF_USE
+ u64 tx_left_mbuf_bytes;
+#endif
+};
+
+/* Structure for storing the information sent. */
+struct hinic3_tx_info {
+ struct rte_mbuf *mbuf;
+ struct rte_mbuf *cpy_mbuf;
+ int wqebb_cnt;
+};
+
+/* Indicates the sending queue of information. */
+struct __rte_cache_aligned hinic3_txq {
+ struct hinic3_nic_dev *nic_dev;
+ u16 q_id;
+ u16 q_depth;
+ u16 q_mask;
+ u16 wqebb_size;
+ u16 wqebb_shift;
+ u16 cons_idx;
+ u16 prod_idx;
+ u16 status;
+
+ u16 tx_free_thresh;
+ u16 owner;
+ void *db_addr;
+ struct hinic3_tx_info *tx_info;
+
+ const struct rte_memzone *sq_mz;
+ void *queue_buf_vaddr;
+ rte_iova_t queue_buf_paddr;
+
+ const struct rte_memzone *ci_mz;
+ volatile u16 *ci_vaddr_base;
+ rte_iova_t ci_dma_base;
+ u64 sq_head_addr;
+ u64 sq_bot_sge_addr;
+ u32 cos;
+ struct hinic3_txq_stats txq_stats;
+#ifdef HINIC3_XSTAT_PROF_TX
+ uint64_t prof_tx_end_tsc;
+#endif
+};
+
+void hinic3_flush_txqs(struct hinic3_nic_dev *nic_dev);
+void hinic3_free_txq_mbufs(struct hinic3_txq *txq);
+void hinic3_free_all_txq_mbufs(struct hinic3_nic_dev *nic_dev);
+int hinic3_stop_sq(struct hinic3_txq *txq);
+int hinic3_start_all_sqs(struct rte_eth_dev *eth_dev);
+int hinic3_tx_done_cleanup(void *txq, uint32_t free_cnt);
+#endif /* _HINIC3_TX_H_ */
--
2.47.0.windows.2
^ permalink raw reply [flat|nested] 30+ messages in thread
* [RFC 14/18] net/hinic3: add Rx/Tx functions
2025-04-18 9:05 [RFC 00/18] add hinic3 PMD driver Feifei Wang
` (12 preceding siblings ...)
2025-04-18 9:05 ` [RFC 13/18] net/hinic3: add dev ops Feifei Wang
@ 2025-04-18 9:06 ` Feifei Wang
2025-04-18 9:06 ` [RFC 15/18] net/hinic3: add MML and EEPROM access feature Feifei Wang
` (6 subsequent siblings)
20 siblings, 0 replies; 30+ messages in thread
From: Feifei Wang @ 2025-04-18 9:06 UTC (permalink / raw)
To: dev; +Cc: Feifei Wang, Yi Chen, Xin Wang
From: Feifei Wang <wangfeifei40@huawei.com>
This patch adds the packet transmit and receive functions.
Signed-off-by: Feifei Wang <wangfeifei40@huawei.com>
Signed-off-by: Yi Chen <chenyi221@huawei.com>
Reviewed-by: Xin Wang <wangxin679@h-partners.com>
---
drivers/net/hinic3/hinic3_ethdev.c | 9 +-
drivers/net/hinic3/hinic3_rx.c | 301 +++++++++++-
drivers/net/hinic3/hinic3_tx.c | 754 +++++++++++++++++++++++++++++
drivers/net/hinic3/hinic3_tx.h | 1 +
4 files changed, 1054 insertions(+), 11 deletions(-)
diff --git a/drivers/net/hinic3/hinic3_ethdev.c b/drivers/net/hinic3/hinic3_ethdev.c
index de380dddbb..7cd101e5c3 100644
--- a/drivers/net/hinic3/hinic3_ethdev.c
+++ b/drivers/net/hinic3/hinic3_ethdev.c
@@ -21,9 +21,9 @@
#include "base/hinic3_hw_comm.h"
#include "base/hinic3_nic_cfg.h"
#include "base/hinic3_nic_event.h"
-#include "hinic3_pmd_nic_io.h"
-#include "hinic3_pmd_tx.h"
-#include "hinic3_pmd_rx.h"
+#include "hinic3_nic_io.h"
+#include "hinic3_tx.h"
+#include "hinic3_rx.h"
#include "hinic3_ethdev.h"
#define HINIC3_MIN_RX_BUF_SIZE 1024
@@ -3337,6 +3337,9 @@ hinic3_dev_init(struct rte_eth_dev *eth_dev)
PMD_DRV_LOG(INFO, "Network Interface pmd driver version: %s",
HINIC3_PMD_DRV_VERSION);
+ eth_dev->rx_pkt_burst = hinic3_recv_pkts;
+ eth_dev->tx_pkt_burst = hinic3_xmit_pkts;
+
return hinic3_func_init(eth_dev);
}
diff --git a/drivers/net/hinic3/hinic3_rx.c b/drivers/net/hinic3/hinic3_rx.c
index a1dc960236..318d9aadc3 100644
--- a/drivers/net/hinic3/hinic3_rx.c
+++ b/drivers/net/hinic3/hinic3_rx.c
@@ -5,14 +5,14 @@
#include <rte_mbuf.h>
#include "base/hinic3_compat.h"
-#include "base/hinic3_pmd_hwif.h"
-#include "base/hinic3_pmd_hwdev.h"
-#include "base/hinic3_pmd_wq.h"
-#include "base/hinic3_pmd_nic_cfg.h"
-#include "hinic3_pmd_nic_io.h"
-#include "hinic3_pmd_ethdev.h"
-#include "hinic3_pmd_tx.h"
-#include "hinic3_pmd_rx.h"
+#include "base/hinic3_hwif.h"
+#include "base/hinic3_hwdev.h"
+#include "base/hinic3_wq.h"
+#include "base/hinic3_nic_cfg.h"
+#include "hinic3_nic_io.h"
+#include "hinic3_ethdev.h"
+#include "hinic3_tx.h"
+#include "hinic3_rx.h"
/**
* Get wqe from receive queue.
@@ -809,3 +809,288 @@ hinic3_start_rq(struct rte_eth_dev *eth_dev, struct hinic3_rxq *rxq)
return err;
}
+
+
+static inline u64
+hinic3_rx_vlan(u32 offload_type, u32 vlan_len, u16 *vlan_tci)
+{
+ uint16_t vlan_tag;
+
+ vlan_tag = HINIC3_GET_RX_VLAN_TAG(vlan_len);
+ if (!HINIC3_GET_RX_VLAN_OFFLOAD_EN(offload_type) || vlan_tag == 0) {
+ *vlan_tci = 0;
+ return 0;
+ }
+
+ *vlan_tci = vlan_tag;
+
+ return HINIC3_PKT_RX_VLAN | HINIC3_PKT_RX_VLAN_STRIPPED;
+}
+
+static inline u64
+hinic3_rx_csum(uint32_t status, struct hinic3_rxq *rxq)
+{
+ struct hinic3_nic_dev *nic_dev = rxq->nic_dev;
+ u32 csum_err;
+ u64 flags;
+
+ if (unlikely(!(nic_dev->rx_csum_en & HINIC3_DEFAULT_RX_CSUM_OFFLOAD)))
+ return HINIC3_PKT_RX_IP_CKSUM_UNKNOWN;
+
+ csum_err = HINIC3_GET_RX_CSUM_ERR(status);
+ if (likely(csum_err == 0))
+ return (HINIC3_PKT_RX_IP_CKSUM_GOOD |
+ HINIC3_PKT_RX_L4_CKSUM_GOOD);
+
+ /*
+ * If bypass bit is set, all other err status indications should be
+ * ignored.
+ */
+ if (unlikely(csum_err & HINIC3_RX_CSUM_HW_CHECK_NONE))
+ return HINIC3_PKT_RX_IP_CKSUM_UNKNOWN;
+
+ flags = 0;
+
+ /* IP checksum error. */
+ if (csum_err & HINIC3_RX_CSUM_IP_CSUM_ERR) {
+ flags |= HINIC3_PKT_RX_IP_CKSUM_BAD;
+ rxq->rxq_stats.csum_errors++;
+ }
+
+ /* L4 checksum error. */
+ if ((csum_err & HINIC3_RX_CSUM_TCP_CSUM_ERR) ||
+ (csum_err & HINIC3_RX_CSUM_UDP_CSUM_ERR) ||
+ (csum_err & HINIC3_RX_CSUM_SCTP_CRC_ERR)) {
+ flags |= HINIC3_PKT_RX_L4_CKSUM_BAD;
+ rxq->rxq_stats.csum_errors++;
+ }
+
+ if (unlikely(csum_err == HINIC3_RX_CSUM_IPSU_OTHER_ERR))
+ rxq->rxq_stats.other_errors++;
+
+ return flags;
+}
+
+static inline u64
+hinic3_rx_rss_hash(u32 offload_type, u32 rss_hash_value, u32 *rss_hash)
+{
+ u32 rss_type;
+
+ rss_type = HINIC3_GET_RSS_TYPES(offload_type);
+ if (likely(rss_type != 0)) {
+ *rss_hash = rss_hash_value;
+ return HINIC3_PKT_RX_RSS_HASH;
+ }
+
+ return 0;
+}
+
+static void
+hinic3_recv_jumbo_pkt(struct hinic3_rxq *rxq, struct rte_mbuf *head_mbuf,
+ u32 remain_pkt_len)
+{
+ struct rte_mbuf *cur_mbuf = NULL;
+ struct rte_mbuf *rxm = NULL;
+ struct hinic3_rx_info *rx_info = NULL;
+ u16 sw_ci, rx_buf_len = rxq->buf_len;
+ u32 pkt_len;
+
+ while (remain_pkt_len > 0) {
+ sw_ci = hinic3_get_rq_local_ci(rxq);
+ rx_info = &rxq->rx_info[sw_ci];
+
+ hinic3_update_rq_local_ci(rxq, 1);
+
+ pkt_len = remain_pkt_len > rx_buf_len ? rx_buf_len
+ : remain_pkt_len;
+ remain_pkt_len -= pkt_len;
+
+ cur_mbuf = rx_info->mbuf;
+ cur_mbuf->data_len = (u16)pkt_len;
+ cur_mbuf->next = NULL;
+
+ head_mbuf->pkt_len += cur_mbuf->data_len;
+ head_mbuf->nb_segs++;
+#ifdef HINIC3_XSTAT_MBUF_USE
+ rxq->rxq_stats.rx_free_mbuf_bytes++;
+#endif
+ if (!rxm)
+ head_mbuf->next = cur_mbuf;
+ else
+ rxm->next = cur_mbuf;
+
+ rxm = cur_mbuf;
+ }
+}
+
+int
+hinic3_start_all_rqs(struct rte_eth_dev *eth_dev)
+{
+ struct hinic3_nic_dev *nic_dev = NULL;
+ struct hinic3_rxq *rxq = NULL;
+ int err = 0;
+ int i;
+
+ nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(eth_dev);
+
+ for (i = 0; i < nic_dev->num_rqs; i++) {
+ rxq = eth_dev->data->rx_queues[i];
+ hinic3_add_rq_to_rx_queue_list(nic_dev, rxq->q_id);
+ err = hinic3_rearm_rxq_mbuf(rxq);
+ if (err) {
+ PMD_DRV_LOG(ERR,
+ "Fail to alloc mbuf for Rx queue %d, "
+ "qid = %u, need_mbuf: %d",
+ i, rxq->q_id, rxq->q_depth);
+ goto out;
+ }
+ hinic3_dev_rx_queue_intr_enable(eth_dev, rxq->q_id);
+ eth_dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED;
+ }
+
+ if (nic_dev->rss_state == HINIC3_RSS_ENABLE) {
+ err = hinic3_refill_indir_rqid(rxq);
+ if (err) {
+ PMD_DRV_LOG(ERR,
+ "Refill rq to indrect table failed, "
+ "eth_dev:%s, queue_idx:%d, err:%d",
+ rxq->nic_dev->dev_name, rxq->q_id, err);
+ goto out;
+ }
+ }
+
+ return 0;
+out:
+ for (i = 0; i < nic_dev->num_rqs; i++) {
+ rxq = eth_dev->data->rx_queues[i];
+ hinic3_remove_rq_from_rx_queue_list(nic_dev, rxq->q_id);
+ hinic3_free_rxq_mbufs(rxq);
+ hinic3_dev_rx_queue_intr_disable(eth_dev, rxq->q_id);
+ eth_dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
+ }
+ return err;
+}
+
+#define HINIC3_RX_EMPTY_THRESHOLD 3
+u16
+hinic3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts)
+{
+ struct hinic3_rxq *rxq = rx_queue;
+ struct hinic3_rx_info *rx_info = NULL;
+ volatile struct hinic3_rq_cqe *rx_cqe = NULL;
+ struct rte_mbuf *rxm = NULL;
+ u16 sw_ci, rx_buf_len, wqebb_cnt = 0, pkts = 0;
+ u32 status, pkt_len, vlan_len, offload_type, lro_num;
+ u64 rx_bytes = 0;
+ u32 hash_value;
+
+#ifdef HINIC3_XSTAT_PROF_RX
+ uint64_t t1 = rte_get_tsc_cycles();
+ uint64_t t2;
+#endif
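+ /* Back off from polling briefly once the queue has come up empty several bursts in a row. */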
+ if (((rte_get_timer_cycles() - rxq->rxq_stats.tsc) < rxq->wait_time_cycle) &&
+ rxq->rxq_stats.empty >= HINIC3_RX_EMPTY_THRESHOLD)
+ goto out;
+
+ sw_ci = hinic3_get_rq_local_ci(rxq);
+ rx_buf_len = rxq->buf_len;
+
+ while (pkts < nb_pkts) {
+ rx_cqe = &rxq->rx_cqe[sw_ci];
+ status = hinic3_hw_cpu32((u32)(rte_atomic_load_explicit(&rx_cqe->status,
+ rte_memory_order_acquire)));
+ if (!HINIC3_GET_RX_DONE(status)) {
+ rxq->rxq_stats.empty++;
+ break;
+ }
+
+ vlan_len = hinic3_hw_cpu32(rx_cqe->vlan_len);
+
+ pkt_len = HINIC3_GET_RX_PKT_LEN(vlan_len);
+
+ rx_info = &rxq->rx_info[sw_ci];
+ rxm = rx_info->mbuf;
+
+ /* 1. Next ci point and prefetch. */
+ sw_ci++;
+ sw_ci &= rxq->q_mask;
+
+ /* 2. Prefetch next mbuf first 64B. */
+ rte_prefetch0(rxq->rx_info[sw_ci].mbuf);
+
+ /* 3. Jumbo frame process. */
+ if (likely(pkt_len <= (u32)rx_buf_len)) {
+ rxm->data_len = (u16)pkt_len;
+ rxm->pkt_len = pkt_len;
+ wqebb_cnt++;
+ } else {
+ rxm->data_len = rx_buf_len;
+ rxm->pkt_len = rx_buf_len;
+
+ /*
+ * For jumbo frames, the remaining CI updates are done by
+ * hinic3_recv_jumbo_pkt().
+ */
+ hinic3_update_rq_local_ci(rxq, wqebb_cnt + 1);
+ wqebb_cnt = 0;
+ hinic3_recv_jumbo_pkt(rxq, rxm, pkt_len - rx_buf_len);
+ sw_ci = hinic3_get_rq_local_ci(rxq);
+ }
+
+ rxm->data_off = RTE_PKTMBUF_HEADROOM;
+ rxm->port = rxq->port_id;
+
+ /* 4. Rx checksum offload. */
+ rxm->ol_flags |= hinic3_rx_csum(status, rxq);
+
+ /* 5. Vlan offload. */
+ offload_type = hinic3_hw_cpu32(rx_cqe->offload_type);
+
+ rxm->ol_flags |=
+ hinic3_rx_vlan(offload_type, vlan_len, &rxm->vlan_tci);
+
+ /* 6. RSS. */
+ hash_value = hinic3_hw_cpu32(rx_cqe->hash_val);
+ rxm->ol_flags |= hinic3_rx_rss_hash(offload_type, hash_value,
+ &rxm->hash.rss);
+ /* 7. LRO. */
+ lro_num = HINIC3_GET_RX_NUM_LRO(status);
+ if (unlikely(lro_num != 0)) {
+ rxm->ol_flags |= HINIC3_PKT_RX_LRO;
+ rxm->tso_segsz = pkt_len / lro_num;
+ }
+
+ rx_cqe->status = 0;
+
+ rx_bytes += pkt_len;
+ rx_pkts[pkts++] = rxm;
+ }
+
+ if (pkts) {
+ /* 8. Update local ci. */
+ hinic3_update_rq_local_ci(rxq, wqebb_cnt);
+
+ /* Update packet stats. */
+ rxq->rxq_stats.packets += pkts;
+ rxq->rxq_stats.bytes += rx_bytes;
+ rxq->rxq_stats.empty = 0;
+#ifdef HINIC3_XSTAT_MBUF_USE
+ rxq->rxq_stats.rx_free_mbuf_bytes += pkts;
+#endif
+ }
+ rxq->rxq_stats.burst_pkts = pkts;
+ rxq->rxq_stats.tsc = rte_get_timer_cycles();
+out:
+ /* 9. Rearm mbuf to rxq. */
+ hinic3_rearm_rxq_mbuf(rxq);
+
+#ifdef HINIC3_XSTAT_PROF_RX
+ /* Do profiling stats. */
+ t2 = rte_get_tsc_cycles();
+ rxq->rxq_stats.app_tsc = t1 - rxq->prof_rx_end_tsc;
+ rxq->prof_rx_end_tsc = t2;
+ rxq->rxq_stats.pmd_tsc = t2 - t1;
+#endif
+
+ return pkts;
+}
diff --git a/drivers/net/hinic3/hinic3_tx.c b/drivers/net/hinic3/hinic3_tx.c
index 6f8c42e0c3..c2157ab4b9 100644
--- a/drivers/net/hinic3/hinic3_tx.c
+++ b/drivers/net/hinic3/hinic3_tx.c
@@ -60,6 +60,98 @@ hinic3_get_sq_hw_ci(struct hinic3_txq *sq)
return MASKED_QUEUE_IDX(sq, hinic3_hw_cpu16(*sq->ci_vaddr_base));
}
+static void *
+hinic3_get_sq_wqe(struct hinic3_txq *sq, struct hinic3_wqe_info *wqe_info)
+{
+ u16 cur_pi = MASKED_QUEUE_IDX(sq, sq->prod_idx);
+ u32 end_pi;
+
+ end_pi = cur_pi + wqe_info->wqebb_cnt;
+ sq->prod_idx += wqe_info->wqebb_cnt;
+
+ wqe_info->owner = (u8)(sq->owner);
+ wqe_info->pi = cur_pi;
+ wqe_info->wrapped = 0;
+
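+ /* Crossing the ring end toggles the owner bit for the next pass over the queue. */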
+ if (unlikely(end_pi >= sq->q_depth)) {
+ sq->owner = !sq->owner;
+
+ if (likely(end_pi > sq->q_depth))
+ wqe_info->wrapped = (u8)(sq->q_depth - cur_pi);
+ }
+
+ return NIC_WQE_ADDR(sq, cur_pi);
+}
+
+static inline void
+hinic3_put_sq_wqe(struct hinic3_txq *sq, struct hinic3_wqe_info *wqe_info)
+{
+ if (wqe_info->owner != sq->owner)
+ sq->owner = wqe_info->owner;
+
+ sq->prod_idx -= wqe_info->wqebb_cnt;
+}
+
+/**
+ * Sets the WQE combination information in the transmit queue (SQ).
+ *
+ * @param[in] txq
+ * Pointer to send queue.
+ * @param[out] wqe_combo
+ * Pointer to wqe_combo of send queue(SQ).
+ * @param[in] wqe
+ * Pointer to wqe of send queue(SQ).
+ * @param[in] wqe_info
+ * Pointer to wqe_info of send queue(SQ).
+ */
+static void
+hinic3_set_wqe_combo(struct hinic3_txq *txq,
+ struct hinic3_sq_wqe_combo *wqe_combo,
+ struct hinic3_sq_wqe *wqe,
+ struct hinic3_wqe_info *wqe_info)
+{
+ wqe_combo->hdr = &wqe->compact_wqe.wqe_desc;
+
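+ /*
+ * A WQE that wraps past the ring end continues its task section
+ * and/or buffer descriptors at the SQ head.
+ */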
+ if (wqe_info->offload) {
+ if (wqe_info->wrapped == HINIC3_TX_TASK_WRAPPED) {
+ wqe_combo->task = (struct hinic3_sq_task *)
+ (void *)txq->sq_head_addr;
+ wqe_combo->bds_head = (struct hinic3_sq_bufdesc *)
+ (void *)(txq->sq_head_addr + txq->wqebb_size);
+ } else if (wqe_info->wrapped == HINIC3_TX_BD_DESC_WRAPPED) {
+ wqe_combo->task = &wqe->extend_wqe.task;
+ wqe_combo->bds_head = (struct hinic3_sq_bufdesc *)
+ (void *)(txq->sq_head_addr);
+ } else {
+ wqe_combo->task = &wqe->extend_wqe.task;
+ wqe_combo->bds_head = wqe->extend_wqe.buf_desc;
+ }
+
+ wqe_combo->wqe_type = SQ_WQE_EXTENDED_TYPE;
+ wqe_combo->task_type = SQ_WQE_TASKSECT_16BYTES;
+
+ return;
+ }
+
+ if (wqe_info->wrapped == HINIC3_TX_TASK_WRAPPED) {
+ wqe_combo->bds_head = (struct hinic3_sq_bufdesc *)
+ (void *)(txq->sq_head_addr);
+ } else {
+ wqe_combo->bds_head =
+ (struct hinic3_sq_bufdesc *)(&wqe->extend_wqe.task);
+ }
+
+ if (wqe_info->wqebb_cnt > 1) {
+ wqe_combo->wqe_type = SQ_WQE_EXTENDED_TYPE;
+ wqe_combo->task_type = SQ_WQE_TASKSECT_46BITS;
+
+ /* This section is reused for VLAN insertion, so it must be cleared. */
+ wqe_combo->bds_head->rsvd = 0;
+ } else {
+ wqe_combo->wqe_type = SQ_WQE_COMPACT_TYPE;
+ }
+}
+
int
hinic3_start_all_sqs(struct rte_eth_dev *eth_dev)
{
@@ -220,6 +312,668 @@ hinic3_tx_done_cleanup(void *txq, u32 free_cnt)
return hinic3_xmit_mbuf_cleanup(tx_queue, try_free_cnt);
}
+/**
+ * Prepare the packet to be sent and calculate the inner L3 offset.
+ *
+ * @param[in] mbuf
+ * Pointer to the mbuf to be processed.
+ * @param[out] inner_l3_offset
+ * Inner (IP layer) L3 offset.
+ * @return
+ * 0 as success, -EINVAL as failure.
+ */
+static int
+hinic3_tx_offload_pkt_prepare(struct rte_mbuf *mbuf, u16 *inner_l3_offset)
+{
+ uint64_t ol_flags = mbuf->ol_flags;
+
+ /* Only support vxlan offload. */
+ if ((ol_flags & HINIC3_PKT_TX_TUNNEL_MASK) &&
+ (!(ol_flags & HINIC3_PKT_TX_TUNNEL_VXLAN)))
+ return -EINVAL;
+
+#ifdef RTE_LIBRTE_ETHDEV_DEBUG
+ if (rte_validate_tx_offload(mbuf) != 0)
+ return -EINVAL;
+#endif
+ /* Support tunnel. */
+ if ((ol_flags & HINIC3_PKT_TX_TUNNEL_MASK)) {
+ if ((ol_flags & HINIC3_PKT_TX_OUTER_IP_CKSUM) ||
+ (ol_flags & HINIC3_PKT_TX_OUTER_IPV6) ||
+ (ol_flags & HINIC3_PKT_TX_TCP_SEG)) {
+ /*
+ * With these semantics, l2_len of the mbuf means
+ * len(out_udp + vxlan + in_eth).
+ */
+ *inner_l3_offset = mbuf->l2_len + mbuf->outer_l2_len +
+ mbuf->outer_l3_len;
+ } else {
+ /*
+ * With these semantics, l2_len of the mbuf means
+ * len(out_eth + out_ip + out_udp + vxlan + in_eth).
+ */
+ *inner_l3_offset = mbuf->l2_len;
+ }
+ } else {
+ /* For non-tunnel type pkts. */
+ *inner_l3_offset = mbuf->l2_len;
+ }
+
+ return 0;
+}
+
+static inline void
+hinic3_set_vlan_tx_offload(struct hinic3_sq_task *task, u16 vlan_tag,
+ u8 vlan_type)
+{
+ task->vlan_offload = SQ_TASK_INFO3_SET(vlan_tag, VLAN_TAG) |
+ SQ_TASK_INFO3_SET(vlan_type, VLAN_TYPE) |
+ SQ_TASK_INFO3_SET(1U, VLAN_TAG_VALID);
+}
+
+/**
+ * Set the corresponding offload information based on ol_flags of the mbuf.
+ *
+ * @param[in] mbuf
+ * Pointer to the mbuf for which offload needs to be set in the send queue.
+ * @param[out] task
+ * Pointer to task of send queue(SQ).
+ * @param[out] wqe_info
+ * Pointer to wqe_info of send queue(SQ).
+ * @return
+ * 0 as success, -EINVAL as failure.
+ */
+static int
+hinic3_set_tx_offload(struct rte_mbuf *mbuf, struct hinic3_sq_task *task,
+ struct hinic3_wqe_info *wqe_info)
+{
+ uint64_t ol_flags = mbuf->ol_flags;
+ u16 pld_offset = 0;
+ u32 queue_info = 0;
+ u16 vlan_tag;
+
+ task->pkt_info0 = 0;
+ task->ip_identify = 0;
+ task->pkt_info2 = 0;
+ task->vlan_offload = 0;
+
+ /* Vlan offload. */
+ if (unlikely(ol_flags & HINIC3_PKT_TX_VLAN_PKT)) {
+ vlan_tag = mbuf->vlan_tci;
+ hinic3_set_vlan_tx_offload(task, vlan_tag, HINIC3_TX_TPID0);
+ task->vlan_offload = hinic3_hw_be32(task->vlan_offload);
+ }
+ /* Cksum offload. */
+ if (!(ol_flags & HINIC3_TX_CKSUM_OFFLOAD_MASK))
+ return 0;
+
+ /* Tso offload. */
+ if (ol_flags & HINIC3_PKT_TX_TCP_SEG) {
+ pld_offset = wqe_info->payload_offset;
+ if ((pld_offset >> 1) > MAX_PAYLOAD_OFFSET)
+ return -EINVAL;
+
+ task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, INNER_L4_EN);
+ task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, INNER_L3_EN);
+
+ queue_info |= SQ_CTRL_QUEUE_INFO_SET(1U, TSO);
+ queue_info |= SQ_CTRL_QUEUE_INFO_SET(pld_offset >> 1, PLDOFF);
+
+ /* Set MSS value. */
+ queue_info = SQ_CTRL_QUEUE_INFO_CLEAR(queue_info, MSS);
+ queue_info |= SQ_CTRL_QUEUE_INFO_SET(mbuf->tso_segsz, MSS);
+ } else {
+ if (ol_flags & HINIC3_PKT_TX_IP_CKSUM)
+ task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, INNER_L3_EN);
+
+ switch (ol_flags & HINIC3_PKT_TX_L4_MASK) {
+ case HINIC3_PKT_TX_TCP_CKSUM:
+ case HINIC3_PKT_TX_UDP_CKSUM:
+ case HINIC3_PKT_TX_SCTP_CKSUM:
+ task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, INNER_L4_EN);
+ break;
+
+ case HINIC3_PKT_TX_L4_NO_CKSUM:
+ break;
+
+ default:
+ PMD_DRV_LOG(INFO, "not support pkt type");
+ return -EINVAL;
+ }
+ }
+
+ /* For vxlan, also can support PKT_TX_TUNNEL_GRE, etc. */
+ switch (ol_flags & HINIC3_PKT_TX_TUNNEL_MASK) {
+ case HINIC3_PKT_TX_TUNNEL_VXLAN:
+ task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, TUNNEL_FLAG);
+ break;
+
+ case 0:
+ break;
+
+ default:
+ /* For non UDP/GRE tunneling, drop the tunnel packet. */
+ PMD_DRV_LOG(INFO, "not support tunnel pkt type");
+ return -EINVAL;
+ }
+
+ if (ol_flags & HINIC3_PKT_TX_OUTER_IP_CKSUM)
+ task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, OUT_L3_EN);
+
+ task->pkt_info0 = hinic3_hw_be32(task->pkt_info0);
+ task->pkt_info2 = hinic3_hw_be32(task->pkt_info2);
+ wqe_info->queue_info = queue_info;
+
+ return 0;
+}
+
+/**
+ * Check whether the number of segments in the mbuf is valid.
+ *
+ * @param[in] mbuf
+ * Pointer to the mbuf to be verified.
+ * @param[in] wqe_info
+ * Pointer to wqe_info of send queue(SQ).
+ * @return
+ * true as valid, false as invalid.
+ */
+static bool
+hinic3_is_tso_sge_valid(struct rte_mbuf *mbuf, struct hinic3_wqe_info *wqe_info)
+{
+ u32 total_len, limit_len, checked_len, left_len, adjust_mss;
+ u32 i, max_sges, left_sges, first_len;
+ struct rte_mbuf *mbuf_head, *mbuf_first;
+ struct rte_mbuf *mbuf_pre = mbuf;
+
+ left_sges = mbuf->nb_segs;
+ mbuf_head = mbuf;
+ mbuf_first = mbuf;
+
+ /* Tso sge number validation. */
+ if (unlikely(left_sges >= HINIC3_NONTSO_PKT_MAX_SGE)) {
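+ /*
+ * Slide a window of up to 38 SGEs along the segment chain; each
+ * window must cover at least one MSS of payload (plus headers for
+ * the first window), otherwise the tail is copied into one buffer.
+ */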
+ checked_len = 0;
+ total_len = 0;
+ first_len = 0;
+ adjust_mss = mbuf->tso_segsz >= TX_MSS_MIN ? mbuf->tso_segsz
+ : TX_MSS_MIN;
+ max_sges = HINIC3_NONTSO_PKT_MAX_SGE - 1;
+ limit_len = adjust_mss + wqe_info->payload_offset;
+
+ for (i = 0; (i < max_sges) && (total_len < limit_len); i++) {
+ total_len += mbuf->data_len;
+ mbuf_pre = mbuf;
+ mbuf = mbuf->next;
+ }
+
+ /* Every run of 38 consecutive mbuf segments must pass one check. */
+ while (left_sges >= HINIC3_NONTSO_PKT_MAX_SGE) {
+ if (total_len >= limit_len) {
+ /* Update the limit len. */
+ limit_len = adjust_mss;
+ /* Update checked len. */
+ checked_len += first_len;
+ /* Record the first len. */
+ first_len = mbuf_first->data_len;
+ /* First mbuf move to the next. */
+ mbuf_first = mbuf_first->next;
+ /* Update total len. */
+ total_len -= first_len;
+ left_sges--;
+ i--;
+ for (;
+ (i < max_sges) && (total_len < limit_len);
+ i++) {
+ total_len += mbuf->data_len;
+ mbuf_pre = mbuf;
+ mbuf = mbuf->next;
+ }
+ } else {
+ /* Try to copy if not valid. */
+ checked_len += (total_len - mbuf_pre->data_len);
+
+ left_len = mbuf_head->pkt_len - checked_len;
+ if (left_len > HINIC3_COPY_MBUF_SIZE)
+ return false;
+ wqe_info->sge_cnt = (u16)(mbuf_head->nb_segs +
+ i - left_sges);
+ wqe_info->cpy_mbuf_cnt = 1;
+
+ return true;
+ }
+ } /* End of while. */
+ }
+
+ wqe_info->sge_cnt = mbuf_head->nb_segs;
+
+ return true;
+}
+
+/**
+ * Checks and processes transport offload information for data packets.
+ *
+ * @param[in] mbuf
+ * Pointer to the mbuf to send.
+ * @param[in] wqe_info
+ * Pointer to wqe_info of send queue(SQ).
+ * @return
+ * 0 as success, -EINVAL as failure.
+ */
+static int
+hinic3_get_tx_offload(struct rte_mbuf *mbuf, struct hinic3_wqe_info *wqe_info)
+{
+ uint64_t ol_flags = mbuf->ol_flags;
+ u16 i, total_len, inner_l3_offset = 0;
+ int err;
+ struct rte_mbuf *mbuf_pkt = NULL;
+
+ wqe_info->sge_cnt = mbuf->nb_segs;
+ /* Check if the packet set available offload flags. */
+ if (!(ol_flags & HINIC3_TX_OFFLOAD_MASK)) {
+ wqe_info->offload = 0;
+ return 0;
+ }
+
+ wqe_info->offload = 1;
+ err = hinic3_tx_offload_pkt_prepare(mbuf, &inner_l3_offset);
+ if (err)
+ return err;
+
+ /* A non-TSO mbuf only needs its SGE count checked. */
+ if (likely(!(mbuf->ol_flags & HINIC3_PKT_TX_TCP_SEG))) {
+ if (unlikely(mbuf->pkt_len > MAX_SINGLE_SGE_SIZE))
+ /* Non-TSO packet length must be less than 64KB. */
+ return -EINVAL;
+
+ if (likely(HINIC3_NONTSO_SEG_NUM_VALID(mbuf->nb_segs)))
+ /* Valid non-tso mbuf. */
+ return 0;
+
+ /*
+ * A non-TSO packet may use at most 38 mbuf segments;
+ * any segments beyond that must be copied into another
+ * buffer.
+ */
+ total_len = 0;
+ mbuf_pkt = mbuf;
+ for (i = 0; i < (HINIC3_NONTSO_PKT_MAX_SGE - 1); i++) {
+ total_len += mbuf_pkt->data_len;
+ mbuf_pkt = mbuf_pkt->next;
+ }
+
+ /* By default, up to HINIC3_COPY_MBUF_SIZE (4KB) of trailing mbuf data can be copied. */
+ if ((u32)(total_len + (u16)HINIC3_COPY_MBUF_SIZE) <
+ mbuf->pkt_len)
+ return -EINVAL;
+
+ wqe_info->sge_cnt = HINIC3_NONTSO_PKT_MAX_SGE;
+ wqe_info->cpy_mbuf_cnt = 1;
+
+ return 0;
+ }
+
+ /* Tso mbuf. */
+ wqe_info->payload_offset =
+ inner_l3_offset + mbuf->l3_len + mbuf->l4_len;
+
+ /* Too many mbuf segs. */
+ if (unlikely(HINIC3_TSO_SEG_NUM_INVALID(mbuf->nb_segs)))
+ return -EINVAL;
+
+ /* Check whether can cover all tso mbuf segs or not. */
+ if (unlikely(!hinic3_is_tso_sge_valid(mbuf, wqe_info)))
+ return -EINVAL;
+
+ return 0;
+}
+
+static inline void
+hinic3_set_buf_desc(struct hinic3_sq_bufdesc *buf_descs, rte_iova_t addr,
+ u32 len)
+{
+ buf_descs->hi_addr = hinic3_hw_be32(upper_32_bits(addr));
+ buf_descs->lo_addr = hinic3_hw_be32(lower_32_bits(addr));
+ buf_descs->len = hinic3_hw_be32(len);
+}
+
+static inline struct rte_mbuf *
+hinic3_alloc_cpy_mbuf(struct hinic3_nic_dev *nic_dev)
+{
+ return rte_pktmbuf_alloc(nic_dev->cpy_mpool);
+}
+
+/**
+ * Copy mbuf segments into a single mbuf for the send queue (SQ).
+ *
+ * @param[in] nic_dev
+ * Pointer to NIC device.
+ * @param[in] mbuf
+ * Pointer to the source mbuf.
+ * @param[in] sge_cnt
+ * Number of mbuf segments to be copied.
+ * @return
+ * The address of the copied mbuf, or NULL on failure.
+ */
+static void *
+hinic3_copy_tx_mbuf(struct hinic3_nic_dev *nic_dev, struct rte_mbuf *mbuf,
+ u16 sge_cnt)
+{
+ struct rte_mbuf *dst_mbuf;
+ u32 offset = 0;
+ u16 i;
+
+ if (unlikely(!nic_dev->cpy_mpool))
+ return NULL;
+
+ dst_mbuf = hinic3_alloc_cpy_mbuf(nic_dev);
+ if (unlikely(!dst_mbuf))
+ return NULL;
+
+ dst_mbuf->data_off = 0;
+ dst_mbuf->data_len = 0;
+ for (i = 0; i < sge_cnt; i++) {
+ rte_memcpy((u8 *)dst_mbuf->buf_addr + offset,
+ (u8 *)mbuf->buf_addr + mbuf->data_off,
+ mbuf->data_len);
+ dst_mbuf->data_len += mbuf->data_len;
+ offset += mbuf->data_len;
+ mbuf = mbuf->next;
+ }
+ dst_mbuf->pkt_len = dst_mbuf->data_len;
+
+ return dst_mbuf;
+}
+
+/**
+ * Map the TX mbuf to the DMA address space and set related information for
+ * subsequent DMA transmission.
+ *
+ * @param[in] txq
+ * Pointer to send queue.
+ * @param[in] mbuf
+ * Pointer to the TX mbuf.
+ * @param[out] wqe_combo
+ * Pointer to wqe_combo of the send queue.
+ * @param[in] wqe_info
+ * Pointer to wqe_info of the send queue (SQ).
+ * @return
+ * 0 on success, -EINVAL on failure.
+ */
+static int
+hinic3_mbuf_dma_map_sge(struct hinic3_txq *txq, struct rte_mbuf *mbuf,
+ struct hinic3_sq_wqe_combo *wqe_combo,
+ struct hinic3_wqe_info *wqe_info)
+{
+ struct hinic3_sq_wqe_desc *wqe_desc = wqe_combo->hdr;
+ struct hinic3_sq_bufdesc *buf_desc = wqe_combo->bds_head;
+
+ uint16_t nb_segs = wqe_info->sge_cnt - wqe_info->cpy_mbuf_cnt;
+ uint16_t real_segs = mbuf->nb_segs;
+ rte_iova_t dma_addr;
+ u32 i;
+
+ for (i = 0; i < nb_segs; i++) {
+ if (unlikely(mbuf == NULL)) {
+ txq->txq_stats.mbuf_null++;
+ return -EINVAL;
+ }
+
+ if (unlikely(mbuf->data_len == 0)) {
+ txq->txq_stats.sge_len0++;
+ return -EINVAL;
+ }
+
+ dma_addr = rte_mbuf_data_iova(mbuf);
+ if (i == 0) {
+ if (wqe_combo->wqe_type == SQ_WQE_COMPACT_TYPE &&
+ mbuf->data_len > COMPACT_WQE_MAX_CTRL_LEN) {
+ txq->txq_stats.sge_len_too_large++;
+ return -EINVAL;
+ }
+
+ wqe_desc->hi_addr =
+ hinic3_hw_be32(upper_32_bits(dma_addr));
+ wqe_desc->lo_addr =
+ hinic3_hw_be32(lower_32_bits(dma_addr));
+ wqe_desc->ctrl_len = mbuf->data_len;
+ } else {
+ /*
+ * Part of the WQE sits at the SQ bottom while
+ * the rest wraps around to the SQ head.
+ */
+ if (unlikely(wqe_info->wrapped &&
+ (u64)buf_desc == txq->sq_bot_sge_addr))
+ buf_desc = (struct hinic3_sq_bufdesc *)
+ (void *)txq->sq_head_addr;
+
+ hinic3_set_buf_desc(buf_desc, dma_addr, mbuf->data_len);
+ buf_desc++;
+ }
+ mbuf = mbuf->next;
+ }
+
+ /* Support more than 38 SGEs by copying the trailing mbuf segments. */
+ if (unlikely(wqe_info->cpy_mbuf_cnt != 0)) {
+ /*
+ * Copy the excess mbuf segments into one valid buffer,
+ * at a performance cost.
+ */
+ txq->txq_stats.cpy_pkts += 1;
+ mbuf = hinic3_copy_tx_mbuf(txq->nic_dev, mbuf,
+ real_segs - nb_segs);
+ if (unlikely(!mbuf))
+ return -EINVAL;
+
+ txq->tx_info[wqe_info->pi].cpy_mbuf = mbuf;
+
+ /* Deal with the last mbuf. */
+ dma_addr = rte_mbuf_data_iova(mbuf);
+ if (unlikely(mbuf->data_len == 0)) {
+ txq->txq_stats.sge_len0++;
+ return -EINVAL;
+ }
+ /*
+ * Place the copy mbuf as the first SGE or as the last
+ * buffer descriptor, handling the SQ wrap as above.
+ */
+ if (i == 0) {
+ wqe_desc->hi_addr =
+ hinic3_hw_be32(upper_32_bits(dma_addr));
+ wqe_desc->lo_addr =
+ hinic3_hw_be32(lower_32_bits(dma_addr));
+ wqe_desc->ctrl_len = mbuf->data_len;
+ } else {
+ if (unlikely(wqe_info->wrapped &&
+ ((u64)buf_desc == txq->sq_bot_sge_addr)))
+ buf_desc = (struct hinic3_sq_bufdesc *)
+ txq->sq_head_addr;
+
+ hinic3_set_buf_desc(buf_desc, dma_addr, mbuf->data_len);
+ }
+ }
+
+ return 0;
+}
+
+/**
+ * Sets and configures fields in the transmit queue control descriptor based on
+ * the WQE type.
+ *
+ * @param[out] wqe_combo
+ * Pointer to wqe_combo of the send queue.
+ * @param[in] wqe_info
+ * Pointer to wqe_info of the send queue.
+ */
+static void
+hinic3_prepare_sq_ctrl(struct hinic3_sq_wqe_combo *wqe_combo,
+ struct hinic3_wqe_info *wqe_info)
+{
+ struct hinic3_sq_wqe_desc *wqe_desc = wqe_combo->hdr;
+
+ if (wqe_combo->wqe_type == SQ_WQE_COMPACT_TYPE) {
+ wqe_desc->ctrl_len |=
+ SQ_CTRL_SET(SQ_NORMAL_WQE, DATA_FORMAT) |
+ SQ_CTRL_SET(wqe_combo->wqe_type, EXTENDED) |
+ SQ_CTRL_SET(wqe_info->owner, OWNER);
+ wqe_desc->ctrl_len = hinic3_hw_be32(wqe_desc->ctrl_len);
+
+ /* The compact WQE queue_info is still passed to the microcode; zero it. */
+ wqe_desc->queue_info = 0;
+
+ return;
+ }
+
+ wqe_desc->ctrl_len |= SQ_CTRL_SET(wqe_info->sge_cnt, BUFDESC_NUM) |
+ SQ_CTRL_SET(wqe_combo->task_type, TASKSECT_LEN) |
+ SQ_CTRL_SET(SQ_NORMAL_WQE, DATA_FORMAT) |
+ SQ_CTRL_SET(wqe_combo->wqe_type, EXTENDED) |
+ SQ_CTRL_SET(wqe_info->owner, OWNER);
+
+ wqe_desc->ctrl_len = hinic3_hw_be32(wqe_desc->ctrl_len);
+
+ wqe_desc->queue_info = wqe_info->queue_info;
+ wqe_desc->queue_info |= SQ_CTRL_QUEUE_INFO_SET(1U, UC);
+
+ if (!SQ_CTRL_QUEUE_INFO_GET(wqe_desc->queue_info, MSS)) {
+ wqe_desc->queue_info |=
+ SQ_CTRL_QUEUE_INFO_SET(TX_MSS_DEFAULT, MSS);
+ } else if (SQ_CTRL_QUEUE_INFO_GET(wqe_desc->queue_info, MSS) <
+ TX_MSS_MIN) {
+ /* MSS must not be less than 80 (TX_MSS_MIN). */
+ wqe_desc->queue_info =
+ SQ_CTRL_QUEUE_INFO_CLEAR(wqe_desc->queue_info, MSS);
+ wqe_desc->queue_info |= SQ_CTRL_QUEUE_INFO_SET(TX_MSS_MIN, MSS);
+ }
+
+ wqe_desc->queue_info = hinic3_hw_be32(wqe_desc->queue_info);
+}
+
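The MSS handling in hinic3_prepare_sq_ctrl() reduces to a clamp: an unset MSS falls back to TX_MSS_DEFAULT and anything below TX_MSS_MIN (80) is raised to the floor. A hedged scalar sketch, with the constants passed in because only the 80-byte floor is stated in the code above:

    /* Illustrative mirror of the queue_info MSS clamp. */
    static inline uint32_t
    clamp_mss(uint32_t mss, uint32_t mss_default, uint32_t mss_min)
    {
        if (mss == 0)
            return mss_default; /* TX_MSS_DEFAULT */
        if (mss < mss_min)
            return mss_min; /* TX_MSS_MIN == 80 */
        return mss;
    }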
+/**
+ * Transmit a burst of data packets.
+ *
+ * @param[in] tx_queue
+ * Pointer to send queue.
+ * @param[in] tx_pkts
+ * Pointer to the array of data packets to be sent.
+ * @param[in] nb_pkts
+ * Number of packets to send.
+ * @return
+ * Number of packets actually sent.
+ */
+u16
+hinic3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts)
+{
+ struct hinic3_txq *txq = tx_queue;
+ struct hinic3_tx_info *tx_info = NULL;
+ struct rte_mbuf *mbuf_pkt = NULL;
+ struct hinic3_sq_wqe_combo wqe_combo = {0};
+ struct hinic3_sq_wqe *sq_wqe = NULL;
+ struct hinic3_wqe_info wqe_info = {0};
+
+ u32 offload_err, free_cnt;
+ u64 tx_bytes = 0;
+ u16 free_wqebb_cnt, nb_tx;
+ int err;
+
+#ifdef HINIC3_XSTAT_PROF_TX
+ uint64_t t1, t2;
+ t1 = rte_get_tsc_cycles();
+#endif
+
+ if (unlikely(!HINIC3_TXQ_IS_STARTED(txq)))
+ return 0;
+
+ free_cnt = txq->tx_free_thresh;
+ /* Reclaim TX mbufs before transmitting new packets. */
+ if (hinic3_get_sq_free_wqebbs(txq) < txq->tx_free_thresh)
+ hinic3_xmit_mbuf_cleanup(txq, free_cnt);
+
+ /* Tx loop routine. */
+ for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
+ mbuf_pkt = *tx_pkts++;
+ if (unlikely(hinic3_get_tx_offload(mbuf_pkt, &wqe_info))) {
+ txq->txq_stats.offload_errors++;
+ break;
+ }
+
+ if (!wqe_info.offload)
+ wqe_info.wqebb_cnt = wqe_info.sge_cnt;
+ else
+ /* Use extended SQ WQE with a normal task section. */
+ wqe_info.wqebb_cnt = wqe_info.sge_cnt + 1;
+
+ free_wqebb_cnt = hinic3_get_sq_free_wqebbs(txq);
+ if (unlikely(wqe_info.wqebb_cnt > free_wqebb_cnt)) {
+ /* Reclaim again. */
+ hinic3_xmit_mbuf_cleanup(txq, free_cnt);
+ free_wqebb_cnt = hinic3_get_sq_free_wqebbs(txq);
+ if (unlikely(wqe_info.wqebb_cnt > free_wqebb_cnt)) {
+ txq->txq_stats.tx_busy += (nb_pkts - nb_tx);
+ break;
+ }
+ }
+
+ /* Get sq wqe address from wqe_page. */
+ sq_wqe = hinic3_get_sq_wqe(txq, &wqe_info);
+ if (unlikely(!sq_wqe)) {
+ txq->txq_stats.tx_busy++;
+ break;
+ }
+
+ /* The task or BD section may be wrapped within one WQE. */
+ hinic3_set_wqe_combo(txq, &wqe_combo, sq_wqe, &wqe_info);
+
+ wqe_info.queue_info = 0;
+ /* Fill tx packet offload into qsf and task field. */
+ if (wqe_info.offload) {
+ offload_err = hinic3_set_tx_offload(mbuf_pkt,
+ wqe_combo.task,
+ &wqe_info);
+ if (unlikely(offload_err)) {
+ hinic3_put_sq_wqe(txq, &wqe_info);
+ txq->txq_stats.offload_errors++;
+ break;
+ }
+ }
+
+ /* Fill sq_wqe buf_desc and bd_desc. */
+ err = hinic3_mbuf_dma_map_sge(txq, mbuf_pkt, &wqe_combo,
+ &wqe_info);
+ if (err) {
+ hinic3_put_sq_wqe(txq, &wqe_info);
+ txq->txq_stats.offload_errors++;
+ break;
+ }
+
+ /* Record tx info. */
+ tx_info = &txq->tx_info[wqe_info.pi];
+ tx_info->mbuf = mbuf_pkt;
+ tx_info->wqebb_cnt = wqe_info.wqebb_cnt;
+
+ hinic3_prepare_sq_ctrl(&wqe_combo, &wqe_info);
+
+ tx_bytes += mbuf_pkt->pkt_len;
+ }
+
+ /* Update txq stats. */
+ if (nb_tx) {
+ hinic3_write_db(txq->db_addr, txq->q_id, (int)(txq->cos),
+ SQ_CFLAG_DP,
+ MASKED_QUEUE_IDX(txq, txq->prod_idx));
+ txq->txq_stats.packets += nb_tx;
+ txq->txq_stats.bytes += tx_bytes;
+ }
+ txq->txq_stats.burst_pkts = nb_tx;
+
+#ifdef HINIC3_XSTAT_PROF_TX
+ t2 = rte_get_tsc_cycles();
+ txq->txq_stats.app_tsc = t1 - txq->prof_tx_end_tsc;
+ txq->prof_tx_end_tsc = t2;
+ txq->txq_stats.pmd_tsc = t2 - t1;
+ txq->txq_stats.burst_pkts = nb_tx;
+#endif
+
+ return nb_tx;
+}
+
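Applications do not call hinic3_xmit_pkts() directly; the driver installs it as the port's TX burst handler, so it is reached through the generic burst API. A minimal usage sketch (port and queue ids are assumptions):

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    /* Transmit a burst on port 0, queue 0 (hypothetical ids). */
    static void
    tx_burst_example(struct rte_mbuf **pkts, uint16_t nb_pkts)
    {
        uint16_t sent = rte_eth_tx_burst(0, 0, pkts, nb_pkts);

        /* Packets the driver could not queue remain owned by the caller. */
        while (sent < nb_pkts)
            rte_pktmbuf_free(pkts[sent++]);
    }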
int
hinic3_stop_sq(struct hinic3_txq *txq)
{
diff --git a/drivers/net/hinic3/hinic3_tx.h b/drivers/net/hinic3/hinic3_tx.h
index f4c61ea1b1..6026b3fabc 100644
--- a/drivers/net/hinic3/hinic3_tx.h
+++ b/drivers/net/hinic3/hinic3_tx.h
@@ -308,6 +308,7 @@ struct __rte_cache_aligned hinic3_txq {
void hinic3_flush_txqs(struct hinic3_nic_dev *nic_dev);
void hinic3_free_txq_mbufs(struct hinic3_txq *txq);
void hinic3_free_all_txq_mbufs(struct hinic3_nic_dev *nic_dev);
+u16 hinic3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts);
int hinic3_stop_sq(struct hinic3_txq *txq);
int hinic3_start_all_sqs(struct rte_eth_dev *eth_dev);
int hinic3_tx_done_cleanup(void *txq, uint32_t free_cnt);
--
2.47.0.windows.2
^ permalink raw reply [flat|nested] 30+ messages in thread
* [RFC 15/18] net/hinic3: add MML and EEPROM access feature
2025-04-18 9:05 [RFC 00/18] add hinic3 PMD driver Feifei Wang
` (13 preceding siblings ...)
2025-04-18 9:06 ` [RFC 14/18] net/hinic3: add Rx/Tx functions Feifei Wang
@ 2025-04-18 9:06 ` Feifei Wang
2025-04-18 9:06 ` [RFC 16/18] net/hinic3: add RSS promiscuous ops Feifei Wang
` (5 subsequent siblings)
20 siblings, 0 replies; 30+ messages in thread
From: Feifei Wang @ 2025-04-18 9:06 UTC (permalink / raw)
To: dev; +Cc: Xin Wang, Feifei Wang, Yi Chen
From: Xin Wang <wangxin679@h-partners.com>
Add man-machine language (MML) support and implement the get_eeprom method.
Signed-off-by: Xin Wang <wangxin679@h-partners.com>
Reviewed-by: Feifei Wang <wangfeifei40@huawei.com>
Reviewed-by: Yi Chen <chenyi221@huawei.com>
---
drivers/net/hinic3/hinic3_ethdev.c | 13 +
drivers/net/hinic3/mml/hinic3_dbg.c | 171 +++++
drivers/net/hinic3/mml/hinic3_dbg.h | 160 +++++
drivers/net/hinic3/mml/hinic3_mml_cmd.c | 375 +++++++++++
drivers/net/hinic3/mml/hinic3_mml_cmd.h | 131 ++++
drivers/net/hinic3/mml/hinic3_mml_ioctl.c | 215 +++++++
drivers/net/hinic3/mml/hinic3_mml_lib.c | 136 ++++
drivers/net/hinic3/mml/hinic3_mml_lib.h | 275 ++++++++
drivers/net/hinic3/mml/hinic3_mml_main.c | 167 +++++
drivers/net/hinic3/mml/hinic3_mml_queue.c | 749 ++++++++++++++++++++++
drivers/net/hinic3/mml/hinic3_mml_queue.h | 256 ++++++++
drivers/net/hinic3/mml/meson.build | 62 ++
12 files changed, 2710 insertions(+)
create mode 100644 drivers/net/hinic3/mml/hinic3_dbg.c
create mode 100644 drivers/net/hinic3/mml/hinic3_dbg.h
create mode 100644 drivers/net/hinic3/mml/hinic3_mml_cmd.c
create mode 100644 drivers/net/hinic3/mml/hinic3_mml_cmd.h
create mode 100644 drivers/net/hinic3/mml/hinic3_mml_ioctl.c
create mode 100644 drivers/net/hinic3/mml/hinic3_mml_lib.c
create mode 100644 drivers/net/hinic3/mml/hinic3_mml_lib.h
create mode 100644 drivers/net/hinic3/mml/hinic3_mml_main.c
create mode 100644 drivers/net/hinic3/mml/hinic3_mml_queue.c
create mode 100644 drivers/net/hinic3/mml/hinic3_mml_queue.h
create mode 100644 drivers/net/hinic3/mml/meson.build
diff --git a/drivers/net/hinic3/hinic3_ethdev.c b/drivers/net/hinic3/hinic3_ethdev.c
index 7cd101e5c3..9c5decb867 100644
--- a/drivers/net/hinic3/hinic3_ethdev.c
+++ b/drivers/net/hinic3/hinic3_ethdev.c
@@ -21,6 +21,7 @@
#include "base/hinic3_hw_comm.h"
#include "base/hinic3_nic_cfg.h"
#include "base/hinic3_nic_event.h"
+#include "mml/hinic3_mml_lib.h"
#include "hinic3_nic_io.h"
#include "hinic3_tx.h"
#include "hinic3_rx.h"
@@ -2276,6 +2277,16 @@ hinic3_dev_allmulticast_disable(struct rte_eth_dev *dev)
return 0;
}
+static int
+hinic3_get_eeprom(__rte_unused struct rte_eth_dev *dev,
+ struct rte_dev_eeprom_info *info)
+{
+#define MAX_BUF_OUT_LEN 2048
+
+ return hinic3_pmd_mml_lib(info->data, info->offset, info->data,
+ &info->length, MAX_BUF_OUT_LEN);
+}
+
/**
* Get device generic statistics.
*
@@ -2879,6 +2890,7 @@ static const struct eth_dev_ops hinic3_pmd_ops = {
.vlan_offload_set = hinic3_vlan_offload_set,
.allmulticast_enable = hinic3_dev_allmulticast_enable,
.allmulticast_disable = hinic3_dev_allmulticast_disable,
+ .get_eeprom = hinic3_get_eeprom,
.stats_get = hinic3_dev_stats_get,
.stats_reset = hinic3_dev_stats_reset,
.xstats_get = hinic3_dev_xstats_get,
@@ -2919,6 +2931,7 @@ static const struct eth_dev_ops hinic3_pmd_vf_ops = {
.vlan_offload_set = hinic3_vlan_offload_set,
.allmulticast_enable = hinic3_dev_allmulticast_enable,
.allmulticast_disable = hinic3_dev_allmulticast_disable,
+ .get_eeprom = hinic3_get_eeprom,
.stats_get = hinic3_dev_stats_get,
.stats_reset = hinic3_dev_stats_reset,
.xstats_get = hinic3_dev_xstats_get,
diff --git a/drivers/net/hinic3/mml/hinic3_dbg.c b/drivers/net/hinic3/mml/hinic3_dbg.c
new file mode 100644
index 0000000000..7525b68dee
--- /dev/null
+++ b/drivers/net/hinic3/mml/hinic3_dbg.c
@@ -0,0 +1,171 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025. All rights reserved.
+ */
+
+#include "hinic3_compat.h"
+#include "hinic3_hwif.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_wq.h"
+#include "hinic3_nic_cfg.h"
+#include "hinic3_ethdev.h"
+#include "hinic3_rx.h"
+#include "hinic3_tx.h"
+#include "hinic3_nic_io.h"
+#include "hinic3_dbg.h"
+
+#define DB_IDX(db, db_base) \
+ ((u32)(((ulong)(db) - (ulong)(db_base)) / HINIC3_DB_PAGE_SIZE))
+
+int
+hinic3_dbg_get_rq_info(void *hwdev, uint16_t q_id,
+ struct hinic3_dbg_rq_info *rq_info, u16 *msg_size)
+{
+ struct hinic3_hwdev *dev = (struct hinic3_hwdev *)hwdev;
+ struct hinic3_nic_dev *nic_dev =
+ (struct hinic3_nic_dev *)dev->dev_handle;
+ struct hinic3_rxq *rxq = NULL;
+
+ if (q_id >= nic_dev->num_rqs) {
+ PMD_DRV_LOG(ERR, "Invalid rx queue id, q_id: %d, num_rqs: %d",
+ q_id, nic_dev->num_rqs);
+ return -EINVAL;
+ }
+
+ rq_info->q_id = q_id;
+ rxq = nic_dev->rxqs[q_id];
+
+ rq_info->hw_pi = (u16)cpu_to_be16(*rxq->pi_virt_addr);
+ rq_info->ci = rxq->cons_idx & rxq->q_mask;
+ rq_info->sw_pi = rxq->prod_idx & rxq->q_mask;
+ rq_info->wqebb_size = HINIC3_SQ_WQEBB_SIZE;
+ rq_info->q_depth = rxq->q_depth;
+ rq_info->buf_len = rxq->buf_len;
+ rq_info->ci_wqe_page_addr = rxq->queue_buf_vaddr;
+ rq_info->ci_cla_tbl_addr = NULL;
+ rq_info->msix_idx = 0;
+ rq_info->msix_vector = 0;
+
+ *msg_size = sizeof(*rq_info);
+
+ return 0;
+}
+
+int
+hinic3_dbg_get_rx_cqe_info(void *hwdev, uint16_t q_id, uint16_t idx,
+ void *buf_out, uint16_t *out_size)
+{
+ struct hinic3_hwdev *dev = (struct hinic3_hwdev *)hwdev;
+ struct hinic3_nic_dev *nic_dev =
+ (struct hinic3_nic_dev *)dev->dev_handle;
+
+ if (q_id >= nic_dev->num_rqs || idx >= nic_dev->rxqs[q_id]->q_depth)
+ return -EFAULT;
+
+ (void)memcpy(buf_out, (void *)&nic_dev->rxqs[q_id]->rx_cqe[idx],
+ sizeof(struct hinic3_rq_cqe));
+ *out_size = sizeof(struct hinic3_rq_cqe);
+
+ return 0;
+}
+
+int
+hinic3_dbg_get_sq_info(void *dev, u16 q_id, struct hinic3_dbg_sq_info *sq_info,
+ u16 *msg_size)
+{
+ struct hinic3_hwdev *hwdev = (struct hinic3_hwdev *)dev;
+ struct hinic3_nic_dev *nic_dev =
+ (struct hinic3_nic_dev *)hwdev->dev_handle;
+ struct hinic3_txq *txq = NULL;
+
+ if (q_id >= nic_dev->num_sqs) {
+ PMD_DRV_LOG(ERR,
+ "Inputting tx queue id is larger than actual tx "
+ "queue number, qid: %d, num_sqs: %d",
+ q_id, nic_dev->num_sqs);
+ return -EINVAL;
+ }
+
+ sq_info->q_id = q_id;
+ txq = nic_dev->txqs[q_id];
+
+ sq_info->pi = txq->prod_idx & txq->q_mask;
+ sq_info->ci = txq->cons_idx & txq->q_mask;
+ sq_info->fi = (*(u16 *)txq->ci_vaddr_base) & txq->q_mask;
+ sq_info->q_depth = txq->q_depth;
+ sq_info->weqbb_size = HINIC3_SQ_WQEBB_SIZE;
+ sq_info->ci_addr =
+ (volatile u16 *)HINIC3_CI_VADDR(txq->ci_vaddr_base, q_id);
+ sq_info->cla_addr = txq->queue_buf_paddr;
+ sq_info->db_addr.phy_addr = (u64 *)txq->db_addr;
+ sq_info->pg_idx = DB_IDX(txq->db_addr, hwdev->hwif->db_base);
+
+ *msg_size = sizeof(*sq_info);
+
+ return 0;
+}
+
+int
+hinic3_dbg_get_sq_wqe_info(void *dev, u16 q_id, u16 idx, u16 wqebb_cnt, u8 *wqe,
+ u16 *wqe_size)
+{
+ struct hinic3_hwdev *hwdev = (struct hinic3_hwdev *)dev;
+ struct hinic3_nic_dev *nic_dev =
+ (struct hinic3_nic_dev *)hwdev->dev_handle;
+ struct hinic3_txq *txq = NULL;
+ void *src_wqe = NULL;
+ u32 offset;
+
+ if (q_id >= nic_dev->num_sqs) {
+ PMD_DRV_LOG(ERR,
+ "Inputting tx queue id is larger than actual tx "
+ "queue number, qid: %d, num_sqs: %d",
+ q_id, nic_dev->num_sqs);
+ return -EINVAL;
+ }
+
+ txq = nic_dev->txqs[q_id];
+ if (idx + wqebb_cnt > txq->q_depth)
+ return -EFAULT;
+
+ src_wqe = (void *)txq->queue_buf_vaddr;
+ offset = (u32)idx << txq->wqebb_shift;
+
+ (void)memcpy((void *)wqe, (void *)((u8 *)src_wqe + offset),
+ (size_t)((u32)wqebb_cnt << txq->wqebb_shift));
+
+ *wqe_size = (u16)((u32)wqebb_cnt << txq->wqebb_shift);
+ return 0;
+}
+
+int
+hinic3_dbg_get_rq_wqe_info(void *dev, u16 q_id, u16 idx, u16 wqebb_cnt, u8 *wqe,
+ u16 *wqe_size)
+{
+ struct hinic3_hwdev *hwdev = (struct hinic3_hwdev *)dev;
+ struct hinic3_nic_dev *nic_dev =
+ (struct hinic3_nic_dev *)hwdev->dev_handle;
+ struct hinic3_rxq *rxq = NULL;
+ void *src_wqe = NULL;
+ u32 offset;
+
+ if (q_id >= nic_dev->num_rqs) {
+ PMD_DRV_LOG(ERR,
+ "Inputting rx queue id is larger than actual rx "
+ "queue number, qid: %d, num_rqs: %d",
+ q_id, nic_dev->num_rqs);
+ return -EINVAL;
+ }
+
+ rxq = nic_dev->rxqs[q_id];
+ if (idx + wqebb_cnt > rxq->q_depth)
+ return -EFAULT;
+
+ src_wqe = (void *)rxq->queue_buf_vaddr;
+ offset = (u32)idx << rxq->wqebb_shift;
+
+ (void)memcpy((void *)wqe, (void *)((u8 *)src_wqe + offset),
+ (size_t)((u32)wqebb_cnt << rxq->wqebb_shift));
+
+ *wqe_size = (u16)((u32)wqebb_cnt << rxq->wqebb_shift);
+ return 0;
+}
diff --git a/drivers/net/hinic3/mml/hinic3_dbg.h b/drivers/net/hinic3/mml/hinic3_dbg.h
new file mode 100644
index 0000000000..bac96c84a0
--- /dev/null
+++ b/drivers/net/hinic3/mml/hinic3_dbg.h
@@ -0,0 +1,160 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025. All rights reserved.
+ */
+
+#ifndef _HINIC3_MML_DBG_H
+#define _HINIC3_MML_DBG_H
+
+/* nic_tool */
+struct hinic3_tx_hw_page {
+ u64 *phy_addr;
+ u64 *map_addr;
+};
+
+/* nic_tool */
+struct hinic3_dbg_sq_info {
+ u16 q_id;
+ u16 pi;
+ u16 ci; /**< sw_ci */
+ u16 fi; /**< hw_ci */
+
+ u32 q_depth;
+ u16 weqbb_size;
+
+ volatile u16 *ci_addr;
+ u64 cla_addr;
+
+ struct hinic3_tx_hw_page db_addr;
+ u32 pg_idx;
+};
+
+/* nic_tool */
+struct hinic3_dbg_rq_info {
+ u16 q_id;
+ u16 hw_pi;
+ u16 ci; /**< sw_ci */
+ u16 sw_pi;
+ u16 wqebb_size;
+ u16 q_depth;
+ u16 buf_len;
+
+ void *ci_wqe_page_addr;
+ void *ci_cla_tbl_addr;
+ u16 msix_idx;
+ u32 msix_vector;
+};
+
+void *hinic3_dbg_get_sq_wq_handle(void *hwdev, u16 q_id);
+
+void *hinic3_dbg_get_rq_wq_handle(void *hwdev, u16 q_id);
+
+void *hinic3_dbg_get_sq_ci_addr(void *hwdev, u16 q_id);
+
+u16 hinic3_dbg_get_global_qpn(void *hwdev);
+
+/**
+ * Get details of specified RX queue and store in `rq_info`.
+ *
+ * @param[in] hwdev
+ * Pointer to the hardware device.
+ * @param[in] q_id
+ * RX queue ID.
+ * @param[out] rq_info
+ * Structure to store RX queue information.
+ * @param[out] msg_size
+ * Size of the message.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int hinic3_dbg_get_rq_info(void *hwdev, uint16_t q_id,
+ struct hinic3_dbg_rq_info *rq_info, u16 *msg_size);
+
+/**
+ * Get the RX CQE at the specified index from the given RX queue.
+ *
+ * @param[in] hwdev
+ * Pointer to hardware device structure.
+ * @param[in] q_id
+ * RX queue ID.
+ * @param[in] idx
+ * Index of the CQE.
+ * @param[out] buf_out
+ * Buffer to store the CQE.
+ * @param[out] out_size
+ * Size of the CQE.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int hinic3_dbg_get_rx_cqe_info(void *hwdev, uint16_t q_id, uint16_t idx,
+ void *buf_out, uint16_t *out_size);
+
+/**
+ * Get SQ information for debugging.
+ *
+ * @param[in] hwdev
+ * Pointer to the hardware device structure.
+ * @param[in] q_id
+ * ID of SQ to retrieve information for.
+ * @param[out] sq_info
+ * Pointer to the structure where the SQ information will be stored.
+ * @param[out] msg_size
+ * The size (in bytes) of the `sq_info` structure.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ * - -EINVAL if the queue ID is invalid.
+ */
+int hinic3_dbg_get_sq_info(void *hwdev, u16 q_id,
+ struct hinic3_dbg_sq_info *sq_info, u16 *msg_size);
+
+/**
+ * Get WQE information from a send queue.
+ *
+ * @param[in] hwdev
+ * Pointer to the hardware device structure.
+ * @param[in] q_id
+ * The ID of the send queue from which to retrieve WQE information.
+ * @param[in] idx
+ * The index of the first WQE to retrieve.
+ * @param[in] wqebb_cnt
+ * The number of WQEBBs to retrieve.
+ * @param[out] wqe
+ * Pointer to the buffer where the WQE data will be stored.
+ * @param[out] wqe_size
+ * The size (in bytes) of the retrieved WQE data.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ * - -EINVAL if queue ID invalid.
+ * - -EFAULT if index invalid.
+ */
+int hinic3_dbg_get_sq_wqe_info(void *hwdev, u16 q_id, u16 idx, u16 wqebb_cnt,
+ u8 *wqe, u16 *wqe_size);
+
+/**
+ * Get WQE information from a receive queue.
+ *
+ * @param[in] hwdev
+ * Pointer to the hardware device structure.
+ * @param[in] q_id
+ * The ID of the receive queue from which to retrieve WQE information.
+ * @param[in] idx
+ * The index of the first WQE to retrieve.
+ * @param[in] wqebb_cnt
+ * The number of WQEBBs to retrieve.
+ * @param[out] wqe
+ * Pointer to the buffer where the WQE data will be stored.
+ * @param[out] wqe_size
+ * The size (in bytes) of the retrieved WQE data.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ * - -EINVAL if queue ID invalid.
+ * - -EFAULT if index invalid.
+ */
+int hinic3_dbg_get_rq_wqe_info(void *hwdev, u16 q_id, u16 idx, u16 wqebb_cnt,
+ u8 *wqe, u16 *wqe_size);
+
+#endif /* _HINIC3_MML_DBG_H */
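For orientation, a hedged sketch of calling the RQ query above from a debug helper, assuming the driver's u16 typedef and headers are in scope; obtaining the hwdev handle is left out (in-tree callers go through the ioctl dispatcher added in this patch):

    #include <stdio.h>
    #include "hinic3_dbg.h"

    /* Dump basic state of RX queue 0 for a valid hwdev handle. */
    static void
    dump_rq0(void *hwdev)
    {
        struct hinic3_dbg_rq_info rq_info;
        u16 msg_size = 0;

        if (hinic3_dbg_get_rq_info(hwdev, 0, &rq_info, &msg_size) == 0)
            printf("rq0: hw_pi=%u ci=%u depth=%u buf_len=%u\n",
                   rq_info.hw_pi, rq_info.ci, rq_info.q_depth,
                   rq_info.buf_len);
    }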
diff --git a/drivers/net/hinic3/mml/hinic3_mml_cmd.c b/drivers/net/hinic3/mml/hinic3_mml_cmd.c
new file mode 100644
index 0000000000..06d20a62bd
--- /dev/null
+++ b/drivers/net/hinic3/mml/hinic3_mml_cmd.c
@@ -0,0 +1,375 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025. All rights reserved.
+ */
+
+#include "hinic3_mml_lib.h"
+#include "hinic3_compat.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_nic_cfg.h"
+#include "hinic3_ethdev.h"
+#include "hinic3_mml_cmd.h"
+
+/**
+ * Compares two strings for equality.
+ *
+ * @param[in] command
+ * The first string to compare.
+ * @param[in] argument
+ * The second string to compare.
+ *
+ * @return
+ * UDA_TRUE if the strings are equal, otherwise UDA_FALSE.
+ */
+static int
+string_cmp(const char *command, const char *argument)
+{
+ const char *cmd = command;
+ const char *arg = argument;
+
+ if (!cmd || !arg)
+ return UDA_FALSE;
+
+ if (strlen(cmd) != strlen(arg))
+ return UDA_FALSE;
+
+ do {
+ if (*cmd != *arg)
+ return UDA_FALSE;
+ cmd++;
+ arg++;
+ } while (*cmd != '\0');
+
+ return UDA_TRUE;
+}
+
+static void
+show_tool_version(cmd_adapter_t *adapter)
+{
+ hinic3_pmd_mml_log(adapter->show_str, &adapter->show_len,
+ "hinic3 pmd version %s", HINIC3_PMD_DRV_VERSION);
+}
+
+static void
+show_tool_help(cmd_adapter_t *adapter)
+{
+ int i;
+ major_cmd_t *major_cmd = NULL;
+
+ if (!adapter)
+ return;
+
+ hinic3_pmd_mml_log(adapter->show_str, &adapter->show_len,
+ "\n Usage:evsadm exec dump-hinic-status <major_cmd> "
+ "[option]\n");
+ hinic3_pmd_mml_log(adapter->show_str, &adapter->show_len,
+ " -h, --help show help information");
+ hinic3_pmd_mml_log(adapter->show_str, &adapter->show_len,
+ " -v, --version show version information");
+ hinic3_pmd_mml_log(adapter->show_str, &adapter->show_len,
+ "\n Major Commands:\n");
+
+ for (i = 0; i < adapter->major_cmds; i++) {
+ major_cmd = adapter->p_major_cmd[i];
+ hinic3_pmd_mml_log(adapter->show_str, &adapter->show_len,
+ " %-23s %s", major_cmd->name,
+ major_cmd->description);
+ }
+ hinic3_pmd_mml_log(adapter->show_str, &adapter->show_len, "");
+}
+
+void
+major_command_option(major_cmd_t *major_cmd, const char *little,
+ const char *large, uint32_t have_param,
+ command_record_t record)
+{
+ cmd_option_t *option = NULL;
+
+ if (major_cmd == NULL || (little == NULL && large == NULL) || !record) {
+ PMD_DRV_LOG(ERR, "Invalid input parameter.");
+ return;
+ }
+
+ if (major_cmd->option_count >= COMMAND_MAX_OPTIONS) {
+ PMD_DRV_LOG(ERR, "Do not support more than %d options",
+ COMMAND_MAX_OPTIONS);
+ return;
+ }
+
+ option = &major_cmd->options[major_cmd->option_count];
+ major_cmd->options_repeat_flag[major_cmd->option_count] = 0;
+ major_cmd->option_count++;
+
+ option->record = record;
+ option->little = little;
+ option->large = large;
+ option->have_param = have_param;
+}
+
+void
+major_command_register(cmd_adapter_t *adapter, major_cmd_t *major_cmd)
+{
+ int i = 0;
+
+ if (adapter == NULL || major_cmd == NULL) {
+ PMD_DRV_LOG(ERR, "Invalid input parameter.");
+ return;
+ }
+
+ if (adapter->major_cmds >= COMMAND_MAX_MAJORS) {
+ PMD_DRV_LOG(ERR, "Major Commands is full");
+ return;
+ }
+ while (adapter->p_major_cmd[i] != NULL)
+ i++;
+ adapter->p_major_cmd[i] = major_cmd;
+ adapter->major_cmds++;
+ major_cmd->adapter = adapter;
+ major_cmd->err_no = UDA_SUCCESS;
+ (void)memset(major_cmd->err_str, 0, sizeof(major_cmd->err_str));
+}
+
+static int
+is_help_version(cmd_adapter_t *adapter, int argc, char *arg)
+{
+ if (COMMAND_HELP_POSITION(argc) &&
+ (string_cmp("-h", arg) || string_cmp("--help", arg))) {
+ show_tool_help(adapter);
+ return UDA_TRUE;
+ }
+
+ if (COMMAND_VERSION_POSITION(argc) &&
+ (string_cmp("-v", arg) || string_cmp("--version", arg))) {
+ show_tool_version(adapter);
+ return UDA_TRUE;
+ }
+
+ return UDA_FALSE;
+}
+
+static int
+check_command_length(int argc, char **argv)
+{
+ int i;
+ unsigned long long str_len = 0;
+
+ for (i = 1; i < argc; i++)
+ str_len += strlen(argv[i]);
+
+ if (str_len > COMMAND_MAX_STRING)
+ return -UDA_EINVAL;
+
+ return UDA_SUCCESS;
+}
+
+static inline int
+char_check(const char cmd)
+{
+ if (cmd >= 'a' && cmd <= 'z')
+ return UDA_SUCCESS;
+
+ if (cmd >= 'A' && cmd <= 'Z')
+ return UDA_SUCCESS;
+ return UDA_FAIL;
+}
+
+static int
+major_command_check_param(cmd_option_t *option, char *arg)
+{
+ if (!option)
+ return -UDA_EINVAL;
+ if (option->have_param != 0) {
+ if (!arg || ((arg[0] == '-') && char_check(arg[1])))
+ return -UDA_EINVAL;
+ return UDA_SUCCESS;
+ }
+
+ return -UDA_ENOOBJ;
+}
+
+static int
+major_cmd_repeat_option_set(major_cmd_t *major_cmd, const cmd_option_t *option,
+ u32 *options_repeat_flag)
+{
+ int err;
+
+ if (*options_repeat_flag != 0) {
+ major_cmd->err_no = -UDA_EINVAL;
+ err = snprintf(major_cmd->err_str, COMMANDER_ERR_MAX_STRING - 1,
+ "Repeated option %s|%s.", option->little,
+ option->large);
+ if (err <= 0) {
+ PMD_DRV_LOG(ERR,
+ "snprintf cmd repeat option failed, err: %d.",
+ err);
+ }
+ return -UDA_EINVAL;
+ }
+ *options_repeat_flag = 1;
+ return UDA_SUCCESS;
+}
+
+static int
+major_cmd_option_check(major_cmd_t *major_cmd, char **argv, int *index)
+{
+ int j, ret, err, option_ok, intermediate_var;
+ cmd_option_t *option = NULL;
+ char *arg = argv[*index];
+
+ /* Find matching option. */
+ for (j = 0; j < major_cmd->option_count; j++) {
+ option = &major_cmd->options[j];
+ option_ok = (((option->little != NULL) &&
+ string_cmp(option->little, arg)) ||
+ ((option->large != NULL) &&
+ string_cmp(option->large, arg)));
+ if (!option_ok)
+ continue;
+ /* Reject repeated options. */
+ ret = major_cmd_repeat_option_set(major_cmd,
+ option, &major_cmd->options_repeat_flag[j]);
+ if (ret != UDA_SUCCESS)
+ return ret;
+
+ arg = NULL;
+ /* If this option need parameters. */
+ intermediate_var = (*index) + 1;
+ ret = major_command_check_param(option, argv[intermediate_var]);
+ if (ret == UDA_SUCCESS) {
+ (*index)++;
+ arg = argv[*index];
+ } else if (ret == -UDA_EINVAL) {
+ major_cmd->err_no = -UDA_EINVAL;
+ err = snprintf(major_cmd->err_str,
+ COMMANDER_ERR_MAX_STRING - 1,
+ "%s|%s option need parameter.",
+ option->little, option->large);
+ if (err <= 0) {
+ PMD_DRV_LOG(ERR,
+ "snprintf cmd option need para "
+ "failed, err: %d.",
+ err);
+ }
+ return -UDA_EINVAL;
+ }
+
+ /* Record messages. */
+ ret = option->record(major_cmd, arg);
+ if (ret != UDA_SUCCESS)
+ return ret;
+ break;
+ }
+
+ /* Illegal option. */
+ if (j == major_cmd->option_count) {
+ major_cmd->err_no = -UDA_EINVAL;
+ err = snprintf(major_cmd->err_str, COMMANDER_ERR_MAX_STRING - 1,
+ "%s is not option needed.", arg);
+ if (err <= 0) {
+ PMD_DRV_LOG(ERR,
+ "snprintf cmd option invalid failed, err: %d.",
+ err);
+ }
+ return -UDA_EINVAL;
+ }
+ return UDA_SUCCESS;
+}
+
+static int
+major_command_parse(major_cmd_t *major_cmd, int argc, char **argv)
+{
+ int i, err;
+
+ for (i = 0; i < argc; i++) {
+ err = major_cmd_option_check(major_cmd, argv, &i);
+ if (err != UDA_SUCCESS)
+ return err;
+ }
+
+ return UDA_SUCCESS;
+}
+
+static int
+copy_result_to_buffer(void *buf_out, char *result, int len)
+{
+ int ret;
+
+ ret = snprintf(buf_out, len - 1, "%s", result);
+ if (ret <= 0)
+ return 0;
+
+ return ret + 1;
+}
+
+void
+command_parse(cmd_adapter_t *adapter, int argc, char **argv, void *buf_out,
+ uint32_t *out_len)
+{
+ int i;
+ major_cmd_t *major_cmd = NULL;
+ char *arg = argv[1];
+
+ if (is_help_version(adapter, argc, arg) == UDA_TRUE) {
+ *out_len = (u32)copy_result_to_buffer(buf_out,
+ adapter->show_str, MAX_SHOW_STR_LEN);
+ return;
+ }
+
+ for (i = 0; i < adapter->major_cmds; i++) {
+ major_cmd = adapter->p_major_cmd[i];
+
+ /* Find major command. */
+ if (!string_cmp(major_cmd->name, arg))
+ continue;
+ if (check_command_length(argc, argv) != UDA_SUCCESS) {
+ major_cmd->err_no = -UDA_EINVAL;
+ (void)snprintf(major_cmd->err_str,
+ COMMANDER_ERR_MAX_STRING - 1,
+ "Command input too long.");
+ break;
+ }
+
+ /* Handle sub-command. */
+ if (argc > SUB_COMMAND_OFFSET) {
+ if (major_command_parse(major_cmd,
+ argc - SUB_COMMAND_OFFSET,
+ argv + SUB_COMMAND_OFFSET) != UDA_SUCCESS) {
+ goto PARSE_OUT;
+ }
+ }
+
+ /* Command exec. */
+ major_cmd->execute(major_cmd);
+ break;
+ }
+
+ /* Command not found. */
+ if (i == adapter->major_cmds) {
+ hinic3_pmd_mml_log(adapter->show_str, &adapter->show_len,
+ "Unknown major command, assign 'evsadm exec "
+ "dump-hinic-status -h' for help.");
+ *out_len = (u32)copy_result_to_buffer(buf_out,
+ adapter->show_str, MAX_SHOW_STR_LEN);
+ return;
+ }
+
+PARSE_OUT:
+ if (major_cmd->err_no != UDA_SUCCESS &&
+ major_cmd->err_no != -UDA_CANCEL) {
+ PMD_DRV_LOG(ERR, "%s command error(%d): %s", major_cmd->name,
+ major_cmd->err_no, major_cmd->err_str);
+
+ hinic3_pmd_mml_log(major_cmd->show_str, &major_cmd->show_len,
+ "%s command error(%d): %s",
+ major_cmd->name, major_cmd->err_no,
+ major_cmd->err_str);
+ }
+ *out_len = (u32)copy_result_to_buffer(buf_out, major_cmd->show_str,
+ MAX_SHOW_STR_LEN);
+}
+
+void
+tool_target_init(int *bus_num, char *dev_name, int len)
+{
+ *bus_num = TRGET_UNKNOWN_BUS_NUM;
+ (void)memset(dev_name, 0, len);
+}
diff --git a/drivers/net/hinic3/mml/hinic3_mml_cmd.h b/drivers/net/hinic3/mml/hinic3_mml_cmd.h
new file mode 100644
index 0000000000..0e1ece38f0
--- /dev/null
+++ b/drivers/net/hinic3/mml/hinic3_mml_cmd.h
@@ -0,0 +1,131 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025. All rights reserved.
+ */
+
+#ifndef _HINIC3_MML_CMD
+#define _HINIC3_MML_CMD
+
+#include <stdint.h>
+
+#define COMMAND_HELP_POSITION(argc) \
+ ({ \
+ typeof(argc) __argc = (argc); \
+ (__argc == 1 || __argc == 2); \
+ })
+#define COMMAND_VERSION_POSITION(argc) ((argc) == 2)
+#define SUB_COMMAND_OFFSET 2
+
+#define COMMAND_MAX_MAJORS 128
+#define COMMAND_MAX_OPTIONS 64
+#define PARAM_MAX_STRING 128
+#define COMMAND_MAX_STRING 512
+#define COMMANDER_ERR_MAX_STRING 128
+
+#define MAX_NAME_LEN 32
+#define MAX_DES_LEN 128
+#define MAX_SHOW_STR_LEN 2048
+
+struct tag_major_cmd_t;
+struct tag_cmd_adapter_t;
+
+typedef int (*command_record_t)(struct tag_major_cmd_t *major, char *param);
+typedef void (*command_execute_t)(struct tag_major_cmd_t *major);
+
+typedef struct {
+ const char *little;
+ const char *large;
+ unsigned int have_param;
+ command_record_t record;
+} cmd_option_t;
+
+/* Major command structure for save command details and options. */
+typedef struct tag_major_cmd_t {
+ struct tag_cmd_adapter_t *adapter;
+ char name[MAX_NAME_LEN];
+ int option_count;
+ cmd_option_t options[COMMAND_MAX_OPTIONS];
+ uint32_t options_repeat_flag[COMMAND_MAX_OPTIONS];
+ command_execute_t execute;
+ int err_no;
+ char err_str[COMMANDER_ERR_MAX_STRING];
+ char show_str[MAX_SHOW_STR_LEN];
+ int show_len;
+ char description[MAX_DES_LEN];
+ void *cmd_st; /**< Command show queue state structure. */
+} major_cmd_t;
+
+typedef struct tag_cmd_adapter_t {
+ const char *name;
+ const char *version;
+ major_cmd_t *p_major_cmd[COMMAND_MAX_MAJORS];
+ int major_cmds;
+ char show_str[MAX_SHOW_STR_LEN];
+ int show_len;
+ char *cmd_buf;
+} cmd_adapter_t;
+
+/**
+ * Add an option to a major command.
+ *
+ * This function adds a command option with its short and long forms, whether it
+ * requires a parameter, and the function to handle it.
+ *
+ * @param[in] major_cmd
+ * Pointer to the major command structure.
+ * @param[in] little
+ * Short form of the option.
+ * @param[in] large
+ * Long form of the option.
+ * @param[in] have_param
+ * Flag indicating whether the option requires a parameter.
+ * @param[in] record
+ * Function to handle the option's action.
+ */
+void major_command_option(major_cmd_t *major_cmd, const char *little,
+ const char *large, uint32_t have_param,
+ command_record_t record);
+
+/**
+ * Register a major command with adapter.
+ *
+ * @param[in] adapter
+ * Pointer to command adapter.
+ * @param[in] major_cmd
+ * The major command to be registered with the adapter.
+ */
+void major_command_register(cmd_adapter_t *adapter, major_cmd_t *major_cmd);
+
+/**
+ * Parse and execute commands.
+ *
+ * @param[in] adapter
+ * Pointer to command adapter.
+ * @param[in] argc
+ * The number of command arguments.
+ * @param[in] argv
+ * The array of command arguments.
+ * @param[out] buf_out
+ * The buffer used to store the output result.
+ * @param[out] out_len
+ * The length (in bytes) of the output result.
+ */
+void command_parse(cmd_adapter_t *adapter, int argc, char **argv, void *buf_out,
+ uint32_t *out_len);
+
+/**
+ * Initialize the target bus number and device name.
+ *
+ * @param[out] bus_num
+ * Pointer to the bus number, which will be set to a default unknown value.
+ * @param[out] dev_name
+ * Pointer to the device name buffer, which will be cleared (set to zeros).
+ * @param[in] len
+ * The length of the device name buffer.
+ */
+void tool_target_init(int *bus_num, char *dev_name, int len);
+
+int cmd_show_q_init(cmd_adapter_t *adapter);
+int cmd_show_xstats_init(cmd_adapter_t *adapter);
+int cmd_show_dump_init(cmd_adapter_t *adapter);
+
+#endif /* _HINIC3_MML_CMD */
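To make the registration flow concrete, a hedged sketch of wiring a hypothetical major command into the adapter; the name, option and handlers are invented (real registrations live in cmd_show_q_init() and friends):

    #include <stdio.h>
    #include <stdlib.h>
    #include "hinic3_mml_lib.h"

    static int
    demo_record_dev(major_cmd_t *major, char *param)
    {
        (void)major; (void)param; /* a real handler validates and stores */
        return UDA_SUCCESS;
    }

    static void
    demo_execute(major_cmd_t *major)
    {
        hinic3_pmd_mml_log(major->show_str, &major->show_len, "demo done");
    }

    static int
    demo_cmd_init(cmd_adapter_t *adapter)
    {
        major_cmd_t *cmd = calloc(1, sizeof(*cmd));

        if (!cmd)
            return -UDA_ENONMEM;
        (void)snprintf(cmd->name, MAX_NAME_LEN, "nic_demo");
        (void)snprintf(cmd->description, MAX_DES_LEN, "demo command");
        cmd->execute = demo_execute;
        major_command_register(adapter, cmd);
        major_command_option(cmd, "-i", "--device", PARAM_NEED,
                             demo_record_dev);
        return UDA_SUCCESS;
    }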
diff --git a/drivers/net/hinic3/mml/hinic3_mml_ioctl.c b/drivers/net/hinic3/mml/hinic3_mml_ioctl.c
new file mode 100644
index 0000000000..0fd6b97f5e
--- /dev/null
+++ b/drivers/net/hinic3/mml/hinic3_mml_ioctl.c
@@ -0,0 +1,215 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025. All rights reserved.
+ */
+#include <rte_ethdev.h>
+#include <rte_pci.h>
+#include <rte_bus_pci.h>
+#include <rte_common.h>
+#include <rte_ether.h>
+#include <rte_ethdev_core.h>
+#include "hinic3_mml_lib.h"
+#include "hinic3_dbg.h"
+#include "hinic3_compat.h"
+#include "hinic3_csr.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_nic_cfg.h"
+#include "hinic3_ethdev.h"
+
+static int
+get_tx_info(struct rte_eth_dev *dev, void *buf_in, uint16_t in_size,
+ void *buf_out, uint16_t *out_size)
+{
+ uint16_t q_id = *((uint16_t *)buf_in);
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+
+ if (in_size != sizeof(int))
+ return -UDA_EINVAL;
+
+ return hinic3_dbg_get_sq_info(nic_dev->hwdev, q_id, buf_out, out_size);
+}
+
+static int
+get_tx_wqe_info(struct rte_eth_dev *dev, void *buf_in, uint16_t in_size,
+ void *buf_out, uint16_t *out_size)
+{
+ struct hinic_wqe_info *wqe_info = (struct hinic_wqe_info *)buf_in;
+ uint16_t q_id = (uint16_t)wqe_info->q_id;
+ uint16_t idx = (uint16_t)wqe_info->wqe_id;
+ uint16_t wqebb_cnt = (uint16_t)wqe_info->wqebb_cnt;
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+
+ if (in_size != sizeof(struct hinic_wqe_info))
+ return -UDA_EINVAL;
+
+ return hinic3_dbg_get_sq_wqe_info(nic_dev->hwdev, q_id, idx, wqebb_cnt,
+ buf_out, out_size);
+}
+
+static int
+get_rx_info(struct rte_eth_dev *dev, void *buf_in, uint16_t in_size,
+ void *buf_out, uint16_t *out_size)
+{
+ uint16_t q_id = *((uint16_t *)buf_in);
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+
+ if (in_size != sizeof(int))
+ return -UDA_EINVAL;
+
+ return hinic3_dbg_get_rq_info(nic_dev->hwdev, q_id, buf_out, out_size);
+}
+
+static int
+get_rx_wqe_info(struct rte_eth_dev *dev, void *buf_in, uint16_t in_size,
+ void *buf_out, uint16_t *out_size)
+{
+ struct hinic_wqe_info *wqe_info = (struct hinic_wqe_info *)buf_in;
+ uint16_t q_id = (uint16_t)wqe_info->q_id;
+ uint16_t idx = (uint16_t)wqe_info->wqe_id;
+ uint16_t wqebb_cnt = (uint16_t)wqe_info->wqebb_cnt;
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+
+ if (in_size != sizeof(struct hinic_wqe_info))
+ return -UDA_EINVAL;
+
+ return hinic3_dbg_get_rq_wqe_info(nic_dev->hwdev, q_id, idx, wqebb_cnt,
+ buf_out, out_size);
+}
+
+static int
+get_rx_cqe_info(struct rte_eth_dev *dev, void *buf_in, uint16_t in_size,
+ void *buf_out, uint16_t *out_size)
+{
+ struct hinic_wqe_info *wqe_info = (struct hinic_wqe_info *)buf_in;
+ uint16_t q_id = (uint16_t)wqe_info->q_id;
+ uint16_t idx = (uint16_t)wqe_info->wqe_id;
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+
+ if (in_size != sizeof(struct hinic_wqe_info))
+ return -UDA_EINVAL;
+
+ return hinic3_dbg_get_rx_cqe_info(nic_dev->hwdev, q_id, idx, buf_out,
+ out_size);
+}
+
+typedef int (*nic_drv_module)(struct rte_eth_dev *dev, void *buf_in,
+ uint16_t in_size, void *buf_out,
+ uint16_t *out_size);
+
+struct nic_drv_module_handle {
+ enum driver_cmd_type drv_cmd_name;
+ nic_drv_module drv_func;
+};
+
+const struct nic_drv_module_handle g_nic_drv_module_cmd_handle[] = {
+ {TX_INFO, get_tx_info}, {TX_WQE_INFO, get_tx_wqe_info},
+ {RX_INFO, get_rx_info}, {RX_WQE_INFO, get_rx_wqe_info},
+ {RX_CQE_INFO, get_rx_cqe_info},
+};
+
+static int
+send_to_nic_driver(struct rte_eth_dev *dev, struct msg_module *nt_msg)
+{
+ int index;
+ int err = 0;
+ enum driver_cmd_type cmd_type =
+ (enum driver_cmd_type)nt_msg->msg_formate;
+ int num_cmds = sizeof(g_nic_drv_module_cmd_handle) /
+ sizeof(g_nic_drv_module_cmd_handle[0]);
+
+ for (index = 0; index < num_cmds; index++) {
+ if (cmd_type ==
+ g_nic_drv_module_cmd_handle[index].drv_cmd_name) {
+ err = g_nic_drv_module_cmd_handle[index].drv_func(dev,
+ nt_msg->in_buf,
+ (uint16_t)nt_msg->buf_in_size, nt_msg->out_buf,
+ (uint16_t *)&nt_msg->buf_out_size);
+ break;
+ }
+ }
+
+ if (index == num_cmds) {
+ PMD_DRV_LOG(ERR, "Unknown nic driver cmd: %d", cmd_type);
+ err = -UDA_EINVAL;
+ }
+
+ return err;
+}
+
+static int
+hinic3_msg_handle(struct rte_eth_dev *dev, struct msg_module *nt_msg)
+{
+ int err;
+
+ switch (nt_msg->module) {
+ case SEND_TO_NIC_DRIVER:
+ err = send_to_nic_driver(dev, nt_msg);
+ if (err != 0)
+ PMD_DRV_LOG(ERR, "Send message to driver failed");
+ break;
+ default:
+ PMD_DRV_LOG(ERR, "Unknown message module: %d", nt_msg->module);
+ err = -UDA_EINVAL;
+ break;
+ }
+
+ return err;
+}
+
+static struct rte_eth_dev *
+get_eth_dev_by_pci_addr(char *pci_addr, __rte_unused int len)
+{
+ uint32_t i;
+ struct rte_eth_dev *eth_dev = NULL;
+ struct rte_pci_device *pci_dev = NULL;
+ int ret;
+ uint32_t bus, devid, function;
+
+ ret = sscanf(pci_addr, "%02x:%02x.%x", &bus, &devid, &function);
+ if (ret <= 0) {
+ PMD_DRV_LOG(ERR,
+ "Get pci bus devid and function id fail, err: %d",
+ ret);
+ return NULL;
+ }
+
+ for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
+ eth_dev = &rte_eth_devices[i];
+ if (eth_dev->state != RTE_ETH_DEV_ATTACHED)
+ continue;
+
+ pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+
+#ifdef CONFIG_SP_VID_DID
+ if (pci_dev->id.vendor_id == PCI_VENDOR_ID_SPNIC &&
+ (pci_dev->id.device_id == HINIC3_DEV_ID_STANDARD ||
+ pci_dev->id.device_id == HINIC3_DEV_ID_VF) &&
+#else
+ if (pci_dev->id.vendor_id == PCI_VENDOR_ID_HUAWEI &&
+ (pci_dev->id.device_id == HINIC3_DEV_ID_STANDARD ||
+ pci_dev->id.device_id == HINIC3_DEV_ID_VF) &&
+#endif
+ pci_dev->addr.bus == bus && pci_dev->addr.devid == devid &&
+ pci_dev->addr.function == function) {
+ return eth_dev;
+ }
+ }
+
+ return NULL;
+}
+
+int
+hinic3_pmd_mml_ioctl(void *msg)
+{
+ struct msg_module *nt_msg = msg;
+ struct rte_eth_dev *dev;
+
+ dev = get_eth_dev_by_pci_addr(nt_msg->device_name,
+ sizeof(nt_msg->device_name));
+ if (!dev) {
+ PMD_DRV_LOG(ERR, "Can not get the device %s correctly",
+ nt_msg->device_name);
+ return UDA_FAIL;
+ }
+
+ return hinic3_msg_handle(dev, nt_msg);
+}
diff --git a/drivers/net/hinic3/mml/hinic3_mml_lib.c b/drivers/net/hinic3/mml/hinic3_mml_lib.c
new file mode 100644
index 0000000000..dae2efc54b
--- /dev/null
+++ b/drivers/net/hinic3/mml/hinic3_mml_lib.c
@@ -0,0 +1,136 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025. All rights reserved.
+ */
+#include "hinic3_compat.h"
+#include "hinic3_mml_lib.h"
+
+int
+tool_get_valid_target(char *name, struct tool_target *target)
+{
+ int ret = UDA_SUCCESS;
+
+ if (strlen(name) >= MAX_DEV_LEN) {
+ PMD_DRV_LOG(ERR,
+ "Input parameter of device name is too long.");
+ ret = -UDA_ELEN;
+ } else {
+ (void)memcpy(target->dev_name, name, strlen(name));
+ target->bus_num = 0;
+ }
+
+ return ret;
+}
+
+static void
+fill_ioctl_msg_hd(struct msg_module *msg, unsigned int module,
+ unsigned int msg_formate, unsigned int in_buff_len,
+ unsigned int out_buff_len, char *dev_name, int bus_num)
+{
+ (void)memcpy(msg->device_name, dev_name, strlen(dev_name) + 1);
+
+ msg->module = module;
+ msg->msg_formate = msg_formate;
+ msg->buf_in_size = in_buff_len;
+ msg->buf_out_size = out_buff_len;
+ msg->bus_num = bus_num;
+}
+
+static int
+lib_ioctl(struct msg_module *in_buf, void *out_buf)
+{
+ in_buf->out_buf = out_buf;
+
+ return hinic3_pmd_mml_ioctl(in_buf);
+}
+
+int
+lib_tx_sq_info_get(struct tool_target target, struct nic_sq_info *sq_info,
+ int sq_id)
+{
+ struct msg_module msg_to_kernel;
+
+ (void)memset(&msg_to_kernel, 0, sizeof(msg_to_kernel));
+ fill_ioctl_msg_hd(&msg_to_kernel, SEND_TO_NIC_DRIVER, TX_INFO,
+ (unsigned int)sizeof(int),
+ (unsigned int)sizeof(struct nic_sq_info),
+ target.dev_name, target.bus_num);
+ msg_to_kernel.in_buf = (void *)&sq_id;
+
+ return lib_ioctl(&msg_to_kernel, sq_info);
+}
+
+int
+lib_tx_wqe_info_get(struct tool_target target, struct nic_sq_info *sq_info,
+ int sq_id, int wqe_id, void *nwqe, int nwqe_size)
+{
+ struct msg_module msg_to_kernel;
+ struct hinic_wqe_info wqe = {0};
+
+ wqe.wqe_id = wqe_id;
+ wqe.q_id = sq_id;
+ wqe.wqebb_cnt = nwqe_size / sq_info->sq_wqebb_size;
+
+ (void)memset(&msg_to_kernel, 0, sizeof(msg_to_kernel));
+ fill_ioctl_msg_hd(&msg_to_kernel, SEND_TO_NIC_DRIVER, TX_WQE_INFO,
+ (unsigned int)(sizeof(struct hinic_wqe_info)),
+ nwqe_size, target.dev_name, target.bus_num);
+ msg_to_kernel.in_buf = (void *)&wqe;
+
+ return lib_ioctl(&msg_to_kernel, nwqe);
+}
+
+int
+lib_rx_rq_info_get(struct tool_target target, struct nic_rq_info *rq_info,
+ int rq_id)
+{
+ struct msg_module msg_to_kernel;
+
+ (void)memset(&msg_to_kernel, 0, sizeof(msg_to_kernel));
+ fill_ioctl_msg_hd(&msg_to_kernel, SEND_TO_NIC_DRIVER, RX_INFO,
+ (unsigned int)(sizeof(int)),
+ (unsigned int)sizeof(struct nic_rq_info),
+ target.dev_name, target.bus_num);
+ msg_to_kernel.in_buf = &rq_id;
+
+ return lib_ioctl(&msg_to_kernel, rq_info);
+}
+
+int
+lib_rx_wqe_info_get(struct tool_target target, struct nic_rq_info *rq_info,
+ int rq_id, int wqe_id, void *nwqe, int nwqe_size)
+{
+ struct msg_module msg_to_kernel;
+ struct hinic_wqe_info wqe = {0};
+
+ wqe.wqe_id = wqe_id;
+ wqe.q_id = rq_id;
+ wqe.wqebb_cnt = nwqe_size / rq_info->rq_wqebb_size;
+
+ (void)memset(&msg_to_kernel, 0, sizeof(msg_to_kernel));
+ fill_ioctl_msg_hd(&msg_to_kernel, SEND_TO_NIC_DRIVER, RX_WQE_INFO,
+ (unsigned int)(sizeof(struct hinic_wqe_info)),
+ nwqe_size, target.dev_name, target.bus_num);
+ msg_to_kernel.in_buf = (void *)&wqe;
+
+ return lib_ioctl(&msg_to_kernel, nwqe);
+}
+
+int
+lib_rx_cqe_info_get(struct tool_target target,
+ __rte_unused struct nic_rq_info *rq_info, int rq_id,
+ int wqe_id, void *nwqe, int nwqe_size)
+{
+ struct msg_module msg_to_kernel;
+ struct hinic_wqe_info wqe = {0};
+
+ wqe.wqe_id = wqe_id;
+ wqe.q_id = rq_id;
+
+ (void)memset(&msg_to_kernel, 0, sizeof(msg_to_kernel));
+ fill_ioctl_msg_hd(&msg_to_kernel, SEND_TO_NIC_DRIVER, RX_CQE_INFO,
+ (unsigned int)(sizeof(struct hinic_wqe_info)),
+ nwqe_size, target.dev_name, target.bus_num);
+ msg_to_kernel.in_buf = (void *)&wqe;
+
+ return lib_ioctl(&msg_to_kernel, nwqe);
+}
diff --git a/drivers/net/hinic3/mml/hinic3_mml_lib.h b/drivers/net/hinic3/mml/hinic3_mml_lib.h
new file mode 100644
index 0000000000..42c365922f
--- /dev/null
+++ b/drivers/net/hinic3/mml/hinic3_mml_lib.h
@@ -0,0 +1,275 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025. All rights reserved.
+ */
+
+#ifndef _HINIC3_MML_LIB
+#define _HINIC3_MML_LIB
+
+#include <string.h>
+#include <stdlib.h>
+#include <stdint.h>
+
+#include "hinic3_mml_cmd.h"
+#include "hinic3_compat.h"
+#include "hinic3_mgmt.h"
+
+#define MAX_DEV_LEN 16
+#define TRGET_UNKNOWN_BUS_NUM (-1)
+
+#ifndef DEV_NAME_LEN
+#define DEV_NAME_LEN 64
+#endif
+
+enum {
+ UDA_SUCCESS = 0x0,
+ UDA_FAIL,
+ UDA_ENXIO,
+ UDA_ENONMEM,
+ UDA_EBUSY,
+ UDA_ECRC,
+ UDA_EINVAL,
+ UDA_EFAULT,
+ UDA_ELEN,
+ UDA_ECMD,
+ UDA_ENODRIVER,
+ UDA_EXIST,
+ UDA_EOVERSTEP,
+ UDA_ENOOBJ,
+ UDA_EOBJ,
+ UDA_ENOMATCH,
+ UDA_ETIMEOUT,
+
+ UDA_CONTOP,
+
+ UDA_REBOOT = 0xFD,
+ UDA_CANCEL = 0xFE,
+ UDA_KILLED = 0xFF,
+};
+
+#define PARAM_NEED 1
+#define PARAM_NOT_NEED 0
+
+#define BASE_ALL 0
+#define BASE_8 8
+#define BASE_10 10
+#define BASE_16 16
+
+enum module_name {
+ SEND_TO_NPU = 1,
+ SEND_TO_MPU,
+ SEND_TO_SM,
+
+ SEND_TO_HW_DRIVER,
+ SEND_TO_NIC_DRIVER,
+ SEND_TO_OVS_DRIVER,
+ SEND_TO_ROCE_DRIVER,
+ SEND_TO_TOE_DRIVER,
+ SEND_TO_IWAP_DRIVER,
+ SEND_TO_FC_DRIVER,
+ SEND_FCOE_DRIVER,
+};
+
+enum driver_cmd_type {
+ TX_INFO = 1,
+ Q_NUM,
+ TX_WQE_INFO,
+ TX_MAPPING,
+ RX_INFO,
+ RX_WQE_INFO,
+ RX_CQE_INFO
+};
+
+struct tool_target {
+ int bus_num;
+ char dev_name[MAX_DEV_LEN];
+ void *pri;
+};
+
+struct nic_tx_hw_page {
+ long long phy_addr;
+ long long *map_addr;
+};
+
+struct nic_sq_info {
+ unsigned short q_id;
+ unsigned short pi; /**< Ring buffer queue producer point. */
+ unsigned short ci; /**< Ring buffer queue consumer point. */
+ unsigned short fi; /**< Ring buffer queue complete point. */
+ unsigned int sq_depth;
+ unsigned short sq_wqebb_size;
+ unsigned short *ci_addr;
+ unsigned long long cla_addr;
+
+ struct nic_tx_hw_page doorbell;
+ unsigned int page_idx;
+};
+
+struct comm_info_l2nic_sq_ci_attr {
+ struct mgmt_msg_head msg_head;
+
+ uint16_t func_idx;
+ uint8_t dma_attr_off;
+ uint8_t pending_limit;
+
+ uint8_t coalescing_time;
+ uint8_t int_en;
+ uint16_t int_offset;
+
+ uint32_t l2nic_sqn;
+ uint32_t rsv;
+ uint64_t ci_addr;
+};
+
+struct nic_rq_info {
+ unsigned short q_id; /**< Queue id in current function, 0, 1, 2... */
+
+ unsigned short hw_pi; /**< Where pkt buf allocated. */
+ unsigned short ci; /**< Where hw pkt received, owned by hw. */
+ unsigned short sw_pi; /**< Where driver begin receive pkt. */
+ unsigned short rq_wqebb_size; /**< WQEBB size, defaults to 32 bytes. */
+
+ unsigned short rq_depth;
+ unsigned short buf_len; /**< 2K. */
+ void *ci_wqe_page_addr; /**< For queue context init. */
+ void *ci_cla_tbl_addr;
+ unsigned short int_num; /**< RSS support should consider int_num. */
+ unsigned int msix_vector; /**< For debug. */
+};
+
+struct hinic_wqe_info {
+ int q_id;
+ void *slq_handle;
+ uint32_t wqe_id;
+ uint32_t wqebb_cnt;
+};
+
+struct npu_cmd_st {
+ uint32_t mod : 8;
+ uint32_t cmd : 8;
+ uint32_t ack_type : 3;
+ uint32_t direct_resp : 1;
+ uint32_t len : 12;
+};
+
+struct mpu_cmd_st {
+ uint32_t api_type : 8;
+ uint32_t mod : 8;
+ uint32_t cmd : 16;
+};
+
+struct msg_module {
+ char device_name[DEV_NAME_LEN];
+ uint32_t module;
+ union {
+ uint32_t msg_formate; /**< For driver. */
+ struct npu_cmd_st npu_cmd;
+ struct mpu_cmd_st mpu_cmd;
+ };
+ uint32_t timeout; /**< For mpu/npu cmd. */
+ uint32_t func_idx;
+ uint32_t buf_in_size;
+ uint32_t buf_out_size;
+ void *in_buf;
+ void *out_buf;
+ int bus_num;
+ uint32_t rsvd2[5];
+};
+
+/**
+ * Convert the provided string into `uint32_t` according to the specified base.
+ *
+ * @param[in] nptr
+ * The string to be converted.
+ * @param[in] base
+ * The base to use for conversion (e.g., 10 for decimal, 16 for hexadecimal).
+ * @param[out] value
+ * The output variable where the converted `uint32_t` value will be stored.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ * - -UDA_EINVAL if the string is invalid or the value is out of range.
+ */
+static inline int
+string_toui(const char *nptr, int base, uint32_t *value)
+{
+ char *endptr = NULL;
+ long tmp_value;
+
+ tmp_value = strtol(nptr, &endptr, base);
+ if ((*endptr != 0) || tmp_value >= 0x7FFFFFFF || tmp_value < 0)
+ return -UDA_EINVAL;
+ *value = (uint32_t)tmp_value;
+ return UDA_SUCCESS;
+}
+
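For example, parsing a numeric queue id argument with the helper above (the literal is illustrative):

    uint32_t q_id;

    if (string_toui("16", BASE_10, &q_id) != UDA_SUCCESS)
        return -UDA_EINVAL; /* non-numeric or out-of-range input */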
+#define UDA_TRUE 1
+#define UDA_FALSE 0
+
+/**
+ * Format and append a log message to a string buffer.
+ *
+ * @param[out] show_str
+ * The string buffer where the formatted message will be appended.
+ * @param[out] show_len
+ * The current length of the string in the buffer. It is updated after
+ * appending.
+ * @param[in] fmt
+ * The format string that specifies how to format the log message.
+ * @param[in] args
+ * The variable arguments to be formatted according to the format string.
+ */
+static inline void
+hinic3_pmd_mml_log(char *show_str, int *show_len, const char *fmt, ...)
+{
+ va_list args;
+ int ret = 0;
+
+ va_start(args, fmt);
+ ret = vsprintf(show_str + *show_len, fmt, args);
+ va_end(args);
+
+ if (ret > 0) {
+ *show_len += ret;
+ } else {
+ PMD_DRV_LOG(ERR, "MML show string snprintf failed, err: %d",
+ ret);
+ }
+}
+
+/**
+ * Get a valid target device based on the given name.
+ *
+ * This function checks if the device name is valid (within the length limit)
+ * and then stores it in the target structure. The bus number is initialized to
+ * 0.
+ *
+ * @param[in] name
+ * The device name to be validated and stored.
+ * @param[out] target
+ * The structure where the device name and bus number will be stored.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int tool_get_valid_target(char *name, struct tool_target *target);
+
+int hinic3_pmd_mml_ioctl(void *msg);
+
+int lib_tx_sq_info_get(struct tool_target target, struct nic_sq_info *sq_info,
+ int sq_id);
+
+int lib_tx_wqe_info_get(struct tool_target target, struct nic_sq_info *sq_info,
+ int sq_id, int wqe_id, void *nwqe, int nwqe_size);
+
+int lib_rx_rq_info_get(struct tool_target target, struct nic_rq_info *rq_info,
+ int rq_id);
+
+int lib_rx_wqe_info_get(struct tool_target target, struct nic_rq_info *rq_info,
+ int rq_id, int wqe_id, void *nwqe, int nwqe_size);
+
+int lib_rx_cqe_info_get(struct tool_target target, struct nic_rq_info *rq_info,
+ int rq_id, int wqe_id, void *nwqe, int nwqe_size);
+
+int hinic3_pmd_mml_lib(const char *buf_in, uint32_t in_size, char *buf_out,
+ uint32_t *out_len, uint32_t max_buf_out_len);
+
+#endif /* _HINIC3_MML_LIB */
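A hedged sketch of the SQ query path exposed above; the PCI address is an assumption and must name an attached hinic3 port:

    #include <stdio.h>
    #include "hinic3_mml_lib.h"

    static void
    show_sq0(void)
    {
        struct tool_target target;
        struct nic_sq_info sq_info;
        char name[] = "08:00.0"; /* hypothetical PCI address */

        tool_target_init(&target.bus_num, target.dev_name,
                         sizeof(target.dev_name));
        if (tool_get_valid_target(name, &target) != UDA_SUCCESS)
            return;
        if (lib_tx_sq_info_get(target, &sq_info, 0) == UDA_SUCCESS)
            printf("sq0: pi=%u ci=%u\n", sq_info.pi, sq_info.ci);
    }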
diff --git a/drivers/net/hinic3/mml/hinic3_mml_main.c b/drivers/net/hinic3/mml/hinic3_mml_main.c
new file mode 100644
index 0000000000..7830df479e
--- /dev/null
+++ b/drivers/net/hinic3/mml/hinic3_mml_main.c
@@ -0,0 +1,167 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025. All rights reserved.
+ */
+
+#include "hinic3_mml_lib.h"
+#include "hinic3_mml_cmd.h"
+
+#define MAX_ARGC 20
+
+/**
+ * Free all memory associated with the command adapter, including the command
+ * states and command buffer.
+ *
+ * @param[in] adapter
+ * Pointer to command adapter.
+ */
+static void
+cmd_deinit(cmd_adapter_t *adapter)
+{
+ int i;
+
+ for (i = 0; i < COMMAND_MAX_MAJORS; i++) {
+ if (adapter->p_major_cmd[i]) {
+ if (adapter->p_major_cmd[i]->cmd_st) {
+ free(adapter->p_major_cmd[i]->cmd_st);
+ adapter->p_major_cmd[i]->cmd_st = NULL;
+ }
+
+ free(adapter->p_major_cmd[i]);
+ adapter->p_major_cmd[i] = NULL;
+ }
+ }
+
+ if (adapter->cmd_buf) {
+ free(adapter->cmd_buf);
+ adapter->cmd_buf = NULL;
+ }
+}
+
+static int
+cmd_init(cmd_adapter_t *adapter)
+{
+ int err;
+
+ err = cmd_show_q_init(adapter);
+ if (err != 0) {
+ PMD_DRV_LOG(ERR, "Init cmd show queue fail");
+ return err;
+ }
+
+ return UDA_SUCCESS;
+}
+
+/**
+ * Separate the input command string into arguments.
+ *
+ * @param[in] adapter
+ * Pointer to command adapter.
+ * @param[in] buf_in
+ * The input command string.
+ * @param[in] in_size
+ * The size of the input command string.
+ * @param[out] argv
+ * The array to store separated arguments.
+ *
+ * @return
+ * The number of arguments on success, a negative error code otherwise.
+ */
+static int
+cmd_separate(cmd_adapter_t *adapter, const char *buf_in, uint32_t in_size,
+ char **argv)
+{
+ char *cmd_buf = NULL;
+ char *tmp = NULL;
+ char *saveptr = NULL;
+ int i;
+
+ cmd_buf = calloc(1, in_size + 1);
+ if (!cmd_buf) {
+ PMD_DRV_LOG(ERR, "Failed to allocate cmd_buf");
+ return -UDA_ENONMEM;
+ }
+
+ (void)memcpy(cmd_buf, buf_in, in_size);
+
+ tmp = cmd_buf;
+ for (i = 1; i < MAX_ARGC; i++) {
+ argv[i] = strtok_r(tmp, " ", &saveptr);
+ if (!argv[i])
+ break;
+ tmp = NULL;
+ }
+
+ if (i == MAX_ARGC) {
+ PMD_DRV_LOG(ERR, "Parameters is too many");
+ free(cmd_buf);
+ return -UDA_FAIL;
+ }
+
+ adapter->cmd_buf = cmd_buf;
+ return i;
+}
+
+/**
+ * Process the input command string, parse arguments, and return the result.
+ *
+ * @param[in] buf_in
+ * The input command string.
+ * @param[in] in_size
+ * The size of the input command string.
+ * @param[out] buf_out
+ * The output buffer to store the command result.
+ * @param[out] out_len
+ * The length of the output buffer.
+ * @param[in] max_buf_out_len
+ * The maximum size of the output buffer.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int
+hinic3_pmd_mml_lib(const char *buf_in, uint32_t in_size, char *buf_out,
+ uint32_t *out_len, uint32_t max_buf_out_len)
+{
+ cmd_adapter_t *adapter = NULL;
+ char *argv[MAX_ARGC];
+ int argc;
+ int err = -UDA_EINVAL;
+
+ if (!buf_in || !in_size) {
+ PMD_DRV_LOG(ERR, "Invalid param, buf_in: %d, in_size: 0x%x",
+ !!buf_in, in_size);
+ return err;
+ }
+
+ if (!buf_out || max_buf_out_len < MAX_SHOW_STR_LEN) {
+ PMD_DRV_LOG(ERR,
+ "Invalid param, buf_out: %d, max_buf_out_len: 0x%x",
+ !!buf_out, max_buf_out_len);
+ return err;
+ }
+
+ adapter = calloc(1, sizeof(cmd_adapter_t));
+ if (!adapter) {
+ PMD_DRV_LOG(ERR, "Failed to allocate cmd adapter");
+ return -UDA_ENONMEM;
+ }
+
+ err = cmd_init(adapter);
+ if (err != 0)
+ goto parse_cmd_fail;
+
+ argc = cmd_separate(adapter, buf_in, in_size, argv);
+ if (argc < 0) {
+ err = -UDA_FAIL;
+ goto parse_cmd_fail;
+ }
+
+ (void)memset(buf_out, 0, max_buf_out_len);
+ command_parse(adapter, argc, argv, buf_out, out_len);
+
+parse_cmd_fail:
+ cmd_deinit(adapter);
+ free(adapter);
+
+ return err;
+}
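Putting it together, a hedged sketch of driving the whole pipeline with a raw command string; "-h" exercises the built-in help path handled in command_parse():

    #include <stdio.h>
    #include <string.h>
    #include "hinic3_mml_lib.h"

    static void
    run_mml_help(void)
    {
        char out[MAX_SHOW_STR_LEN];
        uint32_t out_len = 0;
        const char *cmd = "-h";

        if (hinic3_pmd_mml_lib(cmd, (uint32_t)strlen(cmd), out,
                               &out_len, sizeof(out)) == UDA_SUCCESS)
            printf("%s\n", out);
    }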
diff --git a/drivers/net/hinic3/mml/hinic3_mml_queue.c b/drivers/net/hinic3/mml/hinic3_mml_queue.c
new file mode 100644
index 0000000000..7d29c7ea52
--- /dev/null
+++ b/drivers/net/hinic3/mml/hinic3_mml_queue.c
@@ -0,0 +1,749 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025. All rights reserved.
+ */
+
+#include "hinic3_mml_lib.h"
+#include "hinic3_mml_cmd.h"
+#include "hinic3_mml_queue.h"
+
+#define ADDR_HI_BIT 32
+
+/**
+ * This function performs the same operation as `hinic3_pmd_mml_log`, but
+ * returns an error code.
+ *
+ * @param[out] show_str
+ * The string buffer where the formatted message will be appended.
+ * @param[out] show_len
+ * The current length of the string in the buffer. It is updated after
+ * appending.
+ * @param[in] fmt
+ * The format string that specifies how to format the log message.
+ * @param[in] args
+ * The variable arguments to be formatted according to the format string.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ * - `-UDA_EINVAL` if an error occurs during the formatting process.
+ *
+ * @see hinic3_pmd_mml_log
+ */
+static int
+hinic3_pmd_mml_log_ret(char *show_str, int *show_len, const char *fmt, ...)
+{
+ va_list args;
+ int ret = 0;
+
+ va_start(args, fmt);
+ ret = vsprintf(show_str + *show_len, fmt, args);
+ va_end(args);
+
+ if (ret > 0) {
+ *show_len += ret;
+ } else {
+ PMD_DRV_LOG(ERR, "MML show string snprintf failed, err: %d",
+ ret);
+ return -UDA_EINVAL;
+ }
+
+ return UDA_SUCCESS;
+}
+
+/**
+ * Format and log the information about the RQ by appending details such as
+ * queue ID, ci, sw pi, RQ depth, RQ WQE buffer size, buffer length, interrupt
+ * number, and MSIX vector to the output buffer.
+ *
+ * @param[in] self
+ * Pointer to major command structure.
+ * @param[in] rq_info
+ * The receive queue information to be displayed, which includes various
+ * properties like queue ID, depth, interrupt number, etc.
+ */
+static void
+rx_show_rq_info(major_cmd_t *self, struct nic_rq_info *rq_info)
+{
+ hinic3_pmd_mml_log(self->show_str, &self->show_len,
+ "Receive queue information:");
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "queue_id:%u",
+ rq_info->q_id);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "ci:%u",
+ rq_info->ci);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "sw_pi:%u",
+ rq_info->sw_pi);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "rq_depth:%u",
+ rq_info->rq_depth);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len,
+ "rq_wqebb_size:%u", rq_info->rq_wqebb_size);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "buf_len:%u",
+ rq_info->buf_len);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "int_num:%u",
+ rq_info->int_num);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "msix_vector:%u",
+ rq_info->msix_vector);
+}
+
+static void
+rx_show_wqe(major_cmd_t *self, nic_rq_wqe *wqe)
+{
+ hinic3_pmd_mml_log(self->show_str, &self->show_len,
+ "Rx buffer section information:");
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "buf_addr:0x%" PRIx64,
+ (((uint64_t)wqe->buf_desc.pkt_buf_addr_high) << ADDR_HI_BIT) |
+ wqe->buf_desc.pkt_buf_addr_low);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "buf_len:%u",
+ wqe->buf_desc.len);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "rsvd0:%u",
+ wqe->rsvd0);
+
+ hinic3_pmd_mml_log(self->show_str, &self->show_len,
+ "Cqe buffer section information:");
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "buf_hi:0x%" PRIx64,
+ (((uint64_t)wqe->cqe_sect.pkt_buf_addr_high) << ADDR_HI_BIT) |
+ wqe->cqe_sect.pkt_buf_addr_low);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "buf_len:%u",
+ wqe->cqe_sect.len);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "rsvd1:%u",
+ wqe->rsvd1);
+}
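+
+/*
+ * Illustrative note: WQE buffer addresses are carried as two 32-bit
+ * halves, so the 64-bit address printed above is rebuilt as
+ * ((uint64_t)high << ADDR_HI_BIT) | low; e.g. high=0x1, low=0x2000
+ * yields 0x100002000.
+ */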
+
+static void
+rx_show_cqe_info(major_cmd_t *self, struct tag_l2nic_rx_cqe *wqe_cs)
+{
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "Rx cqe info:");
+
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "cs_dw0:0x%08x",
+ wqe_cs->dw0.value);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "rx_done:0x%x",
+ wqe_cs->dw0.bs.rx_done);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "bp_en:0x%x",
+ wqe_cs->dw0.bs.bp_en);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "decry_pkt:0x%x",
+ wqe_cs->dw0.bs.decry_pkt);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "flush:0x%x",
+ wqe_cs->dw0.bs.flush);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "spec_flags:0x%x",
+ wqe_cs->dw0.bs.spec_flags);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "rsvd0:0x%x",
+ wqe_cs->dw0.bs.rsvd0);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "lro_num:0x%x",
+ wqe_cs->dw0.bs.lro_num);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len,
+ "checksum_err:0x%x", wqe_cs->dw0.bs.checksum_err);
+
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "cs_dw1:0x%08x",
+ wqe_cs->dw1.value);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "length:%u",
+ wqe_cs->dw1.bs.length);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "vlan:0x%x",
+ wqe_cs->dw1.bs.vlan);
+
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "cs_dw2:0x%08x",
+ wqe_cs->dw2.value);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "rss_type:0x%x",
+ wqe_cs->dw2.bs.rss_type);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "rsvd0:0x%x",
+ wqe_cs->dw2.bs.rsvd0);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len,
+ "vlan_offload_en:0x%x",
+ wqe_cs->dw2.bs.vlan_offload_en);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "umbcast:0x%x",
+ wqe_cs->dw2.bs.umbcast);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "rsvd1:0x%x",
+ wqe_cs->dw2.bs.rsvd1);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "pkt_types:0x%x",
+ wqe_cs->dw2.bs.pkt_types);
+
+ hinic3_pmd_mml_log(self->show_str, &self->show_len,
+ "rss_hash_value:0x%08x",
+ wqe_cs->dw3.bs.rss_hash_value);
+
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "cs_dw4:0x%08x",
+ wqe_cs->dw4.value);
+
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "cs_dw5:0x%08x",
+ wqe_cs->dw5.value);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "mac_type:0x%x",
+ wqe_cs->dw5.ovs_bs.mac_type);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "l3_type:0x%x",
+ wqe_cs->dw5.ovs_bs.l3_type);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "l4_type:0x%x",
+ wqe_cs->dw5.ovs_bs.l4_type);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "rsvd0:0x%x",
+ wqe_cs->dw5.ovs_bs.rsvd0);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len,
+ "traffic_type:0x%x",
+ wqe_cs->dw5.ovs_bs.traffic_type);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len,
+ "traffic_from:0x%x",
+ wqe_cs->dw5.ovs_bs.traffic_from);
+
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "cs_dw6:0x%08x",
+ wqe_cs->dw6.value);
+
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "localtag:0x%08x",
+ wqe_cs->dw7.ovs_bs.localtag);
+}
+
+#define HINIC3_PMD_MML_LOG_RET(fmt, ...) \
+ hinic3_pmd_mml_log_ret(self->show_str, &self->show_len, fmt, \
+ ##__VA_ARGS__)
+
+/**
+ * Display help information for queue command.
+ *
+ * @param[in] self
+ * Pointer to major command structure.
+ * @param[in] argc
+ * A string representing the value associated with the command option (unused).
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+cmd_queue_help(major_cmd_t *self, __rte_unused char *argc)
+{
+ int ret;
+ ret = HINIC3_PMD_MML_LOG_RET("") ||
+ HINIC3_PMD_MML_LOG_RET(" Usage: %s %s", self->name,
+ "-i <device> -d <tx or rx> -t <type> "
+ "-q <queue id> [-w <wqe id>]") ||
+ HINIC3_PMD_MML_LOG_RET("\n %s", self->description) ||
+ HINIC3_PMD_MML_LOG_RET("\n Options:\n") ||
+ HINIC3_PMD_MML_LOG_RET(" %s, %-25s %s", "-h", "--help",
+ "display this help and exit") ||
+ HINIC3_PMD_MML_LOG_RET(" %s, %-25s %s", "-i",
+ "--device=<device>",
+ "device target, e.g. 08:00.0") ||
+ HINIC3_PMD_MML_LOG_RET(" %s, %-25s %s", "-d", "--direction",
+ "tx or rx") ||
+ HINIC3_PMD_MML_LOG_RET(" %s, %-25s %s", " ", "", "0: tx") ||
+ HINIC3_PMD_MML_LOG_RET(" %s, %-25s %s", " ", "", "1: rx") ||
+ HINIC3_PMD_MML_LOG_RET(" %s, %-25s %s", "-t", "--type", "") ||
+ HINIC3_PMD_MML_LOG_RET(" %s, %-25s %s", " ", "",
+ "0: queue info") ||
+ HINIC3_PMD_MML_LOG_RET(" %s, %-25s %s", " ", "",
+ "1: wqe info") ||
+ HINIC3_PMD_MML_LOG_RET(" %s, %-25s %s", " ", "",
+ "2: cqe info(only for rx)") ||
+ HINIC3_PMD_MML_LOG_RET(" %s, %-25s %s", "-q", "--queue_id",
+ "") ||
+ HINIC3_PMD_MML_LOG_RET(" %s, %-25s %s", "-w", "--wqe_id", "") ||
+ HINIC3_PMD_MML_LOG_RET("");
+
+ return ret;
+}
+
+static void
+tx_show_sq_info(major_cmd_t *self, struct nic_sq_info *sq_info)
+{
+ hinic3_pmd_mml_log(self->show_str, &self->show_len,
+ "Send queue information:");
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "queue_id:%u",
+ sq_info->q_id);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "pi:%u",
+ sq_info->pi);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "ci:%u",
+ sq_info->ci);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "fi:%u",
+ sq_info->fi);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "sq_depth:%u",
+ sq_info->sq_depth);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len,
+ "sq_wqebb_size:%u", sq_info->sq_wqebb_size);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len,
+ "cla_addr:0x%" PRIu64,
+ sq_info->cla_addr);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len,
+ "doorbell phy_addr:0x%" PRId64,
+ (uintptr_t)sq_info->doorbell.phy_addr);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "page_idx:%u",
+ sq_info->page_idx);
+}
+
+static void
+tx_show_wqe(major_cmd_t *self, struct nic_tx_wqe_desc *wqe)
+{
+ struct nic_tx_ctrl_section *control = NULL;
+ struct nic_tx_task_section *task = NULL;
+ unsigned int *val = (unsigned int *)wqe;
+
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "dw0:0x%08x",
+ *(val++));
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "dw1:0x%08x",
+ *(val++));
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "dw2:0x%08x",
+ *(val++));
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "dw3:0x%08x",
+ *(val++));
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "dw4:0x%08x",
+ *(val++));
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "dw5:0x%08x",
+ *(val++));
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "dw6:0x%08x",
+ *(val++));
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "dw7:0x%08x",
+ *(val++));
+
+ hinic3_pmd_mml_log(self->show_str, &self->show_len,
+ "\nWqe may analyse as follows:");
+ control = &wqe->control;
+ hinic3_pmd_mml_log(self->show_str, &self->show_len,
+ "\nInformation about wqe control section:");
+ hinic3_pmd_mml_log(self->show_str, &self->show_len,
+ "ctrl_format:0x%08x", control->ctrl_format);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "owner:%u",
+ control->ctrl_sec.o);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len,
+ "extended_compact:%u", control->ctrl_sec.ec);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len,
+ "direct_normal:%u", control->ctrl_sec.dn);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "inline_sgl:%u",
+ control->ctrl_sec.df);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "ts_size:%u",
+ control->ctrl_sec.tss);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "bds_len:%u",
+ control->ctrl_sec.bdsl);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "rsvd:%u",
+ control->ctrl_sec.r);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "1st_buf_len:%u",
+ control->ctrl_sec.len);
+
+ hinic3_pmd_mml_log(self->show_str, &self->show_len,
+ "queue_info:0x%08x", control->queue_info);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "pri:%u",
+ control->qsf.pri);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "uc:%u",
+ control->qsf.uc);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "sctp:%u",
+ control->qsf.sctp);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "mss:%u",
+ control->qsf.mss);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "tcp_udp_cs:%u",
+ control->qsf.tcp_udp_cs);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "tso:%u",
+ control->qsf.tso);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "ufo:%u",
+ control->qsf.ufo);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len,
+ "payload_offset:%u", control->qsf.payload_offset);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "pkt_type:%u",
+ control->qsf.pkt_type);
+
+ /* First buffer section. */
+ hinic3_pmd_mml_log(self->show_str, &self->show_len,
+ "bd0_hi_addr:0x%08x", wqe->bd0_hi_addr);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len,
+ "bd0_lo_addr:0x%08x", wqe->bd0_lo_addr);
+
+ /* Show the task section. */
+ task = &wqe->task;
+ hinic3_pmd_mml_log(self->show_str, &self->show_len,
+ "\nInformation about wqe task section:");
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "vport_id:%u",
+ task->bs2.vport_id);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "vport_type:%u",
+ task->bs2.vport_type);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "traffic_type:%u",
+ task->bs2.traffic_type);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len,
+ "slave_port_id:%u", task->bs2.slave_port_id);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "rsvd0:%u",
+ task->bs2.rsvd0);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "crypto_en:%u",
+ task->bs2.crypto_en);
+ hinic3_pmd_mml_log(self->show_str, &self->show_len, "pkt_type:%u",
+ task->bs2.pkt_type);
+}
+
+static int
+cmd_queue_target(major_cmd_t *self, char *argc)
+{
+ struct cmd_show_q_st *show_q = self->cmd_st;
+ int ret;
+
+ if (tool_get_valid_target(argc, &show_q->target) != UDA_SUCCESS) {
+ self->err_no = -UDA_EINVAL;
+ ret = snprintf(self->err_str, COMMANDER_ERR_MAX_STRING - 1,
+ "Unknown device %s.", argc);
+ if (ret <= 0) {
+ PMD_DRV_LOG(ERR,
+ "snprintf queue err msg failed, ret: %d",
+ ret);
+ }
+ return -UDA_EINVAL;
+ }
+
+ return UDA_SUCCESS;
+}
+
+static int
+get_queue_type(major_cmd_t *self, char *argc)
+{
+ struct cmd_show_q_st *show_q = self->cmd_st;
+ unsigned int num = 0;
+
+ if (string_toui(argc, BASE_10, &num) != UDA_SUCCESS) {
+ self->err_no = -UDA_EINVAL;
+ (void)snprintf(self->err_str, COMMANDER_ERR_MAX_STRING - 1,
+ "Unknown queuetype %u.", num);
+ return -UDA_EINVAL;
+ }
+
+ show_q->qobj = (int)num;
+ return UDA_SUCCESS;
+}
+
+static int
+get_queue_id(major_cmd_t *self, char *argc)
+{
+ struct cmd_show_q_st *show_q = self->cmd_st;
+ unsigned int num = 0;
+
+ if (string_toui(argc, BASE_10, &num) != UDA_SUCCESS) {
+ self->err_no = -UDA_EINVAL;
+ (void)snprintf(self->err_str, COMMANDER_ERR_MAX_STRING - 1,
+ "Invalid queue id.");
+ return -UDA_EINVAL;
+ }
+
+ show_q->q_id = (int)num;
+ return UDA_SUCCESS;
+}
+
+static int
+get_q_wqe_id(major_cmd_t *self, char *argc)
+{
+ struct cmd_show_q_st *show_q = self->cmd_st;
+ unsigned int num = 0;
+
+ if (string_toui(argc, BASE_10, &num) != UDA_SUCCESS) {
+ self->err_no = -UDA_EINVAL;
+ (void)snprintf(self->err_str, COMMANDER_ERR_MAX_STRING - 1,
+ "Invalid wqe id.");
+ return -UDA_EINVAL;
+ }
+
+ show_q->wqe_id = (int)num;
+ return UDA_SUCCESS;
+}
+
+/**
+ * Set direction for queue query.
+ *
+ * @param[in] self
+ * Pointer to major command structure.
+ * @param[in] argc
+ * The input argument representing the direction (as a string).
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ * - -UDA_EINVAL if the input is invalid (not a number or out of range);
+ * `err_no` and `err_str` are set accordingly.
+ */
+static int
+get_direction(major_cmd_t *self, char *argc)
+{
+ struct cmd_show_q_st *show_q = self->cmd_st;
+ unsigned int num = 0;
+
+ if (string_toui(argc, BASE_10, &num) != UDA_SUCCESS || num > 1) {
+ self->err_no = -UDA_EINVAL;
+ (void)snprintf(self->err_str, COMMANDER_ERR_MAX_STRING - 1,
+ "Unknown mode.");
+ return -UDA_EINVAL;
+ }
+
+ show_q->direction = (int)num;
+ return UDA_SUCCESS;
+}
+
+static int
+rx_param_check(major_cmd_t *self, struct cmd_show_q_st *rx_param)
+{
+ struct cmd_show_q_st *show_q = self->cmd_st;
+
+ if (rx_param->target.bus_num == TRGET_UNKNOWN_BUS_NUM) {
+ self->err_no = -UDA_EINVAL;
+ (void)snprintf(self->err_str, COMMANDER_ERR_MAX_STRING - 1,
+ "Need device name.");
+ return self->err_no;
+ }
+
+ if (show_q->qobj > OBJ_CQE_INFO || show_q->qobj < OBJ_Q_INFO) {
+ self->err_no = -UDA_EINVAL;
+ (void)snprintf(self->err_str, COMMANDER_ERR_MAX_STRING - 1,
+ "Unknown queue type.");
+ return self->err_no;
+ }
+
+ if (show_q->q_id == -1) {
+ self->err_no = -UDA_EINVAL;
+ (void)snprintf(self->err_str, COMMANDER_ERR_MAX_STRING - 1,
+ "Need queue id.");
+ return self->err_no;
+ }
+
+ if (show_q->qobj != OBJ_Q_INFO && show_q->wqe_id == -1) {
+ self->err_no = -UDA_FAIL;
+ (void)snprintf(self->err_str, COMMANDER_ERR_MAX_STRING - 1,
+ "Get cqe_info or wqe_info, must set wqeid.");
+ return -UDA_FAIL;
+ }
+
+ if (show_q->qobj == OBJ_Q_INFO && show_q->wqe_id != -1) {
+ self->err_no = -UDA_FAIL;
+ (void)snprintf(self->err_str, COMMANDER_ERR_MAX_STRING - 1,
+ "Get queue info, need not set wqeid.");
+ return -UDA_FAIL;
+ }
+
+ return UDA_SUCCESS;
+}
+
+static int
+tx_param_check(major_cmd_t *self, struct cmd_show_q_st *tx_param)
+{
+ struct cmd_show_q_st *show_q = self->cmd_st;
+
+ if (tx_param->target.bus_num == TRGET_UNKNOWN_BUS_NUM) {
+ self->err_no = -UDA_EINVAL;
+ (void)snprintf(self->err_str, COMMANDER_ERR_MAX_STRING - 1,
+ "Need device name.");
+ return self->err_no;
+ }
+
+ if (show_q->qobj > OBJ_WQE_INFO || show_q->qobj < OBJ_Q_INFO) {
+ self->err_no = -UDA_EINVAL;
+ (void)snprintf(self->err_str, COMMANDER_ERR_MAX_STRING - 1,
+ "Unknown queue type.");
+ return self->err_no;
+ }
+
+ if (show_q->q_id == -1) {
+ self->err_no = -UDA_EINVAL;
+ (void)snprintf(self->err_str, COMMANDER_ERR_MAX_STRING - 1,
+ "Need queue id.");
+ return self->err_no;
+ }
+
+ if (show_q->qobj == OBJ_WQE_INFO && show_q->wqe_id == -1) {
+ self->err_no = -UDA_EINVAL;
+ (void)snprintf(self->err_str, COMMANDER_ERR_MAX_STRING - 1,
+ "Get wqe_info, must set wqeid.");
+ return self->err_no;
+ }
+
+ if (show_q->qobj != OBJ_WQE_INFO && show_q->wqe_id != -1) {
+ self->err_no = -UDA_EINVAL;
+ (void)snprintf(self->err_str, COMMANDER_ERR_MAX_STRING - 1,
+ "Get queue info, need not set wqeid.");
+ return self->err_no;
+ }
+
+ return UDA_SUCCESS;
+}
+
+static void
+cmd_tx_execute(major_cmd_t *self)
+{
+ struct cmd_show_q_st *show_q = self->cmd_st;
+ int ret;
+ struct nic_sq_info sq_info = {0};
+ struct nic_tx_wqe_desc nwqe;
+
+ if (tx_param_check(self, show_q) != UDA_SUCCESS)
+ return;
+
+ if (show_q->qobj == OBJ_Q_INFO || show_q->qobj == OBJ_WQE_INFO) {
+ ret = lib_tx_sq_info_get(show_q->target, (void *)&sq_info,
+ show_q->q_id);
+ if (ret != UDA_SUCCESS) {
+ self->err_no = ret;
+ (void)snprintf(self->err_str,
+ COMMANDER_ERR_MAX_STRING - 1,
+ "Get tx sq_info failed.");
+ return;
+ }
+
+ if (show_q->qobj == OBJ_Q_INFO) {
+ tx_show_sq_info(self, &sq_info);
+ return;
+ }
+
+ if (show_q->wqe_id >= (int)sq_info.sq_depth) {
+ self->err_no = -UDA_EINVAL;
+ (void)snprintf(self->err_str,
+ COMMANDER_ERR_MAX_STRING - 1,
+ "Max wqe id is %u.",
+ sq_info.sq_depth - 1);
+ return;
+ }
+
+ (void)memset(&nwqe, 0, sizeof(nwqe));
+ ret = lib_tx_wqe_info_get(show_q->target, &sq_info,
+ show_q->q_id, show_q->wqe_id,
+ (void *)&nwqe, sizeof(nwqe));
+ if (ret != UDA_SUCCESS) {
+ self->err_no = ret;
+ (void)snprintf(self->err_str,
+ COMMANDER_ERR_MAX_STRING - 1,
+ "Get tx wqe_info failed.");
+ return;
+ }
+
+ tx_show_wqe(self, &nwqe);
+ return;
+ }
+}
+
+static void
+cmd_rx_execute(major_cmd_t *self)
+{
+ int ret;
+ struct nic_rq_info rq_info = {0};
+ struct tag_l2nic_rx_cqe cqe;
+ nic_rq_wqe wqe;
+ struct cmd_show_q_st *show_q = self->cmd_st;
+
+ if (rx_param_check(self, show_q) != UDA_SUCCESS)
+ return;
+
+ ret = lib_rx_rq_info_get(show_q->target, &rq_info, show_q->q_id);
+ if (ret != UDA_SUCCESS) {
+ self->err_no = ret;
+ (void)snprintf(self->err_str, COMMANDER_ERR_MAX_STRING - 1,
+ "Get rx rq_info failed.");
+ return;
+ }
+
+ if (show_q->qobj == OBJ_Q_INFO) {
+ rx_show_rq_info(self, &rq_info);
+ return;
+ }
+
+ if ((uint32_t)show_q->wqe_id >= rq_info.rq_depth) {
+ self->err_no = -UDA_EINVAL;
+ (void)snprintf(self->err_str, COMMANDER_ERR_MAX_STRING - 1,
+ "Max wqe id is %u.", rq_info.rq_depth - 1);
+ return;
+ }
+
+ if (show_q->qobj == OBJ_WQE_INFO) {
+ (void)memset(&wqe, 0, sizeof(wqe));
+ ret = lib_rx_wqe_info_get(show_q->target, &rq_info,
+ show_q->q_id, show_q->wqe_id,
+ (void *)&wqe, sizeof(wqe));
+ if (ret != UDA_SUCCESS) {
+ self->err_no = ret;
+ (void)snprintf(self->err_str,
+ COMMANDER_ERR_MAX_STRING - 1,
+ "Get rx wqe_info failed.");
+ return;
+ }
+
+ rx_show_wqe(self, &wqe);
+ return;
+ }
+
+ /* OBJ_CQE_INFO */
+ (void)memset(&cqe, 0, sizeof(cqe));
+ ret = lib_rx_cqe_info_get(show_q->target, &rq_info, show_q->q_id,
+ show_q->wqe_id, (void *)&cqe, sizeof(cqe));
+ if (ret != UDA_SUCCESS) {
+ self->err_no = ret;
+ (void)snprintf(self->err_str, COMMANDER_ERR_MAX_STRING - 1,
+ "Get rx cqe_info failed.");
+ return;
+ }
+
+ rx_show_cqe_info(self, &cqe);
+}
+
+/**
+ * Execute the NIC queue query command based on the direction.
+ *
+ * @param[in] self
+ * Pointer to major command structure.
+ */
+static void
+cmd_nic_queue_execute(major_cmd_t *self)
+{
+ struct cmd_show_q_st *show_q = self->cmd_st;
+
+ if (show_q->direction == -1) {
+ hinic3_pmd_mml_log(self->show_str, &self->show_len,
+ "Need -d parameter.");
+ return;
+ }
+
+ if (show_q->direction == 0)
+ cmd_tx_execute(self);
+ else
+ cmd_rx_execute(self);
+}
+
+/**
+ * Initialize and register the queue query command.
+ *
+ * @param[in] adapter
+ * The command adapter, which holds the registered commands and their states.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ * - -UDA_ENONMEM if memory allocation fails or an error occurs.
+ */
+int
+cmd_show_q_init(cmd_adapter_t *adapter)
+{
+ struct cmd_show_q_st *show_q = NULL;
+ major_cmd_t *show_q_cmd;
+
+ show_q_cmd = calloc(1, sizeof(*show_q_cmd));
+ if (!show_q_cmd) {
+ PMD_DRV_LOG(ERR, "Failed to allocate queue cmd");
+ return -UDA_ENONMEM;
+ }
+
+ (void)snprintf(show_q_cmd->name, MAX_NAME_LEN - 1, "%s", "nic_queue");
+ (void)snprintf(show_q_cmd->description,
+ MAX_DES_LEN - 1, "%s",
+ "Query the rx/tx queue information of a specified pci_addr");
+
+ show_q_cmd->option_count = 0;
+ show_q_cmd->execute = cmd_nic_queue_execute;
+
+ show_q = calloc(1, sizeof(*show_q));
+ if (!show_q) {
+ free(show_q_cmd);
+ PMD_DRV_LOG(ERR, "Failed to allocate show queue");
+ return -UDA_ENONMEM;
+ }
+
+ show_q->qobj = -1;
+ show_q->q_id = -1;
+ show_q->wqe_id = -1;
+ show_q->direction = -1;
+
+ show_q_cmd->cmd_st = show_q;
+
+ tool_target_init(&show_q->target.bus_num, show_q->target.dev_name,
+ MAX_DEV_LEN);
+
+ major_command_option(show_q_cmd, "-h", "--help", PARAM_NOT_NEED,
+ cmd_queue_help);
+ major_command_option(show_q_cmd, "-i", "--device", PARAM_NEED,
+ cmd_queue_target);
+ major_command_option(show_q_cmd, "-t", "--type", PARAM_NEED,
+ get_queue_type);
+ major_command_option(show_q_cmd, "-q", "--queue_id", PARAM_NEED,
+ get_queue_id);
+ major_command_option(show_q_cmd, "-w", "--wqe_id", PARAM_NEED,
+ get_q_wqe_id);
+ major_command_option(show_q_cmd, "-d", "--direction", PARAM_NEED,
+ get_direction);
+
+ major_command_register(adapter, show_q_cmd);
+
+ return UDA_SUCCESS;
+}
diff --git a/drivers/net/hinic3/mml/hinic3_mml_queue.h b/drivers/net/hinic3/mml/hinic3_mml_queue.h
new file mode 100644
index 0000000000..633b1db50c
--- /dev/null
+++ b/drivers/net/hinic3/mml/hinic3_mml_queue.h
@@ -0,0 +1,256 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025. All rights reserved.
+ * Description : hinic3 mml for queue
+ */
+
+#ifndef _HINIC3_MML_QUEUE
+#define _HINIC3_MML_QUEUE
+
+#define OBJ_Q_INFO 0
+#define OBJ_WQE_INFO 1
+#define OBJ_CQE_INFO 2
+
+/* TX. */
+struct nic_tx_ctrl_section {
+ union {
+ struct {
+ unsigned int len : 18;
+ unsigned int r : 1;
+ unsigned int bdsl : 8;
+ unsigned int tss : 1;
+ unsigned int df : 1;
+ unsigned int dn : 1;
+ unsigned int ec : 1;
+ unsigned int o : 1;
+ } ctrl_sec;
+ unsigned int ctrl_format;
+ };
+ union {
+ struct {
+ unsigned int pkt_type : 2;
+ unsigned int payload_offset : 8;
+ unsigned int ufo : 1;
+ unsigned int tso : 1;
+ unsigned int tcp_udp_cs : 1;
+ unsigned int mss : 14;
+ unsigned int sctp : 1;
+ unsigned int uc : 1;
+ unsigned int pri : 3;
+ } qsf;
+ unsigned int queue_info;
+ };
+};
+
+struct nic_tx_task_section {
+ unsigned int dw0;
+ unsigned int dw1;
+
+ /* dw2. */
+ union {
+ struct {
+ /*
+ * When TX direct, output bond id;
+ * when RX direct, output function id.
+ */
+ unsigned int vport_id : 12;
+ unsigned int vport_type : 4;
+ unsigned int traffic_type : 6;
+ /*
+ * Only used in TX direct, ctrl pkt (LACP/LLDP) output
+ * port id.
+ */
+ unsigned int slave_port_id : 2;
+ unsigned int rsvd0 : 6;
+ unsigned int crypto_en : 1;
+ unsigned int pkt_type : 1;
+ } bs2;
+ unsigned int dw2;
+ };
+
+ unsigned int dw3;
+};
+
+struct nic_tx_sge {
+ union {
+ struct {
+ unsigned int length : 31; /**< SGE length. */
+ unsigned int rsvd : 1;
+ } bs0;
+ unsigned int dw0;
+ };
+
+ union {
+ struct {
+ /* Key or unused. */
+ unsigned int key : 30;
+ /* 0:normal, 1:pointer to next SGE. */
+ unsigned int extension : 1;
+ /* 0:list, 1:last. */
+ unsigned int list : 1;
+ } bs1;
+ unsigned int dw1;
+ };
+
+ unsigned int dma_addr_high;
+ unsigned int dma_addr_low;
+};
+
+struct nic_tx_wqe_desc {
+ struct nic_tx_ctrl_section control;
+ struct nic_tx_task_section task;
+ unsigned int bd0_hi_addr;
+ unsigned int bd0_lo_addr;
+};
+
+/* RX. */
+typedef struct tag_l2nic_rx_cqe {
+ union {
+ struct {
+ unsigned int checksum_err : 16;
+ unsigned int lro_num : 8;
+ unsigned int rsvd0 : 1;
+ unsigned int spec_flags : 3;
+ unsigned int flush : 1;
+ unsigned int decry_pkt : 1;
+ unsigned int bp_en : 1;
+ unsigned int rx_done : 1;
+ } bs;
+ unsigned int value;
+ } dw0;
+
+ union {
+ struct {
+ unsigned int vlan : 16;
+ unsigned int length : 16;
+ } bs;
+ unsigned int value;
+ } dw1;
+
+ union {
+ struct {
+ unsigned int pkt_types : 12;
+ unsigned int rsvd1 : 7;
+ unsigned int umbcast : 2;
+ unsigned int vlan_offload_en : 1;
+ unsigned int rsvd0 : 2;
+ unsigned int rss_type : 8;
+ } bs;
+ unsigned int value;
+ } dw2;
+
+ union {
+ struct {
+ unsigned int rss_hash_value;
+ } bs;
+ unsigned int value;
+ } dw3;
+
+ /* dw4~dw7 fields for nic/ovs multiplexing. */
+ union {
+ struct { /**< For nic. */
+ unsigned int tx_ts_seq : 16;
+ unsigned int msg_1588_offset : 8;
+ unsigned int msg_1588_type : 4;
+ unsigned int rsvd : 1;
+ unsigned int if_rx_ts : 1;
+ unsigned int if_tx_ts : 1;
+ unsigned int if_1588 : 1;
+ } bs;
+
+ struct { /**< For ovs. */
+ unsigned int reserved;
+ } ovs_bs;
+
+ struct {
+ unsigned int xid;
+ } crypt_bs;
+
+ unsigned int value;
+ } dw4;
+
+ union {
+ struct { /**< For nic. */
+ unsigned int msg_1588_ts;
+ } bs;
+
+ struct { /**< For ovs. */
+ unsigned int traffic_from : 16;
+ unsigned int traffic_type : 6;
+ unsigned int rsvd0 : 2;
+ unsigned int l4_type : 3;
+ unsigned int l3_type : 3;
+ unsigned int mac_type : 2;
+ } ovs_bs;
+
+ struct { /**< For crypt. */
+ unsigned int esp_next_head : 8;
+ unsigned int decrypt_status : 8;
+ unsigned int rsvd : 16;
+ } crypt_bs;
+
+ unsigned int value;
+ } dw5;
+
+ union {
+ struct { /**< For nic. */
+ unsigned int lro_ts;
+ } bs;
+
+ struct { /**< For ovs. */
+ unsigned int reserved;
+ } ovs_bs;
+
+ unsigned int value;
+ } dw6;
+
+ union {
+ struct { /**< For nic. */
+ /* Data len of the first or middle pkt. */
+ unsigned int first_len : 13;
+ /* Data len of the last pkt. */
+ unsigned int last_len : 13;
+ /* The number of packets. */
+ unsigned int pkt_num : 5;
+ /* The other dw fields are valid only when this bit is 1. */
+ unsigned int super_cqe_en : 1;
+ } bs;
+
+ struct { /**< For ovs. */
+ unsigned int localtag;
+ } ovs_bs;
+
+ unsigned int value;
+ } dw7;
+} l2nic_rx_cqe_s;
+
+struct nic_rq_bd_sec {
+ unsigned int pkt_buf_addr_high; /**< Packet buffer address high. */
+ unsigned int pkt_buf_addr_low; /**< Packet buffer address low. */
+ unsigned int len;
+};
+
+typedef struct _nic_rq_wqe {
+ /* RX buffer SGE. Note: buf_desc.len is limited to bits 0~13. */
+ struct nic_rq_bd_sec buf_desc;
+ /* Reserved field 0 for 16B alignment. */
+ unsigned int rsvd0;
+ /*
+ * CQE buffer SGE. Note: cqe_sect.len is in units of 16B and limited
+ * to bits 0~4.
+ */
+ struct nic_rq_bd_sec cqe_sect;
+ /* Reserved field 1, unused. */
+ unsigned int rsvd1;
+} nic_rq_wqe;
+
+/* CMD. */
+struct cmd_show_q_st {
+ struct tool_target target;
+
+ int qobj;
+ int q_id;
+ int wqe_id;
+ int direction;
+};
+
+#endif /* _HINIC3_MML_QUEUE */
diff --git a/drivers/net/hinic3/mml/meson.build b/drivers/net/hinic3/mml/meson.build
new file mode 100644
index 0000000000..f8d2650d8d
--- /dev/null
+++ b/drivers/net/hinic3/mml/meson.build
@@ -0,0 +1,62 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2025 Huawei Technologies Co., Ltd
+
+sources = files(
+ 'hinic3_dbg.c',
+ 'hinic3_mml_cmd.c',
+ 'hinic3_mml_ioctl.c',
+ 'hinic3_mml_lib.c',
+ 'hinic3_mml_main.c',
+ 'hinic3_mml_queue.c',
+)
+
+extra_flags = [
+ '-Wno-cast-qual',
+ '-Wno-format',
+ '-Wno-format-nonliteral',
+ '-Wno-format-security',
+ '-Wno-missing-braces',
+ '-Wno-missing-field-initializers',
+ '-Wno-missing-prototypes',
+ '-Wno-pointer-sign',
+ '-Wno-pointer-to-int-cast',
+ '-Wno-sign-compare',
+ '-Wno-strict-aliasing',
+ '-Wno-unused-parameter',
+ '-Wno-unused-value',
+ '-Wno-unused-variable',
+]
+
+# The driver only supports 64-bit machines; suppress 32-bit build warnings
+if not dpdk_conf.get('RTE_ARCH_64')
+ extra_flags += [
+ '-Wno-int-to-pointer-cast',
+ '-Wno-pointer-to-int-cast',
+ ]
+endif
+
+foreach flag: extra_flags
+ if cc.has_argument(flag)
+ cflags += flag
+ endif
+endforeach
+
+deps += ['hash']
+
+c_args = cflags
+includes += include_directories('../')
+includes += include_directories('../base/')
+
+mml_lib = static_library(
+ 'hinic3_mml',
+ sources,
+ dependencies: [
+ static_rte_eal,
+ static_rte_ethdev,
+ static_rte_bus_pci,
+ static_rte_hash,
+ ],
+ include_directories: includes,
+ c_args: c_args,
+)
+mml_objs = mml_lib.extract_all_objects()
--
2.47.0.windows.2
* [RFC 16/18] net/hinic3: add RSS promiscuous ops
2025-04-18 9:05 [RFC 00/18] add hinic3 PMD driver Feifei Wang
` (14 preceding siblings ...)
2025-04-18 9:06 ` [RFC 15/18] net/hinic3: add MML and EEPROM access feature Feifei Wang
@ 2025-04-18 9:06 ` Feifei Wang
2025-04-18 9:06 ` [RFC 17/18] net/hinic3: add FDIR flow control module Feifei Wang
` (4 subsequent siblings)
20 siblings, 0 replies; 30+ messages in thread
From: Feifei Wang @ 2025-04-18 9:06 UTC (permalink / raw)
To: dev; +Cc: Xin Wang, Feifei Wang, Yi Chen
From: Xin Wang <wangxin679@h-partners.com>
Add RSS and promiscuous ops related functions.
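
As an illustration (standard testpmd commands, not taken from this
patch), the new ops are exercised by e.g.:

    testpmd> set promisc 0 on
    testpmd> port config all rss all
    testpmd> port config 0 rss reta (0,0),(1,1)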
Signed-off-by: Xin Wang <wangxin679@h-partners.com>
Reviewed-by: Feifei Wang <wangfeifei40@huawei.com>
Reviewed-by: Yi Chen <chenyi221@huawei.com>
---
drivers/net/hinic3/hinic3_ethdev.c | 370 +++++++++++++++++++++++++++++
drivers/net/hinic3/hinic3_ethdev.h | 31 +++
2 files changed, 401 insertions(+)
diff --git a/drivers/net/hinic3/hinic3_ethdev.c b/drivers/net/hinic3/hinic3_ethdev.c
index 9c5decb867..9d2dcf95f7 100644
--- a/drivers/net/hinic3/hinic3_ethdev.c
+++ b/drivers/net/hinic3/hinic3_ethdev.c
@@ -2277,6 +2277,281 @@ hinic3_dev_allmulticast_disable(struct rte_eth_dev *dev)
return 0;
}
+/**
+ * Enable promiscuous mode.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_dev_promiscuous_enable(struct rte_eth_dev *dev)
+{
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ u32 rx_mode;
+ int err;
+
+ err = hinic3_mutex_lock(&nic_dev->rx_mode_mutex);
+ if (err)
+ return err;
+
+ rx_mode = nic_dev->rx_mode | HINIC3_RX_MODE_PROMISC;
+
+ err = hinic3_set_rx_mode(nic_dev->hwdev, rx_mode);
+ if (err) {
+ (void)hinic3_mutex_unlock(&nic_dev->rx_mode_mutex);
+ PMD_DRV_LOG(ERR, "Enable promiscuous failed");
+ return err;
+ }
+
+ nic_dev->rx_mode = rx_mode;
+
+ (void)hinic3_mutex_unlock(&nic_dev->rx_mode_mutex);
+
+ PMD_DRV_LOG(INFO,
+ "Enable promiscuous, nic_dev: %s, port_id: %d, promisc: %d",
+ nic_dev->dev_name, dev->data->port_id,
+ dev->data->promiscuous);
+ return 0;
+}
+
+/**
+ * Disable promiscuous mode.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_dev_promiscuous_disable(struct rte_eth_dev *dev)
+{
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ u32 rx_mode;
+ int err;
+
+ err = hinic3_mutex_lock(&nic_dev->rx_mode_mutex);
+ if (err)
+ return err;
+
+ rx_mode = nic_dev->rx_mode & (~HINIC3_RX_MODE_PROMISC);
+
+ err = hinic3_set_rx_mode(nic_dev->hwdev, rx_mode);
+ if (err) {
+ (void)hinic3_mutex_unlock(&nic_dev->rx_mode_mutex);
+ PMD_DRV_LOG(ERR, "Disable promiscuous failed");
+ return err;
+ }
+
+ nic_dev->rx_mode = rx_mode;
+
+ (void)hinic3_mutex_unlock(&nic_dev->rx_mode_mutex);
+
+ PMD_DRV_LOG(INFO,
+ "Disable promiscuous, nic_dev: %s, port_id: %d, promisc: %d",
+ nic_dev->dev_name, dev->data->port_id, dev->data->promiscuous);
+ return 0;
+}
+
+/**
+ * Get flow control configuration, including auto-negotiation and RX/TX pause
+ * settings.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ *
+ * @param[out] fc_conf
+ * The flow control configuration to be filled.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+
+/**
+ * Update the RSS hash key and RSS hash type.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] rss_conf
+ * RSS configuration data.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_rss_hash_update(struct rte_eth_dev *dev,
+ struct rte_eth_rss_conf *rss_conf)
+{
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ struct hinic3_rss_type rss_type = {0};
+ u64 rss_hf = rss_conf->rss_hf;
+ int err = 0;
+
+ if (nic_dev->rss_state == HINIC3_RSS_DISABLE) {
+ if (rss_hf != 0)
+ return -EINVAL;
+
+ PMD_DRV_LOG(INFO, "RSS is not enabled");
+ return 0;
+ }
+
+ if (rss_conf->rss_key_len > HINIC3_RSS_KEY_SIZE) {
+ PMD_DRV_LOG(ERR, "Invalid RSS key, rss_key_len: %d",
+ rss_conf->rss_key_len);
+ return -EINVAL;
+ }
+
+ if (rss_conf->rss_key) {
+ /*
+ * Cache the new key first so that the hardware is programmed
+ * with the key supplied by the caller rather than the stale
+ * cached one.
+ */
+ memcpy((void *)nic_dev->rss_key, (void *)rss_conf->rss_key,
+ (size_t)rss_conf->rss_key_len);
+ err = hinic3_rss_set_hash_key(nic_dev->hwdev, nic_dev->rss_key,
+ HINIC3_RSS_KEY_SIZE);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Set RSS hash key failed");
+ return err;
+ }
+ }
+
+ rss_type.ipv4 = (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER))
+ ? 1
+ : 0;
+ rss_type.tcp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0;
+ rss_type.ipv6 = (rss_hf & (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 |
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER))
+ ? 1
+ : 0;
+ rss_type.ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_EX) ? 1 : 0;
+ rss_type.tcp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0;
+ rss_type.tcp_ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) ? 1 : 0;
+ rss_type.udp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0;
+ rss_type.udp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0;
+
+ err = hinic3_set_rss_type(nic_dev->hwdev, rss_type);
+ if (err)
+ PMD_DRV_LOG(ERR, "Set RSS type failed");
+
+ return err;
+}
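+
+/*
+ * Illustrative example: rss_hf = RTE_ETH_RSS_IPV4 |
+ * RTE_ETH_RSS_NONFRAG_IPV4_TCP enables the ipv4 and tcp_ipv4 hash types
+ * above and leaves all other hash types disabled.
+ */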
+
+/**
+ * Get the RSS hash configuration.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[out] rss_conf
+ * RSS configuration data.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_rss_conf_get(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf)
+{
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ struct hinic3_rss_type rss_type = {0};
+ int err;
+
+ if (!rss_conf)
+ return -EINVAL;
+
+ if (nic_dev->rss_state == HINIC3_RSS_DISABLE) {
+ rss_conf->rss_hf = 0;
+ PMD_DRV_LOG(INFO, "RSS is not enabled");
+ return 0;
+ }
+
+ if (rss_conf->rss_key && rss_conf->rss_key_len >= HINIC3_RSS_KEY_SIZE) {
+ /*
+ * Get RSS key from driver to reduce the frequency of the MPU
+ * accessing the RSS memory.
+ */
+ rss_conf->rss_key_len = sizeof(nic_dev->rss_key);
+ memcpy((void *)rss_conf->rss_key, (void *)nic_dev->rss_key,
+ (size_t)rss_conf->rss_key_len);
+ }
+
+ err = hinic3_get_rss_type(nic_dev->hwdev, &rss_type);
+ if (err)
+ return err;
+
+ rss_conf->rss_hf = 0;
+ rss_conf->rss_hf |=
+ rss_type.ipv4 ? (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER)
+ : 0;
+ rss_conf->rss_hf |= rss_type.tcp_ipv4 ? RTE_ETH_RSS_NONFRAG_IPV4_TCP
+ : 0;
+ rss_conf->rss_hf |=
+ rss_type.ipv6 ? (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 |
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER)
+ : 0;
+ rss_conf->rss_hf |= rss_type.ipv6_ext ? RTE_ETH_RSS_IPV6_EX : 0;
+ rss_conf->rss_hf |= rss_type.tcp_ipv6 ? RTE_ETH_RSS_NONFRAG_IPV6_TCP
+ : 0;
+ rss_conf->rss_hf |= rss_type.tcp_ipv6_ext ? RTE_ETH_RSS_IPV6_TCP_EX : 0;
+ rss_conf->rss_hf |= rss_type.udp_ipv4 ? RTE_ETH_RSS_NONFRAG_IPV4_UDP
+ : 0;
+ rss_conf->rss_hf |= rss_type.udp_ipv6 ? RTE_ETH_RSS_NONFRAG_IPV6_UDP
+ : 0;
+
+ return 0;
+}
+
+/**
+ * Get the RETA indirection table.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[out] reta_conf
+ * Pointer to RETA configuration structure array.
+ * @param[in] reta_size
+ * Size of the RETA table.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_rss_reta_query(struct rte_eth_dev *dev,
+ struct rte_eth_rss_reta_entry64 *reta_conf,
+ uint16_t reta_size)
+{
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ u32 indirtbl[HINIC3_RSS_INDIR_SIZE] = {0};
+ u16 idx, shift;
+ u16 i;
+ int err;
+
+ if (nic_dev->rss_state == HINIC3_RSS_DISABLE) {
+ PMD_DRV_LOG(INFO, "RSS is not enabled");
+ return 0;
+ }
+
+ if (reta_size != HINIC3_RSS_INDIR_SIZE) {
+ PMD_DRV_LOG(ERR, "Invalid reta size, reta_size: %d", reta_size);
+ return -EINVAL;
+ }
+
+ err = hinic3_rss_get_indir_tbl(nic_dev->hwdev, indirtbl,
+ HINIC3_RSS_INDIR_SIZE);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Get RSS retas table failed, error: %d", err);
+ return err;
+ }
+
+ for (i = 0; i < reta_size; i++) {
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
+ if (reta_conf[idx].mask & (1ULL << shift))
+ reta_conf[idx].reta[shift] = (uint16_t)indirtbl[i];
+ }
+
+ return 0;
+}
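+
+/*
+ * Illustrative note: applications see the RETA as an array of
+ * rte_eth_rss_reta_entry64 groups of RTE_ETH_RETA_GROUP_SIZE (64)
+ * entries, so table index i maps to group i / 64 ("idx") and bit
+ * i % 64 ("shift"); e.g. i = 130 gives idx = 2, shift = 2, and the
+ * entry is copied only if bit 2 of reta_conf[2].mask is set.
+ */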
+
static int
hinic3_get_eeprom(__rte_unused struct rte_eth_dev *dev,
struct rte_dev_eeprom_info *info)
@@ -2287,6 +2562,68 @@ hinic3_get_eeprom(__rte_unused struct rte_eth_dev *dev,
&info->length, MAX_BUF_OUT_LEN);
}
+/**
+ * Update the RETA indirection table.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] reta_conf
+ * Pointer to RETA configuration structure array.
+ * @param[in] reta_size
+ * Size of the RETA table.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_rss_reta_update(struct rte_eth_dev *dev,
+ struct rte_eth_rss_reta_entry64 *reta_conf,
+ uint16_t reta_size)
+{
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ u32 indirtbl[HINIC3_RSS_INDIR_SIZE] = {0};
+ u16 idx, shift;
+ u16 i;
+ int err;
+
+ if (nic_dev->rss_state == HINIC3_RSS_DISABLE)
+ return 0;
+
+ if (reta_size != HINIC3_RSS_INDIR_SIZE) {
+ PMD_DRV_LOG(ERR, "Invalid reta size, reta_size: %d", reta_size);
+ return -EINVAL;
+ }
+
+ err = hinic3_rss_get_indir_tbl(nic_dev->hwdev, indirtbl,
+ HINIC3_RSS_INDIR_SIZE);
+ if (err)
+ return err;
+
+ /* Update RSS reta table. */
+ for (i = 0; i < reta_size; i++) {
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
+ if (reta_conf[idx].mask & (1ULL << shift))
+ indirtbl[i] = reta_conf[idx].reta[shift];
+ }
+
+ for (i = 0; i < reta_size; i++) {
+ if (indirtbl[i] >= nic_dev->num_rqs) {
+ PMD_DRV_LOG(ERR,
+ "Invalid reta entry, index: %d, num_rqs: %d",
+ indirtbl[i], nic_dev->num_rqs);
+ return -EFAULT;
+ }
+ }
+
+ err = hinic3_rss_set_indir_tbl(nic_dev->hwdev, indirtbl,
+ HINIC3_RSS_INDIR_SIZE);
+ if (err)
+ PMD_DRV_LOG(ERR, "Set RSS reta table failed");
+
+ return err;
+}
+
/**
* Get device generic statistics.
*
@@ -2857,6 +3194,29 @@ hinic3_set_mc_addr_list(struct rte_eth_dev *dev,
return 0;
}
+/**
+ * Get the flow operations of the driver.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[out] arg
+ * Location where the pointer to the flow operations is stored.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_dev_filter_ctrl(struct rte_eth_dev *dev, const struct rte_flow_ops **arg)
+{
+ RTE_SET_USED(dev);
+ *arg = &hinic3_flow_ops;
+ return 0;
+}
+
static int
hinic3_get_reg(__rte_unused struct rte_eth_dev *dev,
__rte_unused struct rte_dev_reg_info *regs)
@@ -2890,6 +3250,12 @@ static const struct eth_dev_ops hinic3_pmd_ops = {
.vlan_offload_set = hinic3_vlan_offload_set,
.allmulticast_enable = hinic3_dev_allmulticast_enable,
.allmulticast_disable = hinic3_dev_allmulticast_disable,
+ .promiscuous_enable = hinic3_dev_promiscuous_enable,
+ .promiscuous_disable = hinic3_dev_promiscuous_disable,
+ .rss_hash_update = hinic3_rss_hash_update,
+ .rss_hash_conf_get = hinic3_rss_conf_get,
+ .reta_update = hinic3_rss_reta_update,
+ .reta_query = hinic3_rss_reta_query,
.get_eeprom = hinic3_get_eeprom,
.stats_get = hinic3_dev_stats_get,
.stats_reset = hinic3_dev_stats_reset,
@@ -2931,6 +3297,10 @@ static const struct eth_dev_ops hinic3_pmd_vf_ops = {
.vlan_offload_set = hinic3_vlan_offload_set,
.allmulticast_enable = hinic3_dev_allmulticast_enable,
.allmulticast_disable = hinic3_dev_allmulticast_disable,
+ .rss_hash_update = hinic3_rss_hash_update,
+ .rss_hash_conf_get = hinic3_rss_conf_get,
+ .reta_update = hinic3_rss_reta_update,
+ .reta_query = hinic3_rss_reta_query,
.get_eeprom = hinic3_get_eeprom,
.stats_get = hinic3_dev_stats_get,
.stats_reset = hinic3_dev_stats_reset,
diff --git a/drivers/net/hinic3/hinic3_ethdev.h b/drivers/net/hinic3/hinic3_ethdev.h
index a69cf972e7..5dd7c7821a 100644
--- a/drivers/net/hinic3/hinic3_ethdev.h
+++ b/drivers/net/hinic3/hinic3_ethdev.h
@@ -97,6 +97,10 @@ struct hinic3_nic_dev {
u16 rx_buff_len;
u16 mtu_size;
+ u16 rss_state;
+ u8 num_rss; /**< Number of RSS queues. */
+ u8 rsvd0; /**< Reserved field 0. */
+
u32 rx_mode;
u8 rx_queue_list[HINIC3_MAX_QUEUE_NUM];
rte_spinlock_t queue_list_lock;
@@ -106,6 +110,8 @@ struct hinic3_nic_dev {
u32 default_cos;
u32 rx_csum_en;
+ u8 rss_key[HINIC3_RSS_KEY_SIZE];
+
unsigned long dev_status;
struct rte_ether_addr default_addr;
@@ -116,4 +122,29 @@ struct hinic3_nic_dev {
u32 vfta[HINIC3_VFTA_SIZE]; /**< VLAN bitmap. */
};
+/**
+ * Enable interrupt for the specified RX queue.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] queue_id
+ * The ID of the receive queue for which the interrupt is being enabled.
+ * @return
+ * 0 on success, a negative error code on failure.
+ */
+int hinic3_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id);
+
+/**
+ * Disable interrupt for the specified RX queue.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] queue_id
+ * The ID of the receive queue for which the interrupt is being disabled.
+ * @return
+ * 0 on success, a negative error code on failure.
+ */
+int hinic3_dev_rx_queue_intr_disable(struct rte_eth_dev *dev,
+ uint16_t queue_id);
+
#endif /* _HINIC3_ETHDEV_H_ */
--
2.47.0.windows.2
* [RFC 17/18] net/hinic3: add FDIR flow control module
2025-04-18 9:05 [RFC 00/18] add hinic3 PMD driver Feifei Wang
` (15 preceding siblings ...)
2025-04-18 9:06 ` [RFC 16/18] net/hinic3: add RSS promiscuous ops Feifei Wang
@ 2025-04-18 9:06 ` Feifei Wang
2025-04-18 18:25 ` Stephen Hemminger
` (3 more replies)
2025-04-18 9:06 ` [RFC 18/18] drivers/net: add hinic3 PMD build and doc files Feifei Wang
` (3 subsequent siblings)
20 siblings, 4 replies; 30+ messages in thread
From: Feifei Wang @ 2025-04-18 9:06 UTC (permalink / raw)
To: dev; +Cc: Yi Chen, Xin Wang, Feifei Wang
From: Yi Chen <chenyi221@huawei.com>
Add support for flow director filters, including ethertype, IPv4,
IPv6, and tunnel VXLAN. In addition, users can add or delete filters.
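
As an illustration (standard testpmd rte_flow syntax, not taken from
this patch), a tunnel VXLAN rule directing inner IPv4 traffic to a
queue could look like:

    flow create 0 ingress pattern eth / ipv4 / udp / vxlan / eth /
        ipv4 dst is 192.168.0.1 / end actions queue index 3 / end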
Signed-off-by: Yi Chen <chenyi221@huawei.com>
Reviewed-by: Xin Wang <wangxin679@h-partners.com>
Reviewed-by: Feifei Wang <wangfeifei40@huawei.com>
---
drivers/net/hinic3/hinic3_ethdev.c | 82 ++
drivers/net/hinic3/hinic3_ethdev.h | 17 +
drivers/net/hinic3/hinic3_fdir.c | 1394 +++++++++++++++++++++++
drivers/net/hinic3/hinic3_fdir.h | 398 +++++++
drivers/net/hinic3/hinic3_flow.c | 1700 ++++++++++++++++++++++++++++
drivers/net/hinic3/hinic3_flow.h | 80 ++
6 files changed, 3671 insertions(+)
create mode 100644 drivers/net/hinic3/hinic3_fdir.c
create mode 100644 drivers/net/hinic3/hinic3_fdir.h
create mode 100644 drivers/net/hinic3/hinic3_flow.c
create mode 100644 drivers/net/hinic3/hinic3_flow.h
diff --git a/drivers/net/hinic3/hinic3_ethdev.c b/drivers/net/hinic3/hinic3_ethdev.c
index 9d2dcf95f7..2b8d2dc7a7 100644
--- a/drivers/net/hinic3/hinic3_ethdev.c
+++ b/drivers/net/hinic3/hinic3_ethdev.c
@@ -2369,6 +2369,84 @@ hinic3_dev_promiscuous_disable(struct rte_eth_dev *dev)
* @return
* 0 on success, non-zero on failure.
*/
+static int
+hinic3_dev_flow_ctrl_get(struct rte_eth_dev *dev,
+ struct rte_eth_fc_conf *fc_conf)
+{
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ struct nic_pause_config nic_pause;
+ int err;
+
+ err = hinic3_mutex_lock(&nic_dev->pause_mutuex);
+ if (err)
+ return err;
+
+ memset(&nic_pause, 0, sizeof(nic_pause));
+ err = hinic3_get_pause_info(nic_dev->hwdev, &nic_pause);
+ if (err) {
+ (void)hinic3_mutex_unlock(&nic_dev->pause_mutuex);
+ return err;
+ }
+
+ if (nic_dev->pause_set || !nic_pause.auto_neg) {
+ nic_pause.rx_pause = nic_dev->nic_pause.rx_pause;
+ nic_pause.tx_pause = nic_dev->nic_pause.tx_pause;
+ }
+
+ fc_conf->autoneg = nic_pause.auto_neg;
+
+ if (nic_pause.tx_pause && nic_pause.rx_pause)
+ fc_conf->mode = RTE_ETH_FC_FULL;
+ else if (nic_pause.tx_pause)
+ fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
+ else if (nic_pause.rx_pause)
+ fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
+ else
+ fc_conf->mode = RTE_ETH_FC_NONE;
+
+ (void)hinic3_mutex_unlock(&nic_dev->pause_mutuex);
+ return 0;
+}
+
+static int
+hinic3_dev_flow_ctrl_set(struct rte_eth_dev *dev,
+ struct rte_eth_fc_conf *fc_conf)
+{
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ struct nic_pause_config nic_pause;
+ int err;
+
+ err = hinic3_mutex_lock(&nic_dev->pause_mutuex);
+ if (err)
+ return err;
+
+ memset(&nic_pause, 0, sizeof(nic_pause));
+ if ((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL ||
+ (fc_conf->mode & RTE_ETH_FC_TX_PAUSE))
+ nic_pause.tx_pause = true;
+
+ if ((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL ||
+ (fc_conf->mode & RTE_ETH_FC_RX_PAUSE))
+ nic_pause.rx_pause = true;
+
+ err = hinic3_set_pause_info(nic_dev->hwdev, nic_pause);
+ if (err) {
+ (void)hinic3_mutex_unlock(&nic_dev->pause_mutuex);
+ return err;
+ }
+
+ nic_dev->pause_set = true;
+ nic_dev->nic_pause.rx_pause = nic_pause.rx_pause;
+ nic_dev->nic_pause.tx_pause = nic_pause.tx_pause;
+
+ PMD_DRV_LOG(INFO,
+ "Just support set tx or rx pause info, tx: %s, rx: %s",
+ nic_pause.tx_pause ? "on" : "off",
+ nic_pause.rx_pause ? "on" : "off");
+
+ (void)hinic3_mutex_unlock(&nic_dev->pause_mutuex);
+ return 0;
+}
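+
+/*
+ * Illustrative mapping used by the two helpers above, assuming the
+ * standard rte_eth_fc_mode values: RTE_ETH_FC_NONE <-> no pause,
+ * RTE_ETH_FC_RX_PAUSE <-> rx only, RTE_ETH_FC_TX_PAUSE <-> tx only,
+ * RTE_ETH_FC_FULL <-> pause in both directions.
+ */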
/**
* Update the RSS hash key and RSS hash type.
@@ -3252,6 +3330,8 @@ static const struct eth_dev_ops hinic3_pmd_ops = {
.allmulticast_disable = hinic3_dev_allmulticast_disable,
.promiscuous_enable = hinic3_dev_promiscuous_enable,
.promiscuous_disable = hinic3_dev_promiscuous_disable,
+ .flow_ctrl_get = hinic3_dev_flow_ctrl_get,
+ .flow_ctrl_set = hinic3_dev_flow_ctrl_set,
.rss_hash_update = hinic3_rss_hash_update,
.rss_hash_conf_get = hinic3_rss_conf_get,
.reta_update = hinic3_rss_reta_update,
@@ -3269,6 +3349,7 @@ static const struct eth_dev_ops hinic3_pmd_ops = {
.mac_addr_remove = hinic3_mac_addr_remove,
.mac_addr_add = hinic3_mac_addr_add,
.set_mc_addr_list = hinic3_set_mc_addr_list,
+ .flow_ops_get = hinic3_dev_filter_ctrl,
.get_reg = hinic3_get_reg,
};
@@ -3313,6 +3394,7 @@ static const struct eth_dev_ops hinic3_pmd_vf_ops = {
.mac_addr_remove = hinic3_mac_addr_remove,
.mac_addr_add = hinic3_mac_addr_add,
.set_mc_addr_list = hinic3_set_mc_addr_list,
+ .flow_ops_get = hinic3_dev_filter_ctrl,
};
/**
diff --git a/drivers/net/hinic3/hinic3_ethdev.h b/drivers/net/hinic3/hinic3_ethdev.h
index 5dd7c7821a..07e24e971c 100644
--- a/drivers/net/hinic3/hinic3_ethdev.h
+++ b/drivers/net/hinic3/hinic3_ethdev.h
@@ -8,6 +8,8 @@
#include <rte_ethdev.h>
#include <rte_ethdev_core.h>
+#include "hinic3_fdir.h"
+
#define HINIC3_PMD_DRV_VERSION "B106"
#define PCI_DEV_TO_INTR_HANDLE(pci_dev) ((pci_dev)->intr_handle)
@@ -83,6 +85,9 @@ enum nic_feature_cap {
#define DEFAULT_DRV_FEATURE 0x3FFF
+TAILQ_HEAD(hinic3_ethertype_filter_list, rte_flow);
+TAILQ_HEAD(hinic3_fdir_rule_filter_list, rte_flow);
+
struct hinic3_nic_dev {
struct hinic3_hwdev *hwdev; /**< Hardware device. */
struct hinic3_txq **txqs;
@@ -114,14 +119,26 @@ struct hinic3_nic_dev {
unsigned long dev_status;
+ u8 pause_set; /**< Flag of PAUSE frame setting. */
+ pthread_mutex_t pause_mutuex;
+ struct nic_pause_config nic_pause;
+
struct rte_ether_addr default_addr;
struct rte_ether_addr *mc_list;
char dev_name[HINIC3_DEV_NAME_LEN];
u64 feature_cap;
u32 vfta[HINIC3_VFTA_SIZE]; /**< VLAN bitmap. */
+
+ u16 tcam_rule_nums;
+ u16 ethertype_rule_nums;
+ struct hinic3_tcam_info tcam;
+ struct hinic3_ethertype_filter_list filter_ethertype_list;
+ struct hinic3_fdir_rule_filter_list filter_fdir_rule_list;
};
+extern const struct rte_flow_ops hinic3_flow_ops;
+
/**
* Enable interrupt for the specified RX queue.
*
diff --git a/drivers/net/hinic3/hinic3_fdir.c b/drivers/net/hinic3/hinic3_fdir.c
new file mode 100644
index 0000000000..e36050f263
--- /dev/null
+++ b/drivers/net/hinic3/hinic3_fdir.c
@@ -0,0 +1,1394 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Huawei Technologies Co., Ltd
+ */
+
+#include <errno.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <string.h>
+#include <unistd.h>
+
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_ethdev.h>
+#include <rte_ether.h>
+#include <rte_flow.h>
+#include <rte_flow_driver.h>
+#include <rte_malloc.h>
+
+#include "base/hinic3_compat.h"
+#include "base/hinic3_hwdev.h"
+#include "base/hinic3_hwif.h"
+#include "base/hinic3_nic_cfg.h"
+#include "hinic3_ethdev.h"
+#include "hinic3_fdir.h"
+
+#define HINIC3_UINT1_MAX 0x1
+#define HINIC3_UINT4_MAX 0xf
+#define HINIC3_UINT15_MAX 0x7fff
+
+#define HINIC3_DEV_PRIVATE_TO_TCAM_INFO(nic_dev) \
+ (&((struct hinic3_nic_dev *)(nic_dev))->tcam)
+
+/**
+ * Perform a bitwise AND operation on the input key value and mask, and
+ * store the result in the key_y array.
+ *
+ * @param[out] key_y
+ * Array for storing results.
+ * @param[in] src_input
+ * Input key array.
+ * @param[in] mask
+ * Mask array.
+ * @param[in] len
+ * Length of the key value and mask.
+ */
+static void
+tcam_translate_key_y(u8 *key_y, u8 *src_input, u8 *mask, u8 len)
+{
+ u8 idx;
+
+ for (idx = 0; idx < len; idx++)
+ key_y[idx] = src_input[idx] & mask[idx];
+}
+
+/**
+ * Convert key_y to key_x using the exclusive OR operation.
+ *
+ * @param[out] key_x
+ * Array for storing results.
+ * @param[in] key_y
+ * Input key array.
+ * @param[in] mask
+ * Mask array.
+ * @param[in] len
+ * Length of the key value and mask.
+ */
+static void
+tcam_translate_key_x(u8 *key_x, u8 *key_y, u8 *mask, u8 len)
+{
+ u8 idx;
+
+ for (idx = 0; idx < len; idx++)
+ key_x[idx] = key_y[idx] ^ mask[idx];
+}
+
+static void
+tcam_key_calculate(struct hinic3_tcam_key *tcam_key,
+ struct hinic3_tcam_cfg_rule *fdir_tcam_rule)
+{
+ tcam_translate_key_y(fdir_tcam_rule->key.y, (u8 *)(&tcam_key->key_info),
+ (u8 *)(&tcam_key->key_mask),
+ HINIC3_TCAM_FLOW_KEY_SIZE);
+ tcam_translate_key_x(fdir_tcam_rule->key.x, fdir_tcam_rule->key.y,
+ (u8 *)(&tcam_key->key_mask),
+ HINIC3_TCAM_FLOW_KEY_SIZE);
+}
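+
+/*
+ * Illustrative note on the x/y encoding above: for every bit selected
+ * by the mask, key_y holds the key bit and key_x its complement, while
+ * bits outside the mask are zero in both, marking them "don't care".
+ * E.g. a key byte 0xAB with mask 0xF0 gives key_y = 0xA0 and
+ * key_x = 0xA0 ^ 0xF0 = 0x50.
+ */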
+
+static void
+hinic3_fdir_tcam_ipv4_init(struct hinic3_fdir_filter *rule,
+ struct hinic3_tcam_key *tcam_key)
+{
+ /* Fill type of ip. */
+ tcam_key->key_mask.ip_type = HINIC3_UINT1_MAX;
+ tcam_key->key_info.ip_type = HINIC3_FDIR_IP_TYPE_IPV4;
+
+ /* Fill src IPv4. */
+ tcam_key->key_mask.sipv4_h =
+ HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv4.src_ip);
+ tcam_key->key_mask.sipv4_l =
+ HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv4.src_ip);
+ tcam_key->key_info.sipv4_h =
+ HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv4.src_ip);
+ tcam_key->key_info.sipv4_l =
+ HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv4.src_ip);
+
+ /* Fill dst IPv4. */
+ tcam_key->key_mask.dipv4_h =
+ HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv4.dst_ip);
+ tcam_key->key_mask.dipv4_l =
+ HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv4.dst_ip);
+ tcam_key->key_info.dipv4_h =
+ HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv4.dst_ip);
+ tcam_key->key_info.dipv4_l =
+ HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv4.dst_ip);
+}
+
+static void
+hinic3_fdir_tcam_ipv6_init(struct hinic3_fdir_filter *rule,
+ struct hinic3_tcam_key *tcam_key)
+{
+ /* Fill type of ip. */
+ tcam_key->key_mask_ipv6.ip_type = HINIC3_UINT1_MAX;
+ tcam_key->key_info_ipv6.ip_type = HINIC3_FDIR_IP_TYPE_IPV6;
+
+ /* Fill src IPv6. */
+ tcam_key->key_mask_ipv6.sipv6_key0 =
+ HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.src_ip[0]);
+ tcam_key->key_mask_ipv6.sipv6_key1 =
+ HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.src_ip[0]);
+ tcam_key->key_mask_ipv6.sipv6_key2 =
+ HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.src_ip[0x1]);
+ tcam_key->key_mask_ipv6.sipv6_key3 =
+ HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.src_ip[0x1]);
+ tcam_key->key_mask_ipv6.sipv6_key4 =
+ HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.src_ip[0x2]);
+ tcam_key->key_mask_ipv6.sipv6_key5 =
+ HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.src_ip[0x2]);
+ tcam_key->key_mask_ipv6.sipv6_key6 =
+ HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.src_ip[0x3]);
+ tcam_key->key_mask_ipv6.sipv6_key7 =
+ HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.src_ip[0x3]);
+ tcam_key->key_info_ipv6.sipv6_key0 =
+ HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.src_ip[0]);
+ tcam_key->key_info_ipv6.sipv6_key1 =
+ HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.src_ip[0]);
+ tcam_key->key_info_ipv6.sipv6_key2 =
+ HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.src_ip[0x1]);
+ tcam_key->key_info_ipv6.sipv6_key3 =
+ HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.src_ip[0x1]);
+ tcam_key->key_info_ipv6.sipv6_key4 =
+ HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.src_ip[0x2]);
+ tcam_key->key_info_ipv6.sipv6_key5 =
+ HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.src_ip[0x2]);
+ tcam_key->key_info_ipv6.sipv6_key6 =
+ HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.src_ip[0x3]);
+ tcam_key->key_info_ipv6.sipv6_key7 =
+ HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.src_ip[0x3]);
+
+ /* Fill dst IPv6. */
+ tcam_key->key_mask_ipv6.dipv6_key0 =
+ HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.dst_ip[0]);
+ tcam_key->key_mask_ipv6.dipv6_key1 =
+ HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.dst_ip[0]);
+ tcam_key->key_mask_ipv6.dipv6_key2 =
+ HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.dst_ip[0x1]);
+ tcam_key->key_mask_ipv6.dipv6_key3 =
+ HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.dst_ip[0x1]);
+ tcam_key->key_mask_ipv6.dipv6_key4 =
+ HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.dst_ip[0x2]);
+ tcam_key->key_mask_ipv6.dipv6_key5 =
+ HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.dst_ip[0x2]);
+ tcam_key->key_mask_ipv6.dipv6_key6 =
+ HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.dst_ip[0x3]);
+ tcam_key->key_mask_ipv6.dipv6_key7 =
+ HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.dst_ip[0x3]);
+ tcam_key->key_info_ipv6.dipv6_key0 =
+ HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.dst_ip[0]);
+ tcam_key->key_info_ipv6.dipv6_key1 =
+ HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.dst_ip[0]);
+ tcam_key->key_info_ipv6.dipv6_key2 =
+ HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.dst_ip[0x1]);
+ tcam_key->key_info_ipv6.dipv6_key3 =
+ HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.dst_ip[0x1]);
+ tcam_key->key_info_ipv6.dipv6_key4 =
+ HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.dst_ip[0x2]);
+ tcam_key->key_info_ipv6.dipv6_key5 =
+ HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.dst_ip[0x2]);
+ tcam_key->key_info_ipv6.dipv6_key6 =
+ HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.dst_ip[0x3]);
+ tcam_key->key_info_ipv6.dipv6_key7 =
+ HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.dst_ip[0x3]);
+}
+
+/**
+ * Set the TCAM information in notunnel scenario.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] rule
+ * Pointer to the filtering rule.
+ * @param[in] tcam_key
+ * Pointer to the TCAM key.
+ */
+static void
+hinic3_fdir_tcam_notunnel_init(struct rte_eth_dev *dev,
+ struct hinic3_fdir_filter *rule,
+ struct hinic3_tcam_key *tcam_key)
+{
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+
+ /* Fill tcam_key info. */
+ tcam_key->key_mask.sport = rule->key_mask.src_port;
+ tcam_key->key_info.sport = rule->key_spec.src_port;
+
+ tcam_key->key_mask.dport = rule->key_mask.dst_port;
+ tcam_key->key_info.dport = rule->key_spec.dst_port;
+
+ tcam_key->key_mask.tunnel_type = HINIC3_UINT4_MAX;
+ tcam_key->key_info.tunnel_type = HINIC3_FDIR_TUNNEL_MODE_NORMAL;
+
+ tcam_key->key_mask.function_id = HINIC3_UINT15_MAX;
+ tcam_key->key_info.function_id = hinic3_global_func_id(nic_dev->hwdev) &
+ HINIC3_UINT15_MAX;
+
+ tcam_key->key_mask.ip_proto = rule->key_mask.proto;
+ tcam_key->key_info.ip_proto = rule->key_spec.proto;
+
+ if (rule->ip_type == HINIC3_FDIR_IP_TYPE_IPV4)
+ hinic3_fdir_tcam_ipv4_init(rule, tcam_key);
+ else if (rule->ip_type == HINIC3_FDIR_IP_TYPE_IPV6)
+ hinic3_fdir_tcam_ipv6_init(rule, tcam_key);
+}
+
+static void
+hinic3_fdir_tcam_vxlan_ipv4_init(struct hinic3_fdir_filter *rule,
+ struct hinic3_tcam_key *tcam_key)
+{
+ /* Fill type of ip. */
+ tcam_key->key_mask.ip_type = HINIC3_UINT1_MAX;
+ tcam_key->key_info.ip_type = HINIC3_FDIR_IP_TYPE_IPV4;
+
+ /* Fill src ipv4. */
+ tcam_key->key_mask.sipv4_h =
+ HINIC3_32_UPPER_16_BITS(rule->key_mask.inner_ipv4.src_ip);
+ tcam_key->key_mask.sipv4_l =
+ HINIC3_32_LOWER_16_BITS(rule->key_mask.inner_ipv4.src_ip);
+ tcam_key->key_info.sipv4_h =
+ HINIC3_32_UPPER_16_BITS(rule->key_spec.inner_ipv4.src_ip);
+ tcam_key->key_info.sipv4_l =
+ HINIC3_32_LOWER_16_BITS(rule->key_spec.inner_ipv4.src_ip);
+
+ /* Fill dst ipv4. */
+ tcam_key->key_mask.dipv4_h =
+ HINIC3_32_UPPER_16_BITS(rule->key_mask.inner_ipv4.dst_ip);
+ tcam_key->key_mask.dipv4_l =
+ HINIC3_32_LOWER_16_BITS(rule->key_mask.inner_ipv4.dst_ip);
+ tcam_key->key_info.dipv4_h =
+ HINIC3_32_UPPER_16_BITS(rule->key_spec.inner_ipv4.dst_ip);
+ tcam_key->key_info.dipv4_l =
+ HINIC3_32_LOWER_16_BITS(rule->key_spec.inner_ipv4.dst_ip);
+}
+
+static void
+hinic3_fdir_tcam_vxlan_ipv6_init(struct hinic3_fdir_filter *rule,
+ struct hinic3_tcam_key *tcam_key)
+{
+ /* Fill type of ip. */
+ tcam_key->key_mask_vxlan_ipv6.ip_type = HINIC3_UINT1_MAX;
+ tcam_key->key_info_vxlan_ipv6.ip_type = HINIC3_FDIR_IP_TYPE_IPV6;
+
+ /* Use inner dst ipv6 to fill the dst ipv6 of tcam_key. */
+ tcam_key->key_mask_vxlan_ipv6.dipv6_key0 =
+ HINIC3_32_UPPER_16_BITS(rule->key_mask.inner_ipv6.dst_ip[0]);
+ tcam_key->key_mask_vxlan_ipv6.dipv6_key1 =
+ HINIC3_32_LOWER_16_BITS(rule->key_mask.inner_ipv6.dst_ip[0]);
+ tcam_key->key_mask_vxlan_ipv6.dipv6_key2 =
+ HINIC3_32_UPPER_16_BITS(rule->key_mask.inner_ipv6.dst_ip[0x1]);
+ tcam_key->key_mask_vxlan_ipv6.dipv6_key3 =
+ HINIC3_32_LOWER_16_BITS(rule->key_mask.inner_ipv6.dst_ip[0x1]);
+ tcam_key->key_mask_vxlan_ipv6.dipv6_key4 =
+ HINIC3_32_UPPER_16_BITS(rule->key_mask.inner_ipv6.dst_ip[0x2]);
+ tcam_key->key_mask_vxlan_ipv6.dipv6_key5 =
+ HINIC3_32_LOWER_16_BITS(rule->key_mask.inner_ipv6.dst_ip[0x2]);
+ tcam_key->key_mask_vxlan_ipv6.dipv6_key6 =
+ HINIC3_32_UPPER_16_BITS(rule->key_mask.inner_ipv6.dst_ip[0x3]);
+ tcam_key->key_mask_vxlan_ipv6.dipv6_key7 =
+ HINIC3_32_LOWER_16_BITS(rule->key_mask.inner_ipv6.dst_ip[0x3]);
+ tcam_key->key_info_vxlan_ipv6.dipv6_key0 =
+ HINIC3_32_UPPER_16_BITS(rule->key_spec.inner_ipv6.dst_ip[0]);
+ tcam_key->key_info_vxlan_ipv6.dipv6_key1 =
+ HINIC3_32_LOWER_16_BITS(rule->key_spec.inner_ipv6.dst_ip[0]);
+ tcam_key->key_info_vxlan_ipv6.dipv6_key2 =
+ HINIC3_32_UPPER_16_BITS(rule->key_spec.inner_ipv6.dst_ip[0x1]);
+ tcam_key->key_info_vxlan_ipv6.dipv6_key3 =
+ HINIC3_32_LOWER_16_BITS(rule->key_spec.inner_ipv6.dst_ip[0x1]);
+ tcam_key->key_info_vxlan_ipv6.dipv6_key4 =
+ HINIC3_32_UPPER_16_BITS(rule->key_spec.inner_ipv6.dst_ip[0x2]);
+ tcam_key->key_info_vxlan_ipv6.dipv6_key5 =
+ HINIC3_32_LOWER_16_BITS(rule->key_spec.inner_ipv6.dst_ip[0x2]);
+ tcam_key->key_info_vxlan_ipv6.dipv6_key6 =
+ HINIC3_32_UPPER_16_BITS(rule->key_spec.inner_ipv6.dst_ip[0x3]);
+ tcam_key->key_info_vxlan_ipv6.dipv6_key7 =
+ HINIC3_32_LOWER_16_BITS(rule->key_spec.inner_ipv6.dst_ip[0x3]);
+}
+
+static void
+hinic3_fdir_tcam_outer_ipv6_init(struct hinic3_fdir_filter *rule,
+ struct hinic3_tcam_key *tcam_key)
+{
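+	/*
+	 * Each 128-bit IPv6 address is split into eight 16-bit key fields,
+	 * most-significant chunk first: for 2001:db8::1, sipv6_key0 =
+	 * 0x2001, sipv6_key1 = 0x0db8, ..., sipv6_key7 = 0x0001.
+	 */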
+ tcam_key->key_mask_ipv6.sipv6_key0 =
+ HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.src_ip[0]);
+ tcam_key->key_mask_ipv6.sipv6_key1 =
+ HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.src_ip[0]);
+ tcam_key->key_mask_ipv6.sipv6_key2 =
+ HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.src_ip[0x1]);
+ tcam_key->key_mask_ipv6.sipv6_key3 =
+ HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.src_ip[0x1]);
+ tcam_key->key_mask_ipv6.sipv6_key4 =
+ HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.src_ip[0x2]);
+ tcam_key->key_mask_ipv6.sipv6_key5 =
+ HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.src_ip[0x2]);
+ tcam_key->key_mask_ipv6.sipv6_key6 =
+ HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.src_ip[0x3]);
+ tcam_key->key_mask_ipv6.sipv6_key7 =
+ HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.src_ip[0x3]);
+ tcam_key->key_info_ipv6.sipv6_key0 =
+ HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.src_ip[0]);
+ tcam_key->key_info_ipv6.sipv6_key1 =
+ HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.src_ip[0]);
+ tcam_key->key_info_ipv6.sipv6_key2 =
+ HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.src_ip[0x1]);
+ tcam_key->key_info_ipv6.sipv6_key3 =
+ HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.src_ip[0x1]);
+ tcam_key->key_info_ipv6.sipv6_key4 =
+ HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.src_ip[0x2]);
+ tcam_key->key_info_ipv6.sipv6_key5 =
+ HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.src_ip[0x2]);
+ tcam_key->key_info_ipv6.sipv6_key6 =
+ HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.src_ip[0x3]);
+ tcam_key->key_info_ipv6.sipv6_key7 =
+ HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.src_ip[0x3]);
+
+ tcam_key->key_mask_ipv6.dipv6_key0 =
+ HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.dst_ip[0]);
+ tcam_key->key_mask_ipv6.dipv6_key1 =
+ HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.dst_ip[0]);
+ tcam_key->key_mask_ipv6.dipv6_key2 =
+ HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.dst_ip[0x1]);
+ tcam_key->key_mask_ipv6.dipv6_key3 =
+ HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.dst_ip[0x1]);
+ tcam_key->key_mask_ipv6.dipv6_key4 =
+ HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.dst_ip[0x2]);
+ tcam_key->key_mask_ipv6.dipv6_key5 =
+ HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.dst_ip[0x2]);
+ tcam_key->key_mask_ipv6.dipv6_key6 =
+ HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.dst_ip[0x3]);
+ tcam_key->key_mask_ipv6.dipv6_key7 =
+ HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.dst_ip[0x3]);
+ tcam_key->key_info_ipv6.dipv6_key0 =
+ HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.dst_ip[0]);
+ tcam_key->key_info_ipv6.dipv6_key1 =
+ HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.dst_ip[0]);
+ tcam_key->key_info_ipv6.dipv6_key2 =
+ HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.dst_ip[0x1]);
+ tcam_key->key_info_ipv6.dipv6_key3 =
+ HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.dst_ip[0x1]);
+ tcam_key->key_info_ipv6.dipv6_key4 =
+ HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.dst_ip[0x2]);
+ tcam_key->key_info_ipv6.dipv6_key5 =
+ HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.dst_ip[0x2]);
+ tcam_key->key_info_ipv6.dipv6_key6 =
+ HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.dst_ip[0x3]);
+ tcam_key->key_info_ipv6.dipv6_key7 =
+ HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.dst_ip[0x3]);
+}
+
+static void
+hinic3_fdir_tcam_ipv6_vxlan_init(struct rte_eth_dev *dev,
+ struct hinic3_fdir_filter *rule,
+ struct hinic3_tcam_key *tcam_key)
+{
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+
+ tcam_key->key_mask_ipv6.ip_proto = rule->key_mask.proto;
+ tcam_key->key_info_ipv6.ip_proto = rule->key_spec.proto;
+
+ tcam_key->key_mask_ipv6.tunnel_type = HINIC3_UINT4_MAX;
+ tcam_key->key_info_ipv6.tunnel_type = HINIC3_FDIR_TUNNEL_MODE_VXLAN;
+
+ tcam_key->key_mask_ipv6.outer_ip_type = HINIC3_UINT1_MAX;
+ tcam_key->key_info_ipv6.outer_ip_type = HINIC3_FDIR_IP_TYPE_IPV6;
+
+ tcam_key->key_mask_ipv6.function_id = HINIC3_UINT15_MAX;
+ tcam_key->key_info_ipv6.function_id =
+ hinic3_global_func_id(nic_dev->hwdev) & HINIC3_UINT15_MAX;
+
+ tcam_key->key_mask_ipv6.dport = rule->key_mask.dst_port;
+ tcam_key->key_info_ipv6.dport = rule->key_spec.dst_port;
+
+ tcam_key->key_mask_ipv6.sport = rule->key_mask.src_port;
+ tcam_key->key_info_ipv6.sport = rule->key_spec.src_port;
+
+ if (rule->ip_type == HINIC3_FDIR_IP_TYPE_ANY)
+ hinic3_fdir_tcam_outer_ipv6_init(rule, tcam_key);
+}
+
+/**
+ * Sets the TCAM information in the VXLAN scenario.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] rule
+ * Pointer to the filtering rule.
+ * @param[in] tcam_key
+ * Pointer to the TCAM key.
+ */
+static void
+hinic3_fdir_tcam_vxlan_init(struct rte_eth_dev *dev,
+ struct hinic3_fdir_filter *rule,
+ struct hinic3_tcam_key *tcam_key)
+{
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+
+ if (rule->outer_ip_type == HINIC3_FDIR_IP_TYPE_IPV6) {
+ hinic3_fdir_tcam_ipv6_vxlan_init(dev, rule, tcam_key);
+ return;
+ }
+
+ tcam_key->key_mask.ip_proto = rule->key_mask.proto;
+ tcam_key->key_info.ip_proto = rule->key_spec.proto;
+
+ tcam_key->key_mask.sport = rule->key_mask.src_port;
+ tcam_key->key_info.sport = rule->key_spec.src_port;
+
+ tcam_key->key_mask.dport = rule->key_mask.dst_port;
+ tcam_key->key_info.dport = rule->key_spec.dst_port;
+
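+	/* Fill outer src and dst IPv4 of the VXLAN tunnel. */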
+ tcam_key->key_mask.outer_sipv4_h =
+ HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv4.src_ip);
+ tcam_key->key_mask.outer_sipv4_l =
+ HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv4.src_ip);
+ tcam_key->key_info.outer_sipv4_h =
+ HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv4.src_ip);
+ tcam_key->key_info.outer_sipv4_l =
+ HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv4.src_ip);
+
+ tcam_key->key_mask.outer_dipv4_h =
+ HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv4.dst_ip);
+ tcam_key->key_mask.outer_dipv4_l =
+ HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv4.dst_ip);
+ tcam_key->key_info.outer_dipv4_h =
+ HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv4.dst_ip);
+ tcam_key->key_info.outer_dipv4_l =
+ HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv4.dst_ip);
+
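+	/* Fill the VXLAN VNI from the tunnel id. */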
+ tcam_key->key_mask.vni_h =
+ HINIC3_32_UPPER_16_BITS(rule->key_mask.tunnel.tunnel_id);
+ tcam_key->key_mask.vni_l =
+ HINIC3_32_LOWER_16_BITS(rule->key_mask.tunnel.tunnel_id);
+ tcam_key->key_info.vni_h =
+ HINIC3_32_UPPER_16_BITS(rule->key_spec.tunnel.tunnel_id);
+ tcam_key->key_info.vni_l =
+ HINIC3_32_LOWER_16_BITS(rule->key_spec.tunnel.tunnel_id);
+
+ tcam_key->key_mask.tunnel_type = HINIC3_UINT4_MAX;
+ tcam_key->key_info.tunnel_type = HINIC3_FDIR_TUNNEL_MODE_VXLAN;
+
+ tcam_key->key_mask.function_id = HINIC3_UINT15_MAX;
+ tcam_key->key_info.function_id = hinic3_global_func_id(nic_dev->hwdev) &
+ HINIC3_UINT15_MAX;
+
+ if (rule->ip_type == HINIC3_FDIR_IP_TYPE_IPV4)
+ hinic3_fdir_tcam_vxlan_ipv4_init(rule, tcam_key);
+
+ else if (rule->ip_type == HINIC3_FDIR_IP_TYPE_IPV6)
+ hinic3_fdir_tcam_vxlan_ipv6_init(rule, tcam_key);
+}
+
+static void
+hinic3_fdir_tcam_info_init(struct rte_eth_dev *dev,
+ struct hinic3_fdir_filter *rule,
+ struct hinic3_tcam_key *tcam_key,
+ struct hinic3_tcam_cfg_rule *fdir_tcam_rule)
+{
+ /* Initialize the TCAM based on the tunnel type. */
+ if (rule->tunnel_type == HINIC3_FDIR_TUNNEL_MODE_NORMAL)
+ hinic3_fdir_tcam_notunnel_init(dev, rule, tcam_key);
+ else
+ hinic3_fdir_tcam_vxlan_init(dev, rule, tcam_key);
+
+ /* Set the queue index. */
+ fdir_tcam_rule->data.qid = rule->rq_index;
+ /* Calculate key of TCAM. */
+ tcam_key_calculate(tcam_key, fdir_tcam_rule);
+}
+
+/**
+ * Find a filter in the given ethertype filter list.
+ *
+ * @param[in] ethertype_list
+ * Pointer to the ethertype filter list.
+ * @param[in] type
+ * The ether type to find.
+ * @return
+ * The matching ether type if a filter is found, otherwise
+ * RTE_ETH_FILTER_NONE.
+ */
+static inline uint16_t
+hinic3_ethertype_filter_lookup(struct hinic3_ethertype_filter_list *ethertype_list,
+ uint16_t type)
+{
+ struct rte_flow *it;
+ struct hinic3_filter_t *filter_rules;
+
+ TAILQ_FOREACH(it, ethertype_list, node) {
+ filter_rules = it->rule;
+ if (type == filter_rules->ethertype_filter.ether_type)
+ return filter_rules->ethertype_filter.ether_type;
+ }
+
+ return RTE_ETH_FILTER_NONE;
+}
+
+/**
+ * Find the filter that matches the given key in the TCAM filter list.
+ *
+ * @param[in] filter_list
+ * Point to the tcam filter list.
+ * @param[in] key
+ * The tcam key to find.
+ * @return
+ * If a matching filter is found, the filter is returned, otherwise NULL.
+ */
+static inline struct hinic3_tcam_filter *
+hinic3_tcam_filter_lookup(struct hinic3_tcam_filter_list *filter_list,
+ struct hinic3_tcam_key *key)
+{
+ struct hinic3_tcam_filter *it;
+
+ TAILQ_FOREACH(it, filter_list, entries) {
+ if (memcmp(key, &it->tcam_key,
+ sizeof(struct hinic3_tcam_key)) == 0) {
+ return it;
+ }
+ }
+
+ return NULL;
+}
+
+/**
+ * Allocate memory for dynamic blocks and then add them to the queue.
+ *
+ * @param[in] tcam_info
+ * Point to TCAM information.
+ * @param[in] dynamic_block_id
+ * Indicate the ID of a dynamic block.
+ * @return
+ * Return the pointer to the dynamic block, or NULL if the allocation fails.
+ */
+static struct hinic3_tcam_dynamic_block *
+hinic3_alloc_dynamic_block_resource(struct hinic3_tcam_info *tcam_info,
+ u16 dynamic_block_id)
+{
+ struct hinic3_tcam_dynamic_block *dynamic_block_ptr = NULL;
+
+ dynamic_block_ptr =
+ rte_zmalloc("hinic3_tcam_dynamic_mem",
+ sizeof(struct hinic3_tcam_dynamic_block), 0);
+ if (dynamic_block_ptr == NULL) {
+ PMD_DRV_LOG(ERR,
+ "Alloc fdir filter dynamic block index %d memory "
+ "failed!",
+ dynamic_block_id);
+ return NULL;
+ }
+
+ dynamic_block_ptr->dynamic_block_id = dynamic_block_id;
+
+ /* Add new block to the end of the TCAM dynamic block list. */
+ TAILQ_INSERT_TAIL(&tcam_info->tcam_dynamic_info.tcam_dynamic_list,
+ dynamic_block_ptr, entries);
+
+ tcam_info->tcam_dynamic_info.dynamic_block_cnt++;
+
+ return dynamic_block_ptr;
+}
+
+static void
+hinic3_free_dynamic_block_resource(struct hinic3_tcam_info *tcam_info,
+ struct hinic3_tcam_dynamic_block *dynamic_block_ptr)
+{
+ if (dynamic_block_ptr == NULL)
+ return;
+
+ /* Remove the incoming dynamic block from the TCAM dynamic list. */
+ TAILQ_REMOVE(&tcam_info->tcam_dynamic_info.tcam_dynamic_list,
+ dynamic_block_ptr, entries);
+ rte_free(dynamic_block_ptr);
+
+ tcam_info->tcam_dynamic_info.dynamic_block_cnt--;
+}
+
+/**
+ * Check whether there are free positions in the dynamic TCAM filter.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] fdir_tcam_rule
+ * Indicate the filtering rule to be searched for.
+ * @param[in] tcam_info
+ * Ternary Content-Addressable Memory (TCAM) information.
+ * @param[in] tcam_filter
+ * Point to the TCAM filter.
+ * @param[out] tcam_index
+ * Filled with the index allocated within the dynamic block.
+ * @return
+ * Pointer to the TCAM dynamic block. If the search fails, NULL is returned.
+ */
+static struct hinic3_tcam_dynamic_block *
+hinic3_dynamic_lookup_tcam_filter(struct rte_eth_dev *dev,
+ struct hinic3_tcam_cfg_rule *fdir_tcam_rule,
+ struct hinic3_tcam_info *tcam_info,
+ struct hinic3_tcam_filter *tcam_filter,
+ u16 *tcam_index)
+{
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ u16 block_cnt = tcam_info->tcam_dynamic_info.dynamic_block_cnt;
+ struct hinic3_tcam_dynamic_block *dynamic_block_ptr = NULL;
+ struct hinic3_tcam_dynamic_block *tmp = NULL;
+ u16 rule_nums = nic_dev->tcam_rule_nums;
+ int block_alloc_flag = 0;
+ u16 dynamic_block_id = 0;
+ u16 index;
+ int err;
+
+ /*
+ * Check whether the number of filtering rules reaches the maximum
+ * capacity of dynamic TCAM blocks.
+ */
+ if (rule_nums >= block_cnt * HINIC3_TCAM_DYNAMIC_BLOCK_SIZE) {
+ if (block_cnt >= (HINIC3_TCAM_DYNAMIC_MAX_FILTERS /
+ HINIC3_TCAM_DYNAMIC_BLOCK_SIZE)) {
+ PMD_DRV_LOG(ERR,
+ "Dynamic tcam block is full, alloc failed!");
+ goto failed;
+ }
+ /*
+ * The TCAM blocks are insufficient.
+ * Apply for a new TCAM block.
+ */
+ err = hinic3_alloc_tcam_block(nic_dev->hwdev,
+ &dynamic_block_id);
+ if (err) {
+ PMD_DRV_LOG(ERR,
+ "Fdir filter dynamic tcam alloc block failed!");
+ goto failed;
+ }
+
+ block_alloc_flag = 1;
+
+		/* Allocate memory for the new dynamic block. */
+ dynamic_block_ptr =
+ hinic3_alloc_dynamic_block_resource(tcam_info,
+ dynamic_block_id);
+ if (dynamic_block_ptr == NULL) {
+ PMD_DRV_LOG(ERR, "Fdir filter dynamic alloc block "
+ "memory failed!");
+ goto block_alloc_failed;
+ }
+ }
+
+ /*
+	 * Find the first dynamic TCAM block that meets dynamic_index_cnt <
+ * HINIC3_TCAM_DYNAMIC_BLOCK_SIZE.
+ */
+ TAILQ_FOREACH(tmp, &tcam_info->tcam_dynamic_info.tcam_dynamic_list,
+ entries) {
+ if (tmp->dynamic_index_cnt < HINIC3_TCAM_DYNAMIC_BLOCK_SIZE)
+ break;
+ }
+
+ if (tmp == NULL ||
+ tmp->dynamic_index_cnt >= HINIC3_TCAM_DYNAMIC_BLOCK_SIZE) {
+ PMD_DRV_LOG(ERR,
+ "Fdir filter dynamic lookup for index failed!");
+ goto look_up_failed;
+ }
+
+	/* Find the first free position; dynamic_index[i] is 1 when in use. */
+	for (index = 0; index < HINIC3_TCAM_DYNAMIC_BLOCK_SIZE; index++) {
+		if (tmp->dynamic_index[index] == 0)
+			break;
+	}
+
+	/* No free position is left in this block. */
+	if (index == HINIC3_TCAM_DYNAMIC_BLOCK_SIZE) {
+		PMD_DRV_LOG(ERR,
+			    "tcam block 0x%x filter rules are full!",
+			    tmp->dynamic_block_id);
+ goto look_up_failed;
+ }
+
+ tcam_filter->dynamic_block_id = tmp->dynamic_block_id;
+ tcam_filter->index = index;
+ *tcam_index = index;
+
+ fdir_tcam_rule->index =
+ HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(tmp->dynamic_block_id) +
+ index;
+
+ return tmp;
+
+look_up_failed:
+ if (dynamic_block_ptr != NULL)
+ hinic3_free_dynamic_block_resource(tcam_info,
+ dynamic_block_ptr);
+
+block_alloc_failed:
+ if (block_alloc_flag == 1)
+ (void)hinic3_free_tcam_block(nic_dev->hwdev, &dynamic_block_id);
+
+failed:
+ return NULL;
+}
+
+/**
+ * Add a TCAM filter.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] tcam_key
+ * Pointer to the TCAM key.
+ * @param[in] fdir_tcam_rule
+ * Pointer to the TCAM filtering rule.
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_add_tcam_filter(struct rte_eth_dev *dev,
+ struct hinic3_tcam_key *tcam_key,
+ struct hinic3_tcam_cfg_rule *fdir_tcam_rule)
+{
+ struct hinic3_tcam_info *tcam_info =
+ HINIC3_DEV_PRIVATE_TO_TCAM_INFO(dev->data->dev_private);
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ struct hinic3_tcam_dynamic_block *dynamic_block_ptr = NULL;
+ struct hinic3_tcam_dynamic_block *tmp = NULL;
+ struct hinic3_tcam_filter *tcam_filter;
+ u16 tcam_block_index = 0;
+ u16 index = 0;
+ int err;
+
+ /* Alloc TCAM filter memory. */
+ tcam_filter = rte_zmalloc("hinic3_fdir_filter",
+ sizeof(struct hinic3_tcam_filter), 0);
+ if (tcam_filter == NULL)
+ return -ENOMEM;
+ (void)rte_memcpy(&tcam_filter->tcam_key, tcam_key,
+ sizeof(struct hinic3_tcam_key));
+ tcam_filter->queue = (u16)(fdir_tcam_rule->data.qid);
+
+	/* First rule: allocate the first TCAM block and its tracking memory. */
+ if (nic_dev->tcam_rule_nums == 0) {
+ err = hinic3_alloc_tcam_block(nic_dev->hwdev,
+ &tcam_block_index);
+ if (err) {
+ PMD_DRV_LOG(ERR,
+ "Fdir filter tcam alloc block failed!");
+ goto failed;
+ }
+
+ dynamic_block_ptr =
+ hinic3_alloc_dynamic_block_resource(tcam_info,
+ tcam_block_index);
+ if (dynamic_block_ptr == NULL) {
+ PMD_DRV_LOG(ERR, "Fdir filter alloc dynamic first "
+ "block memory failed!");
+ goto alloc_block_failed;
+ }
+ }
+
+ /*
+ * Look for an available index in the dynamic block to store the new
+ * TCAM filter.
+ */
+ tmp = hinic3_dynamic_lookup_tcam_filter(dev, fdir_tcam_rule, tcam_info,
+ tcam_filter, &index);
+ if (tmp == NULL) {
+ PMD_DRV_LOG(ERR, "Dynamic lookup tcam filter failed!");
+ goto lookup_tcam_index_failed;
+ }
+
+ /* Add a new TCAM rule to the network device. */
+ err = hinic3_add_tcam_rule(nic_dev->hwdev, fdir_tcam_rule,
+ TCAM_RULE_FDIR_TYPE);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Fdir_tcam_rule add failed!");
+ goto add_tcam_rules_failed;
+ }
+
+ /* If there are no rules, TCAM filtering is enabled. */
+ if (!(nic_dev->ethertype_rule_nums + nic_dev->tcam_rule_nums)) {
+ err = hinic3_set_fdir_tcam_rule_filter(nic_dev->hwdev, true);
+ if (err)
+ goto enable_failed;
+ }
+
+ /* Add a filter to the end of the queue. */
+ TAILQ_INSERT_TAIL(&tcam_info->tcam_list, tcam_filter, entries);
+
+ /* Update dynamic index. */
+ tmp->dynamic_index[index] = 1;
+ tmp->dynamic_index_cnt++;
+
+ nic_dev->tcam_rule_nums++;
+
+ PMD_DRV_LOG(INFO,
+ "Add fdir tcam rule, function_id: 0x%x, "
+ "tcam_block_id: %d, local_index: %d, global_index: %d, "
+ "queue: %d, "
+ "tcam_rule_nums: %d succeed",
+ hinic3_global_func_id(nic_dev->hwdev),
+ tcam_filter->dynamic_block_id, index, fdir_tcam_rule->index,
+ fdir_tcam_rule->data.qid, nic_dev->tcam_rule_nums);
+
+ return 0;
+
+enable_failed:
+ (void)hinic3_del_tcam_rule(nic_dev->hwdev, fdir_tcam_rule->index,
+ TCAM_RULE_FDIR_TYPE);
+
+add_tcam_rules_failed:
+lookup_tcam_index_failed:
+ if (nic_dev->tcam_rule_nums == 0 && dynamic_block_ptr != NULL)
+ hinic3_free_dynamic_block_resource(tcam_info,
+ dynamic_block_ptr);
+
+alloc_block_failed:
+ if (nic_dev->tcam_rule_nums == 0)
+ (void)hinic3_free_tcam_block(nic_dev->hwdev, &tcam_block_index);
+
+failed:
+ rte_free(tcam_filter);
+ return -EFAULT;
+}
+
+/**
+ * Delete a TCAM filter.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] tcam_filter
+ * The TCAM filter to delete.
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_del_dynamic_tcam_filter(struct rte_eth_dev *dev,
+ struct hinic3_tcam_filter *tcam_filter)
+{
+ struct hinic3_tcam_info *tcam_info =
+ HINIC3_DEV_PRIVATE_TO_TCAM_INFO(dev->data->dev_private);
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ u16 dynamic_block_id = tcam_filter->dynamic_block_id;
+ struct hinic3_tcam_dynamic_block *tmp = NULL;
+ u32 index = 0;
+ int err;
+
+ /* Traverse to find the block that matches the given ID. */
+ TAILQ_FOREACH(tmp, &tcam_info->tcam_dynamic_info.tcam_dynamic_list,
+ entries) {
+ if (tmp->dynamic_block_id == dynamic_block_id)
+ break;
+ }
+
+ if (tmp == NULL || tmp->dynamic_block_id != dynamic_block_id) {
+ PMD_DRV_LOG(ERR,
+ "Fdir filter del dynamic lookup for block failed!");
+ return -EINVAL;
+ }
+ /* Calculate TCAM index. */
+ index = HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(tmp->dynamic_block_id) +
+ tcam_filter->index;
+
+ /* Delete a specified rule. */
+ err = hinic3_del_tcam_rule(nic_dev->hwdev, index, TCAM_RULE_FDIR_TYPE);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Fdir tcam rule del failed!");
+ return -EFAULT;
+ }
+
+ PMD_DRV_LOG(INFO,
+ "Del fdir_tcam_dynamic_rule function_id: 0x%x, "
+ "tcam_block_id: %d, local_index: %d, global_index: %d, "
+ "local_rules_nums: %d, global_rule_nums: %d succeed",
+ hinic3_global_func_id(nic_dev->hwdev), dynamic_block_id,
+ tcam_filter->index, index, tmp->dynamic_index_cnt - 1,
+ nic_dev->tcam_rule_nums - 1);
+
+ tmp->dynamic_index[tcam_filter->index] = 0;
+ tmp->dynamic_index_cnt--;
+ nic_dev->tcam_rule_nums--;
+ if (tmp->dynamic_index_cnt == 0) {
+ (void)hinic3_free_tcam_block(nic_dev->hwdev, &dynamic_block_id);
+
+ hinic3_free_dynamic_block_resource(tcam_info, tmp);
+ }
+
+ /* If the number of rules is 0, the TCAM filter is disabled. */
+ if (!(nic_dev->ethertype_rule_nums + nic_dev->tcam_rule_nums))
+ (void)hinic3_set_fdir_tcam_rule_filter(nic_dev->hwdev, false);
+
+ return 0;
+}
+
+static int
+hinic3_del_tcam_filter(struct rte_eth_dev *dev,
+ struct hinic3_tcam_filter *tcam_filter)
+{
+ struct hinic3_tcam_info *tcam_info =
+ HINIC3_DEV_PRIVATE_TO_TCAM_INFO(dev->data->dev_private);
+ int err;
+
+ err = hinic3_del_dynamic_tcam_filter(dev, tcam_filter);
+ if (err < 0) {
+ PMD_DRV_LOG(ERR, "Del dynamic tcam filter failed!");
+ return err;
+ }
+
+ /* Remove the filter from the TCAM list. */
+ TAILQ_REMOVE(&tcam_info->tcam_list, tcam_filter, entries);
+
+ rte_free(tcam_filter);
+
+ return 0;
+}
+
+/**
+ * Add or delete an fdir filter rule. This is the core function for operating
+ * filters.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] fdir_filter
+ * Pointer to the fdir filter.
+ * @param[in] add
+ * Boolean flag indicating whether to add (true) or delete (false) the
+ * filter rule.
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int
+hinic3_flow_add_del_fdir_filter(struct rte_eth_dev *dev,
+ struct hinic3_fdir_filter *fdir_filter,
+ bool add)
+{
+ struct hinic3_tcam_info *tcam_info =
+ HINIC3_DEV_PRIVATE_TO_TCAM_INFO(dev->data->dev_private);
+ struct hinic3_tcam_filter *tcam_filter;
+ struct hinic3_tcam_cfg_rule fdir_tcam_rule;
+ struct hinic3_tcam_key tcam_key;
+ int ret;
+
+ memset(&fdir_tcam_rule, 0, sizeof(struct hinic3_tcam_cfg_rule));
+ memset((void *)&tcam_key, 0, sizeof(struct hinic3_tcam_key));
+
+ hinic3_fdir_tcam_info_init(dev, fdir_filter, &tcam_key,
+ &fdir_tcam_rule);
+ /* Search for a filter. */
+ tcam_filter =
+ hinic3_tcam_filter_lookup(&tcam_info->tcam_list, &tcam_key);
+ if (tcam_filter != NULL && add) {
+ PMD_DRV_LOG(ERR, "Filter exists.");
+ return -EEXIST;
+ }
+ if (tcam_filter == NULL && !add) {
+ PMD_DRV_LOG(ERR, "Filter doesn't exist.");
+ return -ENOENT;
+ }
+
+	/* Perform the add or delete operation as requested. */
+ if (add) {
+ ret = hinic3_add_tcam_filter(dev, &tcam_key, &fdir_tcam_rule);
+ if (ret)
+ goto cfg_tcam_filter_err;
+
+ fdir_filter->tcam_index = (int)(fdir_tcam_rule.index);
+ } else {
+ PMD_DRV_LOG(INFO, "begin to del tcam filter");
+ ret = hinic3_del_tcam_filter(dev, tcam_filter);
+ if (ret)
+ goto cfg_tcam_filter_err;
+ }
+
+ return 0;
+
+cfg_tcam_filter_err:
+	return ret;
+}
+
+/**
+ * Enable or disable the TCAM filter for the receive queue.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] queue_id
+ * Index of the receive queue.
+ * @param[in] able
+ * Flag to enable or disable the filter.
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int
+hinic3_enable_rxq_fdir_filter(struct rte_eth_dev *dev, u32 queue_id, u32 able)
+{
+ struct hinic3_tcam_info *tcam_info =
+ HINIC3_DEV_PRIVATE_TO_TCAM_INFO(dev->data->dev_private);
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ struct hinic3_tcam_filter *it;
+ struct hinic3_tcam_cfg_rule fdir_tcam_rule;
+	int ret = 0;
+ u32 queue_res;
+ uint16_t index;
+
+ memset(&fdir_tcam_rule, 0, sizeof(struct hinic3_tcam_cfg_rule));
+
+ if (able) {
+ TAILQ_FOREACH(it, &tcam_info->tcam_list, entries) {
+ if (queue_id == it->queue) {
+ index = (u16)(HINIC3_PKT_TCAM_DYNAMIC_INDEX_START
+ (it->dynamic_block_id) + it->index);
+
+				/*
+				 * When the rxq is started, delete the rule
+				 * that still carries the invalid queue id
+				 * from the tcam.
+				 */
+ ret = hinic3_del_tcam_rule(nic_dev->hwdev,
+ index,
+ TCAM_RULE_FDIR_TYPE);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "del invalid tcam "
+ "rule failed!");
+ return -EFAULT;
+ }
+
+ fdir_tcam_rule.index = index;
+ fdir_tcam_rule.data.qid = queue_id;
+ tcam_key_calculate(&it->tcam_key,
+ &fdir_tcam_rule);
+
+ /* To enable a rule, add a rule. */
+ ret = hinic3_add_tcam_rule(nic_dev->hwdev,
+ &fdir_tcam_rule,
+ TCAM_RULE_FDIR_TYPE);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "add correct tcam "
+ "rule failed!");
+ return -EFAULT;
+ }
+ }
+ }
+ } else {
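+		/*
+		 * Keep the rule in the tcam but tag its queue id as
+		 * invalid so it can be restored when the queue restarts.
+		 */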
+ queue_res = HINIC3_INVALID_QID_BASE | queue_id;
+
+ TAILQ_FOREACH(it, &tcam_info->tcam_list, entries) {
+ if (queue_id == it->queue) {
+ index = (u16)(HINIC3_PKT_TCAM_DYNAMIC_INDEX_START
+ (it->dynamic_block_id) + it->index);
+
+				/*
+				 * When the rxq is stopped, delete the fdir
+				 * rule from the tcam and re-add it with an
+				 * invalid queue id.
+				 */
+ ret = hinic3_del_tcam_rule(nic_dev->hwdev,
+ index,
+ TCAM_RULE_FDIR_TYPE);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "del correct tcam "
+ "rule failed!");
+ return -EFAULT;
+ }
+
+ fdir_tcam_rule.index = index;
+ fdir_tcam_rule.data.qid = queue_res;
+ tcam_key_calculate(&it->tcam_key,
+ &fdir_tcam_rule);
+
+				/* Re-add the rule with the invalid queue id. */
+ ret = hinic3_add_tcam_rule(nic_dev->hwdev,
+ &fdir_tcam_rule,
+ TCAM_RULE_FDIR_TYPE);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "add invalid tcam "
+ "rule failed!");
+ return -EFAULT;
+ }
+ }
+ }
+ }
+
+ return ret;
+}
+
+void
+hinic3_free_fdir_filter(struct rte_eth_dev *dev)
+{
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+
+ (void)hinic3_set_fdir_tcam_rule_filter(nic_dev->hwdev, false);
+
+ (void)hinic3_flush_tcam_rule(nic_dev->hwdev);
+}
+
+static int
+hinic3_flow_set_arp_filter(struct rte_eth_dev *dev,
+ struct rte_eth_ethertype_filter *ethertype_filter,
+ bool add)
+{
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ int ret;
+
+ /* Setting the ARP Filter. */
+ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev,
+ HINIC3_PKT_TYPE_ARP,
+ ethertype_filter->queue, add);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "%s fdir ethertype rule failed, err: %d",
+ add ? "Add" : "Del", ret);
+ return ret;
+ }
+
+ /* Setting the ARP Request Filter. */
+ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev,
+ HINIC3_PKT_TYPE_ARP_REQ,
+ ethertype_filter->queue, add);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "%s arp request rule failed, err: %d",
+ add ? "Add" : "Del", ret);
+ goto set_arp_req_failed;
+ }
+
+ /* Setting the ARP Response Filter. */
+ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev,
+ HINIC3_PKT_TYPE_ARP_REP,
+ ethertype_filter->queue, add);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "%s arp response rule failed, err: %d",
+ add ? "Add" : "Del", ret);
+ goto set_arp_rep_failed;
+ }
+
+ return 0;
+
+set_arp_rep_failed:
+ (void)hinic3_set_fdir_ethertype_filter(nic_dev->hwdev,
+ HINIC3_PKT_TYPE_ARP_REQ,
+ ethertype_filter->queue, !add);
+
+set_arp_req_failed:
+ (void)hinic3_set_fdir_ethertype_filter(nic_dev->hwdev,
+ HINIC3_PKT_TYPE_ARP,
+ ethertype_filter->queue, !add);
+
+ return ret;
+}
+
+static int
+hinic3_flow_set_slow_filter(struct rte_eth_dev *dev,
+ struct rte_eth_ethertype_filter *ethertype_filter,
+ bool add)
+{
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ int ret;
+
+ /* Setting the LACP Filter. */
+ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev,
+ HINIC3_PKT_TYPE_LACP,
+ ethertype_filter->queue, add);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "%s lacp fdir rule failed, err: %d",
+ add ? "Add" : "Del", ret);
+ return ret;
+ }
+
+ /* Setting the OAM Filter. */
+ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev,
+ HINIC3_PKT_TYPE_OAM,
+ ethertype_filter->queue, add);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "%s oam rule failed, err: %d",
+ add ? "Add" : "Del", ret);
+		goto set_slow_oam_failed;
+ }
+
+ return 0;
+
+set_slow_oam_failed:
+ (void)hinic3_set_fdir_ethertype_filter(nic_dev->hwdev,
+ HINIC3_PKT_TYPE_LACP,
+ ethertype_filter->queue, !add);
+
+ return ret;
+}
+
+static int
+hinic3_flow_set_lldp_filter(struct rte_eth_dev *dev,
+ struct rte_eth_ethertype_filter *ethertype_filter,
+ bool add)
+{
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ int ret;
+
+ /* Setting the LLDP Filter. */
+ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev,
+ HINIC3_PKT_TYPE_LLDP,
+ ethertype_filter->queue, add);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "%s lldp fdir rule failed, err: %d",
+ add ? "Add" : "Del", ret);
+ return ret;
+ }
+
+ /* Setting the CDCP Filter. */
+ ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev,
+ HINIC3_PKT_TYPE_CDCP,
+ ethertype_filter->queue, add);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "%s cdcp fdir rule failed, err: %d",
+ add ? "Add" : "Del", ret);
+		goto set_lldp_cdcp_failed;
+ }
+
+ return 0;
+
+set_lldp_cdcp_failed:
+ (void)hinic3_set_fdir_ethertype_filter(nic_dev->hwdev,
+ HINIC3_PKT_TYPE_LLDP,
+ ethertype_filter->queue, !add);
+
+ return ret;
+}
+
+static int
+hinic3_flow_add_del_ethertype_filter_rule(struct rte_eth_dev *dev,
+ struct rte_eth_ethertype_filter *ethertype_filter,
+ bool add)
+{
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ struct hinic3_ethertype_filter_list *ethertype_list =
+ &nic_dev->filter_ethertype_list;
+
+ /* Check whether the transferred rule exists. */
+ if (hinic3_ethertype_filter_lookup(ethertype_list,
+ ethertype_filter->ether_type)) {
+ if (add) {
+ PMD_DRV_LOG(ERR,
+				    "The rule already exists and can not be added");
+ return -EPERM;
+ }
+ } else {
+ if (!add) {
+ PMD_DRV_LOG(ERR,
+				    "The rule does not exist and can not be deleted");
+ return -EPERM;
+ }
+ }
+ /* Create a filter based on the protocol type. */
+ switch (ethertype_filter->ether_type) {
+ case RTE_ETHER_TYPE_ARP:
+ return hinic3_flow_set_arp_filter(dev, ethertype_filter, add);
+ case RTE_ETHER_TYPE_RARP:
+ return hinic3_set_fdir_ethertype_filter(nic_dev->hwdev,
+ HINIC3_PKT_TYPE_RARP, ethertype_filter->queue, add);
+
+ case RTE_ETHER_TYPE_SLOW:
+ return hinic3_flow_set_slow_filter(dev, ethertype_filter, add);
+
+ case RTE_ETHER_TYPE_LLDP:
+ return hinic3_flow_set_lldp_filter(dev, ethertype_filter, add);
+
+ case RTE_ETHER_TYPE_CNM:
+ return hinic3_set_fdir_ethertype_filter(nic_dev->hwdev,
+ HINIC3_PKT_TYPE_CNM, ethertype_filter->queue, add);
+
+ case RTE_ETHER_TYPE_ECP:
+ return hinic3_set_fdir_ethertype_filter(nic_dev->hwdev,
+ HINIC3_PKT_TYPE_ECP, ethertype_filter->queue, add);
+
+ default:
+ PMD_DRV_LOG(ERR, "Unknown ethertype %d queue_id %d",
+ ethertype_filter->ether_type,
+ ethertype_filter->queue);
+ return -EPERM;
+ }
+}
+
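+/*
+ * Number of hardware filter entries consumed by an ethertype rule;
+ * e.g. an ARP rule takes three entries (ARP, ARP request, ARP reply)
+ * and a SLOW rule takes two (LACP and OAM).
+ */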
+static int
+hinic3_flow_ethertype_rule_nums(struct rte_eth_ethertype_filter *ethertype_filter)
+{
+ switch (ethertype_filter->ether_type) {
+ case RTE_ETHER_TYPE_ARP:
+ return HINIC3_ARP_RULE_NUM;
+ case RTE_ETHER_TYPE_RARP:
+ return HINIC3_RARP_RULE_NUM;
+ case RTE_ETHER_TYPE_SLOW:
+ return HINIC3_SLOW_RULE_NUM;
+ case RTE_ETHER_TYPE_LLDP:
+ return HINIC3_LLDP_RULE_NUM;
+ case RTE_ETHER_TYPE_CNM:
+ return HINIC3_CNM_RULE_NUM;
+ case RTE_ETHER_TYPE_ECP:
+ return HINIC3_ECP_RULE_NUM;
+
+ default:
+ PMD_DRV_LOG(ERR, "Unknown ethertype %d",
+ ethertype_filter->ether_type);
+ return 0;
+ }
+}
+
+/**
+ * Add or delete an Ethernet type filter rule.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] ethertype_filter
+ * Pointer to ethertype filter.
+ * @param[in] add
+ * This is a Boolean value (of the bool type) indicating whether the action to
+ * be performed is to add (true) or delete (false) the Ethernet type filter
+ * rule.
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int
+hinic3_flow_add_del_ethertype_filter(struct rte_eth_dev *dev,
+ struct rte_eth_ethertype_filter *ethertype_filter,
+ bool add)
+{
+ /* Get dev private info. */
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ int ret;
+ /* Add or remove an Ethernet type filter rule. */
+ ret = hinic3_flow_add_del_ethertype_filter_rule(dev, ethertype_filter,
+ add);
+
+ if (ret) {
+ PMD_DRV_LOG(ERR, "%s fdir ethertype rule failed, err: %d",
+ add ? "Add" : "Del", ret);
+ return ret;
+ }
+ /*
+ * If a rule is added and the rule is the first rule, rule filtering is
+ * enabled. If a rule is deleted and the rule is the last one, rule
+ * filtering is disabled.
+ */
+ if (add) {
+ if (nic_dev->ethertype_rule_nums == 0) {
+ ret = hinic3_set_fdir_tcam_rule_filter(nic_dev->hwdev,
+ true);
+ if (ret) {
+ PMD_DRV_LOG(ERR,
+ "enable fdir rule failed, err: %d",
+ ret);
+ goto enable_fdir_failed;
+ }
+ }
+		nic_dev->ethertype_rule_nums +=
+			hinic3_flow_ethertype_rule_nums(ethertype_filter);
+ } else {
+		nic_dev->ethertype_rule_nums -=
+			hinic3_flow_ethertype_rule_nums(ethertype_filter);
+
+ if (!(nic_dev->ethertype_rule_nums + nic_dev->tcam_rule_nums)) {
+ ret = hinic3_set_fdir_tcam_rule_filter(nic_dev->hwdev,
+ false);
+ if (ret) {
+ PMD_DRV_LOG(ERR,
+ "disable fdir rule failed, err: %d",
+ ret);
+ }
+ }
+ }
+
+ return 0;
+
+enable_fdir_failed:
+ (void)hinic3_flow_add_del_ethertype_filter_rule(dev, ethertype_filter,
+ !add);
+ return ret;
+}
diff --git a/drivers/net/hinic3/hinic3_fdir.h b/drivers/net/hinic3/hinic3_fdir.h
new file mode 100644
index 0000000000..fbb2461a44
--- /dev/null
+++ b/drivers/net/hinic3/hinic3_fdir.h
@@ -0,0 +1,398 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Huawei Technologies Co., Ltd
+ */
+
+#ifndef _HINIC3_FDIR_H_
+#define _HINIC3_FDIR_H_
+
+#define HINIC3_FLOW_MAX_PATTERN_NUM 16
+
+#define HINIC3_TCAM_DYNAMIC_BLOCK_SIZE 16
+
+#define HINIC3_TCAM_DYNAMIC_MAX_FILTERS 1024
+
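+/* Map a dynamic block index to its first global TCAM entry index. */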
+#define HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(block_index) \
+ (HINIC3_TCAM_DYNAMIC_BLOCK_SIZE * (block_index))
+
+/* Indicate a traffic filtering rule. */
+struct rte_flow {
+ TAILQ_ENTRY(rte_flow) node;
+ enum rte_filter_type filter_type;
+ void *rule;
+};
+
+struct hinic3_fdir_rule_key {
+ struct rte_eth_ipv4_flow ipv4;
+ struct rte_eth_ipv6_flow ipv6;
+ struct rte_eth_ipv4_flow inner_ipv4;
+ struct rte_eth_ipv6_flow inner_ipv6;
+ struct rte_eth_tunnel_flow tunnel;
+ uint16_t src_port;
+ uint16_t dst_port;
+ uint8_t proto;
+};
+
+struct hinic3_fdir_filter {
+ int tcam_index;
+ uint8_t ip_type; /**< Inner ip type. */
+ uint8_t outer_ip_type;
+ uint8_t tunnel_type;
+ struct hinic3_fdir_rule_key key_mask;
+ struct hinic3_fdir_rule_key key_spec;
+ uint32_t rq_index; /**< Queue assigned when matched. */
+};
+
+/* This structure is used to describe a basic filter type. */
+struct hinic3_filter_t {
+ u16 filter_rule_nums;
+ enum rte_filter_type filter_type;
+ struct rte_eth_ethertype_filter ethertype_filter;
+ struct hinic3_fdir_filter fdir_filter;
+};
+
+enum hinic3_fdir_tunnel_mode {
+ HINIC3_FDIR_TUNNEL_MODE_NORMAL = 0,
+ HINIC3_FDIR_TUNNEL_MODE_VXLAN = 1,
+};
+
+enum hinic3_fdir_ip_type {
+ HINIC3_FDIR_IP_TYPE_IPV4 = 0,
+ HINIC3_FDIR_IP_TYPE_IPV6 = 1,
+ HINIC3_FDIR_IP_TYPE_ANY = 2,
+};
+
+/* Describe the key structure of the TCAM. */
+struct hinic3_tcam_key_mem {
+#if (RTE_BYTE_ORDER == RTE_BIG_ENDIAN)
+ u32 rsvd0 : 16;
+ u32 ip_proto : 8;
+ u32 tunnel_type : 4;
+ u32 rsvd1 : 4;
+
+ u32 function_id : 15;
+ u32 ip_type : 1;
+
+ u32 sipv4_h : 16;
+ u32 sipv4_l : 16;
+
+ u32 dipv4_h : 16;
+ u32 dipv4_l : 16;
+ u32 rsvd2 : 16;
+
+ u32 rsvd3;
+
+ u32 rsvd4 : 16;
+ u32 dport : 16;
+
+ u32 sport : 16;
+ u32 rsvd5 : 16;
+
+ u32 rsvd6 : 16;
+ u32 outer_sipv4_h : 16;
+ u32 outer_sipv4_l : 16;
+
+ u32 outer_dipv4_h : 16;
+ u32 outer_dipv4_l : 16;
+ u32 vni_h : 16;
+
+ u32 vni_l : 16;
+ u32 rsvd7 : 16;
+#else
+ u32 rsvd1 : 4;
+ u32 tunnel_type : 4;
+ u32 ip_proto : 8;
+ u32 rsvd0 : 16;
+
+ u32 sipv4_h : 16;
+ u32 ip_type : 1;
+ u32 function_id : 15;
+
+ u32 dipv4_h : 16;
+ u32 sipv4_l : 16;
+
+ u32 rsvd2 : 16;
+ u32 dipv4_l : 16;
+
+ u32 rsvd3;
+
+ u32 dport : 16;
+ u32 rsvd4 : 16;
+
+ u32 rsvd5 : 16;
+ u32 sport : 16;
+
+ u32 outer_sipv4_h : 16;
+ u32 rsvd6 : 16;
+
+ u32 outer_dipv4_h : 16;
+ u32 outer_sipv4_l : 16;
+
+ u32 vni_h : 16;
+ u32 outer_dipv4_l : 16;
+
+ u32 rsvd7 : 16;
+ u32 vni_l : 16;
+#endif
+};
+
+/*
+ * Define the IPv6-related TCAM key data structure in common
+ * scenarios or IPv6 tunnel scenarios.
+ */
+struct hinic3_tcam_key_ipv6_mem {
+#if (RTE_BYTE_ORDER == RTE_BIG_ENDIAN)
+ u32 rsvd0 : 16;
+ /* Indicates the normal IPv6 nextHdr or inner IPv4/IPv6 next proto. */
+ u32 ip_proto : 8;
+ u32 tunnel_type : 4;
+ u32 outer_ip_type : 1;
+ u32 rsvd1 : 3;
+
+ u32 function_id : 15;
+ u32 ip_type : 1;
+ u32 sipv6_key0 : 16;
+
+ u32 sipv6_key1 : 16;
+ u32 sipv6_key2 : 16;
+
+ u32 sipv6_key3 : 16;
+ u32 sipv6_key4 : 16;
+
+ u32 sipv6_key5 : 16;
+ u32 sipv6_key6 : 16;
+
+ u32 sipv6_key7 : 16;
+ u32 dport : 16;
+
+ u32 sport : 16;
+ u32 dipv6_key0 : 16;
+
+ u32 dipv6_key1 : 16;
+ u32 dipv6_key2 : 16;
+
+ u32 dipv6_key3 : 16;
+ u32 dipv6_key4 : 16;
+
+ u32 dipv6_key5 : 16;
+ u32 dipv6_key6 : 16;
+
+ u32 dipv6_key7 : 16;
+ u32 rsvd2 : 16;
+#else
+ u32 rsvd1 : 3;
+ u32 outer_ip_type : 1;
+ u32 tunnel_type : 4;
+ u32 ip_proto : 8;
+ u32 rsvd0 : 16;
+
+ u32 sipv6_key0 : 16;
+ u32 ip_type : 1;
+ u32 function_id : 15;
+
+ u32 sipv6_key2 : 16;
+ u32 sipv6_key1 : 16;
+
+ u32 sipv6_key4 : 16;
+ u32 sipv6_key3 : 16;
+
+ u32 sipv6_key6 : 16;
+ u32 sipv6_key5 : 16;
+
+ u32 dport : 16;
+ u32 sipv6_key7 : 16;
+
+ u32 dipv6_key0 : 16;
+ u32 sport : 16;
+
+ u32 dipv6_key2 : 16;
+ u32 dipv6_key1 : 16;
+
+ u32 dipv6_key4 : 16;
+ u32 dipv6_key3 : 16;
+
+ u32 dipv6_key6 : 16;
+ u32 dipv6_key5 : 16;
+
+ u32 rsvd2 : 16;
+ u32 dipv6_key7 : 16;
+#endif
+};
+
+/*
+ * Define the tcam key value data structure related to IPv6 in
+ * the VXLAN scenario.
+ */
+struct hinic3_tcam_key_vxlan_ipv6_mem {
+#if (RTE_BYTE_ORDER == RTE_BIG_ENDIAN)
+ u32 rsvd0 : 16;
+ u32 ip_proto : 8;
+ u32 tunnel_type : 4;
+ u32 rsvd1 : 4;
+
+ u32 function_id : 15;
+ u32 ip_type : 1;
+ u32 dipv6_key0 : 16;
+
+ u32 dipv6_key1 : 16;
+ u32 dipv6_key2 : 16;
+
+ u32 dipv6_key3 : 16;
+ u32 dipv6_key4 : 16;
+
+ u32 dipv6_key5 : 16;
+ u32 dipv6_key6 : 16;
+
+ u32 dipv6_key7 : 16;
+ u32 dport : 16;
+
+ u32 sport : 16;
+ u32 rsvd2 : 16;
+
+ u32 rsvd3 : 16;
+ u32 outer_sipv4_h : 16;
+
+ u32 outer_sipv4_l : 16;
+ u32 outer_dipv4_h : 16;
+
+ u32 outer_dipv4_l : 16;
+ u32 vni_h : 16;
+
+ u32 vni_l : 16;
+ u32 rsvd4 : 16;
+#else
+ u32 rsvd1 : 4;
+ u32 tunnel_type : 4;
+ u32 ip_proto : 8;
+ u32 rsvd0 : 16;
+
+ u32 dipv6_key0 : 16;
+ u32 ip_type : 1;
+ u32 function_id : 15;
+
+ u32 dipv6_key2 : 16;
+ u32 dipv6_key1 : 16;
+
+ u32 dipv6_key4 : 16;
+ u32 dipv6_key3 : 16;
+
+ u32 dipv6_key6 : 16;
+ u32 dipv6_key5 : 16;
+
+ u32 dport : 16;
+ u32 dipv6_key7 : 16;
+
+ u32 rsvd2 : 16;
+ u32 sport : 16;
+
+ u32 outer_sipv4_h : 16;
+ u32 rsvd3 : 16;
+
+ u32 outer_dipv4_h : 16;
+ u32 outer_sipv4_l : 16;
+
+ u32 vni_h : 16;
+ u32 outer_dipv4_l : 16;
+
+ u32 rsvd4 : 16;
+ u32 vni_l : 16;
+#endif
+};
+
+/*
+ * TCAM key structure. The two unions indicate the key and mask respectively.
+ * The TCAM key is consistent with the TCAM entry.
+ */
+struct hinic3_tcam_key {
+ union {
+ struct hinic3_tcam_key_mem key_info;
+ struct hinic3_tcam_key_ipv6_mem key_info_ipv6;
+ struct hinic3_tcam_key_vxlan_ipv6_mem key_info_vxlan_ipv6;
+ };
+ union {
+ struct hinic3_tcam_key_mem key_mask;
+ struct hinic3_tcam_key_ipv6_mem key_mask_ipv6;
+ struct hinic3_tcam_key_vxlan_ipv6_mem key_mask_vxlan_ipv6;
+ };
+};
+
+/* Structure indicates the TCAM filter. */
+struct hinic3_tcam_filter {
+ TAILQ_ENTRY(hinic3_tcam_filter)
+ entries; /**< Filter entry, used for linked list operations. */
+ uint16_t dynamic_block_id; /**< Dynamic block ID. */
+ uint16_t index; /**< TCAM index. */
+ struct hinic3_tcam_key tcam_key; /**< Indicate TCAM key. */
+ uint16_t queue; /**< Allocated RX queue. */
+};
+
+/* Define a linked list header for storing hinic3_tcam_filter data. */
+TAILQ_HEAD(hinic3_tcam_filter_list, hinic3_tcam_filter);
+
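+/* Dynamic TCAM block; dynamic_index[] marks which entries are in use. */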
+struct hinic3_tcam_dynamic_block {
+ TAILQ_ENTRY(hinic3_tcam_dynamic_block) entries;
+ u16 dynamic_block_id;
+ u16 dynamic_index_cnt;
+ u8 dynamic_index[HINIC3_TCAM_DYNAMIC_BLOCK_SIZE];
+};
+
+/* Define a linked list header for storing hinic3_tcam_dynamic_block data. */
+TAILQ_HEAD(hinic3_tcam_dynamic_filter_list, hinic3_tcam_dynamic_block);
+
+/* Indicate TCAM dynamic block info. */
+struct hinic3_tcam_dynamic_block_info {
+ struct hinic3_tcam_dynamic_filter_list tcam_dynamic_list;
+ u16 dynamic_block_cnt;
+};
+
+/* Structure is used to store TCAM information. */
+struct hinic3_tcam_info {
+ struct hinic3_tcam_filter_list tcam_list;
+ struct hinic3_tcam_dynamic_block_info tcam_dynamic_info;
+};
+
+/* Obtain the upper and lower 16 bits. */
+#define HINIC3_32_UPPER_16_BITS(n) (((n) >> 16) & 0xffff)
+#define HINIC3_32_LOWER_16_BITS(n) ((n) & 0xffff)
+
+/* Number of protocol rules */
+#define HINIC3_ARP_RULE_NUM 3
+#define HINIC3_RARP_RULE_NUM 1
+#define HINIC3_SLOW_RULE_NUM 2
+#define HINIC3_LLDP_RULE_NUM 2
+#define HINIC3_CNM_RULE_NUM 1
+#define HINIC3_ECP_RULE_NUM 2
+
+/* Define Ethernet type. */
+#define RTE_ETHER_TYPE_CNM 0x22e7
+#define RTE_ETHER_TYPE_ECP 0x8940
+
+/* Protocol type of the data packet. */
+enum hinic3_ether_type {
+ HINIC3_PKT_TYPE_ARP = 1,
+ HINIC3_PKT_TYPE_ARP_REQ,
+ HINIC3_PKT_TYPE_ARP_REP,
+ HINIC3_PKT_TYPE_RARP,
+ HINIC3_PKT_TYPE_LACP,
+ HINIC3_PKT_TYPE_LLDP,
+ HINIC3_PKT_TYPE_OAM,
+ HINIC3_PKT_TYPE_CDCP,
+ HINIC3_PKT_TYPE_CNM,
+ HINIC3_PKT_TYPE_ECP = 10,
+
+ HINIC3_PKT_UNKNOWN = 31,
+};
+
+int hinic3_flow_add_del_fdir_filter(struct rte_eth_dev *dev,
+ struct hinic3_fdir_filter *fdir_filter,
+ bool add);
+int hinic3_flow_add_del_ethertype_filter(struct rte_eth_dev *dev,
+ struct rte_eth_ethertype_filter *ethertype_filter,
+ bool add);
+
+void hinic3_free_fdir_filter(struct rte_eth_dev *dev);
+int hinic3_enable_rxq_fdir_filter(struct rte_eth_dev *dev, u32 queue_id,
+ u32 able);
+int hinic3_flow_parse_attr(const struct rte_flow_attr *attr,
+ struct rte_flow_error *error);
+
+#endif /* _HINIC3_FDIR_H_ */
diff --git a/drivers/net/hinic3/hinic3_flow.c b/drivers/net/hinic3/hinic3_flow.c
new file mode 100644
index 0000000000..b310848530
--- /dev/null
+++ b/drivers/net/hinic3/hinic3_flow.c
@@ -0,0 +1,1700 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Huawei Technologies Co., Ltd
+ */
+
+#include <errno.h>
+#include <stdio.h>
+#include <stdint.h>
+#include <string.h>
+#include <unistd.h>
+
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_flow.h>
+#include <rte_flow_driver.h>
+#include <rte_malloc.h>
+
+#include "base/hinic3_compat.h"
+#include "base/hinic3_hwdev.h"
+#include "base/hinic3_nic_cfg.h"
+#include "hinic3_ethdev.h"
+#include "hinic3_fdir.h"
+#include "hinic3_flow.h"
+
+#define HINIC3_UINT8_MAX 0xff
+
+/* Indicate the type of the IPv4 ICMP matching pattern. */
+static enum rte_flow_item_type pattern_ipv4_icmp[] = {
+ HINIC3_FLOW_ITEM_TYPE_ETH,
+ HINIC3_FLOW_ITEM_TYPE_IPV4,
+ HINIC3_FLOW_ITEM_TYPE_ICMP,
+ HINIC3_FLOW_ITEM_TYPE_END,
+};
+
+/* Indicate the type of the IPv4 any protocol matching pattern. */
+static enum rte_flow_item_type pattern_ipv4_any[] = {
+ HINIC3_FLOW_ITEM_TYPE_ETH,
+ HINIC3_FLOW_ITEM_TYPE_IPV4,
+ HINIC3_FLOW_ITEM_TYPE_ANY,
+ HINIC3_FLOW_ITEM_TYPE_END,
+};
+
+/* Indicate the type of the Ether matching pattern. */
+static enum rte_flow_item_type pattern_ethertype[] = {
+ HINIC3_FLOW_ITEM_TYPE_ETH,
+ HINIC3_FLOW_ITEM_TYPE_END,
+};
+
+/* Indicate the type of the TCP matching pattern. */
+static enum rte_flow_item_type pattern_ethertype_tcp[] = {
+ HINIC3_FLOW_ITEM_TYPE_ETH,
+ HINIC3_FLOW_ITEM_TYPE_TCP,
+ HINIC3_FLOW_ITEM_TYPE_END,
+};
+
+/* Indicate the type of the UDP matching pattern. */
+static enum rte_flow_item_type pattern_ethertype_udp[] = {
+ HINIC3_FLOW_ITEM_TYPE_ETH,
+ HINIC3_FLOW_ITEM_TYPE_UDP,
+ HINIC3_FLOW_ITEM_TYPE_END,
+};
+
+/* Indicate the type of the IPv4 vxlan matching pattern. */
+static enum rte_flow_item_type pattern_ipv4_vxlan[] = {
+ HINIC3_FLOW_ITEM_TYPE_ETH, HINIC3_FLOW_ITEM_TYPE_IPV4,
+ HINIC3_FLOW_ITEM_TYPE_UDP, HINIC3_FLOW_ITEM_TYPE_VXLAN,
+ HINIC3_FLOW_ITEM_TYPE_END,
+};
+
+/* Indicate the type of the IPv4 vxlan any protocol matching pattern. */
+static enum rte_flow_item_type pattern_ipv4_vxlan_any[] = {
+ HINIC3_FLOW_ITEM_TYPE_ETH, HINIC3_FLOW_ITEM_TYPE_IPV4,
+ HINIC3_FLOW_ITEM_TYPE_UDP, HINIC3_FLOW_ITEM_TYPE_VXLAN,
+ HINIC3_FLOW_ITEM_TYPE_ANY, HINIC3_FLOW_ITEM_TYPE_END,
+};
+
+/* Indicate the type of the IPv4 vxlan TCP matching pattern. */
+static enum rte_flow_item_type pattern_ipv4_vxlan_tcp[] = {
+ HINIC3_FLOW_ITEM_TYPE_ETH, HINIC3_FLOW_ITEM_TYPE_IPV4,
+ HINIC3_FLOW_ITEM_TYPE_UDP, HINIC3_FLOW_ITEM_TYPE_VXLAN,
+ HINIC3_FLOW_ITEM_TYPE_TCP, HINIC3_FLOW_ITEM_TYPE_END,
+};
+
+/* Indicate the type of the IPv4 vxlan UDP matching pattern. */
+static enum rte_flow_item_type pattern_ipv4_vxlan_udp[] = {
+ HINIC3_FLOW_ITEM_TYPE_ETH, HINIC3_FLOW_ITEM_TYPE_IPV4,
+ HINIC3_FLOW_ITEM_TYPE_UDP, HINIC3_FLOW_ITEM_TYPE_VXLAN,
+ HINIC3_FLOW_ITEM_TYPE_UDP, HINIC3_FLOW_ITEM_TYPE_END,
+};
+
+/* Indicate the type of the IPv4 vxlan IPv4 matching pattern. */
+static enum rte_flow_item_type pattern_ipv4_vxlan_ipv4[] = {
+ HINIC3_FLOW_ITEM_TYPE_ETH, HINIC3_FLOW_ITEM_TYPE_IPV4,
+ HINIC3_FLOW_ITEM_TYPE_UDP, HINIC3_FLOW_ITEM_TYPE_VXLAN,
+ HINIC3_FLOW_ITEM_TYPE_ETH, HINIC3_FLOW_ITEM_TYPE_IPV4,
+ HINIC3_FLOW_ITEM_TYPE_END,
+};
+
+/* Indicate the type of the IPv4 vxlan IPv4 TCP matching pattern. */
+static enum rte_flow_item_type pattern_ipv4_vxlan_ipv4_tcp[] = {
+ HINIC3_FLOW_ITEM_TYPE_ETH, HINIC3_FLOW_ITEM_TYPE_IPV4,
+ HINIC3_FLOW_ITEM_TYPE_UDP, HINIC3_FLOW_ITEM_TYPE_VXLAN,
+ HINIC3_FLOW_ITEM_TYPE_ETH, HINIC3_FLOW_ITEM_TYPE_IPV4,
+ HINIC3_FLOW_ITEM_TYPE_TCP, HINIC3_FLOW_ITEM_TYPE_END,
+};
+
+/* Indicate the type of the IPv4 vxlan IPv4 UDP matching pattern. */
+static enum rte_flow_item_type pattern_ipv4_vxlan_ipv4_udp[] = {
+ HINIC3_FLOW_ITEM_TYPE_ETH, HINIC3_FLOW_ITEM_TYPE_IPV4,
+ HINIC3_FLOW_ITEM_TYPE_UDP, HINIC3_FLOW_ITEM_TYPE_VXLAN,
+ HINIC3_FLOW_ITEM_TYPE_ETH, HINIC3_FLOW_ITEM_TYPE_IPV4,
+ HINIC3_FLOW_ITEM_TYPE_UDP, HINIC3_FLOW_ITEM_TYPE_END,
+};
+
+/* Indicate the type of the IPv4 vxlan IPv6 matching pattern. */
+static enum rte_flow_item_type pattern_ipv4_vxlan_ipv6[] = {
+ HINIC3_FLOW_ITEM_TYPE_ETH, HINIC3_FLOW_ITEM_TYPE_IPV4,
+ HINIC3_FLOW_ITEM_TYPE_UDP, HINIC3_FLOW_ITEM_TYPE_VXLAN,
+ HINIC3_FLOW_ITEM_TYPE_ETH, HINIC3_FLOW_ITEM_TYPE_IPV6,
+ HINIC3_FLOW_ITEM_TYPE_END,
+};
+
+/* Indicate the type of the IPv4 vxlan IPv6 TCP matching pattern. */
+static enum rte_flow_item_type pattern_ipv4_vxlan_ipv6_tcp[] = {
+ HINIC3_FLOW_ITEM_TYPE_ETH, HINIC3_FLOW_ITEM_TYPE_IPV4,
+ HINIC3_FLOW_ITEM_TYPE_UDP, HINIC3_FLOW_ITEM_TYPE_VXLAN,
+ HINIC3_FLOW_ITEM_TYPE_ETH, HINIC3_FLOW_ITEM_TYPE_IPV6,
+ HINIC3_FLOW_ITEM_TYPE_TCP, HINIC3_FLOW_ITEM_TYPE_END,
+};
+
+/* Indicate the type of the IPv4 vxlan IPv6 UDP matching pattern. */
+static enum rte_flow_item_type pattern_ipv4_vxlan_ipv6_udp[] = {
+ HINIC3_FLOW_ITEM_TYPE_ETH, HINIC3_FLOW_ITEM_TYPE_IPV4,
+ HINIC3_FLOW_ITEM_TYPE_UDP, HINIC3_FLOW_ITEM_TYPE_VXLAN,
+ HINIC3_FLOW_ITEM_TYPE_ETH, HINIC3_FLOW_ITEM_TYPE_IPV6,
+ HINIC3_FLOW_ITEM_TYPE_UDP, HINIC3_FLOW_ITEM_TYPE_END,
+};
+
+/* Indicate the type of the IPv4 matching pattern. */
+static enum rte_flow_item_type pattern_ipv4[] = {
+ HINIC3_FLOW_ITEM_TYPE_ETH,
+ HINIC3_FLOW_ITEM_TYPE_IPV4,
+ HINIC3_FLOW_ITEM_TYPE_END,
+};
+
+/* Indicate the type of the IPv4 UDP matching pattern. */
+static enum rte_flow_item_type pattern_ipv4_udp[] = {
+ HINIC3_FLOW_ITEM_TYPE_ETH,
+ HINIC3_FLOW_ITEM_TYPE_IPV4,
+ HINIC3_FLOW_ITEM_TYPE_UDP,
+ HINIC3_FLOW_ITEM_TYPE_END,
+};
+
+/* Indicate the type of the IPv4 TCP matching pattern. */
+static enum rte_flow_item_type pattern_ipv4_tcp[] = {
+ HINIC3_FLOW_ITEM_TYPE_ETH,
+ HINIC3_FLOW_ITEM_TYPE_IPV4,
+ HINIC3_FLOW_ITEM_TYPE_TCP,
+ HINIC3_FLOW_ITEM_TYPE_END,
+};
+
+/* Indicate the type of the IPv6 matching pattern. */
+static enum rte_flow_item_type pattern_ipv6[] = {
+ HINIC3_FLOW_ITEM_TYPE_ETH,
+ HINIC3_FLOW_ITEM_TYPE_IPV6,
+ HINIC3_FLOW_ITEM_TYPE_END,
+};
+
+/* Indicate the type of the IPv6 UDP matching pattern. */
+static enum rte_flow_item_type pattern_ipv6_udp[] = {
+ HINIC3_FLOW_ITEM_TYPE_ETH,
+ HINIC3_FLOW_ITEM_TYPE_IPV6,
+ HINIC3_FLOW_ITEM_TYPE_UDP,
+ HINIC3_FLOW_ITEM_TYPE_END,
+};
+
+/* Indicate the type of the IPv6 TCP matching pattern. */
+static enum rte_flow_item_type pattern_ipv6_tcp[] = {
+ HINIC3_FLOW_ITEM_TYPE_ETH,
+ HINIC3_FLOW_ITEM_TYPE_IPV6,
+ HINIC3_FLOW_ITEM_TYPE_TCP,
+ HINIC3_FLOW_ITEM_TYPE_END,
+};
+
+/* Indicate the type of the IPv6 VXLAN matching pattern. */
+static enum rte_flow_item_type pattern_ipv6_vxlan[] = {
+ HINIC3_FLOW_ITEM_TYPE_ETH, HINIC3_FLOW_ITEM_TYPE_IPV6,
+ HINIC3_FLOW_ITEM_TYPE_UDP, HINIC3_FLOW_ITEM_TYPE_VXLAN,
+ HINIC3_FLOW_ITEM_TYPE_END,
+};
+
+/* Indicate the type of the IPv6 VXLAN any protocol matching pattern. */
+static enum rte_flow_item_type pattern_ipv6_vxlan_any[] = {
+ HINIC3_FLOW_ITEM_TYPE_ETH, HINIC3_FLOW_ITEM_TYPE_IPV6,
+ HINIC3_FLOW_ITEM_TYPE_UDP, HINIC3_FLOW_ITEM_TYPE_VXLAN,
+ HINIC3_FLOW_ITEM_TYPE_ANY, HINIC3_FLOW_ITEM_TYPE_END,
+};
+
+/* Indicate the type of the IPv6 VXLAN TCP matching pattern. */
+static enum rte_flow_item_type pattern_ipv6_vxlan_tcp[] = {
+ HINIC3_FLOW_ITEM_TYPE_ETH, HINIC3_FLOW_ITEM_TYPE_IPV6,
+ HINIC3_FLOW_ITEM_TYPE_UDP, HINIC3_FLOW_ITEM_TYPE_VXLAN,
+ HINIC3_FLOW_ITEM_TYPE_TCP, HINIC3_FLOW_ITEM_TYPE_END,
+};
+
+/* Indicate the type of the IPv6 VXLAN UDP matching pattern. */
+static enum rte_flow_item_type pattern_ipv6_vxlan_udp[] = {
+ HINIC3_FLOW_ITEM_TYPE_ETH, HINIC3_FLOW_ITEM_TYPE_IPV6,
+ HINIC3_FLOW_ITEM_TYPE_UDP, HINIC3_FLOW_ITEM_TYPE_VXLAN,
+ HINIC3_FLOW_ITEM_TYPE_UDP, HINIC3_FLOW_ITEM_TYPE_END,
+};
+
+typedef int (*hinic3_parse_filter_t)(struct rte_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item pattern[],
+ const struct rte_flow_action actions[],
+ struct rte_flow_error *error,
+ struct hinic3_filter_t *filter);
+
+/* Indicate a valid filter pattern and its parsing function. */
+struct hinic3_valid_pattern {
+ enum rte_flow_item_type *items;
+ hinic3_parse_filter_t parse_filter;
+};
+
+static int hinic3_flow_parse_fdir_filter(struct rte_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item pattern[],
+ const struct rte_flow_action actions[],
+ struct rte_flow_error *error,
+ struct hinic3_filter_t *filter);
+
+static int hinic3_flow_parse_ethertype_filter(struct rte_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item pattern[],
+ const struct rte_flow_action actions[],
+ struct rte_flow_error *error,
+ struct hinic3_filter_t *filter);
+
+static int hinic3_flow_parse_fdir_vxlan_filter(struct rte_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item pattern[],
+ const struct rte_flow_action actions[],
+ struct rte_flow_error *error,
+ struct hinic3_filter_t *filter);
+
+/*
+ * Define a supported pattern array, including the matching patterns of
+ * various network protocols and corresponding parsing functions.
+ */
+static const struct hinic3_valid_pattern hinic3_supported_patterns[] = {
+ /* Support ethertype. */
+ {pattern_ethertype, hinic3_flow_parse_ethertype_filter},
+ /* Support ipv4 but not tunnel, and any field can be masked. */
+ {pattern_ipv4, hinic3_flow_parse_fdir_filter},
+ {pattern_ipv4_any, hinic3_flow_parse_fdir_filter},
+ /* Support ipv4 + l4 but not tunnel, and any field can be masked. */
+ {pattern_ipv4_udp, hinic3_flow_parse_fdir_filter},
+ {pattern_ipv4_tcp, hinic3_flow_parse_fdir_filter},
+ /* Support ipv4 + icmp not tunnel, and any field can be masked. */
+ {pattern_ipv4_icmp, hinic3_flow_parse_fdir_filter},
+
+ /* Support ipv4 + l4 but not tunnel, and any field can be masked. */
+ {pattern_ethertype_udp, hinic3_flow_parse_fdir_filter},
+ {pattern_ethertype_tcp, hinic3_flow_parse_fdir_filter},
+
+ /* Support ipv4 + vxlan + any, and any field can be masked. */
+ {pattern_ipv4_vxlan, hinic3_flow_parse_fdir_vxlan_filter},
+ /* Support ipv4 + vxlan + ipv4, and any field can be masked. */
+ {pattern_ipv4_vxlan_ipv4, hinic3_flow_parse_fdir_vxlan_filter},
+ /* Support ipv4 + vxlan + ipv4 + l4, and any field can be masked. */
+ {pattern_ipv4_vxlan_ipv4_tcp, hinic3_flow_parse_fdir_vxlan_filter},
+ {pattern_ipv4_vxlan_ipv4_udp, hinic3_flow_parse_fdir_vxlan_filter},
+ /* Support ipv4 + vxlan + ipv6, and any field can be masked. */
+ {pattern_ipv4_vxlan_ipv6, hinic3_flow_parse_fdir_vxlan_filter},
+ /* Support ipv4 + vxlan + ipv6 + l4, and any field can be masked. */
+ {pattern_ipv4_vxlan_ipv6_tcp, hinic3_flow_parse_fdir_vxlan_filter},
+ {pattern_ipv4_vxlan_ipv6_udp, hinic3_flow_parse_fdir_vxlan_filter},
+ /* Support ipv4 + vxlan + l4, and any field can be masked. */
+ {pattern_ipv4_vxlan_tcp, hinic3_flow_parse_fdir_vxlan_filter},
+ {pattern_ipv4_vxlan_udp, hinic3_flow_parse_fdir_vxlan_filter},
+ {pattern_ipv4_vxlan_any, hinic3_flow_parse_fdir_vxlan_filter},
+
+ /* Support ipv6 but not tunnel, and any field can be masked. */
+ {pattern_ipv6, hinic3_flow_parse_fdir_filter},
+ /* Support ipv6 + l4 but not tunnel, and any field can be masked. */
+ {pattern_ipv6_udp, hinic3_flow_parse_fdir_filter},
+ {pattern_ipv6_tcp, hinic3_flow_parse_fdir_filter},
+
+ /* Support ipv6 + vxlan + any, and any field can be masked. */
+ {pattern_ipv6_vxlan, hinic3_flow_parse_fdir_vxlan_filter},
+ {pattern_ipv6_vxlan_any, hinic3_flow_parse_fdir_vxlan_filter},
+
+ /* Support ipv6 + vxlan + l4, and any field can be masked. */
+ {pattern_ipv6_vxlan_tcp, hinic3_flow_parse_fdir_vxlan_filter},
+ {pattern_ipv6_vxlan_udp, hinic3_flow_parse_fdir_vxlan_filter},
+
+};
+
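+/* Convert an array of big-endian 32-bit words to host byte order. */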
+static inline void
+net_addr_to_host(uint32_t *dst, const uint32_t *src, size_t len)
+{
+ size_t i;
+ for (i = 0; i < len; i++)
+ dst[i] = rte_be_to_cpu_32(src[i]);
+}
+
+static bool
+hinic3_match_pattern(enum rte_flow_item_type *item_array,
+ const struct rte_flow_item *pattern)
+{
+ const struct rte_flow_item *item = pattern;
+
+	/* Skip leading void items. */
+ while (item->type == HINIC3_FLOW_ITEM_TYPE_VOID)
+ item++;
+
+	/* Advance both lists in step, skipping void items. */
+ while (((*item_array == item->type) &&
+ (*item_array != HINIC3_FLOW_ITEM_TYPE_END)) ||
+ (item->type == HINIC3_FLOW_ITEM_TYPE_VOID)) {
+ if (item->type == HINIC3_FLOW_ITEM_TYPE_VOID) {
+ item++;
+ } else {
+ item_array++;
+ item++;
+ }
+ }
+
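+	/* Both the template and the pattern must reach END together. */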
+ return (*item_array == HINIC3_FLOW_ITEM_TYPE_END &&
+ item->type == HINIC3_FLOW_ITEM_TYPE_END);
+}
+
+/**
+ * Find matching parsing filter functions.
+ *
+ * @param[in] pattern
+ * Pattern to match.
+ * @return
+ * Matched resolution filter. If no resolution filter is found, return NULL.
+ */
+static hinic3_parse_filter_t
+hinic3_find_parse_filter_func(const struct rte_flow_item *pattern)
+{
+ hinic3_parse_filter_t parse_filter = NULL;
+ uint8_t i;
+ /* Traverse all supported patterns. */
+ for (i = 0; i < RTE_DIM(hinic3_supported_patterns); i++) {
+ if (hinic3_match_pattern(hinic3_supported_patterns[i].items,
+ pattern)) {
+ parse_filter =
+ hinic3_supported_patterns[i].parse_filter;
+ break;
+ }
+ }
+
+ return parse_filter;
+}
+
+/**
+ * Parse and check the actions of a flow rule.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] actions
+ * Indicates the action to be taken on the matched traffic.
+ * @param[out] error
+ * Structure that contains error information, such as error code and error
+ * description.
+ * @param[out] filter
+ * Filter information; used to store and manipulate packet filtering rules.
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_flow_parse_action(struct rte_eth_dev *dev,
+ const struct rte_flow_action *actions,
+ struct rte_flow_error *error,
+ struct hinic3_filter_t *filter)
+{
+ const struct rte_flow_action_queue *act_q;
+ const struct rte_flow_action *act = actions;
+
+ /* Skip any leading void items. */
+ while (act->type == RTE_FLOW_ACTION_TYPE_VOID)
+ act++;
+
+ switch (act->type) {
+ case RTE_FLOW_ACTION_TYPE_QUEUE:
+ act_q = (const struct rte_flow_action_queue *)act->conf;
+ filter->fdir_filter.rq_index = act_q->index;
+ if (filter->fdir_filter.rq_index >= dev->data->nb_rx_queues) {
+ rte_flow_error_set(error, EINVAL,
+ HINIC3_FLOW_ERROR_TYPE_ACTION, act,
+ "Invalid action param.");
+ return -rte_errno;
+ }
+ break;
+ default:
+ rte_flow_error_set(error, EINVAL, HINIC3_FLOW_ERROR_TYPE_ACTION,
+ act, "Invalid action type.");
+ return -rte_errno;
+ }
+
+ return 0;
+}
+
+int
+hinic3_flow_parse_attr(const struct rte_flow_attr *attr,
+ struct rte_flow_error *error)
+{
+ /* Not supported. */
+ if (!attr->ingress || attr->egress || attr->priority || attr->group) {
+ rte_flow_error_set(error, EINVAL,
+ HINIC3_FLOW_ERROR_TYPE_UNSPECIFIED, attr,
+ "Only support ingress.");
+ return -rte_errno;
+ }
+
+ return 0;
+}
+
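+/**
+ * Parse an IPv4 item into the FDIR filter key. Only the source address,
+ * destination address and next protocol may be specified; all other
+ * header fields must be masked.
+ */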
+static int
+hinic3_flow_fdir_ipv4(const struct rte_flow_item *flow_item,
+ struct hinic3_filter_t *filter,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_item_ipv4 *spec_ipv4, *mask_ipv4;
+
+ mask_ipv4 = (const struct rte_flow_item_ipv4 *)flow_item->mask;
+ spec_ipv4 = (const struct rte_flow_item_ipv4 *)flow_item->spec;
+ if (!mask_ipv4 || !spec_ipv4) {
+ rte_flow_error_set(error, EINVAL, HINIC3_FLOW_ERROR_TYPE_ITEM,
+ flow_item,
+ "Invalid fdir filter ipv4 mask or spec");
+ return -rte_errno;
+ }
+
+ /*
+ * Only support src address, dst address and proto;
+ * others should be masked.
+ */
+ if (mask_ipv4->hdr.version_ihl || mask_ipv4->hdr.type_of_service ||
+ mask_ipv4->hdr.total_length || mask_ipv4->hdr.packet_id ||
+ mask_ipv4->hdr.fragment_offset || mask_ipv4->hdr.time_to_live ||
+ mask_ipv4->hdr.hdr_checksum) {
+ rte_flow_error_set(error, EINVAL, HINIC3_FLOW_ERROR_TYPE_ITEM,
+ flow_item,
+ "Not supported by fdir filter, ipv4 only "
+ "support src ip, dst ip, proto");
+ return -rte_errno;
+ }
+
+ /* Set the filter information. */
+ filter->fdir_filter.ip_type = HINIC3_FDIR_IP_TYPE_IPV4;
+ filter->fdir_filter.tunnel_type = HINIC3_FDIR_TUNNEL_MODE_NORMAL;
+ filter->fdir_filter.key_mask.ipv4.src_ip =
+ rte_be_to_cpu_32(mask_ipv4->hdr.src_addr);
+ filter->fdir_filter.key_spec.ipv4.src_ip =
+ rte_be_to_cpu_32(spec_ipv4->hdr.src_addr);
+ filter->fdir_filter.key_mask.ipv4.dst_ip =
+ rte_be_to_cpu_32(mask_ipv4->hdr.dst_addr);
+ filter->fdir_filter.key_spec.ipv4.dst_ip =
+ rte_be_to_cpu_32(spec_ipv4->hdr.dst_addr);
+ filter->fdir_filter.key_mask.proto = mask_ipv4->hdr.next_proto_id;
+ filter->fdir_filter.key_spec.proto = spec_ipv4->hdr.next_proto_id;
+
+ return 0;
+}
+
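+/**
+ * Parse an IPv6 item into the FDIR filter key. Only the source address,
+ * destination address and next header (proto) may be specified.
+ */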
+static int
+hinic3_flow_fdir_ipv6(const struct rte_flow_item *flow_item,
+ struct hinic3_filter_t *filter,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_item_ipv6 *spec_ipv6, *mask_ipv6;
+
+ mask_ipv6 = (const struct rte_flow_item_ipv6 *)flow_item->mask;
+ spec_ipv6 = (const struct rte_flow_item_ipv6 *)flow_item->spec;
+ if (!mask_ipv6 || !spec_ipv6) {
+ rte_flow_error_set(error, EINVAL, HINIC3_FLOW_ERROR_TYPE_ITEM,
+ flow_item,
+ "Invalid fdir filter ipv6 mask or spec");
+ return -rte_errno;
+ }
+
+ /* Only support src address, dst address and proto. */
+ if (mask_ipv6->hdr.vtc_flow || mask_ipv6->hdr.payload_len ||
+ mask_ipv6->hdr.hop_limits) {
+ rte_flow_error_set(error, EINVAL, HINIC3_FLOW_ERROR_TYPE_ITEM,
+ flow_item,
+ "Not supported by fdir filter, ipv6 only "
+ "support src ip, dst ip, proto");
+ return -rte_errno;
+ }
+
+ /* Set the filter information. */
+ filter->fdir_filter.ip_type = HINIC3_FDIR_IP_TYPE_IPV6;
+ filter->fdir_filter.tunnel_type = HINIC3_FDIR_TUNNEL_MODE_NORMAL;
+ net_addr_to_host(filter->fdir_filter.key_mask.ipv6.src_ip,
+ (const uint32_t *)mask_ipv6->hdr.src_addr.a, 4);
+ net_addr_to_host(filter->fdir_filter.key_spec.ipv6.src_ip,
+ (const uint32_t *)spec_ipv6->hdr.src_addr.a, 4);
+ net_addr_to_host(filter->fdir_filter.key_mask.ipv6.dst_ip,
+ (const uint32_t *)mask_ipv6->hdr.dst_addr.a, 4);
+ net_addr_to_host(filter->fdir_filter.key_spec.ipv6.dst_ip,
+ (const uint32_t *)spec_ipv6->hdr.dst_addr.a, 4);
+ filter->fdir_filter.key_mask.proto = mask_ipv6->hdr.proto;
+ filter->fdir_filter.key_spec.proto = spec_ipv6->hdr.proto;
+
+ return 0;
+}
+
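+/**
+ * Parse a TCP item into the FDIR filter key. The L4 protocol is always
+ * recorded; a NULL spec and mask match any TCP ports, otherwise only the
+ * source and destination ports may be specified.
+ */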
+static int
+hinic3_flow_fdir_tcp(const struct rte_flow_item *flow_item,
+ struct hinic3_filter_t *filter,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_item_tcp *spec_tcp, *mask_tcp;
+
+ mask_tcp = (const struct rte_flow_item_tcp *)flow_item->mask;
+ spec_tcp = (const struct rte_flow_item_tcp *)flow_item->spec;
+
+ filter->fdir_filter.key_mask.proto = HINIC3_UINT8_MAX;
+ filter->fdir_filter.key_spec.proto = IPPROTO_TCP;
+
+ if (!mask_tcp && !spec_tcp)
+ return 0;
+
+ if (!mask_tcp || !spec_tcp) {
+ rte_flow_error_set(error, EINVAL, HINIC3_FLOW_ERROR_TYPE_ITEM,
+ flow_item,
+ "Invalid fdir filter tcp mask or spec");
+ return -rte_errno;
+ }
+
+ /* Only support src, dst ports, others should be masked. */
+ if (mask_tcp->hdr.sent_seq || mask_tcp->hdr.recv_ack ||
+ mask_tcp->hdr.data_off || mask_tcp->hdr.rx_win ||
+ mask_tcp->hdr.tcp_flags || mask_tcp->hdr.cksum ||
+ mask_tcp->hdr.tcp_urp) {
+ rte_flow_error_set(error, EINVAL, HINIC3_FLOW_ERROR_TYPE_ITEM,
+ flow_item,
+ "Not supported by fdir filter, tcp only "
+ "support src port, dst port");
+ return -rte_errno;
+ }
+
+ /* Set the filter information. */
+ filter->fdir_filter.key_mask.src_port =
+ (u16)rte_be_to_cpu_16(mask_tcp->hdr.src_port);
+ filter->fdir_filter.key_spec.src_port =
+ (u16)rte_be_to_cpu_16(spec_tcp->hdr.src_port);
+ filter->fdir_filter.key_mask.dst_port =
+ (u16)rte_be_to_cpu_16(mask_tcp->hdr.dst_port);
+ filter->fdir_filter.key_spec.dst_port =
+ (u16)rte_be_to_cpu_16(spec_tcp->hdr.dst_port);
+
+ return 0;
+}
+
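+/**
+ * Parse a UDP item into the FDIR filter key. The L4 protocol is always
+ * recorded; a NULL spec and mask match any UDP ports.
+ */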
+static int
+hinic3_flow_fdir_udp(const struct rte_flow_item *flow_item,
+ struct hinic3_filter_t *filter,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_item_udp *spec_udp, *mask_udp;
+
+ mask_udp = (const struct rte_flow_item_udp *)flow_item->mask;
+ spec_udp = (const struct rte_flow_item_udp *)flow_item->spec;
+
+ filter->fdir_filter.key_mask.proto = HINIC3_UINT8_MAX;
+ filter->fdir_filter.key_spec.proto = IPPROTO_UDP;
+
+ if (!mask_udp && !spec_udp)
+ return 0;
+
+ if (!mask_udp || !spec_udp) {
+ rte_flow_error_set(error, EINVAL, HINIC3_FLOW_ERROR_TYPE_ITEM,
+ flow_item,
+ "Invalid fdir filter udp mask or spec");
+ return -rte_errno;
+ }
+
+ /* Set the filter information. */
+ filter->fdir_filter.key_mask.src_port =
+ (u16)rte_be_to_cpu_16(mask_udp->hdr.src_port);
+ filter->fdir_filter.key_spec.src_port =
+ (u16)rte_be_to_cpu_16(spec_udp->hdr.src_port);
+ filter->fdir_filter.key_mask.dst_port =
+ (u16)rte_be_to_cpu_16(mask_udp->hdr.dst_port);
+ filter->fdir_filter.key_spec.dst_port =
+ (u16)rte_be_to_cpu_16(spec_udp->hdr.dst_port);
+
+ return 0;
+}
+
+/**
+ * Parse the pattern of network traffic and apply the parsing result to the
+ * traffic filter.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] pattern
+ * Indicates the pattern or matching condition of a traffic rule.
+ * @param[out] error
+ * Structure that contains error information, such as error code and error
+ * description.
+ * @param[out] filter
+ * Filter information; used to store and manipulate packet filtering rules.
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_flow_parse_fdir_pattern(__rte_unused struct rte_eth_dev *dev,
+ const struct rte_flow_item *pattern,
+ struct rte_flow_error *error,
+ struct hinic3_filter_t *filter)
+{
+ const struct rte_flow_item *flow_item = pattern;
+ enum rte_flow_item_type type;
+ int err;
+
+ filter->fdir_filter.ip_type = HINIC3_FDIR_IP_TYPE_ANY;
+ /* Traverse all items until HINIC3_FLOW_ITEM_TYPE_END is reached. */
+ for (; flow_item->type != HINIC3_FLOW_ITEM_TYPE_END; flow_item++) {
+ if (flow_item->last) {
+ rte_flow_error_set(error, EINVAL,
+ HINIC3_FLOW_ERROR_TYPE_ITEM,
+ flow_item, "Not support range");
+ return -rte_errno;
+ }
+ type = flow_item->type;
+ switch (type) {
+ case HINIC3_FLOW_ITEM_TYPE_ETH:
+ if (flow_item->spec || flow_item->mask) {
+ rte_flow_error_set(error, EINVAL,
+ HINIC3_FLOW_ERROR_TYPE_ITEM,
+ flow_item,
+ "Not supported by fdir "
+ "filter, not support mac");
+ return -rte_errno;
+ }
+ break;
+
+ case HINIC3_FLOW_ITEM_TYPE_IPV4:
+ err = hinic3_flow_fdir_ipv4(flow_item, filter, error);
+ if (err != 0)
+ return -rte_errno;
+ break;
+
+ case HINIC3_FLOW_ITEM_TYPE_IPV6:
+ err = hinic3_flow_fdir_ipv6(flow_item, filter, error);
+ if (err != 0)
+ return -rte_errno;
+ break;
+
+ case HINIC3_FLOW_ITEM_TYPE_TCP:
+ err = hinic3_flow_fdir_tcp(flow_item, filter, error);
+ if (err != 0)
+ return -rte_errno;
+ break;
+
+ case HINIC3_FLOW_ITEM_TYPE_UDP:
+ err = hinic3_flow_fdir_udp(flow_item, filter, error);
+ if (err != 0)
+ return -rte_errno;
+ break;
+
+ default:
+ break;
+ }
+ }
+
+ return 0;
+}
+
+/**
+ * Parse an FDIR flow rule (pattern, actions and attributes) into a filter.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] attr
+ * Indicates the attribute of a flow rule.
+ * @param[in] pattern
+ * Indicates the pattern or matching condition of a traffic rule.
+ * @param[in] actions
+ * Indicates the action to be taken on the matched traffic.
+ * @param[out] error
+ * Structure that contains error information, such as error code and error
+ * description.
+ * @param[out] filter
+ * Filter information; used to store and manipulate packet filtering rules.
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_flow_parse_fdir_filter(struct rte_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item pattern[],
+ const struct rte_flow_action actions[],
+ struct rte_flow_error *error,
+ struct hinic3_filter_t *filter)
+{
+ int ret;
+
+ ret = hinic3_flow_parse_fdir_pattern(dev, pattern, error, filter);
+ if (ret)
+ return ret;
+
+ ret = hinic3_flow_parse_action(dev, actions, error, filter);
+ if (ret)
+ return ret;
+
+ ret = hinic3_flow_parse_attr(attr, error);
+ if (ret)
+ return ret;
+
+ filter->filter_type = RTE_ETH_FILTER_FDIR;
+
+ return 0;
+}
+
+/**
+ * Parse and check the actions of an ethertype rule.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] actions
+ * Indicates the action to be taken on the matched traffic.
+ * @param[out] error
+ * Structure that contains error information, such as error code and error
+ * description.
+ * @param[out] filter
+ * Filter information; used to store and manipulate packet filtering rules.
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_flow_parse_ethertype_action(struct rte_eth_dev *dev,
+ const struct rte_flow_action *actions,
+ struct rte_flow_error *error,
+ struct hinic3_filter_t *filter)
+{
+ const struct rte_flow_action *act = actions;
+ const struct rte_flow_action_queue *act_q;
+
+ /* Skip any leading void items. */
+ while (act->type == RTE_FLOW_ACTION_TYPE_VOID)
+ act++;
+
+ switch (act->type) {
+ case RTE_FLOW_ACTION_TYPE_QUEUE:
+ act_q = (const struct rte_flow_action_queue *)act->conf;
+ filter->ethertype_filter.queue = act_q->index;
+ if (filter->ethertype_filter.queue >= dev->data->nb_rx_queues) {
+ rte_flow_error_set(error, EINVAL,
+ HINIC3_FLOW_ERROR_TYPE_ACTION, act,
+ "Invalid action param.");
+ return -rte_errno;
+ }
+ break;
+
+ default:
+ rte_flow_error_set(error, EINVAL, HINIC3_FLOW_ERROR_TYPE_ACTION,
+ act, "Invalid action type.");
+ return -rte_errno;
+ }
+
+ return 0;
+}
+
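+/**
+ * Parse an ethertype pattern. Only the Ethernet type field may be
+ * specified, and only a small set of control-packet types is accepted.
+ */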
+static int
+hinic3_flow_parse_ethertype_pattern(__rte_unused struct rte_eth_dev *dev,
+ const struct rte_flow_item *pattern,
+ struct rte_flow_error *error,
+ struct hinic3_filter_t *filter)
+{
+ const struct rte_flow_item_eth *ether_spec, *ether_mask;
+ const struct rte_flow_item *flow_item = pattern;
+ enum rte_flow_item_type type;
+
+ /* Traverse all items until HINIC3_FLOW_ITEM_TYPE_END is reached. */
+ for (; flow_item->type != HINIC3_FLOW_ITEM_TYPE_END; flow_item++) {
+ if (flow_item->last) {
+ rte_flow_error_set(error, EINVAL,
+ HINIC3_FLOW_ERROR_TYPE_ITEM,
+ flow_item, "Not support range");
+ return -rte_errno;
+ }
+ type = flow_item->type;
+ switch (type) {
+ case HINIC3_FLOW_ITEM_TYPE_ETH:
+ /* Get the Ethernet spec and mask. */
+ ether_spec = (const struct rte_flow_item_eth *)
+ flow_item->spec;
+ ether_mask = (const struct rte_flow_item_eth *)
+ flow_item->mask;
+ if (!ether_spec || !ether_mask) {
+ rte_flow_error_set(error, EINVAL,
+ HINIC3_FLOW_ERROR_TYPE_ITEM,
+ flow_item,
+ "NULL ETH spec/mask");
+ return -rte_errno;
+ }
+
+ /*
+ * Source and destination MAC address masks must be all zeros;
+ * traffic is filtered on the Ethernet type only.
+ */
+ if (!rte_is_zero_ether_addr(ðer_mask->src) ||
+ (!rte_is_zero_ether_addr(ðer_mask->dst))) {
+ rte_flow_error_set(error, EINVAL,
+ HINIC3_FLOW_ERROR_TYPE_ITEM,
+ flow_item,
+ "Invalid ether address mask");
+ return -rte_errno;
+ }
+
+ if ((ether_mask->type & UINT16_MAX) != UINT16_MAX) {
+ rte_flow_error_set(error, EINVAL,
+ HINIC3_FLOW_ERROR_TYPE_ITEM,
+ flow_item,
+ "Invalid ethertype mask");
+ return -rte_errno;
+ }
+
+ filter->ethertype_filter.ether_type =
+ (u16)rte_be_to_cpu_16(ether_spec->type);
+
+ switch (filter->ethertype_filter.ether_type) {
+ case RTE_ETHER_TYPE_SLOW:
+ case RTE_ETHER_TYPE_ARP:
+ case RTE_ETHER_TYPE_RARP:
+ case RTE_ETHER_TYPE_LLDP:
+ break;
+
+ default:
+ rte_flow_error_set(error, EINVAL,
+ HINIC3_FLOW_ERROR_TYPE_ITEM,
+ flow_item,
+ "Unsupported ether_type in"
+ " control packet filter.");
+ return -rte_errno;
+ }
+ break;
+
+ default:
+ break;
+ }
+ }
+
+ return 0;
+}
+
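+/**
+ * Parse an ethertype flow rule: pattern, actions and attributes are
+ * checked in turn and the result is written to the filter.
+ */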
+static int
+hinic3_flow_parse_ethertype_filter(struct rte_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item pattern[],
+ const struct rte_flow_action actions[],
+ struct rte_flow_error *error,
+ struct hinic3_filter_t *filter)
+{
+ int ret;
+
+ ret = hinic3_flow_parse_ethertype_pattern(dev, pattern, error, filter);
+ if (ret)
+ return ret;
+
+ ret = hinic3_flow_parse_ethertype_action(dev, actions, error, filter);
+ if (ret)
+ return ret;
+
+ ret = hinic3_flow_parse_attr(attr, error);
+ if (ret)
+ return ret;
+
+ filter->filter_type = RTE_ETH_FILTER_ETHERTYPE;
+ return 0;
+}
+
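+/**
+ * Parse an IPv4 item of a VXLAN rule. Before the VXLAN item is seen
+ * (tunnel mode NORMAL) it describes the outer header and only src/dst
+ * addresses may be specified; after it, the inner header, where the next
+ * protocol may be specified as well.
+ */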
+static int
+hinic3_flow_fdir_tunnel_ipv4(struct rte_flow_error *error,
+ struct hinic3_filter_t *filter,
+ const struct rte_flow_item *flow_item,
+ enum hinic3_fdir_tunnel_mode tunnel_mode)
+{
+ const struct rte_flow_item_ipv4 *spec_ipv4, *mask_ipv4;
+
+ mask_ipv4 = (const struct rte_flow_item_ipv4 *)flow_item->mask;
+ spec_ipv4 = (const struct rte_flow_item_ipv4 *)flow_item->spec;
+
+ if (tunnel_mode == HINIC3_FDIR_TUNNEL_MODE_NORMAL) {
+ filter->fdir_filter.outer_ip_type = HINIC3_FDIR_IP_TYPE_IPV4;
+
+ if (!mask_ipv4 && !spec_ipv4)
+ return 0;
+
+ if (!mask_ipv4 || !spec_ipv4) {
+ rte_flow_error_set(error, EINVAL,
+ HINIC3_FLOW_ERROR_TYPE_ITEM,
+ flow_item,
+ "Invalid fdir filter, vxlan outer "
+ "ipv4 mask or spec");
+ return -rte_errno;
+ }
+
+ /*
+ * Only support src address and dst address; others should be
+ * masked.
+ */
+ if (mask_ipv4->hdr.version_ihl ||
+ mask_ipv4->hdr.type_of_service ||
+ mask_ipv4->hdr.total_length || mask_ipv4->hdr.packet_id ||
+ mask_ipv4->hdr.fragment_offset ||
+ mask_ipv4->hdr.time_to_live ||
+ mask_ipv4->hdr.next_proto_id ||
+ mask_ipv4->hdr.hdr_checksum) {
+ rte_flow_error_set(error, EINVAL,
+ HINIC3_FLOW_ERROR_TYPE_ITEM,
+ flow_item,
+ "Not supported by fdir filter, "
+ "vxlan outer ipv4 only support "
+ "src ip, dst ip");
+ return -rte_errno;
+ }
+
+ /* Set the filter information. */
+ filter->fdir_filter.key_mask.ipv4.src_ip =
+ rte_be_to_cpu_32(mask_ipv4->hdr.src_addr);
+ filter->fdir_filter.key_spec.ipv4.src_ip =
+ rte_be_to_cpu_32(spec_ipv4->hdr.src_addr);
+ filter->fdir_filter.key_mask.ipv4.dst_ip =
+ rte_be_to_cpu_32(mask_ipv4->hdr.dst_addr);
+ filter->fdir_filter.key_spec.ipv4.dst_ip =
+ rte_be_to_cpu_32(spec_ipv4->hdr.dst_addr);
+ } else {
+ filter->fdir_filter.ip_type = HINIC3_FDIR_IP_TYPE_IPV4;
+
+ if (!mask_ipv4 && !spec_ipv4)
+ return 0;
+
+ if (!mask_ipv4 || !spec_ipv4) {
+ rte_flow_error_set(error, EINVAL,
+ HINIC3_FLOW_ERROR_TYPE_ITEM,
+ flow_item,
+ "Invalid fdir filter, vxlan inner "
+ "ipv4 mask or spec");
+ return -rte_errno;
+ }
+
+ /*
+ * Only support src address, dst address and ip proto; others should be
+ * masked.
+ */
+ if (mask_ipv4->hdr.version_ihl ||
+ mask_ipv4->hdr.type_of_service ||
+ mask_ipv4->hdr.total_length || mask_ipv4->hdr.packet_id ||
+ mask_ipv4->hdr.fragment_offset ||
+ mask_ipv4->hdr.time_to_live ||
+ mask_ipv4->hdr.hdr_checksum) {
+ rte_flow_error_set(error, EINVAL,
+ HINIC3_FLOW_ERROR_TYPE_ITEM,
+ flow_item,
+ "Not supported by fdir filter, "
+ "vxlan inner ipv4 only support "
+ "src ip, dst ip, proto");
+ return -rte_errno;
+ }
+
+ /* Set the filter information. */
+ filter->fdir_filter.key_mask.inner_ipv4.src_ip =
+ rte_be_to_cpu_32(mask_ipv4->hdr.src_addr);
+ filter->fdir_filter.key_spec.inner_ipv4.src_ip =
+ rte_be_to_cpu_32(spec_ipv4->hdr.src_addr);
+ filter->fdir_filter.key_mask.inner_ipv4.dst_ip =
+ rte_be_to_cpu_32(mask_ipv4->hdr.dst_addr);
+ filter->fdir_filter.key_spec.inner_ipv4.dst_ip =
+ rte_be_to_cpu_32(spec_ipv4->hdr.dst_addr);
+ filter->fdir_filter.key_mask.proto =
+ mask_ipv4->hdr.next_proto_id;
+ filter->fdir_filter.key_spec.proto =
+ spec_ipv4->hdr.next_proto_id;
+ }
+ return 0;
+}
+
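+/**
+ * Parse an IPv6 item of a VXLAN rule: the outer header before the VXLAN
+ * item (src/dst addresses only), the inner header after it (src/dst
+ * addresses and proto).
+ */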
+static int
+hinic3_flow_fdir_tunnel_ipv6(struct rte_flow_error *error,
+ struct hinic3_filter_t *filter,
+ const struct rte_flow_item *flow_item,
+ enum hinic3_fdir_tunnel_mode tunnel_mode)
+{
+ const struct rte_flow_item_ipv6 *spec_ipv6, *mask_ipv6;
+
+ mask_ipv6 = (const struct rte_flow_item_ipv6 *)flow_item->mask;
+ spec_ipv6 = (const struct rte_flow_item_ipv6 *)flow_item->spec;
+
+ if (tunnel_mode == HINIC3_FDIR_TUNNEL_MODE_NORMAL) {
+ filter->fdir_filter.outer_ip_type = HINIC3_FDIR_IP_TYPE_IPV6;
+
+ if (!mask_ipv6 && !spec_ipv6)
+ return 0;
+
+ if (!mask_ipv6 || !spec_ipv6) {
+ rte_flow_error_set(error, EINVAL,
+ HINIC3_FLOW_ERROR_TYPE_ITEM, flow_item,
+ "Invalid fdir filter ipv6 mask or spec");
+ return -rte_errno;
+ }
+
+ /* Only support src address and dst address; proto must be masked. */
+ if (mask_ipv6->hdr.vtc_flow || mask_ipv6->hdr.payload_len ||
+ mask_ipv6->hdr.hop_limits || mask_ipv6->hdr.proto) {
+ rte_flow_error_set(error, EINVAL,
+ HINIC3_FLOW_ERROR_TYPE_ITEM, flow_item,
+ "Not supported by fdir filter, ipv6 only "
+ "support src ip, dst ip, proto");
+ return -rte_errno;
+ }
+
+ net_addr_to_host(filter->fdir_filter.key_mask.ipv6.src_ip,
+ (const uint32_t *)mask_ipv6->hdr.src_addr.a, 4);
+ net_addr_to_host(filter->fdir_filter.key_spec.ipv6.src_ip,
+ (const uint32_t *)spec_ipv6->hdr.src_addr.a, 4);
+ net_addr_to_host(filter->fdir_filter.key_mask.ipv6.dst_ip,
+ (const uint32_t *)mask_ipv6->hdr.dst_addr.a, 4);
+ net_addr_to_host(filter->fdir_filter.key_spec.ipv6.dst_ip,
+ (const uint32_t *)spec_ipv6->hdr.dst_addr.a, 4);
+ } else {
+ filter->fdir_filter.ip_type = HINIC3_FDIR_IP_TYPE_IPV6;
+
+ if (!mask_ipv6 && !spec_ipv6)
+ return 0;
+
+ if (!mask_ipv6 || !spec_ipv6) {
+ rte_flow_error_set(error, EINVAL,
+ HINIC3_FLOW_ERROR_TYPE_ITEM, flow_item,
+ "Invalid fdir filter ipv6 mask or spec");
+ return -rte_errno;
+ }
+
+ /* Only support src address, dst address and proto. */
+ if (mask_ipv6->hdr.vtc_flow || mask_ipv6->hdr.payload_len ||
+ mask_ipv6->hdr.hop_limits) {
+ rte_flow_error_set(error, EINVAL,
+ HINIC3_FLOW_ERROR_TYPE_ITEM, flow_item,
+ "Not supported by fdir filter, ipv6 only "
+ "support src ip, dst ip, proto");
+ return -rte_errno;
+ }
+
+ net_addr_to_host(filter->fdir_filter.key_mask.inner_ipv6.src_ip,
+ (const uint32_t *)mask_ipv6->hdr.src_addr.a, 4);
+ net_addr_to_host(filter->fdir_filter.key_spec.inner_ipv6.src_ip,
+ (const uint32_t *)spec_ipv6->hdr.src_addr.a, 4);
+ net_addr_to_host(filter->fdir_filter.key_mask.inner_ipv6.dst_ip,
+ (const uint32_t *)mask_ipv6->hdr.dst_addr.a, 4);
+ net_addr_to_host(filter->fdir_filter.key_spec.inner_ipv6.dst_ip,
+ (const uint32_t *)spec_ipv6->hdr.dst_addr.a, 4);
+
+ filter->fdir_filter.key_mask.proto = mask_ipv6->hdr.proto;
+ filter->fdir_filter.key_spec.proto = spec_ipv6->hdr.proto;
+ }
+
+ return 0;
+}
+
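+/**
+ * Parse a TCP item of a VXLAN rule. Only inner TCP is supported, and only
+ * the source and destination ports may be specified.
+ */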
+static int
+hinic3_flow_fdir_tunnel_tcp(struct rte_flow_error *error,
+ struct hinic3_filter_t *filter,
+ enum hinic3_fdir_tunnel_mode tunnel_mode,
+ const struct rte_flow_item *flow_item)
+{
+ const struct rte_flow_item_tcp *spec_tcp, *mask_tcp;
+
+ if (tunnel_mode == HINIC3_FDIR_TUNNEL_MODE_NORMAL) {
+ rte_flow_error_set(error, EINVAL, HINIC3_FLOW_ERROR_TYPE_ITEM,
+ flow_item,
+ "Not supported by fdir filter, vxlan only "
+ "support inner tcp");
+ return -rte_errno;
+ }
+
+ filter->fdir_filter.key_mask.proto = HINIC3_UINT8_MAX;
+ filter->fdir_filter.key_spec.proto = IPPROTO_TCP;
+
+ mask_tcp = (const struct rte_flow_item_tcp *)flow_item->mask;
+ spec_tcp = (const struct rte_flow_item_tcp *)flow_item->spec;
+ if (!mask_tcp && !spec_tcp)
+ return 0;
+ if (!mask_tcp || !spec_tcp) {
+ rte_flow_error_set(error, EINVAL, HINIC3_FLOW_ERROR_TYPE_ITEM,
+ flow_item,
+ "Invalid fdir filter tcp mask or spec");
+ return -rte_errno;
+ }
+
+ /* Only support src, dst ports, others should be masked. */
+ if (mask_tcp->hdr.sent_seq || mask_tcp->hdr.recv_ack ||
+ mask_tcp->hdr.data_off || mask_tcp->hdr.rx_win ||
+ mask_tcp->hdr.tcp_flags || mask_tcp->hdr.cksum ||
+ mask_tcp->hdr.tcp_urp) {
+ rte_flow_error_set(error, EINVAL, HINIC3_FLOW_ERROR_TYPE_ITEM,
+ flow_item,
+ "Not supported by fdir filter, vxlan inner "
+ "tcp only support src port,dst port");
+ return -rte_errno;
+ }
+
+ /* Set the filter information. */
+ filter->fdir_filter.key_mask.src_port =
+ (u16)rte_be_to_cpu_16(mask_tcp->hdr.src_port);
+ filter->fdir_filter.key_spec.src_port =
+ (u16)rte_be_to_cpu_16(spec_tcp->hdr.src_port);
+ filter->fdir_filter.key_mask.dst_port =
+ (u16)rte_be_to_cpu_16(mask_tcp->hdr.dst_port);
+ filter->fdir_filter.key_spec.dst_port =
+ (u16)rte_be_to_cpu_16(spec_tcp->hdr.dst_port);
+ return 0;
+}
+
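+/**
+ * Parse a UDP item of a VXLAN rule. An outer UDP item merely denotes the
+ * VXLAN transport and must carry no spec or mask; an inner UDP item may
+ * specify the source and destination ports.
+ */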
+static int
+hinic3_flow_fdir_tunnel_udp(struct rte_flow_error *error,
+ struct hinic3_filter_t *filter,
+ enum hinic3_fdir_tunnel_mode tunnel_mode,
+ const struct rte_flow_item *flow_item)
+{
+ const struct rte_flow_item_udp *spec_udp, *mask_udp;
+
+ mask_udp = (const struct rte_flow_item_udp *)flow_item->mask;
+ spec_udp = (const struct rte_flow_item_udp *)flow_item->spec;
+
+ if (tunnel_mode == HINIC3_FDIR_TUNNEL_MODE_NORMAL) {
+ /*
+ * The outer UDP item only identifies the VXLAN transport,
+ * so its spec and mask must be NULL.
+ */
+ if (flow_item->spec || flow_item->mask) {
+ rte_flow_error_set(error, EINVAL,
+ HINIC3_FLOW_ERROR_TYPE_ITEM,
+ flow_item, "Invalid UDP item");
+ return -rte_errno;
+ }
+ } else {
+ filter->fdir_filter.key_mask.proto = HINIC3_UINT8_MAX;
+ filter->fdir_filter.key_spec.proto = IPPROTO_UDP;
+ if (!mask_udp && !spec_udp)
+ return 0;
+
+ if (!mask_udp || !spec_udp) {
+ rte_flow_error_set(error, EINVAL,
+ HINIC3_FLOW_ERROR_TYPE_ITEM,
+ flow_item,
+ "Invalid fdir filter vxlan inner "
+ "udp mask or spec");
+ return -rte_errno;
+ }
+
+ /* Set the filter information. */
+ filter->fdir_filter.key_mask.src_port =
+ (u16)rte_be_to_cpu_16(mask_udp->hdr.src_port);
+ filter->fdir_filter.key_spec.src_port =
+ (u16)rte_be_to_cpu_16(spec_udp->hdr.src_port);
+ filter->fdir_filter.key_mask.dst_port =
+ (u16)rte_be_to_cpu_16(mask_udp->hdr.dst_port);
+ filter->fdir_filter.key_spec.dst_port =
+ (u16)rte_be_to_cpu_16(spec_udp->hdr.dst_port);
+ }
+
+ return 0;
+}
+
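+/**
+ * Parse the VXLAN item itself: switch the filter to tunnel mode and
+ * record the 24-bit VNI if one is given.
+ */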
+static int
+hinic3_flow_fdir_vxlan(struct rte_flow_error *error,
+ struct hinic3_filter_t *filter,
+ const struct rte_flow_item *flow_item)
+{
+ const struct rte_flow_item_vxlan *spec_vxlan, *mask_vxlan;
+ uint32_t vxlan_vni_id = 0;
+
+ spec_vxlan = (const struct rte_flow_item_vxlan *)flow_item->spec;
+ mask_vxlan = (const struct rte_flow_item_vxlan *)flow_item->mask;
+
+ filter->fdir_filter.tunnel_type = HINIC3_FDIR_TUNNEL_MODE_VXLAN;
+
+ if (!spec_vxlan && !mask_vxlan) {
+ return 0;
+ } else if (filter->fdir_filter.outer_ip_type == HINIC3_FDIR_IP_TYPE_IPV6) {
+ rte_flow_error_set(error, EINVAL, HINIC3_FLOW_ERROR_TYPE_ITEM,
+ flow_item,
+ "Invalid fdir filter vxlan mask or spec, "
+ "ipv6 vxlan, don't support vni");
+ return -rte_errno;
+ }
+
+ if (!spec_vxlan || !mask_vxlan) {
+ rte_flow_error_set(error, EINVAL, HINIC3_FLOW_ERROR_TYPE_ITEM,
+ flow_item,
+ "Invalid fdir filter vxlan mask or spec");
+ return -rte_errno;
+ }
+
+ rte_memcpy(((uint8_t *)&vxlan_vni_id + 1), spec_vxlan->vni, 3);
+ filter->fdir_filter.key_mask.tunnel.tunnel_id =
+ rte_be_to_cpu_32(vxlan_vni_id);
+ return 0;
+}
+
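+/**
+ * Parse a VXLAN flow pattern. Items before the VXLAN item describe the
+ * outer headers, items after it the inner ones.
+ */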
+static int
+hinic3_flow_parse_fdir_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
+ const struct rte_flow_item *pattern,
+ struct rte_flow_error *error,
+ struct hinic3_filter_t *filter)
+{
+ const struct rte_flow_item *flow_item = pattern;
+ enum hinic3_fdir_tunnel_mode tunnel_mode =
+ HINIC3_FDIR_TUNNEL_MODE_NORMAL;
+ enum rte_flow_item_type type;
+ int err;
+
+ /* Inner and outer IP type; set to any by default. */
+ filter->fdir_filter.ip_type = HINIC3_FDIR_IP_TYPE_ANY;
+ filter->fdir_filter.outer_ip_type = HINIC3_FDIR_IP_TYPE_ANY;
+
+ for (; flow_item->type != HINIC3_FLOW_ITEM_TYPE_END; flow_item++) {
+ if (flow_item->last) {
+ rte_flow_error_set(error, EINVAL,
+ HINIC3_FLOW_ERROR_TYPE_ITEM,
+ flow_item, "Not support range");
+ return -rte_errno;
+ }
+
+ type = flow_item->type;
+ switch (type) {
+ case HINIC3_FLOW_ITEM_TYPE_ETH:
+ /* All should be masked. */
+ if (flow_item->spec || flow_item->mask) {
+ rte_flow_error_set(error, EINVAL,
+ HINIC3_FLOW_ERROR_TYPE_ITEM,
+ flow_item,
+ "Not supported by fdir "
+ "filter, not support mac");
+ return -rte_errno;
+ }
+ break;
+
+ case HINIC3_FLOW_ITEM_TYPE_IPV4:
+ err = hinic3_flow_fdir_tunnel_ipv4(error,
+ filter, flow_item, tunnel_mode);
+ if (err != 0)
+ return -rte_errno;
+ break;
+
+ case HINIC3_FLOW_ITEM_TYPE_IPV6:
+ err = hinic3_flow_fdir_tunnel_ipv6(error,
+ filter, flow_item, tunnel_mode);
+ if (err != 0)
+ return -rte_errno;
+ break;
+
+ case HINIC3_FLOW_ITEM_TYPE_TCP:
+ err = hinic3_flow_fdir_tunnel_tcp(error,
+ filter, tunnel_mode, flow_item);
+ if (err != 0)
+ return -rte_errno;
+ break;
+
+ case HINIC3_FLOW_ITEM_TYPE_UDP:
+ err = hinic3_flow_fdir_tunnel_udp(error,
+ filter, tunnel_mode, flow_item);
+ if (err != 0)
+ return -rte_errno;
+ break;
+
+ case HINIC3_FLOW_ITEM_TYPE_VXLAN:
+ err = hinic3_flow_fdir_vxlan(error, filter, flow_item);
+ if (err != 0)
+ return -rte_errno;
+ tunnel_mode = HINIC3_FDIR_TUNNEL_MODE_VXLAN;
+ break;
+
+ default:
+ break;
+ }
+ }
+
+ return 0;
+}
+
+/**
+ * Parse a VXLAN FDIR flow rule into a filter.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] attr
+ * Indicates the attribute of a flow rule.
+ * @param[in] pattern
+ * Indicates the pattern or matching condition of a traffic rule.
+ * @param[in] actions
+ * Indicates the action to be taken on the matched traffic.
+ * @param[out] error
+ * Structure that contains error information, such as error code and error
+ * description.
+ * @param[out] filter
+ * Filter information; used to store and manipulate packet filtering rules.
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_flow_parse_fdir_vxlan_filter(struct rte_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item pattern[],
+ const struct rte_flow_action actions[],
+ struct rte_flow_error *error,
+ struct hinic3_filter_t *filter)
+{
+ int ret;
+
+ ret = hinic3_flow_parse_fdir_vxlan_pattern(dev, pattern, error, filter);
+ if (ret)
+ return ret;
+
+ ret = hinic3_flow_parse_action(dev, actions, error, filter);
+ if (ret)
+ return ret;
+
+ ret = hinic3_flow_parse_attr(attr, error);
+ if (ret)
+ return ret;
+
+ filter->filter_type = RTE_ETH_FILTER_FDIR;
+
+ return 0;
+}
+
+/**
+ * Parse patterns and actions of network traffic.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] attr
+ * Indicates the attribute of a flow rule.
+ * @param[in] pattern
+ * Indicates the pattern or matching condition of a traffic rule.
+ * @param[in] actions
+ * Indicates the action to be taken on the matched traffic.
+ * @param[out] error
+ * Structure that contains error information, such as error code and error
+ * description.
+ * @param[out] filter
+ * Filter information; used to store and manipulate packet filtering rules.
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_flow_parse(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
+ const struct rte_flow_item pattern[],
+ const struct rte_flow_action actions[],
+ struct rte_flow_error *error, struct hinic3_filter_t *filter)
+{
+ hinic3_parse_filter_t parse_filter;
+ uint32_t pattern_num = 0;
+ int ret = 0;
+ /* Check whether the parameters are valid. */
+ if (!pattern || !actions || !attr) {
+ rte_flow_error_set(error, EINVAL,
+ HINIC3_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "NULL param.");
+ return -rte_errno;
+ }
+
+ while ((pattern + pattern_num)->type != HINIC3_FLOW_ITEM_TYPE_END) {
+ pattern_num++;
+ if (pattern_num > HINIC3_FLOW_MAX_PATTERN_NUM) {
+ rte_flow_error_set(error, EINVAL,
+ HINIC3_FLOW_ERROR_TYPE_ITEM_NUM, NULL,
+ "Too many patterns.");
+ return -rte_errno;
+ }
+ }
+ /*
+ * Look up the parse function for this pattern; NULL means the
+ * pattern is not supported.
+ */
+ parse_filter = hinic3_find_parse_filter_func(pattern);
+ if (!parse_filter) {
+ rte_flow_error_set(error, EINVAL, HINIC3_FLOW_ERROR_TYPE_ITEM,
+ pattern, "Unsupported pattern");
+ return -rte_errno;
+ }
+ /* Parsing with filters. */
+ ret = parse_filter(dev, attr, pattern, actions, error, filter);
+
+ return ret;
+}
+
+/**
+ * Check whether the traffic rule provided by the user is valid.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] attr
+ * Indicates the attribute of a flow rule.
+ * @param[in] pattern
+ * Indicates the pattern or matching condition of a traffic rule.
+ * @param[in] actions
+ * Indicates the action to be taken on the matched traffic.
+ * @param[out] error
+ * Structure that contains error information, such as error code and error
+ * description.
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_flow_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
+ const struct rte_flow_item pattern[],
+ const struct rte_flow_action actions[],
+ struct rte_flow_error *error)
+{
+ struct hinic3_filter_t filter_rules = {0};
+
+ return hinic3_flow_parse(dev, attr, pattern, actions, error,
+ &filter_rules);
+}
+
+/**
+ * Create a flow rule and apply it to the NIC.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] attr
+ * Indicates the attribute of a flow rule.
+ * @param[in] pattern
+ * Indicates the pattern or matching condition of a traffic rule.
+ * @param[in] actions
+ * Indicates the action to be taken on the matched traffic.
+ * @param[out] error
+ * Structure that contains error information, such as error code and error
+ * description.
+ * @return
+ * If the operation is successful, the created flow is returned. Otherwise, NULL
+ * is returned.
+ */
+static struct rte_flow *
+hinic3_flow_create(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
+ const struct rte_flow_item pattern[],
+ const struct rte_flow_action actions[],
+ struct rte_flow_error *error)
+{
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ struct hinic3_filter_t *filter_rules = NULL;
+ struct rte_flow *flow = NULL;
+ int ret;
+
+ filter_rules =
+ rte_zmalloc("filter_rules", sizeof(struct hinic3_filter_t), 0);
+ if (!filter_rules) {
+ rte_flow_error_set(error, EINVAL, HINIC3_FLOW_ERROR_TYPE_HANDLE,
+ NULL,
+ "Failed to allocate filter rules memory.");
+ return NULL;
+ }
+
+ flow = rte_zmalloc("hinic3_rte_flow", sizeof(struct rte_flow), 0);
+ if (!flow) {
+ rte_flow_error_set(error, EINVAL, HINIC3_FLOW_ERROR_TYPE_HANDLE,
+ NULL, "Failed to allocate flow memory.");
+ rte_free(filter_rules);
+ return NULL;
+ }
+ /* Parse the flow rule to be created and generate a filter. */
+ ret = hinic3_flow_parse(dev, attr, pattern, actions, error,
+ filter_rules);
+ if (ret < 0)
+ goto free_flow;
+
+ switch (filter_rules->filter_type) {
+ case RTE_ETH_FILTER_ETHERTYPE:
+ ret = hinic3_flow_add_del_ethertype_filter(dev,
+ &filter_rules->ethertype_filter, true);
+ if (ret) {
+ rte_flow_error_set(error, EINVAL,
+ HINIC3_FLOW_ERROR_TYPE_HANDLE, NULL,
+ "Create ethertype filter failed.");
+ goto free_flow;
+ }
+
+ flow->rule = filter_rules;
+ flow->filter_type = filter_rules->filter_type;
+ TAILQ_INSERT_TAIL(&nic_dev->filter_ethertype_list, flow, node);
+ break;
+
+ case RTE_ETH_FILTER_FDIR:
+ ret = hinic3_flow_add_del_fdir_filter(dev,
+ &filter_rules->fdir_filter, true);
+ if (ret) {
+ rte_flow_error_set(error, EINVAL,
+ HINIC3_FLOW_ERROR_TYPE_HANDLE, NULL,
+ "Create fdir filter failed.");
+ goto free_flow;
+ }
+
+ flow->rule = filter_rules;
+ flow->filter_type = filter_rules->filter_type;
+ TAILQ_INSERT_TAIL(&nic_dev->filter_fdir_rule_list, flow, node);
+ break;
+ default:
+ PMD_DRV_LOG(ERR, "Filter type %d not supported",
+ filter_rules->filter_type);
+ rte_flow_error_set(error, EINVAL, HINIC3_FLOW_ERROR_TYPE_HANDLE,
+ NULL, "Unsupport filter type.");
+ goto free_flow;
+ }
+
+ return flow;
+
+free_flow:
+ rte_free(flow);
+ rte_free(filter_rules);
+
+ return NULL;
+}
+
+static int
+hinic3_flow_destroy(struct rte_eth_dev *dev, struct rte_flow *flow,
+ struct rte_flow_error *error)
+{
+ int ret = -EINVAL;
+ enum rte_filter_type type;
+ struct hinic3_filter_t *rules = NULL;
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+
+ if (!flow) {
+ PMD_DRV_LOG(ERR, "Invalid flow parameter!");
+ return -EPERM;
+ }
+
+ type = flow->filter_type;
+ rules = (struct hinic3_filter_t *)flow->rule;
+ /* Perform operations based on the type. */
+ switch (type) {
+ case RTE_ETH_FILTER_ETHERTYPE:
+ ret = hinic3_flow_add_del_ethertype_filter(dev,
+ &rules->ethertype_filter, false);
+ if (!ret)
+ TAILQ_REMOVE(&nic_dev->filter_ethertype_list, flow,
+ node);
+ break;
+
+ case RTE_ETH_FILTER_FDIR:
+ ret = hinic3_flow_add_del_fdir_filter(dev, &rules->fdir_filter,
+ false);
+ if (!ret)
+ TAILQ_REMOVE(&nic_dev->filter_fdir_rule_list, flow,
+ node);
+ break;
+ default:
+ PMD_DRV_LOG(WARNING, "Filter type %d not supported", type);
+ ret = -EINVAL;
+ break;
+ }
+
+ /* Deletion succeeded; release the resources. */
+ if (!ret) {
+ rte_free(rules);
+ rte_free(flow);
+ } else {
+ rte_flow_error_set(error, -ret, HINIC3_FLOW_ERROR_TYPE_HANDLE,
+ NULL, "Failed to destroy flow.");
+ }
+
+ return ret;
+}
+
+/**
+ * Clear all fdir type flow rules on the network device.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_flow_flush_fdir_filter(struct rte_eth_dev *dev)
+{
+ int ret = 0;
+ struct hinic3_filter_t *filter_rules = NULL;
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ struct rte_flow *flow;
+
+ while (true) {
+ flow = TAILQ_FIRST(&nic_dev->filter_fdir_rule_list);
+ if (flow == NULL)
+ break;
+ filter_rules = (struct hinic3_filter_t *)flow->rule;
+
+ /* Delete flow rules. */
+ ret = hinic3_flow_add_del_fdir_filter(dev,
+ &filter_rules->fdir_filter, false);
+
+ if (ret)
+ return ret;
+
+ TAILQ_REMOVE(&nic_dev->filter_fdir_rule_list, flow, node);
+ rte_free(filter_rules);
+ rte_free(flow);
+ }
+
+ return ret;
+}
+
+/**
+ * Clear all ether type flow rules on the network device.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_flow_flush_ethertype_filter(struct rte_eth_dev *dev)
+{
+ struct hinic3_filter_t *filter_rules = NULL;
+ struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+ struct rte_flow *flow;
+ int ret = 0;
+
+ while (true) {
+ flow = TAILQ_FIRST(&nic_dev->filter_ethertype_list);
+ if (flow == NULL)
+ break;
+ filter_rules = (struct hinic3_filter_t *)flow->rule;
+
+ /* Delete flow rules. */
+ ret = hinic3_flow_add_del_ethertype_filter(dev,
+ &filter_rules->ethertype_filter, false);
+
+ if (ret)
+ return ret;
+
+ TAILQ_REMOVE(&nic_dev->filter_ethertype_list, flow, node);
+ rte_free(filter_rules);
+ rte_free(flow);
+ }
+
+ return ret;
+}
+
+/**
+ * Clear all flow rules on the network device.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[out] error
+ * Structure that contains error information, such as error code and error
+ * description.
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *error)
+{
+ int ret;
+
+ ret = hinic3_flow_flush_fdir_filter(dev);
+ if (ret) {
+ rte_flow_error_set(error, -ret, HINIC3_FLOW_ERROR_TYPE_HANDLE,
+ NULL, "Failed to flush fdir flows.");
+ return -rte_errno;
+ }
+
+ ret = hinic3_flow_flush_ethertype_filter(dev);
+ if (ret) {
+ rte_flow_error_set(error, -ret, HINIC3_FLOW_ERROR_TYPE_HANDLE,
+ NULL, "Failed to flush ethertype flows.");
+ return -rte_errno;
+ }
+ return ret;
+}
+
+/* Structure for managing flow table operations. */
+const struct rte_flow_ops hinic3_flow_ops = {
+ .validate = hinic3_flow_validate,
+ .create = hinic3_flow_create,
+ .destroy = hinic3_flow_destroy,
+ .flush = hinic3_flow_flush,
+};
diff --git a/drivers/net/hinic3/hinic3_flow.h b/drivers/net/hinic3/hinic3_flow.h
new file mode 100644
index 0000000000..9104337544
--- /dev/null
+++ b/drivers/net/hinic3/hinic3_flow.h
@@ -0,0 +1,80 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Huawei Technologies Co., Ltd
+ */
+
+#ifndef _HINIC3_FLOW_H_
+#define _HINIC3_FLOW_H_
+
+#include <rte_flow.h>
+
+/* Flow item type. */
+#define HINIC3_FLOW_ITEM_TYPE_END RTE_FLOW_ITEM_TYPE_END
+#define HINIC3_FLOW_ITEM_TYPE_VOID RTE_FLOW_ITEM_TYPE_VOID
+#define HINIC3_FLOW_ITEM_TYPE_INVERT RTE_FLOW_ITEM_TYPE_INVERT
+#define HINIC3_FLOW_ITEM_TYPE_ANY RTE_FLOW_ITEM_TYPE_ANY
+#define HINIC3_FLOW_ITEM_TYPE_PF RTE_FLOW_ITEM_TYPE_PF
+#define HINIC3_FLOW_ITEM_TYPE_VF RTE_FLOW_ITEM_TYPE_VF
+#define HINIC3_FLOW_ITEM_TYPE_PHY_PORT RTE_FLOW_ITEM_TYPE_PHY_PORT
+#define HINIC3_FLOW_ITEM_TYPE_PORT_ID RTE_FLOW_ITEM_TYPE_PORT_ID
+#define HINIC3_FLOW_ITEM_TYPE_RAW RTE_FLOW_ITEM_TYPE_RAW
+#define HINIC3_FLOW_ITEM_TYPE_ETH RTE_FLOW_ITEM_TYPE_ETH
+#define HINIC3_FLOW_ITEM_TYPE_VLAN RTE_FLOW_ITEM_TYPE_VLAN
+#define HINIC3_FLOW_ITEM_TYPE_IPV4 RTE_FLOW_ITEM_TYPE_IPV4
+#define HINIC3_FLOW_ITEM_TYPE_IPV6 RTE_FLOW_ITEM_TYPE_IPV6
+#define HINIC3_FLOW_ITEM_TYPE_ICMP RTE_FLOW_ITEM_TYPE_ICMP
+#define HINIC3_FLOW_ITEM_TYPE_UDP RTE_FLOW_ITEM_TYPE_UDP
+#define HINIC3_FLOW_ITEM_TYPE_TCP RTE_FLOW_ITEM_TYPE_TCP
+#define HINIC3_FLOW_ITEM_TYPE_SCTP RTE_FLOW_ITEM_TYPE_SCTP
+#define HINIC3_FLOW_ITEM_TYPE_VXLAN RTE_FLOW_ITEM_TYPE_VXLAN
+#define HINIC3_FLOW_ITEM_TYPE_E_TAG RTE_FLOW_ITEM_TYPE_E_TAG
+#define HINIC3_FLOW_ITEM_TYPE_NVGRE RTE_FLOW_ITEM_TYPE_NVGRE
+#define HINIC3_FLOW_ITEM_TYPE_MPLS RTE_FLOW_ITEM_TYPE_MPLS
+#define HINIC3_FLOW_ITEM_TYPE_GRE RTE_FLOW_ITEM_TYPE_GRE
+#define HINIC3_FLOW_ITEM_TYPE_FUZZY RTE_FLOW_ITEM_TYPE_FUZZY
+#define HINIC3_FLOW_ITEM_TYPE_GTP RTE_FLOW_ITEM_TYPE_GTP
+#define HINIC3_FLOW_ITEM_TYPE_GTPC RTE_FLOW_ITEM_TYPE_GTPC
+#define HINIC3_FLOW_ITEM_TYPE_GTPU RTE_FLOW_ITEM_TYPE_GTPU
+#define HINIC3_FLOW_ITEM_TYPE_ESP RTE_FLOW_ITEM_TYPE_ESP
+#define HINIC3_FLOW_ITEM_TYPE_GENEVE RTE_FLOW_ITEM_TYPE_GENEVE
+#define HINIC3_FLOW_ITEM_TYPE_VXLAN_GPE RTE_FLOW_ITEM_TYPE_VXLAN_GPE
+#define HINIC3_FLOW_ITEM_TYPE_ARP_ETH_IPV4 RTE_FLOW_ITEM_TYPE_ARP_ETH_IPV4
+#define HINIC3_FLOW_ITEM_TYPE_IPV6_EXT RTE_FLOW_ITEM_TYPE_IPV6_EXT
+#define HINIC3_FLOW_ITEM_TYPE_ICMP6 RTE_FLOW_ITEM_TYPE_ICMP6
+#define HINIC3_FLOW_ITEM_TYPE_ICMP6_ND_NS RTE_FLOW_ITEM_TYPE_ICMP6_ND_NS
+#define HINIC3_FLOW_ITEM_TYPE_ICMP6_ND_NA RTE_FLOW_ITEM_TYPE_ICMP6_ND_NA
+#define HINIC3_FLOW_ITEM_TYPE_ICMP6_ND_OPT RTE_FLOW_ITEM_TYPE_ICMP6_ND_OPT
+#define HINIC3_FLOW_ITEM_TYPE_ICMP6_ND_OPT_SLA_ETH RTE_FLOW_ITEM_TYPE_ICMP6_ND_OPT_SLA_ETH
+#define HINIC3_FLOW_ITEM_TYPE_ICMP6_ND_OPT_TLA_ETH RTE_FLOW_ITEM_TYPE_ICMP6_ND_OPT_TLA_ETH
+#define HINIC3_FLOW_ITEM_TYPE_MARK RTE_FLOW_ITEM_TYPE_MARK
+#define HINIC3_FLOW_ITEM_TYPE_META RTE_FLOW_ITEM_TYPE_META
+#define HINIC3_FLOW_ITEM_TYPE_GRE_KEY RTE_FLOW_ITEM_TYPE_GRE_KEY
+#define HINIC3_FLOW_ITEM_TYPE_GTP_PSC RTE_FLOW_ITEM_TYPE_GTP_PSC
+#define HINIC3_FLOW_ITEM_TYPE_PPPOES RTE_FLOW_ITEM_TYPE_PPPOES
+#define HINIC3_FLOW_ITEM_TYPE_PPPOED RTE_FLOW_ITEM_TYPE_PPPOED
+#define HINIC3_FLOW_ITEM_TYPE_PPPOE_PROTO_ID RTE_FLOW_ITEM_TYPE_PPPOE_PROTO_ID
+#define HINIC3_FLOW_ITEM_TYPE_NSH RTE_FLOW_ITEM_TYPE_NSH
+#define HINIC3_FLOW_ITEM_TYPE_IGMP RTE_FLOW_ITEM_TYPE_IGMP
+#define HINIC3_FLOW_ITEM_TYPE_AH RTE_FLOW_ITEM_TYPE_AH
+#define HINIC3_FLOW_ITEM_TYPE_HIGIG2 RTE_FLOW_ITEM_TYPE_HIGIG2
+#define HINIC3_FLOW_ITEM_TYPE_TAG RTE_FLOW_ITEM_TYPE_TAG
+
+/* Flow error type. */
+#define HINIC3_FLOW_ERROR_TYPE_NONE RTE_FLOW_ERROR_TYPE_NONE
+#define HINIC3_FLOW_ERROR_TYPE_UNSPECIFIED RTE_FLOW_ERROR_TYPE_UNSPECIFIED
+#define HINIC3_FLOW_ERROR_TYPE_HANDLE RTE_FLOW_ERROR_TYPE_HANDLE
+#define HINIC3_FLOW_ERROR_TYPE_ATTR_GROUP RTE_FLOW_ERROR_TYPE_ATTR_GROUP
+#define HINIC3_FLOW_ERROR_TYPE_ATTR_PRIORITY RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY
+#define HINIC3_FLOW_ERROR_TYPE_ATTR_INGRESS RTE_FLOW_ERROR_TYPE_ATTR_INGRESS
+#define HINIC3_FLOW_ERROR_TYPE_ATTR_EGRESS RTE_FLOW_ERROR_TYPE_ATTR_EGRESS
+#define HINIC3_FLOW_ERROR_TYPE_ATTR_TRANSFER RTE_FLOW_ERROR_TYPE_ATTR_TRANSFER
+#define HINIC3_FLOW_ERROR_TYPE_ATTR RTE_FLOW_ERROR_TYPE_ATTR
+#define HINIC3_FLOW_ERROR_TYPE_ITEM_NUM RTE_FLOW_ERROR_TYPE_ITEM_NUM
+#define HINIC3_FLOW_ERROR_TYPE_ITEM_SPEC RTE_FLOW_ERROR_TYPE_ITEM_SPEC
+#define HINIC3_FLOW_ERROR_TYPE_ITEM_LAST RTE_FLOW_ERROR_TYPE_ITEM_LAST
+#define HINIC3_FLOW_ERROR_TYPE_ITEM_MASK RTE_FLOW_ERROR_TYPE_ITEM_MASK
+#define HINIC3_FLOW_ERROR_TYPE_ITEM RTE_FLOW_ERROR_TYPE_ITEM
+#define HINIC3_FLOW_ERROR_TYPE_ACTION_NUM RTE_FLOW_ERROR_TYPE_ACTION_NUM
+#define HINIC3_FLOW_ERROR_TYPE_ACTION_CONF RTE_FLOW_ERROR_TYPE_ACTION_CONF
+#define HINIC3_FLOW_ERROR_TYPE_ACTION RTE_FLOW_ERROR_TYPE_ACTION
+
+#endif /* _HINIC3_FLOW_H_ */
--
2.47.0.windows.2
* Re: [RFC 17/18] net/hinic3: add FDIR flow control module
2025-04-18 9:06 ` [RFC 17/18] net/hinic3: add FDIR flow control module Feifei Wang
@ 2025-04-18 18:25 ` Stephen Hemminger
2025-04-18 18:27 ` Stephen Hemminger
` (2 subsequent siblings)
3 siblings, 0 replies; 30+ messages in thread
From: Stephen Hemminger @ 2025-04-18 18:25 UTC (permalink / raw)
To: Feifei Wang; +Cc: dev, Yi Chen, Xin Wang, Feifei Wang
On Fri, 18 Apr 2025 17:06:03 +0800
Feifei Wang <wff_light@vip.163.com> wrote:
> + (void)hinic3_mutex_unlock(&nic_dev->pause_mutuex);
Please don't add extra (void) here.
That is an older style used when C code was using lint.
And if you have an internal mutex_unlock function, why is it returning
a value anyway? It should be a void function.
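
For illustration, a void-returning wrapper could look roughly like this
(hypothetical sketch; the real helper and log macro may differ):

	static void
	hinic3_mutex_unlock(pthread_mutex_t *mutex)
	{
		int err = pthread_mutex_unlock(mutex);

		/* Callers cannot act on the failure; just log it. */
		if (err != 0)
			PMD_DRV_LOG(ERR, "Failed to unlock mutex: %d", err);
	}

Callers can then drop the (void) cast entirely:

	hinic3_mutex_unlock(&nic_dev->pause_mutex);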
* Re: [RFC 17/18] net/hinic3: add FDIR flow control module
2025-04-18 9:06 ` [RFC 17/18] net/hinic3: add FDIR flow control module Feifei Wang
2025-04-18 18:25 ` Stephen Hemminger
@ 2025-04-18 18:27 ` Stephen Hemminger
2025-04-18 18:28 ` Stephen Hemminger
2025-04-18 18:30 ` Stephen Hemminger
3 siblings, 0 replies; 30+ messages in thread
From: Stephen Hemminger @ 2025-04-18 18:27 UTC (permalink / raw)
To: Feifei Wang; +Cc: dev, Yi Chen, Xin Wang, Feifei Wang
On Fri, 18 Apr 2025 17:06:03 +0800
Feifei Wang <wff_light@vip.163.com> wrote:
> + u8 pause_set; /**< Flag of PAUSE frame setting. */
> + pthread_mutex_t pause_mutuex;
> + struct nic_pause_config nic_pause;
> +
Please don't use pthread functions unless there is a special reason
to do so. Why not a simple spinlock?
PS: the spelling of mutex is wrong here.
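
For reference, a minimal sketch with a DPDK spinlock instead (field name
assumed; the state it protects is the pause config quoted above):

	#include <rte_spinlock.h>

	rte_spinlock_t pause_lock;	/* replaces pthread_mutex_t pause_mutex */

	/* init once, then no error paths to handle on lock/unlock */
	rte_spinlock_init(&pause_lock);

	rte_spinlock_lock(&pause_lock);
	/* ... update pause_set / nic_pause under the lock ... */
	rte_spinlock_unlock(&pause_lock);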
* Re: [RFC 17/18] net/hinic3: add FDIR flow control module
2025-04-18 9:06 ` [RFC 17/18] net/hinic3: add FDIR flow control module Feifei Wang
2025-04-18 18:25 ` Stephen Hemminger
2025-04-18 18:27 ` Stephen Hemminger
@ 2025-04-18 18:28 ` Stephen Hemminger
2025-04-18 18:30 ` Stephen Hemminger
3 siblings, 0 replies; 30+ messages in thread
From: Stephen Hemminger @ 2025-04-18 18:28 UTC (permalink / raw)
To: Feifei Wang; +Cc: dev, Yi Chen, Xin Wang, Feifei Wang
On Fri, 18 Apr 2025 17:06:03 +0800
Feifei Wang <wff_light@vip.163.com> wrote:
> From: Yi Chen <chenyi221@huawei.com>
>
> Added support for flow director filters, including ethertype, IPv4,
> IPv6, and tunnel VXLAN. In addition, user can add or delete filters.
>
> Signed-off-by: Yi Chen <chenyi221@huawei.com>
> Reviewed-by: Xin Wang <wangxin679@h-partners.com>
> Reviewed-by: Feifei Wang <wangfeifei40@huawei.com>
Flow director is deprecated in DPDK and planned for removal.
Please support rte_flow instead.
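
For reference, hooking the rte_flow API up is a matter of answering the
ethdev flow-ops query with the driver's ops table (function name here is
illustrative; hinic3_flow_ops is the table added in patch 17):

	static int
	hinic3_dev_flow_ops_get(__rte_unused struct rte_eth_dev *dev,
				const struct rte_flow_ops **ops)
	{
		*ops = &hinic3_flow_ops;
		return 0;
	}

	/* in struct eth_dev_ops: .flow_ops_get = hinic3_dev_flow_ops_get, */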
* Re: [RFC 17/18] net/hinic3: add FDIR flow control module
2025-04-18 9:06 ` [RFC 17/18] net/hinic3: add FDIR flow control module Feifei Wang
` (2 preceding siblings ...)
2025-04-18 18:28 ` Stephen Hemminger
@ 2025-04-18 18:30 ` Stephen Hemminger
3 siblings, 0 replies; 30+ messages in thread
From: Stephen Hemminger @ 2025-04-18 18:30 UTC (permalink / raw)
To: Feifei Wang; +Cc: dev, Yi Chen, Xin Wang, Feifei Wang
On Fri, 18 Apr 2025 17:06:03 +0800
Feifei Wang <wff_light@vip.163.com> wrote:
> + /* Alloc TCAM filter memory. */
> + tcam_filter = rte_zmalloc("hinic3_fdir_filter",
> + sizeof(struct hinic3_tcam_filter), 0);
> + if (tcam_filter == NULL)
> + return -ENOMEM;
> + (void)rte_memcpy(&tcam_filter->tcam_key, tcam_key,
> + sizeof(struct hinic3_tcam_key));
This line has three issues.
1. Don't use (void) cast, that is old BSD lint style.
2. Don't use rte_memcpy() for simple fixed size things, use memcpy instead.
3. Don't use memcpy when structure assignment would work.
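
For the quoted snippet, the structure-assignment form is a single line and
lets the compiler check the types:

	tcam_filter->tcam_key = *tcam_key;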
* [RFC 18/18] drivers/net: add hinic3 PMD build and doc files
2025-04-18 9:05 [RFC 00/18] add hinic3 PMD driver Feifei Wang
` (16 preceding siblings ...)
2025-04-18 9:06 ` [RFC 17/18] net/hinic3: add FDIR flow control module Feifei Wang
@ 2025-04-18 9:06 ` Feifei Wang
2025-04-18 17:22 ` Stephen Hemminger
2025-04-18 18:18 ` [RFC 00/18] add hinic3 PMD driver Stephen Hemminger
` (2 subsequent siblings)
20 siblings, 1 reply; 30+ messages in thread
From: Feifei Wang @ 2025-04-18 9:06 UTC (permalink / raw)
To: dev; +Cc: Yi Chen, Xin Wang, Feifei Wang
From: Yi Chen <chenyi221@huawei.com>
This patch adds the meson.build files that enable
compilation of the hinic3 driver.
Signed-off-by: Yi Chen <chenyi221@huawei.com>
Reviewed-by: Xin Wang <wangxin679@h-partners.com>
Reviewed-by: Feifei Wang <wangfeifei40@huawei.com>
---
doc/guides/nics/features/hinic3.ini | 9 ++++++
drivers/net/hinic3/base/meson.build | 50 +++++++++++++++++++++++++++++
drivers/net/hinic3/meson.build | 44 +++++++++++++++++++++++++
drivers/net/meson.build | 1 +
4 files changed, 104 insertions(+)
create mode 100644 doc/guides/nics/features/hinic3.ini
create mode 100644 drivers/net/hinic3/base/meson.build
create mode 100644 drivers/net/hinic3/meson.build
diff --git a/doc/guides/nics/features/hinic3.ini b/doc/guides/nics/features/hinic3.ini
new file mode 100644
index 0000000000..8bafd49090
--- /dev/null
+++ b/doc/guides/nics/features/hinic3.ini
@@ -0,0 +1,9 @@
+;
+; Supported features of the 'hinic3' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Linux = Y
+x86-64 = Y
+ARMv8 = Y
diff --git a/drivers/net/hinic3/base/meson.build b/drivers/net/hinic3/base/meson.build
new file mode 100644
index 0000000000..948f5efac2
--- /dev/null
+++ b/drivers/net/hinic3/base/meson.build
@@ -0,0 +1,50 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2025 Huawei Technologies Co., Ltd
+
+sources = files(
+ 'hinic3_cmdq.c',
+ 'hinic3_eqs.c',
+ 'hinic3_hw_cfg.c',
+ 'hinic3_hw_comm.c',
+ 'hinic3_hwdev.c',
+ 'hinic3_hwif.c',
+ 'hinic3_mbox.c',
+ 'hinic3_mgmt.c',
+ 'hinic3_nic_cfg.c',
+ 'hinic3_nic_event.c',
+ 'hinic3_wq.c',
+)
+
+extra_flags = []
+
+# The driver only runs on 64-bit machines; suppress 32-bit pointer-cast warnings
+if not dpdk_conf.get('RTE_ARCH_64')
+ extra_flags += [
+ '-Wno-int-to-pointer-cast',
+ '-Wno-pointer-to-int-cast',
+ ]
+endif
+
+foreach flag: extra_flags
+ if cc.has_argument(flag)
+ cflags += flag
+ endif
+endforeach
+
+deps += ['hash']
+c_args = cflags
+includes += include_directories('../')
+
+base_lib = static_library(
+ 'hinic3_base',
+ sources,
+ dependencies: [
+ static_rte_eal,
+ static_rte_ethdev,
+ static_rte_bus_pci,
+ static_rte_hash,
+ ],
+ include_directories: includes,
+ c_args: c_args,
+)
+base_objs = base_lib.extract_all_objects()
diff --git a/drivers/net/hinic3/meson.build b/drivers/net/hinic3/meson.build
new file mode 100644
index 0000000000..231e966b36
--- /dev/null
+++ b/drivers/net/hinic3/meson.build
@@ -0,0 +1,44 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2025 Huawei Technologies Co., Ltd
+
+if not is_linux
+ build = false
+ reason = 'only supported on Linux'
+ subdir_done()
+endif
+
+if (arch_subdir != 'x86' and arch_subdir != 'arm'
+ or not dpdk_conf.get('RTE_ARCH_64'))
+ build = false
+ reason = 'only supported on x86_64 and aarch64'
+ subdir_done()
+endif
+
+cflags += [
+ '-DHW_CONVERT_ENDIAN',
+ '-D__HINIC_HUAWEI_SECUREC__',
+ '-fPIC',
+ '-fstack-protector-strong',
+]
+
+subdir('base')
+subdir('mml')
+objs = [base_objs] + [mml_objs]
+
+sources = files(
+ 'hinic3_ethdev.c',
+ 'hinic3_fdir.c',
+ 'hinic3_flow.c',
+ 'hinic3_nic_io.c',
+ 'hinic3_rx.c',
+ 'hinic3_tx.c',
+)
+
+if arch_subdir == 'arm' and dpdk_conf.get('RTE_ARCH_64')
+ cflags += ['-D__ARM64_NEON__']
+else
+ cflags += ['-D__X86_64_SSE__']
+endif
+
+includes += include_directories('base')
+includes += include_directories('mml')
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index 460eb69e5b..b5442349d4 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -23,6 +23,7 @@ drivers = [
'failsafe',
'gve',
'hinic',
+ 'hinic3',
'hns3',
'intel/e1000',
'intel/fm10k',
--
2.47.0.windows.2
* Re: [RFC 18/18] drivers/net: add hinic3 PMD build and doc files
2025-04-18 9:06 ` [RFC 18/18] drivers/net: add hinic3 PMD build and doc files Feifei Wang
@ 2025-04-18 17:22 ` Stephen Hemminger
2025-04-19 2:52 ` Re: " wangfeifei (J)
0 siblings, 1 reply; 30+ messages in thread
From: Stephen Hemminger @ 2025-04-18 17:22 UTC (permalink / raw)
To: Feifei Wang; +Cc: dev, Yi Chen, Xin Wang, Feifei Wang
On Fri, 18 Apr 2025 17:06:04 +0800
Feifei Wang <wff_light@vip.163.com> wrote:
> +cflags += [
> + '-DHW_CONVERT_ENDIAN',
> + '-D__HINIC_HUAWEI_SECUREC__',
> + '-fPIC',
> + '-fstack-protector-strong',
> +]
What is this?
Should not enable PIC or stack-protector at the driver level.
I assume the other stuff is huawei specific compiler flags.
> +if arch_subdir == 'arm' and dpdk_conf.get('RTE_ARCH_64')
> + cflags += ['-D__ARM64_NEON__']
> +else
> + cflags += ['-D__X86_64_SSE__']
> +endif
This should already be handled in the existing DPDK meson stuff.
Doing it at a per-driver level seems wrong.
* Re: [RFC 18/18] drivers/net: add hinic3 PMD build and doc files
2025-04-18 17:22 ` Stephen Hemminger
@ 2025-04-19 2:52 ` wangfeifei (J)
0 siblings, 0 replies; 30+ messages in thread
From: wangfeifei (J) @ 2025-04-19 2:52 UTC (permalink / raw)
To: Stephen Hemminger, Feifei Wang
Cc: dev, chenyi (CY), Wangxin(kunpeng),
zengweiliang zengweiliang, Dumin(Dumin,KunPeng)
-----Original Message-----
From: Stephen Hemminger <stephen@networkplumber.org>
Sent: April 19, 2025 1:23
To: Feifei Wang <wff_light@vip.163.com>
Cc: dev@dpdk.org; chenyi (CY) <chenyi221@huawei.com>; Wangxin(kunpeng) <wangxin679@h-partners.com>; wangfeifei (J) <wangfeifei40@huawei.com>
Subject: Re: [RFC 18/18] drivers/net: add hinic3 PMD build and doc files
On Fri, 18 Apr 2025 17:06:04 +0800
Feifei Wang <wff_light@vip.163.com> wrote:
> +cflags += [
> + '-DHW_CONVERT_ENDIAN',
> + '-D__HINIC_HUAWEI_SECUREC__',
> + '-fPIC',
> + '-fstack-protector-strong',
> +]
What is this?
Should not enable PIC or stack-protector at the driver level.
I assume the other stuff is huawei specific compiler flags.
[Feifei] Got it. We will remove this and keep it consistent with the generic approach.
> +if arch_subdir == 'arm' and dpdk_conf.get('RTE_ARCH_64')
> + cflags += ['-D__ARM64_NEON__']
> +else
> + cflags += ['-D__X86_64_SSE__']
> +endif
This should already be handled in the existing DPDK meson stuff.
Doing it at a per-driver level seems wrong.
[Feifei] Agreed, we will fix this in the next version.
* Re: [RFC 00/18] add hinic3 PMD driver
2025-04-18 9:05 [RFC 00/18] add hinic3 PMD driver Feifei Wang
` (17 preceding siblings ...)
2025-04-18 9:06 ` [RFC 18/18] drivers/net: add hinic3 PMD build and doc files Feifei Wang
@ 2025-04-18 18:18 ` Stephen Hemminger
2025-04-19 2:44 ` Re: " wangfeifei (J)
2025-04-18 18:20 ` Stephen Hemminger
2025-04-18 18:32 ` Stephen Hemminger
20 siblings, 1 reply; 30+ messages in thread
From: Stephen Hemminger @ 2025-04-18 18:18 UTC (permalink / raw)
To: Feifei Wang; +Cc: dev
On Fri, 18 Apr 2025 17:05:46 +0800
Feifei Wang <wff_light@vip.163.com> wrote:
> *** BLURB HERE ***
> The hinic3 PMD (**librte_net_hinic3**) provides poll mode driver support
> for 25Gbps/100Gbps/200Gbps Huawei SPx series Network Adapters.
You are supposed to remove the "*** BLURB HERE ***" when editing
the commit message.
* Re: [RFC 00/18] add hinic3 PMD driver
2025-04-18 18:18 ` [RFC 00/18] add hinic3 PMD driver Stephen Hemminger
@ 2025-04-19 2:44 ` wangfeifei (J)
0 siblings, 0 replies; 30+ messages in thread
From: wangfeifei (J) @ 2025-04-19 2:44 UTC (permalink / raw)
To: Stephen Hemminger, Feifei Wang; +Cc: dev
-----Original Message-----
From: Stephen Hemminger <stephen@networkplumber.org>
Sent: April 19, 2025 2:19
To: Feifei Wang <wff_light@vip.163.com>
Cc: dev@dpdk.org
Subject: Re: [RFC 00/18] add hinic3 PMD driver
On Fri, 18 Apr 2025 17:05:46 +0800
Feifei Wang <wff_light@vip.163.com> wrote:
> *** BLURB HERE ***
> The hinic3 PMD (**librte_net_hinic3**) provides poll mode driver
> support for 25Gbps/100Gbps/200Gbps Huawei SPx series Network Adapters.
You are supposed to remove the "*** BLURB HERE ***" when editing the commit message.
[Feifei] Sorry, we missed this; the next version will fix it.
* Re: [RFC 00/18] add hinic3 PMD driver
2025-04-18 9:05 [RFC 00/18] add hinic3 PMD driver Feifei Wang
` (18 preceding siblings ...)
2025-04-18 18:18 ` [RFC 00/18] add hinic3 PMD driver Stephen Hemminger
@ 2025-04-18 18:20 ` Stephen Hemminger
2025-04-18 18:32 ` Stephen Hemminger
20 siblings, 0 replies; 30+ messages in thread
From: Stephen Hemminger @ 2025-04-18 18:20 UTC (permalink / raw)
To: Feifei Wang; +Cc: dev
On Fri, 18 Apr 2025 17:05:46 +0800
Feifei Wang <wff_light@vip.163.com> wrote:
> *** BLURB HERE ***
> The hinic3 PMD (**librte_net_hinic3**) provides poll mode driver support
> for 25Gbps/100Gbps/200Gbps Huawei SPx series Network Adapters.
>
> Feifei Wang (3):
> net/hinic3: add intro doc for hinic3
> net/hinic3: add dev ops
> net/hinic3: add Rx/Tx functions
>
> Xin Wang (7):
> net/hinic3: add basic header files
> net/hinic3: add support for cmdq mechanism
> net/hinic3: add NIC event module
> net/hinic3: add context and work queue support
> net/hinic3: add device initailization
> net/hinic3: add MML and EEPROM access feature
> net/hinic3: add RSS promiscuous ops
>
> Yi Chen (8):
> net/hinic3: add hardware interfaces of BAR operation
> net/hinic3: add eq mechanism function code
> net/hinic3: add mgmt module function code
> net/hinic3: add module about hardware operation
> net/hinic3: add a NIC business configuration module
> net/hinic3: add a mailbox communication module
> net/hinic3: add FDIR flow control module
> drivers/net: add hinic3 PMD build and doc files
>
> .mailmap | 4 +-
> MAINTAINERS | 6 +
> doc/guides/nics/features/hinic3.ini | 9 +
> doc/guides/nics/hinic3.rst | 52 +
> doc/guides/nics/index.rst | 1 +
> doc/guides/rel_notes/release_25_07.rst | 32 +-
> drivers/net/hinic3/base/hinic3_cmd.h | 231 ++
> drivers/net/hinic3/base/hinic3_cmdq.c | 975 +++++
> drivers/net/hinic3/base/hinic3_cmdq.h | 230 ++
> drivers/net/hinic3/base/hinic3_compat.h | 266 ++
> drivers/net/hinic3/base/hinic3_csr.h | 108 +
> drivers/net/hinic3/base/hinic3_eqs.c | 719 ++++
> drivers/net/hinic3/base/hinic3_eqs.h | 98 +
> drivers/net/hinic3/base/hinic3_hw_cfg.c | 240 ++
> drivers/net/hinic3/base/hinic3_hw_cfg.h | 121 +
> drivers/net/hinic3/base/hinic3_hw_comm.c | 452 +++
> drivers/net/hinic3/base/hinic3_hw_comm.h | 366 ++
> drivers/net/hinic3/base/hinic3_hwdev.c | 573 +++
> drivers/net/hinic3/base/hinic3_hwdev.h | 177 +
> drivers/net/hinic3/base/hinic3_hwif.c | 779 ++++
> drivers/net/hinic3/base/hinic3_hwif.h | 142 +
> drivers/net/hinic3/base/hinic3_mbox.c | 1392 +++++++
> drivers/net/hinic3/base/hinic3_mbox.h | 199 +
> drivers/net/hinic3/base/hinic3_mgmt.c | 392 ++
> drivers/net/hinic3/base/hinic3_mgmt.h | 121 +
> drivers/net/hinic3/base/hinic3_nic_cfg.c | 1828 +++++++++
> drivers/net/hinic3/base/hinic3_nic_cfg.h | 1527 ++++++++
> drivers/net/hinic3/base/hinic3_nic_event.c | 433 +++
> drivers/net/hinic3/base/hinic3_nic_event.h | 39 +
> drivers/net/hinic3/base/hinic3_wq.c | 148 +
> drivers/net/hinic3/base/hinic3_wq.h | 109 +
> drivers/net/hinic3/base/meson.build | 50 +
> drivers/net/hinic3/hinic3_ethdev.c | 3866 ++++++++++++++++++++
> drivers/net/hinic3/hinic3_ethdev.h | 167 +
> drivers/net/hinic3/hinic3_fdir.c | 1394 +++++++
> drivers/net/hinic3/hinic3_fdir.h | 398 ++
> drivers/net/hinic3/hinic3_flow.c | 1700 +++++++++
> drivers/net/hinic3/hinic3_flow.h | 80 +
> drivers/net/hinic3/hinic3_nic_io.c | 827 +++++
> drivers/net/hinic3/hinic3_nic_io.h | 169 +
> drivers/net/hinic3/hinic3_rx.c | 1096 ++++++
> drivers/net/hinic3/hinic3_rx.h | 356 ++
> drivers/net/hinic3/hinic3_tx.c | 1028 ++++++
> drivers/net/hinic3/hinic3_tx.h | 315 ++
> drivers/net/hinic3/meson.build | 44 +
> drivers/net/hinic3/mml/hinic3_dbg.c | 171 +
> drivers/net/hinic3/mml/hinic3_dbg.h | 160 +
> drivers/net/hinic3/mml/hinic3_mml_cmd.c | 375 ++
> drivers/net/hinic3/mml/hinic3_mml_cmd.h | 131 +
> drivers/net/hinic3/mml/hinic3_mml_ioctl.c | 215 ++
> drivers/net/hinic3/mml/hinic3_mml_lib.c | 136 +
> drivers/net/hinic3/mml/hinic3_mml_lib.h | 275 ++
> drivers/net/hinic3/mml/hinic3_mml_main.c | 167 +
> drivers/net/hinic3/mml/hinic3_mml_queue.c | 749 ++++
> drivers/net/hinic3/mml/hinic3_mml_queue.h | 256 ++
> drivers/net/hinic3/mml/meson.build | 62 +
> drivers/net/meson.build | 1 +
> 57 files changed, 25926 insertions(+), 31 deletions(-)
> create mode 100644 doc/guides/nics/features/hinic3.ini
> create mode 100644 doc/guides/nics/hinic3.rst
> create mode 100644 drivers/net/hinic3/base/hinic3_cmd.h
> create mode 100644 drivers/net/hinic3/base/hinic3_cmdq.c
> create mode 100644 drivers/net/hinic3/base/hinic3_cmdq.h
> create mode 100644 drivers/net/hinic3/base/hinic3_compat.h
> create mode 100644 drivers/net/hinic3/base/hinic3_csr.h
> create mode 100644 drivers/net/hinic3/base/hinic3_eqs.c
> create mode 100644 drivers/net/hinic3/base/hinic3_eqs.h
> create mode 100644 drivers/net/hinic3/base/hinic3_hw_cfg.c
> create mode 100644 drivers/net/hinic3/base/hinic3_hw_cfg.h
> create mode 100644 drivers/net/hinic3/base/hinic3_hw_comm.c
> create mode 100644 drivers/net/hinic3/base/hinic3_hw_comm.h
> create mode 100644 drivers/net/hinic3/base/hinic3_hwdev.c
> create mode 100644 drivers/net/hinic3/base/hinic3_hwdev.h
> create mode 100644 drivers/net/hinic3/base/hinic3_hwif.c
> create mode 100644 drivers/net/hinic3/base/hinic3_hwif.h
> create mode 100644 drivers/net/hinic3/base/hinic3_mbox.c
> create mode 100644 drivers/net/hinic3/base/hinic3_mbox.h
> create mode 100644 drivers/net/hinic3/base/hinic3_mgmt.c
> create mode 100644 drivers/net/hinic3/base/hinic3_mgmt.h
> create mode 100644 drivers/net/hinic3/base/hinic3_nic_cfg.c
> create mode 100644 drivers/net/hinic3/base/hinic3_nic_cfg.h
> create mode 100644 drivers/net/hinic3/base/hinic3_nic_event.c
> create mode 100644 drivers/net/hinic3/base/hinic3_nic_event.h
> create mode 100644 drivers/net/hinic3/base/hinic3_wq.c
> create mode 100644 drivers/net/hinic3/base/hinic3_wq.h
> create mode 100644 drivers/net/hinic3/base/meson.build
> create mode 100644 drivers/net/hinic3/hinic3_ethdev.c
> create mode 100644 drivers/net/hinic3/hinic3_ethdev.h
> create mode 100644 drivers/net/hinic3/hinic3_fdir.c
> create mode 100644 drivers/net/hinic3/hinic3_fdir.h
> create mode 100644 drivers/net/hinic3/hinic3_flow.c
> create mode 100644 drivers/net/hinic3/hinic3_flow.h
> create mode 100644 drivers/net/hinic3/hinic3_nic_io.c
> create mode 100644 drivers/net/hinic3/hinic3_nic_io.h
> create mode 100644 drivers/net/hinic3/hinic3_rx.c
> create mode 100644 drivers/net/hinic3/hinic3_rx.h
> create mode 100644 drivers/net/hinic3/hinic3_tx.c
> create mode 100644 drivers/net/hinic3/hinic3_tx.h
> create mode 100644 drivers/net/hinic3/meson.build
> create mode 100644 drivers/net/hinic3/mml/hinic3_dbg.c
> create mode 100644 drivers/net/hinic3/mml/hinic3_dbg.h
> create mode 100644 drivers/net/hinic3/mml/hinic3_mml_cmd.c
> create mode 100644 drivers/net/hinic3/mml/hinic3_mml_cmd.h
> create mode 100644 drivers/net/hinic3/mml/hinic3_mml_ioctl.c
> create mode 100644 drivers/net/hinic3/mml/hinic3_mml_lib.c
> create mode 100644 drivers/net/hinic3/mml/hinic3_mml_lib.h
> create mode 100644 drivers/net/hinic3/mml/hinic3_mml_main.c
> create mode 100644 drivers/net/hinic3/mml/hinic3_mml_queue.c
> create mode 100644 drivers/net/hinic3/mml/hinic3_mml_queue.h
> create mode 100644 drivers/net/hinic3/mml/meson.build
>
Clang is spotting a possible bug in the driver.
FAILED: drivers/net/hinic3/base/libspnic_base.a.p/hinic3_nic_cfg.c.o
clang -Idrivers/net/hinic3/base/libspnic_base.a.p -Idrivers/net/hinic3/base -I../drivers/net/hinic3/base -Idrivers/net/hinic3 -I../drivers/net/hinic3 -Ilib/eal/common -I../lib/eal/common -I. -I.. -Iconfig -I../config -Ilib/eal/include -I../lib/eal/include -Ilib/eal/linux/include -I../lib/eal/linux/include -Ilib/eal/x86/include -I../lib/eal/x86/include -I../kernel/linux -Ilib/eal -I../lib/eal -Ilib/kvargs -I../lib/kvargs -Ilib/log -I../lib/log -Ilib/metrics -I../lib/metrics -Ilib/telemetry -I../lib/telemetry -Ilib/ethdev -I../lib/ethdev -Ilib/net -I../lib/net -Ilib/mbuf -I../lib/mbuf -Ilib/mempool -I../lib/mempool -Ilib/ring -I../lib/ring -Ilib/meter -I../lib/meter -Idrivers/bus/pci -I../drivers/bus/pci -I../drivers/bus/pci/linux -Ilib/pci -I../lib/pci -Ilib/hash -I../lib/hash -Ilib/rcu -I../lib/rcu -fcolor-diagnostics -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch -Wextra -Werror -std=c11 -O3 -include rte_config.h -Wvla -Wcast-qual -Wdeprecated -Wformat -Wformat-nonliteral -Wformat-security -Wmissing-declarations -Wmissing-prototypes -Wnested-externs -Wold-style-definition -Wpointer-arith -Wsign-compare -Wstrict-prototypes -Wundef -Wwrite-strings -Wno-missing-field-initializers -D_GNU_SOURCE -fPIC -march=native -mrtm -DALLOW_EXPERIMENTAL_API -DALLOW_INTERNAL_API -Wno-address-of-packed-member -DHW_CONVERT_ENDIAN -D__HINIC_HUAWEI_SECUREC__ -fPIC -fstack-protector-strong -MD -MQ drivers/net/hinic3/base/libspnic_base.a.p/hinic3_nic_cfg.c.o -MF drivers/net/hinic3/base/libspnic_base.a.p/hinic3_nic_cfg.c.o.d -o drivers/net/hinic3/base/libspnic_base.a.p/hinic3_nic_cfg.c.o -c ../drivers/net/hinic3/base/hinic3_nic_cfg.c
../drivers/net/hinic3/base/hinic3_nic_cfg.c:1237:34: error: expression does not compute the number of elements in this array; element type is 'u16' (aka 'unsigned short'), not 'u32' (aka 'unsigned int') [-Werror,-Wsizeof-array-div]
1237 | size = sizeof(indir_tbl->entry) / sizeof(u32);
| ~~~~~~~~~~~~~~~~ ^
../drivers/net/hinic3/base/hinic3_nic_cfg.c:1237:34: note: place parentheses around the 'sizeof(u32)' expression to silence this warning
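For readers unfamiliar with -Wsizeof-array-div: it fires when an array's total size is divided by the size of a type other than the array's element type, which usually means the computed element count is wrong. A minimal standalone reproduction follows; the struct is illustrative, not the actual hinic3 definition.

#include <stdint.h>
#include <stdio.h>

typedef uint16_t u16;
typedef uint32_t u32;

struct indir_table {
	u16 entry[256];	/* element type is u16 */
};

int main(void)
{
	struct indir_table tbl;

	/* Flagged by clang: dividing a u16 array's size by sizeof(u32)
	 * gives 128 here, not the 256 elements the idiom suggests. */
	size_t suspicious = sizeof(tbl.entry) / sizeof(u32);

	/* The self-correcting element-count idiom: */
	size_t count = sizeof(tbl.entry) / sizeof(tbl.entry[0]);

	printf("%zu vs %zu\n", suspicious, count);
	return 0;
}

If the driver genuinely means to count 32-bit words (e.g. the table is programmed to hardware in u32 chunks), the parentheses clang suggests around sizeof(u32) silence the warning; otherwise the element-count idiom is the fix.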
And then lots of other overrun bugs:
*Build Failed #3:
OS: AzureLinux3.0-64
Target: x86_64-native-linuxapp-gcc
FAILED: drivers/net/hinic3/base/libspnic_base.a.p/hinic3_mbox.c.o
gcc -Idrivers/net/hinic3/base/libspnic_base.a.p -Idrivers/net/hinic3/base -I../drivers/net/hinic3/base -Idrivers/net/hinic3 -I../drivers/net/hinic3 -Ilib/eal/common -I../lib/eal/common -I. -I.. -Iconfig -I../config -Ilib/eal/include -I../lib/eal/include -Ilib/eal/linux/include -I../lib/eal/linux/include -Ilib/eal/x86/include -I../lib/eal/x86/include -I../kernel/linux -Ilib/eal -I../lib/eal -Ilib/kvargs -I../lib/kvargs -Ilib/log -I../lib/log -Ilib/metrics -I../lib/metrics -Ilib/telemetry -I../lib/telemetry -Ilib/ethdev -I../lib/ethdev -Ilib/net -I../lib/net -Ilib/mbuf -I../lib/mbuf -Ilib/mempool -I../lib/mempool -Ilib/ring -I../lib/ring -Ilib/meter -I../lib/meter -Idrivers/bus/pci -I../drivers/bus/pci -I../drivers/bus/pci/linux -Ilib/pci -I../lib/pci -Ilib/hash -I../lib/hash -Ilib/rcu -I../lib/rcu -fdiagnostics-color=always -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch -Wextra -Werror -std=c11 -O3 -include rte_config.h -Wvla -Wcast-qual -Wdeprecated -Wformat -Wformat-nonliteral -Wformat-security -Wmissing-declarations -Wmissing-prototypes -Wnested-externs -Wold-style-definition -Wpointer-arith -Wsign-compare -Wstrict-prototypes -Wundef -Wwrite-strings -Wno-packed-not-aligned -Wno-missing-field-initializers -D_GNU_SOURCE -fPIC -march=native -mrtm -DALLOW_EXPERIMENTAL_API -DALLOW_INTERNAL_API -Wno-format-truncation -Wno-address-of-packed-member -DHW_CONVERT_ENDIAN -D__HINIC_HUAWEI_SECUREC__ -fPIC -fstack-protector-strong -MD -MQ drivers/net/hinic3/base/libspnic_base.a.p/hinic3_mbox.c.o -MF drivers/net/hinic3/base/libspnic_base.a.p/hinic3_mbox.c.o.d -o drivers/net/hinic3/base/libspnic_base.a.p/hinic3_mbox.c.o -c ../drivers/net/hinic3/base/hinic3_mbox.c
In file included from /usr/lib/gcc/x86_64-pc-linux-gnu/13.2.0/include/immintrin.h:43,
from ../lib/eal/x86/include/rte_rtm.h:8,
from ../lib/eal/x86/include/rte_spinlock.h:9,
from ../lib/eal/x86/include/rte_rwlock.h:9,
from ../lib/eal/include/rte_eal_memconfig.h:10,
from ../lib/eal/include/rte_memory.h:21,
from ../lib/eal/include/rte_malloc.h:16,
from ../lib/ethdev/ethdev_pci.h:9,
from ../drivers/net/hinic3/base/hinic3_compat.h:14,
from ../drivers/net/hinic3/base/hinic3_mbox.c:5:
In function ‘_mm256_storeu_si256’,
inlined from ‘rte_mov32’ at ../lib/eal/x86/include/rte_memcpy.h:128:2,
inlined from ‘rte_mov64’ at ../lib/eal/x86/include/rte_memcpy.h:149:2,
inlined from ‘rte_mov128’ at ../lib/eal/x86/include/rte_memcpy.h:160:2,
inlined from ‘rte_memcpy_generic’ at ../lib/eal/x86/include/rte_memcpy.h:422:4,
inlined from ‘rte_memcpy’ at ../lib/eal/x86/include/rte_memcpy.h:757:10,
inlined from ‘mbox_copy_send_data’ at ../drivers/net/hinic3/base/hinic3_mbox.c:508:3,
inlined from ‘send_mbox_seg’ at ../drivers/net/hinic3/base/hinic3_mbox.c:630:2,
inlined from ‘send_mbox_to_func’ at ../drivers/net/hinic3/base/hinic3_mbox.c:777:9:
/usr/lib/gcc/x86_64-pc-linux-gnu/13.2.0/include/avxintrin.h:935:8: error: array subscript ‘__m256i_u[1]’ is partly outside array bounds of ‘u8[48]’ {aka ‘unsigned char[48]’} [-Werror=array-bounds=]
935 | *__P = __A;
| ~~~~~^~~~~
../drivers/net/hinic3/base/hinic3_mbox.c: In function ‘send_mbox_to_func’:
../drivers/net/hinic3/base/hinic3_mbox.c:504:12: note: at offset 32 into object ‘mbox_max_buf’ of size 48
504 | u8 mbox_max_buf[MBOX_SEG_LEN] = {0};
| ^~~~~~~~~~~~
In function ‘_mm256_storeu_si256’,
inlined from ‘rte_mov32’ at ../lib/eal/x86/include/rte_memcpy.h:128:2,
inlined from ‘rte_mov64’ at ../lib/eal/x86/include/rte_memcpy.h:148:2,
inlined from ‘rte_mov128’ at ../lib/eal/x86/include/rte_memcpy.h:161:2,
inlined from ‘rte_memcpy_generic’ at ../lib/eal/x86/include/rte_memcpy.h:422:4,
inlined from ‘rte_memcpy’ at ../lib/eal/x86/include/rte_memcpy.h:757:10,
inlined from ‘mbox_copy_send_data’ at ../drivers/net/hinic3/base/hinic3_mbox.c:508:3,
inlined from ‘send_mbox_seg’ at ../drivers/net/hinic3/base/hinic3_mbox.c:630:2,
inlined from ‘send_mbox_to_func’ at ../drivers/net/hinic3/base/hinic3_mbox.c:777:9:
/usr/lib/gcc/x86_64-pc-linux-gnu/13.2.0/include/avxintrin.h:935:8: error: array subscript 2 is outside array bounds of ‘u8[48]’ {aka ‘unsigned char[48]’} [-Werror=array-bounds=]
935 | *__P = __A;
| ~~~~~^~~~~
../drivers/net/hinic3/base/hinic3_mbox.c: In function ‘send_mbox_to_func’:
../drivers/net/hinic3/base/hinic3_mbox.c:504:12: note
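What GCC is proving above: mbox_copy_send_data() feeds rte_memcpy() a copy involving the 48-byte mbox_max_buf, and rte_memcpy's AVX path works in 32-byte chunks, so a second 256-bit store at offset 32 would end at byte 64, past the object. The sketch below shows the shape of one common mitigation under stated assumptions: the function name and the clamp are illustrative, and the proper fix depends on the driver's real segment-length invariants.

#include <stdint.h>
#include <string.h>

#define MBOX_SEG_LEN 48	/* from the log: u8 mbox_max_buf[48] */

/* If the compiler cannot prove len <= 48, a vectorized copy such as
 * DPDK's rte_memcpy() may be flagged (or actually overrun), because
 * its 32-byte AVX stores land at offsets 0 and 32 and the second one
 * ends at byte 64 of a 48-byte object. Bounding the length explicitly
 * documents and enforces the invariant. */
static void
copy_mbox_seg(uint8_t dst[MBOX_SEG_LEN], const void *src, size_t len)
{
	if (len > MBOX_SEG_LEN)
		len = MBOX_SEG_LEN;	/* assumed invariant, made visible */
	memcpy(dst, src, len);		/* plain memcpy never over-copies */
}

int main(void)
{
	uint8_t seg[MBOX_SEG_LEN];
	const uint8_t payload[64] = {0};

	copy_mbox_seg(seg, payload, sizeof(payload));	/* clamped to 48 */
	return seg[0];
}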
* Re: [RFC 00/18] add hinic3 PMD driver
2025-04-18 9:05 [RFC 00/18] add hinic3 PMD driver Feifei Wang
` (19 preceding siblings ...)
2025-04-18 18:20 ` Stephen Hemminger
@ 2025-04-18 18:32 ` Stephen Hemminger
2025-04-19 3:30 ` Re: " wangfeifei (J)
20 siblings, 1 reply; 30+ messages in thread
From: Stephen Hemminger @ 2025-04-18 18:32 UTC (permalink / raw)
To: Feifei Wang; +Cc: dev
On Fri, 18 Apr 2025 17:05:46 +0800
Feifei Wang <wff_light@vip.163.com> wrote:
> *** BLURB HERE ***
> The hinic3 PMD (**librte_net_hinic3**) provides poll mode driver support
> for 25Gbps/100Gbps/200Gbps Huawei SPx series Network Adapters.
Fix the build and other little things, and resubmit.
There is a lot more here; don't expect it to be merged for several more revisions.
* Re: [RFC 00/18] add hinic3 PMD driver
2025-04-18 18:32 ` Stephen Hemminger
@ 2025-04-19 3:30 ` wangfeifei (J)
0 siblings, 0 replies; 30+ messages in thread
From: wangfeifei (J) @ 2025-04-19 3:30 UTC (permalink / raw)
To: Stephen Hemminger, Feifei Wang
Cc: dev, zengweiliang zengweiliang, Dumin(Dumin,KunPeng)
-----Original Message-----
From: Stephen Hemminger <stephen@networkplumber.org>
Sent: April 19, 2025 2:32
To: Feifei Wang <wff_light@vip.163.com>
Cc: dev@dpdk.org
Subject: Re: [RFC 00/18] add hinic3 PMD driver
On Fri, 18 Apr 2025 17:05:46 +0800
Feifei Wang <wff_light@vip.163.com> wrote:
> *** BLURB HERE ***
> The hinic3 PMD (**librte_net_hinic3**) provides poll mode driver
> support for 25Gbps/100Gbps/200Gbps Huawei SPx series Network Adapters.
Fix the build and other little things, and resubmit.
There is a lot more here; don't expect it to be merged for several more revisions.
[Feifei] Thanks for the review. Understood; this version is an RFC proposal submitted before 4.20.
We will fix the above errors in the next version.