DPDK patches and discussions
* [RFC 00/18] add hinic3 PMD driver
@ 2025-04-18  7:02 Feifei Wang
  2025-04-18  7:02 ` [RFC 01/18] net/hinic3: add intro doc for hinic3 Feifei Wang
                   ` (9 more replies)
  0 siblings, 10 replies; 16+ messages in thread
From: Feifei Wang @ 2025-04-18  7:02 UTC (permalink / raw)
  To: dev

The hinic3 PMD (**librte_net_hinic3**) provides poll mode driver support
for 25Gbps/100Gbps/200Gbps Huawei SPx series Network Adapters.

Feifei Wang (3):
  net/hinic3: add intro doc for hinic3
  net/hinic3: add dev ops
  net/hinic3: add Rx/Tx functions

Xin Wang (7):
  net/hinic3: add basic header files
  net/hinic3: add support for cmdq mechanism
  net/hinic3: add NIC event module
  net/hinic3: add context and work queue support
  net/hinic3: add device initialization
  net/hinic3: add MML and EEPROM access feature
  net/hinic3: add RSS promiscuous ops

Yi Chen (8):
  net/hinic3: add hardware interfaces of BAR operation
  net/hinic3: add eq mechanism function code
  net/hinic3: add mgmt module function code
  net/hinic3: add module about hardware operation
  net/hinic3: add a NIC business configuration module
  net/hinic3: add a mailbox communication module
  net/hinic3: add FDIR flow control module
  drivers/net: add hinic3 PMD build and doc files

 .mailmap                                   |    4 +-
 MAINTAINERS                                |    6 +
 doc/guides/nics/features/hinic3.ini        |    9 +
 doc/guides/nics/hinic3.rst                 |   52 +
 doc/guides/nics/index.rst                  |    1 +
 doc/guides/rel_notes/release_25_07.rst     |   32 +-
 drivers/net/hinic3/base/hinic3_cmd.h       |  231 ++
 drivers/net/hinic3/base/hinic3_cmdq.c      |  975 +++++
 drivers/net/hinic3/base/hinic3_cmdq.h      |  230 ++
 drivers/net/hinic3/base/hinic3_compat.h    |  266 ++
 drivers/net/hinic3/base/hinic3_csr.h       |  108 +
 drivers/net/hinic3/base/hinic3_eqs.c       |  719 ++++
 drivers/net/hinic3/base/hinic3_eqs.h       |   98 +
 drivers/net/hinic3/base/hinic3_hw_cfg.c    |  240 ++
 drivers/net/hinic3/base/hinic3_hw_cfg.h    |  121 +
 drivers/net/hinic3/base/hinic3_hw_comm.c   |  452 +++
 drivers/net/hinic3/base/hinic3_hw_comm.h   |  366 ++
 drivers/net/hinic3/base/hinic3_hwdev.c     |  573 +++
 drivers/net/hinic3/base/hinic3_hwdev.h     |  177 +
 drivers/net/hinic3/base/hinic3_hwif.c      |  779 ++++
 drivers/net/hinic3/base/hinic3_hwif.h      |  142 +
 drivers/net/hinic3/base/hinic3_mbox.c      | 1392 +++++++
 drivers/net/hinic3/base/hinic3_mbox.h      |  199 +
 drivers/net/hinic3/base/hinic3_mgmt.c      |  392 ++
 drivers/net/hinic3/base/hinic3_mgmt.h      |  121 +
 drivers/net/hinic3/base/hinic3_nic_cfg.c   | 1828 +++++++++
 drivers/net/hinic3/base/hinic3_nic_cfg.h   | 1527 ++++++++
 drivers/net/hinic3/base/hinic3_nic_event.c |  433 +++
 drivers/net/hinic3/base/hinic3_nic_event.h |   39 +
 drivers/net/hinic3/base/hinic3_wq.c        |  148 +
 drivers/net/hinic3/base/hinic3_wq.h        |  109 +
 drivers/net/hinic3/base/meson.build        |   50 +
 drivers/net/hinic3/hinic3_ethdev.c         | 3866 ++++++++++++++++++++
 drivers/net/hinic3/hinic3_ethdev.h         |  167 +
 drivers/net/hinic3/hinic3_fdir.c           | 1394 +++++++
 drivers/net/hinic3/hinic3_fdir.h           |  398 ++
 drivers/net/hinic3/hinic3_flow.c           | 1700 +++++++++
 drivers/net/hinic3/hinic3_flow.h           |   80 +
 drivers/net/hinic3/hinic3_nic_io.c         |  827 +++++
 drivers/net/hinic3/hinic3_nic_io.h         |  169 +
 drivers/net/hinic3/hinic3_rx.c             | 1096 ++++++
 drivers/net/hinic3/hinic3_rx.h             |  356 ++
 drivers/net/hinic3/hinic3_tx.c             | 1028 ++++++
 drivers/net/hinic3/hinic3_tx.h             |  315 ++
 drivers/net/hinic3/meson.build             |   44 +
 drivers/net/hinic3/mml/hinic3_dbg.c        |  171 +
 drivers/net/hinic3/mml/hinic3_dbg.h        |  160 +
 drivers/net/hinic3/mml/hinic3_mml_cmd.c    |  375 ++
 drivers/net/hinic3/mml/hinic3_mml_cmd.h    |  131 +
 drivers/net/hinic3/mml/hinic3_mml_ioctl.c  |  215 ++
 drivers/net/hinic3/mml/hinic3_mml_lib.c    |  136 +
 drivers/net/hinic3/mml/hinic3_mml_lib.h    |  275 ++
 drivers/net/hinic3/mml/hinic3_mml_main.c   |  167 +
 drivers/net/hinic3/mml/hinic3_mml_queue.c  |  749 ++++
 drivers/net/hinic3/mml/hinic3_mml_queue.h  |  256 ++
 drivers/net/hinic3/mml/meson.build         |   62 +
 drivers/net/meson.build                    |    1 +
 57 files changed, 25926 insertions(+), 31 deletions(-)
 create mode 100644 doc/guides/nics/features/hinic3.ini
 create mode 100644 doc/guides/nics/hinic3.rst
 create mode 100644 drivers/net/hinic3/base/hinic3_cmd.h
 create mode 100644 drivers/net/hinic3/base/hinic3_cmdq.c
 create mode 100644 drivers/net/hinic3/base/hinic3_cmdq.h
 create mode 100644 drivers/net/hinic3/base/hinic3_compat.h
 create mode 100644 drivers/net/hinic3/base/hinic3_csr.h
 create mode 100644 drivers/net/hinic3/base/hinic3_eqs.c
 create mode 100644 drivers/net/hinic3/base/hinic3_eqs.h
 create mode 100644 drivers/net/hinic3/base/hinic3_hw_cfg.c
 create mode 100644 drivers/net/hinic3/base/hinic3_hw_cfg.h
 create mode 100644 drivers/net/hinic3/base/hinic3_hw_comm.c
 create mode 100644 drivers/net/hinic3/base/hinic3_hw_comm.h
 create mode 100644 drivers/net/hinic3/base/hinic3_hwdev.c
 create mode 100644 drivers/net/hinic3/base/hinic3_hwdev.h
 create mode 100644 drivers/net/hinic3/base/hinic3_hwif.c
 create mode 100644 drivers/net/hinic3/base/hinic3_hwif.h
 create mode 100644 drivers/net/hinic3/base/hinic3_mbox.c
 create mode 100644 drivers/net/hinic3/base/hinic3_mbox.h
 create mode 100644 drivers/net/hinic3/base/hinic3_mgmt.c
 create mode 100644 drivers/net/hinic3/base/hinic3_mgmt.h
 create mode 100644 drivers/net/hinic3/base/hinic3_nic_cfg.c
 create mode 100644 drivers/net/hinic3/base/hinic3_nic_cfg.h
 create mode 100644 drivers/net/hinic3/base/hinic3_nic_event.c
 create mode 100644 drivers/net/hinic3/base/hinic3_nic_event.h
 create mode 100644 drivers/net/hinic3/base/hinic3_wq.c
 create mode 100644 drivers/net/hinic3/base/hinic3_wq.h
 create mode 100644 drivers/net/hinic3/base/meson.build
 create mode 100644 drivers/net/hinic3/hinic3_ethdev.c
 create mode 100644 drivers/net/hinic3/hinic3_ethdev.h
 create mode 100644 drivers/net/hinic3/hinic3_fdir.c
 create mode 100644 drivers/net/hinic3/hinic3_fdir.h
 create mode 100644 drivers/net/hinic3/hinic3_flow.c
 create mode 100644 drivers/net/hinic3/hinic3_flow.h
 create mode 100644 drivers/net/hinic3/hinic3_nic_io.c
 create mode 100644 drivers/net/hinic3/hinic3_nic_io.h
 create mode 100644 drivers/net/hinic3/hinic3_rx.c
 create mode 100644 drivers/net/hinic3/hinic3_rx.h
 create mode 100644 drivers/net/hinic3/hinic3_tx.c
 create mode 100644 drivers/net/hinic3/hinic3_tx.h
 create mode 100644 drivers/net/hinic3/meson.build
 create mode 100644 drivers/net/hinic3/mml/hinic3_dbg.c
 create mode 100644 drivers/net/hinic3/mml/hinic3_dbg.h
 create mode 100644 drivers/net/hinic3/mml/hinic3_mml_cmd.c
 create mode 100644 drivers/net/hinic3/mml/hinic3_mml_cmd.h
 create mode 100644 drivers/net/hinic3/mml/hinic3_mml_ioctl.c
 create mode 100644 drivers/net/hinic3/mml/hinic3_mml_lib.c
 create mode 100644 drivers/net/hinic3/mml/hinic3_mml_lib.h
 create mode 100644 drivers/net/hinic3/mml/hinic3_mml_main.c
 create mode 100644 drivers/net/hinic3/mml/hinic3_mml_queue.c
 create mode 100644 drivers/net/hinic3/mml/hinic3_mml_queue.h
 create mode 100644 drivers/net/hinic3/mml/meson.build

-- 
2.47.0.windows.2


^ permalink raw reply	[flat|nested] 16+ messages in thread

* [RFC 01/18] net/hinic3: add intro doc for hinic3
  2025-04-18  7:02 [RFC 00/18] add hinic3 PMD driver Feifei Wang
@ 2025-04-18  7:02 ` Feifei Wang
  2025-04-18  7:02 ` [RFC 10/18] net/hinic3: add context and work queue support Feifei Wang
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 16+ messages in thread
From: Feifei Wang @ 2025-04-18  7:02 UTC (permalink / raw)
  To: dev; +Cc: Feifei Wang, Yi Chen, Xin Wang

From: Feifei Wang <wangfeifei40@huawei.com>

This patch adds basic documentation files for the hinic3 driver.

Signed-off-by: Feifei Wang <wangfeifei40@huawei.com>
Signed-off-by: Yi Chen <chenyi221@huawei.com>
Reviewed-by: Xin Wang <wangxin679@h-partners.com>
---
 .mailmap                               |  4 +-
 MAINTAINERS                            |  6 +++
 doc/guides/nics/hinic3.rst             | 52 ++++++++++++++++++++++++++
 doc/guides/nics/index.rst              |  1 +
 doc/guides/rel_notes/release_25_07.rst | 32 +---------------
 5 files changed, 64 insertions(+), 31 deletions(-)
 create mode 100644 doc/guides/nics/hinic3.rst

diff --git a/.mailmap b/.mailmap
index d8439b79ce..8c1341e783 100644
--- a/.mailmap
+++ b/.mailmap
@@ -429,7 +429,7 @@ Fang TongHao <fangtonghao@sangfor.com.cn>
 Fan Zhang <fanzhang.oss@gmail.com> <roy.fan.zhang@intel.com>
 Farah Smith <farah.smith@broadcom.com>
 Fei Chen <chenwei.0515@bytedance.com>
-Feifei Wang <feifei.wang2@arm.com> <feifei.wang@arm.com>
+Feifei Wang <wangfeifei40@huawei.com> <feifei.wang1218@gmail.com> <feifei.wang2@arm.com> <feifei.wang@arm.com> <wff_light@vip.163.com>
 Fei Qin <fei.qin@corigine.com>
 Fengjiang Liu <liufengjiang.0426@bytedance.com>
 Fengnan Chang <changfengnan@bytedance.com>
@@ -1718,6 +1718,7 @@ Xingguang He <xingguang.he@intel.com>
 Xingyou Chen <niatlantice@gmail.com>
 Xing Wang <xing_wang@realsil.com.cn>
 Xinying Yu <xinying.yu@corigine.com>
+Xin Wang <wangxin679@h-partners.com>
 Xin Long <longxin.xl@alibaba-inc.com>
 Xi Zhang <xix.zhang@intel.com>
 Xuan Ding <xuan.ding@intel.com>
@@ -1750,6 +1751,7 @@ Yelena Krivosheev <yelena@marvell.com>
 Yerden Zhumabekov <e_zhumabekov@sts.kz> <yerden.zhumabekov@sts.kz>
 Yevgeny Kliteynik <kliteyn@nvidia.com>
 Yicai Lu <luyicai@huawei.com>
+Yi Chen <chenyi221@huawei.com>
 Yiding Zhou <yidingx.zhou@intel.com>
 Yi Li <liyi1@chinatelecom.cn>
 Yi Liu <yi.liu@nxp.com>
diff --git a/MAINTAINERS b/MAINTAINERS
index 167cc74a15..f96a27210d 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -773,6 +773,12 @@ F: drivers/net/hinic/
 F: doc/guides/nics/hinic.rst
 F: doc/guides/nics/features/hinic.ini
 
+Huawei hinic3
+M: Feifei Wang <wangfeifei40@huawei.com>
+F: drivers/net/hinic3/
+F: doc/guides/nics/hinic3.rst
+F: doc/guides/nics/features/hinic3.ini
+
 Intel Network Common Code
 M: Bruce Richardson <bruce.richardson@intel.com>
 T: git://dpdk.org/next/dpdk-next-net-intel
diff --git a/doc/guides/nics/hinic3.rst b/doc/guides/nics/hinic3.rst
new file mode 100644
index 0000000000..c7080c8c1d
--- /dev/null
+++ b/doc/guides/nics/hinic3.rst
@@ -0,0 +1,52 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+    Copyright(c) 2025 Huawei Technologies Co., Ltd
+
+
+HINIC3 Poll Mode Driver
+=======================
+
+The hinic3 PMD (**librte_net_hinic3**) provides poll mode driver support
+for 25Gbps/100Gbps/200Gbps Huawei SPx series Network Adapters.
+
+Features
+--------
+
+- Multi-arch support: x86_64, ARMv8
+- Multiple queues for TX and RX
+- Receive Side Scaling (RSS)
+- Flow filtering
+- Checksum offload
+- TSO offload
+- Promiscuous mode
+- Port hardware statistics
+- Link state information
+- Link flow control
+- Scatter-gather for TX and RX
+- Allmulticast mode
+- MTU update
+- Multicast MAC filter
+- Flow API
+- Set Link down or up
+- VLAN filter and VLAN offload
+- SR-IOV - Partially supported at this point, VFIO only
+- FW version
+- LRO
+
+Prerequisites
+-------------
+
+- Learn about Huawei Hi1823 series intelligent NICs at
+  `<https://www.hikunpeng.com/compute/component/nic>`_.
+
+- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to setup the basic DPDK environment.
+
+
+Driver compilation and testing
+------------------------------
+
+Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
+for details.
+
+Limitations or Known issues
+---------------------------
+
+x86-32, Windows, and BSD are not supported yet.
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 10a2eca3b0..5ae4021ccb 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -33,6 +33,7 @@ Network Interface Controller Drivers
     fm10k
     gve
     hinic
+    hinic3
     hns3
     i40e
     ice
diff --git a/doc/guides/rel_notes/release_25_07.rst b/doc/guides/rel_notes/release_25_07.rst
index 093b85d206..1d65cf7829 100644
--- a/doc/guides/rel_notes/release_25_07.rst
+++ b/doc/guides/rel_notes/release_25_07.rst
@@ -24,37 +24,9 @@ DPDK Release 25.07
 New Features
 ------------
 
-.. This section should contain new features added in this release.
-   Sample format:
-
-   * **Add a title in the past tense with a full stop.**
-
-     Add a short 1-2 sentence description in the past tense.
-     The description should be enough to allow someone scanning
-     the release notes to understand the new feature.
-
-     If the feature adds a lot of sub-features you can use a bullet list
-     like this:
-
-     * Added feature foo to do something.
-     * Enhanced feature bar to do something else.
-
-     Refer to the previous release notes for examples.
-
-     Suggested order in release notes items:
-     * Core libs (EAL, mempool, ring, mbuf, buses)
-     * Device abstraction libs and PMDs (ordered alphabetically by vendor name)
-       - ethdev (lib, PMDs)
-       - cryptodev (lib, PMDs)
-       - eventdev (lib, PMDs)
-       - etc
-     * Other libs
-     * Apps, Examples, Tools (if significant)
-
-     This section is a comment. Do not overwrite or remove it.
-     Also, make sure to start the actual text at the margin.
-     =======================================================
+* **Added Huawei hinic3 net driver [EXPERIMENTAL].**
 
+  * Added network driver for the Huawei SPx series Network Adapters.
 
 Removed Items
 -------------
-- 
2.47.0.windows.2


^ permalink raw reply	[flat|nested] 16+ messages in thread

* [RFC 10/18] net/hinic3: add context and work queue support
  2025-04-18  7:02 [RFC 00/18] add hinic3 PMD driver Feifei Wang
  2025-04-18  7:02 ` [RFC 01/18] net/hinic3: add intro doc for hinic3 Feifei Wang
@ 2025-04-18  7:02 ` Feifei Wang
  2025-04-18  7:02 ` [RFC 11/18] net/hinic3: add a mailbox communication module Feifei Wang
                   ` (7 subsequent siblings)
  9 siblings, 0 replies; 16+ messages in thread
From: Feifei Wang @ 2025-04-18  7:02 UTC (permalink / raw)
  To: dev; +Cc: Xin Wang, Feifei Wang, Yi Chen

From: Xin Wang <wangxin679@h-partners.com>

The work queue is used by the cmdq and for Tx/Rx buffer
descriptors. The NIC business logic needs to configure the cmdq
context and the txq/rxq contexts. This patch adds the data
structures and functions for work queues and contexts.
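As a rough illustration of the index scheme these work queues use, here is a minimal sketch (the demo_* names are hypothetical, not the driver's API): a power-of-two ring keeps monotonically increasing producer/consumer indices plus a delta counter of free elements, and masks an index only when addressing a slot.

```c
#include <stdint.h>

/* Hypothetical sketch of a masked-index ring as used by the work queue.
 * q_depth must be a power of two so that (idx & mask) wraps correctly. */
struct demo_wq {
	uint16_t q_depth;  /* number of queue elements, power of two */
	uint16_t mask;     /* q_depth - 1 */
	uint32_t prod_idx; /* monotonically increasing producer index */
	uint32_t cons_idx; /* monotonically increasing consumer index */
	int32_t delta;     /* free elements remaining */
};

/* Reserve num elements; return the masked producer slot, or -1 if full. */
static int demo_get_wqe(struct demo_wq *wq, int num, uint16_t *pi)
{
	if (wq->delta < num)
		return -1;
	wq->delta -= num;
	*pi = (uint16_t)wq->prod_idx & wq->mask;
	wq->prod_idx += num;
	return 0;
}

/* Release num elements once they have been consumed. */
static void demo_put_wqe(struct demo_wq *wq, int num)
{
	wq->cons_idx += num;
	wq->delta += num;
}
```

Because the indices only grow and are masked on use, wrap-around needs no special casing; the real driver additionally updates the delta with atomic operations, since producer and consumer can run in different contexts.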

Signed-off-by: Xin Wang <wangxin679@h-partners.com>
Reviewed-by: Feifei Wang <wangfeifei40@huawei.com>
Reviewed-by: Yi Chen <chenyi221@huawei.com>
---
 drivers/net/hinic3/base/hinic3_wq.c | 148 ++++++++++++++++++++++++++++
 drivers/net/hinic3/base/hinic3_wq.h | 109 ++++++++++++++++++++
 2 files changed, 257 insertions(+)
 create mode 100644 drivers/net/hinic3/base/hinic3_wq.c
 create mode 100644 drivers/net/hinic3/base/hinic3_wq.h

diff --git a/drivers/net/hinic3/base/hinic3_wq.c b/drivers/net/hinic3/base/hinic3_wq.c
new file mode 100644
index 0000000000..9bccb10c9a
--- /dev/null
+++ b/drivers/net/hinic3/base/hinic3_wq.c
@@ -0,0 +1,148 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Huawei Technologies Co., Ltd
+ */
+#include <rte_bus_pci.h>
+#include <rte_errno.h>
+#include <rte_ether.h>
+#include <rte_malloc.h>
+#include <rte_mbuf.h>
+#include <rte_mempool.h>
+#include <rte_pci.h>
+
+#include "hinic3_compat.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_wq.h"
+
+static void
+free_wq_pages(struct hinic3_wq *wq)
+{
+	hinic3_memzone_free(wq->wq_mz);
+
+	wq->queue_buf_paddr = 0;
+	wq->queue_buf_vaddr = 0;
+}
+
+static int
+alloc_wq_pages(struct hinic3_hwdev *hwdev, struct hinic3_wq *wq, int qid)
+{
+	const struct rte_memzone *wq_mz;
+
+	wq_mz = hinic3_dma_zone_reserve(hwdev->eth_dev, "hinic3_wq_mz",
+					(uint16_t)qid, wq->wq_buf_size,
+					RTE_PGSIZE_256K, SOCKET_ID_ANY);
+	if (!wq_mz) {
+		PMD_DRV_LOG(ERR, "Allocate wq[%d] rq_mz failed", qid);
+		return -ENOMEM;
+	}
+
+	memset(wq_mz->addr, 0, wq->wq_buf_size);
+	wq->wq_mz = wq_mz;
+	wq->queue_buf_paddr = wq_mz->iova;
+	wq->queue_buf_vaddr = (u64)(u64 *)wq_mz->addr;
+
+	return 0;
+}
+
+void
+hinic3_put_wqe(struct hinic3_wq *wq, int num_wqebbs)
+{
+	wq->cons_idx += num_wqebbs;
+	rte_atomic_fetch_add_explicit(&wq->delta, num_wqebbs,
+				      rte_memory_order_seq_cst);
+}
+
+void *
+hinic3_read_wqe(struct hinic3_wq *wq, int num_wqebbs, u16 *cons_idx)
+{
+	u16 curr_cons_idx;
+
+	if ((rte_atomic_load_explicit(&wq->delta, rte_memory_order_seq_cst) +
+	     num_wqebbs) > wq->q_depth)
+		return NULL;
+
+	curr_cons_idx = (u16)(wq->cons_idx);
+
+	curr_cons_idx = MASKED_WQE_IDX(wq, curr_cons_idx);
+
+	*cons_idx = curr_cons_idx;
+
+	return WQ_WQE_ADDR(wq, (u32)(*cons_idx));
+}
+
+int
+hinic3_cmdq_alloc(struct hinic3_wq *wq, void *dev, int cmdq_blocks,
+		  u32 wq_buf_size, u32 wqebb_shift, u16 q_depth)
+{
+	struct hinic3_hwdev *hwdev = (struct hinic3_hwdev *)dev;
+	int i, j;
+	int err;
+
+	/* Validate q_depth is power of 2 & wqebb_size is not 0. */
+	for (i = 0; i < cmdq_blocks; i++) {
+		wq[i].wqebb_size = 1U << wqebb_shift;
+		wq[i].wqebb_shift = wqebb_shift;
+		wq[i].wq_buf_size = wq_buf_size;
+		wq[i].q_depth = q_depth;
+
+		err = alloc_wq_pages(hwdev, &wq[i], i);
+		if (err) {
+			PMD_DRV_LOG(ERR, "Failed to alloc CMDQ blocks");
+			goto cmdq_block_err;
+		}
+
+		wq[i].cons_idx = 0;
+		wq[i].prod_idx = 0;
+		rte_atomic_store_explicit(&wq[i].delta, q_depth,
+					  rte_memory_order_seq_cst);
+
+		wq[i].mask = q_depth - 1;
+	}
+
+	return 0;
+
+cmdq_block_err:
+	for (j = 0; j < i; j++)
+		free_wq_pages(&wq[j]);
+
+	return err;
+}
+
+void
+hinic3_cmdq_free(struct hinic3_wq *wq, int cmdq_blocks)
+{
+	int i;
+
+	for (i = 0; i < cmdq_blocks; i++)
+		free_wq_pages(&wq[i]);
+}
+
+void
+hinic3_wq_wqe_pg_clear(struct hinic3_wq *wq)
+{
+	wq->cons_idx = 0;
+	wq->prod_idx = 0;
+
+	memset((void *)wq->queue_buf_vaddr, 0, wq->wq_buf_size);
+}
+
+void *
+hinic3_get_wqe(struct hinic3_wq *wq, int num_wqebbs, u16 *prod_idx)
+{
+	u16 curr_prod_idx;
+
+	rte_atomic_fetch_sub_explicit(&wq->delta, num_wqebbs,
+				      rte_memory_order_seq_cst);
+	curr_prod_idx = (u16)(wq->prod_idx);
+	wq->prod_idx += num_wqebbs;
+	*prod_idx = MASKED_WQE_IDX(wq, curr_prod_idx);
+
+	return WQ_WQE_ADDR(wq, (u32)(*prod_idx));
+}
+
+void
+hinic3_set_sge(struct hinic3_sge *sge, uint64_t addr, u32 len)
+{
+	sge->hi_addr = upper_32_bits(addr);
+	sge->lo_addr = lower_32_bits(addr);
+	sge->len = len;
+}
diff --git a/drivers/net/hinic3/base/hinic3_wq.h b/drivers/net/hinic3/base/hinic3_wq.h
new file mode 100644
index 0000000000..84d54c2aeb
--- /dev/null
+++ b/drivers/net/hinic3/base/hinic3_wq.h
@@ -0,0 +1,109 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Huawei Technologies Co., Ltd
+ */
+
+#ifndef _HINIC3_WQ_H_
+#define _HINIC3_WQ_H_
+
+/* Use 0-level CLA; page size must be SQ 16B (WQE) * 64K (max_q_depth). */
+#define HINIC3_DEFAULT_WQ_PAGE_SIZE 0x100000
+#define HINIC3_HW_WQ_PAGE_SIZE	    0x1000
+
+#define MASKED_WQE_IDX(wq, idx) ((idx) & (wq)->mask)
+
+#define WQ_WQE_ADDR(wq, idx)                                                           \
+	({                                                                             \
+		typeof(wq) __wq = (wq);                                                \
+		(void *)((u64)(__wq->queue_buf_vaddr) + ((idx) << __wq->wqebb_shift)); \
+	})
+
+struct hinic3_sge {
+	u32 hi_addr;
+	u32 lo_addr;
+	u32 len;
+};
+
+struct hinic3_wq {
+	/* The addresses are 64 bit in the HW. */
+	u64 queue_buf_vaddr;
+
+	u16 q_depth;
+	u16 mask;
+	RTE_ATOMIC(int32_t)delta;
+
+	u32 cons_idx;
+	u32 prod_idx;
+
+	u64 queue_buf_paddr;
+
+	u32 wqebb_size;
+	u32 wqebb_shift;
+
+	u32 wq_buf_size;
+
+	const struct rte_memzone *wq_mz;
+
+	u32 rsvd[5];
+};
+
+void hinic3_put_wqe(struct hinic3_wq *wq, int num_wqebbs);
+
+/**
+ * Read a WQE and update CI.
+ *
+ * @param[in] wq
+ * The work queue structure.
+ * @param[in] num_wqebbs
+ * The number of work queue elements to read.
+ * @param[out] cons_idx
+ * The updated consumer index.
+ *
+ * @return
+ * The address of WQE, or NULL if not enough elements are available.
+ */
+void *hinic3_read_wqe(struct hinic3_wq *wq, int num_wqebbs, u16 *cons_idx);
+
+/**
+ * Allocate command queue blocks and initialize related parameters.
+ *
+ * @param[in] wq
+ * The cmdq->wq structure.
+ * @param[in] dev
+ * The device context for the hardware.
+ * @param[in] cmdq_blocks
+ * The number of command queue blocks to allocate.
+ * @param[in] wq_buf_size
+ * The size of each work queue buffer.
+ * @param[in] wqebb_shift
+ * The shift value for determining the work queue element size.
+ * @param[in] q_depth
+ * The depth of each command queue.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int hinic3_cmdq_alloc(struct hinic3_wq *wq, void *dev, int cmdq_blocks,
+		      u32 wq_buf_size, u32 wqebb_shift, u16 q_depth);
+
+void hinic3_cmdq_free(struct hinic3_wq *wq, int cmdq_blocks);
+
+void hinic3_wq_wqe_pg_clear(struct hinic3_wq *wq);
+
+/**
+ * Get WQE and update PI.
+ *
+ * @param[in] wq
+ * The cmdq->wq structure.
+ * @param[in] num_wqebbs
+ * The number of work queue elements to allocate.
+ * @param[out] prod_idx
+ * The updated producer index, masked according to the queue size.
+ *
+ * @return
+ * The address of the work queue element.
+ */
+void *hinic3_get_wqe(struct hinic3_wq *wq, int num_wqebbs, u16 *prod_idx);
+
+void hinic3_set_sge(struct hinic3_sge *sge, uint64_t addr, u32 len);
+
+#endif /* _HINIC3_WQ_H_ */
-- 
2.47.0.windows.2


^ permalink raw reply	[flat|nested] 16+ messages in thread

* [RFC 11/18] net/hinic3: add a mailbox communication module
  2025-04-18  7:02 [RFC 00/18] add hinic3 PMD driver Feifei Wang
  2025-04-18  7:02 ` [RFC 01/18] net/hinic3: add intro doc for hinic3 Feifei Wang
  2025-04-18  7:02 ` [RFC 10/18] net/hinic3: add context and work queue support Feifei Wang
@ 2025-04-18  7:02 ` Feifei Wang
  2025-04-18  7:02 ` [RFC 12/18] net/hinic3: add device initialization Feifei Wang
                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 16+ messages in thread
From: Feifei Wang @ 2025-04-18  7:02 UTC (permalink / raw)
  To: dev; +Cc: Yi Chen, Xin Wang, Feifei Wang

From: Yi Chen <chenyi221@huawei.com>

This patch adds mailbox support to the hinic3 PMD. The mailbox
is used for communication between the PF/VF driver and the MPU.
It provides the mailbox-related data structures and functional
code.
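The per-segment reassembly rule the mailbox enforces (segment 0 latches the message identity; later segments must arrive in order with a matching message id, or the message is dropped) can be sketched as follows. This is an illustrative sketch, not the driver's API: the demo_* names are invented, and the 48-byte segment size mirrors MBOX_SEG_LEN.

```c
#include <stdint.h>
#include <string.h>

#define DEMO_SEG_LEN    48  /* payload bytes per 64B mailbox slot */
#define DEMO_SEQ_ID_MAX 42  /* mirrors SEQ_ID_MAX_VAL */

struct demo_recv_mbox {
	uint8_t seq_id; /* sequence id of the last accepted segment */
	uint8_t msg_id; /* identity of the message being reassembled */
	uint8_t buf[(DEMO_SEQ_ID_MAX + 1) * DEMO_SEG_LEN];
};

/* Accept one segment: 0 on success, -1 on a sequence/identity error. */
static int demo_recv_segment(struct demo_recv_mbox *m, uint8_t seq_id,
			     uint8_t msg_id, const uint8_t *seg, uint8_t len)
{
	if (seq_id > DEMO_SEQ_ID_MAX || len > DEMO_SEG_LEN)
		return -1;
	if (seq_id == 0) {
		/* First segment: start a new message. */
		m->seq_id = 0;
		m->msg_id = msg_id;
	} else if (seq_id != m->seq_id + 1 || msg_id != m->msg_id) {
		/* Out-of-order segment or mixed-up message: drop it. */
		return -1;
	} else {
		m->seq_id = seq_id;
	}
	memcpy(&m->buf[(size_t)seq_id * DEMO_SEG_LEN], seg, len);
	return 0;
}
```

The same shape appears in check_mbox_segment() below: seq_id 0 resets the receive state, and any gap in seq_id or change of msg_id/mod/cmd invalidates the whole in-flight message rather than just the segment.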

Signed-off-by: Yi Chen <chenyi221@huawei.com>
Reviewed-by: Xin Wang <wangxin679@h-partners.com>
Reviewed-by: Feifei Wang <wangfeifei40@huawei.com>
---
 drivers/net/hinic3/base/hinic3_mbox.c | 1392 +++++++++++++++++++++++++
 drivers/net/hinic3/base/hinic3_mbox.h |  199 ++++
 2 files changed, 1591 insertions(+)
 create mode 100644 drivers/net/hinic3/base/hinic3_mbox.c
 create mode 100644 drivers/net/hinic3/base/hinic3_mbox.h

diff --git a/drivers/net/hinic3/base/hinic3_mbox.c b/drivers/net/hinic3/base/hinic3_mbox.c
new file mode 100644
index 0000000000..78dfee2b1c
--- /dev/null
+++ b/drivers/net/hinic3/base/hinic3_mbox.c
@@ -0,0 +1,1392 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Huawei Technologies Co., Ltd
+ */
+
+#include "hinic3_compat.h"
+#include "hinic3_csr.h"
+#include "hinic3_eqs.h"
+#include "hinic3_hw_cfg.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_hwif.h"
+#include "hinic3_mbox.h"
+#include "hinic3_mgmt.h"
+#include "hinic3_nic_event.h"
+
+#define HINIC3_MBOX_INT_DST_FUNC_SHIFT	    0
+#define HINIC3_MBOX_INT_DST_AEQN_SHIFT	    10
+#define HINIC3_MBOX_INT_SRC_RESP_AEQN_SHIFT 12
+#define HINIC3_MBOX_INT_STAT_DMA_SHIFT	    14
+/* The size of the data to be sent (in units of 4 bytes). */
+#define HINIC3_MBOX_INT_TX_SIZE_SHIFT 20
+/* SO_RO (strong order, relaxed order). */
+#define HINIC3_MBOX_INT_STAT_DMA_SO_RO_SHIFT 25
+#define HINIC3_MBOX_INT_WB_EN_SHIFT	     28
+
+#define HINIC3_MBOX_INT_DST_AEQN_MASK	    0x3
+#define HINIC3_MBOX_INT_SRC_RESP_AEQN_MASK  0x3
+#define HINIC3_MBOX_INT_STAT_DMA_MASK	    0x3F
+#define HINIC3_MBOX_INT_TX_SIZE_MASK	    0x1F
+#define HINIC3_MBOX_INT_STAT_DMA_SO_RO_MASK 0x3
+#define HINIC3_MBOX_INT_WB_EN_MASK	    0x1
+
+#define HINIC3_MBOX_INT_SET(val, field)           \
+	(((val) & HINIC3_MBOX_INT_##field##_MASK) \
+	 << HINIC3_MBOX_INT_##field##_SHIFT)
+
+enum hinic3_mbox_tx_status {
+	TX_NOT_DONE = 1,
+};
+
+#define HINIC3_MBOX_CTRL_TRIGGER_AEQE_SHIFT 0
+
+/*
+ * Specifies the issue request for the message data.
+ * 0 - Tx request is done;
+ * 1 - Tx request is in process.
+ */
+#define HINIC3_MBOX_CTRL_TX_STATUS_SHIFT 1
+#define HINIC3_MBOX_CTRL_DST_FUNC_SHIFT	 16
+
+#define HINIC3_MBOX_CTRL_TRIGGER_AEQE_MASK 0x1
+#define HINIC3_MBOX_CTRL_TX_STATUS_MASK	   0x1
+#define HINIC3_MBOX_CTRL_DST_FUNC_MASK	   0x1FFF
+
+#define HINIC3_MBOX_CTRL_SET(val, field)           \
+	(((val) & HINIC3_MBOX_CTRL_##field##_MASK) \
+	 << HINIC3_MBOX_CTRL_##field##_SHIFT)
+
+#define MBOX_SEGLEN_MASK \
+	HINIC3_MSG_HEADER_SET(HINIC3_MSG_HEADER_SEG_LEN_MASK, SEG_LEN)
+
+#define MBOX_MSG_POLLING_TIMEOUT 500000 /* Unit is 10us. */
+#define HINIC3_MBOX_COMP_TIME	 40000U
+
+#define MBOX_MAX_BUF_SZ	      2048UL
+#define MBOX_HEADER_SZ	      8
+#define HINIC3_MBOX_DATA_SIZE (MBOX_MAX_BUF_SZ - MBOX_HEADER_SZ)
+
+#define MBOX_TLP_HEADER_SZ 16
+
+/* Mbox size is 64B, 8B for mbox_header, 8B reserved. */
+#define MBOX_SEG_LEN	   48
+#define MBOX_SEG_LEN_ALIGN 4
+#define MBOX_WB_STATUS_LEN 16UL
+
+/* Mbox write back status is 16B, only first 4B is used. */
+#define MBOX_WB_STATUS_ERRCODE_MASK	 0xFFFF
+#define MBOX_WB_STATUS_MASK		 0xFF
+#define MBOX_WB_ERROR_CODE_MASK		 0xFF00
+#define MBOX_WB_STATUS_FINISHED_SUCCESS	 0xFF
+#define MBOX_WB_STATUS_FINISHED_WITH_ERR 0xFE
+#define MBOX_WB_STATUS_NOT_FINISHED	 0x00
+
+/* Determine the write back status. */
+#define MBOX_STATUS_FINISHED(wb) \
+	(((wb) & MBOX_WB_STATUS_MASK) != MBOX_WB_STATUS_NOT_FINISHED)
+#define MBOX_STATUS_SUCCESS(wb) \
+	(((wb) & MBOX_WB_STATUS_MASK) == MBOX_WB_STATUS_FINISHED_SUCCESS)
+#define MBOX_STATUS_ERRCODE(wb) ((wb) & MBOX_WB_ERROR_CODE_MASK)
+
+/* Indicate the value related to the sequence ID. */
+#define SEQ_ID_START_VAL 0
+#define SEQ_ID_MAX_VAL	 42
+
+#define DST_AEQ_IDX_DEFAULT_VAL 0
+#define SRC_AEQ_IDX_DEFAULT_VAL 0
+#define NO_DMA_ATTRIBUTE_VAL	0
+
+#define MBOX_MSG_NO_DATA_LEN 1
+
+/* Obtain the specified content of the mailbox. */
+#define MBOX_BODY_FROM_HDR(header) ((u8 *)(header) + MBOX_HEADER_SZ)
+#define MBOX_AREA(hwif) \
+	((hwif)->cfg_regs_base + HINIC3_FUNC_CSR_MAILBOX_DATA_OFF)
+
+#define IS_PF_OR_PPF_SRC(src_func_idx) ((src_func_idx) < HINIC3_MAX_PF_FUNCS)
+
+#define MBOX_RESPONSE_ERROR	  0x1
+#define MBOX_MSG_ID_MASK	  0xF
+#define MBOX_MSG_ID(func_to_func) ((func_to_func)->send_msg_id)
+#define MBOX_MSG_ID_INC(func_to_func)                             \
+	({                                                        \
+		typeof(func_to_func) __func = (func_to_func);     \
+		MBOX_MSG_ID(__func) = (MBOX_MSG_ID(__func) + 1) & \
+				      MBOX_MSG_ID_MASK;           \
+	})
+
+/* Max message counter waits to process for one function. */
+#define HINIC3_MAX_MSG_CNT_TO_PROCESS 10
+
+enum mbox_ordering_type {
+	STRONG_ORDER,
+};
+
+enum mbox_write_back_type {
+	WRITE_BACK = 1,
+};
+
+enum mbox_aeq_trig_type {
+	NOT_TRIGGER,
+	TRIGGER,
+};
+
+static int send_mbox_to_func(struct hinic3_mbox *func_to_func,
+			     enum hinic3_mod_type mod, u16 cmd, void *msg,
+			     u16 msg_len, u16 dst_func,
+			     enum hinic3_msg_direction_type direction,
+			     enum hinic3_msg_ack_type ack_type,
+			     struct mbox_msg_info *msg_info);
+static int send_tlp_mbox_to_func(struct hinic3_mbox *func_to_func,
+				 enum hinic3_mod_type mod, u16 cmd, void *msg,
+				 u16 msg_len, u16 dst_func,
+				 enum hinic3_msg_direction_type direction,
+				 enum hinic3_msg_ack_type ack_type,
+				 struct mbox_msg_info *msg_info);
+
+static int
+recv_vf_mbox_handler(struct hinic3_mbox *func_to_func,
+		     struct hinic3_recv_mbox *recv_mbox, void *buf_out,
+		     u16 *out_size, __rte_unused void *param)
+{
+	int err = 0;
+
+	/*
+	 * Invoke the corresponding processing function according to the type of
+	 * the received mailbox.
+	 */
+	switch (recv_mbox->mod) {
+	case HINIC3_MOD_COMM:
+		err = vf_handle_pf_comm_mbox(func_to_func->hwdev, func_to_func,
+					     recv_mbox->cmd, recv_mbox->mbox,
+					     recv_mbox->mbox_len, buf_out,
+					     out_size);
+		break;
+	case HINIC3_MOD_CFGM:
+		err = cfg_mbx_vf_proc_msg(func_to_func->hwdev,
+			func_to_func->hwdev->cfg_mgmt,
+			recv_mbox->cmd, recv_mbox->mbox, recv_mbox->mbox_len,
+			buf_out, out_size);
+		break;
+	case HINIC3_MOD_L2NIC:
+		err = hinic3_vf_event_handler(func_to_func->hwdev,
+			func_to_func->hwdev->cfg_mgmt,
+			recv_mbox->cmd, recv_mbox->mbox, recv_mbox->mbox_len,
+			buf_out, out_size);
+		break;
+	case HINIC3_MOD_HILINK:
+		err = hinic3_vf_mag_event_handler(func_to_func->hwdev,
+			func_to_func->hwdev->cfg_mgmt,
+			recv_mbox->cmd, recv_mbox->mbox, recv_mbox->mbox_len,
+			buf_out, out_size);
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "No handler, mod: %d", recv_mbox->mod);
+		err = HINIC3_MBOX_VF_CMD_ERROR;
+		break;
+	}
+
+	return err;
+}
+
+/**
+ * Respond to a received mailbox message: construct a response and send it.
+ *
+ * @param[in] func_to_func
+ * Context for inter-function communication.
+ * @param[in] recv_mbox
+ * Pointer to the received inter-function mailbox structure.
+ * @param[in] err
+ * Error code from handling the received message.
+ * @param[in] out_size
+ * Size of the response data.
+ * @param[in] src_func_idx
+ * Index of the source function.
+ */
+static void
+response_for_recv_func_mbox(struct hinic3_mbox *func_to_func,
+			    struct hinic3_recv_mbox *recv_mbox, int err,
+			    u16 out_size, u16 src_func_idx)
+{
+	struct mbox_msg_info msg_info = {0};
+
+	if (recv_mbox->ack_type == HINIC3_MSG_ACK) {
+		msg_info.msg_id = recv_mbox->msg_info.msg_id;
+		if (err)
+			msg_info.status = HINIC3_MBOX_PF_SEND_ERR;
+
+		/* Select the sending function based on the packet type. */
+		if (IS_TLP_MBX(src_func_idx))
+			send_tlp_mbox_to_func(func_to_func, recv_mbox->mod,
+					      recv_mbox->cmd,
+					      recv_mbox->buf_out, out_size,
+					      src_func_idx, HINIC3_MSG_RESPONSE,
+					      HINIC3_MSG_NO_ACK, &msg_info);
+		else
+			send_mbox_to_func(func_to_func, recv_mbox->mod,
+					  recv_mbox->cmd, recv_mbox->buf_out,
+					  out_size, src_func_idx,
+					  HINIC3_MSG_RESPONSE,
+					  HINIC3_MSG_NO_ACK, &msg_info);
+	}
+}
+
+static bool
+check_func_mbox_ack_first(u8 mod)
+{
+	return mod == HINIC3_MOD_HILINK;
+}
+
+static void
+recv_func_mbox_handler(struct hinic3_mbox *func_to_func,
+		       struct hinic3_recv_mbox *recv_mbox, u16 src_func_idx,
+		       void *param)
+{
+	struct hinic3_hwdev *hwdev = func_to_func->hwdev;
+	void *buf_out = recv_mbox->buf_out;
+	bool ack_first = false;
+	u16 out_size = MBOX_MAX_BUF_SZ;
+	int err = 0;
+	/* Check whether an ACK must be sent before handling the message. */
+	ack_first = check_func_mbox_ack_first(recv_mbox->mod);
+	if (ack_first && recv_mbox->ack_type == HINIC3_MSG_ACK) {
+		response_for_recv_func_mbox(func_to_func, recv_mbox, err,
+					    out_size, src_func_idx);
+	}
+
+	/* Process the mailbox message in the VF. */
+	if (HINIC3_IS_VF(hwdev)) {
+		err = recv_vf_mbox_handler(func_to_func, recv_mbox, buf_out,
+					   &out_size, param);
+	} else {
+		err = -EINVAL;
+		PMD_DRV_LOG(ERR,
+			"PMD doesn't support non-VF handle mailbox message");
+	}
+
+	if (!out_size || err)
+		out_size = MBOX_MSG_NO_DATA_LEN;
+
+	if (!ack_first && recv_mbox->ack_type == HINIC3_MSG_ACK) {
+		response_for_recv_func_mbox(func_to_func, recv_mbox, err,
+					    out_size, src_func_idx);
+	}
+}
+
+/**
+ * Process a mailbox response from another function.
+ *
+ * @param[in] func_to_func
+ * Mailbox for inter-function communication.
+ * @param[in] recv_mbox
+ * Received mailbox message.
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+resp_mbox_handler(struct hinic3_mbox *func_to_func,
+		  struct hinic3_recv_mbox *recv_mbox)
+{
+	int ret;
+
+	rte_spinlock_lock(&func_to_func->mbox_lock);
+	if (recv_mbox->msg_info.msg_id == func_to_func->send_msg_id &&
+	    func_to_func->event_flag == EVENT_START) {
+		func_to_func->event_flag = EVENT_SUCCESS;
+		ret = 0;
+	} else {
+		PMD_DRV_LOG(ERR,
+			    "Mbox response timeout, current send msg id(0x%x), "
+			    "recv msg id(0x%x), status(0x%x)",
+			    func_to_func->send_msg_id,
+			    recv_mbox->msg_info.msg_id,
+			    recv_mbox->msg_info.status);
+		ret = HINIC3_MSG_HANDLER_RES;
+	}
+	rte_spinlock_unlock(&func_to_func->mbox_lock);
+	return ret;
+}
+
+/**
+ * Check whether the received mailbox message segment is valid.
+ *
+ * @param[in,out] recv_mbox
+ * Received mailbox message.
+ * @param[in] mbox_header
+ * Mailbox header.
+ * @return
+ * True if the segment is valid, false otherwise.
+ */
+static bool
+check_mbox_segment(struct hinic3_recv_mbox *recv_mbox, u64 mbox_header)
+{
+	u8 seq_id, seg_len, msg_id, mod;
+	u16 src_func_idx, cmd;
+
+	/* Get info from the mailbox header. */
+	seq_id = HINIC3_MSG_HEADER_GET(mbox_header, SEQID);
+	seg_len = HINIC3_MSG_HEADER_GET(mbox_header, SEG_LEN);
+	src_func_idx = HINIC3_MSG_HEADER_GET(mbox_header, SRC_GLB_FUNC_IDX);
+	msg_id = HINIC3_MSG_HEADER_GET(mbox_header, MSG_ID);
+	mod = HINIC3_MSG_HEADER_GET(mbox_header, MODULE);
+	cmd = HINIC3_MSG_HEADER_GET(mbox_header, CMD);
+
+	if (seq_id > SEQ_ID_MAX_VAL || seg_len > MBOX_SEG_LEN)
+		goto seg_err;
+
+	/* First segment of a new message: save its info to recv_mbox. */
+	if (seq_id == 0) {
+		recv_mbox->seq_id = seq_id;
+		recv_mbox->msg_info.msg_id = msg_id;
+		recv_mbox->mod = mod;
+		recv_mbox->cmd = cmd;
+	} else {
+		if ((seq_id != recv_mbox->seq_id + 1) ||
+		    msg_id != recv_mbox->msg_info.msg_id ||
+		    mod != recv_mbox->mod || cmd != recv_mbox->cmd)
+			goto seg_err;
+
+		recv_mbox->seq_id = seq_id;
+	}
+
+	return true;
+
+seg_err:
+	PMD_DRV_LOG(ERR,
+		    "Mailbox segment check failed, src func id: 0x%x, "
+		    "front seg info: seq id: 0x%x, msg id: 0x%x, mod: 0x%x, "
+		    "cmd: 0x%x",
+		    src_func_idx, recv_mbox->seq_id, recv_mbox->msg_info.msg_id,
+		    recv_mbox->mod, recv_mbox->cmd);
+	PMD_DRV_LOG(ERR,
+		    "Current seg info: seg len: 0x%x, seq id: 0x%x, "
+		    "msg id: 0x%x, mod: 0x%x, cmd: 0x%x",
+		    seg_len, seq_id, msg_id, mod, cmd);
+
+	return false;
+}
+
+static int
+recv_mbox_handler(struct hinic3_mbox *func_to_func, void *header,
+		  struct hinic3_recv_mbox *recv_mbox, void *param)
+{
+	u64 mbox_header = *((u64 *)header);
+	void *mbox_body = MBOX_BODY_FROM_HDR(header);
+	u16 src_func_idx;
+	int pos;
+	u8 seq_id;
+
+	/* Obtain information from the mailbox header. */
+	seq_id = HINIC3_MSG_HEADER_GET(mbox_header, SEQID);
+	src_func_idx = HINIC3_MSG_HEADER_GET(mbox_header, SRC_GLB_FUNC_IDX);
+
+	if (!check_mbox_segment(recv_mbox, mbox_header)) {
+		recv_mbox->seq_id = SEQ_ID_MAX_VAL;
+		return HINIC3_MSG_HANDLER_RES;
+	}
+
+	pos = seq_id * MBOX_SEG_LEN;
+	memcpy((void *)((u8 *)recv_mbox->mbox + pos), (void *)mbox_body,
+	       (size_t)HINIC3_MSG_HEADER_GET(mbox_header, SEG_LEN));
+
+	if (!HINIC3_MSG_HEADER_GET(mbox_header, LAST))
+		return HINIC3_MSG_HANDLER_RES;
+	/* Set the receive mailbox information. */
+	recv_mbox->cmd = HINIC3_MSG_HEADER_GET(mbox_header, CMD);
+	recv_mbox->mod = HINIC3_MSG_HEADER_GET(mbox_header, MODULE);
+	recv_mbox->mbox_len = HINIC3_MSG_HEADER_GET(mbox_header, MSG_LEN);
+	recv_mbox->ack_type = HINIC3_MSG_HEADER_GET(mbox_header, NO_ACK);
+	recv_mbox->msg_info.msg_id = HINIC3_MSG_HEADER_GET(mbox_header, MSG_ID);
+	recv_mbox->msg_info.status = HINIC3_MSG_HEADER_GET(mbox_header, STATUS);
+	recv_mbox->seq_id = SEQ_ID_MAX_VAL;
+
+	/*
+	 * If the received message is a response message, call the mbox response
+	 * processing function.
+	 */
+	if (HINIC3_MSG_HEADER_GET(mbox_header, DIRECTION) ==
+	    HINIC3_MSG_RESPONSE) {
+		return resp_mbox_handler(func_to_func, recv_mbox);
+	}
+
+	recv_func_mbox_handler(func_to_func, recv_mbox, src_func_idx, param);
+	return HINIC3_MSG_HANDLER_RES;
+}
+
+static inline int
+hinic3_mbox_get_index(int func)
+{
+	return (func == HINIC3_MGMT_SRC_ID) ? HINIC3_MBOX_MPU_INDEX
+					    : HINIC3_MBOX_PF_INDEX;
+}
+
+int
+hinic3_mbox_func_aeqe_handler(void *handle, u8 *header, __rte_unused u8 size,
+			      void *param)
+{
+	struct hinic3_mbox *func_to_func = NULL;
+	struct hinic3_recv_mbox *recv_mbox = NULL;
+	u64 mbox_header = *((u64 *)header);
+	u64 src, dir;
+
+	/* Obtain the mailbox for communication between functions. */
+	func_to_func = ((struct hinic3_hwdev *)handle)->func_to_func;
+
+	dir = HINIC3_MSG_HEADER_GET(mbox_header, DIRECTION);
+	src = HINIC3_MSG_HEADER_GET(mbox_header, SRC_GLB_FUNC_IDX);
+
+	src = hinic3_mbox_get_index((int)src);
+	recv_mbox = (dir == HINIC3_MSG_DIRECT_SEND)
+			    ? &func_to_func->mbox_send[src]
+			    : &func_to_func->mbox_resp[src];
+	/* Process the received mailbox info. */
+	return recv_mbox_handler(func_to_func, (u64 *)header, recv_mbox, param);
+}
+
+static void
+clear_mbox_status(struct hinic3_send_mbox *mbox)
+{
+	*mbox->wb_status = 0;
+
+	/* Ensure the mailbox write back status is cleared. */
+	rte_wmb();
+}
+
+static void
+mbox_copy_header(struct hinic3_send_mbox *mbox, u64 *header)
+{
+	u32 *data = (u32 *)header;
+	u32 i, idx_max = MBOX_HEADER_SZ / sizeof(u32);
+
+	for (i = 0; i < idx_max; i++) {
+		rte_write32(cpu_to_be32(*(data + i)),
+			    mbox->data + i * sizeof(u32));
+	}
+}
+
+#define MBOX_DMA_MSG_INIT_XOR_VAL 0x5a5a5a5a
+static u32
+mbox_dma_msg_xor(u32 *data, u16 msg_len)
+{
+	u32 xor = MBOX_DMA_MSG_INIT_XOR_VAL;
+	u16 dw_len = msg_len / sizeof(u32);
+	u16 i;
+
+	for (i = 0; i < dw_len; i++)
+		xor ^= data[i];
+
+	return xor;
+}
+
+static void
+mbox_copy_send_data_addr(struct hinic3_send_mbox *mbox, u16 seg_len)
+{
+	u32 addr_h, addr_l, xor;
+
+	xor = mbox_dma_msg_xor(mbox->sbuff_vaddr, seg_len);
+	addr_h = upper_32_bits(mbox->sbuff_paddr);
+	addr_l = lower_32_bits(mbox->sbuff_paddr);
+
+	rte_write32(cpu_to_be32(xor), mbox->data + MBOX_HEADER_SZ);
+	rte_write32(cpu_to_be32(addr_h),
+		    mbox->data + MBOX_HEADER_SZ + sizeof(u32));
+	rte_write32(cpu_to_be32(addr_l),
+		    mbox->data + MBOX_HEADER_SZ + 0x2 * sizeof(u32));
+	rte_write32(cpu_to_be32((u32)seg_len),
+		    mbox->data + MBOX_HEADER_SZ + 0x3 * sizeof(u32));
+	/* Reserved field. */
+	rte_write32(0, mbox->data + MBOX_HEADER_SZ + 0x4 * sizeof(u32));
+	rte_write32(0, mbox->data + MBOX_HEADER_SZ + 0x5 * sizeof(u32));
+}
+
+static void
+mbox_copy_send_data(struct hinic3_send_mbox *mbox, void *seg, u16 seg_len)
+{
+	u32 *data = seg;
+	u32 data_len, chk_sz = sizeof(u32);
+	u32 i, idx_max;
+	u8 mbox_max_buf[MBOX_SEG_LEN] = {0};
+
+	/* The mbox message should be 4-byte aligned. */
+	if (seg_len % chk_sz) {
+		rte_memcpy(mbox_max_buf, seg, seg_len);
+		data = (u32 *)mbox_max_buf;
+	}
+
+	data_len = seg_len;
+	idx_max = RTE_ALIGN(data_len, chk_sz) / chk_sz;
+
+	for (i = 0; i < idx_max; i++) {
+		rte_write32(cpu_to_be32(*(data + i)),
+			    mbox->data + MBOX_HEADER_SZ + i * sizeof(u32));
+	}
+}
+
+static void
+write_mbox_msg_attr(struct hinic3_mbox *func_to_func, u16 dst_func,
+		    u16 dst_aeqn, u16 seg_len)
+{
+	u32 mbox_int, mbox_ctrl;
+
+	/* For VF, function ID must be self-learned by HW (PPF=1, PF=0). */
+	if (HINIC3_IS_VF(func_to_func->hwdev) &&
+	    dst_func != HINIC3_MGMT_SRC_ID) {
+		if (dst_func == HINIC3_HWIF_PPF_IDX(func_to_func->hwdev->hwif))
+			dst_func = 1;
+		else
+			dst_func = 0;
+	}
+	/* Set the interrupt attribute of the mailbox. */
+	mbox_int = HINIC3_MBOX_INT_SET(dst_aeqn, DST_AEQN) |
+		   HINIC3_MBOX_INT_SET(0, SRC_RESP_AEQN) |
+		   HINIC3_MBOX_INT_SET(NO_DMA_ATTRIBUTE_VAL, STAT_DMA) |
+		   HINIC3_MBOX_INT_SET(RTE_ALIGN(seg_len + MBOX_HEADER_SZ,
+						 MBOX_SEG_LEN_ALIGN) >>
+					       2,
+				       TX_SIZE) |
+		   HINIC3_MBOX_INT_SET(STRONG_ORDER, STAT_DMA_SO_RO) |
+		   HINIC3_MBOX_INT_SET(WRITE_BACK, WB_EN);
+
+	/* The interrupt attribute is written to the interrupt register. */
+	hinic3_hwif_write_reg(func_to_func->hwdev->hwif,
+			      HINIC3_FUNC_CSR_MAILBOX_INT_OFFSET_OFF, mbox_int);
+
+	rte_wmb(); /**< Writing the mbox intr attributes */
+
+	/* Set the control attributes of the mailbox and write to register. */
+	mbox_ctrl = HINIC3_MBOX_CTRL_SET(TX_NOT_DONE, TX_STATUS);
+	mbox_ctrl |= HINIC3_MBOX_CTRL_SET(NOT_TRIGGER, TRIGGER_AEQE);
+	mbox_ctrl |= HINIC3_MBOX_CTRL_SET(dst_func, DST_FUNC);
+	hinic3_hwif_write_reg(func_to_func->hwdev->hwif,
+			      HINIC3_FUNC_CSR_MAILBOX_CONTROL_OFF, mbox_ctrl);
+}
+
+/**
+ * Read the value of the mailbox register of the hardware device.
+ *
+ * @param[in] hwdev
+ * Pointer to hardware device structure.
+ */
+static void
+dump_mbox_reg(struct hinic3_hwdev *hwdev)
+{
+	u32 val;
+
+	/* Read the value of the MBOX control register. */
+	val = hinic3_hwif_read_reg(hwdev->hwif,
+				   HINIC3_FUNC_CSR_MAILBOX_CONTROL_OFF);
+	PMD_DRV_LOG(ERR, "Mailbox control reg: 0x%x", val);
+	/* Read the value of the MBOX interrupt offset register. */
+	val = hinic3_hwif_read_reg(hwdev->hwif,
+				   HINIC3_FUNC_CSR_MAILBOX_INT_OFFSET_OFF);
+	PMD_DRV_LOG(ERR, "Mailbox interrupt offset: 0x%x", val);
+}
+
+static u16
+get_mbox_status(struct hinic3_send_mbox *mbox)
+{
+	/* The write back area is 16B, but only the first 4B are used. */
+	u64 wb_val = be64_to_cpu(*mbox->wb_status);
+
+	rte_rmb(); /**< Ensure the read completes before the check. */
+
+	return (u16)(wb_val & MBOX_WB_STATUS_ERRCODE_MASK);
+}
+
+/**
+ * Send a mailbox message segment.
+ *
+ * @param[in] func_to_func
+ * Mailbox for inter-function communication.
+ * @param[in] header
+ * Mailbox header.
+ * @param[in] dst_func
+ * Indicate destination func.
+ * @param[in] seg
+ * Segment data to be sent.
+ * @param[in] seg_len
+ * Length of the segment to be sent.
+ * @param[in] msg_info
+ * Indicate the message information.
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+send_mbox_seg(struct hinic3_mbox *func_to_func, u64 header, u16 dst_func,
+	      void *seg, u16 seg_len, __rte_unused void *msg_info)
+{
+	struct hinic3_send_mbox *send_mbox = &func_to_func->send_mbox;
+	struct hinic3_hwdev *hwdev = func_to_func->hwdev;
+	u8 num_aeqs = hwdev->hwif->attr.num_aeqs;
+	u16 dst_aeqn, wb_status = 0, errcode;
+	u16 seq_dir = HINIC3_MSG_HEADER_GET(header, DIRECTION);
+	u32 cnt = 0;
+
+	/* Mbox to mgmt CPU: hardware does not care about the dst AEQ ID. */
+	if (num_aeqs >= 2)
+		dst_aeqn = (seq_dir == HINIC3_MSG_DIRECT_SEND)
+				   ? HINIC3_ASYNC_MSG_AEQ
+				   : HINIC3_MBOX_RSP_MSG_AEQ;
+	else
+		dst_aeqn = 0;
+
+	clear_mbox_status(send_mbox);
+	mbox_copy_header(send_mbox, &header);
+	mbox_copy_send_data(send_mbox, seg, seg_len);
+
+	/* Set mailbox msg seg len. */
+	write_mbox_msg_attr(func_to_func, dst_func, dst_aeqn, seg_len);
+	rte_wmb(); /**< Writing the mbox msg attributes. */
+
+	/* Wait until the status of the mailbox changes to Complete. */
+	while (cnt < MBOX_MSG_POLLING_TIMEOUT) {
+		wb_status = get_mbox_status(send_mbox);
+		if (MBOX_STATUS_FINISHED(wb_status))
+			break;
+
+		rte_delay_us(10);
+		cnt++;
+	}
+
+	if (cnt == MBOX_MSG_POLLING_TIMEOUT) {
+		PMD_DRV_LOG(ERR,
+			    "Send mailbox segment timeout, wb status: 0x%x",
+			    wb_status);
+		dump_mbox_reg(hwdev);
+		return -ETIMEDOUT;
+	}
+
+	if (!MBOX_STATUS_SUCCESS(wb_status)) {
+		PMD_DRV_LOG(ERR,
+			    "Send mailbox segment to function %d error, wb "
+			    "status: 0x%x",
+			    dst_func, wb_status);
+		errcode = MBOX_STATUS_ERRCODE(wb_status);
+		return errcode ? errcode : -EFAULT;
+	}
+
+	return 0;
+}
+
+static int
+send_tlp_mbox_seg(struct hinic3_mbox *func_to_func, u64 header, u16 dst_func,
+		  void *seg, u16 seg_len, __rte_unused void *msg_info)
+{
+	struct hinic3_send_mbox *send_mbox = &func_to_func->send_mbox;
+	struct hinic3_hwdev *hwdev = func_to_func->hwdev;
+	u8 num_aeqs = hwdev->hwif->attr.num_aeqs;
+	u16 dst_aeqn, errcode, wb_status = 0;
+	u16 seq_dir = HINIC3_MSG_HEADER_GET(header, DIRECTION);
+	u32 cnt = 0;
+
+	/* Mbox to mgmt CPU: hardware does not care about the dst AEQ ID. */
+	if (num_aeqs >= 2)
+		dst_aeqn = (seq_dir == HINIC3_MSG_DIRECT_SEND)
+				   ? HINIC3_ASYNC_MSG_AEQ
+				   : HINIC3_MBOX_RSP_MSG_AEQ;
+	else
+		dst_aeqn = 0;
+
+	clear_mbox_status(send_mbox);
+	mbox_copy_header(send_mbox, &header);
+
+	/* Copy data to DMA buffer. */
+	memcpy((void *)send_mbox->sbuff_vaddr, (void *)seg, (size_t)seg_len);
+
+	/*
+	 * Copy data address to mailbox ctrl CSR(Control and Status Register).
+	 */
+	mbox_copy_send_data_addr(send_mbox, seg_len);
+
+	/* Set mailbox msg header size. */
+	write_mbox_msg_attr(func_to_func, dst_func, dst_aeqn,
+			    MBOX_TLP_HEADER_SZ);
+
+	rte_wmb(); /**< Writing the mbox msg attributes. */
+
+	/* Wait until the status of the mailbox changes to Complete. */
+	while (cnt < MBOX_MSG_POLLING_TIMEOUT) {
+		wb_status = get_mbox_status(send_mbox);
+		if (MBOX_STATUS_FINISHED(wb_status))
+			break;
+
+		rte_delay_us(10);
+		cnt++;
+	}
+
+	if (cnt == MBOX_MSG_POLLING_TIMEOUT) {
+		PMD_DRV_LOG(ERR,
+			    "Send mailbox segment timeout, wb status: 0x%x",
+			    wb_status);
+		dump_mbox_reg(hwdev);
+		return -ETIMEDOUT;
+	}
+
+	if (!MBOX_STATUS_SUCCESS(wb_status)) {
+		PMD_DRV_LOG(ERR,
+			    "Send mailbox segment to function %d error, wb "
+			    "status: 0x%x",
+			    dst_func, wb_status);
+		errcode = MBOX_STATUS_ERRCODE(wb_status);
+		return errcode ? errcode : -EFAULT;
+	}
+
+	return 0;
+}
+
+static int
+send_mbox_to_func(struct hinic3_mbox *func_to_func, enum hinic3_mod_type mod,
+		  u16 cmd, void *msg, u16 msg_len, u16 dst_func,
+		  enum hinic3_msg_direction_type direction,
+		  enum hinic3_msg_ack_type ack_type,
+		  struct mbox_msg_info *msg_info)
+{
+	int err = 0;
+	u32 seq_id = 0;
+	u16 seg_len = MBOX_SEG_LEN;
+	u16 rsp_aeq_id, left = msg_len;
+	u8 *msg_seg = (u8 *)msg;
+	u64 header = 0;
+
+	rsp_aeq_id = HINIC3_MBOX_RSP_MSG_AEQ;
+
+	err = hinic3_mutex_lock(&func_to_func->msg_send_mutex);
+	if (err)
+		return err;
+
+	/* Set the header message. */
+	header = HINIC3_MSG_HEADER_SET(msg_len, MSG_LEN) |
+		 HINIC3_MSG_HEADER_SET(mod, MODULE) |
+		 HINIC3_MSG_HEADER_SET(seg_len, SEG_LEN) |
+		 HINIC3_MSG_HEADER_SET(ack_type, NO_ACK) |
+		 HINIC3_MSG_HEADER_SET(HINIC3_DATA_INLINE, DATA_TYPE) |
+		 HINIC3_MSG_HEADER_SET(SEQ_ID_START_VAL, SEQID) |
+		 HINIC3_MSG_HEADER_SET(NOT_LAST_SEGMENT, LAST) |
+		 HINIC3_MSG_HEADER_SET(direction, DIRECTION) |
+		 HINIC3_MSG_HEADER_SET(cmd, CMD) |
+		 /* The VF's offset to its associated PF. */
+		 HINIC3_MSG_HEADER_SET(msg_info->msg_id, MSG_ID) |
+		 HINIC3_MSG_HEADER_SET(rsp_aeq_id, AEQ_ID) |
+		 HINIC3_MSG_HEADER_SET(HINIC3_MSG_FROM_MBOX, SOURCE) |
+		 HINIC3_MSG_HEADER_SET(!!msg_info->status, STATUS);
+	/* Loop until all messages are sent. */
+	while (!(HINIC3_MSG_HEADER_GET(header, LAST))) {
+		if (left <= MBOX_SEG_LEN) {
+			header &= ~MBOX_SEGLEN_MASK;
+			header |= HINIC3_MSG_HEADER_SET(left, SEG_LEN);
+			header |= HINIC3_MSG_HEADER_SET(LAST_SEGMENT, LAST);
+
+			seg_len = left;
+		}
+
+		err = send_mbox_seg(func_to_func, header, dst_func, msg_seg,
+				    seg_len, msg_info);
+		if (err) {
+			PMD_DRV_LOG(ERR,
+				    "Send mbox seg failed, seq_id: 0x%" PRIx64,
+				    HINIC3_MSG_HEADER_GET(header, SEQID));
+
+			goto send_err;
+		}
+
+		left -= MBOX_SEG_LEN;
+		msg_seg += MBOX_SEG_LEN;
+
+		seq_id++;
+		header &= ~(HINIC3_MSG_HEADER_SET(HINIC3_MSG_HEADER_SEQID_MASK,
+						  SEQID));
+		header |= HINIC3_MSG_HEADER_SET(seq_id, SEQID);
+	}
+
+send_err:
+	(void)hinic3_mutex_unlock(&func_to_func->msg_send_mutex);
+
+	return err;
+}
+
+static int
+send_tlp_mbox_to_func(struct hinic3_mbox *func_to_func,
+		      enum hinic3_mod_type mod, u16 cmd, void *msg, u16 msg_len,
+		      u16 dst_func, enum hinic3_msg_direction_type direction,
+		      enum hinic3_msg_ack_type ack_type,
+		      struct mbox_msg_info *msg_info)
+{
+	struct hinic3_hwdev *hwdev = func_to_func->hwdev;
+	u8 *msg_seg = (u8 *)msg;
+	int err = 0;
+	u16 rsp_aeq_id;
+	u64 header = 0;
+
+	rsp_aeq_id = HINIC3_MBOX_RSP_MSG_AEQ;
+
+	err = hinic3_mutex_lock(&func_to_func->msg_send_mutex);
+	if (err)
+		return err;
+
+	/* Set the header message. */
+	header = HINIC3_MSG_HEADER_SET(MBOX_TLP_HEADER_SZ, MSG_LEN) |
+		 HINIC3_MSG_HEADER_SET(MBOX_TLP_HEADER_SZ, SEG_LEN) |
+		 HINIC3_MSG_HEADER_SET(mod, MODULE) |
+		 HINIC3_MSG_HEADER_SET(LAST_SEGMENT, LAST) |
+		 HINIC3_MSG_HEADER_SET(ack_type, NO_ACK) |
+		 HINIC3_MSG_HEADER_SET(HINIC3_DATA_DMA, DATA_TYPE) |
+		 HINIC3_MSG_HEADER_SET(SEQ_ID_START_VAL, SEQID) |
+		 HINIC3_MSG_HEADER_SET(direction, DIRECTION) |
+		 HINIC3_MSG_HEADER_SET(cmd, CMD) |
+		 /* The VF's offset to its associated PF. */
+		 HINIC3_MSG_HEADER_SET(msg_info->msg_id, MSG_ID) |
+		 HINIC3_MSG_HEADER_SET(rsp_aeq_id, AEQ_ID) |
+		 HINIC3_MSG_HEADER_SET(HINIC3_MSG_FROM_MBOX, SOURCE) |
+		 HINIC3_MSG_HEADER_SET(!!msg_info->status, STATUS) |
+		 HINIC3_MSG_HEADER_SET(hinic3_global_func_id(hwdev),
+				       SRC_GLB_FUNC_IDX);
+
+	/* Send a message. */
+	err = send_tlp_mbox_seg(func_to_func, header, dst_func, msg_seg,
+				msg_len, msg_info);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Send mbox seg failed, seq_id: 0x%" PRIx64,
+			    HINIC3_MSG_HEADER_GET(header, SEQID));
+	}
+
+	(void)hinic3_mutex_unlock(&func_to_func->msg_send_mutex);
+
+	return err;
+}
+
+/**
+ * Set mailbox F2F (function to function) event status.
+ *
+ * @param[out] func_to_func
+ * Context for inter-function communication.
+ * @param[in] event_flag
+ * Event status enumerated value.
+ */
+static void
+set_mbox_to_func_event(struct hinic3_mbox *func_to_func,
+		       enum mbox_event_state event_flag)
+{
+	rte_spinlock_lock(&func_to_func->mbox_lock);
+	func_to_func->event_flag = event_flag;
+	rte_spinlock_unlock(&func_to_func->mbox_lock);
+}
+
+/**
+ * Send data from one function to another and receive responses.
+ *
+ * @param[in] func_to_func
+ * Context for inter-function communication.
+ * @param[in] mod
+ * Command queue module type.
+ * @param[in] cmd
+ * Indicate the command to be executed.
+ * @param[in] dst_func
+ * Indicate destination func.
+ * @param[in] buf_in
+ * Pointer to the input buffer.
+ * @param[in] in_size
+ * Input buffer size.
+ * @param[out] buf_out
+ * Pointer to the output buffer.
+ * @param[out] out_size
+ * Output buffer size.
+ * @param[in] timeout
+ * Timeout interval for waiting for a response.
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_mbox_to_func(struct hinic3_mbox *func_to_func, enum hinic3_mod_type mod,
+		    u16 cmd, u16 dst_func, void *buf_in, u16 in_size,
+		    void *buf_out, u16 *out_size, u32 timeout)
+{
+	/* Use mbox_resp to hold the data responded from the other function. */
+	struct hinic3_recv_mbox *mbox_for_resp = NULL;
+	struct mbox_msg_info msg_info = {0};
+	struct hinic3_eq *aeq = NULL;
+	u16 mbox_rsp_idx;
+	u32 time;
+	int err;
+
+	mbox_rsp_idx = (u16)hinic3_mbox_get_index(dst_func);
+	mbox_for_resp = &func_to_func->mbox_resp[mbox_rsp_idx];
+
+	err = hinic3_mutex_lock(&func_to_func->mbox_send_mutex);
+	if (err)
+		return err;
+
+	/* Set message ID and start event. */
+	msg_info.msg_id = MBOX_MSG_ID_INC(func_to_func);
+	set_mbox_to_func_event(func_to_func, EVENT_START);
+
+	/* Select a function to send messages based on the dst_func type. */
+	if (IS_TLP_MBX(dst_func))
+		err = send_tlp_mbox_to_func(func_to_func,
+			mod, cmd, buf_in, in_size, dst_func,
+			HINIC3_MSG_DIRECT_SEND, HINIC3_MSG_ACK, &msg_info);
+	else
+		err = send_mbox_to_func(func_to_func, mod, cmd, buf_in, in_size,
+					dst_func, HINIC3_MSG_DIRECT_SEND,
+					HINIC3_MSG_ACK, &msg_info);
+
+	if (err) {
+		PMD_DRV_LOG(ERR, "Send mailbox failed, msg_id: %d",
+			    msg_info.msg_id);
+		set_mbox_to_func_event(func_to_func, EVENT_FAIL);
+		goto send_err;
+	}
+
+	/* Wait for the response message. */
+	time = msecs_to_jiffies(timeout ? timeout : HINIC3_MBOX_COMP_TIME);
+	aeq = &func_to_func->hwdev->aeqs->aeq[HINIC3_MBOX_RSP_MSG_AEQ];
+	err = hinic3_aeq_poll_msg(aeq, time, NULL);
+	if (err) {
+		set_mbox_to_func_event(func_to_func, EVENT_TIMEOUT);
+		PMD_DRV_LOG(ERR, "Send mailbox message timed out");
+		err = -ETIMEDOUT;
+		goto send_err;
+	}
+
+	/* Check that the response mod and cmd match the sent message. */
+	if (mod != mbox_for_resp->mod || cmd != mbox_for_resp->cmd) {
+		PMD_DRV_LOG(ERR,
+			    "Invalid response mbox message, mod: 0x%x, cmd: "
+			    "0x%x, expect mod: 0x%x, cmd: 0x%x",
+			    mbox_for_resp->mod, mbox_for_resp->cmd, mod, cmd);
+		err = -EFAULT;
+		goto send_err;
+	}
+
+	/* Check the response status. */
+	if (mbox_for_resp->msg_info.status) {
+		err = mbox_for_resp->msg_info.status;
+		goto send_err;
+	}
+
+	/* Check whether the length of the response message is valid. */
+	if (buf_out && out_size) {
+		if (*out_size < mbox_for_resp->mbox_len) {
+			PMD_DRV_LOG(ERR,
+				"Invalid response mbox message length: %d for "
+				"mod: %d cmd: %d, should be less than: %d",
+				mbox_for_resp->mbox_len, mod, cmd, *out_size);
+			err = -EFAULT;
+			goto send_err;
+		}
+
+		if (mbox_for_resp->mbox_len)
+			memcpy(buf_out, mbox_for_resp->mbox,
+			       (size_t)(mbox_for_resp->mbox_len));
+
+		*out_size = mbox_for_resp->mbox_len;
+	}
+
+send_err:
+	(void)hinic3_mutex_unlock(&func_to_func->mbox_send_mutex);
+
+	return err;
+}
+
+static int
+mbox_func_params_valid(__rte_unused struct hinic3_mbox *func_to_func,
+		       void *buf_in, u16 in_size)
+{
+	if (!buf_in || !in_size)
+		return -EINVAL;
+
+	if (in_size > HINIC3_MBOX_DATA_SIZE) {
+		PMD_DRV_LOG(ERR, "Mbox msg len(%d) exceed limit(%" PRIu64 ")",
+			    in_size, HINIC3_MBOX_DATA_SIZE);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int
+hinic3_mbox_to_func_no_ack(struct hinic3_hwdev *hwdev, u16 func_idx,
+			   enum hinic3_mod_type mod, u16 cmd, void *buf_in,
+			   u16 in_size)
+{
+	struct hinic3_mbox *func_to_func = hwdev->func_to_func;
+	struct mbox_msg_info msg_info = {0};
+	int err;
+
+	err = mbox_func_params_valid(hwdev->func_to_func, buf_in, in_size);
+	if (err)
+		return err;
+
+	err = hinic3_mutex_lock(&func_to_func->mbox_send_mutex);
+	if (err)
+		return err;
+
+	if (IS_TLP_MBX(func_idx))
+		err = send_tlp_mbox_to_func(func_to_func,
+			mod, cmd, buf_in, in_size, func_idx,
+			HINIC3_MSG_DIRECT_SEND, HINIC3_MSG_NO_ACK, &msg_info);
+	else
+		err = send_mbox_to_func(func_to_func, mod, cmd, buf_in, in_size,
+					func_idx, HINIC3_MSG_DIRECT_SEND,
+					HINIC3_MSG_NO_ACK, &msg_info);
+	if (err)
+		PMD_DRV_LOG(ERR, "Send mailbox no ack failed");
+
+	(void)hinic3_mutex_unlock(&func_to_func->mbox_send_mutex);
+
+	return err;
+}
+
+int
+hinic3_send_mbox_to_mgmt(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod,
+			 u16 cmd, void *buf_in, u16 in_size, void *buf_out,
+			 u16 *out_size, u32 timeout)
+{
+	struct hinic3_mbox *func_to_func = hwdev->func_to_func;
+	int err;
+
+	/* Verify the validity of the input parameters. */
+	err = mbox_func_params_valid(func_to_func, buf_in, in_size);
+	if (err)
+		return err;
+
+	return hinic3_mbox_to_func(func_to_func, mod, cmd, HINIC3_MGMT_SRC_ID,
+				   buf_in, in_size, buf_out, out_size, timeout);
+}
+
+void
+hinic3_response_mbox_to_mgmt(struct hinic3_hwdev *hwdev,
+			     enum hinic3_mod_type mod, u16 cmd, void *buf_in,
+			     u16 in_size, u16 msg_id)
+{
+	struct mbox_msg_info msg_info;
+	u16 dst_func;
+
+	msg_info.msg_id = (u8)msg_id;
+	msg_info.status = 0;
+	dst_func = HINIC3_MGMT_SRC_ID;
+
+	if (IS_TLP_MBX(dst_func))
+		send_tlp_mbox_to_func(hwdev->func_to_func, mod, cmd, buf_in,
+				      in_size, HINIC3_MGMT_SRC_ID,
+				      HINIC3_MSG_RESPONSE, HINIC3_MSG_NO_ACK,
+				      &msg_info);
+	else
+		send_mbox_to_func(hwdev->func_to_func, mod, cmd, buf_in,
+				  in_size, HINIC3_MGMT_SRC_ID,
+				  HINIC3_MSG_RESPONSE, HINIC3_MSG_NO_ACK,
+				  &msg_info);
+}
+
+int
+hinic3_send_mbox_to_mgmt_no_ack(struct hinic3_hwdev *hwdev,
+				enum hinic3_mod_type mod, u16 cmd, void *buf_in,
+				u16 in_size)
+{
+	struct hinic3_mbox *func_to_func = hwdev->func_to_func;
+	int err;
+
+	err = mbox_func_params_valid(func_to_func, buf_in, in_size);
+	if (err)
+		return err;
+
+	return hinic3_mbox_to_func_no_ack(hwdev, HINIC3_MGMT_SRC_ID, mod, cmd,
+					  buf_in, in_size);
+}
+
+int
+hinic3_mbox_to_pf(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod, u16 cmd,
+		  void *buf_in, u16 in_size, void *buf_out, u16 *out_size,
+		  u32 timeout)
+{
+	int err;
+
+	if (!hwdev)
+		return -EINVAL;
+
+	/* Check the validity of parameters. */
+	err = mbox_func_params_valid(hwdev->func_to_func, buf_in, in_size);
+	if (err)
+		return err;
+
+	if (!HINIC3_IS_VF(hwdev)) {
+		PMD_DRV_LOG(ERR, "Params error, func_type: %d",
+			    hinic3_func_type(hwdev));
+		return -EINVAL;
+	}
+
+	/* Send a mailbox message to the PF. */
+	return hinic3_mbox_to_func(hwdev->func_to_func, mod, cmd,
+				   hinic3_pf_id_of_vf(hwdev), buf_in, in_size,
+				   buf_out, out_size, timeout);
+}
+
+int
+hinic3_mbox_to_vf(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod,
+		  u16 vf_id, u16 cmd, void *buf_in, u16 in_size, void *buf_out,
+		  u16 *out_size, u32 timeout)
+{
+	struct hinic3_mbox *func_to_func = NULL;
+	u16 dst_func_idx;
+	int err = 0;
+
+	if (!hwdev)
+		return -EINVAL;
+
+	func_to_func = hwdev->func_to_func;
+	err = mbox_func_params_valid(func_to_func, buf_in, in_size);
+	if (err)
+		return err;
+
+	if (HINIC3_IS_VF(hwdev)) {
+		PMD_DRV_LOG(ERR, "Params error, func_type: %d",
+			    hinic3_func_type(hwdev));
+		return -EINVAL;
+	}
+
+	if (!vf_id) {
+		PMD_DRV_LOG(ERR, "VF id: %d error!", vf_id);
+		return -EINVAL;
+	}
+
+	/*
+	 * The sum of vf_offset_to_pf and vf_id is the VF's global function ID
+	 * in this PF.
+	 */
+	dst_func_idx = hinic3_glb_pf_vf_offset(hwdev) + vf_id;
+
+	return hinic3_mbox_to_func(func_to_func, mod, cmd, dst_func_idx, buf_in,
+				   in_size, buf_out, out_size, timeout);
+}
+
+static int
+init_mbox_info(struct hinic3_recv_mbox *mbox_info, int mbox_max_buf_sz)
+{
+	int err;
+
+	mbox_info->seq_id = SEQ_ID_MAX_VAL;
+
+	mbox_info->mbox =
+		rte_zmalloc("mbox", (size_t)mbox_max_buf_sz, 1); /*lint !e571*/
+	if (!mbox_info->mbox)
+		return -ENOMEM;
+
+	mbox_info->buf_out = rte_zmalloc("mbox_buf_out",
+		(size_t)mbox_max_buf_sz, 1); /*lint !e571*/
+	if (!mbox_info->buf_out) {
+		err = -ENOMEM;
+		goto alloc_buf_out_err;
+	}
+
+	return 0;
+
+alloc_buf_out_err:
+	rte_free(mbox_info->mbox);
+
+	return err;
+}
+
+static void
+clean_mbox_info(struct hinic3_recv_mbox *mbox_info)
+{
+	rte_free(mbox_info->buf_out);
+	rte_free(mbox_info->mbox);
+}
+
+static int
+alloc_mbox_info(struct hinic3_recv_mbox *mbox_info, int mbox_max_buf_sz)
+{
+	u16 func_idx, i;
+	int err;
+
+	for (func_idx = 0; func_idx < HINIC3_MAX_FUNCTIONS + 1; func_idx++) {
+		err = init_mbox_info(&mbox_info[func_idx], mbox_max_buf_sz);
+		if (err) {
+			PMD_DRV_LOG(ERR, "Init mbox info failed");
+			goto init_mbox_info_err;
+		}
+	}
+
+	return 0;
+
+init_mbox_info_err:
+	for (i = 0; i < func_idx; i++)
+		clean_mbox_info(&mbox_info[i]);
+
+	return err;
+}
+
+static void
+free_mbox_info(struct hinic3_recv_mbox *mbox_info)
+{
+	u16 func_idx;
+
+	for (func_idx = 0; func_idx < HINIC3_MAX_FUNCTIONS + 1; func_idx++)
+		clean_mbox_info(&mbox_info[func_idx]);
+}
+
+static void
+prepare_send_mbox(struct hinic3_mbox *func_to_func)
+{
+	struct hinic3_send_mbox *send_mbox = &func_to_func->send_mbox;
+
+	send_mbox->data = MBOX_AREA(func_to_func->hwdev->hwif);
+}
+
+/**
+ * Allocate memory for the write-back state of the mailbox and write to
+ * register.
+ *
+ * @param[in] func_to_func
+ * Context for inter-function communication.
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+alloc_mbox_wb_status(struct hinic3_mbox *func_to_func)
+{
+	struct hinic3_send_mbox *send_mbox = &func_to_func->send_mbox;
+	struct hinic3_hwdev *hwdev = func_to_func->hwdev;
+	u32 addr_h, addr_l;
+
+	/* Reserved DMA area. */
+	send_mbox->wb_mz = hinic3_dma_zone_reserve(hwdev->eth_dev,
+		"wb_mz", 0, MBOX_WB_STATUS_LEN,
+		RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
+	if (!send_mbox->wb_mz)
+		return -ENOMEM;
+
+	send_mbox->wb_vaddr = send_mbox->wb_mz->addr;
+	send_mbox->wb_paddr = send_mbox->wb_mz->iova;
+	send_mbox->wb_status = send_mbox->wb_vaddr;
+
+	addr_h = upper_32_bits(send_mbox->wb_paddr);
+	addr_l = lower_32_bits(send_mbox->wb_paddr);
+
+	/* Write info to the register. */
+	hinic3_hwif_write_reg(hwdev->hwif, HINIC3_FUNC_CSR_MAILBOX_RESULT_H_OFF,
+			      addr_h);
+	hinic3_hwif_write_reg(hwdev->hwif, HINIC3_FUNC_CSR_MAILBOX_RESULT_L_OFF,
+			      addr_l);
+
+	return 0;
+}
+
+static void
+free_mbox_wb_status(struct hinic3_mbox *func_to_func)
+{
+	struct hinic3_send_mbox *send_mbox = &func_to_func->send_mbox;
+	struct hinic3_hwdev *hwdev = func_to_func->hwdev;
+
+	hinic3_hwif_write_reg(hwdev->hwif, HINIC3_FUNC_CSR_MAILBOX_RESULT_H_OFF,
+			      0);
+	hinic3_hwif_write_reg(hwdev->hwif, HINIC3_FUNC_CSR_MAILBOX_RESULT_L_OFF,
+			      0);
+
+	hinic3_memzone_free(send_mbox->wb_mz);
+}
+
+static int
+alloc_mbox_tlp_buffer(struct hinic3_mbox *func_to_func)
+{
+	struct hinic3_send_mbox *send_mbox = &func_to_func->send_mbox;
+	struct hinic3_hwdev *hwdev = func_to_func->hwdev;
+
+	send_mbox->sbuff_mz = hinic3_dma_zone_reserve(hwdev->eth_dev,
+		"sbuff_mz", 0, MBOX_MAX_BUF_SZ, MBOX_MAX_BUF_SZ,
+		SOCKET_ID_ANY);
+	if (!send_mbox->sbuff_mz)
+		return -ENOMEM;
+
+	send_mbox->sbuff_vaddr = send_mbox->sbuff_mz->addr;
+	send_mbox->sbuff_paddr = send_mbox->sbuff_mz->iova;
+
+	return 0;
+}
+
+static void
+free_mbox_tlp_buffer(struct hinic3_mbox *func_to_func)
+{
+	struct hinic3_send_mbox *send_mbox = &func_to_func->send_mbox;
+
+	hinic3_memzone_free(send_mbox->sbuff_mz);
+}
+
+/**
+ * Initialize function to function communication.
+ *
+ * @param[in] hwdev
+ * Pointer to hardware device structure.
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int
+hinic3_func_to_func_init(struct hinic3_hwdev *hwdev)
+{
+	struct hinic3_mbox *func_to_func;
+	int err;
+
+	func_to_func = rte_zmalloc("func_to_func", sizeof(*func_to_func), 1);
+	if (!func_to_func)
+		return -ENOMEM;
+
+	hwdev->func_to_func = func_to_func;
+	func_to_func->hwdev = hwdev;
+	(void)hinic3_mutex_init(&func_to_func->mbox_send_mutex, NULL);
+	(void)hinic3_mutex_init(&func_to_func->msg_send_mutex, NULL);
+	rte_spinlock_init(&func_to_func->mbox_lock);
+
+	/* Alloc the memory required by the mailbox. */
+	err = alloc_mbox_info(func_to_func->mbox_send, MBOX_MAX_BUF_SZ);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Alloc mem for mbox_active failed");
+		goto alloc_mbox_for_send_err;
+	}
+
+	err = alloc_mbox_info(func_to_func->mbox_resp, MBOX_MAX_BUF_SZ);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Alloc mem for mbox_passive failed");
+		goto alloc_mbox_for_resp_err;
+	}
+
+	err = alloc_mbox_tlp_buffer(func_to_func);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Alloc mbox send buffer failed");
+		goto alloc_tlp_buffer_err;
+	}
+
+	err = alloc_mbox_wb_status(func_to_func);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Alloc mbox write back status failed");
+		goto alloc_wb_status_err;
+	}
+
+	prepare_send_mbox(func_to_func);
+
+	return 0;
+
+alloc_wb_status_err:
+	free_mbox_tlp_buffer(func_to_func);
+
+alloc_tlp_buffer_err:
+	free_mbox_info(func_to_func->mbox_resp);
+
+alloc_mbox_for_resp_err:
+	free_mbox_info(func_to_func->mbox_send);
+
+alloc_mbox_for_send_err:
+	(void)hinic3_mutex_destroy(&func_to_func->msg_send_mutex);
+	(void)hinic3_mutex_destroy(&func_to_func->mbox_send_mutex);
+	rte_free(func_to_func);
+
+	return err;
+}
+
+void
+hinic3_func_to_func_free(struct hinic3_hwdev *hwdev)
+{
+	struct hinic3_mbox *func_to_func = hwdev->func_to_func;
+
+	free_mbox_wb_status(func_to_func);
+	free_mbox_tlp_buffer(func_to_func);
+	free_mbox_info(func_to_func->mbox_resp);
+	free_mbox_info(func_to_func->mbox_send);
+	(void)hinic3_mutex_destroy(&func_to_func->mbox_send_mutex);
+	(void)hinic3_mutex_destroy(&func_to_func->msg_send_mutex);
+
+	rte_free(func_to_func);
+}
diff --git a/drivers/net/hinic3/base/hinic3_mbox.h b/drivers/net/hinic3/base/hinic3_mbox.h
new file mode 100644
index 0000000000..eaf315952f
--- /dev/null
+++ b/drivers/net/hinic3/base/hinic3_mbox.h
@@ -0,0 +1,199 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Huawei Technologies Co., Ltd
+ */
+
+#ifndef _HINIC3_MBOX_H_
+#define _HINIC3_MBOX_H_
+
+#include "hinic3_mgmt.h"
+
+#define HINIC3_MBOX_PF_SEND_ERR	      0x1
+#define HINIC3_MBOX_PF_BUSY_ACTIVE_FW 0x2
+#define HINIC3_MBOX_VF_CMD_ERROR      0x3
+
+#define HINIC3_MGMT_SRC_ID 0x1FFF
+
+#define HINIC3_MAX_PF_FUNCS 32
+
+/* Message header define. */
+#define HINIC3_MSG_HEADER_SRC_GLB_FUNC_IDX_SHIFT 0
+#define HINIC3_MSG_HEADER_STATUS_SHIFT		 13
+#define HINIC3_MSG_HEADER_SOURCE_SHIFT		 15
+#define HINIC3_MSG_HEADER_AEQ_ID_SHIFT		 16
+#define HINIC3_MSG_HEADER_MSG_ID_SHIFT		 18
+#define HINIC3_MSG_HEADER_CMD_SHIFT		 22
+
+#define HINIC3_MSG_HEADER_MSG_LEN_SHIFT	  32
+#define HINIC3_MSG_HEADER_MODULE_SHIFT	  43
+#define HINIC3_MSG_HEADER_SEG_LEN_SHIFT	  48
+#define HINIC3_MSG_HEADER_NO_ACK_SHIFT	  54
+#define HINIC3_MSG_HEADER_DATA_TYPE_SHIFT 55
+#define HINIC3_MSG_HEADER_SEQID_SHIFT	  56
+#define HINIC3_MSG_HEADER_LAST_SHIFT	  62
+#define HINIC3_MSG_HEADER_DIRECTION_SHIFT 63
+
+#define HINIC3_MSG_HEADER_CMD_MASK		0x3FF
+#define HINIC3_MSG_HEADER_MSG_ID_MASK		0xF
+#define HINIC3_MSG_HEADER_AEQ_ID_MASK		0x3
+#define HINIC3_MSG_HEADER_SOURCE_MASK		0x1
+#define HINIC3_MSG_HEADER_STATUS_MASK		0x1
+#define HINIC3_MSG_HEADER_SRC_GLB_FUNC_IDX_MASK 0x1FFF
+
+#define HINIC3_MSG_HEADER_MSG_LEN_MASK	 0x7FF
+#define HINIC3_MSG_HEADER_MODULE_MASK	 0x1F
+#define HINIC3_MSG_HEADER_SEG_LEN_MASK	 0x3F
+#define HINIC3_MSG_HEADER_NO_ACK_MASK	 0x1
+#define HINIC3_MSG_HEADER_DATA_TYPE_MASK 0x1
+#define HINIC3_MSG_HEADER_SEQID_MASK	 0x3F
+#define HINIC3_MSG_HEADER_LAST_MASK	 0x1
+#define HINIC3_MSG_HEADER_DIRECTION_MASK 0x1
+
+#define HINIC3_MSG_HEADER_GET(val, field)               \
+	(((val) >> HINIC3_MSG_HEADER_##field##_SHIFT) & \
+	 HINIC3_MSG_HEADER_##field##_MASK)
+#define HINIC3_MSG_HEADER_SET(val, field)                       \
+	((u64)(((u64)(val)) & HINIC3_MSG_HEADER_##field##_MASK) \
+	 << HINIC3_MSG_HEADER_##field##_SHIFT)
+
+#define IS_TLP_MBX(dst_func) ((dst_func) == HINIC3_MGMT_SRC_ID)
+
+enum hinic3_msg_direction_type {
+	HINIC3_MSG_DIRECT_SEND = 0,
+	HINIC3_MSG_RESPONSE = 1
+};
+
+enum hinic3_msg_segment_type { NOT_LAST_SEGMENT = 0, LAST_SEGMENT = 1 };
+
+enum hinic3_msg_ack_type { HINIC3_MSG_ACK, HINIC3_MSG_NO_ACK };
+
+enum hinic3_data_type { HINIC3_DATA_INLINE = 0, HINIC3_DATA_DMA = 1 };
+
+enum hinic3_msg_src_type { HINIC3_MSG_FROM_MGMT = 0, HINIC3_MSG_FROM_MBOX = 1 };
+
+enum hinic3_msg_aeq_type {
+	HINIC3_ASYNC_MSG_AEQ = 0,
+	/* AEQ used by dst_func or mgmt CPU to respond to mbox messages. */
+	HINIC3_MBOX_RSP_MSG_AEQ = 1,
+	/* AEQ used by mgmt CPU to respond to API cmd messages. */
+	HINIC3_MGMT_RSP_MSG_AEQ = 2
+};
+
+enum hinic3_mbox_seg_errcode {
+	MBOX_ERRCODE_NO_ERRORS = 0,
+	/* VF sends the mailbox data to the wrong destination functions. */
+	MBOX_ERRCODE_VF_TO_WRONG_FUNC = 0x100,
+	/* PPF sends the mailbox data to the wrong destination functions. */
+	MBOX_ERRCODE_PPF_TO_WRONG_FUNC = 0x200,
+	/* PF sends the mailbox data to the wrong destination functions. */
+	MBOX_ERRCODE_PF_TO_WRONG_FUNC = 0x300,
+	/* The mailbox data size is set to all zero. */
+	MBOX_ERRCODE_ZERO_DATA_SIZE = 0x400,
+	/* The sender func attribute has not been learned by CPI hardware. */
+	MBOX_ERRCODE_UNKNOWN_SRC_FUNC = 0x500,
+	/* The receiver func attr has not been learned by CPI hardware. */
+	MBOX_ERRCODE_UNKNOWN_DES_FUNC = 0x600
+};
+
+enum hinic3_mbox_func_index {
+	HINIC3_MBOX_MPU_INDEX = 0,
+	HINIC3_MBOX_PF_INDEX = 1,
+	HINIC3_MAX_FUNCTIONS = 2,
+};
+
+struct mbox_msg_info {
+	u8 msg_id;
+	u8 status; /**< Only the lower 3 bits are used. */
+};
+
+struct hinic3_recv_mbox {
+	void *mbox;
+	u16 cmd;
+	enum hinic3_mod_type mod;
+	u16 mbox_len;
+	void *buf_out;
+	enum hinic3_msg_ack_type ack_type;
+	struct mbox_msg_info msg_info;
+	u8 seq_id;
+	RTE_ATOMIC(int32_t) msg_cnt;
+};
+
+struct hinic3_send_mbox {
+	u8 *data;
+	u64 *wb_status; /**< Write back status. */
+
+	const struct rte_memzone *wb_mz;
+	void *wb_vaddr;	     /**< Write back virtual address. */
+	rte_iova_t wb_paddr; /**< Write back physical address. */
+
+	const struct rte_memzone *sbuff_mz;
+	void *sbuff_vaddr;
+	rte_iova_t sbuff_paddr;
+};
+
+enum mbox_event_state {
+	EVENT_START = 0,
+	EVENT_FAIL,
+	EVENT_SUCCESS,
+	EVENT_TIMEOUT,
+	EVENT_END
+};
+
+/* Execution status of the callback function. */
+enum hinic3_mbox_cb_state {
+	HINIC3_VF_MBOX_CB_REG = 0,
+	HINIC3_VF_MBOX_CB_RUNNING,
+	HINIC3_PF_MBOX_CB_REG,
+	HINIC3_PF_MBOX_CB_RUNNING,
+	HINIC3_PPF_MBOX_CB_REG,
+	HINIC3_PPF_MBOX_CB_RUNNING,
+	HINIC3_PPF_TO_PF_MBOX_CB_REG,
+	HINIC3_PPF_TO_PF_MBOX_CB_RUNNIG
+};
+
+struct hinic3_mbox {
+	struct hinic3_hwdev *hwdev;
+
+	pthread_mutex_t mbox_send_mutex;
+	pthread_mutex_t msg_send_mutex;
+
+	struct hinic3_send_mbox send_mbox;
+
+	/* Last element for mgmt. */
+	struct hinic3_recv_mbox mbox_resp[HINIC3_MAX_FUNCTIONS + 1];
+	struct hinic3_recv_mbox mbox_send[HINIC3_MAX_FUNCTIONS + 1];
+
+	u8 send_msg_id;
+	enum mbox_event_state event_flag;
+	/* Lock for mbox event flag. */
+	rte_spinlock_t mbox_lock;
+};
+
+int hinic3_mbox_func_aeqe_handler(void *handle, u8 *header,
+				  __rte_unused u8 size, void *param);
+
+int hinic3_func_to_func_init(struct hinic3_hwdev *hwdev);
+
+void hinic3_func_to_func_free(struct hinic3_hwdev *hwdev);
+
+int hinic3_send_mbox_to_mgmt(struct hinic3_hwdev *hwdev,
+			     enum hinic3_mod_type mod, u16 cmd, void *buf_in,
+			     u16 in_size, void *buf_out, u16 *out_size,
+			     u32 timeout);
+
+void hinic3_response_mbox_to_mgmt(struct hinic3_hwdev *hwdev,
+				  enum hinic3_mod_type mod, u16 cmd,
+				  void *buf_in, u16 in_size, u16 msg_id);
+
+int hinic3_send_mbox_to_mgmt_no_ack(struct hinic3_hwdev *hwdev,
+				    enum hinic3_mod_type mod, u16 cmd,
+				    void *buf_in, u16 in_size);
+
+int hinic3_mbox_to_pf(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod,
+		      u16 cmd, void *buf_in, u16 in_size, void *buf_out,
+		      u16 *out_size, u32 timeout);
+
+int hinic3_mbox_to_vf(struct hinic3_hwdev *hwdev, enum hinic3_mod_type mod,
+		      u16 vf_id, u16 cmd, void *buf_in, u16 in_size,
+		      void *buf_out, u16 *out_size, u32 timeout);
+
+#endif /* _HINIC3_MBOX_H_ */
-- 
2.47.0.windows.2



* [RFC 12/18] net/hinic3: add device initialization
  2025-04-18  7:02 [RFC 00/18] add hinic3 PMD driver Feifei Wang
                   ` (2 preceding siblings ...)
  2025-04-18  7:02 ` [RFC 11/18] net/hinic3: add a mailbox communication module Feifei Wang
@ 2025-04-18  7:02 ` Feifei Wang
  2025-04-18  7:02 ` [RFC 13/18] net/hinic3: add dev ops Feifei Wang
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 16+ messages in thread
From: Feifei Wang @ 2025-04-18  7:02 UTC (permalink / raw)
  To: dev; +Cc: Xin Wang, Feifei Wang, Yi Chen

From: Xin Wang <wangxin679@h-partners.com>

This patch adds the data structures and functions
used for device initialization.

Signed-off-by: Xin Wang <wangxin679@h-partners.com>
Reviewed-by: Feifei Wang <wangfeifei40@huawei.com>
Reviewed-by: Yi Chen <chenyi221@huawei.com>
---
 drivers/net/hinic3/hinic3_ethdev.c | 514 +++++++++++++++++++++++++++++
 drivers/net/hinic3/hinic3_ethdev.h | 119 +++++++
 2 files changed, 633 insertions(+)
 create mode 100644 drivers/net/hinic3/hinic3_ethdev.c
 create mode 100644 drivers/net/hinic3/hinic3_ethdev.h

diff --git a/drivers/net/hinic3/hinic3_ethdev.c b/drivers/net/hinic3/hinic3_ethdev.c
new file mode 100644
index 0000000000..c4b2f5ffe4
--- /dev/null
+++ b/drivers/net/hinic3/hinic3_ethdev.c
@@ -0,0 +1,514 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Huawei Technologies Co., Ltd
+ */
+
+#include <rte_pci.h>
+#include <rte_bus_pci.h>
+#include <rte_mbuf.h>
+#include <rte_malloc.h>
+#include <rte_mempool.h>
+#include <rte_errno.h>
+#include <rte_ether.h>
+
+#include "base/hinic3_compat.h"
+#include "base/hinic3_csr.h"
+#include "base/hinic3_wq.h"
+#include "base/hinic3_eqs.h"
+#include "base/hinic3_cmdq.h"
+#include "base/hinic3_hwdev.h"
+#include "base/hinic3_hwif.h"
+#include "base/hinic3_hw_cfg.h"
+#include "base/hinic3_hw_comm.h"
+#include "base/hinic3_nic_cfg.h"
+#include "base/hinic3_nic_event.h"
+#include "hinic3_ethdev.h"
+
+/**
+ * Interrupt handler triggered by NIC for handling specific event.
+ *
+ * @param[in] param
+ * The address of the parameter (struct rte_eth_dev *) registered earlier.
+ */
+static void
+hinic3_dev_interrupt_handler(void *param)
+{
+	struct rte_eth_dev *dev = param;
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+
+	if (!hinic3_get_bit(HINIC3_DEV_INTR_EN, &nic_dev->dev_status)) {
+		PMD_DRV_LOG(WARNING,
+			    "Intr is disabled, ignore intr event, "
+			    "dev_name: %s, port_id: %d",
+			    nic_dev->dev_name, dev->data->port_id);
+		return;
+	}
+
+	/* Aeq0 msg handler. */
+	hinic3_dev_handle_aeq_event(nic_dev->hwdev, param);
+}
+
+static void
+hinic3_deinit_sw_rxtxqs(struct hinic3_nic_dev *nic_dev)
+{
+	rte_free(nic_dev->txqs);
+	nic_dev->txqs = NULL;
+
+	rte_free(nic_dev->rxqs);
+	nic_dev->rxqs = NULL;
+}
+
+/**
+ * Initialize the MAC/VLAN table in hardware.
+ *
+ * @param[in] eth_dev
+ * Pointer to ethernet device structure.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_init_mac_table(struct rte_eth_dev *eth_dev)
+{
+	struct hinic3_nic_dev *nic_dev =
+		HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(eth_dev);
+	u8 addr_bytes[RTE_ETHER_ADDR_LEN];
+	u16 func_id = 0;
+	int err = 0;
+
+	err = hinic3_get_default_mac(nic_dev->hwdev, addr_bytes,
+				     RTE_ETHER_ADDR_LEN);
+	if (err)
+		return err;
+
+	rte_ether_addr_copy((struct rte_ether_addr *)addr_bytes,
+			    &eth_dev->data->mac_addrs[0]);
+	if (rte_is_zero_ether_addr(&eth_dev->data->mac_addrs[0]))
+		rte_eth_random_addr(eth_dev->data->mac_addrs[0].addr_bytes);
+
+	func_id = hinic3_global_func_id(nic_dev->hwdev);
+	err = hinic3_set_mac(nic_dev->hwdev,
+			     eth_dev->data->mac_addrs[0].addr_bytes, 0,
+			     func_id);
+	if (err && err != HINIC3_PF_SET_VF_ALREADY)
+		return err;
+
+	rte_ether_addr_copy(&eth_dev->data->mac_addrs[0],
+			    &nic_dev->default_addr);
+
+	return 0;
+}
+
+/**
+ * Deinit mac_vlan table in hardware.
+ *
+ * @param[in] eth_dev
+ * Pointer to ethernet device structure.
+ */
+static void
+hinic3_deinit_mac_addr(struct rte_eth_dev *eth_dev)
+{
+	struct hinic3_nic_dev *nic_dev =
+		HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(eth_dev);
+	u16 func_id = 0;
+	int err;
+	int i;
+
+	func_id = hinic3_global_func_id(nic_dev->hwdev);
+
+	for (i = 0; i < HINIC3_MAX_UC_MAC_ADDRS; i++) {
+		if (rte_is_zero_ether_addr(&eth_dev->data->mac_addrs[i]))
+			continue;
+
+		err = hinic3_del_mac(nic_dev->hwdev,
+				     eth_dev->data->mac_addrs[i].addr_bytes, 0,
+				     func_id);
+		if (err && err != HINIC3_PF_SET_VF_ALREADY)
+			PMD_DRV_LOG(ERR,
+				    "Delete mac table failed, dev_name: %s",
+				    eth_dev->data->name);
+
+		memset(&eth_dev->data->mac_addrs[i], 0,
+		       sizeof(struct rte_ether_addr));
+	}
+
+	/* Delete multicast mac addrs. */
+	hinic3_delete_mc_addr_list(nic_dev);
+}
+
+/**
+ * Check the valid CoS bitmap to determine the available CoS IDs and set
+ * the default CoS ID to the highest valid one.
+ *
+ * @param[in] hwdev
+ * Pointer to hardware device structure.
+ * @param[out] cos_id
+ * Pointer to store the default CoS ID.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_pf_get_default_cos(struct hinic3_hwdev *hwdev, u8 *cos_id)
+{
+	u8 default_cos = 0;
+	u8 valid_cos_bitmap;
+	u8 i;
+
+	valid_cos_bitmap = hwdev->cfg_mgmt->svc_cap.cos_valid_bitmap;
+	if (!valid_cos_bitmap) {
+		PMD_DRV_LOG(ERR, "PF has no valid CoS to use");
+		return -EFAULT;
+	}
+
+	for (i = 0; i < HINIC3_COS_NUM_MAX; i++) {
+		if (valid_cos_bitmap & BIT(i))
+			/* Find max cos id as default cos. */
+			default_cos = i;
+	}
+
+	*cos_id = default_cos;
+
+	return 0;
+}
+
+static int
+hinic3_init_default_cos(struct hinic3_nic_dev *nic_dev)
+{
+	u8 cos_id = 0;
+	int err;
+
+	if (!HINIC3_IS_VF(nic_dev->hwdev)) {
+		err = hinic3_pf_get_default_cos(nic_dev->hwdev, &cos_id);
+		if (err) {
+			PMD_DRV_LOG(ERR, "Get PF default cos failed, err: %d",
+				    err);
+			return err;
+		}
+	} else {
+		err = hinic3_vf_get_default_cos(nic_dev->hwdev, &cos_id);
+		if (err) {
+			PMD_DRV_LOG(ERR, "Get VF default cos failed, err: %d",
+				    err);
+			return err;
+		}
+	}
+
+	nic_dev->default_cos = cos_id;
+	PMD_DRV_LOG(INFO, "Default cos %d", nic_dev->default_cos);
+	return 0;
+}
+
+/**
+ * Initialize Class of Service (CoS). For PF devices, it also syncs the link
+ * status with the physical port.
+ *
+ * @param[in] nic_dev
+ * Pointer to NIC device structure.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_set_default_hw_feature(struct hinic3_nic_dev *nic_dev)
+{
+	int err;
+
+	err = hinic3_init_default_cos(nic_dev);
+	if (err)
+		return err;
+
+	if (hinic3_func_type(nic_dev->hwdev) == TYPE_VF)
+		return 0;
+
+	err = hinic3_set_link_status_follow(nic_dev->hwdev,
+					    HINIC3_LINK_FOLLOW_PORT);
+	if (err == HINIC3_MGMT_CMD_UNSUPPORTED)
+		PMD_DRV_LOG(WARNING, "Setting link status to follow the "
+				     "phy port status is not supported");
+	else if (err)
+		return err;
+
+	return 0;
+}
+
+/**
+ * Initialize the network function, including hardware configuration, memory
+ * allocation for data structures, MAC address setup, and interrupt enabling.
+ * It also registers interrupt callbacks and sets default hardware features.
+ * If any step fails, appropriate cleanup is performed.
+ *
+ * @param[out] eth_dev
+ * Pointer to ethernet device structure.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_func_init(struct rte_eth_dev *eth_dev)
+{
+	struct hinic3_tcam_info *tcam_info = NULL;
+	struct hinic3_nic_dev *nic_dev = NULL;
+	struct rte_pci_device *pci_dev = NULL;
+	int err;
+
+	pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+
+	/* EAL is secondary and eth_dev is already created. */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		PMD_DRV_LOG(INFO, "Initialize %s in secondary process",
+			    eth_dev->data->name);
+
+		return 0;
+	}
+
+	nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(eth_dev);
+	memset(nic_dev, 0, sizeof(*nic_dev));
+	(void)snprintf(nic_dev->dev_name, sizeof(nic_dev->dev_name),
+		       "dbdf-%.4x:%.2x:%.2x.%x", pci_dev->addr.domain,
+		       pci_dev->addr.bus, pci_dev->addr.devid,
+		       pci_dev->addr.function);
+
+	/* Alloc mac_addrs. */
+	eth_dev->data->mac_addrs = rte_zmalloc("hinic3_mac",
+		HINIC3_MAX_UC_MAC_ADDRS * sizeof(struct rte_ether_addr), 0);
+	if (!eth_dev->data->mac_addrs) {
+		PMD_DRV_LOG(ERR,
+			    "Allocate %zx bytes to store MAC addresses "
+			    "failed, dev_name: %s",
+			    HINIC3_MAX_UC_MAC_ADDRS *
+				    sizeof(struct rte_ether_addr),
+			    eth_dev->data->name);
+		err = -ENOMEM;
+		goto alloc_eth_addr_fail;
+	}
+
+	nic_dev->mc_list = rte_zmalloc("hinic3_mc",
+		HINIC3_MAX_MC_MAC_ADDRS * sizeof(struct rte_ether_addr), 0);
+	if (!nic_dev->mc_list) {
+		PMD_DRV_LOG(ERR,
+			    "Allocate %zx bytes to store multicast "
+			    "addresses failed, dev_name: %s",
+			    HINIC3_MAX_MC_MAC_ADDRS *
+				    sizeof(struct rte_ether_addr),
+			    eth_dev->data->name);
+		err = -ENOMEM;
+		goto alloc_mc_list_fail;
+	}
+
+	/* Create hardware device. */
+	nic_dev->hwdev = rte_zmalloc("hinic3_hwdev", sizeof(*nic_dev->hwdev),
+				     RTE_CACHE_LINE_SIZE);
+	if (!nic_dev->hwdev) {
+		PMD_DRV_LOG(ERR, "Allocate hwdev memory failed, dev_name: %s",
+			    eth_dev->data->name);
+		err = -ENOMEM;
+		goto alloc_hwdev_mem_fail;
+	}
+	nic_dev->hwdev->pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+	nic_dev->hwdev->dev_handle = nic_dev;
+	nic_dev->hwdev->eth_dev = eth_dev;
+	nic_dev->hwdev->port_id = eth_dev->data->port_id;
+
+	err = hinic3_init_hwdev(nic_dev->hwdev);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Init chip hwdev failed, dev_name: %s",
+			    eth_dev->data->name);
+		goto init_hwdev_fail;
+	}
+
+	nic_dev->max_sqs = hinic3_func_max_sqs(nic_dev->hwdev);
+	nic_dev->max_rqs = hinic3_func_max_rqs(nic_dev->hwdev);
+
+	err = hinic3_init_nic_hwdev(nic_dev->hwdev);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Init nic hwdev failed, dev_name: %s",
+			    eth_dev->data->name);
+		goto init_nic_hwdev_fail;
+	}
+
+	err = hinic3_get_feature_from_hw(nic_dev->hwdev, &nic_dev->feature_cap,
+					 1);
+	if (err) {
+		PMD_DRV_LOG(ERR,
+			"Get nic feature from hardware failed, dev_name: %s",
+			eth_dev->data->name);
+		goto get_cap_fail;
+	}
+
+	err = hinic3_init_sw_rxtxqs(nic_dev);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Init sw rxqs or txqs failed, dev_name: %s",
+			    eth_dev->data->name);
+		goto init_sw_rxtxqs_fail;
+	}
+
+	err = hinic3_init_mac_table(eth_dev);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Init mac table failed, dev_name: %s",
+			    eth_dev->data->name);
+		goto init_mac_table_fail;
+	}
+
+	/* Set hardware feature to default status. */
+	err = hinic3_set_default_hw_feature(nic_dev);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Set hw default features failed, dev_name: %s",
+			    eth_dev->data->name);
+		goto set_default_feature_fail;
+	}
+
+	/* Register callback func to eal lib. */
+	err = rte_intr_callback_register(PCI_DEV_TO_INTR_HANDLE(pci_dev),
+					 hinic3_dev_interrupt_handler,
+					 (void *)eth_dev);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Register intr callback failed, dev_name: %s",
+			    eth_dev->data->name);
+		goto reg_intr_cb_fail;
+	}
+
+	/* Enable uio/vfio intr/eventfd mapping. */
+	err = rte_intr_enable(PCI_DEV_TO_INTR_HANDLE(pci_dev));
+	if (err) {
+		PMD_DRV_LOG(ERR, "Enable rte interrupt failed, dev_name: %s",
+			    eth_dev->data->name);
+		goto enable_intr_fail;
+	}
+	tcam_info = &nic_dev->tcam;
+	memset(tcam_info, 0, sizeof(struct hinic3_tcam_info));
+	TAILQ_INIT(&tcam_info->tcam_list);
+	TAILQ_INIT(&tcam_info->tcam_dynamic_info.tcam_dynamic_list);
+	TAILQ_INIT(&nic_dev->filter_ethertype_list);
+	TAILQ_INIT(&nic_dev->filter_fdir_rule_list);
+
+	hinic3_mutex_init(&nic_dev->rx_mode_mutex, NULL);
+
+	hinic3_set_bit(HINIC3_DEV_INTR_EN, &nic_dev->dev_status);
+
+	hinic3_set_bit(HINIC3_DEV_INIT, &nic_dev->dev_status);
+	PMD_DRV_LOG(INFO, "Initialize %s in primary succeed",
+		    eth_dev->data->name);
+
+	/* Queue xstats are filled automatically by the ethdev layer. */
+	eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
+
+	return 0;
+
+enable_intr_fail:
+	(void)rte_intr_callback_unregister(PCI_DEV_TO_INTR_HANDLE(pci_dev),
+					   hinic3_dev_interrupt_handler,
+					   (void *)eth_dev);
+
+reg_intr_cb_fail:
+set_default_feature_fail:
+	hinic3_deinit_mac_addr(eth_dev);
+
+init_mac_table_fail:
+	hinic3_deinit_sw_rxtxqs(nic_dev);
+
+init_sw_rxtxqs_fail:
+	hinic3_free_nic_hwdev(nic_dev->hwdev);
+
+get_cap_fail:
+init_nic_hwdev_fail:
+	hinic3_free_hwdev(nic_dev->hwdev);
+	eth_dev->dev_ops = NULL;
+	eth_dev->rx_queue_count = NULL;
+	eth_dev->rx_descriptor_status = NULL;
+	eth_dev->tx_descriptor_status = NULL;
+
+init_hwdev_fail:
+	rte_free(nic_dev->hwdev);
+	nic_dev->hwdev = NULL;
+
+alloc_hwdev_mem_fail:
+	rte_free(nic_dev->mc_list);
+	nic_dev->mc_list = NULL;
+
+alloc_mc_list_fail:
+	rte_free(eth_dev->data->mac_addrs);
+	eth_dev->data->mac_addrs = NULL;
+
+alloc_eth_addr_fail:
+	PMD_DRV_LOG(ERR, "Initialize %s in primary failed",
+		    eth_dev->data->name);
+	return err;
+}
+
+static int
+hinic3_dev_init(struct rte_eth_dev *eth_dev)
+{
+	struct rte_pci_device *pci_dev;
+
+	pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+
+	PMD_DRV_LOG(INFO, "Initializing %.4x:%.2x:%.2x.%x in %s process",
+		    pci_dev->addr.domain, pci_dev->addr.bus,
+		    pci_dev->addr.devid, pci_dev->addr.function,
+		    (rte_eal_process_type() == RTE_PROC_PRIMARY) ? "primary"
+								 : "secondary");
+
+	PMD_DRV_LOG(INFO, "Network Interface pmd driver version: %s",
+		    HINIC3_PMD_DRV_VERSION);
+
+	return hinic3_func_init(eth_dev);
+}
+
+static int
+hinic3_dev_uninit(struct rte_eth_dev *dev)
+{
+	struct hinic3_nic_dev *nic_dev;
+
+	nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+	hinic3_clear_bit(HINIC3_DEV_INIT, &nic_dev->dev_status);
+
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return 0;
+
+	return hinic3_dev_close(dev);
+}
+
+static const struct rte_pci_id pci_id_hinic3_map[] = {
+#ifdef CONFIG_SP_VID_DID
+	{RTE_PCI_DEVICE(PCI_VENDOR_ID_SPNIC, HINIC3_DEV_ID_STANDARD)},
+	{RTE_PCI_DEVICE(PCI_VENDOR_ID_SPNIC, HINIC3_DEV_ID_VF)},
+#else
+	{RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_STANDARD)},
+	{RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HINIC3_DEV_ID_VF)},
+#endif
+
+	{.vendor_id = 0},
+};
+
+static int
+hinic3_pci_probe(__rte_unused struct rte_pci_driver *pci_drv,
+		 struct rte_pci_device *pci_dev)
+{
+	return rte_eth_dev_pci_generic_probe(pci_dev,
+		sizeof(struct hinic3_nic_dev), hinic3_dev_init);
+}
+
+static int
+hinic3_pci_remove(struct rte_pci_device *pci_dev)
+{
+	return rte_eth_dev_pci_generic_remove(pci_dev, hinic3_dev_uninit);
+}
+
+static struct rte_pci_driver rte_hinic3_pmd = {
+	.id_table = pci_id_hinic3_map,
+	.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC,
+	.probe = hinic3_pci_probe,
+	.remove = hinic3_pci_remove,
+};
+
+RTE_PMD_REGISTER_PCI(net_hinic3, rte_hinic3_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(net_hinic3, pci_id_hinic3_map);
+
+RTE_INIT(hinic3_init_log)
+{
+	hinic3_logtype = rte_log_register("pmd.net.hinic3");
+	if (hinic3_logtype >= 0)
+		rte_log_set_level(hinic3_logtype, RTE_LOG_INFO);
+}
diff --git a/drivers/net/hinic3/hinic3_ethdev.h b/drivers/net/hinic3/hinic3_ethdev.h
new file mode 100644
index 0000000000..a69cf972e7
--- /dev/null
+++ b/drivers/net/hinic3/hinic3_ethdev.h
@@ -0,0 +1,119 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Huawei Technologies Co., Ltd
+ */
+
+#ifndef _HINIC3_ETHDEV_H_
+#define _HINIC3_ETHDEV_H_
+
+#include <rte_ethdev.h>
+#include <rte_ethdev_core.h>
+
+#define HINIC3_PMD_DRV_VERSION "B106"
+
+#define PCI_DEV_TO_INTR_HANDLE(pci_dev) ((pci_dev)->intr_handle)
+
+#define HINIC3_PKT_RX_L4_CKSUM_BAD     RTE_MBUF_F_RX_L4_CKSUM_BAD
+#define HINIC3_PKT_RX_IP_CKSUM_BAD     RTE_MBUF_F_RX_IP_CKSUM_BAD
+#define HINIC3_PKT_RX_IP_CKSUM_UNKNOWN RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN
+#define HINIC3_PKT_RX_L4_CKSUM_GOOD    RTE_MBUF_F_RX_L4_CKSUM_GOOD
+#define HINIC3_PKT_RX_IP_CKSUM_GOOD    RTE_MBUF_F_RX_IP_CKSUM_GOOD
+#define HINIC3_PKT_TX_TCP_SEG	       RTE_MBUF_F_TX_TCP_SEG
+#define HINIC3_PKT_TX_UDP_CKSUM	       RTE_MBUF_F_TX_UDP_CKSUM
+#define HINIC3_PKT_TX_TCP_CKSUM	       RTE_MBUF_F_TX_TCP_CKSUM
+#define HINIC3_PKT_TX_IP_CKSUM	       RTE_MBUF_F_TX_IP_CKSUM
+#define HINIC3_PKT_TX_VLAN_PKT	       RTE_MBUF_F_TX_VLAN
+#define HINIC3_PKT_TX_L4_MASK	       RTE_MBUF_F_TX_L4_MASK
+#define HINIC3_PKT_TX_SCTP_CKSUM       RTE_MBUF_F_TX_SCTP_CKSUM
+#define HINIC3_PKT_TX_IPV6	       RTE_MBUF_F_TX_IPV6
+#define HINIC3_PKT_TX_IPV4	       RTE_MBUF_F_TX_IPV4
+#define HINIC3_PKT_RX_VLAN	       RTE_MBUF_F_RX_VLAN
+#define HINIC3_PKT_RX_VLAN_STRIPPED    RTE_MBUF_F_RX_VLAN_STRIPPED
+#define HINIC3_PKT_RX_RSS_HASH	       RTE_MBUF_F_RX_RSS_HASH
+#define HINIC3_PKT_TX_TUNNEL_MASK      RTE_MBUF_F_TX_TUNNEL_MASK
+#define HINIC3_PKT_TX_TUNNEL_VXLAN     RTE_MBUF_F_TX_TUNNEL_VXLAN
+#define HINIC3_PKT_TX_OUTER_IP_CKSUM   RTE_MBUF_F_TX_OUTER_IP_CKSUM
+#define HINIC3_PKT_TX_OUTER_IPV6       RTE_MBUF_F_TX_OUTER_IPV6
+#define HINIC3_PKT_RX_LRO	       RTE_MBUF_F_RX_LRO
+#define HINIC3_PKT_TX_L4_NO_CKSUM      RTE_MBUF_F_TX_L4_NO_CKSUM
+
+#define HINCI3_CPY_MEMPOOL_NAME "cpy_mempool"
+/* Mbuf pool for copy invalid mbuf segs. */
+#define HINIC3_COPY_MEMPOOL_DEPTH 1024
+#define HINIC3_COPY_MEMPOOL_CACHE 128
+#define HINIC3_COPY_MBUF_SIZE	  4096
+
+#define HINIC3_DEV_NAME_LEN 32
+#define DEV_STOP_DELAY_MS   100
+#define DEV_START_DELAY_MS  100
+
+#define HINIC3_UINT32_BIT_SIZE (CHAR_BIT * sizeof(uint32_t))
+#define HINIC3_VFTA_SIZE       (4096 / HINIC3_UINT32_BIT_SIZE)
+#define HINIC3_MAX_QUEUE_NUM   64
+
+#define HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev) \
+	((struct hinic3_nic_dev *)(dev)->data->dev_private)
+
+enum hinic3_dev_status {
+	HINIC3_DEV_INIT,
+	HINIC3_DEV_CLOSE,
+	HINIC3_DEV_START,
+	HINIC3_DEV_INTR_EN
+};
+
+enum hinic3_tx_cvlan_type {
+	HINIC3_TX_TPID0,
+};
+
+enum nic_feature_cap {
+	NIC_F_CSUM = BIT(0),
+	NIC_F_SCTP_CRC = BIT(1),
+	NIC_F_TSO = BIT(2),
+	NIC_F_LRO = BIT(3),
+	NIC_F_UFO = BIT(4),
+	NIC_F_RSS = BIT(5),
+	NIC_F_RX_VLAN_FILTER = BIT(6),
+	NIC_F_RX_VLAN_STRIP = BIT(7),
+	NIC_F_TX_VLAN_INSERT = BIT(8),
+	NIC_F_VXLAN_OFFLOAD = BIT(9),
+	NIC_F_IPSEC_OFFLOAD = BIT(10),
+	NIC_F_FDIR = BIT(11),
+	NIC_F_PROMISC = BIT(12),
+	NIC_F_ALLMULTI = BIT(13),
+};
+
+#define DEFAULT_DRV_FEATURE 0x3FFF
+
+struct hinic3_nic_dev {
+	struct hinic3_hwdev *hwdev; /**< Hardware device. */
+	struct hinic3_txq **txqs;
+	struct hinic3_rxq **rxqs;
+	struct rte_mempool *cpy_mpool;
+
+	u16 num_sqs;
+	u16 num_rqs;
+	u16 max_sqs;
+	u16 max_rqs;
+
+	u16 rx_buff_len;
+	u16 mtu_size;
+
+	u32 rx_mode;
+	u8 rx_queue_list[HINIC3_MAX_QUEUE_NUM];
+	rte_spinlock_t queue_list_lock;
+
+	pthread_mutex_t rx_mode_mutex;
+
+	u32 default_cos;
+	u32 rx_csum_en;
+
+	unsigned long dev_status;
+
+	struct rte_ether_addr default_addr;
+	struct rte_ether_addr *mc_list;
+
+	char dev_name[HINIC3_DEV_NAME_LEN];
+	u64 feature_cap;
+	u32 vfta[HINIC3_VFTA_SIZE]; /**< VLAN bitmap. */
+};
+
+#endif /* _HINIC3_ETHDEV_H_ */
-- 
2.47.0.windows.2



* [RFC 13/18] net/hinic3: add dev ops
  2025-04-18  7:02 [RFC 00/18] add hinic3 PMD driver Feifei Wang
                   ` (3 preceding siblings ...)
  2025-04-18  7:02 ` [RFC 12/18] net/hinic3: add device initialization Feifei Wang
@ 2025-04-18  7:02 ` Feifei Wang
  2025-04-18  7:02 ` [RFC 14/18] net/hinic3: add Rx/Tx functions Feifei Wang
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 16+ messages in thread
From: Feifei Wang @ 2025-04-18  7:02 UTC (permalink / raw)
  To: dev; +Cc: Feifei Wang, Xin Wang, Yi Chen

From: Feifei Wang <wangfeifei40@huawei.com>

Add the device ops related functions.

Signed-off-by: Feifei Wang <wangfeifei40@huawei.com>
Signed-off-by: Xin Wang <wangxin679@h-partners.com>
Reviewed-by: Yi Chen <chenyi221@huawei.com>
---
 drivers/net/hinic3/hinic3_ethdev.c | 2918 +++++++++++++++++++++++++++-
 drivers/net/hinic3/hinic3_nic_io.c |  827 ++++++++
 drivers/net/hinic3/hinic3_nic_io.h |  169 ++
 drivers/net/hinic3/hinic3_rx.c     |  811 ++++++++
 drivers/net/hinic3/hinic3_rx.h     |  356 ++++
 drivers/net/hinic3/hinic3_tx.c     |  274 +++
 drivers/net/hinic3/hinic3_tx.h     |  314 +++
 7 files changed, 5652 insertions(+), 17 deletions(-)
 create mode 100644 drivers/net/hinic3/hinic3_nic_io.c
 create mode 100644 drivers/net/hinic3/hinic3_nic_io.h
 create mode 100644 drivers/net/hinic3/hinic3_rx.c
 create mode 100644 drivers/net/hinic3/hinic3_rx.h
 create mode 100644 drivers/net/hinic3/hinic3_tx.c
 create mode 100644 drivers/net/hinic3/hinic3_tx.h

diff --git a/drivers/net/hinic3/hinic3_ethdev.c b/drivers/net/hinic3/hinic3_ethdev.c
index c4b2f5ffe4..de380dddbb 100644
--- a/drivers/net/hinic3/hinic3_ethdev.c
+++ b/drivers/net/hinic3/hinic3_ethdev.c
@@ -21,42 +21,2917 @@
 #include "base/hinic3_hw_comm.h"
 #include "base/hinic3_nic_cfg.h"
 #include "base/hinic3_nic_event.h"
+#include "hinic3_pmd_nic_io.h"
+#include "hinic3_pmd_tx.h"
+#include "hinic3_pmd_rx.h"
 #include "hinic3_ethdev.h"
 
+#define HINIC3_MIN_RX_BUF_SIZE 1024
+
+#define HINIC3_DEFAULT_BURST_SIZE 32
+#define HINIC3_DEFAULT_NB_QUEUES  1
+#define HINIC3_DEFAULT_RING_SIZE  1024
+#define HINIC3_MAX_LRO_SIZE	  65536
+
+#define HINIC3_DEFAULT_RX_FREE_THRESH 32
+#define HINIC3_DEFAULT_TX_FREE_THRESH 32
+
+#define HINIC3_RX_WAIT_CYCLE_THRESH 500
+
+/**
+ * Get the 32-bit VFTA bit mask for the lower 5 bits of the VLAN ID.
+ *
+ * The VLAN ID is a 12-bit number. The VFTA is a 4096-bit array made of
+ * 128 32-bit elements. Since 2^5 = 32, the lower 5 bits of the VLAN ID
+ * select the bit within a 32-bit element, and the upper 7 bits select
+ * the VFTA array index.
+ */
+#define HINIC3_VFTA_BIT(vlan_id) (1 << ((vlan_id) & 0x1F))
+/**
+ * Get the VFTA index from the upper 7 bits of the VLAN ID.
+ */
+#define HINIC3_VFTA_IDX(vlan_id) ((vlan_id) >> 5)
+
+#define HINIC3_LRO_DEFAULT_TIME_LIMIT 16
+#define HINIC3_LRO_UNIT_WQE_SIZE      1024 /**< Bytes. */
+
+#define HINIC3_MAX_RX_PKT_LEN(rxmod) ((rxmod).mtu)
+
+int hinic3_logtype; /**< Driver-specific log messages type. */
+
+/**
+ * The different receive modes for the NIC.
+ *
+ * The receive modes are represented as bit flags that control how the
+ * NIC handles various types of network traffic.
+ */
+enum hinic3_rx_mod {
+	/* Enable unicast receive mode. */
+	HINIC3_RX_MODE_UC = 1 << 0,
+	/* Enable multicast receive mode. */
+	HINIC3_RX_MODE_MC = 1 << 1,
+	/* Enable broadcast receive mode. */
+	HINIC3_RX_MODE_BC = 1 << 2,
+	/* Enable receive mode for all multicast addresses. */
+	HINIC3_RX_MODE_MC_ALL = 1 << 3,
+	/* Enable promiscuous mode, receiving all packets. */
+	HINIC3_RX_MODE_PROMISC = 1 << 4,
+};
+
+#define HINIC3_DEFAULT_RX_MODE \
+	(HINIC3_RX_MODE_UC | HINIC3_RX_MODE_MC | HINIC3_RX_MODE_BC)
+
+struct hinic3_xstats_name_off {
+	char name[RTE_ETH_XSTATS_NAME_SIZE];
+	u32 offset;
+};
+
+#define HINIC3_FUNC_STAT(_stat_item)                                       \
+	{                                                                  \
+		.name = #_stat_item,                                       \
+		.offset = offsetof(struct hinic3_vport_stats, _stat_item), \
+	}
+
+static const struct hinic3_xstats_name_off hinic3_vport_stats_strings[] = {
+	HINIC3_FUNC_STAT(tx_unicast_pkts_vport),
+	HINIC3_FUNC_STAT(tx_unicast_bytes_vport),
+	HINIC3_FUNC_STAT(tx_multicast_pkts_vport),
+	HINIC3_FUNC_STAT(tx_multicast_bytes_vport),
+	HINIC3_FUNC_STAT(tx_broadcast_pkts_vport),
+	HINIC3_FUNC_STAT(tx_broadcast_bytes_vport),
+
+	HINIC3_FUNC_STAT(rx_unicast_pkts_vport),
+	HINIC3_FUNC_STAT(rx_unicast_bytes_vport),
+	HINIC3_FUNC_STAT(rx_multicast_pkts_vport),
+	HINIC3_FUNC_STAT(rx_multicast_bytes_vport),
+	HINIC3_FUNC_STAT(rx_broadcast_pkts_vport),
+	HINIC3_FUNC_STAT(rx_broadcast_bytes_vport),
+
+	HINIC3_FUNC_STAT(tx_discard_vport),
+	HINIC3_FUNC_STAT(rx_discard_vport),
+	HINIC3_FUNC_STAT(tx_err_vport),
+	HINIC3_FUNC_STAT(rx_err_vport),
+};
+
+#define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))
+
+#define HINIC3_VPORT_XSTATS_NUM ARRAY_SIZE(hinic3_vport_stats_strings)
+
+#define HINIC3_PORT_STAT(_stat_item)                                       \
+	{                                                                  \
+		.name = #_stat_item,                                       \
+		.offset = offsetof(struct mag_phy_port_stats, _stat_item), \
+	}
+
+static const struct hinic3_xstats_name_off hinic3_phyport_stats_strings[] = {
+	HINIC3_PORT_STAT(mac_tx_fragment_pkt_num),
+	HINIC3_PORT_STAT(mac_tx_undersize_pkt_num),
+	HINIC3_PORT_STAT(mac_tx_undermin_pkt_num),
+	HINIC3_PORT_STAT(mac_tx_64_oct_pkt_num),
+	HINIC3_PORT_STAT(mac_tx_65_127_oct_pkt_num),
+	HINIC3_PORT_STAT(mac_tx_128_255_oct_pkt_num),
+	HINIC3_PORT_STAT(mac_tx_256_511_oct_pkt_num),
+	HINIC3_PORT_STAT(mac_tx_512_1023_oct_pkt_num),
+	HINIC3_PORT_STAT(mac_tx_1024_1518_oct_pkt_num),
+	HINIC3_PORT_STAT(mac_tx_1519_2047_oct_pkt_num),
+	HINIC3_PORT_STAT(mac_tx_2048_4095_oct_pkt_num),
+	HINIC3_PORT_STAT(mac_tx_4096_8191_oct_pkt_num),
+	HINIC3_PORT_STAT(mac_tx_8192_9216_oct_pkt_num),
+	HINIC3_PORT_STAT(mac_tx_9217_12287_oct_pkt_num),
+	HINIC3_PORT_STAT(mac_tx_12288_16383_oct_pkt_num),
+	HINIC3_PORT_STAT(mac_tx_1519_max_bad_pkt_num),
+	HINIC3_PORT_STAT(mac_tx_1519_max_good_pkt_num),
+	HINIC3_PORT_STAT(mac_tx_oversize_pkt_num),
+	HINIC3_PORT_STAT(mac_tx_jabber_pkt_num),
+	HINIC3_PORT_STAT(mac_tx_bad_pkt_num),
+	HINIC3_PORT_STAT(mac_tx_bad_oct_num),
+	HINIC3_PORT_STAT(mac_tx_good_pkt_num),
+	HINIC3_PORT_STAT(mac_tx_good_oct_num),
+	HINIC3_PORT_STAT(mac_tx_total_pkt_num),
+	HINIC3_PORT_STAT(mac_tx_total_oct_num),
+	HINIC3_PORT_STAT(mac_tx_uni_pkt_num),
+	HINIC3_PORT_STAT(mac_tx_multi_pkt_num),
+	HINIC3_PORT_STAT(mac_tx_broad_pkt_num),
+	HINIC3_PORT_STAT(mac_tx_pause_num),
+	HINIC3_PORT_STAT(mac_tx_pfc_pkt_num),
+	HINIC3_PORT_STAT(mac_tx_pfc_pri0_pkt_num),
+	HINIC3_PORT_STAT(mac_tx_pfc_pri1_pkt_num),
+	HINIC3_PORT_STAT(mac_tx_pfc_pri2_pkt_num),
+	HINIC3_PORT_STAT(mac_tx_pfc_pri3_pkt_num),
+	HINIC3_PORT_STAT(mac_tx_pfc_pri4_pkt_num),
+	HINIC3_PORT_STAT(mac_tx_pfc_pri5_pkt_num),
+	HINIC3_PORT_STAT(mac_tx_pfc_pri6_pkt_num),
+	HINIC3_PORT_STAT(mac_tx_pfc_pri7_pkt_num),
+	HINIC3_PORT_STAT(mac_tx_control_pkt_num),
+	HINIC3_PORT_STAT(mac_tx_err_all_pkt_num),
+	HINIC3_PORT_STAT(mac_tx_from_app_good_pkt_num),
+	HINIC3_PORT_STAT(mac_tx_from_app_bad_pkt_num),
+
+	HINIC3_PORT_STAT(mac_rx_fragment_pkt_num),
+	HINIC3_PORT_STAT(mac_rx_undersize_pkt_num),
+	HINIC3_PORT_STAT(mac_rx_undermin_pkt_num),
+	HINIC3_PORT_STAT(mac_rx_64_oct_pkt_num),
+	HINIC3_PORT_STAT(mac_rx_65_127_oct_pkt_num),
+	HINIC3_PORT_STAT(mac_rx_128_255_oct_pkt_num),
+	HINIC3_PORT_STAT(mac_rx_256_511_oct_pkt_num),
+	HINIC3_PORT_STAT(mac_rx_512_1023_oct_pkt_num),
+	HINIC3_PORT_STAT(mac_rx_1024_1518_oct_pkt_num),
+	HINIC3_PORT_STAT(mac_rx_1519_2047_oct_pkt_num),
+	HINIC3_PORT_STAT(mac_rx_2048_4095_oct_pkt_num),
+	HINIC3_PORT_STAT(mac_rx_4096_8191_oct_pkt_num),
+	HINIC3_PORT_STAT(mac_rx_8192_9216_oct_pkt_num),
+	HINIC3_PORT_STAT(mac_rx_9217_12287_oct_pkt_num),
+	HINIC3_PORT_STAT(mac_rx_12288_16383_oct_pkt_num),
+	HINIC3_PORT_STAT(mac_rx_1519_max_bad_pkt_num),
+	HINIC3_PORT_STAT(mac_rx_1519_max_good_pkt_num),
+	HINIC3_PORT_STAT(mac_rx_oversize_pkt_num),
+	HINIC3_PORT_STAT(mac_rx_jabber_pkt_num),
+	HINIC3_PORT_STAT(mac_rx_bad_pkt_num),
+	HINIC3_PORT_STAT(mac_rx_bad_oct_num),
+	HINIC3_PORT_STAT(mac_rx_good_pkt_num),
+	HINIC3_PORT_STAT(mac_rx_good_oct_num),
+	HINIC3_PORT_STAT(mac_rx_total_pkt_num),
+	HINIC3_PORT_STAT(mac_rx_total_oct_num),
+	HINIC3_PORT_STAT(mac_rx_uni_pkt_num),
+	HINIC3_PORT_STAT(mac_rx_multi_pkt_num),
+	HINIC3_PORT_STAT(mac_rx_broad_pkt_num),
+	HINIC3_PORT_STAT(mac_rx_pause_num),
+	HINIC3_PORT_STAT(mac_rx_pfc_pkt_num),
+	HINIC3_PORT_STAT(mac_rx_pfc_pri0_pkt_num),
+	HINIC3_PORT_STAT(mac_rx_pfc_pri1_pkt_num),
+	HINIC3_PORT_STAT(mac_rx_pfc_pri2_pkt_num),
+	HINIC3_PORT_STAT(mac_rx_pfc_pri3_pkt_num),
+	HINIC3_PORT_STAT(mac_rx_pfc_pri4_pkt_num),
+	HINIC3_PORT_STAT(mac_rx_pfc_pri5_pkt_num),
+	HINIC3_PORT_STAT(mac_rx_pfc_pri6_pkt_num),
+	HINIC3_PORT_STAT(mac_rx_pfc_pri7_pkt_num),
+	HINIC3_PORT_STAT(mac_rx_control_pkt_num),
+	HINIC3_PORT_STAT(mac_rx_sym_err_pkt_num),
+	HINIC3_PORT_STAT(mac_rx_fcs_err_pkt_num),
+	HINIC3_PORT_STAT(mac_rx_send_app_good_pkt_num),
+	HINIC3_PORT_STAT(mac_rx_send_app_bad_pkt_num),
+	HINIC3_PORT_STAT(mac_rx_unfilter_pkt_num),
+};
+
+#define HINIC3_PHYPORT_XSTATS_NUM ARRAY_SIZE(hinic3_phyport_stats_strings)
+
+#define HINIC3_RXQ_STAT(_stat_item)                                      \
+	{                                                                \
+		.name = #_stat_item,                                     \
+		.offset = offsetof(struct hinic3_rxq_stats, _stat_item), \
+	}
+
+/**
+ * The name and offset of each RXQ statistics item.
+ *
+ * The inclusion of additional statistics depends on the compilation flags:
+ * - `HINIC3_XSTAT_RXBUF_INFO` enables buffer-related stats.
+ * - `HINIC3_XSTAT_PROF_RX` enables performance timing stats.
+ * - `HINIC3_XSTAT_MBUF_USE` enables memory buffer usage stats.
+ */
+static const struct hinic3_xstats_name_off hinic3_rxq_stats_strings[] = {
+	HINIC3_RXQ_STAT(rx_nombuf),
+	HINIC3_RXQ_STAT(burst_pkts),
+	HINIC3_RXQ_STAT(errors),
+	HINIC3_RXQ_STAT(csum_errors),
+	HINIC3_RXQ_STAT(other_errors),
+	HINIC3_RXQ_STAT(empty),
+
+#ifdef HINIC3_XSTAT_RXBUF_INFO
+	HINIC3_RXQ_STAT(rx_mbuf),
+	HINIC3_RXQ_STAT(rx_avail),
+	HINIC3_RXQ_STAT(rx_hole),
+#endif
+
+#ifdef HINIC3_XSTAT_PROF_RX
+	HINIC3_RXQ_STAT(app_tsc),
+	HINIC3_RXQ_STAT(pmd_tsc),
+#endif
+
+#ifdef HINIC3_XSTAT_MBUF_USE
+	HINIC3_RXQ_STAT(rx_alloc_mbuf_bytes),
+	HINIC3_RXQ_STAT(rx_free_mbuf_bytes),
+	HINIC3_RXQ_STAT(rx_left_mbuf_bytes),
+#endif
+};
+
+#define HINIC3_RXQ_XSTATS_NUM ARRAY_SIZE(hinic3_rxq_stats_strings)
+
+#define HINIC3_TXQ_STAT(_stat_item)                                      \
+	{                                                                \
+		.name = #_stat_item,                                     \
+		.offset = offsetof(struct hinic3_txq_stats, _stat_item), \
+	}
+
+/**
+ * The name and offset of each TXQ statistics item.
+ *
+ * The inclusion of additional statistics depends on the compilation flags:
+ * - `HINIC3_XSTAT_PROF_TX` enables performance timing stats.
+ * - `HINIC3_XSTAT_MBUF_USE` enables memory buffer usage stats.
+ */
+static const struct hinic3_xstats_name_off hinic3_txq_stats_strings[] = {
+	HINIC3_TXQ_STAT(tx_busy),
+	HINIC3_TXQ_STAT(offload_errors),
+	HINIC3_TXQ_STAT(burst_pkts),
+	HINIC3_TXQ_STAT(sge_len0),
+	HINIC3_TXQ_STAT(mbuf_null),
+
+#ifdef HINIC3_XSTAT_PROF_TX
+	HINIC3_TXQ_STAT(app_tsc),
+	HINIC3_TXQ_STAT(pmd_tsc),
+#endif
+
+#ifdef HINIC3_XSTAT_MBUF_USE
+	HINIC3_TXQ_STAT(tx_left_mbuf_bytes),
+#endif
+};
+
+#define HINIC3_TXQ_XSTATS_NUM ARRAY_SIZE(hinic3_txq_stats_strings)
+
+static int
+hinic3_xstats_calc_num(struct hinic3_nic_dev *nic_dev)
+{
+	if (HINIC3_IS_VF(nic_dev->hwdev)) {
+		return (HINIC3_VPORT_XSTATS_NUM +
+			HINIC3_RXQ_XSTATS_NUM * nic_dev->num_rqs +
+			HINIC3_TXQ_XSTATS_NUM * nic_dev->num_sqs);
+	} else {
+		return (HINIC3_VPORT_XSTATS_NUM + HINIC3_PHYPORT_XSTATS_NUM +
+			HINIC3_RXQ_XSTATS_NUM * nic_dev->num_rqs +
+			HINIC3_TXQ_XSTATS_NUM * nic_dev->num_sqs);
+	}
+}
+
+#define HINIC3_MAX_QUEUE_DEPTH 16384
+#define HINIC3_MIN_QUEUE_DEPTH 128
+#define HINIC3_TXD_ALIGN       1
+#define HINIC3_RXD_ALIGN       1
+
+static const struct rte_eth_desc_lim hinic3_rx_desc_lim = {
+	.nb_max = HINIC3_MAX_QUEUE_DEPTH,
+	.nb_min = HINIC3_MIN_QUEUE_DEPTH,
+	.nb_align = HINIC3_RXD_ALIGN,
+};
+
+static const struct rte_eth_desc_lim hinic3_tx_desc_lim = {
+	.nb_max = HINIC3_MAX_QUEUE_DEPTH,
+	.nb_min = HINIC3_MIN_QUEUE_DEPTH,
+	.nb_align = HINIC3_TXD_ALIGN,
+};
+
+static void hinic3_deinit_mac_addr(struct rte_eth_dev *eth_dev);
+
+static int hinic3_copy_mempool_init(struct hinic3_nic_dev *nic_dev);
+
+static void hinic3_copy_mempool_uninit(struct hinic3_nic_dev *nic_dev);
+
+/**
+ * Interrupt handler triggered by NIC for handling a specific event.
+ *
+ * @param[in] param
+ * The address of the parameter (struct rte_eth_dev *) registered before.
+ */
+static void
+hinic3_dev_interrupt_handler(void *param)
+{
+	struct rte_eth_dev *dev = param;
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+
+	if (!hinic3_get_bit(HINIC3_DEV_INTR_EN, &nic_dev->dev_status)) {
+		PMD_DRV_LOG(WARNING,
+			    "Intr is disabled, ignore intr event, "
+			    "dev_name: %s, port_id: %d",
+			    nic_dev->dev_name, dev->data->port_id);
+		return;
+	}
+
+	/* Aeq0 msg handler. */
+	hinic3_dev_handle_aeq_event(nic_dev->hwdev, param);
+}
+
+/**
+ * Configure Tx/Rx queues, including queue number, MTU size, and RSS.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_dev_configure(struct rte_eth_dev *dev)
+{
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+
+	nic_dev->num_sqs = dev->data->nb_tx_queues;
+	nic_dev->num_rqs = dev->data->nb_rx_queues;
+
+	if (nic_dev->num_sqs > nic_dev->max_sqs ||
+	    nic_dev->num_rqs > nic_dev->max_rqs) {
+		PMD_DRV_LOG(ERR,
+			    "num_sqs: %d or num_rqs: %d larger than "
+			    "max_sqs: %d or max_rqs: %d",
+			    nic_dev->num_sqs, nic_dev->num_rqs,
+			    nic_dev->max_sqs, nic_dev->max_rqs);
+		return -EINVAL;
+	}
+
+	/* The range of MTU is 384 ~ 9600. */
+	if (HINIC3_MAX_RX_PKT_LEN(dev->data->dev_conf.rxmode) <
+		    HINIC3_MIN_FRAME_SIZE ||
+	    HINIC3_MAX_RX_PKT_LEN(dev->data->dev_conf.rxmode) >
+		    HINIC3_MAX_JUMBO_FRAME_SIZE) {
+		PMD_DRV_LOG(ERR,
+			    "Max rx pkt len out of range, max_rx_pkt_len: %d, "
+			    "expect between %d and %d",
+			    HINIC3_MAX_RX_PKT_LEN(dev->data->dev_conf.rxmode),
+			    HINIC3_MIN_FRAME_SIZE, HINIC3_MAX_JUMBO_FRAME_SIZE);
+		return -EINVAL;
+	}
+	nic_dev->mtu_size =
+		(u16)HINIC3_PKTLEN_TO_MTU(HINIC3_MAX_RX_PKT_LEN(dev->data->dev_conf.rxmode));
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |=
+			RTE_ETH_RX_OFFLOAD_RSS_HASH;
+
+	/* Clear fdir filter. */
+	hinic3_free_fdir_filter(dev);
+
+	return 0;
+}
+
+/**
+ * Get information about the device.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[out] info
+ * Info structure for ethernet device.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
+{
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+
+	info->max_rx_queues = nic_dev->max_rqs;
+	info->max_tx_queues = nic_dev->max_sqs;
+	info->min_rx_bufsize = HINIC3_MIN_RX_BUF_SIZE;
+	info->max_rx_pktlen = HINIC3_MAX_JUMBO_FRAME_SIZE;
+	info->max_mac_addrs = HINIC3_MAX_UC_MAC_ADDRS;
+	info->min_mtu = HINIC3_MIN_MTU_SIZE;
+	info->max_mtu = HINIC3_MAX_MTU_SIZE;
+	info->max_lro_pkt_size = HINIC3_MAX_LRO_SIZE;
+
+	info->rx_queue_offload_capa = 0;
+	info->rx_offload_capa =
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP | RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_SCATTER | RTE_ETH_RX_OFFLOAD_TCP_LRO |
+		RTE_ETH_RX_OFFLOAD_RSS_HASH;
+
+	info->tx_queue_offload_capa = 0;
+	info->tx_offload_capa =
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT | RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM | RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO | RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
+
+	info->hash_key_size = HINIC3_RSS_KEY_SIZE;
+	info->reta_size = HINIC3_RSS_INDIR_SIZE;
+	info->flow_type_rss_offloads = HINIC3_RSS_OFFLOAD_ALL;
+
+	info->rx_desc_lim = hinic3_rx_desc_lim;
+	info->tx_desc_lim = hinic3_tx_desc_lim;
+
+	/* Driver-preferred rx/tx parameters. */
+	info->default_rxportconf.burst_size = HINIC3_DEFAULT_BURST_SIZE;
+	info->default_txportconf.burst_size = HINIC3_DEFAULT_BURST_SIZE;
+	info->default_rxportconf.nb_queues = HINIC3_DEFAULT_NB_QUEUES;
+	info->default_txportconf.nb_queues = HINIC3_DEFAULT_NB_QUEUES;
+	info->default_rxportconf.ring_size = HINIC3_DEFAULT_RING_SIZE;
+	info->default_txportconf.ring_size = HINIC3_DEFAULT_RING_SIZE;
+
+	return 0;
+}
+
+static int
+hinic3_fw_version_get(struct rte_eth_dev *dev, char *fw_version, size_t fw_size)
+{
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+	char mgmt_ver[MGMT_VERSION_MAX_LEN] = {0};
+	int err;
+
+	err = hinic3_get_mgmt_version(nic_dev->hwdev, mgmt_ver,
+				      sizeof(mgmt_ver));
+	if (err) {
+		PMD_DRV_LOG(ERR, "Get fw version failed");
+		return -EIO;
+	}
+
+	if (fw_size < strlen(mgmt_ver) + 1)
+		return (int)(strlen(mgmt_ver) + 1);
+
+	(void)snprintf(fw_version, fw_size, "%s", mgmt_ver);
+
+	return 0;
+}
+
+/**
+ * Set ethernet device link state up.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_dev_set_link_up(struct rte_eth_dev *dev)
+{
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+	int err;
+
+	/*
+	 * Vport enable will mark the function as valid in the MPU,
+	 * so the dev start status needs to be checked before vport enable.
+	 */
+	if (hinic3_get_bit(HINIC3_DEV_START, &nic_dev->dev_status)) {
+		err = hinic3_set_vport_enable(nic_dev->hwdev, true);
+		if (err) {
+			PMD_DRV_LOG(ERR, "Enable vport failed, dev_name: %s",
+				    nic_dev->dev_name);
+			return err;
+		}
+	}
+
+	/* Link status follows the phy port status; the MPU will enable the PMA. */
+	err = hinic3_set_port_enable(nic_dev->hwdev, true);
+	if (err) {
+		PMD_DRV_LOG(ERR,
+			    "Set MAC link up failed, dev_name: %s, port_id: %d",
+			    nic_dev->dev_name, dev->data->port_id);
+		return err;
+	}
+
+	return 0;
+}
+
+/**
+ * Set ethernet device link state down.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_dev_set_link_down(struct rte_eth_dev *dev)
+{
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+	int err;
+
+	err = hinic3_set_vport_enable(nic_dev->hwdev, false);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Disable vport failed, dev_name: %s",
+			    nic_dev->dev_name);
+		return err;
+	}
+
+	/* Link status follows the phy port status; the MPU will disable the PMA. */
+	err = hinic3_set_port_enable(nic_dev->hwdev, false);
+	if (err) {
+		PMD_DRV_LOG(ERR,
+			"Set MAC link down failed, dev_name: %s, port_id: %d",
+			nic_dev->dev_name, dev->data->port_id);
+		return err;
+	}
+
+	return 0;
+}
+
+/**
+ * Get device physical link information.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] wait_to_complete
+ * Wait for request completion.
+ *
+ * @return
+ * 0 : Link status changed.
+ * -1 : Link status not changed.
+ */
+static int
+hinic3_link_update(struct rte_eth_dev *dev, int wait_to_complete)
+{
+#define CHECK_INTERVAL	10  /**< 10ms. */
+#define MAX_REPEAT_TIME 100 /**< 1s (100 * 10ms) in total. */
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+	struct rte_eth_link link;
+	u8 link_state;
+	unsigned int rep_cnt = MAX_REPEAT_TIME;
+	int ret;
+
+	memset(&link, 0, sizeof(link));
+	do {
+		/* Get link status information from hardware. */
+		ret = hinic3_get_link_state(nic_dev->hwdev, &link_state);
+		if (ret) {
+			link.link_status = RTE_ETH_LINK_DOWN;
+			link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+			link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+			link.link_autoneg = RTE_ETH_LINK_FIXED;
+			goto out;
+		}
+
+		get_port_info(nic_dev->hwdev, link_state, &link);
+
+		if (!wait_to_complete || link.link_status)
+			break;
+
+		rte_delay_ms(CHECK_INTERVAL);
+	} while (rep_cnt--);
+
+out:
+	return rte_eth_linkstatus_set(dev, &link);
+}
+
+/**
+ * Reset all RX queues (RXQs).
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ */
+static void
+hinic3_reset_rx_queue(struct rte_eth_dev *dev)
+{
+	struct hinic3_rxq *rxq = NULL;
+	struct hinic3_nic_dev *nic_dev;
+	int q_id = 0;
+
+	nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+
+	for (q_id = 0; q_id < nic_dev->num_rqs; q_id++) {
+		rxq = nic_dev->rxqs[q_id];
+
+		rxq->cons_idx = 0;
+		rxq->prod_idx = 0;
+		rxq->delta = rxq->q_depth;
+		rxq->next_to_update = 0;
+	}
+}
+
+/**
+ * Reset all TX queues (TXQs).
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ */
+static void
+hinic3_reset_tx_queue(struct rte_eth_dev *dev)
+{
+	struct hinic3_nic_dev *nic_dev;
+	struct hinic3_txq *txq = NULL;
+	int q_id = 0;
+
+	nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+
+	for (q_id = 0; q_id < nic_dev->num_sqs; q_id++) {
+		txq = nic_dev->txqs[q_id];
+
+		txq->cons_idx = 0;
+		txq->prod_idx = 0;
+		txq->owner = 1;
+
+		/* Clear hardware ci. */
+		*txq->ci_vaddr_base = 0;
+	}
+}
+
+/**
+ * Create the receive queue.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] qid
+ * Receive queue index.
+ * @param[in] nb_desc
+ * Number of descriptors for receive queue.
+ * @param[in] socket_id
+ * Socket index on which memory must be allocated.
+ * @param[in] rx_conf
+ * Rx queue configuration parameters.
+ * @param[in] mp
+ * Memory pool for buffer allocations.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc,
+		      unsigned int socket_id,
+		      const struct rte_eth_rxconf *rx_conf,
+		      struct rte_mempool *mp)
+{
+	struct hinic3_nic_dev *nic_dev;
+	struct hinic3_rxq *rxq = NULL;
+	const struct rte_memzone *rq_mz = NULL;
+	const struct rte_memzone *cqe_mz = NULL;
+	const struct rte_memzone *pi_mz = NULL;
+	u16 rq_depth, rx_free_thresh;
+	u32 queue_buf_size;
+	void *db_addr = NULL;
+	int wqe_count;
+	u32 buf_size;
+	int err;
+
+	nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+
+	/* Queue depth must be a power of 2; otherwise it is rounded up. */
+	rq_depth = (nb_desc & (nb_desc - 1))
+			   ? ((u16)(1U << (ilog2(nb_desc) + 1)))
+			   : nb_desc;
+
+	/*
+	 * Validate number of receive descriptors.
+	 * It must not exceed hardware maximum and minimum.
+	 */
+	if (rq_depth > HINIC3_MAX_QUEUE_DEPTH ||
+	    rq_depth < HINIC3_MIN_QUEUE_DEPTH) {
+		PMD_DRV_LOG(ERR,
+			    "RX queue depth is out of range from %d to %d "
+			    "(nb_desc: %d, q_depth: %d, port: %d queue: %d)",
+			    HINIC3_MIN_QUEUE_DEPTH, HINIC3_MAX_QUEUE_DEPTH,
+			    (int)nb_desc, (int)rq_depth,
+			    (int)dev->data->port_id, (int)qid);
+		return -EINVAL;
+	}
+
+	/*
+	 * The RX descriptor ring will be cleaned after rxq->rx_free_thresh
+	 * descriptors are used or if the number of descriptors required
+	 * to receive a packet is greater than the number of free RX
+	 * descriptors.
+	 * The following constraints must be satisfied:
+	 * - rx_free_thresh must be greater than 0.
+	 * - rx_free_thresh must be less than the size of the ring minus 1.
+	 * When set to zero, default values are used.
+	 */
+	rx_free_thresh = (u16)((rx_conf->rx_free_thresh)
+				       ? rx_conf->rx_free_thresh
+				       : HINIC3_DEFAULT_RX_FREE_THRESH);
+	if (rx_free_thresh >= (rq_depth - 1)) {
+		PMD_DRV_LOG(ERR,
+			    "rx_free_thresh must be less than the number "
+			    "of RX descriptors minus 1, rx_free_thresh: %u, "
+			    "port: %d queue: %d",
+			    (unsigned int)rx_free_thresh,
+			    (int)dev->data->port_id, (int)qid);
+
+		return -EINVAL;
+	}
+
+	rxq = rte_zmalloc_socket("hinic3_rq", sizeof(struct hinic3_rxq),
+				 RTE_CACHE_LINE_SIZE, (int)socket_id);
+	if (!rxq) {
+		PMD_DRV_LOG(ERR, "Allocate rxq[%d] failed, dev_name: %s", qid,
+			    dev->data->name);
+
+		return -ENOMEM;
+	}
+
+	/* Init rq parameters. */
+	rxq->nic_dev = nic_dev;
+	nic_dev->rxqs[qid] = rxq;
+	rxq->mb_pool = mp;
+	rxq->q_id = qid;
+	rxq->next_to_update = 0;
+	rxq->q_depth = rq_depth;
+	rxq->q_mask = rq_depth - 1;
+	rxq->delta = rq_depth;
+	rxq->cons_idx = 0;
+	rxq->prod_idx = 0;
+	rxq->rx_free_thresh = rx_free_thresh;
+	rxq->rxinfo_align_end = rxq->q_depth - rxq->rx_free_thresh;
+	rxq->port_id = dev->data->port_id;
+	rxq->wait_time_cycle = HINIC3_RX_WAIT_CYCLE_THRESH;
+
+	/* The mbuf data room size must be converted to a hardware-supported buffer size. */
+	u16 rx_buf_size =
+		rte_pktmbuf_data_room_size(rxq->mb_pool) - RTE_PKTMBUF_HEADROOM;
+	err = hinic3_convert_rx_buf_size(rx_buf_size, &buf_size);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Adjust buf size failed, dev_name: %s",
+			    dev->data->name);
+		goto adjust_bufsize_fail;
+	}
+
+	if (buf_size >= HINIC3_RX_BUF_SIZE_4K &&
+	    buf_size < HINIC3_RX_BUF_SIZE_16K)
+		rxq->wqe_type = HINIC3_EXTEND_RQ_WQE;
+	else
+		rxq->wqe_type = HINIC3_NORMAL_RQ_WQE;
+
+	rxq->wqebb_shift = HINIC3_RQ_WQEBB_SHIFT + rxq->wqe_type;
+	rxq->wqebb_size = (u16)BIT(rxq->wqebb_shift);
+
+	rxq->buf_len = (u16)buf_size;
+	rxq->rx_buff_shift = ilog2(rxq->buf_len);
+
+	pi_mz = hinic3_dma_zone_reserve(dev, "hinic3_rq_pi", qid, RTE_PGSIZE_4K,
+					RTE_CACHE_LINE_SIZE, (int)socket_id);
+	if (!pi_mz) {
+		PMD_DRV_LOG(ERR, "Allocate rxq[%d] pi_mz failed, dev_name: %s",
+			    qid, dev->data->name);
+		err = -ENOMEM;
+		goto alloc_pi_mz_fail;
+	}
+	rxq->pi_mz = pi_mz;
+	rxq->pi_dma_addr = pi_mz->iova;
+	rxq->pi_virt_addr = pi_mz->addr;
+
+	err = hinic3_alloc_db_addr(nic_dev->hwdev, &db_addr, HINIC3_DB_TYPE_RQ);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Alloc rq doorbell addr failed");
+		goto alloc_db_err_fail;
+	}
+	rxq->db_addr = db_addr;
+
+	queue_buf_size = BIT(rxq->wqebb_shift) * rq_depth;
+	rq_mz = hinic3_dma_zone_reserve(dev, "hinic3_rq_mz", qid,
+					queue_buf_size, RTE_PGSIZE_256K,
+					(int)socket_id);
+	if (!rq_mz) {
+		PMD_DRV_LOG(ERR, "Allocate rxq[%d] rq_mz failed, dev_name: %s",
+			    qid, dev->data->name);
+		err = -ENOMEM;
+		goto alloc_rq_mz_fail;
+	}
+
+	memset(rq_mz->addr, 0, queue_buf_size);
+	rxq->rq_mz = rq_mz;
+	rxq->queue_buf_paddr = rq_mz->iova;
+	rxq->queue_buf_vaddr = rq_mz->addr;
+
+	rxq->rx_info = rte_zmalloc_socket("rx_info",
+					  rq_depth * sizeof(*rxq->rx_info),
+					  RTE_CACHE_LINE_SIZE, (int)socket_id);
+	if (!rxq->rx_info) {
+		PMD_DRV_LOG(ERR, "Allocate rx_info failed, dev_name: %s",
+			    dev->data->name);
+		err = -ENOMEM;
+		goto alloc_rx_info_fail;
+	}
+
+	cqe_mz = hinic3_dma_zone_reserve(dev, "hinic3_cqe_mz", qid,
+					 rq_depth * sizeof(*rxq->rx_cqe),
+					 RTE_CACHE_LINE_SIZE, (int)socket_id);
+	if (!cqe_mz) {
+		PMD_DRV_LOG(ERR, "Allocate cqe mem zone failed, dev_name: %s",
+			    dev->data->name);
+		err = -ENOMEM;
+		goto alloc_cqe_mz_fail;
+	}
+	memset(cqe_mz->addr, 0, rq_depth * sizeof(*rxq->rx_cqe));
+	rxq->cqe_mz = cqe_mz;
+	rxq->cqe_start_paddr = cqe_mz->iova;
+	rxq->cqe_start_vaddr = cqe_mz->addr;
+	rxq->rx_cqe = (struct hinic3_rq_cqe *)rxq->cqe_start_vaddr;
+
+	wqe_count = hinic3_rx_fill_wqe(rxq);
+	if (wqe_count != rq_depth) {
+		PMD_DRV_LOG(ERR,
+			    "Fill rx wqe failed, wqe_count: %d, dev_name: %s",
+			    wqe_count, dev->data->name);
+		err = -ENOMEM;
+		goto fill_rx_wqe_fail;
+	}
+	/* Record rxq pointer in rte_eth rx_queues. */
+	dev->data->rx_queues[qid] = rxq;
+
+	return 0;
+
+fill_rx_wqe_fail:
+	hinic3_memzone_free(rxq->cqe_mz);
+alloc_cqe_mz_fail:
+	rte_free(rxq->rx_info);
+
+alloc_rx_info_fail:
+	hinic3_memzone_free(rxq->rq_mz);
+
+alloc_rq_mz_fail:
+alloc_db_err_fail:
+	hinic3_memzone_free(rxq->pi_mz);
+
+alloc_pi_mz_fail:
+adjust_bufsize_fail:
+	rte_free(rxq);
+	nic_dev->rxqs[qid] = NULL;
+
+	return err;
+}
+
+/**
+ * Create the transmit queue.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] qid
+ * Transmit queue index.
+ * @param[in] nb_desc
+ * Number of descriptors for transmit queue.
+ * @param[in] socket_id
+ * Socket index on which memory must be allocated.
+ * @param[in] tx_conf
+ * Tx queue configuration parameters.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qid, uint16_t nb_desc,
+		      unsigned int socket_id,
+		      const struct rte_eth_txconf *tx_conf)
+{
+	struct hinic3_nic_dev *nic_dev;
+	struct hinic3_hwdev *hwdev;
+	struct hinic3_txq *txq = NULL;
+	const struct rte_memzone *sq_mz = NULL;
+	const struct rte_memzone *ci_mz = NULL;
+	void *db_addr = NULL;
+	u16 sq_depth, tx_free_thresh;
+	u32 queue_buf_size;
+	int err;
+
+	nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+	hwdev = nic_dev->hwdev;
+
+	/* Queue depth must be a power of 2; otherwise it is rounded up. */
+	sq_depth = (nb_desc & (nb_desc - 1))
+			   ? ((u16)(1U << (ilog2(nb_desc) + 1)))
+			   : nb_desc;
+
+	/*
+	 * Validate number of transmit descriptors.
+	 * It must not exceed hardware maximum and minimum.
+	 */
+	if (sq_depth > HINIC3_MAX_QUEUE_DEPTH ||
+	    sq_depth < HINIC3_MIN_QUEUE_DEPTH) {
+		PMD_DRV_LOG(ERR,
+			    "TX queue depth is out of range from %d to %d "
+			    "(nb_desc: %d, q_depth: %d, port: %d queue: %d)",
+			    HINIC3_MIN_QUEUE_DEPTH, HINIC3_MAX_QUEUE_DEPTH,
+			    (int)nb_desc, (int)sq_depth,
+			    (int)dev->data->port_id, (int)qid);
+		return -EINVAL;
+	}
+
+	/*
+	 * The TX descriptor ring will be cleaned after txq->tx_free_thresh
+	 * descriptors are used or if the number of descriptors required
+	 * to transmit a packet is greater than the number of free TX
+	 * descriptors.
+	 * The following constraints must be satisfied:
+	 * - tx_free_thresh must be greater than 0.
+	 * - tx_free_thresh must be less than the size of the ring minus 1.
+	 * When set to zero use default values.
+	 */
+	tx_free_thresh = (u16)((tx_conf->tx_free_thresh)
+				       ? tx_conf->tx_free_thresh
+				       : HINIC3_DEFAULT_TX_FREE_THRESH);
+	if (tx_free_thresh >= (sq_depth - 1)) {
+		PMD_DRV_LOG(ERR,
+			    "tx_free_thresh must be less than the number of tx "
+			    "descriptors minus 1, tx_free_thresh: %u port: %d "
+			    "queue: %d",
+			    (unsigned int)tx_free_thresh,
+			    (int)dev->data->port_id, (int)qid);
+		return -EINVAL;
+	}
+
+	txq = rte_zmalloc_socket("hinic3_tx_queue", sizeof(struct hinic3_txq),
+				 RTE_CACHE_LINE_SIZE, (int)socket_id);
+	if (!txq) {
+		PMD_DRV_LOG(ERR, "Allocate txq[%d] failed, dev_name: %s", qid,
+			    dev->data->name);
+		return -ENOMEM;
+	}
+	nic_dev->txqs[qid] = txq;
+	txq->nic_dev = nic_dev;
+	txq->q_id = qid;
+	txq->q_depth = sq_depth;
+	txq->q_mask = sq_depth - 1;
+	txq->cons_idx = 0;
+	txq->prod_idx = 0;
+	txq->wqebb_shift = HINIC3_SQ_WQEBB_SHIFT;
+	txq->wqebb_size = (u16)BIT(txq->wqebb_shift);
+	txq->tx_free_thresh = tx_free_thresh;
+	txq->owner = 1;
+	txq->cos = nic_dev->default_cos;
+
+	ci_mz = hinic3_dma_zone_reserve(dev, "hinic3_sq_ci", qid,
+					HINIC3_CI_Q_ADDR_SIZE,
+					HINIC3_CI_Q_ADDR_SIZE, (int)socket_id);
+	if (!ci_mz) {
+		PMD_DRV_LOG(ERR, "Allocate txq[%d] ci_mz failed, dev_name: %s",
+			    qid, dev->data->name);
+		err = -ENOMEM;
+		goto alloc_ci_mz_fail;
+	}
+	txq->ci_mz = ci_mz;
+	txq->ci_dma_base = ci_mz->iova;
+	txq->ci_vaddr_base = (volatile u16 *)ci_mz->addr;
+
+	queue_buf_size = BIT(txq->wqebb_shift) * sq_depth;
+	sq_mz = hinic3_dma_zone_reserve(dev, "hinic3_sq_mz", qid,
+					queue_buf_size, RTE_PGSIZE_256K,
+					(int)socket_id);
+	if (!sq_mz) {
+		PMD_DRV_LOG(ERR, "Allocate txq[%d] sq_mz failed, dev_name: %s",
+			    qid, dev->data->name);
+		err = -ENOMEM;
+		goto alloc_sq_mz_fail;
+	}
+	memset(sq_mz->addr, 0, queue_buf_size);
+	txq->sq_mz = sq_mz;
+	txq->queue_buf_paddr = sq_mz->iova;
+	txq->queue_buf_vaddr = sq_mz->addr;
+	txq->sq_head_addr = (u64)txq->queue_buf_vaddr;
+	txq->sq_bot_sge_addr = txq->sq_head_addr + queue_buf_size;
+
+	err = hinic3_alloc_db_addr(hwdev, &db_addr, HINIC3_DB_TYPE_SQ);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Alloc sq doorbell addr failed");
+		goto alloc_db_err_fail;
+	}
+	txq->db_addr = db_addr;
+
+	txq->tx_info = rte_zmalloc_socket("tx_info",
+					  sq_depth * sizeof(*txq->tx_info),
+					  RTE_CACHE_LINE_SIZE, (int)socket_id);
+	if (!txq->tx_info) {
+		PMD_DRV_LOG(ERR, "Allocate tx_info failed, dev_name: %s",
+			    dev->data->name);
+		err = -ENOMEM;
+		goto alloc_tx_info_fail;
+	}
+
+	/* Record txq pointer in rte_eth tx_queues. */
+	dev->data->tx_queues[qid] = txq;
+
+	return 0;
+
+alloc_tx_info_fail:
+alloc_db_err_fail:
+	hinic3_memzone_free(txq->sq_mz);
+
+alloc_sq_mz_fail:
+	hinic3_memzone_free(txq->ci_mz);
+
+alloc_ci_mz_fail:
+	rte_free(txq);
+	return err;
+}
+
+static void
+hinic3_rx_queue_release(struct rte_eth_dev *dev, uint16_t queue_id)
+{
+	if (dev == NULL || dev->data == NULL || dev->data->rx_queues == NULL) {
+		PMD_DRV_LOG(WARNING, "Rx queue array is NULL during release");
+		return;
+	}
+	if (queue_id >= dev->data->nb_rx_queues) {
+		PMD_DRV_LOG(WARNING, "eth_dev: %s, rx queue id: %u is illegal",
+			    dev->data->name, queue_id);
+		return;
+	}
+	struct hinic3_rxq *rxq = dev->data->rx_queues[queue_id];
+	struct hinic3_nic_dev *nic_dev = NULL;
+
+	if (!rxq) {
+		PMD_DRV_LOG(WARNING, "Rxq is null when release");
+		return;
+	}
+
+	nic_dev = rxq->nic_dev;
+
+	hinic3_free_rxq_mbufs(rxq);
+
+	hinic3_memzone_free(rxq->cqe_mz);
+
+	rte_free(rxq->rx_info);
+	rxq->rx_info = NULL;
+
+	hinic3_memzone_free(rxq->rq_mz);
+
+	hinic3_memzone_free(rxq->pi_mz);
+
+	nic_dev->rxqs[rxq->q_id] = NULL;
+	rte_free(rxq);
+	dev->data->rx_queues[queue_id] = NULL;
+}
+
+static void
+hinic3_tx_queue_release(struct rte_eth_dev *dev, uint16_t queue_id)
+{
+	if (dev == NULL || dev->data == NULL || dev->data->tx_queues == NULL) {
+		PMD_DRV_LOG(WARNING, "Tx queue array is NULL during release");
+		return;
+	}
+	if (queue_id >= dev->data->nb_tx_queues) {
+		PMD_DRV_LOG(WARNING, "eth_dev: %s, tx queue id: %u is illegal",
+			    dev->data->name, queue_id);
+		return;
+	}
+	struct hinic3_txq *txq = dev->data->tx_queues[queue_id];
+	struct hinic3_nic_dev *nic_dev = NULL;
+
+	if (!txq) {
+		PMD_DRV_LOG(WARNING, "Txq is null when release");
+		return;
+	}
+	PMD_DRV_LOG(INFO, "%s txq_idx:%d queue release.",
+		    txq->nic_dev->dev_name, txq->q_id);
+	nic_dev = txq->nic_dev;
+
+	hinic3_free_txq_mbufs(txq);
+
+	rte_free(txq->tx_info);
+	txq->tx_info = NULL;
+
+	hinic3_memzone_free(txq->sq_mz);
+
+	hinic3_memzone_free(txq->ci_mz);
+
+	nic_dev->txqs[txq->q_id] = NULL;
+	rte_free(txq);
+	dev->data->tx_queues[queue_id] = NULL;
+}
+
+/**
+ * Start the RXQ and enable the flow director (FDIR) filter for it.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] rq_id
+ * RX queue ID to be started.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rq_id)
+{
+	struct hinic3_rxq *rxq = NULL;
+	int rc;
+
+	if (rq_id < dev->data->nb_rx_queues) {
+		rxq = dev->data->rx_queues[rq_id];
+
+		rc = hinic3_start_rq(dev, rxq);
+		if (rc) {
+			PMD_DRV_LOG(ERR,
+				    "Start rx queue failed, eth_dev:%s, "
+				    "queue_idx:%d",
+				    dev->data->name, rq_id);
+			return rc;
+		}
+
+		dev->data->rx_queue_state[rq_id] = RTE_ETH_QUEUE_STATE_STARTED;
+	}
+	rc = hinic3_enable_rxq_fdir_filter(dev, (u32)rq_id, (u32)true);
+	if (rc) {
+		PMD_DRV_LOG(ERR, "Failed to enable fdir filter for rq: %d",
+			    rq_id);
+		return rc;
+	}
+	return 0;
+}
+
+/**
+ * Stop the RXQ and disable the flow director (FDIR) filter for it.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] rq_id
+ * RX queue ID to be stopped.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rq_id)
+{
+	struct hinic3_rxq *rxq = NULL;
+	int rc;
+
+	if (rq_id < dev->data->nb_rx_queues) {
+		rxq = dev->data->rx_queues[rq_id];
+
+		rc = hinic3_stop_rq(dev, rxq);
+		if (rc) {
+			PMD_DRV_LOG(ERR,
+				    "Stop rx queue failed, eth_dev:%s, "
+				    "queue_idx:%d",
+				    dev->data->name, rq_id);
+			return rc;
+		}
+
+		dev->data->rx_queue_state[rq_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+	}
+	rc = hinic3_enable_rxq_fdir_filter(dev, (u32)rq_id, (u32)false);
+	if (rc) {
+		PMD_DRV_LOG(ERR, "Failed to disable fdir filter for rq: %d",
+			    rq_id);
+		return rc;
+	}
+
+	return 0;
+}
+
+static int
+hinic3_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t sq_id)
+{
+	struct hinic3_txq *txq = NULL;
+
+	PMD_DRV_LOG(INFO, "Start tx queue, eth_dev:%s, queue_idx:%d",
+		    dev->data->name, sq_id);
+
+	txq = dev->data->tx_queues[sq_id];
+	HINIC3_SET_TXQ_STARTED(txq);
+	dev->data->tx_queue_state[sq_id] = RTE_ETH_QUEUE_STATE_STARTED;
+	return 0;
+}
+
+static int
+hinic3_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t sq_id)
+{
+	struct hinic3_txq *txq = NULL;
+	int rc;
+
+	if (sq_id < dev->data->nb_tx_queues) {
+		txq = dev->data->tx_queues[sq_id];
+		rc = hinic3_stop_sq(txq);
+		if (rc) {
+			PMD_DRV_LOG(ERR,
+				    "Stop tx queue failed, eth_dev:%s, "
+				    "queue_idx:%d",
+				    dev->data->name, sq_id);
+			return rc;
+		}
+
+		HINIC3_SET_TXQ_STOPPED(txq);
+		dev->data->tx_queue_state[sq_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+	}
+
+	return 0;
+}
+
+int
+hinic3_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
+{
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = PCI_DEV_TO_INTR_HANDLE(pci_dev);
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+	u16 msix_intr;
+
+	if (!rte_intr_dp_is_en(intr_handle) || !intr_handle->intr_vec)
+		return 0;
+
+	if (queue_id >= dev->data->nb_rx_queues)
+		return -EINVAL;
+
+	msix_intr = (u16)intr_handle->intr_vec[queue_id];
+	hinic3_set_msix_auto_mask_state(nic_dev->hwdev, msix_intr,
+					HINIC3_SET_MSIX_AUTO_MASK);
+	hinic3_set_msix_state(nic_dev->hwdev, msix_intr, HINIC3_MSIX_ENABLE);
+
+	return 0;
+}
+
+int
+hinic3_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
+{
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = PCI_DEV_TO_INTR_HANDLE(pci_dev);
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+	u16 msix_intr;
+
+	if (!rte_intr_dp_is_en(intr_handle) || !intr_handle->intr_vec)
+		return 0;
+
+	if (queue_id >= dev->data->nb_rx_queues)
+		return -EINVAL;
+
+	msix_intr = (u16)intr_handle->intr_vec[queue_id];
+	hinic3_set_msix_auto_mask_state(nic_dev->hwdev, msix_intr,
+					HINIC3_CLR_MSIX_AUTO_MASK);
+	hinic3_set_msix_state(nic_dev->hwdev, msix_intr, HINIC3_MSIX_DISABLE);
+	hinic3_misx_intr_clear_resend_bit(nic_dev->hwdev, msix_intr,
+					  MSIX_RESEND_TIMER_CLEAR);
+
+	return 0;
+}
+
+static uint32_t
+hinic3_dev_rx_queue_count(__rte_unused void *rx_queue)
+{
+	return 0;
+}
+
+static int
+hinic3_dev_rx_descriptor_status(__rte_unused void *rx_queue,
+				__rte_unused uint16_t offset)
+{
+	return 0;
+}
+
+static int
+hinic3_dev_tx_descriptor_status(__rte_unused void *tx_queue,
+				__rte_unused uint16_t offset)
+{
+	return 0;
+}
+
+static int
+hinic3_set_lro(struct hinic3_nic_dev *nic_dev, struct rte_eth_conf *dev_conf)
+{
+	bool lro_en;
+	int max_lro_size, lro_max_pkt_len;
+	int err;
+
+	/* Config lro. */
+	lro_en = dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO ? true
+									: false;
+	max_lro_size = (int)(dev_conf->rxmode.max_lro_pkt_size);
+	/* Convert to HINIC3_LRO_UNIT_WQE_SIZE units, at least one unit. */
+	lro_max_pkt_len = max_lro_size / HINIC3_LRO_UNIT_WQE_SIZE
+				  ? max_lro_size / HINIC3_LRO_UNIT_WQE_SIZE
+				  : 1;
+
+	PMD_DRV_LOG(INFO,
+		    "max_lro_size: %d, rx_buff_len: %d, lro_max_pkt_len: %d",
+		    max_lro_size, nic_dev->rx_buff_len, lro_max_pkt_len);
+	PMD_DRV_LOG(INFO, "max_rx_pkt_len: %d",
+		    HINIC3_MAX_RX_PKT_LEN(dev_conf->rxmode));
+	err = hinic3_set_rx_lro_state(nic_dev->hwdev, lro_en,
+				      HINIC3_LRO_DEFAULT_TIME_LIMIT,
+				      lro_max_pkt_len);
+	if (err)
+		PMD_DRV_LOG(ERR, "Set lro state failed, err: %d", err);
+	return err;
+}
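The byte-to-unit conversion above can be exercised standalone. This is a minimal sketch: `LRO_UNIT_WQE_SIZE` is an illustrative stand-in for `HINIC3_LRO_UNIT_WQE_SIZE`, whose real value comes from the driver headers and may differ.

```c
#include <assert.h>

/* Illustrative stand-in for HINIC3_LRO_UNIT_WQE_SIZE (assumption). */
#define LRO_UNIT_WQE_SIZE 1024

/* Convert a max LRO size in bytes to a count of WQE-size units,
 * clamped to a minimum of one unit, mirroring the logic above. */
static int
lro_pkt_len_units(int max_lro_size)
{
	int units = max_lro_size / LRO_UNIT_WQE_SIZE;

	return units ? units : 1;
}
```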
+
+static int
+hinic3_set_vlan(struct rte_eth_dev *dev, struct rte_eth_conf *dev_conf)
+{
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+	bool vlan_filter, vlan_strip;
+	int err;
+
+	/* Config vlan filter. */
+	vlan_filter = dev_conf->rxmode.offloads &
+		      RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
+
+	err = hinic3_set_vlan_fliter(nic_dev->hwdev, vlan_filter);
+	if (err) {
+		PMD_DRV_LOG(ERR,
+			    "Config vlan filter failed, device: %s, port_id: "
+			    "%d, err: %d",
+			    nic_dev->dev_name, dev->data->port_id, err);
+		return err;
+	}
+
+	/* Config vlan stripping. */
+	vlan_strip = dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
+
+	err = hinic3_set_rx_vlan_offload(nic_dev->hwdev, vlan_strip);
+	if (err) {
+		PMD_DRV_LOG(ERR,
+			    "Config vlan strip failed, device: %s, port_id: "
+			    "%d, err: %d",
+			    nic_dev->dev_name, dev->data->port_id, err);
+	}
+
+	return err;
+}
+
+/**
+ * Configure RX mode, checksum offload, LRO, RSS, VLAN and initialize the RXQ
+ * list.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_set_rxtx_configure(struct rte_eth_dev *dev)
+{
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+	struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
+	struct rte_eth_rss_conf *rss_conf = NULL;
+	int err;
+
+	/* Config rx mode. */
+	err = hinic3_set_rx_mode(nic_dev->hwdev, HINIC3_DEFAULT_RX_MODE);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Set rx_mode: 0x%x failed",
+			    HINIC3_DEFAULT_RX_MODE);
+		return err;
+	}
+	nic_dev->rx_mode = HINIC3_DEFAULT_RX_MODE;
+
+	/* Config rx checksum offload. */
+	if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
+		nic_dev->rx_csum_en = HINIC3_DEFAULT_RX_CSUM_OFFLOAD;
+
+	err = hinic3_set_lro(nic_dev, dev_conf);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Set lro failed");
+		return err;
+	}
+	/* Config RSS. */
+	if ((dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) &&
+	    nic_dev->num_rqs > 1) {
+		rss_conf = &dev_conf->rx_adv_conf.rss_conf;
+		err = hinic3_update_rss_config(dev, rss_conf);
+		if (err) {
+			PMD_DRV_LOG(ERR, "Set rss config failed, err: %d", err);
+			return err;
+		}
+	}
+
+	err = hinic3_set_vlan(dev, dev_conf);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Set vlan failed, err: %d", err);
+		return err;
+	}
+
+	hinic3_init_rx_queue_list(nic_dev);
+
+	return 0;
+}
+
+/**
+ * Disable RX mode and RSS, and free associated resources.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ */
+static void
+hinic3_remove_rxtx_configure(struct rte_eth_dev *dev)
+{
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+	u8 prio_tc[HINIC3_DCB_UP_MAX] = {0};
+
+	hinic3_set_rx_mode(nic_dev->hwdev, 0);
+
+	if (nic_dev->rss_state == HINIC3_RSS_ENABLE) {
+		hinic3_rss_cfg(nic_dev->hwdev, HINIC3_RSS_DISABLE, 0, prio_tc);
+		hinic3_rss_template_free(nic_dev->hwdev);
+		nic_dev->rss_state = HINIC3_RSS_DISABLE;
+	}
+}
+
+static bool
+hinic3_find_vlan_filter(struct hinic3_nic_dev *nic_dev, uint16_t vlan_id)
+{
+	u32 vid_idx, vid_bit;
+
+	vid_idx = HINIC3_VFTA_IDX(vlan_id);
+	vid_bit = HINIC3_VFTA_BIT(vlan_id);
+
+	return (nic_dev->vfta[vid_idx] & vid_bit) ? true : false;
+}
+
+static void
+hinic3_store_vlan_filter(struct hinic3_nic_dev *nic_dev, u16 vlan_id, bool on)
+{
+	u32 vid_idx, vid_bit;
+
+	vid_idx = HINIC3_VFTA_IDX(vlan_id);
+	vid_bit = HINIC3_VFTA_BIT(vlan_id);
+
+	if (on)
+		nic_dev->vfta[vid_idx] |= vid_bit;
+	else
+		nic_dev->vfta[vid_idx] &= ~vid_bit;
+}
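The VFTA bit math used by the two helpers above can be sketched in isolation. The `VFTA_IDX`/`VFTA_BIT` definitions below assume the common layout of 32 VLAN ids per 32-bit word; the driver's actual `HINIC3_VFTA_IDX`/`HINIC3_VFTA_BIT` macros live in its headers and may differ.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Assumed VFTA layout: 32 vlan ids per 32-bit word (illustrative). */
#define VFTA_IDX(vid) ((uint32_t)(vid) >> 5)
#define VFTA_BIT(vid) (1u << ((vid) & 0x1f))

/* 4096 possible vlan ids -> 128 words of bitmap. */
static uint32_t vfta[4096 / 32];

/* Set or clear the filter bit for one vlan id. */
static void
store_vlan_filter(uint16_t vlan_id, bool on)
{
	if (on)
		vfta[VFTA_IDX(vlan_id)] |= VFTA_BIT(vlan_id);
	else
		vfta[VFTA_IDX(vlan_id)] &= ~VFTA_BIT(vlan_id);
}

/* Test whether a vlan id is present in the filter table. */
static bool
find_vlan_filter(uint16_t vlan_id)
{
	return (vfta[VFTA_IDX(vlan_id)] & VFTA_BIT(vlan_id)) != 0;
}
```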
+
+static void
+hinic3_remove_all_vlanid(struct rte_eth_dev *dev)
+{
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+	int vlan_id;
+	u16 func_id;
+
+	func_id = hinic3_global_func_id(nic_dev->hwdev);
+
+	for (vlan_id = 1; vlan_id < RTE_ETHER_MAX_VLAN_ID; vlan_id++) {
+		if (hinic3_find_vlan_filter(nic_dev, vlan_id)) {
+			hinic3_del_vlan(nic_dev->hwdev, vlan_id, func_id);
+			hinic3_store_vlan_filter(nic_dev, vlan_id, false);
+		}
+	}
+}
+
+static void
+hinic3_disable_interrupt(struct rte_eth_dev *dev)
+{
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+
+	if (!hinic3_get_bit(HINIC3_DEV_INIT, &nic_dev->dev_status))
+		return;
+
+	/* Disable rte interrupt. */
+	rte_intr_disable(PCI_DEV_TO_INTR_HANDLE(pci_dev));
+	rte_intr_callback_unregister(PCI_DEV_TO_INTR_HANDLE(pci_dev),
+				     hinic3_dev_interrupt_handler, (void *)dev);
+}
+
+static void
+hinic3_enable_interrupt(struct rte_eth_dev *dev)
+{
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+
+	if (!hinic3_get_bit(HINIC3_DEV_INIT, &nic_dev->dev_status))
+		return;
+
+	/* Enable rte interrupt. */
+	rte_intr_enable(PCI_DEV_TO_INTR_HANDLE(pci_dev));
+	rte_intr_callback_register(PCI_DEV_TO_INTR_HANDLE(pci_dev),
+				   hinic3_dev_interrupt_handler, (void *)dev);
+}
+
+#define HINIC3_RX_VEC_START RTE_INTR_VEC_RXTX_OFFSET
+
+/** Datapath (DP) interrupt MSI-X attributes. */
+#define HINIC3_TXRX_MSIX_PENDING_LIMIT	  2
+#define HINIC3_TXRX_MSIX_COALESC_TIMER	  2
+#define HINIC3_TXRX_MSIX_RESEND_TIMER_CFG 7
+
+static int
+hinic3_init_rxq_msix_attr(void *hwdev, u16 msix_index)
+{
+	struct interrupt_info info = {0};
+	int err;
+
+	info.lli_set = 0;
+	info.interrupt_coalesc_set = 1;
+	info.pending_limt = HINIC3_TXRX_MSIX_PENDING_LIMIT;
+	info.coalesc_timer_cfg = HINIC3_TXRX_MSIX_COALESC_TIMER;
+	info.resend_timer_cfg = HINIC3_TXRX_MSIX_RESEND_TIMER_CFG;
+
+	info.msix_index = msix_index;
+	err = hinic3_set_interrupt_cfg(hwdev, info);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Set msix attr failed, msix_index %d",
+			    msix_index);
+		return -EFAULT;
+	}
+
+	return 0;
+}
+
+static void
+hinic3_deinit_rxq_intr(struct rte_eth_dev *dev)
+{
+	struct rte_intr_handle *intr_handle = dev->intr_handle;
+
+	rte_intr_efd_disable(intr_handle);
+	if (intr_handle->intr_vec) {
+		rte_free(intr_handle->intr_vec);
+		intr_handle->intr_vec = NULL;
+	}
+}
+
+/**
+ * Initialize RX queue interrupts by enabling MSI-X, allocate interrupt vectors,
+ * and configure interrupt attributes for each RX queue.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ *
+ * @return
+ * 0 on success, negative error code on failure.
+ * - -ENOTSUP if MSI-X interrupts are not supported.
+ * - Error code if enabling event file descriptors fails.
+ * - -ENOMEM if allocating interrupt vectors fails.
+ */
+static int
+hinic3_init_rxq_intr(struct rte_eth_dev *dev)
+{
+	struct rte_intr_handle *intr_handle = NULL;
+	struct hinic3_nic_dev *nic_dev = NULL;
+	struct hinic3_rxq *rxq = NULL;
+	u32 nb_rx_queues, i;
+	int err;
+
+	intr_handle = dev->intr_handle;
+	nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+	if (!dev->data->dev_conf.intr_conf.rxq)
+		return 0;
+
+	if (!rte_intr_cap_multiple(intr_handle)) {
+		PMD_DRV_LOG(ERR, "Rx queue interrupts require MSI-X interrupts"
+				 " (vfio-pci driver)");
+		return -ENOTSUP;
+	}
+
+	nb_rx_queues = dev->data->nb_rx_queues;
+	err = rte_intr_efd_enable(intr_handle, nb_rx_queues);
+	if (err) {
+		PMD_DRV_LOG(ERR,
+			"Failed to enable event fds for Rx queue interrupts");
+		return err;
+	}
+
+	intr_handle->intr_vec =
+		rte_zmalloc("hinic_intr_vec", nb_rx_queues * sizeof(int), 0);
+	if (intr_handle->intr_vec == NULL) {
+		PMD_DRV_LOG(ERR, "Failed to allocate intr_vec");
+		rte_intr_efd_disable(intr_handle);
+		return -ENOMEM;
+	}
+	intr_handle->vec_list_size = nb_rx_queues;
+	for (i = 0; i < nb_rx_queues; i++)
+		intr_handle->intr_vec[i] = (int)(i + HINIC3_RX_VEC_START);
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		rxq->dp_intr_en = 1;
+		rxq->msix_entry_idx = (u16)intr_handle->intr_vec[i];
+
+		err = hinic3_init_rxq_msix_attr(nic_dev->hwdev,
+						rxq->msix_entry_idx);
+		if (err) {
+			hinic3_deinit_rxq_intr(dev);
+			return err;
+		}
+	}
+
+	return 0;
+}
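The queue-to-vector mapping set up above follows DPDK's convention: `RTE_INTR_VEC_RXTX_OFFSET` is 1, since event fd/vector 0 is reserved for the control interrupt, so rx queue `i` is assigned vector `i + 1`. A minimal sketch of that mapping:

```c
#include <assert.h>

/* DPDK reserves vector 0 for the control interrupt;
 * RTE_INTR_VEC_RXTX_OFFSET is 1, so rx queue i uses vector i + 1. */
#define RX_VEC_START 1

static int
rxq_to_intr_vec(int queue_idx)
{
	return queue_idx + RX_VEC_START;
}
```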
+
+static int
+hinic3_init_sw_rxtxqs(struct hinic3_nic_dev *nic_dev)
+{
+	u32 txq_size;
+	u32 rxq_size;
+
+	/* Allocate software txq array. */
+	txq_size = nic_dev->max_sqs * sizeof(*nic_dev->txqs);
+	nic_dev->txqs =
+		rte_zmalloc("hinic3_txqs", txq_size, RTE_CACHE_LINE_SIZE);
+	if (!nic_dev->txqs) {
+		PMD_DRV_LOG(ERR, "Allocate txqs failed");
+		return -ENOMEM;
+	}
+
+	/* Allocate software rxq array. */
+	rxq_size = nic_dev->max_rqs * sizeof(*nic_dev->rxqs);
+	nic_dev->rxqs =
+		rte_zmalloc("hinic3_rxqs", rxq_size, RTE_CACHE_LINE_SIZE);
+	if (!nic_dev->rxqs) {
+		/* Free txqs. */
+		rte_free(nic_dev->txqs);
+		nic_dev->txqs = NULL;
+
+		PMD_DRV_LOG(ERR, "Allocate rxqs failed");
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+static void
+hinic3_deinit_sw_rxtxqs(struct hinic3_nic_dev *nic_dev)
+{
+	rte_free(nic_dev->txqs);
+	nic_dev->txqs = NULL;
+
+	rte_free(nic_dev->rxqs);
+	nic_dev->rxqs = NULL;
+}
+
+static void
+hinic3_disable_queue_intr(struct rte_eth_dev *dev)
+{
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+	struct rte_intr_handle *intr_handle = dev->intr_handle;
+	int msix_intr;
+	int i;
+
+	if (intr_handle->intr_vec == NULL)
+		return;
+
+	for (i = 0; i < nic_dev->num_rqs; i++) {
+		msix_intr = intr_handle->intr_vec[i];
+		hinic3_set_msix_state(nic_dev->hwdev, (u16)msix_intr,
+				      HINIC3_MSIX_DISABLE);
+		hinic3_misx_intr_clear_resend_bit(nic_dev->hwdev,
+						  (u16)msix_intr,
+						  MSIX_RESEND_TIMER_CLEAR);
+	}
+}
+
+/**
+ * Start the device.
+ *
+ * Initialize the function table, RXQ and TXQ contexts, configure RX offloads,
+ * and enable the vport and physical port to prepare for receiving packets.
+ *
+ * @param[in] eth_dev
+ * Pointer to ethernet device structure.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_dev_start(struct rte_eth_dev *eth_dev)
+{
+	struct hinic3_nic_dev *nic_dev = NULL;
+	u64 nic_features;
+	struct hinic3_rxq *rxq = NULL;
+	int i;
+	int err;
+
+	nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(eth_dev);
+	err = hinic3_copy_mempool_init(nic_dev);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Create copy mempool failed, dev_name: %s",
+			    eth_dev->data->name);
+		goto init_mpool_fail;
+	}
+	hinic3_update_msix_info(nic_dev->hwdev->hwif);
+	hinic3_disable_interrupt(eth_dev);
+	err = hinic3_init_rxq_intr(eth_dev);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Init rxq intr fail, eth_dev:%s",
+			    eth_dev->data->name);
+		goto init_rxq_intr_fail;
+	}
+
+	hinic3_get_func_rx_buf_size(nic_dev);
+	err = hinic3_init_function_table(nic_dev->hwdev, nic_dev->rx_buff_len);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Init function table failed, dev_name: %s",
+			    eth_dev->data->name);
+		goto init_func_tbl_fail;
+	}
+
+	nic_features = hinic3_get_driver_feature(nic_dev);
+	/*
+	 * You can update the features supported by the driver according to the
+	 * scenario here.
+	 */
+	nic_features &= DEFAULT_DRV_FEATURE;
+	hinic3_update_driver_feature(nic_dev, nic_features);
+
+	err = hinic3_set_feature_to_hw(nic_dev->hwdev, &nic_dev->feature_cap,
+				       1);
+	if (err) {
+		PMD_DRV_LOG(ERR,
+			    "Failed to set nic features to hardware, err %d",
+			    err);
+		goto get_feature_err;
+	}
+
+	/* Reset rx and tx queue. */
+	hinic3_reset_rx_queue(eth_dev);
+	hinic3_reset_tx_queue(eth_dev);
+
+	/* Init txq and rxq context. */
+	err = hinic3_init_qp_ctxts(nic_dev);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Init qp context failed, dev_name: %s",
+			    eth_dev->data->name);
+		goto init_qp_fail;
+	}
+
+	/* Set default mtu. */
+	err = hinic3_set_port_mtu(nic_dev->hwdev, nic_dev->mtu_size);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Set mtu_size[%d] failed, dev_name: %s",
+			    nic_dev->mtu_size, eth_dev->data->name);
+		goto set_mtu_fail;
+	}
+	eth_dev->data->mtu = nic_dev->mtu_size;
+
+	/* Set rx configuration: rss/checksum/rxmode/lro. */
+	err = hinic3_set_rxtx_configure(eth_dev);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Set rx config failed, dev_name: %s",
+			    eth_dev->data->name);
+		goto set_rxtx_config_fail;
+	}
+
+	/* Enable dev interrupt. */
+	hinic3_enable_interrupt(eth_dev);
+	err = hinic3_start_all_rqs(eth_dev);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Start all rx queues failed, dev_name: %s",
+			    eth_dev->data->name);
+		goto start_rqs_fail;
+	}
+
+	hinic3_start_all_sqs(eth_dev);
+
+	/* Open virtual port and ready to start packet receiving. */
+	err = hinic3_set_vport_enable(nic_dev->hwdev, true);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Enable vport failed, dev_name: %s",
+			    eth_dev->data->name);
+		goto en_vport_fail;
+	}
+
+	/* Open physical port and start packet receiving. */
+	err = hinic3_set_port_enable(nic_dev->hwdev, true);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Enable physical port failed, dev_name: %s",
+			    eth_dev->data->name);
+		goto en_port_fail;
+	}
+
+	/* Update eth_dev link status. */
+	if (eth_dev->data->dev_conf.intr_conf.lsc != 0)
+		(void)hinic3_link_update(eth_dev, 0);
+
+	hinic3_set_bit(HINIC3_DEV_START, &nic_dev->dev_status);
+
+	return 0;
+
+en_port_fail:
+	(void)hinic3_set_vport_enable(nic_dev->hwdev, false);
+
+en_vport_fail:
+	/* Flush tx and rx chip resources in case vport enable partially failed. */
+	(void)hinic3_flush_qps_res(nic_dev->hwdev);
+	rte_delay_ms(DEV_START_DELAY_MS);
+	for (i = 0; i < nic_dev->num_rqs; i++) {
+		rxq = nic_dev->rxqs[i];
+		hinic3_remove_rq_from_rx_queue_list(nic_dev, rxq->q_id);
+		hinic3_free_rxq_mbufs(rxq);
+		hinic3_dev_rx_queue_intr_disable(eth_dev, rxq->q_id);
+		eth_dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
+		eth_dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
+	}
+start_rqs_fail:
+	hinic3_remove_rxtx_configure(eth_dev);
+
+set_rxtx_config_fail:
+set_mtu_fail:
+	hinic3_free_qp_ctxts(nic_dev->hwdev);
+
+init_qp_fail:
+get_feature_err:
+init_func_tbl_fail:
+	hinic3_deinit_rxq_intr(eth_dev);
+init_rxq_intr_fail:
+	hinic3_copy_mempool_uninit(nic_dev);
+init_mpool_fail:
+	return err;
+}
+
+/**
+ * Look up or creates a memory pool for storing packet buffers used in copy
+ * operations.
+ *
+ * @param[in] nic_dev
+ * Pointer to NIC device structure.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ * `-ENOMEM`: Memory pool creation fails.
+ */
+static int
+hinic3_copy_mempool_init(struct hinic3_nic_dev *nic_dev)
+{
+	nic_dev->cpy_mpool = rte_mempool_lookup(HINCI3_CPY_MEMPOOL_NAME);
+	if (nic_dev->cpy_mpool == NULL) {
+		nic_dev->cpy_mpool = rte_pktmbuf_pool_create(HINCI3_CPY_MEMPOOL_NAME,
+			HINIC3_COPY_MEMPOOL_DEPTH, HINIC3_COPY_MEMPOOL_CACHE,
+			0, HINIC3_COPY_MBUF_SIZE, (int)rte_socket_id());
+		if (nic_dev->cpy_mpool == NULL) {
+			PMD_DRV_LOG(ERR,
+				    "Create copy mempool failed, errno: %d, "
+				    "mpool_name: %s",
+				    rte_errno, HINCI3_CPY_MEMPOOL_NAME);
+			return -ENOMEM;
+		}
+	}
+
+	return 0;
+}
+
+/**
+ * Clear the reference to the copy memory pool without freeing it.
+ *
+ * @param[in] nic_dev
+ * Pointer to NIC device structure.
+ */
+static void
+hinic3_copy_mempool_uninit(struct hinic3_nic_dev *nic_dev)
+{
+	nic_dev->cpy_mpool = NULL;
+}
+
+/**
+ * Stop the device.
+ *
+ * Stop the physical port and vport, flush pending IO requests, clean up the
+ * context configuration and free IO resources.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ */
+static int
+hinic3_dev_stop(struct rte_eth_dev *dev)
+{
+	struct hinic3_nic_dev *nic_dev;
+	struct rte_eth_link link;
+	int err;
+
+	nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+	if (!hinic3_test_and_clear_bit(HINIC3_DEV_START,
+				       &nic_dev->dev_status)) {
+		PMD_DRV_LOG(INFO, "Device %s already stopped",
+			    nic_dev->dev_name);
+		return 0;
+	}
+
+	/* Stop phy port and vport. */
+	err = hinic3_set_port_enable(nic_dev->hwdev, false);
+	if (err)
+		PMD_DRV_LOG(WARNING,
+			    "Disable phy port failed, error: %d, "
+			    "dev_name: %s, port_id: %d",
+			    err, dev->data->name, dev->data->port_id);
+
+	err = hinic3_set_vport_enable(nic_dev->hwdev, false);
+	if (err)
+		PMD_DRV_LOG(WARNING,
+			    "Disable vport failed, error: %d, "
+			    "dev_name: %s, port_id: %d",
+			    err, dev->data->name, dev->data->port_id);
+
+	/* Clear recorded link status. */
+	memset(&link, 0, sizeof(link));
+	(void)rte_eth_linkstatus_set(dev, &link);
+
+	/* Disable dp interrupt. */
+	hinic3_disable_queue_intr(dev);
+	hinic3_deinit_rxq_intr(dev);
+
+	/* Flush pending io request. */
+	hinic3_flush_txqs(nic_dev);
+
+	/* Wait 100ms after vport disable; no more packets are sent to host. */
+	rte_delay_ms(DEV_STOP_DELAY_MS);
+
+	hinic3_flush_qps_res(nic_dev->hwdev);
+
+	/* Clean RSS table and rx_mode. */
+	hinic3_remove_rxtx_configure(dev);
+
+	/* Clean root context. */
+	hinic3_free_qp_ctxts(nic_dev->hwdev);
+
+	/* Free all tx and rx mbufs. */
+	hinic3_free_all_txq_mbufs(nic_dev);
+	hinic3_free_all_rxq_mbufs(nic_dev);
+
+	/* Free mempool. */
+	hinic3_copy_mempool_uninit(nic_dev);
+	return 0;
+}
+
+static void
+hinic3_dev_release(struct rte_eth_dev *eth_dev)
+{
+	struct hinic3_nic_dev *nic_dev =
+		HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(eth_dev);
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+	int qid;
+
+	/* Release io resource. */
+	for (qid = 0; qid < nic_dev->num_sqs; qid++)
+		hinic3_tx_queue_release(eth_dev, qid);
+
+	for (qid = 0; qid < nic_dev->num_rqs; qid++)
+		hinic3_rx_queue_release(eth_dev, qid);
+
+	hinic3_deinit_sw_rxtxqs(nic_dev);
+
+	hinic3_deinit_mac_addr(eth_dev);
+	rte_free(nic_dev->mc_list);
+
+	hinic3_remove_all_vlanid(eth_dev);
+
+	hinic3_clear_bit(HINIC3_DEV_INTR_EN, &nic_dev->dev_status);
+	hinic3_set_msix_state(nic_dev->hwdev, 0, HINIC3_MSIX_DISABLE);
+	rte_intr_disable(PCI_DEV_TO_INTR_HANDLE(pci_dev));
+	(void)rte_intr_callback_unregister(PCI_DEV_TO_INTR_HANDLE(pci_dev),
+					   hinic3_dev_interrupt_handler,
+					   (void *)eth_dev);
+
+	/* Destroy rx mode mutex. */
+	hinic3_mutex_destroy(&nic_dev->rx_mode_mutex);
+
+	hinic3_free_nic_hwdev(nic_dev->hwdev);
+	hinic3_free_hwdev(nic_dev->hwdev);
+
+	eth_dev->rx_pkt_burst = NULL;
+	eth_dev->tx_pkt_burst = NULL;
+	eth_dev->dev_ops = NULL;
+	eth_dev->rx_queue_count = NULL;
+	eth_dev->rx_descriptor_status = NULL;
+	eth_dev->tx_descriptor_status = NULL;
+
+	rte_free(nic_dev->hwdev);
+	nic_dev->hwdev = NULL;
+}
+
+/**
+ * Close the device.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_dev_close(struct rte_eth_dev *eth_dev)
+{
+	struct hinic3_nic_dev *nic_dev =
+		HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(eth_dev);
+	int ret;
+
+	if (hinic3_test_and_set_bit(HINIC3_DEV_CLOSE, &nic_dev->dev_status)) {
+		PMD_DRV_LOG(WARNING, "Device %s already closed",
+			    nic_dev->dev_name);
+		return 0;
+	}
+
+	ret = hinic3_dev_stop(eth_dev);
+
+	hinic3_dev_release(eth_dev);
+	return ret;
+}
+
+static int
+hinic3_dev_reset(__rte_unused struct rte_eth_dev *dev)
+{
+	return 0;
+}
+
+#define MIN_RX_BUFFER_SIZE	      256
+#define MIN_RX_BUFFER_SIZE_SMALL_MODE 1518
+
+static int
+hinic3_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
+{
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+	int err = 0;
+
+	PMD_DRV_LOG(INFO, "Set port mtu, port_id: %d, mtu: %d, max_pkt_len: %d",
+		    dev->data->port_id, mtu, HINIC3_MTU_TO_PKTLEN(mtu));
+
+	if (mtu < HINIC3_MIN_MTU_SIZE || mtu > HINIC3_MAX_MTU_SIZE) {
+		PMD_DRV_LOG(ERR, "Invalid mtu: %d, must between %d and %d", mtu,
+			    HINIC3_MIN_MTU_SIZE, HINIC3_MAX_MTU_SIZE);
+		return -EINVAL;
+	}
+
+	err = hinic3_set_port_mtu(nic_dev->hwdev, mtu);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Set port mtu failed, err: %d", err);
+		return err;
+	}
+
+	/* Update max frame size. */
+	HINIC3_MAX_RX_PKT_LEN(dev->data->dev_conf.rxmode) =
+		HINIC3_MTU_TO_PKTLEN(mtu);
+	nic_dev->mtu_size = mtu;
+	return err;
+}
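`HINIC3_MTU_TO_PKTLEN()` is not visible in this hunk; a typical conversion adds the 14-byte Ethernet header and 4-byte CRC to the MTU, as sketched below. The macro may also reserve room for VLAN tags, so treat these constants as illustrative assumptions rather than the driver's exact definition.

```c
#include <assert.h>

/* Assumed framing overhead: Ethernet header + CRC (no VLAN tag). */
#define ETH_HDR_LEN 14u
#define ETH_CRC_LEN 4u

/* Convert an MTU to the corresponding max frame length. */
static unsigned int
mtu_to_pktlen(unsigned int mtu)
{
	return mtu + ETH_HDR_LEN + ETH_CRC_LEN;
}
```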
+
+/**
+ * Add or delete vlan id.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] vlan_id
+ * Vlan id is used to filter vlan packets.
+ * @param[in] enable
+ * Non-zero to add the vlan id to the filter, zero to delete it.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int enable)
+{
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+	int err = 0;
+	u16 func_id;
+
+	if (vlan_id >= RTE_ETHER_MAX_VLAN_ID)
+		return -EINVAL;
+
+	if (vlan_id == 0)
+		return 0;
+
+	func_id = hinic3_global_func_id(nic_dev->hwdev);
+
+	if (enable) {
+		/* If vlanid is already set, just return. */
+		if (hinic3_find_vlan_filter(nic_dev, vlan_id)) {
+			PMD_DRV_LOG(INFO, "Vlan %u has been added, device: %s",
+				    vlan_id, nic_dev->dev_name);
+			return 0;
+		}
+
+		err = hinic3_add_vlan(nic_dev->hwdev, vlan_id, func_id);
+	} else {
+		/* If vlanid can't be found, just return. */
+		if (!hinic3_find_vlan_filter(nic_dev, vlan_id)) {
+			PMD_DRV_LOG(INFO,
+				    "Vlan %u is not in the vlan filter list, "
+				    "device: %s",
+				    vlan_id, nic_dev->dev_name);
+			return 0;
+		}
+
+		err = hinic3_del_vlan(nic_dev->hwdev, vlan_id, func_id);
+	}
+
+	if (err) {
+		PMD_DRV_LOG(ERR,
+			    "%s vlan failed, func_id: %d, vlan_id: %d, err: %d",
+			    enable ? "Add" : "Remove", func_id, vlan_id, err);
+		return err;
+	}
+
+	hinic3_store_vlan_filter(nic_dev, vlan_id, enable);
+
+	PMD_DRV_LOG(INFO, "%s vlan %u succeed, device: %s",
+		    enable ? "Add" : "Remove", vlan_id, nic_dev->dev_name);
+
+	return 0;
+}
+
+/**
+ * Enable or disable vlan offload.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] mask
+ * VLAN offload mask: RTE_ETH_VLAN_FILTER_MASK and/or RTE_ETH_VLAN_STRIP_MASK.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_vlan_offload_set(struct rte_eth_dev *dev, int mask)
+{
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+	struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
+	bool on;
+	int err;
+
+	/* Enable or disable VLAN filter. */
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		on = (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
+			     ? true
+			     : false;
+		err = hinic3_set_vlan_fliter(nic_dev->hwdev, on);
+		if (err) {
+			PMD_DRV_LOG(ERR,
+				    "%s vlan filter failed, device: %s, "
+				    "port_id: %d, err: %d",
+				    on ? "Enable" : "Disable",
+				    nic_dev->dev_name, dev->data->port_id, err);
+			return err;
+		}
+
+		PMD_DRV_LOG(INFO,
+			    "%s vlan filter succeed, device: %s, port_id: %d",
+			    on ? "Enable" : "Disable", nic_dev->dev_name,
+			    dev->data->port_id);
+	}
+
+	/* Enable or disable VLAN stripping. */
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		on = (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) ? true
+									: false;
+		err = hinic3_set_rx_vlan_offload(nic_dev->hwdev, on);
+		if (err) {
+			PMD_DRV_LOG(ERR,
+				    "%s vlan strip failed, device: %s, "
+				    "port_id: %d, err: %d",
+				    on ? "Enable" : "Disable",
+				    nic_dev->dev_name, dev->data->port_id, err);
+			return err;
+		}
+
+		PMD_DRV_LOG(INFO,
+			    "%s vlan strip succeed, device: %s, port_id: %d",
+			    on ? "Enable" : "Disable", nic_dev->dev_name,
+			    dev->data->port_id);
+	}
+	return 0;
+}
+
+/**
+ * Enable allmulticast mode.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_dev_allmulticast_enable(struct rte_eth_dev *dev)
+{
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+	u32 rx_mode;
+	int err;
+
+	err = hinic3_mutex_lock(&nic_dev->rx_mode_mutex);
+	if (err)
+		return err;
+
+	rx_mode = nic_dev->rx_mode | HINIC3_RX_MODE_MC_ALL;
+
+	err = hinic3_set_rx_mode(nic_dev->hwdev, rx_mode);
+	if (err) {
+		(void)hinic3_mutex_unlock(&nic_dev->rx_mode_mutex);
+		PMD_DRV_LOG(ERR, "Enable allmulticast failed, error: %d", err);
+		return err;
+	}
+
+	nic_dev->rx_mode = rx_mode;
+
+	(void)hinic3_mutex_unlock(&nic_dev->rx_mode_mutex);
+
+	PMD_DRV_LOG(INFO,
+		    "Enable allmulticast succeed, nic_dev: %s, port_id: %d",
+		    nic_dev->dev_name, dev->data->port_id);
+	return 0;
+}
+
+/**
+ * Disable allmulticast mode.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_dev_allmulticast_disable(struct rte_eth_dev *dev)
+{
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+	u32 rx_mode;
+	int err;
+
+	err = hinic3_mutex_lock(&nic_dev->rx_mode_mutex);
+	if (err)
+		return err;
+
+	rx_mode = nic_dev->rx_mode & (~HINIC3_RX_MODE_MC_ALL);
+
+	err = hinic3_set_rx_mode(nic_dev->hwdev, rx_mode);
+	if (err) {
+		(void)hinic3_mutex_unlock(&nic_dev->rx_mode_mutex);
+		PMD_DRV_LOG(ERR, "Disable allmulticast failed, error: %d", err);
+		return err;
+	}
+
+	nic_dev->rx_mode = rx_mode;
+
+	(void)hinic3_mutex_unlock(&nic_dev->rx_mode_mutex);
+
+	PMD_DRV_LOG(INFO,
+		    "Disable allmulticast succeed, nic_dev: %s, port_id: %d",
+		    nic_dev->dev_name, dev->data->port_id);
+	return 0;
+}
+
+/**
+ * Get device generic statistics.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[out] stats
+ * Stats structure output buffer.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+{
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+	struct hinic3_vport_stats vport_stats;
+	struct hinic3_rxq *rxq = NULL;
+	struct hinic3_txq *txq = NULL;
+	int i, err, q_num;
+	u64 rx_discards_pmd = 0;
+
+	err = hinic3_get_vport_stats(nic_dev->hwdev, &vport_stats);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Get vport stats from fw failed, nic_dev: %s",
+			    nic_dev->dev_name);
+		return err;
+	}
+
+	dev->data->rx_mbuf_alloc_failed = 0;
+
+	/* Rx queue stats. */
+	q_num = (nic_dev->num_rqs < RTE_ETHDEV_QUEUE_STAT_CNTRS)
+			? nic_dev->num_rqs
+			: RTE_ETHDEV_QUEUE_STAT_CNTRS;
+	for (i = 0; i < q_num; i++) {
+		rxq = nic_dev->rxqs[i];
+#ifdef HINIC3_XSTAT_MBUF_USE
+		rxq->rxq_stats.rx_left_mbuf_bytes =
+			rxq->rxq_stats.rx_alloc_mbuf_bytes -
+			rxq->rxq_stats.rx_free_mbuf_bytes;
+#endif
+		rxq->rxq_stats.errors = rxq->rxq_stats.csum_errors +
+					rxq->rxq_stats.other_errors;
+
+		stats->q_ipackets[i] = rxq->rxq_stats.packets;
+		stats->q_ibytes[i] = rxq->rxq_stats.bytes;
+		stats->q_errors[i] = rxq->rxq_stats.errors;
+
+		stats->ierrors += rxq->rxq_stats.errors;
+		rx_discards_pmd += rxq->rxq_stats.dropped;
+		dev->data->rx_mbuf_alloc_failed += rxq->rxq_stats.rx_nombuf;
+	}
+
+	/* Tx queue stats. */
+	q_num = (nic_dev->num_sqs < RTE_ETHDEV_QUEUE_STAT_CNTRS)
+			? nic_dev->num_sqs
+			: RTE_ETHDEV_QUEUE_STAT_CNTRS;
+	for (i = 0; i < q_num; i++) {
+		txq = nic_dev->txqs[i];
+		stats->q_opackets[i] = txq->txq_stats.packets;
+		stats->q_obytes[i] = txq->txq_stats.bytes;
+		stats->oerrors += (txq->txq_stats.tx_busy +
+				   txq->txq_stats.offload_errors);
+	}
+
+	/* Vport stats. */
+	stats->oerrors += vport_stats.tx_discard_vport;
+
+	stats->imissed = vport_stats.rx_discard_vport + rx_discards_pmd;
+
+	stats->ipackets =
+		(vport_stats.rx_unicast_pkts_vport +
+		 vport_stats.rx_multicast_pkts_vport +
+		 vport_stats.rx_broadcast_pkts_vport - rx_discards_pmd);
+
+	stats->opackets = (vport_stats.tx_unicast_pkts_vport +
+			   vport_stats.tx_multicast_pkts_vport +
+			   vport_stats.tx_broadcast_pkts_vport);
+
+	stats->ibytes = (vport_stats.rx_unicast_bytes_vport +
+			 vport_stats.rx_multicast_bytes_vport +
+			 vport_stats.rx_broadcast_bytes_vport);
+
+	stats->obytes = (vport_stats.tx_unicast_bytes_vport +
+			 vport_stats.tx_multicast_bytes_vport +
+			 vport_stats.tx_broadcast_bytes_vport);
+	return 0;
+}
+
 /**
- * Interrupt handler triggered by NIC for handling specific event.
+ * Clear device generic statistics.
  *
- * @param[in] param
- * The address of parameter (struct rte_eth_dev *) regsitered before.
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_dev_stats_reset(struct rte_eth_dev *dev)
+{
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+	struct hinic3_rxq *rxq = NULL;
+	struct hinic3_txq *txq = NULL;
+	int qid;
+	int err;
+
+	err = hinic3_clear_vport_stats(nic_dev->hwdev);
+	if (err)
+		return err;
+
+	for (qid = 0; qid < nic_dev->num_rqs; qid++) {
+		rxq = nic_dev->rxqs[qid];
+		memset(&rxq->rxq_stats, 0, sizeof(struct hinic3_rxq_stats));
+	}
+
+	for (qid = 0; qid < nic_dev->num_sqs; qid++) {
+		txq = nic_dev->txqs[qid];
+		memset(&txq->txq_stats, 0, sizeof(struct hinic3_txq_stats));
+	}
+
+	return 0;
+}
+
+/**
+ * Get device extended statistics.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[out] xstats
+ * Pointer to rte extended stats table.
+ * @param[in] n
+ * The size of the stats table.
+ *
+ * @return
+ * positive: Number of extended stats on success and stats is filled, or the
+ * required table size if `n` is too small.
+ * negative: Failure.
+ */
+static int
+hinic3_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
+		      unsigned int n)
+{
+	struct hinic3_nic_dev *nic_dev;
+	struct mag_phy_port_stats port_stats;
+	struct hinic3_vport_stats vport_stats;
+	struct hinic3_rxq *rxq = NULL;
+	struct hinic3_rxq_stats rxq_stats;
+	struct hinic3_txq *txq = NULL;
+	struct hinic3_txq_stats txq_stats;
+	u16 qid;
+	u32 i;
+	int err, count;
+
+	nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+	count = hinic3_xstats_calc_num(nic_dev);
+	if ((int)n < count)
+		return count;
+
+	count = 0;
+
+	/* Get stats from rxq stats structure. */
+	for (qid = 0; qid < nic_dev->num_rqs; qid++) {
+		rxq = nic_dev->rxqs[qid];
+
+#ifdef HINIC3_XSTAT_RXBUF_INFO
+		hinic3_get_stats(rxq);
+#endif
+
+#ifdef HINIC3_XSTAT_MBUF_USE
+		rxq->rxq_stats.rx_left_mbuf_bytes =
+			rxq->rxq_stats.rx_alloc_mbuf_bytes -
+			rxq->rxq_stats.rx_free_mbuf_bytes;
+#endif
+		rxq->rxq_stats.errors = rxq->rxq_stats.csum_errors +
+					rxq->rxq_stats.other_errors;
+
+		memcpy((void *)&rxq_stats, (void *)&rxq->rxq_stats,
+		       sizeof(rxq->rxq_stats));
+
+		for (i = 0; i < HINIC3_RXQ_XSTATS_NUM; i++) {
+			xstats[count].value = *(uint64_t *)(((char *)&rxq_stats) +
+					    hinic3_rxq_stats_strings[i].offset);
+			xstats[count].id = count;
+			count++;
+		}
+	}
+
+	/* Get stats from txq stats structure. */
+	for (qid = 0; qid < nic_dev->num_sqs; qid++) {
+		txq = nic_dev->txqs[qid];
+		memcpy((void *)&txq_stats, (void *)&txq->txq_stats,
+		       sizeof(txq->txq_stats));
+
+		for (i = 0; i < HINIC3_TXQ_XSTATS_NUM; i++) {
+			xstats[count].value = *(uint64_t *)(((char *)&txq_stats) +
+					    hinic3_txq_stats_strings[i].offset);
+			xstats[count].id = count;
+			count++;
+		}
+	}
+
+	/* Get stats from vport stats structure. */
+	err = hinic3_get_vport_stats(nic_dev->hwdev, &vport_stats);
+	if (err)
+		return err;
+
+	for (i = 0; i < HINIC3_VPORT_XSTATS_NUM; i++) {
+		xstats[count].value =
+			*(uint64_t *)(((char *)&vport_stats) +
+				      hinic3_vport_stats_strings[i].offset);
+		xstats[count].id = count;
+		count++;
+	}
+
+	if (HINIC3_IS_VF(nic_dev->hwdev))
+		return count;
+
+	/* Get stats from phy port stats structure. */
+	err = hinic3_get_phy_port_stats(nic_dev->hwdev, &port_stats);
+	if (err)
+		return err;
+
+	for (i = 0; i < HINIC3_PHYPORT_XSTATS_NUM; i++) {
+		xstats[count].value =
+			*(uint64_t *)(((char *)&port_stats) +
+				      hinic3_phyport_stats_strings[i].offset);
+		xstats[count].id = count;
+		count++;
+	}
+
+	return count;
+}
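The xstats loops above read every counter through a name/offset table instead of hand-written per-field code, so adding a statistic only touches the table. A minimal stand-alone sketch of that pattern (the `demo_*` names are illustrative, not driver APIs; the real tables are `hinic3_rxq_stats_strings` and friends):

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative stats struct; the driver's hinic3_rxq_stats is larger. */
struct demo_stats {
	uint64_t packets;
	uint64_t bytes;
	uint64_t errors;
};

/* Name/offset table built with offsetof(), like the driver's tables. */
struct demo_stats_name {
	const char *name;
	size_t offset;
};

static const struct demo_stats_name demo_strings[] = {
	{ "packets", offsetof(struct demo_stats, packets) },
	{ "bytes",   offsetof(struct demo_stats, bytes) },
	{ "errors",  offsetof(struct demo_stats, errors) },
};

/* Fill values[] by walking the table. Returns the required count when
 * the caller's array is too small, mirroring the xstats_get contract. */
static int
demo_fill(const struct demo_stats *stats, uint64_t *values, int n)
{
	const int count = (int)(sizeof(demo_strings) / sizeof(demo_strings[0]));
	int i;

	if (n < count)
		return count;
	for (i = 0; i < count; i++)
		values[i] = *(const uint64_t *)((const char *)stats +
						demo_strings[i].offset);
	return count;
}
```

The driver additionally snapshots the queue stats with memcpy() before walking the table, so a concurrently updating datapath cannot tear the values mid-loop.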
+
+/**
+ * Clear device extended statistics.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_dev_xstats_reset(struct rte_eth_dev *dev)
+{
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+	int err;
+
+	err = hinic3_dev_stats_reset(dev);
+	if (err)
+		return err;
+
+	if (hinic3_func_type(nic_dev->hwdev) != TYPE_VF) {
+		err = hinic3_clear_phy_port_stats(nic_dev->hwdev);
+		if (err)
+			return err;
+	}
+
+	return 0;
+}
+
+/**
+ * Retrieve names of extended device statistics.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[out] xstats_names
+ * Buffer to insert names into.
+ *
+ * @return
+ * Number of xstats names.
+ */
+static int
+hinic3_dev_xstats_get_names(struct rte_eth_dev *dev,
+			    struct rte_eth_xstat_name *xstats_names,
+			    __rte_unused unsigned int limit)
+{
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+	int count = 0;
+	u16 i, q_num;
+
+	if (xstats_names == NULL)
+		return hinic3_xstats_calc_num(nic_dev);
+
+	/* Get pmd rxq stats name. */
+	for (q_num = 0; q_num < nic_dev->num_rqs; q_num++) {
+		for (i = 0; i < HINIC3_RXQ_XSTATS_NUM; i++) {
+			snprintf(xstats_names[count].name,
+				 sizeof(xstats_names[count].name),
+				 "rxq%d_%s_pmd", q_num,
+				 hinic3_rxq_stats_strings[i].name);
+			count++;
+		}
+	}
+
+	/* Get pmd txq stats name. */
+	for (q_num = 0; q_num < nic_dev->num_sqs; q_num++) {
+		for (i = 0; i < HINIC3_TXQ_XSTATS_NUM; i++) {
+			snprintf(xstats_names[count].name,
+				 sizeof(xstats_names[count].name),
+				 "txq%d_%s_pmd", q_num,
+				 hinic3_txq_stats_strings[i].name);
+			count++;
+		}
+	}
+
+	/* Get vport stats name. */
+	for (i = 0; i < HINIC3_VPORT_XSTATS_NUM; i++) {
+		snprintf(xstats_names[count].name,
+			 sizeof(xstats_names[count].name), "%s",
+			 hinic3_vport_stats_strings[i].name);
+		count++;
+	}
+
+	if (HINIC3_IS_VF(nic_dev->hwdev))
+		return count;
+
+	/* Get phy port stats name. */
+	for (i = 0; i < HINIC3_PHYPORT_XSTATS_NUM; i++) {
+		snprintf(xstats_names[count].name,
+			 sizeof(xstats_names[count].name), "%s",
+			 hinic3_phyport_stats_strings[i].name);
+		count++;
+	}
+
+	return count;
+}
+
+/**
+ * Function used to get supported ptypes of an Ethernet device.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[out] no_of_elements
+ * Number of ptypes elements. Must be initialized to 0.
+ *
+ * @return
+ * On success, an array of supported ptypes with no_of_elements > 0.
+ * On failure, NULL.
  */
+static const uint32_t *
+hinic3_dev_supported_ptypes_get(__rte_unused struct rte_eth_dev *dev,
+				__rte_unused size_t *no_of_elements)
+{
+	return NULL;
+}
+
 static void
-hinic3_dev_interrupt_handler(void *param)
+hinic3_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+		    struct rte_eth_rxq_info *rxq_info)
+{
+	struct hinic3_rxq *rxq = dev->data->rx_queues[queue_id];
+
+	rxq_info->mp = rxq->mb_pool;
+	rxq_info->nb_desc = rxq->q_depth;
+}
+
+static void
+hinic3_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+		    struct rte_eth_txq_info *txq_qinfo)
+{
+	struct hinic3_txq *txq = dev->data->tx_queues[queue_id];
+
+	txq_qinfo->nb_desc = txq->q_depth;
+}
+
+/**
+ * Update MAC address.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] addr
+ * Pointer to MAC address.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_set_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *addr)
 {
-	struct rte_eth_dev *dev = param;
 	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+	char mac_addr[RTE_ETHER_ADDR_FMT_SIZE];
+	u16 func_id;
+	int err;
 
-	if (!hinic3_get_bit(HINIC3_DEV_INTR_EN, &nic_dev->dev_status)) {
-		PMD_DRV_LOG(WARNING,
-			    "Intr is disabled, ignore intr event, "
-			    "dev_name: %s, port_id: %d",
-			    nic_dev->dev_name, dev->data->port_id);
+	if (!rte_is_valid_assigned_ether_addr(addr)) {
+		rte_ether_format_addr(mac_addr, RTE_ETHER_ADDR_FMT_SIZE, addr);
+		PMD_DRV_LOG(ERR, "Set invalid MAC address %s", mac_addr);
+		return -EINVAL;
+	}
+
+	func_id = hinic3_global_func_id(nic_dev->hwdev);
+	err = hinic3_update_mac(nic_dev->hwdev,
+				nic_dev->default_addr.addr_bytes,
+				addr->addr_bytes, 0, func_id);
+	if (err)
+		return err;
+
+	rte_ether_addr_copy(addr, &nic_dev->default_addr);
+	rte_ether_format_addr(mac_addr, RTE_ETHER_ADDR_FMT_SIZE,
+			      &nic_dev->default_addr);
+
+	PMD_DRV_LOG(INFO, "Set new MAC address %s", mac_addr);
+	return 0;
+}
+
+/**
+ * Remove a MAC address.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] index
+ * MAC address index.
+ */
+static void
+hinic3_mac_addr_remove(struct rte_eth_dev *dev, uint32_t index)
+{
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+	u16 func_id;
+	int err;
+
+	if (index >= HINIC3_MAX_UC_MAC_ADDRS) {
+		PMD_DRV_LOG(INFO, "MAC index (%u) to remove is out of range",
+			    index);
 		return;
 	}
 
-	/* Aeq0 msg handler. */
-	hinic3_dev_handle_aeq_event(nic_dev->hwdev, param);
+	func_id = hinic3_global_func_id(nic_dev->hwdev);
+	err = hinic3_del_mac(nic_dev->hwdev,
+			     dev->data->mac_addrs[index].addr_bytes, 0,
+			     func_id);
+	if (err)
+		PMD_DRV_LOG(ERR, "Remove MAC index(%u) failed", index);
+}
+
+/**
+ * Add a MAC address.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] mac_addr
+ * MAC address to register.
+ * @param[in] index
+ * MAC address index.
+ * @param[in] vmdq
+ * VMDq pool index to associate address with (unused).
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_mac_addr_add(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr,
+		    uint32_t index, __rte_unused uint32_t vmdq)
+{
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+	unsigned int i;
+	u16 func_id;
+	int err;
+
+	if (!rte_is_valid_assigned_ether_addr(mac_addr)) {
+		PMD_DRV_LOG(ERR, "Add invalid MAC address");
+		return -EINVAL;
+	}
+
+	if (index >= HINIC3_MAX_UC_MAC_ADDRS) {
+		PMD_DRV_LOG(ERR, "MAC index (%u) to add is out of range", index);
+		return -EINVAL;
+	}
+
+	/* Make sure this address is not already configured. */
+	for (i = 0; i < HINIC3_MAX_UC_MAC_ADDRS; i++) {
+		if (rte_is_same_ether_addr(mac_addr,
+					   &dev->data->mac_addrs[i])) {
+			PMD_DRV_LOG(ERR, "MAC address is already configured");
+			return -EADDRINUSE;
+		}
+	}
+
+	func_id = hinic3_global_func_id(nic_dev->hwdev);
+	err = hinic3_set_mac(nic_dev->hwdev, mac_addr->addr_bytes, 0, func_id);
+	if (err)
+		return err;
+
+	return 0;
 }
 
+/**
+ * Delete all multicast MAC addresses from the NIC device.
+ *
+ * This function iterates over the list of multicast MAC addresses and removes
+ * each address from the NIC device by calling `hinic3_del_mac`. After each
+ * deletion, the address is reset to zero.
+ *
+ * @param[in] nic_dev
+ * Pointer to NIC device structure.
+ */
 static void
-hinic3_deinit_sw_rxtxqs(struct hinic3_nic_dev *nic_dev)
+hinic3_delete_mc_addr_list(struct hinic3_nic_dev *nic_dev)
 {
-	rte_free(nic_dev->txqs);
-	nic_dev->txqs = NULL;
+	u16 func_id;
+	u32 i;
 
-	rte_free(nic_dev->rxqs);
-	nic_dev->rxqs = NULL;
+	func_id = hinic3_global_func_id(nic_dev->hwdev);
+
+	for (i = 0; i < HINIC3_MAX_MC_MAC_ADDRS; i++) {
+		if (rte_is_zero_ether_addr(&nic_dev->mc_list[i]))
+			break;
+
+		hinic3_del_mac(nic_dev->hwdev, nic_dev->mc_list[i].addr_bytes,
+			       0, func_id);
+		memset(&nic_dev->mc_list[i], 0, sizeof(struct rte_ether_addr));
+	}
+}
+
+/**
+ * Set multicast MAC address.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] mc_addr_set
+ * Pointer to multicast MAC address.
+ * @param[in] nb_mc_addr
+ * The number of multicast MAC address to set.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_set_mc_addr_list(struct rte_eth_dev *dev,
+			struct rte_ether_addr *mc_addr_set, uint32_t nb_mc_addr)
+{
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+	char mac_addr[RTE_ETHER_ADDR_FMT_SIZE];
+	u16 func_id;
+	int err;
+	u32 i;
+
+	func_id = hinic3_global_func_id(nic_dev->hwdev);
+
+	/* Delete the old multicast addresses first. */
+	hinic3_delete_mc_addr_list(nic_dev);
+
+	if (nb_mc_addr > HINIC3_MAX_MC_MAC_ADDRS)
+		return -EINVAL;
+
+	for (i = 0; i < nb_mc_addr; i++) {
+		if (!rte_is_multicast_ether_addr(&mc_addr_set[i])) {
+			rte_ether_format_addr(mac_addr, RTE_ETHER_ADDR_FMT_SIZE,
+					      &mc_addr_set[i]);
+			PMD_DRV_LOG(ERR,
+				    "Set mc MAC addr failed, addr(%s) invalid",
+				    mac_addr);
+			return -EINVAL;
+		}
+	}
+
+	for (i = 0; i < nb_mc_addr; i++) {
+		err = hinic3_set_mac(nic_dev->hwdev, mc_addr_set[i].addr_bytes,
+				     0, func_id);
+		if (err) {
+			hinic3_delete_mc_addr_list(nic_dev);
+			return err;
+		}
+
+		rte_ether_addr_copy(&mc_addr_set[i], &nic_dev->mc_list[i]);
+	}
+
+	return 0;
+}
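hinic3_set_mc_addr_list is all-or-nothing: the whole batch is validated before any entry is programmed, and on a mid-list hardware failure the already-programmed entries are rolled back (via hinic3_delete_mc_addr_list). The shape of that pattern, sketched with hypothetical `demo_*` helpers that are not driver APIs (`fail_at` simulates a hardware error on that index, -1 for none):

```c
/* Validate first, then apply; on a mid-list failure undo partial work
 * so the caller never observes a half-programmed list. */
static int
demo_set_list(const int *vals, int n, int *applied, int *applied_cnt,
	      int fail_at)
{
	int i;

	/* Reject the whole batch before touching any state. */
	for (i = 0; i < n; i++)
		if (vals[i] < 0)
			return -1;

	*applied_cnt = 0;
	for (i = 0; i < n; i++) {
		if (i == fail_at) {
			*applied_cnt = 0; /* Roll back everything applied. */
			return -2;
		}
		applied[(*applied_cnt)++] = vals[i];
	}
	return 0;
}
```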
+
+static int
+hinic3_get_reg(__rte_unused struct rte_eth_dev *dev,
+	       __rte_unused struct rte_dev_reg_info *regs)
+{
+	return 0;
 }
 
+static const struct eth_dev_ops hinic3_pmd_ops = {
+	.dev_configure                 = hinic3_dev_configure,
+	.dev_infos_get                 = hinic3_dev_infos_get,
+	.fw_version_get                = hinic3_fw_version_get,
+	.dev_set_link_up               = hinic3_dev_set_link_up,
+	.dev_set_link_down             = hinic3_dev_set_link_down,
+	.link_update                   = hinic3_link_update,
+	.rx_queue_setup                = hinic3_rx_queue_setup,
+	.tx_queue_setup                = hinic3_tx_queue_setup,
+	.rx_queue_release              = hinic3_rx_queue_release,
+	.tx_queue_release              = hinic3_tx_queue_release,
+	.rx_queue_start                = hinic3_dev_rx_queue_start,
+	.rx_queue_stop                 = hinic3_dev_rx_queue_stop,
+	.tx_queue_start                = hinic3_dev_tx_queue_start,
+	.tx_queue_stop                 = hinic3_dev_tx_queue_stop,
+	.rx_queue_intr_enable          = hinic3_dev_rx_queue_intr_enable,
+	.rx_queue_intr_disable         = hinic3_dev_rx_queue_intr_disable,
+	.dev_start                     = hinic3_dev_start,
+	.dev_stop                      = hinic3_dev_stop,
+	.dev_close                     = hinic3_dev_close,
+	.dev_reset                     = hinic3_dev_reset,
+	.mtu_set                       = hinic3_dev_set_mtu,
+	.vlan_filter_set               = hinic3_vlan_filter_set,
+	.vlan_offload_set              = hinic3_vlan_offload_set,
+	.allmulticast_enable           = hinic3_dev_allmulticast_enable,
+	.allmulticast_disable          = hinic3_dev_allmulticast_disable,
+	.stats_get                     = hinic3_dev_stats_get,
+	.stats_reset                   = hinic3_dev_stats_reset,
+	.xstats_get                    = hinic3_dev_xstats_get,
+	.xstats_reset                  = hinic3_dev_xstats_reset,
+	.xstats_get_names              = hinic3_dev_xstats_get_names,
+	.dev_supported_ptypes_get      = hinic3_dev_supported_ptypes_get,
+	.rxq_info_get                  = hinic3_rxq_info_get,
+	.txq_info_get                  = hinic3_txq_info_get,
+	.mac_addr_set                  = hinic3_set_mac_addr,
+	.mac_addr_remove               = hinic3_mac_addr_remove,
+	.mac_addr_add                  = hinic3_mac_addr_add,
+	.set_mc_addr_list              = hinic3_set_mc_addr_list,
+	.get_reg                       = hinic3_get_reg,
+};
+
+static const struct eth_dev_ops hinic3_pmd_vf_ops = {
+	.dev_configure                 = hinic3_dev_configure,
+	.dev_infos_get                 = hinic3_dev_infos_get,
+	.fw_version_get                = hinic3_fw_version_get,
+	.rx_queue_setup                = hinic3_rx_queue_setup,
+	.tx_queue_setup                = hinic3_tx_queue_setup,
+	.rx_queue_intr_enable          = hinic3_dev_rx_queue_intr_enable,
+	.rx_queue_intr_disable         = hinic3_dev_rx_queue_intr_disable,
+
+	.rx_queue_start                = hinic3_dev_rx_queue_start,
+	.rx_queue_stop                 = hinic3_dev_rx_queue_stop,
+	.tx_queue_start                = hinic3_dev_tx_queue_start,
+	.tx_queue_stop                 = hinic3_dev_tx_queue_stop,
+
+	.dev_start                     = hinic3_dev_start,
+	.link_update                   = hinic3_link_update,
+	.rx_queue_release              = hinic3_rx_queue_release,
+	.tx_queue_release              = hinic3_tx_queue_release,
+	.dev_stop                      = hinic3_dev_stop,
+	.dev_close                     = hinic3_dev_close,
+	.mtu_set                       = hinic3_dev_set_mtu,
+	.vlan_filter_set               = hinic3_vlan_filter_set,
+	.vlan_offload_set              = hinic3_vlan_offload_set,
+	.allmulticast_enable           = hinic3_dev_allmulticast_enable,
+	.allmulticast_disable          = hinic3_dev_allmulticast_disable,
+	.stats_get                     = hinic3_dev_stats_get,
+	.stats_reset                   = hinic3_dev_stats_reset,
+	.xstats_get                    = hinic3_dev_xstats_get,
+	.xstats_reset                  = hinic3_dev_xstats_reset,
+	.xstats_get_names              = hinic3_dev_xstats_get_names,
+	.rxq_info_get                  = hinic3_rxq_info_get,
+	.txq_info_get                  = hinic3_txq_info_get,
+	.mac_addr_set                  = hinic3_set_mac_addr,
+	.mac_addr_remove               = hinic3_mac_addr_remove,
+	.mac_addr_add                  = hinic3_mac_addr_add,
+	.set_mc_addr_list              = hinic3_set_mc_addr_list,
+};
+
 /**
  * Init mac_vlan table in hardware.
  *
@@ -319,6 +3194,15 @@ hinic3_func_init(struct rte_eth_dev *eth_dev)
 	nic_dev->max_sqs = hinic3_func_max_sqs(nic_dev->hwdev);
 	nic_dev->max_rqs = hinic3_func_max_rqs(nic_dev->hwdev);
 
+	if (HINIC3_FUNC_TYPE(nic_dev->hwdev) == TYPE_VF)
+		eth_dev->dev_ops = &hinic3_pmd_vf_ops;
+	else
+		eth_dev->dev_ops = &hinic3_pmd_ops;
+
+	eth_dev->rx_queue_count = hinic3_dev_rx_queue_count;
+	eth_dev->rx_descriptor_status = hinic3_dev_rx_descriptor_status;
+	eth_dev->tx_descriptor_status = hinic3_dev_tx_descriptor_status;
+
 	err = hinic3_init_nic_hwdev(nic_dev->hwdev);
 	if (err) {
 		PMD_DRV_LOG(ERR, "Init nic hwdev failed, dev_name: %s",
diff --git a/drivers/net/hinic3/hinic3_nic_io.c b/drivers/net/hinic3/hinic3_nic_io.c
new file mode 100644
index 0000000000..aba5a641bc
--- /dev/null
+++ b/drivers/net/hinic3/hinic3_nic_io.c
@@ -0,0 +1,827 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Huawei Technologies Co., Ltd
+ */
+
+#include <rte_bus_pci.h>
+#include <rte_config.h>
+#include <rte_errno.h>
+#include <rte_ether.h>
+#include <rte_io.h>
+#include <rte_malloc.h>
+#include <rte_mbuf.h>
+#include <rte_mempool.h>
+#include <rte_pci.h>
+
+#include "base/hinic3_compat.h"
+#include "base/hinic3_cmd.h"
+#include "base/hinic3_cmdq.h"
+#include "base/hinic3_hw_comm.h"
+#include "base/hinic3_nic_cfg.h"
+#include "hinic3_ethdev.h"
+#include "hinic3_nic_io.h"
+#include "hinic3_rx.h"
+#include "hinic3_tx.h"
+
+#define HINIC3_DEAULT_TX_CI_PENDING_LIMIT   3
+#define HINIC3_DEAULT_TX_CI_COALESCING_TIME 16
+#define HINIC3_DEAULT_DROP_THD_ON	    0xFFFF
+#define HINIC3_DEAULT_DROP_THD_OFF	    0
+
+#define WQ_PREFETCH_MAX	      6
+#define WQ_PREFETCH_MIN	      1
+#define WQ_PREFETCH_THRESHOLD 256
+
+#define HINIC3_Q_CTXT_MAX \
+	((u16)(((HINIC3_CMDQ_BUF_SIZE - 8) - RTE_PKTMBUF_HEADROOM) / 64))
+
+enum hinic3_qp_ctxt_type {
+	HINIC3_QP_CTXT_TYPE_SQ,
+	HINIC3_QP_CTXT_TYPE_RQ,
+};
+
+struct hinic3_qp_ctxt_header {
+	u16 num_queues;
+	u16 queue_type;
+	u16 start_qid;
+	u16 rsvd;
+};
+
+struct hinic3_sq_ctxt {
+	u32 ci_pi;
+	u32 drop_mode_sp;    /**< Packet drop mode and special flags. */
+	u32 wq_pfn_hi_owner; /**< High PFN and ownership flag. */
+	u32 wq_pfn_lo;	     /**< Low bits of work queue PFN. */
+
+	u32 rsvd0;	  /**< Reserved field 0. */
+	u32 pkt_drop_thd; /**< Packet drop threshold. */
+	u32 global_sq_id;
+	u32 vlan_ceq_attr; /**< VLAN and CEQ attributes. */
+
+	u32 pref_cache;	       /**< Cache prefetch settings for the queue. */
+	u32 pref_ci_owner;     /**< Prefetch settings for CI and ownership. */
+	u32 pref_wq_pfn_hi_ci; /**< Prefetch settings for high PFN and CI. */
+	u32 pref_wq_pfn_lo;    /**< Prefetch settings for low PFN. */
+
+	u32 rsvd8;	     /**< Reserved field 8. */
+	u32 rsvd9;	     /**< Reserved field 9. */
+	u32 wq_block_pfn_hi; /**< High bits of work queue block PFN. */
+	u32 wq_block_pfn_lo; /**< Low bits of work queue block PFN. */
+};
+
+struct hinic3_rq_ctxt {
+	u32 ci_pi;
+	u32 ceq_attr;		  /**< Completion event queue attributes. */
+	u32 wq_pfn_hi_type_owner; /**< High PFN, WQE type and ownership flag. */
+	u32 wq_pfn_lo;		  /**< Low bits of work queue PFN. */
+
+	u32 rsvd[3];	 /**< Reserved field. */
+	u32 cqe_sge_len; /**< CQE scatter/gather element length. */
+
+	u32 pref_cache;	       /**< Cache prefetch settings for the queue. */
+	u32 pref_ci_owner;     /**< Prefetch settings for CI and ownership. */
+	u32 pref_wq_pfn_hi_ci; /**< Prefetch settings for high PFN and CI. */
+	u32 pref_wq_pfn_lo;    /**< Prefetch settings for low PFN. */
+
+	u32 pi_paddr_hi;     /**< High 32-bits of PI DMA address. */
+	u32 pi_paddr_lo;     /**< Low 32-bits of PI DMA address. */
+	u32 wq_block_pfn_hi; /**< High bits of work queue block PFN. */
+	u32 wq_block_pfn_lo; /**< Low bits of work queue block PFN. */
+};
+
+struct hinic3_sq_ctxt_block {
+	struct hinic3_qp_ctxt_header cmdq_hdr;
+	struct hinic3_sq_ctxt sq_ctxt[HINIC3_Q_CTXT_MAX];
+};
+
+struct hinic3_rq_ctxt_block {
+	struct hinic3_qp_ctxt_header cmdq_hdr;
+	struct hinic3_rq_ctxt rq_ctxt[HINIC3_Q_CTXT_MAX];
+};
+
+struct hinic3_clean_queue_ctxt {
+	struct hinic3_qp_ctxt_header cmdq_hdr;
+	u32 rsvd;
+};
+
+#define SQ_CTXT_SIZE(num_sqs)                         \
+	((u16)(sizeof(struct hinic3_qp_ctxt_header) + \
+	       (num_sqs) * sizeof(struct hinic3_sq_ctxt)))
+
+#define RQ_CTXT_SIZE(num_rqs)                         \
+	((u16)(sizeof(struct hinic3_qp_ctxt_header) + \
+	       (num_rqs) * sizeof(struct hinic3_rq_ctxt)))
+
+#define CI_IDX_HIGH_SHIFT 12
+
+#define CI_HIGN_IDX(val) ((val) >> CI_IDX_HIGH_SHIFT)
+
+#define SQ_CTXT_PI_IDX_SHIFT 0
+#define SQ_CTXT_CI_IDX_SHIFT 16
+
+#define SQ_CTXT_PI_IDX_MASK 0xFFFFU
+#define SQ_CTXT_CI_IDX_MASK 0xFFFFU
+
+#define SQ_CTXT_CI_PI_SET(val, member) \
+	(((val) & SQ_CTXT_##member##_MASK) << SQ_CTXT_##member##_SHIFT)
+
+#define SQ_CTXT_MODE_SP_FLAG_SHIFT  0
+#define SQ_CTXT_MODE_PKT_DROP_SHIFT 1
+
+#define SQ_CTXT_MODE_SP_FLAG_MASK  0x1U
+#define SQ_CTXT_MODE_PKT_DROP_MASK 0x1U
+
+#define SQ_CTXT_MODE_SET(val, member)           \
+	(((val) & SQ_CTXT_MODE_##member##_MASK) \
+	 << SQ_CTXT_MODE_##member##_SHIFT)
+
+#define SQ_CTXT_WQ_PAGE_HI_PFN_SHIFT 0
+#define SQ_CTXT_WQ_PAGE_OWNER_SHIFT  23
+
+#define SQ_CTXT_WQ_PAGE_HI_PFN_MASK 0xFFFFFU
+#define SQ_CTXT_WQ_PAGE_OWNER_MASK  0x1U
+
+#define SQ_CTXT_WQ_PAGE_SET(val, member)           \
+	(((val) & SQ_CTXT_WQ_PAGE_##member##_MASK) \
+	 << SQ_CTXT_WQ_PAGE_##member##_SHIFT)
+
+#define SQ_CTXT_PKT_DROP_THD_ON_SHIFT  0
+#define SQ_CTXT_PKT_DROP_THD_OFF_SHIFT 16
+
+#define SQ_CTXT_PKT_DROP_THD_ON_MASK  0xFFFFU
+#define SQ_CTXT_PKT_DROP_THD_OFF_MASK 0xFFFFU
+
+#define SQ_CTXT_PKT_DROP_THD_SET(val, member)       \
+	(((val) & SQ_CTXT_PKT_DROP_##member##_MASK) \
+	 << SQ_CTXT_PKT_DROP_##member##_SHIFT)
+
+#define SQ_CTXT_GLOBAL_SQ_ID_SHIFT 0
+
+#define SQ_CTXT_GLOBAL_SQ_ID_MASK 0x1FFFU
+
+#define SQ_CTXT_GLOBAL_QUEUE_ID_SET(val, member) \
+	(((val) & SQ_CTXT_##member##_MASK) << SQ_CTXT_##member##_SHIFT)
+
+#define SQ_CTXT_VLAN_TAG_SHIFT	       0
+#define SQ_CTXT_VLAN_TYPE_SEL_SHIFT    16
+#define SQ_CTXT_VLAN_INSERT_MODE_SHIFT 19
+#define SQ_CTXT_VLAN_CEQ_EN_SHIFT      23
+
+#define SQ_CTXT_VLAN_TAG_MASK	      0xFFFFU
+#define SQ_CTXT_VLAN_TYPE_SEL_MASK    0x7U
+#define SQ_CTXT_VLAN_INSERT_MODE_MASK 0x3U
+#define SQ_CTXT_VLAN_CEQ_EN_MASK      0x1U
+
+#define SQ_CTXT_VLAN_CEQ_SET(val, member)       \
+	(((val) & SQ_CTXT_VLAN_##member##_MASK) \
+	 << SQ_CTXT_VLAN_##member##_SHIFT)
+
+#define SQ_CTXT_PREF_CACHE_THRESHOLD_SHIFT 0
+#define SQ_CTXT_PREF_CACHE_MAX_SHIFT	   14
+#define SQ_CTXT_PREF_CACHE_MIN_SHIFT	   25
+
+#define SQ_CTXT_PREF_CACHE_THRESHOLD_MASK 0x3FFFU
+#define SQ_CTXT_PREF_CACHE_MAX_MASK	  0x7FFU
+#define SQ_CTXT_PREF_CACHE_MIN_MASK	  0x7FU
+
+#define SQ_CTXT_PREF_CI_HI_SHIFT 0
+#define SQ_CTXT_PREF_OWNER_SHIFT 4
+
+#define SQ_CTXT_PREF_CI_HI_MASK 0xFU
+#define SQ_CTXT_PREF_OWNER_MASK 0x1U
+
+#define SQ_CTXT_PREF_WQ_PFN_HI_SHIFT 0
+#define SQ_CTXT_PREF_CI_LOW_SHIFT    20
+
+#define SQ_CTXT_PREF_WQ_PFN_HI_MASK 0xFFFFFU
+#define SQ_CTXT_PREF_CI_LOW_MASK    0xFFFU
+
+#define SQ_CTXT_PREF_SET(val, member)           \
+	(((val) & SQ_CTXT_PREF_##member##_MASK) \
+	 << SQ_CTXT_PREF_##member##_SHIFT)
+
+#define SQ_CTXT_WQ_BLOCK_PFN_HI_SHIFT 0
+
+#define SQ_CTXT_WQ_BLOCK_PFN_HI_MASK 0x7FFFFFU
+
+#define SQ_CTXT_WQ_BLOCK_SET(val, member)           \
+	(((val) & SQ_CTXT_WQ_BLOCK_##member##_MASK) \
+	 << SQ_CTXT_WQ_BLOCK_##member##_SHIFT)
+
+#define RQ_CTXT_PI_IDX_SHIFT 0
+#define RQ_CTXT_CI_IDX_SHIFT 16
+
+#define RQ_CTXT_PI_IDX_MASK 0xFFFFU
+#define RQ_CTXT_CI_IDX_MASK 0xFFFFU
+
+#define RQ_CTXT_CI_PI_SET(val, member) \
+	(((val) & RQ_CTXT_##member##_MASK) << RQ_CTXT_##member##_SHIFT)
+
+#define RQ_CTXT_CEQ_ATTR_INTR_SHIFT	21
+#define RQ_CTXT_CEQ_ATTR_INTR_ARM_SHIFT 30
+#define RQ_CTXT_CEQ_ATTR_EN_SHIFT	31
+
+#define RQ_CTXT_CEQ_ATTR_INTR_MASK     0x3FFU
+#define RQ_CTXT_CEQ_ATTR_INTR_ARM_MASK 0x1U
+#define RQ_CTXT_CEQ_ATTR_EN_MASK       0x1U
+
+#define RQ_CTXT_CEQ_ATTR_SET(val, member)           \
+	(((val) & RQ_CTXT_CEQ_ATTR_##member##_MASK) \
+	 << RQ_CTXT_CEQ_ATTR_##member##_SHIFT)
+
+#define RQ_CTXT_WQ_PAGE_HI_PFN_SHIFT   0
+#define RQ_CTXT_WQ_PAGE_WQE_TYPE_SHIFT 28
+#define RQ_CTXT_WQ_PAGE_OWNER_SHIFT    31
+
+#define RQ_CTXT_WQ_PAGE_HI_PFN_MASK   0xFFFFFU
+#define RQ_CTXT_WQ_PAGE_WQE_TYPE_MASK 0x3U
+#define RQ_CTXT_WQ_PAGE_OWNER_MASK    0x1U
+
+#define RQ_CTXT_WQ_PAGE_SET(val, member)           \
+	(((val) & RQ_CTXT_WQ_PAGE_##member##_MASK) \
+	 << RQ_CTXT_WQ_PAGE_##member##_SHIFT)
+
+#define RQ_CTXT_CQE_LEN_SHIFT 28
+
+#define RQ_CTXT_CQE_LEN_MASK 0x3U
+
+#define RQ_CTXT_CQE_LEN_SET(val, member) \
+	(((val) & RQ_CTXT_##member##_MASK) << RQ_CTXT_##member##_SHIFT)
+
+#define RQ_CTXT_PREF_CACHE_THRESHOLD_SHIFT 0
+#define RQ_CTXT_PREF_CACHE_MAX_SHIFT	   14
+#define RQ_CTXT_PREF_CACHE_MIN_SHIFT	   25
+
+#define RQ_CTXT_PREF_CACHE_THRESHOLD_MASK 0x3FFFU
+#define RQ_CTXT_PREF_CACHE_MAX_MASK	  0x7FFU
+#define RQ_CTXT_PREF_CACHE_MIN_MASK	  0x7FU
+
+#define RQ_CTXT_PREF_CI_HI_SHIFT 0
+#define RQ_CTXT_PREF_OWNER_SHIFT 4
+
+#define RQ_CTXT_PREF_CI_HI_MASK 0xFU
+#define RQ_CTXT_PREF_OWNER_MASK 0x1U
+
+#define RQ_CTXT_PREF_WQ_PFN_HI_SHIFT 0
+#define RQ_CTXT_PREF_CI_LOW_SHIFT    20
+
+#define RQ_CTXT_PREF_WQ_PFN_HI_MASK 0xFFFFFU
+#define RQ_CTXT_PREF_CI_LOW_MASK    0xFFFU
+
+#define RQ_CTXT_PREF_SET(val, member)           \
+	(((val) & RQ_CTXT_PREF_##member##_MASK) \
+	 << RQ_CTXT_PREF_##member##_SHIFT)
+
+#define RQ_CTXT_WQ_BLOCK_PFN_HI_SHIFT 0
+
+#define RQ_CTXT_WQ_BLOCK_PFN_HI_MASK 0x7FFFFFU
+
+#define RQ_CTXT_WQ_BLOCK_SET(val, member)           \
+	(((val) & RQ_CTXT_WQ_BLOCK_##member##_MASK) \
+	 << RQ_CTXT_WQ_BLOCK_##member##_SHIFT)
+
+#define SIZE_16BYTES(size) (RTE_ALIGN((size), 16) >> 4)
+
+#define WQ_PAGE_PFN_SHIFT  12
+#define WQ_BLOCK_PFN_SHIFT 9
+
+#define WQ_PAGE_PFN(page_addr)	((page_addr) >> WQ_PAGE_PFN_SHIFT)
+#define WQ_BLOCK_PFN(page_addr) ((page_addr) >> WQ_BLOCK_PFN_SHIFT)
+
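The MASK/SHIFT/SET macro families above compose hardware register words by token pasting, so each field is named once and misuse fails at compile time. A self-contained sketch of the same pattern with an invented two-field layout (PI in bits 0..15, CI in bits 16..31; not the real hardware layout):

```c
#include <stdint.h>

/* Invented field layout, mirroring the SQ_CTXT_CI_PI_SET() style. */
#define DEMO_PI_IDX_SHIFT 0
#define DEMO_CI_IDX_SHIFT 16

#define DEMO_PI_IDX_MASK 0xFFFFU
#define DEMO_CI_IDX_MASK 0xFFFFU

/* Token pasting selects the field's mask and shift from its name. */
#define DEMO_CI_PI_SET(val, member) \
	(((val) & DEMO_##member##_MASK) << DEMO_##member##_SHIFT)

static uint32_t
demo_pack_ci_pi(uint16_t ci, uint16_t pi)
{
	return DEMO_CI_PI_SET(ci, CI_IDX) | DEMO_CI_PI_SET(pi, PI_IDX);
}
```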
+/**
+ * Prepare the command queue header and convert it to big-endian format.
+ *
+ * @param[out] qp_ctxt_hdr
+ * Pointer to command queue context header structure to be initialized.
+ * @param[in] ctxt_type
+ * Type of context (SQ/RQ) to be set in header.
+ * @param[in] num_queues
+ * Number of queues.
+ * @param[in] q_id
+ * Starting queue ID for this context.
+ */
+static void
+hinic3_qp_prepare_cmdq_header(struct hinic3_qp_ctxt_header *qp_ctxt_hdr,
+			      enum hinic3_qp_ctxt_type ctxt_type,
+			      u16 num_queues, u16 q_id)
+{
+	qp_ctxt_hdr->queue_type = ctxt_type;
+	qp_ctxt_hdr->num_queues = num_queues;
+	qp_ctxt_hdr->start_qid = q_id;
+	qp_ctxt_hdr->rsvd = 0;
+
+	rte_mb();
+
+	hinic3_cpu_to_be32(qp_ctxt_hdr, sizeof(*qp_ctxt_hdr));
+}
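hinic3_cpu_to_be32() (from the base code) converts the whole context buffer to big-endian one 32-bit word at a time before it is handed to hardware. A stand-alone sketch of that conversion, assuming a little-endian host and a buffer length that is a multiple of 4, as the context structs above guarantee (the real helper presumably compiles to a no-op on big-endian hosts; this sketch swaps unconditionally):

```c
#include <stddef.h>
#include <stdint.h>

/* Byte-swap one 32-bit word. */
static uint32_t
demo_bswap32(uint32_t v)
{
	return (v >> 24) | ((v >> 8) & 0x0000FF00U) |
	       ((v << 8) & 0x00FF0000U) | (v << 24);
}

/* In-place CPU-to-BE conversion of a word-aligned buffer; correct for
 * little-endian hosts only, since it always swaps. */
static void
demo_cpu_to_be32(void *buf, size_t len)
{
	uint32_t *word = buf;
	size_t i;

	for (i = 0; i < len / sizeof(uint32_t); i++)
		word[i] = demo_bswap32(word[i]);
}
```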
+
+/**
+ * Initialize context structure for specified TXQ by configuring various queue
+ * parameters (e.g., ci, pi, work queue page addresses).
+ *
+ * @param[in] sq
+ * Pointer to TXQ structure.
+ * @param[in] sq_id
+ * ID of TXQ being configured.
+ * @param[out] sq_ctxt
+ * Pointer to structure that will hold TXQ context.
+ */
+static void
+hinic3_sq_prepare_ctxt(struct hinic3_txq *sq, u16 sq_id,
+		       struct hinic3_sq_ctxt *sq_ctxt)
+{
+	u64 wq_page_addr, wq_page_pfn, wq_block_pfn;
+	u32 wq_page_pfn_hi, wq_page_pfn_lo, wq_block_pfn_hi, wq_block_pfn_lo;
+	u16 pi_start, ci_start;
+
+	ci_start = sq->cons_idx & sq->q_mask;
+	pi_start = sq->prod_idx & sq->q_mask;
+
+	/* Read the first page from hardware table. */
+	wq_page_addr = sq->queue_buf_paddr;
+
+	wq_page_pfn = WQ_PAGE_PFN(wq_page_addr);
+	wq_page_pfn_hi = upper_32_bits(wq_page_pfn);
+	wq_page_pfn_lo = lower_32_bits(wq_page_pfn);
+
+	/* Use 0-level CLA. */
+	wq_block_pfn = WQ_BLOCK_PFN(wq_page_addr);
+	wq_block_pfn_hi = upper_32_bits(wq_block_pfn);
+	wq_block_pfn_lo = lower_32_bits(wq_block_pfn);
+
+	sq_ctxt->ci_pi = SQ_CTXT_CI_PI_SET(ci_start, CI_IDX) |
+			 SQ_CTXT_CI_PI_SET(pi_start, PI_IDX);
+
+	sq_ctxt->drop_mode_sp = SQ_CTXT_MODE_SET(0, SP_FLAG) |
+				SQ_CTXT_MODE_SET(0, PKT_DROP);
+
+	sq_ctxt->wq_pfn_hi_owner = SQ_CTXT_WQ_PAGE_SET(wq_page_pfn_hi, HI_PFN) |
+				   SQ_CTXT_WQ_PAGE_SET(1, OWNER);
+
+	sq_ctxt->wq_pfn_lo = wq_page_pfn_lo;
+
+	sq_ctxt->pkt_drop_thd =
+		SQ_CTXT_PKT_DROP_THD_SET(HINIC3_DEAULT_DROP_THD_ON, THD_ON) |
+		SQ_CTXT_PKT_DROP_THD_SET(HINIC3_DEAULT_DROP_THD_OFF, THD_OFF);
+
+	sq_ctxt->global_sq_id =
+		SQ_CTXT_GLOBAL_QUEUE_ID_SET(sq_id, GLOBAL_SQ_ID);
+
+	/* Insert C-VLAN by default. */
+	sq_ctxt->vlan_ceq_attr = SQ_CTXT_VLAN_CEQ_SET(0, CEQ_EN) |
+				 SQ_CTXT_VLAN_CEQ_SET(1, INSERT_MODE);
+
+	sq_ctxt->rsvd0 = 0;
+
+	sq_ctxt->pref_cache =
+		SQ_CTXT_PREF_SET(WQ_PREFETCH_MIN, CACHE_MIN) |
+		SQ_CTXT_PREF_SET(WQ_PREFETCH_MAX, CACHE_MAX) |
+		SQ_CTXT_PREF_SET(WQ_PREFETCH_THRESHOLD, CACHE_THRESHOLD);
+
+	sq_ctxt->pref_ci_owner =
+		SQ_CTXT_PREF_SET(CI_HIGN_IDX(ci_start), CI_HI) |
+		SQ_CTXT_PREF_SET(1, OWNER);
+
+	sq_ctxt->pref_wq_pfn_hi_ci =
+		SQ_CTXT_PREF_SET(ci_start, CI_LOW) |
+		SQ_CTXT_PREF_SET(wq_page_pfn_hi, WQ_PFN_HI);
+
+	sq_ctxt->pref_wq_pfn_lo = wq_page_pfn_lo;
+
+	sq_ctxt->wq_block_pfn_hi =
+		SQ_CTXT_WQ_BLOCK_SET(wq_block_pfn_hi, PFN_HI);
+
+	sq_ctxt->wq_block_pfn_lo = wq_block_pfn_lo;
+
+	rte_mb();
+
+	hinic3_cpu_to_be32(sq_ctxt, sizeof(*sq_ctxt));
+}
+
+/**
+ * Initialize context structure for specified RXQ by configuring various queue
+ * parameters (e.g., ci, pi, work queue page addresses).
+ *
+ * @param[in] rq
+ * Pointer to RXQ structure.
+ * @param[out] rq_ctxt
+ * Pointer to structure that will hold RXQ context.
+ */
+static void
+hinic3_rq_prepare_ctxt(struct hinic3_rxq *rq, struct hinic3_rq_ctxt *rq_ctxt)
+{
+	u64 wq_page_addr, wq_page_pfn, wq_block_pfn;
+	u32 wq_page_pfn_hi, wq_page_pfn_lo, wq_block_pfn_hi, wq_block_pfn_lo;
+	u16 pi_start, ci_start;
+	u16 wqe_type = rq->wqebb_shift - HINIC3_RQ_WQEBB_SHIFT;
+	u8 intr_disable;
+
+	/* RQ depth is in units of 8 bytes. */
+	ci_start = (u16)((rq->cons_idx & rq->q_mask) << wqe_type);
+	pi_start = (u16)((rq->prod_idx & rq->q_mask) << wqe_type);
+
+	/* Read the first page from hardware table. */
+	wq_page_addr = rq->queue_buf_paddr;
+
+	wq_page_pfn = WQ_PAGE_PFN(wq_page_addr);
+	wq_page_pfn_hi = upper_32_bits(wq_page_pfn);
+	wq_page_pfn_lo = lower_32_bits(wq_page_pfn);
+
+	/* Use 0-level CLA. */
+	wq_block_pfn = WQ_BLOCK_PFN(wq_page_addr);
+	wq_block_pfn_hi = upper_32_bits(wq_block_pfn);
+	wq_block_pfn_lo = lower_32_bits(wq_block_pfn);
+
+	rq_ctxt->ci_pi = RQ_CTXT_CI_PI_SET(ci_start, CI_IDX) |
+			 RQ_CTXT_CI_PI_SET(pi_start, PI_IDX);
+
+	/* RQ needs no CEQ; msix_entry_idx is set but left unarmed. */
+	intr_disable = rq->dp_intr_en ? 0 : 1;
+	rq_ctxt->ceq_attr = RQ_CTXT_CEQ_ATTR_SET(intr_disable, EN) |
+			    RQ_CTXT_CEQ_ATTR_SET(0, INTR_ARM) |
+			    RQ_CTXT_CEQ_ATTR_SET(rq->msix_entry_idx, INTR);
+
+	/* Use 32-byte WQE with SGE for CQE by default. */
+	rq_ctxt->wq_pfn_hi_type_owner =
+		RQ_CTXT_WQ_PAGE_SET(wq_page_pfn_hi, HI_PFN) |
+		RQ_CTXT_WQ_PAGE_SET(1, OWNER);
+
+	switch (wqe_type) {
+	case HINIC3_EXTEND_RQ_WQE:
+		/* Use 32-byte WQE with SGE for CQE. */
+		rq_ctxt->wq_pfn_hi_type_owner |=
+			RQ_CTXT_WQ_PAGE_SET(0, WQE_TYPE);
+		break;
+	case HINIC3_NORMAL_RQ_WQE:
+		/* Use 16-byte WQE with 32-byte SGE for CQE. */
+		rq_ctxt->wq_pfn_hi_type_owner |=
+			RQ_CTXT_WQ_PAGE_SET(2, WQE_TYPE);
+		rq_ctxt->cqe_sge_len = RQ_CTXT_CQE_LEN_SET(1, CQE_LEN);
+		break;
+	default:
+		PMD_DRV_LOG(WARNING, "Invalid rq wqe type: %u", wqe_type);
+	}
+
+	rq_ctxt->wq_pfn_lo = wq_page_pfn_lo;
+
+	rq_ctxt->pref_cache =
+		RQ_CTXT_PREF_SET(WQ_PREFETCH_MIN, CACHE_MIN) |
+		RQ_CTXT_PREF_SET(WQ_PREFETCH_MAX, CACHE_MAX) |
+		RQ_CTXT_PREF_SET(WQ_PREFETCH_THRESHOLD, CACHE_THRESHOLD);
+
+	rq_ctxt->pref_ci_owner =
+		RQ_CTXT_PREF_SET(CI_HIGN_IDX(ci_start), CI_HI) |
+		RQ_CTXT_PREF_SET(1, OWNER);
+
+	rq_ctxt->pref_wq_pfn_hi_ci =
+		RQ_CTXT_PREF_SET(wq_page_pfn_hi, WQ_PFN_HI) |
+		RQ_CTXT_PREF_SET(ci_start, CI_LOW);
+
+	rq_ctxt->pref_wq_pfn_lo = wq_page_pfn_lo;
+
+	rq_ctxt->pi_paddr_hi = upper_32_bits(rq->pi_dma_addr);
+	rq_ctxt->pi_paddr_lo = lower_32_bits(rq->pi_dma_addr);
+
+	rq_ctxt->wq_block_pfn_hi =
+		RQ_CTXT_WQ_BLOCK_SET(wq_block_pfn_hi, PFN_HI);
+
+	rq_ctxt->wq_block_pfn_lo = wq_block_pfn_lo;
+	rte_mb();
+
+	hinic3_cpu_to_be32(rq_ctxt, sizeof(*rq_ctxt));
+}
+
+/**
+ * Allocate a command buffer, prepare context for each SQ queue by setting
+ * various parameters, send context data to hardware. It processes SQ queues in
+ * batches, with each batch not exceeding `HINIC3_Q_CTXT_MAX` SQ contexts.
+ *
+ * @param[in] nic_dev
+ * Pointer to NIC device structure.
+ *
+ * @return
+ * 0 on success, a negative error code on failure.
+ * - -ENOMEM if the memory allocation for the command buffer fails.
+ * - -EFAULT if the hardware returns an error while processing the context data.
+ */
+static int
+init_sq_ctxts(struct hinic3_nic_dev *nic_dev)
+{
+	struct hinic3_sq_ctxt_block *sq_ctxt_block = NULL;
+	struct hinic3_sq_ctxt *sq_ctxt = NULL;
+	struct hinic3_cmd_buf *cmd_buf = NULL;
+	struct hinic3_txq *sq = NULL;
+	u64 out_param = 0;
+	u16 q_id, curr_id, max_ctxts, i;
+	int err = 0;
+
+	cmd_buf = hinic3_alloc_cmd_buf(nic_dev->hwdev);
+	if (!cmd_buf) {
+		PMD_DRV_LOG(ERR, "Allocate cmd buf for sq ctx failed");
+		return -ENOMEM;
+	}
+
+	q_id = 0;
+	while (q_id < nic_dev->num_sqs) {
+		sq_ctxt_block = cmd_buf->buf;
+		sq_ctxt = sq_ctxt_block->sq_ctxt;
+
+		max_ctxts = (nic_dev->num_sqs - q_id) > HINIC3_Q_CTXT_MAX
+				    ? HINIC3_Q_CTXT_MAX
+				    : (nic_dev->num_sqs - q_id);
+
+		hinic3_qp_prepare_cmdq_header(&sq_ctxt_block->cmdq_hdr,
+					      HINIC3_QP_CTXT_TYPE_SQ, max_ctxts,
+					      q_id);
+
+		for (i = 0; i < max_ctxts; i++) {
+			curr_id = q_id + i;
+			sq = nic_dev->txqs[curr_id];
+			hinic3_sq_prepare_ctxt(sq, curr_id, &sq_ctxt[i]);
+		}
+
+		cmd_buf->size = SQ_CTXT_SIZE(max_ctxts);
+		rte_mb();
+		err = hinic3_cmdq_direct_resp(nic_dev->hwdev, HINIC3_MOD_L2NIC,
+					      HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX,
+					      cmd_buf, &out_param, 0);
+		if (err || out_param != 0) {
+			PMD_DRV_LOG(ERR,
+				    "Set SQ ctxts failed, "
+				    "err: %d, out_param: %" PRIu64,
+				    err, out_param);
+
+			err = -EFAULT;
+			break;
+		}
+
+		q_id += max_ctxts;
+	}
+
+	hinic3_free_cmd_buf(cmd_buf);
+	return err;
+}
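The batching above can be sketched in isolation. The helper below is a hypothetical illustration of the loop structure only (queue count and batch limit are parameters, not the driver's values):

```c
#include <assert.h>

/*
 * Hypothetical sketch of the batching in init_sq_ctxts(): walk all
 * queues, clamp each batch to a maximum number of contexts, and return
 * how many cmdq messages would be sent.
 */
static int count_ctxt_batches(unsigned int num_queues, unsigned int max_ctxts)
{
	unsigned int q_id = 0;
	int batches = 0;

	while (q_id < num_queues) {
		unsigned int remain = num_queues - q_id;
		unsigned int batch = remain > max_ctxts ? max_ctxts : remain;

		/* One cmdq message carries 'batch' contexts. */
		q_id += batch;
		batches++;
	}
	return batches;
}
```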
+
+/**
+ * Initialize contexts for all RQs in the device.
+ *
+ * @param[in] nic_dev
+ * Pointer to NIC device structure.
+ *
+ * @return
+ * 0 on success, a negative error code on failure.
+ * - -ENOMEM if the memory allocation for the command buffer fails.
+ * - -EFAULT if the hardware returns an error while processing the context data.
+ */
+static int
+init_rq_ctxts(struct hinic3_nic_dev *nic_dev)
+{
+	struct hinic3_rq_ctxt_block *rq_ctxt_block = NULL;
+	struct hinic3_rq_ctxt *rq_ctxt = NULL;
+	struct hinic3_cmd_buf *cmd_buf = NULL;
+	struct hinic3_rxq *rq = NULL;
+	u64 out_param = 0;
+	u16 q_id, curr_id, max_ctxts, i;
+	int err = 0;
+
+	cmd_buf = hinic3_alloc_cmd_buf(nic_dev->hwdev);
+	if (!cmd_buf) {
+		PMD_DRV_LOG(ERR, "Allocate cmd buf for rq ctx failed");
+		return -ENOMEM;
+	}
+
+	q_id = 0;
+	while (q_id < nic_dev->num_rqs) {
+		rq_ctxt_block = cmd_buf->buf;
+		rq_ctxt = rq_ctxt_block->rq_ctxt;
+
+		max_ctxts = (nic_dev->num_rqs - q_id) > HINIC3_Q_CTXT_MAX
+				    ? HINIC3_Q_CTXT_MAX
+				    : (nic_dev->num_rqs - q_id);
+
+		hinic3_qp_prepare_cmdq_header(&rq_ctxt_block->cmdq_hdr,
+					      HINIC3_QP_CTXT_TYPE_RQ, max_ctxts,
+					      q_id);
+
+		for (i = 0; i < max_ctxts; i++) {
+			curr_id = q_id + i;
+			rq = nic_dev->rxqs[curr_id];
+			hinic3_rq_prepare_ctxt(rq, &rq_ctxt[i]);
+		}
+
+		cmd_buf->size = RQ_CTXT_SIZE(max_ctxts);
+		rte_mb();
+		err = hinic3_cmdq_direct_resp(nic_dev->hwdev, HINIC3_MOD_L2NIC,
+					      HINIC3_UCODE_CMD_MODIFY_QUEUE_CTX,
+					      cmd_buf, &out_param, 0);
+		if (err || out_param != 0) {
+			PMD_DRV_LOG(ERR,
+				    "Set RQ ctxts failed, "
+				    "err: %d, out_param: %" PRIu64,
+				    err, out_param);
+			err = -EFAULT;
+			break;
+		}
+
+		q_id += max_ctxts;
+	}
+
+	hinic3_free_cmd_buf(cmd_buf);
+	return err;
+}
+
+/**
+ * Allocate memory for command buffer, construct related command request, send a
+ * command to hardware to clean up queue offload context.
+ *
+ * @param[in] nic_dev
+ * Pointer to NIC device structure.
+ * @param[in] ctxt_type
+ * The queue context type that determines which queue type to clean up.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+clean_queue_offload_ctxt(struct hinic3_nic_dev *nic_dev,
+			 enum hinic3_qp_ctxt_type ctxt_type)
+{
+	struct hinic3_clean_queue_ctxt *ctxt_block = NULL;
+	struct hinic3_cmd_buf *cmd_buf;
+	u64 out_param = 0;
+	int err;
+
+	cmd_buf = hinic3_alloc_cmd_buf(nic_dev->hwdev);
+	if (!cmd_buf) {
+		PMD_DRV_LOG(ERR, "Allocate cmd buf for LRO/TSO space failed");
+		return -ENOMEM;
+	}
+
+	/* Construct related command request. */
+	ctxt_block = cmd_buf->buf;
+	/* Assume max_rqs is equal to max_sqs. */
+	ctxt_block->cmdq_hdr.num_queues = nic_dev->max_sqs;
+	ctxt_block->cmdq_hdr.queue_type = ctxt_type;
+	ctxt_block->cmdq_hdr.start_qid = 0;
+	/*
+	 * Memory barrier to ensure the context writes complete before the
+	 * command is sent to hardware.
+	 */
+	rte_mb();
+
+	hinic3_cpu_to_be32(ctxt_block, sizeof(*ctxt_block));
+
+	cmd_buf->size = sizeof(*ctxt_block);
+
+	/* Send a command to hardware to clean up queue offload context. */
+	err = hinic3_cmdq_direct_resp(nic_dev->hwdev, HINIC3_MOD_L2NIC,
+				      HINIC3_UCODE_CMD_CLEAN_QUEUE_CONTEXT,
+				      cmd_buf, &out_param, 0);
+	if ((err) || (out_param)) {
+		PMD_DRV_LOG(ERR,
+			    "Clean queue offload ctxts failed, "
+			    "err: %d, out_param: %" PRIu64,
+			    err, out_param);
+		err = -EFAULT;
+	}
+
+	hinic3_free_cmd_buf(cmd_buf);
+	return err;
+}
+
+static int
+clean_qp_offload_ctxt(struct hinic3_nic_dev *nic_dev)
+{
+	/* Clean LRO/TSO context space. */
+	return (clean_queue_offload_ctxt(nic_dev, HINIC3_QP_CTXT_TYPE_SQ) ||
+		clean_queue_offload_ctxt(nic_dev, HINIC3_QP_CTXT_TYPE_RQ));
+}
+
+void
+hinic3_get_func_rx_buf_size(void *dev)
+{
+	struct hinic3_nic_dev *nic_dev = (struct hinic3_nic_dev *)dev;
+	struct hinic3_rxq *rxq = NULL;
+	u16 q_id;
+	u16 buf_size = 0;
+
+	for (q_id = 0; q_id < nic_dev->num_rqs; q_id++) {
+		rxq = nic_dev->rxqs[q_id];
+
+		if (rxq == NULL)
+			continue;
+
+		if (q_id == 0)
+			buf_size = rxq->buf_len;
+
+		buf_size = buf_size > rxq->buf_len ? rxq->buf_len : buf_size;
+	}
+
+	nic_dev->rx_buff_len = buf_size;
+}
+
+int
+hinic3_init_qp_ctxts(void *dev)
+{
+	struct hinic3_nic_dev *nic_dev = NULL;
+	struct hinic3_hwdev *hwdev = NULL;
+	struct hinic3_sq_attr sq_attr;
+	u32 rq_depth = 0;
+	u32 sq_depth = 0;
+	u16 q_id;
+	int err;
+
+	if (!dev)
+		return -EINVAL;
+
+	nic_dev = (struct hinic3_nic_dev *)dev;
+	hwdev = nic_dev->hwdev;
+
+	err = init_sq_ctxts(nic_dev);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Init SQ ctxts failed");
+		return err;
+	}
+
+	err = init_rq_ctxts(nic_dev);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Init RQ ctxts failed");
+		return err;
+	}
+
+	err = clean_qp_offload_ctxt(nic_dev);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Clean qp offload ctxts failed");
+		return err;
+	}
+
+	if (nic_dev->num_rqs != 0)
+		rq_depth = ((u32)nic_dev->rxqs[0]->q_depth)
+			   << nic_dev->rxqs[0]->wqe_type;
+
+	if (nic_dev->num_sqs != 0)
+		sq_depth = nic_dev->txqs[0]->q_depth;
+
+	err = hinic3_set_root_ctxt(hwdev, rq_depth, sq_depth,
+				   nic_dev->rx_buff_len);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Set root context failed");
+		return err;
+	}
+
+	/* Configure CI tables for each SQ. */
+	for (q_id = 0; q_id < nic_dev->num_sqs; q_id++) {
+		sq_attr.ci_dma_base = nic_dev->txqs[q_id]->ci_dma_base >> 0x2;
+		sq_attr.pending_limit = HINIC3_DEAULT_TX_CI_PENDING_LIMIT;
+		sq_attr.coalescing_time = HINIC3_DEAULT_TX_CI_COALESCING_TIME;
+		sq_attr.intr_en = 0;
+		sq_attr.intr_idx = 0; /**< Tx doesn't need interrupt. */
+		sq_attr.l2nic_sqn = q_id;
+		sq_attr.dma_attr_off = 0;
+		err = hinic3_set_ci_table(hwdev, &sq_attr);
+		if (err) {
+			PMD_DRV_LOG(ERR, "Set ci table failed");
+			goto set_cons_idx_table_err;
+		}
+	}
+
+	return 0;
+
+set_cons_idx_table_err:
+	hinic3_clean_root_ctxt(hwdev);
+	return err;
+}
+
+void
+hinic3_free_qp_ctxts(void *hwdev)
+{
+	if (!hwdev)
+		return;
+
+	hinic3_clean_root_ctxt(hwdev);
+}
+
+void
+hinic3_update_driver_feature(void *dev, u64 s_feature)
+{
+	struct hinic3_nic_dev *nic_dev = NULL;
+
+	if (!dev)
+		return;
+
+	nic_dev = (struct hinic3_nic_dev *)dev;
+	nic_dev->feature_cap = s_feature;
+
+	PMD_DRV_LOG(INFO, "Update nic feature to 0x%" PRIx64,
+		    nic_dev->feature_cap);
+}
+
+u64
+hinic3_get_driver_feature(void *dev)
+{
+	struct hinic3_nic_dev *nic_dev = NULL;
+
+	nic_dev = (struct hinic3_nic_dev *)dev;
+
+	return nic_dev->feature_cap;
+}
diff --git a/drivers/net/hinic3/hinic3_nic_io.h b/drivers/net/hinic3/hinic3_nic_io.h
new file mode 100644
index 0000000000..39ffb3c8fd
--- /dev/null
+++ b/drivers/net/hinic3/hinic3_nic_io.h
@@ -0,0 +1,169 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Huawei Technologies Co., Ltd
+ */
+
+#ifndef _HINIC3_NIC_IO_H_
+#define _HINIC3_NIC_IO_H_
+
+#define HINIC3_SQ_WQEBB_SHIFT 4
+#define HINIC3_RQ_WQEBB_SHIFT 3
+
+#define HINIC3_SQ_WQEBB_SIZE  BIT(HINIC3_SQ_WQEBB_SHIFT)
+#define HINIC3_CQE_SIZE_SHIFT 4
+
+/* CI address should be RTE_CACHE_LINE_SIZE (64B) aligned for performance. */
+#define HINIC3_CI_Q_ADDR_SIZE 64
+
+#define CI_TABLE_SIZE(num_qps, pg_sz) \
+	(RTE_ALIGN((num_qps) * HINIC3_CI_Q_ADDR_SIZE, pg_sz))
+
+#define HINIC3_CI_VADDR(base_addr, q_id) \
+	((u8 *)(base_addr) + (q_id) * HINIC3_CI_Q_ADDR_SIZE)
+
+#define HINIC3_CI_PADDR(base_paddr, q_id) \
+	((base_paddr) + (q_id) * HINIC3_CI_Q_ADDR_SIZE)
+
+enum hinic3_rq_wqe_type {
+	HINIC3_COMPACT_RQ_WQE,
+	HINIC3_NORMAL_RQ_WQE,
+	HINIC3_EXTEND_RQ_WQE
+};
+
+enum hinic3_queue_type {
+	HINIC3_SQ,
+	HINIC3_RQ,
+	HINIC3_MAX_QUEUE_TYPE,
+};
+
+/* Doorbell info. */
+struct hinic3_db {
+	u32 db_info;
+	u32 pi_hi;
+};
+
+#define DB_INFO_QID_SHIFT	 0
+#define DB_INFO_NON_FILTER_SHIFT 22
+#define DB_INFO_CFLAG_SHIFT	 23
+#define DB_INFO_COS_SHIFT	 24
+#define DB_INFO_TYPE_SHIFT	 27
+
+#define DB_INFO_QID_MASK	0x1FFFU
+#define DB_INFO_NON_FILTER_MASK 0x1U
+#define DB_INFO_CFLAG_MASK	0x1U
+#define DB_INFO_COS_MASK	0x7U
+#define DB_INFO_TYPE_MASK	0x1FU
+#define DB_INFO_SET(val, member) \
+	(((u32)(val) & DB_INFO_##member##_MASK) << DB_INFO_##member##_SHIFT)
+
+#define DB_PI_LOW_MASK	      0xFFU
+#define DB_PI_HIGH_MASK	      0xFFU
+#define DB_PI_LOW(pi)	      ((pi) & DB_PI_LOW_MASK)
+#define DB_PI_HI_SHIFT	      8
+#define DB_PI_HIGH(pi)	      (((pi) >> DB_PI_HI_SHIFT) & DB_PI_HIGH_MASK)
+#define DB_INFO_UPPER_32(val) (((u64)(val)) << 32)
+
+#define DB_ADDR(db_addr, pi) ((u64 *)(db_addr) + DB_PI_LOW(pi))
+#define SRC_TYPE	     1
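The `DB_INFO_SET()` shift/mask pattern can be exercised standalone; the sketch below copies the QID/COS/TYPE field layout from the macros above into renamed `SK_*` constants (a minimal illustration, not driver code):

```c
#include <assert.h>
#include <stdint.h>

/* Field layout copied from the DB_INFO_* macros above. */
#define SK_QID_SHIFT  0
#define SK_QID_MASK   0x1FFFU
#define SK_COS_SHIFT  24
#define SK_COS_MASK   0x7U
#define SK_TYPE_SHIFT 27
#define SK_TYPE_MASK  0x1FU

#define SK_SET(val, name) \
	(((uint32_t)(val) & SK_##name##_MASK) << SK_##name##_SHIFT)

/* Pack queue id, cos and source type into one 32-bit doorbell word. */
static uint32_t pack_db_info(uint16_t qid, uint8_t cos, uint8_t type)
{
	return SK_SET(qid, QID) | SK_SET(cos, COS) | SK_SET(type, TYPE);
}
```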
+
+/* Cflag data path. */
+#define SQ_CFLAG_DP 0
+#define RQ_CFLAG_DP 1
+
+#define MASKED_QUEUE_IDX(queue, idx) ((idx) & (queue)->q_mask)
+
+#define NIC_WQE_ADDR(queue, idx)                           \
+	({                                                 \
+		typeof(queue) __queue = (queue);           \
+		(void *)((u64)(__queue->queue_buf_vaddr) + \
+			 ((idx) << __queue->wqebb_shift)); \
+	})
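`MASKED_QUEUE_IDX()` relies on the queue depth being a power of two, so `idx & q_mask` equals `idx % depth` even after the 16-bit producer index wraps. A minimal sketch of the invariant:

```c
#include <assert.h>
#include <stdint.h>

/* q_mask = depth - 1 works only for power-of-two depths. */
static uint16_t masked_queue_idx(uint16_t q_depth, uint16_t idx)
{
	uint16_t q_mask = q_depth - 1;

	return idx & q_mask;
}
```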
+
+/**
+ * Write send queue doorbell.
+ *
+ * @param[in] db_addr
+ * Doorbell address.
+ * @param[in] q_id
+ * Send queue id.
+ * @param[in] cos
+ * Send queue cos.
+ * @param[in] cflag
+ * Cflag data path.
+ * @param[in] pi
+ * Send queue pi.
+ */
+static inline void
+hinic3_write_db(void *db_addr, u16 q_id, int cos, u8 cflag, u16 pi)
+{
+	u64 db;
+
+	/* Hardware will do the endianness conversion. */
+	db = DB_PI_HIGH(pi);
+	db = DB_INFO_UPPER_32(db) | DB_INFO_SET(SRC_TYPE, TYPE) |
+	     DB_INFO_SET(cflag, CFLAG) | DB_INFO_SET(cos, COS) |
+	     DB_INFO_SET(q_id, QID);
+
+	rte_wmb(); /**< Write all before the doorbell. */
+
+	rte_write64(*((u64 *)&db), DB_ADDR(db_addr, pi));
+}
+
+/**
+ * Get minimum RX buffer size for device.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ */
+void hinic3_get_func_rx_buf_size(void *dev);
+
+/**
+ * Initialize QP contexts and set SQ CI attributes for all SQs.
+ *
+ * Function will perform following steps:
+ * - Initialize SQ contexts.
+ * - Initialize RQ contexts.
+ * - Clean QP offload contexts of SQ and RQ.
+ * - Set root context for device.
+ * - Configure CI tables for each SQ.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int hinic3_init_qp_ctxts(void *dev);
+
+/**
+ * Free queue pair context.
+ *
+ * @param[in] hwdev
+ * Pointer to hardware device structure.
+ */
+void hinic3_free_qp_ctxts(void *hwdev);
+
+/**
+ * Update driver feature capabilities.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] s_feature
+ * Feature capabilities supported by the driver.
+ */
+void hinic3_update_driver_feature(void *dev, u64 s_feature);
+
+/**
+ * Get driver feature capabilities.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ *
+ * @return
+ * Feature capabilities of driver.
+ */
+u64 hinic3_get_driver_feature(void *dev);
+
+#endif /* _HINIC3_NIC_IO_H_ */
diff --git a/drivers/net/hinic3/hinic3_rx.c b/drivers/net/hinic3/hinic3_rx.c
new file mode 100644
index 0000000000..a1dc960236
--- /dev/null
+++ b/drivers/net/hinic3/hinic3_rx.c
@@ -0,0 +1,811 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Huawei Technologies Co., Ltd
+ */
+#include <rte_ether.h>
+#include <rte_mbuf.h>
+
+#include "base/hinic3_compat.h"
+#include "base/hinic3_pmd_hwif.h"
+#include "base/hinic3_pmd_hwdev.h"
+#include "base/hinic3_pmd_wq.h"
+#include "base/hinic3_pmd_nic_cfg.h"
+#include "hinic3_pmd_nic_io.h"
+#include "hinic3_pmd_ethdev.h"
+#include "hinic3_pmd_tx.h"
+#include "hinic3_pmd_rx.h"
+
+/**
+ * Get wqe from receive queue.
+ *
+ * @param[in] rxq
+ * Receive queue.
+ * @param[out] rq_wqe
+ * Receive queue wqe.
+ * @param[out] pi
+ * Current pi.
+ */
+static inline void
+hinic3_get_rq_wqe(struct hinic3_rxq *rxq, struct hinic3_rq_wqe **rq_wqe,
+		  u16 *pi)
+{
+	*pi = MASKED_QUEUE_IDX(rxq, rxq->prod_idx);
+
+	/* Get only one rxq wqe. */
+	rxq->prod_idx++;
+	rxq->delta--;
+
+	*rq_wqe = NIC_WQE_ADDR(rxq, *pi);
+}
+
+/**
+ * Put wqe into receive queue.
+ *
+ * @param[in] rxq
+ * Receive queue.
+ * @param[in] wqe_cnt
+ * Wqebb counters.
+ */
+static inline void
+hinic3_put_rq_wqe(struct hinic3_rxq *rxq, u16 wqe_cnt)
+{
+	rxq->delta += wqe_cnt;
+	rxq->prod_idx -= wqe_cnt;
+}
+
+/**
+ * Get receive queue local pi.
+ *
+ * @param[in] rxq
+ * Receive queue.
+ * @return
+ * Receive queue local pi.
+ */
+static inline u16
+hinic3_get_rq_local_pi(struct hinic3_rxq *rxq)
+{
+	return MASKED_QUEUE_IDX(rxq, rxq->prod_idx);
+}
+
+/**
+ * Update receive queue hardware pi.
+ *
+ * @param[in] rxq
+ * Receive queue
+ * @param[in] pi
+ * Receive queue pi to update
+ */
+static inline void
+hinic3_update_rq_hw_pi(struct hinic3_rxq *rxq, u16 pi)
+{
+	*rxq->pi_virt_addr =
+		(u16)cpu_to_be16((pi & rxq->q_mask) << rxq->wqe_type);
+}
+
+u16
+hinic3_rx_fill_wqe(struct hinic3_rxq *rxq)
+{
+	struct hinic3_rq_wqe *rq_wqe = NULL;
+	struct hinic3_nic_dev *nic_dev = rxq->nic_dev;
+	rte_iova_t cqe_dma;
+	u16 pi = 0;
+	u16 i;
+
+	cqe_dma = rxq->cqe_start_paddr;
+	for (i = 0; i < rxq->q_depth; i++) {
+		hinic3_get_rq_wqe(rxq, &rq_wqe, &pi);
+		if (!rq_wqe) {
+			PMD_DRV_LOG(ERR,
+				    "Get rq wqe failed, rxq id: %d, wqe id: %d",
+				    rxq->q_id, i);
+			break;
+		}
+
+		if (rxq->wqe_type == HINIC3_EXTEND_RQ_WQE) {
+			/* Unit of cqe length is 16B. */
+			hinic3_set_sge(&rq_wqe->extend_wqe.cqe_sect.sge,
+				       cqe_dma,
+				       HINIC3_CQE_LEN >> HINIC3_CQE_SIZE_SHIFT);
+			/* Use fixed len. */
+			rq_wqe->extend_wqe.buf_desc.sge.len =
+				nic_dev->rx_buff_len;
+		} else {
+			rq_wqe->normal_wqe.cqe_hi_addr = upper_32_bits(cqe_dma);
+			rq_wqe->normal_wqe.cqe_lo_addr = lower_32_bits(cqe_dma);
+		}
+
+		cqe_dma += sizeof(struct hinic3_rq_cqe);
+
+		hinic3_hw_be32_len(rq_wqe, rxq->wqebb_size);
+	}
+
+	hinic3_put_rq_wqe(rxq, i);
+
+	return i;
+}
+
+static struct rte_mbuf *
+hinic3_rx_alloc_mbuf(struct hinic3_rxq *rxq, rte_iova_t *dma_addr)
+{
+	struct rte_mbuf *mbuf = NULL;
+
+	if (unlikely(rte_pktmbuf_alloc_bulk(rxq->mb_pool, &mbuf, 1) != 0))
+		return NULL;
+
+	*dma_addr = rte_mbuf_data_iova_default(mbuf);
+#ifdef HINIC3_XSTAT_MBUF_USE
+	rxq->rxq_stats.rx_alloc_mbuf_bytes++;
+#endif
+	return mbuf;
+}
+
+#ifdef HINIC3_XSTAT_RXBUF_INFO
+static void
+hinic3_rxq_buffer_done_count(struct hinic3_rxq *rxq)
+{
+	u16 sw_ci, avail_pkts = 0, hit_done = 0, cqe_hole = 0;
+	u32 status;
+	volatile struct hinic3_rq_cqe *rx_cqe;
+
+	for (sw_ci = 0; sw_ci < rxq->q_depth; sw_ci++) {
+		rx_cqe = &rxq->rx_cqe[sw_ci];
+
+		/* Check current ci is done. */
+		status = rx_cqe->status;
+		if (!HINIC3_GET_RX_DONE(status)) {
+			if (hit_done) {
+				cqe_hole++;
+				hit_done = 0;
+			}
+			continue;
+		}
+
+		avail_pkts++;
+		hit_done = 1;
+	}
+
+	rxq->rxq_stats.rx_avail = avail_pkts;
+	rxq->rxq_stats.rx_hole = cqe_hole;
+}
+
+void
+hinic3_get_stats(struct hinic3_rxq *rxq)
+{
+	rxq->rxq_stats.rx_mbuf = rxq->q_depth - hinic3_get_rq_free_wqebb(rxq);
+
+	hinic3_rxq_buffer_done_count(rxq);
+}
+#endif
+
+u16
+hinic3_rx_fill_buffers(struct hinic3_rxq *rxq)
+{
+	struct hinic3_rq_wqe *rq_wqe = NULL;
+	struct hinic3_rx_info *rx_info = NULL;
+	struct rte_mbuf *mb = NULL;
+	rte_iova_t dma_addr;
+	u16 i, free_wqebbs;
+
+	free_wqebbs = rxq->delta - 1;
+	for (i = 0; i < free_wqebbs; i++) {
+		rx_info = &rxq->rx_info[rxq->next_to_update];
+
+		mb = hinic3_rx_alloc_mbuf(rxq, &dma_addr);
+		if (!mb) {
+			PMD_DRV_LOG(ERR, "Alloc mbuf failed");
+			break;
+		}
+
+		rx_info->mbuf = mb;
+
+		rq_wqe = NIC_WQE_ADDR(rxq, rxq->next_to_update);
+
+		/* Fill buffer address only. */
+		if (rxq->wqe_type == HINIC3_EXTEND_RQ_WQE) {
+			rq_wqe->extend_wqe.buf_desc.sge.hi_addr =
+				hinic3_hw_be32(upper_32_bits(dma_addr));
+			rq_wqe->extend_wqe.buf_desc.sge.lo_addr =
+				hinic3_hw_be32(lower_32_bits(dma_addr));
+		} else {
+			rq_wqe->normal_wqe.buf_hi_addr =
+				hinic3_hw_be32(upper_32_bits(dma_addr));
+			rq_wqe->normal_wqe.buf_lo_addr =
+				hinic3_hw_be32(lower_32_bits(dma_addr));
+		}
+
+		rxq->next_to_update = (rxq->next_to_update + 1) & rxq->q_mask;
+	}
+
+	if (likely(i > 0)) {
+#ifndef HINIC3_RQ_DB
+		hinic3_write_db(rxq->db_addr, rxq->q_id, 0, RQ_CFLAG_DP,
+				(u16)(rxq->next_to_update << rxq->wqe_type));
+		/* Rxq context init path is used here; needs optimization. */
+		rxq->prod_idx = rxq->next_to_update;
+#else
+		rte_wmb();
+		rxq->prod_idx = rxq->next_to_update;
+		hinic3_update_rq_hw_pi(rxq, rxq->next_to_update);
+#endif
+		rxq->delta -= i;
+	} else {
+		PMD_DRV_LOG(ERR, "Alloc rx buffers failed, rxq_id: %d",
+			    rxq->q_id);
+	}
+
+	return i;
+}
+
+void
+hinic3_free_rxq_mbufs(struct hinic3_rxq *rxq)
+{
+	struct hinic3_rx_info *rx_info = NULL;
+	int free_wqebbs = hinic3_get_rq_free_wqebb(rxq) + 1;
+	volatile struct hinic3_rq_cqe *rx_cqe = NULL;
+	u16 ci;
+
+	while (free_wqebbs++ < rxq->q_depth) {
+		ci = hinic3_get_rq_local_ci(rxq);
+
+		rx_cqe = &rxq->rx_cqe[ci];
+
+		/* Clear done bit. */
+		rx_cqe->status = 0;
+
+		rx_info = &rxq->rx_info[ci];
+		rte_pktmbuf_free(rx_info->mbuf);
+		rx_info->mbuf = NULL;
+
+		hinic3_update_rq_local_ci(rxq, 1);
+#ifdef HINIC3_XSTAT_MBUF_USE
+		rxq->rxq_stats.rx_free_mbuf_bytes++;
+#endif
+	}
+}
+
+void
+hinic3_free_all_rxq_mbufs(struct hinic3_nic_dev *nic_dev)
+{
+	u16 qid;
+
+	for (qid = 0; qid < nic_dev->num_rqs; qid++)
+		hinic3_free_rxq_mbufs(nic_dev->rxqs[qid]);
+}
+
+static u32
+hinic3_rx_alloc_mbuf_bulk(struct hinic3_rxq *rxq, struct rte_mbuf **mbufs,
+			  u32 exp_mbuf_cnt)
+{
+	u32 avail_cnt;
+	int err;
+
+	err = rte_pktmbuf_alloc_bulk(rxq->mb_pool, mbufs, exp_mbuf_cnt);
+	if (likely(err == 0)) {
+		avail_cnt = exp_mbuf_cnt;
+	} else {
+		avail_cnt = 0;
+		rxq->rxq_stats.rx_nombuf += exp_mbuf_cnt;
+	}
+#ifdef HINIC3_XSTAT_MBUF_USE
+	rxq->rxq_stats.rx_alloc_mbuf_bytes += avail_cnt;
+#endif
+	return avail_cnt;
+}
+
+static int
+hinic3_rearm_rxq_mbuf(struct hinic3_rxq *rxq)
+{
+	struct hinic3_rq_wqe *rq_wqe = NULL;
+	struct rte_mbuf **rearm_mbufs;
+	u32 i, free_wqebbs, rearm_wqebbs, exp_wqebbs;
+	rte_iova_t dma_addr;
+	u16 pi;
+	struct hinic3_nic_dev *nic_dev = rxq->nic_dev;
+
+	/* Check free wqebb count for rearm. */
+	free_wqebbs = hinic3_get_rq_free_wqebb(rxq);
+	if (unlikely(free_wqebbs < rxq->rx_free_thresh))
+		return -ENOMEM;
+
+	/* Get rearm mbuf array. */
+	pi = hinic3_get_rq_local_pi(rxq);
+	rearm_mbufs = (struct rte_mbuf **)(&rxq->rx_info[pi]);
+
+	/* Limit the batch so it does not wrap past the end of the ring. */
+	exp_wqebbs = rxq->q_depth - pi;
+	if (free_wqebbs < exp_wqebbs)
+		exp_wqebbs = free_wqebbs;
+
+	/* Alloc mbuf in bulk. */
+	rearm_wqebbs = hinic3_rx_alloc_mbuf_bulk(rxq, rearm_mbufs, exp_wqebbs);
+	if (unlikely(rearm_wqebbs == 0))
+		return -ENOMEM;
+
+	/* Rearm rxq mbuf. */
+	rq_wqe = NIC_WQE_ADDR(rxq, pi);
+	for (i = 0; i < rearm_wqebbs; i++) {
+		dma_addr = rte_mbuf_data_iova_default(rearm_mbufs[i]);
+
+		/* Fill buffer address only. */
+		if (rxq->wqe_type == HINIC3_EXTEND_RQ_WQE) {
+			rq_wqe->extend_wqe.buf_desc.sge.hi_addr =
+				hinic3_hw_be32(upper_32_bits(dma_addr));
+			rq_wqe->extend_wqe.buf_desc.sge.lo_addr =
+				hinic3_hw_be32(lower_32_bits(dma_addr));
+			rq_wqe->extend_wqe.buf_desc.sge.len =
+				nic_dev->rx_buff_len;
+		} else {
+			rq_wqe->normal_wqe.buf_hi_addr =
+				hinic3_hw_be32(upper_32_bits(dma_addr));
+			rq_wqe->normal_wqe.buf_lo_addr =
+				hinic3_hw_be32(lower_32_bits(dma_addr));
+		}
+
+		rq_wqe =
+			(struct hinic3_rq_wqe *)((u64)rq_wqe + rxq->wqebb_size);
+	}
+	rxq->prod_idx += rearm_wqebbs;
+	rxq->delta -= rearm_wqebbs;
+
+#ifndef HINIC3_RQ_DB
+	hinic3_write_db(rxq->db_addr, rxq->q_id, 0, RQ_CFLAG_DP,
+			((pi + rearm_wqebbs) & rxq->q_mask) << rxq->wqe_type);
+#else
+	/* Update rxq hw_pi. */
+	rte_wmb();
+	hinic3_update_rq_hw_pi(rxq, pi + rearm_wqebbs);
+#endif
+	return 0;
+}
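The wrap-around clamp in `hinic3_rearm_rxq_mbuf()` can be shown as a tiny standalone helper; this is an illustrative sketch, not driver code:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch of the wrap-around clamp in hinic3_rearm_rxq_mbuf(): a bulk
 * refill starting at producer index 'pi' must not cross the end of the
 * ring, so the batch is limited by both the free entries and the
 * distance to the ring end.
 */
static uint32_t rearm_batch_size(uint32_t q_depth, uint16_t pi,
				 uint32_t free_wqebbs)
{
	uint32_t exp_wqebbs = q_depth - pi; /* entries until ring end */

	return free_wqebbs < exp_wqebbs ? free_wqebbs : exp_wqebbs;
}
```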
+
+static int
+hinic3_init_rss_key(struct hinic3_nic_dev *nic_dev,
+		    struct rte_eth_rss_conf *rss_conf)
+{
+	u8 default_rss_key[HINIC3_RSS_KEY_SIZE] = {
+			 0x6d, 0x5a, 0x56, 0xda, 0x25, 0x5b, 0x0e, 0xc2,
+			 0x41, 0x67, 0x25, 0x3d, 0x43, 0xa3, 0x8f, 0xb0,
+			 0xd0, 0xca, 0x2b, 0xcb, 0xae, 0x7b, 0x30, 0xb4,
+			 0x77, 0xcb, 0x2d, 0xa3, 0x80, 0x30, 0xf2, 0x0c,
+			 0x6a, 0x42, 0xb7, 0x3b, 0xbe, 0xac, 0x01, 0xfa};
+	u8 hashkey[HINIC3_RSS_KEY_SIZE] = {0};
+	int err;
+
+	if (rss_conf->rss_key == NULL ||
+	    rss_conf->rss_key_len > HINIC3_RSS_KEY_SIZE)
+		memcpy(hashkey, default_rss_key, HINIC3_RSS_KEY_SIZE);
+	else
+		memcpy(hashkey, rss_conf->rss_key, rss_conf->rss_key_len);
+
+	err = hinic3_rss_set_hash_key(nic_dev->hwdev, hashkey,
+				      HINIC3_RSS_KEY_SIZE);
+	if (err)
+		return err;
+
+	memcpy(nic_dev->rss_key, hashkey, HINIC3_RSS_KEY_SIZE);
+	return 0;
+}
+
+void
+hinic3_add_rq_to_rx_queue_list(struct hinic3_nic_dev *nic_dev, u16 queue_id)
+{
+	u8 rss_queue_count = nic_dev->num_rss;
+
+	RTE_ASSERT(rss_queue_count <= (RTE_DIM(nic_dev->rx_queue_list) - 1));
+
+	nic_dev->rx_queue_list[rss_queue_count] = (u8)queue_id;
+	nic_dev->num_rss++;
+}
+
+void
+hinic3_init_rx_queue_list(struct hinic3_nic_dev *nic_dev)
+{
+	nic_dev->num_rss = 0;
+}
+
+static void
+hinic3_fill_indir_tbl(struct hinic3_nic_dev *nic_dev, u32 *indir_tbl)
+{
+	u8 rss_queue_count = nic_dev->num_rss;
+	int i = 0;
+	int j;
+
+	if (rss_queue_count == 0) {
+		/* Delete q_id from indir tbl. */
+		for (i = 0; i < HINIC3_RSS_INDIR_SIZE; i++)
+			/* Invalid value in indir tbl. */
+			indir_tbl[i] = 0xFFFF;
+	} else {
+		while (i < HINIC3_RSS_INDIR_SIZE)
+			for (j = 0; (j < rss_queue_count) &&
+				    (i < HINIC3_RSS_INDIR_SIZE); j++)
+				indir_tbl[i++] = nic_dev->rx_queue_list[j];
+	}
+}
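The round-robin fill above can be reproduced standalone; the sketch below mirrors `hinic3_fill_indir_tbl()` under the assumption that the indirection table holds 256 entries (the actual `HINIC3_RSS_INDIR_SIZE` value may differ):

```c
#include <assert.h>
#include <stdint.h>

#define SK_INDIR_SIZE 256 /* assumed value of HINIC3_RSS_INDIR_SIZE */

/*
 * Round-robin fill of an RSS indirection table from the active queue
 * list, mirroring hinic3_fill_indir_tbl(); 0xFFFF marks an invalid
 * entry when no RSS queue is active.
 */
static void fill_indir_tbl(uint32_t *tbl, const uint8_t *queues,
			   uint8_t queue_count)
{
	int i = 0;
	int j;

	if (queue_count == 0) {
		for (i = 0; i < SK_INDIR_SIZE; i++)
			tbl[i] = 0xFFFF;
		return;
	}

	while (i < SK_INDIR_SIZE)
		for (j = 0; j < queue_count && i < SK_INDIR_SIZE; j++)
			tbl[i++] = queues[j];
}
```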
+
+int
+hinic3_refill_indir_rqid(struct hinic3_rxq *rxq)
+{
+	struct hinic3_nic_dev *nic_dev = rxq->nic_dev;
+	u32 *indir_tbl;
+	int err;
+
+	indir_tbl = rte_zmalloc(NULL, HINIC3_RSS_INDIR_SIZE * sizeof(u32), 0);
+	if (!indir_tbl) {
+		PMD_DRV_LOG(ERR,
+			    "Alloc indir_tbl mem failed, "
+			    "eth_dev:%s, queue_idx:%d",
+			    nic_dev->dev_name, rxq->q_id);
+		return -ENOMEM;
+	}
+
+	/* Build indir tbl according to the number of rss queues. */
+	hinic3_fill_indir_tbl(nic_dev, indir_tbl);
+
+	err = hinic3_rss_set_indir_tbl(nic_dev->hwdev, indir_tbl,
+				       HINIC3_RSS_INDIR_SIZE);
+	if (err) {
+		PMD_DRV_LOG(ERR,
+			"Set indirect table failed, eth_dev:%s, queue_idx:%d",
+			nic_dev->dev_name, rxq->q_id);
+		goto out;
+	}
+
+out:
+	rte_free(indir_tbl);
+	return err;
+}
+
+static int
+hinic3_init_rss_type(struct hinic3_nic_dev *nic_dev,
+		     struct rte_eth_rss_conf *rss_conf)
+{
+	struct hinic3_rss_type rss_type = {0};
+	u64 rss_hf = rss_conf->rss_hf;
+	int err;
+
+	rss_type.ipv4 = (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4)) ? 1 : 0;
+	rss_type.tcp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0;
+	rss_type.ipv6 = (rss_hf & (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6)) ? 1 : 0;
+	rss_type.ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_EX) ? 1 : 0;
+	rss_type.tcp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0;
+	rss_type.tcp_ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) ? 1 : 0;
+	rss_type.udp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0;
+	rss_type.udp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0;
+
+	err = hinic3_set_rss_type(nic_dev->hwdev, rss_type);
+	return err;
+}
+
+int
+hinic3_update_rss_config(struct rte_eth_dev *dev,
+			 struct rte_eth_rss_conf *rss_conf)
+{
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+	u8 prio_tc[HINIC3_DCB_UP_MAX] = {0};
+	u8 num_tc = 0;
+	int err;
+
+	if (rss_conf->rss_hf == 0) {
+		rss_conf->rss_hf = HINIC3_RSS_OFFLOAD_ALL;
+	} else if ((rss_conf->rss_hf & HINIC3_RSS_OFFLOAD_ALL) == 0) {
+		PMD_DRV_LOG(ERR, "Doesn't support rss hash type: %" PRIu64,
+			    rss_conf->rss_hf);
+		return -EINVAL;
+	}
+
+	err = hinic3_rss_template_alloc(nic_dev->hwdev);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Alloc rss template failed, err: %d", err);
+		return err;
+	}
+
+	err = hinic3_init_rss_key(nic_dev, rss_conf);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Init rss hash key failed, err: %d", err);
+		goto init_rss_fail;
+	}
+
+	err = hinic3_init_rss_type(nic_dev, rss_conf);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Init rss hash type failed, err: %d", err);
+		goto init_rss_fail;
+	}
+
+	err = hinic3_rss_set_hash_engine(nic_dev->hwdev,
+					 HINIC3_RSS_HASH_ENGINE_TYPE_TOEP);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Init rss hash function failed, err: %d", err);
+		goto init_rss_fail;
+	}
+
+	err = hinic3_rss_cfg(nic_dev->hwdev, HINIC3_RSS_ENABLE, num_tc,
+			     prio_tc);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Enable rss failed, err: %d", err);
+		goto init_rss_fail;
+	}
+
+	nic_dev->rss_state = HINIC3_RSS_ENABLE;
+	return 0;
+
+init_rss_fail:
+	if (hinic3_rss_template_free(nic_dev->hwdev))
+		PMD_DRV_LOG(WARNING, "Free rss template failed");
+
+	return err;
+}
+
+/**
+ * Search the given queue array for the position of the given id.
+ * Return the queue position, or queues_count if not found.
+ */
+static u8
+hinic3_find_queue_pos_by_rq_id(u8 *queues, u8 queues_count, u8 queue_id)
+{
+	u8 pos;
+
+	for (pos = 0; pos < queues_count; pos++) {
+		if (queue_id == queues[pos])
+			break;
+	}
+
+	return pos;
+}
+
+void
+hinic3_remove_rq_from_rx_queue_list(struct hinic3_nic_dev *nic_dev,
+				    u16 queue_id)
+{
+	u8 queue_pos;
+	u8 rss_queue_count = nic_dev->num_rss;
+
+	queue_pos = hinic3_find_queue_pos_by_rq_id(nic_dev->rx_queue_list,
+						   rss_queue_count,
+						   (u8)queue_id);
+	/*
+	 * If the queue was not at the end of the list,
+	 * shift the remaining queues up the array.
+	 */
+	if (queue_pos < rss_queue_count) {
+		rss_queue_count--;
+		memmove(nic_dev->rx_queue_list + queue_pos,
+			nic_dev->rx_queue_list + queue_pos + 1,
+			(rss_queue_count - queue_pos) *
+				sizeof(nic_dev->rx_queue_list[0]));
+	}
+
+	RTE_ASSERT(rss_queue_count < RTE_DIM(nic_dev->rx_queue_list));
+	nic_dev->num_rss = rss_queue_count;
+}
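The remove-by-id logic above (linear search followed by a `memmove` of the tail) can be shown standalone; this is an illustrative sketch, not driver code:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/*
 * Sketch of the remove-by-id logic above: find the queue in the list,
 * then memmove the tail down one slot. Returns the new queue count;
 * the count is unchanged when the id is not found.
 */
static uint8_t remove_queue(uint8_t *list, uint8_t count, uint8_t queue_id)
{
	uint8_t pos;

	for (pos = 0; pos < count; pos++)
		if (list[pos] == queue_id)
			break;

	if (pos < count) {
		count--;
		memmove(list + pos, list + pos + 1,
			(size_t)(count - pos) * sizeof(list[0]));
	}
	return count;
}
```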
+
+static void
+hinic3_rx_queue_release_mbufs(struct hinic3_rxq *rxq)
+{
+	u16 sw_ci, ci_mask, free_wqebbs;
+	u16 rx_buf_len;
+	u32 status, vlan_len, pkt_len;
+	u32 pkt_left_len = 0;
+	u32 nr_released = 0;
+	struct hinic3_rx_info *rx_info;
+	volatile struct hinic3_rq_cqe *rx_cqe;
+
+	sw_ci = hinic3_get_rq_local_ci(rxq);
+	rx_info = &rxq->rx_info[sw_ci];
+	rx_cqe = &rxq->rx_cqe[sw_ci];
+	free_wqebbs = hinic3_get_rq_free_wqebb(rxq) + 1;
+	status = rx_cqe->status;
+	ci_mask = rxq->q_mask;
+
+	while (free_wqebbs < rxq->q_depth) {
+		rx_buf_len = rxq->buf_len;
+		if (pkt_left_len != 0) {
+			/* Flush continuation RQEs of a jumbo packet. */
+			pkt_left_len = (pkt_left_len <= rx_buf_len)
+					       ? 0
+					       : (pkt_left_len - rx_buf_len);
+		} else if (HINIC3_GET_RX_FLUSH(status)) {
+			/* Flush one released rqe. */
+			pkt_left_len = 0;
+		} else if (HINIC3_GET_RX_DONE(status)) {
+			/* Flush single packet or first jumbo rqe. */
+			vlan_len = hinic3_hw_cpu32(rx_cqe->vlan_len);
+			pkt_len = HINIC3_GET_RX_PKT_LEN(vlan_len);
+			pkt_left_len = (pkt_len <= rx_buf_len)
+					       ? 0
+					       : (pkt_len - rx_buf_len);
+		} else {
+			break;
+		}
+		rte_pktmbuf_free(rx_info->mbuf);
+
+		rx_info->mbuf = NULL;
+		rx_cqe->status = 0;
+		nr_released++;
+		free_wqebbs++;
+
+		/* Update ci to next cqe. */
+		sw_ci++;
+		sw_ci &= ci_mask;
+		rx_info = &rxq->rx_info[sw_ci];
+		rx_cqe = &rxq->rx_cqe[sw_ci];
+		status = rx_cqe->status;
+	}
+
+	hinic3_update_rq_local_ci(rxq, (u16)nr_released);
+}
+
+int
+hinic3_poll_rq_empty(struct hinic3_rxq *rxq)
+{
+	unsigned long timeout;
+	int free_wqebb;
+	int err = -EFAULT;
+
+	timeout = msecs_to_jiffies(HINIC3_FLUSH_QUEUE_TIMEOUT) + jiffies;
+	do {
+		free_wqebb = hinic3_get_rq_free_wqebb(rxq) + 1;
+		if (free_wqebb == rxq->q_depth) {
+			err = 0;
+			break;
+		}
+		hinic3_rx_queue_release_mbufs(rxq);
+		rte_delay_us(1);
+	} while (time_before(jiffies, timeout));
+
+	return err;
+}
+
+void
+hinic3_dump_cqe_status(struct hinic3_rxq *rxq, u32 *cqe_done_cnt,
+		       u32 *cqe_hole_cnt, u32 *head_ci, u32 *head_done)
+{
+	u16 sw_ci;
+	u16 avail_pkts = 0;
+	u16 hit_done = 0;
+	u16 cqe_hole = 0;
+	u32 status;
+	volatile struct hinic3_rq_cqe *rx_cqe;
+
+	sw_ci = hinic3_get_rq_local_ci(rxq);
+	rx_cqe = &rxq->rx_cqe[sw_ci];
+	status = rx_cqe->status;
+	*head_done = HINIC3_GET_RX_DONE(status);
+	*head_ci = sw_ci;
+
+	for (sw_ci = 0; sw_ci < rxq->q_depth; sw_ci++) {
+		rx_cqe = &rxq->rx_cqe[sw_ci];
+
+		/* Check current ci is done. */
+		status = rx_cqe->status;
+		if (!HINIC3_GET_RX_DONE(status) &&
+		    !HINIC3_GET_RX_FLUSH(status)) {
+			if (hit_done) {
+				cqe_hole++;
+				hit_done = 0;
+			}
+
+			continue;
+		}
+
+		avail_pkts++;
+		hit_done = 1;
+	}
+
+	*cqe_done_cnt = avail_pkts;
+	*cqe_hole_cnt = cqe_hole;
+}
+
+int
+hinic3_stop_rq(struct rte_eth_dev *eth_dev, struct hinic3_rxq *rxq)
+{
+	struct hinic3_nic_dev *nic_dev = rxq->nic_dev;
+	u32 cqe_done_cnt = 0;
+	u32 cqe_hole_cnt = 0;
+	u32 head_ci, head_done;
+	int err;
+
+	/* Disable rxq intr. */
+	hinic3_dev_rx_queue_intr_disable(eth_dev, rxq->q_id);
+
+	/* Lock dev queue switch. */
+	rte_spinlock_lock(&nic_dev->queue_list_lock);
+
+	if (nic_dev->num_rss == 1) {
+		err = hinic3_set_vport_enable(nic_dev->hwdev, false);
+		if (err) {
+			PMD_DRV_LOG(ERR, "%s Disable vport failed, rc:%d",
+				    nic_dev->dev_name, err);
+		}
+	}
+	hinic3_remove_rq_from_rx_queue_list(nic_dev, rxq->q_id);
+
+	/*
+	 * If RSS is enabled, remove q_id from the rss indirection table.
+	 * If RSS is disabled, there is no mbuf in the rq and packets are
+	 * dropped.
+	 */
+	if (nic_dev->rss_state == HINIC3_RSS_ENABLE) {
+		err = hinic3_refill_indir_rqid(rxq);
+		if (err) {
+			PMD_DRV_LOG(ERR,
+				    "Clear rq in indirect table failed, "
+				    "eth_dev:%s, queue_idx:%d",
+				    nic_dev->dev_name, rxq->q_id);
+			hinic3_add_rq_to_rx_queue_list(nic_dev, rxq->q_id);
+			goto set_indir_failed;
+		}
+	}
+
+	/* Unlock dev queue list switch. */
+	rte_spinlock_unlock(&nic_dev->queue_list_lock);
+
+	/* Send flush rxq cmd to device. */
+	err = hinic3_set_rq_flush(nic_dev->hwdev, rxq->q_id);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Flush rq failed, eth_dev:%s, queue_idx:%d",
+			    nic_dev->dev_name, rxq->q_id);
+		goto rq_flush_failed;
+	}
+
+	err = hinic3_poll_rq_empty(rxq);
+	if (err) {
+		hinic3_dump_cqe_status(rxq, &cqe_done_cnt, &cqe_hole_cnt,
+				       &head_ci, &head_done);
+		PMD_DRV_LOG(ERR,
+			    "Poll rq empty timeout, eth_dev:%s, queue_idx:%d, "
+			    "mbuf_left:%d, "
+			    "cqe_done:%d, cqe_hole:%d, cqe[%d].done=%d",
+			    nic_dev->dev_name, rxq->q_id,
+			    rxq->q_depth - hinic3_get_rq_free_wqebb(rxq),
+			    cqe_done_cnt, cqe_hole_cnt, head_ci, head_done);
+		goto poll_rq_failed;
+	}
+
+	return 0;
+
+poll_rq_failed:
+rq_flush_failed:
+	rte_spinlock_lock(&nic_dev->queue_list_lock);
+set_indir_failed:
+	hinic3_add_rq_to_rx_queue_list(nic_dev, rxq->q_id);
+	if (nic_dev->rss_state == HINIC3_RSS_ENABLE)
+		(void)hinic3_refill_indir_rqid(rxq);
+	rte_spinlock_unlock(&nic_dev->queue_list_lock);
+	hinic3_dev_rx_queue_intr_enable(eth_dev, rxq->q_id);
+	return err;
+}
+
+int
+hinic3_start_rq(struct rte_eth_dev *eth_dev, struct hinic3_rxq *rxq)
+{
+	struct hinic3_nic_dev *nic_dev = rxq->nic_dev;
+	int err = 0;
+
+	/* Lock dev queue switch.  */
+	rte_spinlock_lock(&nic_dev->queue_list_lock);
+	hinic3_add_rq_to_rx_queue_list(nic_dev, rxq->q_id);
+
+	if (nic_dev->rss_state == HINIC3_RSS_ENABLE) {
+		err = hinic3_refill_indir_rqid(rxq);
+		if (err) {
+			PMD_DRV_LOG(ERR,
+				    "Refill rq to indirect table failed, "
+				    "eth_dev:%s, queue_idx:%d, err:%d",
+				    nic_dev->dev_name, rxq->q_id, err);
+			hinic3_remove_rq_from_rx_queue_list(nic_dev, rxq->q_id);
+		}
+	}
+	hinic3_rearm_rxq_mbuf(rxq);
+	if (rxq->nic_dev->num_rss == 1) {
+		err = hinic3_set_vport_enable(nic_dev->hwdev, true);
+		if (err)
+			PMD_DRV_LOG(ERR, "%s enable vport failed, err:%d",
+				    nic_dev->dev_name, err);
+	}
+
+	/* Unlock dev queue list switch. */
+	rte_spinlock_unlock(&nic_dev->queue_list_lock);
+
+	hinic3_dev_rx_queue_intr_enable(eth_dev, rxq->q_id);
+
+	return err;
+}
diff --git a/drivers/net/hinic3/hinic3_rx.h b/drivers/net/hinic3/hinic3_rx.h
new file mode 100644
index 0000000000..56386b2511
--- /dev/null
+++ b/drivers/net/hinic3/hinic3_rx.h
@@ -0,0 +1,356 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Huawei Technologies Co., Ltd
+ */
+
+#ifndef _HINIC3_RX_H_
+#define _HINIC3_RX_H_
+
+#include "hinic3_wq.h"
+#include "hinic3_nic_io.h"
+
+#define RQ_CQE_OFFOLAD_TYPE_PKT_TYPE_SHIFT    0
+#define RQ_CQE_OFFOLAD_TYPE_PKT_UMBCAST_SHIFT 19
+#define RQ_CQE_OFFOLAD_TYPE_VLAN_EN_SHIFT     21
+#define RQ_CQE_OFFOLAD_TYPE_RSS_TYPE_SHIFT    24
+
+#define RQ_CQE_OFFOLAD_TYPE_PKT_TYPE_MASK    0xFFFU
+#define RQ_CQE_OFFOLAD_TYPE_PKT_UMBCAST_MASK 0x3U
+#define RQ_CQE_OFFOLAD_TYPE_VLAN_EN_MASK     0x1U
+#define RQ_CQE_OFFOLAD_TYPE_RSS_TYPE_MASK    0xFFU
+
+#define DPI_EXT_ACTION_FILED (1ULL << 32)
+
+#define RQ_CQE_OFFOLAD_TYPE_GET(val, member)               \
+	(((val) >> RQ_CQE_OFFOLAD_TYPE_##member##_SHIFT) & \
+	 RQ_CQE_OFFOLAD_TYPE_##member##_MASK)
+
+#define HINIC3_GET_RX_PKT_TYPE(offload_type) \
+	RQ_CQE_OFFOLAD_TYPE_GET(offload_type, PKT_TYPE)
+
+#define HINIC3_GET_RX_PKT_UMBCAST(offload_type) \
+	RQ_CQE_OFFOLAD_TYPE_GET(offload_type, PKT_UMBCAST)
+
+#define HINIC3_GET_RX_VLAN_OFFLOAD_EN(offload_type) \
+	RQ_CQE_OFFOLAD_TYPE_GET(offload_type, VLAN_EN)
+
+#define HINIC3_GET_RSS_TYPES(offload_type) \
+	RQ_CQE_OFFOLAD_TYPE_GET(offload_type, RSS_TYPE)
+
+#define RQ_CQE_SGE_VLAN_SHIFT 0
+#define RQ_CQE_SGE_LEN_SHIFT  16
+
+#define RQ_CQE_SGE_VLAN_MASK 0xFFFFU
+#define RQ_CQE_SGE_LEN_MASK  0xFFFFU
+
+#define RQ_CQE_SGE_GET(val, member) \
+	(((val) >> RQ_CQE_SGE_##member##_SHIFT) & RQ_CQE_SGE_##member##_MASK)
+
+#define HINIC3_GET_RX_VLAN_TAG(vlan_len) RQ_CQE_SGE_GET(vlan_len, VLAN)
+
+#define HINIC3_GET_RX_PKT_LEN(vlan_len) RQ_CQE_SGE_GET(vlan_len, LEN)
+
+#define RQ_CQE_STATUS_CSUM_ERR_SHIFT  0
+#define RQ_CQE_STATUS_NUM_LRO_SHIFT   16
+#define RQ_CQE_STATUS_LRO_PUSH_SHIFT  25
+#define RQ_CQE_STATUS_LRO_ENTER_SHIFT 26
+#define RQ_CQE_STATUS_LRO_INTR_SHIFT  27
+
+#define RQ_CQE_STATUS_BP_EN_SHIFT     30
+#define RQ_CQE_STATUS_RXDONE_SHIFT    31
+#define RQ_CQE_STATUS_DECRY_PKT_SHIFT 29
+#define RQ_CQE_STATUS_FLUSH_SHIFT     28
+
+#define RQ_CQE_STATUS_CSUM_ERR_MASK  0xFFFFU
+#define RQ_CQE_STATUS_NUM_LRO_MASK   0xFFU
+#define RQ_CQE_STATUS_LRO_PUSH_MASK  0X1U
+#define RQ_CQE_STATUS_LRO_ENTER_MASK 0X1U
+#define RQ_CQE_STATUS_LRO_INTR_MASK  0X1U
+#define RQ_CQE_STATUS_BP_EN_MASK     0X1U
+#define RQ_CQE_STATUS_RXDONE_MASK    0x1U
+#define RQ_CQE_STATUS_FLUSH_MASK     0x1U
+#define RQ_CQE_STATUS_DECRY_PKT_MASK 0x1U
+
+#define RQ_CQE_STATUS_GET(val, member)               \
+	(((val) >> RQ_CQE_STATUS_##member##_SHIFT) & \
+	 RQ_CQE_STATUS_##member##_MASK)
+
+#define HINIC3_GET_RX_CSUM_ERR(status) RQ_CQE_STATUS_GET(status, CSUM_ERR)
+
+#define HINIC3_GET_RX_DONE(status) RQ_CQE_STATUS_GET(status, RXDONE)
+
+#define HINIC3_GET_RX_FLUSH(status) RQ_CQE_STATUS_GET(status, FLUSH)
+
+#define HINIC3_GET_RX_BP_EN(status) RQ_CQE_STATUS_GET(status, BP_EN)
+
+#define HINIC3_GET_RX_NUM_LRO(status) RQ_CQE_STATUS_GET(status, NUM_LRO)
+
+#define HINIC3_RX_IS_DECRY_PKT(status) RQ_CQE_STATUS_GET(status, DECRY_PKT)
+
+#define RQ_CQE_SUPER_CQE_EN_SHIFT  0
+#define RQ_CQE_PKT_NUM_SHIFT	   1
+#define RQ_CQE_PKT_LAST_LEN_SHIFT  6
+#define RQ_CQE_PKT_FIRST_LEN_SHIFT 19
+
+#define RQ_CQE_SUPER_CQE_EN_MASK  0x1
+#define RQ_CQE_PKT_NUM_MASK	  0x1FU
+#define RQ_CQE_PKT_FIRST_LEN_MASK 0x1FFFU
+#define RQ_CQE_PKT_LAST_LEN_MASK  0x1FFFU
+
+#define RQ_CQE_PKT_NUM_GET(val, member) \
+	(((val) >> RQ_CQE_PKT_##member##_SHIFT) & RQ_CQE_PKT_##member##_MASK)
+#define HINIC3_GET_RQ_CQE_PKT_NUM(pkt_info) RQ_CQE_PKT_NUM_GET(pkt_info, NUM)
+
+#define RQ_CQE_SUPER_CQE_EN_GET(val, member) \
+	(((val) >> RQ_CQE_##member##_SHIFT) & RQ_CQE_##member##_MASK)
+
+#define HINIC3_GET_SUPER_CQE_EN(pkt_info) \
+	RQ_CQE_SUPER_CQE_EN_GET(pkt_info, SUPER_CQE_EN)
+
+#define RQ_CQE_PKT_LEN_GET(val, member) \
+	(((val) >> RQ_CQE_PKT_##member##_SHIFT) & RQ_CQE_PKT_##member##_MASK)
+
+#define RQ_CQE_DECRY_INFO_DECRY_STATUS_SHIFT  8
+#define RQ_CQE_DECRY_INFO_ESP_NEXT_HEAD_SHIFT 0
+
+#define RQ_CQE_DECRY_INFO_DECRY_STATUS_MASK  0xFFU
+#define RQ_CQE_DECRY_INFO_ESP_NEXT_HEAD_MASK 0xFFU
+
+#define RQ_CQE_DECRY_INFO_GET(val, member)               \
+	(((val) >> RQ_CQE_DECRY_INFO_##member##_SHIFT) & \
+	 RQ_CQE_DECRY_INFO_##member##_MASK)
+
+#define HINIC3_GET_DECRYPT_STATUS(decry_info) \
+	RQ_CQE_DECRY_INFO_GET(decry_info, DECRY_STATUS)
+
+#define HINIC3_GET_ESP_NEXT_HEAD(decry_info) \
+	RQ_CQE_DECRY_INFO_GET(decry_info, ESP_NEXT_HEAD)
+
+/* Rx cqe checksum err */
+#define HINIC3_RX_CSUM_IP_CSUM_ERR      BIT(0)
+#define HINIC3_RX_CSUM_TCP_CSUM_ERR     BIT(1)
+#define HINIC3_RX_CSUM_UDP_CSUM_ERR     BIT(2)
+#define HINIC3_RX_CSUM_IGMP_CSUM_ERR    BIT(3)
+#define HINIC3_RX_CSUM_ICMP_V4_CSUM_ERR BIT(4)
+#define HINIC3_RX_CSUM_ICMP_V6_CSUM_ERR BIT(5)
+#define HINIC3_RX_CSUM_SCTP_CRC_ERR     BIT(6)
+#define HINIC3_RX_CSUM_HW_CHECK_NONE    BIT(7)
+#define HINIC3_RX_CSUM_IPSU_OTHER_ERR   BIT(8)
+
+#define HINIC3_DEFAULT_RX_CSUM_OFFLOAD 0xFFF
+#define HINIC3_CQE_LEN		       32
+
+#define HINIC3_RSS_OFFLOAD_ALL (         \
+	RTE_ETH_RSS_IPV4 |               \
+	RTE_ETH_RSS_FRAG_IPV4 |          \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP |   \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP |   \
+	RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+	RTE_ETH_RSS_IPV6 |               \
+	RTE_ETH_RSS_FRAG_IPV6 |          \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP |   \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP |   \
+	RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+	RTE_ETH_RSS_IPV6_EX |            \
+	RTE_ETH_RSS_IPV6_TCP_EX |        \
+	RTE_ETH_RSS_IPV6_UDP_EX)
+
+struct hinic3_rxq_stats {
+	u64 packets;
+	u64 bytes;
+	u64 errors;
+	u64 csum_errors;
+	u64 other_errors;
+	u64 unlock_bp;
+	u64 dropped;
+
+	u64 rx_nombuf;
+	u64 rx_discards;
+	u64 burst_pkts;
+	u64 empty;
+	u64 tsc;
+#ifdef HINIC3_XSTAT_MBUF_USE
+	u64 rx_alloc_mbuf_bytes;
+	u64 rx_free_mbuf_bytes;
+	u64 rx_left_mbuf_bytes;
+#endif
+
+#ifdef HINIC3_XSTAT_RXBUF_INFO
+	u64 rx_mbuf;
+	u64 rx_avail;
+	u64 rx_hole;
+#endif
+
+#ifdef HINIC3_XSTAT_PROF_RX
+	u64 app_tsc;
+	u64 pmd_tsc;
+#endif
+};
+
+struct __rte_cache_aligned hinic3_rq_cqe {
+	u32 status;
+	u32 vlan_len;
+
+	u32 offload_type;
+	u32 hash_val;
+	u32 mark_id_0;
+	u32 mark_id_1;
+	u32 mark_id_2;
+	u32 pkt_info;
+};
+
+/**
+ * Attention: please do not add any member in hinic3_rx_info
+ * because rxq bulk rearm mode will write mbuf in rx_info.
+ */
+struct hinic3_rx_info {
+	struct rte_mbuf *mbuf;
+};
+
+struct hinic3_sge_sect {
+	struct hinic3_sge sge;
+	u32 rsvd;
+};
+
+struct hinic3_rq_extend_wqe {
+	struct hinic3_sge_sect buf_desc;
+	struct hinic3_sge_sect cqe_sect;
+};
+
+struct hinic3_rq_normal_wqe {
+	u32 buf_hi_addr;
+	u32 buf_lo_addr;
+	u32 cqe_hi_addr;
+	u32 cqe_lo_addr;
+};
+
+struct hinic3_rq_wqe {
+	union {
+		struct hinic3_rq_normal_wqe normal_wqe;
+		struct hinic3_rq_extend_wqe extend_wqe;
+	};
+};
+
+struct __rte_cache_aligned hinic3_rxq {
+	struct hinic3_nic_dev *nic_dev;
+
+	u16 q_id;
+	u16 q_depth;
+	u16 q_mask;
+	u16 buf_len;
+
+	u32 rx_buff_shift;
+
+	u16 rx_free_thresh;
+	u16 rxinfo_align_end;
+	u16 wqebb_shift;
+	u16 wqebb_size;
+
+	u16 wqe_type;
+	u16 cons_idx;
+	u16 prod_idx;
+	u16 delta;
+
+	u16 next_to_update;
+	u16 port_id;
+
+	const struct rte_memzone *rq_mz;
+	void *queue_buf_vaddr; /**< rxq dma info */
+	rte_iova_t queue_buf_paddr;
+
+	const struct rte_memzone *pi_mz;
+	u16 *pi_virt_addr;
+	void *db_addr;
+	rte_iova_t pi_dma_addr;
+
+	struct hinic3_rx_info *rx_info;
+	struct hinic3_rq_cqe *rx_cqe;
+	struct rte_mempool *mb_pool;
+
+	const struct rte_memzone *cqe_mz;
+	rte_iova_t cqe_start_paddr;
+	void *cqe_start_vaddr;
+	u8 dp_intr_en;
+	u16 msix_entry_idx;
+
+	unsigned long status;
+	u64 wait_time_cycle;
+
+	struct hinic3_rxq_stats rxq_stats;
+#ifdef HINIC3_XSTAT_PROF_RX
+	uint64_t prof_rx_end_tsc; /**< Performance profiling. */
+#endif
+};
+
+u16 hinic3_rx_fill_wqe(struct hinic3_rxq *rxq);
+
+u16 hinic3_rx_fill_buffers(struct hinic3_rxq *rxq);
+
+void hinic3_free_rxq_mbufs(struct hinic3_rxq *rxq);
+
+void hinic3_free_all_rxq_mbufs(struct hinic3_nic_dev *nic_dev);
+
+int hinic3_update_rss_config(struct rte_eth_dev *dev,
+			     struct rte_eth_rss_conf *rss_conf);
+
+int hinic3_poll_rq_empty(struct hinic3_rxq *rxq);
+
+void hinic3_dump_cqe_status(struct hinic3_rxq *rxq, u32 *cqe_done_cnt,
+			    u32 *cqe_hole_cnt, u32 *head_ci, u32 *head_done);
+
+int hinic3_stop_rq(struct rte_eth_dev *eth_dev, struct hinic3_rxq *rxq);
+
+int hinic3_start_rq(struct rte_eth_dev *eth_dev, struct hinic3_rxq *rxq);
+
+u16 hinic3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts);
+
+void hinic3_add_rq_to_rx_queue_list(struct hinic3_nic_dev *nic_dev,
+				    u16 queue_id);
+
+int hinic3_refill_indir_rqid(struct hinic3_rxq *rxq);
+
+void hinic3_init_rx_queue_list(struct hinic3_nic_dev *nic_dev);
+
+void hinic3_remove_rq_from_rx_queue_list(struct hinic3_nic_dev *nic_dev,
+					 u16 queue_id);
+int hinic3_start_all_rqs(struct rte_eth_dev *eth_dev);
+
+#ifdef HINIC3_XSTAT_RXBUF_INFO
+void hinic3_get_stats(struct hinic3_rxq *rxq);
+#endif
+
+/**
+ * Get receive queue local ci.
+ *
+ * @param[in] rxq
+ * Pointer to receive queue structure.
+ * @return
+ * Receive queue local ci.
+ */
+static inline u16
+hinic3_get_rq_local_ci(struct hinic3_rxq *rxq)
+{
+	return MASKED_QUEUE_IDX(rxq, rxq->cons_idx);
+}
+
+static inline u16
+hinic3_get_rq_free_wqebb(struct hinic3_rxq *rxq)
+{
+	return rxq->delta - 1;
+}
+
+/**
+ * Update receive queue local ci.
+ *
+ * @param[in] rxq
+ * Pointer to receive queue structure.
+ * @param[in] wqe_cnt
+ * Number of wqebbs to advance the consumer index by.
+ */
+static inline void
+hinic3_update_rq_local_ci(struct hinic3_rxq *rxq, u16 wqe_cnt)
+{
+	rxq->cons_idx += wqe_cnt;
+	rxq->delta += wqe_cnt;
+}
+
+#endif /* _HINIC3_RX_H_ */
diff --git a/drivers/net/hinic3/hinic3_tx.c b/drivers/net/hinic3/hinic3_tx.c
new file mode 100644
index 0000000000..6f8c42e0c3
--- /dev/null
+++ b/drivers/net/hinic3/hinic3_tx.c
@@ -0,0 +1,274 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Huawei Technologies Co., Ltd
+ */
+
+#include <rte_ether.h>
+#include <rte_io.h>
+#include <rte_mbuf.h>
+
+#include "base/hinic3_compat.h"
+#include "base/hinic3_nic_cfg.h"
+#include "base/hinic3_hwdev.h"
+#include "hinic3_nic_io.h"
+#include "hinic3_ethdev.h"
+#include "hinic3_tx.h"
+
+#define HINIC3_TX_TASK_WRAPPED	  1
+#define HINIC3_TX_BD_DESC_WRAPPED 2
+
+#define TX_MSS_DEFAULT 0x3E00
+#define TX_MSS_MIN     0x50
+
+#define HINIC3_MAX_TX_FREE_BULK 64
+
+#define MAX_PAYLOAD_OFFSET 221
+
+#define HINIC3_TX_OUTER_CHECKSUM_FLAG_SET    1
+#define HINIC3_TX_OUTER_CHECKSUM_FLAG_NO_SET 0
+
+#define HINIC3_TX_OFFLOAD_MASK \
+	(HINIC3_TX_CKSUM_OFFLOAD_MASK | HINIC3_PKT_TX_VLAN_PKT)
+
+#define HINIC3_TX_CKSUM_OFFLOAD_MASK                          \
+	(HINIC3_PKT_TX_IP_CKSUM | HINIC3_PKT_TX_TCP_CKSUM |   \
+	 HINIC3_PKT_TX_UDP_CKSUM | HINIC3_PKT_TX_SCTP_CKSUM | \
+	 HINIC3_PKT_TX_OUTER_IP_CKSUM | HINIC3_PKT_TX_TCP_SEG)
+
+static inline u16
+hinic3_get_sq_free_wqebbs(struct hinic3_txq *sq)
+{
+	return ((sq->q_depth -
+		 (((sq->prod_idx - sq->cons_idx) + sq->q_depth) & sq->q_mask)) -
+		1);
+}
+
+static inline void
+hinic3_update_sq_local_ci(struct hinic3_txq *sq, u16 wqe_cnt)
+{
+	sq->cons_idx += wqe_cnt;
+}
+
+static inline u16
+hinic3_get_sq_local_ci(struct hinic3_txq *sq)
+{
+	return MASKED_QUEUE_IDX(sq, sq->cons_idx);
+}
+
+static inline u16
+hinic3_get_sq_hw_ci(struct hinic3_txq *sq)
+{
+	return MASKED_QUEUE_IDX(sq, hinic3_hw_cpu16(*sq->ci_vaddr_base));
+}
+
+int
+hinic3_start_all_sqs(struct rte_eth_dev *eth_dev)
+{
+	struct hinic3_nic_dev *nic_dev = NULL;
+	struct hinic3_txq *txq = NULL;
+	int i;
+
+	nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(eth_dev);
+
+	for (i = 0; i < nic_dev->num_sqs; i++) {
+		txq = eth_dev->data->tx_queues[i];
+		HINIC3_SET_TXQ_STARTED(txq);
+		eth_dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED;
+	}
+
+	return 0;
+}
+
+static inline void
+hinic3_free_cpy_mbuf(struct hinic3_nic_dev *nic_dev __rte_unused,
+		     struct rte_mbuf *cpy_skb)
+{
+	rte_pktmbuf_free(cpy_skb);
+}
+
+/**
+ * Cleans up buffers (mbuf) in the send queue (txq) and returns these buffers to
+ * their memory pool.
+ *
+ * @param[in] txq
+ * Pointer to send queue.
+ * @param[in] free_cnt
+ * Number of mbufs to be released.
+ * @return
+ * Number of released mbufs.
+ */
+static int
+hinic3_xmit_mbuf_cleanup(struct hinic3_txq *txq, u32 free_cnt)
+{
+	struct hinic3_tx_info *tx_info = NULL;
+	struct rte_mbuf *mbuf = NULL;
+	struct rte_mbuf *mbuf_temp = NULL;
+	struct rte_mbuf *mbuf_free[HINIC3_MAX_TX_FREE_BULK];
+
+	int nb_free = 0;
+	int wqebb_cnt = 0;
+	u16 hw_ci, sw_ci, sq_mask;
+	u32 i;
+
+	hw_ci = hinic3_get_sq_hw_ci(txq);
+	sw_ci = hinic3_get_sq_local_ci(txq);
+	sq_mask = txq->q_mask;
+
+	for (i = 0; i < free_cnt; ++i) {
+		tx_info = &txq->tx_info[sw_ci];
+		if (hw_ci == sw_ci ||
+		    (((hw_ci - sw_ci) & sq_mask) < tx_info->wqebb_cnt))
+			break;
+		/*
+		 * The cpy_mbuf is usually used in the large-sized packet
+		 * scenario.
+		 */
+		if (unlikely(tx_info->cpy_mbuf != NULL)) {
+			hinic3_free_cpy_mbuf(txq->nic_dev, tx_info->cpy_mbuf);
+			tx_info->cpy_mbuf = NULL;
+		}
+		sw_ci = (sw_ci + tx_info->wqebb_cnt) & sq_mask;
+
+		wqebb_cnt += tx_info->wqebb_cnt;
+		mbuf = tx_info->mbuf;
+
+		if (likely(mbuf->nb_segs == 1)) {
+			mbuf_temp = rte_pktmbuf_prefree_seg(mbuf);
+			tx_info->mbuf = NULL;
+			if (unlikely(mbuf_temp == NULL))
+				continue;
+
+			mbuf_free[nb_free++] = mbuf_temp;
+			/*
+			 * If the pools of different mbufs are different,
+			 * release the mbufs of the same pool.
+			 */
+			if (unlikely(mbuf_temp->pool != mbuf_free[0]->pool ||
+				     nb_free >= HINIC3_MAX_TX_FREE_BULK)) {
+				rte_mempool_put_bulk(mbuf_free[0]->pool,
+						     (void **)mbuf_free,
+						     (nb_free - 1));
+				nb_free = 0;
+				mbuf_free[nb_free++] = mbuf_temp;
+			}
+		} else {
+			rte_pktmbuf_free(mbuf);
+			tx_info->mbuf = NULL;
+		}
+	}
+
+	if (nb_free > 0)
+		rte_mempool_put_bulk(mbuf_free[0]->pool, (void **)mbuf_free,
+				     nb_free);
+
+	hinic3_update_sq_local_ci(txq, wqebb_cnt);
+
+	return i;
+}
+
+static inline void
+hinic3_tx_free_mbuf_force(struct hinic3_txq *txq __rte_unused,
+			  struct rte_mbuf *mbuf)
+{
+	rte_pktmbuf_free(mbuf);
+}
+
+/**
+ * Release the mbuf and update the consumer index for sending queue.
+ *
+ * @param[in] txq
+ * Pointer to send queue.
+ */
+void
+hinic3_free_txq_mbufs(struct hinic3_txq *txq)
+{
+	struct hinic3_tx_info *tx_info = NULL;
+	u16 free_wqebbs;
+	u16 ci;
+
+	free_wqebbs = hinic3_get_sq_free_wqebbs(txq) + 1;
+
+	while (free_wqebbs < txq->q_depth) {
+		ci = hinic3_get_sq_local_ci(txq);
+
+		tx_info = &txq->tx_info[ci];
+		if (unlikely(tx_info->cpy_mbuf != NULL)) {
+			hinic3_free_cpy_mbuf(txq->nic_dev, tx_info->cpy_mbuf);
+			tx_info->cpy_mbuf = NULL;
+		}
+		hinic3_tx_free_mbuf_force(txq, tx_info->mbuf);
+		hinic3_update_sq_local_ci(txq, (u16)(tx_info->wqebb_cnt));
+
+		free_wqebbs = (u16)(free_wqebbs + tx_info->wqebb_cnt);
+		tx_info->mbuf = NULL;
+	}
+}
+
+void
+hinic3_free_all_txq_mbufs(struct hinic3_nic_dev *nic_dev)
+{
+	u16 qid;
+	for (qid = 0; qid < nic_dev->num_sqs; qid++)
+		hinic3_free_txq_mbufs(nic_dev->txqs[qid]);
+}
+
+int
+hinic3_tx_done_cleanup(void *txq, u32 free_cnt)
+{
+	struct hinic3_txq *tx_queue = txq;
+	u32 try_free_cnt = !free_cnt ? tx_queue->q_depth : free_cnt;
+
+	return hinic3_xmit_mbuf_cleanup(tx_queue, try_free_cnt);
+}
+
+int
+hinic3_stop_sq(struct hinic3_txq *txq)
+{
+	struct hinic3_nic_dev *nic_dev = txq->nic_dev;
+	unsigned long timeout;
+	int err = -EFAULT;
+	int free_wqebbs;
+
+	timeout = msecs_to_jiffies(HINIC3_FLUSH_QUEUE_TIMEOUT) + jiffies;
+	do {
+		hinic3_tx_done_cleanup(txq, 0);
+		free_wqebbs = hinic3_get_sq_free_wqebbs(txq) + 1;
+		if (free_wqebbs == txq->q_depth) {
+			err = 0;
+			break;
+		}
+
+		rte_delay_us(1);
+	} while (time_before(jiffies, timeout));
+
+	if (err)
+		PMD_DRV_LOG(WARNING,
+			    "%s Wait sq empty timeout, queue_idx: %u, "
+			    "sw_ci: %u, hw_ci: %u, sw_pi: %u, free_wqebbs: %u, "
+			    "q_depth:%u",
+			    nic_dev->dev_name, txq->q_id,
+			    hinic3_get_sq_local_ci(txq),
+			    hinic3_get_sq_hw_ci(txq),
+			    MASKED_QUEUE_IDX(txq, txq->prod_idx), free_wqebbs,
+			    txq->q_depth);
+
+	return err;
+}
+
+/**
+ * Stop all send queues (SQs).
+ *
+ * @param[in] nic_dev
+ * Pointer to NIC device structure.
+ */
+void
+hinic3_flush_txqs(struct hinic3_nic_dev *nic_dev)
+{
+	u16 qid;
+	int err;
+
+	for (qid = 0; qid < nic_dev->num_sqs; qid++) {
+		err = hinic3_stop_sq(nic_dev->txqs[qid]);
+		if (err)
+			PMD_DRV_LOG(ERR, "Stop sq%d failed", qid);
+	}
+}
diff --git a/drivers/net/hinic3/hinic3_tx.h b/drivers/net/hinic3/hinic3_tx.h
new file mode 100644
index 0000000000..f4c61ea1b1
--- /dev/null
+++ b/drivers/net/hinic3/hinic3_tx.h
@@ -0,0 +1,314 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Huawei Technologies Co., Ltd
+ */
+
+#ifndef _HINIC3_TX_H_
+#define _HINIC3_TX_H_
+
+#define MAX_SINGLE_SGE_SIZE		 65536
+#define HINIC3_NONTSO_PKT_MAX_SGE	 38 /**< non-tso max sge 38. */
+#define HINIC3_NONTSO_SEG_NUM_VALID(num) ((num) <= HINIC3_NONTSO_PKT_MAX_SGE)
+
+#define HINIC3_TSO_PKT_MAX_SGE		127 /**< Max SGE num for a TSO packet. */
+#define HINIC3_TSO_SEG_NUM_INVALID(num) ((num) > HINIC3_TSO_PKT_MAX_SGE)
+
+/* Tx offload info. */
+struct hinic3_tx_offload_info {
+	u8 outer_l2_len;
+	u8 outer_l3_type;
+	u16 outer_l3_len;
+
+	u8 inner_l2_len;
+	u8 inner_l3_type;
+	u16 inner_l3_len;
+
+	u8 tunnel_length;
+	u8 tunnel_type;
+	u8 inner_l4_type;
+	u8 inner_l4_len;
+
+	u16 payload_offset;
+	u8 inner_l4_tcp_udp;
+	u8 rsvd0; /**< Reserved field. */
+};
+
+/* Tx wqe ctx. */
+struct hinic3_wqe_info {
+	u8 around; /**< Indicates whether the WQE wraps around the SQ. */
+	u8 cpy_mbuf_cnt;
+	u16 sge_cnt;
+
+	u8 offload;
+	u8 rsvd0; /**< Reserved field 0. */
+	u16 payload_offset;
+
+	u8 wrapped;
+	u8 owner;
+	u16 pi;
+
+	u16 wqebb_cnt;
+	u16 rsvd1; /**< Reserved field 1. */
+
+	u32 queue_info;
+};
+
+/* Descriptor for the send queue of wqe. */
+struct hinic3_sq_wqe_desc {
+	u32 ctrl_len;
+	u32 queue_info;
+	u32 hi_addr;
+	u32 lo_addr;
+};
+
+/* Describes the send queue task. */
+struct hinic3_sq_task {
+	u32 pkt_info0;
+	u32 ip_identify;
+	u32 pkt_info2;
+	u32 vlan_offload;
+};
+
+/* Descriptor that describes the transmit queue buffer. */
+struct hinic3_sq_bufdesc {
+	u32 len;     /**< 31-bits Length, L2NIC only use length[17:0]. */
+	u32 rsvd;    /**< Reserved field. */
+	u32 hi_addr; /**< Upper address. */
+	u32 lo_addr; /**< Lower address. */
+};
+
+/* Compact work queue entry that describes the send queue (SQ). */
+struct hinic3_sq_compact_wqe {
+	struct hinic3_sq_wqe_desc wqe_desc;
+};
+
+/* Extend work queue entry that describes the send queue (SQ). */
+struct hinic3_sq_extend_wqe {
+	struct hinic3_sq_wqe_desc wqe_desc;
+	struct hinic3_sq_task task;
+	struct hinic3_sq_bufdesc buf_desc[];
+};
+
+struct hinic3_sq_wqe {
+	union {
+		struct hinic3_sq_compact_wqe compact_wqe;
+		struct hinic3_sq_extend_wqe extend_wqe;
+	};
+};
+
+struct hinic3_sq_wqe_combo {
+	struct hinic3_sq_wqe_desc *hdr;
+	struct hinic3_sq_task *task;
+	struct hinic3_sq_bufdesc *bds_head;
+	u32 wqe_type;
+	u32 task_type;
+};
+
+enum sq_wqe_data_format {
+	SQ_NORMAL_WQE = 0,
+};
+
+/* Indicates the type of a WQE. */
+enum sq_wqe_ec_type {
+	SQ_WQE_COMPACT_TYPE = 0,
+	SQ_WQE_EXTENDED_TYPE = 1,
+};
+
+#define COMPACT_WQE_MAX_CTRL_LEN 0x3FFF
+
+/* Indicates the type of tasks with different lengths. */
+enum sq_wqe_tasksect_len_type {
+	SQ_WQE_TASKSECT_46BITS = 0,
+	SQ_WQE_TASKSECT_16BYTES = 1,
+};
+
+/* Setting and obtaining queue information. */
+#define SQ_CTRL_BD0_LEN_SHIFT	   0
+#define SQ_CTRL_RSVD_SHIFT	   18
+#define SQ_CTRL_BUFDESC_NUM_SHIFT  19
+#define SQ_CTRL_TASKSECT_LEN_SHIFT 27
+#define SQ_CTRL_DATA_FORMAT_SHIFT  28
+#define SQ_CTRL_DIRECT_SHIFT	   29
+#define SQ_CTRL_EXTENDED_SHIFT	   30
+#define SQ_CTRL_OWNER_SHIFT	   31
+
+#define SQ_CTRL_BD0_LEN_MASK	  0x3FFFFU
+#define SQ_CTRL_RSVD_MASK	  0x1U
+#define SQ_CTRL_BUFDESC_NUM_MASK  0xFFU
+#define SQ_CTRL_TASKSECT_LEN_MASK 0x1U
+#define SQ_CTRL_DATA_FORMAT_MASK  0x1U
+#define SQ_CTRL_DIRECT_MASK	  0x1U
+#define SQ_CTRL_EXTENDED_MASK	  0x1U
+#define SQ_CTRL_OWNER_MASK	  0x1U
+
+#define SQ_CTRL_SET(val, member) \
+	(((u32)(val) & SQ_CTRL_##member##_MASK) << SQ_CTRL_##member##_SHIFT)
+#define SQ_CTRL_GET(val, member) \
+	(((val) >> SQ_CTRL_##member##_SHIFT) & SQ_CTRL_##member##_MASK)
+#define SQ_CTRL_CLEAR(val, member) \
+	((val) & (~(SQ_CTRL_##member##_MASK << SQ_CTRL_##member##_SHIFT)))
+
+#define SQ_CTRL_QUEUE_INFO_PKT_TYPE_SHIFT  0
+#define SQ_CTRL_QUEUE_INFO_PLDOFF_SHIFT	   2
+#define SQ_CTRL_QUEUE_INFO_UFO_SHIFT	   10
+#define SQ_CTRL_QUEUE_INFO_TSO_SHIFT	   11
+#define SQ_CTRL_QUEUE_INFO_TCPUDP_CS_SHIFT 12
+#define SQ_CTRL_QUEUE_INFO_MSS_SHIFT	   13
+#define SQ_CTRL_QUEUE_INFO_SCTP_SHIFT	   27
+#define SQ_CTRL_QUEUE_INFO_UC_SHIFT	   28
+#define SQ_CTRL_QUEUE_INFO_PRI_SHIFT	   29
+
+#define SQ_CTRL_QUEUE_INFO_PKT_TYPE_MASK  0x3U
+#define SQ_CTRL_QUEUE_INFO_PLDOFF_MASK	  0xFFU
+#define SQ_CTRL_QUEUE_INFO_UFO_MASK	  0x1U
+#define SQ_CTRL_QUEUE_INFO_TSO_MASK	  0x1U
+#define SQ_CTRL_QUEUE_INFO_TCPUDP_CS_MASK 0x1U
+#define SQ_CTRL_QUEUE_INFO_MSS_MASK	  0x3FFFU
+#define SQ_CTRL_QUEUE_INFO_SCTP_MASK	  0x1U
+#define SQ_CTRL_QUEUE_INFO_UC_MASK	  0x1U
+#define SQ_CTRL_QUEUE_INFO_PRI_MASK	  0x7U
+
+#define SQ_CTRL_QUEUE_INFO_SET(val, member)                \
+	(((u32)(val) & SQ_CTRL_QUEUE_INFO_##member##_MASK) \
+	 << SQ_CTRL_QUEUE_INFO_##member##_SHIFT)
+#define SQ_CTRL_QUEUE_INFO_GET(val, member)               \
+	(((val) >> SQ_CTRL_QUEUE_INFO_##member##_SHIFT) & \
+	 SQ_CTRL_QUEUE_INFO_##member##_MASK)
+#define SQ_CTRL_QUEUE_INFO_CLEAR(val, member)          \
+	((val) & (~(SQ_CTRL_QUEUE_INFO_##member##_MASK \
+		    << SQ_CTRL_QUEUE_INFO_##member##_SHIFT)))
+
+/* Setting and obtaining task information */
+#define SQ_TASK_INFO0_TUNNEL_FLAG_SHIFT	    19
+#define SQ_TASK_INFO0_ESP_NEXT_PROTO_SHIFT  22
+#define SQ_TASK_INFO0_INNER_L4_EN_SHIFT	    24
+#define SQ_TASK_INFO0_INNER_L3_EN_SHIFT	    25
+#define SQ_TASK_INFO0_INNER_L4_PSEUDO_SHIFT 26
+#define SQ_TASK_INFO0_OUT_L4_EN_SHIFT	    27
+#define SQ_TASK_INFO0_OUT_L3_EN_SHIFT	    28
+#define SQ_TASK_INFO0_OUT_L4_PSEUDO_SHIFT   29
+#define SQ_TASK_INFO0_ESP_OFFLOAD_SHIFT	    30
+#define SQ_TASK_INFO0_IPSEC_PROTO_SHIFT	    31
+
+#define SQ_TASK_INFO0_TUNNEL_FLAG_MASK	   0x1U
+#define SQ_TASK_INFO0_ESP_NEXT_PROTO_MASK  0x3U
+#define SQ_TASK_INFO0_INNER_L4_EN_MASK	   0x1U
+#define SQ_TASK_INFO0_INNER_L3_EN_MASK	   0x1U
+#define SQ_TASK_INFO0_INNER_L4_PSEUDO_MASK 0x1U
+#define SQ_TASK_INFO0_OUT_L4_EN_MASK	   0x1U
+#define SQ_TASK_INFO0_OUT_L3_EN_MASK	   0x1U
+#define SQ_TASK_INFO0_OUT_L4_PSEUDO_MASK   0x1U
+#define SQ_TASK_INFO0_ESP_OFFLOAD_MASK	   0x1U
+#define SQ_TASK_INFO0_IPSEC_PROTO_MASK	   0x1U
+
+#define SQ_TASK_INFO0_SET(val, member)                \
+	(((u32)(val) & SQ_TASK_INFO0_##member##_MASK) \
+	 << SQ_TASK_INFO0_##member##_SHIFT)
+#define SQ_TASK_INFO0_GET(val, member)               \
+	(((val) >> SQ_TASK_INFO0_##member##_SHIFT) & \
+	 SQ_TASK_INFO0_##member##_MASK)
+
+#define SQ_TASK_INFO1_SET(val, member)           \
+	(((val) & SQ_TASK_INFO1_##member##_MASK) \
+	 << SQ_TASK_INFO1_##member##_SHIFT)
+#define SQ_TASK_INFO1_GET(val, member)               \
+	(((val) >> SQ_TASK_INFO1_##member##_SHIFT) & \
+	 SQ_TASK_INFO1_##member##_MASK)
+
+#define SQ_TASK_INFO3_VLAN_TAG_SHIFT	   0
+#define SQ_TASK_INFO3_VLAN_TYPE_SHIFT	   16
+#define SQ_TASK_INFO3_VLAN_TAG_VALID_SHIFT 19
+
+#define SQ_TASK_INFO3_VLAN_TAG_MASK	  0xFFFFU
+#define SQ_TASK_INFO3_VLAN_TYPE_MASK	  0x7U
+#define SQ_TASK_INFO3_VLAN_TAG_VALID_MASK 0x1U
+
+#define SQ_TASK_INFO3_SET(val, member)           \
+	(((val) & SQ_TASK_INFO3_##member##_MASK) \
+	 << SQ_TASK_INFO3_##member##_SHIFT)
+#define SQ_TASK_INFO3_GET(val, member)               \
+	(((val) >> SQ_TASK_INFO3_##member##_SHIFT) & \
+	 SQ_TASK_INFO3_##member##_MASK)
+
+/* Defines the TX queue status. */
+enum hinic3_txq_status {
+	HINIC3_TXQ_STATUS_START = 0,
+	HINIC3_TXQ_STATUS_STOP,
+};
+
+/* Setting and obtaining status information. */
+#define HINIC3_TXQ_IS_STARTED(txq)  ((txq)->status == HINIC3_TXQ_STATUS_START)
+#define HINIC3_TXQ_IS_STOPPED(txq)  ((txq)->status == HINIC3_TXQ_STATUS_STOP)
+#define HINIC3_SET_TXQ_STARTED(txq) ((txq)->status = HINIC3_TXQ_STATUS_START)
+#define HINIC3_SET_TXQ_STOPPED(txq) ((txq)->status = HINIC3_TXQ_STATUS_STOP)
+
+#define HINIC3_FLUSH_QUEUE_TIMEOUT 3000
+
+/* Txq info. */
+struct hinic3_txq_stats {
+	u64 packets;
+	u64 bytes;
+	u64 tx_busy;
+	u64 offload_errors;
+	u64 burst_pkts;
+	u64 sge_len0;
+	u64 mbuf_null;
+	u64 cpy_pkts;
+	u64 sge_len_too_large;
+
+#ifdef HINIC3_XSTAT_PROF_TX
+	u64 app_tsc;
+	u64 pmd_tsc;
+#endif
+
+#ifdef HINIC3_XSTAT_MBUF_USE
+	u64 tx_left_mbuf_bytes;
+#endif
+};
+
+/* Structure for storing the information sent. */
+struct hinic3_tx_info {
+	struct rte_mbuf *mbuf;
+	struct rte_mbuf *cpy_mbuf;
+	int wqebb_cnt;
+};
+
+/* Indicates the sending queue of information. */
+struct __rte_cache_aligned hinic3_txq {
+	struct hinic3_nic_dev *nic_dev;
+	u16 q_id;
+	u16 q_depth;
+	u16 q_mask;
+	u16 wqebb_size;
+	u16 wqebb_shift;
+	u16 cons_idx;
+	u16 prod_idx;
+	u16 status;
+
+	u16 tx_free_thresh;
+	u16 owner;
+	void *db_addr;
+	struct hinic3_tx_info *tx_info;
+
+	const struct rte_memzone *sq_mz;
+	void *queue_buf_vaddr;
+	rte_iova_t queue_buf_paddr;
+
+	const struct rte_memzone *ci_mz;
+	volatile u16 *ci_vaddr_base;
+	rte_iova_t ci_dma_base;
+	u64 sq_head_addr;
+	u64 sq_bot_sge_addr;
+	u32 cos;
+	struct hinic3_txq_stats txq_stats;
+#ifdef HINIC3_XSTAT_PROF_TX
+	uint64_t prof_tx_end_tsc;
+#endif
+};
+
+void hinic3_flush_txqs(struct hinic3_nic_dev *nic_dev);
+void hinic3_free_txq_mbufs(struct hinic3_txq *txq);
+void hinic3_free_all_txq_mbufs(struct hinic3_nic_dev *nic_dev);
+int hinic3_stop_sq(struct hinic3_txq *txq);
+int hinic3_start_all_sqs(struct rte_eth_dev *eth_dev);
+int hinic3_tx_done_cleanup(void *txq, uint32_t free_cnt);
+#endif /* _HINIC3_TX_H_ */
-- 
2.47.0.windows.2


^ permalink raw reply	[flat|nested] 16+ messages in thread

* [RFC 14/18] net/hinic3: add Rx/Tx functions
  2025-04-18  7:02 [RFC 00/18] add hinic3 PMD driver Feifei Wang
                   ` (4 preceding siblings ...)
  2025-04-18  7:02 ` [RFC 13/18] net/hinic3: add dev ops Feifei Wang
@ 2025-04-18  7:02 ` Feifei Wang
  2025-04-18  7:02 ` [RFC 15/18] net/hinic3: add MML and EEPROM access feature Feifei Wang
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 16+ messages in thread
From: Feifei Wang @ 2025-04-18  7:02 UTC (permalink / raw)
  To: dev; +Cc: Feifei Wang, Yi Chen, Xin Wang

From: Feifei Wang <wangfeifei40@huawei.com>

This patch adds the packet transmit and receive function code.

Signed-off-by: Feifei Wang <wangfeifei40@huawei.com>
Signed-off-by: Yi Chen <chenyi221@huawei.com>
Reviewed-by: Xin Wang <wangxin679@h-partners.com>
---
 drivers/net/hinic3/hinic3_ethdev.c |   9 +-
 drivers/net/hinic3/hinic3_rx.c     | 301 +++++++++++-
 drivers/net/hinic3/hinic3_tx.c     | 754 +++++++++++++++++++++++++++++
 drivers/net/hinic3/hinic3_tx.h     |   1 +
 4 files changed, 1054 insertions(+), 11 deletions(-)

diff --git a/drivers/net/hinic3/hinic3_ethdev.c b/drivers/net/hinic3/hinic3_ethdev.c
index de380dddbb..7cd101e5c3 100644
--- a/drivers/net/hinic3/hinic3_ethdev.c
+++ b/drivers/net/hinic3/hinic3_ethdev.c
@@ -21,9 +21,9 @@
 #include "base/hinic3_hw_comm.h"
 #include "base/hinic3_nic_cfg.h"
 #include "base/hinic3_nic_event.h"
-#include "hinic3_pmd_nic_io.h"
-#include "hinic3_pmd_tx.h"
-#include "hinic3_pmd_rx.h"
+#include "hinic3_nic_io.h"
+#include "hinic3_tx.h"
+#include "hinic3_rx.h"
 #include "hinic3_ethdev.h"
 
 #define HINIC3_MIN_RX_BUF_SIZE 1024
@@ -3337,6 +3337,9 @@ hinic3_dev_init(struct rte_eth_dev *eth_dev)
 	PMD_DRV_LOG(INFO, "Network Interface pmd driver version: %s",
 		    HINIC3_PMD_DRV_VERSION);
 
+	eth_dev->rx_pkt_burst = hinic3_recv_pkts;
+	eth_dev->tx_pkt_burst = hinic3_xmit_pkts;
+
 	return hinic3_func_init(eth_dev);
 }
 
diff --git a/drivers/net/hinic3/hinic3_rx.c b/drivers/net/hinic3/hinic3_rx.c
index a1dc960236..318d9aadc3 100644
--- a/drivers/net/hinic3/hinic3_rx.c
+++ b/drivers/net/hinic3/hinic3_rx.c
@@ -5,14 +5,14 @@
 #include <rte_mbuf.h>
 
 #include "base/hinic3_compat.h"
-#include "base/hinic3_pmd_hwif.h"
-#include "base/hinic3_pmd_hwdev.h"
-#include "base/hinic3_pmd_wq.h"
-#include "base/hinic3_pmd_nic_cfg.h"
-#include "hinic3_pmd_nic_io.h"
-#include "hinic3_pmd_ethdev.h"
-#include "hinic3_pmd_tx.h"
-#include "hinic3_pmd_rx.h"
+#include "base/hinic3_hwif.h"
+#include "base/hinic3_hwdev.h"
+#include "base/hinic3_wq.h"
+#include "base/hinic3_nic_cfg.h"
+#include "hinic3_nic_io.h"
+#include "hinic3_ethdev.h"
+#include "hinic3_tx.h"
+#include "hinic3_rx.h"
 
 /**
  * Get wqe from receive queue.
@@ -809,3 +809,288 @@ hinic3_start_rq(struct rte_eth_dev *eth_dev, struct hinic3_rxq *rxq)
 
 	return err;
 }
+
+
+static inline u64
+hinic3_rx_vlan(u32 offload_type, u32 vlan_len, u16 *vlan_tci)
+{
+	uint16_t vlan_tag;
+
+	vlan_tag = HINIC3_GET_RX_VLAN_TAG(vlan_len);
+	if (!HINIC3_GET_RX_VLAN_OFFLOAD_EN(offload_type) || vlan_tag == 0) {
+		*vlan_tci = 0;
+		return 0;
+	}
+
+	*vlan_tci = vlan_tag;
+
+	return HINIC3_PKT_RX_VLAN | HINIC3_PKT_RX_VLAN_STRIPPED;
+}
+
+static inline u64
+hinic3_rx_csum(uint32_t status, struct hinic3_rxq *rxq)
+{
+	struct hinic3_nic_dev *nic_dev = rxq->nic_dev;
+	u32 csum_err;
+	u64 flags;
+
+	if (unlikely(!(nic_dev->rx_csum_en & HINIC3_DEFAULT_RX_CSUM_OFFLOAD)))
+		return HINIC3_PKT_RX_IP_CKSUM_UNKNOWN;
+
+	csum_err = HINIC3_GET_RX_CSUM_ERR(status);
+	if (likely(csum_err == 0))
+		return (HINIC3_PKT_RX_IP_CKSUM_GOOD |
+			HINIC3_PKT_RX_L4_CKSUM_GOOD);
+
+	/*
+	 * If bypass bit is set, all other err status indications should be
+	 * ignored.
+	 */
+	if (unlikely(csum_err & HINIC3_RX_CSUM_HW_CHECK_NONE))
+		return HINIC3_PKT_RX_IP_CKSUM_UNKNOWN;
+
+	flags = 0;
+
+	/* IP checksum error. */
+	if (csum_err & HINIC3_RX_CSUM_IP_CSUM_ERR) {
+		flags |= HINIC3_PKT_RX_IP_CKSUM_BAD;
+		rxq->rxq_stats.csum_errors++;
+	}
+
+	/* L4 checksum error. */
+	if ((csum_err & HINIC3_RX_CSUM_TCP_CSUM_ERR) ||
+	    (csum_err & HINIC3_RX_CSUM_UDP_CSUM_ERR) ||
+	    (csum_err & HINIC3_RX_CSUM_SCTP_CRC_ERR)) {
+		flags |= HINIC3_PKT_RX_L4_CKSUM_BAD;
+		rxq->rxq_stats.csum_errors++;
+	}
+
+	if (unlikely(csum_err == HINIC3_RX_CSUM_IPSU_OTHER_ERR))
+		rxq->rxq_stats.other_errors++;
+
+	return flags;
+}
+
+static inline u64
+hinic3_rx_rss_hash(u32 offload_type, u32 rss_hash_value, u32 *rss_hash)
+{
+	u32 rss_type;
+
+	rss_type = HINIC3_GET_RSS_TYPES(offload_type);
+	if (likely(rss_type != 0)) {
+		*rss_hash = rss_hash_value;
+		return HINIC3_PKT_RX_RSS_HASH;
+	}
+
+	return 0;
+}
+
+static void
+hinic3_recv_jumbo_pkt(struct hinic3_rxq *rxq, struct rte_mbuf *head_mbuf,
+		      u32 remain_pkt_len)
+{
+	struct rte_mbuf *cur_mbuf = NULL;
+	struct rte_mbuf *rxm = NULL;
+	struct hinic3_rx_info *rx_info = NULL;
+	u16 sw_ci, rx_buf_len = rxq->buf_len;
+	u32 pkt_len;
+
+	while (remain_pkt_len > 0) {
+		sw_ci = hinic3_get_rq_local_ci(rxq);
+		rx_info = &rxq->rx_info[sw_ci];
+
+		hinic3_update_rq_local_ci(rxq, 1);
+
+		pkt_len = remain_pkt_len > rx_buf_len ? rx_buf_len
+						      : remain_pkt_len;
+		remain_pkt_len -= pkt_len;
+
+		cur_mbuf = rx_info->mbuf;
+		cur_mbuf->data_len = (u16)pkt_len;
+		cur_mbuf->next = NULL;
+
+		head_mbuf->pkt_len += cur_mbuf->data_len;
+		head_mbuf->nb_segs++;
+#ifdef HINIC3_XSTAT_MBUF_USE
+		rxq->rxq_stats.rx_free_mbuf_bytes++;
+#endif
+		if (!rxm)
+			head_mbuf->next = cur_mbuf;
+		else
+			rxm->next = cur_mbuf;
+
+		rxm = cur_mbuf;
+	}
+}
+
+int
+hinic3_start_all_rqs(struct rte_eth_dev *eth_dev)
+{
+	struct hinic3_nic_dev *nic_dev = NULL;
+	struct hinic3_rxq *rxq = NULL;
+	int err = 0;
+	int i;
+
+	nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(eth_dev);
+
+	for (i = 0; i < nic_dev->num_rqs; i++) {
+		rxq = eth_dev->data->rx_queues[i];
+		hinic3_add_rq_to_rx_queue_list(nic_dev, rxq->q_id);
+		err = hinic3_rearm_rxq_mbuf(rxq);
+		if (err) {
+			PMD_DRV_LOG(ERR,
+				    "Failed to allocate mbufs for Rx queue %d, "
+				    "qid: %u, need_mbuf: %d",
+				    i, rxq->q_id, rxq->q_depth);
+			goto out;
+		}
+		hinic3_dev_rx_queue_intr_enable(eth_dev, rxq->q_id);
+		eth_dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED;
+	}
+
+	if (nic_dev->rss_state == HINIC3_RSS_ENABLE) {
+		err = hinic3_refill_indir_rqid(rxq);
+		if (err) {
+			PMD_DRV_LOG(ERR,
+				    "Refill RQ to indirect table failed, "
+				    "eth_dev:%s, queue_idx:%d, err:%d",
+				    rxq->nic_dev->dev_name, rxq->q_id, err);
+			goto out;
+		}
+	}
+
+	return 0;
+out:
+	for (i = 0; i < nic_dev->num_rqs; i++) {
+		rxq = eth_dev->data->rx_queues[i];
+		hinic3_remove_rq_from_rx_queue_list(nic_dev, rxq->q_id);
+		hinic3_free_rxq_mbufs(rxq);
+		hinic3_dev_rx_queue_intr_disable(eth_dev, rxq->q_id);
+		eth_dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
+	}
+	return err;
+}
+
+#define HINIC3_RX_EMPTY_THRESHOLD 3
+u16
+hinic3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts)
+{
+	struct hinic3_rxq *rxq = rx_queue;
+	struct hinic3_rx_info *rx_info = NULL;
+	volatile struct hinic3_rq_cqe *rx_cqe = NULL;
+	struct rte_mbuf *rxm = NULL;
+	u16 sw_ci, rx_buf_len, wqebb_cnt = 0, pkts = 0;
+	u32 status, pkt_len, vlan_len, offload_type, lro_num;
+	u64 rx_bytes = 0;
+	u32 hash_value;
+
+#ifdef HINIC3_XSTAT_PROF_RX
+	uint64_t t1 = rte_get_tsc_cycles();
+	uint64_t t2;
+#endif
+	if (((rte_get_timer_cycles() - rxq->rxq_stats.tsc) < rxq->wait_time_cycle) &&
+	    rxq->rxq_stats.empty >= HINIC3_RX_EMPTY_THRESHOLD)
+		goto out;
+
+	sw_ci = hinic3_get_rq_local_ci(rxq);
+	rx_buf_len = rxq->buf_len;
+
+	while (pkts < nb_pkts) {
+		rx_cqe = &rxq->rx_cqe[sw_ci];
+		status = hinic3_hw_cpu32((u32)(rte_atomic_load_explicit(&rx_cqe->status,
+			rte_memory_order_acquire)));
+		if (!HINIC3_GET_RX_DONE(status)) {
+			rxq->rxq_stats.empty++;
+			break;
+		}
+
+		vlan_len = hinic3_hw_cpu32(rx_cqe->vlan_len);
+
+		pkt_len = HINIC3_GET_RX_PKT_LEN(vlan_len);
+
+		rx_info = &rxq->rx_info[sw_ci];
+		rxm = rx_info->mbuf;
+
+		/* 1. Next ci point and prefetch. */
+		sw_ci++;
+		sw_ci &= rxq->q_mask;
+
+		/* 2. Prefetch next mbuf first 64B. */
+		rte_prefetch0(rxq->rx_info[sw_ci].mbuf);
+
+		/* 3. Jumbo frame process. */
+		if (likely(pkt_len <= (u32)rx_buf_len)) {
+			rxm->data_len = (u16)pkt_len;
+			rxm->pkt_len = pkt_len;
+			wqebb_cnt++;
+		} else {
+			rxm->data_len = rx_buf_len;
+			rxm->pkt_len = rx_buf_len;
+
+			/*
+			 * For jumbo frames, the remaining CI update is
+			 * done by hinic3_recv_jumbo_pkt().
+			 */
+			hinic3_update_rq_local_ci(rxq, wqebb_cnt + 1);
+			wqebb_cnt = 0;
+			hinic3_recv_jumbo_pkt(rxq, rxm, pkt_len - rx_buf_len);
+			sw_ci = hinic3_get_rq_local_ci(rxq);
+		}
+
+		rxm->data_off = RTE_PKTMBUF_HEADROOM;
+		rxm->port = rxq->port_id;
+
+		/* 4. Rx checksum offload. */
+		rxm->ol_flags |= hinic3_rx_csum(status, rxq);
+
+		/* 5. Vlan offload. */
+		offload_type = hinic3_hw_cpu32(rx_cqe->offload_type);
+
+		rxm->ol_flags |=
+			hinic3_rx_vlan(offload_type, vlan_len, &rxm->vlan_tci);
+
+		/* 6. RSS. */
+		hash_value = hinic3_hw_cpu32(rx_cqe->hash_val);
+		rxm->ol_flags |= hinic3_rx_rss_hash(offload_type, hash_value,
+						    &rxm->hash.rss);
+		/* 8. LRO. */
+		lro_num = HINIC3_GET_RX_NUM_LRO(status);
+		if (unlikely(lro_num != 0)) {
+			rxm->ol_flags |= HINIC3_PKT_RX_LRO;
+			rxm->tso_segsz = pkt_len / lro_num;
+		}
+
+		rx_cqe->status = 0;
+
+		rx_bytes += pkt_len;
+		rx_pkts[pkts++] = rxm;
+	}
+
+	if (pkts) {
+		/* 9. Update local ci. */
+		hinic3_update_rq_local_ci(rxq, wqebb_cnt);
+
+		/* Update packet stats. */
+		rxq->rxq_stats.packets += pkts;
+		rxq->rxq_stats.bytes += rx_bytes;
+		rxq->rxq_stats.empty = 0;
+#ifdef HINIC3_XSTAT_MBUF_USE
+		rxq->rxq_stats.rx_free_mbuf_bytes += pkts;
+#endif
+	}
+	rxq->rxq_stats.burst_pkts = pkts;
+	rxq->rxq_stats.tsc = rte_get_timer_cycles();
+out:
+	/* 10. Rearm mbuf to rxq. */
+	hinic3_rearm_rxq_mbuf(rxq);
+
+#ifdef HINIC3_XSTAT_PROF_RX
+	/* Do profiling stats. */
+	t2 = rte_get_tsc_cycles();
+	rxq->rxq_stats.app_tsc = t1 - rxq->prof_rx_end_tsc;
+	rxq->prof_rx_end_tsc = t2;
+	rxq->rxq_stats.pmd_tsc = t2 - t1;
+#endif
+
+	return pkts;
+}
diff --git a/drivers/net/hinic3/hinic3_tx.c b/drivers/net/hinic3/hinic3_tx.c
index 6f8c42e0c3..c2157ab4b9 100644
--- a/drivers/net/hinic3/hinic3_tx.c
+++ b/drivers/net/hinic3/hinic3_tx.c
@@ -60,6 +60,98 @@ hinic3_get_sq_hw_ci(struct hinic3_txq *sq)
 	return MASKED_QUEUE_IDX(sq, hinic3_hw_cpu16(*sq->ci_vaddr_base));
 }
 
+static void *
+hinic3_get_sq_wqe(struct hinic3_txq *sq, struct hinic3_wqe_info *wqe_info)
+{
+	u16 cur_pi = MASKED_QUEUE_IDX(sq, sq->prod_idx);
+	u32 end_pi;
+
+	end_pi = cur_pi + wqe_info->wqebb_cnt;
+	sq->prod_idx += wqe_info->wqebb_cnt;
+
+	wqe_info->owner = (u8)(sq->owner);
+	wqe_info->pi = cur_pi;
+	wqe_info->wrapped = 0;
+
+	if (unlikely(end_pi >= sq->q_depth)) {
+		sq->owner = !sq->owner;
+
+		if (likely(end_pi > sq->q_depth))
+			wqe_info->wrapped = (u8)(sq->q_depth - cur_pi);
+	}
+
+	return NIC_WQE_ADDR(sq, cur_pi);
+}
+
+static inline void
+hinic3_put_sq_wqe(struct hinic3_txq *sq, struct hinic3_wqe_info *wqe_info)
+{
+	if (wqe_info->owner != sq->owner)
+		sq->owner = wqe_info->owner;
+
+	sq->prod_idx -= wqe_info->wqebb_cnt;
+}
+
+/**
+ * Set the WQE combination information for the send queue (SQ).
+ *
+ * @param[in] txq
+ * Pointer to the send queue.
+ * @param[out] wqe_combo
+ * Pointer to the wqe_combo of the send queue (SQ).
+ * @param[in] wqe
+ * Pointer to the WQE of the send queue (SQ).
+ * @param[in] wqe_info
+ * Pointer to the wqe_info of the send queue (SQ).
+ */
+static void
+hinic3_set_wqe_combo(struct hinic3_txq *txq,
+		     struct hinic3_sq_wqe_combo *wqe_combo,
+		     struct hinic3_sq_wqe *wqe,
+		     struct hinic3_wqe_info *wqe_info)
+{
+	wqe_combo->hdr = &wqe->compact_wqe.wqe_desc;
+
+	if (wqe_info->offload) {
+		if (wqe_info->wrapped == HINIC3_TX_TASK_WRAPPED) {
+			wqe_combo->task = (struct hinic3_sq_task *)
+				(void *)txq->sq_head_addr;
+			wqe_combo->bds_head = (struct hinic3_sq_bufdesc *)
+				(void *)(txq->sq_head_addr + txq->wqebb_size);
+		} else if (wqe_info->wrapped == HINIC3_TX_BD_DESC_WRAPPED) {
+			wqe_combo->task = &wqe->extend_wqe.task;
+			wqe_combo->bds_head = (struct hinic3_sq_bufdesc *)
+				(void *)(txq->sq_head_addr);
+		} else {
+			wqe_combo->task = &wqe->extend_wqe.task;
+			wqe_combo->bds_head = wqe->extend_wqe.buf_desc;
+		}
+
+		wqe_combo->wqe_type = SQ_WQE_EXTENDED_TYPE;
+		wqe_combo->task_type = SQ_WQE_TASKSECT_16BYTES;
+
+		return;
+	}
+
+	if (wqe_info->wrapped == HINIC3_TX_TASK_WRAPPED) {
+		wqe_combo->bds_head = (struct hinic3_sq_bufdesc *)
+			(void *)(txq->sq_head_addr);
+	} else {
+		wqe_combo->bds_head =
+			(struct hinic3_sq_bufdesc *)(&wqe->extend_wqe.task);
+	}
+
+	if (wqe_info->wqebb_cnt > 1) {
+		wqe_combo->wqe_type = SQ_WQE_EXTENDED_TYPE;
+		wqe_combo->task_type = SQ_WQE_TASKSECT_46BITS;
+
+		/* This section is used for VLAN insertion and must be cleared. */
+		wqe_combo->bds_head->rsvd = 0;
+	} else {
+		wqe_combo->wqe_type = SQ_WQE_COMPACT_TYPE;
+	}
+}
+
 int
 hinic3_start_all_sqs(struct rte_eth_dev *eth_dev)
 {
@@ -220,6 +312,668 @@ hinic3_tx_done_cleanup(void *txq, u32 free_cnt)
 	return hinic3_xmit_mbuf_cleanup(tx_queue, try_free_cnt);
 }
 
+/**
+ * Prepare the packet to be sent and calculate the inner L3 offset.
+ *
+ * @param[in] mbuf
+ * Pointer to the mbuf to be processed.
+ * @param[out] inner_l3_offset
+ * Offset of the inner (IP layer) L3 header.
+ * @return
+ * 0 on success, -EINVAL on failure.
+ */
+static int
+hinic3_tx_offload_pkt_prepare(struct rte_mbuf *mbuf, u16 *inner_l3_offset)
+{
+	uint64_t ol_flags = mbuf->ol_flags;
+
+	/* Only support vxlan offload. */
+	if ((ol_flags & HINIC3_PKT_TX_TUNNEL_MASK) &&
+	    (!(ol_flags & HINIC3_PKT_TX_TUNNEL_VXLAN)))
+		return -EINVAL;
+
+#ifdef RTE_LIBRTE_ETHDEV_DEBUG
+	if (rte_validate_tx_offload(mbuf) != 0)
+		return -EINVAL;
+#endif
+	/* Support tunnel. */
+	if ((ol_flags & HINIC3_PKT_TX_TUNNEL_MASK)) {
+		if ((ol_flags & HINIC3_PKT_TX_OUTER_IP_CKSUM) ||
+		    (ol_flags & HINIC3_PKT_TX_OUTER_IPV6) ||
+		    (ol_flags & HINIC3_PKT_TX_TCP_SEG)) {
+			/*
+			 * In this case, the mbuf's l2_len means
+			 * len(out_udp + vxlan + in_eth).
+			 */
+			*inner_l3_offset = mbuf->l2_len + mbuf->outer_l2_len +
+					   mbuf->outer_l3_len;
+		} else {
+			/*
+			 * In this case, the mbuf's l2_len means
+			 * len(out_eth + out_ip + out_udp + vxlan + in_eth).
+			 */
+			*inner_l3_offset = mbuf->l2_len;
+		}
+	} else {
+		/* For non-tunnel type pkts. */
+		*inner_l3_offset = mbuf->l2_len;
+	}
+
+	return 0;
+}
+
+static inline void
+hinic3_set_vlan_tx_offload(struct hinic3_sq_task *task, u16 vlan_tag,
+			   u8 vlan_type)
+{
+	task->vlan_offload = SQ_TASK_INFO3_SET(vlan_tag, VLAN_TAG) |
+			     SQ_TASK_INFO3_SET(vlan_type, VLAN_TYPE) |
+			     SQ_TASK_INFO3_SET(1U, VLAN_TAG_VALID);
+}
+
+/**
+ * Set the corresponding offload information based on ol_flags of the mbuf.
+ *
+ * @param[in] mbuf
+ * Pointer to the mbuf for which offload is to be set in the send queue.
+ * @param[out] task
+ * Pointer to the task section of the send queue (SQ) WQE.
+ * @param[out] wqe_info
+ * Pointer to the wqe_info of the send queue (SQ).
+ * @return
+ * 0 on success, -EINVAL on failure.
+ */
+static int
+hinic3_set_tx_offload(struct rte_mbuf *mbuf, struct hinic3_sq_task *task,
+		      struct hinic3_wqe_info *wqe_info)
+{
+	uint64_t ol_flags = mbuf->ol_flags;
+	u16 pld_offset = 0;
+	u32 queue_info = 0;
+	u16 vlan_tag;
+
+	task->pkt_info0 = 0;
+	task->ip_identify = 0;
+	task->pkt_info2 = 0;
+	task->vlan_offload = 0;
+
+	/* Vlan offload. */
+	if (unlikely(ol_flags & HINIC3_PKT_TX_VLAN_PKT)) {
+		vlan_tag = mbuf->vlan_tci;
+		hinic3_set_vlan_tx_offload(task, vlan_tag, HINIC3_TX_TPID0);
+		task->vlan_offload = hinic3_hw_be32(task->vlan_offload);
+	}
+	/* Cksum offload. */
+	if (!(ol_flags & HINIC3_TX_CKSUM_OFFLOAD_MASK))
+		return 0;
+
+	/* Tso offload. */
+	if (ol_flags & HINIC3_PKT_TX_TCP_SEG) {
+		pld_offset = wqe_info->payload_offset;
+		if ((pld_offset >> 1) > MAX_PAYLOAD_OFFSET)
+			return -EINVAL;
+
+		task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, INNER_L4_EN);
+		task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, INNER_L3_EN);
+
+		queue_info |= SQ_CTRL_QUEUE_INFO_SET(1U, TSO);
+		queue_info |= SQ_CTRL_QUEUE_INFO_SET(pld_offset >> 1, PLDOFF);
+
+		/* Set MSS value. */
+		queue_info = SQ_CTRL_QUEUE_INFO_CLEAR(queue_info, MSS);
+		queue_info |= SQ_CTRL_QUEUE_INFO_SET(mbuf->tso_segsz, MSS);
+	} else {
+		if (ol_flags & HINIC3_PKT_TX_IP_CKSUM)
+			task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, INNER_L3_EN);
+
+		switch (ol_flags & HINIC3_PKT_TX_L4_MASK) {
+		case HINIC3_PKT_TX_TCP_CKSUM:
+		case HINIC3_PKT_TX_UDP_CKSUM:
+		case HINIC3_PKT_TX_SCTP_CKSUM:
+			task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, INNER_L4_EN);
+			break;
+
+		case HINIC3_PKT_TX_L4_NO_CKSUM:
+			break;
+
+		default:
+			PMD_DRV_LOG(INFO, "Unsupported packet type");
+			return -EINVAL;
+		}
+	}
+
+	/* For VXLAN; PKT_TX_TUNNEL_GRE etc. could also be supported. */
+	switch (ol_flags & HINIC3_PKT_TX_TUNNEL_MASK) {
+	case HINIC3_PKT_TX_TUNNEL_VXLAN:
+		task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, TUNNEL_FLAG);
+		break;
+
+	case 0:
+		break;
+
+	default:
+		/* For non UDP/GRE tunneling, drop the tunnel packet. */
+		PMD_DRV_LOG(INFO, "Unsupported tunnel packet type");
+		return -EINVAL;
+	}
+
+	if (ol_flags & HINIC3_PKT_TX_OUTER_IP_CKSUM)
+		task->pkt_info0 |= SQ_TASK_INFO0_SET(1U, OUT_L3_EN);
+
+	task->pkt_info0 = hinic3_hw_be32(task->pkt_info0);
+	task->pkt_info2 = hinic3_hw_be32(task->pkt_info2);
+	wqe_info->queue_info = queue_info;
+
+	return 0;
+}
+
+/**
+ * Check whether the number of segments in the mbuf is valid.
+ *
+ * @param[in] mbuf
+ * Pointer to the mbuf to be verified.
+ * @param[in] wqe_info
+ * Pointer to the wqe_info of the send queue (SQ).
+ * @return
+ * true if valid, false otherwise.
+ */
+static bool
+hinic3_is_tso_sge_valid(struct rte_mbuf *mbuf, struct hinic3_wqe_info *wqe_info)
+{
+	u32 total_len, limit_len, checked_len, left_len, adjust_mss;
+	u32 i, max_sges, left_sges, first_len;
+	struct rte_mbuf *mbuf_head, *mbuf_first;
+	struct rte_mbuf *mbuf_pre = mbuf;
+
+	left_sges = mbuf->nb_segs;
+	mbuf_head = mbuf;
+	mbuf_first = mbuf;
+
+	/* Tso sge number validation. */
+	if (unlikely(left_sges >= HINIC3_NONTSO_PKT_MAX_SGE)) {
+		checked_len = 0;
+		total_len = 0;
+		first_len = 0;
+		adjust_mss = mbuf->tso_segsz >= TX_MSS_MIN ? mbuf->tso_segsz
+							   : TX_MSS_MIN;
+		max_sges = HINIC3_NONTSO_PKT_MAX_SGE - 1;
+		limit_len = adjust_mss + wqe_info->payload_offset;
+
+		for (i = 0; (i < max_sges) && (total_len < limit_len); i++) {
+			total_len += mbuf->data_len;
+			mbuf_pre = mbuf;
+			mbuf = mbuf->next;
+		}
+
+		/* Every run of 38 contiguous mbuf segments must pass one check. */
+		while (left_sges >= HINIC3_NONTSO_PKT_MAX_SGE) {
+			if (total_len >= limit_len) {
+				/* Update the limit len. */
+				limit_len = adjust_mss;
+				/* Update checked len. */
+				checked_len += first_len;
+				/* Record the first len. */
+				first_len = mbuf_first->data_len;
+				/* First mbuf move to the next. */
+				mbuf_first = mbuf_first->next;
+				/* Update total len. */
+				total_len -= first_len;
+				left_sges--;
+				i--;
+				for (;
+				     (i < max_sges) && (total_len < limit_len);
+				     i++) {
+					total_len += mbuf->data_len;
+					mbuf_pre = mbuf;
+					mbuf = mbuf->next;
+				}
+			} else {
+				/* Try to copy if not valid. */
+				checked_len += (total_len - mbuf_pre->data_len);
+
+				left_len = mbuf_head->pkt_len - checked_len;
+				if (left_len > HINIC3_COPY_MBUF_SIZE)
+					return false;
+				wqe_info->sge_cnt = (u16)(mbuf_head->nb_segs +
+							  i - left_sges);
+				wqe_info->cpy_mbuf_cnt = 1;
+
+				return true;
+			}
+		} /* End of while loop. */
+	}
+
+	wqe_info->sge_cnt = mbuf_head->nb_segs;
+
+	return true;
+}
+
+/**
+ * Checks and processes transport offload information for data packets.
+ *
+ * @param[in] mbuf
+ * Pointer to the mbuf to send.
+ * @param[out] wqe_info
+ * Pointer to the wqe_info of the send queue (SQ).
+ * @return
+ * 0 on success, -EINVAL on failure.
+ */
+static int
+hinic3_get_tx_offload(struct rte_mbuf *mbuf, struct hinic3_wqe_info *wqe_info)
+{
+	uint64_t ol_flags = mbuf->ol_flags;
+	u16 i, total_len, inner_l3_offset = 0;
+	int err;
+	struct rte_mbuf *mbuf_pkt = NULL;
+
+	wqe_info->sge_cnt = mbuf->nb_segs;
+	/* Check if the packet set available offload flags. */
+	if (!(ol_flags & HINIC3_TX_OFFLOAD_MASK)) {
+		wqe_info->offload = 0;
+		return 0;
+	}
+
+	wqe_info->offload = 1;
+	err = hinic3_tx_offload_pkt_prepare(mbuf, &inner_l3_offset);
+	if (err)
+		return err;
+
+	/* A non-TSO mbuf only needs its SGE count checked. */
+	if (likely(!(mbuf->ol_flags & HINIC3_PKT_TX_TCP_SEG))) {
+		if (unlikely(mbuf->pkt_len > MAX_SINGLE_SGE_SIZE))
+			/* Non-TSO packet length must be less than 64KB. */
+			return -EINVAL;
+
+		if (likely(HINIC3_NONTSO_SEG_NUM_VALID(mbuf->nb_segs)))
+			/* Valid non-tso mbuf. */
+			return 0;
+
+		/*
+		 * A non-TSO packet may use at most 38 SGEs; for chains with
+		 * more segments, the excess must be copied into another
+		 * buffer.
+		 */
+		total_len = 0;
+		mbuf_pkt = mbuf;
+		for (i = 0; i < (HINIC3_NONTSO_PKT_MAX_SGE - 1); i++) {
+			total_len += mbuf_pkt->data_len;
+			mbuf_pkt = mbuf_pkt->next;
+		}
+
+		/* By default, up to 4KB of mbuf data can be copied. */
+		if ((u32)(total_len + (u16)HINIC3_COPY_MBUF_SIZE) <
+		    mbuf->pkt_len)
+			return -EINVAL;
+
+		wqe_info->sge_cnt = HINIC3_NONTSO_PKT_MAX_SGE;
+		wqe_info->cpy_mbuf_cnt = 1;
+
+		return 0;
+	}
+
+	/* Tso mbuf. */
+	wqe_info->payload_offset =
+		inner_l3_offset + mbuf->l3_len + mbuf->l4_len;
+
+	/* Too many mbuf segs. */
+	if (unlikely(HINIC3_TSO_SEG_NUM_INVALID(mbuf->nb_segs)))
+		return -EINVAL;
+
+	/* Check whether can cover all tso mbuf segs or not. */
+	if (unlikely(!hinic3_is_tso_sge_valid(mbuf, wqe_info)))
+		return -EINVAL;
+
+	return 0;
+}
+
+static inline void
+hinic3_set_buf_desc(struct hinic3_sq_bufdesc *buf_descs, rte_iova_t addr,
+		    u32 len)
+{
+	buf_descs->hi_addr = hinic3_hw_be32(upper_32_bits(addr));
+	buf_descs->lo_addr = hinic3_hw_be32(lower_32_bits(addr));
+	buf_descs->len = hinic3_hw_be32(len);
+}
+
+static inline struct rte_mbuf *
+hinic3_alloc_cpy_mbuf(struct hinic3_nic_dev *nic_dev)
+{
+	return rte_pktmbuf_alloc(nic_dev->cpy_mpool);
+}
+
+/**
+ * Copy an mbuf chain into a single buffer for the send queue (SQ).
+ *
+ * @param[in] nic_dev
+ * Pointer to the NIC device.
+ * @param[in] mbuf
+ * Pointer to the source mbuf.
+ * @param[in] sge_cnt
+ * Number of mbuf segments to be copied.
+ * @return
+ * The address of the copied mbuf, or NULL on failure.
+ */
+static void *
+hinic3_copy_tx_mbuf(struct hinic3_nic_dev *nic_dev, struct rte_mbuf *mbuf,
+		    u16 sge_cnt)
+{
+	struct rte_mbuf *dst_mbuf;
+	u32 offset = 0;
+	u16 i;
+
+	if (unlikely(!nic_dev->cpy_mpool))
+		return NULL;
+
+	dst_mbuf = hinic3_alloc_cpy_mbuf(nic_dev);
+	if (unlikely(!dst_mbuf))
+		return NULL;
+
+	dst_mbuf->data_off = 0;
+	dst_mbuf->data_len = 0;
+	for (i = 0; i < sge_cnt; i++) {
+		rte_memcpy((u8 *)dst_mbuf->buf_addr + offset,
+			   (u8 *)mbuf->buf_addr + mbuf->data_off,
+			   mbuf->data_len);
+		dst_mbuf->data_len += mbuf->data_len;
+		offset += mbuf->data_len;
+		mbuf = mbuf->next;
+	}
+	dst_mbuf->pkt_len = dst_mbuf->data_len;
+
+	return dst_mbuf;
+}
+
+/**
+ * Map the TX mbuf to the DMA address space and set related information for
+ * subsequent DMA transmission.
+ *
+ * @param[in] txq
+ * Pointer to the send queue.
+ * @param[in] mbuf
+ * Pointer to the tx mbuf.
+ * @param[out] wqe_combo
+ * Pointer to the wqe_combo of the send queue.
+ * @param[in] wqe_info
+ * Pointer to the wqe_info of the send queue (SQ).
+ * @return
+ * 0 on success, -EINVAL on failure.
+ */
+static int
+hinic3_mbuf_dma_map_sge(struct hinic3_txq *txq, struct rte_mbuf *mbuf,
+			struct hinic3_sq_wqe_combo *wqe_combo,
+			struct hinic3_wqe_info *wqe_info)
+{
+	struct hinic3_sq_wqe_desc *wqe_desc = wqe_combo->hdr;
+	struct hinic3_sq_bufdesc *buf_desc = wqe_combo->bds_head;
+
+	uint16_t nb_segs = wqe_info->sge_cnt - wqe_info->cpy_mbuf_cnt;
+	uint16_t real_segs = mbuf->nb_segs;
+	rte_iova_t dma_addr;
+	u32 i;
+
+	for (i = 0; i < nb_segs; i++) {
+		if (unlikely(mbuf == NULL)) {
+			txq->txq_stats.mbuf_null++;
+			return -EINVAL;
+		}
+
+		if (unlikely(mbuf->data_len == 0)) {
+			txq->txq_stats.sge_len0++;
+			return -EINVAL;
+		}
+
+		dma_addr = rte_mbuf_data_iova(mbuf);
+		if (i == 0) {
+			if (wqe_combo->wqe_type == SQ_WQE_COMPACT_TYPE &&
+			    mbuf->data_len > COMPACT_WQE_MAX_CTRL_LEN) {
+				txq->txq_stats.sge_len_too_large++;
+				return -EINVAL;
+			}
+
+			wqe_desc->hi_addr =
+				hinic3_hw_be32(upper_32_bits(dma_addr));
+			wqe_desc->lo_addr =
+				hinic3_hw_be32(lower_32_bits(dma_addr));
+			wqe_desc->ctrl_len = mbuf->data_len;
+		} else {
+			/*
+			 * Part of the WQE is at the SQ bottom while the
+			 * rest wraps to the SQ head.
+			 */
+			if (unlikely(wqe_info->wrapped &&
+				     (u64)buf_desc == txq->sq_bot_sge_addr))
+				buf_desc = (struct hinic3_sq_bufdesc *)
+					   (void *)txq->sq_head_addr;
+
+			hinic3_set_buf_desc(buf_desc, dma_addr, mbuf->data_len);
+			buf_desc++;
+		}
+		mbuf = mbuf->next;
+	}
+
+	/* To support more than 38 SGEs, copy the trailing mbuf segments into one buffer. */
+	if (unlikely(wqe_info->cpy_mbuf_cnt != 0)) {
+		/*
+		 * Copy the excess mbuf segments into one valid buffer,
+		 * at a performance cost.
+		 */
+		txq->txq_stats.cpy_pkts += 1;
+		mbuf = hinic3_copy_tx_mbuf(txq->nic_dev, mbuf,
+					   real_segs - nb_segs);
+		if (unlikely(!mbuf))
+			return -EINVAL;
+
+		txq->tx_info[wqe_info->pi].cpy_mbuf = mbuf;
+
+		/* Deal with the last mbuf. */
+		dma_addr = rte_mbuf_data_iova(mbuf);
+		if (unlikely(mbuf->data_len == 0)) {
+			txq->txq_stats.sge_len0++;
+			return -EINVAL;
+		}
+		/*
+		 * Part of the WQE is at the SQ bottom while the
+		 * rest wraps to the SQ head.
+		 */
+		if (i == 0) {
+			wqe_desc->hi_addr =
+				hinic3_hw_be32(upper_32_bits(dma_addr));
+			wqe_desc->lo_addr =
+				hinic3_hw_be32(lower_32_bits(dma_addr));
+			wqe_desc->ctrl_len = mbuf->data_len;
+		} else {
+			if (unlikely(wqe_info->wrapped &&
+				     ((u64)buf_desc == txq->sq_bot_sge_addr)))
+				buf_desc = (struct hinic3_sq_bufdesc *)
+						   txq->sq_head_addr;
+
+			hinic3_set_buf_desc(buf_desc, dma_addr, mbuf->data_len);
+		}
+	}
+
+	return 0;
+}
+
+/**
+ * Sets and configures fields in the transmit queue control descriptor based on
+ * the WQE type.
+ *
+ * @param[out] wqe_combo
+ * Pointer to the wqe_combo of the send queue.
+ * @param[in] wqe_info
+ * Pointer to the wqe_info of the send queue.
+ */
+static void
+hinic3_prepare_sq_ctrl(struct hinic3_sq_wqe_combo *wqe_combo,
+		       struct hinic3_wqe_info *wqe_info)
+{
+	struct hinic3_sq_wqe_desc *wqe_desc = wqe_combo->hdr;
+
+	if (wqe_combo->wqe_type == SQ_WQE_COMPACT_TYPE) {
+		wqe_desc->ctrl_len |=
+			SQ_CTRL_SET(SQ_NORMAL_WQE, DATA_FORMAT) |
+			SQ_CTRL_SET(wqe_combo->wqe_type, EXTENDED) |
+			SQ_CTRL_SET(wqe_info->owner, OWNER);
+		wqe_desc->ctrl_len = hinic3_hw_be32(wqe_desc->ctrl_len);
+
+		/* The compact WQE's queue_info is transferred to the microcode; clear it. */
+		wqe_desc->queue_info = 0;
+
+		return;
+	}
+
+	wqe_desc->ctrl_len |= SQ_CTRL_SET(wqe_info->sge_cnt, BUFDESC_NUM) |
+			      SQ_CTRL_SET(wqe_combo->task_type, TASKSECT_LEN) |
+			      SQ_CTRL_SET(SQ_NORMAL_WQE, DATA_FORMAT) |
+			      SQ_CTRL_SET(wqe_combo->wqe_type, EXTENDED) |
+			      SQ_CTRL_SET(wqe_info->owner, OWNER);
+
+	wqe_desc->ctrl_len = hinic3_hw_be32(wqe_desc->ctrl_len);
+
+	wqe_desc->queue_info = wqe_info->queue_info;
+	wqe_desc->queue_info |= SQ_CTRL_QUEUE_INFO_SET(1U, UC);
+
+	if (!SQ_CTRL_QUEUE_INFO_GET(wqe_desc->queue_info, MSS)) {
+		wqe_desc->queue_info |=
+			SQ_CTRL_QUEUE_INFO_SET(TX_MSS_DEFAULT, MSS);
+	} else if (SQ_CTRL_QUEUE_INFO_GET(wqe_desc->queue_info, MSS) <
+		   TX_MSS_MIN) {
+		/* MSS must not be less than 80. */
+		wqe_desc->queue_info =
+			SQ_CTRL_QUEUE_INFO_CLEAR(wqe_desc->queue_info, MSS);
+		wqe_desc->queue_info |= SQ_CTRL_QUEUE_INFO_SET(TX_MSS_MIN, MSS);
+	}
+
+	wqe_desc->queue_info = hinic3_hw_be32(wqe_desc->queue_info);
+}
+
+/**
+ * Transmit a burst of packets on the specified send queue.
+ *
+ * @param[in] tx_queue
+ * Pointer to the send queue.
+ * @param[in] tx_pkts
+ * Pointer to the array of packets to be sent.
+ * @param[in] nb_pkts
+ * Maximum number of packets to send.
+ * @return
+ * Number of packets actually sent.
+ */
+u16
+hinic3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts)
+{
+	struct hinic3_txq *txq = tx_queue;
+	struct hinic3_tx_info *tx_info = NULL;
+	struct rte_mbuf *mbuf_pkt = NULL;
+	struct hinic3_sq_wqe_combo wqe_combo = {0};
+	struct hinic3_sq_wqe *sq_wqe = NULL;
+	struct hinic3_wqe_info wqe_info = {0};
+
+	u32 offload_err, free_cnt;
+	u64 tx_bytes = 0;
+	u16 free_wqebb_cnt, nb_tx;
+	int err;
+
+#ifdef HINIC3_XSTAT_PROF_TX
+	uint64_t t1, t2;
+	t1 = rte_get_tsc_cycles();
+#endif
+
+	if (unlikely(!HINIC3_TXQ_IS_STARTED(txq)))
+		return 0;
+
+	free_cnt = txq->tx_free_thresh;
+	/* Reclaim tx mbuf before xmit new packets. */
+	if (hinic3_get_sq_free_wqebbs(txq) < txq->tx_free_thresh)
+		hinic3_xmit_mbuf_cleanup(txq, free_cnt);
+
+	/* Tx loop routine. */
+	for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
+		mbuf_pkt = *tx_pkts++;
+		if (unlikely(hinic3_get_tx_offload(mbuf_pkt, &wqe_info))) {
+			txq->txq_stats.offload_errors++;
+			break;
+		}
+
+		if (!wqe_info.offload)
+			wqe_info.wqebb_cnt = wqe_info.sge_cnt;
+		else
+			/* Use extended sq wqe with normal TS. */
+			wqe_info.wqebb_cnt = wqe_info.sge_cnt + 1;
+
+		free_wqebb_cnt = hinic3_get_sq_free_wqebbs(txq);
+		if (unlikely(wqe_info.wqebb_cnt > free_wqebb_cnt)) {
+			/* Reclaim again. */
+			hinic3_xmit_mbuf_cleanup(txq, free_cnt);
+			free_wqebb_cnt = hinic3_get_sq_free_wqebbs(txq);
+			if (unlikely(wqe_info.wqebb_cnt > free_wqebb_cnt)) {
+				txq->txq_stats.tx_busy += (nb_pkts - nb_tx);
+				break;
+			}
+		}
+
+		/* Get sq wqe address from wqe_page. */
+		sq_wqe = hinic3_get_sq_wqe(txq, &wqe_info);
+		if (unlikely(!sq_wqe)) {
+			txq->txq_stats.tx_busy++;
+			break;
+		}
+
+		/* The task or BD section may be wrapped for one WQE. */
+		hinic3_set_wqe_combo(txq, &wqe_combo, sq_wqe, &wqe_info);
+
+		wqe_info.queue_info = 0;
+		/* Fill tx packet offload into qsf and task field. */
+		if (wqe_info.offload) {
+			offload_err = hinic3_set_tx_offload(mbuf_pkt,
+							    wqe_combo.task,
+							    &wqe_info);
+			if (unlikely(offload_err)) {
+				hinic3_put_sq_wqe(txq, &wqe_info);
+				txq->txq_stats.offload_errors++;
+				break;
+			}
+		}
+
+		/* Fill sq_wqe buf_desc and bd_desc. */
+		err = hinic3_mbuf_dma_map_sge(txq, mbuf_pkt, &wqe_combo,
+					      &wqe_info);
+		if (err) {
+			hinic3_put_sq_wqe(txq, &wqe_info);
+			txq->txq_stats.offload_errors++;
+			break;
+		}
+
+		/* Record tx info. */
+		tx_info = &txq->tx_info[wqe_info.pi];
+		tx_info->mbuf = mbuf_pkt;
+		tx_info->wqebb_cnt = wqe_info.wqebb_cnt;
+
+		hinic3_prepare_sq_ctrl(&wqe_combo, &wqe_info);
+
+		tx_bytes += mbuf_pkt->pkt_len;
+	}
+
+	/* Update txq stats. */
+	if (nb_tx) {
+		hinic3_write_db(txq->db_addr, txq->q_id, (int)(txq->cos),
+				SQ_CFLAG_DP,
+				MASKED_QUEUE_IDX(txq, txq->prod_idx));
+		txq->txq_stats.packets += nb_tx;
+		txq->txq_stats.bytes += tx_bytes;
+	}
+	txq->txq_stats.burst_pkts = nb_tx;
+
+#ifdef HINIC3_XSTAT_PROF_TX
+	t2 = rte_get_tsc_cycles();
+	txq->txq_stats.app_tsc = t1 - txq->prof_tx_end_tsc;
+	txq->prof_tx_end_tsc = t2;
+	txq->txq_stats.pmd_tsc = t2 - t1;
+	txq->txq_stats.burst_pkts = nb_tx;
+#endif
+
+	return nb_tx;
+}
+
 int
 hinic3_stop_sq(struct hinic3_txq *txq)
 {
diff --git a/drivers/net/hinic3/hinic3_tx.h b/drivers/net/hinic3/hinic3_tx.h
index f4c61ea1b1..6026b3fabc 100644
--- a/drivers/net/hinic3/hinic3_tx.h
+++ b/drivers/net/hinic3/hinic3_tx.h
@@ -308,6 +308,7 @@ struct __rte_cache_aligned hinic3_txq {
 void hinic3_flush_txqs(struct hinic3_nic_dev *nic_dev);
 void hinic3_free_txq_mbufs(struct hinic3_txq *txq);
 void hinic3_free_all_txq_mbufs(struct hinic3_nic_dev *nic_dev);
+u16 hinic3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts);
 int hinic3_stop_sq(struct hinic3_txq *txq);
 int hinic3_start_all_sqs(struct rte_eth_dev *eth_dev);
 int hinic3_tx_done_cleanup(void *txq, uint32_t free_cnt);
-- 
2.47.0.windows.2


^ permalink raw reply	[flat|nested] 16+ messages in thread

* [RFC 15/18] net/hinic3: add MML and EEPROM access feature
  2025-04-18  7:02 [RFC 00/18] add hinic3 PMD driver Feifei Wang
                   ` (5 preceding siblings ...)
  2025-04-18  7:02 ` [RFC 14/18] net/hinic3: add Rx/Tx functions Feifei Wang
@ 2025-04-18  7:02 ` Feifei Wang
  2025-04-18  7:02 ` [RFC 16/18] net/hinic3: add RSS promiscuous ops Feifei Wang
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 16+ messages in thread
From: Feifei Wang @ 2025-04-18  7:02 UTC (permalink / raw)
  To: dev; +Cc: Xin Wang, Feifei Wang, Yi Chen

From: Xin Wang <wangxin679@h-partners.com>

Add man-machine language (MML) support and implement the get-EEPROM method.

Signed-off-by: Xin Wang <wangxin679@h-partners.com>
Reviewed-by: Feifei Wang <wangfeifei40@huawei.com>
Reviewed-by: Yi Chen <chenyi221@huawei.com>
---
 drivers/net/hinic3/hinic3_ethdev.c        |  13 +
 drivers/net/hinic3/mml/hinic3_dbg.c       | 171 +++++
 drivers/net/hinic3/mml/hinic3_dbg.h       | 160 +++++
 drivers/net/hinic3/mml/hinic3_mml_cmd.c   | 375 +++++++++++
 drivers/net/hinic3/mml/hinic3_mml_cmd.h   | 131 ++++
 drivers/net/hinic3/mml/hinic3_mml_ioctl.c | 215 +++++++
 drivers/net/hinic3/mml/hinic3_mml_lib.c   | 136 ++++
 drivers/net/hinic3/mml/hinic3_mml_lib.h   | 275 ++++++++
 drivers/net/hinic3/mml/hinic3_mml_main.c  | 167 +++++
 drivers/net/hinic3/mml/hinic3_mml_queue.c | 749 ++++++++++++++++++++++
 drivers/net/hinic3/mml/hinic3_mml_queue.h | 256 ++++++++
 drivers/net/hinic3/mml/meson.build        |  62 ++
 12 files changed, 2710 insertions(+)
 create mode 100644 drivers/net/hinic3/mml/hinic3_dbg.c
 create mode 100644 drivers/net/hinic3/mml/hinic3_dbg.h
 create mode 100644 drivers/net/hinic3/mml/hinic3_mml_cmd.c
 create mode 100644 drivers/net/hinic3/mml/hinic3_mml_cmd.h
 create mode 100644 drivers/net/hinic3/mml/hinic3_mml_ioctl.c
 create mode 100644 drivers/net/hinic3/mml/hinic3_mml_lib.c
 create mode 100644 drivers/net/hinic3/mml/hinic3_mml_lib.h
 create mode 100644 drivers/net/hinic3/mml/hinic3_mml_main.c
 create mode 100644 drivers/net/hinic3/mml/hinic3_mml_queue.c
 create mode 100644 drivers/net/hinic3/mml/hinic3_mml_queue.h
 create mode 100644 drivers/net/hinic3/mml/meson.build

diff --git a/drivers/net/hinic3/hinic3_ethdev.c b/drivers/net/hinic3/hinic3_ethdev.c
index 7cd101e5c3..9c5decb867 100644
--- a/drivers/net/hinic3/hinic3_ethdev.c
+++ b/drivers/net/hinic3/hinic3_ethdev.c
@@ -21,6 +21,7 @@
 #include "base/hinic3_hw_comm.h"
 #include "base/hinic3_nic_cfg.h"
 #include "base/hinic3_nic_event.h"
+#include "mml/hinic3_mml_lib.h"
 #include "hinic3_nic_io.h"
 #include "hinic3_tx.h"
 #include "hinic3_rx.h"
@@ -2276,6 +2277,16 @@ hinic3_dev_allmulticast_disable(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int
+hinic3_get_eeprom(__rte_unused struct rte_eth_dev *dev,
+		  struct rte_dev_eeprom_info *info)
+{
+#define MAX_BUF_OUT_LEN 2048
+
+	return hinic3_pmd_mml_lib(info->data, info->offset, info->data,
+				  &info->length, MAX_BUF_OUT_LEN);
+}
+
 /**
  * Get device generic statistics.
  *
@@ -2879,6 +2890,7 @@ static const struct eth_dev_ops hinic3_pmd_ops = {
 	.vlan_offload_set              = hinic3_vlan_offload_set,
 	.allmulticast_enable           = hinic3_dev_allmulticast_enable,
 	.allmulticast_disable          = hinic3_dev_allmulticast_disable,
+	.get_eeprom                    = hinic3_get_eeprom,
 	.stats_get                     = hinic3_dev_stats_get,
 	.stats_reset                   = hinic3_dev_stats_reset,
 	.xstats_get                    = hinic3_dev_xstats_get,
@@ -2919,6 +2931,7 @@ static const struct eth_dev_ops hinic3_pmd_vf_ops = {
 	.vlan_offload_set              = hinic3_vlan_offload_set,
 	.allmulticast_enable           = hinic3_dev_allmulticast_enable,
 	.allmulticast_disable          = hinic3_dev_allmulticast_disable,
+	.get_eeprom                    = hinic3_get_eeprom,
 	.stats_get                     = hinic3_dev_stats_get,
 	.stats_reset                   = hinic3_dev_stats_reset,
 	.xstats_get                    = hinic3_dev_xstats_get,
diff --git a/drivers/net/hinic3/mml/hinic3_dbg.c b/drivers/net/hinic3/mml/hinic3_dbg.c
new file mode 100644
index 0000000000..7525b68dee
--- /dev/null
+++ b/drivers/net/hinic3/mml/hinic3_dbg.c
@@ -0,0 +1,171 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025. All rights reserved.
+ */
+
+#include "hinic3_compat.h"
+#include "hinic3_hwif.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_wq.h"
+#include "hinic3_nic_cfg.h"
+#include "hinic3_ethdev.h"
+#include "hinic3_rx.h"
+#include "hinic3_tx.h"
+#include "hinic3_nic_io.h"
+#include "hinic3_dbg.h"
+
+#define DB_IDX(db, db_base) \
+	((u32)(((ulong)(db) - (ulong)(db_base)) / HINIC3_DB_PAGE_SIZE))
+
+int
+hinic3_dbg_get_rq_info(void *hwdev, uint16_t q_id,
+		       struct hinic3_dbg_rq_info *rq_info, u16 *msg_size)
+{
+	struct hinic3_hwdev *dev = (struct hinic3_hwdev *)hwdev;
+	struct hinic3_nic_dev *nic_dev =
+		(struct hinic3_nic_dev *)dev->dev_handle;
+	struct hinic3_rxq *rxq = NULL;
+
+	if (q_id >= nic_dev->num_rqs) {
+		PMD_DRV_LOG(ERR, "Invalid rx queue id, q_id: %d, num_rqs: %d",
+			    q_id, nic_dev->num_rqs);
+		return -EINVAL;
+	}
+
+	rq_info->q_id = q_id;
+	rxq = nic_dev->rxqs[q_id];
+
+	rq_info->hw_pi = (u16)cpu_to_be16(*rxq->pi_virt_addr);
+	rq_info->ci = rxq->cons_idx & rxq->q_mask;
+	rq_info->sw_pi = rxq->prod_idx & rxq->q_mask;
+	rq_info->wqebb_size = HINIC3_SQ_WQEBB_SIZE;
+	rq_info->q_depth = rxq->q_depth;
+	rq_info->buf_len = rxq->buf_len;
+	rq_info->ci_wqe_page_addr = rxq->queue_buf_vaddr;
+	rq_info->ci_cla_tbl_addr = NULL;
+	rq_info->msix_idx = 0;
+	rq_info->msix_vector = 0;
+
+	*msg_size = sizeof(*rq_info);
+
+	return 0;
+}
+
+int
+hinic3_dbg_get_rx_cqe_info(void *hwdev, uint16_t q_id, uint16_t idx,
+			   void *buf_out, uint16_t *out_size)
+{
+	struct hinic3_hwdev *dev = (struct hinic3_hwdev *)hwdev;
+	struct hinic3_nic_dev *nic_dev =
+		(struct hinic3_nic_dev *)dev->dev_handle;
+
+	if (q_id >= nic_dev->num_rqs || idx >= nic_dev->rxqs[q_id]->q_depth)
+		return -EFAULT;
+
+	(void)memcpy(buf_out, (void *)&nic_dev->rxqs[q_id]->rx_cqe[idx],
+		     sizeof(struct hinic3_rq_cqe));
+	*out_size = sizeof(struct hinic3_rq_cqe);
+
+	return 0;
+}
+
+int
+hinic3_dbg_get_sq_info(void *dev, u16 q_id, struct hinic3_dbg_sq_info *sq_info,
+		       u16 *msg_size)
+{
+	struct hinic3_hwdev *hwdev = (struct hinic3_hwdev *)dev;
+	struct hinic3_nic_dev *nic_dev =
+		(struct hinic3_nic_dev *)hwdev->dev_handle;
+	struct hinic3_txq *txq = NULL;
+
+	if (q_id >= nic_dev->num_sqs) {
+		PMD_DRV_LOG(ERR, "Invalid tx queue id, q_id: %d, num_sqs: %d",
+			    q_id, nic_dev->num_sqs);
+		return -EINVAL;
+	}
+
+	sq_info->q_id = q_id;
+	txq = nic_dev->txqs[q_id];
+
+	sq_info->pi = txq->prod_idx & txq->q_mask;
+	sq_info->ci = txq->cons_idx & txq->q_mask;
+	sq_info->fi = (*(u16 *)txq->ci_vaddr_base) & txq->q_mask;
+	sq_info->q_depth = txq->q_depth;
+	sq_info->weqbb_size = HINIC3_SQ_WQEBB_SIZE;
+	sq_info->ci_addr =
+		(volatile u16 *)HINIC3_CI_VADDR(txq->ci_vaddr_base, q_id);
+	sq_info->cla_addr = txq->queue_buf_paddr;
+	sq_info->db_addr.phy_addr = (u64 *)txq->db_addr;
+	sq_info->pg_idx = DB_IDX(txq->db_addr, hwdev->hwif->db_base);
+
+	*msg_size = sizeof(*sq_info);
+
+	return 0;
+}
+
+int
+hinic3_dbg_get_sq_wqe_info(void *dev, u16 q_id, u16 idx, u16 wqebb_cnt, u8 *wqe,
+			   u16 *wqe_size)
+{
+	struct hinic3_hwdev *hwdev = (struct hinic3_hwdev *)dev;
+	struct hinic3_nic_dev *nic_dev =
+		(struct hinic3_nic_dev *)hwdev->dev_handle;
+	struct hinic3_txq *txq = NULL;
+	void *src_wqe = NULL;
+	u32 offset;
+
+	if (q_id >= nic_dev->num_sqs) {
+		PMD_DRV_LOG(ERR, "Invalid tx queue id, q_id: %d, num_sqs: %d",
+			    q_id, nic_dev->num_sqs);
+		return -EINVAL;
+	}
+
+	txq = nic_dev->txqs[q_id];
+	if (idx + wqebb_cnt > txq->q_depth)
+		return -EFAULT;
+
+	src_wqe = (void *)txq->queue_buf_vaddr;
+	offset = (u32)idx << txq->wqebb_shift;
+
+	(void)memcpy((void *)wqe, (void *)((u8 *)src_wqe + offset),
+		     (size_t)((u32)wqebb_cnt << txq->wqebb_shift));
+
+	*wqe_size = (u16)((u32)wqebb_cnt << txq->wqebb_shift);
+	return 0;
+}
+
+int
+hinic3_dbg_get_rq_wqe_info(void *dev, u16 q_id, u16 idx, u16 wqebb_cnt, u8 *wqe,
+			   u16 *wqe_size)
+{
+	struct hinic3_hwdev *hwdev = (struct hinic3_hwdev *)dev;
+	struct hinic3_nic_dev *nic_dev =
+		(struct hinic3_nic_dev *)hwdev->dev_handle;
+	struct hinic3_rxq *rxq = NULL;
+	void *src_wqe = NULL;
+	u32 offset;
+
+	if (q_id >= nic_dev->num_rqs) {
+		PMD_DRV_LOG(ERR, "Invalid rx queue id, q_id: %d, num_rqs: %d",
+			    q_id, nic_dev->num_rqs);
+		return -EINVAL;
+	}
+
+	rxq = nic_dev->rxqs[q_id];
+	if (idx + wqebb_cnt > rxq->q_depth)
+		return -EFAULT;
+
+	src_wqe = (void *)rxq->queue_buf_vaddr;
+	offset = (u32)idx << rxq->wqebb_shift;
+
+	(void)memcpy((void *)wqe, (void *)((u8 *)src_wqe + offset),
+		     (size_t)((u32)wqebb_cnt << rxq->wqebb_shift));
+
+	*wqe_size = (u16)((u32)wqebb_cnt << rxq->wqebb_shift);
+	return 0;
+}
diff --git a/drivers/net/hinic3/mml/hinic3_dbg.h b/drivers/net/hinic3/mml/hinic3_dbg.h
new file mode 100644
index 0000000000..bac96c84a0
--- /dev/null
+++ b/drivers/net/hinic3/mml/hinic3_dbg.h
@@ -0,0 +1,160 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025. All rights reserved.
+ */
+
+#ifndef _HINIC3_MML_DBG_H
+#define _HINIC3_MML_DBG_H
+
+/* nic_tool */
+struct hinic3_tx_hw_page {
+	u64 *phy_addr;
+	u64 *map_addr;
+};
+
+/* nic_tool */
+struct hinic3_dbg_sq_info {
+	u16 q_id;
+	u16 pi;
+	u16 ci; /**< sw_ci */
+	u16 fi; /**< hw_ci */
+
+	u32 q_depth;
+	u16 weqbb_size;
+
+	volatile u16 *ci_addr;
+	u64 cla_addr;
+
+	struct hinic3_tx_hw_page db_addr;
+	u32 pg_idx;
+};
+
+/* nic_tool */
+struct hinic3_dbg_rq_info {
+	u16 q_id;
+	u16 hw_pi;
+	u16 ci; /**< sw_ci */
+	u16 sw_pi;
+	u16 wqebb_size;
+	u16 q_depth;
+	u16 buf_len;
+
+	void *ci_wqe_page_addr;
+	void *ci_cla_tbl_addr;
+	u16 msix_idx;
+	u32 msix_vector;
+};
+
+void *hinic3_dbg_get_sq_wq_handle(void *hwdev, u16 q_id);
+
+void *hinic3_dbg_get_rq_wq_handle(void *hwdev, u16 q_id);
+
+void *hinic3_dbg_get_sq_ci_addr(void *hwdev, u16 q_id);
+
+u16 hinic3_dbg_get_global_qpn(void *hwdev);
+
+/**
+ * Get details of specified RX queue and store in `rq_info`.
+ *
+ * @param[in] hwdev
+ * Pointer to the hardware device.
+ * @param[in] q_id
+ * RX queue ID.
+ * @param[out] rq_info
+ * Structure to store RX queue information.
+ * @param[out] msg_size
+ * Size (in bytes) of the returned rq_info structure.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int hinic3_dbg_get_rq_info(void *hwdev, uint16_t q_id,
+			   struct hinic3_dbg_rq_info *rq_info, u16 *msg_size);
+
+/**
+ * Get the RX CQE at the specified index from the given RX queue.
+ *
+ * @param[in] hwdev
+ * Pointer to hardware device structure.
+ * @param[in] q_id
+ * RX queue ID.
+ * @param[in] idx
+ * Index of the CQE.
+ * @param[out] buf_out
+ * Buffer to store the CQE.
+ * @param[out] out_size
+ * Size of the CQE.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int hinic3_dbg_get_rx_cqe_info(void *hwdev, uint16_t q_id, uint16_t idx,
+			       void *buf_out, uint16_t *out_size);
+
+/**
+ * Get SQ information for debugging.
+ *
+ * @param[in] hwdev
+ * Pointer to the hardware device.
+ * @param[in] q_id
+ * ID of SQ to retrieve information for.
+ * @param[out] sq_info
+ * Pointer to the structure where the SQ information will be stored.
+ * @param[out] msg_size
+ * The size (in bytes) of the `sq_info` structure.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ * - -EINVAL if the queue ID is invalid.
+ */
+int hinic3_dbg_get_sq_info(void *hwdev, u16 q_id,
+			   struct hinic3_dbg_sq_info *sq_info, u16 *msg_size);
+
+/**
+ * Get WQE information from a send queue.
+ *
+ * @param[in] hwdev
+ * Pointer to the hardware device.
+ * @param[in] q_id
+ * The ID of the send queue from which to retrieve WQE information.
+ * @param[in] idx
+ * The index of the first WQE to retrieve.
+ * @param[in] wqebb_cnt
+ * The number of WQEBBs to retrieve.
+ * @param[out] wqe
+ * Pointer to the buffer where the WQE data will be stored.
+ * @param[out] wqe_size
+ * The size (in bytes) of the retrieved WQE data.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ * - -EINVAL if queue ID invalid.
+ * - -EFAULT if index invalid.
+ */
+int hinic3_dbg_get_sq_wqe_info(void *hwdev, u16 q_id, u16 idx, u16 wqebb_cnt,
+			       u8 *wqe, u16 *wqe_size);
+
+/**
+ * Get WQE information from a receive queue.
+ *
+ * @param[in] hwdev
+ * Pointer to the hardware device.
+ * @param[in] q_id
+ * The ID of the receive queue from which to retrieve WQE information.
+ * @param[in] idx
+ * The index of the first WQE to retrieve.
+ * @param[in] wqebb_cnt
+ * The number of WQEBBs to retrieve.
+ * @param[out] wqe
+ * Pointer to the buffer where the WQE data will be stored.
+ * @param[out] wqe_size
+ * The size (in bytes) of the retrieved WQE data.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ * - -EINVAL if queue ID invalid.
+ * - -EFAULT if index invalid.
+ */
+int hinic3_dbg_get_rq_wqe_info(void *hwdev, u16 q_id, u16 idx, u16 wqebb_cnt,
+			       u8 *wqe, u16 *wqe_size);
+
+#endif /* _HINIC3_MML_DBG_H */
diff --git a/drivers/net/hinic3/mml/hinic3_mml_cmd.c b/drivers/net/hinic3/mml/hinic3_mml_cmd.c
new file mode 100644
index 0000000000..06d20a62bd
--- /dev/null
+++ b/drivers/net/hinic3/mml/hinic3_mml_cmd.c
@@ -0,0 +1,375 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025. All rights reserved.
+ */
+
+#include "hinic3_mml_lib.h"
+#include "hinic3_compat.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_nic_cfg.h"
+#include "hinic3_ethdev.h"
+#include "hinic3_mml_cmd.h"
+
+/**
+ * Compares two strings for equality.
+ *
+ * @param[in] command
+ * The first string to compare.
+ * @param[in] argument
+ * The second string to compare.
+ *
+ * @return
+ * UDA_TRUE if the strings are equal, otherwise UDA_FALSE.
+ */
+static int
+string_cmp(const char *command, const char *argument)
+{
+	if (!command || !argument)
+		return UDA_FALSE;
+
+	return (strcmp(command, argument) == 0) ? UDA_TRUE : UDA_FALSE;
+}
+
+static void
+show_tool_version(cmd_adapter_t *adapter)
+{
+	hinic3_pmd_mml_log(adapter->show_str, &adapter->show_len,
+			   "hinic3 pmd version %s", HINIC3_PMD_DRV_VERSION);
+}
+
+static void
+show_tool_help(cmd_adapter_t *adapter)
+{
+	int i;
+	major_cmd_t *major_cmd = NULL;
+
+	if (!adapter)
+		return;
+
+	hinic3_pmd_mml_log(adapter->show_str, &adapter->show_len,
+			   "\n Usage: evsadm exec dump-hinic-status "
+			   "<major_cmd> [option]\n");
+	hinic3_pmd_mml_log(adapter->show_str, &adapter->show_len,
+			   "	-h, --help show help information");
+	hinic3_pmd_mml_log(adapter->show_str, &adapter->show_len,
+			   "	-v, --version show version information");
+	hinic3_pmd_mml_log(adapter->show_str, &adapter->show_len,
+			   "\n Major Commands:\n");
+
+	for (i = 0; i < adapter->major_cmds; i++) {
+		major_cmd = adapter->p_major_cmd[i];
+		hinic3_pmd_mml_log(adapter->show_str, &adapter->show_len,
+				   "	%-23s %s", major_cmd->name,
+				   major_cmd->description);
+	}
+	hinic3_pmd_mml_log(adapter->show_str, &adapter->show_len, "");
+}
+
+void
+major_command_option(major_cmd_t *major_cmd, const char *little,
+		     const char *large, uint32_t have_param,
+		     command_record_t record)
+{
+	cmd_option_t *option = NULL;
+
+	if (major_cmd == NULL || (little == NULL && large == NULL) || !record) {
+		PMD_DRV_LOG(ERR, "Invalid input parameter.");
+		return;
+	}
+
+	if (major_cmd->option_count >= COMMAND_MAX_OPTIONS) {
+		PMD_DRV_LOG(ERR, "Do not support more than %d options",
+			    COMMAND_MAX_OPTIONS);
+		return;
+	}
+
+	option = &major_cmd->options[major_cmd->option_count];
+	major_cmd->options_repeat_flag[major_cmd->option_count] = 0;
+	major_cmd->option_count++;
+
+	option->record = record;
+	option->little = little;
+	option->large = large;
+	option->have_param = have_param;
+}
+
+void
+major_command_register(cmd_adapter_t *adapter, major_cmd_t *major_cmd)
+{
+	int i = 0;
+
+	if (adapter == NULL || major_cmd == NULL) {
+		PMD_DRV_LOG(ERR, "Invalid input parameter.");
+		return;
+	}
+
+	if (adapter->major_cmds >= COMMAND_MAX_MAJORS) {
+		PMD_DRV_LOG(ERR, "Major command table is full");
+		return;
+	}
+	while (adapter->p_major_cmd[i] != NULL)
+		i++;
+	adapter->p_major_cmd[i] = major_cmd;
+	adapter->major_cmds++;
+	major_cmd->adapter = adapter;
+	major_cmd->err_no = UDA_SUCCESS;
+	(void)memset(major_cmd->err_str, 0, sizeof(major_cmd->err_str));
+}
+
+static int
+is_help_version(cmd_adapter_t *adapter, int argc, char *arg)
+{
+	if (COMMAND_HELP_POSTION(argc) &&
+	    (string_cmp("-h", arg) || string_cmp("--help", arg))) {
+		show_tool_help(adapter);
+		return UDA_TRUE;
+	}
+
+	if (COMMAND_VERSION_POSTION(argc) &&
+	    (string_cmp("-v", arg) || string_cmp("--version", arg))) {
+		show_tool_version(adapter);
+		return UDA_TRUE;
+	}
+
+	return UDA_FALSE;
+}
+
+static int
+check_command_length(int argc, char **argv)
+{
+	int i;
+	unsigned long long str_len = 0;
+
+	for (i = 1; i < argc; i++)
+		str_len += strlen(argv[i]);
+
+	if (str_len > COMMAND_MAX_STRING)
+		return -UDA_EINVAL;
+
+	return UDA_SUCCESS;
+}
+
+static inline int
+char_check(const char cmd)
+{
+	if (cmd >= 'a' && cmd <= 'z')
+		return UDA_SUCCESS;
+
+	if (cmd >= 'A' && cmd <= 'Z')
+		return UDA_SUCCESS;
+	return UDA_FAIL;
+}
+
+static int
+major_command_check_param(cmd_option_t *option, char *arg)
+{
+	if (!option)
+		return -UDA_EINVAL;
+	if (option->have_param != 0) {
+		if (!arg ||
+		    ((arg[0] == '-') && char_check(arg[1]) == UDA_SUCCESS))
+			return -UDA_EINVAL;
+		return UDA_SUCCESS;
+	}
+
+	return -UDA_ENOOBJ;
+}
+
+static int
+major_cmd_repeat_option_set(major_cmd_t *major_cmd, const cmd_option_t *option,
+			    u32 *options_repeat_flag)
+{
+	int err;
+
+	if (*options_repeat_flag != 0) {
+		major_cmd->err_no = -UDA_EINVAL;
+		err = snprintf(major_cmd->err_str, COMMANDER_ERR_MAX_STRING - 1,
+			       "Repeated option %s|%s.", option->little,
+			       option->large);
+		if (err <= 0) {
+			PMD_DRV_LOG(ERR,
+				"snprintf cmd repeat option failed, err: %d.",
+				err);
+		}
+		return -UDA_EINVAL;
+	}
+	*options_repeat_flag = 1;
+	return UDA_SUCCESS;
+}
+
+static int
+major_cmd_option_check(major_cmd_t *major_cmd, char **argv, int *index)
+{
+	int j, ret, err, option_ok, intermediate_var;
+	cmd_option_t *option = NULL;
+	char *arg = argv[*index];
+
+	/* Find the matching option. */
+	for (j = 0; j < major_cmd->option_count; j++) {
+		option = &major_cmd->options[j];
+		option_ok = (((option->little != NULL) &&
+			      string_cmp(option->little, arg)) ||
+			     ((option->large != NULL) &&
+			      string_cmp(option->large, arg)));
+		if (!option_ok)
+			continue;
+		/* Reject repeated options. */
+		ret = major_cmd_repeat_option_set(major_cmd,
+			option, &major_cmd->options_repeat_flag[j]);
+		if (ret != UDA_SUCCESS)
+			return ret;
+
+		arg = NULL;
+		/* Check whether this option needs a parameter. */
+		intermediate_var = (*index) + 1;
+		ret = major_command_check_param(option, argv[intermediate_var]);
+		if (ret == UDA_SUCCESS) {
+			(*index)++;
+			arg = argv[*index];
+		} else if (ret == -UDA_EINVAL) {
+			major_cmd->err_no = -UDA_EINVAL;
+			err = snprintf(major_cmd->err_str,
+				       COMMANDER_ERR_MAX_STRING - 1,
+				       "Option %s|%s requires a parameter.",
+				       option->little, option->large);
+			if (err <= 0) {
+				PMD_DRV_LOG(ERR,
+					    "snprintf option parameter message "
+					    "failed, err: %d.",
+					    err);
+			}
+			return -UDA_EINVAL;
+		}
+
+		/* Record messages. */
+		ret = option->record(major_cmd, arg);
+		if (ret != UDA_SUCCESS)
+			return ret;
+		break;
+	}
+
+	/* Illegal option. */
+	if (j == major_cmd->option_count) {
+		major_cmd->err_no = -UDA_EINVAL;
+		err = snprintf(major_cmd->err_str, COMMANDER_ERR_MAX_STRING - 1,
+			       "Unknown option %s.", arg);
+		if (err <= 0) {
+			PMD_DRV_LOG(ERR,
+				"snprintf cmd option invalid failed, err: %d.",
+				err);
+		}
+		return -UDA_EINVAL;
+	}
+	return UDA_SUCCESS;
+}
+
+static int
+major_command_parse(major_cmd_t *major_cmd, int argc, char **argv)
+{
+	int i, err;
+
+	for (i = 0; i < argc; i++) {
+		err = major_cmd_option_check(major_cmd, argv, &i);
+		if (err != UDA_SUCCESS)
+			return err;
+	}
+
+	return UDA_SUCCESS;
+}
+
+static int
+copy_result_to_buffer(void *buf_out, char *result, int len)
+{
+	int ret;
+
+	ret = snprintf(buf_out, len - 1, "%s", result);
+	if (ret <= 0)
+		return 0;
+
+	return ret + 1;
+}
+
+void
+command_parse(cmd_adapter_t *adapter, int argc, char **argv, void *buf_out,
+	      uint32_t *out_len)
+{
+	int i;
+	major_cmd_t *major_cmd = NULL;
+	char *arg = argv[1];
+
+	if (is_help_version(adapter, argc, arg) == UDA_TRUE) {
+		*out_len = (u32)copy_result_to_buffer(buf_out,
+			adapter->show_str, MAX_SHOW_STR_LEN);
+		return;
+	}
+
+	for (i = 0; i < adapter->major_cmds; i++) {
+		major_cmd = adapter->p_major_cmd[i];
+
+		/* Find the major command. */
+		if (!string_cmp(major_cmd->name, arg))
+			continue;
+		if (check_command_length(argc, argv) != UDA_SUCCESS) {
+			major_cmd->err_no = -UDA_EINVAL;
+			(void)snprintf(major_cmd->err_str,
+				       COMMANDER_ERR_MAX_STRING - 1,
+				       "Command input too long.");
+			break;
+		}
+
+		/* Parse sub-command options. */
+		if (argc > SUB_COMMAND_OFFSET) {
+			if (major_command_parse(major_cmd,
+				    argc - SUB_COMMAND_OFFSET,
+				    argv + SUB_COMMAND_OFFSET) != UDA_SUCCESS) {
+				goto PARSE_OUT;
+			}
+		}
+
+		/* Execute the command. */
+		major_cmd->execute(major_cmd);
+		break;
+	}
+
+	/* Command not found. */
+	if (i == adapter->major_cmds) {
+		hinic3_pmd_mml_log(adapter->show_str, &adapter->show_len,
+				   "Unknown major command, run 'evsadm exec "
+				   "dump-hinic-status -h' for help.");
+		*out_len = (u32)copy_result_to_buffer(buf_out,
+			adapter->show_str, MAX_SHOW_STR_LEN);
+		return;
+	}
+
+PARSE_OUT:
+	if (major_cmd->err_no != UDA_SUCCESS &&
+	    major_cmd->err_no != -UDA_CANCEL) {
+		PMD_DRV_LOG(ERR, "%s command error(%d): %s", major_cmd->name,
+			    major_cmd->err_no, major_cmd->err_str);
+
+		hinic3_pmd_mml_log(major_cmd->show_str, &major_cmd->show_len,
+				   "%s command error(%d): %s",
+				   major_cmd->name, major_cmd->err_no,
+				   major_cmd->err_str);
+	}
+	*out_len = (u32)copy_result_to_buffer(buf_out, major_cmd->show_str,
+					      MAX_SHOW_STR_LEN);
+}
+
+void
+tool_target_init(int *bus_num, char *dev_name, int len)
+{
+	*bus_num = TRGET_UNKNOWN_BUS_NUM;
+	(void)memset(dev_name, 0, len);
+}
diff --git a/drivers/net/hinic3/mml/hinic3_mml_cmd.h b/drivers/net/hinic3/mml/hinic3_mml_cmd.h
new file mode 100644
index 0000000000..0e1ece38f0
--- /dev/null
+++ b/drivers/net/hinic3/mml/hinic3_mml_cmd.h
@@ -0,0 +1,131 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025. All rights reserved.
+ */
+
+#ifndef _HINIC3_MML_CMD
+#define _HINIC3_MML_CMD
+
+#include <stdint.h>
+
+#define COMMAND_HELP_POSTION(argc)            \
+	({                                    \
+		typeof(argc) __argc = (argc); \
+		(__argc == 1 || __argc == 2); \
+	})
+#define COMMAND_VERSION_POSTION(argc) ((argc) == 2)
+#define SUB_COMMAND_OFFSET	      2
+
+#define COMMAND_MAX_MAJORS	 128
+#define COMMAND_MAX_OPTIONS	 64
+#define PARAM_MAX_STRING	 128
+#define COMMAND_MAX_STRING	 512
+#define COMMANDER_ERR_MAX_STRING 128
+
+#define MAX_NAME_LEN	 32
+#define MAX_DES_LEN	 128
+#define MAX_SHOW_STR_LEN 2048
+
+struct tag_major_cmd_t;
+struct tag_cmd_adapter_t;
+
+typedef int (*command_record_t)(struct tag_major_cmd_t *major, char *param);
+typedef void (*command_execute_t)(struct tag_major_cmd_t *major);
+
+typedef struct {
+	const char *little;
+	const char *large;
+	unsigned int have_param;
+	command_record_t record;
+} cmd_option_t;
+
+/* Major command structure that saves command details and options. */
+typedef struct tag_major_cmd_t {
+	struct tag_cmd_adapter_t *adapter;
+	char name[MAX_NAME_LEN];
+	int option_count;
+	cmd_option_t options[COMMAND_MAX_OPTIONS];
+	uint32_t options_repeat_flag[COMMAND_MAX_OPTIONS];
+	command_execute_t execute;
+	int err_no;
+	char err_str[COMMANDER_ERR_MAX_STRING];
+	char show_str[MAX_SHOW_STR_LEN];
+	int show_len;
+	char description[MAX_DES_LEN];
+	void *cmd_st; /**< Command show queue state structure. */
+} major_cmd_t;
+
+typedef struct tag_cmd_adapter_t {
+	const char *name;
+	const char *version;
+	major_cmd_t *p_major_cmd[COMMAND_MAX_MAJORS];
+	int major_cmds;
+	char show_str[MAX_SHOW_STR_LEN];
+	int show_len;
+	char *cmd_buf;
+} cmd_adapter_t;
+
+/**
+ * Add an option to a major command.
+ *
+ * This function adds a command option with its short and long forms, whether it
+ * requires a parameter, and the function to handle it.
+ *
+ * @param[in] major_cmd
+ * Pointer to the major command structure.
+ * @param[in] little
+ * Short form of the option.
+ * @param[in] large
+ * Long form of the option.
+ * @param[in] have_param
+ * Flag indicating whether the option requires a parameter.
+ * @param[in] record
+ * Function to handle the option's action.
+ */
+void major_command_option(major_cmd_t *major_cmd, const char *little,
+			  const char *large, uint32_t have_param,
+			  command_record_t record);
+
+/**
+ * Register a major command with adapter.
+ *
+ * @param[in] adapter
+ * Pointer to command adapter.
+ * @param[in] major_cmd
+ * The major command to be registered with the adapter.
+ */
+void major_command_register(cmd_adapter_t *adapter, major_cmd_t *major_cmd);
+
+/**
+ * Parse and execute commands.
+ *
+ * @param[in] adapter
+ * Pointer to command adapter.
+ * @param[in] argc
+ * The number of command arguments.
+ * @param[in] argv
+ * The array of command arguments.
+ * @param[out] buf_out
+ * The buffer used to store the output result.
+ * @param[out] out_len
+ * The length (in bytes) of the output result.
+ */
+void command_parse(cmd_adapter_t *adapter, int argc, char **argv, void *buf_out,
+		   uint32_t *out_len);
+
+/**
+ * Initialize the target bus number and device name.
+ *
+ * @param[out] bus_num
+ * Pointer to the bus number, which will be set to a default unknown value.
+ * @param[out] dev_name
+ * Pointer to the device name buffer, which will be cleared (set to zeros).
+ * @param[in] len
+ * The length of the device name buffer.
+ */
+void tool_target_init(int *bus_num, char *dev_name, int len);
+
+int cmd_show_q_init(cmd_adapter_t *adapter);
+int cmd_show_xstats_init(cmd_adapter_t *adapter);
+int cmd_show_dump_init(cmd_adapter_t *adapter);
+
+#endif /* _HINIC3_MML_CMD */
diff --git a/drivers/net/hinic3/mml/hinic3_mml_ioctl.c b/drivers/net/hinic3/mml/hinic3_mml_ioctl.c
new file mode 100644
index 0000000000..0fd6b97f5e
--- /dev/null
+++ b/drivers/net/hinic3/mml/hinic3_mml_ioctl.c
@@ -0,0 +1,215 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025. All rights reserved.
+ */
+#include <rte_ethdev.h>
+#include <rte_pci.h>
+#include <rte_bus_pci.h>
+#include <rte_common.h>
+#include <rte_ether.h>
+#include <rte_ethdev_core.h>
+#include "hinic3_mml_lib.h"
+#include "hinic3_dbg.h"
+#include "hinic3_compat.h"
+#include "hinic3_csr.h"
+#include "hinic3_hwdev.h"
+#include "hinic3_nic_cfg.h"
+#include "hinic3_ethdev.h"
+
+static int
+get_tx_info(struct rte_eth_dev *dev, void *buf_in, uint16_t in_size,
+	    void *buf_out, uint16_t *out_size)
+{
+	uint16_t q_id = *((uint16_t *)buf_in);
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+
+	if (in_size != sizeof(int))
+		return -UDA_EINVAL;
+
+	return hinic3_dbg_get_sq_info(nic_dev->hwdev, q_id, buf_out, out_size);
+}
+
+static int
+get_tx_wqe_info(struct rte_eth_dev *dev, void *buf_in, uint16_t in_size,
+		void *buf_out, uint16_t *out_size)
+{
+	struct hinic_wqe_info *wqe_info = (struct hinic_wqe_info *)buf_in;
+	uint16_t q_id = (uint16_t)wqe_info->q_id;
+	uint16_t idx = (uint16_t)wqe_info->wqe_id;
+	uint16_t wqebb_cnt = (uint16_t)wqe_info->wqebb_cnt;
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+
+	if (in_size != sizeof(struct hinic_wqe_info))
+		return -UDA_EINVAL;
+
+	return hinic3_dbg_get_sq_wqe_info(nic_dev->hwdev, q_id, idx, wqebb_cnt,
+					  buf_out, out_size);
+}
+
+static int
+get_rx_info(struct rte_eth_dev *dev, void *buf_in, uint16_t in_size,
+	    void *buf_out, uint16_t *out_size)
+{
+	uint16_t q_id = *((uint16_t *)buf_in);
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+
+	if (in_size != sizeof(int))
+		return -UDA_EINVAL;
+
+	return hinic3_dbg_get_rq_info(nic_dev->hwdev, q_id, buf_out, out_size);
+}
+
+static int
+get_rx_wqe_info(struct rte_eth_dev *dev, void *buf_in, uint16_t in_size,
+		void *buf_out, uint16_t *out_size)
+{
+	struct hinic_wqe_info *wqe_info = (struct hinic_wqe_info *)buf_in;
+	uint16_t q_id = (uint16_t)wqe_info->q_id;
+	uint16_t idx = (uint16_t)wqe_info->wqe_id;
+	uint16_t wqebb_cnt = (uint16_t)wqe_info->wqebb_cnt;
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+
+	if (in_size != sizeof(struct hinic_wqe_info))
+		return -UDA_EINVAL;
+
+	return hinic3_dbg_get_rq_wqe_info(nic_dev->hwdev, q_id, idx, wqebb_cnt,
+					  buf_out, out_size);
+}
+
+static int
+get_rx_cqe_info(struct rte_eth_dev *dev, void *buf_in, uint16_t in_size,
+		void *buf_out, uint16_t *out_size)
+{
+	struct hinic_wqe_info *wqe_info = (struct hinic_wqe_info *)buf_in;
+	uint16_t q_id = (uint16_t)wqe_info->q_id;
+	uint16_t idx = (uint16_t)wqe_info->wqe_id;
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+
+	if (in_size != sizeof(struct hinic_wqe_info))
+		return -UDA_EINVAL;
+
+	return hinic3_dbg_get_rx_cqe_info(nic_dev->hwdev, q_id, idx, buf_out,
+					  out_size);
+}
+
+typedef int (*nic_drv_module)(struct rte_eth_dev *dev, void *buf_in,
+			      uint16_t in_size, void *buf_out,
+			      uint16_t *out_size);
+
+struct nic_drv_module_handle {
+	enum driver_cmd_type drv_cmd_name;
+	nic_drv_module drv_func;
+};
+
+const struct nic_drv_module_handle g_nic_drv_module_cmd_handle[] = {
+	{TX_INFO, get_tx_info},		{TX_WQE_INFO, get_tx_wqe_info},
+	{RX_INFO, get_rx_info},		{RX_WQE_INFO, get_rx_wqe_info},
+	{RX_CQE_INFO, get_rx_cqe_info},
+};
+
+static int
+send_to_nic_driver(struct rte_eth_dev *dev, struct msg_module *nt_msg)
+{
+	int index;
+	int err = 0;
+	enum driver_cmd_type cmd_type =
+		(enum driver_cmd_type)nt_msg->msg_formate;
+	int num_cmds = (int)RTE_DIM(g_nic_drv_module_cmd_handle);
+
+	for (index = 0; index < num_cmds; index++) {
+		if (cmd_type ==
+		    g_nic_drv_module_cmd_handle[index].drv_cmd_name) {
+			err = g_nic_drv_module_cmd_handle[index].drv_func(dev,
+				nt_msg->in_buf,
+				(uint16_t)nt_msg->buf_in_size, nt_msg->out_buf,
+				(uint16_t *)&nt_msg->buf_out_size);
+			break;
+		}
+	}
+
+	if (index == num_cmds) {
+		PMD_DRV_LOG(ERR, "Unknown nic driver cmd: %d", cmd_type);
+		err = -UDA_EINVAL;
+	}
+
+	return err;
+}
+
+static int
+hinic3_msg_handle(struct rte_eth_dev *dev, struct msg_module *nt_msg)
+{
+	int err;
+
+	switch (nt_msg->module) {
+	case SEND_TO_NIC_DRIVER:
+		err = send_to_nic_driver(dev, nt_msg);
+		if (err != 0)
+			PMD_DRV_LOG(ERR, "Send message to driver failed");
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "Unknown message module: %d", nt_msg->module);
+		err = -UDA_EINVAL;
+		break;
+	}
+
+	return err;
+}
+
+static struct rte_eth_dev *
+get_eth_dev_by_pci_addr(char *pci_addr, __rte_unused int len)
+{
+	uint32_t i;
+	struct rte_eth_dev *eth_dev = NULL;
+	struct rte_pci_device *pci_dev = NULL;
+	int ret;
+	uint32_t bus, devid, function;
+
+	ret = sscanf(pci_addr, "%02x:%02x.%x", &bus, &devid, &function);
+	if (ret != 3) {
+		PMD_DRV_LOG(ERR,
+			    "Failed to parse pci bus, devid and function, err: %d",
+			    ret);
+		return NULL;
+	}
+
+	for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
+		eth_dev = &rte_eth_devices[i];
+		if (eth_dev->state != RTE_ETH_DEV_ATTACHED)
+			continue;
+
+		pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+
+#ifdef CONFIG_SP_VID_DID
+		if (pci_dev->id.vendor_id == PCI_VENDOR_ID_SPNIC &&
+		    (pci_dev->id.device_id == HINIC3_DEV_ID_STANDARD ||
+		     pci_dev->id.device_id == HINIC3_DEV_ID_VF) &&
+#else
+		if (pci_dev->id.vendor_id == PCI_VENDOR_ID_HUAWEI &&
+		    (pci_dev->id.device_id == HINIC3_DEV_ID_STANDARD ||
+		     pci_dev->id.device_id == HINIC3_DEV_ID_VF) &&
+#endif
+		    pci_dev->addr.bus == bus && pci_dev->addr.devid == devid &&
+		    pci_dev->addr.function == function) {
+			return eth_dev;
+		}
+	}
+
+	return NULL;
+}
+
+int
+hinic3_pmd_mml_ioctl(void *msg)
+{
+	struct msg_module *nt_msg = msg;
+	struct rte_eth_dev *dev;
+
+	dev = get_eth_dev_by_pci_addr(nt_msg->device_name,
+				      sizeof(nt_msg->device_name));
+	if (!dev) {
+		PMD_DRV_LOG(ERR, "Cannot find device %s",
+			    nt_msg->device_name);
+		return UDA_FAIL;
+	}
+
+	return hinic3_msg_handle(dev, nt_msg);
+}
diff --git a/drivers/net/hinic3/mml/hinic3_mml_lib.c b/drivers/net/hinic3/mml/hinic3_mml_lib.c
new file mode 100644
index 0000000000..dae2efc54b
--- /dev/null
+++ b/drivers/net/hinic3/mml/hinic3_mml_lib.c
@@ -0,0 +1,136 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025. All rights reserved.
+ */
+#include "hinic3_compat.h"
+#include "hinic3_mml_lib.h"
+
+int
+tool_get_valid_target(char *name, struct tool_target *target)
+{
+	int ret = UDA_SUCCESS;
+
+	if (strlen(name) >= MAX_DEV_LEN) {
+		PMD_DRV_LOG(ERR, "Device name is too long.");
+		ret = -UDA_ELEN;
+	} else {
+		(void)memcpy(target->dev_name, name, strlen(name) + 1);
+		target->bus_num = 0;
+	}
+
+	return ret;
+}
+
+static void
+fill_ioctl_msg_hd(struct msg_module *msg, unsigned int module,
+		  unsigned int msg_formate, unsigned int in_buff_len,
+		  unsigned int out_buff_len, char *dev_name, int bus_num)
+{
+	(void)memcpy(msg->device_name, dev_name, strlen(dev_name) + 1);
+
+	msg->module = module;
+	msg->msg_formate = msg_formate;
+	msg->buf_in_size = in_buff_len;
+	msg->buf_out_size = out_buff_len;
+	msg->bus_num = bus_num;
+}
+
+static int
+lib_ioctl(struct msg_module *in_buf, void *out_buf)
+{
+	in_buf->out_buf = out_buf;
+
+	return hinic3_pmd_mml_ioctl(in_buf);
+}
+
+int
+lib_tx_sq_info_get(struct tool_target target, struct nic_sq_info *sq_info,
+		   int sq_id)
+{
+	struct msg_module msg_to_kernel;
+
+	(void)memset(&msg_to_kernel, 0, sizeof(msg_to_kernel));
+	fill_ioctl_msg_hd(&msg_to_kernel, SEND_TO_NIC_DRIVER, TX_INFO,
+			  (unsigned int)sizeof(int),
+			  (unsigned int)sizeof(struct nic_sq_info),
+			  target.dev_name, target.bus_num);
+	msg_to_kernel.in_buf = (void *)&sq_id;
+
+	return lib_ioctl(&msg_to_kernel, sq_info);
+}
+
+int
+lib_tx_wqe_info_get(struct tool_target target, struct nic_sq_info *sq_info,
+		    int sq_id, int wqe_id, void *nwqe, int nwqe_size)
+{
+	struct msg_module msg_to_kernel;
+	struct hinic_wqe_info wqe = {0};
+
+	wqe.wqe_id = wqe_id;
+	wqe.q_id = sq_id;
+	wqe.wqebb_cnt = nwqe_size / sq_info->sq_wqebb_size;
+
+	(void)memset(&msg_to_kernel, 0, sizeof(msg_to_kernel));
+	fill_ioctl_msg_hd(&msg_to_kernel, SEND_TO_NIC_DRIVER, TX_WQE_INFO,
+			  (unsigned int)(sizeof(struct hinic_wqe_info)),
+			  nwqe_size, target.dev_name, target.bus_num);
+	msg_to_kernel.in_buf = (void *)&wqe;
+
+	return lib_ioctl(&msg_to_kernel, nwqe);
+}
+
+int
+lib_rx_rq_info_get(struct tool_target target, struct nic_rq_info *rq_info,
+		   int rq_id)
+{
+	struct msg_module msg_to_kernel;
+
+	(void)memset(&msg_to_kernel, 0, sizeof(msg_to_kernel));
+	fill_ioctl_msg_hd(&msg_to_kernel, SEND_TO_NIC_DRIVER, RX_INFO,
+			  (unsigned int)(sizeof(int)),
+			  (unsigned int)sizeof(struct nic_rq_info),
+			  target.dev_name, target.bus_num);
+	msg_to_kernel.in_buf = &rq_id;
+
+	return lib_ioctl(&msg_to_kernel, rq_info);
+}
+
+int
+lib_rx_wqe_info_get(struct tool_target target, struct nic_rq_info *rq_info,
+		    int rq_id, int wqe_id, void *nwqe, int nwqe_size)
+{
+	struct msg_module msg_to_kernel;
+	struct hinic_wqe_info wqe = {0};
+
+	wqe.wqe_id = wqe_id;
+	wqe.q_id = rq_id;
+	wqe.wqebb_cnt = nwqe_size / rq_info->rq_wqebb_size;
+
+	(void)memset(&msg_to_kernel, 0, sizeof(msg_to_kernel));
+	fill_ioctl_msg_hd(&msg_to_kernel, SEND_TO_NIC_DRIVER, RX_WQE_INFO,
+			  (unsigned int)(sizeof(struct hinic_wqe_info)),
+			  nwqe_size, target.dev_name, target.bus_num);
+	msg_to_kernel.in_buf = (void *)&wqe;
+
+	return lib_ioctl(&msg_to_kernel, nwqe);
+}
+
+int
+lib_rx_cqe_info_get(struct tool_target target,
+		    __rte_unused struct nic_rq_info *rq_info, int rq_id,
+		    int wqe_id, void *nwqe, int nwqe_size)
+{
+	struct msg_module msg_to_kernel;
+	struct hinic_wqe_info wqe = {0};
+
+	wqe.wqe_id = wqe_id;
+	wqe.q_id = rq_id;
+
+	(void)memset(&msg_to_kernel, 0, sizeof(msg_to_kernel));
+	fill_ioctl_msg_hd(&msg_to_kernel, SEND_TO_NIC_DRIVER, RX_CQE_INFO,
+			  (unsigned int)(sizeof(struct hinic_wqe_info)),
+			  nwqe_size, target.dev_name, target.bus_num);
+	msg_to_kernel.in_buf = (void *)&wqe;
+
+	return lib_ioctl(&msg_to_kernel, nwqe);
+}
diff --git a/drivers/net/hinic3/mml/hinic3_mml_lib.h b/drivers/net/hinic3/mml/hinic3_mml_lib.h
new file mode 100644
index 0000000000..42c365922f
--- /dev/null
+++ b/drivers/net/hinic3/mml/hinic3_mml_lib.h
@@ -0,0 +1,275 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025. All rights reserved.
+ */
+
+#ifndef _HINIC3_MML_LIB
+#define _HINIC3_MML_LIB
+
+#include <string.h>
+#include <stdint.h>
+
+#include "hinic3_mml_cmd.h"
+#include "hinic3_compat.h"
+#include "hinic3_mgmt.h"
+
+#define MAX_DEV_LEN	      16
+#define TRGET_UNKNOWN_BUS_NUM (-1)
+
+#ifndef DEV_NAME_LEN
+#define DEV_NAME_LEN 64
+#endif
+
+enum {
+	UDA_SUCCESS = 0x0,
+	UDA_FAIL,
+	UDA_ENXIO,
+	UDA_ENONMEM,
+	UDA_EBUSY,
+	UDA_ECRC,
+	UDA_EINVAL,
+	UDA_EFAULT,
+	UDA_ELEN,
+	UDA_ECMD,
+	UDA_ENODRIVER,
+	UDA_EXIST,
+	UDA_EOVERSTEP,
+	UDA_ENOOBJ,
+	UDA_EOBJ,
+	UDA_ENOMATCH,
+	UDA_ETIMEOUT,
+
+	UDA_CONTOP,
+
+	UDA_REBOOT = 0xFD,
+	UDA_CANCEL = 0xFE,
+	UDA_KILLED = 0xFF,
+};
+
+#define PARAM_NEED     1
+#define PARAM_NOT_NEED 0
+
+#define BASE_ALL 0
+#define BASE_8	 8
+#define BASE_10	 10
+#define BASE_16	 16
+
+enum module_name {
+	SEND_TO_NPU = 1,
+	SEND_TO_MPU,
+	SEND_TO_SM,
+
+	SEND_TO_HW_DRIVER,
+	SEND_TO_NIC_DRIVER,
+	SEND_TO_OVS_DRIVER,
+	SEND_TO_ROCE_DRIVER,
+	SEND_TO_TOE_DRIVER,
+	SEND_TO_IWAP_DRIVER,
+	SEND_TO_FC_DRIVER,
+	SEND_FCOE_DRIVER,
+};
+
+enum driver_cmd_type {
+	TX_INFO = 1,
+	Q_NUM,
+	TX_WQE_INFO,
+	TX_MAPPING,
+	RX_INFO,
+	RX_WQE_INFO,
+	RX_CQE_INFO
+};
+
+struct tool_target {
+	int bus_num;
+	char dev_name[MAX_DEV_LEN];
+	void *pri;
+};
+
+struct nic_tx_hw_page {
+	long long phy_addr;
+	long long *map_addr;
+};
+
+struct nic_sq_info {
+	unsigned short q_id;
+	unsigned short pi; /**< Ring buffer queue producer point. */
+	unsigned short ci; /**< Ring buffer queue consumer point. */
+	unsigned short fi; /**< Ring buffer queue complete point. */
+	unsigned int sq_depth;
+	unsigned short sq_wqebb_size;
+	unsigned short *ci_addr;
+	unsigned long long cla_addr;
+
+	struct nic_tx_hw_page doorbell;
+	unsigned int page_idx;
+};
+
+struct comm_info_l2nic_sq_ci_attr {
+	struct mgmt_msg_head msg_head;
+
+	uint16_t func_idx;
+	uint8_t dma_attr_off;
+	uint8_t pending_limit;
+
+	uint8_t coalescing_time;
+	uint8_t int_en;
+	uint16_t int_offset;
+
+	uint32_t l2nic_sqn;
+	uint32_t rsv;
+	uint64_t ci_addr;
+};
+
+struct nic_rq_info {
+	unsigned short q_id; /**< Queue id in current function, 0, 1, 2... */
+
+	unsigned short hw_pi; /**< Where pkt buf allocated. */
+	unsigned short ci;    /**< Where hw pkt received, owned by hw. */
+	unsigned short sw_pi; /**< Where driver begin receive pkt. */
+	unsigned short rq_wqebb_size; /**< wqebb size, default to 32 bytes. */
+
+	unsigned short rq_depth;
+	unsigned short buf_len; /**< 2K. */
+	void *ci_wqe_page_addr; /**< For queue context init. */
+	void *ci_cla_tbl_addr;
+	unsigned short int_num;	  /**< RSS support should consider int_num. */
+	unsigned int msix_vector; /**< For debug. */
+};
+
+struct hinic_wqe_info {
+	int q_id;
+	void *slq_handle;
+	uint32_t wqe_id;
+	uint32_t wqebb_cnt;
+};
+
+struct npu_cmd_st {
+	uint32_t mod : 8;
+	uint32_t cmd : 8;
+	uint32_t ack_type : 3;
+	uint32_t direct_resp : 1;
+	uint32_t len : 12;
+};
+
+struct mpu_cmd_st {
+	uint32_t api_type : 8;
+	uint32_t mod : 8;
+	uint32_t cmd : 16;
+};
+
+struct msg_module {
+	char device_name[DEV_NAME_LEN];
+	uint32_t module;
+	union {
+		uint32_t msg_formate; /**< For driver. */
+		struct npu_cmd_st npu_cmd;
+		struct mpu_cmd_st mpu_cmd;
+	};
+	uint32_t timeout; /**< For mpu/npu cmd. */
+	uint32_t func_idx;
+	uint32_t buf_in_size;
+	uint32_t buf_out_size;
+	void *in_buf;
+	void *out_buf;
+	int bus_num;
+	uint32_t rsvd2[5];
+};
+
+/**
+ * Convert the provided string into `uint32_t` according to the specified base.
+ *
+ * @param[in] nptr
+ * The string to be converted.
+ * @param[in] base
+ * The base to use for conversion (e.g., 10 for decimal, 16 for hexadecimal).
+ * @param[out] value
+ * The output variable where the converted `uint32_t` value will be stored.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ * - -UDA_EINVAL if the string is invalid or the value is out of range.
+ */
+static inline int
+string_toui(const char *nptr, int base, uint32_t *value)
+{
+	char *endptr = NULL;
+	long tmp_value;
+
+	tmp_value = strtol(nptr, &endptr, base);
+	if ((*endptr != 0) || tmp_value >= 0x7FFFFFFF || tmp_value < 0)
+		return -UDA_EINVAL;
+	*value = (uint32_t)tmp_value;
+	return UDA_SUCCESS;
+}
+
+#define UDA_TRUE  1
+#define UDA_FALSE 0
+
+/**
+ * Format and append a log message to a string buffer.
+ *
+ * @param[out] show_str
+ * The string buffer where the formatted message will be appended.
+ * @param[out] show_len
+ * The current length of the string in the buffer. It is updated after
+ * appending.
+ * @param[in] fmt
+ * The format string that specifies how to format the log message.
+ * @param[in] args
+ * The variable arguments to be formatted according to the format string.
+ */
+static inline void
+hinic3_pmd_mml_log(char *show_str, int *show_len, const char *fmt, ...)
+{
+	va_list args;
+	int ret = 0;
+
+	va_start(args, fmt);
+	ret = vsnprintf(show_str + *show_len,
+			(size_t)(MAX_SHOW_STR_LEN - *show_len), fmt, args);
+	va_end(args);
+
+	if (ret > 0 && ret < MAX_SHOW_STR_LEN - *show_len) {
+		*show_len += ret;
+	} else {
+		PMD_DRV_LOG(ERR, "MML show string snprintf failed, err: %d",
+			    ret);
+	}
+}
+
+/**
+ * Get a valid target device based on the given name.
+ *
+ * This function checks if the device name is valid (within the length limit)
+ * and then stores it in the target structure. The bus number is initialized to
+ * 0.
+ *
+ * @param[in] name
+ * The device name to be validated and stored.
+ * @param[out] target
+ * The structure where the device name and bus number will be stored.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int tool_get_valid_target(char *name, struct tool_target *target);
+
+int hinic3_pmd_mml_ioctl(void *msg);
+
+int lib_tx_sq_info_get(struct tool_target target, struct nic_sq_info *sq_info,
+		       int sq_id);
+
+int lib_tx_wqe_info_get(struct tool_target target, struct nic_sq_info *sq_info,
+			int sq_id, int wqe_id, void *nwqe, int nwqe_size);
+
+int lib_rx_rq_info_get(struct tool_target target, struct nic_rq_info *rq_info,
+		       int rq_id);
+
+int lib_rx_wqe_info_get(struct tool_target target, struct nic_rq_info *rq_info,
+			int rq_id, int wqe_id, void *nwqe, int nwqe_size);
+
+int lib_rx_cqe_info_get(struct tool_target target, struct nic_rq_info *rq_info,
+			int rq_id, int wqe_id, void *nwqe, int nwqe_size);
+
+int hinic3_pmd_mml_lib(const char *buf_in, uint32_t in_size, char *buf_out,
+		       uint32_t *out_len, uint32_t max_buf_out_len);
+
+#endif /* _HINIC3_MML_LIB */
diff --git a/drivers/net/hinic3/mml/hinic3_mml_main.c b/drivers/net/hinic3/mml/hinic3_mml_main.c
new file mode 100644
index 0000000000..7830df479e
--- /dev/null
+++ b/drivers/net/hinic3/mml/hinic3_mml_main.c
@@ -0,0 +1,167 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025. All rights reserved.
+ */
+
+#include "hinic3_mml_lib.h"
+#include "hinic3_mml_cmd.h"
+
+#define MAX_ARGC 20
+
+/**
+ * Free all memory associated with the command adapter, including the command
+ * states and command buffer.
+ *
+ * @param[in] adapter
+ * Pointer to command adapter.
+ */
+static void
+cmd_deinit(cmd_adapter_t *adapter)
+{
+	int i;
+
+	for (i = 0; i < COMMAND_MAX_MAJORS; i++) {
+		if (adapter->p_major_cmd[i]) {
+			if (adapter->p_major_cmd[i]->cmd_st) {
+				free(adapter->p_major_cmd[i]->cmd_st);
+				adapter->p_major_cmd[i]->cmd_st = NULL;
+			}
+
+			free(adapter->p_major_cmd[i]);
+			adapter->p_major_cmd[i] = NULL;
+		}
+	}
+
+	if (adapter->cmd_buf) {
+		free(adapter->cmd_buf);
+		adapter->cmd_buf = NULL;
+	}
+}
+
+static int
+cmd_init(cmd_adapter_t *adapter)
+{
+	int err;
+
+	err = cmd_show_q_init(adapter);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to init cmd show queue");
+		return err;
+	}
+
+	return UDA_SUCCESS;
+}
+
+/**
+ * Separate the input command string into arguments.
+ *
+ * @param[in] adapter
+ * Pointer to command adapter.
+ * @param[in] buf_in
+ * The input command string.
+ * @param[in] in_size
+ * The size of the input command string.
+ * @param[out] argv
+ * The array to store separated arguments.
+ *
+ * @return
+ * The number of arguments on success, a negative error code otherwise.
+ */
+static int
+cmd_separate(cmd_adapter_t *adapter, const char *buf_in, uint32_t in_size,
+	     char **argv)
+{
+	char *cmd_buf = NULL;
+	char *tmp = NULL;
+	char *saveptr = NULL;
+	int i;
+
+	cmd_buf = calloc(1, in_size + 1);
+	if (!cmd_buf) {
+		PMD_DRV_LOG(ERR, "Failed to allocate cmd_buf");
+		return -UDA_ENONMEM;
+	}
+
+	(void)memcpy(cmd_buf, buf_in, in_size);
+
+	tmp = cmd_buf;
+	for (i = 1; i < MAX_ARGC; i++) {
+		argv[i] = strtok_r(tmp, " ", &saveptr);
+		if (!argv[i])
+			break;
+		tmp = NULL;
+	}
+
+	if (i == MAX_ARGC) {
+		PMD_DRV_LOG(ERR, "Too many parameters");
+		free(cmd_buf);
+		return -UDA_FAIL;
+	}
+
+	adapter->cmd_buf = cmd_buf;
+	return i;
+}
+
+/**
+ * Process the input command string, parse arguments, and return the result.
+ *
+ * @param[in] buf_in
+ * The input command string.
+ * @param[in] in_size
+ * The size of the input command string.
+ * @param[out] buf_out
+ * The output buffer to store the command result.
+ * @param[out] out_len
+ * The length of the output buffer.
+ * @param[in] max_buf_out_len
+ * The maximum size of the output buffer.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int
+hinic3_pmd_mml_lib(const char *buf_in, uint32_t in_size, char *buf_out,
+		   uint32_t *out_len, uint32_t max_buf_out_len)
+{
+	cmd_adapter_t *adapter = NULL;
+	char *argv[MAX_ARGC];
+	int argc;
+	int err = -UDA_EINVAL;
+
+	if (!buf_in || !in_size) {
+		PMD_DRV_LOG(ERR, "Invalid param, buf_in: %d, in_size: 0x%x",
+			    !!buf_in, in_size);
+		return err;
+	}
+
+	if (!buf_out || max_buf_out_len < MAX_SHOW_STR_LEN) {
+		PMD_DRV_LOG(ERR,
+			"Invalid param, buf_out: %d, max_buf_out_len: 0x%x",
+			!!buf_out, max_buf_out_len);
+		return err;
+	}
+
+	adapter = calloc(1, sizeof(cmd_adapter_t));
+	if (!adapter) {
+		PMD_DRV_LOG(ERR, "Failed to allocate cmd adapter");
+		return -UDA_ENONMEM;
+	}
+
+	err = cmd_init(adapter);
+	if (err != 0)
+		goto parse_cmd_fail;
+
+	argc = cmd_separate(adapter, buf_in, in_size, argv);
+	if (argc < 0) {
+		err = -UDA_FAIL;
+		goto parse_cmd_fail;
+	}
+
+	(void)memset(buf_out, 0, max_buf_out_len);
+	command_parse(adapter, argc, argv, buf_out, out_len);
+
+parse_cmd_fail:
+	cmd_deinit(adapter);
+	free(adapter);
+
+	return err;
+}
diff --git a/drivers/net/hinic3/mml/hinic3_mml_queue.c b/drivers/net/hinic3/mml/hinic3_mml_queue.c
new file mode 100644
index 0000000000..7d29c7ea52
--- /dev/null
+++ b/drivers/net/hinic3/mml/hinic3_mml_queue.c
@@ -0,0 +1,749 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025. All rights reserved.
+ */
+
+#include "hinic3_mml_lib.h"
+#include "hinic3_mml_cmd.h"
+#include "hinic3_mml_queue.h"
+
+#define ADDR_HI_BIT 32
+
+/**
+ * This function performs the same operations as `hinic3_pmd_mml_log`, but
+ * returns an error code on failure.
+ *
+ * @param[out] show_str
+ * The string buffer where the formatted message will be appended.
+ * @param[out] show_len
+ * The current length of the string in the buffer. It is updated after
+ * appending.
+ * @param[in] fmt
+ * The format string that specifies how to format the log message.
+ * @param[in] args
+ * The variable arguments to be formatted according to the format string.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ * - `-UDA_EINVAL` if an error occurs during the formatting process.
+ *
+ * @see hinic3_pmd_mml_log
+ */
+static int
+hinic3_pmd_mml_log_ret(char *show_str, int *show_len, const char *fmt, ...)
+{
+	va_list args;
+	int ret = 0;
+
+	va_start(args, fmt);
+	ret = vsnprintf(show_str + *show_len,
+			(size_t)(MAX_SHOW_STR_LEN - *show_len), fmt, args);
+	va_end(args);
+
+	if (ret > 0 && ret < MAX_SHOW_STR_LEN - *show_len) {
+		*show_len += ret;
+	} else {
+		PMD_DRV_LOG(ERR, "MML show string snprintf failed, err: %d",
+			    ret);
+		return -UDA_EINVAL;
+	}
+
+	return UDA_SUCCESS;
+}
+
+/**
+ * Format and log the information about the RQ by appending details such as
+ * queue ID, ci, sw pi, RQ depth, RQ WQE buffer size, buffer length, interrupt
+ * number, and MSIX vector to the output buffer.
+ *
+ * @param[in] self
+ * Pointer to major command structure.
+ * @param[in] rq_info
+ * The receive queue information to be displayed, which includes various
+ * properties like queue ID, depth, interrupt number, etc.
+ */
+static void
+rx_show_rq_info(major_cmd_t *self, struct nic_rq_info *rq_info)
+{
+	hinic3_pmd_mml_log(self->show_str, &self->show_len,
+			   "Receive queue information:");
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "queue_id:%u",
+			   rq_info->q_id);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "ci:%u",
+			   rq_info->ci);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "sw_pi:%u",
+			   rq_info->sw_pi);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "rq_depth:%u",
+			   rq_info->rq_depth);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len,
+			   "rq_wqebb_size:%u", rq_info->rq_wqebb_size);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "buf_len:%u",
+			   rq_info->buf_len);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "int_num:%u",
+			   rq_info->int_num);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "msix_vector:%u",
+			   rq_info->msix_vector);
+}
+
+static void
+rx_show_wqe(major_cmd_t *self, nic_rq_wqe *wqe)
+{
+	hinic3_pmd_mml_log(self->show_str, &self->show_len,
+			   "Rx buffer section information:");
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "buf_addr:0x%" PRIx64,
+		(((uint64_t)wqe->buf_desc.pkt_buf_addr_high) << ADDR_HI_BIT) |
+			wqe->buf_desc.pkt_buf_addr_low);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "buf_len:%u",
+			   wqe->buf_desc.len);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "rsvd0:%u",
+			   wqe->rsvd0);
+
+	hinic3_pmd_mml_log(self->show_str, &self->show_len,
+			   "Cqe buffer section information:");
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "buf_addr:0x%" PRIx64,
+		(((uint64_t)wqe->cqe_sect.pkt_buf_addr_high) << ADDR_HI_BIT) |
+			wqe->cqe_sect.pkt_buf_addr_low);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "buf_len:%u",
+			   wqe->cqe_sect.len);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "rsvd1:%u",
+			   wqe->rsvd1);
+}
+
+static void
+rx_show_cqe_info(major_cmd_t *self, struct tag_l2nic_rx_cqe *wqe_cs)
+{
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "Rx cqe info:");
+
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "cs_dw0:0x%08x",
+			   wqe_cs->dw0.value);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "rx_done:0x%x",
+			   wqe_cs->dw0.bs.rx_done);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "bp_en:0x%x",
+			   wqe_cs->dw0.bs.bp_en);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "decry_pkt:0x%x",
+			   wqe_cs->dw0.bs.decry_pkt);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "flush:0x%x",
+			   wqe_cs->dw0.bs.flush);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "spec_flags:0x%x",
+			   wqe_cs->dw0.bs.spec_flags);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "rsvd0:0x%x",
+			   wqe_cs->dw0.bs.rsvd0);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "lro_num:0x%x",
+			   wqe_cs->dw0.bs.lro_num);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len,
+			   "checksum_err:0x%x", wqe_cs->dw0.bs.checksum_err);
+
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "cs_dw1:0x%08x",
+			   wqe_cs->dw1.value);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "length:%u",
+			   wqe_cs->dw1.bs.length);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "vlan:0x%x",
+			   wqe_cs->dw1.bs.vlan);
+
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "cs_dw2:0x%08x",
+			   wqe_cs->dw2.value);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "rss_type:0x%x",
+			   wqe_cs->dw2.bs.rss_type);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "rsvd0:0x%x",
+			   wqe_cs->dw2.bs.rsvd0);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len,
+			   "vlan_offload_en:0x%x",
+			   wqe_cs->dw2.bs.vlan_offload_en);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "umbcast:0x%x",
+			   wqe_cs->dw2.bs.umbcast);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "rsvd1:0x%x",
+			   wqe_cs->dw2.bs.rsvd1);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "pkt_types:0x%x",
+			   wqe_cs->dw2.bs.pkt_types);
+
+	hinic3_pmd_mml_log(self->show_str, &self->show_len,
+			   "rss_hash_value:0x%08x",
+			   wqe_cs->dw3.bs.rss_hash_value);
+
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "cs_dw4:0x%08x",
+			   wqe_cs->dw4.value);
+
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "cs_dw5:0x%08x",
+			   wqe_cs->dw5.value);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "mac_type:0x%x",
+			   wqe_cs->dw5.ovs_bs.mac_type);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "l3_type:0x%x",
+			   wqe_cs->dw5.ovs_bs.l3_type);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "l4_type:0x%x",
+			   wqe_cs->dw5.ovs_bs.l4_type);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "rsvd0:0x%x",
+			   wqe_cs->dw5.ovs_bs.rsvd0);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len,
+			   "traffic_type:0x%x",
+			   wqe_cs->dw5.ovs_bs.traffic_type);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len,
+			   "traffic_from:0x%x",
+			   wqe_cs->dw5.ovs_bs.traffic_from);
+
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "cs_dw6:0x%08x",
+			   wqe_cs->dw6.value);
+
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "localtag:0x%08x",
+			   wqe_cs->dw7.ovs_bs.localtag);
+}
+
+#define HINIC3_PMD_MML_LOG_RET(fmt, ...)                             \
+	hinic3_pmd_mml_log_ret(self->show_str, &self->show_len, fmt, \
+			       ##__VA_ARGS__)
+
+/**
+ * Display help information for queue command.
+ *
+ * @param[in] self
+ * Pointer to major command structure.
+ * @param[in] argc
+ * A string representing the value associated with the command option (unused).
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+cmd_queue_help(major_cmd_t *self, __rte_unused char *argc)
+{
+	int ret;
+	ret = HINIC3_PMD_MML_LOG_RET("") ||
+	      HINIC3_PMD_MML_LOG_RET(" Usage: %s %s", self->name,
+				     "-i <device> -d <tx or rx> -t <type> "
+				     "-q <queue id> [-w <wqe id>]") ||
+	      HINIC3_PMD_MML_LOG_RET("\n %s", self->description) ||
+	      HINIC3_PMD_MML_LOG_RET("\n Options:\n") ||
+	      HINIC3_PMD_MML_LOG_RET("	%s, %-25s %s", "-h", "--help",
+				     "display this help and exit") ||
+	      HINIC3_PMD_MML_LOG_RET("	%s, %-25s %s", "-i",
+				     "--device=<device>",
+				     "device target, e.g. 08:00.0") ||
+	      HINIC3_PMD_MML_LOG_RET("	%s, %-25s %s", "-d", "--direction",
+				     "tx or rx") ||
+	      HINIC3_PMD_MML_LOG_RET("	%s, %-25s %s", "  ", "", "0: tx") ||
+	      HINIC3_PMD_MML_LOG_RET("	%s, %-25s %s", "  ", "", "1: rx") ||
+	      HINIC3_PMD_MML_LOG_RET("	%s, %-25s %s", "-t", "--type", "") ||
+	      HINIC3_PMD_MML_LOG_RET("	%s, %-25s %s", "  ", "",
+				     "0: queue info") ||
+	      HINIC3_PMD_MML_LOG_RET("	%s, %-25s %s", "  ", "",
+				     "1: wqe info") ||
+	      HINIC3_PMD_MML_LOG_RET("	%s, %-25s %s", "  ", "",
+				     "2: cqe info(only for rx)") ||
+	      HINIC3_PMD_MML_LOG_RET("	%s, %-25s %s", "-q", "--queue_id",
+				     "") ||
+	      HINIC3_PMD_MML_LOG_RET("	%s, %-25s %s", "-w", "--wqe_id", "") ||
+	      HINIC3_PMD_MML_LOG_RET("");
+
+	return ret;
+}
+
+static void
+tx_show_sq_info(major_cmd_t *self, struct nic_sq_info *sq_info)
+{
+	hinic3_pmd_mml_log(self->show_str, &self->show_len,
+			   "Send queue information:");
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "queue_id:%u",
+			   sq_info->q_id);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "pi:%u",
+			   sq_info->pi);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "ci:%u",
+			   sq_info->ci);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "fi:%u",
+			   sq_info->fi);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "sq_depth:%u",
+			   sq_info->sq_depth);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len,
+			   "sq_wqebb_size:%u", sq_info->sq_wqebb_size);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len,
+			   "cla_addr:0x%" PRIx64,
+			   (uint64_t)sq_info->cla_addr);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len,
+			   "doorbell phy_addr:0x%" PRIx64,
+			   (uint64_t)sq_info->doorbell.phy_addr);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "page_idx:%u",
+			   sq_info->page_idx);
+}
+
+static void
+tx_show_wqe(major_cmd_t *self, struct nic_tx_wqe_desc *wqe)
+{
+	struct nic_tx_ctrl_section *control = NULL;
+	struct nic_tx_task_section *task = NULL;
+	unsigned int *val = (unsigned int *)wqe;
+
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "dw0:0x%08x",
+			   *(val++));
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "dw1:0x%08x",
+			   *(val++));
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "dw2:0x%08x",
+			   *(val++));
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "dw3:0x%08x",
+			   *(val++));
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "dw4:0x%08x",
+			   *(val++));
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "dw5:0x%08x",
+			   *(val++));
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "dw6:0x%08x",
+			   *(val++));
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "dw7:0x%08x",
+			   *(val++));
+
+	hinic3_pmd_mml_log(self->show_str, &self->show_len,
+			   "\nWqe may analyse as follows:");
+	control = &wqe->control;
+	hinic3_pmd_mml_log(self->show_str, &self->show_len,
+			   "\nInformation about wqe control section:");
+	hinic3_pmd_mml_log(self->show_str, &self->show_len,
+			   "ctrl_format:0x%08x", control->ctrl_format);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "owner:%u",
+			   control->ctrl_sec.o);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len,
+			   "extended_compact:%u", control->ctrl_sec.ec);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len,
+			   "direct_normal:%u", control->ctrl_sec.dn);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "inline_sgl:%u",
+			   control->ctrl_sec.df);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "ts_size:%u",
+			   control->ctrl_sec.tss);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "bds_len:%u",
+			   control->ctrl_sec.bdsl);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "rsvd:%u",
+			   control->ctrl_sec.r);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "1st_buf_len:%u",
+			   control->ctrl_sec.len);
+
+	hinic3_pmd_mml_log(self->show_str, &self->show_len,
+			   "queue_info:0x%08x", control->queue_info);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "pri:%u",
+			   control->qsf.pri);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "uc:%u",
+			   control->qsf.uc);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "sctp:%u",
+			   control->qsf.sctp);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "mss:%u",
+			   control->qsf.mss);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "tcp_udp_cs:%u",
+			   control->qsf.tcp_udp_cs);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "tso:%u",
+			   control->qsf.tso);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "ufo:%u",
+			   control->qsf.ufo);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len,
+			   "payload_offset:%u", control->qsf.payload_offset);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "pkt_type:%u",
+			   control->qsf.pkt_type);
+
+	/* First buffer section. */
+	hinic3_pmd_mml_log(self->show_str, &self->show_len,
+			   "bd0_hi_addr:0x%08x", wqe->bd0_hi_addr);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len,
+			   "bd0_lo_addr:0x%08x", wqe->bd0_lo_addr);
+
+	/* Show the task section. */
+	task = &wqe->task;
+	hinic3_pmd_mml_log(self->show_str, &self->show_len,
+			   "\nInformation about wqe task section:");
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "vport_id:%u",
+			   task->bs2.vport_id);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "vport_type:%u",
+			   task->bs2.vport_type);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "traffic_type:%u",
+			   task->bs2.traffic_type);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len,
+			   "slave_port_id:%u", task->bs2.slave_port_id);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "rsvd0:%u",
+			   task->bs2.rsvd0);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "crypto_en:%u",
+			   task->bs2.crypto_en);
+	hinic3_pmd_mml_log(self->show_str, &self->show_len, "pkt_type:%u",
+			   task->bs2.pkt_type);
+}
+
+static int
+cmd_queue_target(major_cmd_t *self, char *argc)
+{
+	struct cmd_show_q_st *show_q = self->cmd_st;
+	int ret;
+
+	if (tool_get_valid_target(argc, &show_q->target) != UDA_SUCCESS) {
+		self->err_no = -UDA_EINVAL;
+		ret = snprintf(self->err_str, COMMANDER_ERR_MAX_STRING - 1,
+			       "Unknown device %s.", argc);
+		if (ret <= 0) {
+			PMD_DRV_LOG(ERR,
+				    "snprintf queue err msg failed, ret: %d",
+				    ret);
+		}
+		return -UDA_EINVAL;
+	}
+
+	return UDA_SUCCESS;
+}
+
+static int
+get_queue_type(major_cmd_t *self, char *argc)
+{
+	struct cmd_show_q_st *show_q = self->cmd_st;
+	unsigned int num = 0;
+
+	if (string_toui(argc, BASE_10, &num) != UDA_SUCCESS) {
+		self->err_no = -UDA_EINVAL;
+		(void)snprintf(self->err_str, COMMANDER_ERR_MAX_STRING - 1,
+			       "Unknown queue type %s.", argc);
+		return -UDA_EINVAL;
+	}
+
+	show_q->qobj = (int)num;
+	return UDA_SUCCESS;
+}
+
+static int
+get_queue_id(major_cmd_t *self, char *argc)
+{
+	struct cmd_show_q_st *show_q = self->cmd_st;
+	unsigned int num = 0;
+
+	if (string_toui(argc, BASE_10, &num) != UDA_SUCCESS) {
+		self->err_no = -UDA_EINVAL;
+		(void)snprintf(self->err_str, COMMANDER_ERR_MAX_STRING - 1,
+			       "Invalid queue id.");
+		return -UDA_EINVAL;
+	}
+
+	show_q->q_id = (int)num;
+	return UDA_SUCCESS;
+}
+
+static int
+get_q_wqe_id(major_cmd_t *self, char *argc)
+{
+	struct cmd_show_q_st *show_q = self->cmd_st;
+	unsigned int num = 0;
+
+	if (string_toui(argc, BASE_10, &num) != UDA_SUCCESS) {
+		self->err_no = -UDA_EINVAL;
+		(void)snprintf(self->err_str, COMMANDER_ERR_MAX_STRING - 1,
+			       "Invalid wqe id.");
+		return -UDA_EINVAL;
+	}
+
+	show_q->wqe_id = (int)num;
+	return UDA_SUCCESS;
+}
+
+/**
+ * Set direction for queue query.
+ *
+ * @param[in] self
+ * Pointer to major command structure.
+ * @param[in] argc
+ * The input argument representing the direction (as a string).
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ * - -UDA_EINVAL if the input is invalid (not a number or out of range);
+ * `err_no` and `err_str` are set accordingly.
+ */
+static int
+get_direction(major_cmd_t *self, char *argc)
+{
+	struct cmd_show_q_st *show_q = self->cmd_st;
+	unsigned int num = 0;
+
+	if (string_toui(argc, BASE_10, &num) != UDA_SUCCESS || num > 1) {
+		self->err_no = -UDA_EINVAL;
+		(void)snprintf(self->err_str, COMMANDER_ERR_MAX_STRING - 1,
+			       "Invalid direction, must be 0 (tx) or 1 (rx).");
+		return -UDA_EINVAL;
+	}
+
+	show_q->direction = (int)num;
+	return UDA_SUCCESS;
+}
+
+static int
+rx_param_check(major_cmd_t *self, struct cmd_show_q_st *rx_param)
+{
+	struct cmd_show_q_st *show_q = self->cmd_st;
+
+	if (rx_param->target.bus_num == TRGET_UNKNOWN_BUS_NUM) {
+		self->err_no = -UDA_EINVAL;
+		(void)snprintf(self->err_str, COMMANDER_ERR_MAX_STRING - 1,
+			       "Need device name.");
+		return self->err_no;
+	}
+
+	if (show_q->qobj > OBJ_CQE_INFO || show_q->qobj < OBJ_Q_INFO) {
+		self->err_no = -UDA_EINVAL;
+		(void)snprintf(self->err_str, COMMANDER_ERR_MAX_STRING - 1,
+			       "Unknown queue type.");
+		return self->err_no;
+	}
+
+	if (show_q->q_id == -1) {
+		self->err_no = -UDA_EINVAL;
+		(void)snprintf(self->err_str, COMMANDER_ERR_MAX_STRING - 1,
+			       "Need queue id.");
+		return self->err_no;
+	}
+
+	if (show_q->qobj != OBJ_Q_INFO && show_q->wqe_id == -1) {
+		self->err_no = -UDA_FAIL;
+		(void)snprintf(self->err_str, COMMANDER_ERR_MAX_STRING - 1,
+			       "Getting cqe_info or wqe_info requires a wqe id.");
+		return -UDA_FAIL;
+	}
+
+	if (show_q->qobj == OBJ_Q_INFO && show_q->wqe_id != -1) {
+		self->err_no = -UDA_FAIL;
+		(void)snprintf(self->err_str, COMMANDER_ERR_MAX_STRING - 1,
+			       "Queue info query does not take a wqe id.");
+		return -UDA_FAIL;
+	}
+
+	return UDA_SUCCESS;
+}
+
+static int
+tx_param_check(major_cmd_t *self, struct cmd_show_q_st *tx_param)
+{
+	struct cmd_show_q_st *show_q = self->cmd_st;
+
+	if (tx_param->target.bus_num == TRGET_UNKNOWN_BUS_NUM) {
+		self->err_no = -UDA_EINVAL;
+		(void)snprintf(self->err_str, COMMANDER_ERR_MAX_STRING - 1,
+			       "Need device name.");
+		return self->err_no;
+	}
+
+	if (show_q->qobj > OBJ_WQE_INFO || show_q->qobj < OBJ_Q_INFO) {
+		self->err_no = -UDA_EINVAL;
+		(void)snprintf(self->err_str, COMMANDER_ERR_MAX_STRING - 1,
+			       "Unknown queue type.");
+		return self->err_no;
+	}
+
+	if (show_q->q_id == -1) {
+		self->err_no = -UDA_EINVAL;
+		(void)snprintf(self->err_str, COMMANDER_ERR_MAX_STRING - 1,
+			       "Need queue id.");
+		return self->err_no;
+	}
+
+	if (show_q->qobj == OBJ_WQE_INFO && show_q->wqe_id == -1) {
+		self->err_no = -UDA_EINVAL;
+		(void)snprintf(self->err_str, COMMANDER_ERR_MAX_STRING - 1,
+			       "Getting wqe_info requires a wqe id.");
+		return self->err_no;
+	}
+
+	if (show_q->qobj != OBJ_WQE_INFO && show_q->wqe_id != -1) {
+		self->err_no = -UDA_EINVAL;
+		(void)snprintf(self->err_str, COMMANDER_ERR_MAX_STRING - 1,
+			       "Queue info query does not take a wqe id.");
+		return self->err_no;
+	}
+
+	return UDA_SUCCESS;
+}
+
+static void
+cmd_tx_execute(major_cmd_t *self)
+{
+	struct cmd_show_q_st *show_q = self->cmd_st;
+	int ret;
+	struct nic_sq_info sq_info = {0};
+	struct nic_tx_wqe_desc nwqe;
+
+	if (tx_param_check(self, show_q) != UDA_SUCCESS)
+		return;
+
+	if (show_q->qobj == OBJ_Q_INFO || show_q->qobj == OBJ_WQE_INFO) {
+		ret = lib_tx_sq_info_get(show_q->target, (void *)&sq_info,
+					 show_q->q_id);
+		if (ret != UDA_SUCCESS) {
+			self->err_no = ret;
+			(void)snprintf(self->err_str,
+				       COMMANDER_ERR_MAX_STRING - 1,
+				       "Get tx sq_info failed.");
+			return;
+		}
+
+		if (show_q->qobj == OBJ_Q_INFO) {
+			tx_show_sq_info(self, &sq_info);
+			return;
+		}
+
+		if (show_q->wqe_id >= (int)sq_info.sq_depth) {
+			self->err_no = -UDA_EINVAL;
+			(void)snprintf(self->err_str,
+				       COMMANDER_ERR_MAX_STRING - 1,
+				       "Max wqe id is %u.",
+				       sq_info.sq_depth - 1);
+			return;
+		}
+
+		(void)memset(&nwqe, 0, sizeof(nwqe));
+		ret = lib_tx_wqe_info_get(show_q->target, &sq_info,
+					  show_q->q_id, show_q->wqe_id,
+					  (void *)&nwqe, sizeof(nwqe));
+		if (ret != UDA_SUCCESS) {
+			self->err_no = ret;
+			(void)snprintf(self->err_str,
+				       COMMANDER_ERR_MAX_STRING - 1,
+				       "Get tx wqe_info failed.");
+			return;
+		}
+
+		tx_show_wqe(self, &nwqe);
+		return;
+	}
+}
+
+static void
+cmd_rx_execute(major_cmd_t *self)
+{
+	int ret;
+	struct nic_rq_info rq_info = {0};
+	struct tag_l2nic_rx_cqe cqe;
+	nic_rq_wqe wqe;
+	struct cmd_show_q_st *show_q = self->cmd_st;
+
+	if (rx_param_check(self, show_q) != UDA_SUCCESS)
+		return;
+
+	ret = lib_rx_rq_info_get(show_q->target, &rq_info, show_q->q_id);
+	if (ret != UDA_SUCCESS) {
+		self->err_no = ret;
+		(void)snprintf(self->err_str, COMMANDER_ERR_MAX_STRING - 1,
+			       "Get rx rq_info failed.");
+		return;
+	}
+
+	if (show_q->qobj == OBJ_Q_INFO) {
+		rx_show_rq_info(self, &rq_info);
+		return;
+	}
+
+	if ((uint32_t)show_q->wqe_id >= rq_info.rq_depth) {
+		self->err_no = -UDA_EINVAL;
+		(void)snprintf(self->err_str, COMMANDER_ERR_MAX_STRING - 1,
+			       "Max wqe id is %u.", rq_info.rq_depth - 1);
+		return;
+	}
+
+	if (show_q->qobj == OBJ_WQE_INFO) {
+		(void)memset(&wqe, 0, sizeof(wqe));
+		ret = lib_rx_wqe_info_get(show_q->target, &rq_info,
+					  show_q->q_id, show_q->wqe_id,
+					  (void *)&wqe, sizeof(wqe));
+		if (ret != UDA_SUCCESS) {
+			self->err_no = ret;
+			(void)snprintf(self->err_str,
+				       COMMANDER_ERR_MAX_STRING - 1,
+				       "Get rx wqe_info failed.");
+			return;
+		}
+
+		rx_show_wqe(self, &wqe);
+		return;
+	}
+
+	/* OBJ_CQE_INFO */
+	(void)memset(&cqe, 0, sizeof(cqe));
+	ret = lib_rx_cqe_info_get(show_q->target, &rq_info, show_q->q_id,
+				  show_q->wqe_id, (void *)&cqe, sizeof(cqe));
+	if (ret != UDA_SUCCESS) {
+		self->err_no = ret;
+		(void)snprintf(self->err_str, COMMANDER_ERR_MAX_STRING - 1,
+			       "Get rx cqe_info failed.");
+		return;
+	}
+
+	rx_show_cqe_info(self, &cqe);
+}
+
+/**
+ * Execute the NIC queue query command based on the direction.
+ *
+ * @param[in] self
+ * Pointer to major command structure.
+ */
+static void
+cmd_nic_queue_execute(major_cmd_t *self)
+{
+	struct cmd_show_q_st *show_q = self->cmd_st;
+
+	if (show_q->direction == -1) {
+		hinic3_pmd_mml_log(self->show_str, &self->show_len,
+				   "Parameter -d is required.");
+		return;
+	}
+
+	if (show_q->direction == 0)
+		cmd_tx_execute(self);
+	else
+		cmd_rx_execute(self);
+}
+
+/**
+ * Initialize and register the queue query command.
+ *
+ * @param[in] adapter
+ * The command adapter, which holds the registered commands and their states.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ * - -UDA_ENONMEM if memory allocation fails or another error occurs.
+ */
+int
+cmd_show_q_init(cmd_adapter_t *adapter)
+{
+	struct cmd_show_q_st *show_q = NULL;
+	major_cmd_t *show_q_cmd;
+
+	show_q_cmd = calloc(1, sizeof(*show_q_cmd));
+	if (!show_q_cmd) {
+		PMD_DRV_LOG(ERR, "Failed to allocate queue cmd");
+		return -UDA_ENONMEM;
+	}
+
+	(void)snprintf(show_q_cmd->name, MAX_NAME_LEN - 1, "%s", "nic_queue");
+	(void)snprintf(show_q_cmd->description,
+		MAX_DES_LEN - 1, "%s",
+		"Query the rx/tx queue information of a specified pci_addr");
+
+	show_q_cmd->option_count = 0;
+	show_q_cmd->execute = cmd_nic_queue_execute;
+
+	show_q = calloc(1, sizeof(*show_q));
+	if (!show_q) {
+		free(show_q_cmd);
+		PMD_DRV_LOG(ERR, "Failed to allocate show queue");
+		return -UDA_ENONMEM;
+	}
+
+	show_q->qobj = -1;
+	show_q->q_id = -1;
+	show_q->wqe_id = -1;
+	show_q->direction = -1;
+
+	show_q_cmd->cmd_st = show_q;
+
+	tool_target_init(&show_q->target.bus_num, show_q->target.dev_name,
+			 MAX_DEV_LEN);
+
+	major_command_option(show_q_cmd, "-h", "--help", PARAM_NOT_NEED,
+			     cmd_queue_help);
+	major_command_option(show_q_cmd, "-i", "--device", PARAM_NEED,
+			     cmd_queue_target);
+	major_command_option(show_q_cmd, "-t", "--type", PARAM_NEED,
+			     get_queue_type);
+	major_command_option(show_q_cmd, "-q", "--queue_id", PARAM_NEED,
+			     get_queue_id);
+	major_command_option(show_q_cmd, "-w", "--wqe_id", PARAM_NEED,
+			     get_q_wqe_id);
+	major_command_option(show_q_cmd, "-d", "--direction", PARAM_NEED,
+			     get_direction);
+
+	major_command_register(adapter, show_q_cmd);
+
+	return UDA_SUCCESS;
+}
diff --git a/drivers/net/hinic3/mml/hinic3_mml_queue.h b/drivers/net/hinic3/mml/hinic3_mml_queue.h
new file mode 100644
index 0000000000..633b1db50c
--- /dev/null
+++ b/drivers/net/hinic3/mml/hinic3_mml_queue.h
@@ -0,0 +1,256 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) Huawei Technologies Co., Ltd. 2025. All rights reserved.
+ * Description   : hinic3 mml for queue
+ */
+
+#ifndef _HINIC3_MML_QUEUE
+#define _HINIC3_MML_QUEUE
+
+#define OBJ_Q_INFO   0
+#define OBJ_WQE_INFO 1
+#define OBJ_CQE_INFO 2
+
+/* TX. */
+struct nic_tx_ctrl_section {
+	union {
+		struct {
+			unsigned int len : 18;
+			unsigned int r : 1;
+			unsigned int bdsl : 8;
+			unsigned int tss : 1;
+			unsigned int df : 1;
+			unsigned int dn : 1;
+			unsigned int ec : 1;
+			unsigned int o : 1;
+		} ctrl_sec;
+		unsigned int ctrl_format;
+	};
+	union {
+		struct {
+			unsigned int pkt_type : 2;
+			unsigned int payload_offset : 8;
+			unsigned int ufo : 1;
+			unsigned int tso : 1;
+			unsigned int tcp_udp_cs : 1;
+			unsigned int mss : 14;
+			unsigned int sctp : 1;
+			unsigned int uc : 1;
+			unsigned int pri : 3;
+		} qsf;
+		unsigned int queue_info;
+	};
+};
+
+struct nic_tx_task_section {
+	unsigned int dw0;
+	unsigned int dw1;
+
+	/* dw2. */
+	union {
+		struct {
+			/*
+			 * In the TX direction this carries the bond id;
+			 * in the RX direction, the function id.
+			 */
+			unsigned int vport_id : 12;
+			unsigned int vport_type : 4;
+			unsigned int traffic_type : 6;
+			/*
+			 * Used only in the TX direction: output port id for
+			 * control packets (LACP/LLDP).
+			 */
+			unsigned int slave_port_id : 2;
+			unsigned int rsvd0 : 6;
+			unsigned int crypto_en : 1;
+			unsigned int pkt_type : 1;
+		} bs2;
+		unsigned int dw2;
+	};
+
+	unsigned int dw3;
+};
+
+struct nic_tx_sge {
+	union {
+		struct {
+			unsigned int length : 31; /**< SGE length. */
+			unsigned int rsvd : 1;
+		} bs0;
+		unsigned int dw0;
+	};
+
+	union {
+		struct {
+			/* Key or unused. */
+			unsigned int key : 30;
+			/* 0:normal, 1:pointer to next SGE. */
+			unsigned int extension : 1;
+			/* 0:list, 1:last. */
+			unsigned int list : 1;
+		} bs1;
+		unsigned int dw1;
+	};
+
+	unsigned int dma_addr_high;
+	unsigned int dma_addr_low;
+};
+
+struct nic_tx_wqe_desc {
+	struct nic_tx_ctrl_section control;
+	struct nic_tx_task_section task;
+	unsigned int bd0_hi_addr;
+	unsigned int bd0_lo_addr;
+};
+
+/* RX. */
+typedef struct tag_l2nic_rx_cqe {
+	union {
+		struct {
+			unsigned int checksum_err : 16;
+			unsigned int lro_num : 8;
+			unsigned int rsvd0 : 1;
+			unsigned int spec_flags : 3;
+			unsigned int flush : 1;
+			unsigned int decry_pkt : 1;
+			unsigned int bp_en : 1;
+			unsigned int rx_done : 1;
+		} bs;
+		unsigned int value;
+	} dw0;
+
+	union {
+		struct {
+			unsigned int vlan : 16;
+			unsigned int length : 16;
+		} bs;
+		unsigned int value;
+	} dw1;
+
+	union {
+		struct {
+			unsigned int pkt_types : 12;
+			unsigned int rsvd1 : 7;
+			unsigned int umbcast : 2;
+			unsigned int vlan_offload_en : 1;
+			unsigned int rsvd0 : 2;
+			unsigned int rss_type : 8;
+		} bs;
+		unsigned int value;
+	} dw2;
+
+	union {
+		struct {
+			unsigned int rss_hash_value;
+		} bs;
+		unsigned int value;
+	} dw3;
+
+	/* dw4~dw7 fields are multiplexed for nic/ovs. */
+	union {
+		struct { /**< For nic. */
+			unsigned int tx_ts_seq : 16;
+			unsigned int msg_1588_offset : 8;
+			unsigned int msg_1588_type : 4;
+			unsigned int rsvd : 1;
+			unsigned int if_rx_ts : 1;
+			unsigned int if_tx_ts : 1;
+			unsigned int if_1588 : 1;
+		} bs;
+
+		struct { /**< For ovs. */
+			unsigned int reserved;
+		} ovs_bs;
+
+		struct {
+			unsigned int xid;
+		} crypt_bs;
+
+		unsigned int value;
+	} dw4;
+
+	union {
+		struct { /**< For nic. */
+			unsigned int msg_1588_ts;
+		} bs;
+
+		struct { /**< For ovs. */
+			unsigned int traffic_from : 16;
+			unsigned int traffic_type : 6;
+			unsigned int rsvd0 : 2;
+			unsigned int l4_type : 3;
+			unsigned int l3_type : 3;
+			unsigned int mac_type : 2;
+		} ovs_bs;
+
+		struct { /**< For crypt. */
+			unsigned int esp_next_head : 8;
+			unsigned int decrypt_status : 8;
+			unsigned int rsvd : 16;
+		} crypt_bs;
+
+		unsigned int value;
+	} dw5;
+
+	union {
+		struct { /**< For nic. */
+			unsigned int lro_ts;
+		} bs;
+
+		struct { /**< For ovs. */
+			unsigned int reserved;
+		} ovs_bs;
+
+		unsigned int value;
+	} dw6;
+
+	union {
+		struct { /**< For nic. */
+			/* Data length of the first or middle packet. */
+			unsigned int first_len : 13;
+			/* Data length of the last packet. */
+			unsigned int last_len : 13;
+			/* Number of packets. */
+			unsigned int pkt_num : 5;
+			/* The other dw fields are valid only when this bit is 1. */
+			unsigned int super_cqe_en : 1;
+		} bs;
+
+		struct { /**< For ovs. */
+			unsigned int localtag;
+		} ovs_bs;
+
+		unsigned int value;
+	} dw7;
+} l2nic_rx_cqe_s;
+
+struct nic_rq_bd_sec {
+	unsigned int pkt_buf_addr_high; /**< Packet buffer address high. */
+	unsigned int pkt_buf_addr_low;	/**< Packet buffer address low. */
+	unsigned int len;
+};
+
+typedef struct _nic_rq_wqe {
+	/* RX buffer SGE. Note: buf_desc.len is limited to bits 0~13. */
+	struct nic_rq_bd_sec buf_desc;
+	/* Reserved field 0 for 16B align. */
+	unsigned int rsvd0;
+	/*
+	 * CQE buffer SGE. Note: cqe_sect.len is in units of 16B and limited
+	 * to bits 0~4.
+	 */
+	struct nic_rq_bd_sec cqe_sect;
+	/* Reserved field 1 for unused. */
+	unsigned int rsvd1;
+} nic_rq_wqe;
+
+/* CMD. */
+struct cmd_show_q_st {
+	struct tool_target target;
+
+	int qobj;
+	int q_id;
+	int wqe_id;
+	int direction;
+};
+
+#endif /* _HINIC3_MML_QUEUE */
diff --git a/drivers/net/hinic3/mml/meson.build b/drivers/net/hinic3/mml/meson.build
new file mode 100644
index 0000000000..f8d2650d8d
--- /dev/null
+++ b/drivers/net/hinic3/mml/meson.build
@@ -0,0 +1,62 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2025 Huawei Technologies Co., Ltd
+
+sources = files(
+    'hinic3_dbg.c',
+    'hinic3_mml_cmd.c',
+    'hinic3_mml_ioctl.c',
+    'hinic3_mml_lib.c',
+    'hinic3_mml_main.c',
+    'hinic3_mml_queue.c',
+)
+
+extra_flags = [
+    '-Wno-cast-qual',
+    '-Wno-format',
+    '-Wno-format-nonliteral',
+    '-Wno-format-security',
+    '-Wno-missing-braces',
+    '-Wno-missing-field-initializers',
+    '-Wno-missing-prototypes',
+    '-Wno-pointer-sign',
+    '-Wno-pointer-to-int-cast',
+    '-Wno-sign-compare',
+    '-Wno-strict-aliasing',
+    '-Wno-unused-parameter',
+    '-Wno-unused-value',
+    '-Wno-unused-variable',
+]
+
+# The driver runs only on 64-bit machines; remove 32-bit warnings
+if not dpdk_conf.get('RTE_ARCH_64')
+    extra_flags += [
+        '-Wno-int-to-pointer-cast',
+        '-Wno-pointer-to-int-cast',
+    ]
+endif
+
+foreach flag: extra_flags
+    if cc.has_argument(flag)
+        cflags += flag
+    endif
+endforeach
+
+deps += ['hash']
+
+c_args = cflags
+includes += include_directories('../')
+includes += include_directories('../base/')
+
+mml_lib = static_library(
+    'hinic3_mml',
+    sources,
+    dependencies: [
+        static_rte_eal,
+        static_rte_ethdev,
+        static_rte_bus_pci,
+        static_rte_hash,
+    ],
+    include_directories: includes,
+    c_args: c_args,
+)
+mml_objs = mml_lib.extract_all_objects()
-- 
2.47.0.windows.2


^ permalink raw reply	[flat|nested] 16+ messages in thread

* [RFC 16/18] net/hinic3: add RSS promiscuous ops
  2025-04-18  7:02 [RFC 00/18] add hinic3 PMD driver Feifei Wang
                   ` (6 preceding siblings ...)
  2025-04-18  7:02 ` [RFC 15/18] net/hinic3: add MML and EEPROM access feature Feifei Wang
@ 2025-04-18  7:02 ` Feifei Wang
  2025-04-18  7:02 ` [RFC 17/18] net/hinic3: add FDIR flow control module Feifei Wang
  2025-04-18  7:02 ` [RFC 18/18] drivers/net: add hinic3 PMD build and doc files Feifei Wang
  9 siblings, 0 replies; 16+ messages in thread
From: Feifei Wang @ 2025-04-18  7:02 UTC (permalink / raw)
  To: dev; +Cc: Xin Wang, Feifei Wang, Yi Chen

From: Xin Wang <wangxin679@h-partners.com>

Add RSS and promiscuous ops related function codes.

Signed-off-by: Xin Wang <wangxin679@h-partners.com>
Reviewed-by: Feifei Wang <wangfeifei40@huawei.com>
Reviewed-by: Yi Chen <chenyi221@huawei.com>
---
 drivers/net/hinic3/hinic3_ethdev.c | 370 +++++++++++++++++++++++++++++
 drivers/net/hinic3/hinic3_ethdev.h |  31 +++
 2 files changed, 401 insertions(+)

diff --git a/drivers/net/hinic3/hinic3_ethdev.c b/drivers/net/hinic3/hinic3_ethdev.c
index 9c5decb867..9d2dcf95f7 100644
--- a/drivers/net/hinic3/hinic3_ethdev.c
+++ b/drivers/net/hinic3/hinic3_ethdev.c
@@ -2277,6 +2277,281 @@ hinic3_dev_allmulticast_disable(struct rte_eth_dev *dev)
 	return 0;
 }
 
+/**
+ * Enable promiscuous mode.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_dev_promiscuous_enable(struct rte_eth_dev *dev)
+{
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+	u32 rx_mode;
+	int err;
+
+	err = hinic3_mutex_lock(&nic_dev->rx_mode_mutex);
+	if (err)
+		return err;
+
+	rx_mode = nic_dev->rx_mode | HINIC3_RX_MODE_PROMISC;
+
+	err = hinic3_set_rx_mode(nic_dev->hwdev, rx_mode);
+	if (err) {
+		(void)hinic3_mutex_unlock(&nic_dev->rx_mode_mutex);
+		PMD_DRV_LOG(ERR, "Enable promiscuous failed");
+		return err;
+	}
+
+	nic_dev->rx_mode = rx_mode;
+
+	(void)hinic3_mutex_unlock(&nic_dev->rx_mode_mutex);
+
+	PMD_DRV_LOG(INFO,
+		    "Enable promiscuous, nic_dev: %s, port_id: %d, promisc: %d",
+		    nic_dev->dev_name, dev->data->port_id,
+		    dev->data->promiscuous);
+	return 0;
+}
+
+/**
+ * Disable promiscuous mode.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_dev_promiscuous_disable(struct rte_eth_dev *dev)
+{
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+	u32 rx_mode;
+	int err;
+
+	err = hinic3_mutex_lock(&nic_dev->rx_mode_mutex);
+	if (err)
+		return err;
+
+	rx_mode = nic_dev->rx_mode & (~HINIC3_RX_MODE_PROMISC);
+
+	err = hinic3_set_rx_mode(nic_dev->hwdev, rx_mode);
+	if (err) {
+		(void)hinic3_mutex_unlock(&nic_dev->rx_mode_mutex);
+		PMD_DRV_LOG(ERR, "Disable promiscuous failed");
+		return err;
+	}
+
+	nic_dev->rx_mode = rx_mode;
+
+	(void)hinic3_mutex_unlock(&nic_dev->rx_mode_mutex);
+
+	PMD_DRV_LOG(INFO,
+		"Disable promiscuous, nic_dev: %s, port_id: %d, promisc: %d",
+		nic_dev->dev_name, dev->data->port_id, dev->data->promiscuous);
+	return 0;
+}
+
+/**
+ * Get flow control configuration, including auto-negotiation and RX/TX pause
+ * settings.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ *
+ * @param[out] fc_conf
+ * The flow control configuration to be filled.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+
+/**
+ * Update the RSS hash key and RSS hash type.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] rss_conf
+ * RSS configuration data.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_rss_hash_update(struct rte_eth_dev *dev,
+		       struct rte_eth_rss_conf *rss_conf)
+{
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+	struct hinic3_rss_type rss_type = {0};
+	u64 rss_hf = rss_conf->rss_hf;
+	int err = 0;
+
+	if (nic_dev->rss_state == HINIC3_RSS_DISABLE) {
+		if (rss_hf != 0)
+			return -EINVAL;
+
+		PMD_DRV_LOG(INFO, "RSS is not enabled");
+		return 0;
+	}
+
+	if (rss_conf->rss_key_len > HINIC3_RSS_KEY_SIZE) {
+		PMD_DRV_LOG(ERR, "Invalid RSS key, rss_key_len: %d",
+			    rss_conf->rss_key_len);
+		return -EINVAL;
+	}
+
+	if (rss_conf->rss_key) {
+		memcpy((void *)nic_dev->rss_key, (void *)rss_conf->rss_key,
+		       (size_t)rss_conf->rss_key_len);
+		err = hinic3_rss_set_hash_key(nic_dev->hwdev, nic_dev->rss_key,
+					      HINIC3_RSS_KEY_SIZE);
+		if (err) {
+			PMD_DRV_LOG(ERR, "Set RSS hash key failed");
+			return err;
+		}
+	}
+
+	rss_type.ipv4 = (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+				   RTE_ETH_RSS_NONFRAG_IPV4_OTHER))
+				? 1
+				: 0;
+	rss_type.tcp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0;
+	rss_type.ipv6 = (rss_hf & (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 |
+				   RTE_ETH_RSS_NONFRAG_IPV6_OTHER))
+				? 1
+				: 0;
+	rss_type.ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_EX) ? 1 : 0;
+	rss_type.tcp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0;
+	rss_type.tcp_ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) ? 1 : 0;
+	rss_type.udp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0;
+	rss_type.udp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0;
+
+	err = hinic3_set_rss_type(nic_dev->hwdev, rss_type);
+	if (err)
+		PMD_DRV_LOG(ERR, "Set RSS type failed");
+
+	return err;
+}
+
+/**
+ * Get the RSS hash configuration.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[out] rss_conf
+ * RSS configuration data.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_rss_conf_get(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf)
+{
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+	struct hinic3_rss_type rss_type = {0};
+	int err;
+
+	if (!rss_conf)
+		return -EINVAL;
+
+	if (nic_dev->rss_state == HINIC3_RSS_DISABLE) {
+		rss_conf->rss_hf = 0;
+		PMD_DRV_LOG(INFO, "RSS is not enabled");
+		return 0;
+	}
+
+	if (rss_conf->rss_key && rss_conf->rss_key_len >= HINIC3_RSS_KEY_SIZE) {
+		/*
+		 * Get RSS key from driver to reduce the frequency of the MPU
+		 * accessing the RSS memory.
+		 */
+		rss_conf->rss_key_len = sizeof(nic_dev->rss_key);
+		memcpy((void *)rss_conf->rss_key, (void *)nic_dev->rss_key,
+		       (size_t)rss_conf->rss_key_len);
+	}
+
+	err = hinic3_get_rss_type(nic_dev->hwdev, &rss_type);
+	if (err)
+		return err;
+
+	rss_conf->rss_hf = 0;
+	rss_conf->rss_hf |=
+		rss_type.ipv4 ? (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+				 RTE_ETH_RSS_NONFRAG_IPV4_OTHER)
+			      : 0;
+	rss_conf->rss_hf |= rss_type.tcp_ipv4 ? RTE_ETH_RSS_NONFRAG_IPV4_TCP
+					      : 0;
+	rss_conf->rss_hf |=
+		rss_type.ipv6 ? (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 |
+				 RTE_ETH_RSS_NONFRAG_IPV6_OTHER)
+			      : 0;
+	rss_conf->rss_hf |= rss_type.ipv6_ext ? RTE_ETH_RSS_IPV6_EX : 0;
+	rss_conf->rss_hf |= rss_type.tcp_ipv6 ? RTE_ETH_RSS_NONFRAG_IPV6_TCP
+					      : 0;
+	rss_conf->rss_hf |= rss_type.tcp_ipv6_ext ? RTE_ETH_RSS_IPV6_TCP_EX : 0;
+	rss_conf->rss_hf |= rss_type.udp_ipv4 ? RTE_ETH_RSS_NONFRAG_IPV4_UDP
+					      : 0;
+	rss_conf->rss_hf |= rss_type.udp_ipv6 ? RTE_ETH_RSS_NONFRAG_IPV6_UDP
+					      : 0;
+
+	return 0;
+}
+
+/**
+ * Get the RETA indirection table.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[out] reta_conf
+ * Pointer to RETA configuration structure array.
+ * @param[in] reta_size
+ * Size of the RETA table.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_rss_reta_query(struct rte_eth_dev *dev,
+		      struct rte_eth_rss_reta_entry64 *reta_conf,
+		      uint16_t reta_size)
+{
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+	u32 indirtbl[HINIC3_RSS_INDIR_SIZE] = {0};
+	u16 idx, shift;
+	u16 i;
+	int err;
+
+	if (nic_dev->rss_state == HINIC3_RSS_DISABLE) {
+		PMD_DRV_LOG(INFO, "RSS is not enabled");
+		return 0;
+	}
+
+	if (reta_size != HINIC3_RSS_INDIR_SIZE) {
+		PMD_DRV_LOG(ERR, "Invalid reta size, reta_size: %d", reta_size);
+		return -EINVAL;
+	}
+
+	err = hinic3_rss_get_indir_tbl(nic_dev->hwdev, indirtbl,
+				       HINIC3_RSS_INDIR_SIZE);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Get RSS reta table failed, error: %d", err);
+		return err;
+	}
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			reta_conf[idx].reta[shift] = (uint16_t)indirtbl[i];
+	}
+
+	return 0;
+}
+
 static int
 hinic3_get_eeprom(__rte_unused struct rte_eth_dev *dev,
 		  struct rte_dev_eeprom_info *info)
@@ -2287,6 +2562,68 @@ hinic3_get_eeprom(__rte_unused struct rte_eth_dev *dev,
 				  &info->length, MAX_BUF_OUT_LEN);
 }
 
+/**
+ * Update the RETA indirection table.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] reta_conf
+ * Pointer to RETA configuration structure array.
+ * @param[in] reta_size
+ * Size of the RETA table.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_rss_reta_update(struct rte_eth_dev *dev,
+		       struct rte_eth_rss_reta_entry64 *reta_conf,
+		       uint16_t reta_size)
+{
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+	u32 indirtbl[HINIC3_RSS_INDIR_SIZE] = {0};
+	u16 idx, shift;
+	u16 i;
+	int err;
+
+	if (nic_dev->rss_state == HINIC3_RSS_DISABLE)
+		return 0;
+
+	if (reta_size != HINIC3_RSS_INDIR_SIZE) {
+		PMD_DRV_LOG(ERR, "Invalid reta size, reta_size: %d", reta_size);
+		return -EINVAL;
+	}
+
+	err = hinic3_rss_get_indir_tbl(nic_dev->hwdev, indirtbl,
+				       HINIC3_RSS_INDIR_SIZE);
+	if (err)
+		return err;
+
+	/* Update RSS reta table. */
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			indirtbl[i] = reta_conf[idx].reta[shift];
+	}
+
+	for (i = 0; i < reta_size; i++) {
+		if (indirtbl[i] >= nic_dev->num_rqs) {
+			PMD_DRV_LOG(ERR,
+				"Invalid reta entry, index: %d, num_rqs: %d",
+				indirtbl[i], nic_dev->num_rqs);
+			return -EFAULT;
+		}
+	}
+
+	err = hinic3_rss_set_indir_tbl(nic_dev->hwdev, indirtbl,
+				       HINIC3_RSS_INDIR_SIZE);
+	if (err)
+		PMD_DRV_LOG(ERR, "Set RSS reta table failed");
+
+	return err;
+}
+
 /**
  * Get device generic statistics.
  *
@@ -2857,6 +3194,29 @@ hinic3_set_mc_addr_list(struct rte_eth_dev *dev,
 	return 0;
 }
 
+/**
+ * Get the flow operations of the device.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[out] arg
+ * Pointer that is filled with the address of the flow operations structure.
+ *
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_dev_filter_ctrl(struct rte_eth_dev *dev, const struct rte_flow_ops **arg)
+{
+	RTE_SET_USED(dev);
+	*arg = &hinic3_flow_ops;
+	return 0;
+}
+
 static int
 hinic3_get_reg(__rte_unused struct rte_eth_dev *dev,
 	       __rte_unused struct rte_dev_reg_info *regs)
@@ -2890,6 +3250,12 @@ static const struct eth_dev_ops hinic3_pmd_ops = {
 	.vlan_offload_set              = hinic3_vlan_offload_set,
 	.allmulticast_enable           = hinic3_dev_allmulticast_enable,
 	.allmulticast_disable          = hinic3_dev_allmulticast_disable,
+	.promiscuous_enable            = hinic3_dev_promiscuous_enable,
+	.promiscuous_disable           = hinic3_dev_promiscuous_disable,
+	.rss_hash_update               = hinic3_rss_hash_update,
+	.rss_hash_conf_get             = hinic3_rss_conf_get,
+	.reta_update                   = hinic3_rss_reta_update,
+	.reta_query                    = hinic3_rss_reta_query,
 	.get_eeprom                    = hinic3_get_eeprom,
 	.stats_get                     = hinic3_dev_stats_get,
 	.stats_reset                   = hinic3_dev_stats_reset,
@@ -2931,6 +3297,10 @@ static const struct eth_dev_ops hinic3_pmd_vf_ops = {
 	.vlan_offload_set              = hinic3_vlan_offload_set,
 	.allmulticast_enable           = hinic3_dev_allmulticast_enable,
 	.allmulticast_disable          = hinic3_dev_allmulticast_disable,
+	.rss_hash_update               = hinic3_rss_hash_update,
+	.rss_hash_conf_get             = hinic3_rss_conf_get,
+	.reta_update                   = hinic3_rss_reta_update,
+	.reta_query                    = hinic3_rss_reta_query,
 	.get_eeprom                    = hinic3_get_eeprom,
 	.stats_get                     = hinic3_dev_stats_get,
 	.stats_reset                   = hinic3_dev_stats_reset,
diff --git a/drivers/net/hinic3/hinic3_ethdev.h b/drivers/net/hinic3/hinic3_ethdev.h
index a69cf972e7..5dd7c7821a 100644
--- a/drivers/net/hinic3/hinic3_ethdev.h
+++ b/drivers/net/hinic3/hinic3_ethdev.h
@@ -97,6 +97,10 @@ struct hinic3_nic_dev {
 	u16 rx_buff_len;
 	u16 mtu_size;
 
+	u16 rss_state;
+	u8 num_rss; /**< Number of RSS queues. */
+	u8 rsvd0;   /**< Reserved field 0. */
+
 	u32 rx_mode;
 	u8 rx_queue_list[HINIC3_MAX_QUEUE_NUM];
 	rte_spinlock_t queue_list_lock;
@@ -106,6 +110,8 @@ struct hinic3_nic_dev {
 	u32 default_cos;
 	u32 rx_csum_en;
 
+	u8 rss_key[HINIC3_RSS_KEY_SIZE];
+
 	unsigned long dev_status;
 
 	struct rte_ether_addr default_addr;
@@ -116,4 +122,29 @@ struct hinic3_nic_dev {
 	u32 vfta[HINIC3_VFTA_SIZE]; /**< VLAN bitmap. */
 };
 
+/**
+ * Enable interrupt for the specified RX queue.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] queue_id
+ * The ID of the receive queue for which the interrupt is being enabled.
+ * @return
+ * 0 on success, a negative error code on failure.
+ */
+int hinic3_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id);
+
+/**
+ * Disable interrupt for the specified RX queue.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] queue_id
+ * The ID of the receive queue for which the interrupt is being disabled.
+ * @return
+ * 0 on success, a negative error code on failure.
+ */
+int hinic3_dev_rx_queue_intr_disable(struct rte_eth_dev *dev,
+				     uint16_t queue_id);
+
 #endif /* _HINIC3_ETHDEV_H_ */
-- 
2.47.0.windows.2


^ permalink raw reply	[flat|nested] 16+ messages in thread

* [RFC 17/18] net/hinic3: add FDIR flow control module
  2025-04-18  7:02 [RFC 00/18] add hinic3 PMD driver Feifei Wang
                   ` (7 preceding siblings ...)
  2025-04-18  7:02 ` [RFC 16/18] net/hinic3: add RSS promiscuous ops Feifei Wang
@ 2025-04-18  7:02 ` Feifei Wang
  2025-04-18  7:02 ` [RFC 18/18] drivers/net: add hinic3 PMD build and doc files Feifei Wang
  9 siblings, 0 replies; 16+ messages in thread
From: Feifei Wang @ 2025-04-18  7:02 UTC (permalink / raw)
  To: dev; +Cc: Yi Chen, Xin Wang, Feifei Wang

From: Yi Chen <chenyi221@huawei.com>

Add support for flow director filters, including ethertype, IPv4,
IPv6, and tunnel VXLAN filters. In addition, users can add or delete
filters.

Signed-off-by: Yi Chen <chenyi221@huawei.com>
Reviewed-by: Xin Wang <wangxin679@h-partners.com>
Reviewed-by: Feifei Wang <wangfeifei40@huawei.com>
---
 drivers/net/hinic3/hinic3_ethdev.c |   82 ++
 drivers/net/hinic3/hinic3_ethdev.h |   17 +
 drivers/net/hinic3/hinic3_fdir.c   | 1394 +++++++++++++++++++++++
 drivers/net/hinic3/hinic3_fdir.h   |  398 +++++++
 drivers/net/hinic3/hinic3_flow.c   | 1700 ++++++++++++++++++++++++++++
 drivers/net/hinic3/hinic3_flow.h   |   80 ++
 6 files changed, 3671 insertions(+)
 create mode 100644 drivers/net/hinic3/hinic3_fdir.c
 create mode 100644 drivers/net/hinic3/hinic3_fdir.h
 create mode 100644 drivers/net/hinic3/hinic3_flow.c
 create mode 100644 drivers/net/hinic3/hinic3_flow.h

diff --git a/drivers/net/hinic3/hinic3_ethdev.c b/drivers/net/hinic3/hinic3_ethdev.c
index 9d2dcf95f7..2b8d2dc7a7 100644
--- a/drivers/net/hinic3/hinic3_ethdev.c
+++ b/drivers/net/hinic3/hinic3_ethdev.c
@@ -2369,6 +2369,84 @@ hinic3_dev_promiscuous_disable(struct rte_eth_dev *dev)
  * @return
  * 0 on success, non-zero on failure.
  */
+static int
+hinic3_dev_flow_ctrl_get(struct rte_eth_dev *dev,
+			 struct rte_eth_fc_conf *fc_conf)
+{
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+	struct nic_pause_config nic_pause;
+	int err;
+
+	err = hinic3_mutex_lock(&nic_dev->pause_mutuex);
+	if (err)
+		return err;
+
+	memset(&nic_pause, 0, sizeof(nic_pause));
+	err = hinic3_get_pause_info(nic_dev->hwdev, &nic_pause);
+	if (err) {
+		(void)hinic3_mutex_unlock(&nic_dev->pause_mutuex);
+		return err;
+	}
+
+	if (nic_dev->pause_set || !nic_pause.auto_neg) {
+		nic_pause.rx_pause = nic_dev->nic_pause.rx_pause;
+		nic_pause.tx_pause = nic_dev->nic_pause.tx_pause;
+	}
+
+	fc_conf->autoneg = nic_pause.auto_neg;
+
+	if (nic_pause.tx_pause && nic_pause.rx_pause)
+		fc_conf->mode = RTE_ETH_FC_FULL;
+	else if (nic_pause.tx_pause)
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
+	else if (nic_pause.rx_pause)
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
+	else
+		fc_conf->mode = RTE_ETH_FC_NONE;
+
+	(void)hinic3_mutex_unlock(&nic_dev->pause_mutuex);
+	return 0;
+}
+
+static int
+hinic3_dev_flow_ctrl_set(struct rte_eth_dev *dev,
+			 struct rte_eth_fc_conf *fc_conf)
+{
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+	struct nic_pause_config nic_pause;
+	int err;
+
+	err = hinic3_mutex_lock(&nic_dev->pause_mutuex);
+	if (err)
+		return err;
+
+	memset(&nic_pause, 0, sizeof(nic_pause));
+	if ((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL ||
+	    (fc_conf->mode & RTE_ETH_FC_TX_PAUSE))
+		nic_pause.tx_pause = true;
+
+	if ((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL ||
+	    (fc_conf->mode & RTE_ETH_FC_RX_PAUSE))
+		nic_pause.rx_pause = true;
+
+	err = hinic3_set_pause_info(nic_dev->hwdev, nic_pause);
+	if (err) {
+		(void)hinic3_mutex_unlock(&nic_dev->pause_mutuex);
+		return err;
+	}
+
+	nic_dev->pause_set = true;
+	nic_dev->nic_pause.rx_pause = nic_pause.rx_pause;
+	nic_dev->nic_pause.tx_pause = nic_pause.tx_pause;
+
+	PMD_DRV_LOG(INFO,
+		    "Only tx or rx pause settings are supported, tx: %s, rx: %s",
+		    nic_pause.tx_pause ? "on" : "off",
+		    nic_pause.rx_pause ? "on" : "off");
+
+	(void)hinic3_mutex_unlock(&nic_dev->pause_mutuex);
+	return 0;
+}
 
 /**
  * Update the RSS hash key and RSS hash type.
@@ -3252,6 +3330,8 @@ static const struct eth_dev_ops hinic3_pmd_ops = {
 	.allmulticast_disable          = hinic3_dev_allmulticast_disable,
 	.promiscuous_enable            = hinic3_dev_promiscuous_enable,
 	.promiscuous_disable           = hinic3_dev_promiscuous_disable,
+	.flow_ctrl_get                 = hinic3_dev_flow_ctrl_get,
+	.flow_ctrl_set                 = hinic3_dev_flow_ctrl_set,
 	.rss_hash_update               = hinic3_rss_hash_update,
 	.rss_hash_conf_get             = hinic3_rss_conf_get,
 	.reta_update                   = hinic3_rss_reta_update,
@@ -3269,6 +3349,7 @@ static const struct eth_dev_ops hinic3_pmd_ops = {
 	.mac_addr_remove               = hinic3_mac_addr_remove,
 	.mac_addr_add                  = hinic3_mac_addr_add,
 	.set_mc_addr_list              = hinic3_set_mc_addr_list,
+	.flow_ops_get                  = hinic3_dev_filter_ctrl,
 	.get_reg                       = hinic3_get_reg,
 };
 
@@ -3313,6 +3394,7 @@ static const struct eth_dev_ops hinic3_pmd_vf_ops = {
 	.mac_addr_remove               = hinic3_mac_addr_remove,
 	.mac_addr_add                  = hinic3_mac_addr_add,
 	.set_mc_addr_list              = hinic3_set_mc_addr_list,
+	.flow_ops_get                  = hinic3_dev_filter_ctrl,
 };
 
 /**
diff --git a/drivers/net/hinic3/hinic3_ethdev.h b/drivers/net/hinic3/hinic3_ethdev.h
index 5dd7c7821a..07e24e971c 100644
--- a/drivers/net/hinic3/hinic3_ethdev.h
+++ b/drivers/net/hinic3/hinic3_ethdev.h
@@ -8,6 +8,8 @@
 #include <rte_ethdev.h>
 #include <rte_ethdev_core.h>
 
+#include "hinic3_fdir.h"
+
 #define HINIC3_PMD_DRV_VERSION "B106"
 
 #define PCI_DEV_TO_INTR_HANDLE(pci_dev) ((pci_dev)->intr_handle)
@@ -83,6 +85,9 @@ enum nic_feature_cap {
 
 #define DEFAULT_DRV_FEATURE 0x3FFF
 
+TAILQ_HEAD(hinic3_ethertype_filter_list, rte_flow);
+TAILQ_HEAD(hinic3_fdir_rule_filter_list, rte_flow);
+
 struct hinic3_nic_dev {
 	struct hinic3_hwdev *hwdev; /**< Hardware device. */
 	struct hinic3_txq **txqs;
@@ -114,14 +119,26 @@ struct hinic3_nic_dev {
 
 	unsigned long dev_status;
 
+	u8 pause_set; /**< Flag of PAUSE frame setting. */
+	pthread_mutex_t pause_mutuex;
+	struct nic_pause_config nic_pause;
+
 	struct rte_ether_addr default_addr;
 	struct rte_ether_addr *mc_list;
 
 	char dev_name[HINIC3_DEV_NAME_LEN];
 	u64 feature_cap;
 	u32 vfta[HINIC3_VFTA_SIZE]; /**< VLAN bitmap. */
+
+	u16 tcam_rule_nums;
+	u16 ethertype_rule_nums;
+	struct hinic3_tcam_info tcam;
+	struct hinic3_ethertype_filter_list filter_ethertype_list;
+	struct hinic3_fdir_rule_filter_list filter_fdir_rule_list;
 };
 
+extern const struct rte_flow_ops hinic3_flow_ops;
+
 /**
  * Enable interrupt for the specified RX queue.
  *
diff --git a/drivers/net/hinic3/hinic3_fdir.c b/drivers/net/hinic3/hinic3_fdir.c
new file mode 100644
index 0000000000..e36050f263
--- /dev/null
+++ b/drivers/net/hinic3/hinic3_fdir.c
@@ -0,0 +1,1394 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Huawei Technologies Co., Ltd
+ */
+
+#include <errno.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <string.h>
+#include <unistd.h>
+
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_ethdev.h>
+#include <rte_ether.h>
+#include <rte_flow.h>
+#include <rte_flow_driver.h>
+#include <rte_malloc.h>
+
+#include "base/hinic3_compat.h"
+#include "base/hinic3_hwdev.h"
+#include "base/hinic3_hwif.h"
+#include "base/hinic3_nic_cfg.h"
+#include "hinic3_ethdev.h"
+#include "hinic3_fdir.h"
+
+#define HINIC3_UINT1_MAX  0x1
+#define HINIC3_UINT4_MAX  0xf
+#define HINIC3_UINT15_MAX 0x7fff
+
+#define HINIC3_DEV_PRIVATE_TO_TCAM_INFO(nic_dev) \
+	(&((struct hinic3_nic_dev *)(nic_dev))->tcam)
+
+/**
+ * Perform a bitwise AND operation on the input key value and mask, and store
+ * the result in the key_y array.
+ *
+ * @param[out] key_y
+ * Array for storing results.
+ * @param[in] src_input
+ * Input key array.
+ * @param[in] mask
+ * Mask array.
+ * @param[in] len
+ * Length of the key value and mask.
+ */
+static void
+tcam_translate_key_y(u8 *key_y, u8 *src_input, u8 *mask, u8 len)
+{
+	u8 idx;
+
+	for (idx = 0; idx < len; idx++)
+		key_y[idx] = src_input[idx] & mask[idx];
+}
+
+/**
+ * Convert key_y to key_x using the exclusive OR operation.
+ *
+ * @param[out] key_x
+ * Array for storing results.
+ * @param[in] key_y
+ * Input key array.
+ * @param[in] mask
+ * Mask array.
+ * @param[in] len
+ * Length of the key value and mask.
+ */
+static void
+tcam_translate_key_x(u8 *key_x, u8 *key_y, u8 *mask, u8 len)
+{
+	u8 idx;
+
+	for (idx = 0; idx < len; idx++)
+		key_x[idx] = key_y[idx] ^ mask[idx];
+}
+
+static void
+tcam_key_calculate(struct hinic3_tcam_key *tcam_key,
+		   struct hinic3_tcam_cfg_rule *fdir_tcam_rule)
+{
+	tcam_translate_key_y(fdir_tcam_rule->key.y, (u8 *)(&tcam_key->key_info),
+			     (u8 *)(&tcam_key->key_mask),
+			     HINIC3_TCAM_FLOW_KEY_SIZE);
+	tcam_translate_key_x(fdir_tcam_rule->key.x, fdir_tcam_rule->key.y,
+			     (u8 *)(&tcam_key->key_mask),
+			     HINIC3_TCAM_FLOW_KEY_SIZE);
+}
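
The X/Y key encoding computed above can be illustrated in isolation. The
following is a hypothetical standalone sketch (names are illustrative, not
part of the driver): key_y keeps the cared-for bits of the value, and key_x
is key_y XORed with the mask, so a TCAM line matches a lookup byte p exactly
when (p & mask) == key_y and masked-out bits are "don't care".

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical sketch of the X/Y TCAM key encoding:
 * key_y = value & mask, key_x = key_y ^ mask.
 */
static void
tcam_xy_encode(uint8_t *key_x, uint8_t *key_y, const uint8_t *value,
	       const uint8_t *mask, int len)
{
	int i;

	for (i = 0; i < len; i++) {
		key_y[i] = value[i] & mask[i]; /* keep only cared-for bits */
		key_x[i] = key_y[i] ^ mask[i]; /* complement inside the mask */
	}
}
```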
+
+static void
+hinic3_fdir_tcam_ipv4_init(struct hinic3_fdir_filter *rule,
+			   struct hinic3_tcam_key *tcam_key)
+{
+	/* Fill IP type. */
+	tcam_key->key_mask.ip_type = HINIC3_UINT1_MAX;
+	tcam_key->key_info.ip_type = HINIC3_FDIR_IP_TYPE_IPV4;
+
+	/* Fill src IPv4. */
+	tcam_key->key_mask.sipv4_h =
+		HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv4.src_ip);
+	tcam_key->key_mask.sipv4_l =
+		HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv4.src_ip);
+	tcam_key->key_info.sipv4_h =
+		HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv4.src_ip);
+	tcam_key->key_info.sipv4_l =
+		HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv4.src_ip);
+
+	/* Fill dst IPv4. */
+	tcam_key->key_mask.dipv4_h =
+		HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv4.dst_ip);
+	tcam_key->key_mask.dipv4_l =
+		HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv4.dst_ip);
+	tcam_key->key_info.dipv4_h =
+		HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv4.dst_ip);
+	tcam_key->key_info.dipv4_l =
+		HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv4.dst_ip);
+}
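
Because the TCAM key stores addresses as 16-bit fields, 32-bit IPv4 words
are split with HINIC3_32_UPPER_16_BITS/HINIC3_32_LOWER_16_BITS. A minimal
sketch of equivalent macros (names assumed here, matching the semantics
implied by the code above):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Assumed equivalents of HINIC3_32_UPPER_16_BITS/HINIC3_32_LOWER_16_BITS:
 * split a 32-bit value into the two 16-bit halves stored in the TCAM key.
 */
#define UPPER_16_BITS(x) ((uint16_t)(((uint32_t)(x)) >> 16))
#define LOWER_16_BITS(x) ((uint16_t)((uint32_t)(x)))
```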
+
+static void
+hinic3_fdir_tcam_ipv6_init(struct hinic3_fdir_filter *rule,
+			   struct hinic3_tcam_key *tcam_key)
+{
+	/* Fill IP type. */
+	tcam_key->key_mask_ipv6.ip_type = HINIC3_UINT1_MAX;
+	tcam_key->key_info_ipv6.ip_type = HINIC3_FDIR_IP_TYPE_IPV6;
+
+	/* Fill src IPv6. */
+	tcam_key->key_mask_ipv6.sipv6_key0 =
+		HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.src_ip[0]);
+	tcam_key->key_mask_ipv6.sipv6_key1 =
+		HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.src_ip[0]);
+	tcam_key->key_mask_ipv6.sipv6_key2 =
+		HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.src_ip[0x1]);
+	tcam_key->key_mask_ipv6.sipv6_key3 =
+		HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.src_ip[0x1]);
+	tcam_key->key_mask_ipv6.sipv6_key4 =
+		HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.src_ip[0x2]);
+	tcam_key->key_mask_ipv6.sipv6_key5 =
+		HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.src_ip[0x2]);
+	tcam_key->key_mask_ipv6.sipv6_key6 =
+		HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.src_ip[0x3]);
+	tcam_key->key_mask_ipv6.sipv6_key7 =
+		HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.src_ip[0x3]);
+	tcam_key->key_info_ipv6.sipv6_key0 =
+		HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.src_ip[0]);
+	tcam_key->key_info_ipv6.sipv6_key1 =
+		HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.src_ip[0]);
+	tcam_key->key_info_ipv6.sipv6_key2 =
+		HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.src_ip[0x1]);
+	tcam_key->key_info_ipv6.sipv6_key3 =
+		HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.src_ip[0x1]);
+	tcam_key->key_info_ipv6.sipv6_key4 =
+		HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.src_ip[0x2]);
+	tcam_key->key_info_ipv6.sipv6_key5 =
+		HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.src_ip[0x2]);
+	tcam_key->key_info_ipv6.sipv6_key6 =
+		HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.src_ip[0x3]);
+	tcam_key->key_info_ipv6.sipv6_key7 =
+		HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.src_ip[0x3]);
+
+	/* Fill dst IPv6. */
+	tcam_key->key_mask_ipv6.dipv6_key0 =
+		HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.dst_ip[0]);
+	tcam_key->key_mask_ipv6.dipv6_key1 =
+		HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.dst_ip[0]);
+	tcam_key->key_mask_ipv6.dipv6_key2 =
+		HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.dst_ip[0x1]);
+	tcam_key->key_mask_ipv6.dipv6_key3 =
+		HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.dst_ip[0x1]);
+	tcam_key->key_mask_ipv6.dipv6_key4 =
+		HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.dst_ip[0x2]);
+	tcam_key->key_mask_ipv6.dipv6_key5 =
+		HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.dst_ip[0x2]);
+	tcam_key->key_mask_ipv6.dipv6_key6 =
+		HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.dst_ip[0x3]);
+	tcam_key->key_mask_ipv6.dipv6_key7 =
+		HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.dst_ip[0x3]);
+	tcam_key->key_info_ipv6.dipv6_key0 =
+		HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.dst_ip[0]);
+	tcam_key->key_info_ipv6.dipv6_key1 =
+		HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.dst_ip[0]);
+	tcam_key->key_info_ipv6.dipv6_key2 =
+		HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.dst_ip[0x1]);
+	tcam_key->key_info_ipv6.dipv6_key3 =
+		HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.dst_ip[0x1]);
+	tcam_key->key_info_ipv6.dipv6_key4 =
+		HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.dst_ip[0x2]);
+	tcam_key->key_info_ipv6.dipv6_key5 =
+		HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.dst_ip[0x2]);
+	tcam_key->key_info_ipv6.dipv6_key6 =
+		HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.dst_ip[0x3]);
+	tcam_key->key_info_ipv6.dipv6_key7 =
+		HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.dst_ip[0x3]);
+}
+
+/**
+ * Set the TCAM information in the non-tunnel scenario.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] rule
+ * Pointer to the filtering rule.
+ * @param[in] tcam_key
+ * Pointer to the TCAM key.
+ */
+static void
+hinic3_fdir_tcam_notunnel_init(struct rte_eth_dev *dev,
+			       struct hinic3_fdir_filter *rule,
+			       struct hinic3_tcam_key *tcam_key)
+{
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+
+	/* Fill tcam_key info. */
+	tcam_key->key_mask.sport = rule->key_mask.src_port;
+	tcam_key->key_info.sport = rule->key_spec.src_port;
+
+	tcam_key->key_mask.dport = rule->key_mask.dst_port;
+	tcam_key->key_info.dport = rule->key_spec.dst_port;
+
+	tcam_key->key_mask.tunnel_type = HINIC3_UINT4_MAX;
+	tcam_key->key_info.tunnel_type = HINIC3_FDIR_TUNNEL_MODE_NORMAL;
+
+	tcam_key->key_mask.function_id = HINIC3_UINT15_MAX;
+	tcam_key->key_info.function_id = hinic3_global_func_id(nic_dev->hwdev) &
+					 HINIC3_UINT15_MAX;
+
+	tcam_key->key_mask.ip_proto = rule->key_mask.proto;
+	tcam_key->key_info.ip_proto = rule->key_spec.proto;
+
+	if (rule->ip_type == HINIC3_FDIR_IP_TYPE_IPV4)
+		hinic3_fdir_tcam_ipv4_init(rule, tcam_key);
+	else if (rule->ip_type == HINIC3_FDIR_IP_TYPE_IPV6)
+		hinic3_fdir_tcam_ipv6_init(rule, tcam_key);
+}
+
+static void
+hinic3_fdir_tcam_vxlan_ipv4_init(struct hinic3_fdir_filter *rule,
+				 struct hinic3_tcam_key *tcam_key)
+{
+	/* Fill IP type. */
+	tcam_key->key_mask.ip_type = HINIC3_UINT1_MAX;
+	tcam_key->key_info.ip_type = HINIC3_FDIR_IP_TYPE_IPV4;
+
+	/* Fill src ipv4. */
+	tcam_key->key_mask.sipv4_h =
+		HINIC3_32_UPPER_16_BITS(rule->key_mask.inner_ipv4.src_ip);
+	tcam_key->key_mask.sipv4_l =
+		HINIC3_32_LOWER_16_BITS(rule->key_mask.inner_ipv4.src_ip);
+	tcam_key->key_info.sipv4_h =
+		HINIC3_32_UPPER_16_BITS(rule->key_spec.inner_ipv4.src_ip);
+	tcam_key->key_info.sipv4_l =
+		HINIC3_32_LOWER_16_BITS(rule->key_spec.inner_ipv4.src_ip);
+
+	/* Fill dst ipv4. */
+	tcam_key->key_mask.dipv4_h =
+		HINIC3_32_UPPER_16_BITS(rule->key_mask.inner_ipv4.dst_ip);
+	tcam_key->key_mask.dipv4_l =
+		HINIC3_32_LOWER_16_BITS(rule->key_mask.inner_ipv4.dst_ip);
+	tcam_key->key_info.dipv4_h =
+		HINIC3_32_UPPER_16_BITS(rule->key_spec.inner_ipv4.dst_ip);
+	tcam_key->key_info.dipv4_l =
+		HINIC3_32_LOWER_16_BITS(rule->key_spec.inner_ipv4.dst_ip);
+}
+
+static void
+hinic3_fdir_tcam_vxlan_ipv6_init(struct hinic3_fdir_filter *rule,
+				 struct hinic3_tcam_key *tcam_key)
+{
+	/* Fill IP type. */
+	tcam_key->key_mask_vxlan_ipv6.ip_type = HINIC3_UINT1_MAX;
+	tcam_key->key_info_vxlan_ipv6.ip_type = HINIC3_FDIR_IP_TYPE_IPV6;
+
+	/* Use the inner dst IPv6 address to fill the dst IPv6 of tcam_key. */
+	tcam_key->key_mask_vxlan_ipv6.dipv6_key0 =
+		HINIC3_32_UPPER_16_BITS(rule->key_mask.inner_ipv6.dst_ip[0]);
+	tcam_key->key_mask_vxlan_ipv6.dipv6_key1 =
+		HINIC3_32_LOWER_16_BITS(rule->key_mask.inner_ipv6.dst_ip[0]);
+	tcam_key->key_mask_vxlan_ipv6.dipv6_key2 =
+		HINIC3_32_UPPER_16_BITS(rule->key_mask.inner_ipv6.dst_ip[0x1]);
+	tcam_key->key_mask_vxlan_ipv6.dipv6_key3 =
+		HINIC3_32_LOWER_16_BITS(rule->key_mask.inner_ipv6.dst_ip[0x1]);
+	tcam_key->key_mask_vxlan_ipv6.dipv6_key4 =
+		HINIC3_32_UPPER_16_BITS(rule->key_mask.inner_ipv6.dst_ip[0x2]);
+	tcam_key->key_mask_vxlan_ipv6.dipv6_key5 =
+		HINIC3_32_LOWER_16_BITS(rule->key_mask.inner_ipv6.dst_ip[0x2]);
+	tcam_key->key_mask_vxlan_ipv6.dipv6_key6 =
+		HINIC3_32_UPPER_16_BITS(rule->key_mask.inner_ipv6.dst_ip[0x3]);
+	tcam_key->key_mask_vxlan_ipv6.dipv6_key7 =
+		HINIC3_32_LOWER_16_BITS(rule->key_mask.inner_ipv6.dst_ip[0x3]);
+	tcam_key->key_info_vxlan_ipv6.dipv6_key0 =
+		HINIC3_32_UPPER_16_BITS(rule->key_spec.inner_ipv6.dst_ip[0]);
+	tcam_key->key_info_vxlan_ipv6.dipv6_key1 =
+		HINIC3_32_LOWER_16_BITS(rule->key_spec.inner_ipv6.dst_ip[0]);
+	tcam_key->key_info_vxlan_ipv6.dipv6_key2 =
+		HINIC3_32_UPPER_16_BITS(rule->key_spec.inner_ipv6.dst_ip[0x1]);
+	tcam_key->key_info_vxlan_ipv6.dipv6_key3 =
+		HINIC3_32_LOWER_16_BITS(rule->key_spec.inner_ipv6.dst_ip[0x1]);
+	tcam_key->key_info_vxlan_ipv6.dipv6_key4 =
+		HINIC3_32_UPPER_16_BITS(rule->key_spec.inner_ipv6.dst_ip[0x2]);
+	tcam_key->key_info_vxlan_ipv6.dipv6_key5 =
+		HINIC3_32_LOWER_16_BITS(rule->key_spec.inner_ipv6.dst_ip[0x2]);
+	tcam_key->key_info_vxlan_ipv6.dipv6_key6 =
+		HINIC3_32_UPPER_16_BITS(rule->key_spec.inner_ipv6.dst_ip[0x3]);
+	tcam_key->key_info_vxlan_ipv6.dipv6_key7 =
+		HINIC3_32_LOWER_16_BITS(rule->key_spec.inner_ipv6.dst_ip[0x3]);
+}
+
+static void
+hinic3_fdir_tcam_outer_ipv6_init(struct hinic3_fdir_filter *rule,
+				 struct hinic3_tcam_key *tcam_key)
+{
+	tcam_key->key_mask_ipv6.sipv6_key0 =
+		HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.src_ip[0]);
+	tcam_key->key_mask_ipv6.sipv6_key1 =
+		HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.src_ip[0]);
+	tcam_key->key_mask_ipv6.sipv6_key2 =
+		HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.src_ip[0x1]);
+	tcam_key->key_mask_ipv6.sipv6_key3 =
+		HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.src_ip[0x1]);
+	tcam_key->key_mask_ipv6.sipv6_key4 =
+		HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.src_ip[0x2]);
+	tcam_key->key_mask_ipv6.sipv6_key5 =
+		HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.src_ip[0x2]);
+	tcam_key->key_mask_ipv6.sipv6_key6 =
+		HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.src_ip[0x3]);
+	tcam_key->key_mask_ipv6.sipv6_key7 =
+		HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.src_ip[0x3]);
+	tcam_key->key_info_ipv6.sipv6_key0 =
+		HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.src_ip[0]);
+	tcam_key->key_info_ipv6.sipv6_key1 =
+		HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.src_ip[0]);
+	tcam_key->key_info_ipv6.sipv6_key2 =
+		HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.src_ip[0x1]);
+	tcam_key->key_info_ipv6.sipv6_key3 =
+		HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.src_ip[0x1]);
+	tcam_key->key_info_ipv6.sipv6_key4 =
+		HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.src_ip[0x2]);
+	tcam_key->key_info_ipv6.sipv6_key5 =
+		HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.src_ip[0x2]);
+	tcam_key->key_info_ipv6.sipv6_key6 =
+		HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.src_ip[0x3]);
+	tcam_key->key_info_ipv6.sipv6_key7 =
+		HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.src_ip[0x3]);
+
+	tcam_key->key_mask_ipv6.dipv6_key0 =
+		HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.dst_ip[0]);
+	tcam_key->key_mask_ipv6.dipv6_key1 =
+		HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.dst_ip[0]);
+	tcam_key->key_mask_ipv6.dipv6_key2 =
+		HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.dst_ip[0x1]);
+	tcam_key->key_mask_ipv6.dipv6_key3 =
+		HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.dst_ip[0x1]);
+	tcam_key->key_mask_ipv6.dipv6_key4 =
+		HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.dst_ip[0x2]);
+	tcam_key->key_mask_ipv6.dipv6_key5 =
+		HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.dst_ip[0x2]);
+	tcam_key->key_mask_ipv6.dipv6_key6 =
+		HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv6.dst_ip[0x3]);
+	tcam_key->key_mask_ipv6.dipv6_key7 =
+		HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv6.dst_ip[0x3]);
+	tcam_key->key_info_ipv6.dipv6_key0 =
+		HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.dst_ip[0]);
+	tcam_key->key_info_ipv6.dipv6_key1 =
+		HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.dst_ip[0]);
+	tcam_key->key_info_ipv6.dipv6_key2 =
+		HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.dst_ip[0x1]);
+	tcam_key->key_info_ipv6.dipv6_key3 =
+		HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.dst_ip[0x1]);
+	tcam_key->key_info_ipv6.dipv6_key4 =
+		HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.dst_ip[0x2]);
+	tcam_key->key_info_ipv6.dipv6_key5 =
+		HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.dst_ip[0x2]);
+	tcam_key->key_info_ipv6.dipv6_key6 =
+		HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv6.dst_ip[0x3]);
+	tcam_key->key_info_ipv6.dipv6_key7 =
+		HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv6.dst_ip[0x3]);
+}
+
+static void
+hinic3_fdir_tcam_ipv6_vxlan_init(struct rte_eth_dev *dev,
+				 struct hinic3_fdir_filter *rule,
+				 struct hinic3_tcam_key *tcam_key)
+{
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+
+	tcam_key->key_mask_ipv6.ip_proto = rule->key_mask.proto;
+	tcam_key->key_info_ipv6.ip_proto = rule->key_spec.proto;
+
+	tcam_key->key_mask_ipv6.tunnel_type = HINIC3_UINT4_MAX;
+	tcam_key->key_info_ipv6.tunnel_type = HINIC3_FDIR_TUNNEL_MODE_VXLAN;
+
+	tcam_key->key_mask_ipv6.outer_ip_type = HINIC3_UINT1_MAX;
+	tcam_key->key_info_ipv6.outer_ip_type = HINIC3_FDIR_IP_TYPE_IPV6;
+
+	tcam_key->key_mask_ipv6.function_id = HINIC3_UINT15_MAX;
+	tcam_key->key_info_ipv6.function_id =
+		hinic3_global_func_id(nic_dev->hwdev) & HINIC3_UINT15_MAX;
+
+	tcam_key->key_mask_ipv6.dport = rule->key_mask.dst_port;
+	tcam_key->key_info_ipv6.dport = rule->key_spec.dst_port;
+
+	tcam_key->key_mask_ipv6.sport = rule->key_mask.src_port;
+	tcam_key->key_info_ipv6.sport = rule->key_spec.src_port;
+
+	if (rule->ip_type == HINIC3_FDIR_IP_TYPE_ANY)
+		hinic3_fdir_tcam_outer_ipv6_init(rule, tcam_key);
+}
+
+/**
+ * Set the TCAM information in the VXLAN scenario.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] rule
+ * Pointer to the filtering rule.
+ * @param[in] tcam_key
+ * Pointer to the TCAM key.
+ */
+static void
+hinic3_fdir_tcam_vxlan_init(struct rte_eth_dev *dev,
+			    struct hinic3_fdir_filter *rule,
+			    struct hinic3_tcam_key *tcam_key)
+{
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+
+	if (rule->outer_ip_type == HINIC3_FDIR_IP_TYPE_IPV6) {
+		hinic3_fdir_tcam_ipv6_vxlan_init(dev, rule, tcam_key);
+		return;
+	}
+
+	tcam_key->key_mask.ip_proto = rule->key_mask.proto;
+	tcam_key->key_info.ip_proto = rule->key_spec.proto;
+
+	tcam_key->key_mask.sport = rule->key_mask.src_port;
+	tcam_key->key_info.sport = rule->key_spec.src_port;
+
+	tcam_key->key_mask.dport = rule->key_mask.dst_port;
+	tcam_key->key_info.dport = rule->key_spec.dst_port;
+
+	tcam_key->key_mask.outer_sipv4_h =
+		HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv4.src_ip);
+	tcam_key->key_mask.outer_sipv4_l =
+		HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv4.src_ip);
+	tcam_key->key_info.outer_sipv4_h =
+		HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv4.src_ip);
+	tcam_key->key_info.outer_sipv4_l =
+		HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv4.src_ip);
+
+	tcam_key->key_mask.outer_dipv4_h =
+		HINIC3_32_UPPER_16_BITS(rule->key_mask.ipv4.dst_ip);
+	tcam_key->key_mask.outer_dipv4_l =
+		HINIC3_32_LOWER_16_BITS(rule->key_mask.ipv4.dst_ip);
+	tcam_key->key_info.outer_dipv4_h =
+		HINIC3_32_UPPER_16_BITS(rule->key_spec.ipv4.dst_ip);
+	tcam_key->key_info.outer_dipv4_l =
+		HINIC3_32_LOWER_16_BITS(rule->key_spec.ipv4.dst_ip);
+
+	tcam_key->key_mask.vni_h =
+		HINIC3_32_UPPER_16_BITS(rule->key_mask.tunnel.tunnel_id);
+	tcam_key->key_mask.vni_l =
+		HINIC3_32_LOWER_16_BITS(rule->key_mask.tunnel.tunnel_id);
+	tcam_key->key_info.vni_h =
+		HINIC3_32_UPPER_16_BITS(rule->key_spec.tunnel.tunnel_id);
+	tcam_key->key_info.vni_l =
+		HINIC3_32_LOWER_16_BITS(rule->key_spec.tunnel.tunnel_id);
+
+	tcam_key->key_mask.tunnel_type = HINIC3_UINT4_MAX;
+	tcam_key->key_info.tunnel_type = HINIC3_FDIR_TUNNEL_MODE_VXLAN;
+
+	tcam_key->key_mask.function_id = HINIC3_UINT15_MAX;
+	tcam_key->key_info.function_id = hinic3_global_func_id(nic_dev->hwdev) &
+					 HINIC3_UINT15_MAX;
+
+	if (rule->ip_type == HINIC3_FDIR_IP_TYPE_IPV4)
+		hinic3_fdir_tcam_vxlan_ipv4_init(rule, tcam_key);
+
+	else if (rule->ip_type == HINIC3_FDIR_IP_TYPE_IPV6)
+		hinic3_fdir_tcam_vxlan_ipv6_init(rule, tcam_key);
+}
+
+static void
+hinic3_fdir_tcam_info_init(struct rte_eth_dev *dev,
+			   struct hinic3_fdir_filter *rule,
+			   struct hinic3_tcam_key *tcam_key,
+			   struct hinic3_tcam_cfg_rule *fdir_tcam_rule)
+{
+	/* Initialize the TCAM based on the tunnel type. */
+	if (rule->tunnel_type == HINIC3_FDIR_TUNNEL_MODE_NORMAL)
+		hinic3_fdir_tcam_notunnel_init(dev, rule, tcam_key);
+	else
+		hinic3_fdir_tcam_vxlan_init(dev, rule, tcam_key);
+
+	/* Set the queue index. */
+	fdir_tcam_rule->data.qid = rule->rq_index;
+	/* Calculate key of TCAM. */
+	tcam_key_calculate(tcam_key, fdir_tcam_rule);
+}
+
+/**
+ * Find a filter with the given Ethernet type in the ethertype filter list.
+ *
+ * @param[in] ethertype_list
+ * Pointer to the ethertype filter list.
+ * @param[in] type
+ * The Ethernet type to find.
+ * @return
+ * If a matching filter is found, its Ethernet type is returned, otherwise
+ * RTE_ETH_FILTER_NONE.
+ */
+static inline uint16_t
+hinic3_ethertype_filter_lookup(struct hinic3_ethertype_filter_list *ethertype_list,
+			       uint16_t type)
+{
+	struct rte_flow *it;
+	struct hinic3_filter_t *filter_rules;
+
+	TAILQ_FOREACH(it, ethertype_list, node) {
+		filter_rules = it->rule;
+		if (type == filter_rules->ethertype_filter.ether_type)
+			return filter_rules->ethertype_filter.ether_type;
+	}
+
+	return RTE_ETH_FILTER_NONE;
+}
+
+/**
+ * Find the filter that matches the given key in the TCAM filter list.
+ *
+ * @param[in] filter_list
+ * Pointer to the TCAM filter list.
+ * @param[in] key
+ * The TCAM key to find.
+ * @return
+ * If a matching filter is found, the filter is returned, otherwise NULL.
+ */
+static inline struct hinic3_tcam_filter *
+hinic3_tcam_filter_lookup(struct hinic3_tcam_filter_list *filter_list,
+			  struct hinic3_tcam_key *key)
+{
+	struct hinic3_tcam_filter *it;
+
+	TAILQ_FOREACH(it, filter_list, entries) {
+		if (memcmp(key, &it->tcam_key,
+			   sizeof(struct hinic3_tcam_key)) == 0) {
+			return it;
+		}
+	}
+
+	return NULL;
+}
+
+/**
+ * Allocate memory for dynamic blocks and then add them to the queue.
+ *
+ * @param[in] tcam_info
+ * Pointer to the TCAM information.
+ * @param[in] dynamic_block_id
+ * Indicate the ID of a dynamic block.
+ * @return
+ * Return the pointer to the dynamic block, or NULL if the allocation fails.
+ */
+static struct hinic3_tcam_dynamic_block *
+hinic3_alloc_dynamic_block_resource(struct hinic3_tcam_info *tcam_info,
+				    u16 dynamic_block_id)
+{
+	struct hinic3_tcam_dynamic_block *dynamic_block_ptr = NULL;
+
+	dynamic_block_ptr =
+		rte_zmalloc("hinic3_tcam_dynamic_mem",
+			    sizeof(struct hinic3_tcam_dynamic_block), 0);
+	if (dynamic_block_ptr == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "Failed to alloc memory for fdir filter dynamic "
+			    "block %d!",
+			    dynamic_block_id);
+		return NULL;
+	}
+
+	dynamic_block_ptr->dynamic_block_id = dynamic_block_id;
+
+	/* Add new block to the end of the TCAM dynamic block list. */
+	TAILQ_INSERT_TAIL(&tcam_info->tcam_dynamic_info.tcam_dynamic_list,
+			  dynamic_block_ptr, entries);
+
+	tcam_info->tcam_dynamic_info.dynamic_block_cnt++;
+
+	return dynamic_block_ptr;
+}
+
+static void
+hinic3_free_dynamic_block_resource(struct hinic3_tcam_info *tcam_info,
+				   struct hinic3_tcam_dynamic_block *dynamic_block_ptr)
+{
+	if (dynamic_block_ptr == NULL)
+		return;
+
+	/* Remove the incoming dynamic block from the TCAM dynamic list. */
+	TAILQ_REMOVE(&tcam_info->tcam_dynamic_info.tcam_dynamic_list,
+		     dynamic_block_ptr, entries);
+	rte_free(dynamic_block_ptr);
+
+	tcam_info->tcam_dynamic_info.dynamic_block_cnt--;
+}
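
The alloc/free pair above is standard <sys/queue.h> TAILQ bookkeeping. A
simplified, self-contained sketch with hypothetical names (and plain
calloc()/free() in place of rte_zmalloc()/rte_free()) shows the pattern:
blocks are appended on allocation and unlinked on free, with a running
count mirroring dynamic_block_cnt.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <sys/queue.h>

/* Hypothetical stand-in for a TCAM dynamic block. */
struct dyn_block {
	uint16_t block_id;
	TAILQ_ENTRY(dyn_block) entries;
};
TAILQ_HEAD(dyn_block_list, dyn_block);

static struct dyn_block *
dyn_block_alloc(struct dyn_block_list *list, int *cnt, uint16_t id)
{
	struct dyn_block *blk = calloc(1, sizeof(*blk));

	if (blk == NULL)
		return NULL;
	blk->block_id = id;
	/* Append the new block to the tail of the list. */
	TAILQ_INSERT_TAIL(list, blk, entries);
	(*cnt)++;
	return blk;
}

static void
dyn_block_free(struct dyn_block_list *list, int *cnt, struct dyn_block *blk)
{
	if (blk == NULL)
		return;
	/* Unlink before freeing, then decrement the block count. */
	TAILQ_REMOVE(list, blk, entries);
	free(blk);
	(*cnt)--;
}
```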
+
+/**
+ * Check whether there are free positions in the dynamic TCAM filter.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[out] fdir_tcam_rule
+ * Filtering rule whose global TCAM index is filled in on success.
+ * @param[in] tcam_info
+ * Ternary Content-Addressable Memory (TCAM) information.
+ * @param[out] tcam_filter
+ * TCAM filter whose dynamic block ID and local index are filled in on
+ * success.
+ * @param[out] tcam_index
+ * Local index found within the dynamic block.
+ * @return
+ * Pointer to the TCAM dynamic block. If the search fails, NULL is returned.
+ */
+static struct hinic3_tcam_dynamic_block *
+hinic3_dynamic_lookup_tcam_filter(struct rte_eth_dev *dev,
+				  struct hinic3_tcam_cfg_rule *fdir_tcam_rule,
+				  struct hinic3_tcam_info *tcam_info,
+				  struct hinic3_tcam_filter *tcam_filter,
+				  u16 *tcam_index)
+{
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+	u16 block_cnt = tcam_info->tcam_dynamic_info.dynamic_block_cnt;
+	struct hinic3_tcam_dynamic_block *dynamic_block_ptr = NULL;
+	struct hinic3_tcam_dynamic_block *tmp = NULL;
+	u16 rule_nums = nic_dev->tcam_rule_nums;
+	int block_alloc_flag = 0;
+	u16 dynamic_block_id = 0;
+	u16 index;
+	int err;
+
+	/*
+	 * Check whether the number of filtering rules reaches the maximum
+	 * capacity of dynamic TCAM blocks.
+	 */
+	if (rule_nums >= block_cnt * HINIC3_TCAM_DYNAMIC_BLOCK_SIZE) {
+		if (block_cnt >= (HINIC3_TCAM_DYNAMIC_MAX_FILTERS /
+				  HINIC3_TCAM_DYNAMIC_BLOCK_SIZE)) {
+			PMD_DRV_LOG(ERR,
+				"Dynamic tcam block is full, alloc failed!");
+			goto failed;
+		}
+		/*
+		 * The TCAM blocks are insufficient.
+		 * Apply for a new TCAM block.
+		 */
+		err = hinic3_alloc_tcam_block(nic_dev->hwdev,
+					      &dynamic_block_id);
+		if (err) {
+			PMD_DRV_LOG(ERR,
+				"Fdir filter dynamic tcam alloc block failed!");
+			goto failed;
+		}
+
+		block_alloc_flag = 1;
+
+		/* Applying for Memory. */
+		dynamic_block_ptr =
+			hinic3_alloc_dynamic_block_resource(tcam_info,
+							    dynamic_block_id);
+		if (dynamic_block_ptr == NULL) {
+			PMD_DRV_LOG(ERR, "Fdir filter dynamic alloc block "
+					 "memory failed!");
+			goto block_alloc_failed;
+		}
+	}
+
+	/*
+	 * Find the first dynamic TCAM block that satisfies dynamic_index_cnt <
+	 * HINIC3_TCAM_DYNAMIC_BLOCK_SIZE.
+	 */
+	TAILQ_FOREACH(tmp, &tcam_info->tcam_dynamic_info.tcam_dynamic_list,
+		      entries) {
+		if (tmp->dynamic_index_cnt < HINIC3_TCAM_DYNAMIC_BLOCK_SIZE)
+			break;
+	}
+
+	if (tmp == NULL ||
+	    tmp->dynamic_index_cnt >= HINIC3_TCAM_DYNAMIC_BLOCK_SIZE) {
+		PMD_DRV_LOG(ERR,
+			    "Fdir filter dynamic lookup for index failed!");
+		goto look_up_failed;
+	}
+
+	for (index = 0; index < HINIC3_TCAM_DYNAMIC_BLOCK_SIZE; index++) {
+		if (tmp->dynamic_index[index] == 0)
+			break;
+	}
+
+	/* No free position was found in this block. */
+	if (index == HINIC3_TCAM_DYNAMIC_BLOCK_SIZE) {
+		PMD_DRV_LOG(ERR,
+			    "Tcam block 0x%x filter rule entries are full!",
+			    tmp->dynamic_block_id);
+		goto look_up_failed;
+	}
+
+	tcam_filter->dynamic_block_id = tmp->dynamic_block_id;
+	tcam_filter->index = index;
+	*tcam_index = index;
+
+	fdir_tcam_rule->index =
+		HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(tmp->dynamic_block_id) +
+		index;
+
+	return tmp;
+
+look_up_failed:
+	if (dynamic_block_ptr != NULL)
+		hinic3_free_dynamic_block_resource(tcam_info,
+						   dynamic_block_ptr);
+
+block_alloc_failed:
+	if (block_alloc_flag == 1)
+		(void)hinic3_free_tcam_block(nic_dev->hwdev, &dynamic_block_id);
+
+failed:
+	return NULL;
+}
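
The slot lookup above boils down to a linear scan of the block's occupancy
array for the first index still marked 0. A minimal sketch of that search
(names and size are illustrative, BLOCK_SIZE stands in for
HINIC3_TCAM_DYNAMIC_BLOCK_SIZE):

```c
#include <assert.h>
#include <stdint.h>

#define BLOCK_SIZE 16 /* stand-in for HINIC3_TCAM_DYNAMIC_BLOCK_SIZE */

/*
 * Scan a block's occupancy array for the first free slot.
 * Returns the local index, or -1 when the block is full.
 */
static int
find_free_slot(const uint8_t *occupied, int size)
{
	int i;

	for (i = 0; i < size; i++) {
		if (occupied[i] == 0)
			return i;
	}
	return -1;
}
```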
+
+/**
+ * Add a TCAM filter.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] tcam_key
+ * Pointer to the TCAM key.
+ * @param[in] fdir_tcam_rule
+ * Pointer to the TCAM filtering rule.
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_add_tcam_filter(struct rte_eth_dev *dev,
+		       struct hinic3_tcam_key *tcam_key,
+		       struct hinic3_tcam_cfg_rule *fdir_tcam_rule)
+{
+	struct hinic3_tcam_info *tcam_info =
+		HINIC3_DEV_PRIVATE_TO_TCAM_INFO(dev->data->dev_private);
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+	struct hinic3_tcam_dynamic_block *dynamic_block_ptr = NULL;
+	struct hinic3_tcam_dynamic_block *tmp = NULL;
+	struct hinic3_tcam_filter *tcam_filter;
+	u16 tcam_block_index = 0;
+	u16 index = 0;
+	int err;
+
+	/* Alloc TCAM filter memory. */
+	tcam_filter = rte_zmalloc("hinic3_fdir_filter",
+				  sizeof(struct hinic3_tcam_filter), 0);
+	if (tcam_filter == NULL)
+		return -ENOMEM;
+	(void)rte_memcpy(&tcam_filter->tcam_key, tcam_key,
+			 sizeof(struct hinic3_tcam_key));
+	tcam_filter->queue = (u16)(fdir_tcam_rule->data.qid);
+
+	/* Add new TCAM rules. */
+	if (nic_dev->tcam_rule_nums == 0) {
+		err = hinic3_alloc_tcam_block(nic_dev->hwdev,
+					      &tcam_block_index);
+		if (err) {
+			PMD_DRV_LOG(ERR,
+				    "Fdir filter tcam alloc block failed!");
+			goto failed;
+		}
+
+		dynamic_block_ptr =
+			hinic3_alloc_dynamic_block_resource(tcam_info,
+							    tcam_block_index);
+		if (dynamic_block_ptr == NULL) {
+			PMD_DRV_LOG(ERR, "Fdir filter alloc dynamic first "
+					 "block memory failed!");
+			goto alloc_block_failed;
+		}
+	}
+
+	/*
+	 * Look for an available index in the dynamic block to store the new
+	 * TCAM filter.
+	 */
+	tmp = hinic3_dynamic_lookup_tcam_filter(dev, fdir_tcam_rule, tcam_info,
+						tcam_filter, &index);
+	if (tmp == NULL) {
+		PMD_DRV_LOG(ERR, "Dynamic lookup tcam filter failed!");
+		goto lookup_tcam_index_failed;
+	}
+
+	/* Add a new TCAM rule to the network device. */
+	err = hinic3_add_tcam_rule(nic_dev->hwdev, fdir_tcam_rule,
+				   TCAM_RULE_FDIR_TYPE);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Fdir_tcam_rule add failed!");
+		goto add_tcam_rules_failed;
+	}
+
+	/* If there are no rules, TCAM filtering is enabled. */
+	if (!(nic_dev->ethertype_rule_nums + nic_dev->tcam_rule_nums)) {
+		err = hinic3_set_fdir_tcam_rule_filter(nic_dev->hwdev, true);
+		if (err)
+			goto enable_failed;
+	}
+
+	/* Add a filter to the end of the queue. */
+	TAILQ_INSERT_TAIL(&tcam_info->tcam_list, tcam_filter, entries);
+
+	/* Update dynamic index. */
+	tmp->dynamic_index[index] = 1;
+	tmp->dynamic_index_cnt++;
+
+	nic_dev->tcam_rule_nums++;
+
+	PMD_DRV_LOG(INFO,
+		    "Add fdir tcam rule, function_id: 0x%x, "
+		    "tcam_block_id: %d, local_index: %d, global_index: %d, "
+		    "queue: %d, tcam_rule_nums: %d succeed",
+		    hinic3_global_func_id(nic_dev->hwdev),
+		    tcam_filter->dynamic_block_id, index, fdir_tcam_rule->index,
+		    fdir_tcam_rule->data.qid, nic_dev->tcam_rule_nums);
+
+	return 0;
+
+enable_failed:
+	(void)hinic3_del_tcam_rule(nic_dev->hwdev, fdir_tcam_rule->index,
+				   TCAM_RULE_FDIR_TYPE);
+
+add_tcam_rules_failed:
+lookup_tcam_index_failed:
+	if (nic_dev->tcam_rule_nums == 0 && dynamic_block_ptr != NULL)
+		hinic3_free_dynamic_block_resource(tcam_info,
+						   dynamic_block_ptr);
+
+alloc_block_failed:
+	if (nic_dev->tcam_rule_nums == 0)
+		(void)hinic3_free_tcam_block(nic_dev->hwdev, &tcam_block_index);
+
+failed:
+	rte_free(tcam_filter);
+	return -EFAULT;
+}
+
+/**
+ * Delete a TCAM filter.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] tcam_filter
+ * The TCAM filter to delete.
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_del_dynamic_tcam_filter(struct rte_eth_dev *dev,
+			       struct hinic3_tcam_filter *tcam_filter)
+{
+	struct hinic3_tcam_info *tcam_info =
+		HINIC3_DEV_PRIVATE_TO_TCAM_INFO(dev->data->dev_private);
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+	u16 dynamic_block_id = tcam_filter->dynamic_block_id;
+	struct hinic3_tcam_dynamic_block *tmp = NULL;
+	u32 index = 0;
+	int err;
+
+	/* Traverse to find the block that matches the given ID. */
+	TAILQ_FOREACH(tmp, &tcam_info->tcam_dynamic_info.tcam_dynamic_list,
+		      entries) {
+		if (tmp->dynamic_block_id == dynamic_block_id)
+			break;
+	}
+
+	if (tmp == NULL || tmp->dynamic_block_id != dynamic_block_id) {
+		PMD_DRV_LOG(ERR,
+			    "Fdir filter del dynamic lookup for block failed!");
+		return -EINVAL;
+	}
+	/* Calculate TCAM index. */
+	index = HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(tmp->dynamic_block_id) +
+		tcam_filter->index;
+
+	/* Delete a specified rule. */
+	err = hinic3_del_tcam_rule(nic_dev->hwdev, index, TCAM_RULE_FDIR_TYPE);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Fdir tcam rule del failed!");
+		return -EFAULT;
+	}
+
+	PMD_DRV_LOG(INFO,
+		    "Del fdir_tcam_dynamic_rule function_id: 0x%x, "
+		    "tcam_block_id: %d, local_index: %d, global_index: %d, "
+		    "local_rules_nums: %d, global_rule_nums: %d succeed",
+		    hinic3_global_func_id(nic_dev->hwdev), dynamic_block_id,
+		    tcam_filter->index, index, tmp->dynamic_index_cnt - 1,
+		    nic_dev->tcam_rule_nums - 1);
+
+	tmp->dynamic_index[tcam_filter->index] = 0;
+	tmp->dynamic_index_cnt--;
+	nic_dev->tcam_rule_nums--;
+	if (tmp->dynamic_index_cnt == 0) {
+		(void)hinic3_free_tcam_block(nic_dev->hwdev, &dynamic_block_id);
+
+		hinic3_free_dynamic_block_resource(tcam_info, tmp);
+	}
+
+	/* If the number of rules is 0, the TCAM filter is disabled. */
+	if (!(nic_dev->ethertype_rule_nums + nic_dev->tcam_rule_nums))
+		(void)hinic3_set_fdir_tcam_rule_filter(nic_dev->hwdev, false);
+
+	return 0;
+}
+
+static int
+hinic3_del_tcam_filter(struct rte_eth_dev *dev,
+		       struct hinic3_tcam_filter *tcam_filter)
+{
+	struct hinic3_tcam_info *tcam_info =
+		HINIC3_DEV_PRIVATE_TO_TCAM_INFO(dev->data->dev_private);
+	int err;
+
+	err = hinic3_del_dynamic_tcam_filter(dev, tcam_filter);
+	if (err < 0) {
+		PMD_DRV_LOG(ERR, "Del dynamic tcam filter failed!");
+		return err;
+	}
+
+	/* Remove the filter from the TCAM list. */
+	TAILQ_REMOVE(&tcam_info->tcam_list, tcam_filter, entries);
+
+	rte_free(tcam_filter);
+
+	return 0;
+}
+
+/**
+ * Add or delete an fdir filter rule. This is the core function for operating
+ * on filters.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] fdir_filter
+ * Pointer to the fdir filter.
+ * @param[in] add
+ * Boolean flag: true to add the filter rule, false to delete it.
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int
+hinic3_flow_add_del_fdir_filter(struct rte_eth_dev *dev,
+				struct hinic3_fdir_filter *fdir_filter,
+				bool add)
+{
+	struct hinic3_tcam_info *tcam_info =
+		HINIC3_DEV_PRIVATE_TO_TCAM_INFO(dev->data->dev_private);
+	struct hinic3_tcam_filter *tcam_filter;
+	struct hinic3_tcam_cfg_rule fdir_tcam_rule;
+	struct hinic3_tcam_key tcam_key;
+	int ret;
+
+	memset(&fdir_tcam_rule, 0, sizeof(struct hinic3_tcam_cfg_rule));
+	memset((void *)&tcam_key, 0, sizeof(struct hinic3_tcam_key));
+
+	hinic3_fdir_tcam_info_init(dev, fdir_filter, &tcam_key,
+				   &fdir_tcam_rule);
+	/* Search for a filter. */
+	tcam_filter =
+		hinic3_tcam_filter_lookup(&tcam_info->tcam_list, &tcam_key);
+	if (tcam_filter != NULL && add) {
+		PMD_DRV_LOG(ERR, "Filter exists.");
+		return -EEXIST;
+	}
+	if (tcam_filter == NULL && !add) {
+		PMD_DRV_LOG(ERR, "Filter doesn't exist.");
+		return -ENOENT;
+	}
+
+	/* Add or delete the filter according to the flag. */
+	if (add) {
+		ret = hinic3_add_tcam_filter(dev, &tcam_key, &fdir_tcam_rule);
+		if (ret)
+			return ret;
+
+		fdir_filter->tcam_index = (int)(fdir_tcam_rule.index);
+	} else {
+		PMD_DRV_LOG(INFO, "Begin to delete tcam filter.");
+		ret = hinic3_del_tcam_filter(dev, tcam_filter);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+/**
+ * Enable or disable the TCAM filter rules bound to a receive queue.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] queue_id
+ * Index of the receive queue whose filter rules are updated.
+ * @param[in] able
+ * Flag to enable or disable the filter rules.
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int
+hinic3_enable_rxq_fdir_filter(struct rte_eth_dev *dev, u32 queue_id, u32 able)
+{
+	struct hinic3_tcam_info *tcam_info =
+		HINIC3_DEV_PRIVATE_TO_TCAM_INFO(dev->data->dev_private);
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+	struct hinic3_tcam_filter *it;
+	struct hinic3_tcam_cfg_rule fdir_tcam_rule;
+	int ret = 0;
+	u32 queue_res;
+	uint16_t index;
+
+	memset(&fdir_tcam_rule, 0, sizeof(struct hinic3_tcam_cfg_rule));
+
+	if (able) {
+		TAILQ_FOREACH(it, &tcam_info->tcam_list, entries) {
+			if (queue_id == it->queue) {
+				index = (u16)(HINIC3_PKT_TCAM_DYNAMIC_INDEX_START
+					      (it->dynamic_block_id) + it->index);
+
+				/*
+				 * When the rxq is started, delete the stale
+				 * rule (tagged with an invalid queue ID) from
+				 * the TCAM before re-adding it.
+				 */
+				ret = hinic3_del_tcam_rule(nic_dev->hwdev,
+							   index,
+							   TCAM_RULE_FDIR_TYPE);
+				if (ret) {
+					PMD_DRV_LOG(ERR, "Failed to delete "
+							 "invalid tcam rule!");
+					return -EFAULT;
+				}
+
+				fdir_tcam_rule.index = index;
+				fdir_tcam_rule.data.qid = queue_id;
+				tcam_key_calculate(&it->tcam_key,
+						   &fdir_tcam_rule);
+
+				/*
+				 * Re-add the rule with the valid queue ID to
+				 * enable it.
+				 */
+				ret = hinic3_add_tcam_rule(nic_dev->hwdev,
+							   &fdir_tcam_rule,
+							   TCAM_RULE_FDIR_TYPE);
+				if (ret) {
+					PMD_DRV_LOG(ERR, "Failed to add "
+							 "correct tcam rule!");
+					return -EFAULT;
+				}
+			}
+		}
+	} else {
+		queue_res = HINIC3_INVALID_QID_BASE | queue_id;
+
+		TAILQ_FOREACH(it, &tcam_info->tcam_list, entries) {
+			if (queue_id == it->queue) {
+				index = (u16)(HINIC3_PKT_TCAM_DYNAMIC_INDEX_START
+					      (it->dynamic_block_id) + it->index);
+
+				/*
+				 * When the rxq is stopped, delete the fdir
+				 * rule from the TCAM and re-add it with an
+				 * invalidated queue ID.
+				 */
+				ret = hinic3_del_tcam_rule(nic_dev->hwdev,
+							   index,
+							   TCAM_RULE_FDIR_TYPE);
+				if (ret) {
+					PMD_DRV_LOG(ERR, "Failed to delete "
+							 "correct tcam rule!");
+					return -EFAULT;
+				}
+
+				fdir_tcam_rule.index = index;
+				fdir_tcam_rule.data.qid = queue_res;
+				tcam_key_calculate(&it->tcam_key,
+						   &fdir_tcam_rule);
+
+				/* Re-add the rule with the invalid queue ID. */
+				ret = hinic3_add_tcam_rule(nic_dev->hwdev,
+							   &fdir_tcam_rule,
+							   TCAM_RULE_FDIR_TYPE);
+				if (ret) {
+					PMD_DRV_LOG(ERR, "Failed to add "
+							 "invalid tcam rule!");
+					return -EFAULT;
+				}
+			}
+		}
+	}
+
+	return ret;
+}
+
+void
+hinic3_free_fdir_filter(struct rte_eth_dev *dev)
+{
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+
+	(void)hinic3_set_fdir_tcam_rule_filter(nic_dev->hwdev, false);
+
+	(void)hinic3_flush_tcam_rule(nic_dev->hwdev);
+}
+
+static int
+hinic3_flow_set_arp_filter(struct rte_eth_dev *dev,
+			   struct rte_eth_ethertype_filter *ethertype_filter,
+			   bool add)
+{
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+	int ret;
+
+	/* Setting the ARP Filter. */
+	ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev,
+					       HINIC3_PKT_TYPE_ARP,
+					       ethertype_filter->queue, add);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "%s fdir ethertype rule failed, err: %d",
+			    add ? "Add" : "Del", ret);
+		return ret;
+	}
+
+	/* Setting the ARP Request Filter. */
+	ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev,
+					       HINIC3_PKT_TYPE_ARP_REQ,
+					       ethertype_filter->queue, add);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "%s arp request rule failed, err: %d",
+			    add ? "Add" : "Del", ret);
+		goto set_arp_req_failed;
+	}
+
+	/* Setting the ARP Response Filter. */
+	ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev,
+					       HINIC3_PKT_TYPE_ARP_REP,
+					       ethertype_filter->queue, add);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "%s arp response rule failed, err: %d",
+			    add ? "Add" : "Del", ret);
+		goto set_arp_rep_failed;
+	}
+
+	return 0;
+
+set_arp_rep_failed:
+	(void)hinic3_set_fdir_ethertype_filter(nic_dev->hwdev,
+					       HINIC3_PKT_TYPE_ARP_REQ,
+					       ethertype_filter->queue, !add);
+
+set_arp_req_failed:
+	(void)hinic3_set_fdir_ethertype_filter(nic_dev->hwdev,
+					       HINIC3_PKT_TYPE_ARP,
+					       ethertype_filter->queue, !add);
+
+	return ret;
+}
+
+static int
+hinic3_flow_set_slow_filter(struct rte_eth_dev *dev,
+			    struct rte_eth_ethertype_filter *ethertype_filter,
+			    bool add)
+{
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+	int ret;
+
+	/* Setting the LACP Filter. */
+	ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev,
+					       HINIC3_PKT_TYPE_LACP,
+					       ethertype_filter->queue, add);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "%s lacp fdir rule failed, err: %d",
+			    add ? "Add" : "Del", ret);
+		return ret;
+	}
+
+	/* Setting the OAM Filter. */
+	ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev,
+					       HINIC3_PKT_TYPE_OAM,
+					       ethertype_filter->queue, add);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "%s oam rule failed, err: %d",
+			    add ? "Add" : "Del", ret);
+		goto set_arp_oam_failed;
+	}
+
+	return 0;
+
+set_arp_oam_failed:
+	(void)hinic3_set_fdir_ethertype_filter(nic_dev->hwdev,
+					       HINIC3_PKT_TYPE_LACP,
+					       ethertype_filter->queue, !add);
+
+	return ret;
+}
+
+static int
+hinic3_flow_set_lldp_filter(struct rte_eth_dev *dev,
+			    struct rte_eth_ethertype_filter *ethertype_filter,
+			    bool add)
+{
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+	int ret;
+
+	/* Setting the LLDP Filter. */
+	ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev,
+					       HINIC3_PKT_TYPE_LLDP,
+					       ethertype_filter->queue, add);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "%s lldp fdir rule failed, err: %d",
+			    add ? "Add" : "Del", ret);
+		return ret;
+	}
+
+	/* Setting the CDCP Filter. */
+	ret = hinic3_set_fdir_ethertype_filter(nic_dev->hwdev,
+					       HINIC3_PKT_TYPE_CDCP,
+					       ethertype_filter->queue, add);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "%s cdcp fdir rule failed, err: %d",
+			    add ? "Add" : "Del", ret);
+		goto set_arp_cdcp_failed;
+	}
+
+	return 0;
+
+set_arp_cdcp_failed:
+	(void)hinic3_set_fdir_ethertype_filter(nic_dev->hwdev,
+					       HINIC3_PKT_TYPE_LLDP,
+					       ethertype_filter->queue, !add);
+
+	return ret;
+}
+
+static int
+hinic3_flow_add_del_ethertype_filter_rule(struct rte_eth_dev *dev,
+					  struct rte_eth_ethertype_filter *ethertype_filter,
+					  bool add)
+{
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+	struct hinic3_ethertype_filter_list *ethertype_list =
+		&nic_dev->filter_ethertype_list;
+
+	/* Check whether the transferred rule exists. */
+	if (hinic3_ethertype_filter_lookup(ethertype_list,
+					   ethertype_filter->ether_type)) {
+		if (add) {
+			PMD_DRV_LOG(ERR,
+				"The rule already exists and cannot be added");
+			return -EPERM;
+		}
+	} else {
+		if (!add) {
+			PMD_DRV_LOG(ERR,
+				"The rule does not exist and cannot be deleted");
+			return -EPERM;
+		}
+	}
+	/* Create a filter based on the protocol type. */
+	switch (ethertype_filter->ether_type) {
+	case RTE_ETHER_TYPE_ARP:
+		return hinic3_flow_set_arp_filter(dev, ethertype_filter, add);
+	case RTE_ETHER_TYPE_RARP:
+		return hinic3_set_fdir_ethertype_filter(nic_dev->hwdev,
+			HINIC3_PKT_TYPE_RARP, ethertype_filter->queue, add);
+
+	case RTE_ETHER_TYPE_SLOW:
+		return hinic3_flow_set_slow_filter(dev, ethertype_filter, add);
+
+	case RTE_ETHER_TYPE_LLDP:
+		return hinic3_flow_set_lldp_filter(dev, ethertype_filter, add);
+
+	case RTE_ETHER_TYPE_CNM:
+		return hinic3_set_fdir_ethertype_filter(nic_dev->hwdev,
+			HINIC3_PKT_TYPE_CNM, ethertype_filter->queue, add);
+
+	case RTE_ETHER_TYPE_ECP:
+		return hinic3_set_fdir_ethertype_filter(nic_dev->hwdev,
+			HINIC3_PKT_TYPE_ECP, ethertype_filter->queue, add);
+
+	default:
+		PMD_DRV_LOG(ERR, "Unknown ethertype %d queue_id %d",
+			    ethertype_filter->ether_type,
+			    ethertype_filter->queue);
+		return -EPERM;
+	}
+}
+
+static int
+hinic3_flow_ethertype_rule_nums(struct rte_eth_ethertype_filter *ethertype_filter)
+{
+	switch (ethertype_filter->ether_type) {
+	case RTE_ETHER_TYPE_ARP:
+		return HINIC3_ARP_RULE_NUM;
+	case RTE_ETHER_TYPE_RARP:
+		return HINIC3_RARP_RULE_NUM;
+	case RTE_ETHER_TYPE_SLOW:
+		return HINIC3_SLOW_RULE_NUM;
+	case RTE_ETHER_TYPE_LLDP:
+		return HINIC3_LLDP_RULE_NUM;
+	case RTE_ETHER_TYPE_CNM:
+		return HINIC3_CNM_RULE_NUM;
+	case RTE_ETHER_TYPE_ECP:
+		return HINIC3_ECP_RULE_NUM;
+
+	default:
+		PMD_DRV_LOG(ERR, "Unknown ethertype %d",
+			    ethertype_filter->ether_type);
+		return 0;
+	}
+}
+
+/**
+ * Add or delete an Ethernet type filter rule.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] ethertype_filter
+ * Pointer to ethertype filter.
+ * @param[in] add
+ * Flag indicating whether to add (true) or delete (false) the Ethernet type
+ * filter rule.
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+int
+hinic3_flow_add_del_ethertype_filter(struct rte_eth_dev *dev,
+				     struct rte_eth_ethertype_filter *ethertype_filter,
+				     bool add)
+{
+	/* Get dev private info. */
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+	int ret;
+	/* Add or remove an Ethernet type filter rule. */
+	ret = hinic3_flow_add_del_ethertype_filter_rule(dev, ethertype_filter,
+							add);
+
+	if (ret) {
+		PMD_DRV_LOG(ERR, "%s fdir ethertype rule failed, err: %d",
+			    add ? "Add" : "Del", ret);
+		return ret;
+	}
+	/*
+	 * If a rule is added and the rule is the first rule, rule filtering is
+	 * enabled. If a rule is deleted and the rule is the last one, rule
+	 * filtering is disabled.
+	 */
+	if (add) {
+		if (nic_dev->ethertype_rule_nums == 0) {
+			ret = hinic3_set_fdir_tcam_rule_filter(nic_dev->hwdev,
+							       true);
+			if (ret) {
+				PMD_DRV_LOG(ERR,
+					    "enable fdir rule failed, err: %d",
+					    ret);
+				goto enable_fdir_failed;
+			}
+		}
+		nic_dev->ethertype_rule_nums +=
+			hinic3_flow_ethertype_rule_nums(ethertype_filter);
+	} else {
+		nic_dev->ethertype_rule_nums -=
+			hinic3_flow_ethertype_rule_nums(ethertype_filter);
+
+		if (!(nic_dev->ethertype_rule_nums + nic_dev->tcam_rule_nums)) {
+			ret = hinic3_set_fdir_tcam_rule_filter(nic_dev->hwdev,
+							       false);
+			if (ret) {
+				PMD_DRV_LOG(ERR,
+					    "disable fdir rule failed, err: %d",
+					    ret);
+			}
+		}
+	}
+
+	return 0;
+
+enable_fdir_failed:
+	(void)hinic3_flow_add_del_ethertype_filter_rule(dev, ethertype_filter,
+							!add);
+	return ret;
+}
diff --git a/drivers/net/hinic3/hinic3_fdir.h b/drivers/net/hinic3/hinic3_fdir.h
new file mode 100644
index 0000000000..fbb2461a44
--- /dev/null
+++ b/drivers/net/hinic3/hinic3_fdir.h
@@ -0,0 +1,398 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Huawei Technologies Co., Ltd
+ */
+
+#ifndef _HINIC3_FDIR_H_
+#define _HINIC3_FDIR_H_
+
+#define HINIC3_FLOW_MAX_PATTERN_NUM 16
+
+#define HINIC3_TCAM_DYNAMIC_BLOCK_SIZE 16
+
+#define HINIC3_TCAM_DYNAMIC_MAX_FILTERS 1024
+
+#define HINIC3_PKT_TCAM_DYNAMIC_INDEX_START(block_index) \
+	(HINIC3_TCAM_DYNAMIC_BLOCK_SIZE * (block_index))
+
+/* Indicate a traffic filtering rule. */
+struct rte_flow {
+	TAILQ_ENTRY(rte_flow) node;
+	enum rte_filter_type filter_type;
+	void *rule;
+};
+
+struct hinic3_fdir_rule_key {
+	struct rte_eth_ipv4_flow ipv4;
+	struct rte_eth_ipv6_flow ipv6;
+	struct rte_eth_ipv4_flow inner_ipv4;
+	struct rte_eth_ipv6_flow inner_ipv6;
+	struct rte_eth_tunnel_flow tunnel;
+	uint16_t src_port;
+	uint16_t dst_port;
+	uint8_t proto;
+};
+
+struct hinic3_fdir_filter {
+	int tcam_index;
+	uint8_t ip_type; /**< Inner ip type. */
+	uint8_t outer_ip_type;
+	uint8_t tunnel_type;
+	struct hinic3_fdir_rule_key key_mask;
+	struct hinic3_fdir_rule_key key_spec;
+	uint32_t rq_index; /**< Queue assigned when matched. */
+};
+
+/* This structure is used to describe a basic filter type. */
+struct hinic3_filter_t {
+	u16 filter_rule_nums;
+	enum rte_filter_type filter_type;
+	struct rte_eth_ethertype_filter ethertype_filter;
+	struct hinic3_fdir_filter fdir_filter;
+};
+
+enum hinic3_fdir_tunnel_mode {
+	HINIC3_FDIR_TUNNEL_MODE_NORMAL = 0,
+	HINIC3_FDIR_TUNNEL_MODE_VXLAN = 1,
+};
+
+enum hinic3_fdir_ip_type {
+	HINIC3_FDIR_IP_TYPE_IPV4 = 0,
+	HINIC3_FDIR_IP_TYPE_IPV6 = 1,
+	HINIC3_FDIR_IP_TYPE_ANY = 2,
+};
+
+/* Describe the key structure of the TCAM. */
+struct hinic3_tcam_key_mem {
+#if (RTE_BYTE_ORDER == RTE_BIG_ENDIAN)
+	u32 rsvd0 : 16;
+	u32 ip_proto : 8;
+	u32 tunnel_type : 4;
+	u32 rsvd1 : 4;
+
+	u32 function_id : 15;
+	u32 ip_type : 1;
+
+	u32 sipv4_h : 16;
+	u32 sipv4_l : 16;
+
+	u32 dipv4_h : 16;
+	u32 dipv4_l : 16;
+	u32 rsvd2 : 16;
+
+	u32 rsvd3;
+
+	u32 rsvd4 : 16;
+	u32 dport : 16;
+
+	u32 sport : 16;
+	u32 rsvd5 : 16;
+
+	u32 rsvd6 : 16;
+	u32 outer_sipv4_h : 16;
+	u32 outer_sipv4_l : 16;
+
+	u32 outer_dipv4_h : 16;
+	u32 outer_dipv4_l : 16;
+	u32 vni_h : 16;
+
+	u32 vni_l : 16;
+	u32 rsvd7 : 16;
+#else
+	u32 rsvd1 : 4;
+	u32 tunnel_type : 4;
+	u32 ip_proto : 8;
+	u32 rsvd0 : 16;
+
+	u32 sipv4_h : 16;
+	u32 ip_type : 1;
+	u32 function_id : 15;
+
+	u32 dipv4_h : 16;
+	u32 sipv4_l : 16;
+
+	u32 rsvd2 : 16;
+	u32 dipv4_l : 16;
+
+	u32 rsvd3;
+
+	u32 dport : 16;
+	u32 rsvd4 : 16;
+
+	u32 rsvd5 : 16;
+	u32 sport : 16;
+
+	u32 outer_sipv4_h : 16;
+	u32 rsvd6 : 16;
+
+	u32 outer_dipv4_h : 16;
+	u32 outer_sipv4_l : 16;
+
+	u32 vni_h : 16;
+	u32 outer_dipv4_l : 16;
+
+	u32 rsvd7 : 16;
+	u32 vni_l : 16;
+#endif
+};
+
+/*
+ * Define the IPv6-related TCAM key data structure in common
+ * scenarios or IPv6 tunnel scenarios.
+ */
+struct hinic3_tcam_key_ipv6_mem {
+#if (RTE_BYTE_ORDER == RTE_BIG_ENDIAN)
+	u32 rsvd0 : 16;
+	/* Indicates the normal IPv6 nextHdr or inner IPv4/IPv6 next proto. */
+	u32 ip_proto : 8;
+	u32 tunnel_type : 4;
+	u32 outer_ip_type : 1;
+	u32 rsvd1 : 3;
+
+	u32 function_id : 15;
+	u32 ip_type : 1;
+	u32 sipv6_key0 : 16;
+
+	u32 sipv6_key1 : 16;
+	u32 sipv6_key2 : 16;
+
+	u32 sipv6_key3 : 16;
+	u32 sipv6_key4 : 16;
+
+	u32 sipv6_key5 : 16;
+	u32 sipv6_key6 : 16;
+
+	u32 sipv6_key7 : 16;
+	u32 dport : 16;
+
+	u32 sport : 16;
+	u32 dipv6_key0 : 16;
+
+	u32 dipv6_key1 : 16;
+	u32 dipv6_key2 : 16;
+
+	u32 dipv6_key3 : 16;
+	u32 dipv6_key4 : 16;
+
+	u32 dipv6_key5 : 16;
+	u32 dipv6_key6 : 16;
+
+	u32 dipv6_key7 : 16;
+	u32 rsvd2 : 16;
+#else
+	u32 rsvd1 : 3;
+	u32 outer_ip_type : 1;
+	u32 tunnel_type : 4;
+	u32 ip_proto : 8;
+	u32 rsvd0 : 16;
+
+	u32 sipv6_key0 : 16;
+	u32 ip_type : 1;
+	u32 function_id : 15;
+
+	u32 sipv6_key2 : 16;
+	u32 sipv6_key1 : 16;
+
+	u32 sipv6_key4 : 16;
+	u32 sipv6_key3 : 16;
+
+	u32 sipv6_key6 : 16;
+	u32 sipv6_key5 : 16;
+
+	u32 dport : 16;
+	u32 sipv6_key7 : 16;
+
+	u32 dipv6_key0 : 16;
+	u32 sport : 16;
+
+	u32 dipv6_key2 : 16;
+	u32 dipv6_key1 : 16;
+
+	u32 dipv6_key4 : 16;
+	u32 dipv6_key3 : 16;
+
+	u32 dipv6_key6 : 16;
+	u32 dipv6_key5 : 16;
+
+	u32 rsvd2 : 16;
+	u32 dipv6_key7 : 16;
+#endif
+};
+
+/*
+ * Define the tcam key value data structure related to IPv6 in
+ * the VXLAN scenario.
+ */
+struct hinic3_tcam_key_vxlan_ipv6_mem {
+#if (RTE_BYTE_ORDER == RTE_BIG_ENDIAN)
+	u32 rsvd0 : 16;
+	u32 ip_proto : 8;
+	u32 tunnel_type : 4;
+	u32 rsvd1 : 4;
+
+	u32 function_id : 15;
+	u32 ip_type : 1;
+	u32 dipv6_key0 : 16;
+
+	u32 dipv6_key1 : 16;
+	u32 dipv6_key2 : 16;
+
+	u32 dipv6_key3 : 16;
+	u32 dipv6_key4 : 16;
+
+	u32 dipv6_key5 : 16;
+	u32 dipv6_key6 : 16;
+
+	u32 dipv6_key7 : 16;
+	u32 dport : 16;
+
+	u32 sport : 16;
+	u32 rsvd2 : 16;
+
+	u32 rsvd3 : 16;
+	u32 outer_sipv4_h : 16;
+
+	u32 outer_sipv4_l : 16;
+	u32 outer_dipv4_h : 16;
+
+	u32 outer_dipv4_l : 16;
+	u32 vni_h : 16;
+
+	u32 vni_l : 16;
+	u32 rsvd4 : 16;
+#else
+	u32 rsvd1 : 4;
+	u32 tunnel_type : 4;
+	u32 ip_proto : 8;
+	u32 rsvd0 : 16;
+
+	u32 dipv6_key0 : 16;
+	u32 ip_type : 1;
+	u32 function_id : 15;
+
+	u32 dipv6_key2 : 16;
+	u32 dipv6_key1 : 16;
+
+	u32 dipv6_key4 : 16;
+	u32 dipv6_key3 : 16;
+
+	u32 dipv6_key6 : 16;
+	u32 dipv6_key5 : 16;
+
+	u32 dport : 16;
+	u32 dipv6_key7 : 16;
+
+	u32 rsvd2 : 16;
+	u32 sport : 16;
+
+	u32 outer_sipv4_h : 16;
+	u32 rsvd3 : 16;
+
+	u32 outer_dipv4_h : 16;
+	u32 outer_sipv4_l : 16;
+
+	u32 vni_h : 16;
+	u32 outer_dipv4_l : 16;
+
+	u32 rsvd4 : 16;
+	u32 vni_l : 16;
+#endif
+};
+
+/*
+ * TCAM key structure. The two unions indicate the key and mask respectively.
+ * The TCAM key is consistent with the TCAM entry.
+ */
+struct hinic3_tcam_key {
+	union {
+		struct hinic3_tcam_key_mem key_info;
+		struct hinic3_tcam_key_ipv6_mem key_info_ipv6;
+		struct hinic3_tcam_key_vxlan_ipv6_mem key_info_vxlan_ipv6;
+	};
+	union {
+		struct hinic3_tcam_key_mem key_mask;
+		struct hinic3_tcam_key_ipv6_mem key_mask_ipv6;
+		struct hinic3_tcam_key_vxlan_ipv6_mem key_mask_vxlan_ipv6;
+	};
+};
+
+/* Structure indicates the TCAM filter. */
+struct hinic3_tcam_filter {
+	TAILQ_ENTRY(hinic3_tcam_filter)
+	entries; /**< Filter entry, used for linked list operations. */
+	uint16_t dynamic_block_id;	 /**< Dynamic block ID. */
+	uint16_t index;			 /**< TCAM index. */
+	struct hinic3_tcam_key tcam_key; /**< Indicate TCAM key. */
+	uint16_t queue;			 /**< Allocated RX queue. */
+};
+
+/* Define a linked list header for storing hinic3_tcam_filter data. */
+TAILQ_HEAD(hinic3_tcam_filter_list, hinic3_tcam_filter);
+
+struct hinic3_tcam_dynamic_block {
+	TAILQ_ENTRY(hinic3_tcam_dynamic_block) entries;
+	u16 dynamic_block_id;
+	u16 dynamic_index_cnt;
+	u8 dynamic_index[HINIC3_TCAM_DYNAMIC_BLOCK_SIZE];
+};
+
+/* Define a linked list header for storing hinic3_tcam_dynamic_block data. */
+TAILQ_HEAD(hinic3_tcam_dynamic_filter_list, hinic3_tcam_dynamic_block);
+
+/* Indicate TCAM dynamic block info. */
+struct hinic3_tcam_dynamic_block_info {
+	struct hinic3_tcam_dynamic_filter_list tcam_dynamic_list;
+	u16 dynamic_block_cnt;
+};
+
+/* Structure is used to store TCAM information. */
+struct hinic3_tcam_info {
+	struct hinic3_tcam_filter_list tcam_list;
+	struct hinic3_tcam_dynamic_block_info tcam_dynamic_info;
+};
+
+/* Obtain the upper and lower 16 bits. */
+#define HINIC3_32_UPPER_16_BITS(n) ((((n) >> 16)) & 0xffff)
+#define HINIC3_32_LOWER_16_BITS(n) ((n) & 0xffff)
+
+/* Number of protocol rules. */
+#define HINIC3_ARP_RULE_NUM  3
+#define HINIC3_RARP_RULE_NUM 1
+#define HINIC3_SLOW_RULE_NUM 2
+#define HINIC3_LLDP_RULE_NUM 2
+#define HINIC3_CNM_RULE_NUM  1
+#define HINIC3_ECP_RULE_NUM  2
+
+/* Define Ethernet type. */
+#define RTE_ETHER_TYPE_CNM 0x22e7
+#define RTE_ETHER_TYPE_ECP 0x8940
+
+/* Protocol type of the data packet. */
+enum hinic3_ether_type {
+	HINIC3_PKT_TYPE_ARP = 1,
+	HINIC3_PKT_TYPE_ARP_REQ,
+	HINIC3_PKT_TYPE_ARP_REP,
+	HINIC3_PKT_TYPE_RARP,
+	HINIC3_PKT_TYPE_LACP,
+	HINIC3_PKT_TYPE_LLDP,
+	HINIC3_PKT_TYPE_OAM,
+	HINIC3_PKT_TYPE_CDCP,
+	HINIC3_PKT_TYPE_CNM,
+	HINIC3_PKT_TYPE_ECP = 10,
+
+	HINIC3_PKT_UNKNOWN = 31,
+};
+
+int hinic3_flow_add_del_fdir_filter(struct rte_eth_dev *dev,
+				    struct hinic3_fdir_filter *fdir_filter,
+				    bool add);
+int hinic3_flow_add_del_ethertype_filter(struct rte_eth_dev *dev,
+					 struct rte_eth_ethertype_filter *ethertype_filter,
+					 bool add);
+
+void hinic3_free_fdir_filter(struct rte_eth_dev *dev);
+int hinic3_enable_rxq_fdir_filter(struct rte_eth_dev *dev, u32 queue_id,
+				  u32 able);
+int hinic3_flow_parse_attr(const struct rte_flow_attr *attr,
+			   struct rte_flow_error *error);
+
+#endif /* _HINIC3_FDIR_H_ */
diff --git a/drivers/net/hinic3/hinic3_flow.c b/drivers/net/hinic3/hinic3_flow.c
new file mode 100644
index 0000000000..b310848530
--- /dev/null
+++ b/drivers/net/hinic3/hinic3_flow.c
@@ -0,0 +1,1700 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Huawei Technologies Co., Ltd
+ */
+
+#include <errno.h>
+#include <stdio.h>
+#include <stdint.h>
+#include <string.h>
+#include <unistd.h>
+
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_flow.h>
+#include <rte_flow_driver.h>
+#include <rte_malloc.h>
+
+#include "base/hinic3_compat.h"
+#include "base/hinic3_hwdev.h"
+#include "base/hinic3_nic_cfg.h"
+#include "hinic3_ethdev.h"
+#include "hinic3_fdir.h"
+#include "hinic3_flow.h"
+
+#define HINIC3_UINT8_MAX 0xff
+
+/* Indicate the type of the IPv4 ICMP matching pattern. */
+static enum rte_flow_item_type pattern_ipv4_icmp[] = {
+	HINIC3_FLOW_ITEM_TYPE_ETH,
+	HINIC3_FLOW_ITEM_TYPE_IPV4,
+	HINIC3_FLOW_ITEM_TYPE_ICMP,
+	HINIC3_FLOW_ITEM_TYPE_END,
+};
+
+/* Indicate the type of the IPv4 any protocol matching pattern. */
+static enum rte_flow_item_type pattern_ipv4_any[] = {
+	HINIC3_FLOW_ITEM_TYPE_ETH,
+	HINIC3_FLOW_ITEM_TYPE_IPV4,
+	HINIC3_FLOW_ITEM_TYPE_ANY,
+	HINIC3_FLOW_ITEM_TYPE_END,
+};
+
+/* Indicate the type of the Ether matching pattern. */
+static enum rte_flow_item_type pattern_ethertype[] = {
+	HINIC3_FLOW_ITEM_TYPE_ETH,
+	HINIC3_FLOW_ITEM_TYPE_END,
+};
+
+/* Indicate the type of the TCP matching pattern. */
+static enum rte_flow_item_type pattern_ethertype_tcp[] = {
+	HINIC3_FLOW_ITEM_TYPE_ETH,
+	HINIC3_FLOW_ITEM_TYPE_TCP,
+	HINIC3_FLOW_ITEM_TYPE_END,
+};
+
+/* Indicate the type of the UDP matching pattern. */
+static enum rte_flow_item_type pattern_ethertype_udp[] = {
+	HINIC3_FLOW_ITEM_TYPE_ETH,
+	HINIC3_FLOW_ITEM_TYPE_UDP,
+	HINIC3_FLOW_ITEM_TYPE_END,
+};
+
+/* Indicate the type of the IPv4 vxlan matching pattern. */
+static enum rte_flow_item_type pattern_ipv4_vxlan[] = {
+	HINIC3_FLOW_ITEM_TYPE_ETH, HINIC3_FLOW_ITEM_TYPE_IPV4,
+	HINIC3_FLOW_ITEM_TYPE_UDP, HINIC3_FLOW_ITEM_TYPE_VXLAN,
+	HINIC3_FLOW_ITEM_TYPE_END,
+};
+
+/* Indicate the type of the IPv4 vxlan any protocol matching pattern. */
+static enum rte_flow_item_type pattern_ipv4_vxlan_any[] = {
+	HINIC3_FLOW_ITEM_TYPE_ETH, HINIC3_FLOW_ITEM_TYPE_IPV4,
+	HINIC3_FLOW_ITEM_TYPE_UDP, HINIC3_FLOW_ITEM_TYPE_VXLAN,
+	HINIC3_FLOW_ITEM_TYPE_ANY, HINIC3_FLOW_ITEM_TYPE_END,
+};
+
+/* Indicate the type of the IPv4 vxlan TCP matching pattern. */
+static enum rte_flow_item_type pattern_ipv4_vxlan_tcp[] = {
+	HINIC3_FLOW_ITEM_TYPE_ETH, HINIC3_FLOW_ITEM_TYPE_IPV4,
+	HINIC3_FLOW_ITEM_TYPE_UDP, HINIC3_FLOW_ITEM_TYPE_VXLAN,
+	HINIC3_FLOW_ITEM_TYPE_TCP, HINIC3_FLOW_ITEM_TYPE_END,
+};
+
+/* Indicate the type of the IPv4 vxlan UDP matching pattern. */
+static enum rte_flow_item_type pattern_ipv4_vxlan_udp[] = {
+	HINIC3_FLOW_ITEM_TYPE_ETH, HINIC3_FLOW_ITEM_TYPE_IPV4,
+	HINIC3_FLOW_ITEM_TYPE_UDP, HINIC3_FLOW_ITEM_TYPE_VXLAN,
+	HINIC3_FLOW_ITEM_TYPE_UDP, HINIC3_FLOW_ITEM_TYPE_END,
+};
+
+/* Indicate the type of the IPv4 vxlan IPv4 matching pattern. */
+static enum rte_flow_item_type pattern_ipv4_vxlan_ipv4[] = {
+	HINIC3_FLOW_ITEM_TYPE_ETH, HINIC3_FLOW_ITEM_TYPE_IPV4,
+	HINIC3_FLOW_ITEM_TYPE_UDP, HINIC3_FLOW_ITEM_TYPE_VXLAN,
+	HINIC3_FLOW_ITEM_TYPE_ETH, HINIC3_FLOW_ITEM_TYPE_IPV4,
+	HINIC3_FLOW_ITEM_TYPE_END,
+};
+
+/* Indicate the type of the IPv4 vxlan IPv4 TCP matching pattern. */
+static enum rte_flow_item_type pattern_ipv4_vxlan_ipv4_tcp[] = {
+	HINIC3_FLOW_ITEM_TYPE_ETH, HINIC3_FLOW_ITEM_TYPE_IPV4,
+	HINIC3_FLOW_ITEM_TYPE_UDP, HINIC3_FLOW_ITEM_TYPE_VXLAN,
+	HINIC3_FLOW_ITEM_TYPE_ETH, HINIC3_FLOW_ITEM_TYPE_IPV4,
+	HINIC3_FLOW_ITEM_TYPE_TCP, HINIC3_FLOW_ITEM_TYPE_END,
+};
+
+/* Indicate the type of the IPv4 vxlan IPv4 UDP matching pattern. */
+static enum rte_flow_item_type pattern_ipv4_vxlan_ipv4_udp[] = {
+	HINIC3_FLOW_ITEM_TYPE_ETH, HINIC3_FLOW_ITEM_TYPE_IPV4,
+	HINIC3_FLOW_ITEM_TYPE_UDP, HINIC3_FLOW_ITEM_TYPE_VXLAN,
+	HINIC3_FLOW_ITEM_TYPE_ETH, HINIC3_FLOW_ITEM_TYPE_IPV4,
+	HINIC3_FLOW_ITEM_TYPE_UDP, HINIC3_FLOW_ITEM_TYPE_END,
+};
+
+/* Indicate the type of the IPv4 vxlan IPv6 matching pattern. */
+static enum rte_flow_item_type pattern_ipv4_vxlan_ipv6[] = {
+	HINIC3_FLOW_ITEM_TYPE_ETH, HINIC3_FLOW_ITEM_TYPE_IPV4,
+	HINIC3_FLOW_ITEM_TYPE_UDP, HINIC3_FLOW_ITEM_TYPE_VXLAN,
+	HINIC3_FLOW_ITEM_TYPE_ETH, HINIC3_FLOW_ITEM_TYPE_IPV6,
+	HINIC3_FLOW_ITEM_TYPE_END,
+};
+
+/* Indicate the type of the IPv4 vxlan IPv6 TCP matching pattern. */
+static enum rte_flow_item_type pattern_ipv4_vxlan_ipv6_tcp[] = {
+	HINIC3_FLOW_ITEM_TYPE_ETH, HINIC3_FLOW_ITEM_TYPE_IPV4,
+	HINIC3_FLOW_ITEM_TYPE_UDP, HINIC3_FLOW_ITEM_TYPE_VXLAN,
+	HINIC3_FLOW_ITEM_TYPE_ETH, HINIC3_FLOW_ITEM_TYPE_IPV6,
+	HINIC3_FLOW_ITEM_TYPE_TCP, HINIC3_FLOW_ITEM_TYPE_END,
+};
+
+/* Indicate the type of the IPv4 vxlan IPv6 UDP matching pattern. */
+static enum rte_flow_item_type pattern_ipv4_vxlan_ipv6_udp[] = {
+	HINIC3_FLOW_ITEM_TYPE_ETH, HINIC3_FLOW_ITEM_TYPE_IPV4,
+	HINIC3_FLOW_ITEM_TYPE_UDP, HINIC3_FLOW_ITEM_TYPE_VXLAN,
+	HINIC3_FLOW_ITEM_TYPE_ETH, HINIC3_FLOW_ITEM_TYPE_IPV6,
+	HINIC3_FLOW_ITEM_TYPE_UDP, HINIC3_FLOW_ITEM_TYPE_END,
+};
+
+/* Indicate the type of the IPv4 matching pattern. */
+static enum rte_flow_item_type pattern_ipv4[] = {
+	HINIC3_FLOW_ITEM_TYPE_ETH,
+	HINIC3_FLOW_ITEM_TYPE_IPV4,
+	HINIC3_FLOW_ITEM_TYPE_END,
+};
+
+/* Indicate the type of the IPv4 UDP matching pattern. */
+static enum rte_flow_item_type pattern_ipv4_udp[] = {
+	HINIC3_FLOW_ITEM_TYPE_ETH,
+	HINIC3_FLOW_ITEM_TYPE_IPV4,
+	HINIC3_FLOW_ITEM_TYPE_UDP,
+	HINIC3_FLOW_ITEM_TYPE_END,
+};
+
+/* Indicate the type of the IPv4 TCP matching pattern. */
+static enum rte_flow_item_type pattern_ipv4_tcp[] = {
+	HINIC3_FLOW_ITEM_TYPE_ETH,
+	HINIC3_FLOW_ITEM_TYPE_IPV4,
+	HINIC3_FLOW_ITEM_TYPE_TCP,
+	HINIC3_FLOW_ITEM_TYPE_END,
+};
+
+/* Indicate the type of the IPv6 matching pattern. */
+static enum rte_flow_item_type pattern_ipv6[] = {
+	HINIC3_FLOW_ITEM_TYPE_ETH,
+	HINIC3_FLOW_ITEM_TYPE_IPV6,
+	HINIC3_FLOW_ITEM_TYPE_END,
+};
+
+/* Indicate the type of the IPv6 UDP matching pattern. */
+static enum rte_flow_item_type pattern_ipv6_udp[] = {
+	HINIC3_FLOW_ITEM_TYPE_ETH,
+	HINIC3_FLOW_ITEM_TYPE_IPV6,
+	HINIC3_FLOW_ITEM_TYPE_UDP,
+	HINIC3_FLOW_ITEM_TYPE_END,
+};
+
+/* Indicate the type of the IPv6 TCP matching pattern. */
+static enum rte_flow_item_type pattern_ipv6_tcp[] = {
+	HINIC3_FLOW_ITEM_TYPE_ETH,
+	HINIC3_FLOW_ITEM_TYPE_IPV6,
+	HINIC3_FLOW_ITEM_TYPE_TCP,
+	HINIC3_FLOW_ITEM_TYPE_END,
+};
+
+/* Indicate the type of the IPv6 vxlan matching pattern. */
+static enum rte_flow_item_type pattern_ipv6_vxlan[] = {
+	HINIC3_FLOW_ITEM_TYPE_ETH, HINIC3_FLOW_ITEM_TYPE_IPV6,
+	HINIC3_FLOW_ITEM_TYPE_UDP, HINIC3_FLOW_ITEM_TYPE_VXLAN,
+	HINIC3_FLOW_ITEM_TYPE_END,
+};
+
+/* Indicate the type of the IPv6 VXLAN any protocol matching pattern. */
+static enum rte_flow_item_type pattern_ipv6_vxlan_any[] = {
+	HINIC3_FLOW_ITEM_TYPE_ETH, HINIC3_FLOW_ITEM_TYPE_IPV6,
+	HINIC3_FLOW_ITEM_TYPE_UDP, HINIC3_FLOW_ITEM_TYPE_VXLAN,
+	HINIC3_FLOW_ITEM_TYPE_ANY, HINIC3_FLOW_ITEM_TYPE_END,
+};
+
+/* Indicate the type of the IPv6 VXLAN TCP matching pattern. */
+static enum rte_flow_item_type pattern_ipv6_vxlan_tcp[] = {
+	HINIC3_FLOW_ITEM_TYPE_ETH, HINIC3_FLOW_ITEM_TYPE_IPV6,
+	HINIC3_FLOW_ITEM_TYPE_UDP, HINIC3_FLOW_ITEM_TYPE_VXLAN,
+	HINIC3_FLOW_ITEM_TYPE_TCP, HINIC3_FLOW_ITEM_TYPE_END,
+};
+
+/* Indicate the type of the IPv6 VXLAN UDP matching pattern. */
+static enum rte_flow_item_type pattern_ipv6_vxlan_udp[] = {
+	HINIC3_FLOW_ITEM_TYPE_ETH, HINIC3_FLOW_ITEM_TYPE_IPV6,
+	HINIC3_FLOW_ITEM_TYPE_UDP, HINIC3_FLOW_ITEM_TYPE_VXLAN,
+	HINIC3_FLOW_ITEM_TYPE_UDP, HINIC3_FLOW_ITEM_TYPE_END,
+};
+
+typedef int (*hinic3_parse_filter_t)(struct rte_eth_dev *dev,
+				     const struct rte_flow_attr *attr,
+				     const struct rte_flow_item pattern[],
+				     const struct rte_flow_action actions[],
+				     struct rte_flow_error *error,
+				     struct hinic3_filter_t *filter);
+
+/* Indicate a valid filter pattern and its parsing function. */
+struct hinic3_valid_pattern {
+	enum rte_flow_item_type *items;
+	hinic3_parse_filter_t parse_filter;
+};
+
+static int hinic3_flow_parse_fdir_filter(struct rte_eth_dev *dev,
+					 const struct rte_flow_attr *attr,
+					 const struct rte_flow_item pattern[],
+					 const struct rte_flow_action actions[],
+					 struct rte_flow_error *error,
+					 struct hinic3_filter_t *filter);
+
+static int hinic3_flow_parse_ethertype_filter(struct rte_eth_dev *dev,
+					      const struct rte_flow_attr *attr,
+					      const struct rte_flow_item pattern[],
+					      const struct rte_flow_action actions[],
+					      struct rte_flow_error *error,
+					      struct hinic3_filter_t *filter);
+
+static int hinic3_flow_parse_fdir_vxlan_filter(struct rte_eth_dev *dev,
+					       const struct rte_flow_attr *attr,
+					       const struct rte_flow_item pattern[],
+					       const struct rte_flow_action actions[],
+					       struct rte_flow_error *error,
+					       struct hinic3_filter_t *filter);
+
+/*
+ * Define a supported pattern array, including the matching patterns of
+ * various network protocols and corresponding parsing functions.
+ */
+static const struct hinic3_valid_pattern hinic3_supported_patterns[] = {
+	/* Support ethertype. */
+	{pattern_ethertype, hinic3_flow_parse_ethertype_filter},
+	/* Support ipv4 but not tunnel, and any field can be masked. */
+	{pattern_ipv4, hinic3_flow_parse_fdir_filter},
+	{pattern_ipv4_any, hinic3_flow_parse_fdir_filter},
+	/* Support ipv4 + l4 but not tunnel, and any field can be masked. */
+	{pattern_ipv4_udp, hinic3_flow_parse_fdir_filter},
+	{pattern_ipv4_tcp, hinic3_flow_parse_fdir_filter},
+	/* Support ipv4 + icmp but not tunnel, and any field can be masked. */
+	{pattern_ipv4_icmp, hinic3_flow_parse_fdir_filter},
+
+	/* Support ipv4 + l4 but not tunnel, and any field can be masked. */
+	{pattern_ethertype_udp, hinic3_flow_parse_fdir_filter},
+	{pattern_ethertype_tcp, hinic3_flow_parse_fdir_filter},
+
+	/* Support ipv4 + vxlan + any, and any field can be masked. */
+	{pattern_ipv4_vxlan, hinic3_flow_parse_fdir_vxlan_filter},
+	/* Support ipv4 + vxlan + ipv4, and any field can be masked. */
+	{pattern_ipv4_vxlan_ipv4, hinic3_flow_parse_fdir_vxlan_filter},
+	/* Support ipv4 + vxlan + ipv4 + l4, and any field can be masked. */
+	{pattern_ipv4_vxlan_ipv4_tcp, hinic3_flow_parse_fdir_vxlan_filter},
+	{pattern_ipv4_vxlan_ipv4_udp, hinic3_flow_parse_fdir_vxlan_filter},
+	/* Support ipv4 + vxlan + ipv6, and any field can be masked. */
+	{pattern_ipv4_vxlan_ipv6, hinic3_flow_parse_fdir_vxlan_filter},
+	/* Support ipv4 + vxlan + ipv6 + l4, and any field can be masked. */
+	{pattern_ipv4_vxlan_ipv6_tcp, hinic3_flow_parse_fdir_vxlan_filter},
+	{pattern_ipv4_vxlan_ipv6_udp, hinic3_flow_parse_fdir_vxlan_filter},
+	/* Support ipv4 + vxlan + l4, and any field can be masked. */
+	{pattern_ipv4_vxlan_tcp, hinic3_flow_parse_fdir_vxlan_filter},
+	{pattern_ipv4_vxlan_udp, hinic3_flow_parse_fdir_vxlan_filter},
+	{pattern_ipv4_vxlan_any, hinic3_flow_parse_fdir_vxlan_filter},
+
+	/* Support ipv6 but not tunnel, and any field can be masked. */
+	{pattern_ipv6, hinic3_flow_parse_fdir_filter},
+	/* Support ipv6 + l4 but not tunnel, and any field can be masked. */
+	{pattern_ipv6_udp, hinic3_flow_parse_fdir_filter},
+	{pattern_ipv6_tcp, hinic3_flow_parse_fdir_filter},
+
+	/* Support ipv6 + vxlan + any, and any field can be masked. */
+	{pattern_ipv6_vxlan, hinic3_flow_parse_fdir_vxlan_filter},
+	{pattern_ipv6_vxlan_any, hinic3_flow_parse_fdir_vxlan_filter},
+
+	/* Support ipv6 + vxlan + l4, and any field can be masked. */
+	{pattern_ipv6_vxlan_tcp, hinic3_flow_parse_fdir_vxlan_filter},
+	{pattern_ipv6_vxlan_udp, hinic3_flow_parse_fdir_vxlan_filter},
+};
+
+static inline void
+net_addr_to_host(uint32_t *dst, const uint32_t *src, size_t len)
+{
+	size_t i;
+
+	for (i = 0; i < len; i++)
+		dst[i] = rte_be_to_cpu_32(src[i]);
+}
+
+static bool
+hinic3_match_pattern(enum rte_flow_item_type *item_array,
+		     const struct rte_flow_item *pattern)
+{
+	const struct rte_flow_item *item = pattern;
+
+	/* Skip the first void item. */
+	while (item->type == HINIC3_FLOW_ITEM_TYPE_VOID)
+		item++;
+
+	/* Walk both lists while the non-void items match. */
+	while (((*item_array == item->type) &&
+		(*item_array != HINIC3_FLOW_ITEM_TYPE_END)) ||
+	       (item->type == HINIC3_FLOW_ITEM_TYPE_VOID)) {
+		if (item->type == HINIC3_FLOW_ITEM_TYPE_VOID) {
+			item++;
+		} else {
+			item_array++;
+			item++;
+		}
+	}
+
+	return (*item_array == HINIC3_FLOW_ITEM_TYPE_END &&
+		item->type == HINIC3_FLOW_ITEM_TYPE_END);
+}
+
+/**
+ * Find matching parsing filter functions.
+ *
+ * @param[in] pattern
+ * Pattern to match.
+ * @return
+ * The matching parse-filter function, or NULL if no supported pattern matches.
+ */
+static hinic3_parse_filter_t
+hinic3_find_parse_filter_func(const struct rte_flow_item *pattern)
+{
+	hinic3_parse_filter_t parse_filter = NULL;
+	uint8_t i;
+	/* Traverse all supported patterns. */
+	for (i = 0; i < RTE_DIM(hinic3_supported_patterns); i++) {
+		if (hinic3_match_pattern(hinic3_supported_patterns[i].items,
+					 pattern)) {
+			parse_filter =
+				hinic3_supported_patterns[i].parse_filter;
+			break;
+		}
+	}
+
+	return parse_filter;
+}
+
+/**
+ * Action for parsing and processing Ethernet types.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] actions
+ * Indicates the action to be taken on the matched traffic.
+ * @param[out] error
+ * Structure that contains error information, such as error code and error
+ * description.
+ * @param[out] filter
+ * Filter information, used to store and manipulate packet filtering rules.
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_flow_parse_action(struct rte_eth_dev *dev,
+			 const struct rte_flow_action *actions,
+			 struct rte_flow_error *error,
+			 struct hinic3_filter_t *filter)
+{
+	const struct rte_flow_action_queue *act_q;
+	const struct rte_flow_action *act = actions;
+
+	/* Skip the first void item. */
+	while (act->type == RTE_FLOW_ACTION_TYPE_VOID)
+		act++;
+
+	switch (act->type) {
+	case RTE_FLOW_ACTION_TYPE_QUEUE:
+		act_q = (const struct rte_flow_action_queue *)act->conf;
+		filter->fdir_filter.rq_index = act_q->index;
+		if (filter->fdir_filter.rq_index >= dev->data->nb_rx_queues) {
+			rte_flow_error_set(error, EINVAL,
+					   HINIC3_FLOW_ERROR_TYPE_ACTION, act,
+					   "Invalid action param.");
+			return -rte_errno;
+		}
+		break;
+	default:
+		rte_flow_error_set(error, EINVAL, HINIC3_FLOW_ERROR_TYPE_ACTION,
+				   act, "Invalid action type.");
+		return -rte_errno;
+	}
+
+	return 0;
+}
+
+int
+hinic3_flow_parse_attr(const struct rte_flow_attr *attr,
+		       struct rte_flow_error *error)
+{
+	/* Not supported. */
+	if (!attr->ingress || attr->egress || attr->priority || attr->group) {
+		rte_flow_error_set(error, EINVAL,
+				   HINIC3_FLOW_ERROR_TYPE_UNSPECIFIED, attr,
+				   "Only ingress is supported.");
+		return -rte_errno;
+	}
+
+	return 0;
+}
+
+static int
+hinic3_flow_fdir_ipv4(const struct rte_flow_item *flow_item,
+		      struct hinic3_filter_t *filter,
+		      struct rte_flow_error *error)
+{
+	const struct rte_flow_item_ipv4 *spec_ipv4, *mask_ipv4;
+
+	mask_ipv4 = (const struct rte_flow_item_ipv4 *)flow_item->mask;
+	spec_ipv4 = (const struct rte_flow_item_ipv4 *)flow_item->spec;
+	if (!mask_ipv4 || !spec_ipv4) {
+		rte_flow_error_set(error, EINVAL, HINIC3_FLOW_ERROR_TYPE_ITEM,
+				   flow_item,
+				   "Invalid fdir filter ipv4 mask or spec");
+		return -rte_errno;
+	}
+
+	/*
+	 * Only src address, dst address and proto are supported;
+	 * all other fields must be masked.
+	 */
+	if (mask_ipv4->hdr.version_ihl || mask_ipv4->hdr.type_of_service ||
+	    mask_ipv4->hdr.total_length || mask_ipv4->hdr.packet_id ||
+	    mask_ipv4->hdr.fragment_offset || mask_ipv4->hdr.time_to_live ||
+	    mask_ipv4->hdr.hdr_checksum) {
+		rte_flow_error_set(error, EINVAL, HINIC3_FLOW_ERROR_TYPE_ITEM,
+				   flow_item,
+				   "Not supported by fdir filter, ipv4 only "
+				   "support src ip, dst ip, proto");
+		return -rte_errno;
+	}
+
+	/* Set the filter information. */
+	filter->fdir_filter.ip_type = HINIC3_FDIR_IP_TYPE_IPV4;
+	filter->fdir_filter.tunnel_type = HINIC3_FDIR_TUNNEL_MODE_NORMAL;
+	filter->fdir_filter.key_mask.ipv4.src_ip =
+		rte_be_to_cpu_32(mask_ipv4->hdr.src_addr);
+	filter->fdir_filter.key_spec.ipv4.src_ip =
+		rte_be_to_cpu_32(spec_ipv4->hdr.src_addr);
+	filter->fdir_filter.key_mask.ipv4.dst_ip =
+		rte_be_to_cpu_32(mask_ipv4->hdr.dst_addr);
+	filter->fdir_filter.key_spec.ipv4.dst_ip =
+		rte_be_to_cpu_32(spec_ipv4->hdr.dst_addr);
+	filter->fdir_filter.key_mask.proto = mask_ipv4->hdr.next_proto_id;
+	filter->fdir_filter.key_spec.proto = spec_ipv4->hdr.next_proto_id;
+
+	return 0;
+}
+
+static int
+hinic3_flow_fdir_ipv6(const struct rte_flow_item *flow_item,
+		      struct hinic3_filter_t *filter,
+		      struct rte_flow_error *error)
+{
+	const struct rte_flow_item_ipv6 *spec_ipv6, *mask_ipv6;
+
+	mask_ipv6 = (const struct rte_flow_item_ipv6 *)flow_item->mask;
+	spec_ipv6 = (const struct rte_flow_item_ipv6 *)flow_item->spec;
+	if (!mask_ipv6 || !spec_ipv6) {
+		rte_flow_error_set(error, EINVAL, HINIC3_FLOW_ERROR_TYPE_ITEM,
+				   flow_item,
+				   "Invalid fdir filter ipv6 mask or spec");
+		return -rte_errno;
+	}
+
+	/* Only src address, dst address and proto are supported. */
+	if (mask_ipv6->hdr.vtc_flow || mask_ipv6->hdr.payload_len ||
+	    mask_ipv6->hdr.hop_limits) {
+		rte_flow_error_set(error, EINVAL, HINIC3_FLOW_ERROR_TYPE_ITEM,
+				   flow_item,
+				   "Not supported by fdir filter, ipv6 only "
+				   "support src ip, dst ip, proto");
+		return -rte_errno;
+	}
+
+	/* Set the filter information. */
+	filter->fdir_filter.ip_type = HINIC3_FDIR_IP_TYPE_IPV6;
+	filter->fdir_filter.tunnel_type = HINIC3_FDIR_TUNNEL_MODE_NORMAL;
+	net_addr_to_host(filter->fdir_filter.key_mask.ipv6.src_ip,
+			 (const uint32_t *)mask_ipv6->hdr.src_addr.a, 4);
+	net_addr_to_host(filter->fdir_filter.key_spec.ipv6.src_ip,
+			 (const uint32_t *)spec_ipv6->hdr.src_addr.a, 4);
+	net_addr_to_host(filter->fdir_filter.key_mask.ipv6.dst_ip,
+			 (const uint32_t *)mask_ipv6->hdr.dst_addr.a, 4);
+	net_addr_to_host(filter->fdir_filter.key_spec.ipv6.dst_ip,
+			 (const uint32_t *)spec_ipv6->hdr.dst_addr.a, 4);
+	filter->fdir_filter.key_mask.proto = mask_ipv6->hdr.proto;
+	filter->fdir_filter.key_spec.proto = spec_ipv6->hdr.proto;
+
+	return 0;
+}
+
+static int
+hinic3_flow_fdir_tcp(const struct rte_flow_item *flow_item,
+		     struct hinic3_filter_t *filter,
+		     struct rte_flow_error *error)
+{
+	const struct rte_flow_item_tcp *spec_tcp, *mask_tcp;
+
+	mask_tcp = (const struct rte_flow_item_tcp *)flow_item->mask;
+	spec_tcp = (const struct rte_flow_item_tcp *)flow_item->spec;
+
+	filter->fdir_filter.key_mask.proto = HINIC3_UINT8_MAX;
+	filter->fdir_filter.key_spec.proto = IPPROTO_TCP;
+
+	if (!mask_tcp && !spec_tcp)
+		return 0;
+
+	if (!mask_tcp || !spec_tcp) {
+		rte_flow_error_set(error, EINVAL, HINIC3_FLOW_ERROR_TYPE_ITEM,
+				   flow_item,
+				   "Invalid fdir filter tcp mask or spec");
+		return -rte_errno;
+	}
+
+	/* Only src and dst ports are supported; other fields must be masked. */
+	if (mask_tcp->hdr.sent_seq || mask_tcp->hdr.recv_ack ||
+	    mask_tcp->hdr.data_off || mask_tcp->hdr.rx_win ||
+	    mask_tcp->hdr.tcp_flags || mask_tcp->hdr.cksum ||
+	    mask_tcp->hdr.tcp_urp) {
+		rte_flow_error_set(error, EINVAL, HINIC3_FLOW_ERROR_TYPE_ITEM,
+				   flow_item,
+				   "Not supported by fdir filter, tcp only "
+				   "support src port, dst port");
+		return -rte_errno;
+	}
+
+	/* Set the filter information. */
+	filter->fdir_filter.key_mask.src_port =
+		(u16)rte_be_to_cpu_16(mask_tcp->hdr.src_port);
+	filter->fdir_filter.key_spec.src_port =
+		(u16)rte_be_to_cpu_16(spec_tcp->hdr.src_port);
+	filter->fdir_filter.key_mask.dst_port =
+		(u16)rte_be_to_cpu_16(mask_tcp->hdr.dst_port);
+	filter->fdir_filter.key_spec.dst_port =
+		(u16)rte_be_to_cpu_16(spec_tcp->hdr.dst_port);
+
+	return 0;
+}
+
+static int
+hinic3_flow_fdir_udp(const struct rte_flow_item *flow_item,
+		     struct hinic3_filter_t *filter,
+		     struct rte_flow_error *error)
+{
+	const struct rte_flow_item_udp *spec_udp, *mask_udp;
+
+	mask_udp = (const struct rte_flow_item_udp *)flow_item->mask;
+	spec_udp = (const struct rte_flow_item_udp *)flow_item->spec;
+
+	filter->fdir_filter.key_mask.proto = HINIC3_UINT8_MAX;
+	filter->fdir_filter.key_spec.proto = IPPROTO_UDP;
+
+	if (!mask_udp && !spec_udp)
+		return 0;
+
+	if (!mask_udp || !spec_udp) {
+		rte_flow_error_set(error, EINVAL, HINIC3_FLOW_ERROR_TYPE_ITEM,
+				   flow_item,
+				   "Invalid fdir filter udp mask or spec");
+		return -rte_errno;
+	}
+
+	/* Set the filter information. */
+	filter->fdir_filter.key_mask.src_port =
+		(u16)rte_be_to_cpu_16(mask_udp->hdr.src_port);
+	filter->fdir_filter.key_spec.src_port =
+		(u16)rte_be_to_cpu_16(spec_udp->hdr.src_port);
+	filter->fdir_filter.key_mask.dst_port =
+		(u16)rte_be_to_cpu_16(mask_udp->hdr.dst_port);
+	filter->fdir_filter.key_spec.dst_port =
+		(u16)rte_be_to_cpu_16(spec_udp->hdr.dst_port);
+
+	return 0;
+}
+
+/**
+ * Parse the pattern of network traffic and apply the parsing result to the
+ * traffic filter.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] pattern
+ * Indicates the pattern or matching condition of a traffic rule.
+ * @param[out] error
+ * Structure that contains error information, such as error code and error
+ * description.
+ * @param[out] filter
+ * Filter information, used to store and manipulate packet filtering rules.
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_flow_parse_fdir_pattern(__rte_unused struct rte_eth_dev *dev,
+			       const struct rte_flow_item *pattern,
+			       struct rte_flow_error *error,
+			       struct hinic3_filter_t *filter)
+{
+	const struct rte_flow_item *flow_item = pattern;
+	enum rte_flow_item_type type;
+	int err;
+
+	filter->fdir_filter.ip_type = HINIC3_FDIR_IP_TYPE_ANY;
+	/* Traverse all items until HINIC3_FLOW_ITEM_TYPE_END is reached. */
+	for (; flow_item->type != HINIC3_FLOW_ITEM_TYPE_END; flow_item++) {
+		if (flow_item->last) {
+			rte_flow_error_set(error, EINVAL,
+					   HINIC3_FLOW_ERROR_TYPE_ITEM,
+					   flow_item, "Range is not supported");
+			return -rte_errno;
+		}
+		type = flow_item->type;
+		switch (type) {
+		case HINIC3_FLOW_ITEM_TYPE_ETH:
+			if (flow_item->spec || flow_item->mask) {
+				rte_flow_error_set(error, EINVAL,
+						   HINIC3_FLOW_ERROR_TYPE_ITEM,
+						   flow_item,
+						   "Not supported by fdir "
+						   "filter, mac is not supported");
+				return -rte_errno;
+			}
+			break;
+
+		case HINIC3_FLOW_ITEM_TYPE_IPV4:
+			err = hinic3_flow_fdir_ipv4(flow_item, filter, error);
+			if (err != 0)
+				return -rte_errno;
+			break;
+
+		case HINIC3_FLOW_ITEM_TYPE_IPV6:
+			err = hinic3_flow_fdir_ipv6(flow_item, filter, error);
+			if (err != 0)
+				return -rte_errno;
+			break;
+
+		case HINIC3_FLOW_ITEM_TYPE_TCP:
+			err = hinic3_flow_fdir_tcp(flow_item, filter, error);
+			if (err != 0)
+				return -rte_errno;
+			break;
+
+		case HINIC3_FLOW_ITEM_TYPE_UDP:
+			err = hinic3_flow_fdir_udp(flow_item, filter, error);
+			if (err != 0)
+				return -rte_errno;
+			break;
+
+		default:
+			break;
+		}
+	}
+
+	return 0;
+}
+
+/**
+ * Resolve rules for network traffic filters.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] attr
+ * Indicates the attribute of a flow rule.
+ * @param[in] pattern
+ * Indicates the pattern or matching condition of a traffic rule.
+ * @param[in] actions
+ * Indicates the action to be taken on the matched traffic.
+ * @param[out] error
+ * Structure that contains error information, such as error code and error
+ * description.
+ * @param[out] filter
+ * Filter information, used to store and manipulate packet filtering rules.
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_flow_parse_fdir_filter(struct rte_eth_dev *dev,
+			      const struct rte_flow_attr *attr,
+			      const struct rte_flow_item pattern[],
+			      const struct rte_flow_action actions[],
+			      struct rte_flow_error *error,
+			      struct hinic3_filter_t *filter)
+{
+	int ret;
+
+	ret = hinic3_flow_parse_fdir_pattern(dev, pattern, error, filter);
+	if (ret)
+		return ret;
+
+	ret = hinic3_flow_parse_action(dev, actions, error, filter);
+	if (ret)
+		return ret;
+
+	ret = hinic3_flow_parse_attr(attr, error);
+	if (ret)
+		return ret;
+
+	filter->filter_type = RTE_ETH_FILTER_FDIR;
+
+	return 0;
+}
+
+/**
+ * Parse and process the actions of the Ethernet type.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] actions
+ * Indicates the action to be taken on the matched traffic.
+ * @param[out] error
+ * Structure that contains error information, such as error code and error
+ * description.
+ * @param[out] filter
+ * Filter information, used to store and manipulate packet filtering rules.
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_flow_parse_ethertype_action(struct rte_eth_dev *dev,
+				   const struct rte_flow_action *actions,
+				   struct rte_flow_error *error,
+				   struct hinic3_filter_t *filter)
+{
+	const struct rte_flow_action *act = actions;
+	const struct rte_flow_action_queue *act_q;
+
+	/* Skip the first void item. */
+	while (act->type == RTE_FLOW_ACTION_TYPE_VOID)
+		act++;
+
+	switch (act->type) {
+	case RTE_FLOW_ACTION_TYPE_QUEUE:
+		act_q = (const struct rte_flow_action_queue *)act->conf;
+		filter->ethertype_filter.queue = act_q->index;
+		if (filter->ethertype_filter.queue >= dev->data->nb_rx_queues) {
+			rte_flow_error_set(error, EINVAL,
+					   HINIC3_FLOW_ERROR_TYPE_ACTION, act,
+					   "Invalid action param.");
+			return -rte_errno;
+		}
+		break;
+
+	default:
+		rte_flow_error_set(error, EINVAL, HINIC3_FLOW_ERROR_TYPE_ACTION,
+				   act, "Invalid action type.");
+		return -rte_errno;
+	}
+
+	return 0;
+}
+
+static int
+hinic3_flow_parse_ethertype_pattern(__rte_unused struct rte_eth_dev *dev,
+				    const struct rte_flow_item *pattern,
+				    struct rte_flow_error *error,
+				    struct hinic3_filter_t *filter)
+{
+	const struct rte_flow_item_eth *ether_spec, *ether_mask;
+	const struct rte_flow_item *flow_item = pattern;
+	enum rte_flow_item_type type;
+
+	/* Traverse all items until HINIC3_FLOW_ITEM_TYPE_END is reached. */
+	for (; flow_item->type != HINIC3_FLOW_ITEM_TYPE_END; flow_item++) {
+		if (flow_item->last) {
+			rte_flow_error_set(error, EINVAL,
+					   HINIC3_FLOW_ERROR_TYPE_ITEM,
+					   flow_item, "Range is not supported");
+			return -rte_errno;
+		}
+		type = flow_item->type;
+		switch (type) {
+		case HINIC3_FLOW_ITEM_TYPE_ETH:
+			/* Obtain the Ethernet spec and mask. */
+			ether_spec = (const struct rte_flow_item_eth *)
+					     flow_item->spec;
+			ether_mask = (const struct rte_flow_item_eth *)
+					     flow_item->mask;
+			if (!ether_spec || !ether_mask) {
+				rte_flow_error_set(error, EINVAL,
+						   HINIC3_FLOW_ERROR_TYPE_ITEM,
+						   flow_item,
+						   "NULL ETH spec/mask");
+				return -rte_errno;
+			}
+
+			/*
+			 * Source and destination MAC address masks must be
+			 * all zeros; traffic is filtered only on the
+			 * Ethernet type.
+			 */
+			if (!rte_is_zero_ether_addr(&ether_mask->src) ||
+			    (!rte_is_zero_ether_addr(&ether_mask->dst))) {
+				rte_flow_error_set(error, EINVAL,
+						   HINIC3_FLOW_ERROR_TYPE_ITEM,
+						   flow_item,
+						   "Invalid ether address mask");
+				return -rte_errno;
+			}
+
+			if ((ether_mask->type & UINT16_MAX) != UINT16_MAX) {
+				rte_flow_error_set(error, EINVAL,
+						   HINIC3_FLOW_ERROR_TYPE_ITEM,
+						   flow_item,
+						   "Invalid ethertype mask");
+				return -rte_errno;
+			}
+
+			filter->ethertype_filter.ether_type =
+				(u16)rte_be_to_cpu_16(ether_spec->type);
+
+			switch (filter->ethertype_filter.ether_type) {
+			case RTE_ETHER_TYPE_SLOW:
+			case RTE_ETHER_TYPE_ARP:
+			case RTE_ETHER_TYPE_RARP:
+			case RTE_ETHER_TYPE_LLDP:
+				break;
+
+			default:
+				rte_flow_error_set(error, EINVAL,
+						   HINIC3_FLOW_ERROR_TYPE_ITEM,
+						   flow_item,
+						   "Unsupported ether_type in"
+						   " control packet filter.");
+				return -rte_errno;
+			}
+			break;
+
+		default:
+			break;
+		}
+	}
+
+	return 0;
+}
+
+static int
+hinic3_flow_parse_ethertype_filter(struct rte_eth_dev *dev,
+				   const struct rte_flow_attr *attr,
+				   const struct rte_flow_item pattern[],
+				   const struct rte_flow_action actions[],
+				   struct rte_flow_error *error,
+				   struct hinic3_filter_t *filter)
+{
+	int ret;
+
+	ret = hinic3_flow_parse_ethertype_pattern(dev, pattern, error, filter);
+	if (ret)
+		return ret;
+
+	ret = hinic3_flow_parse_ethertype_action(dev, actions, error, filter);
+	if (ret)
+		return ret;
+
+	ret = hinic3_flow_parse_attr(attr, error);
+	if (ret)
+		return ret;
+
+	filter->filter_type = RTE_ETH_FILTER_ETHERTYPE;
+	return 0;
+}
+
+static int
+hinic3_flow_fdir_tunnel_ipv4(struct rte_flow_error *error,
+			     struct hinic3_filter_t *filter,
+			     const struct rte_flow_item *flow_item,
+			     enum hinic3_fdir_tunnel_mode tunnel_mode)
+{
+	const struct rte_flow_item_ipv4 *spec_ipv4, *mask_ipv4;
+
+	mask_ipv4 = (const struct rte_flow_item_ipv4 *)flow_item->mask;
+	spec_ipv4 = (const struct rte_flow_item_ipv4 *)flow_item->spec;
+
+	if (tunnel_mode == HINIC3_FDIR_TUNNEL_MODE_NORMAL) {
+		filter->fdir_filter.outer_ip_type = HINIC3_FDIR_IP_TYPE_IPV4;
+
+		if (!mask_ipv4 && !spec_ipv4)
+			return 0;
+
+		if (!mask_ipv4 || !spec_ipv4) {
+			rte_flow_error_set(error, EINVAL,
+					   HINIC3_FLOW_ERROR_TYPE_ITEM,
+					   flow_item,
+					   "Invalid fdir filter, vxlan outer "
+					   "ipv4 mask or spec");
+			return -rte_errno;
+		}
+
+		/*
+		 * Only src and dst addresses are supported; all other
+		 * fields must be masked.
+		 */
+		if (mask_ipv4->hdr.version_ihl ||
+		    mask_ipv4->hdr.type_of_service ||
+		    mask_ipv4->hdr.total_length || mask_ipv4->hdr.packet_id ||
+		    mask_ipv4->hdr.fragment_offset ||
+		    mask_ipv4->hdr.time_to_live ||
+		    mask_ipv4->hdr.next_proto_id ||
+		    mask_ipv4->hdr.hdr_checksum) {
+			rte_flow_error_set(error, EINVAL,
+					   HINIC3_FLOW_ERROR_TYPE_ITEM,
+					   flow_item,
+					   "Not supported by fdir filter, "
+					   "vxlan outer ipv4 only support "
+					   "src ip, dst ip");
+			return -rte_errno;
+		}
+
+		/* Set the filter information. */
+		filter->fdir_filter.key_mask.ipv4.src_ip =
+			rte_be_to_cpu_32(mask_ipv4->hdr.src_addr);
+		filter->fdir_filter.key_spec.ipv4.src_ip =
+			rte_be_to_cpu_32(spec_ipv4->hdr.src_addr);
+		filter->fdir_filter.key_mask.ipv4.dst_ip =
+			rte_be_to_cpu_32(mask_ipv4->hdr.dst_addr);
+		filter->fdir_filter.key_spec.ipv4.dst_ip =
+			rte_be_to_cpu_32(spec_ipv4->hdr.dst_addr);
+	} else {
+		filter->fdir_filter.ip_type = HINIC3_FDIR_IP_TYPE_IPV4;
+
+		if (!mask_ipv4 && !spec_ipv4)
+			return 0;
+
+		if (!mask_ipv4 || !spec_ipv4) {
+			rte_flow_error_set(error, EINVAL,
+					   HINIC3_FLOW_ERROR_TYPE_ITEM,
+					   flow_item,
+					   "Invalid fdir filter, vxlan inner "
+					   "ipv4 mask or spec");
+			return -rte_errno;
+		}
+
+		/*
+		 * Only src address, dst address and IP proto are supported;
+		 * all other fields must be masked.
+		 */
+		if (mask_ipv4->hdr.version_ihl ||
+		    mask_ipv4->hdr.type_of_service ||
+		    mask_ipv4->hdr.total_length || mask_ipv4->hdr.packet_id ||
+		    mask_ipv4->hdr.fragment_offset ||
+		    mask_ipv4->hdr.time_to_live ||
+		    mask_ipv4->hdr.hdr_checksum) {
+			rte_flow_error_set(error, EINVAL,
+					   HINIC3_FLOW_ERROR_TYPE_ITEM,
+					   flow_item,
+					   "Not supported by fdir filter, "
+					   "vxlan inner ipv4 only support "
+					   "src ip, dst ip, proto");
+			return -rte_errno;
+		}
+
+		/* Set the filter information. */
+		filter->fdir_filter.key_mask.inner_ipv4.src_ip =
+			rte_be_to_cpu_32(mask_ipv4->hdr.src_addr);
+		filter->fdir_filter.key_spec.inner_ipv4.src_ip =
+			rte_be_to_cpu_32(spec_ipv4->hdr.src_addr);
+		filter->fdir_filter.key_mask.inner_ipv4.dst_ip =
+			rte_be_to_cpu_32(mask_ipv4->hdr.dst_addr);
+		filter->fdir_filter.key_spec.inner_ipv4.dst_ip =
+			rte_be_to_cpu_32(spec_ipv4->hdr.dst_addr);
+		filter->fdir_filter.key_mask.proto =
+			mask_ipv4->hdr.next_proto_id;
+		filter->fdir_filter.key_spec.proto =
+			spec_ipv4->hdr.next_proto_id;
+	}
+	return 0;
+}
+
+static int
+hinic3_flow_fdir_tunnel_ipv6(struct rte_flow_error *error,
+			     struct hinic3_filter_t *filter,
+			     const struct rte_flow_item *flow_item,
+			     enum hinic3_fdir_tunnel_mode tunnel_mode)
+{
+	const struct rte_flow_item_ipv6 *spec_ipv6, *mask_ipv6;
+
+	mask_ipv6 = (const struct rte_flow_item_ipv6 *)flow_item->mask;
+	spec_ipv6 = (const struct rte_flow_item_ipv6 *)flow_item->spec;
+
+	if (tunnel_mode == HINIC3_FDIR_TUNNEL_MODE_NORMAL) {
+		filter->fdir_filter.outer_ip_type = HINIC3_FDIR_IP_TYPE_IPV6;
+
+		if (!mask_ipv6 && !spec_ipv6)
+			return 0;
+
+		if (!mask_ipv6 || !spec_ipv6) {
+			rte_flow_error_set(error, EINVAL,
+				HINIC3_FLOW_ERROR_TYPE_ITEM, flow_item,
+				"Invalid fdir filter ipv6 mask or spec");
+			return -rte_errno;
+		}
+
+		/* Only src and dst addresses are supported. */
+		if (mask_ipv6->hdr.vtc_flow || mask_ipv6->hdr.payload_len ||
+		    mask_ipv6->hdr.hop_limits || mask_ipv6->hdr.proto) {
+			rte_flow_error_set(error, EINVAL,
+				HINIC3_FLOW_ERROR_TYPE_ITEM, flow_item,
+				"Not supported by fdir filter, vxlan outer "
+				"ipv6 only support src ip, dst ip");
+			return -rte_errno;
+		}
+
+		net_addr_to_host(filter->fdir_filter.key_mask.ipv6.src_ip,
+				 (const uint32_t *)mask_ipv6->hdr.src_addr.a, 4);
+		net_addr_to_host(filter->fdir_filter.key_spec.ipv6.src_ip,
+				 (const uint32_t *)spec_ipv6->hdr.src_addr.a, 4);
+		net_addr_to_host(filter->fdir_filter.key_mask.ipv6.dst_ip,
+				 (const uint32_t *)mask_ipv6->hdr.dst_addr.a, 4);
+		net_addr_to_host(filter->fdir_filter.key_spec.ipv6.dst_ip,
+				 (const uint32_t *)spec_ipv6->hdr.dst_addr.a, 4);
+	} else {
+		filter->fdir_filter.ip_type = HINIC3_FDIR_IP_TYPE_IPV6;
+
+		if (!mask_ipv6 && !spec_ipv6)
+			return 0;
+
+		if (!mask_ipv6 || !spec_ipv6) {
+			rte_flow_error_set(error, EINVAL,
+				HINIC3_FLOW_ERROR_TYPE_ITEM, flow_item,
+				"Invalid fdir filter ipv6 mask or spec");
+			return -rte_errno;
+		}
+
+		/* Only support dst addresses, src addresses, proto. */
+		if (mask_ipv6->hdr.vtc_flow || mask_ipv6->hdr.payload_len ||
+		    mask_ipv6->hdr.hop_limits) {
+			rte_flow_error_set(error, EINVAL,
+				HINIC3_FLOW_ERROR_TYPE_ITEM, flow_item,
+				"Not supported by fdir filter, ipv6 only "
+				"support src ip, dst ip, proto");
+			return -rte_errno;
+		}
+
+		net_addr_to_host(filter->fdir_filter.key_mask.inner_ipv6.src_ip,
+				 (const uint32_t *)mask_ipv6->hdr.src_addr.a, 4);
+		net_addr_to_host(filter->fdir_filter.key_spec.inner_ipv6.src_ip,
+				 (const uint32_t *)spec_ipv6->hdr.src_addr.a, 4);
+		net_addr_to_host(filter->fdir_filter.key_mask.inner_ipv6.dst_ip,
+				 (const uint32_t *)mask_ipv6->hdr.dst_addr.a, 4);
+		net_addr_to_host(filter->fdir_filter.key_spec.inner_ipv6.dst_ip,
+				 (const uint32_t *)spec_ipv6->hdr.dst_addr.a, 4);
+
+		filter->fdir_filter.key_mask.proto = mask_ipv6->hdr.proto;
+		filter->fdir_filter.key_spec.proto = spec_ipv6->hdr.proto;
+	}
+
+	return 0;
+}
+
+static int
+hinic3_flow_fdir_tunnel_tcp(struct rte_flow_error *error,
+			    struct hinic3_filter_t *filter,
+			    enum hinic3_fdir_tunnel_mode tunnel_mode,
+			    const struct rte_flow_item *flow_item)
+{
+	const struct rte_flow_item_tcp *spec_tcp, *mask_tcp;
+
+	if (tunnel_mode == HINIC3_FDIR_TUNNEL_MODE_NORMAL) {
+		rte_flow_error_set(error, EINVAL, HINIC3_FLOW_ERROR_TYPE_ITEM,
+				   flow_item,
+				   "Not supported by fdir filter, vxlan only "
+				   "support inner tcp");
+		return -rte_errno;
+	}
+
+	filter->fdir_filter.key_mask.proto = HINIC3_UINT8_MAX;
+	filter->fdir_filter.key_spec.proto = IPPROTO_TCP;
+
+	mask_tcp = (const struct rte_flow_item_tcp *)flow_item->mask;
+	spec_tcp = (const struct rte_flow_item_tcp *)flow_item->spec;
+	if (!mask_tcp && !spec_tcp)
+		return 0;
+	if (!mask_tcp || !spec_tcp) {
+		rte_flow_error_set(error, EINVAL, HINIC3_FLOW_ERROR_TYPE_ITEM,
+				   flow_item,
+				   "Invalid fdir filter tcp mask or spec");
+		return -rte_errno;
+	}
+
+	/* Only src and dst ports are supported; other fields must be masked. */
+	if (mask_tcp->hdr.sent_seq || mask_tcp->hdr.recv_ack ||
+	    mask_tcp->hdr.data_off || mask_tcp->hdr.rx_win ||
+	    mask_tcp->hdr.tcp_flags || mask_tcp->hdr.cksum ||
+	    mask_tcp->hdr.tcp_urp) {
+		rte_flow_error_set(error, EINVAL, HINIC3_FLOW_ERROR_TYPE_ITEM,
+				   flow_item,
+				   "Not supported by fdir filter, vxlan inner "
+				   "tcp only support src port, dst port");
+		return -rte_errno;
+	}
+
+	/* Set the filter information. */
+	filter->fdir_filter.key_mask.src_port =
+		(u16)rte_be_to_cpu_16(mask_tcp->hdr.src_port);
+	filter->fdir_filter.key_spec.src_port =
+		(u16)rte_be_to_cpu_16(spec_tcp->hdr.src_port);
+	filter->fdir_filter.key_mask.dst_port =
+		(u16)rte_be_to_cpu_16(mask_tcp->hdr.dst_port);
+	filter->fdir_filter.key_spec.dst_port =
+		(u16)rte_be_to_cpu_16(spec_tcp->hdr.dst_port);
+	return 0;
+}
+
+static int
+hinic3_flow_fdir_tunnel_udp(struct rte_flow_error *error,
+			    struct hinic3_filter_t *filter,
+			    enum hinic3_fdir_tunnel_mode tunnel_mode,
+			    const struct rte_flow_item *flow_item)
+{
+	const struct rte_flow_item_udp *spec_udp, *mask_udp;
+
+	mask_udp = (const struct rte_flow_item_udp *)flow_item->mask;
+	spec_udp = (const struct rte_flow_item_udp *)flow_item->spec;
+
+	if (tunnel_mode == HINIC3_FDIR_TUNNEL_MODE_NORMAL) {
+		/*
+		 * Outer UDP only identifies the tunnel protocol, so its
+		 * spec and mask must be NULL.
+		 */
+		if (flow_item->spec || flow_item->mask) {
+			rte_flow_error_set(error, EINVAL,
+					   HINIC3_FLOW_ERROR_TYPE_ITEM,
+					   flow_item, "Invalid UDP item");
+			return -rte_errno;
+		}
+	} else {
+		filter->fdir_filter.key_mask.proto = HINIC3_UINT8_MAX;
+		filter->fdir_filter.key_spec.proto = IPPROTO_UDP;
+		if (!mask_udp && !spec_udp)
+			return 0;
+
+		if (!mask_udp || !spec_udp) {
+			rte_flow_error_set(error, EINVAL,
+					   HINIC3_FLOW_ERROR_TYPE_ITEM,
+					   flow_item,
+					   "Invalid fdir filter vxlan inner "
+					   "udp mask or spec");
+			return -rte_errno;
+		}
+
+		/* Set the filter information. */
+		filter->fdir_filter.key_mask.src_port =
+			(u16)rte_be_to_cpu_16(mask_udp->hdr.src_port);
+		filter->fdir_filter.key_spec.src_port =
+			(u16)rte_be_to_cpu_16(spec_udp->hdr.src_port);
+		filter->fdir_filter.key_mask.dst_port =
+			(u16)rte_be_to_cpu_16(mask_udp->hdr.dst_port);
+		filter->fdir_filter.key_spec.dst_port =
+			(u16)rte_be_to_cpu_16(spec_udp->hdr.dst_port);
+	}
+
+	return 0;
+}
+
+static int
+hinic3_flow_fdir_vxlan(struct rte_flow_error *error,
+		       struct hinic3_filter_t *filter,
+		       const struct rte_flow_item *flow_item)
+{
+	const struct rte_flow_item_vxlan *spec_vxlan, *mask_vxlan;
+	uint32_t vxlan_vni_id = 0;
+
+	spec_vxlan = (const struct rte_flow_item_vxlan *)flow_item->spec;
+	mask_vxlan = (const struct rte_flow_item_vxlan *)flow_item->mask;
+
+	filter->fdir_filter.tunnel_type = HINIC3_FDIR_TUNNEL_MODE_VXLAN;
+
+	if (!spec_vxlan && !mask_vxlan) {
+		return 0;
+	} else if (filter->fdir_filter.outer_ip_type == HINIC3_FDIR_IP_TYPE_IPV6) {
+		rte_flow_error_set(error, EINVAL, HINIC3_FLOW_ERROR_TYPE_ITEM,
+				   flow_item,
+				   "Invalid fdir filter vxlan mask or spec, "
+				   "ipv6 vxlan does not support matching vni");
+		return -rte_errno;
+	}
+
+	if (!spec_vxlan || !mask_vxlan) {
+		rte_flow_error_set(error, EINVAL, HINIC3_FLOW_ERROR_TYPE_ITEM,
+				   flow_item,
+				   "Invalid fdir filter vxlan mask or spec");
+		return -rte_errno;
+	}
+
+	rte_memcpy(((uint8_t *)&vxlan_vni_id + 1), spec_vxlan->vni, 3);
+	filter->fdir_filter.key_mask.tunnel.tunnel_id =
+		rte_be_to_cpu_32(vxlan_vni_id);
+	return 0;
+}
+
+static int
+hinic3_flow_parse_fdir_vxlan_pattern(__rte_unused struct rte_eth_dev *dev,
+				     const struct rte_flow_item *pattern,
+				     struct rte_flow_error *error,
+				     struct hinic3_filter_t *filter)
+{
+	const struct rte_flow_item *flow_item = pattern;
+	enum hinic3_fdir_tunnel_mode tunnel_mode =
+		HINIC3_FDIR_TUNNEL_MODE_NORMAL;
+	enum rte_flow_item_type type;
+	int err;
+
+	/* Inner and outer IP types are set to any by default. */
+	filter->fdir_filter.ip_type = HINIC3_FDIR_IP_TYPE_ANY;
+	filter->fdir_filter.outer_ip_type = HINIC3_FDIR_IP_TYPE_ANY;
+
+	for (; flow_item->type != HINIC3_FLOW_ITEM_TYPE_END; flow_item++) {
+		if (flow_item->last) {
+			rte_flow_error_set(error, EINVAL,
+					   HINIC3_FLOW_ERROR_TYPE_ITEM,
+					   flow_item, "Range is not supported");
+			return -rte_errno;
+		}
+
+		type = flow_item->type;
+		switch (type) {
+		case HINIC3_FLOW_ITEM_TYPE_ETH:
+			/* All should be masked. */
+			if (flow_item->spec || flow_item->mask) {
+				rte_flow_error_set(error, EINVAL,
+						   HINIC3_FLOW_ERROR_TYPE_ITEM,
+						   flow_item,
+						   "Not supported by fdir "
+						   "filter, mac is not supported");
+				return -rte_errno;
+			}
+			break;
+
+		case HINIC3_FLOW_ITEM_TYPE_IPV4:
+			err = hinic3_flow_fdir_tunnel_ipv4(error,
+				filter, flow_item, tunnel_mode);
+			if (err != 0)
+				return -rte_errno;
+			break;
+
+		case HINIC3_FLOW_ITEM_TYPE_IPV6:
+			err = hinic3_flow_fdir_tunnel_ipv6(error,
+				filter, flow_item, tunnel_mode);
+			if (err != 0)
+				return -rte_errno;
+			break;
+
+		case HINIC3_FLOW_ITEM_TYPE_TCP:
+			err = hinic3_flow_fdir_tunnel_tcp(error,
+				filter, tunnel_mode, flow_item);
+			if (err != 0)
+				return -rte_errno;
+			break;
+
+		case HINIC3_FLOW_ITEM_TYPE_UDP:
+			err = hinic3_flow_fdir_tunnel_udp(error,
+				filter, tunnel_mode, flow_item);
+			if (err != 0)
+				return -rte_errno;
+			break;
+
+		case HINIC3_FLOW_ITEM_TYPE_VXLAN:
+			err = hinic3_flow_fdir_vxlan(error, filter, flow_item);
+			if (err != 0)
+				return -rte_errno;
+			tunnel_mode = HINIC3_FDIR_TUNNEL_MODE_VXLAN;
+			break;
+
+		default:
+			break;
+		}
+	}
+
+	return 0;
+}
+
+/**
+ * Parse a VXLAN flow rule into a flow filter.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] attr
+ * Indicates the attribute of a flow rule.
+ * @param[in] pattern
+ * Indicates the pattern or matching condition of a traffic rule.
+ * @param[in] actions
+ * Indicates the action to be taken on the matched traffic.
+ * @param[out] error
+ * Structure that contains error information, such as error code and error
+ * description.
+ * @param[out] filter
+ * Filter information, used to store and manipulate packet filtering rules.
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_flow_parse_fdir_vxlan_filter(struct rte_eth_dev *dev,
+				    const struct rte_flow_attr *attr,
+				    const struct rte_flow_item pattern[],
+				    const struct rte_flow_action actions[],
+				    struct rte_flow_error *error,
+				    struct hinic3_filter_t *filter)
+{
+	int ret;
+
+	ret = hinic3_flow_parse_fdir_vxlan_pattern(dev, pattern, error, filter);
+	if (ret)
+		return ret;
+
+	ret = hinic3_flow_parse_action(dev, actions, error, filter);
+	if (ret)
+		return ret;
+
+	ret = hinic3_flow_parse_attr(attr, error);
+	if (ret)
+		return ret;
+
+	filter->filter_type = RTE_ETH_FILTER_FDIR;
+
+	return 0;
+}
+
+/**
+ * Parse patterns and actions of network traffic.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] attr
+ * Indicates the attribute of a flow rule.
+ * @param[in] pattern
+ * Indicates the pattern or matching condition of a traffic rule.
+ * @param[in] actions
+ * Indicates the action to be taken on the matched traffic.
+ * @param[out] error
+ * Structure that contains error information, such as error code and error
+ * description.
+ * @param[out] filter
+ * Filter information, used to store and manipulate packet filtering rules.
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_flow_parse(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
+		  const struct rte_flow_item pattern[],
+		  const struct rte_flow_action actions[],
+		  struct rte_flow_error *error, struct hinic3_filter_t *filter)
+{
+	hinic3_parse_filter_t parse_filter;
+	uint32_t pattern_num = 0;
+	int ret = 0;
+	/* Check whether the parameter is valid. */
+	if (!pattern || !actions || !attr) {
+		rte_flow_error_set(error, EINVAL,
+				   HINIC3_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   "NULL param.");
+		return -rte_errno;
+	}
+
+	while ((pattern + pattern_num)->type != HINIC3_FLOW_ITEM_TYPE_END) {
+		pattern_num++;
+		if (pattern_num > HINIC3_FLOW_MAX_PATTERN_NUM) {
+			rte_flow_error_set(error, EINVAL,
+					   HINIC3_FLOW_ERROR_TYPE_ITEM_NUM,
+					   NULL,
+					   "Too many patterns.");
+			return -rte_errno;
+		}
+	}
+	/*
+	 * Find the parse function that matches the pattern; NULL is
+	 * returned if no function matches.
+	 */
+	parse_filter = hinic3_find_parse_filter_func(pattern);
+	if (!parse_filter) {
+		rte_flow_error_set(error, EINVAL, HINIC3_FLOW_ERROR_TYPE_ITEM,
+				   pattern, "Unsupported pattern");
+		return -rte_errno;
+	}
+	/* Parsing with filters. */
+	ret = parse_filter(dev, attr, pattern, actions, error, filter);
+
+	return ret;
+}
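The pattern walk above first counts items up to the END sentinel and rejects overly long patterns before dispatching to a type-specific parser. A minimal, self-contained sketch of that validation step (names such as `count_pattern_items` and `MAX_PATTERN_NUM` are illustrative, not part of the driver's API):

```c
#include <assert.h>

/*
 * Sketch of the pattern-length check in hinic3_flow_parse(): count
 * items until the END sentinel, rejecting patterns that exceed the
 * limit. All names here are hypothetical stand-ins.
 */
enum item_type { ITEM_END = 0, ITEM_ETH, ITEM_IPV4, ITEM_UDP };

#define MAX_PATTERN_NUM 4

struct flow_item {
	enum item_type type;
};

/* Return the item count, or -1 when the pattern exceeds the limit. */
static int
count_pattern_items(const struct flow_item pattern[])
{
	int n = 0;

	while (pattern[n].type != ITEM_END) {
		n++;
		if (n > MAX_PATTERN_NUM)
			return -1; /* too many pattern items */
	}
	return n;
}
```

Counting before dispatch keeps the per-type parsers free of bounds checks, since they only ever see a pattern of bounded length.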
+
+/**
+ * Check whether the traffic rule provided by the user is valid.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] attr
+ * Indicates the attribute of a flow rule.
+ * @param[in] pattern
+ * Indicates the pattern or matching condition of a traffic rule.
+ * @param[in] actions
+ * Indicates the action to be taken on the matched traffic.
+ * @param[out] error
+ * Structure that contains error information, such as error code and error
+ * description.
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_flow_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
+		     const struct rte_flow_item pattern[],
+		     const struct rte_flow_action actions[],
+		     struct rte_flow_error *error)
+{
+	struct hinic3_filter_t filter_rules = {0};
+
+	return hinic3_flow_parse(dev, attr, pattern, actions, error,
+				 &filter_rules);
+}
+
+/**
+ * Create a flow item.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[in] attr
+ * Indicates the attribute of a flow rule.
+ * @param[in] pattern
+ * Indicates the pattern or matching condition of a traffic rule.
+ * @param[in] actions
+ * Indicates the action to be taken on the matched traffic.
+ * @param[out] error
+ * Structure that contains error information, such as error code and error
+ * description.
+ * @return
+ * If the operation is successful, the created flow is returned. Otherwise, NULL
+ * is returned.
+ *
+ */
+static struct rte_flow *
+hinic3_flow_create(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
+		   const struct rte_flow_item pattern[],
+		   const struct rte_flow_action actions[],
+		   struct rte_flow_error *error)
+{
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+	struct hinic3_filter_t *filter_rules = NULL;
+	struct rte_flow *flow = NULL;
+	int ret;
+
+	filter_rules =
+		rte_zmalloc("filter_rules", sizeof(struct hinic3_filter_t), 0);
+	if (!filter_rules) {
+		rte_flow_error_set(error, EINVAL, HINIC3_FLOW_ERROR_TYPE_HANDLE,
+				   NULL,
+				   "Failed to allocate filter rules memory.");
+		return NULL;
+	}
+
+	flow = rte_zmalloc("hinic3_rte_flow", sizeof(struct rte_flow), 0);
+	if (!flow) {
+		rte_flow_error_set(error, EINVAL, HINIC3_FLOW_ERROR_TYPE_HANDLE,
+				   NULL, "Failed to allocate flow memory.");
+		rte_free(filter_rules);
+		return NULL;
+	}
+	/* Parses the flow rule to be created and generates a filter. */
+	ret = hinic3_flow_parse(dev, attr, pattern, actions, error,
+				filter_rules);
+	if (ret < 0)
+		goto free_flow;
+
+	switch (filter_rules->filter_type) {
+	case RTE_ETH_FILTER_ETHERTYPE:
+		ret = hinic3_flow_add_del_ethertype_filter(dev,
+			&filter_rules->ethertype_filter, true);
+		if (ret) {
+			rte_flow_error_set(error, EINVAL,
+					   HINIC3_FLOW_ERROR_TYPE_HANDLE, NULL,
+					   "Create ethertype filter failed.");
+			goto free_flow;
+		}
+
+		flow->rule = filter_rules;
+		flow->filter_type = filter_rules->filter_type;
+		TAILQ_INSERT_TAIL(&nic_dev->filter_ethertype_list, flow, node);
+		break;
+
+	case RTE_ETH_FILTER_FDIR:
+		ret = hinic3_flow_add_del_fdir_filter(dev,
+			&filter_rules->fdir_filter, true);
+		if (ret) {
+			rte_flow_error_set(error, EINVAL,
+					   HINIC3_FLOW_ERROR_TYPE_HANDLE, NULL,
+					   "Create fdir filter failed.");
+			goto free_flow;
+		}
+
+		flow->rule = filter_rules;
+		flow->filter_type = filter_rules->filter_type;
+		TAILQ_INSERT_TAIL(&nic_dev->filter_fdir_rule_list, flow, node);
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "Filter type %d not supported",
+			    filter_rules->filter_type);
+		rte_flow_error_set(error, EINVAL, HINIC3_FLOW_ERROR_TYPE_HANDLE,
+				   NULL, "Unsupported filter type.");
+		goto free_flow;
+	}
+
+	return flow;
+
+free_flow:
+	rte_free(flow);
+	rte_free(filter_rules);
+
+	return NULL;
+}
+
+static int
+hinic3_flow_destroy(struct rte_eth_dev *dev, struct rte_flow *flow,
+		    struct rte_flow_error *error)
+{
+	int ret = -EINVAL;
+	enum rte_filter_type type;
+	struct hinic3_filter_t *rules = NULL;
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+
+	if (!flow) {
+		PMD_DRV_LOG(ERR, "Invalid flow parameter!");
+		return -EPERM;
+	}
+
+	type = flow->filter_type;
+	rules = (struct hinic3_filter_t *)flow->rule;
+	/* Perform operations based on the type. */
+	switch (type) {
+	case RTE_ETH_FILTER_ETHERTYPE:
+		ret = hinic3_flow_add_del_ethertype_filter(dev,
+			&rules->ethertype_filter, false);
+		if (!ret)
+			TAILQ_REMOVE(&nic_dev->filter_ethertype_list, flow,
+				     node);
+		break;
+
+	case RTE_ETH_FILTER_FDIR:
+		ret = hinic3_flow_add_del_fdir_filter(dev, &rules->fdir_filter,
+						      false);
+		if (!ret)
+			TAILQ_REMOVE(&nic_dev->filter_fdir_rule_list, flow,
+				     node);
+		break;
+	default:
+		PMD_DRV_LOG(WARNING, "Filter type %d not supported", type);
+		ret = -EINVAL;
+		break;
+	}
+
+	/* Deleted successfully. Resources are released. */
+	if (!ret) {
+		rte_free(rules);
+		rte_free(flow);
+	} else {
+		rte_flow_error_set(error, -ret, HINIC3_FLOW_ERROR_TYPE_HANDLE,
+				   NULL, "Failed to destroy flow.");
+	}
+
+	return ret;
+}
+
+/**
+ * Clear all fdir type flow rules on the network device.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_flow_flush_fdir_filter(struct rte_eth_dev *dev)
+{
+	int ret = 0;
+	struct hinic3_filter_t *filter_rules = NULL;
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+	struct rte_flow *flow;
+
+	while (true) {
+		flow = TAILQ_FIRST(&nic_dev->filter_fdir_rule_list);
+		if (flow == NULL)
+			break;
+		filter_rules = (struct hinic3_filter_t *)flow->rule;
+
+		/* Delete flow rules. */
+		ret = hinic3_flow_add_del_fdir_filter(dev,
+			&filter_rules->fdir_filter, false);
+
+		if (ret)
+			return ret;
+
+		TAILQ_REMOVE(&nic_dev->filter_fdir_rule_list, flow, node);
+		rte_free(filter_rules);
+		rte_free(flow);
+	}
+
+	return ret;
+}
+
+/**
+ * Clear all ether type flow rules on the network device.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_flow_flush_ethertype_filter(struct rte_eth_dev *dev)
+{
+	struct hinic3_filter_t *filter_rules = NULL;
+	struct hinic3_nic_dev *nic_dev = HINIC3_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
+	struct rte_flow *flow;
+	int ret = 0;
+
+	while (true) {
+		flow = TAILQ_FIRST(&nic_dev->filter_ethertype_list);
+		if (flow == NULL)
+			break;
+		filter_rules = (struct hinic3_filter_t *)flow->rule;
+
+		/* Delete flow rules. */
+		ret = hinic3_flow_add_del_ethertype_filter(dev,
+			&filter_rules->ethertype_filter, false);
+
+		if (ret)
+			return ret;
+
+		TAILQ_REMOVE(&nic_dev->filter_ethertype_list, flow, node);
+		rte_free(filter_rules);
+		rte_free(flow);
+	}
+
+	return ret;
+}
+
+/**
+ * Clear all flow rules on the network device.
+ *
+ * @param[in] dev
+ * Pointer to ethernet device structure.
+ * @param[out] error
+ * Structure that contains error information, such as error code and error
+ * description.
+ * @return
+ * 0 on success, non-zero on failure.
+ */
+static int
+hinic3_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *error)
+{
+	int ret;
+
+	ret = hinic3_flow_flush_fdir_filter(dev);
+	if (ret) {
+		rte_flow_error_set(error, -ret, HINIC3_FLOW_ERROR_TYPE_HANDLE,
+				   NULL, "Failed to flush fdir flows.");
+		return -rte_errno;
+	}
+
+	ret = hinic3_flow_flush_ethertype_filter(dev);
+	if (ret) {
+		rte_flow_error_set(error, -ret, HINIC3_FLOW_ERROR_TYPE_HANDLE,
+				   NULL, "Failed to flush ethertype flows.");
+		return -rte_errno;
+	}
+	return ret;
+}
+
+/* Structure for managing flow table operations. */
+const struct rte_flow_ops hinic3_flow_ops = {
+	.validate = hinic3_flow_validate,
+	.create = hinic3_flow_create,
+	.destroy = hinic3_flow_destroy,
+	.flush = hinic3_flow_flush,
+};
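Created flows are linked onto a per-type TAILQ (`filter_fdir_rule_list`, `filter_ethertype_list`), and flushing repeatedly pops the head until the list is empty, freeing both the rule and the flow wrapper. A simplified, self-contained sketch of that list management, using the same `sys/queue.h` TAILQ macros as the driver (the `flow_list_*` names are illustrative, not the driver's API):

```c
#include <stdlib.h>
#include <sys/queue.h>

/*
 * Hypothetical model of the driver's flow bookkeeping: each created
 * flow owns a heap-allocated rule and sits on a tail queue; flushing
 * drains the queue head-first and releases both allocations, as
 * hinic3_flow_flush_fdir_filter() does.
 */
struct flow_rule {
	int id;
};

struct flow_entry {
	TAILQ_ENTRY(flow_entry) node;
	struct flow_rule *rule;
};

TAILQ_HEAD(flow_list, flow_entry);

/* Allocate a flow and its rule, then append it to the list. */
static int
flow_list_add(struct flow_list *list, int id)
{
	struct flow_entry *flow = calloc(1, sizeof(*flow));

	if (flow == NULL)
		return -1;
	flow->rule = calloc(1, sizeof(*flow->rule));
	if (flow->rule == NULL) {
		free(flow);
		return -1;
	}
	flow->rule->id = id;
	TAILQ_INSERT_TAIL(list, flow, node);
	return 0;
}

/* Remove and free every entry; return the number of flows released. */
static int
flow_list_flush(struct flow_list *list)
{
	struct flow_entry *flow;
	int n = 0;

	while ((flow = TAILQ_FIRST(list)) != NULL) {
		TAILQ_REMOVE(list, flow, node);
		free(flow->rule);
		free(flow);
		n++;
	}
	return n;
}
```

Draining from the head with `TAILQ_FIRST`/`TAILQ_REMOVE` avoids iterator invalidation, which is why the driver's flush loops take the same shape rather than using `TAILQ_FOREACH`.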
diff --git a/drivers/net/hinic3/hinic3_flow.h b/drivers/net/hinic3/hinic3_flow.h
new file mode 100644
index 0000000000..9104337544
--- /dev/null
+++ b/drivers/net/hinic3/hinic3_flow.h
@@ -0,0 +1,80 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2025 Huawei Technologies Co., Ltd
+ */
+
+#ifndef _HINIC3_FLOW_H_
+#define _HINIC3_FLOW_H_
+
+#include <rte_flow.h>
+
+/* Flow item type. */
+#define HINIC3_FLOW_ITEM_TYPE_END                  RTE_FLOW_ITEM_TYPE_END
+#define HINIC3_FLOW_ITEM_TYPE_VOID                 RTE_FLOW_ITEM_TYPE_VOID
+#define HINIC3_FLOW_ITEM_TYPE_INVERT               RTE_FLOW_ITEM_TYPE_INVERT
+#define HINIC3_FLOW_ITEM_TYPE_ANY                  RTE_FLOW_ITEM_TYPE_ANY
+#define HINIC3_FLOW_ITEM_TYPE_PF                   RTE_FLOW_ITEM_TYPE_PF
+#define HINIC3_FLOW_ITEM_TYPE_VF                   RTE_FLOW_ITEM_TYPE_VF
+#define HINIC3_FLOW_ITEM_TYPE_PHY_PORT             RTE_FLOW_ITEM_TYPE_PHY_PORT
+#define HINIC3_FLOW_ITEM_TYPE_PORT_ID              RTE_FLOW_ITEM_TYPE_PORT_ID
+#define HINIC3_FLOW_ITEM_TYPE_RAW                  RTE_FLOW_ITEM_TYPE_RAW
+#define HINIC3_FLOW_ITEM_TYPE_ETH                  RTE_FLOW_ITEM_TYPE_ETH
+#define HINIC3_FLOW_ITEM_TYPE_VLAN                 RTE_FLOW_ITEM_TYPE_VLAN
+#define HINIC3_FLOW_ITEM_TYPE_IPV4                 RTE_FLOW_ITEM_TYPE_IPV4
+#define HINIC3_FLOW_ITEM_TYPE_IPV6                 RTE_FLOW_ITEM_TYPE_IPV6
+#define HINIC3_FLOW_ITEM_TYPE_ICMP                 RTE_FLOW_ITEM_TYPE_ICMP
+#define HINIC3_FLOW_ITEM_TYPE_UDP                  RTE_FLOW_ITEM_TYPE_UDP
+#define HINIC3_FLOW_ITEM_TYPE_TCP                  RTE_FLOW_ITEM_TYPE_TCP
+#define HINIC3_FLOW_ITEM_TYPE_SCTP                 RTE_FLOW_ITEM_TYPE_SCTP
+#define HINIC3_FLOW_ITEM_TYPE_VXLAN                RTE_FLOW_ITEM_TYPE_VXLAN
+#define HINIC3_FLOW_ITEM_TYPE_E_TAG                RTE_FLOW_ITEM_TYPE_E_TAG
+#define HINIC3_FLOW_ITEM_TYPE_NVGRE                RTE_FLOW_ITEM_TYPE_NVGRE
+#define HINIC3_FLOW_ITEM_TYPE_MPLS                 RTE_FLOW_ITEM_TYPE_MPLS
+#define HINIC3_FLOW_ITEM_TYPE_GRE                  RTE_FLOW_ITEM_TYPE_GRE
+#define HINIC3_FLOW_ITEM_TYPE_FUZZY                RTE_FLOW_ITEM_TYPE_FUZZY
+#define HINIC3_FLOW_ITEM_TYPE_GTP                  RTE_FLOW_ITEM_TYPE_GTP
+#define HINIC3_FLOW_ITEM_TYPE_GTPC                 RTE_FLOW_ITEM_TYPE_GTPC
+#define HINIC3_FLOW_ITEM_TYPE_GTPU                 RTE_FLOW_ITEM_TYPE_GTPU
+#define HINIC3_FLOW_ITEM_TYPE_ESP                  RTE_FLOW_ITEM_TYPE_ESP
+#define HINIC3_FLOW_ITEM_TYPE_GENEVE               RTE_FLOW_ITEM_TYPE_GENEVE
+#define HINIC3_FLOW_ITEM_TYPE_VXLAN_GPE            RTE_FLOW_ITEM_TYPE_VXLAN_GPE
+#define HINIC3_FLOW_ITEM_TYPE_ARP_ETH_IPV4         RTE_FLOW_ITEM_TYPE_ARP_ETH_IPV4
+#define HINIC3_FLOW_ITEM_TYPE_IPV6_EXT             RTE_FLOW_ITEM_TYPE_IPV6_EXT
+#define HINIC3_FLOW_ITEM_TYPE_ICMP6                RTE_FLOW_ITEM_TYPE_ICMP6
+#define HINIC3_FLOW_ITEM_TYPE_ICMP6_ND_NS          RTE_FLOW_ITEM_TYPE_ICMP6_ND_NS
+#define HINIC3_FLOW_ITEM_TYPE_ICMP6_ND_NA          RTE_FLOW_ITEM_TYPE_ICMP6_ND_NA
+#define HINIC3_FLOW_ITEM_TYPE_ICMP6_ND_OPT         RTE_FLOW_ITEM_TYPE_ICMP6_ND_OPT
+#define HINIC3_FLOW_ITEM_TYPE_ICMP6_ND_OPT_SLA_ETH RTE_FLOW_ITEM_TYPE_ICMP6_ND_OPT_SLA_ETH
+#define HINIC3_FLOW_ITEM_TYPE_ICMP6_ND_OPT_TLA_ETH RTE_FLOW_ITEM_TYPE_ICMP6_ND_OPT_TLA_ETH
+#define HINIC3_FLOW_ITEM_TYPE_MARK                 RTE_FLOW_ITEM_TYPE_MARK
+#define HINIC3_FLOW_ITEM_TYPE_META                 RTE_FLOW_ITEM_TYPE_META
+#define HINIC3_FLOW_ITEM_TYPE_GRE_KEY              RTE_FLOW_ITEM_TYPE_GRE_KEY
+#define HINIC3_FLOW_ITEM_TYPE_GTP_PSC              RTE_FLOW_ITEM_TYPE_GTP_PSC
+#define HINIC3_FLOW_ITEM_TYPE_PPPOES               RTE_FLOW_ITEM_TYPE_PPPOES
+#define HINIC3_FLOW_ITEM_TYPE_PPPOED               RTE_FLOW_ITEM_TYPE_PPPOED
+#define HINIC3_FLOW_ITEM_TYPE_PPPOE_PROTO_ID       RTE_FLOW_ITEM_TYPE_PPPOE_PROTO_ID
+#define HINIC3_FLOW_ITEM_TYPE_NSH                  RTE_FLOW_ITEM_TYPE_NSH
+#define HINIC3_FLOW_ITEM_TYPE_IGMP                 RTE_FLOW_ITEM_TYPE_IGMP
+#define HINIC3_FLOW_ITEM_TYPE_AH                   RTE_FLOW_ITEM_TYPE_AH
+#define HINIC3_FLOW_ITEM_TYPE_HIGIG2               RTE_FLOW_ITEM_TYPE_HIGIG2
+#define HINIC3_FLOW_ITEM_TYPE_TAG                  RTE_FLOW_ITEM_TYPE_TAG
+
+/* Flow error type. */
+#define HINIC3_FLOW_ERROR_TYPE_NONE                RTE_FLOW_ERROR_TYPE_NONE
+#define HINIC3_FLOW_ERROR_TYPE_UNSPECIFIED         RTE_FLOW_ERROR_TYPE_UNSPECIFIED
+#define HINIC3_FLOW_ERROR_TYPE_HANDLE              RTE_FLOW_ERROR_TYPE_HANDLE
+#define HINIC3_FLOW_ERROR_TYPE_ATTR_GROUP          RTE_FLOW_ERROR_TYPE_ATTR_GROUP
+#define HINIC3_FLOW_ERROR_TYPE_ATTR_PRIORITY       RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY
+#define HINIC3_FLOW_ERROR_TYPE_ATTR_INGRESS        RTE_FLOW_ERROR_TYPE_ATTR_INGRESS
+#define HINIC3_FLOW_ERROR_TYPE_ATTR_EGRESS         RTE_FLOW_ERROR_TYPE_ATTR_EGRESS
+#define HINIC3_FLOW_ERROR_TYPE_ATTR_TRANSFER       RTE_FLOW_ERROR_TYPE_ATTR_TRANSFER
+#define HINIC3_FLOW_ERROR_TYPE_ATTR                RTE_FLOW_ERROR_TYPE_ATTR
+#define HINIC3_FLOW_ERROR_TYPE_ITEM_NUM            RTE_FLOW_ERROR_TYPE_ITEM_NUM
+#define HINIC3_FLOW_ERROR_TYPE_ITEM_SPEC           RTE_FLOW_ERROR_TYPE_ITEM_SPEC
+#define HINIC3_FLOW_ERROR_TYPE_ITEM_LAST           RTE_FLOW_ERROR_TYPE_ITEM_LAST
+#define HINIC3_FLOW_ERROR_TYPE_ITEM_MASK           RTE_FLOW_ERROR_TYPE_ITEM_MASK
+#define HINIC3_FLOW_ERROR_TYPE_ITEM                RTE_FLOW_ERROR_TYPE_ITEM
+#define HINIC3_FLOW_ERROR_TYPE_ACTION_NUM          RTE_FLOW_ERROR_TYPE_ACTION_NUM
+#define HINIC3_FLOW_ERROR_TYPE_ACTION_CONF         RTE_FLOW_ERROR_TYPE_ACTION_CONF
+#define HINIC3_FLOW_ERROR_TYPE_ACTION              RTE_FLOW_ERROR_TYPE_ACTION
+
+#endif /* _HINIC3_FLOW_H_ */
-- 
2.47.0.windows.2


^ permalink raw reply	[flat|nested] 16+ messages in thread

* [RFC 18/18] drivers/net: add hinic3 PMD build and doc files
  2025-04-18  7:02 [RFC 00/18] add hinic3 PMD driver Feifei Wang
                   ` (8 preceding siblings ...)
  2025-04-18  7:02 ` [RFC 17/18] net/hinic3: add FDIR flow control module Feifei Wang
@ 2025-04-18  7:02 ` Feifei Wang
  9 siblings, 0 replies; 16+ messages in thread
From: Feifei Wang @ 2025-04-18  7:02 UTC (permalink / raw)
  To: dev; +Cc: Yi Chen, Xin Wang, Feifei Wang

From: Yi Chen <chenyi221@huawei.com>

Add the meson.build files and feature documentation needed
to compile the hinic3 driver.

Signed-off-by: Yi Chen <chenyi221@huawei.com>
Reviewed-by: Xin Wang <wangxin679@h-partners.com>
Reviewed-by: Feifei Wang <wangfeifei40@huawei.com>
---
 doc/guides/nics/features/hinic3.ini |  9 ++++++
 drivers/net/hinic3/base/meson.build | 50 +++++++++++++++++++++++++++++
 drivers/net/hinic3/meson.build      | 44 +++++++++++++++++++++++++
 drivers/net/meson.build             |  1 +
 4 files changed, 104 insertions(+)
 create mode 100644 doc/guides/nics/features/hinic3.ini
 create mode 100644 drivers/net/hinic3/base/meson.build
 create mode 100644 drivers/net/hinic3/meson.build

diff --git a/doc/guides/nics/features/hinic3.ini b/doc/guides/nics/features/hinic3.ini
new file mode 100644
index 0000000000..8bafd49090
--- /dev/null
+++ b/doc/guides/nics/features/hinic3.ini
@@ -0,0 +1,9 @@
+;
+; Supported features of the 'hinic3' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Linux                = Y
+x86-64               = Y
+ARMv8                = Y
diff --git a/drivers/net/hinic3/base/meson.build b/drivers/net/hinic3/base/meson.build
new file mode 100644
index 0000000000..948f5efac2
--- /dev/null
+++ b/drivers/net/hinic3/base/meson.build
@@ -0,0 +1,50 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2025 Huawei Technologies Co., Ltd
+
+sources = files(
+    'hinic3_cmdq.c',
+    'hinic3_eqs.c',
+    'hinic3_hw_cfg.c',
+    'hinic3_hw_comm.c',
+    'hinic3_hwdev.c',
+    'hinic3_hwif.c',
+    'hinic3_mbox.c',
+    'hinic3_mgmt.c',
+    'hinic3_nic_cfg.c',
+    'hinic3_nic_event.c',
+    'hinic3_wq.c',
+)
+
+extra_flags = []
+
+# The driver runs only on 64-bit machines; suppress 32-bit pointer-cast warnings
+if not dpdk_conf.get('RTE_ARCH_64')
+    extra_flags += [
+        '-Wno-int-to-pointer-cast',
+        '-Wno-pointer-to-int-cast',
+    ]
+endif
+
+foreach flag: extra_flags
+    if cc.has_argument(flag)
+        cflags += flag
+    endif
+endforeach
+
+deps += ['hash']
+c_args = cflags
+includes += include_directories('../')
+
+base_lib = static_library(
+    'hinic3_base',
+    sources,
+    dependencies: [
+        static_rte_eal,
+        static_rte_ethdev,
+        static_rte_bus_pci,
+        static_rte_hash,
+    ],
+    include_directories: includes,
+    c_args: c_args,
+)
+base_objs = base_lib.extract_all_objects()
diff --git a/drivers/net/hinic3/meson.build b/drivers/net/hinic3/meson.build
new file mode 100644
index 0000000000..231e966b36
--- /dev/null
+++ b/drivers/net/hinic3/meson.build
@@ -0,0 +1,44 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2025 Huawei Technologies Co., Ltd
+
+if not is_linux
+    build = false
+    reason = 'only supported on Linux'
+    subdir_done()
+endif
+
+if (arch_subdir != 'x86' and arch_subdir != 'arm'
+        or not dpdk_conf.get('RTE_ARCH_64'))
+    build = false
+    reason = 'only supported on x86_64 and aarch64'
+    subdir_done()
+endif
+
+cflags += [
+    '-DHW_CONVERT_ENDIAN',
+    '-D__HINIC_HUAWEI_SECUREC__',
+    '-fPIC',
+    '-fstack-protector-strong',
+]
+
+subdir('base')
+subdir('mml')
+objs = [base_objs] + [mml_objs]
+
+sources = files(
+    'hinic3_ethdev.c',
+    'hinic3_fdir.c',
+    'hinic3_flow.c',
+    'hinic3_nic_io.c',
+    'hinic3_rx.c',
+    'hinic3_tx.c',
+)
+
+if arch_subdir == 'arm' and dpdk_conf.get('RTE_ARCH_64')
+    cflags += ['-D__ARM64_NEON__']
+else
+    cflags += ['-D__X86_64_SSE__']
+endif
+
+includes += include_directories('base')
+includes += include_directories('mml')
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index 460eb69e5b..b5442349d4 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -23,6 +23,7 @@ drivers = [
         'failsafe',
         'gve',
         'hinic',
+        'hinic3',
         'hns3',
         'intel/e1000',
         'intel/fm10k',
-- 
2.47.0.windows.2


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC 00/18] add hinic3 PMD driver
  2025-04-18  9:05 Feifei Wang
  2025-04-18 18:18 ` Stephen Hemminger
  2025-04-18 18:20 ` Stephen Hemminger
@ 2025-04-18 18:32 ` Stephen Hemminger
  2 siblings, 0 replies; 16+ messages in thread
From: Stephen Hemminger @ 2025-04-18 18:32 UTC (permalink / raw)
  To: Feifei Wang; +Cc: dev

On Fri, 18 Apr 2025 17:05:46 +0800
Feifei Wang <wff_light@vip.163.com> wrote:

> *** BLURB HERE ***
> The hinic3 PMD (**librte_net_hinic3**) provides poll mode driver support
> for 25Gbps/100Gbps/200Gbps Huawei SPx series Network Adapters.
> 
> Feifei Wang (3):
>   net/hinic3: add intro doc for hinic3
>   net/hinic3: add dev ops
>   net/hinic3: add Rx/Tx functions
> 
> Xin Wang (7):
>   net/hinic3: add basic header files
>   net/hinic3: add support for cmdq mechanism
>   net/hinic3: add NIC event module
>   net/hinic3: add context and work queue support
>   net/hinic3: add device initailization
>   net/hinic3: add MML and EEPROM access feature
>   net/hinic3: add RSS promiscuous ops
> 
> Yi Chen (8):
>   net/hinic3: add hardware interfaces of BAR operation
>   net/hinic3: add eq mechanism function code
>   net/hinic3: add mgmt module function code
>   net/hinic3: add module about hardware operation
>   net/hinic3: add a NIC business configuration module
>   net/hinic3: add a mailbox communication module
>   net/hinic3: add FDIR flow control module
>   drivers/net: add hinic3 PMD build and doc files
> 
>  .mailmap                                   |    4 +-
>  MAINTAINERS                                |    6 +
>  doc/guides/nics/features/hinic3.ini        |    9 +
>  doc/guides/nics/hinic3.rst                 |   52 +
>  doc/guides/nics/index.rst                  |    1 +
>  doc/guides/rel_notes/release_25_07.rst     |   32 +-
>  drivers/net/hinic3/base/hinic3_cmd.h       |  231 ++
>  drivers/net/hinic3/base/hinic3_cmdq.c      |  975 +++++
>  drivers/net/hinic3/base/hinic3_cmdq.h      |  230 ++
>  drivers/net/hinic3/base/hinic3_compat.h    |  266 ++
>  drivers/net/hinic3/base/hinic3_csr.h       |  108 +
>  drivers/net/hinic3/base/hinic3_eqs.c       |  719 ++++
>  drivers/net/hinic3/base/hinic3_eqs.h       |   98 +
>  drivers/net/hinic3/base/hinic3_hw_cfg.c    |  240 ++
>  drivers/net/hinic3/base/hinic3_hw_cfg.h    |  121 +
>  drivers/net/hinic3/base/hinic3_hw_comm.c   |  452 +++
>  drivers/net/hinic3/base/hinic3_hw_comm.h   |  366 ++
>  drivers/net/hinic3/base/hinic3_hwdev.c     |  573 +++
>  drivers/net/hinic3/base/hinic3_hwdev.h     |  177 +
>  drivers/net/hinic3/base/hinic3_hwif.c      |  779 ++++
>  drivers/net/hinic3/base/hinic3_hwif.h      |  142 +
>  drivers/net/hinic3/base/hinic3_mbox.c      | 1392 +++++++
>  drivers/net/hinic3/base/hinic3_mbox.h      |  199 +
>  drivers/net/hinic3/base/hinic3_mgmt.c      |  392 ++
>  drivers/net/hinic3/base/hinic3_mgmt.h      |  121 +
>  drivers/net/hinic3/base/hinic3_nic_cfg.c   | 1828 +++++++++
>  drivers/net/hinic3/base/hinic3_nic_cfg.h   | 1527 ++++++++
>  drivers/net/hinic3/base/hinic3_nic_event.c |  433 +++
>  drivers/net/hinic3/base/hinic3_nic_event.h |   39 +
>  drivers/net/hinic3/base/hinic3_wq.c        |  148 +
>  drivers/net/hinic3/base/hinic3_wq.h        |  109 +
>  drivers/net/hinic3/base/meson.build        |   50 +
>  drivers/net/hinic3/hinic3_ethdev.c         | 3866 ++++++++++++++++++++
>  drivers/net/hinic3/hinic3_ethdev.h         |  167 +
>  drivers/net/hinic3/hinic3_fdir.c           | 1394 +++++++
>  drivers/net/hinic3/hinic3_fdir.h           |  398 ++
>  drivers/net/hinic3/hinic3_flow.c           | 1700 +++++++++
>  drivers/net/hinic3/hinic3_flow.h           |   80 +
>  drivers/net/hinic3/hinic3_nic_io.c         |  827 +++++
>  drivers/net/hinic3/hinic3_nic_io.h         |  169 +
>  drivers/net/hinic3/hinic3_rx.c             | 1096 ++++++
>  drivers/net/hinic3/hinic3_rx.h             |  356 ++
>  drivers/net/hinic3/hinic3_tx.c             | 1028 ++++++
>  drivers/net/hinic3/hinic3_tx.h             |  315 ++
>  drivers/net/hinic3/meson.build             |   44 +
>  drivers/net/hinic3/mml/hinic3_dbg.c        |  171 +
>  drivers/net/hinic3/mml/hinic3_dbg.h        |  160 +
>  drivers/net/hinic3/mml/hinic3_mml_cmd.c    |  375 ++
>  drivers/net/hinic3/mml/hinic3_mml_cmd.h    |  131 +
>  drivers/net/hinic3/mml/hinic3_mml_ioctl.c  |  215 ++
>  drivers/net/hinic3/mml/hinic3_mml_lib.c    |  136 +
>  drivers/net/hinic3/mml/hinic3_mml_lib.h    |  275 ++
>  drivers/net/hinic3/mml/hinic3_mml_main.c   |  167 +
>  drivers/net/hinic3/mml/hinic3_mml_queue.c  |  749 ++++
>  drivers/net/hinic3/mml/hinic3_mml_queue.h  |  256 ++
>  drivers/net/hinic3/mml/meson.build         |   62 +
>  drivers/net/meson.build                    |    1 +
>  57 files changed, 25926 insertions(+), 31 deletions(-)
>  create mode 100644 doc/guides/nics/features/hinic3.ini
>  create mode 100644 doc/guides/nics/hinic3.rst
>  create mode 100644 drivers/net/hinic3/base/hinic3_cmd.h
>  create mode 100644 drivers/net/hinic3/base/hinic3_cmdq.c
>  create mode 100644 drivers/net/hinic3/base/hinic3_cmdq.h
>  create mode 100644 drivers/net/hinic3/base/hinic3_compat.h
>  create mode 100644 drivers/net/hinic3/base/hinic3_csr.h
>  create mode 100644 drivers/net/hinic3/base/hinic3_eqs.c
>  create mode 100644 drivers/net/hinic3/base/hinic3_eqs.h
>  create mode 100644 drivers/net/hinic3/base/hinic3_hw_cfg.c
>  create mode 100644 drivers/net/hinic3/base/hinic3_hw_cfg.h
>  create mode 100644 drivers/net/hinic3/base/hinic3_hw_comm.c
>  create mode 100644 drivers/net/hinic3/base/hinic3_hw_comm.h
>  create mode 100644 drivers/net/hinic3/base/hinic3_hwdev.c
>  create mode 100644 drivers/net/hinic3/base/hinic3_hwdev.h
>  create mode 100644 drivers/net/hinic3/base/hinic3_hwif.c
>  create mode 100644 drivers/net/hinic3/base/hinic3_hwif.h
>  create mode 100644 drivers/net/hinic3/base/hinic3_mbox.c
>  create mode 100644 drivers/net/hinic3/base/hinic3_mbox.h
>  create mode 100644 drivers/net/hinic3/base/hinic3_mgmt.c
>  create mode 100644 drivers/net/hinic3/base/hinic3_mgmt.h
>  create mode 100644 drivers/net/hinic3/base/hinic3_nic_cfg.c
>  create mode 100644 drivers/net/hinic3/base/hinic3_nic_cfg.h
>  create mode 100644 drivers/net/hinic3/base/hinic3_nic_event.c
>  create mode 100644 drivers/net/hinic3/base/hinic3_nic_event.h
>  create mode 100644 drivers/net/hinic3/base/hinic3_wq.c
>  create mode 100644 drivers/net/hinic3/base/hinic3_wq.h
>  create mode 100644 drivers/net/hinic3/base/meson.build
>  create mode 100644 drivers/net/hinic3/hinic3_ethdev.c
>  create mode 100644 drivers/net/hinic3/hinic3_ethdev.h
>  create mode 100644 drivers/net/hinic3/hinic3_fdir.c
>  create mode 100644 drivers/net/hinic3/hinic3_fdir.h
>  create mode 100644 drivers/net/hinic3/hinic3_flow.c
>  create mode 100644 drivers/net/hinic3/hinic3_flow.h
>  create mode 100644 drivers/net/hinic3/hinic3_nic_io.c
>  create mode 100644 drivers/net/hinic3/hinic3_nic_io.h
>  create mode 100644 drivers/net/hinic3/hinic3_rx.c
>  create mode 100644 drivers/net/hinic3/hinic3_rx.h
>  create mode 100644 drivers/net/hinic3/hinic3_tx.c
>  create mode 100644 drivers/net/hinic3/hinic3_tx.h
>  create mode 100644 drivers/net/hinic3/meson.build
>  create mode 100644 drivers/net/hinic3/mml/hinic3_dbg.c
>  create mode 100644 drivers/net/hinic3/mml/hinic3_dbg.h
>  create mode 100644 drivers/net/hinic3/mml/hinic3_mml_cmd.c
>  create mode 100644 drivers/net/hinic3/mml/hinic3_mml_cmd.h
>  create mode 100644 drivers/net/hinic3/mml/hinic3_mml_ioctl.c
>  create mode 100644 drivers/net/hinic3/mml/hinic3_mml_lib.c
>  create mode 100644 drivers/net/hinic3/mml/hinic3_mml_lib.h
>  create mode 100644 drivers/net/hinic3/mml/hinic3_mml_main.c
>  create mode 100644 drivers/net/hinic3/mml/hinic3_mml_queue.c
>  create mode 100644 drivers/net/hinic3/mml/hinic3_mml_queue.h
>  create mode 100644 drivers/net/hinic3/mml/meson.build
> 


Fix the build and other little things, and resubmit.
There is lots more here, don't expect it to be merged for several more revisions.

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC 00/18] add hinic3 PMD driver
  2025-04-18  9:05 Feifei Wang
  2025-04-18 18:18 ` Stephen Hemminger
@ 2025-04-18 18:20 ` Stephen Hemminger
  2025-04-18 18:32 ` Stephen Hemminger
  2 siblings, 0 replies; 16+ messages in thread
From: Stephen Hemminger @ 2025-04-18 18:20 UTC (permalink / raw)
  To: Feifei Wang; +Cc: dev

On Fri, 18 Apr 2025 17:05:46 +0800
Feifei Wang <wff_light@vip.163.com> wrote:

> *** BLURB HERE ***
> The hinic3 PMD (**librte_net_hinic3**) provides poll mode driver support
> for 25Gbps/100Gbps/200Gbps Huawei SPx series Network Adapters.
> 
> Feifei Wang (3):
>   net/hinic3: add intro doc for hinic3
>   net/hinic3: add dev ops
>   net/hinic3: add Rx/Tx functions
> 
> Xin Wang (7):
>   net/hinic3: add basic header files
>   net/hinic3: add support for cmdq mechanism
>   net/hinic3: add NIC event module
>   net/hinic3: add context and work queue support
>   net/hinic3: add device initailization
>   net/hinic3: add MML and EEPROM access feature
>   net/hinic3: add RSS promiscuous ops
> 
> Yi Chen (8):
>   net/hinic3: add hardware interfaces of BAR operation
>   net/hinic3: add eq mechanism function code
>   net/hinic3: add mgmt module function code
>   net/hinic3: add module about hardware operation
>   net/hinic3: add a NIC business configuration module
>   net/hinic3: add a mailbox communication module
>   net/hinic3: add FDIR flow control module
>   drivers/net: add hinic3 PMD build and doc files
> 
> [diffstat snipped]
> 


Clang is spotting a possible bug in the driver.

FAILED: drivers/net/hinic3/base/libspnic_base.a.p/hinic3_nic_cfg.c.o 
clang -Idrivers/net/hinic3/base/libspnic_base.a.p -Idrivers/net/hinic3/base -I../drivers/net/hinic3/base -Idrivers/net/hinic3 -I../drivers/net/hinic3 -Ilib/eal/common -I../lib/eal/common -I. -I.. -Iconfig -I../config -Ilib/eal/include -I../lib/eal/include -Ilib/eal/linux/include -I../lib/eal/linux/include -Ilib/eal/x86/include -I../lib/eal/x86/include -I../kernel/linux -Ilib/eal -I../lib/eal -Ilib/kvargs -I../lib/kvargs -Ilib/log -I../lib/log -Ilib/metrics -I../lib/metrics -Ilib/telemetry -I../lib/telemetry -Ilib/ethdev -I../lib/ethdev -Ilib/net -I../lib/net -Ilib/mbuf -I../lib/mbuf -Ilib/mempool -I../lib/mempool -Ilib/ring -I../lib/ring -Ilib/meter -I../lib/meter -Idrivers/bus/pci -I../drivers/bus/pci -I../drivers/bus/pci/linux -Ilib/pci -I../lib/pci -Ilib/hash -I../lib/hash -Ilib/rcu -I../lib/rcu -fcolor-diagnostics -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch -Wextra -Werror -std=c11 -O3 -include rte_config.h -Wvla -Wcast-qual -Wdeprecated -Wformat -Wformat-nonliteral -Wformat-security -Wmissing-declarations -Wmissing-prototypes -Wnested-externs -Wold-style-definition -Wpointer-arith -Wsign-compare -Wstrict-prototypes -Wundef -Wwrite-strings -Wno-missing-field-initializers -D_GNU_SOURCE -fPIC -march=native -mrtm -DALLOW_EXPERIMENTAL_API -DALLOW_INTERNAL_API -Wno-address-of-packed-member -DHW_CONVERT_ENDIAN -D__HINIC_HUAWEI_SECUREC__ -fPIC -fstack-protector-strong -MD -MQ drivers/net/hinic3/base/libspnic_base.a.p/hinic3_nic_cfg.c.o -MF drivers/net/hinic3/base/libspnic_base.a.p/hinic3_nic_cfg.c.o.d -o drivers/net/hinic3/base/libspnic_base.a.p/hinic3_nic_cfg.c.o -c ../drivers/net/hinic3/base/hinic3_nic_cfg.c
../drivers/net/hinic3/base/hinic3_nic_cfg.c:1237:34: error: expression does not compute the number of elements in this array; element type is 'u16' (aka 'unsigned short'), not 'u32' (aka 'unsigned int') [-Werror,-Wsizeof-array-div]
 1237 |         size = sizeof(indir_tbl->entry) / sizeof(u32);
      |                       ~~~~~~~~~~~~~~~~  ^
../drivers/net/hinic3/base/hinic3_nic_cfg.c:1237:34: note: place parentheses around the 'sizeof(u32)' expression to silence this warning
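To make the warning concrete: `sizeof(array) / sizeof(T)` only computes the element count when `T` is the array's actual element type. The sketch below is hypothetical (the log only shows that `indir_tbl->entry` is an array of `u16`; the struct name and array length here are invented for illustration). If the driver really wants the number of 32-bit words, clang's suggested parentheses silence the warning; if it wants the element count, it should divide by the element's own size:

```c
#include <stddef.h>
#include <stdint.h>

typedef uint16_t u16;
typedef uint32_t u32;

/* Hypothetical layout matching the diagnostic: 'entry' is an array of u16. */
struct nic_rss_indirect_tbl {
	u16 entry[256];
};

/* Buggy form from the log: 512 bytes / 4 = 128, which is neither the
 * element count (256) nor the byte size (512). Clang flags this with
 * -Wsizeof-array-div because u32 is not the element type. */
static size_t
indir_size_buggy(void)
{
	struct nic_rss_indirect_tbl *indir_tbl = NULL;

	return sizeof(indir_tbl->entry) / sizeof(u32); /* 128 */
}

/* Correct element count: divide by the size of one element. */
static size_t
indir_entries(void)
{
	struct nic_rss_indirect_tbl *indir_tbl = NULL;

	return sizeof(indir_tbl->entry) / sizeof(indir_tbl->entry[0]); /* 256 */
}
```

(`sizeof` does not evaluate its operand, so the null pointer is never dereferenced.) If 128 32-bit words is genuinely the intended quantity, writing `sizeof(indir_tbl->entry) / (sizeof(u32))` documents that and silences the warning.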


And then there are many other buffer-overrun bugs:

Build Failed #3:
OS: AzureLinux3.0-64
Target: x86_64-native-linuxapp-gcc
FAILED: drivers/net/hinic3/base/libspnic_base.a.p/hinic3_mbox.c.o 
gcc -Idrivers/net/hinic3/base/libspnic_base.a.p -Idrivers/net/hinic3/base -I../drivers/net/hinic3/base -Idrivers/net/hinic3 -I../drivers/net/hinic3 -Ilib/eal/common -I../lib/eal/common -I. -I.. -Iconfig -I../config -Ilib/eal/include -I../lib/eal/include -Ilib/eal/linux/include -I../lib/eal/linux/include -Ilib/eal/x86/include -I../lib/eal/x86/include -I../kernel/linux -Ilib/eal -I../lib/eal -Ilib/kvargs -I../lib/kvargs -Ilib/log -I../lib/log -Ilib/metrics -I../lib/metrics -Ilib/telemetry -I../lib/telemetry -Ilib/ethdev -I../lib/ethdev -Ilib/net -I../lib/net -Ilib/mbuf -I../lib/mbuf -Ilib/mempool -I../lib/mempool -Ilib/ring -I../lib/ring -Ilib/meter -I../lib/meter -Idrivers/bus/pci -I../drivers/bus/pci -I../drivers/bus/pci/linux -Ilib/pci -I../lib/pci -Ilib/hash -I../lib/hash -Ilib/rcu -I../lib/rcu -fdiagnostics-color=always -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch -Wextra -Werror -std=c11 -O3 -include rte_config.h -Wvla -Wcast-qual -Wdeprecated -Wformat -Wformat-nonliteral -Wformat-security -Wmissing-declarations -Wmissing-prototypes -Wnested-externs -Wold-style-definition -Wpointer-arith -Wsign-compare -Wstrict-prototypes -Wundef -Wwrite-strings -Wno-packed-not-aligned -Wno-missing-field-initializers -D_GNU_SOURCE -fPIC -march=native -mrtm -DALLOW_EXPERIMENTAL_API -DALLOW_INTERNAL_API -Wno-format-truncation -Wno-address-of-packed-member -DHW_CONVERT_ENDIAN -D__HINIC_HUAWEI_SECUREC__ -fPIC -fstack-protector-strong -MD -MQ drivers/net/hinic3/base/libspnic_base.a.p/hinic3_mbox.c.o -MF drivers/net/hinic3/base/libspnic_base.a.p/hinic3_mbox.c.o.d -o drivers/net/hinic3/base/libspnic_base.a.p/hinic3_mbox.c.o -c ../drivers/net/hinic3/base/hinic3_mbox.c
In file included from /usr/lib/gcc/x86_64-pc-linux-gnu/13.2.0/include/immintrin.h:43,
                 from ../lib/eal/x86/include/rte_rtm.h:8,
                 from ../lib/eal/x86/include/rte_spinlock.h:9,
                 from ../lib/eal/x86/include/rte_rwlock.h:9,
                 from ../lib/eal/include/rte_eal_memconfig.h:10,
                 from ../lib/eal/include/rte_memory.h:21,
                 from ../lib/eal/include/rte_malloc.h:16,
                 from ../lib/ethdev/ethdev_pci.h:9,
                 from ../drivers/net/hinic3/base/hinic3_compat.h:14,
                 from ../drivers/net/hinic3/base/hinic3_mbox.c:5:
In function ‘_mm256_storeu_si256’,
    inlined from ‘rte_mov32’ at ../lib/eal/x86/include/rte_memcpy.h:128:2,
    inlined from ‘rte_mov64’ at ../lib/eal/x86/include/rte_memcpy.h:149:2,
    inlined from ‘rte_mov128’ at ../lib/eal/x86/include/rte_memcpy.h:160:2,
    inlined from ‘rte_memcpy_generic’ at ../lib/eal/x86/include/rte_memcpy.h:422:4,
    inlined from ‘rte_memcpy’ at ../lib/eal/x86/include/rte_memcpy.h:757:10,
    inlined from ‘mbox_copy_send_data’ at ../drivers/net/hinic3/base/hinic3_mbox.c:508:3,
    inlined from ‘send_mbox_seg’ at ../drivers/net/hinic3/base/hinic3_mbox.c:630:2,
    inlined from ‘send_mbox_to_func’ at ../drivers/net/hinic3/base/hinic3_mbox.c:777:9:
/usr/lib/gcc/x86_64-pc-linux-gnu/13.2.0/include/avxintrin.h:935:8: error: array subscript ‘__m256i_u[1]’ is partly outside array bounds of ‘u8[48]’ {aka ‘unsigned char[48]’} [-Werror=array-bounds=]
  935 |   *__P = __A;
      |   ~~~~~^~~~~
../drivers/net/hinic3/base/hinic3_mbox.c: In function ‘send_mbox_to_func’:
../drivers/net/hinic3/base/hinic3_mbox.c:504:12: note: at offset 32 into object ‘mbox_max_buf’ of size 48
  504 |         u8 mbox_max_buf[MBOX_SEG_LEN] = {0};
      |            ^~~~~~~~~~~~
In function ‘_mm256_storeu_si256’,
    inlined from ‘rte_mov32’ at ../lib/eal/x86/include/rte_memcpy.h:128:2,
    inlined from ‘rte_mov64’ at ../lib/eal/x86/include/rte_memcpy.h:148:2,
    inlined from ‘rte_mov128’ at ../lib/eal/x86/include/rte_memcpy.h:161:2,
    inlined from ‘rte_memcpy_generic’ at ../lib/eal/x86/include/rte_memcpy.h:422:4,
    inlined from ‘rte_memcpy’ at ../lib/eal/x86/include/rte_memcpy.h:757:10,
    inlined from ‘mbox_copy_send_data’ at ../drivers/net/hinic3/base/hinic3_mbox.c:508:3,
    inlined from ‘send_mbox_seg’ at ../drivers/net/hinic3/base/hinic3_mbox.c:630:2,
    inlined from ‘send_mbox_to_func’ at ../drivers/net/hinic3/base/hinic3_mbox.c:777:9:
/usr/lib/gcc/x86_64-pc-linux-gnu/13.2.0/include/avxintrin.h:935:8: error: array subscript 2 is outside array bounds of ‘u8[48]’ {aka ‘unsigned char[48]’} [-Werror=array-bounds=]
  935 |   *__P = __A;
      |   ~~~~~^~~~~
../drivers/net/hinic3/base/hinic3_mbox.c: In function ‘send_mbox_to_func’:
../drivers/net/hinic3/base/hinic3_mbox.c:504:12: note
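The root cause gcc is pointing at: on AVX2, DPDK's `rte_memcpy` stores in 32-byte chunks, so copying into a `u8[48]` stack buffer can legally touch bytes 32..63, past the object. One common remedy is to pad the staging buffer up to a 32-byte multiple. The sketch below is a hypothetical simplification (the real `mbox_copy_send_data` signature and segment handling are not shown in the log), using plain `memcpy` to stand in for the copy:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define MBOX_SEG_LEN		48
/* Pad the staging buffer to the next multiple of 32 so a vectorized copy
 * that rounds its tail store up to a full 32-byte chunk stays in bounds. */
#define MBOX_SEG_LEN_ALIGNED	64

/* Hypothetical sketch of a bounds-safe segment copy: clamp the caller's
 * length to one segment, stage it in a zero-padded buffer, and hand the
 * fixed-size padded buffer to the (possibly vectorized) copy. */
static void
mbox_copy_send_data(uint8_t dst[MBOX_SEG_LEN_ALIGNED],
		    const void *src, size_t len)
{
	uint8_t mbox_max_buf[MBOX_SEG_LEN_ALIGNED] = {0};

	if (len > MBOX_SEG_LEN)
		len = MBOX_SEG_LEN;	/* clamp to one mailbox segment */
	memcpy(mbox_max_buf, src, len);
	/* Fixed, 32-byte-multiple length: safe even for chunked stores. */
	memcpy(dst, mbox_max_buf, MBOX_SEG_LEN_ALIGNED);
}
```

The alternative is to keep the buffer at 48 bytes and avoid `rte_memcpy` for this path (plain `memcpy` never writes past `len`); either way the compiler can no longer prove an out-of-bounds store.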

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC 00/18] add hinic3 PMD driver
  2025-04-18  9:05 Feifei Wang
@ 2025-04-18 18:18 ` Stephen Hemminger
  2025-04-18 18:20 ` Stephen Hemminger
  2025-04-18 18:32 ` Stephen Hemminger
  2 siblings, 0 replies; 16+ messages in thread
From: Stephen Hemminger @ 2025-04-18 18:18 UTC (permalink / raw)
  To: Feifei Wang; +Cc: dev

On Fri, 18 Apr 2025 17:05:46 +0800
Feifei Wang <wff_light@vip.163.com> wrote:

> *** BLURB HERE ***
> The hinic3 PMD (**librte_net_hinic3**) provides poll mode driver support
> for 25Gbps/100Gbps/200Gbps Huawei SPx series Network Adapters.


You are supposed to remove the "*** BLURB HERE ***" when editing
the commit message.

^ permalink raw reply	[flat|nested] 16+ messages in thread

* [RFC 00/18] add hinic3 PMD driver
@ 2025-04-18  9:05 Feifei Wang
  2025-04-18 18:18 ` Stephen Hemminger
                   ` (2 more replies)
  0 siblings, 3 replies; 16+ messages in thread
From: Feifei Wang @ 2025-04-18  9:05 UTC (permalink / raw)
  To: dev

*** BLURB HERE ***
The hinic3 PMD (**librte_net_hinic3**) provides poll mode driver support
for 25Gbps/100Gbps/200Gbps Huawei SPx series Network Adapters.

Feifei Wang (3):
  net/hinic3: add intro doc for hinic3
  net/hinic3: add dev ops
  net/hinic3: add Rx/Tx functions

Xin Wang (7):
  net/hinic3: add basic header files
  net/hinic3: add support for cmdq mechanism
  net/hinic3: add NIC event module
  net/hinic3: add context and work queue support
  net/hinic3: add device initialization
  net/hinic3: add MML and EEPROM access feature
  net/hinic3: add RSS promiscuous ops

Yi Chen (8):
  net/hinic3: add hardware interfaces of BAR operation
  net/hinic3: add eq mechanism function code
  net/hinic3: add mgmt module function code
  net/hinic3: add module about hardware operation
  net/hinic3: add a NIC business configuration module
  net/hinic3: add a mailbox communication module
  net/hinic3: add FDIR flow control module
  drivers/net: add hinic3 PMD build and doc files

 .mailmap                                   |    4 +-
 MAINTAINERS                                |    6 +
 doc/guides/nics/features/hinic3.ini        |    9 +
 doc/guides/nics/hinic3.rst                 |   52 +
 doc/guides/nics/index.rst                  |    1 +
 doc/guides/rel_notes/release_25_07.rst     |   32 +-
 drivers/net/hinic3/base/hinic3_cmd.h       |  231 ++
 drivers/net/hinic3/base/hinic3_cmdq.c      |  975 +++++
 drivers/net/hinic3/base/hinic3_cmdq.h      |  230 ++
 drivers/net/hinic3/base/hinic3_compat.h    |  266 ++
 drivers/net/hinic3/base/hinic3_csr.h       |  108 +
 drivers/net/hinic3/base/hinic3_eqs.c       |  719 ++++
 drivers/net/hinic3/base/hinic3_eqs.h       |   98 +
 drivers/net/hinic3/base/hinic3_hw_cfg.c    |  240 ++
 drivers/net/hinic3/base/hinic3_hw_cfg.h    |  121 +
 drivers/net/hinic3/base/hinic3_hw_comm.c   |  452 +++
 drivers/net/hinic3/base/hinic3_hw_comm.h   |  366 ++
 drivers/net/hinic3/base/hinic3_hwdev.c     |  573 +++
 drivers/net/hinic3/base/hinic3_hwdev.h     |  177 +
 drivers/net/hinic3/base/hinic3_hwif.c      |  779 ++++
 drivers/net/hinic3/base/hinic3_hwif.h      |  142 +
 drivers/net/hinic3/base/hinic3_mbox.c      | 1392 +++++++
 drivers/net/hinic3/base/hinic3_mbox.h      |  199 +
 drivers/net/hinic3/base/hinic3_mgmt.c      |  392 ++
 drivers/net/hinic3/base/hinic3_mgmt.h      |  121 +
 drivers/net/hinic3/base/hinic3_nic_cfg.c   | 1828 +++++++++
 drivers/net/hinic3/base/hinic3_nic_cfg.h   | 1527 ++++++++
 drivers/net/hinic3/base/hinic3_nic_event.c |  433 +++
 drivers/net/hinic3/base/hinic3_nic_event.h |   39 +
 drivers/net/hinic3/base/hinic3_wq.c        |  148 +
 drivers/net/hinic3/base/hinic3_wq.h        |  109 +
 drivers/net/hinic3/base/meson.build        |   50 +
 drivers/net/hinic3/hinic3_ethdev.c         | 3866 ++++++++++++++++++++
 drivers/net/hinic3/hinic3_ethdev.h         |  167 +
 drivers/net/hinic3/hinic3_fdir.c           | 1394 +++++++
 drivers/net/hinic3/hinic3_fdir.h           |  398 ++
 drivers/net/hinic3/hinic3_flow.c           | 1700 +++++++++
 drivers/net/hinic3/hinic3_flow.h           |   80 +
 drivers/net/hinic3/hinic3_nic_io.c         |  827 +++++
 drivers/net/hinic3/hinic3_nic_io.h         |  169 +
 drivers/net/hinic3/hinic3_rx.c             | 1096 ++++++
 drivers/net/hinic3/hinic3_rx.h             |  356 ++
 drivers/net/hinic3/hinic3_tx.c             | 1028 ++++++
 drivers/net/hinic3/hinic3_tx.h             |  315 ++
 drivers/net/hinic3/meson.build             |   44 +
 drivers/net/hinic3/mml/hinic3_dbg.c        |  171 +
 drivers/net/hinic3/mml/hinic3_dbg.h        |  160 +
 drivers/net/hinic3/mml/hinic3_mml_cmd.c    |  375 ++
 drivers/net/hinic3/mml/hinic3_mml_cmd.h    |  131 +
 drivers/net/hinic3/mml/hinic3_mml_ioctl.c  |  215 ++
 drivers/net/hinic3/mml/hinic3_mml_lib.c    |  136 +
 drivers/net/hinic3/mml/hinic3_mml_lib.h    |  275 ++
 drivers/net/hinic3/mml/hinic3_mml_main.c   |  167 +
 drivers/net/hinic3/mml/hinic3_mml_queue.c  |  749 ++++
 drivers/net/hinic3/mml/hinic3_mml_queue.h  |  256 ++
 drivers/net/hinic3/mml/meson.build         |   62 +
 drivers/net/meson.build                    |    1 +
 57 files changed, 25926 insertions(+), 31 deletions(-)
 create mode 100644 doc/guides/nics/features/hinic3.ini
 create mode 100644 doc/guides/nics/hinic3.rst
 create mode 100644 drivers/net/hinic3/base/hinic3_cmd.h
 create mode 100644 drivers/net/hinic3/base/hinic3_cmdq.c
 create mode 100644 drivers/net/hinic3/base/hinic3_cmdq.h
 create mode 100644 drivers/net/hinic3/base/hinic3_compat.h
 create mode 100644 drivers/net/hinic3/base/hinic3_csr.h
 create mode 100644 drivers/net/hinic3/base/hinic3_eqs.c
 create mode 100644 drivers/net/hinic3/base/hinic3_eqs.h
 create mode 100644 drivers/net/hinic3/base/hinic3_hw_cfg.c
 create mode 100644 drivers/net/hinic3/base/hinic3_hw_cfg.h
 create mode 100644 drivers/net/hinic3/base/hinic3_hw_comm.c
 create mode 100644 drivers/net/hinic3/base/hinic3_hw_comm.h
 create mode 100644 drivers/net/hinic3/base/hinic3_hwdev.c
 create mode 100644 drivers/net/hinic3/base/hinic3_hwdev.h
 create mode 100644 drivers/net/hinic3/base/hinic3_hwif.c
 create mode 100644 drivers/net/hinic3/base/hinic3_hwif.h
 create mode 100644 drivers/net/hinic3/base/hinic3_mbox.c
 create mode 100644 drivers/net/hinic3/base/hinic3_mbox.h
 create mode 100644 drivers/net/hinic3/base/hinic3_mgmt.c
 create mode 100644 drivers/net/hinic3/base/hinic3_mgmt.h
 create mode 100644 drivers/net/hinic3/base/hinic3_nic_cfg.c
 create mode 100644 drivers/net/hinic3/base/hinic3_nic_cfg.h
 create mode 100644 drivers/net/hinic3/base/hinic3_nic_event.c
 create mode 100644 drivers/net/hinic3/base/hinic3_nic_event.h
 create mode 100644 drivers/net/hinic3/base/hinic3_wq.c
 create mode 100644 drivers/net/hinic3/base/hinic3_wq.h
 create mode 100644 drivers/net/hinic3/base/meson.build
 create mode 100644 drivers/net/hinic3/hinic3_ethdev.c
 create mode 100644 drivers/net/hinic3/hinic3_ethdev.h
 create mode 100644 drivers/net/hinic3/hinic3_fdir.c
 create mode 100644 drivers/net/hinic3/hinic3_fdir.h
 create mode 100644 drivers/net/hinic3/hinic3_flow.c
 create mode 100644 drivers/net/hinic3/hinic3_flow.h
 create mode 100644 drivers/net/hinic3/hinic3_nic_io.c
 create mode 100644 drivers/net/hinic3/hinic3_nic_io.h
 create mode 100644 drivers/net/hinic3/hinic3_rx.c
 create mode 100644 drivers/net/hinic3/hinic3_rx.h
 create mode 100644 drivers/net/hinic3/hinic3_tx.c
 create mode 100644 drivers/net/hinic3/hinic3_tx.h
 create mode 100644 drivers/net/hinic3/meson.build
 create mode 100644 drivers/net/hinic3/mml/hinic3_dbg.c
 create mode 100644 drivers/net/hinic3/mml/hinic3_dbg.h
 create mode 100644 drivers/net/hinic3/mml/hinic3_mml_cmd.c
 create mode 100644 drivers/net/hinic3/mml/hinic3_mml_cmd.h
 create mode 100644 drivers/net/hinic3/mml/hinic3_mml_ioctl.c
 create mode 100644 drivers/net/hinic3/mml/hinic3_mml_lib.c
 create mode 100644 drivers/net/hinic3/mml/hinic3_mml_lib.h
 create mode 100644 drivers/net/hinic3/mml/hinic3_mml_main.c
 create mode 100644 drivers/net/hinic3/mml/hinic3_mml_queue.c
 create mode 100644 drivers/net/hinic3/mml/hinic3_mml_queue.h
 create mode 100644 drivers/net/hinic3/mml/meson.build

-- 
2.47.0.windows.2


^ permalink raw reply	[flat|nested] 16+ messages in thread

* [RFC 00/18] add hinic3 PMD driver
@ 2025-04-18  8:08 Feifei Wang
  0 siblings, 0 replies; 16+ messages in thread
From: Feifei Wang @ 2025-04-18  8:08 UTC (permalink / raw)
  To: dev

*** BLURB HERE ***
The hinic3 PMD (**librte_net_hinic3**) provides poll mode driver support
for 25Gbps/100Gbps/200Gbps Huawei SPx series Network Adapters.

Feifei Wang (3):
  net/hinic3: add intro doc for hinic3
  net/hinic3: add dev ops
  net/hinic3: add Rx/Tx functions

Xin Wang (7):
  net/hinic3: add basic header files
  net/hinic3: add support for cmdq mechanism
  net/hinic3: add NIC event module
  net/hinic3: add context and work queue support
  net/hinic3: add device initialization
  net/hinic3: add MML and EEPROM access feature
  net/hinic3: add RSS promiscuous ops

Yi Chen (8):
  net/hinic3: add hardware interfaces of BAR operation
  net/hinic3: add eq mechanism function code
  net/hinic3: add mgmt module function code
  net/hinic3: add module about hardware operation
  net/hinic3: add a NIC business configuration module
  net/hinic3: add a mailbox communication module
  net/hinic3: add FDIR flow control module
  drivers/net: add hinic3 PMD build and doc files

 .mailmap                                   |    4 +-
 MAINTAINERS                                |    6 +
 doc/guides/nics/features/hinic3.ini        |    9 +
 doc/guides/nics/hinic3.rst                 |   52 +
 doc/guides/nics/index.rst                  |    1 +
 doc/guides/rel_notes/release_25_07.rst     |   32 +-
 drivers/net/hinic3/base/hinic3_cmd.h       |  231 ++
 drivers/net/hinic3/base/hinic3_cmdq.c      |  975 +++++
 drivers/net/hinic3/base/hinic3_cmdq.h      |  230 ++
 drivers/net/hinic3/base/hinic3_compat.h    |  266 ++
 drivers/net/hinic3/base/hinic3_csr.h       |  108 +
 drivers/net/hinic3/base/hinic3_eqs.c       |  719 ++++
 drivers/net/hinic3/base/hinic3_eqs.h       |   98 +
 drivers/net/hinic3/base/hinic3_hw_cfg.c    |  240 ++
 drivers/net/hinic3/base/hinic3_hw_cfg.h    |  121 +
 drivers/net/hinic3/base/hinic3_hw_comm.c   |  452 +++
 drivers/net/hinic3/base/hinic3_hw_comm.h   |  366 ++
 drivers/net/hinic3/base/hinic3_hwdev.c     |  573 +++
 drivers/net/hinic3/base/hinic3_hwdev.h     |  177 +
 drivers/net/hinic3/base/hinic3_hwif.c      |  779 ++++
 drivers/net/hinic3/base/hinic3_hwif.h      |  142 +
 drivers/net/hinic3/base/hinic3_mbox.c      | 1392 +++++++
 drivers/net/hinic3/base/hinic3_mbox.h      |  199 +
 drivers/net/hinic3/base/hinic3_mgmt.c      |  392 ++
 drivers/net/hinic3/base/hinic3_mgmt.h      |  121 +
 drivers/net/hinic3/base/hinic3_nic_cfg.c   | 1828 +++++++++
 drivers/net/hinic3/base/hinic3_nic_cfg.h   | 1527 ++++++++
 drivers/net/hinic3/base/hinic3_nic_event.c |  433 +++
 drivers/net/hinic3/base/hinic3_nic_event.h |   39 +
 drivers/net/hinic3/base/hinic3_wq.c        |  148 +
 drivers/net/hinic3/base/hinic3_wq.h        |  109 +
 drivers/net/hinic3/base/meson.build        |   50 +
 drivers/net/hinic3/hinic3_ethdev.c         | 3866 ++++++++++++++++++++
 drivers/net/hinic3/hinic3_ethdev.h         |  167 +
 drivers/net/hinic3/hinic3_fdir.c           | 1394 +++++++
 drivers/net/hinic3/hinic3_fdir.h           |  398 ++
 drivers/net/hinic3/hinic3_flow.c           | 1700 +++++++++
 drivers/net/hinic3/hinic3_flow.h           |   80 +
 drivers/net/hinic3/hinic3_nic_io.c         |  827 +++++
 drivers/net/hinic3/hinic3_nic_io.h         |  169 +
 drivers/net/hinic3/hinic3_rx.c             | 1096 ++++++
 drivers/net/hinic3/hinic3_rx.h             |  356 ++
 drivers/net/hinic3/hinic3_tx.c             | 1028 ++++++
 drivers/net/hinic3/hinic3_tx.h             |  315 ++
 drivers/net/hinic3/meson.build             |   44 +
 drivers/net/hinic3/mml/hinic3_dbg.c        |  171 +
 drivers/net/hinic3/mml/hinic3_dbg.h        |  160 +
 drivers/net/hinic3/mml/hinic3_mml_cmd.c    |  375 ++
 drivers/net/hinic3/mml/hinic3_mml_cmd.h    |  131 +
 drivers/net/hinic3/mml/hinic3_mml_ioctl.c  |  215 ++
 drivers/net/hinic3/mml/hinic3_mml_lib.c    |  136 +
 drivers/net/hinic3/mml/hinic3_mml_lib.h    |  275 ++
 drivers/net/hinic3/mml/hinic3_mml_main.c   |  167 +
 drivers/net/hinic3/mml/hinic3_mml_queue.c  |  749 ++++
 drivers/net/hinic3/mml/hinic3_mml_queue.h  |  256 ++
 drivers/net/hinic3/mml/meson.build         |   62 +
 drivers/net/meson.build                    |    1 +
 57 files changed, 25926 insertions(+), 31 deletions(-)
 create mode 100644 doc/guides/nics/features/hinic3.ini
 create mode 100644 doc/guides/nics/hinic3.rst
 create mode 100644 drivers/net/hinic3/base/hinic3_cmd.h
 create mode 100644 drivers/net/hinic3/base/hinic3_cmdq.c
 create mode 100644 drivers/net/hinic3/base/hinic3_cmdq.h
 create mode 100644 drivers/net/hinic3/base/hinic3_compat.h
 create mode 100644 drivers/net/hinic3/base/hinic3_csr.h
 create mode 100644 drivers/net/hinic3/base/hinic3_eqs.c
 create mode 100644 drivers/net/hinic3/base/hinic3_eqs.h
 create mode 100644 drivers/net/hinic3/base/hinic3_hw_cfg.c
 create mode 100644 drivers/net/hinic3/base/hinic3_hw_cfg.h
 create mode 100644 drivers/net/hinic3/base/hinic3_hw_comm.c
 create mode 100644 drivers/net/hinic3/base/hinic3_hw_comm.h
 create mode 100644 drivers/net/hinic3/base/hinic3_hwdev.c
 create mode 100644 drivers/net/hinic3/base/hinic3_hwdev.h
 create mode 100644 drivers/net/hinic3/base/hinic3_hwif.c
 create mode 100644 drivers/net/hinic3/base/hinic3_hwif.h
 create mode 100644 drivers/net/hinic3/base/hinic3_mbox.c
 create mode 100644 drivers/net/hinic3/base/hinic3_mbox.h
 create mode 100644 drivers/net/hinic3/base/hinic3_mgmt.c
 create mode 100644 drivers/net/hinic3/base/hinic3_mgmt.h
 create mode 100644 drivers/net/hinic3/base/hinic3_nic_cfg.c
 create mode 100644 drivers/net/hinic3/base/hinic3_nic_cfg.h
 create mode 100644 drivers/net/hinic3/base/hinic3_nic_event.c
 create mode 100644 drivers/net/hinic3/base/hinic3_nic_event.h
 create mode 100644 drivers/net/hinic3/base/hinic3_wq.c
 create mode 100644 drivers/net/hinic3/base/hinic3_wq.h
 create mode 100644 drivers/net/hinic3/base/meson.build
 create mode 100644 drivers/net/hinic3/hinic3_ethdev.c
 create mode 100644 drivers/net/hinic3/hinic3_ethdev.h
 create mode 100644 drivers/net/hinic3/hinic3_fdir.c
 create mode 100644 drivers/net/hinic3/hinic3_fdir.h
 create mode 100644 drivers/net/hinic3/hinic3_flow.c
 create mode 100644 drivers/net/hinic3/hinic3_flow.h
 create mode 100644 drivers/net/hinic3/hinic3_nic_io.c
 create mode 100644 drivers/net/hinic3/hinic3_nic_io.h
 create mode 100644 drivers/net/hinic3/hinic3_rx.c
 create mode 100644 drivers/net/hinic3/hinic3_rx.h
 create mode 100644 drivers/net/hinic3/hinic3_tx.c
 create mode 100644 drivers/net/hinic3/hinic3_tx.h
 create mode 100644 drivers/net/hinic3/meson.build
 create mode 100644 drivers/net/hinic3/mml/hinic3_dbg.c
 create mode 100644 drivers/net/hinic3/mml/hinic3_dbg.h
 create mode 100644 drivers/net/hinic3/mml/hinic3_mml_cmd.c
 create mode 100644 drivers/net/hinic3/mml/hinic3_mml_cmd.h
 create mode 100644 drivers/net/hinic3/mml/hinic3_mml_ioctl.c
 create mode 100644 drivers/net/hinic3/mml/hinic3_mml_lib.c
 create mode 100644 drivers/net/hinic3/mml/hinic3_mml_lib.h
 create mode 100644 drivers/net/hinic3/mml/hinic3_mml_main.c
 create mode 100644 drivers/net/hinic3/mml/hinic3_mml_queue.c
 create mode 100644 drivers/net/hinic3/mml/hinic3_mml_queue.h
 create mode 100644 drivers/net/hinic3/mml/meson.build

-- 
2.47.0.windows.2


^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2025-05-05 12:52 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-04-18  7:02 [RFC 00/18] add hinic3 PMD driver Feifei Wang
2025-04-18  7:02 ` [RFC 01/18] net/hinic3: add intro doc for hinic3 Feifei Wang
2025-04-18  7:02 ` [RFC 10/18] net/hinic3: add context and work queue support Feifei Wang
2025-04-18  7:02 ` [RFC 11/18] net/hinic3: add a mailbox communication module Feifei Wang
2025-04-18  7:02 ` [RFC 12/18] net/hinic3: add device initialization Feifei Wang
2025-04-18  7:02 ` [RFC 13/18] net/hinic3: add dev ops Feifei Wang
2025-04-18  7:02 ` [RFC 14/18] net/hinic3: add Rx/Tx functions Feifei Wang
2025-04-18  7:02 ` [RFC 15/18] net/hinic3: add MML and EEPROM access feature Feifei Wang
2025-04-18  7:02 ` [RFC 16/18] net/hinic3: add RSS promiscuous ops Feifei Wang
2025-04-18  7:02 ` [RFC 17/18] net/hinic3: add FDIR flow control module Feifei Wang
2025-04-18  7:02 ` [RFC 18/18] drivers/net: add hinic3 PMD build and doc files Feifei Wang
2025-04-18  8:08 [RFC 00/18] add hinic3 PMD driver Feifei Wang
2025-04-18  9:05 Feifei Wang
2025-04-18 18:18 ` Stephen Hemminger
2025-04-18 18:20 ` Stephen Hemminger
2025-04-18 18:32 ` Stephen Hemminger

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).