From: Ferruh Yigit <ferruh.yigit@amd.com>
To: wang.junlong1@zte.com.cn, dev@dpdk.org
Subject: Re: zxdh: add zxdh poll mode driver
Date: Fri, 5 Jul 2024 18:32:01 +0100
Message-ID: <2d5d67eb-1420-4dd8-85ca-e187bbdbd4d5@amd.com>
In-Reply-To: <20240603192857865CEjaTZFRXk6x3WbOud5lK@zte.com.cn>

On 6/3/2024 12:28 PM, wang.junlong1@zte.com.cn wrote:
> From 689a5e88b7ba123852153284b33911defc0f7b92 Mon Sep 17 00:00:00 2001
> From: Junlong Wang <wang.junlong1@zte.com.cn>
> Date: Mon, 3 Jun 2024 17:10:36 +0800
> Subject: [PATCH] zxdh: add zxdh poll mode driver
> 
> zxdh is for ZTE 25/100G Ethernet NIC.
> 

Hi Junlong,

Thanks for contributing, it is good to see ZTE drivers upstreamed.

During upstreaming, it helps to split the feature into multiple logical
parts. This way it is easier for people to review your code, and in the
future the code becomes easier to study and understand.

Please check another driver that is in the process of being upstreamed;
its latest version has a more structured patch series, and you can use
that as a sample:
https://patches.dpdk.org/project/dpdk/list/?series=32313&state=%2A&archive=both


Meanwhile I will add some review comments below. Although it is hard to
review a driver as a single patch, you can still address the issues in
your next version.


> Signed-off-by: Junlong Wang <wang.junlong1@zte.com.cn>
> ---
>  MAINTAINERS                        |    6 +
>  doc/guides/nics/features/zxdh.ini  |   38 +
>  doc/guides/nics/zxdh.rst           |   61 +
>  drivers/net/meson.build            |    1 +
>  drivers/net/zxdh/meson.build       |   94 +
>  drivers/net/zxdh/msg_chan_pub.h    |  274 +++
>  drivers/net/zxdh/version.map       |    3 +
>  drivers/net/zxdh/zxdh_common.c     |  512 +++++
>  drivers/net/zxdh/zxdh_common.h     |  154 ++
>  drivers/net/zxdh/zxdh_ethdev.c     | 3431 ++++++++++++++++++++++++++++
>  drivers/net/zxdh/zxdh_ethdev.h     |  244 ++
>  drivers/net/zxdh/zxdh_ethdev_ops.c | 2205 ++++++++++++++++++
>  drivers/net/zxdh/zxdh_ethdev_ops.h |  159 ++
>  drivers/net/zxdh/zxdh_flow.c       |  973 ++++++++
>  drivers/net/zxdh/zxdh_flow.h       |  129 ++
>  drivers/net/zxdh/zxdh_logs.h       |   72 +
>  drivers/net/zxdh/zxdh_msg_chan.c   | 1270 ++++++++++
>  drivers/net/zxdh/zxdh_msg_chan.h   |  380 +++
>  drivers/net/zxdh/zxdh_mtr.c        |  916 ++++++++
>  drivers/net/zxdh/zxdh_mtr.h        |   46 +
>  drivers/net/zxdh/zxdh_mtr_drv.c    |  527 +++++
>  drivers/net/zxdh/zxdh_mtr_drv.h    |  119 +
>  drivers/net/zxdh/zxdh_pci.c        |  499 ++++
>  drivers/net/zxdh/zxdh_pci.h        |  272 +++
>  drivers/net/zxdh/zxdh_queue.c      |  135 ++
>  drivers/net/zxdh/zxdh_queue.h      |  491 ++++
>  drivers/net/zxdh/zxdh_ring.h       |  160 ++
>  drivers/net/zxdh/zxdh_rxtx.c       | 1307 +++++++++++
>  drivers/net/zxdh/zxdh_rxtx.h       |   59 +
>  drivers/net/zxdh/zxdh_table_drv.h  |  323 +++
>  drivers/net/zxdh/zxdh_tables.c     | 2193 ++++++++++++++++++
>  drivers/net/zxdh/zxdh_tables.h     |  227 ++
>  drivers/net/zxdh/zxdh_telemetry.c  |  581 +++++
>  drivers/net/zxdh/zxdh_telemetry.h  |   30 +
>  34 files changed, 17891 insertions(+)
>  create mode 100644 doc/guides/nics/features/zxdh.ini
>  create mode 100644 doc/guides/nics/zxdh.rst
>  create mode 100644 drivers/net/zxdh/meson.build
>  create mode 100644 drivers/net/zxdh/msg_chan_pub.h
>  create mode 100644 drivers/net/zxdh/version.map
>  create mode 100644 drivers/net/zxdh/zxdh_common.c
>  create mode 100644 drivers/net/zxdh/zxdh_common.h
>  create mode 100644 drivers/net/zxdh/zxdh_ethdev.c
>  create mode 100644 drivers/net/zxdh/zxdh_ethdev.h
>  create mode 100644 drivers/net/zxdh/zxdh_ethdev_ops.c
>  create mode 100644 drivers/net/zxdh/zxdh_ethdev_ops.h
>  create mode 100644 drivers/net/zxdh/zxdh_flow.c
>  create mode 100644 drivers/net/zxdh/zxdh_flow.h
>  create mode 100644 drivers/net/zxdh/zxdh_logs.h
>  create mode 100644 drivers/net/zxdh/zxdh_msg_chan.c
>  create mode 100644 drivers/net/zxdh/zxdh_msg_chan.h
>  create mode 100644 drivers/net/zxdh/zxdh_mtr.c
>  create mode 100644 drivers/net/zxdh/zxdh_mtr.h
>  create mode 100644 drivers/net/zxdh/zxdh_mtr_drv.c
>  create mode 100644 drivers/net/zxdh/zxdh_mtr_drv.h
>  create mode 100644 drivers/net/zxdh/zxdh_pci.c
>  create mode 100644 drivers/net/zxdh/zxdh_pci.h
>  create mode 100644 drivers/net/zxdh/zxdh_queue.c
>  create mode 100644 drivers/net/zxdh/zxdh_queue.h
>  create mode 100644 drivers/net/zxdh/zxdh_ring.h
>  create mode 100644 drivers/net/zxdh/zxdh_rxtx.c
>  create mode 100644 drivers/net/zxdh/zxdh_rxtx.h
>  create mode 100644 drivers/net/zxdh/zxdh_table_drv.h
>  create mode 100644 drivers/net/zxdh/zxdh_tables.c
>  create mode 100644 drivers/net/zxdh/zxdh_tables.h
>  create mode 100644 drivers/net/zxdh/zxdh_telemetry.c
>  create mode 100644 drivers/net/zxdh/zxdh_telemetry.h
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index c9adff9846..34f9001b93 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -1063,6 +1063,12 @@ F: drivers/net/memif/
>  F: doc/guides/nics/memif.rst
>  F: doc/guides/nics/features/memif.ini
> 
> +ZTE zxdh
> +M: Junlong Wang <wang.junlong1@zte.com.cn>
> +M: Lijie Shan <shan.lijie@zte.com.cn>
> +F: drivers/net/zxdh/
> +F: doc/guides/nics/zxdh.rst
> +F: doc/guides/nics/features/zxdh.ini
> 
>  Crypto Drivers
>  --------------
> diff --git a/doc/guides/nics/features/zxdh.ini b/doc/guides/nics/features/zxdh.ini
> new file mode 100644
> index 0000000000..fc41426077
> --- /dev/null
> +++ b/doc/guides/nics/features/zxdh.ini
> @@ -0,0 +1,38 @@
> +;
> +; Supported features of the 'zxdh' network poll mode driver.
> +;
> +; Refer to default.ini for the full list of available PMD features.
> +;
> +[Features]
> +Speed capabilities   = Y
> +Link status          = Y
> +Link status event    = Y
> +MTU update           = Y
> +Scattered Rx         = Y
> +TSO                  = Y
> +LRO                  = Y
> +Promiscuous mode     = Y
> +Allmulticast mode    = Y
> +Unicast MAC filter   = Y
> +Multicast MAC filter = Y
> +RSS hash             = Y
> +RSS key update       = Y
> +RSS reta update      = Y
> +Inner RSS            = Y
> +SR-IOV               = Y
> +VLAN filter          = Y
> +VLAN offload         = Y
> +L3 checksum offload  = Y
> +L4 checksum offload  = Y
> +Inner L3 checksum    = Y
> +Inner L4 checksum    = Y
> +Basic stats          = Y
> +Extended stats       = Y
> +Stats per queue      = Y
> +Flow control         = Y
> +FW version           = Y
> +Multiprocess aware   = Y
> +Linux                = Y
> +x86-64               = Y
> +ARMv8                = Y
> +

When there are multiple patches, this list can be updated by each
commit that adds the corresponding feature.


> diff --git a/doc/guides/nics/zxdh.rst b/doc/guides/nics/zxdh.rst
> new file mode 100644
> index 0000000000..f7cbc5755b
> --- /dev/null
> +++ b/doc/guides/nics/zxdh.rst
> @@ -0,0 +1,61 @@
> +..  SPDX-License-Identifier: BSD-3-Clause
> +    Copyright(c) 2023 ZTE Corporation.
> +
> +
> +ZXDH Poll Mode Driver
> +======================
> +
> +The ZXDH PMD (**librte_net_zxdh**) provides poll mode driver support
> +for 25/100 Gbps ZXDH NX Series Ethernet Controller based on
> +the ZTE Ethernet Controller E310/E312.
> +

Can you please provide a link to the product mentioned?

> +
> +Features
> +--------
> +
> +Features of the zxdh PMD are:
> +
> +- Multi arch support: x86_64, ARMv8.
> +- Multiple queues for TX and RX
> +- Receiver Side Scaling (RSS)
> +- MAC/VLAN filtering
> +- Checksum offload
> +- TSO offload
> +- VLAN/QinQ stripping and inserting
> +- Promiscuous mode
> +- Port hardware statistics
> +- Link state information
> +- Link flow control
> +- Scattered and gather for TX and RX
> +- SR-IOV VF
> +- VLAN filter and VLAN offload
> +- Allmulticast mode
> +- MTU update
> +- Jumbo frames
> +- Unicast MAC filter
> +- Multicast MAC filter
> +- Flow API
> +- Set Link down or up
> +- FW version
> +- LRO
> +

Similar to the .ini list, the above list should also be constructed
patch by patch, as features are introduced by each new patch.

> +Prerequisites
> +-------------
> +
> +This PMD driver need NPSDK library for system initialization and allocation of resources.
> +Communication between PMD and kernel modules is mediated by zxdh Kernel modules.
> +The NPSDK library and zxdh Kernel modules are not part of DPDK and must be installed
> +separately:
> +
> +- Getting the latest NPSDK library and software supports using
> +  ``_.
> +

You already mentioned you will include how to get NPSDK in the next
version; that is good.

But can you also please explain why NPSDK is required? As far as I can
see the driver is not a virtual driver, so where does the dependency
come from?

> +Driver compilation and testing
> +------------------------------
> +
> +Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
> +for details.
> +
> +Limitations or Known issues
> +---------------------------
> +X86-32, Power8, ARMv7 and BSD are not supported yet.
> diff --git a/drivers/net/meson.build b/drivers/net/meson.build
> index bd38b533c5..3778d1b29a 100644
> --- a/drivers/net/meson.build
> +++ b/drivers/net/meson.build
> @@ -61,6 +61,7 @@ drivers = [
>          'vhost',
>          'virtio',
>          'vmxnet3',
> +        'zxdh',
>  ]
>  std_deps = ['ethdev', 'kvargs'] # 'ethdev' also pulls in mbuf, net, eal etc
>  std_deps += ['bus_pci']         # very many PMDs depend on PCI, so make std
> diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build
> new file mode 100644
> index 0000000000..85e6eaa999
> --- /dev/null
> +++ b/drivers/net/zxdh/meson.build
> @@ -0,0 +1,94 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2023 ZTE Corporation
> +
> +sources += files('zxdh_ethdev.c',
> +	'zxdh_pci.c',
> +	'zxdh_rxtx.c',
> +	'zxdh_queue.c',
> +	'zxdh_ethdev_ops.c',
> +	'zxdh_flow.c',
> +	'zxdh_mtr.c',
> +	'zxdh_mtr_drv.c',
> +	'zxdh_common.c',
> +	'zxdh_tables.c',
> +	'zxdh_telemetry.c',
> +	'zxdh_msg_chan.c',
> +	)
> +
> +fs=import('fs')
> +project_dir = meson.source_root()
> +lib_npsdk_dir = '/usr/include/npsdk'
> +message('lib npsdk dir :  ' +lib_npsdk_dir)
> +dpp_include = lib_npsdk_dir + '/dpp/include/'
> +
> +cflags_options = [
> +		'-D DPP_FOR_PCIE',
> +		'-D MACRO_CPU64',
> +
> +]
> +foreach option:cflags_options
> +		if cc.has_argument(option)
> +				cflags += option
> +		endif
> +endforeach
> +cflags += '-fno-strict-aliasing'

Why is strict aliasing disabled? Is there an issue in the code, and
can't it be resolved by updating the code instead?
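
If the warnings come from type-punning casts, a common fix is to copy
through memcpy() instead, which is well defined and typically compiles
to the same code. A generic sketch (not taken from this driver):

/* instead of: uint32_t val = *(uint32_t *)(buf + off); */
uint32_t val;

memcpy(&val, buf + off, sizeof(val)); /* no aliasing violation */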

> +
> +if arch_subdir == 'x86'
> +	lib_name = 'libdpp_x86_64_lit_64_rel'
> +else
> +	lib_name = 'libdpp_arm_aarch64_lit_64_rel'
> +endif
> +message('lib npsdk name :  ' + lib_name)
> +
> +lib = cc.find_library(lib_name , dirs : ['/usr/lib64' ], required: true)
> +
>

Build fails when the dependency is not found:

drivers/net/zxdh/meson.build:43:0: ERROR: C library
'libdpp_x86_64_lit_64_rel' not found

Note that with 'required: true', cc.find_library() aborts the configure
step on failure, so the "if not lib.found()" fallback below is never
reached; with 'required: false' the driver build could be skipped
gracefully via 'build = false' instead.

Btw, CI could not test the driver because of the apply failure. I am
not sure what caused the apply failure, but for the next version please
be sure to rebase on top of the latest 'main' branch, in case it helps.


> +
> +if not lib.found()
> +	build = false
> +	reason = 'missing dependency, lib_name'
> +else
> +	ext_deps += lib
> +	message(lib_npsdk_dir + '/sdk_comm/sdk_comm/comm/include')
> +	includes += include_directories(lib_npsdk_dir + '/sdk_comm/sdk_comm/comm/include')
> +	includes += include_directories(dpp_include)
> +	includes += include_directories(dpp_include + '/dev/module/se/')
> +	includes += include_directories(dpp_include + '/dev/chip/')
> +	includes += include_directories(dpp_include + '/api/')
> +	includes += include_directories(dpp_include + '/dev/reg/')
> +	includes += include_directories(dpp_include + '/dev/module/')
> +	includes += include_directories(dpp_include + '/qos/')
> +	includes += include_directories(dpp_include + '/agentchannel/')
> +
> +	includes += include_directories(dpp_include + '/diag/')
> +	includes += include_directories(dpp_include + '/dev/module/ppu/')
> +	includes += include_directories(dpp_include + '/dev/module/table/se/')
> +	includes += include_directories(dpp_include + '/dev/module/nppu/')
> +	includes += include_directories(dpp_include + '/dev/module/tm/')
> +	includes += include_directories(dpp_include + '/dev/module/dma/')
> +	includes += include_directories(dpp_include + '/dev/module/ddos/')
> +	includes += include_directories(dpp_include + '/dev/module/oam/')
> +	includes += include_directories(dpp_include + '/dev/module/trpg/')
> +	includes += include_directories(dpp_include + '/dev/module/dtb/')
> +endif
> +
> +deps += ['kvargs', 'bus_pci', 'timer']
> +
> +if arch_subdir == 'x86'
> +	if not machine_args.contains('-mno-avx512f')
> +		if cc.has_argument('-mavx512f') and cc.has_argument('-mavx512vl') and cc.has_argument('-mavx512bw')
> +			cflags += ['-DCC_AVX512_SUPPORT']
> +			zxdh_avx512_lib = static_library('zxdh_avx512_lib',
> +						  dependencies: [static_rte_ethdev,
> +						static_rte_kvargs, static_rte_bus_pci],
> +						  include_directories: includes,
> +						  c_args: [cflags, '-mavx512f', '-mavx512bw', '-mavx512vl'])
> +			if (toolchain == 'gcc' and cc.version().version_compare('>=8.3.0'))
> +				cflags += '-DVHOST_GCC_UNROLL_PRAGMA'
> +			elif (toolchain == 'clang' and cc.version().version_compare('>=3.7.0'))
> +				cflags += '-DVHOST_CLANG_UNROLL_PRAGMA'
> +			elif (toolchain == 'icc' and cc.version().version_compare('>=16.0.0'))
> +				cflags += '-DVHOST_ICC_UNROLL_PRAGMA'
> +			endif
> +		endif
> +	endif
> +endif
> diff --git a/drivers/net/zxdh/msg_chan_pub.h b/drivers/net/zxdh/msg_chan_pub.h
> new file mode 100644
> index 0000000000..f2413b2efa
> --- /dev/null
> +++ b/drivers/net/zxdh/msg_chan_pub.h
> @@ -0,0 +1,274 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2023 ZTE Corporation
> + */
> +
> +#ifndef _ZXDH_MSG_CHAN_PUB_H_
> +#define _ZXDH_MSG_CHAN_PUB_H_
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +#include <string.h>
> +#include <stdio.h>
> +#include <stdlib.h>
> +#include <pthread.h>
> +#include <unistd.h>
> +#include <stdint.h>
> +
> +#include <rte_ethdev.h>
> +
> +#define PCI_NAME_LENGTH     16
> +
> +enum DRIVER_TYPE {
> +	MSG_CHAN_END_MPF = 0,
> +	MSG_CHAN_END_PF,
> +	MSG_CHAN_END_VF,
> +	MSG_CHAN_END_RISC,
> +};
> +
> +enum BAR_MSG_RTN {
> +	BAR_MSG_OK = 0,
> +	BAR_MSG_ERR_MSGID,
> +	BAR_MSG_ERR_NULL,
> +	BAR_MSG_ERR_TYPE, /* Message type exception */
> +	BAR_MSG_ERR_MODULE, /* Module ID exception */
> +	BAR_MSG_ERR_BODY_NULL, /* Message body exception */
> +	BAR_MSG_ERR_LEN, /* Message length exception */
> +	BAR_MSG_ERR_TIME_OUT, /* Message sending length too long */
> +	BAR_MSG_ERR_NOT_READY, /* Abnormal message sending conditions*/
> +	BAR_MEG_ERR_NULL_FUNC, /* Empty receive processing function pointer*/
> +	BAR_MSG_ERR_REPEAT_REGISTER, /* Module duplicate registration*/
> +	BAR_MSG_ERR_UNGISTER, /* Repeated deregistration*/
> +	/**
> +	 * The sending interface parameter boundary structure pointer is empty
> +	 */
> +	BAR_MSG_ERR_NULL_PARA,
> +	BAR_MSG_ERR_REPSBUFF_LEN, /* The length of reps_buff is too short*/
> +	/**
> +	 * Unable to find the corresponding message processing function for this module
> +	 */
> +	BAR_MSG_ERR_MODULE_NOEXIST,
> +	/**
> +	 * The virtual address in the parameters passed in by the sending interface is empty
> +	 */
> +	BAR_MSG_ERR_VIRTADDR_NULL,
> +	BAR_MSG_ERR_REPLY, /* sync msg resp_error */
> +	BAR_MSG_ERR_MPF_NOT_SCANNED,
> +	BAR_MSG_ERR_KERNEL_READY,
> +	BAR_MSG_ERR_USR_RET_ERR,
> +	BAR_MSG_ERR_ERR_PCIEID,
> +	BAR_MSG_ERR_SOCKET, /* netlink sockte err */
> +};
> +
> +enum bar_module_id {
> +	BAR_MODULE_DBG = 0, /* 0:  debug */
> +	BAR_MODULE_TBL,     /* 1:  resource table */
> +	BAR_MODULE_MISX,    /* 2:  config msix */
> +	BAR_MODULE_SDA,     /* 3: */
> +	BAR_MODULE_RDMA,    /* 4: */
> +	BAR_MODULE_DEMO,    /* 5:  channel test */
> +	BAR_MODULE_SMMU,    /* 6: */
> +	BAR_MODULE_MAC,     /* 7:  mac rx/tx stats */
> +	BAR_MODULE_VDPA,    /* 8:  vdpa live migration */
> +	BAR_MODULE_VQM,     /* 9:  vqm live migration */
> +	BAR_MODULE_NP,      /* 10: vf msg callback np */
> +	BAR_MODULE_VPORT,   /* 11: get vport */
> +	BAR_MODULE_BDF,     /* 12: get bdf */
> +	BAR_MODULE_RISC_READY, /* 13: */
> +	BAR_MODULE_REVERSE,    /* 14: byte stream reverse */
> +	BAR_MDOULE_NVME,       /* 15: */
> +	BAR_MDOULE_NPSDK,      /* 16: */
> +	BAR_MODULE_NP_TODO,    /* 17: */
> +	MODULE_BAR_MSG_TO_PF,  /* 18: */
> +	MODULE_BAR_MSG_TO_VF,  /* 19: */
> +
> +	MODULE_FLASH = 32,
> +	BAR_MODULE_OFFSET_GET = 33,
> +	BAR_EVENT_OVS_WITH_VCB = 36, /* ovs<-->vcb */
> +
> +	BAR_MSG_MODULE_NUM = 100,
> +};
> +static inline const char *module_id_name(int val)
> +{
> +	switch (val) {
> +	case BAR_MODULE_DBG:        return "BAR_MODULE_DBG";
> +	case BAR_MODULE_TBL:        return "BAR_MODULE_TBL";
> +	case BAR_MODULE_MISX:       return "BAR_MODULE_MISX";
> +	case BAR_MODULE_SDA:        return "BAR_MODULE_SDA";
> +	case BAR_MODULE_RDMA:       return "BAR_MODULE_RDMA";
> +	case BAR_MODULE_DEMO:       return "BAR_MODULE_DEMO";
> +	case BAR_MODULE_SMMU:       return "BAR_MODULE_SMMU";
> +	case BAR_MODULE_MAC:        return "BAR_MODULE_MAC";
> +	case BAR_MODULE_VDPA:       return "BAR_MODULE_VDPA";
> +	case BAR_MODULE_VQM:        return "BAR_MODULE_VQM";
> +	case BAR_MODULE_NP:         return "BAR_MODULE_NP";
> +	case BAR_MODULE_VPORT:      return "BAR_MODULE_VPORT";
> +	case BAR_MODULE_BDF:        return "BAR_MODULE_BDF";
> +	case BAR_MODULE_RISC_READY: return "BAR_MODULE_RISC_READY";
> +	case BAR_MODULE_REVERSE:    return "BAR_MODULE_REVERSE";
> +	case BAR_MDOULE_NVME:       return "BAR_MDOULE_NVME";
> +	case BAR_MDOULE_NPSDK:      return "BAR_MDOULE_NPSDK";
> +	case BAR_MODULE_NP_TODO:    return "BAR_MODULE_NP_TODO";
> +	case MODULE_BAR_MSG_TO_PF:  return "MODULE_BAR_MSG_TO_PF";
> +	case MODULE_BAR_MSG_TO_VF:  return "MODULE_BAR_MSG_TO_VF";
> +	case MODULE_FLASH:          return "MODULE_FLASH";
> +	case BAR_MODULE_OFFSET_GET: return "BAR_MODULE_OFFSET_GET";
> +	case BAR_EVENT_OVS_WITH_VCB: return "BAR_EVENT_OVS_WITH_VCB";
> +	default: return "NA";
> +	}
> +}
> +
> +struct bar_msg_header {
> +	uint8_t valid : 1; /* used by __bar_chan_msg_valid_set/get */
> +	uint8_t sync  : 1;
> +	uint8_t emec  : 1; /* emergency? */
> +	uint8_t ack   : 1; /* ack msg? */
> +	uint8_t poll  : 1;
> +	uint8_t usr   : 1;
> +	uint8_t rsv;
> +	uint16_t module_id;
> +	uint16_t len;
> +	uint16_t msg_id;
> +	uint16_t src_pcieid;
> +	uint16_t dst_pcieid; /* used in PF-->VF */
> +}; /* 12B */
> +#define BAR_MSG_ADDR_CHAN_INTERVAL  (2 * 1024) /* channel size */
> +#define BAR_MSG_PLAYLOAD_OFFSET     (sizeof(struct bar_msg_header))
> +#define BAR_MSG_PAYLOAD_MAX_LEN     (BAR_MSG_ADDR_CHAN_INTERVAL - sizeof(struct bar_msg_header))
> +
> +struct zxdh_pci_bar_msg {
> +	uint64_t virt_addr; /* bar addr */
> +	void    *payload_addr;
> +	uint16_t payload_len;
> +	uint16_t emec;
> +	uint16_t src; /* refer to BAR_DRIVER_TYPE */
> +	uint16_t dst; /* refer to BAR_DRIVER_TYPE */
> +	uint16_t module_id;
> +	uint16_t src_pcieid;
> +	uint16_t dst_pcieid;
> +	uint16_t usr;
> +}; /* 32B */
> +
> +struct zxdh_msg_recviver_mem {
> +	void    *recv_buffer; /* first 4B is head, followed by payload */
> +	uint64_t buffer_len;
> +}; /* 16B */
> +
> +enum pciebar_layout_type {
> +	URI_VQM      = 0,
> +	URI_SPINLOCK = 1,
> +	URI_FWCAP    = 2,
> +	URI_FWSHR    = 3,
> +	URI_DRS_SEC  = 4,
> +	URI_RSV      = 5,
> +	URI_CTRLCH   = 6,
> +	URI_1588     = 7,
> +	URI_QBV      = 8,
> +	URI_MACPCS   = 9,
> +	URI_RDMA     = 10,
> +/* DEBUG PF */
> +	URI_MNP      = 11,
> +	URI_MSPM     = 12,
> +	URI_MVQM     = 13,
> +	URI_MDPI     = 14,
> +	URI_NP       = 15,
> +/* END DEBUG PF */
> +	URI_MAX,
> +};
> +
> +struct bar_offset_params {
> +	uint64_t virt_addr;  /* Bar space control space virtual address */
> +	uint16_t pcie_id;
> +	uint16_t type;  /* Module types corresponding to PCIBAR planning */
> +};
> +struct bar_offset_res {
> +	uint32_t bar_offset;
> +	uint32_t bar_length;
> +};
> +
> +/**
> + * Get the offset value of the specified module
> + * @bar_offset_params:  input parameter
> + * @bar_offset_res: Module offset and length
> + */
> +int zxdh_get_bar_offset(struct bar_offset_params *paras, struct bar_offset_res *res);
> +
> +typedef int (*zxdh_bar_chan_msg_recv_callback)(void *pay_load, uint16_t len, void *reps_buffer,
> +					uint16_t *reps_len, void *dev);
> +
> +/**
> + * Send synchronization messages through PCIE BAR space
> + * @in: Message sending information
> + * @result: Message result feedback
> + * @return: 0 successful, other failures
> + */
> +int zxdh_bar_chan_sync_msg_send(struct zxdh_pci_bar_msg *in, struct zxdh_msg_recviver_mem *result);
> +
> +/**
> + * Sending asynchronous messages through PCIE BAR space
> + * @in: Message sending information
> + * @result: Message result feedback
> + * @return: 0 successful, other failures
> + */
> +int zxdh_bar_chan_async_msg_send(struct zxdh_pci_bar_msg *in, struct zxdh_msg_recviver_mem *result);
> +
> +/**
> + * PCIE BAR spatial message method, registering message reception callback
> + * @module_id: Registration module ID
> + * @callback: Pointer to the receive processing function implemented by the module
> + * @return: 0 successful, other failures
> + * Usually called during driver initialization
> + */
> +int zxdh_bar_chan_msg_recv_register(uint8_t module_id, zxdh_bar_chan_msg_recv_callback callback);
> +
> +/**
> + * PCIE BAR spatial message method, unregistered message receiving callback
> + * @module_id: Kernel PCIE device address
> + * @return: 0 successful, other failures
> + * Called during driver uninstallation
> + */
> +int zxdh_bar_chan_msg_recv_unregister(uint8_t module_id);
> +
> +/**
> + * Provide a message receiving interface for device driver interrupt handling functions
> + * @src:  Driver type for sending interrupts
> + * @dst:  Device driver's own driver type
> + * @virt_addr: The communication bar address of the device
> + * @return: 0 successful, other failures
> + */
> +int zxdh_bar_irq_recv(uint8_t src, uint8_t dst, uint64_t virt_addr, void *dev);
> +
> +/**
> + * Initialize spilock and clear the hardware lock address it belongs to
> + * @pcie_id: PCIE_id of PF device
> + * @bar_base_addr: Bar0 initial base address
> + */
> +int bar_chan_pf_init_spinlock(uint16_t pcie_id, uint64_t bar_base_addr);
> +
> +struct msix_para {
> +	uint16_t pcie_id;
> +	uint16_t vector_risc;
> +	uint16_t vector_pfvf;
> +	uint16_t vector_mpf;
> +	uint64_t virt_addr;
> +	uint16_t driver_type; /* refer to DRIVER_TYPE */
> +};
> +int zxdh_bar_chan_enable(struct msix_para *_msix_para, uint16_t *vport);
> +int zxdh_msg_chan_init(void);
> +int zxdh_bar_msg_chan_exit(void);
> +
> +struct zxdh_res_para {
> +	uint64_t virt_addr;
> +	uint16_t pcie_id;
> +	uint16_t src_type; /* refer to BAR_DRIVER_TYPE */
> +};
> +int zxdh_get_res_panel_id(struct zxdh_res_para *in, uint8_t *panel_id);
> +int zxdh_get_res_hash_id(struct zxdh_res_para *in, uint8_t *hash_id);
> +
> +int zxdh_mpf_bar0_phyaddr_get(uint64_t *pPhyaddr);
> +int zxdh_mpf_bar0_vaddr_get(uint64_t *pVaddr);
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +#endif /* _ZXDH_MSG_CHAN_PUB_H_ */
> diff --git a/drivers/net/zxdh/version.map b/drivers/net/zxdh/version.map
> new file mode 100644
> index 0000000000..4a76d1d52d
> --- /dev/null
> +++ b/drivers/net/zxdh/version.map
> @@ -0,0 +1,3 @@
> +DPDK_21 {
> +	local: *;
> +};

You can drop empty (no exported symbols) .map files; please check
commit 7dde9c844a37 ("drivers: omit symbol map when unneeded")

> diff --git a/drivers/net/zxdh/zxdh_common.c b/drivers/net/zxdh/zxdh_common.c
> new file mode 100644
> index 0000000000..ca62393a08
> --- /dev/null
> +++ b/drivers/net/zxdh/zxdh_common.c
> @@ -0,0 +1,512 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2023 ZTE Corporation
> + */
> +
> +#include <stdint.h>
> +#include <string.h>
> +#include <stdio.h>
> +#include <errno.h>
> +#include <unistd.h>
> +
> +#include <rte_memcpy.h>
> +#include <rte_malloc.h>
> +#include <rte_common.h>
> +#include <rte_memory.h>
> +
> +#include "zxdh_logs.h"
> +#include "zxdh_common.h"
> +#include "zxdh_pci.h"
> +#include "zxdh_msg_chan.h"
> +#include "zxdh_queue.h"
> +#include "zxdh_ethdev_ops.h"
> +
> +#define ZXDH_COMMON_FIELD_PCIEID   0
> +#define ZXDH_COMMON_FIELD_DATACH   3
> +#define ZXDH_COMMON_FIELD_VPORT    4
> +#define ZXDH_COMMON_FIELD_PHYPORT  6
> +#define ZXDH_COMMON_FIELD_PANELID  5
> +#define ZXDH_COMMON_FIELD_HASHIDX  7
> +
> +#define ZXDH_MAC_STATS_OFFSET   (0x1000 + 408)
> +#define ZXDH_MAC_BYTES_OFFSET   (0xb000)
> +
> +uint64_t get_cur_time_s(uint64_t tsc)
> +{
> +	return (tsc/rte_get_tsc_hz());
> +}
> +
> +/** Nano seconds per second */
> +#define NS_PER_SEC 1E9
> +
> +uint64_t get_time_ns(uint64_t tsc)
> +{
> +	return (tsc*NS_PER_SEC/rte_get_tsc_hz());
> +}
> +/**
> + * Fun:
> + */


There are multiple instances of these empty 'Fun' comments; can you
please drop the empty ones?

> +void zxdh_hex_dump(uint8_t *buff, uint16_t buff_size)
> +{
> +	uint16_t i;
> +
> +	for (i = 0; i < buff_size; i++) {
> +		if ((i % 16) == 0)
> +			printf("\n");
> +		printf("%02x ", *(buff + i));
> +	}
> +	printf("\n");
>

A driver printing to stdout without application control is not desired;
can you please convert this to some debug log that the application can
control?
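
A minimal sketch of what this could look like, assuming the logtype
registered in zxdh_logs.h is named 'zxdh_logtype_driver' (please adjust
to the actual name):

#include <rte_hexdump.h>
#include <rte_log.h>

void
zxdh_hex_dump(uint8_t *buff, uint16_t buff_size)
{
	/* Dump only when the application enabled DEBUG level for this
	 * PMD's logtype; rte_hexdump() does the formatting. */
	if (!rte_log_can_log(zxdh_logtype_driver, RTE_LOG_DEBUG))
		return;
	rte_hexdump(stderr, "zxdh", buff, buff_size);
}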

> +}
> +/**
> + * Fun:
> + */
> +uint32_t zxdh_read_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg)
> +{
> +	struct zxdh_hw *hw = dev->data->dev_private;
> +	uint64_t baseaddr = (uint64_t)(hw->bar_addr[bar]);
> +	uint32_t val      = *((volatile uint32_t *)(baseaddr + reg));
> +	return val;
> +}
> +/**
> + * Fun:
> + */
> +void zxdh_write_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg, uint32_t val)

Please follow the DPDK coding convention, where the return type goes on
its own line, like:

void
zxdh_write_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg, ...)

> +{
> +	struct zxdh_hw *hw = dev->data->dev_private;
> +	uint64_t baseaddr = (uint64_t)(hw->bar_addr[bar]);
> +	*((volatile uint32_t *)(baseaddr + reg)) = val;
> +}
> +/**
> + * Fun:
> + */
> +int32_t zxdh_send_command_toriscv(struct rte_eth_dev *dev,
> +	struct zxdh_pci_bar_msg      *in,
> +	enum bar_module_id           module_id,
> +	struct zxdh_msg_recviver_mem *msg_rsp)

When breaking long lines with multiple parameters, a single-tab indent
is easy to confuse with the function body; a double tab can be better.
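
For example (also moving the return type to its own line, per the
earlier comment):

int32_t
zxdh_send_command_toriscv(struct rte_eth_dev *dev,
		struct zxdh_pci_bar_msg *in,
		enum bar_module_id module_id,
		struct zxdh_msg_recviver_mem *msg_rsp)
{
	struct zxdh_hw *hw = dev->data->dev_private;
	...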

> +{
> +	PMD_INIT_FUNC_TRACE();
> +	struct zxdh_hw *hw = dev->data->dev_private;
> +
> +	in->virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_CTRLCH_OFFSET);
> +	in->src = hw->is_pf ? MSG_CHAN_END_PF : MSG_CHAN_END_VF;
> +	in->dst = MSG_CHAN_END_RISC;
> +	in->module_id = module_id;
> +	in->src_pcieid = hw->pcie_id;
> +	if (zxdh_bar_chan_sync_msg_send(in, msg_rsp) != BAR_MSG_OK) {
> +		PMD_DRV_LOG(ERR, "Failed to send sync messages or receive response");
> +		PMD_DRV_LOG(ERR, "msg_data:");
> +		HEX_DUMP(in->payload_addr, in->payload_len);
> +		return -1;
> +	}
> +	return 0;
> +}
> +/**
> + * Fun;
> + */
> +#define ZXDH_MSG_RSP_SIZE_MAX  512
> +static int32_t zxdh_send_command(struct zxdh_hw *hw,
> +	struct zxdh_pci_bar_msg      *desc,
> +	enum bar_module_id            module_id,
> +	struct zxdh_msg_recviver_mem *msg_rsp)
> +{
> +	PMD_INIT_FUNC_TRACE();
> +
> +	desc->virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_CTRLCH_OFFSET);
> +	desc->src = hw->is_pf ? MSG_CHAN_END_PF:MSG_CHAN_END_VF;
> +	desc->dst = MSG_CHAN_END_RISC;
> +	desc->module_id = module_id;
> +	desc->src_pcieid = hw->pcie_id;
> +
> +	msg_rsp->buffer_len  = ZXDH_MSG_RSP_SIZE_MAX;
> +	msg_rsp->recv_buffer = rte_zmalloc(NULL, msg_rsp->buffer_len, 0);
> +	if (unlikely(msg_rsp->recv_buffer == NULL)) {
> +		PMD_DRV_LOG(ERR, "Failed to allocate messages response");
> +		return -ENOMEM;
> +	}
> +
> +	if (zxdh_bar_chan_sync_msg_send(desc, msg_rsp) != BAR_MSG_OK) {
> +		PMD_DRV_LOG(ERR, "Failed to send sync messages or receive response");
> +		PMD_DRV_LOG(ERR, "msg_data:");
> +		HEX_DUMP(desc->payload_addr, desc->payload_len);
> +		rte_free(msg_rsp->recv_buffer);
> +		return -1;
> +	}
> +
> +	return 0;
> +}
> +/**
> + * Fun:
> + */
> +struct zxdh_common_rsp_hdr {
> +	uint8_t  rsp_status;
> +	uint16_t rsp_len;
> +	uint8_t  reserved;
> +	uint8_t  payload_status;
> +	uint8_t  rsv;
> +	uint16_t payload_len;
> +} __rte_packed; /* 8B */

Putting an empty line between struct and function definitions, and
between functions, may make the code easier to read.

> +static int32_t zxdh_common_rsp_check(struct zxdh_msg_recviver_mem *msg_rsp,
> +		void *buff, uint16_t len)
> +{
> +	struct zxdh_common_rsp_hdr *rsp_hdr = (struct zxdh_common_rsp_hdr *)msg_rsp->recv_buffer;
> +
> +	if ((rsp_hdr->payload_status != 0xaa) || (rsp_hdr->payload_len != len)) {
> +		PMD_DRV_LOG(ERR, "Common response is invalid, status:0x%x rsp_len:%d",
> +					rsp_hdr->payload_status, rsp_hdr->payload_len);
> +		return -1;
> +	}
> +	if (len != 0)
> +		memcpy(buff, rsp_hdr + 1, len);
> +
> +	return 0;
> +}
> +/**
> + * Fun:
> + */
> +struct zxdh_common_msg {
> +	uint8_t  type;    /* 0:read table 1:write table */
> +	uint8_t  field;
> +	uint16_t pcie_id;
> +	uint16_t slen;    /* Data length for write table */
> +	uint16_t reserved;
> +} __rte_packed; /* 8B */
> +static int32_t zxdh_fill_common_msg(struct zxdh_hw *hw,
> +	struct zxdh_pci_bar_msg *desc,
> +	uint8_t        type,
> +	uint8_t        field,
> +	void          *buff,
> +	uint16_t       buff_size)
> +{
> +	uint64_t msg_len = sizeof(struct zxdh_common_msg) + buff_size;
> +
> +	desc->payload_addr = rte_zmalloc(NULL, msg_len, 0);
> +	if (unlikely(desc->payload_addr == NULL)) {
> +		PMD_DRV_LOG(ERR, "Failed to allocate msg_data");
> +		return -ENOMEM;
> +	}
> +	memset(desc->payload_addr, 0, msg_len);
> +	desc->payload_len = msg_len;
> +	struct zxdh_common_msg *msg_data = (struct zxdh_common_msg *)desc->payload_addr;
> +
> +	msg_data->type = type;
> +	msg_data->field = field;
> +	msg_data->pcie_id = hw->pcie_id;
> +	msg_data->slen = buff_size;
> +	if (buff_size != 0)
> +		memcpy(msg_data + 1, buff, buff_size);
> +
> +	return 0;
> +}
> +/**
> + * Fun:
> + */
> +#define ZXDH_COMMON_TABLE_READ   0
> +#define ZXDH_COMMON_TABLE_WRITE  1
> +static int32_t zxdh_common_table_read(struct zxdh_hw *hw, uint8_t field,
> +			void *buff, uint16_t buff_size)
> +{
> +	PMD_INIT_FUNC_TRACE();
> +	if (!hw->msg_chan_init) {
> +		PMD_DRV_LOG(ERR, "Bar messages channel not initialized");
> +		return -1;
> +	}
> +	struct zxdh_pci_bar_msg desc;
> +	int32_t ret = zxdh_fill_common_msg(hw, &desc, ZXDH_COMMON_TABLE_READ, field, NULL, 0);
> +
> +	if (ret != 0) {
> +		PMD_DRV_LOG(ERR, "Failed to fill common msg");
> +		return ret;
> +	}
> +	struct zxdh_msg_recviver_mem msg_rsp;
> +
> +	ret = zxdh_send_command(hw, &desc, BAR_MODULE_TBL, &msg_rsp);
> +	if (ret != 0)
> +		goto free_msg_data;
> +
> +	ret = zxdh_common_rsp_check(&msg_rsp, buff, buff_size);
> +	if (ret != 0)
> +		goto free_rsp_data;
> +
> +free_rsp_data:
> +	rte_free(msg_rsp.recv_buffer);
> +free_msg_data:
> +	rte_free(desc.payload_addr);
> +	return ret;
> +}
> +/**
> + * Fun:
> + */
> +static int32_t zxdh_common_table_write(struct zxdh_hw *hw, uint8_t field,
> +			void *buff, uint16_t buff_size)
> +{
> +	PMD_INIT_FUNC_TRACE();
> +	if (!hw->msg_chan_init) {
> +		PMD_DRV_LOG(ERR, "Bar messages channel not initialized");
> +		return -1;
> +	}
> +	if ((buff_size != 0) && (buff == NULL)) {
> +		PMD_DRV_LOG(ERR, "Buff is invalid");
> +		return -1;
> +	}
> +	struct zxdh_pci_bar_msg desc;
> +	int32_t ret = zxdh_fill_common_msg(hw, &desc, ZXDH_COMMON_TABLE_WRITE,
> +					field, buff, buff_size);
> +
> +	if (ret != 0) {
> +		PMD_DRV_LOG(ERR, "Failed to fill common msg");
> +		return ret;
> +	}
> +	struct zxdh_msg_recviver_mem msg_rsp;
> +
> +	ret = zxdh_send_command(hw, &desc, BAR_MODULE_TBL, &msg_rsp);
> +	if (ret != 0)
> +		goto free_msg_data;
> +
> +	ret = zxdh_common_rsp_check(&msg_rsp, NULL, 0);
> +	if (ret != 0)
> +		goto free_rsp_data;
> +
> +free_rsp_data:
> +	rte_free(msg_rsp.recv_buffer);
> +free_msg_data:
> +	rte_free(desc.payload_addr);
> +	return ret;
> +}
> +/**
> + * Fun:
> + */
> +int32_t zxdh_datach_set(struct rte_eth_dev *dev)
> +{
> +	/* payload: queue_num(2byte) + pch1(2byte) + ** + pchn */
> +	struct zxdh_hw *hw = dev->data->dev_private;
> +	uint16_t buff_size = (hw->queue_num + 1) * 2;
> +	void *buff = rte_zmalloc(NULL, buff_size, 0);
> +
> +	if (unlikely(buff == NULL)) {
> +		PMD_DRV_LOG(ERR, "Failed to allocate buff");
> +		return -ENOMEM;
> +	}
> +	memset(buff, 0, buff_size);
> +	uint16_t *pdata = (uint16_t *)buff;
> +	*pdata++ = hw->queue_num;
> +	uint16_t i;
> +
> +	for (i = 0; i < hw->queue_num; i++)
> +		*(pdata + i) = hw->channel_context[i].ph_chno;
> +
> +	int32_t ret = zxdh_common_table_write(hw, ZXDH_COMMON_FIELD_DATACH,
> +						(void *)buff, buff_size);
> +
> +	if (ret != 0)
> +		PMD_DRV_LOG(ERR, "Failed to setup data channel of common table");
> +
> +	rte_free(buff);
> +	return ret;
> +}
> +/**
> + * Fun:
> + */
> +int32_t zxdh_hw_stats_get(struct rte_eth_dev *dev, enum zxdh_agent_opc opcode,
> +			struct zxdh_hw_stats *hw_stats)
> +{
> +	enum bar_module_id module_id;
> +
> +	switch (opcode) {
> +	case ZXDH_VQM_DEV_STATS_GET:
> +	case ZXDH_VQM_QUEUE_STATS_GET:
> +	case ZXDH_VQM_QUEUE_STATS_RESET:
> +		module_id = BAR_MODULE_VQM;
> +		break;
> +	case ZXDH_MAC_STATS_GET:
> +	case ZXDH_MAC_STATS_RESET:
> +		module_id = BAR_MODULE_MAC;
> +		break;
> +	default:
> +		PMD_DRV_LOG(ERR, "invalid opcode %u", opcode);
> +		return -1;
> +	}
> +	/* */
> +	struct zxdh_hw *hw = dev->data->dev_private;
> +	struct zxdh_msg_reply_info reply_info = {0};
> +	struct zxdh_msg_recviver_mem result = {
> +		.recv_buffer = &reply_info,
> +		.buffer_len = sizeof(struct zxdh_msg_reply_info),
> +	};
> +	/* */

Please remove empty comments.

> +	struct zxdh_msg_info msg_info = {0};
> +
> +	ctrl_msg_build(hw, opcode, &msg_info);
> +	struct zxdh_pci_bar_msg in = {0};
> +
> +	in.payload_addr = &msg_info;
> +	in.payload_len = sizeof(msg_info);
> +	if (zxdh_send_command_toriscv(dev, &in, module_id, &result) != 0) {
> +		PMD_DRV_LOG(ERR, "Failed to get hw stats");
> +		return -1;
> +	}
> +	struct zxdh_msg_reply_body *reply_body = &reply_info.reply_body;
> +
> +	rte_memcpy(hw_stats, &reply_body->riscv_rsp.port_hw_stats, sizeof(struct zxdh_hw_stats));
> +	return 0;
> +}
> +
> +int32_t zxdh_hw_mac_get(struct rte_eth_dev *dev, struct zxdh_hw_mac_stats *mac_stats,
> +			struct zxdh_hw_mac_bytes *mac_bytes)
> +{
> +	struct zxdh_hw *hw = dev->data->dev_private;
> +	uint64_t virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_MAC_OFFSET);
> +	uint64_t stats_addr =  0;
> +	uint64_t bytes_addr =  0;
> +
> +	if (hw->speed <= 25000) {
> +		stats_addr = virt_addr + ZXDH_MAC_STATS_OFFSET + 352 * (hw->phyport % 4);
> +		bytes_addr = virt_addr + ZXDH_MAC_BYTES_OFFSET + 32 * (hw->phyport % 4);
> +	} else {
> +		stats_addr = virt_addr + ZXDH_MAC_STATS_OFFSET + 352 * 4;
> +		bytes_addr = virt_addr + ZXDH_MAC_BYTES_OFFSET + 32 * 4;
> +	}
> +
> +	rte_memcpy(mac_stats, (void *)stats_addr, sizeof(struct zxdh_hw_mac_stats));
> +	rte_memcpy(mac_bytes, (void *)bytes_addr, sizeof(struct zxdh_hw_mac_bytes));
> +
> +	return 0;
> +}
> +/**
> + * Fun:
> + */
> +int32_t zxdh_hw_stats_reset(struct rte_eth_dev *dev, enum zxdh_agent_opc opcode)
> +{
> +	enum bar_module_id module_id;
> +
> +	switch (opcode) {
> +	case ZXDH_VQM_DEV_STATS_RESET:
> +		module_id = BAR_MODULE_VQM;
> +		break;
> +	case ZXDH_MAC_STATS_RESET:
> +		module_id = BAR_MODULE_MAC;
> +		break;
> +	default:
> +		PMD_DRV_LOG(ERR, "invalid opcode %u", opcode);
> +		return -1;
> +	}
> +	/* */
> +	struct zxdh_msg_reply_info reply_info = {0};
> +	struct zxdh_msg_recviver_mem result = {
> +		.recv_buffer = &reply_info,
> +		.buffer_len = sizeof(struct zxdh_msg_reply_info),
> +	};
> +	/* */
> +	struct zxdh_msg_info msg_info = {0};
> +	struct zxdh_hw *hw = dev->data->dev_private;
> +
> +	ctrl_msg_build(hw, opcode, &msg_info);
> +	struct zxdh_pci_bar_msg in = {0};
> +
> +	in.payload_addr = &msg_info;
> +	in.payload_len = sizeof(msg_info);
> +	/* */
> +	if (zxdh_send_command_toriscv(dev, &in, module_id, &result) != 0) {
> +		PMD_DRV_LOG(ERR, "Failed to reset hw stats");
> +		return -1;
> +	}
> +	return 0;
> +}
> +/**
> + * Fun:
> + */
> +static inline void zxdh_fill_res_para(struct rte_eth_dev *dev, struct zxdh_res_para *param)
> +{
> +	struct zxdh_hw *hw = dev->data->dev_private;
> +
> +	param->pcie_id   = hw->pcie_id;
> +	param->virt_addr = hw->bar_addr[0] + ZXDH_CTRLCH_OFFSET;
> +	param->src_type  = BAR_MODULE_TBL;
> +}
> +/**
> + * Fun:
> + */
> +int32_t zxdh_pannelid_get(struct rte_eth_dev *dev, uint8_t *pannelid)
> +{
> +	struct zxdh_res_para param;
> +
> +	zxdh_fill_res_para(dev, &param);
> +	int32_t ret = zxdh_get_res_panel_id(&param, pannelid);
> +	return ret;
> +}
> +/**
> + * Fun:
> + */
> +int32_t zxdh_phyport_get(struct rte_eth_dev *dev, uint8_t *phyport)
> +{
> +	struct zxdh_hw *hw = dev->data->dev_private;
> +
> +	int32_t ret = zxdh_common_table_read(hw, ZXDH_COMMON_FIELD_PHYPORT,
> +					(void *)phyport, sizeof(*phyport));
> +	return ret;
> +}
> +/**
> + * Fun:
> + */
> +int32_t zxdh_hashidx_get(struct rte_eth_dev *dev, uint8_t *hash_idx)
> +{
> +	struct zxdh_res_para param;
> +
> +	zxdh_fill_res_para(dev, &param);
> +	int32_t ret = zxdh_get_res_hash_id(&param, hash_idx);
> +
> +	return ret;
> +}
> +#define DUPLEX_HALF   RTE_BIT32(0)
> +#define DUPLEX_FULL   RTE_BIT32(1)
> +
> +int32_t zxdh_link_info_get(struct rte_eth_dev *dev, struct rte_eth_link *link)
> +{
> +	PMD_INIT_FUNC_TRACE();
> +	struct zxdh_hw *hw = dev->data->dev_private;
> +	uint16_t status = 0;
> +
> +	if (vtpci_with_feature(hw, ZXDH_NET_F_STATUS))
> +		zxdh_vtpci_read_dev_config(hw, offsetof(struct zxdh_net_config, status),
> +					&status, sizeof(status));
> +
> +	link->link_status = status;
> +
> +	if (status == RTE_ETH_LINK_DOWN) {
> +		PMD_DRV_LOG(INFO, "Port is down!\n");
> +		link->link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
> +		link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
> +	} else {
> +		struct zxdh_msg_info msg;
> +		struct zxdh_pci_bar_msg in = {0};
> +		struct zxdh_msg_reply_info rep = {0};
> +
> +		ctrl_msg_build(hw, ZXDH_MAC_LINK_GET, &msg);
> +
> +		in.payload_addr = &msg;
> +		in.payload_len = sizeof(msg);
> +
> +		struct zxdh_msg_recviver_mem rsp_data = {
> +			.recv_buffer = (void *)&rep,
> +			.buffer_len = sizeof(rep),
> +		};
> +		if (zxdh_send_command_toriscv(dev, &in, BAR_MODULE_MAC, &rsp_data) != BAR_MSG_OK) {
> +			PMD_DRV_LOG(ERR, "Failed to get link info");
> +			return -1;
> +		}
> +		struct zxdh_msg_reply_body *ack_msg =
> +				&(((struct zxdh_msg_reply_info *)rsp_data.recv_buffer)->reply_body);
> +
> +		link->link_speed = ack_msg->link_msg.speed;
> +		hw->speed_mode = ack_msg->link_msg.speed_modes;
> +		if ((ack_msg->link_msg.duplex & DUPLEX_FULL) == DUPLEX_FULL)
> +			link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
> +		else
> +			link->link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
> +
> +		PMD_DRV_LOG(INFO, "Port is up!\n");
> +	}
> +	hw->speed = link->link_speed;
> +	PMD_DRV_LOG(INFO, "sw : admain_status %d ", hw->admin_status);
> +	PMD_DRV_LOG(INFO, "hw : link_status: %d,  link_speed: %d, link_duplex %d\n",
> +				link->link_status, link->link_speed, link->link_duplex);
> +	return 0;
> +}
> diff --git a/drivers/net/zxdh/zxdh_common.h b/drivers/net/zxdh/zxdh_common.h
> new file mode 100644
> index 0000000000..2010d01e63
> --- /dev/null
> +++ b/drivers/net/zxdh/zxdh_common.h
> @@ -0,0 +1,154 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2023 ZTE Corporation
> + */
> +
> +#ifndef _ZXDH_COMMON_H_
> +#define _ZXDH_COMMON_H_
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
>

Although this doesn't hurt, is it expected to include this
driver-internal header file from a C++ source file?
If it is not used at all, maybe consider dropping the 'extern "C"'
guards.

> +
> +#include <stdint.h>
> +#include <rte_ethdev.h>
> +#include <rte_common.h>
> +#include "msg_chan_pub.h"
> +#include "zxdh_logs.h"
> +
> +#define VF_IDX(pcie_id)  (pcie_id & 0xff)
> +#define PF_PCIE_ID(pcie_id)  ((pcie_id & 0xff00) | 1<<11)
> +#define VF_PCIE_ID(pcie_id, vf_idx)  ((pcie_id & 0xff00) | (1<<11) | (vf_idx&0xff))
> +
> +#define VFUNC_ACTIVE_BIT  11
> +#define VFUNC_NUM_MASK    0xff
> +#define GET_OWNER_PF_VPORT(vport)  ((vport&~(VFUNC_NUM_MASK))&(~(1<<VFUNC_ACTIVE_BIT)))
> +
> +/* riscv msg opcodes */
> +enum zxdh_agent_opc {
> +	ZXDH_MAC_STATS_GET = 10,
> +	ZXDH_MAC_STATS_RESET,
> +	ZXDH_MAC_PHYPORT_INIT,
> +	ZXDH_MAC_AUTONEG_SET,
> +	ZXDH_MAC_LINK_GET,
> +	ZXDH_MAC_LED_BLINK,
> +	ZXDH_MAC_FC_SET  = 18,
> +	ZXDH_MAC_FC_GET = 19,
> +	ZXDH_MAC_MODULE_EEPROM_READ = 20,
> +	ZXDH_VQM_DEV_STATS_GET = 21,
> +	ZXDH_VQM_DEV_STATS_RESET,
> +	ZXDH_FLASH_FIR_VERSION_GET = 23,
> +	ZXDH_VQM_QUEUE_STATS_GET,
> +	ZXDH_DEV_STATUS_NOTIFY = 24,
> +	ZXDH_VQM_QUEUE_STATS_RESET,
> +} __rte_packed;
> +
> +struct zxdh_hw_stats {
> +	uint64_t rx_total;
> +	uint64_t tx_total;
> +	uint64_t rx_bytes;
> +	uint64_t tx_bytes;
> +	uint64_t rx_error;
> +	uint64_t tx_error;
> +	uint64_t rx_drop;
> +} __rte_packed;
> +
> +struct zxdh_hw_mac_stats {
> +	uint64_t rx_total;
> +	uint64_t rx_pause;
> +	uint64_t rx_unicast;
> +	uint64_t rx_multicast;
> +	uint64_t rx_broadcast;
> +	uint64_t rx_vlan;
> +	uint64_t rx_size_64;
> +	uint64_t rx_size_65_127;
> +	uint64_t rx_size_128_255;
> +	uint64_t rx_size_256_511;
> +	uint64_t rx_size_512_1023;
> +	uint64_t rx_size_1024_1518;
> +	uint64_t rx_size_1519_mru;
> +	uint64_t rx_undersize;
> +	uint64_t rx_oversize;
> +	uint64_t rx_fragment;
> +	uint64_t rx_jabber;
> +	uint64_t rx_control;
> +	uint64_t rx_eee;
> +
> +	uint64_t tx_total;
> +	uint64_t tx_pause;
> +	uint64_t tx_unicast;
> +	uint64_t tx_multicast;
> +	uint64_t tx_broadcast;
> +	uint64_t tx_vlan;
> +	uint64_t tx_size_64;
> +	uint64_t tx_size_65_127;
> +	uint64_t tx_size_128_255;
> +	uint64_t tx_size_256_511;
> +	uint64_t tx_size_512_1023;
> +	uint64_t tx_size_1024_1518;
> +	uint64_t tx_size_1519_mtu;
> +	uint64_t tx_undersize;
> +	uint64_t tx_oversize;
> +	uint64_t tx_fragment;
> +	uint64_t tx_jabber;
> +	uint64_t tx_control;
> +	uint64_t tx_eee;
> +
> +	uint64_t rx_error;
> +	uint64_t rx_fcs_error;
> +	uint64_t rx_drop;
> +
> +	uint64_t tx_error;
> +	uint64_t tx_fcs_error;
> +	uint64_t tx_drop;
> +
> +} __rte_packed;
> +
> +struct zxdh_hw_mac_bytes {
> +	uint64_t rx_total_bytes;
> +	uint64_t rx_good_bytes;
> +	uint64_t tx_total_bytes;
> +	uint64_t tx_good_bytes;
> +} __rte_packed;
> +
> +void zxdh_hex_dump(uint8_t *buff, uint16_t buff_size);
> +
> +uint32_t zxdh_read_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg);
> +void zxdh_write_reg(struct rte_eth_dev *dev, uint32_t bar, uint32_t reg, uint32_t val);
> +int32_t zxdh_hw_stats_get(struct rte_eth_dev *dev, enum zxdh_agent_opc opcode,
> +			struct zxdh_hw_stats *hw_stats);
> +int32_t zxdh_hw_mac_get(struct rte_eth_dev *dev, struct zxdh_hw_mac_stats *mac_stats,
> +			struct zxdh_hw_mac_bytes *mac_bytes);
> +int32_t zxdh_hw_stats_reset(struct rte_eth_dev *dev, enum zxdh_agent_opc opcode);
> +int32_t zxdh_link_info_get(struct rte_eth_dev *dev, struct rte_eth_link *link);
> +int32_t zxdh_datach_set(struct rte_eth_dev *dev);
> +int32_t zxdh_vport_get(struct rte_eth_dev *dev, uint16_t *vport);
> +int32_t zxdh_pannelid_get(struct rte_eth_dev *dev, uint8_t *pannelid);
> +int32_t zxdh_phyport_get(struct rte_eth_dev *dev, uint8_t *phyport);
> +int32_t zxdh_hashidx_get(struct rte_eth_dev *dev, uint8_t *hash_idx);
> +int32_t zxdh_send_command_toriscv(struct rte_eth_dev *dev,
> +			struct zxdh_pci_bar_msg *in,
> +			enum bar_module_id module_id,
> +			struct zxdh_msg_recviver_mem *msg_rsp);
> +
> +#define HEX_DUMP(buff, buff_size)  zxdh_hex_dump((uint8_t *)buff, (uint16_t)buff_size)
> +
> +#define ZXDH_DIRECT_FLAG_BIT       (1UL << 15)
> +
> +#define ZXDH_FLAG_YES 1
> +#define ZXDH_FLAG_NO 0
> +
> +#define ZXDH_VLAN_TAG_LEN 4
> +
> +#define ZXDH_ETH_OVERHEAD  (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + ZXDH_VLAN_TAG_LEN * 2)
> +#define ZXDH_MTU_TO_PKTLEN(mtu) ((mtu) + ZXDH_ETH_OVERHEAD)
> +
> +#define VLAN_TAG_LEN   4/* 802.3ac tag (not DMA'd) */
> +
> +uint64_t get_cur_time_s(uint64_t tsc);
> +uint64_t get_time_ns(uint64_t tsc);
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +#endif /* _ZXDH_COMMON_H_ */
> diff --git a/drivers/net/zxdh/zxdh_ethdev.c b/drivers/net/zxdh/zxdh_ethdev.c
> new file mode 100644
> index 0000000000..222ecbd3c1
> --- /dev/null
> +++ b/drivers/net/zxdh/zxdh_ethdev.c
> @@ -0,0 +1,3431 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2023 ZTE Corporation
> + */
> +
> +#include <rte_memcpy.h>
> +#include <rte_malloc.h>
> +#include <rte_interrupts.h>
> +#include <eal_interrupts.h>
> +#include <ethdev_pci.h>
> +#include <rte_kvargs.h>
> +#include <rte_hexdump.h>
> +
> +#include "zxdh_ethdev.h"
> +#include "zxdh_pci.h"
> +#include "zxdh_logs.h"
> +#include "zxdh_queue.h"
> +#include "zxdh_rxtx.h"
> +#include "zxdh_msg_chan.h"
> +#include "zxdh_common.h"
> +#include "zxdh_ethdev_ops.h"
> +#include "zxdh_tables.h"
> +#include "dpp_dtb_table_api.h"
> +#include "dpp_dev.h"
> +#include "dpp_init.h"
> +#include "zxdh_ethdev.h"
> +#include "zxdh_table_drv.h"
> +#include "dpp_log_diag.h"
> +#include "dpp_dbgstat.h"
> +#include "dpp_trpg_api.h"
> +
> +#include "zxdh_telemetry.h"
> +
> +struct rte_zxdh_xstats_name_off {
> +	char name[RTE_ETH_XSTATS_NAME_SIZE];
> +	unsigned int offset;
> +};
> +static const struct rte_zxdh_xstats_name_off rte_zxdh_np_stat_strings[] = {
> +	{"np_rx_broadcast",    offsetof(struct zxdh_hw_np_stats, np_rx_broadcast)},
> +	{"np_tx_broadcast",    offsetof(struct zxdh_hw_np_stats, np_tx_broadcast)},
> +	{"np_rx_mtu_drop_pkts",   offsetof(struct zxdh_hw_np_stats, np_rx_mtu_drop_pkts)},
> +	{"np_tx_mtu_drop_pkts",   offsetof(struct zxdh_hw_np_stats, np_tx_mtu_drop_pkts)},
> +	{"np_tx_mtu_drop_bytes",   offsetof(struct zxdh_hw_np_stats, np_tx_mtu_drop_bytes)},
> +	{"np_rx_mtu_drop_bytes",   offsetof(struct zxdh_hw_np_stats, np_rx_mtu_drop_bytes)},
> +	{"np_rx_plcr_drop_pkts",  offsetof(struct zxdh_hw_np_stats, np_rx_mtr_drop_pkts)},
> +	{"np_rx_plcr_drop_bytes",  offsetof(struct zxdh_hw_np_stats, np_rx_mtr_drop_bytes)},
> +	{"np_tx_plcr_drop_pkts",  offsetof(struct zxdh_hw_np_stats,  np_tx_mtr_drop_pkts)},
> +	{"np_tx_plcr_drop_bytes",  offsetof(struct zxdh_hw_np_stats, np_tx_mtr_drop_bytes)},
> +};
> +/* [rt]x_qX_ is prepended to the name string here */
> +static const struct rte_zxdh_xstats_name_off rte_zxdh_rxq_stat_strings[] = {
> +	{"good_packets",           offsetof(struct virtnet_rx, stats.packets)},
> +	{"good_bytes",             offsetof(struct virtnet_rx, stats.bytes)},
> +	{"errors",                 offsetof(struct virtnet_rx, stats.errors)},
> +	{"multicast_packets",      offsetof(struct virtnet_rx, stats.multicast)},
> +	{"broadcast_packets",      offsetof(struct virtnet_rx, stats.broadcast)},
> +	{"truncated_err",          offsetof(struct virtnet_rx, stats.truncated_err)},
> +	{"undersize_packets",      offsetof(struct virtnet_rx, stats.size_bins[0])},
> +	{"size_64_packets",        offsetof(struct virtnet_rx, stats.size_bins[1])},
> +	{"size_65_127_packets",    offsetof(struct virtnet_rx, stats.size_bins[2])},
> +	{"size_128_255_packets",   offsetof(struct virtnet_rx, stats.size_bins[3])},
> +	{"size_256_511_packets",   offsetof(struct virtnet_rx, stats.size_bins[4])},
> +	{"size_512_1023_packets",  offsetof(struct virtnet_rx, stats.size_bins[5])},
> +	{"size_1024_1518_packets", offsetof(struct virtnet_rx, stats.size_bins[6])},
> +	{"size_1519_max_packets",  offsetof(struct virtnet_rx, stats.size_bins[7])},
> +};
> +
> +
> +/* [rt]x_qX_ is prepended to the name string here */
> +static const struct rte_zxdh_xstats_name_off rte_zxdh_txq_stat_strings[] = {
> +	{"good_packets",           offsetof(struct virtnet_tx, stats.packets)},
> +	{"good_bytes",             offsetof(struct virtnet_tx, stats.bytes)},
> +	{"errors",                 offsetof(struct virtnet_tx, stats.errors)},
> +	{"multicast_packets",      offsetof(struct virtnet_tx, stats.multicast)},
> +	{"broadcast_packets",      offsetof(struct virtnet_tx, stats.broadcast)},
> +	{"truncated_err",          offsetof(struct virtnet_tx, stats.truncated_err)},
> +	{"undersize_packets",      offsetof(struct virtnet_tx, stats.size_bins[0])},
> +	{"size_64_packets",        offsetof(struct virtnet_tx, stats.size_bins[1])},
> +	{"size_65_127_packets",    offsetof(struct virtnet_tx, stats.size_bins[2])},
> +	{"size_128_255_packets",   offsetof(struct virtnet_tx, stats.size_bins[3])},
> +	{"size_256_511_packets",   offsetof(struct virtnet_tx, stats.size_bins[4])},
> +	{"size_512_1023_packets",  offsetof(struct virtnet_tx, stats.size_bins[5])},
> +	{"size_1024_1518_packets", offsetof(struct virtnet_tx, stats.size_bins[6])},
> +	{"size_1519_max_packets",  offsetof(struct virtnet_tx, stats.size_bins[7])},
> +};
> +static const struct rte_zxdh_xstats_name_off rte_zxdh_mac_stat_strings[] = {
> +	{"mac_rx_total",    offsetof(struct zxdh_hw_mac_stats, rx_total)},
> +	{"mac_rx_pause",    offsetof(struct zxdh_hw_mac_stats, rx_pause)},
> +	{"mac_rx_unicast",   offsetof(struct zxdh_hw_mac_stats, rx_unicast)},
> +	{"mac_rx_multicast",   offsetof(struct zxdh_hw_mac_stats, rx_multicast)},
> +	{"mac_rx_broadcast",   offsetof(struct zxdh_hw_mac_stats, rx_broadcast)},
> +	{"mac_rx_vlan",   offsetof(struct zxdh_hw_mac_stats, rx_vlan)},
> +	{"mac_rx_size_64",  offsetof(struct zxdh_hw_mac_stats, rx_size_64)},
> +	{"mac_rx_size_65_127",  offsetof(struct zxdh_hw_mac_stats, rx_size_65_127)},
> +	{"mac_rx_size_128_255",  offsetof(struct zxdh_hw_mac_stats,  rx_size_128_255)},
> +	{"mac_rx_size_256_511",  offsetof(struct zxdh_hw_mac_stats, rx_size_256_511)},
> +	{"mac_rx_size_512_1023",    offsetof(struct zxdh_hw_mac_stats, rx_size_512_1023)},
> +	{"mac_rx_size_1024_1518",    offsetof(struct zxdh_hw_mac_stats, rx_size_1024_1518)},
> +	{"mac_rx_size_1519_mru",   offsetof(struct zxdh_hw_mac_stats, rx_size_1519_mru)},
> +	{"mac_rx_undersize",   offsetof(struct zxdh_hw_mac_stats, rx_undersize)},
> +	{"mac_rx_oversize",   offsetof(struct zxdh_hw_mac_stats, rx_oversize)},
> +	{"mac_rx_fragment",   offsetof(struct zxdh_hw_mac_stats, rx_fragment)},
> +	{"mac_rx_jabber",  offsetof(struct zxdh_hw_mac_stats, rx_jabber)},
> +	{"mac_rx_control",  offsetof(struct zxdh_hw_mac_stats, rx_control)},
> +	{"mac_rx_eee",  offsetof(struct zxdh_hw_mac_stats,  rx_eee)},
> +	{"mac_rx_error",  offsetof(struct zxdh_hw_mac_stats, rx_error)},
> +	{"mac_rx_fcs_error",    offsetof(struct zxdh_hw_mac_stats, rx_fcs_error)},
> +	{"mac_rx_drop",    offsetof(struct zxdh_hw_mac_stats, rx_drop)},
> +
> +	{"mac_tx_total",   offsetof(struct zxdh_hw_mac_stats, tx_total)},
> +	{"mac_tx_pause",   offsetof(struct zxdh_hw_mac_stats, tx_pause)},
> +	{"mac_tx_unicast",  offsetof(struct zxdh_hw_mac_stats, tx_unicast)},
> +	{"mac_tx_multicast",  offsetof(struct zxdh_hw_mac_stats, tx_multicast)},
> +	{"mac_tx_broadcast",  offsetof(struct zxdh_hw_mac_stats,  tx_broadcast)},
> +	{"mac_tx_vlan",  offsetof(struct zxdh_hw_mac_stats, tx_vlan)},
> +	{"mac_tx_size_64",   offsetof(struct zxdh_hw_mac_stats, tx_size_64)},
> +	{"mac_tx_size_65_127",   offsetof(struct zxdh_hw_mac_stats, tx_size_65_127)},
> +	{"mac_tx_size_128_255",  offsetof(struct zxdh_hw_mac_stats, tx_size_128_255)},
> +	{"mac_tx_size_256_511",  offsetof(struct zxdh_hw_mac_stats, tx_size_256_511)},
> +	{"mac_tx_size_512_1023",  offsetof(struct zxdh_hw_mac_stats,  tx_size_512_1023)},
> +	{"mac_tx_size_1024_1518",  offsetof(struct zxdh_hw_mac_stats, tx_size_1024_1518)},
> +	{"mac_tx_size_1519_mtu",   offsetof(struct zxdh_hw_mac_stats, tx_size_1519_mtu)},
> +	{"mac_tx_undersize",   offsetof(struct zxdh_hw_mac_stats, tx_undersize)},
> +	{"mac_tx_oversize",  offsetof(struct zxdh_hw_mac_stats, tx_oversize)},
> +	{"mac_tx_fragment",  offsetof(struct zxdh_hw_mac_stats, tx_fragment)},
> +	{"mac_tx_jabber",  offsetof(struct zxdh_hw_mac_stats,  tx_jabber)},
> +	{"mac_tx_control",  offsetof(struct zxdh_hw_mac_stats, tx_control)},
> +	{"mac_tx_eee",   offsetof(struct zxdh_hw_mac_stats, tx_eee)},
> +	{"mac_tx_error",   offsetof(struct zxdh_hw_mac_stats, tx_error)},
> +	{"mac_tx_fcs_error",  offsetof(struct zxdh_hw_mac_stats, tx_fcs_error)},
> +	{"mac_tx_drop",  offsetof(struct zxdh_hw_mac_stats, tx_drop)},
> +};
> +
> +static const struct rte_zxdh_xstats_name_off rte_zxdh_mac_bytes_strings[] = {
> +	{"mac_rx_total_bytes",   offsetof(struct zxdh_hw_mac_bytes, rx_total_bytes)},
> +	{"mac_rx_good_bytes",   offsetof(struct zxdh_hw_mac_bytes, rx_good_bytes)},
> +	{"mac_tx_total_bytes",  offsetof(struct zxdh_hw_mac_bytes,  tx_total_bytes)},
> +	{"mac_tx_good_bytes",  offsetof(struct zxdh_hw_mac_bytes, tx_good_bytes)},
> +};
> +
> +static const struct rte_zxdh_xstats_name_off rte_zxdh_vqm_stat_strings[] = {
> +	{"vqm_rx_vport_packets",    offsetof(struct zxdh_hw_stats, rx_total)},
> +	{"vqm_tx_vport_packets",    offsetof(struct zxdh_hw_stats, tx_total)},
> +	{"vqm_rx_vport_bytes",   offsetof(struct zxdh_hw_stats, rx_bytes)},
> +	{"vqm_tx_vport_bytes",   offsetof(struct zxdh_hw_stats, tx_bytes)},
> +	{"vqm_rx_vport_dropped",   offsetof(struct zxdh_hw_stats, rx_drop)},
> +};
> +
> +#define EAL_INTR_EPOLL_WAIT_FOREVER			(-1)
> +#define VLAN_TAG_LEN						4 /* 802.3ac tag (not DMA'd) */
> +
> +#define LOW3_BIT_MASK						0x7
> +#define LOW5_BIT_MASK						0x1f
> +
> +
> +#define ZXDH_VF_LOCK_REG					0x90
> +#define ZXDH_VF_LOCK_ENABLE_MASK			0x1
> +#define ZXDH_COI_TABLE_BASE_ADDR			0x5000
> +#define ZXDH_ACQUIRE_CHANNEL_NUM_MAX		10
> +
> +#define ZXDH_MIN_RX_BUFSIZE					64
> +
> +#define ZXDH_NB_RXQ_XSTATS (sizeof(rte_zxdh_rxq_stat_strings) / \
> +							sizeof(rte_zxdh_rxq_stat_strings[0]))
> +#define ZXDH_NB_TXQ_XSTATS (sizeof(rte_zxdh_txq_stat_strings) / \
> +							sizeof(rte_zxdh_txq_stat_strings[0]))
> +
> +#define ZXDH_NP_XSTATS (sizeof(rte_zxdh_np_stat_strings) / \
> +							sizeof(rte_zxdh_np_stat_strings[0]))
> +
> +#define ZXDH_MAC_XSTATS (sizeof(rte_zxdh_mac_stat_strings) / \
> +							sizeof(rte_zxdh_mac_stat_strings[0]))
> +
> +#define ZXDH_MAC_BYTES (sizeof(rte_zxdh_mac_bytes_strings) / \
> +							sizeof(rte_zxdh_mac_bytes_strings[0]))
> +
> +#define ZXDH_VQM_XSTATS (sizeof(rte_zxdh_vqm_stat_strings) / \
> +							sizeof(rte_zxdh_vqm_stat_strings[0]))
> +
> +static void zxdh_dev_free_mbufs(struct rte_eth_dev *dev);
> +static void zxdh_notify_peers(struct rte_eth_dev *dev);
> +static int32_t zxdh_eth_dev_uninit(struct rte_eth_dev *eth_dev);
> +static void zxdh_priv_res_free(struct zxdh_hw *priv);
> +static void zxdh_queues_unbind_intr(struct rte_eth_dev *dev);
> +static int zxdh_tables_init(struct rte_eth_dev *dev);
> +static int32_t zxdh_free_queues(struct rte_eth_dev *dev);
> +static int32_t zxdh_acquire_lock(struct rte_eth_dev *dev);
> +static int32_t zxdh_release_lock(struct rte_eth_dev *dev);
> +static int32_t zxdh_acquire_channel(struct rte_eth_dev *dev, uint16_t lch);
> +static int32_t zxdh_release_channel(struct rte_eth_dev *dev);
> +
> +static int vf_recv_bar_msg(void *pay_load, uint16_t len, void *reps_buffer,
> +			uint16_t *reps_len, void *eth_dev __rte_unused);
> +static int pf_recv_bar_msg(void *pay_load, uint16_t len, void *reps_buffer,
> +			uint16_t *reps_len, void *eth_dev __rte_unused);
> +static void zxdh_np_destroy(struct rte_eth_dev *dev);
> +static void zxdh_intr_cb_reg(struct rte_eth_dev *dev);
> +static void zxdh_intr_cb_unreg(struct rte_eth_dev *dev);
> +static int32_t zxdh_dev_devargs_parse(struct rte_devargs *devargs, struct zxdh_hw *hw);
> +
> +int32_t zxdh_dev_xstats_get_names(struct rte_eth_dev *dev,
> +			struct rte_eth_xstat_name *xstats_names,
> +			__rte_unused unsigned int limit)
> +{
> +	uint32_t i     = 0;
> +	uint32_t count = 0;
> +	uint32_t t     = 0;
> +	struct zxdh_hw *hw = dev->data->dev_private;
> +	unsigned int nstats = dev->data->nb_tx_queues * ZXDH_NB_TXQ_XSTATS +
> +					dev->data->nb_rx_queues * ZXDH_NB_RXQ_XSTATS +
> +					ZXDH_NP_XSTATS + ZXDH_VQM_XSTATS;
> +
> +	if (hw->is_pf)
> +		nstats += ZXDH_MAC_XSTATS + ZXDH_MAC_BYTES;
> +
> +	if (xstats_names != NULL) {
> +		/* Note: limit checked in rte_eth_xstats_names() */
> +		for (i = 0; i < ZXDH_NP_XSTATS; i++) {
> +			snprintf(xstats_names[count].name, sizeof(xstats_names[count].name),
> +			"%s", rte_zxdh_np_stat_strings[i].name);
> +			count++;
> +		}
> +		if (hw->is_pf) {
> +			for (i = 0; i < ZXDH_MAC_XSTATS; i++) {
> +				snprintf(xstats_names[count].name, sizeof(xstats_names[count].name),
> +				"%s", rte_zxdh_mac_stat_strings[i].name);
> +				count++;
> +			}
> +			for (i = 0; i < ZXDH_MAC_BYTES; i++) {
> +				snprintf(xstats_names[count].name, sizeof(xstats_names[count].name),
> +				"%s", rte_zxdh_mac_bytes_strings[i].name);
> +				count++;
> +			}
> +		}
> +		for (i = 0; i < ZXDH_VQM_XSTATS; i++) {
> +			snprintf(xstats_names[count].name, sizeof(xstats_names[count].name),
> +			"%s", rte_zxdh_vqm_stat_strings[i].name);
> +			count++;
> +		}
> +		for (i = 0; i < dev->data->nb_rx_queues; i++) {
> +			struct virtnet_rx *rxvq = dev->data->rx_queues[i];
> +
> +			if (rxvq == NULL)
> +				continue;
> +			for (t = 0; t < ZXDH_NB_RXQ_XSTATS; t++) {
> +				snprintf(xstats_names[count].name, sizeof(xstats_names[count].name),
> +				"rx_q%u_%s", i, rte_zxdh_rxq_stat_strings[t].name);
> +				count++;
> +			}
> +		}
> +
> +		for (i = 0; i < dev->data->nb_tx_queues; i++) {
> +			struct virtnet_tx *txvq = dev->data->tx_queues[i];
> +
> +			if (txvq == NULL)
> +				continue;
> +			for (t = 0; t < ZXDH_NB_TXQ_XSTATS; t++) {
> +				snprintf(xstats_names[count].name, sizeof(xstats_names[count].name),
> +				"tx_q%u_%s", i, rte_zxdh_txq_stat_strings[t].name);
> +				count++;
> +			}
> +		}
> +		PMD_DRV_LOG(INFO, "stats count  = %u", count);
> +		return count;
> +	}
> +	return nstats;
> +}
> +int32_t zxdh_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats, uint32_t n)
> +{
> +	uint32_t i	   = 0;
> +	uint32_t count = 0;
> +	uint32_t t = 0;
> +	struct zxdh_hw *hw = dev->data->dev_private;
> +	struct zxdh_hw_np_stats np_stats = {0};
> +	struct zxdh_hw_mac_stats mac_stats = {0};
> +	struct zxdh_hw_mac_bytes mac_bytes = {0};
> +	struct zxdh_hw_stats  vqm_stats = {0};
> +	uint32_t nstats = dev->data->nb_tx_queues * ZXDH_NB_TXQ_XSTATS +
> +			dev->data->nb_rx_queues * ZXDH_NB_RXQ_XSTATS +
> +			ZXDH_NP_XSTATS + ZXDH_VQM_XSTATS;
> +
> +	if (hw->is_pf) {
> +		nstats += ZXDH_MAC_XSTATS + ZXDH_MAC_BYTES;
> +		zxdh_hw_mac_get(dev, &mac_stats, &mac_bytes);
> +	}
> +	if (n < nstats)
> +		return nstats;
> +	zxdh_hw_stats_get(dev, ZXDH_VQM_DEV_STATS_GET,  &vqm_stats);
> +	zxdh_hw_np_stats(dev, &np_stats);
> +	for (i = 0; i < ZXDH_NP_XSTATS; i++) {
> +		xstats[count].value = *(uint64_t *)(((char *)&np_stats) +
> +				rte_zxdh_np_stat_strings[i].offset);
> +		xstats[count].id = count;
> +		count++;
> +	}
> +	if (hw->is_pf) {
> +		for (i = 0; i < ZXDH_MAC_XSTATS; i++) {
> +			xstats[count].value = *(uint64_t *)(((char *)&mac_stats) +
> +					rte_zxdh_mac_stat_strings[i].offset);
> +			xstats[count].id = count;
> +			count++;
> +		}
> +		for (i = 0; i < ZXDH_MAC_BYTES; i++) {
> +			xstats[count].value = *(uint64_t *)(((char *)&mac_bytes) +
> +					rte_zxdh_mac_bytes_strings[i].offset);
> +			xstats[count].id = count;
> +			count++;
> +		}
> +	}
> +	for (i = 0; i < ZXDH_VQM_XSTATS; i++) {
> +		xstats[count].value = *(uint64_t *)(((char *)&vqm_stats) +
> +				rte_zxdh_vqm_stat_strings[i].offset);
> +		xstats[count].id = count;
> +		count++;
> +	}
> +	for (i = 0; i < dev->data->nb_rx_queues; i++) {
> +		struct virtnet_rx *rxvq = dev->data->rx_queues[i];
> +
> +		if (rxvq == NULL)
> +			continue;
> +		for (t = 0; t < ZXDH_NB_RXQ_XSTATS; t++) {
> +			xstats[count].value = *(uint64_t *)(((char *)rxvq) +
> +					rte_zxdh_rxq_stat_strings[t].offset);
> +			xstats[count].id = count;
> +			count++;
> +		}
> +	}
> +	for (i = 0; i < dev->data->nb_tx_queues; i++) {
> +		struct virtnet_tx *txvq = dev->data->tx_queues[i];
> +
> +		if (txvq == NULL)
> +			continue;
> +
> +		for (t = 0; t < ZXDH_NB_TXQ_XSTATS; t++) {
> +			xstats[count].value = *(uint64_t *)(((char *)txvq) +
> +					rte_zxdh_txq_stat_strings[t].offset);
> +			xstats[count].id = count;
> +			count++;
> +		}
> +	}
> +	PMD_DRV_LOG(INFO, "stats count  = %u", count);
> +	return count;
> +}
> +/**
> + * Fun:
> + */
> +int32_t zxdh_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
> +{
> +	struct zxdh_hw *hw = dev->data->dev_private;
> +	struct zxdh_hw_stats  vqm_stats = {0};
> +	struct zxdh_hw_np_stats np_stats = {0};
> +	struct zxdh_hw_mac_stats mac_stats = {0};
> +	struct zxdh_hw_mac_bytes mac_bytes = {0};
> +	uint32_t i = 0;
> +
> +	zxdh_hw_stats_get(dev, ZXDH_VQM_DEV_STATS_GET,  &vqm_stats);
> +	if (hw->is_pf)
> +		zxdh_hw_mac_get(dev, &mac_stats, &mac_bytes);
> +
> +	zxdh_hw_np_stats(dev, &np_stats);
> +
> +	stats->ipackets = vqm_stats.rx_total;
> +	stats->opackets = vqm_stats.tx_total;
> +	stats->ibytes = vqm_stats.rx_bytes;
> +	stats->obytes = vqm_stats.tx_bytes;
> +	stats->imissed = vqm_stats.rx_drop + mac_stats.rx_drop;
> +	stats->ierrors = vqm_stats.rx_error + mac_stats.rx_error + np_stats.np_rx_mtu_drop_pkts;
> +	stats->oerrors = vqm_stats.tx_error + mac_stats.tx_error + np_stats.np_tx_mtu_drop_pkts;
> +
> +	if (hw->i_mtr_en || hw->e_mtr_en)
> +		stats->imissed += np_stats.np_rx_mtr_drop_pkts;
> +
> +	stats->rx_nombuf = dev->data->rx_mbuf_alloc_failed;
> +	for (i = 0; (i < dev->data->nb_rx_queues) && (i < RTE_ETHDEV_QUEUE_STAT_CNTRS); i++) {
> +		struct virtnet_rx *rxvq = dev->data->rx_queues[i];
> +
> +		if (rxvq == NULL)
> +			continue;
> +		stats->q_ipackets[i] = *(uint64_t *)(((char *)rxvq) +
> +				rte_zxdh_rxq_stat_strings[0].offset);
> +		stats->q_ibytes[i] = *(uint64_t *)(((char *)rxvq) +
> +				rte_zxdh_rxq_stat_strings[1].offset);
> +		stats->q_errors[i] = *(uint64_t *)(((char *)rxvq) +
> +				rte_zxdh_rxq_stat_strings[2].offset);
> +		stats->q_errors[i] += *(uint64_t *)(((char *)rxvq) +
> +				rte_zxdh_rxq_stat_strings[5].offset);
> +	}
> +
> +	for (i = 0; (i < dev->data->nb_tx_queues) && (i < RTE_ETHDEV_QUEUE_STAT_CNTRS); i++) {
> +		struct virtnet_tx *txvq = dev->data->tx_queues[i];
> +
> +		if (txvq == NULL)
> +			continue;
> +		stats->q_opackets[i] = *(uint64_t *)(((char *)txvq) +
> +				rte_zxdh_txq_stat_strings[0].offset);
> +		stats->q_obytes[i] = *(uint64_t *)(((char *)txvq) +
> +				rte_zxdh_txq_stat_strings[1].offset);
> +		stats->q_errors[i] += *(uint64_t *)(((char *)txvq) +
> +				rte_zxdh_txq_stat_strings[2].offset);
> +		stats->q_errors[i] += *(uint64_t *)(((char *)txvq) +
> +				rte_zxdh_txq_stat_strings[5].offset);
> +	}
> +	return 0;
> +}
> +
> +/**
> + * Fun:
> + */
> +int32_t zxdh_dev_stats_reset(struct rte_eth_dev *dev)
> +{
> +	struct zxdh_hw *hw = dev->data->dev_private;
> +
> +	zxdh_hw_stats_reset(dev, ZXDH_VQM_DEV_STATS_RESET);
> +	if (hw->is_pf)
> +		zxdh_hw_stats_reset(dev, ZXDH_MAC_STATS_RESET);
> +
> +	return 0;
> +}
> +
> +

There are two blank lines between the functions above, and none between
the functions below. These are basic, non-functional syntax issues, so I
won't comment more on them, but please take care of these basics so they
don't grab our attention and get in the way of the real issues.
Please go through the code from scratch to address the syntax issues,
commented-out code, empty comments, etc...
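
To illustrate the expected layout (zxdh_foo/zxdh_bar below are
hypothetical, shown for spacing only): exactly one blank line between
function definitions, and no empty "/** Fun: */" banner comments:

static int32_t zxdh_foo(void)
{
	return 0;
}

static int32_t zxdh_bar(void)
{
	return 0;
}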


> +static void zxdh_init_vring(struct virtqueue *vq)
> +{
> +	int32_t  size	  = vq->vq_nentries;
> +	uint8_t *ring_mem = vq->vq_ring_virt_mem;
> +
> +	PMD_INIT_FUNC_TRACE();
> +
> +	memset(ring_mem, 0, vq->vq_ring_size);
> +
> +	vq->vq_used_cons_idx = 0;
> +	vq->vq_desc_head_idx = 0;
> +	vq->vq_avail_idx	 = 0;
> +	vq->vq_desc_tail_idx = (uint16_t)(vq->vq_nentries - 1);
> +	vq->vq_free_cnt = vq->vq_nentries;
> +	memset(vq->vq_descx, 0, sizeof(struct vq_desc_extra) * vq->vq_nentries);
> +	vring_init_packed(&vq->vq_packed.ring, ring_mem, ZXDH_PCI_VRING_ALIGN, size);
> +	vring_desc_init_packed(vq, size);
> +	/*
> +	 * Disable device(host) interrupting guest
> +	 */
> +	virtqueue_disable_intr(vq);
> +}
> +/**
> + * Fun:
> + */
> +static inline int32_t get_queue_type(uint16_t vtpci_queue_idx)
> +{
> +	if (vtpci_queue_idx % 2 == 0)
> +		return VTNET_RQ;
> +	else
> +		return VTNET_TQ;
> +}
> +/**
> + * Fun:
> + */
> +int32_t zxdh_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_logic_qidx)
> +{
> +	char vq_name[VIRTQUEUE_MAX_NAME_SZ] = {0};
> +	char vq_hdr_name[VIRTQUEUE_MAX_NAME_SZ] = {0};
> +	const struct rte_memzone *mz = NULL;
> +	const struct rte_memzone *hdr_mz = NULL;
> +	uint32_t size = 0;
> +	struct zxdh_hw *hw = dev->data->dev_private;
> +	struct virtnet_rx *rxvq = NULL;
> +	struct virtnet_tx *txvq = NULL;
> +	struct virtqueue *vq = NULL;
> +	size_t sz_hdr_mz = 0;
> +	void *sw_ring = NULL;
> +	int32_t queue_type = get_queue_type(vtpci_logic_qidx);
> +	int32_t numa_node = dev->device->numa_node;
> +	uint16_t vtpci_phy_qidx = 0;
> +	uint32_t vq_size = 0;
> +	int32_t ret = 0;
> +
> +	if (hw->channel_context[vtpci_logic_qidx].valid == 0) {
> +		PMD_INIT_LOG(ERR, "lch %d is invalid", vtpci_logic_qidx);
> +		return -EINVAL;
> +	}
> +	vtpci_phy_qidx = hw->channel_context[vtpci_logic_qidx].ph_chno;
> +
> +	PMD_INIT_LOG(INFO, "vtpci_logic_qidx :%d setting up physical queue: %u on NUMA node %d",
> +			vtpci_logic_qidx, vtpci_phy_qidx, numa_node);
> +
> +	vq_size = hw->q_depth;
> +
> +	if (VTPCI_OPS(hw)->set_queue_num != NULL)
> +		VTPCI_OPS(hw)->set_queue_num(hw, vtpci_phy_qidx, vq_size);
> +
> +	snprintf(vq_name, sizeof(vq_name), "port%d_vq%d", dev->data->port_id, vtpci_phy_qidx);
> +
> +	size = RTE_ALIGN_CEIL(sizeof(*vq) + vq_size * sizeof(struct vq_desc_extra),
> +				RTE_CACHE_LINE_SIZE);
> +	if (queue_type == VTNET_TQ) {
> +		/*
> +		 * For each xmit packet, allocate a zxdh_net_hdr
> +		 * and indirect ring elements
> +		 */
> +		sz_hdr_mz = vq_size * sizeof(struct zxdh_tx_region);
> +	}
> +
> +	vq = rte_zmalloc_socket(vq_name, size, RTE_CACHE_LINE_SIZE, numa_node);
> +	if (vq == NULL) {
> +		PMD_INIT_LOG(ERR, "can not allocate vq");
> +		return -ENOMEM;
> +	}
> +	hw->vqs[vtpci_logic_qidx] = vq;
> +
> +	vq->hw = hw;
> +	vq->vq_queue_index = vtpci_phy_qidx;
> +	vq->vq_nentries = vq_size;
> +
> +	vq->vq_packed.used_wrap_counter = 1;
> +	vq->vq_packed.cached_flags = VRING_PACKED_DESC_F_AVAIL;
> +	vq->vq_packed.event_flags_shadow = 0;
> +	if (queue_type == VTNET_RQ)
> +		vq->vq_packed.cached_flags |= VRING_DESC_F_WRITE;
> +
> +	/*
> +	 * Reserve a memzone for vring elements
> +	 */
> +	size = vring_size(hw, vq_size, ZXDH_PCI_VRING_ALIGN);
> +	vq->vq_ring_size = RTE_ALIGN_CEIL(size, ZXDH_PCI_VRING_ALIGN);
> +	PMD_INIT_LOG(DEBUG, "vring_size: %d, rounded_vring_size: %d", size, vq->vq_ring_size);
> +
> +	mz = rte_memzone_reserve_aligned(vq_name, vq->vq_ring_size,
> +				numa_node, RTE_MEMZONE_IOVA_CONTIG,
> +				ZXDH_PCI_VRING_ALIGN);
> +	if (mz == NULL) {
> +		if (rte_errno == EEXIST)
> +			mz = rte_memzone_lookup(vq_name);
> +		if (mz == NULL) {
> +			ret = -ENOMEM;
> +			goto fail_q_alloc;
> +		}
> +	}
> +
> +	memset(mz->addr, 0, mz->len);
> +
> +	vq->vq_ring_mem = mz->iova;
> +	vq->vq_ring_virt_mem = mz->addr;
> +	PMD_INIT_LOG(DEBUG, "vq->vq_ring_mem:	   0x%" PRIx64, (uint64_t)mz->iova);
> +	PMD_INIT_LOG(DEBUG, "vq->vq_ring_virt_mem: 0x%" PRIx64, (uint64_t)(uintptr_t)mz->addr);
> +
> +	zxdh_init_vring(vq);
> +
> +	if (sz_hdr_mz) {
> +		snprintf(vq_hdr_name, sizeof(vq_hdr_name), "port%d_vq%d_hdr",
> +					dev->data->port_id, vtpci_phy_qidx);
> +		hdr_mz = rte_memzone_reserve_aligned(vq_hdr_name, sz_hdr_mz,
> +					numa_node, RTE_MEMZONE_IOVA_CONTIG,
> +					RTE_CACHE_LINE_SIZE);
> +		if (hdr_mz == NULL) {
> +			if (rte_errno == EEXIST)
> +				hdr_mz = rte_memzone_lookup(vq_hdr_name);
> +			if (hdr_mz == NULL) {
> +				ret = -ENOMEM;
> +				goto fail_q_alloc;
> +			}
> +		}
> +	}
> +
> +	if (queue_type == VTNET_RQ) {
> +		size_t sz_sw = (ZXDH_MBUF_BURST_SZ + vq_size) * sizeof(vq->sw_ring[0]);
> +
> +		sw_ring = rte_zmalloc_socket("sw_ring", sz_sw, RTE_CACHE_LINE_SIZE, numa_node);
> +		if (!sw_ring) {
> +			PMD_INIT_LOG(ERR, "can not allocate RX soft ring");
> +			ret = -ENOMEM;
> +			goto fail_q_alloc;
> +		}
> +
> +		vq->sw_ring = sw_ring;
> +		rxvq = &vq->rxq;
> +		rxvq->vq = vq;
> +		rxvq->port_id = dev->data->port_id;
> +		rxvq->mz = mz;
> +	} else {             /* queue_type == VTNET_TQ */
> +		txvq = &vq->txq;
> +		txvq->vq = vq;
> +		txvq->port_id = dev->data->port_id;
> +		txvq->mz = mz;
> +		txvq->virtio_net_hdr_mz = hdr_mz;
> +		txvq->virtio_net_hdr_mem = hdr_mz->iova;
> +	}
> +
> +	vq->offset = offsetof(struct rte_mbuf, buf_iova);
> +	if (queue_type == VTNET_TQ) {
> +		struct zxdh_tx_region *txr = hdr_mz->addr;
> +		uint32_t i;
> +
> +		memset(txr, 0, vq_size * sizeof(*txr));
> +		for (i = 0; i < vq_size; i++) {
> +			/* first indirect descriptor is always the tx header */
> +			struct vring_packed_desc *start_dp = txr[i].tx_packed_indir;
> +
> +			vring_desc_init_indirect_packed(start_dp, RTE_DIM(txr[i].tx_packed_indir));
> +			start_dp->addr = txvq->virtio_net_hdr_mem + i * sizeof(*txr) +
> +					offsetof(struct zxdh_tx_region, tx_hdr);
> +			/* length will be updated to actual pi hdr size when xmit pkt */
> +			start_dp->len = 0;
> +		}
> +	}
> +	if (VTPCI_OPS(hw)->setup_queue(hw, vq) < 0) {
> +		PMD_INIT_LOG(ERR, "setup_queue failed");
> +		return -EINVAL;
> +	}
> +	return 0;
> +fail_q_alloc:
> +	rte_free(sw_ring);
> +	rte_memzone_free(hdr_mz);
> +	rte_memzone_free(mz);
> +	rte_free(vq);
> +	return ret;
> +}
> +
> +int32_t zxdh_free_queues(struct rte_eth_dev *dev)
> +{
> +	struct zxdh_hw *hw = dev->data->dev_private;
> +	uint16_t nr_vq = hw->queue_num;
> +	struct virtqueue *vq = NULL;
> +	int32_t queue_type = 0;
> +	uint16_t i = 0;
> +
> +	if (hw->vqs == NULL)
> +		return 0;
> +
> +	/* Clear COI table */
> +	if (zxdh_release_channel(dev) < 0) {
> +		PMD_INIT_LOG(ERR, "Failed to clear coi table");
> +		return -1;
> +	}
> +
> +	for (i = 0; i < nr_vq; i++) {
> +		vq = hw->vqs[i];
> +		if (vq == NULL)
> +			continue;
> +
> +		VTPCI_OPS(hw)->del_queue(hw, vq);
> +		queue_type = get_queue_type(i);
> +		if (queue_type == VTNET_RQ) {
> +			rte_free(vq->sw_ring);
> +			rte_memzone_free(vq->rxq.mz);
> +		} else if (queue_type == VTNET_TQ) {
> +			rte_memzone_free(vq->txq.mz);
> +			rte_memzone_free(vq->txq.virtio_net_hdr_mz);
> +		}
> +
> +		rte_free(vq);
> +		hw->vqs[i] = NULL;
> +		PMD_INIT_LOG(DEBUG, "Release to queue %d success!", i);
> +	}
> +
> +	rte_free(hw->vqs);
> +	hw->vqs = NULL;
> +
> +	return 0;
> +}
> +/**
> + * Fun:
> + */
> +static int32_t zxdh_alloc_queues(struct rte_eth_dev *dev, uint16_t nr_vq)
> +{
> +	uint16_t lch;
> +	struct zxdh_hw *hw = dev->data->dev_private;
> +
> +	hw->vqs = rte_zmalloc(NULL, sizeof(struct virtqueue *) * nr_vq, 0);
> +	if (!hw->vqs) {
> +		PMD_INIT_LOG(ERR, "Failed to allocate vqs");
> +		return -ENOMEM;
> +	}
> +	for (lch = 0; lch < nr_vq; lch++) {
> +		if (zxdh_acquire_channel(dev, lch) < 0) {
> +			PMD_INIT_LOG(ERR, "Failed to acquire the channels");
> +			zxdh_free_queues(dev);
> +			return -1;
> +		}
> +		if (zxdh_init_queue(dev, lch) < 0) {
> +			PMD_INIT_LOG(ERR, "Failed to alloc virtio queue");
> +			zxdh_free_queues(dev);
> +			return -1;
> +		}
> +	}
> +	return 0;
> +}
> +
> +int32_t zxdh_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
> +{
> +	struct zxdh_hw	*hw   = dev->data->dev_private;
> +	struct virtnet_rx *rxvq = dev->data->rx_queues[queue_id];
> +	struct virtqueue  *vq	= rxvq->vq;
> +
> +	virtqueue_enable_intr(vq);
> +	zxdh_mb(hw->weak_barriers);
> +	return 0;
> +}
> +
> +int32_t zxdh_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
> +{
> +	struct virtnet_rx *rxvq = dev->data->rx_queues[queue_id];
> +	struct virtqueue  *vq	= rxvq->vq;
> +
> +	virtqueue_disable_intr(vq);
> +	return 0;
> +}
> +
> +
> +static int32_t zxdh_intr_unmask(struct rte_eth_dev *dev)
> +{
> +	struct zxdh_hw *hw = dev->data->dev_private;
> +
> +	if (rte_intr_ack(dev->intr_handle) < 0)
> +		return -1;
> +
> +	hw->use_msix = zxdh_vtpci_msix_detect(RTE_ETH_DEV_TO_PCI(dev));
> +
> +	return 0;
> +}
> +
> +static int32_t zxdh_intr_enable(struct rte_eth_dev *dev)
> +{
> +	int ret = 0;
> +	struct zxdh_hw *hw = dev->data->dev_private;
> +
> +	if (!hw->intr_enabled) {
> +		zxdh_intr_cb_reg(dev);
> +		ret = rte_intr_enable(dev->intr_handle);
> +		if (unlikely(ret))
> +			PMD_INIT_LOG(ERR, "Failed to enable %s intr", dev->data->name);
> +
> +		hw->intr_enabled = 1;
> +	}
> +	return ret;
> +}
> +/**
> + * Fun:
> + */
> +static int32_t zxdh_intr_disable(struct rte_eth_dev *dev)
> +{
> +	struct zxdh_hw *hw = dev->data->dev_private;
> +
> +	if (!hw->intr_enabled)
> +		return 0;
> +
> +	zxdh_intr_cb_unreg(dev);
> +	if (rte_intr_disable(dev->intr_handle) < 0)
> +		return -1;
> +
> +	hw->intr_enabled = 0;
> +	return 0;
> +}
> +/**
> + * Fun:
> + */
> +static int32_t zxdh_dev_link_update(struct rte_eth_dev *dev, __rte_unused int32_t wait_to_complete)
> +{
> +	struct rte_eth_link link;
> +	struct zxdh_hw *hw = dev->data->dev_private;
> +	int32_t ret = 0;
> +
> +	memset(&link, 0, sizeof(link));
> +	link.link_duplex = hw->duplex;
> +	link.link_speed  = hw->speed;
> +	link.link_autoneg = RTE_ETH_LINK_AUTONEG;
> +
> +	if (!hw->started) {
> +		PMD_INIT_LOG(INFO, "port not start");
> +		link.link_status = RTE_ETH_LINK_DOWN;
> +		link.link_speed  = RTE_ETH_SPEED_NUM_UNKNOWN;
> +	}
> +	PMD_DRV_LOG(INFO, "Get link status from hw");
> +	ret = zxdh_link_info_get(dev, &link);
> +	if (ret != 0) {
> +		PMD_DRV_LOG(ERR, " Failed to get link status from hw\n");
> +		return ret;
> +	}
> +	link.link_status &= hw->admin_status;
> +	if (link.link_status == RTE_ETH_LINK_DOWN)
> +		link.link_speed  = RTE_ETH_SPEED_NUM_UNKNOWN;
> +
> +	PMD_DRV_LOG(INFO, "link.link_status %u link.link_speed %u link.link_duplex %u ",
> +			link.link_status, link.link_speed, link.link_duplex);
> +	ret = zxdh_dev_config_port_status(dev, link.link_status);
> +	if (ret != 0) {
> +		PMD_DRV_LOG(ERR, "set port attr.is_up = %u failed.", link.link_status);
> +		return ret;
> +	}
> +	return rte_eth_linkstatus_set(dev, &link);
> +}
> +/*
> + * Process  dev config changed interrupt. Call the callback
> + * if link state changed, generate gratuitous RARP packet if
> + * the status indicates an ANNOUNCE.
> + */
> +#define ZXDH_NET_S_LINK_UP   1 /* Link is up */
> +#define ZXDH_NET_S_ANNOUNCE  2 /* Announcement is needed */
> +
> +
> +#define ZXDH_PF_STATE_VF_AUTO 0
> +#define ZXDH_PF_STATE_VF_ENABLE 1
> +#define ZXDH_PF_STATE_VF_DSIABLE 2
> +static void zxdh_devconf_intr_handler(void *param)
> +{
> +	struct rte_eth_dev *dev = param;
> +	struct zxdh_hw *hw = dev->data->dev_private;
> +	uint16_t status = 0;
> +	/* Read interrupt status which clears interrupt */
> +	uint8_t isr = zxdh_vtpci_isr(hw);
> +
> +	if (zxdh_intr_unmask(dev) < 0)
> +		PMD_DRV_LOG(ERR, "interrupt enable failed");
> +	if (isr & ZXDH_PCI_ISR_CONFIG) {
> +		if (zxdh_dev_link_update(dev, 0) == 0)
> +			rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_LSC, NULL);
> +
> +		if (vtpci_with_feature(hw, ZXDH_NET_F_STATUS)) {
> +			zxdh_vtpci_read_dev_config(hw, offsetof(struct zxdh_net_config, status),
> +					&status, sizeof(status));
> +			if (status & ZXDH_NET_S_ANNOUNCE)
> +				zxdh_notify_peers(dev);
> +		}
> +	}
> +}
> +
> +/* Interrupt handler triggered by NIC for handling specific interrupt. */
> +static void zxdh_fromriscv_intr_handler(void *param)
> +{
> +	struct rte_eth_dev *dev = param;
> +	struct zxdh_hw *hw = dev->data->dev_private;
> +	uint64_t virt_addr = 0;
> +
> +	virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_CTRLCH_OFFSET);
> +	if (hw->is_pf) {
> +		PMD_INIT_LOG(INFO, "zxdh_risc2pf_intr_handler  PF ");
> +		zxdh_bar_irq_recv(MSG_CHAN_END_RISC, MSG_CHAN_END_PF, virt_addr, dev);
> +	} else {
> +		PMD_INIT_LOG(INFO, "zxdh_riscvf_intr_handler  VF ");
> +		zxdh_bar_irq_recv(MSG_CHAN_END_RISC, MSG_CHAN_END_VF, virt_addr, dev);
> +
> +	}
> +}
> +
> +/* Interrupt handler triggered by NIC for handling specific interrupt. */
> +static void zxdh_frompfvf_intr_handler(void *param)
> +{
> +	struct rte_eth_dev *dev = param;
> +	struct zxdh_hw *hw = dev->data->dev_private;
> +	uint64_t virt_addr = 0;
> +
> +	virt_addr = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_MSG_CHAN_PFVFSHARE_OFFSET);
> +	if (hw->is_pf) {
> +		PMD_INIT_LOG(INFO, "zxdh_pf2vf_intr_handler  PF ");
> +		zxdh_bar_irq_recv(MSG_CHAN_END_VF, MSG_CHAN_END_PF, virt_addr, dev);
> +	} else {
> +		PMD_INIT_LOG(INFO, "zxdh_pf2vf_intr_handler  VF ");
> +		zxdh_bar_irq_recv(MSG_CHAN_END_PF, MSG_CHAN_END_VF, virt_addr, dev);
> +
> +	}
> +}
> +
> +static int32_t zxdh_intr_release(struct rte_eth_dev *dev)
> +{
> +	struct zxdh_hw *hw = dev->data->dev_private;
> +
> +	if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
> +		VTPCI_OPS(hw)->set_config_irq(hw, ZXDH_MSI_NO_VECTOR);
> +
> +	zxdh_queues_unbind_intr(dev);
> +	zxdh_intr_disable(dev);
> +
> +	rte_intr_efd_disable(dev->intr_handle);
> +	rte_intr_vec_list_free(dev->intr_handle);
> +	rte_free(hw->risc_intr);
> +	hw->risc_intr = NULL;
> +	rte_free(hw->dtb_intr);
> +	hw->dtb_intr = NULL;
> +	return 0;
> +}
> +
> +static uint64_t get_cur_time_ms(void)
> +{
> +	return (rte_rdtsc() / rte_get_tsc_hz());
> +}
> +
> +static int16_t zxdh_promisc_unint(struct zxdh_hw *hw)
> +{
> +	int16_t ret = 0, vf_group_id = 0;
> +	struct zxdh_brocast_t brocast_table = {0};
> +	struct zxdh_unitcast_t uc_table = {0};
> +	struct zxdh_multicast_t mc_table = {0};
> +
> +	for (; vf_group_id < 4; vf_group_id++) {
> +		DPP_DTB_ERAM_ENTRY_INFO_T eram_brocast_entry = {
> +			((hw->vfid - ZXDH_BASE_VFID) << 2) + vf_group_id,
> +			(ZXIC_UINT32 *)&brocast_table
> +		};
> +		DPP_DTB_USER_ENTRY_T eram_brocast = {
> +			.sdt_no = ZXDH_SDT_BROCAST_ATT_TABLE,
> +			.p_entry_data = (void *)&eram_brocast_entry
> +		};
> +
> +		ret = dpp_dtb_table_entry_delete(DEVICE_NO, g_dtb_data.queueid, 1, &eram_brocast);
> +		if (ret) {
> +			PMD_DRV_LOG(ERR, "Write eram-promisc failed, code:%d", ret);
> +			return ret;
> +		}
> +
> +		DPP_DTB_ERAM_ENTRY_INFO_T eram_uc_entry = {
> +			((hw->vfid - ZXDH_BASE_VFID) << 2) + vf_group_id,
> +			(ZXIC_UINT32 *)&uc_table
> +		};
> +		DPP_DTB_USER_ENTRY_T entry_unicast = {
> +			.sdt_no = ZXDH_SDT_UNICAST_ATT_TABLE,
> +			.p_entry_data = (void *)&eram_uc_entry
> +		};
> +
> +		ret = dpp_dtb_table_entry_delete(DEVICE_NO, g_dtb_data.queueid, 1, &entry_unicast);
> +		if (ret) {
> +			PMD_DRV_LOG(ERR, "Write eram-promisc failed, code:%d", ret);
> +			return ret;
> +		}
> +
> +		DPP_DTB_ERAM_ENTRY_INFO_T eram_mc_entry = {
> +			((hw->vfid - ZXDH_BASE_VFID) << 2) + vf_group_id,
> +			(ZXIC_UINT32 *)&mc_table
> +		};
> +		DPP_DTB_USER_ENTRY_T entry_multicast = {
> +			.sdt_no = ZXDH_SDT_MULTICAST_ATT_TABLE,
> +			.p_entry_data = (void *)&eram_mc_entry
> +		};
> +
> +		ret = dpp_dtb_table_entry_delete(DEVICE_NO, g_dtb_data.queueid,
> +					1, &entry_multicast);
> +		if (ret) {
> +			PMD_DRV_LOG(ERR, "Write eram-promisc failed, code:%d", ret);
> +			return ret;
> +		}
> +	}
> +	return ret;
> +}
> +
> +
> +static int16_t zxdh_port_unint(struct rte_eth_dev *dev)
> +{
> +	struct zxdh_hw *hw = dev->data->dev_private;
> +	struct zxdh_msg_info msg_info = {0};
> +	struct zxdh_port_att_entry port_attr = {0};
> +	int16_t ret = 0;
> +
> +	if (hw->i_mtr_en || hw->e_mtr_en)
> +		zxdh_mtr_release(dev);
> +
> +
> +	if (hw->is_pf == 1) {
> +		DPP_DTB_ERAM_ENTRY_INFO_T port_attr_entry = {hw->vfid, (ZXIC_UINT32 *)&port_attr};
> +		DPP_DTB_USER_ENTRY_T entry = {
> +			.sdt_no = ZXDH_SDT_VPORT_ATT_TABLE,
> +			.p_entry_data = (void *)&port_attr_entry
> +		};
> +		ret = dpp_dtb_table_entry_delete(DEVICE_NO, g_dtb_data.queueid, 1, &entry);
> +		if (ret) {
> +			PMD_DRV_LOG(ERR, "Write port_attr_eram failed, code:%d", ret);
> +			return ret;
> +		}
> +
> +		ret = zxdh_promisc_unint(hw);
> +		if (ret) {
> +			PMD_DRV_LOG(ERR, "Write promisc_table failed, code:%d", ret);
> +			return ret;
> +		}
> +	} else {
> +		msg_head_build(hw, ZXDH_VF_PORT_UNINIT, &msg_info);
> +		ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0);
> +		if (ret)
> +			PMD_DRV_LOG(ERR, "vf port_init failed");
> +
> +	}
> +	return ret;
> +}
> +/**
> + * Fun:
> + */
> +int32_t zxdh_dev_close(struct rte_eth_dev *dev)
> +{
> +	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
> +		return 0;
> +	PMD_INIT_LOG(DEBUG, "zxdh_dev_close");
> +	int ret = zxdh_dev_stop(dev);
> +
> +	if (ret != 0) {
> +		PMD_INIT_LOG(ERR, "%s :stop port %s failed ", __func__, dev->device->name);
> +		return -1;
> +	}
> +	struct zxdh_hw *hw = dev->data->dev_private;
> +
> +	hw->started = 0;
> +	hw->admin_status = 0;
> +
> +	ret = zxdh_port_unint(dev);
> +	if (ret != 0) {
> +		PMD_INIT_LOG(ERR, "%s :unint port %s failed ", __func__, dev->device->name);
> +		return -1;
> +	}
> +	if (zxdh_shared_data != NULL)
> +		zxdh_mtr_release(dev);
> +
> +	zxdh_intr_release(dev);
> +
> +	PMD_DRV_LOG(INFO, "zxdh_dtb_data_destroy  begin  time: %ld s", get_cur_time_ms());
> +	zxdh_np_destroy(dev);
> +	PMD_DRV_LOG(INFO, "zxdh_dtb_data_destroy  end  time: %ld s", get_cur_time_ms());
> +
> +	zxdh_vtpci_reset(hw);
> +	zxdh_dev_free_mbufs(dev);
> +	zxdh_free_queues(dev);
> +
> +	zxdh_bar_msg_chan_exit();
> +	zxdh_priv_res_free(hw);
> +
> +	if (dev->data->mac_addrs != NULL) {
> +		rte_free(dev->data->mac_addrs);
> +		dev->data->mac_addrs = NULL;
> +	}
> +	if (dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key != NULL) {
> +		rte_free(dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key);
> +		dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key = NULL;
> +	}
> +	return 0;
> +}
> +/**
> + * Fun:
> + */
> +#define ZXDH_PMD_DEFAULT_HOST_FEATURES   \
> +	(1ULL << ZXDH_NET_F_MRG_RXBUF | \
> +	 1ULL << ZXDH_NET_F_STATUS    | \
> +	 1ULL << ZXDH_NET_F_MQ        | \
> +	 1ULL << ZXDH_F_ANY_LAYOUT    | \
> +	 1ULL << ZXDH_F_VERSION_1   | \
> +	 1ULL << ZXDH_F_RING_PACKED | \
> +	 1ULL << ZXDH_F_IN_ORDER    | \
> +	 1ULL << ZXDH_F_ORDER_PLATFORM | \
> +	 1ULL << ZXDH_F_NOTIFICATION_DATA |\
> +	 1ULL << ZXDH_NET_F_MAC | \
> +	 1ULL << ZXDH_NET_F_CSUM |\
> +	 1ULL << ZXDH_NET_F_GUEST_CSUM |\
> +	 1ULL << ZXDH_NET_F_GUEST_TSO4 |\
> +	 1ULL << ZXDH_NET_F_GUEST_TSO6 |\
> +	 1ULL << ZXDH_NET_F_HOST_TSO4 |\
> +	 1ULL << ZXDH_NET_F_HOST_TSO6 |\
> +	 1ULL << ZXDH_NET_F_GUEST_UFO |\
> +	 1ULL << ZXDH_NET_F_HOST_UFO)
> +
> +#define ZXDH_PMD_DEFAULT_GUEST_FEATURES   \
> +	(1ULL << ZXDH_NET_F_MRG_RXBUF | \
> +	 1ULL << ZXDH_NET_F_STATUS    | \
> +	 1ULL << ZXDH_NET_F_MQ        | \
> +	 1ULL << ZXDH_F_ANY_LAYOUT    | \
> +	 1ULL << ZXDH_F_VERSION_1     | \
> +	 1ULL << ZXDH_F_RING_PACKED   | \
> +	 1ULL << ZXDH_F_IN_ORDER      | \
> +	 1ULL << ZXDH_F_NOTIFICATION_DATA | \
> +	 1ULL << ZXDH_NET_F_MAC)
> +
> +#define ZXDH_RX_QUEUES_MAX  128U
> +#define ZXDH_TX_QUEUES_MAX  128U
> +static int32_t zxdh_get_pci_dev_config(struct zxdh_hw *hw)
> +{
> +	hw->host_features = zxdh_vtpci_get_features(hw);
> +	hw->host_features = ZXDH_PMD_DEFAULT_HOST_FEATURES;
> +
> +	uint64_t guest_features = (uint64_t)ZXDH_PMD_DEFAULT_GUEST_FEATURES;
> +	uint64_t nego_features = guest_features & hw->host_features;
> +
> +	hw->guest_features = nego_features;
> +
> +	if (hw->guest_features & (1ULL << ZXDH_NET_F_MAC)) {
> +		zxdh_vtpci_read_dev_config(hw, offsetof(struct zxdh_net_config, mac),
> +				&hw->mac_addr, RTE_ETHER_ADDR_LEN);
> +		PMD_INIT_LOG(DEBUG, "get dev mac: %02X:%02X:%02X:%02X:%02X:%02X",
> +				hw->mac_addr[0], hw->mac_addr[1],
> +				hw->mac_addr[2], hw->mac_addr[3],
> +				hw->mac_addr[4], hw->mac_addr[5]);
> +	} else {
> +		rte_eth_random_addr(&hw->mac_addr[0]);
> +		PMD_INIT_LOG(DEBUG, "random dev mac: %02X:%02X:%02X:%02X:%02X:%02X",
> +				hw->mac_addr[0], hw->mac_addr[1],
> +				hw->mac_addr[2], hw->mac_addr[3],
> +				hw->mac_addr[4], hw->mac_addr[5]);
> +	}
> +	uint32_t max_queue_pairs;
> +
> +	zxdh_vtpci_read_dev_config(hw, offsetof(struct zxdh_net_config, max_virtqueue_pairs),
> +			&max_queue_pairs, sizeof(max_queue_pairs));
> +	PMD_INIT_LOG(DEBUG, "get max queue pairs %u", max_queue_pairs);
> +	if (max_queue_pairs == 0)
> +		hw->max_queue_pairs = ZXDH_RX_QUEUES_MAX;
> +	else
> +		hw->max_queue_pairs = RTE_MIN(ZXDH_RX_QUEUES_MAX, max_queue_pairs);
> +
> +	PMD_INIT_LOG(INFO, "set max queue pairs %d", hw->max_queue_pairs);
> +
> +	hw->weak_barriers = !vtpci_with_feature(hw, ZXDH_F_ORDER_PLATFORM);
> +	return 0;
> +}
> +
> +int32_t zxdh_dev_pause(struct rte_eth_dev *dev)
> +{
> +	struct zxdh_hw *hw = dev->data->dev_private;
> +
> +	rte_spinlock_lock(&hw->state_lock);
> +
> +	if (hw->started == 0) {
> +		/* Device is just stopped. */
> +		rte_spinlock_unlock(&hw->state_lock);
> +		return -1;
> +	}
> +	hw->started = 0;
> +	hw->admin_status = 0;
> +	/*
> +	 * Prevent the worker threads from touching queues to avoid contention,
> +	 * 1 ms should be enough for the ongoing Tx function to finish.
> +	 */
> +	rte_delay_ms(1);
> +	return 0;
> +}
> +
> +/*
> + * Recover hw state to let the worker threads continue.
> + */
> +void zxdh_dev_resume(struct rte_eth_dev *dev)
> +{
> +	struct zxdh_hw *hw = dev->data->dev_private;
> +
> +	hw->started = 1;
> +	hw->admin_status = 1;
> +	rte_spinlock_unlock(&hw->state_lock);
> +}
> +
> +/*
> + * Should be called only after device is paused.
> + */
> +int32_t zxdh_inject_pkts(struct rte_eth_dev *dev, struct rte_mbuf **tx_pkts, int32_t nb_pkts)
> +{
> +	struct zxdh_hw	*hw   = dev->data->dev_private;
> +	struct virtnet_tx *txvq = dev->data->tx_queues[0];
> +	int32_t ret = 0;
> +
> +	hw->inject_pkts = tx_pkts;
> +	ret = dev->tx_pkt_burst(txvq, tx_pkts, nb_pkts);
> +	hw->inject_pkts = NULL;
> +
> +	return ret;
> +}
>

Why does the driver need to inject packets?


btw, this function seems to be called only from this file, why not make
it 'static'?
Please make functions static as much as possible.
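
For example, something like below (illustrative diff only):

-int32_t zxdh_inject_pkts(struct rte_eth_dev *dev, struct rte_mbuf **tx_pkts, int32_t nb_pkts)
+static int32_t zxdh_inject_pkts(struct rte_eth_dev *dev, struct rte_mbuf **tx_pkts, int32_t nb_pkts)

plus removing the declaration from the header, if one exists.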

> +
> +static void zxdh_notify_peers(struct rte_eth_dev *dev)
> +{
> +	struct zxdh_hw *hw = dev->data->dev_private;
> +	struct virtnet_rx *rxvq = NULL;
> +	struct rte_mbuf *rarp_mbuf = NULL;
> +
> +	if (!dev->data->rx_queues)
> +		return;
> +
> +	rxvq = dev->data->rx_queues[0];
> +	if (!rxvq)
> +		return;
> +
> +	rarp_mbuf = rte_net_make_rarp_packet(rxvq->mpool, (struct rte_ether_addr *)hw->mac_addr);
> +	if (rarp_mbuf == NULL) {
> +		PMD_DRV_LOG(ERR, "failed to make RARP packet.");
> +		return;
> +	}
> +
> +	/* If virtio port just stopped, no need to send RARP */
> +	if (zxdh_dev_pause(dev) < 0) {
> +		rte_pktmbuf_free(rarp_mbuf);
> +		return;
> +	}
> +
> +	zxdh_inject_pkts(dev, &rarp_mbuf, 1);
> +	zxdh_dev_resume(dev);
> +}
> +/**
> + * Fun:
> + */
> +static int32_t set_rxtx_funcs(struct rte_eth_dev *eth_dev)
> +{
> +	eth_dev->tx_pkt_prepare = zxdh_xmit_pkts_prepare;
> +	struct zxdh_hw *hw = eth_dev->data->dev_private;
> +
> +	if (!vtpci_packed_queue(hw)) {
> +		PMD_INIT_LOG(ERR, " port %u not support packed queue", eth_dev->data->port_id);
> +		return -1;
> +	}
> +	if (!vtpci_with_feature(hw, ZXDH_NET_F_MRG_RXBUF)) {
> +		PMD_INIT_LOG(ERR, " port %u not support rx mergeable", eth_dev->data->port_id);
> +		return -1;
> +	}
> +	/* */
> +	eth_dev->tx_pkt_burst = &zxdh_xmit_pkts_packed;
> +	eth_dev->rx_pkt_burst = &zxdh_recv_mergeable_pkts_packed;
> +	return 0;
> +}
> +/* Only support 1:1 queue/interrupt mapping so far.
> + * TODO: support n:1 queue/interrupt mapping when there are limited number of
> + * interrupt vectors (<N+1).
> + */
> +static int32_t zxdh_queues_bind_intr(struct rte_eth_dev *dev)
> +{
> +	struct zxdh_hw *hw = dev->data->dev_private;
> +	int32_t i;
> +	uint16_t vec;
> +
> +	if (!dev->data->dev_conf.intr_conf.rxq) {
> +		PMD_INIT_LOG(INFO, "queue/interrupt mask, nb_rx_queues %u",
> +				dev->data->nb_rx_queues);
> +		for (i = 0; i < dev->data->nb_rx_queues; ++i) {
> +			vec = VTPCI_OPS(hw)->set_queue_irq(hw,
> +					hw->vqs[i * 2], ZXDH_MSI_NO_VECTOR);
> +			PMD_INIT_LOG(INFO, "vq%d irq set 0x%x, get 0x%x",
> +					i * 2, ZXDH_MSI_NO_VECTOR, vec);
> +		}
> +	} else {
> +		PMD_INIT_LOG(DEBUG, "queue/interrupt binding, nb_rx_queues %u",
> +				dev->data->nb_rx_queues);
> +		for (i = 0; i < dev->data->nb_rx_queues; ++i) {
> +			vec = VTPCI_OPS(hw)->set_queue_irq(hw,
> +					hw->vqs[i * 2], i + ZXDH_QUE_INTR_VEC_BASE);
> +			PMD_INIT_LOG(INFO, "vq%d irq set %d, get %d",
> +					i * 2, i + ZXDH_QUE_INTR_VEC_BASE, vec);
> +		}
> +	}
> +	/* mask all txq intr */
> +	for (i = 0; i < dev->data->nb_tx_queues; ++i) {
> +		vec = VTPCI_OPS(hw)->set_queue_irq(hw,
> +				hw->vqs[(i * 2) + 1], ZXDH_MSI_NO_VECTOR);
> +		PMD_INIT_LOG(INFO, "vq%d irq set 0x%x, get 0x%x",
> +				(i * 2) + 1, ZXDH_MSI_NO_VECTOR, vec);
> +	}
> +	return 0;
> +}
> +
> +static void zxdh_queues_unbind_intr(struct rte_eth_dev *dev)
> +{
> +	PMD_INIT_LOG(INFO, "queue/interrupt unbinding");
> +	struct zxdh_hw *hw = dev->data->dev_private;
> +	int32_t i;
> +
> +	for (i = 0; i < dev->data->nb_rx_queues; ++i) {
> +		VTPCI_OPS(hw)->set_queue_irq(hw, hw->vqs[i * 2], ZXDH_MSI_NO_VECTOR);
> +		VTPCI_OPS(hw)->set_queue_irq(hw, hw->vqs[i * 2 + 1], ZXDH_MSI_NO_VECTOR);
> +	}
> +}
> +/**
> + * Fun:
> + */
> +static int32_t zxdh_setup_dtb_interrupts(struct rte_eth_dev *dev)
> +{
> +	struct zxdh_hw *hw = dev->data->dev_private;
> +
> +	if (!hw->dtb_intr) {
> +		hw->dtb_intr = rte_zmalloc("dtb_intr", sizeof(struct rte_intr_handle), 0);
> +		if (hw->dtb_intr == NULL) {
> +			PMD_INIT_LOG(ERR, "Failed to allocate dtb_intr");
> +			return -ENOMEM;
> +		}
> +	}
> +
> +	if (dev->intr_handle->efds[ZXDH_MSIX_INTR_DTB_VEC - 1] < 0) {
> +		PMD_INIT_LOG(ERR, "[%d]dtb interrupt fd is invalid", ZXDH_MSIX_INTR_DTB_VEC - 1);
> +		rte_free(hw->dtb_intr);
> +		hw->dtb_intr = NULL;
> +		return -1;
> +	}
> +	hw->dtb_intr->fd = dev->intr_handle->efds[ZXDH_MSIX_INTR_DTB_VEC - 1];
> +	hw->dtb_intr->type = dev->intr_handle->type;
> +	return 0;
> +}
> +/**
> + * Fun:
> + */
> +static int32_t zxdh_setup_risc_interrupts(struct rte_eth_dev *dev)
> +{
> +	struct zxdh_hw *hw = dev->data->dev_private;
> +
> +	if (!hw->risc_intr) {
> +		PMD_INIT_LOG(ERR, " to allocate risc_intr");
> +		hw->risc_intr = rte_zmalloc("risc_intr",
> +			ZXDH_MSIX_INTR_MSG_VEC_NUM * sizeof(struct rte_intr_handle), 0);
> +		if (hw->risc_intr == NULL) {
> +			PMD_INIT_LOG(ERR, "Failed to allocate risc_intr");
> +			return -ENOMEM;
> +		}
> +	}
> +
> +	uint8_t i;
> +
> +	for (i = 0; i < ZXDH_MSIX_INTR_MSG_VEC_NUM; i++) {
> +		if (dev->intr_handle->efds[i] < 0) {
> +			PMD_INIT_LOG(ERR, "[%u]risc interrupt fd is invalid", i);
> +			rte_free(hw->risc_intr);
> +			hw->risc_intr = NULL;
> +			return -1;
> +		}
> +
> +		struct rte_intr_handle *intr_handle = hw->risc_intr + i;
> +
> +		intr_handle->fd = dev->intr_handle->efds[i];
> +		intr_handle->type = dev->intr_handle->type;
> +	}
> +
> +	return 0;
> +}
> +/**
> + * Fun:
> + */
> +static void zxdh_intr_cb_reg(struct rte_eth_dev *dev)
> +{
> +	struct zxdh_hw *hw = dev->data->dev_private;
> +
> +	if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
> +		rte_intr_callback_unregister(dev->intr_handle, zxdh_devconf_intr_handler, dev);
> +
> +	/* register callback to update dev config intr */
> +	rte_intr_callback_register(dev->intr_handle, zxdh_devconf_intr_handler, dev);
> +	/* Register rsic_v to pf interrupt callback */
> +	struct rte_intr_handle *tmp = hw->risc_intr +
> +			(MSIX_FROM_PFVF - ZXDH_MSIX_INTR_MSG_VEC_BASE);
> +
> +	rte_intr_callback_register(tmp, zxdh_frompfvf_intr_handler, dev);
> +
> +	tmp = hw->risc_intr + (MSIX_FROM_RISCV - ZXDH_MSIX_INTR_MSG_VEC_BASE);
> +	rte_intr_callback_register(tmp, zxdh_fromriscv_intr_handler, dev);
> +}
> +
> +static void zxdh_intr_cb_unreg(struct rte_eth_dev *dev)
> +{
> +	PMD_INIT_LOG(ERR, "");
> +	if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
> +		rte_intr_callback_unregister(dev->intr_handle, zxdh_devconf_intr_handler, dev);
> +
> +	struct zxdh_hw *hw = dev->data->dev_private;
> +
> +	/* register callback to update dev config intr */
> +	rte_intr_callback_unregister(dev->intr_handle, zxdh_devconf_intr_handler, dev);
> +	/* Register rsic_v to pf interrupt callback */
> +	struct rte_intr_handle *tmp = hw->risc_intr +
> +			(MSIX_FROM_PFVF - ZXDH_MSIX_INTR_MSG_VEC_BASE);
> +
> +	rte_intr_callback_unregister(tmp, zxdh_frompfvf_intr_handler, dev);
> +	tmp = hw->risc_intr + (MSIX_FROM_RISCV-ZXDH_MSIX_INTR_MSG_VEC_BASE);
> +	rte_intr_callback_unregister(tmp, zxdh_fromriscv_intr_handler, dev);
> +}
> +
> +/**
> + * Fun:
> + */
> +static int32_t zxdh_configure_intr(struct rte_eth_dev *dev)
> +{
> +	struct zxdh_hw *hw = dev->data->dev_private;
> +	int32_t ret = 0;
> +
> +	if (!rte_intr_cap_multiple(dev->intr_handle)) {
> +		PMD_INIT_LOG(ERR, "Multiple intr vector not supported");
> +		return -ENOTSUP;
> +	}
> +	zxdh_intr_release(dev);
> +	uint8_t nb_efd = ZXDH_MSIX_INTR_DTB_VEC_NUM + ZXDH_MSIX_INTR_MSG_VEC_NUM;
> +
> +	if (dev->data->dev_conf.intr_conf.rxq)
> +		nb_efd += dev->data->nb_rx_queues;
> +
> +	if (rte_intr_efd_enable(dev->intr_handle, nb_efd)) {
> +		PMD_INIT_LOG(ERR, "Fail to create eventfd");
> +		return -1;
> +	}
> +
> +	if (rte_intr_vec_list_alloc(dev->intr_handle, "intr_vec",
> +					hw->max_queue_pairs+ZXDH_INTR_NONQUE_NUM)) {
> +		PMD_INIT_LOG(ERR, "Failed to allocate %u rxq vectors",
> +					hw->max_queue_pairs+ZXDH_INTR_NONQUE_NUM);
> +		return -ENOMEM;
> +	}
> +	PMD_INIT_LOG(INFO, "allocate %u rxq vectors", dev->intr_handle->vec_list_size);
> +	if (zxdh_setup_risc_interrupts(dev) != 0) {
> +		PMD_INIT_LOG(ERR, "Error setting up rsic_v interrupts!");
> +		ret = -1;
> +		goto free_intr_vec;
> +	}
> +	if (zxdh_setup_dtb_interrupts(dev) != 0) {
> +		PMD_INIT_LOG(ERR, "Error setting up dtb interrupts!");
> +		ret = -1;
> +		goto free_intr_vec;
> +	}
> +
> +	if (zxdh_queues_bind_intr(dev) < 0) {
> +		PMD_INIT_LOG(ERR, "Failed to bind queue/interrupt");
> +		ret = -1;
> +		goto free_intr_vec;
> +	}
> +	/** DO NOT try to remove this! This function will enable msix,
> +	 * or QEMU will encounter SIGSEGV when DRIVER_OK is sent.
> +	 * And for legacy devices, this should be done before queue/vec
> +	 * binding to change the config size from 20 to 24, or
> +	 * ZXDH_MSI_QUEUE_VECTOR (22) will be ignored.
> +	 **/
> +	if (zxdh_intr_enable(dev) < 0) {
> +		PMD_DRV_LOG(ERR, "interrupt enable failed");
> +		ret = -1;
> +		goto free_intr_vec;
> +	}
> +	return 0;
> +
> +free_intr_vec:
> +	zxdh_intr_release(dev);
> +	return ret;
> +}
> +/**
> + * Fun: reset device and renegotiate features if needed
> + */
> +struct zxdh_hw_internal zxdh_hw_internal[RTE_MAX_ETHPORTS];
> +static int32_t zxdh_init_device(struct rte_eth_dev *eth_dev)
> +{
> +	struct zxdh_hw *hw = eth_dev->data->dev_private;
> +	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
> +	int ret = zxdh_read_pci_caps(pci_dev, hw);
> +
> +	if (ret) {
> +		PMD_INIT_LOG(ERR, "port 0x%x pci caps read failed .", hw->vport.vport);
> +		goto err;
> +	}
> +	zxdh_hw_internal[hw->port_id].vtpci_ops = &zxdh_modern_ops;
> +	zxdh_vtpci_reset(hw);
> +	zxdh_get_pci_dev_config(hw);
> +	if (hw->vqs) { /* not reachable? */
> +		zxdh_dev_free_mbufs(eth_dev);
> +		ret = zxdh_free_queues(eth_dev);
> +		if (ret < 0) {
> +			PMD_INIT_LOG(ERR, "port 0x%x free queue failed.", hw->vport.vport);
> +			goto err;
> +		}
> +	}
> +	eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
> +	hw->vtnet_hdr_size = ZXDH_DL_NET_HDR_SIZE;
> +	hw->otpid = RTE_ETHER_TYPE_VLAN;
> +	hw->speed = RTE_ETH_SPEED_NUM_UNKNOWN;
> +	hw->duplex = RTE_ETH_LINK_FULL_DUPLEX;
> +	hw->max_mtu = ZXDH_MAX_RX_PKTLEN - RTE_ETHER_HDR_LEN - VLAN_TAG_LEN - ZXDH_DL_NET_HDR_SIZE;
> +	PMD_INIT_LOG(DEBUG, "max_mtu=%u", hw->max_mtu);
> +	eth_dev->data->mtu = RTE_ETHER_MTU;
> +	rte_ether_addr_copy((struct rte_ether_addr *)hw->mac_addr, &eth_dev->data->mac_addrs[0]);
> +	PMD_INIT_LOG(DEBUG, "PORT MAC: %02X:%02X:%02X:%02X:%02X:%02X",
> +		eth_dev->data->mac_addrs->addr_bytes[0],
> +		eth_dev->data->mac_addrs->addr_bytes[1],
> +		eth_dev->data->mac_addrs->addr_bytes[2],
> +		eth_dev->data->mac_addrs->addr_bytes[3],
> +		eth_dev->data->mac_addrs->addr_bytes[4],
> +		eth_dev->data->mac_addrs->addr_bytes[5]);
> +	/* If host does not support both status and MSI-X then disable LSC */
> +	if (vtpci_with_feature(hw, ZXDH_NET_F_STATUS) && (hw->use_msix != ZXDH_MSIX_NONE)) {
> +		eth_dev->data->dev_flags |= RTE_ETH_DEV_INTR_LSC;
> +		PMD_INIT_LOG(DEBUG, "LSC enable");
> +	} else {
> +		eth_dev->data->dev_flags &= ~RTE_ETH_DEV_INTR_LSC;
> +	}
> +	return 0;
> +
> +err:
> +	PMD_INIT_LOG(ERR, "port %d init device failed", eth_dev->data->port_id);
> +	return ret;
> +}
> +/**
> + * Fun:
> + */
> +static void zxdh_priv_res_init(struct zxdh_hw *hw)
> +{
> +	hw->vlan_fiter = (uint64_t *)rte_malloc("vlan_filter", 64 * sizeof(uint64_t), 1);
> +	memset(hw->vlan_fiter, 0, 64 * sizeof(uint64_t));
> +	if (hw->is_pf)
> +		hw->vfinfo = rte_zmalloc("vfinfo", ZXDH_MAX_VF * sizeof(struct vfinfo), 4);
> +	else
> +		hw->vfinfo = NULL;
> +}
> +/**
> + * Fun:
> + */
> +static void set_vfs_pcieid(struct zxdh_hw *hw)
> +{
> +	if (hw->pfinfo.vf_nums > ZXDH_MAX_VF) {
> +		PMD_DRV_LOG(ERR, "vf nums %u out of range", hw->pfinfo.vf_nums);
> +		return;
> +	}
> +	if (hw->vfinfo == NULL) {
> +		PMD_DRV_LOG(ERR, " vfinfo uninited");
> +		return;
> +	}
> +
> +	PMD_DRV_LOG(INFO, "vf nums %d", hw->pfinfo.vf_nums);
> +	int vf_idx;
> +
> +	for (vf_idx = 0; vf_idx < hw->pfinfo.vf_nums; vf_idx++)
> +		hw->vfinfo[vf_idx].pcieid = VF_PCIE_ID(hw->pcie_id, vf_idx);
> +
> +}
> +
> +
> +static void zxdh_sriovinfo_init(struct zxdh_hw *hw)
> +{
> +	hw->pfinfo.pcieid = PF_PCIE_ID(hw->pcie_id);
> +
> +	if (hw->is_pf)
> +		set_vfs_pcieid(hw);
> +}
> +/**
> + * Fun:
> + */
> +#define SRIOV_MSGINFO_LEN  256
> +enum sriov_msg_opcode {
> +	SRIOV_SET_VF_MAC = 0,    /* pf set vf's mac */
> +	SRIOV_SET_VF_VLAN,       /* pf set vf's vlan */
> +	SRIOV_SET_VF_LINK_STATE, /* pf set vf's link state */
> +	SRIOV_VF_RESET,
> +	SET_RSS_TABLE,
> +	SRIOV_OPCODE_NUM,
> +};
> +struct sriov_msg_payload {
> +	uint16_t pcieid;/* sender's pcie id */
> +	uint16_t vf_id;
> +	enum sriov_msg_opcode opcode;
> +	uint16_t slen;
> +	uint8_t content[0]; /* payload */
> +} __rte_packed;
> +int vf_recv_bar_msg(void *payload, uint16_t len __rte_unused,
> +			void *reps_buffer, uint16_t *reps_len, void *eth_dev __rte_unused)
> +{
> +	int32_t ret = 0;
> +	struct zxdh_hw *hw;
> +	struct sriov_msg_payload *msg_payload = (struct sriov_msg_payload *)payload;
> +	struct zxdh_msg_reply_body *reply_body = reps_buffer;
> +
> +	uint8_t *content = NULL;
> +	uint16_t vf_id = msg_payload->vf_id;
> +	uint16_t pcieid = msg_payload->pcieid;
> +	uint16_t opcode = msg_payload->opcode;
> +	uint16_t slen = msg_payload->slen;
> +
> +	content = msg_payload->content;
> +	struct rte_eth_dev *dev = (struct rte_eth_dev *)eth_dev;
> +
> +	if (dev == NULL) {
> +		PMD_DRV_LOG(ERR, "param invalid\n");
> +		ret = -2;
> +		return ret;
> +	}
> +	hw = dev->data->dev_private;
> +
> +	PMD_DRV_LOG(DEBUG, "%s content %p vf_id %d pcieid %x slen %d\n",
> +			__func__, content, vf_id, pcieid, slen);
> +	switch (opcode) {
> +	case SRIOV_SET_VF_MAC:
> +		PMD_DRV_LOG(DEBUG, "pf pcie id is 0x%x:\n", pcieid);
> +		PMD_DRV_LOG(DEBUG, "[VF GET MSG FROM PF]--vf mac is been set.\n");
> +		PMD_DRV_LOG(DEBUG, "VF[%d] old mac is %02X:%02X:%02X:%02X:%02X:%02X\n",
> +			vf_id,
> +			(hw->mac_addr)[0], (hw->mac_addr)[1], (hw->mac_addr)[2],
> +			(hw->mac_addr)[3], (hw->mac_addr)[4], (hw->mac_addr)[5]);
> +
> +		memcpy(hw->mac_addr, content, 6);
> +		reply_body->flag = ZXDH_REPS_SUCC;
> +		char str[ZXDH_MSG_REPLY_BODY_MAX_LEN] = "test";
> +
> +		sprintf(str, "vf %d process msg set mac ok ", vf_id);
> +		memcpy(reply_body->reply_data, str, strlen(str)+1);
> +		*reps_len = sizeof(*reply_body);
> +		break;
> +	case SRIOV_SET_VF_LINK_STATE:
> +		/* set vf link state(link up or link down) */
> +		PMD_DRV_LOG(DEBUG, "[VF GET MSG FROM PF]--vf link state is been set.\n");
> +		break;
> +	case SRIOV_VF_RESET:
> +		PMD_DRV_LOG(DEBUG, "[VF GET MSG FROM PF]--reset. port should be stopped\n");
> +		break;
> +	default:
> +		PMD_DRV_LOG(ERR, "[VF GET MSG FROM PF]--unknown msg opcode %d\n", opcode);
> +		ret = -1;
> +		break;
> +	}
> +	return ret;
> +}
> +/**
> + * Fun:
> + */
> +static inline int config_func_call(struct zxdh_hw *hw, struct zxdh_msg_info *msg_info,
> +			struct zxdh_msg_reply_body *res, uint16_t *res_len)
> +{
> +	int ret = -1;
> +	struct zxdh_msg_head *msghead = &(msg_info->msg_head);
> +	enum zxdh_msg_type msg_type = msghead->msg_type;
> +
> +	if (!res || !res_len) {
> +		PMD_DRV_LOG(INFO, "-%s  invalid param\n", __func__);
> +		return -1;
> +	}
> +	if (proc_func[msg_type]) {
> +		PMD_DRV_LOG(INFO, "-%s begin-msg_type:%d\n", __func__, msg_type);
> +		ret = proc_func[msg_type](hw, msghead->vport,
> +				(void *)&msg_info->data, res, res_len);
> +		if (!ret)
> +			res->flag = ZXDH_REPS_SUCC;
> +	} else {
> +		res->flag = ZXDH_REPS_FAIL;
> +	}
> +	*res_len += sizeof(res->flag);
> +	PMD_DRV_LOG(INFO, "-%s-end-msg_type:%d -res_len 0x%x\n",
> +			__func__, msg_type, *res_len);
> +	return ret;
> +}
> +int pf_recv_bar_msg(void *pay_load, uint16_t len, void *reps_buffer,
> +			uint16_t *reps_len, void *eth_dev __rte_unused)
> +{
> +	struct zxdh_msg_info *msg_info = (struct zxdh_msg_info *)pay_load;
> +	struct zxdh_msg_head *msghead = &(msg_info->msg_head);
> +	struct zxdh_msg_reply_body *reply_body = reps_buffer;
> +	uint16_t vf_id = msghead->vf_id;
> +	uint16_t pcieid = msghead->pcieid;
> +	int32_t ret = 0;
> +	enum zxdh_msg_type msg_type = msghead->msg_type;
> +
> +	if (msg_type >= ZXDH_FUNC_END) {
> +		PMD_DRV_LOG(ERR, "%s vf_id %d pcieid 0x%x len %u msg_type %d unsupported\n",
> +				__func__, vf_id, pcieid, len, msg_type);
> +		ret = -2;
> +		goto msg_proc_end;
> +	}
> +	PMD_DRV_LOG(DEBUG, "%s vf_id %d pcieid 0x%x len %d msg_type %d\n",
> +			__func__, vf_id, pcieid, len, msg_type);
> +	struct rte_eth_dev *dev = (struct rte_eth_dev *)eth_dev;
> +
> +	if (dev == NULL) {
> +		PMD_DRV_LOG(ERR, "param invalid\n");
> +		ret = -2;
> +		goto msg_proc_end;
> +	}
> +	struct zxdh_hw *hw = dev->data->dev_private;
> +	uint16_t reply_len = 0;
> +
> +	ret = config_func_call(hw, msg_info, reply_body, &reply_len);
> +	*reps_len = reply_len+sizeof(struct zxdh_msg_reply_head);
> +	PMD_DRV_LOG(INFO, "len %d\n", *reps_len);
> +
> +	return ret;
> +
> +msg_proc_end:
> +	PMD_DRV_LOG(DEBUG, "[PF GET MSG FROM VF] ret %d proc result:ret 0x%x reslt info: %s reply_len: 0x%x\n",
> +			ret, reply_body->flag, reply_body->reply_data, reply_len);
> +	memcpy(reply_body->reply_data, &ret, sizeof(ret));
> +	reply_len = sizeof(ret);
> +	*reps_len = sizeof(struct zxdh_msg_reply_head) + reply_len;
> +	rte_hexdump(stdout, "pf reply msg ", reply_body, reply_len);
> +	return ret;
> +}
> +/**
> + * Fun:
> + */
> +static void zxdh_msg_cb_reg(struct zxdh_hw *hw)
> +{
> +	if (hw->is_pf)
> +		zxdh_bar_chan_msg_recv_register(MODULE_BAR_MSG_TO_PF, pf_recv_bar_msg);
> +	else
> +		zxdh_bar_chan_msg_recv_register(MODULE_BAR_MSG_TO_VF, vf_recv_bar_msg);
> +}
> +static void zxdh_priv_res_free(struct zxdh_hw *priv)
> +{
> +	rte_free(priv->vlan_fiter);
> +	priv->vlan_fiter = NULL;
> +	rte_free(priv->vfinfo);
> +	priv->vfinfo = NULL;
> +	rte_free(priv->reta_idx);
> +	priv->reta_idx = NULL;
> +}
> +
> +static bool rx_offload_enabled(struct zxdh_hw *hw)
> +{
> +	return vtpci_with_feature(hw, ZXDH_NET_F_GUEST_CSUM) ||
> +		   vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO4) ||
> +		   vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO6) ||
> +		   (hw->vlan_offload_cfg.vlan_strip == 1);
> +}
> +
> +static bool tx_offload_enabled(struct zxdh_hw *hw)
> +{
> +	return vtpci_with_feature(hw, ZXDH_NET_F_CSUM) ||
> +		   vtpci_with_feature(hw, ZXDH_NET_F_HOST_TSO4) ||
> +		   vtpci_with_feature(hw, ZXDH_NET_F_HOST_TSO6) ||
> +		   vtpci_with_feature(hw, ZXDH_NET_F_HOST_UFO);
> +}
> +
> +static int32_t zxdh_features_update(struct zxdh_hw *hw,
> +				const struct rte_eth_rxmode *rxmode,
> +				const struct rte_eth_txmode *txmode)
> +{
> +	uint64_t rx_offloads = rxmode->offloads;
> +	uint64_t tx_offloads = txmode->offloads;
> +	uint64_t req_features = hw->guest_features;
> +
> +	if (rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM))
> +		req_features |= (1ULL << ZXDH_NET_F_GUEST_CSUM);
> +
> +	if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)
> +		req_features |= (1ULL << ZXDH_NET_F_GUEST_TSO4) |
> +						(1ULL << ZXDH_NET_F_GUEST_TSO6);
> +
> +	if (tx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM))
> +		req_features |= (1ULL << ZXDH_NET_F_CSUM);
> +
> +	if (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO)
> +		req_features |= (1ULL << ZXDH_NET_F_HOST_TSO4) |
> +						(1ULL << ZXDH_NET_F_HOST_TSO6);
> +
> +	if (tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_TSO)
> +		req_features |= (1ULL << ZXDH_NET_F_HOST_UFO);
> +
> +	req_features = req_features & hw->host_features;
> +	hw->guest_features =   req_features;
> +
> +	VTPCI_OPS(hw)->set_features(hw, req_features);
> +
> +	PMD_INIT_LOG(INFO, "set  featrue %lx!", req_features);
> +
> +	PMD_INIT_LOG(DEBUG, "host_features	= %" PRIx64, hw->host_features);
> +	PMD_INIT_LOG(DEBUG, "guest_features = %" PRIx64, hw->guest_features);
> +
> +	if ((rx_offloads & (RTE_ETH_TX_OFFLOAD_UDP_CKSUM | RTE_ETH_TX_OFFLOAD_TCP_CKSUM)) &&
> +		 !vtpci_with_feature(hw, ZXDH_NET_F_GUEST_CSUM)) {
> +		PMD_DRV_LOG(ERR, "rx checksum not available on this host");
> +		return -ENOTSUP;
> +	}
> +
> +	if ((rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) &&
> +		(!vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO4) ||
> +		 !vtpci_with_feature(hw, ZXDH_NET_F_GUEST_TSO6))) {
> +		PMD_DRV_LOG(ERR, "Large Receive Offload not available on this host");
> +		return -ENOTSUP;
> +	}
> +	return 0;
> +}
> +/**
> + * Fun:
> + */
> +int32_t zxdh_acquire_lock(struct rte_eth_dev *dev)
> +{
> +	uint32_t var = zxdh_read_reg(dev, ZXDH_BAR0_INDEX, ZXDH_VF_LOCK_REG);
> +
> +	/* check whether lock is used */
> +	if (!(var & ZXDH_VF_LOCK_ENABLE_MASK))
> +		return -1;
> +
> +	return 0;
> +}
> +/**
> + * Fun:
> + */
> +int32_t zxdh_release_lock(struct rte_eth_dev *dev)
> +{
> +	uint32_t var = zxdh_read_reg(dev, ZXDH_BAR0_INDEX, ZXDH_VF_LOCK_REG);
> +
> +	if (var & ZXDH_VF_LOCK_ENABLE_MASK) {
> +		var &= ~ZXDH_VF_LOCK_ENABLE_MASK;
> +		zxdh_write_reg(dev, ZXDH_BAR0_INDEX, ZXDH_VF_LOCK_REG, var);
> +		return 0;
> +	}
> +
> +	PMD_INIT_LOG(ERR, "No lock need to be release\n");
> +	return -1;
> +}
> +/**
> + * Fun:
> + */
> +static int32_t zxdh_get_available_channel(struct rte_eth_dev *dev, uint8_t queue_type)
> +{
> +	uint16_t base	 = (queue_type == VTNET_RQ) ? 0 : 1;  /* txq only polls odd bits*/
> +	uint16_t i		 = 0;
> +	uint16_t j		 = 0;
> +	uint16_t done	 = 0;
> +	uint16_t timeout = 0;
>

It seems the intention was to align the '=' but it went wrong, please fix
it. And there are more instances of this, please scan all the code to fix
them.
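
For example, either make the alignment consistent:

	uint16_t base    = (queue_type == VTNET_RQ) ? 0 : 1; /* txq only polls odd bits */
	uint16_t i       = 0;
	uint16_t j       = 0;
	uint16_t done    = 0;
	uint16_t timeout = 0;

or simply drop the alignment and use a single space around '='.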

> +
> +	while ((timeout++) < ZXDH_ACQUIRE_CHANNEL_NUM_MAX) {
> +		rte_delay_us_block(1000);
> +		/* acquire hw lock */
> +		if (zxdh_acquire_lock(dev) < 0) {
> +			PMD_INIT_LOG(ERR, "Acquiring hw lock got failed, timeout: %d", timeout);
> +			continue;
> +		}
> +		/* Iterate COI table and find free channel */
> +		for (i = ZXDH_QUEUES_BASE/32; i < ZXDH_TOTAL_QUEUES_NUM/32; i++) {
> +			uint32_t addr = ZXDH_QUERES_SHARE_BASE + (i * sizeof(uint32_t));
> +			uint32_t var = zxdh_read_reg(dev, ZXDH_BAR0_INDEX, addr);
> +
> +			for (j = base; j < 32; j += 2) {
> +				/* Got the available channel & update COI table */
> +				if ((var & (1 << j)) == 0) {
> +					var |= (1 << j);
> +					zxdh_write_reg(dev, ZXDH_BAR0_INDEX, addr, var);
> +					done = 1;
> +					break;
> +				}
> +			}
> +			if (done)
> +				break;
> +		}
> +		break;
> +	}
> +	if (timeout >= ZXDH_ACQUIRE_CHANNEL_NUM_MAX) {
> +		PMD_INIT_LOG(ERR, "Failed to acquire channel");
> +		return -1;
> +	}
> +	zxdh_release_lock(dev);
> +	/* check for no channel condition */
> +	if (done != 1) {
> +		PMD_INIT_LOG(ERR, "NO availd queues\n");
> +		return -1;
> +	}
> +	/* reruen available channel ID */
> +	return (i * 32) + j;
> +}
> +/**
> + * Fun:
> + */
> +int32_t zxdh_acquire_channel(struct rte_eth_dev *dev, uint16_t lch)
> +{
> +	struct zxdh_hw *hw = dev->data->dev_private;
> +
> +	if (hw->channel_context[lch].valid == 1) {
> +		PMD_INIT_LOG(DEBUG, "Logic channel:%u already acquired Physics channel:%u",
> +				lch, hw->channel_context[lch].ph_chno);
> +		return hw->channel_context[lch].ph_chno;
> +	}
> +	int32_t pch = zxdh_get_available_channel(dev, get_queue_type(lch));
> +
> +	if (pch < 0) {
> +		PMD_INIT_LOG(ERR, "Failed to acquire channel");
> +		return -1;
> +	}
> +	hw->channel_context[lch].ph_chno = (uint16_t)pch;
> +	hw->channel_context[lch].valid = 1;
> +	PMD_INIT_LOG(DEBUG, "Acquire channel success lch:%u --> pch:%d", lch, pch);
> +	return 0;
> +}
> +/**
> + * Fun:
> + */
> +int32_t zxdh_release_channel(struct rte_eth_dev *dev)
> +{
> +	struct zxdh_hw *hw = dev->data->dev_private;
> +	uint16_t nr_vq = hw->queue_num;
> +	uint32_t var  = 0;
> +	uint32_t addr = 0;
> +	uint32_t widx = 0;
> +	uint32_t bidx = 0;
> +	uint16_t pch  = 0;
> +	uint16_t lch  = 0;
> +	uint16_t timeout = 0;
> +
> +	while ((timeout++) < ZXDH_ACQUIRE_CHANNEL_NUM_MAX) {
> +		if (zxdh_acquire_lock(dev) != 0) {
> +			PMD_INIT_LOG(ERR,
> +				"Could not acquire lock to release channel, timeout %d", timeout);
> +			continue;
> +		}
> +		break;
> +	}
> +
> +	if (timeout >= ZXDH_ACQUIRE_CHANNEL_NUM_MAX) {
> +		PMD_INIT_LOG(ERR, "Acquire lock timeout");
> +		return -1;
> +	}
> +
> +	for (lch = 0; lch < nr_vq; lch++) {
> +		if (hw->channel_context[lch].valid == 0) {
> +			PMD_INIT_LOG(DEBUG, "Logic channel %d does not need to release", lch);
> +			continue;
> +		}
> +
> +		/* get coi table offset and index */
> +		pch  = hw->channel_context[lch].ph_chno;
> +		widx = pch / 32;
> +		bidx = pch % 32;
> +
> +		addr = ZXDH_QUERES_SHARE_BASE + (widx * sizeof(uint32_t));
> +		var  = zxdh_read_reg(dev, ZXDH_BAR0_INDEX, addr);
> +		var &= ~(1 << bidx);
> +		zxdh_write_reg(dev, ZXDH_BAR0_INDEX, addr, var);
> +
> +		hw->channel_context[lch].valid = 0;
> +		hw->channel_context[lch].ph_chno = 0;
> +	}
> +
> +	zxdh_release_lock(dev);
> +
> +	return 0;
> +}
> +
> +static int32_t zxdh_promisc_table_init(struct zxdh_hw *hw)
> +{
> +	uint32_t ret, vf_group_id = 0;
> +	struct zxdh_brocast_t brocast_table = {0};
> +	struct zxdh_unitcast_t uc_table = {0};
> +	struct zxdh_multicast_t mc_table = {0};
> +
> +	for (; vf_group_id < 4; vf_group_id++) {
> +		brocast_table.flag = rte_be_to_cpu_32(ZXDH_TABLE_HIT_FLAG);
> +		DPP_DTB_ERAM_ENTRY_INFO_T eram_brocast_entry = {
> +			((hw->vfid - ZXDH_BASE_VFID) << 2) + vf_group_id,
> +			(ZXIC_UINT32 *)&brocast_table
> +		};
> +		DPP_DTB_USER_ENTRY_T entry_brocast = {
> +			.sdt_no = ZXDH_SDT_BROCAST_ATT_TABLE,
> +			.p_entry_data = (void *)&eram_brocast_entry
> +		};
> +
> +		ret = dpp_dtb_table_entry_write(DEVICE_NO, g_dtb_data.queueid, 1, &entry_brocast);
> +		if (ret) {
> +			PMD_DRV_LOG(ERR, "Write eram-brocast failed, code:%d", ret);
> +			return ret;
> +		}
> +
> +		uc_table.uc_flood_pf_enable = rte_be_to_cpu_32(ZXDH_TABLE_HIT_FLAG);
> +		DPP_DTB_ERAM_ENTRY_INFO_T eram_uc_entry = {
> +			((hw->vfid - ZXDH_BASE_VFID) << 2) + vf_group_id,
> +			(ZXIC_UINT32 *)&uc_table
> +		};
> +		DPP_DTB_USER_ENTRY_T entry_unicast = {
> +			.sdt_no = ZXDH_SDT_UNICAST_ATT_TABLE,
> +			.p_entry_data = (void *)&eram_uc_entry
> +		};
> +
> +		ret = dpp_dtb_table_entry_write(DEVICE_NO, g_dtb_data.queueid, 1, &entry_unicast);
> +		if (ret) {
> +			PMD_DRV_LOG(ERR, "Write eram-unicast failed, code:%d", ret);
> +			return ret;
> +		}
> +
> +		mc_table.mc_flood_pf_enable = rte_be_to_cpu_32(ZXDH_TABLE_HIT_FLAG);
> +		DPP_DTB_ERAM_ENTRY_INFO_T eram_mc_entry = {
> +			((hw->vfid - ZXDH_BASE_VFID) << 2) + vf_group_id,
> +			(ZXIC_UINT32 *)&mc_table
> +		};
> +		DPP_DTB_USER_ENTRY_T entry_multicast = {
> +			.sdt_no = ZXDH_SDT_MULTICAST_ATT_TABLE,
> +			.p_entry_data = (void *)&eram_mc_entry
> +		};
> +
> +		ret = dpp_dtb_table_entry_write(DEVICE_NO, g_dtb_data.queueid,
> +					1, &entry_multicast);
> +		if (ret) {
> +			PMD_DRV_LOG(ERR, "Write eram-multicast failed, code:%d", ret);
> +			return ret;
> +		}
> +	}
> +
> +	PMD_DRV_LOG(DEBUG, "write promise tbl hw->hash_search_index:%d, vqm_vfid:%d",
> +			hw->hash_search_index, hw->vfid);
> +
> +	return ret;
> +}
> +
> +static int zxdh_config_qid(struct rte_eth_dev *dev)
> +{
> +	struct zxdh_hw *hw = dev->data->dev_private;
> +	struct zxdh_port_att_entry port_attr = {0};
> +	struct zxdh_msg_info msg_info = {0};
> +	int ret = 0;
> +
> +	if (hw->is_pf) {
> +		DPP_DTB_ERAM_ENTRY_INFO_T port_attr_entry = {hw->vfid, (ZXIC_UINT32 *)&port_attr};
> +		DPP_DTB_USER_ENTRY_T entry = {
> +			.sdt_no = ZXDH_SDT_VPORT_ATT_TABLE,
> +			.p_entry_data = (void *)&port_attr_entry
> +		};
> +
> +		ret = dpp_dtb_entry_get(DEVICE_NO, g_dtb_data.queueid, &entry, 1);
> +		port_attr.port_base_qid = hw->channel_context[0].ph_chno & 0xfff;
> +
> +		ret = dpp_dtb_table_entry_write(DEVICE_NO, g_dtb_data.queueid, 1, &entry);
> +		if (ret) {
> +			PMD_DRV_LOG(ERR, "PF:%d port_base_qid insert failed\n", hw->vfid);
> +			return -ret;
> +		}
> +	} else {
> +		struct zxdh_port_attr_set_msg *attr_msg = &msg_info.data.port_attr_set_msg;
> +
> +		msg_head_build(hw, ZXDH_PORT_ATTRS_SET, &msg_info);
> +		attr_msg->mode = EGR_FLAG_PORT_BASE_QID;
> +		attr_msg->value = hw->channel_context[0].ph_chno&0xfff;
> +		ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0);
> +		if (ret) {
> +			PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d ",
> +					hw->vport.vport, EGR_FLAG_PORT_BASE_QID);
> +			return ret;
> +		}
> +	}
> +	return ret;
> +}
> +/*
> + * Configure virtio device
>

A 'virtio' device?
Is the host-side interface exposed by the device a virtio-net interface?



> + * It returns 0 on success.
> + */
> +int32_t zxdh_dev_configure(struct rte_eth_dev *dev)
> +{
> +	const struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
> +	const struct rte_eth_txmode *txmode = &dev->data->dev_conf.txmode;
> +	struct zxdh_hw *hw = dev->data->dev_private;
> +	uint64_t rx_offloads = rxmode->offloads;
> +	uint32_t nr_vq = 0;
> +	int32_t  ret = 0;
> +
> +	PMD_INIT_LOG(DEBUG, "configure");
> +
> +	if (dev->data->nb_rx_queues != dev->data->nb_tx_queues) {
> +		PMD_INIT_LOG(ERR, "nb_rx_queues=%d and nb_tx_queues=%d not equal!",
> +					 dev->data->nb_rx_queues, dev->data->nb_tx_queues);
> +		return -EINVAL;
> +	}
> +	if ((dev->data->nb_rx_queues + dev->data->nb_tx_queues) >= ZXDH_QUEUES_NUM_MAX) {
> +		PMD_INIT_LOG(ERR, "nb_rx_queues=%d + nb_tx_queues=%d must < (%d)!",
> +					 dev->data->nb_rx_queues, dev->data->nb_tx_queues,
> +					 ZXDH_QUEUES_NUM_MAX);
> +		return -EINVAL;
> +	}
> +	if ((rxmode->mq_mode != RTE_ETH_MQ_RX_RSS) && (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE))	{
> +		PMD_DRV_LOG(ERR, "Unsupported Rx multi queue mode %d", rxmode->mq_mode);
> +		return -EINVAL;
> +	}
> +
> +	if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) {
> +		PMD_DRV_LOG(ERR, "Unsupported Tx multi queue mode %d", txmode->mq_mode);
> +		return -EINVAL;
> +	}
> +
> +	ret = zxdh_features_update(hw, rxmode, txmode);
> +	if (ret < 0)
> +		return ret;
> +
> +	/* check if lsc interrupt feature is enabled */
> +	if (dev->data->dev_conf.intr_conf.lsc) {
> +		if (!(dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)) {
> +			PMD_DRV_LOG(ERR, "link status not supported by host");
> +			return -ENOTSUP;
> +		}
> +	}
> +	if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
> +		hw->vlan_offload_cfg.vlan_strip = 1;
> +
> +	hw->has_tx_offload = tx_offload_enabled(hw);
> +	hw->has_rx_offload = rx_offload_enabled(hw);
> +
> +	nr_vq = dev->data->nb_rx_queues + dev->data->nb_tx_queues;
> +	if (nr_vq == hw->queue_num) {
> +		/*no que changed */
> +		goto conf_end;
> +	}
> +
> +	PMD_DRV_LOG(DEBUG, "que changed need reset ");
> +	/* Reset the device although not necessary at startup */
> +	zxdh_vtpci_reset(hw);
> +
> +	/* Tell the host we've noticed this device. */
> +	zxdh_vtpci_set_status(hw, ZXDH_CONFIG_STATUS_ACK);
> +
> +	/* Tell the host we've known how to drive the device. */
> +	zxdh_vtpci_set_status(hw, ZXDH_CONFIG_STATUS_DRIVER);
> +	/* The queue needs to be released when reconfiguring*/
> +	if (hw->vqs != NULL) {
> +		zxdh_dev_free_mbufs(dev);
> +		zxdh_free_queues(dev);
> +	}
> +
> +	hw->queue_num = nr_vq;
> +	ret = zxdh_alloc_queues(dev, nr_vq);
> +	if (ret < 0)
> +		return ret;
> +
> +	zxdh_datach_set(dev);
> +
> +	if (zxdh_configure_intr(dev) < 0) {
> +		PMD_INIT_LOG(ERR, "Failed to configure interrupt");
> +		zxdh_free_queues(dev);
> +		return -1;
> +	}
> +	ret = zxdh_config_qid(dev);
> +	if (ret) {
> +		PMD_INIT_LOG(ERR, "Failed to configure base qid!");
> +		return -1;
> +	}
> +
> +	zxdh_vtpci_reinit_complete(hw);
> +
> +conf_end:
> +	ret = zxdh_rx_csum_lro_offload_configure(dev);
> +	if (ret)
> +		PMD_INIT_LOG(ERR, "Failed to configure csum offload!");
> +
> +	zxdh_dev_conf_offload(dev);
> +	PMD_INIT_LOG(DEBUG, " configure end");
> +
> +	return ret;
> +}
> +
> +int zxdh_vlan_filter_table_init(uint16_t vfid)
> +{
> +	int16_t ret = 0;
> +	struct zxdh_vlan_t vlan_table = {0};
> +
> +	for (uint8_t vlan_group = 0; vlan_group < VLAN_GROUP_NUM; vlan_group++) {
> +		if (vlan_group == 0) {
> +			vlan_table.vlans[0] |= (1 << FIRST_VLAN_GROUP_VALID_BITS);
> +			vlan_table.vlans[0] |= (1 << VLAN_GROUP_VALID_BITS);
> +
> +		} else {
> +			vlan_table.vlans[0] = 0;
> +		}
> +
> +		uint32_t index = (vlan_group << VQM_VFID_BITS) | vfid;
> +
> +		DPP_DTB_ERAM_ENTRY_INFO_T entry_data = {index, (ZXIC_UINT32 *)&vlan_table};
> +		DPP_DTB_USER_ENTRY_T user_entry = {ZXDH_SDT_VLAN_ATT_TABLE, &entry_data};
> +
> +		ret = dpp_dtb_table_entry_write(DEVICE_NO, g_dtb_data.queueid, 1, &user_entry);
> +		if (ret != DPP_OK)
> +			PMD_INIT_LOG(WARNING,
> +				"[vfid:%d], vlan_group:%d, init vlan filter tbl failed, ret:%d",
> +				vfid, vlan_group, ret);
> +	}
> +	return ret;
> +}
> +
> +static int zxdh_mac_config(struct rte_eth_dev *eth_dev)
> +{
> +	struct zxdh_hw *hw = eth_dev->data->dev_private;
> +	struct zxdh_msg_info msg_info = {0};
> +	int ret = 0;
> +
> +	if (hw->is_pf == 1) {
> +		PMD_INIT_LOG(INFO, "mac_config pf");
> +		ret = dev_mac_addr_add(hw->vport.vport,
> +				&eth_dev->data->mac_addrs[0], hw->hash_search_index);
> +		if (ret)
> +			PMD_DRV_LOG(ERR, "Failed to add mac: port 0x%x", hw->vport.vport);
> +
> +		hw->uc_num++;
> +	} else {
> +		PMD_DRV_LOG(INFO, "port 0x%x Send to pf\n", hw->vport.vport);
> +		struct zxdh_mac_filter *mac_filter = &msg_info.data.zxdh_mac_filter;
> +
> +		mac_filter->filter_flag = 0xff;
> +		rte_memcpy(&mac_filter->mac, &eth_dev->data->mac_addrs[0],
> +				sizeof(eth_dev->data->mac_addrs[0]));
> +		msg_head_build(hw, ZXDH_MAC_ADD, &msg_info);
> +		ret = zxdh_vf_send_msg_to_pf(eth_dev, &msg_info, sizeof(msg_info), NULL, 0);
> +		if (ret) {
> +			PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d ",
> +					hw->vport.vport, ZXDH_MAC_ADD);
> +			return ret;
> +		}
> +		hw->uc_num++;
> +	}
> +	return ret;
> +}
> +
> +int32_t zxdh_dev_config_port_status(struct rte_eth_dev *dev, uint16_t link_status)
> +{
> +	struct zxdh_hw *hw = dev->data->dev_private;
> +	struct zxdh_port_att_entry port_attr = {0};
> +	struct zxdh_msg_info msg_info = {0};
> +	int32_t ret = 0;
> +
> +	if (hw->is_pf) {
> +		DPP_DTB_ERAM_ENTRY_INFO_T port_attr_entry = {hw->vfid, (ZXIC_UINT32 *)&port_attr};
> +		DPP_DTB_USER_ENTRY_T entry = {
> +			.sdt_no = ZXDH_SDT_VPORT_ATT_TABLE,
> +			.p_entry_data = (void *)&port_attr_entry
> +		};
> +
> +		ret = dpp_dtb_entry_get(DEVICE_NO, g_dtb_data.queueid, &entry, 1);
> +		port_attr.is_up = link_status;
> +
> +		ret = dpp_dtb_table_entry_write(DEVICE_NO, g_dtb_data.queueid, 1, &entry);
> +		if (ret) {
> +			PMD_DRV_LOG(ERR, "PF:%d port_is_up insert failed\n", hw->vfid);
> +			return -ret;
> +		}
> +	} else {
> +		struct zxdh_port_attr_set_msg *attr_msg = &msg_info.data.port_attr_set_msg;
> +
> +		msg_head_build(hw, ZXDH_PORT_ATTRS_SET, &msg_info);
> +		attr_msg->mode = EGR_FLAG_VPORT_IS_UP;
> +		attr_msg->value = link_status;
> +		ret = zxdh_vf_send_msg_to_pf(dev, &msg_info, sizeof(msg_info), NULL, 0);
> +		if (ret) {
> +			PMD_DRV_LOG(ERR, "Failed to send msg: port 0x%x msg type %d ",
> +				hw->vport.vport, EGR_FLAG_VPORT_IS_UP);
> +			return ret;
> +		}
> +	}
> +	return ret;
> +}
> +/**
> + * Fun:
> + */
> +int32_t zxdh_dev_start(struct rte_eth_dev *dev)
> +{
> +	int32_t ret;
> +	uint16_t vtpci_logic_qidx;
> +	/* Finish the initialization of the queues */
> +	uint16_t i;
> +
> +	for (i = 0; i < dev->data->nb_rx_queues; i++) {
> +		vtpci_logic_qidx = 2 * i + RQ_QUEUE_IDX;
> +		ret = zxdh_dev_rx_queue_setup_finish(dev, vtpci_logic_qidx);
> +		if (ret < 0)
> +			return ret;
> +	}
> +	set_rxtx_funcs(dev);
> +	ret = zxdh_intr_enable(dev);
> +	if (ret) {
> +		PMD_DRV_LOG(ERR, "interrupt enable failed");
> +		return -EIO;
> +	}
> +	struct zxdh_hw *hw = dev->data->dev_private;
> +	struct virtqueue *vq;
> +
> +	for (i = 0; i < dev->data->nb_rx_queues; i++) {
> +		vtpci_logic_qidx = 2 * i + RQ_QUEUE_IDX;
> +		vq = hw->vqs[vtpci_logic_qidx];
> +		/* Flush the old packets */
> +		zxdh_virtqueue_rxvq_flush(vq);
> +		virtqueue_notify(vq);
> +	}
> +	for (i = 0; i < dev->data->nb_tx_queues; i++) {
> +		vtpci_logic_qidx = 2 * i + TQ_QUEUE_IDX;
> +		vq = hw->vqs[vtpci_logic_qidx];
> +		virtqueue_notify(vq);
> +	}
> +	hw->started = true;
> +	ret = zxdh_mac_config(hw->eth_dev);
> +	if (ret) {
> +		PMD_DRV_LOG(ERR, " mac config failed");
> +		zxdh_dev_set_link_up(dev);
> +	}
> +	return 0;
> +}
> +
> +static void zxdh_dev_free_mbufs(struct rte_eth_dev *dev)
> +{
> +	struct zxdh_hw *hw = dev->data->dev_private;
> +	uint16_t nr_vq = hw->queue_num;
> +	uint32_t i, mbuf_num = 0;
> +
> +	const char *type __rte_unused;
> +	struct virtqueue *vq = NULL;
> +	struct rte_mbuf *buf = NULL;
> +	int32_t queue_type = 0;
> +
> +	if (hw->vqs == NULL)
> +		return;
> +
> +	for (i = 0; i < nr_vq; i++) {
> +		vq = hw->vqs[i];
> +		if (!vq)
> +			continue;
> +
> +		queue_type = get_queue_type(i);
> +		if (queue_type == VTNET_RQ)
> +			type = "rxq";
> +		else if (queue_type == VTNET_TQ)
> +			type = "txq";
> +		else
> +			continue;
> +
> +		PMD_INIT_LOG(DEBUG, "Before freeing %s[%d] used and unused buf", type, i);
> +
> +		while ((buf = zxdh_virtqueue_detach_unused(vq)) != NULL) {
> +			rte_pktmbuf_free(buf);
> +			mbuf_num++;
> +		}
> +
> +		PMD_INIT_LOG(DEBUG, "After freeing %s[%d] used and unused buf", type, i);
> +	}
> +
> +	PMD_INIT_LOG(DEBUG, "%d mbufs freed", mbuf_num);
> +}
> +
> +/*
> + * Stop device: disable interrupt and mark link down
> + */
> +int32_t zxdh_dev_stop(struct rte_eth_dev *dev)
> +{
> +	struct zxdh_hw *hw = dev->data->dev_private;
> +
> +	if (dev->data->dev_started == 0)
> +		return 0;
> +
> +	PMD_INIT_LOG(DEBUG, "stop");
> +
> +	rte_spinlock_lock(&hw->state_lock);
> +	if (!hw->started)
> +		goto out_unlock;
> +	hw->started = 0;
> +
> +	zxdh_intr_disable(dev);
> +	zxdh_dev_set_link_down(dev);
> +	/*que disable*/
> +
> +out_unlock:
> +	rte_spinlock_unlock(&hw->state_lock);
> +
> +	return 0;
> +}
> +/**
> + *  Fun:
> + */
> +static uint32_t zxdh_dev_speed_capa_get(uint32_t speed)
> +{
> +	switch (speed) {
> +	case RTE_ETH_SPEED_NUM_10G:  return RTE_ETH_LINK_SPEED_10G;
> +	case RTE_ETH_SPEED_NUM_20G:  return RTE_ETH_LINK_SPEED_20G;
> +	case RTE_ETH_SPEED_NUM_25G:  return RTE_ETH_LINK_SPEED_25G;
> +	case RTE_ETH_SPEED_NUM_40G:  return RTE_ETH_LINK_SPEED_40G;
> +	case RTE_ETH_SPEED_NUM_50G:  return RTE_ETH_LINK_SPEED_50G;
> +	case RTE_ETH_SPEED_NUM_56G:  return RTE_ETH_LINK_SPEED_56G;
> +	case RTE_ETH_SPEED_NUM_100G: return RTE_ETH_LINK_SPEED_100G;
> +	case RTE_ETH_SPEED_NUM_200G: return RTE_ETH_LINK_SPEED_200G;
> +	default:                     return 0;
> +	}
> +}
> +int32_t zxdh_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
> +{
> +	struct zxdh_hw *hw = dev->data->dev_private;
> +
> +	dev_info->speed_capa	   = zxdh_dev_speed_capa_get(hw->speed);
> +	dev_info->max_rx_queues    = RTE_MIN(hw->max_queue_pairs, ZXDH_RX_QUEUES_MAX);
> +	dev_info->max_tx_queues    = RTE_MIN(hw->max_queue_pairs, ZXDH_TX_QUEUES_MAX);
> +	dev_info->min_rx_bufsize   = ZXDH_MIN_RX_BUFSIZE;
> +	dev_info->max_rx_pktlen    = ZXDH_MAX_RX_PKTLEN;
> +	dev_info->max_mac_addrs    = ZXDH_MAX_MAC_ADDRS;
> +	dev_info->rx_offload_capa  = (RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
> +					RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
> +					RTE_ETH_RX_OFFLOAD_QINQ_STRIP);
> +	dev_info->rx_offload_capa |= (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
> +					RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
> +					RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
> +					RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM);
> +	dev_info->rx_offload_capa |= (RTE_ETH_RX_OFFLOAD_SCATTER);
> +	dev_info->rx_offload_capa |=  RTE_ETH_RX_OFFLOAD_TCP_LRO;
> +	dev_info->rx_offload_capa |=  RTE_ETH_RX_OFFLOAD_RSS_HASH;
> +
> +	dev_info->reta_size = ZXDH_RETA_SIZE;
> +	dev_info->hash_key_size = ZXDH_RSK_LEN;
> +	dev_info->flow_type_rss_offloads = ZXDH_RSS_HF;
> +	dev_info->max_mtu = hw->max_mtu;
> +	dev_info->min_mtu = 50;
> +
> +	dev_info->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_MULTI_SEGS);
> +	dev_info->tx_offload_capa |= (RTE_ETH_TX_OFFLOAD_TCP_TSO |
> +					RTE_ETH_TX_OFFLOAD_UDP_TSO);
> +	dev_info->tx_offload_capa |= (RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
> +					RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
> +					RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO);
> +	dev_info->tx_offload_capa |= (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
> +					RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
> +					RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
> +					RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
> +					RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM);
> +
> +	return 0;
> +}
> +/**
> + * Fun:
> + */
> +static void zxdh_log_init(void)
> +{
> +#ifdef RTE_LIBRTE_ZXDH_DEBUG_TX
> +	if (zxdh_logtype_tx >= 0)
> +		rte_log_set_level(zxdh_logtype_tx, RTE_LOG_DEBUG);
> +#endif
> +#ifdef RTE_LIBRTE_ZXDH_DEBUG_RX
> +	if (zxdh_logtype_rx >= 0)
> +		rte_log_set_level(zxdh_logtype_rx, RTE_LOG_DEBUG);
> +#endif
>

If you put logging in the datapath, it may consume a few cycles even
when it is not printing, so you may prefer macros specific to datapath
logging that are enabled/disabled with the generic 'RTE_ETHDEV_DEBUG_RX'
& 'RTE_ETHDEV_DEBUG_TX' flags, rather than driver-specific macros.
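
As an illustrative sketch only (reusing your existing 'zxdh_logtype_rx'
logtype), the Rx variant could look like:

	#ifdef RTE_ETHDEV_DEBUG_RX
	#define PMD_RX_LOG(level, fmt, args...) \
		rte_log(RTE_LOG_ ## level, zxdh_logtype_rx, \
			"%s(): " fmt "\n", __func__, ##args)
	#else
	#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
	#endif

so the datapath calls compile out completely unless the generic debug
flag is enabled in the build.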

> +#ifdef RTE_LIBRTE_ZXDH_DEBUG_MSG
> +	if (zxdh_logtype_msg >= 0)
> +		rte_log_set_level(zxdh_logtype_msg, RTE_LOG_DEBUG);
> +#endif
>

The log level is already dynamically configurable, so do we need a
compile-time macro for this? Since we are trying to remove compile-time
flags as much as possible, it would be nice to get rid of
RTE_LIBRTE_ZXDH_DEBUG_MSG.
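
For reference, once the guard is dropped the same effect is available
at runtime via the EAL log-level option (example command, using the
logtype name registered below in this patch):

	dpdk-testpmd -a <pci_bdf> --log-level=pmd.net.zxdh.msg:debug -- -i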

> +}
> +
> +struct zxdh_dtb_shared_data g_dtb_data = {0};
> +
> +static int zxdh_tbl_entry_destroy(struct rte_eth_dev *dev)
> +{
> +	int ret = 0;
> +	struct zxdh_hw *hw = dev->data->dev_private;
> +
> +	if (!g_dtb_data.init_done)
> +		return ret;
> +
> +	if (hw->is_pf) {
> +		/*hash  &ddr*/
> +		uint32_t sdt_no;
> +
> +		sdt_no = MK_SDT_NO(L2_ENTRY, hw->hash_search_index);
> +		ret = dpp_dtb_hash_online_delete(0, g_dtb_data.queueid, sdt_no);
> +		PMD_DRV_LOG(INFO, "%s dpp_dtb_hash_online_delete sdt_no %d",
> +				dev->data->name, sdt_no);
> +		if (ret)
> +			PMD_DRV_LOG(ERR, "%s dpp_dtb_hash_online_delete sdt_no %d failed",
> +				dev->data->name, sdt_no);
> +
> +		sdt_no = MK_SDT_NO(MC, hw->hash_search_index);
> +		ret = dpp_dtb_hash_online_delete(0, g_dtb_data.queueid, sdt_no);
> +		PMD_DRV_LOG(INFO, "%s dpp_dtb_hash_online_delete sdt_no %d",
> +				dev->data->name, sdt_no);
> +		if (ret)
> +			PMD_DRV_LOG(ERR, "%s dpp_dtb_hash_online_delete sdt_no %d failed",
> +				dev->data->name, sdt_no);
> +	}
> +
> +	return ret;
> +}
> +/**
> + * Fun:
> + */
> +#define INVALID_DTBQUE  0xFFFF
> +static void _dtb_data_res_free(struct zxdh_hw *hw)
> +{
> +	struct rte_eth_dev *dev = hw->eth_dev;
> +
> +	if ((g_dtb_data.init_done) && (g_dtb_data.bind_device == dev))  {
> +		PMD_DRV_LOG(INFO, "%s g_dtb_data free queue %d",
> +				dev->data->name, g_dtb_data.queueid);
> +
> +		int ret = 0;
> +
> +		ret = dpp_np_online_uninstall(0, dev->data->name, g_dtb_data.queueid);
> +		if (ret)
> +			PMD_DRV_LOG(ERR, "%s dpp_np_online_uninstall failed", dev->data->name);
> +
> +		PMD_DRV_LOG(INFO, "%s dpp_np_online_uninstall queid %d",
> +				dev->data->name, g_dtb_data.queueid);
> +		if (g_dtb_data.dtb_table_conf_mz) {
> +			rte_memzone_free(g_dtb_data.dtb_table_conf_mz);
> +			PMD_DRV_LOG(INFO, "%s free  dtb_table_conf_mz  ", dev->data->name);
> +			g_dtb_data.dtb_table_conf_mz = NULL;
> +		}
> +		if (g_dtb_data.dtb_table_dump_mz) {
> +
> +			PMD_DRV_LOG(INFO, "%s free  dtb_table_dump_mz  ", dev->data->name);
> +			rte_memzone_free(g_dtb_data.dtb_table_dump_mz);
> +			g_dtb_data.dtb_table_dump_mz = NULL;
> +		}
> +		int i;
> +
> +		for (i = 0; i < DPU_MAX_BASE_DTB_TABLE_COUNT; i++) {
> +			if (g_dtb_data.dtb_table_bulk_dump_mz[i]) {
> +				rte_memzone_free(g_dtb_data.dtb_table_bulk_dump_mz[i]);
> +
> +				PMD_DRV_LOG(INFO, "%s free dtb_table_bulk_dump_mz[%d]",
> +						dev->data->name, i);
> +				g_dtb_data.dtb_table_bulk_dump_mz[i] = NULL;
> +			}
> +		}
> +		g_dtb_data.init_done = 0;
> +		g_dtb_data.bind_device = NULL;
> +	}
> +	if (zxdh_shared_data != NULL)
> +		zxdh_shared_data->npsdk_init_done = 0;
> +
> +}
> +
> +#define MK_SDT_HASHRES(table, hash_idx) \
> +{ \
> +	.mz_name = RTE_STR(ZXDH_## table ##_TABLE), \
> +	.mz_size = DPU_DTB_TABLE_BULK_ZCAM_DUMP_SIZE, \
> +	.sdt_no = ZXDH_SDT_##table##_TABLE0 + hash_idx, \
> +	.mz = NULL\
> +}
> +/**
> + * Fun:
> + */
> +static inline int zxdh_dtb_dump_res_init(struct zxdh_hw *hw __rte_unused,
> +			DPP_DEV_INIT_CTRL_T *dpp_ctrl)
> +{
> +	int ret = 0;
> +	int i;
> +
> +	struct zxdh_dtb_bulk_dump_info dtb_dump_baseres[] = {
> +	/* eram */
> +	{"zxdh_sdt_vxlan_att_table", ZXDH_TBL_ERAM_DUMP_SIZE, ZXDH_SDT_VXLAN_ATT_TABLE, NULL},
> +	{"zxdh_sdt_vport_att_table", ZXDH_TBL_ERAM_DUMP_SIZE, ZXDH_SDT_VPORT_ATT_TABLE, NULL},
> +	{"zxdh_sdt_panel_att_table", ZXDH_TBL_ERAM_DUMP_SIZE, ZXDH_SDT_PANEL_ATT_TABLE, NULL},
> +	{"zxdh_sdt_rss_att_table", ZXDH_TBL_ERAM_DUMP_SIZE, ZXDH_SDT_RSS_ATT_TABLE, NULL},
> +	{"zxdh_sdt_vlan_att_table", ZXDH_TBL_ERAM_DUMP_SIZE, ZXDH_SDT_VLAN_ATT_TABLE, NULL},
> +	{"zxdh_sdt_lag_att_table", ZXDH_TBL_ERAM_DUMP_SIZE, ZXDH_SDT_LAG_ATT_TABLE, NULL},
> +	/* zcam */
> +	/*hash*/
> +	{"zxdh_sdt_l2_entry_table0", ZXDH_TBL_ZCAM_DUMP_SIZE, ZXDH_SDT_L2_ENTRY_TABLE0, NULL},
> +	{"zxdh_sdt_l2_entry_table1", ZXDH_TBL_ZCAM_DUMP_SIZE, ZXDH_SDT_L2_ENTRY_TABLE1, NULL},
> +	{"zxdh_sdt_l2_entry_table2", ZXDH_TBL_ZCAM_DUMP_SIZE, ZXDH_SDT_L2_ENTRY_TABLE2, NULL},
> +	{"zxdh_sdt_l2_entry_table3", ZXDH_TBL_ZCAM_DUMP_SIZE, ZXDH_SDT_L2_ENTRY_TABLE3, NULL},
> +	{"zxdh_sdt_l2_entry_table4", ZXDH_TBL_ZCAM_DUMP_SIZE, ZXDH_SDT_L2_ENTRY_TABLE4, NULL},
> +	{"zxdh_sdt_l2_entry_table5", ZXDH_TBL_ZCAM_DUMP_SIZE, ZXDH_SDT_L2_ENTRY_TABLE5, NULL},
> +	{"zxdh_sdt_mc_table0", ZXDH_TBL_ZCAM_DUMP_SIZE, ZXDH_SDT_MC_TABLE0, NULL},
> +	{"zxdh_sdt_mc_table1", ZXDH_TBL_ZCAM_DUMP_SIZE, ZXDH_SDT_MC_TABLE1, NULL},
> +	{"zxdh_sdt_mc_table2", ZXDH_TBL_ZCAM_DUMP_SIZE, ZXDH_SDT_MC_TABLE2, NULL},
> +	{"zxdh_sdt_mc_table3", ZXDH_TBL_ZCAM_DUMP_SIZE, ZXDH_SDT_MC_TABLE3, NULL},
> +	{"zxdh_sdt_mc_table4", ZXDH_TBL_ZCAM_DUMP_SIZE, ZXDH_SDT_MC_TABLE4, NULL},
> +	{"zxdh_sdt_mc_table5", ZXDH_TBL_ZCAM_DUMP_SIZE, ZXDH_SDT_MC_TABLE5, NULL},
> +	};
> +	for (i = 0; i < (int) RTE_DIM(dtb_dump_baseres); i++) {
> +		struct zxdh_dtb_bulk_dump_info *p = dtb_dump_baseres + i;
> +		const struct rte_memzone *generic_dump_mz = rte_memzone_reserve_aligned(p->mz_name,
> +					p->mz_size, SOCKET_ID_ANY, 0, RTE_CACHE_LINE_SIZE);
> +
> +		if (generic_dump_mz == NULL) {
> +			PMD_DRV_LOG(ERR,
> +				"Cannot alloc mem for dtb tbl bulk dump, mz_name is %s, mz_size is %u",
> +				p->mz_name, p->mz_size);
> +			ret = -ENOMEM;
> +			return ret;
> +		}
> +		p->mz = generic_dump_mz;
> +		dpp_ctrl->dump_addr_info[i].vir_addr = generic_dump_mz->addr_64;
> +		dpp_ctrl->dump_addr_info[i].phy_addr = generic_dump_mz->iova;
> +		dpp_ctrl->dump_addr_info[i].sdt_no   = p->sdt_no;
> +		dpp_ctrl->dump_addr_info[i].size	  = p->mz_size;
> +		PMD_INIT_LOG(DEBUG,
> +			"dump_addr_info[%2d] vir_addr:0x%llx phy_addr:0x%llx sdt_no:%u size:%u",
> +			i,
> +			dpp_ctrl->dump_addr_info[i].vir_addr,
> +			dpp_ctrl->dump_addr_info[i].phy_addr,
> +			dpp_ctrl->dump_addr_info[i].sdt_no,
> +			dpp_ctrl->dump_addr_info[i].size);
> +
> +		g_dtb_data.dtb_table_bulk_dump_mz[dpp_ctrl->dump_sdt_num] = generic_dump_mz;
> +		dpp_ctrl->dump_sdt_num++;
> +	}
> +	return ret;
> +}
> +/**
> + * Fun:  last entry to clear
> + */
> +static int zxdh_tbl_entry_offline_destroy(struct zxdh_hw *hw)
> +{
> +	int ret = 0;
> +
> +	if (!g_dtb_data.init_done)
> +		return ret;
> +
> +	if (hw->is_pf) {
> +		/*hash  &ddr*/
> +		uint32_t sdt_no;
> +
> +		sdt_no = MK_SDT_NO(L2_ENTRY, hw->hash_search_index);
> +		ret = dpp_dtb_hash_offline_delete(0, g_dtb_data.queueid, sdt_no, 0);
> +		PMD_DRV_LOG(INFO, "%d dpp_dtb_hash_offline_delete sdt_no %d",
> +				hw->port_id, sdt_no);
> +		if (ret)
> +			PMD_DRV_LOG(ERR, "%d dpp_dtb_hash_offline_delete sdt_no %d failed",
> +					hw->port_id, sdt_no);
> +
> +		sdt_no = MK_SDT_NO(MC, hw->hash_search_index);
> +		ret = dpp_dtb_hash_offline_delete(0, g_dtb_data.queueid, sdt_no, 0);
> +		PMD_DRV_LOG(INFO, "%d dpp_dtb_hash_offline_delete sdt_no %d",
> +				hw->port_id, sdt_no);
> +		if (ret)
> +			PMD_DRV_LOG(ERR, "%d dpp_dtb_hash_offline_delete sdt_no %d failed",
> +				hw->port_id, sdt_no);
> +
> +		/*eram  iterm by iterm*/
> +		/*etcam*/
> +	}
> +	return ret;
> +}
> +/**
> + * Fun:
> + */
> +static inline int npsdk_dtb_res_init(struct rte_eth_dev *dev)
> +{
> +	int ret = 0;
> +	struct zxdh_hw *hw = dev->data->dev_private;
> +
> +	if (g_dtb_data.init_done) {
> +		PMD_INIT_LOG(DEBUG, "DTB res already init done, dev %s no need init",
> +			dev->device->name);
> +		return 0;
> +	}
> +	g_dtb_data.queueid = INVALID_DTBQUE;
> +	g_dtb_data.bind_device = dev;
> +	g_dtb_data.dev_refcnt++;
> +	g_dtb_data.init_done = 1;
> +	/* */
> +	DPP_DEV_INIT_CTRL_T *dpp_ctrl = malloc(sizeof(*dpp_ctrl) +
> +			sizeof(DPP_DTB_ADDR_INFO_T) * 256);
> +
> +	if (dpp_ctrl == NULL) {
> +		PMD_INIT_LOG(ERR, "dev %s annot allocate memory for dpp_ctrl", dev->device->name);
> +		ret = -ENOMEM;
> +		goto free_res;
> +	}
> +	memset(dpp_ctrl, 0, sizeof(*dpp_ctrl) + sizeof(DPP_DTB_ADDR_INFO_T) * 256);
> +
> +	dpp_ctrl->queue_id = 0xff;
> +	dpp_ctrl->vport	 = hw->vport.vport;
> +	dpp_ctrl->vector = ZXDH_MSIX_INTR_DTB_VEC;
> +	strcpy((char *)dpp_ctrl->port_name, dev->device->name);
> +	dpp_ctrl->pcie_vir_addr = (ZXIC_ADDR_T)hw->bar_addr[0];
> +
> +	struct bar_offset_params param = {0};
> +	struct bar_offset_res  res = {0};
> +
> +	param.pcie_id = hw->pcie_id;
> +	param.virt_addr = hw->bar_addr[0]+ZXDH_CTRLCH_OFFSET;
> +	param.type = URI_NP;
> +
> +	ret = zxdh_get_bar_offset(&param, &res);
> +	if (ret) {
> +		PMD_INIT_LOG(ERR, "dev %s get npbar offset failed", dev->device->name);
> +		goto free_res;
> +	}
> +	dpp_ctrl->np_bar_len = res.bar_length;
> +	dpp_ctrl->np_bar_offset = res.bar_offset;
> +	PMD_INIT_LOG(ERR,
> +		"dpp_ctrl->pcie_vir_addr 0x%llx bar_offs  0x%x bar_len 0x%x",
> +		dpp_ctrl->pcie_vir_addr, dpp_ctrl->np_bar_offset, dpp_ctrl->np_bar_len);
> +	if (!g_dtb_data.dtb_table_conf_mz) {
> +		const struct rte_memzone *conf_mz = rte_memzone_reserve_aligned("zxdh_dtb_table_conf_mz",
> +				DPU_DTB_TABLE_CONF_SIZE, SOCKET_ID_ANY, 0, RTE_CACHE_LINE_SIZE);
> +
> +		if (conf_mz == NULL) {
> +			PMD_INIT_LOG(ERR,
> +				"dev %s annot allocate memory for dtb table conf",
> +				dev->device->name);
> +			ret = -ENOMEM;
> +			goto free_res;
> +		}
> +		dpp_ctrl->down_vir_addr = conf_mz->addr_64;
> +		dpp_ctrl->down_phy_addr = conf_mz->iova;
> +		g_dtb_data.dtb_table_conf_mz = conf_mz;
> +	}
> +	/* */
> +	if (!g_dtb_data.dtb_table_dump_mz) {
> +		const struct rte_memzone *dump_mz = rte_memzone_reserve_aligned("zxdh_dtb_table_dump_mz",
> +				DPU_DTB_TABLE_DUMP_SIZE, SOCKET_ID_ANY, 0, RTE_CACHE_LINE_SIZE);
> +
> +		if (dump_mz == NULL) {
> +			PMD_INIT_LOG(ERR,
> +				"dev %s Cannot allocate memory for dtb table dump",
> +				dev->device->name);
> +			ret = -ENOMEM;
> +			goto free_res;
> +		}
> +		dpp_ctrl->dump_vir_addr = dump_mz->addr_64;
> +		dpp_ctrl->dump_phy_addr = dump_mz->iova;
> +		g_dtb_data.dtb_table_dump_mz = dump_mz;
> +	}
> +	/* init bulk dump */
> +	zxdh_dtb_dump_res_init(hw, dpp_ctrl);
> +
> +	ret = dpp_host_np_init(0, dpp_ctrl);
> +	if (ret) {
> +		PMD_INIT_LOG(ERR, "dev %s dpp host np init failed .ret %d", dev->device->name, ret);
> +		goto free_res;
> +	}
> +
> +	PMD_INIT_LOG(INFO, "dev %s dpp host np init ok.dtb queue %d",
> +		dev->device->name, dpp_ctrl->queue_id);
> +	g_dtb_data.queueid = dpp_ctrl->queue_id;
> +	free(dpp_ctrl);
> +	return 0;
> +
> +free_res:
> +	_dtb_data_res_free(hw);
> +	free(dpp_ctrl);
> +	return -ret;
> +}
> +/**
> + * Fun:
> + */
> +static uint32_t dpp_res_uni_init(ZXIC_UINT32 type)
> +{
> +	DPP_STATUS rc = DPP_OK;
> +	ZXIC_UINT32 dev_id = 0;
> +	DPP_APT_HASH_RES_INIT_T tHashResInit = {0};
> +	DPP_APT_ERAM_RES_INIT_T tEramResInit = {0};
> +	DPP_APT_ACL_RES_INIT_T tAclResInit = {0};
> +	DPP_APT_DDR_RES_INIT_T tDdrResInit = {0};
> +	DPP_APT_LPM_RES_INIT_T tLpmResInit = {0};
> +	DPP_APT_STAT_RES_INIT_T tStatResInit = {0};
> +
> +	ZXIC_COMM_MEMSET(&tHashResInit, 0x0, sizeof(DPP_APT_HASH_RES_INIT_T));
> +	ZXIC_COMM_MEMSET(&tEramResInit, 0x0, sizeof(DPP_APT_ERAM_RES_INIT_T));
> +	ZXIC_COMM_MEMSET(&tAclResInit, 0x0, sizeof(DPP_APT_ACL_RES_INIT_T));
> +	ZXIC_COMM_MEMSET(&tDdrResInit, 0x0, sizeof(DPP_APT_DDR_RES_INIT_T));
> +	ZXIC_COMM_MEMSET(&tLpmResInit, 0x0, sizeof(DPP_APT_LPM_RES_INIT_T));
> +	ZXIC_COMM_MEMSET(&tStatResInit, 0x0, sizeof(DPP_APT_STAT_RES_INIT_T));
> +
> +	/* Obtain all flow table resources */
> +	rc = dpp_apt_hash_res_get(type, &tHashResInit);
> +	ZXIC_COMM_CHECK_RC(rc, "dpp_drv_hash_res_get");
> +	rc = dpp_apt_eram_res_get(type, &tEramResInit);
> +	ZXIC_COMM_CHECK_RC(rc, "dpp_drv_eram_res_get");
> +	rc = dpp_apt_acl_res_get(type, &tAclResInit);
> +	ZXIC_COMM_CHECK_RC(rc, "dpp_drv_acl_res_get");
> +	rc = dpp_apt_ddr_res_get(type, &tDdrResInit);
> +	ZXIC_COMM_CHECK_RC(rc, "dpp_apt_ddr_res_get");
> +	rc = dpp_apt_lpm_res_get(type, &tLpmResInit);
> +	ZXIC_COMM_CHECK_RC(rc, "dpp_apt_lpm_res_get");
> +	rc = dpp_apt_stat_res_get(type, &tStatResInit);
> +	ZXIC_COMM_CHECK_RC(rc, "dpp_apt_stat_res_get");
> +
> +	/* hash init */
> +	rc = dpp_apt_hash_global_res_init(dev_id);
> +	ZXIC_COMM_CHECK_RC(rc, "dpp_apt_hash_global_res_init");
> +
> +	rc = dpp_apt_hash_func_res_init(dev_id, tHashResInit.func_num, tHashResInit.func_res);
> +	ZXIC_COMM_CHECK_RC(rc, "dpp_apt_hash_func_res_init");
> +	PMD_INIT_LOG(INFO, " func_num  %d", tHashResInit.func_num);
> +
> +	rc = dpp_apt_hash_bulk_res_init(dev_id, tHashResInit.bulk_num, tHashResInit.bulk_res);
> +	ZXIC_COMM_CHECK_RC(rc, "dpp_apt_hash_bulk_res_init");
> +	PMD_INIT_LOG(INFO, " bulk_num  %d", tHashResInit.bulk_num);
> +
> +	/* tbl-res must be initialized after fun-res and buld-res */
> +	rc = dpp_apt_hash_tbl_res_init(dev_id, tHashResInit.tbl_num, tHashResInit.tbl_res);
> +	ZXIC_COMM_CHECK_RC(rc, "dpp_apt_hash_tbl_res_init");
> +	PMD_INIT_LOG(INFO, " tbl_num  %d", tHashResInit.tbl_num);
> +	/* eram init */
> +	rc = dpp_apt_eram_res_init(dev_id, tEramResInit.tbl_num, tEramResInit.eram_res);
> +	ZXIC_COMM_CHECK_RC(rc, "dpp_apt_eram_res_init");
> +
> +	/* init acl */
> +	rc = dpp_apt_acl_res_init(dev_id, tAclResInit.tbl_num, tAclResInit.acl_res);
> +	ZXIC_COMM_CHECK_RC(rc, "dpp_apt_acl_res_init");
> +
> +	/* init stat */
> +	rc = dpp_stat_ppu_eram_baddr_set(dev_id, tStatResInit.eram_baddr);
> +	ZXIC_COMM_CHECK_RC(rc, "dpp_stat_ppu_eram_baddr_set");
> +
> +	rc = dpp_stat_ppu_eram_depth_set(dev_id, tStatResInit.eram_depth); /* unit: 128bits */
> +	ZXIC_COMM_CHECK_RC(rc, "dpp_stat_ppu_eram_depth_set");
> +
> +	rc = dpp_se_cmmu_smmu1_cfg_set(dev_id, tStatResInit.ddr_baddr);
> +	ZXIC_COMM_CHECK_RC(rc, "dpp_se_cmmu_smmu1_cfg_set");
> +
> +	rc = dpp_stat_ppu_ddr_baddr_set(dev_id, tStatResInit.ppu_ddr_offset); /* unit: 128bits */
> +	ZXIC_COMM_CHECK_RC(rc, "dpp_stat_ppu_eram_depth_set");
> +
> +	return DPP_OK;
> +}
> +
> +static inline int npsdk_apt_res_init(struct rte_eth_dev *dev __rte_unused)
> +{
> +	uint32_t ret = 0;
> +
> +	ret = dpp_res_uni_init(SE_NIC_RES_TYPE);
> +	if (ret) {
> +		PMD_INIT_LOG(ERR, "init stand dpp res failed");
> +		return -1;
> +	}
> +
> +	PMD_INIT_LOG(INFO, " end ...time: %lu s", get_cur_time_ms());
> +	return ret;
> +}
> +/**
> + * Fun:
> + */
> +static void zxdh_np_destroy(struct rte_eth_dev *dev)
> +{
> +	zxdh_tbl_entry_destroy(dev);
> +	if ((!g_dtb_data.init_done) && (!g_dtb_data.dev_refcnt))
> +		return;
> +
> +	if (--g_dtb_data.dev_refcnt == 0) {
> +		struct zxdh_hw *hw = dev->data->dev_private;
> +
> +		_dtb_data_res_free(hw);
> +	}
> +
> +	PMD_DRV_LOG(INFO, "g_dtb_data	dev_refcnt %d", g_dtb_data.dev_refcnt);
> +}
> +
> +/**
> + * Fun:
> + */
> +static int zxdh_tables_init(struct rte_eth_dev *dev)
> +{
> +	/*	port attr\pannel attr\rss\mac vlan filter flush */
> +	int ret = 0;
> +
> +	ret = zxdh_port_attr_init(dev);
> +	if (ret != 0) {
> +		PMD_INIT_LOG(ERR, " zxdh_port_attr_init failed");
> +		return ret;
> +	}
> +
> +	struct zxdh_hw *hw = dev->data->dev_private;
> +
> +	if (hw->is_pf) {
> +		ret = zxdh_panel_table_init(dev);
> +		if (ret) {
> +			PMD_INIT_LOG(ERR, " panel table init failed");
> +			return ret;
> +		}
> +		ret = zxdh_vlan_filter_table_init(vport_to_vfid(hw->vport));
> +		if (ret) {
> +			PMD_INIT_LOG(ERR, " panel table init failed");
> +			return ret;
> +		}
> +		ret = zxdh_promisc_table_init(hw);
> +		if (ret) {
> +			PMD_INIT_LOG(ERR, " promisc_table_init failed");
> +			return ret;
> +		}
> +		config_default_hash_key();
> +	}
> +	return ret;
> +}
> +/**
> + * Fun:
> + */
> +const char *MZ_ZXDH_PMD_SHARED_DATA = "zxdh_pmd_shared_data";
> +rte_spinlock_t zxdh_shared_data_lock = RTE_SPINLOCK_INITIALIZER;
> +struct zxdh_shared_data *zxdh_shared_data;
> +
> +static int zxdh_init_shared_data(void)
> +{
> +	const struct rte_memzone *mz;
> +	int ret = 0;
> +
> +	rte_spinlock_lock(&zxdh_shared_data_lock);
> +	if (zxdh_shared_data == NULL) {
> +		if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
> +			/* Allocate shared memory. */
> +			mz = rte_memzone_reserve(MZ_ZXDH_PMD_SHARED_DATA,
> +					sizeof(*zxdh_shared_data), SOCKET_ID_ANY, 0);
> +			if (mz == NULL) {
> +				PMD_INIT_LOG(ERR, "Cannot allocate zxdh shared data");
> +				ret = -rte_errno;
> +				goto error;
> +			}
> +			zxdh_shared_data = mz->addr;
> +			memset(zxdh_shared_data, 0, sizeof(*zxdh_shared_data));
> +			rte_spinlock_init(&zxdh_shared_data->lock);
> +		} else { /* Lookup allocated shared memory. */
> +			mz = rte_memzone_lookup(MZ_ZXDH_PMD_SHARED_DATA);
> +			if (mz == NULL) {
> +				PMD_INIT_LOG(ERR, "Cannot attach zxdh shared data");
> +				ret = -rte_errno;
> +				goto error;
> +			}
> +			zxdh_shared_data = mz->addr;
> +		}
> +	}
> +
> +error:
> +	rte_spinlock_unlock(&zxdh_shared_data_lock);
> +	return ret;
> +}
> +
> +static void zxdh_free_sh_res(void)
> +{
> +	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
> +		rte_spinlock_lock(&zxdh_shared_data_lock);
> +		if ((zxdh_shared_data != NULL) && zxdh_shared_data->init_done &&
> +			(--zxdh_shared_data->dev_refcnt == 0)) {
> +			rte_mempool_free(zxdh_shared_data->flow_mp);
> +			rte_mempool_free(zxdh_shared_data->mtr_mp);
> +			rte_mempool_free(zxdh_shared_data->mtr_profile_mp);
> +			rte_mempool_free(zxdh_shared_data->mtr_policy_mp);
> +		}
> +		rte_spinlock_unlock(&zxdh_shared_data_lock);
> +	}
> +}
> +
> +/**
> + * Fun:
> + */
> +static int zxdh_init_sh_res(struct zxdh_shared_data *sd)
> +{
> +	const char *MZ_ZXDH_FLOW_MP        = "zxdh_flow_mempool";
> +	const char *MZ_ZXDH_MTR_MP         = "zxdh_mtr_mempool";
> +	const char *MZ_ZXDH_MTR_PROFILE_MP = "zxdh_mtr_profile_mempool";
> +	const char *MZ_ZXDH_MTR_POLICY_MP = "zxdh_mtr_policy_mempool";
> +	struct rte_mempool *flow_mp = NULL;
> +	struct rte_mempool *mtr_mp = NULL;
> +	struct rte_mempool *mtr_profile_mp = NULL;
> +	struct rte_mempool *mtr_policy_mp = NULL;
> +
> +	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
> +		flow_mp = rte_mempool_create(MZ_ZXDH_FLOW_MP, MAX_FLOW_NUM,
> +			sizeof(struct zxdh_flow),
> +			64, 0, NULL, NULL, NULL, NULL,
> +			SOCKET_ID_ANY, 0);
> +		if (flow_mp == NULL) {
> +			PMD_INIT_LOG(ERR, "Cannot allocate zxdh flow mempool");
> +			goto error;
> +		}
> +		mtr_mp = rte_mempool_create(MZ_ZXDH_MTR_MP, MAX_MTR_NUM,
> +			sizeof(struct zxdh_mtr_object),
> +			64, 0, NULL, NULL, NULL, NULL,
> +			SOCKET_ID_ANY, 0);
> +		if (mtr_mp == NULL) {
> +			PMD_INIT_LOG(ERR, "Cannot allocate zxdh mtr mempool");
> +			goto error;
> +		}
> +		mtr_profile_mp = rte_mempool_create(MZ_ZXDH_MTR_PROFILE_MP, MAX_MTR_PROFILE_NUM,
> +			sizeof(struct zxdh_meter_profile),
> +			64, 0, NULL, NULL, NULL, NULL,
> +			SOCKET_ID_ANY, 0);
> +		if (mtr_profile_mp == NULL) {
> +			PMD_INIT_LOG(ERR, "Cannot allocate zxdh mtr profile mempool");
> +			goto error;
> +		}
> +		mtr_policy_mp = rte_mempool_create(MZ_ZXDH_MTR_POLICY_MP, ZXDH_MAX_POLICY_NUM,
> +			sizeof(struct zxdh_meter_policy),
> +			64, 0, NULL, NULL, NULL, NULL,
> +			SOCKET_ID_ANY, 0);
> +		if (mtr_policy_mp == NULL) {
> +			PMD_INIT_LOG(ERR, "Cannot allocate zxdh mtr profile mempool");
> +			goto error;
> +		}
> +		sd->flow_mp = flow_mp;
> +		sd->mtr_mp = mtr_mp;
> +		sd->mtr_profile_mp = mtr_profile_mp;
> +		sd->mtr_policy_mp = mtr_policy_mp;
> +
> +		TAILQ_INIT(&zxdh_shared_data->flow_list);
> +		TAILQ_INIT(&zxdh_shared_data->meter_profile_list);
> +		TAILQ_INIT(&zxdh_shared_data->mtr_list);
> +		TAILQ_INIT(&zxdh_shared_data->mtr_policy_list);
> +	}
> +	return 0;
> +
> +error:
> +	rte_mempool_free(mtr_policy_mp);
> +	rte_mempool_free(mtr_profile_mp);
> +	rte_mempool_free(mtr_mp);
> +	rte_mempool_free(flow_mp);
> +	return -rte_errno;
> +}
> +
> +/**
> + * Fun:
> + */
> +struct zxdh_mtr_res g_mtr_res;
> +static void zxdh_mtr_init(void)
> +{
> +	rte_spinlock_init(&g_mtr_res.hw_plcr_res_lock);
> +	memset(&g_mtr_res, 0, sizeof(g_mtr_res));
> +}
> +
> +#define ZXDH_HASHIDX_MAX  6
> +
> +/**
> + * Fun:
> + */
> +static int zxdh_np_init(struct rte_eth_dev *eth_dev)
> +{
> +	uint32_t ret = 0;
> +	struct zxdh_hw *hw = eth_dev->data->dev_private;
> +
> +	if ((zxdh_shared_data != NULL) && zxdh_shared_data->npsdk_init_done) {
> +		g_dtb_data.dev_refcnt++;
> +		zxdh_tbl_entry_offline_destroy(hw);
> +		PMD_DRV_LOG(INFO, "no need to init dtb  dtb chanenl %d devref %d",
> +				g_dtb_data.queueid, g_dtb_data.dev_refcnt);
> +		return 0;
> +	}
> +
> +	if (hw->is_pf) {
> +		PMD_DRV_LOG(INFO, "dpp_dtb_res_init time: %ld s", get_cur_time_ms());
> +		ret = npsdk_dtb_res_init(eth_dev);
> +		if (ret) {
> +			PMD_DRV_LOG(ERR, "dpp apt init failed, ret:%d ", ret);
> +			return -ret;
> +		}
> +		PMD_DRV_LOG(INFO, "dpp_dtb_res_init ok");
> +
> +		PMD_DRV_LOG(INFO, "%s time: %ld s", __func__, get_cur_time_ms());
> +		ret = npsdk_apt_res_init(eth_dev);
> +		if (ret) {
> +			PMD_DRV_LOG(ERR, "dpp apt init failed, ret:%d ", ret);
> +			return -ret;
> +		}
> +
> +		PMD_DRV_LOG(INFO, "dpp_apt_res_init ok");
> +		if (!hw->switchoffload) {
> +			if (hw->hash_search_index >= ZXDH_HASHIDX_MAX) {
> +				PMD_DRV_LOG(ERR, "invalid hash idx %d", hw->hash_search_index);
> +				return -1;
> +			}
> +			zxdh_tbl_entry_offline_destroy(hw);
> +		}
> +	}
> +	if (zxdh_shared_data != NULL)
> +		zxdh_shared_data->npsdk_init_done = 1;
> +
> +	PMD_DRV_LOG(DEBUG, "np init ok ");
> +	return 0;
> +}
> +/**
> + * Fun:
> + */
> +static int zxdh_init_once(struct rte_eth_dev *eth_dev)
> +{
> +	PMD_INIT_LOG(DEBUG, "port 0x%x init...", eth_dev->data->port_id);
> +	if (zxdh_init_shared_data())
> +		return -rte_errno;
> +
> +	struct zxdh_shared_data *sd = zxdh_shared_data;
> +	int ret = 0;
> +
> +	rte_spinlock_lock(&sd->lock);
> +	if (rte_eal_process_type() == RTE_PROC_SECONDARY) {
> +		if (!sd->init_done) {
> +			++sd->secondary_cnt;
> +			sd->init_done = true;
> +		}
> +		goto out;
> +	}
> +	/* RTE_PROC_PRIMARY */
> +	if (!sd->init_done) {
> +		/*shared struct and res init */
> +		ret = zxdh_init_sh_res(sd);
> +		if (ret != 0)
> +			goto out;
> +
> +		zxdh_mtr_init();
> +		sd->init_done = true;
> +	}
> +	sd->dev_refcnt++;
> +out:
> +	rte_spinlock_unlock(&sd->lock);
> +	return ret;
> +}
> +/* dev_ops for virtio, bare necessities for basic operation */
> +static const struct eth_dev_ops zxdh_eth_dev_ops = {
> +	.dev_configure			 = zxdh_dev_configure,
> +	.dev_start				 = zxdh_dev_start,
> +	.dev_stop				 = zxdh_dev_stop,
> +	.dev_close				 = zxdh_dev_close,
> +	.dev_infos_get			 = zxdh_dev_info_get,
> +	.stats_get				 = zxdh_dev_stats_get,
> +	.xstats_get				 = zxdh_dev_xstats_get,
> +	.xstats_get_names		 = zxdh_dev_xstats_get_names,
> +	.stats_reset			 = zxdh_dev_stats_reset,
> +	.xstats_reset			 = zxdh_dev_stats_reset,
> +	.link_update			 = zxdh_dev_link_update,
> +	.rx_queue_setup			 = zxdh_dev_rx_queue_setup,
> +	.rx_queue_intr_enable	 = zxdh_dev_rx_queue_intr_enable,
> +	.rx_queue_intr_disable	 = zxdh_dev_rx_queue_intr_disable,
> +	.rx_queue_release		 = NULL,
> +	.rxq_info_get			 = zxdh_rxq_info_get,
> +	.txq_info_get			 = zxdh_txq_info_get,
> +	.tx_queue_setup			 = zxdh_dev_tx_queue_setup,
> +	.tx_queue_release		 = NULL,
> +	.queue_stats_mapping_set = NULL,
> +
> +	.mac_addr_add			 = zxdh_dev_mac_addr_add,
> +	.mac_addr_remove		 = zxdh_dev_mac_addr_remove,
> +	.mac_addr_set			 = zxdh_dev_mac_addr_set,
> +	.mtu_set				 = zxdh_dev_mtu_set,
> +	.dev_set_link_up		 = zxdh_dev_set_link_up,
> +	.dev_set_link_down		 = zxdh_dev_set_link_down,
> +	.promiscuous_enable		 = zxdh_dev_promiscuous_enable,
> +	.promiscuous_disable	 = zxdh_dev_promiscuous_disable,
> +	.allmulticast_enable	 = zxdh_dev_allmulticast_enable,
> +	.allmulticast_disable	 = zxdh_dev_allmulticast_disable,
> +	.vlan_filter_set		 = zxdh_vlan_filter_set,
> +	.vlan_offload_set		 = zxdh_vlan_offload_set,
> +	.vlan_pvid_set			 = zxdh_vlan_pvid_set,
> +	.vlan_tpid_set			 = zxdh_vlan_tpid_set,
> +	.udp_tunnel_port_add	 = zxdh_dev_udp_tunnel_port_add,
> +	.udp_tunnel_port_del	 = zxdh_dev_udp_tunnel_port_del,
> +	.reta_update			 = zxdh_dev_rss_reta_update,
> +	.reta_query				 = zxdh_dev_rss_reta_query,
> +	.rss_hash_update		 = zxdh_rss_hash_update,
> +	.rss_hash_conf_get		 = zxdh_rss_hash_conf_get,
> +	.mtr_ops_get			 = zxdh_meter_ops_get,
> +	.flow_ops_get			 = zxdh_flow_ops_get,
> +	.fw_version_get			 = zxdh_dev_fw_version_get,
> +	.get_module_info		 = zxdh_dev_get_module_info,
> +	.get_module_eeprom		 = zxdh_dev_get_module_eeprom,
> +	.flow_ctrl_get			 = zxdh_flow_ctrl_get,
> +	.flow_ctrl_set			 = zxdh_flow_ctrl_set,
> +	.eth_dev_priv_dump		 = zxdh_dev_priv_dump,
> +};
> +/**
> + * Fun:
> + */
> +static int32_t zxdh_msg_chan_enable(struct rte_eth_dev *dev)
> +{
> +	struct zxdh_hw *hw = dev->data->dev_private;
> +	struct msix_para misx_info = {
> +		.vector_risc = MSIX_FROM_RISCV,
> +		.vector_pfvf = MSIX_FROM_PFVF,
> +		.vector_mpf  = MSIX_FROM_MPF,
> +		.pcie_id     = hw->pcie_id,
> +		.driver_type = hw->is_pf ? MSG_CHAN_END_PF : MSG_CHAN_END_VF,
> +		.virt_addr   = (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX] + ZXDH_CTRLCH_OFFSET),
> +	};
> +
> +	return zxdh_bar_chan_enable(&misx_info, &hw->vport.vport);
> +}
> +
> +static int32_t zxdh_msg_chan_hwlock_init(struct rte_eth_dev *dev)
> +{
> +	struct zxdh_hw *hw = dev->data->dev_private;
> +
> +	if (!hw->is_pf)
> +		return 0;
> +	return bar_chan_pf_init_spinlock(hw->pcie_id, (uint64_t)(hw->bar_addr[ZXDH_BAR0_INDEX]));
> +}
> +
> +/**
> + * Fun:
> + */
> +static int zxdh_agent_comm(struct rte_eth_dev *eth_dev, struct zxdh_hw *hw)
> +{
> +	if (zxdh_phyport_get(eth_dev, &hw->phyport) != 0) {
> +		PMD_INIT_LOG(ERR, "Failed to get phyport");
> +		return -1;
> +	}
> +	PMD_INIT_LOG(INFO, "Get phyport success: 0x%x", hw->phyport);
> +	hw->vfid = vport_to_vfid(hw->vport);
> +	if (zxdh_hashidx_get(eth_dev, &hw->hash_search_index) != 0) {
> +		PMD_INIT_LOG(ERR, "Failed to get hash idx");
> +		return -1;
> +	}
> +	PMD_INIT_LOG(DEBUG, "Get hash idx success: 0x%x", hw->hash_search_index);
> +	if (zxdh_pannelid_get(eth_dev, &hw->panel_id) != 0) {
> +		PMD_INIT_LOG(ERR, "Failed to get panel_id");
> +		return -1;
> +	}
> +	PMD_INIT_LOG(INFO, "Get pannel id success: 0x%x", hw->panel_id);
> +
> +	return 0;
> +}
> +/**
> + * Fun: is based on probe() function in zxdh_pci.c
> + * It returns 0 on success.
> + */
> +static int32_t zxdh_eth_dev_init(struct rte_eth_dev *eth_dev)
> +{
> +	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
> +	int ret;
> +	uint64_t pre_time = get_cur_time_ms();
> +
> +	PMD_INIT_LOG(INFO, "dev init begin time: %lu s", pre_time);
> +	eth_dev->dev_ops = &zxdh_eth_dev_ops;
> +
> +	/**
> +	 * Primary process does the whole initialization,
> +	 * for secondaryprocesses, we just select the same Rx and Tx function as primary.
> +	 */
> +	struct zxdh_hw *hw = eth_dev->data->dev_private;
> +
> +	if (rte_eal_process_type() == RTE_PROC_SECONDARY) {
> +		VTPCI_OPS(hw) = &zxdh_modern_ops;
> +		set_rxtx_funcs(eth_dev);
> +		return 0;
> +	}
> +	/* Allocate memory for storing MAC addresses */
> +	eth_dev->data->mac_addrs = rte_zmalloc("zxdh_mac",
> +			ZXDH_MAX_MAC_ADDRS * RTE_ETHER_ADDR_LEN, 0);
> +	if (eth_dev->data->mac_addrs == NULL) {
> +		PMD_INIT_LOG(ERR, "Failed to allocate %d bytes store MAC addresses",
> +				ZXDH_MAX_MAC_ADDRS * RTE_ETHER_ADDR_LEN);
> +		return -ENOMEM;
> +	}
> +	memset(hw, 0, sizeof(*hw));
> +	ret = zxdh_dev_devargs_parse(eth_dev->device->devargs, hw);
> +	if (ret < 0) {
> +		PMD_INIT_LOG(ERR, "dev args parse failed");
> +		return -EINVAL;
> +	}
> +
> +	hw->bar_addr[0] = (uint64_t)pci_dev->mem_resource[0].addr;
> +	if (hw->bar_addr[0] == 0) {
> +		PMD_INIT_LOG(ERR, "Bad mem resource.");
> +		return -EIO;
> +	}
> +	hw->device_id = pci_dev->id.device_id;
> +	hw->port_id = eth_dev->data->port_id;
> +	hw->eth_dev = eth_dev;
> +	hw->speed = RTE_ETH_SPEED_NUM_UNKNOWN;
> +	hw->duplex = RTE_ETH_LINK_FULL_DUPLEX;
> +	hw->is_pf = 0;
> +
> +	hw->reta_idx = NULL;
> +	hw->vfinfo = NULL;
> +	hw->vlan_fiter = NULL;
> +
> +	hw->admin_status = RTE_ETH_LINK_UP;
> +	rte_spinlock_init(&hw->state_lock);
> +	if (pci_dev->id.device_id == ZXDH_PCI_PF_DEVICEID) {
> +		hw->is_pf = 1;
> +		hw->pfinfo.vf_nums = pci_dev->max_vfs;
> +	}
> +
> +	/* reset device and get dev config*/
> +	ret = zxdh_init_once(eth_dev);
> +	if (ret != 0)
> +		goto err_zxdh_init;
> +
> +	ret = zxdh_init_device(eth_dev);
> +	if (ret < 0)
> +		goto err_zxdh_init;
> +
> +	ret = zxdh_msg_chan_init();
> +	if (ret < 0) {
> +		PMD_INIT_LOG(ERR, "Failed to init bar msg chan");
> +		goto err_zxdh_init;
> +	}
> +	hw->msg_chan_init = 1;
> +	PMD_INIT_LOG(DEBUG, "Init bar msg chan OK");
> +	ret = zxdh_msg_chan_hwlock_init(eth_dev);
> +	if (ret != 0) {
> +		PMD_INIT_LOG(ERR, "zxdh_msg_chan_hwlock_init failed ret %d", ret);
> +		goto err_zxdh_init;
> +	}
> +	ret = zxdh_msg_chan_enable(eth_dev);
> +	if (ret != 0) {
> +		PMD_INIT_LOG(ERR, "zxdh_msg_bar_chan_enable failed ret %d", ret);
> +		goto err_zxdh_init;
> +	}
> +	PMD_INIT_LOG(DEBUG, "pcie_id: 0x%x, vport: 0x%x", hw->pcie_id, hw->vport.vport);
> +
> +	ret = zxdh_agent_comm(eth_dev, hw);
> +	if (ret != 0)
> +		goto err_zxdh_init;
> +
> +	ret = zxdh_np_init(eth_dev);
> +	if (ret)
> +		goto err_zxdh_init;
> +
> +
> +	zxdh_priv_res_init(hw);
> +	zxdh_sriovinfo_init(hw);
> +	zxdh_msg_cb_reg(hw);
> +	zxdh_configure_intr(eth_dev);
> +	ret = zxdh_tables_init(eth_dev);
> +	if (ret != 0)
> +		goto err_zxdh_init;
> +
> +	uint64_t time = get_cur_time_ms();
> +
> +	PMD_INIT_LOG(ERR, "dev init end time: %lu s total time %" PRIu64, time, time - pre_time);
> +	return 0;
> +
> +err_zxdh_init:
> +	zxdh_intr_release(eth_dev);
> +	zxdh_np_destroy(eth_dev);
> +	zxdh_bar_msg_chan_exit();
> +	zxdh_priv_res_free(hw);
> +	zxdh_free_sh_res();
> +	rte_free(eth_dev->data->mac_addrs);
> +	eth_dev->data->mac_addrs = NULL;
> +	rte_free(eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key);
> +	eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key = NULL;
> +	return ret;
> +}
> +
> +static unsigned int
> +log2above(unsigned int v)
> +{
> +	unsigned int l;
> +	unsigned int r;
> +
> +	for (l = 0, r = 0; (v >> 1); ++l, v >>= 1)
> +		r |= (v & 1);
> +	return l + r;
> +}
> +
> +static uint16_t zxdh_queue_desc_pre_setup(uint16_t desc)
> +{
> +	uint32_t nb_desc = desc;
> +
> +	if (desc < ZXDH_MIN_QUEUE_DEPTH) {
> +		PMD_RX_LOG(WARNING,
> +			"nb_desc(%u) increased number of descriptors to the min queue depth (%u)",
> +			desc, ZXDH_MIN_QUEUE_DEPTH);
> +		return ZXDH_MIN_QUEUE_DEPTH;
> +	}
> +
> +	if (desc > ZXDH_MAX_QUEUE_DEPTH) {
> +		PMD_RX_LOG(WARNING,
> +			"nb_desc(%u) can't be greater than max_rxds (%d), turn to max queue depth",
> +			desc, ZXDH_MAX_QUEUE_DEPTH);
> +		return ZXDH_MAX_QUEUE_DEPTH;
> +	}
> +
> +	if (!rte_is_power_of_2(desc)) {
> +		nb_desc = 1 << log2above(desc);
> +		if (nb_desc > ZXDH_MAX_QUEUE_DEPTH)
> +			nb_desc = ZXDH_MAX_QUEUE_DEPTH;
> +
> +		PMD_RX_LOG(WARNING,
> +			"nb_desc(%u) increased number of descriptors to the next power of two (%d)",
> +			desc, nb_desc);
> +	}
> +
> +	return nb_desc;
> +}
> +
> +static int32_t hw_q_depth_handler(const char *key __rte_unused,
> +				const char *value, void *ret_val)
> +{
> +	uint16_t val = 0;
> +	struct zxdh_hw *hw = ret_val;
> +
> +	val = strtoul(value, NULL, 0);
> +	uint16_t q_depth = zxdh_queue_desc_pre_setup(val);
> +
> +	hw->q_depth = q_depth;
> +	return 0;
> +}
> +
> +static int32_t zxdh_dev_devargs_parse(struct rte_devargs *devargs, struct zxdh_hw *hw)
> +{
> +	struct rte_kvargs *kvlist = NULL;
> +	int32_t ret = 0;
> +
> +	if (devargs == NULL)
> +		return 0;
> +
> +	kvlist = rte_kvargs_parse(devargs->args, NULL);
> +	if (kvlist == NULL) {
> +		PMD_INIT_LOG(ERR, "error when parsing param");
> +		return 0;
> +	}
> +
> +	ret = rte_kvargs_process(kvlist, "q_depth", hw_q_depth_handler, hw);
> +	if (ret < 0) {
> +		PMD_INIT_LOG(ERR, "Failed to parse q_depth");
> +		goto exit;
> +	}
> +	if (!hw->q_depth)
> +		hw->q_depth = ZXDH_MIN_QUEUE_DEPTH;
> +
> +exit:
> +	rte_kvargs_free(kvlist);
> +	return ret;
> +}
> +
> +/**
> + * Fun:
> + */
> +int32_t zxdh_eth_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
> +			struct rte_pci_device *pci_dev)
> +{
> +#ifdef RTE_LIBRTE_ZXDH_DEBUG
> +	rte_log_set_level(zxdh_logtype_init, RTE_LOG_DEBUG);
> +	rte_log_set_level(zxdh_logtype_driver, RTE_LOG_DEBUG);
> +	rte_log_set_level(RTE_LOGTYPE_PMD, RTE_LOG_DEBUG);
> +#endif
> +	return rte_eth_dev_pci_generic_probe(pci_dev, sizeof(struct zxdh_hw), zxdh_eth_dev_init);
> +}
> +/**
> + * Fun:
> + */
> +static int32_t zxdh_eth_dev_uninit(struct rte_eth_dev *eth_dev)
> +{
> +	PMD_INIT_FUNC_TRACE();
> +	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
> +		return 0;
> +	zxdh_dev_close(eth_dev);
> +	return 0;
> +}
> +/**
> + * Fun:
> + */
> +int32_t zxdh_eth_pci_remove(struct rte_pci_device *pci_dev)
> +{
> +	int32_t ret = rte_eth_dev_pci_generic_remove(pci_dev, zxdh_eth_dev_uninit);
> +
> +	if (ret == -ENODEV) { /* Port has already been released by close. */
> +		ret = 0;
> +	}
> +	return ret;
> +}
> +static const struct rte_pci_id pci_id_zxdh_map[] = {
> +	{RTE_PCI_DEVICE(PCI_VENDOR_ID_ZTE, ZXDH_PCI_PF_DEVICEID)},
> +	{RTE_PCI_DEVICE(PCI_VENDOR_ID_ZTE, ZXDH_PCI_VF_DEVICEID)},
> +	{.vendor_id = 0, /* sentinel */ },
> +};
> +static struct rte_pci_driver zxdh_pmd = {
> +	.driver = {.name = "net_zxdh", },
> +	.id_table = pci_id_zxdh_map,
> +	.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC,
> +	.probe = zxdh_eth_pci_probe,
> +	.remove = zxdh_eth_pci_remove,
> +};
> +RTE_INIT(rte_zxdh_pmd_init)
> +{
> +	zxdh_log_init();
> +	rte_pci_register(&zxdh_pmd);
>

you can use 'RTE_PMD_REGISTER_PCI' instead, and call
'rte_telemetry_register_cmd()' from the probe function, so that the
telemetry commands exist only when a zxdh device exists.

I assume 'rte_telemetry_register_cmd()' doesn't need to be in RTE_INIT,
but can you please double-check? If it has to be, you can create a
separate RTE_INIT() for telemetry.
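
A minimal sketch of that structure (keeping your existing callbacks;
help strings shortened here):

	static int32_t zxdh_eth_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
				struct rte_pci_device *pci_dev)
	{
		/* Registered here so the commands only exist once a
		 * zxdh device is actually probed. */
		rte_telemetry_register_cmd("/zxdh/dumppkt", handle_pkt_dump,
			"Returns None. Parameter: port id, mode, dumplen");
		rte_telemetry_register_cmd("/zxdh/dumpque", handle_queue_dump,
			"Returns None. Parameter: port id, queid, dump_descnum, logfile");
		return rte_eth_dev_pci_generic_probe(pci_dev,
				sizeof(struct zxdh_hw), zxdh_eth_dev_init);
	}

	RTE_PMD_REGISTER_PCI(net_zxdh, zxdh_pmd);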


> +	rte_telemetry_register_cmd("/zxdh/dumppkt",
> +		handle_pkt_dump,
> +		"Returns None. Parameter: port id, mode(0:all_off;1:rx_on;2:tx_on;3:all_on), dumplen");
>

This is not formalized yet, but for driver telemetry commands, what
about using a hierarchy similar to the one logging has, like
/pmd/net/zxdh/* ?

As far as I can see the only other driver using telemetry directly is
'cnxk'; please feel free to sync with Jerin (and cc me) on this format.

> +	rte_telemetry_register_cmd("/zxdh/dumpque",
> +		handle_queue_dump,
> +		"Returns None. Parameter: port id, queid, dump_descnum, logfile(eg /home/que.log)");
> +}
> +RTE_PMD_EXPORT_NAME(net_zxdh, __COUNTER__);
>

If you use 'RTE_PMD_REGISTER_PCI', you can drop the above.

> +RTE_PMD_REGISTER_PCI_TABLE(net_zxdh, pci_id_zxdh_map);
> +RTE_PMD_REGISTER_KMOD_DEP(net_zxdh, "* vfio-pci");
> +RTE_LOG_REGISTER(zxdh_logtype_init, pmd.net.zxdh.init, DEBUG);
> +RTE_LOG_REGISTER(zxdh_logtype_driver, pmd.net.zxdh.driver, INFO);
> +RTE_LOG_REGISTER(zxdh_logtype_zxdh_driver, pmd.net.zxdh.zxdh_driver, DEBUG);
> +RTE_LOG_REGISTER(zxdh_logtype_tx, pmd.net.zxdh.tx, NOTICE);
> +RTE_LOG_REGISTER(zxdh_logtype_rx, pmd.net.zxdh.rx, NOTICE);
> +RTE_LOG_REGISTER(zxdh_logtype_msg, pmd.net.zxdh.msg, INFO);
>

You can use 'RTE_LOG_REGISTER_SUFFIX' instead; it is simpler.
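
For example (illustrative; the build system supplies the
'pmd.net.zxdh' prefix for the suffix form):

	RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_init, init, DEBUG);
	RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_driver, driver, INFO);
	RTE_LOG_REGISTER_SUFFIX(zxdh_logtype_msg, msg, INFO);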

<...>

> +}
> +static void DataHitolo(uint64_t *data)

Please don't use CamelCase in function naming; for example,
'DataHitolo()' could be something like 'zxdh_data_hi_to_lo()'.

<...>

> +struct fd_flow_result {
> +	uint8_t rsv:7;
> +	uint8_t hit_flag:1;
> +	uint8_t rsv0;
> +	uint8_t uplink_flag; /*0:fdid;1:4B fdir;2:8B fdif*/
> +	uint8_t action_idx; /*1:fwd 2:drop*/
> +	rte_le16_t qid;
> +	rte_le16_t vfid;
> +	rte_le32_t uplink_fdid;
> +	uint8_t rsv1[3];
> +	uint8_t fdir_offset;/*����l2 offset*/
>

Please use only readable characters.

I am stopping the review here; I can continue in the next version,
where the driver is split into a patch series.

<...>


Thread overview: 7+ messages
2024-06-03 11:28 wang.junlong1
2024-06-03 14:58 ` Stephen Hemminger
2024-06-06 12:02 ` Junlong Wang
2024-07-05 17:31   ` Ferruh Yigit
2024-06-24 12:31 ` [v2] raw/zxdh: introduce zxdh raw device driver Yong Zhang
2024-07-09  6:00   ` [v3] raw/zxdh:Optimize device resource mapping process Yong Zhang
2024-07-05 17:32 ` Ferruh Yigit [this message]
