* Re: [dpdk-dev] [PATCH v4 07/11] net/hinic/base: add various headers
@ 2019-06-12 14:24 Xuanziyang (William, Chip Application Design Logic and Hardware Development Dept IT_Products & Solutions)
2019-06-14 15:37 ` Ferruh Yigit
0 siblings, 1 reply; 5+ messages in thread
From: Xuanziyang (William, Chip Application Design Logic and Hardware Development Dept IT_Products & Solutions) @ 2019-06-12 14:24 UTC (permalink / raw)
To: Ferruh Yigit, dev
Cc: Wangxiaoyun (Cloud, Network Chip Application Development Dept),
zhouguoyang, Shahar Belkar, stephen, Luoxianjun
> On 6/6/2019 12:06 PM, Ziyang Xuan wrote:
> > Add various headers that define mgmt commands, cmdq commands, rx data
> > structures, tx data structures and basic defines for use in the code.
> >
> > Signed-off-by: Ziyang Xuan <xuanziyang2@huawei.com>
>
> <...>
>
> > +#define PMD_DRV_LOG(level, fmt, args...) \
> > + rte_log(RTE_LOG_ ## level, hinic_logtype, \
> > + HINIC_DRIVER_NAME": " fmt "\n", ##args)
> > +
> > +#define HINIC_ASSERT_EN
> > +
> > +#ifdef HINIC_ASSERT_EN
> > +#define HINIC_ASSERT(exp) \
> > + do { \
> > + if (!(exp)) { \
> > + rte_panic("line%d\tassert \"" #exp "\" failed\n", \
> > + __LINE__); \
> > + } \
> > + } while (0)
> > +#else
> > +#define HINIC_ASSERT(exp) do {} while (0)
> > +#endif
>
> So you are enabling assertions by default, which can cause "rte_panic()"?
>
> Please make sure assertions are disabled by default, and please tie this to the
> "CONFIG_RTE_ENABLE_ASSERT" config option. So if that option is disabled,
> hinic should also disable its assertions.
I checked the places where rte_panic is used; most of them can be handled by normal code
logic to guarantee correctness. I also looked at other PMDs such as mlx5, which call
rte_panic directly rather than through a custom wrapper. So I have removed the custom
encapsulation above and most of the rte_panic usages; the remaining ones call rte_panic
directly, like mlx5 does.
Is that OK?
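
For illustration only, a minimal sketch of that direct style (the helper function and the
error message below are hypothetical, not taken from the patch):

#include <rte_debug.h>	/* rte_panic() */

/* Fatal, unrecoverable condition: call rte_panic() directly instead of
 * going through a driver-private assert wrapper.
 */
static void
hinic_check_db_addr(const void *db_addr)
{
	if (db_addr == NULL)
		rte_panic("hinic: doorbell address is NULL\n");
}

Recoverable conditions would instead return an error and report it through PMD_DRV_LOG().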
* Re: [dpdk-dev] [PATCH v4 07/11] net/hinic/base: add various headers
2019-06-12 14:24 [dpdk-dev] [PATCH v4 07/11] net/hinic/base: add various headers Xuanziyang (William, Chip Application Design Logic and Hardware Development Dept IT_Products & Solutions)
@ 2019-06-14 15:37 ` Ferruh Yigit
0 siblings, 0 replies; 5+ messages in thread
From: Ferruh Yigit @ 2019-06-14 15:37 UTC (permalink / raw)
To: Xuanziyang (William, Chip Application Design Logic and Hardware
    Development Dept IT_Products & Solutions), dev
Cc: Wangxiaoyun (Cloud, Network Chip Application Development Dept),
zhouguoyang, Shahar Belkar, stephen, Luoxianjun
On 6/12/2019 3:24 PM, Xuanziyang (William, Chip Application Design Logic and
Hardware Development Dept IT_Products & Solutions) wrote:
>
>> On 6/6/2019 12:06 PM, Ziyang Xuan wrote:
>>> Add various headers that define mgmt commands, cmdq commands, rx data
>>> structures, tx data structures and basic defines for use in the code.
>>>
>>> Signed-off-by: Ziyang Xuan <xuanziyang2@huawei.com>
>>
>> <...>
>>
>>> +#define PMD_DRV_LOG(level, fmt, args...) \
>>> + rte_log(RTE_LOG_ ## level, hinic_logtype, \
>>> + HINIC_DRIVER_NAME": " fmt "\n", ##args)
>>> +
>>> +#define HINIC_ASSERT_EN
>>> +
>>> +#ifdef HINIC_ASSERT_EN
>>> +#define HINIC_ASSERT(exp) \
>>> + do { \
>>> + if (!(exp)) { \
>>> + rte_panic("line%d\tassert \"" #exp "\" failed\n", \
>>> + __LINE__); \
>>> + } \
>>> + } while (0)
>>> +#else
>>> +#define HINIC_ASSERT(exp) do {} while (0)
>>> +#endif
>>
>> So you are enabling assertions by default, which can cause "rte_panic()"?
>>
>> Please make sure assertions are disabled by default, and please tie this to the
>> "CONFIG_RTE_ENABLE_ASSERT" config option. So if that option is disabled,
>> hinic should also disable its assertions.
>
> I checked the places where rte_panic is used; most of them can be handled by normal code
> logic to guarantee correctness. I also looked at other PMDs such as mlx5, which call
> rte_panic directly rather than through a custom wrapper. So I have removed the custom
> encapsulation above and most of the rte_panic usages; the remaining ones call rte_panic
> directly, like mlx5 does.
>
> Is that OK?
>
Also, does it enable 'HINIC_ASSERT' when the global 'CONFIG_RTE_ENABLE_ASSERT'
config option is enabled?
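
For reference, one possible shape of that tie-in (a sketch only, not taken from any posted
version of the patch) is to gate the driver assert on the global option instead of defining
HINIC_ASSERT_EN unconditionally:

#ifdef RTE_ENABLE_ASSERT
#define HINIC_ASSERT(exp) \
	do { \
		if (!(exp)) { \
			rte_panic("line%d\tassert \"" #exp "\" failed\n", \
				__LINE__); \
		} \
	} while (0)
#else
#define HINIC_ASSERT(exp) do {} while (0)
#endif

With CONFIG_RTE_ENABLE_ASSERT left at its default (disabled), HINIC_ASSERT then compiles
away to nothing.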
* Re: [dpdk-dev] [PATCH v4 07/11] net/hinic/base: add various headers
2019-06-06 11:17 ` [dpdk-dev] [PATCH v4 07/11] net/hinic/base: add various headers Ziyang Xuan
2019-06-06 11:06 ` Ziyang Xuan
@ 2019-06-11 16:04 ` Ferruh Yigit
1 sibling, 0 replies; 5+ messages in thread
From: Ferruh Yigit @ 2019-06-11 16:04 UTC (permalink / raw)
To: Ziyang Xuan, dev
Cc: cloud.wangxiaoyun, zhouguoyang, shahar.belkar, stephen, luoxianjun
On 6/6/2019 12:06 PM, Ziyang Xuan wrote:
> Add various headers that define mgmt commands, cmdq commands,
> rx data structures, tx data structures and basic defines for
> use in the code.
>
> Signed-off-by: Ziyang Xuan <xuanziyang2@huawei.com>
<...>
> +#define PMD_DRV_LOG(level, fmt, args...) \
> + rte_log(RTE_LOG_ ## level, hinic_logtype, \
> + HINIC_DRIVER_NAME": " fmt "\n", ##args)
> +
> +#define HINIC_ASSERT_EN
> +
> +#ifdef HINIC_ASSERT_EN
> +#define HINIC_ASSERT(exp) \
> + do { \
> + if (!(exp)) { \
> + rte_panic("line%d\tassert \"" #exp "\" failed\n", \
> + __LINE__); \
> + } \
> + } while (0)
> +#else
> +#define HINIC_ASSERT(exp) do {} while (0)
> +#endif
So you are enabling assertions by default, which can cause "rte_panic()"?
Please make sure assertions are disabled by default, and please tie this to the
"CONFIG_RTE_ENABLE_ASSERT" config option. So if that option is disabled, hinic
should also disable its assertions.
* [dpdk-dev] [PATCH v4 07/11] net/hinic/base: add various headers
2019-06-06 11:04 [dpdk-dev] [PATCH v4 00/11] A new net PMD - hinic Ziyang Xuan
@ 2019-06-06 11:17 ` Ziyang Xuan
2019-06-06 11:06 ` Ziyang Xuan
2019-06-11 16:04 ` Ferruh Yigit
0 siblings, 2 replies; 5+ messages in thread
From: Ziyang Xuan @ 2019-06-06 11:17 UTC (permalink / raw)
To: dev
Cc: ferruh.yigit, cloud.wangxiaoyun, zhouguoyang, shahar.belkar,
stephen, luoxianjun, Ziyang Xuan
Add various headers that define mgmt commands, cmdq commands,
rx data structures, tx data structures and basic defines for
use in the code.
Signed-off-by: Ziyang Xuan <xuanziyang2@huawei.com>
---
drivers/net/hinic/base/hinic_compat.h | 239 ++++++++++++
drivers/net/hinic/base/hinic_port_cmd.h | 483 ++++++++++++++++++++++++
drivers/net/hinic/base/hinic_qe_def.h | 450 ++++++++++++++++++++++
drivers/net/hinic/hinic_pmd_ethdev.h | 102 +++++
drivers/net/hinic/hinic_pmd_rx.h | 135 +++++++
drivers/net/hinic/hinic_pmd_tx.h | 97 +++++
6 files changed, 1506 insertions(+)
create mode 100644 drivers/net/hinic/base/hinic_compat.h
create mode 100644 drivers/net/hinic/base/hinic_port_cmd.h
create mode 100644 drivers/net/hinic/base/hinic_qe_def.h
create mode 100644 drivers/net/hinic/hinic_pmd_ethdev.h
create mode 100644 drivers/net/hinic/hinic_pmd_rx.h
create mode 100644 drivers/net/hinic/hinic_pmd_tx.h
diff --git a/drivers/net/hinic/base/hinic_compat.h b/drivers/net/hinic/base/hinic_compat.h
new file mode 100644
index 000000000..c5a3ee13b
--- /dev/null
+++ b/drivers/net/hinic/base/hinic_compat.h
@@ -0,0 +1,239 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Huawei Technologies Co., Ltd
+ */
+
+#ifndef _HINIC_COMPAT_H_
+#define _HINIC_COMPAT_H_
+
+#include <stdint.h>
+#include <sys/time.h>
+#include <rte_common.h>
+#include <rte_byteorder.h>
+#include <rte_memzone.h>
+#include <rte_memcpy.h>
+#include <rte_malloc.h>
+#include <rte_atomic.h>
+#include <rte_spinlock.h>
+#include <rte_cycles.h>
+#include <rte_log.h>
+#include <rte_config.h>
+
+typedef uint8_t u8;
+typedef int8_t s8;
+typedef uint16_t u16;
+typedef uint32_t u32;
+typedef int32_t s32;
+typedef uint64_t u64;
+
+#ifndef dma_addr_t
+typedef uint64_t dma_addr_t;
+#endif
+
+#ifndef gfp_t
+#define gfp_t unsigned
+#endif
+
+#ifndef bool
+#define bool int
+#endif
+
+#ifndef FALSE
+#define FALSE (0)
+#endif
+
+#ifndef TRUE
+#define TRUE (1)
+#endif
+
+#ifndef false
+#define false (0)
+#endif
+
+#ifndef true
+#define true (1)
+#endif
+
+#ifndef NULL
+#define NULL ((void *)0)
+#endif
+
+#define HINIC_ERROR (-1)
+#define HINIC_OK (0)
+
+#ifndef BIT
+#define BIT(n) (1 << (n))
+#endif
+
+#define upper_32_bits(n) ((u32)(((n) >> 16) >> 16))
+#define lower_32_bits(n) ((u32)(n))
+
+/* Returns X / Y, rounding up. X must be nonnegative to round correctly. */
+#define DIV_ROUND_UP(X, Y) (((X) + ((Y) - 1)) / (Y))
+
+/* Returns X rounded up to the nearest multiple of Y. */
+#define ROUND_UP(X, Y) (DIV_ROUND_UP(X, Y) * (Y))
+
+#undef ALIGN
+#define ALIGN(x, a) RTE_ALIGN(x, a)
+
+#define PTR_ALIGN(p, a) ((typeof(p))ALIGN((unsigned long)(p), (a)))
+
+/* Reported driver name. */
+#define HINIC_DRIVER_NAME "net_hinic"
+
+extern int hinic_logtype;
+
+#define PMD_DRV_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, hinic_logtype, \
+ HINIC_DRIVER_NAME": " fmt "\n", ##args)
+
+#define HINIC_ASSERT_EN
+
+#ifdef HINIC_ASSERT_EN
+#define HINIC_ASSERT(exp) \
+ do { \
+ if (!(exp)) { \
+ rte_panic("line%d\tassert \"" #exp "\" failed\n", \
+ __LINE__); \
+ } \
+ } while (0)
+#else
+#define HINIC_ASSERT(exp) do {} while (0)
+#endif
+
+#define HINIC_BUG_ON(x) HINIC_ASSERT(!(x))
+
+/* common definition */
+#ifndef ETH_ALEN
+#define ETH_ALEN 6
+#endif
+#define ETH_HLEN 14
+#define ETH_CRC_LEN 4
+#define VLAN_PRIO_SHIFT 13
+#define VLAN_N_VID 4096
+
+/* bit order interface */
+#define cpu_to_be16(o) rte_cpu_to_be_16(o)
+#define cpu_to_be32(o) rte_cpu_to_be_32(o)
+#define cpu_to_be64(o) rte_cpu_to_be_64(o)
+#define cpu_to_le32(o) rte_cpu_to_le_32(o)
+#define be16_to_cpu(o) rte_be_to_cpu_16(o)
+#define be32_to_cpu(o) rte_be_to_cpu_32(o)
+#define be64_to_cpu(o) rte_be_to_cpu_64(o)
+#define le32_to_cpu(o) rte_le_to_cpu_32(o)
+
+/* virt memory and dma phy memory */
+#define __iomem
+#define __force
+#define GFP_KERNEL RTE_MEMZONE_IOVA_CONTIG
+#define HINIC_PAGE_SHIFT 12
+#define HINIC_PAGE_SIZE RTE_PGSIZE_4K
+#define HINIC_MEM_ALLOC_ALIGNE_MIN 8
+
+static inline int hinic_test_bit(int nr, volatile unsigned long *addr)
+{
+ int res;
+
+ rte_mb();
+ res = ((*addr) & (1UL << nr)) != 0;
+ rte_mb();
+ return res;
+}
+
+static inline void hinic_set_bit(unsigned int nr, volatile unsigned long *addr)
+{
+ __sync_fetch_and_or(addr, (1UL << nr));
+}
+
+static inline void hinic_clear_bit(int nr, volatile unsigned long *addr)
+{
+ __sync_fetch_and_and(addr, ~(1UL << nr));
+}
+
+static inline int hinic_test_and_clear_bit(int nr, volatile unsigned long *addr)
+{
+ unsigned long mask = (1UL << nr);
+
+ return __sync_fetch_and_and(addr, ~mask) & mask;
+}
+
+static inline int hinic_test_and_set_bit(int nr, volatile unsigned long *addr)
+{
+ unsigned long mask = (1UL << nr);
+
+ return __sync_fetch_and_or(addr, mask) & mask;
+}
+
+void *dma_zalloc_coherent(void *dev, size_t size, dma_addr_t *dma_handle,
+ gfp_t flag);
+void *dma_zalloc_coherent_aligned(void *dev, size_t size,
+ dma_addr_t *dma_handle, gfp_t flag);
+void *dma_zalloc_coherent_aligned256k(void *dev, size_t size,
+ dma_addr_t *dma_handle, gfp_t flag);
+void dma_free_coherent(void *dev, size_t size, void *virt, dma_addr_t phys);
+
+/* dma pool alloc and free */
+#define pci_pool dma_pool
+#define pci_pool_alloc(pool, flags, handle) dma_pool_alloc(pool, flags, handle)
+#define pci_pool_free(pool, vaddr, addr) dma_pool_free(pool, vaddr, addr)
+
+struct dma_pool *dma_pool_create(const char *name, void *dev, size_t size,
+ size_t align, size_t boundary);
+void dma_pool_destroy(struct dma_pool *pool);
+void *dma_pool_alloc(struct pci_pool *pool, int flags, dma_addr_t *dma_addr);
+void dma_pool_free(struct pci_pool *pool, void *vaddr, dma_addr_t dma);
+
+#define kzalloc(size, flag) rte_zmalloc(NULL, size, HINIC_MEM_ALLOC_ALIGNE_MIN)
+#define kzalloc_aligned(size, flag) rte_zmalloc(NULL, size, RTE_CACHE_LINE_SIZE)
+#define kfree(ptr) rte_free(ptr)
+
+/* mmio interface */
+static inline void writel(u32 value, volatile void *addr)
+{
+ *(volatile u32 *)addr = value;
+}
+
+static inline u32 readl(const volatile void *addr)
+{
+ return *(const volatile u32 *)addr;
+}
+
+#define __raw_writel(value, reg) writel((value), (reg))
+#define __raw_readl(reg) readl((reg))
+
+/* Spinlock related interface */
+#define hinic_spinlock_t rte_spinlock_t
+
+#define spinlock_t rte_spinlock_t
+#define spin_lock_init(spinlock_prt) rte_spinlock_init(spinlock_prt)
+#define spin_lock_deinit(lock)
+#define spin_lock(spinlock_prt) rte_spinlock_lock(spinlock_prt)
+#define spin_unlock(spinlock_prt) rte_spinlock_unlock(spinlock_prt)
+
+static inline unsigned long get_timeofday_ms(void)
+{
+ struct timeval tv;
+
+ (void)gettimeofday(&tv, NULL);
+
+ return (unsigned long)tv.tv_sec * 1000 + tv.tv_usec / 1000;
+}
+
+#define jiffies get_timeofday_ms()
+#define msecs_to_jiffies(ms) (ms)
+#define time_before(now, end) ((now) < (end))
+
+/* misc kernel utils */
+static inline u16 ilog2(u32 n)
+{
+ u16 res = 0;
+
+ while (n > 1) {
+ n >>= 1;
+ res++;
+ }
+
+ return res;
+}
+
+#endif /* _HINIC_COMPAT_H_ */
diff --git a/drivers/net/hinic/base/hinic_port_cmd.h b/drivers/net/hinic/base/hinic_port_cmd.h
new file mode 100644
index 000000000..2af38c55a
--- /dev/null
+++ b/drivers/net/hinic/base/hinic_port_cmd.h
@@ -0,0 +1,483 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Huawei Technologies Co., Ltd
+ */
+
+#ifndef _HINIC_PORT_CMD_H_
+#define _HINIC_PORT_CMD_H_
+
+/* cmd of mgmt CPU message for NIC module */
+enum hinic_port_cmd {
+ HINIC_PORT_CMD_MGMT_RESET = 0x0,
+
+ HINIC_PORT_CMD_CHANGE_MTU = 0x2,
+
+ HINIC_PORT_CMD_ADD_VLAN = 0x3,
+ HINIC_PORT_CMD_DEL_VLAN,
+
+ HINIC_PORT_CMD_SET_PFC = 0x5,
+ HINIC_PORT_CMD_GET_PFC,
+ HINIC_PORT_CMD_SET_ETS,
+ HINIC_PORT_CMD_GET_ETS,
+
+ HINIC_PORT_CMD_SET_MAC = 0x9,
+ HINIC_PORT_CMD_GET_MAC,
+ HINIC_PORT_CMD_DEL_MAC,
+
+ HINIC_PORT_CMD_SET_RX_MODE = 0xc,
+ HINIC_PORT_CMD_SET_ANTI_ATTACK_RATE = 0xd,
+
+ HINIC_PORT_CMD_GET_AUTONEG_CAP = 0xf,
+ /* not defined in base line */
+ HINIC_PORT_CMD_GET_AUTONET_STATE,
+ /* not defined in base line */
+ HINIC_PORT_CMD_GET_SPEED,
+ /* not defined in base line */
+ HINIC_PORT_CMD_GET_DUPLEX,
+ /* not defined in base line */
+ HINIC_PORT_CMD_GET_MEDIA_TYPE,
+ /* not defined in base line */
+
+ HINIC_PORT_CMD_GET_PAUSE_INFO = 0x14,
+ HINIC_PORT_CMD_SET_PAUSE_INFO,
+
+ HINIC_PORT_CMD_GET_LINK_STATE = 0x18,
+ HINIC_PORT_CMD_SET_LRO = 0x19,
+ HINIC_PORT_CMD_SET_RX_CSUM = 0x1a,
+ HINIC_PORT_CMD_SET_RX_VLAN_OFFLOAD = 0x1b,
+
+ HINIC_PORT_CMD_GET_PORT_STATISTICS = 0x1c,
+ HINIC_PORT_CMD_CLEAR_PORT_STATISTICS,
+ HINIC_PORT_CMD_GET_VPORT_STAT,
+ HINIC_PORT_CMD_CLEAN_VPORT_STAT,
+
+ HINIC_PORT_CMD_GET_RSS_TEMPLATE_INDIR_TBL = 0x25,
+ HINIC_PORT_CMD_SET_RSS_TEMPLATE_INDIR_TBL,
+
+ HINIC_PORT_CMD_SET_PORT_ENABLE = 0x29,
+ HINIC_PORT_CMD_GET_PORT_ENABLE,
+
+ HINIC_PORT_CMD_SET_RSS_TEMPLATE_TBL = 0x2b,
+ HINIC_PORT_CMD_GET_RSS_TEMPLATE_TBL,
+ HINIC_PORT_CMD_SET_RSS_HASH_ENGINE,
+ HINIC_PORT_CMD_GET_RSS_HASH_ENGINE,
+ HINIC_PORT_CMD_GET_RSS_CTX_TBL,
+ HINIC_PORT_CMD_SET_RSS_CTX_TBL,
+ HINIC_PORT_CMD_RSS_TEMP_MGR,
+
+ /* 0x36 ~ 0x40 have defined in base line*/
+ HINIC_PORT_CMD_RSS_CFG = 0x42,
+
+ HINIC_PORT_CMD_GET_PHY_TYPE = 0x44,
+ HINIC_PORT_CMD_INIT_FUNC = 0x45,
+ HINIC_PORT_CMD_SET_LLI_PRI = 0x46,
+
+ HINIC_PORT_CMD_GET_LOOPBACK_MODE = 0x48,
+ HINIC_PORT_CMD_SET_LOOPBACK_MODE,
+
+ HINIC_PORT_CMD_GET_JUMBO_FRAME_SIZE = 0x4a,
+ HINIC_PORT_CMD_SET_JUMBO_FRAME_SIZE,
+
+ /* 0x4c ~ 0x57 have defined in base line*/
+
+ HINIC_PORT_CMD_GET_MGMT_VERSION = 0x58,
+ HINIC_PORT_CMD_GET_BOOT_VERSION,
+ HINIC_PORT_CMD_GET_MICROCODE_VERSION,
+
+ HINIC_PORT_CMD_GET_PORT_TYPE = 0x5b,
+
+ /* not defined in base line */
+ HINIC_PORT_CMD_GET_VPORT_ENABLE = 0x5c,
+ HINIC_PORT_CMD_SET_VPORT_ENABLE,
+
+ HINIC_PORT_CMD_GET_PORT_ID_BY_FUNC_ID = 0x5e,
+
+ HINIC_PORT_CMD_SET_LED_TEST = 0x5f,
+
+ HINIC_PORT_CMD_SET_LLI_STATE = 0x60,
+ HINIC_PORT_CMD_SET_LLI_TYPE,
+ HINIC_PORT_CMD_GET_LLI_CFG,
+
+ HINIC_PORT_CMD_GET_LRO = 0x63,
+
+ HINIC_PORT_CMD_GET_DMA_CS = 0x64,
+ HINIC_PORT_CMD_SET_DMA_CS,
+
+ HINIC_PORT_CMD_GET_GLOBAL_QPN = 0x66,
+
+ HINIC_PORT_CMD_SET_PFC_MISC = 0x67,
+ HINIC_PORT_CMD_GET_PFC_MISC,
+
+ HINIC_PORT_CMD_SET_VF_RATE = 0x69,
+ HINIC_PORT_CMD_SET_VF_VLAN,
+ HINIC_PORT_CMD_CLR_VF_VLAN,
+
+ /* 0x6c,0x6e have defined in base line*/
+ HINIC_PORT_CMD_SET_UCAPTURE_OPT = 0x6F,
+
+ HINIC_PORT_CMD_SET_TSO = 0x70,
+ HINIC_PORT_CMD_SET_PHY_POWER = 0x71,
+ HINIC_PORT_CMD_UPDATE_FW = 0x72,
+ HINIC_PORT_CMD_SET_RQ_IQ_MAP = 0x73,
+ /* not defined in base line */
+ HINIC_PORT_CMD_SET_PFC_THD = 0x75,
+ /* not defined in base line */
+
+ HINIC_PORT_CMD_LINK_STATUS_REPORT = 0xa0,
+
+ HINIC_PORT_CMD_SET_LOSSLESS_ETH = 0xa3,
+ HINIC_PORT_CMD_UPDATE_MAC = 0xa4,
+
+ HINIC_PORT_CMD_GET_UART_LOG = 0xa5,
+ HINIC_PORT_CMD_SET_UART_LOG,
+
+ HINIC_PORT_CMD_GET_PORT_INFO = 0xaa,
+
+ HINIC_MISC_SET_FUNC_SF_ENBITS = 0xab,
+ /* not defined in base line */
+ HINIC_MISC_GET_FUNC_SF_ENBITS,
+ /* not defined in base line */
+
+ HINIC_PORT_CMD_GET_SFP_INFO = 0xad,
+ HINIC_PORT_CMD_GET_FW_LOG = 0xca,
+ HINIC_PORT_CMD_SET_IPSU_MAC = 0xcb,
+ HINIC_PORT_CMD_GET_IPSU_MAC = 0xcc,
+
+ HINIC_PORT_CMD_SET_IQ_ENABLE = 0xd6,
+
+ HINIC_PORT_CMD_GET_LINK_MODE = 0xD9,
+ HINIC_PORT_CMD_SET_SPEED = 0xDA,
+ HINIC_PORT_CMD_SET_AUTONEG = 0xDB,
+
+ HINIC_PORT_CMD_CLEAR_QP_RES = 0xDD,
+ HINIC_PORT_CMD_SET_SUPER_CQE = 0xDE,
+ HINIC_PORT_CMD_SET_VF_COS = 0xDF,
+ HINIC_PORT_CMD_GET_VF_COS = 0xE1,
+
+ HINIC_PORT_CMD_CABLE_PLUG_EVENT = 0xE5,
+ HINIC_PORT_CMD_LINK_ERR_EVENT = 0xE6,
+
+ HINIC_PORT_CMD_SET_PORT_FUNCS_STATE = 0xE7,
+ HINIC_PORT_CMD_SET_COS_UP_MAP = 0xE8,
+
+ HINIC_PORT_CMD_RESET_LINK_CFG = 0xEB,
+ HINIC_PORT_CMD_GET_STD_SFP_INFO = 0xF0,
+
+ HINIC_PORT_CMD_FORCE_PKT_DROP = 0xF3,
+ HINIC_PORT_CMD_SET_LRO_TIMER = 0xF4,
+
+ HINIC_PORT_CMD_SET_VHD_CFG = 0xF7,
+ HINIC_PORT_CMD_SET_LINK_FOLLOW = 0xF8,
+ HINIC_PORT_CMD_SET_VF_MAX_MIN_RATE = 0xF9,
+ HINIC_PORT_CMD_SET_RXQ_LRO_ADPT = 0xFA,
+ HINIC_PORT_CMD_SET_Q_FILTER = 0xFC,
+ HINIC_PORT_CMD_SET_VLAN_FILTER = 0xFF
+};
+
+/* cmd of mgmt CPU message for HW module */
+enum hinic_mgmt_cmd {
+ HINIC_MGMT_CMD_RESET_MGMT = 0x0,
+ HINIC_MGMT_CMD_START_FLR = 0x1,
+ HINIC_MGMT_CMD_FLUSH_DOORBELL = 0x2,
+ HINIC_MGMT_CMD_GET_IO_STATUS = 0x3,
+ HINIC_MGMT_CMD_DMA_ATTR_SET = 0x4,
+
+ HINIC_MGMT_CMD_CMDQ_CTXT_SET = 0x10,
+ HINIC_MGMT_CMD_CMDQ_CTXT_GET,
+
+ HINIC_MGMT_CMD_VAT_SET = 0x12,
+ HINIC_MGMT_CMD_VAT_GET,
+
+ HINIC_MGMT_CMD_L2NIC_SQ_CI_ATTR_SET = 0x14,
+ HINIC_MGMT_CMD_L2NIC_SQ_CI_ATTR_GET,
+
+ HINIC_MGMT_CMD_PPF_HT_GPA_SET = 0x23,
+ HINIC_MGMT_CMD_RES_STATE_SET = 0x24,
+ HINIC_MGMT_CMD_FUNC_CACHE_OUT = 0x25,
+ HINIC_MGMT_CMD_FFM_SET = 0x26,
+
+ /* 0x29 not defined in base line,
+ * only used in open source driver
+ */
+ HINIC_MGMT_CMD_FUNC_RES_CLEAR = 0x29,
+
+ HINIC_MGMT_CMD_CEQ_CTRL_REG_WR_BY_UP = 0x33,
+ HINIC_MGMT_CMD_MSI_CTRL_REG_WR_BY_UP,
+ HINIC_MGMT_CMD_MSI_CTRL_REG_RD_BY_UP,
+
+ HINIC_MGMT_CMD_VF_RANDOM_ID_SET = 0x36,
+ HINIC_MGMT_CMD_FAULT_REPORT = 0x37,
+ HINIC_MGMT_CMD_HEART_LOST_REPORT = 0x38,
+
+ HINIC_MGMT_CMD_VPD_SET = 0x40,
+ HINIC_MGMT_CMD_VPD_GET,
+ HINIC_MGMT_CMD_LABEL_SET,
+ HINIC_MGMT_CMD_LABEL_GET,
+ HINIC_MGMT_CMD_SATIC_MAC_SET,
+ HINIC_MGMT_CMD_SATIC_MAC_GET,
+ HINIC_MGMT_CMD_SYNC_TIME = 0x46,
+ HINIC_MGMT_CMD_SET_LED_STATUS = 0x4A,
+ HINIC_MGMT_CMD_L2NIC_RESET = 0x4b,
+ HINIC_MGMT_CMD_FAST_RECYCLE_MODE_SET = 0x4d,
+ HINIC_MGMT_CMD_BIOS_NV_DATA_MGMT = 0x4E,
+ HINIC_MGMT_CMD_ACTIVATE_FW = 0x4F,
+ HINIC_MGMT_CMD_PAGESIZE_SET = 0x50,
+ HINIC_MGMT_CMD_PAGESIZE_GET = 0x51,
+ HINIC_MGMT_CMD_GET_BOARD_INFO = 0x52,
+ HINIC_MGMT_CMD_WATCHDOG_INFO = 0x56,
+ HINIC_MGMT_CMD_FMW_ACT_NTC = 0x57,
+ HINIC_MGMT_CMD_SET_VF_RANDOM_ID = 0x61,
+ HINIC_MGMT_CMD_GET_PPF_STATE = 0x63,
+ HINIC_MGMT_CMD_PCIE_DFX_NTC = 0x65,
+ HINIC_MGMT_CMD_PCIE_DFX_GET = 0x66,
+
+ HINIC_MGMT_CMD_GET_HOST_INFO = 0x67,
+
+ HINIC_MGMT_CMD_GET_PHY_INIT_STATUS = 0x6A,
+ HINIC_MGMT_CMD_GET_HW_PF_INFOS = 0x6D,
+};
+
+/* uCode related commands */
+enum hinic_ucode_cmd {
+ HINIC_UCODE_CMD_MDY_QUEUE_CONTEXT = 0,
+ HINIC_UCODE_CMD_CLEAN_QUEUE_CONTEXT,
+ HINIC_UCODE_CMD_ARM_SQ,
+ HINIC_UCODE_CMD_ARM_RQ,
+ HINIC_UCODE_CMD_SET_RSS_INDIR_TABLE,
+ HINIC_UCODE_CMD_SET_RSS_CONTEXT_TABLE,
+ HINIC_UCODE_CMD_GET_RSS_INDIR_TABLE,
+ HINIC_UCODE_CMD_GET_RSS_CONTEXT_TABLE,
+ HINIC_UCODE_CMD_SET_IQ_ENABLE,
+ HINIC_UCODE_CMD_SET_RQ_FLUSH = 10
+};
+
+enum sq_l4offload_type {
+ OFFLOAD_DISABLE = 0,
+ TCP_OFFLOAD_ENABLE = 1,
+ SCTP_OFFLOAD_ENABLE = 2,
+ UDP_OFFLOAD_ENABLE = 3,
+};
+
+enum sq_vlan_offload_flag {
+ VLAN_OFFLOAD_DISABLE = 0,
+ VLAN_OFFLOAD_ENABLE = 1,
+};
+
+enum sq_pkt_parsed_flag {
+ PKT_NOT_PARSED = 0,
+ PKT_PARSED = 1,
+};
+
+enum sq_l3_type {
+ UNKNOWN_L3TYPE = 0,
+ IPV6_PKT = 1,
+ IPV4_PKT_NO_CHKSUM_OFFLOAD = 2,
+ IPV4_PKT_WITH_CHKSUM_OFFLOAD = 3,
+};
+
+enum sq_md_type {
+ UNKNOWN_MD_TYPE = 0,
+};
+
+enum sq_l2type {
+ ETHERNET = 0,
+};
+
+enum sq_tunnel_l4_type {
+ NOT_TUNNEL,
+ TUNNEL_UDP_NO_CSUM,
+ TUNNEL_UDP_CSUM,
+};
+
+#define NIC_RSS_CMD_TEMP_ALLOC 0x01
+#define NIC_RSS_CMD_TEMP_FREE 0x02
+
+#define HINIC_RSS_TYPE_VALID_SHIFT 23
+#define HINIC_RSS_TYPE_TCP_IPV6_EXT_SHIFT 24
+#define HINIC_RSS_TYPE_IPV6_EXT_SHIFT 25
+#define HINIC_RSS_TYPE_TCP_IPV6_SHIFT 26
+#define HINIC_RSS_TYPE_IPV6_SHIFT 27
+#define HINIC_RSS_TYPE_TCP_IPV4_SHIFT 28
+#define HINIC_RSS_TYPE_IPV4_SHIFT 29
+#define HINIC_RSS_TYPE_UDP_IPV6_SHIFT 30
+#define HINIC_RSS_TYPE_UDP_IPV4_SHIFT 31
+
+#define HINIC_RSS_TYPE_SET(val, member) \
+ (((u32)(val) & 0x1) << HINIC_RSS_TYPE_##member##_SHIFT)
+
+#define HINIC_RSS_TYPE_GET(val, member) \
+ (((u32)(val) >> HINIC_RSS_TYPE_##member##_SHIFT) & 0x1)
+
+enum hinic_speed {
+ HINIC_SPEED_10MB_LINK = 0,
+ HINIC_SPEED_100MB_LINK,
+ HINIC_SPEED_1000MB_LINK,
+ HINIC_SPEED_10GB_LINK,
+ HINIC_SPEED_25GB_LINK,
+ HINIC_SPEED_40GB_LINK,
+ HINIC_SPEED_100GB_LINK,
+ HINIC_SPEED_UNKNOWN = 0xFF,
+};
+
+enum {
+ HINIC_IFLA_VF_LINK_STATE_AUTO, /* link state of the uplink */
+ HINIC_IFLA_VF_LINK_STATE_ENABLE, /* link always up */
+ HINIC_IFLA_VF_LINK_STATE_DISABLE, /* link always down */
+};
+
+#define HINIC_AF0_FUNC_GLOBAL_IDX_SHIFT 0
+#define HINIC_AF0_P2P_IDX_SHIFT 10
+#define HINIC_AF0_PCI_INTF_IDX_SHIFT 14
+#define HINIC_AF0_VF_IN_PF_SHIFT 16
+#define HINIC_AF0_FUNC_TYPE_SHIFT 24
+
+#define HINIC_AF0_FUNC_GLOBAL_IDX_MASK 0x3FF
+#define HINIC_AF0_P2P_IDX_MASK 0xF
+#define HINIC_AF0_PCI_INTF_IDX_MASK 0x3
+#define HINIC_AF0_VF_IN_PF_MASK 0xFF
+#define HINIC_AF0_FUNC_TYPE_MASK 0x1
+
+#define HINIC_AF0_GET(val, member) \
+ (((val) >> HINIC_AF0_##member##_SHIFT) & HINIC_AF0_##member##_MASK)
+
+#define HINIC_AF1_PPF_IDX_SHIFT 0
+#define HINIC_AF1_AEQS_PER_FUNC_SHIFT 8
+#define HINIC_AF1_CEQS_PER_FUNC_SHIFT 12
+#define HINIC_AF1_IRQS_PER_FUNC_SHIFT 20
+#define HINIC_AF1_DMA_ATTR_PER_FUNC_SHIFT 24
+#define HINIC_AF1_MGMT_INIT_STATUS_SHIFT 30
+#define HINIC_AF1_PF_INIT_STATUS_SHIFT 31
+
+#define HINIC_AF1_PPF_IDX_MASK 0x1F
+#define HINIC_AF1_AEQS_PER_FUNC_MASK 0x3
+#define HINIC_AF1_CEQS_PER_FUNC_MASK 0x7
+#define HINIC_AF1_IRQS_PER_FUNC_MASK 0xF
+#define HINIC_AF1_DMA_ATTR_PER_FUNC_MASK 0x7
+#define HINIC_AF1_MGMT_INIT_STATUS_MASK 0x1
+#define HINIC_AF1_PF_INIT_STATUS_MASK 0x1
+
+#define HINIC_AF1_GET(val, member) \
+ (((val) >> HINIC_AF1_##member##_SHIFT) & HINIC_AF1_##member##_MASK)
+
+#define HINIC_AF2_GLOBAL_VF_ID_OF_PF_SHIFT 16
+#define HINIC_AF2_GLOBAL_VF_ID_OF_PF_MASK 0x3FF
+
+#define HINIC_AF2_GET(val, member) \
+ (((val) >> HINIC_AF2_##member##_SHIFT) & HINIC_AF2_##member##_MASK)
+
+#define HINIC_AF4_OUTBOUND_CTRL_SHIFT 0
+#define HINIC_AF4_DOORBELL_CTRL_SHIFT 1
+#define HINIC_AF4_OUTBOUND_CTRL_MASK 0x1
+#define HINIC_AF4_DOORBELL_CTRL_MASK 0x1
+
+#define HINIC_AF4_GET(val, member) \
+ (((val) >> HINIC_AF4_##member##_SHIFT) & HINIC_AF4_##member##_MASK)
+
+#define HINIC_AF4_SET(val, member) \
+ (((val) & HINIC_AF4_##member##_MASK) << HINIC_AF4_##member##_SHIFT)
+
+#define HINIC_AF4_CLEAR(val, member) \
+ ((val) & (~(HINIC_AF4_##member##_MASK << \
+ HINIC_AF4_##member##_SHIFT)))
+
+#define HINIC_AF5_PF_STATUS_SHIFT 0
+#define HINIC_AF5_PF_STATUS_MASK 0xFFFF
+
+#define HINIC_AF5_SET(val, member) \
+ (((val) & HINIC_AF5_##member##_MASK) << HINIC_AF5_##member##_SHIFT)
+
+#define HINIC_AF5_GET(val, member) \
+ (((val) >> HINIC_AF5_##member##_SHIFT) & HINIC_AF5_##member##_MASK)
+
+#define HINIC_AF5_CLEAR(val, member) \
+ ((val) & (~(HINIC_AF5_##member##_MASK << \
+ HINIC_AF5_##member##_SHIFT)))
+
+#define HINIC_PPF_ELECTION_IDX_SHIFT 0
+
+#define HINIC_PPF_ELECTION_IDX_MASK 0x1F
+
+#define HINIC_PPF_ELECTION_SET(val, member) \
+ (((val) & HINIC_PPF_ELECTION_##member##_MASK) << \
+ HINIC_PPF_ELECTION_##member##_SHIFT)
+
+#define HINIC_PPF_ELECTION_GET(val, member) \
+ (((val) >> HINIC_PPF_ELECTION_##member##_SHIFT) & \
+ HINIC_PPF_ELECTION_##member##_MASK)
+
+#define HINIC_PPF_ELECTION_CLEAR(val, member) \
+ ((val) & (~(HINIC_PPF_ELECTION_##member##_MASK \
+ << HINIC_PPF_ELECTION_##member##_SHIFT)))
+
+#define HINIC_MPF_ELECTION_IDX_SHIFT 0
+
+#define HINIC_MPF_ELECTION_IDX_MASK 0x1F
+
+#define HINIC_MPF_ELECTION_SET(val, member) \
+ (((val) & HINIC_MPF_ELECTION_##member##_MASK) << \
+ HINIC_MPF_ELECTION_##member##_SHIFT)
+
+#define HINIC_MPF_ELECTION_GET(val, member) \
+ (((val) >> HINIC_MPF_ELECTION_##member##_SHIFT) & \
+ HINIC_MPF_ELECTION_##member##_MASK)
+
+#define HINIC_MPF_ELECTION_CLEAR(val, member) \
+ ((val) & (~(HINIC_MPF_ELECTION_##member##_MASK \
+ << HINIC_MPF_ELECTION_##member##_SHIFT)))
+
+#define HINIC_HWIF_NUM_AEQS(hwif) ((hwif)->attr.num_aeqs)
+#define HINIC_HWIF_NUM_CEQS(hwif) ((hwif)->attr.num_ceqs)
+#define HINIC_HWIF_NUM_IRQS(hwif) ((hwif)->attr.num_irqs)
+#define HINIC_HWIF_GLOBAL_IDX(hwif) ((hwif)->attr.func_global_idx)
+#define HINIC_HWIF_GLOBAL_VF_OFFSET(hwif) ((hwif)->attr.global_vf_id_of_pf)
+#define HINIC_HWIF_PPF_IDX(hwif) ((hwif)->attr.ppf_idx)
+#define HINIC_PCI_INTF_IDX(hwif) ((hwif)->attr.pci_intf_idx)
+
+#define HINIC_FUNC_TYPE(dev) ((dev)->hwif->attr.func_type)
+#define HINIC_IS_PF(dev) (HINIC_FUNC_TYPE(dev) == TYPE_PF)
+#define HINIC_IS_VF(dev) (HINIC_FUNC_TYPE(dev) == TYPE_VF)
+#define HINIC_IS_PPF(dev) (HINIC_FUNC_TYPE(dev) == TYPE_PPF)
+
+#define DB_IDX(db, db_base) \
+ ((u32)(((unsigned long)(db) - (unsigned long)(db_base)) / \
+ HINIC_DB_PAGE_SIZE))
+
+enum hinic_pcie_nosnoop {
+ HINIC_PCIE_SNOOP = 0,
+ HINIC_PCIE_NO_SNOOP = 1,
+};
+
+enum hinic_pcie_tph {
+ HINIC_PCIE_TPH_DISABLE = 0,
+ HINIC_PCIE_TPH_ENABLE = 1,
+};
+
+enum hinic_outbound_ctrl {
+ ENABLE_OUTBOUND = 0x0,
+ DISABLE_OUTBOUND = 0x1,
+};
+
+enum hinic_doorbell_ctrl {
+ ENABLE_DOORBELL = 0x0,
+ DISABLE_DOORBELL = 0x1,
+};
+
+enum hinic_pf_status {
+ HINIC_PF_STATUS_INIT = 0X0,
+ HINIC_PF_STATUS_ACTIVE_FLAG = 0x11,
+ HINIC_PF_STATUS_FLR_START_FLAG = 0x12,
+ HINIC_PF_STATUS_FLR_FINISH_FLAG = 0x13,
+};
+
+/* total doorbell or direct wqe size is 512kB, db num: 128, dwqe: 128 */
+#define HINIC_DB_DWQE_SIZE 0x00080000
+
+/* db/dwqe page size: 4K */
+#define HINIC_DB_PAGE_SIZE 0x00001000ULL
+
+#define HINIC_DB_MAX_AREAS (HINIC_DB_DWQE_SIZE / HINIC_DB_PAGE_SIZE)
+
+#define HINIC_PCI_MSIX_ENTRY_SIZE 16
+#define HINIC_PCI_MSIX_ENTRY_VECTOR_CTRL 12
+#define HINIC_PCI_MSIX_ENTRY_CTRL_MASKBIT 1
+
+#endif /* _HINIC_PORT_CMD_H_ */
diff --git a/drivers/net/hinic/base/hinic_qe_def.h b/drivers/net/hinic/base/hinic_qe_def.h
new file mode 100644
index 000000000..85a45f72d
--- /dev/null
+++ b/drivers/net/hinic/base/hinic_qe_def.h
@@ -0,0 +1,450 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Huawei Technologies Co., Ltd
+ */
+
+#ifndef _HINIC_QE_DEF_H_
+#define _HINIC_QE_DEF_H_
+
+#define HINIC_SQ_WQEBB_SIZE 64
+#define HINIC_RQ_WQE_SIZE 32
+#define HINIC_SQ_WQEBB_SHIFT 6
+#define HINIC_RQ_WQEBB_SHIFT 5
+
+#define HINIC_MAX_QUEUE_DEPTH 4096
+#define HINIC_MIN_QUEUE_DEPTH 128
+#define HINIC_TXD_ALIGN 1
+#define HINIC_RXD_ALIGN 1
+
+#define HINIC_SQ_DEPTH 1024
+#define HINIC_RQ_DEPTH 1024
+
+#define HINIC_RQ_WQE_MAX_SIZE 32
+
+#define SIZE_8BYTES(size) (ALIGN((u32)(size), 8) >> 3)
+
+/* SQ_CTRL */
+#define SQ_CTRL_BUFDESC_SECT_LEN_SHIFT 0
+#define SQ_CTRL_TASKSECT_LEN_SHIFT 16
+#define SQ_CTRL_DATA_FORMAT_SHIFT 22
+#define SQ_CTRL_LEN_SHIFT 29
+#define SQ_CTRL_OWNER_SHIFT 31
+
+#define SQ_CTRL_BUFDESC_SECT_LEN_MASK 0xFFU
+#define SQ_CTRL_TASKSECT_LEN_MASK 0x1FU
+#define SQ_CTRL_DATA_FORMAT_MASK 0x1U
+#define SQ_CTRL_LEN_MASK 0x3U
+#define SQ_CTRL_OWNER_MASK 0x1U
+
+#define SQ_CTRL_GET(val, member) (((val) >> SQ_CTRL_##member##_SHIFT) \
+ & SQ_CTRL_##member##_MASK)
+
+#define SQ_CTRL_CLEAR(val, member) ((val) & \
+ (~(SQ_CTRL_##member##_MASK << \
+ SQ_CTRL_##member##_SHIFT)))
+
+#define SQ_CTRL_QUEUE_INFO_PLDOFF_SHIFT 2
+#define SQ_CTRL_QUEUE_INFO_UFO_SHIFT 10
+#define SQ_CTRL_QUEUE_INFO_TSO_SHIFT 11
+#define SQ_CTRL_QUEUE_INFO_TCPUDP_CS_SHIFT 12
+#define SQ_CTRL_QUEUE_INFO_MSS_SHIFT 13
+#define SQ_CTRL_QUEUE_INFO_SCTP_SHIFT 27
+#define SQ_CTRL_QUEUE_INFO_UC_SHIFT 28
+#define SQ_CTRL_QUEUE_INFO_PRI_SHIFT 29
+
+#define SQ_CTRL_QUEUE_INFO_PLDOFF_MASK 0xFFU
+#define SQ_CTRL_QUEUE_INFO_UFO_MASK 0x1U
+#define SQ_CTRL_QUEUE_INFO_TSO_MASK 0x1U
+#define SQ_CTRL_QUEUE_INFO_TCPUDP_CS_MASK 0x1U
+#define SQ_CTRL_QUEUE_INFO_MSS_MASK 0x3FFFU
+#define SQ_CTRL_QUEUE_INFO_SCTP_MASK 0x1U
+#define SQ_CTRL_QUEUE_INFO_UC_MASK 0x1U
+#define SQ_CTRL_QUEUE_INFO_PRI_MASK 0x7U
+
+#define SQ_CTRL_QUEUE_INFO_SET(val, member) \
+ (((u32)(val) & SQ_CTRL_QUEUE_INFO_##member##_MASK) \
+ << SQ_CTRL_QUEUE_INFO_##member##_SHIFT)
+
+#define SQ_CTRL_QUEUE_INFO_GET(val, member) \
+ (((val) >> SQ_CTRL_QUEUE_INFO_##member##_SHIFT) \
+ & SQ_CTRL_QUEUE_INFO_##member##_MASK)
+
+#define SQ_CTRL_QUEUE_INFO_CLEAR(val, member) \
+ ((val) & (~(SQ_CTRL_QUEUE_INFO_##member##_MASK << \
+ SQ_CTRL_QUEUE_INFO_##member##_SHIFT)))
+
+#define SQ_TASK_INFO0_L2HDR_LEN_SHIFT 0
+#define SQ_TASK_INFO0_L4OFFLOAD_SHIFT 8
+#define SQ_TASK_INFO0_INNER_L3TYPE_SHIFT 10
+#define SQ_TASK_INFO0_VLAN_OFFLOAD_SHIFT 12
+#define SQ_TASK_INFO0_PARSE_FLAG_SHIFT 13
+#define SQ_TASK_INFO0_UFO_AVD_SHIFT 14
+#define SQ_TASK_INFO0_TSO_UFO_SHIFT 15
+#define SQ_TASK_INFO0_VLAN_TAG_SHIFT 16
+
+#define SQ_TASK_INFO0_L2HDR_LEN_MASK 0xFFU
+#define SQ_TASK_INFO0_L4OFFLOAD_MASK 0x3U
+#define SQ_TASK_INFO0_INNER_L3TYPE_MASK 0x3U
+#define SQ_TASK_INFO0_VLAN_OFFLOAD_MASK 0x1U
+#define SQ_TASK_INFO0_PARSE_FLAG_MASK 0x1U
+#define SQ_TASK_INFO0_UFO_AVD_MASK 0x1U
+#define SQ_TASK_INFO0_TSO_UFO_MASK 0x1U
+#define SQ_TASK_INFO0_VLAN_TAG_MASK 0xFFFFU
+
+#define SQ_TASK_INFO0_SET(val, member) \
+ (((u32)(val) & SQ_TASK_INFO0_##member##_MASK) << \
+ SQ_TASK_INFO0_##member##_SHIFT)
+#define SQ_TASK_INFO0_GET(val, member) \
+ (((val) >> SQ_TASK_INFO0_##member##_SHIFT) & \
+ SQ_TASK_INFO0_##member##_MASK)
+
+#define SQ_TASK_INFO1_MD_TYPE_SHIFT 8
+#define SQ_TASK_INFO1_INNER_L4LEN_SHIFT 16
+#define SQ_TASK_INFO1_INNER_L3LEN_SHIFT 24
+
+#define SQ_TASK_INFO1_MD_TYPE_MASK 0xFFU
+#define SQ_TASK_INFO1_INNER_L4LEN_MASK 0xFFU
+#define SQ_TASK_INFO1_INNER_L3LEN_MASK 0xFFU
+
+#define SQ_TASK_INFO1_SET(val, member) \
+ (((val) & SQ_TASK_INFO1_##member##_MASK) << \
+ SQ_TASK_INFO1_##member##_SHIFT)
+#define SQ_TASK_INFO1_GET(val, member) \
+ (((val) >> SQ_TASK_INFO1_##member##_SHIFT) & \
+ SQ_TASK_INFO1_##member##_MASK)
+
+#define SQ_TASK_INFO2_TUNNEL_L4LEN_SHIFT 0
+#define SQ_TASK_INFO2_OUTER_L3LEN_SHIFT 8
+#define SQ_TASK_INFO2_TUNNEL_L4TYPE_SHIFT 16
+#define SQ_TASK_INFO2_OUTER_L3TYPE_SHIFT 24
+
+#define SQ_TASK_INFO2_TUNNEL_L4LEN_MASK 0xFFU
+#define SQ_TASK_INFO2_OUTER_L3LEN_MASK 0xFFU
+#define SQ_TASK_INFO2_TUNNEL_L4TYPE_MASK 0x7U
+#define SQ_TASK_INFO2_OUTER_L3TYPE_MASK 0x3U
+
+#define SQ_TASK_INFO2_SET(val, member) \
+ (((val) & SQ_TASK_INFO2_##member##_MASK) << \
+ SQ_TASK_INFO2_##member##_SHIFT)
+#define SQ_TASK_INFO2_GET(val, member) \
+ (((val) >> SQ_TASK_INFO2_##member##_SHIFT) & \
+ SQ_TASK_INFO2_##member##_MASK)
+
+#define SQ_TASK_INFO4_L2TYPE_SHIFT 31
+
+#define SQ_TASK_INFO4_L2TYPE_MASK 0x1U
+
+#define SQ_TASK_INFO4_SET(val, member) \
+ (((u32)(val) & SQ_TASK_INFO4_##member##_MASK) << \
+ SQ_TASK_INFO4_##member##_SHIFT)
+
+/* SQ_DB */
+#define SQ_DB_OFF 0x00000800
+#define SQ_DB_INFO_HI_PI_SHIFT 0
+#define SQ_DB_INFO_QID_SHIFT 8
+#define SQ_DB_INFO_CFLAG_SHIFT 23
+#define SQ_DB_INFO_COS_SHIFT 24
+#define SQ_DB_INFO_TYPE_SHIFT 27
+#define SQ_DB_INFO_HI_PI_MASK 0xFFU
+#define SQ_DB_INFO_QID_MASK 0x3FFU
+#define SQ_DB_INFO_CFLAG_MASK 0x1U
+#define SQ_DB_INFO_COS_MASK 0x7U
+#define SQ_DB_INFO_TYPE_MASK 0x1FU
+#define SQ_DB_INFO_SET(val, member) (((u32)(val) & \
+ SQ_DB_INFO_##member##_MASK) << \
+ SQ_DB_INFO_##member##_SHIFT)
+
+#define SQ_DB_PI_LOW_MASK 0xFF
+#define SQ_DB_PI_LOW(pi) ((pi) & SQ_DB_PI_LOW_MASK)
+#define SQ_DB_PI_HI_SHIFT 8
+#define SQ_DB_PI_HIGH(pi) ((pi) >> SQ_DB_PI_HI_SHIFT)
+#define SQ_DB_ADDR(sq, pi) ((u64 *)((u8 __iomem *)((sq)->db_addr) + \
+ SQ_DB_OFF) + SQ_DB_PI_LOW(pi))
+#define SQ_DB 1
+#define SQ_CFLAG_DP 0 /* CFLAG_DATA_PATH */
+
+/* RQ_CTRL */
+#define RQ_CTRL_BUFDESC_SECT_LEN_SHIFT 0
+#define RQ_CTRL_COMPLETE_FORMAT_SHIFT 15
+#define RQ_CTRL_COMPLETE_LEN_SHIFT 27
+#define RQ_CTRL_LEN_SHIFT 29
+
+#define RQ_CTRL_BUFDESC_SECT_LEN_MASK 0xFFU
+#define RQ_CTRL_COMPLETE_FORMAT_MASK 0x1U
+#define RQ_CTRL_COMPLETE_LEN_MASK 0x3U
+#define RQ_CTRL_LEN_MASK 0x3U
+
+#define RQ_CTRL_SET(val, member) (((val) & \
+ RQ_CTRL_##member##_MASK) << \
+ RQ_CTRL_##member##_SHIFT)
+
+#define RQ_CTRL_GET(val, member) (((val) >> \
+ RQ_CTRL_##member##_SHIFT) & \
+ RQ_CTRL_##member##_MASK)
+
+#define RQ_CTRL_CLEAR(val, member) ((val) & \
+ (~(RQ_CTRL_##member##_MASK << \
+ RQ_CTRL_##member##_SHIFT)))
+
+#define RQ_CQE_STATUS_CSUM_ERR_SHIFT 0
+#define RQ_CQE_STATUS_NUM_LRO_SHIFT 16
+#define RQ_CQE_STATUS_LRO_PUSH_SHIFT 25
+#define RQ_CQE_STATUS_LRO_ENTER_SHIFT 26
+#define RQ_CQE_STATUS_LRO_INTR_SHIFT 27
+
+#define RQ_CQE_STATUS_BP_EN_SHIFT 30
+#define RQ_CQE_STATUS_RXDONE_SHIFT 31
+#define RQ_CQE_STATUS_FLUSH_SHIFT 28
+
+#define RQ_CQE_STATUS_CSUM_ERR_MASK 0xFFFFU
+#define RQ_CQE_STATUS_NUM_LRO_MASK 0xFFU
+#define RQ_CQE_STATUS_LRO_PUSH_MASK 0X1U
+#define RQ_CQE_STATUS_LRO_ENTER_MASK 0X1U
+#define RQ_CQE_STATUS_LRO_INTR_MASK 0X1U
+#define RQ_CQE_STATUS_BP_EN_MASK 0X1U
+#define RQ_CQE_STATUS_RXDONE_MASK 0x1U
+#define RQ_CQE_STATUS_FLUSH_MASK 0x1U
+
+#define RQ_CQE_STATUS_GET(val, member) (((val) >> \
+ RQ_CQE_STATUS_##member##_SHIFT) & \
+ RQ_CQE_STATUS_##member##_MASK)
+
+#define RQ_CQE_STATUS_CLEAR(val, member) ((val) & \
+ (~(RQ_CQE_STATUS_##member##_MASK << \
+ RQ_CQE_STATUS_##member##_SHIFT)))
+
+#define RQ_CQE_SGE_VLAN_SHIFT 0
+#define RQ_CQE_SGE_LEN_SHIFT 16
+
+#define RQ_CQE_SGE_VLAN_MASK 0xFFFFU
+#define RQ_CQE_SGE_LEN_MASK 0xFFFFU
+
+#define RQ_CQE_SGE_GET(val, member) (((val) >> \
+ RQ_CQE_SGE_##member##_SHIFT) & \
+ RQ_CQE_SGE_##member##_MASK)
+
+#define RQ_CQE_PKT_NUM_SHIFT 1
+#define RQ_CQE_PKT_FIRST_LEN_SHIFT 19
+#define RQ_CQE_PKT_LAST_LEN_SHIFT 6
+#define RQ_CQE_SUPER_CQE_EN_SHIFT 0
+
+#define RQ_CQE_PKT_FIRST_LEN_MASK 0x1FFFU
+#define RQ_CQE_PKT_LAST_LEN_MASK 0x1FFFU
+#define RQ_CQE_PKT_NUM_MASK 0x1FU
+#define RQ_CQE_SUPER_CQE_EN_MASK 0x1
+
+#define RQ_CQE_PKT_NUM_GET(val, member) (((val) >> \
+ RQ_CQE_PKT_##member##_SHIFT) & \
+ RQ_CQE_PKT_##member##_MASK)
+#define HINIC_GET_RQ_CQE_PKT_NUM(pkt_info) RQ_CQE_PKT_NUM_GET(pkt_info, NUM)
+
+#define RQ_CQE_SUPER_CQE_EN_GET(val, member) (((val) >> \
+ RQ_CQE_##member##_SHIFT) & \
+ RQ_CQE_##member##_MASK)
+#define HINIC_GET_SUPER_CQE_EN(pkt_info) \
+ RQ_CQE_SUPER_CQE_EN_GET(pkt_info, SUPER_CQE_EN)
+
+#define HINIC_GET_SUPER_CQE_EN_BE(pkt_info) ((pkt_info) & 0x1000000U)
+#define RQ_CQE_PKT_LEN_GET(val, member) (((val) >> \
+ RQ_CQE_PKT_##member##_SHIFT) & \
+ RQ_CQE_PKT_##member##_MASK)
+
+#define RQ_CQE_OFFOLAD_TYPE_VLAN_EN_SHIFT 21
+#define RQ_CQE_OFFOLAD_TYPE_VLAN_EN_MASK 0x1U
+
+#define RQ_CQE_OFFOLAD_TYPE_PKT_TYPE_SHIFT 0
+#define RQ_CQE_OFFOLAD_TYPE_PKT_TYPE_MASK 0xFFFU
+
+#define RQ_CQE_OFFOLAD_TYPE_PKT_UMBCAST_SHIFT 19
+#define RQ_CQE_OFFOLAD_TYPE_PKT_UMBCAST_MASK 0x3U
+
+#define RQ_CQE_OFFOLAD_TYPE_RSS_TYPE_SHIFT 24
+#define RQ_CQE_OFFOLAD_TYPE_RSS_TYPE_MASK 0xFFU
+
+#define RQ_CQE_OFFOLAD_TYPE_GET(val, member) (((val) >> \
+ RQ_CQE_OFFOLAD_TYPE_##member##_SHIFT) & \
+ RQ_CQE_OFFOLAD_TYPE_##member##_MASK)
+
+#define RQ_CQE_PKT_TYPES_NON_L2_MASK 0x800U
+#define RQ_CQE_PKT_TYPES_L2_MASK 0x7FU
+
+#define RQ_CQE_STATUS_CSUM_BYPASS_VAL 0x80U
+#define RQ_CQE_STATUS_CSUM_ERR_IP_MASK 0x39U
+#define RQ_CQE_STATUS_CSUM_ERR_L4_MASK 0x46U
+#define RQ_CQE_STATUS_CSUM_ERR_OTHER 0x100U
+
+#define SECT_SIZE_BYTES(size) ((size) << 3)
+
+#define HINIC_PF_SET_VF_ALREADY 0x4
+
+#define WQS_BLOCKS_PER_PAGE 4
+
+#define WQ_SIZE(wq) (u32)((u64)(wq)->q_depth * (wq)->wqebb_size)
+
+#define WQE_PAGE_NUM(wq, idx) (((idx) >> ((wq)->wqebbs_per_page_shift)) & \
+ ((wq)->num_q_pages - 1))
+
+#define WQE_PAGE_OFF(wq, idx) ((u64)((wq)->wqebb_size) * \
+ ((idx) & ((wq)->num_wqebbs_per_page - 1)))
+
+#define WQ_PAGE_ADDR_SIZE sizeof(u64)
+#define WQ_PAGE_ADDR_SIZE_SHIFT 3
+#define WQ_PAGE_ADDR(wq, idx) \
+ (u8 *)(*(u64 *)((u64)((wq)->shadow_block_vaddr) + \
+ (WQE_PAGE_NUM(wq, idx) << WQ_PAGE_ADDR_SIZE_SHIFT)))
+
+#define WQ_BLOCK_SIZE 4096UL
+#define WQS_PAGE_SIZE (WQS_BLOCKS_PER_PAGE * WQ_BLOCK_SIZE)
+#define WQ_MAX_PAGES (WQ_BLOCK_SIZE >> WQ_PAGE_ADDR_SIZE_SHIFT)
+
+#define CMDQ_BLOCKS_PER_PAGE 8
+#define CMDQ_BLOCK_SIZE 512UL
+#define CMDQ_PAGE_SIZE ALIGN((CMDQ_BLOCKS_PER_PAGE * \
+ CMDQ_BLOCK_SIZE), PAGE_SIZE)
+
+#define ADDR_4K_ALIGNED(addr) (0 == ((addr) & 0xfff))
+#define ADDR_256K_ALIGNED(addr) (0 == ((addr) & 0x3ffff))
+
+#define WQ_BASE_VADDR(wqs, wq) \
+ (u64 *)(((u64)((wqs)->page_vaddr[(wq)->page_idx])) \
+ + (wq)->block_idx * WQ_BLOCK_SIZE)
+
+#define WQ_BASE_PADDR(wqs, wq) (((wqs)->page_paddr[(wq)->page_idx]) \
+ + (u64)(wq)->block_idx * WQ_BLOCK_SIZE)
+
+#define WQ_BASE_ADDR(wqs, wq) \
+ (u64 *)(((u64)((wqs)->shadow_page_vaddr[(wq)->page_idx])) \
+ + (wq)->block_idx * WQ_BLOCK_SIZE)
+
+#define CMDQ_BASE_VADDR(cmdq_pages, wq) \
+ (u64 *)(((u64)((cmdq_pages)->cmdq_page_vaddr)) \
+ + (wq)->block_idx * CMDQ_BLOCK_SIZE)
+
+#define CMDQ_BASE_PADDR(cmdq_pages, wq) \
+ (((u64)((cmdq_pages)->cmdq_page_paddr)) \
+ + (u64)(wq)->block_idx * CMDQ_BLOCK_SIZE)
+
+#define CMDQ_BASE_ADDR(cmdq_pages, wq) \
+ (u64 *)(((u64)((cmdq_pages)->cmdq_shadow_page_vaddr)) \
+ + (wq)->block_idx * CMDQ_BLOCK_SIZE)
+
+#define MASKED_WQE_IDX(wq, idx) ((idx) & (wq)->mask)
+
+#define WQE_SHADOW_PAGE(wq, wqe) \
+ (u16)(((unsigned long)(wqe) - (unsigned long)(wq)->shadow_wqe) \
+ / (wq)->max_wqe_size)
+
+#define WQE_IN_RANGE(wqe, start, end) \
+ (((unsigned long)(wqe) >= (unsigned long)(start)) && \
+ ((unsigned long)(wqe) < (unsigned long)(end)))
+
+#define WQ_NUM_PAGES(num_wqs) \
+ (ALIGN((u32)num_wqs, WQS_BLOCKS_PER_PAGE) / WQS_BLOCKS_PER_PAGE)
+
+/* Queue buffer related define */
+enum hinic_rx_buf_size {
+ HINIC_RX_BUF_SIZE_32B = 0x20,
+ HINIC_RX_BUF_SIZE_64B = 0x40,
+ HINIC_RX_BUF_SIZE_96B = 0x60,
+ HINIC_RX_BUF_SIZE_128B = 0x80,
+ HINIC_RX_BUF_SIZE_192B = 0xC0,
+ HINIC_RX_BUF_SIZE_256B = 0x100,
+ HINIC_RX_BUF_SIZE_384B = 0x180,
+ HINIC_RX_BUF_SIZE_512B = 0x200,
+ HINIC_RX_BUF_SIZE_768B = 0x300,
+ HINIC_RX_BUF_SIZE_1K = 0x400,
+ HINIC_RX_BUF_SIZE_1_5K = 0x600,
+ HINIC_RX_BUF_SIZE_2K = 0x800,
+ HINIC_RX_BUF_SIZE_3K = 0xC00,
+ HINIC_RX_BUF_SIZE_4K = 0x1000,
+ HINIC_RX_BUF_SIZE_8K = 0x2000,
+ HINIC_RX_BUF_SIZE_16K = 0x4000,
+};
+
+enum hinic_res_state {
+ HINIC_RES_CLEAN = 0,
+ HINIC_RES_ACTIVE = 1,
+};
+
+#define DEFAULT_RX_BUF_SIZE ((u16)0xB)
+
+#define BUF_DESC_SIZE_SHIFT 4
+
+#define HINIC_SQ_WQE_SIZE(num_sge) \
+ (sizeof(struct hinic_sq_ctrl) + \
+ sizeof(struct hinic_sq_task) + \
+ (unsigned int)((num_sge) << BUF_DESC_SIZE_SHIFT))
+
+#define HINIC_SQ_WQEBB_CNT(num_sge) \
+ (int)(ALIGN(HINIC_SQ_WQE_SIZE((u32)num_sge), \
+ HINIC_SQ_WQEBB_SIZE) >> HINIC_SQ_WQEBB_SHIFT)
+
+#define HINIC_GET_RX_VLAN_OFFLOAD_EN(offload_type) \
+ RQ_CQE_OFFOLAD_TYPE_GET(offload_type, VLAN_EN)
+
+#define HINIC_GET_RSS_TYPES(offload_type) \
+ RQ_CQE_OFFOLAD_TYPE_GET(offload_type, RSS_TYPE)
+
+#define HINIC_GET_PKT_TYPES(offload_type) \
+ RQ_CQE_OFFOLAD_TYPE_GET(offload_type, PKT_TYPE)
+
+#define HINIC_GET_RX_PKT_TYPE(offload_type) \
+ RQ_CQE_OFFOLAD_TYPE_GET(offload_type, PKT_TYPE)
+
+#define HINIC_GET_RX_PKT_UMBCAST(offload_type) \
+ RQ_CQE_OFFOLAD_TYPE_GET(offload_type, PKT_UMBCAST)
+
+
+#define HINIC_GET_RX_VLAN_TAG(vlan_len) \
+ RQ_CQE_SGE_GET(vlan_len, VLAN)
+
+#define HINIC_GET_RX_PKT_LEN(vlan_len) \
+ RQ_CQE_SGE_GET(vlan_len, LEN)
+
+#define HINIC_GET_RX_CSUM_ERR(status) \
+ RQ_CQE_STATUS_GET(status, CSUM_ERR)
+
+#define HINIC_GET_RX_DONE(status) \
+ RQ_CQE_STATUS_GET(status, RXDONE)
+
+#define HINIC_GET_RX_FLUSH(status) \
+ RQ_CQE_STATUS_GET(status, FLUSH)
+
+#define HINIC_GET_RX_BP_EN(status) \
+ RQ_CQE_STATUS_GET(status, BP_EN)
+
+#define HINIC_GET_RX_NUM_LRO(status) \
+ RQ_CQE_STATUS_GET(status, NUM_LRO)
+
+#define HINIC_PKT_TYPES_UNKNOWN(pkt_types) \
+ ((pkt_types) & RQ_CQE_PKT_TYPES_NON_L2_MASK)
+
+#define HINIC_PKT_TYPES_L2(pkt_types) \
+ ((pkt_types) & RQ_CQE_PKT_TYPES_L2_MASK)
+
+#define HINIC_CSUM_ERR_BYPASSED(csum_err) \
+ ((csum_err) == RQ_CQE_STATUS_CSUM_BYPASS_VAL)
+
+#define HINIC_CSUM_ERR_IP(csum_err) \
+ ((csum_err) & RQ_CQE_STATUS_CSUM_ERR_IP_MASK)
+
+#define HINIC_CSUM_ERR_L4(csum_err) \
+ ((csum_err) & RQ_CQE_STATUS_CSUM_ERR_L4_MASK)
+
+#define HINIC_CSUM_ERR_OTHER(csum_err) \
+ ((csum_err) == RQ_CQE_STATUS_CSUM_ERR_OTHER)
+
+#define TX_MSS_DEFAULT 0x3E00
+#define TX_MSS_MIN 0x50
+
+enum sq_wqe_type {
+ SQ_NORMAL_WQE = 0,
+};
+
+enum rq_completion_fmt {
+ RQ_COMPLETE_SGE = 1
+};
+
+#define HINIC_VLAN_FILTER_EN (1U << 0)
+#define HINIC_BROADCAST_FILTER_EX_EN (1U << 1)
+
+#endif /* _HINIC_QE_DEF_H_ */
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.h b/drivers/net/hinic/hinic_pmd_ethdev.h
new file mode 100644
index 000000000..4b0555e89
--- /dev/null
+++ b/drivers/net/hinic/hinic_pmd_ethdev.h
@@ -0,0 +1,102 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Huawei Technologies Co., Ltd
+ */
+
+#ifndef _HINIC_PMD_ETHDEV_H_
+#define _HINIC_PMD_ETHDEV_H_
+
+#include "base/hinic_pmd_dpdev.h"
+
+#define PMD_DRIVER_VERSION "2.0.0.1"
+
+/* Vendor ID used by Huawei devices */
+#define HINIC_HUAWEI_VENDOR_ID 0x19E5
+
+/* Hinic devices */
+#define HINIC_DEV_ID_PRD 0x1822
+#define HINIC_DEV_ID_MEZZ_25GE 0x0210
+#define HINIC_DEV_ID_MEZZ_40GE 0x020D
+#define HINIC_DEV_ID_MEZZ_100GE 0x0205
+
+#define HINIC_PMD_DEV_BOND (1)
+#define HINIC_PMD_DEV_EMPTY (-1)
+#define HINIC_DEV_NAME_MAX_LEN (32)
+
+#define HINIC_RSS_OFFLOAD_ALL ( \
+ ETH_RSS_IPV4 | \
+ ETH_RSS_FRAG_IPV4 |\
+ ETH_RSS_NONFRAG_IPV4_TCP | \
+ ETH_RSS_NONFRAG_IPV4_UDP | \
+ ETH_RSS_IPV6 | \
+ ETH_RSS_FRAG_IPV6 | \
+ ETH_RSS_NONFRAG_IPV6_TCP | \
+ ETH_RSS_NONFRAG_IPV6_UDP | \
+ ETH_RSS_IPV6_EX | \
+ ETH_RSS_IPV6_TCP_EX | \
+ ETH_RSS_IPV6_UDP_EX)
+
+#define HINIC_MTU_TO_PKTLEN(mtu) \
+ ((mtu) + ETH_HLEN + ETH_CRC_LEN)
+
+#define HINIC_PKTLEN_TO_MTU(pktlen) \
+ ((pktlen) - (ETH_HLEN + ETH_CRC_LEN))
+
+/* vhd type */
+#define HINIC_VHD_TYPE_0B (2)
+#define HINIC_VHD_TYPE_10B (1)
+#define HINIC_VHD_TYPE_12B (0)
+
+/* vlan_id is a 12 bit number.
+ * The VFTA array is actually a 4096 bit array, 128 of 32bit elements.
+ * 2^5 = 32. The val of lower 5 bits specifies the bit in the 32bit element.
+ * The higher 7 bit val specifies VFTA array index.
+ */
+#define HINIC_VFTA_BIT(vlan_id) (1 << ((vlan_id) & 0x1F))
+#define HINIC_VFTA_IDX(vlan_id) ((vlan_id) >> 5)
+
+#define HINIC_INTR_CB_UNREG_MAX_RETRIES 10
+
+/* eth_dev ops */
+int hinic_dev_configure(struct rte_eth_dev *dev);
+void hinic_dev_infos_get(struct rte_eth_dev *dev,
+ struct rte_eth_dev_info *dev_info);
+int hinic_dev_start(struct rte_eth_dev *dev);
+int hinic_link_update(struct rte_eth_dev *dev, int wait_to_complete);
+void hinic_rx_queue_release(void *queue);
+void hinic_tx_queue_release(void *queue);
+void hinic_dev_stop(struct rte_eth_dev *dev);
+void hinic_dev_close(struct rte_eth_dev *dev);
+int hinic_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats);
+void hinic_dev_stats_reset(struct rte_eth_dev *dev);
+void hinic_dev_xstats_reset(struct rte_eth_dev *dev);
+void hinic_dev_promiscuous_enable(struct rte_eth_dev *dev);
+void hinic_dev_promiscuous_disable(struct rte_eth_dev *dev);
+
+int hinic_vlan_offload_set(struct rte_eth_dev *dev, int mask);
+int hinic_link_event_process(struct rte_eth_dev *dev, u8 status);
+void hinic_disable_interrupt(struct rte_eth_dev *dev);
+void hinic_free_all_sq(struct hinic_nic_dev *nic_dev);
+void hinic_free_all_rq(struct hinic_nic_dev *nic_dev);
+
+int hinic_rxtx_configure(struct rte_eth_dev *dev);
+int hinic_rss_hash_update(struct rte_eth_dev *dev,
+ struct rte_eth_rss_conf *rss_conf);
+int hinic_rss_conf_get(struct rte_eth_dev *dev,
+ struct rte_eth_rss_conf *rss_conf);
+int hinic_rss_indirtbl_update(struct rte_eth_dev *dev,
+ struct rte_eth_rss_reta_entry64 *reta_conf,
+ uint16_t reta_size);
+int hinic_rss_indirtbl_query(struct rte_eth_dev *dev,
+ struct rte_eth_rss_reta_entry64 *reta_conf,
+ uint16_t reta_size);
+
+int hinic_dev_xstats_get(struct rte_eth_dev *dev,
+ struct rte_eth_xstat *xstats, unsigned int n);
+int hinic_dev_xstats_get_names(struct rte_eth_dev *dev,
+ struct rte_eth_xstat_name *xstats_names,
+ __rte_unused unsigned int limit);
+
+int hinic_fw_version_get(struct rte_eth_dev *dev,
+ char *fw_version, size_t fw_size);
+
+#endif /* _HINIC_PMD_ETHDEV_H_ */
diff --git a/drivers/net/hinic/hinic_pmd_rx.h b/drivers/net/hinic/hinic_pmd_rx.h
new file mode 100644
index 000000000..4d3fc2722
--- /dev/null
+++ b/drivers/net/hinic/hinic_pmd_rx.h
@@ -0,0 +1,135 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Huawei Technologies Co., Ltd
+ */
+
+#ifndef _HINIC_PMD_RX_H_
+#define _HINIC_PMD_RX_H_
+
+/* rxq wq operations */
+#define HINIC_GET_RQ_WQE_MASK(rxq) \
+ ((rxq)->wq->mask)
+
+#define HINIC_GET_RQ_LOCAL_CI(rxq) \
+ (((rxq)->wq->cons_idx) & HINIC_GET_RQ_WQE_MASK(rxq))
+
+#define HINIC_GET_RQ_LOCAL_PI(rxq) \
+ (((rxq)->wq->prod_idx) & HINIC_GET_RQ_WQE_MASK(rxq))
+
+#define HINIC_UPDATE_RQ_LOCAL_CI(rxq, wqebb_cnt) \
+ do { \
+ (rxq)->wq->cons_idx += (wqebb_cnt); \
+ (rxq)->wq->delta += (wqebb_cnt); \
+ } while (0)
+
+#define HINIC_GET_RQ_FREE_WQEBBS(rxq) \
+ ((rxq)->wq->delta - 1)
+
+#define HINIC_UPDATE_RQ_HW_PI(rxq, pi) \
+ (*((rxq)->pi_virt_addr) = \
+ cpu_to_be16((pi) & HINIC_GET_RQ_WQE_MASK(rxq)))
+
+/* rxq cqe done and status bit */
+#define HINIC_GET_RX_DONE_BE(status) \
+ ((status) & 0x80U)
+
+#define HINIC_GET_RX_FLUSH_BE(status) \
+ ((status) & 0x10U)
+
+#define HINIC_DEFAULT_RX_FREE_THRESH 32
+
+#define HINIC_RX_CSUM_OFFLOAD_EN 0xFFF
+
+struct hinic_rxq_stats {
+ u64 packets;
+ u64 bytes;
+ u64 rx_nombuf;
+ u64 errors;
+ u64 rx_discards;
+
+#ifdef HINIC_XSTAT_MBUF_USE
+ u64 alloc_mbuf;
+ u64 free_mbuf;
+ u64 left_mbuf;
+#endif
+
+#ifdef HINIC_XSTAT_RXBUF_INFO
+ u64 rx_mbuf;
+ u64 rx_avail;
+ u64 rx_hole;
+ u64 burst_pkts;
+#endif
+
+#ifdef HINIC_XSTAT_PROF_RX
+ u64 app_tsc;
+ u64 pmd_tsc;
+#endif
+};
+
+/* Attention, Do not add any member in hinic_rx_info
+ * as rxq bulk rearm mode will write mbuf in rx_info
+ */
+struct hinic_rx_info {
+ struct rte_mbuf *mbuf;
+};
+
+struct hinic_rxq {
+ struct hinic_wq *wq;
+ volatile u16 *pi_virt_addr;
+
+ u16 port_id;
+ u16 q_id;
+ u16 q_depth;
+ u16 buf_len;
+
+ u16 rx_free_thresh;
+ u16 rxinfo_align_end;
+
+ unsigned long status;
+ struct hinic_rxq_stats rxq_stats;
+
+ struct hinic_nic_dev *nic_dev;
+
+ struct hinic_rx_info *rx_info;
+ volatile struct hinic_rq_cqe *rx_cqe;
+
+ dma_addr_t cqe_start_paddr;
+ void *cqe_start_vaddr;
+ struct rte_mempool *mb_pool;
+
+#ifdef HINIC_XSTAT_PROF_RX
+ /* performance profiling */
+ uint64_t prof_rx_end_tsc;
+#endif
+};
+
+#ifdef HINIC_XSTAT_MBUF_USE
+void hinic_rx_free_mbuf(struct hinic_rxq *rxq, struct rte_mbuf *m);
+#else
+void hinic_rx_free_mbuf(struct rte_mbuf *m);
+#endif
+
+int hinic_setup_rx_resources(struct hinic_rxq *rxq);
+
+void hinic_free_all_rx_resources(struct rte_eth_dev *dev);
+
+void hinic_free_all_rx_mbuf(struct rte_eth_dev *dev);
+
+void hinic_free_rx_resources(struct hinic_rxq *rxq);
+
+u16 hinic_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts);
+
+void hinic_free_all_rx_skbs(struct hinic_rxq *rxq);
+
+void hinic_rx_alloc_pkts(struct hinic_rxq *rxq);
+
+void hinic_rxq_get_stats(struct hinic_rxq *rxq, struct hinic_rxq_stats *stats);
+
+void hinic_rxq_stats_reset(struct hinic_rxq *rxq);
+
+int hinic_config_mq_mode(struct rte_eth_dev *dev, bool on);
+
+int hinic_rx_configure(struct rte_eth_dev *dev);
+
+void hinic_rx_remove_configure(struct rte_eth_dev *dev);
+
+#endif /* _HINIC_PMD_RX_H_ */
diff --git a/drivers/net/hinic/hinic_pmd_tx.h b/drivers/net/hinic/hinic_pmd_tx.h
new file mode 100644
index 000000000..15fe31c85
--- /dev/null
+++ b/drivers/net/hinic/hinic_pmd_tx.h
@@ -0,0 +1,97 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Huawei Technologies Co., Ltd
+ */
+
+#ifndef _HINIC_PMD_TX_H_
+#define _HINIC_PMD_TX_H_
+
+#define HINIC_DEFAULT_TX_FREE_THRESH 32
+#define HINIC_MAX_TX_FREE_BULK 64
+
+/* txq wq operations */
+#define HINIC_GET_SQ_WQE_MASK(txq) \
+ ((txq)->wq->mask)
+
+#define HINIC_GET_SQ_HW_CI(txq) \
+ ((be16_to_cpu(*(txq)->cons_idx_addr)) & HINIC_GET_SQ_WQE_MASK(txq))
+
+#define HINIC_GET_SQ_LOCAL_CI(txq) \
+ (((txq)->wq->cons_idx) & HINIC_GET_SQ_WQE_MASK(txq))
+
+#define HINIC_UPDATE_SQ_LOCAL_CI(txq, wqebb_cnt) \
+ do { \
+ (txq)->wq->cons_idx += wqebb_cnt; \
+ (txq)->wq->delta += wqebb_cnt; \
+ } while (0)
+
+#define HINIC_GET_SQ_FREE_WQEBBS(txq) \
+ ((txq)->wq->delta - 1)
+
+#define HINIC_IS_SQ_EMPTY(txq) \
+ (((txq)->wq->delta) == ((txq)->q_depth))
+
+#define HINIC_GET_WQ_TAIL(txq) ((txq)->wq->queue_buf_vaddr + \
+ (txq)->wq->wq_buf_size)
+#define HINIC_GET_WQ_HEAD(txq) ((txq)->wq->queue_buf_vaddr)
+
+struct hinic_txq_stats {
+ u64 packets;
+ u64 bytes;
+ u64 rl_drop;
+ u64 tx_busy;
+ u64 off_errs;
+ u64 cpy_pkts;
+
+#ifdef HINIC_XSTAT_PROF_TX
+ u64 app_tsc;
+ u64 pmd_tsc;
+ u64 burst_pkts;
+#endif
+};
+
+struct hinic_tx_info {
+ struct rte_mbuf *mbuf;
+ int wqebb_cnt;
+ struct rte_mbuf *cpy_mbuf;
+};
+
+struct hinic_txq {
+ /* cacheline0 */
+ struct hinic_nic_dev *nic_dev;
+ struct hinic_wq *wq;
+ struct hinic_sq *sq;
+ volatile u16 *cons_idx_addr;
+ struct hinic_tx_info *tx_info;
+
+ u16 tx_free_thresh;
+ u16 port_id;
+ u16 q_id;
+ u16 q_depth;
+ u32 cos;
+
+ /* cacheline1 */
+ struct hinic_txq_stats txq_stats;
+ u64 sq_head_addr;
+ u64 sq_bot_sge_addr;
+#ifdef HINIC_XSTAT_PROF_TX
+ uint64_t prof_tx_end_tsc; /* performance profiling */
+#endif
+};
+
+int hinic_setup_tx_resources(struct hinic_txq *txq);
+
+void hinic_free_all_tx_resources(struct rte_eth_dev *eth_dev);
+
+void hinic_free_all_tx_mbuf(struct rte_eth_dev *eth_dev);
+
+void hinic_free_tx_resources(struct hinic_txq *txq);
+
+u16 hinic_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts);
+
+void hinic_free_all_tx_skbs(struct hinic_txq *txq);
+
+void hinic_txq_get_stats(struct hinic_txq *txq, struct hinic_txq_stats *stats);
+
+void hinic_txq_stats_reset(struct hinic_txq *txq);
+
+#endif /* _HINIC_PMD_TX_H_ */
--
2.18.0
^ permalink raw reply [flat|nested] 5+ messages in thread
* [dpdk-dev] [PATCH v4 07/11] net/hinic/base: add various headers
2019-06-06 11:17 ` [dpdk-dev] [PATCH v4 07/11] net/hinic/base: add various headers Ziyang Xuan
@ 2019-06-06 11:06 ` Ziyang Xuan
2019-06-11 16:04 ` Ferruh Yigit
1 sibling, 0 replies; 5+ messages in thread
From: Ziyang Xuan @ 2019-06-06 11:06 UTC (permalink / raw)
To: dev
Cc: ferruh.yigit, cloud.wangxiaoyun, zhouguoyang, shahar.belkar,
stephen, luoxianjun, Ziyang Xuan
Add various headers that define mgmt commands, cmdq commands,
rx data structures, tx data structures and basic defines for
use in the code.
Signed-off-by: Ziyang Xuan <xuanziyang2@huawei.com>
---
drivers/net/hinic/base/hinic_compat.h | 239 ++++++++++++
drivers/net/hinic/base/hinic_port_cmd.h | 483 ++++++++++++++++++++++++
drivers/net/hinic/base/hinic_qe_def.h | 450 ++++++++++++++++++++++
drivers/net/hinic/hinic_pmd_ethdev.h | 102 +++++
drivers/net/hinic/hinic_pmd_rx.h | 135 +++++++
drivers/net/hinic/hinic_pmd_tx.h | 97 +++++
6 files changed, 1506 insertions(+)
create mode 100644 drivers/net/hinic/base/hinic_compat.h
create mode 100644 drivers/net/hinic/base/hinic_port_cmd.h
create mode 100644 drivers/net/hinic/base/hinic_qe_def.h
create mode 100644 drivers/net/hinic/hinic_pmd_ethdev.h
create mode 100644 drivers/net/hinic/hinic_pmd_rx.h
create mode 100644 drivers/net/hinic/hinic_pmd_tx.h
diff --git a/drivers/net/hinic/base/hinic_compat.h b/drivers/net/hinic/base/hinic_compat.h
new file mode 100644
index 000000000..c5a3ee13b
--- /dev/null
+++ b/drivers/net/hinic/base/hinic_compat.h
@@ -0,0 +1,239 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Huawei Technologies Co., Ltd
+ */
+
+#ifndef _HINIC_COMPAT_H_
+#define _HINIC_COMPAT_H_
+
+#include <stdint.h>
+#include <sys/time.h>
+#include <rte_common.h>
+#include <rte_byteorder.h>
+#include <rte_memzone.h>
+#include <rte_memcpy.h>
+#include <rte_malloc.h>
+#include <rte_atomic.h>
+#include <rte_spinlock.h>
+#include <rte_cycles.h>
+#include <rte_log.h>
+#include <rte_config.h>
+
+typedef uint8_t u8;
+typedef int8_t s8;
+typedef uint16_t u16;
+typedef uint32_t u32;
+typedef int32_t s32;
+typedef uint64_t u64;
+
+#ifndef dma_addr_t
+typedef uint64_t dma_addr_t;
+#endif
+
+#ifndef gfp_t
+#define gfp_t unsigned
+#endif
+
+#ifndef bool
+#define bool int
+#endif
+
+#ifndef FALSE
+#define FALSE (0)
+#endif
+
+#ifndef TRUE
+#define TRUE (1)
+#endif
+
+#ifndef false
+#define false (0)
+#endif
+
+#ifndef true
+#define true (1)
+#endif
+
+#ifndef NULL
+#define NULL ((void *)0)
+#endif
+
+#define HINIC_ERROR (-1)
+#define HINIC_OK (0)
+
+#ifndef BIT
+#define BIT(n) (1 << (n))
+#endif
+
+#define upper_32_bits(n) ((u32)(((n) >> 16) >> 16))
+#define lower_32_bits(n) ((u32)(n))
+
+/* Returns X / Y, rounding up. X must be nonnegative to round correctly. */
+#define DIV_ROUND_UP(X, Y) (((X) + ((Y) - 1)) / (Y))
+
+/* Returns X rounded up to the nearest multiple of Y. */
+#define ROUND_UP(X, Y) (DIV_ROUND_UP(X, Y) * (Y))
+
+#undef ALIGN
+#define ALIGN(x, a) RTE_ALIGN(x, a)
+
+#define PTR_ALIGN(p, a) ((typeof(p))ALIGN((unsigned long)(p), (a)))
+
+/* Reported driver name. */
+#define HINIC_DRIVER_NAME "net_hinic"
+
+extern int hinic_logtype;
+
+#define PMD_DRV_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, hinic_logtype, \
+ HINIC_DRIVER_NAME": " fmt "\n", ##args)
+
+#define HINIC_ASSERT_EN
+
+#ifdef HINIC_ASSERT_EN
+#define HINIC_ASSERT(exp) \
+ do { \
+ if (!(exp)) { \
+ rte_panic("line%d\tassert \"" #exp "\" failed\n", \
+ __LINE__); \
+ } \
+ } while (0)
+#else
+#define HINIC_ASSERT(exp) do {} while (0)
+#endif
+
+#define HINIC_BUG_ON(x) HINIC_ASSERT(!(x))
+
+/* common definition */
+#ifndef ETH_ALEN
+#define ETH_ALEN 6
+#endif
+#define ETH_HLEN 14
+#define ETH_CRC_LEN 4
+#define VLAN_PRIO_SHIFT 13
+#define VLAN_N_VID 4096
+
+/* bit order interface */
+#define cpu_to_be16(o) rte_cpu_to_be_16(o)
+#define cpu_to_be32(o) rte_cpu_to_be_32(o)
+#define cpu_to_be64(o) rte_cpu_to_be_64(o)
+#define cpu_to_le32(o) rte_cpu_to_le_32(o)
+#define be16_to_cpu(o) rte_be_to_cpu_16(o)
+#define be32_to_cpu(o) rte_be_to_cpu_32(o)
+#define be64_to_cpu(o) rte_be_to_cpu_64(o)
+#define le32_to_cpu(o) rte_le_to_cpu_32(o)
+
+/* virt memory and dma phy memory */
+#define __iomem
+#define __force
+#define GFP_KERNEL RTE_MEMZONE_IOVA_CONTIG
+#define HINIC_PAGE_SHIFT 12
+#define HINIC_PAGE_SIZE RTE_PGSIZE_4K
+#define HINIC_MEM_ALLOC_ALIGNE_MIN 8
+
+static inline int hinic_test_bit(int nr, volatile unsigned long *addr)
+{
+ int res;
+
+ rte_mb();
+ res = ((*addr) & (1UL << nr)) != 0;
+ rte_mb();
+ return res;
+}
+
+static inline void hinic_set_bit(unsigned int nr, volatile unsigned long *addr)
+{
+ __sync_fetch_and_or(addr, (1UL << nr));
+}
+
+static inline void hinic_clear_bit(int nr, volatile unsigned long *addr)
+{
+ __sync_fetch_and_and(addr, ~(1UL << nr));
+}
+
+static inline int hinic_test_and_clear_bit(int nr, volatile unsigned long *addr)
+{
+ unsigned long mask = (1UL << nr);
+
+ return __sync_fetch_and_and(addr, ~mask) & mask;
+}
+
+static inline int hinic_test_and_set_bit(int nr, volatile unsigned long *addr)
+{
+ unsigned long mask = (1UL << nr);
+
+ return __sync_fetch_and_or(addr, mask) & mask;
+}
+
+void *dma_zalloc_coherent(void *dev, size_t size, dma_addr_t *dma_handle,
+ gfp_t flag);
+void *dma_zalloc_coherent_aligned(void *dev, size_t size,
+ dma_addr_t *dma_handle, gfp_t flag);
+void *dma_zalloc_coherent_aligned256k(void *dev, size_t size,
+ dma_addr_t *dma_handle, gfp_t flag);
+void dma_free_coherent(void *dev, size_t size, void *virt, dma_addr_t phys);
+
+/* dma pool alloc and free */
+#define pci_pool dma_pool
+#define pci_pool_alloc(pool, flags, handle) dma_pool_alloc(pool, flags, handle)
+#define pci_pool_free(pool, vaddr, addr) dma_pool_free(pool, vaddr, addr)
+
+struct dma_pool *dma_pool_create(const char *name, void *dev, size_t size,
+ size_t align, size_t boundary);
+void dma_pool_destroy(struct dma_pool *pool);
+void *dma_pool_alloc(struct pci_pool *pool, int flags, dma_addr_t *dma_addr);
+void dma_pool_free(struct pci_pool *pool, void *vaddr, dma_addr_t dma);
+
+#define kzalloc(size, flag) rte_zmalloc(NULL, size, HINIC_MEM_ALLOC_ALIGNE_MIN)
+#define kzalloc_aligned(size, flag) rte_zmalloc(NULL, size, RTE_CACHE_LINE_SIZE)
+#define kfree(ptr) rte_free(ptr)
+
+/* mmio interface */
+static inline void writel(u32 value, volatile void *addr)
+{
+ *(volatile u32 *)addr = value;
+}
+
+static inline u32 readl(const volatile void *addr)
+{
+ return *(const volatile u32 *)addr;
+}
+
+#define __raw_writel(value, reg) writel((value), (reg))
+#define __raw_readl(reg) readl((reg))
+
+/* Spinlock related interface */
+#define hinic_spinlock_t rte_spinlock_t
+
+#define spinlock_t rte_spinlock_t
+#define spin_lock_init(spinlock_ptr) rte_spinlock_init(spinlock_ptr)
+#define spin_lock_deinit(lock)
+#define spin_lock(spinlock_ptr) rte_spinlock_lock(spinlock_ptr)
+#define spin_unlock(spinlock_ptr) rte_spinlock_unlock(spinlock_ptr)
+
+static inline unsigned long get_timeofday_ms(void)
+{
+ struct timeval tv;
+
+ (void)gettimeofday(&tv, NULL);
+
+ return (unsigned long)tv.tv_sec * 1000 + tv.tv_usec / 1000;
+}
+
+#define jiffies get_timeofday_ms()
+#define msecs_to_jiffies(ms) (ms)
+#define time_before(now, end) ((now) < (end))
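+/* Typical polling pattern with the wrappers above (sketch only; condition_met
+ * and timeout_ms are placeholders for illustration):
+ *
+ *	unsigned long end = jiffies + msecs_to_jiffies(timeout_ms);
+ *
+ *	do {
+ *		if (condition_met())
+ *			break;
+ *		rte_delay_ms(1);
+ *	} while (time_before(jiffies, end));
+ */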
+
+/* misc kernel utils */
+static inline u16 ilog2(u32 n)
+{
+ u16 res = 0;
+
+ while (n > 1) {
+ n >>= 1;
+ res++;
+ }
+
+ return res;
+}
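+
+/* Note: ilog2() returns the floor of log2(n) for n >= 1, e.g. ilog2(4096)
+ * and ilog2(4097) both return 12; ilog2(0) returns 0.
+ */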
+
+#endif /* _HINIC_COMPAT_H_ */
diff --git a/drivers/net/hinic/base/hinic_port_cmd.h b/drivers/net/hinic/base/hinic_port_cmd.h
new file mode 100644
index 000000000..2af38c55a
--- /dev/null
+++ b/drivers/net/hinic/base/hinic_port_cmd.h
@@ -0,0 +1,483 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Huawei Technologies Co., Ltd
+ */
+
+#ifndef _HINIC_PORT_CMD_H_
+#define _HINIC_PORT_CMD_H_
+
+/* cmd of mgmt CPU message for NIC module */
+enum hinic_port_cmd {
+ HINIC_PORT_CMD_MGMT_RESET = 0x0,
+
+ HINIC_PORT_CMD_CHANGE_MTU = 0x2,
+
+ HINIC_PORT_CMD_ADD_VLAN = 0x3,
+ HINIC_PORT_CMD_DEL_VLAN,
+
+ HINIC_PORT_CMD_SET_PFC = 0x5,
+ HINIC_PORT_CMD_GET_PFC,
+ HINIC_PORT_CMD_SET_ETS,
+ HINIC_PORT_CMD_GET_ETS,
+
+ HINIC_PORT_CMD_SET_MAC = 0x9,
+ HINIC_PORT_CMD_GET_MAC,
+ HINIC_PORT_CMD_DEL_MAC,
+
+ HINIC_PORT_CMD_SET_RX_MODE = 0xc,
+ HINIC_PORT_CMD_SET_ANTI_ATTACK_RATE = 0xd,
+
+ HINIC_PORT_CMD_GET_AUTONEG_CAP = 0xf,
+ /* not defined in the baseline */
+ HINIC_PORT_CMD_GET_AUTONET_STATE,
+ /* not defined in the baseline */
+ HINIC_PORT_CMD_GET_SPEED,
+ /* not defined in the baseline */
+ HINIC_PORT_CMD_GET_DUPLEX,
+ /* not defined in the baseline */
+ HINIC_PORT_CMD_GET_MEDIA_TYPE,
+ /* not defined in the baseline */
+
+ HINIC_PORT_CMD_GET_PAUSE_INFO = 0x14,
+ HINIC_PORT_CMD_SET_PAUSE_INFO,
+
+ HINIC_PORT_CMD_GET_LINK_STATE = 0x18,
+ HINIC_PORT_CMD_SET_LRO = 0x19,
+ HINIC_PORT_CMD_SET_RX_CSUM = 0x1a,
+ HINIC_PORT_CMD_SET_RX_VLAN_OFFLOAD = 0x1b,
+
+ HINIC_PORT_CMD_GET_PORT_STATISTICS = 0x1c,
+ HINIC_PORT_CMD_CLEAR_PORT_STATISTICS,
+ HINIC_PORT_CMD_GET_VPORT_STAT,
+ HINIC_PORT_CMD_CLEAN_VPORT_STAT,
+
+ HINIC_PORT_CMD_GET_RSS_TEMPLATE_INDIR_TBL = 0x25,
+ HINIC_PORT_CMD_SET_RSS_TEMPLATE_INDIR_TBL,
+
+ HINIC_PORT_CMD_SET_PORT_ENABLE = 0x29,
+ HINIC_PORT_CMD_GET_PORT_ENABLE,
+
+ HINIC_PORT_CMD_SET_RSS_TEMPLATE_TBL = 0x2b,
+ HINIC_PORT_CMD_GET_RSS_TEMPLATE_TBL,
+ HINIC_PORT_CMD_SET_RSS_HASH_ENGINE,
+ HINIC_PORT_CMD_GET_RSS_HASH_ENGINE,
+ HINIC_PORT_CMD_GET_RSS_CTX_TBL,
+ HINIC_PORT_CMD_SET_RSS_CTX_TBL,
+ HINIC_PORT_CMD_RSS_TEMP_MGR,
+
+ /* 0x36 ~ 0x40 are defined in the baseline */
+ HINIC_PORT_CMD_RSS_CFG = 0x42,
+
+ HINIC_PORT_CMD_GET_PHY_TYPE = 0x44,
+ HINIC_PORT_CMD_INIT_FUNC = 0x45,
+ HINIC_PORT_CMD_SET_LLI_PRI = 0x46,
+
+ HINIC_PORT_CMD_GET_LOOPBACK_MODE = 0x48,
+ HINIC_PORT_CMD_SET_LOOPBACK_MODE,
+
+ HINIC_PORT_CMD_GET_JUMBO_FRAME_SIZE = 0x4a,
+ HINIC_PORT_CMD_SET_JUMBO_FRAME_SIZE,
+
+ /* 0x4c ~ 0x57 are defined in the baseline */
+
+ HINIC_PORT_CMD_GET_MGMT_VERSION = 0x58,
+ HINIC_PORT_CMD_GET_BOOT_VERSION,
+ HINIC_PORT_CMD_GET_MICROCODE_VERSION,
+
+ HINIC_PORT_CMD_GET_PORT_TYPE = 0x5b,
+
+ /* not defined in the baseline */
+ HINIC_PORT_CMD_GET_VPORT_ENABLE = 0x5c,
+ HINIC_PORT_CMD_SET_VPORT_ENABLE,
+
+ HINIC_PORT_CMD_GET_PORT_ID_BY_FUNC_ID = 0x5e,
+
+ HINIC_PORT_CMD_SET_LED_TEST = 0x5f,
+
+ HINIC_PORT_CMD_SET_LLI_STATE = 0x60,
+ HINIC_PORT_CMD_SET_LLI_TYPE,
+ HINIC_PORT_CMD_GET_LLI_CFG,
+
+ HINIC_PORT_CMD_GET_LRO = 0x63,
+
+ HINIC_PORT_CMD_GET_DMA_CS = 0x64,
+ HINIC_PORT_CMD_SET_DMA_CS,
+
+ HINIC_PORT_CMD_GET_GLOBAL_QPN = 0x66,
+
+ HINIC_PORT_CMD_SET_PFC_MISC = 0x67,
+ HINIC_PORT_CMD_GET_PFC_MISC,
+
+ HINIC_PORT_CMD_SET_VF_RATE = 0x69,
+ HINIC_PORT_CMD_SET_VF_VLAN,
+ HINIC_PORT_CMD_CLR_VF_VLAN,
+
+ /* 0x6c and 0x6e are defined in the baseline */
+ HINIC_PORT_CMD_SET_UCAPTURE_OPT = 0x6F,
+
+ HINIC_PORT_CMD_SET_TSO = 0x70,
+ HINIC_PORT_CMD_SET_PHY_POWER = 0x71,
+ HINIC_PORT_CMD_UPDATE_FW = 0x72,
+ HINIC_PORT_CMD_SET_RQ_IQ_MAP = 0x73,
+ /* not defined in the baseline */
+ HINIC_PORT_CMD_SET_PFC_THD = 0x75,
+ /* not defined in the baseline */
+
+ HINIC_PORT_CMD_LINK_STATUS_REPORT = 0xa0,
+
+ HINIC_PORT_CMD_SET_LOSSLESS_ETH = 0xa3,
+ HINIC_PORT_CMD_UPDATE_MAC = 0xa4,
+
+ HINIC_PORT_CMD_GET_UART_LOG = 0xa5,
+ HINIC_PORT_CMD_SET_UART_LOG,
+
+ HINIC_PORT_CMD_GET_PORT_INFO = 0xaa,
+
+ HINIC_MISC_SET_FUNC_SF_ENBITS = 0xab,
+ /* not defined in the baseline */
+ HINIC_MISC_GET_FUNC_SF_ENBITS,
+ /* not defined in the baseline */
+
+ HINIC_PORT_CMD_GET_SFP_INFO = 0xad,
+ HINIC_PORT_CMD_GET_FW_LOG = 0xca,
+ HINIC_PORT_CMD_SET_IPSU_MAC = 0xcb,
+ HINIC_PORT_CMD_GET_IPSU_MAC = 0xcc,
+
+ HINIC_PORT_CMD_SET_IQ_ENABLE = 0xd6,
+
+ HINIC_PORT_CMD_GET_LINK_MODE = 0xD9,
+ HINIC_PORT_CMD_SET_SPEED = 0xDA,
+ HINIC_PORT_CMD_SET_AUTONEG = 0xDB,
+
+ HINIC_PORT_CMD_CLEAR_QP_RES = 0xDD,
+ HINIC_PORT_CMD_SET_SUPER_CQE = 0xDE,
+ HINIC_PORT_CMD_SET_VF_COS = 0xDF,
+ HINIC_PORT_CMD_GET_VF_COS = 0xE1,
+
+ HINIC_PORT_CMD_CABLE_PLUG_EVENT = 0xE5,
+ HINIC_PORT_CMD_LINK_ERR_EVENT = 0xE6,
+
+ HINIC_PORT_CMD_SET_PORT_FUNCS_STATE = 0xE7,
+ HINIC_PORT_CMD_SET_COS_UP_MAP = 0xE8,
+
+ HINIC_PORT_CMD_RESET_LINK_CFG = 0xEB,
+ HINIC_PORT_CMD_GET_STD_SFP_INFO = 0xF0,
+
+ HINIC_PORT_CMD_FORCE_PKT_DROP = 0xF3,
+ HINIC_PORT_CMD_SET_LRO_TIMER = 0xF4,
+
+ HINIC_PORT_CMD_SET_VHD_CFG = 0xF7,
+ HINIC_PORT_CMD_SET_LINK_FOLLOW = 0xF8,
+ HINIC_PORT_CMD_SET_VF_MAX_MIN_RATE = 0xF9,
+ HINIC_PORT_CMD_SET_RXQ_LRO_ADPT = 0xFA,
+ HINIC_PORT_CMD_SET_Q_FILTER = 0xFC,
+ HINIC_PORT_CMD_SET_VLAN_FILTER = 0xFF
+};
+
+/* cmd of mgmt CPU message for HW module */
+enum hinic_mgmt_cmd {
+ HINIC_MGMT_CMD_RESET_MGMT = 0x0,
+ HINIC_MGMT_CMD_START_FLR = 0x1,
+ HINIC_MGMT_CMD_FLUSH_DOORBELL = 0x2,
+ HINIC_MGMT_CMD_GET_IO_STATUS = 0x3,
+ HINIC_MGMT_CMD_DMA_ATTR_SET = 0x4,
+
+ HINIC_MGMT_CMD_CMDQ_CTXT_SET = 0x10,
+ HINIC_MGMT_CMD_CMDQ_CTXT_GET,
+
+ HINIC_MGMT_CMD_VAT_SET = 0x12,
+ HINIC_MGMT_CMD_VAT_GET,
+
+ HINIC_MGMT_CMD_L2NIC_SQ_CI_ATTR_SET = 0x14,
+ HINIC_MGMT_CMD_L2NIC_SQ_CI_ATTR_GET,
+
+ HINIC_MGMT_CMD_PPF_HT_GPA_SET = 0x23,
+ HINIC_MGMT_CMD_RES_STATE_SET = 0x24,
+ HINIC_MGMT_CMD_FUNC_CACHE_OUT = 0x25,
+ HINIC_MGMT_CMD_FFM_SET = 0x26,
+
+ /* 0x29 is not defined in the baseline;
+ * it is only used in the open source driver.
+ */
+ HINIC_MGMT_CMD_FUNC_RES_CLEAR = 0x29,
+
+ HINIC_MGMT_CMD_CEQ_CTRL_REG_WR_BY_UP = 0x33,
+ HINIC_MGMT_CMD_MSI_CTRL_REG_WR_BY_UP,
+ HINIC_MGMT_CMD_MSI_CTRL_REG_RD_BY_UP,
+
+ HINIC_MGMT_CMD_VF_RANDOM_ID_SET = 0x36,
+ HINIC_MGMT_CMD_FAULT_REPORT = 0x37,
+ HINIC_MGMT_CMD_HEART_LOST_REPORT = 0x38,
+
+ HINIC_MGMT_CMD_VPD_SET = 0x40,
+ HINIC_MGMT_CMD_VPD_GET,
+ HINIC_MGMT_CMD_LABEL_SET,
+ HINIC_MGMT_CMD_LABEL_GET,
+ HINIC_MGMT_CMD_SATIC_MAC_SET,
+ HINIC_MGMT_CMD_SATIC_MAC_GET,
+ HINIC_MGMT_CMD_SYNC_TIME = 0x46,
+ HINIC_MGMT_CMD_SET_LED_STATUS = 0x4A,
+ HINIC_MGMT_CMD_L2NIC_RESET = 0x4b,
+ HINIC_MGMT_CMD_FAST_RECYCLE_MODE_SET = 0x4d,
+ HINIC_MGMT_CMD_BIOS_NV_DATA_MGMT = 0x4E,
+ HINIC_MGMT_CMD_ACTIVATE_FW = 0x4F,
+ HINIC_MGMT_CMD_PAGESIZE_SET = 0x50,
+ HINIC_MGMT_CMD_PAGESIZE_GET = 0x51,
+ HINIC_MGMT_CMD_GET_BOARD_INFO = 0x52,
+ HINIC_MGMT_CMD_WATCHDOG_INFO = 0x56,
+ HINIC_MGMT_CMD_FMW_ACT_NTC = 0x57,
+ HINIC_MGMT_CMD_SET_VF_RANDOM_ID = 0x61,
+ HINIC_MGMT_CMD_GET_PPF_STATE = 0x63,
+ HINIC_MGMT_CMD_PCIE_DFX_NTC = 0x65,
+ HINIC_MGMT_CMD_PCIE_DFX_GET = 0x66,
+
+ HINIC_MGMT_CMD_GET_HOST_INFO = 0x67,
+
+ HINIC_MGMT_CMD_GET_PHY_INIT_STATUS = 0x6A,
+ HINIC_MGMT_CMD_GET_HW_PF_INFOS = 0x6D,
+};
+
+/* uCode related commands */
+enum hinic_ucode_cmd {
+ HINIC_UCODE_CMD_MDY_QUEUE_CONTEXT = 0,
+ HINIC_UCODE_CMD_CLEAN_QUEUE_CONTEXT,
+ HINIC_UCODE_CMD_ARM_SQ,
+ HINIC_UCODE_CMD_ARM_RQ,
+ HINIC_UCODE_CMD_SET_RSS_INDIR_TABLE,
+ HINIC_UCODE_CMD_SET_RSS_CONTEXT_TABLE,
+ HINIC_UCODE_CMD_GET_RSS_INDIR_TABLE,
+ HINIC_UCODE_CMD_GET_RSS_CONTEXT_TABLE,
+ HINIC_UCODE_CMD_SET_IQ_ENABLE,
+ HINIC_UCODE_CMD_SET_RQ_FLUSH = 10
+};
+
+enum sq_l4offload_type {
+ OFFLOAD_DISABLE = 0,
+ TCP_OFFLOAD_ENABLE = 1,
+ SCTP_OFFLOAD_ENABLE = 2,
+ UDP_OFFLOAD_ENABLE = 3,
+};
+
+enum sq_vlan_offload_flag {
+ VLAN_OFFLOAD_DISABLE = 0,
+ VLAN_OFFLOAD_ENABLE = 1,
+};
+
+enum sq_pkt_parsed_flag {
+ PKT_NOT_PARSED = 0,
+ PKT_PARSED = 1,
+};
+
+enum sq_l3_type {
+ UNKNOWN_L3TYPE = 0,
+ IPV6_PKT = 1,
+ IPV4_PKT_NO_CHKSUM_OFFLOAD = 2,
+ IPV4_PKT_WITH_CHKSUM_OFFLOAD = 3,
+};
+
+enum sq_md_type {
+ UNKNOWN_MD_TYPE = 0,
+};
+
+enum sq_l2type {
+ ETHERNET = 0,
+};
+
+enum sq_tunnel_l4_type {
+ NOT_TUNNEL,
+ TUNNEL_UDP_NO_CSUM,
+ TUNNEL_UDP_CSUM,
+};
+
+#define NIC_RSS_CMD_TEMP_ALLOC 0x01
+#define NIC_RSS_CMD_TEMP_FREE 0x02
+
+#define HINIC_RSS_TYPE_VALID_SHIFT 23
+#define HINIC_RSS_TYPE_TCP_IPV6_EXT_SHIFT 24
+#define HINIC_RSS_TYPE_IPV6_EXT_SHIFT 25
+#define HINIC_RSS_TYPE_TCP_IPV6_SHIFT 26
+#define HINIC_RSS_TYPE_IPV6_SHIFT 27
+#define HINIC_RSS_TYPE_TCP_IPV4_SHIFT 28
+#define HINIC_RSS_TYPE_IPV4_SHIFT 29
+#define HINIC_RSS_TYPE_UDP_IPV6_SHIFT 30
+#define HINIC_RSS_TYPE_UDP_IPV4_SHIFT 31
+
+#define HINIC_RSS_TYPE_SET(val, member) \
+ (((u32)(val) & 0x1) << HINIC_RSS_TYPE_##member##_SHIFT)
+
+#define HINIC_RSS_TYPE_GET(val, member) \
+ (((u32)(val) >> HINIC_RSS_TYPE_##member##_SHIFT) & 0x1)
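+/* Example, for illustration only: a request to hash TCP over IPv4 and IPv6
+ * might be built as
+ *	rss_type = HINIC_RSS_TYPE_SET(1, VALID) |
+ *		   HINIC_RSS_TYPE_SET(1, TCP_IPV4) |
+ *		   HINIC_RSS_TYPE_SET(1, TCP_IPV6);
+ * after which HINIC_RSS_TYPE_GET(rss_type, TCP_IPV4) returns 1.
+ */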
+
+enum hinic_speed {
+ HINIC_SPEED_10MB_LINK = 0,
+ HINIC_SPEED_100MB_LINK,
+ HINIC_SPEED_1000MB_LINK,
+ HINIC_SPEED_10GB_LINK,
+ HINIC_SPEED_25GB_LINK,
+ HINIC_SPEED_40GB_LINK,
+ HINIC_SPEED_100GB_LINK,
+ HINIC_SPEED_UNKNOWN = 0xFF,
+};
+
+enum {
+ HINIC_IFLA_VF_LINK_STATE_AUTO, /* link state of the uplink */
+ HINIC_IFLA_VF_LINK_STATE_ENABLE, /* link always up */
+ HINIC_IFLA_VF_LINK_STATE_DISABLE, /* link always down */
+};
+
+#define HINIC_AF0_FUNC_GLOBAL_IDX_SHIFT 0
+#define HINIC_AF0_P2P_IDX_SHIFT 10
+#define HINIC_AF0_PCI_INTF_IDX_SHIFT 14
+#define HINIC_AF0_VF_IN_PF_SHIFT 16
+#define HINIC_AF0_FUNC_TYPE_SHIFT 24
+
+#define HINIC_AF0_FUNC_GLOBAL_IDX_MASK 0x3FF
+#define HINIC_AF0_P2P_IDX_MASK 0xF
+#define HINIC_AF0_PCI_INTF_IDX_MASK 0x3
+#define HINIC_AF0_VF_IN_PF_MASK 0xFF
+#define HINIC_AF0_FUNC_TYPE_MASK 0x1
+
+#define HINIC_AF0_GET(val, member) \
+ (((val) >> HINIC_AF0_##member##_SHIFT) & HINIC_AF0_##member##_MASK)
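+/* Example, for illustration only: for a value attr0 == 0x01000405,
+ * HINIC_AF0_GET(attr0, FUNC_GLOBAL_IDX) == 0x5,
+ * HINIC_AF0_GET(attr0, P2P_IDX) == 0x1 and
+ * HINIC_AF0_GET(attr0, FUNC_TYPE) == 0x1.
+ */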
+
+#define HINIC_AF1_PPF_IDX_SHIFT 0
+#define HINIC_AF1_AEQS_PER_FUNC_SHIFT 8
+#define HINIC_AF1_CEQS_PER_FUNC_SHIFT 12
+#define HINIC_AF1_IRQS_PER_FUNC_SHIFT 20
+#define HINIC_AF1_DMA_ATTR_PER_FUNC_SHIFT 24
+#define HINIC_AF1_MGMT_INIT_STATUS_SHIFT 30
+#define HINIC_AF1_PF_INIT_STATUS_SHIFT 31
+
+#define HINIC_AF1_PPF_IDX_MASK 0x1F
+#define HINIC_AF1_AEQS_PER_FUNC_MASK 0x3
+#define HINIC_AF1_CEQS_PER_FUNC_MASK 0x7
+#define HINIC_AF1_IRQS_PER_FUNC_MASK 0xF
+#define HINIC_AF1_DMA_ATTR_PER_FUNC_MASK 0x7
+#define HINIC_AF1_MGMT_INIT_STATUS_MASK 0x1
+#define HINIC_AF1_PF_INIT_STATUS_MASK 0x1
+
+#define HINIC_AF1_GET(val, member) \
+ (((val) >> HINIC_AF1_##member##_SHIFT) & HINIC_AF1_##member##_MASK)
+
+#define HINIC_AF2_GLOBAL_VF_ID_OF_PF_SHIFT 16
+#define HINIC_AF2_GLOBAL_VF_ID_OF_PF_MASK 0x3FF
+
+#define HINIC_AF2_GET(val, member) \
+ (((val) >> HINIC_AF2_##member##_SHIFT) & HINIC_AF2_##member##_MASK)
+
+#define HINIC_AF4_OUTBOUND_CTRL_SHIFT 0
+#define HINIC_AF4_DOORBELL_CTRL_SHIFT 1
+#define HINIC_AF4_OUTBOUND_CTRL_MASK 0x1
+#define HINIC_AF4_DOORBELL_CTRL_MASK 0x1
+
+#define HINIC_AF4_GET(val, member) \
+ (((val) >> HINIC_AF4_##member##_SHIFT) & HINIC_AF4_##member##_MASK)
+
+#define HINIC_AF4_SET(val, member) \
+ (((val) & HINIC_AF4_##member##_MASK) << HINIC_AF4_##member##_SHIFT)
+
+#define HINIC_AF4_CLEAR(val, member) \
+ ((val) & (~(HINIC_AF4_##member##_MASK << \
+ HINIC_AF4_##member##_SHIFT)))
+
+#define HINIC_AF5_PF_STATUS_SHIFT 0
+#define HINIC_AF5_PF_STATUS_MASK 0xFFFF
+
+#define HINIC_AF5_SET(val, member) \
+ (((val) & HINIC_AF5_##member##_MASK) << HINIC_AF5_##member##_SHIFT)
+
+#define HINIC_AF5_GET(val, member) \
+ (((val) >> HINIC_AF5_##member##_SHIFT) & HINIC_AF5_##member##_MASK)
+
+#define HINIC_AF5_CLEAR(val, member) \
+ ((val) & (~(HINIC_AF5_##member##_MASK << \
+ HINIC_AF5_##member##_SHIFT)))
+
+#define HINIC_PPF_ELECTION_IDX_SHIFT 0
+
+#define HINIC_PPF_ELECTION_IDX_MASK 0x1F
+
+#define HINIC_PPF_ELECTION_SET(val, member) \
+ (((val) & HINIC_PPF_ELECTION_##member##_MASK) << \
+ HINIC_PPF_ELECTION_##member##_SHIFT)
+
+#define HINIC_PPF_ELECTION_GET(val, member) \
+ (((val) >> HINIC_PPF_ELECTION_##member##_SHIFT) & \
+ HINIC_PPF_ELECTION_##member##_MASK)
+
+#define HINIC_PPF_ELECTION_CLEAR(val, member) \
+ ((val) & (~(HINIC_PPF_ELECTION_##member##_MASK \
+ << HINIC_PPF_ELECTION_##member##_SHIFT)))
+
+#define HINIC_MPF_ELECTION_IDX_SHIFT 0
+
+#define HINIC_MPF_ELECTION_IDX_MASK 0x1F
+
+#define HINIC_MPF_ELECTION_SET(val, member) \
+ (((val) & HINIC_MPF_ELECTION_##member##_MASK) << \
+ HINIC_MPF_ELECTION_##member##_SHIFT)
+
+#define HINIC_MPF_ELECTION_GET(val, member) \
+ (((val) >> HINIC_MPF_ELECTION_##member##_SHIFT) & \
+ HINIC_MPF_ELECTION_##member##_MASK)
+
+#define HINIC_MPF_ELECTION_CLEAR(val, member) \
+ ((val) & (~(HINIC_MPF_ELECTION_##member##_MASK \
+ << HINIC_MPF_ELECTION_##member##_SHIFT)))
+
+#define HINIC_HWIF_NUM_AEQS(hwif) ((hwif)->attr.num_aeqs)
+#define HINIC_HWIF_NUM_CEQS(hwif) ((hwif)->attr.num_ceqs)
+#define HINIC_HWIF_NUM_IRQS(hwif) ((hwif)->attr.num_irqs)
+#define HINIC_HWIF_GLOBAL_IDX(hwif) ((hwif)->attr.func_global_idx)
+#define HINIC_HWIF_GLOBAL_VF_OFFSET(hwif) ((hwif)->attr.global_vf_id_of_pf)
+#define HINIC_HWIF_PPF_IDX(hwif) ((hwif)->attr.ppf_idx)
+#define HINIC_PCI_INTF_IDX(hwif) ((hwif)->attr.pci_intf_idx)
+
+#define HINIC_FUNC_TYPE(dev) ((dev)->hwif->attr.func_type)
+#define HINIC_IS_PF(dev) (HINIC_FUNC_TYPE(dev) == TYPE_PF)
+#define HINIC_IS_VF(dev) (HINIC_FUNC_TYPE(dev) == TYPE_VF)
+#define HINIC_IS_PPF(dev) (HINIC_FUNC_TYPE(dev) == TYPE_PPF)
+
+#define DB_IDX(db, db_base) \
+ ((u32)(((unsigned long)(db) - (unsigned long)(db_base)) / \
+ HINIC_DB_PAGE_SIZE))
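+/* Example: a doorbell located 3 pages above db_base gives
+ * DB_IDX(db_base + 3 * HINIC_DB_PAGE_SIZE, db_base) == 3.
+ */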
+
+enum hinic_pcie_nosnoop {
+ HINIC_PCIE_SNOOP = 0,
+ HINIC_PCIE_NO_SNOOP = 1,
+};
+
+enum hinic_pcie_tph {
+ HINIC_PCIE_TPH_DISABLE = 0,
+ HINIC_PCIE_TPH_ENABLE = 1,
+};
+
+enum hinic_outbound_ctrl {
+ ENABLE_OUTBOUND = 0x0,
+ DISABLE_OUTBOUND = 0x1,
+};
+
+enum hinic_doorbell_ctrl {
+ ENABLE_DOORBELL = 0x0,
+ DISABLE_DOORBELL = 0x1,
+};
+
+enum hinic_pf_status {
+ HINIC_PF_STATUS_INIT = 0x0,
+ HINIC_PF_STATUS_ACTIVE_FLAG = 0x11,
+ HINIC_PF_STATUS_FLR_START_FLAG = 0x12,
+ HINIC_PF_STATUS_FLR_FINISH_FLAG = 0x13,
+};
+
+/* total doorbell or direct wqe size is 512kB, db num: 128, dwqe: 128 */
+#define HINIC_DB_DWQE_SIZE 0x00080000
+
+/* db/dwqe page size: 4K */
+#define HINIC_DB_PAGE_SIZE 0x00001000ULL
+
+#define HINIC_DB_MAX_AREAS (HINIC_DB_DWQE_SIZE / HINIC_DB_PAGE_SIZE)
+
+#define HINIC_PCI_MSIX_ENTRY_SIZE 16
+#define HINIC_PCI_MSIX_ENTRY_VECTOR_CTRL 12
+#define HINIC_PCI_MSIX_ENTRY_CTRL_MASKBIT 1
+
+#endif /* _HINIC_PORT_CMD_H_ */
diff --git a/drivers/net/hinic/base/hinic_qe_def.h b/drivers/net/hinic/base/hinic_qe_def.h
new file mode 100644
index 000000000..85a45f72d
--- /dev/null
+++ b/drivers/net/hinic/base/hinic_qe_def.h
@@ -0,0 +1,450 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Huawei Technologies Co., Ltd
+ */
+
+#ifndef _HINIC_QE_DEF_H_
+#define _HINIC_QE_DEF_H_
+
+#define HINIC_SQ_WQEBB_SIZE 64
+#define HINIC_RQ_WQE_SIZE 32
+#define HINIC_SQ_WQEBB_SHIFT 6
+#define HINIC_RQ_WQEBB_SHIFT 5
+
+#define HINIC_MAX_QUEUE_DEPTH 4096
+#define HINIC_MIN_QUEUE_DEPTH 128
+#define HINIC_TXD_ALIGN 1
+#define HINIC_RXD_ALIGN 1
+
+#define HINIC_SQ_DEPTH 1024
+#define HINIC_RQ_DEPTH 1024
+
+#define HINIC_RQ_WQE_MAX_SIZE 32
+
+#define SIZE_8BYTES(size) (ALIGN((u32)(size), 8) >> 3)
+
+/* SQ_CTRL */
+#define SQ_CTRL_BUFDESC_SECT_LEN_SHIFT 0
+#define SQ_CTRL_TASKSECT_LEN_SHIFT 16
+#define SQ_CTRL_DATA_FORMAT_SHIFT 22
+#define SQ_CTRL_LEN_SHIFT 29
+#define SQ_CTRL_OWNER_SHIFT 31
+
+#define SQ_CTRL_BUFDESC_SECT_LEN_MASK 0xFFU
+#define SQ_CTRL_TASKSECT_LEN_MASK 0x1FU
+#define SQ_CTRL_DATA_FORMAT_MASK 0x1U
+#define SQ_CTRL_LEN_MASK 0x3U
+#define SQ_CTRL_OWNER_MASK 0x1U
+
+#define SQ_CTRL_GET(val, member) (((val) >> SQ_CTRL_##member##_SHIFT) \
+ & SQ_CTRL_##member##_MASK)
+
+#define SQ_CTRL_CLEAR(val, member) ((val) & \
+ (~(SQ_CTRL_##member##_MASK << \
+ SQ_CTRL_##member##_SHIFT)))
+
+#define SQ_CTRL_QUEUE_INFO_PLDOFF_SHIFT 2
+#define SQ_CTRL_QUEUE_INFO_UFO_SHIFT 10
+#define SQ_CTRL_QUEUE_INFO_TSO_SHIFT 11
+#define SQ_CTRL_QUEUE_INFO_TCPUDP_CS_SHIFT 12
+#define SQ_CTRL_QUEUE_INFO_MSS_SHIFT 13
+#define SQ_CTRL_QUEUE_INFO_SCTP_SHIFT 27
+#define SQ_CTRL_QUEUE_INFO_UC_SHIFT 28
+#define SQ_CTRL_QUEUE_INFO_PRI_SHIFT 29
+
+#define SQ_CTRL_QUEUE_INFO_PLDOFF_MASK 0xFFU
+#define SQ_CTRL_QUEUE_INFO_UFO_MASK 0x1U
+#define SQ_CTRL_QUEUE_INFO_TSO_MASK 0x1U
+#define SQ_CTRL_QUEUE_INFO_TCPUDP_CS_MASK 0x1U
+#define SQ_CTRL_QUEUE_INFO_MSS_MASK 0x3FFFU
+#define SQ_CTRL_QUEUE_INFO_SCTP_MASK 0x1U
+#define SQ_CTRL_QUEUE_INFO_UC_MASK 0x1U
+#define SQ_CTRL_QUEUE_INFO_PRI_MASK 0x7U
+
+#define SQ_CTRL_QUEUE_INFO_SET(val, member) \
+ (((u32)(val) & SQ_CTRL_QUEUE_INFO_##member##_MASK) \
+ << SQ_CTRL_QUEUE_INFO_##member##_SHIFT)
+
+#define SQ_CTRL_QUEUE_INFO_GET(val, member) \
+ (((val) >> SQ_CTRL_QUEUE_INFO_##member##_SHIFT) \
+ & SQ_CTRL_QUEUE_INFO_##member##_MASK)
+
+#define SQ_CTRL_QUEUE_INFO_CLEAR(val, member) \
+ ((val) & (~(SQ_CTRL_QUEUE_INFO_##member##_MASK << \
+ SQ_CTRL_QUEUE_INFO_##member##_SHIFT)))
+
+#define SQ_TASK_INFO0_L2HDR_LEN_SHIFT 0
+#define SQ_TASK_INFO0_L4OFFLOAD_SHIFT 8
+#define SQ_TASK_INFO0_INNER_L3TYPE_SHIFT 10
+#define SQ_TASK_INFO0_VLAN_OFFLOAD_SHIFT 12
+#define SQ_TASK_INFO0_PARSE_FLAG_SHIFT 13
+#define SQ_TASK_INFO0_UFO_AVD_SHIFT 14
+#define SQ_TASK_INFO0_TSO_UFO_SHIFT 15
+#define SQ_TASK_INFO0_VLAN_TAG_SHIFT 16
+
+#define SQ_TASK_INFO0_L2HDR_LEN_MASK 0xFFU
+#define SQ_TASK_INFO0_L4OFFLOAD_MASK 0x3U
+#define SQ_TASK_INFO0_INNER_L3TYPE_MASK 0x3U
+#define SQ_TASK_INFO0_VLAN_OFFLOAD_MASK 0x1U
+#define SQ_TASK_INFO0_PARSE_FLAG_MASK 0x1U
+#define SQ_TASK_INFO0_UFO_AVD_MASK 0x1U
+#define SQ_TASK_INFO0_TSO_UFO_MASK 0x1U
+#define SQ_TASK_INFO0_VLAN_TAG_MASK 0xFFFFU
+
+#define SQ_TASK_INFO0_SET(val, member) \
+ (((u32)(val) & SQ_TASK_INFO0_##member##_MASK) << \
+ SQ_TASK_INFO0_##member##_SHIFT)
+#define SQ_TASK_INFO0_GET(val, member) \
+ (((val) >> SQ_TASK_INFO0_##member##_SHIFT) & \
+ SQ_TASK_INFO0_##member##_MASK)
+
+#define SQ_TASK_INFO1_MD_TYPE_SHIFT 8
+#define SQ_TASK_INFO1_INNER_L4LEN_SHIFT 16
+#define SQ_TASK_INFO1_INNER_L3LEN_SHIFT 24
+
+#define SQ_TASK_INFO1_MD_TYPE_MASK 0xFFU
+#define SQ_TASK_INFO1_INNER_L4LEN_MASK 0xFFU
+#define SQ_TASK_INFO1_INNER_L3LEN_MASK 0xFFU
+
+#define SQ_TASK_INFO1_SET(val, member) \
+ (((val) & SQ_TASK_INFO1_##member##_MASK) << \
+ SQ_TASK_INFO1_##member##_SHIFT)
+#define SQ_TASK_INFO1_GET(val, member) \
+ (((val) >> SQ_TASK_INFO1_##member##_SHIFT) & \
+ SQ_TASK_INFO1_##member##_MASK)
+
+#define SQ_TASK_INFO2_TUNNEL_L4LEN_SHIFT 0
+#define SQ_TASK_INFO2_OUTER_L3LEN_SHIFT 8
+#define SQ_TASK_INFO2_TUNNEL_L4TYPE_SHIFT 16
+#define SQ_TASK_INFO2_OUTER_L3TYPE_SHIFT 24
+
+#define SQ_TASK_INFO2_TUNNEL_L4LEN_MASK 0xFFU
+#define SQ_TASK_INFO2_OUTER_L3LEN_MASK 0xFFU
+#define SQ_TASK_INFO2_TUNNEL_L4TYPE_MASK 0x7U
+#define SQ_TASK_INFO2_OUTER_L3TYPE_MASK 0x3U
+
+#define SQ_TASK_INFO2_SET(val, member) \
+ (((val) & SQ_TASK_INFO2_##member##_MASK) << \
+ SQ_TASK_INFO2_##member##_SHIFT)
+#define SQ_TASK_INFO2_GET(val, member) \
+ (((val) >> SQ_TASK_INFO2_##member##_SHIFT) & \
+ SQ_TASK_INFO2_##member##_MASK)
+
+#define SQ_TASK_INFO4_L2TYPE_SHIFT 31
+
+#define SQ_TASK_INFO4_L2TYPE_MASK 0x1U
+
+#define SQ_TASK_INFO4_SET(val, member) \
+ (((u32)(val) & SQ_TASK_INFO4_##member##_MASK) << \
+ SQ_TASK_INFO4_##member##_SHIFT)
+
+/* SQ_DB */
+#define SQ_DB_OFF 0x00000800
+#define SQ_DB_INFO_HI_PI_SHIFT 0
+#define SQ_DB_INFO_QID_SHIFT 8
+#define SQ_DB_INFO_CFLAG_SHIFT 23
+#define SQ_DB_INFO_COS_SHIFT 24
+#define SQ_DB_INFO_TYPE_SHIFT 27
+#define SQ_DB_INFO_HI_PI_MASK 0xFFU
+#define SQ_DB_INFO_QID_MASK 0x3FFU
+#define SQ_DB_INFO_CFLAG_MASK 0x1U
+#define SQ_DB_INFO_COS_MASK 0x7U
+#define SQ_DB_INFO_TYPE_MASK 0x1FU
+#define SQ_DB_INFO_SET(val, member) (((u32)(val) & \
+ SQ_DB_INFO_##member##_MASK) << \
+ SQ_DB_INFO_##member##_SHIFT)
+
+#define SQ_DB_PI_LOW_MASK 0xFF
+#define SQ_DB_PI_LOW(pi) ((pi) & SQ_DB_PI_LOW_MASK)
+#define SQ_DB_PI_HI_SHIFT 8
+#define SQ_DB_PI_HIGH(pi) ((pi) >> SQ_DB_PI_HI_SHIFT)
+#define SQ_DB_ADDR(sq, pi) ((u64 *)((u8 __iomem *)((sq)->db_addr) + \
+ SQ_DB_OFF) + SQ_DB_PI_LOW(pi))
+#define SQ_DB 1
+#define SQ_CFLAG_DP 0 /* CFLAG_DATA_PATH */
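+/* Example, for illustration only: for a producer index pi == 0x1234,
+ * SQ_DB_PI_LOW(pi) == 0x34 (used as the doorbell address offset in
+ * SQ_DB_ADDR) and SQ_DB_PI_HIGH(pi) == 0x12.
+ */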
+
+/* RQ_CTRL */
+#define RQ_CTRL_BUFDESC_SECT_LEN_SHIFT 0
+#define RQ_CTRL_COMPLETE_FORMAT_SHIFT 15
+#define RQ_CTRL_COMPLETE_LEN_SHIFT 27
+#define RQ_CTRL_LEN_SHIFT 29
+
+#define RQ_CTRL_BUFDESC_SECT_LEN_MASK 0xFFU
+#define RQ_CTRL_COMPLETE_FORMAT_MASK 0x1U
+#define RQ_CTRL_COMPLETE_LEN_MASK 0x3U
+#define RQ_CTRL_LEN_MASK 0x3U
+
+#define RQ_CTRL_SET(val, member) (((val) & \
+ RQ_CTRL_##member##_MASK) << \
+ RQ_CTRL_##member##_SHIFT)
+
+#define RQ_CTRL_GET(val, member) (((val) >> \
+ RQ_CTRL_##member##_SHIFT) & \
+ RQ_CTRL_##member##_MASK)
+
+#define RQ_CTRL_CLEAR(val, member) ((val) & \
+ (~(RQ_CTRL_##member##_MASK << \
+ RQ_CTRL_##member##_SHIFT)))
+
+#define RQ_CQE_STATUS_CSUM_ERR_SHIFT 0
+#define RQ_CQE_STATUS_NUM_LRO_SHIFT 16
+#define RQ_CQE_STATUS_LRO_PUSH_SHIFT 25
+#define RQ_CQE_STATUS_LRO_ENTER_SHIFT 26
+#define RQ_CQE_STATUS_LRO_INTR_SHIFT 27
+
+#define RQ_CQE_STATUS_BP_EN_SHIFT 30
+#define RQ_CQE_STATUS_RXDONE_SHIFT 31
+#define RQ_CQE_STATUS_FLUSH_SHIFT 28
+
+#define RQ_CQE_STATUS_CSUM_ERR_MASK 0xFFFFU
+#define RQ_CQE_STATUS_NUM_LRO_MASK 0xFFU
+#define RQ_CQE_STATUS_LRO_PUSH_MASK 0x1U
+#define RQ_CQE_STATUS_LRO_ENTER_MASK 0x1U
+#define RQ_CQE_STATUS_LRO_INTR_MASK 0x1U
+#define RQ_CQE_STATUS_BP_EN_MASK 0x1U
+#define RQ_CQE_STATUS_RXDONE_MASK 0x1U
+#define RQ_CQE_STATUS_FLUSH_MASK 0x1U
+
+#define RQ_CQE_STATUS_GET(val, member) (((val) >> \
+ RQ_CQE_STATUS_##member##_SHIFT) & \
+ RQ_CQE_STATUS_##member##_MASK)
+
+#define RQ_CQE_STATUS_CLEAR(val, member) ((val) & \
+ (~(RQ_CQE_STATUS_##member##_MASK << \
+ RQ_CQE_STATUS_##member##_SHIFT)))
+
+#define RQ_CQE_SGE_VLAN_SHIFT 0
+#define RQ_CQE_SGE_LEN_SHIFT 16
+
+#define RQ_CQE_SGE_VLAN_MASK 0xFFFFU
+#define RQ_CQE_SGE_LEN_MASK 0xFFFFU
+
+#define RQ_CQE_SGE_GET(val, member) (((val) >> \
+ RQ_CQE_SGE_##member##_SHIFT) & \
+ RQ_CQE_SGE_##member##_MASK)
+
+#define RQ_CQE_PKT_NUM_SHIFT 1
+#define RQ_CQE_PKT_FIRST_LEN_SHIFT 19
+#define RQ_CQE_PKT_LAST_LEN_SHIFT 6
+#define RQ_CQE_SUPER_CQE_EN_SHIFT 0
+
+#define RQ_CQE_PKT_FIRST_LEN_MASK 0x1FFFU
+#define RQ_CQE_PKT_LAST_LEN_MASK 0x1FFFU
+#define RQ_CQE_PKT_NUM_MASK 0x1FU
+#define RQ_CQE_SUPER_CQE_EN_MASK 0x1
+
+#define RQ_CQE_PKT_NUM_GET(val, member) (((val) >> \
+ RQ_CQE_PKT_##member##_SHIFT) & \
+ RQ_CQE_PKT_##member##_MASK)
+#define HINIC_GET_RQ_CQE_PKT_NUM(pkt_info) RQ_CQE_PKT_NUM_GET(pkt_info, NUM)
+
+#define RQ_CQE_SUPER_CQE_EN_GET(val, member) (((val) >> \
+ RQ_CQE_##member##_SHIFT) & \
+ RQ_CQE_##member##_MASK)
+#define HINIC_GET_SUPER_CQE_EN(pkt_info) \
+ RQ_CQE_SUPER_CQE_EN_GET(pkt_info, SUPER_CQE_EN)
+
+#define HINIC_GET_SUPER_CQE_EN_BE(pkt_info) ((pkt_info) & 0x1000000U)
+#define RQ_CQE_PKT_LEN_GET(val, member) (((val) >> \
+ RQ_CQE_PKT_##member##_SHIFT) & \
+ RQ_CQE_PKT_##member##_MASK)
+
+#define RQ_CQE_OFFOLAD_TYPE_VLAN_EN_SHIFT 21
+#define RQ_CQE_OFFOLAD_TYPE_VLAN_EN_MASK 0x1U
+
+#define RQ_CQE_OFFOLAD_TYPE_PKT_TYPE_SHIFT 0
+#define RQ_CQE_OFFOLAD_TYPE_PKT_TYPE_MASK 0xFFFU
+
+#define RQ_CQE_OFFOLAD_TYPE_PKT_UMBCAST_SHIFT 19
+#define RQ_CQE_OFFOLAD_TYPE_PKT_UMBCAST_MASK 0x3U
+
+#define RQ_CQE_OFFOLAD_TYPE_RSS_TYPE_SHIFT 24
+#define RQ_CQE_OFFOLAD_TYPE_RSS_TYPE_MASK 0xFFU
+
+#define RQ_CQE_OFFOLAD_TYPE_GET(val, member) (((val) >> \
+ RQ_CQE_OFFOLAD_TYPE_##member##_SHIFT) & \
+ RQ_CQE_OFFOLAD_TYPE_##member##_MASK)
+
+#define RQ_CQE_PKT_TYPES_NON_L2_MASK 0x800U
+#define RQ_CQE_PKT_TYPES_L2_MASK 0x7FU
+
+#define RQ_CQE_STATUS_CSUM_BYPASS_VAL 0x80U
+#define RQ_CQE_STATUS_CSUM_ERR_IP_MASK 0x39U
+#define RQ_CQE_STATUS_CSUM_ERR_L4_MASK 0x46U
+#define RQ_CQE_STATUS_CSUM_ERR_OTHER 0x100U
+
+#define SECT_SIZE_BYTES(size) ((size) << 3)
+
+#define HINIC_PF_SET_VF_ALREADY 0x4
+
+#define WQS_BLOCKS_PER_PAGE 4
+
+#define WQ_SIZE(wq) (u32)((u64)(wq)->q_depth * (wq)->wqebb_size)
+
+#define WQE_PAGE_NUM(wq, idx) (((idx) >> ((wq)->wqebbs_per_page_shift)) & \
+ ((wq)->num_q_pages - 1))
+
+#define WQE_PAGE_OFF(wq, idx) ((u64)((wq)->wqebb_size) * \
+ ((idx) & ((wq)->num_wqebbs_per_page - 1)))
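+/* Example with hypothetical values: for a wq with wqebb_size 64,
+ * wqebbs_per_page_shift 8 (256 WQEBBs per page) and num_q_pages 4,
+ * index 600 maps to WQE_PAGE_NUM == 2 and WQE_PAGE_OFF == 64 * 88 == 5632.
+ */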
+
+#define WQ_PAGE_ADDR_SIZE sizeof(u64)
+#define WQ_PAGE_ADDR_SIZE_SHIFT 3
+#define WQ_PAGE_ADDR(wq, idx) \
+ (u8 *)(*(u64 *)((u64)((wq)->shadow_block_vaddr) + \
+ (WQE_PAGE_NUM(wq, idx) << WQ_PAGE_ADDR_SIZE_SHIFT)))
+
+#define WQ_BLOCK_SIZE 4096UL
+#define WQS_PAGE_SIZE (WQS_BLOCKS_PER_PAGE * WQ_BLOCK_SIZE)
+#define WQ_MAX_PAGES (WQ_BLOCK_SIZE >> WQ_PAGE_ADDR_SIZE_SHIFT)
+
+#define CMDQ_BLOCKS_PER_PAGE 8
+#define CMDQ_BLOCK_SIZE 512UL
+#define CMDQ_PAGE_SIZE ALIGN((CMDQ_BLOCKS_PER_PAGE * \
+ CMDQ_BLOCK_SIZE), PAGE_SIZE)
+
+#define ADDR_4K_ALIGNED(addr) (0 == ((addr) & 0xfff))
+#define ADDR_256K_ALIGNED(addr) (0 == ((addr) & 0x3ffff))
+
+#define WQ_BASE_VADDR(wqs, wq) \
+ (u64 *)(((u64)((wqs)->page_vaddr[(wq)->page_idx])) \
+ + (wq)->block_idx * WQ_BLOCK_SIZE)
+
+#define WQ_BASE_PADDR(wqs, wq) (((wqs)->page_paddr[(wq)->page_idx]) \
+ + (u64)(wq)->block_idx * WQ_BLOCK_SIZE)
+
+#define WQ_BASE_ADDR(wqs, wq) \
+ (u64 *)(((u64)((wqs)->shadow_page_vaddr[(wq)->page_idx])) \
+ + (wq)->block_idx * WQ_BLOCK_SIZE)
+
+#define CMDQ_BASE_VADDR(cmdq_pages, wq) \
+ (u64 *)(((u64)((cmdq_pages)->cmdq_page_vaddr)) \
+ + (wq)->block_idx * CMDQ_BLOCK_SIZE)
+
+#define CMDQ_BASE_PADDR(cmdq_pages, wq) \
+ (((u64)((cmdq_pages)->cmdq_page_paddr)) \
+ + (u64)(wq)->block_idx * CMDQ_BLOCK_SIZE)
+
+#define CMDQ_BASE_ADDR(cmdq_pages, wq) \
+ (u64 *)(((u64)((cmdq_pages)->cmdq_shadow_page_vaddr)) \
+ + (wq)->block_idx * CMDQ_BLOCK_SIZE)
+
+#define MASKED_WQE_IDX(wq, idx) ((idx) & (wq)->mask)
+
+#define WQE_SHADOW_PAGE(wq, wqe) \
+ (u16)(((unsigned long)(wqe) - (unsigned long)(wq)->shadow_wqe) \
+ / (wq)->max_wqe_size)
+
+#define WQE_IN_RANGE(wqe, start, end) \
+ (((unsigned long)(wqe) >= (unsigned long)(start)) && \
+ ((unsigned long)(wqe) < (unsigned long)(end)))
+
+#define WQ_NUM_PAGES(num_wqs) \
+ (ALIGN((u32)num_wqs, WQS_BLOCKS_PER_PAGE) / WQS_BLOCKS_PER_PAGE)
+
+/* Queue buffer related define */
+enum hinic_rx_buf_size {
+ HINIC_RX_BUF_SIZE_32B = 0x20,
+ HINIC_RX_BUF_SIZE_64B = 0x40,
+ HINIC_RX_BUF_SIZE_96B = 0x60,
+ HINIC_RX_BUF_SIZE_128B = 0x80,
+ HINIC_RX_BUF_SIZE_192B = 0xC0,
+ HINIC_RX_BUF_SIZE_256B = 0x100,
+ HINIC_RX_BUF_SIZE_384B = 0x180,
+ HINIC_RX_BUF_SIZE_512B = 0x200,
+ HINIC_RX_BUF_SIZE_768B = 0x300,
+ HINIC_RX_BUF_SIZE_1K = 0x400,
+ HINIC_RX_BUF_SIZE_1_5K = 0x600,
+ HINIC_RX_BUF_SIZE_2K = 0x800,
+ HINIC_RX_BUF_SIZE_3K = 0xC00,
+ HINIC_RX_BUF_SIZE_4K = 0x1000,
+ HINIC_RX_BUF_SIZE_8K = 0x2000,
+ HINIC_RX_BUF_SIZE_16K = 0x4000,
+};
+
+enum hinic_res_state {
+ HINIC_RES_CLEAN = 0,
+ HINIC_RES_ACTIVE = 1,
+};
+
+#define DEFAULT_RX_BUF_SIZE ((u16)0xB)
+
+#define BUF_DESC_SIZE_SHIFT 4
+
+#define HINIC_SQ_WQE_SIZE(num_sge) \
+ (sizeof(struct hinic_sq_ctrl) + \
+ sizeof(struct hinic_sq_task) + \
+ (unsigned int)((num_sge) << BUF_DESC_SIZE_SHIFT))
+
+#define HINIC_SQ_WQEBB_CNT(num_sge) \
+ (int)(ALIGN(HINIC_SQ_WQE_SIZE((u32)num_sge), \
+ HINIC_SQ_WQEBB_SIZE) >> HINIC_SQ_WQEBB_SHIFT)
+
+#define HINIC_GET_RX_VLAN_OFFLOAD_EN(offload_type) \
+ RQ_CQE_OFFOLAD_TYPE_GET(offload_type, VLAN_EN)
+
+#define HINIC_GET_RSS_TYPES(offload_type) \
+ RQ_CQE_OFFOLAD_TYPE_GET(offload_type, RSS_TYPE)
+
+#define HINIC_GET_PKT_TYPES(offload_type) \
+ RQ_CQE_OFFOLAD_TYPE_GET(offload_type, PKT_TYPE)
+
+#define HINIC_GET_RX_PKT_TYPE(offload_type) \
+ RQ_CQE_OFFOLAD_TYPE_GET(offload_type, PKT_TYPE)
+
+#define HINIC_GET_RX_PKT_UMBCAST(offload_type) \
+ RQ_CQE_OFFOLAD_TYPE_GET(offload_type, PKT_UMBCAST)
+
+
+#define HINIC_GET_RX_VLAN_TAG(vlan_len) \
+ RQ_CQE_SGE_GET(vlan_len, VLAN)
+
+#define HINIC_GET_RX_PKT_LEN(vlan_len) \
+ RQ_CQE_SGE_GET(vlan_len, LEN)
+
+#define HINIC_GET_RX_CSUM_ERR(status) \
+ RQ_CQE_STATUS_GET(status, CSUM_ERR)
+
+#define HINIC_GET_RX_DONE(status) \
+ RQ_CQE_STATUS_GET(status, RXDONE)
+
+#define HINIC_GET_RX_FLUSH(status) \
+ RQ_CQE_STATUS_GET(status, FLUSH)
+
+#define HINIC_GET_RX_BP_EN(status) \
+ RQ_CQE_STATUS_GET(status, BP_EN)
+
+#define HINIC_GET_RX_NUM_LRO(status) \
+ RQ_CQE_STATUS_GET(status, NUM_LRO)
+
+#define HINIC_PKT_TYPES_UNKNOWN(pkt_types) \
+ ((pkt_types) & RQ_CQE_PKT_TYPES_NON_L2_MASK)
+
+#define HINIC_PKT_TYPES_L2(pkt_types) \
+ ((pkt_types) & RQ_CQE_PKT_TYPES_L2_MASK)
+
+#define HINIC_CSUM_ERR_BYPASSED(csum_err) \
+ ((csum_err) == RQ_CQE_STATUS_CSUM_BYPASS_VAL)
+
+#define HINIC_CSUM_ERR_IP(csum_err) \
+ ((csum_err) & RQ_CQE_STATUS_CSUM_ERR_IP_MASK)
+
+#define HINIC_CSUM_ERR_L4(csum_err) \
+ ((csum_err) & RQ_CQE_STATUS_CSUM_ERR_L4_MASK)
+
+#define HINIC_CSUM_ERR_OTHER(csum_err) \
+ ((csum_err) == RQ_CQE_STATUS_CSUM_ERR_OTHER)
+
+#define TX_MSS_DEFAULT 0x3E00
+#define TX_MSS_MIN 0x50
+
+enum sq_wqe_type {
+ SQ_NORMAL_WQE = 0,
+};
+
+enum rq_completion_fmt {
+ RQ_COMPLETE_SGE = 1
+};
+
+#define HINIC_VLAN_FILTER_EN (1U << 0)
+#define HINIC_BROADCAST_FILTER_EX_EN (1U << 1)
+
+#endif /* _HINIC_QE_DEF_H_ */
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.h b/drivers/net/hinic/hinic_pmd_ethdev.h
new file mode 100644
index 000000000..4b0555e89
--- /dev/null
+++ b/drivers/net/hinic/hinic_pmd_ethdev.h
@@ -0,0 +1,102 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Huawei Technologies Co., Ltd
+ */
+
+#ifndef _HINIC_PMD_ETHDEV_H_
+#define _HINIC_PMD_ETHDEV_H_
+
+#include "base/hinic_pmd_dpdev.h"
+
+#define PMD_DRIVER_VERSION "2.0.0.1"
+
+/* Vendor ID used by Huawei devices */
+#define HINIC_HUAWEI_VENDOR_ID 0x19E5
+
+/* Hinic devices */
+#define HINIC_DEV_ID_PRD 0x1822
+#define HINIC_DEV_ID_MEZZ_25GE 0x0210
+#define HINIC_DEV_ID_MEZZ_40GE 0x020D
+#define HINIC_DEV_ID_MEZZ_100GE 0x0205
+
+#define HINIC_PMD_DEV_BOND (1)
+#define HINIC_PMD_DEV_EMPTY (-1)
+#define HINIC_DEV_NAME_MAX_LEN (32)
+
+#define HINIC_RSS_OFFLOAD_ALL ( \
+ ETH_RSS_IPV4 | \
+ ETH_RSS_FRAG_IPV4 |\
+ ETH_RSS_NONFRAG_IPV4_TCP | \
+ ETH_RSS_NONFRAG_IPV4_UDP | \
+ ETH_RSS_IPV6 | \
+ ETH_RSS_FRAG_IPV6 | \
+ ETH_RSS_NONFRAG_IPV6_TCP | \
+ ETH_RSS_NONFRAG_IPV6_UDP | \
+ ETH_RSS_IPV6_EX | \
+ ETH_RSS_IPV6_TCP_EX | \
+ ETH_RSS_IPV6_UDP_EX)
+
+#define HINIC_MTU_TO_PKTLEN(mtu) \
+ ((mtu) + ETH_HLEN + ETH_CRC_LEN)
+
+#define HINIC_PKTLEN_TO_MTU(pktlen) \
+ ((pktlen) - (ETH_HLEN + ETH_CRC_LEN))
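+/* Example: with ETH_HLEN (14) and ETH_CRC_LEN (4) from hinic_compat.h,
+ * HINIC_MTU_TO_PKTLEN(1500) == 1518 and HINIC_PKTLEN_TO_MTU(1518) == 1500.
+ */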
+
+/* vhd type */
+#define HINIC_VHD_TYPE_0B (2)
+#define HINIC_VHD_TYPE_10B (1)
+#define HINIC_VHD_TYPE_12B (0)
+
+/* vlan_id is a 12-bit number.
+ * The VFTA array is effectively a 4096-bit array made up of 128 32-bit
+ * elements. Since 2^5 = 32, the lower 5 bits of vlan_id select the bit
+ * within a 32-bit element, and the upper 7 bits select the VFTA array index.
+ */
+#define HINIC_VFTA_BIT(vlan_id) (1 << ((vlan_id) & 0x1F))
+#define HINIC_VFTA_IDX(vlan_id) ((vlan_id) >> 5)
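+/* Example: for vlan_id 1029 (0x405), HINIC_VFTA_IDX(1029) == 32 and
+ * HINIC_VFTA_BIT(1029) == (1 << 5), i.e. bit 5 of VFTA element 32.
+ */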
+
+#define HINIC_INTR_CB_UNREG_MAX_RETRIES 10
+
+/* eth_dev ops */
+int hinic_dev_configure(struct rte_eth_dev *dev);
+void hinic_dev_infos_get(struct rte_eth_dev *dev,
+ struct rte_eth_dev_info *dev_info);
+int hinic_dev_start(struct rte_eth_dev *dev);
+int hinic_link_update(struct rte_eth_dev *dev, int wait_to_complete);
+void hinic_rx_queue_release(void *queue);
+void hinic_tx_queue_release(void *queue);
+void hinic_dev_stop(struct rte_eth_dev *dev);
+void hinic_dev_close(struct rte_eth_dev *dev);
+int hinic_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats);
+void hinic_dev_stats_reset(struct rte_eth_dev *dev);
+void hinic_dev_xstats_reset(struct rte_eth_dev *dev);
+void hinic_dev_promiscuous_enable(struct rte_eth_dev *dev);
+void hinic_dev_promiscuous_disable(struct rte_eth_dev *dev);
+
+int hinic_vlan_offload_set(struct rte_eth_dev *dev, int mask);
+int hinic_link_event_process(struct rte_eth_dev *dev, u8 status);
+void hinic_disable_interrupt(struct rte_eth_dev *dev);
+void hinic_free_all_sq(struct hinic_nic_dev *nic_dev);
+void hinic_free_all_rq(struct hinic_nic_dev *nic_dev);
+
+int hinic_rxtx_configure(struct rte_eth_dev *dev);
+int hinic_rss_hash_update(struct rte_eth_dev *dev,
+ struct rte_eth_rss_conf *rss_conf);
+int hinic_rss_conf_get(struct rte_eth_dev *dev,
+ struct rte_eth_rss_conf *rss_conf);
+int hinic_rss_indirtbl_update(struct rte_eth_dev *dev,
+ struct rte_eth_rss_reta_entry64 *reta_conf,
+ uint16_t reta_size);
+int hinic_rss_indirtbl_query(struct rte_eth_dev *dev,
+ struct rte_eth_rss_reta_entry64 *reta_conf,
+ uint16_t reta_size);
+
+int hinic_dev_xstats_get(struct rte_eth_dev *dev,
+ struct rte_eth_xstat *xstats, unsigned int n);
+int hinic_dev_xstats_get_names(struct rte_eth_dev *dev,
+ struct rte_eth_xstat_name *xstats_names,
+ __rte_unused unsigned int limit);
+
+int hinic_fw_version_get(struct rte_eth_dev *dev,
+ char *fw_version, size_t fw_size);
+
+#endif /* _HINIC_PMD_ETHDEV_H_ */
diff --git a/drivers/net/hinic/hinic_pmd_rx.h b/drivers/net/hinic/hinic_pmd_rx.h
new file mode 100644
index 000000000..4d3fc2722
--- /dev/null
+++ b/drivers/net/hinic/hinic_pmd_rx.h
@@ -0,0 +1,135 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Huawei Technologies Co., Ltd
+ */
+
+#ifndef _HINIC_PMD_RX_H_
+#define _HINIC_PMD_RX_H_
+
+/* rxq wq operations */
+#define HINIC_GET_RQ_WQE_MASK(rxq) \
+ ((rxq)->wq->mask)
+
+#define HINIC_GET_RQ_LOCAL_CI(rxq) \
+ (((rxq)->wq->cons_idx) & HINIC_GET_RQ_WQE_MASK(rxq))
+
+#define HINIC_GET_RQ_LOCAL_PI(rxq) \
+ (((rxq)->wq->prod_idx) & HINIC_GET_RQ_WQE_MASK(rxq))
+
+#define HINIC_UPDATE_RQ_LOCAL_CI(rxq, wqebb_cnt) \
+ do { \
+ (rxq)->wq->cons_idx += (wqebb_cnt); \
+ (rxq)->wq->delta += (wqebb_cnt); \
+ } while (0)
+
+#define HINIC_GET_RQ_FREE_WQEBBS(rxq) \
+ ((rxq)->wq->delta - 1)
+
+#define HINIC_UPDATE_RQ_HW_PI(rxq, pi) \
+ (*((rxq)->pi_virt_addr) = \
+ cpu_to_be16((pi) & HINIC_GET_RQ_WQE_MASK(rxq)))
+
+/* rxq cqe done and status bit */
+#define HINIC_GET_RX_DONE_BE(status) \
+ ((status) & 0x80U)
+
+#define HINIC_GET_RX_FLUSH_BE(status) \
+ ((status) & 0x10U)
+
+#define HINIC_DEFAULT_RX_FREE_THRESH 32
+
+#define HINIC_RX_CSUM_OFFLOAD_EN 0xFFF
+
+struct hinic_rxq_stats {
+ u64 packets;
+ u64 bytes;
+ u64 rx_nombuf;
+ u64 errors;
+ u64 rx_discards;
+
+#ifdef HINIC_XSTAT_MBUF_USE
+ u64 alloc_mbuf;
+ u64 free_mbuf;
+ u64 left_mbuf;
+#endif
+
+#ifdef HINIC_XSTAT_RXBUF_INFO
+ u64 rx_mbuf;
+ u64 rx_avail;
+ u64 rx_hole;
+ u64 burst_pkts;
+#endif
+
+#ifdef HINIC_XSTAT_PROF_RX
+ u64 app_tsc;
+ u64 pmd_tsc;
+#endif
+};
+
+/* Attention: do not add any members to hinic_rx_info,
+ * because the rxq bulk rearm mode writes the mbuf pointer directly
+ * into rx_info.
+ */
+struct hinic_rx_info {
+ struct rte_mbuf *mbuf;
+};
+
+struct hinic_rxq {
+ struct hinic_wq *wq;
+ volatile u16 *pi_virt_addr;
+
+ u16 port_id;
+ u16 q_id;
+ u16 q_depth;
+ u16 buf_len;
+
+ u16 rx_free_thresh;
+ u16 rxinfo_align_end;
+
+ unsigned long status;
+ struct hinic_rxq_stats rxq_stats;
+
+ struct hinic_nic_dev *nic_dev;
+
+ struct hinic_rx_info *rx_info;
+ volatile struct hinic_rq_cqe *rx_cqe;
+
+ dma_addr_t cqe_start_paddr;
+ void *cqe_start_vaddr;
+ struct rte_mempool *mb_pool;
+
+#ifdef HINIC_XSTAT_PROF_RX
+ /* performance profiling */
+ uint64_t prof_rx_end_tsc;
+#endif
+};
+
+#ifdef HINIC_XSTAT_MBUF_USE
+void hinic_rx_free_mbuf(struct hinic_rxq *rxq, struct rte_mbuf *m);
+#else
+void hinic_rx_free_mbuf(struct rte_mbuf *m);
+#endif
+
+int hinic_setup_rx_resources(struct hinic_rxq *rxq);
+
+void hinic_free_all_rx_resources(struct rte_eth_dev *dev);
+
+void hinic_free_all_rx_mbuf(struct rte_eth_dev *dev);
+
+void hinic_free_rx_resources(struct hinic_rxq *rxq);
+
+u16 hinic_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts);
+
+void hinic_free_all_rx_skbs(struct hinic_rxq *rxq);
+
+void hinic_rx_alloc_pkts(struct hinic_rxq *rxq);
+
+void hinic_rxq_get_stats(struct hinic_rxq *rxq, struct hinic_rxq_stats *stats);
+
+void hinic_rxq_stats_reset(struct hinic_rxq *rxq);
+
+int hinic_config_mq_mode(struct rte_eth_dev *dev, bool on);
+
+int hinic_rx_configure(struct rte_eth_dev *dev);
+
+void hinic_rx_remove_configure(struct rte_eth_dev *dev);
+
+#endif /* _HINIC_PMD_RX_H_ */
diff --git a/drivers/net/hinic/hinic_pmd_tx.h b/drivers/net/hinic/hinic_pmd_tx.h
new file mode 100644
index 000000000..15fe31c85
--- /dev/null
+++ b/drivers/net/hinic/hinic_pmd_tx.h
@@ -0,0 +1,97 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Huawei Technologies Co., Ltd
+ */
+
+#ifndef _HINIC_PMD_TX_H_
+#define _HINIC_PMD_TX_H_
+
+#define HINIC_DEFAULT_TX_FREE_THRESH 32
+#define HINIC_MAX_TX_FREE_BULK 64
+
+/* txq wq operations */
+#define HINIC_GET_SQ_WQE_MASK(txq) \
+ ((txq)->wq->mask)
+
+#define HINIC_GET_SQ_HW_CI(txq) \
+ ((be16_to_cpu(*(txq)->cons_idx_addr)) & HINIC_GET_SQ_WQE_MASK(txq))
+
+#define HINIC_GET_SQ_LOCAL_CI(txq) \
+ (((txq)->wq->cons_idx) & HINIC_GET_SQ_WQE_MASK(txq))
+
+#define HINIC_UPDATE_SQ_LOCAL_CI(txq, wqebb_cnt) \
+ do { \
+ (txq)->wq->cons_idx += wqebb_cnt; \
+ (txq)->wq->delta += wqebb_cnt; \
+ } while (0)
+
+#define HINIC_GET_SQ_FREE_WQEBBS(txq) \
+ ((txq)->wq->delta - 1)
+
+#define HINIC_IS_SQ_EMPTY(txq) \
+ (((txq)->wq->delta) == ((txq)->q_depth))
+
+#define HINIC_GET_WQ_TAIL(txq) ((txq)->wq->queue_buf_vaddr + \
+ (txq)->wq->wq_buf_size)
+#define HINIC_GET_WQ_HEAD(txq) ((txq)->wq->queue_buf_vaddr)
+
+struct hinic_txq_stats {
+ u64 packets;
+ u64 bytes;
+ u64 rl_drop;
+ u64 tx_busy;
+ u64 off_errs;
+ u64 cpy_pkts;
+
+#ifdef HINIC_XSTAT_PROF_TX
+ u64 app_tsc;
+ u64 pmd_tsc;
+ u64 burst_pkts;
+#endif
+};
+
+struct hinic_tx_info {
+ struct rte_mbuf *mbuf;
+ int wqebb_cnt;
+ struct rte_mbuf *cpy_mbuf;
+};
+
+struct hinic_txq {
+ /* cacheline0 */
+ struct hinic_nic_dev *nic_dev;
+ struct hinic_wq *wq;
+ struct hinic_sq *sq;
+ volatile u16 *cons_idx_addr;
+ struct hinic_tx_info *tx_info;
+
+ u16 tx_free_thresh;
+ u16 port_id;
+ u16 q_id;
+ u16 q_depth;
+ u32 cos;
+
+ /* cacheline1 */
+ struct hinic_txq_stats txq_stats;
+ u64 sq_head_addr;
+ u64 sq_bot_sge_addr;
+#ifdef HINIC_XSTAT_PROF_TX
+ uint64_t prof_tx_end_tsc; /* performance profiling */
+#endif
+};
+
+int hinic_setup_tx_resources(struct hinic_txq *txq);
+
+void hinic_free_all_tx_resources(struct rte_eth_dev *eth_dev);
+
+void hinic_free_all_tx_mbuf(struct rte_eth_dev *eth_dev);
+
+void hinic_free_tx_resources(struct hinic_txq *txq);
+
+u16 hinic_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts);
+
+void hinic_free_all_tx_skbs(struct hinic_txq *txq);
+
+void hinic_txq_get_stats(struct hinic_txq *txq, struct hinic_txq_stats *stats);
+
+void hinic_txq_stats_reset(struct hinic_txq *txq);
+
+#endif /* _HINIC_PMD_TX_H_ */
--
2.18.0