DPDK patches and discussions
* [dpdk-dev] [PATCH v5 00/24] net: ngbe PMD
@ 2021-06-02  9:40 Jiawen Wu
  2021-06-02  9:40 ` [dpdk-dev] [PATCH v5 01/24] net/ngbe: add build and doc infrastructure Jiawen Wu
                   ` (25 more replies)
  0 siblings, 26 replies; 51+ messages in thread
From: Jiawen Wu @ 2021-06-02  9:40 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

This patch set provides a skeleton of the ngbe PMD,
adapted to Wangxun WX1860 series NICs.

v5:
- Extend patches with device initialization and RxTx functions.

v4:
- Fix compile error.

v3:
- Use rte_ether functions to define macros.

v2:
- Correct some typographical errors.
- Use ethdev debug flags instead of the driver's own.

Jiawen Wu (24):
  net/ngbe: add build and doc infrastructure
  net/ngbe: add device IDs
  net/ngbe: support probe and remove
  net/ngbe: add device init and uninit
  net/ngbe: add log type and error type
  net/ngbe: define registers
  net/ngbe: set MAC type and LAN id
  net/ngbe: init and validate EEPROM
  net/ngbe: add HW initialization
  net/ngbe: identify PHY and reset PHY
  net/ngbe: store MAC address
  net/ngbe: add info get operation
  net/ngbe: support link update
  net/ngbe: setup the check PHY link
  net/ngbe: add Rx queue setup and release
  net/ngbe: add Tx queue setup and release
  net/ngbe: add Rx and Tx init
  net/ngbe: add packet type
  net/ngbe: add simple Rx and Tx flow
  net/ngbe: support bulk and scatter Rx
  net/ngbe: support full-featured Tx path
  net/ngbe: add device start operation
  net/ngbe: start and stop RxTx
  net/ngbe: add device stop operation

 MAINTAINERS                            |    6 +
 doc/guides/nics/features/ngbe.ini      |   25 +
 doc/guides/nics/index.rst              |    1 +
 doc/guides/nics/ngbe.rst               |   58 +
 doc/guides/rel_notes/release_21_08.rst |    6 +
 drivers/net/meson.build                |    1 +
 drivers/net/ngbe/base/meson.build      |   26 +
 drivers/net/ngbe/base/ngbe.h           |   11 +
 drivers/net/ngbe/base/ngbe_devids.h    |   84 +
 drivers/net/ngbe/base/ngbe_dummy.h     |  209 ++
 drivers/net/ngbe/base/ngbe_eeprom.c    |  203 ++
 drivers/net/ngbe/base/ngbe_eeprom.h    |   17 +
 drivers/net/ngbe/base/ngbe_hw.c        | 1069 +++++++++
 drivers/net/ngbe/base/ngbe_hw.h        |   59 +
 drivers/net/ngbe/base/ngbe_mng.c       |  198 ++
 drivers/net/ngbe/base/ngbe_mng.h       |   65 +
 drivers/net/ngbe/base/ngbe_osdep.h     |  178 ++
 drivers/net/ngbe/base/ngbe_phy.c       |  451 ++++
 drivers/net/ngbe/base/ngbe_phy.h       |   62 +
 drivers/net/ngbe/base/ngbe_phy_mvl.c   |  251 +++
 drivers/net/ngbe/base/ngbe_phy_mvl.h   |   97 +
 drivers/net/ngbe/base/ngbe_phy_rtl.c   |  240 ++
 drivers/net/ngbe/base/ngbe_phy_rtl.h   |   89 +
 drivers/net/ngbe/base/ngbe_phy_yt.c    |  272 +++
 drivers/net/ngbe/base/ngbe_phy_yt.h    |   76 +
 drivers/net/ngbe/base/ngbe_regs.h      | 1490 +++++++++++++
 drivers/net/ngbe/base/ngbe_status.h    |  125 ++
 drivers/net/ngbe/base/ngbe_type.h      |  210 ++
 drivers/net/ngbe/meson.build           |   22 +
 drivers/net/ngbe/ngbe_ethdev.c         | 1266 +++++++++++
 drivers/net/ngbe/ngbe_ethdev.h         |  146 ++
 drivers/net/ngbe/ngbe_logs.h           |   46 +
 drivers/net/ngbe/ngbe_ptypes.c         |  640 ++++++
 drivers/net/ngbe/ngbe_ptypes.h         |  351 +++
 drivers/net/ngbe/ngbe_rxtx.c           | 2829 ++++++++++++++++++++++++
 drivers/net/ngbe/ngbe_rxtx.h           |  366 +++
 drivers/net/ngbe/version.map           |    3 +
 37 files changed, 11248 insertions(+)
 create mode 100644 doc/guides/nics/features/ngbe.ini
 create mode 100644 doc/guides/nics/ngbe.rst
 create mode 100644 drivers/net/ngbe/base/meson.build
 create mode 100644 drivers/net/ngbe/base/ngbe.h
 create mode 100644 drivers/net/ngbe/base/ngbe_devids.h
 create mode 100644 drivers/net/ngbe/base/ngbe_dummy.h
 create mode 100644 drivers/net/ngbe/base/ngbe_eeprom.c
 create mode 100644 drivers/net/ngbe/base/ngbe_eeprom.h
 create mode 100644 drivers/net/ngbe/base/ngbe_hw.c
 create mode 100644 drivers/net/ngbe/base/ngbe_hw.h
 create mode 100644 drivers/net/ngbe/base/ngbe_mng.c
 create mode 100644 drivers/net/ngbe/base/ngbe_mng.h
 create mode 100644 drivers/net/ngbe/base/ngbe_osdep.h
 create mode 100644 drivers/net/ngbe/base/ngbe_phy.c
 create mode 100644 drivers/net/ngbe/base/ngbe_phy.h
 create mode 100644 drivers/net/ngbe/base/ngbe_phy_mvl.c
 create mode 100644 drivers/net/ngbe/base/ngbe_phy_mvl.h
 create mode 100644 drivers/net/ngbe/base/ngbe_phy_rtl.c
 create mode 100644 drivers/net/ngbe/base/ngbe_phy_rtl.h
 create mode 100644 drivers/net/ngbe/base/ngbe_phy_yt.c
 create mode 100644 drivers/net/ngbe/base/ngbe_phy_yt.h
 create mode 100644 drivers/net/ngbe/base/ngbe_regs.h
 create mode 100644 drivers/net/ngbe/base/ngbe_status.h
 create mode 100644 drivers/net/ngbe/base/ngbe_type.h
 create mode 100644 drivers/net/ngbe/meson.build
 create mode 100644 drivers/net/ngbe/ngbe_ethdev.c
 create mode 100644 drivers/net/ngbe/ngbe_ethdev.h
 create mode 100644 drivers/net/ngbe/ngbe_logs.h
 create mode 100644 drivers/net/ngbe/ngbe_ptypes.c
 create mode 100644 drivers/net/ngbe/ngbe_ptypes.h
 create mode 100644 drivers/net/ngbe/ngbe_rxtx.c
 create mode 100644 drivers/net/ngbe/ngbe_rxtx.h
 create mode 100644 drivers/net/ngbe/version.map

-- 
2.27.0





* [dpdk-dev] [PATCH v5 01/24] net/ngbe: add build and doc infrastructure
  2021-06-02  9:40 [dpdk-dev] [PATCH v5 00/24] net: ngbe PMD Jiawen Wu
@ 2021-06-02  9:40 ` Jiawen Wu
  2021-06-14 17:05   ` Andrew Rybchenko
  2021-06-02  9:40 ` [dpdk-dev] [PATCH v5 02/24] net/ngbe: add device IDs Jiawen Wu
                   ` (24 subsequent siblings)
  25 siblings, 1 reply; 51+ messages in thread
From: Jiawen Wu @ 2021-06-02  9:40 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Add the bare minimum PMD library and doc build infrastructure,
and claim maintainership of the ngbe PMD.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 MAINTAINERS                            |  6 ++++++
 doc/guides/nics/features/ngbe.ini      | 10 +++++++++
 doc/guides/nics/index.rst              |  1 +
 doc/guides/nics/ngbe.rst               | 28 ++++++++++++++++++++++++++
 doc/guides/rel_notes/release_21_08.rst |  6 ++++++
 drivers/net/meson.build                |  1 +
 drivers/net/ngbe/meson.build           | 12 +++++++++++
 drivers/net/ngbe/ngbe_ethdev.c         |  5 +++++
 drivers/net/ngbe/ngbe_ethdev.h         |  5 +++++
 drivers/net/ngbe/version.map           |  3 +++
 10 files changed, 77 insertions(+)
 create mode 100644 doc/guides/nics/features/ngbe.ini
 create mode 100644 doc/guides/nics/ngbe.rst
 create mode 100644 drivers/net/ngbe/meson.build
 create mode 100644 drivers/net/ngbe/ngbe_ethdev.c
 create mode 100644 drivers/net/ngbe/ngbe_ethdev.h
 create mode 100644 drivers/net/ngbe/version.map

diff --git a/MAINTAINERS b/MAINTAINERS
index 5877a16971..04672f6eaa 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -903,6 +903,12 @@ F: drivers/net/txgbe/
 F: doc/guides/nics/txgbe.rst
 F: doc/guides/nics/features/txgbe.ini
 
+Wangxun ngbe
+M: Jiawen Wu <jiawenwu@trustnetic.com>
+F: drivers/net/ngbe/
+F: doc/guides/nics/ngbe.rst
+F: doc/guides/nics/features/ngbe.ini
+
 VMware vmxnet3
 M: Yong Wang <yongwang@vmware.com>
 F: drivers/net/vmxnet3/
diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini
new file mode 100644
index 0000000000..a7a524defc
--- /dev/null
+++ b/doc/guides/nics/features/ngbe.ini
@@ -0,0 +1,10 @@
+;
+; Supported features of the 'ngbe' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Linux                = Y
+ARMv8                = Y
+x86-32               = Y
+x86-64               = Y
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 799697caf0..31a3e6bcdc 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -47,6 +47,7 @@ Network Interface Controller Drivers
     netvsc
     nfb
     nfp
+    ngbe
     null
     octeontx
     octeontx2
diff --git a/doc/guides/nics/ngbe.rst b/doc/guides/nics/ngbe.rst
new file mode 100644
index 0000000000..4ec2623a05
--- /dev/null
+++ b/doc/guides/nics/ngbe.rst
@@ -0,0 +1,28 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+    Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+
+NGBE Poll Mode Driver
+======================
+
+The NGBE PMD (librte_pmd_ngbe) provides poll mode driver support
+for Wangxun 1 Gigabit Ethernet NICs.
+
+Prerequisites
+-------------
+
+- Learning about Wangxun 1 Gigabit Ethernet NICs using
+  `<https://www.net-swift.com/a/386.html>`_.
+
+- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to setup the basic DPDK environment.
+
+Driver compilation and testing
+------------------------------
+
+Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
+for details.
+
+Limitations or Known issues
+---------------------------
+
+Build with ICC is not supported yet.
+Power8, ARMv7 and BSD are not supported yet.
diff --git a/doc/guides/rel_notes/release_21_08.rst b/doc/guides/rel_notes/release_21_08.rst
index a6ecfdf3ce..2deac4f398 100644
--- a/doc/guides/rel_notes/release_21_08.rst
+++ b/doc/guides/rel_notes/release_21_08.rst
@@ -55,6 +55,12 @@ New Features
      Also, make sure to start the actual text at the margin.
      =======================================================
 
+* **Added Wangxun ngbe PMD.**
+
+  Added a new PMD driver for Wangxun 1 Gigabit Ethernet NICs.
+
+  See the :doc:`../nics/ngbe` for more details.
+
 
 Removed Items
 -------------
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index c8b5ce2980..d6c1751540 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -40,6 +40,7 @@ drivers = [
         'netvsc',
         'nfb',
         'nfp',
+	'ngbe',
         'null',
         'octeontx',
         'octeontx2',
diff --git a/drivers/net/ngbe/meson.build b/drivers/net/ngbe/meson.build
new file mode 100644
index 0000000000..de2d7be716
--- /dev/null
+++ b/drivers/net/ngbe/meson.build
@@ -0,0 +1,12 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+
+if is_windows
+	build = false
+	reason = 'not supported on Windows'
+	subdir_done()
+endif
+
+sources = files(
+	'ngbe_ethdev.c',
+)
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
new file mode 100644
index 0000000000..e424ff11a2
--- /dev/null
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -0,0 +1,5 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+ * Copyright(c) 2010-2017 Intel Corporation
+ */
+
diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
new file mode 100644
index 0000000000..e424ff11a2
--- /dev/null
+++ b/drivers/net/ngbe/ngbe_ethdev.h
@@ -0,0 +1,5 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+ * Copyright(c) 2010-2017 Intel Corporation
+ */
+
diff --git a/drivers/net/ngbe/version.map b/drivers/net/ngbe/version.map
new file mode 100644
index 0000000000..4a76d1d52d
--- /dev/null
+++ b/drivers/net/ngbe/version.map
@@ -0,0 +1,3 @@
+DPDK_21 {
+	local: *;
+};
-- 
2.27.0





* [dpdk-dev] [PATCH v5 02/24] net/ngbe: add device IDs
  2021-06-02  9:40 [dpdk-dev] [PATCH v5 00/24] net: ngbe PMD Jiawen Wu
  2021-06-02  9:40 ` [dpdk-dev] [PATCH v5 01/24] net/ngbe: add build and doc infrastructure Jiawen Wu
@ 2021-06-02  9:40 ` Jiawen Wu
  2021-06-14 17:08   ` Andrew Rybchenko
  2021-06-02  9:40 ` [dpdk-dev] [PATCH v5 03/24] net/ngbe: support probe and remove Jiawen Wu
                   ` (23 subsequent siblings)
  25 siblings, 1 reply; 51+ messages in thread
From: Jiawen Wu @ 2021-06-02  9:40 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Add device IDs for Wangxun 1Gb NICs, and register rte_ngbe_pmd.
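
The ID table is a standard rte_pci_id array terminated by a zero
sentinel. As a minimal sketch of how a probed { vendor, device } pair
is matched against such a table (the helper below is illustrative and
not part of the patch):

#include <stdbool.h>
#include <stdint.h>
#include <rte_bus_pci.h>

static bool
match_pci_id(const struct rte_pci_id *table, uint16_t vendor, uint16_t dev)
{
	/* the { .vendor_id = 0 } sentinel terminates the scan */
	for (; table->vendor_id != 0; table++) {
		if (table->vendor_id == vendor && table->device_id == dev)
			return true;
	}
	return false;
}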

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/ngbe/base/meson.build   | 18 +++++++
 drivers/net/ngbe/base/ngbe_devids.h | 84 +++++++++++++++++++++++++++++
 drivers/net/ngbe/meson.build        |  6 +++
 drivers/net/ngbe/ngbe_ethdev.c      | 51 ++++++++++++++++++
 4 files changed, 159 insertions(+)
 create mode 100644 drivers/net/ngbe/base/meson.build
 create mode 100644 drivers/net/ngbe/base/ngbe_devids.h

diff --git a/drivers/net/ngbe/base/meson.build b/drivers/net/ngbe/base/meson.build
new file mode 100644
index 0000000000..c5f6467743
--- /dev/null
+++ b/drivers/net/ngbe/base/meson.build
@@ -0,0 +1,18 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+
+sources = []
+
+error_cflags = []
+
+c_args = cflags
+foreach flag: error_cflags
+	if cc.has_argument(flag)
+		c_args += flag
+	endif
+endforeach
+
+base_lib = static_library('ngbe_base', sources,
+	dependencies: [static_rte_eal, static_rte_ethdev, static_rte_bus_pci],
+	c_args: c_args)
+base_objs = base_lib.extract_all_objects()
diff --git a/drivers/net/ngbe/base/ngbe_devids.h b/drivers/net/ngbe/base/ngbe_devids.h
new file mode 100644
index 0000000000..81671f71da
--- /dev/null
+++ b/drivers/net/ngbe/base/ngbe_devids.h
@@ -0,0 +1,84 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+ * Copyright(c) 2010-2017 Intel Corporation
+ */
+
+#ifndef _NGBE_DEVIDS_H_
+#define _NGBE_DEVIDS_H_
+
+/*
+ * Vendor ID
+ */
+#ifndef PCI_VENDOR_ID_WANGXUN
+#define PCI_VENDOR_ID_WANGXUN                   0x8088
+#endif
+
+/*
+ * Device IDs
+ */
+#define NGBE_DEV_ID_EM_VF			0x0110
+#define   NGBE_SUB_DEV_ID_EM_VF			0x0110
+#define NGBE_DEV_ID_EM				0x0100
+#define   NGBE_SUB_DEV_ID_EM_MVL_RGMII		0x0200
+#define   NGBE_SUB_DEV_ID_EM_MVL_SFP		0x0403
+#define   NGBE_SUB_DEV_ID_EM_RTL_SGMII		0x0410
+#define   NGBE_SUB_DEV_ID_EM_YT8521S_SFP	0x0460
+
+#define NGBE_DEV_ID_EM_WX1860AL_W		0x0100
+#define NGBE_DEV_ID_EM_WX1860AL_W_VF		0x0110
+#define NGBE_DEV_ID_EM_WX1860A2			0x0101
+#define NGBE_DEV_ID_EM_WX1860A2_VF		0x0111
+#define NGBE_DEV_ID_EM_WX1860A2S		0x0102
+#define NGBE_DEV_ID_EM_WX1860A2S_VF		0x0112
+#define NGBE_DEV_ID_EM_WX1860A4			0x0103
+#define NGBE_DEV_ID_EM_WX1860A4_VF		0x0113
+#define NGBE_DEV_ID_EM_WX1860A4S		0x0104
+#define NGBE_DEV_ID_EM_WX1860A4S_VF		0x0114
+#define NGBE_DEV_ID_EM_WX1860AL2		0x0105
+#define NGBE_DEV_ID_EM_WX1860AL2_VF		0x0115
+#define NGBE_DEV_ID_EM_WX1860AL2S		0x0106
+#define NGBE_DEV_ID_EM_WX1860AL2S_VF		0x0116
+#define NGBE_DEV_ID_EM_WX1860AL4		0x0107
+#define NGBE_DEV_ID_EM_WX1860AL4_VF		0x0117
+#define NGBE_DEV_ID_EM_WX1860AL4S		0x0108
+#define NGBE_DEV_ID_EM_WX1860AL4S_VF		0x0118
+#define NGBE_DEV_ID_EM_WX1860NCSI		0x0109
+#define NGBE_DEV_ID_EM_WX1860NCSI_VF		0x0119
+#define NGBE_DEV_ID_EM_WX1860A1			0x010A
+#define NGBE_DEV_ID_EM_WX1860A1_VF		0x011A
+#define NGBE_DEV_ID_EM_WX1860A1L		0x010B
+#define NGBE_DEV_ID_EM_WX1860A1L_VF		0x011B
+#define   NGBE_SUB_DEV_ID_EM_ZTE5201_RJ45	0x0100
+#define   NGBE_SUB_DEV_ID_EM_SF100F_LP		0x0103
+#define   NGBE_SUB_DEV_ID_EM_M88E1512_RJ45	0x0200
+#define   NGBE_SUB_DEV_ID_EM_SF100HT		0x0102
+#define   NGBE_SUB_DEV_ID_EM_SF200T		0x0201
+#define   NGBE_SUB_DEV_ID_EM_SF200HT		0x0202
+#define   NGBE_SUB_DEV_ID_EM_SF200T_S		0x0210
+#define   NGBE_SUB_DEV_ID_EM_SF200HT_S		0x0220
+#define   NGBE_SUB_DEV_ID_EM_SF200HXT		0x0230
+#define   NGBE_SUB_DEV_ID_EM_SF400T		0x0401
+#define   NGBE_SUB_DEV_ID_EM_SF400HT		0x0402
+#define   NGBE_SUB_DEV_ID_EM_M88E1512_SFP	0x0403
+#define   NGBE_SUB_DEV_ID_EM_SF400T_S		0x0410
+#define   NGBE_SUB_DEV_ID_EM_SF400HT_S		0x0420
+#define   NGBE_SUB_DEV_ID_EM_SF400HXT		0x0430
+#define   NGBE_SUB_DEV_ID_EM_SF400_OCP		0x0440
+#define   NGBE_SUB_DEV_ID_EM_SF400_LY		0x0450
+#define   NGBE_SUB_DEV_ID_EM_SF400_LY_YT	0x0470
+
+/* Assign excessive id with masks */
+#define NGBE_INTERNAL_MASK			0x000F
+#define NGBE_OEM_MASK				0x00F0
+#define NGBE_WOL_SUP_MASK			0x4000
+#define NGBE_NCSI_SUP_MASK			0x8000
+
+#define NGBE_INTERNAL_SFP			0x0003
+#define NGBE_OCP_CARD				0x0040
+#define NGBE_LY_M88E1512_SFP			0x0050
+#define NGBE_YT8521S_SFP			0x0060
+#define NGBE_LY_YT8521S_SFP			0x0070
+#define NGBE_WOL_SUP				0x4000
+#define NGBE_NCSI_SUP				0x8000
+
+#endif /* _NGBE_DEVIDS_H_ */
diff --git a/drivers/net/ngbe/meson.build b/drivers/net/ngbe/meson.build
index de2d7be716..81173fa7f0 100644
--- a/drivers/net/ngbe/meson.build
+++ b/drivers/net/ngbe/meson.build
@@ -7,6 +7,12 @@ if is_windows
 	subdir_done()
 endif
 
+subdir('base')
+objs = [base_objs]
+
 sources = files(
 	'ngbe_ethdev.c',
 )
+
+includes += include_directories('base')
+
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index e424ff11a2..0f1fa86fe6 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -3,3 +3,54 @@
  * Copyright(c) 2010-2017 Intel Corporation
  */
 
+#include <ethdev_pci.h>
+
+#include <base/ngbe_devids.h>
+
+/*
+ * The set of PCI devices this driver supports
+ */
+static const struct rte_pci_id pci_id_ngbe_map[] = {
+	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860A2) },
+	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860A2S) },
+	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860A4) },
+	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860A4S) },
+	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860AL2) },
+	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860AL2S) },
+	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860AL4) },
+	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860AL4S) },
+	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860NCSI) },
+	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860A1) },
+	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860A1L) },
+	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860AL_W) },
+	{ .vendor_id = 0, /* sentinel */ },
+};
+
+static int
+eth_ngbe_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+		struct rte_pci_device *pci_dev)
+{
+	RTE_SET_USED(pci_dev);
+
+	return 0;
+}
+
+static int eth_ngbe_pci_remove(struct rte_pci_device *pci_dev)
+{
+	RTE_SET_USED(pci_dev);
+
+	return 0;
+}
+
+static struct rte_pci_driver rte_ngbe_pmd = {
+	.id_table = pci_id_ngbe_map,
+	.drv_flags = RTE_PCI_DRV_NEED_MAPPING |
+		     RTE_PCI_DRV_INTR_LSC,
+	.probe = eth_ngbe_pci_probe,
+	.remove = eth_ngbe_pci_remove,
+};
+
+RTE_PMD_REGISTER_PCI(net_ngbe, rte_ngbe_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(net_ngbe, pci_id_ngbe_map);
+RTE_PMD_REGISTER_KMOD_DEP(net_ngbe, "* igb_uio | uio_pci_generic | vfio-pci");
+
-- 
2.27.0





* [dpdk-dev] [PATCH v5 03/24] net/ngbe: support probe and remove
  2021-06-02  9:40 [dpdk-dev] [PATCH v5 00/24] net: ngbe PMD Jiawen Wu
  2021-06-02  9:40 ` [dpdk-dev] [PATCH v5 01/24] net/ngbe: add build and doc infrastructure Jiawen Wu
  2021-06-02  9:40 ` [dpdk-dev] [PATCH v5 02/24] net/ngbe: add device IDs Jiawen Wu
@ 2021-06-02  9:40 ` Jiawen Wu
  2021-06-14 17:27   ` Andrew Rybchenko
  2021-06-02  9:40 ` [dpdk-dev] [PATCH v5 04/24] net/ngbe: add device init and uninit Jiawen Wu
                   ` (22 subsequent siblings)
  25 siblings, 1 reply; 51+ messages in thread
From: Jiawen Wu @ 2021-06-02  9:40 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Add basic PCIe ethdev probe and remove.
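
After a successful probe the NIC appears as an ordinary ethdev port.
A hedged usage sketch (illustrative only, not part of the patch):

#include <stdio.h>
#include <rte_ethdev.h>

static void
list_probed_ports(void)
{
	uint16_t port_id;

	/* walk every ethdev port created during the PCI probe phase */
	RTE_ETH_FOREACH_DEV(port_id) {
		struct rte_eth_dev_info info;

		if (rte_eth_dev_info_get(port_id, &info) == 0)
			printf("port %u: driver %s\n", port_id,
			       info.driver_name);
	}
}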

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 doc/guides/nics/features/ngbe.ini |  1 +
 drivers/net/ngbe/ngbe_ethdev.c    | 44 ++++++++++++++++++++++++++++---
 drivers/net/ngbe/ngbe_ethdev.h    | 10 +++++++
 3 files changed, 52 insertions(+), 3 deletions(-)

diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini
index a7a524defc..977286ac04 100644
--- a/doc/guides/nics/features/ngbe.ini
+++ b/doc/guides/nics/features/ngbe.ini
@@ -4,6 +4,7 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
+Multiprocess aware   = Y
 Linux                = Y
 ARMv8                = Y
 x86-32               = Y
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 0f1fa86fe6..83af1a6bc7 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -3,9 +3,11 @@
  * Copyright(c) 2010-2017 Intel Corporation
  */
 
+#include <rte_common.h>
 #include <ethdev_pci.h>
 
 #include <base/ngbe_devids.h>
+#include "ngbe_ethdev.h"
 
 /*
  * The set of PCI devices this driver supports
@@ -26,20 +28,56 @@ static const struct rte_pci_id pci_id_ngbe_map[] = {
 	{ .vendor_id = 0, /* sentinel */ },
 };
 
+static int
+eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
+{
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return 0;
+
+	rte_eth_copy_pci_info(eth_dev, pci_dev);
+
+	return 0;
+}
+
+static int
+eth_ngbe_dev_uninit(struct rte_eth_dev *eth_dev)
+{
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return 0;
+
+	RTE_SET_USED(eth_dev);
+
+	return 0;
+}
+
 static int
 eth_ngbe_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 		struct rte_pci_device *pci_dev)
 {
-	RTE_SET_USED(pci_dev);
+	int retval;
+
+	retval = rte_eth_dev_create(&pci_dev->device, pci_dev->device.name,
+			sizeof(struct ngbe_adapter),
+			eth_dev_pci_specific_init, pci_dev,
+			eth_ngbe_dev_init, NULL);
+
+	if (retval)
+		return retval;
 
 	return 0;
 }
 
 static int eth_ngbe_pci_remove(struct rte_pci_device *pci_dev)
 {
-	RTE_SET_USED(pci_dev);
+	struct rte_eth_dev *ethdev;
 
-	return 0;
+	ethdev = rte_eth_dev_allocated(pci_dev->device.name);
+	if (!ethdev)
+		return 0;
+
+	return rte_eth_dev_destroy(ethdev, eth_ngbe_dev_uninit);
 }
 
 static struct rte_pci_driver rte_ngbe_pmd = {
diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
index e424ff11a2..b79570dc51 100644
--- a/drivers/net/ngbe/ngbe_ethdev.h
+++ b/drivers/net/ngbe/ngbe_ethdev.h
@@ -3,3 +3,13 @@
  * Copyright(c) 2010-2017 Intel Corporation
  */
 
+#ifndef _NGBE_ETHDEV_H_
+#define _NGBE_ETHDEV_H_
+
+/*
+ * Structure to store private data for each driver instance (for each port).
+ */
+struct ngbe_adapter {
+};
+
+#endif /* _NGBE_ETHDEV_H_ */
-- 
2.27.0





* [dpdk-dev] [PATCH v5 04/24] net/ngbe: add device init and uninit
  2021-06-02  9:40 [dpdk-dev] [PATCH v5 00/24] net: ngbe PMD Jiawen Wu
                   ` (2 preceding siblings ...)
  2021-06-02  9:40 ` [dpdk-dev] [PATCH v5 03/24] net/ngbe: support probe and remove Jiawen Wu
@ 2021-06-02  9:40 ` Jiawen Wu
  2021-06-14 17:36   ` Andrew Rybchenko
  2021-06-02  9:40 ` [dpdk-dev] [PATCH v5 05/24] net/ngbe: add log type and error type Jiawen Wu
                   ` (21 subsequent siblings)
  25 siblings, 1 reply; 51+ messages in thread
From: Jiawen Wu @ 2021-06-02  9:40 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Add basic init and uninit functions.
Map device IDs and subsystem IDs to a single ID to simplify handling.
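
As a worked example (a sketch only; 0x0460 is the YT8521S SFP
subsystem ID from ngbe_devids.h), the mapping masks sub-fields out of
the subsystem ID:

#include <stdint.h>
#include "base/ngbe_devids.h"

static inline uint16_t
example_oem_field(void)
{
	uint16_t sub_system_id = 0x0460;
	/* NGBE_OEM_MASK extracts 0x0060, i.e. NGBE_YT8521S_SFP */
	return sub_system_id & NGBE_OEM_MASK;
}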

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/ngbe/base/meson.build  |   4 +-
 drivers/net/ngbe/base/ngbe.h       |  11 ++
 drivers/net/ngbe/base/ngbe_hw.c    |  60 ++++++++++
 drivers/net/ngbe/base/ngbe_hw.h    |  13 +++
 drivers/net/ngbe/base/ngbe_osdep.h | 175 +++++++++++++++++++++++++++++
 drivers/net/ngbe/base/ngbe_type.h  |  28 +++++
 drivers/net/ngbe/ngbe_ethdev.c     |  36 +++++-
 drivers/net/ngbe/ngbe_ethdev.h     |   7 ++
 8 files changed, 331 insertions(+), 3 deletions(-)
 create mode 100644 drivers/net/ngbe/base/ngbe.h
 create mode 100644 drivers/net/ngbe/base/ngbe_hw.c
 create mode 100644 drivers/net/ngbe/base/ngbe_hw.h
 create mode 100644 drivers/net/ngbe/base/ngbe_osdep.h
 create mode 100644 drivers/net/ngbe/base/ngbe_type.h

diff --git a/drivers/net/ngbe/base/meson.build b/drivers/net/ngbe/base/meson.build
index c5f6467743..fdbfa99916 100644
--- a/drivers/net/ngbe/base/meson.build
+++ b/drivers/net/ngbe/base/meson.build
@@ -1,7 +1,9 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
 
-sources = []
+sources = [
+	'ngbe_hw.c',
+]
 
 error_cflags = []
 
diff --git a/drivers/net/ngbe/base/ngbe.h b/drivers/net/ngbe/base/ngbe.h
new file mode 100644
index 0000000000..63fad12ad3
--- /dev/null
+++ b/drivers/net/ngbe/base/ngbe.h
@@ -0,0 +1,11 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+ */
+
+#ifndef _NGBE_H_
+#define _NGBE_H_
+
+#include "ngbe_type.h"
+#include "ngbe_hw.h"
+
+#endif /* _NGBE_H_ */
diff --git a/drivers/net/ngbe/base/ngbe_hw.c b/drivers/net/ngbe/base/ngbe_hw.c
new file mode 100644
index 0000000000..0fab47f272
--- /dev/null
+++ b/drivers/net/ngbe/base/ngbe_hw.c
@@ -0,0 +1,60 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+ * Copyright(c) 2010-2017 Intel Corporation
+ */
+
+#include "ngbe_hw.h"
+
+void ngbe_map_device_id(struct ngbe_hw *hw)
+{
+	u16 oem = hw->sub_system_id & NGBE_OEM_MASK;
+	u16 internal = hw->sub_system_id & NGBE_INTERNAL_MASK;
+	hw->is_pf = true;
+
+	/* move subsystem_device_id to device_id */
+	switch (hw->device_id) {
+	case NGBE_DEV_ID_EM_WX1860AL_W_VF:
+	case NGBE_DEV_ID_EM_WX1860A2_VF:
+	case NGBE_DEV_ID_EM_WX1860A2S_VF:
+	case NGBE_DEV_ID_EM_WX1860A4_VF:
+	case NGBE_DEV_ID_EM_WX1860A4S_VF:
+	case NGBE_DEV_ID_EM_WX1860AL2_VF:
+	case NGBE_DEV_ID_EM_WX1860AL2S_VF:
+	case NGBE_DEV_ID_EM_WX1860AL4_VF:
+	case NGBE_DEV_ID_EM_WX1860AL4S_VF:
+	case NGBE_DEV_ID_EM_WX1860NCSI_VF:
+	case NGBE_DEV_ID_EM_WX1860A1_VF:
+	case NGBE_DEV_ID_EM_WX1860A1L_VF:
+		hw->device_id = NGBE_DEV_ID_EM_VF;
+		hw->sub_device_id = NGBE_SUB_DEV_ID_EM_VF;
+		hw->is_pf = false;
+		break;
+	case NGBE_DEV_ID_EM_WX1860AL_W:
+	case NGBE_DEV_ID_EM_WX1860A2:
+	case NGBE_DEV_ID_EM_WX1860A2S:
+	case NGBE_DEV_ID_EM_WX1860A4:
+	case NGBE_DEV_ID_EM_WX1860A4S:
+	case NGBE_DEV_ID_EM_WX1860AL2:
+	case NGBE_DEV_ID_EM_WX1860AL2S:
+	case NGBE_DEV_ID_EM_WX1860AL4:
+	case NGBE_DEV_ID_EM_WX1860AL4S:
+	case NGBE_DEV_ID_EM_WX1860NCSI:
+	case NGBE_DEV_ID_EM_WX1860A1:
+	case NGBE_DEV_ID_EM_WX1860A1L:
+		hw->device_id = NGBE_DEV_ID_EM;
+		if (oem == NGBE_LY_M88E1512_SFP ||
+				internal == NGBE_INTERNAL_SFP)
+			hw->sub_device_id = NGBE_SUB_DEV_ID_EM_MVL_SFP;
+		else if (hw->sub_system_id == NGBE_SUB_DEV_ID_EM_M88E1512_RJ45)
+			hw->sub_device_id = NGBE_SUB_DEV_ID_EM_MVL_RGMII;
+		else if (oem == NGBE_YT8521S_SFP ||
+				oem == NGBE_LY_YT8521S_SFP)
+			hw->sub_device_id = NGBE_SUB_DEV_ID_EM_YT8521S_SFP;
+		else
+			hw->sub_device_id = NGBE_SUB_DEV_ID_EM_RTL_SGMII;
+		break;
+	default:
+		break;
+	}
+}
+
diff --git a/drivers/net/ngbe/base/ngbe_hw.h b/drivers/net/ngbe/base/ngbe_hw.h
new file mode 100644
index 0000000000..b320d126ec
--- /dev/null
+++ b/drivers/net/ngbe/base/ngbe_hw.h
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+ * Copyright(c) 2010-2017 Intel Corporation
+ */
+
+#ifndef _NGBE_HW_H_
+#define _NGBE_HW_H_
+
+#include "ngbe_type.h"
+
+void ngbe_map_device_id(struct ngbe_hw *hw);
+
+#endif /* _NGBE_HW_H_ */
diff --git a/drivers/net/ngbe/base/ngbe_osdep.h b/drivers/net/ngbe/base/ngbe_osdep.h
new file mode 100644
index 0000000000..ef3d3d9180
--- /dev/null
+++ b/drivers/net/ngbe/base/ngbe_osdep.h
@@ -0,0 +1,175 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+ * Copyright(c) 2010-2017 Intel Corporation
+ */
+
+#ifndef _NGBE_OS_H_
+#define _NGBE_OS_H_
+
+#include <string.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <stdarg.h>
+#include <rte_version.h>
+#include <rte_common.h>
+#include <rte_debug.h>
+#include <rte_cycles.h>
+#include <rte_log.h>
+#include <rte_byteorder.h>
+#include <rte_config.h>
+#include <rte_io.h>
+
+#define RTE_LIBRTE_NGBE_TM        DCPV(1, 0)
+#define TMZ_PADDR(mz)  ((mz)->iova)
+#define TMZ_VADDR(mz)  ((mz)->addr)
+#define TDEV_NAME(eth_dev)  ((eth_dev)->device->name)
+
+#define ngbe_unused __rte_unused
+
+#define usec_delay(x) rte_delay_us(x)
+#define msec_delay(x) rte_delay_ms(x)
+#define usleep(x)     rte_delay_us(x)
+#define msleep(x)     rte_delay_ms(x)
+
+#define FALSE               0
+#define TRUE                1
+
+#ifndef false
+#define false               0
+#endif
+#ifndef true
+#define true                1
+#endif
+#define min(a, b)	RTE_MIN(a, b)
+#define max(a, b)	RTE_MAX(a, b)
+
+/* Bunch of defines for shared code bogosity */
+
+static inline void UNREFERENCED(const char *a __rte_unused, ...) {}
+#define UNREFERENCED_PARAMETER(args...) UNREFERENCED("", ##args)
+
+#define STATIC static
+
+typedef uint8_t		u8;
+typedef int8_t		s8;
+typedef uint16_t	u16;
+typedef int16_t		s16;
+typedef uint32_t	u32;
+typedef int32_t		s32;
+typedef uint64_t	u64;
+typedef int64_t		s64;
+
+/* Little Endian defines */
+#ifndef __le16
+#define __le16  u16
+#define __le32  u32
+#define __le64  u64
+#endif
+#ifndef __be16
+#define __be16  u16
+#define __be32  u32
+#define __be64  u64
+#endif
+
+/* Bit shift and mask */
+#define BIT_MASK4                 (0x0000000FU)
+#define BIT_MASK8                 (0x000000FFU)
+#define BIT_MASK16                (0x0000FFFFU)
+#define BIT_MASK32                (0xFFFFFFFFU)
+#define BIT_MASK64                (0xFFFFFFFFFFFFFFFFUL)
+
+#ifndef cpu_to_le32
+#define cpu_to_le16(v)          rte_cpu_to_le_16((u16)(v))
+#define cpu_to_le32(v)          rte_cpu_to_le_32((u32)(v))
+#define cpu_to_le64(v)          rte_cpu_to_le_64((u64)(v))
+#define le_to_cpu16(v)          rte_le_to_cpu_16((u16)(v))
+#define le_to_cpu32(v)          rte_le_to_cpu_32((u32)(v))
+#define le_to_cpu64(v)          rte_le_to_cpu_64((u64)(v))
+
+#define cpu_to_be16(v)          rte_cpu_to_be_16((u16)(v))
+#define cpu_to_be32(v)          rte_cpu_to_be_32((u32)(v))
+#define cpu_to_be64(v)          rte_cpu_to_be_64((u64)(v))
+#define be_to_cpu16(v)          rte_be_to_cpu_16((u16)(v))
+#define be_to_cpu32(v)          rte_be_to_cpu_32((u32)(v))
+#define be_to_cpu64(v)          rte_be_to_cpu_64((u64)(v))
+
+#define le_to_be16(v)           rte_bswap16((u16)(v))
+#define le_to_be32(v)           rte_bswap32((u32)(v))
+#define le_to_be64(v)           rte_bswap64((u64)(v))
+#define be_to_le16(v)           rte_bswap16((u16)(v))
+#define be_to_le32(v)           rte_bswap32((u32)(v))
+#define be_to_le64(v)           rte_bswap64((u64)(v))
+
+#define npu_to_le16(v)          (v)
+#define npu_to_le32(v)          (v)
+#define npu_to_le64(v)          (v)
+#define le_to_npu16(v)          (v)
+#define le_to_npu32(v)          (v)
+#define le_to_npu64(v)          (v)
+
+#define npu_to_be16(v)          le_to_be16((u16)(v))
+#define npu_to_be32(v)          le_to_be32((u32)(v))
+#define npu_to_be64(v)          le_to_be64((u64)(v))
+#define be_to_npu16(v)          be_to_le16((u16)(v))
+#define be_to_npu32(v)          be_to_le32((u32)(v))
+#define be_to_npu64(v)          be_to_le64((u64)(v))
+#endif /* !cpu_to_le32 */
+
+static inline u16 REVERT_BIT_MASK16(u16 mask)
+{
+	mask = ((mask & 0x5555) << 1) | ((mask & 0xAAAA) >> 1);
+	mask = ((mask & 0x3333) << 2) | ((mask & 0xCCCC) >> 2);
+	mask = ((mask & 0x0F0F) << 4) | ((mask & 0xF0F0) >> 4);
+	return ((mask & 0x00FF) << 8) | ((mask & 0xFF00) >> 8);
+}
+
+static inline u32 REVERT_BIT_MASK32(u32 mask)
+{
+	mask = ((mask & 0x55555555) << 1) | ((mask & 0xAAAAAAAA) >> 1);
+	mask = ((mask & 0x33333333) << 2) | ((mask & 0xCCCCCCCC) >> 2);
+	mask = ((mask & 0x0F0F0F0F) << 4) | ((mask & 0xF0F0F0F0) >> 4);
+	mask = ((mask & 0x00FF00FF) << 8) | ((mask & 0xFF00FF00) >> 8);
+	return ((mask & 0x0000FFFF) << 16) | ((mask & 0xFFFF0000) >> 16);
+}
+
+static inline u64 REVERT_BIT_MASK64(u64 mask)
+{
+	mask = ((mask & 0x5555555555555555) << 1) |
+	       ((mask & 0xAAAAAAAAAAAAAAAA) >> 1);
+	mask = ((mask & 0x3333333333333333) << 2) |
+	       ((mask & 0xCCCCCCCCCCCCCCCC) >> 2);
+	mask = ((mask & 0x0F0F0F0F0F0F0F0F) << 4) |
+	       ((mask & 0xF0F0F0F0F0F0F0F0) >> 4);
+	mask = ((mask & 0x00FF00FF00FF00FF) << 8) |
+	       ((mask & 0xFF00FF00FF00FF00) >> 8);
+	mask = ((mask & 0x0000FFFF0000FFFF) << 16) |
+	       ((mask & 0xFFFF0000FFFF0000) >> 16);
+	return ((mask & 0x00000000FFFFFFFF) << 32) |
+	       ((mask & 0xFFFFFFFF00000000) >> 32);
+}
+
+#define IOMEM
+
+#define prefetch(x) rte_prefetch0(x)
+
+#define ARRAY_SIZE(x) ((int32_t)RTE_DIM(x))
+
+#ifndef MAX_UDELAY_MS
+#define MAX_UDELAY_MS 5
+#endif
+
+#define ETH_ADDR_LEN	6
+#define ETH_FCS_LEN	4
+
+/* Check whether address is multicast. This is little-endian specific check.*/
+#define NGBE_IS_MULTICAST(address) \
+		rte_is_multicast_ether_addr(address)
+
+/* Check whether an address is broadcast. */
+#define NGBE_IS_BROADCAST(address) \
+		rte_is_broadcast_ether_addr(address)
+
+#define ETH_P_8021Q      0x8100
+#define ETH_P_8021AD     0x88A8
+
+#endif /* _NGBE_OS_H_ */
diff --git a/drivers/net/ngbe/base/ngbe_type.h b/drivers/net/ngbe/base/ngbe_type.h
new file mode 100644
index 0000000000..b6bde11dcd
--- /dev/null
+++ b/drivers/net/ngbe/base/ngbe_type.h
@@ -0,0 +1,28 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+ * Copyright(c) 2010-2017 Intel Corporation
+ */
+
+#ifndef _NGBE_TYPE_H_
+#define _NGBE_TYPE_H_
+
+#define NGBE_ALIGN		128 /* as intel did */
+
+#include "ngbe_osdep.h"
+#include "ngbe_devids.h"
+
+struct ngbe_hw {
+	void IOMEM *hw_addr;
+	u16 device_id;
+	u16 vendor_id;
+	u16 sub_device_id;
+	u16 sub_system_id;
+	bool allow_unsupported_sfp;
+
+	uint64_t isb_dma;
+	void IOMEM *isb_mem;
+
+	bool is_pf;
+};
+
+#endif /* _NGBE_TYPE_H_ */
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 83af1a6bc7..3431a9a9a7 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -6,9 +6,11 @@
 #include <rte_common.h>
 #include <ethdev_pci.h>
 
-#include <base/ngbe_devids.h>
+#include "base/ngbe.h"
 #include "ngbe_ethdev.h"
 
+static int ngbe_dev_close(struct rte_eth_dev *dev);
+
 /*
  * The set of PCI devices this driver supports
  */
@@ -32,12 +34,31 @@ static int
 eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
 {
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+	struct ngbe_hw *hw = NGBE_DEV_HW(eth_dev);
+	const struct rte_memzone *mz;
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
 
 	rte_eth_copy_pci_info(eth_dev, pci_dev);
 
+	/* Vendor and Device ID need to be set before init of shared code */
+	hw->device_id = pci_dev->id.device_id;
+	hw->vendor_id = pci_dev->id.vendor_id;
+	hw->sub_system_id = pci_dev->id.subsystem_device_id;
+	ngbe_map_device_id(hw);
+	hw->hw_addr = (void *)pci_dev->mem_resource[0].addr;
+	hw->allow_unsupported_sfp = 1;
+
+	/* Reserve memory for interrupt status block */
+	mz = rte_eth_dma_zone_reserve(eth_dev, "ngbe_driver", -1,
+		16, NGBE_ALIGN, SOCKET_ID_ANY);
+	if (mz == NULL)
+		return -ENOMEM;
+
+	hw->isb_dma = TMZ_PADDR(mz);
+	hw->isb_mem = TMZ_VADDR(mz);
+
 	return 0;
 }
 
@@ -47,7 +68,7 @@ eth_ngbe_dev_uninit(struct rte_eth_dev *eth_dev)
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
 
-	RTE_SET_USED(eth_dev);
+	ngbe_dev_close(eth_dev);
 
 	return 0;
 }
@@ -88,6 +109,17 @@ static struct rte_pci_driver rte_ngbe_pmd = {
 	.remove = eth_ngbe_pci_remove,
 };
 
+/*
+ * Reset and stop device.
+ */
+static int
+ngbe_dev_close(struct rte_eth_dev *dev)
+{
+	RTE_SET_USED(dev);
+
+	return 0;
+}
+
 RTE_PMD_REGISTER_PCI(net_ngbe, rte_ngbe_pmd);
 RTE_PMD_REGISTER_PCI_TABLE(net_ngbe, pci_id_ngbe_map);
 RTE_PMD_REGISTER_KMOD_DEP(net_ngbe, "* igb_uio | uio_pci_generic | vfio-pci");
diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
index b79570dc51..f6cee4a4a9 100644
--- a/drivers/net/ngbe/ngbe_ethdev.h
+++ b/drivers/net/ngbe/ngbe_ethdev.h
@@ -10,6 +10,13 @@
  * Structure to store private data for each driver instance (for each port).
  */
 struct ngbe_adapter {
+	struct ngbe_hw             hw;
 };
 
+#define NGBE_DEV_ADAPTER(dev) \
+	((struct ngbe_adapter *)(dev)->data->dev_private)
+
+#define NGBE_DEV_HW(dev) \
+	(&((struct ngbe_adapter *)(dev)->data->dev_private)->hw)
+
 #endif /* _NGBE_ETHDEV_H_ */
-- 
2.27.0





* [dpdk-dev] [PATCH v5 05/24] net/ngbe: add log type and error type
  2021-06-02  9:40 [dpdk-dev] [PATCH v5 00/24] net: ngbe PMD Jiawen Wu
                   ` (3 preceding siblings ...)
  2021-06-02  9:40 ` [dpdk-dev] [PATCH v5 04/24] net/ngbe: add device init and uninit Jiawen Wu
@ 2021-06-02  9:40 ` Jiawen Wu
  2021-06-14 17:54   ` Andrew Rybchenko
  2021-07-01 13:57   ` David Marchand
  2021-06-02  9:40 ` [dpdk-dev] [PATCH v5 06/24] net/ngbe: define registers Jiawen Wu
                   ` (20 subsequent siblings)
  25 siblings, 2 replies; 51+ messages in thread
From: Jiawen Wu @ 2021-06-02  9:40 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Add log types and error codes to trace functions.
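
The new log types are controlled with the usual EAL "--log-level"
option, as documented in ngbe.rst below. A minimal sketch of passing
the option programmatically (illustrative arguments, not part of the
patch):

#include <rte_eal.h>

int
main(int argc, char **argv)
{
	char *eal_args[] = {
		argv[0],
		"--log-level=pmd.net.ngbe.init:debug", /* type:level pair */
	};

	(void)argc;
	/* raise the ngbe init log type to debug before devices probe */
	if (rte_eal_init(2, eal_args) < 0)
		return -1;
	return 0;
}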

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 doc/guides/nics/ngbe.rst            |  20 +++++
 drivers/net/ngbe/base/ngbe_status.h | 125 ++++++++++++++++++++++++++++
 drivers/net/ngbe/base/ngbe_type.h   |   1 +
 drivers/net/ngbe/ngbe_ethdev.c      |  16 ++++
 drivers/net/ngbe/ngbe_logs.h        |  46 ++++++++++
 5 files changed, 208 insertions(+)
 create mode 100644 drivers/net/ngbe/base/ngbe_status.h
 create mode 100644 drivers/net/ngbe/ngbe_logs.h

diff --git a/doc/guides/nics/ngbe.rst b/doc/guides/nics/ngbe.rst
index 4ec2623a05..c274a15aab 100644
--- a/doc/guides/nics/ngbe.rst
+++ b/doc/guides/nics/ngbe.rst
@@ -15,6 +15,26 @@ Prerequisites
 
 - Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to setup the basic DPDK environment.
 
+Pre-Installation Configuration
+------------------------------
+
+Dynamic Logging Parameters
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+One may leverage EAL option "--log-level" to change default levels
+for the log types supported by the driver. The option is used with
+an argument typically consisting of two parts separated by a colon.
+
+NGBE PMD provides the following log types available for control:
+
+- ``pmd.net.ngbe.driver`` (default level is **notice**)
+
+  Affects driver-wide messages unrelated to any particular devices.
+
+- ``pmd.net.ngbe.init`` (default level is **notice**)
+
+  Extra logging of the messages during PMD initialization.
+
 Driver compilation and testing
 ------------------------------
 
diff --git a/drivers/net/ngbe/base/ngbe_status.h b/drivers/net/ngbe/base/ngbe_status.h
new file mode 100644
index 0000000000..b1836c6479
--- /dev/null
+++ b/drivers/net/ngbe/base/ngbe_status.h
@@ -0,0 +1,125 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+ * Copyright(c) 2010-2017 Intel Corporation
+ */
+
+#ifndef _NGBE_STATUS_H_
+#define _NGBE_STATUS_H_
+
+/* Error Codes:
+ * common error
+ * module error(simple)
+ * module error(detailed)
+ *
+ * (-256, 256): reserved for non-ngbe defined error code
+ */
+#define TERR_BASE (0x100)
+enum ngbe_error {
+	TERR_NULL = TERR_BASE,
+	TERR_ANY,
+	TERR_NOSUPP,
+	TERR_NOIMPL,
+	TERR_NOMEM,
+	TERR_NOSPACE,
+	TERR_NOENTRY,
+	TERR_CONFIG,
+	TERR_ARGS,
+	TERR_PARAM,
+	TERR_INVALID,
+	TERR_TIMEOUT,
+	TERR_VERSION,
+	TERR_REGISTER,
+	TERR_FEATURE,
+	TERR_RESET,
+	TERR_AUTONEG,
+	TERR_MBX,
+	TERR_I2C,
+	TERR_FC,
+	TERR_FLASH,
+	TERR_DEVICE,
+	TERR_HOSTIF,
+	TERR_SRAM,
+	TERR_EEPROM,
+	TERR_EEPROM_CHECKSUM,
+	TERR_EEPROM_PROTECT,
+	TERR_EEPROM_VERSION,
+	TERR_MAC,
+	TERR_MAC_ADDR,
+	TERR_SFP,
+	TERR_SFP_INITSEQ,
+	TERR_SFP_PRESENT,
+	TERR_SFP_SUPPORT,
+	TERR_SFP_SETUP,
+	TERR_PHY,
+	TERR_PHY_ADDR,
+	TERR_PHY_INIT,
+	TERR_FDIR_CMD,
+	TERR_FDIR_REINIT,
+	TERR_SWFW_SYNC,
+	TERR_SWFW_COMMAND,
+	TERR_FC_CFG,
+	TERR_FC_NEGO,
+	TERR_LINK_SETUP,
+	TERR_PCIE_PENDING,
+	TERR_PBA_SECTION,
+	TERR_OVERTEMP,
+	TERR_UNDERTEMP,
+	TERR_XPCS_POWERUP,
+};
+
+/* WARNING: just for legacy compatibility */
+#define NGBE_NOT_IMPLEMENTED 0x7FFFFFFF
+#define NGBE_ERR_OPS_DUMMY   0x3FFFFFFF
+
+/* Error Codes */
+#define NGBE_ERR_EEPROM				-(TERR_BASE + 1)
+#define NGBE_ERR_EEPROM_CHECKSUM		-(TERR_BASE + 2)
+#define NGBE_ERR_PHY				-(TERR_BASE + 3)
+#define NGBE_ERR_CONFIG				-(TERR_BASE + 4)
+#define NGBE_ERR_PARAM				-(TERR_BASE + 5)
+#define NGBE_ERR_MAC_TYPE			-(TERR_BASE + 6)
+#define NGBE_ERR_UNKNOWN_PHY			-(TERR_BASE + 7)
+#define NGBE_ERR_LINK_SETUP			-(TERR_BASE + 8)
+#define NGBE_ERR_ADAPTER_STOPPED		-(TERR_BASE + 9)
+#define NGBE_ERR_INVALID_MAC_ADDR		-(TERR_BASE + 10)
+#define NGBE_ERR_DEVICE_NOT_SUPPORTED		-(TERR_BASE + 11)
+#define NGBE_ERR_MASTER_REQUESTS_PENDING	-(TERR_BASE + 12)
+#define NGBE_ERR_INVALID_LINK_SETTINGS		-(TERR_BASE + 13)
+#define NGBE_ERR_AUTONEG_NOT_COMPLETE		-(TERR_BASE + 14)
+#define NGBE_ERR_RESET_FAILED			-(TERR_BASE + 15)
+#define NGBE_ERR_SWFW_SYNC			-(TERR_BASE + 16)
+#define NGBE_ERR_PHY_ADDR_INVALID		-(TERR_BASE + 17)
+#define NGBE_ERR_I2C				-(TERR_BASE + 18)
+#define NGBE_ERR_SFP_NOT_SUPPORTED		-(TERR_BASE + 19)
+#define NGBE_ERR_SFP_NOT_PRESENT		-(TERR_BASE + 20)
+#define NGBE_ERR_SFP_NO_INIT_SEQ_PRESENT	-(TERR_BASE + 21)
+#define NGBE_ERR_NO_SAN_ADDR_PTR		-(TERR_BASE + 22)
+#define NGBE_ERR_FDIR_REINIT_FAILED		-(TERR_BASE + 23)
+#define NGBE_ERR_EEPROM_VERSION			-(TERR_BASE + 24)
+#define NGBE_ERR_NO_SPACE			-(TERR_BASE + 25)
+#define NGBE_ERR_OVERTEMP			-(TERR_BASE + 26)
+#define NGBE_ERR_FC_NOT_NEGOTIATED		-(TERR_BASE + 27)
+#define NGBE_ERR_FC_NOT_SUPPORTED		-(TERR_BASE + 28)
+#define NGBE_ERR_SFP_SETUP_NOT_COMPLETE		-(TERR_BASE + 30)
+#define NGBE_ERR_PBA_SECTION			-(TERR_BASE + 31)
+#define NGBE_ERR_INVALID_ARGUMENT		-(TERR_BASE + 32)
+#define NGBE_ERR_HOST_INTERFACE_COMMAND		-(TERR_BASE + 33)
+#define NGBE_ERR_OUT_OF_MEM			-(TERR_BASE + 34)
+#define NGBE_ERR_FEATURE_NOT_SUPPORTED		-(TERR_BASE + 36)
+#define NGBE_ERR_EEPROM_PROTECTED_REGION	-(TERR_BASE + 37)
+#define NGBE_ERR_FDIR_CMD_INCOMPLETE		-(TERR_BASE + 38)
+#define NGBE_ERR_FW_RESP_INVALID		-(TERR_BASE + 39)
+#define NGBE_ERR_TOKEN_RETRY			-(TERR_BASE + 40)
+#define NGBE_ERR_FLASH_LOADING_FAILED		-(TERR_BASE + 41)
+
+#define NGBE_ERR_NOSUPP                        -(TERR_BASE + 42)
+#define NGBE_ERR_UNDERTEMP                     -(TERR_BASE + 43)
+#define NGBE_ERR_XPCS_POWER_UP_FAILED          -(TERR_BASE + 44)
+#define NGBE_ERR_PHY_INIT_NOT_DONE             -(TERR_BASE + 45)
+#define NGBE_ERR_TIMEOUT                       -(TERR_BASE + 46)
+#define NGBE_ERR_REGISTER                      -(TERR_BASE + 47)
+#define NGBE_ERR_MNG_ACCESS_FAILED             -(TERR_BASE + 49)
+#define NGBE_ERR_PHY_TYPE                      -(TERR_BASE + 50)
+#define NGBE_ERR_PHY_TIMEOUT                   -(TERR_BASE + 51)
+
+#endif /* _NGBE_STATUS_H_ */
diff --git a/drivers/net/ngbe/base/ngbe_type.h b/drivers/net/ngbe/base/ngbe_type.h
index b6bde11dcd..bcc9f74216 100644
--- a/drivers/net/ngbe/base/ngbe_type.h
+++ b/drivers/net/ngbe/base/ngbe_type.h
@@ -8,6 +8,7 @@
 
 #define NGBE_ALIGN		128 /* as intel did */
 
+#include "ngbe_status.h"
 #include "ngbe_osdep.h"
 #include "ngbe_devids.h"
 
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 3431a9a9a7..f24c3e173e 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -6,6 +6,7 @@
 #include <rte_common.h>
 #include <ethdev_pci.h>
 
+#include "ngbe_logs.h"
 #include "base/ngbe.h"
 #include "ngbe_ethdev.h"
 
@@ -37,6 +38,8 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
 	struct ngbe_hw *hw = NGBE_DEV_HW(eth_dev);
 	const struct rte_memzone *mz;
 
+	PMD_INIT_FUNC_TRACE();
+
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
 
@@ -65,6 +68,8 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
 static int
 eth_ngbe_dev_uninit(struct rte_eth_dev *eth_dev)
 {
+	PMD_INIT_FUNC_TRACE();
+
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
 
@@ -115,6 +120,8 @@ static struct rte_pci_driver rte_ngbe_pmd = {
 static int
 ngbe_dev_close(struct rte_eth_dev *dev)
 {
+	PMD_INIT_FUNC_TRACE();
+
 	RTE_SET_USED(dev);
 
 	return 0;
@@ -124,3 +131,12 @@ RTE_PMD_REGISTER_PCI(net_ngbe, rte_ngbe_pmd);
 RTE_PMD_REGISTER_PCI_TABLE(net_ngbe, pci_id_ngbe_map);
 RTE_PMD_REGISTER_KMOD_DEP(net_ngbe, "* igb_uio | uio_pci_generic | vfio-pci");
 
+RTE_LOG_REGISTER(ngbe_logtype_init, pmd.net.ngbe.init, NOTICE);
+RTE_LOG_REGISTER(ngbe_logtype_driver, pmd.net.ngbe.driver, NOTICE);
+
+#ifdef RTE_ETHDEV_DEBUG_RX
+	RTE_LOG_REGISTER(ngbe_logtype_rx, pmd.net.ngbe.rx, DEBUG);
+#endif
+#ifdef RTE_ETHDEV_DEBUG_TX
+	RTE_LOG_REGISTER(ngbe_logtype_tx, pmd.net.ngbe.tx, DEBUG);
+#endif
diff --git a/drivers/net/ngbe/ngbe_logs.h b/drivers/net/ngbe/ngbe_logs.h
new file mode 100644
index 0000000000..c5d1ab0930
--- /dev/null
+++ b/drivers/net/ngbe/ngbe_logs.h
@@ -0,0 +1,46 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+ * Copyright(c) 2010-2017 Intel Corporation
+ */
+
+#ifndef _NGBE_LOGS_H_
+#define _NGBE_LOGS_H_
+
+/*
+ * PMD_USER_LOG: for user
+ */
+extern int ngbe_logtype_init;
+#define PMD_INIT_LOG(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, ngbe_logtype_init, \
+		"%s(): " fmt "\n", __func__, ##args)
+
+extern int ngbe_logtype_driver;
+#define PMD_DRV_LOG(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, ngbe_logtype_driver, \
+		"%s(): " fmt "\n", __func__, ##args)
+
+#ifdef RTE_ETHDEV_DEBUG_RX
+extern int ngbe_logtype_rx;
+#define PMD_RX_LOG(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, ngbe_logtype_rx,	\
+		"%s(): " fmt "\n", __func__, ##args)
+#else
+#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_ETHDEV_DEBUG_TX
+extern int ngbe_logtype_tx;
+#define PMD_TX_LOG(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, ngbe_logtype_tx,	\
+		"%s(): " fmt "\n", __func__, ##args)
+#else
+#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#define TLOG_DEBUG(fmt, args...)  PMD_DRV_LOG(DEBUG, fmt, ##args)
+
+#define DEBUGOUT(fmt, args...)    TLOG_DEBUG(fmt, ##args)
+#define PMD_INIT_FUNC_TRACE()     TLOG_DEBUG(" >>")
+#define DEBUGFUNC(fmt)            TLOG_DEBUG(fmt)
+
+#endif /* _NGBE_LOGS_H_ */
-- 
2.27.0





* [dpdk-dev] [PATCH v5 06/24] net/ngbe: define registers
  2021-06-02  9:40 [dpdk-dev] [PATCH v5 00/24] net: ngbe PMD Jiawen Wu
                   ` (4 preceding siblings ...)
  2021-06-02  9:40 ` [dpdk-dev] [PATCH v5 05/24] net/ngbe: add log type and error type Jiawen Wu
@ 2021-06-02  9:40 ` Jiawen Wu
  2021-06-02  9:40 ` [dpdk-dev] [PATCH v5 07/24] net/ngbe: set MAC type and LAN id Jiawen Wu
                   ` (19 subsequent siblings)
  25 siblings, 0 replies; 51+ messages in thread
From: Jiawen Wu @ 2021-06-02  9:40 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Define all registers that will be used.
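
Register fields are handled through the MS/LS/RS helpers defined
below: MS() builds a field mask, LS() loads a value into a field and
RS() reads it back out. A worked example with the reset-status type
field (a sketch, assuming ngbe_osdep.h is included first for the u32
typedef):

#include "base/ngbe_osdep.h"
#include "base/ngbe_regs.h"

static inline u32
example_rststat_type(void)
{
	u32 reg = NGBE_RSTSTAT_TYPE_SW;	/* LS(3, 16, 0x7) == 0x00030000 */
	return NGBE_RSTSTAT_TYPE(reg);	/* RS(reg, 16, 0x7) == 3 */
}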

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/ngbe/base/ngbe_regs.h | 1490 +++++++++++++++++++++++++++++
 drivers/net/ngbe/base/ngbe_type.h |    2 +
 2 files changed, 1492 insertions(+)
 create mode 100644 drivers/net/ngbe/base/ngbe_regs.h

diff --git a/drivers/net/ngbe/base/ngbe_regs.h b/drivers/net/ngbe/base/ngbe_regs.h
new file mode 100644
index 0000000000..737bd796a1
--- /dev/null
+++ b/drivers/net/ngbe/base/ngbe_regs.h
@@ -0,0 +1,1490 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+ * Copyright(c) 2010-2017 Intel Corporation
+ */
+
+#ifndef _NGBE_REGS_H_
+#define _NGBE_REGS_H_
+
+#define NGBE_PVMBX_QSIZE          (16) /* 16*4B */
+#define NGBE_PVMBX_BSIZE          (NGBE_PVMBX_QSIZE * 4)
+
+#define NGBE_REMOVED(a) (0)
+
+#define NGBE_REG_DUMMY             0xFFFFFF
+
+#define MS8(shift, mask)          (((u8)(mask)) << (shift))
+#define LS8(val, shift, mask)     (((u8)(val) & (u8)(mask)) << (shift))
+#define RS8(reg, shift, mask)     (((u8)(reg) >> (shift)) & (u8)(mask))
+
+#define MS16(shift, mask)         (((u16)(mask)) << (shift))
+#define LS16(val, shift, mask)    (((u16)(val) & (u16)(mask)) << (shift))
+#define RS16(reg, shift, mask)    (((u16)(reg) >> (shift)) & (u16)(mask))
+
+#define MS32(shift, mask)         (((u32)(mask)) << (shift))
+#define LS32(val, shift, mask)    (((u32)(val) & (u32)(mask)) << (shift))
+#define RS32(reg, shift, mask)    (((u32)(reg) >> (shift)) & (u32)(mask))
+
+#define MS64(shift, mask)         (((u64)(mask)) << (shift))
+#define LS64(val, shift, mask)    (((u64)(val) & (u64)(mask)) << (shift))
+#define RS64(reg, shift, mask)    (((u64)(reg) >> (shift)) & (u64)(mask))
+
+#define MS(shift, mask)           MS32(shift, mask)
+#define LS(val, shift, mask)      LS32(val, shift, mask)
+#define RS(reg, shift, mask)      RS32(reg, shift, mask)
+
+#define ROUND_UP(x, y)          (((x) + (y) - 1) / (y) * (y))
+#define ROUND_DOWN(x, y)        ((x) / (y) * (y))
+#define ROUND_OVER(x, maxbits, unitbits) \
+	((x) >= 1 << (maxbits) ? 0 : (x) >> (unitbits))
+
+/* autoc bits definition */
+#define NGBE_AUTOC                       NGBE_REG_DUMMY
+#define   NGBE_AUTOC_FLU                 MS64(0, 0x1)
+#define   NGBE_AUTOC_10G_PMA_PMD_MASK    MS64(7, 0x3) /* parallel */
+#define   NGBE_AUTOC_10G_XAUI            LS64(0, 7, 0x3)
+#define   NGBE_AUTOC_10G_KX4             LS64(1, 7, 0x3)
+#define   NGBE_AUTOC_10G_CX4             LS64(2, 7, 0x3)
+#define   NGBE_AUTOC_10G_KR              LS64(3, 7, 0x3) /* fixme */
+#define   NGBE_AUTOC_1G_PMA_PMD_MASK     MS64(9, 0x7)
+#define   NGBE_AUTOC_1G_BX               LS64(0, 9, 0x7)
+#define   NGBE_AUTOC_1G_KX               LS64(1, 9, 0x7)
+#define   NGBE_AUTOC_1G_SFI              LS64(0, 9, 0x7)
+#define   NGBE_AUTOC_1G_KX_BX            LS64(1, 9, 0x7)
+#define   NGBE_AUTOC_AN_RESTART          MS64(12, 0x1)
+#define   NGBE_AUTOC_LMS_MASK            MS64(13, 0x7)
+#define   NGBE_AUTOC_LMS_10G             LS64(3, 13, 0x7)
+#define   NGBE_AUTOC_LMS_KX4_KX_KR       LS64(4, 13, 0x7)
+#define   NGBE_AUTOC_LMS_SGMII_1G_100M   LS64(5, 13, 0x7)
+#define   NGBE_AUTOC_LMS_KX4_KX_KR_1G_AN LS64(6, 13, 0x7)
+#define   NGBE_AUTOC_LMS_KX4_KX_KR_SGMII LS64(7, 13, 0x7)
+#define   NGBE_AUTOC_LMS_1G_LINK_NO_AN   LS64(0, 13, 0x7)
+#define   NGBE_AUTOC_LMS_10G_LINK_NO_AN  LS64(1, 13, 0x7)
+#define   NGBE_AUTOC_LMS_1G_AN           LS64(2, 13, 0x7)
+#define   NGBE_AUTOC_LMS_KX4_AN          LS64(4, 13, 0x7)
+#define   NGBE_AUTOC_LMS_KX4_AN_1G_AN    LS64(6, 13, 0x7)
+#define   NGBE_AUTOC_LMS_ATTACH_TYPE     LS64(7, 13, 0x7)
+#define   NGBE_AUTOC_LMS_AN              MS64(15, 0x7)
+
+#define   NGBE_AUTOC_KR_SUPP             MS64(16, 0x1)
+#define   NGBE_AUTOC_FECR                MS64(17, 0x1)
+#define   NGBE_AUTOC_FECA                MS64(18, 0x1)
+#define   NGBE_AUTOC_AN_RX_ALIGN         MS64(18, 0x1F) /* fixme */
+#define   NGBE_AUTOC_AN_RX_DRIFT         MS64(23, 0x3)
+#define   NGBE_AUTOC_AN_RX_LOOSE         MS64(24, 0x3)
+#define   NGBE_AUTOC_PD_TMR              MS64(25, 0x3)
+#define   NGBE_AUTOC_RF                  MS64(27, 0x1)
+#define   NGBE_AUTOC_ASM_PAUSE           MS64(29, 0x1)
+#define   NGBE_AUTOC_SYM_PAUSE           MS64(28, 0x1)
+#define   NGBE_AUTOC_PAUSE               MS64(28, 0x3)
+#define   NGBE_AUTOC_KX_SUPP             MS64(30, 0x1)
+#define   NGBE_AUTOC_KX4_SUPP            MS64(31, 0x1)
+
+#define   NGBE_AUTOC_10GS_PMA_PMD_MASK   MS64(48, 0x3)  /* serial */
+#define   NGBE_AUTOC_10GS_KR             LS64(0, 48, 0x3)
+#define   NGBE_AUTOC_10GS_XFI            LS64(1, 48, 0x3)
+#define   NGBE_AUTOC_10GS_SFI            LS64(2, 48, 0x3)
+#define   NGBE_AUTOC_LINK_DIA_MASK       MS64(60, 0x7)
+#define   NGBE_AUTOC_LINK_DIA_D3_MASK    LS64(5, 60, 0x7)
+
+#define   NGBE_AUTOC_SPEED_MASK          MS64(32, 0xFFFF)
+#define   NGBD_AUTOC_SPEED(r)            RS64(r, 32, 0xFFFF)
+#define   NGBE_AUTOC_SPEED(v)            LS64(v, 32, 0xFFFF)
+#define     NGBE_LINK_SPEED_UNKNOWN      0
+#define     NGBE_LINK_SPEED_10M_FULL     0x0002
+#define     NGBE_LINK_SPEED_100M_FULL    0x0008
+#define     NGBE_LINK_SPEED_1GB_FULL     0x0020
+#define     NGBE_LINK_SPEED_2_5GB_FULL   0x0400
+#define     NGBE_LINK_SPEED_5GB_FULL     0x0800
+#define     NGBE_LINK_SPEED_10GB_FULL    0x0080
+#define     NGBE_LINK_SPEED_40GB_FULL    0x0100
+#define   NGBE_AUTOC_AUTONEG             MS64(63, 0x1)
+
+
+
+/* Hardware Datapath:
+ *  RX:     / Queue <- Filter \
+ *      Host     |             TC <=> SEC <=> MAC <=> PHY
+ *  TX:     \ Queue -> Filter /
+ *
+ * Packet Filter:
+ *  RX: RSS < FDIR < Filter < Encrypt
+ *
+ * Macro Argument Naming:
+ *   rp = ring pair         [0,127]
+ *   tc = traffic class     [0,7]
+ *   up = user priority     [0,7]
+ *   pi = pool index        [0,63]
+ *   r  = register
+ *   v  = value
+ *   s  = shift
+ *   m  = mask
+ *   i,j,k  = array index
+ *   H,L    = high/low bits
+ *   HI,LO  = high/low state
+ */
+
+#define NGBE_ETHPHYIF                  NGBE_REG_DUMMY
+#define   NGBE_ETHPHYIF_MDIO_ACT       MS(1, 0x1)
+#define   NGBE_ETHPHYIF_MDIO_MODE      MS(2, 0x1)
+#define   NGBE_ETHPHYIF_MDIO_BASE(r)   RS(r, 3, 0x1F)
+#define   NGBE_ETHPHYIF_MDIO_SHARED    MS(13, 0x1)
+#define   NGBE_ETHPHYIF_SPEED_10M      MS(17, 0x1)
+#define   NGBE_ETHPHYIF_SPEED_100M     MS(18, 0x1)
+#define   NGBE_ETHPHYIF_SPEED_1G       MS(19, 0x1)
+#define   NGBE_ETHPHYIF_SPEED_2_5G     MS(20, 0x1)
+#define   NGBE_ETHPHYIF_SPEED_10G      MS(21, 0x1)
+#define   NGBE_ETHPHYIF_SGMII_ENABLE   MS(25, 0x1)
+#define   NGBE_ETHPHYIF_INT_PHY_MODE   MS(24, 0x1)
+#define   NGBE_ETHPHYIF_IO_XPCS        MS(30, 0x1)
+#define   NGBE_ETHPHYIF_IO_EPHY        MS(31, 0x1)
+
+/******************************************************************************
+ * Chip Registers
+ ******************************************************************************/
+/**
+ * Chip Status
+ **/
+#define NGBE_PWR		0x010000
+#define   NGBE_PWR_LAN(r)	RS(r, 28, 0xC)
+#define     NGBE_PWR_LAN_0	(1)
+#define     NGBE_PWR_LAN_1	(2)
+#define     NGBE_PWR_LAN_2	(3)
+#define     NGBE_PWR_LAN_3	(4)
+#define NGBE_CTL		0x010004
+#define NGBE_LOCKPF		0x010008
+#define NGBE_RST		0x01000C
+#define   NGBE_RST_SW		MS(0, 0x1)
+#define   NGBE_RST_LAN(i)	MS(((i) + 1), 0x1)
+#define   NGBE_RST_FW		MS(5, 0x1)
+#define   NGBE_RST_ETH(i)	MS(((i) + 29), 0x1)
+#define   NGBE_RST_GLB		MS(31, 0x1)
+#define   NGBE_RST_DEFAULT	(NGBE_RST_SW | \
+				NGBE_RST_LAN(0) | \
+				NGBE_RST_LAN(1) | \
+				NGBE_RST_LAN(2) | \
+				NGBE_RST_LAN(3))
+#define NGBE_PROB			0x010010
+#define NGBE_IODRV			0x010024
+#define NGBE_STAT			0x010028
+#define   NGBE_STAT_MNGINIT		MS(0, 0x1)
+#define   NGBE_STAT_MNGVETO		MS(8, 0x1)
+#define   NGBE_STAT_ECCLAN0		MS(16, 0x1)
+#define   NGBE_STAT_ECCLAN1		MS(17, 0x1)
+#define   NGBE_STAT_ECCLAN2		MS(18, 0x1)
+#define   NGBE_STAT_ECCLAN3		MS(19, 0x1)
+#define   NGBE_STAT_ECCMNG		MS(20, 0x1)
+#define   NGBE_STAT_ECCPCORE		MS(21, 0X1)
+#define   NGBE_STAT_ECCPCIW		MS(22, 0x1)
+#define   NGBE_STAT_ECCPCIEPHY		MS(23, 0x1)
+#define   NGBE_STAT_ECCFMGR		MS(24, 0x1)
+#define   NGBE_STAT_GPHY_IN_RST(i)	MS(((i) + 9), 0x1)
+#define NGBE_RSTSTAT			0x010030
+#define   NGBE_RSTSTAT_PROG		MS(20, 0x1)
+#define   NGBE_RSTSTAT_PREP		MS(19, 0x1)
+#define   NGBE_RSTSTAT_TYPE_MASK	MS(16, 0x7)
+#define   NGBE_RSTSTAT_TYPE(r)		RS(r, 16, 0x7)
+#define   NGBE_RSTSTAT_TYPE_PE		LS(0, 16, 0x7)
+#define   NGBE_RSTSTAT_TYPE_PWR		LS(1, 16, 0x7)
+#define   NGBE_RSTSTAT_TYPE_HOT		LS(2, 16, 0x7)
+#define   NGBE_RSTSTAT_TYPE_SW		LS(3, 16, 0x7)
+#define   NGBE_RSTSTAT_TYPE_FW		LS(4, 16, 0x7)
+#define   NGBE_RSTSTAT_TMRINIT_MASK	MS(8, 0xFF)
+#define   NGBE_RSTSTAT_TMRINIT(v)	LS(v, 8, 0xFF)
+#define   NGBE_RSTSTAT_TMRCNT_MASK	MS(0, 0xFF)
+#define   NGBE_RSTSTAT_TMRCNT(v)	LS(v, 0, 0xFF)
+#define NGBE_PWRTMR			0x010034
+
+/**
+ * SPI(Flash)
+ **/
+#define NGBE_SPICMD               0x010104
+#define   NGBE_SPICMD_ADDR(v)     LS(v, 0, 0xFFFFFF)
+#define   NGBE_SPICMD_CLK(v)      LS(v, 25, 0x7)
+#define   NGBE_SPICMD_CMD(v)      LS(v, 28, 0x7)
+#define NGBE_SPIDAT               0x010108
+#define   NGBE_SPIDAT_BYPASS      MS(31, 0x1)
+#define   NGBE_SPIDAT_STATUS(v)   LS(v, 16, 0xFF)
+#define   NGBE_SPIDAT_OPDONE      MS(0, 0x1)
+#define NGBE_SPISTAT              0x01010C
+#define   NGBE_SPISTAT_OPDONE     MS(0, 0x1)
+#define   NGBE_SPISTAT_BPFLASH    MS(31, 0x1)
+#define NGBE_SPIUSRCMD            0x010110
+#define NGBE_SPICFG0              0x010114
+#define NGBE_SPICFG1              0x010118
+
+/* FMGR Registers */
+#define NGBE_ILDRSTAT                  0x010120
+#define   NGBE_ILDRSTAT_PCIRST         MS(0, 0x1)
+#define   NGBE_ILDRSTAT_PWRRST         MS(1, 0x1)
+#define   NGBE_ILDRSTAT_SWRST          MS(11, 0x1)
+#define   NGBE_ILDRSTAT_SWRST_LAN0     MS(13, 0x1)
+#define   NGBE_ILDRSTAT_SWRST_LAN1     MS(14, 0x1)
+#define   NGBE_ILDRSTAT_SWRST_LAN2     MS(15, 0x1)
+#define   NGBE_ILDRSTAT_SWRST_LAN3     MS(16, 0x1)
+
+#define NGBE_SRAM                 0x010124
+#define   NGBE_SRAM_SZ(v)         LS(v, 28, 0x7)
+#define NGBE_SRAMCTLECC           0x010130
+#define NGBE_SRAMINJECC           0x010134
+#define NGBE_SRAMECC              0x010138
+
+/* Sensors for PVT(Process Voltage Temperature) */
+#define NGBE_TSCTRL			0x010300
+#define   NGBE_TSCTRL_EVALMD		MS(31, 0x1)
+#define NGBE_TSEN			0x010304
+#define   NGBE_TSEN_ENA			MS(0, 0x1)
+#define NGBE_TSSTAT			0x010308
+#define   NGBE_TSSTAT_VLD		MS(16, 0x1)
+#define   NGBE_TSSTAT_DATA(r)		RS(r, 0, 0x3FF)
+#define NGBE_TSATHRE			0x01030C
+#define NGBE_TSDTHRE			0x010310
+#define NGBE_TSINTR			0x010314
+#define   NGBE_TSINTR_AEN		MS(0, 0x1)
+#define   NGBE_TSINTR_DEN		MS(1, 0x1)
+#define NGBE_TSALM			0x010318
+#define   NGBE_TSALM_LO			MS(0, 0x1)
+#define   NGBE_TSALM_HI			MS(1, 0x1)
+
+#define NGBE_EFUSE_WDATA0          0x010320
+#define NGBE_EFUSE_WDATA1          0x010324
+#define NGBE_EFUSE_RDATA0          0x010328
+#define NGBE_EFUSE_RDATA1          0x01032C
+#define NGBE_EFUSE_STATUS          0x010330
+
+/******************************************************************************
+ * Port Registers
+ ******************************************************************************/
+/* Internal PHY reg_offset [0,31] */
+#define NGBE_PHY_CONFIG(reg_offset)	(0x014000 + (reg_offset) * 4)
+
+/* Port Control */
+#define NGBE_PORTCTL                   0x014400
+#define   NGBE_PORTCTL_VLANEXT         MS(0, 0x1)
+#define   NGBE_PORTCTL_ETAG            MS(1, 0x1)
+#define   NGBE_PORTCTL_QINQ            MS(2, 0x1)
+#define   NGBE_PORTCTL_DRVLOAD         MS(3, 0x1)
+#define   NGBE_PORTCTL_NUMVT_MASK      MS(12, 0x1)
+#define   NGBE_PORTCTL_NUMVT_8         LS(1, 12, 0x1)
+#define   NGBE_PORTCTL_RSTDONE         MS(14, 0x1)
+#define   NGBE_PORTCTL_TEREDODIA       MS(27, 0x1)
+#define   NGBE_PORTCTL_GENEVEDIA       MS(28, 0x1)
+#define   NGBE_PORTCTL_VXLANGPEDIA     MS(30, 0x1)
+#define   NGBE_PORTCTL_VXLANDIA        MS(31, 0x1)
+
+/* Port Status */
+#define NGBE_PORTSTAT                  0x014404
+#define   NGBE_PORTSTAT_BW_MASK        MS(1, 0x7)
+#define     NGBE_PORTSTAT_BW_1G        MS(1, 0x1)
+#define     NGBE_PORTSTAT_BW_100M      MS(2, 0x1)
+#define     NGBE_PORTSTAT_BW_10M       MS(3, 0x1)
+#define   NGBE_PORTSTAT_ID(r)          RS(r, 8, 0x3)
+
+#define NGBE_EXTAG                     0x014408
+#define   NGBE_EXTAG_ETAG_MASK         MS(0, 0xFFFF)
+#define   NGBE_EXTAG_ETAG(v)           LS(v, 0, 0xFFFF)
+#define   NGBE_EXTAG_VLAN_MASK         MS(16, 0xFFFF)
+#define   NGBE_EXTAG_VLAN(v)           LS(v, 16, 0xFFFF)
+
+#define NGBE_TCPTIME                   0x014420
+
+#define NGBE_LEDCTL                     0x014424
+#define   NGBE_LEDCTL_SEL(s)            MS((s), 0x1)
+#define   NGBE_LEDCTL_OD(s)             MS(((s) + 16), 0x1)
+	/* s=1G(1),100M(2),10M(3) */
+#define   NGBE_LEDCTL_100M      (NGBE_LEDCTL_SEL(2) | NGBE_LEDCTL_OD(2))
+
+#define NGBE_TAGTPID(i)                (0x014430 + (i) * 4) /*0-3*/
+#define   NGBE_TAGTPID_LSB_MASK        MS(0, 0xFFFF)
+#define   NGBE_TAGTPID_LSB(v)          LS(v, 0, 0xFFFF)
+#define   NGBE_TAGTPID_MSB_MASK        MS(16, 0xFFFF)
+#define   NGBE_TAGTPID_MSB(v)          LS(v, 16, 0xFFFF)
+
+#define NGBE_LAN_SPEED			0x014440
+#define   NGBE_LAN_SPEED_MASK		MS(0, 0x3)
+
+/* GPIO Registers */
+#define NGBE_GPIODATA			0x014800
+#define   NGBE_GPIOBIT_0      MS(0, 0x1) /* O:tx fault */
+#define   NGBE_GPIOBIT_1      MS(1, 0x1) /* O:tx disabled */
+#define   NGBE_GPIOBIT_2      MS(2, 0x1) /* I:sfp module absent */
+#define   NGBE_GPIOBIT_3      MS(3, 0x1) /* I:rx signal lost */
+#define   NGBE_GPIOBIT_4      MS(4, 0x1) /* O:rate select, 1G(0) 10G(1) */
+#define   NGBE_GPIOBIT_5      MS(5, 0x1) /* O:rate select, 1G(0) 10G(1) */
+#define   NGBE_GPIOBIT_6      MS(6, 0x1) /* I:ext phy interrupt */
+#define   NGBE_GPIOBIT_7      MS(7, 0x1) /* I:fan speed alarm */
+#define NGBE_GPIODIR			0x014804
+#define   NGBE_GPIODIR_DDR(v)		LS(v, 0, 0x3)
+#define NGBE_GPIOCTL			0x014808
+#define NGBE_GPIOINTEN			0x014830
+#define   NGBE_GPIOINTEN_INT(v)		LS(v, 0, 0x3)
+#define NGBE_GPIOINTMASK		0x014834
+#define NGBE_GPIOINTTYPE		0x014838
+#define   NGBE_GPIOINTTYPE_LEVEL(v)	LS(v, 0, 0x3)
+#define NGBE_GPIOINTPOL			0x01483C
+#define   NGBE_GPIOINTPOL_ACT(v)	LS(v, 0, 0x3)
+#define NGBE_GPIOINTSTAT		0x014840
+#define NGBE_GPIOINTDB			0x014848
+#define NGBE_GPIOEOI			0x01484C
+#define NGBE_GPIODAT			0x014850
+
+/* TPH */
+#define NGBE_TPHCFG               0x014F00
+
+/******************************************************************************
+ * Transmit DMA Registers
+ ******************************************************************************/
+/* TDMA Control */
+#define NGBE_DMATXCTRL			0x018000
+#define   NGBE_DMATXCTRL_ENA		MS(0, 0x1)
+#define   NGBE_DMATXCTRL_TPID_MASK	MS(16, 0xFFFF)
+#define   NGBE_DMATXCTRL_TPID(v)	LS(v, 16, 0xFFFF)
+#define NGBE_POOLTXENA(i)		(0x018004 + (i) * 4) /*0*/
+#define NGBE_PRBTXDMACTL		0x018010
+#define NGBE_ECCTXDMACTL		0x018014
+#define NGBE_ECCTXDMAINJ		0x018018
+#define NGBE_ECCTXDMA			0x01801C
+#define NGBE_PBTXDMATH			0x018020
+#define NGBE_QPTXLLI			0x018040
+#define NGBE_POOLTXLBET			0x018050
+#define NGBE_POOLTXASET			0x018058
+#define NGBE_POOLTXASMAC		0x018060
+#define NGBE_POOLTXASVLAN		0x018070
+#define NGBE_POOLTXDSA			0x0180A0
+#define NGBE_POOLTAG(pl)		(0x018100 + (pl) * 4) /*0-7*/
+#define   NGBE_POOLTAG_VTAG(v)		LS(v, 0, 0xFFFF)
+#define   NGBE_POOLTAG_VTAG_MASK	MS(0, 0xFFFF)
+#define   NGBD_POOLTAG_VTAG_UP(r)	RS(r, 13, 0x7)
+#define   NGBE_POOLTAG_TPIDSEL(v)	LS(v, 24, 0x7)
+#define   NGBE_POOLTAG_ETAG_MASK	MS(27, 0x3)
+#define   NGBE_POOLTAG_ETAG		LS(2, 27, 0x3)
+#define   NGBE_POOLTAG_ACT_MASK		MS(30, 0x3)
+#define   NGBE_POOLTAG_ACT_ALWAYS	LS(1, 30, 0x3)
+#define   NGBE_POOLTAG_ACT_NEVER	LS(2, 30, 0x3)
+
+/* Queue Arbiter(QoS) */
+#define NGBE_QARBTXCTL			0x018200
+#define   NGBE_QARBTXCTL_DA		MS(6, 0x1)
+#define NGBE_QARBTXRATE			0x018404
+#define   NGBE_QARBTXRATE_MIN(v)	LS(v, 0, 0x3FFF)
+#define   NGBE_QARBTXRATE_MAX(v)	LS(v, 16, 0x3FFF)
+
+/* ETAG */
+#define NGBE_POOLETAG(pl)         (0x018700 + (pl) * 4)
+
+/******************************************************************************
+ * Receive DMA Registers
+ ******************************************************************************/
+/* Receive Control */
+#define NGBE_ARBRXCTL			0x012000
+#define   NGBE_ARBRXCTL_DIA		MS(6, 0x1)
+#define NGBE_POOLRXENA(i)		(0x012004 + (i) * 4) /*0*/
+#define NGBE_PRBRDMA			0x012010
+#define NGBE_ECCRXDMACTL		0x012014
+#define NGBE_ECCRXDMAINJ		0x012018
+#define NGBE_ECCRXDMA			0x01201C
+#define NGBE_POOLRXDNA			0x0120A0
+#define NGBE_QPRXDROP			0x012080
+#define NGBE_QPRXSTRPVLAN		0x012090
+
+/******************************************************************************
+ * Packet Buffer
+ ******************************************************************************/
+/* Flow Control */
+#define NGBE_FCXOFFTM			0x019200
+#define NGBE_FCWTRLO			0x019220
+#define   NGBE_FCWTRLO_TH(v)		LS(v, 10, 0x1FF) /*KB*/
+#define   NGBE_FCWTRLO_XON		MS(31, 0x1)
+#define NGBE_FCWTRHI			0x019260
+#define   NGBE_FCWTRHI_TH(v)		LS(v, 10, 0x1FF) /*KB*/
+#define   NGBE_FCWTRHI_XOFF		MS(31, 0x1)
+#define NGBE_RXFCRFSH			0x0192A0
+#define   NGBE_RXFCFSH_TIME(v)		LS(v, 0, 0xFFFF)
+#define NGBE_FCSTAT			0x01CE00
+#define   NGBE_FCSTAT_DLNK		MS(0, 0x1)
+#define   NGBE_FCSTAT_ULNK		MS(8, 0x1)
+
+#define NGBE_RXFCCFG                   0x011090
+#define   NGBE_RXFCCFG_FC              MS(0, 0x1)
+#define NGBE_TXFCCFG                   0x0192A4
+#define   NGBE_TXFCCFG_FC              MS(3, 0x1)
+
+/* Data Buffer */
+#define NGBE_PBRXCTL                   0x019000
+#define   NGBE_PBRXCTL_ST              MS(0, 0x1)
+#define   NGBE_PBRXCTL_ENA             MS(31, 0x1)
+#define NGBE_PBRXSTAT                  0x019004
+#define NGBE_PBRXSIZE                  0x019020
+#define   NGBE_PBRXSIZE_KB(v)          LS(v, 10, 0x3F)
+
+#define NGBE_PBRXOFTMR                 0x019094
+#define NGBE_PBRXDBGCMD                0x019090
+#define NGBE_PBRXDBGDAT                0x0190A0
+
+#define NGBE_PBTXSIZE                  0x01CC00
+
+/* LLI */
+#define NGBE_PBRXLLI              0x019080
+#define   NGBE_PBRXLLI_SZLT(v)    LS(v, 0, 0xFFF)
+#define   NGBE_PBRXLLI_UPLT(v)    LS(v, 16, 0x7)
+#define   NGBE_PBRXLLI_UPEA       MS(19, 0x1)
+
+/* Port Arbiter(QoS) */
+#define NGBE_PARBTXCTL            0x01CD00
+#define   NGBE_PARBTXCTL_DA       MS(6, 0x1)
+
+/******************************************************************************
+ * Packet Filter (L2-7)
+ ******************************************************************************/
+/**
+ * Receive Scaling
+ **/
+#define NGBE_POOLRSS(pl)		(0x019300 + (pl) * 4) /*0-7*/
+#define   NGBE_POOLRSS_L4HDR		MS(1, 0x1)
+#define   NGBE_POOLRSS_L3HDR		MS(2, 0x1)
+#define   NGBE_POOLRSS_L2HDR		MS(3, 0x1)
+#define   NGBE_POOLRSS_L2TUN		MS(4, 0x1)
+#define   NGBE_POOLRSS_TUNHDR		MS(5, 0x1)
+#define NGBE_RSSTBL(i)			(0x019400 + (i) * 4) /*32*/
+#define NGBE_RSSKEY(i)			(0x019480 + (i) * 4) /*10*/
+#define NGBE_RACTL			0x0194F4
+#define   NGBE_RACTL_RSSENA		MS(2, 0x1)
+#define   NGBE_RACTL_RSSMASK		MS(16, 0xFFFF)
+#define   NGBE_RACTL_RSSIPV4TCP		MS(16, 0x1)
+#define   NGBE_RACTL_RSSIPV4		MS(17, 0x1)
+#define   NGBE_RACTL_RSSIPV6		MS(20, 0x1)
+#define   NGBE_RACTL_RSSIPV6TCP		MS(21, 0x1)
+#define   NGBE_RACTL_RSSIPV4UDP		MS(22, 0x1)
+#define   NGBE_RACTL_RSSIPV6UDP		MS(23, 0x1)
+
+/**
+ * Flow Director
+ **/
+#define PERFECT_BUCKET_64KB_HASH_MASK	0x07FF	/* 11 bits */
+#define PERFECT_BUCKET_128KB_HASH_MASK	0x0FFF	/* 12 bits */
+#define PERFECT_BUCKET_256KB_HASH_MASK	0x1FFF	/* 13 bits */
+#define SIG_BUCKET_64KB_HASH_MASK	0x1FFF	/* 13 bits */
+#define SIG_BUCKET_128KB_HASH_MASK	0x3FFF	/* 14 bits */
+#define SIG_BUCKET_256KB_HASH_MASK	0x7FFF	/* 15 bits */
+
+/**
+ * 5-tuple Filter
+ **/
+#define NGBE_5TFPORT(i)			(0x019A00 + (i) * 4) /*0-7*/
+#define   NGBE_5TFPORT_SRC(v)		LS(v, 0, 0xFFFF)
+#define   NGBE_5TFPORT_DST(v)		LS(v, 16, 0xFFFF)
+#define NGBE_5TFCTL0(i)			(0x019C00 + (i) * 4) /*0-7*/
+#define   NGBE_5TFCTL0_PROTO(v)		LS(v, 0, 0x3)
+enum ngbe_5tuple_protocol {
+	NGBE_5TF_PROT_TCP = 0,
+	NGBE_5TF_PROT_UDP,
+	NGBE_5TF_PROT_SCTP,
+	NGBE_5TF_PROT_NONE,
+};
+#define   NGBE_5TFCTL0_PRI(v)		LS(v, 2, 0x7)
+#define   NGBE_5TFCTL0_POOL(v)		LS(v, 8, 0x7)
+#define   NGBE_5TFCTL0_MASK		MS(27, 0xF)
+#define     NGBE_5TFCTL0_MSPORT		MS(27, 0x1)
+#define     NGBE_5TFCTL0_MDPORT		MS(28, 0x1)
+#define     NGBE_5TFCTL0_MPROTO		MS(29, 0x1)
+#define     NGBE_5TFCTL0_MPOOL		MS(30, 0x1)
+#define   NGBE_5TFCTL0_ENA		MS(31, 0x1)
+#define NGBE_5TFCTL1(i)			(0x019E00 + (i) * 4) /*0-7*/
+#define   NGBE_5TFCTL1_CHKSZ		MS(12, 0x1)
+#define   NGBE_5TFCTL1_LLI		MS(20, 0x1)
+#define   NGBE_5TFCTL1_QP(v)		LS(v, 21, 0x7)
+
+/**
+ * Storm Control
+ **/
+#define NGBE_STRMCTL              0x015004
+#define   NGBE_STRMCTL_MCPNSH     MS(0, 0x1)
+#define   NGBE_STRMCTL_MCDROP     MS(1, 0x1)
+#define   NGBE_STRMCTL_BCPNSH     MS(2, 0x1)
+#define   NGBE_STRMCTL_BCDROP     MS(3, 0x1)
+#define   NGBE_STRMCTL_DFTPOOL    MS(4, 0x1)
+#define   NGBE_STRMCTL_ITVL(v)    LS(v, 8, 0x3FF)
+#define NGBE_STRMTH               0x015008
+#define   NGBE_STRMTH_MC(v)       LS(v, 0, 0xFFFF)
+#define   NGBE_STRMTH_BC(v)       LS(v, 16, 0xFFFF)
+
+/******************************************************************************
+ * Ether Flow
+ ******************************************************************************/
+#define NGBE_PSRCTL		       0x015000
+#define   NGBE_PSRCTL_TPE	       MS(4, 0x1)
+#define   NGBE_PSRCTL_ADHF12_MASK      MS(5, 0x3)
+#define   NGBE_PSRCTL_ADHF12(v)        LS(v, 5, 0x3)
+#define   NGBE_PSRCTL_UCHFENA	       MS(7, 0x1)
+#define   NGBE_PSRCTL_MCHFENA	       MS(7, 0x1)
+#define   NGBE_PSRCTL_MCP	       MS(8, 0x1)
+#define   NGBE_PSRCTL_UCP	       MS(9, 0x1)
+#define   NGBE_PSRCTL_BCA	       MS(10, 0x1)
+#define   NGBE_PSRCTL_L4CSUM	       MS(12, 0x1)
+#define   NGBE_PSRCTL_PCSD	       MS(13, 0x1)
+#define   NGBE_PSRCTL_LBENA	       MS(18, 0x1)
+#define NGBE_FRMSZ		       0x015020
+#define   NGBE_FRMSZ_MAX_MASK	       MS(0, 0xFFFF)
+#define   NGBE_FRMSZ_MAX(v)	       LS(v, 0, 0xFFFF)
+#define NGBE_VLANCTL		       0x015088
+#define   NGBE_VLANCTL_TPID_MASK       MS(0, 0xFFFF)
+#define   NGBE_VLANCTL_TPID(v)	       LS(v, 0, 0xFFFF)
+#define   NGBE_VLANCTL_CFI	       MS(28, 0x1)
+#define   NGBE_VLANCTL_CFIENA	       MS(29, 0x1)
+#define   NGBE_VLANCTL_VFE	       MS(30, 0x1)
+#define NGBE_POOLCTL		       0x0151B0
+#define   NGBE_POOLCTL_DEFDSA	       MS(29, 0x1)
+#define   NGBE_POOLCTL_RPLEN	       MS(30, 0x1)
+#define   NGBE_POOLCTL_MODE_MASK       MS(16, 0x3)
+#define     NGBE_PSRPOOL_MODE_MAC      LS(0, 16, 0x3)
+#define     NGBE_PSRPOOL_MODE_ETAG     LS(1, 16, 0x3)
+#define   NGBE_POOLCTL_DEFPL(v)        LS(v, 7, 0x7)
+#define     NGBE_POOLCTL_DEFPL_MASK    MS(7, 0x7)
+
+#define NGBE_ETFLT(i)                  (0x015128 + (i) * 4) /*0-7*/
+#define   NGBE_ETFLT_ETID(v)           LS(v, 0, 0xFFFF)
+#define   NGBE_ETFLT_ETID_MASK         MS(0, 0xFFFF)
+#define   NGBE_ETFLT_POOL(v)           LS(v, 20, 0x7)
+#define   NGBE_ETFLT_POOLENA           MS(26, 0x1)
+#define   NGBE_ETFLT_TXAS              MS(29, 0x1)
+#define   NGBE_ETFLT_1588              MS(30, 0x1)
+#define   NGBE_ETFLT_ENA               MS(31, 0x1)
+#define NGBE_ETCLS(i)                  (0x019100 + (i) * 4) /*0-7*/
+#define   NGBE_ETCLS_QPID(v)           LS(v, 16, 0x7)
+#define   NGBD_ETCLS_QPID(r)           RS(r, 16, 0x7)
+#define   NGBE_ETCLS_LLI               MS(29, 0x1)
+#define   NGBE_ETCLS_QENA              MS(31, 0x1)
+#define NGBE_SYNCLS                    0x019130
+#define   NGBE_SYNCLS_ENA              MS(0, 0x1)
+#define   NGBE_SYNCLS_QPID(v)          LS(v, 1, 0x7)
+#define   NGBD_SYNCLS_QPID(r)          RS(r, 1, 0x7)
+#define   NGBE_SYNCLS_QPID_MASK        MS(1, 0x7)
+#define   NGBE_SYNCLS_HIPRIO           MS(31, 0x1)
+
+/* MAC & VLAN & NVE */
+#define NGBE_PSRVLANIDX           0x016230 /*0-31*/
+#define NGBE_PSRVLAN              0x016220
+#define   NGBE_PSRVLAN_VID(v)     LS(v, 0, 0xFFF)
+#define   NGBE_PSRVLAN_EA         MS(31, 0x1)
+#define NGBE_PSRVLANPLM(i)        (0x016224 + (i) * 4) /*0-1*/
+
+/**
+ * Mirror Rules
+ **/
+#define NGBE_MIRRCTL(i)	               (0x015B00 + (i) * 4)
+#define  NGBE_MIRRCTL_POOL	       MS(0, 0x1)
+#define  NGBE_MIRRCTL_UPLINK	       MS(1, 0x1)
+#define  NGBE_MIRRCTL_DNLINK	       MS(2, 0x1)
+#define  NGBE_MIRRCTL_VLAN	       MS(3, 0x1)
+#define  NGBE_MIRRCTL_DESTP(v)	       LS(v, 8, 0x7)
+#define NGBE_MIRRVLANL(i)	       (0x015B10 + (i) * 8)
+#define NGBE_MIRRPOOLL(i)	       (0x015B30 + (i) * 8)
+
+/**
+ * Time Stamp
+ **/
+#define NGBE_TSRXCTL		0x015188
+#define   NGBE_TSRXCTL_VLD	MS(0, 0x1)
+#define   NGBE_TSRXCTL_TYPE(v)	LS(v, 1, 0x7)
+#define     NGBE_TSRXCTL_TYPE_V2L2	(0)
+#define     NGBE_TSRXCTL_TYPE_V1L4	(1)
+#define     NGBE_TSRXCTL_TYPE_V2L24	(2)
+#define     NGBE_TSRXCTL_TYPE_V2EVENT	(5)
+#define   NGBE_TSRXCTL_ENA	MS(4, 0x1)
+#define NGBE_TSRXSTMPL		0x0151E8
+#define NGBE_TSRXSTMPH		0x0151A4
+#define NGBE_TSTXCTL		0x011F00
+#define   NGBE_TSTXCTL_VLD	MS(0, 0x1)
+#define   NGBE_TSTXCTL_ENA	MS(4, 0x1)
+#define NGBE_TSTXSTMPL		0x011F04
+#define NGBE_TSTXSTMPH		0x011F08
+#define NGBE_TSTIMEL		0x011F0C
+#define NGBE_TSTIMEH		0x011F10
+#define NGBE_TSTIMEINC		0x011F14
+#define   NGBE_TSTIMEINC_IV(v)	LS(v, 0, 0x7FFFFFF)
+
+/**
+ * Wake on Lan
+ **/
+#define NGBE_WOLCTL               0x015B80
+#define NGBE_WOLIPCTL             0x015B84
+#define NGBE_WOLIP4(i)            (0x015BC0 + (i) * 4) /* 0-3 */
+#define NGBE_WOLIP6(i)            (0x015BE0 + (i) * 4) /* 0-3 */
+
+#define NGBE_WOLFLEXCTL           0x015CFC
+#define NGBE_WOLFLEXI             0x015B8C
+#define NGBE_WOLFLEXDAT(i)        (0x015C00 + (i) * 16) /* 0-15 */
+#define NGBE_WOLFLEXMSK(i)        (0x015C08 + (i) * 16) /* 0-15 */
+
+/******************************************************************************
+ * Security Registers
+ ******************************************************************************/
+#define NGBE_SECRXCTL			0x017000
+#define   NGBE_SECRXCTL_ODSA		MS(0, 0x1)
+#define   NGBE_SECRXCTL_XDSA		MS(1, 0x1)
+#define   NGBE_SECRXCTL_CRCSTRIP	MS(2, 0x1)
+#define   NGBE_SECRXCTL_SAVEBAD		MS(6, 0x1)
+#define NGBE_SECRXSTAT			0x017004
+#define   NGBE_SECRXSTAT_RDY		MS(0, 0x1)
+#define   NGBE_SECRXSTAT_ECC		MS(1, 0x1)
+
+#define NGBE_SECTXCTL			0x01D000
+#define   NGBE_SECTXCTL_ODSA		MS(0, 0x1)
+#define   NGBE_SECTXCTL_XDSA		MS(1, 0x1)
+#define   NGBE_SECTXCTL_STFWD		MS(2, 0x1)
+#define   NGBE_SECTXCTL_MSKIV		MS(3, 0x1)
+#define NGBE_SECTXSTAT			0x01D004
+#define   NGBE_SECTXSTAT_RDY		MS(0, 0x1)
+#define   NGBE_SECTXSTAT_ECC		MS(1, 0x1)
+#define NGBE_SECTXBUFAF			0x01D008
+#define NGBE_SECTXBUFAE			0x01D00C
+#define NGBE_SECTXIFG			0x01D020
+#define   NGBE_SECTXIFG_MIN(v)		LS(v, 0, 0xF)
+#define   NGBE_SECTXIFG_MIN_MASK	MS(0, 0xF)
+
+/**
+ * LinkSec
+ **/
+#define NGBE_LSECRXCAP	               0x017200
+#define NGBE_LSECRXCTL                0x017204
+	/* disabled(0),check(1),strict(2),drop(3) */
+#define   NGBE_LSECRXCTL_MODE_MASK    MS(2, 0x3)
+#define   NGBE_LSECRXCTL_MODE_STRICT  LS(2, 2, 0x3)
+#define   NGBE_LSECRXCTL_POSTHDR      MS(6, 0x1)
+#define   NGBE_LSECRXCTL_REPLAY       MS(7, 0x1)
+#define NGBE_LSECRXSCIL               0x017208
+#define NGBE_LSECRXSCIH               0x01720C
+#define NGBE_LSECRXSA(i)              (0x017210 + (i) * 4) /* 0-1 */
+#define NGBE_LSECRXPN(i)              (0x017218 + (i) * 4) /* 0-1 */
+#define NGBE_LSECRXKEY(n, i)	       (0x017220 + 0x10 * (n) + 4 * (i)) /*0-3*/
+#define NGBE_LSECTXCAP                0x01D200
+#define NGBE_LSECTXCTL                0x01D204
+	/* disabled(0), auth(1), auth+encrypt(2) */
+#define   NGBE_LSECTXCTL_MODE_MASK    MS(0, 0x3)
+#define   NGBE_LSECTXCTL_MODE_AUTH    LS(1, 0, 0x3)
+#define   NGBE_LSECTXCTL_MODE_AENC    LS(2, 0, 0x3)
+#define   NGBE_LSECTXCTL_PNTRH_MASK   MS(8, 0xFFFFFF)
+#define   NGBE_LSECTXCTL_PNTRH(v)     LS(v, 8, 0xFFFFFF)
+#define NGBE_LSECTXSCIL               0x01D208
+#define NGBE_LSECTXSCIH               0x01D20C
+#define NGBE_LSECTXSA                 0x01D210
+#define NGBE_LSECTXPN0                0x01D214
+#define NGBE_LSECTXPN1                0x01D218
+#define NGBE_LSECTXKEY0(i)            (0x01D21C + (i) * 4) /* 0-3 */
+#define NGBE_LSECTXKEY1(i)            (0x01D22C + (i) * 4) /* 0-3 */
+
+#define NGBE_LSECRX_UTPKT             0x017240
+#define NGBE_LSECRX_DECOCT            0x017244
+#define NGBE_LSECRX_VLDOCT            0x017248
+#define NGBE_LSECRX_BTPKT             0x01724C
+#define NGBE_LSECRX_NOSCIPKT          0x017250
+#define NGBE_LSECRX_UNSCIPKT          0x017254
+#define NGBE_LSECRX_UNCHKPKT          0x017258
+#define NGBE_LSECRX_DLYPKT            0x01725C
+#define NGBE_LSECRX_LATEPKT           0x017260
+#define NGBE_LSECRX_OKPKT(i)          (0x017264 + (i) * 4) /* 0-1 */
+#define NGBE_LSECRX_BADPKT(i)         (0x01726C + (i) * 4) /* 0-1 */
+#define NGBE_LSECRX_INVPKT(i)         (0x017274 + (i) * 4) /* 0-1 */
+#define NGBE_LSECRX_BADSAPKT(i)       (0x01727C + (i) * 8) /* 0-3 */
+#define NGBE_LSECRX_INVSAPKT(i)       (0x017280 + (i) * 8) /* 0-3 */
+#define NGBE_LSECTX_UTPKT             0x01D23C
+#define NGBE_LSECTX_ENCPKT            0x01D240
+#define NGBE_LSECTX_PROTPKT           0x01D244
+#define NGBE_LSECTX_ENCOCT            0x01D248
+#define NGBE_LSECTX_PROTOCT           0x01D24C
+
+/******************************************************************************
+ * MAC Registers
+ ******************************************************************************/
+#define NGBE_MACRXCFG                  0x011004
+#define   NGBE_MACRXCFG_ENA            MS(0, 0x1)
+#define   NGBE_MACRXCFG_JUMBO          MS(8, 0x1)
+#define   NGBE_MACRXCFG_LB             MS(10, 0x1)
+#define NGBE_MACCNTCTL                 0x011800
+#define   NGBE_MACCNTCTL_RC            MS(2, 0x1)
+
+#define NGBE_MACRXFLT                  0x011008
+#define   NGBE_MACRXFLT_PROMISC        MS(0, 0x1)
+#define   NGBE_MACRXFLT_CTL_MASK       MS(6, 0x3)
+#define   NGBE_MACRXFLT_CTL_DROP       LS(0, 6, 0x3)
+#define   NGBE_MACRXFLT_CTL_NOPS       LS(1, 6, 0x3)
+#define   NGBE_MACRXFLT_CTL_NOFT       LS(2, 6, 0x3)
+#define   NGBE_MACRXFLT_CTL_PASS       LS(3, 6, 0x3)
+#define   NGBE_MACRXFLT_RXALL          MS(31, 0x1)
+
+/******************************************************************************
+ * Statistic Registers
+ ******************************************************************************/
+/* Ring Counter */
+#define NGBE_QPRXPKT(rp)                 (0x001014 + 0x40 * (rp))
+#define NGBE_QPRXOCTL(rp)                (0x001018 + 0x40 * (rp))
+#define NGBE_QPRXOCTH(rp)                (0x00101C + 0x40 * (rp))
+#define NGBE_QPRXMPKT(rp)                (0x001020 + 0x40 * (rp))
+#define NGBE_QPRXBPKT(rp)                (0x001024 + 0x40 * (rp))
+#define NGBE_QPTXPKT(rp)                 (0x003014 + 0x40 * (rp))
+#define NGBE_QPTXOCTL(rp)                (0x003018 + 0x40 * (rp))
+#define NGBE_QPTXOCTH(rp)                (0x00301C + 0x40 * (rp))
+#define NGBE_QPTXMPKT(rp)                (0x003020 + 0x40 * (rp))
+#define NGBE_QPTXBPKT(rp)                (0x003024 + 0x40 * (rp))
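+/* The per-queue counters sit in the same 0x40-byte per-ring register block
+ * as the ring setup registers (NGBE_RXBAL()/NGBE_TXBAL() further below).
+ */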
+
+/* TDMA Counter */
+#define NGBE_DMATXDROP			0x018300
+#define NGBE_DMATXSECDROP		0x018304
+#define NGBE_DMATXPKT			0x018308
+#define NGBE_DMATXOCTL			0x01830C
+#define NGBE_DMATXOCTH			0x018310
+#define NGBE_DMATXMNG			0x018314
+
+/* RDMA Counter */
+#define NGBE_DMARXDROP			0x012500
+#define NGBE_DMARXPKT			0x012504
+#define NGBE_DMARXOCTL			0x012508
+#define NGBE_DMARXOCTH			0x01250C
+#define NGBE_DMARXMNG			0x012510
+
+/* Packet Buffer Counter */
+#define NGBE_PBRXMISS			0x019040
+#define NGBE_PBRXPKT			0x019060
+#define NGBE_PBRXREP			0x019064
+#define NGBE_PBRXDROP			0x019068
+#define NGBE_PBLBSTAT			0x01906C
+#define   NGBE_PBLBSTAT_FREE(r)		RS(r, 0, 0x3FF)
+#define   NGBE_PBLBSTAT_FULL		MS(11, 0x1)
+#define NGBE_PBRXWRPTR			0x019180
+#define   NGBE_PBRXWRPTR_HEAD(r)	RS(r, 0, 0xFFFF)
+#define   NGBE_PBRXWRPTR_TAIL(r)	RS(r, 16, 0xFFFF)
+#define NGBE_PBRXRDPTR			0x0191A0
+#define   NGBE_PBRXRDPTR_HEAD(r)	RS(r, 0, 0xFFFF)
+#define   NGBE_PBRXRDPTR_TAIL(r)	RS(r, 16, 0xFFFF)
+#define NGBE_PBRXDATA			0x0191C0
+#define   NGBE_PBRXDATA_RDPTR(r)	RS(r, 0, 0xFFFF)
+#define   NGBE_PBRXDATA_WRPTR(r)	RS(r, 16, 0xFFFF)
+#define NGBE_PBRX_USDSP			0x0191E0
+#define NGBE_RXPBPFCDMACL		0x019210
+#define NGBE_RXPBPFCDMACH		0x019214
+#define NGBE_PBTXLNKXOFF		0x019218
+#define NGBE_PBTXLNKXON			0x01921C
+
+#define NGBE_PBTXSTAT			0x01C004
+#define   NGBE_PBTXSTAT_EMPT(tc, r)	((1 << (tc) & (r)) >> (tc))
+
+#define NGBE_PBRXLNKXOFF		0x011988
+#define NGBE_PBRXLNKXON			0x011E0C
+
+#define NGBE_PBLPBK			0x01CF08
+
+/* Ether Flow Counter */
+#define NGBE_LANPKTDROP			0x0151C0
+#define NGBE_MNGPKTDROP			0x0151C4
+
+#define NGBE_PSRLANPKTCNT		0x0151B8
+#define NGBE_PSRMNGPKTCNT		0x0151BC
+
+/* MAC Counter */
+#define NGBE_MACRXERRCRCL           0x011928
+#define NGBE_MACRXERRCRCH           0x01192C
+#define NGBE_MACRXERRLENL           0x011978
+#define NGBE_MACRXERRLENH           0x01197C
+#define NGBE_MACRX1TO64L            0x001940
+#define NGBE_MACRX1TO64H            0x001944
+#define NGBE_MACRX65TO127L          0x001948
+#define NGBE_MACRX65TO127H          0x00194C
+#define NGBE_MACRX128TO255L         0x001950
+#define NGBE_MACRX128TO255H         0x001954
+#define NGBE_MACRX256TO511L         0x001958
+#define NGBE_MACRX256TO511H         0x00195C
+#define NGBE_MACRX512TO1023L        0x001960
+#define NGBE_MACRX512TO1023H        0x001964
+#define NGBE_MACRX1024TOMAXL        0x001968
+#define NGBE_MACRX1024TOMAXH        0x00196C
+#define NGBE_MACTX1TO64L            0x001834
+#define NGBE_MACTX1TO64H            0x001838
+#define NGBE_MACTX65TO127L          0x00183C
+#define NGBE_MACTX65TO127H          0x001840
+#define NGBE_MACTX128TO255L         0x001844
+#define NGBE_MACTX128TO255H         0x001848
+#define NGBE_MACTX256TO511L         0x00184C
+#define NGBE_MACTX256TO511H         0x001850
+#define NGBE_MACTX512TO1023L        0x001854
+#define NGBE_MACTX512TO1023H        0x001858
+#define NGBE_MACTX1024TOMAXL        0x00185C
+#define NGBE_MACTX1024TOMAXH        0x001860
+
+#define NGBE_MACRXUNDERSIZE         0x011938
+#define NGBE_MACRXOVERSIZE          0x01193C
+#define NGBE_MACRXJABBER            0x011934
+
+#define NGBE_MACRXPKTL                0x011900
+#define NGBE_MACRXPKTH                0x011904
+#define NGBE_MACTXPKTL                0x01181C
+#define NGBE_MACTXPKTH                0x011820
+#define NGBE_MACRXGBOCTL              0x011908
+#define NGBE_MACRXGBOCTH              0x01190C
+#define NGBE_MACTXGBOCTL              0x011814
+#define NGBE_MACTXGBOCTH              0x011818
+
+#define NGBE_MACRXOCTL                0x011918
+#define NGBE_MACRXOCTH                0x01191C
+#define NGBE_MACRXMPKTL               0x011920
+#define NGBE_MACRXMPKTH               0x011924
+#define NGBE_MACTXOCTL                0x011824
+#define NGBE_MACTXOCTH                0x011828
+#define NGBE_MACTXMPKTL               0x01182C
+#define NGBE_MACTXMPKTH               0x011830
+
+/* Management Counter */
+#define NGBE_MNGOUT		0x01CF00
+#define NGBE_MNGIN		0x01CF04
+#define NGBE_MNGDROP		0x01CF0C
+
+/* MAC SEC Counter */
+#define NGBE_LSECRXUNTAG	0x017240
+#define NGBE_LSECRXDECOCT	0x017244
+#define NGBE_LSECRXVLDOCT	0x017248
+#define NGBE_LSECRXBADTAG	0x01724C
+#define NGBE_LSECRXNOSCI	0x017250
+#define NGBE_LSECRXUKSCI	0x017254
+#define NGBE_LSECRXUNCHK	0x017258
+#define NGBE_LSECRXDLY		0x01725C
+#define NGBE_LSECRXLATE		0x017260
+#define NGBE_LSECRXGOOD		0x017264
+#define NGBE_LSECRXBAD		0x01726C
+#define NGBE_LSECRXUK		0x017274
+#define NGBE_LSECRXBADSA	0x01727C
+#define NGBE_LSECRXUKSA		0x017280
+#define NGBE_LSECTXUNTAG	0x01D23C
+#define NGBE_LSECTXENC		0x01D240
+#define NGBE_LSECTXPTT		0x01D244
+#define NGBE_LSECTXENCOCT	0x01D248
+#define NGBE_LSECTXPTTOCT	0x01D24C
+
+/* Management Counter */
+#define NGBE_MNGOS2BMC                 0x01E094
+#define NGBE_MNGBMC2OS                 0x01E090
+
+/******************************************************************************
+ * PF(Physical Function) Registers
+ ******************************************************************************/
+/* Interrupt */
+#define NGBE_ICRMISC		0x000100
+#define   NGBE_ICRMISC_MASK	MS(8, 0xFFFFFF)
+#define   NGBE_ICRMISC_RST	MS(10, 0x1) /* device reset event */
+#define   NGBE_ICRMISC_TS	MS(11, 0x1) /* time sync */
+#define   NGBE_ICRMISC_STALL	MS(12, 0x1) /* trans or recv path is stalled */
+#define   NGBE_ICRMISC_LNKSEC	MS(13, 0x1) /* Tx LinkSec require key exchange*/
+#define   NGBE_ICRMISC_ERRBUF	MS(14, 0x1) /* Packet Buffer Overrun */
+#define   NGBE_ICRMISC_ERRMAC	MS(17, 0x1) /* err reported by MAC */
+#define   NGBE_ICRMISC_PHY	MS(18, 0x1) /* interrupt reported by eth phy */
+#define   NGBE_ICRMISC_ERRIG	MS(20, 0x1) /* integrity error */
+#define   NGBE_ICRMISC_SPI	MS(21, 0x1) /* SPI interface */
+#define   NGBE_ICRMISC_VFMBX	MS(23, 0x1) /* VF-PF message box */
+#define   NGBE_ICRMISC_GPIO	MS(26, 0x1) /* GPIO interrupt */
+#define   NGBE_ICRMISC_ERRPCI	MS(27, 0x1) /* pcie request error */
+#define   NGBE_ICRMISC_HEAT	MS(28, 0x1) /* overheat detection */
+#define   NGBE_ICRMISC_PROBE	MS(29, 0x1) /* probe match */
+#define   NGBE_ICRMISC_MNGMBX	MS(30, 0x1) /* mng mailbox */
+#define   NGBE_ICRMISC_TIMER	MS(31, 0x1) /* tcp timer */
+#define   NGBE_ICRMISC_DEFAULT	( \
+			NGBE_ICRMISC_RST | \
+			NGBE_ICRMISC_ERRMAC | \
+			NGBE_ICRMISC_PHY | \
+			NGBE_ICRMISC_ERRIG | \
+			NGBE_ICRMISC_GPIO | \
+			NGBE_ICRMISC_VFMBX | \
+			NGBE_ICRMISC_MNGMBX | \
+			NGBE_ICRMISC_STALL | \
+			NGBE_ICRMISC_TIMER)
+#define NGBE_ICSMISC			0x000104
+#define NGBE_IENMISC			0x000108
+#define NGBE_IVARMISC			0x0004FC
+#define   NGBE_IVARMISC_VEC(v)		LS(v, 0, 0x7)
+#define   NGBE_IVARMISC_VLD		MS(7, 0x1)
+#define NGBE_ICR(i)			(0x000120 + (i) * 4) /*0*/
+#define   NGBE_ICR_MASK			MS(0, 0x1FF)
+#define NGBE_ICS(i)			(0x000130 + (i) * 4) /*0*/
+#define   NGBE_ICS_MASK			NGBE_ICR_MASK
+#define NGBE_IMS(i)			(0x000140 + (i) * 4) /*0*/
+#define   NGBE_IMS_MASK			NGBE_ICR_MASK
+#define NGBE_IMC(i)			(0x000150 + (i) * 4) /*0*/
+#define   NGBE_IMC_MASK			NGBE_ICR_MASK
+#define NGBE_IVAR(i)			(0x000500 + (i) * 4) /*0-3*/
+#define   NGBE_IVAR_VEC(v)		LS(v, 0, 0x7)
+#define   NGBE_IVAR_VLD			MS(7, 0x1)
+#define NGBE_TCPTMR			0x000170
+#define NGBE_ITRSEL			0x000180
+
+/* P2V Mailbox */
+#define NGBE_MBMEM(i)		(0x005000 + 0x40 * (i)) /*0-7*/
+#define NGBE_MBCTL(i)		(0x000600 + 4 * (i)) /*0-7*/
+#define   NGBE_MBCTL_STS	MS(0, 0x1) /* Initiate message send to VF */
+#define   NGBE_MBCTL_ACK	MS(1, 0x1) /* Ack message recv'd from VF */
+#define   NGBE_MBCTL_VFU	MS(2, 0x1) /* VF owns the mailbox buffer */
+#define   NGBE_MBCTL_PFU	MS(3, 0x1) /* PF owns the mailbox buffer */
+#define   NGBE_MBCTL_RVFU	MS(4, 0x1) /* Reset VFU - used when VF stuck */
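+/* NGBE_MBVFICR: one register covers 16 VFs (selected via
+ * NGBE_MBVFICR_INDEX); the low half carries the VF message-request bits
+ * and the high half the matching ack bits.
+ */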
+#define NGBE_MBVFICR			0x000480
+#define   NGBE_MBVFICR_INDEX(vf)	((vf) >> 4)
+#define   NGBE_MBVFICR_VFREQ_MASK	(0x0000FFFF) /* bits for VF messages */
+#define   NGBE_MBVFICR_VFREQ_VF1	(0x00000001) /* bit for VF 1 message */
+#define   NGBE_MBVFICR_VFACK_MASK	(0xFFFF0000) /* bits for VF acks */
+#define   NGBE_MBVFICR_VFACK_VF1	(0x00010000) /* bit for VF 1 ack */
+#define NGBE_FLRVFP			0x000490
+#define NGBE_FLRVFE			0x0004A0
+#define NGBE_FLRVFEC			0x0004A8
+
+/******************************************************************************
+ * VF(Virtual Function) Registers
+ ******************************************************************************/
+#define NGBE_VFPBWRAP			0x000000
+#define   NGBE_VFPBWRAP_WRAP		MS(0, 0x7)
+#define   NGBE_VFPBWRAP_EMPT		MS(3, 0x1)
+#define NGBE_VFSTATUS			0x000004
+#define   NGBE_VFSTATUS_UP		MS(0, 0x1)
+#define   NGBE_VFSTATUS_BW_MASK		MS(1, 0x7)
+#define     NGBE_VFSTATUS_BW_1G		LS(0x1, 1, 0x7)
+#define     NGBE_VFSTATUS_BW_100M	LS(0x2, 1, 0x7)
+#define     NGBE_VFSTATUS_BW_10M	LS(0x4, 1, 0x7)
+#define   NGBE_VFSTATUS_BUSY		MS(4, 0x1)
+#define   NGBE_VFSTATUS_LANID		MS(8, 0x3)
+#define NGBE_VFRST			0x000008
+#define   NGBE_VFRST_SET		MS(0, 0x1)
+#define NGBE_VFMSIXECC			0x00000C
+#define NGBE_VFPLCFG			0x000078
+#define   NGBE_VFPLCFG_RSV		MS(0, 0x1)
+#define   NGBE_VFPLCFG_PSR(v)		LS(v, 1, 0x1F)
+#define     NGBE_VFPLCFG_PSRL4HDR	(0x1)
+#define     NGBE_VFPLCFG_PSRL3HDR	(0x2)
+#define     NGBE_VFPLCFG_PSRL2HDR	(0x4)
+#define     NGBE_VFPLCFG_PSRTUNHDR	(0x8)
+#define     NGBE_VFPLCFG_PSRTUNMAC	(0x10)
+#define NGBE_VFICR			0x000100
+#define   NGBE_VFICR_MASK		LS(3, 0, 0x3)
+#define   NGBE_VFICR_MBX		MS(1, 0x1)
+#define   NGBE_VFICR_DONE1		MS(0, 0x1)
+#define NGBE_VFICS			0x000104
+#define   NGBE_VFICS_MASK		NGBE_VFICR_MASK
+#define NGBE_VFIMS			0x000108
+#define   NGBE_VFIMS_MASK		NGBE_VFICR_MASK
+#define NGBE_VFIMC			0x00010C
+#define   NGBE_VFIMC_MASK		NGBE_VFICR_MASK
+#define NGBE_VFGPIE			0x000118
+#define NGBE_VFIVAR(i)			(0x000240 + 4 * (i)) /*0-1*/
+#define NGBE_VFIVARMISC			0x000260
+#define   NGBE_VFIVAR_ALLOC(v)		LS(v, 0, 0x1)
+#define   NGBE_VFIVAR_VLD		MS(7, 0x1)
+
+#define NGBE_VFMBCTL			0x000600
+#define   NGBE_VFMBCTL_REQ         MS(0, 0x1) /* Request for PF Ready bit */
+#define   NGBE_VFMBCTL_ACK         MS(1, 0x1) /* Ack PF message received */
+#define   NGBE_VFMBCTL_VFU         MS(2, 0x1) /* VF owns the mailbox buffer */
+#define   NGBE_VFMBCTL_PFU         MS(3, 0x1) /* PF owns the mailbox buffer */
+#define   NGBE_VFMBCTL_PFSTS       MS(4, 0x1) /* PF wrote a message in the MB */
+#define   NGBE_VFMBCTL_PFACK       MS(5, 0x1) /* PF ack the previous VF msg */
+#define   NGBE_VFMBCTL_RSTI        MS(6, 0x1) /* PF has reset indication */
+#define   NGBE_VFMBCTL_RSTD        MS(7, 0x1) /* PF has indicated reset done */
+#define   NGBE_VFMBCTL_R2C_BITS		(NGBE_VFMBCTL_RSTD | \
+					NGBE_VFMBCTL_PFSTS | \
+					NGBE_VFMBCTL_PFACK)
+#define NGBE_VFMBX			0x000C00 /*0-15*/
+#define NGBE_VFTPHCTL(i)		0x000D00
+
+/******************************************************************************
+ * PF&VF TxRx Interface
+ ******************************************************************************/
+#define RNGLEN(v)     ROUND_OVER(v, 13, 7)
+#define HDRLEN(v)     ROUND_OVER(v, 10, 6)
+#define PKTLEN(v)     ROUND_OVER(v, 14, 10)
+#define INTTHR(v)     ROUND_OVER(v, 4,  0)
+
+#define	NGBE_RING_DESC_ALIGN	128
+#define	NGBE_RING_DESC_MIN	128
+#define	NGBE_RING_DESC_MAX	8192
+#define NGBE_RXD_ALIGN		NGBE_RING_DESC_ALIGN
+#define NGBE_TXD_ALIGN		NGBE_RING_DESC_ALIGN
+
+/* receive ring */
+#define NGBE_RXBAL(rp)                 (0x001000 + 0x40 * (rp))
+#define NGBE_RXBAH(rp)                 (0x001004 + 0x40 * (rp))
+#define NGBE_RXRP(rp)                  (0x00100C + 0x40 * (rp))
+#define NGBE_RXWP(rp)                  (0x001008 + 0x40 * (rp))
+#define NGBE_RXCFG(rp)                 (0x001010 + 0x40 * (rp))
+#define   NGBE_RXCFG_ENA               MS(0, 0x1)
+#define   NGBE_RXCFG_RNGLEN(v)         LS(RNGLEN(v), 1, 0x3F)
+#define   NGBE_RXCFG_PKTLEN(v)         LS(PKTLEN(v), 8, 0xF)
+#define     NGBE_RXCFG_PKTLEN_MASK     MS(8, 0xF)
+#define   NGBE_RXCFG_HDRLEN(v)         LS(HDRLEN(v), 12, 0xF)
+#define     NGBE_RXCFG_HDRLEN_MASK     MS(12, 0xF)
+#define   NGBE_RXCFG_WTHRESH(v)        LS(v, 16, 0x7)
+#define   NGBE_RXCFG_ETAG              MS(22, 0x1)
+#define   NGBE_RXCFG_SPLIT             MS(26, 0x1)
+#define   NGBE_RXCFG_CNTAG             MS(28, 0x1)
+#define   NGBE_RXCFG_DROP              MS(30, 0x1)
+#define   NGBE_RXCFG_VLAN              MS(31, 0x1)
+
+/* transmit ring */
+#define NGBE_TXBAL(rp)                 (0x003000 + 0x40 * (rp)) /*0-7*/
+#define NGBE_TXBAH(rp)                 (0x003004 + 0x40 * (rp))
+#define NGBE_TXWP(rp)                  (0x003008 + 0x40 * (rp))
+#define NGBE_TXRP(rp)                  (0x00300C + 0x40 * (rp))
+#define NGBE_TXCFG(rp)                 (0x003010 + 0x40 * (rp))
+#define   NGBE_TXCFG_ENA               MS(0, 0x1)
+#define   NGBE_TXCFG_BUFLEN_MASK       MS(1, 0x3F)
+#define   NGBE_TXCFG_BUFLEN(v)         LS(RNGLEN(v), 1, 0x3F)
+#define   NGBE_TXCFG_HTHRESH_MASK      MS(8, 0xF)
+#define   NGBE_TXCFG_HTHRESH(v)        LS(v, 8, 0xF)
+#define   NGBE_TXCFG_WTHRESH_MASK      MS(16, 0x7F)
+#define   NGBE_TXCFG_WTHRESH(v)        LS(v, 16, 0x7F)
+#define   NGBE_TXCFG_FLUSH             MS(26, 0x1)
+
+/* interrupt registers */
+#define NGBE_BMEPEND			0x000168
+#define   NGBE_BMEPEND_ST		MS(0, 0x1)
+#define NGBE_ITRI			0x000180
+#define NGBE_ITR(i)			(0x000200 + 4 * (i))
+#define   NGBE_ITR_IVAL_MASK		MS(2, 0x1FFF) /* 1ns/10G, 10ns/REST */
+#define   NGBE_ITR_IVAL(v)		LS(v, 2, 0x1FFF) /*1ns/10G, 10ns/REST*/
+#define     NGBE_ITR_IVAL_1G(us)	NGBE_ITR_IVAL((us) / 2)
+#define     NGBE_ITR_IVAL_10G(us)	NGBE_ITR_IVAL((us) / 20)
+#define   NGBE_ITR_LLIEA		MS(15, 0x1)
+#define   NGBE_ITR_LLICREDIT(v)		LS(v, 16, 0x1F)
+#define   NGBE_ITR_CNT(v)		LS(v, 21, 0x3FF)
+#define   NGBE_ITR_WRDSA		MS(31, 0x1)
+#define NGBE_GPIE			0x000118
+#define   NGBE_GPIE_MSIX		MS(0, 0x1)
+#define   NGBE_GPIE_LLIEA		MS(1, 0x1)
+#define   NGBE_GPIE_LLIVAL(v)		LS(v, 3, 0x1F)
+#define   NGBE_GPIE_LLIVAL_H(v)		LS(v, 16, 0x7FF)
+
+/******************************************************************************
+ * Debug Registers
+ ******************************************************************************/
+/**
+ * Probe
+ **/
+#define NGBE_PRBCTL                    0x010200
+#define NGBE_PRBSTA                    0x010204
+#define NGBE_PRBDAT                    0x010220
+#define NGBE_PRBCNT                    0x010228
+
+#define NGBE_PRBPCI                    0x01F010
+#define NGBE_PRBPSR                    0x015010
+#define NGBE_PRBRDB                    0x019010
+#define NGBE_PRBTDB                    0x01C010
+#define NGBE_PRBRSEC                   0x017010
+#define NGBE_PRBTSEC                   0x01D010
+#define NGBE_PRBMNG                    0x01E010
+#define NGBE_PRBRMAC                   0x011014
+#define NGBE_PRBTMAC                   0x011010
+#define NGBE_PRBREMAC                  0x011E04
+#define NGBE_PRBTEMAC                  0x011E00
+
+/**
+ * ECC
+ **/
+#define NGBE_ECCRXPBCTL                0x019014
+#define NGBE_ECCRXPBINJ                0x019018
+#define NGBE_ECCRXPB                   0x01901C
+#define NGBE_ECCTXPBCTL                0x01C014
+#define NGBE_ECCTXPBINJ                0x01C018
+#define NGBE_ECCTXPB                   0x01C01C
+
+#define NGBE_ECCRXETHCTL               0x015014
+#define NGBE_ECCRXETHINJ               0x015018
+#define NGBE_ECCRXETH                  0x01401C
+
+#define NGBE_ECCRXSECCTL               0x017014
+#define NGBE_ECCRXSECINJ               0x017018
+#define NGBE_ECCRXSEC                  0x01701C
+#define NGBE_ECCTXSECCTL               0x01D014
+#define NGBE_ECCTXSECINJ               0x01D018
+#define NGBE_ECCTXSEC                  0x01D01C
+
+#define NGBE_P2VMBX_SIZE          (16) /* 16*4B */
+#define NGBE_P2MMBX_SIZE          (64) /* 64*4B */
+
+/**************** Global Registers ****************************/
+#define NGBE_POOLETHCTL(pl)            (0x015600 + (pl) * 4)
+#define   NGBE_POOLETHCTL_LBDIA        MS(0, 0x1)
+#define   NGBE_POOLETHCTL_LLBDIA       MS(1, 0x1)
+#define   NGBE_POOLETHCTL_LLB          MS(2, 0x1)
+#define   NGBE_POOLETHCTL_UCP          MS(4, 0x1)
+#define   NGBE_POOLETHCTL_ETP          MS(5, 0x1)
+#define   NGBE_POOLETHCTL_VLA          MS(6, 0x1)
+#define   NGBE_POOLETHCTL_VLP          MS(7, 0x1)
+#define   NGBE_POOLETHCTL_UTA          MS(8, 0x1)
+#define   NGBE_POOLETHCTL_MCHA         MS(9, 0x1)
+#define   NGBE_POOLETHCTL_UCHA         MS(10, 0x1)
+#define   NGBE_POOLETHCTL_BCA          MS(11, 0x1)
+#define   NGBE_POOLETHCTL_MCP          MS(12, 0x1)
+#define NGBE_POOLDROPSWBK(i)           (0x0151C8 + (i) * 4) /*0-1*/
+
+/**************************** Receive DMA registers **************************/
+
+#define NGBE_RPUP2TC                   0x019008
+#define   NGBE_RPUP2TC_UP_SHIFT        3
+#define   NGBE_RPUP2TC_UP_MASK         0x7
+
+/* mac switcher */
+#define NGBE_ETHADDRL                  0x016200
+#define   NGBE_ETHADDRL_AD0(v)         LS(v, 0, 0xFF)
+#define   NGBE_ETHADDRL_AD1(v)         LS(v, 8, 0xFF)
+#define   NGBE_ETHADDRL_AD2(v)         LS(v, 16, 0xFF)
+#define   NGBE_ETHADDRL_AD3(v)         LS(v, 24, 0xFF)
+#define   NGBE_ETHADDRL_ETAG(r)        RS(r, 0, 0x3FFF)
+#define NGBE_ETHADDRH                  0x016204
+#define   NGBE_ETHADDRH_AD4(v)         LS(v, 0, 0xFF)
+#define   NGBE_ETHADDRH_AD5(v)         LS(v, 8, 0xFF)
+#define   NGBE_ETHADDRH_AD_MASK        MS(0, 0xFFFF)
+#define   NGBE_ETHADDRH_ETAG           MS(30, 0x1)
+#define   NGBE_ETHADDRH_VLD            MS(31, 0x1)
+#define NGBE_ETHADDRASS                0x016208
+#define NGBE_ETHADDRIDX                0x016210
+
+/* Outmost Barrier Filters */
+#define NGBE_MCADDRTBL(i)              (0x015200 + (i) * 4) /*0-127*/
+#define NGBE_UCADDRTBL(i)              (0x015400 + (i) * 4) /*0-127*/
+#define NGBE_VLANTBL(i)                (0x016000 + (i) * 4) /*0-127*/
+
+#define NGBE_MNGFLEXSEL                0x01582C
+#define NGBE_MNGFLEXDWL(i)             (0x015A00 + ((i) * 16))
+#define NGBE_MNGFLEXDWH(i)             (0x015A04 + ((i) * 16))
+#define NGBE_MNGFLEXMSK(i)             (0x015A08 + ((i) * 16))
+
+#define NGBE_LANFLEXSEL                0x015B8C
+#define NGBE_LANFLEXDWL(i)             (0x015C00 + ((i) * 16))
+#define NGBE_LANFLEXDWH(i)             (0x015C04 + ((i) * 16))
+#define NGBE_LANFLEXMSK(i)             (0x015C08 + ((i) * 16))
+#define NGBE_LANFLEXCTL                0x015CFC
+
+/* ipsec */
+#define NGBE_IPSRXIDX                  0x017100
+#define   NGBE_IPSRXIDX_ENA            MS(0, 0x1)
+#define   NGBE_IPSRXIDX_TB_MASK        MS(1, 0x3)
+#define   NGBE_IPSRXIDX_TB_IP          LS(1, 1, 0x3)
+#define   NGBE_IPSRXIDX_TB_SPI         LS(2, 1, 0x3)
+#define   NGBE_IPSRXIDX_TB_KEY         LS(3, 1, 0x3)
+#define   NGBE_IPSRXIDX_TBIDX(v)       LS(v, 3, 0xF)
+#define   NGBE_IPSRXIDX_READ           MS(30, 0x1)
+#define   NGBE_IPSRXIDX_WRITE          MS(31, 0x1)
+#define NGBE_IPSRXADDR(i)              (0x017104 + (i) * 4)
+
+#define NGBE_IPSRXSPI                  0x017114
+#define NGBE_IPSRXADDRIDX              0x017118
+#define NGBE_IPSRXKEY(i)               (0x01711C + (i) * 4)
+#define NGBE_IPSRXSALT                 0x01712C
+#define NGBE_IPSRXMODE                 0x017130
+#define   NGBE_IPSRXMODE_IPV6          0x00000010
+#define   NGBE_IPSRXMODE_DEC           0x00000008
+#define   NGBE_IPSRXMODE_ESP           0x00000004
+#define   NGBE_IPSRXMODE_AH            0x00000002
+#define   NGBE_IPSRXMODE_VLD           0x00000001
+#define NGBE_IPSTXIDX                  0x01D100
+#define   NGBE_IPSTXIDX_ENA            MS(0, 0x1)
+#define   NGBE_IPSTXIDX_SAIDX(v)       LS(v, 3, 0x3FF)
+#define   NGBE_IPSTXIDX_READ           MS(30, 0x1)
+#define   NGBE_IPSTXIDX_WRITE          MS(31, 0x1)
+#define NGBE_IPSTXSALT                 0x01D104
+#define NGBE_IPSTXKEY(i)               (0x01D108 + (i) * 4)
+
+#define NGBE_MACTXCFG                  0x011000
+#define   NGBE_MACTXCFG_TE             MS(0, 0x1)
+#define   NGBE_MACTXCFG_SPEED_MASK     MS(29, 0x3)
+#define   NGBE_MACTXCFG_SPEED(v)       LS(v, 29, 0x3)
+#define   NGBE_MACTXCFG_SPEED_10G      LS(0, 29, 0x3)
+#define   NGBE_MACTXCFG_SPEED_1G       LS(3, 29, 0x3)
+
+#define NGBE_ISBADDRL                  0x000160
+#define NGBE_ISBADDRH                  0x000164
+
+#define NGBE_ARBPOOLIDX                0x01820C
+#define NGBE_ARBTXRATE                 0x018404
+#define   NGBE_ARBTXRATE_MIN(v)        LS(v, 0, 0x3FFF)
+#define   NGBE_ARBTXRATE_MAX(v)        LS(v, 16, 0x3FFF)
+
+/* qos */
+#define NGBE_ARBTXCTL                  0x018200
+#define   NGBE_ARBTXCTL_RRM            MS(1, 0x1)
+#define   NGBE_ARBTXCTL_WSP            MS(2, 0x1)
+#define   NGBE_ARBTXCTL_DIA            MS(6, 0x1)
+#define NGBE_ARBTXMMW                  0x018208
+
+/* Management */
+#define NGBE_MNGFWSYNC            0x01E000
+#define   NGBE_MNGFWSYNC_REQ      MS(0, 0x1)
+#define NGBE_MNGSWSYNC            0x01E004
+#define   NGBE_MNGSWSYNC_REQ      MS(0, 0x1)
+#define NGBE_SWSEM                0x01002C
+#define   NGBE_SWSEM_PF           MS(0, 0x1)
+#define NGBE_MNGSEM               0x01E008
+#define   NGBE_MNGSEM_SW(v)       LS(v, 0, 0xFFFF)
+#define   NGBE_MNGSEM_SWPHY       MS(0, 0x1)
+#define   NGBE_MNGSEM_SWMBX       MS(2, 0x1)
+#define   NGBE_MNGSEM_SWFLASH     MS(3, 0x1)
+#define   NGBE_MNGSEM_FW(v)       LS(v, 16, 0xFFFF)
+#define   NGBE_MNGSEM_FWPHY       MS(16, 0x1)
+#define   NGBE_MNGSEM_FWMBX       MS(18, 0x1)
+#define   NGBE_MNGSEM_FWFLASH     MS(19, 0x1)
+#define NGBE_MNGMBXCTL            0x01E044
+#define   NGBE_MNGMBXCTL_SWRDY    MS(0, 0x1)
+#define   NGBE_MNGMBXCTL_SWACK    MS(1, 0x1)
+#define   NGBE_MNGMBXCTL_FWRDY    MS(2, 0x1)
+#define   NGBE_MNGMBXCTL_FWACK    MS(3, 0x1)
+#define NGBE_MNGMBX               0x01E100
+
+/**
+ * MDIO(PHY)
+ **/
+#define NGBE_MDIOSCA                   0x011200
+#define   NGBE_MDIOSCA_REG(v)          LS(v, 0, 0xFFFF)
+#define   NGBE_MDIOSCA_PORT(v)         LS(v, 16, 0x1F)
+#define   NGBE_MDIOSCA_DEV(v)          LS(v, 21, 0x1F)
+#define NGBE_MDIOSCD                   0x011204
+#define   NGBE_MDIOSCD_DAT_R(r)        RS(r, 0, 0xFFFF)
+#define   NGBE_MDIOSCD_DAT(v)          LS(v, 0, 0xFFFF)
+#define   NGBE_MDIOSCD_CMD_PREAD       LS(2, 16, 0x3)
+#define   NGBE_MDIOSCD_CMD_WRITE       LS(1, 16, 0x3)
+#define   NGBE_MDIOSCD_CMD_READ        LS(3, 16, 0x3)
+#define   NGBE_MDIOSCD_SADDR           MS(18, 0x1)
+#define   NGBE_MDIOSCD_CLOCK(v)        LS(v, 19, 0x7)
+#define   NGBE_MDIOSCD_BUSY            MS(22, 0x1)
+
+#define NGBE_MDIOMODE			0x011220
+#define   NGBE_MDIOMODE_MASK		MS(0, 0xF)
+#define   NGBE_MDIOMODE_PRT3CL22	MS(3, 0x1)
+#define   NGBE_MDIOMODE_PRT2CL22	MS(2, 0x1)
+#define   NGBE_MDIOMODE_PRT1CL22	MS(1, 0x1)
+#define   NGBE_MDIOMODE_PRT0CL22	MS(0, 0x1)
+
+#define NVM_OROM_OFFSET		0x17
+#define NVM_OROM_BLK_LOW	0x83
+#define NVM_OROM_BLK_HI		0x84
+#define NVM_OROM_PATCH_MASK	0xFF
+#define NVM_OROM_SHIFT		8
+#define NVM_VER_MASK		0x00FF /* version mask */
+#define NVM_VER_SHIFT		8     /* version bit shift */
+#define NVM_OEM_PROD_VER_PTR	0x1B  /* OEM Product version block pointer */
+#define NVM_OEM_PROD_VER_CAP_OFF 0x1  /* OEM Product version format offset */
+#define NVM_OEM_PROD_VER_OFF_L	0x2   /* OEM Product version offset low */
+#define NVM_OEM_PROD_VER_OFF_H	0x3   /* OEM Product version offset high */
+#define NVM_OEM_PROD_VER_CAP_MASK 0xF /* OEM Product version cap mask */
+#define NVM_OEM_PROD_VER_MOD_LEN 0x3  /* OEM Product version module length */
+#define NVM_ETK_OFF_LOW		0x2D  /* version low order word */
+#define NVM_ETK_OFF_HI		0x2E  /* version high order word */
+#define NVM_ETK_SHIFT		16    /* high version word shift */
+#define NVM_VER_INVALID		0xFFFF
+#define NVM_ETK_VALID		0x8000
+#define NVM_INVALID_PTR		0xFFFF
+#define NVM_VER_SIZE		32    /* version string size */
+
+#define NGBE_REG_RSSTBL   NGBE_RSSTBL(0)
+#define NGBE_REG_RSSKEY   NGBE_RSSKEY(0)
+
+/*
+ * read non-rc counters
+ */
+#define NGBE_UPDCNT32(reg, last, cur)                           \
+do {                                                             \
+	uint32_t latest = rd32(hw, reg);                         \
+	if (hw->offset_loaded || hw->rx_loaded)			 \
+		last = 0;					 \
+	cur += (latest - last) & UINT_MAX;                       \
+	last = latest;                                           \
+} while (0)
+
+#define NGBE_UPDCNT36(regl, last, cur)                          \
+do {                                                             \
+	uint64_t new_lsb = rd32(hw, regl);                       \
+	uint64_t new_msb = rd32(hw, regl + 4);                   \
+	uint64_t latest = ((new_msb << 32) | new_lsb);           \
+	if (hw->offset_loaded || hw->rx_loaded)			 \
+		last = 0;					 \
+	cur += (0x1000000000LL + latest - last) & 0xFFFFFFFFFLL; \
+	last = latest;                                           \
+} while (0)
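+/*
+ * Illustrative use (the field names here are hypothetical): callers keep a
+ * shadow copy of each counter so only the delta since the last poll is
+ * accumulated:
+ *   NGBE_UPDCNT32(NGBE_DMARXDROP, hw->prev_rx_drop, stats->rx_drop);
+ * The 36-bit variant reads the L/H register pair and masks the sum with
+ * 0xFFFFFFFFF so a single hardware wrap between polls is absorbed.
+ */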
+
+/**
+ * register operations
+ **/
+#define NGBE_REG_READ32(addr)               rte_read32(addr)
+#define NGBE_REG_READ32_RELAXED(addr)       rte_read32_relaxed(addr)
+#define NGBE_REG_WRITE32(addr, val)         rte_write32(val, addr)
+#define NGBE_REG_WRITE32_RELAXED(addr, val) rte_write32_relaxed(val, addr)
+
+#define NGBE_DEAD_READ_REG         0xdeadbeefU
+#define NGBE_FAILED_READ_REG       0xffffffffU
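+/* an all-ones value is what a PCIe read returns once the device has been
+ * removed; NGBE_DEAD_READ_REG is presumably used as a poison marker
+ */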
+#define NGBE_REG_ADDR(hw, reg) \
+	((volatile u32 *)((char *)(hw)->hw_addr + (reg)))
+
+static inline u32
+ngbe_get32(volatile u32 *addr)
+{
+	u32 val = NGBE_REG_READ32(addr);
+	return rte_le_to_cpu_32(val);
+}
+
+static inline void
+ngbe_set32(volatile u32 *addr, u32 val)
+{
+	val = rte_cpu_to_le_32(val);
+	NGBE_REG_WRITE32(addr, val);
+}
+
+static inline u32
+ngbe_get32_masked(volatile u32 *addr, u32 mask)
+{
+	u32 val = ngbe_get32(addr);
+	val &= mask;
+	return val;
+}
+
+static inline void
+ngbe_set32_masked(volatile u32 *addr, u32 mask, u32 field)
+{
+	u32 val = ngbe_get32(addr);
+	val = ((val & ~mask) | (field & mask));
+	ngbe_set32(addr, val);
+}
+
+static inline u32
+ngbe_get32_relaxed(volatile u32 *addr)
+{
+	u32 val = NGBE_REG_READ32_RELAXED(addr);
+	return rte_le_to_cpu_32(val);
+}
+
+static inline void
+ngbe_set32_relaxed(volatile u32 *addr, u32 val)
+{
+	val = rte_cpu_to_le_32(val);
+	NGBE_REG_WRITE32_RELAXED(addr, val);
+}
+
+static inline u32
+rd32(struct ngbe_hw *hw, u32 reg)
+{
+	if (reg == NGBE_REG_DUMMY)
+		return 0;
+	return ngbe_get32(NGBE_REG_ADDR(hw, reg));
+}
+
+static inline void
+wr32(struct ngbe_hw *hw, u32 reg, u32 val)
+{
+	if (reg == NGBE_REG_DUMMY)
+		return;
+	ngbe_set32(NGBE_REG_ADDR(hw, reg), val);
+}
+
+static inline u32
+rd32m(struct ngbe_hw *hw, u32 reg, u32 mask)
+{
+	u32 val = rd32(hw, reg);
+	val &= mask;
+	return val;
+}
+
+static inline void
+wr32m(struct ngbe_hw *hw, u32 reg, u32 mask, u32 field)
+{
+	u32 val = rd32(hw, reg);
+	val = ((val & ~mask) | (field & mask));
+	wr32(hw, reg, val);
+}
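+/*
+ * Illustrative read-modify-write, e.g. enabling Rx CRC stripping while
+ * leaving the other NGBE_SECRXCTL bits untouched:
+ *   wr32m(hw, NGBE_SECRXCTL, NGBE_SECRXCTL_CRCSTRIP, NGBE_SECRXCTL_CRCSTRIP);
+ */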
+
+static inline u64
+rd64(struct ngbe_hw *hw, u32 reg)
+{
+	u64 lsb = rd32(hw, reg);
+	u64 msb = rd32(hw, reg + 4);
+	return (lsb | msb << 32);
+}
+
+static inline void
+wr64(struct ngbe_hw *hw, u32 reg, u64 val)
+{
+	wr32(hw, reg, (u32)val);
+	wr32(hw, reg + 4, (u32)(val >> 32));
+}
+
+/* poll register */
+static inline u32
+po32m(struct ngbe_hw *hw, u32 reg, u32 mask, u32 expect, u32 *actual,
+	u32 loop, u32 slice)
+{
+	bool usec = true;
+	u32 value = 0, all = 0;
+
+	if (slice > 1000 * MAX_UDELAY_MS) {
+		usec = false;
+		slice = (slice + 500) / 1000;
+	}
+
+	do {
+		all |= rd32(hw, reg);
+		value |= mask & all;
+		if (value == expect)
+			break;
+
+		usec ? usec_delay(slice) : msec_delay(slice);
+	} while (--loop > 0);
+
+	if (actual)
+		*actual = all;
+
+	return loop;
+}
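+/*
+ * po32m() returns the remaining loop budget: a non-zero value means the
+ * masked value reached 'expect' before the budget ran out, zero means the
+ * poll timed out.
+ */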
+
+/* flush all write operations */
+#define ngbe_flush(hw) rd32(hw, 0x00100C)
+
+#define rd32a(hw, reg, idx) ( \
+	rd32((hw), (reg) + ((idx) << 2)))
+#define wr32a(hw, reg, idx, val) \
+	wr32((hw), (reg) + ((idx) << 2), (val))
+
+#define rd32w(hw, reg, mask, slice) do { \
+	rd32((hw), reg); \
+	po32m((hw), reg, mask, mask, NULL, 5, slice); \
+} while (0)
+
+#define wr32w(hw, reg, val, mask, slice) do { \
+	wr32((hw), reg, val); \
+	po32m((hw), reg, mask, mask, NULL, 5, slice); \
+} while (0)
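+/* rd32w()/wr32w() issue the access, then poll up to 5 slices for all the
+ * bits in 'mask' to read back as set; the poll result itself is discarded.
+ */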
+
+#define NGBE_XPCS_IDAADDR    0x013000
+#define NGBE_XPCS_IDADATA    0x013004
+#define NGBE_EPHY_IDAADDR    0x013008
+#define NGBE_EPHY_IDADATA    0x01300C
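+/* Indirect access: the XPCS and internal-PHY register files are reached by
+ * writing the target address to the *_IDAADDR register, then transferring
+ * data through the paired *_IDADATA register, as below.
+ */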
+static inline u32
+rd32_epcs(struct ngbe_hw *hw, u32 addr)
+{
+	u32 data;
+	wr32(hw, NGBE_XPCS_IDAADDR, addr);
+	data = rd32(hw, NGBE_XPCS_IDADATA);
+	return data;
+}
+
+static inline void
+wr32_epcs(struct ngbe_hw *hw, u32 addr, u32 data)
+{
+	wr32(hw, NGBE_XPCS_IDAADDR, addr);
+	wr32(hw, NGBE_XPCS_IDADATA, data);
+}
+
+static inline u32
+rd32_ephy(struct ngbe_hw *hw, u32 addr)
+{
+	u32 data;
+	wr32(hw, NGBE_EPHY_IDAADDR, addr);
+	data = rd32(hw, NGBE_EPHY_IDADATA);
+	return data;
+}
+
+static inline void
+wr32_ephy(struct ngbe_hw *hw, u32 addr, u32 data)
+{
+	wr32(hw, NGBE_EPHY_IDAADDR, addr);
+	wr32(hw, NGBE_EPHY_IDADATA, data);
+}
+
+#endif /* _NGBE_REGS_H_ */
diff --git a/drivers/net/ngbe/base/ngbe_type.h b/drivers/net/ngbe/base/ngbe_type.h
index bcc9f74216..4243e04eef 100644
--- a/drivers/net/ngbe/base/ngbe_type.h
+++ b/drivers/net/ngbe/base/ngbe_type.h
@@ -26,4 +26,6 @@ struct ngbe_hw {
 	bool is_pf;
 };
 
+#include "ngbe_regs.h"
+
 #endif /* _NGBE_TYPE_H_ */
-- 
2.27.0




^ permalink raw reply	[flat|nested] 51+ messages in thread

* [dpdk-dev] [PATCH v5 07/24] net/ngbe: set MAC type and LAN id
  2021-06-02  9:40 [dpdk-dev] [PATCH v5 00/24] net: ngbe PMD Jiawen Wu
                   ` (5 preceding siblings ...)
  2021-06-02  9:40 ` [dpdk-dev] [PATCH v5 06/24] net/ngbe: define registers Jiawen Wu
@ 2021-06-02  9:40 ` Jiawen Wu
  2021-06-02  9:40 ` [dpdk-dev] [PATCH v5 08/24] net/ngbe: init and validate EEPROM Jiawen Wu
                   ` (18 subsequent siblings)
  25 siblings, 0 replies; 51+ messages in thread
From: Jiawen Wu @ 2021-06-02  9:40 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Initialize the shared code: set the MAC and media types from the device
identifiers, install the bus function pointers, and read out the LAN id.
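
A rough call-flow sketch (the entry point is eth_ngbe_dev_init() in the
diff below; all other names are taken from the patch itself):

  ngbe_init_shared_code(hw)
    -> ngbe_set_mac_type(hw)      /* MAC/media type from vendor/subsys IDs */
    -> ngbe_init_ops_dummy(hw)    /* install no-op defaults */
    -> ngbe_init_ops_pf(hw)       /* PF bus ops (ngbe_mac_em only) */
    -> hw->bus.set_lan_id(hw)     /* read the LAN id from NGBE_PORTSTAT */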

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/ngbe/base/ngbe_dummy.h |  37 +++++++++
 drivers/net/ngbe/base/ngbe_hw.c    | 122 +++++++++++++++++++++++++++++
 drivers/net/ngbe/base/ngbe_hw.h    |   5 ++
 drivers/net/ngbe/base/ngbe_osdep.h |   2 +
 drivers/net/ngbe/base/ngbe_type.h  |  53 +++++++++++++
 drivers/net/ngbe/ngbe_ethdev.c     |   8 ++
 6 files changed, 227 insertions(+)
 create mode 100644 drivers/net/ngbe/base/ngbe_dummy.h

diff --git a/drivers/net/ngbe/base/ngbe_dummy.h b/drivers/net/ngbe/base/ngbe_dummy.h
new file mode 100644
index 0000000000..75b4e50bca
--- /dev/null
+++ b/drivers/net/ngbe/base/ngbe_dummy.h
@@ -0,0 +1,37 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+ */
+
+#ifndef _NGBE_TYPE_DUMMY_H_
+#define _NGBE_TYPE_DUMMY_H_
+
+#ifdef TUP
+#elif defined(__GNUC__)
+#define TUP(x) x##_unused ngbe_unused
+#elif defined(__LCLINT__)
+#define TUP(x) x /*@unused@*/
+#else
+#define TUP(x) x
+#endif /*TUP*/
+#define TUP0 TUP(p0)
+#define TUP1 TUP(p1)
+#define TUP2 TUP(p2)
+#define TUP3 TUP(p3)
+#define TUP4 TUP(p4)
+#define TUP5 TUP(p5)
+#define TUP6 TUP(p6)
+#define TUP7 TUP(p7)
+#define TUP8 TUP(p8)
+#define TUP9 TUP(p9)
+
+/* struct ngbe_bus_operations */
+static inline void ngbe_bus_set_lan_id_dummy(struct ngbe_hw *TUP0)
+{
+}
+static inline void ngbe_init_ops_dummy(struct ngbe_hw *hw)
+{
+	hw->bus.set_lan_id = ngbe_bus_set_lan_id_dummy;
+}
+
+#endif /* _NGBE_TYPE_DUMMY_H_ */
+
diff --git a/drivers/net/ngbe/base/ngbe_hw.c b/drivers/net/ngbe/base/ngbe_hw.c
index 0fab47f272..014bb0faee 100644
--- a/drivers/net/ngbe/base/ngbe_hw.c
+++ b/drivers/net/ngbe/base/ngbe_hw.c
@@ -3,8 +3,74 @@
  * Copyright(c) 2010-2017 Intel Corporation
  */
 
+#include "ngbe_type.h"
 #include "ngbe_hw.h"
 
+/**
+ *  ngbe_set_lan_id_multi_port - Set LAN id for PCIe multiple port devices
+ *  @hw: pointer to the HW structure
+ *
+ *  Determines the LAN function id by reading the memory-mapped PORTSTAT
+ *  register, and records it as both the LAN id and the function number.
+ **/
+void ngbe_set_lan_id_multi_port(struct ngbe_hw *hw)
+{
+	struct ngbe_bus_info *bus = &hw->bus;
+	u32 reg = 0;
+
+	DEBUGFUNC("ngbe_set_lan_id_multi_port");
+
+	reg = rd32(hw, NGBE_PORTSTAT);
+	bus->lan_id = NGBE_PORTSTAT_ID(reg);
+	bus->func = bus->lan_id;
+}
+
+/**
+ *  ngbe_set_mac_type - Sets MAC type
+ *  @hw: pointer to the HW structure
+ *
+ *  This function sets the MAC type of the adapter based on the
+ *  vendor ID and subsystem device ID stored in the hw structure.
+ **/
+s32 ngbe_set_mac_type(struct ngbe_hw *hw)
+{
+	s32 err = 0;
+
+	DEBUGFUNC("ngbe_set_mac_type");
+
+	if (hw->vendor_id != PCI_VENDOR_ID_WANGXUN) {
+		DEBUGOUT("Unsupported vendor id: %x", hw->vendor_id);
+		return NGBE_ERR_DEVICE_NOT_SUPPORTED;
+	}
+
+	switch (hw->sub_device_id) {
+	case NGBE_SUB_DEV_ID_EM_RTL_SGMII:
+	case NGBE_SUB_DEV_ID_EM_MVL_RGMII:
+		hw->phy.media_type = ngbe_media_type_copper;
+		hw->mac.type = ngbe_mac_em;
+		break;
+	case NGBE_SUB_DEV_ID_EM_MVL_SFP:
+	case NGBE_SUB_DEV_ID_EM_YT8521S_SFP:
+		hw->phy.media_type = ngbe_media_type_fiber;
+		hw->mac.type = ngbe_mac_em;
+		break;
+	case NGBE_SUB_DEV_ID_EM_VF:
+		hw->phy.media_type = ngbe_media_type_virtual;
+		hw->mac.type = ngbe_mac_em_vf;
+		break;
+	default:
+		err = NGBE_ERR_DEVICE_NOT_SUPPORTED;
+		hw->phy.media_type = ngbe_media_type_unknown;
+		hw->mac.type = ngbe_mac_unknown;
+		DEBUGOUT("Unsupported device id: %x", hw->device_id);
+		break;
+	}
+
+	DEBUGOUT("found mac: %d media: %d, returns: %d\n",
+		  hw->mac.type, hw->phy.media_type, err);
+	return err;
+}
+
 void ngbe_map_device_id(struct ngbe_hw *hw)
 {
 	u16 oem = hw->sub_system_id & NGBE_OEM_MASK;
@@ -58,3 +124,59 @@ void ngbe_map_device_id(struct ngbe_hw *hw)
 	}
 }
 
+/**
+ *  ngbe_init_ops_pf - Inits func ptrs and MAC type
+ *  @hw: pointer to hardware structure
+ *
+ *  Initialize the function pointers and assign the MAC type.
+ *  Does not touch the hardware.
+ **/
+s32 ngbe_init_ops_pf(struct ngbe_hw *hw)
+{
+	struct ngbe_bus_info *bus = &hw->bus;
+
+	DEBUGFUNC("ngbe_init_ops_pf");
+
+	/* BUS */
+	bus->set_lan_id = ngbe_set_lan_id_multi_port;
+
+	return 0;
+}
+
+/**
+ *  ngbe_init_shared_code - Initialize the shared code
+ *  @hw: pointer to hardware structure
+ *
+ *  This assigns the function pointers and sets the MAC type and PHY code.
+ *  Does not touch the hardware. This function must be called prior to any
+ *  other function in the shared code. The ngbe_hw structure should be
+ *  memset to 0 prior to calling this function.  The following fields in
+ *  hw structure should be filled in prior to calling this function:
+ *  hw_addr, back, device_id, vendor_id, subsystem_device_id
+ **/
+s32 ngbe_init_shared_code(struct ngbe_hw *hw)
+{
+	s32 status = 0;
+
+	DEBUGFUNC("ngbe_init_shared_code");
+
+	/*
+	 * Set the mac type
+	 */
+	ngbe_set_mac_type(hw);
+
+	ngbe_init_ops_dummy(hw);
+	switch (hw->mac.type) {
+	case ngbe_mac_em:
+		ngbe_init_ops_pf(hw);
+		break;
+	default:
+		status = NGBE_ERR_DEVICE_NOT_SUPPORTED;
+		break;
+	}
+
+	hw->bus.set_lan_id(hw);
+
+	return status;
+}
+
diff --git a/drivers/net/ngbe/base/ngbe_hw.h b/drivers/net/ngbe/base/ngbe_hw.h
index b320d126ec..7d5de49248 100644
--- a/drivers/net/ngbe/base/ngbe_hw.h
+++ b/drivers/net/ngbe/base/ngbe_hw.h
@@ -8,6 +8,11 @@
 
 #include "ngbe_type.h"
 
+void ngbe_set_lan_id_multi_port(struct ngbe_hw *hw);
+
+s32 ngbe_init_shared_code(struct ngbe_hw *hw);
+s32 ngbe_set_mac_type(struct ngbe_hw *hw);
+s32 ngbe_init_ops_pf(struct ngbe_hw *hw);
 void ngbe_map_device_id(struct ngbe_hw *hw);
 
 #endif /* _NGBE_HW_H_ */
diff --git a/drivers/net/ngbe/base/ngbe_osdep.h b/drivers/net/ngbe/base/ngbe_osdep.h
index ef3d3d9180..94cc10315e 100644
--- a/drivers/net/ngbe/base/ngbe_osdep.h
+++ b/drivers/net/ngbe/base/ngbe_osdep.h
@@ -19,6 +19,8 @@
 #include <rte_config.h>
 #include <rte_io.h>
 
+#include "../ngbe_logs.h"
+
 #define RTE_LIBRTE_NGBE_TM        DCPV(1, 0)
 #define TMZ_PADDR(mz)  ((mz)->iova)
 #define TMZ_VADDR(mz)  ((mz)->addr)
diff --git a/drivers/net/ngbe/base/ngbe_type.h b/drivers/net/ngbe/base/ngbe_type.h
index 4243e04eef..15f1778d6a 100644
--- a/drivers/net/ngbe/base/ngbe_type.h
+++ b/drivers/net/ngbe/base/ngbe_type.h
@@ -12,8 +12,60 @@
 #include "ngbe_osdep.h"
 #include "ngbe_devids.h"
 
+enum ngbe_mac_type {
+	ngbe_mac_unknown = 0,
+	ngbe_mac_em,
+	ngbe_mac_em_vf,
+	ngbe_num_macs
+};
+
+enum ngbe_phy_type {
+	ngbe_phy_unknown = 0,
+	ngbe_phy_none,
+	ngbe_phy_rtl,
+	ngbe_phy_mvl,
+	ngbe_phy_mvl_sfi,
+	ngbe_phy_yt8521s,
+	ngbe_phy_yt8521s_sfi,
+	ngbe_phy_zte,
+	ngbe_phy_cu_mtd,
+};
+
+enum ngbe_media_type {
+	ngbe_media_type_unknown = 0,
+	ngbe_media_type_fiber,
+	ngbe_media_type_fiber_qsfp,
+	ngbe_media_type_copper,
+	ngbe_media_type_backplane,
+	ngbe_media_type_cx4,
+	ngbe_media_type_virtual
+};
+
+struct ngbe_hw;
+
+/* Bus parameters */
+struct ngbe_bus_info {
+	void (*set_lan_id)(struct ngbe_hw *hw);
+
+	u16 func;
+	u8 lan_id;
+};
+
+struct ngbe_mac_info {
+	enum ngbe_mac_type type;
+};
+
+struct ngbe_phy_info {
+	enum ngbe_media_type media_type;
+	enum ngbe_phy_type type;
+};
+
 struct ngbe_hw {
 	void IOMEM *hw_addr;
+	void *back;
+	struct ngbe_mac_info mac;
+	struct ngbe_phy_info phy;
+	struct ngbe_bus_info bus;
 	u16 device_id;
 	u16 vendor_id;
 	u16 sub_device_id;
@@ -27,5 +79,6 @@ struct ngbe_hw {
 };
 
 #include "ngbe_regs.h"
+#include "ngbe_dummy.h"
 
 #endif /* _NGBE_TYPE_H_ */
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index f24c3e173e..12b80ec695 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -37,6 +37,7 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
 	struct ngbe_hw *hw = NGBE_DEV_HW(eth_dev);
 	const struct rte_memzone *mz;
+	int err;
 
 	PMD_INIT_FUNC_TRACE();
 
@@ -62,6 +63,13 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
 	hw->isb_dma = TMZ_PADDR(mz);
 	hw->isb_mem = TMZ_VADDR(mz);
 
+	/* Initialize the shared code (base driver) */
+	err = ngbe_init_shared_code(hw);
+	if (err != 0) {
+		PMD_INIT_LOG(ERR, "Shared code init failed: %d", err);
+		return -EIO;
+	}
+
 	return 0;
 }
 
-- 
2.27.0

* [dpdk-dev] [PATCH v5 08/24] net/ngbe: init and validate EEPROM
  2021-06-02  9:40 [dpdk-dev] [PATCH v5 00/24] net: ngbe PMD Jiawen Wu
                   ` (6 preceding siblings ...)
  2021-06-02  9:40 ` [dpdk-dev] [PATCH v5 07/24] net/ngbe: set MAC type and LAN id Jiawen Wu
@ 2021-06-02  9:40 ` Jiawen Wu
  2021-06-02  9:40 ` [dpdk-dev] [PATCH v5 09/24] net/ngbe: add HW initialization Jiawen Wu
                   ` (17 subsequent siblings)
  25 siblings, 0 replies; 51+ messages in thread
From: Jiawen Wu @ 2021-06-02  9:40 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Reset swfw lock before NVM access, init EEPROM and validate the
checksum.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/ngbe/base/meson.build   |   2 +
 drivers/net/ngbe/base/ngbe_dummy.h  |  23 ++++
 drivers/net/ngbe/base/ngbe_eeprom.c | 203 ++++++++++++++++++++++++++++
 drivers/net/ngbe/base/ngbe_eeprom.h |  17 +++
 drivers/net/ngbe/base/ngbe_hw.c     |  83 ++++++++++++
 drivers/net/ngbe/base/ngbe_hw.h     |   3 +
 drivers/net/ngbe/base/ngbe_mng.c    | 198 +++++++++++++++++++++++++++
 drivers/net/ngbe/base/ngbe_mng.h    |  65 +++++++++
 drivers/net/ngbe/base/ngbe_type.h   |  24 ++++
 drivers/net/ngbe/ngbe_ethdev.c      |  39 ++++++
 10 files changed, 657 insertions(+)
 create mode 100644 drivers/net/ngbe/base/ngbe_eeprom.c
 create mode 100644 drivers/net/ngbe/base/ngbe_eeprom.h
 create mode 100644 drivers/net/ngbe/base/ngbe_mng.c
 create mode 100644 drivers/net/ngbe/base/ngbe_mng.h

diff --git a/drivers/net/ngbe/base/meson.build b/drivers/net/ngbe/base/meson.build
index fdbfa99916..ddd122ec45 100644
--- a/drivers/net/ngbe/base/meson.build
+++ b/drivers/net/ngbe/base/meson.build
@@ -2,7 +2,9 @@
 # Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
 
 sources = [
+	'ngbe_eeprom.c',
 	'ngbe_hw.c',
+	'ngbe_mng.c',
 ]
 
 error_cflags = []
diff --git a/drivers/net/ngbe/base/ngbe_dummy.h b/drivers/net/ngbe/base/ngbe_dummy.h
index 75b4e50bca..ade03eae81 100644
--- a/drivers/net/ngbe/base/ngbe_dummy.h
+++ b/drivers/net/ngbe/base/ngbe_dummy.h
@@ -28,9 +28,32 @@
 static inline void ngbe_bus_set_lan_id_dummy(struct ngbe_hw *TUP0)
 {
 }
+/* struct ngbe_rom_operations */
+static inline s32 ngbe_rom_init_params_dummy(struct ngbe_hw *TUP0)
+{
+	return NGBE_ERR_OPS_DUMMY;
+}
+static inline s32 ngbe_rom_validate_checksum_dummy(struct ngbe_hw *TUP0,
+					u16 *TUP1)
+{
+	return NGBE_ERR_OPS_DUMMY;
+}
+static inline s32 ngbe_mac_acquire_swfw_sync_dummy(struct ngbe_hw *TUP0,
+					u32 TUP1)
+{
+	return NGBE_ERR_OPS_DUMMY;
+}
+static inline void ngbe_mac_release_swfw_sync_dummy(struct ngbe_hw *TUP0,
+					u32 TUP1)
+{
+}
 static inline void ngbe_init_ops_dummy(struct ngbe_hw *hw)
 {
 	hw->bus.set_lan_id = ngbe_bus_set_lan_id_dummy;
+	hw->rom.init_params = ngbe_rom_init_params_dummy;
+	hw->rom.validate_checksum = ngbe_rom_validate_checksum_dummy;
+	hw->mac.acquire_swfw_sync = ngbe_mac_acquire_swfw_sync_dummy;
+	hw->mac.release_swfw_sync = ngbe_mac_release_swfw_sync_dummy;
 }
 
 #endif /* _NGBE_TYPE_DUMMY_H_ */
diff --git a/drivers/net/ngbe/base/ngbe_eeprom.c b/drivers/net/ngbe/base/ngbe_eeprom.c
new file mode 100644
index 0000000000..0ebbb7a29e
--- /dev/null
+++ b/drivers/net/ngbe/base/ngbe_eeprom.c
@@ -0,0 +1,203 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+ * Copyright(c) 2010-2017 Intel Corporation
+ */
+
+#include "ngbe_hw.h"
+#include "ngbe_mng.h"
+#include "ngbe_eeprom.h"
+
+/**
+ *  ngbe_init_eeprom_params - Initialize EEPROM params
+ *  @hw: pointer to hardware structure
+ *
+ *  Initializes the EEPROM parameters ngbe_rom_info within the
+ *  ngbe_hw struct in order to set up EEPROM access.
+ **/
+s32 ngbe_init_eeprom_params(struct ngbe_hw *hw)
+{
+	struct ngbe_rom_info *eeprom = &hw->rom;
+	u32 eec;
+	u16 eeprom_size;
+
+	DEBUGFUNC("ngbe_init_eeprom_params");
+
+	if (eeprom->type != ngbe_eeprom_unknown)
+		return 0;
+
+	eeprom->type = ngbe_eeprom_none;
+	/* Set default semaphore delay to 10ms which is a well
+	 * tested value
+	 */
+	eeprom->semaphore_delay = 10; /*ms*/
+	/* Clear EEPROM page size, it will be initialized as needed */
+	eeprom->word_page_size = 0;
+
+	/*
+	 * Check for EEPROM present first.
+	 * If not present leave as none
+	 */
+	eec = rd32(hw, NGBE_SPISTAT);
+	if (!(eec & NGBE_SPISTAT_BPFLASH)) {
+		eeprom->type = ngbe_eeprom_flash;
+
+		/*
+		 * SPI EEPROM is assumed here.  This code would need to
+		 * change if a future EEPROM is not SPI.
+		 */
+		eeprom_size = 4096;
+		eeprom->word_size = eeprom_size >> 1;
+	}
+
+	eeprom->address_bits = 16;
+	eeprom->sw_addr = 0x80;
+
+	DEBUGOUT("eeprom params: type = %d, size = %d, address bits: %d, "
+		  "sw addr: 0x%x\n", eeprom->type, eeprom->word_size,
+		  eeprom->address_bits, eeprom->sw_addr);
+
+	return 0;
+}
+
+/**
+ *  ngbe_get_eeprom_semaphore - Get hardware semaphore
+ *  @hw: pointer to hardware structure
+ *
+ *  Sets the hardware semaphores so EEPROM access can occur for bit-bang method
+ **/
+s32 ngbe_get_eeprom_semaphore(struct ngbe_hw *hw)
+{
+	s32 status = NGBE_ERR_EEPROM;
+	u32 timeout = 2000;
+	u32 i;
+	u32 swsm;
+
+	DEBUGFUNC("ngbe_get_eeprom_semaphore");
+
+
+	/* Get SMBI software semaphore between device drivers first */
+	for (i = 0; i < timeout; i++) {
+		/*
+		 * If the SMBI bit is 0 when we read it, then the bit will be
+		 * set and we have the semaphore
+		 */
+		swsm = rd32(hw, NGBE_SWSEM);
+		if (!(swsm & NGBE_SWSEM_PF)) {
+			status = 0;
+			break;
+		}
+		usec_delay(50);
+	}
+
+	if (i == timeout) {
+		DEBUGOUT("Driver can't access the eeprom - SMBI Semaphore "
+			 "not granted.\n");
+		/*
+		 * this release is particularly important because our attempts
+		 * above to get the semaphore may have succeeded, and if there
+		 * was a timeout, we should unconditionally clear the semaphore
+		 * bits to free the driver to make progress
+		 */
+		ngbe_release_eeprom_semaphore(hw);
+
+		usec_delay(50);
+		/*
+		 * one last try
+		 * If the SMBI bit is 0 when we read it, then the bit will be
+		 * set and we have the semaphore
+		 */
+		swsm = rd32(hw, NGBE_SWSEM);
+		if (!(swsm & NGBE_SWSEM_PF))
+			status = 0;
+	}
+
+	/* Now get the semaphore between SW/FW through the SWESMBI bit */
+	if (status == 0) {
+		for (i = 0; i < timeout; i++) {
+			/* Set the SW EEPROM semaphore bit to request access */
+			wr32m(hw, NGBE_MNGSWSYNC,
+				NGBE_MNGSWSYNC_REQ, NGBE_MNGSWSYNC_REQ);
+
+			/*
+			 * If we set the bit successfully then we got the
+			 * semaphore.
+			 */
+			swsm = rd32(hw, NGBE_MNGSWSYNC);
+			if (swsm & NGBE_MNGSWSYNC_REQ)
+				break;
+
+			usec_delay(50);
+		}
+
+		/*
+		 * Release semaphores and return error if SW EEPROM semaphore
+		 * was not granted because we don't have access to the EEPROM
+		 */
+		if (i >= timeout) {
+			DEBUGOUT("SWESMBI Software EEPROM semaphore not granted.\n");
+			ngbe_release_eeprom_semaphore(hw);
+			status = NGBE_ERR_EEPROM;
+		}
+	} else {
+		DEBUGOUT("Software semaphore SMBI between device drivers "
+			 "not granted.\n");
+	}
+
+	return status;
+}
+
+/**
+ *  ngbe_release_eeprom_semaphore - Release hardware semaphore
+ *  @hw: pointer to hardware structure
+ *
+ *  This function clears hardware semaphore bits.
+ **/
+void ngbe_release_eeprom_semaphore(struct ngbe_hw *hw)
+{
+	DEBUGFUNC("ngbe_release_eeprom_semaphore");
+
+	wr32m(hw, NGBE_MNGSWSYNC, NGBE_MNGSWSYNC_REQ, 0);
+	wr32m(hw, NGBE_SWSEM, NGBE_SWSEM_PF, 0);
+	ngbe_flush(hw);
+}
+
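The two functions above form an acquire/release bracket around raw NVM
access. A minimal usage sketch (the caller context is assumed, not part of
the patch):

	if (ngbe_get_eeprom_semaphore(hw) != 0)
		return NGBE_ERR_EEPROM;

	/* ... raw EEPROM/flash register access here ... */

	ngbe_release_eeprom_semaphore(hw);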
+/**
+ *  ngbe_validate_eeprom_checksum_em - Validate EEPROM checksum
+ *  @hw: pointer to hardware structure
+ *  @checksum_val: calculated checksum
+ *
+ *  Performs checksum calculation and validates the EEPROM checksum.  If the
+ *  caller does not need checksum_val, the value can be NULL.
+ **/
+s32 ngbe_validate_eeprom_checksum_em(struct ngbe_hw *hw,
+					   u16 *checksum_val)
+{
+	u32 eeprom_cksum_devcap = 0;
+	int err = 0;
+
+	DEBUGFUNC("ngbe_validate_eeprom_checksum_em");
+	UNREFERENCED_PARAMETER(checksum_val);
+
+	/* Check EEPROM only once */
+	if (hw->bus.lan_id == 0) {
+		wr32(hw, NGBE_CALSUM_CAP_STATUS, 0x0);
+		wr32(hw, NGBE_EEPROM_VERSION_STORE_REG, 0x0);
+	} else {
+		eeprom_cksum_devcap = rd32(hw, NGBE_CALSUM_CAP_STATUS);
+		hw->rom.saved_version = rd32(hw, NGBE_EEPROM_VERSION_STORE_REG);
+	}
+
+	if (hw->bus.lan_id == 0 || eeprom_cksum_devcap == 0) {
+		err = ngbe_hic_check_cap(hw);
+		if (err != 0) {
+			PMD_INIT_LOG(ERR,
+				"The EEPROM checksum is not valid: %d", err);
+			return -EIO;
+		}
+	}
+
+	hw->rom.cksum_devcap = eeprom_cksum_devcap & 0xffff;
+
+	return err;
+}
+
diff --git a/drivers/net/ngbe/base/ngbe_eeprom.h b/drivers/net/ngbe/base/ngbe_eeprom.h
new file mode 100644
index 0000000000..0c2819df4a
--- /dev/null
+++ b/drivers/net/ngbe/base/ngbe_eeprom.h
@@ -0,0 +1,17 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+ * Copyright(c) 2010-2017 Intel Corporation
+ */
+
+#ifndef _NGBE_EEPROM_H_
+#define _NGBE_EEPROM_H_
+
+#define NGBE_CALSUM_CAP_STATUS         0x10224
+#define NGBE_EEPROM_VERSION_STORE_REG  0x1022C
+
+s32 ngbe_init_eeprom_params(struct ngbe_hw *hw);
+s32 ngbe_validate_eeprom_checksum_em(struct ngbe_hw *hw, u16 *checksum_val);
+s32 ngbe_get_eeprom_semaphore(struct ngbe_hw *hw);
+void ngbe_release_eeprom_semaphore(struct ngbe_hw *hw);
+
+#endif /* _NGBE_EEPROM_H_ */
diff --git a/drivers/net/ngbe/base/ngbe_hw.c b/drivers/net/ngbe/base/ngbe_hw.c
index 014bb0faee..de6b75e1c0 100644
--- a/drivers/net/ngbe/base/ngbe_hw.c
+++ b/drivers/net/ngbe/base/ngbe_hw.c
@@ -4,6 +4,8 @@
  */
 
 #include "ngbe_type.h"
+#include "ngbe_eeprom.h"
+#include "ngbe_mng.h"
 #include "ngbe_hw.h"
 
 /**
@@ -25,6 +27,77 @@ void ngbe_set_lan_id_multi_port(struct ngbe_hw *hw)
 	bus->func = bus->lan_id;
 }
 
+/**
+ *  ngbe_acquire_swfw_sync - Acquire SWFW semaphore
+ *  @hw: pointer to hardware structure
+ *  @mask: Mask to specify which semaphore to acquire
+ *
+ *  Acquires the SWFW semaphore through the MNGSEM register for the specified
+ *  function (CSR, PHY0, PHY1, EEPROM, Flash)
+ **/
+s32 ngbe_acquire_swfw_sync(struct ngbe_hw *hw, u32 mask)
+{
+	u32 mngsem = 0;
+	u32 swmask = NGBE_MNGSEM_SW(mask);
+	u32 fwmask = NGBE_MNGSEM_FW(mask);
+	u32 timeout = 200;
+	u32 i;
+
+	DEBUGFUNC("ngbe_acquire_swfw_sync");
+
+	for (i = 0; i < timeout; i++) {
+		/*
+		 * SW NVM semaphore bit is used for access to all
+		 * SW_FW_SYNC bits (not just NVM)
+		 */
+		if (ngbe_get_eeprom_semaphore(hw))
+			return NGBE_ERR_SWFW_SYNC;
+
+		mngsem = rd32(hw, NGBE_MNGSEM);
+		if (mngsem & (fwmask | swmask)) {
+			/* Resource is currently in use by FW or SW */
+			ngbe_release_eeprom_semaphore(hw);
+			msec_delay(5);
+		} else {
+			mngsem |= swmask;
+			wr32(hw, NGBE_MNGSEM, mngsem);
+			ngbe_release_eeprom_semaphore(hw);
+			return 0;
+		}
+	}
+
+	/* If time expired clear the bits holding the lock and retry */
+	if (mngsem & (fwmask | swmask))
+		ngbe_release_swfw_sync(hw, mngsem & (fwmask | swmask));
+
+	msec_delay(5);
+	return NGBE_ERR_SWFW_SYNC;
+}
+
+/**
+ *  ngbe_release_swfw_sync - Release SWFW semaphore
+ *  @hw: pointer to hardware structure
+ *  @mask: Mask to specify which semaphore to release
+ *
+ *  Releases the SWFW semaphore through the MNGSEM register for the specified
+ *  function (CSR, PHY0, PHY1, EEPROM, Flash)
+ **/
+void ngbe_release_swfw_sync(struct ngbe_hw *hw, u32 mask)
+{
+	u32 mngsem;
+	u32 swmask = mask;
+
+	DEBUGFUNC("ngbe_release_swfw_sync");
+
+	ngbe_get_eeprom_semaphore(hw);
+
+	mngsem = rd32(hw, NGBE_MNGSEM);
+	mngsem &= ~swmask;
+	wr32(hw, NGBE_MNGSEM, mngsem);
+
+	ngbe_release_eeprom_semaphore(hw);
+}
+
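These two calls bracket any access to a resource shared with firmware. A
minimal sketch, assuming a PHY access path (the caller code is illustrative;
the mask constant comes from the register header of this series):

	if (ngbe_acquire_swfw_sync(hw, NGBE_MNGSEM_SWPHY) != 0)
		return NGBE_ERR_SWFW_SYNC;

	/* ... MDIO register reads/writes here ... */

	ngbe_release_swfw_sync(hw, NGBE_MNGSEM_SWPHY);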
 /**
  *  ngbe_set_mac_type - Sets MAC type
  *  @hw: pointer to the HW structure
@@ -134,12 +207,22 @@ void ngbe_map_device_id(struct ngbe_hw *hw)
 s32 ngbe_init_ops_pf(struct ngbe_hw *hw)
 {
 	struct ngbe_bus_info *bus = &hw->bus;
+	struct ngbe_mac_info *mac = &hw->mac;
+	struct ngbe_rom_info *rom = &hw->rom;
 
 	DEBUGFUNC("ngbe_init_ops_pf");
 
 	/* BUS */
 	bus->set_lan_id = ngbe_set_lan_id_multi_port;
 
+	/* MAC */
+	mac->acquire_swfw_sync = ngbe_acquire_swfw_sync;
+	mac->release_swfw_sync = ngbe_release_swfw_sync;
+
+	/* EEPROM */
+	rom->init_params = ngbe_init_eeprom_params;
+	rom->validate_checksum = ngbe_validate_eeprom_checksum_em;
+
 	return 0;
 }
 
diff --git a/drivers/net/ngbe/base/ngbe_hw.h b/drivers/net/ngbe/base/ngbe_hw.h
index 7d5de49248..5e508fb67f 100644
--- a/drivers/net/ngbe/base/ngbe_hw.h
+++ b/drivers/net/ngbe/base/ngbe_hw.h
@@ -10,6 +10,9 @@
 
 void ngbe_set_lan_id_multi_port(struct ngbe_hw *hw);
 
+s32 ngbe_acquire_swfw_sync(struct ngbe_hw *hw, u32 mask);
+void ngbe_release_swfw_sync(struct ngbe_hw *hw, u32 mask);
+
 s32 ngbe_init_shared_code(struct ngbe_hw *hw);
 s32 ngbe_set_mac_type(struct ngbe_hw *hw);
 s32 ngbe_init_ops_pf(struct ngbe_hw *hw);
diff --git a/drivers/net/ngbe/base/ngbe_mng.c b/drivers/net/ngbe/base/ngbe_mng.c
new file mode 100644
index 0000000000..87891a91e1
--- /dev/null
+++ b/drivers/net/ngbe/base/ngbe_mng.c
@@ -0,0 +1,198 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+ * Copyright(c) 2010-2017 Intel Corporation
+ */
+
+#include "ngbe_type.h"
+#include "ngbe_mng.h"
+
+/**
+ *  ngbe_hic_unlocked - Issue command to manageability block unlocked
+ *  @hw: pointer to the HW structure
+ *  @buffer: command to write and where the return status will be placed
+ *  @length: length of buffer, must be multiple of 4 bytes
+ *  @timeout: time in ms to wait for command completion
+ *
+ *  Communicates with the manageability block. On success return 0
+ *  else returns semaphore error when encountering an error acquiring
+ *  semaphore or NGBE_ERR_HOST_INTERFACE_COMMAND when command fails.
+ *
+ *  This function assumes that the NGBE_MNGSEM_SWMBX semaphore is held
+ *  by the caller.
+ **/
+static s32
+ngbe_hic_unlocked(struct ngbe_hw *hw, u32 *buffer, u32 length, u32 timeout)
+{
+	u32 value, loop;
+	u16 i, dword_len;
+
+	DEBUGFUNC("ngbe_hic_unlocked");
+
+	if (!length || length > NGBE_PMMBX_BSIZE) {
+		DEBUGOUT("Buffer length failure buffersize=%d.\n", length);
+		return NGBE_ERR_HOST_INTERFACE_COMMAND;
+	}
+
+	/* Calculate length in DWORDs. We must be DWORD aligned */
+	if (length % sizeof(u32)) {
+		DEBUGOUT("Buffer length failure, not aligned to dword");
+		return NGBE_ERR_INVALID_ARGUMENT;
+	}
+
+	dword_len = length >> 2;
+
+	/* The device driver writes the relevant command block
+	 * into the ram area.
+	 */
+	for (i = 0; i < dword_len; i++) {
+		wr32a(hw, NGBE_MNGMBX, i, cpu_to_le32(buffer[i]));
+		buffer[i] = rd32a(hw, NGBE_MNGMBX, i);
+	}
+	ngbe_flush(hw);
+
+	/* Setting this bit tells the ARC that a new command is pending. */
+	wr32m(hw, NGBE_MNGMBXCTL,
+	      NGBE_MNGMBXCTL_SWRDY, NGBE_MNGMBXCTL_SWRDY);
+
+	/* Check command completion */
+	loop = po32m(hw, NGBE_MNGMBXCTL,
+		NGBE_MNGMBXCTL_FWRDY, NGBE_MNGMBXCTL_FWRDY,
+		&value, timeout, 1000);
+	if (!loop || !(value & NGBE_MNGMBXCTL_FWACK)) {
+		DEBUGOUT("Command has failed with no status valid.\n");
+		return NGBE_ERR_HOST_INTERFACE_COMMAND;
+	}
+
+	return 0;
+}
+
+/**
+ *  ngbe_host_interface_command - Issue command to manageability block
+ *  @hw: pointer to the HW structure
+ *  @buffer: contains the command to write and where the return status will
+ *   be placed
+ *  @length: length of buffer, must be multiple of 4 bytes
+ *  @timeout: time in ms to wait for command completion
+ *  @return_data: read and return data from the buffer (true) or not (false)
+ *   Needed because FW structures are big endian and decoding of
+ *   these fields can be 8 bit or 16 bit based on command. Decoding
+ *   is not easily understood without making a table of commands.
+ *   So we will leave this up to the caller to read back the data
+ *   in these cases.
+ *
+ *  Communicates with the manageability block. On success return 0
+ *  else returns semaphore error when encountering an error acquiring
+ *  semaphore or NGBE_ERR_HOST_INTERFACE_COMMAND when command fails.
+ **/
+static s32
+ngbe_host_interface_command(struct ngbe_hw *hw, u32 *buffer,
+				 u32 length, u32 timeout, bool return_data)
+{
+	u32 hdr_size = sizeof(struct ngbe_hic_hdr);
+	struct ngbe_hic_hdr *resp = (struct ngbe_hic_hdr *)buffer;
+	u16 buf_len;
+	s32 err;
+	u32 bi;
+	u32 dword_len;
+
+	DEBUGFUNC("ngbe_host_interface_command");
+
+	if (length == 0 || length > NGBE_PMMBX_BSIZE) {
+		DEBUGOUT("Buffer length failure buffersize=%d.\n", length);
+		return NGBE_ERR_HOST_INTERFACE_COMMAND;
+	}
+
+	/* Take management host interface semaphore */
+	err = hw->mac.acquire_swfw_sync(hw, NGBE_MNGSEM_SWMBX);
+	if (err)
+		return err;
+
+	err = ngbe_hic_unlocked(hw, buffer, length, timeout);
+	if (err)
+		goto rel_out;
+
+	if (!return_data)
+		goto rel_out;
+
+	/* Calculate length in DWORDs */
+	dword_len = hdr_size >> 2;
+
+	/* first pull in the header so we know the buffer length */
+	for (bi = 0; bi < dword_len; bi++)
+		buffer[bi] = rd32a(hw, NGBE_MNGMBX, bi);
+
+	/*
+	 * If there is anything in the data position, pull it in.
+	 * The Read Flash command requires reading the buffer length
+	 * from two bytes instead of one byte.
+	 */
+	if (resp->cmd == 0x30) {
+		for (; bi < dword_len + 2; bi++)
+			buffer[bi] = rd32a(hw, NGBE_MNGMBX, bi);
+
+		buf_len = (((u16)(resp->cmd_or_resp.ret_status) << 3)
+				  & 0xF00) | resp->buf_len;
+		hdr_size += (2 << 2);
+	} else {
+		buf_len = resp->buf_len;
+	}
+	if (!buf_len)
+		goto rel_out;
+
+	if (length < buf_len + hdr_size) {
+		DEBUGOUT("Buffer not large enough for reply message.\n");
+		err = NGBE_ERR_HOST_INTERFACE_COMMAND;
+		goto rel_out;
+	}
+
+	/* Calculate length in DWORDs, add 3 for odd lengths */
+	dword_len = (buf_len + 3) >> 2;
+
+	/* Pull in the rest of the buffer (bi is where we left off) */
+	for (; bi <= dword_len; bi++)
+		buffer[bi] = rd32a(hw, NGBE_MNGMBX, bi);
+
+rel_out:
+	hw->mac.release_swfw_sync(hw, NGBE_MNGSEM_SWMBX);
+
+	return err;
+}
+
+s32 ngbe_hic_check_cap(struct ngbe_hw *hw)
+{
+	struct ngbe_hic_read_shadow_ram command;
+	s32 err;
+	int i;
+
+	DEBUGFUNC("ngbe_hic_check_cap");
+
+	command.hdr.req.cmd = FW_EEPROM_CHECK_STATUS;
+	command.hdr.req.buf_lenh = 0;
+	command.hdr.req.buf_lenl = 0;
+	command.hdr.req.checksum = FW_DEFAULT_CHECKSUM;
+
+	/* the status query carries no payload, so address and length are 0 */
+	command.address = 0;
+	command.length = 0;
+
+	for (i = 0; i <= FW_CEM_MAX_RETRIES; i++) {
+		err = ngbe_host_interface_command(hw, (u32 *)&command,
+				sizeof(command),
+				NGBE_HI_COMMAND_TIMEOUT, true);
+		if (err)
+			continue;
+
+		command.hdr.rsp.ret_status &= 0x1F;
+		if (command.hdr.rsp.ret_status !=
+			FW_CEM_RESP_STATUS_SUCCESS)
+			err = NGBE_ERR_HOST_INTERFACE_COMMAND;
+
+		break;
+	}
+
+	if (!err && command.address != FW_CHECKSUM_CAP_ST_PASS)
+		err = NGBE_ERR_EEPROM_CHECKSUM;
+
+	return err;
+}
diff --git a/drivers/net/ngbe/base/ngbe_mng.h b/drivers/net/ngbe/base/ngbe_mng.h
new file mode 100644
index 0000000000..383e0dc0d1
--- /dev/null
+++ b/drivers/net/ngbe/base/ngbe_mng.h
@@ -0,0 +1,65 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+ * Copyright(c) 2010-2017 Intel Corporation
+ */
+
+#ifndef _NGBE_MNG_H_
+#define _NGBE_MNG_H_
+
+#include "ngbe_type.h"
+
+#define NGBE_PMMBX_QSIZE       64 /* Num of dwords in range */
+#define NGBE_PMMBX_BSIZE       (NGBE_PMMBX_QSIZE * 4)
+#define NGBE_HI_COMMAND_TIMEOUT        5000 /* Process HI command limit */
+
+/* CEM Support */
+#define FW_CEM_MAX_RETRIES              3
+#define FW_CEM_RESP_STATUS_SUCCESS      0x1
+#define FW_DEFAULT_CHECKSUM             0xFF /* checksum always 0xFF */
+#define FW_EEPROM_CHECK_STATUS		0xE9
+
+#define FW_CHECKSUM_CAP_ST_PASS	0x80658383
+#define FW_CHECKSUM_CAP_ST_FAIL	0x70657376
+
+/* Host Interface Command Structures */
+struct ngbe_hic_hdr {
+	u8 cmd;
+	u8 buf_len;
+	union {
+		u8 cmd_resv;
+		u8 ret_status;
+	} cmd_or_resp;
+	u8 checksum;
+};
+
+struct ngbe_hic_hdr2_req {
+	u8 cmd;
+	u8 buf_lenh;
+	u8 buf_lenl;
+	u8 checksum;
+};
+
+struct ngbe_hic_hdr2_rsp {
+	u8 cmd;
+	u8 buf_lenl;
+	u8 ret_status;     /* 7-5: high bits of buf_len, 4-0: status */
+	u8 checksum;
+};
+
+union ngbe_hic_hdr2 {
+	struct ngbe_hic_hdr2_req req;
+	struct ngbe_hic_hdr2_rsp rsp;
+};
+
+/* These need to be dword aligned */
+struct ngbe_hic_read_shadow_ram {
+	union ngbe_hic_hdr2 hdr;
+	u32 address;
+	u16 length;
+	u16 pad2;
+	u16 data;
+	u16 pad3;
+};
+
+s32 ngbe_hic_check_cap(struct ngbe_hw *hw);
+#endif /* _NGBE_MNG_H_ */
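The split length field in ngbe_hic_hdr2_rsp (bits 7-5 of ret_status carry
the high bits, buf_lenl the low byte) can be reassembled as below. A hedged
sketch; the helper itself is illustrative and mirrors the Read Flash decode
in ngbe_host_interface_command():

	static inline u16
	ngbe_hic_rsp_buf_len(const struct ngbe_hic_hdr2_rsp *rsp)
	{
		/* bits 7-5 of ret_status become bits 10-8 of the length */
		return (((u16)rsp->ret_status << 3) & 0xF00) | rsp->buf_lenl;
	}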
diff --git a/drivers/net/ngbe/base/ngbe_type.h b/drivers/net/ngbe/base/ngbe_type.h
index 15f1778d6a..28b15cfb5a 100644
--- a/drivers/net/ngbe/base/ngbe_type.h
+++ b/drivers/net/ngbe/base/ngbe_type.h
@@ -12,6 +12,13 @@
 #include "ngbe_osdep.h"
 #include "ngbe_devids.h"
 
+enum ngbe_eeprom_type {
+	ngbe_eeprom_unknown = 0,
+	ngbe_eeprom_spi,
+	ngbe_eeprom_flash,
+	ngbe_eeprom_none /* No NVM support */
+};
+
 enum ngbe_mac_type {
 	ngbe_mac_unknown = 0,
 	ngbe_mac_em,
@@ -51,7 +58,23 @@ struct ngbe_bus_info {
 	u8 lan_id;
 };
 
+struct ngbe_rom_info {
+	s32 (*init_params)(struct ngbe_hw *hw);
+	s32 (*validate_checksum)(struct ngbe_hw *hw, u16 *checksum_val);
+
+	enum ngbe_eeprom_type type;
+	u32 semaphore_delay;
+	u16 word_size;
+	u16 address_bits;
+	u16 word_page_size;
+	u32 sw_addr;
+	u32 saved_version;
+	u16 cksum_devcap;
+};
+
 struct ngbe_mac_info {
+	s32 (*acquire_swfw_sync)(struct ngbe_hw *hw, u32 mask);
+	void (*release_swfw_sync)(struct ngbe_hw *hw, u32 mask);
 	enum ngbe_mac_type type;
 };
 
@@ -65,6 +88,7 @@ struct ngbe_hw {
 	void *back;
 	struct ngbe_mac_info mac;
 	struct ngbe_phy_info phy;
+	struct ngbe_rom_info rom;
 	struct ngbe_bus_info bus;
 	u16 device_id;
 	u16 vendor_id;
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 12b80ec695..c2f92a8437 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -31,6 +31,29 @@ static const struct rte_pci_id pci_id_ngbe_map[] = {
 	{ .vendor_id = 0, /* sentinel */ },
 };
 
+/*
+ * Ensure that all locks are released before first NVM or PHY access
+ */
+static void
+ngbe_swfw_lock_reset(struct ngbe_hw *hw)
+{
+	uint16_t mask;
+
+	/*
+	 * These locks are trickier since they are common to all ports; but
+	 * the swfw_sync retries last long enough (1s) to be almost sure that,
+	 * if the lock cannot be taken, it is due to an improper lock of the
+	 * semaphore.
+	 */
+	mask = NGBE_MNGSEM_SWPHY |
+	       NGBE_MNGSEM_SWMBX |
+	       NGBE_MNGSEM_SWFLASH;
+	if (hw->mac.acquire_swfw_sync(hw, mask) < 0)
+		PMD_DRV_LOG(DEBUG, "SWFW common locks released");
+
+	hw->mac.release_swfw_sync(hw, mask);
+}
+
 static int
 eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
 {
@@ -70,6 +93,22 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
 		return -EIO;
 	}
 
+	/* Unlock any pending hardware semaphore */
+	ngbe_swfw_lock_reset(hw);
+
+	err = hw->rom.init_params(hw);
+	if (err != 0) {
+		PMD_INIT_LOG(ERR, "The EEPROM init failed: %d", err);
+		return -EIO;
+	}
+
+	/* Make sure we have a good EEPROM before we read from it */
+	err = hw->rom.validate_checksum(hw, NULL);
+	if (err != 0) {
+		PMD_INIT_LOG(ERR, "The EEPROM checksum is not valid: %d", err);
+		return -EIO;
+	}
+
 	return 0;
 }
 
-- 
2.27.0

* [dpdk-dev] [PATCH v5 09/24] net/ngbe: add HW initialization
  2021-06-02  9:40 [dpdk-dev] [PATCH v5 00/24] net: ngbe PMD Jiawen Wu
                   ` (7 preceding siblings ...)
  2021-06-02  9:40 ` [dpdk-dev] [PATCH v5 08/24] net/ngbe: init and validate EEPROM Jiawen Wu
@ 2021-06-02  9:40 ` Jiawen Wu
  2021-06-14 18:01   ` Andrew Rybchenko
  2021-06-02  9:40 ` [dpdk-dev] [PATCH v5 10/24] net/ngbe: identify PHY and reset PHY Jiawen Wu
                   ` (16 subsequent siblings)
  25 siblings, 1 reply; 51+ messages in thread
From: Jiawen Wu @ 2021-06-02  9:40 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Initialize the hardware by resetting it through the base code.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/ngbe/base/ngbe_dummy.h |  21 +++
 drivers/net/ngbe/base/ngbe_hw.c    | 235 +++++++++++++++++++++++++++++
 drivers/net/ngbe/base/ngbe_hw.h    |   9 ++
 drivers/net/ngbe/base/ngbe_type.h  |  24 +++
 drivers/net/ngbe/ngbe_ethdev.c     |   7 +
 5 files changed, 296 insertions(+)

diff --git a/drivers/net/ngbe/base/ngbe_dummy.h b/drivers/net/ngbe/base/ngbe_dummy.h
index ade03eae81..d0081acc2b 100644
--- a/drivers/net/ngbe/base/ngbe_dummy.h
+++ b/drivers/net/ngbe/base/ngbe_dummy.h
@@ -38,6 +38,19 @@ static inline s32 ngbe_rom_validate_checksum_dummy(struct ngbe_hw *TUP0,
 {
 	return NGBE_ERR_OPS_DUMMY;
 }
+/* struct ngbe_mac_operations */
+static inline s32 ngbe_mac_init_hw_dummy(struct ngbe_hw *TUP0)
+{
+	return NGBE_ERR_OPS_DUMMY;
+}
+static inline s32 ngbe_mac_reset_hw_dummy(struct ngbe_hw *TUP0)
+{
+	return NGBE_ERR_OPS_DUMMY;
+}
+static inline s32 ngbe_mac_stop_hw_dummy(struct ngbe_hw *TUP0)
+{
+	return NGBE_ERR_OPS_DUMMY;
+}
 static inline s32 ngbe_mac_acquire_swfw_sync_dummy(struct ngbe_hw *TUP0,
 					u32 TUP1)
 {
@@ -47,13 +60,21 @@ static inline void ngbe_mac_release_swfw_sync_dummy(struct ngbe_hw *TUP0,
 					u32 TUP1)
 {
 }
+static inline s32 ngbe_mac_init_thermal_ssth_dummy(struct ngbe_hw *TUP0)
+{
+	return NGBE_ERR_OPS_DUMMY;
+}
 static inline void ngbe_init_ops_dummy(struct ngbe_hw *hw)
 {
 	hw->bus.set_lan_id = ngbe_bus_set_lan_id_dummy;
 	hw->rom.init_params = ngbe_rom_init_params_dummy;
 	hw->rom.validate_checksum = ngbe_rom_validate_checksum_dummy;
+	hw->mac.init_hw = ngbe_mac_init_hw_dummy;
+	hw->mac.reset_hw = ngbe_mac_reset_hw_dummy;
+	hw->mac.stop_hw = ngbe_mac_stop_hw_dummy;
 	hw->mac.acquire_swfw_sync = ngbe_mac_acquire_swfw_sync_dummy;
 	hw->mac.release_swfw_sync = ngbe_mac_release_swfw_sync_dummy;
+	hw->mac.init_thermal_sensor_thresh = ngbe_mac_init_thermal_ssth_dummy;
 }
 
 #endif /* _NGBE_TYPE_DUMMY_H_ */
diff --git a/drivers/net/ngbe/base/ngbe_hw.c b/drivers/net/ngbe/base/ngbe_hw.c
index de6b75e1c0..9fa40f7de1 100644
--- a/drivers/net/ngbe/base/ngbe_hw.c
+++ b/drivers/net/ngbe/base/ngbe_hw.c
@@ -8,6 +8,133 @@
 #include "ngbe_mng.h"
 #include "ngbe_hw.h"
 
+/**
+ *  ngbe_init_hw - Generic hardware initialization
+ *  @hw: pointer to hardware structure
+ *
+ *  Initialize the hardware by resetting it, filling the bus info
+ *  structure and media type, clearing all on-chip counters, initializing
+ *  the receive address registers, multicast table and VLAN filter table,
+ *  calling the routine to set up link and flow control settings, and
+ *  leaving the transmit and receive units disabled and uninitialized
+ **/
+s32 ngbe_init_hw(struct ngbe_hw *hw)
+{
+	s32 status;
+
+	DEBUGFUNC("ngbe_init_hw");
+
+	/* Reset the hardware */
+	status = hw->mac.reset_hw(hw);
+
+	if (status != 0)
+		DEBUGOUT("Failed to initialize HW, STATUS = %d\n", status);
+
+	return status;
+}
+
+static void
+ngbe_reset_misc_em(struct ngbe_hw *hw)
+{
+	int i;
+
+	wr32(hw, NGBE_ISBADDRL, hw->isb_dma & 0xFFFFFFFF);
+	wr32(hw, NGBE_ISBADDRH, hw->isb_dma >> 32);
+
+	/* receive packets with size > 2048 */
+	wr32m(hw, NGBE_MACRXCFG,
+		NGBE_MACRXCFG_JUMBO, NGBE_MACRXCFG_JUMBO);
+
+	wr32m(hw, NGBE_FRMSZ, NGBE_FRMSZ_MAX_MASK,
+		NGBE_FRMSZ_MAX(NGBE_FRAME_SIZE_DFT));
+
+	/* clear counters on read */
+	wr32m(hw, NGBE_MACCNTCTL,
+		NGBE_MACCNTCTL_RC, NGBE_MACCNTCTL_RC);
+
+	wr32m(hw, NGBE_RXFCCFG,
+		NGBE_RXFCCFG_FC, NGBE_RXFCCFG_FC);
+	wr32m(hw, NGBE_TXFCCFG,
+		NGBE_TXFCCFG_FC, NGBE_TXFCCFG_FC);
+
+	wr32m(hw, NGBE_MACRXFLT,
+		NGBE_MACRXFLT_PROMISC, NGBE_MACRXFLT_PROMISC);
+
+	wr32m(hw, NGBE_RSTSTAT,
+		NGBE_RSTSTAT_TMRINIT_MASK, NGBE_RSTSTAT_TMRINIT(30));
+
+	/* errata 4: initialize mng flex tbl and wakeup flex tbl */
+	wr32(hw, NGBE_MNGFLEXSEL, 0);
+	for (i = 0; i < 16; i++) {
+		wr32(hw, NGBE_MNGFLEXDWL(i), 0);
+		wr32(hw, NGBE_MNGFLEXDWH(i), 0);
+		wr32(hw, NGBE_MNGFLEXMSK(i), 0);
+	}
+	wr32(hw, NGBE_LANFLEXSEL, 0);
+	for (i = 0; i < 16; i++) {
+		wr32(hw, NGBE_LANFLEXDWL(i), 0);
+		wr32(hw, NGBE_LANFLEXDWH(i), 0);
+		wr32(hw, NGBE_LANFLEXMSK(i), 0);
+	}
+
+	/* set pause frame dst mac addr */
+	wr32(hw, NGBE_RXPBPFCDMACL, 0xC2000001);
+	wr32(hw, NGBE_RXPBPFCDMACH, 0x0180);
+
+	wr32(hw, NGBE_MDIOMODE, 0xF);
+
+	wr32m(hw, NGBE_GPIE, NGBE_GPIE_MSIX, NGBE_GPIE_MSIX);
+
+	if ((hw->sub_system_id & NGBE_OEM_MASK) == NGBE_LY_M88E1512_SFP ||
+		(hw->sub_system_id & NGBE_OEM_MASK) == NGBE_LY_YT8521S_SFP) {
+		/* gpio0 is used for power on/off control */
+		wr32(hw, NGBE_GPIODIR, NGBE_GPIODIR_DDR(1));
+		wr32(hw, NGBE_GPIODATA, NGBE_GPIOBIT_0);
+	}
+
+	hw->mac.init_thermal_sensor_thresh(hw);
+
+	/* enable MAC transmitter */
+	wr32m(hw, NGBE_MACTXCFG, NGBE_MACTXCFG_TE, NGBE_MACTXCFG_TE);
+
+	/* select GMII */
+	wr32m(hw, NGBE_MACTXCFG,
+		NGBE_MACTXCFG_SPEED_MASK, NGBE_MACTXCFG_SPEED_1G);
+
+	for (i = 0; i < 4; i++)
+		wr32m(hw, NGBE_IVAR(i), 0x80808080, 0);
+}
+
+/**
+ *  ngbe_reset_hw_em - Perform hardware reset
+ *  @hw: pointer to hardware structure
+ *
+ *  Resets the hardware by resetting the transmit and receive units, masks
+ *  and clears all interrupts, performs a PHY reset, and performs a link
+ *  (MAC) reset.
+ **/
+s32 ngbe_reset_hw_em(struct ngbe_hw *hw)
+{
+	s32 status;
+
+	DEBUGFUNC("ngbe_reset_hw_em");
+
+	/* Call adapter stop to disable tx/rx and clear interrupts */
+	status = hw->mac.stop_hw(hw);
+	if (status != 0)
+		return status;
+
+	wr32(hw, NGBE_RST, NGBE_RST_LAN(hw->bus.lan_id));
+	ngbe_flush(hw);
+	msec_delay(50);
+
+	ngbe_reset_misc_em(hw);
+
+	msec_delay(50);
+
+	return status;
+}
+
 /**
  *  ngbe_set_lan_id_multi_port - Set LAN id for PCIe multiple port devices
  *  @hw: pointer to the HW structure
@@ -27,6 +154,57 @@ void ngbe_set_lan_id_multi_port(struct ngbe_hw *hw)
 	bus->func = bus->lan_id;
 }
 
+/**
+ *  ngbe_stop_hw - Generic stop Tx/Rx units
+ *  @hw: pointer to hardware structure
+ *
+ *  Sets the adapter_stopped flag within ngbe_hw struct. Clears interrupts,
+ *  disables transmit and receive units. The adapter_stopped flag is used by
+ *  the shared code and drivers to determine if the adapter is in a stopped
+ *  state and should not touch the hardware.
+ **/
+s32 ngbe_stop_hw(struct ngbe_hw *hw)
+{
+	u32 reg_val;
+	u16 i;
+
+	DEBUGFUNC("ngbe_stop_hw");
+
+	/*
+	 * Set the adapter_stopped flag so other driver functions stop touching
+	 * the hardware
+	 */
+	hw->adapter_stopped = true;
+
+	/* Disable the receive unit */
+	ngbe_disable_rx(hw);
+
+	/* Clear interrupt mask to stop interrupts from being generated */
+	wr32(hw, NGBE_IENMISC, 0);
+	wr32(hw, NGBE_IMS(0), NGBE_IMS_MASK);
+
+	/* Clear any pending interrupts, flush previous writes */
+	wr32(hw, NGBE_ICRMISC, NGBE_ICRMISC_MASK);
+	wr32(hw, NGBE_ICR(0), NGBE_ICR_MASK);
+
+	/* Disable the transmit unit.  Each queue must be disabled. */
+	for (i = 0; i < hw->mac.max_tx_queues; i++)
+		wr32(hw, NGBE_TXCFG(i), NGBE_TXCFG_FLUSH);
+
+	/* Disable the receive unit by stopping each queue */
+	for (i = 0; i < hw->mac.max_rx_queues; i++) {
+		reg_val = rd32(hw, NGBE_RXCFG(i));
+		reg_val &= ~NGBE_RXCFG_ENA;
+		wr32(hw, NGBE_RXCFG(i), reg_val);
+	}
+
+	/* flush all queues disables */
+	ngbe_flush(hw);
+	msec_delay(2);
+
+	return 0;
+}
+
 /**
  *  ngbe_acquire_swfw_sync - Acquire SWFW semaphore
  *  @hw: pointer to hardware structure
@@ -98,6 +276,54 @@ void ngbe_release_swfw_sync(struct ngbe_hw *hw, u32 mask)
 	ngbe_release_eeprom_semaphore(hw);
 }
 
+/**
+ *  ngbe_init_thermal_sensor_thresh - Inits thermal sensor thresholds
+ *  @hw: pointer to hardware structure
+ *
+ *  Inits the thermal sensor thresholds according to the NVM map
+ *  and saves off the threshold and location values into
+ *  mac.thermal_sensor_data
+ **/
+s32 ngbe_init_thermal_sensor_thresh(struct ngbe_hw *hw)
+{
+	struct ngbe_thermal_sensor_data *data = &hw->mac.thermal_sensor_data;
+
+	DEBUGFUNC("ngbe_init_thermal_sensor_thresh");
+
+	memset(data, 0, sizeof(struct ngbe_thermal_sensor_data));
+
+	if (hw->bus.lan_id != 0)
+		return NGBE_NOT_IMPLEMENTED;
+
+	wr32(hw, NGBE_TSINTR,
+		NGBE_TSINTR_AEN | NGBE_TSINTR_DEN);
+	wr32(hw, NGBE_TSEN, NGBE_TSEN_ENA);
+
+	data->sensor[0].alarm_thresh = 115;
+	wr32(hw, NGBE_TSATHRE, 0x344);
+	data->sensor[0].dalarm_thresh = 110;
+	wr32(hw, NGBE_TSDTHRE, 0x330);
+
+	return 0;
+}
+
+void ngbe_disable_rx(struct ngbe_hw *hw)
+{
+	u32 pfdtxgswc;
+
+	pfdtxgswc = rd32(hw, NGBE_PSRCTL);
+	if (pfdtxgswc & NGBE_PSRCTL_LBENA) {
+		pfdtxgswc &= ~NGBE_PSRCTL_LBENA;
+		wr32(hw, NGBE_PSRCTL, pfdtxgswc);
+		hw->mac.set_lben = true;
+	} else {
+		hw->mac.set_lben = false;
+	}
+
+	wr32m(hw, NGBE_PBRXCTL, NGBE_PBRXCTL_ENA, 0);
+	wr32m(hw, NGBE_MACRXCFG, NGBE_MACRXCFG_ENA, 0);
+}
+
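The set_lben bookkeeping above only pays off in a matching enable path. A
hedged sketch of that counterpart (the function is illustrative, not part of
this patch; register names are from this series):

	static void ngbe_enable_rx_sketch(struct ngbe_hw *hw)
	{
		u32 pfdtxgswc;

		wr32m(hw, NGBE_MACRXCFG, NGBE_MACRXCFG_ENA, NGBE_MACRXCFG_ENA);
		wr32m(hw, NGBE_PBRXCTL, NGBE_PBRXCTL_ENA, NGBE_PBRXCTL_ENA);

		/* restore loopback only if ngbe_disable_rx() cleared it */
		if (hw->mac.set_lben) {
			pfdtxgswc = rd32(hw, NGBE_PSRCTL);
			pfdtxgswc |= NGBE_PSRCTL_LBENA;
			wr32(hw, NGBE_PSRCTL, pfdtxgswc);
			hw->mac.set_lben = false;
		}
	}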
 /**
  *  ngbe_set_mac_type - Sets MAC type
  *  @hw: pointer to the HW structure
@@ -216,13 +442,22 @@ s32 ngbe_init_ops_pf(struct ngbe_hw *hw)
 	bus->set_lan_id = ngbe_set_lan_id_multi_port;
 
 	/* MAC */
+	mac->init_hw = ngbe_init_hw;
+	mac->reset_hw = ngbe_reset_hw_em;
+	mac->stop_hw = ngbe_stop_hw;
 	mac->acquire_swfw_sync = ngbe_acquire_swfw_sync;
 	mac->release_swfw_sync = ngbe_release_swfw_sync;
 
+	/* Manageability interface */
+	mac->init_thermal_sensor_thresh = ngbe_init_thermal_sensor_thresh;
+
 	/* EEPROM */
 	rom->init_params = ngbe_init_eeprom_params;
 	rom->validate_checksum = ngbe_validate_eeprom_checksum_em;
 
+	mac->max_rx_queues	= NGBE_EM_MAX_RX_QUEUES;
+	mac->max_tx_queues	= NGBE_EM_MAX_TX_QUEUES;
+
 	return 0;
 }
 
diff --git a/drivers/net/ngbe/base/ngbe_hw.h b/drivers/net/ngbe/base/ngbe_hw.h
index 5e508fb67f..39b2fe696b 100644
--- a/drivers/net/ngbe/base/ngbe_hw.h
+++ b/drivers/net/ngbe/base/ngbe_hw.h
@@ -8,11 +8,20 @@
 
 #include "ngbe_type.h"
 
+#define NGBE_EM_MAX_TX_QUEUES 8
+#define NGBE_EM_MAX_RX_QUEUES 8
+
+s32 ngbe_init_hw(struct ngbe_hw *hw);
+s32 ngbe_reset_hw_em(struct ngbe_hw *hw);
+s32 ngbe_stop_hw(struct ngbe_hw *hw);
+
 void ngbe_set_lan_id_multi_port(struct ngbe_hw *hw);
 
 s32 ngbe_acquire_swfw_sync(struct ngbe_hw *hw, u32 mask);
 void ngbe_release_swfw_sync(struct ngbe_hw *hw, u32 mask);
 
+s32 ngbe_init_thermal_sensor_thresh(struct ngbe_hw *hw);
+void ngbe_disable_rx(struct ngbe_hw *hw);
 s32 ngbe_init_shared_code(struct ngbe_hw *hw);
 s32 ngbe_set_mac_type(struct ngbe_hw *hw);
 s32 ngbe_init_ops_pf(struct ngbe_hw *hw);
diff --git a/drivers/net/ngbe/base/ngbe_type.h b/drivers/net/ngbe/base/ngbe_type.h
index 28b15cfb5a..21756c4690 100644
--- a/drivers/net/ngbe/base/ngbe_type.h
+++ b/drivers/net/ngbe/base/ngbe_type.h
@@ -6,12 +6,24 @@
 #ifndef _NGBE_TYPE_H_
 #define _NGBE_TYPE_H_
 
+#define NGBE_FRAME_SIZE_DFT       (1522) /* Default frame size, +FCS */
+
 #define NGBE_ALIGN		128 /* as intel did */
 
 #include "ngbe_status.h"
 #include "ngbe_osdep.h"
 #include "ngbe_devids.h"
 
+struct ngbe_thermal_diode_data {
+	s16 temp;
+	s16 alarm_thresh;
+	s16 dalarm_thresh;
+};
+
+struct ngbe_thermal_sensor_data {
+	struct ngbe_thermal_diode_data sensor[1];
+};
+
 enum ngbe_eeprom_type {
 	ngbe_eeprom_unknown = 0,
 	ngbe_eeprom_spi,
@@ -73,9 +85,20 @@ struct ngbe_rom_info {
 };
 
 struct ngbe_mac_info {
+	s32 (*init_hw)(struct ngbe_hw *hw);
+	s32 (*reset_hw)(struct ngbe_hw *hw);
+	s32 (*stop_hw)(struct ngbe_hw *hw);
 	s32 (*acquire_swfw_sync)(struct ngbe_hw *hw, u32 mask);
 	void (*release_swfw_sync)(struct ngbe_hw *hw, u32 mask);
+
+	/* Manageability interface */
+	s32 (*init_thermal_sensor_thresh)(struct ngbe_hw *hw);
+
 	enum ngbe_mac_type type;
+	u32 max_tx_queues;
+	u32 max_rx_queues;
+	struct ngbe_thermal_sensor_data  thermal_sensor_data;
+	bool set_lben;
 };
 
 struct ngbe_phy_info {
@@ -94,6 +117,7 @@ struct ngbe_hw {
 	u16 vendor_id;
 	u16 sub_device_id;
 	u16 sub_system_id;
+	bool adapter_stopped;
 	bool allow_unsupported_sfp;
 
 	uint64_t isb_dma;
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index c2f92a8437..bb5923c485 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -109,6 +109,13 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
 		return -EIO;
 	}
 
+	err = hw->mac.init_hw(hw);
+	if (err != 0) {
+		PMD_INIT_LOG(ERR, "Hardware Initialization Failure: %d", err);
+		return -EIO;
+	}
+
 	return 0;
 }
 
-- 
2.27.0

* [dpdk-dev] [PATCH v5 10/24] net/ngbe: identify PHY and reset PHY
  2021-06-02  9:40 [dpdk-dev] [PATCH v5 00/24] net: ngbe PMD Jiawen Wu
                   ` (8 preceding siblings ...)
  2021-06-02  9:40 ` [dpdk-dev] [PATCH v5 09/24] net/ngbe: add HW initialization Jiawen Wu
@ 2021-06-02  9:40 ` Jiawen Wu
  2021-06-02  9:40 ` [dpdk-dev] [PATCH v5 11/24] net/ngbe: store MAC address Jiawen Wu
                   ` (15 subsequent siblings)
  25 siblings, 0 replies; 51+ messages in thread
From: Jiawen Wu @ 2021-06-02  9:40 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Identify PHY to get the PHY type, and perform a PHY reset.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/ngbe/base/meson.build    |   4 +
 drivers/net/ngbe/base/ngbe_dummy.h   |  40 +++
 drivers/net/ngbe/base/ngbe_hw.c      |  38 +++
 drivers/net/ngbe/base/ngbe_hw.h      |   2 +
 drivers/net/ngbe/base/ngbe_phy.c     | 426 +++++++++++++++++++++++++++
 drivers/net/ngbe/base/ngbe_phy.h     |  60 ++++
 drivers/net/ngbe/base/ngbe_phy_mvl.c |  89 ++++++
 drivers/net/ngbe/base/ngbe_phy_mvl.h |  92 ++++++
 drivers/net/ngbe/base/ngbe_phy_rtl.c |  44 +++
 drivers/net/ngbe/base/ngbe_phy_rtl.h |  83 ++++++
 drivers/net/ngbe/base/ngbe_phy_yt.c  | 112 +++++++
 drivers/net/ngbe/base/ngbe_phy_yt.h  |  66 +++++
 drivers/net/ngbe/base/ngbe_type.h    |  17 ++
 13 files changed, 1073 insertions(+)
 create mode 100644 drivers/net/ngbe/base/ngbe_phy.c
 create mode 100644 drivers/net/ngbe/base/ngbe_phy.h
 create mode 100644 drivers/net/ngbe/base/ngbe_phy_mvl.c
 create mode 100644 drivers/net/ngbe/base/ngbe_phy_mvl.h
 create mode 100644 drivers/net/ngbe/base/ngbe_phy_rtl.c
 create mode 100644 drivers/net/ngbe/base/ngbe_phy_rtl.h
 create mode 100644 drivers/net/ngbe/base/ngbe_phy_yt.c
 create mode 100644 drivers/net/ngbe/base/ngbe_phy_yt.h

diff --git a/drivers/net/ngbe/base/meson.build b/drivers/net/ngbe/base/meson.build
index ddd122ec45..146134f671 100644
--- a/drivers/net/ngbe/base/meson.build
+++ b/drivers/net/ngbe/base/meson.build
@@ -5,6 +5,10 @@ sources = [
 	'ngbe_eeprom.c',
 	'ngbe_hw.c',
 	'ngbe_mng.c',
+	'ngbe_phy.c',
+	'ngbe_phy_rtl.c',
+	'ngbe_phy_mvl.c',
+	'ngbe_phy_yt.c',
 ]
 
 error_cflags = []
diff --git a/drivers/net/ngbe/base/ngbe_dummy.h b/drivers/net/ngbe/base/ngbe_dummy.h
index d0081acc2b..15017cfd82 100644
--- a/drivers/net/ngbe/base/ngbe_dummy.h
+++ b/drivers/net/ngbe/base/ngbe_dummy.h
@@ -64,6 +64,39 @@ static inline s32 ngbe_mac_init_thermal_ssth_dummy(struct ngbe_hw *TUP0)
 {
 	return NGBE_ERR_OPS_DUMMY;
 }
+static inline s32 ngbe_mac_check_overtemp_dummy(struct ngbe_hw *TUP0)
+{
+	return NGBE_ERR_OPS_DUMMY;
+}
+/* struct ngbe_phy_operations */
+static inline s32 ngbe_phy_identify_dummy(struct ngbe_hw *TUP0)
+{
+	return NGBE_ERR_OPS_DUMMY;
+}
+static inline s32 ngbe_phy_reset_hw_dummy(struct ngbe_hw *TUP0)
+{
+	return NGBE_ERR_OPS_DUMMY;
+}
+static inline s32 ngbe_phy_read_reg_dummy(struct ngbe_hw *TUP0, u32 TUP1,
+					u32 TUP2, u16 *TUP3)
+{
+	return NGBE_ERR_OPS_DUMMY;
+}
+static inline s32 ngbe_phy_write_reg_dummy(struct ngbe_hw *TUP0, u32 TUP1,
+					u32 TUP2, u16 TUP3)
+{
+	return NGBE_ERR_OPS_DUMMY;
+}
+static inline s32 ngbe_phy_read_reg_unlocked_dummy(struct ngbe_hw *TUP0,
+					u32 TUP1, u32 TUP2, u16 *TUP3)
+{
+	return NGBE_ERR_OPS_DUMMY;
+}
+static inline s32 ngbe_phy_write_reg_unlocked_dummy(struct ngbe_hw *TUP0,
+					u32 TUP1, u32 TUP2, u16 TUP3)
+{
+	return NGBE_ERR_OPS_DUMMY;
+}
 static inline void ngbe_init_ops_dummy(struct ngbe_hw *hw)
 {
 	hw->bus.set_lan_id = ngbe_bus_set_lan_id_dummy;
@@ -75,6 +108,13 @@ static inline void ngbe_init_ops_dummy(struct ngbe_hw *hw)
 	hw->mac.acquire_swfw_sync = ngbe_mac_acquire_swfw_sync_dummy;
 	hw->mac.release_swfw_sync = ngbe_mac_release_swfw_sync_dummy;
 	hw->mac.init_thermal_sensor_thresh = ngbe_mac_init_thermal_ssth_dummy;
+	hw->mac.check_overtemp = ngbe_mac_check_overtemp_dummy;
+	hw->phy.identify = ngbe_phy_identify_dummy;
+	hw->phy.reset_hw = ngbe_phy_reset_hw_dummy;
+	hw->phy.read_reg = ngbe_phy_read_reg_dummy;
+	hw->phy.write_reg = ngbe_phy_write_reg_dummy;
+	hw->phy.read_reg_unlocked = ngbe_phy_read_reg_unlocked_dummy;
+	hw->phy.write_reg_unlocked = ngbe_phy_write_reg_unlocked_dummy;
 }
 
 #endif /* _NGBE_TYPE_DUMMY_H_ */
diff --git a/drivers/net/ngbe/base/ngbe_hw.c b/drivers/net/ngbe/base/ngbe_hw.c
index 9fa40f7de1..ebd163d9e6 100644
--- a/drivers/net/ngbe/base/ngbe_hw.c
+++ b/drivers/net/ngbe/base/ngbe_hw.c
@@ -4,6 +4,7 @@
  */
 
 #include "ngbe_type.h"
+#include "ngbe_phy.h"
 #include "ngbe_eeprom.h"
 #include "ngbe_mng.h"
 #include "ngbe_hw.h"
@@ -124,6 +125,15 @@ s32 ngbe_reset_hw_em(struct ngbe_hw *hw)
 	if (status != 0)
 		return status;
 
+	/* Identify PHY and related function pointers */
+	status = ngbe_init_phy(hw);
+	if (status)
+		return status;
+
+	/* Reset PHY */
+	if (!hw->phy.reset_disable)
+		hw->phy.reset_hw(hw);
+
 	wr32(hw, NGBE_RST, NGBE_RST_LAN(hw->bus.lan_id));
 	ngbe_flush(hw);
 	msec_delay(50);
@@ -307,6 +317,24 @@ s32 ngbe_init_thermal_sensor_thresh(struct ngbe_hw *hw)
 	return 0;
 }
 
+s32 ngbe_mac_check_overtemp(struct ngbe_hw *hw)
+{
+	s32 status = 0;
+	u32 ts_state;
+
+	DEBUGFUNC("ngbe_mac_check_overtemp");
+
+	/* Check that the LASI temp alarm status was triggered */
+	ts_state = rd32(hw, NGBE_TSALM);
+
+	if (ts_state & NGBE_TSALM_HI)
+		status = NGBE_ERR_UNDERTEMP;
+	else if (ts_state & NGBE_TSALM_LO)
+		status = NGBE_ERR_OVERTEMP;
+
+	return status;
+}
+
 void ngbe_disable_rx(struct ngbe_hw *hw)
 {
 	u32 pfdtxgswc;
@@ -434,6 +462,7 @@ s32 ngbe_init_ops_pf(struct ngbe_hw *hw)
 {
 	struct ngbe_bus_info *bus = &hw->bus;
 	struct ngbe_mac_info *mac = &hw->mac;
+	struct ngbe_phy_info *phy = &hw->phy;
 	struct ngbe_rom_info *rom = &hw->rom;
 
 	DEBUGFUNC("ngbe_init_ops_pf");
@@ -441,6 +470,14 @@ s32 ngbe_init_ops_pf(struct ngbe_hw *hw)
 	/* BUS */
 	bus->set_lan_id = ngbe_set_lan_id_multi_port;
 
+	/* PHY */
+	phy->identify = ngbe_identify_phy;
+	phy->read_reg = ngbe_read_phy_reg;
+	phy->write_reg = ngbe_write_phy_reg;
+	phy->read_reg_unlocked = ngbe_read_phy_reg_mdi;
+	phy->write_reg_unlocked = ngbe_write_phy_reg_mdi;
+	phy->reset_hw = ngbe_reset_phy;
+
 	/* MAC */
 	mac->init_hw = ngbe_init_hw;
 	mac->reset_hw = ngbe_reset_hw_em;
@@ -450,6 +487,7 @@ s32 ngbe_init_ops_pf(struct ngbe_hw *hw)
 
 	/* Manageability interface */
 	mac->init_thermal_sensor_thresh = ngbe_init_thermal_sensor_thresh;
+	mac->check_overtemp = ngbe_mac_check_overtemp;
 
 	/* EEPROM */
 	rom->init_params = ngbe_init_eeprom_params;
diff --git a/drivers/net/ngbe/base/ngbe_hw.h b/drivers/net/ngbe/base/ngbe_hw.h
index 39b2fe696b..3c8e646bb7 100644
--- a/drivers/net/ngbe/base/ngbe_hw.h
+++ b/drivers/net/ngbe/base/ngbe_hw.h
@@ -21,10 +21,12 @@ s32 ngbe_acquire_swfw_sync(struct ngbe_hw *hw, u32 mask);
 void ngbe_release_swfw_sync(struct ngbe_hw *hw, u32 mask);
 
 s32 ngbe_init_thermal_sensor_thresh(struct ngbe_hw *hw);
+s32 ngbe_mac_check_overtemp(struct ngbe_hw *hw);
 void ngbe_disable_rx(struct ngbe_hw *hw);
 s32 ngbe_init_shared_code(struct ngbe_hw *hw);
 s32 ngbe_set_mac_type(struct ngbe_hw *hw);
 s32 ngbe_init_ops_pf(struct ngbe_hw *hw);
+s32 ngbe_init_phy(struct ngbe_hw *hw);
 void ngbe_map_device_id(struct ngbe_hw *hw);
 
 #endif /* _NGBE_HW_H_ */
diff --git a/drivers/net/ngbe/base/ngbe_phy.c b/drivers/net/ngbe/base/ngbe_phy.c
new file mode 100644
index 0000000000..d1b4cc9e5f
--- /dev/null
+++ b/drivers/net/ngbe/base/ngbe_phy.c
@@ -0,0 +1,426 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+ * Copyright(c) 2010-2017 Intel Corporation
+ */
+
+#include "ngbe_hw.h"
+#include "ngbe_phy.h"
+
+s32 ngbe_mdi_map_register(mdi_reg_t *reg, mdi_reg_22_t *reg22)
+{
+	bool match = true;
+
+	switch (reg->device_type) {
+	case NGBE_MD_DEV_PMA_PMD:
+		switch (reg->addr) {
+		case NGBE_MD_PHY_ID_HIGH:
+		case NGBE_MD_PHY_ID_LOW:
+			reg22->page = 0;
+			reg22->addr = reg->addr;
+			reg22->device_type = 0;
+			break;
+		default:
+			match = false;
+		}
+		break;
+	default:
+		match = false;
+		break;
+	}
+
+	if (!match) {
+		reg22->page = reg->device_type;
+		reg22->device_type = reg->device_type;
+		reg22->addr = reg->addr;
+	}
+
+	return 0;
+}
+
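+/* Illustrative mapping (not part of the original code): a Clause-45
+ * PMA/PMD PHY-ID access collapses onto page 0 of the Clause-22 space:
+ *
+ *	mdi_reg_t reg = {
+ *		.device_type = NGBE_MD_DEV_PMA_PMD,
+ *		.addr = NGBE_MD_PHY_ID_HIGH,
+ *	};
+ *	mdi_reg_22_t reg22;
+ *
+ *	ngbe_mdi_map_register(&reg, &reg22);
+ *	reg22 is now { .page = 0, .addr = 0x2, .device_type = 0 }
+ */
+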
+/**
+ * ngbe_probe_phy - Identify a single address for a PHY
+ * @hw: pointer to hardware structure
+ * @phy_addr: PHY address to probe
+ *
+ * Returns true if PHY found
+ */
+static bool ngbe_probe_phy(struct ngbe_hw *hw, u16 phy_addr)
+{
+	if (!ngbe_validate_phy_addr(hw, phy_addr)) {
+		DEBUGOUT("Unable to validate PHY address 0x%04X\n",
+			phy_addr);
+		return false;
+	}
+
+	if (ngbe_get_phy_id(hw))
+		return false;
+
+	hw->phy.type = ngbe_get_phy_type_from_id(hw);
+	if (hw->phy.type == ngbe_phy_unknown)
+		return false;
+
+	return true;
+}
+
+/**
+ *  ngbe_identify_phy - Get physical layer module
+ *  @hw: pointer to hardware structure
+ *
+ *  Determines the physical layer module found on the current adapter.
+ **/
+s32 ngbe_identify_phy(struct ngbe_hw *hw)
+{
+	s32 err = NGBE_ERR_PHY_ADDR_INVALID;
+	u16 phy_addr;
+
+	DEBUGFUNC("ngbe_identify_phy");
+
+	if (hw->phy.type != ngbe_phy_unknown)
+		return 0;
+
+	/* select clause 22 */
+	wr32(hw, NGBE_MDIOMODE, NGBE_MDIOMODE_MASK);
+
+	for (phy_addr = 0; phy_addr < NGBE_MAX_PHY_ADDR; phy_addr++) {
+		if (ngbe_probe_phy(hw, phy_addr)) {
+			err = 0;
+			break;
+		}
+	}
+
+	return err;
+}
+
+/**
+ * ngbe_check_reset_blocked - check status of MNG FW veto bit
+ * @hw: pointer to the hardware structure
+ *
+ * This function checks the STAT.MNGVETO bit to see if there are
+ * any constraints on link from manageability. For MACs that don't
+ * have this bit, just return false since the link cannot be blocked
+ * via this method.
+ **/
+s32 ngbe_check_reset_blocked(struct ngbe_hw *hw)
+{
+	u32 mmngc;
+
+	DEBUGFUNC("ngbe_check_reset_blocked");
+
+	mmngc = rd32(hw, NGBE_STAT);
+	if (mmngc & NGBE_STAT_MNGVETO) {
+		DEBUGOUT("MNG_VETO bit detected.\n");
+		return true;
+	}
+
+	return false;
+}
+
+/**
+ *  ngbe_validate_phy_addr - Determine whether a PHY address is valid
+ *  @hw: pointer to hardware structure
+ *  @phy_addr: PHY address
+ *
+ **/
+bool ngbe_validate_phy_addr(struct ngbe_hw *hw, u32 phy_addr)
+{
+	u16 phy_id = 0;
+	bool valid = false;
+
+	DEBUGFUNC("ngbe_validate_phy_addr");
+
+	if (hw->sub_device_id == NGBE_SUB_DEV_ID_EM_YT8521S_SFP)
+		return true;
+
+	hw->phy.addr = phy_addr;
+	hw->phy.read_reg(hw, NGBE_MD_PHY_ID_HIGH,
+			     NGBE_MD_DEV_PMA_PMD, &phy_id);
+
+	if (phy_id != 0xFFFF && phy_id != 0x0)
+		valid = true;
+
+	DEBUGOUT("PHY ID HIGH is 0x%04X\n", phy_id);
+
+	return valid;
+}
+
+/**
+ *  ngbe_get_phy_id - Get the phy ID
+ *  @hw: pointer to hardware structure
+ *
+ **/
+s32 ngbe_get_phy_id(struct ngbe_hw *hw)
+{
+	u32 err;
+	u16 phy_id_high = 0;
+	u16 phy_id_low = 0;
+
+	DEBUGFUNC("ngbe_get_phy_id");
+
+	err = hw->phy.read_reg(hw, NGBE_MD_PHY_ID_HIGH,
+				      NGBE_MD_DEV_PMA_PMD,
+				      &phy_id_high);
+	hw->phy.id = (u32)(phy_id_high << 16);
+
+	err = hw->phy.read_reg(hw, NGBE_MD_PHY_ID_LOW,
+				NGBE_MD_DEV_PMA_PMD,
+				&phy_id_low);
+	hw->phy.id |= (u32)(phy_id_low & NGBE_PHY_REVISION_MASK);
+	hw->phy.revision = (u32)(phy_id_low & ~NGBE_PHY_REVISION_MASK);
+
+	DEBUGOUT("PHY_ID_HIGH 0x%04X, PHY_ID_LOW 0x%04X\n",
+		  phy_id_high, phy_id_low);
+
+	return err;
+}
+
+/**
+ *  ngbe_get_phy_type_from_id - Get the phy type
+ *  @phy_id: PHY ID information
+ *
+ **/
+enum ngbe_phy_type ngbe_get_phy_type_from_id(struct ngbe_hw *hw)
+{
+	enum ngbe_phy_type phy_type;
+
+	DEBUGFUNC("ngbe_get_phy_type_from_id");
+
+	switch (hw->phy.id) {
+	case NGBE_PHYID_RTL:
+		phy_type = ngbe_phy_rtl;
+		break;
+	case NGBE_PHYID_MVL:
+		if (hw->phy.media_type == ngbe_media_type_fiber)
+			phy_type = ngbe_phy_mvl_sfi;
+		else
+			phy_type = ngbe_phy_mvl;
+		break;
+	case NGBE_PHYID_YT:
+		if (hw->phy.media_type == ngbe_media_type_fiber)
+			phy_type = ngbe_phy_yt8521s_sfi;
+		else
+			phy_type = ngbe_phy_yt8521s;
+		break;
+	default:
+		phy_type = ngbe_phy_unknown;
+		break;
+	}
+
+	return phy_type;
+}
+
+/**
+ *  ngbe_reset_phy - Performs a PHY reset
+ *  @hw: pointer to hardware structure
+ **/
+s32 ngbe_reset_phy(struct ngbe_hw *hw)
+{
+	s32 err = 0;
+
+	DEBUGFUNC("ngbe_reset_phy");
+
+	if (hw->phy.type == ngbe_phy_unknown)
+		err = ngbe_identify_phy(hw);
+
+	if (err != 0 || hw->phy.type == ngbe_phy_none)
+		return err;
+
+	/* Don't reset PHY if it's shut down due to overtemp. */
+	if (hw->mac.check_overtemp(hw) == NGBE_ERR_OVERTEMP)
+		return err;
+
+	/* Blocked by MNG FW so bail */
+	if (ngbe_check_reset_blocked(hw))
+		return err;
+
+	switch (hw->phy.type) {
+	case ngbe_phy_rtl:
+		err = ngbe_reset_phy_rtl(hw);
+		break;
+	case ngbe_phy_mvl:
+	case ngbe_phy_mvl_sfi:
+		err = ngbe_reset_phy_mvl(hw);
+		break;
+	case ngbe_phy_yt8521s:
+	case ngbe_phy_yt8521s_sfi:
+		err = ngbe_reset_phy_yt(hw);
+		break;
+	default:
+		break;
+	}
+
+	return err;
+}
+
+/**
+ *  ngbe_read_phy_reg_mdi - Reads a value from a specified PHY register without
+ *  the SWFW lock
+ *  @hw: pointer to hardware structure
+ *  @reg_addr: 32 bit address of PHY register to read
+ *  @device_type: 5 bit device type
+ *  @phy_data: Pointer to read data from PHY register
+ **/
+s32 ngbe_read_phy_reg_mdi(struct ngbe_hw *hw, u32 reg_addr, u32 device_type,
+			   u16 *phy_data)
+{
+	u32 command, data;
+
+	/* Setup and write the address cycle command */
+	command = NGBE_MDIOSCA_REG(reg_addr) |
+		  NGBE_MDIOSCA_DEV(device_type) |
+		  NGBE_MDIOSCA_PORT(hw->phy.addr);
+	wr32(hw, NGBE_MDIOSCA, command);
+
+	command = NGBE_MDIOSCD_CMD_READ |
+		  NGBE_MDIOSCD_BUSY |
+		  NGBE_MDIOSCD_CLOCK(6);
+	wr32(hw, NGBE_MDIOSCD, command);
+
+	/*
+	 * Check every 10 usec to see if the address cycle completed.
+	 * The MDI Command bit will clear when the operation is
+	 * complete
+	 */
+	if (!po32m(hw, NGBE_MDIOSCD, NGBE_MDIOSCD_BUSY,
+		0, NULL, 100, 100)) {
+		DEBUGOUT("PHY address command did not complete\n");
+		return NGBE_ERR_PHY;
+	}
+
+	data = rd32(hw, NGBE_MDIOSCD);
+	*phy_data = (u16)NGBE_MDIOSCD_DAT_R(data);
+
+	return 0;
+}
+
+/**
+ *  ngbe_read_phy_reg - Reads a value from a specified PHY register
+ *  using the SWFW lock - this function is needed in most cases
+ *  @hw: pointer to hardware structure
+ *  @reg_addr: 32 bit address of PHY register to read
+ *  @device_type: 5 bit device type
+ *  @phy_data: Pointer to read data from PHY register
+ **/
+s32 ngbe_read_phy_reg(struct ngbe_hw *hw, u32 reg_addr,
+			       u32 device_type, u16 *phy_data)
+{
+	s32 err;
+	u32 gssr = hw->phy.phy_semaphore_mask;
+
+	DEBUGFUNC("ngbe_read_phy_reg");
+
+	if (hw->mac.acquire_swfw_sync(hw, gssr))
+		return NGBE_ERR_SWFW_SYNC;
+
+	err = hw->phy.read_reg_unlocked(hw, reg_addr, device_type,
+					phy_data);
+
+	hw->mac.release_swfw_sync(hw, gssr);
+
+	return err;
+}
+
+/**
+ *  ngbe_write_phy_reg_mdi - Writes a value to specified PHY register
+ *  without SWFW lock
+ *  @hw: pointer to hardware structure
+ *  @reg_addr: 32 bit PHY register to write
+ *  @device_type: 5 bit device type
+ *  @phy_data: Data to write to the PHY register
+ **/
+s32 ngbe_write_phy_reg_mdi(struct ngbe_hw *hw, u32 reg_addr,
+				u32 device_type, u16 phy_data)
+{
+	u32 command;
+
+	/* write command */
+	command = NGBE_MDIOSCA_REG(reg_addr) |
+		  NGBE_MDIOSCA_DEV(device_type) |
+		  NGBE_MDIOSCA_PORT(hw->phy.addr);
+	wr32(hw, NGBE_MDIOSCA, command);
+
+	command = NGBE_MDIOSCD_CMD_WRITE |
+		  NGBE_MDIOSCD_DAT(phy_data) |
+		  NGBE_MDIOSCD_BUSY |
+		  NGBE_MDIOSCD_CLOCK(6);
+	wr32(hw, NGBE_MDIOSCD, command);
+
+	/* wait for completion */
+	if (!po32m(hw, NGBE_MDIOSCD, NGBE_MDIOSCD_BUSY,
+		0, NULL, 100, 100)) {
+		DEBUGOUT("PHY write command did not complete\n");
+		return NGBE_ERR_PHY;
+	}
+
+	return 0;
+}
+
+/**
+ *  ngbe_write_phy_reg - Writes a value to specified PHY register
+ *  using SWFW lock- this function is needed in most cases
+ *  @hw: pointer to hardware structure
+ *  @reg_addr: 32 bit PHY register to write
+ *  @device_type: 5 bit device type
+ *  @phy_data: Data to write to the PHY register
+ **/
+s32 ngbe_write_phy_reg(struct ngbe_hw *hw, u32 reg_addr,
+				u32 device_type, u16 phy_data)
+{
+	s32 err;
+	u32 gssr = hw->phy.phy_semaphore_mask;
+
+	DEBUGFUNC("ngbe_write_phy_reg");
+
+	if (hw->mac.acquire_swfw_sync(hw, gssr))
+		return NGBE_ERR_SWFW_SYNC;
+
+	err = hw->phy.write_reg_unlocked(hw, reg_addr, device_type,
+					 phy_data);
+
+	hw->mac.release_swfw_sync(hw, gssr);
+
+	return err;
+}
+
+/**
+ *  ngbe_init_phy - PHY specific init
+ *  @hw: pointer to hardware structure
+ *
+ *  Initialize any function pointers that were not able to be
+ *  set during init_shared_code because the PHY type was
+ *  not known.
+ *
+ **/
+s32 ngbe_init_phy(struct ngbe_hw *hw)
+{
+	struct ngbe_phy_info *phy = &hw->phy;
+	s32 err = 0;
+
+	DEBUGFUNC("ngbe_init_phy");
+
+	hw->phy.addr = 0;
+
+	switch (hw->sub_device_id) {
+	case NGBE_SUB_DEV_ID_EM_RTL_SGMII:
+		hw->phy.read_reg_unlocked = ngbe_read_phy_reg_rtl;
+		hw->phy.write_reg_unlocked = ngbe_write_phy_reg_rtl;
+		break;
+	case NGBE_SUB_DEV_ID_EM_MVL_RGMII:
+	case NGBE_SUB_DEV_ID_EM_MVL_SFP:
+		hw->phy.read_reg_unlocked = ngbe_read_phy_reg_mvl;
+		hw->phy.write_reg_unlocked = ngbe_write_phy_reg_mvl;
+		break;
+	case NGBE_SUB_DEV_ID_EM_YT8521S_SFP:
+		hw->phy.read_reg_unlocked = ngbe_read_phy_reg_yt;
+		hw->phy.write_reg_unlocked = ngbe_write_phy_reg_yt;
+		break;
+	default:
+		break;
+	}
+
+	hw->phy.phy_semaphore_mask = NGBE_MNGSEM_SWPHY;
+
+	/* Identify the PHY */
+	err = phy->identify(hw);
+
+	return err;
+}
+
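Once ngbe_init_phy() has installed the per-vendor accessors, callers are
expected to go through the ops table so the correct accessor and the SWFW
lock are used. A minimal sketch of a locked read (caller context assumed;
this mirrors ngbe_get_phy_id() above):

	u16 id_high = 0;

	if (hw->phy.read_reg(hw, NGBE_MD_PHY_ID_HIGH,
			     NGBE_MD_DEV_PMA_PMD, &id_high) == 0)
		DEBUGOUT("PHY ID HIGH: 0x%04X\n", id_high);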
diff --git a/drivers/net/ngbe/base/ngbe_phy.h b/drivers/net/ngbe/base/ngbe_phy.h
new file mode 100644
index 0000000000..59d9efe025
--- /dev/null
+++ b/drivers/net/ngbe/base/ngbe_phy.h
@@ -0,0 +1,60 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+ * Copyright(c) 2010-2017 Intel Corporation
+ */
+
+#ifndef _NGBE_PHY_H_
+#define _NGBE_PHY_H_
+
+#include "ngbe_type.h"
+#include "ngbe_phy_rtl.h"
+#include "ngbe_phy_mvl.h"
+#include "ngbe_phy_yt.h"
+
+/******************************************************************************
+ * PHY MDIO Registers:
+ ******************************************************************************/
+#define NGBE_MAX_PHY_ADDR		32
+
+/* (dev_type = 1) */
+#define NGBE_MD_DEV_PMA_PMD		0x1
+#define NGBE_MD_PHY_ID_HIGH		0x2 /* PHY ID High Reg*/
+#define NGBE_MD_PHY_ID_LOW		0x3 /* PHY ID Low Reg*/
+#define   NGBE_PHY_REVISION_MASK	0xFFFFFFF0
+
+/* IEEE 802.3 Clause 22 */
+struct mdi_reg_22 {
+	u16 page;
+	u16 addr;
+	u16 device_type;
+};
+typedef struct mdi_reg_22 mdi_reg_22_t;
+
+/* IEEE 802.3ae Clause 45 */
+struct mdi_reg {
+	u16 device_type;
+	u16 addr;
+};
+typedef struct mdi_reg mdi_reg_t;
+
+#define NGBE_MD22_PHY_ID_HIGH		0x2 /* PHY ID High Reg*/
+#define NGBE_MD22_PHY_ID_LOW		0x3 /* PHY ID Low Reg*/
+
+s32 ngbe_mdi_map_register(mdi_reg_t *reg, mdi_reg_22_t *reg22);
+
+bool ngbe_validate_phy_addr(struct ngbe_hw *hw, u32 phy_addr);
+enum ngbe_phy_type ngbe_get_phy_type_from_id(struct ngbe_hw *hw);
+s32 ngbe_get_phy_id(struct ngbe_hw *hw);
+s32 ngbe_identify_phy(struct ngbe_hw *hw);
+s32 ngbe_reset_phy(struct ngbe_hw *hw);
+s32 ngbe_read_phy_reg_mdi(struct ngbe_hw *hw, u32 reg_addr, u32 device_type,
+			   u16 *phy_data);
+s32 ngbe_write_phy_reg_mdi(struct ngbe_hw *hw, u32 reg_addr, u32 device_type,
+			    u16 phy_data);
+s32 ngbe_read_phy_reg(struct ngbe_hw *hw, u32 reg_addr,
+			       u32 device_type, u16 *phy_data);
+s32 ngbe_write_phy_reg(struct ngbe_hw *hw, u32 reg_addr,
+				u32 device_type, u16 phy_data);
+s32 ngbe_check_reset_blocked(struct ngbe_hw *hw);
+
+#endif /* _NGBE_PHY_H_ */
diff --git a/drivers/net/ngbe/base/ngbe_phy_mvl.c b/drivers/net/ngbe/base/ngbe_phy_mvl.c
new file mode 100644
index 0000000000..40419a61f6
--- /dev/null
+++ b/drivers/net/ngbe/base/ngbe_phy_mvl.c
@@ -0,0 +1,89 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+ */
+
+#include "ngbe_phy_mvl.h"
+
+#define MVL_PHY_RST_WAIT_PERIOD  5
+
+s32 ngbe_read_phy_reg_mvl(struct ngbe_hw *hw,
+		u32 reg_addr, u32 device_type, u16 *phy_data)
+{
+	mdi_reg_t reg;
+	mdi_reg_22_t reg22;
+
+	reg.device_type = device_type;
+	reg.addr = reg_addr;
+
+	if (hw->phy.media_type == ngbe_media_type_fiber)
+		ngbe_write_phy_reg_mdi(hw, MVL_PAGE_SEL, 0, 1);
+	else
+		ngbe_write_phy_reg_mdi(hw, MVL_PAGE_SEL, 0, 0);
+
+	ngbe_mdi_map_register(&reg, &reg22);
+
+	ngbe_read_phy_reg_mdi(hw, reg22.addr, reg22.device_type, phy_data);
+
+	return 0;
+}
+
+s32 ngbe_write_phy_reg_mvl(struct ngbe_hw *hw,
+		u32 reg_addr, u32 device_type, u16 phy_data)
+{
+	mdi_reg_t reg;
+	mdi_reg_22_t reg22;
+
+	reg.device_type = device_type;
+	reg.addr = reg_addr;
+
+	if (hw->phy.media_type == ngbe_media_type_fiber)
+		ngbe_write_phy_reg_mdi(hw, MVL_PAGE_SEL, 0, 1);
+	else
+		ngbe_write_phy_reg_mdi(hw, MVL_PAGE_SEL, 0, 0);
+
+	ngbe_mdi_map_register(&reg, &reg22);
+
+	ngbe_write_phy_reg_mdi(hw, reg22.addr, reg22.device_type, phy_data);
+
+	return 0;
+}
+
+s32 ngbe_reset_phy_mvl(struct ngbe_hw *hw)
+{
+	u32 i;
+	u16 ctrl = 0;
+	s32 status = 0;
+
+	DEBUGFUNC("ngbe_reset_phy_mvl");
+
+	if (hw->phy.type != ngbe_phy_mvl && hw->phy.type != ngbe_phy_mvl_sfi)
+		return NGBE_ERR_PHY_TYPE;
+
+	/* select page 18 reg 20 */
+	status = ngbe_write_phy_reg_mdi(hw, MVL_PAGE_SEL, 0, 18);
+
+	/* mode select to RGMII-to-copper or RGMII-to-SFI */
+	if (hw->phy.type == ngbe_phy_mvl)
+		ctrl = MVL_GEN_CTL_MODE_COPPER;
+	else
+		ctrl = MVL_GEN_CTL_MODE_FIBER;
+	status = ngbe_write_phy_reg_mdi(hw, MVL_GEN_CTL, 0, ctrl);
+	/* mode reset */
+	ctrl |= MVL_GEN_CTL_RESET;
+	status = ngbe_write_phy_reg_mdi(hw, MVL_GEN_CTL, 0, ctrl);
+
+	for (i = 0; i < MVL_PHY_RST_WAIT_PERIOD; i++) {
+		status = ngbe_read_phy_reg_mdi(hw, MVL_GEN_CTL, 0, &ctrl);
+		if (!(ctrl & MVL_GEN_CTL_RESET))
+			break;
+		msleep(1);
+	}
+
+	if (i == MVL_PHY_RST_WAIT_PERIOD) {
+		DEBUGOUT("PHY reset polling failed to complete.\n");
+		return NGBE_ERR_RESET_FAILED;
+	}
+
+	return status;
+}
+
diff --git a/drivers/net/ngbe/base/ngbe_phy_mvl.h b/drivers/net/ngbe/base/ngbe_phy_mvl.h
new file mode 100644
index 0000000000..a88ace9ec1
--- /dev/null
+++ b/drivers/net/ngbe/base/ngbe_phy_mvl.h
@@ -0,0 +1,92 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+ */
+
+#include "ngbe_phy.h"
+
+#ifndef _NGBE_PHY_MVL_H_
+#define _NGBE_PHY_MVL_H_
+
+#define NGBE_PHYID_MVL			0x01410DD0U
+
+/* Page 0 for Copper, Page 1 for Fiber */
+#define MVL_CTRL			0x0
+#define   MVL_CTRL_RESET		MS16(15, 0x1)
+#define   MVL_CTRL_ANE			MS16(12, 0x1)
+#define   MVL_CTRL_RESTART_AN		MS16(9, 0x1)
+#define MVL_ANA				0x4
+/* copper */
+#define   MVL_CANA_ASM_PAUSE		MS16(11, 0x1)
+#define   MVL_CANA_PAUSE		MS16(10, 0x1)
+#define   MVL_PHY_100BASET_FULL		MS16(8, 0x1)
+#define   MVL_PHY_100BASET_HALF		MS16(7, 0x1)
+#define   MVL_PHY_10BASET_FULL		MS16(6, 0x1)
+#define   MVL_PHY_10BASET_HALF		MS16(5, 0x1)
+/* fiber */
+#define   MVL_FANA_PAUSE_MASK		MS16(7, 0x3)
+#define     MVL_FANA_SYM_PAUSE		LS16(1, 7, 0x3)
+#define     MVL_FANA_ASM_PAUSE		LS16(2, 7, 0x3)
+#define   MVL_PHY_1000BASEX_HALF	MS16(6, 0x1)
+#define   MVL_PHY_1000BASEX_FULL	MS16(5, 0x1)
+#define MVL_LPAR			0x5
+#define   MVL_CLPAR_ASM_PAUSE		MS(11, 0x1)
+#define   MVL_CLPAR_PAUSE		MS(10, 0x1)
+#define   MVL_FLPAR_PAUSE_MASK		MS(7, 0x3)
+#define MVL_PHY_1000BASET		0x9
+#define   MVL_PHY_1000BASET_FULL	MS16(9, 0x1)
+#define   MVL_PHY_1000BASET_HALF	MS16(8, 0x1)
+#define MVL_CTRL1			0x10
+#define   MVL_CTRL1_INTR_POL		MS16(2, 0x1)
+#define MVL_PHYSR			0x11
+#define   MVL_PHYSR_SPEED_MASK		MS16(14, 0x3)
+#define     MVL_PHYSR_SPEED_1000M	LS16(2, 14, 0x3)
+#define     MVL_PHYSR_SPEED_100M	LS16(1, 14, 0x3)
+#define     MVL_PHYSR_SPEED_10M		LS16(0, 14, 0x3)
+#define   MVL_PHYSR_LINK		MS16(10, 0x1)
+#define MVL_INTR_EN			0x12
+#define   MVL_INTR_EN_ANC		MS16(11, 0x1)
+#define   MVL_INTR_EN_LSC		MS16(10, 0x1)
+#define MVL_INTR			0x13
+#define   MVL_INTR_ANC			MS16(11, 0x1)
+#define   MVL_INTR_LSC			MS16(10, 0x1)
+
+/* Page 2 */
+#define MVL_RGM_CTL2			0x15
+#define   MVL_RGM_CTL2_TTC		MS16(4, 0x1)
+#define   MVL_RGM_CTL2_RTC		MS16(5, 0x1)
+/* Page 3 */
+#define MVL_LEDFCR			0x10
+#define   MVL_LEDFCR_CTL1		MS16(4, 0xF)
+#define     MVL_LEDFCR_CTL1_CONF	LS16(6, 4, 0xF)
+#define   MVL_LEDFCR_CTL0		MS16(0, 0xF)
+#define     MVL_LEDFCR_CTL0_CONF	LS16(1, 0, 0xF)
+#define MVL_LEDPCR			0x11
+#define   MVL_LEDPCR_CTL1		MS16(2, 0x3)
+#define     MVL_LEDPCR_CTL1_CONF	LS16(1, 2, 0x3)
+#define   MVL_LEDPCR_CTL0		MS16(0, 0x3)
+#define     MVL_LEDPCR_CTL0_CONF	LS16(1, 0, 0x3)
+#define MVL_LEDTCR			0x12
+#define   MVL_LEDTCR_INTR_POL		MS16(11, 0x1)
+#define   MVL_LEDTCR_INTR_EN		MS16(7, 0x1)
+/* Page 18 */
+#define MVL_GEN_CTL			0x14
+#define   MVL_GEN_CTL_RESET		MS16(15, 0x1)
+#define   MVL_GEN_CTL_MODE(v)		LS16(v, 0, 0x7)
+#define     MVL_GEN_CTL_MODE_COPPER	LS16(0, 0, 0x7)
+#define     MVL_GEN_CTL_MODE_FIBER	LS16(2, 0, 0x7)
+
+/* reg 22 */
+#define MVL_PAGE_SEL			22
+
+/* reg 19_0 INT status*/
+#define MVL_PHY_ANC			0x0800
+#define MVL_PHY_LSC			0x0400
+
+s32 ngbe_read_phy_reg_mvl(struct ngbe_hw *hw, u32 reg_addr, u32 device_type,
+			u16 *phy_data);
+s32 ngbe_write_phy_reg_mvl(struct ngbe_hw *hw, u32 reg_addr, u32 device_type,
+			u16 phy_data);
+
+s32 ngbe_reset_phy_mvl(struct ngbe_hw *hw);
+
+#endif /* _NGBE_PHY_MVL_H_ */
diff --git a/drivers/net/ngbe/base/ngbe_phy_rtl.c b/drivers/net/ngbe/base/ngbe_phy_rtl.c
new file mode 100644
index 0000000000..9c98a8ecaf
--- /dev/null
+++ b/drivers/net/ngbe/base/ngbe_phy_rtl.c
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+ */
+
+#include "ngbe_phy_rtl.h"
+
+s32 ngbe_read_phy_reg_rtl(struct ngbe_hw *hw,
+		u32 reg_addr, u32 device_type, u16 *phy_data)
+{
+	mdi_reg_t reg;
+	mdi_reg_22_t reg22;
+
+	reg.device_type = device_type;
+	reg.addr = reg_addr;
+	ngbe_mdi_map_register(&reg, &reg22);
+
+	wr32(hw, NGBE_PHY_CONFIG(RTL_PAGE_SELECT), reg22.page);
+	*phy_data = 0xFFFF & rd32(hw, NGBE_PHY_CONFIG(reg22.addr));
+
+	return 0;
+}
+
+s32 ngbe_write_phy_reg_rtl(struct ngbe_hw *hw,
+		u32 reg_addr, u32 device_type, u16 phy_data)
+{
+	mdi_reg_t reg;
+	mdi_reg_22_t reg22;
+
+	reg.device_type = device_type;
+	reg.addr = reg_addr;
+	ngbe_mdi_map_register(&reg, &reg22);
+
+	wr32(hw, NGBE_PHY_CONFIG(RTL_PAGE_SELECT), reg22.page);
+	wr32(hw, NGBE_PHY_CONFIG(reg22.addr), phy_data);
+
+	return 0;
+}
+
+s32 ngbe_reset_phy_rtl(struct ngbe_hw *hw)
+{
+	UNREFERENCED_PARAMETER(hw);
+	return 0;
+}
+
diff --git a/drivers/net/ngbe/base/ngbe_phy_rtl.h b/drivers/net/ngbe/base/ngbe_phy_rtl.h
new file mode 100644
index 0000000000..ecb60b0ddd
--- /dev/null
+++ b/drivers/net/ngbe/base/ngbe_phy_rtl.h
@@ -0,0 +1,83 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+ */
+
+#include "ngbe_phy.h"
+
+#ifndef _NGBE_PHY_RTL_H_
+#define _NGBE_PHY_RTL_H_
+
+#define NGBE_PHYID_RTL			0x001CC800U
+
+/* Page 0 */
+#define RTL_DEV_ZERO			0
+#define RTL_BMCR			0x0
+#define   RTL_BMCR_RESET		MS16(15, 0x1)
+#define   RTL_BMCR_SPEED_SELECT0	MS16(13, 0x1)
+#define   RTL_BMCR_ANE			MS16(12, 0x1)
+#define   RTL_BMCR_RESTART_AN		MS16(9, 0x1)
+#define   RTL_BMCR_DUPLEX		MS16(8, 0x1)
+#define   RTL_BMCR_SPEED_SELECT1	MS16(6, 0x1)
+#define RTL_BMSR			0x1
+#define   RTL_BMSR_ANC			MS16(5, 0x1)
+#define RTL_ID1_OFFSET			0x2
+#define RTL_ID2_OFFSET			0x3
+#define RTL_ID_MASK			0xFFFFFC00U
+#define RTL_ANAR			0x4
+#define   RTL_ANAR_APAUSE		MS16(11, 0x1)
+#define   RTL_ANAR_PAUSE		MS16(10, 0x1)
+#define   RTL_ANAR_100F			MS16(8, 0x1)
+#define   RTL_ANAR_100H			MS16(7, 0x1)
+#define   RTL_ANAR_10F			MS16(6, 0x1)
+#define   RTL_ANAR_10H			MS16(5, 0x1)
+#define RTL_ANLPAR			0x5
+#define   RTL_ANLPAR_LP			MS16(10, 0x3)
+#define RTL_GBCR			0x9
+#define   RTL_GBCR_1000F		MS16(9, 0x1)
+/* Page 0xa42*/
+#define RTL_GSR				0x10
+#define   RTL_GSR_ST			MS16(0, 0x7)
+#define   RTL_GSR_ST_LANON		MS16(0, 0x3)
+#define RTL_INER			0x12
+#define   RTL_INER_LSC			MS16(4, 0x1)
+#define   RTL_INER_ANC			MS16(3, 0x1)
+/* Page 0xa43*/
+#define RTL_PHYSR			0x1A
+#define   RTL_PHYSR_SPEED_MASK		MS16(4, 0x3)
+#define     RTL_PHYSR_SPEED_RES		LS16(3, 4, 0x3)
+#define     RTL_PHYSR_SPEED_1000M	LS16(2, 4, 0x3)
+#define     RTL_PHYSR_SPEED_100M	LS16(1, 4, 0x3)
+#define     RTL_PHYSR_SPEED_10M		LS16(0, 4, 0x3)
+#define   RTL_PHYSR_DP			MS16(3, 0x1)
+#define   RTL_PHYSR_RTLS		MS16(2, 0x1)
+#define RTL_INSR			0x1D
+#define   RTL_INSR_ACCESS		MS16(5, 0x1)
+#define   RTL_INSR_LSC			MS16(4, 0x1)
+#define   RTL_INSR_ANC			MS16(3, 0x1)
+/* Page 0xa46*/
+#define RTL_SCR				0x14
+#define   RTL_SCR_EXTINI		MS16(1, 0x1)
+#define   RTL_SCR_EFUSE			MS16(0, 0x1)
+/* Page 0xa47*/
+/* Page 0xd04*/
+#define RTL_LCR				0x10
+#define RTL_EEELCR			0x11
+#define RTL_LPCR			0x12
+
+/* INTERNAL PHY CONTROL */
+#define RTL_PAGE_SELECT			31
+#define NGBE_INTERNAL_PHY_OFFSET_MAX	32
+#define NGBE_INTERNAL_PHY_ID		0x000732
+
+#define NGBE_INTPHY_LED0		0x0010
+#define NGBE_INTPHY_LED1		0x0040
+#define NGBE_INTPHY_LED2		0x2000
+
+s32 ngbe_read_phy_reg_rtl(struct ngbe_hw *hw, u32 reg_addr, u32 device_type,
+			u16 *phy_data);
+s32 ngbe_write_phy_reg_rtl(struct ngbe_hw *hw, u32 reg_addr, u32 device_type,
+			u16 phy_data);
+
+s32 ngbe_reset_phy_rtl(struct ngbe_hw *hw);
+
+#endif /* _NGBE_PHY_RTL_H_ */
diff --git a/drivers/net/ngbe/base/ngbe_phy_yt.c b/drivers/net/ngbe/base/ngbe_phy_yt.c
new file mode 100644
index 0000000000..84b20de45c
--- /dev/null
+++ b/drivers/net/ngbe/base/ngbe_phy_yt.c
@@ -0,0 +1,112 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+ */
+
+#include "ngbe_phy_yt.h"
+
+#define YT_PHY_RST_WAIT_PERIOD		5
+
+s32 ngbe_read_phy_reg_yt(struct ngbe_hw *hw,
+		u32 reg_addr, u32 device_type, u16 *phy_data)
+{
+	mdi_reg_t reg;
+	mdi_reg_22_t reg22;
+
+	reg.device_type = device_type;
+	reg.addr = reg_addr;
+
+	ngbe_mdi_map_register(&reg, &reg22);
+
+	/* Read MII reg according to media type */
+	if (hw->phy.media_type == ngbe_media_type_fiber) {
+		ngbe_write_phy_reg_ext_yt(hw, YT_SMI_PHY,
+					reg22.device_type, YT_SMI_PHY_SDS);
+		ngbe_read_phy_reg_mdi(hw, reg22.addr,
+					reg22.device_type, phy_data);
+		ngbe_write_phy_reg_ext_yt(hw, YT_SMI_PHY,
+					reg22.device_type, 0);
+	} else {
+		ngbe_read_phy_reg_mdi(hw, reg22.addr,
+					reg22.device_type, phy_data);
+	}
+
+	return 0;
+}
+
+s32 ngbe_write_phy_reg_yt(struct ngbe_hw *hw,
+		u32 reg_addr, u32 device_type, u16 phy_data)
+{
+	mdi_reg_t reg;
+	mdi_reg_22_t reg22;
+
+	reg.device_type = device_type;
+	reg.addr = reg_addr;
+
+	ngbe_mdi_map_register(&reg, &reg22);
+
+	/* Write MII reg according to media type */
+	if (hw->phy.media_type == ngbe_media_type_fiber) {
+		ngbe_write_phy_reg_ext_yt(hw, YT_SMI_PHY,
+					reg22.device_type, YT_SMI_PHY_SDS);
+		ngbe_write_phy_reg_mdi(hw, reg22.addr,
+					reg22.device_type, phy_data);
+		ngbe_write_phy_reg_ext_yt(hw, YT_SMI_PHY,
+					reg22.device_type, 0);
+	} else {
+		ngbe_write_phy_reg_mdi(hw, reg22.addr,
+					reg22.device_type, phy_data);
+	}
+
+	return 0;
+}
+
+s32 ngbe_read_phy_reg_ext_yt(struct ngbe_hw *hw,
+		u32 reg_addr, u32 device_type, u16 *phy_data)
+{
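+	/* indirect access: 0x1E selects the ext reg, 0x1F holds the data */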
+	ngbe_write_phy_reg_mdi(hw, 0x1E, device_type, reg_addr);
+	ngbe_read_phy_reg_mdi(hw, 0x1F, device_type, phy_data);
+
+	return 0;
+}
+
+s32 ngbe_write_phy_reg_ext_yt(struct ngbe_hw *hw,
+		u32 reg_addr, u32 device_type, u16 phy_data)
+{
+	ngbe_write_phy_reg_mdi(hw, 0x1E, device_type, reg_addr);
+	ngbe_write_phy_reg_mdi(hw, 0x1F, device_type, phy_data);
+
+	return 0;
+}
+
+s32 ngbe_reset_phy_yt(struct ngbe_hw *hw)
+{
+	u32 i;
+	u16 ctrl = 0;
+	s32 status = 0;
+
+	DEBUGFUNC("ngbe_reset_phy_yt");
+
+	if (hw->phy.type != ngbe_phy_yt8521s &&
+		hw->phy.type != ngbe_phy_yt8521s_sfi)
+		return NGBE_ERR_PHY_TYPE;
+
+	status = hw->phy.read_reg(hw, YT_BCR, 0, &ctrl);
+	/* sds software reset */
+	ctrl |= YT_BCR_RESET;
+	status = hw->phy.write_reg(hw, YT_BCR, 0, ctrl);
+
+	for (i = 0; i < YT_PHY_RST_WAIT_PERIOD; i++) {
+		status = hw->phy.read_reg(hw, YT_BCR, 0, &ctrl);
+		if (!(ctrl & YT_BCR_RESET))
+			break;
+		msleep(1);
+	}
+
+	if (i == YT_PHY_RST_WAIT_PERIOD) {
+		DEBUGOUT("PHY reset polling failed to complete.\n");
+		return NGBE_ERR_RESET_FAILED;
+	}
+
+	return status;
+}
+
diff --git a/drivers/net/ngbe/base/ngbe_phy_yt.h b/drivers/net/ngbe/base/ngbe_phy_yt.h
new file mode 100644
index 0000000000..03b53ece86
--- /dev/null
+++ b/drivers/net/ngbe/base/ngbe_phy_yt.h
@@ -0,0 +1,66 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+ */
+
+#include "ngbe_phy.h"
+
+#ifndef _NGBE_PHY_YT_H_
+#define _NGBE_PHY_YT_H_
+
+#define NGBE_PHYID_YT			0x00000110U
+
+/* Common EXT */
+#define YT_SMI_PHY			0xA000
+#define   YT_SMI_PHY_SDS		MS16(1, 0x1) /* 0 for UTP */
+#define YT_CHIP				0xA001
+#define   YT_CHIP_SW_RST		MS16(15, 0x1)
+#define   YT_CHIP_SW_LDO_EN		MS16(6, 0x1)
+#define   YT_CHIP_MODE_SEL(v)		LS16(v, 0, 0x7)
+#define YT_RGMII_CONF1			0xA003
+#define   YT_RGMII_CONF1_RXDELAY	MS16(10, 0xF)
+#define   YT_RGMII_CONF1_TXDELAY_FE	MS16(4, 0xF)
+#define   YT_RGMII_CONF1_TXDELAY	MS16(0, 0x1)
+#define YT_MISC				0xA006
+#define   YT_MISC_FIBER_PRIO		MS16(8, 0x1) /* 0 for UTP */
+
+/* MII common registers in UTP and SDS */
+#define YT_BCR				0x0
+#define   YT_BCR_RESET			MS16(15, 0x1)
+#define YT_ANA				0x4
+/* copper */
+#define   YT_ANA_100BASET_FULL		MS16(8, 0x1)
+#define   YT_ANA_10BASET_FULL		MS16(6, 0x1)
+/* fiber */
+#define   YT_FANA_PAUSE_MASK		MS16(7, 0x3)
+
+#define YT_LPAR				0x5
+#define   YT_CLPAR_ASM_PAUSE		MS(11, 0x1)
+#define   YT_CLPAR_PAUSE		MS(10, 0x1)
+#define   YT_FLPAR_PAUSE_MASK		MS(7, 0x3)
+
+#define YT_MS_CTRL			0x9
+#define   YT_MS_1000BASET_FULL		MS16(9, 0x1)
+#define YT_SPST				0x11
+#define   YT_SPST_SPEED_MASK		MS16(14, 0x3)
+#define     YT_SPST_SPEED_1000M		LS16(2, 14, 0x3)
+#define     YT_SPST_SPEED_100M		LS16(1, 14, 0x3)
+#define     YT_SPST_SPEED_10M		LS16(0, 14, 0x3)
+#define   YT_SPST_LINK			MS16(10, 0x1)
+
+/* UTP only */
+#define YT_INTR				0x12
+#define   YT_INTR_ENA_MASK		MS16(2, 0x3)
+#define YT_INTR_STATUS			0x13
+
+s32 ngbe_read_phy_reg_yt(struct ngbe_hw *hw, u32 reg_addr, u32 device_type,
+			u16 *phy_data);
+s32 ngbe_write_phy_reg_yt(struct ngbe_hw *hw, u32 reg_addr, u32 device_type,
+			u16 phy_data);
+s32 ngbe_read_phy_reg_ext_yt(struct ngbe_hw *hw,
+		u32 reg_addr, u32 device_type, u16 *phy_data);
+s32 ngbe_write_phy_reg_ext_yt(struct ngbe_hw *hw,
+		u32 reg_addr, u32 device_type, u16 phy_data);
+
+s32 ngbe_reset_phy_yt(struct ngbe_hw *hw);
+
+#endif /* _NGBE_PHY_YT_H_ */
diff --git a/drivers/net/ngbe/base/ngbe_type.h b/drivers/net/ngbe/base/ngbe_type.h
index 21756c4690..b5c05e0f2f 100644
--- a/drivers/net/ngbe/base/ngbe_type.h
+++ b/drivers/net/ngbe/base/ngbe_type.h
@@ -93,6 +93,7 @@ struct ngbe_mac_info {
 
 	/* Manageability interface */
 	s32 (*init_thermal_sensor_thresh)(struct ngbe_hw *hw);
+	s32 (*check_overtemp)(struct ngbe_hw *hw);
 
 	enum ngbe_mac_type type;
 	u32 max_tx_queues;
@@ -102,8 +103,24 @@ struct ngbe_mac_info {
 };
 
 struct ngbe_phy_info {
+	s32 (*identify)(struct ngbe_hw *hw);
+	s32 (*reset_hw)(struct ngbe_hw *hw);
+	s32 (*read_reg)(struct ngbe_hw *hw, u32 reg_addr,
+				u32 device_type, u16 *phy_data);
+	s32 (*write_reg)(struct ngbe_hw *hw, u32 reg_addr,
+				u32 device_type, u16 phy_data);
+	s32 (*read_reg_unlocked)(struct ngbe_hw *hw, u32 reg_addr,
+				u32 device_type, u16 *phy_data);
+	s32 (*write_reg_unlocked)(struct ngbe_hw *hw, u32 reg_addr,
+				u32 device_type, u16 phy_data);
+
 	enum ngbe_media_type media_type;
 	enum ngbe_phy_type type;
+	u32 addr;
+	u32 id;
+	u32 revision;
+	u32 phy_semaphore_mask;
+	bool reset_disable;
 };
 
 struct ngbe_hw {
-- 
2.27.0




^ permalink raw reply	[flat|nested] 51+ messages in thread

* [dpdk-dev] [PATCH v5 11/24] net/ngbe: store MAC address
  2021-06-02  9:40 [dpdk-dev] [PATCH v5 00/24] net: ngbe PMD Jiawen Wu
                   ` (9 preceding siblings ...)
  2021-06-02  9:40 ` [dpdk-dev] [PATCH v5 10/24] net/ngbe: identify PHY and reset PHY Jiawen Wu
@ 2021-06-02  9:40 ` Jiawen Wu
  2021-06-02  9:40 ` [dpdk-dev] [PATCH v5 12/24] net/ngbe: add info get operation Jiawen Wu
                   ` (14 subsequent siblings)
  25 siblings, 0 replies; 51+ messages in thread
From: Jiawen Wu @ 2021-06-02  9:40 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Store MAC addresses and init receive address filters.
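
For reference, once the PMD has stored the permanent address into
dev->data->mac_addrs[0], an application can read it back through the
generic ethdev API. A minimal fragment, assuming the device probed as
port id 0:

    #include <rte_ethdev.h>

    struct rte_ether_addr addr;

    /* copies dev->data->mac_addrs[0], filled in by the PMD at init */
    rte_eth_macaddr_get(0, &addr);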

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/ngbe/base/ngbe_dummy.h |  33 +++
 drivers/net/ngbe/base/ngbe_hw.c    | 323 +++++++++++++++++++++++++++++
 drivers/net/ngbe/base/ngbe_hw.h    |  13 ++
 drivers/net/ngbe/base/ngbe_osdep.h |   1 +
 drivers/net/ngbe/base/ngbe_type.h  |  19 ++
 drivers/net/ngbe/ngbe_ethdev.c     |  25 +++
 drivers/net/ngbe/ngbe_ethdev.h     |   2 +
 7 files changed, 416 insertions(+)

diff --git a/drivers/net/ngbe/base/ngbe_dummy.h b/drivers/net/ngbe/base/ngbe_dummy.h
index 15017cfd82..8462d6d1cb 100644
--- a/drivers/net/ngbe/base/ngbe_dummy.h
+++ b/drivers/net/ngbe/base/ngbe_dummy.h
@@ -51,6 +51,10 @@ static inline s32 ngbe_mac_stop_hw_dummy(struct ngbe_hw *TUP0)
 {
 	return NGBE_ERR_OPS_DUMMY;
 }
+static inline s32 ngbe_mac_get_mac_addr_dummy(struct ngbe_hw *TUP0, u8 *TUP1)
+{
+	return NGBE_ERR_OPS_DUMMY;
+}
 static inline s32 ngbe_mac_acquire_swfw_sync_dummy(struct ngbe_hw *TUP0,
 					u32 TUP1)
 {
@@ -60,6 +64,29 @@ static inline void ngbe_mac_release_swfw_sync_dummy(struct ngbe_hw *TUP0,
 					u32 TUP1)
 {
 }
+static inline s32 ngbe_mac_set_rar_dummy(struct ngbe_hw *TUP0, u32 TUP1,
+					u8 *TUP2, u32 TUP3, u32 TUP4)
+{
+	return NGBE_ERR_OPS_DUMMY;
+}
+static inline s32 ngbe_mac_clear_rar_dummy(struct ngbe_hw *TUP0, u32 TUP1)
+{
+	return NGBE_ERR_OPS_DUMMY;
+}
+static inline s32 ngbe_mac_set_vmdq_dummy(struct ngbe_hw *TUP0, u32 TUP1,
+					u32 TUP2)
+{
+	return NGBE_ERR_OPS_DUMMY;
+}
+static inline s32 ngbe_mac_clear_vmdq_dummy(struct ngbe_hw *TUP0, u32 TUP1,
+					u32 TUP2)
+{
+	return NGBE_ERR_OPS_DUMMY;
+}
+static inline s32 ngbe_mac_init_rx_addrs_dummy(struct ngbe_hw *TUP0)
+{
+	return NGBE_ERR_OPS_DUMMY;
+}
 static inline s32 ngbe_mac_init_thermal_ssth_dummy(struct ngbe_hw *TUP0)
 {
 	return NGBE_ERR_OPS_DUMMY;
@@ -105,8 +132,14 @@ static inline void ngbe_init_ops_dummy(struct ngbe_hw *hw)
 	hw->mac.init_hw = ngbe_mac_init_hw_dummy;
 	hw->mac.reset_hw = ngbe_mac_reset_hw_dummy;
 	hw->mac.stop_hw = ngbe_mac_stop_hw_dummy;
+	hw->mac.get_mac_addr = ngbe_mac_get_mac_addr_dummy;
 	hw->mac.acquire_swfw_sync = ngbe_mac_acquire_swfw_sync_dummy;
 	hw->mac.release_swfw_sync = ngbe_mac_release_swfw_sync_dummy;
+	hw->mac.set_rar = ngbe_mac_set_rar_dummy;
+	hw->mac.clear_rar = ngbe_mac_clear_rar_dummy;
+	hw->mac.set_vmdq = ngbe_mac_set_vmdq_dummy;
+	hw->mac.clear_vmdq = ngbe_mac_clear_vmdq_dummy;
+	hw->mac.init_rx_addrs = ngbe_mac_init_rx_addrs_dummy;
 	hw->mac.init_thermal_sensor_thresh = ngbe_mac_init_thermal_ssth_dummy;
 	hw->mac.check_overtemp = ngbe_mac_check_overtemp_dummy;
 	hw->phy.identify = ngbe_phy_identify_dummy;
diff --git a/drivers/net/ngbe/base/ngbe_hw.c b/drivers/net/ngbe/base/ngbe_hw.c
index ebd163d9e6..386557f468 100644
--- a/drivers/net/ngbe/base/ngbe_hw.c
+++ b/drivers/net/ngbe/base/ngbe_hw.c
@@ -142,9 +142,49 @@ s32 ngbe_reset_hw_em(struct ngbe_hw *hw)
 
 	msec_delay(50);
 
+	/* Store the permanent mac address */
+	hw->mac.get_mac_addr(hw, hw->mac.perm_addr);
+
+	/*
+	 * Store MAC address from RAR0, clear receive address registers, and
+	 * clear the multicast table.
+	 */
+	hw->mac.num_rar_entries = NGBE_EM_RAR_ENTRIES;
+	hw->mac.init_rx_addrs(hw);
+
 	return status;
 }
 
+/**
+ *  ngbe_get_mac_addr - Generic get MAC address
+ *  @hw: pointer to hardware structure
+ *  @mac_addr: Adapter MAC address
+ *
+ *  Reads the adapter's MAC address from first Receive Address Register (RAR0)
+ *  A reset of the adapter must be performed prior to calling this function
+ *  in order for the MAC address to have been loaded from the EEPROM into RAR0
+ **/
+s32 ngbe_get_mac_addr(struct ngbe_hw *hw, u8 *mac_addr)
+{
+	u32 rar_high;
+	u32 rar_low;
+	u16 i;
+
+	DEBUGFUNC("ngbe_get_mac_addr");
+
+	wr32(hw, NGBE_ETHADDRIDX, 0);
+	rar_high = rd32(hw, NGBE_ETHADDRH);
+	rar_low = rd32(hw, NGBE_ETHADDRL);
+
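+	/* RAR0 high word holds octets 0-1, low word holds octets 2-5 */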
+	for (i = 0; i < 2; i++)
+		mac_addr[i] = (u8)(rar_high >> (1 - i) * 8);
+
+	for (i = 0; i < 4; i++)
+		mac_addr[i + 2] = (u8)(rar_low >> (3 - i) * 8);
+
+	return 0;
+}
+
 /**
  *  ngbe_set_lan_id_multi_port - Set LAN id for PCIe multiple port devices
  *  @hw: pointer to the HW structure
@@ -215,6 +255,196 @@ s32 ngbe_stop_hw(struct ngbe_hw *hw)
 	return 0;
 }
 
+/**
+ *  ngbe_validate_mac_addr - Validate MAC address
+ *  @mac_addr: pointer to MAC address.
+ *
+ *  Tests a MAC address to ensure it is a valid Individual Address.
+ **/
+s32 ngbe_validate_mac_addr(u8 *mac_addr)
+{
+	s32 status = 0;
+
+	DEBUGFUNC("ngbe_validate_mac_addr");
+
+	/* Make sure it is not a multicast address */
+	if (NGBE_IS_MULTICAST((struct rte_ether_addr *)mac_addr)) {
+		status = NGBE_ERR_INVALID_MAC_ADDR;
+	/* Not a broadcast address */
+	} else if (NGBE_IS_BROADCAST((struct rte_ether_addr *)mac_addr)) {
+		status = NGBE_ERR_INVALID_MAC_ADDR;
+	/* Reject the zero address */
+	} else if (mac_addr[0] == 0 && mac_addr[1] == 0 && mac_addr[2] == 0 &&
+		   mac_addr[3] == 0 && mac_addr[4] == 0 && mac_addr[5] == 0) {
+		status = NGBE_ERR_INVALID_MAC_ADDR;
+	}
+	return status;
+}
+
+/**
+ *  ngbe_set_rar - Set Rx address register
+ *  @hw: pointer to hardware structure
+ *  @index: Receive address register to write
+ *  @addr: Address to put into receive address register
+ *  @vmdq: VMDq "set" or "pool" index
+ *  @enable_addr: set flag that address is active
+ *
+ *  Puts an ethernet address into a receive address register.
+ **/
+s32 ngbe_set_rar(struct ngbe_hw *hw, u32 index, u8 *addr, u32 vmdq,
+			  u32 enable_addr)
+{
+	u32 rar_low, rar_high;
+	u32 rar_entries = hw->mac.num_rar_entries;
+
+	DEBUGFUNC("ngbe_set_rar");
+
+	/* Make sure we are using a valid rar index range */
+	if (index >= rar_entries) {
+		DEBUGOUT("RAR index %d is out of range.\n", index);
+		return NGBE_ERR_INVALID_ARGUMENT;
+	}
+
+	/* setup VMDq pool selection before this RAR gets enabled */
+	hw->mac.set_vmdq(hw, index, vmdq);
+
+	/*
+	 * HW expects these in little endian so we reverse the byte
+	 * order from network order (big endian) to little endian
+	 */
+	rar_low = NGBE_ETHADDRL_AD0(addr[5]) |
+		  NGBE_ETHADDRL_AD1(addr[4]) |
+		  NGBE_ETHADDRL_AD2(addr[3]) |
+		  NGBE_ETHADDRL_AD3(addr[2]);
+	/*
+	 * Some parts put the VMDq setting in the extra RAH bits,
+	 * so save everything except the lower 16 bits that hold part
+	 * of the address and the address valid bit.
+	 */
+	rar_high = rd32(hw, NGBE_ETHADDRH);
+	rar_high &= ~NGBE_ETHADDRH_AD_MASK;
+	rar_high |= (NGBE_ETHADDRH_AD4(addr[1]) |
+		     NGBE_ETHADDRH_AD5(addr[0]));
+
+	rar_high &= ~NGBE_ETHADDRH_VLD;
+	if (enable_addr != 0)
+		rar_high |= NGBE_ETHADDRH_VLD;
+
+	wr32(hw, NGBE_ETHADDRIDX, index);
+	wr32(hw, NGBE_ETHADDRL, rar_low);
+	wr32(hw, NGBE_ETHADDRH, rar_high);
+
+	return 0;
+}
+
+/**
+ *  ngbe_clear_rar - Remove Rx address register
+ *  @hw: pointer to hardware structure
+ *  @index: Receive address register to write
+ *
+ *  Clears an ethernet address from a receive address register.
+ **/
+s32 ngbe_clear_rar(struct ngbe_hw *hw, u32 index)
+{
+	u32 rar_high;
+	u32 rar_entries = hw->mac.num_rar_entries;
+
+	DEBUGFUNC("ngbe_clear_rar");
+
+	/* Make sure we are using a valid rar index range */
+	if (index >= rar_entries) {
+		DEBUGOUT("RAR index %d is out of range.\n", index);
+		return NGBE_ERR_INVALID_ARGUMENT;
+	}
+
+	/*
+	 * Some parts put the VMDq setting in the extra RAH bits,
+	 * so save everything except the lower 16 bits that hold part
+	 * of the address and the address valid bit.
+	 */
+	wr32(hw, NGBE_ETHADDRIDX, index);
+	rar_high = rd32(hw, NGBE_ETHADDRH);
+	rar_high &= ~(NGBE_ETHADDRH_AD_MASK | NGBE_ETHADDRH_VLD);
+
+	wr32(hw, NGBE_ETHADDRL, 0);
+	wr32(hw, NGBE_ETHADDRH, rar_high);
+
+	/* clear VMDq pool/queue selection for this RAR */
+	hw->mac.clear_vmdq(hw, index, BIT_MASK32);
+
+	return 0;
+}
+
+/**
+ *  ngbe_init_rx_addrs - Initializes receive address filters.
+ *  @hw: pointer to hardware structure
+ *
+ *  Places the MAC address in receive address register 0 and clears the rest
+ *  of the receive address registers. Clears the multicast table. Assumes
+ *  the receiver is in reset when the routine is called.
+ **/
+s32 ngbe_init_rx_addrs(struct ngbe_hw *hw)
+{
+	u32 i;
+	u32 psrctl;
+	u32 rar_entries = hw->mac.num_rar_entries;
+
+	DEBUGFUNC("ngbe_init_rx_addrs");
+
+	/*
+	 * If the current mac address is valid, assume it is a software override
+	 * to the permanent address.
+	 * Otherwise, use the permanent address from the eeprom.
+	 */
+	if (ngbe_validate_mac_addr(hw->mac.addr) ==
+	    NGBE_ERR_INVALID_MAC_ADDR) {
+		/* Get the MAC address from the RAR0 for later reference */
+		hw->mac.get_mac_addr(hw, hw->mac.addr);
+
+		DEBUGOUT(" Keeping Current RAR0 Addr =%.2X %.2X %.2X ",
+			  hw->mac.addr[0], hw->mac.addr[1],
+			  hw->mac.addr[2]);
+		DEBUGOUT("%.2X %.2X %.2X\n", hw->mac.addr[3],
+			  hw->mac.addr[4], hw->mac.addr[5]);
+	} else {
+		/* Setup the receive address. */
+		DEBUGOUT("Overriding MAC Address in RAR[0]\n");
+		DEBUGOUT(" New MAC Addr =%.2X %.2X %.2X ",
+			  hw->mac.addr[0], hw->mac.addr[1],
+			  hw->mac.addr[2]);
+		DEBUGOUT("%.2X %.2X %.2X\n", hw->mac.addr[3],
+			  hw->mac.addr[4], hw->mac.addr[5]);
+
+		hw->mac.set_rar(hw, 0, hw->mac.addr, 0, true);
+	}
+
+	/* clear VMDq pool/queue selection for RAR 0 */
+	hw->mac.clear_vmdq(hw, 0, BIT_MASK32);
+
+	/* Zero out the other receive addresses. */
+	DEBUGOUT("Clearing RAR[1-%d]\n", rar_entries - 1);
+	for (i = 1; i < rar_entries; i++) {
+		wr32(hw, NGBE_ETHADDRIDX, i);
+		wr32(hw, NGBE_ETHADDRL, 0);
+		wr32(hw, NGBE_ETHADDRH, 0);
+	}
+
+	/* Clear the MTA */
+	hw->addr_ctrl.mta_in_use = 0;
+	psrctl = rd32(hw, NGBE_PSRCTL);
+	psrctl &= ~(NGBE_PSRCTL_ADHF12_MASK | NGBE_PSRCTL_MCHFENA);
+	psrctl |= NGBE_PSRCTL_ADHF12(hw->mac.mc_filter_type);
+	wr32(hw, NGBE_PSRCTL, psrctl);
+
+	DEBUGOUT(" Clearing MTA\n");
+	for (i = 0; i < hw->mac.mcft_size; i++)
+		wr32(hw, NGBE_MCADDRTBL(i), 0);
+
+	ngbe_init_uta_tables(hw);
+
+	return 0;
+}
+
 /**
  *  ngbe_acquire_swfw_sync - Acquire SWFW semaphore
  *  @hw: pointer to hardware structure
@@ -286,6 +516,89 @@ void ngbe_release_swfw_sync(struct ngbe_hw *hw, u32 mask)
 	ngbe_release_eeprom_semaphore(hw);
 }
 
+/**
+ *  ngbe_clear_vmdq - Disassociate a VMDq pool index from a rx address
+ *  @hw: pointer to hardware struct
+ *  @rar: receive address register index to disassociate
+ *  @vmdq: VMDq pool index to remove from the rar
+ **/
+s32 ngbe_clear_vmdq(struct ngbe_hw *hw, u32 rar, u32 vmdq)
+{
+	u32 mpsar;
+	u32 rar_entries = hw->mac.num_rar_entries;
+
+	DEBUGFUNC("ngbe_clear_vmdq");
+
+	/* Make sure we are using a valid rar index range */
+	if (rar >= rar_entries) {
+		DEBUGOUT("RAR index %d is out of range.\n", rar);
+		return NGBE_ERR_INVALID_ARGUMENT;
+	}
+
+	wr32(hw, NGBE_ETHADDRIDX, rar);
+	mpsar = rd32(hw, NGBE_ETHADDRASS);
+
+	if (NGBE_REMOVED(hw->hw_addr))
+		goto done;
+
+	if (!mpsar)
+		goto done;
+
+	mpsar &= ~(1 << vmdq);
+	wr32(hw, NGBE_ETHADDRASS, mpsar);
+
+	/* was that the last pool using this rar? */
+	if (mpsar == 0 && rar != 0)
+		hw->mac.clear_rar(hw, rar);
+done:
+	return 0;
+}
+
+/**
+ *  ngbe_set_vmdq - Associate a VMDq pool index with a rx address
+ *  @hw: pointer to hardware struct
+ *  @rar: receive address register index to associate with a VMDq index
+ *  @vmdq: VMDq pool index
+ **/
+s32 ngbe_set_vmdq(struct ngbe_hw *hw, u32 rar, u32 vmdq)
+{
+	u32 mpsar;
+	u32 rar_entries = hw->mac.num_rar_entries;
+
+	DEBUGFUNC("ngbe_set_vmdq");
+
+	/* Make sure we are using a valid rar index range */
+	if (rar >= rar_entries) {
+		DEBUGOUT("RAR index %d is out of range.\n", rar);
+		return NGBE_ERR_INVALID_ARGUMENT;
+	}
+
+	wr32(hw, NGBE_ETHADDRIDX, rar);
+
+	mpsar = rd32(hw, NGBE_ETHADDRASS);
+	mpsar |= 1 << vmdq;
+	wr32(hw, NGBE_ETHADDRASS, mpsar);
+
+	return 0;
+}
+
+/**
+ *  ngbe_init_uta_tables - Initialize the Unicast Table Array
+ *  @hw: pointer to hardware structure
+ **/
+s32 ngbe_init_uta_tables(struct ngbe_hw *hw)
+{
+	int i;
+
+	DEBUGFUNC("ngbe_init_uta_tables");
+	DEBUGOUT(" Clearing UTA\n");
+
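+	/* 128 x 32-bit words cover the 4096-bit unicast table */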
+	for (i = 0; i < 128; i++)
+		wr32(hw, NGBE_UCADDRTBL(i), 0);
+
+	return 0;
+}
+
 /**
  *  ngbe_init_thermal_sensor_thresh - Inits thermal sensor thresholds
  *  @hw: pointer to hardware structure
@@ -481,10 +794,18 @@ s32 ngbe_init_ops_pf(struct ngbe_hw *hw)
 	/* MAC */
 	mac->init_hw = ngbe_init_hw;
 	mac->reset_hw = ngbe_reset_hw_em;
+	mac->get_mac_addr = ngbe_get_mac_addr;
 	mac->stop_hw = ngbe_stop_hw;
 	mac->acquire_swfw_sync = ngbe_acquire_swfw_sync;
 	mac->release_swfw_sync = ngbe_release_swfw_sync;
 
+	/* RAR */
+	mac->set_rar = ngbe_set_rar;
+	mac->clear_rar = ngbe_clear_rar;
+	mac->init_rx_addrs = ngbe_init_rx_addrs;
+	mac->set_vmdq = ngbe_set_vmdq;
+	mac->clear_vmdq = ngbe_clear_vmdq;
+
 	/* Manageability interface */
 	mac->init_thermal_sensor_thresh = ngbe_init_thermal_sensor_thresh;
 	mac->check_overtemp = ngbe_mac_check_overtemp;
@@ -493,6 +814,8 @@ s32 ngbe_init_ops_pf(struct ngbe_hw *hw)
 	rom->init_params = ngbe_init_eeprom_params;
 	rom->validate_checksum = ngbe_validate_eeprom_checksum_em;
 
+	mac->mcft_size		= NGBE_EM_MC_TBL_SIZE;
+	mac->num_rar_entries	= NGBE_EM_RAR_ENTRIES;
 	mac->max_rx_queues	= NGBE_EM_MAX_RX_QUEUES;
 	mac->max_tx_queues	= NGBE_EM_MAX_TX_QUEUES;
 
diff --git a/drivers/net/ngbe/base/ngbe_hw.h b/drivers/net/ngbe/base/ngbe_hw.h
index 3c8e646bb7..0b3d60ae29 100644
--- a/drivers/net/ngbe/base/ngbe_hw.h
+++ b/drivers/net/ngbe/base/ngbe_hw.h
@@ -10,16 +10,29 @@
 
 #define NGBE_EM_MAX_TX_QUEUES 8
 #define NGBE_EM_MAX_RX_QUEUES 8
+#define NGBE_EM_RAR_ENTRIES   32
+#define NGBE_EM_MC_TBL_SIZE   32
 
 s32 ngbe_init_hw(struct ngbe_hw *hw);
 s32 ngbe_reset_hw_em(struct ngbe_hw *hw);
 s32 ngbe_stop_hw(struct ngbe_hw *hw);
+s32 ngbe_get_mac_addr(struct ngbe_hw *hw, u8 *mac_addr);
 
 void ngbe_set_lan_id_multi_port(struct ngbe_hw *hw);
 
+s32 ngbe_set_rar(struct ngbe_hw *hw, u32 index, u8 *addr, u32 vmdq,
+			  u32 enable_addr);
+s32 ngbe_clear_rar(struct ngbe_hw *hw, u32 index);
+s32 ngbe_init_rx_addrs(struct ngbe_hw *hw);
+
+s32 ngbe_validate_mac_addr(u8 *mac_addr);
 s32 ngbe_acquire_swfw_sync(struct ngbe_hw *hw, u32 mask);
 void ngbe_release_swfw_sync(struct ngbe_hw *hw, u32 mask);
 
+s32 ngbe_set_vmdq(struct ngbe_hw *hw, u32 rar, u32 vmdq);
+s32 ngbe_clear_vmdq(struct ngbe_hw *hw, u32 rar, u32 vmdq);
+s32 ngbe_init_uta_tables(struct ngbe_hw *hw);
+
 s32 ngbe_init_thermal_sensor_thresh(struct ngbe_hw *hw);
 s32 ngbe_mac_check_overtemp(struct ngbe_hw *hw);
 void ngbe_disable_rx(struct ngbe_hw *hw);
diff --git a/drivers/net/ngbe/base/ngbe_osdep.h b/drivers/net/ngbe/base/ngbe_osdep.h
index 94cc10315e..8c2a1271cb 100644
--- a/drivers/net/ngbe/base/ngbe_osdep.h
+++ b/drivers/net/ngbe/base/ngbe_osdep.h
@@ -18,6 +18,7 @@
 #include <rte_byteorder.h>
 #include <rte_config.h>
 #include <rte_io.h>
+#include <rte_ether.h>
 
 #include "../ngbe_logs.h"
 
diff --git a/drivers/net/ngbe/base/ngbe_type.h b/drivers/net/ngbe/base/ngbe_type.h
index b5c05e0f2f..5add9ec2a3 100644
--- a/drivers/net/ngbe/base/ngbe_type.h
+++ b/drivers/net/ngbe/base/ngbe_type.h
@@ -62,6 +62,10 @@ enum ngbe_media_type {
 
 struct ngbe_hw;
 
+struct ngbe_addr_filter_info {
+	u32 mta_in_use;
+};
+
 /* Bus parameters */
 struct ngbe_bus_info {
 	void (*set_lan_id)(struct ngbe_hw *hw);
@@ -88,14 +92,28 @@ struct ngbe_mac_info {
 	s32 (*init_hw)(struct ngbe_hw *hw);
 	s32 (*reset_hw)(struct ngbe_hw *hw);
 	s32 (*stop_hw)(struct ngbe_hw *hw);
+	s32 (*get_mac_addr)(struct ngbe_hw *hw, u8 *mac_addr);
 	s32 (*acquire_swfw_sync)(struct ngbe_hw *hw, u32 mask);
 	void (*release_swfw_sync)(struct ngbe_hw *hw, u32 mask);
 
+	/* RAR */
+	s32 (*set_rar)(struct ngbe_hw *hw, u32 index, u8 *addr, u32 vmdq,
+			  u32 enable_addr);
+	s32 (*clear_rar)(struct ngbe_hw *hw, u32 index);
+	s32 (*set_vmdq)(struct ngbe_hw *hw, u32 rar, u32 vmdq);
+	s32 (*clear_vmdq)(struct ngbe_hw *hw, u32 rar, u32 vmdq);
+	s32 (*init_rx_addrs)(struct ngbe_hw *hw);
+
 	/* Manageability interface */
 	s32 (*init_thermal_sensor_thresh)(struct ngbe_hw *hw);
 	s32 (*check_overtemp)(struct ngbe_hw *hw);
 
 	enum ngbe_mac_type type;
+	u8 addr[ETH_ADDR_LEN];
+	u8 perm_addr[ETH_ADDR_LEN];
+	s32 mc_filter_type;
+	u32 mcft_size;
+	u32 num_rar_entries;
 	u32 max_tx_queues;
 	u32 max_rx_queues;
 	struct ngbe_thermal_sensor_data  thermal_sensor_data;
@@ -127,6 +145,7 @@ struct ngbe_hw {
 	void IOMEM *hw_addr;
 	void *back;
 	struct ngbe_mac_info mac;
+	struct ngbe_addr_filter_info addr_ctrl;
 	struct ngbe_phy_info phy;
 	struct ngbe_rom_info rom;
 	struct ngbe_bus_info bus;
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index bb5923c485..e9ddbe9753 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -116,6 +116,31 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
 		return -EIO;
 	}
 
+	/* Allocate memory for storing MAC addresses */
+	eth_dev->data->mac_addrs = rte_zmalloc("ngbe", RTE_ETHER_ADDR_LEN *
+					       hw->mac.num_rar_entries, 0);
+	if (eth_dev->data->mac_addrs == NULL) {
+		PMD_INIT_LOG(ERR,
+			     "Failed to allocate %u bytes needed to store MAC addresses",
+			     RTE_ETHER_ADDR_LEN * hw->mac.num_rar_entries);
+		return -ENOMEM;
+	}
+
+	/* Copy the permanent MAC address */
+	rte_ether_addr_copy((struct rte_ether_addr *)hw->mac.perm_addr,
+			&eth_dev->data->mac_addrs[0]);
+
+	/* Allocate memory for storing hash filter MAC addresses */
+	eth_dev->data->hash_mac_addrs = rte_zmalloc("ngbe",
+			RTE_ETHER_ADDR_LEN * NGBE_VMDQ_NUM_UC_MAC, 0);
+	if (eth_dev->data->hash_mac_addrs == NULL) {
+		PMD_INIT_LOG(ERR,
+			     "Failed to allocate %d bytes needed to store MAC addresses",
+			     RTE_ETHER_ADDR_LEN * NGBE_VMDQ_NUM_UC_MAC);
+		rte_free(eth_dev->data->mac_addrs);
+		eth_dev->data->mac_addrs = NULL;
+		return -ENOMEM;
+	}
+
 	return 0;
 }
 
diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
index f6cee4a4a9..5917ff02aa 100644
--- a/drivers/net/ngbe/ngbe_ethdev.h
+++ b/drivers/net/ngbe/ngbe_ethdev.h
@@ -19,4 +19,6 @@ struct ngbe_adapter {
 #define NGBE_DEV_HW(dev) \
 	(&((struct ngbe_adapter *)(dev)->data->dev_private)->hw)
 
+#define NGBE_VMDQ_NUM_UC_MAC         4096 /* Maximum nb. of UC MAC addr. */
+
 #endif /* _NGBE_ETHDEV_H_ */
-- 
2.27.0




^ permalink raw reply	[flat|nested] 51+ messages in thread

* [dpdk-dev] [PATCH v5 12/24] net/ngbe: add info get operation
  2021-06-02  9:40 [dpdk-dev] [PATCH v5 00/24] net: ngbe PMD Jiawen Wu
                   ` (10 preceding siblings ...)
  2021-06-02  9:40 ` [dpdk-dev] [PATCH v5 11/24] net/ngbe: store MAC address Jiawen Wu
@ 2021-06-02  9:40 ` Jiawen Wu
  2021-06-14 18:13   ` Andrew Rybchenko
  2021-06-02  9:40 ` [dpdk-dev] [PATCH v5 13/24] net/ngbe: support link update Jiawen Wu
                   ` (13 subsequent siblings)
  25 siblings, 1 reply; 51+ messages in thread
From: Jiawen Wu @ 2021-06-02  9:40 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Add device information get operation.
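
With this operation in place, applications can query the advertised
limits and offload capabilities through the generic ethdev API. A
minimal fragment, assuming the device probed as port id 0:

    #include <stdio.h>
    #include <rte_ethdev.h>

    struct rte_eth_dev_info dev_info;

    if (rte_eth_dev_info_get(0, &dev_info) == 0)
        printf("max queues: %u Rx / %u Tx\n",
               dev_info.max_rx_queues, dev_info.max_tx_queues);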

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 doc/guides/nics/features/ngbe.ini |  1 +
 drivers/net/ngbe/meson.build      |  1 +
 drivers/net/ngbe/ngbe_ethdev.c    | 86 +++++++++++++++++++++++++++++++
 drivers/net/ngbe/ngbe_ethdev.h    | 26 ++++++++++
 drivers/net/ngbe/ngbe_rxtx.c      | 67 ++++++++++++++++++++++++
 drivers/net/ngbe/ngbe_rxtx.h      | 15 ++++++
 6 files changed, 196 insertions(+)
 create mode 100644 drivers/net/ngbe/ngbe_rxtx.c
 create mode 100644 drivers/net/ngbe/ngbe_rxtx.h

diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini
index 977286ac04..ca03a255de 100644
--- a/doc/guides/nics/features/ngbe.ini
+++ b/doc/guides/nics/features/ngbe.ini
@@ -4,6 +4,7 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
+Speed capabilities   = Y
 Multiprocess aware   = Y
 Linux                = Y
 ARMv8                = Y
diff --git a/drivers/net/ngbe/meson.build b/drivers/net/ngbe/meson.build
index 81173fa7f0..9e75b82f1c 100644
--- a/drivers/net/ngbe/meson.build
+++ b/drivers/net/ngbe/meson.build
@@ -12,6 +12,7 @@ objs = [base_objs]
 
 sources = files(
 	'ngbe_ethdev.c',
+	'ngbe_rxtx.c',
 )
 
 includes += include_directories('base')
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index e9ddbe9753..07df677b64 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -9,6 +9,7 @@
 #include "ngbe_logs.h"
 #include "base/ngbe.h"
 #include "ngbe_ethdev.h"
+#include "ngbe_rxtx.h"
 
 static int ngbe_dev_close(struct rte_eth_dev *dev);
 
@@ -31,6 +32,22 @@ static const struct rte_pci_id pci_id_ngbe_map[] = {
 	{ .vendor_id = 0, /* sentinel */ },
 };
 
+static const struct rte_eth_desc_lim rx_desc_lim = {
+	.nb_max = NGBE_RING_DESC_MAX,
+	.nb_min = NGBE_RING_DESC_MIN,
+	.nb_align = NGBE_RXD_ALIGN,
+};
+
+static const struct rte_eth_desc_lim tx_desc_lim = {
+	.nb_max = NGBE_RING_DESC_MAX,
+	.nb_min = NGBE_RING_DESC_MIN,
+	.nb_align = NGBE_TXD_ALIGN,
+	.nb_seg_max = NGBE_TX_MAX_SEG,
+	.nb_mtu_seg_max = NGBE_TX_MAX_SEG,
+};
+
+static const struct eth_dev_ops ngbe_eth_dev_ops;
+
 /*
  * Ensure that all locks are released before first NVM or PHY access
  */
@@ -64,6 +81,8 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
 
 	PMD_INIT_FUNC_TRACE();
 
+	eth_dev->dev_ops = &ngbe_eth_dev_ops;
+
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
 
@@ -206,6 +225,73 @@ ngbe_dev_close(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int
+ngbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
+{
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct ngbe_hw *hw = NGBE_DEV_HW(dev);
+
+	dev_info->max_rx_queues = (uint16_t)hw->mac.max_rx_queues;
+	dev_info->max_tx_queues = (uint16_t)hw->mac.max_tx_queues;
+	dev_info->min_rx_bufsize = 1024;
+	dev_info->max_rx_pktlen = 15872;
+	dev_info->max_mac_addrs = hw->mac.num_rar_entries;
+	dev_info->max_hash_mac_addrs = NGBE_VMDQ_NUM_UC_MAC;
+	dev_info->max_vfs = pci_dev->max_vfs;
+	dev_info->max_vmdq_pools = ETH_64_POOLS;
+	dev_info->vmdq_queue_num = dev_info->max_rx_queues;
+	dev_info->rx_queue_offload_capa = ngbe_get_rx_queue_offloads(dev);
+	dev_info->rx_offload_capa = (ngbe_get_rx_port_offloads(dev) |
+				     dev_info->rx_queue_offload_capa);
+	dev_info->tx_queue_offload_capa = 0;
+	dev_info->tx_offload_capa = ngbe_get_tx_port_offloads(dev);
+
+	dev_info->default_rxconf = (struct rte_eth_rxconf) {
+		.rx_thresh = {
+			.pthresh = NGBE_DEFAULT_RX_PTHRESH,
+			.hthresh = NGBE_DEFAULT_RX_HTHRESH,
+			.wthresh = NGBE_DEFAULT_RX_WTHRESH,
+		},
+		.rx_free_thresh = NGBE_DEFAULT_RX_FREE_THRESH,
+		.rx_drop_en = 0,
+		.offloads = 0,
+	};
+
+	dev_info->default_txconf = (struct rte_eth_txconf) {
+		.tx_thresh = {
+			.pthresh = NGBE_DEFAULT_TX_PTHRESH,
+			.hthresh = NGBE_DEFAULT_TX_HTHRESH,
+			.wthresh = NGBE_DEFAULT_TX_WTHRESH,
+		},
+		.tx_free_thresh = NGBE_DEFAULT_TX_FREE_THRESH,
+		.offloads = 0,
+	};
+
+	dev_info->rx_desc_lim = rx_desc_lim;
+	dev_info->tx_desc_lim = tx_desc_lim;
+
+	dev_info->hash_key_size = NGBE_HKEY_MAX_INDEX * sizeof(uint32_t);
+	dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+	dev_info->flow_type_rss_offloads = NGBE_RSS_OFFLOAD_ALL;
+
+	dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_100M |
+			       ETH_LINK_SPEED_10M;
+
+	/* Driver-preferred Rx/Tx parameters */
+	dev_info->default_rxportconf.burst_size = 32;
+	dev_info->default_txportconf.burst_size = 32;
+	dev_info->default_rxportconf.nb_queues = 1;
+	dev_info->default_txportconf.nb_queues = 1;
+	dev_info->default_rxportconf.ring_size = 256;
+	dev_info->default_txportconf.ring_size = 256;
+
+	return 0;
+}
+
+static const struct eth_dev_ops ngbe_eth_dev_ops = {
+	.dev_infos_get              = ngbe_dev_info_get,
+};
+
 RTE_PMD_REGISTER_PCI(net_ngbe, rte_ngbe_pmd);
 RTE_PMD_REGISTER_PCI_TABLE(net_ngbe, pci_id_ngbe_map);
 RTE_PMD_REGISTER_KMOD_DEP(net_ngbe, "* igb_uio | uio_pci_generic | vfio-pci");
diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
index 5917ff02aa..b4e2000dd3 100644
--- a/drivers/net/ngbe/ngbe_ethdev.h
+++ b/drivers/net/ngbe/ngbe_ethdev.h
@@ -6,6 +6,19 @@
 #ifndef _NGBE_ETHDEV_H_
 #define _NGBE_ETHDEV_H_
 
+#define NGBE_HKEY_MAX_INDEX 10
+
+#define NGBE_RSS_OFFLOAD_ALL ( \
+	ETH_RSS_IPV4 | \
+	ETH_RSS_NONFRAG_IPV4_TCP | \
+	ETH_RSS_NONFRAG_IPV4_UDP | \
+	ETH_RSS_IPV6 | \
+	ETH_RSS_NONFRAG_IPV6_TCP | \
+	ETH_RSS_NONFRAG_IPV6_UDP | \
+	ETH_RSS_IPV6_EX | \
+	ETH_RSS_IPV6_TCP_EX | \
+	ETH_RSS_IPV6_UDP_EX)
+
 /*
  * Structure to store private data for each driver instance (for each port).
  */
@@ -21,4 +34,17 @@ struct ngbe_adapter {
 
 #define NGBE_VMDQ_NUM_UC_MAC         4096 /* Maximum nb. of UC MAC addr. */
 
+/*
+ *  Default values for RX/TX configuration
+ */
+#define NGBE_DEFAULT_RX_FREE_THRESH  32
+#define NGBE_DEFAULT_RX_PTHRESH      8
+#define NGBE_DEFAULT_RX_HTHRESH      8
+#define NGBE_DEFAULT_RX_WTHRESH      0
+
+#define NGBE_DEFAULT_TX_FREE_THRESH  32
+#define NGBE_DEFAULT_TX_PTHRESH      32
+#define NGBE_DEFAULT_TX_HTHRESH      0
+#define NGBE_DEFAULT_TX_WTHRESH      0
+
 #endif /* _NGBE_ETHDEV_H_ */
diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c
new file mode 100644
index 0000000000..ae24367b18
--- /dev/null
+++ b/drivers/net/ngbe/ngbe_rxtx.c
@@ -0,0 +1,67 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+ * Copyright(c) 2010-2017 Intel Corporation
+ */
+
+#include <stdint.h>
+#include <rte_ethdev.h>
+
+#include "base/ngbe.h"
+#include "ngbe_ethdev.h"
+#include "ngbe_rxtx.h"
+
+uint64_t
+ngbe_get_tx_port_offloads(struct rte_eth_dev *dev)
+{
+	uint64_t tx_offload_capa;
+	struct ngbe_hw *hw = NGBE_DEV_HW(dev);
+
+	tx_offload_capa =
+		DEV_TX_OFFLOAD_VLAN_INSERT |
+		DEV_TX_OFFLOAD_IPV4_CKSUM  |
+		DEV_TX_OFFLOAD_UDP_CKSUM   |
+		DEV_TX_OFFLOAD_TCP_CKSUM   |
+		DEV_TX_OFFLOAD_SCTP_CKSUM  |
+		DEV_TX_OFFLOAD_TCP_TSO     |
+		DEV_TX_OFFLOAD_UDP_TSO	   |
+		DEV_TX_OFFLOAD_UDP_TNL_TSO	|
+		DEV_TX_OFFLOAD_IP_TNL_TSO	|
+		DEV_TX_OFFLOAD_IPIP_TNL_TSO	|
+		DEV_TX_OFFLOAD_MULTI_SEGS;
+
+	if (hw->is_pf)
+		tx_offload_capa |= DEV_TX_OFFLOAD_QINQ_INSERT;
+
+	tx_offload_capa |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+
+	return tx_offload_capa;
+}
+
+uint64_t
+ngbe_get_rx_queue_offloads(struct rte_eth_dev *dev __rte_unused)
+{
+	return DEV_RX_OFFLOAD_VLAN_STRIP;
+}
+
+uint64_t
+ngbe_get_rx_port_offloads(struct rte_eth_dev *dev)
+{
+	uint64_t offloads;
+	struct ngbe_hw *hw = NGBE_DEV_HW(dev);
+
+	offloads = DEV_RX_OFFLOAD_IPV4_CKSUM  |
+		   DEV_RX_OFFLOAD_UDP_CKSUM   |
+		   DEV_RX_OFFLOAD_TCP_CKSUM   |
+		   DEV_RX_OFFLOAD_KEEP_CRC    |
+		   DEV_RX_OFFLOAD_JUMBO_FRAME |
+		   DEV_RX_OFFLOAD_VLAN_FILTER |
+		   DEV_RX_OFFLOAD_SCATTER;
+
+	if (hw->is_pf)
+		offloads |= (DEV_RX_OFFLOAD_QINQ_STRIP |
+			     DEV_RX_OFFLOAD_VLAN_EXTEND);
+
+	return offloads;
+}
+
diff --git a/drivers/net/ngbe/ngbe_rxtx.h b/drivers/net/ngbe/ngbe_rxtx.h
new file mode 100644
index 0000000000..39011ee286
--- /dev/null
+++ b/drivers/net/ngbe/ngbe_rxtx.h
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+ * Copyright(c) 2010-2017 Intel Corporation
+ */
+
+#ifndef _NGBE_RXTX_H_
+#define _NGBE_RXTX_H_
+
+#define NGBE_TX_MAX_SEG                    40
+
+uint64_t ngbe_get_tx_port_offloads(struct rte_eth_dev *dev);
+uint64_t ngbe_get_rx_queue_offloads(struct rte_eth_dev *dev);
+uint64_t ngbe_get_rx_port_offloads(struct rte_eth_dev *dev);
+
+#endif /* _NGBE_RXTX_H_ */
-- 
2.27.0




^ permalink raw reply	[flat|nested] 51+ messages in thread

* [dpdk-dev] [PATCH v5 13/24] net/ngbe: support link update
  2021-06-02  9:40 [dpdk-dev] [PATCH v5 00/24] net: ngbe PMD Jiawen Wu
                   ` (11 preceding siblings ...)
  2021-06-02  9:40 ` [dpdk-dev] [PATCH v5 12/24] net/ngbe: add info get operation Jiawen Wu
@ 2021-06-02  9:40 ` Jiawen Wu
  2021-06-14 18:45   ` Andrew Rybchenko
  2021-06-02  9:40 ` [dpdk-dev] [PATCH v5 14/24] net/ngbe: setup the check PHY link Jiawen Wu
                   ` (12 subsequent siblings)
  25 siblings, 1 reply; 51+ messages in thread
From: Jiawen Wu @ 2021-06-02  9:40 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Support link status update, and register an interrupt callback to
handle device interrupts.
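
With the LSC interrupt wired up, an application can subscribe to link
state change events. A minimal sketch, assuming intr_conf.lsc was set
to 1 before rte_eth_dev_configure(); the callback name is hypothetical:

    #include <stdio.h>
    #include <rte_ethdev.h>

    static int
    lsc_event_cb(uint16_t port_id, enum rte_eth_event_type event,
                 void *cb_arg, void *ret_param)
    {
        struct rte_eth_link link;

        RTE_SET_USED(event);
        RTE_SET_USED(cb_arg);
        RTE_SET_USED(ret_param);
        rte_eth_link_get_nowait(port_id, &link);
        printf("port %u link %s\n", port_id,
               link.link_status ? "up" : "down");
        return 0;
    }

    /* in init code, after configuring the port */
    rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_INTR_LSC,
                                  lsc_event_cb, NULL);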

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 doc/guides/nics/features/ngbe.ini  |   2 +
 doc/guides/nics/ngbe.rst           |   5 +
 drivers/net/ngbe/base/ngbe_dummy.h |   6 +
 drivers/net/ngbe/base/ngbe_type.h  |  11 +
 drivers/net/ngbe/ngbe_ethdev.c     | 364 +++++++++++++++++++++++++++++
 drivers/net/ngbe/ngbe_ethdev.h     |  28 +++
 6 files changed, 416 insertions(+)

diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini
index ca03a255de..291a542a42 100644
--- a/doc/guides/nics/features/ngbe.ini
+++ b/doc/guides/nics/features/ngbe.ini
@@ -5,6 +5,8 @@
 ;
 [Features]
 Speed capabilities   = Y
+Link status          = Y
+Link status event    = Y
 Multiprocess aware   = Y
 Linux                = Y
 ARMv8                = Y
diff --git a/doc/guides/nics/ngbe.rst b/doc/guides/nics/ngbe.rst
index c274a15aab..de2ef65664 100644
--- a/doc/guides/nics/ngbe.rst
+++ b/doc/guides/nics/ngbe.rst
@@ -7,6 +7,11 @@ NGBE Poll Mode Driver
 The NGBE PMD (librte_pmd_ngbe) provides poll mode driver support
 for Wangxun 1 Gigabit Ethernet NICs.
 
+Features
+--------
+
+- Link state information
+
 Prerequisites
 -------------
 
diff --git a/drivers/net/ngbe/base/ngbe_dummy.h b/drivers/net/ngbe/base/ngbe_dummy.h
index 8462d6d1cb..4273e5af36 100644
--- a/drivers/net/ngbe/base/ngbe_dummy.h
+++ b/drivers/net/ngbe/base/ngbe_dummy.h
@@ -64,6 +64,11 @@ static inline void ngbe_mac_release_swfw_sync_dummy(struct ngbe_hw *TUP0,
 					u32 TUP1)
 {
 }
+static inline s32 ngbe_mac_check_link_dummy(struct ngbe_hw *TUP0, u32 *TUP1,
+					bool *TUP2, bool TUP3)
+{
+	return NGBE_ERR_OPS_DUMMY;
+}
 static inline s32 ngbe_mac_set_rar_dummy(struct ngbe_hw *TUP0, u32 TUP1,
 					u8 *TUP2, u32 TUP3, u32 TUP4)
 {
@@ -135,6 +140,7 @@ static inline void ngbe_init_ops_dummy(struct ngbe_hw *hw)
 	hw->mac.get_mac_addr = ngbe_mac_get_mac_addr_dummy;
 	hw->mac.acquire_swfw_sync = ngbe_mac_acquire_swfw_sync_dummy;
 	hw->mac.release_swfw_sync = ngbe_mac_release_swfw_sync_dummy;
+	hw->mac.check_link = ngbe_mac_check_link_dummy;
 	hw->mac.set_rar = ngbe_mac_set_rar_dummy;
 	hw->mac.clear_rar = ngbe_mac_clear_rar_dummy;
 	hw->mac.set_vmdq = ngbe_mac_set_vmdq_dummy;
diff --git a/drivers/net/ngbe/base/ngbe_type.h b/drivers/net/ngbe/base/ngbe_type.h
index 5add9ec2a3..d05d2ff28a 100644
--- a/drivers/net/ngbe/base/ngbe_type.h
+++ b/drivers/net/ngbe/base/ngbe_type.h
@@ -96,6 +96,8 @@ struct ngbe_mac_info {
 	s32 (*acquire_swfw_sync)(struct ngbe_hw *hw, u32 mask);
 	void (*release_swfw_sync)(struct ngbe_hw *hw, u32 mask);
 
+	s32 (*check_link)(struct ngbe_hw *hw, u32 *speed,
+			       bool *link_up, bool link_up_wait_to_complete);
 	/* RAR */
 	s32 (*set_rar)(struct ngbe_hw *hw, u32 index, u8 *addr, u32 vmdq,
 			  u32 enable_addr);
@@ -116,6 +118,7 @@ struct ngbe_mac_info {
 	u32 num_rar_entries;
 	u32 max_tx_queues;
 	u32 max_rx_queues;
+	bool get_link_status;
 	struct ngbe_thermal_sensor_data  thermal_sensor_data;
 	bool set_lben;
 };
@@ -141,6 +144,14 @@ struct ngbe_phy_info {
 	bool reset_disable;
 };
 
+enum ngbe_isb_idx {
+	NGBE_ISB_HEADER,
+	NGBE_ISB_MISC,
+	NGBE_ISB_VEC0,
+	NGBE_ISB_VEC1,
+	NGBE_ISB_MAX
+};
+
 struct ngbe_hw {
 	void IOMEM *hw_addr;
 	void *back;
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 07df677b64..97b6de3aa4 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -6,6 +6,8 @@
 #include <rte_common.h>
 #include <ethdev_pci.h>
 
+#include <rte_alarm.h>
+
 #include "ngbe_logs.h"
 #include "base/ngbe.h"
 #include "ngbe_ethdev.h"
@@ -13,6 +15,9 @@
 
 static int ngbe_dev_close(struct rte_eth_dev *dev);
 
+static void ngbe_dev_interrupt_handler(void *param);
+static void ngbe_dev_interrupt_delayed_handler(void *param);
+
 /*
  * The set of PCI devices this driver supports
  */
@@ -47,6 +52,26 @@ static const struct rte_eth_desc_lim tx_desc_lim = {
 };
 
 static const struct eth_dev_ops ngbe_eth_dev_ops;
+static inline void
+ngbe_enable_intr(struct rte_eth_dev *dev)
+{
+	struct ngbe_interrupt *intr = NGBE_DEV_INTR(dev);
+	struct ngbe_hw *hw = NGBE_DEV_HW(dev);
+
+	wr32(hw, NGBE_IENMISC, intr->mask_misc);
+	wr32(hw, NGBE_IMC(0), intr->mask & BIT_MASK32);
+	ngbe_flush(hw);
+}
+
+static void
+ngbe_disable_intr(struct ngbe_hw *hw)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	wr32(hw, NGBE_IMS(0), NGBE_IMS_MASK);
+	ngbe_flush(hw);
+}
+
 
 /*
  * Ensure that all locks are released before first NVM or PHY access
@@ -76,7 +101,9 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
 {
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
 	struct ngbe_hw *hw = NGBE_DEV_HW(eth_dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
 	const struct rte_memzone *mz;
+	uint32_t ctrl_ext;
 	int err;
 
 	PMD_INIT_FUNC_TRACE();
@@ -135,6 +162,9 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
 		return -EIO;
 	}
 
+	/* disable interrupt */
+	ngbe_disable_intr(hw);
+
 	/* Allocate memory for storing MAC addresses */
 	eth_dev->data->mac_addrs = rte_zmalloc("ngbe", RTE_ETHER_ADDR_LEN *
 					       hw->mac.num_rar_entries, 0);
@@ -160,6 +190,22 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
 		return -ENOMEM;
 	}
 
+	ctrl_ext = rd32(hw, NGBE_PORTCTL);
+	/* let hardware know driver is loaded */
+	ctrl_ext |= NGBE_PORTCTL_DRVLOAD;
+	wr32(hw, NGBE_PORTCTL, ctrl_ext);
+	ngbe_flush(hw);
+
+	rte_intr_callback_register(intr_handle,
+				   ngbe_dev_interrupt_handler, eth_dev);
+
+	/* enable uio/vfio intr/eventfd mapping */
+	rte_intr_enable(intr_handle);
+
+	/* enable support intr */
+	/* enable supported interrupts */
+	ngbe_enable_intr(eth_dev);
+
 }
 
@@ -212,6 +258,19 @@ static struct rte_pci_driver rte_ngbe_pmd = {
 	.remove = eth_ngbe_pci_remove,
 };
 
+static int
+ngbe_dev_configure(struct rte_eth_dev *dev)
+{
+	struct ngbe_interrupt *intr = NGBE_DEV_INTR(dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* set flag to update link status after init */
+	intr->flags |= NGBE_FLAG_NEED_LINK_UPDATE;
+
+	return 0;
+}
+
 /*
  * Reset and stop device.
  */
@@ -288,8 +347,313 @@ ngbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	return 0;
 }
 
+/* return 0 means link status changed, -1 means not changed */
+int
+ngbe_dev_link_update_share(struct rte_eth_dev *dev,
+			    int wait_to_complete)
+{
+	struct ngbe_hw *hw = NGBE_DEV_HW(dev);
+	struct rte_eth_link link;
+	u32 link_speed = NGBE_LINK_SPEED_UNKNOWN;
+	u32 lan_speed = 0;
+	struct ngbe_interrupt *intr = NGBE_DEV_INTR(dev);
+	bool link_up;
+	int err;
+	int wait = 1;
+
+	memset(&link, 0, sizeof(link));
+	link.link_status = ETH_LINK_DOWN;
+	link.link_speed = ETH_SPEED_NUM_NONE;
+	link.link_duplex = ETH_LINK_HALF_DUPLEX;
+	link.link_autoneg = ETH_LINK_AUTONEG;
+
+	hw->mac.get_link_status = true;
+
+	if (intr->flags & NGBE_FLAG_NEED_LINK_CONFIG)
+		return rte_eth_linkstatus_set(dev, &link);
+
+	/* check if it needs to wait to complete, if lsc interrupt is enabled */
+	if (wait_to_complete == 0 || dev->data->dev_conf.intr_conf.lsc != 0)
+		wait = 0;
+
+	err = hw->mac.check_link(hw, &link_speed, &link_up, wait);
+
+	if (err != 0) {
+		link.link_speed = ETH_SPEED_NUM_100M;
+		link.link_duplex = ETH_LINK_FULL_DUPLEX;
+		return rte_eth_linkstatus_set(dev, &link);
+	}
+
+	if (link_up == 0)
+		return rte_eth_linkstatus_set(dev, &link);
+
+	intr->flags &= ~NGBE_FLAG_NEED_LINK_CONFIG;
+	link.link_status = ETH_LINK_UP;
+	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+
+	switch (link_speed) {
+	default:
+	case NGBE_LINK_SPEED_UNKNOWN:
+		link.link_duplex = ETH_LINK_FULL_DUPLEX;
+		link.link_speed = ETH_SPEED_NUM_100M;
+		break;
+
+	case NGBE_LINK_SPEED_10M_FULL:
+		link.link_speed = ETH_SPEED_NUM_10M;
+		lan_speed = 0;
+		break;
+
+	case NGBE_LINK_SPEED_100M_FULL:
+		link.link_speed = ETH_SPEED_NUM_100M;
+		lan_speed = 1;
+		break;
+
+	case NGBE_LINK_SPEED_1GB_FULL:
+		link.link_speed = ETH_SPEED_NUM_1G;
+		lan_speed = 2;
+		break;
+
+	case NGBE_LINK_SPEED_2_5GB_FULL:
+		link.link_speed = ETH_SPEED_NUM_2_5G;
+		break;
+
+	case NGBE_LINK_SPEED_5GB_FULL:
+		link.link_speed = ETH_SPEED_NUM_5G;
+		break;
+
+	case NGBE_LINK_SPEED_10GB_FULL:
+		link.link_speed = ETH_SPEED_NUM_10G;
+		break;
+	}
+
+	if (hw->is_pf) {
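+		/* lan_speed from the switch above: 0 = 10M, 1 = 100M, 2 = 1G */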
+		wr32m(hw, NGBE_LAN_SPEED, NGBE_LAN_SPEED_MASK, lan_speed);
+		if (link_speed & (NGBE_LINK_SPEED_1GB_FULL |
+			NGBE_LINK_SPEED_100M_FULL | NGBE_LINK_SPEED_10M_FULL)) {
+			wr32m(hw, NGBE_MACTXCFG, NGBE_MACTXCFG_SPEED_MASK,
+				NGBE_MACTXCFG_SPEED_1G | NGBE_MACTXCFG_TE);
+		}
+	}
+
+	return rte_eth_linkstatus_set(dev, &link);
+}
+
+static int
+ngbe_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete)
+{
+	return ngbe_dev_link_update_share(dev, wait_to_complete);
+}
+
+/*
+ * It reads ICR and sets flag for the link_update.
+ *
+ * @param dev
+ *  Pointer to struct rte_eth_dev.
+ *
+ * @return
+ *  - On success, zero.
+ *  - On failure, a negative value.
+ */
+static int
+ngbe_dev_interrupt_get_status(struct rte_eth_dev *dev)
+{
+	uint32_t eicr;
+	struct ngbe_hw *hw = NGBE_DEV_HW(dev);
+	struct ngbe_interrupt *intr = NGBE_DEV_INTR(dev);
+
+	/* clear all cause mask */
+	ngbe_disable_intr(hw);
+
+	/* read-on-clear nic registers here */
+	eicr = ((u32 *)hw->isb_mem)[NGBE_ISB_MISC];
+	PMD_DRV_LOG(DEBUG, "eicr %x", eicr);
+
+	intr->flags = 0;
+
+	/* set flag for async link update */
+	if (eicr & NGBE_ICRMISC_PHY)
+		intr->flags |= NGBE_FLAG_NEED_LINK_UPDATE;
+
+	if (eicr & NGBE_ICRMISC_VFMBX)
+		intr->flags |= NGBE_FLAG_MAILBOX;
+
+	if (eicr & NGBE_ICRMISC_LNKSEC)
+		intr->flags |= NGBE_FLAG_MACSEC;
+
+	if (eicr & NGBE_ICRMISC_GPIO)
+		intr->flags |= NGBE_FLAG_NEED_LINK_UPDATE;
+
+	return 0;
+}
+
+/**
+ * It gets and then prints the link status.
+ *
+ * @param dev
+ *  Pointer to struct rte_eth_dev.
+ */
+static void
+ngbe_dev_link_status_print(struct rte_eth_dev *dev)
+{
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct rte_eth_link link;
+
+	rte_eth_linkstatus_get(dev, &link);
+
+	if (link.link_status) {
+		PMD_INIT_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
+					(int)(dev->data->port_id),
+					(unsigned int)link.link_speed,
+			link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+					"full-duplex" : "half-duplex");
+	} else {
+		PMD_INIT_LOG(INFO, " Port %d: Link Down",
+				(int)(dev->data->port_id));
+	}
+	PMD_INIT_LOG(DEBUG, "PCI Address: " PCI_PRI_FMT,
+				pci_dev->addr.domain,
+				pci_dev->addr.bus,
+				pci_dev->addr.devid,
+				pci_dev->addr.function);
+}
+
+/*
+ * It executes link_update after an interrupt has occurred.
+ *
+ * @param dev
+ *  Pointer to struct rte_eth_dev.
+ *
+ * @return
+ *  - On success, zero.
+ *  - On failure, a negative value.
+ */
+static int
+ngbe_dev_interrupt_action(struct rte_eth_dev *dev)
+{
+	struct ngbe_interrupt *intr = NGBE_DEV_INTR(dev);
+	int64_t timeout;
+
+	PMD_DRV_LOG(DEBUG, "intr action type %d", intr->flags);
+
+	if (intr->flags & NGBE_FLAG_NEED_LINK_UPDATE) {
+		struct rte_eth_link link;
+
+		/* get the link status before the update, to choose the alarm timeout */
+		rte_eth_linkstatus_get(dev, &link);
+
+		ngbe_dev_link_update(dev, 0);
+
+		/* link was down, likely to come up */
+		if (!link.link_status)
+			/* handle it 1 sec later, waiting for it to stabilize */
+			timeout = NGBE_LINK_UP_CHECK_TIMEOUT;
+		/* link was up, likely to go down */
+		else
+			/* handle it 4 sec later, waiting for it to stabilize */
+			timeout = NGBE_LINK_DOWN_CHECK_TIMEOUT;
+
+		ngbe_dev_link_status_print(dev);
+		if (rte_eal_alarm_set(timeout * 1000,
+				      ngbe_dev_interrupt_delayed_handler,
+				      (void *)dev) < 0) {
+			PMD_DRV_LOG(ERR, "Error setting alarm");
+		} else {
+			/* remember original mask */
+			intr->mask_misc_orig = intr->mask_misc;
+			/* only disable lsc interrupt */
+			intr->mask_misc &= ~NGBE_ICRMISC_PHY;
+
+			intr->mask_orig = intr->mask;
+			/* disable only the misc interrupt vector */
+			intr->mask &= ~(1ULL << NGBE_MISC_VEC_ID);
+		}
+	}
+
+	PMD_DRV_LOG(DEBUG, "enable intr immediately");
+	ngbe_enable_intr(dev);
+
+	return 0;
+}
+
+/**
+ * Interrupt handler registered as an alarm callback for delayed handling
+ * of a specific interrupt, waiting for the NIC state to become stable.
+ * Since the ngbe interrupt state is not stable right after the link goes
+ * down, the driver waits 4 seconds before reading a stable status.
+ *
+ * @param handle
+ *  Pointer to interrupt handle.
+ * @param param
+ *  The address of parameter (struct rte_eth_dev *) registered before.
+ *
+ * @return
+ *  void
+ */
+static void
+ngbe_dev_interrupt_delayed_handler(void *param)
+{
+	struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
+	struct ngbe_interrupt *intr = NGBE_DEV_INTR(dev);
+	struct ngbe_hw *hw = NGBE_DEV_HW(dev);
+	uint32_t eicr;
+
+	ngbe_disable_intr(hw);
+
+	eicr = ((u32 *)hw->isb_mem)[NGBE_ISB_MISC];
+
+	if (intr->flags & NGBE_FLAG_NEED_LINK_UPDATE) {
+		ngbe_dev_link_update(dev, 0);
+		intr->flags &= ~NGBE_FLAG_NEED_LINK_UPDATE;
+		ngbe_dev_link_status_print(dev);
+		rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_LSC,
+					      NULL);
+	}
+
+	if (intr->flags & NGBE_FLAG_MACSEC) {
+		rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_MACSEC,
+					      NULL);
+		intr->flags &= ~NGBE_FLAG_MACSEC;
+	}
+
+	/* restore original mask */
+	intr->mask_misc = intr->mask_misc_orig;
+	intr->mask_misc_orig = 0;
+	intr->mask = intr->mask_orig;
+	intr->mask_orig = 0;
+
+	PMD_DRV_LOG(DEBUG, "enable intr in delayed handler S[%08x]", eicr);
+	ngbe_enable_intr(dev);
+}
+
+/**
+ * Interrupt handler triggered by the NIC to handle
+ * a specific interrupt.
+ *
+ * @param handle
+ *  Pointer to interrupt handle.
+ * @param param
+ *  The address of parameter (struct rte_eth_dev *) registered before.
+ *
+ * @return
+ *  void
+ */
+static void
+ngbe_dev_interrupt_handler(void *param)
+{
+	struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
+
+	ngbe_dev_interrupt_get_status(dev);
+	ngbe_dev_interrupt_action(dev);
+}
+
 static const struct eth_dev_ops ngbe_eth_dev_ops = {
+	.dev_configure              = ngbe_dev_configure,
 	.dev_infos_get              = ngbe_dev_info_get,
+	.link_update                = ngbe_dev_link_update,
 };
 
 RTE_PMD_REGISTER_PCI(net_ngbe, rte_ngbe_pmd);
diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
index b4e2000dd3..10c23c41d1 100644
--- a/drivers/net/ngbe/ngbe_ethdev.h
+++ b/drivers/net/ngbe/ngbe_ethdev.h
@@ -6,6 +6,13 @@
 #ifndef _NGBE_ETHDEV_H_
 #define _NGBE_ETHDEV_H_
 
+/* interrupt flag bits */
+#define NGBE_FLAG_NEED_LINK_UPDATE (uint32_t)(1 << 0)
+#define NGBE_FLAG_MAILBOX          (uint32_t)(1 << 1)
+#define NGBE_FLAG_PHY_INTERRUPT    (uint32_t)(1 << 2)
+#define NGBE_FLAG_MACSEC           (uint32_t)(1 << 3)
+#define NGBE_FLAG_NEED_LINK_CONFIG (uint32_t)(1 << 4)
+
 #define NGBE_HKEY_MAX_INDEX 10
 
 #define NGBE_RSS_OFFLOAD_ALL ( \
@@ -19,11 +26,23 @@
 	ETH_RSS_IPV6_TCP_EX | \
 	ETH_RSS_IPV6_UDP_EX)
 
+#define NGBE_MISC_VEC_ID               RTE_INTR_VEC_ZERO_OFFSET
+
+/* structure for interrupt relative data */
+struct ngbe_interrupt {
+	uint32_t flags;
+	uint32_t mask_misc;
+	uint32_t mask_misc_orig; /* save mask during delayed handler */
+	uint64_t mask;
+	uint64_t mask_orig; /* save mask during delayed handler */
+};
+
 /*
  * Structure to store private data for each driver instance (for each port).
  */
 struct ngbe_adapter {
 	struct ngbe_hw             hw;
+	struct ngbe_interrupt      intr;
 };
 
 #define NGBE_DEV_ADAPTER(dev) \
@@ -32,6 +51,15 @@ struct ngbe_adapter {
 #define NGBE_DEV_HW(dev) \
 	(&((struct ngbe_adapter *)(dev)->data->dev_private)->hw)
 
+#define NGBE_DEV_INTR(dev) \
+	(&((struct ngbe_adapter *)(dev)->data->dev_private)->intr)
+
+int
+ngbe_dev_link_update_share(struct rte_eth_dev *dev,
+		int wait_to_complete);
+
+#define NGBE_LINK_DOWN_CHECK_TIMEOUT 4000 /* ms */
+#define NGBE_LINK_UP_CHECK_TIMEOUT   1000 /* ms */
 #define NGBE_VMDQ_NUM_UC_MAC         4096 /* Maximum nb. of UC MAC addr. */
 
 /*
-- 
2.27.0




^ permalink raw reply	[flat|nested] 51+ messages in thread
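
For context, the LSC machinery above is consumed from an application through
the standard ethdev API: with dev_conf.intr_conf.lsc set to 1, the PMD forces
a non-blocking link_update and signals RTE_ETH_EVENT_INTR_LSC from the delayed
alarm handler. A minimal application-side sketch (illustrative only, not code
from this series; the helper names are made up):

#include <stdio.h>
#include <rte_common.h>
#include <rte_ethdev.h>

static int
lsc_event_cb(uint16_t port_id, enum rte_eth_event_type event,
	     void *cb_arg, void *ret_param)
{
	struct rte_eth_link link;

	RTE_SET_USED(event);
	RTE_SET_USED(cb_arg);
	RTE_SET_USED(ret_param);

	/* non-blocking read of the link status cached by the PMD */
	if (rte_eth_link_get_nowait(port_id, &link) < 0)
		return 0;

	printf("port %u: link %s, %u Mbps\n", port_id,
	       link.link_status ? "up" : "down",
	       (unsigned int)link.link_speed);
	return 0;
}

/* call before rte_eth_dev_start(); dev_conf.intr_conf.lsc must be 1 */
static void
register_lsc(uint16_t port_id)
{
	rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_INTR_LSC,
				      lsc_event_cb, NULL);
}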

* [dpdk-dev] [PATCH v5 14/24] net/ngbe: setup the check PHY link
  2021-06-02  9:40 [dpdk-dev] [PATCH v5 00/24] net: ngbe PMD Jiawen Wu
                   ` (12 preceding siblings ...)
  2021-06-02  9:40 ` [dpdk-dev] [PATCH v5 13/24] net/ngbe: support link update Jiawen Wu
@ 2021-06-02  9:40 ` Jiawen Wu
  2021-06-02  9:40 ` [dpdk-dev] [PATCH v5 15/24] net/ngbe: add Rx queue setup and release Jiawen Wu
                   ` (11 subsequent siblings)
  25 siblings, 0 replies; 51+ messages in thread
From: Jiawen Wu @ 2021-06-02  9:40 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Set up the PHY, and determine the link and speed status from the PHY.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/ngbe/base/ngbe_dummy.h   |  18 ++++
 drivers/net/ngbe/base/ngbe_hw.c      |  53 ++++++++++
 drivers/net/ngbe/base/ngbe_hw.h      |   6 ++
 drivers/net/ngbe/base/ngbe_phy.c     |  22 +++++
 drivers/net/ngbe/base/ngbe_phy.h     |   2 +
 drivers/net/ngbe/base/ngbe_phy_mvl.c |  98 +++++++++++++++++++
 drivers/net/ngbe/base/ngbe_phy_mvl.h |   4 +
 drivers/net/ngbe/base/ngbe_phy_rtl.c | 138 +++++++++++++++++++++++++++
 drivers/net/ngbe/base/ngbe_phy_rtl.h |   4 +
 drivers/net/ngbe/base/ngbe_phy_yt.c  | 134 ++++++++++++++++++++++++++
 drivers/net/ngbe/base/ngbe_phy_yt.h  |   9 ++
 drivers/net/ngbe/base/ngbe_type.h    |   9 ++
 12 files changed, 497 insertions(+)

diff --git a/drivers/net/ngbe/base/ngbe_dummy.h b/drivers/net/ngbe/base/ngbe_dummy.h
index 4273e5af36..709e01659c 100644
--- a/drivers/net/ngbe/base/ngbe_dummy.h
+++ b/drivers/net/ngbe/base/ngbe_dummy.h
@@ -64,6 +64,11 @@ static inline void ngbe_mac_release_swfw_sync_dummy(struct ngbe_hw *TUP0,
 					u32 TUP1)
 {
 }
+static inline s32 ngbe_mac_setup_link_dummy(struct ngbe_hw *TUP0, u32 TUP1,
+					bool TUP2)
+{
+	return NGBE_ERR_OPS_DUMMY;
+}
 static inline s32 ngbe_mac_check_link_dummy(struct ngbe_hw *TUP0, u32 *TUP1,
 					bool *TUP3, bool TUP4)
 {
@@ -129,6 +134,16 @@ static inline s32 ngbe_phy_write_reg_unlocked_dummy(struct ngbe_hw *TUP0,
 {
 	return NGBE_ERR_OPS_DUMMY;
 }
+static inline s32 ngbe_phy_setup_link_dummy(struct ngbe_hw *TUP0,
+					u32 TUP1, bool TUP2)
+{
+	return NGBE_ERR_OPS_DUMMY;
+}
+static inline s32 ngbe_phy_check_link_dummy(struct ngbe_hw *TUP0, u32 *TUP1,
+					bool *TUP2)
+{
+	return NGBE_ERR_OPS_DUMMY;
+}
 static inline void ngbe_init_ops_dummy(struct ngbe_hw *hw)
 {
 	hw->bus.set_lan_id = ngbe_bus_set_lan_id_dummy;
@@ -140,6 +155,7 @@ static inline void ngbe_init_ops_dummy(struct ngbe_hw *hw)
 	hw->mac.get_mac_addr = ngbe_mac_get_mac_addr_dummy;
 	hw->mac.acquire_swfw_sync = ngbe_mac_acquire_swfw_sync_dummy;
 	hw->mac.release_swfw_sync = ngbe_mac_release_swfw_sync_dummy;
+	hw->mac.setup_link = ngbe_mac_setup_link_dummy;
 	hw->mac.check_link = ngbe_mac_check_link_dummy;
 	hw->mac.set_rar = ngbe_mac_set_rar_dummy;
 	hw->mac.clear_rar = ngbe_mac_clear_rar_dummy;
@@ -154,6 +170,8 @@ static inline void ngbe_init_ops_dummy(struct ngbe_hw *hw)
 	hw->phy.write_reg = ngbe_phy_write_reg_dummy;
 	hw->phy.read_reg_unlocked = ngbe_phy_read_reg_unlocked_dummy;
 	hw->phy.write_reg_unlocked = ngbe_phy_write_reg_unlocked_dummy;
+	hw->phy.setup_link = ngbe_phy_setup_link_dummy;
+	hw->phy.check_link = ngbe_phy_check_link_dummy;
 }
 
 #endif /* _NGBE_TYPE_DUMMY_H_ */
diff --git a/drivers/net/ngbe/base/ngbe_hw.c b/drivers/net/ngbe/base/ngbe_hw.c
index 386557f468..00ac4ce838 100644
--- a/drivers/net/ngbe/base/ngbe_hw.c
+++ b/drivers/net/ngbe/base/ngbe_hw.c
@@ -599,6 +599,54 @@ s32 ngbe_init_uta_tables(struct ngbe_hw *hw)
 	return 0;
 }
 
+/**
+ *  ngbe_check_mac_link_em - Determine link and speed status
+ *  @hw: pointer to hardware structure
+ *  @speed: pointer to link speed
+ *  @link_up: true when link is up
+ *  @link_up_wait_to_complete: bool used to wait for link up or not
+ *
+ *  Queries the PHY to determine if the link is up and the current speed
+ **/
+s32 ngbe_check_mac_link_em(struct ngbe_hw *hw, u32 *speed,
+			bool *link_up, bool link_up_wait_to_complete)
+{
+	u32 i, reg;
+	s32 status = 0;
+
+	DEBUGFUNC("ngbe_check_mac_link_em");
+
+	reg = rd32(hw, NGBE_GPIOINTSTAT);
+	wr32(hw, NGBE_GPIOEOI, reg);
+
+	if (link_up_wait_to_complete) {
+		for (i = 0; i < hw->mac.max_link_up_time; i++) {
+			status = hw->phy.check_link(hw, speed, link_up);
+			if (*link_up)
+				break;
+			msec_delay(100);
+		}
+	} else {
+		status = hw->phy.check_link(hw, speed, link_up);
+	}
+
+	return status;
+}
+
+s32 ngbe_setup_mac_link_em(struct ngbe_hw *hw,
+			       u32 speed,
+			       bool autoneg_wait_to_complete)
+{
+	s32 status;
+
+	DEBUGFUNC("\n");
+
+	/* Setup the PHY according to input speed */
+	status = hw->phy.setup_link(hw, speed, autoneg_wait_to_complete);
+
+	return status;
+}
+
 /**
  *  ngbe_init_thermal_sensor_thresh - Inits thermal sensor thresholds
  *  @hw: pointer to hardware structure
@@ -806,6 +854,10 @@ s32 ngbe_init_ops_pf(struct ngbe_hw *hw)
 	mac->set_vmdq = ngbe_set_vmdq;
 	mac->clear_vmdq = ngbe_clear_vmdq;
 
+	/* Link */
+	mac->check_link = ngbe_check_mac_link_em;
+	mac->setup_link = ngbe_setup_mac_link_em;
+
 	/* Manageability interface */
 	mac->init_thermal_sensor_thresh = ngbe_init_thermal_sensor_thresh;
 	mac->check_overtemp = ngbe_mac_check_overtemp;
@@ -853,6 +905,7 @@ s32 ngbe_init_shared_code(struct ngbe_hw *hw)
 		status = NGBE_ERR_DEVICE_NOT_SUPPORTED;
 		break;
 	}
+	hw->mac.max_link_up_time = NGBE_LINK_UP_TIME;
 
 	hw->bus.set_lan_id(hw);
 
diff --git a/drivers/net/ngbe/base/ngbe_hw.h b/drivers/net/ngbe/base/ngbe_hw.h
index 0b3d60ae29..1689223168 100644
--- a/drivers/net/ngbe/base/ngbe_hw.h
+++ b/drivers/net/ngbe/base/ngbe_hw.h
@@ -20,6 +20,12 @@ s32 ngbe_get_mac_addr(struct ngbe_hw *hw, u8 *mac_addr);
 
 void ngbe_set_lan_id_multi_port(struct ngbe_hw *hw);
 
+s32 ngbe_check_mac_link_em(struct ngbe_hw *hw, u32 *speed,
+			bool *link_up, bool link_up_wait_to_complete);
+s32 ngbe_setup_mac_link_em(struct ngbe_hw *hw,
+			       u32 speed,
+			       bool autoneg_wait_to_complete);
+
 s32 ngbe_set_rar(struct ngbe_hw *hw, u32 index, u8 *addr, u32 vmdq,
 			  u32 enable_addr);
 s32 ngbe_clear_rar(struct ngbe_hw *hw, u32 index);
diff --git a/drivers/net/ngbe/base/ngbe_phy.c b/drivers/net/ngbe/base/ngbe_phy.c
index d1b4cc9e5f..7a9baada81 100644
--- a/drivers/net/ngbe/base/ngbe_phy.c
+++ b/drivers/net/ngbe/base/ngbe_phy.c
@@ -420,7 +420,29 @@ s32 ngbe_init_phy(struct ngbe_hw *hw)
 
 	/* Identify the PHY */
 	err = phy->identify(hw);
+	if (err == NGBE_ERR_PHY_ADDR_INVALID)
+		goto init_phy_ops_out;
 
+	/* Set necessary function pointers based on PHY type */
+	switch (hw->phy.type) {
+	case ngbe_phy_rtl:
+		hw->phy.check_link = ngbe_check_phy_link_rtl;
+		hw->phy.setup_link = ngbe_setup_phy_link_rtl;
+		break;
+	case ngbe_phy_mvl:
+	case ngbe_phy_mvl_sfi:
+		hw->phy.check_link = ngbe_check_phy_link_mvl;
+		hw->phy.setup_link = ngbe_setup_phy_link_mvl;
+		break;
+	case ngbe_phy_yt8521s:
+	case ngbe_phy_yt8521s_sfi:
+		hw->phy.check_link = ngbe_check_phy_link_yt;
+		hw->phy.setup_link = ngbe_setup_phy_link_yt;
+		break;
+	default:
+		break;
+	}
+
+init_phy_ops_out:
 	return err;
 }
 
diff --git a/drivers/net/ngbe/base/ngbe_phy.h b/drivers/net/ngbe/base/ngbe_phy.h
index 59d9efe025..96e47b5bb9 100644
--- a/drivers/net/ngbe/base/ngbe_phy.h
+++ b/drivers/net/ngbe/base/ngbe_phy.h
@@ -22,6 +22,8 @@
 #define NGBE_MD_PHY_ID_LOW		0x3 /* PHY ID Low Reg*/
 #define   NGBE_PHY_REVISION_MASK	0xFFFFFFF0
 
+#define NGBE_MII_AUTONEG_REG			0x0
+
 /* IEEE 802.3 Clause 22 */
 struct mdi_reg_22 {
 	u16 page;
diff --git a/drivers/net/ngbe/base/ngbe_phy_mvl.c b/drivers/net/ngbe/base/ngbe_phy_mvl.c
index 40419a61f6..a1c055e238 100644
--- a/drivers/net/ngbe/base/ngbe_phy_mvl.c
+++ b/drivers/net/ngbe/base/ngbe_phy_mvl.c
@@ -48,6 +48,64 @@ s32 ngbe_write_phy_reg_mvl(struct ngbe_hw *hw,
 	return 0;
 }
 
+s32 ngbe_setup_phy_link_mvl(struct ngbe_hw *hw, u32 speed,
+				bool autoneg_wait_to_complete)
+{
+	u16 value_r4 = 0;
+	u16 value_r9 = 0;
+	u16 value;
+
+	DEBUGFUNC("ngbe_setup_phy_link_mvl");
+	UNREFERENCED_PARAMETER(autoneg_wait_to_complete);
+
+	hw->phy.autoneg_advertised = 0;
+
+	if (hw->phy.type == ngbe_phy_mvl) {
+		if (speed & NGBE_LINK_SPEED_1GB_FULL) {
+			value_r9 |= MVL_PHY_1000BASET_FULL;
+			hw->phy.autoneg_advertised |= NGBE_LINK_SPEED_1GB_FULL;
+		}
+
+		if (speed & NGBE_LINK_SPEED_100M_FULL) {
+			value_r4 |= MVL_PHY_100BASET_FULL;
+			hw->phy.autoneg_advertised |= NGBE_LINK_SPEED_100M_FULL;
+		}
+
+		if (speed & NGBE_LINK_SPEED_10M_FULL) {
+			value_r4 |= MVL_PHY_10BASET_FULL;
+			hw->phy.autoneg_advertised |= NGBE_LINK_SPEED_10M_FULL;
+		}
+
+		hw->phy.read_reg(hw, MVL_ANA, 0, &value);
+		value &= ~(MVL_PHY_100BASET_FULL |
+			   MVL_PHY_100BASET_HALF |
+			   MVL_PHY_10BASET_FULL |
+			   MVL_PHY_10BASET_HALF);
+		value_r4 |= value;
+		hw->phy.write_reg(hw, MVL_ANA, 0, value_r4);
+
+		hw->phy.read_reg(hw, MVL_PHY_1000BASET, 0, &value);
+		value &= ~(MVL_PHY_1000BASET_FULL |
+			   MVL_PHY_1000BASET_HALF);
+		value_r9 |= value;
+		hw->phy.write_reg(hw, MVL_PHY_1000BASET, 0, value_r9);
+	} else {
+		hw->phy.autoneg_advertised = 1;
+
+		hw->phy.read_reg(hw, MVL_ANA, 0, &value);
+		value &= ~(MVL_PHY_1000BASEX_HALF | MVL_PHY_1000BASEX_FULL);
+		value |= MVL_PHY_1000BASEX_FULL;
+		hw->phy.write_reg(hw, MVL_ANA, 0, value);
+	}
+
+	value = MVL_CTRL_RESTART_AN | MVL_CTRL_ANE;
+	ngbe_write_phy_reg_mdi(hw, MVL_CTRL, 0, value);
+
+	hw->phy.read_reg(hw, MVL_INTR, 0, &value);
+
+	return 0;
+}
+
 s32 ngbe_reset_phy_mvl(struct ngbe_hw *hw)
 {
 	u32 i;
@@ -87,3 +145,43 @@ s32 ngbe_reset_phy_mvl(struct ngbe_hw *hw)
 	return status;
 }
 
+s32 ngbe_check_phy_link_mvl(struct ngbe_hw *hw,
+		u32 *speed, bool *link_up)
+{
+	s32 status = 0;
+	u16 phy_link = 0;
+	u16 phy_speed = 0;
+	u16 phy_data = 0;
+	u16 insr = 0;
+
+	DEBUGFUNC("ngbe_check_phy_link_mvl");
+
+	/* Initialize speed and link to default case */
+	*link_up = false;
+	*speed = NGBE_LINK_SPEED_UNKNOWN;
+
+	hw->phy.read_reg(hw, MVL_INTR, 0, &insr);
+
+	/*
+	 * Check current speed and link status of the PHY register.
+	 * This is a vendor specific register and may have to
+	 * be changed for other copper PHYs.
+	 */
+	status = hw->phy.read_reg(hw, MVL_PHYSR, 0, &phy_data);
+	phy_link = phy_data & MVL_PHYSR_LINK;
+	phy_speed = phy_data & MVL_PHYSR_SPEED_MASK;
+
+	if (phy_link == MVL_PHYSR_LINK) {
+		*link_up = true;
+
+		if (phy_speed == MVL_PHYSR_SPEED_1000M)
+			*speed = NGBE_LINK_SPEED_1GB_FULL;
+		else if (phy_speed == MVL_PHYSR_SPEED_100M)
+			*speed = NGBE_LINK_SPEED_100M_FULL;
+		else if (phy_speed == MVL_PHYSR_SPEED_10M)
+			*speed = NGBE_LINK_SPEED_10M_FULL;
+	}
+
+	return status;
+}
+
diff --git a/drivers/net/ngbe/base/ngbe_phy_mvl.h b/drivers/net/ngbe/base/ngbe_phy_mvl.h
index a88ace9ec1..a663a429dd 100644
--- a/drivers/net/ngbe/base/ngbe_phy_mvl.h
+++ b/drivers/net/ngbe/base/ngbe_phy_mvl.h
@@ -89,4 +89,8 @@ s32 ngbe_write_phy_reg_mvl(struct ngbe_hw *hw, u32 reg_addr, u32 device_type,
 
 s32 ngbe_reset_phy_mvl(struct ngbe_hw *hw);
 
+s32 ngbe_check_phy_link_mvl(struct ngbe_hw *hw,
+		u32 *speed, bool *link_up);
+s32 ngbe_setup_phy_link_mvl(struct ngbe_hw *hw,
+			u32 speed, bool autoneg_wait_to_complete);
 #endif /* _NGBE_PHY_MVL_H_ */
diff --git a/drivers/net/ngbe/base/ngbe_phy_rtl.c b/drivers/net/ngbe/base/ngbe_phy_rtl.c
index 9c98a8ecaf..5214ce5a8a 100644
--- a/drivers/net/ngbe/base/ngbe_phy_rtl.c
+++ b/drivers/net/ngbe/base/ngbe_phy_rtl.c
@@ -36,9 +36,147 @@ s32 ngbe_write_phy_reg_rtl(struct ngbe_hw *hw,
 	return 0;
 }
 
+/**
+ *  ngbe_setup_phy_link_rtl - Set and restart auto-negotiation
+ *  @hw: pointer to hardware structure
+ *  @speed: link speed abilities to advertise
+ *  @autoneg_wait_to_complete: unused, kept for the ops signature
+ *
+ *  Sets the advertised abilities and restarts PHY auto-negotiation.
+ **/
+s32 ngbe_setup_phy_link_rtl(struct ngbe_hw *hw,
+		u32 speed, bool autoneg_wait_to_complete)
+{
+	u16 autoneg_reg = NGBE_MII_AUTONEG_REG;
+
+	DEBUGFUNC("ngbe_setup_phy_link_rtl");
+
+	UNREFERENCED_PARAMETER(autoneg_wait_to_complete);
+
+	hw->phy.read_reg(hw, RTL_INSR, 0xa43, &autoneg_reg);
+
+	/*
+	 * Clear autoneg_advertised and set new values based on input link
+	 * speed.
+	 */
+	if (speed) {
+		hw->phy.autoneg_advertised = 0;
+
+		if (speed & NGBE_LINK_SPEED_1GB_FULL)
+			hw->phy.autoneg_advertised |= NGBE_LINK_SPEED_1GB_FULL;
+
+		if (speed & NGBE_LINK_SPEED_100M_FULL)
+			hw->phy.autoneg_advertised |= NGBE_LINK_SPEED_100M_FULL;
+
+		if (speed & NGBE_LINK_SPEED_10M_FULL)
+			hw->phy.autoneg_advertised |= NGBE_LINK_SPEED_10M_FULL;
+	}
+
+	/* disable 10/100M Half Duplex */
+	hw->phy.read_reg(hw, RTL_ANAR, RTL_DEV_ZERO, &autoneg_reg);
+	autoneg_reg &= 0xFF5F;
+	hw->phy.write_reg(hw, RTL_ANAR, RTL_DEV_ZERO, autoneg_reg);
+
+	/* set advertise enable according to input speed */
+	if (!(speed & NGBE_LINK_SPEED_1GB_FULL)) {
+		hw->phy.read_reg(hw, RTL_GBCR,
+			RTL_DEV_ZERO, &autoneg_reg);
+		autoneg_reg &= ~RTL_GBCR_1000F;
+		hw->phy.write_reg(hw, RTL_GBCR,
+			RTL_DEV_ZERO, autoneg_reg);
+	} else {
+		hw->phy.read_reg(hw, RTL_GBCR,
+			RTL_DEV_ZERO, &autoneg_reg);
+		autoneg_reg |= RTL_GBCR_1000F;
+		hw->phy.write_reg(hw, RTL_GBCR,
+			RTL_DEV_ZERO, autoneg_reg);
+	}
+
+	if (!(speed & NGBE_LINK_SPEED_100M_FULL)) {
+		hw->phy.read_reg(hw, RTL_ANAR,
+			RTL_DEV_ZERO, &autoneg_reg);
+		autoneg_reg &= ~RTL_ANAR_100F;
+		autoneg_reg &= ~RTL_ANAR_100H;
+		hw->phy.write_reg(hw, RTL_ANAR,
+			RTL_DEV_ZERO, autoneg_reg);
+	} else {
+		hw->phy.read_reg(hw, RTL_ANAR,
+			RTL_DEV_ZERO, &autoneg_reg);
+		autoneg_reg |= RTL_ANAR_100F;
+		hw->phy.write_reg(hw, RTL_ANAR,
+			RTL_DEV_ZERO, autoneg_reg);
+	}
+
+	if (!(speed & NGBE_LINK_SPEED_10M_FULL)) {
+		hw->phy.read_reg(hw, RTL_ANAR,
+			RTL_DEV_ZERO, &autoneg_reg);
+		autoneg_reg &= ~RTL_ANAR_10F;
+		autoneg_reg &= ~RTL_ANAR_10H;
+		hw->phy.write_reg(hw, RTL_ANAR,
+			RTL_DEV_ZERO, autoneg_reg);
+	} else {
+		hw->phy.read_reg(hw, RTL_ANAR,
+			RTL_DEV_ZERO, &autoneg_reg);
+		autoneg_reg |= RTL_ANAR_10F;
+		hw->phy.write_reg(hw, RTL_ANAR,
+			RTL_DEV_ZERO, autoneg_reg);
+	}
+
+	/* restart AN and wait AN done interrupt */
+	autoneg_reg = RTL_BMCR_RESTART_AN | RTL_BMCR_ANE;
+	hw->phy.write_reg(hw, RTL_BMCR, RTL_DEV_ZERO, autoneg_reg);
+
+	autoneg_reg = 0x205B;
+	hw->phy.write_reg(hw, RTL_LCR, 0xd04, autoneg_reg);
+	hw->phy.write_reg(hw, RTL_EEELCR, 0xd04, 0);
+
+	hw->phy.read_reg(hw, RTL_LPCR, 0xd04, &autoneg_reg);
+	autoneg_reg = autoneg_reg & 0xFFFC;
+	autoneg_reg |= 0x2;
+	hw->phy.write_reg(hw, RTL_LPCR, 0xd04, autoneg_reg);
+
+	return 0;
+}
+
 s32 ngbe_reset_phy_rtl(struct ngbe_hw *hw)
 {
 	UNREFERENCED_PARAMETER(hw);
 	return 0;
 }
 
+s32 ngbe_check_phy_link_rtl(struct ngbe_hw *hw, u32 *speed, bool *link_up)
+{
+	s32 status = 0;
+	u16 phy_link = 0;
+	u16 phy_speed = 0;
+	u16 phy_data = 0;
+	u16 insr = 0;
+
+	DEBUGFUNC("ngbe_check_phy_link_rtl");
+
+	hw->phy.read_reg(hw, RTL_INSR, 0xa43, &insr);
+
+	/* Initialize speed and link to default case */
+	*link_up = false;
+	*speed = NGBE_LINK_SPEED_UNKNOWN;
+
+	/*
+	 * Check current speed and link status of the PHY register.
+	 * This is a vendor specific register and may have to
+	 * be changed for other copper PHYs.
+	 */
+	status = hw->phy.read_reg(hw, RTL_PHYSR, 0xa43, &phy_data);
+	phy_link = phy_data & RTL_PHYSR_RTLS;
+	phy_speed = phy_data & (RTL_PHYSR_SPEED_MASK | RTL_PHYSR_DP);
+	if (phy_link == RTL_PHYSR_RTLS) {
+		*link_up = true;
+
+		if (phy_speed == (RTL_PHYSR_SPEED_1000M | RTL_PHYSR_DP))
+			*speed = NGBE_LINK_SPEED_1GB_FULL;
+		else if (phy_speed == (RTL_PHYSR_SPEED_100M | RTL_PHYSR_DP))
+			*speed = NGBE_LINK_SPEED_100M_FULL;
+		else if (phy_speed == (RTL_PHYSR_SPEED_10M | RTL_PHYSR_DP))
+			*speed = NGBE_LINK_SPEED_10M_FULL;
+	}
+
+	return status;
+}
+
diff --git a/drivers/net/ngbe/base/ngbe_phy_rtl.h b/drivers/net/ngbe/base/ngbe_phy_rtl.h
index ecb60b0ddd..e8bc4a1bd7 100644
--- a/drivers/net/ngbe/base/ngbe_phy_rtl.h
+++ b/drivers/net/ngbe/base/ngbe_phy_rtl.h
@@ -78,6 +78,10 @@ s32 ngbe_read_phy_reg_rtl(struct ngbe_hw *hw, u32 reg_addr, u32 device_type,
 s32 ngbe_write_phy_reg_rtl(struct ngbe_hw *hw, u32 reg_addr, u32 device_type,
 			u16 phy_data);
 
+s32 ngbe_setup_phy_link_rtl(struct ngbe_hw *hw,
+		u32 speed, bool autoneg_wait_to_complete);
 s32 ngbe_reset_phy_rtl(struct ngbe_hw *hw);
+s32 ngbe_check_phy_link_rtl(struct ngbe_hw *hw,
+			u32 *speed, bool *link_up);
 
 #endif /* _NGBE_PHY_RTL_H_ */
diff --git a/drivers/net/ngbe/base/ngbe_phy_yt.c b/drivers/net/ngbe/base/ngbe_phy_yt.c
index 84b20de45c..f518dc0af6 100644
--- a/drivers/net/ngbe/base/ngbe_phy_yt.c
+++ b/drivers/net/ngbe/base/ngbe_phy_yt.c
@@ -78,6 +78,104 @@ s32 ngbe_write_phy_reg_ext_yt(struct ngbe_hw *hw,
 	return 0;
 }
 
+s32 ngbe_read_phy_reg_sds_ext_yt(struct ngbe_hw *hw,
+		u32 reg_addr, u32 device_type, u16 *phy_data)
+{
+	ngbe_write_phy_reg_ext_yt(hw, YT_SMI_PHY, device_type, YT_SMI_PHY_SDS);
+	ngbe_read_phy_reg_ext_yt(hw, reg_addr, device_type, phy_data);
+	ngbe_write_phy_reg_ext_yt(hw, YT_SMI_PHY, device_type, 0);
+
+	return 0;
+}
+
+s32 ngbe_write_phy_reg_sds_ext_yt(struct ngbe_hw *hw,
+		u32 reg_addr, u32 device_type, u16 phy_data)
+{
+	ngbe_write_phy_reg_ext_yt(hw, YT_SMI_PHY, device_type, YT_SMI_PHY_SDS);
+	ngbe_write_phy_reg_ext_yt(hw, reg_addr, device_type, phy_data);
+	ngbe_write_phy_reg_ext_yt(hw, YT_SMI_PHY, device_type, 0);
+
+	return 0;
+}
+
+s32 ngbe_setup_phy_link_yt(struct ngbe_hw *hw, u32 speed,
+				bool autoneg_wait_to_complete)
+{
+	u16 value_r4 = 0;
+	u16 value_r9 = 0;
+	u16 value;
+
+	DEBUGFUNC("ngbe_setup_phy_link_yt");
+	UNREFERENCED_PARAMETER(autoneg_wait_to_complete);
+
+	hw->phy.autoneg_advertised = 0;
+
+	if (hw->phy.type == ngbe_phy_yt8521s) {
+		/* disable 100/10base-T auto-negotiation advertisement */
+		hw->phy.read_reg(hw, YT_ANA, 0, &value);
+		value &= ~(YT_ANA_100BASET_FULL | YT_ANA_10BASET_FULL);
+		hw->phy.write_reg(hw, YT_ANA, 0, value);
+
+		/* disable 1000base-T auto-negotiation advertisement */
+		hw->phy.read_reg(hw, YT_MS_CTRL, 0, &value);
+		value &= ~YT_MS_1000BASET_FULL;
+		hw->phy.write_reg(hw, YT_MS_CTRL, 0, value);
+
+		if (speed & NGBE_LINK_SPEED_1GB_FULL) {
+			hw->phy.autoneg_advertised |= NGBE_LINK_SPEED_1GB_FULL;
+			value_r9 |= YT_MS_1000BASET_FULL;
+		}
+		if (speed & NGBE_LINK_SPEED_100M_FULL) {
+			hw->phy.autoneg_advertised |= NGBE_LINK_SPEED_100M_FULL;
+			value_r4 |= YT_ANA_100BASET_FULL;
+		}
+		if (speed & NGBE_LINK_SPEED_10M_FULL) {
+			hw->phy.autoneg_advertised |= NGBE_LINK_SPEED_10M_FULL;
+			value_r4 |= YT_ANA_10BASET_FULL;
+		}
+
+		/* enable 1000base-T auto-negotiation advertisement */
+		hw->phy.read_reg(hw, YT_MS_CTRL, 0, &value);
+		value |= value_r9;
+		hw->phy.write_reg(hw, YT_MS_CTRL, 0, value);
+
+		/* enable 100/10base-T auto-negotiation advertisement */
+		hw->phy.read_reg(hw, YT_ANA, 0, &value);
+		value |= value_r4;
+		hw->phy.write_reg(hw, YT_ANA, 0, value);
+
+		/* software reset to make the above configuration take effect */
+		hw->phy.read_reg(hw, YT_BCR, 0, &value);
+		value |= YT_BCR_RESET;
+		hw->phy.write_reg(hw, YT_BCR, 0, value);
+	} else {
+		hw->phy.autoneg_advertised |= NGBE_LINK_SPEED_1GB_FULL;
+
+		/* RGMII_Config1 : Config rx and tx training delay */
+		value = YT_RGMII_CONF1_RXDELAY |
+			YT_RGMII_CONF1_TXDELAY_FE |
+			YT_RGMII_CONF1_TXDELAY;
+		ngbe_write_phy_reg_ext_yt(hw, YT_RGMII_CONF1, 0, value);
+		value = YT_CHIP_MODE_SEL(1) |
+			YT_CHIP_SW_LDO_EN |
+			YT_CHIP_SW_RST;
+		ngbe_write_phy_reg_ext_yt(hw, YT_CHIP, 0, value);
+
+		/* software reset */
+		ngbe_write_phy_reg_sds_ext_yt(hw, 0x0, 0, 0x9140);
+
+		/* power on phy */
+		hw->phy.read_reg(hw, YT_BCR, 0, &value);
+		value &= ~YT_BCR_PWDN;
+		hw->phy.write_reg(hw, YT_BCR, 0, value);
+	}
+
+	ngbe_write_phy_reg_ext_yt(hw, YT_SMI_PHY, 0, 0);
+	ngbe_read_phy_reg_mdi(hw, YT_INTR_STATUS, 0, &value);
+
+	return 0;
+}
+
 s32 ngbe_reset_phy_yt(struct ngbe_hw *hw)
 {
 	u32 i;
@@ -110,3 +208,39 @@ s32 ngbe_reset_phy_yt(struct ngbe_hw *hw)
 	return status;
 }
 
+s32 ngbe_check_phy_link_yt(struct ngbe_hw *hw,
+		u32 *speed, bool *link_up)
+{
+	s32 status = 0;
+	u16 phy_link = 0;
+	u16 phy_speed = 0;
+	u16 phy_data = 0;
+	u16 insr = 0;
+
+	DEBUGFUNC("ngbe_check_phy_link_yt");
+
+	/* Initialize speed and link to default case */
+	*link_up = false;
+	*speed = NGBE_LINK_SPEED_UNKNOWN;
+
+	ngbe_write_phy_reg_ext_yt(hw, YT_SMI_PHY, 0, 0);
+	ngbe_read_phy_reg_mdi(hw, YT_INTR_STATUS, 0, &insr);
+
+	status = hw->phy.read_reg(hw, YT_SPST, 0, &phy_data);
+	phy_link = phy_data & YT_SPST_LINK;
+	phy_speed = phy_data & YT_SPST_SPEED_MASK;
+
+	if (phy_link) {
+		*link_up = true;
+
+		if (phy_speed == YT_SPST_SPEED_1000M)
+			*speed = NGBE_LINK_SPEED_1GB_FULL;
+		else if (phy_speed == YT_SPST_SPEED_100M)
+			*speed = NGBE_LINK_SPEED_100M_FULL;
+		else if (phy_speed == YT_SPST_SPEED_10M)
+			*speed = NGBE_LINK_SPEED_10M_FULL;
+	}
+
+	return status;
+}
+
diff --git a/drivers/net/ngbe/base/ngbe_phy_yt.h b/drivers/net/ngbe/base/ngbe_phy_yt.h
index 03b53ece86..26820ecb92 100644
--- a/drivers/net/ngbe/base/ngbe_phy_yt.h
+++ b/drivers/net/ngbe/base/ngbe_phy_yt.h
@@ -26,6 +26,7 @@
 /* MII common registers in UTP and SDS */
 #define YT_BCR				0x0
 #define   YT_BCR_RESET			MS16(15, 0x1)
+#define   YT_BCR_PWDN			MS16(11, 0x1)
 #define YT_ANA				0x4
 /* copper */
 #define   YT_ANA_100BASET_FULL		MS16(8, 0x1)
@@ -60,7 +61,15 @@ s32 ngbe_read_phy_reg_ext_yt(struct ngbe_hw *hw,
 		u32 reg_addr, u32 device_type, u16 *phy_data);
 s32 ngbe_write_phy_reg_ext_yt(struct ngbe_hw *hw,
 		u32 reg_addr, u32 device_type, u16 phy_data);
+s32 ngbe_read_phy_reg_sds_ext_yt(struct ngbe_hw *hw,
+		u32 reg_addr, u32 device_type, u16 *phy_data);
+s32 ngbe_write_phy_reg_sds_ext_yt(struct ngbe_hw *hw,
+		u32 reg_addr, u32 device_type, u16 phy_data);
 
 s32 ngbe_reset_phy_yt(struct ngbe_hw *hw);
 
+s32 ngbe_check_phy_link_yt(struct ngbe_hw *hw,
+		u32 *speed, bool *link_up);
+s32 ngbe_setup_phy_link_yt(struct ngbe_hw *hw,
+			u32 speed, bool autoneg_wait_to_complete);
 #endif /* _NGBE_PHY_YT_H_ */
diff --git a/drivers/net/ngbe/base/ngbe_type.h b/drivers/net/ngbe/base/ngbe_type.h
index d05d2ff28a..bc99d9c3db 100644
--- a/drivers/net/ngbe/base/ngbe_type.h
+++ b/drivers/net/ngbe/base/ngbe_type.h
@@ -6,6 +6,8 @@
 #ifndef _NGBE_TYPE_H_
 #define _NGBE_TYPE_H_
 
+#define NGBE_LINK_UP_TIME	90 /* 9.0 Seconds */
+
 #define NGBE_FRAME_SIZE_DFT       (1522) /* Default frame size, +FCS */
 
 #define NGBE_ALIGN		128 /* as intel did */
@@ -96,6 +98,8 @@ struct ngbe_mac_info {
 	s32 (*acquire_swfw_sync)(struct ngbe_hw *hw, u32 mask);
 	void (*release_swfw_sync)(struct ngbe_hw *hw, u32 mask);
 
+	s32 (*setup_link)(struct ngbe_hw *hw, u32 speed,
+			       bool autoneg_wait_to_complete);
 	s32 (*check_link)(struct ngbe_hw *hw, u32 *speed,
 			       bool *link_up, bool link_up_wait_to_complete);
 	/* RAR */
@@ -121,6 +125,7 @@ struct ngbe_mac_info {
 	bool get_link_status;
 	struct ngbe_thermal_sensor_data  thermal_sensor_data;
 	bool set_lben;
+	u32  max_link_up_time;
 };
 
 struct ngbe_phy_info {
@@ -134,6 +139,9 @@ struct ngbe_phy_info {
 				u32 device_type, u16 *phy_data);
 	s32 (*write_reg_unlocked)(struct ngbe_hw *hw, u32 reg_addr,
 				u32 device_type, u16 phy_data);
+	s32 (*setup_link)(struct ngbe_hw *hw, u32 speed,
+				bool autoneg_wait_to_complete);
+	s32 (*check_link)(struct ngbe_hw *hw, u32 *speed, bool *link_up);
 
 	enum ngbe_media_type media_type;
 	enum ngbe_phy_type type;
@@ -142,6 +150,7 @@ struct ngbe_phy_info {
 	u32 revision;
 	u32 phy_semaphore_mask;
 	bool reset_disable;
+	u32 autoneg_advertised;
 };
 
 enum ngbe_isb_idx {
-- 
2.27.0




^ permalink raw reply	[flat|nested] 51+ messages in thread
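
A note on how the ops added in this patch compose: ngbe_init_ops_pf() points
mac.check_link/mac.setup_link at the *_em helpers, and ngbe_init_phy() fills
phy.check_link/phy.setup_link per vendor PHY. The wrapper below is a
hypothetical, driver-internal sketch of the resulting call chain; it is not
part of the series:

#include "base/ngbe.h"

static s32
ngbe_bring_up_link(struct ngbe_hw *hw)
{
	u32 speed = NGBE_LINK_SPEED_UNKNOWN;
	bool link_up = false;
	s32 err;

	/* advertise all copper speeds; the vendor PHY op restarts autoneg */
	err = hw->mac.setup_link(hw, NGBE_LINK_SPEED_1GB_FULL |
				     NGBE_LINK_SPEED_100M_FULL |
				     NGBE_LINK_SPEED_10M_FULL, false);
	if (err != 0)
		return err;

	/* with wait_to_complete set, ngbe_check_mac_link_em() polls the
	 * PHY up to hw->mac.max_link_up_time (90) times, 100 ms apart */
	return hw->mac.check_link(hw, &speed, &link_up, true);
}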

* [dpdk-dev] [PATCH v5 15/24] net/ngbe: add Rx queue setup and release
  2021-06-02  9:40 [dpdk-dev] [PATCH v5 00/24] net: ngbe PMD Jiawen Wu
                   ` (13 preceding siblings ...)
  2021-06-02  9:40 ` [dpdk-dev] [PATCH v5 14/24] net/ngbe: setup the check PHY link Jiawen Wu
@ 2021-06-02  9:40 ` Jiawen Wu
  2021-06-14 18:53   ` Andrew Rybchenko
  2021-06-02  9:41 ` [dpdk-dev] [PATCH v5 16/24] net/ngbe: add Tx " Jiawen Wu
                   ` (10 subsequent siblings)
  25 siblings, 1 reply; 51+ messages in thread
From: Jiawen Wu @ 2021-06-02  9:40 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Set up and release the device Rx queue.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/ngbe/ngbe_ethdev.c |   9 +
 drivers/net/ngbe/ngbe_ethdev.h |   8 +
 drivers/net/ngbe/ngbe_rxtx.c   | 305 +++++++++++++++++++++++++++++++++
 drivers/net/ngbe/ngbe_rxtx.h   |  90 ++++++++++
 4 files changed, 412 insertions(+)

diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 97b6de3aa4..8eb41a7a2b 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -262,12 +262,19 @@ static int
 ngbe_dev_configure(struct rte_eth_dev *dev)
 {
 	struct ngbe_interrupt *intr = NGBE_DEV_INTR(dev);
+	struct ngbe_adapter *adapter = NGBE_DEV_ADAPTER(dev);
 
 	PMD_INIT_FUNC_TRACE();
 
 	/* set flag to update link status after init */
 	intr->flags |= NGBE_FLAG_NEED_LINK_UPDATE;
 
+	/*
+	 * Initialize to TRUE. If any Rx queue fails to meet the bulk
+	 * allocation preconditions, this flag will be reset to FALSE.
+	 */
+	adapter->rx_bulk_alloc_allowed = true;
+
 	return 0;
 }
 
@@ -654,6 +661,8 @@ static const struct eth_dev_ops ngbe_eth_dev_ops = {
 	.dev_configure              = ngbe_dev_configure,
 	.dev_infos_get              = ngbe_dev_info_get,
 	.link_update                = ngbe_dev_link_update,
+	.rx_queue_setup             = ngbe_dev_rx_queue_setup,
+	.rx_queue_release           = ngbe_dev_rx_queue_release,
 };
 
 RTE_PMD_REGISTER_PCI(net_ngbe, rte_ngbe_pmd);
diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
index 10c23c41d1..c324ca7e0f 100644
--- a/drivers/net/ngbe/ngbe_ethdev.h
+++ b/drivers/net/ngbe/ngbe_ethdev.h
@@ -43,6 +43,7 @@ struct ngbe_interrupt {
 struct ngbe_adapter {
 	struct ngbe_hw             hw;
 	struct ngbe_interrupt      intr;
+	bool rx_bulk_alloc_allowed;
 };
 
 #define NGBE_DEV_ADAPTER(dev) \
@@ -54,6 +55,13 @@ struct ngbe_adapter {
 #define NGBE_DEV_INTR(dev) \
 	(&((struct ngbe_adapter *)(dev)->data->dev_private)->intr)
 
+void ngbe_dev_rx_queue_release(void *rxq);
+
+int  ngbe_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
+		uint16_t nb_rx_desc, unsigned int socket_id,
+		const struct rte_eth_rxconf *rx_conf,
+		struct rte_mempool *mb_pool);
+
 int
 ngbe_dev_link_update_share(struct rte_eth_dev *dev,
 		int wait_to_complete);
diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c
index ae24367b18..9992983bef 100644
--- a/drivers/net/ngbe/ngbe_rxtx.c
+++ b/drivers/net/ngbe/ngbe_rxtx.c
@@ -3,9 +3,14 @@
  * Copyright(c) 2010-2017 Intel Corporation
  */
 
+#include <sys/queue.h>
+
 #include <stdint.h>
 #include <rte_ethdev.h>
+#include <ethdev_driver.h>
+#include <rte_malloc.h>
 
+#include "ngbe_logs.h"
 #include "base/ngbe.h"
 #include "ngbe_ethdev.h"
 #include "ngbe_rxtx.h"
@@ -37,6 +42,166 @@ ngbe_get_tx_port_offloads(struct rte_eth_dev *dev)
 	return tx_offload_capa;
 }
 
+/**
+ * ngbe_free_sc_cluster - free the not-yet-completed scattered cluster
+ *
+ * The "next" pointer of the last segment of (not-yet-completed) RSC clusters
+ * in the sw_rsc_ring is not set to NULL but rather points to the next
+ * mbuf of this RSC aggregation (that has not been completed yet and still
+ * resides on the HW ring). So, instead of calling for rte_pktmbuf_free() we
+ * will just free first "nb_segs" segments of the cluster explicitly by calling
+ * an rte_pktmbuf_free_seg().
+ *
+ * @m scattered cluster head
+ */
+static void __rte_cold
+ngbe_free_sc_cluster(struct rte_mbuf *m)
+{
+	uint16_t i, nb_segs = m->nb_segs;
+	struct rte_mbuf *next_seg;
+
+	for (i = 0; i < nb_segs; i++) {
+		next_seg = m->next;
+		rte_pktmbuf_free_seg(m);
+		m = next_seg;
+	}
+}
+
+static void __rte_cold
+ngbe_rx_queue_release_mbufs(struct ngbe_rx_queue *rxq)
+{
+	unsigned int i;
+
+	if (rxq->sw_ring != NULL) {
+		for (i = 0; i < rxq->nb_rx_desc; i++) {
+			if (rxq->sw_ring[i].mbuf != NULL) {
+				rte_pktmbuf_free_seg(rxq->sw_ring[i].mbuf);
+				rxq->sw_ring[i].mbuf = NULL;
+			}
+		}
+		if (rxq->rx_nb_avail) {
+			for (i = 0; i < rxq->rx_nb_avail; ++i) {
+				struct rte_mbuf *mb;
+
+				mb = rxq->rx_stage[rxq->rx_next_avail + i];
+				rte_pktmbuf_free_seg(mb);
+			}
+			rxq->rx_nb_avail = 0;
+		}
+	}
+
+	if (rxq->sw_sc_ring)
+		for (i = 0; i < rxq->nb_rx_desc; i++)
+			if (rxq->sw_sc_ring[i].fbuf) {
+				ngbe_free_sc_cluster(rxq->sw_sc_ring[i].fbuf);
+				rxq->sw_sc_ring[i].fbuf = NULL;
+			}
+}
+
+static void __rte_cold
+ngbe_rx_queue_release(struct ngbe_rx_queue *rxq)
+{
+	if (rxq != NULL) {
+		ngbe_rx_queue_release_mbufs(rxq);
+		rte_free(rxq->sw_ring);
+		rte_free(rxq->sw_sc_ring);
+		rte_free(rxq);
+	}
+}
+
+void __rte_cold
+ngbe_dev_rx_queue_release(void *rxq)
+{
+	ngbe_rx_queue_release(rxq);
+}
+
+/*
+ * Check if Rx Burst Bulk Alloc function can be used.
+ * Return
+ *        0: the preconditions are satisfied and the bulk allocation function
+ *           can be used.
+ *  -EINVAL: the preconditions are NOT satisfied and the default Rx burst
+ *           function must be used.
+ */
+static inline int __rte_cold
+check_rx_burst_bulk_alloc_preconditions(struct ngbe_rx_queue *rxq)
+{
+	int ret = 0;
+
+	/*
+	 * Make sure the following pre-conditions are satisfied:
+	 *   rxq->rx_free_thresh >= RTE_PMD_NGBE_RX_MAX_BURST
+	 *   rxq->rx_free_thresh < rxq->nb_rx_desc
+	 *   (rxq->nb_rx_desc % rxq->rx_free_thresh) == 0
+	 * Scattered packets are not supported.  This should be checked
+	 * outside of this function.
+	 */
+	if (!(rxq->rx_free_thresh >= RTE_PMD_NGBE_RX_MAX_BURST)) {
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: "
+			     "rxq->rx_free_thresh=%d, "
+			     "RTE_PMD_NGBE_RX_MAX_BURST=%d",
+			     rxq->rx_free_thresh, RTE_PMD_NGBE_RX_MAX_BURST);
+		ret = -EINVAL;
+	} else if (!(rxq->rx_free_thresh < rxq->nb_rx_desc)) {
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: "
+			     "rxq->rx_free_thresh=%d, "
+			     "rxq->nb_rx_desc=%d",
+			     rxq->rx_free_thresh, rxq->nb_rx_desc);
+		ret = -EINVAL;
+	} else if (!((rxq->nb_rx_desc % rxq->rx_free_thresh) == 0)) {
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: "
+			     "rxq->nb_rx_desc=%d, "
+			     "rxq->rx_free_thresh=%d",
+			     rxq->nb_rx_desc, rxq->rx_free_thresh);
+		ret = -EINVAL;
+	}
+
+	return ret;
+}
+
+/* Reset dynamic ngbe_rx_queue fields back to defaults */
+static void __rte_cold
+ngbe_reset_rx_queue(struct ngbe_adapter *adapter, struct ngbe_rx_queue *rxq)
+{
+	static const struct ngbe_rx_desc zeroed_desc = {
+						{{0}, {0} }, {{0}, {0} } };
+	unsigned int i;
+	uint16_t len = rxq->nb_rx_desc;
+
+	/*
+	 * By default, the Rx queue setup function allocates enough memory for
+	 * NGBE_RING_DESC_MAX.  The Rx Burst bulk allocation function requires
+	 * extra memory at the end of the descriptor ring to be zero'd out.
+	 */
+	if (adapter->rx_bulk_alloc_allowed)
+		/* zero out extra memory */
+		len += RTE_PMD_NGBE_RX_MAX_BURST;
+
+	/*
+	 * Zero out HW ring memory. Zero out extra memory at the end of
+	 * the H/W ring so look-ahead logic in Rx Burst bulk alloc function
+	 * reads extra memory as zeros.
+	 */
+	for (i = 0; i < len; i++)
+		rxq->rx_ring[i] = zeroed_desc;
+
+	/*
+	 * Initialize the extra software ring entries. Space for these
+	 * extra entries is always allocated.
+	 */
+	memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf));
+	for (i = rxq->nb_rx_desc; i < len; ++i)
+		rxq->sw_ring[i].mbuf = &rxq->fake_mbuf;
+
+	rxq->rx_nb_avail = 0;
+	rxq->rx_next_avail = 0;
+	rxq->rx_free_trigger = (uint16_t)(rxq->rx_free_thresh - 1);
+	rxq->rx_tail = 0;
+	rxq->nb_rx_hold = 0;
+	rxq->pkt_first_seg = NULL;
+	rxq->pkt_last_seg = NULL;
+}
+
 uint64_t
 ngbe_get_rx_queue_offloads(struct rte_eth_dev *dev __rte_unused)
 {
@@ -65,3 +230,143 @@ ngbe_get_rx_port_offloads(struct rte_eth_dev *dev)
 	return offloads;
 }
 
+int __rte_cold
+ngbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
+			 uint16_t queue_idx,
+			 uint16_t nb_desc,
+			 unsigned int socket_id,
+			 const struct rte_eth_rxconf *rx_conf,
+			 struct rte_mempool *mp)
+{
+	const struct rte_memzone *rz;
+	struct ngbe_rx_queue *rxq;
+	struct ngbe_hw     *hw;
+	uint16_t len;
+	struct ngbe_adapter *adapter = NGBE_DEV_ADAPTER(dev);
+	uint64_t offloads;
+
+	PMD_INIT_FUNC_TRACE();
+	hw = NGBE_DEV_HW(dev);
+
+	offloads = rx_conf->offloads | dev->data->dev_conf.rxmode.offloads;
+
+	/*
+	 * Validate number of receive descriptors.
+	 * It must not exceed hardware maximum, and must be multiple
+	 * of NGBE_ALIGN.
+	 */
+	if (nb_desc % NGBE_RXD_ALIGN != 0 ||
+			nb_desc > NGBE_RING_DESC_MAX ||
+			nb_desc < NGBE_RING_DESC_MIN) {
+		return -EINVAL;
+	}
+
+	/* Free memory prior to re-allocation if needed... */
+	if (dev->data->rx_queues[queue_idx] != NULL) {
+		ngbe_rx_queue_release(dev->data->rx_queues[queue_idx]);
+		dev->data->rx_queues[queue_idx] = NULL;
+	}
+
+	/* First allocate the rx queue data structure */
+	rxq = rte_zmalloc_socket("ethdev RX queue",
+				 sizeof(struct ngbe_rx_queue),
+				 RTE_CACHE_LINE_SIZE, socket_id);
+	if (rxq == NULL)
+		return -ENOMEM;
+	rxq->mb_pool = mp;
+	rxq->nb_rx_desc = nb_desc;
+	rxq->rx_free_thresh = rx_conf->rx_free_thresh;
+	rxq->queue_id = queue_idx;
+	rxq->reg_idx = queue_idx;
+	rxq->port_id = dev->data->port_id;
+	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		rxq->crc_len = RTE_ETHER_CRC_LEN;
+	else
+		rxq->crc_len = 0;
+	rxq->drop_en = rx_conf->rx_drop_en;
+	rxq->rx_deferred_start = rx_conf->rx_deferred_start;
+	rxq->offloads = offloads;
+	rxq->pkt_type_mask = NGBE_PTID_MASK;
+
+	/*
+	 * Allocate RX ring hardware descriptors. A memzone large enough to
+	 * handle the maximum ring size is allocated in order to allow for
+	 * resizing in later calls to the queue setup function.
+	 */
+	rz = rte_eth_dma_zone_reserve(dev, "rx_ring", queue_idx,
+				      RX_RING_SZ, NGBE_ALIGN, socket_id);
+	if (rz == NULL) {
+		ngbe_rx_queue_release(rxq);
+		return -ENOMEM;
+	}
+
+	/*
+	 * Zero init all the descriptors in the ring.
+	 */
+	memset(rz->addr, 0, RX_RING_SZ);
+
+	rxq->rdt_reg_addr = NGBE_REG_ADDR(hw, NGBE_RXWP(rxq->reg_idx));
+	rxq->rdh_reg_addr = NGBE_REG_ADDR(hw, NGBE_RXRP(rxq->reg_idx));
+
+	rxq->rx_ring_phys_addr = TMZ_PADDR(rz);
+	rxq->rx_ring = (struct ngbe_rx_desc *)TMZ_VADDR(rz);
+
+	/*
+	 * Certain constraints must be met in order to use the bulk buffer
+	 * allocation Rx burst function. If any Rx queue fails to meet them,
+	 * the feature is disabled for the whole port.
+	 */
+	if (check_rx_burst_bulk_alloc_preconditions(rxq)) {
+		PMD_INIT_LOG(DEBUG, "queue[%d] doesn't meet Rx Bulk Alloc "
+				    "preconditions - canceling the feature for "
+				    "the whole port[%d]",
+			     rxq->queue_id, rxq->port_id);
+		adapter->rx_bulk_alloc_allowed = false;
+	}
+
+	/*
+	 * Allocate software ring. Allow for space at the end of the
+	 * S/W ring to make sure look-ahead logic in bulk alloc Rx burst
+	 * function does not access an invalid memory region.
+	 */
+	len = nb_desc;
+	if (adapter->rx_bulk_alloc_allowed)
+		len += RTE_PMD_NGBE_RX_MAX_BURST;
+
+	rxq->sw_ring = rte_zmalloc_socket("rxq->sw_ring",
+					  sizeof(struct ngbe_rx_entry) * len,
+					  RTE_CACHE_LINE_SIZE, socket_id);
+	if (!rxq->sw_ring) {
+		ngbe_rx_queue_release(rxq);
+		return -ENOMEM;
+	}
+
+	/*
+	 * Always allocate even if it's not going to be needed in order to
+	 * simplify the code.
+	 *
+	 * This ring is used in Scattered Rx cases and Scattered Rx may
+	 * be requested in ngbe_dev_rx_init(), which is called later from
+	 * dev_start() flow.
+	 */
+	rxq->sw_sc_ring =
+		rte_zmalloc_socket("rxq->sw_sc_ring",
+				  sizeof(struct ngbe_scattered_rx_entry) * len,
+				  RTE_CACHE_LINE_SIZE, socket_id);
+	if (!rxq->sw_sc_ring) {
+		ngbe_rx_queue_release(rxq);
+		return -ENOMEM;
+	}
+
+	PMD_INIT_LOG(DEBUG, "sw_ring=%p sw_sc_ring=%p hw_ring=%p "
+			    "dma_addr=0x%" PRIx64,
+		     rxq->sw_ring, rxq->sw_sc_ring, rxq->rx_ring,
+		     rxq->rx_ring_phys_addr);
+
+	dev->data->rx_queues[queue_idx] = rxq;
+
+	ngbe_reset_rx_queue(adapter, rxq);
+
+	return 0;
+}
+
diff --git a/drivers/net/ngbe/ngbe_rxtx.h b/drivers/net/ngbe/ngbe_rxtx.h
index 39011ee286..e1676a53b4 100644
--- a/drivers/net/ngbe/ngbe_rxtx.h
+++ b/drivers/net/ngbe/ngbe_rxtx.h
@@ -6,7 +6,97 @@
 #ifndef _NGBE_RXTX_H_
 #define _NGBE_RXTX_H_
 
+/*****************************************************************************
+ * Receive Descriptor
+ *****************************************************************************/
+struct ngbe_rx_desc {
+	struct {
+		union {
+			__le32 dw0;
+			struct {
+				__le16 pkt;
+				__le16 hdr;
+			} lo;
+		};
+		union {
+			__le32 dw1;
+			struct {
+				__le16 ipid;
+				__le16 csum;
+			} hi;
+		};
+	} qw0; /* also as r.pkt_addr */
+	struct {
+		union {
+			__le32 dw2;
+			struct {
+				__le32 status;
+			} lo;
+		};
+		union {
+			__le32 dw3;
+			struct {
+				__le16 len;
+				__le16 tag;
+			} hi;
+		};
+	} qw1; /* also as r.hdr_addr */
+};
+
+#define RTE_PMD_NGBE_RX_MAX_BURST 32
+
+#define RX_RING_SZ ((NGBE_RING_DESC_MAX + RTE_PMD_NGBE_RX_MAX_BURST) * \
+		    sizeof(struct ngbe_rx_desc))
+
 #define NGBE_TX_MAX_SEG                    40
+#define NGBE_PTID_MASK                     0xFF
+
+/**
+ * Structure associated with each descriptor of the RX ring of an RX queue.
+ */
+struct ngbe_rx_entry {
+	struct rte_mbuf *mbuf; /**< mbuf associated with RX descriptor. */
+};
+
+struct ngbe_scattered_rx_entry {
+	struct rte_mbuf *fbuf; /**< First segment of the fragmented packet. */
+};
+
+/**
+ * Structure associated with each RX queue.
+ */
+struct ngbe_rx_queue {
+	struct rte_mempool  *mb_pool; /**< mbuf pool to populate RX ring. */
+	volatile struct ngbe_rx_desc *rx_ring; /**< RX ring virtual address. */
+	uint64_t            rx_ring_phys_addr; /**< RX ring DMA address. */
+	volatile uint32_t   *rdt_reg_addr; /**< RDT register address. */
+	volatile uint32_t   *rdh_reg_addr; /**< RDH register address. */
+	struct ngbe_rx_entry *sw_ring; /**< address of RX software ring. */
+	/**< address of scattered Rx software ring. */
+	struct ngbe_scattered_rx_entry *sw_sc_ring;
+	struct rte_mbuf *pkt_first_seg; /**< First segment of current packet. */
+	struct rte_mbuf *pkt_last_seg; /**< Last segment of current packet. */
+	uint16_t            nb_rx_desc; /**< number of RX descriptors. */
+	uint16_t            rx_tail;  /**< current value of RDT register. */
+	uint16_t            nb_rx_hold; /**< number of held free RX desc. */
+	uint16_t rx_nb_avail; /**< nr of staged pkts ready to ret to app */
+	uint16_t rx_next_avail; /**< idx of next staged pkt to ret to app */
+	uint16_t rx_free_trigger; /**< triggers rx buffer allocation */
+	uint16_t            rx_free_thresh; /**< max free RX desc to hold. */
+	uint16_t            queue_id; /**< RX queue index. */
+	uint16_t            reg_idx;  /**< RX queue register index. */
+	/**< Packet type mask for different NICs. */
+	uint16_t            pkt_type_mask;
+	uint16_t            port_id;  /**< Device port identifier. */
+	uint8_t             crc_len;  /**< 0 if CRC stripped, 4 otherwise. */
+	uint8_t             drop_en;  /**< If not 0, set SRRCTL.Drop_En. */
+	uint8_t             rx_deferred_start; /**< not in global dev start. */
+	uint64_t	    offloads; /**< Rx offloads with DEV_RX_OFFLOAD_* */
+	/** need to alloc dummy mbuf, for wraparound when scanning hw ring */
+	struct rte_mbuf fake_mbuf;
+	/** hold packets to return to application */
+	struct rte_mbuf *rx_stage[RTE_PMD_NGBE_RX_MAX_BURST * 2];
+};
 
 uint64_t ngbe_get_tx_port_offloads(struct rte_eth_dev *dev);
 uint64_t ngbe_get_rx_queue_offloads(struct rte_eth_dev *dev);
-- 
2.27.0




^ permalink raw reply	[flat|nested] 51+ messages in thread
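
The preconditions checked above translate directly into application-side
queue parameters. A minimal sketch (illustrative values; mempool creation
is elided, and setup_rx_queue() is a made-up helper):

#include <string.h>
#include <rte_ethdev.h>
#include <rte_mempool.h>

static int
setup_rx_queue(uint16_t port_id, struct rte_mempool *mb_pool)
{
	struct rte_eth_rxconf rxconf;
	const uint16_t nb_desc = 512;

	memset(&rxconf, 0, sizeof(rxconf));
	/* >= RTE_PMD_NGBE_RX_MAX_BURST (32), < nb_desc,
	 * and nb_desc % rx_free_thresh == 0 */
	rxconf.rx_free_thresh = 32;

	/* if the thresholds violated the preconditions, setup would still
	 * succeed, but bulk allocation would be disabled port-wide */
	return rte_eth_rx_queue_setup(port_id, 0, nb_desc,
				      rte_eth_dev_socket_id(port_id),
				      &rxconf, mb_pool);
}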

* [dpdk-dev] [PATCH v5 16/24] net/ngbe: add Tx queue setup and release
  2021-06-02  9:40 [dpdk-dev] [PATCH v5 00/24] net: ngbe PMD Jiawen Wu
                   ` (14 preceding siblings ...)
  2021-06-02  9:40 ` [dpdk-dev] [PATCH v5 15/24] net/ngbe: add Rx queue setup and release Jiawen Wu
@ 2021-06-02  9:41 ` Jiawen Wu
  2021-06-14 18:59   ` Andrew Rybchenko
  2021-06-02  9:41 ` [dpdk-dev] [PATCH v5 17/24] net/ngbe: add Rx and Tx init Jiawen Wu
                   ` (9 subsequent siblings)
  25 siblings, 1 reply; 51+ messages in thread
From: Jiawen Wu @ 2021-06-02  9:41 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Set up and release the device Tx queue.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/ngbe/ngbe_ethdev.c |   2 +
 drivers/net/ngbe/ngbe_ethdev.h |   6 +
 drivers/net/ngbe/ngbe_rxtx.c   | 212 +++++++++++++++++++++++++++++++++
 drivers/net/ngbe/ngbe_rxtx.h   |  91 ++++++++++++++
 4 files changed, 311 insertions(+)

diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 8eb41a7a2b..2f8ac48f33 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -663,6 +663,8 @@ static const struct eth_dev_ops ngbe_eth_dev_ops = {
 	.link_update                = ngbe_dev_link_update,
 	.rx_queue_setup             = ngbe_dev_rx_queue_setup,
 	.rx_queue_release           = ngbe_dev_rx_queue_release,
+	.tx_queue_setup             = ngbe_dev_tx_queue_setup,
+	.tx_queue_release           = ngbe_dev_tx_queue_release,
 };
 
 RTE_PMD_REGISTER_PCI(net_ngbe, rte_ngbe_pmd);
diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
index c324ca7e0f..f52d813a47 100644
--- a/drivers/net/ngbe/ngbe_ethdev.h
+++ b/drivers/net/ngbe/ngbe_ethdev.h
@@ -57,11 +57,17 @@ struct ngbe_adapter {
 
 void ngbe_dev_rx_queue_release(void *rxq);
 
+void ngbe_dev_tx_queue_release(void *txq);
+
 int  ngbe_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 		uint16_t nb_rx_desc, unsigned int socket_id,
 		const struct rte_eth_rxconf *rx_conf,
 		struct rte_mempool *mb_pool);
 
+int  ngbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
+		uint16_t nb_tx_desc, unsigned int socket_id,
+		const struct rte_eth_txconf *tx_conf);
+
 int
 ngbe_dev_link_update_share(struct rte_eth_dev *dev,
 		int wait_to_complete);
diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c
index 9992983bef..2d8db3245f 100644
--- a/drivers/net/ngbe/ngbe_rxtx.c
+++ b/drivers/net/ngbe/ngbe_rxtx.c
@@ -15,6 +15,99 @@
 #include "ngbe_ethdev.h"
 #include "ngbe_rxtx.h"
 
+#ifndef DEFAULT_TX_FREE_THRESH
+#define DEFAULT_TX_FREE_THRESH 32
+#endif
+
+/*********************************************************************
+ *
+ *  Queue management functions
+ *
+ **********************************************************************/
+
+static void __rte_cold
+ngbe_tx_queue_release_mbufs(struct ngbe_tx_queue *txq)
+{
+	unsigned int i;
+
+	if (txq->sw_ring != NULL) {
+		for (i = 0; i < txq->nb_tx_desc; i++) {
+			if (txq->sw_ring[i].mbuf != NULL) {
+				rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
+				txq->sw_ring[i].mbuf = NULL;
+			}
+		}
+	}
+}
+
+static void __rte_cold
+ngbe_tx_free_swring(struct ngbe_tx_queue *txq)
+{
+	if (txq != NULL &&
+	    txq->sw_ring != NULL)
+		rte_free(txq->sw_ring);
+}
+
+static void __rte_cold
+ngbe_tx_queue_release(struct ngbe_tx_queue *txq)
+{
+	if (txq != NULL && txq->ops != NULL) {
+		txq->ops->release_mbufs(txq);
+		txq->ops->free_swring(txq);
+		rte_free(txq);
+	}
+}
+
+void __rte_cold
+ngbe_dev_tx_queue_release(void *txq)
+{
+	ngbe_tx_queue_release(txq);
+}
+
+/* (Re)set dynamic ngbe_tx_queue fields to defaults */
+static void __rte_cold
+ngbe_reset_tx_queue(struct ngbe_tx_queue *txq)
+{
+	static const struct ngbe_tx_desc zeroed_desc = {0};
+	struct ngbe_tx_entry *txe = txq->sw_ring;
+	uint16_t prev, i;
+
+	/* Zero out HW ring memory */
+	for (i = 0; i < txq->nb_tx_desc; i++)
+		txq->tx_ring[i] = zeroed_desc;
+
+	/* Initialize SW ring entries */
+	prev = (uint16_t)(txq->nb_tx_desc - 1);
+	for (i = 0; i < txq->nb_tx_desc; i++) {
+		volatile struct ngbe_tx_desc *txd = &txq->tx_ring[i];
+
+		txd->dw3 = rte_cpu_to_le_32(NGBE_TXD_DD);
+		txe[i].mbuf = NULL;
+		txe[i].last_id = i;
+		txe[prev].next_id = i;
+		prev = i;
+	}
+
+	txq->tx_next_dd = (uint16_t)(txq->tx_free_thresh - 1);
+	txq->tx_tail = 0;
+
+	/*
+	 * Always allow 1 descriptor to be un-allocated to avoid
+	 * a H/W race condition
+	 */
+	txq->last_desc_cleaned = (uint16_t)(txq->nb_tx_desc - 1);
+	txq->nb_tx_free = (uint16_t)(txq->nb_tx_desc - 1);
+	txq->ctx_curr = 0;
+	memset((void *)&txq->ctx_cache, 0,
+		NGBE_CTX_NUM * sizeof(struct ngbe_ctx_info));
+}
+
+static const struct ngbe_txq_ops def_txq_ops = {
+	.release_mbufs = ngbe_tx_queue_release_mbufs,
+	.free_swring = ngbe_tx_free_swring,
+	.reset = ngbe_reset_tx_queue,
+};
+
 uint64_t
 ngbe_get_tx_port_offloads(struct rte_eth_dev *dev)
 {
@@ -42,6 +135,125 @@ ngbe_get_tx_port_offloads(struct rte_eth_dev *dev)
 	return tx_offload_capa;
 }
 
+int __rte_cold
+ngbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
+			 uint16_t queue_idx,
+			 uint16_t nb_desc,
+			 unsigned int socket_id,
+			 const struct rte_eth_txconf *tx_conf)
+{
+	const struct rte_memzone *tz;
+	struct ngbe_tx_queue *txq;
+	struct ngbe_hw     *hw;
+	uint16_t tx_free_thresh;
+	uint64_t offloads;
+
+	PMD_INIT_FUNC_TRACE();
+	hw = NGBE_DEV_HW(dev);
+
+	offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
+
+	/*
+	 * Validate number of transmit descriptors.
+	 * It must not exceed hardware maximum, and must be multiple
+	 * of NGBE_ALIGN.
+	 */
+	if (nb_desc % NGBE_TXD_ALIGN != 0 ||
+	    nb_desc > NGBE_RING_DESC_MAX ||
+	    nb_desc < NGBE_RING_DESC_MIN) {
+		return -EINVAL;
+	}
+
+	/*
+	 * The TX descriptor ring will be cleaned after txq->tx_free_thresh
+	 * descriptors are used or if the number of descriptors required
+	 * to transmit a packet is greater than the number of free TX
+	 * descriptors.
+	 * One descriptor in the TX ring is used as a sentinel to avoid a
+	 * H/W race condition, hence the maximum threshold constraints.
+	 * When set to zero use default values.
+	 */
+	tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh) ?
+			tx_conf->tx_free_thresh : DEFAULT_TX_FREE_THRESH);
+	if (tx_free_thresh >= (nb_desc - 3)) {
+		PMD_INIT_LOG(ERR, "tx_free_thresh must be less than the number of "
+			     "TX descriptors minus 3. (tx_free_thresh=%u "
+			     "port=%d queue=%d)",
+			     (unsigned int)tx_free_thresh,
+			     (int)dev->data->port_id, (int)queue_idx);
+		return -EINVAL;
+	}
+
+	if ((nb_desc % tx_free_thresh) != 0) {
+		PMD_INIT_LOG(ERR, "tx_free_thresh must be a divisor of the "
+			     "number of TX descriptors. (tx_free_thresh=%u "
+			     "port=%d queue=%d)", (unsigned int)tx_free_thresh,
+			     (int)dev->data->port_id, (int)queue_idx);
+		return -EINVAL;
+	}
+
+	/* Free memory prior to re-allocation if needed... */
+	if (dev->data->tx_queues[queue_idx] != NULL) {
+		ngbe_tx_queue_release(dev->data->tx_queues[queue_idx]);
+		dev->data->tx_queues[queue_idx] = NULL;
+	}
+
+	/* First allocate the tx queue data structure */
+	txq = rte_zmalloc_socket("ethdev TX queue",
+				 sizeof(struct ngbe_tx_queue),
+				 RTE_CACHE_LINE_SIZE, socket_id);
+	if (txq == NULL)
+		return -ENOMEM;
+
+	/*
+	 * Allocate TX ring hardware descriptors. A memzone large enough to
+	 * handle the maximum ring size is allocated in order to allow for
+	 * resizing in later calls to the queue setup function.
+	 */
+	tz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx,
+			sizeof(struct ngbe_tx_desc) * NGBE_RING_DESC_MAX,
+			NGBE_ALIGN, socket_id);
+	if (tz == NULL) {
+		ngbe_tx_queue_release(txq);
+		return -ENOMEM;
+	}
+
+	txq->nb_tx_desc = nb_desc;
+	txq->tx_free_thresh = tx_free_thresh;
+	txq->pthresh = tx_conf->tx_thresh.pthresh;
+	txq->hthresh = tx_conf->tx_thresh.hthresh;
+	txq->wthresh = tx_conf->tx_thresh.wthresh;
+	txq->queue_id = queue_idx;
+	txq->reg_idx = queue_idx;
+	txq->port_id = dev->data->port_id;
+	txq->offloads = offloads;
+	txq->ops = &def_txq_ops;
+	txq->tx_deferred_start = tx_conf->tx_deferred_start;
+
+	txq->tdt_reg_addr = NGBE_REG_ADDR(hw, NGBE_TXWP(txq->reg_idx));
+	txq->tdc_reg_addr = NGBE_REG_ADDR(hw, NGBE_TXCFG(txq->reg_idx));
+
+	txq->tx_ring_phys_addr = TMZ_PADDR(tz);
+	txq->tx_ring = (struct ngbe_tx_desc *)TMZ_VADDR(tz);
+
+	/* Allocate software ring */
+	txq->sw_ring = rte_zmalloc_socket("txq->sw_ring",
+				sizeof(struct ngbe_tx_entry) * nb_desc,
+				RTE_CACHE_LINE_SIZE, socket_id);
+	if (txq->sw_ring == NULL) {
+		ngbe_tx_queue_release(txq);
+		return -ENOMEM;
+	}
+	PMD_INIT_LOG(DEBUG, "sw_ring=%p hw_ring=%p dma_addr=0x%" PRIx64,
+		     txq->sw_ring, txq->tx_ring, txq->tx_ring_phys_addr);
+
+	txq->ops->reset(txq);
+
+	dev->data->tx_queues[queue_idx] = txq;
+
+	return 0;
+}
+
 /**
  * ngbe_free_sc_cluster - free the not-yet-completed scattered cluster
  *
diff --git a/drivers/net/ngbe/ngbe_rxtx.h b/drivers/net/ngbe/ngbe_rxtx.h
index e1676a53b4..2db5cc3f2a 100644
--- a/drivers/net/ngbe/ngbe_rxtx.h
+++ b/drivers/net/ngbe/ngbe_rxtx.h
@@ -43,6 +43,31 @@ struct ngbe_rx_desc {
 	} qw1; /* also as r.hdr_addr */
 };
 
+/*****************************************************************************
+ * Transmit Descriptor
+ *****************************************************************************/
+/**
+ * Transmit Context Descriptor (NGBE_TXD_TYP=CTXT)
+ **/
+struct ngbe_tx_ctx_desc {
+	__le32 dw0; /* w.vlan_macip_lens  */
+	__le32 dw1; /* w.seqnum_seed      */
+	__le32 dw2; /* w.type_tucmd_mlhl  */
+	__le32 dw3; /* w.mss_l4len_idx    */
+};
+
+/* @ngbe_tx_ctx_desc.dw3 */
+#define NGBE_TXD_DD               MS(0, 0x1) /* descriptor done */
+
+/**
+ * Transmit Data Descriptor (NGBE_TXD_TYP=DATA)
+ **/
+struct ngbe_tx_desc {
+	__le64 qw0; /* r.buffer_addr ,  w.reserved    */
+	__le32 dw2; /* r.cmd_type_len,  w.nxtseq_seed */
+	__le32 dw3; /* r.olinfo_status, w.status      */
+};
+
 #define RTE_PMD_NGBE_RX_MAX_BURST 32
 
 #define RX_RING_SZ ((NGBE_RING_DESC_MAX + RTE_PMD_NGBE_RX_MAX_BURST) * \
@@ -62,6 +87,15 @@ struct ngbe_scattered_rx_entry {
 	struct rte_mbuf *fbuf; /**< First segment of the fragmented packet. */
 };
 
+/**
+ * Structure associated with each descriptor of the TX ring of a TX queue.
+ */
+struct ngbe_tx_entry {
+	struct rte_mbuf *mbuf; /**< mbuf associated with TX desc, if any. */
+	uint16_t next_id; /**< Index of next descriptor in ring. */
+	uint16_t last_id; /**< Index of last scattered descriptor. */
+};
+
 /**
  * Structure associated with each RX queue.
  */
@@ -98,6 +132,63 @@ struct ngbe_rx_queue {
 	struct rte_mbuf *rx_stage[RTE_PMD_NGBE_RX_MAX_BURST * 2];
 };
 
+/**
+ * NGBE CTX Constants
+ */
+enum ngbe_ctx_num {
+	NGBE_CTX_0    = 0, /**< CTX0 */
+	NGBE_CTX_1    = 1, /**< CTX1  */
+	NGBE_CTX_NUM  = 2, /**< CTX NUMBER  */
+};
+
+/**
+ * Structure to check if new context need be built
+ */
+struct ngbe_ctx_info {
+	uint64_t flags;           /**< ol_flags for context build. */
+};
+
+/**
+ * Structure associated with each TX queue.
+ */
+struct ngbe_tx_queue {
+	/** TX ring virtual address. */
+	volatile struct ngbe_tx_desc *tx_ring;
+	uint64_t            tx_ring_phys_addr; /**< TX ring DMA address. */
+	struct ngbe_tx_entry *sw_ring; /**< address of SW ring for scalar PMD.*/
+	volatile uint32_t   *tdt_reg_addr; /**< Address of TDT register. */
+	volatile uint32_t   *tdc_reg_addr; /**< Address of TDC register. */
+	uint16_t            nb_tx_desc;    /**< number of TX descriptors. */
+	uint16_t            tx_tail;       /**< current value of TDT reg. */
+	/**< Start freeing TX buffers if there are fewer free descriptors than
+	 *   this value.
+	 */
+	uint16_t            tx_free_thresh;
+	/** Index to last TX descriptor to have been cleaned. */
+	uint16_t            last_desc_cleaned;
+	/** Total number of TX descriptors ready to be allocated. */
+	uint16_t            nb_tx_free;
+	uint16_t            tx_next_dd;    /**< next desc to scan for DD bit */
+	uint16_t            queue_id;      /**< TX queue index. */
+	uint16_t            reg_idx;       /**< TX queue register index. */
+	uint16_t            port_id;       /**< Device port identifier. */
+	uint8_t             pthresh;       /**< Prefetch threshold register. */
+	uint8_t             hthresh;       /**< Host threshold register. */
+	uint8_t             wthresh;       /**< Write-back threshold reg. */
+	uint64_t            offloads; /* Tx offload flags of DEV_TX_OFFLOAD_* */
+	uint32_t            ctx_curr;      /**< Hardware context states. */
+	/** Hardware context0 history. */
+	struct ngbe_ctx_info ctx_cache[NGBE_CTX_NUM];
+	const struct ngbe_txq_ops *ops;       /**< txq ops */
+	uint8_t             tx_deferred_start; /**< Queue not started by device start. */
+};
+
+struct ngbe_txq_ops {
+	void (*release_mbufs)(struct ngbe_tx_queue *txq);
+	void (*free_swring)(struct ngbe_tx_queue *txq);
+	void (*reset)(struct ngbe_tx_queue *txq);
+};
+
 uint64_t ngbe_get_tx_port_offloads(struct rte_eth_dev *dev);
 uint64_t ngbe_get_rx_queue_offloads(struct rte_eth_dev *dev);
 uint64_t ngbe_get_rx_port_offloads(struct rte_eth_dev *dev);
-- 
2.27.0




^ permalink raw reply	[flat|nested] 51+ messages in thread

* [dpdk-dev] [PATCH v5 17/24] net/ngbe: add Rx and Tx init
  2021-06-02  9:40 [dpdk-dev] [PATCH v5 00/24] net: ngbe PMD Jiawen Wu
                   ` (15 preceding siblings ...)
  2021-06-02  9:41 ` [dpdk-dev] [PATCH v5 16/24] net/ngbe: add Tx " Jiawen Wu
@ 2021-06-02  9:41 ` Jiawen Wu
  2021-06-14 19:01   ` Andrew Rybchenko
  2021-06-02  9:41 ` [dpdk-dev] [PATCH v5 18/24] net/ngbe: add packet type Jiawen Wu
                   ` (8 subsequent siblings)
  25 siblings, 1 reply; 51+ messages in thread
From: Jiawen Wu @ 2021-06-02  9:41 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Initialize the receive unit and transmit unit.
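
For reference, a minimal sketch of the Rx buffer sizing logic added in
ngbe_dev_rx_init() below (an illustration under assumed names, not part
of the patch; mb_pool, max_rx_pkt_len and scattered_rx are hypothetical
stand-ins for the queue and device fields):

	/* The usable mbuf data room is rounded down to 1 KB granularity
	 * before being programmed into the PKTLEN field of the RXCFG
	 * register; valid values range from 1 KB to 16 KB.
	 */
	uint16_t buf_size = (uint16_t)(rte_pktmbuf_data_room_size(mb_pool) -
				       RTE_PKTMBUF_HEADROOM);
	buf_size = (uint16_t)(buf_size & ~((1u << 10) - 1)); /* ROUND_DOWN */

	/* Scattered Rx is needed when the maximum frame, plus room for
	 * two VLAN tags, cannot fit into a single buffer.
	 */
	if (max_rx_pkt_len + 2 * NGBE_VLAN_TAG_SIZE > buf_size)
		scattered_rx = 1;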

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 doc/guides/nics/features/ngbe.ini |   6 +
 doc/guides/nics/ngbe.rst          |   2 +
 drivers/net/ngbe/ngbe_ethdev.h    |   5 +
 drivers/net/ngbe/ngbe_rxtx.c      | 187 ++++++++++++++++++++++++++++++
 4 files changed, 200 insertions(+)

diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini
index 291a542a42..abde1e2a67 100644
--- a/doc/guides/nics/features/ngbe.ini
+++ b/doc/guides/nics/features/ngbe.ini
@@ -7,6 +7,12 @@
 Speed capabilities   = Y
 Link status          = Y
 Link status event    = Y
+Jumbo frame          = Y
+Scattered Rx         = Y
+CRC offload          = P
+VLAN offload         = P
+L3 checksum offload  = P
+L4 checksum offload  = P
 Multiprocess aware   = Y
 Linux                = Y
 ARMv8                = Y
diff --git a/doc/guides/nics/ngbe.rst b/doc/guides/nics/ngbe.rst
index de2ef65664..e56baf26b4 100644
--- a/doc/guides/nics/ngbe.rst
+++ b/doc/guides/nics/ngbe.rst
@@ -10,6 +10,8 @@ for Wangxun 1 Gigabit Ethernet NICs.
 Features
 --------
 
+- Checksum offload
+- Jumbo frames
 - Link state information
 
 Prerequisites
diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
index f52d813a47..a9482f3001 100644
--- a/drivers/net/ngbe/ngbe_ethdev.h
+++ b/drivers/net/ngbe/ngbe_ethdev.h
@@ -13,6 +13,7 @@
 #define NGBE_FLAG_MACSEC           (uint32_t)(1 << 3)
 #define NGBE_FLAG_NEED_LINK_CONFIG (uint32_t)(1 << 4)
 
+#define NGBE_VLAN_TAG_SIZE 4
 #define NGBE_HKEY_MAX_INDEX 10
 
 #define NGBE_RSS_OFFLOAD_ALL ( \
@@ -68,6 +69,10 @@ int  ngbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
 		uint16_t nb_tx_desc, unsigned int socket_id,
 		const struct rte_eth_txconf *tx_conf);
 
+int ngbe_dev_rx_init(struct rte_eth_dev *dev);
+
+void ngbe_dev_tx_init(struct rte_eth_dev *dev);
+
 int
 ngbe_dev_link_update_share(struct rte_eth_dev *dev,
 		int wait_to_complete);
diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c
index 2d8db3245f..68d7e651af 100644
--- a/drivers/net/ngbe/ngbe_rxtx.c
+++ b/drivers/net/ngbe/ngbe_rxtx.c
@@ -582,3 +582,190 @@ ngbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
 	return 0;
 }
 
+/*
+ * Initializes Receive Unit.
+ */
+int __rte_cold
+ngbe_dev_rx_init(struct rte_eth_dev *dev)
+{
+	struct ngbe_hw *hw;
+	struct ngbe_rx_queue *rxq;
+	uint64_t bus_addr;
+	uint32_t fctrl;
+	uint32_t hlreg0;
+	uint32_t srrctl;
+	uint32_t rdrxctl;
+	uint32_t rxcsum;
+	uint16_t buf_size;
+	uint16_t i;
+	struct rte_eth_rxmode *rx_conf = &dev->data->dev_conf.rxmode;
+
+	PMD_INIT_FUNC_TRACE();
+	hw = NGBE_DEV_HW(dev);
+
+	/*
+	 * Make sure receives are disabled while setting
+	 * up the RX context (registers, descriptor rings, etc.).
+	 */
+	wr32m(hw, NGBE_MACRXCFG, NGBE_MACRXCFG_ENA, 0);
+	wr32m(hw, NGBE_PBRXCTL, NGBE_PBRXCTL_ENA, 0);
+
+	/* Enable receipt of broadcast frames */
+	fctrl = rd32(hw, NGBE_PSRCTL);
+	fctrl |= NGBE_PSRCTL_BCA;
+	wr32(hw, NGBE_PSRCTL, fctrl);
+
+	/*
+	 * Configure CRC stripping, if any.
+	 */
+	hlreg0 = rd32(hw, NGBE_SECRXCTL);
+	if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		hlreg0 &= ~NGBE_SECRXCTL_CRCSTRIP;
+	else
+		hlreg0 |= NGBE_SECRXCTL_CRCSTRIP;
+	hlreg0 &= ~NGBE_SECRXCTL_XDSA;
+	wr32(hw, NGBE_SECRXCTL, hlreg0);
+
+	/*
+	 * Configure jumbo frame support, if any.
+	 */
+	if (rx_conf->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+		wr32m(hw, NGBE_FRMSZ, NGBE_FRMSZ_MAX_MASK,
+			NGBE_FRMSZ_MAX(rx_conf->max_rx_pkt_len));
+	} else {
+		wr32m(hw, NGBE_FRMSZ, NGBE_FRMSZ_MAX_MASK,
+			NGBE_FRMSZ_MAX(NGBE_FRAME_SIZE_DFT));
+	}
+
+	/*
+	 * If loopback mode is configured, set LPBK bit.
+	 */
+	hlreg0 = rd32(hw, NGBE_PSRCTL);
+	if (hw->is_pf && dev->data->dev_conf.lpbk_mode)
+		hlreg0 |= NGBE_PSRCTL_LBENA;
+	else
+		hlreg0 &= ~NGBE_PSRCTL_LBENA;
+
+	wr32(hw, NGBE_PSRCTL, hlreg0);
+
+	/*
+	 * Assume no header split and no VLAN strip support
+	 * on any Rx queue first .
+	 */
+	rx_conf->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+
+	/* Setup RX queues */
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+
+		/*
+		 * Reset crc_len in case it was changed after queue setup by a
+		 * call to configure.
+		 */
+		if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+			rxq->crc_len = RTE_ETHER_CRC_LEN;
+		else
+			rxq->crc_len = 0;
+
+		/* Setup the Base and Length of the Rx Descriptor Rings */
+		bus_addr = rxq->rx_ring_phys_addr;
+		wr32(hw, NGBE_RXBAL(rxq->reg_idx),
+				(uint32_t)(bus_addr & BIT_MASK32));
+		wr32(hw, NGBE_RXBAH(rxq->reg_idx),
+				(uint32_t)(bus_addr >> 32));
+		wr32(hw, NGBE_RXRP(rxq->reg_idx), 0);
+		wr32(hw, NGBE_RXWP(rxq->reg_idx), 0);
+
+		srrctl = NGBE_RXCFG_RNGLEN(rxq->nb_rx_desc);
+
+		/* Set if packets are dropped when no descriptors available */
+		if (rxq->drop_en)
+			srrctl |= NGBE_RXCFG_DROP;
+
+		/*
+		 * Configure the RX buffer size in the PKTLEN field of
+		 * the RXCFG register of the queue.
+		 * The value is in 1 KB resolution. Valid values can be from
+		 * 1 KB to 16 KB.
+		 */
+		buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mb_pool) -
+			RTE_PKTMBUF_HEADROOM);
+		buf_size = ROUND_DOWN(buf_size, 0x1 << 10);
+		srrctl |= NGBE_RXCFG_PKTLEN(buf_size);
+
+		wr32(hw, NGBE_RXCFG(rxq->reg_idx), srrctl);
+
+		/* Add dual VLAN tag length when checking if scattered Rx is needed */
+		if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
+					    2 * NGBE_VLAN_TAG_SIZE > buf_size)
+			dev->data->scattered_rx = 1;
+		if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+			rx_conf->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+	}
+
+	if (rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER)
+		dev->data->scattered_rx = 1;
+
+	/*
+	 * Setup the Checksum Register.
+	 * Disable Full-Packet Checksum which is mutually exclusive with RSS.
+	 * Enable IP/L4 checksum computation by hardware if requested to do so.
+	 */
+	rxcsum = rd32(hw, NGBE_PSRCTL);
+	rxcsum |= NGBE_PSRCTL_PCSD;
+	if (rx_conf->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+		rxcsum |= NGBE_PSRCTL_L4CSUM;
+	else
+		rxcsum &= ~NGBE_PSRCTL_L4CSUM;
+
+	wr32(hw, NGBE_PSRCTL, rxcsum);
+
+	if (hw->is_pf) {
+		rdrxctl = rd32(hw, NGBE_SECRXCTL);
+		if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+			rdrxctl &= ~NGBE_SECRXCTL_CRCSTRIP;
+		else
+			rdrxctl |= NGBE_SECRXCTL_CRCSTRIP;
+		wr32(hw, NGBE_SECRXCTL, rdrxctl);
+	}
+
+	return 0;
+}
+
+/*
+ * Initializes Transmit Unit.
+ */
+void __rte_cold
+ngbe_dev_tx_init(struct rte_eth_dev *dev)
+{
+	struct ngbe_hw     *hw;
+	struct ngbe_tx_queue *txq;
+	uint64_t bus_addr;
+	uint16_t i;
+
+	PMD_INIT_FUNC_TRACE();
+	hw = NGBE_DEV_HW(dev);
+
+	/* Enable TX CRC (checksum offload requirement) and hw padding
+	 * (TSO requirement)
+	 */
+	wr32m(hw, NGBE_SECTXCTL, NGBE_SECTXCTL_ODSA, NGBE_SECTXCTL_ODSA);
+	wr32m(hw, NGBE_SECTXCTL, NGBE_SECTXCTL_XDSA, 0);
+
+	/* Setup the Base and Length of the Tx Descriptor Rings */
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+
+		bus_addr = txq->tx_ring_phys_addr;
+		wr32(hw, NGBE_TXBAL(txq->reg_idx),
+				(uint32_t)(bus_addr & BIT_MASK32));
+		wr32(hw, NGBE_TXBAH(txq->reg_idx),
+				(uint32_t)(bus_addr >> 32));
+		wr32m(hw, NGBE_TXCFG(txq->reg_idx), NGBE_TXCFG_BUFLEN_MASK,
+			NGBE_TXCFG_BUFLEN(txq->nb_tx_desc));
+		/* Setup the HW Tx Head and TX Tail descriptor pointers */
+		wr32(hw, NGBE_TXRP(txq->reg_idx), 0);
+		wr32(hw, NGBE_TXWP(txq->reg_idx), 0);
+	}
+}
-- 
2.27.0




^ permalink raw reply	[flat|nested] 51+ messages in thread

* [dpdk-dev] [PATCH v5 18/24] net/ngbe: add packet type
  2021-06-02  9:40 [dpdk-dev] [PATCH v5 00/24] net: ngbe PMD Jiawen Wu
                   ` (16 preceding siblings ...)
  2021-06-02  9:41 ` [dpdk-dev] [PATCH v5 17/24] net/ngbe: add Rx and Tx init Jiawen Wu
@ 2021-06-02  9:41 ` Jiawen Wu
  2021-06-14 19:06   ` Andrew Rybchenko
  2021-06-02  9:41 ` [dpdk-dev] [PATCH v5 19/24] net/ngbe: add simple Rx and Tx flow Jiawen Wu
                   ` (7 subsequent siblings)
  25 siblings, 1 reply; 51+ messages in thread
From: Jiawen Wu @ 2021-06-02  9:41 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Add packet type macro definitions and convert ptype to ptid.
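
As an informal round-trip illustration of the conversion helpers added
below (a sketch, not part of the patch):

	/* Encode an mbuf packet type into the hardware 8-bit ptid,
	 * then decode it back through the lookup table.
	 */
	u32 ptype = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP;
	u8 ptid = ngbe_encode_ptype(ptype);
	/* ptid == NGBE_PTID_PKT_IP | NGBE_PTID_TYP_TCP == 0x24 */
	u32 decoded = ngbe_decode_ptype(ptid);
	/* decoded == ptype; ptids in the Ethertype filter range
	 * (0x18-0x1F) decode to RTE_PTYPE_UNKNOWN instead.
	 */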

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 doc/guides/nics/features/ngbe.ini |   1 +
 doc/guides/nics/ngbe.rst          |   1 +
 drivers/net/ngbe/meson.build      |   1 +
 drivers/net/ngbe/ngbe_ethdev.c    |   8 +
 drivers/net/ngbe/ngbe_ethdev.h    |   4 +
 drivers/net/ngbe/ngbe_ptypes.c    | 640 ++++++++++++++++++++++++++++++
 drivers/net/ngbe/ngbe_ptypes.h    | 351 ++++++++++++++++
 drivers/net/ngbe/ngbe_rxtx.h      |   1 -
 8 files changed, 1006 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/ngbe/ngbe_ptypes.c
 create mode 100644 drivers/net/ngbe/ngbe_ptypes.h

diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini
index abde1e2a67..e24d8d0b55 100644
--- a/doc/guides/nics/features/ngbe.ini
+++ b/doc/guides/nics/features/ngbe.ini
@@ -13,6 +13,7 @@ CRC offload          = P
 VLAN offload         = P
 L3 checksum offload  = P
 L4 checksum offload  = P
+Packet type parsing  = Y
 Multiprocess aware   = Y
 Linux                = Y
 ARMv8                = Y
diff --git a/doc/guides/nics/ngbe.rst b/doc/guides/nics/ngbe.rst
index e56baf26b4..04fa3e90a8 100644
--- a/doc/guides/nics/ngbe.rst
+++ b/doc/guides/nics/ngbe.rst
@@ -10,6 +10,7 @@ for Wangxun 1 Gigabit Ethernet NICs.
 Features
 --------
 
+- Packet type information
 - Checksum offload
 - Jumbo frames
 - Link state information
diff --git a/drivers/net/ngbe/meson.build b/drivers/net/ngbe/meson.build
index 9e75b82f1c..fd571399b3 100644
--- a/drivers/net/ngbe/meson.build
+++ b/drivers/net/ngbe/meson.build
@@ -12,6 +12,7 @@ objs = [base_objs]
 
 sources = files(
 	'ngbe_ethdev.c',
+	'ngbe_ptypes.c',
 	'ngbe_rxtx.c',
 )
 
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 2f8ac48f33..672db88133 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -354,6 +354,13 @@ ngbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	return 0;
 }
 
+const uint32_t *
+ngbe_dev_supported_ptypes_get(struct rte_eth_dev *dev)
+{
+	RTE_SET_USED(dev);
+	return ngbe_get_supported_ptypes();
+}
+
 /* return 0 means link status changed, -1 means not changed */
 int
 ngbe_dev_link_update_share(struct rte_eth_dev *dev,
@@ -661,6 +668,7 @@ static const struct eth_dev_ops ngbe_eth_dev_ops = {
 	.dev_configure              = ngbe_dev_configure,
 	.dev_infos_get              = ngbe_dev_info_get,
 	.link_update                = ngbe_dev_link_update,
+	.dev_supported_ptypes_get   = ngbe_dev_supported_ptypes_get,
 	.rx_queue_setup             = ngbe_dev_rx_queue_setup,
 	.rx_queue_release           = ngbe_dev_rx_queue_release,
 	.tx_queue_setup             = ngbe_dev_tx_queue_setup,
diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
index a9482f3001..6881351252 100644
--- a/drivers/net/ngbe/ngbe_ethdev.h
+++ b/drivers/net/ngbe/ngbe_ethdev.h
@@ -6,6 +6,8 @@
 #ifndef _NGBE_ETHDEV_H_
 #define _NGBE_ETHDEV_H_
 
+#include "ngbe_ptypes.h"
+
 /* need update link, bit flag */
 #define NGBE_FLAG_NEED_LINK_UPDATE (uint32_t)(1 << 0)
 #define NGBE_FLAG_MAILBOX          (uint32_t)(1 << 1)
@@ -94,4 +96,6 @@ ngbe_dev_link_update_share(struct rte_eth_dev *dev,
 #define NGBE_DEFAULT_TX_HTHRESH      0
 #define NGBE_DEFAULT_TX_WTHRESH      0
 
+const uint32_t *ngbe_dev_supported_ptypes_get(struct rte_eth_dev *dev);
+
 #endif /* _NGBE_ETHDEV_H_ */
diff --git a/drivers/net/ngbe/ngbe_ptypes.c b/drivers/net/ngbe/ngbe_ptypes.c
new file mode 100644
index 0000000000..4b6cd374f6
--- /dev/null
+++ b/drivers/net/ngbe/ngbe_ptypes.c
@@ -0,0 +1,640 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+ */
+
+#include <rte_mbuf.h>
+#include <rte_memory.h>
+
+#include "base/ngbe_type.h"
+#include "ngbe_ptypes.h"
+
+/* The ngbe_ptype_lookup is used to convert from the 8-bit ptid in the
+ * hardware to a bit-field that can be used by SW to more easily determine the
+ * packet type.
+ *
+ * Macros are used to shorten the table lines and make this table human
+ * readable.
+ *
+ * We store the PTYPE in the top byte of the bit field - this is just so that
+ * we can check that the table doesn't have a row missing, as the index into
+ * the table should be the PTYPE.
+ */
+#define TPTE(ptid, l2, l3, l4, tun, el2, el3, el4) \
+	[ptid] = (RTE_PTYPE_L2_##l2 | \
+		RTE_PTYPE_L3_##l3 | \
+		RTE_PTYPE_L4_##l4 | \
+		RTE_PTYPE_TUNNEL_##tun | \
+		RTE_PTYPE_INNER_L2_##el2 | \
+		RTE_PTYPE_INNER_L3_##el3 | \
+		RTE_PTYPE_INNER_L4_##el4)
+
+#define RTE_PTYPE_L2_NONE               0
+#define RTE_PTYPE_L3_NONE               0
+#define RTE_PTYPE_L4_NONE               0
+#define RTE_PTYPE_TUNNEL_NONE           0
+#define RTE_PTYPE_INNER_L2_NONE         0
+#define RTE_PTYPE_INNER_L3_NONE         0
+#define RTE_PTYPE_INNER_L4_NONE         0
+
+static u32 ngbe_ptype_lookup[NGBE_PTID_MAX] __rte_cache_aligned = {
+	/* L2:0-3 L3:4-7 L4:8-11 TUN:12-15 EL2:16-19 EL3:20-23 EL4:24-27 */
+	/* L2: ETH */
+	TPTE(0x10, ETHER,          NONE, NONE, NONE, NONE, NONE, NONE),
+	TPTE(0x11, ETHER,          NONE, NONE, NONE, NONE, NONE, NONE),
+	TPTE(0x12, ETHER_TIMESYNC, NONE, NONE, NONE, NONE, NONE, NONE),
+	TPTE(0x13, ETHER_FIP,      NONE, NONE, NONE, NONE, NONE, NONE),
+	TPTE(0x14, ETHER_LLDP,     NONE, NONE, NONE, NONE, NONE, NONE),
+	TPTE(0x15, ETHER_CNM,      NONE, NONE, NONE, NONE, NONE, NONE),
+	TPTE(0x16, ETHER_EAPOL,    NONE, NONE, NONE, NONE, NONE, NONE),
+	TPTE(0x17, ETHER_ARP,      NONE, NONE, NONE, NONE, NONE, NONE),
+	/* L2: Ethertype Filter */
+	TPTE(0x18, ETHER_FILTER,   NONE, NONE, NONE, NONE, NONE, NONE),
+	TPTE(0x19, ETHER_FILTER,   NONE, NONE, NONE, NONE, NONE, NONE),
+	TPTE(0x1A, ETHER_FILTER,   NONE, NONE, NONE, NONE, NONE, NONE),
+	TPTE(0x1B, ETHER_FILTER,   NONE, NONE, NONE, NONE, NONE, NONE),
+	TPTE(0x1C, ETHER_FILTER,   NONE, NONE, NONE, NONE, NONE, NONE),
+	TPTE(0x1D, ETHER_FILTER,   NONE, NONE, NONE, NONE, NONE, NONE),
+	TPTE(0x1E, ETHER_FILTER,   NONE, NONE, NONE, NONE, NONE, NONE),
+	TPTE(0x1F, ETHER_FILTER,   NONE, NONE, NONE, NONE, NONE, NONE),
+	/* L3: IP */
+	TPTE(0x20, ETHER, IPV4, NONFRAG, NONE, NONE, NONE, NONE),
+	TPTE(0x21, ETHER, IPV4, FRAG,    NONE, NONE, NONE, NONE),
+	TPTE(0x22, ETHER, IPV4, NONFRAG, NONE, NONE, NONE, NONE),
+	TPTE(0x23, ETHER, IPV4, UDP,     NONE, NONE, NONE, NONE),
+	TPTE(0x24, ETHER, IPV4, TCP,     NONE, NONE, NONE, NONE),
+	TPTE(0x25, ETHER, IPV4, SCTP,    NONE, NONE, NONE, NONE),
+	TPTE(0x29, ETHER, IPV6, FRAG,    NONE, NONE, NONE, NONE),
+	TPTE(0x2A, ETHER, IPV6, NONFRAG, NONE, NONE, NONE, NONE),
+	TPTE(0x2B, ETHER, IPV6, UDP,     NONE, NONE, NONE, NONE),
+	TPTE(0x2C, ETHER, IPV6, TCP,     NONE, NONE, NONE, NONE),
+	TPTE(0x2D, ETHER, IPV6, SCTP,    NONE, NONE, NONE, NONE),
+	/* IPv4 -> IPv4/IPv6 */
+	TPTE(0x81, ETHER, IPV4, NONE, IP, NONE, IPV4, FRAG),
+	TPTE(0x82, ETHER, IPV4, NONE, IP, NONE, IPV4, NONFRAG),
+	TPTE(0x83, ETHER, IPV4, NONE, IP, NONE, IPV4, UDP),
+	TPTE(0x84, ETHER, IPV4, NONE, IP, NONE, IPV4, TCP),
+	TPTE(0x85, ETHER, IPV4, NONE, IP, NONE, IPV4, SCTP),
+	TPTE(0x89, ETHER, IPV4, NONE, IP, NONE, IPV6, FRAG),
+	TPTE(0x8A, ETHER, IPV4, NONE, IP, NONE, IPV6, NONFRAG),
+	TPTE(0x8B, ETHER, IPV4, NONE, IP, NONE, IPV6, UDP),
+	TPTE(0x8C, ETHER, IPV4, NONE, IP, NONE, IPV6, TCP),
+	TPTE(0x8D, ETHER, IPV4, NONE, IP, NONE, IPV6, SCTP),
+	/* IPv4 -> GRE/Teredo/VXLAN -> NONE/IPv4/IPv6 */
+	TPTE(0x90, ETHER, IPV4, NONE, VXLAN_GPE, NONE, NONE, NONE),
+	TPTE(0x91, ETHER, IPV4, NONE, VXLAN_GPE, NONE, IPV4, FRAG),
+	TPTE(0x92, ETHER, IPV4, NONE, VXLAN_GPE, NONE, IPV4, NONFRAG),
+	TPTE(0x93, ETHER, IPV4, NONE, VXLAN_GPE, NONE, IPV4, UDP),
+	TPTE(0x94, ETHER, IPV4, NONE, VXLAN_GPE, NONE, IPV4, TCP),
+	TPTE(0x95, ETHER, IPV4, NONE, VXLAN_GPE, NONE, IPV4, SCTP),
+	TPTE(0x99, ETHER, IPV4, NONE, VXLAN_GPE, NONE, IPV6, FRAG),
+	TPTE(0x9A, ETHER, IPV4, NONE, VXLAN_GPE, NONE, IPV6, NONFRAG),
+	TPTE(0x9B, ETHER, IPV4, NONE, VXLAN_GPE, NONE, IPV6, UDP),
+	TPTE(0x9C, ETHER, IPV4, NONE, VXLAN_GPE, NONE, IPV6, TCP),
+	TPTE(0x9D, ETHER, IPV4, NONE, VXLAN_GPE, NONE, IPV6, SCTP),
+	/* IPv4 -> GRE/Teredo/VXLAN -> MAC -> NONE/IPv4/IPv6 */
+	TPTE(0xA0, ETHER, IPV4, NONE, GRENAT, ETHER, NONE,  NONE),
+	TPTE(0xA1, ETHER, IPV4, NONE, GRENAT, ETHER, IPV4, FRAG),
+	TPTE(0xA2, ETHER, IPV4, NONE, GRENAT, ETHER, IPV4, NONFRAG),
+	TPTE(0xA3, ETHER, IPV4, NONE, GRENAT, ETHER, IPV4, UDP),
+	TPTE(0xA4, ETHER, IPV4, NONE, GRENAT, ETHER, IPV4, TCP),
+	TPTE(0xA5, ETHER, IPV4, NONE, GRENAT, ETHER, IPV4, SCTP),
+	TPTE(0xA9, ETHER, IPV4, NONE, GRENAT, ETHER, IPV6, FRAG),
+	TPTE(0xAA, ETHER, IPV4, NONE, GRENAT, ETHER, IPV6, NONFRAG),
+	TPTE(0xAB, ETHER, IPV4, NONE, GRENAT, ETHER, IPV6, UDP),
+	TPTE(0xAC, ETHER, IPV4, NONE, GRENAT, ETHER, IPV6, TCP),
+	TPTE(0xAD, ETHER, IPV4, NONE, GRENAT, ETHER, IPV6, SCTP),
+	/* IPv4 -> GRE/Teredo/VXLAN -> MAC+VLAN -> NONE/IPv4/IPv6 */
+	TPTE(0xB0, ETHER, IPV4, NONE, GRENAT, ETHER_VLAN, NONE,  NONE),
+	TPTE(0xB1, ETHER, IPV4, NONE, GRENAT, ETHER_VLAN, IPV4, FRAG),
+	TPTE(0xB2, ETHER, IPV4, NONE, GRENAT, ETHER_VLAN, IPV4, NONFRAG),
+	TPTE(0xB3, ETHER, IPV4, NONE, GRENAT, ETHER_VLAN, IPV4, UDP),
+	TPTE(0xB4, ETHER, IPV4, NONE, GRENAT, ETHER_VLAN, IPV4, TCP),
+	TPTE(0xB5, ETHER, IPV4, NONE, GRENAT, ETHER_VLAN, IPV4, SCTP),
+	TPTE(0xB9, ETHER, IPV4, NONE, GRENAT, ETHER_VLAN, IPV6, FRAG),
+	TPTE(0xBA, ETHER, IPV4, NONE, GRENAT, ETHER_VLAN, IPV6, NONFRAG),
+	TPTE(0xBB, ETHER, IPV4, NONE, GRENAT, ETHER_VLAN, IPV6, UDP),
+	TPTE(0xBC, ETHER, IPV4, NONE, GRENAT, ETHER_VLAN, IPV6, TCP),
+	TPTE(0xBD, ETHER, IPV4, NONE, GRENAT, ETHER_VLAN, IPV6, SCTP),
+	/* IPv6 -> IPv4/IPv6 */
+	TPTE(0xC1, ETHER, IPV6, NONE, IP, NONE, IPV4, FRAG),
+	TPTE(0xC2, ETHER, IPV6, NONE, IP, NONE, IPV4, NONFRAG),
+	TPTE(0xC3, ETHER, IPV6, NONE, IP, NONE, IPV4, UDP),
+	TPTE(0xC4, ETHER, IPV6, NONE, IP, NONE, IPV4, TCP),
+	TPTE(0xC5, ETHER, IPV6, NONE, IP, NONE, IPV4, SCTP),
+	TPTE(0xC9, ETHER, IPV6, NONE, IP, NONE, IPV6, FRAG),
+	TPTE(0xCA, ETHER, IPV6, NONE, IP, NONE, IPV6, NONFRAG),
+	TPTE(0xCB, ETHER, IPV6, NONE, IP, NONE, IPV6, UDP),
+	TPTE(0xCC, ETHER, IPV6, NONE, IP, NONE, IPV6, TCP),
+	TPTE(0xCD, ETHER, IPV6, NONE, IP, NONE, IPV6, SCTP),
+	/* IPv6 -> GRE/Teredo/VXLAN -> NONE/IPv4/IPv6 */
+	TPTE(0xD0, ETHER, IPV6, NONE, GRENAT, NONE, NONE,  NONE),
+	TPTE(0xD1, ETHER, IPV6, NONE, GRENAT, NONE, IPV4, FRAG),
+	TPTE(0xD2, ETHER, IPV6, NONE, GRENAT, NONE, IPV4, NONFRAG),
+	TPTE(0xD3, ETHER, IPV6, NONE, GRENAT, NONE, IPV4, UDP),
+	TPTE(0xD4, ETHER, IPV6, NONE, GRENAT, NONE, IPV4, TCP),
+	TPTE(0xD5, ETHER, IPV6, NONE, GRENAT, NONE, IPV4, SCTP),
+	TPTE(0xD9, ETHER, IPV6, NONE, GRENAT, NONE, IPV6, FRAG),
+	TPTE(0xDA, ETHER, IPV6, NONE, GRENAT, NONE, IPV6, NONFRAG),
+	TPTE(0xDB, ETHER, IPV6, NONE, GRENAT, NONE, IPV6, UDP),
+	TPTE(0xDC, ETHER, IPV6, NONE, GRENAT, NONE, IPV6, TCP),
+	TPTE(0xDD, ETHER, IPV6, NONE, GRENAT, NONE, IPV6, SCTP),
+	/* IPv6 -> GRE/Teredo/VXLAN -> MAC -> NONE/IPv4/IPv6 */
+	TPTE(0xE0, ETHER, IPV6, NONE, GRENAT, ETHER, NONE,  NONE),
+	TPTE(0xE1, ETHER, IPV6, NONE, GRENAT, ETHER, IPV4, FRAG),
+	TPTE(0xE2, ETHER, IPV6, NONE, GRENAT, ETHER, IPV4, NONFRAG),
+	TPTE(0xE3, ETHER, IPV6, NONE, GRENAT, ETHER, IPV4, UDP),
+	TPTE(0xE4, ETHER, IPV6, NONE, GRENAT, ETHER, IPV4, TCP),
+	TPTE(0xE5, ETHER, IPV6, NONE, GRENAT, ETHER, IPV4, SCTP),
+	TPTE(0xE9, ETHER, IPV6, NONE, GRENAT, ETHER, IPV6, FRAG),
+	TPTE(0xEA, ETHER, IPV6, NONE, GRENAT, ETHER, IPV6, NONFRAG),
+	TPTE(0xEB, ETHER, IPV6, NONE, GRENAT, ETHER, IPV6, UDP),
+	TPTE(0xEC, ETHER, IPV6, NONE, GRENAT, ETHER, IPV6, TCP),
+	TPTE(0xED, ETHER, IPV6, NONE, GRENAT, ETHER, IPV6, SCTP),
+	/* IPv6 -> GRE/Teredo/VXLAN -> MAC+VLAN -> NONE/IPv4/IPv6 */
+	TPTE(0xF0, ETHER, IPV6, NONE, GRENAT, ETHER_VLAN, NONE,  NONE),
+	TPTE(0xF1, ETHER, IPV6, NONE, GRENAT, ETHER_VLAN, IPV4, FRAG),
+	TPTE(0xF2, ETHER, IPV6, NONE, GRENAT, ETHER_VLAN, IPV4, NONFRAG),
+	TPTE(0xF3, ETHER, IPV6, NONE, GRENAT, ETHER_VLAN, IPV4, UDP),
+	TPTE(0xF4, ETHER, IPV6, NONE, GRENAT, ETHER_VLAN, IPV4, TCP),
+	TPTE(0xF5, ETHER, IPV6, NONE, GRENAT, ETHER_VLAN, IPV4, SCTP),
+	TPTE(0xF9, ETHER, IPV6, NONE, GRENAT, ETHER_VLAN, IPV6, FRAG),
+	TPTE(0xFA, ETHER, IPV6, NONE, GRENAT, ETHER_VLAN, IPV6, NONFRAG),
+	TPTE(0xFB, ETHER, IPV6, NONE, GRENAT, ETHER_VLAN, IPV6, UDP),
+	TPTE(0xFC, ETHER, IPV6, NONE, GRENAT, ETHER_VLAN, IPV6, TCP),
+	TPTE(0xFD, ETHER, IPV6, NONE, GRENAT, ETHER_VLAN, IPV6, SCTP),
+};
+
+u32 *ngbe_get_supported_ptypes(void)
+{
+	static u32 ptypes[] = {
+		/* For non-vectorized functions,
+		 * refer to ngbe_rxd_pkt_info_to_pkt_type().
+		 */
+		RTE_PTYPE_L2_ETHER,
+		RTE_PTYPE_L3_IPV4,
+		RTE_PTYPE_L3_IPV4_EXT,
+		RTE_PTYPE_L3_IPV6,
+		RTE_PTYPE_L3_IPV6_EXT,
+		RTE_PTYPE_L4_SCTP,
+		RTE_PTYPE_L4_TCP,
+		RTE_PTYPE_L4_UDP,
+		RTE_PTYPE_TUNNEL_IP,
+		RTE_PTYPE_INNER_L3_IPV6,
+		RTE_PTYPE_INNER_L3_IPV6_EXT,
+		RTE_PTYPE_INNER_L4_TCP,
+		RTE_PTYPE_INNER_L4_UDP,
+		RTE_PTYPE_UNKNOWN
+	};
+
+	return ptypes;
+}
+
+static inline u8
+ngbe_encode_ptype_mac(u32 ptype)
+{
+	u8 ptid;
+
+	ptid = NGBE_PTID_PKT_MAC;
+
+	switch (ptype & RTE_PTYPE_L2_MASK) {
+	case RTE_PTYPE_UNKNOWN:
+		break;
+	case RTE_PTYPE_L2_ETHER_TIMESYNC:
+		ptid |= NGBE_PTID_TYP_TS;
+		break;
+	case RTE_PTYPE_L2_ETHER_ARP:
+		ptid |= NGBE_PTID_TYP_ARP;
+		break;
+	case RTE_PTYPE_L2_ETHER_LLDP:
+		ptid |= NGBE_PTID_TYP_LLDP;
+		break;
+	default:
+		ptid |= NGBE_PTID_TYP_MAC;
+		break;
+	}
+
+	return ptid;
+}
+
+static inline u8
+ngbe_encode_ptype_ip(u32 ptype)
+{
+	u8 ptid;
+
+	ptid = NGBE_PTID_PKT_IP;
+
+	switch (ptype & RTE_PTYPE_L3_MASK) {
+	case RTE_PTYPE_L3_IPV4:
+	case RTE_PTYPE_L3_IPV4_EXT:
+	case RTE_PTYPE_L3_IPV4_EXT_UNKNOWN:
+		break;
+	case RTE_PTYPE_L3_IPV6:
+	case RTE_PTYPE_L3_IPV6_EXT:
+	case RTE_PTYPE_L3_IPV6_EXT_UNKNOWN:
+		ptid |= NGBE_PTID_PKT_IPV6;
+		break;
+	default:
+		return ngbe_encode_ptype_mac(ptype);
+	}
+
+	switch (ptype & RTE_PTYPE_L4_MASK) {
+	case RTE_PTYPE_L4_TCP:
+		ptid |= NGBE_PTID_TYP_TCP;
+		break;
+	case RTE_PTYPE_L4_UDP:
+		ptid |= NGBE_PTID_TYP_UDP;
+		break;
+	case RTE_PTYPE_L4_SCTP:
+		ptid |= NGBE_PTID_TYP_SCTP;
+		break;
+	case RTE_PTYPE_L4_FRAG:
+		ptid |= NGBE_PTID_TYP_IPFRAG;
+		break;
+	default:
+		ptid |= NGBE_PTID_TYP_IPDATA;
+		break;
+	}
+
+	return ptid;
+}
+
+static inline u8
+ngbe_encode_ptype_tunnel(u32 ptype)
+{
+	u8 ptid;
+
+	ptid = NGBE_PTID_PKT_TUN;
+
+	switch (ptype & RTE_PTYPE_L3_MASK) {
+	case RTE_PTYPE_L3_IPV4:
+	case RTE_PTYPE_L3_IPV4_EXT:
+	case RTE_PTYPE_L3_IPV4_EXT_UNKNOWN:
+		break;
+	case RTE_PTYPE_L3_IPV6:
+	case RTE_PTYPE_L3_IPV6_EXT:
+	case RTE_PTYPE_L3_IPV6_EXT_UNKNOWN:
+		ptid |= NGBE_PTID_TUN_IPV6;
+		break;
+	default:
+		return ngbe_encode_ptype_ip(ptype);
+	}
+
+	/* VXLAN/GRE/Teredo/VXLAN-GPE are not supported in EM */
+	switch (ptype & RTE_PTYPE_TUNNEL_MASK) {
+	case RTE_PTYPE_TUNNEL_IP:
+		ptid |= NGBE_PTID_TUN_EI;
+		break;
+	case RTE_PTYPE_TUNNEL_GRE:
+	case RTE_PTYPE_TUNNEL_VXLAN_GPE:
+		ptid |= NGBE_PTID_TUN_EIG;
+		break;
+	case RTE_PTYPE_TUNNEL_VXLAN:
+	case RTE_PTYPE_TUNNEL_NVGRE:
+	case RTE_PTYPE_TUNNEL_GENEVE:
+	case RTE_PTYPE_TUNNEL_GRENAT:
+		break;
+	default:
+		return ptid;
+	}
+
+	switch (ptype & RTE_PTYPE_INNER_L2_MASK) {
+	case RTE_PTYPE_INNER_L2_ETHER:
+		ptid |= NGBE_PTID_TUN_EIGM;
+		break;
+	case RTE_PTYPE_INNER_L2_ETHER_VLAN:
+		ptid |= NGBE_PTID_TUN_EIGMV;
+		break;
+	case RTE_PTYPE_INNER_L2_ETHER_QINQ:
+		ptid |= NGBE_PTID_TUN_EIGMV;
+		break;
+	default:
+		break;
+	}
+
+	switch (ptype & RTE_PTYPE_INNER_L3_MASK) {
+	case RTE_PTYPE_INNER_L3_IPV4:
+	case RTE_PTYPE_INNER_L3_IPV4_EXT:
+	case RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN:
+		break;
+	case RTE_PTYPE_INNER_L3_IPV6:
+	case RTE_PTYPE_INNER_L3_IPV6_EXT:
+	case RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN:
+		ptid |= NGBE_PTID_PKT_IPV6;
+		break;
+	default:
+		return ptid;
+	}
+
+	switch (ptype & RTE_PTYPE_INNER_L4_MASK) {
+	case RTE_PTYPE_INNER_L4_TCP:
+		ptid |= NGBE_PTID_TYP_TCP;
+		break;
+	case RTE_PTYPE_INNER_L4_UDP:
+		ptid |= NGBE_PTID_TYP_UDP;
+		break;
+	case RTE_PTYPE_INNER_L4_SCTP:
+		ptid |= NGBE_PTID_TYP_SCTP;
+		break;
+	case RTE_PTYPE_INNER_L4_FRAG:
+		ptid |= NGBE_PTID_TYP_IPFRAG;
+		break;
+	default:
+		ptid |= NGBE_PTID_TYP_IPDATA;
+		break;
+	}
+
+	return ptid;
+}
+
+u32 ngbe_decode_ptype(u8 ptid)
+{
+	if (ngbe_etflt_id(ptid) != -1)
+		return RTE_PTYPE_UNKNOWN;
+
+	return ngbe_ptype_lookup[ptid];
+}
+
+u8 ngbe_encode_ptype(u32 ptype)
+{
+	u8 ptid = 0;
+
+	if (ptype & RTE_PTYPE_TUNNEL_MASK)
+		ptid = ngbe_encode_ptype_tunnel(ptype);
+	else if (ptype & RTE_PTYPE_L3_MASK)
+		ptid = ngbe_encode_ptype_ip(ptype);
+	else if (ptype & RTE_PTYPE_L2_MASK)
+		ptid = ngbe_encode_ptype_mac(ptype);
+	else
+		ptid = NGBE_PTID_NULL;
+
+	return ptid;
+}
+
+/**
+ * Use two different tables for normal packets and tunnel packets
+ * to save space.
+ */
+const u32
+ngbe_ptype_table[NGBE_PTID_MAX] __rte_cache_aligned = {
+	[NGBE_PT_ETHER] = RTE_PTYPE_L2_ETHER,
+	[NGBE_PT_IPV4] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4,
+	[NGBE_PT_IPV4_TCP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP,
+	[NGBE_PT_IPV4_UDP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP,
+	[NGBE_PT_IPV4_SCTP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_SCTP,
+	[NGBE_PT_IPV4_EXT] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT,
+	[NGBE_PT_IPV4_EXT_TCP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_TCP,
+	[NGBE_PT_IPV4_EXT_UDP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_UDP,
+	[NGBE_PT_IPV4_EXT_SCTP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_SCTP,
+	[NGBE_PT_IPV6] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV6,
+	[NGBE_PT_IPV6_TCP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_TCP,
+	[NGBE_PT_IPV6_UDP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP,
+	[NGBE_PT_IPV6_SCTP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_SCTP,
+	[NGBE_PT_IPV6_EXT] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV6_EXT,
+	[NGBE_PT_IPV6_EXT_TCP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_TCP,
+	[NGBE_PT_IPV6_EXT_UDP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_UDP,
+	[NGBE_PT_IPV6_EXT_SCTP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_SCTP,
+	[NGBE_PT_IPV4_IPV6] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+		RTE_PTYPE_INNER_L3_IPV6,
+	[NGBE_PT_IPV4_IPV6_TCP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+		RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_TCP,
+	[NGBE_PT_IPV4_IPV6_UDP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+		RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_UDP,
+	[NGBE_PT_IPV4_IPV6_SCTP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+		RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_SCTP,
+	[NGBE_PT_IPV4_EXT_IPV6] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_TUNNEL_IP |
+		RTE_PTYPE_INNER_L3_IPV6,
+	[NGBE_PT_IPV4_EXT_IPV6_TCP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_TUNNEL_IP |
+		RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_TCP,
+	[NGBE_PT_IPV4_EXT_IPV6_UDP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_TUNNEL_IP |
+		RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_UDP,
+	[NGBE_PT_IPV4_EXT_IPV6_SCTP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_TUNNEL_IP |
+		RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_SCTP,
+	[NGBE_PT_IPV4_IPV6_EXT] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+		RTE_PTYPE_INNER_L3_IPV6_EXT,
+	[NGBE_PT_IPV4_IPV6_EXT_TCP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+		RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_TCP,
+	[NGBE_PT_IPV4_IPV6_EXT_UDP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+		RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_UDP,
+	[NGBE_PT_IPV4_IPV6_EXT_SCTP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+		RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_SCTP,
+	[NGBE_PT_IPV4_EXT_IPV6_EXT] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_TUNNEL_IP |
+		RTE_PTYPE_INNER_L3_IPV6_EXT,
+	[NGBE_PT_IPV4_EXT_IPV6_EXT_TCP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_TUNNEL_IP |
+		RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_TCP,
+	[NGBE_PT_IPV4_EXT_IPV6_EXT_UDP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_TUNNEL_IP |
+		RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_UDP,
+	[NGBE_PT_IPV4_EXT_IPV6_EXT_SCTP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_TUNNEL_IP |
+		RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_SCTP,
+};
+
+const u32
+ngbe_ptype_table_tn[NGBE_PTID_MAX] __rte_cache_aligned = {
+	[NGBE_PT_NVGRE] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
+		RTE_PTYPE_INNER_L2_ETHER,
+	[NGBE_PT_NVGRE_IPV4] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
+		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV4,
+	[NGBE_PT_NVGRE_IPV4_EXT] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
+		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV4_EXT,
+	[NGBE_PT_NVGRE_IPV6] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
+		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV6,
+	[NGBE_PT_NVGRE_IPV4_IPV6] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
+		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV4,
+	[NGBE_PT_NVGRE_IPV6_EXT] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
+		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV6_EXT,
+	[NGBE_PT_NVGRE_IPV4_IPV6_EXT] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
+		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV4,
+	[NGBE_PT_NVGRE_IPV4_TCP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
+		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV4 |
+		RTE_PTYPE_INNER_L4_TCP,
+	[NGBE_PT_NVGRE_IPV6_TCP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
+		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV6 |
+		RTE_PTYPE_INNER_L4_TCP,
+	[NGBE_PT_NVGRE_IPV4_IPV6_TCP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
+		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV4,
+	[NGBE_PT_NVGRE_IPV6_EXT_TCP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
+		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV6_EXT |
+		RTE_PTYPE_INNER_L4_TCP,
+	[NGBE_PT_NVGRE_IPV4_IPV6_EXT_TCP] =
+		RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		RTE_PTYPE_TUNNEL_GRE | RTE_PTYPE_INNER_L2_ETHER |
+		RTE_PTYPE_INNER_L3_IPV4,
+	[NGBE_PT_NVGRE_IPV4_UDP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
+		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV4 |
+		RTE_PTYPE_INNER_L4_UDP,
+	[NGBE_PT_NVGRE_IPV6_UDP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
+		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV6 |
+		RTE_PTYPE_INNER_L4_UDP,
+	[NGBE_PT_NVGRE_IPV6_SCTP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
+		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV6 |
+		RTE_PTYPE_INNER_L4_SCTP,
+	[NGBE_PT_NVGRE_IPV4_IPV6_UDP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
+		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV4,
+	[NGBE_PT_NVGRE_IPV6_EXT_UDP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
+		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV6_EXT |
+		RTE_PTYPE_INNER_L4_UDP,
+	[NGBE_PT_NVGRE_IPV6_EXT_SCTP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
+		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV6_EXT |
+		RTE_PTYPE_INNER_L4_SCTP,
+	[NGBE_PT_NVGRE_IPV4_IPV6_EXT_UDP] =
+		RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		RTE_PTYPE_TUNNEL_GRE | RTE_PTYPE_INNER_L2_ETHER |
+		RTE_PTYPE_INNER_L3_IPV4,
+	[NGBE_PT_NVGRE_IPV4_SCTP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
+		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV4 |
+		RTE_PTYPE_INNER_L4_SCTP,
+	[NGBE_PT_NVGRE_IPV4_EXT_SCTP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
+		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV4_EXT |
+		RTE_PTYPE_INNER_L4_SCTP,
+	[NGBE_PT_NVGRE_IPV4_EXT_TCP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
+		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV4_EXT |
+		RTE_PTYPE_INNER_L4_TCP,
+	[NGBE_PT_NVGRE_IPV4_EXT_UDP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
+		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV4_EXT |
+		RTE_PTYPE_INNER_L4_UDP,
+
+	[NGBE_PT_VXLAN] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
+		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER,
+	[NGBE_PT_VXLAN_IPV4] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
+		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
+		RTE_PTYPE_INNER_L3_IPV4,
+	[NGBE_PT_VXLAN_IPV4_EXT] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
+		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
+		RTE_PTYPE_INNER_L3_IPV4_EXT,
+	[NGBE_PT_VXLAN_IPV6] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
+		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
+		RTE_PTYPE_INNER_L3_IPV6,
+	[NGBE_PT_VXLAN_IPV4_IPV6] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
+		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
+		RTE_PTYPE_INNER_L3_IPV4,
+	[NGBE_PT_VXLAN_IPV6_EXT] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
+		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
+		RTE_PTYPE_INNER_L3_IPV6_EXT,
+	[NGBE_PT_VXLAN_IPV4_IPV6_EXT] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
+		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
+		RTE_PTYPE_INNER_L3_IPV4,
+	[NGBE_PT_VXLAN_IPV4_TCP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
+		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
+		RTE_PTYPE_INNER_L3_IPV4 | RTE_PTYPE_INNER_L4_TCP,
+	[NGBE_PT_VXLAN_IPV6_TCP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
+		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
+		RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_TCP,
+	[NGBE_PT_VXLAN_IPV4_IPV6_TCP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
+		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
+		RTE_PTYPE_INNER_L3_IPV4,
+	[NGBE_PT_VXLAN_IPV6_EXT_TCP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
+		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
+		RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_TCP,
+	[NGBE_PT_VXLAN_IPV4_IPV6_EXT_TCP] =
+		RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		RTE_PTYPE_L4_UDP | RTE_PTYPE_TUNNEL_VXLAN |
+		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV4,
+	[NGBE_PT_VXLAN_IPV4_UDP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
+		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
+		RTE_PTYPE_INNER_L3_IPV4 | RTE_PTYPE_INNER_L4_UDP,
+	[NGBE_PT_VXLAN_IPV6_UDP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
+		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
+		RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_UDP,
+	[NGBE_PT_VXLAN_IPV6_SCTP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
+		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
+		RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_SCTP,
+	[NGBE_PT_VXLAN_IPV4_IPV6_UDP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
+		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
+		RTE_PTYPE_INNER_L3_IPV4,
+	[NGBE_PT_VXLAN_IPV6_EXT_UDP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
+		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
+		RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_UDP,
+	[NGBE_PT_VXLAN_IPV6_EXT_SCTP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
+		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
+		RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_SCTP,
+	[NGBE_PT_VXLAN_IPV4_IPV6_EXT_UDP] =
+		RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		RTE_PTYPE_L4_UDP | RTE_PTYPE_TUNNEL_VXLAN |
+		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV4,
+	[NGBE_PT_VXLAN_IPV4_SCTP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
+		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
+		RTE_PTYPE_INNER_L3_IPV4 | RTE_PTYPE_INNER_L4_SCTP,
+	[NGBE_PT_VXLAN_IPV4_EXT_SCTP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
+		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
+		RTE_PTYPE_INNER_L3_IPV4_EXT | RTE_PTYPE_INNER_L4_SCTP,
+	[NGBE_PT_VXLAN_IPV4_EXT_TCP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
+		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
+		RTE_PTYPE_INNER_L3_IPV4_EXT | RTE_PTYPE_INNER_L4_TCP,
+	[NGBE_PT_VXLAN_IPV4_EXT_UDP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
+		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
+		RTE_PTYPE_INNER_L3_IPV4_EXT | RTE_PTYPE_INNER_L4_UDP,
+};
diff --git a/drivers/net/ngbe/ngbe_ptypes.h b/drivers/net/ngbe/ngbe_ptypes.h
new file mode 100644
index 0000000000..1b965c02d8
--- /dev/null
+++ b/drivers/net/ngbe/ngbe_ptypes.h
@@ -0,0 +1,351 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+ */
+
+#ifndef _NGBE_PTYPE_H_
+#define _NGBE_PTYPE_H_
+
+/**
+ * PTID (Packet Type Identifier, 8 bits)
+ * - Bit 3:0 detailed types.
+ * - Bit 5:4 basic types.
+ * - Bit 7:6 tunnel types.
+ **/
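+
+/* For example (illustration only), ptid 0x24 decomposes into
+ * NGBE_PTID_PKT_IP (0x20) in the basic-type bits and NGBE_PTID_TYP_TCP
+ * (0x04) in the detailed-type bits, i.e. a plain IPv4 TCP packet.
+ */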
+#define NGBE_PTID_NULL                 0
+#define NGBE_PTID_MAX                  256
+#define NGBE_PTID_MASK                 0xFF
+#define NGBE_PTID_MASK_TUNNEL          0x7F
+
+/* TUN */
+#define NGBE_PTID_TUN_IPV6             0x40
+#define NGBE_PTID_TUN_EI               0x00 /* IP */
+#define NGBE_PTID_TUN_EIG              0x10 /* IP+GRE */
+#define NGBE_PTID_TUN_EIGM             0x20 /* IP+GRE+MAC */
+#define NGBE_PTID_TUN_EIGMV            0x30 /* IP+GRE+MAC+VLAN */
+
+/* PKT for !TUN */
+#define NGBE_PTID_PKT_TUN             (0x80)
+#define NGBE_PTID_PKT_MAC             (0x10)
+#define NGBE_PTID_PKT_IP              (0x20)
+#define NGBE_PTID_PKT_FCOE            (0x30)
+
+/* TYP for PKT=mac */
+#define NGBE_PTID_TYP_MAC             (0x01)
+#define NGBE_PTID_TYP_TS              (0x02) /* time sync */
+#define NGBE_PTID_TYP_FIP             (0x03)
+#define NGBE_PTID_TYP_LLDP            (0x04)
+#define NGBE_PTID_TYP_CNM             (0x05)
+#define NGBE_PTID_TYP_EAPOL           (0x06)
+#define NGBE_PTID_TYP_ARP             (0x07)
+#define NGBE_PTID_TYP_ETF             (0x08)
+
+/* TYP for PKT=ip */
+#define NGBE_PTID_PKT_IPV6            (0x08)
+#define NGBE_PTID_TYP_IPFRAG          (0x01)
+#define NGBE_PTID_TYP_IPDATA          (0x02)
+#define NGBE_PTID_TYP_UDP             (0x03)
+#define NGBE_PTID_TYP_TCP             (0x04)
+#define NGBE_PTID_TYP_SCTP            (0x05)
+
+/* TYP for PKT=fcoe */
+#define NGBE_PTID_PKT_VFT             (0x08)
+#define NGBE_PTID_TYP_FCOE            (0x00)
+#define NGBE_PTID_TYP_FCDATA          (0x01)
+#define NGBE_PTID_TYP_FCRDY           (0x02)
+#define NGBE_PTID_TYP_FCRSP           (0x03)
+#define NGBE_PTID_TYP_FCOTHER         (0x04)
+
+/* packet type non-ip values */
+enum ngbe_l2_ptids {
+	NGBE_PTID_L2_ABORTED = (NGBE_PTID_PKT_MAC),
+	NGBE_PTID_L2_MAC = (NGBE_PTID_PKT_MAC | NGBE_PTID_TYP_MAC),
+	NGBE_PTID_L2_TMST = (NGBE_PTID_PKT_MAC | NGBE_PTID_TYP_TS),
+	NGBE_PTID_L2_FIP = (NGBE_PTID_PKT_MAC | NGBE_PTID_TYP_FIP),
+	NGBE_PTID_L2_LLDP = (NGBE_PTID_PKT_MAC | NGBE_PTID_TYP_LLDP),
+	NGBE_PTID_L2_CNM = (NGBE_PTID_PKT_MAC | NGBE_PTID_TYP_CNM),
+	NGBE_PTID_L2_EAPOL = (NGBE_PTID_PKT_MAC | NGBE_PTID_TYP_EAPOL),
+	NGBE_PTID_L2_ARP = (NGBE_PTID_PKT_MAC | NGBE_PTID_TYP_ARP),
+
+	NGBE_PTID_L2_IPV4_FRAG = (NGBE_PTID_PKT_IP | NGBE_PTID_TYP_IPFRAG),
+	NGBE_PTID_L2_IPV4 = (NGBE_PTID_PKT_IP | NGBE_PTID_TYP_IPDATA),
+	NGBE_PTID_L2_IPV4_UDP = (NGBE_PTID_PKT_IP | NGBE_PTID_TYP_UDP),
+	NGBE_PTID_L2_IPV4_TCP = (NGBE_PTID_PKT_IP | NGBE_PTID_TYP_TCP),
+	NGBE_PTID_L2_IPV4_SCTP = (NGBE_PTID_PKT_IP | NGBE_PTID_TYP_SCTP),
+	NGBE_PTID_L2_IPV6_FRAG = (NGBE_PTID_PKT_IP | NGBE_PTID_PKT_IPV6 |
+			NGBE_PTID_TYP_IPFRAG),
+	NGBE_PTID_L2_IPV6 = (NGBE_PTID_PKT_IP | NGBE_PTID_PKT_IPV6 |
+			NGBE_PTID_TYP_IPDATA),
+	NGBE_PTID_L2_IPV6_UDP = (NGBE_PTID_PKT_IP | NGBE_PTID_PKT_IPV6 |
+			NGBE_PTID_TYP_UDP),
+	NGBE_PTID_L2_IPV6_TCP = (NGBE_PTID_PKT_IP | NGBE_PTID_PKT_IPV6 |
+			NGBE_PTID_TYP_TCP),
+	NGBE_PTID_L2_IPV6_SCTP = (NGBE_PTID_PKT_IP | NGBE_PTID_PKT_IPV6 |
+			NGBE_PTID_TYP_SCTP),
+
+	NGBE_PTID_L2_FCOE = (NGBE_PTID_PKT_FCOE |
+			NGBE_PTID_TYP_FCOE),
+	NGBE_PTID_L2_FCOE_FCDATA = (NGBE_PTID_PKT_FCOE |
+			NGBE_PTID_TYP_FCDATA),
+	NGBE_PTID_L2_FCOE_FCRDY = (NGBE_PTID_PKT_FCOE |
+			NGBE_PTID_TYP_FCRDY),
+	NGBE_PTID_L2_FCOE_FCRSP = (NGBE_PTID_PKT_FCOE |
+			NGBE_PTID_TYP_FCRSP),
+	NGBE_PTID_L2_FCOE_FCOTHER = (NGBE_PTID_PKT_FCOE |
+			NGBE_PTID_TYP_FCOTHER),
+	NGBE_PTID_L2_FCOE_VFT = (NGBE_PTID_PKT_FCOE |
+			NGBE_PTID_PKT_VFT),
+	NGBE_PTID_L2_FCOE_VFT_FCDATA = (NGBE_PTID_PKT_FCOE |
+			NGBE_PTID_PKT_VFT | NGBE_PTID_TYP_FCDATA),
+	NGBE_PTID_L2_FCOE_VFT_FCRDY = (NGBE_PTID_PKT_FCOE |
+			NGBE_PTID_PKT_VFT | NGBE_PTID_TYP_FCRDY),
+	NGBE_PTID_L2_FCOE_VFT_FCRSP = (NGBE_PTID_PKT_FCOE |
+			NGBE_PTID_PKT_VFT | NGBE_PTID_TYP_FCRSP),
+	NGBE_PTID_L2_FCOE_VFT_FCOTHER = (NGBE_PTID_PKT_FCOE |
+			NGBE_PTID_PKT_VFT | NGBE_PTID_TYP_FCOTHER),
+
+	NGBE_PTID_L2_TUN4_MAC = (NGBE_PTID_PKT_TUN |
+			NGBE_PTID_TUN_EIGM),
+	NGBE_PTID_L2_TUN6_MAC = (NGBE_PTID_PKT_TUN |
+			NGBE_PTID_TUN_IPV6 | NGBE_PTID_TUN_EIGM),
+};
+
+/*
+ * PTYPE (Packet Type, 32 bits)
+ * - Bit 3:0 is for L2 types.
+ * - Bit 7:4 is for L3 or outer L3 (for tunneling case) types.
+ * - Bit 11:8 is for L4 or outer L4 (for tunneling case) types.
+ * - Bit 15:12 is for tunnel types.
+ * - Bit 19:16 is for inner L2 types.
+ * - Bit 23:20 is for inner L3 types.
+ * - Bit 27:24 is for inner L4 types.
+ * - Bit 31:28 is reserved.
+ * Please refer to rte_mbuf.h: rte_mbuf.packet_type.
+ */
+struct rte_ngbe_ptype {
+	u32 l2:4;  /* outer mac */
+	u32 l3:4;  /* outer internet protocol */
+	u32 l4:4;  /* outer transport protocol */
+	u32 tun:4; /* tunnel protocol */
+
+	u32 el2:4; /* inner mac */
+	u32 el3:4; /* inner internet protocol */
+	u32 el4:4; /* inner transport protocol */
+	u32 rsv:3;
+	u32 known:1;
+};
+
+#ifndef RTE_PTYPE_UNKNOWN
+#define RTE_PTYPE_UNKNOWN                   0x00000000
+#define RTE_PTYPE_L2_ETHER                  0x00000001
+#define RTE_PTYPE_L2_ETHER_TIMESYNC         0x00000002
+#define RTE_PTYPE_L2_ETHER_ARP              0x00000003
+#define RTE_PTYPE_L2_ETHER_LLDP             0x00000004
+#define RTE_PTYPE_L2_ETHER_NSH              0x00000005
+#define RTE_PTYPE_L2_ETHER_FCOE             0x00000009
+#define RTE_PTYPE_L3_IPV4                   0x00000010
+#define RTE_PTYPE_L3_IPV4_EXT               0x00000030
+#define RTE_PTYPE_L3_IPV6                   0x00000040
+#define RTE_PTYPE_L3_IPV4_EXT_UNKNOWN       0x00000090
+#define RTE_PTYPE_L3_IPV6_EXT               0x000000c0
+#define RTE_PTYPE_L3_IPV6_EXT_UNKNOWN       0x000000e0
+#define RTE_PTYPE_L4_TCP                    0x00000100
+#define RTE_PTYPE_L4_UDP                    0x00000200
+#define RTE_PTYPE_L4_FRAG                   0x00000300
+#define RTE_PTYPE_L4_SCTP                   0x00000400
+#define RTE_PTYPE_L4_ICMP                   0x00000500
+#define RTE_PTYPE_L4_NONFRAG                0x00000600
+#define RTE_PTYPE_TUNNEL_IP                 0x00001000
+#define RTE_PTYPE_TUNNEL_GRE                0x00002000
+#define RTE_PTYPE_TUNNEL_VXLAN              0x00003000
+#define RTE_PTYPE_TUNNEL_NVGRE              0x00004000
+#define RTE_PTYPE_TUNNEL_GENEVE             0x00005000
+#define RTE_PTYPE_TUNNEL_GRENAT             0x00006000
+#define RTE_PTYPE_INNER_L2_ETHER            0x00010000
+#define RTE_PTYPE_INNER_L2_ETHER_VLAN       0x00020000
+#define RTE_PTYPE_INNER_L3_IPV4             0x00100000
+#define RTE_PTYPE_INNER_L3_IPV4_EXT         0x00200000
+#define RTE_PTYPE_INNER_L3_IPV6             0x00300000
+#define RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN 0x00400000
+#define RTE_PTYPE_INNER_L3_IPV6_EXT         0x00500000
+#define RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN 0x00600000
+#define RTE_PTYPE_INNER_L4_TCP              0x01000000
+#define RTE_PTYPE_INNER_L4_UDP              0x02000000
+#define RTE_PTYPE_INNER_L4_FRAG             0x03000000
+#define RTE_PTYPE_INNER_L4_SCTP             0x04000000
+#define RTE_PTYPE_INNER_L4_ICMP             0x05000000
+#define RTE_PTYPE_INNER_L4_NONFRAG          0x06000000
+#endif /* !RTE_PTYPE_UNKNOWN */
+#define RTE_PTYPE_L3_IPV4u                  RTE_PTYPE_L3_IPV4_EXT_UNKNOWN
+#define RTE_PTYPE_L3_IPV6u                  RTE_PTYPE_L3_IPV6_EXT_UNKNOWN
+#define RTE_PTYPE_INNER_L3_IPV4u            RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN
+#define RTE_PTYPE_INNER_L3_IPV6u            RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN
+#define RTE_PTYPE_L2_ETHER_FIP              RTE_PTYPE_L2_ETHER
+#define RTE_PTYPE_L2_ETHER_CNM              RTE_PTYPE_L2_ETHER
+#define RTE_PTYPE_L2_ETHER_EAPOL            RTE_PTYPE_L2_ETHER
+#define RTE_PTYPE_L2_ETHER_FILTER           RTE_PTYPE_L2_ETHER
+
+u32 *ngbe_get_supported_ptypes(void);
+u32 ngbe_decode_ptype(u8 ptid);
+u8 ngbe_encode_ptype(u32 ptype);
+
+/**
+ * PT (Packet Type, 32 bits)
+ * - Bit 3:0 is for L2 types.
+ * - Bit 7:4 is for L3 or outer L3 (for tunneling case) types.
+ * - Bit 11:8 is for L4 or outer L4 (for tunneling case) types.
+ * - Bit 15:12 is for tunnel types.
+ * - Bit 19:16 is for inner L2 types.
+ * - Bit 23:20 is for inner L3 types.
+ * - Bit 27:24 is for inner L4 types.
+ * - Bit 31:28 is reserved.
+ * PT is a more accurate version of PTYPE
+ **/
+#define NGBE_PT_ETHER                   0x00
+#define NGBE_PT_IPV4                    0x01
+#define NGBE_PT_IPV4_TCP                0x11
+#define NGBE_PT_IPV4_UDP                0x21
+#define NGBE_PT_IPV4_SCTP               0x41
+#define NGBE_PT_IPV4_EXT                0x03
+#define NGBE_PT_IPV4_EXT_TCP            0x13
+#define NGBE_PT_IPV4_EXT_UDP            0x23
+#define NGBE_PT_IPV4_EXT_SCTP           0x43
+#define NGBE_PT_IPV6                    0x04
+#define NGBE_PT_IPV6_TCP                0x14
+#define NGBE_PT_IPV6_UDP                0x24
+#define NGBE_PT_IPV6_SCTP               0x44
+#define NGBE_PT_IPV6_EXT                0x0C
+#define NGBE_PT_IPV6_EXT_TCP            0x1C
+#define NGBE_PT_IPV6_EXT_UDP            0x2C
+#define NGBE_PT_IPV6_EXT_SCTP           0x4C
+#define NGBE_PT_IPV4_IPV6               0x05
+#define NGBE_PT_IPV4_IPV6_TCP           0x15
+#define NGBE_PT_IPV4_IPV6_UDP           0x25
+#define NGBE_PT_IPV4_IPV6_SCTP          0x45
+#define NGBE_PT_IPV4_EXT_IPV6           0x07
+#define NGBE_PT_IPV4_EXT_IPV6_TCP       0x17
+#define NGBE_PT_IPV4_EXT_IPV6_UDP       0x27
+#define NGBE_PT_IPV4_EXT_IPV6_SCTP      0x47
+#define NGBE_PT_IPV4_IPV6_EXT           0x0D
+#define NGBE_PT_IPV4_IPV6_EXT_TCP       0x1D
+#define NGBE_PT_IPV4_IPV6_EXT_UDP       0x2D
+#define NGBE_PT_IPV4_IPV6_EXT_SCTP      0x4D
+#define NGBE_PT_IPV4_EXT_IPV6_EXT       0x0F
+#define NGBE_PT_IPV4_EXT_IPV6_EXT_TCP   0x1F
+#define NGBE_PT_IPV4_EXT_IPV6_EXT_UDP   0x2F
+#define NGBE_PT_IPV4_EXT_IPV6_EXT_SCTP  0x4F
+
+#define NGBE_PT_NVGRE                   0x00
+#define NGBE_PT_NVGRE_IPV4              0x01
+#define NGBE_PT_NVGRE_IPV4_TCP          0x11
+#define NGBE_PT_NVGRE_IPV4_UDP          0x21
+#define NGBE_PT_NVGRE_IPV4_SCTP         0x41
+#define NGBE_PT_NVGRE_IPV4_EXT          0x03
+#define NGBE_PT_NVGRE_IPV4_EXT_TCP      0x13
+#define NGBE_PT_NVGRE_IPV4_EXT_UDP      0x23
+#define NGBE_PT_NVGRE_IPV4_EXT_SCTP     0x43
+#define NGBE_PT_NVGRE_IPV6              0x04
+#define NGBE_PT_NVGRE_IPV6_TCP          0x14
+#define NGBE_PT_NVGRE_IPV6_UDP          0x24
+#define NGBE_PT_NVGRE_IPV6_SCTP         0x44
+#define NGBE_PT_NVGRE_IPV6_EXT          0x0C
+#define NGBE_PT_NVGRE_IPV6_EXT_TCP      0x1C
+#define NGBE_PT_NVGRE_IPV6_EXT_UDP      0x2C
+#define NGBE_PT_NVGRE_IPV6_EXT_SCTP     0x4C
+#define NGBE_PT_NVGRE_IPV4_IPV6         0x05
+#define NGBE_PT_NVGRE_IPV4_IPV6_TCP     0x15
+#define NGBE_PT_NVGRE_IPV4_IPV6_UDP     0x25
+#define NGBE_PT_NVGRE_IPV4_IPV6_EXT     0x0D
+#define NGBE_PT_NVGRE_IPV4_IPV6_EXT_TCP 0x1D
+#define NGBE_PT_NVGRE_IPV4_IPV6_EXT_UDP 0x2D
+
+#define NGBE_PT_VXLAN                   0x80
+#define NGBE_PT_VXLAN_IPV4              0x81
+#define NGBE_PT_VXLAN_IPV4_TCP          0x91
+#define NGBE_PT_VXLAN_IPV4_UDP          0xA1
+#define NGBE_PT_VXLAN_IPV4_SCTP         0xC1
+#define NGBE_PT_VXLAN_IPV4_EXT          0x83
+#define NGBE_PT_VXLAN_IPV4_EXT_TCP      0x93
+#define NGBE_PT_VXLAN_IPV4_EXT_UDP      0xA3
+#define NGBE_PT_VXLAN_IPV4_EXT_SCTP     0xC3
+#define NGBE_PT_VXLAN_IPV6              0x84
+#define NGBE_PT_VXLAN_IPV6_TCP          0x94
+#define NGBE_PT_VXLAN_IPV6_UDP          0xA4
+#define NGBE_PT_VXLAN_IPV6_SCTP         0xC4
+#define NGBE_PT_VXLAN_IPV6_EXT          0x8C
+#define NGBE_PT_VXLAN_IPV6_EXT_TCP      0x9C
+#define NGBE_PT_VXLAN_IPV6_EXT_UDP      0xAC
+#define NGBE_PT_VXLAN_IPV6_EXT_SCTP     0xCC
+#define NGBE_PT_VXLAN_IPV4_IPV6         0x85
+#define NGBE_PT_VXLAN_IPV4_IPV6_TCP     0x95
+#define NGBE_PT_VXLAN_IPV4_IPV6_UDP     0xA5
+#define NGBE_PT_VXLAN_IPV4_IPV6_EXT     0x8D
+#define NGBE_PT_VXLAN_IPV4_IPV6_EXT_TCP 0x9D
+#define NGBE_PT_VXLAN_IPV4_IPV6_EXT_UDP 0xAD
+
+#define NGBE_PT_MAX    256
+extern const u32 ngbe_ptype_table[NGBE_PT_MAX];
+extern const u32 ngbe_ptype_table_tn[NGBE_PT_MAX];
+
+/* ether type filter list: one static filter per filter consumer. This is
+ *                 to avoid filter collisions later. Add new filters
+ *                 here!!
+ *      EAPOL 802.1x (0x888e): Filter 0
+ *      FCoE (0x8906):   Filter 2
+ *      1588 (0x88f7):   Filter 3
+ *      FIP  (0x8914):   Filter 4
+ *      LLDP (0x88CC):   Filter 5
+ *      LACP (0x8809):   Filter 6
+ *      FC   (0x8808):   Filter 7
+ */
+#define NGBE_ETF_ID_EAPOL        0
+#define NGBE_ETF_ID_FCOE         2
+#define NGBE_ETF_ID_1588         3
+#define NGBE_ETF_ID_FIP          4
+#define NGBE_ETF_ID_LLDP         5
+#define NGBE_ETF_ID_LACP         6
+#define NGBE_ETF_ID_FC           7
+#define NGBE_ETF_ID_MAX          8
+
+#define NGBE_PTID_ETF_MIN  0x18
+#define NGBE_PTID_ETF_MAX  0x1F
+static inline int ngbe_etflt_id(u8 ptid)
+{
+	if (ptid >= NGBE_PTID_ETF_MIN && ptid <= NGBE_PTID_ETF_MAX)
+		return ptid - NGBE_PTID_ETF_MIN;
+	else
+		return -1;
+}
+
+struct ngbe_udphdr {
+	__be16	source;
+	__be16	dest;
+	__be16	len;
+	__be16	check;
+};
+
+struct ngbe_vxlanhdr {
+	__be32 vx_flags;
+	__be32 vx_vni;
+};
+
+struct ngbe_genevehdr {
+	u8 opt_len:6;
+	u8 ver:2;
+	u8 rsvd1:6;
+	u8 critical:1;
+	u8 oam:1;
+	__be16 proto_type;
+
+	u8 vni[3];
+	u8 rsvd2;
+};
+
+struct ngbe_nvgrehdr {
+	__be16 flags;
+	__be16 proto;
+	__be32 tni;
+};
+
+#endif /* _NGBE_PTYPE_H_ */
diff --git a/drivers/net/ngbe/ngbe_rxtx.h b/drivers/net/ngbe/ngbe_rxtx.h
index 2db5cc3f2a..f30da10ae3 100644
--- a/drivers/net/ngbe/ngbe_rxtx.h
+++ b/drivers/net/ngbe/ngbe_rxtx.h
@@ -74,7 +74,6 @@ struct ngbe_tx_desc {
 		    sizeof(struct ngbe_rx_desc))
 
 #define NGBE_TX_MAX_SEG                    40
-#define NGBE_PTID_MASK                     0xFF
 
 /**
  * Structure associated with each descriptor of the RX ring of a RX queue.
-- 
2.27.0




^ permalink raw reply	[flat|nested] 51+ messages in thread

* [dpdk-dev] [PATCH v5 19/24] net/ngbe: add simple Rx and Tx flow
  2021-06-02  9:40 [dpdk-dev] [PATCH v5 00/24] net: ngbe PMD Jiawen Wu
                   ` (17 preceding siblings ...)
  2021-06-02  9:41 ` [dpdk-dev] [PATCH v5 18/24] net/ngbe: add packet type Jiawen Wu
@ 2021-06-02  9:41 ` Jiawen Wu
  2021-06-14 19:10   ` Andrew Rybchenko
  2021-06-02  9:41 ` [dpdk-dev] [PATCH v5 20/24] net/ngbe: support bulk and scatter Rx Jiawen Wu
                   ` (6 subsequent siblings)
  25 siblings, 1 reply; 51+ messages in thread
From: Jiawen Wu @ 2021-06-02  9:41 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Initialize the device with the simplest receive and transmit functions.
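
The burst handlers are installed as the ethdev Rx/Tx callbacks, so an
application reaches them through the usual burst API. A hypothetical
usage sketch (port 0 and queue 0 assumed already configured and started):

	uint16_t port_id = 0;  /* assumed */
	struct rte_mbuf *pkts[32];
	uint16_t nb_rx, nb_tx;

	/* Dispatches to ngbe_recv_pkts() for this device. */
	nb_rx = rte_eth_rx_burst(port_id, 0, pkts, 32);
	/* Dispatches to ngbe_xmit_pkts_simple(); descriptors are filled
	 * four at a time, with any leftover packets handled one by one.
	 */
	nb_tx = rte_eth_tx_burst(port_id, 0, pkts, nb_rx);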

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/ngbe/ngbe_ethdev.c |   8 +-
 drivers/net/ngbe/ngbe_ethdev.h |   6 +
 drivers/net/ngbe/ngbe_rxtx.c   | 482 +++++++++++++++++++++++++++++++++
 drivers/net/ngbe/ngbe_rxtx.h   | 110 ++++++++
 4 files changed, 604 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 672db88133..4dab920caa 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -109,6 +109,8 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
 	PMD_INIT_FUNC_TRACE();
 
 	eth_dev->dev_ops = &ngbe_eth_dev_ops;
+	eth_dev->rx_pkt_burst = &ngbe_recv_pkts;
+	eth_dev->tx_pkt_burst = &ngbe_xmit_pkts_simple;
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
@@ -357,8 +359,10 @@ ngbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 const uint32_t *
 ngbe_dev_supported_ptypes_get(struct rte_eth_dev *dev)
 {
-	RTE_SET_USED(dev);
-	return ngbe_get_supported_ptypes();
+	if (dev->rx_pkt_burst == ngbe_recv_pkts)
+		return ngbe_get_supported_ptypes();
+
+	return NULL;
 }
 
 /* return 0 means link status changed, -1 means not changed */
diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
index 6881351252..c0f8483eca 100644
--- a/drivers/net/ngbe/ngbe_ethdev.h
+++ b/drivers/net/ngbe/ngbe_ethdev.h
@@ -75,6 +75,12 @@ int ngbe_dev_rx_init(struct rte_eth_dev *dev);
 
 void ngbe_dev_tx_init(struct rte_eth_dev *dev);
 
+uint16_t ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+		uint16_t nb_pkts);
+
+uint16_t ngbe_xmit_pkts_simple(void *tx_queue, struct rte_mbuf **tx_pkts,
+		uint16_t nb_pkts);
+
 int
 ngbe_dev_link_update_share(struct rte_eth_dev *dev,
 		int wait_to_complete);
diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c
index 68d7e651af..9462da5b7a 100644
--- a/drivers/net/ngbe/ngbe_rxtx.c
+++ b/drivers/net/ngbe/ngbe_rxtx.c
@@ -15,10 +15,492 @@
 #include "ngbe_ethdev.h"
 #include "ngbe_rxtx.h"
 
+/*
+ * Prefetch a cache line into all cache levels.
+ */
+#define rte_ngbe_prefetch(p)   rte_prefetch0(p)
+
+/*********************************************************************
+ *
+ *  TX functions
+ *
+ **********************************************************************/
+
+/*
+ * Check for descriptors with their DD bit set and free mbufs.
+ * Return the total number of buffers freed.
+ */
+static __rte_always_inline int
+ngbe_tx_free_bufs(struct ngbe_tx_queue *txq)
+{
+	struct ngbe_tx_entry *txep;
+	uint32_t status;
+	int i, nb_free = 0;
+	struct rte_mbuf *m, *free[RTE_NGBE_TX_MAX_FREE_BUF_SZ];
+
+	/* check DD bit on threshold descriptor */
+	status = txq->tx_ring[txq->tx_next_dd].dw3;
+	if (!(status & rte_cpu_to_le_32(NGBE_TXD_DD))) {
+		if (txq->nb_tx_free >> 1 < txq->tx_free_thresh)
+			ngbe_set32_masked(txq->tdc_reg_addr,
+				NGBE_TXCFG_FLUSH, NGBE_TXCFG_FLUSH);
+		return 0;
+	}
+
+	/*
+	 * first buffer to free from S/W ring is at index
+	 * tx_next_dd - (tx_free_thresh-1)
+	 */
+	txep = &txq->sw_ring[txq->tx_next_dd - (txq->tx_free_thresh - 1)];
+	for (i = 0; i < txq->tx_free_thresh; ++i, ++txep) {
+		/* free buffers one at a time */
+		m = rte_pktmbuf_prefree_seg(txep->mbuf);
+		txep->mbuf = NULL;
+
+		if (unlikely(m == NULL))
+			continue;
+
+		if (nb_free >= RTE_NGBE_TX_MAX_FREE_BUF_SZ ||
+		    (nb_free > 0 && m->pool != free[0]->pool)) {
+			rte_mempool_put_bulk(free[0]->pool,
+					     (void **)free, nb_free);
+			nb_free = 0;
+		}
+
+		free[nb_free++] = m;
+	}
+
+	if (nb_free > 0)
+		rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
+
+	/* buffers were freed, update counters */
+	txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_free_thresh);
+	txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_free_thresh);
+	if (txq->tx_next_dd >= txq->nb_tx_desc)
+		txq->tx_next_dd = (uint16_t)(txq->tx_free_thresh - 1);
+
+	return txq->tx_free_thresh;
+}
+
+/* Populate 4 descriptors with data from 4 mbufs */
+static inline void
+tx4(volatile struct ngbe_tx_desc *txdp, struct rte_mbuf **pkts)
+{
+	uint64_t buf_dma_addr;
+	uint32_t pkt_len;
+	int i;
+
+	for (i = 0; i < 4; ++i, ++txdp, ++pkts) {
+		buf_dma_addr = rte_mbuf_data_iova(*pkts);
+		pkt_len = (*pkts)->data_len;
+
+		/* write data to descriptor */
+		txdp->qw0 = rte_cpu_to_le_64(buf_dma_addr);
+		txdp->dw2 = cpu_to_le32(NGBE_TXD_FLAGS |
+					NGBE_TXD_DATLEN(pkt_len));
+		txdp->dw3 = cpu_to_le32(NGBE_TXD_PAYLEN(pkt_len));
+
+		rte_prefetch0(&(*pkts)->pool);
+	}
+}
+
+/* Populate 1 descriptor with data from 1 mbuf */
+static inline void
+tx1(volatile struct ngbe_tx_desc *txdp, struct rte_mbuf **pkts)
+{
+	uint64_t buf_dma_addr;
+	uint32_t pkt_len;
+
+	buf_dma_addr = rte_mbuf_data_iova(*pkts);
+	pkt_len = (*pkts)->data_len;
+
+	/* write data to descriptor */
+	txdp->qw0 = cpu_to_le64(buf_dma_addr);
+	txdp->dw2 = cpu_to_le32(NGBE_TXD_FLAGS |
+				NGBE_TXD_DATLEN(pkt_len));
+	txdp->dw3 = cpu_to_le32(NGBE_TXD_PAYLEN(pkt_len));
+
+	rte_prefetch0(&(*pkts)->pool);
+}
+
+/*
+ * Fill H/W descriptor ring with mbuf data.
+ * Copy mbuf pointers to the S/W ring.
+ */
+static inline void
+ngbe_tx_fill_hw_ring(struct ngbe_tx_queue *txq, struct rte_mbuf **pkts,
+		      uint16_t nb_pkts)
+{
+	volatile struct ngbe_tx_desc *txdp = &txq->tx_ring[txq->tx_tail];
+	struct ngbe_tx_entry *txep = &txq->sw_ring[txq->tx_tail];
+	const int N_PER_LOOP = 4;
+	const int N_PER_LOOP_MASK = N_PER_LOOP - 1;
+	int mainpart, leftover;
+	int i, j;
+
+	/*
+	 * Process most of the packets in chunks of N pkts.  Any
+	 * leftover packets will get processed one at a time.
+	 */
+	mainpart = (nb_pkts & ((uint32_t)~N_PER_LOOP_MASK));
+	leftover = (nb_pkts & ((uint32_t)N_PER_LOOP_MASK));
+	for (i = 0; i < mainpart; i += N_PER_LOOP) {
+		/* Copy N mbuf pointers to the S/W ring */
+		for (j = 0; j < N_PER_LOOP; ++j)
+			(txep + i + j)->mbuf = *(pkts + i + j);
+		tx4(txdp + i, pkts + i);
+	}
+
+	if (unlikely(leftover > 0)) {
+		for (i = 0; i < leftover; ++i) {
+			(txep + mainpart + i)->mbuf = *(pkts + mainpart + i);
+			tx1(txdp + mainpart + i, pkts + mainpart + i);
+		}
+	}
+}
+
+static inline uint16_t
+tx_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+	     uint16_t nb_pkts)
+{
+	struct ngbe_tx_queue *txq = (struct ngbe_tx_queue *)tx_queue;
+	uint16_t n = 0;
+
+	/*
+	 * Begin scanning the H/W ring for done descriptors when the
+	 * number of available descriptors drops below tx_free_thresh.  For
+	 * each done descriptor, free the associated buffer.
+	 */
+	if (txq->nb_tx_free < txq->tx_free_thresh)
+		ngbe_tx_free_bufs(txq);
+
+	/* Only use descriptors that are available */
+	nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
+	if (unlikely(nb_pkts == 0))
+		return 0;
+
+	/* Use exactly nb_pkts descriptors */
+	txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
+
+	/*
+	 * At this point, we know there are enough descriptors in the
+	 * ring to transmit all the packets.  This assumes that each
+	 * mbuf contains a single segment, and that no new offloads
+	 * are expected, which would require a new context descriptor.
+	 */
+
+	/*
+	 * See if we're going to wrap-around. If so, handle the top
+	 * of the descriptor ring first, then do the bottom.  If not,
+	 * the processing looks just like the "bottom" part anyway...
+	 */
+	if ((txq->tx_tail + nb_pkts) > txq->nb_tx_desc) {
+		n = (uint16_t)(txq->nb_tx_desc - txq->tx_tail);
+		ngbe_tx_fill_hw_ring(txq, tx_pkts, n);
+		txq->tx_tail = 0;
+	}
+
+	/* Fill H/W descriptor ring with mbuf data */
+	ngbe_tx_fill_hw_ring(txq, tx_pkts + n, (uint16_t)(nb_pkts - n));
+	txq->tx_tail = (uint16_t)(txq->tx_tail + (nb_pkts - n));
+
+	/*
+	 * Check for wrap-around. This would only happen if we used
+	 * up to the last descriptor in the ring, no more, no less.
+	 */
+	if (txq->tx_tail >= txq->nb_tx_desc)
+		txq->tx_tail = 0;
+
+	PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u tx_tail=%u nb_tx=%u",
+		   (uint16_t)txq->port_id, (uint16_t)txq->queue_id,
+		   (uint16_t)txq->tx_tail, (uint16_t)nb_pkts);
+
+	/* update tail pointer */
+	rte_wmb();
+	ngbe_set32_relaxed(txq->tdt_reg_addr, txq->tx_tail);
+
+	return nb_pkts;
+}
+
+uint16_t
+ngbe_xmit_pkts_simple(void *tx_queue, struct rte_mbuf **tx_pkts,
+		       uint16_t nb_pkts)
+{
+	uint16_t nb_tx;
+
+	/* Requests of up to TX_MAX_BURST packets are transmitted directly */
+	if (likely(nb_pkts <= RTE_PMD_NGBE_TX_MAX_BURST))
+		return tx_xmit_pkts(tx_queue, tx_pkts, nb_pkts);
+
+	/* transmit more than the max burst, in chunks of TX_MAX_BURST */
+	nb_tx = 0;
+	while (nb_pkts) {
+		uint16_t ret, n;
+
+		n = (uint16_t)RTE_MIN(nb_pkts, RTE_PMD_NGBE_TX_MAX_BURST);
+		ret = tx_xmit_pkts(tx_queue, &tx_pkts[nb_tx], n);
+		nb_tx = (uint16_t)(nb_tx + ret);
+		nb_pkts = (uint16_t)(nb_pkts - ret);
+		if (ret < n)
+			break;
+	}
+
+	return nb_tx;
+}
+
 #ifndef DEFAULT_TX_FREE_THRESH
 #define DEFAULT_TX_FREE_THRESH 32
 #endif
 
+/*********************************************************************
+ *
+ *  RX functions
+ *
+ **********************************************************************/
+static inline uint32_t
+ngbe_rxd_pkt_info_to_pkt_type(uint32_t pkt_info, uint16_t ptid_mask)
+{
+	uint16_t ptid = NGBE_RXD_PTID(pkt_info);
+
+	ptid &= ptid_mask;
+
+	return ngbe_decode_ptype(ptid);
+}
+
+static inline uint64_t
+ngbe_rxd_pkt_info_to_pkt_flags(uint32_t pkt_info)
+{
+	static uint64_t ip_rss_types_map[16] __rte_cache_aligned = {
+		0, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
+		0, PKT_RX_RSS_HASH, 0, PKT_RX_RSS_HASH,
+		PKT_RX_RSS_HASH, 0, 0, 0,
+		0, 0, 0,  PKT_RX_FDIR,
+	};
+	return ip_rss_types_map[NGBE_RXD_RSSTYPE(pkt_info)];
+}
+
+static inline uint64_t
+rx_desc_status_to_pkt_flags(uint32_t rx_status, uint64_t vlan_flags)
+{
+	uint64_t pkt_flags;
+
+	/*
+	 * Check only whether a VLAN is present.
+	 * Do not check whether the L3/L4 Rx checksum was done by the NIC;
+	 * that can be found from the rte_eth_rxmode.offloads flag.
+	 */
+	pkt_flags = (rx_status & NGBE_RXD_STAT_VLAN &&
+		     vlan_flags & PKT_RX_VLAN_STRIPPED)
+		    ? vlan_flags : 0;
+
+	return pkt_flags;
+}
+
+static inline uint64_t
+rx_desc_error_to_pkt_flags(uint32_t rx_status)
+{
+	uint64_t pkt_flags = 0;
+
+	/* checksum offload can't be disabled */
+	if (rx_status & NGBE_RXD_STAT_IPCS) {
+		pkt_flags |= (rx_status & NGBE_RXD_ERR_IPCS
+				? PKT_RX_IP_CKSUM_BAD : PKT_RX_IP_CKSUM_GOOD);
+	}
+
+	if (rx_status & NGBE_RXD_STAT_L4CS) {
+		pkt_flags |= (rx_status & NGBE_RXD_ERR_L4CS
+				? PKT_RX_L4_CKSUM_BAD : PKT_RX_L4_CKSUM_GOOD);
+	}
+
+	if (rx_status & NGBE_RXD_STAT_EIPCS &&
+	    rx_status & NGBE_RXD_ERR_EIPCS) {
+		pkt_flags |= PKT_RX_OUTER_IP_CKSUM_BAD;
+	}
+
+
+	return pkt_flags;
+}
+
+uint16_t
+ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+		uint16_t nb_pkts)
+{
+	struct ngbe_rx_queue *rxq;
+	volatile struct ngbe_rx_desc *rx_ring;
+	volatile struct ngbe_rx_desc *rxdp;
+	struct ngbe_rx_entry *sw_ring;
+	struct ngbe_rx_entry *rxe;
+	struct rte_mbuf *rxm;
+	struct rte_mbuf *nmb;
+	struct ngbe_rx_desc rxd;
+	uint64_t dma_addr;
+	uint32_t staterr;
+	uint32_t pkt_info;
+	uint16_t pkt_len;
+	uint16_t rx_id;
+	uint16_t nb_rx;
+	uint16_t nb_hold;
+	uint64_t pkt_flags;
+
+	nb_rx = 0;
+	nb_hold = 0;
+	rxq = rx_queue;
+	rx_id = rxq->rx_tail;
+	rx_ring = rxq->rx_ring;
+	sw_ring = rxq->sw_ring;
+	struct rte_eth_dev *dev = &rte_eth_devices[rxq->port_id];
+	while (nb_rx < nb_pkts) {
+		/*
+		 * The order of operations here is important as the DD status
+		 * bit must not be read after any other descriptor fields.
+		 * rx_ring and rxdp are pointing to volatile data so the order
+		 * of accesses cannot be reordered by the compiler. If they were
+		 * not volatile, they could be reordered which could lead to
+		 * using invalid descriptor fields when read from rxd.
+		 */
+		rxdp = &rx_ring[rx_id];
+		staterr = rxdp->qw1.lo.status;
+		if (!(staterr & rte_cpu_to_le_32(NGBE_RXD_STAT_DD)))
+			break;
+		rxd = *rxdp;
+
+		/*
+		 * End of packet.
+		 *
+		 * If the NGBE_RXD_STAT_EOP flag is not set, the RX packet
+		 * is likely to be invalid and to be dropped by the various
+		 * validation checks performed by the network stack.
+		 *
+		 * Allocate a new mbuf to replenish the RX ring descriptor.
+		 * If the allocation fails:
+		 *    - arrange for that RX descriptor to be the first one
+		 *      being parsed the next time the receive function is
+		 *      invoked [on the same queue].
+		 *
+		 *    - Stop parsing the RX ring and return immediately.
+		 *
+		 * This policy does not drop the packet received in the RX
+		 * descriptor for which the allocation of a new mbuf failed.
+		 * Thus, it allows that packet to be retrieved later if
+		 * mbufs have been freed in the meantime.
+		 * As a side effect, holding RX descriptors instead of
+		 * systematically giving them back to the NIC may lead to
+		 * RX ring exhaustion situations.
+		 * However, the NIC can gracefully prevent such situations
+		 * from happening by sending specific "back-pressure" flow
+		 * control frames to its peer(s).
+		 */
+		PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_id=%u "
+			   "ext_err_stat=0x%08x pkt_len=%u",
+			   (uint16_t)rxq->port_id, (uint16_t)rxq->queue_id,
+			   (uint16_t)rx_id, (uint32_t)staterr,
+			   (uint16_t)rte_le_to_cpu_16(rxd.qw1.hi.len));
+
+		nmb = rte_mbuf_raw_alloc(rxq->mb_pool);
+		if (nmb == NULL) {
+			PMD_RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u "
+				   "queue_id=%u", (uint16_t)rxq->port_id,
+				   (uint16_t)rxq->queue_id);
+			dev->data->rx_mbuf_alloc_failed++;
+			break;
+		}
+
+		nb_hold++;
+		rxe = &sw_ring[rx_id];
+		rx_id++;
+		if (rx_id == rxq->nb_rx_desc)
+			rx_id = 0;
+
+		/* Prefetch next mbuf while processing current one. */
+		rte_ngbe_prefetch(sw_ring[rx_id].mbuf);
+
+		/*
+		 * When next RX descriptor is on a cache-line boundary,
+		 * prefetch the next 4 RX descriptors and the next 8 pointers
+		 * to mbufs.
+		 */
+		if ((rx_id & 0x3) == 0) {
+			rte_ngbe_prefetch(&rx_ring[rx_id]);
+			rte_ngbe_prefetch(&sw_ring[rx_id]);
+		}
+
+		rxm = rxe->mbuf;
+		rxe->mbuf = nmb;
+		dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
+		NGBE_RXD_HDRADDR(rxdp, 0);
+		NGBE_RXD_PKTADDR(rxdp, dma_addr);
+
+		/*
+		 * Initialize the returned mbuf.
+		 * 1) setup generic mbuf fields:
+		 *    - number of segments,
+		 *    - next segment,
+		 *    - packet length,
+		 *    - RX port identifier.
+		 * 2) integrate hardware offload data, if any:
+		 *    - RSS flag & hash,
+		 *    - IP checksum flag,
+		 *    - VLAN TCI, if any,
+		 *    - error flags.
+		 */
+		pkt_len = (uint16_t)(rte_le_to_cpu_16(rxd.qw1.hi.len) -
+				      rxq->crc_len);
+		rxm->data_off = RTE_PKTMBUF_HEADROOM;
+		rte_packet_prefetch((char *)rxm->buf_addr + rxm->data_off);
+		rxm->nb_segs = 1;
+		rxm->next = NULL;
+		rxm->pkt_len = pkt_len;
+		rxm->data_len = pkt_len;
+		rxm->port = rxq->port_id;
+
+		pkt_info = rte_le_to_cpu_32(rxd.qw0.dw0);
+		/* Only valid if PKT_RX_VLAN set in pkt_flags */
+		rxm->vlan_tci = rte_le_to_cpu_16(rxd.qw1.hi.tag);
+
+		pkt_flags = rx_desc_status_to_pkt_flags(staterr,
+					rxq->vlan_flags);
+		pkt_flags |= rx_desc_error_to_pkt_flags(staterr);
+		pkt_flags |= ngbe_rxd_pkt_info_to_pkt_flags(pkt_info);
+		rxm->ol_flags = pkt_flags;
+		rxm->packet_type = ngbe_rxd_pkt_info_to_pkt_type(pkt_info,
+						       rxq->pkt_type_mask);
+
+		if (likely(pkt_flags & PKT_RX_RSS_HASH))
+			rxm->hash.rss = rte_le_to_cpu_32(rxd.qw0.dw1);
+
+		/*
+		 * Store the mbuf address into the next entry of the array
+		 * of returned packets.
+		 */
+		rx_pkts[nb_rx++] = rxm;
+	}
+	rxq->rx_tail = rx_id;
+
+	/*
+	 * If the number of free RX descriptors is greater than the RX free
+	 * threshold of the queue, advance the Receive Descriptor Tail (RDT)
+	 * register.
+	 * Update the RDT with the value of the last processed RX descriptor
+	 * minus 1, to guarantee that the RDT register is never equal to the
+	 * RDH register, which creates a "full" ring situation from the
+	 * hardware point of view...
+	 */
+	nb_hold = (uint16_t)(nb_hold + rxq->nb_rx_hold);
+	if (nb_hold > rxq->rx_free_thresh) {
+		PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_tail=%u "
+			   "nb_hold=%u nb_rx=%u",
+			   (uint16_t)rxq->port_id, (uint16_t)rxq->queue_id,
+			   (uint16_t)rx_id, (uint16_t)nb_hold,
+			   (uint16_t)nb_rx);
+		rx_id = (uint16_t)((rx_id == 0) ?
+				(rxq->nb_rx_desc - 1) : (rx_id - 1));
+		ngbe_set32(rxq->rdt_reg_addr, rx_id);
+		nb_hold = 0;
+	}
+	rxq->nb_rx_hold = nb_hold;
+	return nb_rx;
+}
+
 /*********************************************************************
  *
  *  Queue management functions
diff --git a/drivers/net/ngbe/ngbe_rxtx.h b/drivers/net/ngbe/ngbe_rxtx.h
index f30da10ae3..d6b9127cb4 100644
--- a/drivers/net/ngbe/ngbe_rxtx.h
+++ b/drivers/net/ngbe/ngbe_rxtx.h
@@ -43,6 +43,85 @@ struct ngbe_rx_desc {
 	} qw1; /* also as r.hdr_addr */
 };
 
+/* @ngbe_rx_desc.qw0 */
+#define NGBE_RXD_PKTADDR(rxd, v)  \
+	(((volatile __le64 *)(rxd))[0] = cpu_to_le64(v))
+
+/* @ngbe_rx_desc.qw1 */
+#define NGBE_RXD_HDRADDR(rxd, v)  \
+	(((volatile __le64 *)(rxd))[1] = cpu_to_le64(v))
+
+/* @ngbe_rx_desc.dw0 */
+#define NGBE_RXD_RSSTYPE(dw)      RS(dw, 0, 0xF)
+#define   NGBE_RSSTYPE_NONE       0
+#define   NGBE_RSSTYPE_IPV4TCP    1
+#define   NGBE_RSSTYPE_IPV4       2
+#define   NGBE_RSSTYPE_IPV6TCP    3
+#define   NGBE_RSSTYPE_IPV4SCTP   4
+#define   NGBE_RSSTYPE_IPV6       5
+#define   NGBE_RSSTYPE_IPV6SCTP   6
+#define   NGBE_RSSTYPE_IPV4UDP    7
+#define   NGBE_RSSTYPE_IPV6UDP    8
+#define   NGBE_RSSTYPE_FDIR       15
+#define NGBE_RXD_SECTYPE(dw)      RS(dw, 4, 0x3)
+#define NGBE_RXD_SECTYPE_NONE     LS(0, 4, 0x3)
+#define NGBE_RXD_SECTYPE_IPSECESP LS(2, 4, 0x3)
+#define NGBE_RXD_SECTYPE_IPSECAH  LS(3, 4, 0x3)
+#define NGBE_RXD_TPIDSEL(dw)      RS(dw, 6, 0x7)
+#define NGBE_RXD_PTID(dw)         RS(dw, 9, 0xFF)
+#define NGBE_RXD_RSCCNT(dw)       RS(dw, 17, 0xF)
+#define NGBE_RXD_HDRLEN(dw)       RS(dw, 21, 0x3FF)
+#define NGBE_RXD_SPH              MS(31, 0x1)
+
+/* @ngbe_rx_desc.dw1 */
+/** bit 0-31, as rss hash when  **/
+#define NGBE_RXD_RSSHASH(rxd)     ((rxd)->qw0.dw1)
+
+/** bit 0-31, as ip csum when  **/
+#define NGBE_RXD_IPID(rxd)        ((rxd)->qw0.hi.ipid)
+#define NGBE_RXD_CSUM(rxd)        ((rxd)->qw0.hi.csum)
+
+/* @ngbe_rx_desc.dw2 */
+#define NGBE_RXD_STATUS(rxd)      ((rxd)->qw1.lo.status)
+/** bit 0-1 **/
+#define NGBE_RXD_STAT_DD          MS(0, 0x1) /* Descriptor Done */
+#define NGBE_RXD_STAT_EOP         MS(1, 0x1) /* End of Packet */
+/** bit 2-31, when EOP=0 **/
+#define NGBE_RXD_NEXTP_RESV(v)    LS(v, 2, 0x3)
+#define NGBE_RXD_NEXTP(dw)        RS(dw, 4, 0xFFFF) /* Next Descriptor */
+/** bit 2-31, when EOP=1 **/
+#define NGBE_RXD_PKT_CLS_MASK     MS(2, 0x7) /* Packet Class */
+#define NGBE_RXD_PKT_CLS_TC_RSS   LS(0, 2, 0x7) /* RSS Hash */
+#define NGBE_RXD_PKT_CLS_FLM      LS(1, 2, 0x7) /* FDir Match */
+#define NGBE_RXD_PKT_CLS_SYN      LS(2, 2, 0x7) /* TCP Sync */
+#define NGBE_RXD_PKT_CLS_5TUPLE   LS(3, 2, 0x7) /* 5 Tuple */
+#define NGBE_RXD_PKT_CLS_ETF      LS(4, 2, 0x7) /* Ethertype Filter */
+#define NGBE_RXD_STAT_VLAN        MS(5, 0x1) /* IEEE VLAN Packet */
+#define NGBE_RXD_STAT_UDPCS       MS(6, 0x1) /* UDP xsum calculated */
+#define NGBE_RXD_STAT_L4CS        MS(7, 0x1) /* L4 xsum calculated */
+#define NGBE_RXD_STAT_IPCS        MS(8, 0x1) /* IP xsum calculated */
+#define NGBE_RXD_STAT_PIF         MS(9, 0x1) /* Non-unicast address */
+#define NGBE_RXD_STAT_EIPCS       MS(10, 0x1) /* Encap IP xsum calculated */
+#define NGBE_RXD_STAT_VEXT        MS(11, 0x1) /* Multi-VLAN */
+#define NGBE_RXD_STAT_IPV6EX      MS(12, 0x1) /* IPv6 with option header */
+#define NGBE_RXD_STAT_LLINT       MS(13, 0x1) /* Pkt caused LLI */
+#define NGBE_RXD_STAT_1588        MS(14, 0x1) /* IEEE1588 Time Stamp */
+#define NGBE_RXD_STAT_SECP        MS(15, 0x1) /* Security Processing */
+#define NGBE_RXD_STAT_LB          MS(16, 0x1) /* Loopback Status */
+/*** bit 17-30, when PTYPE=IP ***/
+#define NGBE_RXD_STAT_BMC         MS(17, 0x1) /* PTYPE=IP, BMC status */
+#define NGBE_RXD_ERR_HBO          MS(23, 0x1) /* Header Buffer Overflow */
+#define NGBE_RXD_ERR_EIPCS        MS(26, 0x1) /* Encap IP header error */
+#define NGBE_RXD_ERR_SECERR       MS(27, 0x1) /* macsec or ipsec error */
+#define NGBE_RXD_ERR_RXE          MS(29, 0x1) /* Any MAC Error */
+#define NGBE_RXD_ERR_L4CS         MS(30, 0x1) /* TCP/UDP xsum error */
+#define NGBE_RXD_ERR_IPCS         MS(31, 0x1) /* IP xsum error */
+#define NGBE_RXD_ERR_CSUM(dw)     RS(dw, 30, 0x3)
+
+/* @ngbe_rx_desc.dw3 */
+#define NGBE_RXD_LENGTH(rxd)           ((rxd)->qw1.hi.len)
+#define NGBE_RXD_VLAN(rxd)             ((rxd)->qw1.hi.tag)
+
 /*****************************************************************************
  * Transmit Descriptor
  *****************************************************************************/
@@ -68,11 +147,40 @@ struct ngbe_tx_desc {
 	__le32 dw3; /* r.olinfo_status, w.status      */
 };
 
+/* @ngbe_tx_desc.dw2 */
+#define NGBE_TXD_DATLEN(v)        ((0xFFFF & (v))) /* data buffer length */
+#define NGBE_TXD_1588             ((0x1) << 19) /* IEEE1588 time stamp */
+#define NGBE_TXD_DATA             ((0x0) << 20) /* data descriptor */
+#define NGBE_TXD_EOP              ((0x1) << 24) /* End of Packet */
+#define NGBE_TXD_FCS              ((0x1) << 25) /* Insert FCS */
+#define NGBE_TXD_LINKSEC          ((0x1) << 26) /* Insert LinkSec */
+#define NGBE_TXD_ECU              ((0x1) << 28) /* forward to ECU */
+#define NGBE_TXD_CNTAG            ((0x1) << 29) /* insert CN tag */
+#define NGBE_TXD_VLE              ((0x1) << 30) /* insert VLAN tag */
+#define NGBE_TXD_TSE              ((0x1) << 31) /* transmit segmentation */
+
+#define NGBE_TXD_FLAGS (NGBE_TXD_FCS | NGBE_TXD_EOP)
+
+/* @ngbe_tx_desc.dw3 */
+#define NGBE_TXD_DD_UNUSED        NGBE_TXD_DD
+#define NGBE_TXD_IDX_UNUSED(v)    NGBE_TXD_IDX(v)
+#define NGBE_TXD_CC               ((0x1) << 7) /* check context */
+#define NGBE_TXD_IPSEC            ((0x1) << 8) /* request ipsec offload */
+#define NGBE_TXD_L4CS             ((0x1) << 9) /* insert TCP/UDP/SCTP csum */
+#define NGBE_TXD_IPCS             ((0x1) << 10) /* insert IPv4 csum */
+#define NGBE_TXD_EIPCS            ((0x1) << 11) /* insert outer IP csum */
+#define NGBE_TXD_MNGFLT           ((0x1) << 12) /* enable management filter */
+#define NGBE_TXD_PAYLEN(v)        ((0x7FFFF & (v)) << 13) /* payload length */
+
+#define RTE_PMD_NGBE_TX_MAX_BURST 32
 #define RTE_PMD_NGBE_RX_MAX_BURST 32
+#define RTE_NGBE_TX_MAX_FREE_BUF_SZ 64
 
 #define RX_RING_SZ ((NGBE_RING_DESC_MAX + RTE_PMD_NGBE_RX_MAX_BURST) * \
 		    sizeof(struct ngbe_rx_desc))
 
+#define rte_packet_prefetch(p)  rte_prefetch1(p)
+
 #define NGBE_TX_MAX_SEG                    40
 
 /**
@@ -124,6 +232,8 @@ struct ngbe_rx_queue {
 	uint8_t             crc_len;  /**< 0 if CRC stripped, 4 otherwise. */
 	uint8_t             drop_en;  /**< If not 0, set SRRCTL.Drop_En. */
 	uint8_t             rx_deferred_start; /**< not in global dev start. */
+	/** flags to set in mbuf when a vlan is detected. */
+	uint64_t            vlan_flags;
 	uint64_t	    offloads; /**< Rx offloads with DEV_RX_OFFLOAD_* */
 	/** need to alloc dummy mbuf, for wraparound when scanning hw ring */
 	struct rte_mbuf fake_mbuf;
-- 
2.27.0





* [dpdk-dev] [PATCH v5 20/24] net/ngbe: support bulk and scatter Rx
  2021-06-02  9:40 [dpdk-dev] [PATCH v5 00/24] net: ngbe PMD Jiawen Wu
                   ` (18 preceding siblings ...)
  2021-06-02  9:41 ` [dpdk-dev] [PATCH v5 19/24] net/ngbe: add simple Rx and Tx flow Jiawen Wu
@ 2021-06-02  9:41 ` Jiawen Wu
  2021-06-14 19:17   ` Andrew Rybchenko
  2021-06-02  9:41 ` [dpdk-dev] [PATCH v5 21/24] net/ngbe: support full-featured Tx path Jiawen Wu
                   ` (5 subsequent siblings)
  25 siblings, 1 reply; 51+ messages in thread
From: Jiawen Wu @ 2021-06-02  9:41 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Add a bulk allocation receive function, and support scattered Rx
depending on the Rx offload configuration.
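
For illustration, a minimal sketch of how an application might request
scattered Rx so that the PMD selects one of the scattered receive
callbacks; the single-queue layout and descriptor count are placeholder
assumptions:

#include <rte_ethdev.h>

static int
configure_scattered_rx(uint16_t port_id, struct rte_mempool *mb_pool)
{
	struct rte_eth_conf conf = { 0 };
	int ret;

	/* Request multi-segment receive; the PMD then picks a scattered
	 * Rx burst function in ngbe_set_rx_function(). */
	conf.rxmode.offloads |= DEV_RX_OFFLOAD_SCATTER;

	ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
	if (ret != 0)
		return ret;

	/* 128 descriptors, default queue config, NUMA socket 0. */
	return rte_eth_rx_queue_setup(port_id, 0, 128, 0, NULL, mb_pool);
}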

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 doc/guides/nics/ngbe.rst       |   1 +
 drivers/net/ngbe/ngbe_ethdev.c |  15 +-
 drivers/net/ngbe/ngbe_ethdev.h |   8 +
 drivers/net/ngbe/ngbe_rxtx.c   | 583 +++++++++++++++++++++++++++++++++
 drivers/net/ngbe/ngbe_rxtx.h   |   2 +
 5 files changed, 607 insertions(+), 2 deletions(-)

diff --git a/doc/guides/nics/ngbe.rst b/doc/guides/nics/ngbe.rst
index 04fa3e90a8..e999e0b580 100644
--- a/doc/guides/nics/ngbe.rst
+++ b/doc/guides/nics/ngbe.rst
@@ -14,6 +14,7 @@ Features
 - Checksum offload
 - Jumbo frames
 - Link state information
+- Scatter-gather for Rx
 
 Prerequisites
 -------------
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 4dab920caa..260bca0e4f 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -112,8 +112,16 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
 	eth_dev->rx_pkt_burst = &ngbe_recv_pkts;
 	eth_dev->tx_pkt_burst = &ngbe_xmit_pkts_simple;
 
-	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+	/*
+	 * For secondary processes, we don't initialise any further, as the
+	 * primary has already done this work. Only check whether we need a
+	 * different RX and TX function.
+	 */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		ngbe_set_rx_function(eth_dev);
+
 		return 0;
+	}
 
 	rte_eth_copy_pci_info(eth_dev, pci_dev);
 
@@ -359,7 +367,10 @@ ngbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 const uint32_t *
 ngbe_dev_supported_ptypes_get(struct rte_eth_dev *dev)
 {
-	if (dev->rx_pkt_burst == ngbe_recv_pkts)
+	if (dev->rx_pkt_burst == ngbe_recv_pkts ||
+	    dev->rx_pkt_burst == ngbe_recv_pkts_sc_single_alloc ||
+	    dev->rx_pkt_burst == ngbe_recv_pkts_sc_bulk_alloc ||
+	    dev->rx_pkt_burst == ngbe_recv_pkts_bulk_alloc)
 		return ngbe_get_supported_ptypes();
 
 	return NULL;
diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
index c0f8483eca..1e21db5e25 100644
--- a/drivers/net/ngbe/ngbe_ethdev.h
+++ b/drivers/net/ngbe/ngbe_ethdev.h
@@ -78,6 +78,14 @@ void ngbe_dev_tx_init(struct rte_eth_dev *dev);
 uint16_t ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		uint16_t nb_pkts);
 
+uint16_t ngbe_recv_pkts_bulk_alloc(void *rx_queue, struct rte_mbuf **rx_pkts,
+				    uint16_t nb_pkts);
+
+uint16_t ngbe_recv_pkts_sc_single_alloc(void *rx_queue,
+		struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
+uint16_t ngbe_recv_pkts_sc_bulk_alloc(void *rx_queue,
+		struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
+
 uint16_t ngbe_xmit_pkts_simple(void *tx_queue, struct rte_mbuf **tx_pkts,
 		uint16_t nb_pkts);
 
diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c
index 9462da5b7a..f633718237 100644
--- a/drivers/net/ngbe/ngbe_rxtx.c
+++ b/drivers/net/ngbe/ngbe_rxtx.c
@@ -321,6 +321,257 @@ rx_desc_error_to_pkt_flags(uint32_t rx_status)
 	return pkt_flags;
 }
 
+/*
+ * LOOK_AHEAD defines how many desc statuses to check beyond the
+ * current descriptor.
+ * It must be a pound define for optimal performance.
+ * Do not change the value of LOOK_AHEAD, as the ngbe_rx_scan_hw_ring
+ * function only works with LOOK_AHEAD=8.
+ */
+#define LOOK_AHEAD 8
+#if (LOOK_AHEAD != 8)
+#error "PMD NGBE: LOOK_AHEAD must be 8\n"
+#endif
+static inline int
+ngbe_rx_scan_hw_ring(struct ngbe_rx_queue *rxq)
+{
+	volatile struct ngbe_rx_desc *rxdp;
+	struct ngbe_rx_entry *rxep;
+	struct rte_mbuf *mb;
+	uint16_t pkt_len;
+	uint64_t pkt_flags;
+	int nb_dd;
+	uint32_t s[LOOK_AHEAD];
+	uint32_t pkt_info[LOOK_AHEAD];
+	int i, j, nb_rx = 0;
+	uint32_t status;
+
+	/* get references to current descriptor and S/W ring entry */
+	rxdp = &rxq->rx_ring[rxq->rx_tail];
+	rxep = &rxq->sw_ring[rxq->rx_tail];
+
+	status = rxdp->qw1.lo.status;
+	/* check to make sure there is at least 1 packet to receive */
+	if (!(status & rte_cpu_to_le_32(NGBE_RXD_STAT_DD)))
+		return 0;
+
+	/*
+	 * Scan LOOK_AHEAD descriptors at a time to determine which descriptors
+	 * reference packets that are ready to be received.
+	 */
+	for (i = 0; i < RTE_PMD_NGBE_RX_MAX_BURST;
+	     i += LOOK_AHEAD, rxdp += LOOK_AHEAD, rxep += LOOK_AHEAD) {
+		/* Read desc statuses backwards to avoid race condition */
+		for (j = 0; j < LOOK_AHEAD; j++)
+			s[j] = rte_le_to_cpu_32(rxdp[j].qw1.lo.status);
+
+		rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
+
+		/* Compute how many status bits were set */
+		for (nb_dd = 0; nb_dd < LOOK_AHEAD &&
+				(s[nb_dd] & NGBE_RXD_STAT_DD); nb_dd++)
+			;
+
+		for (j = 0; j < nb_dd; j++)
+			pkt_info[j] = rte_le_to_cpu_32(rxdp[j].qw0.dw0);
+
+		nb_rx += nb_dd;
+
+		/* Translate descriptor info to mbuf format */
+		for (j = 0; j < nb_dd; ++j) {
+			mb = rxep[j].mbuf;
+			pkt_len = rte_le_to_cpu_16(rxdp[j].qw1.hi.len) -
+				  rxq->crc_len;
+			mb->data_len = pkt_len;
+			mb->pkt_len = pkt_len;
+			mb->vlan_tci = rte_le_to_cpu_16(rxdp[j].qw1.hi.tag);
+
+			/* convert descriptor fields to rte mbuf flags */
+			pkt_flags = rx_desc_status_to_pkt_flags(s[j],
+					rxq->vlan_flags);
+			pkt_flags |= rx_desc_error_to_pkt_flags(s[j]);
+			pkt_flags |=
+				ngbe_rxd_pkt_info_to_pkt_flags(pkt_info[j]);
+			mb->ol_flags = pkt_flags;
+			mb->packet_type =
+				ngbe_rxd_pkt_info_to_pkt_type(pkt_info[j],
+				rxq->pkt_type_mask);
+
+			if (likely(pkt_flags & PKT_RX_RSS_HASH))
+				mb->hash.rss =
+					rte_le_to_cpu_32(rxdp[j].qw0.dw1);
+		}
+
+		/* Move mbuf pointers from the S/W ring to the stage */
+		for (j = 0; j < LOOK_AHEAD; ++j)
+			rxq->rx_stage[i + j] = rxep[j].mbuf;
+
+		/* stop if all requested packets could not be received */
+		if (nb_dd != LOOK_AHEAD)
+			break;
+	}
+
+	/* clear software ring entries so we can cleanup correctly */
+	for (i = 0; i < nb_rx; ++i)
+		rxq->sw_ring[rxq->rx_tail + i].mbuf = NULL;
+
+	return nb_rx;
+}
+
+static inline int
+ngbe_rx_alloc_bufs(struct ngbe_rx_queue *rxq, bool reset_mbuf)
+{
+	volatile struct ngbe_rx_desc *rxdp;
+	struct ngbe_rx_entry *rxep;
+	struct rte_mbuf *mb;
+	uint16_t alloc_idx;
+	__le64 dma_addr;
+	int diag, i;
+
+	/* allocate buffers in bulk directly into the S/W ring */
+	alloc_idx = rxq->rx_free_trigger - (rxq->rx_free_thresh - 1);
+	rxep = &rxq->sw_ring[alloc_idx];
+	diag = rte_mempool_get_bulk(rxq->mb_pool, (void *)rxep,
+				    rxq->rx_free_thresh);
+	if (unlikely(diag != 0))
+		return -ENOMEM;
+
+	rxdp = &rxq->rx_ring[alloc_idx];
+	for (i = 0; i < rxq->rx_free_thresh; ++i) {
+		/* populate the static rte mbuf fields */
+		mb = rxep[i].mbuf;
+		if (reset_mbuf)
+			mb->port = rxq->port_id;
+
+		rte_mbuf_refcnt_set(mb, 1);
+		mb->data_off = RTE_PKTMBUF_HEADROOM;
+
+		/* populate the descriptors */
+		dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mb));
+		NGBE_RXD_HDRADDR(&rxdp[i], 0);
+		NGBE_RXD_PKTADDR(&rxdp[i], dma_addr);
+	}
+
+	/* update state of internal queue structure */
+	rxq->rx_free_trigger = rxq->rx_free_trigger + rxq->rx_free_thresh;
+	if (rxq->rx_free_trigger >= rxq->nb_rx_desc)
+		rxq->rx_free_trigger = rxq->rx_free_thresh - 1;
+
+	/* no errors */
+	return 0;
+}
+
+static inline uint16_t
+ngbe_rx_fill_from_stage(struct ngbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
+			 uint16_t nb_pkts)
+{
+	struct rte_mbuf **stage = &rxq->rx_stage[rxq->rx_next_avail];
+	int i;
+
+	/* how many packets are ready to return? */
+	nb_pkts = (uint16_t)RTE_MIN(nb_pkts, rxq->rx_nb_avail);
+
+	/* copy mbuf pointers to the application's packet list */
+	for (i = 0; i < nb_pkts; ++i)
+		rx_pkts[i] = stage[i];
+
+	/* update internal queue state */
+	rxq->rx_nb_avail = (uint16_t)(rxq->rx_nb_avail - nb_pkts);
+	rxq->rx_next_avail = (uint16_t)(rxq->rx_next_avail + nb_pkts);
+
+	return nb_pkts;
+}
+
+static inline uint16_t
+ngbe_rx_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+	     uint16_t nb_pkts)
+{
+	struct ngbe_rx_queue *rxq = (struct ngbe_rx_queue *)rx_queue;
+	struct rte_eth_dev *dev = &rte_eth_devices[rxq->port_id];
+	uint16_t nb_rx = 0;
+
+	/* Any previously recv'd pkts will be returned from the Rx stage */
+	if (rxq->rx_nb_avail)
+		return ngbe_rx_fill_from_stage(rxq, rx_pkts, nb_pkts);
+
+	/* Scan the H/W ring for packets to receive */
+	nb_rx = (uint16_t)ngbe_rx_scan_hw_ring(rxq);
+
+	/* update internal queue state */
+	rxq->rx_next_avail = 0;
+	rxq->rx_nb_avail = nb_rx;
+	rxq->rx_tail = (uint16_t)(rxq->rx_tail + nb_rx);
+
+	/* if required, allocate new buffers to replenish descriptors */
+	if (rxq->rx_tail > rxq->rx_free_trigger) {
+		uint16_t cur_free_trigger = rxq->rx_free_trigger;
+
+		if (ngbe_rx_alloc_bufs(rxq, true) != 0) {
+			int i, j;
+
+			PMD_RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u "
+				   "queue_id=%u", (uint16_t)rxq->port_id,
+				   (uint16_t)rxq->queue_id);
+
+			dev->data->rx_mbuf_alloc_failed +=
+				rxq->rx_free_thresh;
+
+			/*
+			 * Need to rewind any previous receives if we cannot
+			 * allocate new buffers to replenish the old ones.
+			 */
+			rxq->rx_nb_avail = 0;
+			rxq->rx_tail = (uint16_t)(rxq->rx_tail - nb_rx);
+			for (i = 0, j = rxq->rx_tail; i < nb_rx; ++i, ++j)
+				rxq->sw_ring[j].mbuf = rxq->rx_stage[i];
+
+			return 0;
+		}
+
+		/* update tail pointer */
+		rte_wmb();
+		ngbe_set32_relaxed(rxq->rdt_reg_addr, cur_free_trigger);
+	}
+
+	if (rxq->rx_tail >= rxq->nb_rx_desc)
+		rxq->rx_tail = 0;
+
+	/* received any packets this loop? */
+	if (rxq->rx_nb_avail)
+		return ngbe_rx_fill_from_stage(rxq, rx_pkts, nb_pkts);
+
+	return 0;
+}
+
+/* split requests into chunks of size RTE_PMD_NGBE_RX_MAX_BURST */
+uint16_t
+ngbe_recv_pkts_bulk_alloc(void *rx_queue, struct rte_mbuf **rx_pkts,
+			   uint16_t nb_pkts)
+{
+	uint16_t nb_rx;
+
+	if (unlikely(nb_pkts == 0))
+		return 0;
+
+	if (likely(nb_pkts <= RTE_PMD_NGBE_RX_MAX_BURST))
+		return ngbe_rx_recv_pkts(rx_queue, rx_pkts, nb_pkts);
+
+	/* request is relatively large, chunk it up */
+	nb_rx = 0;
+	while (nb_pkts) {
+		uint16_t ret, n;
+
+		n = (uint16_t)RTE_MIN(nb_pkts, RTE_PMD_NGBE_RX_MAX_BURST);
+		ret = ngbe_rx_recv_pkts(rx_queue, &rx_pkts[nb_rx], n);
+		nb_rx = (uint16_t)(nb_rx + ret);
+		nb_pkts = (uint16_t)(nb_pkts - ret);
+		if (ret < n)
+			break;
+	}
+
+	return nb_rx;
+}
+
 uint16_t
 ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		uint16_t nb_pkts)
@@ -501,6 +752,288 @@ ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	return nb_rx;
 }
 
+/**
+ * ngbe_fill_cluster_head_buf - fill the first mbuf of the returned packet
+ *
+ * Fill the following info in the HEAD buffer of the Rx cluster:
+ *    - RX port identifier
+ *    - hardware offload data, if any:
+ *      - RSS flag & hash
+ *      - IP checksum flag
+ *      - VLAN TCI, if any
+ *      - error flags
+ * @head HEAD of the packet cluster
+ * @desc HW descriptor to get data from
+ * @rxq Pointer to the Rx queue
+ */
+static inline void
+ngbe_fill_cluster_head_buf(struct rte_mbuf *head, struct ngbe_rx_desc *desc,
+		struct ngbe_rx_queue *rxq, uint32_t staterr)
+{
+	uint32_t pkt_info;
+	uint64_t pkt_flags;
+
+	head->port = rxq->port_id;
+
+	/* The vlan_tci field is only valid when PKT_RX_VLAN is
+	 * set in the pkt_flags field.
+	 */
+	head->vlan_tci = rte_le_to_cpu_16(desc->qw1.hi.tag);
+	pkt_info = rte_le_to_cpu_32(desc->qw0.dw0);
+	pkt_flags = rx_desc_status_to_pkt_flags(staterr, rxq->vlan_flags);
+	pkt_flags |= rx_desc_error_to_pkt_flags(staterr);
+	pkt_flags |= ngbe_rxd_pkt_info_to_pkt_flags(pkt_info);
+	head->ol_flags = pkt_flags;
+	head->packet_type = ngbe_rxd_pkt_info_to_pkt_type(pkt_info,
+						rxq->pkt_type_mask);
+
+	if (likely(pkt_flags & PKT_RX_RSS_HASH))
+		head->hash.rss = rte_le_to_cpu_32(desc->qw0.dw1);
+}
+
+/**
+ * ngbe_recv_pkts_sc - receive handler for scatter case.
+ *
+ * @rx_queue Rx queue handle
+ * @rx_pkts table of received packets
+ * @nb_pkts size of rx_pkts table
+ * @bulk_alloc if TRUE bulk allocation is used for a HW ring refilling
+ *
+ * Returns the number of received packets/clusters (according to the "bulk
+ * receive" interface).
+ */
+static inline uint16_t
+ngbe_recv_pkts_sc(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts,
+		    bool bulk_alloc)
+{
+	struct ngbe_rx_queue *rxq = rx_queue;
+	struct rte_eth_dev *dev = &rte_eth_devices[rxq->port_id];
+	volatile struct ngbe_rx_desc *rx_ring = rxq->rx_ring;
+	struct ngbe_rx_entry *sw_ring = rxq->sw_ring;
+	struct ngbe_scattered_rx_entry *sw_sc_ring = rxq->sw_sc_ring;
+	uint16_t rx_id = rxq->rx_tail;
+	uint16_t nb_rx = 0;
+	uint16_t nb_hold = rxq->nb_rx_hold;
+	uint16_t prev_id = rxq->rx_tail;
+
+	while (nb_rx < nb_pkts) {
+		bool eop;
+		struct ngbe_rx_entry *rxe;
+		struct ngbe_scattered_rx_entry *sc_entry;
+		struct ngbe_scattered_rx_entry *next_sc_entry = NULL;
+		struct ngbe_rx_entry *next_rxe = NULL;
+		struct rte_mbuf *first_seg;
+		struct rte_mbuf *rxm;
+		struct rte_mbuf *nmb = NULL;
+		struct ngbe_rx_desc rxd;
+		uint16_t data_len;
+		uint16_t next_id;
+		volatile struct ngbe_rx_desc *rxdp;
+		uint32_t staterr;
+
+next_desc:
+		rxdp = &rx_ring[rx_id];
+		staterr = rte_le_to_cpu_32(rxdp->qw1.lo.status);
+
+		if (!(staterr & NGBE_RXD_STAT_DD))
+			break;
+
+		rxd = *rxdp;
+
+		PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_id=%u "
+				  "staterr=0x%x data_len=%u",
+			   rxq->port_id, rxq->queue_id, rx_id, staterr,
+			   rte_le_to_cpu_16(rxd.qw1.hi.len));
+
+		if (!bulk_alloc) {
+			nmb = rte_mbuf_raw_alloc(rxq->mb_pool);
+			if (nmb == NULL) {
+				PMD_RX_LOG(DEBUG, "RX mbuf alloc failed "
+						  "port_id=%u queue_id=%u",
+					   rxq->port_id, rxq->queue_id);
+
+				dev->data->rx_mbuf_alloc_failed++;
+				break;
+			}
+		} else if (nb_hold > rxq->rx_free_thresh) {
+			uint16_t next_rdt = rxq->rx_free_trigger;
+
+			if (!ngbe_rx_alloc_bufs(rxq, false)) {
+				rte_wmb();
+				ngbe_set32_relaxed(rxq->rdt_reg_addr,
+							    next_rdt);
+				nb_hold -= rxq->rx_free_thresh;
+			} else {
+				PMD_RX_LOG(DEBUG, "RX bulk alloc failed "
+						  "port_id=%u queue_id=%u",
+					   rxq->port_id, rxq->queue_id);
+
+				dev->data->rx_mbuf_alloc_failed++;
+				break;
+			}
+		}
+
+		nb_hold++;
+		rxe = &sw_ring[rx_id];
+		eop = staterr & NGBE_RXD_STAT_EOP;
+
+		next_id = rx_id + 1;
+		if (next_id == rxq->nb_rx_desc)
+			next_id = 0;
+
+		/* Prefetch next mbuf while processing current one. */
+		rte_ngbe_prefetch(sw_ring[next_id].mbuf);
+
+		/*
+		 * When next RX descriptor is on a cache-line boundary,
+		 * prefetch the next 4 RX descriptors and the next 4 pointers
+		 * to mbufs.
+		 */
+		if ((next_id & 0x3) == 0) {
+			rte_ngbe_prefetch(&rx_ring[next_id]);
+			rte_ngbe_prefetch(&sw_ring[next_id]);
+		}
+
+		rxm = rxe->mbuf;
+
+		if (!bulk_alloc) {
+			__le64 dma =
+			  rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
+			/*
+			 * Update RX descriptor with the physical address of the
+			 * new data buffer of the new allocated mbuf.
+			 */
+			rxe->mbuf = nmb;
+
+			rxm->data_off = RTE_PKTMBUF_HEADROOM;
+			NGBE_RXD_HDRADDR(rxdp, 0);
+			NGBE_RXD_PKTADDR(rxdp, dma);
+		} else {
+			rxe->mbuf = NULL;
+		}
+
+		/*
+		 * Set data length & data buffer address of mbuf.
+		 */
+		data_len = rte_le_to_cpu_16(rxd.qw1.hi.len);
+		rxm->data_len = data_len;
+
+		if (!eop) {
+			uint16_t nextp_id;
+
+			nextp_id = next_id;
+			next_sc_entry = &sw_sc_ring[nextp_id];
+			next_rxe = &sw_ring[nextp_id];
+			rte_ngbe_prefetch(next_rxe);
+		}
+
+		sc_entry = &sw_sc_ring[rx_id];
+		first_seg = sc_entry->fbuf;
+		sc_entry->fbuf = NULL;
+
+		/*
+		 * If this is the first buffer of the received packet,
+		 * set the pointer to the first mbuf of the packet and
+		 * initialize its context.
+		 * Otherwise, update the total length and the number of segments
+		 * of the current scattered packet, and update the pointer to
+		 * the last mbuf of the current packet.
+		 */
+		if (first_seg == NULL) {
+			first_seg = rxm;
+			first_seg->pkt_len = data_len;
+			first_seg->nb_segs = 1;
+		} else {
+			first_seg->pkt_len += data_len;
+			first_seg->nb_segs++;
+		}
+
+		prev_id = rx_id;
+		rx_id = next_id;
+
+		/*
+		 * If this is not the last buffer of the received packet, update
+		 * the pointer to the first mbuf at the NEXTP entry in the
+		 * sw_sc_ring and continue to parse the RX ring.
+		 */
+		if (!eop && next_rxe) {
+			rxm->next = next_rxe->mbuf;
+			next_sc_entry->fbuf = first_seg;
+			goto next_desc;
+		}
+
+		/* Initialize the first mbuf of the returned packet */
+		ngbe_fill_cluster_head_buf(first_seg, &rxd, rxq, staterr);
+
+		/* Deal with the case when HW CRC strip is disabled. */
+		first_seg->pkt_len -= rxq->crc_len;
+		if (unlikely(rxm->data_len <= rxq->crc_len)) {
+			struct rte_mbuf *lp;
+
+			for (lp = first_seg; lp->next != rxm; lp = lp->next)
+				;
+
+			first_seg->nb_segs--;
+			lp->data_len -= rxq->crc_len - rxm->data_len;
+			lp->next = NULL;
+			rte_pktmbuf_free_seg(rxm);
+		} else {
+			rxm->data_len -= rxq->crc_len;
+		}
+
+		/* Prefetch data of first segment, if configured to do so. */
+		rte_packet_prefetch((char *)first_seg->buf_addr +
+			first_seg->data_off);
+
+		/*
+		 * Store the mbuf address into the next entry of the array
+		 * of returned packets.
+		 */
+		rx_pkts[nb_rx++] = first_seg;
+	}
+
+	/*
+	 * Record index of the next RX descriptor to probe.
+	 */
+	rxq->rx_tail = rx_id;
+
+	/*
+	 * If the number of free RX descriptors is greater than the RX free
+	 * threshold of the queue, advance the Receive Descriptor Tail (RDT)
+	 * register.
+	 * Update the RDT with the value of the last processed RX descriptor
+	 * minus 1, to guarantee that the RDT register is never equal to the
+	 * RDH register, which creates a "full" ring situation from the
+	 * hardware point of view...
+	 */
+	if (!bulk_alloc && nb_hold > rxq->rx_free_thresh) {
+		PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_tail=%u "
+			   "nb_hold=%u nb_rx=%u",
+			   rxq->port_id, rxq->queue_id, rx_id, nb_hold, nb_rx);
+
+		rte_wmb();
+		ngbe_set32_relaxed(rxq->rdt_reg_addr, prev_id);
+		nb_hold = 0;
+	}
+
+	rxq->nb_rx_hold = nb_hold;
+	return nb_rx;
+}
+
+uint16_t
+ngbe_recv_pkts_sc_single_alloc(void *rx_queue, struct rte_mbuf **rx_pkts,
+				 uint16_t nb_pkts)
+{
+	return ngbe_recv_pkts_sc(rx_queue, rx_pkts, nb_pkts, false);
+}
+
+uint16_t
+ngbe_recv_pkts_sc_bulk_alloc(void *rx_queue, struct rte_mbuf **rx_pkts,
+			       uint16_t nb_pkts)
+{
+	return ngbe_recv_pkts_sc(rx_queue, rx_pkts, nb_pkts, true);
+}
+
 /*********************************************************************
  *
  *  Queue management functions
@@ -1064,6 +1597,54 @@ ngbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
 	return 0;
 }
 
+void __rte_cold
+ngbe_set_rx_function(struct rte_eth_dev *dev)
+{
+	struct ngbe_adapter *adapter = NGBE_DEV_ADAPTER(dev);
+
+	if (dev->data->scattered_rx) {
+		/*
+		 * Set the scattered callback: there are bulk and
+		 * single allocation versions.
+		 */
+		if (adapter->rx_bulk_alloc_allowed) {
+			PMD_INIT_LOG(DEBUG, "Using a scattered Rx callback "
+					   "with bulk allocation (port=%d).",
+				     dev->data->port_id);
+			dev->rx_pkt_burst = ngbe_recv_pkts_sc_bulk_alloc;
+		} else {
+			PMD_INIT_LOG(DEBUG, "Using Regular (non-vector, "
+					    "single allocation) "
+					    "Scattered Rx callback "
+					    "(port=%d).",
+				     dev->data->port_id);
+
+			dev->rx_pkt_burst = ngbe_recv_pkts_sc_single_alloc;
+		}
+	/*
+	 * Below we set "simple" callbacks according to port/queues parameters.
+	 * If parameters allow, we choose between the following
+	 * callbacks:
+	 *    - Bulk Allocation
+	 *    - Single buffer allocation (the simplest one)
+	 */
+	} else if (adapter->rx_bulk_alloc_allowed) {
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are "
+				    "satisfied. Rx Burst Bulk Alloc function "
+				    "will be used on port=%d.",
+			     dev->data->port_id);
+
+		dev->rx_pkt_burst = ngbe_recv_pkts_bulk_alloc;
+	} else {
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are not "
+				    "satisfied, or Scattered Rx is requested "
+				    "(port=%d).",
+			     dev->data->port_id);
+
+		dev->rx_pkt_burst = ngbe_recv_pkts;
+	}
+}
+
 /*
  * Initializes Receive Unit.
  */
@@ -1211,6 +1792,8 @@ ngbe_dev_rx_init(struct rte_eth_dev *dev)
 		wr32(hw, NGBE_SECRXCTL, rdrxctl);
 	}
 
+	ngbe_set_rx_function(dev);
+
 	return 0;
 }
 
diff --git a/drivers/net/ngbe/ngbe_rxtx.h b/drivers/net/ngbe/ngbe_rxtx.h
index d6b9127cb4..4b8596b24a 100644
--- a/drivers/net/ngbe/ngbe_rxtx.h
+++ b/drivers/net/ngbe/ngbe_rxtx.h
@@ -298,6 +298,8 @@ struct ngbe_txq_ops {
 	void (*reset)(struct ngbe_tx_queue *txq);
 };
 
+void ngbe_set_rx_function(struct rte_eth_dev *dev);
+
 uint64_t ngbe_get_tx_port_offloads(struct rte_eth_dev *dev);
 uint64_t ngbe_get_rx_queue_offloads(struct rte_eth_dev *dev);
 uint64_t ngbe_get_rx_port_offloads(struct rte_eth_dev *dev);
-- 
2.27.0





* [dpdk-dev] [PATCH v5 21/24] net/ngbe: support full-featured Tx path
  2021-06-02  9:40 [dpdk-dev] [PATCH v5 00/24] net: ngbe PMD Jiawen Wu
                   ` (19 preceding siblings ...)
  2021-06-02  9:41 ` [dpdk-dev] [PATCH v5 20/24] net/ngbe: support bulk and scatter Rx Jiawen Wu
@ 2021-06-02  9:41 ` Jiawen Wu
  2021-06-14 19:22   ` Andrew Rybchenko
  2021-06-02  9:41 ` [dpdk-dev] [PATCH v5 22/24] net/ngbe: add device start operation Jiawen Wu
                   ` (4 subsequent siblings)
  25 siblings, 1 reply; 51+ messages in thread
From: Jiawen Wu @ 2021-06-02  9:41 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Add the full-featured transmit function, which supports checksum
offload, TSO, tunnel parsing, and so on.
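
For illustration, a minimal sketch of how a packet might be tagged for
TSO so that the full-featured path emits a context descriptor for it;
the header lengths assume an untagged IPv4/TCP frame:

#include <rte_ether.h>
#include <rte_ip.h>
#include <rte_tcp.h>
#include <rte_mbuf.h>

static void
request_tso(struct rte_mbuf *m, uint16_t mss)
{
	/* Header lengths feed the context descriptor built in
	 * ngbe_set_xmit_ctx(). */
	m->l2_len = sizeof(struct rte_ether_hdr);	/* 14 */
	m->l3_len = sizeof(struct rte_ipv4_hdr);	/* 20 */
	m->l4_len = sizeof(struct rte_tcp_hdr);		/* 20 */
	m->tso_segsz = mss;
	m->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CKSUM | PKT_TX_TCP_SEG;
}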

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 doc/guides/nics/features/ngbe.ini |   3 +
 doc/guides/nics/ngbe.rst          |   3 +-
 drivers/net/ngbe/meson.build      |   2 +
 drivers/net/ngbe/ngbe_ethdev.c    |  16 +-
 drivers/net/ngbe/ngbe_ethdev.h    |   3 +
 drivers/net/ngbe/ngbe_rxtx.c      | 639 ++++++++++++++++++++++++++++++
 drivers/net/ngbe/ngbe_rxtx.h      |  56 +++
 7 files changed, 720 insertions(+), 2 deletions(-)

diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini
index e24d8d0b55..443c6691a3 100644
--- a/doc/guides/nics/features/ngbe.ini
+++ b/doc/guides/nics/features/ngbe.ini
@@ -9,10 +9,13 @@ Link status          = Y
 Link status event    = Y
 Jumbo frame          = Y
 Scattered Rx         = Y
+TSO                  = Y
 CRC offload          = P
 VLAN offload         = P
 L3 checksum offload  = P
 L4 checksum offload  = P
+Inner L3 checksum    = P
+Inner L4 checksum    = P
 Packet type parsing  = Y
 Multiprocess aware   = Y
 Linux                = Y
diff --git a/doc/guides/nics/ngbe.rst b/doc/guides/nics/ngbe.rst
index e999e0b580..cf3fafabd8 100644
--- a/doc/guides/nics/ngbe.rst
+++ b/doc/guides/nics/ngbe.rst
@@ -12,9 +12,10 @@ Features
 
 - Packet type information
 - Checksum offload
+- TSO offload
 - Jumbo frames
 - Link state information
-- Scatter-gather for Rx
+- Scatter-gather for Tx and Rx
 
 Prerequisites
 -------------
diff --git a/drivers/net/ngbe/meson.build b/drivers/net/ngbe/meson.build
index fd571399b3..069e648a36 100644
--- a/drivers/net/ngbe/meson.build
+++ b/drivers/net/ngbe/meson.build
@@ -16,5 +16,7 @@ sources = files(
 	'ngbe_rxtx.c',
 )
 
+deps += ['security']
+
 includes += include_directories('base')
 
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 260bca0e4f..1a6419e5a4 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -110,7 +110,7 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
 
 	eth_dev->dev_ops = &ngbe_eth_dev_ops;
 	eth_dev->rx_pkt_burst = &ngbe_recv_pkts;
-	eth_dev->tx_pkt_burst = &ngbe_xmit_pkts_simple;
+	eth_dev->tx_pkt_burst = &ngbe_xmit_pkts;
 
 	/*
 	 * For secondary processes, we don't initialise any further as primary
@@ -118,6 +118,20 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
 	 * RX and TX function.
 	 */
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		struct ngbe_tx_queue *txq;
+		/* The TX queue function is set by the last queue initialized
+		 * in the primary; the Tx queue may not have been initialized
+		 * by the primary process. */
+		if (eth_dev->data->tx_queues) {
+			uint16_t nb_tx_queues = eth_dev->data->nb_tx_queues;
+			txq = eth_dev->data->tx_queues[nb_tx_queues - 1];
+			ngbe_set_tx_function(eth_dev, txq);
+		} else {
+			/* Use default TX function if we get here */
+			PMD_INIT_LOG(NOTICE, "No TX queues configured yet. "
+				     "Using default TX function.");
+		}
+
 		ngbe_set_rx_function(eth_dev);
 
 		return 0;
diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
index 1e21db5e25..035b1ad5c8 100644
--- a/drivers/net/ngbe/ngbe_ethdev.h
+++ b/drivers/net/ngbe/ngbe_ethdev.h
@@ -86,6 +86,9 @@ uint16_t ngbe_recv_pkts_sc_single_alloc(void *rx_queue,
 uint16_t ngbe_recv_pkts_sc_bulk_alloc(void *rx_queue,
 		struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
 
+uint16_t ngbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+		uint16_t nb_pkts);
+
 uint16_t ngbe_xmit_pkts_simple(void *tx_queue, struct rte_mbuf **tx_pkts,
 		uint16_t nb_pkts);
 
diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c
index f633718237..3f3f2cab06 100644
--- a/drivers/net/ngbe/ngbe_rxtx.c
+++ b/drivers/net/ngbe/ngbe_rxtx.c
@@ -8,6 +8,7 @@
 #include <stdint.h>
 #include <rte_ethdev.h>
 #include <ethdev_driver.h>
+#include <rte_security_driver.h>
 #include <rte_malloc.h>
 
 #include "ngbe_logs.h"
@@ -15,6 +16,18 @@
 #include "ngbe_ethdev.h"
 #include "ngbe_rxtx.h"
 
+/* Bit mask to indicate which bits are required for building the TX context */
+static const u64 NGBE_TX_OFFLOAD_MASK = (PKT_TX_IP_CKSUM |
+		PKT_TX_OUTER_IPV6 |
+		PKT_TX_OUTER_IPV4 |
+		PKT_TX_IPV6 |
+		PKT_TX_IPV4 |
+		PKT_TX_VLAN_PKT |
+		PKT_TX_L4_MASK |
+		PKT_TX_TCP_SEG |
+		PKT_TX_TUNNEL_MASK |
+		PKT_TX_OUTER_IP_CKSUM);
+
 /*
  * Prefetch a cache line into all cache levels.
  */
@@ -248,10 +261,608 @@ ngbe_xmit_pkts_simple(void *tx_queue, struct rte_mbuf **tx_pkts,
 	return nb_tx;
 }
 
+static inline void
+ngbe_set_xmit_ctx(struct ngbe_tx_queue *txq,
+		volatile struct ngbe_tx_ctx_desc *ctx_txd,
+		uint64_t ol_flags, union ngbe_tx_offload tx_offload,
+		__rte_unused uint64_t *mdata)
+{
+	union ngbe_tx_offload tx_offload_mask;
+	uint32_t type_tucmd_mlhl;
+	uint32_t mss_l4len_idx;
+	uint32_t ctx_idx;
+	uint32_t vlan_macip_lens;
+	uint32_t tunnel_seed;
+
+	ctx_idx = txq->ctx_curr;
+	tx_offload_mask.data[0] = 0;
+	tx_offload_mask.data[1] = 0;
+
+	/* Specify which HW CTX to upload. */
+	mss_l4len_idx = NGBE_TXD_IDX(ctx_idx);
+	type_tucmd_mlhl = NGBE_TXD_CTXT;
+
+	tx_offload_mask.ptid |= ~0;
+	type_tucmd_mlhl |= NGBE_TXD_PTID(tx_offload.ptid);
+
+	/* check if TCP segmentation required for this packet */
+	if (ol_flags & PKT_TX_TCP_SEG) {
+		tx_offload_mask.l2_len |= ~0;
+		tx_offload_mask.l3_len |= ~0;
+		tx_offload_mask.l4_len |= ~0;
+		tx_offload_mask.tso_segsz |= ~0;
+		mss_l4len_idx |= NGBE_TXD_MSS(tx_offload.tso_segsz);
+		mss_l4len_idx |= NGBE_TXD_L4LEN(tx_offload.l4_len);
+	} else { /* no TSO, check if hardware checksum is needed */
+		if (ol_flags & PKT_TX_IP_CKSUM) {
+			tx_offload_mask.l2_len |= ~0;
+			tx_offload_mask.l3_len |= ~0;
+		}
+
+		switch (ol_flags & PKT_TX_L4_MASK) {
+		case PKT_TX_UDP_CKSUM:
+			mss_l4len_idx |=
+				NGBE_TXD_L4LEN(sizeof(struct rte_udp_hdr));
+			tx_offload_mask.l2_len |= ~0;
+			tx_offload_mask.l3_len |= ~0;
+			break;
+		case PKT_TX_TCP_CKSUM:
+			mss_l4len_idx |=
+				NGBE_TXD_L4LEN(sizeof(struct rte_tcp_hdr));
+			tx_offload_mask.l2_len |= ~0;
+			tx_offload_mask.l3_len |= ~0;
+			break;
+		case PKT_TX_SCTP_CKSUM:
+			mss_l4len_idx |=
+				NGBE_TXD_L4LEN(sizeof(struct rte_sctp_hdr));
+			tx_offload_mask.l2_len |= ~0;
+			tx_offload_mask.l3_len |= ~0;
+			break;
+		default:
+			break;
+		}
+	}
+
+	vlan_macip_lens = NGBE_TXD_IPLEN(tx_offload.l3_len >> 1);
+
+	if (ol_flags & PKT_TX_TUNNEL_MASK) {
+		tx_offload_mask.outer_tun_len |= ~0;
+		tx_offload_mask.outer_l2_len |= ~0;
+		tx_offload_mask.outer_l3_len |= ~0;
+		tx_offload_mask.l2_len |= ~0;
+		tunnel_seed = NGBE_TXD_ETUNLEN(tx_offload.outer_tun_len >> 1);
+		tunnel_seed |= NGBE_TXD_EIPLEN(tx_offload.outer_l3_len >> 2);
+
+		switch (ol_flags & PKT_TX_TUNNEL_MASK) {
+		case PKT_TX_TUNNEL_IPIP:
+			/* for non UDP / GRE tunneling, set to 0b */
+			break;
+		default:
+			PMD_TX_LOG(ERR, "Tunnel type not supported");
+			return;
+		}
+		vlan_macip_lens |= NGBE_TXD_MACLEN(tx_offload.outer_l2_len);
+	} else {
+		tunnel_seed = 0;
+		vlan_macip_lens |= NGBE_TXD_MACLEN(tx_offload.l2_len);
+	}
+
+	if (ol_flags & PKT_TX_VLAN_PKT) {
+		tx_offload_mask.vlan_tci |= ~0;
+		vlan_macip_lens |= NGBE_TXD_VLAN(tx_offload.vlan_tci);
+	}
+
+	txq->ctx_cache[ctx_idx].flags = ol_flags;
+	txq->ctx_cache[ctx_idx].tx_offload.data[0] =
+		tx_offload_mask.data[0] & tx_offload.data[0];
+	txq->ctx_cache[ctx_idx].tx_offload.data[1] =
+		tx_offload_mask.data[1] & tx_offload.data[1];
+	txq->ctx_cache[ctx_idx].tx_offload_mask = tx_offload_mask;
+
+	ctx_txd->dw0 = rte_cpu_to_le_32(vlan_macip_lens);
+	ctx_txd->dw1 = rte_cpu_to_le_32(tunnel_seed);
+	ctx_txd->dw2 = rte_cpu_to_le_32(type_tucmd_mlhl);
+	ctx_txd->dw3 = rte_cpu_to_le_32(mss_l4len_idx);
+}
+
+/*
+ * Check which hardware context can be used. Use the existing match
+ * or create a new context descriptor.
+ */
+static inline uint32_t
+what_ctx_update(struct ngbe_tx_queue *txq, uint64_t flags,
+		   union ngbe_tx_offload tx_offload)
+{
+	/* If match with the current used context */
+	if (likely(txq->ctx_cache[txq->ctx_curr].flags == flags &&
+		   (txq->ctx_cache[txq->ctx_curr].tx_offload.data[0] ==
+		    (txq->ctx_cache[txq->ctx_curr].tx_offload_mask.data[0]
+		     & tx_offload.data[0])) &&
+		   (txq->ctx_cache[txq->ctx_curr].tx_offload.data[1] ==
+		    (txq->ctx_cache[txq->ctx_curr].tx_offload_mask.data[1]
+		     & tx_offload.data[1]))))
+		return txq->ctx_curr;
+
+	/* Otherwise, try to match the next context */
+	txq->ctx_curr ^= 1;
+	if (likely(txq->ctx_cache[txq->ctx_curr].flags == flags &&
+		   (txq->ctx_cache[txq->ctx_curr].tx_offload.data[0] ==
+		    (txq->ctx_cache[txq->ctx_curr].tx_offload_mask.data[0]
+		     & tx_offload.data[0])) &&
+		   (txq->ctx_cache[txq->ctx_curr].tx_offload.data[1] ==
+		    (txq->ctx_cache[txq->ctx_curr].tx_offload_mask.data[1]
+		     & tx_offload.data[1]))))
+		return txq->ctx_curr;
+
+	/* Mismatch, use the previous context */
+	return NGBE_CTX_NUM;
+}
+
+static inline uint32_t
+tx_desc_cksum_flags_to_olinfo(uint64_t ol_flags)
+{
+	uint32_t tmp = 0;
+
+	if ((ol_flags & PKT_TX_L4_MASK) != PKT_TX_L4_NO_CKSUM) {
+		tmp |= NGBE_TXD_CC;
+		tmp |= NGBE_TXD_L4CS;
+	}
+	if (ol_flags & PKT_TX_IP_CKSUM) {
+		tmp |= NGBE_TXD_CC;
+		tmp |= NGBE_TXD_IPCS;
+	}
+	if (ol_flags & PKT_TX_OUTER_IP_CKSUM) {
+		tmp |= NGBE_TXD_CC;
+		tmp |= NGBE_TXD_EIPCS;
+	}
+	if (ol_flags & PKT_TX_TCP_SEG) {
+		tmp |= NGBE_TXD_CC;
+		/* implies IPv4 cksum */
+		if (ol_flags & PKT_TX_IPV4)
+			tmp |= NGBE_TXD_IPCS;
+		tmp |= NGBE_TXD_L4CS;
+	}
+	if (ol_flags & PKT_TX_VLAN_PKT)
+		tmp |= NGBE_TXD_CC;
+
+	return tmp;
+}
+
+static inline uint32_t
+tx_desc_ol_flags_to_cmdtype(uint64_t ol_flags)
+{
+	uint32_t cmdtype = 0;
+
+	if (ol_flags & PKT_TX_VLAN_PKT)
+		cmdtype |= NGBE_TXD_VLE;
+	if (ol_flags & PKT_TX_TCP_SEG)
+		cmdtype |= NGBE_TXD_TSE;
+	if (ol_flags & PKT_TX_MACSEC)
+		cmdtype |= NGBE_TXD_LINKSEC;
+	return cmdtype;
+}
+
+static inline uint8_t
+tx_desc_ol_flags_to_ptid(uint64_t oflags, uint32_t ptype)
+{
+	bool tun;
+
+	if (ptype)
+		return ngbe_encode_ptype(ptype);
+
+	/* Only support flags in NGBE_TX_OFFLOAD_MASK */
+	tun = !!(oflags & PKT_TX_TUNNEL_MASK);
+
+	/* L2 level */
+	ptype = RTE_PTYPE_L2_ETHER;
+	if (oflags & PKT_TX_VLAN)
+		ptype |= RTE_PTYPE_L2_ETHER_VLAN;
+
+	/* L3 level */
+	if (oflags & (PKT_TX_OUTER_IPV4 | PKT_TX_OUTER_IP_CKSUM))
+		ptype |= RTE_PTYPE_L3_IPV4;
+	else if (oflags & (PKT_TX_OUTER_IPV6))
+		ptype |= RTE_PTYPE_L3_IPV6;
+
+	if (oflags & (PKT_TX_IPV4 | PKT_TX_IP_CKSUM))
+		ptype |= (tun ? RTE_PTYPE_INNER_L3_IPV4 : RTE_PTYPE_L3_IPV4);
+	else if (oflags & (PKT_TX_IPV6))
+		ptype |= (tun ? RTE_PTYPE_INNER_L3_IPV6 : RTE_PTYPE_L3_IPV6);
+
+	/* L4 level */
+	switch (oflags & (PKT_TX_L4_MASK)) {
+	case PKT_TX_TCP_CKSUM:
+		ptype |= (tun ? RTE_PTYPE_INNER_L4_TCP : RTE_PTYPE_L4_TCP);
+		break;
+	case PKT_TX_UDP_CKSUM:
+		ptype |= (tun ? RTE_PTYPE_INNER_L4_UDP : RTE_PTYPE_L4_UDP);
+		break;
+	case PKT_TX_SCTP_CKSUM:
+		ptype |= (tun ? RTE_PTYPE_INNER_L4_SCTP : RTE_PTYPE_L4_SCTP);
+		break;
+	}
+
+	if (oflags & PKT_TX_TCP_SEG)
+		ptype |= (tun ? RTE_PTYPE_INNER_L4_TCP : RTE_PTYPE_L4_TCP);
+
+	/* Tunnel */
+	switch (oflags & PKT_TX_TUNNEL_MASK) {
+	case PKT_TX_TUNNEL_VXLAN:
+		ptype |= RTE_PTYPE_L2_ETHER |
+			 RTE_PTYPE_L3_IPV4 |
+			 RTE_PTYPE_TUNNEL_VXLAN;
+		ptype |= RTE_PTYPE_INNER_L2_ETHER;
+		break;
+	case PKT_TX_TUNNEL_GRE:
+		ptype |= RTE_PTYPE_L2_ETHER |
+			 RTE_PTYPE_L3_IPV4 |
+			 RTE_PTYPE_TUNNEL_GRE;
+		ptype |= RTE_PTYPE_INNER_L2_ETHER;
+		break;
+	case PKT_TX_TUNNEL_GENEVE:
+		ptype |= RTE_PTYPE_L2_ETHER |
+			 RTE_PTYPE_L3_IPV4 |
+			 RTE_PTYPE_TUNNEL_GENEVE;
+		ptype |= RTE_PTYPE_INNER_L2_ETHER;
+		break;
+	case PKT_TX_TUNNEL_VXLAN_GPE:
+		ptype |= RTE_PTYPE_L2_ETHER |
+			 RTE_PTYPE_L3_IPV4 |
+			 RTE_PTYPE_TUNNEL_VXLAN_GPE;
+		break;
+	case PKT_TX_TUNNEL_IPIP:
+	case PKT_TX_TUNNEL_IP:
+		ptype |= RTE_PTYPE_L2_ETHER |
+			 RTE_PTYPE_L3_IPV4 |
+			 RTE_PTYPE_TUNNEL_IP;
+		break;
+	}
+
+	return ngbe_encode_ptype(ptype);
+}
+
 #ifndef DEFAULT_TX_FREE_THRESH
 #define DEFAULT_TX_FREE_THRESH 32
 #endif
 
+/* Reset transmit descriptors after they have been used */
+static inline int
+ngbe_xmit_cleanup(struct ngbe_tx_queue *txq)
+{
+	struct ngbe_tx_entry *sw_ring = txq->sw_ring;
+	volatile struct ngbe_tx_desc *txr = txq->tx_ring;
+	uint16_t last_desc_cleaned = txq->last_desc_cleaned;
+	uint16_t nb_tx_desc = txq->nb_tx_desc;
+	uint16_t desc_to_clean_to;
+	uint16_t nb_tx_to_clean;
+	uint32_t status;
+
+	/* Determine the last descriptor needing to be cleaned */
+	desc_to_clean_to = (uint16_t)(last_desc_cleaned + txq->tx_free_thresh);
+	if (desc_to_clean_to >= nb_tx_desc)
+		desc_to_clean_to = (uint16_t)(desc_to_clean_to - nb_tx_desc);
+
+	/* Check to make sure the last descriptor to clean is done */
+	desc_to_clean_to = sw_ring[desc_to_clean_to].last_id;
+	status = txr[desc_to_clean_to].dw3;
+	if (!(status & rte_cpu_to_le_32(NGBE_TXD_DD))) {
+		PMD_TX_LOG(DEBUG,
+			"TX descriptor %4u is not done "
+			"(port=%d queue=%d)",
+			desc_to_clean_to,
+			txq->port_id, txq->queue_id);
+		if (txq->nb_tx_free >> 1 < txq->tx_free_thresh)
+			ngbe_set32_masked(txq->tdc_reg_addr,
+				NGBE_TXCFG_FLUSH, NGBE_TXCFG_FLUSH);
+		/* Failed to clean any descriptors, better luck next time */
+		return -(1);
+	}
+
+	/* Figure out how many descriptors will be cleaned */
+	if (last_desc_cleaned > desc_to_clean_to)
+		nb_tx_to_clean = (uint16_t)((nb_tx_desc - last_desc_cleaned) +
+							desc_to_clean_to);
+	else
+		nb_tx_to_clean = (uint16_t)(desc_to_clean_to -
+						last_desc_cleaned);
+
+	PMD_TX_LOG(DEBUG,
+		"Cleaning %4u TX descriptors: %4u to %4u "
+		"(port=%d queue=%d)",
+		nb_tx_to_clean, last_desc_cleaned, desc_to_clean_to,
+		txq->port_id, txq->queue_id);
+
+	/*
+	 * The last descriptor to clean is done, so that means all the
+	 * descriptors from the last descriptor that was cleaned
+	 * up to the last descriptor with the RS bit set
+	 * are done. Only reset the threshold descriptor.
+	 */
+	txr[desc_to_clean_to].dw3 = 0;
+
+	/* Update the txq to reflect the last descriptor that was cleaned */
+	txq->last_desc_cleaned = desc_to_clean_to;
+	txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + nb_tx_to_clean);
+
+	/* No Error */
+	return 0;
+}
+
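The wrap-around arithmetic above is the part worth double-checking. A self-contained sketch of the same count computation with simplified names; the asserts illustrate both the straight and the wrapped case:

    #include <assert.h>
    #include <stdint.h>

    /* Descriptors freed when cleaning advances from 'last' to 'to'
     * on a ring of 'ring_size' entries, as in ngbe_xmit_cleanup().
     */
    static uint16_t
    cleaned_count(uint16_t last, uint16_t to, uint16_t ring_size)
    {
        if (last > to)
            return (uint16_t)((ring_size - last) + to);
        return (uint16_t)(to - last);
    }

    int main(void)
    {
        assert(cleaned_count(10, 42, 512) == 32);   /* no wrap */
        assert(cleaned_count(500, 20, 512) == 32);  /* wrapped */
        return 0;
    }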
+uint16_t
+ngbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+		uint16_t nb_pkts)
+{
+	struct ngbe_tx_queue *txq;
+	struct ngbe_tx_entry *sw_ring;
+	struct ngbe_tx_entry *txe, *txn;
+	volatile struct ngbe_tx_desc *txr;
+	volatile struct ngbe_tx_desc *txd;
+	struct rte_mbuf     *tx_pkt;
+	struct rte_mbuf     *m_seg;
+	uint64_t buf_dma_addr;
+	uint32_t olinfo_status;
+	uint32_t cmd_type_len;
+	uint32_t pkt_len;
+	uint16_t slen;
+	uint64_t ol_flags;
+	uint16_t tx_id;
+	uint16_t tx_last;
+	uint16_t nb_tx;
+	uint16_t nb_used;
+	uint64_t tx_ol_req;
+	uint32_t ctx = 0;
+	uint32_t new_ctx;
+	union ngbe_tx_offload tx_offload;
+
+	tx_offload.data[0] = 0;
+	tx_offload.data[1] = 0;
+	txq = tx_queue;
+	sw_ring = txq->sw_ring;
+	txr     = txq->tx_ring;
+	tx_id   = txq->tx_tail;
+	txe = &sw_ring[tx_id];
+
+	/* Determine if the descriptor ring needs to be cleaned. */
+	if (txq->nb_tx_free < txq->tx_free_thresh)
+		ngbe_xmit_cleanup(txq);
+
+	rte_prefetch0(&txe->mbuf->pool);
+
+	/* TX loop */
+	for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
+		new_ctx = 0;
+		tx_pkt = *tx_pkts++;
+		pkt_len = tx_pkt->pkt_len;
+
+		/*
+		 * Determine how many (if any) context descriptors
+		 * are needed for offload functionality.
+		 */
+		ol_flags = tx_pkt->ol_flags;
+
+		/* If hardware offload required */
+		tx_ol_req = ol_flags & NGBE_TX_OFFLOAD_MASK;
+		if (tx_ol_req) {
+			tx_offload.ptid = tx_desc_ol_flags_to_ptid(tx_ol_req,
+					tx_pkt->packet_type);
+			tx_offload.l2_len = tx_pkt->l2_len;
+			tx_offload.l3_len = tx_pkt->l3_len;
+			tx_offload.l4_len = tx_pkt->l4_len;
+			tx_offload.vlan_tci = tx_pkt->vlan_tci;
+			tx_offload.tso_segsz = tx_pkt->tso_segsz;
+			tx_offload.outer_l2_len = tx_pkt->outer_l2_len;
+			tx_offload.outer_l3_len = tx_pkt->outer_l3_len;
+			tx_offload.outer_tun_len = 0;
+
+			/* Build a new context, or reuse the existing one */
+			ctx = what_ctx_update(txq, tx_ol_req, tx_offload);
+			/* Only allocate context descriptor if required */
+			new_ctx = (ctx == NGBE_CTX_NUM);
+			ctx = txq->ctx_curr;
+		}
+
+		/*
+		 * Keep track of how many descriptors are used this loop
+		 * This will always be the number of segments + the number of
+		 * Context descriptors required to transmit the packet
+		 */
+		nb_used = (uint16_t)(tx_pkt->nb_segs + new_ctx);
+
+		/*
+		 * The number of descriptors that must be allocated for a
+		 * packet is the number of segments of that packet, plus 1
+		 * Context Descriptor for the hardware offload, if any.
+		 * Determine the last TX descriptor to allocate in the TX ring
+		 * for the packet, starting from the current position (tx_id)
+		 * in the ring.
+		 */
+		tx_last = (uint16_t)(tx_id + nb_used - 1);
+
+		/* Circular ring */
+		if (tx_last >= txq->nb_tx_desc)
+			tx_last = (uint16_t)(tx_last - txq->nb_tx_desc);
+
+		PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u pktlen=%u"
+			   " tx_first=%u tx_last=%u",
+			   (uint16_t)txq->port_id,
+			   (uint16_t)txq->queue_id,
+			   (uint32_t)pkt_len,
+			   (uint16_t)tx_id,
+			   (uint16_t)tx_last);
+
+		/*
+		 * Make sure there are enough TX descriptors available to
+		 * transmit the entire packet.
+		 * nb_used better be less than or equal to txq->tx_free_thresh
+		 */
+		if (nb_used > txq->nb_tx_free) {
+			PMD_TX_LOG(DEBUG,
+				"Not enough free TX descriptors "
+				"nb_used=%4u nb_free=%4u "
+				"(port=%d queue=%d)",
+				nb_used, txq->nb_tx_free,
+				txq->port_id, txq->queue_id);
+
+			if (ngbe_xmit_cleanup(txq) != 0) {
+				/* Could not clean any descriptors */
+				if (nb_tx == 0)
+					return 0;
+				goto end_of_tx;
+			}
+
+			/* nb_used better be <= txq->tx_free_thresh */
+			if (unlikely(nb_used > txq->tx_free_thresh)) {
+				PMD_TX_LOG(DEBUG,
+					"The number of descriptors needed to "
+					"transmit the packet exceeds the "
+					"RS bit threshold. This will impact "
+					"performance. "
+					"nb_used=%4u nb_free=%4u "
+					"tx_free_thresh=%4u. "
+					"(port=%d queue=%d)",
+					nb_used, txq->nb_tx_free,
+					txq->tx_free_thresh,
+					txq->port_id, txq->queue_id);
+				/*
+				 * Loop here until there are enough TX
+				 * descriptors or until the ring cannot be
+				 * cleaned.
+				 */
+				while (nb_used > txq->nb_tx_free) {
+					if (ngbe_xmit_cleanup(txq) != 0) {
+						/*
+						 * Could not clean any
+						 * descriptors
+						 */
+						if (nb_tx == 0)
+							return 0;
+						goto end_of_tx;
+					}
+				}
+			}
+		}
+
+		/*
+		 * By now there are enough free TX descriptors to transmit
+		 * the packet.
+		 */
+
+		/*
+		 * Set common flags of all TX Data Descriptors.
+		 *
+		 * The following bits must be set in the first Data Descriptor
+		 * and are ignored in the other ones:
+		 *   - NGBE_TXD_FCS
+		 *
+		 * The following bits must only be set in the last Data
+		 * Descriptor:
+		 *   - NGBE_TXD_EOP
+		 */
+		cmd_type_len = NGBE_TXD_FCS;
+
+		olinfo_status = 0;
+		if (tx_ol_req) {
+			if (ol_flags & PKT_TX_TCP_SEG) {
+				/* When TSO is on, the paylen written to the
+				 * descriptor is not the packet length but the
+				 * TCP payload length: e.g. a 1514B frame with
+				 * 14B L2 + 20B L3 + 20B L4 headers yields a
+				 * paylen of 1460.
+				 */
+				pkt_len -= (tx_offload.l2_len +
+					tx_offload.l3_len + tx_offload.l4_len);
+				pkt_len -=
+					(tx_pkt->ol_flags & PKT_TX_TUNNEL_MASK)
+					? tx_offload.outer_l2_len +
+					  tx_offload.outer_l3_len : 0;
+			}
+
+			/*
+			 * Setup the TX Advanced Context Descriptor if required
+			 */
+			if (new_ctx) {
+				volatile struct ngbe_tx_ctx_desc *ctx_txd;
+
+				ctx_txd = (volatile struct ngbe_tx_ctx_desc *)
+				    &txr[tx_id];
+
+				txn = &sw_ring[txe->next_id];
+				rte_prefetch0(&txn->mbuf->pool);
+
+				if (txe->mbuf != NULL) {
+					rte_pktmbuf_free_seg(txe->mbuf);
+					txe->mbuf = NULL;
+				}
+
+				ngbe_set_xmit_ctx(txq, ctx_txd, tx_ol_req,
+					tx_offload,
+					rte_security_dynfield(tx_pkt));
+
+				txe->last_id = tx_last;
+				tx_id = txe->next_id;
+				txe = txn;
+			}
+
+			/*
+			 * Set up the Tx Advanced Data Descriptor;
+			 * this path is taken whether the context
+			 * descriptor is new or reused.
+			 */
+			cmd_type_len  |= tx_desc_ol_flags_to_cmdtype(ol_flags);
+			olinfo_status |=
+				tx_desc_cksum_flags_to_olinfo(ol_flags);
+			olinfo_status |= NGBE_TXD_IDX(ctx);
+		}
+
+		olinfo_status |= NGBE_TXD_PAYLEN(pkt_len);
+
+		m_seg = tx_pkt;
+		do {
+			txd = &txr[tx_id];
+			txn = &sw_ring[txe->next_id];
+			rte_prefetch0(&txn->mbuf->pool);
+
+			if (txe->mbuf != NULL)
+				rte_pktmbuf_free_seg(txe->mbuf);
+			txe->mbuf = m_seg;
+
+			/*
+			 * Set up Transmit Data Descriptor.
+			 */
+			slen = m_seg->data_len;
+			buf_dma_addr = rte_mbuf_data_iova(m_seg);
+			txd->qw0 = rte_cpu_to_le_64(buf_dma_addr);
+			txd->dw2 = rte_cpu_to_le_32(cmd_type_len | slen);
+			txd->dw3 = rte_cpu_to_le_32(olinfo_status);
+			txe->last_id = tx_last;
+			tx_id = txe->next_id;
+			txe = txn;
+			m_seg = m_seg->next;
+		} while (m_seg != NULL);
+
+		/*
+		 * The last packet data descriptor needs End Of Packet (EOP)
+		 */
+		cmd_type_len |= NGBE_TXD_EOP;
+		txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_used);
+
+		txd->dw2 |= rte_cpu_to_le_32(cmd_type_len);
+	}
+
+end_of_tx:
+
+	rte_wmb();
+
+	/*
+	 * Set the Transmit Descriptor Tail (TDT)
+	 */
+	PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u tx_tail=%u nb_tx=%u",
+		   (uint16_t)txq->port_id, (uint16_t)txq->queue_id,
+		   (uint16_t)tx_id, (uint16_t)nb_tx);
+	ngbe_set32_relaxed(txq->tdt_reg_addr, tx_id);
+	txq->tx_tail = tx_id;
+
+	return nb_tx;
+}
+
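Applications reach this handler only through the generic burst API; rte_eth_tx_burst() may send fewer packets than requested, so callers retry. A minimal usage sketch, assuming port 0 and queue 0 are already configured and started:

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    static void tx_all(struct rte_mbuf **pkts, uint16_t n)
    {
        uint16_t sent = 0;

        /* dispatches to ngbe_xmit_pkts() or ngbe_xmit_pkts_simple() */
        while (sent < n)
            sent += rte_eth_tx_burst(0, 0, pkts + sent, n - sent);
    }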
 /*********************************************************************
  *
  *  RX functions
@@ -1123,6 +1734,31 @@ static const struct ngbe_txq_ops def_txq_ops = {
 	.reset = ngbe_reset_tx_queue,
 };
 
+/* Takes an ethdev and a queue and sets up the tx function to be used based on
+ * the queue parameters. Used in tx_queue_setup by primary process and then
+ * in dev_init by secondary process when attaching to an existing ethdev.
+ */
+void __rte_cold
+ngbe_set_tx_function(struct rte_eth_dev *dev, struct ngbe_tx_queue *txq)
+{
+	/* Use a simple Tx queue (no offloads, no multi segs) if possible */
+	if (txq->offloads == 0 &&
+			txq->tx_free_thresh >= RTE_PMD_NGBE_TX_MAX_BURST) {
+		PMD_INIT_LOG(DEBUG, "Using simple tx code path");
+		dev->tx_pkt_burst = ngbe_xmit_pkts_simple;
+	} else {
+		PMD_INIT_LOG(DEBUG, "Using full-featured tx code path");
+		PMD_INIT_LOG(DEBUG,
+				" - offloads = 0x%" PRIx64,
+				txq->offloads);
+		PMD_INIT_LOG(DEBUG,
+				" - tx_free_thresh = %lu [RTE_PMD_NGBE_TX_MAX_BURST=%lu]",
+				(unsigned long)txq->tx_free_thresh,
+				(unsigned long)RTE_PMD_NGBE_TX_MAX_BURST);
+		dev->tx_pkt_burst = ngbe_xmit_pkts;
+	}
+}
+
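Since the choice must come out identical in primary and secondary processes, it is derived purely from queue state rather than stored function pointers. The criterion restated as a sketch:

    #include <stdbool.h>
    #include <stdint.h>

    /* True when a queue qualifies for the simple Tx path: no offloads
     * and a free threshold of at least the maximum simple burst size
     * (RTE_PMD_NGBE_TX_MAX_BURST).
     */
    static bool
    use_simple_tx(uint64_t offloads, uint16_t tx_free_thresh,
                  uint16_t max_burst)
    {
        return offloads == 0 && tx_free_thresh >= max_burst;
    }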
 uint64_t
 ngbe_get_tx_port_offloads(struct rte_eth_dev *dev)
 {
@@ -1262,6 +1898,9 @@ ngbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	PMD_INIT_LOG(DEBUG, "sw_ring=%p hw_ring=%p dma_addr=0x%" PRIx64,
 		     txq->sw_ring, txq->tx_ring, txq->tx_ring_phys_addr);
 
+	/* set up scalar TX function as appropriate */
+	ngbe_set_tx_function(dev, txq);
+
 	txq->ops->reset(txq);
 
 	dev->data->tx_queues[queue_idx] = txq;
diff --git a/drivers/net/ngbe/ngbe_rxtx.h b/drivers/net/ngbe/ngbe_rxtx.h
index 4b8596b24a..2cb98e2497 100644
--- a/drivers/net/ngbe/ngbe_rxtx.h
+++ b/drivers/net/ngbe/ngbe_rxtx.h
@@ -135,8 +135,35 @@ struct ngbe_tx_ctx_desc {
 	__le32 dw3; /* w.mss_l4len_idx    */
 };
 
+/* @ngbe_tx_ctx_desc.dw0 */
+#define NGBE_TXD_IPLEN(v)         LS(v, 0, 0x1FF) /* ip/fcoe header end */
+#define NGBE_TXD_MACLEN(v)        LS(v, 9, 0x7F) /* desc mac len */
+#define NGBE_TXD_VLAN(v)          LS(v, 16, 0xFFFF) /* vlan tag */
+
+/* @ngbe_tx_ctx_desc.dw1 */
+/*** bit 0-31, when NGBE_TXD_DTYP_FCOE=0 ***/
+#define NGBE_TXD_IPSEC_SAIDX(v)   LS(v, 0, 0x3FF) /* ipsec SA index */
+#define NGBE_TXD_ETYPE(v)         LS(v, 11, 0x1) /* tunnel type */
+#define NGBE_TXD_ETYPE_UDP        LS(0, 11, 0x1)
+#define NGBE_TXD_ETYPE_GRE        LS(1, 11, 0x1)
+#define NGBE_TXD_EIPLEN(v)        LS(v, 12, 0x7F) /* tunnel ip header */
+#define NGBE_TXD_DTYP_FCOE        MS(16, 0x1) /* FCoE/IP descriptor */
+#define NGBE_TXD_ETUNLEN(v)       LS(v, 21, 0xFF) /* tunnel header */
+#define NGBE_TXD_DECTTL(v)        LS(v, 29, 0xF) /* decrease ip TTL */
+
+/* @ngbe_tx_ctx_desc.dw2 */
+#define NGBE_TXD_IPSEC_ESPLEN(v)  LS(v, 1, 0x1FF) /* ipsec ESP length */
+#define NGBE_TXD_SNAP             MS(10, 0x1) /* SNAP indication */
+#define NGBE_TXD_TPID_SEL(v)      LS(v, 11, 0x7) /* vlan tag index */
+#define NGBE_TXD_IPSEC_ESP        MS(14, 0x1) /* ipsec type: esp=1 ah=0 */
+#define NGBE_TXD_IPSEC_ESPENC     MS(15, 0x1) /* ESP encrypt */
+#define NGBE_TXD_CTXT             MS(20, 0x1) /* context descriptor */
+#define NGBE_TXD_PTID(v)          LS(v, 24, 0xFF) /* packet type */
 /* @ngbe_tx_ctx_desc.dw3 */
 #define NGBE_TXD_DD               MS(0, 0x1) /* descriptor done */
+#define NGBE_TXD_IDX(v)           LS(v, 4, 0x1) /* ctxt desc index */
+#define NGBE_TXD_L4LEN(v)         LS(v, 8, 0xFF) /* l4 header length */
+#define NGBE_TXD_MSS(v)           LS(v, 16, 0xFFFF) /* l4 MSS */
 
 /**
  * Transmit Data Descriptor (NGBE_TXD_TYP=DATA)
@@ -250,11 +277,34 @@ enum ngbe_ctx_num {
 	NGBE_CTX_NUM  = 2, /**< CTX NUMBER  */
 };
 
+/** Offload features */
+union ngbe_tx_offload {
+	uint64_t data[2];
+	struct {
+		uint64_t ptid:8; /**< Packet Type Identifier. */
+		uint64_t l2_len:7; /**< L2 (MAC) Header Length. */
+		uint64_t l3_len:9; /**< L3 (IP) Header Length. */
+		uint64_t l4_len:8; /**< L4 (TCP/UDP) Header Length. */
+		uint64_t tso_segsz:16; /**< TCP TSO segment size */
+		uint64_t vlan_tci:16;
+		/**< VLAN Tag Control Identifier (CPU order). */
+
+		/* fields for TX offloading of tunnels */
+		uint64_t outer_tun_len:8; /**< Outer TUN (Tunnel) Hdr Length. */
+		uint64_t outer_l2_len:8; /**< Outer L2 (MAC) Hdr Length. */
+		uint64_t outer_l3_len:16; /**< Outer L3 (IP) Hdr Length. */
+	};
+};
+
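Overlaying the bitfields on data[0]/data[1] lets the context-match test in what_ctx_update() be done with two 64-bit compares instead of per-field checks. A sketch of the idea, assuming the cached value was stored pre-masked:

    #include <stdbool.h>
    #include <stdint.h>

    union ex_tx_offload {       /* same two-word layout idea */
        uint64_t data[2];
        struct {
            uint64_t ptid:8;
            uint64_t l2_len:7;
            uint64_t l3_len:9;
            uint64_t l4_len:8;
            uint64_t tso_segsz:16;
            uint64_t vlan_tci:16;
            uint64_t outer_tun_len:8;
            uint64_t outer_l2_len:8;
            uint64_t outer_l3_len:16;
        };
    };

    static bool
    ctx_matches(const union ex_tx_offload *cur,
                const union ex_tx_offload *cached,
                const union ex_tx_offload *mask)
    {
        return (cur->data[0] & mask->data[0]) == cached->data[0] &&
               (cur->data[1] & mask->data[1]) == cached->data[1];
    }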
 /**
  * Structure to check if new context need be built
  */
 struct ngbe_ctx_info {
 	uint64_t flags;           /**< ol_flags for context build. */
+	/** Tx offload: VLAN, TSO, L2/L3/L4 lengths. */
+	union ngbe_tx_offload tx_offload;
+	/** compare mask for tx offload. */
+	union ngbe_tx_offload tx_offload_mask;
 };
 
 /**
@@ -298,6 +348,12 @@ struct ngbe_txq_ops {
 	void (*reset)(struct ngbe_tx_queue *txq);
 };
 
+/* Takes an ethdev and a queue and sets up the tx function to be used based on
+ * the queue parameters. Used in tx_queue_setup by primary process and then
+ * in dev_init by secondary process when attaching to an existing ethdev.
+ */
+void ngbe_set_tx_function(struct rte_eth_dev *dev, struct ngbe_tx_queue *txq);
+
 void ngbe_set_rx_function(struct rte_eth_dev *dev);
 
 uint64_t ngbe_get_tx_port_offloads(struct rte_eth_dev *dev);
-- 
2.27.0




^ permalink raw reply	[flat|nested] 51+ messages in thread

* [dpdk-dev] [PATCH v5 22/24] net/ngbe: add device start operation
  2021-06-02  9:40 [dpdk-dev] [PATCH v5 00/24] net: ngbe PMD Jiawen Wu
                   ` (20 preceding siblings ...)
  2021-06-02  9:41 ` [dpdk-dev] [PATCH v5 21/24] net/ngbe: support full-featured Tx path Jiawen Wu
@ 2021-06-02  9:41 ` Jiawen Wu
  2021-06-14 19:33   ` Andrew Rybchenko
  2021-06-02  9:41 ` [dpdk-dev] [PATCH v5 23/24] net/ngbe: start and stop RxTx Jiawen Wu
                   ` (3 subsequent siblings)
  25 siblings, 1 reply; 51+ messages in thread
From: Jiawen Wu @ 2021-06-02  9:41 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Set up MSI-X interrupts, complete the PHY configuration, and set the
device link speed to start the device.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/ngbe/base/ngbe_dummy.h   |  16 +
 drivers/net/ngbe/base/ngbe_hw.c      |  50 ++++
 drivers/net/ngbe/base/ngbe_hw.h      |   4 +
 drivers/net/ngbe/base/ngbe_phy.c     |   3 +
 drivers/net/ngbe/base/ngbe_phy_mvl.c |  64 ++++
 drivers/net/ngbe/base/ngbe_phy_mvl.h |   1 +
 drivers/net/ngbe/base/ngbe_phy_rtl.c |  58 ++++
 drivers/net/ngbe/base/ngbe_phy_rtl.h |   2 +
 drivers/net/ngbe/base/ngbe_phy_yt.c  |  26 ++
 drivers/net/ngbe/base/ngbe_phy_yt.h  |   1 +
 drivers/net/ngbe/base/ngbe_type.h    |  17 ++
 drivers/net/ngbe/ngbe_ethdev.c       | 420 ++++++++++++++++++++++++++-
 drivers/net/ngbe/ngbe_ethdev.h       |   6 +
 13 files changed, 667 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ngbe/base/ngbe_dummy.h b/drivers/net/ngbe/base/ngbe_dummy.h
index 709e01659c..dfc7b13192 100644
--- a/drivers/net/ngbe/base/ngbe_dummy.h
+++ b/drivers/net/ngbe/base/ngbe_dummy.h
@@ -47,6 +47,10 @@ static inline s32 ngbe_mac_reset_hw_dummy(struct ngbe_hw *TUP0)
 {
 	return NGBE_ERR_OPS_DUMMY;
 }
+static inline s32 ngbe_mac_start_hw_dummy(struct ngbe_hw *TUP0)
+{
+	return NGBE_ERR_OPS_DUMMY;
+}
 static inline s32 ngbe_mac_stop_hw_dummy(struct ngbe_hw *TUP0)
 {
 	return NGBE_ERR_OPS_DUMMY;
@@ -74,6 +78,11 @@ static inline s32 ngbe_mac_check_link_dummy(struct ngbe_hw *TUP0, u32 *TUP1,
 {
 	return NGBE_ERR_OPS_DUMMY;
 }
+static inline s32 ngbe_mac_get_link_capabilities_dummy(struct ngbe_hw *TUP0,
+					u32 *TUP1, bool *TUP2)
+{
+	return NGBE_ERR_OPS_DUMMY;
+}
 static inline s32 ngbe_mac_set_rar_dummy(struct ngbe_hw *TUP0, u32 TUP1,
 					u8 *TUP2, u32 TUP3, u32 TUP4)
 {
@@ -110,6 +119,10 @@ static inline s32 ngbe_phy_identify_dummy(struct ngbe_hw *TUP0)
 {
 	return NGBE_ERR_OPS_DUMMY;
 }
+static inline s32 ngbe_phy_init_hw_dummy(struct ngbe_hw *TUP0)
+{
+	return NGBE_ERR_OPS_DUMMY;
+}
 static inline s32 ngbe_phy_reset_hw_dummy(struct ngbe_hw *TUP0)
 {
 	return NGBE_ERR_OPS_DUMMY;
@@ -151,12 +164,14 @@ static inline void ngbe_init_ops_dummy(struct ngbe_hw *hw)
 	hw->rom.validate_checksum = ngbe_rom_validate_checksum_dummy;
 	hw->mac.init_hw = ngbe_mac_init_hw_dummy;
 	hw->mac.reset_hw = ngbe_mac_reset_hw_dummy;
+	hw->mac.start_hw = ngbe_mac_start_hw_dummy;
 	hw->mac.stop_hw = ngbe_mac_stop_hw_dummy;
 	hw->mac.get_mac_addr = ngbe_mac_get_mac_addr_dummy;
 	hw->mac.acquire_swfw_sync = ngbe_mac_acquire_swfw_sync_dummy;
 	hw->mac.release_swfw_sync = ngbe_mac_release_swfw_sync_dummy;
 	hw->mac.setup_link = ngbe_mac_setup_link_dummy;
 	hw->mac.check_link = ngbe_mac_check_link_dummy;
+	hw->mac.get_link_capabilities = ngbe_mac_get_link_capabilities_dummy;
 	hw->mac.set_rar = ngbe_mac_set_rar_dummy;
 	hw->mac.clear_rar = ngbe_mac_clear_rar_dummy;
 	hw->mac.set_vmdq = ngbe_mac_set_vmdq_dummy;
@@ -165,6 +180,7 @@ static inline void ngbe_init_ops_dummy(struct ngbe_hw *hw)
 	hw->mac.init_thermal_sensor_thresh = ngbe_mac_init_thermal_ssth_dummy;
 	hw->mac.check_overtemp = ngbe_mac_check_overtemp_dummy;
 	hw->phy.identify = ngbe_phy_identify_dummy;
+	hw->phy.init_hw = ngbe_phy_init_hw_dummy;
 	hw->phy.reset_hw = ngbe_phy_reset_hw_dummy;
 	hw->phy.read_reg = ngbe_phy_read_reg_dummy;
 	hw->phy.write_reg = ngbe_phy_write_reg_dummy;
diff --git a/drivers/net/ngbe/base/ngbe_hw.c b/drivers/net/ngbe/base/ngbe_hw.c
index 00ac4ce838..b0bc714741 100644
--- a/drivers/net/ngbe/base/ngbe_hw.c
+++ b/drivers/net/ngbe/base/ngbe_hw.c
@@ -9,6 +9,22 @@
 #include "ngbe_mng.h"
 #include "ngbe_hw.h"
 
+/**
+ *  ngbe_start_hw - Prepare hardware for Tx/Rx
+ *  @hw: pointer to hardware structure
+ *
+ *  Starts the hardware.
+ **/
+s32 ngbe_start_hw(struct ngbe_hw *hw)
+{
+	DEBUGFUNC("ngbe_start_hw");
+
+	/* Clear adapter stopped flag */
+	hw->adapter_stopped = false;
+
+	return 0;
+}
+
 /**
  *  ngbe_init_hw - Generic hardware initialization
  *  @hw: pointer to hardware structure
@@ -27,6 +43,10 @@ s32 ngbe_init_hw(struct ngbe_hw *hw)
 
 	/* Reset the hardware */
 	status = hw->mac.reset_hw(hw);
+	if (status == 0) {
+		/* Start the HW */
+		status = hw->mac.start_hw(hw);
+	}
 
 	if (status != 0)
 		DEBUGOUT("Failed to initialize HW, STATUS = %d\n", status);
@@ -633,6 +653,30 @@ s32 ngbe_check_mac_link_em(struct ngbe_hw *hw, u32 *speed,
 	return status;
 }
 
+s32 ngbe_get_link_capabilities_em(struct ngbe_hw *hw,
+				      u32 *speed,
+				      bool *autoneg)
+{
+	s32 status = 0;
+
+	DEBUGFUNC("ngbe_get_link_capabilities_em");
+
+	switch (hw->sub_device_id) {
+	case NGBE_SUB_DEV_ID_EM_RTL_SGMII:
+		*speed = NGBE_LINK_SPEED_1GB_FULL |
+			NGBE_LINK_SPEED_100M_FULL |
+			NGBE_LINK_SPEED_10M_FULL;
+		*autoneg = false;
+		hw->phy.link_mode = NGBE_PHYSICAL_LAYER_1000BASE_T |
+				NGBE_PHYSICAL_LAYER_100BASE_TX;
+		break;
+	default:
+		break;
+	}
+
+	return status;
+}
+
 s32 ngbe_setup_mac_link_em(struct ngbe_hw *hw,
 			       u32 speed,
 			       bool autoneg_wait_to_complete)
@@ -842,6 +886,7 @@ s32 ngbe_init_ops_pf(struct ngbe_hw *hw)
 	/* MAC */
 	mac->init_hw = ngbe_init_hw;
 	mac->reset_hw = ngbe_reset_hw_em;
+	mac->start_hw = ngbe_start_hw;
 	mac->get_mac_addr = ngbe_get_mac_addr;
 	mac->stop_hw = ngbe_stop_hw;
 	mac->acquire_swfw_sync = ngbe_acquire_swfw_sync;
@@ -855,6 +900,7 @@ s32 ngbe_init_ops_pf(struct ngbe_hw *hw)
 	mac->clear_vmdq = ngbe_clear_vmdq;
 
 	/* Link */
+	mac->get_link_capabilities = ngbe_get_link_capabilities_em;
 	mac->check_link = ngbe_check_mac_link_em;
 	mac->setup_link = ngbe_setup_mac_link_em;
 
@@ -871,6 +917,10 @@ s32 ngbe_init_ops_pf(struct ngbe_hw *hw)
 	mac->max_rx_queues	= NGBE_EM_MAX_RX_QUEUES;
 	mac->max_tx_queues	= NGBE_EM_MAX_TX_QUEUES;
 
+	mac->default_speeds = NGBE_LINK_SPEED_10M_FULL |
+				NGBE_LINK_SPEED_100M_FULL |
+				NGBE_LINK_SPEED_1GB_FULL;
+
 	return 0;
 }
 
diff --git a/drivers/net/ngbe/base/ngbe_hw.h b/drivers/net/ngbe/base/ngbe_hw.h
index 1689223168..4fee5735ac 100644
--- a/drivers/net/ngbe/base/ngbe_hw.h
+++ b/drivers/net/ngbe/base/ngbe_hw.h
@@ -14,6 +14,7 @@
 #define NGBE_EM_MC_TBL_SIZE   32
 
 s32 ngbe_init_hw(struct ngbe_hw *hw);
+s32 ngbe_start_hw(struct ngbe_hw *hw);
 s32 ngbe_reset_hw_em(struct ngbe_hw *hw);
 s32 ngbe_stop_hw(struct ngbe_hw *hw);
 s32 ngbe_get_mac_addr(struct ngbe_hw *hw, u8 *mac_addr);
@@ -22,6 +23,9 @@ void ngbe_set_lan_id_multi_port(struct ngbe_hw *hw);
 
 s32 ngbe_check_mac_link_em(struct ngbe_hw *hw, u32 *speed,
 			bool *link_up, bool link_up_wait_to_complete);
+s32 ngbe_get_link_capabilities_em(struct ngbe_hw *hw,
+				      u32 *speed,
+				      bool *autoneg);
 s32 ngbe_setup_mac_link_em(struct ngbe_hw *hw,
 			       u32 speed,
 			       bool autoneg_wait_to_complete);
diff --git a/drivers/net/ngbe/base/ngbe_phy.c b/drivers/net/ngbe/base/ngbe_phy.c
index 7a9baada81..47a5687b48 100644
--- a/drivers/net/ngbe/base/ngbe_phy.c
+++ b/drivers/net/ngbe/base/ngbe_phy.c
@@ -426,16 +426,19 @@ s32 ngbe_init_phy(struct ngbe_hw *hw)
 	/* Set necessary function pointers based on PHY type */
 	switch (hw->phy.type) {
 	case ngbe_phy_rtl:
+		hw->phy.init_hw = ngbe_init_phy_rtl;
 		hw->phy.check_link = ngbe_check_phy_link_rtl;
 		hw->phy.setup_link = ngbe_setup_phy_link_rtl;
 		break;
 	case ngbe_phy_mvl:
 	case ngbe_phy_mvl_sfi:
+		hw->phy.init_hw = ngbe_init_phy_mvl;
 		hw->phy.check_link = ngbe_check_phy_link_mvl;
 		hw->phy.setup_link = ngbe_setup_phy_link_mvl;
 		break;
 	case ngbe_phy_yt8521s:
 	case ngbe_phy_yt8521s_sfi:
+		hw->phy.init_hw = ngbe_init_phy_yt;
 		hw->phy.check_link = ngbe_check_phy_link_yt;
 		hw->phy.setup_link = ngbe_setup_phy_link_yt;
 	default:
diff --git a/drivers/net/ngbe/base/ngbe_phy_mvl.c b/drivers/net/ngbe/base/ngbe_phy_mvl.c
index a1c055e238..33d21edfce 100644
--- a/drivers/net/ngbe/base/ngbe_phy_mvl.c
+++ b/drivers/net/ngbe/base/ngbe_phy_mvl.c
@@ -48,6 +48,70 @@ s32 ngbe_write_phy_reg_mvl(struct ngbe_hw *hw,
 	return 0;
 }
 
+s32 ngbe_init_phy_mvl(struct ngbe_hw *hw)
+{
+	s32 ret_val = 0;
+	u16 value = 0;
+	int i;
+
+	DEBUGFUNC("ngbe_init_phy_mvl");
+
+	/* configure RGMII clock delay: enable Rx delay, disable Tx delay */
+	ngbe_write_phy_reg_mdi(hw, MVL_PAGE_SEL, 0, 2);
+	ngbe_read_phy_reg_mdi(hw, MVL_RGM_CTL2, 0, &value);
+	value &= ~MVL_RGM_CTL2_TTC;
+	value |= MVL_RGM_CTL2_RTC;
+	ngbe_write_phy_reg_mdi(hw, MVL_RGM_CTL2, 0, value);
+
+	hw->phy.write_reg(hw, MVL_CTRL, 0, MVL_CTRL_RESET);
+	for (i = 0; i < 15; i++) {
+		ngbe_read_phy_reg_mdi(hw, MVL_CTRL, 0, &value);
+		if (value & MVL_CTRL_RESET)
+			msleep(1);
+		else
+			break;
+	}
+
+	if (i == 15) {
+		DEBUGOUT("phy reset exceeds maximum waiting period.\n");
+		return NGBE_ERR_TIMEOUT;
+	}
+
+	ret_val = hw->phy.reset_hw(hw);
+	if (ret_val)
+		return ret_val;
+
+	/* set LED2 to interrupt output and INTn active low */
+	ngbe_write_phy_reg_mdi(hw, MVL_PAGE_SEL, 0, 3);
+	ngbe_read_phy_reg_mdi(hw, MVL_LEDTCR, 0, &value);
+	value |= MVL_LEDTCR_INTR_EN;
+	value &= ~(MVL_LEDTCR_INTR_POL);
+	ngbe_write_phy_reg_mdi(hw, MVL_LEDTCR, 0, value);
+
+	if (hw->phy.type == ngbe_phy_mvl_sfi) {
+		hw->phy.read_reg(hw, MVL_CTRL1, 0, &value);
+		value &= ~MVL_CTRL1_INTR_POL;
+		ngbe_write_phy_reg_mdi(hw, MVL_CTRL1, 0, value);
+	}
+
+	/* enable link status change and AN complete interrupts */
+	value = MVL_INTR_EN_ANC | MVL_INTR_EN_LSC;
+	hw->phy.write_reg(hw, MVL_INTR_EN, 0, value);
+
+	/* LED control */
+	ngbe_write_phy_reg_mdi(hw, MVL_PAGE_SEL, 0, 3);
+	ngbe_read_phy_reg_mdi(hw, MVL_LEDFCR, 0, &value);
+	value &= ~(MVL_LEDFCR_CTL0 | MVL_LEDFCR_CTL1);
+	value |= MVL_LEDFCR_CTL0_CONF | MVL_LEDFCR_CTL1_CONF;
+	ngbe_write_phy_reg_mdi(hw, MVL_LEDFCR, 0, value);
+	ngbe_read_phy_reg_mdi(hw, MVL_LEDPCR, 0, &value);
+	value &= ~(MVL_LEDPCR_CTL0 | MVL_LEDPCR_CTL1);
+	value |= MVL_LEDPCR_CTL0_CONF | MVL_LEDPCR_CTL1_CONF;
+	ngbe_write_phy_reg_mdi(hw, MVL_LEDPCR, 0, value);
+
+	return ret_val;
+}
+
 s32 ngbe_setup_phy_link_mvl(struct ngbe_hw *hw, u32 speed,
 				bool autoneg_wait_to_complete)
 {
diff --git a/drivers/net/ngbe/base/ngbe_phy_mvl.h b/drivers/net/ngbe/base/ngbe_phy_mvl.h
index a663a429dd..34cb1e838a 100644
--- a/drivers/net/ngbe/base/ngbe_phy_mvl.h
+++ b/drivers/net/ngbe/base/ngbe_phy_mvl.h
@@ -86,6 +86,7 @@ s32 ngbe_read_phy_reg_mvl(struct ngbe_hw *hw, u32 reg_addr, u32 device_type,
 			u16 *phy_data);
 s32 ngbe_write_phy_reg_mvl(struct ngbe_hw *hw, u32 reg_addr, u32 device_type,
 			u16 phy_data);
+s32 ngbe_init_phy_mvl(struct ngbe_hw *hw);
 
 s32 ngbe_reset_phy_mvl(struct ngbe_hw *hw);
 
diff --git a/drivers/net/ngbe/base/ngbe_phy_rtl.c b/drivers/net/ngbe/base/ngbe_phy_rtl.c
index 5214ce5a8a..535259d8fd 100644
--- a/drivers/net/ngbe/base/ngbe_phy_rtl.c
+++ b/drivers/net/ngbe/base/ngbe_phy_rtl.c
@@ -36,6 +36,64 @@ s32 ngbe_write_phy_reg_rtl(struct ngbe_hw *hw,
 	return 0;
 }
 
+s32 ngbe_init_phy_rtl(struct ngbe_hw *hw)
+{
+	int i;
+	u16 value = 0;
+
+	/* enable interrupts; only link status change and AN complete are allowed */
+	value = RTL_INER_LSC | RTL_INER_ANC;
+	hw->phy.write_reg(hw, RTL_INER, 0xa42, value);
+
+	hw->phy.read_reg(hw, RTL_INSR, 0xa43, &value);
+
+	for (i = 0; i < 15; i++) {
+		if (!rd32m(hw, NGBE_STAT,
+			NGBE_STAT_GPHY_IN_RST(hw->bus.lan_id)))
+			break;
+
+		msec_delay(10);
+	}
+	if (i == 15) {
+		DEBUGOUT("GPhy reset exceeded the maximum wait time.\n");
+		return NGBE_ERR_PHY_TIMEOUT;
+	}
+
+	for (i = 0; i < 1000; i++) {
+		hw->phy.read_reg(hw, RTL_INSR, 0xa43, &value);
+		if (value & RTL_INSR_ACCESS)
+			break;
+	}
+
+	hw->phy.write_reg(hw, RTL_SCR, 0xa46, RTL_SCR_EFUSE);
+	for (i = 0; i < 1000; i++) {
+		hw->phy.read_reg(hw, RTL_INSR, 0xa43, &value);
+		if (value & RTL_INSR_ACCESS)
+			break;
+	}
+	if (i == 1000)
+		return NGBE_ERR_PHY_TIMEOUT;
+
+	hw->phy.write_reg(hw, RTL_SCR, 0xa46, RTL_SCR_EXTINI);
+	for (i = 0; i < 1000; i++) {
+		hw->phy.read_reg(hw, RTL_INSR, 0xa43, &value);
+		if (value & RTL_INSR_ACCESS)
+			break;
+	}
+	if (i == 1000)
+		return NGBE_ERR_PHY_TIMEOUT;
+
+	for (i = 0; i < 1000; i++) {
+		hw->phy.read_reg(hw, RTL_GSR, 0xa42, &value);
+		if ((value & RTL_GSR_ST) == RTL_GSR_ST_LANON)
+			break;
+	}
+	if (i == 1000)
+		return NGBE_ERR_PHY_TIMEOUT;
+
+	return 0;
+}
+
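All four loops above are instances of one bounded-poll pattern: kick off an access, then retry a status read a fixed number of times before declaring NGBE_ERR_PHY_TIMEOUT. The skeleton abstracted into a sketch, with a hypothetical callback standing in for the register read:

    #include <stdbool.h>

    /* Retry read_ready() up to max_tries times; the caller maps a
     * false return to a timeout status code.
     */
    static bool
    poll_until_ready(bool (*read_ready)(void *ctx), void *ctx,
                     int max_tries)
    {
        int i;

        for (i = 0; i < max_tries; i++) {
            if (read_ready(ctx))
                return true;
            /* a real loop would usually delay here, e.g. msec_delay() */
        }
        return false;
    }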
 /**
  *  ngbe_setup_phy_link_rtl - Set and restart auto-neg
  *  @hw: pointer to hardware structure
diff --git a/drivers/net/ngbe/base/ngbe_phy_rtl.h b/drivers/net/ngbe/base/ngbe_phy_rtl.h
index e8bc4a1bd7..e6e7df5254 100644
--- a/drivers/net/ngbe/base/ngbe_phy_rtl.h
+++ b/drivers/net/ngbe/base/ngbe_phy_rtl.h
@@ -80,6 +80,8 @@ s32 ngbe_write_phy_reg_rtl(struct ngbe_hw *hw, u32 reg_addr, u32 device_type,
 
 s32 ngbe_setup_phy_link_rtl(struct ngbe_hw *hw,
 		u32 speed, bool autoneg_wait_to_complete);
+
+s32 ngbe_init_phy_rtl(struct ngbe_hw *hw);
 s32 ngbe_reset_phy_rtl(struct ngbe_hw *hw);
 s32 ngbe_check_phy_link_rtl(struct ngbe_hw *hw,
 			u32 *speed, bool *link_up);
diff --git a/drivers/net/ngbe/base/ngbe_phy_yt.c b/drivers/net/ngbe/base/ngbe_phy_yt.c
index f518dc0af6..94d3430fa4 100644
--- a/drivers/net/ngbe/base/ngbe_phy_yt.c
+++ b/drivers/net/ngbe/base/ngbe_phy_yt.c
@@ -98,6 +98,32 @@ s32 ngbe_write_phy_reg_sds_ext_yt(struct ngbe_hw *hw,
 	return 0;
 }
 
+s32 ngbe_init_phy_yt(struct ngbe_hw *hw)
+{
+	u16 value = 0;
+
+	DEBUGFUNC("ngbe_init_phy_yt");
+
+	if (hw->phy.type != ngbe_phy_yt8521s_sfi)
+		return 0;
+
+	/* select sds area register */
+	ngbe_write_phy_reg_ext_yt(hw, YT_SMI_PHY, 0, 0);
+	/* enable interrupts */
+	ngbe_write_phy_reg_mdi(hw, YT_INTR, 0, YT_INTR_ENA_MASK);
+
+	/* select fiber_to_rgmii first in multiplex */
+	ngbe_read_phy_reg_ext_yt(hw, YT_MISC, 0, &value);
+	value |= YT_MISC_FIBER_PRIO;
+	ngbe_write_phy_reg_ext_yt(hw, YT_MISC, 0, value);
+
+	hw->phy.read_reg(hw, YT_BCR, 0, &value);
+	value |= YT_BCR_PWDN;
+	hw->phy.write_reg(hw, YT_BCR, 0, value);
+
+	return 0;
+}
+
 s32 ngbe_setup_phy_link_yt(struct ngbe_hw *hw, u32 speed,
 				bool autoneg_wait_to_complete)
 {
diff --git a/drivers/net/ngbe/base/ngbe_phy_yt.h b/drivers/net/ngbe/base/ngbe_phy_yt.h
index 26820ecb92..5babd841c1 100644
--- a/drivers/net/ngbe/base/ngbe_phy_yt.h
+++ b/drivers/net/ngbe/base/ngbe_phy_yt.h
@@ -65,6 +65,7 @@ s32 ngbe_read_phy_reg_sds_ext_yt(struct ngbe_hw *hw,
 		u32 reg_addr, u32 device_type, u16 *phy_data);
 s32 ngbe_write_phy_reg_sds_ext_yt(struct ngbe_hw *hw,
 		u32 reg_addr, u32 device_type, u16 phy_data);
+s32 ngbe_init_phy_yt(struct ngbe_hw *hw);
 
 s32 ngbe_reset_phy_yt(struct ngbe_hw *hw);
 
diff --git a/drivers/net/ngbe/base/ngbe_type.h b/drivers/net/ngbe/base/ngbe_type.h
index bc99d9c3db..601fb85b91 100644
--- a/drivers/net/ngbe/base/ngbe_type.h
+++ b/drivers/net/ngbe/base/ngbe_type.h
@@ -26,6 +26,12 @@ struct ngbe_thermal_sensor_data {
 	struct ngbe_thermal_diode_data sensor[1];
 };
 
+/* Physical layer type */
+#define NGBE_PHYSICAL_LAYER_UNKNOWN		0
+#define NGBE_PHYSICAL_LAYER_10GBASE_T		0x00001
+#define NGBE_PHYSICAL_LAYER_1000BASE_T		0x00002
+#define NGBE_PHYSICAL_LAYER_100BASE_TX		0x00004
+
 enum ngbe_eeprom_type {
 	ngbe_eeprom_unknown = 0,
 	ngbe_eeprom_spi,
@@ -93,15 +99,20 @@ struct ngbe_rom_info {
 struct ngbe_mac_info {
 	s32 (*init_hw)(struct ngbe_hw *hw);
 	s32 (*reset_hw)(struct ngbe_hw *hw);
+	s32 (*start_hw)(struct ngbe_hw *hw);
 	s32 (*stop_hw)(struct ngbe_hw *hw);
 	s32 (*get_mac_addr)(struct ngbe_hw *hw, u8 *mac_addr);
 	s32 (*acquire_swfw_sync)(struct ngbe_hw *hw, u32 mask);
 	void (*release_swfw_sync)(struct ngbe_hw *hw, u32 mask);
 
+	/* Link */
 	s32 (*setup_link)(struct ngbe_hw *hw, u32 speed,
 			       bool autoneg_wait_to_complete);
 	s32 (*check_link)(struct ngbe_hw *hw, u32 *speed,
 			       bool *link_up, bool link_up_wait_to_complete);
+	s32 (*get_link_capabilities)(struct ngbe_hw *hw,
+				      u32 *speed, bool *autoneg);
+
 	/* RAR */
 	s32 (*set_rar)(struct ngbe_hw *hw, u32 index, u8 *addr, u32 vmdq,
 			  u32 enable_addr);
@@ -126,10 +137,13 @@ struct ngbe_mac_info {
 	struct ngbe_thermal_sensor_data  thermal_sensor_data;
 	bool set_lben;
 	u32  max_link_up_time;
+
+	u32 default_speeds;
 };
 
 struct ngbe_phy_info {
 	s32 (*identify)(struct ngbe_hw *hw);
+	s32 (*init_hw)(struct ngbe_hw *hw);
 	s32 (*reset_hw)(struct ngbe_hw *hw);
 	s32 (*read_reg)(struct ngbe_hw *hw, u32 reg_addr,
 				u32 device_type, u16 *phy_data);
@@ -151,6 +165,7 @@ struct ngbe_phy_info {
 	u32 phy_semaphore_mask;
 	bool reset_disable;
 	u32 autoneg_advertised;
+	u32 link_mode;
 };
 
 enum ngbe_isb_idx {
@@ -178,6 +193,8 @@ struct ngbe_hw {
 
 	uint64_t isb_dma;
 	void IOMEM *isb_mem;
+	u16 nb_rx_queues;
+	u16 nb_tx_queues;
 
 	bool is_pf;
 };
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 1a6419e5a4..3812663591 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -14,9 +14,17 @@
 #include "ngbe_rxtx.h"
 
 static int ngbe_dev_close(struct rte_eth_dev *dev);
-
+static int ngbe_dev_link_update(struct rte_eth_dev *dev,
+				int wait_to_complete);
+
+static void ngbe_dev_link_status_print(struct rte_eth_dev *dev);
+static int ngbe_dev_lsc_interrupt_setup(struct rte_eth_dev *dev, uint8_t on);
+static int ngbe_dev_macsec_interrupt_setup(struct rte_eth_dev *dev);
+static int ngbe_dev_misc_interrupt_setup(struct rte_eth_dev *dev);
+static int ngbe_dev_rxq_interrupt_setup(struct rte_eth_dev *dev);
 static void ngbe_dev_interrupt_handler(void *param);
 static void ngbe_dev_interrupt_delayed_handler(void *param);
+static void ngbe_configure_msix(struct rte_eth_dev *dev);
 
 /*
  * The set of PCI devices this driver supports
@@ -52,6 +60,25 @@ static const struct rte_eth_desc_lim tx_desc_lim = {
 };
 
 static const struct eth_dev_ops ngbe_eth_dev_ops;
+static inline int32_t
+ngbe_pf_reset_hw(struct ngbe_hw *hw)
+{
+	uint32_t ctrl_ext;
+	int32_t status;
+
+	status = hw->mac.reset_hw(hw);
+
+	ctrl_ext = rd32(hw, NGBE_PORTCTL);
+	/* Set PF Reset Done bit so PF/VF Mail Ops can work */
+	ctrl_ext |= NGBE_PORTCTL_RSTDONE;
+	wr32(hw, NGBE_PORTCTL, ctrl_ext);
+	ngbe_flush(hw);
+
+	if (status == NGBE_ERR_SFP_NOT_PRESENT)
+		status = 0;
+	return status;
+}
+
 static inline void
 ngbe_enable_intr(struct rte_eth_dev *dev)
 {
@@ -217,9 +244,18 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
 	ctrl_ext = rd32(hw, NGBE_PORTCTL);
 	/* let hardware know driver is loaded */
 	ctrl_ext |= NGBE_PORTCTL_DRVLOAD;
+	/* Set PF Reset Done bit so PF/VF Mail Ops can work */
+	ctrl_ext |= NGBE_PORTCTL_RSTDONE;
 	wr32(hw, NGBE_PORTCTL, ctrl_ext);
 	ngbe_flush(hw);
 
+	PMD_INIT_LOG(DEBUG, "MAC: %d, PHY: %d",
+			(int)hw->mac.type, (int)hw->phy.type);
+
+	PMD_INIT_LOG(DEBUG, "port %d vendorID=0x%x deviceID=0x%x",
+		     eth_dev->data->port_id, pci_dev->id.vendor_id,
+		     pci_dev->id.device_id);
+
 	rte_intr_callback_register(intr_handle,
 				   ngbe_dev_interrupt_handler, eth_dev);
 
@@ -302,6 +338,196 @@ ngbe_dev_configure(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static void
+ngbe_dev_phy_intr_setup(struct rte_eth_dev *dev)
+{
+	struct ngbe_hw *hw = NGBE_DEV_HW(dev);
+	struct ngbe_interrupt *intr = NGBE_DEV_INTR(dev);
+
+	wr32(hw, NGBE_GPIODIR, NGBE_GPIODIR_DDR(1));
+	wr32(hw, NGBE_GPIOINTEN, NGBE_GPIOINTEN_INT(3));
+	wr32(hw, NGBE_GPIOINTTYPE, NGBE_GPIOINTTYPE_LEVEL(0));
+	if (hw->phy.type == ngbe_phy_yt8521s_sfi)
+		wr32(hw, NGBE_GPIOINTPOL, NGBE_GPIOINTPOL_ACT(0));
+	else
+		wr32(hw, NGBE_GPIOINTPOL, NGBE_GPIOINTPOL_ACT(3));
+
+	intr->mask_misc |= NGBE_ICRMISC_GPIO;
+}
+
+/*
+ * Configure the device link speed and set up the link.
+ * It returns 0 on success.
+ */
+static int
+ngbe_dev_start(struct rte_eth_dev *dev)
+{
+	struct ngbe_hw *hw = NGBE_DEV_HW(dev);
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+	uint32_t intr_vector = 0;
+	int err;
+	bool link_up = false, negotiate = 0;
+	uint32_t speed = 0;
+	uint32_t allowed_speeds = 0;
+	int status;
+	uint32_t *link_speeds;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (dev->data->dev_conf.link_speeds & ETH_LINK_SPEED_FIXED) {
+		PMD_INIT_LOG(ERR,
+		"Invalid link_speeds for port %u; fixed speed is not supported",
+				dev->data->port_id);
+		return -EINVAL;
+	}
+
+	/* disable uio/vfio intr/eventfd mapping */
+	rte_intr_disable(intr_handle);
+
+	/* stop adapter */
+	hw->adapter_stopped = 0;
+	ngbe_stop_hw(hw);
+
+	/* reinitialize adapter
+	 * this calls reset and start
+	 */
+	hw->nb_rx_queues = dev->data->nb_rx_queues;
+	hw->nb_tx_queues = dev->data->nb_tx_queues;
+	status = ngbe_pf_reset_hw(hw);
+	if (status != 0)
+		return -1;
+	hw->mac.start_hw(hw);
+	hw->mac.get_link_status = true;
+
+	ngbe_dev_phy_intr_setup(dev);
+
+	/* check and configure queue intr-vector mapping */
+	if ((rte_intr_cap_multiple(intr_handle) ||
+	     !RTE_ETH_DEV_SRIOV(dev).active) &&
+	    dev->data->dev_conf.intr_conf.rxq != 0) {
+		intr_vector = dev->data->nb_rx_queues;
+		if (rte_intr_efd_enable(intr_handle, intr_vector))
+			return -1;
+	}
+
+	if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
+		intr_handle->intr_vec =
+			rte_zmalloc("intr_vec",
+				    dev->data->nb_rx_queues * sizeof(int), 0);
+		if (intr_handle->intr_vec == NULL) {
+			PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
+				     " intr_vec", dev->data->nb_rx_queues);
+			return -ENOMEM;
+		}
+	}
+
+	/* configure MSI-X for sleep until Rx interrupt */
+	ngbe_configure_msix(dev);
+
+	/* initialize transmission unit */
+	ngbe_dev_tx_init(dev);
+
+	/* This can fail when allocating mbufs for descriptor rings */
+	err = ngbe_dev_rx_init(dev);
+	if (err) {
+		PMD_INIT_LOG(ERR, "Unable to initialize RX hardware");
+		goto error;
+	}
+
+	/* Skip link setup if loopback mode is enabled. */
+	if (hw->is_pf && dev->data->dev_conf.lpbk_mode)
+		goto skip_link_setup;
+
+	err = hw->mac.check_link(hw, &speed, &link_up, 0);
+	if (err)
+		goto error;
+	dev->data->dev_link.link_status = link_up;
+
+	err = hw->mac.get_link_capabilities(hw, &speed, &negotiate);
+	if (err)
+		goto error;
+
+	allowed_speeds = 0;
+	if (hw->mac.default_speeds & NGBE_LINK_SPEED_1GB_FULL)
+		allowed_speeds |= ETH_LINK_SPEED_1G;
+	if (hw->mac.default_speeds & NGBE_LINK_SPEED_100M_FULL)
+		allowed_speeds |= ETH_LINK_SPEED_100M;
+	if (hw->mac.default_speeds & NGBE_LINK_SPEED_10M_FULL)
+		allowed_speeds |= ETH_LINK_SPEED_10M;
+
+	link_speeds = &dev->data->dev_conf.link_speeds;
+	if (*link_speeds & ~allowed_speeds) {
+		PMD_INIT_LOG(ERR, "Invalid link setting");
+		goto error;
+	}
+
+	speed = 0x0;
+	if (*link_speeds == ETH_LINK_SPEED_AUTONEG) {
+		speed = hw->mac.default_speeds;
+	} else {
+		if (*link_speeds & ETH_LINK_SPEED_1G)
+			speed |= NGBE_LINK_SPEED_1GB_FULL;
+		if (*link_speeds & ETH_LINK_SPEED_100M)
+			speed |= NGBE_LINK_SPEED_100M_FULL;
+		if (*link_speeds & ETH_LINK_SPEED_10M)
+			speed |= NGBE_LINK_SPEED_10M_FULL;
+	}
+
+	hw->phy.init_hw(hw);
+	err = hw->mac.setup_link(hw, speed, link_up);
+	if (err)
+		goto error;
+
+skip_link_setup:
+
+	if (rte_intr_allow_others(intr_handle)) {
+		ngbe_dev_misc_interrupt_setup(dev);
+		/* check if lsc interrupt is enabled */
+		if (dev->data->dev_conf.intr_conf.lsc != 0)
+			ngbe_dev_lsc_interrupt_setup(dev, TRUE);
+		else
+			ngbe_dev_lsc_interrupt_setup(dev, FALSE);
+		ngbe_dev_macsec_interrupt_setup(dev);
+		ngbe_set_ivar_map(hw, -1, 1, NGBE_MISC_VEC_ID);
+	} else {
+		rte_intr_callback_unregister(intr_handle,
+					     ngbe_dev_interrupt_handler, dev);
+		if (dev->data->dev_conf.intr_conf.lsc != 0)
+			PMD_INIT_LOG(INFO, "LSC won't be enabled because"
+				     " interrupt multiplexing is unavailable");
+	}
+
+	/* check if rxq interrupt is enabled */
+	if (dev->data->dev_conf.intr_conf.rxq != 0 &&
+	    rte_intr_dp_is_en(intr_handle))
+		ngbe_dev_rxq_interrupt_setup(dev);
+
+	/* enable uio/vfio intr/eventfd mapping */
+	rte_intr_enable(intr_handle);
+
+	/* resume enabled intr since hw reset */
+	ngbe_enable_intr(dev);
+
+	if ((hw->sub_system_id & NGBE_OEM_MASK) == NGBE_LY_M88E1512_SFP ||
+		(hw->sub_system_id & NGBE_OEM_MASK) == NGBE_LY_YT8521S_SFP) {
+		/* GPIO0 is used for power on/off control */
+		wr32(hw, NGBE_GPIODATA, 0);
+	}
+
+	/*
+	 * Update link status right before return, because it may
+	 * start link configuration process in a separate thread.
+	 */
+	ngbe_dev_link_update(dev, 0);
+
+	return 0;
+
+error:
+	PMD_INIT_LOG(ERR, "failure in dev start: %d", err);
+	return -EIO;
+}
+
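For reference, this entry point is reached through the standard ethdev bring-up sequence on the application side. A minimal sketch with error handling trimmed, one queue per direction and a default configuration assumed:

    #include <rte_ethdev.h>

    static int bring_up_port(uint16_t port_id, struct rte_mempool *mp)
    {
        struct rte_eth_conf conf = {0};
        int sock = rte_eth_dev_socket_id(port_id);

        if (rte_eth_dev_configure(port_id, 1, 1, &conf) < 0)
            return -1;
        if (rte_eth_rx_queue_setup(port_id, 0, 128, sock, NULL, mp) < 0)
            return -1;
        if (rte_eth_tx_queue_setup(port_id, 0, 128, sock, NULL) < 0)
            return -1;
        /* ends up in ngbe_dev_start() for an ngbe port */
        return rte_eth_dev_start(port_id);
    }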
 /*
  * Reset and stop device.
  */
@@ -487,6 +713,106 @@ ngbe_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 	return ngbe_dev_link_update_share(dev, wait_to_complete);
 }
 
+/**
+ * It clears the interrupt causes and enables the interrupt.
+ * It is called only once during NIC initialization.
+ *
+ * @param dev
+ *  Pointer to struct rte_eth_dev.
+ * @param on
+ *  Enable or Disable.
+ *
+ * @return
+ *  - On success, zero.
+ *  - On failure, a negative value.
+ */
+static int
+ngbe_dev_lsc_interrupt_setup(struct rte_eth_dev *dev, uint8_t on)
+{
+	struct ngbe_interrupt *intr = NGBE_DEV_INTR(dev);
+
+	ngbe_dev_link_status_print(dev);
+	if (on) {
+		intr->mask_misc |= NGBE_ICRMISC_PHY;
+		intr->mask_misc |= NGBE_ICRMISC_GPIO;
+	} else {
+		intr->mask_misc &= ~NGBE_ICRMISC_PHY;
+		intr->mask_misc &= ~NGBE_ICRMISC_GPIO;
+	}
+
+	return 0;
+}
+
+/**
+ * It clears the interrupt causes and enables the interrupt.
+ * It is called only once during NIC initialization.
+ *
+ * @param dev
+ *  Pointer to struct rte_eth_dev.
+ *
+ * @return
+ *  - On success, zero.
+ *  - On failure, a negative value.
+ */
+static int
+ngbe_dev_misc_interrupt_setup(struct rte_eth_dev *dev)
+{
+	struct ngbe_interrupt *intr = NGBE_DEV_INTR(dev);
+	u64 mask;
+
+	mask = NGBE_ICR_MASK;
+	mask &= (1ULL << NGBE_MISC_VEC_ID);
+	intr->mask |= mask;
+	intr->mask_misc |= NGBE_ICRMISC_GPIO;
+
+	return 0;
+}
+
+/**
+ * It clears the interrupt causes and enables the interrupt.
+ * It is called only once during NIC initialization.
+ *
+ * @param dev
+ *  Pointer to struct rte_eth_dev.
+ *
+ * @return
+ *  - On success, zero.
+ *  - On failure, a negative value.
+ */
+static int
+ngbe_dev_rxq_interrupt_setup(struct rte_eth_dev *dev)
+{
+	struct ngbe_interrupt *intr = NGBE_DEV_INTR(dev);
+	u64 mask;
+
+	mask = NGBE_ICR_MASK;
+	mask &= ~((1ULL << NGBE_RX_VEC_START) - 1);
+	intr->mask |= mask;
+
+	return 0;
+}
+
+/**
+ * It clears the interrupt causes and enables the interrupt.
+ * It is called only once during NIC initialization.
+ *
+ * @param dev
+ *  Pointer to struct rte_eth_dev.
+ *
+ * @return
+ *  - On success, zero.
+ *  - On failure, a negative value.
+ */
+static int
+ngbe_dev_macsec_interrupt_setup(struct rte_eth_dev *dev)
+{
+	struct ngbe_interrupt *intr = NGBE_DEV_INTR(dev);
+
+	intr->mask_misc |= NGBE_ICRMISC_LNKSEC;
+
+	return 0;
+}
+
 /*
  * It reads ICR and sets flag for the link_update.
  *
@@ -693,9 +1019,101 @@ ngbe_dev_interrupt_handler(void *param)
 	ngbe_dev_interrupt_action(dev);
 }
 
+/**
+ * set the IVAR registers, mapping interrupt causes to vectors
+ * @param hw
+ *  pointer to ngbe_hw struct
+ * @param direction
+ *  0 for Rx, 1 for Tx, -1 for other causes
+ * @param queue
+ *  queue to map the corresponding interrupt to
+ * @param msix_vector
+ *  the vector to map to the corresponding queue
+ */
+void
+ngbe_set_ivar_map(struct ngbe_hw *hw, int8_t direction,
+		   uint8_t queue, uint8_t msix_vector)
+{
+	uint32_t tmp, idx;
+
+	if (direction == -1) {
+		/* other causes */
+		msix_vector |= NGBE_IVARMISC_VLD;
+		idx = 0;
+		tmp = rd32(hw, NGBE_IVARMISC);
+		tmp &= ~(0xFF << idx);
+		tmp |= (msix_vector << idx);
+		wr32(hw, NGBE_IVARMISC, tmp);
+	} else {
+		/* rx or tx causes */
+		/* Workaround for lost ICR */
+		idx = ((16 * (queue & 1)) + (8 * direction));
+		tmp = rd32(hw, NGBE_IVAR(queue >> 1));
+		tmp &= ~(0xFF << idx);
+		tmp |= (msix_vector << idx);
+		wr32(hw, NGBE_IVAR(queue >> 1), tmp);
+	}
+}
+
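The IVAR update is a read-modify-write of one 8-bit lane in a 32-bit register: two queues share a register, and the Rx and Tx causes sit 8 bits apart. The lane arithmetic in isolation, as a checkable sketch:

    #include <assert.h>
    #include <stdint.h>

    /* direction: 0 = Rx, 1 = Tx; two queues per register */
    static uint32_t
    ivar_set_lane(uint32_t reg, int direction, int queue, uint8_t vector)
    {
        uint32_t shift = (16 * (queue & 1)) + (8 * direction);

        reg &= ~(0xFFu << shift);
        reg |= ((uint32_t)vector << shift);
        return reg;
    }

    int main(void)
    {
        /* Tx cause of queue 1 -> vector 5 lands in bits 24..31 */
        assert(ivar_set_lane(0, 1, 1, 5) == (5u << 24));
        return 0;
    }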
+/**
+ * Sets up the hardware to properly generate MSI-X interrupts
+ * @param dev
+ *  Pointer to struct rte_eth_dev.
+ */
+static void
+ngbe_configure_msix(struct rte_eth_dev *dev)
+{
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+	struct ngbe_hw *hw = NGBE_DEV_HW(dev);
+	uint32_t queue_id, base = NGBE_MISC_VEC_ID;
+	uint32_t vec = NGBE_MISC_VEC_ID;
+	uint32_t gpie;
+
+	/* Won't configure MSI-X registers if no mapping is done
+	 * between intr vector and event fd,
+	 * but if MSI-X has been enabled already, we need to configure
+	 * auto clean, auto mask and throttling.
+	 */
+	gpie = rd32(hw, NGBE_GPIE);
+	if (!rte_intr_dp_is_en(intr_handle) &&
+	    !(gpie & NGBE_GPIE_MSIX))
+		return;
+
+	if (rte_intr_allow_others(intr_handle)) {
+		base = NGBE_RX_VEC_START;
+		vec = base;
+	}
+
+	/* setup GPIE for MSI-x mode */
+	gpie = rd32(hw, NGBE_GPIE);
+	gpie |= NGBE_GPIE_MSIX;
+	wr32(hw, NGBE_GPIE, gpie);
+
+	/* Populate the IVAR table and set the ITR values to the
+	 * corresponding register.
+	 */
+	if (rte_intr_dp_is_en(intr_handle)) {
+		for (queue_id = 0; queue_id < dev->data->nb_rx_queues;
+			queue_id++) {
+			/* by default, 1:1 mapping */
+			ngbe_set_ivar_map(hw, 0, queue_id, vec);
+			intr_handle->intr_vec[queue_id] = vec;
+			if (vec < base + intr_handle->nb_efd - 1)
+				vec++;
+		}
+
+		ngbe_set_ivar_map(hw, -1, 1, NGBE_MISC_VEC_ID);
+	}
+	wr32(hw, NGBE_ITR(NGBE_MISC_VEC_ID),
+			NGBE_ITR_IVAL_1G(NGBE_QUEUE_ITR_INTERVAL_DEFAULT)
+			| NGBE_ITR_WRDSA);
+}
+
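With the IVAR table populated, applications arm per-queue Rx interrupts through the generic ethdev API. A hedged sketch of the usual sleep-on-idle pattern (the blocking step itself is elided):

    #include <rte_ethdev.h>

    static void rxq_sleep_until_traffic(uint16_t port_id, uint16_t qid)
    {
        rte_eth_dev_rx_intr_enable(port_id, qid);
        /* ... block on the queue's event fd, e.g. via epoll ... */
        rte_eth_dev_rx_intr_disable(port_id, qid);
    }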
 static const struct eth_dev_ops ngbe_eth_dev_ops = {
 	.dev_configure              = ngbe_dev_configure,
 	.dev_infos_get              = ngbe_dev_info_get,
+	.dev_start                  = ngbe_dev_start,
 	.link_update                = ngbe_dev_link_update,
 	.dev_supported_ptypes_get   = ngbe_dev_supported_ptypes_get,
 	.rx_queue_setup             = ngbe_dev_rx_queue_setup,
diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
index 035b1ad5c8..0b8dba571b 100644
--- a/drivers/net/ngbe/ngbe_ethdev.h
+++ b/drivers/net/ngbe/ngbe_ethdev.h
@@ -18,6 +18,8 @@
 #define NGBE_VLAN_TAG_SIZE 4
 #define NGBE_HKEY_MAX_INDEX 10
 
+#define NGBE_QUEUE_ITR_INTERVAL_DEFAULT	500 /* 500us */
+
 #define NGBE_RSS_OFFLOAD_ALL ( \
 	ETH_RSS_IPV4 | \
 	ETH_RSS_NONFRAG_IPV4_TCP | \
@@ -30,6 +32,7 @@
 	ETH_RSS_IPV6_UDP_EX)
 
 #define NGBE_MISC_VEC_ID               RTE_INTR_VEC_ZERO_OFFSET
+#define NGBE_RX_VEC_START              RTE_INTR_VEC_RXTX_OFFSET
 
 /* structure for interrupt relative data */
 struct ngbe_interrupt {
@@ -92,6 +95,9 @@ uint16_t ngbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 uint16_t ngbe_xmit_pkts_simple(void *tx_queue, struct rte_mbuf **tx_pkts,
 		uint16_t nb_pkts);
 
+void ngbe_set_ivar_map(struct ngbe_hw *hw, int8_t direction,
+			       uint8_t queue, uint8_t msix_vector);
+
 int
 ngbe_dev_link_update_share(struct rte_eth_dev *dev,
 		int wait_to_complete);
-- 
2.27.0




^ permalink raw reply	[flat|nested] 51+ messages in thread

* [dpdk-dev] [PATCH v5 23/24] net/ngbe: start and stop RxTx
  2021-06-02  9:40 [dpdk-dev] [PATCH v5 00/24] net: ngbe PMD Jiawen Wu
                   ` (21 preceding siblings ...)
  2021-06-02  9:41 ` [dpdk-dev] [PATCH v5 22/24] net/ngbe: add device start operation Jiawen Wu
@ 2021-06-02  9:41 ` Jiawen Wu
  2021-06-14 20:44   ` Andrew Rybchenko
  2021-06-02  9:41 ` [dpdk-dev] [PATCH v5 24/24] net/ngbe: add device stop operation Jiawen Wu
                   ` (2 subsequent siblings)
  25 siblings, 1 reply; 51+ messages in thread
From: Jiawen Wu @ 2021-06-02  9:41 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Support starting and stopping the receive and transmit units for
specified queues.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 doc/guides/nics/features/ngbe.ini  |   1 +
 drivers/net/ngbe/base/ngbe_dummy.h |  15 ++
 drivers/net/ngbe/base/ngbe_hw.c    | 105 ++++++++++
 drivers/net/ngbe/base/ngbe_hw.h    |   4 +
 drivers/net/ngbe/base/ngbe_type.h  |   5 +
 drivers/net/ngbe/ngbe_ethdev.c     |  10 +
 drivers/net/ngbe/ngbe_ethdev.h     |  15 ++
 drivers/net/ngbe/ngbe_rxtx.c       | 307 +++++++++++++++++++++++++++++
 drivers/net/ngbe/ngbe_rxtx.h       |   3 +
 9 files changed, 465 insertions(+)

diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini
index 443c6691a3..43b6b2c2c7 100644
--- a/doc/guides/nics/features/ngbe.ini
+++ b/doc/guides/nics/features/ngbe.ini
@@ -7,6 +7,7 @@
 Speed capabilities   = Y
 Link status          = Y
 Link status event    = Y
+Queue start/stop     = Y
 Jumbo frame          = Y
 Scattered Rx         = Y
 TSO                  = Y
diff --git a/drivers/net/ngbe/base/ngbe_dummy.h b/drivers/net/ngbe/base/ngbe_dummy.h
index dfc7b13192..384631b4f1 100644
--- a/drivers/net/ngbe/base/ngbe_dummy.h
+++ b/drivers/net/ngbe/base/ngbe_dummy.h
@@ -59,6 +59,18 @@ static inline s32 ngbe_mac_get_mac_addr_dummy(struct ngbe_hw *TUP0, u8 *TUP1)
 {
 	return NGBE_ERR_OPS_DUMMY;
 }
+static inline s32 ngbe_mac_enable_rx_dma_dummy(struct ngbe_hw *TUP0, u32 TUP1)
+{
+	return NGBE_ERR_OPS_DUMMY;
+}
+static inline s32 ngbe_mac_disable_sec_rx_path_dummy(struct ngbe_hw *TUP0)
+{
+	return NGBE_ERR_OPS_DUMMY;
+}
+static inline s32 ngbe_mac_enable_sec_rx_path_dummy(struct ngbe_hw *TUP0)
+{
+	return NGBE_ERR_OPS_DUMMY;
+}
 static inline s32 ngbe_mac_acquire_swfw_sync_dummy(struct ngbe_hw *TUP0,
 					u32 TUP1)
 {
@@ -167,6 +179,9 @@ static inline void ngbe_init_ops_dummy(struct ngbe_hw *hw)
 	hw->mac.start_hw = ngbe_mac_start_hw_dummy;
 	hw->mac.stop_hw = ngbe_mac_stop_hw_dummy;
 	hw->mac.get_mac_addr = ngbe_mac_get_mac_addr_dummy;
+	hw->mac.enable_rx_dma = ngbe_mac_enable_rx_dma_dummy;
+	hw->mac.disable_sec_rx_path = ngbe_mac_disable_sec_rx_path_dummy;
+	hw->mac.enable_sec_rx_path = ngbe_mac_enable_sec_rx_path_dummy;
 	hw->mac.acquire_swfw_sync = ngbe_mac_acquire_swfw_sync_dummy;
 	hw->mac.release_swfw_sync = ngbe_mac_release_swfw_sync_dummy;
 	hw->mac.setup_link = ngbe_mac_setup_link_dummy;
diff --git a/drivers/net/ngbe/base/ngbe_hw.c b/drivers/net/ngbe/base/ngbe_hw.c
index b0bc714741..030068f3f7 100644
--- a/drivers/net/ngbe/base/ngbe_hw.c
+++ b/drivers/net/ngbe/base/ngbe_hw.c
@@ -536,6 +536,63 @@ void ngbe_release_swfw_sync(struct ngbe_hw *hw, u32 mask)
 	ngbe_release_eeprom_semaphore(hw);
 }
 
+/**
+ *  ngbe_disable_sec_rx_path - Stops the receive data path
+ *  @hw: pointer to hardware structure
+ *
+ *  Stops the receive data path and waits for the HW to internally empty
+ *  the Rx security block
+ **/
+s32 ngbe_disable_sec_rx_path(struct ngbe_hw *hw)
+{
+#define NGBE_MAX_SECRX_POLL 4000
+
+	int i;
+	u32 secrxreg;
+
+	DEBUGFUNC("ngbe_disable_sec_rx_path");
+
+	secrxreg = rd32(hw, NGBE_SECRXCTL);
+	secrxreg |= NGBE_SECRXCTL_XDSA;
+	wr32(hw, NGBE_SECRXCTL, secrxreg);
+	for (i = 0; i < NGBE_MAX_SECRX_POLL; i++) {
+		secrxreg = rd32(hw, NGBE_SECRXSTAT);
+		if (!(secrxreg & NGBE_SECRXSTAT_RDY))
+			/* Use interrupt-safe sleep just in case */
+			usec_delay(10);
+		else
+			break;
+	}
+
+	/* For informational purposes only */
+	if (i >= NGBE_MAX_SECRX_POLL)
+		DEBUGOUT("Rx unit being enabled before security "
+			 "path fully disabled.  Continuing with init.\n");
+
+	return 0;
+}
+
+/**
+ *  ngbe_enable_sec_rx_path - Enables the receive data path
+ *  @hw: pointer to hardware structure
+ *
+ *  Enables the receive data path.
+ **/
+s32 ngbe_enable_sec_rx_path(struct ngbe_hw *hw)
+{
+	u32 secrxreg;
+
+	DEBUGFUNC("ngbe_enable_sec_rx_path");
+
+	secrxreg = rd32(hw, NGBE_SECRXCTL);
+	secrxreg &= ~NGBE_SECRXCTL_XDSA;
+	wr32(hw, NGBE_SECRXCTL, secrxreg);
+	ngbe_flush(hw);
+
+	return 0;
+}
+
 /**
  *  ngbe_clear_vmdq - Disassociate a VMDq pool index from a rx address
  *  @hw: pointer to hardware struct
@@ -757,6 +814,21 @@ void ngbe_disable_rx(struct ngbe_hw *hw)
 	wr32m(hw, NGBE_MACRXCFG, NGBE_MACRXCFG_ENA, 0);
 }
 
+void ngbe_enable_rx(struct ngbe_hw *hw)
+{
+	u32 pfdtxgswc;
+
+	wr32m(hw, NGBE_MACRXCFG, NGBE_MACRXCFG_ENA, NGBE_MACRXCFG_ENA);
+	wr32m(hw, NGBE_PBRXCTL, NGBE_PBRXCTL_ENA, NGBE_PBRXCTL_ENA);
+
+	if (hw->mac.set_lben) {
+		pfdtxgswc = rd32(hw, NGBE_PSRCTL);
+		pfdtxgswc |= NGBE_PSRCTL_LBENA;
+		wr32(hw, NGBE_PSRCTL, pfdtxgswc);
+		hw->mac.set_lben = false;
+	}
+}
+
 /**
  *  ngbe_set_mac_type - Sets MAC type
  *  @hw: pointer to the HW structure
@@ -803,6 +875,36 @@ s32 ngbe_set_mac_type(struct ngbe_hw *hw)
 	return err;
 }
 
+/**
+ *  ngbe_enable_rx_dma - Enable the Rx DMA unit
+ *  @hw: pointer to hardware structure
+ *  @regval: register value to write to RXCTRL
+ *
+ *  Enables the Rx DMA unit
+ **/
+s32 ngbe_enable_rx_dma(struct ngbe_hw *hw, u32 regval)
+{
+	DEBUGFUNC("ngbe_enable_rx_dma");
+
+	/*
+	 * Work around a silicon erratum when enabling the Rx datapath.
+	 * If traffic is incoming before we enable the Rx unit, it could hang
+	 * the Rx DMA unit.  Therefore, make sure the security engine is
+	 * completely disabled prior to enabling the Rx unit.
+	 */
+
+	hw->mac.disable_sec_rx_path(hw);
+
+	if (regval & NGBE_PBRXCTL_ENA)
+		ngbe_enable_rx(hw);
+	else
+		ngbe_disable_rx(hw);
+
+	hw->mac.enable_sec_rx_path(hw);
+
+	return 0;
+}
+
 void ngbe_map_device_id(struct ngbe_hw *hw)
 {
 	u16 oem = hw->sub_system_id & NGBE_OEM_MASK;
@@ -887,11 +989,14 @@ s32 ngbe_init_ops_pf(struct ngbe_hw *hw)
 	mac->init_hw = ngbe_init_hw;
 	mac->reset_hw = ngbe_reset_hw_em;
 	mac->start_hw = ngbe_start_hw;
+	mac->enable_rx_dma = ngbe_enable_rx_dma;
 	mac->get_mac_addr = ngbe_get_mac_addr;
 	mac->stop_hw = ngbe_stop_hw;
 	mac->acquire_swfw_sync = ngbe_acquire_swfw_sync;
 	mac->release_swfw_sync = ngbe_release_swfw_sync;
 
+	mac->disable_sec_rx_path = ngbe_disable_sec_rx_path;
+	mac->enable_sec_rx_path = ngbe_enable_sec_rx_path;
 	/* RAR */
 	mac->set_rar = ngbe_set_rar;
 	mac->clear_rar = ngbe_clear_rar;
diff --git a/drivers/net/ngbe/base/ngbe_hw.h b/drivers/net/ngbe/base/ngbe_hw.h
index 4fee5735ac..01f41fe9b3 100644
--- a/drivers/net/ngbe/base/ngbe_hw.h
+++ b/drivers/net/ngbe/base/ngbe_hw.h
@@ -34,6 +34,8 @@ s32 ngbe_set_rar(struct ngbe_hw *hw, u32 index, u8 *addr, u32 vmdq,
 			  u32 enable_addr);
 s32 ngbe_clear_rar(struct ngbe_hw *hw, u32 index);
 s32 ngbe_init_rx_addrs(struct ngbe_hw *hw);
+s32 ngbe_disable_sec_rx_path(struct ngbe_hw *hw);
+s32 ngbe_enable_sec_rx_path(struct ngbe_hw *hw);
 
 s32 ngbe_validate_mac_addr(u8 *mac_addr);
 s32 ngbe_acquire_swfw_sync(struct ngbe_hw *hw, u32 mask);
@@ -46,10 +48,12 @@ s32 ngbe_init_uta_tables(struct ngbe_hw *hw);
 s32 ngbe_init_thermal_sensor_thresh(struct ngbe_hw *hw);
 s32 ngbe_mac_check_overtemp(struct ngbe_hw *hw);
 void ngbe_disable_rx(struct ngbe_hw *hw);
+void ngbe_enable_rx(struct ngbe_hw *hw);
 s32 ngbe_init_shared_code(struct ngbe_hw *hw);
 s32 ngbe_set_mac_type(struct ngbe_hw *hw);
 s32 ngbe_init_ops_pf(struct ngbe_hw *hw);
 s32 ngbe_init_phy(struct ngbe_hw *hw);
+s32 ngbe_enable_rx_dma(struct ngbe_hw *hw, u32 regval);
 void ngbe_map_device_id(struct ngbe_hw *hw);
 
 #endif /* _NGBE_HW_H_ */
diff --git a/drivers/net/ngbe/base/ngbe_type.h b/drivers/net/ngbe/base/ngbe_type.h
index 601fb85b91..134d2019e1 100644
--- a/drivers/net/ngbe/base/ngbe_type.h
+++ b/drivers/net/ngbe/base/ngbe_type.h
@@ -102,6 +102,9 @@ struct ngbe_mac_info {
 	s32 (*start_hw)(struct ngbe_hw *hw);
 	s32 (*stop_hw)(struct ngbe_hw *hw);
 	s32 (*get_mac_addr)(struct ngbe_hw *hw, u8 *mac_addr);
+	s32 (*enable_rx_dma)(struct ngbe_hw *hw, u32 regval);
+	s32 (*disable_sec_rx_path)(struct ngbe_hw *hw);
+	s32 (*enable_sec_rx_path)(struct ngbe_hw *hw);
 	s32 (*acquire_swfw_sync)(struct ngbe_hw *hw, u32 mask);
 	void (*release_swfw_sync)(struct ngbe_hw *hw, u32 mask);
 
@@ -196,6 +199,8 @@ struct ngbe_hw {
 	u16 nb_rx_queues;
 	u16 nb_tx_queues;
 
+	u32 q_rx_regs[8 * 4];
+	u32 q_tx_regs[8 * 4];
 	bool is_pf;
 };
 
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 3812663591..2b551c00c7 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -435,6 +435,12 @@ ngbe_dev_start(struct rte_eth_dev *dev)
 		goto error;
 	}
 
+	err = ngbe_dev_rxtx_start(dev);
+	if (err < 0) {
+		PMD_INIT_LOG(ERR, "Unable to start rxtx queues");
+		goto error;
+	}
+
 	/* Skip link setup if loopback mode is enabled. */
 	if (hw->is_pf && dev->data->dev_conf.lpbk_mode)
 		goto skip_link_setup;
@@ -1116,6 +1122,10 @@ static const struct eth_dev_ops ngbe_eth_dev_ops = {
 	.dev_start                  = ngbe_dev_start,
 	.link_update                = ngbe_dev_link_update,
 	.dev_supported_ptypes_get   = ngbe_dev_supported_ptypes_get,
+	.rx_queue_start	            = ngbe_dev_rx_queue_start,
+	.rx_queue_stop              = ngbe_dev_rx_queue_stop,
+	.tx_queue_start	            = ngbe_dev_tx_queue_start,
+	.tx_queue_stop              = ngbe_dev_tx_queue_stop,
 	.rx_queue_setup             = ngbe_dev_rx_queue_setup,
 	.rx_queue_release           = ngbe_dev_rx_queue_release,
 	.tx_queue_setup             = ngbe_dev_tx_queue_setup,
diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
index 0b8dba571b..97ced40e4b 100644
--- a/drivers/net/ngbe/ngbe_ethdev.h
+++ b/drivers/net/ngbe/ngbe_ethdev.h
@@ -78,6 +78,21 @@ int ngbe_dev_rx_init(struct rte_eth_dev *dev);
 
 void ngbe_dev_tx_init(struct rte_eth_dev *dev);
 
+int ngbe_dev_rxtx_start(struct rte_eth_dev *dev);
+
+void ngbe_dev_save_rx_queue(struct ngbe_hw *hw, uint16_t rx_queue_id);
+void ngbe_dev_store_rx_queue(struct ngbe_hw *hw, uint16_t rx_queue_id);
+void ngbe_dev_save_tx_queue(struct ngbe_hw *hw, uint16_t tx_queue_id);
+void ngbe_dev_store_tx_queue(struct ngbe_hw *hw, uint16_t tx_queue_id);
+
+int ngbe_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+
+int ngbe_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+
+int ngbe_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+
+int ngbe_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+
 uint16_t ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		uint16_t nb_pkts);
 
diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c
index 3f3f2cab06..daa2d7ae4d 100644
--- a/drivers/net/ngbe/ngbe_rxtx.c
+++ b/drivers/net/ngbe/ngbe_rxtx.c
@@ -2236,6 +2236,38 @@ ngbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static int __rte_cold
+ngbe_alloc_rx_queue_mbufs(struct ngbe_rx_queue *rxq)
+{
+	struct ngbe_rx_entry *rxe = rxq->sw_ring;
+	uint64_t dma_addr;
+	unsigned int i;
+
+	/* Initialize software ring entries */
+	for (i = 0; i < rxq->nb_rx_desc; i++) {
+		volatile struct ngbe_rx_desc *rxd;
+		struct rte_mbuf *mbuf = rte_mbuf_raw_alloc(rxq->mb_pool);
+
+		if (mbuf == NULL) {
+			PMD_INIT_LOG(ERR, "RX mbuf alloc failed queue_id=%u",
+				     (unsigned int)rxq->queue_id);
+			return -ENOMEM;
+		}
+
+		mbuf->data_off = RTE_PKTMBUF_HEADROOM;
+		mbuf->port = rxq->port_id;
+
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf));
+		rxd = &rxq->rx_ring[i];
+		NGBE_RXD_HDRADDR(rxd, 0);
+		NGBE_RXD_PKTADDR(rxd, dma_addr);
+		rxe[i].mbuf = mbuf;
+	}
+
+	return 0;
+}
+
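The address written into each descriptor comes from rte_mbuf_data_iova_default(), i.e. the buffer IOVA plus the configured headroom, which is where received data will land. A toy illustration of that address math with hypothetical values:

    #include <assert.h>
    #include <stdint.h>

    #define TOY_HEADROOM 128 /* mirrors RTE_PKTMBUF_HEADROOM's default */

    /* toy stand-in for rte_mbuf_data_iova_default() */
    static uint64_t toy_data_iova_default(uint64_t buf_iova)
    {
        return buf_iova + TOY_HEADROOM;
    }

    int main(void)
    {
        assert(toy_data_iova_default(0x1000) == 0x1080);
        return 0;
    }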
 void __rte_cold
 ngbe_set_rx_function(struct rte_eth_dev *dev)
 {
@@ -2473,3 +2505,278 @@ ngbe_dev_tx_init(struct rte_eth_dev *dev)
 	}
 }
 
+/*
+ * Set up link loopback mode Tx->Rx.
+ */
+static inline void __rte_cold
+ngbe_setup_loopback_link(struct ngbe_hw *hw)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	wr32m(hw, NGBE_MACRXCFG, NGBE_MACRXCFG_LB, NGBE_MACRXCFG_LB);
+
+	msec_delay(50);
+}
+
+/*
+ * Start Transmit and Receive Units.
+ */
+int __rte_cold
+ngbe_dev_rxtx_start(struct rte_eth_dev *dev)
+{
+	struct ngbe_hw     *hw;
+	struct ngbe_tx_queue *txq;
+	struct ngbe_rx_queue *rxq;
+	uint32_t dmatxctl;
+	uint32_t rxctrl;
+	uint16_t i;
+	int ret = 0;
+
+	PMD_INIT_FUNC_TRACE();
+	hw = NGBE_DEV_HW(dev);
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		/* Setup Transmit Threshold Registers */
+		wr32m(hw, NGBE_TXCFG(txq->reg_idx),
+		      NGBE_TXCFG_HTHRESH_MASK |
+		      NGBE_TXCFG_WTHRESH_MASK,
+		      NGBE_TXCFG_HTHRESH(txq->hthresh) |
+		      NGBE_TXCFG_WTHRESH(txq->wthresh));
+	}
+
+	dmatxctl = rd32(hw, NGBE_DMATXCTRL);
+	dmatxctl |= NGBE_DMATXCTRL_ENA;
+	wr32(hw, NGBE_DMATXCTRL, dmatxctl);
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if (!txq->tx_deferred_start) {
+			ret = ngbe_dev_tx_queue_start(dev, i);
+			if (ret < 0)
+				return ret;
+		}
+	}
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		if (!rxq->rx_deferred_start) {
+			ret = ngbe_dev_rx_queue_start(dev, i);
+			if (ret < 0)
+				return ret;
+		}
+	}
+
+	/* Enable Receive engine */
+	rxctrl = rd32(hw, NGBE_PBRXCTL);
+	rxctrl |= NGBE_PBRXCTL_ENA;
+	hw->mac.enable_rx_dma(hw, rxctrl);
+
+	/* If loopback mode is enabled, set up the link accordingly */
+	if (hw->is_pf && dev->data->dev_conf.lpbk_mode)
+		ngbe_setup_loopback_link(hw);
+
+	return 0;
+}
+
+void
+ngbe_dev_save_rx_queue(struct ngbe_hw *hw, uint16_t rx_queue_id)
+{
+	u32 *reg = &hw->q_rx_regs[rx_queue_id * 8];
+	*(reg++) = rd32(hw, NGBE_RXBAL(rx_queue_id));
+	*(reg++) = rd32(hw, NGBE_RXBAH(rx_queue_id));
+	*(reg++) = rd32(hw, NGBE_RXCFG(rx_queue_id));
+}
+
+void
+ngbe_dev_store_rx_queue(struct ngbe_hw *hw, uint16_t rx_queue_id)
+{
+	u32 *reg = &hw->q_rx_regs[rx_queue_id * 8];
+	wr32(hw, NGBE_RXBAL(rx_queue_id), *(reg++));
+	wr32(hw, NGBE_RXBAH(rx_queue_id), *(reg++));
+	wr32(hw, NGBE_RXCFG(rx_queue_id), *(reg++) & ~NGBE_RXCFG_ENA);
+}
+
+void
+ngbe_dev_save_tx_queue(struct ngbe_hw *hw, uint16_t tx_queue_id)
+{
+	u32 *reg = &hw->q_tx_regs[tx_queue_id * 8];
+	*(reg++) = rd32(hw, NGBE_TXBAL(tx_queue_id));
+	*(reg++) = rd32(hw, NGBE_TXBAH(tx_queue_id));
+	*(reg++) = rd32(hw, NGBE_TXCFG(tx_queue_id));
+}
+
+void
+ngbe_dev_store_tx_queue(struct ngbe_hw *hw, uint16_t tx_queue_id)
+{
+	u32 *reg = &hw->q_tx_regs[tx_queue_id * 8];
+	wr32(hw, NGBE_TXBAL(tx_queue_id), *(reg++));
+	wr32(hw, NGBE_TXBAH(tx_queue_id), *(reg++));
+	wr32(hw, NGBE_TXCFG(tx_queue_id), *(reg++) & ~NGBE_TXCFG_ENA);
+}
+
+/*
+ * Start Receive Units for specified queue.
+ */
+int __rte_cold
+ngbe_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct ngbe_hw *hw = NGBE_DEV_HW(dev);
+	struct ngbe_rx_queue *rxq;
+	uint32_t rxdctl;
+	int poll_ms;
+
+	PMD_INIT_FUNC_TRACE();
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+
+	/* Allocate buffers for descriptor rings */
+	if (ngbe_alloc_rx_queue_mbufs(rxq) != 0) {
+		PMD_INIT_LOG(ERR, "Could not alloc mbuf for queue:%d",
+			     rx_queue_id);
+		return -1;
+	}
+	rxdctl = rd32(hw, NGBE_RXCFG(rxq->reg_idx));
+	rxdctl |= NGBE_RXCFG_ENA;
+	wr32(hw, NGBE_RXCFG(rxq->reg_idx), rxdctl);
+
+	/* Wait until RX Enable ready */
+	poll_ms = RTE_NGBE_REGISTER_POLL_WAIT_10_MS;
+	do {
+		rte_delay_ms(1);
+		rxdctl = rd32(hw, NGBE_RXCFG(rxq->reg_idx));
+	} while (--poll_ms && !(rxdctl & NGBE_RXCFG_ENA));
+	if (!poll_ms)
+		PMD_INIT_LOG(ERR, "Could not enable Rx Queue %d", rx_queue_id);
+	rte_wmb();
+	wr32(hw, NGBE_RXRP(rxq->reg_idx), 0);
+	wr32(hw, NGBE_RXWP(rxq->reg_idx), rxq->nb_rx_desc - 1);
+	dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
+
+	return 0;
+}
+
+/*
+ * Stop Receive Units for specified queue.
+ */
+int __rte_cold
+ngbe_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct ngbe_hw *hw = NGBE_DEV_HW(dev);
+	struct ngbe_adapter *adapter = NGBE_DEV_ADAPTER(dev);
+	struct ngbe_rx_queue *rxq;
+	uint32_t rxdctl;
+	int poll_ms;
+
+	PMD_INIT_FUNC_TRACE();
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+
+	ngbe_dev_save_rx_queue(hw, rxq->reg_idx);
+	wr32m(hw, NGBE_RXCFG(rxq->reg_idx), NGBE_RXCFG_ENA, 0);
+
+	/* Wait until RX Enable bit clear */
+	poll_ms = RTE_NGBE_REGISTER_POLL_WAIT_10_MS;
+	do {
+		rte_delay_ms(1);
+		rxdctl = rd32(hw, NGBE_RXCFG(rxq->reg_idx));
+	} while (--poll_ms && (rxdctl & NGBE_RXCFG_ENA));
+	if (!poll_ms)
+		PMD_INIT_LOG(ERR, "Could not disable Rx Queue %d", rx_queue_id);
+
+	rte_delay_us(RTE_NGBE_WAIT_100_US);
+	ngbe_dev_store_rx_queue(hw, rxq->reg_idx);
+
+	ngbe_rx_queue_release_mbufs(rxq);
+	ngbe_reset_rx_queue(adapter, rxq);
+	dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	return 0;
+}
+
+/*
+ * Start Transmit Units for specified queue.
+ */
+int __rte_cold
+ngbe_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct ngbe_hw *hw = NGBE_DEV_HW(dev);
+	struct ngbe_tx_queue *txq;
+	uint32_t txdctl;
+	int poll_ms;
+
+	PMD_INIT_FUNC_TRACE();
+
+	txq = dev->data->tx_queues[tx_queue_id];
+	wr32m(hw, NGBE_TXCFG(txq->reg_idx), NGBE_TXCFG_ENA, NGBE_TXCFG_ENA);
+
+	/* Wait until TX Enable ready */
+	poll_ms = RTE_NGBE_REGISTER_POLL_WAIT_10_MS;
+	do {
+		rte_delay_ms(1);
+		txdctl = rd32(hw, NGBE_TXCFG(txq->reg_idx));
+	} while (--poll_ms && !(txdctl & NGBE_TXCFG_ENA));
+	if (!poll_ms)
+		PMD_INIT_LOG(ERR, "Could not enable "
+			     "Tx Queue %d", tx_queue_id);
+
+	rte_wmb();
+	wr32(hw, NGBE_TXWP(txq->reg_idx), txq->tx_tail);
+	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
+
+	return 0;
+}
+
+/*
+ * Stop Transmit Units for specified queue.
+ */
+int __rte_cold
+ngbe_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct ngbe_hw *hw = NGBE_DEV_HW(dev);
+	struct ngbe_tx_queue *txq;
+	uint32_t txdctl;
+	uint32_t txtdh, txtdt;
+	int poll_ms;
+
+	PMD_INIT_FUNC_TRACE();
+
+	txq = dev->data->tx_queues[tx_queue_id];
+
+	/* Wait until TX queue is empty */
+	poll_ms = RTE_NGBE_REGISTER_POLL_WAIT_10_MS;
+	do {
+		rte_delay_us(RTE_NGBE_WAIT_100_US);
+		txtdh = rd32(hw, NGBE_TXRP(txq->reg_idx));
+		txtdt = rd32(hw, NGBE_TXWP(txq->reg_idx));
+	} while (--poll_ms && (txtdh != txtdt));
+	if (!poll_ms)
+		PMD_INIT_LOG(ERR,
+			"Tx Queue %d is not empty when stopping.",
+			tx_queue_id);
+
+	ngbe_dev_save_tx_queue(hw, txq->reg_idx);
+	wr32m(hw, NGBE_TXCFG(txq->reg_idx), NGBE_TXCFG_ENA, 0);
+
+	/* Wait until TX Enable bit clear */
+	poll_ms = RTE_NGBE_REGISTER_POLL_WAIT_10_MS;
+	do {
+		rte_delay_ms(1);
+		txdctl = rd32(hw, NGBE_TXCFG(txq->reg_idx));
+	} while (--poll_ms && (txdctl & NGBE_TXCFG_ENA));
+	if (!poll_ms)
+		PMD_INIT_LOG(ERR, "Could not disable Tx Queue %d",
+			tx_queue_id);
+
+	rte_delay_us(RTE_NGBE_WAIT_100_US);
+	ngbe_dev_store_tx_queue(hw, txq->reg_idx);
+
+	if (txq->ops != NULL) {
+		txq->ops->release_mbufs(txq);
+		txq->ops->reset(txq);
+	}
+	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	return 0;
+}
+
diff --git a/drivers/net/ngbe/ngbe_rxtx.h b/drivers/net/ngbe/ngbe_rxtx.h
index 2cb98e2497..48241bd634 100644
--- a/drivers/net/ngbe/ngbe_rxtx.h
+++ b/drivers/net/ngbe/ngbe_rxtx.h
@@ -208,6 +208,9 @@ struct ngbe_tx_desc {
 
 #define rte_packet_prefetch(p)  rte_prefetch1(p)
 
+#define RTE_NGBE_REGISTER_POLL_WAIT_10_MS  10
+#define RTE_NGBE_WAIT_100_US               100
+
 #define NGBE_TX_MAX_SEG                    40
 
 /**
-- 
2.27.0
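
A usage note on the per-queue start/stop ops added above (an
application-side sketch, not part of the patch; port_id and mb_pool are
hypothetical): a queue set up with rx_deferred_start is skipped by
ngbe_dev_rxtx_start() and must be started explicitly.

    struct rte_eth_dev_info dev_info;
    struct rte_eth_rxconf rxconf;

    rte_eth_dev_info_get(port_id, &dev_info);
    rxconf = dev_info.default_rxconf;
    rxconf.rx_deferred_start = 1;  /* leave the queue stopped on dev_start */
    rte_eth_rx_queue_setup(port_id, 0, 512, rte_socket_id(), &rxconf, mb_pool);
    rte_eth_dev_start(port_id);
    rte_eth_dev_rx_queue_start(port_id, 0);  /* dispatches to ngbe_dev_rx_queue_start() */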

^ permalink raw reply	[flat|nested] 51+ messages in thread

* [dpdk-dev] [PATCH v5 24/24] net/ngbe: add device stop operation
  2021-06-02  9:40 [dpdk-dev] [PATCH v5 00/24] net: ngbe PMD Jiawen Wu
                   ` (22 preceding siblings ...)
  2021-06-02  9:41 ` [dpdk-dev] [PATCH v5 23/24] net/ngbe: start and stop RxTx Jiawen Wu
@ 2021-06-02  9:41 ` Jiawen Wu
  2021-06-11  1:38 ` [dpdk-dev] [PATCH v5 00/24] net: ngbe PMD Jiawen Wu
  2021-06-14 20:56 ` Andrew Rybchenko
  25 siblings, 0 replies; 51+ messages in thread
From: Jiawen Wu @ 2021-06-02  9:41 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Support stopping, closing and resetting the device.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/ngbe/ngbe_ethdev.c | 123 ++++++++++++++++++++++++++++++++-
 drivers/net/ngbe/ngbe_ethdev.h |   7 ++
 drivers/net/ngbe/ngbe_rxtx.c   |  47 +++++++++++++
 3 files changed, 175 insertions(+), 2 deletions(-)
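
For context, the new ops back the generic ethdev calls; an
application-side sketch (port_id is hypothetical; both calls return int
since DPDK 20.11):

    ret = rte_eth_dev_stop(port_id);    /* dispatches to ngbe_dev_stop() */
    if (ret != 0)
        return ret;
    ret = rte_eth_dev_close(port_id);   /* dispatches to ngbe_dev_close() */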

diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 2b551c00c7..d6ea621b86 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -531,20 +531,136 @@ ngbe_dev_start(struct rte_eth_dev *dev)
 
 error:
 	PMD_INIT_LOG(ERR, "failure in dev start: %d", err);
+	ngbe_dev_clear_queues(dev);
 	return -EIO;
 }
 
+/*
+ * Stop device: disable rx and tx functions to allow for reconfiguring.
+ */
+static int
+ngbe_dev_stop(struct rte_eth_dev *dev)
+{
+	struct rte_eth_link link;
+	struct ngbe_hw *hw = NGBE_DEV_HW(dev);
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+
+	if (hw->adapter_stopped)
+		return 0;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if ((hw->sub_system_id & NGBE_OEM_MASK) == NGBE_LY_M88E1512_SFP ||
+		(hw->sub_system_id & NGBE_OEM_MASK) == NGBE_LY_YT8521S_SFP) {
+		/* gpio0 is used to power on/off control*/
+		wr32(hw, NGBE_GPIODATA, NGBE_GPIOBIT_0);
+	}
+
+	/* disable interrupts */
+	ngbe_disable_intr(hw);
+
+	/* reset the NIC */
+	ngbe_pf_reset_hw(hw);
+	hw->adapter_stopped = 0;
+
+	/* stop adapter */
+	ngbe_stop_hw(hw);
+
+	ngbe_dev_clear_queues(dev);
+
+	/* Clear stored conf */
+	dev->data->scattered_rx = 0;
+
+	/* Clear recorded link status */
+	memset(&link, 0, sizeof(link));
+	rte_eth_linkstatus_set(dev, &link);
+
+	if (!rte_intr_allow_others(intr_handle))
+		/* resume to the default handler */
+		rte_intr_callback_register(intr_handle,
+					   ngbe_dev_interrupt_handler,
+					   (void *)dev);
+
+	/* Clean datapath event and queue/vec mapping */
+	rte_intr_efd_disable(intr_handle);
+	if (intr_handle->intr_vec != NULL) {
+		rte_free(intr_handle->intr_vec);
+		intr_handle->intr_vec = NULL;
+	}
+
+	hw->adapter_stopped = true;
+	dev->data->dev_started = 0;
+
+	return 0;
+}
+
 /*
  * Reset and stop device.
  */
 static int
 ngbe_dev_close(struct rte_eth_dev *dev)
 {
+	struct ngbe_hw *hw = NGBE_DEV_HW(dev);
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+	int retries = 0;
+	int ret;
+
 	PMD_INIT_FUNC_TRACE();
 
-	RTE_SET_USED(dev);
+	ngbe_pf_reset_hw(hw);
 
-	return 0;
+	ret = ngbe_dev_stop(dev);
+
+	ngbe_dev_free_queues(dev);
+
+	/* reprogram the RAR[0] in case user changed it. */
+	ngbe_set_rar(hw, 0, hw->mac.addr, 0, true);
+
+	/* Unlock any pending hardware semaphore */
+	ngbe_swfw_lock_reset(hw);
+
+	/* disable uio intr before callback unregister */
+	rte_intr_disable(intr_handle);
+
+	do {
+		ret = rte_intr_callback_unregister(intr_handle,
+				ngbe_dev_interrupt_handler, dev);
+		if (ret >= 0 || ret == -ENOENT) {
+			break;
+		} else if (ret != -EAGAIN) {
+			PMD_INIT_LOG(ERR,
+				"intr callback unregister failed: %d",
+				ret);
+		}
+		rte_delay_ms(100);
+	} while (retries++ < (10 + NGBE_LINK_UP_TIME));
+
+	rte_free(dev->data->mac_addrs);
+	dev->data->mac_addrs = NULL;
+
+	rte_free(dev->data->hash_mac_addrs);
+	dev->data->hash_mac_addrs = NULL;
+
+	return ret;
+}
+
+/*
+ * Reset PF device.
+ */
+static int
+ngbe_dev_reset(struct rte_eth_dev *dev)
+{
+	int ret;
+
+	ret = eth_ngbe_dev_uninit(dev);
+	if (ret)
+		return ret;
+
+	ret = eth_ngbe_dev_init(dev, NULL);
+
+	return ret;
 }
 
 static int
@@ -1120,6 +1236,9 @@ static const struct eth_dev_ops ngbe_eth_dev_ops = {
 	.dev_configure              = ngbe_dev_configure,
 	.dev_infos_get              = ngbe_dev_info_get,
 	.dev_start                  = ngbe_dev_start,
+	.dev_stop                   = ngbe_dev_stop,
+	.dev_close                  = ngbe_dev_close,
+	.dev_reset                  = ngbe_dev_reset,
 	.link_update                = ngbe_dev_link_update,
 	.dev_supported_ptypes_get   = ngbe_dev_supported_ptypes_get,
 	.rx_queue_start	            = ngbe_dev_rx_queue_start,
diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
index 97ced40e4b..7ca3ebbda3 100644
--- a/drivers/net/ngbe/ngbe_ethdev.h
+++ b/drivers/net/ngbe/ngbe_ethdev.h
@@ -61,6 +61,13 @@ struct ngbe_adapter {
 #define NGBE_DEV_INTR(dev) \
 	(&((struct ngbe_adapter *)(dev)->data->dev_private)->intr)
 
+/*
+ * RX/TX function prototypes
+ */
+void ngbe_dev_clear_queues(struct rte_eth_dev *dev);
+
+void ngbe_dev_free_queues(struct rte_eth_dev *dev);
+
 void ngbe_dev_rx_queue_release(void *rxq);
 
 void ngbe_dev_tx_queue_release(void *txq);
diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c
index daa2d7ae4d..a76b9d50a1 100644
--- a/drivers/net/ngbe/ngbe_rxtx.c
+++ b/drivers/net/ngbe/ngbe_rxtx.c
@@ -2236,6 +2236,53 @@ ngbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
 	return 0;
 }
 
+void __rte_cold
+ngbe_dev_clear_queues(struct rte_eth_dev *dev)
+{
+	unsigned int i;
+	struct ngbe_adapter *adapter = NGBE_DEV_ADAPTER(dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		struct ngbe_tx_queue *txq = dev->data->tx_queues[i];
+
+		if (txq != NULL) {
+			txq->ops->release_mbufs(txq);
+			txq->ops->reset(txq);
+		}
+	}
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		struct ngbe_rx_queue *rxq = dev->data->rx_queues[i];
+
+		if (rxq != NULL) {
+			ngbe_rx_queue_release_mbufs(rxq);
+			ngbe_reset_rx_queue(adapter, rxq);
+		}
+	}
+}
+
+void
+ngbe_dev_free_queues(struct rte_eth_dev *dev)
+{
+	unsigned int i;
+
+	PMD_INIT_FUNC_TRACE();
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		ngbe_dev_rx_queue_release(dev->data->rx_queues[i]);
+		dev->data->rx_queues[i] = NULL;
+	}
+	dev->data->nb_rx_queues = 0;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		ngbe_dev_tx_queue_release(dev->data->tx_queues[i]);
+		dev->data->tx_queues[i] = NULL;
+	}
+	dev->data->nb_tx_queues = 0;
+}
+
 static int __rte_cold
 ngbe_alloc_rx_queue_mbufs(struct ngbe_rx_queue *rxq)
 {
-- 
2.27.0

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [dpdk-dev] [PATCH v5 00/24] net: ngbe PMD
  2021-06-02  9:40 [dpdk-dev] [PATCH v5 00/24] net: ngbe PMD Jiawen Wu
                   ` (23 preceding siblings ...)
  2021-06-02  9:41 ` [dpdk-dev] [PATCH v5 24/24] net/ngbe: add device stop operation Jiawen Wu
@ 2021-06-11  1:38 ` Jiawen Wu
  2021-06-14 20:56 ` Andrew Rybchenko
  25 siblings, 0 replies; 51+ messages in thread
From: Jiawen Wu @ 2021-06-11  1:38 UTC (permalink / raw)
  To: dev, 'Andrew Rybchenko'

Hi,

> -----Original Message-----
> From: Jiawen Wu <jiawenwu@trustnetic.com>
> Sent: Wednesday, June 2, 2021 5:41 PM
> To: dev@dpdk.org
> Cc: Jiawen Wu <jiawenwu@trustnetic.com>
> Subject: [PATCH v5 00/24] net: ngbe PMD
> 
> This patch set provides a skeleton of ngbe PMD, which adapted to Wangxun
> WX1860 series NICs.
> 
> v5:
> - Extend patches with device initialization and RxTx functions.
> 
> v4:
> - Fix compile error.
> 
> v3:
> - Use rte_ether functions to define marcos.
> 
> v2:
> - Correct some clerical errors.
> - Use ethdev debug flags instead of driver own.
> 
> Jiawen Wu (24):
>   net/ngbe: add build and doc infrastructure
>   net/ngbe: add device IDs
>   net/ngbe: support probe and remove
>   net/ngbe: add device init and uninit
>   net/ngbe: add log type and error type
>   net/ngbe: define registers
>   net/ngbe: set MAC type and LAN id
>   net/ngbe: init and validate EEPROM
>   net/ngbe: add HW initialization
>   net/ngbe: identify PHY and reset PHY
>   net/ngbe: store MAC address
>   net/ngbe: add info get operation
>   net/ngbe: support link update
>   net/ngbe: setup the check PHY link
>   net/ngbe: add Rx queue setup and release
>   net/ngbe: add Tx queue setup and release
>   net/ngbe: add Rx and Tx init
>   net/ngbe: add packet type
>   net/ngbe: add simple Rx and Tx flow
>   net/ngbe: support bulk and scatter Rx
>   net/ngbe: support full-featured Tx path
>   net/ngbe: add device start operation
>   net/ngbe: start and stop RxTx
>   net/ngbe: add device stop operation
> 
>  MAINTAINERS                            |    6 +
>  doc/guides/nics/features/ngbe.ini      |   25 +
>  doc/guides/nics/index.rst              |    1 +
>  doc/guides/nics/ngbe.rst               |   58 +
>  doc/guides/rel_notes/release_21_08.rst |    6 +
>  drivers/net/meson.build                |    1 +
>  drivers/net/ngbe/base/meson.build      |   26 +
>  drivers/net/ngbe/base/ngbe.h           |   11 +
>  drivers/net/ngbe/base/ngbe_devids.h    |   84 +
>  drivers/net/ngbe/base/ngbe_dummy.h     |  209 ++
>  drivers/net/ngbe/base/ngbe_eeprom.c    |  203 ++
>  drivers/net/ngbe/base/ngbe_eeprom.h    |   17 +
>  drivers/net/ngbe/base/ngbe_hw.c        | 1069 +++++++++
>  drivers/net/ngbe/base/ngbe_hw.h        |   59 +
>  drivers/net/ngbe/base/ngbe_mng.c       |  198 ++
>  drivers/net/ngbe/base/ngbe_mng.h       |   65 +
>  drivers/net/ngbe/base/ngbe_osdep.h     |  178 ++
>  drivers/net/ngbe/base/ngbe_phy.c       |  451 ++++
>  drivers/net/ngbe/base/ngbe_phy.h       |   62 +
>  drivers/net/ngbe/base/ngbe_phy_mvl.c   |  251 +++
>  drivers/net/ngbe/base/ngbe_phy_mvl.h   |   97 +
>  drivers/net/ngbe/base/ngbe_phy_rtl.c   |  240 ++
>  drivers/net/ngbe/base/ngbe_phy_rtl.h   |   89 +
>  drivers/net/ngbe/base/ngbe_phy_yt.c    |  272 +++
>  drivers/net/ngbe/base/ngbe_phy_yt.h    |   76 +
>  drivers/net/ngbe/base/ngbe_regs.h      | 1490 +++++++++++++
>  drivers/net/ngbe/base/ngbe_status.h    |  125 ++
>  drivers/net/ngbe/base/ngbe_type.h      |  210 ++
>  drivers/net/ngbe/meson.build           |   22 +
>  drivers/net/ngbe/ngbe_ethdev.c         | 1266 +++++++++++
>  drivers/net/ngbe/ngbe_ethdev.h         |  146 ++
>  drivers/net/ngbe/ngbe_logs.h           |   46 +
>  drivers/net/ngbe/ngbe_ptypes.c         |  640 ++++++
>  drivers/net/ngbe/ngbe_ptypes.h         |  351 +++
>  drivers/net/ngbe/ngbe_rxtx.c           | 2829
> ++++++++++++++++++++++++
>  drivers/net/ngbe/ngbe_rxtx.h           |  366 +++
>  drivers/net/ngbe/version.map           |    3 +
>  37 files changed, 11248 insertions(+)
>  create mode 100644 doc/guides/nics/features/ngbe.ini  create mode
> 100644 doc/guides/nics/ngbe.rst  create mode 100644
> drivers/net/ngbe/base/meson.build  create mode 100644
> drivers/net/ngbe/base/ngbe.h  create mode 100644
> drivers/net/ngbe/base/ngbe_devids.h
>  create mode 100644 drivers/net/ngbe/base/ngbe_dummy.h
>  create mode 100644 drivers/net/ngbe/base/ngbe_eeprom.c
>  create mode 100644 drivers/net/ngbe/base/ngbe_eeprom.h
>  create mode 100644 drivers/net/ngbe/base/ngbe_hw.c  create mode
> 100644 drivers/net/ngbe/base/ngbe_hw.h  create mode 100644
> drivers/net/ngbe/base/ngbe_mng.c  create mode 100644
> drivers/net/ngbe/base/ngbe_mng.h  create mode 100644
> drivers/net/ngbe/base/ngbe_osdep.h
>  create mode 100644 drivers/net/ngbe/base/ngbe_phy.c  create mode
> 100644 drivers/net/ngbe/base/ngbe_phy.h  create mode 100644
> drivers/net/ngbe/base/ngbe_phy_mvl.c
>  create mode 100644 drivers/net/ngbe/base/ngbe_phy_mvl.h
>  create mode 100644 drivers/net/ngbe/base/ngbe_phy_rtl.c
>  create mode 100644 drivers/net/ngbe/base/ngbe_phy_rtl.h
>  create mode 100644 drivers/net/ngbe/base/ngbe_phy_yt.c
>  create mode 100644 drivers/net/ngbe/base/ngbe_phy_yt.h
>  create mode 100644 drivers/net/ngbe/base/ngbe_regs.h  create mode
> 100644 drivers/net/ngbe/base/ngbe_status.h
>  create mode 100644 drivers/net/ngbe/base/ngbe_type.h  create mode
> 100644 drivers/net/ngbe/meson.build  create mode 100644
> drivers/net/ngbe/ngbe_ethdev.c  create mode 100644
> drivers/net/ngbe/ngbe_ethdev.h  create mode 100644
> drivers/net/ngbe/ngbe_logs.h  create mode 100644
> drivers/net/ngbe/ngbe_ptypes.c  create mode 100644
> drivers/net/ngbe/ngbe_ptypes.h  create mode 100644
> drivers/net/ngbe/ngbe_rxtx.c  create mode 100644
> drivers/net/ngbe/ngbe_rxtx.h  create mode 100644
> drivers/net/ngbe/version.map
> 
> --
> 2.27.0

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [dpdk-dev] [PATCH v5 01/24] net/ngbe: add build and doc infrastructure
  2021-06-02  9:40 ` [dpdk-dev] [PATCH v5 01/24] net/ngbe: add build and doc infrastructure Jiawen Wu
@ 2021-06-14 17:05   ` Andrew Rybchenko
  0 siblings, 0 replies; 51+ messages in thread
From: Andrew Rybchenko @ 2021-06-14 17:05 UTC (permalink / raw)
  To: Jiawen Wu, dev

On 6/2/21 12:40 PM, Jiawen Wu wrote:
> Adding bare minimum PMD library and doc build infrastructure
> and claim the maintainership for ngbe PMD.
> 
> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
> ---
>   MAINTAINERS                            |  6 ++++++
>   doc/guides/nics/features/ngbe.ini      | 10 +++++++++
>   doc/guides/nics/index.rst              |  1 +
>   doc/guides/nics/ngbe.rst               | 28 ++++++++++++++++++++++++++
>   doc/guides/rel_notes/release_21_08.rst |  6 ++++++
>   drivers/net/meson.build                |  1 +
>   drivers/net/ngbe/meson.build           | 12 +++++++++++
>   drivers/net/ngbe/ngbe_ethdev.c         |  5 +++++
>   drivers/net/ngbe/ngbe_ethdev.h         |  5 +++++
>   drivers/net/ngbe/version.map           |  3 +++
>   10 files changed, 77 insertions(+)
>   create mode 100644 doc/guides/nics/features/ngbe.ini
>   create mode 100644 doc/guides/nics/ngbe.rst
>   create mode 100644 drivers/net/ngbe/meson.build
>   create mode 100644 drivers/net/ngbe/ngbe_ethdev.c
>   create mode 100644 drivers/net/ngbe/ngbe_ethdev.h
>   create mode 100644 drivers/net/ngbe/version.map
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 5877a16971..04672f6eaa 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -903,6 +903,12 @@ F: drivers/net/txgbe/
>   F: doc/guides/nics/txgbe.rst
>   F: doc/guides/nics/features/txgbe.ini
>   
> +Wangxun ngbe
> +M: Jiawen Wu <jiawenwu@trustnetic.com>
> +F: drivers/net/ngbe/
> +F: doc/guides/nics/ngbe.rst
> +F: doc/guides/nics/features/ngbe.ini
> +

Because of alphabetical order (n before t), it should go
just before net/txgbe.

>   VMware vmxnet3
>   M: Yong Wang <yongwang@vmware.com>
>   F: drivers/net/vmxnet3/
> diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini
> new file mode 100644
> index 0000000000..a7a524defc
> --- /dev/null
> +++ b/doc/guides/nics/features/ngbe.ini
> @@ -0,0 +1,10 @@
> +;
> +; Supported features of the 'ngbe' network poll mode driver.
> +;
> +; Refer to default.ini for the full list of available PMD features.
> +;
> +[Features]
> +Linux                = Y
> +ARMv8                = Y
> +x86-32               = Y
> +x86-64               = Y
> diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
> index 799697caf0..31a3e6bcdc 100644
> --- a/doc/guides/nics/index.rst
> +++ b/doc/guides/nics/index.rst
> @@ -47,6 +47,7 @@ Network Interface Controller Drivers
>       netvsc
>       nfb
>       nfp
> +    ngbe
>       null
>       octeontx
>       octeontx2
> diff --git a/doc/guides/nics/ngbe.rst b/doc/guides/nics/ngbe.rst
> new file mode 100644
> index 0000000000..4ec2623a05
> --- /dev/null
> +++ b/doc/guides/nics/ngbe.rst
> @@ -0,0 +1,28 @@
> +..  SPDX-License-Identifier: BSD-3-Clause
> +    Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
> +
> +NGBE Poll Mode Driver
> +======================
> +
> +The NGBE PMD (librte_pmd_ngbe) provides poll mode driver support
> +for Wangxun 1 Gigabit Ethernet NICs.
> +
> +Prerequisites
> +-------------
> +
> +- Learning about Wangxun 1 Gigabit Ethernet NICs using
> +  `<https://www.net-swift.com/a/386.html>`_.
> +
> +- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to setup the basic DPDK environment.
> +
> +Driver compilation and testing
> +------------------------------
> +
> +Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
> +for details.
> +
> +Limitations or Known issues
> +---------------------------
> +
> +Build with ICC is not supported yet.
> +Power8, ARMv7 and BSD are not supported yet.
> diff --git a/doc/guides/rel_notes/release_21_08.rst b/doc/guides/rel_notes/release_21_08.rst
> index a6ecfdf3ce..2deac4f398 100644
> --- a/doc/guides/rel_notes/release_21_08.rst
> +++ b/doc/guides/rel_notes/release_21_08.rst
> @@ -55,6 +55,12 @@ New Features
>        Also, make sure to start the actual text at the margin.
>        =======================================================
>   
> +* **Added Wangxun ngbe PMD.**
> +
> +  Added a new PMD driver for Wangxun 1 Gigabit Ethernet NICs.
> +
> +  See the :doc:`../nics/ngbe` for more details.
> +
>   
>   Removed Items
>   -------------
> diff --git a/drivers/net/meson.build b/drivers/net/meson.build
> index c8b5ce2980..d6c1751540 100644
> --- a/drivers/net/meson.build
> +++ b/drivers/net/meson.build
> @@ -40,6 +40,7 @@ drivers = [
>           'netvsc',
>           'nfb',
>           'nfp',
> +	'ngbe',
>           'null',
>           'octeontx',
>           'octeontx2',
> diff --git a/drivers/net/ngbe/meson.build b/drivers/net/ngbe/meson.build
> new file mode 100644
> index 0000000000..de2d7be716
> --- /dev/null
> +++ b/drivers/net/ngbe/meson.build
> @@ -0,0 +1,12 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
> +
> +if is_windows
> +	build = false
> +	reason = 'not supported on Windows'
> +	subdir_done()
> +endif
> +
> +sources = files(
> +	'ngbe_ethdev.c',
> +)
> diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
> new file mode 100644
> index 0000000000..e424ff11a2
> --- /dev/null
> +++ b/drivers/net/ngbe/ngbe_ethdev.c
> @@ -0,0 +1,5 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
> + * Copyright(c) 2010-2017 Intel Corporation
> + */
> +

I strongly dislike empty files with just copyrights.
At least a dummy (always failing) probe/remove should be
in the first patch.

> diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
> new file mode 100644
> index 0000000000..e424ff11a2
> --- /dev/null
> +++ b/drivers/net/ngbe/ngbe_ethdev.h
> @@ -0,0 +1,5 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
> + * Copyright(c) 2010-2017 Intel Corporation
> + */
> +

Same here

> diff --git a/drivers/net/ngbe/version.map b/drivers/net/ngbe/version.map
> new file mode 100644
> index 0000000000..4a76d1d52d
> --- /dev/null
> +++ b/drivers/net/ngbe/version.map
> @@ -0,0 +1,3 @@
> +DPDK_21 {
> +	local: *;
> +};
> 


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [dpdk-dev] [PATCH v5 02/24] net/ngbe: add device IDs
  2021-06-02  9:40 ` [dpdk-dev] [PATCH v5 02/24] net/ngbe: add device IDs Jiawen Wu
@ 2021-06-14 17:08   ` Andrew Rybchenko
  2021-06-15  2:52     ` Jiawen Wu
  0 siblings, 1 reply; 51+ messages in thread
From: Andrew Rybchenko @ 2021-06-14 17:08 UTC (permalink / raw)
  To: Jiawen Wu, dev

On 6/2/21 12:40 PM, Jiawen Wu wrote:
> Add device IDs for Wangxun 1Gb NICs, and register rte_ngbe_pmd.
> 
> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
> ---
>   drivers/net/ngbe/base/meson.build   | 18 +++++++
>   drivers/net/ngbe/base/ngbe_devids.h | 84 +++++++++++++++++++++++++++++
>   drivers/net/ngbe/meson.build        |  6 +++
>   drivers/net/ngbe/ngbe_ethdev.c      | 51 ++++++++++++++++++
>   4 files changed, 159 insertions(+)
>   create mode 100644 drivers/net/ngbe/base/meson.build
>   create mode 100644 drivers/net/ngbe/base/ngbe_devids.h
> 
> diff --git a/drivers/net/ngbe/base/meson.build b/drivers/net/ngbe/base/meson.build
> new file mode 100644
> index 0000000000..c5f6467743
> --- /dev/null
> +++ b/drivers/net/ngbe/base/meson.build
> @@ -0,0 +1,18 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
> +
> +sources = []
> +
> +error_cflags = []
> +
> +c_args = cflags
> +foreach flag: error_cflags
> +	if cc.has_argument(flag)
> +		c_args += flag
> +	endif
> +endforeach
> +
> +base_lib = static_library('ngbe_base', sources,
> +	dependencies: [static_rte_eal, static_rte_ethdev, static_rte_bus_pci],
> +	c_args: c_args)
> +base_objs = base_lib.extract_all_objects()
> diff --git a/drivers/net/ngbe/base/ngbe_devids.h b/drivers/net/ngbe/base/ngbe_devids.h
> new file mode 100644
> index 0000000000..81671f71da
> --- /dev/null
> +++ b/drivers/net/ngbe/base/ngbe_devids.h
> @@ -0,0 +1,84 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
> + * Copyright(c) 2010-2017 Intel Corporation
> + */
> +
> +#ifndef _NGBE_DEVIDS_H_
> +#define _NGBE_DEVIDS_H_
> +
> +/*
> + * Vendor ID
> + */
> +#ifndef PCI_VENDOR_ID_WANGXUN
> +#define PCI_VENDOR_ID_WANGXUN                   0x8088
> +#endif
> +
> +/*
> + * Device IDs
> + */
> +#define NGBE_DEV_ID_EM_VF			0x0110
> +#define   NGBE_SUB_DEV_ID_EM_VF			0x0110
> +#define NGBE_DEV_ID_EM				0x0100
> +#define   NGBE_SUB_DEV_ID_EM_MVL_RGMII		0x0200
> +#define   NGBE_SUB_DEV_ID_EM_MVL_SFP		0x0403
> +#define   NGBE_SUB_DEV_ID_EM_RTL_SGMII		0x0410
> +#define   NGBE_SUB_DEV_ID_EM_YT8521S_SFP	0x0460
> +
> +#define NGBE_DEV_ID_EM_WX1860AL_W		0x0100
> +#define NGBE_DEV_ID_EM_WX1860AL_W_VF		0x0110
> +#define NGBE_DEV_ID_EM_WX1860A2			0x0101
> +#define NGBE_DEV_ID_EM_WX1860A2_VF		0x0111
> +#define NGBE_DEV_ID_EM_WX1860A2S		0x0102
> +#define NGBE_DEV_ID_EM_WX1860A2S_VF		0x0112
> +#define NGBE_DEV_ID_EM_WX1860A4			0x0103
> +#define NGBE_DEV_ID_EM_WX1860A4_VF		0x0113
> +#define NGBE_DEV_ID_EM_WX1860A4S		0x0104
> +#define NGBE_DEV_ID_EM_WX1860A4S_VF		0x0114
> +#define NGBE_DEV_ID_EM_WX1860AL2		0x0105
> +#define NGBE_DEV_ID_EM_WX1860AL2_VF		0x0115
> +#define NGBE_DEV_ID_EM_WX1860AL2S		0x0106
> +#define NGBE_DEV_ID_EM_WX1860AL2S_VF		0x0116
> +#define NGBE_DEV_ID_EM_WX1860AL4		0x0107
> +#define NGBE_DEV_ID_EM_WX1860AL4_VF		0x0117
> +#define NGBE_DEV_ID_EM_WX1860AL4S		0x0108
> +#define NGBE_DEV_ID_EM_WX1860AL4S_VF		0x0118
> +#define NGBE_DEV_ID_EM_WX1860NCSI		0x0109
> +#define NGBE_DEV_ID_EM_WX1860NCSI_VF		0x0119
> +#define NGBE_DEV_ID_EM_WX1860A1			0x010A
> +#define NGBE_DEV_ID_EM_WX1860A1_VF		0x011A
> +#define NGBE_DEV_ID_EM_WX1860A1L		0x010B
> +#define NGBE_DEV_ID_EM_WX1860A1L_VF		0x011B
> +#define   NGBE_SUB_DEV_ID_EM_ZTE5201_RJ45	0x0100
> +#define   NGBE_SUB_DEV_ID_EM_SF100F_LP		0x0103
> +#define   NGBE_SUB_DEV_ID_EM_M88E1512_RJ45	0x0200
> +#define   NGBE_SUB_DEV_ID_EM_SF100HT		0x0102
> +#define   NGBE_SUB_DEV_ID_EM_SF200T		0x0201
> +#define   NGBE_SUB_DEV_ID_EM_SF200HT		0x0202
> +#define   NGBE_SUB_DEV_ID_EM_SF200T_S		0x0210
> +#define   NGBE_SUB_DEV_ID_EM_SF200HT_S		0x0220
> +#define   NGBE_SUB_DEV_ID_EM_SF200HXT		0x0230
> +#define   NGBE_SUB_DEV_ID_EM_SF400T		0x0401
> +#define   NGBE_SUB_DEV_ID_EM_SF400HT		0x0402
> +#define   NGBE_SUB_DEV_ID_EM_M88E1512_SFP	0x0403
> +#define   NGBE_SUB_DEV_ID_EM_SF400T_S		0x0410
> +#define   NGBE_SUB_DEV_ID_EM_SF400HT_S		0x0420
> +#define   NGBE_SUB_DEV_ID_EM_SF400HXT		0x0430
> +#define   NGBE_SUB_DEV_ID_EM_SF400_OCP		0x0440
> +#define   NGBE_SUB_DEV_ID_EM_SF400_LY		0x0450
> +#define   NGBE_SUB_DEV_ID_EM_SF400_LY_YT	0x0470
> +
> +/* Assign excessive id with masks */
> +#define NGBE_INTERNAL_MASK			0x000F
> +#define NGBE_OEM_MASK				0x00F0
> +#define NGBE_WOL_SUP_MASK			0x4000
> +#define NGBE_NCSI_SUP_MASK			0x8000
> +
> +#define NGBE_INTERNAL_SFP			0x0003
> +#define NGBE_OCP_CARD				0x0040
> +#define NGBE_LY_M88E1512_SFP			0x0050
> +#define NGBE_YT8521S_SFP			0x0060
> +#define NGBE_LY_YT8521S_SFP			0x0070
> +#define NGBE_WOL_SUP				0x4000
> +#define NGBE_NCSI_SUP				0x8000
> +
> +#endif /* _NGBE_DEVIDS_H_ */
> diff --git a/drivers/net/ngbe/meson.build b/drivers/net/ngbe/meson.build
> index de2d7be716..81173fa7f0 100644
> --- a/drivers/net/ngbe/meson.build
> +++ b/drivers/net/ngbe/meson.build
> @@ -7,6 +7,12 @@ if is_windows
>   	subdir_done()
>   endif
>   
> +subdir('base')
> +objs = [base_objs]
> +
>   sources = files(
>   	'ngbe_ethdev.c',
>   )
> +
> +includes += include_directories('base')
> +
> diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
> index e424ff11a2..0f1fa86fe6 100644
> --- a/drivers/net/ngbe/ngbe_ethdev.c
> +++ b/drivers/net/ngbe/ngbe_ethdev.c
> @@ -3,3 +3,54 @@
>    * Copyright(c) 2010-2017 Intel Corporation
>    */
>   
> +#include <ethdev_pci.h>
> +
> +#include <base/ngbe_devids.h>
> +
> +/*
> + * The set of PCI devices this driver supports
> + */
> +static const struct rte_pci_id pci_id_ngbe_map[] = {
> +	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860A2) },
> +	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860A2S) },
> +	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860A4) },
> +	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860A4S) },
> +	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860AL2) },
> +	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860AL2S) },
> +	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860AL4) },
> +	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860AL4S) },
> +	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860NCSI) },
> +	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860A1) },
> +	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860A1L) },
> +	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860AL_W) },

Are all these devices supported at once? Or do some devices require
extra code, in which case it would be clearer to add their IDs later?

> +	{ .vendor_id = 0, /* sentinel */ },
> +};
> +
> +static int
> +eth_ngbe_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
> +		struct rte_pci_device *pci_dev)
> +{
> +	RTE_SET_USED(pci_dev);
> +
> +	return 0;

IMHO the more correct behaviour for such dummy functions is to return
failure.
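
For illustration, a failing stub could look like this (a sketch only;
the exact error code is a matter of taste):

    static int
    eth_ngbe_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
                       struct rte_pci_device *pci_dev __rte_unused)
    {
        /* no real initialization yet: refuse instead of faking success */
        return -ENOTSUP;
    }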

> +}
> +
> +static int eth_ngbe_pci_remove(struct rte_pci_device *pci_dev)
> +{
> +	RTE_SET_USED(pci_dev);
> +
> +	return 0;
> +}
> +
> +static struct rte_pci_driver rte_ngbe_pmd = {
> +	.id_table = pci_id_ngbe_map,
> +	.drv_flags = RTE_PCI_DRV_NEED_MAPPING |
> +		     RTE_PCI_DRV_INTR_LSC,

LSC should be added here when it is actually supported.

> +	.probe = eth_ngbe_pci_probe,
> +	.remove = eth_ngbe_pci_remove,
> +};
> +
> +RTE_PMD_REGISTER_PCI(net_ngbe, rte_ngbe_pmd);
> +RTE_PMD_REGISTER_PCI_TABLE(net_ngbe, pci_id_ngbe_map);
> +RTE_PMD_REGISTER_KMOD_DEP(net_ngbe, "* igb_uio | uio_pci_generic | vfio-pci");
> +
> 


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [dpdk-dev] [PATCH v5 03/24] net/ngbe: support probe and remove
  2021-06-02  9:40 ` [dpdk-dev] [PATCH v5 03/24] net/ngbe: support probe and remove Jiawen Wu
@ 2021-06-14 17:27   ` Andrew Rybchenko
  0 siblings, 0 replies; 51+ messages in thread
From: Andrew Rybchenko @ 2021-06-14 17:27 UTC (permalink / raw)
  To: Jiawen Wu, dev

On 6/2/21 12:40 PM, Jiawen Wu wrote:
> Add basic PCIe ethdev probe and remove.
> 
> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
> ---
>   doc/guides/nics/features/ngbe.ini |  1 +
>   drivers/net/ngbe/ngbe_ethdev.c    | 44 ++++++++++++++++++++++++++++---
>   drivers/net/ngbe/ngbe_ethdev.h    | 10 +++++++
>   3 files changed, 52 insertions(+), 3 deletions(-)
> 
> diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini
> index a7a524defc..977286ac04 100644
> --- a/doc/guides/nics/features/ngbe.ini
> +++ b/doc/guides/nics/features/ngbe.ini
> @@ -4,6 +4,7 @@
>   ; Refer to default.ini for the full list of available PMD features.
>   ;
>   [Features]
> +Multiprocess aware   = Y
>   Linux                = Y
>   ARMv8                = Y
>   x86-32               = Y
> diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
> index 0f1fa86fe6..83af1a6bc7 100644
> --- a/drivers/net/ngbe/ngbe_ethdev.c
> +++ b/drivers/net/ngbe/ngbe_ethdev.c
> @@ -3,9 +3,11 @@
>    * Copyright(c) 2010-2017 Intel Corporation
>    */
>   
> +#include <rte_common.h>
>   #include <ethdev_pci.h>
>   
>   #include <base/ngbe_devids.h>
> +#include "ngbe_ethdev.h"
>   
>   /*
>    * The set of PCI devices this driver supports
> @@ -26,20 +28,56 @@ static const struct rte_pci_id pci_id_ngbe_map[] = {
>   	{ .vendor_id = 0, /* sentinel */ },
>   };
>   
> +static int
> +eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
> +{
> +	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
> +
> +	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
> +		return 0;
> +
> +	rte_eth_copy_pci_info(eth_dev, pci_dev);
> +
> +	return 0;

I think it is misleading to return success when you do nothing.

> +}
> +
> +static int
> +eth_ngbe_dev_uninit(struct rte_eth_dev *eth_dev)
> +{
> +	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
> +		return 0;
> +
> +	RTE_SET_USED(eth_dev);
> +
> +	return 0;
> +}
> +
>   static int
>   eth_ngbe_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
>   		struct rte_pci_device *pci_dev)
>   {
> -	RTE_SET_USED(pci_dev);
> +	int retval;
> +
> +	retval = rte_eth_dev_create(&pci_dev->device, pci_dev->device.name,
> +			sizeof(struct ngbe_adapter),
> +			eth_dev_pci_specific_init, pci_dev,
> +			eth_ngbe_dev_init, NULL);
> +
> +	if (retval)

DPDK coding style requires explicit comparison with 0.

> +		return retval;
>   
>   	return 0;
>   }
>   
>   static int eth_ngbe_pci_remove(struct rte_pci_device *pci_dev)
>   {
> -	RTE_SET_USED(pci_dev);
> +	struct rte_eth_dev *ethdev;
>   
> -	return 0;
> +	ethdev = rte_eth_dev_allocated(pci_dev->device.name);
> +	if (!ethdev)

DPDK coding style requires explicit comparison with NULL.

> +		return 0;
> +
> +	return rte_eth_dev_destroy(ethdev, eth_ngbe_dev_uninit);
>   }
>   
>   static struct rte_pci_driver rte_ngbe_pmd = {
> diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
> index e424ff11a2..b79570dc51 100644
> --- a/drivers/net/ngbe/ngbe_ethdev.h
> +++ b/drivers/net/ngbe/ngbe_ethdev.h
> @@ -3,3 +3,13 @@
>    * Copyright(c) 2010-2017 Intel Corporation
>    */
>   
> +#ifndef _NGBE_ETHDEV_H_
> +#define _NGBE_ETHDEV_H_
> +
> +/*
> + * Structure to store private data for each driver instance (for each port).
> + */
> +struct ngbe_adapter {

As far as I know not all compilers like empty structures.
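
Until real fields arrive, one option (a sketch) is a placeholder member
so the struct is never empty:

    struct ngbe_adapter {
        uint8_t reserved;  /* placeholder; replaced by real fields later */
    };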

> +};
> +
> +#endif /* _NGBE_ETHDEV_H_ */
> 


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [dpdk-dev] [PATCH v5 04/24] net/ngbe: add device init and uninit
  2021-06-02  9:40 ` [dpdk-dev] [PATCH v5 04/24] net/ngbe: add device init and uninit Jiawen Wu
@ 2021-06-14 17:36   ` Andrew Rybchenko
  0 siblings, 0 replies; 51+ messages in thread
From: Andrew Rybchenko @ 2021-06-14 17:36 UTC (permalink / raw)
  To: Jiawen Wu, dev

On 6/2/21 12:40 PM, Jiawen Wu wrote:
> Add basic init and uninit function.
> Map device IDs and subsystem IDs to single ID for easy operation.
> 
> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
> ---
>   drivers/net/ngbe/base/meson.build  |   4 +-
>   drivers/net/ngbe/base/ngbe.h       |  11 ++
>   drivers/net/ngbe/base/ngbe_hw.c    |  60 ++++++++++
>   drivers/net/ngbe/base/ngbe_hw.h    |  13 +++
>   drivers/net/ngbe/base/ngbe_osdep.h | 175 +++++++++++++++++++++++++++++
>   drivers/net/ngbe/base/ngbe_type.h  |  28 +++++
>   drivers/net/ngbe/ngbe_ethdev.c     |  36 +++++-
>   drivers/net/ngbe/ngbe_ethdev.h     |   7 ++
>   8 files changed, 331 insertions(+), 3 deletions(-)
>   create mode 100644 drivers/net/ngbe/base/ngbe.h
>   create mode 100644 drivers/net/ngbe/base/ngbe_hw.c
>   create mode 100644 drivers/net/ngbe/base/ngbe_hw.h
>   create mode 100644 drivers/net/ngbe/base/ngbe_osdep.h
>   create mode 100644 drivers/net/ngbe/base/ngbe_type.h
> 
> diff --git a/drivers/net/ngbe/base/meson.build b/drivers/net/ngbe/base/meson.build
> index c5f6467743..fdbfa99916 100644
> --- a/drivers/net/ngbe/base/meson.build
> +++ b/drivers/net/ngbe/base/meson.build
> @@ -1,7 +1,9 @@
>   # SPDX-License-Identifier: BSD-3-Clause
>   # Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
>   
> -sources = []
> +sources = [
> +	'ngbe_hw.c',
> +]
>   
>   error_cflags = []
>   
> diff --git a/drivers/net/ngbe/base/ngbe.h b/drivers/net/ngbe/base/ngbe.h
> new file mode 100644
> index 0000000000..63fad12ad3
> --- /dev/null
> +++ b/drivers/net/ngbe/base/ngbe.h
> @@ -0,0 +1,11 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
> + */
> +
> +#ifndef _NGBE_H_
> +#define _NGBE_H_
> +
> +#include "ngbe_type.h"
> +#include "ngbe_hw.h"
> +
> +#endif /* _NGBE_H_ */
> diff --git a/drivers/net/ngbe/base/ngbe_hw.c b/drivers/net/ngbe/base/ngbe_hw.c
> new file mode 100644
> index 0000000000..0fab47f272
> --- /dev/null
> +++ b/drivers/net/ngbe/base/ngbe_hw.c
> @@ -0,0 +1,60 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
> + * Copyright(c) 2010-2017 Intel Corporation
> + */
> +
> +#include "ngbe_hw.h"
> +
> +void ngbe_map_device_id(struct ngbe_hw *hw)
> +{
> +	u16 oem = hw->sub_system_id & NGBE_OEM_MASK;
> +	u16 internal = hw->sub_system_id & NGBE_INTERNAL_MASK;
> +	hw->is_pf = true;
> +
> +	/* move subsystem_device_id to device_id */
> +	switch (hw->device_id) {
> +	case NGBE_DEV_ID_EM_WX1860AL_W_VF:
> +	case NGBE_DEV_ID_EM_WX1860A2_VF:
> +	case NGBE_DEV_ID_EM_WX1860A2S_VF:
> +	case NGBE_DEV_ID_EM_WX1860A4_VF:
> +	case NGBE_DEV_ID_EM_WX1860A4S_VF:
> +	case NGBE_DEV_ID_EM_WX1860AL2_VF:
> +	case NGBE_DEV_ID_EM_WX1860AL2S_VF:
> +	case NGBE_DEV_ID_EM_WX1860AL4_VF:
> +	case NGBE_DEV_ID_EM_WX1860AL4S_VF:
> +	case NGBE_DEV_ID_EM_WX1860NCSI_VF:
> +	case NGBE_DEV_ID_EM_WX1860A1_VF:
> +	case NGBE_DEV_ID_EM_WX1860A1L_VF:
> +		hw->device_id = NGBE_DEV_ID_EM_VF;
> +		hw->sub_device_id = NGBE_SUB_DEV_ID_EM_VF;
> +		hw->is_pf = false;
> +		break;
> +	case NGBE_DEV_ID_EM_WX1860AL_W:
> +	case NGBE_DEV_ID_EM_WX1860A2:
> +	case NGBE_DEV_ID_EM_WX1860A2S:
> +	case NGBE_DEV_ID_EM_WX1860A4:
> +	case NGBE_DEV_ID_EM_WX1860A4S:
> +	case NGBE_DEV_ID_EM_WX1860AL2:
> +	case NGBE_DEV_ID_EM_WX1860AL2S:
> +	case NGBE_DEV_ID_EM_WX1860AL4:
> +	case NGBE_DEV_ID_EM_WX1860AL4S:
> +	case NGBE_DEV_ID_EM_WX1860NCSI:
> +	case NGBE_DEV_ID_EM_WX1860A1:
> +	case NGBE_DEV_ID_EM_WX1860A1L:
> +		hw->device_id = NGBE_DEV_ID_EM;
> +		if (oem == NGBE_LY_M88E1512_SFP ||
> +				internal == NGBE_INTERNAL_SFP)
> +			hw->sub_device_id = NGBE_SUB_DEV_ID_EM_MVL_SFP;
> +		else if (hw->sub_system_id == NGBE_SUB_DEV_ID_EM_M88E1512_RJ45)
> +			hw->sub_device_id = NGBE_SUB_DEV_ID_EM_MVL_RGMII;
> +		else if (oem == NGBE_YT8521S_SFP ||
> +				oem == NGBE_LY_YT8521S_SFP)
> +			hw->sub_device_id = NGBE_SUB_DEV_ID_EM_YT8521S_SFP;
> +		else
> +			hw->sub_device_id = NGBE_SUB_DEV_ID_EM_RTL_SGMII;
> +		break;
> +	default:
> +		break;
> +	}
> +}
> +
> diff --git a/drivers/net/ngbe/base/ngbe_hw.h b/drivers/net/ngbe/base/ngbe_hw.h
> new file mode 100644
> index 0000000000..b320d126ec
> --- /dev/null
> +++ b/drivers/net/ngbe/base/ngbe_hw.h
> @@ -0,0 +1,13 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
> + * Copyright(c) 2010-2017 Intel Corporation
> + */
> +
> +#ifndef _NGBE_HW_H_
> +#define _NGBE_HW_H_
> +
> +#include "ngbe_type.h"
> +
> +void ngbe_map_device_id(struct ngbe_hw *hw);
> +
> +#endif /* _NGBE_HW_H_ */
> diff --git a/drivers/net/ngbe/base/ngbe_osdep.h b/drivers/net/ngbe/base/ngbe_osdep.h
> new file mode 100644
> index 0000000000..ef3d3d9180
> --- /dev/null
> +++ b/drivers/net/ngbe/base/ngbe_osdep.h
> @@ -0,0 +1,175 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
> + * Copyright(c) 2010-2017 Intel Corporation
> + */
> +
> +#ifndef _NGBE_OS_H_
> +#define _NGBE_OS_H_
> +
> +#include <string.h>
> +#include <stdint.h>
> +#include <stdio.h>
> +#include <stdarg.h>
> +#include <rte_version.h>
> +#include <rte_common.h>
> +#include <rte_debug.h>
> +#include <rte_cycles.h>
> +#include <rte_log.h>
> +#include <rte_byteorder.h>
> +#include <rte_config.h>
> +#include <rte_io.h>
> +
> +#define RTE_LIBRTE_NGBE_TM        DCPV(1, 0)
> +#define TMZ_PADDR(mz)  ((mz)->iova)
> +#define TMZ_VADDR(mz)  ((mz)->addr)
> +#define TDEV_NAME(eth_dev)  ((eth_dev)->device->name)
> +
> +#define ngbe_unused __rte_unused
> +
> +#define usec_delay(x) rte_delay_us(x)
> +#define msec_delay(x) rte_delay_ms(x)
> +#define usleep(x)     rte_delay_us(x)
> +#define msleep(x)     rte_delay_ms(x)
> +
> +#define FALSE               0
> +#define TRUE                1
> +
> +#ifndef false
> +#define false               0
> +#endif
> +#ifndef true
> +#define true                1
> +#endif
> +#define min(a, b)	RTE_MIN(a, b)
> +#define max(a, b)	RTE_MAX(a, b)
> +
> +/* Bunch of defines for shared code bogosity */
> +
> +static inline void UNREFERENCED(const char *a __rte_unused, ...) {}
> +#define UNREFERENCED_PARAMETER(args...) UNREFERENCED("", ##args)
> +
> +#define STATIC static
> +
> +typedef uint8_t		u8;
> +typedef int8_t		s8;
> +typedef uint16_t	u16;
> +typedef int16_t		s16;
> +typedef uint32_t	u32;
> +typedef int32_t		s32;
> +typedef uint64_t	u64;
> +typedef int64_t		s64;
> +
> +/* Little Endian defines */
> +#ifndef __le16
> +#define __le16  u16
> +#define __le32  u32
> +#define __le64  u64
> +#endif
> +#ifndef __be16
> +#define __be16  u16
> +#define __be32  u32
> +#define __be64  u64
> +#endif
> +
> +/* Bit shift and mask */
> +#define BIT_MASK4                 (0x0000000FU)
> +#define BIT_MASK8                 (0x000000FFU)
> +#define BIT_MASK16                (0x0000FFFFU)
> +#define BIT_MASK32                (0xFFFFFFFFU)
> +#define BIT_MASK64                (0xFFFFFFFFFFFFFFFFUL)
> +
> +#ifndef cpu_to_le32
> +#define cpu_to_le16(v)          rte_cpu_to_le_16((u16)(v))
> +#define cpu_to_le32(v)          rte_cpu_to_le_32((u32)(v))
> +#define cpu_to_le64(v)          rte_cpu_to_le_64((u64)(v))
> +#define le_to_cpu16(v)          rte_le_to_cpu_16((u16)(v))
> +#define le_to_cpu32(v)          rte_le_to_cpu_32((u32)(v))
> +#define le_to_cpu64(v)          rte_le_to_cpu_64((u64)(v))
> +
> +#define cpu_to_be16(v)          rte_cpu_to_be_16((u16)(v))
> +#define cpu_to_be32(v)          rte_cpu_to_be_32((u32)(v))
> +#define cpu_to_be64(v)          rte_cpu_to_be_64((u64)(v))
> +#define be_to_cpu16(v)          rte_be_to_cpu_16((u16)(v))
> +#define be_to_cpu32(v)          rte_be_to_cpu_32((u32)(v))
> +#define be_to_cpu64(v)          rte_be_to_cpu_64((u64)(v))
> +
> +#define le_to_be16(v)           rte_bswap16((u16)(v))
> +#define le_to_be32(v)           rte_bswap32((u32)(v))
> +#define le_to_be64(v)           rte_bswap64((u64)(v))
> +#define be_to_le16(v)           rte_bswap16((u16)(v))
> +#define be_to_le32(v)           rte_bswap32((u32)(v))
> +#define be_to_le64(v)           rte_bswap64((u64)(v))
> +
> +#define npu_to_le16(v)          (v)
> +#define npu_to_le32(v)          (v)
> +#define npu_to_le64(v)          (v)
> +#define le_to_npu16(v)          (v)
> +#define le_to_npu32(v)          (v)
> +#define le_to_npu64(v)          (v)
> +
> +#define npu_to_be16(v)          le_to_be16((u16)(v))
> +#define npu_to_be32(v)          le_to_be32((u32)(v))
> +#define npu_to_be64(v)          le_to_be64((u64)(v))
> +#define be_to_npu16(v)          be_to_le16((u16)(v))
> +#define be_to_npu32(v)          be_to_le32((u32)(v))
> +#define be_to_npu64(v)          be_to_le64((u64)(v))
> +#endif /* !cpu_to_le32 */
> +
> +static inline u16 REVERT_BIT_MASK16(u16 mask)
> +{
> +	mask = ((mask & 0x5555) << 1) | ((mask & 0xAAAA) >> 1);
> +	mask = ((mask & 0x3333) << 2) | ((mask & 0xCCCC) >> 2);
> +	mask = ((mask & 0x0F0F) << 4) | ((mask & 0xF0F0) >> 4);
> +	return ((mask & 0x00FF) << 8) | ((mask & 0xFF00) >> 8);
> +}
> +
> +static inline u32 REVERT_BIT_MASK32(u32 mask)
> +{
> +	mask = ((mask & 0x55555555) << 1) | ((mask & 0xAAAAAAAA) >> 1);
> +	mask = ((mask & 0x33333333) << 2) | ((mask & 0xCCCCCCCC) >> 2);
> +	mask = ((mask & 0x0F0F0F0F) << 4) | ((mask & 0xF0F0F0F0) >> 4);
> +	mask = ((mask & 0x00FF00FF) << 8) | ((mask & 0xFF00FF00) >> 8);
> +	return ((mask & 0x0000FFFF) << 16) | ((mask & 0xFFFF0000) >> 16);
> +}
> +
> +static inline u64 REVERT_BIT_MASK64(u64 mask)
> +{
> +	mask = ((mask & 0x5555555555555555) << 1) |
> +	       ((mask & 0xAAAAAAAAAAAAAAAA) >> 1);
> +	mask = ((mask & 0x3333333333333333) << 2) |
> +	       ((mask & 0xCCCCCCCCCCCCCCCC) >> 2);
> +	mask = ((mask & 0x0F0F0F0F0F0F0F0F) << 4) |
> +	       ((mask & 0xF0F0F0F0F0F0F0F0) >> 4);
> +	mask = ((mask & 0x00FF00FF00FF00FF) << 8) |
> +	       ((mask & 0xFF00FF00FF00FF00) >> 8);
> +	mask = ((mask & 0x0000FFFF0000FFFF) << 16) |
> +	       ((mask & 0xFFFF0000FFFF0000) >> 16);
> +	return ((mask & 0x00000000FFFFFFFF) << 32) |
> +	       ((mask & 0xFFFFFFFF00000000) >> 32);
> +}
> +
> +#define IOMEM
> +
> +#define prefetch(x) rte_prefetch0(x)
> +
> +#define ARRAY_SIZE(x) ((int32_t)RTE_DIM(x))
> +
> +#ifndef MAX_UDELAY_MS
> +#define MAX_UDELAY_MS 5
> +#endif
> +
> +#define ETH_ADDR_LEN	6
> +#define ETH_FCS_LEN	4
> +
> +/* Check whether address is multicast. This is little-endian specific check.*/
> +#define NGBE_IS_MULTICAST(address) \
> +		rte_is_multicast_ether_addr(address)
> +
> +/* Check whether an address is broadcast. */
> +#define NGBE_IS_BROADCAST(address) \
> +		rte_is_broadcast_ether_addr(address)
> +
> +#define ETH_P_8021Q      0x8100
> +#define ETH_P_8021AD     0x88A8
> +
> +#endif /* _NGBE_OS_H_ */
> diff --git a/drivers/net/ngbe/base/ngbe_type.h b/drivers/net/ngbe/base/ngbe_type.h
> new file mode 100644
> index 0000000000..b6bde11dcd
> --- /dev/null
> +++ b/drivers/net/ngbe/base/ngbe_type.h
> @@ -0,0 +1,28 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
> + * Copyright(c) 2010-2017 Intel Corporation
> + */
> +
> +#ifndef _NGBE_TYPE_H_
> +#define _NGBE_TYPE_H_
> +
> +#define NGBE_ALIGN		128 /* as intel did */
> +
> +#include "ngbe_osdep.h"
> +#include "ngbe_devids.h"
> +
> +struct ngbe_hw {
> +	void IOMEM *hw_addr;
> +	u16 device_id;
> +	u16 vendor_id;
> +	u16 sub_device_id;
> +	u16 sub_system_id;
> +	bool allow_unsupported_sfp;
> +
> +	uint64_t isb_dma;
> +	void IOMEM *isb_mem;
> +
> +	bool is_pf;
> +};
> +
> +#endif /* _NGBE_TYPE_H_ */
> diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
> index 83af1a6bc7..3431a9a9a7 100644
> --- a/drivers/net/ngbe/ngbe_ethdev.c
> +++ b/drivers/net/ngbe/ngbe_ethdev.c
> @@ -6,9 +6,11 @@
>   #include <rte_common.h>
>   #include <ethdev_pci.h>
>   
> -#include <base/ngbe_devids.h>
> +#include "base/ngbe.h"
>   #include "ngbe_ethdev.h"
>   
> +static int ngbe_dev_close(struct rte_eth_dev *dev);
> +
>   /*
>    * The set of PCI devices this driver supports
>    */
> @@ -32,12 +34,31 @@ static int
>   eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
>   {
>   	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
> +	struct ngbe_hw *hw = NGBE_DEV_HW(eth_dev);
> +	const struct rte_memzone *mz;
>   
>   	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
>   		return 0;
>   
>   	rte_eth_copy_pci_info(eth_dev, pci_dev);
>   
> +	/* Vendor and Device ID need to be set before init of shared code */
> +	hw->device_id = pci_dev->id.device_id;
> +	hw->vendor_id = pci_dev->id.vendor_id;
> +	hw->sub_system_id = pci_dev->id.subsystem_device_id;
> +	ngbe_map_device_id(hw);
> +	hw->hw_addr = (void *)pci_dev->mem_resource[0].addr;
> +	hw->allow_unsupported_sfp = 1;

The above line is absolutely unclear in the patch. Why?
Does the driver already support something?

> +
> +	/* Reserve memory for interrupt status block */
> +	mz = rte_eth_dma_zone_reserve(eth_dev, "ngbe_driver", -1,
> +		16, NGBE_ALIGN, SOCKET_ID_ANY);

Why 16? What is 16? Please add a define or use an existing define from
the base driver.
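
E.g., with a named constant (NGBE_ISB_SIZE is a hypothetical name):

    #define NGBE_ISB_SIZE  16  /* interrupt status block size, in bytes */

    mz = rte_eth_dma_zone_reserve(eth_dev, "ngbe_driver", -1,
                                  NGBE_ISB_SIZE, NGBE_ALIGN, SOCKET_ID_ANY);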

> +	if (mz == NULL)
> +		return -ENOMEM;
> +
> +	hw->isb_dma = TMZ_PADDR(mz);
> +	hw->isb_mem = TMZ_VADDR(mz);
> +
>   	return 0;
>   }
>   
> @@ -47,7 +68,7 @@ eth_ngbe_dev_uninit(struct rte_eth_dev *eth_dev)
>   	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
>   		return 0;
>   
> -	RTE_SET_USED(eth_dev);
> +	ngbe_dev_close(eth_dev);
>   
>   	return 0;
>   }
> @@ -88,6 +109,17 @@ static struct rte_pci_driver rte_ngbe_pmd = {
>   	.remove = eth_ngbe_pci_remove,
>   };
>   
> +/*
> + * Reset and stop device.
> + */
> +static int
> +ngbe_dev_close(struct rte_eth_dev *dev)
> +{
> +	RTE_SET_USED(dev);
> +
> +	return 0;
> +}
> +
>   RTE_PMD_REGISTER_PCI(net_ngbe, rte_ngbe_pmd);
>   RTE_PMD_REGISTER_PCI_TABLE(net_ngbe, pci_id_ngbe_map);
>   RTE_PMD_REGISTER_KMOD_DEP(net_ngbe, "* igb_uio | uio_pci_generic | vfio-pci");
> diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
> index b79570dc51..f6cee4a4a9 100644
> --- a/drivers/net/ngbe/ngbe_ethdev.h
> +++ b/drivers/net/ngbe/ngbe_ethdev.h
> @@ -10,6 +10,13 @@
>    * Structure to store private data for each driver instance (for each port).
>    */
>   struct ngbe_adapter {
> +	struct ngbe_hw             hw;
>   };
>   
> +#define NGBE_DEV_ADAPTER(dev) \
> +	((struct ngbe_adapter *)(dev)->data->dev_private)
> +
> +#define NGBE_DEV_HW(dev) \
> +	(&((struct ngbe_adapter *)(dev)->data->dev_private)->hw)
> +

The above two macros should be static inline functions to
be type-safe for the input dev argument. Such static
functions would also allow adding extra checks if required
in the future.
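
For instance (a sketch; the lower-case names are hypothetical):

    static inline struct ngbe_adapter *
    ngbe_dev_adapter(struct rte_eth_dev *dev)
    {
        return (struct ngbe_adapter *)dev->data->dev_private;
    }

    static inline struct ngbe_hw *
    ngbe_dev_hw(struct rte_eth_dev *dev)
    {
        return &ngbe_dev_adapter(dev)->hw;
    }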

>   #endif /* _NGBE_ETHDEV_H_ */
> 


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [dpdk-dev] [PATCH v5 05/24] net/ngbe: add log type and error type
  2021-06-02  9:40 ` [dpdk-dev] [PATCH v5 05/24] net/ngbe: add log type and error type Jiawen Wu
@ 2021-06-14 17:54   ` Andrew Rybchenko
  2021-06-15  7:13     ` Jiawen Wu
  2021-07-01 13:57   ` David Marchand
  1 sibling, 1 reply; 51+ messages in thread
From: Andrew Rybchenko @ 2021-06-14 17:54 UTC (permalink / raw)
  To: Jiawen Wu, dev

On 6/2/21 12:40 PM, Jiawen Wu wrote:
> Add log type and error type to trace functions.
> 
> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
> ---
>   doc/guides/nics/ngbe.rst            |  20 +++++
>   drivers/net/ngbe/base/ngbe_status.h | 125 ++++++++++++++++++++++++++++
>   drivers/net/ngbe/base/ngbe_type.h   |   1 +
>   drivers/net/ngbe/ngbe_ethdev.c      |  16 ++++
>   drivers/net/ngbe/ngbe_logs.h        |  46 ++++++++++
>   5 files changed, 208 insertions(+)
>   create mode 100644 drivers/net/ngbe/base/ngbe_status.h
>   create mode 100644 drivers/net/ngbe/ngbe_logs.h
> 
> diff --git a/doc/guides/nics/ngbe.rst b/doc/guides/nics/ngbe.rst
> index 4ec2623a05..c274a15aab 100644
> --- a/doc/guides/nics/ngbe.rst
> +++ b/doc/guides/nics/ngbe.rst
> @@ -15,6 +15,26 @@ Prerequisites
>   
>   - Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to setup the basic DPDK environment.
>   

There should be two empty lines before the section start.

> +Pre-Installation Configuration
> +------------------------------
> +
> +Dynamic Logging Parameters
> +~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +One may leverage EAL option "--log-level" to change default levels
> +for the log types supported by the driver. The option is used with
> +an argument typically consisting of two parts separated by a colon.
> +
> +NGBE PMD provides the following log types available for control:
> +
> +- ``pmd.net.ngbe.driver`` (default level is **notice**)
> +
> +  Affects driver-wide messages unrelated to any particular devices.
> +
> +- ``pmd.net.ngbe.init`` (default level is **notice**)
> +
> +  Extra logging of the messages during PMD initialization.
> +

Same here.
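
As a usage illustration (not part of the patch), a log type listed here
is raised at startup with an EAL argument such as
--log-level=pmd.net.ngbe.init:debug.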

>   Driver compilation and testing
>   ------------------------------
>   
> diff --git a/drivers/net/ngbe/base/ngbe_status.h b/drivers/net/ngbe/base/ngbe_status.h
> new file mode 100644
> index 0000000000..b1836c6479
> --- /dev/null
> +++ b/drivers/net/ngbe/base/ngbe_status.h
> @@ -0,0 +1,125 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
> + * Copyright(c) 2010-2017 Intel Corporation
> + */
> +
> +#ifndef _NGBE_STATUS_H_
> +#define _NGBE_STATUS_H_
> +
> +/* Error Codes:
> + * common error
> + * module error(simple)
> + * module error(detailed)
> + *
> + * (-256, 256): reserved for non-ngbe defined error code
> + */
> +#define TERR_BASE (0x100)
> +enum ngbe_error {
> +	TERR_NULL = TERR_BASE,
> +	TERR_ANY,
> +	TERR_NOSUPP,
> +	TERR_NOIMPL,
> +	TERR_NOMEM,
> +	TERR_NOSPACE,
> +	TERR_NOENTRY,
> +	TERR_CONFIG,
> +	TERR_ARGS,
> +	TERR_PARAM,
> +	TERR_INVALID,
> +	TERR_TIMEOUT,
> +	TERR_VERSION,
> +	TERR_REGISTER,
> +	TERR_FEATURE,
> +	TERR_RESET,
> +	TERR_AUTONEG,
> +	TERR_MBX,
> +	TERR_I2C,
> +	TERR_FC,
> +	TERR_FLASH,
> +	TERR_DEVICE,
> +	TERR_HOSTIF,
> +	TERR_SRAM,
> +	TERR_EEPROM,
> +	TERR_EEPROM_CHECKSUM,
> +	TERR_EEPROM_PROTECT,
> +	TERR_EEPROM_VERSION,
> +	TERR_MAC,
> +	TERR_MAC_ADDR,
> +	TERR_SFP,
> +	TERR_SFP_INITSEQ,
> +	TERR_SFP_PRESENT,
> +	TERR_SFP_SUPPORT,
> +	TERR_SFP_SETUP,
> +	TERR_PHY,
> +	TERR_PHY_ADDR,
> +	TERR_PHY_INIT,
> +	TERR_FDIR_CMD,
> +	TERR_FDIR_REINIT,
> +	TERR_SWFW_SYNC,
> +	TERR_SWFW_COMMAND,
> +	TERR_FC_CFG,
> +	TERR_FC_NEGO,
> +	TERR_LINK_SETUP,
> +	TERR_PCIE_PENDING,
> +	TERR_PBA_SECTION,
> +	TERR_OVERTEMP,
> +	TERR_UNDERTEMP,
> +	TERR_XPCS_POWERUP,
> +};
> +
> +/* WARNING: just for legacy compatibility */
> +#define NGBE_NOT_IMPLEMENTED 0x7FFFFFFF
> +#define NGBE_ERR_OPS_DUMMY   0x3FFFFFFF
> +
> +/* Error Codes */
> +#define NGBE_ERR_EEPROM				-(TERR_BASE + 1)
> +#define NGBE_ERR_EEPROM_CHECKSUM		-(TERR_BASE + 2)
> +#define NGBE_ERR_PHY				-(TERR_BASE + 3)
> +#define NGBE_ERR_CONFIG				-(TERR_BASE + 4)
> +#define NGBE_ERR_PARAM				-(TERR_BASE + 5)
> +#define NGBE_ERR_MAC_TYPE			-(TERR_BASE + 6)
> +#define NGBE_ERR_UNKNOWN_PHY			-(TERR_BASE + 7)
> +#define NGBE_ERR_LINK_SETUP			-(TERR_BASE + 8)
> +#define NGBE_ERR_ADAPTER_STOPPED		-(TERR_BASE + 9)
> +#define NGBE_ERR_INVALID_MAC_ADDR		-(TERR_BASE + 10)
> +#define NGBE_ERR_DEVICE_NOT_SUPPORTED		-(TERR_BASE + 11)
> +#define NGBE_ERR_MASTER_REQUESTS_PENDING	-(TERR_BASE + 12)
> +#define NGBE_ERR_INVALID_LINK_SETTINGS		-(TERR_BASE + 13)
> +#define NGBE_ERR_AUTONEG_NOT_COMPLETE		-(TERR_BASE + 14)
> +#define NGBE_ERR_RESET_FAILED			-(TERR_BASE + 15)
> +#define NGBE_ERR_SWFW_SYNC			-(TERR_BASE + 16)
> +#define NGBE_ERR_PHY_ADDR_INVALID		-(TERR_BASE + 17)
> +#define NGBE_ERR_I2C				-(TERR_BASE + 18)
> +#define NGBE_ERR_SFP_NOT_SUPPORTED		-(TERR_BASE + 19)
> +#define NGBE_ERR_SFP_NOT_PRESENT		-(TERR_BASE + 20)
> +#define NGBE_ERR_SFP_NO_INIT_SEQ_PRESENT	-(TERR_BASE + 21)
> +#define NGBE_ERR_NO_SAN_ADDR_PTR		-(TERR_BASE + 22)
> +#define NGBE_ERR_FDIR_REINIT_FAILED		-(TERR_BASE + 23)
> +#define NGBE_ERR_EEPROM_VERSION			-(TERR_BASE + 24)
> +#define NGBE_ERR_NO_SPACE			-(TERR_BASE + 25)
> +#define NGBE_ERR_OVERTEMP			-(TERR_BASE + 26)
> +#define NGBE_ERR_FC_NOT_NEGOTIATED		-(TERR_BASE + 27)
> +#define NGBE_ERR_FC_NOT_SUPPORTED		-(TERR_BASE + 28)
> +#define NGBE_ERR_SFP_SETUP_NOT_COMPLETE		-(TERR_BASE + 30)
> +#define NGBE_ERR_PBA_SECTION			-(TERR_BASE + 31)
> +#define NGBE_ERR_INVALID_ARGUMENT		-(TERR_BASE + 32)
> +#define NGBE_ERR_HOST_INTERFACE_COMMAND		-(TERR_BASE + 33)
> +#define NGBE_ERR_OUT_OF_MEM			-(TERR_BASE + 34)
> +#define NGBE_ERR_FEATURE_NOT_SUPPORTED		-(TERR_BASE + 36)
> +#define NGBE_ERR_EEPROM_PROTECTED_REGION	-(TERR_BASE + 37)
> +#define NGBE_ERR_FDIR_CMD_INCOMPLETE		-(TERR_BASE + 38)
> +#define NGBE_ERR_FW_RESP_INVALID		-(TERR_BASE + 39)
> +#define NGBE_ERR_TOKEN_RETRY			-(TERR_BASE + 40)
> +#define NGBE_ERR_FLASH_LOADING_FAILED		-(TERR_BASE + 41)
> +
> +#define NGBE_ERR_NOSUPP                        -(TERR_BASE + 42)
> +#define NGBE_ERR_UNDERTEMP                     -(TERR_BASE + 43)
> +#define NGBE_ERR_XPCS_POWER_UP_FAILED          -(TERR_BASE + 44)
> +#define NGBE_ERR_PHY_INIT_NOT_DONE             -(TERR_BASE + 45)
> +#define NGBE_ERR_TIMEOUT                       -(TERR_BASE + 46)
> +#define NGBE_ERR_REGISTER                      -(TERR_BASE + 47)
> +#define NGBE_ERR_MNG_ACCESS_FAILED             -(TERR_BASE + 49)
> +#define NGBE_ERR_PHY_TYPE                      -(TERR_BASE + 50)
> +#define NGBE_ERR_PHY_TIMEOUT                   -(TERR_BASE + 51)

Not sure that I understand how the defines above are related to logging.

> +
> +#endif /* _NGBE_STATUS_H_ */
> diff --git a/drivers/net/ngbe/base/ngbe_type.h b/drivers/net/ngbe/base/ngbe_type.h
> index b6bde11dcd..bcc9f74216 100644
> --- a/drivers/net/ngbe/base/ngbe_type.h
> +++ b/drivers/net/ngbe/base/ngbe_type.h
> @@ -8,6 +8,7 @@
>   
>   #define NGBE_ALIGN		128 /* as intel did */
>   
> +#include "ngbe_status.h"
>   #include "ngbe_osdep.h"
>   #include "ngbe_devids.h"
>   
> diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
> index 3431a9a9a7..f24c3e173e 100644
> --- a/drivers/net/ngbe/ngbe_ethdev.c
> +++ b/drivers/net/ngbe/ngbe_ethdev.c
> @@ -6,6 +6,7 @@
>   #include <rte_common.h>
>   #include <ethdev_pci.h>
>   
> +#include "ngbe_logs.h"
>   #include "base/ngbe.h"
>   #include "ngbe_ethdev.h"
>   
> @@ -37,6 +38,8 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
>   	struct ngbe_hw *hw = NGBE_DEV_HW(eth_dev);
>   	const struct rte_memzone *mz;
>   
> +	PMD_INIT_FUNC_TRACE();
> +
>   	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
>   		return 0;
>   
> @@ -65,6 +68,8 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
>   static int
>   eth_ngbe_dev_uninit(struct rte_eth_dev *eth_dev)
>   {
> +	PMD_INIT_FUNC_TRACE();
> +
>   	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
>   		return 0;
>   
> @@ -115,6 +120,8 @@ static struct rte_pci_driver rte_ngbe_pmd = {
>   static int
>   ngbe_dev_close(struct rte_eth_dev *dev)
>   {
> +	PMD_INIT_FUNC_TRACE();
> +
>   	RTE_SET_USED(dev);
>   
>   	return 0;
> @@ -124,3 +131,12 @@ RTE_PMD_REGISTER_PCI(net_ngbe, rte_ngbe_pmd);
>   RTE_PMD_REGISTER_PCI_TABLE(net_ngbe, pci_id_ngbe_map);
>   RTE_PMD_REGISTER_KMOD_DEP(net_ngbe, "* igb_uio | uio_pci_generic | vfio-pci");
>   
> +RTE_LOG_REGISTER(ngbe_logtype_init, pmd.net.ngbe.init, NOTICE);
> +RTE_LOG_REGISTER(ngbe_logtype_driver, pmd.net.ngbe.driver, NOTICE);
> +
> +#ifdef RTE_ETHDEV_DEBUG_RX
> +	RTE_LOG_REGISTER(ngbe_logtype_rx, pmd.net.ngbe.rx, DEBUG);
> +#endif
> +#ifdef RTE_ETHDEV_DEBUG_TX
> +	RTE_LOG_REGISTER(ngbe_logtype_tx, pmd.net.ngbe.tx, DEBUG);
> +#endif
> diff --git a/drivers/net/ngbe/ngbe_logs.h b/drivers/net/ngbe/ngbe_logs.h
> new file mode 100644
> index 0000000000..c5d1ab0930
> --- /dev/null
> +++ b/drivers/net/ngbe/ngbe_logs.h
> @@ -0,0 +1,46 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
> + * Copyright(c) 2010-2017 Intel Corporation
> + */
> +
> +#ifndef _NGBE_LOGS_H_
> +#define _NGBE_LOGS_H_
> +
> +/*
> + * PMD_USER_LOG: for user
> + */
> +extern int ngbe_logtype_init;
> +#define PMD_INIT_LOG(level, fmt, args...) \
> +	rte_log(RTE_LOG_ ## level, ngbe_logtype_init, \
> +		"%s(): " fmt "\n", __func__, ##args)
> +
> +extern int ngbe_logtype_driver;
> +#define PMD_DRV_LOG(level, fmt, args...) \
> +	rte_log(RTE_LOG_ ## level, ngbe_logtype_driver, \
> +		"%s(): " fmt "\n", __func__, ##args)
> +
> +#ifdef RTE_ETHDEV_DEBUG_RX
> +extern int ngbe_logtype_rx;
> +#define PMD_RX_LOG(level, fmt, args...) \
> +	rte_log(RTE_LOG_ ## level, ngbe_logtype_rx,	\
> +		"%s(): " fmt "\n", __func__, ##args)
> +#else
> +#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
> +#endif
> +
> +#ifdef RTE_ETHDEV_DEBUG_TX
> +extern int ngbe_logtype_tx;
> +#define PMD_TX_LOG(level, fmt, args...) \
> +	rte_log(RTE_LOG_ ## level, ngbe_logtype_tx,	\
> +		"%s(): " fmt "\n", __func__, ##args)
> +#else
> +#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
> +#endif
> +
> +#define TLOG_DEBUG(fmt, args...)  PMD_DRV_LOG(DEBUG, fmt, ##args)
> +
> +#define DEBUGOUT(fmt, args...)    TLOG_DEBUG(fmt, ##args)
> +#define PMD_INIT_FUNC_TRACE()     TLOG_DEBUG(" >>")
> +#define DEBUGFUNC(fmt)            TLOG_DEBUG(fmt)
> +
> +#endif /* _NGBE_LOGS_H_ */
> 


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [dpdk-dev] [PATCH v5 09/24] net/ngbe: add HW initialization
  2021-06-02  9:40 ` [dpdk-dev] [PATCH v5 09/24] net/ngbe: add HW initialization Jiawen Wu
@ 2021-06-14 18:01   ` Andrew Rybchenko
  0 siblings, 0 replies; 51+ messages in thread
From: Andrew Rybchenko @ 2021-06-14 18:01 UTC (permalink / raw)
  To: Jiawen Wu, dev

On 6/2/21 12:40 PM, Jiawen Wu wrote:
> Initialize the hardware by resetting the hardware in base code.
> 
> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>

[snip]

> diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
> index c2f92a8437..bb5923c485 100644
> --- a/drivers/net/ngbe/ngbe_ethdev.c
> +++ b/drivers/net/ngbe/ngbe_ethdev.c
> @@ -109,6 +109,13 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
>   		return -EIO;
>   	}
>   
> +	err = hw->mac.init_hw(hw);
> +
> +	if (err) {

Explicit comparison with 0 should be used.
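I.e. (just a sketch of the same code):

	err = hw->mac.init_hw(hw);
	if (err != 0) {
		PMD_INIT_LOG(ERR, "Hardware Initialization Failure: %d", err);
		return -EIO;
	}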

> +		PMD_INIT_LOG(ERR, "Hardware Initialization Failure: %d", err);
> +		return -EIO;
> +	}
> +
>   	return 0;
>   }
>   
> 


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [dpdk-dev] [PATCH v5 12/24] net/ngbe: add info get operation
  2021-06-02  9:40 ` [dpdk-dev] [PATCH v5 12/24] net/ngbe: add info get operation Jiawen Wu
@ 2021-06-14 18:13   ` Andrew Rybchenko
  0 siblings, 0 replies; 51+ messages in thread
From: Andrew Rybchenko @ 2021-06-14 18:13 UTC (permalink / raw)
  To: Jiawen Wu, dev

On 6/2/21 12:40 PM, Jiawen Wu wrote:
> Add device information get operation.
> 
> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
> ---
>   doc/guides/nics/features/ngbe.ini |  1 +
>   drivers/net/ngbe/meson.build      |  1 +
>   drivers/net/ngbe/ngbe_ethdev.c    | 86 +++++++++++++++++++++++++++++++
>   drivers/net/ngbe/ngbe_ethdev.h    | 26 ++++++++++
>   drivers/net/ngbe/ngbe_rxtx.c      | 67 ++++++++++++++++++++++++
>   drivers/net/ngbe/ngbe_rxtx.h      | 15 ++++++
>   6 files changed, 196 insertions(+)
>   create mode 100644 drivers/net/ngbe/ngbe_rxtx.c
>   create mode 100644 drivers/net/ngbe/ngbe_rxtx.h
> 
> diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini
> index 977286ac04..ca03a255de 100644
> --- a/doc/guides/nics/features/ngbe.ini
> +++ b/doc/guides/nics/features/ngbe.ini
> @@ -4,6 +4,7 @@
>   ; Refer to default.ini for the full list of available PMD features.
>   ;
>   [Features]
> +Speed capabilities   = Y
>   Multiprocess aware   = Y
>   Linux                = Y
>   ARMv8                = Y
> diff --git a/drivers/net/ngbe/meson.build b/drivers/net/ngbe/meson.build
> index 81173fa7f0..9e75b82f1c 100644
> --- a/drivers/net/ngbe/meson.build
> +++ b/drivers/net/ngbe/meson.build
> @@ -12,6 +12,7 @@ objs = [base_objs]
>   
>   sources = files(
>   	'ngbe_ethdev.c',
> +	'ngbe_rxtx.c',
>   )
>   
>   includes += include_directories('base')
> diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
> index e9ddbe9753..07df677b64 100644
> --- a/drivers/net/ngbe/ngbe_ethdev.c
> +++ b/drivers/net/ngbe/ngbe_ethdev.c
> @@ -9,6 +9,7 @@
>   #include "ngbe_logs.h"
>   #include "base/ngbe.h"
>   #include "ngbe_ethdev.h"
> +#include "ngbe_rxtx.h"
>   
>   static int ngbe_dev_close(struct rte_eth_dev *dev);
>   
> @@ -31,6 +32,22 @@ static const struct rte_pci_id pci_id_ngbe_map[] = {
>   	{ .vendor_id = 0, /* sentinel */ },
>   };
>   
> +static const struct rte_eth_desc_lim rx_desc_lim = {
> +	.nb_max = NGBE_RING_DESC_MAX,
> +	.nb_min = NGBE_RING_DESC_MIN,
> +	.nb_align = NGBE_RXD_ALIGN,
> +};
> +
> +static const struct rte_eth_desc_lim tx_desc_lim = {
> +	.nb_max = NGBE_RING_DESC_MAX,
> +	.nb_min = NGBE_RING_DESC_MIN,
> +	.nb_align = NGBE_TXD_ALIGN,
> +	.nb_seg_max = NGBE_TX_MAX_SEG,
> +	.nb_mtu_seg_max = NGBE_TX_MAX_SEG,
> +};
> +
> +static const struct eth_dev_ops ngbe_eth_dev_ops;
> +
>   /*
>    * Ensure that all locks are released before first NVM or PHY access
>    */
> @@ -64,6 +81,8 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
>   
>   	PMD_INIT_FUNC_TRACE();
>   
> +	eth_dev->dev_ops = &ngbe_eth_dev_ops;
> +
>   	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
>   		return 0;
>   
> @@ -206,6 +225,73 @@ ngbe_dev_close(struct rte_eth_dev *dev)
>   	return 0;
>   }
>   
> +static int
> +ngbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
> +{
> +	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
> +	struct ngbe_hw *hw = NGBE_DEV_HW(dev);
> +
> +	dev_info->max_rx_queues = (uint16_t)hw->mac.max_rx_queues;
> +	dev_info->max_tx_queues = (uint16_t)hw->mac.max_tx_queues;
> +	dev_info->min_rx_bufsize = 1024;
> +	dev_info->max_rx_pktlen = 15872;
> +	dev_info->max_mac_addrs = hw->mac.num_rar_entries;

Is it 1 or something else? If something else, it should be reported
when you actually support it.

> +	dev_info->max_hash_mac_addrs = NGBE_VMDQ_NUM_UC_MAC;

It should be set when actually supported.

> +	dev_info->max_vfs = pci_dev->max_vfs;

Again it should be reported when you can actually
enable and support VFs.

> +	dev_info->max_vmdq_pools = ETH_64_POOLS;

Same, when you implement support in dev_configure.

> +	dev_info->vmdq_queue_num = dev_info->max_rx_queues;

Same

> +	dev_info->rx_queue_offload_capa = ngbe_get_rx_queue_offloads(dev);
> +	dev_info->rx_offload_capa = (ngbe_get_rx_port_offloads(dev) |
> +				     dev_info->rx_queue_offload_capa);
> +	dev_info->tx_queue_offload_capa = 0;
> +	dev_info->tx_offload_capa = ngbe_get_tx_port_offloads(dev);

Offloads must be reported only when actually supported,
i.e. when a PMD user can really request the offload, use it,
and have it work.

> +
> +	dev_info->default_rxconf = (struct rte_eth_rxconf) {
> +		.rx_thresh = {
> +			.pthresh = NGBE_DEFAULT_RX_PTHRESH,
> +			.hthresh = NGBE_DEFAULT_RX_HTHRESH,
> +			.wthresh = NGBE_DEFAULT_RX_WTHRESH,
> +		},
> +		.rx_free_thresh = NGBE_DEFAULT_RX_FREE_THRESH,
> +		.rx_drop_en = 0,
> +		.offloads = 0,
> +	};
> +
> +	dev_info->default_txconf = (struct rte_eth_txconf) {
> +		.tx_thresh = {
> +			.pthresh = NGBE_DEFAULT_TX_PTHRESH,
> +			.hthresh = NGBE_DEFAULT_TX_HTHRESH,
> +			.wthresh = NGBE_DEFAULT_TX_WTHRESH,
> +		},
> +		.tx_free_thresh = NGBE_DEFAULT_TX_FREE_THRESH,
> +		.offloads = 0,
> +	};

It makes sense to report values in the above fields only when
you actually take them into account at the configure stage.

> +
> +	dev_info->rx_desc_lim = rx_desc_lim;

It belongs to the patch which implements Rx queue setup.

> +	dev_info->tx_desc_lim = tx_desc_lim;

It belongs to the patch which implements Tx queue setup.

> +
> +	dev_info->hash_key_size = NGBE_HKEY_MAX_INDEX * sizeof(uint32_t);
> +	dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
> +	dev_info->flow_type_rss_offloads = NGBE_RSS_OFFLOAD_ALL;
> +
> +	dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G;
> +	dev_info->speed_capa |= ETH_LINK_SPEED_100M;
> +
> +	/* Driver-preferred Rx/Tx parameters */
> +	dev_info->default_rxportconf.burst_size = 32;
> +	dev_info->default_txportconf.burst_size = 32;
> +	dev_info->default_rxportconf.nb_queues = 1;
> +	dev_info->default_txportconf.nb_queues = 1;
> +	dev_info->default_rxportconf.ring_size = 256;
> +	dev_info->default_txportconf.ring_size = 256;

Basically, it is misleading to report any kind of information
which is not actually supported. So, all the lines above belong
to the patches which actually add that support.

> +
> +	return 0;
> +}
> +
> +static const struct eth_dev_ops ngbe_eth_dev_ops = {
> +	.dev_infos_get              = ngbe_dev_info_get,
> +};
> +
>   RTE_PMD_REGISTER_PCI(net_ngbe, rte_ngbe_pmd);
>   RTE_PMD_REGISTER_PCI_TABLE(net_ngbe, pci_id_ngbe_map);
>   RTE_PMD_REGISTER_KMOD_DEP(net_ngbe, "* igb_uio | uio_pci_generic | vfio-pci");
> diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
> index 5917ff02aa..b4e2000dd3 100644
> --- a/drivers/net/ngbe/ngbe_ethdev.h
> +++ b/drivers/net/ngbe/ngbe_ethdev.h
> @@ -6,6 +6,19 @@
>   #ifndef _NGBE_ETHDEV_H_
>   #define _NGBE_ETHDEV_H_
>   
> +#define NGBE_HKEY_MAX_INDEX 10
> +
> +#define NGBE_RSS_OFFLOAD_ALL ( \
> +	ETH_RSS_IPV4 | \
> +	ETH_RSS_NONFRAG_IPV4_TCP | \
> +	ETH_RSS_NONFRAG_IPV4_UDP | \
> +	ETH_RSS_IPV6 | \
> +	ETH_RSS_NONFRAG_IPV6_TCP | \
> +	ETH_RSS_NONFRAG_IPV6_UDP | \
> +	ETH_RSS_IPV6_EX | \
> +	ETH_RSS_IPV6_TCP_EX | \
> +	ETH_RSS_IPV6_UDP_EX)
> +
>   /*
>    * Structure to store private data for each driver instance (for each port).
>    */
> @@ -21,4 +34,17 @@ struct ngbe_adapter {
>   
>   #define NGBE_VMDQ_NUM_UC_MAC         4096 /* Maximum nb. of UC MAC addr. */
>   
> +/*
> + *  Default values for RX/TX configuration
> + */
> +#define NGBE_DEFAULT_RX_FREE_THRESH  32
> +#define NGBE_DEFAULT_RX_PTHRESH      8
> +#define NGBE_DEFAULT_RX_HTHRESH      8
> +#define NGBE_DEFAULT_RX_WTHRESH      0
> +
> +#define NGBE_DEFAULT_TX_FREE_THRESH  32
> +#define NGBE_DEFAULT_TX_PTHRESH      32
> +#define NGBE_DEFAULT_TX_HTHRESH      0
> +#define NGBE_DEFAULT_TX_WTHRESH      0
> +
>   #endif /* _NGBE_ETHDEV_H_ */
> diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c
> new file mode 100644
> index 0000000000..ae24367b18
> --- /dev/null
> +++ b/drivers/net/ngbe/ngbe_rxtx.c
> @@ -0,0 +1,67 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
> + * Copyright(c) 2010-2017 Intel Corporation
> + */
> +
> +#include <stdint.h>
> +#include <rte_ethdev.h>
> +
> +#include "base/ngbe.h"
> +#include "ngbe_ethdev.h"
> +#include "ngbe_rxtx.h"
> +
> +uint64_t
> +ngbe_get_tx_port_offloads(struct rte_eth_dev *dev)
> +{
> +	uint64_t tx_offload_capa;
> +	struct ngbe_hw *hw = NGBE_DEV_HW(dev);
> +
> +	tx_offload_capa =
> +		DEV_TX_OFFLOAD_VLAN_INSERT |
> +		DEV_TX_OFFLOAD_IPV4_CKSUM  |
> +		DEV_TX_OFFLOAD_UDP_CKSUM   |
> +		DEV_TX_OFFLOAD_TCP_CKSUM   |
> +		DEV_TX_OFFLOAD_SCTP_CKSUM  |
> +		DEV_TX_OFFLOAD_TCP_TSO     |
> +		DEV_TX_OFFLOAD_UDP_TSO	   |
> +		DEV_TX_OFFLOAD_UDP_TNL_TSO	|
> +		DEV_TX_OFFLOAD_IP_TNL_TSO	|
> +		DEV_TX_OFFLOAD_IPIP_TNL_TSO	|
> +		DEV_TX_OFFLOAD_MULTI_SEGS;
> +
> +	if (hw->is_pf)
> +		tx_offload_capa |= DEV_TX_OFFLOAD_QINQ_INSERT;
> +
> +	tx_offload_capa |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
> +
> +	return tx_offload_capa;
> +}
> +
> +uint64_t
> +ngbe_get_rx_queue_offloads(struct rte_eth_dev *dev __rte_unused)
> +{
> +	return DEV_RX_OFFLOAD_VLAN_STRIP;
> +}
> +
> +uint64_t
> +ngbe_get_rx_port_offloads(struct rte_eth_dev *dev)
> +{
> +	uint64_t offloads;
> +	struct ngbe_hw *hw = NGBE_DEV_HW(dev);
> +
> +	offloads = DEV_RX_OFFLOAD_IPV4_CKSUM  |
> +		   DEV_RX_OFFLOAD_UDP_CKSUM   |
> +		   DEV_RX_OFFLOAD_TCP_CKSUM   |
> +		   DEV_RX_OFFLOAD_KEEP_CRC    |
> +		   DEV_RX_OFFLOAD_JUMBO_FRAME |
> +		   DEV_RX_OFFLOAD_VLAN_FILTER |
> +		   DEV_RX_OFFLOAD_SCATTER;
> +
> +	if (hw->is_pf)
> +		offloads |= (DEV_RX_OFFLOAD_VLAN_FILTER |
> +			     DEV_RX_OFFLOAD_QINQ_STRIP |
> +			     DEV_RX_OFFLOAD_VLAN_EXTEND);
> +
> +	return offloads;
> +}
> +
> diff --git a/drivers/net/ngbe/ngbe_rxtx.h b/drivers/net/ngbe/ngbe_rxtx.h
> new file mode 100644
> index 0000000000..39011ee286
> --- /dev/null
> +++ b/drivers/net/ngbe/ngbe_rxtx.h
> @@ -0,0 +1,15 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
> + * Copyright(c) 2010-2017 Intel Corporation
> + */
> +
> +#ifndef _NGBE_RXTX_H_
> +#define _NGBE_RXTX_H_
> +
> +#define NGBE_TX_MAX_SEG                    40
> +
> +uint64_t ngbe_get_tx_port_offloads(struct rte_eth_dev *dev);
> +uint64_t ngbe_get_rx_queue_offloads(struct rte_eth_dev *dev);
> +uint64_t ngbe_get_rx_port_offloads(struct rte_eth_dev *dev);
> +
> +#endif /* _NGBE_RXTX_H_ */
> 


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [dpdk-dev] [PATCH v5 13/24] net/ngbe: support link update
  2021-06-02  9:40 ` [dpdk-dev] [PATCH v5 13/24] net/ngbe: support link update Jiawen Wu
@ 2021-06-14 18:45   ` Andrew Rybchenko
  0 siblings, 0 replies; 51+ messages in thread
From: Andrew Rybchenko @ 2021-06-14 18:45 UTC (permalink / raw)
  To: Jiawen Wu, dev

On 6/2/21 12:40 PM, Jiawen Wu wrote:
> Register to handle device interrupt.
> 
> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
> ---
>   doc/guides/nics/features/ngbe.ini  |   2 +
>   doc/guides/nics/ngbe.rst           |   5 +
>   drivers/net/ngbe/base/ngbe_dummy.h |   6 +
>   drivers/net/ngbe/base/ngbe_type.h  |  11 +
>   drivers/net/ngbe/ngbe_ethdev.c     | 364 +++++++++++++++++++++++++++++
>   drivers/net/ngbe/ngbe_ethdev.h     |  28 +++
>   6 files changed, 416 insertions(+)
> 
> diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini
> index ca03a255de..291a542a42 100644
> --- a/doc/guides/nics/features/ngbe.ini
> +++ b/doc/guides/nics/features/ngbe.ini
> @@ -5,6 +5,8 @@
>   ;
>   [Features]
>   Speed capabilities   = Y
> +Link status          = Y
> +Link status event    = Y
>   Multiprocess aware   = Y
>   Linux                = Y
>   ARMv8                = Y
> diff --git a/doc/guides/nics/ngbe.rst b/doc/guides/nics/ngbe.rst
> index c274a15aab..de2ef65664 100644
> --- a/doc/guides/nics/ngbe.rst
> +++ b/doc/guides/nics/ngbe.rst
> @@ -7,6 +7,11 @@ NGBE Poll Mode Driver
>   The NGBE PMD (librte_pmd_ngbe) provides poll mode driver support
>   for Wangxun 1 Gigabit Ethernet NICs.
>   

Two empty lines before the section.

> +Features
> +--------
> +
> +- Link state information
> +
>   Prerequisites
>   -------------
>   
> diff --git a/drivers/net/ngbe/base/ngbe_dummy.h b/drivers/net/ngbe/base/ngbe_dummy.h
> index 8462d6d1cb..4273e5af36 100644
> --- a/drivers/net/ngbe/base/ngbe_dummy.h
> +++ b/drivers/net/ngbe/base/ngbe_dummy.h
> @@ -64,6 +64,11 @@ static inline void ngbe_mac_release_swfw_sync_dummy(struct ngbe_hw *TUP0,
>   					u32 TUP1)
>   {
>   }
> +static inline s32 ngbe_mac_check_link_dummy(struct ngbe_hw *TUP0, u32 *TUP1,
> +					bool *TUP3, bool TUP4)
> +{
> +	return NGBE_ERR_OPS_DUMMY;
> +}
>   static inline s32 ngbe_mac_set_rar_dummy(struct ngbe_hw *TUP0, u32 TUP1,
>   					u8 *TUP2, u32 TUP3, u32 TUP4)
>   {
> @@ -135,6 +140,7 @@ static inline void ngbe_init_ops_dummy(struct ngbe_hw *hw)
>   	hw->mac.get_mac_addr = ngbe_mac_get_mac_addr_dummy;
>   	hw->mac.acquire_swfw_sync = ngbe_mac_acquire_swfw_sync_dummy;
>   	hw->mac.release_swfw_sync = ngbe_mac_release_swfw_sync_dummy;
> +	hw->mac.check_link = ngbe_mac_check_link_dummy;
>   	hw->mac.set_rar = ngbe_mac_set_rar_dummy;
>   	hw->mac.clear_rar = ngbe_mac_clear_rar_dummy;
>   	hw->mac.set_vmdq = ngbe_mac_set_vmdq_dummy;
> diff --git a/drivers/net/ngbe/base/ngbe_type.h b/drivers/net/ngbe/base/ngbe_type.h
> index 5add9ec2a3..d05d2ff28a 100644
> --- a/drivers/net/ngbe/base/ngbe_type.h
> +++ b/drivers/net/ngbe/base/ngbe_type.h
> @@ -96,6 +96,8 @@ struct ngbe_mac_info {
>   	s32 (*acquire_swfw_sync)(struct ngbe_hw *hw, u32 mask);
>   	void (*release_swfw_sync)(struct ngbe_hw *hw, u32 mask);
>   
> +	s32 (*check_link)(struct ngbe_hw *hw, u32 *speed,
> +			       bool *link_up, bool link_up_wait_to_complete);
>   	/* RAR */
>   	s32 (*set_rar)(struct ngbe_hw *hw, u32 index, u8 *addr, u32 vmdq,
>   			  u32 enable_addr);
> @@ -116,6 +118,7 @@ struct ngbe_mac_info {
>   	u32 num_rar_entries;
>   	u32 max_tx_queues;
>   	u32 max_rx_queues;
> +	bool get_link_status;
>   	struct ngbe_thermal_sensor_data  thermal_sensor_data;
>   	bool set_lben;
>   };
> @@ -141,6 +144,14 @@ struct ngbe_phy_info {
>   	bool reset_disable;
>   };
>   
> +enum ngbe_isb_idx {
> +	NGBE_ISB_HEADER,
> +	NGBE_ISB_MISC,
> +	NGBE_ISB_VEC0,
> +	NGBE_ISB_VEC1,
> +	NGBE_ISB_MAX
> +};
> +
>   struct ngbe_hw {
>   	void IOMEM *hw_addr;
>   	void *back;
> diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
> index 07df677b64..97b6de3aa4 100644
> --- a/drivers/net/ngbe/ngbe_ethdev.c
> +++ b/drivers/net/ngbe/ngbe_ethdev.c
> @@ -6,6 +6,8 @@
>   #include <rte_common.h>
>   #include <ethdev_pci.h>
>   
> +#include <rte_alarm.h>
> +
>   #include "ngbe_logs.h"
>   #include "base/ngbe.h"
>   #include "ngbe_ethdev.h"
> @@ -13,6 +15,9 @@
>   
>   static int ngbe_dev_close(struct rte_eth_dev *dev);
>   
> +static void ngbe_dev_interrupt_handler(void *param);
> +static void ngbe_dev_interrupt_delayed_handler(void *param);
> +
>   /*
>    * The set of PCI devices this driver supports
>    */
> @@ -47,6 +52,26 @@ static const struct rte_eth_desc_lim tx_desc_lim = {
>   };
>   
>   static const struct eth_dev_ops ngbe_eth_dev_ops;
> +static inline void
> +ngbe_enable_intr(struct rte_eth_dev *dev)
> +{
> +	struct ngbe_interrupt *intr = NGBE_DEV_INTR(dev);
> +	struct ngbe_hw *hw = NGBE_DEV_HW(dev);
> +
> +	wr32(hw, NGBE_IENMISC, intr->mask_misc);
> +	wr32(hw, NGBE_IMC(0), intr->mask & BIT_MASK32);
> +	ngbe_flush(hw);
> +}
> +
> +static void
> +ngbe_disable_intr(struct ngbe_hw *hw)
> +{
> +	PMD_INIT_FUNC_TRACE();
> +
> +	wr32(hw, NGBE_IMS(0), NGBE_IMS_MASK);
> +	ngbe_flush(hw);
> +}
> +
>   
>   /*
>    * Ensure that all locks are released before first NVM or PHY access
> @@ -76,7 +101,9 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
>   {
>   	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
>   	struct ngbe_hw *hw = NGBE_DEV_HW(eth_dev);
> +	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
>   	const struct rte_memzone *mz;
> +	uint32_t ctrl_ext;
>   	int err;
>   
>   	PMD_INIT_FUNC_TRACE();
> @@ -135,6 +162,9 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
>   		return -EIO;
>   	}
>   
> +	/* disable interrupt */
> +	ngbe_disable_intr(hw);
> +
>   	/* Allocate memory for storing MAC addresses */
>   	eth_dev->data->mac_addrs = rte_zmalloc("ngbe", RTE_ETHER_ADDR_LEN *
>   					       hw->mac.num_rar_entries, 0);
> @@ -160,6 +190,22 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
>   		return -ENOMEM;
>   	}
>   
> +	ctrl_ext = rd32(hw, NGBE_PORTCTL);
> +	/* let hardware know driver is loaded */
> +	ctrl_ext |= NGBE_PORTCTL_DRVLOAD;
> +	wr32(hw, NGBE_PORTCTL, ctrl_ext);
> +	ngbe_flush(hw);
> +
> +	rte_intr_callback_register(intr_handle,
> +				   ngbe_dev_interrupt_handler, eth_dev);
> +
> +	/* enable uio/vfio intr/eventfd mapping */
> +	rte_intr_enable(intr_handle);
> +
> +	/* enable support intr */
> +	ngbe_enable_intr(eth_dev);
> +
> +
>   	return 0;
>   }
>   
> @@ -212,6 +258,19 @@ static struct rte_pci_driver rte_ngbe_pmd = {
>   	.remove = eth_ngbe_pci_remove,
>   };
>   
> +static int
> +ngbe_dev_configure(struct rte_eth_dev *dev)
> +{
> +	struct ngbe_interrupt *intr = NGBE_DEV_INTR(dev);
> +
> +	PMD_INIT_FUNC_TRACE();
> +
> +	/* set flag to update link status after init */
> +	intr->flags |= NGBE_FLAG_NEED_LINK_UPDATE;
> +
> +	return 0;
> +}
> +
>   /*
>    * Reset and stop device.
>    */
> @@ -288,8 +347,313 @@ ngbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
>   	return 0;
>   }
>   
> +/* return 0 means link status changed, -1 means not changed */
> +int
> +ngbe_dev_link_update_share(struct rte_eth_dev *dev,
> +			    int wait_to_complete)
> +{
> +	struct ngbe_hw *hw = NGBE_DEV_HW(dev);
> +	struct rte_eth_link link;
> +	u32 link_speed = NGBE_LINK_SPEED_UNKNOWN;
> +	u32 lan_speed = 0;
> +	struct ngbe_interrupt *intr = NGBE_DEV_INTR(dev);
> +	bool link_up;
> +	int err;
> +	int wait = 1;
> +
> +	memset(&link, 0, sizeof(link));
> +	link.link_status = ETH_LINK_DOWN;
> +	link.link_speed = ETH_SPEED_NUM_NONE;
> +	link.link_duplex = ETH_LINK_HALF_DUPLEX;
> +	link.link_autoneg = ETH_LINK_AUTONEG;
> +
> +	hw->mac.get_link_status = true;
> +
> +	if (intr->flags & NGBE_FLAG_NEED_LINK_CONFIG)
> +		return rte_eth_linkstatus_set(dev, &link);
> +
> +	/* check if it needs to wait to complete, if lsc interrupt is enabled */
> +	if (wait_to_complete == 0 || dev->data->dev_conf.intr_conf.lsc != 0)
> +		wait = 0;
> +
> +	err = hw->mac.check_link(hw, &link_speed, &link_up, wait);
> +
> +	if (err != 0) {
> +		link.link_speed = ETH_SPEED_NUM_100M;
> +		link.link_duplex = ETH_LINK_FULL_DUPLEX;
> +		return rte_eth_linkstatus_set(dev, &link);
> +	}
> +
> +	if (link_up == 0)

A bool should not be compared with 0; it should be !link_up here.
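I.e.:

	if (!link_up)
		return rte_eth_linkstatus_set(dev, &link);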

> +		return rte_eth_linkstatus_set(dev, &link);
> +
> +	intr->flags &= ~NGBE_FLAG_NEED_LINK_CONFIG;
> +	link.link_status = ETH_LINK_UP;
> +	link.link_duplex = ETH_LINK_FULL_DUPLEX;
> +
> +	switch (link_speed) {
> +	default:
> +	case NGBE_LINK_SPEED_UNKNOWN:
> +		link.link_duplex = ETH_LINK_FULL_DUPLEX;
> +		link.link_speed = ETH_SPEED_NUM_100M;
> +		break;
> +
> +	case NGBE_LINK_SPEED_10M_FULL:
> +		link.link_speed = ETH_SPEED_NUM_10M;
> +		lan_speed = 0;
> +		break;
> +
> +	case NGBE_LINK_SPEED_100M_FULL:
> +		link.link_speed = ETH_SPEED_NUM_100M;
> +		lan_speed = 1;
> +		break;
> +
> +	case NGBE_LINK_SPEED_1GB_FULL:
> +		link.link_speed = ETH_SPEED_NUM_1G;
> +		lan_speed = 2;
> +		break;
> +
> +	case NGBE_LINK_SPEED_2_5GB_FULL:
> +		link.link_speed = ETH_SPEED_NUM_2_5G;
> +		break;
> +
> +	case NGBE_LINK_SPEED_5GB_FULL:
> +		link.link_speed = ETH_SPEED_NUM_5G;
> +		break;
> +
> +	case NGBE_LINK_SPEED_10GB_FULL:
> +		link.link_speed = ETH_SPEED_NUM_10G;
> +		break;

It looks like it does not match speed_capa reported in
the dev_info path.

> +	}
> +
> +	if (hw->is_pf) {
> +		wr32m(hw, NGBE_LAN_SPEED, NGBE_LAN_SPEED_MASK, lan_speed);
> +		if (link_speed & (NGBE_LINK_SPEED_1GB_FULL |
> +			NGBE_LINK_SPEED_100M_FULL | NGBE_LINK_SPEED_10M_FULL)) {
> +			wr32m(hw, NGBE_MACTXCFG, NGBE_MACTXCFG_SPEED_MASK,
> +				NGBE_MACTXCFG_SPEED_1G | NGBE_MACTXCFG_TE);
> +		}
> +	}
> +
> +	return rte_eth_linkstatus_set(dev, &link);
> +}
> +
> +static int
> +ngbe_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete)
> +{
> +	return ngbe_dev_link_update_share(dev, wait_to_complete);
> +}
> +
> +/*
> + * It reads ICR and sets flag for the link_update.
> + *
> + * @param dev
> + *  Pointer to struct rte_eth_dev.
> + *
> + * @return
> + *  - On success, zero.
> + *  - On failure, a negative value.
> + */
> +static int
> +ngbe_dev_interrupt_get_status(struct rte_eth_dev *dev)
> +{
> +	uint32_t eicr;
> +	struct ngbe_hw *hw = NGBE_DEV_HW(dev);
> +	struct ngbe_interrupt *intr = NGBE_DEV_INTR(dev);
> +
> +	/* clear all cause mask */
> +	ngbe_disable_intr(hw);
> +
> +	/* read-on-clear nic registers here */
> +	eicr = ((u32 *)hw->isb_mem)[NGBE_ISB_MISC];
> +	PMD_DRV_LOG(DEBUG, "eicr %x", eicr);
> +
> +	intr->flags = 0;
> +
> +	/* set flag for async link update */
> +	if (eicr & NGBE_ICRMISC_PHY)
> +		intr->flags |= NGBE_FLAG_NEED_LINK_UPDATE;
> +
> +	if (eicr & NGBE_ICRMISC_VFMBX)
> +		intr->flags |= NGBE_FLAG_MAILBOX;
> +
> +	if (eicr & NGBE_ICRMISC_LNKSEC)
> +		intr->flags |= NGBE_FLAG_MACSEC;
> +
> +	if (eicr & NGBE_ICRMISC_GPIO)
> +		intr->flags |= NGBE_FLAG_NEED_LINK_UPDATE;
> +
> +	return 0;
> +}
> +
> +/**
> + * It gets and then prints the link status.
> + *
> + * @param dev
> + *  Pointer to struct rte_eth_dev.
> + *
> + * @return
> + *  - On success, zero.
> + *  - On failure, a negative value.
> + */
> +static void
> +ngbe_dev_link_status_print(struct rte_eth_dev *dev)
> +{
> +	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
> +	struct rte_eth_link link;
> +
> +	rte_eth_linkstatus_get(dev, &link);
> +
> +	if (link.link_status) {

Compare with ETH_LINK_UP
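I.e.:

	if (link.link_status == ETH_LINK_UP) {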

> +		PMD_INIT_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
> +					(int)(dev->data->port_id),
> +					(unsigned int)link.link_speed,
> +			link.link_duplex == ETH_LINK_FULL_DUPLEX ?
> +					"full-duplex" : "half-duplex");
> +	} else {
> +		PMD_INIT_LOG(INFO, " Port %d: Link Down",
> +				(int)(dev->data->port_id));
> +	}
> +	PMD_INIT_LOG(DEBUG, "PCI Address: " PCI_PRI_FMT,
> +				pci_dev->addr.domain,
> +				pci_dev->addr.bus,
> +				pci_dev->addr.devid,
> +				pci_dev->addr.function);
> +}
> +
> +/*
> + * It executes link_update after knowing an interrupt occurred.
> + *
> + * @param dev
> + *  Pointer to struct rte_eth_dev.
> + *
> + * @return
> + *  - On success, zero.
> + *  - On failure, a negative value.
> + */
> +static int
> +ngbe_dev_interrupt_action(struct rte_eth_dev *dev)
> +{
> +	struct ngbe_interrupt *intr = NGBE_DEV_INTR(dev);
> +	int64_t timeout;
> +
> +	PMD_DRV_LOG(DEBUG, "intr action type %d", intr->flags);
> +
> +	if (intr->flags & NGBE_FLAG_NEED_LINK_UPDATE) {
> +		struct rte_eth_link link;
> +
> +		/*get the link status before link update, for predicting later*/
> +		rte_eth_linkstatus_get(dev, &link);
> +
> +		ngbe_dev_link_update(dev, 0);
> +
> +		/* likely to up */
> +		if (!link.link_status)

Compare with ETH_LINK_UP
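I.e.:

	/* likely to up */
	if (link.link_status != ETH_LINK_UP)
		/* handle it 1 sec later, wait it being stable */
		timeout = NGBE_LINK_UP_CHECK_TIMEOUT;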

> +			/* handle it 1 sec later, wait it being stable */
> +			timeout = NGBE_LINK_UP_CHECK_TIMEOUT;
> +		/* likely to down */
> +		else
> +			/* handle it 4 sec later, wait it being stable */
> +			timeout = NGBE_LINK_DOWN_CHECK_TIMEOUT;
> +
> +		ngbe_dev_link_status_print(dev);
> +		if (rte_eal_alarm_set(timeout * 1000,
> +				      ngbe_dev_interrupt_delayed_handler,
> +				      (void *)dev) < 0) {
> +			PMD_DRV_LOG(ERR, "Error setting alarm");
> +		} else {
> +			/* remember original mask */
> +			intr->mask_misc_orig = intr->mask_misc;
> +			/* only disable lsc interrupt */
> +			intr->mask_misc &= ~NGBE_ICRMISC_PHY;
> +
> +			intr->mask_orig = intr->mask;
> +			/* only disable all misc interrupts */
> +			intr->mask &= ~(1ULL << NGBE_MISC_VEC_ID);
> +		}
> +	}
> +
> +	PMD_DRV_LOG(DEBUG, "enable intr immediately");
> +	ngbe_enable_intr(dev);
> +
> +	return 0;
> +}
> +
> +/**
> + * Interrupt handler which shall be registered for alarm callback for delayed
> + * handling specific interrupt to wait for the stable nic state. As the
> + * NIC interrupt state is not stable for ngbe after link is just down,
> + * it needs to wait 4 seconds to get the stable status.
> + *
> + * @param handle
> + *  Pointer to interrupt handle.
> + * @param param
> + *  The address of parameter (struct rte_eth_dev *) registered before.
> + *
> + * @return
> + *  void
> + */
> +static void
> +ngbe_dev_interrupt_delayed_handler(void *param)
> +{
> +	struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
> +	struct ngbe_interrupt *intr = NGBE_DEV_INTR(dev);
> +	struct ngbe_hw *hw = NGBE_DEV_HW(dev);
> +	uint32_t eicr;
> +
> +	ngbe_disable_intr(hw);
> +
> +	eicr = ((u32 *)hw->isb_mem)[NGBE_ISB_MISC];
> +
> +	if (intr->flags & NGBE_FLAG_NEED_LINK_UPDATE) {
> +		ngbe_dev_link_update(dev, 0);
> +		intr->flags &= ~NGBE_FLAG_NEED_LINK_UPDATE;
> +		ngbe_dev_link_status_print(dev);
> +		rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_LSC,
> +					      NULL);
> +	}
> +
> +	if (intr->flags & NGBE_FLAG_MACSEC) {
> +		rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_MACSEC,
> +					      NULL);
> +		intr->flags &= ~NGBE_FLAG_MACSEC;
> +	}
> +
> +	/* restore original mask */
> +	intr->mask_misc = intr->mask_misc_orig;
> +	intr->mask_misc_orig = 0;
> +	intr->mask = intr->mask_orig;
> +	intr->mask_orig = 0;
> +
> +	PMD_DRV_LOG(DEBUG, "enable intr in delayed handler S[%08x]", eicr);
> +	ngbe_enable_intr(dev);
> +}
> +
> +/**
> + * Interrupt handler triggered by NIC  for handling
> + * specific interrupt.
> + *
> + * @param handle
> + *  Pointer to interrupt handle.
> + * @param param
> + *  The address of parameter (struct rte_eth_dev *) registered before.
> + *
> + * @return
> + *  void
> + */
> +static void
> +ngbe_dev_interrupt_handler(void *param)
> +{
> +	struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
> +
> +	ngbe_dev_interrupt_get_status(dev);
> +	ngbe_dev_interrupt_action(dev);
> +}
> +
>   static const struct eth_dev_ops ngbe_eth_dev_ops = {
> +	.dev_configure              = ngbe_dev_configure,
>   	.dev_infos_get              = ngbe_dev_info_get,
> +	.link_update                = ngbe_dev_link_update,
>   };
>   
>   RTE_PMD_REGISTER_PCI(net_ngbe, rte_ngbe_pmd);
> diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
> index b4e2000dd3..10c23c41d1 100644
> --- a/drivers/net/ngbe/ngbe_ethdev.h
> +++ b/drivers/net/ngbe/ngbe_ethdev.h
> @@ -6,6 +6,13 @@
>   #ifndef _NGBE_ETHDEV_H_
>   #define _NGBE_ETHDEV_H_
>   
> +/* need update link, bit flag */
> +#define NGBE_FLAG_NEED_LINK_UPDATE (uint32_t)(1 << 0)
> +#define NGBE_FLAG_MAILBOX          (uint32_t)(1 << 1)
> +#define NGBE_FLAG_PHY_INTERRUPT    (uint32_t)(1 << 2)
> +#define NGBE_FLAG_MACSEC           (uint32_t)(1 << 3)
> +#define NGBE_FLAG_NEED_LINK_CONFIG (uint32_t)(1 << 4)
> +
>   #define NGBE_HKEY_MAX_INDEX 10
>   
>   #define NGBE_RSS_OFFLOAD_ALL ( \
> @@ -19,11 +26,23 @@
>   	ETH_RSS_IPV6_TCP_EX | \
>   	ETH_RSS_IPV6_UDP_EX)
>   
> +#define NGBE_MISC_VEC_ID               RTE_INTR_VEC_ZERO_OFFSET
> +
> +/* structure for interrupt relative data */
> +struct ngbe_interrupt {
> +	uint32_t flags;
> +	uint32_t mask_misc;
> +	uint32_t mask_misc_orig; /* save mask during delayed handler */
> +	uint64_t mask;
> +	uint64_t mask_orig; /* save mask during delayed handler */
> +};
> +
>   /*
>    * Structure to store private data for each driver instance (for each port).
>    */
>   struct ngbe_adapter {
>   	struct ngbe_hw             hw;
> +	struct ngbe_interrupt      intr;
>   };
>   
>   #define NGBE_DEV_ADAPTER(dev) \
> @@ -32,6 +51,15 @@ struct ngbe_adapter {
>   #define NGBE_DEV_HW(dev) \
>   	(&((struct ngbe_adapter *)(dev)->data->dev_private)->hw)
>   
> +#define NGBE_DEV_INTR(dev) \
> +	(&((struct ngbe_adapter *)(dev)->data->dev_private)->intr)
> +

The above should be a static inline function.
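E.g. something like this (the function name is just a suggestion):

	static inline struct ngbe_interrupt *
	ngbe_dev_intr(struct rte_eth_dev *dev)
	{
		struct ngbe_adapter *adapter =
			(struct ngbe_adapter *)dev->data->dev_private;

		return &adapter->intr;
	}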

> +int
> +ngbe_dev_link_update_share(struct rte_eth_dev *dev,
> +		int wait_to_complete);
> +
> +#define NGBE_LINK_DOWN_CHECK_TIMEOUT 4000 /* ms */
> +#define NGBE_LINK_UP_CHECK_TIMEOUT   1000 /* ms */
>   #define NGBE_VMDQ_NUM_UC_MAC         4096 /* Maximum nb. of UC MAC addr. */
>   
>   /*
> 


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [dpdk-dev] [PATCH v5 15/24] net/ngbe: add Rx queue setup and release
  2021-06-02  9:40 ` [dpdk-dev] [PATCH v5 15/24] net/ngbe: add Rx queue setup and release Jiawen Wu
@ 2021-06-14 18:53   ` Andrew Rybchenko
  2021-06-15  7:50     ` Jiawen Wu
  0 siblings, 1 reply; 51+ messages in thread
From: Andrew Rybchenko @ 2021-06-14 18:53 UTC (permalink / raw)
  To: Jiawen Wu, dev

On 6/2/21 12:40 PM, Jiawen Wu wrote:
> Setup device Rx queue and release Rx queue.
> 
> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
> ---
>   drivers/net/ngbe/ngbe_ethdev.c |   9 +
>   drivers/net/ngbe/ngbe_ethdev.h |   8 +
>   drivers/net/ngbe/ngbe_rxtx.c   | 305 +++++++++++++++++++++++++++++++++
>   drivers/net/ngbe/ngbe_rxtx.h   |  90 ++++++++++
>   4 files changed, 412 insertions(+)
> 
> diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
> index 97b6de3aa4..8eb41a7a2b 100644
> --- a/drivers/net/ngbe/ngbe_ethdev.c
> +++ b/drivers/net/ngbe/ngbe_ethdev.c
> @@ -262,12 +262,19 @@ static int
>   ngbe_dev_configure(struct rte_eth_dev *dev)
>   {
>   	struct ngbe_interrupt *intr = NGBE_DEV_INTR(dev);
> +	struct ngbe_adapter *adapter = NGBE_DEV_ADAPTER(dev);
>   
>   	PMD_INIT_FUNC_TRACE();
>   
>   	/* set flag to update link status after init */
>   	intr->flags |= NGBE_FLAG_NEED_LINK_UPDATE;
>   
> +	/*
> +	 * Initialize to TRUE. If any of Rx queues doesn't meet the bulk
> +	 * allocation Rx preconditions we will reset it.
> +	 */
> +	adapter->rx_bulk_alloc_allowed = true;
> +
>   	return 0;
>   }
>   
> @@ -654,6 +661,8 @@ static const struct eth_dev_ops ngbe_eth_dev_ops = {
>   	.dev_configure              = ngbe_dev_configure,
>   	.dev_infos_get              = ngbe_dev_info_get,
>   	.link_update                = ngbe_dev_link_update,
> +	.rx_queue_setup             = ngbe_dev_rx_queue_setup,
> +	.rx_queue_release           = ngbe_dev_rx_queue_release,
>   };
>   
>   RTE_PMD_REGISTER_PCI(net_ngbe, rte_ngbe_pmd);
> diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
> index 10c23c41d1..c324ca7e0f 100644
> --- a/drivers/net/ngbe/ngbe_ethdev.h
> +++ b/drivers/net/ngbe/ngbe_ethdev.h
> @@ -43,6 +43,7 @@ struct ngbe_interrupt {
>   struct ngbe_adapter {
>   	struct ngbe_hw             hw;
>   	struct ngbe_interrupt      intr;
> +	bool rx_bulk_alloc_allowed;
>   };
>   
>   #define NGBE_DEV_ADAPTER(dev) \
> @@ -54,6 +55,13 @@ struct ngbe_adapter {
>   #define NGBE_DEV_INTR(dev) \
>   	(&((struct ngbe_adapter *)(dev)->data->dev_private)->intr)
>   
> +void ngbe_dev_rx_queue_release(void *rxq);
> +
> +int  ngbe_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
> +		uint16_t nb_rx_desc, unsigned int socket_id,
> +		const struct rte_eth_rxconf *rx_conf,
> +		struct rte_mempool *mb_pool);
> +
>   int
>   ngbe_dev_link_update_share(struct rte_eth_dev *dev,
>   		int wait_to_complete);
> diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c
> index ae24367b18..9992983bef 100644
> --- a/drivers/net/ngbe/ngbe_rxtx.c
> +++ b/drivers/net/ngbe/ngbe_rxtx.c
> @@ -3,9 +3,14 @@
>    * Copyright(c) 2010-2017 Intel Corporation
>    */
>   
> +#include <sys/queue.h>
> +
>   #include <stdint.h>
>   #include <rte_ethdev.h>
> +#include <ethdev_driver.h>
> +#include <rte_malloc.h>
>   
> +#include "ngbe_logs.h"
>   #include "base/ngbe.h"
>   #include "ngbe_ethdev.h"
>   #include "ngbe_rxtx.h"
> @@ -37,6 +42,166 @@ ngbe_get_tx_port_offloads(struct rte_eth_dev *dev)
>   	return tx_offload_capa;
>   }
>   
> +/**
> + * ngbe_free_sc_cluster - free the not-yet-completed scattered cluster
> + *
> + * The "next" pointer of the last segment of (not-yet-completed) RSC clusters
> + * in the sw_rsc_ring is not set to NULL but rather points to the next
> + * mbuf of this RSC aggregation (that has not been completed yet and still
> + * resides on the HW ring). So, instead of calling for rte_pktmbuf_free() we
> + * will just free first "nb_segs" segments of the cluster explicitly by calling
> + * an rte_pktmbuf_free_seg().
> + *
> + * @m scattered cluster head
> + */
> +static void __rte_cold
> +ngbe_free_sc_cluster(struct rte_mbuf *m)
> +{
> +	uint16_t i, nb_segs = m->nb_segs;
> +	struct rte_mbuf *next_seg;
> +
> +	for (i = 0; i < nb_segs; i++) {
> +		next_seg = m->next;
> +		rte_pktmbuf_free_seg(m);
> +		m = next_seg;
> +	}
> +}
> +
> +static void __rte_cold
> +ngbe_rx_queue_release_mbufs(struct ngbe_rx_queue *rxq)
> +{
> +	unsigned int i;
> +
> +	if (rxq->sw_ring != NULL) {
> +		for (i = 0; i < rxq->nb_rx_desc; i++) {
> +			if (rxq->sw_ring[i].mbuf != NULL) {
> +				rte_pktmbuf_free_seg(rxq->sw_ring[i].mbuf);
> +				rxq->sw_ring[i].mbuf = NULL;
> +			}
> +		}
> +		if (rxq->rx_nb_avail) {
> +			for (i = 0; i < rxq->rx_nb_avail; ++i) {
> +				struct rte_mbuf *mb;
> +
> +				mb = rxq->rx_stage[rxq->rx_next_avail + i];
> +				rte_pktmbuf_free_seg(mb);
> +			}
> +			rxq->rx_nb_avail = 0;
> +		}
> +	}
> +
> +	if (rxq->sw_sc_ring)
> +		for (i = 0; i < rxq->nb_rx_desc; i++)
> +			if (rxq->sw_sc_ring[i].fbuf) {

Compare with NULL
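I.e.:

	if (rxq->sw_sc_ring[i].fbuf != NULL) {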

> +				ngbe_free_sc_cluster(rxq->sw_sc_ring[i].fbuf);
> +				rxq->sw_sc_ring[i].fbuf = NULL;
> +			}
> +}
> +
> +static void __rte_cold
> +ngbe_rx_queue_release(struct ngbe_rx_queue *rxq)
> +{
> +	if (rxq != NULL) {
> +		ngbe_rx_queue_release_mbufs(rxq);
> +		rte_free(rxq->sw_ring);
> +		rte_free(rxq->sw_sc_ring);
> +		rte_free(rxq);
> +	}
> +}
> +
> +void __rte_cold
> +ngbe_dev_rx_queue_release(void *rxq)
> +{
> +	ngbe_rx_queue_release(rxq);
> +}
> +
> +/*
> + * Check if Rx Burst Bulk Alloc function can be used.
> + * Return
> + *        0: the preconditions are satisfied and the bulk allocation function
> + *           can be used.
> + *  -EINVAL: the preconditions are NOT satisfied and the default Rx burst
> + *           function must be used.
> + */
> +static inline int __rte_cold
> +check_rx_burst_bulk_alloc_preconditions(struct ngbe_rx_queue *rxq)
> +{
> +	int ret = 0;
> +
> +	/*
> +	 * Make sure the following pre-conditions are satisfied:
> +	 *   rxq->rx_free_thresh >= RTE_PMD_NGBE_RX_MAX_BURST
> +	 *   rxq->rx_free_thresh < rxq->nb_rx_desc
> +	 *   (rxq->nb_rx_desc % rxq->rx_free_thresh) == 0
> +	 * Scattered packets are not supported.  This should be checked
> +	 * outside of this function.
> +	 */
> +	if (!(rxq->rx_free_thresh >= RTE_PMD_NGBE_RX_MAX_BURST)) {
> +		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: "
> +			     "rxq->rx_free_thresh=%d, "
> +			     "RTE_PMD_NGBE_RX_MAX_BURST=%d",
> +			     rxq->rx_free_thresh, RTE_PMD_NGBE_RX_MAX_BURST);
> +		ret = -EINVAL;
> +	} else if (!(rxq->rx_free_thresh < rxq->nb_rx_desc)) {
> +		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: "
> +			     "rxq->rx_free_thresh=%d, "
> +			     "rxq->nb_rx_desc=%d",
> +			     rxq->rx_free_thresh, rxq->nb_rx_desc);
> +		ret = -EINVAL;
> +	} else if (!((rxq->nb_rx_desc % rxq->rx_free_thresh) == 0)) {
> +		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: "
> +			     "rxq->nb_rx_desc=%d, "
> +			     "rxq->rx_free_thresh=%d",
> +			     rxq->nb_rx_desc, rxq->rx_free_thresh);
> +		ret = -EINVAL;
> +	}
> +
> +	return ret;
> +}
> +
> +/* Reset dynamic ngbe_rx_queue fields back to defaults */
> +static void __rte_cold
> +ngbe_reset_rx_queue(struct ngbe_adapter *adapter, struct ngbe_rx_queue *rxq)
> +{
> +	static const struct ngbe_rx_desc zeroed_desc = {
> +						{{0}, {0} }, {{0}, {0} } };
> +	unsigned int i;
> +	uint16_t len = rxq->nb_rx_desc;
> +
> +	/*
> +	 * By default, the Rx queue setup function allocates enough memory for
> +	 * NGBE_RING_DESC_MAX.  The Rx Burst bulk allocation function requires
> +	 * extra memory at the end of the descriptor ring to be zero'd out.
> +	 */
> +	if (adapter->rx_bulk_alloc_allowed)
> +		/* zero out extra memory */
> +		len += RTE_PMD_NGBE_RX_MAX_BURST;
> +
> +	/*
> +	 * Zero out HW ring memory. Zero out extra memory at the end of
> +	 * the H/W ring so look-ahead logic in Rx Burst bulk alloc function
> +	 * reads extra memory as zeros.
> +	 */
> +	for (i = 0; i < len; i++)
> +		rxq->rx_ring[i] = zeroed_desc;
> +
> +	/*
> +	 * initialize extra software ring entries. Space for these extra
> +	 * entries is always allocated
> +	 */
> +	memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf));
> +	for (i = rxq->nb_rx_desc; i < len; ++i)
> +		rxq->sw_ring[i].mbuf = &rxq->fake_mbuf;
> +
> +	rxq->rx_nb_avail = 0;
> +	rxq->rx_next_avail = 0;
> +	rxq->rx_free_trigger = (uint16_t)(rxq->rx_free_thresh - 1);
> +	rxq->rx_tail = 0;
> +	rxq->nb_rx_hold = 0;
> +	rxq->pkt_first_seg = NULL;
> +	rxq->pkt_last_seg = NULL;
> +}
> +
>   uint64_t
>   ngbe_get_rx_queue_offloads(struct rte_eth_dev *dev __rte_unused)
>   {
> @@ -65,3 +230,143 @@ ngbe_get_rx_port_offloads(struct rte_eth_dev *dev)
>   	return offloads;
>   }
>   
> +int __rte_cold
> +ngbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
> +			 uint16_t queue_idx,
> +			 uint16_t nb_desc,
> +			 unsigned int socket_id,
> +			 const struct rte_eth_rxconf *rx_conf,
> +			 struct rte_mempool *mp)
> +{
> +	const struct rte_memzone *rz;
> +	struct ngbe_rx_queue *rxq;
> +	struct ngbe_hw     *hw;
> +	uint16_t len;
> +	struct ngbe_adapter *adapter = NGBE_DEV_ADAPTER(dev);
> +	uint64_t offloads;
> +
> +	PMD_INIT_FUNC_TRACE();
> +	hw = NGBE_DEV_HW(dev);
> +
> +	offloads = rx_conf->offloads | dev->data->dev_conf.rxmode.offloads;
> +
> +	/*
> +	 * Validate number of receive descriptors.
> +	 * It must not exceed hardware maximum, and must be multiple
> +	 * of NGBE_ALIGN.
> +	 */
> +	if (nb_desc % NGBE_RXD_ALIGN != 0 ||
> +			nb_desc > NGBE_RING_DESC_MAX ||
> +			nb_desc < NGBE_RING_DESC_MIN) {
> +		return -EINVAL;
> +	}
> +
> +	/* Free memory prior to re-allocation if needed... */
> +	if (dev->data->rx_queues[queue_idx] != NULL) {
> +		ngbe_rx_queue_release(dev->data->rx_queues[queue_idx]);
> +		dev->data->rx_queues[queue_idx] = NULL;
> +	}
> +
> +	/* First allocate the rx queue data structure */
> +	rxq = rte_zmalloc_socket("ethdev RX queue",
> +				 sizeof(struct ngbe_rx_queue),
> +				 RTE_CACHE_LINE_SIZE, socket_id);
> +	if (rxq == NULL)
> +		return -ENOMEM;
> +	rxq->mb_pool = mp;
> +	rxq->nb_rx_desc = nb_desc;
> +	rxq->rx_free_thresh = rx_conf->rx_free_thresh;
> +	rxq->queue_id = queue_idx;
> +	rxq->reg_idx = queue_idx;
> +	rxq->port_id = dev->data->port_id;
> +	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
> +		rxq->crc_len = RTE_ETHER_CRC_LEN;
> +	else
> +		rxq->crc_len = 0;
> +	rxq->drop_en = rx_conf->rx_drop_en;
> +	rxq->rx_deferred_start = rx_conf->rx_deferred_start;
> +	rxq->offloads = offloads;
> +	rxq->pkt_type_mask = NGBE_PTID_MASK;
> +
> +	/*
> +	 * Allocate RX ring hardware descriptors. A memzone large enough to
> +	 * handle the maximum ring size is allocated in order to allow for
> +	 * resizing in later calls to the queue setup function.
> +	 */
> +	rz = rte_eth_dma_zone_reserve(dev, "rx_ring", queue_idx,
> +				      RX_RING_SZ, NGBE_ALIGN, socket_id);
> +	if (rz == NULL) {
> +		ngbe_rx_queue_release(rxq);
> +		return -ENOMEM;
> +	}
> +
> +	/*
> +	 * Zero init all the descriptors in the ring.
> +	 */
> +	memset(rz->addr, 0, RX_RING_SZ);
> +
> +	rxq->rdt_reg_addr = NGBE_REG_ADDR(hw, NGBE_RXWP(rxq->reg_idx));
> +	rxq->rdh_reg_addr = NGBE_REG_ADDR(hw, NGBE_RXRP(rxq->reg_idx));
> +
> +	rxq->rx_ring_phys_addr = TMZ_PADDR(rz);
> +	rxq->rx_ring = (struct ngbe_rx_desc *)TMZ_VADDR(rz);
> +
> +	/*
> +	 * Certain constraints must be met in order to use the bulk buffer
> +	 * allocation Rx burst function. If any of Rx queues doesn't meet them
> +	 * the feature should be disabled for the whole port.
> +	 */
> +	if (check_rx_burst_bulk_alloc_preconditions(rxq)) {
> +		PMD_INIT_LOG(DEBUG, "queue[%d] doesn't meet Rx Bulk Alloc "
> +				    "preconditions - canceling the feature for "
> +				    "the whole port[%d]",
> +			     rxq->queue_id, rxq->port_id);
> +		adapter->rx_bulk_alloc_allowed = false;
> +	}
> +
> +	/*
> +	 * Allocate software ring. Allow for space at the end of the
> +	 * S/W ring to make sure look-ahead logic in bulk alloc Rx burst
> +	 * function does not access an invalid memory region.
> +	 */
> +	len = nb_desc;
> +	if (adapter->rx_bulk_alloc_allowed)
> +		len += RTE_PMD_NGBE_RX_MAX_BURST;
> +
> +	rxq->sw_ring = rte_zmalloc_socket("rxq->sw_ring",
> +					  sizeof(struct ngbe_rx_entry) * len,
> +					  RTE_CACHE_LINE_SIZE, socket_id);
> +	if (!rxq->sw_ring) {

Compare with NULL

> +		ngbe_rx_queue_release(rxq);
> +		return -ENOMEM;
> +	}
> +
> +	/*
> +	 * Always allocate even if it's not going to be needed in order to
> +	 * simplify the code.
> +	 *
> +	 * This ring is used in Scattered Rx cases and Scattered Rx may
> +	 * be requested in ngbe_dev_rx_init(), which is called later from
> +	 * dev_start() flow.
> +	 */
> +	rxq->sw_sc_ring =
> +		rte_zmalloc_socket("rxq->sw_sc_ring",
> +				  sizeof(struct ngbe_scattered_rx_entry) * len,
> +				  RTE_CACHE_LINE_SIZE, socket_id);
> +	if (!rxq->sw_sc_ring) {

Compare with NULL

> +		ngbe_rx_queue_release(rxq);
> +		return -ENOMEM;
> +	}
> +
> +	PMD_INIT_LOG(DEBUG, "sw_ring=%p sw_sc_ring=%p hw_ring=%p "
> +			    "dma_addr=0x%" PRIx64,
> +		     rxq->sw_ring, rxq->sw_sc_ring, rxq->rx_ring,
> +		     rxq->rx_ring_phys_addr);
> +
> +	dev->data->rx_queues[queue_idx] = rxq;
> +
> +	ngbe_reset_rx_queue(adapter, rxq);
> +
> +	return 0;
> +}
> +
> diff --git a/drivers/net/ngbe/ngbe_rxtx.h b/drivers/net/ngbe/ngbe_rxtx.h
> index 39011ee286..e1676a53b4 100644
> --- a/drivers/net/ngbe/ngbe_rxtx.h
> +++ b/drivers/net/ngbe/ngbe_rxtx.h
> @@ -6,7 +6,97 @@
>   #ifndef _NGBE_RXTX_H_
>   #define _NGBE_RXTX_H_
>   
> +/*****************************************************************************
> + * Receive Descriptor
> + *****************************************************************************/
> +struct ngbe_rx_desc {
> +	struct {
> +		union {
> +			__le32 dw0;

rte_* types should be used.
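I.e. rte_le16_t/rte_le32_t from rte_byteorder.h, e.g.:

	union {
		rte_le32_t dw0;
		struct {
			rte_le16_t pkt;
			rte_le16_t hdr;
		} lo;
	};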

> +			struct {
> +				__le16 pkt;
> +				__le16 hdr;
> +			} lo;
> +		};
> +		union {
> +			__le32 dw1;
> +			struct {
> +				__le16 ipid;
> +				__le16 csum;
> +			} hi;
> +		};
> +	} qw0; /* also as r.pkt_addr */
> +	struct {
> +		union {
> +			__le32 dw2;
> +			struct {
> +				__le32 status;
> +			} lo;
> +		};
> +		union {
> +			__le32 dw3;
> +			struct {
> +				__le16 len;
> +				__le16 tag;
> +			} hi;
> +		};
> +	} qw1; /* also as r.hdr_addr */
> +};
> +
> +#define RTE_PMD_NGBE_RX_MAX_BURST 32
> +
> +#define RX_RING_SZ ((NGBE_RING_DESC_MAX + RTE_PMD_NGBE_RX_MAX_BURST) * \
> +		    sizeof(struct ngbe_rx_desc))
> +
>   #define NGBE_TX_MAX_SEG                    40
> +#define NGBE_PTID_MASK                     0xFF
> +
> +/**
> + * Structure associated with each descriptor of the RX ring of a RX queue.
> + */
> +struct ngbe_rx_entry {
> +	struct rte_mbuf *mbuf; /**< mbuf associated with RX descriptor. */
> +};
> +
> +struct ngbe_scattered_rx_entry {
> +	struct rte_mbuf *fbuf; /**< First segment of the fragmented packet. */
> +};
> +
> +/**
> + * Structure associated with each RX queue.
> + */
> +struct ngbe_rx_queue {
> +	struct rte_mempool  *mb_pool; /**< mbuf pool to populate RX ring. */
> +	volatile struct ngbe_rx_desc *rx_ring; /**< RX ring virtual address. */
> +	uint64_t            rx_ring_phys_addr; /**< RX ring DMA address. */
> +	volatile uint32_t   *rdt_reg_addr; /**< RDT register address. */
> +	volatile uint32_t   *rdh_reg_addr; /**< RDH register address. */
> +	struct ngbe_rx_entry *sw_ring; /**< address of RX software ring. */
> +	/**< address of scattered Rx software ring. */
> +	struct ngbe_scattered_rx_entry *sw_sc_ring;
> +	struct rte_mbuf *pkt_first_seg; /**< First segment of current packet. */
> +	struct rte_mbuf *pkt_last_seg; /**< Last segment of current packet. */
> +	uint16_t            nb_rx_desc; /**< number of RX descriptors. */
> +	uint16_t            rx_tail;  /**< current value of RDT register. */
> +	uint16_t            nb_rx_hold; /**< number of held free RX desc. */
> +	uint16_t rx_nb_avail; /**< nr of staged pkts ready to ret to app */
> +	uint16_t rx_next_avail; /**< idx of next staged pkt to ret to app */
> +	uint16_t rx_free_trigger; /**< triggers rx buffer allocation */
> +	uint16_t            rx_free_thresh; /**< max free RX desc to hold. */
> +	uint16_t            queue_id; /**< RX queue index. */
> +	uint16_t            reg_idx;  /**< RX queue register index. */
> +	/**< Packet type mask for different NICs. */
> +	uint16_t            pkt_type_mask;
> +	uint16_t            port_id;  /**< Device port identifier. */
> +	uint8_t             crc_len;  /**< 0 if CRC stripped, 4 otherwise. */
> +	uint8_t             drop_en;  /**< If not 0, set SRRCTL.Drop_En. */
> +	uint8_t             rx_deferred_start; /**< not in global dev start. */
> +	uint64_t	    offloads; /**< Rx offloads with DEV_RX_OFFLOAD_* */
> +	/** need to alloc dummy mbuf, for wraparound when scanning hw ring */
> +	struct rte_mbuf fake_mbuf;
> +	/** hold packets to return to application */
> +	struct rte_mbuf *rx_stage[RTE_PMD_NGBE_RX_MAX_BURST * 2];
> +};
>   
>   uint64_t ngbe_get_tx_port_offloads(struct rte_eth_dev *dev);
>   uint64_t ngbe_get_rx_queue_offloads(struct rte_eth_dev *dev);
> 


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [dpdk-dev] [PATCH v5 16/24] net/ngbe: add Tx queue setup and release
  2021-06-02  9:41 ` [dpdk-dev] [PATCH v5 16/24] net/ngbe: add Tx " Jiawen Wu
@ 2021-06-14 18:59   ` Andrew Rybchenko
  0 siblings, 0 replies; 51+ messages in thread
From: Andrew Rybchenko @ 2021-06-14 18:59 UTC (permalink / raw)
  To: Jiawen Wu, dev

On 6/2/21 12:41 PM, Jiawen Wu wrote:
> Setup device Tx queue and release Tx queue.
> 
> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
> ---
>   drivers/net/ngbe/ngbe_ethdev.c |   2 +
>   drivers/net/ngbe/ngbe_ethdev.h |   6 +
>   drivers/net/ngbe/ngbe_rxtx.c   | 212 +++++++++++++++++++++++++++++++++
>   drivers/net/ngbe/ngbe_rxtx.h   |  91 ++++++++++++++
>   4 files changed, 311 insertions(+)
> 
> diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
> index 8eb41a7a2b..2f8ac48f33 100644
> --- a/drivers/net/ngbe/ngbe_ethdev.c
> +++ b/drivers/net/ngbe/ngbe_ethdev.c
> @@ -663,6 +663,8 @@ static const struct eth_dev_ops ngbe_eth_dev_ops = {
>   	.link_update                = ngbe_dev_link_update,
>   	.rx_queue_setup             = ngbe_dev_rx_queue_setup,
>   	.rx_queue_release           = ngbe_dev_rx_queue_release,
> +	.tx_queue_setup             = ngbe_dev_tx_queue_setup,
> +	.tx_queue_release           = ngbe_dev_tx_queue_release,
>   };
>   
>   RTE_PMD_REGISTER_PCI(net_ngbe, rte_ngbe_pmd);
> diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
> index c324ca7e0f..f52d813a47 100644
> --- a/drivers/net/ngbe/ngbe_ethdev.h
> +++ b/drivers/net/ngbe/ngbe_ethdev.h
> @@ -57,11 +57,17 @@ struct ngbe_adapter {
>   
>   void ngbe_dev_rx_queue_release(void *rxq);
>   
> +void ngbe_dev_tx_queue_release(void *txq);
> +
>   int  ngbe_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
>   		uint16_t nb_rx_desc, unsigned int socket_id,
>   		const struct rte_eth_rxconf *rx_conf,
>   		struct rte_mempool *mb_pool);
>   
> +int  ngbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
> +		uint16_t nb_tx_desc, unsigned int socket_id,
> +		const struct rte_eth_txconf *tx_conf);
> +
>   int
>   ngbe_dev_link_update_share(struct rte_eth_dev *dev,
>   		int wait_to_complete);
> diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c
> index 9992983bef..2d8db3245f 100644
> --- a/drivers/net/ngbe/ngbe_rxtx.c
> +++ b/drivers/net/ngbe/ngbe_rxtx.c
> @@ -15,6 +15,99 @@
>   #include "ngbe_ethdev.h"
>   #include "ngbe_rxtx.h"
>   
> +#ifndef DEFAULT_TX_FREE_THRESH
> +#define DEFAULT_TX_FREE_THRESH 32
> +#endif

The define definitely belongs to a header, since
it should be reported in dev_info.
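I.e. NGBE_DEFAULT_TX_FREE_THRESH from ngbe_ethdev.h is already
reported via default_txconf in ngbe_dev_info_get(), so the setup
code could simply reuse it instead of a second local define:

	tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh) ?
			tx_conf->tx_free_thresh : NGBE_DEFAULT_TX_FREE_THRESH);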

> +
> +/*********************************************************************
> + *
> + *  Queue management functions
> + *
> + **********************************************************************/
> +
> +static void __rte_cold
> +ngbe_tx_queue_release_mbufs(struct ngbe_tx_queue *txq)
> +{
> +	unsigned int i;
> +
> +	if (txq->sw_ring != NULL) {
> +		for (i = 0; i < txq->nb_tx_desc; i++) {
> +			if (txq->sw_ring[i].mbuf != NULL) {
> +				rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
> +				txq->sw_ring[i].mbuf = NULL;
> +			}
> +		}
> +	}
> +}
> +
> +static void __rte_cold
> +ngbe_tx_free_swring(struct ngbe_tx_queue *txq)
> +{
> +	if (txq != NULL &&
> +	    txq->sw_ring != NULL)

The check for txq->sw_ring is not required.
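
rte_free() is a no-op for a NULL pointer, so this can be simply:

	static void __rte_cold
	ngbe_tx_free_swring(struct ngbe_tx_queue *txq)
	{
		if (txq != NULL)
			rte_free(txq->sw_ring);
	}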

> +		rte_free(txq->sw_ring);
> +}
> +
> +static void __rte_cold
> +ngbe_tx_queue_release(struct ngbe_tx_queue *txq)
> +{
> +	if (txq != NULL && txq->ops != NULL) {
> +		txq->ops->release_mbufs(txq);
> +		txq->ops->free_swring(txq);
> +		rte_free(txq);

Shouldn't we free txq even if ops is NULL?
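
I.e. something like (just a sketch):

	static void __rte_cold
	ngbe_tx_queue_release(struct ngbe_tx_queue *txq)
	{
		if (txq != NULL) {
			if (txq->ops != NULL) {
				txq->ops->release_mbufs(txq);
				txq->ops->free_swring(txq);
			}
			rte_free(txq);
		}
	}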

> +	}
> +}
> +
> +void __rte_cold
> +ngbe_dev_tx_queue_release(void *txq)
> +{
> +	ngbe_tx_queue_release(txq);
> +}
> +
> +/* (Re)set dynamic ngbe_tx_queue fields to defaults */
> +static void __rte_cold
> +ngbe_reset_tx_queue(struct ngbe_tx_queue *txq)
> +{
> +	static const struct ngbe_tx_desc zeroed_desc = {0};
> +	struct ngbe_tx_entry *txe = txq->sw_ring;
> +	uint16_t prev, i;
> +
> +	/* Zero out HW ring memory */
> +	for (i = 0; i < txq->nb_tx_desc; i++)
> +		txq->tx_ring[i] = zeroed_desc;
> +
> +	/* Initialize SW ring entries */
> +	prev = (uint16_t)(txq->nb_tx_desc - 1);
> +	for (i = 0; i < txq->nb_tx_desc; i++) {
> +		volatile struct ngbe_tx_desc *txd = &txq->tx_ring[i];

Why is volatile used above? Please add a comment.
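
If it is only to preserve the qualifier of the tx_ring field (which
is already declared volatile in struct ngbe_tx_queue), a one-line
comment would do, e.g.:

	/* tx_ring is volatile: HW writes back the DD bit in dw3 */
	volatile struct ngbe_tx_desc *txd = &txq->tx_ring[i];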

> +
> +		txd->dw3 = rte_cpu_to_le_32(NGBE_TXD_DD);
> +		txe[i].mbuf = NULL;
> +		txe[i].last_id = i;
> +		txe[prev].next_id = i;
> +		prev = i;
> +	}
> +
> +	txq->tx_next_dd = (uint16_t)(txq->tx_free_thresh - 1);
> +	txq->tx_tail = 0;
> +
> +	/*
> +	 * Always allow 1 descriptor to be un-allocated to avoid
> +	 * a H/W race condition
> +	 */
> +	txq->last_desc_cleaned = (uint16_t)(txq->nb_tx_desc - 1);
> +	txq->nb_tx_free = (uint16_t)(txq->nb_tx_desc - 1);
> +	txq->ctx_curr = 0;
> +	memset((void *)&txq->ctx_cache, 0,
> +		NGBE_CTX_NUM * sizeof(struct ngbe_ctx_info));
> +}
> +
> +static const struct ngbe_txq_ops def_txq_ops = {
> +	.release_mbufs = ngbe_tx_queue_release_mbufs,
> +	.free_swring = ngbe_tx_free_swring,
> +	.reset = ngbe_reset_tx_queue,
> +};
> +
>   uint64_t
>   ngbe_get_tx_port_offloads(struct rte_eth_dev *dev)
>   {
> @@ -42,6 +135,125 @@ ngbe_get_tx_port_offloads(struct rte_eth_dev *dev)
>   	return tx_offload_capa;
>   }
>   
> +int __rte_cold
> +ngbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
> +			 uint16_t queue_idx,
> +			 uint16_t nb_desc,
> +			 unsigned int socket_id,
> +			 const struct rte_eth_txconf *tx_conf)
> +{
> +	const struct rte_memzone *tz;
> +	struct ngbe_tx_queue *txq;
> +	struct ngbe_hw     *hw;
> +	uint16_t tx_free_thresh;
> +	uint64_t offloads;
> +
> +	PMD_INIT_FUNC_TRACE();
> +	hw = NGBE_DEV_HW(dev);
> +
> +	offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
> +
> +	/*
> +	 * Validate number of transmit descriptors.
> +	 * It must not exceed the hardware maximum and must be a multiple
> +	 * of NGBE_TXD_ALIGN.
> +	 */
> +	if (nb_desc % NGBE_TXD_ALIGN != 0 ||
> +	    nb_desc > NGBE_RING_DESC_MAX ||
> +	    nb_desc < NGBE_RING_DESC_MIN) {
> +		return -EINVAL;
> +	}
> +
> +	/*
> +	 * The TX descriptor ring will be cleaned after txq->tx_free_thresh
> +	 * descriptors are used or if the number of descriptors required
> +	 * to transmit a packet is greater than the number of free TX
> +	 * descriptors.
> +	 * One descriptor in the TX ring is used as a sentinel to avoid a
> +	 * H/W race condition, hence the maximum threshold constraints.
> +	 * When set to zero use default values.
> +	 */
> +	tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh) ?
> +			tx_conf->tx_free_thresh : DEFAULT_TX_FREE_THRESH);
> +	if (tx_free_thresh >= (nb_desc - 3)) {
> +		PMD_INIT_LOG(ERR, "tx_free_thresh must be less than the number of "
> +			     "TX descriptors minus 3. (tx_free_thresh=%u "
> +			     "port=%d queue=%d)",
> +			     (unsigned int)tx_free_thresh,
> +			     (int)dev->data->port_id, (int)queue_idx);
> +		return -(EINVAL);
> +	}
> +
> +	if ((nb_desc % tx_free_thresh) != 0) {

I guess the internal parentheses are not required here, i.e.
if (nb_desc % tx_free_thresh != 0) is enough.

> +		PMD_INIT_LOG(ERR, "tx_free_thresh must be a divisor of the "
> +			     "number of TX descriptors. (tx_free_thresh=%u "
> +			     "port=%d queue=%d)", (unsigned int)tx_free_thresh,
> +			     (int)dev->data->port_id, (int)queue_idx);
> +		return -(EINVAL);
> +	}
> +
> +	/* Free memory prior to re-allocation if needed... */
> +	if (dev->data->tx_queues[queue_idx] != NULL) {
> +		ngbe_tx_queue_release(dev->data->tx_queues[queue_idx]);
> +		dev->data->tx_queues[queue_idx] = NULL;
> +	}
> +
> +	/* First allocate the tx queue data structure */
> +	txq = rte_zmalloc_socket("ethdev TX queue",
> +				 sizeof(struct ngbe_tx_queue),
> +				 RTE_CACHE_LINE_SIZE, socket_id);
> +	if (txq == NULL)
> +		return -ENOMEM;
> +
> +	/*
> +	 * Allocate TX ring hardware descriptors. A memzone large enough to
> +	 * handle the maximum ring size is allocated in order to allow for
> +	 * resizing in later calls to the queue setup function.
> +	 */
> +	tz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx,
> +			sizeof(struct ngbe_tx_desc) * NGBE_RING_DESC_MAX,
> +			NGBE_ALIGN, socket_id);
> +	if (tz == NULL) {
> +		ngbe_tx_queue_release(txq);
> +		return -ENOMEM;
> +	}
> +
> +	txq->nb_tx_desc = nb_desc;
> +	txq->tx_free_thresh = tx_free_thresh;
> +	txq->pthresh = tx_conf->tx_thresh.pthresh;
> +	txq->hthresh = tx_conf->tx_thresh.hthresh;
> +	txq->wthresh = tx_conf->tx_thresh.wthresh;
> +	txq->queue_id = queue_idx;
> +	txq->reg_idx = queue_idx;
> +	txq->port_id = dev->data->port_id;
> +	txq->offloads = offloads;
> +	txq->ops = &def_txq_ops;
> +	txq->tx_deferred_start = tx_conf->tx_deferred_start;
> +
> +	txq->tdt_reg_addr = NGBE_REG_ADDR(hw, NGBE_TXWP(txq->reg_idx));
> +	txq->tdc_reg_addr = NGBE_REG_ADDR(hw, NGBE_TXCFG(txq->reg_idx));
> +
> +	txq->tx_ring_phys_addr = TMZ_PADDR(tz);
> +	txq->tx_ring = (struct ngbe_tx_desc *)TMZ_VADDR(tz);
> +
> +	/* Allocate software ring */
> +	txq->sw_ring = rte_zmalloc_socket("txq->sw_ring",
> +				sizeof(struct ngbe_tx_entry) * nb_desc,
> +				RTE_CACHE_LINE_SIZE, socket_id);
> +	if (txq->sw_ring == NULL) {
> +		ngbe_tx_queue_release(txq);
> +		return -ENOMEM;
> +	}
> +	PMD_INIT_LOG(DEBUG, "sw_ring=%p hw_ring=%p dma_addr=0x%" PRIx64,
> +		     txq->sw_ring, txq->tx_ring, txq->tx_ring_phys_addr);
> +
> +	txq->ops->reset(txq);
> +
> +	dev->data->tx_queues[queue_idx] = txq;
> +
> +	return 0;
> +}
> +
>   /**
>    * ngbe_free_sc_cluster - free the not-yet-completed scattered cluster
>    *
> diff --git a/drivers/net/ngbe/ngbe_rxtx.h b/drivers/net/ngbe/ngbe_rxtx.h
> index e1676a53b4..2db5cc3f2a 100644
> --- a/drivers/net/ngbe/ngbe_rxtx.h
> +++ b/drivers/net/ngbe/ngbe_rxtx.h
> @@ -43,6 +43,31 @@ struct ngbe_rx_desc {
>   	} qw1; /* also as r.hdr_addr */
>   };
>   
> +/*****************************************************************************
> + * Transmit Descriptor
> + *****************************************************************************/
> +/**
> + * Transmit Context Descriptor (NGBE_TXD_TYP=CTXT)
> + **/
> +struct ngbe_tx_ctx_desc {
> +	__le32 dw0; /* w.vlan_macip_lens  */

rte_* types should be used
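
I.e. the rte_le32_t typedef from rte_byteorder.h:

	struct ngbe_tx_ctx_desc {
		rte_le32_t dw0; /* w.vlan_macip_lens  */
		rte_le32_t dw1; /* w.seqnum_seed      */
		rte_le32_t dw2; /* w.type_tucmd_mlhl  */
		rte_le32_t dw3; /* w.mss_l4len_idx    */
	};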

> +	__le32 dw1; /* w.seqnum_seed      */
> +	__le32 dw2; /* w.type_tucmd_mlhl  */
> +	__le32 dw3; /* w.mss_l4len_idx    */
> +};
> +
> +/* @ngbe_tx_ctx_desc.dw3 */
> +#define NGBE_TXD_DD               MS(0, 0x1) /* descriptor done */
> +
> +/**
> + * Transmit Data Descriptor (NGBE_TXD_TYP=DATA)
> + **/
> +struct ngbe_tx_desc {
> +	__le64 qw0; /* r.buffer_addr ,  w.reserved    */

rte_le* types should be used
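
Same here, i.e.:

	struct ngbe_tx_desc {
		rte_le64_t qw0; /* r.buffer_addr ,  w.reserved    */
		rte_le32_t dw2; /* r.cmd_type_len,  w.nxtseq_seed */
		rte_le32_t dw3; /* r.olinfo_status, w.status      */
	};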

> +	__le32 dw2; /* r.cmd_type_len,  w.nxtseq_seed */
> +	__le32 dw3; /* r.olinfo_status, w.status      */
> +};
> +
>   #define RTE_PMD_NGBE_RX_MAX_BURST 32
>   
>   #define RX_RING_SZ ((NGBE_RING_DESC_MAX + RTE_PMD_NGBE_RX_MAX_BURST) * \
> @@ -62,6 +87,15 @@ struct ngbe_scattered_rx_entry {
>   	struct rte_mbuf *fbuf; /**< First segment of the fragmented packet. */
>   };
>   
> +/**
> + * Structure associated with each descriptor of the TX ring of a TX queue.
> + */
> +struct ngbe_tx_entry {
> +	struct rte_mbuf *mbuf; /**< mbuf associated with TX desc, if any. */
> +	uint16_t next_id; /**< Index of next descriptor in ring. */
> +	uint16_t last_id; /**< Index of last scattered descriptor. */
> +};
> +
>   /**
>    * Structure associated with each RX queue.
>    */
> @@ -98,6 +132,63 @@ struct ngbe_rx_queue {
>   	struct rte_mbuf *rx_stage[RTE_PMD_NGBE_RX_MAX_BURST * 2];
>   };
>   
> +/**
> + * NGBE CTX Constants
> + */
> +enum ngbe_ctx_num {
> +	NGBE_CTX_0    = 0, /**< CTX0 */
> +	NGBE_CTX_1    = 1, /**< CTX1  */
> +	NGBE_CTX_NUM  = 2, /**< CTX NUMBER  */
> +};
> +
> +/**
> + * Structure to check if a new context needs to be built
> + */
> +struct ngbe_ctx_info {
> +	uint64_t flags;           /**< ol_flags for context build. */
> +};
> +
> +/**
> + * Structure associated with each TX queue.
> + */
> +struct ngbe_tx_queue {
> +	/** TX ring virtual address. */
> +	volatile struct ngbe_tx_desc *tx_ring;
> +	uint64_t            tx_ring_phys_addr; /**< TX ring DMA address. */
> +	struct ngbe_tx_entry *sw_ring; /**< address of SW ring for scalar PMD.*/
> +	volatile uint32_t   *tdt_reg_addr; /**< Address of TDT register. */
> +	volatile uint32_t   *tdc_reg_addr; /**< Address of TDC register. */
> +	uint16_t            nb_tx_desc;    /**< number of TX descriptors. */
> +	uint16_t            tx_tail;       /**< current value of TDT reg. */
> +	/**< Start freeing TX buffers if there are less free descriptors than
> +	 *   this value.
> +	 */
> +	uint16_t            tx_free_thresh;
> +	/** Index to last TX descriptor to have been cleaned. */
> +	uint16_t            last_desc_cleaned;
> +	/** Total number of TX descriptors ready to be allocated. */
> +	uint16_t            nb_tx_free;
> +	uint16_t            tx_next_dd;    /**< next desc to scan for DD bit */
> +	uint16_t            queue_id;      /**< TX queue index. */
> +	uint16_t            reg_idx;       /**< TX queue register index. */
> +	uint16_t            port_id;       /**< Device port identifier. */
> +	uint8_t             pthresh;       /**< Prefetch threshold register. */
> +	uint8_t             hthresh;       /**< Host threshold register. */
> +	uint8_t             wthresh;       /**< Write-back threshold reg. */
> +	uint64_t            offloads; /* Tx offload flags of DEV_TX_OFFLOAD_* */
> +	uint32_t            ctx_curr;      /**< Hardware context states. */
> +	/** Hardware context0 history. */
> +	struct ngbe_ctx_info ctx_cache[NGBE_CTX_NUM];
> +	const struct ngbe_txq_ops *ops;       /**< txq ops */
> +	uint8_t             tx_deferred_start; /**< not in global dev start. */
> +};
> +
> +struct ngbe_txq_ops {
> +	void (*release_mbufs)(struct ngbe_tx_queue *txq);
> +	void (*free_swring)(struct ngbe_tx_queue *txq);
> +	void (*reset)(struct ngbe_tx_queue *txq);
> +};
> +
>   uint64_t ngbe_get_tx_port_offloads(struct rte_eth_dev *dev);
>   uint64_t ngbe_get_rx_queue_offloads(struct rte_eth_dev *dev);
>   uint64_t ngbe_get_rx_port_offloads(struct rte_eth_dev *dev);
> 


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [dpdk-dev] [PATCH v5 17/24] net/ngbe: add Rx and Tx init
  2021-06-02  9:41 ` [dpdk-dev] [PATCH v5 17/24] net/ngbe: add Rx and Tx init Jiawen Wu
@ 2021-06-14 19:01   ` Andrew Rybchenko
  0 siblings, 0 replies; 51+ messages in thread
From: Andrew Rybchenko @ 2021-06-14 19:01 UTC (permalink / raw)
  To: Jiawen Wu, dev

On 6/2/21 12:41 PM, Jiawen Wu wrote:
> Initialize the receive unit and the transmit unit.
> 
> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>

The patch is dead code since the added functions are not used.

> ---
>   doc/guides/nics/features/ngbe.ini |   6 +
>   doc/guides/nics/ngbe.rst          |   2 +
>   drivers/net/ngbe/ngbe_ethdev.h    |   5 +
>   drivers/net/ngbe/ngbe_rxtx.c      | 187 ++++++++++++++++++++++++++++++
>   4 files changed, 200 insertions(+)
> 
> diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini
> index 291a542a42..abde1e2a67 100644
> --- a/doc/guides/nics/features/ngbe.ini
> +++ b/doc/guides/nics/features/ngbe.ini
> @@ -7,6 +7,12 @@
>   Speed capabilities   = Y
>   Link status          = Y
>   Link status event    = Y
> +Jumbo frame          = Y
> +Scattered Rx         = Y
> +CRC offload          = P
> +VLAN offload         = P
> +L3 checksum offload  = P
> +L4 checksum offload  = P
>   Multiprocess aware   = Y
>   Linux                = Y
>   ARMv8                = Y
> diff --git a/doc/guides/nics/ngbe.rst b/doc/guides/nics/ngbe.rst
> index de2ef65664..e56baf26b4 100644
> --- a/doc/guides/nics/ngbe.rst
> +++ b/doc/guides/nics/ngbe.rst
> @@ -10,6 +10,8 @@ for Wangxun 1 Gigabit Ethernet NICs.
>   Features
>   --------
>   
> +- Checksum offload
> +- Jumbo frames
>   - Link state information
>   
>   Prerequisites
> diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
> index f52d813a47..a9482f3001 100644
> --- a/drivers/net/ngbe/ngbe_ethdev.h
> +++ b/drivers/net/ngbe/ngbe_ethdev.h
> @@ -13,6 +13,7 @@
>   #define NGBE_FLAG_MACSEC           (uint32_t)(1 << 3)
>   #define NGBE_FLAG_NEED_LINK_CONFIG (uint32_t)(1 << 4)
>   
> +#define NGBE_VLAN_TAG_SIZE 4
>   #define NGBE_HKEY_MAX_INDEX 10
>   
>   #define NGBE_RSS_OFFLOAD_ALL ( \
> @@ -68,6 +69,10 @@ int  ngbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
>   		uint16_t nb_tx_desc, unsigned int socket_id,
>   		const struct rte_eth_txconf *tx_conf);
>   
> +int ngbe_dev_rx_init(struct rte_eth_dev *dev);
> +
> +void ngbe_dev_tx_init(struct rte_eth_dev *dev);
> +
>   int
>   ngbe_dev_link_update_share(struct rte_eth_dev *dev,
>   		int wait_to_complete);
> diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c
> index 2d8db3245f..68d7e651af 100644
> --- a/drivers/net/ngbe/ngbe_rxtx.c
> +++ b/drivers/net/ngbe/ngbe_rxtx.c
> @@ -582,3 +582,190 @@ ngbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
>   	return 0;
>   }
>   
> +/*
> + * Initializes Receive Unit.
> + */
> +int __rte_cold
> +ngbe_dev_rx_init(struct rte_eth_dev *dev)
> +{
> +	struct ngbe_hw *hw;
> +	struct ngbe_rx_queue *rxq;
> +	uint64_t bus_addr;
> +	uint32_t fctrl;
> +	uint32_t hlreg0;
> +	uint32_t srrctl;
> +	uint32_t rdrxctl;
> +	uint32_t rxcsum;
> +	uint16_t buf_size;
> +	uint16_t i;
> +	struct rte_eth_rxmode *rx_conf = &dev->data->dev_conf.rxmode;
> +
> +	PMD_INIT_FUNC_TRACE();
> +	hw = NGBE_DEV_HW(dev);
> +
> +	/*
> +	 * Make sure receives are disabled while setting
> +	 * up the RX context (registers, descriptor rings, etc.).
> +	 */
> +	wr32m(hw, NGBE_MACRXCFG, NGBE_MACRXCFG_ENA, 0);
> +	wr32m(hw, NGBE_PBRXCTL, NGBE_PBRXCTL_ENA, 0);
> +
> +	/* Enable receipt of broadcasted frames */
> +	fctrl = rd32(hw, NGBE_PSRCTL);
> +	fctrl |= NGBE_PSRCTL_BCA;
> +	wr32(hw, NGBE_PSRCTL, fctrl);
> +
> +	/*
> +	 * Configure CRC stripping, if any.
> +	 */
> +	hlreg0 = rd32(hw, NGBE_SECRXCTL);
> +	if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
> +		hlreg0 &= ~NGBE_SECRXCTL_CRCSTRIP;
> +	else
> +		hlreg0 |= NGBE_SECRXCTL_CRCSTRIP;
> +	hlreg0 &= ~NGBE_SECRXCTL_XDSA;
> +	wr32(hw, NGBE_SECRXCTL, hlreg0);
> +
> +	/*
> +	 * Configure jumbo frame support, if any.
> +	 */
> +	if (rx_conf->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
> +		wr32m(hw, NGBE_FRMSZ, NGBE_FRMSZ_MAX_MASK,
> +			NGBE_FRMSZ_MAX(rx_conf->max_rx_pkt_len));
> +	} else {
> +		wr32m(hw, NGBE_FRMSZ, NGBE_FRMSZ_MAX_MASK,
> +			NGBE_FRMSZ_MAX(NGBE_FRAME_SIZE_DFT));
> +	}
> +
> +	/*
> +	 * If loopback mode is configured, set LPBK bit.
> +	 */
> +	hlreg0 = rd32(hw, NGBE_PSRCTL);
> +	if (hw->is_pf && dev->data->dev_conf.lpbk_mode)
> +		hlreg0 |= NGBE_PSRCTL_LBENA;
> +	else
> +		hlreg0 &= ~NGBE_PSRCTL_LBENA;
> +
> +	wr32(hw, NGBE_PSRCTL, hlreg0);
> +
> +	/*
> +	 * Assume no header split and no VLAN strip support
> +	 * on any Rx queue first.
> +	 */
> +	rx_conf->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
> +
> +	/* Setup RX queues */
> +	for (i = 0; i < dev->data->nb_rx_queues; i++) {
> +		rxq = dev->data->rx_queues[i];
> +
> +		/*
> +		 * Reset crc_len in case it was changed after queue setup by a
> +		 * call to configure.
> +		 */
> +		if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
> +			rxq->crc_len = RTE_ETHER_CRC_LEN;
> +		else
> +			rxq->crc_len = 0;
> +
> +		/* Setup the Base and Length of the Rx Descriptor Rings */
> +		bus_addr = rxq->rx_ring_phys_addr;
> +		wr32(hw, NGBE_RXBAL(rxq->reg_idx),
> +				(uint32_t)(bus_addr & BIT_MASK32));
> +		wr32(hw, NGBE_RXBAH(rxq->reg_idx),
> +				(uint32_t)(bus_addr >> 32));
> +		wr32(hw, NGBE_RXRP(rxq->reg_idx), 0);
> +		wr32(hw, NGBE_RXWP(rxq->reg_idx), 0);
> +
> +		srrctl = NGBE_RXCFG_RNGLEN(rxq->nb_rx_desc);
> +
> +		/* Set if packets are dropped when no descriptors available */
> +		if (rxq->drop_en)
> +			srrctl |= NGBE_RXCFG_DROP;
> +
> +		/*
> +		 * Configure the RX buffer size in the PKTLEN field of
> +		 * the RXCFG register of the queue.
> +		 * The value is in 1 KB resolution. Valid values can be from
> +		 * 1 KB to 16 KB.
> +		 */
> +		buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mb_pool) -
> +			RTE_PKTMBUF_HEADROOM);
> +		buf_size = ROUND_DOWN(buf_size, 0x1 << 10);
> +		srrctl |= NGBE_RXCFG_PKTLEN(buf_size);
> +
> +		wr32(hw, NGBE_RXCFG(rxq->reg_idx), srrctl);
> +
> +		/* Account for dual VLAN tags when checking the buffer size */
> +		if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
> +					    2 * NGBE_VLAN_TAG_SIZE > buf_size)
> +			dev->data->scattered_rx = 1;
> +		if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
> +			rx_conf->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
> +	}
> +
> +	if (rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER)
> +		dev->data->scattered_rx = 1;
> +
> +	/*
> +	 * Setup the Checksum Register.
> +	 * Disable Full-Packet Checksum which is mutually exclusive with RSS.
> +	 * Enable IP/L4 checksum computation by hardware if requested to do so.
> +	 */
> +	rxcsum = rd32(hw, NGBE_PSRCTL);
> +	rxcsum |= NGBE_PSRCTL_PCSD;
> +	if (rx_conf->offloads & DEV_RX_OFFLOAD_CHECKSUM)
> +		rxcsum |= NGBE_PSRCTL_L4CSUM;
> +	else
> +		rxcsum &= ~NGBE_PSRCTL_L4CSUM;
> +
> +	wr32(hw, NGBE_PSRCTL, rxcsum);
> +
> +	if (hw->is_pf) {
> +		rdrxctl = rd32(hw, NGBE_SECRXCTL);
> +		if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
> +			rdrxctl &= ~NGBE_SECRXCTL_CRCSTRIP;
> +		else
> +			rdrxctl |= NGBE_SECRXCTL_CRCSTRIP;
> +		wr32(hw, NGBE_SECRXCTL, rdrxctl);
> +	}
> +
> +	return 0;
> +}
> +
> +/*
> + * Initializes Transmit Unit.
> + */
> +void __rte_cold
> +ngbe_dev_tx_init(struct rte_eth_dev *dev)
> +{
> +	struct ngbe_hw     *hw;
> +	struct ngbe_tx_queue *txq;
> +	uint64_t bus_addr;
> +	uint16_t i;
> +
> +	PMD_INIT_FUNC_TRACE();
> +	hw = NGBE_DEV_HW(dev);
> +
> +	/* Enable TX CRC (checksum offload requirement) and hw padding
> +	 * (TSO requirement)
> +	 */
> +	wr32m(hw, NGBE_SECTXCTL, NGBE_SECTXCTL_ODSA, NGBE_SECTXCTL_ODSA);
> +	wr32m(hw, NGBE_SECTXCTL, NGBE_SECTXCTL_XDSA, 0);
> +
> +	/* Setup the Base and Length of the Tx Descriptor Rings */
> +	for (i = 0; i < dev->data->nb_tx_queues; i++) {
> +		txq = dev->data->tx_queues[i];
> +
> +		bus_addr = txq->tx_ring_phys_addr;
> +		wr32(hw, NGBE_TXBAL(txq->reg_idx),
> +				(uint32_t)(bus_addr & BIT_MASK32));
> +		wr32(hw, NGBE_TXBAH(txq->reg_idx),
> +				(uint32_t)(bus_addr >> 32));
> +		wr32m(hw, NGBE_TXCFG(txq->reg_idx), NGBE_TXCFG_BUFLEN_MASK,
> +			NGBE_TXCFG_BUFLEN(txq->nb_tx_desc));
> +		/* Setup the HW Tx Head and TX Tail descriptor pointers */
> +		wr32(hw, NGBE_TXRP(txq->reg_idx), 0);
> +		wr32(hw, NGBE_TXWP(txq->reg_idx), 0);
> +	}
> +}
> +
> 


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [dpdk-dev] [PATCH v5 18/24] net/ngbe: add packet type
  2021-06-02  9:41 ` [dpdk-dev] [PATCH v5 18/24] net/ngbe: add packet type Jiawen Wu
@ 2021-06-14 19:06   ` Andrew Rybchenko
  0 siblings, 0 replies; 51+ messages in thread
From: Andrew Rybchenko @ 2021-06-14 19:06 UTC (permalink / raw)
  To: Jiawen Wu, dev

On 6/2/21 12:41 PM, Jiawen Wu wrote:
> Add packet type macro definitions and convert ptype to ptid.

What about the eth_dev_ptypes_set_t callback?
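
I.e. something along these lines (a sketch only, how the mask is
honoured in the Rx path is up to the driver):

	static int
	ngbe_dev_ptypes_set(struct rte_eth_dev *dev, uint32_t ptype_mask)
	{
		/* Sketch: remember the requested mask so the Rx path can
		 * skip ptype resolution for layers the application has
		 * disabled via rte_eth_dev_set_ptypes().
		 */
		RTE_SET_USED(dev);
		RTE_SET_USED(ptype_mask);
		return 0;
	}

plus a .dev_ptypes_set = ngbe_dev_ptypes_set entry in ngbe_eth_dev_ops.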

> 
> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
> ---
>   doc/guides/nics/features/ngbe.ini |   1 +
>   doc/guides/nics/ngbe.rst          |   1 +
>   drivers/net/ngbe/meson.build      |   1 +
>   drivers/net/ngbe/ngbe_ethdev.c    |   8 +
>   drivers/net/ngbe/ngbe_ethdev.h    |   4 +
>   drivers/net/ngbe/ngbe_ptypes.c    | 640 ++++++++++++++++++++++++++++++
>   drivers/net/ngbe/ngbe_ptypes.h    | 351 ++++++++++++++++
>   drivers/net/ngbe/ngbe_rxtx.h      |   1 -
>   8 files changed, 1006 insertions(+), 1 deletion(-)
>   create mode 100644 drivers/net/ngbe/ngbe_ptypes.c
>   create mode 100644 drivers/net/ngbe/ngbe_ptypes.h
> 
> diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini
> index abde1e2a67..e24d8d0b55 100644
> --- a/doc/guides/nics/features/ngbe.ini
> +++ b/doc/guides/nics/features/ngbe.ini
> @@ -13,6 +13,7 @@ CRC offload          = P
>   VLAN offload         = P
>   L3 checksum offload  = P
>   L4 checksum offload  = P
> +Packet type parsing  = Y
>   Multiprocess aware   = Y
>   Linux                = Y
>   ARMv8                = Y
> diff --git a/doc/guides/nics/ngbe.rst b/doc/guides/nics/ngbe.rst
> index e56baf26b4..04fa3e90a8 100644
> --- a/doc/guides/nics/ngbe.rst
> +++ b/doc/guides/nics/ngbe.rst
> @@ -10,6 +10,7 @@ for Wangxun 1 Gigabit Ethernet NICs.
>   Features
>   --------
>   
> +- Packet type information
>   - Checksum offload
>   - Jumbo frames
>   - Link state information
> diff --git a/drivers/net/ngbe/meson.build b/drivers/net/ngbe/meson.build
> index 9e75b82f1c..fd571399b3 100644
> --- a/drivers/net/ngbe/meson.build
> +++ b/drivers/net/ngbe/meson.build
> @@ -12,6 +12,7 @@ objs = [base_objs]
>   
>   sources = files(
>   	'ngbe_ethdev.c',
> +	'ngbe_ptypes.c',
>   	'ngbe_rxtx.c',
>   )
>   
> diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
> index 2f8ac48f33..672db88133 100644
> --- a/drivers/net/ngbe/ngbe_ethdev.c
> +++ b/drivers/net/ngbe/ngbe_ethdev.c
> @@ -354,6 +354,13 @@ ngbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
>   	return 0;
>   }
>   
> +const uint32_t *
> +ngbe_dev_supported_ptypes_get(struct rte_eth_dev *dev)
> +{
> +	RTE_SET_USED(dev);
> +	return ngbe_get_supported_ptypes();
> +}
> +
>   /* return 0 means link status changed, -1 means not changed */
>   int
>   ngbe_dev_link_update_share(struct rte_eth_dev *dev,
> @@ -661,6 +668,7 @@ static const struct eth_dev_ops ngbe_eth_dev_ops = {
>   	.dev_configure              = ngbe_dev_configure,
>   	.dev_infos_get              = ngbe_dev_info_get,
>   	.link_update                = ngbe_dev_link_update,
> +	.dev_supported_ptypes_get   = ngbe_dev_supported_ptypes_get,
>   	.rx_queue_setup             = ngbe_dev_rx_queue_setup,
>   	.rx_queue_release           = ngbe_dev_rx_queue_release,
>   	.tx_queue_setup             = ngbe_dev_tx_queue_setup,
> diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
> index a9482f3001..6881351252 100644
> --- a/drivers/net/ngbe/ngbe_ethdev.h
> +++ b/drivers/net/ngbe/ngbe_ethdev.h
> @@ -6,6 +6,8 @@
>   #ifndef _NGBE_ETHDEV_H_
>   #define _NGBE_ETHDEV_H_
>   
> +#include "ngbe_ptypes.h"
> +
>   /* need update link, bit flag */
>   #define NGBE_FLAG_NEED_LINK_UPDATE (uint32_t)(1 << 0)
>   #define NGBE_FLAG_MAILBOX          (uint32_t)(1 << 1)
> @@ -94,4 +96,6 @@ ngbe_dev_link_update_share(struct rte_eth_dev *dev,
>   #define NGBE_DEFAULT_TX_HTHRESH      0
>   #define NGBE_DEFAULT_TX_WTHRESH      0
>   
> +const uint32_t *ngbe_dev_supported_ptypes_get(struct rte_eth_dev *dev);
> +
>   #endif /* _NGBE_ETHDEV_H_ */
> diff --git a/drivers/net/ngbe/ngbe_ptypes.c b/drivers/net/ngbe/ngbe_ptypes.c
> new file mode 100644
> index 0000000000..4b6cd374f6
> --- /dev/null
> +++ b/drivers/net/ngbe/ngbe_ptypes.c
> @@ -0,0 +1,640 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
> + */
> +
> +#include <rte_mbuf.h>
> +#include <rte_memory.h>
> +
> +#include "base/ngbe_type.h"
> +#include "ngbe_ptypes.h"
> +
> +/* The ngbe_ptype_lookup is used to convert from the 8-bit ptid in the
> + * hardware to a bit-field that can be used by SW to more easily determine the
> + * packet type.
> + *
> + * Macros are used to shorten the table lines and make this table human
> + * readable.
> + *
> + * We store the PTYPE in the top byte of the bit field - this is just so that
> + * we can check that the table doesn't have a row missing, as the index into
> + * the table should be the PTYPE.
> + */
> +#define TPTE(ptid, l2, l3, l4, tun, el2, el3, el4) \
> +	[ptid] = (RTE_PTYPE_L2_##l2 | \
> +		RTE_PTYPE_L3_##l3 | \
> +		RTE_PTYPE_L4_##l4 | \
> +		RTE_PTYPE_TUNNEL_##tun | \
> +		RTE_PTYPE_INNER_L2_##el2 | \
> +		RTE_PTYPE_INNER_L3_##el3 | \
> +		RTE_PTYPE_INNER_L4_##el4)
> +
> +#define RTE_PTYPE_L2_NONE               0
> +#define RTE_PTYPE_L3_NONE               0
> +#define RTE_PTYPE_L4_NONE               0
> +#define RTE_PTYPE_TUNNEL_NONE           0
> +#define RTE_PTYPE_INNER_L2_NONE         0
> +#define RTE_PTYPE_INNER_L3_NONE         0
> +#define RTE_PTYPE_INNER_L4_NONE         0
> +
> +static u32 ngbe_ptype_lookup[NGBE_PTID_MAX] __rte_cache_aligned = {
> +	/* L2:0-3 L3:4-7 L4:8-11 TUN:12-15 EL2:16-19 EL3:20-23 EL4:24-27 */
> +	/* L2: ETH */
> +	TPTE(0x10, ETHER,          NONE, NONE, NONE, NONE, NONE, NONE),
> +	TPTE(0x11, ETHER,          NONE, NONE, NONE, NONE, NONE, NONE),
> +	TPTE(0x12, ETHER_TIMESYNC, NONE, NONE, NONE, NONE, NONE, NONE),
> +	TPTE(0x13, ETHER_FIP,      NONE, NONE, NONE, NONE, NONE, NONE),
> +	TPTE(0x14, ETHER_LLDP,     NONE, NONE, NONE, NONE, NONE, NONE),
> +	TPTE(0x15, ETHER_CNM,      NONE, NONE, NONE, NONE, NONE, NONE),
> +	TPTE(0x16, ETHER_EAPOL,    NONE, NONE, NONE, NONE, NONE, NONE),
> +	TPTE(0x17, ETHER_ARP,      NONE, NONE, NONE, NONE, NONE, NONE),
> +	/* L2: Ethertype Filter */
> +	TPTE(0x18, ETHER_FILTER,   NONE, NONE, NONE, NONE, NONE, NONE),
> +	TPTE(0x19, ETHER_FILTER,   NONE, NONE, NONE, NONE, NONE, NONE),
> +	TPTE(0x1A, ETHER_FILTER,   NONE, NONE, NONE, NONE, NONE, NONE),
> +	TPTE(0x1B, ETHER_FILTER,   NONE, NONE, NONE, NONE, NONE, NONE),
> +	TPTE(0x1C, ETHER_FILTER,   NONE, NONE, NONE, NONE, NONE, NONE),
> +	TPTE(0x1D, ETHER_FILTER,   NONE, NONE, NONE, NONE, NONE, NONE),
> +	TPTE(0x1E, ETHER_FILTER,   NONE, NONE, NONE, NONE, NONE, NONE),
> +	TPTE(0x1F, ETHER_FILTER,   NONE, NONE, NONE, NONE, NONE, NONE),
> +	/* L3: IP */
> +	TPTE(0x20, ETHER, IPV4, NONFRAG, NONE, NONE, NONE, NONE),
> +	TPTE(0x21, ETHER, IPV4, FRAG,    NONE, NONE, NONE, NONE),
> +	TPTE(0x22, ETHER, IPV4, NONFRAG, NONE, NONE, NONE, NONE),
> +	TPTE(0x23, ETHER, IPV4, UDP,     NONE, NONE, NONE, NONE),
> +	TPTE(0x24, ETHER, IPV4, TCP,     NONE, NONE, NONE, NONE),
> +	TPTE(0x25, ETHER, IPV4, SCTP,    NONE, NONE, NONE, NONE),
> +	TPTE(0x29, ETHER, IPV6, FRAG,    NONE, NONE, NONE, NONE),
> +	TPTE(0x2A, ETHER, IPV6, NONFRAG, NONE, NONE, NONE, NONE),
> +	TPTE(0x2B, ETHER, IPV6, UDP,     NONE, NONE, NONE, NONE),
> +	TPTE(0x2C, ETHER, IPV6, TCP,     NONE, NONE, NONE, NONE),
> +	TPTE(0x2D, ETHER, IPV6, SCTP,    NONE, NONE, NONE, NONE),
> +	/* IPv4 -> IPv4/IPv6 */
> +	TPTE(0x81, ETHER, IPV4, NONE, IP, NONE, IPV4, FRAG),
> +	TPTE(0x82, ETHER, IPV4, NONE, IP, NONE, IPV4, NONFRAG),
> +	TPTE(0x83, ETHER, IPV4, NONE, IP, NONE, IPV4, UDP),
> +	TPTE(0x84, ETHER, IPV4, NONE, IP, NONE, IPV4, TCP),
> +	TPTE(0x85, ETHER, IPV4, NONE, IP, NONE, IPV4, SCTP),
> +	TPTE(0x89, ETHER, IPV4, NONE, IP, NONE, IPV6, FRAG),
> +	TPTE(0x8A, ETHER, IPV4, NONE, IP, NONE, IPV6, NONFRAG),
> +	TPTE(0x8B, ETHER, IPV4, NONE, IP, NONE, IPV6, UDP),
> +	TPTE(0x8C, ETHER, IPV4, NONE, IP, NONE, IPV6, TCP),
> +	TPTE(0x8D, ETHER, IPV4, NONE, IP, NONE, IPV6, SCTP),
> +	/* IPv4 -> GRE/Teredo/VXLAN -> NONE/IPv4/IPv6 */
> +	TPTE(0x90, ETHER, IPV4, NONE, VXLAN_GPE, NONE, NONE, NONE),
> +	TPTE(0x91, ETHER, IPV4, NONE, VXLAN_GPE, NONE, IPV4, FRAG),
> +	TPTE(0x92, ETHER, IPV4, NONE, VXLAN_GPE, NONE, IPV4, NONFRAG),
> +	TPTE(0x93, ETHER, IPV4, NONE, VXLAN_GPE, NONE, IPV4, UDP),
> +	TPTE(0x94, ETHER, IPV4, NONE, VXLAN_GPE, NONE, IPV4, TCP),
> +	TPTE(0x95, ETHER, IPV4, NONE, VXLAN_GPE, NONE, IPV4, SCTP),
> +	TPTE(0x99, ETHER, IPV4, NONE, VXLAN_GPE, NONE, IPV6, FRAG),
> +	TPTE(0x9A, ETHER, IPV4, NONE, VXLAN_GPE, NONE, IPV6, NONFRAG),
> +	TPTE(0x9B, ETHER, IPV4, NONE, VXLAN_GPE, NONE, IPV6, UDP),
> +	TPTE(0x9C, ETHER, IPV4, NONE, VXLAN_GPE, NONE, IPV6, TCP),
> +	TPTE(0x9D, ETHER, IPV4, NONE, VXLAN_GPE, NONE, IPV6, SCTP),
> +	/* IPv4 -> GRE/Teredo/VXLAN -> MAC -> NONE/IPv4/IPv6 */
> +	TPTE(0xA0, ETHER, IPV4, NONE, GRENAT, ETHER, NONE,  NONE),
> +	TPTE(0xA1, ETHER, IPV4, NONE, GRENAT, ETHER, IPV4, FRAG),
> +	TPTE(0xA2, ETHER, IPV4, NONE, GRENAT, ETHER, IPV4, NONFRAG),
> +	TPTE(0xA3, ETHER, IPV4, NONE, GRENAT, ETHER, IPV4, UDP),
> +	TPTE(0xA4, ETHER, IPV4, NONE, GRENAT, ETHER, IPV4, TCP),
> +	TPTE(0xA5, ETHER, IPV4, NONE, GRENAT, ETHER, IPV4, SCTP),
> +	TPTE(0xA9, ETHER, IPV4, NONE, GRENAT, ETHER, IPV6, FRAG),
> +	TPTE(0xAA, ETHER, IPV4, NONE, GRENAT, ETHER, IPV6, NONFRAG),
> +	TPTE(0xAB, ETHER, IPV4, NONE, GRENAT, ETHER, IPV6, UDP),
> +	TPTE(0xAC, ETHER, IPV4, NONE, GRENAT, ETHER, IPV6, TCP),
> +	TPTE(0xAD, ETHER, IPV4, NONE, GRENAT, ETHER, IPV6, SCTP),
> +	/* IPv4 -> GRE/Teredo/VXLAN -> MAC+VLAN -> NONE/IPv4/IPv6 */
> +	TPTE(0xB0, ETHER, IPV4, NONE, GRENAT, ETHER_VLAN, NONE,  NONE),
> +	TPTE(0xB1, ETHER, IPV4, NONE, GRENAT, ETHER_VLAN, IPV4, FRAG),
> +	TPTE(0xB2, ETHER, IPV4, NONE, GRENAT, ETHER_VLAN, IPV4, NONFRAG),
> +	TPTE(0xB3, ETHER, IPV4, NONE, GRENAT, ETHER_VLAN, IPV4, UDP),
> +	TPTE(0xB4, ETHER, IPV4, NONE, GRENAT, ETHER_VLAN, IPV4, TCP),
> +	TPTE(0xB5, ETHER, IPV4, NONE, GRENAT, ETHER_VLAN, IPV4, SCTP),
> +	TPTE(0xB9, ETHER, IPV4, NONE, GRENAT, ETHER_VLAN, IPV6, FRAG),
> +	TPTE(0xBA, ETHER, IPV4, NONE, GRENAT, ETHER_VLAN, IPV6, NONFRAG),
> +	TPTE(0xBB, ETHER, IPV4, NONE, GRENAT, ETHER_VLAN, IPV6, UDP),
> +	TPTE(0xBC, ETHER, IPV4, NONE, GRENAT, ETHER_VLAN, IPV6, TCP),
> +	TPTE(0xBD, ETHER, IPV4, NONE, GRENAT, ETHER_VLAN, IPV6, SCTP),
> +	/* IPv6 -> IPv4/IPv6 */
> +	TPTE(0xC1, ETHER, IPV6, NONE, IP, NONE, IPV4, FRAG),
> +	TPTE(0xC2, ETHER, IPV6, NONE, IP, NONE, IPV4, NONFRAG),
> +	TPTE(0xC3, ETHER, IPV6, NONE, IP, NONE, IPV4, UDP),
> +	TPTE(0xC4, ETHER, IPV6, NONE, IP, NONE, IPV4, TCP),
> +	TPTE(0xC5, ETHER, IPV6, NONE, IP, NONE, IPV4, SCTP),
> +	TPTE(0xC9, ETHER, IPV6, NONE, IP, NONE, IPV6, FRAG),
> +	TPTE(0xCA, ETHER, IPV6, NONE, IP, NONE, IPV6, NONFRAG),
> +	TPTE(0xCB, ETHER, IPV6, NONE, IP, NONE, IPV6, UDP),
> +	TPTE(0xCC, ETHER, IPV6, NONE, IP, NONE, IPV6, TCP),
> +	TPTE(0xCD, ETHER, IPV6, NONE, IP, NONE, IPV6, SCTP),
> +	/* IPv6 -> GRE/Teredo/VXLAN -> NONE/IPv4/IPv6 */
> +	TPTE(0xD0, ETHER, IPV6, NONE, GRENAT, NONE, NONE,  NONE),
> +	TPTE(0xD1, ETHER, IPV6, NONE, GRENAT, NONE, IPV4, FRAG),
> +	TPTE(0xD2, ETHER, IPV6, NONE, GRENAT, NONE, IPV4, NONFRAG),
> +	TPTE(0xD3, ETHER, IPV6, NONE, GRENAT, NONE, IPV4, UDP),
> +	TPTE(0xD4, ETHER, IPV6, NONE, GRENAT, NONE, IPV4, TCP),
> +	TPTE(0xD5, ETHER, IPV6, NONE, GRENAT, NONE, IPV4, SCTP),
> +	TPTE(0xD9, ETHER, IPV6, NONE, GRENAT, NONE, IPV6, FRAG),
> +	TPTE(0xDA, ETHER, IPV6, NONE, GRENAT, NONE, IPV6, NONFRAG),
> +	TPTE(0xDB, ETHER, IPV6, NONE, GRENAT, NONE, IPV6, UDP),
> +	TPTE(0xDC, ETHER, IPV6, NONE, GRENAT, NONE, IPV6, TCP),
> +	TPTE(0xDD, ETHER, IPV6, NONE, GRENAT, NONE, IPV6, SCTP),
> +	/* IPv6 -> GRE/Teredo/VXLAN -> MAC -> NONE/IPv4/IPv6 */
> +	TPTE(0xE0, ETHER, IPV6, NONE, GRENAT, ETHER, NONE,  NONE),
> +	TPTE(0xE1, ETHER, IPV6, NONE, GRENAT, ETHER, IPV4, FRAG),
> +	TPTE(0xE2, ETHER, IPV6, NONE, GRENAT, ETHER, IPV4, NONFRAG),
> +	TPTE(0xE3, ETHER, IPV6, NONE, GRENAT, ETHER, IPV4, UDP),
> +	TPTE(0xE4, ETHER, IPV6, NONE, GRENAT, ETHER, IPV4, TCP),
> +	TPTE(0xE5, ETHER, IPV6, NONE, GRENAT, ETHER, IPV4, SCTP),
> +	TPTE(0xE9, ETHER, IPV6, NONE, GRENAT, ETHER, IPV6, FRAG),
> +	TPTE(0xEA, ETHER, IPV6, NONE, GRENAT, ETHER, IPV6, NONFRAG),
> +	TPTE(0xEB, ETHER, IPV6, NONE, GRENAT, ETHER, IPV6, UDP),
> +	TPTE(0xEC, ETHER, IPV6, NONE, GRENAT, ETHER, IPV6, TCP),
> +	TPTE(0xED, ETHER, IPV6, NONE, GRENAT, ETHER, IPV6, SCTP),
> +	/* IPv6 -> GRE/Teredo/VXLAN -> MAC+VLAN -> NONE/IPv4/IPv6 */
> +	TPTE(0xF0, ETHER, IPV6, NONE, GRENAT, ETHER_VLAN, NONE,  NONE),
> +	TPTE(0xF1, ETHER, IPV6, NONE, GRENAT, ETHER_VLAN, IPV4, FRAG),
> +	TPTE(0xF2, ETHER, IPV6, NONE, GRENAT, ETHER_VLAN, IPV4, NONFRAG),
> +	TPTE(0xF3, ETHER, IPV6, NONE, GRENAT, ETHER_VLAN, IPV4, UDP),
> +	TPTE(0xF4, ETHER, IPV6, NONE, GRENAT, ETHER_VLAN, IPV4, TCP),
> +	TPTE(0xF5, ETHER, IPV6, NONE, GRENAT, ETHER_VLAN, IPV4, SCTP),
> +	TPTE(0xF9, ETHER, IPV6, NONE, GRENAT, ETHER_VLAN, IPV6, FRAG),
> +	TPTE(0xFA, ETHER, IPV6, NONE, GRENAT, ETHER_VLAN, IPV6, NONFRAG),
> +	TPTE(0xFB, ETHER, IPV6, NONE, GRENAT, ETHER_VLAN, IPV6, UDP),
> +	TPTE(0xFC, ETHER, IPV6, NONE, GRENAT, ETHER_VLAN, IPV6, TCP),
> +	TPTE(0xFD, ETHER, IPV6, NONE, GRENAT, ETHER_VLAN, IPV6, SCTP),
> +};
> +
> +u32 *ngbe_get_supported_ptypes(void)
> +{
> +	static u32 ptypes[] = {
> +		/* For non-vec functions,
> +		 * refer to ngbe_rxd_pkt_info_to_pkt_type();
> +		 */
> +		RTE_PTYPE_L2_ETHER,
> +		RTE_PTYPE_L3_IPV4,
> +		RTE_PTYPE_L3_IPV4_EXT,
> +		RTE_PTYPE_L3_IPV6,
> +		RTE_PTYPE_L3_IPV6_EXT,
> +		RTE_PTYPE_L4_SCTP,
> +		RTE_PTYPE_L4_TCP,
> +		RTE_PTYPE_L4_UDP,
> +		RTE_PTYPE_TUNNEL_IP,
> +		RTE_PTYPE_INNER_L3_IPV6,
> +		RTE_PTYPE_INNER_L3_IPV6_EXT,
> +		RTE_PTYPE_INNER_L4_TCP,
> +		RTE_PTYPE_INNER_L4_UDP,
> +		RTE_PTYPE_UNKNOWN
> +	};
> +
> +	return ptypes;
> +}
> +
> +static inline u8
> +ngbe_encode_ptype_mac(u32 ptype)
> +{
> +	u8 ptid;
> +
> +	ptid = NGBE_PTID_PKT_MAC;
> +
> +	switch (ptype & RTE_PTYPE_L2_MASK) {
> +	case RTE_PTYPE_UNKNOWN:
> +		break;
> +	case RTE_PTYPE_L2_ETHER_TIMESYNC:
> +		ptid |= NGBE_PTID_TYP_TS;
> +		break;
> +	case RTE_PTYPE_L2_ETHER_ARP:
> +		ptid |= NGBE_PTID_TYP_ARP;
> +		break;
> +	case RTE_PTYPE_L2_ETHER_LLDP:
> +		ptid |= NGBE_PTID_TYP_LLDP;
> +		break;
> +	default:
> +		ptid |= NGBE_PTID_TYP_MAC;
> +		break;
> +	}
> +
> +	return ptid;
> +}
> +
> +static inline u8
> +ngbe_encode_ptype_ip(u32 ptype)
> +{
> +	u8 ptid;
> +
> +	ptid = NGBE_PTID_PKT_IP;
> +
> +	switch (ptype & RTE_PTYPE_L3_MASK) {
> +	case RTE_PTYPE_L3_IPV4:
> +	case RTE_PTYPE_L3_IPV4_EXT:
> +	case RTE_PTYPE_L3_IPV4_EXT_UNKNOWN:
> +		break;
> +	case RTE_PTYPE_L3_IPV6:
> +	case RTE_PTYPE_L3_IPV6_EXT:
> +	case RTE_PTYPE_L3_IPV6_EXT_UNKNOWN:
> +		ptid |= NGBE_PTID_PKT_IPV6;
> +		break;
> +	default:
> +		return ngbe_encode_ptype_mac(ptype);
> +	}
> +
> +	switch (ptype & RTE_PTYPE_L4_MASK) {
> +	case RTE_PTYPE_L4_TCP:
> +		ptid |= NGBE_PTID_TYP_TCP;
> +		break;
> +	case RTE_PTYPE_L4_UDP:
> +		ptid |= NGBE_PTID_TYP_UDP;
> +		break;
> +	case RTE_PTYPE_L4_SCTP:
> +		ptid |= NGBE_PTID_TYP_SCTP;
> +		break;
> +	case RTE_PTYPE_L4_FRAG:
> +		ptid |= NGBE_PTID_TYP_IPFRAG;
> +		break;
> +	default:
> +		ptid |= NGBE_PTID_TYP_IPDATA;
> +		break;
> +	}
> +
> +	return ptid;
> +}
> +
> +static inline u8
> +ngbe_encode_ptype_tunnel(u32 ptype)
> +{
> +	u8 ptid;
> +
> +	ptid = NGBE_PTID_PKT_TUN;
> +
> +	switch (ptype & RTE_PTYPE_L3_MASK) {
> +	case RTE_PTYPE_L3_IPV4:
> +	case RTE_PTYPE_L3_IPV4_EXT:
> +	case RTE_PTYPE_L3_IPV4_EXT_UNKNOWN:
> +		break;
> +	case RTE_PTYPE_L3_IPV6:
> +	case RTE_PTYPE_L3_IPV6_EXT:
> +	case RTE_PTYPE_L3_IPV6_EXT_UNKNOWN:
> +		ptid |= NGBE_PTID_TUN_IPV6;
> +		break;
> +	default:
> +		return ngbe_encode_ptype_ip(ptype);
> +	}
> +
> +	/* VXLAN/GRE/Teredo/VXLAN-GPE are not supported in EM */
> +	switch (ptype & RTE_PTYPE_TUNNEL_MASK) {
> +	case RTE_PTYPE_TUNNEL_IP:
> +		ptid |= NGBE_PTID_TUN_EI;
> +		break;
> +	case RTE_PTYPE_TUNNEL_GRE:
> +	case RTE_PTYPE_TUNNEL_VXLAN_GPE:
> +		ptid |= NGBE_PTID_TUN_EIG;
> +		break;
> +	case RTE_PTYPE_TUNNEL_VXLAN:
> +	case RTE_PTYPE_TUNNEL_NVGRE:
> +	case RTE_PTYPE_TUNNEL_GENEVE:
> +	case RTE_PTYPE_TUNNEL_GRENAT:
> +		break;
> +	default:
> +		return ptid;
> +	}
> +
> +	switch (ptype & RTE_PTYPE_INNER_L2_MASK) {
> +	case RTE_PTYPE_INNER_L2_ETHER:
> +		ptid |= NGBE_PTID_TUN_EIGM;
> +		break;
> +	case RTE_PTYPE_INNER_L2_ETHER_VLAN:
> +		ptid |= NGBE_PTID_TUN_EIGMV;
> +		break;
> +	case RTE_PTYPE_INNER_L2_ETHER_QINQ:
> +		ptid |= NGBE_PTID_TUN_EIGMV;
> +		break;
> +	default:
> +		break;
> +	}
> +
> +	switch (ptype & RTE_PTYPE_INNER_L3_MASK) {
> +	case RTE_PTYPE_INNER_L3_IPV4:
> +	case RTE_PTYPE_INNER_L3_IPV4_EXT:
> +	case RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN:
> +		break;
> +	case RTE_PTYPE_INNER_L3_IPV6:
> +	case RTE_PTYPE_INNER_L3_IPV6_EXT:
> +	case RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN:
> +		ptid |= NGBE_PTID_PKT_IPV6;
> +		break;
> +	default:
> +		return ptid;
> +	}
> +
> +	switch (ptype & RTE_PTYPE_INNER_L4_MASK) {
> +	case RTE_PTYPE_INNER_L4_TCP:
> +		ptid |= NGBE_PTID_TYP_TCP;
> +		break;
> +	case RTE_PTYPE_INNER_L4_UDP:
> +		ptid |= NGBE_PTID_TYP_UDP;
> +		break;
> +	case RTE_PTYPE_INNER_L4_SCTP:
> +		ptid |= NGBE_PTID_TYP_SCTP;
> +		break;
> +	case RTE_PTYPE_INNER_L4_FRAG:
> +		ptid |= NGBE_PTID_TYP_IPFRAG;
> +		break;
> +	default:
> +		ptid |= NGBE_PTID_TYP_IPDATA;
> +		break;
> +	}
> +
> +	return ptid;
> +}
> +
> +u32 ngbe_decode_ptype(u8 ptid)
> +{
> +	if (-1 != ngbe_etflt_id(ptid))
> +		return RTE_PTYPE_UNKNOWN;
> +
> +	return ngbe_ptype_lookup[ptid];
> +}
> +
> +u8 ngbe_encode_ptype(u32 ptype)
> +{
> +	u8 ptid = 0;
> +
> +	if (ptype & RTE_PTYPE_TUNNEL_MASK)
> +		ptid = ngbe_encode_ptype_tunnel(ptype);
> +	else if (ptype & RTE_PTYPE_L3_MASK)
> +		ptid = ngbe_encode_ptype_ip(ptype);
> +	else if (ptype & RTE_PTYPE_L2_MASK)
> +		ptid = ngbe_encode_ptype_mac(ptype);
> +	else
> +		ptid = NGBE_PTID_NULL;
> +
> +	return ptid;
> +}
> +
> +/**
> + * Use 2 different tables for normal packets and tunnel packets
> + * to save space.
> + */
> +const u32
> +ngbe_ptype_table[NGBE_PTID_MAX] __rte_cache_aligned = {
> +	[NGBE_PT_ETHER] = RTE_PTYPE_L2_ETHER,
> +	[NGBE_PT_IPV4] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4,
> +	[NGBE_PT_IPV4_TCP] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP,
> +	[NGBE_PT_IPV4_UDP] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP,
> +	[NGBE_PT_IPV4_SCTP] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_SCTP,
> +	[NGBE_PT_IPV4_EXT] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4_EXT,
> +	[NGBE_PT_IPV4_EXT_TCP] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_TCP,
> +	[NGBE_PT_IPV4_EXT_UDP] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_UDP,
> +	[NGBE_PT_IPV4_EXT_SCTP] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_SCTP,
> +	[NGBE_PT_IPV6] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV6,
> +	[NGBE_PT_IPV6_TCP] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_TCP,
> +	[NGBE_PT_IPV6_UDP] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP,
> +	[NGBE_PT_IPV6_SCTP] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_SCTP,
> +	[NGBE_PT_IPV6_EXT] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV6_EXT,
> +	[NGBE_PT_IPV6_EXT_TCP] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_TCP,
> +	[NGBE_PT_IPV6_EXT_UDP] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_UDP,
> +	[NGBE_PT_IPV6_EXT_SCTP] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_SCTP,
> +	[NGBE_PT_IPV4_IPV6] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
> +		RTE_PTYPE_INNER_L3_IPV6,
> +	[NGBE_PT_IPV4_IPV6_TCP] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
> +		RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_TCP,
> +	[NGBE_PT_IPV4_IPV6_UDP] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
> +		RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_UDP,
> +	[NGBE_PT_IPV4_IPV6_SCTP] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
> +		RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_SCTP,
> +	[NGBE_PT_IPV4_EXT_IPV6] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_TUNNEL_IP |
> +		RTE_PTYPE_INNER_L3_IPV6,
> +	[NGBE_PT_IPV4_EXT_IPV6_TCP] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_TUNNEL_IP |
> +		RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_TCP,
> +	[NGBE_PT_IPV4_EXT_IPV6_UDP] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_TUNNEL_IP |
> +		RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_UDP,
> +	[NGBE_PT_IPV4_EXT_IPV6_SCTP] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_TUNNEL_IP |
> +		RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_SCTP,
> +	[NGBE_PT_IPV4_IPV6_EXT] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
> +		RTE_PTYPE_INNER_L3_IPV6_EXT,
> +	[NGBE_PT_IPV4_IPV6_EXT_TCP] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
> +		RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_TCP,
> +	[NGBE_PT_IPV4_IPV6_EXT_UDP] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
> +		RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_UDP,
> +	[NGBE_PT_IPV4_IPV6_EXT_SCTP] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
> +		RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_SCTP,
> +	[NGBE_PT_IPV4_EXT_IPV6_EXT] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_TUNNEL_IP |
> +		RTE_PTYPE_INNER_L3_IPV6_EXT,
> +	[NGBE_PT_IPV4_EXT_IPV6_EXT_TCP] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_TUNNEL_IP |
> +		RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_TCP,
> +	[NGBE_PT_IPV4_EXT_IPV6_EXT_UDP] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_TUNNEL_IP |
> +		RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_UDP,
> +	[NGBE_PT_IPV4_EXT_IPV6_EXT_SCTP] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_TUNNEL_IP |
> +		RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_SCTP,
> +};
> +
> +const u32
> +ngbe_ptype_table_tn[NGBE_PTID_MAX] __rte_cache_aligned = {
> +	[NGBE_PT_NVGRE] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
> +		RTE_PTYPE_INNER_L2_ETHER,
> +	[NGBE_PT_NVGRE_IPV4] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
> +		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV4,
> +	[NGBE_PT_NVGRE_IPV4_EXT] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
> +		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV4_EXT,
> +	[NGBE_PT_NVGRE_IPV6] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
> +		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV6,
> +	[NGBE_PT_NVGRE_IPV4_IPV6] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
> +		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV4,
> +	[NGBE_PT_NVGRE_IPV6_EXT] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
> +		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV6_EXT,
> +	[NGBE_PT_NVGRE_IPV4_IPV6_EXT] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
> +		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV4,
> +	[NGBE_PT_NVGRE_IPV4_TCP] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
> +		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV4 |
> +		RTE_PTYPE_INNER_L4_TCP,
> +	[NGBE_PT_NVGRE_IPV6_TCP] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
> +		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV6 |
> +		RTE_PTYPE_INNER_L4_TCP,
> +	[NGBE_PT_NVGRE_IPV4_IPV6_TCP] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
> +		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV4,
> +	[NGBE_PT_NVGRE_IPV6_EXT_TCP] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
> +		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV6_EXT |
> +		RTE_PTYPE_INNER_L4_TCP,
> +	[NGBE_PT_NVGRE_IPV4_IPV6_EXT_TCP] =
> +		RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		RTE_PTYPE_TUNNEL_GRE | RTE_PTYPE_INNER_L2_ETHER |
> +		RTE_PTYPE_INNER_L3_IPV4,
> +	[NGBE_PT_NVGRE_IPV4_UDP] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
> +		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV4 |
> +		RTE_PTYPE_INNER_L4_UDP,
> +	[NGBE_PT_NVGRE_IPV6_UDP] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
> +		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV6 |
> +		RTE_PTYPE_INNER_L4_UDP,
> +	[NGBE_PT_NVGRE_IPV6_SCTP] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
> +		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV6 |
> +		RTE_PTYPE_INNER_L4_SCTP,
> +	[NGBE_PT_NVGRE_IPV4_IPV6_UDP] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
> +		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV4,
> +	[NGBE_PT_NVGRE_IPV6_EXT_UDP] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
> +		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV6_EXT |
> +		RTE_PTYPE_INNER_L4_UDP,
> +	[NGBE_PT_NVGRE_IPV6_EXT_SCTP] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
> +		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV6_EXT |
> +		RTE_PTYPE_INNER_L4_SCTP,
> +	[NGBE_PT_NVGRE_IPV4_IPV6_EXT_UDP] =
> +		RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		RTE_PTYPE_TUNNEL_GRE | RTE_PTYPE_INNER_L2_ETHER |
> +		RTE_PTYPE_INNER_L3_IPV4,
> +	[NGBE_PT_NVGRE_IPV4_SCTP] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
> +		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV4 |
> +		RTE_PTYPE_INNER_L4_SCTP,
> +	[NGBE_PT_NVGRE_IPV4_EXT_SCTP] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
> +		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV4_EXT |
> +		RTE_PTYPE_INNER_L4_SCTP,
> +	[NGBE_PT_NVGRE_IPV4_EXT_TCP] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
> +		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV4_EXT |
> +		RTE_PTYPE_INNER_L4_TCP,
> +	[NGBE_PT_NVGRE_IPV4_EXT_UDP] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
> +		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV4_EXT |
> +		RTE_PTYPE_INNER_L4_UDP,
> +
> +	[NGBE_PT_VXLAN] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
> +		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER,
> +	[NGBE_PT_VXLAN_IPV4] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
> +		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
> +		RTE_PTYPE_INNER_L3_IPV4,
> +	[NGBE_PT_VXLAN_IPV4_EXT] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
> +		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
> +		RTE_PTYPE_INNER_L3_IPV4_EXT,
> +	[NGBE_PT_VXLAN_IPV6] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
> +		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
> +		RTE_PTYPE_INNER_L3_IPV6,
> +	[NGBE_PT_VXLAN_IPV4_IPV6] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
> +		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
> +		RTE_PTYPE_INNER_L3_IPV4,
> +	[NGBE_PT_VXLAN_IPV6_EXT] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
> +		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
> +		RTE_PTYPE_INNER_L3_IPV6_EXT,
> +	[NGBE_PT_VXLAN_IPV4_IPV6_EXT] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
> +		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
> +		RTE_PTYPE_INNER_L3_IPV4,
> +	[NGBE_PT_VXLAN_IPV4_TCP] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
> +		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
> +		RTE_PTYPE_INNER_L3_IPV4 | RTE_PTYPE_INNER_L4_TCP,
> +	[NGBE_PT_VXLAN_IPV6_TCP] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
> +		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
> +		RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_TCP,
> +	[NGBE_PT_VXLAN_IPV4_IPV6_TCP] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
> +		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
> +		RTE_PTYPE_INNER_L3_IPV4,
> +	[NGBE_PT_VXLAN_IPV6_EXT_TCP] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
> +		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
> +		RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_TCP,
> +	[NGBE_PT_VXLAN_IPV4_IPV6_EXT_TCP] =
> +		RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		RTE_PTYPE_L4_UDP | RTE_PTYPE_TUNNEL_VXLAN |
> +		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV4,
> +	[NGBE_PT_VXLAN_IPV4_UDP] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
> +		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
> +		RTE_PTYPE_INNER_L3_IPV4 | RTE_PTYPE_INNER_L4_UDP,
> +	[NGBE_PT_VXLAN_IPV6_UDP] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
> +		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
> +		RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_UDP,
> +	[NGBE_PT_VXLAN_IPV6_SCTP] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
> +		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
> +		RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_SCTP,
> +	[NGBE_PT_VXLAN_IPV4_IPV6_UDP] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
> +		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
> +		RTE_PTYPE_INNER_L3_IPV4,
> +	[NGBE_PT_VXLAN_IPV6_EXT_UDP] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
> +		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
> +		RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_UDP,
> +	[NGBE_PT_VXLAN_IPV6_EXT_SCTP] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
> +		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
> +		RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_SCTP,
> +	[NGBE_PT_VXLAN_IPV4_IPV6_EXT_UDP] =
> +		RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
> +		RTE_PTYPE_L4_UDP | RTE_PTYPE_TUNNEL_VXLAN |
> +		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV4,
> +	[NGBE_PT_VXLAN_IPV4_SCTP] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
> +		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
> +		RTE_PTYPE_INNER_L3_IPV4 | RTE_PTYPE_INNER_L4_SCTP,
> +	[NGBE_PT_VXLAN_IPV4_EXT_SCTP] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
> +		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
> +		RTE_PTYPE_INNER_L3_IPV4_EXT | RTE_PTYPE_INNER_L4_SCTP,
> +	[NGBE_PT_VXLAN_IPV4_EXT_TCP] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
> +		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
> +		RTE_PTYPE_INNER_L3_IPV4_EXT | RTE_PTYPE_INNER_L4_TCP,
> +	[NGBE_PT_VXLAN_IPV4_EXT_UDP] = RTE_PTYPE_L2_ETHER |
> +		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
> +		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
> +		RTE_PTYPE_INNER_L3_IPV4_EXT | RTE_PTYPE_INNER_L4_UDP,
> +};
> +
> diff --git a/drivers/net/ngbe/ngbe_ptypes.h b/drivers/net/ngbe/ngbe_ptypes.h
> new file mode 100644
> index 0000000000..1b965c02d8
> --- /dev/null
> +++ b/drivers/net/ngbe/ngbe_ptypes.h
> @@ -0,0 +1,351 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
> + */
> +
> +#ifndef _NGBE_PTYPE_H_
> +#define _NGBE_PTYPE_H_
> +
> +/**
> + * PTID(Packet Type Identifier, 8bits)
> + * - Bit 3:0 detailed types.
> + * - Bit 5:4 basic types.
> + * - Bit 7:6 tunnel types.
> + **/
> +#define NGBE_PTID_NULL                 0
> +#define NGBE_PTID_MAX                  256
> +#define NGBE_PTID_MASK                 0xFF
> +#define NGBE_PTID_MASK_TUNNEL          0x7F
> +
> +/* TUN */
> +#define NGBE_PTID_TUN_IPV6             0x40
> +#define NGBE_PTID_TUN_EI               0x00 /* IP */
> +#define NGBE_PTID_TUN_EIG              0x10 /* IP+GRE */
> +#define NGBE_PTID_TUN_EIGM             0x20 /* IP+GRE+MAC */
> +#define NGBE_PTID_TUN_EIGMV            0x30 /* IP+GRE+MAC+VLAN */
> +
> +/* PKT for !TUN */
> +#define NGBE_PTID_PKT_TUN             (0x80)
> +#define NGBE_PTID_PKT_MAC             (0x10)
> +#define NGBE_PTID_PKT_IP              (0x20)
> +#define NGBE_PTID_PKT_FCOE            (0x30)
> +
> +/* TYP for PKT=mac */
> +#define NGBE_PTID_TYP_MAC             (0x01)
> +#define NGBE_PTID_TYP_TS              (0x02) /* time sync */
> +#define NGBE_PTID_TYP_FIP             (0x03)
> +#define NGBE_PTID_TYP_LLDP            (0x04)
> +#define NGBE_PTID_TYP_CNM             (0x05)
> +#define NGBE_PTID_TYP_EAPOL           (0x06)
> +#define NGBE_PTID_TYP_ARP             (0x07)
> +#define NGBE_PTID_TYP_ETF             (0x08)
> +
> +/* TYP for PKT=ip */
> +#define NGBE_PTID_PKT_IPV6            (0x08)
> +#define NGBE_PTID_TYP_IPFRAG          (0x01)
> +#define NGBE_PTID_TYP_IPDATA          (0x02)
> +#define NGBE_PTID_TYP_UDP             (0x03)
> +#define NGBE_PTID_TYP_TCP             (0x04)
> +#define NGBE_PTID_TYP_SCTP            (0x05)
> +
> +/* TYP for PKT=fcoe */
> +#define NGBE_PTID_PKT_VFT             (0x08)
> +#define NGBE_PTID_TYP_FCOE            (0x00)
> +#define NGBE_PTID_TYP_FCDATA          (0x01)
> +#define NGBE_PTID_TYP_FCRDY           (0x02)
> +#define NGBE_PTID_TYP_FCRSP           (0x03)
> +#define NGBE_PTID_TYP_FCOTHER         (0x04)
> +
> +/* packet type non-ip values */
> +enum ngbe_l2_ptids {
> +	NGBE_PTID_L2_ABORTED = (NGBE_PTID_PKT_MAC),
> +	NGBE_PTID_L2_MAC = (NGBE_PTID_PKT_MAC | NGBE_PTID_TYP_MAC),
> +	NGBE_PTID_L2_TMST = (NGBE_PTID_PKT_MAC | NGBE_PTID_TYP_TS),
> +	NGBE_PTID_L2_FIP = (NGBE_PTID_PKT_MAC | NGBE_PTID_TYP_FIP),
> +	NGBE_PTID_L2_LLDP = (NGBE_PTID_PKT_MAC | NGBE_PTID_TYP_LLDP),
> +	NGBE_PTID_L2_CNM = (NGBE_PTID_PKT_MAC | NGBE_PTID_TYP_CNM),
> +	NGBE_PTID_L2_EAPOL = (NGBE_PTID_PKT_MAC | NGBE_PTID_TYP_EAPOL),
> +	NGBE_PTID_L2_ARP = (NGBE_PTID_PKT_MAC | NGBE_PTID_TYP_ARP),
> +
> +	NGBE_PTID_L2_IPV4_FRAG = (NGBE_PTID_PKT_IP | NGBE_PTID_TYP_IPFRAG),
> +	NGBE_PTID_L2_IPV4 = (NGBE_PTID_PKT_IP | NGBE_PTID_TYP_IPDATA),
> +	NGBE_PTID_L2_IPV4_UDP = (NGBE_PTID_PKT_IP | NGBE_PTID_TYP_UDP),
> +	NGBE_PTID_L2_IPV4_TCP = (NGBE_PTID_PKT_IP | NGBE_PTID_TYP_TCP),
> +	NGBE_PTID_L2_IPV4_SCTP = (NGBE_PTID_PKT_IP | NGBE_PTID_TYP_SCTP),
> +	NGBE_PTID_L2_IPV6_FRAG = (NGBE_PTID_PKT_IP | NGBE_PTID_PKT_IPV6 |
> +			NGBE_PTID_TYP_IPFRAG),
> +	NGBE_PTID_L2_IPV6 = (NGBE_PTID_PKT_IP | NGBE_PTID_PKT_IPV6 |
> +			NGBE_PTID_TYP_IPDATA),
> +	NGBE_PTID_L2_IPV6_UDP = (NGBE_PTID_PKT_IP | NGBE_PTID_PKT_IPV6 |
> +			NGBE_PTID_TYP_UDP),
> +	NGBE_PTID_L2_IPV6_TCP = (NGBE_PTID_PKT_IP | NGBE_PTID_PKT_IPV6 |
> +			NGBE_PTID_TYP_TCP),
> +	NGBE_PTID_L2_IPV6_SCTP = (NGBE_PTID_PKT_IP | NGBE_PTID_PKT_IPV6 |
> +			NGBE_PTID_TYP_SCTP),
> +
> +	NGBE_PTID_L2_FCOE = (NGBE_PTID_PKT_FCOE |
> +			NGBE_PTID_TYP_FCOE),
> +	NGBE_PTID_L2_FCOE_FCDATA = (NGBE_PTID_PKT_FCOE |
> +			NGBE_PTID_TYP_FCDATA),
> +	NGBE_PTID_L2_FCOE_FCRDY = (NGBE_PTID_PKT_FCOE |
> +			NGBE_PTID_TYP_FCRDY),
> +	NGBE_PTID_L2_FCOE_FCRSP = (NGBE_PTID_PKT_FCOE |
> +			NGBE_PTID_TYP_FCRSP),
> +	NGBE_PTID_L2_FCOE_FCOTHER = (NGBE_PTID_PKT_FCOE |
> +			NGBE_PTID_TYP_FCOTHER),
> +	NGBE_PTID_L2_FCOE_VFT = (NGBE_PTID_PKT_FCOE |
> +			NGBE_PTID_PKT_VFT),
> +	NGBE_PTID_L2_FCOE_VFT_FCDATA = (NGBE_PTID_PKT_FCOE |
> +			NGBE_PTID_PKT_VFT | NGBE_PTID_TYP_FCDATA),
> +	NGBE_PTID_L2_FCOE_VFT_FCRDY = (NGBE_PTID_PKT_FCOE |
> +			NGBE_PTID_PKT_VFT | NGBE_PTID_TYP_FCRDY),
> +	NGBE_PTID_L2_FCOE_VFT_FCRSP = (NGBE_PTID_PKT_FCOE |
> +			NGBE_PTID_PKT_VFT | NGBE_PTID_TYP_FCRSP),
> +	NGBE_PTID_L2_FCOE_VFT_FCOTHER = (NGBE_PTID_PKT_FCOE |
> +			NGBE_PTID_PKT_VFT | NGBE_PTID_TYP_FCOTHER),
> +
> +	NGBE_PTID_L2_TUN4_MAC = (NGBE_PTID_PKT_TUN |
> +			NGBE_PTID_TUN_EIGM),
> +	NGBE_PTID_L2_TUN6_MAC = (NGBE_PTID_PKT_TUN |
> +			NGBE_PTID_TUN_IPV6 | NGBE_PTID_TUN_EIGM),
> +};
> +
> +
> +/*
> + * PTYPE(Packet Type, 32bits)
> + * - Bit 3:0 is for L2 types.
> + * - Bit 7:4 is for L3 or outer L3 (for tunneling case) types.
> + * - Bit 11:8 is for L4 or outer L4 (for tunneling case) types.
> + * - Bit 15:12 is for tunnel types.
> + * - Bit 19:16 is for inner L2 types.
> + * - Bit 23:20 is for inner L3 types.
> + * - Bit 27:24 is for inner L4 types.
> + * - Bit 31:28 is reserved.
> + * please refer to rte_mbuf.h: rte_mbuf.packet_type
> + */
> +struct rte_ngbe_ptype {
> +	u32 l2:4;  /* outer mac */
> +	u32 l3:4;  /* outer internet protocol */
> +	u32 l4:4;  /* outer transport protocol */
> +	u32 tun:4; /* tunnel protocol */
> +
> +	u32 el2:4; /* inner mac */
> +	u32 el3:4; /* inner internet protocol */
> +	u32 el4:4; /* inner transport protocol */
> +	u32 rsv:3;
> +	u32 known:1;
> +};
> +
> +#ifndef RTE_PTYPE_UNKNOWN
> +#define RTE_PTYPE_UNKNOWN                   0x00000000
> +#define RTE_PTYPE_L2_ETHER                  0x00000001
> +#define RTE_PTYPE_L2_ETHER_TIMESYNC         0x00000002
> +#define RTE_PTYPE_L2_ETHER_ARP              0x00000003
> +#define RTE_PTYPE_L2_ETHER_LLDP             0x00000004
> +#define RTE_PTYPE_L2_ETHER_NSH              0x00000005
> +#define RTE_PTYPE_L2_ETHER_FCOE             0x00000009
> +#define RTE_PTYPE_L3_IPV4                   0x00000010
> +#define RTE_PTYPE_L3_IPV4_EXT               0x00000030
> +#define RTE_PTYPE_L3_IPV6                   0x00000040
> +#define RTE_PTYPE_L3_IPV4_EXT_UNKNOWN       0x00000090
> +#define RTE_PTYPE_L3_IPV6_EXT               0x000000c0
> +#define RTE_PTYPE_L3_IPV6_EXT_UNKNOWN       0x000000e0
> +#define RTE_PTYPE_L4_TCP                    0x00000100
> +#define RTE_PTYPE_L4_UDP                    0x00000200
> +#define RTE_PTYPE_L4_FRAG                   0x00000300
> +#define RTE_PTYPE_L4_SCTP                   0x00000400
> +#define RTE_PTYPE_L4_ICMP                   0x00000500
> +#define RTE_PTYPE_L4_NONFRAG                0x00000600
> +#define RTE_PTYPE_TUNNEL_IP                 0x00001000
> +#define RTE_PTYPE_TUNNEL_GRE                0x00002000
> +#define RTE_PTYPE_TUNNEL_VXLAN              0x00003000
> +#define RTE_PTYPE_TUNNEL_NVGRE              0x00004000
> +#define RTE_PTYPE_TUNNEL_GENEVE             0x00005000
> +#define RTE_PTYPE_TUNNEL_GRENAT             0x00006000
> +#define RTE_PTYPE_INNER_L2_ETHER            0x00010000
> +#define RTE_PTYPE_INNER_L2_ETHER_VLAN       0x00020000
> +#define RTE_PTYPE_INNER_L3_IPV4             0x00100000
> +#define RTE_PTYPE_INNER_L3_IPV4_EXT         0x00200000
> +#define RTE_PTYPE_INNER_L3_IPV6             0x00300000
> +#define RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN 0x00400000
> +#define RTE_PTYPE_INNER_L3_IPV6_EXT         0x00500000
> +#define RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN 0x00600000
> +#define RTE_PTYPE_INNER_L4_TCP              0x01000000
> +#define RTE_PTYPE_INNER_L4_UDP              0x02000000
> +#define RTE_PTYPE_INNER_L4_FRAG             0x03000000
> +#define RTE_PTYPE_INNER_L4_SCTP             0x04000000
> +#define RTE_PTYPE_INNER_L4_ICMP             0x05000000
> +#define RTE_PTYPE_INNER_L4_NONFRAG          0x06000000
> +#endif /* !RTE_PTYPE_UNKNOWN */
> +#define RTE_PTYPE_L3_IPV4u                  RTE_PTYPE_L3_IPV4_EXT_UNKNOWN
> +#define RTE_PTYPE_L3_IPV6u                  RTE_PTYPE_L3_IPV6_EXT_UNKNOWN
> +#define RTE_PTYPE_INNER_L3_IPV4u            RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN
> +#define RTE_PTYPE_INNER_L3_IPV6u            RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN
> +#define RTE_PTYPE_L2_ETHER_FIP              RTE_PTYPE_L2_ETHER
> +#define RTE_PTYPE_L2_ETHER_CNM              RTE_PTYPE_L2_ETHER
> +#define RTE_PTYPE_L2_ETHER_EAPOL            RTE_PTYPE_L2_ETHER
> +#define RTE_PTYPE_L2_ETHER_FILTER           RTE_PTYPE_L2_ETHER
> +
> +u32 *ngbe_get_supported_ptypes(void);
> +u32 ngbe_decode_ptype(u8 ptid);
> +u8 ngbe_encode_ptype(u32 ptype);
> +
> +/**
> + * PT(Packet Type, 32bits)
> + * - Bit 3:0 is for L2 types.
> + * - Bit 7:4 is for L3 or outer L3 (for tunneling case) types.
> + * - Bit 11:8 is for L4 or outer L4 (for tunneling case) types.
> + * - Bit 15:12 is for tunnel types.
> + * - Bit 19:16 is for inner L2 types.
> + * - Bit 23:20 is for inner L3 types.
> + * - Bit 27:24 is for inner L4 types.
> + * - Bit 31:28 is reserved.
> + * PT is a more accurate version of PTYPE
> + **/
> +#define NGBE_PT_ETHER                   0x00
> +#define NGBE_PT_IPV4                    0x01
> +#define NGBE_PT_IPV4_TCP                0x11
> +#define NGBE_PT_IPV4_UDP                0x21
> +#define NGBE_PT_IPV4_SCTP               0x41
> +#define NGBE_PT_IPV4_EXT                0x03
> +#define NGBE_PT_IPV4_EXT_TCP            0x13
> +#define NGBE_PT_IPV4_EXT_UDP            0x23
> +#define NGBE_PT_IPV4_EXT_SCTP           0x43
> +#define NGBE_PT_IPV6                    0x04
> +#define NGBE_PT_IPV6_TCP                0x14
> +#define NGBE_PT_IPV6_UDP                0x24
> +#define NGBE_PT_IPV6_SCTP               0x44
> +#define NGBE_PT_IPV6_EXT                0x0C
> +#define NGBE_PT_IPV6_EXT_TCP            0x1C
> +#define NGBE_PT_IPV6_EXT_UDP            0x2C
> +#define NGBE_PT_IPV6_EXT_SCTP           0x4C
> +#define NGBE_PT_IPV4_IPV6               0x05
> +#define NGBE_PT_IPV4_IPV6_TCP           0x15
> +#define NGBE_PT_IPV4_IPV6_UDP           0x25
> +#define NGBE_PT_IPV4_IPV6_SCTP          0x45
> +#define NGBE_PT_IPV4_EXT_IPV6           0x07
> +#define NGBE_PT_IPV4_EXT_IPV6_TCP       0x17
> +#define NGBE_PT_IPV4_EXT_IPV6_UDP       0x27
> +#define NGBE_PT_IPV4_EXT_IPV6_SCTP      0x47
> +#define NGBE_PT_IPV4_IPV6_EXT           0x0D
> +#define NGBE_PT_IPV4_IPV6_EXT_TCP       0x1D
> +#define NGBE_PT_IPV4_IPV6_EXT_UDP       0x2D
> +#define NGBE_PT_IPV4_IPV6_EXT_SCTP      0x4D
> +#define NGBE_PT_IPV4_EXT_IPV6_EXT       0x0F
> +#define NGBE_PT_IPV4_EXT_IPV6_EXT_TCP   0x1F
> +#define NGBE_PT_IPV4_EXT_IPV6_EXT_UDP   0x2F
> +#define NGBE_PT_IPV4_EXT_IPV6_EXT_SCTP  0x4F
> +
> +#define NGBE_PT_NVGRE                   0x00
> +#define NGBE_PT_NVGRE_IPV4              0x01
> +#define NGBE_PT_NVGRE_IPV4_TCP          0x11
> +#define NGBE_PT_NVGRE_IPV4_UDP          0x21
> +#define NGBE_PT_NVGRE_IPV4_SCTP         0x41
> +#define NGBE_PT_NVGRE_IPV4_EXT          0x03
> +#define NGBE_PT_NVGRE_IPV4_EXT_TCP      0x13
> +#define NGBE_PT_NVGRE_IPV4_EXT_UDP      0x23
> +#define NGBE_PT_NVGRE_IPV4_EXT_SCTP     0x43
> +#define NGBE_PT_NVGRE_IPV6              0x04
> +#define NGBE_PT_NVGRE_IPV6_TCP          0x14
> +#define NGBE_PT_NVGRE_IPV6_UDP          0x24
> +#define NGBE_PT_NVGRE_IPV6_SCTP         0x44
> +#define NGBE_PT_NVGRE_IPV6_EXT          0x0C
> +#define NGBE_PT_NVGRE_IPV6_EXT_TCP      0x1C
> +#define NGBE_PT_NVGRE_IPV6_EXT_UDP      0x2C
> +#define NGBE_PT_NVGRE_IPV6_EXT_SCTP     0x4C
> +#define NGBE_PT_NVGRE_IPV4_IPV6         0x05
> +#define NGBE_PT_NVGRE_IPV4_IPV6_TCP     0x15
> +#define NGBE_PT_NVGRE_IPV4_IPV6_UDP     0x25
> +#define NGBE_PT_NVGRE_IPV4_IPV6_EXT     0x0D
> +#define NGBE_PT_NVGRE_IPV4_IPV6_EXT_TCP 0x1D
> +#define NGBE_PT_NVGRE_IPV4_IPV6_EXT_UDP 0x2D
> +
> +#define NGBE_PT_VXLAN                   0x80
> +#define NGBE_PT_VXLAN_IPV4              0x81
> +#define NGBE_PT_VXLAN_IPV4_TCP          0x91
> +#define NGBE_PT_VXLAN_IPV4_UDP          0xA1
> +#define NGBE_PT_VXLAN_IPV4_SCTP         0xC1
> +#define NGBE_PT_VXLAN_IPV4_EXT          0x83
> +#define NGBE_PT_VXLAN_IPV4_EXT_TCP      0x93
> +#define NGBE_PT_VXLAN_IPV4_EXT_UDP      0xA3
> +#define NGBE_PT_VXLAN_IPV4_EXT_SCTP     0xC3
> +#define NGBE_PT_VXLAN_IPV6              0x84
> +#define NGBE_PT_VXLAN_IPV6_TCP          0x94
> +#define NGBE_PT_VXLAN_IPV6_UDP          0xA4
> +#define NGBE_PT_VXLAN_IPV6_SCTP         0xC4
> +#define NGBE_PT_VXLAN_IPV6_EXT          0x8C
> +#define NGBE_PT_VXLAN_IPV6_EXT_TCP      0x9C
> +#define NGBE_PT_VXLAN_IPV6_EXT_UDP      0xAC
> +#define NGBE_PT_VXLAN_IPV6_EXT_SCTP     0xCC
> +#define NGBE_PT_VXLAN_IPV4_IPV6         0x85
> +#define NGBE_PT_VXLAN_IPV4_IPV6_TCP     0x95
> +#define NGBE_PT_VXLAN_IPV4_IPV6_UDP     0xA5
> +#define NGBE_PT_VXLAN_IPV4_IPV6_EXT     0x8D
> +#define NGBE_PT_VXLAN_IPV4_IPV6_EXT_TCP 0x9D
> +#define NGBE_PT_VXLAN_IPV4_IPV6_EXT_UDP 0xAD
> +
> +#define NGBE_PT_MAX    256
> +extern const u32 ngbe_ptype_table[NGBE_PT_MAX];
> +extern const u32 ngbe_ptype_table_tn[NGBE_PT_MAX];
> +
> +
> +/* ether type filter list: one static filter per filter consumer. This is
> + *                 to avoid filter collisions later. Add new filters
> + *                 here!!
> + *      EAPOL 802.1x (0x888e): Filter 0
> + *      FCoE (0x8906):   Filter 2
> + *      1588 (0x88f7):   Filter 3
> + *      FIP  (0x8914):   Filter 4
> + *      LLDP (0x88CC):   Filter 5
> + *      LACP (0x8809):   Filter 6
> + *      FC   (0x8808):   Filter 7
> + */
> +#define NGBE_ETF_ID_EAPOL        0
> +#define NGBE_ETF_ID_FCOE         2
> +#define NGBE_ETF_ID_1588         3
> +#define NGBE_ETF_ID_FIP          4
> +#define NGBE_ETF_ID_LLDP         5
> +#define NGBE_ETF_ID_LACP         6
> +#define NGBE_ETF_ID_FC           7
> +#define NGBE_ETF_ID_MAX          8
> +
> +#define NGBE_PTID_ETF_MIN  0x18
> +#define NGBE_PTID_ETF_MAX  0x1F
> +static inline int ngbe_etflt_id(u8 ptid)
> +{
> +	if (ptid >= NGBE_PTID_ETF_MIN && ptid <= NGBE_PTID_ETF_MAX)
> +		return ptid - NGBE_PTID_ETF_MIN;
> +	else
> +		return -1;
> +}
> +
> +struct ngbe_udphdr {
> +	__be16	source;

rte_be* types should be used
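
For example, a minimal sketch of the fix using the rte_be16_t type from
rte_byteorder.h (not compile-tested):

struct ngbe_udphdr {
	rte_be16_t source;
	rte_be16_t dest;
	rte_be16_t len;
	rte_be16_t check;
};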

> +	__be16	dest;
> +	__be16	len;
> +	__be16	check;
> +};
> +
> +struct ngbe_vxlanhdr {
> +	__be32 vx_flags;

rte_be* types should be used

> +	__be32 vx_vni;
> +};
> +
> +struct ngbe_genevehdr {
> +	u8 opt_len:6;
> +	u8 ver:2;
> +	u8 rsvd1:6;
> +	u8 critical:1;
> +	u8 oam:1;
> +	__be16 proto_type;

rte_be* types should be used

> +
> +	u8 vni[3];
> +	u8 rsvd2;
> +};
> +
> +struct ngbe_nvgrehdr {
> +	__be16 flags;

rte_be* types should be used

> +	__be16 proto;
> +	__be32 tni;
> +};
> +
> +#endif /* _NGBE_PTYPE_H_ */
> diff --git a/drivers/net/ngbe/ngbe_rxtx.h b/drivers/net/ngbe/ngbe_rxtx.h
> index 2db5cc3f2a..f30da10ae3 100644
> --- a/drivers/net/ngbe/ngbe_rxtx.h
> +++ b/drivers/net/ngbe/ngbe_rxtx.h
> @@ -74,7 +74,6 @@ struct ngbe_tx_desc {
>   		    sizeof(struct ngbe_rx_desc))
>   
>   #define NGBE_TX_MAX_SEG                    40
> -#define NGBE_PTID_MASK                     0xFF
>   
>   /**
>    * Structure associated with each descriptor of the RX ring of a RX queue.
> 


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [dpdk-dev] [PATCH v5 19/24] net/ngbe: add simple Rx and Tx flow
  2021-06-02  9:41 ` [dpdk-dev] [PATCH v5 19/24] net/ngbe: add simple Rx and Tx flow Jiawen Wu
@ 2021-06-14 19:10   ` Andrew Rybchenko
  0 siblings, 0 replies; 51+ messages in thread
From: Andrew Rybchenko @ 2021-06-14 19:10 UTC (permalink / raw)
  To: Jiawen Wu, dev

On 6/2/21 12:41 PM, Jiawen Wu wrote:
> Initialize device with the simplest receive and transmit functions.

Why are Rx and Tx mixed in one patch? It looks like separate code.

> 
> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
> ---
>   drivers/net/ngbe/ngbe_ethdev.c |   8 +-
>   drivers/net/ngbe/ngbe_ethdev.h |   6 +
>   drivers/net/ngbe/ngbe_rxtx.c   | 482 +++++++++++++++++++++++++++++++++
>   drivers/net/ngbe/ngbe_rxtx.h   | 110 ++++++++
>   4 files changed, 604 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
> index 672db88133..4dab920caa 100644
> --- a/drivers/net/ngbe/ngbe_ethdev.c
> +++ b/drivers/net/ngbe/ngbe_ethdev.c
> @@ -109,6 +109,8 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
>   	PMD_INIT_FUNC_TRACE();
>   
>   	eth_dev->dev_ops = &ngbe_eth_dev_ops;
> +	eth_dev->rx_pkt_burst = &ngbe_recv_pkts;
> +	eth_dev->tx_pkt_burst = &ngbe_xmit_pkts_simple;
>   
>   	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
>   		return 0;
> @@ -357,8 +359,10 @@ ngbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
>   const uint32_t *
>   ngbe_dev_supported_ptypes_get(struct rte_eth_dev *dev)
>   {
> -	RTE_SET_USED(dev);
> -	return ngbe_get_supported_ptypes();
> +	if (dev->rx_pkt_burst == ngbe_recv_pkts)
> +		return ngbe_get_supported_ptypes();
> +
> +	return NULL;
>   }
>   
>   /* return 0 means link status changed, -1 means not changed */
> diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
> index 6881351252..c0f8483eca 100644
> --- a/drivers/net/ngbe/ngbe_ethdev.h
> +++ b/drivers/net/ngbe/ngbe_ethdev.h
> @@ -75,6 +75,12 @@ int ngbe_dev_rx_init(struct rte_eth_dev *dev);
>   
>   void ngbe_dev_tx_init(struct rte_eth_dev *dev);
>   
> +uint16_t ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
> +		uint16_t nb_pkts);
> +
> +uint16_t ngbe_xmit_pkts_simple(void *tx_queue, struct rte_mbuf **tx_pkts,
> +		uint16_t nb_pkts);
> +
>   int
>   ngbe_dev_link_update_share(struct rte_eth_dev *dev,
>   		int wait_to_complete);
> diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c
> index 68d7e651af..9462da5b7a 100644
> --- a/drivers/net/ngbe/ngbe_rxtx.c
> +++ b/drivers/net/ngbe/ngbe_rxtx.c
> @@ -15,10 +15,492 @@
>   #include "ngbe_ethdev.h"
>   #include "ngbe_rxtx.h"
>   
> +/*
> + * Prefetch a cache line into all cache levels.
> + */
> +#define rte_ngbe_prefetch(p)   rte_prefetch0(p)
> +
> +/*********************************************************************
> + *
> + *  TX functions

TX -> Tx

> + *
> + **********************************************************************/
> +
> +/*
> + * Check for descriptors with their DD bit set and free mbufs.
> + * Return the total number of buffers freed.
> + */
> +static __rte_always_inline int
> +ngbe_tx_free_bufs(struct ngbe_tx_queue *txq)
> +{
> +	struct ngbe_tx_entry *txep;
> +	uint32_t status;
> +	int i, nb_free = 0;
> +	struct rte_mbuf *m, *free[RTE_NGBE_TX_MAX_FREE_BUF_SZ];
> +
> +	/* check DD bit on threshold descriptor */
> +	status = txq->tx_ring[txq->tx_next_dd].dw3;
> +	if (!(status & rte_cpu_to_le_32(NGBE_TXD_DD))) {
> +		if (txq->nb_tx_free >> 1 < txq->tx_free_thresh)
> +			ngbe_set32_masked(txq->tdc_reg_addr,
> +				NGBE_TXCFG_FLUSH, NGBE_TXCFG_FLUSH);
> +		return 0;
> +	}
> +
> +	/*
> +	 * first buffer to free from S/W ring is at index
> +	 * tx_next_dd - (tx_free_thresh-1)
> +	 */
> +	txep = &txq->sw_ring[txq->tx_next_dd - (txq->tx_free_thresh - 1)];
> +	for (i = 0; i < txq->tx_free_thresh; ++i, ++txep) {
> +		/* free buffers one at a time */
> +		m = rte_pktmbuf_prefree_seg(txep->mbuf);
> +		txep->mbuf = NULL;
> +
> +		if (unlikely(m == NULL))
> +			continue;
> +
> +		if (nb_free >= RTE_NGBE_TX_MAX_FREE_BUF_SZ ||
> +		    (nb_free > 0 && m->pool != free[0]->pool)) {
> +			rte_mempool_put_bulk(free[0]->pool,
> +					     (void **)free, nb_free);
> +			nb_free = 0;
> +		}
> +
> +		free[nb_free++] = m;
> +	}
> +
> +	if (nb_free > 0)
> +		rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
> +
> +	/* buffers were freed, update counters */
> +	txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_free_thresh);
> +	txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_free_thresh);
> +	if (txq->tx_next_dd >= txq->nb_tx_desc)
> +		txq->tx_next_dd = (uint16_t)(txq->tx_free_thresh - 1);
> +
> +	return txq->tx_free_thresh;
> +}
> +
> +/* Populate 4 descriptors with data from 4 mbufs */
> +static inline void
> +tx4(volatile struct ngbe_tx_desc *txdp, struct rte_mbuf **pkts)
> +{
> +	uint64_t buf_dma_addr;
> +	uint32_t pkt_len;
> +	int i;
> +
> +	for (i = 0; i < 4; ++i, ++txdp, ++pkts) {
> +		buf_dma_addr = rte_mbuf_data_iova(*pkts);
> +		pkt_len = (*pkts)->data_len;
> +
> +		/* write data to descriptor */
> +		txdp->qw0 = rte_cpu_to_le_64(buf_dma_addr);
> +		txdp->dw2 = cpu_to_le32(NGBE_TXD_FLAGS |
> +					NGBE_TXD_DATLEN(pkt_len));
> +		txdp->dw3 = cpu_to_le32(NGBE_TXD_PAYLEN(pkt_len));
> +
> +		rte_prefetch0(&(*pkts)->pool);
> +	}
> +}
> +
> +/* Populate 1 descriptor with data from 1 mbuf */
> +static inline void
> +tx1(volatile struct ngbe_tx_desc *txdp, struct rte_mbuf **pkts)
> +{
> +	uint64_t buf_dma_addr;
> +	uint32_t pkt_len;
> +
> +	buf_dma_addr = rte_mbuf_data_iova(*pkts);
> +	pkt_len = (*pkts)->data_len;
> +
> +	/* write data to descriptor */
> +	txdp->qw0 = cpu_to_le64(buf_dma_addr);
> +	txdp->dw2 = cpu_to_le32(NGBE_TXD_FLAGS |
> +				NGBE_TXD_DATLEN(pkt_len));
> +	txdp->dw3 = cpu_to_le32(NGBE_TXD_PAYLEN(pkt_len));
> +
> +	rte_prefetch0(&(*pkts)->pool);
> +}
> +
> +/*
> + * Fill H/W descriptor ring with mbuf data.
> + * Copy mbuf pointers to the S/W ring.
> + */
> +static inline void
> +ngbe_tx_fill_hw_ring(struct ngbe_tx_queue *txq, struct rte_mbuf **pkts,
> +		      uint16_t nb_pkts)
> +{
> +	volatile struct ngbe_tx_desc *txdp = &txq->tx_ring[txq->tx_tail];
> +	struct ngbe_tx_entry *txep = &txq->sw_ring[txq->tx_tail];
> +	const int N_PER_LOOP = 4;
> +	const int N_PER_LOOP_MASK = N_PER_LOOP - 1;
> +	int mainpart, leftover;
> +	int i, j;
> +
> +	/*
> +	 * Process most of the packets in chunks of N pkts.  Any
> +	 * leftover packets will get processed one at a time.
> +	 */
> +	mainpart = (nb_pkts & ((uint32_t)~N_PER_LOOP_MASK));
> +	leftover = (nb_pkts & ((uint32_t)N_PER_LOOP_MASK));
> +	for (i = 0; i < mainpart; i += N_PER_LOOP) {
> +		/* Copy N mbuf pointers to the S/W ring */
> +		for (j = 0; j < N_PER_LOOP; ++j)
> +			(txep + i + j)->mbuf = *(pkts + i + j);
> +		tx4(txdp + i, pkts + i);
> +	}
> +
> +	if (unlikely(leftover > 0)) {
> +		for (i = 0; i < leftover; ++i) {
> +			(txep + mainpart + i)->mbuf = *(pkts + mainpart + i);
> +			tx1(txdp + mainpart + i, pkts + mainpart + i);
> +		}
> +	}
> +}
> +
> +static inline uint16_t
> +tx_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
> +	     uint16_t nb_pkts)
> +{
> +	struct ngbe_tx_queue *txq = (struct ngbe_tx_queue *)tx_queue;
> +	uint16_t n = 0;
> +
> +	/*
> +	 * Begin scanning the H/W ring for done descriptors when the
> +	 * number of available descriptors drops below tx_free_thresh.  For
> +	 * each done descriptor, free the associated buffer.
> +	 */
> +	if (txq->nb_tx_free < txq->tx_free_thresh)
> +		ngbe_tx_free_bufs(txq);
> +
> +	/* Only use descriptors that are available */
> +	nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
> +	if (unlikely(nb_pkts == 0))
> +		return 0;
> +
> +	/* Use exactly nb_pkts descriptors */
> +	txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
> +
> +	/*
> +	 * At this point, we know there are enough descriptors in the
> +	 * ring to transmit all the packets.  This assumes that each
> +	 * mbuf contains a single segment, and that no new offloads
> +	 * are expected, which would require a new context descriptor.
> +	 */
> +
> +	/*
> +	 * See if we're going to wrap-around. If so, handle the top
> +	 * of the descriptor ring first, then do the bottom.  If not,
> +	 * the processing looks just like the "bottom" part anyway...
> +	 */
> +	if ((txq->tx_tail + nb_pkts) > txq->nb_tx_desc) {
> +		n = (uint16_t)(txq->nb_tx_desc - txq->tx_tail);
> +		ngbe_tx_fill_hw_ring(txq, tx_pkts, n);
> +		txq->tx_tail = 0;
> +	}
> +
> +	/* Fill H/W descriptor ring with mbuf data */
> +	ngbe_tx_fill_hw_ring(txq, tx_pkts + n, (uint16_t)(nb_pkts - n));
> +	txq->tx_tail = (uint16_t)(txq->tx_tail + (nb_pkts - n));
> +
> +	/*
> +	 * Check for wrap-around. This would only happen if we used
> +	 * up to the last descriptor in the ring, no more, no less.
> +	 */
> +	if (txq->tx_tail >= txq->nb_tx_desc)
> +		txq->tx_tail = 0;
> +
> +	PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u tx_tail=%u nb_tx=%u",
> +		   (uint16_t)txq->port_id, (uint16_t)txq->queue_id,
> +		   (uint16_t)txq->tx_tail, (uint16_t)nb_pkts);
> +
> +	/* update tail pointer */
> +	rte_wmb();
> +	ngbe_set32_relaxed(txq->tdt_reg_addr, txq->tx_tail);
> +
> +	return nb_pkts;
> +}
> +
> +uint16_t
> +ngbe_xmit_pkts_simple(void *tx_queue, struct rte_mbuf **tx_pkts,
> +		       uint16_t nb_pkts)
> +{
> +	uint16_t nb_tx;
> +
> +	/* Try to transmit at least chunks of TX_MAX_BURST pkts */
> +	if (likely(nb_pkts <= RTE_PMD_NGBE_TX_MAX_BURST))
> +		return tx_xmit_pkts(tx_queue, tx_pkts, nb_pkts);
> +
> +	/* transmit more than the max burst, in chunks of TX_MAX_BURST */
> +	nb_tx = 0;
> +	while (nb_pkts) {
> +		uint16_t ret, n;
> +
> +		n = (uint16_t)RTE_MIN(nb_pkts, RTE_PMD_NGBE_TX_MAX_BURST);
> +		ret = tx_xmit_pkts(tx_queue, &tx_pkts[nb_tx], n);
> +		nb_tx = (uint16_t)(nb_tx + ret);
> +		nb_pkts = (uint16_t)(nb_pkts - ret);
> +		if (ret < n)
> +			break;
> +	}
> +
> +	return nb_tx;
> +}
> +
>   #ifndef DEFAULT_TX_FREE_THRESH
>   #define DEFAULT_TX_FREE_THRESH 32
>   #endif
>   
> +/*********************************************************************
> + *
> + *  RX functions

RX -> Rx

> + *
> + **********************************************************************/
> +static inline uint32_t
> +ngbe_rxd_pkt_info_to_pkt_type(uint32_t pkt_info, uint16_t ptid_mask)
> +{
> +	uint16_t ptid = NGBE_RXD_PTID(pkt_info);
> +
> +	ptid &= ptid_mask;
> +
> +	return ngbe_decode_ptype(ptid);
> +}
> +
> +static inline uint64_t
> +ngbe_rxd_pkt_info_to_pkt_flags(uint32_t pkt_info)
> +{
> +	static uint64_t ip_rss_types_map[16] __rte_cache_aligned = {
> +		0, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
> +		0, PKT_RX_RSS_HASH, 0, PKT_RX_RSS_HASH,
> +		PKT_RX_RSS_HASH, 0, 0, 0,
> +		0, 0, 0,  PKT_RX_FDIR,
> +	};
> +	return ip_rss_types_map[NGBE_RXD_RSSTYPE(pkt_info)];
> +}
> +
> +static inline uint64_t
> +rx_desc_status_to_pkt_flags(uint32_t rx_status, uint64_t vlan_flags)
> +{
> +	uint64_t pkt_flags;
> +
> +	/*
> +	 * Check only whether a VLAN is present.
> +	 * Do not check whether the L3/L4 Rx checksum was done by the NIC;
> +	 * that can be found from the rte_eth_rxmode.offloads flag.
> +	 */
> +	pkt_flags = (rx_status & NGBE_RXD_STAT_VLAN &&
> +		     vlan_flags & PKT_RX_VLAN_STRIPPED)
> +		    ? vlan_flags : 0;
> +
> +	return pkt_flags;
> +}
> +
> +static inline uint64_t
> +rx_desc_error_to_pkt_flags(uint32_t rx_status)
> +{
> +	uint64_t pkt_flags = 0;
> +
> +	/* checksum offload can't be disabled */
> +	if (rx_status & NGBE_RXD_STAT_IPCS) {
> +		pkt_flags |= (rx_status & NGBE_RXD_ERR_IPCS
> +				? PKT_RX_IP_CKSUM_BAD : PKT_RX_IP_CKSUM_GOOD);
> +	}
> +
> +	if (rx_status & NGBE_RXD_STAT_L4CS) {
> +		pkt_flags |= (rx_status & NGBE_RXD_ERR_L4CS
> +				? PKT_RX_L4_CKSUM_BAD : PKT_RX_L4_CKSUM_GOOD);
> +	}
> +
> +	if (rx_status & NGBE_RXD_STAT_EIPCS &&
> +	    rx_status & NGBE_RXD_ERR_EIPCS) {
> +		pkt_flags |= PKT_RX_OUTER_IP_CKSUM_BAD;
> +	}
> +
> +
> +	return pkt_flags;
> +}
> +
> +uint16_t
> +ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
> +		uint16_t nb_pkts)
> +{
> +	struct ngbe_rx_queue *rxq;
> +	volatile struct ngbe_rx_desc *rx_ring;
> +	volatile struct ngbe_rx_desc *rxdp;
> +	struct ngbe_rx_entry *sw_ring;
> +	struct ngbe_rx_entry *rxe;
> +	struct rte_mbuf *rxm;
> +	struct rte_mbuf *nmb;
> +	struct ngbe_rx_desc rxd;
> +	uint64_t dma_addr;
> +	uint32_t staterr;
> +	uint32_t pkt_info;
> +	uint16_t pkt_len;
> +	uint16_t rx_id;
> +	uint16_t nb_rx;
> +	uint16_t nb_hold;
> +	uint64_t pkt_flags;
> +
> +	nb_rx = 0;
> +	nb_hold = 0;
> +	rxq = rx_queue;
> +	rx_id = rxq->rx_tail;
> +	rx_ring = rxq->rx_ring;
> +	sw_ring = rxq->sw_ring;
> +	struct rte_eth_dev *dev = &rte_eth_devices[rxq->port_id];
> +	while (nb_rx < nb_pkts) {
> +		/*
> +		 * The order of operations here is important as the DD status
> +		 * bit must not be read after any other descriptor fields.
> +		 * rx_ring and rxdp are pointing to volatile data so the order
> +		 * of accesses cannot be reordered by the compiler. If they were
> +		 * not volatile, they could be reordered which could lead to
> +		 * using invalid descriptor fields when read from rxd.
> +		 */
> +		rxdp = &rx_ring[rx_id];
> +		staterr = rxdp->qw1.lo.status;
> +		if (!(staterr & rte_cpu_to_le_32(NGBE_RXD_STAT_DD)))
> +			break;
> +		rxd = *rxdp;
> +
> +		/*
> +		 * End of packet.
> +		 *
> +		 * If the NGBE_RXD_STAT_EOP flag is not set, the RX packet
> +		 * is likely to be invalid and to be dropped by the various
> +		 * validation checks performed by the network stack.
> +		 *
> +		 * Allocate a new mbuf to replenish the RX ring descriptor.
> +		 * If the allocation fails:
> +		 *    - arrange for that RX descriptor to be the first one
> +		 *      being parsed the next time the receive function is
> +		 *      invoked [on the same queue].
> +		 *
> +		 *    - Stop parsing the RX ring and return immediately.
> +		 *
> +		 * This policy does not drop the packet received in the RX
> +		 * descriptor for which the allocation of a new mbuf failed.
> +		 * Thus, it allows that packet to be later retrieved if
> +		 * mbufs have been freed in the meantime.
> +		 * As a side effect, holding RX descriptors instead of
> +		 * systematically giving them back to the NIC may lead to
> +		 * RX ring exhaustion situations.
> +		 * However, the NIC can gracefully prevent such situations
> +		 * from happening by sending specific "back-pressure" flow
> +		 * control frames to its peer(s).
> +		 */
> +		PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_id=%u "
> +			   "ext_err_stat=0x%08x pkt_len=%u",
> +			   (uint16_t)rxq->port_id, (uint16_t)rxq->queue_id,
> +			   (uint16_t)rx_id, (uint32_t)staterr,
> +			   (uint16_t)rte_le_to_cpu_16(rxd.qw1.hi.len));
> +
> +		nmb = rte_mbuf_raw_alloc(rxq->mb_pool);
> +		if (nmb == NULL) {
> +			PMD_RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u "
> +				   "queue_id=%u", (uint16_t)rxq->port_id,
> +				   (uint16_t)rxq->queue_id);
> +			dev->data->rx_mbuf_alloc_failed++;
> +			break;
> +		}
> +
> +		nb_hold++;
> +		rxe = &sw_ring[rx_id];
> +		rx_id++;
> +		if (rx_id == rxq->nb_rx_desc)
> +			rx_id = 0;
> +
> +		/* Prefetch next mbuf while processing current one. */
> +		rte_ngbe_prefetch(sw_ring[rx_id].mbuf);
> +
> +		/*
> +		 * When next RX descriptor is on a cache-line boundary,
> +		 * prefetch the next 4 RX descriptors and the next 8 pointers
> +		 * to mbufs.
> +		 */
> +		if ((rx_id & 0x3) == 0) {
> +			rte_ngbe_prefetch(&rx_ring[rx_id]);
> +			rte_ngbe_prefetch(&sw_ring[rx_id]);
> +		}
> +
> +		rxm = rxe->mbuf;
> +		rxe->mbuf = nmb;
> +		dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
> +		NGBE_RXD_HDRADDR(rxdp, 0);
> +		NGBE_RXD_PKTADDR(rxdp, dma_addr);
> +
> +		/*
> +		 * Initialize the returned mbuf.
> +		 * 1) setup generic mbuf fields:
> +		 *    - number of segments,
> +		 *    - next segment,
> +		 *    - packet length,
> +		 *    - RX port identifier.
> +		 * 2) integrate hardware offload data, if any:
> +		 *    - RSS flag & hash,
> +		 *    - IP checksum flag,
> +		 *    - VLAN TCI, if any,
> +		 *    - error flags.
> +		 */
> +		pkt_len = (uint16_t)(rte_le_to_cpu_16(rxd.qw1.hi.len) -
> +				      rxq->crc_len);
> +		rxm->data_off = RTE_PKTMBUF_HEADROOM;
> +		rte_packet_prefetch((char *)rxm->buf_addr + rxm->data_off);
> +		rxm->nb_segs = 1;
> +		rxm->next = NULL;
> +		rxm->pkt_len = pkt_len;
> +		rxm->data_len = pkt_len;
> +		rxm->port = rxq->port_id;
> +
> +		pkt_info = rte_le_to_cpu_32(rxd.qw0.dw0);
> +		/* Only valid if PKT_RX_VLAN set in pkt_flags */
> +		rxm->vlan_tci = rte_le_to_cpu_16(rxd.qw1.hi.tag);
> +
> +		pkt_flags = rx_desc_status_to_pkt_flags(staterr,
> +					rxq->vlan_flags);
> +		pkt_flags |= rx_desc_error_to_pkt_flags(staterr);
> +		pkt_flags |= ngbe_rxd_pkt_info_to_pkt_flags(pkt_info);
> +		rxm->ol_flags = pkt_flags;
> +		rxm->packet_type = ngbe_rxd_pkt_info_to_pkt_type(pkt_info,
> +						       rxq->pkt_type_mask);
> +
> +		if (likely(pkt_flags & PKT_RX_RSS_HASH))
> +			rxm->hash.rss = rte_le_to_cpu_32(rxd.qw0.dw1);
> +
> +		/*
> +		 * Store the mbuf address into the next entry of the array
> +		 * of returned packets.
> +		 */
> +		rx_pkts[nb_rx++] = rxm;
> +	}
> +	rxq->rx_tail = rx_id;
> +
> +	/*
> +	 * If the number of free RX descriptors is greater than the RX free

RX -> Rx

> +	 * threshold of the queue, advance the Receive Descriptor Tail (RDT)
> +	 * register.
> +	 * Update the RDT with the value of the last processed RX descriptor

RX -> Rx

> +	 * minus 1, to guarantee that the RDT register is never equal to the
> +	 * RDH register, which creates a "full" ring situation from the
> +	 * hardware point of view...
> +	 */
> +	nb_hold = (uint16_t)(nb_hold + rxq->nb_rx_hold);
> +	if (nb_hold > rxq->rx_free_thresh) {
> +		PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_tail=%u "
> +			   "nb_hold=%u nb_rx=%u",
> +			   (uint16_t)rxq->port_id, (uint16_t)rxq->queue_id,
> +			   (uint16_t)rx_id, (uint16_t)nb_hold,
> +			   (uint16_t)nb_rx);
> +		rx_id = (uint16_t)((rx_id == 0) ?
> +				(rxq->nb_rx_desc - 1) : (rx_id - 1));
> +		ngbe_set32(rxq->rdt_reg_addr, rx_id);
> +		nb_hold = 0;
> +	}
> +	rxq->nb_rx_hold = nb_hold;
> +	return nb_rx;
> +}
> +
>   /*********************************************************************
>    *
>    *  Queue management functions
> diff --git a/drivers/net/ngbe/ngbe_rxtx.h b/drivers/net/ngbe/ngbe_rxtx.h
> index f30da10ae3..d6b9127cb4 100644
> --- a/drivers/net/ngbe/ngbe_rxtx.h
> +++ b/drivers/net/ngbe/ngbe_rxtx.h
> @@ -43,6 +43,85 @@ struct ngbe_rx_desc {
>   	} qw1; /* also as r.hdr_addr */
>   };
>   
> +/* @ngbe_rx_desc.qw0 */
> +#define NGBE_RXD_PKTADDR(rxd, v)  \
> +	(((volatile __le64 *)(rxd))[0] = cpu_to_le64(v))
> +
> +/* @ngbe_rx_desc.qw1 */
> +#define NGBE_RXD_HDRADDR(rxd, v)  \
> +	(((volatile __le64 *)(rxd))[1] = cpu_to_le64(v))
> +
> +/* @ngbe_rx_desc.dw0 */
> +#define NGBE_RXD_RSSTYPE(dw)      RS(dw, 0, 0xF)
> +#define   NGBE_RSSTYPE_NONE       0
> +#define   NGBE_RSSTYPE_IPV4TCP    1
> +#define   NGBE_RSSTYPE_IPV4       2
> +#define   NGBE_RSSTYPE_IPV6TCP    3
> +#define   NGBE_RSSTYPE_IPV4SCTP   4
> +#define   NGBE_RSSTYPE_IPV6       5
> +#define   NGBE_RSSTYPE_IPV6SCTP   6
> +#define   NGBE_RSSTYPE_IPV4UDP    7
> +#define   NGBE_RSSTYPE_IPV6UDP    8
> +#define   NGBE_RSSTYPE_FDIR       15
> +#define NGBE_RXD_SECTYPE(dw)      RS(dw, 4, 0x3)
> +#define NGBE_RXD_SECTYPE_NONE     LS(0, 4, 0x3)
> +#define NGBE_RXD_SECTYPE_IPSECESP LS(2, 4, 0x3)
> +#define NGBE_RXD_SECTYPE_IPSECAH  LS(3, 4, 0x3)
> +#define NGBE_RXD_TPIDSEL(dw)      RS(dw, 6, 0x7)
> +#define NGBE_RXD_PTID(dw)         RS(dw, 9, 0xFF)
> +#define NGBE_RXD_RSCCNT(dw)       RS(dw, 17, 0xF)
> +#define NGBE_RXD_HDRLEN(dw)       RS(dw, 21, 0x3FF)
> +#define NGBE_RXD_SPH              MS(31, 0x1)
> +
> +/* @ngbe_rx_desc.dw1 */
> +/** bit 0-31, as rss hash when  **/
> +#define NGBE_RXD_RSSHASH(rxd)     ((rxd)->qw0.dw1)
> +
> +/** bit 0-31, as ip csum when  **/
> +#define NGBE_RXD_IPID(rxd)        ((rxd)->qw0.hi.ipid)
> +#define NGBE_RXD_CSUM(rxd)        ((rxd)->qw0.hi.csum)
> +
> +/* @ngbe_rx_desc.dw2 */
> +#define NGBE_RXD_STATUS(rxd)      ((rxd)->qw1.lo.status)
> +/** bit 0-1 **/
> +#define NGBE_RXD_STAT_DD          MS(0, 0x1) /* Descriptor Done */
> +#define NGBE_RXD_STAT_EOP         MS(1, 0x1) /* End of Packet */
> +/** bit 2-31, when EOP=0 **/
> +#define NGBE_RXD_NEXTP_RESV(v)    LS(v, 2, 0x3)
> +#define NGBE_RXD_NEXTP(dw)        RS(dw, 4, 0xFFFF) /* Next Descriptor */
> +/** bit 2-31, when EOP=1 **/
> +#define NGBE_RXD_PKT_CLS_MASK     MS(2, 0x7) /* Packet Class */
> +#define NGBE_RXD_PKT_CLS_TC_RSS   LS(0, 2, 0x7) /* RSS Hash */
> +#define NGBE_RXD_PKT_CLS_FLM      LS(1, 2, 0x7) /* FDir Match */
> +#define NGBE_RXD_PKT_CLS_SYN      LS(2, 2, 0x7) /* TCP Sync */
> +#define NGBE_RXD_PKT_CLS_5TUPLE   LS(3, 2, 0x7) /* 5 Tuple */
> +#define NGBE_RXD_PKT_CLS_ETF      LS(4, 2, 0x7) /* Ethertype Filter */
> +#define NGBE_RXD_STAT_VLAN        MS(5, 0x1) /* IEEE VLAN Packet */
> +#define NGBE_RXD_STAT_UDPCS       MS(6, 0x1) /* UDP xsum calculated */
> +#define NGBE_RXD_STAT_L4CS        MS(7, 0x1) /* L4 xsum calculated */
> +#define NGBE_RXD_STAT_IPCS        MS(8, 0x1) /* IP xsum calculated */
> +#define NGBE_RXD_STAT_PIF         MS(9, 0x1) /* Non-unicast address */
> +#define NGBE_RXD_STAT_EIPCS       MS(10, 0x1) /* Encap IP xsum calculated */
> +#define NGBE_RXD_STAT_VEXT        MS(11, 0x1) /* Multi-VLAN */
> +#define NGBE_RXD_STAT_IPV6EX      MS(12, 0x1) /* IPv6 with option header */
> +#define NGBE_RXD_STAT_LLINT       MS(13, 0x1) /* Pkt caused LLI */
> +#define NGBE_RXD_STAT_1588        MS(14, 0x1) /* IEEE1588 Time Stamp */
> +#define NGBE_RXD_STAT_SECP        MS(15, 0x1) /* Security Processing */
> +#define NGBE_RXD_STAT_LB          MS(16, 0x1) /* Loopback Status */
> +/*** bit 17-30, when PTYPE=IP ***/
> +#define NGBE_RXD_STAT_BMC         MS(17, 0x1) /* PTYPE=IP, BMC status */
> +#define NGBE_RXD_ERR_HBO          MS(23, 0x1) /* Header Buffer Overflow */
> +#define NGBE_RXD_ERR_EIPCS        MS(26, 0x1) /* Encap IP header error */
> +#define NGBE_RXD_ERR_SECERR       MS(27, 0x1) /* macsec or ipsec error */
> +#define NGBE_RXD_ERR_RXE          MS(29, 0x1) /* Any MAC Error */
> +#define NGBE_RXD_ERR_L4CS         MS(30, 0x1) /* TCP/UDP xsum error */
> +#define NGBE_RXD_ERR_IPCS         MS(31, 0x1) /* IP xsum error */
> +#define NGBE_RXD_ERR_CSUM(dw)     RS(dw, 30, 0x3)
> +
> +/* @ngbe_rx_desc.dw3 */
> +#define NGBE_RXD_LENGTH(rxd)           ((rxd)->qw1.hi.len)
> +#define NGBE_RXD_VLAN(rxd)             ((rxd)->qw1.hi.tag)
> +
>   /*****************************************************************************
>    * Transmit Descriptor
>    *****************************************************************************/
> @@ -68,11 +147,40 @@ struct ngbe_tx_desc {
>   	__le32 dw3; /* r.olinfo_status, w.status      */
>   };
>   
> +/* @ngbe_tx_desc.dw2 */
> +#define NGBE_TXD_DATLEN(v)        ((0xFFFF & (v))) /* data buffer length */
> +#define NGBE_TXD_1588             ((0x1) << 19) /* IEEE1588 time stamp */
> +#define NGBE_TXD_DATA             ((0x0) << 20) /* data descriptor */
> +#define NGBE_TXD_EOP              ((0x1) << 24) /* End of Packet */
> +#define NGBE_TXD_FCS              ((0x1) << 25) /* Insert FCS */
> +#define NGBE_TXD_LINKSEC          ((0x1) << 26) /* Insert LinkSec */
> +#define NGBE_TXD_ECU              ((0x1) << 28) /* forward to ECU */
> +#define NGBE_TXD_CNTAG            ((0x1) << 29) /* insert CN tag */
> +#define NGBE_TXD_VLE              ((0x1) << 30) /* insert VLAN tag */
> +#define NGBE_TXD_TSE              ((0x1) << 31) /* transmit segmentation */
> +
> +#define NGBE_TXD_FLAGS (NGBE_TXD_FCS | NGBE_TXD_EOP)
> +
> +/* @ngbe_tx_desc.dw3 */
> +#define NGBE_TXD_DD_UNUSED        NGBE_TXD_DD
> +#define NGBE_TXD_IDX_UNUSED(v)    NGBE_TXD_IDX(v)
> +#define NGBE_TXD_CC               ((0x1) << 7) /* check context */
> +#define NGBE_TXD_IPSEC            ((0x1) << 8) /* request ipsec offload */
> +#define NGBE_TXD_L4CS             ((0x1) << 9) /* insert TCP/UDP/SCTP csum */
> +#define NGBE_TXD_IPCS             ((0x1) << 10) /* insert IPv4 csum */
> +#define NGBE_TXD_EIPCS            ((0x1) << 11) /* insert outer IP csum */
> +#define NGBE_TXD_MNGFLT           ((0x1) << 12) /* enable management filter */
> +#define NGBE_TXD_PAYLEN(v)        ((0x7FFFF & (v)) << 13) /* payload length */
> +
> +#define RTE_PMD_NGBE_TX_MAX_BURST 32
>   #define RTE_PMD_NGBE_RX_MAX_BURST 32
> +#define RTE_NGBE_TX_MAX_FREE_BUF_SZ 64
>   
>   #define RX_RING_SZ ((NGBE_RING_DESC_MAX + RTE_PMD_NGBE_RX_MAX_BURST) * \
>   		    sizeof(struct ngbe_rx_desc))
>   
> +#define rte_packet_prefetch(p)  rte_prefetch1(p)
> +
>   #define NGBE_TX_MAX_SEG                    40
>   
>   /**
> @@ -124,6 +232,8 @@ struct ngbe_rx_queue {
>   	uint8_t             crc_len;  /**< 0 if CRC stripped, 4 otherwise. */
>   	uint8_t             drop_en;  /**< If not 0, set SRRCTL.Drop_En. */
>   	uint8_t             rx_deferred_start; /**< not in global dev start. */
> +	/** flags to set in mbuf when a vlan is detected. */
> +	uint64_t            vlan_flags;
>   	uint64_t	    offloads; /**< Rx offloads with DEV_RX_OFFLOAD_* */
>   	/** need to alloc dummy mbuf, for wraparound when scanning hw ring */
>   	struct rte_mbuf fake_mbuf;
> 


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [dpdk-dev] [PATCH v5 20/24] net/ngbe: support bulk and scatter Rx
  2021-06-02  9:41 ` [dpdk-dev] [PATCH v5 20/24] net/ngbe: support bulk and scatter Rx Jiawen Wu
@ 2021-06-14 19:17   ` Andrew Rybchenko
  0 siblings, 0 replies; 51+ messages in thread
From: Andrew Rybchenko @ 2021-06-14 19:17 UTC (permalink / raw)
  To: Jiawen Wu, dev

On 6/2/21 12:41 PM, Jiawen Wu wrote:
> Add a bulk allocation receive function, and support scattered Rx
> relying on the Rx offload.

The Rx scatter offload should be advertised by this patch, not an
earlier one. The corresponding bits of the patch should be here as well.
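
E.g. something along these lines in ngbe_dev_info_get() (a sketch,
using the offload flag name current at the time):

	dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_SCATTER;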

> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
> ---
>   doc/guides/nics/ngbe.rst       |   1 +
>   drivers/net/ngbe/ngbe_ethdev.c |  15 +-
>   drivers/net/ngbe/ngbe_ethdev.h |   8 +
>   drivers/net/ngbe/ngbe_rxtx.c   | 583 +++++++++++++++++++++++++++++++++
>   drivers/net/ngbe/ngbe_rxtx.h   |   2 +
>   5 files changed, 607 insertions(+), 2 deletions(-)
> 
> diff --git a/doc/guides/nics/ngbe.rst b/doc/guides/nics/ngbe.rst
> index 04fa3e90a8..e999e0b580 100644
> --- a/doc/guides/nics/ngbe.rst
> +++ b/doc/guides/nics/ngbe.rst
> @@ -14,6 +14,7 @@ Features
>   - Checksum offload
>   - Jumbo frames
>   - Link state information
> +- Scattered and gather for RX
>   
>   Prerequisites
>   -------------
> diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
> index 4dab920caa..260bca0e4f 100644
> --- a/drivers/net/ngbe/ngbe_ethdev.c
> +++ b/drivers/net/ngbe/ngbe_ethdev.c
> @@ -112,8 +112,16 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
>   	eth_dev->rx_pkt_burst = &ngbe_recv_pkts;
>   	eth_dev->tx_pkt_burst = &ngbe_xmit_pkts_simple;
>   
> -	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
> +	/*
> +	 * For secondary processes, we don't initialise any further as primary
> +	 * has already done this work. Only check we don't need a different
> +	 * RX and TX function.

RX -> Rx, TX -> Tx

> +	 */
> +	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
> +		ngbe_set_rx_function(eth_dev);
> +
>   		return 0;
> +	}
>   
>   	rte_eth_copy_pci_info(eth_dev, pci_dev);
>   
> @@ -359,7 +367,10 @@ ngbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
>   const uint32_t *
>   ngbe_dev_supported_ptypes_get(struct rte_eth_dev *dev)
>   {
> -	if (dev->rx_pkt_burst == ngbe_recv_pkts)
> +	if (dev->rx_pkt_burst == ngbe_recv_pkts ||
> +	    dev->rx_pkt_burst == ngbe_recv_pkts_sc_single_alloc ||
> +	    dev->rx_pkt_burst == ngbe_recv_pkts_sc_bulk_alloc ||
> +	    dev->rx_pkt_burst == ngbe_recv_pkts_bulk_alloc)

I don't understand why three flavors of Rx are added in a single
patch. They look like separate features with separate conditions for use.

>   		return ngbe_get_supported_ptypes();
>   
>   	return NULL;
> diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
> index c0f8483eca..1e21db5e25 100644
> --- a/drivers/net/ngbe/ngbe_ethdev.h
> +++ b/drivers/net/ngbe/ngbe_ethdev.h
> @@ -78,6 +78,14 @@ void ngbe_dev_tx_init(struct rte_eth_dev *dev);
>   uint16_t ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
>   		uint16_t nb_pkts);
>   
> +uint16_t ngbe_recv_pkts_bulk_alloc(void *rx_queue, struct rte_mbuf **rx_pkts,
> +				    uint16_t nb_pkts);
> +
> +uint16_t ngbe_recv_pkts_sc_single_alloc(void *rx_queue,
> +		struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
> +uint16_t ngbe_recv_pkts_sc_bulk_alloc(void *rx_queue,
> +		struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
> +
>   uint16_t ngbe_xmit_pkts_simple(void *tx_queue, struct rte_mbuf **tx_pkts,
>   		uint16_t nb_pkts);
>   
> diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c
> index 9462da5b7a..f633718237 100644
> --- a/drivers/net/ngbe/ngbe_rxtx.c
> +++ b/drivers/net/ngbe/ngbe_rxtx.c
> @@ -321,6 +321,257 @@ rx_desc_error_to_pkt_flags(uint32_t rx_status)
>   	return pkt_flags;
>   }
>   
> +/*
> + * LOOK_AHEAD defines how many desc statuses to check beyond the
> + * current descriptor.
> + * It must be a pound define for optimal performance.
> + * Do not change the value of LOOK_AHEAD, as the ngbe_rx_scan_hw_ring
> + * function only works with LOOK_AHEAD=8.
> + */
> +#define LOOK_AHEAD 8
> +#if (LOOK_AHEAD != 8)
> +#error "PMD NGBE: LOOK_AHEAD must be 8\n"
> +#endif
> +static inline int
> +ngbe_rx_scan_hw_ring(struct ngbe_rx_queue *rxq)
> +{
> +	volatile struct ngbe_rx_desc *rxdp;
> +	struct ngbe_rx_entry *rxep;
> +	struct rte_mbuf *mb;
> +	uint16_t pkt_len;
> +	uint64_t pkt_flags;
> +	int nb_dd;
> +	uint32_t s[LOOK_AHEAD];
> +	uint32_t pkt_info[LOOK_AHEAD];
> +	int i, j, nb_rx = 0;
> +	uint32_t status;
> +
> +	/* get references to current descriptor and S/W ring entry */
> +	rxdp = &rxq->rx_ring[rxq->rx_tail];
> +	rxep = &rxq->sw_ring[rxq->rx_tail];
> +
> +	status = rxdp->qw1.lo.status;
> +	/* check to make sure there is at least 1 packet to receive */
> +	if (!(status & rte_cpu_to_le_32(NGBE_RXD_STAT_DD)))
> +		return 0;
> +
> +	/*
> +	 * Scan LOOK_AHEAD descriptors at a time to determine which descriptors
> +	 * reference packets that are ready to be received.
> +	 */
> +	for (i = 0; i < RTE_PMD_NGBE_RX_MAX_BURST;
> +	     i += LOOK_AHEAD, rxdp += LOOK_AHEAD, rxep += LOOK_AHEAD) {
> +		/* Read desc statuses backwards to avoid race condition */
> +		for (j = 0; j < LOOK_AHEAD; j++)
> +			s[j] = rte_le_to_cpu_32(rxdp[j].qw1.lo.status);
> +
> +		rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
> +
> +		/* Compute how many status bits were set */
> +		for (nb_dd = 0; nb_dd < LOOK_AHEAD &&
> +				(s[nb_dd] & NGBE_RXD_STAT_DD); nb_dd++)
> +			;
> +
> +		for (j = 0; j < nb_dd; j++)
> +			pkt_info[j] = rte_le_to_cpu_32(rxdp[j].qw0.dw0);
> +
> +		nb_rx += nb_dd;
> +
> +		/* Translate descriptor info to mbuf format */
> +		for (j = 0; j < nb_dd; ++j) {
> +			mb = rxep[j].mbuf;
> +			pkt_len = rte_le_to_cpu_16(rxdp[j].qw1.hi.len) -
> +				  rxq->crc_len;
> +			mb->data_len = pkt_len;
> +			mb->pkt_len = pkt_len;
> +			mb->vlan_tci = rte_le_to_cpu_16(rxdp[j].qw1.hi.tag);
> +
> +			/* convert descriptor fields to rte mbuf flags */
> +			pkt_flags = rx_desc_status_to_pkt_flags(s[j],
> +					rxq->vlan_flags);
> +			pkt_flags |= rx_desc_error_to_pkt_flags(s[j]);
> +			pkt_flags |=
> +				ngbe_rxd_pkt_info_to_pkt_flags(pkt_info[j]);
> +			mb->ol_flags = pkt_flags;
> +			mb->packet_type =
> +				ngbe_rxd_pkt_info_to_pkt_type(pkt_info[j],
> +				rxq->pkt_type_mask);
> +
> +			if (likely(pkt_flags & PKT_RX_RSS_HASH))
> +				mb->hash.rss =
> +					rte_le_to_cpu_32(rxdp[j].qw0.dw1);
> +		}
> +
> +		/* Move mbuf pointers from the S/W ring to the stage */
> +		for (j = 0; j < LOOK_AHEAD; ++j)
> +			rxq->rx_stage[i + j] = rxep[j].mbuf;
> +
> +		/* stop if all requested packets could not be received */
> +		if (nb_dd != LOOK_AHEAD)
> +			break;
> +	}
> +
> +	/* clear software ring entries so we can cleanup correctly */
> +	for (i = 0; i < nb_rx; ++i)
> +		rxq->sw_ring[rxq->rx_tail + i].mbuf = NULL;
> +
> +	return nb_rx;
> +}
> +
> +static inline int
> +ngbe_rx_alloc_bufs(struct ngbe_rx_queue *rxq, bool reset_mbuf)
> +{
> +	volatile struct ngbe_rx_desc *rxdp;
> +	struct ngbe_rx_entry *rxep;
> +	struct rte_mbuf *mb;
> +	uint16_t alloc_idx;
> +	__le64 dma_addr;
> +	int diag, i;
> +
> +	/* allocate buffers in bulk directly into the S/W ring */
> +	alloc_idx = rxq->rx_free_trigger - (rxq->rx_free_thresh - 1);
> +	rxep = &rxq->sw_ring[alloc_idx];
> +	diag = rte_mempool_get_bulk(rxq->mb_pool, (void *)rxep,
> +				    rxq->rx_free_thresh);
> +	if (unlikely(diag != 0))
> +		return -ENOMEM;
> +
> +	rxdp = &rxq->rx_ring[alloc_idx];
> +	for (i = 0; i < rxq->rx_free_thresh; ++i) {
> +		/* populate the static rte mbuf fields */
> +		mb = rxep[i].mbuf;
> +		if (reset_mbuf)
> +			mb->port = rxq->port_id;
> +
> +		rte_mbuf_refcnt_set(mb, 1);
> +		mb->data_off = RTE_PKTMBUF_HEADROOM;
> +
> +		/* populate the descriptors */
> +		dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mb));
> +		NGBE_RXD_HDRADDR(&rxdp[i], 0);
> +		NGBE_RXD_PKTADDR(&rxdp[i], dma_addr);
> +	}
> +
> +	/* update state of internal queue structure */
> +	rxq->rx_free_trigger = rxq->rx_free_trigger + rxq->rx_free_thresh;
> +	if (rxq->rx_free_trigger >= rxq->nb_rx_desc)
> +		rxq->rx_free_trigger = rxq->rx_free_thresh - 1;
> +
> +	/* no errors */
> +	return 0;
> +}
> +
> +static inline uint16_t
> +ngbe_rx_fill_from_stage(struct ngbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
> +			 uint16_t nb_pkts)
> +{
> +	struct rte_mbuf **stage = &rxq->rx_stage[rxq->rx_next_avail];
> +	int i;
> +
> +	/* how many packets are ready to return? */
> +	nb_pkts = (uint16_t)RTE_MIN(nb_pkts, rxq->rx_nb_avail);
> +
> +	/* copy mbuf pointers to the application's packet list */
> +	for (i = 0; i < nb_pkts; ++i)
> +		rx_pkts[i] = stage[i];
> +
> +	/* update internal queue state */
> +	rxq->rx_nb_avail = (uint16_t)(rxq->rx_nb_avail - nb_pkts);
> +	rxq->rx_next_avail = (uint16_t)(rxq->rx_next_avail + nb_pkts);
> +
> +	return nb_pkts;
> +}
> +
> +static inline uint16_t
> +ngbe_rx_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
> +	     uint16_t nb_pkts)
> +{
> +	struct ngbe_rx_queue *rxq = (struct ngbe_rx_queue *)rx_queue;
> +	struct rte_eth_dev *dev = &rte_eth_devices[rxq->port_id];
> +	uint16_t nb_rx = 0;
> +
> +	/* Any previously recv'd pkts will be returned from the Rx stage */
> +	if (rxq->rx_nb_avail)
> +		return ngbe_rx_fill_from_stage(rxq, rx_pkts, nb_pkts);
> +
> +	/* Scan the H/W ring for packets to receive */
> +	nb_rx = (uint16_t)ngbe_rx_scan_hw_ring(rxq);
> +
> +	/* update internal queue state */
> +	rxq->rx_next_avail = 0;
> +	rxq->rx_nb_avail = nb_rx;
> +	rxq->rx_tail = (uint16_t)(rxq->rx_tail + nb_rx);
> +
> +	/* if required, allocate new buffers to replenish descriptors */
> +	if (rxq->rx_tail > rxq->rx_free_trigger) {
> +		uint16_t cur_free_trigger = rxq->rx_free_trigger;
> +
> +		if (ngbe_rx_alloc_bufs(rxq, true) != 0) {
> +			int i, j;
> +
> +			PMD_RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u "
> +				   "queue_id=%u", (uint16_t)rxq->port_id,
> +				   (uint16_t)rxq->queue_id);
> +
> +			dev->data->rx_mbuf_alloc_failed +=
> +				rxq->rx_free_thresh;
> +
> +			/*
> +			 * Need to rewind any previous receives if we cannot
> +			 * allocate new buffers to replenish the old ones.
> +			 */
> +			rxq->rx_nb_avail = 0;
> +			rxq->rx_tail = (uint16_t)(rxq->rx_tail - nb_rx);
> +			for (i = 0, j = rxq->rx_tail; i < nb_rx; ++i, ++j)
> +				rxq->sw_ring[j].mbuf = rxq->rx_stage[i];
> +
> +			return 0;
> +		}
> +
> +		/* update tail pointer */
> +		rte_wmb();
> +		ngbe_set32_relaxed(rxq->rdt_reg_addr, cur_free_trigger);
> +	}
> +
> +	if (rxq->rx_tail >= rxq->nb_rx_desc)
> +		rxq->rx_tail = 0;
> +
> +	/* received any packets this loop? */
> +	if (rxq->rx_nb_avail)
> +		return ngbe_rx_fill_from_stage(rxq, rx_pkts, nb_pkts);
> +
> +	return 0;
> +}
> +
> +/* split requests into chunks of size RTE_PMD_NGBE_RX_MAX_BURST */
> +uint16_t
> +ngbe_recv_pkts_bulk_alloc(void *rx_queue, struct rte_mbuf **rx_pkts,
> +			   uint16_t nb_pkts)
> +{
> +	uint16_t nb_rx;
> +
> +	if (unlikely(nb_pkts == 0))
> +		return 0;
> +
> +	if (likely(nb_pkts <= RTE_PMD_NGBE_RX_MAX_BURST))
> +		return ngbe_rx_recv_pkts(rx_queue, rx_pkts, nb_pkts);
> +
> +	/* request is relatively large, chunk it up */
> +	nb_rx = 0;
> +	while (nb_pkts) {
> +		uint16_t ret, n;
> +
> +		n = (uint16_t)RTE_MIN(nb_pkts, RTE_PMD_NGBE_RX_MAX_BURST);
> +		ret = ngbe_rx_recv_pkts(rx_queue, &rx_pkts[nb_rx], n);
> +		nb_rx = (uint16_t)(nb_rx + ret);
> +		nb_pkts = (uint16_t)(nb_pkts - ret);
> +		if (ret < n)
> +			break;
> +	}
> +
> +	return nb_rx;
> +}
> +
>   uint16_t
>   ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
>   		uint16_t nb_pkts)
> @@ -501,6 +752,288 @@ ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
>   	return nb_rx;
>   }
>   
> +/**
> + * ngbe_fill_cluster_head_buf - fill the first mbuf of the returned packet
> + *
> + * Fill the following info in the HEAD buffer of the Rx cluster:
> + *    - RX port identifier
> + *    - hardware offload data, if any:
> + *      - RSS flag & hash
> + *      - IP checksum flag
> + *      - VLAN TCI, if any
> + *      - error flags
> + * @head HEAD of the packet cluster
> + * @desc HW descriptor to get data from
> + * @rxq Pointer to the Rx queue
> + */
> +static inline void
> +ngbe_fill_cluster_head_buf(struct rte_mbuf *head, struct ngbe_rx_desc *desc,
> +		struct ngbe_rx_queue *rxq, uint32_t staterr)
> +{
> +	uint32_t pkt_info;
> +	uint64_t pkt_flags;
> +
> +	head->port = rxq->port_id;
> +
> +	/* The vlan_tci field is only valid when PKT_RX_VLAN is
> +	 * set in the pkt_flags field.
> +	 */
> +	head->vlan_tci = rte_le_to_cpu_16(desc->qw1.hi.tag);
> +	pkt_info = rte_le_to_cpu_32(desc->qw0.dw0);
> +	pkt_flags = rx_desc_status_to_pkt_flags(staterr, rxq->vlan_flags);
> +	pkt_flags |= rx_desc_error_to_pkt_flags(staterr);
> +	pkt_flags |= ngbe_rxd_pkt_info_to_pkt_flags(pkt_info);
> +	head->ol_flags = pkt_flags;
> +	head->packet_type = ngbe_rxd_pkt_info_to_pkt_type(pkt_info,
> +						rxq->pkt_type_mask);
> +
> +	if (likely(pkt_flags & PKT_RX_RSS_HASH))
> +		head->hash.rss = rte_le_to_cpu_32(desc->qw0.dw1);
> +}
> +
> +/**
> + * ngbe_recv_pkts_sc - receive handler for scatter case.
> + *
> + * @rx_queue Rx queue handle
> + * @rx_pkts table of received packets
> + * @nb_pkts size of rx_pkts table
> + * @bulk_alloc if TRUE bulk allocation is used for a HW ring refilling
> + *
> + * Returns the number of received packets/clusters (according to the "bulk
> + * receive" interface).
> + */
> +static inline uint16_t
> +ngbe_recv_pkts_sc(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts,
> +		    bool bulk_alloc)
> +{
> +	struct ngbe_rx_queue *rxq = rx_queue;
> +	struct rte_eth_dev *dev = &rte_eth_devices[rxq->port_id];
> +	volatile struct ngbe_rx_desc *rx_ring = rxq->rx_ring;
> +	struct ngbe_rx_entry *sw_ring = rxq->sw_ring;
> +	struct ngbe_scattered_rx_entry *sw_sc_ring = rxq->sw_sc_ring;
> +	uint16_t rx_id = rxq->rx_tail;
> +	uint16_t nb_rx = 0;
> +	uint16_t nb_hold = rxq->nb_rx_hold;
> +	uint16_t prev_id = rxq->rx_tail;
> +
> +	while (nb_rx < nb_pkts) {
> +		bool eop;
> +		struct ngbe_rx_entry *rxe;
> +		struct ngbe_scattered_rx_entry *sc_entry;
> +		struct ngbe_scattered_rx_entry *next_sc_entry = NULL;
> +		struct ngbe_rx_entry *next_rxe = NULL;
> +		struct rte_mbuf *first_seg;
> +		struct rte_mbuf *rxm;
> +		struct rte_mbuf *nmb = NULL;
> +		struct ngbe_rx_desc rxd;
> +		uint16_t data_len;
> +		uint16_t next_id;
> +		volatile struct ngbe_rx_desc *rxdp;
> +		uint32_t staterr;
> +
> +next_desc:
> +		rxdp = &rx_ring[rx_id];
> +		staterr = rte_le_to_cpu_32(rxdp->qw1.lo.status);
> +
> +		if (!(staterr & NGBE_RXD_STAT_DD))
> +			break;
> +
> +		rxd = *rxdp;
> +
> +		PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_id=%u "
> +				  "staterr=0x%x data_len=%u",
> +			   rxq->port_id, rxq->queue_id, rx_id, staterr,
> +			   rte_le_to_cpu_16(rxd.qw1.hi.len));
> +
> +		if (!bulk_alloc) {
> +			nmb = rte_mbuf_raw_alloc(rxq->mb_pool);
> +			if (nmb == NULL) {
> +				PMD_RX_LOG(DEBUG, "RX mbuf alloc failed "
> +						  "port_id=%u queue_id=%u",
> +					   rxq->port_id, rxq->queue_id);
> +
> +				dev->data->rx_mbuf_alloc_failed++;
> +				break;
> +			}
> +		} else if (nb_hold > rxq->rx_free_thresh) {
> +			uint16_t next_rdt = rxq->rx_free_trigger;
> +
> +			if (!ngbe_rx_alloc_bufs(rxq, false)) {
> +				rte_wmb();
> +				ngbe_set32_relaxed(rxq->rdt_reg_addr,
> +							    next_rdt);
> +				nb_hold -= rxq->rx_free_thresh;
> +			} else {
> +				PMD_RX_LOG(DEBUG, "RX bulk alloc failed "
> +						  "port_id=%u queue_id=%u",
> +					   rxq->port_id, rxq->queue_id);
> +
> +				dev->data->rx_mbuf_alloc_failed++;
> +				break;
> +			}
> +		}
> +
> +		nb_hold++;
> +		rxe = &sw_ring[rx_id];
> +		eop = staterr & NGBE_RXD_STAT_EOP;
> +
> +		next_id = rx_id + 1;
> +		if (next_id == rxq->nb_rx_desc)
> +			next_id = 0;
> +
> +		/* Prefetch next mbuf while processing current one. */
> +		rte_ngbe_prefetch(sw_ring[next_id].mbuf);
> +
> +		/*
> +		 * When next RX descriptor is on a cache-line boundary,
> +		 * prefetch the next 4 RX descriptors and the next 4 pointers
> +		 * to mbufs.
> +		 */
> +		if ((next_id & 0x3) == 0) {
> +			rte_ngbe_prefetch(&rx_ring[next_id]);
> +			rte_ngbe_prefetch(&sw_ring[next_id]);
> +		}
> +
> +		rxm = rxe->mbuf;
> +
> +		if (!bulk_alloc) {
> +			__le64 dma =
> +			  rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
> +			/*
> +			 * Update RX descriptor with the physical address of the
> +			 * new data buffer of the new allocated mbuf.
> +			 */
> +			rxe->mbuf = nmb;
> +
> +			rxm->data_off = RTE_PKTMBUF_HEADROOM;
> +			NGBE_RXD_HDRADDR(rxdp, 0);
> +			NGBE_RXD_PKTADDR(rxdp, dma);
> +		} else {
> +			rxe->mbuf = NULL;
> +		}
> +
> +		/*
> +		 * Set data length & data buffer address of mbuf.
> +		 */
> +		data_len = rte_le_to_cpu_16(rxd.qw1.hi.len);
> +		rxm->data_len = data_len;
> +
> +		if (!eop) {
> +			uint16_t nextp_id;
> +
> +			nextp_id = next_id;
> +			next_sc_entry = &sw_sc_ring[nextp_id];
> +			next_rxe = &sw_ring[nextp_id];
> +			rte_ngbe_prefetch(next_rxe);
> +		}
> +
> +		sc_entry = &sw_sc_ring[rx_id];
> +		first_seg = sc_entry->fbuf;
> +		sc_entry->fbuf = NULL;
> +
> +		/*
> +		 * If this is the first buffer of the received packet,
> +		 * set the pointer to the first mbuf of the packet and
> +		 * initialize its context.
> +		 * Otherwise, update the total length and the number of segments
> +		 * of the current scattered packet, and update the pointer to
> +		 * the last mbuf of the current packet.
> +		 */
> +		if (first_seg == NULL) {
> +			first_seg = rxm;
> +			first_seg->pkt_len = data_len;
> +			first_seg->nb_segs = 1;
> +		} else {
> +			first_seg->pkt_len += data_len;
> +			first_seg->nb_segs++;
> +		}
> +
> +		prev_id = rx_id;
> +		rx_id = next_id;
> +
> +		/*
> +		 * If this is not the last buffer of the received packet, update
> +		 * the pointer to the first mbuf at the NEXTP entry in the
> +		 * sw_sc_ring and continue to parse the RX ring.

RX -> Rx

> +		 */
> +		if (!eop && next_rxe) {
> +			rxm->next = next_rxe->mbuf;
> +			next_sc_entry->fbuf = first_seg;
> +			goto next_desc;
> +		}
> +
> +		/* Initialize the first mbuf of the returned packet */
> +		ngbe_fill_cluster_head_buf(first_seg, &rxd, rxq, staterr);
> +
> +		/* Deal with the case when HW CRC strip is disabled. */
> +		first_seg->pkt_len -= rxq->crc_len;
> +		if (unlikely(rxm->data_len <= rxq->crc_len)) {
> +			struct rte_mbuf *lp;
> +
> +			for (lp = first_seg; lp->next != rxm; lp = lp->next)
> +				;
> +
> +			first_seg->nb_segs--;
> +			lp->data_len -= rxq->crc_len - rxm->data_len;
> +			lp->next = NULL;
> +			rte_pktmbuf_free_seg(rxm);
> +		} else {
> +			rxm->data_len -= rxq->crc_len;
> +		}
> +
> +		/* Prefetch data of first segment, if configured to do so. */
> +		rte_packet_prefetch((char *)first_seg->buf_addr +
> +			first_seg->data_off);
> +
> +		/*
> +		 * Store the mbuf address into the next entry of the array
> +		 * of returned packets.
> +		 */
> +		rx_pkts[nb_rx++] = first_seg;
> +	}
> +
> +	/*
> +	 * Record index of the next RX descriptor to probe.
> +	 */
> +	rxq->rx_tail = rx_id;
> +
> +	/*
> +	 * If the number of free RX descriptors is greater than the RX free

RX -> Rx twice

> +	 * threshold of the queue, advance the Receive Descriptor Tail (RDT)
> +	 * register.
> +	 * Update the RDT with the value of the last processed RX descriptor
> +	 * minus 1, to guarantee that the RDT register is never equal to the
> +	 * RDH register, which creates a "full" ring situation from the
> +	 * hardware point of view...
> +	 */
> +	if (!bulk_alloc && nb_hold > rxq->rx_free_thresh) {
> +		PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_tail=%u "
> +			   "nb_hold=%u nb_rx=%u",
> +			   rxq->port_id, rxq->queue_id, rx_id, nb_hold, nb_rx);
> +
> +		rte_wmb();
> +		ngbe_set32_relaxed(rxq->rdt_reg_addr, prev_id);
> +		nb_hold = 0;
> +	}
> +
> +	rxq->nb_rx_hold = nb_hold;
> +	return nb_rx;
> +}
> +
> +uint16_t
> +ngbe_recv_pkts_sc_single_alloc(void *rx_queue, struct rte_mbuf **rx_pkts,
> +				 uint16_t nb_pkts)
> +{
> +	return ngbe_recv_pkts_sc(rx_queue, rx_pkts, nb_pkts, false);
> +}
> +
> +uint16_t
> +ngbe_recv_pkts_sc_bulk_alloc(void *rx_queue, struct rte_mbuf **rx_pkts,
> +			       uint16_t nb_pkts)
> +{
> +	return ngbe_recv_pkts_sc(rx_queue, rx_pkts, nb_pkts, true);
> +}
> +
>   /*********************************************************************
>    *
>    *  Queue management functions
> @@ -1064,6 +1597,54 @@ ngbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
>   	return 0;
>   }
>   
> +void __rte_cold
> +ngbe_set_rx_function(struct rte_eth_dev *dev)
> +{
> +	struct ngbe_adapter *adapter = NGBE_DEV_ADAPTER(dev);
> +
> +	if (dev->data->scattered_rx) {
> +		/*
> +		 * Set the scattered callback: there are bulk and
> +		 * single allocation versions.
> +		 */
> +		if (adapter->rx_bulk_alloc_allowed) {
> +			PMD_INIT_LOG(DEBUG, "Using a Scattered with bulk "
> +					   "allocation callback (port=%d).",
> +				     dev->data->port_id);
> +			dev->rx_pkt_burst = ngbe_recv_pkts_sc_bulk_alloc;
> +		} else {
> +			PMD_INIT_LOG(DEBUG, "Using Regular (non-vector, "
> +					    "single allocation) "
> +					    "Scattered Rx callback "
> +					    "(port=%d).",
> +				     dev->data->port_id);
> +
> +			dev->rx_pkt_burst = ngbe_recv_pkts_sc_single_alloc;
> +		}
> +	/*
> +	 * Below we set "simple" callbacks according to port/queues parameters.
> +	 * If parameters allow we are going to choose between the following
> +	 * callbacks:
> +	 *    - Bulk Allocation
> +	 *    - Single buffer allocation (the simplest one)
> +	 */
> +	} else if (adapter->rx_bulk_alloc_allowed) {
> +		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are "
> +				    "satisfied. Rx Burst Bulk Alloc function "
> +				    "will be used on port=%d.",
> +			     dev->data->port_id);
> +
> +		dev->rx_pkt_burst = ngbe_recv_pkts_bulk_alloc;
> +	} else {
> +		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are not "
> +				    "satisfied, or Scattered Rx is requested "
> +				    "(port=%d).",
> +			     dev->data->port_id);
> +
> +		dev->rx_pkt_burst = ngbe_recv_pkts;
> +	}
> +}
> +
>   /*
>    * Initializes Receive Unit.
>    */
> @@ -1211,6 +1792,8 @@ ngbe_dev_rx_init(struct rte_eth_dev *dev)
>   		wr32(hw, NGBE_SECRXCTL, rdrxctl);
>   	}
>   
> +	ngbe_set_rx_function(dev);
> +
>   	return 0;
>   }
>   
> diff --git a/drivers/net/ngbe/ngbe_rxtx.h b/drivers/net/ngbe/ngbe_rxtx.h
> index d6b9127cb4..4b8596b24a 100644
> --- a/drivers/net/ngbe/ngbe_rxtx.h
> +++ b/drivers/net/ngbe/ngbe_rxtx.h
> @@ -298,6 +298,8 @@ struct ngbe_txq_ops {
>   	void (*reset)(struct ngbe_tx_queue *txq);
>   };
>   
> +void ngbe_set_rx_function(struct rte_eth_dev *dev);
> +
>   uint64_t ngbe_get_tx_port_offloads(struct rte_eth_dev *dev);
>   uint64_t ngbe_get_rx_queue_offloads(struct rte_eth_dev *dev);
>   uint64_t ngbe_get_rx_port_offloads(struct rte_eth_dev *dev);
> 


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [dpdk-dev] [PATCH v5 21/24] net/ngbe: support full-featured Tx path
  2021-06-02  9:41 ` [dpdk-dev] [PATCH v5 21/24] net/ngbe: support full-featured Tx path Jiawen Wu
@ 2021-06-14 19:22   ` Andrew Rybchenko
  2021-06-14 19:23     ` Andrew Rybchenko
  0 siblings, 1 reply; 51+ messages in thread
From: Andrew Rybchenko @ 2021-06-14 19:22 UTC (permalink / raw)
  To: Jiawen Wu, dev

On 6/2/21 12:41 PM, Jiawen Wu wrote:
> Add the full-featured transmit function, which supports checksum, TSO,
> tunnel parsing, etc.

The patch should advertise the corresponding offloads support in
features and in dev_info Tx offloads.

Tx offloads require a Tx prepare implementation. Can you really skip it?
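
Something like the following minimal sketch would be the usual shape
(untested; NGBE_TX_OFFLOAD_NOTSUP_MASK is a hypothetical mask of the
ol_flags the HW cannot handle), hooked up via dev->tx_pkt_prepare next
to tx_pkt_burst:

#include <errno.h>
#include <rte_errno.h>
#include <rte_mbuf.h>
#include <rte_net.h>

uint16_t
ngbe_prep_pkts(void *tx_queue __rte_unused, struct rte_mbuf **tx_pkts,
		uint16_t nb_pkts)
{
	int i, ret;
	struct rte_mbuf *m;

	for (i = 0; i < nb_pkts; i++) {
		m = tx_pkts[i];

		/* Reject offload flags the full-featured path cannot handle */
		if (m->ol_flags & NGBE_TX_OFFLOAD_NOTSUP_MASK) {
			rte_errno = ENOTSUP;
			return i;
		}

#ifdef RTE_ETHDEV_DEBUG_TX
		/* Cross-check l2/l3/l4 lengths against ol_flags */
		ret = rte_validate_tx_offload(m);
		if (ret != 0) {
			rte_errno = -ret;
			return i;
		}
#endif
		/* Fix up pseudo-header checksums required by HW for
		 * L4 checksum and TSO offloads.
		 */
		ret = rte_net_intel_cksum_prepare(m);
		if (ret != 0) {
			rte_errno = -ret;
			return i;
		}
	}

	return i;
}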

> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
> ---
>   doc/guides/nics/features/ngbe.ini |   3 +
>   doc/guides/nics/ngbe.rst          |   3 +-
>   drivers/net/ngbe/meson.build      |   2 +
>   drivers/net/ngbe/ngbe_ethdev.c    |  16 +-
>   drivers/net/ngbe/ngbe_ethdev.h    |   3 +
>   drivers/net/ngbe/ngbe_rxtx.c      | 639 ++++++++++++++++++++++++++++++
>   drivers/net/ngbe/ngbe_rxtx.h      |  56 +++
>   7 files changed, 720 insertions(+), 2 deletions(-)
> 
> diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini
> index e24d8d0b55..443c6691a3 100644
> --- a/doc/guides/nics/features/ngbe.ini
> +++ b/doc/guides/nics/features/ngbe.ini
> @@ -9,10 +9,13 @@ Link status          = Y
>   Link status event    = Y
>   Jumbo frame          = Y
>   Scattered Rx         = Y
> +TSO                  = Y
>   CRC offload          = P
>   VLAN offload         = P
>   L3 checksum offload  = P
>   L4 checksum offload  = P
> +Inner L3 checksum    = P
> +Inner L4 checksum    = P
>   Packet type parsing  = Y
>   Multiprocess aware   = Y
>   Linux                = Y
> diff --git a/doc/guides/nics/ngbe.rst b/doc/guides/nics/ngbe.rst
> index e999e0b580..cf3fafabd8 100644
> --- a/doc/guides/nics/ngbe.rst
> +++ b/doc/guides/nics/ngbe.rst
> @@ -12,9 +12,10 @@ Features
>   
>   - Packet type information
>   - Checksum offload
> +- TSO offload
>   - Jumbo frames
>   - Link state information
> -- Scattered and gather for RX
> +- Scattered and gather for TX and RX

TX -> Tx

>   
>   Prerequisites
>   -------------
> diff --git a/drivers/net/ngbe/meson.build b/drivers/net/ngbe/meson.build
> index fd571399b3..069e648a36 100644
> --- a/drivers/net/ngbe/meson.build
> +++ b/drivers/net/ngbe/meson.build
> @@ -16,5 +16,7 @@ sources = files(
>   	'ngbe_rxtx.c',
>   )
>   
> +deps += ['security']
> +
>   includes += include_directories('base')
>   
> diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
> index 260bca0e4f..1a6419e5a4 100644
> --- a/drivers/net/ngbe/ngbe_ethdev.c
> +++ b/drivers/net/ngbe/ngbe_ethdev.c
> @@ -110,7 +110,7 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
>   
>   	eth_dev->dev_ops = &ngbe_eth_dev_ops;
>   	eth_dev->rx_pkt_burst = &ngbe_recv_pkts;
> -	eth_dev->tx_pkt_burst = &ngbe_xmit_pkts_simple;
> +	eth_dev->tx_pkt_burst = &ngbe_xmit_pkts;
>   
>   	/*
>   	 * For secondary processes, we don't initialise any further as primary
> @@ -118,6 +118,20 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
>   	 * RX and TX function.
>   	 */
>   	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
> +		struct ngbe_tx_queue *txq;
> +		/* TX queue function in primary, set by last queue initialized

TX -> Tx

> +		 * Tx queue may not be initialized by the primary process
> +		 */
> +		if (eth_dev->data->tx_queues) {
> +			uint16_t nb_tx_queues = eth_dev->data->nb_tx_queues;
> +			txq = eth_dev->data->tx_queues[nb_tx_queues - 1];
> +			ngbe_set_tx_function(eth_dev, txq);
> +		} else {
> +			/* Use default TX function if we get here */
> +			PMD_INIT_LOG(NOTICE, "No TX queues configured yet. "
> +				     "Using default TX function.");
> +		}
> +
>   		ngbe_set_rx_function(eth_dev);
>   
>   		return 0;
> diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
> index 1e21db5e25..035b1ad5c8 100644
> --- a/drivers/net/ngbe/ngbe_ethdev.h
> +++ b/drivers/net/ngbe/ngbe_ethdev.h
> @@ -86,6 +86,9 @@ uint16_t ngbe_recv_pkts_sc_single_alloc(void *rx_queue,
>   uint16_t ngbe_recv_pkts_sc_bulk_alloc(void *rx_queue,
>   		struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
>   
> +uint16_t ngbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
> +		uint16_t nb_pkts);
> +
>   uint16_t ngbe_xmit_pkts_simple(void *tx_queue, struct rte_mbuf **tx_pkts,
>   		uint16_t nb_pkts);
>   
> diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c
> index f633718237..3f3f2cab06 100644
> --- a/drivers/net/ngbe/ngbe_rxtx.c
> +++ b/drivers/net/ngbe/ngbe_rxtx.c
> @@ -8,6 +8,7 @@
>   #include <stdint.h>
>   #include <rte_ethdev.h>
>   #include <ethdev_driver.h>
> +#include <rte_security_driver.h>
>   #include <rte_malloc.h>
>   
>   #include "ngbe_logs.h"
> @@ -15,6 +16,18 @@
>   #include "ngbe_ethdev.h"
>   #include "ngbe_rxtx.h"
>   
> +/* Bit Mask to indicate what bits required for building TX context */
> +static const u64 NGBE_TX_OFFLOAD_MASK = (PKT_TX_IP_CKSUM |
> +		PKT_TX_OUTER_IPV6 |
> +		PKT_TX_OUTER_IPV4 |
> +		PKT_TX_IPV6 |
> +		PKT_TX_IPV4 |
> +		PKT_TX_VLAN_PKT |
> +		PKT_TX_L4_MASK |
> +		PKT_TX_TCP_SEG |
> +		PKT_TX_TUNNEL_MASK |
> +		PKT_TX_OUTER_IP_CKSUM);
> +

Can we add the offloads one by one, in separate patches?
It would simplify review a lot. Right now it is very
hard to tell whether something has been missed.

>   /*
>    * Prefetch a cache line into all cache levels.
>    */
> @@ -248,10 +261,608 @@ ngbe_xmit_pkts_simple(void *tx_queue, struct rte_mbuf **tx_pkts,
>   	return nb_tx;
>   }
>   
> +static inline void
> +ngbe_set_xmit_ctx(struct ngbe_tx_queue *txq,
> +		volatile struct ngbe_tx_ctx_desc *ctx_txd,
> +		uint64_t ol_flags, union ngbe_tx_offload tx_offload,
> +		__rte_unused uint64_t *mdata)
> +{
> +	union ngbe_tx_offload tx_offload_mask;
> +	uint32_t type_tucmd_mlhl;
> +	uint32_t mss_l4len_idx;
> +	uint32_t ctx_idx;
> +	uint32_t vlan_macip_lens;
> +	uint32_t tunnel_seed;
> +
> +	ctx_idx = txq->ctx_curr;
> +	tx_offload_mask.data[0] = 0;
> +	tx_offload_mask.data[1] = 0;
> +
> +	/* Specify which HW CTX to upload. */
> +	mss_l4len_idx = NGBE_TXD_IDX(ctx_idx);
> +	type_tucmd_mlhl = NGBE_TXD_CTXT;
> +
> +	tx_offload_mask.ptid |= ~0;
> +	type_tucmd_mlhl |= NGBE_TXD_PTID(tx_offload.ptid);
> +
> +	/* check if TCP segmentation required for this packet */
> +	if (ol_flags & PKT_TX_TCP_SEG) {
> +		tx_offload_mask.l2_len |= ~0;
> +		tx_offload_mask.l3_len |= ~0;
> +		tx_offload_mask.l4_len |= ~0;
> +		tx_offload_mask.tso_segsz |= ~0;
> +		mss_l4len_idx |= NGBE_TXD_MSS(tx_offload.tso_segsz);
> +		mss_l4len_idx |= NGBE_TXD_L4LEN(tx_offload.l4_len);
> +	} else { /* no TSO, check if hardware checksum is needed */
> +		if (ol_flags & PKT_TX_IP_CKSUM) {
> +			tx_offload_mask.l2_len |= ~0;
> +			tx_offload_mask.l3_len |= ~0;
> +		}
> +
> +		switch (ol_flags & PKT_TX_L4_MASK) {
> +		case PKT_TX_UDP_CKSUM:
> +			mss_l4len_idx |=
> +				NGBE_TXD_L4LEN(sizeof(struct rte_udp_hdr));
> +			tx_offload_mask.l2_len |= ~0;
> +			tx_offload_mask.l3_len |= ~0;
> +			break;
> +		case PKT_TX_TCP_CKSUM:
> +			mss_l4len_idx |=
> +				NGBE_TXD_L4LEN(sizeof(struct rte_tcp_hdr));
> +			tx_offload_mask.l2_len |= ~0;
> +			tx_offload_mask.l3_len |= ~0;
> +			break;
> +		case PKT_TX_SCTP_CKSUM:
> +			mss_l4len_idx |=
> +				NGBE_TXD_L4LEN(sizeof(struct rte_sctp_hdr));
> +			tx_offload_mask.l2_len |= ~0;
> +			tx_offload_mask.l3_len |= ~0;
> +			break;
> +		default:
> +			break;
> +		}
> +	}
> +
> +	vlan_macip_lens = NGBE_TXD_IPLEN(tx_offload.l3_len >> 1);
> +
> +	if (ol_flags & PKT_TX_TUNNEL_MASK) {
> +		tx_offload_mask.outer_tun_len |= ~0;
> +		tx_offload_mask.outer_l2_len |= ~0;
> +		tx_offload_mask.outer_l3_len |= ~0;
> +		tx_offload_mask.l2_len |= ~0;
> +		tunnel_seed = NGBE_TXD_ETUNLEN(tx_offload.outer_tun_len >> 1);
> +		tunnel_seed |= NGBE_TXD_EIPLEN(tx_offload.outer_l3_len >> 2);
> +
> +		switch (ol_flags & PKT_TX_TUNNEL_MASK) {
> +		case PKT_TX_TUNNEL_IPIP:
> +			/* for non UDP / GRE tunneling, set to 0b */
> +			break;
> +		default:
> +			PMD_TX_LOG(ERR, "Tunnel type not supported");
> +			return;
> +		}
> +		vlan_macip_lens |= NGBE_TXD_MACLEN(tx_offload.outer_l2_len);
> +	} else {
> +		tunnel_seed = 0;
> +		vlan_macip_lens |= NGBE_TXD_MACLEN(tx_offload.l2_len);
> +	}
> +
> +	if (ol_flags & PKT_TX_VLAN_PKT) {
> +		tx_offload_mask.vlan_tci |= ~0;
> +		vlan_macip_lens |= NGBE_TXD_VLAN(tx_offload.vlan_tci);
> +	}
> +
> +	txq->ctx_cache[ctx_idx].flags = ol_flags;
> +	txq->ctx_cache[ctx_idx].tx_offload.data[0] =
> +		tx_offload_mask.data[0] & tx_offload.data[0];
> +	txq->ctx_cache[ctx_idx].tx_offload.data[1] =
> +		tx_offload_mask.data[1] & tx_offload.data[1];
> +	txq->ctx_cache[ctx_idx].tx_offload_mask = tx_offload_mask;
> +
> +	ctx_txd->dw0 = rte_cpu_to_le_32(vlan_macip_lens);
> +	ctx_txd->dw1 = rte_cpu_to_le_32(tunnel_seed);
> +	ctx_txd->dw2 = rte_cpu_to_le_32(type_tucmd_mlhl);
> +	ctx_txd->dw3 = rte_cpu_to_le_32(mss_l4len_idx);
> +}
> +
> +/*
> + * Check which hardware context can be used. Use the existing match
> + * or create a new context descriptor.
> + */
> +static inline uint32_t
> +what_ctx_update(struct ngbe_tx_queue *txq, uint64_t flags,
> +		   union ngbe_tx_offload tx_offload)
> +{
> +	/* If match with the current used context */
> +	if (likely(txq->ctx_cache[txq->ctx_curr].flags == flags &&
> +		   (txq->ctx_cache[txq->ctx_curr].tx_offload.data[0] ==
> +		    (txq->ctx_cache[txq->ctx_curr].tx_offload_mask.data[0]
> +		     & tx_offload.data[0])) &&
> +		   (txq->ctx_cache[txq->ctx_curr].tx_offload.data[1] ==
> +		    (txq->ctx_cache[txq->ctx_curr].tx_offload_mask.data[1]
> +		     & tx_offload.data[1]))))
> +		return txq->ctx_curr;
> +
> +	/* Check whether it matches the next context */
> +	txq->ctx_curr ^= 1;
> +	if (likely(txq->ctx_cache[txq->ctx_curr].flags == flags &&
> +		   (txq->ctx_cache[txq->ctx_curr].tx_offload.data[0] ==
> +		    (txq->ctx_cache[txq->ctx_curr].tx_offload_mask.data[0]
> +		     & tx_offload.data[0])) &&
> +		   (txq->ctx_cache[txq->ctx_curr].tx_offload.data[1] ==
> +		    (txq->ctx_cache[txq->ctx_curr].tx_offload_mask.data[1]
> +		     & tx_offload.data[1]))))
> +		return txq->ctx_curr;
> +
> +	/* No match: a new context descriptor must be built */
> +	return NGBE_CTX_NUM;
> +}
> +
> +static inline uint32_t
> +tx_desc_cksum_flags_to_olinfo(uint64_t ol_flags)
> +{
> +	uint32_t tmp = 0;
> +
> +	if ((ol_flags & PKT_TX_L4_MASK) != PKT_TX_L4_NO_CKSUM) {
> +		tmp |= NGBE_TXD_CC;
> +		tmp |= NGBE_TXD_L4CS;
> +	}
> +	if (ol_flags & PKT_TX_IP_CKSUM) {
> +		tmp |= NGBE_TXD_CC;
> +		tmp |= NGBE_TXD_IPCS;
> +	}
> +	if (ol_flags & PKT_TX_OUTER_IP_CKSUM) {
> +		tmp |= NGBE_TXD_CC;
> +		tmp |= NGBE_TXD_EIPCS;
> +	}
> +	if (ol_flags & PKT_TX_TCP_SEG) {
> +		tmp |= NGBE_TXD_CC;
> +		/* implies IPv4 cksum */
> +		if (ol_flags & PKT_TX_IPV4)
> +			tmp |= NGBE_TXD_IPCS;
> +		tmp |= NGBE_TXD_L4CS;
> +	}
> +	if (ol_flags & PKT_TX_VLAN_PKT)
> +		tmp |= NGBE_TXD_CC;
> +
> +	return tmp;
> +}
> +
> +static inline uint32_t
> +tx_desc_ol_flags_to_cmdtype(uint64_t ol_flags)
> +{
> +	uint32_t cmdtype = 0;
> +
> +	if (ol_flags & PKT_TX_VLAN_PKT)
> +		cmdtype |= NGBE_TXD_VLE;
> +	if (ol_flags & PKT_TX_TCP_SEG)
> +		cmdtype |= NGBE_TXD_TSE;
> +	if (ol_flags & PKT_TX_MACSEC)
> +		cmdtype |= NGBE_TXD_LINKSEC;
> +	return cmdtype;
> +}
> +
> +static inline uint8_t
> +tx_desc_ol_flags_to_ptid(uint64_t oflags, uint32_t ptype)
> +{
> +	bool tun;
> +
> +	if (ptype)
> +		return ngbe_encode_ptype(ptype);
> +
> +	/* Only support flags in NGBE_TX_OFFLOAD_MASK */
> +	tun = !!(oflags & PKT_TX_TUNNEL_MASK);
> +
> +	/* L2 level */
> +	ptype = RTE_PTYPE_L2_ETHER;
> +	if (oflags & PKT_TX_VLAN)
> +		ptype |= RTE_PTYPE_L2_ETHER_VLAN;
> +
> +	/* L3 level */
> +	if (oflags & (PKT_TX_OUTER_IPV4 | PKT_TX_OUTER_IP_CKSUM))
> +		ptype |= RTE_PTYPE_L3_IPV4;
> +	else if (oflags & (PKT_TX_OUTER_IPV6))
> +		ptype |= RTE_PTYPE_L3_IPV6;
> +
> +	if (oflags & (PKT_TX_IPV4 | PKT_TX_IP_CKSUM))
> +		ptype |= (tun ? RTE_PTYPE_INNER_L3_IPV4 : RTE_PTYPE_L3_IPV4);
> +	else if (oflags & (PKT_TX_IPV6))
> +		ptype |= (tun ? RTE_PTYPE_INNER_L3_IPV6 : RTE_PTYPE_L3_IPV6);
> +
> +	/* L4 level */
> +	switch (oflags & (PKT_TX_L4_MASK)) {
> +	case PKT_TX_TCP_CKSUM:
> +		ptype |= (tun ? RTE_PTYPE_INNER_L4_TCP : RTE_PTYPE_L4_TCP);
> +		break;
> +	case PKT_TX_UDP_CKSUM:
> +		ptype |= (tun ? RTE_PTYPE_INNER_L4_UDP : RTE_PTYPE_L4_UDP);
> +		break;
> +	case PKT_TX_SCTP_CKSUM:
> +		ptype |= (tun ? RTE_PTYPE_INNER_L4_SCTP : RTE_PTYPE_L4_SCTP);
> +		break;
> +	}
> +
> +	if (oflags & PKT_TX_TCP_SEG)
> +		ptype |= (tun ? RTE_PTYPE_INNER_L4_TCP : RTE_PTYPE_L4_TCP);
> +
> +	/* Tunnel */
> +	switch (oflags & PKT_TX_TUNNEL_MASK) {
> +	case PKT_TX_TUNNEL_VXLAN:
> +		ptype |= RTE_PTYPE_L2_ETHER |
> +			 RTE_PTYPE_L3_IPV4 |
> +			 RTE_PTYPE_TUNNEL_VXLAN;
> +		ptype |= RTE_PTYPE_INNER_L2_ETHER;
> +		break;
> +	case PKT_TX_TUNNEL_GRE:
> +		ptype |= RTE_PTYPE_L2_ETHER |
> +			 RTE_PTYPE_L3_IPV4 |
> +			 RTE_PTYPE_TUNNEL_GRE;
> +		ptype |= RTE_PTYPE_INNER_L2_ETHER;
> +		break;
> +	case PKT_TX_TUNNEL_GENEVE:
> +		ptype |= RTE_PTYPE_L2_ETHER |
> +			 RTE_PTYPE_L3_IPV4 |
> +			 RTE_PTYPE_TUNNEL_GENEVE;
> +		ptype |= RTE_PTYPE_INNER_L2_ETHER;
> +		break;
> +	case PKT_TX_TUNNEL_VXLAN_GPE:
> +		ptype |= RTE_PTYPE_L2_ETHER |
> +			 RTE_PTYPE_L3_IPV4 |
> +			 RTE_PTYPE_TUNNEL_VXLAN_GPE;
> +		break;
> +	case PKT_TX_TUNNEL_IPIP:
> +	case PKT_TX_TUNNEL_IP:
> +		ptype |= RTE_PTYPE_L2_ETHER |
> +			 RTE_PTYPE_L3_IPV4 |
> +			 RTE_PTYPE_TUNNEL_IP;
> +		break;
> +	}
> +
> +	return ngbe_encode_ptype(ptype);
> +}
> +
>   #ifndef DEFAULT_TX_FREE_THRESH
>   #define DEFAULT_TX_FREE_THRESH 32
>   #endif
>   
> +/* Reset transmit descriptors after they have been used */
> +static inline int
> +ngbe_xmit_cleanup(struct ngbe_tx_queue *txq)
> +{
> +	struct ngbe_tx_entry *sw_ring = txq->sw_ring;
> +	volatile struct ngbe_tx_desc *txr = txq->tx_ring;
> +	uint16_t last_desc_cleaned = txq->last_desc_cleaned;
> +	uint16_t nb_tx_desc = txq->nb_tx_desc;
> +	uint16_t desc_to_clean_to;
> +	uint16_t nb_tx_to_clean;
> +	uint32_t status;
> +
> +	/* Determine the last descriptor needing to be cleaned */
> +	desc_to_clean_to = (uint16_t)(last_desc_cleaned + txq->tx_free_thresh);
> +	if (desc_to_clean_to >= nb_tx_desc)
> +		desc_to_clean_to = (uint16_t)(desc_to_clean_to - nb_tx_desc);
> +
> +	/* Check to make sure the last descriptor to clean is done */
> +	desc_to_clean_to = sw_ring[desc_to_clean_to].last_id;
> +	status = txr[desc_to_clean_to].dw3;
> +	if (!(status & rte_cpu_to_le_32(NGBE_TXD_DD))) {
> +		PMD_TX_LOG(DEBUG,
> +			"TX descriptor %4u is not done "
> +			"(port=%d queue=%d)",
> +			desc_to_clean_to,
> +			txq->port_id, txq->queue_id);
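> +		/* Too few free descriptors: ask HW to flush pending
> +		 * Tx descriptor write-backs.
> +		 */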
> +		if (txq->nb_tx_free >> 1 < txq->tx_free_thresh)
> +			ngbe_set32_masked(txq->tdc_reg_addr,
> +				NGBE_TXCFG_FLUSH, NGBE_TXCFG_FLUSH);
> +		/* Failed to clean any descriptors, better luck next time */
> +		return -1;
> +	}
> +
> +	/* Figure out how many descriptors will be cleaned */
> +	if (last_desc_cleaned > desc_to_clean_to)
> +		nb_tx_to_clean = (uint16_t)((nb_tx_desc - last_desc_cleaned) +
> +							desc_to_clean_to);
> +	else
> +		nb_tx_to_clean = (uint16_t)(desc_to_clean_to -
> +						last_desc_cleaned);
> +
> +	PMD_TX_LOG(DEBUG,
> +		"Cleaning %4u TX descriptors: %4u to %4u "
> +		"(port=%d queue=%d)",
> +		nb_tx_to_clean, last_desc_cleaned, desc_to_clean_to,
> +		txq->port_id, txq->queue_id);
> +
> +	/*
> +	 * The last descriptor to clean is done, so that means all the
> +	 * descriptors from the last descriptor that was cleaned
> +	 * up to the last descriptor with the RS bit set
> +	 * are done. Only reset the threshold descriptor.
> +	 */
> +	txr[desc_to_clean_to].dw3 = 0;
> +
> +	/* Update the txq to reflect the last descriptor that was cleaned */
> +	txq->last_desc_cleaned = desc_to_clean_to;
> +	txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + nb_tx_to_clean);
> +
> +	/* No Error */
> +	return 0;
> +}
> +
> +uint16_t
> +ngbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
> +		uint16_t nb_pkts)
> +{
> +	struct ngbe_tx_queue *txq;
> +	struct ngbe_tx_entry *sw_ring;
> +	struct ngbe_tx_entry *txe, *txn;
> +	volatile struct ngbe_tx_desc *txr;
> +	volatile struct ngbe_tx_desc *txd;
> +	struct rte_mbuf     *tx_pkt;
> +	struct rte_mbuf     *m_seg;
> +	uint64_t buf_dma_addr;
> +	uint32_t olinfo_status;
> +	uint32_t cmd_type_len;
> +	uint32_t pkt_len;
> +	uint16_t slen;
> +	uint64_t ol_flags;
> +	uint16_t tx_id;
> +	uint16_t tx_last;
> +	uint16_t nb_tx;
> +	uint16_t nb_used;
> +	uint64_t tx_ol_req;
> +	uint32_t ctx = 0;
> +	uint32_t new_ctx;
> +	union ngbe_tx_offload tx_offload;
> +
> +	tx_offload.data[0] = 0;
> +	tx_offload.data[1] = 0;
> +	txq = tx_queue;
> +	sw_ring = txq->sw_ring;
> +	txr     = txq->tx_ring;
> +	tx_id   = txq->tx_tail;
> +	txe = &sw_ring[tx_id];
> +
> +	/* Determine if the descriptor ring needs to be cleaned. */
> +	if (txq->nb_tx_free < txq->tx_free_thresh)
> +		ngbe_xmit_cleanup(txq);
> +
> +	rte_prefetch0(&txe->mbuf->pool);
> +
> +	/* TX loop */
> +	for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
> +		new_ctx = 0;
> +		tx_pkt = *tx_pkts++;
> +		pkt_len = tx_pkt->pkt_len;
> +
> +		/*
> +		 * Determine how many (if any) context descriptors
> +		 * are needed for offload functionality.
> +		 */
> +		ol_flags = tx_pkt->ol_flags;
> +
> +		/* If hardware offload required */
> +		tx_ol_req = ol_flags & NGBE_TX_OFFLOAD_MASK;
> +		if (tx_ol_req) {
> +			tx_offload.ptid = tx_desc_ol_flags_to_ptid(tx_ol_req,
> +					tx_pkt->packet_type);
> +			tx_offload.l2_len = tx_pkt->l2_len;
> +			tx_offload.l3_len = tx_pkt->l3_len;
> +			tx_offload.l4_len = tx_pkt->l4_len;
> +			tx_offload.vlan_tci = tx_pkt->vlan_tci;
> +			tx_offload.tso_segsz = tx_pkt->tso_segsz;
> +			tx_offload.outer_l2_len = tx_pkt->outer_l2_len;
> +			tx_offload.outer_l3_len = tx_pkt->outer_l3_len;
> +			tx_offload.outer_tun_len = 0;
> +
> +			/* Decide whether a new context must be built
> +			 * or an existing one can be reused.
> +			 */
> +			ctx = what_ctx_update(txq, tx_ol_req, tx_offload);
> +			/* Only allocate context descriptor if required */
> +			new_ctx = (ctx == NGBE_CTX_NUM);
> +			ctx = txq->ctx_curr;
> +		}
> +
> +		/*
> +		 * Keep track of how many descriptors are used for this packet.
> +		 * This will always be the number of segments plus the number of
> +		 * Context descriptors required to transmit the packet
> +		 */
> +		nb_used = (uint16_t)(tx_pkt->nb_segs + new_ctx);
> +
> +		/*
> +		 * The number of descriptors that must be allocated for a
> +		 * packet is the number of segments of that packet, plus 1
> +		 * Context Descriptor for the hardware offload, if any.
> +		 * Determine the last TX descriptor to allocate in the TX ring
> +		 * for the packet, starting from the current position (tx_id)
> +		 * in the ring.
> +		 */
> +		tx_last = (uint16_t)(tx_id + nb_used - 1);
> +
> +		/* Circular ring */
> +		if (tx_last >= txq->nb_tx_desc)
> +			tx_last = (uint16_t)(tx_last - txq->nb_tx_desc);
> +
> +		PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u pktlen=%u"
> +			   " tx_first=%u tx_last=%u",
> +			   (uint16_t)txq->port_id,
> +			   (uint16_t)txq->queue_id,
> +			   (uint32_t)pkt_len,
> +			   (uint16_t)tx_id,
> +			   (uint16_t)tx_last);
> +
> +		/*
> +		 * Make sure there are enough TX descriptors available to
> +		 * transmit the entire packet.
> +		 * nb_used better be less than or equal to txq->tx_free_thresh
> +		 */
> +		if (nb_used > txq->nb_tx_free) {
> +			PMD_TX_LOG(DEBUG,
> +				"Not enough free TX descriptors "
> +				"nb_used=%4u nb_free=%4u "
> +				"(port=%d queue=%d)",
> +				nb_used, txq->nb_tx_free,
> +				txq->port_id, txq->queue_id);
> +
> +			if (ngbe_xmit_cleanup(txq) != 0) {
> +				/* Could not clean any descriptors */
> +				if (nb_tx == 0)
> +					return 0;
> +				goto end_of_tx;
> +			}
> +
> +			/* nb_used better be <= txq->tx_free_thresh */
> +			if (unlikely(nb_used > txq->tx_free_thresh)) {
> +				PMD_TX_LOG(DEBUG,
> +					"The number of descriptors needed to "
> +					"transmit the packet exceeds the "
> +					"RS bit threshold. This will impact "
> +					"performance. "
> +					"nb_used=%4u nb_free=%4u "
> +					"tx_free_thresh=%4u. "
> +					"(port=%d queue=%d)",
> +					nb_used, txq->nb_tx_free,
> +					txq->tx_free_thresh,
> +					txq->port_id, txq->queue_id);
> +				/*
> +				 * Loop here until there are enough TX
> +				 * descriptors or until the ring cannot be
> +				 * cleaned.
> +				 */
> +				while (nb_used > txq->nb_tx_free) {
> +					if (ngbe_xmit_cleanup(txq) != 0) {
> +						/*
> +						 * Could not clean any
> +						 * descriptors
> +						 */
> +						if (nb_tx == 0)
> +							return 0;
> +						goto end_of_tx;
> +					}
> +				}
> +			}
> +		}
> +
> +		/*
> +		 * By now there are enough free TX descriptors to transmit
> +		 * the packet.
> +		 */
> +
> +		/*
> +		 * Set common flags of all TX Data Descriptors.
> +		 *
> +		 * The following bits must be set in the first Data Descriptor
> +		 * and are ignored in the other ones:
> +		 *   - NGBE_TXD_FCS
> +		 *
> +		 * The following bits must only be set in the last Data
> +		 * Descriptor:
> +		 *   - NGBE_TXD_EOP
> +		 */
> +		cmd_type_len = NGBE_TXD_FCS;
> +
> +		olinfo_status = 0;
> +		if (tx_ol_req) {
> +			if (ol_flags & PKT_TX_TCP_SEG) {
> +				/* When TSO is on, the paylen in the descriptor
> +				 * is the TCP payload length, not the packet
> +				 * length.
> +				 */
> +				pkt_len -= (tx_offload.l2_len +
> +					tx_offload.l3_len + tx_offload.l4_len);
> +				pkt_len -=
> +					(tx_pkt->ol_flags & PKT_TX_TUNNEL_MASK)
> +					? tx_offload.outer_l2_len +
> +					  tx_offload.outer_l3_len : 0;
> +			}
> +
> +			/*
> +			 * Setup the TX Advanced Context Descriptor if required
> +			 */
> +			if (new_ctx) {
> +				volatile struct ngbe_tx_ctx_desc *ctx_txd;
> +
> +				ctx_txd = (volatile struct ngbe_tx_ctx_desc *)
> +				    &txr[tx_id];
> +
> +				txn = &sw_ring[txe->next_id];
> +				rte_prefetch0(&txn->mbuf->pool);
> +
> +				if (txe->mbuf != NULL) {
> +					rte_pktmbuf_free_seg(txe->mbuf);
> +					txe->mbuf = NULL;
> +				}
> +
> +				ngbe_set_xmit_ctx(txq, ctx_txd, tx_ol_req,
> +					tx_offload,
> +					rte_security_dynfield(tx_pkt));
> +
> +				txe->last_id = tx_last;
> +				tx_id = txe->next_id;
> +				txe = txn;
> +			}
> +
> +			/*
> +			 * Set up the Tx Advanced Data Descriptor.
> +			 * This path is taken whether the context
> +			 * descriptor is newly built or reused.
> +			 */
> +			cmd_type_len  |= tx_desc_ol_flags_to_cmdtype(ol_flags);
> +			olinfo_status |=
> +				tx_desc_cksum_flags_to_olinfo(ol_flags);
> +			olinfo_status |= NGBE_TXD_IDX(ctx);
> +		}
> +
> +		olinfo_status |= NGBE_TXD_PAYLEN(pkt_len);
> +
> +		m_seg = tx_pkt;
> +		do {
> +			txd = &txr[tx_id];
> +			txn = &sw_ring[txe->next_id];
> +			rte_prefetch0(&txn->mbuf->pool);
> +
> +			if (txe->mbuf != NULL)
> +				rte_pktmbuf_free_seg(txe->mbuf);
> +			txe->mbuf = m_seg;
> +
> +			/*
> +			 * Set up Transmit Data Descriptor.
> +			 */
> +			slen = m_seg->data_len;
> +			buf_dma_addr = rte_mbuf_data_iova(m_seg);
> +			txd->qw0 = rte_cpu_to_le_64(buf_dma_addr);
> +			txd->dw2 = rte_cpu_to_le_32(cmd_type_len | slen);
> +			txd->dw3 = rte_cpu_to_le_32(olinfo_status);
> +			txe->last_id = tx_last;
> +			tx_id = txe->next_id;
> +			txe = txn;
> +			m_seg = m_seg->next;
> +		} while (m_seg != NULL);
> +
> +		/*
> +		 * The last packet data descriptor needs End Of Packet (EOP)
> +		 */
> +		cmd_type_len |= NGBE_TXD_EOP;
> +		txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_used);
> +
> +		txd->dw2 |= rte_cpu_to_le_32(cmd_type_len);
> +	}
> +
> +end_of_tx:
> +
> +	rte_wmb();
> +
> +	/*
> +	 * Set the Transmit Descriptor Tail (TDT)
> +	 */
> +	PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u tx_tail=%u nb_tx=%u",
> +		   (uint16_t)txq->port_id, (uint16_t)txq->queue_id,
> +		   (uint16_t)tx_id, (uint16_t)nb_tx);
> +	ngbe_set32_relaxed(txq->tdt_reg_addr, tx_id);
> +	txq->tx_tail = tx_id;
> +
> +	return nb_tx;
> +}
> +
>   /*********************************************************************
>    *
>    *  RX functions
> @@ -1123,6 +1734,31 @@ static const struct ngbe_txq_ops def_txq_ops = {
>   	.reset = ngbe_reset_tx_queue,
>   };
>   
> +/* Takes an ethdev and a queue and sets up the tx function to be used based on
> + * the queue parameters. Used in tx_queue_setup by primary process and then
> + * in dev_init by secondary process when attaching to an existing ethdev.
> + */
> +void __rte_cold
> +ngbe_set_tx_function(struct rte_eth_dev *dev, struct ngbe_tx_queue *txq)
> +{
> +	/* Use a simple Tx queue (no offloads, no multi segs) if possible */
> +	if (txq->offloads == 0 &&
> +			txq->tx_free_thresh >= RTE_PMD_NGBE_TX_MAX_BURST) {
> +		PMD_INIT_LOG(DEBUG, "Using simple tx code path");
> +		dev->tx_pkt_burst = ngbe_xmit_pkts_simple;
> +	} else {
> +		PMD_INIT_LOG(DEBUG, "Using full-featured tx code path");
> +		PMD_INIT_LOG(DEBUG,
> +				" - offloads = 0x%" PRIx64,
> +				txq->offloads);
> +		PMD_INIT_LOG(DEBUG,
> +				" - tx_free_thresh = %lu [RTE_PMD_NGBE_TX_MAX_BURST=%lu]",
> +				(unsigned long)txq->tx_free_thresh,
> +				(unsigned long)RTE_PMD_NGBE_TX_MAX_BURST);
> +		dev->tx_pkt_burst = ngbe_xmit_pkts;
> +	}
> +}
> +
>   uint64_t
>   ngbe_get_tx_port_offloads(struct rte_eth_dev *dev)
>   {
> @@ -1262,6 +1898,9 @@ ngbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
>   	PMD_INIT_LOG(DEBUG, "sw_ring=%p hw_ring=%p dma_addr=0x%" PRIx64,
>   		     txq->sw_ring, txq->tx_ring, txq->tx_ring_phys_addr);
>   
> +	/* set up scalar TX function as appropriate */
> +	ngbe_set_tx_function(dev, txq);
> +
>   	txq->ops->reset(txq);
>   
>   	dev->data->tx_queues[queue_idx] = txq;
> diff --git a/drivers/net/ngbe/ngbe_rxtx.h b/drivers/net/ngbe/ngbe_rxtx.h
> index 4b8596b24a..2cb98e2497 100644
> --- a/drivers/net/ngbe/ngbe_rxtx.h
> +++ b/drivers/net/ngbe/ngbe_rxtx.h
> @@ -135,8 +135,35 @@ struct ngbe_tx_ctx_desc {
>   	__le32 dw3; /* w.mss_l4len_idx    */
>   };
>   
> +/* @ngbe_tx_ctx_desc.dw0 */
> +#define NGBE_TXD_IPLEN(v)         LS(v, 0, 0x1FF) /* ip/fcoe header end */
> +#define NGBE_TXD_MACLEN(v)        LS(v, 9, 0x7F) /* desc mac len */
> +#define NGBE_TXD_VLAN(v)          LS(v, 16, 0xFFFF) /* vlan tag */
> +
> +/* @ngbe_tx_ctx_desc.dw1 */
> +/*** bit 0-31, when NGBE_TXD_DTYP_FCOE=0 ***/
> +#define NGBE_TXD_IPSEC_SAIDX(v)   LS(v, 0, 0x3FF) /* ipsec SA index */
> +#define NGBE_TXD_ETYPE(v)         LS(v, 11, 0x1) /* tunnel type */
> +#define NGBE_TXD_ETYPE_UDP        LS(0, 11, 0x1)
> +#define NGBE_TXD_ETYPE_GRE        LS(1, 11, 0x1)
> +#define NGBE_TXD_EIPLEN(v)        LS(v, 12, 0x7F) /* tunnel ip header */
> +#define NGBE_TXD_DTYP_FCOE        MS(16, 0x1) /* FCoE/IP descriptor */
> +#define NGBE_TXD_ETUNLEN(v)       LS(v, 21, 0xFF) /* tunnel header */
> +#define NGBE_TXD_DECTTL(v)        LS(v, 29, 0xF) /* decrease ip TTL */
> +
> +/* @ngbe_tx_ctx_desc.dw2 */
> +#define NGBE_TXD_IPSEC_ESPLEN(v)  LS(v, 1, 0x1FF) /* ipsec ESP length */
> +#define NGBE_TXD_SNAP             MS(10, 0x1) /* SNAP indication */
> +#define NGBE_TXD_TPID_SEL(v)      LS(v, 11, 0x7) /* vlan tag index */
> +#define NGBE_TXD_IPSEC_ESP        MS(14, 0x1) /* ipsec type: esp=1 ah=0 */
> +#define NGBE_TXD_IPSEC_ESPENC     MS(15, 0x1) /* ESP encrypt */
> +#define NGBE_TXD_CTXT             MS(20, 0x1) /* context descriptor */
> +#define NGBE_TXD_PTID(v)          LS(v, 24, 0xFF) /* packet type */
>   /* @ngbe_tx_ctx_desc.dw3 */
>   #define NGBE_TXD_DD               MS(0, 0x1) /* descriptor done */
> +#define NGBE_TXD_IDX(v)           LS(v, 4, 0x1) /* ctxt desc index */
> +#define NGBE_TXD_L4LEN(v)         LS(v, 8, 0xFF) /* l4 header length */
> +#define NGBE_TXD_MSS(v)           LS(v, 16, 0xFFFF) /* l4 MSS */
>   
>   /**
>    * Transmit Data Descriptor (NGBE_TXD_TYP=DATA)
> @@ -250,11 +277,34 @@ enum ngbe_ctx_num {
>   	NGBE_CTX_NUM  = 2, /**< CTX NUMBER  */
>   };
>   
> +/** Offload features */
> +union ngbe_tx_offload {
> +	uint64_t data[2];
> +	struct {
> +		uint64_t ptid:8; /**< Packet Type Identifier. */
> +		uint64_t l2_len:7; /**< L2 (MAC) Header Length. */
> +		uint64_t l3_len:9; /**< L3 (IP) Header Length. */
> +		uint64_t l4_len:8; /**< L4 (TCP/UDP) Header Length. */
> +		uint64_t tso_segsz:16; /**< TCP TSO segment size */
> +		uint64_t vlan_tci:16;
> +		/**< VLAN Tag Control Identifier (CPU order). */
> +
> +		/* fields for TX offloading of tunnels */
> +		uint64_t outer_tun_len:8; /**< Outer TUN (Tunnel) Hdr Length. */
> +		uint64_t outer_l2_len:8; /**< Outer L2 (MAC) Hdr Length. */
> +		uint64_t outer_l3_len:16; /**< Outer L3 (IP) Hdr Length. */
> +	};
> +};
> +
>   /**
>    * Structure to check if new context need be built
>    */
>   struct ngbe_ctx_info {
>   	uint64_t flags;           /**< ol_flags for context build. */
> +	/** Tx offload: vlan, tso, l2-l3-l4 lengths. */
> +	union ngbe_tx_offload tx_offload;
> +	/** compare mask for tx offload. */
> +	union ngbe_tx_offload tx_offload_mask;
>   };
>   
>   /**
> @@ -298,6 +348,12 @@ struct ngbe_txq_ops {
>   	void (*reset)(struct ngbe_tx_queue *txq);
>   };
>   
> +/* Takes an ethdev and a queue and sets up the tx function to be used based on
> + * the queue parameters. Used in tx_queue_setup by primary process and then
> + * in dev_init by secondary process when attaching to an existing ethdev.
> + */
> +void ngbe_set_tx_function(struct rte_eth_dev *dev, struct ngbe_tx_queue *txq);
> +
>   void ngbe_set_rx_function(struct rte_eth_dev *dev);
>   
>   uint64_t ngbe_get_tx_port_offloads(struct rte_eth_dev *dev);
> 


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [dpdk-dev] [PATCH v5 21/24] net/ngbe: support full-featured Tx path
  2021-06-14 19:22   ` Andrew Rybchenko
@ 2021-06-14 19:23     ` Andrew Rybchenko
  0 siblings, 0 replies; 51+ messages in thread
From: Andrew Rybchenko @ 2021-06-14 19:23 UTC (permalink / raw)
  To: Jiawen Wu, dev

On 6/14/21 10:22 PM, Andrew Rybchenko wrote:
> On 6/2/21 12:41 PM, Jiawen Wu wrote:
>> Add the full-featured transmit function, which supports checksum, TSO,
>> tunnel parsing, etc.
> 
> The patch should advertise the corresponding offloads support in
> features and in dev_info Tx offloads.
> 
> Tx offloads require a Tx prepare implementation. Can you really skip it?

BTW, I've realized that the patch is dead code, since you can't use it
without the device start implemented in the next patch.
I.e. the patch order is wrong.

> 
>> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
>> ---
>>   doc/guides/nics/features/ngbe.ini |   3 +
>>   doc/guides/nics/ngbe.rst          |   3 +-
>>   drivers/net/ngbe/meson.build      |   2 +
>>   drivers/net/ngbe/ngbe_ethdev.c    |  16 +-
>>   drivers/net/ngbe/ngbe_ethdev.h    |   3 +
>>   drivers/net/ngbe/ngbe_rxtx.c      | 639 ++++++++++++++++++++++++++++++
>>   drivers/net/ngbe/ngbe_rxtx.h      |  56 +++
>>   7 files changed, 720 insertions(+), 2 deletions(-)
>>
>> diff --git a/doc/guides/nics/features/ngbe.ini 
>> b/doc/guides/nics/features/ngbe.ini
>> index e24d8d0b55..443c6691a3 100644
>> --- a/doc/guides/nics/features/ngbe.ini
>> +++ b/doc/guides/nics/features/ngbe.ini
>> @@ -9,10 +9,13 @@ Link status          = Y
>>   Link status event    = Y
>>   Jumbo frame          = Y
>>   Scattered Rx         = Y
>> +TSO                  = Y
>>   CRC offload          = P
>>   VLAN offload         = P
>>   L3 checksum offload  = P
>>   L4 checksum offload  = P
>> +Inner L3 checksum    = P
>> +Inner L4 checksum    = P
>>   Packet type parsing  = Y
>>   Multiprocess aware   = Y
>>   Linux                = Y
>> diff --git a/doc/guides/nics/ngbe.rst b/doc/guides/nics/ngbe.rst
>> index e999e0b580..cf3fafabd8 100644
>> --- a/doc/guides/nics/ngbe.rst
>> +++ b/doc/guides/nics/ngbe.rst
>> @@ -12,9 +12,10 @@ Features
>>   - Packet type information
>>   - Checksum offload
>> +- TSO offload
>>   - Jumbo frames
>>   - Link state information
>> -- Scattered and gather for RX
>> +- Scattered and gather for TX and RX
> 
> TX -> Tx
> 
>>   Prerequisites
>>   -------------
>> diff --git a/drivers/net/ngbe/meson.build b/drivers/net/ngbe/meson.build
>> index fd571399b3..069e648a36 100644
>> --- a/drivers/net/ngbe/meson.build
>> +++ b/drivers/net/ngbe/meson.build
>> @@ -16,5 +16,7 @@ sources = files(
>>       'ngbe_rxtx.c',
>>   )
>> +deps += ['security']
>> +
>>   includes += include_directories('base')
>> diff --git a/drivers/net/ngbe/ngbe_ethdev.c 
>> b/drivers/net/ngbe/ngbe_ethdev.c
>> index 260bca0e4f..1a6419e5a4 100644
>> --- a/drivers/net/ngbe/ngbe_ethdev.c
>> +++ b/drivers/net/ngbe/ngbe_ethdev.c
>> @@ -110,7 +110,7 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, 
>> void *init_params __rte_unused)
>>       eth_dev->dev_ops = &ngbe_eth_dev_ops;
>>       eth_dev->rx_pkt_burst = &ngbe_recv_pkts;
>> -    eth_dev->tx_pkt_burst = &ngbe_xmit_pkts_simple;
>> +    eth_dev->tx_pkt_burst = &ngbe_xmit_pkts;
>>       /*
>>        * For secondary processes, we don't initialise any further as 
>> primary
>> @@ -118,6 +118,20 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, 
>> void *init_params __rte_unused)
>>        * RX and TX function.
>>        */
>>       if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
>> +        struct ngbe_tx_queue *txq;
>> +        /* TX queue function in primary, set by last queue initialized
> 
> TX -> Tx
> 
>> +         * Tx queue may not initialized by primary process
>> +         */
>> +        if (eth_dev->data->tx_queues) {
>> +            uint16_t nb_tx_queues = eth_dev->data->nb_tx_queues;
>> +            txq = eth_dev->data->tx_queues[nb_tx_queues - 1];
>> +            ngbe_set_tx_function(eth_dev, txq);
>> +        } else {
>> +            /* Use default TX function if we get here */
>> +            PMD_INIT_LOG(NOTICE, "No TX queues configured yet. "
>> +                     "Using default TX function.");
>> +        }
>> +
>>           ngbe_set_rx_function(eth_dev);
>>           return 0;
>> diff --git a/drivers/net/ngbe/ngbe_ethdev.h 
>> b/drivers/net/ngbe/ngbe_ethdev.h
>> index 1e21db5e25..035b1ad5c8 100644
>> --- a/drivers/net/ngbe/ngbe_ethdev.h
>> +++ b/drivers/net/ngbe/ngbe_ethdev.h
>> @@ -86,6 +86,9 @@ uint16_t ngbe_recv_pkts_sc_single_alloc(void *rx_queue,
>>   uint16_t ngbe_recv_pkts_sc_bulk_alloc(void *rx_queue,
>>           struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
>> +uint16_t ngbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
>> +        uint16_t nb_pkts);
>> +
>>   uint16_t ngbe_xmit_pkts_simple(void *tx_queue, struct rte_mbuf 
>> **tx_pkts,
>>           uint16_t nb_pkts);
>> diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c
>> index f633718237..3f3f2cab06 100644
>> --- a/drivers/net/ngbe/ngbe_rxtx.c
>> +++ b/drivers/net/ngbe/ngbe_rxtx.c
>> @@ -8,6 +8,7 @@
>>   #include <stdint.h>
>>   #include <rte_ethdev.h>
>>   #include <ethdev_driver.h>
>> +#include <rte_security_driver.h>
>>   #include <rte_malloc.h>
>>   #include "ngbe_logs.h"
>> @@ -15,6 +16,18 @@
>>   #include "ngbe_ethdev.h"
>>   #include "ngbe_rxtx.h"
>> +/* Bit Mask to indicate what bits required for building TX context */
>> +static const u64 NGBE_TX_OFFLOAD_MASK = (PKT_TX_IP_CKSUM |
>> +        PKT_TX_OUTER_IPV6 |
>> +        PKT_TX_OUTER_IPV4 |
>> +        PKT_TX_IPV6 |
>> +        PKT_TX_IPV4 |
>> +        PKT_TX_VLAN_PKT |
>> +        PKT_TX_L4_MASK |
>> +        PKT_TX_TCP_SEG |
>> +        PKT_TX_TUNNEL_MASK |
>> +        PKT_TX_OUTER_IP_CKSUM);
>> +
> 
> Can we add offloads one by one in a separate patches?
> It would simplify review a lot. Right now it is very
> hard to understand if you lost something or not.
> 
>>   /*
>>    * Prefetch a cache line into all cache levels.
>>    */
>> @@ -248,10 +261,608 @@ ngbe_xmit_pkts_simple(void *tx_queue, struct 
>> rte_mbuf **tx_pkts,
>>       return nb_tx;
>>   }
>> +static inline void
>> +ngbe_set_xmit_ctx(struct ngbe_tx_queue *txq,
>> +        volatile struct ngbe_tx_ctx_desc *ctx_txd,
>> +        uint64_t ol_flags, union ngbe_tx_offload tx_offload,
>> +        __rte_unused uint64_t *mdata)
>> +{
>> +    union ngbe_tx_offload tx_offload_mask;
>> +    uint32_t type_tucmd_mlhl;
>> +    uint32_t mss_l4len_idx;
>> +    uint32_t ctx_idx;
>> +    uint32_t vlan_macip_lens;
>> +    uint32_t tunnel_seed;
>> +
>> +    ctx_idx = txq->ctx_curr;
>> +    tx_offload_mask.data[0] = 0;
>> +    tx_offload_mask.data[1] = 0;
>> +
>> +    /* Specify which HW CTX to upload. */
>> +    mss_l4len_idx = NGBE_TXD_IDX(ctx_idx);
>> +    type_tucmd_mlhl = NGBE_TXD_CTXT;
>> +
>> +    tx_offload_mask.ptid |= ~0;
>> +    type_tucmd_mlhl |= NGBE_TXD_PTID(tx_offload.ptid);
>> +
>> +    /* check if TCP segmentation required for this packet */
>> +    if (ol_flags & PKT_TX_TCP_SEG) {
>> +        tx_offload_mask.l2_len |= ~0;
>> +        tx_offload_mask.l3_len |= ~0;
>> +        tx_offload_mask.l4_len |= ~0;
>> +        tx_offload_mask.tso_segsz |= ~0;
>> +        mss_l4len_idx |= NGBE_TXD_MSS(tx_offload.tso_segsz);
>> +        mss_l4len_idx |= NGBE_TXD_L4LEN(tx_offload.l4_len);
>> +    } else { /* no TSO, check if hardware checksum is needed */
>> +        if (ol_flags & PKT_TX_IP_CKSUM) {
>> +            tx_offload_mask.l2_len |= ~0;
>> +            tx_offload_mask.l3_len |= ~0;
>> +        }
>> +
>> +        switch (ol_flags & PKT_TX_L4_MASK) {
>> +        case PKT_TX_UDP_CKSUM:
>> +            mss_l4len_idx |=
>> +                NGBE_TXD_L4LEN(sizeof(struct rte_udp_hdr));
>> +            tx_offload_mask.l2_len |= ~0;
>> +            tx_offload_mask.l3_len |= ~0;
>> +            break;
>> +        case PKT_TX_TCP_CKSUM:
>> +            mss_l4len_idx |=
>> +                NGBE_TXD_L4LEN(sizeof(struct rte_tcp_hdr));
>> +            tx_offload_mask.l2_len |= ~0;
>> +            tx_offload_mask.l3_len |= ~0;
>> +            break;
>> +        case PKT_TX_SCTP_CKSUM:
>> +            mss_l4len_idx |=
>> +                NGBE_TXD_L4LEN(sizeof(struct rte_sctp_hdr));
>> +            tx_offload_mask.l2_len |= ~0;
>> +            tx_offload_mask.l3_len |= ~0;
>> +            break;
>> +        default:
>> +            break;
>> +        }
>> +    }
>> +
>> +    vlan_macip_lens = NGBE_TXD_IPLEN(tx_offload.l3_len >> 1);
>> +
>> +    if (ol_flags & PKT_TX_TUNNEL_MASK) {
>> +        tx_offload_mask.outer_tun_len |= ~0;
>> +        tx_offload_mask.outer_l2_len |= ~0;
>> +        tx_offload_mask.outer_l3_len |= ~0;
>> +        tx_offload_mask.l2_len |= ~0;
>> +        tunnel_seed = NGBE_TXD_ETUNLEN(tx_offload.outer_tun_len >> 1);
>> +        tunnel_seed |= NGBE_TXD_EIPLEN(tx_offload.outer_l3_len >> 2);
>> +
>> +        switch (ol_flags & PKT_TX_TUNNEL_MASK) {
>> +        case PKT_TX_TUNNEL_IPIP:
>> +            /* for non UDP / GRE tunneling, set to 0b */
>> +            break;
>> +        default:
>> +            PMD_TX_LOG(ERR, "Tunnel type not supported");
>> +            return;
>> +        }
>> +        vlan_macip_lens |= NGBE_TXD_MACLEN(tx_offload.outer_l2_len);
>> +    } else {
>> +        tunnel_seed = 0;
>> +        vlan_macip_lens |= NGBE_TXD_MACLEN(tx_offload.l2_len);
>> +    }
>> +
>> +    if (ol_flags & PKT_TX_VLAN_PKT) {
>> +        tx_offload_mask.vlan_tci |= ~0;
>> +        vlan_macip_lens |= NGBE_TXD_VLAN(tx_offload.vlan_tci);
>> +    }
>> +
>> +    txq->ctx_cache[ctx_idx].flags = ol_flags;
>> +    txq->ctx_cache[ctx_idx].tx_offload.data[0] =
>> +        tx_offload_mask.data[0] & tx_offload.data[0];
>> +    txq->ctx_cache[ctx_idx].tx_offload.data[1] =
>> +        tx_offload_mask.data[1] & tx_offload.data[1];
>> +    txq->ctx_cache[ctx_idx].tx_offload_mask = tx_offload_mask;
>> +
>> +    ctx_txd->dw0 = rte_cpu_to_le_32(vlan_macip_lens);
>> +    ctx_txd->dw1 = rte_cpu_to_le_32(tunnel_seed);
>> +    ctx_txd->dw2 = rte_cpu_to_le_32(type_tucmd_mlhl);
>> +    ctx_txd->dw3 = rte_cpu_to_le_32(mss_l4len_idx);
>> +}
>> +
>> +/*
>> + * Check which hardware context can be used. Use the existing match
>> + * or create a new context descriptor.
>> + */
>> +static inline uint32_t
>> +what_ctx_update(struct ngbe_tx_queue *txq, uint64_t flags,
>> +           union ngbe_tx_offload tx_offload)
>> +{
>> +    /* If match with the current used context */
>> +    if (likely(txq->ctx_cache[txq->ctx_curr].flags == flags &&
>> +           (txq->ctx_cache[txq->ctx_curr].tx_offload.data[0] ==
>> +            (txq->ctx_cache[txq->ctx_curr].tx_offload_mask.data[0]
>> +             & tx_offload.data[0])) &&
>> +           (txq->ctx_cache[txq->ctx_curr].tx_offload.data[1] ==
>> +            (txq->ctx_cache[txq->ctx_curr].tx_offload_mask.data[1]
>> +             & tx_offload.data[1]))))
>> +        return txq->ctx_curr;
>> +
>> +    /* What if match with the next context  */
>> +    txq->ctx_curr ^= 1;
>> +    if (likely(txq->ctx_cache[txq->ctx_curr].flags == flags &&
>> +           (txq->ctx_cache[txq->ctx_curr].tx_offload.data[0] ==
>> +            (txq->ctx_cache[txq->ctx_curr].tx_offload_mask.data[0]
>> +             & tx_offload.data[0])) &&
>> +           (txq->ctx_cache[txq->ctx_curr].tx_offload.data[1] ==
>> +            (txq->ctx_cache[txq->ctx_curr].tx_offload_mask.data[1]
>> +             & tx_offload.data[1]))))
>> +        return txq->ctx_curr;
>> +
>> +    /* Mismatch, use the previous context */
>> +    return NGBE_CTX_NUM;
>> +}
>> +
>> +static inline uint32_t
>> +tx_desc_cksum_flags_to_olinfo(uint64_t ol_flags)
>> +{
>> +    uint32_t tmp = 0;
>> +
>> +    if ((ol_flags & PKT_TX_L4_MASK) != PKT_TX_L4_NO_CKSUM) {
>> +        tmp |= NGBE_TXD_CC;
>> +        tmp |= NGBE_TXD_L4CS;
>> +    }
>> +    if (ol_flags & PKT_TX_IP_CKSUM) {
>> +        tmp |= NGBE_TXD_CC;
>> +        tmp |= NGBE_TXD_IPCS;
>> +    }
>> +    if (ol_flags & PKT_TX_OUTER_IP_CKSUM) {
>> +        tmp |= NGBE_TXD_CC;
>> +        tmp |= NGBE_TXD_EIPCS;
>> +    }
>> +    if (ol_flags & PKT_TX_TCP_SEG) {
>> +        tmp |= NGBE_TXD_CC;
>> +        /* implies IPv4 cksum */
>> +        if (ol_flags & PKT_TX_IPV4)
>> +            tmp |= NGBE_TXD_IPCS;
>> +        tmp |= NGBE_TXD_L4CS;
>> +    }
>> +    if (ol_flags & PKT_TX_VLAN_PKT)
>> +        tmp |= NGBE_TXD_CC;
>> +
>> +    return tmp;
>> +}
>> +
>> +static inline uint32_t
>> +tx_desc_ol_flags_to_cmdtype(uint64_t ol_flags)
>> +{
>> +    uint32_t cmdtype = 0;
>> +
>> +    if (ol_flags & PKT_TX_VLAN_PKT)
>> +        cmdtype |= NGBE_TXD_VLE;
>> +    if (ol_flags & PKT_TX_TCP_SEG)
>> +        cmdtype |= NGBE_TXD_TSE;
>> +    if (ol_flags & PKT_TX_MACSEC)
>> +        cmdtype |= NGBE_TXD_LINKSEC;
>> +    return cmdtype;
>> +}
>> +
>> +static inline uint8_t
>> +tx_desc_ol_flags_to_ptid(uint64_t oflags, uint32_t ptype)
>> +{
>> +    bool tun;
>> +
>> +    if (ptype)
>> +        return ngbe_encode_ptype(ptype);
>> +
>> +    /* Only support flags in NGBE_TX_OFFLOAD_MASK */
>> +    tun = !!(oflags & PKT_TX_TUNNEL_MASK);
>> +
>> +    /* L2 level */
>> +    ptype = RTE_PTYPE_L2_ETHER;
>> +    if (oflags & PKT_TX_VLAN)
>> +        ptype |= RTE_PTYPE_L2_ETHER_VLAN;
>> +
>> +    /* L3 level */
>> +    if (oflags & (PKT_TX_OUTER_IPV4 | PKT_TX_OUTER_IP_CKSUM))
>> +        ptype |= RTE_PTYPE_L3_IPV4;
>> +    else if (oflags & (PKT_TX_OUTER_IPV6))
>> +        ptype |= RTE_PTYPE_L3_IPV6;
>> +
>> +    if (oflags & (PKT_TX_IPV4 | PKT_TX_IP_CKSUM))
>> +        ptype |= (tun ? RTE_PTYPE_INNER_L3_IPV4 : RTE_PTYPE_L3_IPV4);
>> +    else if (oflags & (PKT_TX_IPV6))
>> +        ptype |= (tun ? RTE_PTYPE_INNER_L3_IPV6 : RTE_PTYPE_L3_IPV6);
>> +
>> +    /* L4 level */
>> +    switch (oflags & (PKT_TX_L4_MASK)) {
>> +    case PKT_TX_TCP_CKSUM:
>> +        ptype |= (tun ? RTE_PTYPE_INNER_L4_TCP : RTE_PTYPE_L4_TCP);
>> +        break;
>> +    case PKT_TX_UDP_CKSUM:
>> +        ptype |= (tun ? RTE_PTYPE_INNER_L4_UDP : RTE_PTYPE_L4_UDP);
>> +        break;
>> +    case PKT_TX_SCTP_CKSUM:
>> +        ptype |= (tun ? RTE_PTYPE_INNER_L4_SCTP : RTE_PTYPE_L4_SCTP);
>> +        break;
>> +    }
>> +
>> +    if (oflags & PKT_TX_TCP_SEG)
>> +        ptype |= (tun ? RTE_PTYPE_INNER_L4_TCP : RTE_PTYPE_L4_TCP);
>> +
>> +    /* Tunnel */
>> +    switch (oflags & PKT_TX_TUNNEL_MASK) {
>> +    case PKT_TX_TUNNEL_VXLAN:
>> +        ptype |= RTE_PTYPE_L2_ETHER |
>> +             RTE_PTYPE_L3_IPV4 |
>> +             RTE_PTYPE_TUNNEL_VXLAN;
>> +        ptype |= RTE_PTYPE_INNER_L2_ETHER;
>> +        break;
>> +    case PKT_TX_TUNNEL_GRE:
>> +        ptype |= RTE_PTYPE_L2_ETHER |
>> +             RTE_PTYPE_L3_IPV4 |
>> +             RTE_PTYPE_TUNNEL_GRE;
>> +        ptype |= RTE_PTYPE_INNER_L2_ETHER;
>> +        break;
>> +    case PKT_TX_TUNNEL_GENEVE:
>> +        ptype |= RTE_PTYPE_L2_ETHER |
>> +             RTE_PTYPE_L3_IPV4 |
>> +             RTE_PTYPE_TUNNEL_GENEVE;
>> +        ptype |= RTE_PTYPE_INNER_L2_ETHER;
>> +        break;
>> +    case PKT_TX_TUNNEL_VXLAN_GPE:
>> +        ptype |= RTE_PTYPE_L2_ETHER |
>> +             RTE_PTYPE_L3_IPV4 |
>> +             RTE_PTYPE_TUNNEL_VXLAN_GPE;
>> +        break;
>> +    case PKT_TX_TUNNEL_IPIP:
>> +    case PKT_TX_TUNNEL_IP:
>> +        ptype |= RTE_PTYPE_L2_ETHER |
>> +             RTE_PTYPE_L3_IPV4 |
>> +             RTE_PTYPE_TUNNEL_IP;
>> +        break;
>> +    }
>> +
>> +    return ngbe_encode_ptype(ptype);
>> +}
>> +
>>   #ifndef DEFAULT_TX_FREE_THRESH
>>   #define DEFAULT_TX_FREE_THRESH 32
>>   #endif
>> +/* Reset transmit descriptors after they have been used */
>> +static inline int
>> +ngbe_xmit_cleanup(struct ngbe_tx_queue *txq)
>> +{
>> +    struct ngbe_tx_entry *sw_ring = txq->sw_ring;
>> +    volatile struct ngbe_tx_desc *txr = txq->tx_ring;
>> +    uint16_t last_desc_cleaned = txq->last_desc_cleaned;
>> +    uint16_t nb_tx_desc = txq->nb_tx_desc;
>> +    uint16_t desc_to_clean_to;
>> +    uint16_t nb_tx_to_clean;
>> +    uint32_t status;
>> +
>> +    /* Determine the last descriptor needing to be cleaned */
>> +    desc_to_clean_to = (uint16_t)(last_desc_cleaned + 
>> txq->tx_free_thresh);
>> +    if (desc_to_clean_to >= nb_tx_desc)
>> +        desc_to_clean_to = (uint16_t)(desc_to_clean_to - nb_tx_desc);
>> +
>> +    /* Check to make sure the last descriptor to clean is done */
>> +    desc_to_clean_to = sw_ring[desc_to_clean_to].last_id;
>> +    status = txr[desc_to_clean_to].dw3;
>> +    if (!(status & rte_cpu_to_le_32(NGBE_TXD_DD))) {
>> +        PMD_TX_LOG(DEBUG,
>> +            "TX descriptor %4u is not done"
>> +            "(port=%d queue=%d)",
>> +            desc_to_clean_to,
>> +            txq->port_id, txq->queue_id);
>> +        if (txq->nb_tx_free >> 1 < txq->tx_free_thresh)
>> +            ngbe_set32_masked(txq->tdc_reg_addr,
>> +                NGBE_TXCFG_FLUSH, NGBE_TXCFG_FLUSH);
>> +        /* Failed to clean any descriptors, better luck next time */
>> +        return -(1);
>> +    }
>> +
>> +    /* Figure out how many descriptors will be cleaned */
>> +    if (last_desc_cleaned > desc_to_clean_to)
>> +        nb_tx_to_clean = (uint16_t)((nb_tx_desc - last_desc_cleaned) +
>> +                            desc_to_clean_to);
>> +    else
>> +        nb_tx_to_clean = (uint16_t)(desc_to_clean_to -
>> +                        last_desc_cleaned);
>> +
>> +    PMD_TX_LOG(DEBUG,
>> +        "Cleaning %4u TX descriptors: %4u to %4u "
>> +        "(port=%d queue=%d)",
>> +        nb_tx_to_clean, last_desc_cleaned, desc_to_clean_to,
>> +        txq->port_id, txq->queue_id);
>> +
>> +    /*
>> +     * The last descriptor to clean is done, so that means all the
>> +     * descriptors from the last descriptor that was cleaned
>> +     * up to the last descriptor with the RS bit set
>> +     * are done. Only reset the threshold descriptor.
>> +     */
>> +    txr[desc_to_clean_to].dw3 = 0;
>> +
>> +    /* Update the txq to reflect the last descriptor that was cleaned */
>> +    txq->last_desc_cleaned = desc_to_clean_to;
>> +    txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + nb_tx_to_clean);
>> +
>> +    /* No Error */
>> +    return 0;
>> +}
>> +
>> +uint16_t
>> +ngbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
>> +        uint16_t nb_pkts)
>> +{
>> +    struct ngbe_tx_queue *txq;
>> +    struct ngbe_tx_entry *sw_ring;
>> +    struct ngbe_tx_entry *txe, *txn;
>> +    volatile struct ngbe_tx_desc *txr;
>> +    volatile struct ngbe_tx_desc *txd;
>> +    struct rte_mbuf     *tx_pkt;
>> +    struct rte_mbuf     *m_seg;
>> +    uint64_t buf_dma_addr;
>> +    uint32_t olinfo_status;
>> +    uint32_t cmd_type_len;
>> +    uint32_t pkt_len;
>> +    uint16_t slen;
>> +    uint64_t ol_flags;
>> +    uint16_t tx_id;
>> +    uint16_t tx_last;
>> +    uint16_t nb_tx;
>> +    uint16_t nb_used;
>> +    uint64_t tx_ol_req;
>> +    uint32_t ctx = 0;
>> +    uint32_t new_ctx;
>> +    union ngbe_tx_offload tx_offload;
>> +
>> +    tx_offload.data[0] = 0;
>> +    tx_offload.data[1] = 0;
>> +    txq = tx_queue;
>> +    sw_ring = txq->sw_ring;
>> +    txr     = txq->tx_ring;
>> +    tx_id   = txq->tx_tail;
>> +    txe = &sw_ring[tx_id];
>> +
>> +    /* Determine if the descriptor ring needs to be cleaned. */
>> +    if (txq->nb_tx_free < txq->tx_free_thresh)
>> +        ngbe_xmit_cleanup(txq);
>> +
>> +    rte_prefetch0(&txe->mbuf->pool);
>> +
>> +    /* TX loop */
>> +    for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
>> +        new_ctx = 0;
>> +        tx_pkt = *tx_pkts++;
>> +        pkt_len = tx_pkt->pkt_len;
>> +
>> +        /*
>> +         * Determine how many (if any) context descriptors
>> +         * are needed for offload functionality.
>> +         */
>> +        ol_flags = tx_pkt->ol_flags;
>> +
>> +        /* If hardware offload required */
>> +        tx_ol_req = ol_flags & NGBE_TX_OFFLOAD_MASK;
>> +        if (tx_ol_req) {
>> +            tx_offload.ptid = tx_desc_ol_flags_to_ptid(tx_ol_req,
>> +                    tx_pkt->packet_type);
>> +            tx_offload.l2_len = tx_pkt->l2_len;
>> +            tx_offload.l3_len = tx_pkt->l3_len;
>> +            tx_offload.l4_len = tx_pkt->l4_len;
>> +            tx_offload.vlan_tci = tx_pkt->vlan_tci;
>> +            tx_offload.tso_segsz = tx_pkt->tso_segsz;
>> +            tx_offload.outer_l2_len = tx_pkt->outer_l2_len;
>> +            tx_offload.outer_l3_len = tx_pkt->outer_l3_len;
>> +            tx_offload.outer_tun_len = 0;
>> +
>> +
>> +            /* If new context need be built or reuse the exist ctx*/
>> +            ctx = what_ctx_update(txq, tx_ol_req, tx_offload);
>> +            /* Only allocate context descriptor if required */
>> +            new_ctx = (ctx == NGBE_CTX_NUM);
>> +            ctx = txq->ctx_curr;
>> +        }
>> +
>> +        /*
>> +         * Keep track of how many descriptors are used this loop
>> +         * This will always be the number of segments + the number of
>> +         * Context descriptors required to transmit the packet
>> +         */
>> +        nb_used = (uint16_t)(tx_pkt->nb_segs + new_ctx);
>> +
>> +        /*
>> +         * The number of descriptors that must be allocated for a
>> +         * packet is the number of segments of that packet, plus 1
>> +         * Context Descriptor for the hardware offload, if any.
>> +         * Determine the last TX descriptor to allocate in the TX ring
>> +         * for the packet, starting from the current position (tx_id)
>> +         * in the ring.
>> +         */
>> +        tx_last = (uint16_t)(tx_id + nb_used - 1);
>> +
>> +        /* Circular ring */
>> +        if (tx_last >= txq->nb_tx_desc)
>> +            tx_last = (uint16_t)(tx_last - txq->nb_tx_desc);
>> +
>> +        PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u pktlen=%u"
>> +               " tx_first=%u tx_last=%u",
>> +               (uint16_t)txq->port_id,
>> +               (uint16_t)txq->queue_id,
>> +               (uint32_t)pkt_len,
>> +               (uint16_t)tx_id,
>> +               (uint16_t)tx_last);
>> +
>> +        /*
>> +         * Make sure there are enough TX descriptors available to
>> +         * transmit the entire packet.
>> +         * nb_used better be less than or equal to txq->tx_free_thresh
>> +         */
>> +        if (nb_used > txq->nb_tx_free) {
>> +            PMD_TX_LOG(DEBUG,
>> +                "Not enough free TX descriptors "
>> +                "nb_used=%4u nb_free=%4u "
>> +                "(port=%d queue=%d)",
>> +                nb_used, txq->nb_tx_free,
>> +                txq->port_id, txq->queue_id);
>> +
>> +            if (ngbe_xmit_cleanup(txq) != 0) {
>> +                /* Could not clean any descriptors */
>> +                if (nb_tx == 0)
>> +                    return 0;
>> +                goto end_of_tx;
>> +            }
>> +
>> +            /* nb_used better be <= txq->tx_free_thresh */
>> +            if (unlikely(nb_used > txq->tx_free_thresh)) {
>> +                PMD_TX_LOG(DEBUG,
>> +                    "The number of descriptors needed to "
>> +                    "transmit the packet exceeds the "
>> +                    "RS bit threshold. This will impact "
>> +                    "performance."
>> +                    "nb_used=%4u nb_free=%4u "
>> +                    "tx_free_thresh=%4u. "
>> +                    "(port=%d queue=%d)",
>> +                    nb_used, txq->nb_tx_free,
>> +                    txq->tx_free_thresh,
>> +                    txq->port_id, txq->queue_id);
>> +                /*
>> +                 * Loop here until there are enough TX
>> +                 * descriptors or until the ring cannot be
>> +                 * cleaned.
>> +                 */
>> +                while (nb_used > txq->nb_tx_free) {
>> +                    if (ngbe_xmit_cleanup(txq) != 0) {
>> +                        /*
>> +                         * Could not clean any
>> +                         * descriptors
>> +                         */
>> +                        if (nb_tx == 0)
>> +                            return 0;
>> +                        goto end_of_tx;
>> +                    }
>> +                }
>> +            }
>> +        }
>> +
>> +        /*
>> +         * By now there are enough free TX descriptors to transmit
>> +         * the packet.
>> +         */
>> +
>> +        /*
>> +         * Set common flags of all TX Data Descriptors.
>> +         *
>> +         * The following bits must be set in the first Data Descriptor
>> +         * and are ignored in the other ones:
>> +         *   - NGBE_TXD_FCS
>> +         *
>> +         * The following bits must only be set in the last Data
>> +         * Descriptor:
>> +         *   - NGBE_TXD_EOP
>> +         */
>> +        cmd_type_len = NGBE_TXD_FCS;
>> +
>> +        olinfo_status = 0;
>> +        if (tx_ol_req) {
>> +            if (ol_flags & PKT_TX_TCP_SEG) {
>> +                /* when TSO is on, paylen in descriptor is the
>> +                 * not the packet len but the tcp payload len
>> +                 */
>> +                pkt_len -= (tx_offload.l2_len +
>> +                    tx_offload.l3_len + tx_offload.l4_len);
>> +                pkt_len -=
>> +                    (tx_pkt->ol_flags & PKT_TX_TUNNEL_MASK)
>> +                    ? tx_offload.outer_l2_len +
>> +                      tx_offload.outer_l3_len : 0;
>> +            }
>> +
>> +            /*
>> +             * Setup the TX Advanced Context Descriptor if required
>> +             */
>> +            if (new_ctx) {
>> +                volatile struct ngbe_tx_ctx_desc *ctx_txd;
>> +
>> +                ctx_txd = (volatile struct ngbe_tx_ctx_desc *)
>> +                    &txr[tx_id];
>> +
>> +                txn = &sw_ring[txe->next_id];
>> +                rte_prefetch0(&txn->mbuf->pool);
>> +
>> +                if (txe->mbuf != NULL) {
>> +                    rte_pktmbuf_free_seg(txe->mbuf);
>> +                    txe->mbuf = NULL;
>> +                }
>> +
>> +                ngbe_set_xmit_ctx(txq, ctx_txd, tx_ol_req,
>> +                    tx_offload,
>> +                    rte_security_dynfield(tx_pkt));
>> +
>> +                txe->last_id = tx_last;
>> +                tx_id = txe->next_id;
>> +                txe = txn;
>> +            }
>> +
>> +            /*
>> +             * Setup the TX Advanced Data Descriptor,
>> +             * This path will go through
>> +             * whatever new/reuse the context descriptor
>> +             */
>> +            cmd_type_len  |= tx_desc_ol_flags_to_cmdtype(ol_flags);
>> +            olinfo_status |=
>> +                tx_desc_cksum_flags_to_olinfo(ol_flags);
>> +            olinfo_status |= NGBE_TXD_IDX(ctx);
>> +        }
>> +
>> +        olinfo_status |= NGBE_TXD_PAYLEN(pkt_len);
>> +
>> +        m_seg = tx_pkt;
>> +        do {
>> +            txd = &txr[tx_id];
>> +            txn = &sw_ring[txe->next_id];
>> +            rte_prefetch0(&txn->mbuf->pool);
>> +
>> +            if (txe->mbuf != NULL)
>> +                rte_pktmbuf_free_seg(txe->mbuf);
>> +            txe->mbuf = m_seg;
>> +
>> +            /*
>> +             * Set up Transmit Data Descriptor.
>> +             */
>> +            slen = m_seg->data_len;
>> +            buf_dma_addr = rte_mbuf_data_iova(m_seg);
>> +            txd->qw0 = rte_cpu_to_le_64(buf_dma_addr);
>> +            txd->dw2 = rte_cpu_to_le_32(cmd_type_len | slen);
>> +            txd->dw3 = rte_cpu_to_le_32(olinfo_status);
>> +            txe->last_id = tx_last;
>> +            tx_id = txe->next_id;
>> +            txe = txn;
>> +            m_seg = m_seg->next;
>> +        } while (m_seg != NULL);
>> +
>> +        /*
>> +         * The last packet data descriptor needs End Of Packet (EOP)
>> +         */
>> +        cmd_type_len |= NGBE_TXD_EOP;
>> +        txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_used);
>> +
>> +        txd->dw2 |= rte_cpu_to_le_32(cmd_type_len);
>> +    }
>> +
>> +end_of_tx:
>> +
>> +    rte_wmb();
>> +
>> +    /*
>> +     * Set the Transmit Descriptor Tail (TDT)
>> +     */
>> +    PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u tx_tail=%u nb_tx=%u",
>> +           (uint16_t)txq->port_id, (uint16_t)txq->queue_id,
>> +           (uint16_t)tx_id, (uint16_t)nb_tx);
>> +    ngbe_set32_relaxed(txq->tdt_reg_addr, tx_id);
>> +    txq->tx_tail = tx_id;
>> +
>> +    return nb_tx;
>> +}
>> +
>>   /*********************************************************************
>>    *
>>    *  RX functions
>> @@ -1123,6 +1734,31 @@ static const struct ngbe_txq_ops def_txq_ops = {
>>       .reset = ngbe_reset_tx_queue,
>>   };
>> +/* Takes an ethdev and a queue and sets up the tx function to be used based on
>> + * the queue parameters. Used in tx_queue_setup by primary process and then
>> + * in dev_init by secondary process when attaching to an existing ethdev.
>> + */
>> +void __rte_cold
>> +ngbe_set_tx_function(struct rte_eth_dev *dev, struct ngbe_tx_queue *txq)
>> +{
>> +    /* Use a simple Tx queue (no offloads, no multi segs) if possible */
>> +    if (txq->offloads == 0 &&
>> +            txq->tx_free_thresh >= RTE_PMD_NGBE_TX_MAX_BURST) {
>> +        PMD_INIT_LOG(DEBUG, "Using simple tx code path");
>> +        dev->tx_pkt_burst = ngbe_xmit_pkts_simple;
>> +    } else {
>> +        PMD_INIT_LOG(DEBUG, "Using full-featured tx code path");
>> +        PMD_INIT_LOG(DEBUG,
>> +                " - offloads = 0x%" PRIx64,
>> +                txq->offloads);
>> +        PMD_INIT_LOG(DEBUG,
>> +                " - tx_free_thresh = %lu 
>> [RTE_PMD_NGBE_TX_MAX_BURST=%lu]",
>> +                (unsigned long)txq->tx_free_thresh,
>> +                (unsigned long)RTE_PMD_NGBE_TX_MAX_BURST);
>> +        dev->tx_pkt_burst = ngbe_xmit_pkts;
>> +    }
>> +}
>> +
>>   uint64_t
>>   ngbe_get_tx_port_offloads(struct rte_eth_dev *dev)
>>   {
>> @@ -1262,6 +1898,9 @@ ngbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
>>       PMD_INIT_LOG(DEBUG, "sw_ring=%p hw_ring=%p dma_addr=0x%" PRIx64,
>>                txq->sw_ring, txq->tx_ring, txq->tx_ring_phys_addr);
>> +    /* set up scalar TX function as appropriate */
>> +    ngbe_set_tx_function(dev, txq);
>> +
>>       txq->ops->reset(txq);
>>       dev->data->tx_queues[queue_idx] = txq;
>> diff --git a/drivers/net/ngbe/ngbe_rxtx.h b/drivers/net/ngbe/ngbe_rxtx.h
>> index 4b8596b24a..2cb98e2497 100644
>> --- a/drivers/net/ngbe/ngbe_rxtx.h
>> +++ b/drivers/net/ngbe/ngbe_rxtx.h
>> @@ -135,8 +135,35 @@ struct ngbe_tx_ctx_desc {
>>       __le32 dw3; /* w.mss_l4len_idx    */
>>   };
>> +/* @ngbe_tx_ctx_desc.dw0 */
>> +#define NGBE_TXD_IPLEN(v)         LS(v, 0, 0x1FF) /* ip/fcoe header end */
>> +#define NGBE_TXD_MACLEN(v)        LS(v, 9, 0x7F) /* desc mac len */
>> +#define NGBE_TXD_VLAN(v)          LS(v, 16, 0xFFFF) /* vlan tag */
>> +
>> +/* @ngbe_tx_ctx_desc.dw1 */
>> +/*** bit 0-31, when NGBE_TXD_DTYP_FCOE=0 ***/
>> +#define NGBE_TXD_IPSEC_SAIDX(v)   LS(v, 0, 0x3FF) /* ipsec SA index */
>> +#define NGBE_TXD_ETYPE(v)         LS(v, 11, 0x1) /* tunnel type */
>> +#define NGBE_TXD_ETYPE_UDP        LS(0, 11, 0x1)
>> +#define NGBE_TXD_ETYPE_GRE        LS(1, 11, 0x1)
>> +#define NGBE_TXD_EIPLEN(v)        LS(v, 12, 0x7F) /* tunnel ip header */
>> +#define NGBE_TXD_DTYP_FCOE        MS(16, 0x1) /* FCoE/IP descriptor */
>> +#define NGBE_TXD_ETUNLEN(v)       LS(v, 21, 0xFF) /* tunnel header */
>> +#define NGBE_TXD_DECTTL(v)        LS(v, 29, 0xF) /* decrease ip TTL */
>> +
>> +/* @ngbe_tx_ctx_desc.dw2 */
>> +#define NGBE_TXD_IPSEC_ESPLEN(v)  LS(v, 1, 0x1FF) /* ipsec ESP length */
>> +#define NGBE_TXD_SNAP             MS(10, 0x1) /* SNAP indication */
>> +#define NGBE_TXD_TPID_SEL(v)      LS(v, 11, 0x7) /* vlan tag index */
>> +#define NGBE_TXD_IPSEC_ESP        MS(14, 0x1) /* ipsec type: esp=1 ah=0 */
>> +#define NGBE_TXD_IPSEC_ESPENC     MS(15, 0x1) /* ESP encrypt */
>> +#define NGBE_TXD_CTXT             MS(20, 0x1) /* context descriptor */
>> +#define NGBE_TXD_PTID(v)          LS(v, 24, 0xFF) /* packet type */
>>   /* @ngbe_tx_ctx_desc.dw3 */
>>   #define NGBE_TXD_DD               MS(0, 0x1) /* descriptor done */
>> +#define NGBE_TXD_IDX(v)           LS(v, 4, 0x1) /* ctxt desc index */
>> +#define NGBE_TXD_L4LEN(v)         LS(v, 8, 0xFF) /* l4 header length */
>> +#define NGBE_TXD_MSS(v)           LS(v, 16, 0xFFFF) /* l4 MSS */
>>   /**
>>    * Transmit Data Descriptor (NGBE_TXD_TYP=DATA)
>> @@ -250,11 +277,34 @@ enum ngbe_ctx_num {
>>       NGBE_CTX_NUM  = 2, /**< CTX NUMBER  */
>>   };
>> +/** Offload features */
>> +union ngbe_tx_offload {
>> +    uint64_t data[2];
>> +    struct {
>> +        uint64_t ptid:8; /**< Packet Type Identifier. */
>> +        uint64_t l2_len:7; /**< L2 (MAC) Header Length. */
>> +        uint64_t l3_len:9; /**< L3 (IP) Header Length. */
>> +        uint64_t l4_len:8; /**< L4 (TCP/UDP) Header Length. */
>> +        uint64_t tso_segsz:16; /**< TCP TSO segment size */
>> +        uint64_t vlan_tci:16;
>> +        /**< VLAN Tag Control Identifier (CPU order). */
>> +
>> +        /* fields for TX offloading of tunnels */
>> +        uint64_t outer_tun_len:8; /**< Outer TUN (Tunnel) Hdr Length. */
>> +        uint64_t outer_l2_len:8; /**< Outer L2 (MAC) Hdr Length. */
>> +        uint64_t outer_l3_len:16; /**< Outer L3 (IP) Hdr Length. */
>> +    };
>> +};
>> +
>>   /**
>>    * Structure to check if a new context needs to be built
>>    */
>>   struct ngbe_ctx_info {
>>       uint64_t flags;           /**< ol_flags for context build. */
>> +    /** tx offload: vlan, tso, l2-l3-l4 lengths. */
>> +    union ngbe_tx_offload tx_offload;
>> +    /** compare mask for tx offload. */
>> +    union ngbe_tx_offload tx_offload_mask;
>>   };
>>   /**
>> @@ -298,6 +348,12 @@ struct ngbe_txq_ops {
>>       void (*reset)(struct ngbe_tx_queue *txq);
>>   };
>> +/* Takes an ethdev and a queue and sets up the tx function to be used based on
>> + * the queue parameters. Used in tx_queue_setup by primary process and then
>> + * in dev_init by secondary process when attaching to an existing ethdev.
>> + */
>> +void ngbe_set_tx_function(struct rte_eth_dev *dev, struct ngbe_tx_queue *txq);
>> +
>>   void ngbe_set_rx_function(struct rte_eth_dev *dev);
>>   uint64_t ngbe_get_tx_port_offloads(struct rte_eth_dev *dev);
>>


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [dpdk-dev] [PATCH v5 22/24] net/ngbe: add device start operation
  2021-06-02  9:41 ` [dpdk-dev] [PATCH v5 22/24] net/ngbe: add device start operation Jiawen Wu
@ 2021-06-14 19:33   ` Andrew Rybchenko
  0 siblings, 0 replies; 51+ messages in thread
From: Andrew Rybchenko @ 2021-06-14 19:33 UTC (permalink / raw)
  To: Jiawen Wu, dev

On 6/2/21 12:41 PM, Jiawen Wu wrote:
> Setup misx interrupt, complete PHY configuration and set device link speed,

msix -> MSI-X

> to start device.

Shouldn't stop go together with start?
The patch should be testable.

> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
> ---
>   drivers/net/ngbe/base/ngbe_dummy.h   |  16 +
>   drivers/net/ngbe/base/ngbe_hw.c      |  50 ++++
>   drivers/net/ngbe/base/ngbe_hw.h      |   4 +
>   drivers/net/ngbe/base/ngbe_phy.c     |   3 +
>   drivers/net/ngbe/base/ngbe_phy_mvl.c |  64 ++++
>   drivers/net/ngbe/base/ngbe_phy_mvl.h |   1 +
>   drivers/net/ngbe/base/ngbe_phy_rtl.c |  58 ++++
>   drivers/net/ngbe/base/ngbe_phy_rtl.h |   2 +
>   drivers/net/ngbe/base/ngbe_phy_yt.c  |  26 ++
>   drivers/net/ngbe/base/ngbe_phy_yt.h  |   1 +
>   drivers/net/ngbe/base/ngbe_type.h    |  17 ++
>   drivers/net/ngbe/ngbe_ethdev.c       | 420 ++++++++++++++++++++++++++-
>   drivers/net/ngbe/ngbe_ethdev.h       |   6 +
>   13 files changed, 667 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/net/ngbe/base/ngbe_dummy.h b/drivers/net/ngbe/base/ngbe_dummy.h
> index 709e01659c..dfc7b13192 100644
> --- a/drivers/net/ngbe/base/ngbe_dummy.h
> +++ b/drivers/net/ngbe/base/ngbe_dummy.h
> @@ -47,6 +47,10 @@ static inline s32 ngbe_mac_reset_hw_dummy(struct ngbe_hw *TUP0)
>   {
>   	return NGBE_ERR_OPS_DUMMY;
>   }
> +static inline s32 ngbe_mac_start_hw_dummy(struct ngbe_hw *TUP0)
> +{
> +	return NGBE_ERR_OPS_DUMMY;
> +}
>   static inline s32 ngbe_mac_stop_hw_dummy(struct ngbe_hw *TUP0)
>   {
>   	return NGBE_ERR_OPS_DUMMY;
> @@ -74,6 +78,11 @@ static inline s32 ngbe_mac_check_link_dummy(struct ngbe_hw *TUP0, u32 *TUP1,
>   {
>   	return NGBE_ERR_OPS_DUMMY;
>   }
> +static inline s32 ngbe_mac_get_link_capabilities_dummy(struct ngbe_hw *TUP0,
> +					u32 *TUP1, bool *TUP2)
> +{
> +	return NGBE_ERR_OPS_DUMMY;
> +}
>   static inline s32 ngbe_mac_set_rar_dummy(struct ngbe_hw *TUP0, u32 TUP1,
>   					u8 *TUP2, u32 TUP3, u32 TUP4)
>   {
> @@ -110,6 +119,10 @@ static inline s32 ngbe_phy_identify_dummy(struct ngbe_hw *TUP0)
>   {
>   	return NGBE_ERR_OPS_DUMMY;
>   }
> +static inline s32 ngbe_phy_init_hw_dummy(struct ngbe_hw *TUP0)
> +{
> +	return NGBE_ERR_OPS_DUMMY;
> +}
>   static inline s32 ngbe_phy_reset_hw_dummy(struct ngbe_hw *TUP0)
>   {
>   	return NGBE_ERR_OPS_DUMMY;
> @@ -151,12 +164,14 @@ static inline void ngbe_init_ops_dummy(struct ngbe_hw *hw)
>   	hw->rom.validate_checksum = ngbe_rom_validate_checksum_dummy;
>   	hw->mac.init_hw = ngbe_mac_init_hw_dummy;
>   	hw->mac.reset_hw = ngbe_mac_reset_hw_dummy;
> +	hw->mac.start_hw = ngbe_mac_start_hw_dummy;
>   	hw->mac.stop_hw = ngbe_mac_stop_hw_dummy;
>   	hw->mac.get_mac_addr = ngbe_mac_get_mac_addr_dummy;
>   	hw->mac.acquire_swfw_sync = ngbe_mac_acquire_swfw_sync_dummy;
>   	hw->mac.release_swfw_sync = ngbe_mac_release_swfw_sync_dummy;
>   	hw->mac.setup_link = ngbe_mac_setup_link_dummy;
>   	hw->mac.check_link = ngbe_mac_check_link_dummy;
> +	hw->mac.get_link_capabilities = ngbe_mac_get_link_capabilities_dummy;
>   	hw->mac.set_rar = ngbe_mac_set_rar_dummy;
>   	hw->mac.clear_rar = ngbe_mac_clear_rar_dummy;
>   	hw->mac.set_vmdq = ngbe_mac_set_vmdq_dummy;
> @@ -165,6 +180,7 @@ static inline void ngbe_init_ops_dummy(struct ngbe_hw *hw)
>   	hw->mac.init_thermal_sensor_thresh = ngbe_mac_init_thermal_ssth_dummy;
>   	hw->mac.check_overtemp = ngbe_mac_check_overtemp_dummy;
>   	hw->phy.identify = ngbe_phy_identify_dummy;
> +	hw->phy.init_hw = ngbe_phy_init_hw_dummy;
>   	hw->phy.reset_hw = ngbe_phy_reset_hw_dummy;
>   	hw->phy.read_reg = ngbe_phy_read_reg_dummy;
>   	hw->phy.write_reg = ngbe_phy_write_reg_dummy;
> diff --git a/drivers/net/ngbe/base/ngbe_hw.c b/drivers/net/ngbe/base/ngbe_hw.c
> index 00ac4ce838..b0bc714741 100644
> --- a/drivers/net/ngbe/base/ngbe_hw.c
> +++ b/drivers/net/ngbe/base/ngbe_hw.c
> @@ -9,6 +9,22 @@
>   #include "ngbe_mng.h"
>   #include "ngbe_hw.h"
>   
> +/**
> + *  ngbe_start_hw - Prepare hardware for Tx/Rx
> + *  @hw: pointer to hardware structure
> + *
> + *  Starts the hardware.
> + **/
> +s32 ngbe_start_hw(struct ngbe_hw *hw)
> +{
> +	DEBUGFUNC("ngbe_start_hw");
> +
> +	/* Clear adapter stopped flag */
> +	hw->adapter_stopped = false;
> +
> +	return 0;
> +}
> +
>   /**
>    *  ngbe_init_hw - Generic hardware initialization
>    *  @hw: pointer to hardware structure
> @@ -27,6 +43,10 @@ s32 ngbe_init_hw(struct ngbe_hw *hw)
>   
>   	/* Reset the hardware */
>   	status = hw->mac.reset_hw(hw);
> +	if (status == 0) {
> +		/* Start the HW */
> +		status = hw->mac.start_hw(hw);
> +	}
>   
>   	if (status != 0)
>   		DEBUGOUT("Failed to initialize HW, STATUS = %d\n", status);
> @@ -633,6 +653,30 @@ s32 ngbe_check_mac_link_em(struct ngbe_hw *hw, u32 *speed,
>   	return status;
>   }
>   
> +s32 ngbe_get_link_capabilities_em(struct ngbe_hw *hw,
> +				      u32 *speed,
> +				      bool *autoneg)
> +{
> +	s32 status = 0;
> +
> +	DEBUGFUNC("ngbe_get_link_capabilities_em");
> +
> +	switch (hw->sub_device_id) {
> +	case NGBE_SUB_DEV_ID_EM_RTL_SGMII:
> +		*speed = NGBE_LINK_SPEED_1GB_FULL |
> +			NGBE_LINK_SPEED_100M_FULL |
> +			NGBE_LINK_SPEED_10M_FULL;
> +		*autoneg = false;
> +		hw->phy.link_mode = NGBE_PHYSICAL_LAYER_1000BASE_T |
> +				NGBE_PHYSICAL_LAYER_100BASE_TX;
> +		break;
> +	default:
> +		break;
> +	}
> +
> +	return status;
> +}
> +
>   s32 ngbe_setup_mac_link_em(struct ngbe_hw *hw,
>   			       u32 speed,
>   			       bool autoneg_wait_to_complete)
> @@ -842,6 +886,7 @@ s32 ngbe_init_ops_pf(struct ngbe_hw *hw)
>   	/* MAC */
>   	mac->init_hw = ngbe_init_hw;
>   	mac->reset_hw = ngbe_reset_hw_em;
> +	mac->start_hw = ngbe_start_hw;
>   	mac->get_mac_addr = ngbe_get_mac_addr;
>   	mac->stop_hw = ngbe_stop_hw;
>   	mac->acquire_swfw_sync = ngbe_acquire_swfw_sync;
> @@ -855,6 +900,7 @@ s32 ngbe_init_ops_pf(struct ngbe_hw *hw)
>   	mac->clear_vmdq = ngbe_clear_vmdq;
>   
>   	/* Link */
> +	mac->get_link_capabilities = ngbe_get_link_capabilities_em;
>   	mac->check_link = ngbe_check_mac_link_em;
>   	mac->setup_link = ngbe_setup_mac_link_em;
>   
> @@ -871,6 +917,10 @@ s32 ngbe_init_ops_pf(struct ngbe_hw *hw)
>   	mac->max_rx_queues	= NGBE_EM_MAX_RX_QUEUES;
>   	mac->max_tx_queues	= NGBE_EM_MAX_TX_QUEUES;
>   
> +	mac->default_speeds = NGBE_LINK_SPEED_10M_FULL |
> +				NGBE_LINK_SPEED_100M_FULL |
> +				NGBE_LINK_SPEED_1GB_FULL;
> +
>   	return 0;
>   }
>   
> diff --git a/drivers/net/ngbe/base/ngbe_hw.h b/drivers/net/ngbe/base/ngbe_hw.h
> index 1689223168..4fee5735ac 100644
> --- a/drivers/net/ngbe/base/ngbe_hw.h
> +++ b/drivers/net/ngbe/base/ngbe_hw.h
> @@ -14,6 +14,7 @@
>   #define NGBE_EM_MC_TBL_SIZE   32
>   
>   s32 ngbe_init_hw(struct ngbe_hw *hw);
> +s32 ngbe_start_hw(struct ngbe_hw *hw);
>   s32 ngbe_reset_hw_em(struct ngbe_hw *hw);
>   s32 ngbe_stop_hw(struct ngbe_hw *hw);
>   s32 ngbe_get_mac_addr(struct ngbe_hw *hw, u8 *mac_addr);
> @@ -22,6 +23,9 @@ void ngbe_set_lan_id_multi_port(struct ngbe_hw *hw);
>   
>   s32 ngbe_check_mac_link_em(struct ngbe_hw *hw, u32 *speed,
>   			bool *link_up, bool link_up_wait_to_complete);
> +s32 ngbe_get_link_capabilities_em(struct ngbe_hw *hw,
> +				      u32 *speed,
> +				      bool *autoneg);
>   s32 ngbe_setup_mac_link_em(struct ngbe_hw *hw,
>   			       u32 speed,
>   			       bool autoneg_wait_to_complete);
> diff --git a/drivers/net/ngbe/base/ngbe_phy.c b/drivers/net/ngbe/base/ngbe_phy.c
> index 7a9baada81..47a5687b48 100644
> --- a/drivers/net/ngbe/base/ngbe_phy.c
> +++ b/drivers/net/ngbe/base/ngbe_phy.c
> @@ -426,16 +426,19 @@ s32 ngbe_init_phy(struct ngbe_hw *hw)
>   	/* Set necessary function pointers based on PHY type */
>   	switch (hw->phy.type) {
>   	case ngbe_phy_rtl:
> +		hw->phy.init_hw = ngbe_init_phy_rtl;
>   		hw->phy.check_link = ngbe_check_phy_link_rtl;
>   		hw->phy.setup_link = ngbe_setup_phy_link_rtl;
>   		break;
>   	case ngbe_phy_mvl:
>   	case ngbe_phy_mvl_sfi:
> +		hw->phy.init_hw = ngbe_init_phy_mvl;
>   		hw->phy.check_link = ngbe_check_phy_link_mvl;
>   		hw->phy.setup_link = ngbe_setup_phy_link_mvl;
>   		break;
>   	case ngbe_phy_yt8521s:
>   	case ngbe_phy_yt8521s_sfi:
> +		hw->phy.init_hw = ngbe_init_phy_yt;
>   		hw->phy.check_link = ngbe_check_phy_link_yt;
>   		hw->phy.setup_link = ngbe_setup_phy_link_yt;
>   	default:
> diff --git a/drivers/net/ngbe/base/ngbe_phy_mvl.c b/drivers/net/ngbe/base/ngbe_phy_mvl.c
> index a1c055e238..33d21edfce 100644
> --- a/drivers/net/ngbe/base/ngbe_phy_mvl.c
> +++ b/drivers/net/ngbe/base/ngbe_phy_mvl.c
> @@ -48,6 +48,70 @@ s32 ngbe_write_phy_reg_mvl(struct ngbe_hw *hw,
>   	return 0;
>   }
>   
> +s32 ngbe_init_phy_mvl(struct ngbe_hw *hw)
> +{
> +	s32 ret_val = 0;
> +	u16 value = 0;
> +	int i;
> +
> +	DEBUGFUNC("ngbe_init_phy_mvl");
> +
> +	/* enable interrupts, only link status change and AN done is allowed */
> +	ngbe_write_phy_reg_mdi(hw, MVL_PAGE_SEL, 0, 2);
> +	ngbe_read_phy_reg_mdi(hw, MVL_RGM_CTL2, 0, &value);
> +	value &= ~MVL_RGM_CTL2_TTC;
> +	value |= MVL_RGM_CTL2_RTC;
> +	ngbe_write_phy_reg_mdi(hw, MVL_RGM_CTL2, 0, value);
> +
> +	hw->phy.write_reg(hw, MVL_CTRL, 0, MVL_CTRL_RESET);
> +	for (i = 0; i < 15; i++) {
> +		ngbe_read_phy_reg_mdi(hw, MVL_CTRL, 0, &value);
> +		if (value & MVL_CTRL_RESET)
> +			msleep(1);
> +		else
> +			break;
> +	}
> +
> +	if (i == 15) {
> +		DEBUGOUT("phy reset exceeds maximum waiting period.\n");
> +		return NGBE_ERR_TIMEOUT;
> +	}
> +
> +	ret_val = hw->phy.reset_hw(hw);
> +	if (ret_val)
> +		return ret_val;
> +
> +	/* set LED2 to interrupt output and INTn active low */
> +	ngbe_write_phy_reg_mdi(hw, MVL_PAGE_SEL, 0, 3);
> +	ngbe_read_phy_reg_mdi(hw, MVL_LEDTCR, 0, &value);
> +	value |= MVL_LEDTCR_INTR_EN;
> +	value &= ~(MVL_LEDTCR_INTR_POL);
> +	ngbe_write_phy_reg_mdi(hw, MVL_LEDTCR, 0, value);
> +
> +	if (hw->phy.type == ngbe_phy_mvl_sfi) {
> +		hw->phy.read_reg(hw, MVL_CTRL1, 0, &value);
> +		value &= ~MVL_CTRL1_INTR_POL;
> +		ngbe_write_phy_reg_mdi(hw, MVL_CTRL1, 0, value);
> +	}
> +
> +	/* enable link status change and AN complete interrupts */
> +	value = MVL_INTR_EN_ANC | MVL_INTR_EN_LSC;
> +	hw->phy.write_reg(hw, MVL_INTR_EN, 0, value);
> +
> +	/* LED control */
> +	ngbe_write_phy_reg_mdi(hw, MVL_PAGE_SEL, 0, 3);
> +	ngbe_read_phy_reg_mdi(hw, MVL_LEDFCR, 0, &value);
> +	value &= ~(MVL_LEDFCR_CTL0 | MVL_LEDFCR_CTL1);
> +	value |= MVL_LEDFCR_CTL0_CONF | MVL_LEDFCR_CTL1_CONF;
> +	ngbe_write_phy_reg_mdi(hw, MVL_LEDFCR, 0, value);
> +	ngbe_read_phy_reg_mdi(hw, MVL_LEDPCR, 0, &value);
> +	value &= ~(MVL_LEDPCR_CTL0 | MVL_LEDPCR_CTL1);
> +	value |= MVL_LEDPCR_CTL0_CONF | MVL_LEDPCR_CTL1_CONF;
> +	ngbe_write_phy_reg_mdi(hw, MVL_LEDPCR, 0, value);
> +
> +	return ret_val;
> +}
> +
>   s32 ngbe_setup_phy_link_mvl(struct ngbe_hw *hw, u32 speed,
>   				bool autoneg_wait_to_complete)
>   {
> diff --git a/drivers/net/ngbe/base/ngbe_phy_mvl.h b/drivers/net/ngbe/base/ngbe_phy_mvl.h
> index a663a429dd..34cb1e838a 100644
> --- a/drivers/net/ngbe/base/ngbe_phy_mvl.h
> +++ b/drivers/net/ngbe/base/ngbe_phy_mvl.h
> @@ -86,6 +86,7 @@ s32 ngbe_read_phy_reg_mvl(struct ngbe_hw *hw, u32 reg_addr, u32 device_type,
>   			u16 *phy_data);
>   s32 ngbe_write_phy_reg_mvl(struct ngbe_hw *hw, u32 reg_addr, u32 device_type,
>   			u16 phy_data);
> +s32 ngbe_init_phy_mvl(struct ngbe_hw *hw);
>   
>   s32 ngbe_reset_phy_mvl(struct ngbe_hw *hw);
>   
> diff --git a/drivers/net/ngbe/base/ngbe_phy_rtl.c b/drivers/net/ngbe/base/ngbe_phy_rtl.c
> index 5214ce5a8a..535259d8fd 100644
> --- a/drivers/net/ngbe/base/ngbe_phy_rtl.c
> +++ b/drivers/net/ngbe/base/ngbe_phy_rtl.c
> @@ -36,6 +36,64 @@ s32 ngbe_write_phy_reg_rtl(struct ngbe_hw *hw,
>   	return 0;
>   }
>   
> +s32 ngbe_init_phy_rtl(struct ngbe_hw *hw)
> +{
> +	int i;
> +	u16 value = 0;
> +
> +	/* enable interrupts, only link status change and AN done is allowed */
> +	value = RTL_INER_LSC | RTL_INER_ANC;
> +	hw->phy.write_reg(hw, RTL_INER, 0xa42, value);
> +
> +	hw->phy.read_reg(hw, RTL_INSR, 0xa43, &value);
> +
> +	for (i = 0; i < 15; i++) {
> +		if (!rd32m(hw, NGBE_STAT,
> +			NGBE_STAT_GPHY_IN_RST(hw->bus.lan_id)))
> +			break;
> +
> +		msec_delay(10);
> +	}
> +	if (i == 15) {
> +		DEBUGOUT("GPhy reset exceeds maximum retries.\n");
> +		return NGBE_ERR_PHY_TIMEOUT;
> +	}
> +
> +	for (i = 0; i < 1000; i++) {
> +		hw->phy.read_reg(hw, RTL_INSR, 0xa43, &value);
> +		if (value & RTL_INSR_ACCESS)
> +			break;
> +	}
> +
> +	hw->phy.write_reg(hw, RTL_SCR, 0xa46, RTL_SCR_EFUSE);
> +	for (i = 0; i < 1000; i++) {
> +		hw->phy.read_reg(hw, RTL_INSR, 0xa43, &value);
> +		if (value & RTL_INSR_ACCESS)
> +			break;
> +	}
> +	if (i == 1000)
> +		return NGBE_ERR_PHY_TIMEOUT;
> +
> +	hw->phy.write_reg(hw, RTL_SCR, 0xa46, RTL_SCR_EXTINI);
> +	for (i = 0; i < 1000; i++) {
> +		hw->phy.read_reg(hw, RTL_INSR, 0xa43, &value);
> +		if (value & RTL_INSR_ACCESS)
> +			break;
> +	}
> +	if (i == 1000)
> +		return NGBE_ERR_PHY_TIMEOUT;
> +
> +	for (i = 0; i < 1000; i++) {
> +		hw->phy.read_reg(hw, RTL_GSR, 0xa42, &value);
> +		if ((value & RTL_GSR_ST) == RTL_GSR_ST_LANON)
> +			break;
> +	}
> +	if (i == 1000)
> +		return NGBE_ERR_PHY_TIMEOUT;
> +
> +	return 0;
> +}
> +
>   /**
>    *  ngbe_setup_phy_link_rtl - Set and restart auto-neg
>    *  @hw: pointer to hardware structure
> diff --git a/drivers/net/ngbe/base/ngbe_phy_rtl.h b/drivers/net/ngbe/base/ngbe_phy_rtl.h
> index e8bc4a1bd7..e6e7df5254 100644
> --- a/drivers/net/ngbe/base/ngbe_phy_rtl.h
> +++ b/drivers/net/ngbe/base/ngbe_phy_rtl.h
> @@ -80,6 +80,8 @@ s32 ngbe_write_phy_reg_rtl(struct ngbe_hw *hw, u32 reg_addr, u32 device_type,
>   
>   s32 ngbe_setup_phy_link_rtl(struct ngbe_hw *hw,
>   		u32 speed, bool autoneg_wait_to_complete);
> +
> +s32 ngbe_init_phy_rtl(struct ngbe_hw *hw);
>   s32 ngbe_reset_phy_rtl(struct ngbe_hw *hw);
>   s32 ngbe_check_phy_link_rtl(struct ngbe_hw *hw,
>   			u32 *speed, bool *link_up);
> diff --git a/drivers/net/ngbe/base/ngbe_phy_yt.c b/drivers/net/ngbe/base/ngbe_phy_yt.c
> index f518dc0af6..94d3430fa4 100644
> --- a/drivers/net/ngbe/base/ngbe_phy_yt.c
> +++ b/drivers/net/ngbe/base/ngbe_phy_yt.c
> @@ -98,6 +98,32 @@ s32 ngbe_write_phy_reg_sds_ext_yt(struct ngbe_hw *hw,
>   	return 0;
>   }
>   
> +s32 ngbe_init_phy_yt(struct ngbe_hw *hw)
> +{
> +	u16 value = 0;
> +
> +	DEBUGFUNC("ngbe_init_phy_yt");
> +
> +	if (hw->phy.type != ngbe_phy_yt8521s_sfi)
> +		return 0;
> +
> +	/* select sds area register */
> +	ngbe_write_phy_reg_ext_yt(hw, YT_SMI_PHY, 0, 0);
> +	/* enable interrupts */
> +	ngbe_write_phy_reg_mdi(hw, YT_INTR, 0, YT_INTR_ENA_MASK);
> +
> +	/* select fiber_to_rgmii first in multiplex */
> +	ngbe_read_phy_reg_ext_yt(hw, YT_MISC, 0, &value);
> +	value |= YT_MISC_FIBER_PRIO;
> +	ngbe_write_phy_reg_ext_yt(hw, YT_MISC, 0, value);
> +
> +	hw->phy.read_reg(hw, YT_BCR, 0, &value);
> +	value |= YT_BCR_PWDN;
> +	hw->phy.write_reg(hw, YT_BCR, 0, value);
> +
> +	return 0;
> +}
> +
>   s32 ngbe_setup_phy_link_yt(struct ngbe_hw *hw, u32 speed,
>   				bool autoneg_wait_to_complete)
>   {
> diff --git a/drivers/net/ngbe/base/ngbe_phy_yt.h b/drivers/net/ngbe/base/ngbe_phy_yt.h
> index 26820ecb92..5babd841c1 100644
> --- a/drivers/net/ngbe/base/ngbe_phy_yt.h
> +++ b/drivers/net/ngbe/base/ngbe_phy_yt.h
> @@ -65,6 +65,7 @@ s32 ngbe_read_phy_reg_sds_ext_yt(struct ngbe_hw *hw,
>   		u32 reg_addr, u32 device_type, u16 *phy_data);
>   s32 ngbe_write_phy_reg_sds_ext_yt(struct ngbe_hw *hw,
>   		u32 reg_addr, u32 device_type, u16 phy_data);
> +s32 ngbe_init_phy_yt(struct ngbe_hw *hw);
>   
>   s32 ngbe_reset_phy_yt(struct ngbe_hw *hw);
>   
> diff --git a/drivers/net/ngbe/base/ngbe_type.h b/drivers/net/ngbe/base/ngbe_type.h
> index bc99d9c3db..601fb85b91 100644
> --- a/drivers/net/ngbe/base/ngbe_type.h
> +++ b/drivers/net/ngbe/base/ngbe_type.h
> @@ -26,6 +26,12 @@ struct ngbe_thermal_sensor_data {
>   	struct ngbe_thermal_diode_data sensor[1];
>   };
>   
> +/* Physical layer type */
> +#define NGBE_PHYSICAL_LAYER_UNKNOWN		0
> +#define NGBE_PHYSICAL_LAYER_10GBASE_T		0x00001
> +#define NGBE_PHYSICAL_LAYER_1000BASE_T		0x00002
> +#define NGBE_PHYSICAL_LAYER_100BASE_TX		0x00004
> +
>   enum ngbe_eeprom_type {
>   	ngbe_eeprom_unknown = 0,
>   	ngbe_eeprom_spi,
> @@ -93,15 +99,20 @@ struct ngbe_rom_info {
>   struct ngbe_mac_info {
>   	s32 (*init_hw)(struct ngbe_hw *hw);
>   	s32 (*reset_hw)(struct ngbe_hw *hw);
> +	s32 (*start_hw)(struct ngbe_hw *hw);
>   	s32 (*stop_hw)(struct ngbe_hw *hw);
>   	s32 (*get_mac_addr)(struct ngbe_hw *hw, u8 *mac_addr);
>   	s32 (*acquire_swfw_sync)(struct ngbe_hw *hw, u32 mask);
>   	void (*release_swfw_sync)(struct ngbe_hw *hw, u32 mask);
>   
> +	/* Link */
>   	s32 (*setup_link)(struct ngbe_hw *hw, u32 speed,
>   			       bool autoneg_wait_to_complete);
>   	s32 (*check_link)(struct ngbe_hw *hw, u32 *speed,
>   			       bool *link_up, bool link_up_wait_to_complete);
> +	s32 (*get_link_capabilities)(struct ngbe_hw *hw,
> +				      u32 *speed, bool *autoneg);
> +
>   	/* RAR */
>   	s32 (*set_rar)(struct ngbe_hw *hw, u32 index, u8 *addr, u32 vmdq,
>   			  u32 enable_addr);
> @@ -126,10 +137,13 @@ struct ngbe_mac_info {
>   	struct ngbe_thermal_sensor_data  thermal_sensor_data;
>   	bool set_lben;
>   	u32  max_link_up_time;
> +
> +	u32 default_speeds;
>   };
>   
>   struct ngbe_phy_info {
>   	s32 (*identify)(struct ngbe_hw *hw);
> +	s32 (*init_hw)(struct ngbe_hw *hw);
>   	s32 (*reset_hw)(struct ngbe_hw *hw);
>   	s32 (*read_reg)(struct ngbe_hw *hw, u32 reg_addr,
>   				u32 device_type, u16 *phy_data);
> @@ -151,6 +165,7 @@ struct ngbe_phy_info {
>   	u32 phy_semaphore_mask;
>   	bool reset_disable;
>   	u32 autoneg_advertised;
> +	u32 link_mode;
>   };
>   
>   enum ngbe_isb_idx {
> @@ -178,6 +193,8 @@ struct ngbe_hw {
>   
>   	uint64_t isb_dma;
>   	void IOMEM *isb_mem;
> +	u16 nb_rx_queues;
> +	u16 nb_tx_queues;
>   
>   	bool is_pf;
>   };
> diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
> index 1a6419e5a4..3812663591 100644
> --- a/drivers/net/ngbe/ngbe_ethdev.c
> +++ b/drivers/net/ngbe/ngbe_ethdev.c
> @@ -14,9 +14,17 @@
>   #include "ngbe_rxtx.h"
>   
>   static int ngbe_dev_close(struct rte_eth_dev *dev);
> -
> +static int ngbe_dev_link_update(struct rte_eth_dev *dev,
> +				int wait_to_complete);
> +
> +static void ngbe_dev_link_status_print(struct rte_eth_dev *dev);
> +static int ngbe_dev_lsc_interrupt_setup(struct rte_eth_dev *dev, uint8_t on);
> +static int ngbe_dev_macsec_interrupt_setup(struct rte_eth_dev *dev);
> +static int ngbe_dev_misc_interrupt_setup(struct rte_eth_dev *dev);
> +static int ngbe_dev_rxq_interrupt_setup(struct rte_eth_dev *dev);
>   static void ngbe_dev_interrupt_handler(void *param);
>   static void ngbe_dev_interrupt_delayed_handler(void *param);
> +static void ngbe_configure_msix(struct rte_eth_dev *dev);
>   
>   /*
>    * The set of PCI devices this driver supports
> @@ -52,6 +60,25 @@ static const struct rte_eth_desc_lim tx_desc_lim = {
>   };
>   
>   static const struct eth_dev_ops ngbe_eth_dev_ops;
> +static inline int32_t
> +ngbe_pf_reset_hw(struct ngbe_hw *hw)
> +{
> +	uint32_t ctrl_ext;
> +	int32_t status;
> +
> +	status = hw->mac.reset_hw(hw);
> +
> +	ctrl_ext = rd32(hw, NGBE_PORTCTL);
> +	/* Set PF Reset Done bit so PF/VF Mail Ops can work */
> +	ctrl_ext |= NGBE_PORTCTL_RSTDONE;
> +	wr32(hw, NGBE_PORTCTL, ctrl_ext);
> +	ngbe_flush(hw);
> +
> +	if (status == NGBE_ERR_SFP_NOT_PRESENT)
> +		status = 0;
> +	return status;
> +}
> +
>   static inline void
>   ngbe_enable_intr(struct rte_eth_dev *dev)
>   {
> @@ -217,9 +244,18 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
>   	ctrl_ext = rd32(hw, NGBE_PORTCTL);
>   	/* let hardware know driver is loaded */
>   	ctrl_ext |= NGBE_PORTCTL_DRVLOAD;
> +	/* Set PF Reset Done bit so PF/VF Mail Ops can work */
> +	ctrl_ext |= NGBE_PORTCTL_RSTDONE;
>   	wr32(hw, NGBE_PORTCTL, ctrl_ext);
>   	ngbe_flush(hw);
>   
> +	PMD_INIT_LOG(DEBUG, "MAC: %d, PHY: %d",
> +			(int)hw->mac.type, (int)hw->phy.type);
> +
> +	PMD_INIT_LOG(DEBUG, "port %d vendorID=0x%x deviceID=0x%x",
> +		     eth_dev->data->port_id, pci_dev->id.vendor_id,
> +		     pci_dev->id.device_id);
> +
>   	rte_intr_callback_register(intr_handle,
>   				   ngbe_dev_interrupt_handler, eth_dev);
>   
> @@ -302,6 +338,196 @@ ngbe_dev_configure(struct rte_eth_dev *dev)
>   	return 0;
>   }
>   
> +static void
> +ngbe_dev_phy_intr_setup(struct rte_eth_dev *dev)
> +{
> +	struct ngbe_hw *hw = NGBE_DEV_HW(dev);
> +	struct ngbe_interrupt *intr = NGBE_DEV_INTR(dev);
> +
> +	wr32(hw, NGBE_GPIODIR, NGBE_GPIODIR_DDR(1));
> +	wr32(hw, NGBE_GPIOINTEN, NGBE_GPIOINTEN_INT(3));
> +	wr32(hw, NGBE_GPIOINTTYPE, NGBE_GPIOINTTYPE_LEVEL(0));
> +	if (hw->phy.type == ngbe_phy_yt8521s_sfi)
> +		wr32(hw, NGBE_GPIOINTPOL, NGBE_GPIOINTPOL_ACT(0));
> +	else
> +		wr32(hw, NGBE_GPIOINTPOL, NGBE_GPIOINTPOL_ACT(3));
> +
> +	intr->mask_misc |= NGBE_ICRMISC_GPIO;
> +}
> +
> +/*
> + * Configure device link speed and setup link.
> + * It returns 0 on success.
> + */
> +static int
> +ngbe_dev_start(struct rte_eth_dev *dev)
> +{
> +	struct ngbe_hw *hw = NGBE_DEV_HW(dev);
> +	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
> +	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
> +	uint32_t intr_vector = 0;
> +	int err;
> +	bool link_up = false, negotiate = 0;
> +	uint32_t speed = 0;
> +	uint32_t allowed_speeds = 0;
> +	int status;
> +	uint32_t *link_speeds;
> +
> +	PMD_INIT_FUNC_TRACE();
> +
> +	if (dev->data->dev_conf.link_speeds & ETH_LINK_SPEED_FIXED) {
> +		PMD_INIT_LOG(ERR,
> +		"Invalid link_speeds for port %u, fixed speed not supported",
> +				dev->data->port_id);
> +		return -EINVAL;
> +	}
> +
> +	/* disable uio/vfio intr/eventfd mapping */
> +	rte_intr_disable(intr_handle);
> +
> +	/* stop adapter */
> +	hw->adapter_stopped = 0;
> +	ngbe_stop_hw(hw);
> +
> +	/* reinitialize adapter
> +	 * this calls reset and start
> +	 */
> +	hw->nb_rx_queues = dev->data->nb_rx_queues;
> +	hw->nb_tx_queues = dev->data->nb_tx_queues;
> +	status = ngbe_pf_reset_hw(hw);
> +	if (status != 0)
> +		return -1;
> +	hw->mac.start_hw(hw);
> +	hw->mac.get_link_status = true;
> +
> +	ngbe_dev_phy_intr_setup(dev);
> +
> +	/* check and configure queue intr-vector mapping */
> +	if ((rte_intr_cap_multiple(intr_handle) ||
> +	     !RTE_ETH_DEV_SRIOV(dev).active) &&
> +	    dev->data->dev_conf.intr_conf.rxq != 0) {
> +		intr_vector = dev->data->nb_rx_queues;
> +		if (rte_intr_efd_enable(intr_handle, intr_vector))
> +			return -1;
> +	}
> +
> +	if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {

Compare intr_vec with NULL
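
E.g., an explicit NULL comparison per the DPDK coding style (an
illustrative sketch, not the committed code):

	if (rte_intr_dp_is_en(intr_handle) &&
	    intr_handle->intr_vec == NULL) {
		/* allocate intr_vec as in the patch */
	}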

> +		intr_handle->intr_vec =
> +			rte_zmalloc("intr_vec",
> +				    dev->data->nb_rx_queues * sizeof(int), 0);
> +		if (intr_handle->intr_vec == NULL) {
> +			PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
> +				     " intr_vec", dev->data->nb_rx_queues);
> +			return -ENOMEM;
> +		}
> +	}
> +
> +	/* configure msix for sleep until rx interrupt */

rx -> Rx

> +	ngbe_configure_msix(dev);
> +
> +	/* initialize transmission unit */
> +	ngbe_dev_tx_init(dev);
> +
> +	/* This can fail when allocating mbufs for descriptor rings */
> +	err = ngbe_dev_rx_init(dev);
> +	if (err) {

Compare with 0
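
I.e., following the DPDK convention of explicit comparisons (sketch):

	err = ngbe_dev_rx_init(dev);
	if (err != 0) {
		PMD_INIT_LOG(ERR, "Unable to initialize RX hardware");
		goto error;
	}

The same applies to the other "if (err)" checks below.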

> +		PMD_INIT_LOG(ERR, "Unable to initialize RX hardware");
> +		goto error;
> +	}
> +
> +	/* Skip link setup if loopback mode is enabled. */
> +	if (hw->is_pf && dev->data->dev_conf.lpbk_mode)
> +		goto skip_link_setup;
> +
> +	err = hw->mac.check_link(hw, &speed, &link_up, 0);
> +	if (err)

Compare with 0

> +		goto error;
> +	dev->data->dev_link.link_status = link_up;
> +
> +	err = hw->mac.get_link_capabilities(hw, &speed, &negotiate);
> +	if (err)

Compare with 0

> +		goto error;
> +
> +	allowed_speeds = 0;
> +	if (hw->mac.default_speeds & NGBE_LINK_SPEED_1GB_FULL)
> +		allowed_speeds |= ETH_LINK_SPEED_1G;
> +	if (hw->mac.default_speeds & NGBE_LINK_SPEED_100M_FULL)
> +		allowed_speeds |= ETH_LINK_SPEED_100M;
> +	if (hw->mac.default_speeds & NGBE_LINK_SPEED_10M_FULL)
> +		allowed_speeds |= ETH_LINK_SPEED_10M;
> +
> +	link_speeds = &dev->data->dev_conf.link_speeds;
> +	if (*link_speeds & ~allowed_speeds) {
> +		PMD_INIT_LOG(ERR, "Invalid link setting");
> +		goto error;
> +	}
> +
> +	speed = 0x0;
> +	if (*link_speeds == ETH_LINK_SPEED_AUTONEG) {
> +		speed = hw->mac.default_speeds;
> +	} else {
> +		if (*link_speeds & ETH_LINK_SPEED_1G)
> +			speed |= NGBE_LINK_SPEED_1GB_FULL;
> +		if (*link_speeds & ETH_LINK_SPEED_100M)
> +			speed |= NGBE_LINK_SPEED_100M_FULL;
> +		if (*link_speeds & ETH_LINK_SPEED_10M)
> +			speed |= NGBE_LINK_SPEED_10M_FULL;
> +	}
> +
> +	hw->phy.init_hw(hw);
> +	err = hw->mac.setup_link(hw, speed, link_up);
> +	if (err)

Compare with 0

> +		goto error;
> +
> +skip_link_setup:
> +
> +	if (rte_intr_allow_others(intr_handle)) {
> +		ngbe_dev_misc_interrupt_setup(dev);
> +		/* check if lsc interrupt is enabled */
> +		if (dev->data->dev_conf.intr_conf.lsc != 0)
> +			ngbe_dev_lsc_interrupt_setup(dev, TRUE);
> +		else
> +			ngbe_dev_lsc_interrupt_setup(dev, FALSE);
> +		ngbe_dev_macsec_interrupt_setup(dev);
> +		ngbe_set_ivar_map(hw, -1, 1, NGBE_MISC_VEC_ID);
> +	} else {
> +		rte_intr_callback_unregister(intr_handle,
> +					     ngbe_dev_interrupt_handler, dev);
> +		if (dev->data->dev_conf.intr_conf.lsc != 0)
> +			PMD_INIT_LOG(INFO, "lsc won't be enabled because of"
> +				     " no intr multiplex");
> +	}
> +
> +	/* check if rxq interrupt is enabled */
> +	if (dev->data->dev_conf.intr_conf.rxq != 0 &&
> +	    rte_intr_dp_is_en(intr_handle))
> +		ngbe_dev_rxq_interrupt_setup(dev);
> +
> +	/* enable uio/vfio intr/eventfd mapping */

uio -> UIO, vfio -> VFIO

> +	rte_intr_enable(intr_handle);
> +
> +	/* resume enabled intr since hw reset */

hw -> HW

> +	ngbe_enable_intr(dev);
> +
> +	if ((hw->sub_system_id & NGBE_OEM_MASK) == NGBE_LY_M88E1512_SFP ||
> +		(hw->sub_system_id & NGBE_OEM_MASK) == NGBE_LY_YT8521S_SFP) {
> +		/* gpio0 is used for power on/off control */
> +		wr32(hw, NGBE_GPIODATA, 0);
> +	}
> +
> +	/*
> +	 * Update link status right before return, because it may
> +	 * start link configuration process in a separate thread.
> +	 */
> +	ngbe_dev_link_update(dev, 0);
> +
> +	return 0;
> +
> +error:
> +	PMD_INIT_LOG(ERR, "failure in dev start: %d", err);
> +	return -EIO;
> +}
> +
>   /*
>    * Reset and stop device.
>    */
> @@ -487,6 +713,106 @@ ngbe_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete)
>   	return ngbe_dev_link_update_share(dev, wait_to_complete);
>   }
>   
> +/**
> + * It clears the interrupt causes and enables the interrupt.
> + * It will be called once only during nic initialization.
> + *
> + * @param dev
> + *  Pointer to struct rte_eth_dev.
> + * @param on
> + *  Enable or Disable.
> + *
> + * @return
> + *  - On success, zero.
> + *  - On failure, a negative value.
> + */
> +static int
> +ngbe_dev_lsc_interrupt_setup(struct rte_eth_dev *dev, uint8_t on)
> +{
> +	struct ngbe_interrupt *intr = NGBE_DEV_INTR(dev);
> +
> +	ngbe_dev_link_status_print(dev);
> +	if (on) {

Compare with 0

> +		intr->mask_misc |= NGBE_ICRMISC_PHY;
> +		intr->mask_misc |= NGBE_ICRMISC_GPIO;
> +	} else {
> +		intr->mask_misc &= ~NGBE_ICRMISC_PHY;
> +		intr->mask_misc &= ~NGBE_ICRMISC_GPIO;
> +	}
> +
> +	return 0;
> +}
> +
> +/**
> + * It clears the interrupt causes and enables the interrupt.
> + * It will be called once only during nic initialization.
> + *
> + * @param dev
> + *  Pointer to struct rte_eth_dev.
> + *
> + * @return
> + *  - On success, zero.
> + *  - On failure, a negative value.
> + */
> +static int
> +ngbe_dev_misc_interrupt_setup(struct rte_eth_dev *dev)
> +{
> +	struct ngbe_interrupt *intr = NGBE_DEV_INTR(dev);
> +	u64 mask;
> +
> +	mask = NGBE_ICR_MASK;
> +	mask &= (1ULL << NGBE_MISC_VEC_ID);
> +	intr->mask |= mask;
> +	intr->mask_misc |= NGBE_ICRMISC_GPIO;
> +
> +	return 0;
> +}
> +
> +/**
> + * It clears the interrupt causes and enables the interrupt.
> + * It will be called once only during nic initialized.

nic -> NIC

> + *
> + * @param dev
> + *  Pointer to struct rte_eth_dev.
> + *
> + * @return
> + *  - On success, zero.
> + *  - On failure, a negative value.
> + */
> +static int
> +ngbe_dev_rxq_interrupt_setup(struct rte_eth_dev *dev)
> +{
> +	struct ngbe_interrupt *intr = NGBE_DEV_INTR(dev);
> +	u64 mask;
> +
> +	mask = NGBE_ICR_MASK;
> +	mask &= ~((1ULL << NGBE_RX_VEC_START) - 1);
> +	intr->mask |= mask;
> +
> +	return 0;
> +}
> +
> +/**
> + * It clears the interrupt causes and enables the interrupt.
> + * It will be called once only during nic initialized.

nic -> NIC

> + *
> + * @param dev
> + *  Pointer to struct rte_eth_dev.
> + *
> + * @return
> + *  - On success, zero.
> + *  - On failure, a negative value.
> + */
> +static int
> +ngbe_dev_macsec_interrupt_setup(struct rte_eth_dev *dev)
> +{
> +	struct ngbe_interrupt *intr = NGBE_DEV_INTR(dev);
> +
> +	intr->mask_misc |= NGBE_ICRMISC_LNKSEC;
> +
> +	return 0;
> +}
> +
>   /*
>    * It reads ICR and sets flag for the link_update.
>    *
> @@ -693,9 +1019,101 @@ ngbe_dev_interrupt_handler(void *param)
>   	ngbe_dev_interrupt_action(dev);
>   }
>   
> +/**
> + * set the IVAR registers, mapping interrupt causes to vectors
> + * @param hw
> + *  pointer to ngbe_hw struct
> + * @direction
> + *  0 for Rx, 1 for Tx, -1 for other causes
> + * @queue
> + *  queue to map the corresponding interrupt to
> + * @msix_vector
> + *  the vector to map to the corresponding queue
> + */
> +void
> +ngbe_set_ivar_map(struct ngbe_hw *hw, int8_t direction,
> +		   uint8_t queue, uint8_t msix_vector)
> +{
> +	uint32_t tmp, idx;
> +
> +	if (direction == -1) {
> +		/* other causes */
> +		msix_vector |= NGBE_IVARMISC_VLD;
> +		idx = 0;
> +		tmp = rd32(hw, NGBE_IVARMISC);
> +		tmp &= ~(0xFF << idx);
> +		tmp |= (msix_vector << idx);
> +		wr32(hw, NGBE_IVARMISC, tmp);
> +	} else {
> +		/* rx or tx causes */
> +		/* Workaround for ICR lost */
> +		idx = ((16 * (queue & 1)) + (8 * direction));
> +		tmp = rd32(hw, NGBE_IVAR(queue >> 1));
> +		tmp &= ~(0xFF << idx);
> +		tmp |= (msix_vector << idx);
> +		wr32(hw, NGBE_IVAR(queue >> 1), tmp);
> +	}
> +}
> +
> +/**
> + * Sets up the hardware to properly generate MSI-X interrupts
> + * @hw
> + *  board private structure
> + */
> +static void
> +ngbe_configure_msix(struct rte_eth_dev *dev)
> +{
> +	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
> +	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
> +	struct ngbe_hw *hw = NGBE_DEV_HW(dev);
> +	uint32_t queue_id, base = NGBE_MISC_VEC_ID;
> +	uint32_t vec = NGBE_MISC_VEC_ID;
> +	uint32_t gpie;
> +
> +	/* won't configure msix register if no mapping is done

msix -> MSI-X

> +	 * between intr vector and event fd
> +	 * but if misx has been enabled already, need to configure

msix -> MSI-X

> +	 * auto clean, auto mask and throttling.
> +	 */
> +	gpie = rd32(hw, NGBE_GPIE);
> +	if (!rte_intr_dp_is_en(intr_handle) &&
> +	    !(gpie & NGBE_GPIE_MSIX))
> +		return;
> +
> +	if (rte_intr_allow_others(intr_handle)) {
> +		base = NGBE_RX_VEC_START;
> +		vec = base;
> +	}
> +
> +	/* setup GPIE for MSI-x mode */

MSI-x -> MSI-X

> +	gpie = rd32(hw, NGBE_GPIE);
> +	gpie |= NGBE_GPIE_MSIX;
> +	wr32(hw, NGBE_GPIE, gpie);
> +
> +	/* Populate the IVAR table and set the ITR values to the
> +	 * corresponding register.
> +	 */
> +	if (rte_intr_dp_is_en(intr_handle)) {
> +		for (queue_id = 0; queue_id < dev->data->nb_rx_queues;
> +			queue_id++) {
> +			/* by default, 1:1 mapping */
> +			ngbe_set_ivar_map(hw, 0, queue_id, vec);
> +			intr_handle->intr_vec[queue_id] = vec;
> +			if (vec < base + intr_handle->nb_efd - 1)
> +				vec++;
> +		}
> +
> +		ngbe_set_ivar_map(hw, -1, 1, NGBE_MISC_VEC_ID);
> +	}
> +	wr32(hw, NGBE_ITR(NGBE_MISC_VEC_ID),
> +			NGBE_ITR_IVAL_1G(NGBE_QUEUE_ITR_INTERVAL_DEFAULT)
> +			| NGBE_ITR_WRDSA);
> +}
> +
>   static const struct eth_dev_ops ngbe_eth_dev_ops = {
>   	.dev_configure              = ngbe_dev_configure,
>   	.dev_infos_get              = ngbe_dev_info_get,
> +	.dev_start                  = ngbe_dev_start,
>   	.link_update                = ngbe_dev_link_update,
>   	.dev_supported_ptypes_get   = ngbe_dev_supported_ptypes_get,
>   	.rx_queue_setup             = ngbe_dev_rx_queue_setup,
> diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
> index 035b1ad5c8..0b8dba571b 100644
> --- a/drivers/net/ngbe/ngbe_ethdev.h
> +++ b/drivers/net/ngbe/ngbe_ethdev.h
> @@ -18,6 +18,8 @@
>   #define NGBE_VLAN_TAG_SIZE 4
>   #define NGBE_HKEY_MAX_INDEX 10
>   
> +#define NGBE_QUEUE_ITR_INTERVAL_DEFAULT	500 /* 500us */
> +
>   #define NGBE_RSS_OFFLOAD_ALL ( \
>   	ETH_RSS_IPV4 | \
>   	ETH_RSS_NONFRAG_IPV4_TCP | \
> @@ -30,6 +32,7 @@
>   	ETH_RSS_IPV6_UDP_EX)
>   
>   #define NGBE_MISC_VEC_ID               RTE_INTR_VEC_ZERO_OFFSET
> +#define NGBE_RX_VEC_START              RTE_INTR_VEC_RXTX_OFFSET
>   
>   /* structure for interrupt relative data */
>   struct ngbe_interrupt {
> @@ -92,6 +95,9 @@ uint16_t ngbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
>   uint16_t ngbe_xmit_pkts_simple(void *tx_queue, struct rte_mbuf **tx_pkts,
>   		uint16_t nb_pkts);
>   
> +void ngbe_set_ivar_map(struct ngbe_hw *hw, int8_t direction,
> +			       uint8_t queue, uint8_t msix_vector);
> +
>   int
>   ngbe_dev_link_update_share(struct rte_eth_dev *dev,
>   		int wait_to_complete);
> 


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [dpdk-dev] [PATCH v5 23/24] net/ngbe: start and stop RxTx
  2021-06-02  9:41 ` [dpdk-dev] [PATCH v5 23/24] net/ngbe: start and stop RxTx Jiawen Wu
@ 2021-06-14 20:44   ` Andrew Rybchenko
  0 siblings, 0 replies; 51+ messages in thread
From: Andrew Rybchenko @ 2021-06-14 20:44 UTC (permalink / raw)
  To: Jiawen Wu, dev

On 6/2/21 12:41 PM, Jiawen Wu wrote:
> Support starting and stopping receive and transmit units for specified
> queues.

Before this patch, an attempt to set up an Rx or Tx queue with deferred
start should return an error.
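
One way to enforce that, as an illustrative sketch in
ngbe_dev_tx_queue_setup() (the Rx side would mirror it; tx_conf is the
standard rte_eth_txconf argument of the setup callback):

	if (tx_conf->tx_deferred_start != 0) {
		PMD_INIT_LOG(ERR, "Tx deferred start is not supported");
		return -EINVAL;
	}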

> 
> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
> ---
>   doc/guides/nics/features/ngbe.ini  |   1 +
>   drivers/net/ngbe/base/ngbe_dummy.h |  15 ++
>   drivers/net/ngbe/base/ngbe_hw.c    | 105 ++++++++++
>   drivers/net/ngbe/base/ngbe_hw.h    |   4 +
>   drivers/net/ngbe/base/ngbe_type.h  |   5 +
>   drivers/net/ngbe/ngbe_ethdev.c     |  10 +
>   drivers/net/ngbe/ngbe_ethdev.h     |  15 ++
>   drivers/net/ngbe/ngbe_rxtx.c       | 307 +++++++++++++++++++++++++++++
>   drivers/net/ngbe/ngbe_rxtx.h       |   3 +
>   9 files changed, 465 insertions(+)
> 
> diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini
> index 443c6691a3..43b6b2c2c7 100644
> --- a/doc/guides/nics/features/ngbe.ini
> +++ b/doc/guides/nics/features/ngbe.ini
> @@ -7,6 +7,7 @@
>   Speed capabilities   = Y
>   Link status          = Y
>   Link status event    = Y
> +Queue start/stop     = Y
>   Jumbo frame          = Y
>   Scattered Rx         = Y
>   TSO                  = Y
> diff --git a/drivers/net/ngbe/base/ngbe_dummy.h b/drivers/net/ngbe/base/ngbe_dummy.h
> index dfc7b13192..384631b4f1 100644
> --- a/drivers/net/ngbe/base/ngbe_dummy.h
> +++ b/drivers/net/ngbe/base/ngbe_dummy.h
> @@ -59,6 +59,18 @@ static inline s32 ngbe_mac_get_mac_addr_dummy(struct ngbe_hw *TUP0, u8 *TUP1)
>   {
>   	return NGBE_ERR_OPS_DUMMY;
>   }
> +static inline s32 ngbe_mac_enable_rx_dma_dummy(struct ngbe_hw *TUP0, u32 TUP1)
> +{
> +	return NGBE_ERR_OPS_DUMMY;
> +}
> +static inline s32 ngbe_mac_disable_sec_rx_path_dummy(struct ngbe_hw *TUP0)
> +{
> +	return NGBE_ERR_OPS_DUMMY;
> +}
> +static inline s32 ngbe_mac_enable_sec_rx_path_dummy(struct ngbe_hw *TUP0)
> +{
> +	return NGBE_ERR_OPS_DUMMY;
> +}
>   static inline s32 ngbe_mac_acquire_swfw_sync_dummy(struct ngbe_hw *TUP0,
>   					u32 TUP1)
>   {
> @@ -167,6 +179,9 @@ static inline void ngbe_init_ops_dummy(struct ngbe_hw *hw)
>   	hw->mac.start_hw = ngbe_mac_start_hw_dummy;
>   	hw->mac.stop_hw = ngbe_mac_stop_hw_dummy;
>   	hw->mac.get_mac_addr = ngbe_mac_get_mac_addr_dummy;
> +	hw->mac.enable_rx_dma = ngbe_mac_enable_rx_dma_dummy;
> +	hw->mac.disable_sec_rx_path = ngbe_mac_disable_sec_rx_path_dummy;
> +	hw->mac.enable_sec_rx_path = ngbe_mac_enable_sec_rx_path_dummy;
>   	hw->mac.acquire_swfw_sync = ngbe_mac_acquire_swfw_sync_dummy;
>   	hw->mac.release_swfw_sync = ngbe_mac_release_swfw_sync_dummy;
>   	hw->mac.setup_link = ngbe_mac_setup_link_dummy;
> diff --git a/drivers/net/ngbe/base/ngbe_hw.c b/drivers/net/ngbe/base/ngbe_hw.c
> index b0bc714741..030068f3f7 100644
> --- a/drivers/net/ngbe/base/ngbe_hw.c
> +++ b/drivers/net/ngbe/base/ngbe_hw.c
> @@ -536,6 +536,63 @@ void ngbe_release_swfw_sync(struct ngbe_hw *hw, u32 mask)
>   	ngbe_release_eeprom_semaphore(hw);
>   }
>   
> +/**
> + *  ngbe_disable_sec_rx_path - Stops the receive data path
> + *  @hw: pointer to hardware structure
> + *
> + *  Stops the receive data path and waits for the HW to internally empty
> + *  the Rx security block
> + **/
> +s32 ngbe_disable_sec_rx_path(struct ngbe_hw *hw)
> +{
> +#define NGBE_MAX_SECRX_POLL 4000
> +
> +	int i;
> +	u32 secrxreg;
> +
> +	DEBUGFUNC("ngbe_disable_sec_rx_path");
> +
> +
> +	secrxreg = rd32(hw, NGBE_SECRXCTL);
> +	secrxreg |= NGBE_SECRXCTL_XDSA;
> +	wr32(hw, NGBE_SECRXCTL, secrxreg);
> +	for (i = 0; i < NGBE_MAX_SECRX_POLL; i++) {
> +		secrxreg = rd32(hw, NGBE_SECRXSTAT);
> +		if (!(secrxreg & NGBE_SECRXSTAT_RDY))
> +			/* Use interrupt-safe sleep just in case */
> +			usec_delay(10);
> +		else
> +			break;
> +	}
> +
> +	/* For informational purposes only */
> +	if (i >= NGBE_MAX_SECRX_POLL)
> +		DEBUGOUT("Rx unit being enabled before security "
> +			 "path fully disabled.  Continuing with init.\n");
> +
> +	return 0;
> +}
> +
> +/**
> + *  ngbe_enable_sec_rx_path - Enables the receive data path
> + *  @hw: pointer to hardware structure
> + *
> + *  Enables the receive data path.
> + **/
> +s32 ngbe_enable_sec_rx_path(struct ngbe_hw *hw)
> +{
> +	u32 secrxreg;
> +
> +	DEBUGFUNC("ngbe_enable_sec_rx_path");
> +
> +	secrxreg = rd32(hw, NGBE_SECRXCTL);
> +	secrxreg &= ~NGBE_SECRXCTL_XDSA;
> +	wr32(hw, NGBE_SECRXCTL, secrxreg);
> +	ngbe_flush(hw);
> +
> +	return 0;
> +}
> +
>   /**
>    *  ngbe_clear_vmdq - Disassociate a VMDq pool index from a rx address
>    *  @hw: pointer to hardware struct
> @@ -757,6 +814,21 @@ void ngbe_disable_rx(struct ngbe_hw *hw)
>   	wr32m(hw, NGBE_MACRXCFG, NGBE_MACRXCFG_ENA, 0);
>   }
>   
> +void ngbe_enable_rx(struct ngbe_hw *hw)
> +{
> +	u32 pfdtxgswc;
> +
> +	wr32m(hw, NGBE_MACRXCFG, NGBE_MACRXCFG_ENA, NGBE_MACRXCFG_ENA);
> +	wr32m(hw, NGBE_PBRXCTL, NGBE_PBRXCTL_ENA, NGBE_PBRXCTL_ENA);
> +
> +	if (hw->mac.set_lben) {
> +		pfdtxgswc = rd32(hw, NGBE_PSRCTL);
> +		pfdtxgswc |= NGBE_PSRCTL_LBENA;
> +		wr32(hw, NGBE_PSRCTL, pfdtxgswc);
> +		hw->mac.set_lben = false;
> +	}
> +}
> +
>   /**
>    *  ngbe_set_mac_type - Sets MAC type
>    *  @hw: pointer to the HW structure
> @@ -803,6 +875,36 @@ s32 ngbe_set_mac_type(struct ngbe_hw *hw)
>   	return err;
>   }
>   
> +/**
> + *  ngbe_enable_rx_dma - Enable the Rx DMA unit
> + *  @hw: pointer to hardware structure
> + *  @regval: register value to write to RXCTRL
> + *
> + *  Enables the Rx DMA unit
> + **/
> +s32 ngbe_enable_rx_dma(struct ngbe_hw *hw, u32 regval)
> +{
> +	DEBUGFUNC("ngbe_enable_rx_dma");
> +
> +	/*
> +	 * Workaround silicon errata when enabling the Rx datapath.
> +	 * If traffic is incoming before we enable the Rx unit, it could hang
> +	 * the Rx DMA unit.  Therefore, make sure the security engine is
> +	 * completely disabled prior to enabling the Rx unit.
> +	 */
> +
> +	hw->mac.disable_sec_rx_path(hw);
> +
> +	if (regval & NGBE_PBRXCTL_ENA)
> +		ngbe_enable_rx(hw);
> +	else
> +		ngbe_disable_rx(hw);
> +
> +	hw->mac.enable_sec_rx_path(hw);
> +
> +	return 0;
> +}
> +
>   void ngbe_map_device_id(struct ngbe_hw *hw)
>   {
>   	u16 oem = hw->sub_system_id & NGBE_OEM_MASK;
> @@ -887,11 +989,14 @@ s32 ngbe_init_ops_pf(struct ngbe_hw *hw)
>   	mac->init_hw = ngbe_init_hw;
>   	mac->reset_hw = ngbe_reset_hw_em;
>   	mac->start_hw = ngbe_start_hw;
> +	mac->enable_rx_dma = ngbe_enable_rx_dma;
>   	mac->get_mac_addr = ngbe_get_mac_addr;
>   	mac->stop_hw = ngbe_stop_hw;
>   	mac->acquire_swfw_sync = ngbe_acquire_swfw_sync;
>   	mac->release_swfw_sync = ngbe_release_swfw_sync;
>   
> +	mac->disable_sec_rx_path = ngbe_disable_sec_rx_path;
> +	mac->enable_sec_rx_path = ngbe_enable_sec_rx_path;
>   	/* RAR */
>   	mac->set_rar = ngbe_set_rar;
>   	mac->clear_rar = ngbe_clear_rar;
> diff --git a/drivers/net/ngbe/base/ngbe_hw.h b/drivers/net/ngbe/base/ngbe_hw.h
> index 4fee5735ac..01f41fe9b3 100644
> --- a/drivers/net/ngbe/base/ngbe_hw.h
> +++ b/drivers/net/ngbe/base/ngbe_hw.h
> @@ -34,6 +34,8 @@ s32 ngbe_set_rar(struct ngbe_hw *hw, u32 index, u8 *addr, u32 vmdq,
>   			  u32 enable_addr);
>   s32 ngbe_clear_rar(struct ngbe_hw *hw, u32 index);
>   s32 ngbe_init_rx_addrs(struct ngbe_hw *hw);
> +s32 ngbe_disable_sec_rx_path(struct ngbe_hw *hw);
> +s32 ngbe_enable_sec_rx_path(struct ngbe_hw *hw);
>   
>   s32 ngbe_validate_mac_addr(u8 *mac_addr);
>   s32 ngbe_acquire_swfw_sync(struct ngbe_hw *hw, u32 mask);
> @@ -46,10 +48,12 @@ s32 ngbe_init_uta_tables(struct ngbe_hw *hw);
>   s32 ngbe_init_thermal_sensor_thresh(struct ngbe_hw *hw);
>   s32 ngbe_mac_check_overtemp(struct ngbe_hw *hw);
>   void ngbe_disable_rx(struct ngbe_hw *hw);
> +void ngbe_enable_rx(struct ngbe_hw *hw);
>   s32 ngbe_init_shared_code(struct ngbe_hw *hw);
>   s32 ngbe_set_mac_type(struct ngbe_hw *hw);
>   s32 ngbe_init_ops_pf(struct ngbe_hw *hw);
>   s32 ngbe_init_phy(struct ngbe_hw *hw);
> +s32 ngbe_enable_rx_dma(struct ngbe_hw *hw, u32 regval);
>   void ngbe_map_device_id(struct ngbe_hw *hw);
>   
>   #endif /* _NGBE_HW_H_ */
> diff --git a/drivers/net/ngbe/base/ngbe_type.h b/drivers/net/ngbe/base/ngbe_type.h
> index 601fb85b91..134d2019e1 100644
> --- a/drivers/net/ngbe/base/ngbe_type.h
> +++ b/drivers/net/ngbe/base/ngbe_type.h
> @@ -102,6 +102,9 @@ struct ngbe_mac_info {
>   	s32 (*start_hw)(struct ngbe_hw *hw);
>   	s32 (*stop_hw)(struct ngbe_hw *hw);
>   	s32 (*get_mac_addr)(struct ngbe_hw *hw, u8 *mac_addr);
> +	s32 (*enable_rx_dma)(struct ngbe_hw *hw, u32 regval);
> +	s32 (*disable_sec_rx_path)(struct ngbe_hw *hw);
> +	s32 (*enable_sec_rx_path)(struct ngbe_hw *hw);
>   	s32 (*acquire_swfw_sync)(struct ngbe_hw *hw, u32 mask);
>   	void (*release_swfw_sync)(struct ngbe_hw *hw, u32 mask);
>   
> @@ -196,6 +199,8 @@ struct ngbe_hw {
>   	u16 nb_rx_queues;
>   	u16 nb_tx_queues;
>   
> +	u32 q_rx_regs[8 * 4];
> +	u32 q_tx_regs[8 * 4];
>   	bool is_pf;
>   };
>   
> diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
> index 3812663591..2b551c00c7 100644
> --- a/drivers/net/ngbe/ngbe_ethdev.c
> +++ b/drivers/net/ngbe/ngbe_ethdev.c
> @@ -435,6 +435,12 @@ ngbe_dev_start(struct rte_eth_dev *dev)
>   		goto error;
>   	}
>   
> +	err = ngbe_dev_rxtx_start(dev);
> +	if (err < 0) {
> +		PMD_INIT_LOG(ERR, "Unable to start rxtx queues");
> +		goto error;
> +	}
> +

It is a part of the device start procedure which is required
even when deferred start is not supported.

Maybe a separate patch which adds Tx queue start to device
start, and similarly for Rx.

>   	/* Skip link setup if loopback mode is enabled. */
>   	if (hw->is_pf && dev->data->dev_conf.lpbk_mode)
>   		goto skip_link_setup;
> @@ -1116,6 +1122,10 @@ static const struct eth_dev_ops ngbe_eth_dev_ops = {
>   	.dev_start                  = ngbe_dev_start,
>   	.link_update                = ngbe_dev_link_update,
>   	.dev_supported_ptypes_get   = ngbe_dev_supported_ptypes_get,
> +	.rx_queue_start	            = ngbe_dev_rx_queue_start,
> +	.rx_queue_stop              = ngbe_dev_rx_queue_stop,
> +	.tx_queue_start	            = ngbe_dev_tx_queue_start,
> +	.tx_queue_stop              = ngbe_dev_tx_queue_stop,

These callbacks really belong to the deferred start feature.
Maybe it makes sense to separate Rx and Tx into different patches.

>   	.rx_queue_setup             = ngbe_dev_rx_queue_setup,
>   	.rx_queue_release           = ngbe_dev_rx_queue_release,
>   	.tx_queue_setup             = ngbe_dev_tx_queue_setup,
> diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
> index 0b8dba571b..97ced40e4b 100644
> --- a/drivers/net/ngbe/ngbe_ethdev.h
> +++ b/drivers/net/ngbe/ngbe_ethdev.h
> @@ -78,6 +78,21 @@ int ngbe_dev_rx_init(struct rte_eth_dev *dev);
>   
>   void ngbe_dev_tx_init(struct rte_eth_dev *dev);
>   
> +int ngbe_dev_rxtx_start(struct rte_eth_dev *dev);
> +
> +void ngbe_dev_save_rx_queue(struct ngbe_hw *hw, uint16_t rx_queue_id);
> +void ngbe_dev_store_rx_queue(struct ngbe_hw *hw, uint16_t rx_queue_id);
> +void ngbe_dev_save_tx_queue(struct ngbe_hw *hw, uint16_t tx_queue_id);
> +void ngbe_dev_store_tx_queue(struct ngbe_hw *hw, uint16_t tx_queue_id);
> +
> +int ngbe_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
> +
> +int ngbe_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
> +
> +int ngbe_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
> +
> +int ngbe_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
> +
>   uint16_t ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
>   		uint16_t nb_pkts);
>   
> diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c
> index 3f3f2cab06..daa2d7ae4d 100644
> --- a/drivers/net/ngbe/ngbe_rxtx.c
> +++ b/drivers/net/ngbe/ngbe_rxtx.c
> @@ -2236,6 +2236,38 @@ ngbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
>   	return 0;
>   }
>   
> +static int __rte_cold
> +ngbe_alloc_rx_queue_mbufs(struct ngbe_rx_queue *rxq)
> +{
> +	struct ngbe_rx_entry *rxe = rxq->sw_ring;
> +	uint64_t dma_addr;
> +	unsigned int i;
> +
> +	/* Initialize software ring entries */
> +	for (i = 0; i < rxq->nb_rx_desc; i++) {
> +		volatile struct ngbe_rx_desc *rxd;

Please, add a comment to explain why volatile is required.
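
For example, a sketch of the kind of comment that would help:

	/*
	 * volatile: the descriptor ring is shared with hardware, so the
	 * compiler must not cache or reorder the writes to rxd below.
	 */
	volatile struct ngbe_rx_desc *rxd;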

> +		struct rte_mbuf *mbuf = rte_mbuf_raw_alloc(rxq->mb_pool);
> +
> +		if (mbuf == NULL) {
> +			PMD_INIT_LOG(ERR, "RX mbuf alloc failed queue_id=%u",

RX -> Rx

port_id should be logged as well
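
E.g. (sketch):

	PMD_INIT_LOG(ERR,
		     "Rx mbuf alloc failed: port_id=%u queue_id=%u",
		     (unsigned int)rxq->port_id,
		     (unsigned int)rxq->queue_id);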

> +				     (unsigned int)rxq->queue_id);
> +			return -ENOMEM;
> +		}
> +
> +		mbuf->data_off = RTE_PKTMBUF_HEADROOM;
> +		mbuf->port = rxq->port_id;
> +
> +		dma_addr =
> +			rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf));
> +		rxd = &rxq->rx_ring[i];
> +		NGBE_RXD_HDRADDR(rxd, 0);
> +		NGBE_RXD_PKTADDR(rxd, dma_addr);
> +		rxe[i].mbuf = mbuf;
> +	}
> +
> +	return 0;
> +}
> +
>   void __rte_cold
>   ngbe_set_rx_function(struct rte_eth_dev *dev)
>   {
> @@ -2473,3 +2505,278 @@ ngbe_dev_tx_init(struct rte_eth_dev *dev)
>   	}
>   }
>   
> +/*
> + * Set up link loopback mode Tx->Rx.
> + */
> +static inline void __rte_cold
> +ngbe_setup_loopback_link(struct ngbe_hw *hw)
> +{
> +	PMD_INIT_FUNC_TRACE();
> +
> +	wr32m(hw, NGBE_MACRXCFG, NGBE_MACRXCFG_LB, NGBE_MACRXCFG_LB);
> +
> +	msec_delay(50);
> +}

Loopback support is a separate feature.

> +
> +/*
> + * Start Transmit and Receive Units.
> + */
> +int __rte_cold
> +ngbe_dev_rxtx_start(struct rte_eth_dev *dev)
> +{
> +	struct ngbe_hw     *hw;
> +	struct ngbe_tx_queue *txq;
> +	struct ngbe_rx_queue *rxq;
> +	uint32_t dmatxctl;
> +	uint32_t rxctrl;
> +	uint16_t i;
> +	int ret = 0;
> +
> +	PMD_INIT_FUNC_TRACE();
> +	hw = NGBE_DEV_HW(dev);
> +
> +	for (i = 0; i < dev->data->nb_tx_queues; i++) {
> +		txq = dev->data->tx_queues[i];
> +		/* Setup Transmit Threshold Registers */
> +		wr32m(hw, NGBE_TXCFG(txq->reg_idx),
> +		      NGBE_TXCFG_HTHRESH_MASK |
> +		      NGBE_TXCFG_WTHRESH_MASK,
> +		      NGBE_TXCFG_HTHRESH(txq->hthresh) |
> +		      NGBE_TXCFG_WTHRESH(txq->wthresh));
> +	}
> +
> +	dmatxctl = rd32(hw, NGBE_DMATXCTRL);
> +	dmatxctl |= NGBE_DMATXCTRL_ENA;
> +	wr32(hw, NGBE_DMATXCTRL, dmatxctl);
> +
> +	for (i = 0; i < dev->data->nb_tx_queues; i++) {
> +		txq = dev->data->tx_queues[i];
> +		if (!txq->tx_deferred_start) {

tx_deferred_start is not a bool, so it should be compared vs 0
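
E.g. (sketch):

	if (txq->tx_deferred_start == 0) {
		ret = ngbe_dev_tx_queue_start(dev, i);
		if (ret < 0)
			return ret;
	}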

> +			ret = ngbe_dev_tx_queue_start(dev, i);
> +			if (ret < 0)
> +				return ret;
> +		}
> +	}
> +
> +	for (i = 0; i < dev->data->nb_rx_queues; i++) {
> +		rxq = dev->data->rx_queues[i];
> +		if (!rxq->rx_deferred_start) {

rx_deferred_start is not a bool, so it should be compared vs 0

> +			ret = ngbe_dev_rx_queue_start(dev, i);
> +			if (ret < 0)
> +				return ret;
> +		}
> +	}
> +
> +	/* Enable Receive engine */
> +	rxctrl = rd32(hw, NGBE_PBRXCTL);
> +	rxctrl |= NGBE_PBRXCTL_ENA;
> +	hw->mac.enable_rx_dma(hw, rxctrl);
> +
> +	/* If loopback mode is enabled, set up the link accordingly */
> +	if (hw->is_pf && dev->data->dev_conf.lpbk_mode)
> +		ngbe_setup_loopback_link(hw);

Loopback support is a separate feature. Before this patch, a
request for loopback mode should return an error.
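
Until loopback is supported, dev_start() could reject the request early,
e.g. (illustrative sketch):

	if (dev->data->dev_conf.lpbk_mode != 0) {
		PMD_INIT_LOG(ERR, "Loopback mode is not supported");
		return -ENOTSUP;
	}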

> +
> +	return 0;
> +}
> +
> +void
> +ngbe_dev_save_rx_queue(struct ngbe_hw *hw, uint16_t rx_queue_id)
> +{
> +	u32 *reg = &hw->q_rx_regs[rx_queue_id * 8];
> +	*(reg++) = rd32(hw, NGBE_RXBAL(rx_queue_id));
> +	*(reg++) = rd32(hw, NGBE_RXBAH(rx_queue_id));
> +	*(reg++) = rd32(hw, NGBE_RXCFG(rx_queue_id));
> +}
> +
> +void
> +ngbe_dev_store_rx_queue(struct ngbe_hw *hw, uint16_t rx_queue_id)
> +{
> +	u32 *reg = &hw->q_rx_regs[rx_queue_id * 8];
> +	wr32(hw, NGBE_RXBAL(rx_queue_id), *(reg++));
> +	wr32(hw, NGBE_RXBAH(rx_queue_id), *(reg++));
> +	wr32(hw, NGBE_RXCFG(rx_queue_id), *(reg++) & ~NGBE_RXCFG_ENA);
> +}
> +
> +void
> +ngbe_dev_save_tx_queue(struct ngbe_hw *hw, uint16_t tx_queue_id)
> +{
> +	u32 *reg = &hw->q_tx_regs[tx_queue_id * 8];
> +	*(reg++) = rd32(hw, NGBE_TXBAL(tx_queue_id));
> +	*(reg++) = rd32(hw, NGBE_TXBAH(tx_queue_id));
> +	*(reg++) = rd32(hw, NGBE_TXCFG(tx_queue_id));
> +}
> +
> +void
> +ngbe_dev_store_tx_queue(struct ngbe_hw *hw, uint16_t tx_queue_id)
> +{
> +	u32 *reg = &hw->q_tx_regs[tx_queue_id * 8];
> +	wr32(hw, NGBE_TXBAL(tx_queue_id), *(reg++));
> +	wr32(hw, NGBE_TXBAH(tx_queue_id), *(reg++));
> +	wr32(hw, NGBE_TXCFG(tx_queue_id), *(reg++) & ~NGBE_TXCFG_ENA);
> +}
> +
> +/*
> + * Start Receive Units for specified queue.
> + */
> +int __rte_cold
> +ngbe_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
> +{
> +	struct ngbe_hw *hw = NGBE_DEV_HW(dev);
> +	struct ngbe_rx_queue *rxq;
> +	uint32_t rxdctl;
> +	int poll_ms;
> +
> +	PMD_INIT_FUNC_TRACE();
> +
> +	rxq = dev->data->rx_queues[rx_queue_id];
> +
> +	/* Allocate buffers for descriptor rings */
> +	if (ngbe_alloc_rx_queue_mbufs(rxq) != 0) {
> +		PMD_INIT_LOG(ERR, "Could not alloc mbuf for queue:%d",
> +			     rx_queue_id);
> +		return -1;
> +	}
> +	rxdctl = rd32(hw, NGBE_RXCFG(rxq->reg_idx));
> +	rxdctl |= NGBE_RXCFG_ENA;
> +	wr32(hw, NGBE_RXCFG(rxq->reg_idx), rxdctl);
> +
> +	/* Wait until RX Enable ready */

RX -> Rx

> +	poll_ms = RTE_NGBE_REGISTER_POLL_WAIT_10_MS;
> +	do {
> +		rte_delay_ms(1);
> +		rxdctl = rd32(hw, NGBE_RXCFG(rxq->reg_idx));
> +	} while (--poll_ms && !(rxdctl & NGBE_RXCFG_ENA));
> +	if (!poll_ms)

Compare vs 0
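I.e. (sketch):

	if (poll_ms == 0)
		PMD_INIT_LOG(ERR, "Could not enable Rx Queue %d",
			     rx_queue_id);

The same applies to the other poll loops below.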

> +		PMD_INIT_LOG(ERR, "Could not enable Rx Queue %d", rx_queue_id);
> +	rte_wmb();
> +	wr32(hw, NGBE_RXRP(rxq->reg_idx), 0);
> +	wr32(hw, NGBE_RXWP(rxq->reg_idx), rxq->nb_rx_desc - 1);
> +	dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
> +
> +	return 0;
> +}
> +
> +/*
> + * Stop Receive Units for specified queue.
> + */
> +int __rte_cold
> +ngbe_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
> +{
> +	struct ngbe_hw *hw = NGBE_DEV_HW(dev);
> +	struct ngbe_adapter *adapter = NGBE_DEV_ADAPTER(dev);
> +	struct ngbe_rx_queue *rxq;
> +	uint32_t rxdctl;
> +	int poll_ms;
> +
> +	PMD_INIT_FUNC_TRACE();
> +
> +	rxq = dev->data->rx_queues[rx_queue_id];
> +
> +	ngbe_dev_save_rx_queue(hw, rxq->reg_idx);
> +	wr32m(hw, NGBE_RXCFG(rxq->reg_idx), NGBE_RXCFG_ENA, 0);
> +
> +	/* Wait until RX Enable bit clear */

RX -> Rx

> +	poll_ms = RTE_NGBE_REGISTER_POLL_WAIT_10_MS;
> +	do {
> +		rte_delay_ms(1);
> +		rxdctl = rd32(hw, NGBE_RXCFG(rxq->reg_idx));
> +	} while (--poll_ms && (rxdctl & NGBE_RXCFG_ENA));
> +	if (!poll_ms)

Compare vs 0

> +		PMD_INIT_LOG(ERR, "Could not disable Rx Queue %d", rx_queue_id);
> +
> +	rte_delay_us(RTE_NGBE_WAIT_100_US);
> +	ngbe_dev_store_rx_queue(hw, rxq->reg_idx);
> +
> +	ngbe_rx_queue_release_mbufs(rxq);
> +	ngbe_reset_rx_queue(adapter, rxq);
> +	dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
> +
> +	return 0;
> +}
> +
> +/*
> + * Start Transmit Units for specified queue.
> + */
> +int __rte_cold
> +ngbe_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
> +{
> +	struct ngbe_hw *hw = NGBE_DEV_HW(dev);
> +	struct ngbe_tx_queue *txq;
> +	uint32_t txdctl;
> +	int poll_ms;
> +
> +	PMD_INIT_FUNC_TRACE();
> +
> +	txq = dev->data->tx_queues[tx_queue_id];
> +	wr32m(hw, NGBE_TXCFG(txq->reg_idx), NGBE_TXCFG_ENA, NGBE_TXCFG_ENA);
> +
> +	/* Wait until TX Enable ready */

TX -> Tx

> +	poll_ms = RTE_NGBE_REGISTER_POLL_WAIT_10_MS;
> +	do {
> +		rte_delay_ms(1);
> +		txdctl = rd32(hw, NGBE_TXCFG(txq->reg_idx));
> +	} while (--poll_ms && !(txdctl & NGBE_TXCFG_ENA));
> +	if (!poll_ms)

Compare vs 0

> +		PMD_INIT_LOG(ERR, "Could not enable "
> +			     "Tx Queue %d", tx_queue_id);
> +
> +	rte_wmb();
> +	wr32(hw, NGBE_TXWP(txq->reg_idx), txq->tx_tail);
> +	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
> +
> +	return 0;
> +}
> +
> +/*
> + * Stop Transmit Units for specified queue.
> + */
> +int __rte_cold
> +ngbe_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
> +{
> +	struct ngbe_hw *hw = NGBE_DEV_HW(dev);
> +	struct ngbe_tx_queue *txq;
> +	uint32_t txdctl;
> +	uint32_t txtdh, txtdt;
> +	int poll_ms;
> +
> +	PMD_INIT_FUNC_TRACE();
> +
> +	txq = dev->data->tx_queues[tx_queue_id];
> +
> +	/* Wait until TX queue is empty */

TX -> Tx

> +	poll_ms = RTE_NGBE_REGISTER_POLL_WAIT_10_MS;
> +	do {
> +		rte_delay_us(RTE_NGBE_WAIT_100_US);
> +		txtdh = rd32(hw, NGBE_TXRP(txq->reg_idx));
> +		txtdt = rd32(hw, NGBE_TXWP(txq->reg_idx));
> +	} while (--poll_ms && (txtdh != txtdt));
> +	if (!poll_ms)

Compare vs 0

> +		PMD_INIT_LOG(ERR,
> +			"Tx Queue %d is not empty when stopping.",
> +			tx_queue_id);
> +
> +	ngbe_dev_save_tx_queue(hw, txq->reg_idx);
> +	wr32m(hw, NGBE_TXCFG(txq->reg_idx), NGBE_TXCFG_ENA, 0);
> +
> +	/* Wait until TX Enable bit clear */

TX -> Tx

> +	poll_ms = RTE_NGBE_REGISTER_POLL_WAIT_10_MS;
> +	do {
> +		rte_delay_ms(1);
> +		txdctl = rd32(hw, NGBE_TXCFG(txq->reg_idx));
> +	} while (--poll_ms && (txdctl & NGBE_TXCFG_ENA));
> +	if (!poll_ms)

Compare vs 0

> +		PMD_INIT_LOG(ERR, "Could not disable Tx Queue %d",
> +			tx_queue_id);
> +
> +	rte_delay_us(RTE_NGBE_WAIT_100_US);
> +	ngbe_dev_store_tx_queue(hw, txq->reg_idx);
> +
> +	if (txq->ops != NULL) {
> +		txq->ops->release_mbufs(txq);
> +		txq->ops->reset(txq);
> +	}
> +	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
> +
> +	return 0;
> +}
> +
> diff --git a/drivers/net/ngbe/ngbe_rxtx.h b/drivers/net/ngbe/ngbe_rxtx.h
> index 2cb98e2497..48241bd634 100644
> --- a/drivers/net/ngbe/ngbe_rxtx.h
> +++ b/drivers/net/ngbe/ngbe_rxtx.h
> @@ -208,6 +208,9 @@ struct ngbe_tx_desc {
>   
>   #define rte_packet_prefetch(p)  rte_prefetch1(p)
>   
> +#define RTE_NGBE_REGISTER_POLL_WAIT_10_MS  10
> +#define RTE_NGBE_WAIT_100_US               100
> +
>   #define NGBE_TX_MAX_SEG                    40
>   
>   /**
> 



* Re: [dpdk-dev] [PATCH v5 00/24] net: ngbe PMD
  2021-06-02  9:40 [dpdk-dev] [PATCH v5 00/24] net: ngbe PMD Jiawen Wu
                   ` (24 preceding siblings ...)
  2021-06-11  1:38 ` [dpdk-dev] [PATCH v5 00/24] net: ngbe PMD Jiawen Wu
@ 2021-06-14 20:56 ` Andrew Rybchenko
  25 siblings, 0 replies; 51+ messages in thread
From: Andrew Rybchenko @ 2021-06-14 20:56 UTC (permalink / raw)
  To: Jiawen Wu, dev

On 6/2/21 12:40 PM, Jiawen Wu wrote:
> This patch set provides a skeleton of ngbe PMD,
> which adapted to Wangxun WX1860 series NICs.

My main concern about the patch series, apart from style notes, is the
separation into patches. Every patch should be testable: I should be
able to stop at any patch in the series, build it, and test the
functionality added by that patch. There should be no dead code. The
split should be feature-based, and different features should be added
by different patches.

The above requirements are not that strict for the base driver. Of
course, it would be useful to follow them, but that is not strictly
required, since sometimes it is very hard to do.

As for the PMD-specific code, it should be done this way. Otherwise, it
is almost impossible to review it and to understand whether something
is lost, missing, or inconsistent.

Of course, closely related features which share almost all of their
code may be added together.

Initially the driver should be built up to a working state with an
absolute minimum feature set: no offloads, no extra configuration
options. It should be able to probe, configure, start, Rx, Tx, stop,
reconfigure, start again, etc., and close. Subsequent patches should
then add features one by one: loopback, deferred start, various
offloads, ptype, etc.



* Re: [dpdk-dev] [PATCH v5 02/24] net/ngbe: add device IDs
  2021-06-14 17:08   ` Andrew Rybchenko
@ 2021-06-15  2:52     ` Jiawen Wu
  0 siblings, 0 replies; 51+ messages in thread
From: Jiawen Wu @ 2021-06-15  2:52 UTC (permalink / raw)
  To: 'Andrew Rybchenko', dev

On Tuesday, June 15, 2021 1:09 AM, Andrew Rybchenko wrote:
> On 6/2/21 12:40 PM, Jiawen Wu wrote:
> > Add device IDs for Wangxun 1Gb NICs, and register rte_ngbe_pmd.
> >
> > Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
> > ---
> >   drivers/net/ngbe/base/meson.build   | 18 +++++++
> >   drivers/net/ngbe/base/ngbe_devids.h | 84 +++++++++++++++++++++++++++++
> >   drivers/net/ngbe/meson.build        |  6 +++
> >   drivers/net/ngbe/ngbe_ethdev.c      | 51 ++++++++++++++++++
> >   4 files changed, 159 insertions(+)
> >   create mode 100644 drivers/net/ngbe/base/meson.build
> >   create mode 100644 drivers/net/ngbe/base/ngbe_devids.h
> >
> > diff --git a/drivers/net/ngbe/base/meson.build
> > b/drivers/net/ngbe/base/meson.build
> > new file mode 100644
> > index 0000000000..c5f6467743
> > --- /dev/null
> > +++ b/drivers/net/ngbe/base/meson.build
> > @@ -0,0 +1,18 @@
> > +# SPDX-License-Identifier: BSD-3-Clause
> > +# Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
> > +
> > +sources = []
> > +
> > +error_cflags = []
> > +
> > +c_args = cflags
> > +foreach flag: error_cflags
> > +	if cc.has_argument(flag)
> > +		c_args += flag
> > +	endif
> > +endforeach
> > +
> > +base_lib = static_library('ngbe_base', sources,
> > +	dependencies: [static_rte_eal, static_rte_ethdev, static_rte_bus_pci],
> > +	c_args: c_args)
> > +base_objs = base_lib.extract_all_objects()
> > diff --git a/drivers/net/ngbe/base/ngbe_devids.h
> > b/drivers/net/ngbe/base/ngbe_devids.h
> > new file mode 100644
> > index 0000000000..81671f71da
> > --- /dev/null
> > +++ b/drivers/net/ngbe/base/ngbe_devids.h
> > @@ -0,0 +1,84 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
> > + * Copyright(c) 2010-2017 Intel Corporation
> > + */
> > +
> > +#ifndef _NGBE_DEVIDS_H_
> > +#define _NGBE_DEVIDS_H_
> > +
> > +/*
> > + * Vendor ID
> > + */
> > +#ifndef PCI_VENDOR_ID_WANGXUN
> > +#define PCI_VENDOR_ID_WANGXUN                   0x8088
> > +#endif
> > +
> > +/*
> > + * Device IDs
> > + */
> > +#define NGBE_DEV_ID_EM_VF			0x0110
> > +#define   NGBE_SUB_DEV_ID_EM_VF			0x0110
> > +#define NGBE_DEV_ID_EM				0x0100
> > +#define   NGBE_SUB_DEV_ID_EM_MVL_RGMII		0x0200
> > +#define   NGBE_SUB_DEV_ID_EM_MVL_SFP		0x0403
> > +#define   NGBE_SUB_DEV_ID_EM_RTL_SGMII		0x0410
> > +#define   NGBE_SUB_DEV_ID_EM_YT8521S_SFP	0x0460
> > +
> > +#define NGBE_DEV_ID_EM_WX1860AL_W		0x0100
> > +#define NGBE_DEV_ID_EM_WX1860AL_W_VF		0x0110
> > +#define NGBE_DEV_ID_EM_WX1860A2			0x0101
> > +#define NGBE_DEV_ID_EM_WX1860A2_VF		0x0111
> > +#define NGBE_DEV_ID_EM_WX1860A2S		0x0102
> > +#define NGBE_DEV_ID_EM_WX1860A2S_VF		0x0112
> > +#define NGBE_DEV_ID_EM_WX1860A4			0x0103
> > +#define NGBE_DEV_ID_EM_WX1860A4_VF		0x0113
> > +#define NGBE_DEV_ID_EM_WX1860A4S		0x0104
> > +#define NGBE_DEV_ID_EM_WX1860A4S_VF		0x0114
> > +#define NGBE_DEV_ID_EM_WX1860AL2		0x0105
> > +#define NGBE_DEV_ID_EM_WX1860AL2_VF		0x0115
> > +#define NGBE_DEV_ID_EM_WX1860AL2S		0x0106
> > +#define NGBE_DEV_ID_EM_WX1860AL2S_VF		0x0116
> > +#define NGBE_DEV_ID_EM_WX1860AL4		0x0107
> > +#define NGBE_DEV_ID_EM_WX1860AL4_VF		0x0117
> > +#define NGBE_DEV_ID_EM_WX1860AL4S		0x0108
> > +#define NGBE_DEV_ID_EM_WX1860AL4S_VF		0x0118
> > +#define NGBE_DEV_ID_EM_WX1860NCSI		0x0109
> > +#define NGBE_DEV_ID_EM_WX1860NCSI_VF		0x0119
> > +#define NGBE_DEV_ID_EM_WX1860A1			0x010A
> > +#define NGBE_DEV_ID_EM_WX1860A1_VF		0x011A
> > +#define NGBE_DEV_ID_EM_WX1860A1L		0x010B
> > +#define NGBE_DEV_ID_EM_WX1860A1L_VF		0x011B
> > +#define   NGBE_SUB_DEV_ID_EM_ZTE5201_RJ45	0x0100
> > +#define   NGBE_SUB_DEV_ID_EM_SF100F_LP		0x0103
> > +#define   NGBE_SUB_DEV_ID_EM_M88E1512_RJ45	0x0200
> > +#define   NGBE_SUB_DEV_ID_EM_SF100HT		0x0102
> > +#define   NGBE_SUB_DEV_ID_EM_SF200T		0x0201
> > +#define   NGBE_SUB_DEV_ID_EM_SF200HT		0x0202
> > +#define   NGBE_SUB_DEV_ID_EM_SF200T_S		0x0210
> > +#define   NGBE_SUB_DEV_ID_EM_SF200HT_S		0x0220
> > +#define   NGBE_SUB_DEV_ID_EM_SF200HXT		0x0230
> > +#define   NGBE_SUB_DEV_ID_EM_SF400T		0x0401
> > +#define   NGBE_SUB_DEV_ID_EM_SF400HT		0x0402
> > +#define   NGBE_SUB_DEV_ID_EM_M88E1512_SFP	0x0403
> > +#define   NGBE_SUB_DEV_ID_EM_SF400T_S		0x0410
> > +#define   NGBE_SUB_DEV_ID_EM_SF400HT_S		0x0420
> > +#define   NGBE_SUB_DEV_ID_EM_SF400HXT		0x0430
> > +#define   NGBE_SUB_DEV_ID_EM_SF400_OCP		0x0440
> > +#define   NGBE_SUB_DEV_ID_EM_SF400_LY		0x0450
> > +#define   NGBE_SUB_DEV_ID_EM_SF400_LY_YT	0x0470
> > +
> > +/* Assign excessive id with masks */
> > +#define NGBE_INTERNAL_MASK			0x000F
> > +#define NGBE_OEM_MASK				0x00F0
> > +#define NGBE_WOL_SUP_MASK			0x4000
> > +#define NGBE_NCSI_SUP_MASK			0x8000
> > +
> > +#define NGBE_INTERNAL_SFP			0x0003
> > +#define NGBE_OCP_CARD				0x0040
> > +#define NGBE_LY_M88E1512_SFP			0x0050
> > +#define NGBE_YT8521S_SFP			0x0060
> > +#define NGBE_LY_YT8521S_SFP			0x0070
> > +#define NGBE_WOL_SUP				0x4000
> > +#define NGBE_NCSI_SUP				0x8000
> > +
> > +#endif /* _NGBE_DEVIDS_H_ */
> > diff --git a/drivers/net/ngbe/meson.build b/drivers/net/ngbe/meson.build
> > index de2d7be716..81173fa7f0 100644
> > --- a/drivers/net/ngbe/meson.build
> > +++ b/drivers/net/ngbe/meson.build
> > @@ -7,6 +7,12 @@ if is_windows
> >   	subdir_done()
> >   endif
> >
> > +subdir('base')
> > +objs = [base_objs]
> > +
> >   sources = files(
> >   	'ngbe_ethdev.c',
> >   )
> > +
> > +includes += include_directories('base')
> > +
> > diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
> > index e424ff11a2..0f1fa86fe6 100644
> > --- a/drivers/net/ngbe/ngbe_ethdev.c
> > +++ b/drivers/net/ngbe/ngbe_ethdev.c
> > @@ -3,3 +3,54 @@
> >    * Copyright(c) 2010-2017 Intel Corporation
> >    */
> >
> > +#include <ethdev_pci.h>
> > +
> > +#include <base/ngbe_devids.h>
> > +
> > +/*
> > + * The set of PCI devices this driver supports
> > + */
> > +static const struct rte_pci_id pci_id_ngbe_map[] = {
> > +	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860A2) },
> > +	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860A2S) },
> > +	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860A4) },
> > +	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860A4S) },
> > +	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860AL2) },
> > +	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860AL2S) },
> > +	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860AL4) },
> > +	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860AL4S) },
> > +	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860NCSI) },
> > +	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860A1) },
> > +	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860A1L) },
> > +	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860AL_W) },
> 
> Are all these devices supported at once? Or do some devices require extra code
> and it would be clearer to add their IDs later?

Yes, all these device IDs need to be supported at once.
Some extra code is added based on different subsystem IDs.

> 
> > +	{ .vendor_id = 0, /* sentinel */ },
> > +};
> > +
> > +static int
> > +eth_ngbe_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
> > +		struct rte_pci_device *pci_dev)
> > +{
> > +	RTE_SET_USED(pci_dev);
> > +
> > +	return 0;
> 
> IMHO the more correct behaviour for such dummy functions is to return failure.
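> E.g. (a sketch; which error code to use is up to you):
>
> 	static int
> 	eth_ngbe_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
> 			struct rte_pci_device *pci_dev)
> 	{
> 		RTE_SET_USED(pci_dev);
>
> 		return -ENOTSUP;
> 	}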

Got it.

> 
> > +}
> > +
> > +static int
> > +eth_ngbe_pci_remove(struct rte_pci_device *pci_dev)
> > +{
> > +	RTE_SET_USED(pci_dev);
> > +
> > +	return 0;
> > +}
> > +
> > +static struct rte_pci_driver rte_ngbe_pmd = {
> > +	.id_table = pci_id_ngbe_map,
> > +	.drv_flags = RTE_PCI_DRV_NEED_MAPPING |
> > +		     RTE_PCI_DRV_INTR_LSC,
> 
> LSC should be added here when it is actually supported.
> 
> > +	.probe = eth_ngbe_pci_probe,
> > +	.remove = eth_ngbe_pci_remove,
> > +};
> > +
> > +RTE_PMD_REGISTER_PCI(net_ngbe, rte_ngbe_pmd);
> > +RTE_PMD_REGISTER_PCI_TABLE(net_ngbe, pci_id_ngbe_map);
> > +RTE_PMD_REGISTER_KMOD_DEP(net_ngbe, "* igb_uio | uio_pci_generic | vfio-pci");
> > +
> >






* Re: [dpdk-dev] [PATCH v5 05/24] net/ngbe: add log type and error type
  2021-06-14 17:54   ` Andrew Rybchenko
@ 2021-06-15  7:13     ` Jiawen Wu
  0 siblings, 0 replies; 51+ messages in thread
From: Jiawen Wu @ 2021-06-15  7:13 UTC (permalink / raw)
  To: 'Andrew Rybchenko', dev

On Tuesday, June 15, 2021 1:55 AM, Andrew Rybchenko wrote:
> On 6/2/21 12:40 PM, Jiawen Wu wrote:
> > Add log type and error type to trace functions.
> >
> > Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
> > ---
> >   doc/guides/nics/ngbe.rst            |  20 +++++
> >   drivers/net/ngbe/base/ngbe_status.h | 125
> ++++++++++++++++++++++++++++
> >   drivers/net/ngbe/base/ngbe_type.h   |   1 +
> >   drivers/net/ngbe/ngbe_ethdev.c      |  16 ++++
> >   drivers/net/ngbe/ngbe_logs.h        |  46 ++++++++++
> >   5 files changed, 208 insertions(+)
> >   create mode 100644 drivers/net/ngbe/base/ngbe_status.h
> >   create mode 100644 drivers/net/ngbe/ngbe_logs.h
> >
> > diff --git a/doc/guides/nics/ngbe.rst b/doc/guides/nics/ngbe.rst
> > index 4ec2623a05..c274a15aab 100644
> > --- a/doc/guides/nics/ngbe.rst
> > +++ b/doc/guides/nics/ngbe.rst
> > @@ -15,6 +15,26 @@ Prerequisites
> >
> >   - Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to
> setup the basic DPDK environment.
> >
> 
> There should be two empty lines before the section start.
> 
> > +Pre-Installation Configuration
> > +------------------------------
> > +
> > +Dynamic Logging Parameters
> > +~~~~~~~~~~~~~~~~~~~~~~~~~~
> > +
> > +One may leverage EAL option "--log-level" to change default levels
> > +for the log types supported by the driver. The option is used with an
> > +argument typically consisting of two parts separated by a colon.
> > +
> > +NGBE PMD provides the following log types available for control:
> > +
> > +- ``pmd.net.ngbe.driver`` (default level is **notice**)
> > +
> > +  Affects driver-wide messages unrelated to any particular devices.
> > +
> > +- ``pmd.net.ngbe.init`` (default level is **notice**)
> > +
> > +  Extra logging of the messages during PMD initialization.
> > +
> 
> Same here.
> 
> >   Driver compilation and testing
> >   ------------------------------
> >
> > diff --git a/drivers/net/ngbe/base/ngbe_status.h
> > b/drivers/net/ngbe/base/ngbe_status.h
> > new file mode 100644
> > index 0000000000..b1836c6479
> > --- /dev/null
> > +++ b/drivers/net/ngbe/base/ngbe_status.h
> > @@ -0,0 +1,125 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
> > + * Copyright(c) 2010-2017 Intel Corporation
> > + */
> > +
> > +#ifndef _NGBE_STATUS_H_
> > +#define _NGBE_STATUS_H_
> > +
> > +/* Error Codes:
> > + * common error
> > + * module error(simple)
> > + * module error(detailed)
> > + *
> > + * (-256, 256): reserved for non-ngbe defined error code
> > + */
> > +#define TERR_BASE (0x100)
> > +enum ngbe_error {
> > +	TERR_NULL = TERR_BASE,
> > +	TERR_ANY,
> > +	TERR_NOSUPP,
> > +	TERR_NOIMPL,
> > +	TERR_NOMEM,
> > +	TERR_NOSPACE,
> > +	TERR_NOENTRY,
> > +	TERR_CONFIG,
> > +	TERR_ARGS,
> > +	TERR_PARAM,
> > +	TERR_INVALID,
> > +	TERR_TIMEOUT,
> > +	TERR_VERSION,
> > +	TERR_REGISTER,
> > +	TERR_FEATURE,
> > +	TERR_RESET,
> > +	TERR_AUTONEG,
> > +	TERR_MBX,
> > +	TERR_I2C,
> > +	TERR_FC,
> > +	TERR_FLASH,
> > +	TERR_DEVICE,
> > +	TERR_HOSTIF,
> > +	TERR_SRAM,
> > +	TERR_EEPROM,
> > +	TERR_EEPROM_CHECKSUM,
> > +	TERR_EEPROM_PROTECT,
> > +	TERR_EEPROM_VERSION,
> > +	TERR_MAC,
> > +	TERR_MAC_ADDR,
> > +	TERR_SFP,
> > +	TERR_SFP_INITSEQ,
> > +	TERR_SFP_PRESENT,
> > +	TERR_SFP_SUPPORT,
> > +	TERR_SFP_SETUP,
> > +	TERR_PHY,
> > +	TERR_PHY_ADDR,
> > +	TERR_PHY_INIT,
> > +	TERR_FDIR_CMD,
> > +	TERR_FDIR_REINIT,
> > +	TERR_SWFW_SYNC,
> > +	TERR_SWFW_COMMAND,
> > +	TERR_FC_CFG,
> > +	TERR_FC_NEGO,
> > +	TERR_LINK_SETUP,
> > +	TERR_PCIE_PENDING,
> > +	TERR_PBA_SECTION,
> > +	TERR_OVERTEMP,
> > +	TERR_UNDERTEMP,
> > +	TERR_XPCS_POWERUP,
> > +};
> > +
> > +/* WARNING: just for legacy compatibility */
> > +#define NGBE_NOT_IMPLEMENTED 0x7FFFFFFF
> > +#define NGBE_ERR_OPS_DUMMY   0x3FFFFFFF
> > +
> > +/* Error Codes */
> > +#define NGBE_ERR_EEPROM				-(TERR_BASE + 1)
> > +#define NGBE_ERR_EEPROM_CHECKSUM		-(TERR_BASE + 2)
> > +#define NGBE_ERR_PHY				-(TERR_BASE + 3)
> > +#define NGBE_ERR_CONFIG				-(TERR_BASE + 4)
> > +#define NGBE_ERR_PARAM				-(TERR_BASE + 5)
> > +#define NGBE_ERR_MAC_TYPE			-(TERR_BASE + 6)
> > +#define NGBE_ERR_UNKNOWN_PHY			-(TERR_BASE + 7)
> > +#define NGBE_ERR_LINK_SETUP			-(TERR_BASE + 8)
> > +#define NGBE_ERR_ADAPTER_STOPPED		-(TERR_BASE + 9)
> > +#define NGBE_ERR_INVALID_MAC_ADDR		-(TERR_BASE + 10)
> > +#define NGBE_ERR_DEVICE_NOT_SUPPORTED		-(TERR_BASE + 11)
> > +#define NGBE_ERR_MASTER_REQUESTS_PENDING	-(TERR_BASE + 12)
> > +#define NGBE_ERR_INVALID_LINK_SETTINGS		-(TERR_BASE + 13)
> > +#define NGBE_ERR_AUTONEG_NOT_COMPLETE		-(TERR_BASE + 14)
> > +#define NGBE_ERR_RESET_FAILED			-(TERR_BASE + 15)
> > +#define NGBE_ERR_SWFW_SYNC			-(TERR_BASE + 16)
> > +#define NGBE_ERR_PHY_ADDR_INVALID		-(TERR_BASE + 17)
> > +#define NGBE_ERR_I2C				-(TERR_BASE + 18)
> > +#define NGBE_ERR_SFP_NOT_SUPPORTED		-(TERR_BASE + 19)
> > +#define NGBE_ERR_SFP_NOT_PRESENT		-(TERR_BASE + 20)
> > +#define NGBE_ERR_SFP_NO_INIT_SEQ_PRESENT	-(TERR_BASE + 21)
> > +#define NGBE_ERR_NO_SAN_ADDR_PTR		-(TERR_BASE + 22)
> > +#define NGBE_ERR_FDIR_REINIT_FAILED		-(TERR_BASE + 23)
> > +#define NGBE_ERR_EEPROM_VERSION			-(TERR_BASE + 24)
> > +#define NGBE_ERR_NO_SPACE			-(TERR_BASE + 25)
> > +#define NGBE_ERR_OVERTEMP			-(TERR_BASE + 26)
> > +#define NGBE_ERR_FC_NOT_NEGOTIATED		-(TERR_BASE + 27)
> > +#define NGBE_ERR_FC_NOT_SUPPORTED		-(TERR_BASE + 28)
> > +#define NGBE_ERR_SFP_SETUP_NOT_COMPLETE		-(TERR_BASE + 30)
> > +#define NGBE_ERR_PBA_SECTION			-(TERR_BASE + 31)
> > +#define NGBE_ERR_INVALID_ARGUMENT		-(TERR_BASE + 32)
> > +#define NGBE_ERR_HOST_INTERFACE_COMMAND		-(TERR_BASE + 33)
> > +#define NGBE_ERR_OUT_OF_MEM			-(TERR_BASE + 34)
> > +#define NGBE_ERR_FEATURE_NOT_SUPPORTED		-(TERR_BASE + 36)
> > +#define NGBE_ERR_EEPROM_PROTECTED_REGION	-(TERR_BASE + 37)
> > +#define NGBE_ERR_FDIR_CMD_INCOMPLETE		-(TERR_BASE + 38)
> > +#define NGBE_ERR_FW_RESP_INVALID		-(TERR_BASE + 39)
> > +#define NGBE_ERR_TOKEN_RETRY			-(TERR_BASE + 40)
> > +#define NGBE_ERR_FLASH_LOADING_FAILED		-(TERR_BASE + 41)
> > +
> > +#define NGBE_ERR_NOSUPP                        -(TERR_BASE + 42)
> > +#define NGBE_ERR_UNDERTEMP                     -(TERR_BASE + 43)
> > +#define NGBE_ERR_XPCS_POWER_UP_FAILED          -(TERR_BASE + 44)
> > +#define NGBE_ERR_PHY_INIT_NOT_DONE             -(TERR_BASE + 45)
> > +#define NGBE_ERR_TIMEOUT                       -(TERR_BASE + 46)
> > +#define NGBE_ERR_REGISTER                      -(TERR_BASE + 47)
> > +#define NGBE_ERR_MNG_ACCESS_FAILED             -(TERR_BASE + 49)
> > +#define NGBE_ERR_PHY_TYPE                      -(TERR_BASE + 50)
> > +#define NGBE_ERR_PHY_TIMEOUT                   -(TERR_BASE + 51)
> 
> Not sure that I understand how the above defines are related to logging.

The above redundant code was not handled properly in the early versions.
I'll remove enum ngbe_error and use the above macro defines to return
different errors.






* Re: [dpdk-dev] [PATCH v5 15/24] net/ngbe: add Rx queue setup and release
  2021-06-14 18:53   ` Andrew Rybchenko
@ 2021-06-15  7:50     ` Jiawen Wu
  2021-06-15  8:06       ` Andrew Rybchenko
  0 siblings, 1 reply; 51+ messages in thread
From: Jiawen Wu @ 2021-06-15  7:50 UTC (permalink / raw)
  To: 'Andrew Rybchenko', dev

On Tuesday, June 15, 2021 2:53 AM, Andrew Rybchenko wrote:
> On 6/2/21 12:40 PM, Jiawen Wu wrote:
> > Setup device Rx queue and release Rx queue.
> >
> > Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
> > ---
> >   drivers/net/ngbe/ngbe_ethdev.c |   9 +
> >   drivers/net/ngbe/ngbe_ethdev.h |   8 +
> >   drivers/net/ngbe/ngbe_rxtx.c   | 305
> +++++++++++++++++++++++++++++++++
> >   drivers/net/ngbe/ngbe_rxtx.h   |  90 ++++++++++
> >   4 files changed, 412 insertions(+)
> >
> > diff --git a/drivers/net/ngbe/ngbe_rxtx.h b/drivers/net/ngbe/ngbe_rxtx.h
> > index 39011ee286..e1676a53b4 100644
> > --- a/drivers/net/ngbe/ngbe_rxtx.h
> > +++ b/drivers/net/ngbe/ngbe_rxtx.h
> > @@ -6,7 +6,97 @@
> >   #ifndef _NGBE_RXTX_H_
> >   #define _NGBE_RXTX_H_
> >
> > +/*****************************************************************************
> > + * Receive Descriptor
> > + *****************************************************************************/
> > +struct ngbe_rx_desc {
> > +	struct {
> > +		union {
> > +			__le32 dw0;
> 
> rte_* types should be used

I don't quite understand: should '__le32' be changed to an 'rte_*' type?

> 
> > +			struct {
> > +				__le16 pkt;
> > +				__le16 hdr;
> > +			} lo;
> > +		};
> > +		union {
> > +			__le32 dw1;
> > +			struct {
> > +				__le16 ipid;
> > +				__le16 csum;
> > +			} hi;
> > +		};
> > +	} qw0; /* also as r.pkt_addr */
> > +	struct {
> > +		union {
> > +			__le32 dw2;
> > +			struct {
> > +				__le32 status;
> > +			} lo;
> > +		};
> > +		union {
> > +			__le32 dw3;
> > +			struct {
> > +				__le16 len;
> > +				__le16 tag;
> > +			} hi;
> > +		};
> > +	} qw1; /* also as r.hdr_addr */
> > +};
> > +






* Re: [dpdk-dev] [PATCH v5 15/24] net/ngbe: add Rx queue setup and release
  2021-06-15  7:50     ` Jiawen Wu
@ 2021-06-15  8:06       ` Andrew Rybchenko
  0 siblings, 0 replies; 51+ messages in thread
From: Andrew Rybchenko @ 2021-06-15  8:06 UTC (permalink / raw)
  To: Jiawen Wu, dev

On 6/15/21 10:50 AM, Jiawen Wu wrote:
> On Tuesday, June 15, 2021 2:53 AM, Andrew Rybchenko wrote:
>> On 6/2/21 12:40 PM, Jiawen Wu wrote:
>>> Setup device Rx queue and release Rx queue.
>>>
>>> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
>>> ---
>>>   drivers/net/ngbe/ngbe_ethdev.c |   9 +
>>>   drivers/net/ngbe/ngbe_ethdev.h |   8 +
>>>   drivers/net/ngbe/ngbe_rxtx.c   | 305
>> +++++++++++++++++++++++++++++++++
>>>   drivers/net/ngbe/ngbe_rxtx.h   |  90 ++++++++++
>>>   4 files changed, 412 insertions(+)
>>>
>>> diff --git a/drivers/net/ngbe/ngbe_rxtx.h b/drivers/net/ngbe/ngbe_rxtx.h
>>> index 39011ee286..e1676a53b4 100644
>>> --- a/drivers/net/ngbe/ngbe_rxtx.h
>>> +++ b/drivers/net/ngbe/ngbe_rxtx.h
>>> @@ -6,7 +6,97 @@
>>>   #ifndef _NGBE_RXTX_H_
>>>   #define _NGBE_RXTX_H_
>>>
>>> +/*****************************************************************************
>>> + * Receive Descriptor
>>> + *****************************************************************************/
>>> +struct ngbe_rx_desc {
>>> +	struct {
>>> +		union {
>>> +			__le32 dw0;
>>
>> rte_* types should be used
> 
> I don't quite understand: should '__le32' be changed to an 'rte_*' type?

Yes, since it is native DPDK code, it should use native
DPDK data types. In this particular case it is rte_le32_t from
rte_byteorder.h.
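I.e. the descriptor would become something like (an untested sketch):

struct ngbe_rx_desc {
	struct {
		union {
			rte_le32_t dw0;
			struct {
				rte_le16_t pkt;
				rte_le16_t hdr;
			} lo;
		};
		union {
			rte_le32_t dw1;
			struct {
				rte_le16_t ipid;
				rte_le16_t csum;
			} hi;
		};
	} qw0; /* also as r.pkt_addr */
	struct {
		union {
			rte_le32_t dw2;
			struct {
				rte_le32_t status;
			} lo;
		};
		union {
			rte_le32_t dw3;
			struct {
				rte_le16_t len;
				rte_le16_t tag;
			} hi;
		};
	} qw1; /* also as r.hdr_addr */
};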

> 
>>
>>> +			struct {
>>> +				__le16 pkt;
>>> +				__le16 hdr;
>>> +			} lo;
>>> +		};
>>> +		union {
>>> +			__le32 dw1;
>>> +			struct {
>>> +				__le16 ipid;
>>> +				__le16 csum;
>>> +			} hi;
>>> +		};
>>> +	} qw0; /* also as r.pkt_addr */
>>> +	struct {
>>> +		union {
>>> +			__le32 dw2;
>>> +			struct {
>>> +				__le32 status;
>>> +			} lo;
>>> +		};
>>> +		union {
>>> +			__le32 dw3;
>>> +			struct {
>>> +				__le16 len;
>>> +				__le16 tag;
>>> +			} hi;
>>> +		};
>>> +	} qw1; /* also as r.hdr_addr */
>>> +};
>>> +
> 
> 
> 



* Re: [dpdk-dev] [PATCH v5 05/24] net/ngbe: add log type and error type
  2021-06-02  9:40 ` [dpdk-dev] [PATCH v5 05/24] net/ngbe: add log type and error type Jiawen Wu
  2021-06-14 17:54   ` Andrew Rybchenko
@ 2021-07-01 13:57   ` David Marchand
  2021-07-02  2:08     ` Jiawen Wu
  1 sibling, 1 reply; 51+ messages in thread
From: David Marchand @ 2021-07-01 13:57 UTC (permalink / raw)
  To: Jiawen Wu; +Cc: dev

Hello,

Currently looking at new drivers posted on the ml.


On Wed, Jun 2, 2021 at 11:40 AM Jiawen Wu <jiawenwu@trustnetic.com> wrote:
> @@ -124,3 +131,12 @@ RTE_PMD_REGISTER_PCI(net_ngbe, rte_ngbe_pmd);
>  RTE_PMD_REGISTER_PCI_TABLE(net_ngbe, pci_id_ngbe_map);
>  RTE_PMD_REGISTER_KMOD_DEP(net_ngbe, "* igb_uio | uio_pci_generic | vfio-pci");
>
> +RTE_LOG_REGISTER(ngbe_logtype_init, pmd.net.ngbe.init, NOTICE);
> +RTE_LOG_REGISTER(ngbe_logtype_driver, pmd.net.ngbe.driver, NOTICE);

Please use helpers added recently:
https://git.dpdk.org/dpdk/commit/?id=eeded2044af5bbe88220120b14933536cbb3edb6

Converting this patch should be quick:
RTE_LOG_REGISTER_SUFFIX(ngbe_logtype_init, init, NOTICE);
RTE_LOG_REGISTER_SUFFIX(ngbe_logtype_driver, driver, NOTICE);

etc...

> +
> +#ifdef RTE_ETHDEV_DEBUG_RX
> +       RTE_LOG_REGISTER(ngbe_logtype_rx, pmd.net.ngbe.rx, DEBUG);
> +#endif
> +#ifdef RTE_ETHDEV_DEBUG_TX
> +       RTE_LOG_REGISTER(ngbe_logtype_tx, pmd.net.ngbe.tx, DEBUG);
> +#endif
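
These would then become something like (sketch):

#ifdef RTE_ETHDEV_DEBUG_RX
RTE_LOG_REGISTER_SUFFIX(ngbe_logtype_rx, rx, DEBUG);
#endif
#ifdef RTE_ETHDEV_DEBUG_TX
RTE_LOG_REGISTER_SUFFIX(ngbe_logtype_tx, tx, DEBUG);
#endif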



-- 
David Marchand



* Re: [dpdk-dev] [PATCH v5 05/24] net/ngbe: add log type and error type
  2021-07-01 13:57   ` David Marchand
@ 2021-07-02  2:08     ` Jiawen Wu
  0 siblings, 0 replies; 51+ messages in thread
From: Jiawen Wu @ 2021-07-02  2:08 UTC (permalink / raw)
  To: 'David Marchand'; +Cc: 'dev'

On July 1, 2021 9:57 PM, David Marchand wrote:
> Hello,
> 
> Currently looking at new drivers posted on the ml.
> 
> 
> On Wed, Jun 2, 2021 at 11:40 AM Jiawen Wu <jiawenwu@trustnetic.com>
> wrote:
> > @@ -124,3 +131,12 @@ RTE_PMD_REGISTER_PCI(net_ngbe, rte_ngbe_pmd);
> > RTE_PMD_REGISTER_PCI_TABLE(net_ngbe, pci_id_ngbe_map);
> > RTE_PMD_REGISTER_KMOD_DEP(net_ngbe, "* igb_uio | uio_pci_generic | vfio-pci");
> >
> > +RTE_LOG_REGISTER(ngbe_logtype_init, pmd.net.ngbe.init, NOTICE);
> > +RTE_LOG_REGISTER(ngbe_logtype_driver, pmd.net.ngbe.driver, NOTICE);
> 
> Please use helpers added recently:
> https://git.dpdk.org/dpdk/commit/?id=eeded2044af5bbe88220120b14933536cbb3edb6
> 
> Converting this patch should be quick:
> RTE_LOG_REGISTER_SUFFIX(ngbe_logtype_init, init, NOTICE);
> RTE_LOG_REGISTER_SUFFIX(ngbe_logtype_driver, driver, NOTICE);
> 
> etc...
> 
> > +
> > +#ifdef RTE_ETHDEV_DEBUG_RX
> > +       RTE_LOG_REGISTER(ngbe_logtype_rx, pmd.net.ngbe.rx, DEBUG);
> > +#endif
> > +#ifdef RTE_ETHDEV_DEBUG_TX
> > +       RTE_LOG_REGISTER(ngbe_logtype_tx, pmd.net.ngbe.tx, DEBUG);
> > +#endif
> 
> 
> 
> --
> David Marchand

Thanks David,
I'll convert it in the v6 patch set. Are there any comments on the other patches?






Thread overview: 51+ messages
2021-06-02  9:40 [dpdk-dev] [PATCH v5 00/24] net: ngbe PMD Jiawen Wu
2021-06-02  9:40 ` [dpdk-dev] [PATCH v5 01/24] net/ngbe: add build and doc infrastructure Jiawen Wu
2021-06-14 17:05   ` Andrew Rybchenko
2021-06-02  9:40 ` [dpdk-dev] [PATCH v5 02/24] net/ngbe: add device IDs Jiawen Wu
2021-06-14 17:08   ` Andrew Rybchenko
2021-06-15  2:52     ` Jiawen Wu
2021-06-02  9:40 ` [dpdk-dev] [PATCH v5 03/24] net/ngbe: support probe and remove Jiawen Wu
2021-06-14 17:27   ` Andrew Rybchenko
2021-06-02  9:40 ` [dpdk-dev] [PATCH v5 04/24] net/ngbe: add device init and uninit Jiawen Wu
2021-06-14 17:36   ` Andrew Rybchenko
2021-06-02  9:40 ` [dpdk-dev] [PATCH v5 05/24] net/ngbe: add log type and error type Jiawen Wu
2021-06-14 17:54   ` Andrew Rybchenko
2021-06-15  7:13     ` Jiawen Wu
2021-07-01 13:57   ` David Marchand
2021-07-02  2:08     ` Jiawen Wu
2021-06-02  9:40 ` [dpdk-dev] [PATCH v5 06/24] net/ngbe: define registers Jiawen Wu
2021-06-02  9:40 ` [dpdk-dev] [PATCH v5 07/24] net/ngbe: set MAC type and LAN id Jiawen Wu
2021-06-02  9:40 ` [dpdk-dev] [PATCH v5 08/24] net/ngbe: init and validate EEPROM Jiawen Wu
2021-06-02  9:40 ` [dpdk-dev] [PATCH v5 09/24] net/ngbe: add HW initialization Jiawen Wu
2021-06-14 18:01   ` Andrew Rybchenko
2021-06-02  9:40 ` [dpdk-dev] [PATCH v5 10/24] net/ngbe: identify PHY and reset PHY Jiawen Wu
2021-06-02  9:40 ` [dpdk-dev] [PATCH v5 11/24] net/ngbe: store MAC address Jiawen Wu
2021-06-02  9:40 ` [dpdk-dev] [PATCH v5 12/24] net/ngbe: add info get operation Jiawen Wu
2021-06-14 18:13   ` Andrew Rybchenko
2021-06-02  9:40 ` [dpdk-dev] [PATCH v5 13/24] net/ngbe: support link update Jiawen Wu
2021-06-14 18:45   ` Andrew Rybchenko
2021-06-02  9:40 ` [dpdk-dev] [PATCH v5 14/24] net/ngbe: setup the check PHY link Jiawen Wu
2021-06-02  9:40 ` [dpdk-dev] [PATCH v5 15/24] net/ngbe: add Rx queue setup and release Jiawen Wu
2021-06-14 18:53   ` Andrew Rybchenko
2021-06-15  7:50     ` Jiawen Wu
2021-06-15  8:06       ` Andrew Rybchenko
2021-06-02  9:41 ` [dpdk-dev] [PATCH v5 16/24] net/ngbe: add Tx " Jiawen Wu
2021-06-14 18:59   ` Andrew Rybchenko
2021-06-02  9:41 ` [dpdk-dev] [PATCH v5 17/24] net/ngbe: add Rx and Tx init Jiawen Wu
2021-06-14 19:01   ` Andrew Rybchenko
2021-06-02  9:41 ` [dpdk-dev] [PATCH v5 18/24] net/ngbe: add packet type Jiawen Wu
2021-06-14 19:06   ` Andrew Rybchenko
2021-06-02  9:41 ` [dpdk-dev] [PATCH v5 19/24] net/ngbe: add simple Rx and Tx flow Jiawen Wu
2021-06-14 19:10   ` Andrew Rybchenko
2021-06-02  9:41 ` [dpdk-dev] [PATCH v5 20/24] net/ngbe: support bulk and scatter Rx Jiawen Wu
2021-06-14 19:17   ` Andrew Rybchenko
2021-06-02  9:41 ` [dpdk-dev] [PATCH v5 21/24] net/ngbe: support full-featured Tx path Jiawen Wu
2021-06-14 19:22   ` Andrew Rybchenko
2021-06-14 19:23     ` Andrew Rybchenko
2021-06-02  9:41 ` [dpdk-dev] [PATCH v5 22/24] net/ngbe: add device start operation Jiawen Wu
2021-06-14 19:33   ` Andrew Rybchenko
2021-06-02  9:41 ` [dpdk-dev] [PATCH v5 23/24] net/ngbe: start and stop RxTx Jiawen Wu
2021-06-14 20:44   ` Andrew Rybchenko
2021-06-02  9:41 ` [dpdk-dev] [PATCH v5 24/24] net/ngbe: add device stop operation Jiawen Wu
2021-06-11  1:38 ` [dpdk-dev] [PATCH v5 00/24] net: ngbe PMD Jiawen Wu
2021-06-14 20:56 ` Andrew Rybchenko
