* [dpdk-dev] [PATCH v6 00/19] net: ngbe PMD
@ 2021-06-17 10:59 Jiawen Wu
2021-06-17 10:59 ` [dpdk-dev] [PATCH v6 01/19] net/ngbe: add build and doc infrastructure Jiawen Wu
` (18 more replies)
0 siblings, 19 replies; 42+ messages in thread
From: Jiawen Wu @ 2021-06-17 10:59 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu
This patch set provides a skeleton of the ngbe PMD,
adapted to Wangxun WX1860 series NICs.
v6:
- Correct style errors and re-split patches.
v5:
- Extend patches with device initialization and RxTx functions.
v4:
- Fix compile error.
v3:
- Use rte_ether functions to define macros.
v2:
- Correct some clerical errors.
- Use ethdev debug flags instead of the driver's own.
Jiawen Wu (19):
net/ngbe: add build and doc infrastructure
net/ngbe: support probe and remove
net/ngbe: add log type and error type
net/ngbe: define registers
net/ngbe: set MAC type and LAN ID with device initialization
net/ngbe: init and validate EEPROM
net/ngbe: add HW initialization
net/ngbe: identify PHY and reset PHY
net/ngbe: store MAC address
net/ngbe: support link update
net/ngbe: setup the check PHY link
net/ngbe: add Rx queue setup and release
net/ngbe: add Tx queue setup and release
net/ngbe: add simple Rx flow
net/ngbe: add simple Tx flow
net/ngbe: add device start and stop operations
net/ngbe: add Tx queue start and stop
net/ngbe: add Rx queue start and stop
net/ngbe: support to close and reset device
MAINTAINERS | 6 +
doc/guides/nics/features/ngbe.ini | 15 +
doc/guides/nics/index.rst | 1 +
doc/guides/nics/ngbe.rst | 57 +
doc/guides/rel_notes/release_21_08.rst | 6 +
drivers/net/meson.build | 1 +
drivers/net/ngbe/base/meson.build | 26 +
drivers/net/ngbe/base/ngbe.h | 11 +
drivers/net/ngbe/base/ngbe_devids.h | 83 ++
drivers/net/ngbe/base/ngbe_dummy.h | 209 ++++
drivers/net/ngbe/base/ngbe_eeprom.c | 203 ++++
drivers/net/ngbe/base/ngbe_eeprom.h | 17 +
drivers/net/ngbe/base/ngbe_hw.c | 1068 +++++++++++++++++
drivers/net/ngbe/base/ngbe_hw.h | 59 +
drivers/net/ngbe/base/ngbe_mng.c | 198 ++++
drivers/net/ngbe/base/ngbe_mng.h | 65 ++
drivers/net/ngbe/base/ngbe_osdep.h | 183 +++
drivers/net/ngbe/base/ngbe_phy.c | 451 +++++++
drivers/net/ngbe/base/ngbe_phy.h | 62 +
drivers/net/ngbe/base/ngbe_phy_mvl.c | 251 ++++
drivers/net/ngbe/base/ngbe_phy_mvl.h | 97 ++
drivers/net/ngbe/base/ngbe_phy_rtl.c | 289 +++++
drivers/net/ngbe/base/ngbe_phy_rtl.h | 89 ++
drivers/net/ngbe/base/ngbe_phy_yt.c | 272 +++++
drivers/net/ngbe/base/ngbe_phy_yt.h | 76 ++
drivers/net/ngbe/base/ngbe_regs.h | 1490 ++++++++++++++++++++++++
drivers/net/ngbe/base/ngbe_status.h | 73 ++
drivers/net/ngbe/base/ngbe_type.h | 204 ++++
drivers/net/ngbe/meson.build | 19 +
drivers/net/ngbe/ngbe_ethdev.c | 1184 +++++++++++++++++++
drivers/net/ngbe/ngbe_ethdev.h | 134 +++
drivers/net/ngbe/ngbe_logs.h | 46 +
drivers/net/ngbe/ngbe_rxtx.c | 1356 +++++++++++++++++++++
drivers/net/ngbe/ngbe_rxtx.h | 302 +++++
drivers/net/ngbe/version.map | 3 +
35 files changed, 8606 insertions(+)
create mode 100644 doc/guides/nics/features/ngbe.ini
create mode 100644 doc/guides/nics/ngbe.rst
create mode 100644 drivers/net/ngbe/base/meson.build
create mode 100644 drivers/net/ngbe/base/ngbe.h
create mode 100644 drivers/net/ngbe/base/ngbe_devids.h
create mode 100644 drivers/net/ngbe/base/ngbe_dummy.h
create mode 100644 drivers/net/ngbe/base/ngbe_eeprom.c
create mode 100644 drivers/net/ngbe/base/ngbe_eeprom.h
create mode 100644 drivers/net/ngbe/base/ngbe_hw.c
create mode 100644 drivers/net/ngbe/base/ngbe_hw.h
create mode 100644 drivers/net/ngbe/base/ngbe_mng.c
create mode 100644 drivers/net/ngbe/base/ngbe_mng.h
create mode 100644 drivers/net/ngbe/base/ngbe_osdep.h
create mode 100644 drivers/net/ngbe/base/ngbe_phy.c
create mode 100644 drivers/net/ngbe/base/ngbe_phy.h
create mode 100644 drivers/net/ngbe/base/ngbe_phy_mvl.c
create mode 100644 drivers/net/ngbe/base/ngbe_phy_mvl.h
create mode 100644 drivers/net/ngbe/base/ngbe_phy_rtl.c
create mode 100644 drivers/net/ngbe/base/ngbe_phy_rtl.h
create mode 100644 drivers/net/ngbe/base/ngbe_phy_yt.c
create mode 100644 drivers/net/ngbe/base/ngbe_phy_yt.h
create mode 100644 drivers/net/ngbe/base/ngbe_regs.h
create mode 100644 drivers/net/ngbe/base/ngbe_status.h
create mode 100644 drivers/net/ngbe/base/ngbe_type.h
create mode 100644 drivers/net/ngbe/meson.build
create mode 100644 drivers/net/ngbe/ngbe_ethdev.c
create mode 100644 drivers/net/ngbe/ngbe_ethdev.h
create mode 100644 drivers/net/ngbe/ngbe_logs.h
create mode 100644 drivers/net/ngbe/ngbe_rxtx.c
create mode 100644 drivers/net/ngbe/ngbe_rxtx.h
create mode 100644 drivers/net/ngbe/version.map
--
2.21.0.windows.1
* [dpdk-dev] [PATCH v6 01/19] net/ngbe: add build and doc infrastructure
2021-06-17 10:59 [dpdk-dev] [PATCH v6 00/19] net: ngbe PMD Jiawen Wu
@ 2021-06-17 10:59 ` Jiawen Wu
2021-07-02 13:07 ` Andrew Rybchenko
2021-06-17 10:59 ` [dpdk-dev] [PATCH v6 02/19] net/ngbe: support probe and remove Jiawen Wu
` (17 subsequent siblings)
18 siblings, 1 reply; 42+ messages in thread
From: Jiawen Wu @ 2021-06-17 10:59 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu
Add the bare minimum PMD library and doc build infrastructure,
and claim maintainership of the ngbe PMD.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
MAINTAINERS | 6 +++++
doc/guides/nics/features/ngbe.ini | 10 +++++++++
doc/guides/nics/index.rst | 1 +
doc/guides/nics/ngbe.rst | 31 ++++++++++++++++++++++++++
doc/guides/rel_notes/release_21_08.rst | 6 +++++
drivers/net/meson.build | 1 +
drivers/net/ngbe/meson.build | 12 ++++++++++
drivers/net/ngbe/ngbe_ethdev.c | 29 ++++++++++++++++++++++++
drivers/net/ngbe/version.map | 3 +++
9 files changed, 99 insertions(+)
create mode 100644 doc/guides/nics/features/ngbe.ini
create mode 100644 doc/guides/nics/ngbe.rst
create mode 100644 drivers/net/ngbe/meson.build
create mode 100644 drivers/net/ngbe/ngbe_ethdev.c
create mode 100644 drivers/net/ngbe/version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index 5877a16971..b3574c9b49 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -896,6 +896,12 @@ F: drivers/net/sfc/
F: doc/guides/nics/sfc_efx.rst
F: doc/guides/nics/features/sfc.ini
+Wangxun ngbe
+M: Jiawen Wu <jiawenwu@trustnetic.com>
+F: drivers/net/ngbe/
+F: doc/guides/nics/ngbe.rst
+F: doc/guides/nics/features/ngbe.ini
+
Wangxun txgbe
M: Jiawen Wu <jiawenwu@trustnetic.com>
M: Jian Wang <jianwang@trustnetic.com>
diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini
new file mode 100644
index 0000000000..a7a524defc
--- /dev/null
+++ b/doc/guides/nics/features/ngbe.ini
@@ -0,0 +1,10 @@
+;
+; Supported features of the 'ngbe' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Linux = Y
+ARMv8 = Y
+x86-32 = Y
+x86-64 = Y
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 799697caf0..31a3e6bcdc 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -47,6 +47,7 @@ Network Interface Controller Drivers
netvsc
nfb
nfp
+ ngbe
null
octeontx
octeontx2
diff --git a/doc/guides/nics/ngbe.rst b/doc/guides/nics/ngbe.rst
new file mode 100644
index 0000000000..37502627a3
--- /dev/null
+++ b/doc/guides/nics/ngbe.rst
@@ -0,0 +1,31 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+
+NGBE Poll Mode Driver
+======================
+
+The NGBE PMD (librte_pmd_ngbe) provides poll mode driver support
+for Wangxun 1 Gigabit Ethernet NICs.
+
+
+Prerequisites
+-------------
+
+- Learning about Wangxun 1 Gigabit Ethernet NICs using
+ `<https://www.net-swift.com/a/386.html>`_.
+
+- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to setup the basic DPDK environment.
+
+
+Driver compilation and testing
+------------------------------
+
+Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
+for details.
+
+
+Limitations or Known issues
+---------------------------
+
+Build with ICC is not supported yet.
+Power8, ARMv7 and BSD are not supported yet.
diff --git a/doc/guides/rel_notes/release_21_08.rst b/doc/guides/rel_notes/release_21_08.rst
index a6ecfdf3ce..2deac4f398 100644
--- a/doc/guides/rel_notes/release_21_08.rst
+++ b/doc/guides/rel_notes/release_21_08.rst
@@ -55,6 +55,12 @@ New Features
Also, make sure to start the actual text at the margin.
=======================================================
+* **Added Wangxun ngbe PMD.**
+
+ Added a new PMD driver for Wangxun 1 Gigabit Ethernet NICs.
+
+ See the :doc:`../nics/ngbe` for more details.
+
Removed Items
-------------
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index c8b5ce2980..d6c1751540 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -40,6 +40,7 @@ drivers = [
'netvsc',
'nfb',
'nfp',
+ 'ngbe',
'null',
'octeontx',
'octeontx2',
diff --git a/drivers/net/ngbe/meson.build b/drivers/net/ngbe/meson.build
new file mode 100644
index 0000000000..de2d7be716
--- /dev/null
+++ b/drivers/net/ngbe/meson.build
@@ -0,0 +1,12 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+
+if is_windows
+ build = false
+ reason = 'not supported on Windows'
+ subdir_done()
+endif
+
+sources = files(
+ 'ngbe_ethdev.c',
+)
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
new file mode 100644
index 0000000000..f8e19066de
--- /dev/null
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+ * Copyright(c) 2010-2017 Intel Corporation
+ */
+
+#include <errno.h>
+#include <rte_common.h>
+#include <ethdev_pci.h>
+
+static int
+eth_ngbe_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+ struct rte_pci_device *pci_dev)
+{
+ RTE_SET_USED(pci_dev);
+ return -EINVAL;
+}
+
+static int eth_ngbe_pci_remove(struct rte_pci_device *pci_dev)
+{
+ RTE_SET_USED(pci_dev);
+ return -EINVAL;
+}
+
+static struct rte_pci_driver rte_ngbe_pmd = {
+ .probe = eth_ngbe_pci_probe,
+ .remove = eth_ngbe_pci_remove,
+};
+
+RTE_PMD_REGISTER_PCI(net_ngbe, rte_ngbe_pmd);
diff --git a/drivers/net/ngbe/version.map b/drivers/net/ngbe/version.map
new file mode 100644
index 0000000000..4a76d1d52d
--- /dev/null
+++ b/drivers/net/ngbe/version.map
@@ -0,0 +1,3 @@
+DPDK_21 {
+ local: *;
+};
--
2.21.0.windows.1
* [dpdk-dev] [PATCH v6 02/19] net/ngbe: support probe and remove
2021-06-17 10:59 [dpdk-dev] [PATCH v6 00/19] net: ngbe PMD Jiawen Wu
2021-06-17 10:59 ` [dpdk-dev] [PATCH v6 01/19] net/ngbe: add build and doc infrastructure Jiawen Wu
@ 2021-06-17 10:59 ` Jiawen Wu
2021-07-02 13:22 ` Andrew Rybchenko
2021-06-17 10:59 ` [dpdk-dev] [PATCH v6 03/19] net/ngbe: add log type and error type Jiawen Wu
` (16 subsequent siblings)
18 siblings, 1 reply; 42+ messages in thread
From: Jiawen Wu @ 2021-06-17 10:59 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu
Add device IDs for Wangxun 1Gb NICs and map them to the registered ngbe
PMD. Add basic ethdev PCI probe and remove functions.
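As a side note, once device initialization is completed by the later patches
in this series, ports probed through this path become visible to applications
through the standard ethdev iteration APIs. A minimal sketch (not part of this
patch, standard ethdev calls only):

    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_ethdev.h>

    int
    main(int argc, char **argv)
    {
            uint16_t port_id;

            /* rte_eal_init() scans the PCI bus and calls eth_ngbe_pci_probe()
             * for every device matching pci_id_ngbe_map.
             */
            if (rte_eal_init(argc, argv) < 0)
                    return -1;

            RTE_ETH_FOREACH_DEV(port_id) {
                    struct rte_eth_dev_info info;

                    if (rte_eth_dev_info_get(port_id, &info) == 0)
                            printf("port %u: %s\n", port_id, info.driver_name);
            }

            return 0;
    }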
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
doc/guides/nics/features/ngbe.ini | 1 +
drivers/net/ngbe/base/meson.build | 18 +++++++
drivers/net/ngbe/base/ngbe_devids.h | 83 +++++++++++++++++++++++++++++
drivers/net/ngbe/meson.build | 6 +++
drivers/net/ngbe/ngbe_ethdev.c | 66 +++++++++++++++++++++--
drivers/net/ngbe/ngbe_ethdev.h | 16 ++++++
6 files changed, 186 insertions(+), 4 deletions(-)
create mode 100644 drivers/net/ngbe/base/meson.build
create mode 100644 drivers/net/ngbe/base/ngbe_devids.h
create mode 100644 drivers/net/ngbe/ngbe_ethdev.h
diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini
index a7a524defc..977286ac04 100644
--- a/doc/guides/nics/features/ngbe.ini
+++ b/doc/guides/nics/features/ngbe.ini
@@ -4,6 +4,7 @@
; Refer to default.ini for the full list of available PMD features.
;
[Features]
+Multiprocess aware = Y
Linux = Y
ARMv8 = Y
x86-32 = Y
diff --git a/drivers/net/ngbe/base/meson.build b/drivers/net/ngbe/base/meson.build
new file mode 100644
index 0000000000..c5f6467743
--- /dev/null
+++ b/drivers/net/ngbe/base/meson.build
@@ -0,0 +1,18 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+
+sources = []
+
+error_cflags = []
+
+c_args = cflags
+foreach flag: error_cflags
+ if cc.has_argument(flag)
+ c_args += flag
+ endif
+endforeach
+
+base_lib = static_library('ngbe_base', sources,
+ dependencies: [static_rte_eal, static_rte_ethdev, static_rte_bus_pci],
+ c_args: c_args)
+base_objs = base_lib.extract_all_objects()
diff --git a/drivers/net/ngbe/base/ngbe_devids.h b/drivers/net/ngbe/base/ngbe_devids.h
new file mode 100644
index 0000000000..eb9a0b231e
--- /dev/null
+++ b/drivers/net/ngbe/base/ngbe_devids.h
@@ -0,0 +1,83 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+ */
+
+#ifndef _NGBE_DEVIDS_H_
+#define _NGBE_DEVIDS_H_
+
+/*
+ * Vendor ID
+ */
+#ifndef PCI_VENDOR_ID_WANGXUN
+#define PCI_VENDOR_ID_WANGXUN 0x8088
+#endif
+
+/*
+ * Device IDs
+ */
+#define NGBE_DEV_ID_EM_VF 0x0110
+#define NGBE_SUB_DEV_ID_EM_VF 0x0110
+#define NGBE_DEV_ID_EM 0x0100
+#define NGBE_SUB_DEV_ID_EM_MVL_RGMII 0x0200
+#define NGBE_SUB_DEV_ID_EM_MVL_SFP 0x0403
+#define NGBE_SUB_DEV_ID_EM_RTL_SGMII 0x0410
+#define NGBE_SUB_DEV_ID_EM_YT8521S_SFP 0x0460
+
+#define NGBE_DEV_ID_EM_WX1860AL_W 0x0100
+#define NGBE_DEV_ID_EM_WX1860AL_W_VF 0x0110
+#define NGBE_DEV_ID_EM_WX1860A2 0x0101
+#define NGBE_DEV_ID_EM_WX1860A2_VF 0x0111
+#define NGBE_DEV_ID_EM_WX1860A2S 0x0102
+#define NGBE_DEV_ID_EM_WX1860A2S_VF 0x0112
+#define NGBE_DEV_ID_EM_WX1860A4 0x0103
+#define NGBE_DEV_ID_EM_WX1860A4_VF 0x0113
+#define NGBE_DEV_ID_EM_WX1860A4S 0x0104
+#define NGBE_DEV_ID_EM_WX1860A4S_VF 0x0114
+#define NGBE_DEV_ID_EM_WX1860AL2 0x0105
+#define NGBE_DEV_ID_EM_WX1860AL2_VF 0x0115
+#define NGBE_DEV_ID_EM_WX1860AL2S 0x0106
+#define NGBE_DEV_ID_EM_WX1860AL2S_VF 0x0116
+#define NGBE_DEV_ID_EM_WX1860AL4 0x0107
+#define NGBE_DEV_ID_EM_WX1860AL4_VF 0x0117
+#define NGBE_DEV_ID_EM_WX1860AL4S 0x0108
+#define NGBE_DEV_ID_EM_WX1860AL4S_VF 0x0118
+#define NGBE_DEV_ID_EM_WX1860NCSI 0x0109
+#define NGBE_DEV_ID_EM_WX1860NCSI_VF 0x0119
+#define NGBE_DEV_ID_EM_WX1860A1 0x010A
+#define NGBE_DEV_ID_EM_WX1860A1_VF 0x011A
+#define NGBE_DEV_ID_EM_WX1860A1L 0x010B
+#define NGBE_DEV_ID_EM_WX1860A1L_VF 0x011B
+#define NGBE_SUB_DEV_ID_EM_ZTE5201_RJ45 0x0100
+#define NGBE_SUB_DEV_ID_EM_SF100F_LP 0x0103
+#define NGBE_SUB_DEV_ID_EM_M88E1512_RJ45 0x0200
+#define NGBE_SUB_DEV_ID_EM_SF100HT 0x0102
+#define NGBE_SUB_DEV_ID_EM_SF200T 0x0201
+#define NGBE_SUB_DEV_ID_EM_SF200HT 0x0202
+#define NGBE_SUB_DEV_ID_EM_SF200T_S 0x0210
+#define NGBE_SUB_DEV_ID_EM_SF200HT_S 0x0220
+#define NGBE_SUB_DEV_ID_EM_SF200HXT 0x0230
+#define NGBE_SUB_DEV_ID_EM_SF400T 0x0401
+#define NGBE_SUB_DEV_ID_EM_SF400HT 0x0402
+#define NGBE_SUB_DEV_ID_EM_M88E1512_SFP 0x0403
+#define NGBE_SUB_DEV_ID_EM_SF400T_S 0x0410
+#define NGBE_SUB_DEV_ID_EM_SF400HT_S 0x0420
+#define NGBE_SUB_DEV_ID_EM_SF400HXT 0x0430
+#define NGBE_SUB_DEV_ID_EM_SF400_OCP 0x0440
+#define NGBE_SUB_DEV_ID_EM_SF400_LY 0x0450
+#define NGBE_SUB_DEV_ID_EM_SF400_LY_YT 0x0470
+
+/* Assign excessive id with masks */
+#define NGBE_INTERNAL_MASK 0x000F
+#define NGBE_OEM_MASK 0x00F0
+#define NGBE_WOL_SUP_MASK 0x4000
+#define NGBE_NCSI_SUP_MASK 0x8000
+
+#define NGBE_INTERNAL_SFP 0x0003
+#define NGBE_OCP_CARD 0x0040
+#define NGBE_LY_M88E1512_SFP 0x0050
+#define NGBE_YT8521S_SFP 0x0060
+#define NGBE_LY_YT8521S_SFP 0x0070
+#define NGBE_WOL_SUP 0x4000
+#define NGBE_NCSI_SUP 0x8000
+
+#endif /* _NGBE_DEVIDS_H_ */
diff --git a/drivers/net/ngbe/meson.build b/drivers/net/ngbe/meson.build
index de2d7be716..81173fa7f0 100644
--- a/drivers/net/ngbe/meson.build
+++ b/drivers/net/ngbe/meson.build
@@ -7,6 +7,12 @@ if is_windows
subdir_done()
endif
+subdir('base')
+objs = [base_objs]
+
sources = files(
'ngbe_ethdev.c',
)
+
+includes += include_directories('base')
+
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index f8e19066de..d8df7ef896 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -7,23 +7,81 @@
#include <rte_common.h>
#include <ethdev_pci.h>
+#include <base/ngbe_devids.h>
+#include "ngbe_ethdev.h"
+
+/*
+ * The set of PCI devices this driver supports
+ */
+static const struct rte_pci_id pci_id_ngbe_map[] = {
+ { RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860A2) },
+ { RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860A2S) },
+ { RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860A4) },
+ { RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860A4S) },
+ { RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860AL2) },
+ { RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860AL2S) },
+ { RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860AL4) },
+ { RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860AL4S) },
+ { RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860NCSI) },
+ { RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860A1) },
+ { RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860A1L) },
+ { RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860AL_W) },
+ { .vendor_id = 0, /* sentinel */ },
+};
+
+static int
+eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
+{
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return 0;
+
+ rte_eth_copy_pci_info(eth_dev, pci_dev);
+
+ return -EINVAL;
+}
+
+static int
+eth_ngbe_dev_uninit(struct rte_eth_dev *eth_dev)
+{
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return 0;
+
+ RTE_SET_USED(eth_dev);
+
+ return -EINVAL;
+}
+
static int
eth_ngbe_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
struct rte_pci_device *pci_dev)
{
- RTE_SET_USED(pci_dev);
- return -EINVAL;
+ return rte_eth_dev_create(&pci_dev->device, pci_dev->device.name,
+ sizeof(struct ngbe_adapter),
+ eth_dev_pci_specific_init, pci_dev,
+ eth_ngbe_dev_init, NULL);
}
static int eth_ngbe_pci_remove(struct rte_pci_device *pci_dev)
{
- RTE_SET_USED(pci_dev);
- return -EINVAL;
+ struct rte_eth_dev *ethdev;
+
+ ethdev = rte_eth_dev_allocated(pci_dev->device.name);
+ if (ethdev == NULL)
+ return 0;
+
+ return rte_eth_dev_destroy(ethdev, eth_ngbe_dev_uninit);
}
static struct rte_pci_driver rte_ngbe_pmd = {
+ .id_table = pci_id_ngbe_map,
+ .drv_flags = RTE_PCI_DRV_NEED_MAPPING,
.probe = eth_ngbe_pci_probe,
.remove = eth_ngbe_pci_remove,
};
RTE_PMD_REGISTER_PCI(net_ngbe, rte_ngbe_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(net_ngbe, pci_id_ngbe_map);
+RTE_PMD_REGISTER_KMOD_DEP(net_ngbe, "* igb_uio | uio_pci_generic | vfio-pci");
+
diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
new file mode 100644
index 0000000000..38f55f7fb1
--- /dev/null
+++ b/drivers/net/ngbe/ngbe_ethdev.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+ * Copyright(c) 2010-2017 Intel Corporation
+ */
+
+#ifndef _NGBE_ETHDEV_H_
+#define _NGBE_ETHDEV_H_
+
+/*
+ * Structure to store private data for each driver instance (for each port).
+ */
+struct ngbe_adapter {
+ void *back;
+};
+
+#endif /* _NGBE_ETHDEV_H_ */
--
2.21.0.windows.1
* [dpdk-dev] [PATCH v6 03/19] net/ngbe: add log type and error type
2021-06-17 10:59 [dpdk-dev] [PATCH v6 00/19] net: ngbe PMD Jiawen Wu
2021-06-17 10:59 ` [dpdk-dev] [PATCH v6 01/19] net/ngbe: add build and doc infrastructure Jiawen Wu
2021-06-17 10:59 ` [dpdk-dev] [PATCH v6 02/19] net/ngbe: support probe and remove Jiawen Wu
@ 2021-06-17 10:59 ` Jiawen Wu
2021-07-02 13:22 ` Andrew Rybchenko
2021-06-17 10:59 ` [dpdk-dev] [PATCH v6 04/19] net/ngbe: define registers Jiawen Wu
` (15 subsequent siblings)
18 siblings, 1 reply; 42+ messages in thread
From: Jiawen Wu @ 2021-06-17 10:59 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu
Add log types and error codes used by trace functions.
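For illustration, a hypothetical driver function (ngbe_example_init() is not
part of this patch) showing how the trace macros are intended to be used:

    #include <errno.h>
    #include <rte_log.h>
    #include <rte_ethdev.h>

    #include "ngbe_logs.h"

    static int
    ngbe_example_init(struct rte_eth_dev *dev)
    {
            /* traces function entry at DEBUG level on pmd.net.ngbe.driver */
            PMD_INIT_FUNC_TRACE();

            if (dev == NULL) {
                    PMD_DRV_LOG(ERR, "NULL ethdev pointer");
                    return -EINVAL;
            }

            /* goes to pmd.net.ngbe.init, visible with
             * --log-level=pmd.net.ngbe.init:debug on the EAL command line
             */
            PMD_INIT_LOG(DEBUG, "port %u initialized", dev->data->port_id);
            return 0;
    }

PMD_RX_LOG() and PMD_TX_LOG() compile to no-ops unless RTE_ETHDEV_DEBUG_RX or
RTE_ETHDEV_DEBUG_TX is defined, matching the ethdev debug flags mentioned in
the v2 changelog.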
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
doc/guides/nics/ngbe.rst | 21 +++++++++
drivers/net/ngbe/base/ngbe_status.h | 73 +++++++++++++++++++++++++++++
drivers/net/ngbe/ngbe_ethdev.c | 14 ++++++
drivers/net/ngbe/ngbe_logs.h | 46 ++++++++++++++++++
4 files changed, 154 insertions(+)
create mode 100644 drivers/net/ngbe/base/ngbe_status.h
create mode 100644 drivers/net/ngbe/ngbe_logs.h
diff --git a/doc/guides/nics/ngbe.rst b/doc/guides/nics/ngbe.rst
index 37502627a3..54d0665db9 100644
--- a/doc/guides/nics/ngbe.rst
+++ b/doc/guides/nics/ngbe.rst
@@ -17,6 +17,27 @@ Prerequisites
- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to setup the basic DPDK environment.
+Pre-Installation Configuration
+------------------------------
+
+Dynamic Logging Parameters
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+One may leverage EAL option "--log-level" to change default levels
+for the log types supported by the driver. The option is used with
+an argument typically consisting of two parts separated by a colon.
+
+NGBE PMD provides the following log types available for control:
+
+- ``pmd.net.ngbe.driver`` (default level is **notice**)
+
+ Affects driver-wide messages unrelated to any particular devices.
+
+- ``pmd.net.ngbe.init`` (default level is **notice**)
+
+ Extra logging of the messages during PMD initialization.
+
+
Driver compilation and testing
------------------------------
diff --git a/drivers/net/ngbe/base/ngbe_status.h b/drivers/net/ngbe/base/ngbe_status.h
new file mode 100644
index 0000000000..917b7da8aa
--- /dev/null
+++ b/drivers/net/ngbe/base/ngbe_status.h
@@ -0,0 +1,73 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+ * Copyright(c) 2010-2017 Intel Corporation
+ */
+
+#ifndef _NGBE_STATUS_H_
+#define _NGBE_STATUS_H_
+
+/* Error Codes:
+ * common error
+ * module error(simple)
+ * module error(detailed)
+ *
+ * (-256, 256): reserved for non-ngbe defined error code
+ */
+#define TERR_BASE (0x100)
+
+/* WARNING: just for legacy compatibility */
+#define NGBE_NOT_IMPLEMENTED 0x7FFFFFFF
+#define NGBE_ERR_OPS_DUMMY 0x3FFFFFFF
+
+/* Error Codes */
+#define NGBE_ERR_EEPROM -(TERR_BASE + 1)
+#define NGBE_ERR_EEPROM_CHECKSUM -(TERR_BASE + 2)
+#define NGBE_ERR_PHY -(TERR_BASE + 3)
+#define NGBE_ERR_CONFIG -(TERR_BASE + 4)
+#define NGBE_ERR_PARAM -(TERR_BASE + 5)
+#define NGBE_ERR_MAC_TYPE -(TERR_BASE + 6)
+#define NGBE_ERR_UNKNOWN_PHY -(TERR_BASE + 7)
+#define NGBE_ERR_LINK_SETUP -(TERR_BASE + 8)
+#define NGBE_ERR_ADAPTER_STOPPED -(TERR_BASE + 9)
+#define NGBE_ERR_INVALID_MAC_ADDR -(TERR_BASE + 10)
+#define NGBE_ERR_DEVICE_NOT_SUPPORTED -(TERR_BASE + 11)
+#define NGBE_ERR_MASTER_REQUESTS_PENDING -(TERR_BASE + 12)
+#define NGBE_ERR_INVALID_LINK_SETTINGS -(TERR_BASE + 13)
+#define NGBE_ERR_AUTONEG_NOT_COMPLETE -(TERR_BASE + 14)
+#define NGBE_ERR_RESET_FAILED -(TERR_BASE + 15)
+#define NGBE_ERR_SWFW_SYNC -(TERR_BASE + 16)
+#define NGBE_ERR_PHY_ADDR_INVALID -(TERR_BASE + 17)
+#define NGBE_ERR_I2C -(TERR_BASE + 18)
+#define NGBE_ERR_SFP_NOT_SUPPORTED -(TERR_BASE + 19)
+#define NGBE_ERR_SFP_NOT_PRESENT -(TERR_BASE + 20)
+#define NGBE_ERR_SFP_NO_INIT_SEQ_PRESENT -(TERR_BASE + 21)
+#define NGBE_ERR_NO_SAN_ADDR_PTR -(TERR_BASE + 22)
+#define NGBE_ERR_FDIR_REINIT_FAILED -(TERR_BASE + 23)
+#define NGBE_ERR_EEPROM_VERSION -(TERR_BASE + 24)
+#define NGBE_ERR_NO_SPACE -(TERR_BASE + 25)
+#define NGBE_ERR_OVERTEMP -(TERR_BASE + 26)
+#define NGBE_ERR_FC_NOT_NEGOTIATED -(TERR_BASE + 27)
+#define NGBE_ERR_FC_NOT_SUPPORTED -(TERR_BASE + 28)
+#define NGBE_ERR_SFP_SETUP_NOT_COMPLETE -(TERR_BASE + 30)
+#define NGBE_ERR_PBA_SECTION -(TERR_BASE + 31)
+#define NGBE_ERR_INVALID_ARGUMENT -(TERR_BASE + 32)
+#define NGBE_ERR_HOST_INTERFACE_COMMAND -(TERR_BASE + 33)
+#define NGBE_ERR_OUT_OF_MEM -(TERR_BASE + 34)
+#define NGBE_ERR_FEATURE_NOT_SUPPORTED -(TERR_BASE + 36)
+#define NGBE_ERR_EEPROM_PROTECTED_REGION -(TERR_BASE + 37)
+#define NGBE_ERR_FDIR_CMD_INCOMPLETE -(TERR_BASE + 38)
+#define NGBE_ERR_FW_RESP_INVALID -(TERR_BASE + 39)
+#define NGBE_ERR_TOKEN_RETRY -(TERR_BASE + 40)
+#define NGBE_ERR_FLASH_LOADING_FAILED -(TERR_BASE + 41)
+
+#define NGBE_ERR_NOSUPP -(TERR_BASE + 42)
+#define NGBE_ERR_UNDERTEMP -(TERR_BASE + 43)
+#define NGBE_ERR_XPCS_POWER_UP_FAILED -(TERR_BASE + 44)
+#define NGBE_ERR_PHY_INIT_NOT_DONE -(TERR_BASE + 45)
+#define NGBE_ERR_TIMEOUT -(TERR_BASE + 46)
+#define NGBE_ERR_REGISTER -(TERR_BASE + 47)
+#define NGBE_ERR_MNG_ACCESS_FAILED -(TERR_BASE + 49)
+#define NGBE_ERR_PHY_TYPE -(TERR_BASE + 50)
+#define NGBE_ERR_PHY_TIMEOUT -(TERR_BASE + 51)
+
+#endif /* _NGBE_STATUS_H_ */
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index d8df7ef896..e05766752a 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -7,6 +7,7 @@
#include <rte_common.h>
#include <ethdev_pci.h>
+#include "ngbe_logs.h"
#include <base/ngbe_devids.h>
#include "ngbe_ethdev.h"
@@ -34,6 +35,8 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ PMD_INIT_FUNC_TRACE();
+
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
return 0;
@@ -45,6 +48,8 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
static int
eth_ngbe_dev_uninit(struct rte_eth_dev *eth_dev)
{
+ PMD_INIT_FUNC_TRACE();
+
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
return 0;
@@ -85,3 +90,12 @@ RTE_PMD_REGISTER_PCI(net_ngbe, rte_ngbe_pmd);
RTE_PMD_REGISTER_PCI_TABLE(net_ngbe, pci_id_ngbe_map);
RTE_PMD_REGISTER_KMOD_DEP(net_ngbe, "* igb_uio | uio_pci_generic | vfio-pci");
+RTE_LOG_REGISTER(ngbe_logtype_init, pmd.net.ngbe.init, NOTICE);
+RTE_LOG_REGISTER(ngbe_logtype_driver, pmd.net.ngbe.driver, NOTICE);
+
+#ifdef RTE_ETHDEV_DEBUG_RX
+ RTE_LOG_REGISTER(ngbe_logtype_rx, pmd.net.ngbe.rx, DEBUG);
+#endif
+#ifdef RTE_ETHDEV_DEBUG_TX
+ RTE_LOG_REGISTER(ngbe_logtype_tx, pmd.net.ngbe.tx, DEBUG);
+#endif
diff --git a/drivers/net/ngbe/ngbe_logs.h b/drivers/net/ngbe/ngbe_logs.h
new file mode 100644
index 0000000000..c5d1ab0930
--- /dev/null
+++ b/drivers/net/ngbe/ngbe_logs.h
@@ -0,0 +1,46 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+ * Copyright(c) 2010-2017 Intel Corporation
+ */
+
+#ifndef _NGBE_LOGS_H_
+#define _NGBE_LOGS_H_
+
+/*
+ * PMD_USER_LOG: for user
+ */
+extern int ngbe_logtype_init;
+#define PMD_INIT_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, ngbe_logtype_init, \
+ "%s(): " fmt "\n", __func__, ##args)
+
+extern int ngbe_logtype_driver;
+#define PMD_DRV_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, ngbe_logtype_driver, \
+ "%s(): " fmt "\n", __func__, ##args)
+
+#ifdef RTE_ETHDEV_DEBUG_RX
+extern int ngbe_logtype_rx;
+#define PMD_RX_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, ngbe_logtype_rx, \
+ "%s(): " fmt "\n", __func__, ##args)
+#else
+#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_ETHDEV_DEBUG_TX
+extern int ngbe_logtype_tx;
+#define PMD_TX_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, ngbe_logtype_tx, \
+ "%s(): " fmt "\n", __func__, ##args)
+#else
+#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#define TLOG_DEBUG(fmt, args...) PMD_DRV_LOG(DEBUG, fmt, ##args)
+
+#define DEBUGOUT(fmt, args...) TLOG_DEBUG(fmt, ##args)
+#define PMD_INIT_FUNC_TRACE() TLOG_DEBUG(" >>")
+#define DEBUGFUNC(fmt) TLOG_DEBUG(fmt)
+
+#endif /* _NGBE_LOGS_H_ */
--
2.21.0.windows.1
* [dpdk-dev] [PATCH v6 04/19] net/ngbe: define registers
2021-06-17 10:59 [dpdk-dev] [PATCH v6 00/19] net: ngbe PMD Jiawen Wu
` (2 preceding siblings ...)
2021-06-17 10:59 ` [dpdk-dev] [PATCH v6 03/19] net/ngbe: add log type and error type Jiawen Wu
@ 2021-06-17 10:59 ` Jiawen Wu
2021-06-17 10:59 ` [dpdk-dev] [PATCH v6 05/19] net/ngbe: set MAC type and LAN ID with device initialization Jiawen Wu
` (14 subsequent siblings)
18 siblings, 0 replies; 42+ messages in thread
From: Jiawen Wu @ 2021-06-17 10:59 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu
Define all registers that will be used.
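For illustration, a short sketch of how the MS()/LS()/RS() bit-field helpers
defined here are meant to be used. ngbe_regs_example(), struct ngbe_hw, rd32()
and wr32() are placeholders for register-access plumbing added later in the
series, not definitions from this patch:

    /* program the max frame size field and read back the last reset type */
    static u32
    ngbe_regs_example(struct ngbe_hw *hw, u16 max_frame)
    {
            u32 frmsz = rd32(hw, NGBE_FRMSZ);

            frmsz &= ~NGBE_FRMSZ_MAX_MASK;        /* MS(): mask covering bits 15:0  */
            frmsz |= NGBE_FRMSZ_MAX(max_frame);   /* LS(): shift the value in place */
            wr32(hw, NGBE_FRMSZ, frmsz);

            /* RS(): extract a field, here the reset type in bits 18:16 */
            return NGBE_RSTSTAT_TYPE(rd32(hw, NGBE_RSTSTAT));
    }

The "Macro Argument Naming" comment in the header documents the shorthand
(r = register, v = value, s = shift, m = mask) these helpers follow.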
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
drivers/net/ngbe/base/ngbe_regs.h | 1490 +++++++++++++++++++++++++++++
1 file changed, 1490 insertions(+)
create mode 100644 drivers/net/ngbe/base/ngbe_regs.h
diff --git a/drivers/net/ngbe/base/ngbe_regs.h b/drivers/net/ngbe/base/ngbe_regs.h
new file mode 100644
index 0000000000..737bd796a1
--- /dev/null
+++ b/drivers/net/ngbe/base/ngbe_regs.h
@@ -0,0 +1,1490 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+ * Copyright(c) 2010-2017 Intel Corporation
+ */
+
+#ifndef _NGBE_REGS_H_
+#define _NGBE_REGS_H_
+
+#define NGBE_PVMBX_QSIZE (16) /* 16*4B */
+#define NGBE_PVMBX_BSIZE (NGBE_PVMBX_QSIZE * 4)
+
+#define NGBE_REMOVED(a) (0)
+
+#define NGBE_REG_DUMMY 0xFFFFFF
+
+#define MS8(shift, mask) (((u8)(mask)) << (shift))
+#define LS8(val, shift, mask) (((u8)(val) & (u8)(mask)) << (shift))
+#define RS8(reg, shift, mask) (((u8)(reg) >> (shift)) & (u8)(mask))
+
+#define MS16(shift, mask) (((u16)(mask)) << (shift))
+#define LS16(val, shift, mask) (((u16)(val) & (u16)(mask)) << (shift))
+#define RS16(reg, shift, mask) (((u16)(reg) >> (shift)) & (u16)(mask))
+
+#define MS32(shift, mask) (((u32)(mask)) << (shift))
+#define LS32(val, shift, mask) (((u32)(val) & (u32)(mask)) << (shift))
+#define RS32(reg, shift, mask) (((u32)(reg) >> (shift)) & (u32)(mask))
+
+#define MS64(shift, mask) (((u64)(mask)) << (shift))
+#define LS64(val, shift, mask) (((u64)(val) & (u64)(mask)) << (shift))
+#define RS64(reg, shift, mask) (((u64)(reg) >> (shift)) & (u64)(mask))
+
+#define MS(shift, mask) MS32(shift, mask)
+#define LS(val, shift, mask) LS32(val, shift, mask)
+#define RS(reg, shift, mask) RS32(reg, shift, mask)
+
+#define ROUND_UP(x, y) (((x) + (y) - 1) / (y) * (y))
+#define ROUND_DOWN(x, y) ((x) / (y) * (y))
+#define ROUND_OVER(x, maxbits, unitbits) \
+ ((x) >= 1 << (maxbits) ? 0 : (x) >> (unitbits))
+
+/* autoc bits definition */
+#define NGBE_AUTOC NGBE_REG_DUMMY
+#define NGBE_AUTOC_FLU MS64(0, 0x1)
+#define NGBE_AUTOC_10G_PMA_PMD_MASK MS64(7, 0x3) /* parallel */
+#define NGBE_AUTOC_10G_XAUI LS64(0, 7, 0x3)
+#define NGBE_AUTOC_10G_KX4 LS64(1, 7, 0x3)
+#define NGBE_AUTOC_10G_CX4 LS64(2, 7, 0x3)
+#define NGBE_AUTOC_10G_KR LS64(3, 7, 0x3) /* fixme */
+#define NGBE_AUTOC_1G_PMA_PMD_MASK MS64(9, 0x7)
+#define NGBE_AUTOC_1G_BX LS64(0, 9, 0x7)
+#define NGBE_AUTOC_1G_KX LS64(1, 9, 0x7)
+#define NGBE_AUTOC_1G_SFI LS64(0, 9, 0x7)
+#define NGBE_AUTOC_1G_KX_BX LS64(1, 9, 0x7)
+#define NGBE_AUTOC_AN_RESTART MS64(12, 0x1)
+#define NGBE_AUTOC_LMS_MASK MS64(13, 0x7)
+#define NGBE_AUTOC_LMS_10G LS64(3, 13, 0x7)
+#define NGBE_AUTOC_LMS_KX4_KX_KR LS64(4, 13, 0x7)
+#define NGBE_AUTOC_LMS_SGMII_1G_100M LS64(5, 13, 0x7)
+#define NGBE_AUTOC_LMS_KX4_KX_KR_1G_AN LS64(6, 13, 0x7)
+#define NGBE_AUTOC_LMS_KX4_KX_KR_SGMII LS64(7, 13, 0x7)
+#define NGBE_AUTOC_LMS_1G_LINK_NO_AN LS64(0, 13, 0x7)
+#define NGBE_AUTOC_LMS_10G_LINK_NO_AN LS64(1, 13, 0x7)
+#define NGBE_AUTOC_LMS_1G_AN LS64(2, 13, 0x7)
+#define NGBE_AUTOC_LMS_KX4_AN LS64(4, 13, 0x7)
+#define NGBE_AUTOC_LMS_KX4_AN_1G_AN LS64(6, 13, 0x7)
+#define NGBE_AUTOC_LMS_ATTACH_TYPE LS64(7, 13, 0x7)
+#define NGBE_AUTOC_LMS_AN MS64(15, 0x7)
+
+#define NGBE_AUTOC_KR_SUPP MS64(16, 0x1)
+#define NGBE_AUTOC_FECR MS64(17, 0x1)
+#define NGBE_AUTOC_FECA MS64(18, 0x1)
+#define NGBE_AUTOC_AN_RX_ALIGN MS64(18, 0x1F) /* fixme */
+#define NGBE_AUTOC_AN_RX_DRIFT MS64(23, 0x3)
+#define NGBE_AUTOC_AN_RX_LOOSE MS64(24, 0x3)
+#define NGBE_AUTOC_PD_TMR MS64(25, 0x3)
+#define NGBE_AUTOC_RF MS64(27, 0x1)
+#define NGBE_AUTOC_ASM_PAUSE MS64(29, 0x1)
+#define NGBE_AUTOC_SYM_PAUSE MS64(28, 0x1)
+#define NGBE_AUTOC_PAUSE MS64(28, 0x3)
+#define NGBE_AUTOC_KX_SUPP MS64(30, 0x1)
+#define NGBE_AUTOC_KX4_SUPP MS64(31, 0x1)
+
+#define NGBE_AUTOC_10GS_PMA_PMD_MASK MS64(48, 0x3) /* serial */
+#define NGBE_AUTOC_10GS_KR LS64(0, 48, 0x3)
+#define NGBE_AUTOC_10GS_XFI LS64(1, 48, 0x3)
+#define NGBE_AUTOC_10GS_SFI LS64(2, 48, 0x3)
+#define NGBE_AUTOC_LINK_DIA_MASK MS64(60, 0x7)
+#define NGBE_AUTOC_LINK_DIA_D3_MASK LS64(5, 60, 0x7)
+
+#define NGBE_AUTOC_SPEED_MASK MS64(32, 0xFFFF)
+#define NGBD_AUTOC_SPEED(r) RS64(r, 32, 0xFFFF)
+#define NGBE_AUTOC_SPEED(v) LS64(v, 32, 0xFFFF)
+#define NGBE_LINK_SPEED_UNKNOWN 0
+#define NGBE_LINK_SPEED_10M_FULL 0x0002
+#define NGBE_LINK_SPEED_100M_FULL 0x0008
+#define NGBE_LINK_SPEED_1GB_FULL 0x0020
+#define NGBE_LINK_SPEED_2_5GB_FULL 0x0400
+#define NGBE_LINK_SPEED_5GB_FULL 0x0800
+#define NGBE_LINK_SPEED_10GB_FULL 0x0080
+#define NGBE_LINK_SPEED_40GB_FULL 0x0100
+#define NGBE_AUTOC_AUTONEG MS64(63, 0x1)
+
+
+
+/* Hardware Datapath:
+ * RX: / Queue <- Filter \
+ * Host | TC <=> SEC <=> MAC <=> PHY
+ * TX: \ Queue -> Filter /
+ *
+ * Packet Filter:
+ * RX: RSS < FDIR < Filter < Encrypt
+ *
+ * Macro Argument Naming:
+ * rp = ring pair [0,127]
+ * tc = traffic class [0,7]
+ * up = user priority [0,7]
+ * pi = pool index [0,63]
+ * r = register
+ * v = value
+ * s = shift
+ * m = mask
+ * i,j,k = array index
+ * H,L = high/low bits
+ * HI,LO = high/low state
+ */
+
+#define NGBE_ETHPHYIF NGBE_REG_DUMMY
+#define NGBE_ETHPHYIF_MDIO_ACT MS(1, 0x1)
+#define NGBE_ETHPHYIF_MDIO_MODE MS(2, 0x1)
+#define NGBE_ETHPHYIF_MDIO_BASE(r) RS(r, 3, 0x1F)
+#define NGBE_ETHPHYIF_MDIO_SHARED MS(13, 0x1)
+#define NGBE_ETHPHYIF_SPEED_10M MS(17, 0x1)
+#define NGBE_ETHPHYIF_SPEED_100M MS(18, 0x1)
+#define NGBE_ETHPHYIF_SPEED_1G MS(19, 0x1)
+#define NGBE_ETHPHYIF_SPEED_2_5G MS(20, 0x1)
+#define NGBE_ETHPHYIF_SPEED_10G MS(21, 0x1)
+#define NGBE_ETHPHYIF_SGMII_ENABLE MS(25, 0x1)
+#define NGBE_ETHPHYIF_INT_PHY_MODE MS(24, 0x1)
+#define NGBE_ETHPHYIF_IO_XPCS MS(30, 0x1)
+#define NGBE_ETHPHYIF_IO_EPHY MS(31, 0x1)
+
+/******************************************************************************
+ * Chip Registers
+ ******************************************************************************/
+/**
+ * Chip Status
+ **/
+#define NGBE_PWR 0x010000
+#define NGBE_PWR_LAN(r) RS(r, 28, 0xC)
+#define NGBE_PWR_LAN_0 (1)
+#define NGBE_PWR_LAN_1 (2)
+#define NGBE_PWR_LAN_2 (3)
+#define NGBE_PWR_LAN_3 (4)
+#define NGBE_CTL 0x010004
+#define NGBE_LOCKPF 0x010008
+#define NGBE_RST 0x01000C
+#define NGBE_RST_SW MS(0, 0x1)
+#define NGBE_RST_LAN(i) MS(((i) + 1), 0x1)
+#define NGBE_RST_FW MS(5, 0x1)
+#define NGBE_RST_ETH(i) MS(((i) + 29), 0x1)
+#define NGBE_RST_GLB MS(31, 0x1)
+#define NGBE_RST_DEFAULT (NGBE_RST_SW | \
+ NGBE_RST_LAN(0) | \
+ NGBE_RST_LAN(1) | \
+ NGBE_RST_LAN(2) | \
+ NGBE_RST_LAN(3))
+#define NGBE_PROB 0x010010
+#define NGBE_IODRV 0x010024
+#define NGBE_STAT 0x010028
+#define NGBE_STAT_MNGINIT MS(0, 0x1)
+#define NGBE_STAT_MNGVETO MS(8, 0x1)
+#define NGBE_STAT_ECCLAN0 MS(16, 0x1)
+#define NGBE_STAT_ECCLAN1 MS(17, 0x1)
+#define NGBE_STAT_ECCLAN2 MS(18, 0x1)
+#define NGBE_STAT_ECCLAN3 MS(19, 0x1)
+#define NGBE_STAT_ECCMNG MS(20, 0x1)
+#define NGBE_STAT_ECCPCORE MS(21, 0X1)
+#define NGBE_STAT_ECCPCIW MS(22, 0x1)
+#define NGBE_STAT_ECCPCIEPHY MS(23, 0x1)
+#define NGBE_STAT_ECCFMGR MS(24, 0x1)
+#define NGBE_STAT_GPHY_IN_RST(i) MS(((i) + 9), 0x1)
+#define NGBE_RSTSTAT 0x010030
+#define NGBE_RSTSTAT_PROG MS(20, 0x1)
+#define NGBE_RSTSTAT_PREP MS(19, 0x1)
+#define NGBE_RSTSTAT_TYPE_MASK MS(16, 0x7)
+#define NGBE_RSTSTAT_TYPE(r) RS(r, 16, 0x7)
+#define NGBE_RSTSTAT_TYPE_PE LS(0, 16, 0x7)
+#define NGBE_RSTSTAT_TYPE_PWR LS(1, 16, 0x7)
+#define NGBE_RSTSTAT_TYPE_HOT LS(2, 16, 0x7)
+#define NGBE_RSTSTAT_TYPE_SW LS(3, 16, 0x7)
+#define NGBE_RSTSTAT_TYPE_FW LS(4, 16, 0x7)
+#define NGBE_RSTSTAT_TMRINIT_MASK MS(8, 0xFF)
+#define NGBE_RSTSTAT_TMRINIT(v) LS(v, 8, 0xFF)
+#define NGBE_RSTSTAT_TMRCNT_MASK MS(0, 0xFF)
+#define NGBE_RSTSTAT_TMRCNT(v) LS(v, 0, 0xFF)
+#define NGBE_PWRTMR 0x010034
+
+/**
+ * SPI(Flash)
+ **/
+#define NGBE_SPICMD 0x010104
+#define NGBE_SPICMD_ADDR(v) LS(v, 0, 0xFFFFFF)
+#define NGBE_SPICMD_CLK(v) LS(v, 25, 0x7)
+#define NGBE_SPICMD_CMD(v) LS(v, 28, 0x7)
+#define NGBE_SPIDAT 0x010108
+#define NGBE_SPIDAT_BYPASS MS(31, 0x1)
+#define NGBE_SPIDAT_STATUS(v) LS(v, 16, 0xFF)
+#define NGBE_SPIDAT_OPDONE MS(0, 0x1)
+#define NGBE_SPISTAT 0x01010C
+#define NGBE_SPISTAT_OPDONE MS(0, 0x1)
+#define NGBE_SPISTAT_BPFLASH MS(31, 0x1)
+#define NGBE_SPIUSRCMD 0x010110
+#define NGBE_SPICFG0 0x010114
+#define NGBE_SPICFG1 0x010118
+
+/* FMGR Registers */
+#define NGBE_ILDRSTAT 0x010120
+#define NGBE_ILDRSTAT_PCIRST MS(0, 0x1)
+#define NGBE_ILDRSTAT_PWRRST MS(1, 0x1)
+#define NGBE_ILDRSTAT_SWRST MS(11, 0x1)
+#define NGBE_ILDRSTAT_SWRST_LAN0 MS(13, 0x1)
+#define NGBE_ILDRSTAT_SWRST_LAN1 MS(14, 0x1)
+#define NGBE_ILDRSTAT_SWRST_LAN2 MS(15, 0x1)
+#define NGBE_ILDRSTAT_SWRST_LAN3 MS(16, 0x1)
+
+#define NGBE_SRAM 0x010124
+#define NGBE_SRAM_SZ(v) LS(v, 28, 0x7)
+#define NGBE_SRAMCTLECC 0x010130
+#define NGBE_SRAMINJECC 0x010134
+#define NGBE_SRAMECC 0x010138
+
+/* Sensors for PVT(Process Voltage Temperature) */
+#define NGBE_TSCTRL 0x010300
+#define NGBE_TSCTRL_EVALMD MS(31, 0x1)
+#define NGBE_TSEN 0x010304
+#define NGBE_TSEN_ENA MS(0, 0x1)
+#define NGBE_TSSTAT 0x010308
+#define NGBE_TSSTAT_VLD MS(16, 0x1)
+#define NGBE_TSSTAT_DATA(r) RS(r, 0, 0x3FF)
+#define NGBE_TSATHRE 0x01030C
+#define NGBE_TSDTHRE 0x010310
+#define NGBE_TSINTR 0x010314
+#define NGBE_TSINTR_AEN MS(0, 0x1)
+#define NGBE_TSINTR_DEN MS(1, 0x1)
+#define NGBE_TSALM 0x010318
+#define NGBE_TSALM_LO MS(0, 0x1)
+#define NGBE_TSALM_HI MS(1, 0x1)
+
+#define NGBE_EFUSE_WDATA0 0x010320
+#define NGBE_EFUSE_WDATA1 0x010324
+#define NGBE_EFUSE_RDATA0 0x010328
+#define NGBE_EFUSE_RDATA1 0x01032C
+#define NGBE_EFUSE_STATUS 0x010330
+
+/******************************************************************************
+ * Port Registers
+ ******************************************************************************/
+/* Internal PHY reg_offset [0,31] */
+#define NGBE_PHY_CONFIG(reg_offset) (0x014000 + (reg_offset) * 4)
+
+/* Port Control */
+#define NGBE_PORTCTL 0x014400
+#define NGBE_PORTCTL_VLANEXT MS(0, 0x1)
+#define NGBE_PORTCTL_ETAG MS(1, 0x1)
+#define NGBE_PORTCTL_QINQ MS(2, 0x1)
+#define NGBE_PORTCTL_DRVLOAD MS(3, 0x1)
+#define NGBE_PORTCTL_NUMVT_MASK MS(12, 0x1)
+#define NGBE_PORTCTL_NUMVT_8 LS(1, 12, 0x1)
+#define NGBE_PORTCTL_RSTDONE MS(14, 0x1)
+#define NGBE_PORTCTL_TEREDODIA MS(27, 0x1)
+#define NGBE_PORTCTL_GENEVEDIA MS(28, 0x1)
+#define NGBE_PORTCTL_VXLANGPEDIA MS(30, 0x1)
+#define NGBE_PORTCTL_VXLANDIA MS(31, 0x1)
+
+/* Port Status */
+#define NGBE_PORTSTAT 0x014404
+#define NGBE_PORTSTAT_BW_MASK MS(1, 0x7)
+#define NGBE_PORTSTAT_BW_1G MS(1, 0x1)
+#define NGBE_PORTSTAT_BW_100M MS(2, 0x1)
+#define NGBE_PORTSTAT_BW_10M MS(3, 0x1)
+#define NGBE_PORTSTAT_ID(r) RS(r, 8, 0x3)
+
+#define NGBE_EXTAG 0x014408
+#define NGBE_EXTAG_ETAG_MASK MS(0, 0xFFFF)
+#define NGBE_EXTAG_ETAG(v) LS(v, 0, 0xFFFF)
+#define NGBE_EXTAG_VLAN_MASK MS(16, 0xFFFF)
+#define NGBE_EXTAG_VLAN(v) LS(v, 16, 0xFFFF)
+
+#define NGBE_TCPTIME 0x014420
+
+#define NGBE_LEDCTL 0x014424
+#define NGBE_LEDCTL_SEL(s) MS((s), 0x1)
+#define NGBE_LEDCTL_OD(s) MS(((s) + 16), 0x1)
+ /* s=1G(1),100M(2),10M(3) */
+#define NGBE_LEDCTL_100M (NGBE_LEDCTL_SEL(2) | NGBE_LEDCTL_OD(2))
+
+#define NGBE_TAGTPID(i) (0x014430 + (i) * 4) /*0-3*/
+#define NGBE_TAGTPID_LSB_MASK MS(0, 0xFFFF)
+#define NGBE_TAGTPID_LSB(v) LS(v, 0, 0xFFFF)
+#define NGBE_TAGTPID_MSB_MASK MS(16, 0xFFFF)
+#define NGBE_TAGTPID_MSB(v) LS(v, 16, 0xFFFF)
+
+#define NGBE_LAN_SPEED 0x014440
+#define NGBE_LAN_SPEED_MASK MS(0, 0x3)
+
+/* GPIO Registers */
+#define NGBE_GPIODATA 0x014800
+#define NGBE_GPIOBIT_0 MS(0, 0x1) /* O:tx fault */
+#define NGBE_GPIOBIT_1 MS(1, 0x1) /* O:tx disabled */
+#define NGBE_GPIOBIT_2 MS(2, 0x1) /* I:sfp module absent */
+#define NGBE_GPIOBIT_3 MS(3, 0x1) /* I:rx signal lost */
+#define NGBE_GPIOBIT_4 MS(4, 0x1) /* O:rate select, 1G(0) 10G(1) */
+#define NGBE_GPIOBIT_5 MS(5, 0x1) /* O:rate select, 1G(0) 10G(1) */
+#define NGBE_GPIOBIT_6 MS(6, 0x1) /* I:ext phy interrupt */
+#define NGBE_GPIOBIT_7 MS(7, 0x1) /* I:fan speed alarm */
+#define NGBE_GPIODIR 0x014804
+#define NGBE_GPIODIR_DDR(v) LS(v, 0, 0x3)
+#define NGBE_GPIOCTL 0x014808
+#define NGBE_GPIOINTEN 0x014830
+#define NGBE_GPIOINTEN_INT(v) LS(v, 0, 0x3)
+#define NGBE_GPIOINTMASK 0x014834
+#define NGBE_GPIOINTTYPE 0x014838
+#define NGBE_GPIOINTTYPE_LEVEL(v) LS(v, 0, 0x3)
+#define NGBE_GPIOINTPOL 0x01483C
+#define NGBE_GPIOINTPOL_ACT(v) LS(v, 0, 0x3)
+#define NGBE_GPIOINTSTAT 0x014840
+#define NGBE_GPIOINTDB 0x014848
+#define NGBE_GPIOEOI 0x01484C
+#define NGBE_GPIODAT 0x014850
+
+/* TPH */
+#define NGBE_TPHCFG 0x014F00
+
+/******************************************************************************
+ * Transmit DMA Registers
+ ******************************************************************************/
+/* TDMA Control */
+#define NGBE_DMATXCTRL 0x018000
+#define NGBE_DMATXCTRL_ENA MS(0, 0x1)
+#define NGBE_DMATXCTRL_TPID_MASK MS(16, 0xFFFF)
+#define NGBE_DMATXCTRL_TPID(v) LS(v, 16, 0xFFFF)
+#define NGBE_POOLTXENA(i) (0x018004 + (i) * 4) /*0*/
+#define NGBE_PRBTXDMACTL 0x018010
+#define NGBE_ECCTXDMACTL 0x018014
+#define NGBE_ECCTXDMAINJ 0x018018
+#define NGBE_ECCTXDMA 0x01801C
+#define NGBE_PBTXDMATH 0x018020
+#define NGBE_QPTXLLI 0x018040
+#define NGBE_POOLTXLBET 0x018050
+#define NGBE_POOLTXASET 0x018058
+#define NGBE_POOLTXASMAC 0x018060
+#define NGBE_POOLTXASVLAN 0x018070
+#define NGBE_POOLTXDSA 0x0180A0
+#define NGBE_POOLTAG(pl) (0x018100 + (pl) * 4) /*0-7*/
+#define NGBE_POOLTAG_VTAG(v) LS(v, 0, 0xFFFF)
+#define NGBE_POOLTAG_VTAG_MASK MS(0, 0xFFFF)
+#define TXGBD_POOLTAG_VTAG_UP(r) RS(r, 13, 0x7)
+#define NGBE_POOLTAG_TPIDSEL(v) LS(v, 24, 0x7)
+#define NGBE_POOLTAG_ETAG_MASK MS(27, 0x3)
+#define NGBE_POOLTAG_ETAG LS(2, 27, 0x3)
+#define NGBE_POOLTAG_ACT_MASK MS(30, 0x3)
+#define NGBE_POOLTAG_ACT_ALWAYS LS(1, 30, 0x3)
+#define NGBE_POOLTAG_ACT_NEVER LS(2, 30, 0x3)
+
+/* Queue Arbiter(QoS) */
+#define NGBE_QARBTXCTL 0x018200
+#define NGBE_QARBTXCTL_DA MS(6, 0x1)
+#define NGBE_QARBTXRATE 0x018404
+#define NGBE_QARBTXRATE_MIN(v) LS(v, 0, 0x3FFF)
+#define NGBE_QARBTXRATE_MAX(v) LS(v, 16, 0x3FFF)
+
+/* ETAG */
+#define NGBE_POOLETAG(pl) (0x018700 + (pl) * 4)
+
+/******************************************************************************
+ * Receive DMA Registers
+ ******************************************************************************/
+/* Receive Control */
+#define NGBE_ARBRXCTL 0x012000
+#define NGBE_ARBRXCTL_DIA MS(6, 0x1)
+#define NGBE_POOLRXENA(i) (0x012004 + (i) * 4) /*0*/
+#define NGBE_PRBRDMA 0x012010
+#define NGBE_ECCRXDMACTL 0x012014
+#define NGBE_ECCRXDMAINJ 0x012018
+#define NGBE_ECCRXDMA 0x01201C
+#define NGBE_POOLRXDNA 0x0120A0
+#define NGBE_QPRXDROP 0x012080
+#define NGBE_QPRXSTRPVLAN 0x012090
+
+/******************************************************************************
+ * Packet Buffer
+ ******************************************************************************/
+/* Flow Control */
+#define NGBE_FCXOFFTM 0x019200
+#define NGBE_FCWTRLO 0x019220
+#define NGBE_FCWTRLO_TH(v) LS(v, 10, 0x1FF) /*KB*/
+#define NGBE_FCWTRLO_XON MS(31, 0x1)
+#define NGBE_FCWTRHI 0x019260
+#define NGBE_FCWTRHI_TH(v) LS(v, 10, 0x1FF) /*KB*/
+#define NGBE_FCWTRHI_XOFF MS(31, 0x1)
+#define NGBE_RXFCRFSH 0x0192A0
+#define NGBE_RXFCFSH_TIME(v) LS(v, 0, 0xFFFF)
+#define NGBE_FCSTAT 0x01CE00
+#define NGBE_FCSTAT_DLNK MS(0, 0x1)
+#define NGBE_FCSTAT_ULNK MS(8, 0x1)
+
+#define NGBE_RXFCCFG 0x011090
+#define NGBE_RXFCCFG_FC MS(0, 0x1)
+#define NGBE_TXFCCFG 0x0192A4
+#define NGBE_TXFCCFG_FC MS(3, 0x1)
+
+/* Data Buffer */
+#define NGBE_PBRXCTL 0x019000
+#define NGBE_PBRXCTL_ST MS(0, 0x1)
+#define NGBE_PBRXCTL_ENA MS(31, 0x1)
+#define NGBE_PBRXSTAT 0x019004
+#define NGBE_PBRXSIZE 0x019020
+#define NGBE_PBRXSIZE_KB(v) LS(v, 10, 0x3F)
+
+#define NGBE_PBRXOFTMR 0x019094
+#define NGBE_PBRXDBGCMD 0x019090
+#define NGBE_PBRXDBGDAT 0x0190A0
+
+#define NGBE_PBTXSIZE 0x01CC00
+
+/* LLI */
+#define NGBE_PBRXLLI 0x19080
+#define NGBE_PBRXLLI_SZLT(v) LS(v, 0, 0xFFF)
+#define NGBE_PBRXLLI_UPLT(v) LS(v, 16, 0x7)
+#define NGBE_PBRXLLI_UPEA MS(19, 0x1)
+
+/* Port Arbiter(QoS) */
+#define NGBE_PARBTXCTL 0x01CD00
+#define NGBE_PARBTXCTL_DA MS(6, 0x1)
+
+/******************************************************************************
+ * Packet Filter (L2-7)
+ ******************************************************************************/
+/**
+ * Receive Scaling
+ **/
+#define NGBE_POOLRSS(pl) (0x019300 + (pl) * 4) /*0-7*/
+#define NGBE_POOLRSS_L4HDR MS(1, 0x1)
+#define NGBE_POOLRSS_L3HDR MS(2, 0x1)
+#define NGBE_POOLRSS_L2HDR MS(3, 0x1)
+#define NGBE_POOLRSS_L2TUN MS(4, 0x1)
+#define NGBE_POOLRSS_TUNHDR MS(5, 0x1)
+#define NGBE_RSSTBL(i) (0x019400 + (i) * 4) /*32*/
+#define NGBE_RSSKEY(i) (0x019480 + (i) * 4) /*10*/
+#define NGBE_RACTL 0x0194F4
+#define NGBE_RACTL_RSSENA MS(2, 0x1)
+#define NGBE_RACTL_RSSMASK MS(16, 0xFFFF)
+#define NGBE_RACTL_RSSIPV4TCP MS(16, 0x1)
+#define NGBE_RACTL_RSSIPV4 MS(17, 0x1)
+#define NGBE_RACTL_RSSIPV6 MS(20, 0x1)
+#define NGBE_RACTL_RSSIPV6TCP MS(21, 0x1)
+#define NGBE_RACTL_RSSIPV4UDP MS(22, 0x1)
+#define NGBE_RACTL_RSSIPV6UDP MS(23, 0x1)
+
+/**
+ * Flow Director
+ **/
+#define PERFECT_BUCKET_64KB_HASH_MASK 0x07FF /* 11 bits */
+#define PERFECT_BUCKET_128KB_HASH_MASK 0x0FFF /* 12 bits */
+#define PERFECT_BUCKET_256KB_HASH_MASK 0x1FFF /* 13 bits */
+#define SIG_BUCKET_64KB_HASH_MASK 0x1FFF /* 13 bits */
+#define SIG_BUCKET_128KB_HASH_MASK 0x3FFF /* 14 bits */
+#define SIG_BUCKET_256KB_HASH_MASK 0x7FFF /* 15 bits */
+
+/**
+ * 5-tuple Filter
+ **/
+#define NGBE_5TFPORT(i) (0x019A00 + (i) * 4) /*0-7*/
+#define NGBE_5TFPORT_SRC(v) LS(v, 0, 0xFFFF)
+#define NGBE_5TFPORT_DST(v) LS(v, 16, 0xFFFF)
+#define NGBE_5TFCTL0(i) (0x019C00 + (i) * 4) /*0-7*/
+#define NGBE_5TFCTL0_PROTO(v) LS(v, 0, 0x3)
+enum ngbe_5tuple_protocol {
+ NGBE_5TF_PROT_TCP = 0,
+ NGBE_5TF_PROT_UDP,
+ NGBE_5TF_PROT_SCTP,
+ NGBE_5TF_PROT_NONE,
+};
+#define NGBE_5TFCTL0_PRI(v) LS(v, 2, 0x7)
+#define NGBE_5TFCTL0_POOL(v) LS(v, 8, 0x7)
+#define NGBE_5TFCTL0_MASK MS(27, 0xF)
+#define NGBE_5TFCTL0_MSPORT MS(27, 0x1)
+#define NGBE_5TFCTL0_MDPORT MS(28, 0x1)
+#define NGBE_5TFCTL0_MPROTO MS(29, 0x1)
+#define NGBE_5TFCTL0_MPOOL MS(30, 0x1)
+#define NGBE_5TFCTL0_ENA MS(31, 0x1)
+#define NGBE_5TFCTL1(i) (0x019E00 + (i) * 4) /*0-7*/
+#define NGBE_5TFCTL1_CHKSZ MS(12, 0x1)
+#define NGBE_5TFCTL1_LLI MS(20, 0x1)
+#define NGBE_5TFCTL1_QP(v) LS(v, 21, 0x7)
+
+/**
+ * Storm Control
+ **/
+#define NGBE_STRMCTL 0x015004
+#define NGBE_STRMCTL_MCPNSH MS(0, 0x1)
+#define NGBE_STRMCTL_MCDROP MS(1, 0x1)
+#define NGBE_STRMCTL_BCPNSH MS(2, 0x1)
+#define NGBE_STRMCTL_BCDROP MS(3, 0x1)
+#define NGBE_STRMCTL_DFTPOOL MS(4, 0x1)
+#define NGBE_STRMCTL_ITVL(v) LS(v, 8, 0x3FF)
+#define NGBE_STRMTH 0x015008
+#define NGBE_STRMTH_MC(v) LS(v, 0, 0xFFFF)
+#define NGBE_STRMTH_BC(v) LS(v, 16, 0xFFFF)
+
+/******************************************************************************
+ * Ether Flow
+ ******************************************************************************/
+#define NGBE_PSRCTL 0x015000
+#define NGBE_PSRCTL_TPE MS(4, 0x1)
+#define NGBE_PSRCTL_ADHF12_MASK MS(5, 0x3)
+#define NGBE_PSRCTL_ADHF12(v) LS(v, 5, 0x3)
+#define NGBE_PSRCTL_UCHFENA MS(7, 0x1)
+#define NGBE_PSRCTL_MCHFENA MS(7, 0x1)
+#define NGBE_PSRCTL_MCP MS(8, 0x1)
+#define NGBE_PSRCTL_UCP MS(9, 0x1)
+#define NGBE_PSRCTL_BCA MS(10, 0x1)
+#define NGBE_PSRCTL_L4CSUM MS(12, 0x1)
+#define NGBE_PSRCTL_PCSD MS(13, 0x1)
+#define NGBE_PSRCTL_LBENA MS(18, 0x1)
+#define NGBE_FRMSZ 0x015020
+#define NGBE_FRMSZ_MAX_MASK MS(0, 0xFFFF)
+#define NGBE_FRMSZ_MAX(v) LS(v, 0, 0xFFFF)
+#define NGBE_VLANCTL 0x015088
+#define NGBE_VLANCTL_TPID_MASK MS(0, 0xFFFF)
+#define NGBE_VLANCTL_TPID(v) LS(v, 0, 0xFFFF)
+#define NGBE_VLANCTL_CFI MS(28, 0x1)
+#define NGBE_VLANCTL_CFIENA MS(29, 0x1)
+#define NGBE_VLANCTL_VFE MS(30, 0x1)
+#define NGBE_POOLCTL 0x0151B0
+#define NGBE_POOLCTL_DEFDSA MS(29, 0x1)
+#define NGBE_POOLCTL_RPLEN MS(30, 0x1)
+#define NGBE_POOLCTL_MODE_MASK MS(16, 0x3)
+#define NGBE_PSRPOOL_MODE_MAC LS(0, 16, 0x3)
+#define NGBE_PSRPOOL_MODE_ETAG LS(1, 16, 0x3)
+#define NGBE_POOLCTL_DEFPL(v) LS(v, 7, 0x7)
+#define NGBE_POOLCTL_DEFPL_MASK MS(7, 0x7)
+
+#define NGBE_ETFLT(i) (0x015128 + (i) * 4) /*0-7*/
+#define NGBE_ETFLT_ETID(v) LS(v, 0, 0xFFFF)
+#define NGBE_ETFLT_ETID_MASK MS(0, 0xFFFF)
+#define NGBE_ETFLT_POOL(v) LS(v, 20, 0x7)
+#define NGBE_ETFLT_POOLENA MS(26, 0x1)
+#define NGBE_ETFLT_TXAS MS(29, 0x1)
+#define NGBE_ETFLT_1588 MS(30, 0x1)
+#define NGBE_ETFLT_ENA MS(31, 0x1)
+#define NGBE_ETCLS(i) (0x019100 + (i) * 4) /*0-7*/
+#define NGBE_ETCLS_QPID(v) LS(v, 16, 0x7)
+#define NGBD_ETCLS_QPID(r) RS(r, 16, 0x7)
+#define NGBE_ETCLS_LLI MS(29, 0x1)
+#define NGBE_ETCLS_QENA MS(31, 0x1)
+#define NGBE_SYNCLS 0x019130
+#define NGBE_SYNCLS_ENA MS(0, 0x1)
+#define NGBE_SYNCLS_QPID(v) LS(v, 1, 0x7)
+#define NGBD_SYNCLS_QPID(r) RS(r, 1, 0x7)
+#define NGBE_SYNCLS_QPID_MASK MS(1, 0x7)
+#define NGBE_SYNCLS_HIPRIO MS(31, 0x1)
+
+/* MAC & VLAN & NVE */
+#define NGBE_PSRVLANIDX 0x016230 /*0-31*/
+#define NGBE_PSRVLAN 0x016220
+#define NGBE_PSRVLAN_VID(v) LS(v, 0, 0xFFF)
+#define NGBE_PSRVLAN_EA MS(31, 0x1)
+#define NGBE_PSRVLANPLM(i) (0x016224 + (i) * 4) /*0-1*/
+
+/**
+ * Mirror Rules
+ **/
+#define NGBE_MIRRCTL(i) (0x015B00 + (i) * 4)
+#define NGBE_MIRRCTL_POOL MS(0, 0x1)
+#define NGBE_MIRRCTL_UPLINK MS(1, 0x1)
+#define NGBE_MIRRCTL_DNLINK MS(2, 0x1)
+#define NGBE_MIRRCTL_VLAN MS(3, 0x1)
+#define NGBE_MIRRCTL_DESTP(v) LS(v, 8, 0x7)
+#define NGBE_MIRRVLANL(i) (0x015B10 + (i) * 8)
+#define NGBE_MIRRPOOLL(i) (0x015B30 + (i) * 8)
+
+/**
+ * Time Stamp
+ **/
+#define NGBE_TSRXCTL 0x015188
+#define NGBE_TSRXCTL_VLD MS(0, 0x1)
+#define NGBE_TSRXCTL_TYPE(v) LS(v, 1, 0x7)
+#define NGBE_TSRXCTL_TYPE_V2L2 (0)
+#define NGBE_TSRXCTL_TYPE_V1L4 (1)
+#define NGBE_TSRXCTL_TYPE_V2L24 (2)
+#define NGBE_TSRXCTL_TYPE_V2EVENT (5)
+#define NGBE_TSRXCTL_ENA MS(4, 0x1)
+#define NGBE_TSRXSTMPL 0x0151E8
+#define NGBE_TSRXSTMPH 0x0151A4
+#define NGBE_TSTXCTL 0x011F00
+#define NGBE_TSTXCTL_VLD MS(0, 0x1)
+#define NGBE_TSTXCTL_ENA MS(4, 0x1)
+#define NGBE_TSTXSTMPL 0x011F04
+#define NGBE_TSTXSTMPH 0x011F08
+#define NGBE_TSTIMEL 0x011F0C
+#define NGBE_TSTIMEH 0x011F10
+#define NGBE_TSTIMEINC 0x011F14
+#define NGBE_TSTIMEINC_IV(v) LS(v, 0, 0x7FFFFFF)
+
+/**
+ * Wake on Lan
+ **/
+#define NGBE_WOLCTL 0x015B80
+#define NGBE_WOLIPCTL 0x015B84
+#define NGBE_WOLIP4(i) (0x015BC0 + (i) * 4) /* 0-3 */
+#define NGBE_WOLIP6(i) (0x015BE0 + (i) * 4) /* 0-3 */
+
+#define NGBE_WOLFLEXCTL 0x015CFC
+#define NGBE_WOLFLEXI 0x015B8C
+#define NGBE_WOLFLEXDAT(i) (0x015C00 + (i) * 16) /* 0-15 */
+#define NGBE_WOLFLEXMSK(i) (0x015C08 + (i) * 16) /* 0-15 */
+
+/******************************************************************************
+ * Security Registers
+ ******************************************************************************/
+#define NGBE_SECRXCTL 0x017000
+#define NGBE_SECRXCTL_ODSA MS(0, 0x1)
+#define NGBE_SECRXCTL_XDSA MS(1, 0x1)
+#define NGBE_SECRXCTL_CRCSTRIP MS(2, 0x1)
+#define NGBE_SECRXCTL_SAVEBAD MS(6, 0x1)
+#define NGBE_SECRXSTAT 0x017004
+#define NGBE_SECRXSTAT_RDY MS(0, 0x1)
+#define NGBE_SECRXSTAT_ECC MS(1, 0x1)
+
+#define NGBE_SECTXCTL 0x01D000
+#define NGBE_SECTXCTL_ODSA MS(0, 0x1)
+#define NGBE_SECTXCTL_XDSA MS(1, 0x1)
+#define NGBE_SECTXCTL_STFWD MS(2, 0x1)
+#define NGBE_SECTXCTL_MSKIV MS(3, 0x1)
+#define NGBE_SECTXSTAT 0x01D004
+#define NGBE_SECTXSTAT_RDY MS(0, 0x1)
+#define NGBE_SECTXSTAT_ECC MS(1, 0x1)
+#define NGBE_SECTXBUFAF 0x01D008
+#define NGBE_SECTXBUFAE 0x01D00C
+#define NGBE_SECTXIFG 0x01D020
+#define NGBE_SECTXIFG_MIN(v) LS(v, 0, 0xF)
+#define NGBE_SECTXIFG_MIN_MASK MS(0, 0xF)
+
+/**
+ * LinkSec
+ **/
+#define NGBE_LSECRXCAP 0x017200
+#define NGBE_LSECRXCTL 0x017204
+ /* disabled(0),check(1),strict(2),drop(3) */
+#define NGBE_LSECRXCTL_MODE_MASK MS(2, 0x3)
+#define NGBE_LSECRXCTL_MODE_STRICT LS(2, 2, 0x3)
+#define NGBE_LSECRXCTL_POSTHDR MS(6, 0x1)
+#define NGBE_LSECRXCTL_REPLAY MS(7, 0x1)
+#define NGBE_LSECRXSCIL 0x017208
+#define NGBE_LSECRXSCIH 0x01720C
+#define NGBE_LSECRXSA(i) (0x017210 + (i) * 4) /* 0-1 */
+#define NGBE_LSECRXPN(i) (0x017218 + (i) * 4) /* 0-1 */
+#define NGBE_LSECRXKEY(n, i) (0x017220 + 0x10 * (n) + 4 * (i)) /*0-3*/
+#define NGBE_LSECTXCAP 0x01D200
+#define NGBE_LSECTXCTL 0x01D204
+ /* disabled(0), auth(1), auth+encrypt(2) */
+#define NGBE_LSECTXCTL_MODE_MASK MS(0, 0x3)
+#define NGBE_LSECTXCTL_MODE_AUTH LS(1, 0, 0x3)
+#define NGBE_LSECTXCTL_MODE_AENC LS(2, 0, 0x3)
+#define NGBE_LSECTXCTL_PNTRH_MASK MS(8, 0xFFFFFF)
+#define NGBE_LSECTXCTL_PNTRH(v) LS(v, 8, 0xFFFFFF)
+#define NGBE_LSECTXSCIL 0x01D208
+#define NGBE_LSECTXSCIH 0x01D20C
+#define NGBE_LSECTXSA 0x01D210
+#define NGBE_LSECTXPN0 0x01D214
+#define NGBE_LSECTXPN1 0x01D218
+#define NGBE_LSECTXKEY0(i) (0x01D21C + (i) * 4) /* 0-3 */
+#define NGBE_LSECTXKEY1(i) (0x01D22C + (i) * 4) /* 0-3 */
+
+#define NGBE_LSECRX_UTPKT 0x017240
+#define NGBE_LSECRX_DECOCT 0x017244
+#define NGBE_LSECRX_VLDOCT 0x017248
+#define NGBE_LSECRX_BTPKT 0x01724C
+#define NGBE_LSECRX_NOSCIPKT 0x017250
+#define NGBE_LSECRX_UNSCIPKT 0x017254
+#define NGBE_LSECRX_UNCHKPKT 0x017258
+#define NGBE_LSECRX_DLYPKT 0x01725C
+#define NGBE_LSECRX_LATEPKT 0x017260
+#define NGBE_LSECRX_OKPKT(i) (0x017264 + (i) * 4) /* 0-1 */
+#define NGBE_LSECRX_BADPKT(i) (0x01726C + (i) * 4) /* 0-1 */
+#define NGBE_LSECRX_INVPKT(i) (0x017274 + (i) * 4) /* 0-1 */
+#define NGBE_LSECRX_BADSAPKT(i) (0x01727C + (i) * 8) /* 0-3 */
+#define NGBE_LSECRX_INVSAPKT(i) (0x017280 + (i) * 8) /* 0-3 */
+#define NGBE_LSECTX_UTPKT 0x01D23C
+#define NGBE_LSECTX_ENCPKT 0x01D240
+#define NGBE_LSECTX_PROTPKT 0x01D244
+#define NGBE_LSECTX_ENCOCT 0x01D248
+#define NGBE_LSECTX_PROTOCT 0x01D24C
+
+/******************************************************************************
+ * MAC Registers
+ ******************************************************************************/
+#define NGBE_MACRXCFG 0x011004
+#define NGBE_MACRXCFG_ENA MS(0, 0x1)
+#define NGBE_MACRXCFG_JUMBO MS(8, 0x1)
+#define NGBE_MACRXCFG_LB MS(10, 0x1)
+#define NGBE_MACCNTCTL 0x011800
+#define NGBE_MACCNTCTL_RC MS(2, 0x1)
+
+#define NGBE_MACRXFLT 0x011008
+#define NGBE_MACRXFLT_PROMISC MS(0, 0x1)
+#define NGBE_MACRXFLT_CTL_MASK MS(6, 0x3)
+#define NGBE_MACRXFLT_CTL_DROP LS(0, 6, 0x3)
+#define NGBE_MACRXFLT_CTL_NOPS LS(1, 6, 0x3)
+#define NGBE_MACRXFLT_CTL_NOFT LS(2, 6, 0x3)
+#define NGBE_MACRXFLT_CTL_PASS LS(3, 6, 0x3)
+#define NGBE_MACRXFLT_RXALL MS(31, 0x1)
+
+/******************************************************************************
+ * Statistic Registers
+ ******************************************************************************/
+/* Ring Counter */
+#define NGBE_QPRXPKT(rp) (0x001014 + 0x40 * (rp))
+#define NGBE_QPRXOCTL(rp) (0x001018 + 0x40 * (rp))
+#define NGBE_QPRXOCTH(rp) (0x00101C + 0x40 * (rp))
+#define NGBE_QPRXMPKT(rp) (0x001020 + 0x40 * (rp))
+#define NGBE_QPRXBPKT(rp) (0x001024 + 0x40 * (rp))
+#define NGBE_QPTXPKT(rp) (0x003014 + 0x40 * (rp))
+#define NGBE_QPTXOCTL(rp) (0x003018 + 0x40 * (rp))
+#define NGBE_QPTXOCTH(rp) (0x00301C + 0x40 * (rp))
+#define NGBE_QPTXMPKT(rp) (0x003020 + 0x40 * (rp))
+#define NGBE_QPTXBPKT(rp) (0x003024 + 0x40 * (rp))
+
+/* TDMA Counter */
+#define NGBE_DMATXDROP 0x018300
+#define NGBE_DMATXSECDROP 0x018304
+#define NGBE_DMATXPKT 0x018308
+#define NGBE_DMATXOCTL 0x01830C
+#define NGBE_DMATXOCTH 0x018310
+#define NGBE_DMATXMNG 0x018314
+
+/* RDMA Counter */
+#define NGBE_DMARXDROP 0x012500
+#define NGBE_DMARXPKT 0x012504
+#define NGBE_DMARXOCTL 0x012508
+#define NGBE_DMARXOCTH 0x01250C
+#define NGBE_DMARXMNG 0x012510
+
+/* Packet Buffer Counter */
+#define NGBE_PBRXMISS 0x019040
+#define NGBE_PBRXPKT 0x019060
+#define NGBE_PBRXREP 0x019064
+#define NGBE_PBRXDROP 0x019068
+#define NGBE_PBLBSTAT 0x01906C
+#define NGBE_PBLBSTAT_FREE(r) RS(r, 0, 0x3FF)
+#define NGBE_PBLBSTAT_FULL MS(11, 0x1)
+#define NGBE_PBRXWRPTR 0x019180
+#define NGBE_PBRXWRPTR_HEAD(r) RS(r, 0, 0xFFFF)
+#define NGBE_PBRXWRPTR_TAIL(r) RS(r, 16, 0xFFFF)
+#define NGBE_PBRXRDPTR 0x0191A0
+#define NGBE_PBRXRDPTR_HEAD(r) RS(r, 0, 0xFFFF)
+#define NGBE_PBRXRDPTR_TAIL(r) RS(r, 16, 0xFFFF)
+#define NGBE_PBRXDATA 0x0191C0
+#define NGBE_PBRXDATA_RDPTR(r) RS(r, 0, 0xFFFF)
+#define NGBE_PBRXDATA_WRPTR(r) RS(r, 16, 0xFFFF)
+#define NGBE_PBRX_USDSP 0x0191E0
+#define NGBE_RXPBPFCDMACL 0x019210
+#define NGBE_RXPBPFCDMACH 0x019214
+#define NGBE_PBTXLNKXOFF 0x019218
+#define NGBE_PBTXLNKXON 0x01921C
+
+#define NGBE_PBTXSTAT 0x01C004
+#define NGBE_PBTXSTAT_EMPT(tc, r) ((1 << (tc) & (r)) >> (tc))
+
+#define NGBE_PBRXLNKXOFF 0x011988
+#define NGBE_PBRXLNKXON 0x011E0C
+
+#define NGBE_PBLPBK 0x01CF08
+
+/* Ether Flow Counter */
+#define NGBE_LANPKTDROP 0x0151C0
+#define NGBE_MNGPKTDROP 0x0151C4
+
+#define NGBE_PSRLANPKTCNT 0x0151B8
+#define NGBE_PSRMNGPKTCNT 0x0151BC
+
+/* MAC Counter */
+#define NGBE_MACRXERRCRCL 0x011928
+#define NGBE_MACRXERRCRCH 0x01192C
+#define NGBE_MACRXERRLENL 0x011978
+#define NGBE_MACRXERRLENH 0x01197C
+#define NGBE_MACRX1TO64L 0x001940
+#define NGBE_MACRX1TO64H 0x001944
+#define NGBE_MACRX65TO127L 0x001948
+#define NGBE_MACRX65TO127H 0x00194C
+#define NGBE_MACRX128TO255L 0x001950
+#define NGBE_MACRX128TO255H 0x001954
+#define NGBE_MACRX256TO511L 0x001958
+#define NGBE_MACRX256TO511H 0x00195C
+#define NGBE_MACRX512TO1023L 0x001960
+#define NGBE_MACRX512TO1023H 0x001964
+#define NGBE_MACRX1024TOMAXL 0x001968
+#define NGBE_MACRX1024TOMAXH 0x00196C
+#define NGBE_MACTX1TO64L 0x001834
+#define NGBE_MACTX1TO64H 0x001838
+#define NGBE_MACTX65TO127L 0x00183C
+#define NGBE_MACTX65TO127H 0x001840
+#define NGBE_MACTX128TO255L 0x001844
+#define NGBE_MACTX128TO255H 0x001848
+#define NGBE_MACTX256TO511L 0x00184C
+#define NGBE_MACTX256TO511H 0x001850
+#define NGBE_MACTX512TO1023L 0x001854
+#define NGBE_MACTX512TO1023H 0x001858
+#define NGBE_MACTX1024TOMAXL 0x00185C
+#define NGBE_MACTX1024TOMAXH 0x001860
+
+#define NGBE_MACRXUNDERSIZE 0x011938
+#define NGBE_MACRXOVERSIZE 0x01193C
+#define NGBE_MACRXJABBER 0x011934
+
+#define NGBE_MACRXPKTL 0x011900
+#define NGBE_MACRXPKTH 0x011904
+#define NGBE_MACTXPKTL 0x01181C
+#define NGBE_MACTXPKTH 0x011820
+#define NGBE_MACRXGBOCTL 0x011908
+#define NGBE_MACRXGBOCTH 0x01190C
+#define NGBE_MACTXGBOCTL 0x011814
+#define NGBE_MACTXGBOCTH 0x011818
+
+#define NGBE_MACRXOCTL 0x011918
+#define NGBE_MACRXOCTH 0x01191C
+#define NGBE_MACRXMPKTL 0x011920
+#define NGBE_MACRXMPKTH 0x011924
+#define NGBE_MACTXOCTL 0x011824
+#define NGBE_MACTXOCTH 0x011828
+#define NGBE_MACTXMPKTL 0x01182C
+#define NGBE_MACTXMPKTH 0x011830
+
+/* Management Counter */
+#define NGBE_MNGOUT 0x01CF00
+#define NGBE_MNGIN 0x01CF04
+#define NGBE_MNGDROP 0x01CF0C
+
+/* MAC SEC Counter */
+#define NGBE_LSECRXUNTAG 0x017240
+#define NGBE_LSECRXDECOCT 0x017244
+#define NGBE_LSECRXVLDOCT 0x017248
+#define NGBE_LSECRXBADTAG 0x01724C
+#define NGBE_LSECRXNOSCI 0x017250
+#define NGBE_LSECRXUKSCI 0x017254
+#define NGBE_LSECRXUNCHK 0x017258
+#define NGBE_LSECRXDLY 0x01725C
+#define NGBE_LSECRXLATE 0x017260
+#define NGBE_LSECRXGOOD 0x017264
+#define NGBE_LSECRXBAD 0x01726C
+#define NGBE_LSECRXUK 0x017274
+#define NGBE_LSECRXBADSA 0x01727C
+#define NGBE_LSECRXUKSA 0x017280
+#define NGBE_LSECTXUNTAG 0x01D23C
+#define NGBE_LSECTXENC 0x01D240
+#define NGBE_LSECTXPTT 0x01D244
+#define NGBE_LSECTXENCOCT 0x01D248
+#define NGBE_LSECTXPTTOCT 0x01D24C
+
+/* Management Counter */
+#define NGBE_MNGOS2BMC 0x01E094
+#define NGBE_MNGBMC2OS 0x01E090
+
+/******************************************************************************
+ * PF(Physical Function) Registers
+ ******************************************************************************/
+/* Interrupt */
+#define NGBE_ICRMISC 0x000100
+#define NGBE_ICRMISC_MASK MS(8, 0xFFFFFF)
+#define NGBE_ICRMISC_RST MS(10, 0x1) /* device reset event */
+#define NGBE_ICRMISC_TS MS(11, 0x1) /* time sync */
+#define NGBE_ICRMISC_STALL MS(12, 0x1) /* trans or recv path is stalled */
+#define NGBE_ICRMISC_LNKSEC MS(13, 0x1) /* Tx LinkSec requires key exchange */
+#define NGBE_ICRMISC_ERRBUF MS(14, 0x1) /* Packet Buffer Overrun */
+#define NGBE_ICRMISC_ERRMAC MS(17, 0x1) /* err reported by MAC */
+#define NGBE_ICRMISC_PHY MS(18, 0x1) /* interrupt reported by eth phy */
+#define NGBE_ICRMISC_ERRIG MS(20, 0x1) /* integrity error */
+#define NGBE_ICRMISC_SPI MS(21, 0x1) /* SPI interface */
+#define NGBE_ICRMISC_VFMBX MS(23, 0x1) /* VF-PF message box */
+#define NGBE_ICRMISC_GPIO MS(26, 0x1) /* GPIO interrupt */
+#define NGBE_ICRMISC_ERRPCI MS(27, 0x1) /* pcie request error */
+#define NGBE_ICRMISC_HEAT MS(28, 0x1) /* overheat detection */
+#define NGBE_ICRMISC_PROBE MS(29, 0x1) /* probe match */
+#define NGBE_ICRMISC_MNGMBX MS(30, 0x1) /* mng mailbox */
+#define NGBE_ICRMISC_TIMER MS(31, 0x1) /* tcp timer */
+#define NGBE_ICRMISC_DEFAULT ( \
+ NGBE_ICRMISC_RST | \
+ NGBE_ICRMISC_ERRMAC | \
+ NGBE_ICRMISC_PHY | \
+ NGBE_ICRMISC_ERRIG | \
+ NGBE_ICRMISC_GPIO | \
+ NGBE_ICRMISC_VFMBX | \
+ NGBE_ICRMISC_MNGMBX | \
+ NGBE_ICRMISC_STALL | \
+ NGBE_ICRMISC_TIMER)
+#define NGBE_ICSMISC 0x000104
+#define NGBE_IENMISC 0x000108
+#define NGBE_IVARMISC 0x0004FC
+#define NGBE_IVARMISC_VEC(v) LS(v, 0, 0x7)
+#define NGBE_IVARMISC_VLD MS(7, 0x1)
+#define NGBE_ICR(i) (0x000120 + (i) * 4) /*0*/
+#define NGBE_ICR_MASK MS(0, 0x1FF)
+#define NGBE_ICS(i) (0x000130 + (i) * 4) /*0*/
+#define NGBE_ICS_MASK NGBE_ICR_MASK
+#define NGBE_IMS(i) (0x000140 + (i) * 4) /*0*/
+#define NGBE_IMS_MASK NGBE_ICR_MASK
+#define NGBE_IMC(i) (0x000150 + (i) * 4) /*0*/
+#define NGBE_IMC_MASK NGBE_ICR_MASK
+#define NGBE_IVAR(i) (0x000500 + (i) * 4) /*0-3*/
+#define NGBE_IVAR_VEC(v) LS(v, 0, 0x7)
+#define NGBE_IVAR_VLD MS(7, 0x1)
+#define NGBE_TCPTMR 0x000170
+#define NGBE_ITRSEL 0x000180
+
+/* P2V Mailbox */
+#define NGBE_MBMEM(i) (0x005000 + 0x40 * (i)) /*0-7*/
+#define NGBE_MBCTL(i) (0x000600 + 4 * (i)) /*0-7*/
+#define NGBE_MBCTL_STS MS(0, 0x1) /* Initiate message send to VF */
+#define NGBE_MBCTL_ACK MS(1, 0x1) /* Ack message recv'd from VF */
+#define NGBE_MBCTL_VFU MS(2, 0x1) /* VF owns the mailbox buffer */
+#define NGBE_MBCTL_PFU MS(3, 0x1) /* PF owns the mailbox buffer */
+#define NGBE_MBCTL_RVFU MS(4, 0x1) /* Reset VFU - used when VF stuck */
+#define NGBE_MBVFICR 0x000480
+#define NGBE_MBVFICR_INDEX(vf) ((vf) >> 4)
+#define NGBE_MBVFICR_VFREQ_MASK (0x0000FFFF) /* bits for VF messages */
+#define NGBE_MBVFICR_VFREQ_VF1 (0x00000001) /* bit for VF 1 message */
+#define NGBE_MBVFICR_VFACK_MASK (0xFFFF0000) /* bits for VF acks */
+#define NGBE_MBVFICR_VFACK_VF1 (0x00010000) /* bit for VF 1 ack */
+#define NGBE_FLRVFP 0x000490
+#define NGBE_FLRVFE 0x0004A0
+#define NGBE_FLRVFEC 0x0004A8
+
+/******************************************************************************
+ * VF(Virtual Function) Registers
+ ******************************************************************************/
+#define NGBE_VFPBWRAP 0x000000
+#define NGBE_VFPBWRAP_WRAP MS(0, 0x7)
+#define NGBE_VFPBWRAP_EMPT MS(3, 0x1)
+#define NGBE_VFSTATUS 0x000004
+#define NGBE_VFSTATUS_UP MS(0, 0x1)
+#define NGBE_VFSTATUS_BW_MASK MS(1, 0x7)
+#define NGBE_VFSTATUS_BW_1G LS(0x1, 1, 0x7)
+#define NGBE_VFSTATUS_BW_100M LS(0x2, 1, 0x7)
+#define NGBE_VFSTATUS_BW_10M LS(0x4, 1, 0x7)
+#define NGBE_VFSTATUS_BUSY MS(4, 0x1)
+#define NGBE_VFSTATUS_LANID MS(8, 0x3)
+#define NGBE_VFRST 0x000008
+#define NGBE_VFRST_SET MS(0, 0x1)
+#define NGBE_VFMSIXECC 0x00000C
+#define NGBE_VFPLCFG 0x000078
+#define NGBE_VFPLCFG_RSV MS(0, 0x1)
+#define NGBE_VFPLCFG_PSR(v) LS(v, 1, 0x1F)
+#define NGBE_VFPLCFG_PSRL4HDR (0x1)
+#define NGBE_VFPLCFG_PSRL3HDR (0x2)
+#define NGBE_VFPLCFG_PSRL2HDR (0x4)
+#define NGBE_VFPLCFG_PSRTUNHDR (0x8)
+#define NGBE_VFPLCFG_PSRTUNMAC (0x10)
+#define NGBE_VFICR 0x000100
+#define NGBE_VFICR_MASK LS(3, 0, 0x3)
+#define NGBE_VFICR_MBX MS(1, 0x1)
+#define NGBE_VFICR_DONE1 MS(0, 0x1)
+#define NGBE_VFICS 0x000104
+#define NGBE_VFICS_MASK NGBE_VFICR_MASK
+#define NGBE_VFIMS 0x000108
+#define NGBE_VFIMS_MASK NGBE_VFICR_MASK
+#define NGBE_VFIMC 0x00010C
+#define NGBE_VFIMC_MASK NGBE_VFICR_MASK
+#define NGBE_VFGPIE 0x000118
+#define NGBE_VFIVAR(i) (0x000240 + 4 * (i)) /*0-1*/
+#define NGBE_VFIVARMISC 0x000260
+#define NGBE_VFIVAR_ALLOC(v) LS(v, 0, 0x1)
+#define NGBE_VFIVAR_VLD MS(7, 0x1)
+
+#define NGBE_VFMBCTL 0x000600
+#define NGBE_VFMBCTL_REQ MS(0, 0x1) /* Request for PF Ready bit */
+#define NGBE_VFMBCTL_ACK MS(1, 0x1) /* Ack PF message received */
+#define NGBE_VFMBCTL_VFU MS(2, 0x1) /* VF owns the mailbox buffer */
+#define NGBE_VFMBCTL_PFU MS(3, 0x1) /* PF owns the mailbox buffer */
+#define NGBE_VFMBCTL_PFSTS MS(4, 0x1) /* PF wrote a message in the MB */
+#define NGBE_VFMBCTL_PFACK MS(5, 0x1) /* PF ack the previous VF msg */
+#define NGBE_VFMBCTL_RSTI MS(6, 0x1) /* PF has reset indication */
+#define NGBE_VFMBCTL_RSTD MS(7, 0x1) /* PF has indicated reset done */
+#define NGBE_VFMBCTL_R2C_BITS (NGBE_VFMBCTL_RSTD | \
+ NGBE_VFMBCTL_PFSTS | \
+ NGBE_VFMBCTL_PFACK)
+#define NGBE_VFMBX 0x000C00 /*0-15*/
+#define NGBE_VFTPHCTL(i) 0x000D00
+
+/******************************************************************************
+ * PF&VF TxRx Interface
+ ******************************************************************************/
+#define RNGLEN(v) ROUND_OVER(v, 13, 7)
+#define HDRLEN(v) ROUND_OVER(v, 10, 6)
+#define PKTLEN(v) ROUND_OVER(v, 14, 10)
+#define INTTHR(v) ROUND_OVER(v, 4, 0)
+
+#define NGBE_RING_DESC_ALIGN 128
+#define NGBE_RING_DESC_MIN 128
+#define NGBE_RING_DESC_MAX 8192
+#define NGBE_RXD_ALIGN NGBE_RING_DESC_ALIGN
+#define NGBE_TXD_ALIGN NGBE_RING_DESC_ALIGN
+
+/* receive ring */
+#define NGBE_RXBAL(rp) (0x001000 + 0x40 * (rp))
+#define NGBE_RXBAH(rp) (0x001004 + 0x40 * (rp))
+#define NGBE_RXRP(rp) (0x00100C + 0x40 * (rp))
+#define NGBE_RXWP(rp) (0x001008 + 0x40 * (rp))
+#define NGBE_RXCFG(rp) (0x001010 + 0x40 * (rp))
+#define NGBE_RXCFG_ENA MS(0, 0x1)
+#define NGBE_RXCFG_RNGLEN(v) LS(RNGLEN(v), 1, 0x3F)
+#define NGBE_RXCFG_PKTLEN(v) LS(PKTLEN(v), 8, 0xF)
+#define NGBE_RXCFG_PKTLEN_MASK MS(8, 0xF)
+#define NGBE_RXCFG_HDRLEN(v) LS(HDRLEN(v), 12, 0xF)
+#define NGBE_RXCFG_HDRLEN_MASK MS(12, 0xF)
+#define NGBE_RXCFG_WTHRESH(v) LS(v, 16, 0x7)
+#define NGBE_RXCFG_ETAG MS(22, 0x1)
+#define NGBE_RXCFG_SPLIT MS(26, 0x1)
+#define NGBE_RXCFG_CNTAG MS(28, 0x1)
+#define NGBE_RXCFG_DROP MS(30, 0x1)
+#define NGBE_RXCFG_VLAN MS(31, 0x1)
+
+/* transmit ring */
+#define NGBE_TXBAL(rp) (0x003000 + 0x40 * (rp)) /*0-7*/
+#define NGBE_TXBAH(rp) (0x003004 + 0x40 * (rp))
+#define NGBE_TXWP(rp) (0x003008 + 0x40 * (rp))
+#define NGBE_TXRP(rp) (0x00300C + 0x40 * (rp))
+#define NGBE_TXCFG(rp) (0x003010 + 0x40 * (rp))
+#define NGBE_TXCFG_ENA MS(0, 0x1)
+#define NGBE_TXCFG_BUFLEN_MASK MS(1, 0x3F)
+#define NGBE_TXCFG_BUFLEN(v) LS(RNGLEN(v), 1, 0x3F)
+#define NGBE_TXCFG_HTHRESH_MASK MS(8, 0xF)
+#define NGBE_TXCFG_HTHRESH(v) LS(v, 8, 0xF)
+#define NGBE_TXCFG_WTHRESH_MASK MS(16, 0x7F)
+#define NGBE_TXCFG_WTHRESH(v) LS(v, 16, 0x7F)
+#define NGBE_TXCFG_FLUSH MS(26, 0x1)
+
+/* interrupt registers */
+#define NGBE_BMEPEND 0x000168
+#define NGBE_BMEPEND_ST MS(0, 0x1)
+#define NGBE_ITRI 0x000180
+#define NGBE_ITR(i) (0x000200 + 4 * (i))
+#define NGBE_ITR_IVAL_MASK MS(2, 0x1FFF) /* 1ns/10G, 10ns/REST */
+#define NGBE_ITR_IVAL(v) LS(v, 2, 0x1FFF) /*1ns/10G, 10ns/REST*/
+#define NGBE_ITR_IVAL_1G(us) NGBE_ITR_IVAL((us) / 2)
+#define NGBE_ITR_IVAL_10G(us) NGBE_ITR_IVAL((us) / 20)
+#define NGBE_ITR_LLIEA MS(15, 0x1)
+#define NGBE_ITR_LLICREDIT(v) LS(v, 16, 0x1F)
+#define NGBE_ITR_CNT(v) LS(v, 21, 0x3FF)
+#define NGBE_ITR_WRDSA MS(31, 0x1)
+#define NGBE_GPIE 0x000118
+#define NGBE_GPIE_MSIX MS(0, 0x1)
+#define NGBE_GPIE_LLIEA MS(1, 0x1)
+#define NGBE_GPIE_LLIVAL(v) LS(v, 3, 0x1F)
+#define NGBE_GPIE_LLIVAL_H(v) LS(v, 16, 0x7FF)
+
+/******************************************************************************
+ * Debug Registers
+ ******************************************************************************/
+/**
+ * Probe
+ **/
+#define NGBE_PRBCTL 0x010200
+#define NGBE_PRBSTA 0x010204
+#define NGBE_PRBDAT 0x010220
+#define NGBE_PRBCNT 0x010228
+
+#define NGBE_PRBPCI 0x01F010
+#define NGBE_PRBPSR 0x015010
+#define NGBE_PRBRDB 0x019010
+#define NGBE_PRBTDB 0x01C010
+#define NGBE_PRBRSEC 0x017010
+#define NGBE_PRBTSEC 0x01D010
+#define NGBE_PRBMNG 0x01E010
+#define NGBE_PRBRMAC 0x011014
+#define NGBE_PRBTMAC 0x011010
+#define NGBE_PRBREMAC 0x011E04
+#define NGBE_PRBTEMAC 0x011E00
+
+/**
+ * ECC
+ **/
+#define NGBE_ECCRXPBCTL 0x019014
+#define NGBE_ECCRXPBINJ 0x019018
+#define NGBE_ECCRXPB 0x01901C
+#define NGBE_ECCTXPBCTL 0x01C014
+#define NGBE_ECCTXPBINJ 0x01C018
+#define NGBE_ECCTXPB 0x01C01C
+
+#define NGBE_ECCRXETHCTL 0x015014
+#define NGBE_ECCRXETHINJ 0x015018
+#define NGBE_ECCRXETH 0x01401C
+
+#define NGBE_ECCRXSECCTL 0x017014
+#define NGBE_ECCRXSECINJ 0x017018
+#define NGBE_ECCRXSEC 0x01701C
+#define NGBE_ECCTXSECCTL 0x01D014
+#define NGBE_ECCTXSECINJ 0x01D018
+#define NGBE_ECCTXSEC 0x01D01C
+
+#define NGBE_P2VMBX_SIZE (16) /* 16*4B */
+#define NGBE_P2MMBX_SIZE (64) /* 64*4B */
+
+/**************** Global Registers ****************************/
+#define NGBE_POOLETHCTL(pl) (0x015600 + (pl) * 4)
+#define NGBE_POOLETHCTL_LBDIA MS(0, 0x1)
+#define NGBE_POOLETHCTL_LLBDIA MS(1, 0x1)
+#define NGBE_POOLETHCTL_LLB MS(2, 0x1)
+#define NGBE_POOLETHCTL_UCP MS(4, 0x1)
+#define NGBE_POOLETHCTL_ETP MS(5, 0x1)
+#define NGBE_POOLETHCTL_VLA MS(6, 0x1)
+#define NGBE_POOLETHCTL_VLP MS(7, 0x1)
+#define NGBE_POOLETHCTL_UTA MS(8, 0x1)
+#define NGBE_POOLETHCTL_MCHA MS(9, 0x1)
+#define NGBE_POOLETHCTL_UCHA MS(10, 0x1)
+#define NGBE_POOLETHCTL_BCA MS(11, 0x1)
+#define NGBE_POOLETHCTL_MCP MS(12, 0x1)
+#define NGBE_POOLDROPSWBK(i) (0x0151C8 + (i) * 4) /*0-1*/
+
+/**************************** Receive DMA registers **************************/
+
+#define NGBE_RPUP2TC 0x019008
+#define NGBE_RPUP2TC_UP_SHIFT 3
+#define NGBE_RPUP2TC_UP_MASK 0x7
+
+/* mac switcher */
+#define NGBE_ETHADDRL 0x016200
+#define NGBE_ETHADDRL_AD0(v) LS(v, 0, 0xFF)
+#define NGBE_ETHADDRL_AD1(v) LS(v, 8, 0xFF)
+#define NGBE_ETHADDRL_AD2(v) LS(v, 16, 0xFF)
+#define NGBE_ETHADDRL_AD3(v) LS(v, 24, 0xFF)
+#define NGBE_ETHADDRL_ETAG(r) RS(r, 0, 0x3FFF)
+#define NGBE_ETHADDRH 0x016204
+#define NGBE_ETHADDRH_AD4(v) LS(v, 0, 0xFF)
+#define NGBE_ETHADDRH_AD5(v) LS(v, 8, 0xFF)
+#define NGBE_ETHADDRH_AD_MASK MS(0, 0xFFFF)
+#define NGBE_ETHADDRH_ETAG MS(30, 0x1)
+#define NGBE_ETHADDRH_VLD MS(31, 0x1)
+#define NGBE_ETHADDRASS 0x016208
+#define NGBE_ETHADDRIDX 0x016210
+
+/* Outmost Barrier Filters */
+#define NGBE_MCADDRTBL(i) (0x015200 + (i) * 4) /*0-127*/
+#define NGBE_UCADDRTBL(i) (0x015400 + (i) * 4) /*0-127*/
+#define NGBE_VLANTBL(i) (0x016000 + (i) * 4) /*0-127*/
+
+#define NGBE_MNGFLEXSEL 0x1582C
+#define NGBE_MNGFLEXDWL(i) (0x15A00 + ((i) * 16))
+#define NGBE_MNGFLEXDWH(i) (0x15A04 + ((i) * 16))
+#define NGBE_MNGFLEXMSK(i) (0x15A08 + ((i) * 16))
+
+#define NGBE_LANFLEXSEL 0x15B8C
+#define NGBE_LANFLEXDWL(i) (0x15C00 + ((i) * 16))
+#define NGBE_LANFLEXDWH(i) (0x15C04 + ((i) * 16))
+#define NGBE_LANFLEXMSK(i) (0x15C08 + ((i) * 16))
+#define NGBE_LANFLEXCTL 0x15CFC
+
+/* ipsec */
+#define NGBE_IPSRXIDX 0x017100
+#define NGBE_IPSRXIDX_ENA MS(0, 0x1)
+#define NGBE_IPSRXIDX_TB_MASK MS(1, 0x3)
+#define NGBE_IPSRXIDX_TB_IP LS(1, 1, 0x3)
+#define NGBE_IPSRXIDX_TB_SPI LS(2, 1, 0x3)
+#define NGBE_IPSRXIDX_TB_KEY LS(3, 1, 0x3)
+#define NGBE_IPSRXIDX_TBIDX(v) LS(v, 3, 0xF)
+#define NGBE_IPSRXIDX_READ MS(30, 0x1)
+#define NGBE_IPSRXIDX_WRITE MS(31, 0x1)
+#define NGBE_IPSRXADDR(i) (0x017104 + (i) * 4)
+
+#define NGBE_IPSRXSPI 0x017114
+#define NGBE_IPSRXADDRIDX 0x017118
+#define NGBE_IPSRXKEY(i) (0x01711C + (i) * 4)
+#define NGBE_IPSRXSALT 0x01712C
+#define NGBE_IPSRXMODE 0x017130
+#define NGBE_IPSRXMODE_IPV6 0x00000010
+#define NGBE_IPSRXMODE_DEC 0x00000008
+#define NGBE_IPSRXMODE_ESP 0x00000004
+#define NGBE_IPSRXMODE_AH 0x00000002
+#define NGBE_IPSRXMODE_VLD 0x00000001
+#define NGBE_IPSTXIDX 0x01D100
+#define NGBE_IPSTXIDX_ENA MS(0, 0x1)
+#define NGBE_IPSTXIDX_SAIDX(v) LS(v, 3, 0x3FF)
+#define NGBE_IPSTXIDX_READ MS(30, 0x1)
+#define NGBE_IPSTXIDX_WRITE MS(31, 0x1)
+#define NGBE_IPSTXSALT 0x01D104
+#define NGBE_IPSTXKEY(i) (0x01D108 + (i) * 4)
+
+#define NGBE_MACTXCFG 0x011000
+#define NGBE_MACTXCFG_TE MS(0, 0x1)
+#define NGBE_MACTXCFG_SPEED_MASK MS(29, 0x3)
+#define NGBE_MACTXCFG_SPEED(v) LS(v, 29, 0x3)
+#define NGBE_MACTXCFG_SPEED_10G LS(0, 29, 0x3)
+#define NGBE_MACTXCFG_SPEED_1G LS(3, 29, 0x3)
+
+#define NGBE_ISBADDRL 0x000160
+#define NGBE_ISBADDRH 0x000164
+
+#define NGBE_ARBPOOLIDX 0x01820C
+#define NGBE_ARBTXRATE 0x018404
+#define NGBE_ARBTXRATE_MIN(v) LS(v, 0, 0x3FFF)
+#define NGBE_ARBTXRATE_MAX(v) LS(v, 16, 0x3FFF)
+
+/* qos */
+#define NGBE_ARBTXCTL 0x018200
+#define NGBE_ARBTXCTL_RRM MS(1, 0x1)
+#define NGBE_ARBTXCTL_WSP MS(2, 0x1)
+#define NGBE_ARBTXCTL_DIA MS(6, 0x1)
+#define NGBE_ARBTXMMW 0x018208
+
+/* Management */
+#define NGBE_MNGFWSYNC 0x01E000
+#define NGBE_MNGFWSYNC_REQ MS(0, 0x1)
+#define NGBE_MNGSWSYNC 0x01E004
+#define NGBE_MNGSWSYNC_REQ MS(0, 0x1)
+#define NGBE_SWSEM 0x01002C
+#define NGBE_SWSEM_PF MS(0, 0x1)
+#define NGBE_MNGSEM 0x01E008
+#define NGBE_MNGSEM_SW(v) LS(v, 0, 0xFFFF)
+#define NGBE_MNGSEM_SWPHY MS(0, 0x1)
+#define NGBE_MNGSEM_SWMBX MS(2, 0x1)
+#define NGBE_MNGSEM_SWFLASH MS(3, 0x1)
+#define NGBE_MNGSEM_FW(v) LS(v, 16, 0xFFFF)
+#define NGBE_MNGSEM_FWPHY MS(16, 0x1)
+#define NGBE_MNGSEM_FWMBX MS(18, 0x1)
+#define NGBE_MNGSEM_FWFLASH MS(19, 0x1)
+#define NGBE_MNGMBXCTL 0x01E044
+#define NGBE_MNGMBXCTL_SWRDY MS(0, 0x1)
+#define NGBE_MNGMBXCTL_SWACK MS(1, 0x1)
+#define NGBE_MNGMBXCTL_FWRDY MS(2, 0x1)
+#define NGBE_MNGMBXCTL_FWACK MS(3, 0x1)
+#define NGBE_MNGMBX 0x01E100
+
+/**
+ * MDIO(PHY)
+ **/
+#define NGBE_MDIOSCA 0x011200
+#define NGBE_MDIOSCA_REG(v) LS(v, 0, 0xFFFF)
+#define NGBE_MDIOSCA_PORT(v) LS(v, 16, 0x1F)
+#define NGBE_MDIOSCA_DEV(v) LS(v, 21, 0x1F)
+#define NGBE_MDIOSCD 0x011204
+#define NGBE_MDIOSCD_DAT_R(r) RS(r, 0, 0xFFFF)
+#define NGBE_MDIOSCD_DAT(v) LS(v, 0, 0xFFFF)
+#define NGBE_MDIOSCD_CMD_PREAD LS(2, 16, 0x3)
+#define NGBE_MDIOSCD_CMD_WRITE LS(1, 16, 0x3)
+#define NGBE_MDIOSCD_CMD_READ LS(3, 16, 0x3)
+#define NGBE_MDIOSCD_SADDR MS(18, 0x1)
+#define NGBE_MDIOSCD_CLOCK(v) LS(v, 19, 0x7)
+#define NGBE_MDIOSCD_BUSY MS(22, 0x1)
+
+#define NGBE_MDIOMODE 0x011220
+#define NGBE_MDIOMODE_MASK MS(0, 0xF)
+#define NGBE_MDIOMODE_PRT3CL22 MS(3, 0x1)
+#define NGBE_MDIOMODE_PRT2CL22 MS(2, 0x1)
+#define NGBE_MDIOMODE_PRT1CL22 MS(1, 0x1)
+#define NGBE_MDIOMODE_PRT0CL22 MS(0, 0x1)
+
+#define NVM_OROM_OFFSET 0x17
+#define NVM_OROM_BLK_LOW 0x83
+#define NVM_OROM_BLK_HI 0x84
+#define NVM_OROM_PATCH_MASK 0xFF
+#define NVM_OROM_SHIFT 8
+#define NVM_VER_MASK 0x00FF /* version mask */
+#define NVM_VER_SHIFT 8 /* version bit shift */
+#define NVM_OEM_PROD_VER_PTR 0x1B /* OEM Product version block pointer */
+#define NVM_OEM_PROD_VER_CAP_OFF 0x1 /* OEM Product version format offset */
+#define NVM_OEM_PROD_VER_OFF_L 0x2 /* OEM Product version offset low */
+#define NVM_OEM_PROD_VER_OFF_H 0x3 /* OEM Product version offset high */
+#define NVM_OEM_PROD_VER_CAP_MASK 0xF /* OEM Product version cap mask */
+#define NVM_OEM_PROD_VER_MOD_LEN 0x3 /* OEM Product version module length */
+#define NVM_ETK_OFF_LOW 0x2D /* version low order word */
+#define NVM_ETK_OFF_HI 0x2E /* version high order word */
+#define NVM_ETK_SHIFT 16 /* high version word shift */
+#define NVM_VER_INVALID 0xFFFF
+#define NVM_ETK_VALID 0x8000
+#define NVM_INVALID_PTR 0xFFFF
+#define NVM_VER_SIZE 32 /* version string size */
+
+#define NGBE_REG_RSSTBL NGBE_RSSTBL(0)
+#define NGBE_REG_RSSKEY NGBE_RSSKEY(0)
+
+/*
+ * read counters that are not read-clear (non-RC)
+ */
+#define NGBE_UPDCNT32(reg, last, cur) \
+do { \
+ uint32_t latest = rd32(hw, reg); \
+ if (hw->offset_loaded || hw->rx_loaded) \
+ last = 0; \
+ cur += (latest - last) & UINT_MAX; \
+ last = latest; \
+} while (0)
+
+#define NGBE_UPDCNT36(regl, last, cur) \
+do { \
+ uint64_t new_lsb = rd32(hw, regl); \
+ uint64_t new_msb = rd32(hw, regl + 4); \
+ uint64_t latest = ((new_msb << 32) | new_lsb); \
+ if (hw->offset_loaded || hw->rx_loaded) \
+ last = 0; \
+ cur += (0x1000000000LL + latest - last) & 0xFFFFFFFFFLL; \
+ last = latest; \
+} while (0)
+
+/**
+ * register operations
+ **/
+#define NGBE_REG_READ32(addr) rte_read32(addr)
+#define NGBE_REG_READ32_RELAXED(addr) rte_read32_relaxed(addr)
+#define NGBE_REG_WRITE32(addr, val) rte_write32(val, addr)
+#define NGBE_REG_WRITE32_RELAXED(addr, val) rte_write32_relaxed(val, addr)
+
+#define NGBE_DEAD_READ_REG 0xdeadbeefU
+#define NGBE_FAILED_READ_REG 0xffffffffU
+#define NGBE_REG_ADDR(hw, reg) \
+ ((volatile u32 *)((char *)(hw)->hw_addr + (reg)))
+
+static inline u32
+ngbe_get32(volatile u32 *addr)
+{
+ u32 val = NGBE_REG_READ32(addr);
+ return rte_le_to_cpu_32(val);
+}
+
+static inline void
+ngbe_set32(volatile u32 *addr, u32 val)
+{
+ val = rte_cpu_to_le_32(val);
+ NGBE_REG_WRITE32(addr, val);
+}
+
+static inline u32
+ngbe_get32_masked(volatile u32 *addr, u32 mask)
+{
+ u32 val = ngbe_get32(addr);
+ val &= mask;
+ return val;
+}
+
+static inline void
+ngbe_set32_masked(volatile u32 *addr, u32 mask, u32 field)
+{
+ u32 val = ngbe_get32(addr);
+ val = ((val & ~mask) | (field & mask));
+ ngbe_set32(addr, val);
+}
+
+static inline u32
+ngbe_get32_relaxed(volatile u32 *addr)
+{
+ u32 val = NGBE_REG_READ32_RELAXED(addr);
+ return rte_le_to_cpu_32(val);
+}
+
+static inline void
+ngbe_set32_relaxed(volatile u32 *addr, u32 val)
+{
+ val = rte_cpu_to_le_32(val);
+ NGBE_REG_WRITE32_RELAXED(addr, val);
+}
+
+static inline u32
+rd32(struct ngbe_hw *hw, u32 reg)
+{
+ if (reg == NGBE_REG_DUMMY)
+ return 0;
+ return ngbe_get32(NGBE_REG_ADDR(hw, reg));
+}
+
+static inline void
+wr32(struct ngbe_hw *hw, u32 reg, u32 val)
+{
+ if (reg == NGBE_REG_DUMMY)
+ return;
+ ngbe_set32(NGBE_REG_ADDR(hw, reg), val);
+}
+
+static inline u32
+rd32m(struct ngbe_hw *hw, u32 reg, u32 mask)
+{
+ u32 val = rd32(hw, reg);
+ val &= mask;
+ return val;
+}
+
+static inline void
+wr32m(struct ngbe_hw *hw, u32 reg, u32 mask, u32 field)
+{
+ u32 val = rd32(hw, reg);
+ val = ((val & ~mask) | (field & mask));
+ wr32(hw, reg, val);
+}
+
+static inline u64
+rd64(struct ngbe_hw *hw, u32 reg)
+{
+ u64 lsb = rd32(hw, reg);
+ u64 msb = rd32(hw, reg + 4);
+ return (lsb | msb << 32);
+}
+
+static inline void
+wr64(struct ngbe_hw *hw, u32 reg, u64 val)
+{
+ wr32(hw, reg, (u32)val);
+ wr32(hw, reg + 4, (u32)(val >> 32));
+}
+
+/* poll register */
+static inline u32
+po32m(struct ngbe_hw *hw, u32 reg, u32 mask, u32 expect, u32 *actual,
+ u32 loop, u32 slice)
+{
+ bool usec = true;
+ u32 value = 0, all = 0;
+
+ if (slice > 1000 * MAX_UDELAY_MS) {
+ usec = false;
+ slice = (slice + 500) / 1000;
+ }
+
+ do {
+ all |= rd32(hw, reg);
+ value |= mask & all;
+ if (value == expect)
+ break;
+
+ usec ? usec_delay(slice) : msec_delay(slice);
+ } while (--loop > 0);
+
+ if (actual)
+ *actual = all;
+
+ return loop;
+}
+
+/* flush all write operations */
+#define ngbe_flush(hw) rd32(hw, 0x00100C)
+
+#define rd32a(hw, reg, idx) ( \
+ rd32((hw), (reg) + ((idx) << 2)))
+#define wr32a(hw, reg, idx, val) \
+ wr32((hw), (reg) + ((idx) << 2), (val))
+
+#define rd32w(hw, reg, mask, slice) do { \
+ rd32((hw), reg); \
+ po32m((hw), reg, mask, mask, NULL, 5, slice); \
+} while (0)
+
+#define wr32w(hw, reg, val, mask, slice) do { \
+ wr32((hw), reg, val); \
+ po32m((hw), reg, mask, mask, NULL, 5, slice); \
+} while (0)
+
+#define NGBE_XPCS_IDAADDR 0x13000
+#define NGBE_XPCS_IDADATA 0x13004
+#define NGBE_EPHY_IDAADDR 0x13008
+#define NGBE_EPHY_IDADATA 0x1300C
+static inline u32
+rd32_epcs(struct ngbe_hw *hw, u32 addr)
+{
+ u32 data;
+ wr32(hw, NGBE_XPCS_IDAADDR, addr);
+ data = rd32(hw, NGBE_XPCS_IDADATA);
+ return data;
+}
+
+static inline void
+wr32_epcs(struct ngbe_hw *hw, u32 addr, u32 data)
+{
+ wr32(hw, NGBE_XPCS_IDAADDR, addr);
+ wr32(hw, NGBE_XPCS_IDADATA, data);
+}
+
+static inline u32
+rd32_ephy(struct ngbe_hw *hw, u32 addr)
+{
+ u32 data;
+ wr32(hw, NGBE_EPHY_IDAADDR, addr);
+ data = rd32(hw, NGBE_EPHY_IDADATA);
+ return data;
+}
+
+static inline void
+wr32_ephy(struct ngbe_hw *hw, u32 addr, u32 data)
+{
+ wr32(hw, NGBE_EPHY_IDAADDR, addr);
+ wr32(hw, NGBE_EPHY_IDADATA, data);
+}
+
+#endif /* _NGBE_REGS_H_ */
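Illustration only (not part of this patch): to make the register helpers above more concrete, below is a minimal sketch of how wr32m() and po32m() combine to kick a mailbox command and wait for the firmware to acknowledge; the same pattern appears later in the series in ngbe_hic_unlocked(). The helper name is hypothetical, and the sketch assumes the ngbe base headers from this series are on the include path.

    #include "base/ngbe.h"  /* struct ngbe_hw, wr32m()/po32m(), register names */

    /* Hypothetical helper: set the software-ready bit, then poll for FWRDY. */
    static s32 ngbe_example_wait_fw_ready(struct ngbe_hw *hw)
    {
            u32 value = 0;

            /* set SWRDY without disturbing the other bits */
            wr32m(hw, NGBE_MNGMBXCTL, NGBE_MNGMBXCTL_SWRDY, NGBE_MNGMBXCTL_SWRDY);

            /* poll up to 100 * 1000us for FWRDY; po32m() returns 0 on timeout */
            if (!po32m(hw, NGBE_MNGMBXCTL, NGBE_MNGMBXCTL_FWRDY,
                       NGBE_MNGMBXCTL_FWRDY, &value, 100, 1000))
                    return -1; /* timed out */

            return 0;
    }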
--
2.21.0.windows.1
* [dpdk-dev] [PATCH v6 05/19] net/ngbe: set MAC type and LAN ID with device initialization
2021-06-17 10:59 [dpdk-dev] [PATCH v6 00/19] net: ngbe PMD Jiawen Wu
` (3 preceding siblings ...)
2021-06-17 10:59 ` [dpdk-dev] [PATCH v6 04/19] net/ngbe: define registers Jiawen Wu
@ 2021-06-17 10:59 ` Jiawen Wu
2021-07-02 16:05 ` Andrew Rybchenko
2021-06-17 10:59 ` [dpdk-dev] [PATCH v6 06/19] net/ngbe: init and validate EEPROM Jiawen Wu
` (13 subsequent siblings)
18 siblings, 1 reply; 42+ messages in thread
From: Jiawen Wu @ 2021-06-17 10:59 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu
Add basic init and uninit functions.
Map device IDs and subsystem IDs to a single ID for easier handling,
then initialize the shared code.
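For reference, the resulting initialization order in eth_ngbe_dev_init()
boils down to the sketch below, condensed from the ngbe_ethdev.c hunk in
this patch (error handling trimmed):

    hw->device_id = pci_dev->id.device_id;
    hw->vendor_id = pci_dev->id.vendor_id;
    hw->sub_system_id = pci_dev->id.subsystem_device_id;
    ngbe_map_device_id(hw);                  /* fold subsystem ID into a single ID */
    hw->hw_addr = (void *)pci_dev->mem_resource[0].addr;

    err = ngbe_init_shared_code(hw);         /* set MAC type, install ops, set LAN ID */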
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
drivers/net/ngbe/base/meson.build | 4 +-
drivers/net/ngbe/base/ngbe.h | 11 ++
drivers/net/ngbe/base/ngbe_dummy.h | 37 ++++++
drivers/net/ngbe/base/ngbe_hw.c | 182 ++++++++++++++++++++++++++++
drivers/net/ngbe/base/ngbe_hw.h | 18 +++
drivers/net/ngbe/base/ngbe_osdep.h | 183 +++++++++++++++++++++++++++++
drivers/net/ngbe/base/ngbe_type.h | 78 ++++++++++++
drivers/net/ngbe/ngbe_ethdev.c | 39 +++++-
drivers/net/ngbe/ngbe_ethdev.h | 19 ++-
9 files changed, 565 insertions(+), 6 deletions(-)
create mode 100644 drivers/net/ngbe/base/ngbe.h
create mode 100644 drivers/net/ngbe/base/ngbe_dummy.h
create mode 100644 drivers/net/ngbe/base/ngbe_hw.c
create mode 100644 drivers/net/ngbe/base/ngbe_hw.h
create mode 100644 drivers/net/ngbe/base/ngbe_osdep.h
create mode 100644 drivers/net/ngbe/base/ngbe_type.h
diff --git a/drivers/net/ngbe/base/meson.build b/drivers/net/ngbe/base/meson.build
index c5f6467743..fdbfa99916 100644
--- a/drivers/net/ngbe/base/meson.build
+++ b/drivers/net/ngbe/base/meson.build
@@ -1,7 +1,9 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
-sources = []
+sources = [
+ 'ngbe_hw.c',
+]
error_cflags = []
diff --git a/drivers/net/ngbe/base/ngbe.h b/drivers/net/ngbe/base/ngbe.h
new file mode 100644
index 0000000000..63fad12ad3
--- /dev/null
+++ b/drivers/net/ngbe/base/ngbe.h
@@ -0,0 +1,11 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+ */
+
+#ifndef _NGBE_H_
+#define _NGBE_H_
+
+#include "ngbe_type.h"
+#include "ngbe_hw.h"
+
+#endif /* _NGBE_H_ */
diff --git a/drivers/net/ngbe/base/ngbe_dummy.h b/drivers/net/ngbe/base/ngbe_dummy.h
new file mode 100644
index 0000000000..75b4e50bca
--- /dev/null
+++ b/drivers/net/ngbe/base/ngbe_dummy.h
@@ -0,0 +1,37 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+ */
+
+#ifndef _NGBE_TYPE_DUMMY_H_
+#define _NGBE_TYPE_DUMMY_H_
+
+#ifdef TUP
+#elif defined(__GNUC__)
+#define TUP(x) x##_unused ngbe_unused
+#elif defined(__LCLINT__)
+#define TUP(x) x /*@unused@*/
+#else
+#define TUP(x) x
+#endif /*TUP*/
+#define TUP0 TUP(p0)
+#define TUP1 TUP(p1)
+#define TUP2 TUP(p2)
+#define TUP3 TUP(p3)
+#define TUP4 TUP(p4)
+#define TUP5 TUP(p5)
+#define TUP6 TUP(p6)
+#define TUP7 TUP(p7)
+#define TUP8 TUP(p8)
+#define TUP9 TUP(p9)
+
+/* struct ngbe_bus_operations */
+static inline void ngbe_bus_set_lan_id_dummy(struct ngbe_hw *TUP0)
+{
+}
+static inline void ngbe_init_ops_dummy(struct ngbe_hw *hw)
+{
+ hw->bus.set_lan_id = ngbe_bus_set_lan_id_dummy;
+}
+
+#endif /* _NGBE_TYPE_DUMMY_H_ */
+
diff --git a/drivers/net/ngbe/base/ngbe_hw.c b/drivers/net/ngbe/base/ngbe_hw.c
new file mode 100644
index 0000000000..014bb0faee
--- /dev/null
+++ b/drivers/net/ngbe/base/ngbe_hw.c
@@ -0,0 +1,182 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+ * Copyright(c) 2010-2017 Intel Corporation
+ */
+
+#include "ngbe_type.h"
+#include "ngbe_hw.h"
+
+/**
+ * ngbe_set_lan_id_multi_port - Set LAN id for PCIe multiple port devices
+ * @hw: pointer to the HW structure
+ *
+ * Determines the LAN function ID by reading memory-mapped registers, swaps
+ * the port value if requested, and sets the MAC instance for the device.
+ **/
+void ngbe_set_lan_id_multi_port(struct ngbe_hw *hw)
+{
+ struct ngbe_bus_info *bus = &hw->bus;
+ u32 reg = 0;
+
+ DEBUGFUNC("ngbe_set_lan_id_multi_port");
+
+ reg = rd32(hw, NGBE_PORTSTAT);
+ bus->lan_id = NGBE_PORTSTAT_ID(reg);
+ bus->func = bus->lan_id;
+}
+
+/**
+ * ngbe_set_mac_type - Sets MAC type
+ * @hw: pointer to the HW structure
+ *
+ * This function sets the mac type of the adapter based on the
+ * vendor ID and device ID stored in the hw structure.
+ **/
+s32 ngbe_set_mac_type(struct ngbe_hw *hw)
+{
+ s32 err = 0;
+
+ DEBUGFUNC("ngbe_set_mac_type");
+
+ if (hw->vendor_id != PCI_VENDOR_ID_WANGXUN) {
+ DEBUGOUT("Unsupported vendor id: %x", hw->vendor_id);
+ return NGBE_ERR_DEVICE_NOT_SUPPORTED;
+ }
+
+ switch (hw->sub_device_id) {
+ case NGBE_SUB_DEV_ID_EM_RTL_SGMII:
+ case NGBE_SUB_DEV_ID_EM_MVL_RGMII:
+ hw->phy.media_type = ngbe_media_type_copper;
+ hw->mac.type = ngbe_mac_em;
+ break;
+ case NGBE_SUB_DEV_ID_EM_MVL_SFP:
+ case NGBE_SUB_DEV_ID_EM_YT8521S_SFP:
+ hw->phy.media_type = ngbe_media_type_fiber;
+ hw->mac.type = ngbe_mac_em;
+ break;
+ case NGBE_SUB_DEV_ID_EM_VF:
+ hw->phy.media_type = ngbe_media_type_virtual;
+ hw->mac.type = ngbe_mac_em_vf;
+ break;
+ default:
+ err = NGBE_ERR_DEVICE_NOT_SUPPORTED;
+ hw->phy.media_type = ngbe_media_type_unknown;
+ hw->mac.type = ngbe_mac_unknown;
+ DEBUGOUT("Unsupported device id: %x", hw->device_id);
+ break;
+ }
+
+ DEBUGOUT("found mac: %d media: %d, returns: %d\n",
+ hw->mac.type, hw->phy.media_type, err);
+ return err;
+}
+
+void ngbe_map_device_id(struct ngbe_hw *hw)
+{
+ u16 oem = hw->sub_system_id & NGBE_OEM_MASK;
+ u16 internal = hw->sub_system_id & NGBE_INTERNAL_MASK;
+ hw->is_pf = true;
+
+ /* move subsystem_device_id to device_id */
+ switch (hw->device_id) {
+ case NGBE_DEV_ID_EM_WX1860AL_W_VF:
+ case NGBE_DEV_ID_EM_WX1860A2_VF:
+ case NGBE_DEV_ID_EM_WX1860A2S_VF:
+ case NGBE_DEV_ID_EM_WX1860A4_VF:
+ case NGBE_DEV_ID_EM_WX1860A4S_VF:
+ case NGBE_DEV_ID_EM_WX1860AL2_VF:
+ case NGBE_DEV_ID_EM_WX1860AL2S_VF:
+ case NGBE_DEV_ID_EM_WX1860AL4_VF:
+ case NGBE_DEV_ID_EM_WX1860AL4S_VF:
+ case NGBE_DEV_ID_EM_WX1860NCSI_VF:
+ case NGBE_DEV_ID_EM_WX1860A1_VF:
+ case NGBE_DEV_ID_EM_WX1860A1L_VF:
+ hw->device_id = NGBE_DEV_ID_EM_VF;
+ hw->sub_device_id = NGBE_SUB_DEV_ID_EM_VF;
+ hw->is_pf = false;
+ break;
+ case NGBE_DEV_ID_EM_WX1860AL_W:
+ case NGBE_DEV_ID_EM_WX1860A2:
+ case NGBE_DEV_ID_EM_WX1860A2S:
+ case NGBE_DEV_ID_EM_WX1860A4:
+ case NGBE_DEV_ID_EM_WX1860A4S:
+ case NGBE_DEV_ID_EM_WX1860AL2:
+ case NGBE_DEV_ID_EM_WX1860AL2S:
+ case NGBE_DEV_ID_EM_WX1860AL4:
+ case NGBE_DEV_ID_EM_WX1860AL4S:
+ case NGBE_DEV_ID_EM_WX1860NCSI:
+ case NGBE_DEV_ID_EM_WX1860A1:
+ case NGBE_DEV_ID_EM_WX1860A1L:
+ hw->device_id = NGBE_DEV_ID_EM;
+ if (oem == NGBE_LY_M88E1512_SFP ||
+ internal == NGBE_INTERNAL_SFP)
+ hw->sub_device_id = NGBE_SUB_DEV_ID_EM_MVL_SFP;
+ else if (hw->sub_system_id == NGBE_SUB_DEV_ID_EM_M88E1512_RJ45)
+ hw->sub_device_id = NGBE_SUB_DEV_ID_EM_MVL_RGMII;
+ else if (oem == NGBE_YT8521S_SFP ||
+ oem == NGBE_LY_YT8521S_SFP)
+ hw->sub_device_id = NGBE_SUB_DEV_ID_EM_YT8521S_SFP;
+ else
+ hw->sub_device_id = NGBE_SUB_DEV_ID_EM_RTL_SGMII;
+ break;
+ default:
+ break;
+ }
+}
+
+/**
+ * ngbe_init_ops_pf - Inits func ptrs and MAC type
+ * @hw: pointer to hardware structure
+ *
+ * Initialize the function pointers and assign the MAC type.
+ * Does not touch the hardware.
+ **/
+s32 ngbe_init_ops_pf(struct ngbe_hw *hw)
+{
+ struct ngbe_bus_info *bus = &hw->bus;
+
+ DEBUGFUNC("ngbe_init_ops_pf");
+
+ /* BUS */
+ bus->set_lan_id = ngbe_set_lan_id_multi_port;
+
+ return 0;
+}
+
+/**
+ * ngbe_init_shared_code - Initialize the shared code
+ * @hw: pointer to hardware structure
+ *
+ * This will assign the function pointers and set the MAC type and PHY code.
+ * Does not touch the hardware. This function must be called prior to any
+ * other function in the shared code. The ngbe_hw structure should be
+ * memset to 0 prior to calling this function. The following fields in
+ * hw structure should be filled in prior to calling this function:
+ * hw_addr, back, device_id, vendor_id, subsystem_device_id
+ **/
+s32 ngbe_init_shared_code(struct ngbe_hw *hw)
+{
+ s32 status = 0;
+
+ DEBUGFUNC("ngbe_init_shared_code");
+
+ /*
+ * Set the mac type
+ */
+ ngbe_set_mac_type(hw);
+
+ ngbe_init_ops_dummy(hw);
+ switch (hw->mac.type) {
+ case ngbe_mac_em:
+ ngbe_init_ops_pf(hw);
+ break;
+ default:
+ status = NGBE_ERR_DEVICE_NOT_SUPPORTED;
+ break;
+ }
+
+ hw->bus.set_lan_id(hw);
+
+ return status;
+}
+
diff --git a/drivers/net/ngbe/base/ngbe_hw.h b/drivers/net/ngbe/base/ngbe_hw.h
new file mode 100644
index 0000000000..7d5de49248
--- /dev/null
+++ b/drivers/net/ngbe/base/ngbe_hw.h
@@ -0,0 +1,18 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+ * Copyright(c) 2010-2017 Intel Corporation
+ */
+
+#ifndef _NGBE_HW_H_
+#define _NGBE_HW_H_
+
+#include "ngbe_type.h"
+
+void ngbe_set_lan_id_multi_port(struct ngbe_hw *hw);
+
+s32 ngbe_init_shared_code(struct ngbe_hw *hw);
+s32 ngbe_set_mac_type(struct ngbe_hw *hw);
+s32 ngbe_init_ops_pf(struct ngbe_hw *hw);
+void ngbe_map_device_id(struct ngbe_hw *hw);
+
+#endif /* _NGBE_HW_H_ */
diff --git a/drivers/net/ngbe/base/ngbe_osdep.h b/drivers/net/ngbe/base/ngbe_osdep.h
new file mode 100644
index 0000000000..215a953081
--- /dev/null
+++ b/drivers/net/ngbe/base/ngbe_osdep.h
@@ -0,0 +1,183 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+ * Copyright(c) 2010-2017 Intel Corporation
+ */
+
+#ifndef _NGBE_OS_H_
+#define _NGBE_OS_H_
+
+#include <string.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <stdarg.h>
+#include <rte_version.h>
+#include <rte_common.h>
+#include <rte_debug.h>
+#include <rte_cycles.h>
+#include <rte_log.h>
+#include <rte_byteorder.h>
+#include <rte_config.h>
+#include <rte_io.h>
+#include <rte_ether.h>
+
+#include "../ngbe_logs.h"
+
+#define RTE_LIBRTE_NGBE_TM DCPV(1, 0)
+#define TMZ_PADDR(mz) ((mz)->iova)
+#define TMZ_VADDR(mz) ((mz)->addr)
+#define TDEV_NAME(eth_dev) ((eth_dev)->device->name)
+
+#define ASSERT(x) do { \
+ if (!(x)) \
+ PMD_DRV_LOG(ERR, "NGBE: %d", x); \
+} while (0)
+
+#define ngbe_unused __rte_unused
+
+#define usec_delay(x) rte_delay_us(x)
+#define msec_delay(x) rte_delay_ms(x)
+#define usleep(x) rte_delay_us(x)
+#define msleep(x) rte_delay_ms(x)
+
+#define FALSE 0
+#define TRUE 1
+
+#ifndef false
+#define false 0
+#endif
+#ifndef true
+#define true 1
+#endif
+#define min(a, b) RTE_MIN(a, b)
+#define max(a, b) RTE_MAX(a, b)
+
+/* Bunch of defines for shared code bogosity */
+
+static inline void UNREFERENCED(const char *a __rte_unused, ...) {}
+#define UNREFERENCED_PARAMETER(args...) UNREFERENCED("", ##args)
+
+#define STATIC static
+
+typedef uint8_t u8;
+typedef int8_t s8;
+typedef uint16_t u16;
+typedef int16_t s16;
+typedef uint32_t u32;
+typedef int32_t s32;
+typedef uint64_t u64;
+typedef int64_t s64;
+
+/* Little Endian defines */
+#ifndef __le16
+#define __le16 u16
+#define __le32 u32
+#define __le64 u64
+#endif
+#ifndef __be16
+#define __be16 u16
+#define __be32 u32
+#define __be64 u64
+#endif
+
+/* Bit shift and mask */
+#define BIT_MASK4 (0x0000000FU)
+#define BIT_MASK8 (0x000000FFU)
+#define BIT_MASK16 (0x0000FFFFU)
+#define BIT_MASK32 (0xFFFFFFFFU)
+#define BIT_MASK64 (0xFFFFFFFFFFFFFFFFUL)
+
+#ifndef cpu_to_le32
+#define cpu_to_le16(v) rte_cpu_to_le_16((u16)(v))
+#define cpu_to_le32(v) rte_cpu_to_le_32((u32)(v))
+#define cpu_to_le64(v) rte_cpu_to_le_64((u64)(v))
+#define le_to_cpu16(v) rte_le_to_cpu_16((u16)(v))
+#define le_to_cpu32(v) rte_le_to_cpu_32((u32)(v))
+#define le_to_cpu64(v) rte_le_to_cpu_64((u64)(v))
+
+#define cpu_to_be16(v) rte_cpu_to_be_16((u16)(v))
+#define cpu_to_be32(v) rte_cpu_to_be_32((u32)(v))
+#define cpu_to_be64(v) rte_cpu_to_be_64((u64)(v))
+#define be_to_cpu16(v) rte_be_to_cpu_16((u16)(v))
+#define be_to_cpu32(v) rte_be_to_cpu_32((u32)(v))
+#define be_to_cpu64(v) rte_be_to_cpu_64((u64)(v))
+
+#define le_to_be16(v) rte_bswap16((u16)(v))
+#define le_to_be32(v) rte_bswap32((u32)(v))
+#define le_to_be64(v) rte_bswap64((u64)(v))
+#define be_to_le16(v) rte_bswap16((u16)(v))
+#define be_to_le32(v) rte_bswap32((u32)(v))
+#define be_to_le64(v) rte_bswap64((u64)(v))
+
+#define npu_to_le16(v) (v)
+#define npu_to_le32(v) (v)
+#define npu_to_le64(v) (v)
+#define le_to_npu16(v) (v)
+#define le_to_npu32(v) (v)
+#define le_to_npu64(v) (v)
+
+#define npu_to_be16(v) le_to_be16((u16)(v))
+#define npu_to_be32(v) le_to_be32((u32)(v))
+#define npu_to_be64(v) le_to_be64((u64)(v))
+#define be_to_npu16(v) be_to_le16((u16)(v))
+#define be_to_npu32(v) be_to_le32((u32)(v))
+#define be_to_npu64(v) be_to_le64((u64)(v))
+#endif /* !cpu_to_le32 */
+
+static inline u16 REVERT_BIT_MASK16(u16 mask)
+{
+ mask = ((mask & 0x5555) << 1) | ((mask & 0xAAAA) >> 1);
+ mask = ((mask & 0x3333) << 2) | ((mask & 0xCCCC) >> 2);
+ mask = ((mask & 0x0F0F) << 4) | ((mask & 0xF0F0) >> 4);
+ return ((mask & 0x00FF) << 8) | ((mask & 0xFF00) >> 8);
+}
+
+static inline u32 REVERT_BIT_MASK32(u32 mask)
+{
+ mask = ((mask & 0x55555555) << 1) | ((mask & 0xAAAAAAAA) >> 1);
+ mask = ((mask & 0x33333333) << 2) | ((mask & 0xCCCCCCCC) >> 2);
+ mask = ((mask & 0x0F0F0F0F) << 4) | ((mask & 0xF0F0F0F0) >> 4);
+ mask = ((mask & 0x00FF00FF) << 8) | ((mask & 0xFF00FF00) >> 8);
+ return ((mask & 0x0000FFFF) << 16) | ((mask & 0xFFFF0000) >> 16);
+}
+
+static inline u64 REVERT_BIT_MASK64(u64 mask)
+{
+ mask = ((mask & 0x5555555555555555) << 1) |
+ ((mask & 0xAAAAAAAAAAAAAAAA) >> 1);
+ mask = ((mask & 0x3333333333333333) << 2) |
+ ((mask & 0xCCCCCCCCCCCCCCCC) >> 2);
+ mask = ((mask & 0x0F0F0F0F0F0F0F0F) << 4) |
+ ((mask & 0xF0F0F0F0F0F0F0F0) >> 4);
+ mask = ((mask & 0x00FF00FF00FF00FF) << 8) |
+ ((mask & 0xFF00FF00FF00FF00) >> 8);
+ mask = ((mask & 0x0000FFFF0000FFFF) << 16) |
+ ((mask & 0xFFFF0000FFFF0000) >> 16);
+ return ((mask & 0x00000000FFFFFFFF) << 32) |
+ ((mask & 0xFFFFFFFF00000000) >> 32);
+}
+
+#define IOMEM
+
+#define prefetch(x) rte_prefetch0(x)
+
+#define ARRAY_SIZE(x) ((int32_t)RTE_DIM(x))
+
+#ifndef MAX_UDELAY_MS
+#define MAX_UDELAY_MS 5
+#endif
+
+#define ETH_ADDR_LEN 6
+#define ETH_FCS_LEN 4
+
+/* Check whether an address is multicast. This is a little-endian-specific check. */
+#define NGBE_IS_MULTICAST(address) \
+ rte_is_multicast_ether_addr(address)
+
+/* Check whether an address is broadcast. */
+#define NGBE_IS_BROADCAST(address) \
+ rte_is_broadcast_ether_addr(address)
+
+#define ETH_P_8021Q 0x8100
+#define ETH_P_8021AD 0x88A8
+
+#endif /* _NGBE_OS_H_ */
diff --git a/drivers/net/ngbe/base/ngbe_type.h b/drivers/net/ngbe/base/ngbe_type.h
new file mode 100644
index 0000000000..d172f2702f
--- /dev/null
+++ b/drivers/net/ngbe/base/ngbe_type.h
@@ -0,0 +1,78 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+ * Copyright(c) 2010-2017 Intel Corporation
+ */
+
+#ifndef _NGBE_TYPE_H_
+#define _NGBE_TYPE_H_
+
+#include "ngbe_status.h"
+#include "ngbe_osdep.h"
+#include "ngbe_devids.h"
+
+enum ngbe_mac_type {
+ ngbe_mac_unknown = 0,
+ ngbe_mac_em,
+ ngbe_mac_em_vf,
+ ngbe_num_macs
+};
+
+enum ngbe_phy_type {
+ ngbe_phy_unknown = 0,
+ ngbe_phy_none,
+ ngbe_phy_rtl,
+ ngbe_phy_mvl,
+ ngbe_phy_mvl_sfi,
+ ngbe_phy_yt8521s,
+ ngbe_phy_yt8521s_sfi,
+ ngbe_phy_zte,
+ ngbe_phy_cu_mtd,
+};
+
+enum ngbe_media_type {
+ ngbe_media_type_unknown = 0,
+ ngbe_media_type_fiber,
+ ngbe_media_type_fiber_qsfp,
+ ngbe_media_type_copper,
+ ngbe_media_type_backplane,
+ ngbe_media_type_cx4,
+ ngbe_media_type_virtual
+};
+
+struct ngbe_hw;
+
+/* Bus parameters */
+struct ngbe_bus_info {
+ void (*set_lan_id)(struct ngbe_hw *hw);
+
+ u16 func;
+ u8 lan_id;
+};
+
+struct ngbe_mac_info {
+ enum ngbe_mac_type type;
+};
+
+struct ngbe_phy_info {
+ enum ngbe_media_type media_type;
+ enum ngbe_phy_type type;
+};
+
+struct ngbe_hw {
+ void IOMEM *hw_addr;
+ void *back;
+ struct ngbe_mac_info mac;
+ struct ngbe_phy_info phy;
+ struct ngbe_bus_info bus;
+ u16 device_id;
+ u16 vendor_id;
+ u16 sub_device_id;
+ u16 sub_system_id;
+
+ bool is_pf;
+};
+
+#include "ngbe_regs.h"
+#include "ngbe_dummy.h"
+
+#endif /* _NGBE_TYPE_H_ */
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index e05766752a..a355c7dc29 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -8,9 +8,11 @@
#include <ethdev_pci.h>
#include "ngbe_logs.h"
-#include <base/ngbe_devids.h>
+#include "base/ngbe.h"
#include "ngbe_ethdev.h"
+static int ngbe_dev_close(struct rte_eth_dev *dev);
+
/*
* The set of PCI devices this driver supports
*/
@@ -34,6 +36,8 @@ static int
eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ struct ngbe_hw *hw = ngbe_dev_hw(eth_dev);
+ int err;
PMD_INIT_FUNC_TRACE();
@@ -42,7 +46,21 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
rte_eth_copy_pci_info(eth_dev, pci_dev);
- return -EINVAL;
+ /* Vendor and Device ID need to be set before init of shared code */
+ hw->device_id = pci_dev->id.device_id;
+ hw->vendor_id = pci_dev->id.vendor_id;
+ hw->sub_system_id = pci_dev->id.subsystem_device_id;
+ ngbe_map_device_id(hw);
+ hw->hw_addr = (void *)pci_dev->mem_resource[0].addr;
+
+ /* Initialize the shared code (base driver) */
+ err = ngbe_init_shared_code(hw);
+ if (err != 0) {
+ PMD_INIT_LOG(ERR, "Shared code init failed: %d", err);
+ return -EIO;
+ }
+
+ return 0;
}
static int
@@ -53,9 +71,9 @@ eth_ngbe_dev_uninit(struct rte_eth_dev *eth_dev)
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
return 0;
- RTE_SET_USED(eth_dev);
+ ngbe_dev_close(eth_dev);
- return -EINVAL;
+ return 0;
}
static int
@@ -86,6 +104,19 @@ static struct rte_pci_driver rte_ngbe_pmd = {
.remove = eth_ngbe_pci_remove,
};
+/*
+ * Reset and stop device.
+ */
+static int
+ngbe_dev_close(struct rte_eth_dev *dev)
+{
+ PMD_INIT_FUNC_TRACE();
+
+ RTE_SET_USED(dev);
+
+ return -EINVAL;
+}
+
RTE_PMD_REGISTER_PCI(net_ngbe, rte_ngbe_pmd);
RTE_PMD_REGISTER_PCI_TABLE(net_ngbe, pci_id_ngbe_map);
RTE_PMD_REGISTER_KMOD_DEP(net_ngbe, "* igb_uio | uio_pci_generic | vfio-pci");
diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
index 38f55f7fb1..d4d02c6bd8 100644
--- a/drivers/net/ngbe/ngbe_ethdev.h
+++ b/drivers/net/ngbe/ngbe_ethdev.h
@@ -10,7 +10,24 @@
* Structure to store private data for each driver instance (for each port).
*/
struct ngbe_adapter {
- void *back;
+ struct ngbe_hw hw;
};
+static inline struct ngbe_adapter *
+ngbe_dev_adapter(struct rte_eth_dev *dev)
+{
+ struct ngbe_adapter *ad = dev->data->dev_private;
+
+ return ad;
+}
+
+static inline struct ngbe_hw *
+ngbe_dev_hw(struct rte_eth_dev *dev)
+{
+ struct ngbe_adapter *ad = ngbe_dev_adapter(dev);
+ struct ngbe_hw *hw = &ad->hw;
+
+ return hw;
+}
+
#endif /* _NGBE_ETHDEV_H_ */
--
2.21.0.windows.1
* [dpdk-dev] [PATCH v6 06/19] net/ngbe: init and validate EEPROM
2021-06-17 10:59 [dpdk-dev] [PATCH v6 00/19] net: ngbe PMD Jiawen Wu
` (4 preceding siblings ...)
2021-06-17 10:59 ` [dpdk-dev] [PATCH v6 05/19] net/ngbe: set MAC type and LAN ID with device initialization Jiawen Wu
@ 2021-06-17 10:59 ` Jiawen Wu
2021-06-17 10:59 ` [dpdk-dev] [PATCH v6 07/19] net/ngbe: add HW initialization Jiawen Wu
` (12 subsequent siblings)
18 siblings, 0 replies; 42+ messages in thread
From: Jiawen Wu @ 2021-06-17 10:59 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu
Reset the SWFW lock before NVM access, initialize the EEPROM, and
validate its checksum.
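Condensed from the host interface code added below, every management/NVM
access is bracketed by the software/firmware semaphore (error paths
trimmed; see ngbe_host_interface_command()):

    err = hw->mac.acquire_swfw_sync(hw, NGBE_MNGSEM_SWMBX);
    if (err)
            return err;

    /* ... talk to the manageability block / shadow RAM ... */

    hw->mac.release_swfw_sync(hw, NGBE_MNGSEM_SWMBX);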
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
drivers/net/ngbe/base/meson.build | 2 +
drivers/net/ngbe/base/ngbe_dummy.h | 23 ++++
drivers/net/ngbe/base/ngbe_eeprom.c | 203 ++++++++++++++++++++++++++++
drivers/net/ngbe/base/ngbe_eeprom.h | 17 +++
drivers/net/ngbe/base/ngbe_hw.c | 83 ++++++++++++
drivers/net/ngbe/base/ngbe_hw.h | 3 +
drivers/net/ngbe/base/ngbe_mng.c | 198 +++++++++++++++++++++++++++
drivers/net/ngbe/base/ngbe_mng.h | 65 +++++++++
drivers/net/ngbe/base/ngbe_type.h | 24 ++++
drivers/net/ngbe/ngbe_ethdev.c | 39 ++++++
10 files changed, 657 insertions(+)
create mode 100644 drivers/net/ngbe/base/ngbe_eeprom.c
create mode 100644 drivers/net/ngbe/base/ngbe_eeprom.h
create mode 100644 drivers/net/ngbe/base/ngbe_mng.c
create mode 100644 drivers/net/ngbe/base/ngbe_mng.h
diff --git a/drivers/net/ngbe/base/meson.build b/drivers/net/ngbe/base/meson.build
index fdbfa99916..ddd122ec45 100644
--- a/drivers/net/ngbe/base/meson.build
+++ b/drivers/net/ngbe/base/meson.build
@@ -2,7 +2,9 @@
# Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
sources = [
+ 'ngbe_eeprom.c',
'ngbe_hw.c',
+ 'ngbe_mng.c',
]
error_cflags = []
diff --git a/drivers/net/ngbe/base/ngbe_dummy.h b/drivers/net/ngbe/base/ngbe_dummy.h
index 75b4e50bca..ade03eae81 100644
--- a/drivers/net/ngbe/base/ngbe_dummy.h
+++ b/drivers/net/ngbe/base/ngbe_dummy.h
@@ -28,9 +28,32 @@
static inline void ngbe_bus_set_lan_id_dummy(struct ngbe_hw *TUP0)
{
}
+/* struct ngbe_rom_operations */
+static inline s32 ngbe_rom_init_params_dummy(struct ngbe_hw *TUP0)
+{
+ return NGBE_ERR_OPS_DUMMY;
+}
+static inline s32 ngbe_rom_validate_checksum_dummy(struct ngbe_hw *TUP0,
+ u16 *TUP1)
+{
+ return NGBE_ERR_OPS_DUMMY;
+}
+static inline s32 ngbe_mac_acquire_swfw_sync_dummy(struct ngbe_hw *TUP0,
+ u32 TUP1)
+{
+ return NGBE_ERR_OPS_DUMMY;
+}
+static inline void ngbe_mac_release_swfw_sync_dummy(struct ngbe_hw *TUP0,
+ u32 TUP1)
+{
+}
static inline void ngbe_init_ops_dummy(struct ngbe_hw *hw)
{
hw->bus.set_lan_id = ngbe_bus_set_lan_id_dummy;
+ hw->rom.init_params = ngbe_rom_init_params_dummy;
+ hw->rom.validate_checksum = ngbe_rom_validate_checksum_dummy;
+ hw->mac.acquire_swfw_sync = ngbe_mac_acquire_swfw_sync_dummy;
+ hw->mac.release_swfw_sync = ngbe_mac_release_swfw_sync_dummy;
}
#endif /* _NGBE_TYPE_DUMMY_H_ */
diff --git a/drivers/net/ngbe/base/ngbe_eeprom.c b/drivers/net/ngbe/base/ngbe_eeprom.c
new file mode 100644
index 0000000000..0ebbb7a29e
--- /dev/null
+++ b/drivers/net/ngbe/base/ngbe_eeprom.c
@@ -0,0 +1,203 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+ * Copyright(c) 2010-2017 Intel Corporation
+ */
+
+#include "ngbe_hw.h"
+#include "ngbe_mng.h"
+#include "ngbe_eeprom.h"
+
+/**
+ * ngbe_init_eeprom_params - Initialize EEPROM params
+ * @hw: pointer to hardware structure
+ *
+ * Initializes the EEPROM parameters ngbe_rom_info within the
+ * ngbe_hw struct in order to set up EEPROM access.
+ **/
+s32 ngbe_init_eeprom_params(struct ngbe_hw *hw)
+{
+ struct ngbe_rom_info *eeprom = &hw->rom;
+ u32 eec;
+ u16 eeprom_size;
+
+ DEBUGFUNC("ngbe_init_eeprom_params");
+
+ if (eeprom->type != ngbe_eeprom_unknown)
+ return 0;
+
+ eeprom->type = ngbe_eeprom_none;
+ /* Set default semaphore delay to 10ms which is a well
+ * tested value
+ */
+ eeprom->semaphore_delay = 10; /*ms*/
+ /* Clear EEPROM page size, it will be initialized as needed */
+ eeprom->word_page_size = 0;
+
+ /*
+ * Check for EEPROM present first.
+ * If not present leave as none
+ */
+ eec = rd32(hw, NGBE_SPISTAT);
+ if (!(eec & NGBE_SPISTAT_BPFLASH)) {
+ eeprom->type = ngbe_eeprom_flash;
+
+ /*
+ * SPI EEPROM is assumed here. This code would need to
+ * change if a future EEPROM is not SPI.
+ */
+ eeprom_size = 4096;
+ eeprom->word_size = eeprom_size >> 1;
+ }
+
+ eeprom->address_bits = 16;
+ eeprom->sw_addr = 0x80;
+
+ DEBUGOUT("eeprom params: type = %d, size = %d, address bits: "
+ "%d %d\n", eeprom->type, eeprom->word_size,
+ eeprom->address_bits, eeprom->sw_addr);
+
+ return 0;
+}
+
+/**
+ * ngbe_get_eeprom_semaphore - Get hardware semaphore
+ * @hw: pointer to hardware structure
+ *
+ * Sets the hardware semaphores so EEPROM access can occur for bit-bang method
+ **/
+s32 ngbe_get_eeprom_semaphore(struct ngbe_hw *hw)
+{
+ s32 status = NGBE_ERR_EEPROM;
+ u32 timeout = 2000;
+ u32 i;
+ u32 swsm;
+
+ DEBUGFUNC("ngbe_get_eeprom_semaphore");
+
+
+ /* Get SMBI software semaphore between device drivers first */
+ for (i = 0; i < timeout; i++) {
+ /*
+ * If the SMBI bit is 0 when we read it, then the bit will be
+ * set and we have the semaphore
+ */
+ swsm = rd32(hw, NGBE_SWSEM);
+ if (!(swsm & NGBE_SWSEM_PF)) {
+ status = 0;
+ break;
+ }
+ usec_delay(50);
+ }
+
+ if (i == timeout) {
+ DEBUGOUT("Driver can't access the eeprom - SMBI Semaphore "
+ "not granted.\n");
+ /*
+ * this release is particularly important because our attempts
+ * above to get the semaphore may have succeeded, and if there
+ * was a timeout, we should unconditionally clear the semaphore
+ * bits to free the driver to make progress
+ */
+ ngbe_release_eeprom_semaphore(hw);
+
+ usec_delay(50);
+ /*
+ * one last try
+ * If the SMBI bit is 0 when we read it, then the bit will be
+ * set and we have the semaphore
+ */
+ swsm = rd32(hw, NGBE_SWSEM);
+ if (!(swsm & NGBE_SWSEM_PF))
+ status = 0;
+ }
+
+ /* Now get the semaphore between SW/FW through the SWESMBI bit */
+ if (status == 0) {
+ for (i = 0; i < timeout; i++) {
+ /* Set the SW EEPROM semaphore bit to request access */
+ wr32m(hw, NGBE_MNGSWSYNC,
+ NGBE_MNGSWSYNC_REQ, NGBE_MNGSWSYNC_REQ);
+
+ /*
+ * If we set the bit successfully then we got the
+ * semaphore.
+ */
+ swsm = rd32(hw, NGBE_MNGSWSYNC);
+ if (swsm & NGBE_MNGSWSYNC_REQ)
+ break;
+
+ usec_delay(50);
+ }
+
+ /*
+ * Release semaphores and return error if SW EEPROM semaphore
+ * was not granted because we don't have access to the EEPROM
+ */
+ if (i >= timeout) {
+ DEBUGOUT("SWESMBI Software EEPROM semaphore not granted.\n");
+ ngbe_release_eeprom_semaphore(hw);
+ status = NGBE_ERR_EEPROM;
+ }
+ } else {
+ DEBUGOUT("Software semaphore SMBI between device drivers "
+ "not granted.\n");
+ }
+
+ return status;
+}
+
+/**
+ * ngbe_release_eeprom_semaphore - Release hardware semaphore
+ * @hw: pointer to hardware structure
+ *
+ * This function clears hardware semaphore bits.
+ **/
+void ngbe_release_eeprom_semaphore(struct ngbe_hw *hw)
+{
+ DEBUGFUNC("ngbe_release_eeprom_semaphore");
+
+ wr32m(hw, NGBE_MNGSWSYNC, NGBE_MNGSWSYNC_REQ, 0);
+ wr32m(hw, NGBE_SWSEM, NGBE_SWSEM_PF, 0);
+ ngbe_flush(hw);
+}
+
+/**
+ * ngbe_validate_eeprom_checksum_em - Validate EEPROM checksum
+ * @hw: pointer to hardware structure
+ * @checksum_val: calculated checksum
+ *
+ * Performs checksum calculation and validates the EEPROM checksum. If the
+ * caller does not need checksum_val, the value can be NULL.
+ **/
+s32 ngbe_validate_eeprom_checksum_em(struct ngbe_hw *hw,
+ u16 *checksum_val)
+{
+ u32 eeprom_cksum_devcap = 0;
+ int err = 0;
+
+ DEBUGFUNC("ngbe_validate_eeprom_checksum_em");
+ UNREFERENCED_PARAMETER(checksum_val);
+
+ /* Check EEPROM only once */
+ if (hw->bus.lan_id == 0) {
+ wr32(hw, NGBE_CALSUM_CAP_STATUS, 0x0);
+ wr32(hw, NGBE_EEPROM_VERSION_STORE_REG, 0x0);
+ } else {
+ eeprom_cksum_devcap = rd32(hw, NGBE_CALSUM_CAP_STATUS);
+ hw->rom.saved_version = rd32(hw, NGBE_EEPROM_VERSION_STORE_REG);
+ }
+
+ if (hw->bus.lan_id == 0 || eeprom_cksum_devcap == 0) {
+ err = ngbe_hic_check_cap(hw);
+ if (err != 0) {
+ PMD_INIT_LOG(ERR,
+ "The EEPROM checksum is not valid: %d", err);
+ return -EIO;
+ }
+ }
+
+ hw->rom.cksum_devcap = eeprom_cksum_devcap & 0xffff;
+
+ return err;
+}
+
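Illustration only (not part of the diff): the two EEPROM ops installed by
this patch are reached through the hw->rom function table; per the comment
on ngbe_validate_eeprom_checksum_em(), the checksum value may be NULL when
it is not needed. The wrapper name below is hypothetical.

    static s32 ngbe_example_check_eeprom(struct ngbe_hw *hw)
    {
            s32 err = hw->rom.init_params(hw);        /* ngbe_init_eeprom_params */

            if (err == 0)
                    err = hw->rom.validate_checksum(hw, NULL); /* value unused */

            return err;
    }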
diff --git a/drivers/net/ngbe/base/ngbe_eeprom.h b/drivers/net/ngbe/base/ngbe_eeprom.h
new file mode 100644
index 0000000000..0c2819df4a
--- /dev/null
+++ b/drivers/net/ngbe/base/ngbe_eeprom.h
@@ -0,0 +1,17 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+ * Copyright(c) 2010-2017 Intel Corporation
+ */
+
+#ifndef _NGBE_EEPROM_H_
+#define _NGBE_EEPROM_H_
+
+#define NGBE_CALSUM_CAP_STATUS 0x10224
+#define NGBE_EEPROM_VERSION_STORE_REG 0x1022C
+
+s32 ngbe_init_eeprom_params(struct ngbe_hw *hw);
+s32 ngbe_validate_eeprom_checksum_em(struct ngbe_hw *hw, u16 *checksum_val);
+s32 ngbe_get_eeprom_semaphore(struct ngbe_hw *hw);
+void ngbe_release_eeprom_semaphore(struct ngbe_hw *hw);
+
+#endif /* _NGBE_EEPROM_H_ */
diff --git a/drivers/net/ngbe/base/ngbe_hw.c b/drivers/net/ngbe/base/ngbe_hw.c
index 014bb0faee..de6b75e1c0 100644
--- a/drivers/net/ngbe/base/ngbe_hw.c
+++ b/drivers/net/ngbe/base/ngbe_hw.c
@@ -4,6 +4,8 @@
*/
#include "ngbe_type.h"
+#include "ngbe_eeprom.h"
+#include "ngbe_mng.h"
#include "ngbe_hw.h"
/**
@@ -25,6 +27,77 @@ void ngbe_set_lan_id_multi_port(struct ngbe_hw *hw)
bus->func = bus->lan_id;
}
+/**
+ * ngbe_acquire_swfw_sync - Acquire SWFW semaphore
+ * @hw: pointer to hardware structure
+ * @mask: Mask to specify which semaphore to acquire
+ *
+ * Acquires the SWFW semaphore through the MNGSEM register for the specified
+ * function (CSR, PHY0, PHY1, EEPROM, Flash)
+ **/
+s32 ngbe_acquire_swfw_sync(struct ngbe_hw *hw, u32 mask)
+{
+ u32 mngsem = 0;
+ u32 swmask = NGBE_MNGSEM_SW(mask);
+ u32 fwmask = NGBE_MNGSEM_FW(mask);
+ u32 timeout = 200;
+ u32 i;
+
+ DEBUGFUNC("ngbe_acquire_swfw_sync");
+
+ for (i = 0; i < timeout; i++) {
+ /*
+ * SW NVM semaphore bit is used for access to all
+ * SW_FW_SYNC bits (not just NVM)
+ */
+ if (ngbe_get_eeprom_semaphore(hw))
+ return NGBE_ERR_SWFW_SYNC;
+
+ mngsem = rd32(hw, NGBE_MNGSEM);
+ if (mngsem & (fwmask | swmask)) {
+ /* Resource is currently in use by FW or SW */
+ ngbe_release_eeprom_semaphore(hw);
+ msec_delay(5);
+ } else {
+ mngsem |= swmask;
+ wr32(hw, NGBE_MNGSEM, mngsem);
+ ngbe_release_eeprom_semaphore(hw);
+ return 0;
+ }
+ }
+
+ /* If time expired clear the bits holding the lock and retry */
+ if (mngsem & (fwmask | swmask))
+ ngbe_release_swfw_sync(hw, mngsem & (fwmask | swmask));
+
+ msec_delay(5);
+ return NGBE_ERR_SWFW_SYNC;
+}
+
+/**
+ * ngbe_release_swfw_sync - Release SWFW semaphore
+ * @hw: pointer to hardware structure
+ * @mask: Mask to specify which semaphore to release
+ *
+ * Releases the SWFW semaphore through the MNGSEM register for the specified
+ * function (CSR, PHY0, PHY1, EEPROM, Flash)
+ **/
+void ngbe_release_swfw_sync(struct ngbe_hw *hw, u32 mask)
+{
+ u32 mngsem;
+ u32 swmask = mask;
+
+ DEBUGFUNC("ngbe_release_swfw_sync");
+
+ ngbe_get_eeprom_semaphore(hw);
+
+ mngsem = rd32(hw, NGBE_MNGSEM);
+ mngsem &= ~swmask;
+ wr32(hw, NGBE_MNGSEM, mngsem);
+
+ ngbe_release_eeprom_semaphore(hw);
+}
+
/**
* ngbe_set_mac_type - Sets MAC type
* @hw: pointer to the HW structure
@@ -134,12 +207,22 @@ void ngbe_map_device_id(struct ngbe_hw *hw)
s32 ngbe_init_ops_pf(struct ngbe_hw *hw)
{
struct ngbe_bus_info *bus = &hw->bus;
+ struct ngbe_mac_info *mac = &hw->mac;
+ struct ngbe_rom_info *rom = &hw->rom;
DEBUGFUNC("ngbe_init_ops_pf");
/* BUS */
bus->set_lan_id = ngbe_set_lan_id_multi_port;
+ /* MAC */
+ mac->acquire_swfw_sync = ngbe_acquire_swfw_sync;
+ mac->release_swfw_sync = ngbe_release_swfw_sync;
+
+ /* EEPROM */
+ rom->init_params = ngbe_init_eeprom_params;
+ rom->validate_checksum = ngbe_validate_eeprom_checksum_em;
+
return 0;
}
diff --git a/drivers/net/ngbe/base/ngbe_hw.h b/drivers/net/ngbe/base/ngbe_hw.h
index 7d5de49248..5e508fb67f 100644
--- a/drivers/net/ngbe/base/ngbe_hw.h
+++ b/drivers/net/ngbe/base/ngbe_hw.h
@@ -10,6 +10,9 @@
void ngbe_set_lan_id_multi_port(struct ngbe_hw *hw);
+s32 ngbe_acquire_swfw_sync(struct ngbe_hw *hw, u32 mask);
+void ngbe_release_swfw_sync(struct ngbe_hw *hw, u32 mask);
+
s32 ngbe_init_shared_code(struct ngbe_hw *hw);
s32 ngbe_set_mac_type(struct ngbe_hw *hw);
s32 ngbe_init_ops_pf(struct ngbe_hw *hw);
diff --git a/drivers/net/ngbe/base/ngbe_mng.c b/drivers/net/ngbe/base/ngbe_mng.c
new file mode 100644
index 0000000000..87891a91e1
--- /dev/null
+++ b/drivers/net/ngbe/base/ngbe_mng.c
@@ -0,0 +1,198 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+ * Copyright(c) 2010-2017 Intel Corporation
+ */
+
+#include "ngbe_type.h"
+#include "ngbe_mng.h"
+
+/**
+ * ngbe_hic_unlocked - Issue command to manageability block unlocked
+ * @hw: pointer to the HW structure
+ * @buffer: command to write and where the return status will be placed
+ * @length: length of buffer, must be multiple of 4 bytes
+ * @timeout: time in ms to wait for command completion
+ *
+ * Communicates with the manageability block. On success return 0
+ * else returns semaphore error when encountering an error acquiring
+ * semaphore or NGBE_ERR_HOST_INTERFACE_COMMAND when command fails.
+ *
+ * This function assumes that the NGBE_MNGSEM_SWMBX semaphore is held
+ * by the caller.
+ **/
+static s32
+ngbe_hic_unlocked(struct ngbe_hw *hw, u32 *buffer, u32 length, u32 timeout)
+{
+ u32 value, loop;
+ u16 i, dword_len;
+
+ DEBUGFUNC("ngbe_hic_unlocked");
+
+ if (!length || length > NGBE_PMMBX_BSIZE) {
+ DEBUGOUT("Buffer length failure buffersize=%d.\n", length);
+ return NGBE_ERR_HOST_INTERFACE_COMMAND;
+ }
+
+ /* Calculate length in DWORDs. We must be DWORD aligned */
+ if (length % sizeof(u32)) {
+ DEBUGOUT("Buffer length failure, not aligned to dword");
+ return NGBE_ERR_INVALID_ARGUMENT;
+ }
+
+ dword_len = length >> 2;
+
+ /* The device driver writes the relevant command block
+ * into the ram area.
+ */
+ for (i = 0; i < dword_len; i++) {
+ wr32a(hw, NGBE_MNGMBX, i, cpu_to_le32(buffer[i]));
+ buffer[i] = rd32a(hw, NGBE_MNGMBX, i);
+ }
+ ngbe_flush(hw);
+
+ /* Setting this bit tells the ARC that a new command is pending. */
+ wr32m(hw, NGBE_MNGMBXCTL,
+ NGBE_MNGMBXCTL_SWRDY, NGBE_MNGMBXCTL_SWRDY);
+
+ /* Check command completion */
+ loop = po32m(hw, NGBE_MNGMBXCTL,
+ NGBE_MNGMBXCTL_FWRDY, NGBE_MNGMBXCTL_FWRDY,
+ &value, timeout, 1000);
+ if (!loop || !(value & NGBE_MNGMBXCTL_FWACK)) {
+ DEBUGOUT("Command has failed with no status valid.\n");
+ return NGBE_ERR_HOST_INTERFACE_COMMAND;
+ }
+
+ return 0;
+}
+
+/**
+ * ngbe_host_interface_command - Issue command to manageability block
+ * @hw: pointer to the HW structure
+ * @buffer: contains the command to write and where the return status will
+ * be placed
+ * @length: length of buffer, must be multiple of 4 bytes
+ * @timeout: time in ms to wait for command completion
+ * @return_data: read and return data from the buffer (true) or not (false)
+ * Needed because FW structures are big endian and decoding of
+ * these fields can be 8 bit or 16 bit based on command. Decoding
+ * is not easily understood without making a table of commands.
+ * So we will leave this up to the caller to read back the data
+ * in these cases.
+ *
+ * Communicates with the manageability block. On success return 0
+ * else returns semaphore error when encountering an error acquiring
+ * semaphore or NGBE_ERR_HOST_INTERFACE_COMMAND when command fails.
+ **/
+static s32
+ngbe_host_interface_command(struct ngbe_hw *hw, u32 *buffer,
+ u32 length, u32 timeout, bool return_data)
+{
+ u32 hdr_size = sizeof(struct ngbe_hic_hdr);
+ struct ngbe_hic_hdr *resp = (struct ngbe_hic_hdr *)buffer;
+ u16 buf_len;
+ s32 err;
+ u32 bi;
+ u32 dword_len;
+
+ DEBUGFUNC("ngbe_host_interface_command");
+
+ if (length == 0 || length > NGBE_PMMBX_BSIZE) {
+ DEBUGOUT("Buffer length failure buffersize=%d.\n", length);
+ return NGBE_ERR_HOST_INTERFACE_COMMAND;
+ }
+
+ /* Take management host interface semaphore */
+ err = hw->mac.acquire_swfw_sync(hw, NGBE_MNGSEM_SWMBX);
+ if (err)
+ return err;
+
+ err = ngbe_hic_unlocked(hw, buffer, length, timeout);
+ if (err)
+ goto rel_out;
+
+ if (!return_data)
+ goto rel_out;
+
+ /* Calculate length in DWORDs */
+ dword_len = hdr_size >> 2;
+
+ /* first pull in the header so we know the buffer length */
+ for (bi = 0; bi < dword_len; bi++)
+ buffer[bi] = rd32a(hw, NGBE_MNGMBX, bi);
+
+ /*
+ * If there is anything in the data position, pull it in.
+ * The Read Flash command requires reading the buffer length
+ * from two bytes instead of one byte.
+ */
+ if (resp->cmd == 0x30) {
+ for (; bi < dword_len + 2; bi++)
+ buffer[bi] = rd32a(hw, NGBE_MNGMBX, bi);
+
+ buf_len = (((u16)(resp->cmd_or_resp.ret_status) << 3)
+ & 0xF00) | resp->buf_len;
+ hdr_size += (2 << 2);
+ } else {
+ buf_len = resp->buf_len;
+ }
+ if (!buf_len)
+ goto rel_out;
+
+ if (length < buf_len + hdr_size) {
+ DEBUGOUT("Buffer not large enough for reply message.\n");
+ err = NGBE_ERR_HOST_INTERFACE_COMMAND;
+ goto rel_out;
+ }
+
+ /* Calculate length in DWORDs, add 3 for odd lengths */
+ dword_len = (buf_len + 3) >> 2;
+
+ /* Pull in the rest of the buffer (bi is where we left off) */
+ for (; bi <= dword_len; bi++)
+ buffer[bi] = rd32a(hw, NGBE_MNGMBX, bi);
+
+rel_out:
+ hw->mac.release_swfw_sync(hw, NGBE_MNGSEM_SWMBX);
+
+ return err;
+}
+
+s32 ngbe_hic_check_cap(struct ngbe_hw *hw)
+{
+ struct ngbe_hic_read_shadow_ram command;
+ s32 err;
+ int i;
+
+ DEBUGFUNC("ngbe_hic_check_cap");
+
+ command.hdr.req.cmd = FW_EEPROM_CHECK_STATUS;
+ command.hdr.req.buf_lenh = 0;
+ command.hdr.req.buf_lenl = 0;
+ command.hdr.req.checksum = FW_DEFAULT_CHECKSUM;
+
+ /* convert offset from words to bytes */
+ command.address = 0;
+ /* one word */
+ command.length = 0;
+
+ for (i = 0; i <= FW_CEM_MAX_RETRIES; i++) {
+ err = ngbe_host_interface_command(hw, (u32 *)&command,
+ sizeof(command),
+ NGBE_HI_COMMAND_TIMEOUT, true);
+ if (err)
+ continue;
+
+ command.hdr.rsp.ret_status &= 0x1F;
+ if (command.hdr.rsp.ret_status !=
+ FW_CEM_RESP_STATUS_SUCCESS)
+ err = NGBE_ERR_HOST_INTERFACE_COMMAND;
+
+ break;
+ }
+
+ if (!err && command.address != FW_CHECKSUM_CAP_ST_PASS)
+ err = NGBE_ERR_EEPROM_CHECKSUM;
+
+ return err;
+}
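
For reference, a rough sketch of how an EEPROM checksum path might consume ngbe_hic_check_cap() (illustrative only; the helper name is made up):

    static s32 example_validate_checksum_via_fw(struct ngbe_hw *hw)
    {
    	/* ask the management FW whether it has already verified the
    	 * shadow RAM checksum; treat anything else as a checksum error
    	 */
    	if (ngbe_hic_check_cap(hw) == 0)
    		return 0;

    	return NGBE_ERR_EEPROM_CHECKSUM;
    }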
diff --git a/drivers/net/ngbe/base/ngbe_mng.h b/drivers/net/ngbe/base/ngbe_mng.h
new file mode 100644
index 0000000000..383e0dc0d1
--- /dev/null
+++ b/drivers/net/ngbe/base/ngbe_mng.h
@@ -0,0 +1,65 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+ * Copyright(c) 2010-2017 Intel Corporation
+ */
+
+#ifndef _NGBE_MNG_H_
+#define _NGBE_MNG_H_
+
+#include "ngbe_type.h"
+
+#define NGBE_PMMBX_QSIZE 64 /* Num of dwords in range */
+#define NGBE_PMMBX_BSIZE (NGBE_PMMBX_QSIZE * 4)
+#define NGBE_HI_COMMAND_TIMEOUT 5000 /* Process HI command limit */
+
+/* CEM Support */
+#define FW_CEM_MAX_RETRIES 3
+#define FW_CEM_RESP_STATUS_SUCCESS 0x1
+#define FW_DEFAULT_CHECKSUM 0xFF /* checksum always 0xFF */
+#define FW_EEPROM_CHECK_STATUS 0xE9
+
+#define FW_CHECKSUM_CAP_ST_PASS 0x80658383
+#define FW_CHECKSUM_CAP_ST_FAIL 0x70657376
+
+/* Host Interface Command Structures */
+struct ngbe_hic_hdr {
+ u8 cmd;
+ u8 buf_len;
+ union {
+ u8 cmd_resv;
+ u8 ret_status;
+ } cmd_or_resp;
+ u8 checksum;
+};
+
+struct ngbe_hic_hdr2_req {
+ u8 cmd;
+ u8 buf_lenh;
+ u8 buf_lenl;
+ u8 checksum;
+};
+
+struct ngbe_hic_hdr2_rsp {
+ u8 cmd;
+ u8 buf_lenl;
+ u8 ret_status; /* 7-5: high bits of buf_len, 4-0: status */
+ u8 checksum;
+};
+
+union ngbe_hic_hdr2 {
+ struct ngbe_hic_hdr2_req req;
+ struct ngbe_hic_hdr2_rsp rsp;
+};
+
+/* These need to be dword aligned */
+struct ngbe_hic_read_shadow_ram {
+ union ngbe_hic_hdr2 hdr;
+ u32 address;
+ u16 length;
+ u16 pad2;
+ u16 data;
+ u16 pad3;
+};
+
+s32 ngbe_hic_check_cap(struct ngbe_hw *hw);
+#endif /* _NGBE_MNG_H_ */
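
As a worked example, the buffer length in ngbe_hic_hdr2_rsp is split across buf_lenl and bits 7-5 of ret_status; a small helper that undoes the same split as ngbe_host_interface_command() does for the read-flash command could look like this (illustrative only):

    static u16 example_hic_hdr2_buf_len(const struct ngbe_hic_hdr2_rsp *rsp)
    {
    	/* bits 7-5 of ret_status become bits 10-8 of the length */
    	return (((u16)rsp->ret_status << 3) & 0xF00) | rsp->buf_lenl;
    }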
diff --git a/drivers/net/ngbe/base/ngbe_type.h b/drivers/net/ngbe/base/ngbe_type.h
index d172f2702f..c727338bd5 100644
--- a/drivers/net/ngbe/base/ngbe_type.h
+++ b/drivers/net/ngbe/base/ngbe_type.h
@@ -10,6 +10,13 @@
#include "ngbe_osdep.h"
#include "ngbe_devids.h"
+enum ngbe_eeprom_type {
+ ngbe_eeprom_unknown = 0,
+ ngbe_eeprom_spi,
+ ngbe_eeprom_flash,
+ ngbe_eeprom_none /* No NVM support */
+};
+
enum ngbe_mac_type {
ngbe_mac_unknown = 0,
ngbe_mac_em,
@@ -49,7 +56,23 @@ struct ngbe_bus_info {
u8 lan_id;
};
+struct ngbe_rom_info {
+ s32 (*init_params)(struct ngbe_hw *hw);
+ s32 (*validate_checksum)(struct ngbe_hw *hw, u16 *checksum_val);
+
+ enum ngbe_eeprom_type type;
+ u32 semaphore_delay;
+ u16 word_size;
+ u16 address_bits;
+ u16 word_page_size;
+ u32 sw_addr;
+ u32 saved_version;
+ u16 cksum_devcap;
+};
+
struct ngbe_mac_info {
+ s32 (*acquire_swfw_sync)(struct ngbe_hw *hw, u32 mask);
+ void (*release_swfw_sync)(struct ngbe_hw *hw, u32 mask);
enum ngbe_mac_type type;
};
@@ -63,6 +86,7 @@ struct ngbe_hw {
void *back;
struct ngbe_mac_info mac;
struct ngbe_phy_info phy;
+ struct ngbe_rom_info rom;
struct ngbe_bus_info bus;
u16 device_id;
u16 vendor_id;
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index a355c7dc29..c779c46dd5 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -32,6 +32,29 @@ static const struct rte_pci_id pci_id_ngbe_map[] = {
{ .vendor_id = 0, /* sentinel */ },
};
+/*
+ * Ensure that all locks are released before first NVM or PHY access
+ */
+static void
+ngbe_swfw_lock_reset(struct ngbe_hw *hw)
+{
+ uint16_t mask;
+
+ /*
+ * These ones are more tricky since they are common to all ports; but
+ * swfw_sync retries last long enough (1s) to be almost sure that if
+ * lock can not be taken it is due to an improper lock of the
+ * semaphore.
+ */
+ mask = NGBE_MNGSEM_SWPHY |
+ NGBE_MNGSEM_SWMBX |
+ NGBE_MNGSEM_SWFLASH;
+ if (hw->mac.acquire_swfw_sync(hw, mask) < 0)
+ PMD_DRV_LOG(DEBUG, "SWFW common locks released");
+
+ hw->mac.release_swfw_sync(hw, mask);
+}
+
static int
eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
{
@@ -60,6 +83,22 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
return -EIO;
}
+ /* Unlock any pending hardware semaphore */
+ ngbe_swfw_lock_reset(hw);
+
+ err = hw->rom.init_params(hw);
+ if (err != 0) {
+ PMD_INIT_LOG(ERR, "The EEPROM init failed: %d", err);
+ return -EIO;
+ }
+
+ /* Make sure we have a good EEPROM before we read from it */
+ err = hw->rom.validate_checksum(hw, NULL);
+ if (err != 0) {
+ PMD_INIT_LOG(ERR, "The EEPROM checksum is not valid: %d", err);
+ return -EIO;
+ }
+
return 0;
}
--
2.21.0.windows.1
^ permalink raw reply [flat|nested] 42+ messages in thread
* [dpdk-dev] [PATCH v6 07/19] net/ngbe: add HW initialization
2021-06-17 10:59 [dpdk-dev] [PATCH v6 00/19] net: ngbe PMD Jiawen Wu
` (5 preceding siblings ...)
2021-06-17 10:59 ` [dpdk-dev] [PATCH v6 06/19] net/ngbe: init and validate EEPROM Jiawen Wu
@ 2021-06-17 10:59 ` Jiawen Wu
2021-07-02 16:08 ` Andrew Rybchenko
2021-06-17 10:59 ` [dpdk-dev] [PATCH v6 08/19] net/ngbe: identify PHY and reset PHY Jiawen Wu
` (11 subsequent siblings)
18 siblings, 1 reply; 42+ messages in thread
From: Jiawen Wu @ 2021-06-17 10:59 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu
Initialize the hardware by resetting it in base code.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
drivers/net/ngbe/base/ngbe_dummy.h | 21 +++
drivers/net/ngbe/base/ngbe_hw.c | 235 +++++++++++++++++++++++++++++
drivers/net/ngbe/base/ngbe_hw.h | 9 ++
drivers/net/ngbe/base/ngbe_type.h | 30 ++++
drivers/net/ngbe/ngbe_ethdev.c | 17 +++
5 files changed, 312 insertions(+)
diff --git a/drivers/net/ngbe/base/ngbe_dummy.h b/drivers/net/ngbe/base/ngbe_dummy.h
index ade03eae81..d0081acc2b 100644
--- a/drivers/net/ngbe/base/ngbe_dummy.h
+++ b/drivers/net/ngbe/base/ngbe_dummy.h
@@ -38,6 +38,19 @@ static inline s32 ngbe_rom_validate_checksum_dummy(struct ngbe_hw *TUP0,
{
return NGBE_ERR_OPS_DUMMY;
}
+/* struct ngbe_mac_operations */
+static inline s32 ngbe_mac_init_hw_dummy(struct ngbe_hw *TUP0)
+{
+ return NGBE_ERR_OPS_DUMMY;
+}
+static inline s32 ngbe_mac_reset_hw_dummy(struct ngbe_hw *TUP0)
+{
+ return NGBE_ERR_OPS_DUMMY;
+}
+static inline s32 ngbe_mac_stop_hw_dummy(struct ngbe_hw *TUP0)
+{
+ return NGBE_ERR_OPS_DUMMY;
+}
static inline s32 ngbe_mac_acquire_swfw_sync_dummy(struct ngbe_hw *TUP0,
u32 TUP1)
{
@@ -47,13 +60,21 @@ static inline void ngbe_mac_release_swfw_sync_dummy(struct ngbe_hw *TUP0,
u32 TUP1)
{
}
+static inline s32 ngbe_mac_init_thermal_ssth_dummy(struct ngbe_hw *TUP0)
+{
+ return NGBE_ERR_OPS_DUMMY;
+}
static inline void ngbe_init_ops_dummy(struct ngbe_hw *hw)
{
hw->bus.set_lan_id = ngbe_bus_set_lan_id_dummy;
hw->rom.init_params = ngbe_rom_init_params_dummy;
hw->rom.validate_checksum = ngbe_rom_validate_checksum_dummy;
+ hw->mac.init_hw = ngbe_mac_init_hw_dummy;
+ hw->mac.reset_hw = ngbe_mac_reset_hw_dummy;
+ hw->mac.stop_hw = ngbe_mac_stop_hw_dummy;
hw->mac.acquire_swfw_sync = ngbe_mac_acquire_swfw_sync_dummy;
hw->mac.release_swfw_sync = ngbe_mac_release_swfw_sync_dummy;
+ hw->mac.init_thermal_sensor_thresh = ngbe_mac_init_thermal_ssth_dummy;
}
#endif /* _NGBE_TYPE_DUMMY_H_ */
diff --git a/drivers/net/ngbe/base/ngbe_hw.c b/drivers/net/ngbe/base/ngbe_hw.c
index de6b75e1c0..9fa40f7de1 100644
--- a/drivers/net/ngbe/base/ngbe_hw.c
+++ b/drivers/net/ngbe/base/ngbe_hw.c
@@ -8,6 +8,133 @@
#include "ngbe_mng.h"
#include "ngbe_hw.h"
+/**
+ * ngbe_init_hw - Generic hardware initialization
+ * @hw: pointer to hardware structure
+ *
+ * Initialize the hardware by resetting the hardware, filling the bus info
+ * structure and media type, clears all on chip counters, initializes receive
+ * address registers, multicast table, VLAN filter table, calls routine to set
+ * up link and flow control settings, and leaves transmit and receive units
+ * disabled and uninitialized
+ **/
+s32 ngbe_init_hw(struct ngbe_hw *hw)
+{
+ s32 status;
+
+ DEBUGFUNC("ngbe_init_hw");
+
+ /* Reset the hardware */
+ status = hw->mac.reset_hw(hw);
+
+ if (status != 0)
+ DEBUGOUT("Failed to initialize HW, STATUS = %d\n", status);
+
+ return status;
+}
+
+static void
+ngbe_reset_misc_em(struct ngbe_hw *hw)
+{
+ int i;
+
+ wr32(hw, NGBE_ISBADDRL, hw->isb_dma & 0xFFFFFFFF);
+ wr32(hw, NGBE_ISBADDRH, hw->isb_dma >> 32);
+
+ /* receive packets with size > 2048 */
+ wr32m(hw, NGBE_MACRXCFG,
+ NGBE_MACRXCFG_JUMBO, NGBE_MACRXCFG_JUMBO);
+
+ wr32m(hw, NGBE_FRMSZ, NGBE_FRMSZ_MAX_MASK,
+ NGBE_FRMSZ_MAX(NGBE_FRAME_SIZE_DFT));
+
+ /* clear counters on read */
+ wr32m(hw, NGBE_MACCNTCTL,
+ NGBE_MACCNTCTL_RC, NGBE_MACCNTCTL_RC);
+
+ wr32m(hw, NGBE_RXFCCFG,
+ NGBE_RXFCCFG_FC, NGBE_RXFCCFG_FC);
+ wr32m(hw, NGBE_TXFCCFG,
+ NGBE_TXFCCFG_FC, NGBE_TXFCCFG_FC);
+
+ wr32m(hw, NGBE_MACRXFLT,
+ NGBE_MACRXFLT_PROMISC, NGBE_MACRXFLT_PROMISC);
+
+ wr32m(hw, NGBE_RSTSTAT,
+ NGBE_RSTSTAT_TMRINIT_MASK, NGBE_RSTSTAT_TMRINIT(30));
+
+ /* errata 4: initialize mng flex tbl and wakeup flex tbl*/
+ wr32(hw, NGBE_MNGFLEXSEL, 0);
+ for (i = 0; i < 16; i++) {
+ wr32(hw, NGBE_MNGFLEXDWL(i), 0);
+ wr32(hw, NGBE_MNGFLEXDWH(i), 0);
+ wr32(hw, NGBE_MNGFLEXMSK(i), 0);
+ }
+ wr32(hw, NGBE_LANFLEXSEL, 0);
+ for (i = 0; i < 16; i++) {
+ wr32(hw, NGBE_LANFLEXDWL(i), 0);
+ wr32(hw, NGBE_LANFLEXDWH(i), 0);
+ wr32(hw, NGBE_LANFLEXMSK(i), 0);
+ }
+
+ /* set pause frame dst mac addr */
+ wr32(hw, NGBE_RXPBPFCDMACL, 0xC2000001);
+ wr32(hw, NGBE_RXPBPFCDMACH, 0x0180);
+
+ wr32(hw, NGBE_MDIOMODE, 0xF);
+
+ wr32m(hw, NGBE_GPIE, NGBE_GPIE_MSIX, NGBE_GPIE_MSIX);
+
+ if ((hw->sub_system_id & NGBE_OEM_MASK) == NGBE_LY_M88E1512_SFP ||
+ (hw->sub_system_id & NGBE_OEM_MASK) == NGBE_LY_YT8521S_SFP) {
+ /* gpio0 is used to power on/off control*/
+ wr32(hw, NGBE_GPIODIR, NGBE_GPIODIR_DDR(1));
+ wr32(hw, NGBE_GPIODATA, NGBE_GPIOBIT_0);
+ }
+
+ hw->mac.init_thermal_sensor_thresh(hw);
+
+ /* enable mac transmiter */
+ wr32m(hw, NGBE_MACTXCFG, NGBE_MACTXCFG_TE, NGBE_MACTXCFG_TE);
+
+ /* sellect GMII */
+ wr32m(hw, NGBE_MACTXCFG,
+ NGBE_MACTXCFG_SPEED_MASK, NGBE_MACTXCFG_SPEED_1G);
+
+ for (i = 0; i < 4; i++)
+ wr32m(hw, NGBE_IVAR(i), 0x80808080, 0);
+}
+
+/**
+ * ngbe_reset_hw_em - Perform hardware reset
+ * @hw: pointer to hardware structure
+ *
+ * Resets the hardware by resetting the transmit and receive units, masks
+ * and clears all interrupts, perform a PHY reset, and perform a link (MAC)
+ * reset.
+ **/
+s32 ngbe_reset_hw_em(struct ngbe_hw *hw)
+{
+ s32 status;
+
+ DEBUGFUNC("ngbe_reset_hw_em");
+
+ /* Call adapter stop to disable tx/rx and clear interrupts */
+ status = hw->mac.stop_hw(hw);
+ if (status != 0)
+ return status;
+
+ wr32(hw, NGBE_RST, NGBE_RST_LAN(hw->bus.lan_id));
+ ngbe_flush(hw);
+ msec_delay(50);
+
+ ngbe_reset_misc_em(hw);
+
+ msec_delay(50);
+
+ return status;
+}
+
/**
* ngbe_set_lan_id_multi_port - Set LAN id for PCIe multiple port devices
* @hw: pointer to the HW structure
@@ -27,6 +154,57 @@ void ngbe_set_lan_id_multi_port(struct ngbe_hw *hw)
bus->func = bus->lan_id;
}
+/**
+ * ngbe_stop_hw - Generic stop Tx/Rx units
+ * @hw: pointer to hardware structure
+ *
+ * Sets the adapter_stopped flag within ngbe_hw struct. Clears interrupts,
+ * disables transmit and receive units. The adapter_stopped flag is used by
+ * the shared code and drivers to determine if the adapter is in a stopped
+ * state and should not touch the hardware.
+ **/
+s32 ngbe_stop_hw(struct ngbe_hw *hw)
+{
+ u32 reg_val;
+ u16 i;
+
+ DEBUGFUNC("ngbe_stop_hw");
+
+ /*
+ * Set the adapter_stopped flag so other driver functions stop touching
+ * the hardware
+ */
+ hw->adapter_stopped = true;
+
+ /* Disable the receive unit */
+ ngbe_disable_rx(hw);
+
+ /* Clear interrupt mask to stop interrupts from being generated */
+ wr32(hw, NGBE_IENMISC, 0);
+ wr32(hw, NGBE_IMS(0), NGBE_IMS_MASK);
+
+ /* Clear any pending interrupts, flush previous writes */
+ wr32(hw, NGBE_ICRMISC, NGBE_ICRMISC_MASK);
+ wr32(hw, NGBE_ICR(0), NGBE_ICR_MASK);
+
+ /* Disable the transmit unit. Each queue must be disabled. */
+ for (i = 0; i < hw->mac.max_tx_queues; i++)
+ wr32(hw, NGBE_TXCFG(i), NGBE_TXCFG_FLUSH);
+
+ /* Disable the receive unit by stopping each queue */
+ for (i = 0; i < hw->mac.max_rx_queues; i++) {
+ reg_val = rd32(hw, NGBE_RXCFG(i));
+ reg_val &= ~NGBE_RXCFG_ENA;
+ wr32(hw, NGBE_RXCFG(i), reg_val);
+ }
+
+ /* flush all queues disables */
+ ngbe_flush(hw);
+ msec_delay(2);
+
+ return 0;
+}
+
/**
* ngbe_acquire_swfw_sync - Acquire SWFW semaphore
* @hw: pointer to hardware structure
@@ -98,6 +276,54 @@ void ngbe_release_swfw_sync(struct ngbe_hw *hw, u32 mask)
ngbe_release_eeprom_semaphore(hw);
}
+/**
+ * ngbe_init_thermal_sensor_thresh - Inits thermal sensor thresholds
+ * @hw: pointer to hardware structure
+ *
+ * Inits the thermal sensor thresholds according to the NVM map
+ * and save off the threshold and location values into mac.thermal_sensor_data
+ **/
+s32 ngbe_init_thermal_sensor_thresh(struct ngbe_hw *hw)
+{
+ struct ngbe_thermal_sensor_data *data = &hw->mac.thermal_sensor_data;
+
+ DEBUGFUNC("ngbe_init_thermal_sensor_thresh");
+
+ memset(data, 0, sizeof(struct ngbe_thermal_sensor_data));
+
+ if (hw->bus.lan_id != 0)
+ return NGBE_NOT_IMPLEMENTED;
+
+ wr32(hw, NGBE_TSINTR,
+ NGBE_TSINTR_AEN | NGBE_TSINTR_DEN);
+ wr32(hw, NGBE_TSEN, NGBE_TSEN_ENA);
+
+ data->sensor[0].alarm_thresh = 115;
+ wr32(hw, NGBE_TSATHRE, 0x344);
+ data->sensor[0].dalarm_thresh = 110;
+ wr32(hw, NGBE_TSDTHRE, 0x330);
+
+ return 0;
+}
+
+void ngbe_disable_rx(struct ngbe_hw *hw)
+{
+ u32 pfdtxgswc;
+
+ pfdtxgswc = rd32(hw, NGBE_PSRCTL);
+ if (pfdtxgswc & NGBE_PSRCTL_LBENA) {
+ pfdtxgswc &= ~NGBE_PSRCTL_LBENA;
+ wr32(hw, NGBE_PSRCTL, pfdtxgswc);
+ hw->mac.set_lben = true;
+ } else {
+ hw->mac.set_lben = false;
+ }
+
+ wr32m(hw, NGBE_PBRXCTL, NGBE_PBRXCTL_ENA, 0);
+ wr32m(hw, NGBE_MACRXCFG, NGBE_MACRXCFG_ENA, 0);
+}
+
/**
* ngbe_set_mac_type - Sets MAC type
* @hw: pointer to the HW structure
@@ -216,13 +442,22 @@ s32 ngbe_init_ops_pf(struct ngbe_hw *hw)
bus->set_lan_id = ngbe_set_lan_id_multi_port;
/* MAC */
+ mac->init_hw = ngbe_init_hw;
+ mac->reset_hw = ngbe_reset_hw_em;
+ mac->stop_hw = ngbe_stop_hw;
mac->acquire_swfw_sync = ngbe_acquire_swfw_sync;
mac->release_swfw_sync = ngbe_release_swfw_sync;
+ /* Manageability interface */
+ mac->init_thermal_sensor_thresh = ngbe_init_thermal_sensor_thresh;
+
/* EEPROM */
rom->init_params = ngbe_init_eeprom_params;
rom->validate_checksum = ngbe_validate_eeprom_checksum_em;
+ mac->max_rx_queues = NGBE_EM_MAX_RX_QUEUES;
+ mac->max_tx_queues = NGBE_EM_MAX_TX_QUEUES;
+
return 0;
}
diff --git a/drivers/net/ngbe/base/ngbe_hw.h b/drivers/net/ngbe/base/ngbe_hw.h
index 5e508fb67f..39b2fe696b 100644
--- a/drivers/net/ngbe/base/ngbe_hw.h
+++ b/drivers/net/ngbe/base/ngbe_hw.h
@@ -8,11 +8,20 @@
#include "ngbe_type.h"
+#define NGBE_EM_MAX_TX_QUEUES 8
+#define NGBE_EM_MAX_RX_QUEUES 8
+
+s32 ngbe_init_hw(struct ngbe_hw *hw);
+s32 ngbe_reset_hw_em(struct ngbe_hw *hw);
+s32 ngbe_stop_hw(struct ngbe_hw *hw);
+
void ngbe_set_lan_id_multi_port(struct ngbe_hw *hw);
s32 ngbe_acquire_swfw_sync(struct ngbe_hw *hw, u32 mask);
void ngbe_release_swfw_sync(struct ngbe_hw *hw, u32 mask);
+s32 ngbe_init_thermal_sensor_thresh(struct ngbe_hw *hw);
+void ngbe_disable_rx(struct ngbe_hw *hw);
s32 ngbe_init_shared_code(struct ngbe_hw *hw);
s32 ngbe_set_mac_type(struct ngbe_hw *hw);
s32 ngbe_init_ops_pf(struct ngbe_hw *hw);
diff --git a/drivers/net/ngbe/base/ngbe_type.h b/drivers/net/ngbe/base/ngbe_type.h
index c727338bd5..55c686c0a3 100644
--- a/drivers/net/ngbe/base/ngbe_type.h
+++ b/drivers/net/ngbe/base/ngbe_type.h
@@ -6,10 +6,25 @@
#ifndef _NGBE_TYPE_H_
#define _NGBE_TYPE_H_
+#define NGBE_FRAME_SIZE_DFT (1522) /* Default frame size, +FCS */
+
+#define NGBE_ALIGN 128 /* as intel did */
+#define NGBE_ISB_SIZE 16
+
#include "ngbe_status.h"
#include "ngbe_osdep.h"
#include "ngbe_devids.h"
+struct ngbe_thermal_diode_data {
+ s16 temp;
+ s16 alarm_thresh;
+ s16 dalarm_thresh;
+};
+
+struct ngbe_thermal_sensor_data {
+ struct ngbe_thermal_diode_data sensor[1];
+};
+
enum ngbe_eeprom_type {
ngbe_eeprom_unknown = 0,
ngbe_eeprom_spi,
@@ -71,9 +86,20 @@ struct ngbe_rom_info {
};
struct ngbe_mac_info {
+ s32 (*init_hw)(struct ngbe_hw *hw);
+ s32 (*reset_hw)(struct ngbe_hw *hw);
+ s32 (*stop_hw)(struct ngbe_hw *hw);
s32 (*acquire_swfw_sync)(struct ngbe_hw *hw, u32 mask);
void (*release_swfw_sync)(struct ngbe_hw *hw, u32 mask);
+
+ /* Manageability interface */
+ s32 (*init_thermal_sensor_thresh)(struct ngbe_hw *hw);
+
enum ngbe_mac_type type;
+ u32 max_tx_queues;
+ u32 max_rx_queues;
+ struct ngbe_thermal_sensor_data thermal_sensor_data;
+ bool set_lben;
};
struct ngbe_phy_info {
@@ -92,6 +118,10 @@ struct ngbe_hw {
u16 vendor_id;
u16 sub_device_id;
u16 sub_system_id;
+ bool adapter_stopped;
+
+ uint64_t isb_dma;
+ void IOMEM *isb_mem;
bool is_pf;
};
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index c779c46dd5..31d4dda976 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -60,6 +60,7 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
struct ngbe_hw *hw = ngbe_dev_hw(eth_dev);
+ const struct rte_memzone *mz;
int err;
PMD_INIT_FUNC_TRACE();
@@ -76,6 +77,15 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
ngbe_map_device_id(hw);
hw->hw_addr = (void *)pci_dev->mem_resource[0].addr;
+ /* Reserve memory for interrupt status block */
+ mz = rte_eth_dma_zone_reserve(eth_dev, "ngbe_driver", -1,
+ NGBE_ISB_SIZE, NGBE_ALIGN, SOCKET_ID_ANY);
+ if (mz == NULL)
+ return -ENOMEM;
+
+ hw->isb_dma = TMZ_PADDR(mz);
+ hw->isb_mem = TMZ_VADDR(mz);
+
/* Initialize the shared code (base driver) */
err = ngbe_init_shared_code(hw);
if (err != 0) {
@@ -99,6 +109,13 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
return -EIO;
}
+ err = hw->mac.init_hw(hw);
+
+ if (err != 0) {
+ PMD_INIT_LOG(ERR, "Hardware Initialization Failure: %d", err);
+ return -EIO;
+ }
+
return 0;
}
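
Condensed, the probe-time bring-up order that this patch and the previous one establish looks roughly like the following hypothetical helper (names outside this series are made up, error codes simplified):

    static int example_ngbe_hw_bringup(struct ngbe_hw *hw)
    {
    	int err;

    	err = ngbe_init_shared_code(hw);	/* install bus/mac/rom ops */
    	if (err != 0)
    		return err;

    	ngbe_swfw_lock_reset(hw);		/* drop any stale SWFW locks */

    	err = hw->rom.init_params(hw);		/* EEPROM parameters */
    	if (err != 0)
    		return err;

    	err = hw->rom.validate_checksum(hw, NULL);
    	if (err != 0)
    		return err;

    	return hw->mac.init_hw(hw);		/* reset MAC, stop Tx/Rx */
    }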
--
2.21.0.windows.1
^ permalink raw reply [flat|nested] 42+ messages in thread
* [dpdk-dev] [PATCH v6 08/19] net/ngbe: identify PHY and reset PHY
2021-06-17 10:59 [dpdk-dev] [PATCH v6 00/19] net: ngbe PMD Jiawen Wu
` (6 preceding siblings ...)
2021-06-17 10:59 ` [dpdk-dev] [PATCH v6 07/19] net/ngbe: add HW initialization Jiawen Wu
@ 2021-06-17 10:59 ` Jiawen Wu
2021-06-17 10:59 ` [dpdk-dev] [PATCH v6 09/19] net/ngbe: store MAC address Jiawen Wu
` (10 subsequent siblings)
18 siblings, 0 replies; 42+ messages in thread
From: Jiawen Wu @ 2021-06-17 10:59 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu
Identify PHY to get the PHY type, and perform a PHY reset.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
drivers/net/ngbe/base/meson.build | 4 +
drivers/net/ngbe/base/ngbe_dummy.h | 40 +++
drivers/net/ngbe/base/ngbe_hw.c | 40 ++-
drivers/net/ngbe/base/ngbe_hw.h | 2 +
drivers/net/ngbe/base/ngbe_phy.c | 426 +++++++++++++++++++++++++++
drivers/net/ngbe/base/ngbe_phy.h | 60 ++++
drivers/net/ngbe/base/ngbe_phy_mvl.c | 89 ++++++
drivers/net/ngbe/base/ngbe_phy_mvl.h | 92 ++++++
drivers/net/ngbe/base/ngbe_phy_rtl.c | 65 ++++
drivers/net/ngbe/base/ngbe_phy_rtl.h | 83 ++++++
drivers/net/ngbe/base/ngbe_phy_yt.c | 112 +++++++
drivers/net/ngbe/base/ngbe_phy_yt.h | 67 +++++
drivers/net/ngbe/base/ngbe_type.h | 17 ++
13 files changed, 1096 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/ngbe/base/ngbe_phy.c
create mode 100644 drivers/net/ngbe/base/ngbe_phy.h
create mode 100644 drivers/net/ngbe/base/ngbe_phy_mvl.c
create mode 100644 drivers/net/ngbe/base/ngbe_phy_mvl.h
create mode 100644 drivers/net/ngbe/base/ngbe_phy_rtl.c
create mode 100644 drivers/net/ngbe/base/ngbe_phy_rtl.h
create mode 100644 drivers/net/ngbe/base/ngbe_phy_yt.c
create mode 100644 drivers/net/ngbe/base/ngbe_phy_yt.h
diff --git a/drivers/net/ngbe/base/meson.build b/drivers/net/ngbe/base/meson.build
index ddd122ec45..146134f671 100644
--- a/drivers/net/ngbe/base/meson.build
+++ b/drivers/net/ngbe/base/meson.build
@@ -5,6 +5,10 @@ sources = [
'ngbe_eeprom.c',
'ngbe_hw.c',
'ngbe_mng.c',
+ 'ngbe_phy.c',
+ 'ngbe_phy_rtl.c',
+ 'ngbe_phy_mvl.c',
+ 'ngbe_phy_yt.c',
]
error_cflags = []
diff --git a/drivers/net/ngbe/base/ngbe_dummy.h b/drivers/net/ngbe/base/ngbe_dummy.h
index d0081acc2b..15017cfd82 100644
--- a/drivers/net/ngbe/base/ngbe_dummy.h
+++ b/drivers/net/ngbe/base/ngbe_dummy.h
@@ -64,6 +64,39 @@ static inline s32 ngbe_mac_init_thermal_ssth_dummy(struct ngbe_hw *TUP0)
{
return NGBE_ERR_OPS_DUMMY;
}
+static inline s32 ngbe_mac_check_overtemp_dummy(struct ngbe_hw *TUP0)
+{
+ return NGBE_ERR_OPS_DUMMY;
+}
+/* struct ngbe_phy_operations */
+static inline s32 ngbe_phy_identify_dummy(struct ngbe_hw *TUP0)
+{
+ return NGBE_ERR_OPS_DUMMY;
+}
+static inline s32 ngbe_phy_reset_hw_dummy(struct ngbe_hw *TUP0)
+{
+ return NGBE_ERR_OPS_DUMMY;
+}
+static inline s32 ngbe_phy_read_reg_dummy(struct ngbe_hw *TUP0, u32 TUP1,
+ u32 TUP2, u16 *TUP3)
+{
+ return NGBE_ERR_OPS_DUMMY;
+}
+static inline s32 ngbe_phy_write_reg_dummy(struct ngbe_hw *TUP0, u32 TUP1,
+ u32 TUP2, u16 TUP3)
+{
+ return NGBE_ERR_OPS_DUMMY;
+}
+static inline s32 ngbe_phy_read_reg_unlocked_dummy(struct ngbe_hw *TUP0,
+ u32 TUP1, u32 TUP2, u16 *TUP3)
+{
+ return NGBE_ERR_OPS_DUMMY;
+}
+static inline s32 ngbe_phy_write_reg_unlocked_dummy(struct ngbe_hw *TUP0,
+ u32 TUP1, u32 TUP2, u16 TUP3)
+{
+ return NGBE_ERR_OPS_DUMMY;
+}
static inline void ngbe_init_ops_dummy(struct ngbe_hw *hw)
{
hw->bus.set_lan_id = ngbe_bus_set_lan_id_dummy;
@@ -75,6 +108,13 @@ static inline void ngbe_init_ops_dummy(struct ngbe_hw *hw)
hw->mac.acquire_swfw_sync = ngbe_mac_acquire_swfw_sync_dummy;
hw->mac.release_swfw_sync = ngbe_mac_release_swfw_sync_dummy;
hw->mac.init_thermal_sensor_thresh = ngbe_mac_init_thermal_ssth_dummy;
+ hw->mac.check_overtemp = ngbe_mac_check_overtemp_dummy;
+ hw->phy.identify = ngbe_phy_identify_dummy;
+ hw->phy.reset_hw = ngbe_phy_reset_hw_dummy;
+ hw->phy.read_reg = ngbe_phy_read_reg_dummy;
+ hw->phy.write_reg = ngbe_phy_write_reg_dummy;
+ hw->phy.read_reg_unlocked = ngbe_phy_read_reg_unlocked_dummy;
+ hw->phy.write_reg_unlocked = ngbe_phy_write_reg_unlocked_dummy;
}
#endif /* _NGBE_TYPE_DUMMY_H_ */
diff --git a/drivers/net/ngbe/base/ngbe_hw.c b/drivers/net/ngbe/base/ngbe_hw.c
index 9fa40f7de1..5723213209 100644
--- a/drivers/net/ngbe/base/ngbe_hw.c
+++ b/drivers/net/ngbe/base/ngbe_hw.c
@@ -4,6 +4,7 @@
*/
#include "ngbe_type.h"
+#include "ngbe_phy.h"
#include "ngbe_eeprom.h"
#include "ngbe_mng.h"
#include "ngbe_hw.h"
@@ -94,7 +95,7 @@ ngbe_reset_misc_em(struct ngbe_hw *hw)
hw->mac.init_thermal_sensor_thresh(hw);
- /* enable mac transmiter */
+ /* enable mac transmitter */
wr32m(hw, NGBE_MACTXCFG, NGBE_MACTXCFG_TE, NGBE_MACTXCFG_TE);
 /* select GMII */
@@ -124,6 +125,15 @@ s32 ngbe_reset_hw_em(struct ngbe_hw *hw)
if (status != 0)
return status;
+ /* Identify PHY and related function pointers */
+ status = ngbe_init_phy(hw);
+ if (status)
+ return status;
+
+ /* Reset PHY */
+ if (!hw->phy.reset_disable)
+ hw->phy.reset_hw(hw);
+
wr32(hw, NGBE_RST, NGBE_RST_LAN(hw->bus.lan_id));
ngbe_flush(hw);
msec_delay(50);
@@ -307,6 +317,24 @@ s32 ngbe_init_thermal_sensor_thresh(struct ngbe_hw *hw)
return 0;
}
+s32 ngbe_mac_check_overtemp(struct ngbe_hw *hw)
+{
+ s32 status = 0;
+ u32 ts_state;
+
+ DEBUGFUNC("ngbe_mac_check_overtemp");
+
+ /* Check that the LASI temp alarm status was triggered */
+ ts_state = rd32(hw, NGBE_TSALM);
+
+ if (ts_state & NGBE_TSALM_HI)
+ status = NGBE_ERR_UNDERTEMP;
+ else if (ts_state & NGBE_TSALM_LO)
+ status = NGBE_ERR_OVERTEMP;
+
+ return status;
+}
+
void ngbe_disable_rx(struct ngbe_hw *hw)
{
u32 pfdtxgswc;
@@ -434,6 +462,7 @@ s32 ngbe_init_ops_pf(struct ngbe_hw *hw)
{
struct ngbe_bus_info *bus = &hw->bus;
struct ngbe_mac_info *mac = &hw->mac;
+ struct ngbe_phy_info *phy = &hw->phy;
struct ngbe_rom_info *rom = &hw->rom;
DEBUGFUNC("ngbe_init_ops_pf");
@@ -441,6 +470,14 @@ s32 ngbe_init_ops_pf(struct ngbe_hw *hw)
/* BUS */
bus->set_lan_id = ngbe_set_lan_id_multi_port;
+ /* PHY */
+ phy->identify = ngbe_identify_phy;
+ phy->read_reg = ngbe_read_phy_reg;
+ phy->write_reg = ngbe_write_phy_reg;
+ phy->read_reg_unlocked = ngbe_read_phy_reg_mdi;
+ phy->write_reg_unlocked = ngbe_write_phy_reg_mdi;
+ phy->reset_hw = ngbe_reset_phy;
+
/* MAC */
mac->init_hw = ngbe_init_hw;
mac->reset_hw = ngbe_reset_hw_em;
@@ -450,6 +487,7 @@ s32 ngbe_init_ops_pf(struct ngbe_hw *hw)
/* Manageability interface */
mac->init_thermal_sensor_thresh = ngbe_init_thermal_sensor_thresh;
+ mac->check_overtemp = ngbe_mac_check_overtemp;
/* EEPROM */
rom->init_params = ngbe_init_eeprom_params;
diff --git a/drivers/net/ngbe/base/ngbe_hw.h b/drivers/net/ngbe/base/ngbe_hw.h
index 39b2fe696b..3c8e646bb7 100644
--- a/drivers/net/ngbe/base/ngbe_hw.h
+++ b/drivers/net/ngbe/base/ngbe_hw.h
@@ -21,10 +21,12 @@ s32 ngbe_acquire_swfw_sync(struct ngbe_hw *hw, u32 mask);
void ngbe_release_swfw_sync(struct ngbe_hw *hw, u32 mask);
s32 ngbe_init_thermal_sensor_thresh(struct ngbe_hw *hw);
+s32 ngbe_mac_check_overtemp(struct ngbe_hw *hw);
void ngbe_disable_rx(struct ngbe_hw *hw);
s32 ngbe_init_shared_code(struct ngbe_hw *hw);
s32 ngbe_set_mac_type(struct ngbe_hw *hw);
s32 ngbe_init_ops_pf(struct ngbe_hw *hw);
+s32 ngbe_init_phy(struct ngbe_hw *hw);
void ngbe_map_device_id(struct ngbe_hw *hw);
#endif /* _NGBE_HW_H_ */
diff --git a/drivers/net/ngbe/base/ngbe_phy.c b/drivers/net/ngbe/base/ngbe_phy.c
new file mode 100644
index 0000000000..0467e1e66e
--- /dev/null
+++ b/drivers/net/ngbe/base/ngbe_phy.c
@@ -0,0 +1,426 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+ * Copyright(c) 2010-2017 Intel Corporation
+ */
+
+#include "ngbe_hw.h"
+#include "ngbe_phy.h"
+
+s32 ngbe_mdi_map_register(mdi_reg_t *reg, mdi_reg_22_t *reg22)
+{
+ bool match = 1;
+ switch (reg->device_type) {
+ case NGBE_MD_DEV_PMA_PMD:
+ switch (reg->addr) {
+ case NGBE_MD_PHY_ID_HIGH:
+ case NGBE_MD_PHY_ID_LOW:
+ reg22->page = 0;
+ reg22->addr = reg->addr;
+ reg22->device_type = 0;
+ break;
+ default:
+ match = 0;
+ }
+ break;
+ default:
+ match = 0;
+ break;
+ }
+
+ if (!match) {
+ reg22->page = reg->device_type;
+ reg22->device_type = reg->device_type;
+ reg22->addr = reg->addr;
+ }
+
+ return 0;
+}
+
+/**
+ * ngbe_probe_phy - Identify a single address for a PHY
+ * @hw: pointer to hardware structure
+ * @phy_addr: PHY address to probe
+ *
+ * Returns true if PHY found
+ */
+static bool ngbe_probe_phy(struct ngbe_hw *hw, u16 phy_addr)
+{
+ if (!ngbe_validate_phy_addr(hw, phy_addr)) {
+ DEBUGOUT("Unable to validate PHY address 0x%04X\n",
+ phy_addr);
+ return false;
+ }
+
+ if (ngbe_get_phy_id(hw))
+ return false;
+
+ hw->phy.type = ngbe_get_phy_type_from_id(hw);
+ if (hw->phy.type == ngbe_phy_unknown)
+ return false;
+
+ return true;
+}
+
+/**
+ * ngbe_identify_phy - Get physical layer module
+ * @hw: pointer to hardware structure
+ *
+ * Determines the physical layer module found on the current adapter.
+ **/
+s32 ngbe_identify_phy(struct ngbe_hw *hw)
+{
+ s32 err = NGBE_ERR_PHY_ADDR_INVALID;
+ u16 phy_addr;
+
+ DEBUGFUNC("ngbe_identify_phy");
+
+ if (hw->phy.type != ngbe_phy_unknown)
+ return 0;
+
+ /* select clause22 */
+ wr32(hw, NGBE_MDIOMODE, NGBE_MDIOMODE_MASK);
+
+ for (phy_addr = 0; phy_addr < NGBE_MAX_PHY_ADDR; phy_addr++) {
+ if (ngbe_probe_phy(hw, phy_addr)) {
+ err = 0;
+ break;
+ }
+ }
+
+ return err;
+}
+
+/**
+ * ngbe_check_reset_blocked - check status of MNG FW veto bit
+ * @hw: pointer to the hardware structure
+ *
+ * This function checks the STAT.MNGVETO bit to see if there are
+ * any constraints on link from manageability. For MACs that don't
+ * have this bit, just return false since the link cannot be blocked
+ * via this method.
+ **/
+s32 ngbe_check_reset_blocked(struct ngbe_hw *hw)
+{
+ u32 mmngc;
+
+ DEBUGFUNC("ngbe_check_reset_blocked");
+
+ mmngc = rd32(hw, NGBE_STAT);
+ if (mmngc & NGBE_STAT_MNGVETO) {
+ DEBUGOUT("MNG_VETO bit detected.\n");
+ return true;
+ }
+
+ return false;
+}
+
+/**
+ * ngbe_validate_phy_addr - Determines if the PHY address is valid
+ * @hw: pointer to hardware structure
+ * @phy_addr: PHY address
+ *
+ **/
+bool ngbe_validate_phy_addr(struct ngbe_hw *hw, u32 phy_addr)
+{
+ u16 phy_id = 0;
+ bool valid = false;
+
+ DEBUGFUNC("ngbe_validate_phy_addr");
+
+ if (hw->sub_device_id == NGBE_SUB_DEV_ID_EM_YT8521S_SFP)
+ return true;
+
+ hw->phy.addr = phy_addr;
+ hw->phy.read_reg(hw, NGBE_MD_PHY_ID_HIGH,
+ NGBE_MD_DEV_PMA_PMD, &phy_id);
+
+ if (phy_id != 0xFFFF && phy_id != 0x0)
+ valid = true;
+
+ DEBUGOUT("PHY ID HIGH is 0x%04X\n", phy_id);
+
+ return valid;
+}
+
+/**
+ * ngbe_get_phy_id - Get the phy ID
+ * @hw: pointer to hardware structure
+ *
+ **/
+s32 ngbe_get_phy_id(struct ngbe_hw *hw)
+{
+ u32 err;
+ u16 phy_id_high = 0;
+ u16 phy_id_low = 0;
+
+ DEBUGFUNC("ngbe_get_phy_id");
+
+ err = hw->phy.read_reg(hw, NGBE_MD_PHY_ID_HIGH,
+ NGBE_MD_DEV_PMA_PMD,
+ &phy_id_high);
+ hw->phy.id = (u32)(phy_id_high << 16);
+
+ err = hw->phy.read_reg(hw, NGBE_MD_PHY_ID_LOW,
+ NGBE_MD_DEV_PMA_PMD,
+ &phy_id_low);
+ hw->phy.id |= (u32)(phy_id_low & NGBE_PHY_REVISION_MASK);
+ hw->phy.revision = (u32)(phy_id_low & ~NGBE_PHY_REVISION_MASK);
+
+ DEBUGOUT("PHY_ID_HIGH 0x%04X, PHY_ID_LOW 0x%04X\n",
+ phy_id_high, phy_id_low);
+
+ return err;
+}
+
+/**
+ * ngbe_get_phy_type_from_id - Get the phy type
+ * @hw: pointer to hardware structure
+ *
+ **/
+enum ngbe_phy_type ngbe_get_phy_type_from_id(struct ngbe_hw *hw)
+{
+ enum ngbe_phy_type phy_type;
+
+ DEBUGFUNC("ngbe_get_phy_type_from_id");
+
+ switch (hw->phy.id) {
+ case NGBE_PHYID_RTL:
+ phy_type = ngbe_phy_rtl;
+ break;
+ case NGBE_PHYID_MVL:
+ if (hw->phy.media_type == ngbe_media_type_fiber)
+ phy_type = ngbe_phy_mvl_sfi;
+ else
+ phy_type = ngbe_phy_mvl;
+ break;
+ case NGBE_PHYID_YT:
+ if (hw->phy.media_type == ngbe_media_type_fiber)
+ phy_type = ngbe_phy_yt8521s_sfi;
+ else
+ phy_type = ngbe_phy_yt8521s;
+ break;
+ default:
+ phy_type = ngbe_phy_unknown;
+ break;
+ }
+
+ return phy_type;
+}
+
+/**
+ * ngbe_reset_phy - Performs a PHY reset
+ * @hw: pointer to hardware structure
+ **/
+s32 ngbe_reset_phy(struct ngbe_hw *hw)
+{
+ s32 err = 0;
+
+ DEBUGFUNC("ngbe_reset_phy");
+
+ if (hw->phy.type == ngbe_phy_unknown)
+ err = ngbe_identify_phy(hw);
+
+ if (err != 0 || hw->phy.type == ngbe_phy_none)
+ return err;
+
+ /* Don't reset PHY if it's shut down due to overtemp. */
+ if (hw->mac.check_overtemp(hw) == NGBE_ERR_OVERTEMP)
+ return err;
+
+ /* Blocked by MNG FW so bail */
+ if (ngbe_check_reset_blocked(hw))
+ return err;
+
+ switch (hw->phy.type) {
+ case ngbe_phy_rtl:
+ err = ngbe_reset_phy_rtl(hw);
+ break;
+ case ngbe_phy_mvl:
+ case ngbe_phy_mvl_sfi:
+ err = ngbe_reset_phy_mvl(hw);
+ break;
+ case ngbe_phy_yt8521s:
+ case ngbe_phy_yt8521s_sfi:
+ err = ngbe_reset_phy_yt(hw);
+ break;
+ default:
+ break;
+ }
+
+ return err;
+}
+
+/**
+ * ngbe_read_phy_reg_mdi - Reads a value from a specified PHY register without
+ * the SWFW lock
+ * @hw: pointer to hardware structure
+ * @reg_addr: 32 bit address of PHY register to read
+ * @device_type: 5 bit device type
+ * @phy_data: Pointer to read data from PHY register
+ **/
+s32 ngbe_read_phy_reg_mdi(struct ngbe_hw *hw, u32 reg_addr, u32 device_type,
+ u16 *phy_data)
+{
+ u32 command, data;
+
+ /* Setup and write the address cycle command */
+ command = NGBE_MDIOSCA_REG(reg_addr) |
+ NGBE_MDIOSCA_DEV(device_type) |
+ NGBE_MDIOSCA_PORT(hw->phy.addr);
+ wr32(hw, NGBE_MDIOSCA, command);
+
+ command = NGBE_MDIOSCD_CMD_READ |
+ NGBE_MDIOSCD_BUSY |
+ NGBE_MDIOSCD_CLOCK(6);
+ wr32(hw, NGBE_MDIOSCD, command);
+
+ /*
+ * Check every 10 usec to see if the address cycle completed.
+ * The MDI Command bit will clear when the operation is
+ * complete
+ */
+ if (!po32m(hw, NGBE_MDIOSCD, NGBE_MDIOSCD_BUSY,
+ 0, NULL, 100, 100)) {
+ DEBUGOUT("PHY address command did not complete\n");
+ return NGBE_ERR_PHY;
+ }
+
+ data = rd32(hw, NGBE_MDIOSCD);
+ *phy_data = (u16)NGBE_MDIOSCD_DAT_R(data);
+
+ return 0;
+}
+
+/**
+ * ngbe_read_phy_reg - Reads a value from a specified PHY register
+ * using the SWFW lock - this function is needed in most cases
+ * @hw: pointer to hardware structure
+ * @reg_addr: 32 bit address of PHY register to read
+ * @device_type: 5 bit device type
+ * @phy_data: Pointer to read data from PHY register
+ **/
+s32 ngbe_read_phy_reg(struct ngbe_hw *hw, u32 reg_addr,
+ u32 device_type, u16 *phy_data)
+{
+ s32 err;
+ u32 gssr = hw->phy.phy_semaphore_mask;
+
+ DEBUGFUNC("ngbe_read_phy_reg");
+
+ if (hw->mac.acquire_swfw_sync(hw, gssr))
+ return NGBE_ERR_SWFW_SYNC;
+
+ err = hw->phy.read_reg_unlocked(hw, reg_addr, device_type,
+ phy_data);
+
+ hw->mac.release_swfw_sync(hw, gssr);
+
+ return err;
+}
+
+/**
+ * ngbe_write_phy_reg_mdi - Writes a value to specified PHY register
+ * without SWFW lock
+ * @hw: pointer to hardware structure
+ * @reg_addr: 32 bit PHY register to write
+ * @device_type: 5 bit device type
+ * @phy_data: Data to write to the PHY register
+ **/
+s32 ngbe_write_phy_reg_mdi(struct ngbe_hw *hw, u32 reg_addr,
+ u32 device_type, u16 phy_data)
+{
+ u32 command;
+
+ /* write command */
+ command = NGBE_MDIOSCA_REG(reg_addr) |
+ NGBE_MDIOSCA_DEV(device_type) |
+ NGBE_MDIOSCA_PORT(hw->phy.addr);
+ wr32(hw, NGBE_MDIOSCA, command);
+
+ command = NGBE_MDIOSCD_CMD_WRITE |
+ NGBE_MDIOSCD_DAT(phy_data) |
+ NGBE_MDIOSCD_BUSY |
+ NGBE_MDIOSCD_CLOCK(6);
+ wr32(hw, NGBE_MDIOSCD, command);
+
+ /* wait for completion */
+ if (!po32m(hw, NGBE_MDIOSCD, NGBE_MDIOSCD_BUSY,
+ 0, NULL, 100, 100)) {
+ DEBUGOUT("PHY write cmd didn't complete\n");
+ return NGBE_ERR_PHY;
+ }
+
+ return 0;
+}
+
+/**
+ * ngbe_write_phy_reg - Writes a value to specified PHY register
+ * using the SWFW lock - this function is needed in most cases
+ * @hw: pointer to hardware structure
+ * @reg_addr: 32 bit PHY register to write
+ * @device_type: 5 bit device type
+ * @phy_data: Data to write to the PHY register
+ **/
+s32 ngbe_write_phy_reg(struct ngbe_hw *hw, u32 reg_addr,
+ u32 device_type, u16 phy_data)
+{
+ s32 err;
+ u32 gssr = hw->phy.phy_semaphore_mask;
+
+ DEBUGFUNC("ngbe_write_phy_reg");
+
+ if (hw->mac.acquire_swfw_sync(hw, gssr))
+ return NGBE_ERR_SWFW_SYNC;
+
+ err = hw->phy.write_reg_unlocked(hw, reg_addr, device_type,
+ phy_data);
+
+ hw->mac.release_swfw_sync(hw, gssr);
+
+ return err;
+}
+
+/**
+ * ngbe_init_phy - PHY specific init
+ * @hw: pointer to hardware structure
+ *
+ * Initialize any function pointers that were not able to be
+ * set during init_shared_code because the PHY type was
+ * not known.
+ *
+ **/
+s32 ngbe_init_phy(struct ngbe_hw *hw)
+{
+ struct ngbe_phy_info *phy = &hw->phy;
+ s32 err = 0;
+
+ DEBUGFUNC("ngbe_init_phy");
+
+ hw->phy.addr = 0;
+
+ switch (hw->sub_device_id) {
+ case NGBE_SUB_DEV_ID_EM_RTL_SGMII:
+ hw->phy.read_reg_unlocked = ngbe_read_phy_reg_rtl;
+ hw->phy.write_reg_unlocked = ngbe_write_phy_reg_rtl;
+ break;
+ case NGBE_SUB_DEV_ID_EM_MVL_RGMII:
+ case NGBE_SUB_DEV_ID_EM_MVL_SFP:
+ hw->phy.read_reg_unlocked = ngbe_read_phy_reg_mvl;
+ hw->phy.write_reg_unlocked = ngbe_write_phy_reg_mvl;
+ break;
+ case NGBE_SUB_DEV_ID_EM_YT8521S_SFP:
+ hw->phy.read_reg_unlocked = ngbe_read_phy_reg_yt;
+ hw->phy.write_reg_unlocked = ngbe_write_phy_reg_yt;
+ break;
+ default:
+ break;
+ }
+
+ hw->phy.phy_semaphore_mask = NGBE_MNGSEM_SWPHY;
+
+ /* Identify the PHY */
+ err = phy->identify(hw);
+
+ return err;
+}
+
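
For reference, a small sketch of reading the PHY ID through the locked accessor installed by ngbe_init_ops_pf() (illustrative only, not part of this diff):

    static s32 example_dump_phy_id(struct ngbe_hw *hw)
    {
    	u16 id_high = 0, id_low = 0;
    	s32 err;

    	err = hw->phy.read_reg(hw, NGBE_MD_PHY_ID_HIGH,
    			       NGBE_MD_DEV_PMA_PMD, &id_high);
    	if (err != 0)
    		return err;

    	err = hw->phy.read_reg(hw, NGBE_MD_PHY_ID_LOW,
    			       NGBE_MD_DEV_PMA_PMD, &id_low);
    	if (err != 0)
    		return err;

    	DEBUGOUT("PHY ID 0x%04X%04X\n", id_high, id_low);
    	return 0;
    }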
diff --git a/drivers/net/ngbe/base/ngbe_phy.h b/drivers/net/ngbe/base/ngbe_phy.h
new file mode 100644
index 0000000000..59d9efe025
--- /dev/null
+++ b/drivers/net/ngbe/base/ngbe_phy.h
@@ -0,0 +1,60 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+ * Copyright(c) 2010-2017 Intel Corporation
+ */
+
+#ifndef _NGBE_PHY_H_
+#define _NGBE_PHY_H_
+
+#include "ngbe_type.h"
+#include "ngbe_phy_rtl.h"
+#include "ngbe_phy_mvl.h"
+#include "ngbe_phy_yt.h"
+
+/******************************************************************************
+ * PHY MDIO Registers:
+ ******************************************************************************/
+#define NGBE_MAX_PHY_ADDR 32
+
+/* (dev_type = 1) */
+#define NGBE_MD_DEV_PMA_PMD 0x1
+#define NGBE_MD_PHY_ID_HIGH 0x2 /* PHY ID High Reg*/
+#define NGBE_MD_PHY_ID_LOW 0x3 /* PHY ID Low Reg*/
+#define NGBE_PHY_REVISION_MASK 0xFFFFFFF0
+
+/* IEEE 802.3 Clause 22 */
+struct mdi_reg_22 {
+ u16 page;
+ u16 addr;
+ u16 device_type;
+};
+typedef struct mdi_reg_22 mdi_reg_22_t;
+
+/* IEEE 802.3ae Clause 45 */
+struct mdi_reg {
+ u16 device_type;
+ u16 addr;
+};
+typedef struct mdi_reg mdi_reg_t;
+
+#define NGBE_MD22_PHY_ID_HIGH 0x2 /* PHY ID High Reg*/
+#define NGBE_MD22_PHY_ID_LOW 0x3 /* PHY ID Low Reg*/
+
+s32 ngbe_mdi_map_register(mdi_reg_t *reg, mdi_reg_22_t *reg22);
+
+bool ngbe_validate_phy_addr(struct ngbe_hw *hw, u32 phy_addr);
+enum ngbe_phy_type ngbe_get_phy_type_from_id(struct ngbe_hw *hw);
+s32 ngbe_get_phy_id(struct ngbe_hw *hw);
+s32 ngbe_identify_phy(struct ngbe_hw *hw);
+s32 ngbe_reset_phy(struct ngbe_hw *hw);
+s32 ngbe_read_phy_reg_mdi(struct ngbe_hw *hw, u32 reg_addr, u32 device_type,
+ u16 *phy_data);
+s32 ngbe_write_phy_reg_mdi(struct ngbe_hw *hw, u32 reg_addr, u32 device_type,
+ u16 phy_data);
+s32 ngbe_read_phy_reg(struct ngbe_hw *hw, u32 reg_addr,
+ u32 device_type, u16 *phy_data);
+s32 ngbe_write_phy_reg(struct ngbe_hw *hw, u32 reg_addr,
+ u32 device_type, u16 phy_data);
+s32 ngbe_check_reset_blocked(struct ngbe_hw *hw);
+
+#endif /* _NGBE_PHY_H_ */
diff --git a/drivers/net/ngbe/base/ngbe_phy_mvl.c b/drivers/net/ngbe/base/ngbe_phy_mvl.c
new file mode 100644
index 0000000000..40419a61f6
--- /dev/null
+++ b/drivers/net/ngbe/base/ngbe_phy_mvl.c
@@ -0,0 +1,89 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+ */
+
+#include "ngbe_phy_mvl.h"
+
+#define MVL_PHY_RST_WAIT_PERIOD 5
+
+s32 ngbe_read_phy_reg_mvl(struct ngbe_hw *hw,
+ u32 reg_addr, u32 device_type, u16 *phy_data)
+{
+ mdi_reg_t reg;
+ mdi_reg_22_t reg22;
+
+ reg.device_type = device_type;
+ reg.addr = reg_addr;
+
+ if (hw->phy.media_type == ngbe_media_type_fiber)
+ ngbe_write_phy_reg_mdi(hw, MVL_PAGE_SEL, 0, 1);
+ else
+ ngbe_write_phy_reg_mdi(hw, MVL_PAGE_SEL, 0, 0);
+
+ ngbe_mdi_map_register(&reg, &reg22);
+
+ ngbe_read_phy_reg_mdi(hw, reg22.addr, reg22.device_type, phy_data);
+
+ return 0;
+}
+
+s32 ngbe_write_phy_reg_mvl(struct ngbe_hw *hw,
+ u32 reg_addr, u32 device_type, u16 phy_data)
+{
+ mdi_reg_t reg;
+ mdi_reg_22_t reg22;
+
+ reg.device_type = device_type;
+ reg.addr = reg_addr;
+
+ if (hw->phy.media_type == ngbe_media_type_fiber)
+ ngbe_write_phy_reg_mdi(hw, MVL_PAGE_SEL, 0, 1);
+ else
+ ngbe_write_phy_reg_mdi(hw, MVL_PAGE_SEL, 0, 0);
+
+ ngbe_mdi_map_register(&reg, &reg22);
+
+ ngbe_write_phy_reg_mdi(hw, reg22.addr, reg22.device_type, phy_data);
+
+ return 0;
+}
+
+s32 ngbe_reset_phy_mvl(struct ngbe_hw *hw)
+{
+ u32 i;
+ u16 ctrl = 0;
+ s32 status = 0;
+
+ DEBUGFUNC("ngbe_reset_phy_mvl");
+
+ if (hw->phy.type != ngbe_phy_mvl && hw->phy.type != ngbe_phy_mvl_sfi)
+ return NGBE_ERR_PHY_TYPE;
+
+ /* select page 18 reg 20 */
+ status = ngbe_write_phy_reg_mdi(hw, MVL_PAGE_SEL, 0, 18);
+
+ /* mode select to RGMII-to-copper or RGMII-to-sfi*/
+ if (hw->phy.type == ngbe_phy_mvl)
+ ctrl = MVL_GEN_CTL_MODE_COPPER;
+ else
+ ctrl = MVL_GEN_CTL_MODE_FIBER;
+ status = ngbe_write_phy_reg_mdi(hw, MVL_GEN_CTL, 0, ctrl);
+ /* mode reset */
+ ctrl |= MVL_GEN_CTL_RESET;
+ status = ngbe_write_phy_reg_mdi(hw, MVL_GEN_CTL, 0, ctrl);
+
+ for (i = 0; i < MVL_PHY_RST_WAIT_PERIOD; i++) {
+ status = ngbe_read_phy_reg_mdi(hw, MVL_GEN_CTL, 0, &ctrl);
+ if (!(ctrl & MVL_GEN_CTL_RESET))
+ break;
+ msleep(1);
+ }
+
+ if (i == MVL_PHY_RST_WAIT_PERIOD) {
+ DEBUGOUT("PHY reset polling failed to complete.\n");
+ return NGBE_ERR_RESET_FAILED;
+ }
+
+ return status;
+}
+
diff --git a/drivers/net/ngbe/base/ngbe_phy_mvl.h b/drivers/net/ngbe/base/ngbe_phy_mvl.h
new file mode 100644
index 0000000000..a88ace9ec1
--- /dev/null
+++ b/drivers/net/ngbe/base/ngbe_phy_mvl.h
@@ -0,0 +1,92 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+ */
+
+#include "ngbe_phy.h"
+
+#ifndef _NGBE_PHY_MVL_H_
+#define _NGBE_PHY_MVL_H_
+
+#define NGBE_PHYID_MVL 0x01410DD0U
+
+/* Page 0 for Copper, Page 1 for Fiber */
+#define MVL_CTRL 0x0
+#define MVL_CTRL_RESET MS16(15, 0x1)
+#define MVL_CTRL_ANE MS16(12, 0x1)
+#define MVL_CTRL_RESTART_AN MS16(9, 0x1)
+#define MVL_ANA 0x4
+/* copper */
+#define MVL_CANA_ASM_PAUSE MS16(11, 0x1)
+#define MVL_CANA_PAUSE MS16(10, 0x1)
+#define MVL_PHY_100BASET_FULL MS16(8, 0x1)
+#define MVL_PHY_100BASET_HALF MS16(7, 0x1)
+#define MVL_PHY_10BASET_FULL MS16(6, 0x1)
+#define MVL_PHY_10BASET_HALF MS16(5, 0x1)
+/* fiber */
+#define MVL_FANA_PAUSE_MASK MS16(7, 0x3)
+#define MVL_FANA_SYM_PAUSE LS16(1, 7, 0x3)
+#define MVL_FANA_ASM_PAUSE LS16(2, 7, 0x3)
+#define MVL_PHY_1000BASEX_HALF MS16(6, 0x1)
+#define MVL_PHY_1000BASEX_FULL MS16(5, 0x1)
+#define MVL_LPAR 0x5
+#define MVL_CLPAR_ASM_PAUSE MS(11, 0x1)
+#define MVL_CLPAR_PAUSE MS(10, 0x1)
+#define MVL_FLPAR_PAUSE_MASK MS(7, 0x3)
+#define MVL_PHY_1000BASET 0x9
+#define MVL_PHY_1000BASET_FULL MS16(9, 0x1)
+#define MVL_PHY_1000BASET_HALF MS16(8, 0x1)
+#define MVL_CTRL1 0x10
+#define MVL_CTRL1_INTR_POL MS16(2, 0x1)
+#define MVL_PHYSR 0x11
+#define MVL_PHYSR_SPEED_MASK MS16(14, 0x3)
+#define MVL_PHYSR_SPEED_1000M LS16(2, 14, 0x3)
+#define MVL_PHYSR_SPEED_100M LS16(1, 14, 0x3)
+#define MVL_PHYSR_SPEED_10M LS16(0, 14, 0x3)
+#define MVL_PHYSR_LINK MS16(10, 0x1)
+#define MVL_INTR_EN 0x12
+#define MVL_INTR_EN_ANC MS16(11, 0x1)
+#define MVL_INTR_EN_LSC MS16(10, 0x1)
+#define MVL_INTR 0x13
+#define MVL_INTR_ANC MS16(11, 0x1)
+#define MVL_INTR_LSC MS16(10, 0x1)
+
+/* Page 2 */
+#define MVL_RGM_CTL2 0x15
+#define MVL_RGM_CTL2_TTC MS16(4, 0x1)
+#define MVL_RGM_CTL2_RTC MS16(5, 0x1)
+/* Page 3 */
+#define MVL_LEDFCR 0x10
+#define MVL_LEDFCR_CTL1 MS16(4, 0xF)
+#define MVL_LEDFCR_CTL1_CONF LS16(6, 4, 0xF)
+#define MVL_LEDFCR_CTL0 MS16(0, 0xF)
+#define MVL_LEDFCR_CTL0_CONF LS16(1, 0, 0xF)
+#define MVL_LEDPCR 0x11
+#define MVL_LEDPCR_CTL1 MS16(2, 0x3)
+#define MVL_LEDPCR_CTL1_CONF LS16(1, 2, 0x3)
+#define MVL_LEDPCR_CTL0 MS16(0, 0x3)
+#define MVL_LEDPCR_CTL0_CONF LS16(1, 0, 0x3)
+#define MVL_LEDTCR 0x12
+#define MVL_LEDTCR_INTR_POL MS16(11, 0x1)
+#define MVL_LEDTCR_INTR_EN MS16(7, 0x1)
+/* Page 18 */
+#define MVL_GEN_CTL 0x14
+#define MVL_GEN_CTL_RESET MS16(15, 0x1)
+#define MVL_GEN_CTL_MODE(v) LS16(v, 0, 0x7)
+#define MVL_GEN_CTL_MODE_COPPER LS16(0, 0, 0x7)
+#define MVL_GEN_CTL_MODE_FIBER LS16(2, 0, 0x7)
+
+/* reg 22 */
+#define MVL_PAGE_SEL 22
+
+/* reg 19_0 INT status*/
+#define MVL_PHY_ANC 0x0800
+#define MVL_PHY_LSC 0x0400
+
+s32 ngbe_read_phy_reg_mvl(struct ngbe_hw *hw, u32 reg_addr, u32 device_type,
+ u16 *phy_data);
+s32 ngbe_write_phy_reg_mvl(struct ngbe_hw *hw, u32 reg_addr, u32 device_type,
+ u16 phy_data);
+
+s32 ngbe_reset_phy_mvl(struct ngbe_hw *hw);
+
+#endif /* _NGBE_PHY_MVL_H_ */
diff --git a/drivers/net/ngbe/base/ngbe_phy_rtl.c b/drivers/net/ngbe/base/ngbe_phy_rtl.c
new file mode 100644
index 0000000000..0703ad9396
--- /dev/null
+++ b/drivers/net/ngbe/base/ngbe_phy_rtl.c
@@ -0,0 +1,65 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+ */
+
+#include "ngbe_phy_rtl.h"
+
+#define RTL_PHY_RST_WAIT_PERIOD 5
+
+s32 ngbe_read_phy_reg_rtl(struct ngbe_hw *hw,
+ u32 reg_addr, u32 device_type, u16 *phy_data)
+{
+ mdi_reg_t reg;
+ mdi_reg_22_t reg22;
+
+ reg.device_type = device_type;
+ reg.addr = reg_addr;
+ ngbe_mdi_map_register(&reg, &reg22);
+
+ wr32(hw, NGBE_PHY_CONFIG(RTL_PAGE_SELECT), reg22.page);
+ *phy_data = 0xFFFF & rd32(hw, NGBE_PHY_CONFIG(reg22.addr));
+
+ return 0;
+}
+
+s32 ngbe_write_phy_reg_rtl(struct ngbe_hw *hw,
+ u32 reg_addr, u32 device_type, u16 phy_data)
+{
+ mdi_reg_t reg;
+ mdi_reg_22_t reg22;
+
+ reg.device_type = device_type;
+ reg.addr = reg_addr;
+ ngbe_mdi_map_register(&reg, &reg22);
+
+ wr32(hw, NGBE_PHY_CONFIG(RTL_PAGE_SELECT), reg22.page);
+ wr32(hw, NGBE_PHY_CONFIG(reg22.addr), phy_data);
+
+ return 0;
+}
+
+s32 ngbe_reset_phy_rtl(struct ngbe_hw *hw)
+{
+ u16 value = 0, i;
+ s32 status = 0;
+
+ DEBUGFUNC("ngbe_reset_phy_rtl");
+
+ value |= RTL_BMCR_RESET;
+ status = hw->phy.write_reg(hw, RTL_BMCR, RTL_DEV_ZERO, value);
+
+ for (i = 0; i < RTL_PHY_RST_WAIT_PERIOD; i++) {
+ status = hw->phy.read_reg(hw, RTL_BMCR, RTL_DEV_ZERO, &value);
+ if (!(value & RTL_BMCR_RESET))
+ break;
+ msleep(1);
+ }
+
+ if (i == RTL_PHY_RST_WAIT_PERIOD) {
+ DEBUGOUT("PHY reset polling failed to complete.\n");
+ return NGBE_ERR_RESET_FAILED;
+ }
+
+ return status;
+}
+
diff --git a/drivers/net/ngbe/base/ngbe_phy_rtl.h b/drivers/net/ngbe/base/ngbe_phy_rtl.h
new file mode 100644
index 0000000000..ecb60b0ddd
--- /dev/null
+++ b/drivers/net/ngbe/base/ngbe_phy_rtl.h
@@ -0,0 +1,83 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+ */
+
+#include "ngbe_phy.h"
+
+#ifndef _NGBE_PHY_RTL_H_
+#define _NGBE_PHY_RTL_H_
+
+#define NGBE_PHYID_RTL 0x001CC800U
+
+/* Page 0 */
+#define RTL_DEV_ZERO 0
+#define RTL_BMCR 0x0
+#define RTL_BMCR_RESET MS16(15, 0x1)
+#define RTL_BMCR_SPEED_SELECT0 MS16(13, 0x1)
+#define RTL_BMCR_ANE MS16(12, 0x1)
+#define RTL_BMCR_RESTART_AN MS16(9, 0x1)
+#define RTL_BMCR_DUPLEX MS16(8, 0x1)
+#define RTL_BMCR_SPEED_SELECT1 MS16(6, 0x1)
+#define RTL_BMSR 0x1
+#define RTL_BMSR_ANC MS16(5, 0x1)
+#define RTL_ID1_OFFSET 0x2
+#define RTL_ID2_OFFSET 0x3
+#define RTL_ID_MASK 0xFFFFFC00U
+#define RTL_ANAR 0x4
+#define RTL_ANAR_APAUSE MS16(11, 0x1)
+#define RTL_ANAR_PAUSE MS16(10, 0x1)
+#define RTL_ANAR_100F MS16(8, 0x1)
+#define RTL_ANAR_100H MS16(7, 0x1)
+#define RTL_ANAR_10F MS16(6, 0x1)
+#define RTL_ANAR_10H MS16(5, 0x1)
+#define RTL_ANLPAR 0x5
+#define RTL_ANLPAR_LP MS16(10, 0x3)
+#define RTL_GBCR 0x9
+#define RTL_GBCR_1000F MS16(9, 0x1)
+/* Page 0xa42*/
+#define RTL_GSR 0x10
+#define RTL_GSR_ST MS16(0, 0x7)
+#define RTL_GSR_ST_LANON MS16(0, 0x3)
+#define RTL_INER 0x12
+#define RTL_INER_LSC MS16(4, 0x1)
+#define RTL_INER_ANC MS16(3, 0x1)
+/* Page 0xa43*/
+#define RTL_PHYSR 0x1A
+#define RTL_PHYSR_SPEED_MASK MS16(4, 0x3)
+#define RTL_PHYSR_SPEED_RES LS16(3, 4, 0x3)
+#define RTL_PHYSR_SPEED_1000M LS16(2, 4, 0x3)
+#define RTL_PHYSR_SPEED_100M LS16(1, 4, 0x3)
+#define RTL_PHYSR_SPEED_10M LS16(0, 4, 0x3)
+#define RTL_PHYSR_DP MS16(3, 0x1)
+#define RTL_PHYSR_RTLS MS16(2, 0x1)
+#define RTL_INSR 0x1D
+#define RTL_INSR_ACCESS MS16(5, 0x1)
+#define RTL_INSR_LSC MS16(4, 0x1)
+#define RTL_INSR_ANC MS16(3, 0x1)
+/* Page 0xa46*/
+#define RTL_SCR 0x14
+#define RTL_SCR_EXTINI MS16(1, 0x1)
+#define RTL_SCR_EFUSE MS16(0, 0x1)
+/* Page 0xa47*/
+/* Page 0xd04*/
+#define RTL_LCR 0x10
+#define RTL_EEELCR 0x11
+#define RTL_LPCR 0x12
+
+/* INTERNAL PHY CONTROL */
+#define RTL_PAGE_SELECT 31
+#define NGBE_INTERNAL_PHY_OFFSET_MAX 32
+#define NGBE_INTERNAL_PHY_ID 0x000732
+
+#define NGBE_INTPHY_LED0 0x0010
+#define NGBE_INTPHY_LED1 0x0040
+#define NGBE_INTPHY_LED2 0x2000
+
+s32 ngbe_read_phy_reg_rtl(struct ngbe_hw *hw, u32 reg_addr, u32 device_type,
+ u16 *phy_data);
+s32 ngbe_write_phy_reg_rtl(struct ngbe_hw *hw, u32 reg_addr, u32 device_type,
+ u16 phy_data);
+
+s32 ngbe_reset_phy_rtl(struct ngbe_hw *hw);
+
+#endif /* _NGBE_PHY_RTL_H_ */
diff --git a/drivers/net/ngbe/base/ngbe_phy_yt.c b/drivers/net/ngbe/base/ngbe_phy_yt.c
new file mode 100644
index 0000000000..84b20de45c
--- /dev/null
+++ b/drivers/net/ngbe/base/ngbe_phy_yt.c
@@ -0,0 +1,112 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+ */
+
+#include "ngbe_phy_yt.h"
+
+#define YT_PHY_RST_WAIT_PERIOD 5
+
+s32 ngbe_read_phy_reg_yt(struct ngbe_hw *hw,
+ u32 reg_addr, u32 device_type, u16 *phy_data)
+{
+ mdi_reg_t reg;
+ mdi_reg_22_t reg22;
+
+ reg.device_type = device_type;
+ reg.addr = reg_addr;
+
+ ngbe_mdi_map_register(&reg, &reg22);
+
+ /* Read MII reg according to media type */
+ if (hw->phy.media_type == ngbe_media_type_fiber) {
+ ngbe_write_phy_reg_ext_yt(hw, YT_SMI_PHY,
+ reg22.device_type, YT_SMI_PHY_SDS);
+ ngbe_read_phy_reg_mdi(hw, reg22.addr,
+ reg22.device_type, phy_data);
+ ngbe_write_phy_reg_ext_yt(hw, YT_SMI_PHY,
+ reg22.device_type, 0);
+ } else {
+ ngbe_read_phy_reg_mdi(hw, reg22.addr,
+ reg22.device_type, phy_data);
+ }
+
+ return 0;
+}
+
+s32 ngbe_write_phy_reg_yt(struct ngbe_hw *hw,
+ u32 reg_addr, u32 device_type, u16 phy_data)
+{
+ mdi_reg_t reg;
+ mdi_reg_22_t reg22;
+
+ reg.device_type = device_type;
+ reg.addr = reg_addr;
+
+ ngbe_mdi_map_register(&reg, &reg22);
+
+ /* Write MII reg according to media type */
+ if (hw->phy.media_type == ngbe_media_type_fiber) {
+ ngbe_write_phy_reg_ext_yt(hw, YT_SMI_PHY,
+ reg22.device_type, YT_SMI_PHY_SDS);
+ ngbe_write_phy_reg_mdi(hw, reg22.addr,
+ reg22.device_type, phy_data);
+ ngbe_write_phy_reg_ext_yt(hw, YT_SMI_PHY,
+ reg22.device_type, 0);
+ } else {
+ ngbe_write_phy_reg_mdi(hw, reg22.addr,
+ reg22.device_type, phy_data);
+ }
+
+ return 0;
+}
+
+s32 ngbe_read_phy_reg_ext_yt(struct ngbe_hw *hw,
+ u32 reg_addr, u32 device_type, u16 *phy_data)
+{
+ ngbe_write_phy_reg_mdi(hw, 0x1E, device_type, reg_addr);
+ ngbe_read_phy_reg_mdi(hw, 0x1F, device_type, phy_data);
+
+ return 0;
+}
+
+s32 ngbe_write_phy_reg_ext_yt(struct ngbe_hw *hw,
+ u32 reg_addr, u32 device_type, u16 phy_data)
+{
+ ngbe_write_phy_reg_mdi(hw, 0x1E, device_type, reg_addr);
+ ngbe_write_phy_reg_mdi(hw, 0x1F, device_type, phy_data);
+
+ return 0;
+}
+
+s32 ngbe_reset_phy_yt(struct ngbe_hw *hw)
+{
+ u32 i;
+ u16 ctrl = 0;
+ s32 status = 0;
+
+ DEBUGFUNC("ngbe_reset_phy_yt");
+
+ if (hw->phy.type != ngbe_phy_yt8521s &&
+ hw->phy.type != ngbe_phy_yt8521s_sfi)
+ return NGBE_ERR_PHY_TYPE;
+
+ status = hw->phy.read_reg(hw, YT_BCR, 0, &ctrl);
+ /* sds software reset */
+ ctrl |= YT_BCR_RESET;
+ status = hw->phy.write_reg(hw, YT_BCR, 0, ctrl);
+
+ for (i = 0; i < YT_PHY_RST_WAIT_PERIOD; i++) {
+ status = hw->phy.read_reg(hw, YT_BCR, 0, &ctrl);
+ if (!(ctrl & YT_BCR_RESET))
+ break;
+ msleep(1);
+ }
+
+ if (i == YT_PHY_RST_WAIT_PERIOD) {
+ DEBUGOUT("PHY reset polling failed to complete.\n");
+ return NGBE_ERR_RESET_FAILED;
+ }
+
+ return status;
+}
+
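
For reference, a tiny sketch of reading a register in the YT extended space through the 0x1E/0x1F indirection implemented above (illustrative only; register choice is an assumption):

    static s32 example_read_yt_chip(struct ngbe_hw *hw, u16 *val)
    {
    	/* YT_CHIP lives in the extended space, so go through the
    	 * address (0x1E) / data (0x1F) pair instead of plain MDIO
    	 */
    	return ngbe_read_phy_reg_ext_yt(hw, YT_CHIP, 0, val);
    }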
diff --git a/drivers/net/ngbe/base/ngbe_phy_yt.h b/drivers/net/ngbe/base/ngbe_phy_yt.h
new file mode 100644
index 0000000000..80fd420a63
--- /dev/null
+++ b/drivers/net/ngbe/base/ngbe_phy_yt.h
@@ -0,0 +1,67 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+ */
+
+#include "ngbe_phy.h"
+
+#ifndef _NGBE_PHY_YT_H_
+#define _NGBE_PHY_YT_H_
+
+#define NGBE_PHYID_YT 0x00000110U
+
+/* Common EXT */
+#define YT_SMI_PHY 0xA000
+#define YT_SMI_PHY_SDS MS16(1, 0x1) /* 0 for UTP */
+#define YT_CHIP 0xA001
+#define YT_CHIP_SW_RST MS16(15, 0x1)
+#define YT_CHIP_SW_LDO_EN MS16(6, 0x1)
+#define YT_CHIP_MODE_SEL(v) LS16(v, 0, 0x7)
+#define YT_RGMII_CONF1 0xA003
+#define YT_RGMII_CONF1_RXDELAY MS16(10, 0xF)
+#define YT_RGMII_CONF1_TXDELAY_FE MS16(4, 0xF)
+#define YT_RGMII_CONF1_TXDELAY MS16(0, 0x1)
+#define YT_MISC 0xA006
+#define YT_MISC_FIBER_PRIO MS16(8, 0x1) /* 0 for UTP */
+
+/* MII common registers in UTP and SDS */
+#define YT_BCR 0x0
+#define YT_BCR_RESET MS16(15, 0x1)
+#define YT_BCR_PWDN MS16(11, 0x1)
+#define YT_ANA 0x4
+/* copper */
+#define YT_ANA_100BASET_FULL MS16(8, 0x1)
+#define YT_ANA_10BASET_FULL MS16(6, 0x1)
+/* fiber */
+#define YT_FANA_PAUSE_MASK MS16(7, 0x3)
+
+#define YT_LPAR 0x5
+#define YT_CLPAR_ASM_PAUSE MS(11, 0x1)
+#define YT_CLPAR_PAUSE MS(10, 0x1)
+#define YT_FLPAR_PAUSE_MASK MS(7, 0x3)
+
+#define YT_MS_CTRL 0x9
+#define YT_MS_1000BASET_FULL MS16(9, 0x1)
+#define YT_SPST 0x11
+#define YT_SPST_SPEED_MASK MS16(14, 0x3)
+#define YT_SPST_SPEED_1000M LS16(2, 14, 0x3)
+#define YT_SPST_SPEED_100M LS16(1, 14, 0x3)
+#define YT_SPST_SPEED_10M LS16(0, 14, 0x3)
+#define YT_SPST_LINK MS16(10, 0x1)
+
+/* UTP only */
+#define YT_INTR 0x12
+#define YT_INTR_ENA_MASK MS16(2, 0x3)
+#define YT_INTR_STATUS 0x13
+
+s32 ngbe_read_phy_reg_yt(struct ngbe_hw *hw, u32 reg_addr, u32 device_type,
+ u16 *phy_data);
+s32 ngbe_write_phy_reg_yt(struct ngbe_hw *hw, u32 reg_addr, u32 device_type,
+ u16 phy_data);
+s32 ngbe_read_phy_reg_ext_yt(struct ngbe_hw *hw,
+ u32 reg_addr, u32 device_type, u16 *phy_data);
+s32 ngbe_write_phy_reg_ext_yt(struct ngbe_hw *hw,
+ u32 reg_addr, u32 device_type, u16 phy_data);
+
+s32 ngbe_reset_phy_yt(struct ngbe_hw *hw);
+
+#endif /* _NGBE_PHY_YT_H_ */
diff --git a/drivers/net/ngbe/base/ngbe_type.h b/drivers/net/ngbe/base/ngbe_type.h
index 55c686c0a3..8ed14560c3 100644
--- a/drivers/net/ngbe/base/ngbe_type.h
+++ b/drivers/net/ngbe/base/ngbe_type.h
@@ -94,6 +94,7 @@ struct ngbe_mac_info {
/* Manageability interface */
s32 (*init_thermal_sensor_thresh)(struct ngbe_hw *hw);
+ s32 (*check_overtemp)(struct ngbe_hw *hw);
enum ngbe_mac_type type;
u32 max_tx_queues;
@@ -103,8 +104,24 @@ struct ngbe_mac_info {
};
struct ngbe_phy_info {
+ s32 (*identify)(struct ngbe_hw *hw);
+ s32 (*reset_hw)(struct ngbe_hw *hw);
+ s32 (*read_reg)(struct ngbe_hw *hw, u32 reg_addr,
+ u32 device_type, u16 *phy_data);
+ s32 (*write_reg)(struct ngbe_hw *hw, u32 reg_addr,
+ u32 device_type, u16 phy_data);
+ s32 (*read_reg_unlocked)(struct ngbe_hw *hw, u32 reg_addr,
+ u32 device_type, u16 *phy_data);
+ s32 (*write_reg_unlocked)(struct ngbe_hw *hw, u32 reg_addr,
+ u32 device_type, u16 phy_data);
+
enum ngbe_media_type media_type;
enum ngbe_phy_type type;
+ u32 addr;
+ u32 id;
+ u32 revision;
+ u32 phy_semaphore_mask;
+ bool reset_disable;
};
struct ngbe_hw {
--
2.21.0.windows.1
* [dpdk-dev] [PATCH v6 09/19] net/ngbe: store MAC address
2021-06-17 10:59 [dpdk-dev] [PATCH v6 00/19] net: ngbe PMD Jiawen Wu
` (7 preceding siblings ...)
2021-06-17 10:59 ` [dpdk-dev] [PATCH v6 08/19] net/ngbe: identify PHY and reset PHY Jiawen Wu
@ 2021-06-17 10:59 ` Jiawen Wu
2021-07-02 16:12 ` Andrew Rybchenko
2021-06-17 10:59 ` [dpdk-dev] [PATCH v6 10/19] net/ngbe: support link update Jiawen Wu
` (9 subsequent siblings)
18 siblings, 1 reply; 42+ messages in thread
From: Jiawen Wu @ 2021-06-17 10:59 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu
Store MAC addresses and init receive address filters.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
drivers/net/ngbe/base/ngbe_dummy.h | 33 +++
drivers/net/ngbe/base/ngbe_hw.c | 323 +++++++++++++++++++++++++++++
drivers/net/ngbe/base/ngbe_hw.h | 13 ++
drivers/net/ngbe/base/ngbe_type.h | 19 ++
drivers/net/ngbe/ngbe_ethdev.c | 25 +++
drivers/net/ngbe/ngbe_ethdev.h | 2 +
6 files changed, 415 insertions(+)
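Note on the address register layout used in the hunks below: ngbe_set_rar() and ngbe_get_mac_addr() split the six MAC address bytes across two registers selected through NGBE_ETHADDRIDX, with the upper two bytes in NGBE_ETHADDRH and the lower four in NGBE_ETHADDRL. A minimal sketch of that packing, derived from ngbe_get_mac_addr() in this patch (the helper below is illustrative, not driver code):

#include <stdint.h>

/* Illustrative only: how a 6-byte MAC maps onto the ETHADDRH/ETHADDRL
 * values programmed by ngbe_set_rar(). */
static void
mac_to_rar(const uint8_t addr[6], uint32_t *rar_high, uint32_t *rar_low)
{
        *rar_high = ((uint32_t)addr[0] << 8) | addr[1];
        *rar_low  = ((uint32_t)addr[2] << 24) | ((uint32_t)addr[3] << 16) |
                    ((uint32_t)addr[4] << 8)  | addr[5];
}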
diff --git a/drivers/net/ngbe/base/ngbe_dummy.h b/drivers/net/ngbe/base/ngbe_dummy.h
index 15017cfd82..8462d6d1cb 100644
--- a/drivers/net/ngbe/base/ngbe_dummy.h
+++ b/drivers/net/ngbe/base/ngbe_dummy.h
@@ -51,6 +51,10 @@ static inline s32 ngbe_mac_stop_hw_dummy(struct ngbe_hw *TUP0)
{
return NGBE_ERR_OPS_DUMMY;
}
+static inline s32 ngbe_mac_get_mac_addr_dummy(struct ngbe_hw *TUP0, u8 *TUP1)
+{
+ return NGBE_ERR_OPS_DUMMY;
+}
static inline s32 ngbe_mac_acquire_swfw_sync_dummy(struct ngbe_hw *TUP0,
u32 TUP1)
{
@@ -60,6 +64,29 @@ static inline void ngbe_mac_release_swfw_sync_dummy(struct ngbe_hw *TUP0,
u32 TUP1)
{
}
+static inline s32 ngbe_mac_set_rar_dummy(struct ngbe_hw *TUP0, u32 TUP1,
+ u8 *TUP2, u32 TUP3, u32 TUP4)
+{
+ return NGBE_ERR_OPS_DUMMY;
+}
+static inline s32 ngbe_mac_clear_rar_dummy(struct ngbe_hw *TUP0, u32 TUP1)
+{
+ return NGBE_ERR_OPS_DUMMY;
+}
+static inline s32 ngbe_mac_set_vmdq_dummy(struct ngbe_hw *TUP0, u32 TUP1,
+ u32 TUP2)
+{
+ return NGBE_ERR_OPS_DUMMY;
+}
+static inline s32 ngbe_mac_clear_vmdq_dummy(struct ngbe_hw *TUP0, u32 TUP1,
+ u32 TUP2)
+{
+ return NGBE_ERR_OPS_DUMMY;
+}
+static inline s32 ngbe_mac_init_rx_addrs_dummy(struct ngbe_hw *TUP0)
+{
+ return NGBE_ERR_OPS_DUMMY;
+}
static inline s32 ngbe_mac_init_thermal_ssth_dummy(struct ngbe_hw *TUP0)
{
return NGBE_ERR_OPS_DUMMY;
@@ -105,8 +132,14 @@ static inline void ngbe_init_ops_dummy(struct ngbe_hw *hw)
hw->mac.init_hw = ngbe_mac_init_hw_dummy;
hw->mac.reset_hw = ngbe_mac_reset_hw_dummy;
hw->mac.stop_hw = ngbe_mac_stop_hw_dummy;
+ hw->mac.get_mac_addr = ngbe_mac_get_mac_addr_dummy;
hw->mac.acquire_swfw_sync = ngbe_mac_acquire_swfw_sync_dummy;
hw->mac.release_swfw_sync = ngbe_mac_release_swfw_sync_dummy;
+ hw->mac.set_rar = ngbe_mac_set_rar_dummy;
+ hw->mac.clear_rar = ngbe_mac_clear_rar_dummy;
+ hw->mac.set_vmdq = ngbe_mac_set_vmdq_dummy;
+ hw->mac.clear_vmdq = ngbe_mac_clear_vmdq_dummy;
+ hw->mac.init_rx_addrs = ngbe_mac_init_rx_addrs_dummy;
hw->mac.init_thermal_sensor_thresh = ngbe_mac_init_thermal_ssth_dummy;
hw->mac.check_overtemp = ngbe_mac_check_overtemp_dummy;
hw->phy.identify = ngbe_phy_identify_dummy;
diff --git a/drivers/net/ngbe/base/ngbe_hw.c b/drivers/net/ngbe/base/ngbe_hw.c
index 5723213209..7f20d6c807 100644
--- a/drivers/net/ngbe/base/ngbe_hw.c
+++ b/drivers/net/ngbe/base/ngbe_hw.c
@@ -142,9 +142,49 @@ s32 ngbe_reset_hw_em(struct ngbe_hw *hw)
msec_delay(50);
+ /* Store the permanent mac address */
+ hw->mac.get_mac_addr(hw, hw->mac.perm_addr);
+
+ /*
+ * Store MAC address from RAR0, clear receive address registers, and
+ * clear the multicast table.
+ */
+ hw->mac.num_rar_entries = NGBE_EM_RAR_ENTRIES;
+ hw->mac.init_rx_addrs(hw);
+
return status;
}
+/**
+ * ngbe_get_mac_addr - Generic get MAC address
+ * @hw: pointer to hardware structure
+ * @mac_addr: Adapter MAC address
+ *
+ * Reads the adapter's MAC address from first Receive Address Register (RAR0)
+ * A reset of the adapter must be performed prior to calling this function
+ * in order for the MAC address to have been loaded from the EEPROM into RAR0
+ **/
+s32 ngbe_get_mac_addr(struct ngbe_hw *hw, u8 *mac_addr)
+{
+ u32 rar_high;
+ u32 rar_low;
+ u16 i;
+
+ DEBUGFUNC("ngbe_get_mac_addr");
+
+ wr32(hw, NGBE_ETHADDRIDX, 0);
+ rar_high = rd32(hw, NGBE_ETHADDRH);
+ rar_low = rd32(hw, NGBE_ETHADDRL);
+
+ for (i = 0; i < 2; i++)
+ mac_addr[i] = (u8)(rar_high >> (1 - i) * 8);
+
+ for (i = 0; i < 4; i++)
+ mac_addr[i + 2] = (u8)(rar_low >> (3 - i) * 8);
+
+ return 0;
+}
+
/**
* ngbe_set_lan_id_multi_port - Set LAN id for PCIe multiple port devices
* @hw: pointer to the HW structure
@@ -215,6 +255,196 @@ s32 ngbe_stop_hw(struct ngbe_hw *hw)
return 0;
}
+/**
+ * ngbe_validate_mac_addr - Validate MAC address
+ * @mac_addr: pointer to MAC address.
+ *
+ * Tests a MAC address to ensure it is a valid Individual Address.
+ **/
+s32 ngbe_validate_mac_addr(u8 *mac_addr)
+{
+ s32 status = 0;
+
+ DEBUGFUNC("ngbe_validate_mac_addr");
+
+ /* Make sure it is not a multicast address */
+ if (NGBE_IS_MULTICAST((struct rte_ether_addr *)mac_addr)) {
+ status = NGBE_ERR_INVALID_MAC_ADDR;
+ /* Not a broadcast address */
+ } else if (NGBE_IS_BROADCAST((struct rte_ether_addr *)mac_addr)) {
+ status = NGBE_ERR_INVALID_MAC_ADDR;
+ /* Reject the zero address */
+ } else if (mac_addr[0] == 0 && mac_addr[1] == 0 && mac_addr[2] == 0 &&
+ mac_addr[3] == 0 && mac_addr[4] == 0 && mac_addr[5] == 0) {
+ status = NGBE_ERR_INVALID_MAC_ADDR;
+ }
+ return status;
+}
+
+/**
+ * ngbe_set_rar - Set Rx address register
+ * @hw: pointer to hardware structure
+ * @index: Receive address register to write
+ * @addr: Address to put into receive address register
+ * @vmdq: VMDq "set" or "pool" index
+ * @enable_addr: set flag that address is active
+ *
+ * Puts an ethernet address into a receive address register.
+ **/
+s32 ngbe_set_rar(struct ngbe_hw *hw, u32 index, u8 *addr, u32 vmdq,
+ u32 enable_addr)
+{
+ u32 rar_low, rar_high;
+ u32 rar_entries = hw->mac.num_rar_entries;
+
+ DEBUGFUNC("ngbe_set_rar");
+
+ /* Make sure we are using a valid rar index range */
+ if (index >= rar_entries) {
+ DEBUGOUT("RAR index %d is out of range.\n", index);
+ return NGBE_ERR_INVALID_ARGUMENT;
+ }
+
+ /* setup VMDq pool selection before this RAR gets enabled */
+ hw->mac.set_vmdq(hw, index, vmdq);
+
+ /*
+ * HW expects these in little endian so we reverse the byte
+ * order from network order (big endian) to little endian
+ */
+ rar_low = NGBE_ETHADDRL_AD0(addr[5]) |
+ NGBE_ETHADDRL_AD1(addr[4]) |
+ NGBE_ETHADDRL_AD2(addr[3]) |
+ NGBE_ETHADDRL_AD3(addr[2]);
+ /*
+ * Some parts put the VMDq setting in the extra RAH bits,
+ * so save everything except the lower 16 bits that hold part
+ * of the address and the address valid bit.
+ */
+ rar_high = rd32(hw, NGBE_ETHADDRH);
+ rar_high &= ~NGBE_ETHADDRH_AD_MASK;
+ rar_high |= (NGBE_ETHADDRH_AD4(addr[1]) |
+ NGBE_ETHADDRH_AD5(addr[0]));
+
+ rar_high &= ~NGBE_ETHADDRH_VLD;
+ if (enable_addr != 0)
+ rar_high |= NGBE_ETHADDRH_VLD;
+
+ wr32(hw, NGBE_ETHADDRIDX, index);
+ wr32(hw, NGBE_ETHADDRL, rar_low);
+ wr32(hw, NGBE_ETHADDRH, rar_high);
+
+ return 0;
+}
+
+/**
+ * ngbe_clear_rar - Remove Rx address register
+ * @hw: pointer to hardware structure
+ * @index: Receive address register to write
+ *
+ * Clears an ethernet address from a receive address register.
+ **/
+s32 ngbe_clear_rar(struct ngbe_hw *hw, u32 index)
+{
+ u32 rar_high;
+ u32 rar_entries = hw->mac.num_rar_entries;
+
+ DEBUGFUNC("ngbe_clear_rar");
+
+ /* Make sure we are using a valid rar index range */
+ if (index >= rar_entries) {
+ DEBUGOUT("RAR index %d is out of range.\n", index);
+ return NGBE_ERR_INVALID_ARGUMENT;
+ }
+
+ /*
+ * Some parts put the VMDq setting in the extra RAH bits,
+ * so save everything except the lower 16 bits that hold part
+ * of the address and the address valid bit.
+ */
+ wr32(hw, NGBE_ETHADDRIDX, index);
+ rar_high = rd32(hw, NGBE_ETHADDRH);
+ rar_high &= ~(NGBE_ETHADDRH_AD_MASK | NGBE_ETHADDRH_VLD);
+
+ wr32(hw, NGBE_ETHADDRL, 0);
+ wr32(hw, NGBE_ETHADDRH, rar_high);
+
+ /* clear VMDq pool/queue selection for this RAR */
+ hw->mac.clear_vmdq(hw, index, BIT_MASK32);
+
+ return 0;
+}
+
+/**
+ * ngbe_init_rx_addrs - Initializes receive address filters.
+ * @hw: pointer to hardware structure
+ *
+ * Places the MAC address in receive address register 0 and clears the rest
+ * of the receive address registers. Clears the multicast table. Assumes
+ * the receiver is in reset when the routine is called.
+ **/
+s32 ngbe_init_rx_addrs(struct ngbe_hw *hw)
+{
+ u32 i;
+ u32 psrctl;
+ u32 rar_entries = hw->mac.num_rar_entries;
+
+ DEBUGFUNC("ngbe_init_rx_addrs");
+
+ /*
+ * If the current mac address is valid, assume it is a software override
+ * to the permanent address.
+ * Otherwise, use the permanent address from the eeprom.
+ */
+ if (ngbe_validate_mac_addr(hw->mac.addr) ==
+ NGBE_ERR_INVALID_MAC_ADDR) {
+ /* Get the MAC address from the RAR0 for later reference */
+ hw->mac.get_mac_addr(hw, hw->mac.addr);
+
+ DEBUGOUT(" Keeping Current RAR0 Addr =%.2X %.2X %.2X ",
+ hw->mac.addr[0], hw->mac.addr[1],
+ hw->mac.addr[2]);
+ DEBUGOUT("%.2X %.2X %.2X\n", hw->mac.addr[3],
+ hw->mac.addr[4], hw->mac.addr[5]);
+ } else {
+ /* Setup the receive address. */
+ DEBUGOUT("Overriding MAC Address in RAR[0]\n");
+ DEBUGOUT(" New MAC Addr =%.2X %.2X %.2X ",
+ hw->mac.addr[0], hw->mac.addr[1],
+ hw->mac.addr[2]);
+ DEBUGOUT("%.2X %.2X %.2X\n", hw->mac.addr[3],
+ hw->mac.addr[4], hw->mac.addr[5]);
+
+ hw->mac.set_rar(hw, 0, hw->mac.addr, 0, true);
+ }
+
+ /* clear VMDq pool/queue selection for RAR 0 */
+ hw->mac.clear_vmdq(hw, 0, BIT_MASK32);
+
+ /* Zero out the other receive addresses. */
+ DEBUGOUT("Clearing RAR[1-%d]\n", rar_entries - 1);
+ for (i = 1; i < rar_entries; i++) {
+ wr32(hw, NGBE_ETHADDRIDX, i);
+ wr32(hw, NGBE_ETHADDRL, 0);
+ wr32(hw, NGBE_ETHADDRH, 0);
+ }
+
+ /* Clear the MTA */
+ hw->addr_ctrl.mta_in_use = 0;
+ psrctl = rd32(hw, NGBE_PSRCTL);
+ psrctl &= ~(NGBE_PSRCTL_ADHF12_MASK | NGBE_PSRCTL_MCHFENA);
+ psrctl |= NGBE_PSRCTL_ADHF12(hw->mac.mc_filter_type);
+ wr32(hw, NGBE_PSRCTL, psrctl);
+
+ DEBUGOUT(" Clearing MTA\n");
+ for (i = 0; i < hw->mac.mcft_size; i++)
+ wr32(hw, NGBE_MCADDRTBL(i), 0);
+
+ ngbe_init_uta_tables(hw);
+
+ return 0;
+}
+
/**
* ngbe_acquire_swfw_sync - Acquire SWFW semaphore
* @hw: pointer to hardware structure
@@ -286,6 +516,89 @@ void ngbe_release_swfw_sync(struct ngbe_hw *hw, u32 mask)
ngbe_release_eeprom_semaphore(hw);
}
+/**
+ * ngbe_clear_vmdq - Disassociate a VMDq pool index from a rx address
+ * @hw: pointer to hardware struct
+ * @rar: receive address register index to disassociate
+ * @vmdq: VMDq pool index to remove from the rar
+ **/
+s32 ngbe_clear_vmdq(struct ngbe_hw *hw, u32 rar, u32 vmdq)
+{
+ u32 mpsar;
+ u32 rar_entries = hw->mac.num_rar_entries;
+
+ DEBUGFUNC("ngbe_clear_vmdq");
+
+ /* Make sure we are using a valid rar index range */
+ if (rar >= rar_entries) {
+ DEBUGOUT("RAR index %d is out of range.\n", rar);
+ return NGBE_ERR_INVALID_ARGUMENT;
+ }
+
+ wr32(hw, NGBE_ETHADDRIDX, rar);
+ mpsar = rd32(hw, NGBE_ETHADDRASS);
+
+ if (NGBE_REMOVED(hw->hw_addr))
+ goto done;
+
+ if (!mpsar)
+ goto done;
+
+ mpsar &= ~(1 << vmdq);
+ wr32(hw, NGBE_ETHADDRASS, mpsar);
+
+ /* was that the last pool using this rar? */
+ if (mpsar == 0 && rar != 0)
+ hw->mac.clear_rar(hw, rar);
+done:
+ return 0;
+}
+
+/**
+ * ngbe_set_vmdq - Associate a VMDq pool index with a rx address
+ * @hw: pointer to hardware struct
+ * @rar: receive address register index to associate with a VMDq index
+ * @vmdq: VMDq pool index
+ **/
+s32 ngbe_set_vmdq(struct ngbe_hw *hw, u32 rar, u32 vmdq)
+{
+ u32 mpsar;
+ u32 rar_entries = hw->mac.num_rar_entries;
+
+ DEBUGFUNC("ngbe_set_vmdq");
+
+ /* Make sure we are using a valid rar index range */
+ if (rar >= rar_entries) {
+ DEBUGOUT("RAR index %d is out of range.\n", rar);
+ return NGBE_ERR_INVALID_ARGUMENT;
+ }
+
+ wr32(hw, NGBE_ETHADDRIDX, rar);
+
+ mpsar = rd32(hw, NGBE_ETHADDRASS);
+ mpsar |= 1 << vmdq;
+ wr32(hw, NGBE_ETHADDRASS, mpsar);
+
+ return 0;
+}
+
+/**
+ * ngbe_init_uta_tables - Initialize the Unicast Table Array
+ * @hw: pointer to hardware structure
+ **/
+s32 ngbe_init_uta_tables(struct ngbe_hw *hw)
+{
+ int i;
+
+ DEBUGFUNC("ngbe_init_uta_tables");
+ DEBUGOUT(" Clearing UTA\n");
+
+ for (i = 0; i < 128; i++)
+ wr32(hw, NGBE_UCADDRTBL(i), 0);
+
+ return 0;
+}
+
/**
* ngbe_init_thermal_sensor_thresh - Inits thermal sensor thresholds
* @hw: pointer to hardware structure
@@ -481,10 +794,18 @@ s32 ngbe_init_ops_pf(struct ngbe_hw *hw)
/* MAC */
mac->init_hw = ngbe_init_hw;
mac->reset_hw = ngbe_reset_hw_em;
+ mac->get_mac_addr = ngbe_get_mac_addr;
mac->stop_hw = ngbe_stop_hw;
mac->acquire_swfw_sync = ngbe_acquire_swfw_sync;
mac->release_swfw_sync = ngbe_release_swfw_sync;
+ /* RAR */
+ mac->set_rar = ngbe_set_rar;
+ mac->clear_rar = ngbe_clear_rar;
+ mac->init_rx_addrs = ngbe_init_rx_addrs;
+ mac->set_vmdq = ngbe_set_vmdq;
+ mac->clear_vmdq = ngbe_clear_vmdq;
+
/* Manageability interface */
mac->init_thermal_sensor_thresh = ngbe_init_thermal_sensor_thresh;
mac->check_overtemp = ngbe_mac_check_overtemp;
@@ -493,6 +814,8 @@ s32 ngbe_init_ops_pf(struct ngbe_hw *hw)
rom->init_params = ngbe_init_eeprom_params;
rom->validate_checksum = ngbe_validate_eeprom_checksum_em;
+ mac->mcft_size = NGBE_EM_MC_TBL_SIZE;
+ mac->num_rar_entries = NGBE_EM_RAR_ENTRIES;
mac->max_rx_queues = NGBE_EM_MAX_RX_QUEUES;
mac->max_tx_queues = NGBE_EM_MAX_TX_QUEUES;
diff --git a/drivers/net/ngbe/base/ngbe_hw.h b/drivers/net/ngbe/base/ngbe_hw.h
index 3c8e646bb7..0b3d60ae29 100644
--- a/drivers/net/ngbe/base/ngbe_hw.h
+++ b/drivers/net/ngbe/base/ngbe_hw.h
@@ -10,16 +10,29 @@
#define NGBE_EM_MAX_TX_QUEUES 8
#define NGBE_EM_MAX_RX_QUEUES 8
+#define NGBE_EM_RAR_ENTRIES 32
+#define NGBE_EM_MC_TBL_SIZE 32
s32 ngbe_init_hw(struct ngbe_hw *hw);
s32 ngbe_reset_hw_em(struct ngbe_hw *hw);
s32 ngbe_stop_hw(struct ngbe_hw *hw);
+s32 ngbe_get_mac_addr(struct ngbe_hw *hw, u8 *mac_addr);
void ngbe_set_lan_id_multi_port(struct ngbe_hw *hw);
+s32 ngbe_set_rar(struct ngbe_hw *hw, u32 index, u8 *addr, u32 vmdq,
+ u32 enable_addr);
+s32 ngbe_clear_rar(struct ngbe_hw *hw, u32 index);
+s32 ngbe_init_rx_addrs(struct ngbe_hw *hw);
+
+s32 ngbe_validate_mac_addr(u8 *mac_addr);
s32 ngbe_acquire_swfw_sync(struct ngbe_hw *hw, u32 mask);
void ngbe_release_swfw_sync(struct ngbe_hw *hw, u32 mask);
+s32 ngbe_set_vmdq(struct ngbe_hw *hw, u32 rar, u32 vmdq);
+s32 ngbe_clear_vmdq(struct ngbe_hw *hw, u32 rar, u32 vmdq);
+s32 ngbe_init_uta_tables(struct ngbe_hw *hw);
+
s32 ngbe_init_thermal_sensor_thresh(struct ngbe_hw *hw);
s32 ngbe_mac_check_overtemp(struct ngbe_hw *hw);
void ngbe_disable_rx(struct ngbe_hw *hw);
diff --git a/drivers/net/ngbe/base/ngbe_type.h b/drivers/net/ngbe/base/ngbe_type.h
index 8ed14560c3..517db3380d 100644
--- a/drivers/net/ngbe/base/ngbe_type.h
+++ b/drivers/net/ngbe/base/ngbe_type.h
@@ -63,6 +63,10 @@ enum ngbe_media_type {
struct ngbe_hw;
+struct ngbe_addr_filter_info {
+ u32 mta_in_use;
+};
+
/* Bus parameters */
struct ngbe_bus_info {
void (*set_lan_id)(struct ngbe_hw *hw);
@@ -89,14 +93,28 @@ struct ngbe_mac_info {
s32 (*init_hw)(struct ngbe_hw *hw);
s32 (*reset_hw)(struct ngbe_hw *hw);
s32 (*stop_hw)(struct ngbe_hw *hw);
+ s32 (*get_mac_addr)(struct ngbe_hw *hw, u8 *mac_addr);
s32 (*acquire_swfw_sync)(struct ngbe_hw *hw, u32 mask);
void (*release_swfw_sync)(struct ngbe_hw *hw, u32 mask);
+ /* RAR */
+ s32 (*set_rar)(struct ngbe_hw *hw, u32 index, u8 *addr, u32 vmdq,
+ u32 enable_addr);
+ s32 (*clear_rar)(struct ngbe_hw *hw, u32 index);
+ s32 (*set_vmdq)(struct ngbe_hw *hw, u32 rar, u32 vmdq);
+ s32 (*clear_vmdq)(struct ngbe_hw *hw, u32 rar, u32 vmdq);
+ s32 (*init_rx_addrs)(struct ngbe_hw *hw);
+
/* Manageability interface */
s32 (*init_thermal_sensor_thresh)(struct ngbe_hw *hw);
s32 (*check_overtemp)(struct ngbe_hw *hw);
enum ngbe_mac_type type;
+ u8 addr[ETH_ADDR_LEN];
+ u8 perm_addr[ETH_ADDR_LEN];
+ s32 mc_filter_type;
+ u32 mcft_size;
+ u32 num_rar_entries;
u32 max_tx_queues;
u32 max_rx_queues;
struct ngbe_thermal_sensor_data thermal_sensor_data;
@@ -128,6 +146,7 @@ struct ngbe_hw {
void IOMEM *hw_addr;
void *back;
struct ngbe_mac_info mac;
+ struct ngbe_addr_filter_info addr_ctrl;
struct ngbe_phy_info phy;
struct ngbe_rom_info rom;
struct ngbe_bus_info bus;
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 31d4dda976..deca64137d 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -116,6 +116,31 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
return -EIO;
}
+ /* Allocate memory for storing MAC addresses */
+ eth_dev->data->mac_addrs = rte_zmalloc("ngbe", RTE_ETHER_ADDR_LEN *
+ hw->mac.num_rar_entries, 0);
+ if (eth_dev->data->mac_addrs == NULL) {
+ PMD_INIT_LOG(ERR,
+ "Failed to allocate %u bytes needed to store "
+ "MAC addresses",
+ RTE_ETHER_ADDR_LEN * hw->mac.num_rar_entries);
+ return -ENOMEM;
+ }
+
+ /* Copy the permanent MAC address */
+ rte_ether_addr_copy((struct rte_ether_addr *)hw->mac.perm_addr,
+ ð_dev->data->mac_addrs[0]);
+
+ /* Allocate memory for storing hash filter MAC addresses */
+ eth_dev->data->hash_mac_addrs = rte_zmalloc("ngbe",
+ RTE_ETHER_ADDR_LEN * NGBE_VMDQ_NUM_UC_MAC, 0);
+ if (eth_dev->data->hash_mac_addrs == NULL) {
+ PMD_INIT_LOG(ERR,
+ "Failed to allocate %d bytes needed to store MAC addresses",
+ RTE_ETHER_ADDR_LEN * NGBE_VMDQ_NUM_UC_MAC);
+ return -ENOMEM;
+ }
+
return 0;
}
diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
index d4d02c6bd8..87cc1cff6b 100644
--- a/drivers/net/ngbe/ngbe_ethdev.h
+++ b/drivers/net/ngbe/ngbe_ethdev.h
@@ -30,4 +30,6 @@ ngbe_dev_hw(struct rte_eth_dev *dev)
return hw;
}
+#define NGBE_VMDQ_NUM_UC_MAC 4096 /* Maximum nb. of UC MAC addr. */
+
#endif /* _NGBE_ETHDEV_H_ */
--
2.21.0.windows.1
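On the application side, the permanent address that eth_ngbe_dev_init() copies into dev->data->mac_addrs[0] is reachable through the standard ethdev API; a minimal usage sketch, assuming EAL is initialized and the port has been probed (helper name is illustrative):

#include <stdio.h>
#include <rte_ether.h>
#include <rte_ethdev.h>

/* Usage sketch only: read back the MAC the PMD stored at init time. */
static void
print_port_mac(uint16_t port_id)
{
        struct rte_ether_addr addr;

        rte_eth_macaddr_get(port_id, &addr);
        printf("port %u MAC: %02x:%02x:%02x:%02x:%02x:%02x\n", port_id,
               addr.addr_bytes[0], addr.addr_bytes[1], addr.addr_bytes[2],
               addr.addr_bytes[3], addr.addr_bytes[4], addr.addr_bytes[5]);
}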
* [dpdk-dev] [PATCH v6 10/19] net/ngbe: support link update
2021-06-17 10:59 [dpdk-dev] [PATCH v6 00/19] net: ngbe PMD Jiawen Wu
` (8 preceding siblings ...)
2021-06-17 10:59 ` [dpdk-dev] [PATCH v6 09/19] net/ngbe: store MAC address Jiawen Wu
@ 2021-06-17 10:59 ` Jiawen Wu
2021-07-02 16:24 ` Andrew Rybchenko
2021-06-17 10:59 ` [dpdk-dev] [PATCH v6 11/19] net/ngbe: setup the check PHY link Jiawen Wu
` (8 subsequent siblings)
18 siblings, 1 reply; 42+ messages in thread
From: Jiawen Wu @ 2021-06-17 10:59 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu
Register an interrupt handler for the device, and support link status update.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
doc/guides/nics/features/ngbe.ini | 3 +
doc/guides/nics/ngbe.rst | 5 +
drivers/net/ngbe/base/ngbe_dummy.h | 6 +
drivers/net/ngbe/base/ngbe_type.h | 11 +
drivers/net/ngbe/ngbe_ethdev.c | 376 ++++++++++++++++++++++++++++-
drivers/net/ngbe/ngbe_ethdev.h | 34 +++
6 files changed, 434 insertions(+), 1 deletion(-)
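Since this patch sets RTE_PCI_DRV_INTR_LSC and raises RTE_ETH_EVENT_INTR_LSC from the delayed handler, an application can subscribe to link-state changes instead of polling. A minimal application-side sketch, not part of the driver, assuming the port is configured with dev_conf.intr_conf.lsc = 1 (callback name is illustrative):

#include <stdio.h>
#include <rte_common.h>
#include <rte_ethdev.h>

/* Application-side sketch: print the new link state on LSC events. */
static int
lsc_event_cb(uint16_t port_id, enum rte_eth_event_type event,
             void *cb_arg, void *ret_param)
{
        struct rte_eth_link link;

        RTE_SET_USED(event);
        RTE_SET_USED(cb_arg);
        RTE_SET_USED(ret_param);

        if (rte_eth_link_get_nowait(port_id, &link) == 0)
                printf("port %u link is %s\n", port_id,
                       link.link_status ? "up" : "down");
        return 0;
}

/* Registration, e.g. right after rte_eth_dev_configure():
 *   rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_INTR_LSC,
 *                                 lsc_event_cb, NULL);
 */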
diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini
index 977286ac04..291a542a42 100644
--- a/doc/guides/nics/features/ngbe.ini
+++ b/doc/guides/nics/features/ngbe.ini
@@ -4,6 +4,9 @@
; Refer to default.ini for the full list of available PMD features.
;
[Features]
+Speed capabilities = Y
+Link status = Y
+Link status event = Y
Multiprocess aware = Y
Linux = Y
ARMv8 = Y
diff --git a/doc/guides/nics/ngbe.rst b/doc/guides/nics/ngbe.rst
index 54d0665db9..0918cc2918 100644
--- a/doc/guides/nics/ngbe.rst
+++ b/doc/guides/nics/ngbe.rst
@@ -8,6 +8,11 @@ The NGBE PMD (librte_pmd_ngbe) provides poll mode driver support
for Wangxun 1 Gigabit Ethernet NICs.
+Features
+--------
+
+- Link state information
+
Prerequisites
-------------
diff --git a/drivers/net/ngbe/base/ngbe_dummy.h b/drivers/net/ngbe/base/ngbe_dummy.h
index 8462d6d1cb..4273e5af36 100644
--- a/drivers/net/ngbe/base/ngbe_dummy.h
+++ b/drivers/net/ngbe/base/ngbe_dummy.h
@@ -64,6 +64,11 @@ static inline void ngbe_mac_release_swfw_sync_dummy(struct ngbe_hw *TUP0,
u32 TUP1)
{
}
+static inline s32 ngbe_mac_check_link_dummy(struct ngbe_hw *TUP0, u32 *TUP1,
+ bool *TUP3, bool TUP4)
+{
+ return NGBE_ERR_OPS_DUMMY;
+}
static inline s32 ngbe_mac_set_rar_dummy(struct ngbe_hw *TUP0, u32 TUP1,
u8 *TUP2, u32 TUP3, u32 TUP4)
{
@@ -135,6 +140,7 @@ static inline void ngbe_init_ops_dummy(struct ngbe_hw *hw)
hw->mac.get_mac_addr = ngbe_mac_get_mac_addr_dummy;
hw->mac.acquire_swfw_sync = ngbe_mac_acquire_swfw_sync_dummy;
hw->mac.release_swfw_sync = ngbe_mac_release_swfw_sync_dummy;
+ hw->mac.check_link = ngbe_mac_check_link_dummy;
hw->mac.set_rar = ngbe_mac_set_rar_dummy;
hw->mac.clear_rar = ngbe_mac_clear_rar_dummy;
hw->mac.set_vmdq = ngbe_mac_set_vmdq_dummy;
diff --git a/drivers/net/ngbe/base/ngbe_type.h b/drivers/net/ngbe/base/ngbe_type.h
index 517db3380d..04c1cac422 100644
--- a/drivers/net/ngbe/base/ngbe_type.h
+++ b/drivers/net/ngbe/base/ngbe_type.h
@@ -97,6 +97,8 @@ struct ngbe_mac_info {
s32 (*acquire_swfw_sync)(struct ngbe_hw *hw, u32 mask);
void (*release_swfw_sync)(struct ngbe_hw *hw, u32 mask);
+ s32 (*check_link)(struct ngbe_hw *hw, u32 *speed,
+ bool *link_up, bool link_up_wait_to_complete);
/* RAR */
s32 (*set_rar)(struct ngbe_hw *hw, u32 index, u8 *addr, u32 vmdq,
u32 enable_addr);
@@ -117,6 +119,7 @@ struct ngbe_mac_info {
u32 num_rar_entries;
u32 max_tx_queues;
u32 max_rx_queues;
+ bool get_link_status;
struct ngbe_thermal_sensor_data thermal_sensor_data;
bool set_lben;
};
@@ -142,6 +145,14 @@ struct ngbe_phy_info {
bool reset_disable;
};
+enum ngbe_isb_idx {
+ NGBE_ISB_HEADER,
+ NGBE_ISB_MISC,
+ NGBE_ISB_VEC0,
+ NGBE_ISB_VEC1,
+ NGBE_ISB_MAX
+};
+
struct ngbe_hw {
void IOMEM *hw_addr;
void *back;
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index deca64137d..c952023e8b 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -7,12 +7,17 @@
#include <rte_common.h>
#include <ethdev_pci.h>
+#include <rte_alarm.h>
+
#include "ngbe_logs.h"
#include "base/ngbe.h"
#include "ngbe_ethdev.h"
static int ngbe_dev_close(struct rte_eth_dev *dev);
+static void ngbe_dev_interrupt_handler(void *param);
+static void ngbe_dev_interrupt_delayed_handler(void *param);
+
/*
* The set of PCI devices this driver supports
*/
@@ -32,6 +37,28 @@ static const struct rte_pci_id pci_id_ngbe_map[] = {
{ .vendor_id = 0, /* sentinel */ },
};
+static const struct eth_dev_ops ngbe_eth_dev_ops;
+
+static inline void
+ngbe_enable_intr(struct rte_eth_dev *dev)
+{
+ struct ngbe_interrupt *intr = ngbe_dev_intr(dev);
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+
+ wr32(hw, NGBE_IENMISC, intr->mask_misc);
+ wr32(hw, NGBE_IMC(0), intr->mask & BIT_MASK32);
+ ngbe_flush(hw);
+}
+
+static void
+ngbe_disable_intr(struct ngbe_hw *hw)
+{
+ PMD_INIT_FUNC_TRACE();
+
+ wr32(hw, NGBE_IMS(0), NGBE_IMS_MASK);
+ ngbe_flush(hw);
+}
+
/*
* Ensure that all locks are released before first NVM or PHY access
*/
@@ -60,11 +87,15 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
struct ngbe_hw *hw = ngbe_dev_hw(eth_dev);
+ struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
const struct rte_memzone *mz;
+ uint32_t ctrl_ext;
int err;
PMD_INIT_FUNC_TRACE();
+ eth_dev->dev_ops = &ngbe_eth_dev_ops;
+
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
return 0;
@@ -116,6 +147,9 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
return -EIO;
}
+ /* disable interrupt */
+ ngbe_disable_intr(hw);
+
/* Allocate memory for storing MAC addresses */
eth_dev->data->mac_addrs = rte_zmalloc("ngbe", RTE_ETHER_ADDR_LEN *
hw->mac.num_rar_entries, 0);
@@ -141,6 +175,23 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
return -ENOMEM;
}
+ ctrl_ext = rd32(hw, NGBE_PORTCTL);
+ /* let hardware know driver is loaded */
+ ctrl_ext |= NGBE_PORTCTL_DRVLOAD;
+ /* Set PF Reset Done bit so PF/VF Mail Ops can work */
+ ctrl_ext |= NGBE_PORTCTL_RSTDONE;
+ wr32(hw, NGBE_PORTCTL, ctrl_ext);
+ ngbe_flush(hw);
+
+ rte_intr_callback_register(intr_handle,
+ ngbe_dev_interrupt_handler, eth_dev);
+
+ /* enable uio/vfio intr/eventfd mapping */
+ rte_intr_enable(intr_handle);
+
+ /* enable support intr */
+ ngbe_enable_intr(eth_dev);
+
return 0;
}
@@ -180,11 +231,25 @@ static int eth_ngbe_pci_remove(struct rte_pci_device *pci_dev)
static struct rte_pci_driver rte_ngbe_pmd = {
.id_table = pci_id_ngbe_map,
- .drv_flags = RTE_PCI_DRV_NEED_MAPPING,
+ .drv_flags = RTE_PCI_DRV_NEED_MAPPING |
+ RTE_PCI_DRV_INTR_LSC,
.probe = eth_ngbe_pci_probe,
.remove = eth_ngbe_pci_remove,
};
+static int
+ngbe_dev_configure(struct rte_eth_dev *dev)
+{
+ struct ngbe_interrupt *intr = ngbe_dev_intr(dev);
+
+ PMD_INIT_FUNC_TRACE();
+
+ /* set flag to update link status after init */
+ intr->flags |= NGBE_FLAG_NEED_LINK_UPDATE;
+
+ return 0;
+}
+
/*
* Reset and stop device.
*/
@@ -198,6 +263,315 @@ ngbe_dev_close(struct rte_eth_dev *dev)
return -EINVAL;
}
+static int
+ngbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
+{
+ RTE_SET_USED(dev);
+
+ dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_100M |
+ ETH_LINK_SPEED_10M;
+
+ return 0;
+}
+
+/* return 0 means link status changed, -1 means not changed */
+int
+ngbe_dev_link_update_share(struct rte_eth_dev *dev,
+ int wait_to_complete)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ struct rte_eth_link link;
+ u32 link_speed = NGBE_LINK_SPEED_UNKNOWN;
+ u32 lan_speed = 0;
+ struct ngbe_interrupt *intr = ngbe_dev_intr(dev);
+ bool link_up;
+ int err;
+ int wait = 1;
+
+ memset(&link, 0, sizeof(link));
+ link.link_status = ETH_LINK_DOWN;
+ link.link_speed = ETH_SPEED_NUM_NONE;
+ link.link_duplex = ETH_LINK_HALF_DUPLEX;
+ link.link_autoneg = !(dev->data->dev_conf.link_speeds &
+ ~ETH_LINK_SPEED_AUTONEG);
+
+ hw->mac.get_link_status = true;
+
+ if (intr->flags & NGBE_FLAG_NEED_LINK_CONFIG)
+ return rte_eth_linkstatus_set(dev, &link);
+
+ /* check if it needs to wait to complete, if lsc interrupt is enabled */
+ /* don't wait for completion if the caller didn't request it or the LSC interrupt is enabled */
+ wait = 0;
+
+ err = hw->mac.check_link(hw, &link_speed, &link_up, wait);
+
+ if (err != 0) {
+ link.link_speed = ETH_SPEED_NUM_100M;
+ link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ return rte_eth_linkstatus_set(dev, &link);
+ }
+
+ if (!link_up)
+ return rte_eth_linkstatus_set(dev, &link);
+
+ intr->flags &= ~NGBE_FLAG_NEED_LINK_CONFIG;
+ link.link_status = ETH_LINK_UP;
+ link.link_duplex = ETH_LINK_FULL_DUPLEX;
+
+ switch (link_speed) {
+ default:
+ case NGBE_LINK_SPEED_UNKNOWN:
+ link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ link.link_speed = ETH_SPEED_NUM_100M;
+ break;
+
+ case NGBE_LINK_SPEED_10M_FULL:
+ link.link_speed = ETH_SPEED_NUM_10M;
+ lan_speed = 0;
+ break;
+
+ case NGBE_LINK_SPEED_100M_FULL:
+ link.link_speed = ETH_SPEED_NUM_100M;
+ lan_speed = 1;
+ break;
+
+ case NGBE_LINK_SPEED_1GB_FULL:
+ link.link_speed = ETH_SPEED_NUM_1G;
+ lan_speed = 2;
+ break;
+ }
+
+ if (hw->is_pf) {
+ wr32m(hw, NGBE_LAN_SPEED, NGBE_LAN_SPEED_MASK, lan_speed);
+ if (link_speed & (NGBE_LINK_SPEED_1GB_FULL |
+ NGBE_LINK_SPEED_100M_FULL | NGBE_LINK_SPEED_10M_FULL)) {
+ wr32m(hw, NGBE_MACTXCFG, NGBE_MACTXCFG_SPEED_MASK,
+ NGBE_MACTXCFG_SPEED_1G | NGBE_MACTXCFG_TE);
+ }
+ }
+
+ return rte_eth_linkstatus_set(dev, &link);
+}
+
+static int
+ngbe_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete)
+{
+ return ngbe_dev_link_update_share(dev, wait_to_complete);
+}
+
+/*
+ * It reads the ICR and sets flags for the link update.
+ *
+ * @param dev
+ * Pointer to struct rte_eth_dev.
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+static int
+ngbe_dev_interrupt_get_status(struct rte_eth_dev *dev)
+{
+ uint32_t eicr;
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ struct ngbe_interrupt *intr = ngbe_dev_intr(dev);
+
+ /* clear all cause mask */
+ ngbe_disable_intr(hw);
+
+ /* read-on-clear nic registers here */
+ eicr = ((u32 *)hw->isb_mem)[NGBE_ISB_MISC];
+ PMD_DRV_LOG(DEBUG, "eicr %x", eicr);
+
+ intr->flags = 0;
+
+ /* set flag for async link update */
+ if (eicr & NGBE_ICRMISC_PHY)
+ intr->flags |= NGBE_FLAG_NEED_LINK_UPDATE;
+
+ if (eicr & NGBE_ICRMISC_VFMBX)
+ intr->flags |= NGBE_FLAG_MAILBOX;
+
+ if (eicr & NGBE_ICRMISC_LNKSEC)
+ intr->flags |= NGBE_FLAG_MACSEC;
+
+ if (eicr & NGBE_ICRMISC_GPIO)
+ intr->flags |= NGBE_FLAG_NEED_LINK_UPDATE;
+
+ return 0;
+}
+
+/**
+ * It gets and then prints the link status.
+ *
+ * @param dev
+ * Pointer to struct rte_eth_dev.
+ *
+ * @return
+ *  void
+ */
+static void
+ngbe_dev_link_status_print(struct rte_eth_dev *dev)
+{
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+ struct rte_eth_link link;
+
+ rte_eth_linkstatus_get(dev, &link);
+
+ if (link.link_status == ETH_LINK_UP) {
+ PMD_INIT_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
+ (int)(dev->data->port_id),
+ (unsigned int)link.link_speed,
+ link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+ "full-duplex" : "half-duplex");
+ } else {
+ PMD_INIT_LOG(INFO, " Port %d: Link Down",
+ (int)(dev->data->port_id));
+ }
+ PMD_INIT_LOG(DEBUG, "PCI Address: " PCI_PRI_FMT,
+ pci_dev->addr.domain,
+ pci_dev->addr.bus,
+ pci_dev->addr.devid,
+ pci_dev->addr.function);
+}
+
+/*
+ * It executes link_update after an interrupt has occurred.
+ *
+ * @param dev
+ * Pointer to struct rte_eth_dev.
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+static int
+ngbe_dev_interrupt_action(struct rte_eth_dev *dev)
+{
+ struct ngbe_interrupt *intr = ngbe_dev_intr(dev);
+ int64_t timeout;
+
+ PMD_DRV_LOG(DEBUG, "intr action type %d", intr->flags);
+
+ if (intr->flags & NGBE_FLAG_NEED_LINK_UPDATE) {
+ struct rte_eth_link link;
+
+ /* get the link status before the update, to predict the transition */
+ rte_eth_linkstatus_get(dev, &link);
+
+ ngbe_dev_link_update(dev, 0);
+
+ /* link is likely to come up */
+ if (link.link_status != ETH_LINK_UP)
+ /* handle it 1 sec later, wait for it to stabilize */
+ timeout = NGBE_LINK_UP_CHECK_TIMEOUT;
+ /* link is likely to go down */
+ else
+ /* handle it 4 sec later, wait for it to stabilize */
+ timeout = NGBE_LINK_DOWN_CHECK_TIMEOUT;
+
+ ngbe_dev_link_status_print(dev);
+ if (rte_eal_alarm_set(timeout * 1000,
+ ngbe_dev_interrupt_delayed_handler,
+ (void *)dev) < 0) {
+ PMD_DRV_LOG(ERR, "Error setting alarm");
+ } else {
+ /* remember original mask */
+ intr->mask_misc_orig = intr->mask_misc;
+ /* only disable lsc interrupt */
+ intr->mask_misc &= ~NGBE_ICRMISC_PHY;
+
+ intr->mask_orig = intr->mask;
+ /* only disable all misc interrupts */
+ intr->mask &= ~(1ULL << NGBE_MISC_VEC_ID);
+ }
+ }
+
+ PMD_DRV_LOG(DEBUG, "enable intr immediately");
+ ngbe_enable_intr(dev);
+
+ return 0;
+}
+
+/**
+ * Interrupt handler registered as an alarm callback for delayed handling of a
+ * specific interrupt, in order to wait for a stable NIC state. Since the NIC
+ * interrupt state is not stable right after the link goes down, the handler
+ * waits 4 seconds before reading the final status.
+ *
+ * @param handle
+ * Pointer to interrupt handle.
+ * @param param
+ * The address of parameter (struct rte_eth_dev *) registered before.
+ *
+ * @return
+ * void
+ */
+static void
+ngbe_dev_interrupt_delayed_handler(void *param)
+{
+ struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
+ struct ngbe_interrupt *intr = ngbe_dev_intr(dev);
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ uint32_t eicr;
+
+ ngbe_disable_intr(hw);
+
+ eicr = ((u32 *)hw->isb_mem)[NGBE_ISB_MISC];
+
+ if (intr->flags & NGBE_FLAG_NEED_LINK_UPDATE) {
+ ngbe_dev_link_update(dev, 0);
+ intr->flags &= ~NGBE_FLAG_NEED_LINK_UPDATE;
+ ngbe_dev_link_status_print(dev);
+ rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_LSC,
+ NULL);
+ }
+
+ if (intr->flags & NGBE_FLAG_MACSEC) {
+ rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_MACSEC,
+ NULL);
+ intr->flags &= ~NGBE_FLAG_MACSEC;
+ }
+
+ /* restore original mask */
+ intr->mask_misc = intr->mask_misc_orig;
+ intr->mask_misc_orig = 0;
+ intr->mask = intr->mask_orig;
+ intr->mask_orig = 0;
+
+ PMD_DRV_LOG(DEBUG, "enable intr in delayed handler S[%08x]", eicr);
+ ngbe_enable_intr(dev);
+}
+
+/**
+ * Interrupt handler triggered by NIC for handling
+ * specific interrupt.
+ *
+ * @param handle
+ * Pointer to interrupt handle.
+ * @param param
+ * The address of parameter (struct rte_eth_dev *) registered before.
+ *
+ * @return
+ * void
+ */
+static void
+ngbe_dev_interrupt_handler(void *param)
+{
+ struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
+
+ ngbe_dev_interrupt_get_status(dev);
+ ngbe_dev_interrupt_action(dev);
+}
+
+static const struct eth_dev_ops ngbe_eth_dev_ops = {
+ .dev_configure = ngbe_dev_configure,
+ .dev_infos_get = ngbe_dev_info_get,
+ .link_update = ngbe_dev_link_update,
+};
+
RTE_PMD_REGISTER_PCI(net_ngbe, rte_ngbe_pmd);
RTE_PMD_REGISTER_PCI_TABLE(net_ngbe, pci_id_ngbe_map);
RTE_PMD_REGISTER_KMOD_DEP(net_ngbe, "* igb_uio | uio_pci_generic | vfio-pci");
diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
index 87cc1cff6b..b67508a3de 100644
--- a/drivers/net/ngbe/ngbe_ethdev.h
+++ b/drivers/net/ngbe/ngbe_ethdev.h
@@ -6,11 +6,30 @@
#ifndef _NGBE_ETHDEV_H_
#define _NGBE_ETHDEV_H_
+/* need update link, bit flag */
+#define NGBE_FLAG_NEED_LINK_UPDATE (uint32_t)(1 << 0)
+#define NGBE_FLAG_MAILBOX (uint32_t)(1 << 1)
+#define NGBE_FLAG_PHY_INTERRUPT (uint32_t)(1 << 2)
+#define NGBE_FLAG_MACSEC (uint32_t)(1 << 3)
+#define NGBE_FLAG_NEED_LINK_CONFIG (uint32_t)(1 << 4)
+
+#define NGBE_MISC_VEC_ID RTE_INTR_VEC_ZERO_OFFSET
+
+/* structure for interrupt relative data */
+struct ngbe_interrupt {
+ uint32_t flags;
+ uint32_t mask_misc;
+ uint32_t mask_misc_orig; /* save mask during delayed handler */
+ uint64_t mask;
+ uint64_t mask_orig; /* save mask during delayed handler */
+};
+
/*
* Structure to store private data for each driver instance (for each port).
*/
struct ngbe_adapter {
struct ngbe_hw hw;
+ struct ngbe_interrupt intr;
};
static inline struct ngbe_adapter *
@@ -30,6 +49,21 @@ ngbe_dev_hw(struct rte_eth_dev *dev)
return hw;
}
+static inline struct ngbe_interrupt *
+ngbe_dev_intr(struct rte_eth_dev *dev)
+{
+ struct ngbe_adapter *ad = ngbe_dev_adapter(dev);
+ struct ngbe_interrupt *intr = &ad->intr;
+
+ return intr;
+}
+
+int
+ngbe_dev_link_update_share(struct rte_eth_dev *dev,
+ int wait_to_complete);
+
+#define NGBE_LINK_DOWN_CHECK_TIMEOUT 4000 /* ms */
+#define NGBE_LINK_UP_CHECK_TIMEOUT 1000 /* ms */
#define NGBE_VMDQ_NUM_UC_MAC 4096 /* Maximum nb. of UC MAC addr. */
#endif /* _NGBE_ETHDEV_H_ */
--
2.21.0.windows.1
* [dpdk-dev] [PATCH v6 11/19] net/ngbe: setup the check PHY link
2021-06-17 10:59 [dpdk-dev] [PATCH v6 00/19] net: ngbe PMD Jiawen Wu
` (9 preceding siblings ...)
2021-06-17 10:59 ` [dpdk-dev] [PATCH v6 10/19] net/ngbe: support link update Jiawen Wu
@ 2021-06-17 10:59 ` Jiawen Wu
2021-06-17 10:59 ` [dpdk-dev] [PATCH v6 12/19] net/ngbe: add Rx queue setup and release Jiawen Wu
` (7 subsequent siblings)
18 siblings, 0 replies; 42+ messages in thread
From: Jiawen Wu @ 2021-06-17 10:59 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu
Set up the PHY, and determine link and speed status from the PHY.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
drivers/net/ngbe/base/ngbe_dummy.h | 18 +++
drivers/net/ngbe/base/ngbe_hw.c | 53 +++++++++
drivers/net/ngbe/base/ngbe_hw.h | 6 +
drivers/net/ngbe/base/ngbe_phy.c | 22 ++++
drivers/net/ngbe/base/ngbe_phy.h | 2 +
drivers/net/ngbe/base/ngbe_phy_mvl.c | 98 ++++++++++++++++
drivers/net/ngbe/base/ngbe_phy_mvl.h | 4 +
drivers/net/ngbe/base/ngbe_phy_rtl.c | 166 +++++++++++++++++++++++++++
drivers/net/ngbe/base/ngbe_phy_rtl.h | 4 +
drivers/net/ngbe/base/ngbe_phy_yt.c | 134 +++++++++++++++++++++
drivers/net/ngbe/base/ngbe_phy_yt.h | 8 ++
drivers/net/ngbe/base/ngbe_type.h | 11 ++
12 files changed, 526 insertions(+)
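With the per-PHY check_link/setup_link callbacks wired up in ngbe_init_phy(), callers go only through the hw ops. A hedged sketch of the intended calling pattern, using the NGBE_LINK_SPEED_* masks from this series (the wrapper function itself is illustrative, not driver code):

/* Sketch only: advertise all full-duplex speeds handled by this series,
 * then do one non-blocking link check through the hw ops set up above. */
static int
ngbe_bring_up_link_example(struct ngbe_hw *hw)
{
        u32 speed = NGBE_LINK_SPEED_1GB_FULL |
                    NGBE_LINK_SPEED_100M_FULL |
                    NGBE_LINK_SPEED_10M_FULL;
        u32 cur_speed = NGBE_LINK_SPEED_UNKNOWN;
        bool link_up = false;
        s32 err;

        err = hw->mac.setup_link(hw, speed, false);
        if (err != 0)
                return err;

        /* pass true instead to poll up to hw->mac.max_link_up_time */
        return hw->mac.check_link(hw, &cur_speed, &link_up, false);
}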
diff --git a/drivers/net/ngbe/base/ngbe_dummy.h b/drivers/net/ngbe/base/ngbe_dummy.h
index 4273e5af36..709e01659c 100644
--- a/drivers/net/ngbe/base/ngbe_dummy.h
+++ b/drivers/net/ngbe/base/ngbe_dummy.h
@@ -64,6 +64,11 @@ static inline void ngbe_mac_release_swfw_sync_dummy(struct ngbe_hw *TUP0,
u32 TUP1)
{
}
+static inline s32 ngbe_mac_setup_link_dummy(struct ngbe_hw *TUP0, u32 TUP1,
+ bool TUP2)
+{
+ return NGBE_ERR_OPS_DUMMY;
+}
static inline s32 ngbe_mac_check_link_dummy(struct ngbe_hw *TUP0, u32 *TUP1,
bool *TUP3, bool TUP4)
{
@@ -129,6 +134,16 @@ static inline s32 ngbe_phy_write_reg_unlocked_dummy(struct ngbe_hw *TUP0,
{
return NGBE_ERR_OPS_DUMMY;
}
+static inline s32 ngbe_phy_setup_link_dummy(struct ngbe_hw *TUP0,
+ u32 TUP1, bool TUP2)
+{
+ return NGBE_ERR_OPS_DUMMY;
+}
+static inline s32 ngbe_phy_check_link_dummy(struct ngbe_hw *TUP0, u32 *TUP1,
+ bool *TUP2)
+{
+ return NGBE_ERR_OPS_DUMMY;
+}
static inline void ngbe_init_ops_dummy(struct ngbe_hw *hw)
{
hw->bus.set_lan_id = ngbe_bus_set_lan_id_dummy;
@@ -140,6 +155,7 @@ static inline void ngbe_init_ops_dummy(struct ngbe_hw *hw)
hw->mac.get_mac_addr = ngbe_mac_get_mac_addr_dummy;
hw->mac.acquire_swfw_sync = ngbe_mac_acquire_swfw_sync_dummy;
hw->mac.release_swfw_sync = ngbe_mac_release_swfw_sync_dummy;
+ hw->mac.setup_link = ngbe_mac_setup_link_dummy;
hw->mac.check_link = ngbe_mac_check_link_dummy;
hw->mac.set_rar = ngbe_mac_set_rar_dummy;
hw->mac.clear_rar = ngbe_mac_clear_rar_dummy;
@@ -154,6 +170,8 @@ static inline void ngbe_init_ops_dummy(struct ngbe_hw *hw)
hw->phy.write_reg = ngbe_phy_write_reg_dummy;
hw->phy.read_reg_unlocked = ngbe_phy_read_reg_unlocked_dummy;
hw->phy.write_reg_unlocked = ngbe_phy_write_reg_unlocked_dummy;
+ hw->phy.setup_link = ngbe_phy_setup_link_dummy;
+ hw->phy.check_link = ngbe_phy_check_link_dummy;
}
#endif /* _NGBE_TYPE_DUMMY_H_ */
diff --git a/drivers/net/ngbe/base/ngbe_hw.c b/drivers/net/ngbe/base/ngbe_hw.c
index 7f20d6c807..331fc752ff 100644
--- a/drivers/net/ngbe/base/ngbe_hw.c
+++ b/drivers/net/ngbe/base/ngbe_hw.c
@@ -599,6 +599,54 @@ s32 ngbe_init_uta_tables(struct ngbe_hw *hw)
return 0;
}
+/**
+ * ngbe_check_mac_link_em - Determine link and speed status
+ * @hw: pointer to hardware structure
+ * @speed: pointer to link speed
+ * @link_up: true when link is up
+ * @link_up_wait_to_complete: bool used to wait for link up or not
+ *
+ * Reads the links register to determine if link is up and the current speed
+ **/
+s32 ngbe_check_mac_link_em(struct ngbe_hw *hw, u32 *speed,
+ bool *link_up, bool link_up_wait_to_complete)
+{
+ u32 i, reg;
+ s32 status = 0;
+
+ DEBUGFUNC("ngbe_check_mac_link_em");
+
+ reg = rd32(hw, NGBE_GPIOINTSTAT);
+ wr32(hw, NGBE_GPIOEOI, reg);
+
+ if (link_up_wait_to_complete) {
+ for (i = 0; i < hw->mac.max_link_up_time; i++) {
+ status = hw->phy.check_link(hw, speed, link_up);
+ if (*link_up)
+ break;
+ msec_delay(100);
+ }
+ } else {
+ status = hw->phy.check_link(hw, speed, link_up);
+ }
+
+ return status;
+}
+
+s32 ngbe_setup_mac_link_em(struct ngbe_hw *hw,
+ u32 speed,
+ bool autoneg_wait_to_complete)
+{
+ s32 status;
+
+ DEBUGFUNC("ngbe_setup_mac_link_em");
+
+ /* Setup the PHY according to input speed */
+ status = hw->phy.setup_link(hw, speed, autoneg_wait_to_complete);
+
+ return status;
+}
+
/**
* ngbe_init_thermal_sensor_thresh - Inits thermal sensor thresholds
* @hw: pointer to hardware structure
@@ -806,6 +854,10 @@ s32 ngbe_init_ops_pf(struct ngbe_hw *hw)
mac->set_vmdq = ngbe_set_vmdq;
mac->clear_vmdq = ngbe_clear_vmdq;
+ /* Link */
+ mac->check_link = ngbe_check_mac_link_em;
+ mac->setup_link = ngbe_setup_mac_link_em;
+
/* Manageability interface */
mac->init_thermal_sensor_thresh = ngbe_init_thermal_sensor_thresh;
mac->check_overtemp = ngbe_mac_check_overtemp;
@@ -853,6 +905,7 @@ s32 ngbe_init_shared_code(struct ngbe_hw *hw)
status = NGBE_ERR_DEVICE_NOT_SUPPORTED;
break;
}
+ hw->mac.max_link_up_time = NGBE_LINK_UP_TIME;
hw->bus.set_lan_id(hw);
diff --git a/drivers/net/ngbe/base/ngbe_hw.h b/drivers/net/ngbe/base/ngbe_hw.h
index 0b3d60ae29..1689223168 100644
--- a/drivers/net/ngbe/base/ngbe_hw.h
+++ b/drivers/net/ngbe/base/ngbe_hw.h
@@ -20,6 +20,12 @@ s32 ngbe_get_mac_addr(struct ngbe_hw *hw, u8 *mac_addr);
void ngbe_set_lan_id_multi_port(struct ngbe_hw *hw);
+s32 ngbe_check_mac_link_em(struct ngbe_hw *hw, u32 *speed,
+ bool *link_up, bool link_up_wait_to_complete);
+s32 ngbe_setup_mac_link_em(struct ngbe_hw *hw,
+ u32 speed,
+ bool autoneg_wait_to_complete);
+
s32 ngbe_set_rar(struct ngbe_hw *hw, u32 index, u8 *addr, u32 vmdq,
u32 enable_addr);
s32 ngbe_clear_rar(struct ngbe_hw *hw, u32 index);
diff --git a/drivers/net/ngbe/base/ngbe_phy.c b/drivers/net/ngbe/base/ngbe_phy.c
index 0467e1e66e..471656cc51 100644
--- a/drivers/net/ngbe/base/ngbe_phy.c
+++ b/drivers/net/ngbe/base/ngbe_phy.c
@@ -420,7 +420,29 @@ s32 ngbe_init_phy(struct ngbe_hw *hw)
/* Identify the PHY */
err = phy->identify(hw);
+ if (err == NGBE_ERR_PHY_ADDR_INVALID)
+ goto init_phy_ops_out;
+ /* Set necessary function pointers based on PHY type */
+ switch (hw->phy.type) {
+ case ngbe_phy_rtl:
+ hw->phy.check_link = ngbe_check_phy_link_rtl;
+ hw->phy.setup_link = ngbe_setup_phy_link_rtl;
+ break;
+ case ngbe_phy_mvl:
+ case ngbe_phy_mvl_sfi:
+ hw->phy.check_link = ngbe_check_phy_link_mvl;
+ hw->phy.setup_link = ngbe_setup_phy_link_mvl;
+ break;
+ case ngbe_phy_yt8521s:
+ case ngbe_phy_yt8521s_sfi:
+ hw->phy.check_link = ngbe_check_phy_link_yt;
+ hw->phy.setup_link = ngbe_setup_phy_link_yt;
+ break;
+ default:
+ break;
+ }
+
+init_phy_ops_out:
return err;
}
diff --git a/drivers/net/ngbe/base/ngbe_phy.h b/drivers/net/ngbe/base/ngbe_phy.h
index 59d9efe025..96e47b5bb9 100644
--- a/drivers/net/ngbe/base/ngbe_phy.h
+++ b/drivers/net/ngbe/base/ngbe_phy.h
@@ -22,6 +22,8 @@
#define NGBE_MD_PHY_ID_LOW 0x3 /* PHY ID Low Reg*/
#define NGBE_PHY_REVISION_MASK 0xFFFFFFF0
+#define NGBE_MII_AUTONEG_REG 0x0
+
/* IEEE 802.3 Clause 22 */
struct mdi_reg_22 {
u16 page;
diff --git a/drivers/net/ngbe/base/ngbe_phy_mvl.c b/drivers/net/ngbe/base/ngbe_phy_mvl.c
index 40419a61f6..a1c055e238 100644
--- a/drivers/net/ngbe/base/ngbe_phy_mvl.c
+++ b/drivers/net/ngbe/base/ngbe_phy_mvl.c
@@ -48,6 +48,64 @@ s32 ngbe_write_phy_reg_mvl(struct ngbe_hw *hw,
return 0;
}
+s32 ngbe_setup_phy_link_mvl(struct ngbe_hw *hw, u32 speed,
+ bool autoneg_wait_to_complete)
+{
+ u16 value_r4 = 0;
+ u16 value_r9 = 0;
+ u16 value;
+
+ DEBUGFUNC("ngbe_setup_phy_link_mvl");
+ UNREFERENCED_PARAMETER(autoneg_wait_to_complete);
+
+ hw->phy.autoneg_advertised = 0;
+
+ if (hw->phy.type == ngbe_phy_mvl) {
+ if (speed & NGBE_LINK_SPEED_1GB_FULL) {
+ value_r9 |= MVL_PHY_1000BASET_FULL;
+ hw->phy.autoneg_advertised |= NGBE_LINK_SPEED_1GB_FULL;
+ }
+
+ if (speed & NGBE_LINK_SPEED_100M_FULL) {
+ value_r4 |= MVL_PHY_100BASET_FULL;
+ hw->phy.autoneg_advertised |= NGBE_LINK_SPEED_100M_FULL;
+ }
+
+ if (speed & NGBE_LINK_SPEED_10M_FULL) {
+ value_r4 |= MVL_PHY_10BASET_FULL;
+ hw->phy.autoneg_advertised |= NGBE_LINK_SPEED_10M_FULL;
+ }
+
+ hw->phy.read_reg(hw, MVL_ANA, 0, &value);
+ value &= ~(MVL_PHY_100BASET_FULL |
+ MVL_PHY_100BASET_HALF |
+ MVL_PHY_10BASET_FULL |
+ MVL_PHY_10BASET_HALF);
+ value_r4 |= value;
+ hw->phy.write_reg(hw, MVL_ANA, 0, value_r4);
+
+ hw->phy.read_reg(hw, MVL_PHY_1000BASET, 0, &value);
+ value &= ~(MVL_PHY_1000BASET_FULL |
+ MVL_PHY_1000BASET_HALF);
+ value_r9 |= value;
+ hw->phy.write_reg(hw, MVL_PHY_1000BASET, 0, value_r9);
+ } else {
+ hw->phy.autoneg_advertised = 1;
+
+ hw->phy.read_reg(hw, MVL_ANA, 0, &value);
+ value &= ~(MVL_PHY_1000BASEX_HALF | MVL_PHY_1000BASEX_FULL);
+ value |= MVL_PHY_1000BASEX_FULL;
+ hw->phy.write_reg(hw, MVL_ANA, 0, value);
+ }
+
+ value = MVL_CTRL_RESTART_AN | MVL_CTRL_ANE;
+ ngbe_write_phy_reg_mdi(hw, MVL_CTRL, 0, value);
+
+ hw->phy.read_reg(hw, MVL_INTR, 0, &value);
+
+ return 0;
+}
+
s32 ngbe_reset_phy_mvl(struct ngbe_hw *hw)
{
u32 i;
@@ -87,3 +145,43 @@ s32 ngbe_reset_phy_mvl(struct ngbe_hw *hw)
return status;
}
+s32 ngbe_check_phy_link_mvl(struct ngbe_hw *hw,
+ u32 *speed, bool *link_up)
+{
+ s32 status = 0;
+ u16 phy_link = 0;
+ u16 phy_speed = 0;
+ u16 phy_data = 0;
+ u16 insr = 0;
+
+ DEBUGFUNC("ngbe_check_phy_link_mvl");
+
+ /* Initialize speed and link to default case */
+ *link_up = false;
+ *speed = NGBE_LINK_SPEED_UNKNOWN;
+
+ hw->phy.read_reg(hw, MVL_INTR, 0, &insr);
+
+ /*
+ * Check current speed and link status of the PHY register.
+ * This is a vendor specific register and may have to
+ * be changed for other copper PHYs.
+ */
+ status = hw->phy.read_reg(hw, MVL_PHYSR, 0, &phy_data);
+ phy_link = phy_data & MVL_PHYSR_LINK;
+ phy_speed = phy_data & MVL_PHYSR_SPEED_MASK;
+
+ if (phy_link == MVL_PHYSR_LINK) {
+ *link_up = true;
+
+ if (phy_speed == MVL_PHYSR_SPEED_1000M)
+ *speed = NGBE_LINK_SPEED_1GB_FULL;
+ else if (phy_speed == MVL_PHYSR_SPEED_100M)
+ *speed = NGBE_LINK_SPEED_100M_FULL;
+ else if (phy_speed == MVL_PHYSR_SPEED_10M)
+ *speed = NGBE_LINK_SPEED_10M_FULL;
+ }
+
+ return status;
+}
+
diff --git a/drivers/net/ngbe/base/ngbe_phy_mvl.h b/drivers/net/ngbe/base/ngbe_phy_mvl.h
index a88ace9ec1..a663a429dd 100644
--- a/drivers/net/ngbe/base/ngbe_phy_mvl.h
+++ b/drivers/net/ngbe/base/ngbe_phy_mvl.h
@@ -89,4 +89,8 @@ s32 ngbe_write_phy_reg_mvl(struct ngbe_hw *hw, u32 reg_addr, u32 device_type,
s32 ngbe_reset_phy_mvl(struct ngbe_hw *hw);
+s32 ngbe_check_phy_link_mvl(struct ngbe_hw *hw,
+ u32 *speed, bool *link_up);
+s32 ngbe_setup_phy_link_mvl(struct ngbe_hw *hw,
+ u32 speed, bool autoneg_wait_to_complete);
#endif /* _NGBE_PHY_MVL_H_ */
diff --git a/drivers/net/ngbe/base/ngbe_phy_rtl.c b/drivers/net/ngbe/base/ngbe_phy_rtl.c
index 0703ad9396..3401fc7e92 100644
--- a/drivers/net/ngbe/base/ngbe_phy_rtl.c
+++ b/drivers/net/ngbe/base/ngbe_phy_rtl.c
@@ -38,6 +38,134 @@ s32 ngbe_write_phy_reg_rtl(struct ngbe_hw *hw,
return 0;
}
+/**
+ * ngbe_setup_phy_link_rtl - Set and restart auto-neg
+ * @hw: pointer to hardware structure
+ *
+ * Restart auto-negotiation and PHY and waits for completion.
+ **/
+s32 ngbe_setup_phy_link_rtl(struct ngbe_hw *hw,
+ u32 speed, bool autoneg_wait_to_complete)
+{
+ u16 autoneg_reg = NGBE_MII_AUTONEG_REG;
+ u16 value = 0;
+
+ DEBUGFUNC("ngbe_setup_phy_link_rtl");
+
+ UNREFERENCED_PARAMETER(autoneg_wait_to_complete);
+
+ hw->phy.read_reg(hw, RTL_INSR, 0xa43, &autoneg_reg);
+
+ if (!hw->mac.autoneg) {
+ hw->phy.reset_hw(hw);
+
+ switch (speed) {
+ case NGBE_LINK_SPEED_1GB_FULL:
+ value = RTL_BMCR_SPEED_SELECT1;
+ break;
+ case NGBE_LINK_SPEED_100M_FULL:
+ value = RTL_BMCR_SPEED_SELECT0;
+ break;
+ case NGBE_LINK_SPEED_10M_FULL:
+ value = 0;
+ break;
+ default:
+ value = RTL_BMCR_SPEED_SELECT1 | RTL_BMCR_SPEED_SELECT0;
+ DEBUGOUT("unknown speed = 0x%x.\n", speed);
+ break;
+ }
+ /* duplex full */
+ value |= RTL_BMCR_DUPLEX;
+ hw->phy.write_reg(hw, RTL_BMCR, RTL_DEV_ZERO, value);
+
+ goto skip_an;
+ }
+
+ /*
+ * Clear autoneg_advertised and set new values based on input link
+ * speed.
+ */
+ if (speed) {
+ hw->phy.autoneg_advertised = 0;
+
+ if (speed & NGBE_LINK_SPEED_1GB_FULL)
+ hw->phy.autoneg_advertised |= NGBE_LINK_SPEED_1GB_FULL;
+
+ if (speed & NGBE_LINK_SPEED_100M_FULL)
+ hw->phy.autoneg_advertised |= NGBE_LINK_SPEED_100M_FULL;
+
+ if (speed & NGBE_LINK_SPEED_10M_FULL)
+ hw->phy.autoneg_advertised |= NGBE_LINK_SPEED_10M_FULL;
+ }
+
+ /* disable 10/100M Half Duplex */
+ hw->phy.read_reg(hw, RTL_ANAR, RTL_DEV_ZERO, &autoneg_reg);
+ autoneg_reg &= 0xFF5F;
+ hw->phy.write_reg(hw, RTL_ANAR, RTL_DEV_ZERO, autoneg_reg);
+
+ /* set advertise enable according to input speed */
+ if (!(speed & NGBE_LINK_SPEED_1GB_FULL)) {
+ hw->phy.read_reg(hw, RTL_GBCR,
+ RTL_DEV_ZERO, &autoneg_reg);
+ autoneg_reg &= ~RTL_GBCR_1000F;
+ hw->phy.write_reg(hw, RTL_GBCR,
+ RTL_DEV_ZERO, autoneg_reg);
+ } else {
+ hw->phy.read_reg(hw, RTL_GBCR,
+ RTL_DEV_ZERO, &autoneg_reg);
+ autoneg_reg |= RTL_GBCR_1000F;
+ hw->phy.write_reg(hw, RTL_GBCR,
+ RTL_DEV_ZERO, autoneg_reg);
+ }
+
+ if (!(speed & NGBE_LINK_SPEED_100M_FULL)) {
+ hw->phy.read_reg(hw, RTL_ANAR,
+ RTL_DEV_ZERO, &autoneg_reg);
+ autoneg_reg &= ~RTL_ANAR_100F;
+ autoneg_reg &= ~RTL_ANAR_100H;
+ hw->phy.write_reg(hw, RTL_ANAR,
+ RTL_DEV_ZERO, autoneg_reg);
+ } else {
+ hw->phy.read_reg(hw, RTL_ANAR,
+ RTL_DEV_ZERO, &autoneg_reg);
+ autoneg_reg |= RTL_ANAR_100F;
+ hw->phy.write_reg(hw, RTL_ANAR,
+ RTL_DEV_ZERO, autoneg_reg);
+ }
+
+ if (!(speed & NGBE_LINK_SPEED_10M_FULL)) {
+ hw->phy.read_reg(hw, RTL_ANAR,
+ RTL_DEV_ZERO, &autoneg_reg);
+ autoneg_reg &= ~RTL_ANAR_10F;
+ autoneg_reg &= ~RTL_ANAR_10H;
+ hw->phy.write_reg(hw, RTL_ANAR,
+ RTL_DEV_ZERO, autoneg_reg);
+ } else {
+ hw->phy.read_reg(hw, RTL_ANAR,
+ RTL_DEV_ZERO, &autoneg_reg);
+ autoneg_reg |= RTL_ANAR_10F;
+ hw->phy.write_reg(hw, RTL_ANAR,
+ RTL_DEV_ZERO, autoneg_reg);
+ }
+
+ /* restart AN and wait AN done interrupt */
+ autoneg_reg = RTL_BMCR_RESTART_AN | RTL_BMCR_ANE;
+ hw->phy.write_reg(hw, RTL_BMCR, RTL_DEV_ZERO, autoneg_reg);
+
+skip_an:
+ autoneg_reg = 0x205B;
+ hw->phy.write_reg(hw, RTL_LCR, 0xd04, autoneg_reg);
+ hw->phy.write_reg(hw, RTL_EEELCR, 0xd04, 0);
+
+ hw->phy.read_reg(hw, RTL_LPCR, 0xd04, &autoneg_reg);
+ autoneg_reg = autoneg_reg & 0xFFFC;
+ /* set ACT LED blinking mode to 60ms */
+ autoneg_reg |= 0x2;
+ hw->phy.write_reg(hw, RTL_LPCR, 0xd04, autoneg_reg);
+
+ return 0;
+}
+
s32 ngbe_reset_phy_rtl(struct ngbe_hw *hw)
{
u16 value = 0, i;
@@ -63,3 +191,41 @@ s32 ngbe_reset_phy_rtl(struct ngbe_hw *hw)
return status;
}
+s32 ngbe_check_phy_link_rtl(struct ngbe_hw *hw, u32 *speed, bool *link_up)
+{
+ s32 status = 0;
+ u16 phy_link = 0;
+ u16 phy_speed = 0;
+ u16 phy_data = 0;
+ u16 insr = 0;
+
+ DEBUGFUNC("ngbe_check_phy_link_rtl");
+
+ hw->phy.read_reg(hw, RTL_INSR, 0xa43, &insr);
+
+ /* Initialize speed and link to default case */
+ *link_up = false;
+ *speed = NGBE_LINK_SPEED_UNKNOWN;
+
+ /*
+ * Check current speed and link status of the PHY register.
+ * This is a vendor specific register and may have to
+ * be changed for other copper PHYs.
+ */
+ status = hw->phy.read_reg(hw, RTL_PHYSR, 0xa43, &phy_data);
+ phy_link = phy_data & RTL_PHYSR_RTLS;
+ phy_speed = phy_data & (RTL_PHYSR_SPEED_MASK | RTL_PHYSR_DP);
+ if (phy_link == RTL_PHYSR_RTLS) {
+ *link_up = true;
+
+ if (phy_speed == (RTL_PHYSR_SPEED_1000M | RTL_PHYSR_DP))
+ *speed = NGBE_LINK_SPEED_1GB_FULL;
+ else if (phy_speed == (RTL_PHYSR_SPEED_100M | RTL_PHYSR_DP))
+ *speed = NGBE_LINK_SPEED_100M_FULL;
+ else if (phy_speed == (RTL_PHYSR_SPEED_10M | RTL_PHYSR_DP))
+ *speed = NGBE_LINK_SPEED_10M_FULL;
+ }
+
+ return status;
+}
+
diff --git a/drivers/net/ngbe/base/ngbe_phy_rtl.h b/drivers/net/ngbe/base/ngbe_phy_rtl.h
index ecb60b0ddd..e8bc4a1bd7 100644
--- a/drivers/net/ngbe/base/ngbe_phy_rtl.h
+++ b/drivers/net/ngbe/base/ngbe_phy_rtl.h
@@ -78,6 +78,10 @@ s32 ngbe_read_phy_reg_rtl(struct ngbe_hw *hw, u32 reg_addr, u32 device_type,
s32 ngbe_write_phy_reg_rtl(struct ngbe_hw *hw, u32 reg_addr, u32 device_type,
u16 phy_data);
+s32 ngbe_setup_phy_link_rtl(struct ngbe_hw *hw,
+ u32 speed, bool autoneg_wait_to_complete);
s32 ngbe_reset_phy_rtl(struct ngbe_hw *hw);
+s32 ngbe_check_phy_link_rtl(struct ngbe_hw *hw,
+ u32 *speed, bool *link_up);
#endif /* _NGBE_PHY_RTL_H_ */
diff --git a/drivers/net/ngbe/base/ngbe_phy_yt.c b/drivers/net/ngbe/base/ngbe_phy_yt.c
index 84b20de45c..f518dc0af6 100644
--- a/drivers/net/ngbe/base/ngbe_phy_yt.c
+++ b/drivers/net/ngbe/base/ngbe_phy_yt.c
@@ -78,6 +78,104 @@ s32 ngbe_write_phy_reg_ext_yt(struct ngbe_hw *hw,
return 0;
}
+s32 ngbe_read_phy_reg_sds_ext_yt(struct ngbe_hw *hw,
+ u32 reg_addr, u32 device_type, u16 *phy_data)
+{
+ ngbe_write_phy_reg_ext_yt(hw, YT_SMI_PHY, device_type, YT_SMI_PHY_SDS);
+ ngbe_read_phy_reg_ext_yt(hw, reg_addr, device_type, phy_data);
+ ngbe_write_phy_reg_ext_yt(hw, YT_SMI_PHY, device_type, 0);
+
+ return 0;
+}
+
+s32 ngbe_write_phy_reg_sds_ext_yt(struct ngbe_hw *hw,
+ u32 reg_addr, u32 device_type, u16 phy_data)
+{
+ ngbe_write_phy_reg_ext_yt(hw, YT_SMI_PHY, device_type, YT_SMI_PHY_SDS);
+ ngbe_write_phy_reg_ext_yt(hw, reg_addr, device_type, phy_data);
+ ngbe_write_phy_reg_ext_yt(hw, YT_SMI_PHY, device_type, 0);
+
+ return 0;
+}
+
+s32 ngbe_setup_phy_link_yt(struct ngbe_hw *hw, u32 speed,
+ bool autoneg_wait_to_complete)
+{
+ u16 value_r4 = 0;
+ u16 value_r9 = 0;
+ u16 value;
+
+ DEBUGFUNC("ngbe_setup_phy_link_yt");
+ UNREFERENCED_PARAMETER(autoneg_wait_to_complete);
+
+ hw->phy.autoneg_advertised = 0;
+
+ if (hw->phy.type == ngbe_phy_yt8521s) {
+ /* disable 100/10Base-T auto-negotiation advertisement */
+ hw->phy.read_reg(hw, YT_ANA, 0, &value);
+ value &= ~(YT_ANA_100BASET_FULL | YT_ANA_10BASET_FULL);
+ hw->phy.write_reg(hw, YT_ANA, 0, value);
+
+ /* disable 1000Base-T auto-negotiation advertisement */
+ hw->phy.read_reg(hw, YT_MS_CTRL, 0, &value);
+ value &= ~YT_MS_1000BASET_FULL;
+ hw->phy.write_reg(hw, YT_MS_CTRL, 0, value);
+
+ if (speed & NGBE_LINK_SPEED_1GB_FULL) {
+ hw->phy.autoneg_advertised |= NGBE_LINK_SPEED_1GB_FULL;
+ value_r9 |= YT_MS_1000BASET_FULL;
+ }
+ if (speed & NGBE_LINK_SPEED_100M_FULL) {
+ hw->phy.autoneg_advertised |= NGBE_LINK_SPEED_100M_FULL;
+ value_r4 |= YT_ANA_100BASET_FULL;
+ }
+ if (speed & NGBE_LINK_SPEED_10M_FULL) {
+ hw->phy.autoneg_advertised |= NGBE_LINK_SPEED_10M_FULL;
+ value_r4 |= YT_ANA_10BASET_FULL;
+ }
+
+ /* enable 1000Base-T auto-negotiation advertisement */
+ hw->phy.read_reg(hw, YT_MS_CTRL, 0, &value);
+ value |= value_r9;
+ hw->phy.write_reg(hw, YT_MS_CTRL, 0, value);
+
+ /* enable 100/10Base-T auto-negotiation advertisement */
+ hw->phy.read_reg(hw, YT_ANA, 0, &value);
+ value |= value_r4;
+ hw->phy.write_reg(hw, YT_ANA, 0, value);
+
+ /* software reset to make the above configuration take effect */
+ hw->phy.read_reg(hw, YT_BCR, 0, &value);
+ value |= YT_BCR_RESET;
+ hw->phy.write_reg(hw, YT_BCR, 0, value);
+ } else {
+ hw->phy.autoneg_advertised |= NGBE_LINK_SPEED_1GB_FULL;
+
+ /* RGMII_Config1 : Config rx and tx training delay */
+ value = YT_RGMII_CONF1_RXDELAY |
+ YT_RGMII_CONF1_TXDELAY_FE |
+ YT_RGMII_CONF1_TXDELAY;
+ ngbe_write_phy_reg_ext_yt(hw, YT_RGMII_CONF1, 0, value);
+ value = YT_CHIP_MODE_SEL(1) |
+ YT_CHIP_SW_LDO_EN |
+ YT_CHIP_SW_RST;
+ ngbe_write_phy_reg_ext_yt(hw, YT_CHIP, 0, value);
+
+ /* software reset */
+ ngbe_write_phy_reg_sds_ext_yt(hw, 0x0, 0, 0x9140);
+
+ /* power on phy */
+ hw->phy.read_reg(hw, YT_BCR, 0, &value);
+ value &= ~YT_BCR_PWDN;
+ hw->phy.write_reg(hw, YT_BCR, 0, value);
+ }
+
+ ngbe_write_phy_reg_ext_yt(hw, YT_SMI_PHY, 0, 0);
+ ngbe_read_phy_reg_mdi(hw, YT_INTR_STATUS, 0, &value);
+
+ return 0;
+}
+
s32 ngbe_reset_phy_yt(struct ngbe_hw *hw)
{
u32 i;
@@ -110,3 +208,39 @@ s32 ngbe_reset_phy_yt(struct ngbe_hw *hw)
return status;
}
+s32 ngbe_check_phy_link_yt(struct ngbe_hw *hw,
+ u32 *speed, bool *link_up)
+{
+ s32 status = 0;
+ u16 phy_link = 0;
+ u16 phy_speed = 0;
+ u16 phy_data = 0;
+ u16 insr = 0;
+
+ DEBUGFUNC("ngbe_check_phy_link_yt");
+
+ /* Initialize speed and link to default case */
+ *link_up = false;
+ *speed = NGBE_LINK_SPEED_UNKNOWN;
+
+ ngbe_write_phy_reg_ext_yt(hw, YT_SMI_PHY, 0, 0);
+ ngbe_read_phy_reg_mdi(hw, YT_INTR_STATUS, 0, &insr);
+
+ status = hw->phy.read_reg(hw, YT_SPST, 0, &phy_data);
+ phy_link = phy_data & YT_SPST_LINK;
+ phy_speed = phy_data & YT_SPST_SPEED_MASK;
+
+ if (phy_link) {
+ *link_up = true;
+
+ if (phy_speed == YT_SPST_SPEED_1000M)
+ *speed = NGBE_LINK_SPEED_1GB_FULL;
+ else if (phy_speed == YT_SPST_SPEED_100M)
+ *speed = NGBE_LINK_SPEED_100M_FULL;
+ else if (phy_speed == YT_SPST_SPEED_10M)
+ *speed = NGBE_LINK_SPEED_10M_FULL;
+ }
+
+ return status;
+}
+
diff --git a/drivers/net/ngbe/base/ngbe_phy_yt.h b/drivers/net/ngbe/base/ngbe_phy_yt.h
index 80fd420a63..26820ecb92 100644
--- a/drivers/net/ngbe/base/ngbe_phy_yt.h
+++ b/drivers/net/ngbe/base/ngbe_phy_yt.h
@@ -61,7 +61,15 @@ s32 ngbe_read_phy_reg_ext_yt(struct ngbe_hw *hw,
u32 reg_addr, u32 device_type, u16 *phy_data);
s32 ngbe_write_phy_reg_ext_yt(struct ngbe_hw *hw,
u32 reg_addr, u32 device_type, u16 phy_data);
+s32 ngbe_read_phy_reg_sds_ext_yt(struct ngbe_hw *hw,
+ u32 reg_addr, u32 device_type, u16 *phy_data);
+s32 ngbe_write_phy_reg_sds_ext_yt(struct ngbe_hw *hw,
+ u32 reg_addr, u32 device_type, u16 phy_data);
s32 ngbe_reset_phy_yt(struct ngbe_hw *hw);
+s32 ngbe_check_phy_link_yt(struct ngbe_hw *hw,
+ u32 *speed, bool *link_up);
+s32 ngbe_setup_phy_link_yt(struct ngbe_hw *hw,
+ u32 speed, bool autoneg_wait_to_complete);
#endif /* _NGBE_PHY_YT_H_ */
diff --git a/drivers/net/ngbe/base/ngbe_type.h b/drivers/net/ngbe/base/ngbe_type.h
index 04c1cac422..626c6e2eec 100644
--- a/drivers/net/ngbe/base/ngbe_type.h
+++ b/drivers/net/ngbe/base/ngbe_type.h
@@ -6,6 +6,8 @@
#ifndef _NGBE_TYPE_H_
#define _NGBE_TYPE_H_
+#define NGBE_LINK_UP_TIME 90 /* 9.0 Seconds */
+
#define NGBE_FRAME_SIZE_DFT (1522) /* Default frame size, +FCS */
#define NGBE_ALIGN 128 /* as intel did */
@@ -97,6 +99,8 @@ struct ngbe_mac_info {
s32 (*acquire_swfw_sync)(struct ngbe_hw *hw, u32 mask);
void (*release_swfw_sync)(struct ngbe_hw *hw, u32 mask);
+ s32 (*setup_link)(struct ngbe_hw *hw, u32 speed,
+ bool autoneg_wait_to_complete);
s32 (*check_link)(struct ngbe_hw *hw, u32 *speed,
bool *link_up, bool link_up_wait_to_complete);
/* RAR */
@@ -122,6 +126,9 @@ struct ngbe_mac_info {
bool get_link_status;
struct ngbe_thermal_sensor_data thermal_sensor_data;
bool set_lben;
+ u32 max_link_up_time;
+
+ bool autoneg;
};
struct ngbe_phy_info {
@@ -135,6 +142,9 @@ struct ngbe_phy_info {
u32 device_type, u16 *phy_data);
s32 (*write_reg_unlocked)(struct ngbe_hw *hw, u32 reg_addr,
u32 device_type, u16 phy_data);
+ s32 (*setup_link)(struct ngbe_hw *hw, u32 speed,
+ bool autoneg_wait_to_complete);
+ s32 (*check_link)(struct ngbe_hw *hw, u32 *speed, bool *link_up);
enum ngbe_media_type media_type;
enum ngbe_phy_type type;
@@ -143,6 +153,7 @@ struct ngbe_phy_info {
u32 revision;
u32 phy_semaphore_mask;
bool reset_disable;
+ u32 autoneg_advertised;
};
enum ngbe_isb_idx {
--
2.21.0.windows.1
* [dpdk-dev] [PATCH v6 12/19] net/ngbe: add Rx queue setup and release
2021-06-17 10:59 [dpdk-dev] [PATCH v6 00/19] net: ngbe PMD Jiawen Wu
` (10 preceding siblings ...)
2021-06-17 10:59 ` [dpdk-dev] [PATCH v6 11/19] net/ngbe: setup the check PHY link Jiawen Wu
@ 2021-06-17 10:59 ` Jiawen Wu
2021-07-02 16:35 ` Andrew Rybchenko
2021-06-17 10:59 ` [dpdk-dev] [PATCH v6 13/19] net/ngbe: add Tx " Jiawen Wu
` (6 subsequent siblings)
18 siblings, 1 reply; 42+ messages in thread
From: Jiawen Wu @ 2021-06-17 10:59 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu
Add device Rx queue setup and release operations.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
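For context, a minimal application-side sketch of exercising this Rx queue
setup path through the ethdev API; the port ID, descriptor count, and mempool
sizing below are illustrative and assume a single Rx queue:

#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <errno.h>

static int
example_rx_queue_setup(uint16_t port_id)
{
	struct rte_eth_conf port_conf = { 0 };
	struct rte_eth_dev_info dev_info;
	struct rte_eth_rxconf rxconf;
	struct rte_mempool *mb_pool;
	int ret;

	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret != 0)
		return ret;

	/* one Rx queue, no Tx queues yet (Tx setup comes in the next patch) */
	ret = rte_eth_dev_configure(port_id, 1, 0, &port_conf);
	if (ret != 0)
		return ret;

	mb_pool = rte_pktmbuf_pool_create("example_rx_pool", 4096, 256, 0,
					  RTE_MBUF_DEFAULT_BUF_SIZE,
					  rte_eth_dev_socket_id(port_id));
	if (mb_pool == NULL)
		return -ENOMEM;

	/* start from the driver defaults reported via dev_infos_get() */
	rxconf = dev_info.default_rxconf;

	/*
	 * 256 descriptors satisfy rx_desc_lim (multiple of NGBE_RXD_ALIGN,
	 * within min/max) and keep the bulk-alloc precondition
	 * nb_desc % rx_free_thresh == 0 for the default threshold of 32.
	 */
	return rte_eth_rx_queue_setup(port_id, 0, 256,
				      rte_eth_dev_socket_id(port_id),
				      &rxconf, mb_pool);
}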
drivers/net/ngbe/meson.build | 1 +
drivers/net/ngbe/ngbe_ethdev.c | 37 +++-
drivers/net/ngbe/ngbe_ethdev.h | 16 ++
drivers/net/ngbe/ngbe_rxtx.c | 308 +++++++++++++++++++++++++++++++++
drivers/net/ngbe/ngbe_rxtx.h | 96 ++++++++++
5 files changed, 457 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/ngbe/ngbe_rxtx.c
create mode 100644 drivers/net/ngbe/ngbe_rxtx.h
diff --git a/drivers/net/ngbe/meson.build b/drivers/net/ngbe/meson.build
index 81173fa7f0..9e75b82f1c 100644
--- a/drivers/net/ngbe/meson.build
+++ b/drivers/net/ngbe/meson.build
@@ -12,6 +12,7 @@ objs = [base_objs]
sources = files(
'ngbe_ethdev.c',
+ 'ngbe_rxtx.c',
)
includes += include_directories('base')
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index c952023e8b..e73606c5f3 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -12,6 +12,7 @@
#include "ngbe_logs.h"
#include "base/ngbe.h"
#include "ngbe_ethdev.h"
+#include "ngbe_rxtx.h"
static int ngbe_dev_close(struct rte_eth_dev *dev);
@@ -37,6 +38,12 @@ static const struct rte_pci_id pci_id_ngbe_map[] = {
{ .vendor_id = 0, /* sentinel */ },
};
+static const struct rte_eth_desc_lim rx_desc_lim = {
+ .nb_max = NGBE_RING_DESC_MAX,
+ .nb_min = NGBE_RING_DESC_MIN,
+ .nb_align = NGBE_RXD_ALIGN,
+};
+
static const struct eth_dev_ops ngbe_eth_dev_ops;
static inline void
@@ -241,12 +248,19 @@ static int
ngbe_dev_configure(struct rte_eth_dev *dev)
{
struct ngbe_interrupt *intr = ngbe_dev_intr(dev);
+ struct ngbe_adapter *adapter = ngbe_dev_adapter(dev);
PMD_INIT_FUNC_TRACE();
/* set flag to update link status after init */
intr->flags |= NGBE_FLAG_NEED_LINK_UPDATE;
+ /*
+ * Initialize to TRUE. If any Rx queue doesn't meet the bulk
+ * allocation preconditions, it will be reset.
+ */
+ adapter->rx_bulk_alloc_allowed = true;
+
return 0;
}
@@ -266,11 +280,30 @@ ngbe_dev_close(struct rte_eth_dev *dev)
static int
ngbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
{
- RTE_SET_USED(dev);
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+
+ dev_info->max_rx_queues = (uint16_t)hw->mac.max_rx_queues;
+
+ dev_info->default_rxconf = (struct rte_eth_rxconf) {
+ .rx_thresh = {
+ .pthresh = NGBE_DEFAULT_RX_PTHRESH,
+ .hthresh = NGBE_DEFAULT_RX_HTHRESH,
+ .wthresh = NGBE_DEFAULT_RX_WTHRESH,
+ },
+ .rx_free_thresh = NGBE_DEFAULT_RX_FREE_THRESH,
+ .rx_drop_en = 0,
+ .offloads = 0,
+ };
+
+ dev_info->rx_desc_lim = rx_desc_lim;
dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_100M |
ETH_LINK_SPEED_10M;
+ /* Driver-preferred Rx/Tx parameters */
+ dev_info->default_rxportconf.nb_queues = 1;
+ dev_info->default_rxportconf.ring_size = 256;
+
return 0;
}
@@ -570,6 +603,8 @@ static const struct eth_dev_ops ngbe_eth_dev_ops = {
.dev_configure = ngbe_dev_configure,
.dev_infos_get = ngbe_dev_info_get,
.link_update = ngbe_dev_link_update,
+ .rx_queue_setup = ngbe_dev_rx_queue_setup,
+ .rx_queue_release = ngbe_dev_rx_queue_release,
};
RTE_PMD_REGISTER_PCI(net_ngbe, rte_ngbe_pmd);
diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
index b67508a3de..6580d288c8 100644
--- a/drivers/net/ngbe/ngbe_ethdev.h
+++ b/drivers/net/ngbe/ngbe_ethdev.h
@@ -30,6 +30,7 @@ struct ngbe_interrupt {
struct ngbe_adapter {
struct ngbe_hw hw;
struct ngbe_interrupt intr;
+ bool rx_bulk_alloc_allowed;
};
static inline struct ngbe_adapter *
@@ -58,6 +59,13 @@ ngbe_dev_intr(struct rte_eth_dev *dev)
return intr;
}
+void ngbe_dev_rx_queue_release(void *rxq);
+
+int ngbe_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
+ uint16_t nb_rx_desc, unsigned int socket_id,
+ const struct rte_eth_rxconf *rx_conf,
+ struct rte_mempool *mb_pool);
+
int
ngbe_dev_link_update_share(struct rte_eth_dev *dev,
int wait_to_complete);
@@ -66,4 +74,12 @@ ngbe_dev_link_update_share(struct rte_eth_dev *dev,
#define NGBE_LINK_UP_CHECK_TIMEOUT 1000 /* ms */
#define NGBE_VMDQ_NUM_UC_MAC 4096 /* Maximum nb. of UC MAC addr. */
+/*
+ * Default values for Rx/Tx configuration
+ */
+#define NGBE_DEFAULT_RX_FREE_THRESH 32
+#define NGBE_DEFAULT_RX_PTHRESH 8
+#define NGBE_DEFAULT_RX_HTHRESH 8
+#define NGBE_DEFAULT_RX_WTHRESH 0
+
#endif /* _NGBE_ETHDEV_H_ */
diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c
new file mode 100644
index 0000000000..df0b64dc01
--- /dev/null
+++ b/drivers/net/ngbe/ngbe_rxtx.c
@@ -0,0 +1,308 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+ * Copyright(c) 2010-2017 Intel Corporation
+ */
+
+#include <sys/queue.h>
+
+#include <stdint.h>
+#include <rte_ethdev.h>
+#include <ethdev_driver.h>
+#include <rte_malloc.h>
+
+#include "ngbe_logs.h"
+#include "base/ngbe.h"
+#include "ngbe_ethdev.h"
+#include "ngbe_rxtx.h"
+
+/**
+ * ngbe_free_sc_cluster - free the not-yet-completed scattered cluster
+ *
+ * The "next" pointer of the last segment of (not-yet-completed) RSC clusters
+ * in the sw_sc_ring is not set to NULL but rather points to the next
+ * mbuf of this RSC aggregation (that has not been completed yet and still
+ * resides on the HW ring). So, instead of calling rte_pktmbuf_free(), we
+ * just free the first "nb_segs" segments of the cluster explicitly by calling
+ * rte_pktmbuf_free_seg().
+ *
+ * @m scattered cluster head
+ */
+static void __rte_cold
+ngbe_free_sc_cluster(struct rte_mbuf *m)
+{
+ uint16_t i, nb_segs = m->nb_segs;
+ struct rte_mbuf *next_seg;
+
+ for (i = 0; i < nb_segs; i++) {
+ next_seg = m->next;
+ rte_pktmbuf_free_seg(m);
+ m = next_seg;
+ }
+}
+
+static void __rte_cold
+ngbe_rx_queue_release_mbufs(struct ngbe_rx_queue *rxq)
+{
+ unsigned int i;
+
+ if (rxq->sw_ring != NULL) {
+ for (i = 0; i < rxq->nb_rx_desc; i++) {
+ if (rxq->sw_ring[i].mbuf != NULL) {
+ rte_pktmbuf_free_seg(rxq->sw_ring[i].mbuf);
+ rxq->sw_ring[i].mbuf = NULL;
+ }
+ }
+ if (rxq->rx_nb_avail) {
+ for (i = 0; i < rxq->rx_nb_avail; ++i) {
+ struct rte_mbuf *mb;
+
+ mb = rxq->rx_stage[rxq->rx_next_avail + i];
+ rte_pktmbuf_free_seg(mb);
+ }
+ rxq->rx_nb_avail = 0;
+ }
+ }
+
+ if (rxq->sw_sc_ring != NULL)
+ for (i = 0; i < rxq->nb_rx_desc; i++)
+ if (rxq->sw_sc_ring[i].fbuf) {
+ ngbe_free_sc_cluster(rxq->sw_sc_ring[i].fbuf);
+ rxq->sw_sc_ring[i].fbuf = NULL;
+ }
+}
+
+static void __rte_cold
+ngbe_rx_queue_release(struct ngbe_rx_queue *rxq)
+{
+ if (rxq != NULL) {
+ ngbe_rx_queue_release_mbufs(rxq);
+ rte_free(rxq->sw_ring);
+ rte_free(rxq->sw_sc_ring);
+ rte_free(rxq);
+ }
+}
+
+void __rte_cold
+ngbe_dev_rx_queue_release(void *rxq)
+{
+ ngbe_rx_queue_release(rxq);
+}
+
+/*
+ * Check if Rx Burst Bulk Alloc function can be used.
+ * Return
+ * 0: the preconditions are satisfied and the bulk allocation function
+ * can be used.
+ * -EINVAL: the preconditions are NOT satisfied and the default Rx burst
+ * function must be used.
+ */
+static inline int __rte_cold
+check_rx_burst_bulk_alloc_preconditions(struct ngbe_rx_queue *rxq)
+{
+ int ret = 0;
+
+ /*
+ * Make sure the following pre-conditions are satisfied:
+ * rxq->rx_free_thresh >= RTE_PMD_NGBE_RX_MAX_BURST
+ * rxq->rx_free_thresh < rxq->nb_rx_desc
+ * (rxq->nb_rx_desc % rxq->rx_free_thresh) == 0
+ * Scattered packets are not supported. This should be checked
+ * outside of this function.
+ */
+ if (!(rxq->rx_free_thresh >= RTE_PMD_NGBE_RX_MAX_BURST)) {
+ PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: "
+ "rxq->rx_free_thresh=%d, "
+ "RTE_PMD_NGBE_RX_MAX_BURST=%d",
+ rxq->rx_free_thresh, RTE_PMD_NGBE_RX_MAX_BURST);
+ ret = -EINVAL;
+ } else if (!(rxq->rx_free_thresh < rxq->nb_rx_desc)) {
+ PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: "
+ "rxq->rx_free_thresh=%d, "
+ "rxq->nb_rx_desc=%d",
+ rxq->rx_free_thresh, rxq->nb_rx_desc);
+ ret = -EINVAL;
+ } else if (!((rxq->nb_rx_desc % rxq->rx_free_thresh) == 0)) {
+ PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: "
+ "rxq->nb_rx_desc=%d, "
+ "rxq->rx_free_thresh=%d",
+ rxq->nb_rx_desc, rxq->rx_free_thresh);
+ ret = -EINVAL;
+ }
+
+ return ret;
+}
+
+/* Reset dynamic ngbe_rx_queue fields back to defaults */
+static void __rte_cold
+ngbe_reset_rx_queue(struct ngbe_adapter *adapter, struct ngbe_rx_queue *rxq)
+{
+ static const struct ngbe_rx_desc zeroed_desc = {
+ {{0}, {0} }, {{0}, {0} } };
+ unsigned int i;
+ uint16_t len = rxq->nb_rx_desc;
+
+ /*
+ * By default, the Rx queue setup function allocates enough memory for
+ * NGBE_RING_DESC_MAX. The Rx Burst bulk allocation function requires
+ * extra memory at the end of the descriptor ring to be zeroed out.
+ */
+ if (adapter->rx_bulk_alloc_allowed)
+ /* zero out extra memory */
+ len += RTE_PMD_NGBE_RX_MAX_BURST;
+
+ /*
+ * Zero out HW ring memory. Zero out extra memory at the end of
+ * the H/W ring so look-ahead logic in Rx Burst bulk alloc function
+ * reads extra memory as zeros.
+ */
+ for (i = 0; i < len; i++)
+ rxq->rx_ring[i] = zeroed_desc;
+
+ /*
+ * Initialize extra software ring entries. Space for these extra
+ * entries is always allocated.
+ */
+ memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf));
+ for (i = rxq->nb_rx_desc; i < len; ++i)
+ rxq->sw_ring[i].mbuf = &rxq->fake_mbuf;
+
+ rxq->rx_nb_avail = 0;
+ rxq->rx_next_avail = 0;
+ rxq->rx_free_trigger = (uint16_t)(rxq->rx_free_thresh - 1);
+ rxq->rx_tail = 0;
+ rxq->nb_rx_hold = 0;
+ rxq->pkt_first_seg = NULL;
+ rxq->pkt_last_seg = NULL;
+}
+
+int __rte_cold
+ngbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
+ uint16_t queue_idx,
+ uint16_t nb_desc,
+ unsigned int socket_id,
+ const struct rte_eth_rxconf *rx_conf,
+ struct rte_mempool *mp)
+{
+ const struct rte_memzone *rz;
+ struct ngbe_rx_queue *rxq;
+ struct ngbe_hw *hw;
+ uint16_t len;
+ struct ngbe_adapter *adapter = ngbe_dev_adapter(dev);
+
+ PMD_INIT_FUNC_TRACE();
+ hw = ngbe_dev_hw(dev);
+
+ /*
+ * Validate number of receive descriptors.
+ * It must not exceed the hardware maximum and must be a multiple
+ * of NGBE_RXD_ALIGN.
+ */
+ if (nb_desc % NGBE_RXD_ALIGN != 0 ||
+ nb_desc > NGBE_RING_DESC_MAX ||
+ nb_desc < NGBE_RING_DESC_MIN) {
+ return -EINVAL;
+ }
+
+ /* Free memory prior to re-allocation if needed... */
+ if (dev->data->rx_queues[queue_idx] != NULL) {
+ ngbe_rx_queue_release(dev->data->rx_queues[queue_idx]);
+ dev->data->rx_queues[queue_idx] = NULL;
+ }
+
+ /* First allocate the Rx queue data structure */
+ rxq = rte_zmalloc_socket("ethdev RX queue",
+ sizeof(struct ngbe_rx_queue),
+ RTE_CACHE_LINE_SIZE, socket_id);
+ if (rxq == NULL)
+ return -ENOMEM;
+ rxq->mb_pool = mp;
+ rxq->nb_rx_desc = nb_desc;
+ rxq->rx_free_thresh = rx_conf->rx_free_thresh;
+ rxq->queue_id = queue_idx;
+ rxq->reg_idx = queue_idx;
+ rxq->port_id = dev->data->port_id;
+ rxq->drop_en = rx_conf->rx_drop_en;
+ rxq->rx_deferred_start = rx_conf->rx_deferred_start;
+
+ /*
+ * Allocate Rx ring hardware descriptors. A memzone large enough to
+ * handle the maximum ring size is allocated in order to allow for
+ * resizing in later calls to the queue setup function.
+ */
+ rz = rte_eth_dma_zone_reserve(dev, "rx_ring", queue_idx,
+ RX_RING_SZ, NGBE_ALIGN, socket_id);
+ if (rz == NULL) {
+ ngbe_rx_queue_release(rxq);
+ return -ENOMEM;
+ }
+
+ /*
+ * Zero init all the descriptors in the ring.
+ */
+ memset(rz->addr, 0, RX_RING_SZ);
+
+ rxq->rdt_reg_addr = NGBE_REG_ADDR(hw, NGBE_RXWP(rxq->reg_idx));
+ rxq->rdh_reg_addr = NGBE_REG_ADDR(hw, NGBE_RXRP(rxq->reg_idx));
+
+ rxq->rx_ring_phys_addr = TMZ_PADDR(rz);
+ rxq->rx_ring = (struct ngbe_rx_desc *)TMZ_VADDR(rz);
+
+ /*
+ * Certain constraints must be met in order to use the bulk buffer
+ * allocation Rx burst function. If any Rx queue doesn't meet them,
+ * the feature should be disabled for the whole port.
+ */
+ if (check_rx_burst_bulk_alloc_preconditions(rxq)) {
+ PMD_INIT_LOG(DEBUG, "queue[%d] doesn't meet Rx Bulk Alloc "
+ "preconditions - canceling the feature for "
+ "the whole port[%d]",
+ rxq->queue_id, rxq->port_id);
+ adapter->rx_bulk_alloc_allowed = false;
+ }
+
+ /*
+ * Allocate software ring. Allow for space at the end of the
+ * S/W ring to make sure look-ahead logic in bulk alloc Rx burst
+ * function does not access an invalid memory region.
+ */
+ len = nb_desc;
+ if (adapter->rx_bulk_alloc_allowed)
+ len += RTE_PMD_NGBE_RX_MAX_BURST;
+
+ rxq->sw_ring = rte_zmalloc_socket("rxq->sw_ring",
+ sizeof(struct ngbe_rx_entry) * len,
+ RTE_CACHE_LINE_SIZE, socket_id);
+ if (rxq->sw_ring == NULL) {
+ ngbe_rx_queue_release(rxq);
+ return -ENOMEM;
+ }
+
+ /*
+ * Always allocate even if it's not going to be needed in order to
+ * simplify the code.
+ *
+ * This ring is used in Scattered Rx cases and Scattered Rx may
+ * be requested in ngbe_dev_rx_init(), which is called later from
+ * dev_start() flow.
+ */
+ rxq->sw_sc_ring =
+ rte_zmalloc_socket("rxq->sw_sc_ring",
+ sizeof(struct ngbe_scattered_rx_entry) * len,
+ RTE_CACHE_LINE_SIZE, socket_id);
+ if (rxq->sw_sc_ring == NULL) {
+ ngbe_rx_queue_release(rxq);
+ return -ENOMEM;
+ }
+
+ PMD_INIT_LOG(DEBUG, "sw_ring=%p sw_sc_ring=%p hw_ring=%p "
+ "dma_addr=0x%" PRIx64,
+ rxq->sw_ring, rxq->sw_sc_ring, rxq->rx_ring,
+ rxq->rx_ring_phys_addr);
+
+ dev->data->rx_queues[queue_idx] = rxq;
+
+ ngbe_reset_rx_queue(adapter, rxq);
+
+ return 0;
+}
+
diff --git a/drivers/net/ngbe/ngbe_rxtx.h b/drivers/net/ngbe/ngbe_rxtx.h
new file mode 100644
index 0000000000..92b9a9fd1b
--- /dev/null
+++ b/drivers/net/ngbe/ngbe_rxtx.h
@@ -0,0 +1,96 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
+ * Copyright(c) 2010-2017 Intel Corporation
+ */
+
+#ifndef _NGBE_RXTX_H_
+#define _NGBE_RXTX_H_
+
+/*****************************************************************************
+ * Receive Descriptor
+ *****************************************************************************/
+struct ngbe_rx_desc {
+ struct {
+ union {
+ rte_le32_t dw0;
+ struct {
+ rte_le16_t pkt;
+ rte_le16_t hdr;
+ } lo;
+ };
+ union {
+ rte_le32_t dw1;
+ struct {
+ rte_le16_t ipid;
+ rte_le16_t csum;
+ } hi;
+ };
+ } qw0; /* also as r.pkt_addr */
+ struct {
+ union {
+ rte_le32_t dw2;
+ struct {
+ rte_le32_t status;
+ } lo;
+ };
+ union {
+ rte_le32_t dw3;
+ struct {
+ rte_le16_t len;
+ rte_le16_t tag;
+ } hi;
+ };
+ } qw1; /* also as r.hdr_addr */
+};
+
+#define RTE_PMD_NGBE_RX_MAX_BURST 32
+
+#define RX_RING_SZ ((NGBE_RING_DESC_MAX + RTE_PMD_NGBE_RX_MAX_BURST) * \
+ sizeof(struct ngbe_rx_desc))
+
+
+/**
+ * Structure associated with each descriptor of the Rx ring of a Rx queue.
+ */
+struct ngbe_rx_entry {
+ struct rte_mbuf *mbuf; /**< mbuf associated with Rx descriptor. */
+};
+
+struct ngbe_scattered_rx_entry {
+ struct rte_mbuf *fbuf; /**< First segment of the fragmented packet. */
+};
+
+/**
+ * Structure associated with each Rx queue.
+ */
+struct ngbe_rx_queue {
+ struct rte_mempool *mb_pool; /**< mbuf pool to populate Rx ring. */
+ volatile struct ngbe_rx_desc *rx_ring; /**< Rx ring virtual address. */
+ uint64_t rx_ring_phys_addr; /**< Rx ring DMA address. */
+ volatile uint32_t *rdt_reg_addr; /**< RDT register address. */
+ volatile uint32_t *rdh_reg_addr; /**< RDH register address. */
+ struct ngbe_rx_entry *sw_ring; /**< address of Rx software ring. */
+ /**< address of scattered Rx software ring. */
+ struct ngbe_scattered_rx_entry *sw_sc_ring;
+ struct rte_mbuf *pkt_first_seg; /**< First segment of current packet. */
+ struct rte_mbuf *pkt_last_seg; /**< Last segment of current packet. */
+ uint16_t nb_rx_desc; /**< number of Rx descriptors. */
+ uint16_t rx_tail; /**< current value of RDT register. */
+ uint16_t nb_rx_hold; /**< number of held free Rx desc. */
+ uint16_t rx_nb_avail; /**< nr of staged pkts ready to ret to app */
+ uint16_t rx_next_avail; /**< idx of next staged pkt to ret to app */
+ uint16_t rx_free_trigger; /**< triggers rx buffer allocation */
+ uint16_t rx_free_thresh; /**< max free Rx desc to hold. */
+ uint16_t queue_id; /**< RX queue index. */
+ uint16_t reg_idx; /**< RX queue register index. */
+ /**< Packet type mask for different NICs. */
+ uint16_t port_id; /**< Device port identifier. */
+ uint8_t drop_en; /**< If not 0, set SRRCTL.Drop_En. */
+ uint8_t rx_deferred_start; /**< not in global dev start. */
+ /** need to alloc dummy mbuf, for wraparound when scanning hw ring */
+ struct rte_mbuf fake_mbuf;
+ /** hold packets to return to application */
+ struct rte_mbuf *rx_stage[RTE_PMD_NGBE_RX_MAX_BURST * 2];
+};
+
+#endif /* _NGBE_RXTX_H_ */
--
2.21.0.windows.1
* [dpdk-dev] [PATCH v6 13/19] net/ngbe: add Tx queue setup and release
2021-06-17 10:59 [dpdk-dev] [PATCH v6 00/19] net: ngbe PMD Jiawen Wu
` (11 preceding siblings ...)
2021-06-17 10:59 ` [dpdk-dev] [PATCH v6 12/19] net/ngbe: add Rx queue setup and release Jiawen Wu
@ 2021-06-17 10:59 ` Jiawen Wu
2021-07-02 16:39 ` Andrew Rybchenko
2021-06-17 11:00 ` [dpdk-dev] [PATCH v6 14/19] net/ngbe: add simple Rx flow Jiawen Wu
` (5 subsequent siblings)
18 siblings, 1 reply; 42+ messages in thread
From: Jiawen Wu @ 2021-06-17 10:59 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu
Add device Tx queue setup and release operations.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
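For context, a hedged sketch of the corresponding application-side Tx queue
setup; the values are illustrative and are chosen to satisfy the
tx_free_thresh checks added below (threshold smaller than nb_desc - 3 and a
divisor of nb_desc):

#include <rte_ethdev.h>

static int
example_tx_queue_setup(uint16_t port_id)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_txconf txconf;
	const uint16_t nb_desc = 512;	/* multiple of NGBE_TXD_ALIGN */
	int ret;

	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret != 0)
		return ret;

	/* assumes the port was configured with at least one Tx queue */
	txconf = dev_info.default_txconf;
	txconf.tx_free_thresh = 32;	/* 32 < 512 - 3 and 512 % 32 == 0 */

	return rte_eth_tx_queue_setup(port_id, 0, nb_desc,
				      rte_eth_dev_socket_id(port_id),
				      &txconf);
}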
drivers/net/ngbe/ngbe_ethdev.c | 24 ++++
drivers/net/ngbe/ngbe_ethdev.h | 11 ++
drivers/net/ngbe/ngbe_rxtx.c | 206 +++++++++++++++++++++++++++++++++
drivers/net/ngbe/ngbe_rxtx.h | 95 +++++++++++++++
4 files changed, 336 insertions(+)
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index e73606c5f3..d6f93cfe46 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -44,6 +44,14 @@ static const struct rte_eth_desc_lim rx_desc_lim = {
.nb_align = NGBE_RXD_ALIGN,
};
+static const struct rte_eth_desc_lim tx_desc_lim = {
+ .nb_max = NGBE_RING_DESC_MAX,
+ .nb_min = NGBE_RING_DESC_MIN,
+ .nb_align = NGBE_TXD_ALIGN,
+ .nb_seg_max = NGBE_TX_MAX_SEG,
+ .nb_mtu_seg_max = NGBE_TX_MAX_SEG,
+};
+
static const struct eth_dev_ops ngbe_eth_dev_ops;
static inline void
@@ -283,6 +291,7 @@ ngbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
struct ngbe_hw *hw = ngbe_dev_hw(dev);
dev_info->max_rx_queues = (uint16_t)hw->mac.max_rx_queues;
+ dev_info->max_tx_queues = (uint16_t)hw->mac.max_tx_queues;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
.rx_thresh = {
@@ -295,14 +304,27 @@ ngbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
.offloads = 0,
};
+ dev_info->default_txconf = (struct rte_eth_txconf) {
+ .tx_thresh = {
+ .pthresh = NGBE_DEFAULT_TX_PTHRESH,
+ .hthresh = NGBE_DEFAULT_TX_HTHRESH,
+ .wthresh = NGBE_DEFAULT_TX_WTHRESH,
+ },
+ .tx_free_thresh = NGBE_DEFAULT_TX_FREE_THRESH,
+ .offloads = 0,
+ };
+
dev_info->rx_desc_lim = rx_desc_lim;
+ dev_info->tx_desc_lim = tx_desc_lim;
dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_100M |
ETH_LINK_SPEED_10M;
/* Driver-preferred Rx/Tx parameters */
dev_info->default_rxportconf.nb_queues = 1;
+ dev_info->default_txportconf.nb_queues = 1;
dev_info->default_rxportconf.ring_size = 256;
+ dev_info->default_txportconf.ring_size = 256;
return 0;
}
@@ -605,6 +627,8 @@ static const struct eth_dev_ops ngbe_eth_dev_ops = {
.link_update = ngbe_dev_link_update,
.rx_queue_setup = ngbe_dev_rx_queue_setup,
.rx_queue_release = ngbe_dev_rx_queue_release,
+ .tx_queue_setup = ngbe_dev_tx_queue_setup,
+ .tx_queue_release = ngbe_dev_tx_queue_release,
};
RTE_PMD_REGISTER_PCI(net_ngbe, rte_ngbe_pmd);
diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
index 6580d288c8..131671c313 100644
--- a/drivers/net/ngbe/ngbe_ethdev.h
+++ b/drivers/net/ngbe/ngbe_ethdev.h
@@ -61,11 +61,17 @@ ngbe_dev_intr(struct rte_eth_dev *dev)
void ngbe_dev_rx_queue_release(void *rxq);
+void ngbe_dev_tx_queue_release(void *txq);
+
int ngbe_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
uint16_t nb_rx_desc, unsigned int socket_id,
const struct rte_eth_rxconf *rx_conf,
struct rte_mempool *mb_pool);
+int ngbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
+ uint16_t nb_tx_desc, unsigned int socket_id,
+ const struct rte_eth_txconf *tx_conf);
+
int
ngbe_dev_link_update_share(struct rte_eth_dev *dev,
int wait_to_complete);
@@ -82,4 +88,9 @@ ngbe_dev_link_update_share(struct rte_eth_dev *dev,
#define NGBE_DEFAULT_RX_HTHRESH 8
#define NGBE_DEFAULT_RX_WTHRESH 0
+#define NGBE_DEFAULT_TX_FREE_THRESH 32
+#define NGBE_DEFAULT_TX_PTHRESH 32
+#define NGBE_DEFAULT_TX_HTHRESH 0
+#define NGBE_DEFAULT_TX_WTHRESH 0
+
#endif /* _NGBE_ETHDEV_H_ */
diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c
index df0b64dc01..da9150b2f1 100644
--- a/drivers/net/ngbe/ngbe_rxtx.c
+++ b/drivers/net/ngbe/ngbe_rxtx.c
@@ -15,6 +15,212 @@
#include "ngbe_ethdev.h"
#include "ngbe_rxtx.h"
+/*********************************************************************
+ *
+ * Queue management functions
+ *
+ **********************************************************************/
+
+static void __rte_cold
+ngbe_tx_queue_release_mbufs(struct ngbe_tx_queue *txq)
+{
+ unsigned int i;
+
+ if (txq->sw_ring != NULL) {
+ for (i = 0; i < txq->nb_tx_desc; i++) {
+ if (txq->sw_ring[i].mbuf != NULL) {
+ rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
+ txq->sw_ring[i].mbuf = NULL;
+ }
+ }
+ }
+}
+
+static void __rte_cold
+ngbe_tx_free_swring(struct ngbe_tx_queue *txq)
+{
+ if (txq != NULL)
+ rte_free(txq->sw_ring);
+}
+
+static void __rte_cold
+ngbe_tx_queue_release(struct ngbe_tx_queue *txq)
+{
+ if (txq != NULL) {
+ if (txq->ops != NULL) {
+ txq->ops->release_mbufs(txq);
+ txq->ops->free_swring(txq);
+ }
+ rte_free(txq);
+ }
+}
+
+void __rte_cold
+ngbe_dev_tx_queue_release(void *txq)
+{
+ ngbe_tx_queue_release(txq);
+}
+
+/* (Re)set dynamic ngbe_tx_queue fields to defaults */
+static void __rte_cold
+ngbe_reset_tx_queue(struct ngbe_tx_queue *txq)
+{
+ static const struct ngbe_tx_desc zeroed_desc = {0};
+ struct ngbe_tx_entry *txe = txq->sw_ring;
+ uint16_t prev, i;
+
+ /* Zero out HW ring memory */
+ for (i = 0; i < txq->nb_tx_desc; i++)
+ txq->tx_ring[i] = zeroed_desc;
+
+ /* Initialize SW ring entries */
+ prev = (uint16_t)(txq->nb_tx_desc - 1);
+ for (i = 0; i < txq->nb_tx_desc; i++) {
+ /* the ring can also be modified by hardware */
+ volatile struct ngbe_tx_desc *txd = &txq->tx_ring[i];
+
+ txd->dw3 = rte_cpu_to_le_32(NGBE_TXD_DD);
+ txe[i].mbuf = NULL;
+ txe[i].last_id = i;
+ txe[prev].next_id = i;
+ prev = i;
+ }
+
+ txq->tx_next_dd = (uint16_t)(txq->tx_free_thresh - 1);
+ txq->tx_tail = 0;
+
+ /*
+ * Always allow 1 descriptor to be un-allocated to avoid
+ * a H/W race condition
+ */
+ txq->last_desc_cleaned = (uint16_t)(txq->nb_tx_desc - 1);
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_desc - 1);
+ txq->ctx_curr = 0;
+ memset((void *)&txq->ctx_cache, 0,
+ NGBE_CTX_NUM * sizeof(struct ngbe_ctx_info));
+}
+
+static const struct ngbe_txq_ops def_txq_ops = {
+ .release_mbufs = ngbe_tx_queue_release_mbufs,
+ .free_swring = ngbe_tx_free_swring,
+ .reset = ngbe_reset_tx_queue,
+};
+
+int __rte_cold
+ngbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
+ uint16_t queue_idx,
+ uint16_t nb_desc,
+ unsigned int socket_id,
+ const struct rte_eth_txconf *tx_conf)
+{
+ const struct rte_memzone *tz;
+ struct ngbe_tx_queue *txq;
+ struct ngbe_hw *hw;
+ uint16_t tx_free_thresh;
+
+ PMD_INIT_FUNC_TRACE();
+ hw = ngbe_dev_hw(dev);
+
+ /*
+ * Validate number of transmit descriptors.
+ * It must not exceed the hardware maximum and must be a multiple
+ * of NGBE_TXD_ALIGN.
+ */
+ if (nb_desc % NGBE_TXD_ALIGN != 0 ||
+ nb_desc > NGBE_RING_DESC_MAX ||
+ nb_desc < NGBE_RING_DESC_MIN) {
+ return -EINVAL;
+ }
+
+ /*
+ * The Tx descriptor ring will be cleaned after txq->tx_free_thresh
+ * descriptors are used or if the number of descriptors required
+ * to transmit a packet is greater than the number of free Tx
+ * descriptors.
+ * One descriptor in the Tx ring is used as a sentinel to avoid a
+ * H/W race condition, hence the maximum threshold constraints.
+ * When set to zero use default values.
+ */
+ tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh) ?
+ tx_conf->tx_free_thresh : DEFAULT_TX_FREE_THRESH);
+ if (tx_free_thresh >= (nb_desc - 3)) {
+ PMD_INIT_LOG(ERR, "tx_free_thresh must be less than the number of "
+ "TX descriptors minus 3. (tx_free_thresh=%u "
+ "port=%d queue=%d)",
+ (unsigned int)tx_free_thresh,
+ (int)dev->data->port_id, (int)queue_idx);
+ return -(EINVAL);
+ }
+
+ if (nb_desc % tx_free_thresh != 0) {
+ PMD_INIT_LOG(ERR, "tx_free_thresh must be a divisor of the "
+ "number of Tx descriptors. (tx_free_thresh=%u "
+ "port=%d queue=%d)", (unsigned int)tx_free_thresh,
+ (int)dev->data->port_id, (int)queue_idx);
+ return -(EINVAL);
+ }
+
+ /* Free memory prior to re-allocation if needed... */
+ if (dev->data->tx_queues[queue_idx] != NULL) {
+ ngbe_tx_queue_release(dev->data->tx_queues[queue_idx]);
+ dev->data->tx_queues[queue_idx] = NULL;
+ }
+
+ /* First allocate the Tx queue data structure */
+ txq = rte_zmalloc_socket("ethdev Tx queue",
+ sizeof(struct ngbe_tx_queue),
+ RTE_CACHE_LINE_SIZE, socket_id);
+ if (txq == NULL)
+ return -ENOMEM;
+
+ /*
+ * Allocate Tx ring hardware descriptors. A memzone large enough to
+ * handle the maximum ring size is allocated in order to allow for
+ * resizing in later calls to the queue setup function.
+ */
+ tz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx,
+ sizeof(struct ngbe_tx_desc) * NGBE_RING_DESC_MAX,
+ NGBE_ALIGN, socket_id);
+ if (tz == NULL) {
+ ngbe_tx_queue_release(txq);
+ return -ENOMEM;
+ }
+
+ txq->nb_tx_desc = nb_desc;
+ txq->tx_free_thresh = tx_free_thresh;
+ txq->pthresh = tx_conf->tx_thresh.pthresh;
+ txq->hthresh = tx_conf->tx_thresh.hthresh;
+ txq->wthresh = tx_conf->tx_thresh.wthresh;
+ txq->queue_id = queue_idx;
+ txq->reg_idx = queue_idx;
+ txq->port_id = dev->data->port_id;
+ txq->ops = &def_txq_ops;
+ txq->tx_deferred_start = tx_conf->tx_deferred_start;
+
+ txq->tdt_reg_addr = NGBE_REG_ADDR(hw, NGBE_TXWP(txq->reg_idx));
+ txq->tdc_reg_addr = NGBE_REG_ADDR(hw, NGBE_TXCFG(txq->reg_idx));
+
+ txq->tx_ring_phys_addr = TMZ_PADDR(tz);
+ txq->tx_ring = (struct ngbe_tx_desc *)TMZ_VADDR(tz);
+
+ /* Allocate software ring */
+ txq->sw_ring = rte_zmalloc_socket("txq->sw_ring",
+ sizeof(struct ngbe_tx_entry) * nb_desc,
+ RTE_CACHE_LINE_SIZE, socket_id);
+ if (txq->sw_ring == NULL) {
+ ngbe_tx_queue_release(txq);
+ return -ENOMEM;
+ }
+ PMD_INIT_LOG(DEBUG, "sw_ring=%p hw_ring=%p dma_addr=0x%" PRIx64,
+ txq->sw_ring, txq->tx_ring, txq->tx_ring_phys_addr);
+
+ txq->ops->reset(txq);
+
+ dev->data->tx_queues[queue_idx] = txq;
+
+ return 0;
+}
+
/**
* ngbe_free_sc_cluster - free the not-yet-completed scattered cluster
*
diff --git a/drivers/net/ngbe/ngbe_rxtx.h b/drivers/net/ngbe/ngbe_rxtx.h
index 92b9a9fd1b..0f2c185dbf 100644
--- a/drivers/net/ngbe/ngbe_rxtx.h
+++ b/drivers/net/ngbe/ngbe_rxtx.h
@@ -43,11 +43,41 @@ struct ngbe_rx_desc {
} qw1; /* also as r.hdr_addr */
};
+/*****************************************************************************
+ * Transmit Descriptor
+ *****************************************************************************/
+/**
+ * Transmit Context Descriptor (NGBE_TXD_TYP=CTXT)
+ **/
+struct ngbe_tx_ctx_desc {
+ rte_le32_t dw0; /* w.vlan_macip_lens */
+ rte_le32_t dw1; /* w.seqnum_seed */
+ rte_le32_t dw2; /* w.type_tucmd_mlhl */
+ rte_le32_t dw3; /* w.mss_l4len_idx */
+};
+
+/* @ngbe_tx_ctx_desc.dw3 */
+#define NGBE_TXD_DD MS(0, 0x1) /* descriptor done */
+
+/**
+ * Transmit Data Descriptor (NGBE_TXD_TYP=DATA)
+ **/
+struct ngbe_tx_desc {
+ rte_le64_t qw0; /* r.buffer_addr , w.reserved */
+ rte_le32_t dw2; /* r.cmd_type_len, w.nxtseq_seed */
+ rte_le32_t dw3; /* r.olinfo_status, w.status */
+};
+
#define RTE_PMD_NGBE_RX_MAX_BURST 32
#define RX_RING_SZ ((NGBE_RING_DESC_MAX + RTE_PMD_NGBE_RX_MAX_BURST) * \
sizeof(struct ngbe_rx_desc))
+#define NGBE_TX_MAX_SEG 40
+
+#ifndef DEFAULT_TX_FREE_THRESH
+#define DEFAULT_TX_FREE_THRESH 32
+#endif
/**
* Structure associated with each descriptor of the Rx ring of a Rx queue.
@@ -60,6 +90,15 @@ struct ngbe_scattered_rx_entry {
struct rte_mbuf *fbuf; /**< First segment of the fragmented packet. */
};
+/**
+ * Structure associated with each descriptor of the Tx ring of a Tx queue.
+ */
+struct ngbe_tx_entry {
+ struct rte_mbuf *mbuf; /**< mbuf associated with Tx desc, if any. */
+ uint16_t next_id; /**< Index of next descriptor in ring. */
+ uint16_t last_id; /**< Index of last scattered descriptor. */
+};
+
/**
* Structure associated with each Rx queue.
*/
@@ -93,4 +132,60 @@ struct ngbe_rx_queue {
struct rte_mbuf *rx_stage[RTE_PMD_NGBE_RX_MAX_BURST * 2];
};
+/**
+ * NGBE CTX Constants
+ */
+enum ngbe_ctx_num {
+ NGBE_CTX_0 = 0, /**< CTX0 */
+ NGBE_CTX_1 = 1, /**< CTX1 */
+ NGBE_CTX_NUM = 2, /**< CTX NUMBER */
+};
+
+/**
+ * Structure to check if new context need be built
+ */
+struct ngbe_ctx_info {
+ uint64_t flags; /**< ol_flags for context build. */
+};
+
+/**
+ * Structure associated with each Tx queue.
+ */
+struct ngbe_tx_queue {
+ /** Tx ring virtual address. */
+ volatile struct ngbe_tx_desc *tx_ring;
+ uint64_t tx_ring_phys_addr; /**< Tx ring DMA address. */
+ struct ngbe_tx_entry *sw_ring; /**< address of SW ring for scalar PMD.*/
+ volatile uint32_t *tdt_reg_addr; /**< Address of TDT register. */
+ volatile uint32_t *tdc_reg_addr; /**< Address of TDC register. */
+ uint16_t nb_tx_desc; /**< number of Tx descriptors. */
+ uint16_t tx_tail; /**< current value of TDT reg. */
+ /**< Start freeing Tx buffers if there are fewer free descriptors than
+ * this value.
+ */
+ uint16_t tx_free_thresh;
+ /** Index to last Tx descriptor to have been cleaned. */
+ uint16_t last_desc_cleaned;
+ /** Total number of Tx descriptors ready to be allocated. */
+ uint16_t nb_tx_free;
+ uint16_t tx_next_dd; /**< next desc to scan for DD bit */
+ uint16_t queue_id; /**< Tx queue index. */
+ uint16_t reg_idx; /**< Tx queue register index. */
+ uint16_t port_id; /**< Device port identifier. */
+ uint8_t pthresh; /**< Prefetch threshold register. */
+ uint8_t hthresh; /**< Host threshold register. */
+ uint8_t wthresh; /**< Write-back threshold reg. */
+ uint32_t ctx_curr; /**< Hardware context states. */
+ /** Hardware context0 history. */
+ struct ngbe_ctx_info ctx_cache[NGBE_CTX_NUM];
+ const struct ngbe_txq_ops *ops; /**< txq ops */
+ uint8_t tx_deferred_start; /**< not in global dev start. */
+};
+
+struct ngbe_txq_ops {
+ void (*release_mbufs)(struct ngbe_tx_queue *txq);
+ void (*free_swring)(struct ngbe_tx_queue *txq);
+ void (*reset)(struct ngbe_tx_queue *txq);
+};
+
#endif /* _NGBE_RXTX_H_ */
--
2.21.0.windows.1
* [dpdk-dev] [PATCH v6 14/19] net/ngbe: add simple Rx flow
2021-06-17 10:59 [dpdk-dev] [PATCH v6 00/19] net: ngbe PMD Jiawen Wu
` (12 preceding siblings ...)
2021-06-17 10:59 ` [dpdk-dev] [PATCH v6 13/19] net/ngbe: add Tx " Jiawen Wu
@ 2021-06-17 11:00 ` Jiawen Wu
2021-07-02 16:42 ` Andrew Rybchenko
2021-06-17 11:00 ` [dpdk-dev] [PATCH v6 15/19] net/ngbe: add simple Tx flow Jiawen Wu
` (4 subsequent siblings)
18 siblings, 1 reply; 42+ messages in thread
From: Jiawen Wu @ 2021-06-17 11:00 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu
Initialize the device with the simplest receive function.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
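For context, an illustrative polling loop over this simple Rx path;
rte_eth_rx_burst() dispatches to ngbe_recv_pkts(), which replenishes
descriptors and only writes RDT once more than rx_free_thresh descriptors are
held back. The burst size and queue IDs are arbitrary:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32	/* matches RTE_PMD_NGBE_RX_MAX_BURST */

static void
example_rx_poll(uint16_t port_id, uint16_t queue_id)
{
	struct rte_mbuf *pkts[BURST_SIZE];
	uint16_t nb_rx, i;

	for (;;) {
		nb_rx = rte_eth_rx_burst(port_id, queue_id, pkts, BURST_SIZE);
		for (i = 0; i < nb_rx; i++) {
			/* each mbuf is a single segment of pkt_len bytes */
			rte_pktmbuf_free(pkts[i]);
		}
	}
}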
drivers/net/ngbe/ngbe_ethdev.c | 1 +
drivers/net/ngbe/ngbe_ethdev.h | 3 +
drivers/net/ngbe/ngbe_rxtx.c | 168 +++++++++++++++++++++++++++++++++
drivers/net/ngbe/ngbe_rxtx.h | 81 ++++++++++++++++
4 files changed, 253 insertions(+)
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index d6f93cfe46..269186acc0 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -110,6 +110,7 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
PMD_INIT_FUNC_TRACE();
eth_dev->dev_ops = &ngbe_eth_dev_ops;
+ eth_dev->rx_pkt_burst = &ngbe_recv_pkts;
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
return 0;
diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
index 131671c313..8fb7c8a19b 100644
--- a/drivers/net/ngbe/ngbe_ethdev.h
+++ b/drivers/net/ngbe/ngbe_ethdev.h
@@ -72,6 +72,9 @@ int ngbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
uint16_t nb_tx_desc, unsigned int socket_id,
const struct rte_eth_txconf *tx_conf);
+uint16_t ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+ uint16_t nb_pkts);
+
int
ngbe_dev_link_update_share(struct rte_eth_dev *dev,
int wait_to_complete);
diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c
index da9150b2f1..f97fceaf7c 100644
--- a/drivers/net/ngbe/ngbe_rxtx.c
+++ b/drivers/net/ngbe/ngbe_rxtx.c
@@ -15,6 +15,174 @@
#include "ngbe_ethdev.h"
#include "ngbe_rxtx.h"
+/*
+ * Prefetch a cache line into all cache levels.
+ */
+#define rte_ngbe_prefetch(p) rte_prefetch0(p)
+
+/*********************************************************************
+ *
+ * Rx functions
+ *
+ **********************************************************************/
+uint16_t
+ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+ uint16_t nb_pkts)
+{
+ struct ngbe_rx_queue *rxq;
+ volatile struct ngbe_rx_desc *rx_ring;
+ volatile struct ngbe_rx_desc *rxdp;
+ struct ngbe_rx_entry *sw_ring;
+ struct ngbe_rx_entry *rxe;
+ struct rte_mbuf *rxm;
+ struct rte_mbuf *nmb;
+ struct ngbe_rx_desc rxd;
+ uint64_t dma_addr;
+ uint32_t staterr;
+ uint16_t pkt_len;
+ uint16_t rx_id;
+ uint16_t nb_rx;
+ uint16_t nb_hold;
+
+ nb_rx = 0;
+ nb_hold = 0;
+ rxq = rx_queue;
+ rx_id = rxq->rx_tail;
+ rx_ring = rxq->rx_ring;
+ sw_ring = rxq->sw_ring;
+ struct rte_eth_dev *dev = &rte_eth_devices[rxq->port_id];
+ while (nb_rx < nb_pkts) {
+ /*
+ * The order of operations here is important as the DD status
+ * bit must not be read after any other descriptor fields.
+ * rx_ring and rxdp are pointing to volatile data so the order
+ * of accesses cannot be reordered by the compiler. If they were
+ * not volatile, they could be reordered which could lead to
+ * using invalid descriptor fields when read from rxd.
+ */
+ rxdp = &rx_ring[rx_id];
+ staterr = rxdp->qw1.lo.status;
+ if (!(staterr & rte_cpu_to_le_32(NGBE_RXD_STAT_DD)))
+ break;
+ rxd = *rxdp;
+
+ /*
+ * End of packet.
+ *
+ * If the NGBE_RXD_STAT_EOP flag is not set, the Rx packet
+ * is likely to be invalid and to be dropped by the various
+ * validation checks performed by the network stack.
+ *
+ * Allocate a new mbuf to replenish the RX ring descriptor.
+ * If the allocation fails:
+ * - arrange for that Rx descriptor to be the first one
+ * being parsed the next time the receive function is
+ * invoked [on the same queue].
+ *
+ * - Stop parsing the Rx ring and return immediately.
+ *
+ * This policy does not drop the packet received in the Rx
+ * descriptor for which the allocation of a new mbuf failed.
+ * Thus, it allows that packet to be retrieved later if
+ * mbufs have been freed in the meantime.
+ * As a side effect, holding Rx descriptors instead of
+ * systematically giving them back to the NIC may lead to
+ * Rx ring exhaustion situations.
+ * However, the NIC can gracefully prevent such situations
+ * from happening by sending specific "back-pressure" flow
+ * control frames to its peer(s).
+ */
+ PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_id=%u "
+ "ext_err_stat=0x%08x pkt_len=%u",
+ (uint16_t)rxq->port_id, (uint16_t)rxq->queue_id,
+ (uint16_t)rx_id, (uint32_t)staterr,
+ (uint16_t)rte_le_to_cpu_16(rxd.qw1.hi.len));
+
+ nmb = rte_mbuf_raw_alloc(rxq->mb_pool);
+ if (nmb == NULL) {
+ PMD_RX_LOG(DEBUG, "Rx mbuf alloc failed port_id=%u "
+ "queue_id=%u", (uint16_t)rxq->port_id,
+ (uint16_t)rxq->queue_id);
+ dev->data->rx_mbuf_alloc_failed++;
+ break;
+ }
+
+ nb_hold++;
+ rxe = &sw_ring[rx_id];
+ rx_id++;
+ if (rx_id == rxq->nb_rx_desc)
+ rx_id = 0;
+
+ /* Prefetch next mbuf while processing current one. */
+ rte_ngbe_prefetch(sw_ring[rx_id].mbuf);
+
+ /*
+ * When next Rx descriptor is on a cache-line boundary,
+ * prefetch the next 4 Rx descriptors and the next 8 pointers
+ * to mbufs.
+ */
+ if ((rx_id & 0x3) == 0) {
+ rte_ngbe_prefetch(&rx_ring[rx_id]);
+ rte_ngbe_prefetch(&sw_ring[rx_id]);
+ }
+
+ rxm = rxe->mbuf;
+ rxe->mbuf = nmb;
+ dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
+ NGBE_RXD_HDRADDR(rxdp, 0);
+ NGBE_RXD_PKTADDR(rxdp, dma_addr);
+
+ /*
+ * Initialize the returned mbuf.
+ * Set up generic mbuf fields:
+ * - number of segments,
+ * - next segment,
+ * - packet length,
+ * - Rx port identifier.
+ */
+ pkt_len = (uint16_t)(rte_le_to_cpu_16(rxd.qw1.hi.len));
+ rxm->data_off = RTE_PKTMBUF_HEADROOM;
+ rte_packet_prefetch((char *)rxm->buf_addr + rxm->data_off);
+ rxm->nb_segs = 1;
+ rxm->next = NULL;
+ rxm->pkt_len = pkt_len;
+ rxm->data_len = pkt_len;
+ rxm->port = rxq->port_id;
+
+ /*
+ * Store the mbuf address into the next entry of the array
+ * of returned packets.
+ */
+ rx_pkts[nb_rx++] = rxm;
+ }
+ rxq->rx_tail = rx_id;
+
+ /*
+ * If the number of free Rx descriptors is greater than the Rx free
+ * threshold of the queue, advance the Receive Descriptor Tail (RDT)
+ * register.
+ * Update the RDT with the value of the last processed Rx descriptor
+ * minus 1, to guarantee that the RDT register is never equal to the
+ * RDH register, which creates a "full" ring situation from the
+ * hardware point of view...
+ */
+ nb_hold = (uint16_t)(nb_hold + rxq->nb_rx_hold);
+ if (nb_hold > rxq->rx_free_thresh) {
+ PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_tail=%u "
+ "nb_hold=%u nb_rx=%u",
+ (uint16_t)rxq->port_id, (uint16_t)rxq->queue_id,
+ (uint16_t)rx_id, (uint16_t)nb_hold,
+ (uint16_t)nb_rx);
+ rx_id = (uint16_t)((rx_id == 0) ?
+ (rxq->nb_rx_desc - 1) : (rx_id - 1));
+ ngbe_set32(rxq->rdt_reg_addr, rx_id);
+ nb_hold = 0;
+ }
+ rxq->nb_rx_hold = nb_hold;
+ return nb_rx;
+}
+
+
/*********************************************************************
*
* Queue management functions
diff --git a/drivers/net/ngbe/ngbe_rxtx.h b/drivers/net/ngbe/ngbe_rxtx.h
index 0f2c185dbf..1c8fd76f12 100644
--- a/drivers/net/ngbe/ngbe_rxtx.h
+++ b/drivers/net/ngbe/ngbe_rxtx.h
@@ -43,6 +43,85 @@ struct ngbe_rx_desc {
} qw1; /* also as r.hdr_addr */
};
+/* @ngbe_rx_desc.qw0 */
+#define NGBE_RXD_PKTADDR(rxd, v) \
+ (((volatile __le64 *)(rxd))[0] = cpu_to_le64(v))
+
+/* @ngbe_rx_desc.qw1 */
+#define NGBE_RXD_HDRADDR(rxd, v) \
+ (((volatile __le64 *)(rxd))[1] = cpu_to_le64(v))
+
+/* @ngbe_rx_desc.dw0 */
+#define NGBE_RXD_RSSTYPE(dw) RS(dw, 0, 0xF)
+#define NGBE_RSSTYPE_NONE 0
+#define NGBE_RSSTYPE_IPV4TCP 1
+#define NGBE_RSSTYPE_IPV4 2
+#define NGBE_RSSTYPE_IPV6TCP 3
+#define NGBE_RSSTYPE_IPV4SCTP 4
+#define NGBE_RSSTYPE_IPV6 5
+#define NGBE_RSSTYPE_IPV6SCTP 6
+#define NGBE_RSSTYPE_IPV4UDP 7
+#define NGBE_RSSTYPE_IPV6UDP 8
+#define NGBE_RSSTYPE_FDIR 15
+#define NGBE_RXD_SECTYPE(dw) RS(dw, 4, 0x3)
+#define NGBE_RXD_SECTYPE_NONE LS(0, 4, 0x3)
+#define NGBE_RXD_SECTYPE_IPSECESP LS(2, 4, 0x3)
+#define NGBE_RXD_SECTYPE_IPSECAH LS(3, 4, 0x3)
+#define NGBE_RXD_TPIDSEL(dw) RS(dw, 6, 0x7)
+#define NGBE_RXD_PTID(dw) RS(dw, 9, 0xFF)
+#define NGBE_RXD_RSCCNT(dw) RS(dw, 17, 0xF)
+#define NGBE_RXD_HDRLEN(dw) RS(dw, 21, 0x3FF)
+#define NGBE_RXD_SPH MS(31, 0x1)
+
+/* @ngbe_rx_desc.dw1 */
+/** bit 0-31, as rss hash when **/
+#define NGBE_RXD_RSSHASH(rxd) ((rxd)->qw0.dw1)
+
+/** bit 0-31, as ip csum when **/
+#define NGBE_RXD_IPID(rxd) ((rxd)->qw0.hi.ipid)
+#define NGBE_RXD_CSUM(rxd) ((rxd)->qw0.hi.csum)
+
+/* @ngbe_rx_desc.dw2 */
+#define NGBE_RXD_STATUS(rxd) ((rxd)->qw1.lo.status)
+/** bit 0-1 **/
+#define NGBE_RXD_STAT_DD MS(0, 0x1) /* Descriptor Done */
+#define NGBE_RXD_STAT_EOP MS(1, 0x1) /* End of Packet */
+/** bit 2-31, when EOP=0 **/
+#define NGBE_RXD_NEXTP_RESV(v) LS(v, 2, 0x3)
+#define NGBE_RXD_NEXTP(dw) RS(dw, 4, 0xFFFF) /* Next Descriptor */
+/** bit 2-31, when EOP=1 **/
+#define NGBE_RXD_PKT_CLS_MASK MS(2, 0x7) /* Packet Class */
+#define NGBE_RXD_PKT_CLS_TC_RSS LS(0, 2, 0x7) /* RSS Hash */
+#define NGBE_RXD_PKT_CLS_FLM LS(1, 2, 0x7) /* FDir Match */
+#define NGBE_RXD_PKT_CLS_SYN LS(2, 2, 0x7) /* TCP Sync */
+#define NGBE_RXD_PKT_CLS_5TUPLE LS(3, 2, 0x7) /* 5 Tuple */
+#define NGBE_RXD_PKT_CLS_ETF LS(4, 2, 0x7) /* Ethertype Filter */
+#define NGBE_RXD_STAT_VLAN MS(5, 0x1) /* IEEE VLAN Packet */
+#define NGBE_RXD_STAT_UDPCS MS(6, 0x1) /* UDP xsum calculated */
+#define NGBE_RXD_STAT_L4CS MS(7, 0x1) /* L4 xsum calculated */
+#define NGBE_RXD_STAT_IPCS MS(8, 0x1) /* IP xsum calculated */
+#define NGBE_RXD_STAT_PIF MS(9, 0x1) /* Non-unicast address */
+#define NGBE_RXD_STAT_EIPCS MS(10, 0x1) /* Encap IP xsum calculated */
+#define NGBE_RXD_STAT_VEXT MS(11, 0x1) /* Multi-VLAN */
+#define NGBE_RXD_STAT_IPV6EX MS(12, 0x1) /* IPv6 with option header */
+#define NGBE_RXD_STAT_LLINT MS(13, 0x1) /* Pkt caused LLI */
+#define NGBE_RXD_STAT_1588 MS(14, 0x1) /* IEEE1588 Time Stamp */
+#define NGBE_RXD_STAT_SECP MS(15, 0x1) /* Security Processing */
+#define NGBE_RXD_STAT_LB MS(16, 0x1) /* Loopback Status */
+/*** bit 17-30, when PTYPE=IP ***/
+#define NGBE_RXD_STAT_BMC MS(17, 0x1) /* PTYPE=IP, BMC status */
+#define NGBE_RXD_ERR_HBO MS(23, 0x1) /* Header Buffer Overflow */
+#define NGBE_RXD_ERR_EIPCS MS(26, 0x1) /* Encap IP header error */
+#define NGBE_RXD_ERR_SECERR MS(27, 0x1) /* macsec or ipsec error */
+#define NGBE_RXD_ERR_RXE MS(29, 0x1) /* Any MAC Error */
+#define NGBE_RXD_ERR_L4CS MS(30, 0x1) /* TCP/UDP xsum error */
+#define NGBE_RXD_ERR_IPCS MS(31, 0x1) /* IP xsum error */
+#define NGBE_RXD_ERR_CSUM(dw) RS(dw, 30, 0x3)
+
+/* @ngbe_rx_desc.dw3 */
+#define NGBE_RXD_LENGTH(rxd) ((rxd)->qw1.hi.len)
+#define NGBE_RXD_VLAN(rxd) ((rxd)->qw1.hi.tag)
+
/*****************************************************************************
* Transmit Descriptor
*****************************************************************************/
@@ -73,6 +152,8 @@ struct ngbe_tx_desc {
#define RX_RING_SZ ((NGBE_RING_DESC_MAX + RTE_PMD_NGBE_RX_MAX_BURST) * \
sizeof(struct ngbe_rx_desc))
+#define rte_packet_prefetch(p) rte_prefetch1(p)
+
#define NGBE_TX_MAX_SEG 40
#ifndef DEFAULT_TX_FREE_THRESH
--
2.21.0.windows.1
* [dpdk-dev] [PATCH v6 15/19] net/ngbe: add simple Tx flow
2021-06-17 10:59 [dpdk-dev] [PATCH v6 00/19] net: ngbe PMD Jiawen Wu
` (13 preceding siblings ...)
2021-06-17 11:00 ` [dpdk-dev] [PATCH v6 14/19] net/ngbe: add simple Rx flow Jiawen Wu
@ 2021-06-17 11:00 ` Jiawen Wu
2021-07-02 16:45 ` Andrew Rybchenko
2021-06-17 11:00 ` [dpdk-dev] [PATCH v6 16/19] net/ngbe: add device start and stop operations Jiawen Wu
` (3 subsequent siblings)
18 siblings, 1 reply; 42+ messages in thread
From: Jiawen Wu @ 2021-06-17 11:00 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu
Initialize the device with the simplest transmit functions.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
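For context, an illustrative caller of this simple Tx path;
ngbe_xmit_pkts_simple() fills one data descriptor per mbuf and never builds a
context descriptor, so the sketch assumes single-segment mbufs with no offload
requests and drops whatever the ring could not accept:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

static void
example_tx_burst(uint16_t port_id, uint16_t queue_id,
		 struct rte_mbuf **pkts, uint16_t nb_pkts)
{
	/* tx_xmit_pkts() only accepts up to nb_tx_free packets per call */
	uint16_t sent = rte_eth_tx_burst(port_id, queue_id, pkts, nb_pkts);

	/* drop the unsent tail instead of retrying */
	while (sent < nb_pkts)
		rte_pktmbuf_free(pkts[sent++]);
}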
drivers/net/ngbe/ngbe_ethdev.c | 1 +
drivers/net/ngbe/ngbe_ethdev.h | 3 +
drivers/net/ngbe/ngbe_rxtx.c | 228 +++++++++++++++++++++++++++++++++
drivers/net/ngbe/ngbe_rxtx.h | 27 ++++
4 files changed, 259 insertions(+)
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 269186acc0..6b4d5ac65b 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -111,6 +111,7 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
eth_dev->dev_ops = &ngbe_eth_dev_ops;
eth_dev->rx_pkt_burst = &ngbe_recv_pkts;
+ eth_dev->tx_pkt_burst = &ngbe_xmit_pkts_simple;
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
return 0;
diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
index 8fb7c8a19b..c52cac2ca1 100644
--- a/drivers/net/ngbe/ngbe_ethdev.h
+++ b/drivers/net/ngbe/ngbe_ethdev.h
@@ -75,6 +75,9 @@ int ngbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
uint16_t ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts);
+uint16_t ngbe_xmit_pkts_simple(void *tx_queue, struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts);
+
int
ngbe_dev_link_update_share(struct rte_eth_dev *dev,
int wait_to_complete);
diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c
index f97fceaf7c..6dde996659 100644
--- a/drivers/net/ngbe/ngbe_rxtx.c
+++ b/drivers/net/ngbe/ngbe_rxtx.c
@@ -20,6 +20,234 @@
*/
#define rte_ngbe_prefetch(p) rte_prefetch0(p)
+/*********************************************************************
+ *
+ * Tx functions
+ *
+ **********************************************************************/
+
+/*
+ * Check for descriptors with their DD bit set and free mbufs.
+ * Return the total number of buffers freed.
+ */
+static __rte_always_inline int
+ngbe_tx_free_bufs(struct ngbe_tx_queue *txq)
+{
+ struct ngbe_tx_entry *txep;
+ uint32_t status;
+ int i, nb_free = 0;
+ struct rte_mbuf *m, *free[RTE_NGBE_TX_MAX_FREE_BUF_SZ];
+
+ /* check DD bit on threshold descriptor */
+ status = txq->tx_ring[txq->tx_next_dd].dw3;
+ if (!(status & rte_cpu_to_le_32(NGBE_TXD_DD))) {
+ if (txq->nb_tx_free >> 1 < txq->tx_free_thresh)
+ ngbe_set32_masked(txq->tdc_reg_addr,
+ NGBE_TXCFG_FLUSH, NGBE_TXCFG_FLUSH);
+ return 0;
+ }
+
+ /*
+ * first buffer to free from S/W ring is at index
+ * tx_next_dd - (tx_free_thresh-1)
+ */
+ txep = &txq->sw_ring[txq->tx_next_dd - (txq->tx_free_thresh - 1)];
+ for (i = 0; i < txq->tx_free_thresh; ++i, ++txep) {
+ /* free buffers one at a time */
+ m = rte_pktmbuf_prefree_seg(txep->mbuf);
+ txep->mbuf = NULL;
+
+ if (unlikely(m == NULL))
+ continue;
+
+ if (nb_free >= RTE_NGBE_TX_MAX_FREE_BUF_SZ ||
+ (nb_free > 0 && m->pool != free[0]->pool)) {
+ rte_mempool_put_bulk(free[0]->pool,
+ (void **)free, nb_free);
+ nb_free = 0;
+ }
+
+ free[nb_free++] = m;
+ }
+
+ if (nb_free > 0)
+ rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
+
+ /* buffers were freed, update counters */
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_free_thresh);
+ txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_free_thresh);
+ if (txq->tx_next_dd >= txq->nb_tx_desc)
+ txq->tx_next_dd = (uint16_t)(txq->tx_free_thresh - 1);
+
+ return txq->tx_free_thresh;
+}
+
+/* Populate 4 descriptors with data from 4 mbufs */
+static inline void
+tx4(volatile struct ngbe_tx_desc *txdp, struct rte_mbuf **pkts)
+{
+ uint64_t buf_dma_addr;
+ uint32_t pkt_len;
+ int i;
+
+ for (i = 0; i < 4; ++i, ++txdp, ++pkts) {
+ buf_dma_addr = rte_mbuf_data_iova(*pkts);
+ pkt_len = (*pkts)->data_len;
+
+ /* write data to descriptor */
+ txdp->qw0 = rte_cpu_to_le_64(buf_dma_addr);
+ txdp->dw2 = cpu_to_le32(NGBE_TXD_FLAGS |
+ NGBE_TXD_DATLEN(pkt_len));
+ txdp->dw3 = cpu_to_le32(NGBE_TXD_PAYLEN(pkt_len));
+
+ rte_prefetch0(&(*pkts)->pool);
+ }
+}
+
+/* Populate 1 descriptor with data from 1 mbuf */
+static inline void
+tx1(volatile struct ngbe_tx_desc *txdp, struct rte_mbuf **pkts)
+{
+ uint64_t buf_dma_addr;
+ uint32_t pkt_len;
+
+ buf_dma_addr = rte_mbuf_data_iova(*pkts);
+ pkt_len = (*pkts)->data_len;
+
+ /* write data to descriptor */
+ txdp->qw0 = cpu_to_le64(buf_dma_addr);
+ txdp->dw2 = cpu_to_le32(NGBE_TXD_FLAGS |
+ NGBE_TXD_DATLEN(pkt_len));
+ txdp->dw3 = cpu_to_le32(NGBE_TXD_PAYLEN(pkt_len));
+
+ rte_prefetch0(&(*pkts)->pool);
+}
+
+/*
+ * Fill H/W descriptor ring with mbuf data.
+ * Copy mbuf pointers to the S/W ring.
+ */
+static inline void
+ngbe_tx_fill_hw_ring(struct ngbe_tx_queue *txq, struct rte_mbuf **pkts,
+ uint16_t nb_pkts)
+{
+ volatile struct ngbe_tx_desc *txdp = &txq->tx_ring[txq->tx_tail];
+ struct ngbe_tx_entry *txep = &txq->sw_ring[txq->tx_tail];
+ const int N_PER_LOOP = 4;
+ const int N_PER_LOOP_MASK = N_PER_LOOP - 1;
+ int mainpart, leftover;
+ int i, j;
+
+ /*
+ * Process most of the packets in chunks of N pkts. Any
+ * leftover packets will get processed one at a time.
+ */
+ mainpart = (nb_pkts & ((uint32_t)~N_PER_LOOP_MASK));
+ leftover = (nb_pkts & ((uint32_t)N_PER_LOOP_MASK));
+ for (i = 0; i < mainpart; i += N_PER_LOOP) {
+ /* Copy N mbuf pointers to the S/W ring */
+ for (j = 0; j < N_PER_LOOP; ++j)
+ (txep + i + j)->mbuf = *(pkts + i + j);
+ tx4(txdp + i, pkts + i);
+ }
+
+ if (unlikely(leftover > 0)) {
+ for (i = 0; i < leftover; ++i) {
+ (txep + mainpart + i)->mbuf = *(pkts + mainpart + i);
+ tx1(txdp + mainpart + i, pkts + mainpart + i);
+ }
+ }
+}
+
+static inline uint16_t
+tx_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts)
+{
+ struct ngbe_tx_queue *txq = (struct ngbe_tx_queue *)tx_queue;
+ uint16_t n = 0;
+
+ /*
+ * Begin scanning the H/W ring for done descriptors when the
+ * number of available descriptors drops below tx_free_thresh.
+ * For each done descriptor, free the associated buffer.
+ */
+ if (txq->nb_tx_free < txq->tx_free_thresh)
+ ngbe_tx_free_bufs(txq);
+
+ /* Only use descriptors that are available */
+ nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
+ if (unlikely(nb_pkts == 0))
+ return 0;
+
+ /* Use exactly nb_pkts descriptors */
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
+
+ /*
+ * At this point, we know there are enough descriptors in the
+ * ring to transmit all the packets. This assumes that each
+ * mbuf contains a single segment, and that no new offloads
+ * are expected, which would require a new context descriptor.
+ */
+
+ /*
+ * See if we're going to wrap-around. If so, handle the top
+ * of the descriptor ring first, then do the bottom. If not,
+ * the processing looks just like the "bottom" part anyway...
+ */
+ if ((txq->tx_tail + nb_pkts) > txq->nb_tx_desc) {
+ n = (uint16_t)(txq->nb_tx_desc - txq->tx_tail);
+ ngbe_tx_fill_hw_ring(txq, tx_pkts, n);
+ txq->tx_tail = 0;
+ }
+
+ /* Fill H/W descriptor ring with mbuf data */
+ ngbe_tx_fill_hw_ring(txq, tx_pkts + n, (uint16_t)(nb_pkts - n));
+ txq->tx_tail = (uint16_t)(txq->tx_tail + (nb_pkts - n));
+
+ /*
+ * Check for wrap-around. This would only happen if we used
+ * up to the last descriptor in the ring, no more, no less.
+ */
+ if (txq->tx_tail >= txq->nb_tx_desc)
+ txq->tx_tail = 0;
+
+ PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u tx_tail=%u nb_tx=%u",
+ (uint16_t)txq->port_id, (uint16_t)txq->queue_id,
+ (uint16_t)txq->tx_tail, (uint16_t)nb_pkts);
+
+ /* update tail pointer */
+ rte_wmb();
+ ngbe_set32_relaxed(txq->tdt_reg_addr, txq->tx_tail);
+
+ return nb_pkts;
+}
+
+uint16_t
+ngbe_xmit_pkts_simple(void *tx_queue, struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts)
+{
+ uint16_t nb_tx;
+
+ /* Try to transmit at least chunks of TX_MAX_BURST pkts */
+ if (likely(nb_pkts <= RTE_PMD_NGBE_TX_MAX_BURST))
+ return tx_xmit_pkts(tx_queue, tx_pkts, nb_pkts);
+
+ /* transmit more than the max burst, in chunks of TX_MAX_BURST */
+ nb_tx = 0;
+ while (nb_pkts) {
+ uint16_t ret, n;
+
+ n = (uint16_t)RTE_MIN(nb_pkts, RTE_PMD_NGBE_TX_MAX_BURST);
+ ret = tx_xmit_pkts(tx_queue, &tx_pkts[nb_tx], n);
+ nb_tx = (uint16_t)(nb_tx + ret);
+ nb_pkts = (uint16_t)(nb_pkts - ret);
+ if (ret < n)
+ break;
+ }
+
+ return nb_tx;
+}
+
/*********************************************************************
*
* Rx functions
diff --git a/drivers/net/ngbe/ngbe_rxtx.h b/drivers/net/ngbe/ngbe_rxtx.h
index 1c8fd76f12..616b41a300 100644
--- a/drivers/net/ngbe/ngbe_rxtx.h
+++ b/drivers/net/ngbe/ngbe_rxtx.h
@@ -147,7 +147,34 @@ struct ngbe_tx_desc {
rte_le32_t dw3; /* r.olinfo_status, w.status */
};
+/* @ngbe_tx_desc.dw2 */
+#define NGBE_TXD_DATLEN(v) ((0xFFFF & (v))) /* data buffer length */
+#define NGBE_TXD_1588 ((0x1) << 19) /* IEEE1588 time stamp */
+#define NGBE_TXD_DATA ((0x0) << 20) /* data descriptor */
+#define NGBE_TXD_EOP ((0x1) << 24) /* End of Packet */
+#define NGBE_TXD_FCS ((0x1) << 25) /* Insert FCS */
+#define NGBE_TXD_LINKSEC ((0x1) << 26) /* Insert LinkSec */
+#define NGBE_TXD_ECU ((0x1) << 28) /* forward to ECU */
+#define NGBE_TXD_CNTAG ((0x1) << 29) /* insert CN tag */
+#define NGBE_TXD_VLE ((0x1) << 30) /* insert VLAN tag */
+#define NGBE_TXD_TSE ((0x1) << 31) /* transmit segmentation */
+
+#define NGBE_TXD_FLAGS (NGBE_TXD_FCS | NGBE_TXD_EOP)
+
+/* @ngbe_tx_desc.dw3 */
+#define NGBE_TXD_DD_UNUSED NGBE_TXD_DD
+#define NGBE_TXD_IDX_UNUSED(v) NGBE_TXD_IDX(v)
+#define NGBE_TXD_CC ((0x1) << 7) /* check context */
+#define NGBE_TXD_IPSEC ((0x1) << 8) /* request ipsec offload */
+#define NGBE_TXD_L4CS ((0x1) << 9) /* insert TCP/UDP/SCTP csum */
+#define NGBE_TXD_IPCS ((0x1) << 10) /* insert IPv4 csum */
+#define NGBE_TXD_EIPCS ((0x1) << 11) /* insert outer IP csum */
+#define NGBE_TXD_MNGFLT ((0x1) << 12) /* enable management filter */
+#define NGBE_TXD_PAYLEN(v) ((0x7FFFF & (v)) << 13) /* payload length */
+
+#define RTE_PMD_NGBE_TX_MAX_BURST 32
#define RTE_PMD_NGBE_RX_MAX_BURST 32
+#define RTE_NGBE_TX_MAX_FREE_BUF_SZ 64
#define RX_RING_SZ ((NGBE_RING_DESC_MAX + RTE_PMD_NGBE_RX_MAX_BURST) * \
sizeof(struct ngbe_rx_desc))
--
2.21.0.windows.1
* [dpdk-dev] [PATCH v6 16/19] net/ngbe: add device start and stop operations
2021-06-17 10:59 [dpdk-dev] [PATCH v6 00/19] net: ngbe PMD Jiawen Wu
` (14 preceding siblings ...)
2021-06-17 11:00 ` [dpdk-dev] [PATCH v6 15/19] net/ngbe: add simple Tx flow Jiawen Wu
@ 2021-06-17 11:00 ` Jiawen Wu
2021-07-02 16:52 ` Andrew Rybchenko
2021-06-17 11:00 ` [dpdk-dev] [PATCH v6 17/19] net/ngbe: add Tx queue start and stop Jiawen Wu
` (2 subsequent siblings)
18 siblings, 1 reply; 42+ messages in thread
From: Jiawen Wu @ 2021-06-17 11:00 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu
Set up the MSI-X interrupt, complete the PHY configuration, and set the
device link speed to start the device. Disable interrupts, stop the
hardware, and clear the queues to stop the device.
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
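For context, an illustrative start/stop sequence from the application side;
with the queues from the previous patches configured, these ethdev calls reach
the new ngbe_dev_start()/ngbe_dev_stop() operations (error handling kept
minimal):

#include <rte_ethdev.h>

static int
example_start_stop(uint16_t port_id)
{
	int ret;

	/* completes PHY setup, selects link speed, enables the queues */
	ret = rte_eth_dev_start(port_id);
	if (ret != 0)
		return ret;

	/* ... run traffic ... */

	/* disables interrupts, stops the hardware and clears the queues */
	return rte_eth_dev_stop(port_id);
}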
drivers/net/ngbe/base/ngbe_dummy.h | 16 +
drivers/net/ngbe/base/ngbe_hw.c | 49 +++
drivers/net/ngbe/base/ngbe_hw.h | 4 +
drivers/net/ngbe/base/ngbe_phy.c | 3 +
drivers/net/ngbe/base/ngbe_phy_mvl.c | 64 ++++
drivers/net/ngbe/base/ngbe_phy_mvl.h | 1 +
drivers/net/ngbe/base/ngbe_phy_rtl.c | 58 ++++
drivers/net/ngbe/base/ngbe_phy_rtl.h | 2 +
drivers/net/ngbe/base/ngbe_phy_yt.c | 26 ++
drivers/net/ngbe/base/ngbe_phy_yt.h | 1 +
drivers/net/ngbe/base/ngbe_type.h | 9 +
drivers/net/ngbe/ngbe_ethdev.c | 473 ++++++++++++++++++++++++++-
drivers/net/ngbe/ngbe_ethdev.h | 17 +
drivers/net/ngbe/ngbe_rxtx.c | 59 ++++
14 files changed, 781 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ngbe/base/ngbe_dummy.h b/drivers/net/ngbe/base/ngbe_dummy.h
index 709e01659c..dfc7b13192 100644
--- a/drivers/net/ngbe/base/ngbe_dummy.h
+++ b/drivers/net/ngbe/base/ngbe_dummy.h
@@ -47,6 +47,10 @@ static inline s32 ngbe_mac_reset_hw_dummy(struct ngbe_hw *TUP0)
{
return NGBE_ERR_OPS_DUMMY;
}
+static inline s32 ngbe_mac_start_hw_dummy(struct ngbe_hw *TUP0)
+{
+ return NGBE_ERR_OPS_DUMMY;
+}
static inline s32 ngbe_mac_stop_hw_dummy(struct ngbe_hw *TUP0)
{
return NGBE_ERR_OPS_DUMMY;
@@ -74,6 +78,11 @@ static inline s32 ngbe_mac_check_link_dummy(struct ngbe_hw *TUP0, u32 *TUP1,
{
return NGBE_ERR_OPS_DUMMY;
}
+static inline s32 ngbe_mac_get_link_capabilities_dummy(struct ngbe_hw *TUP0,
+ u32 *TUP1, bool *TUP2)
+{
+ return NGBE_ERR_OPS_DUMMY;
+}
static inline s32 ngbe_mac_set_rar_dummy(struct ngbe_hw *TUP0, u32 TUP1,
u8 *TUP2, u32 TUP3, u32 TUP4)
{
@@ -110,6 +119,10 @@ static inline s32 ngbe_phy_identify_dummy(struct ngbe_hw *TUP0)
{
return NGBE_ERR_OPS_DUMMY;
}
+static inline s32 ngbe_phy_init_hw_dummy(struct ngbe_hw *TUP0)
+{
+ return NGBE_ERR_OPS_DUMMY;
+}
static inline s32 ngbe_phy_reset_hw_dummy(struct ngbe_hw *TUP0)
{
return NGBE_ERR_OPS_DUMMY;
@@ -151,12 +164,14 @@ static inline void ngbe_init_ops_dummy(struct ngbe_hw *hw)
hw->rom.validate_checksum = ngbe_rom_validate_checksum_dummy;
hw->mac.init_hw = ngbe_mac_init_hw_dummy;
hw->mac.reset_hw = ngbe_mac_reset_hw_dummy;
+ hw->mac.start_hw = ngbe_mac_start_hw_dummy;
hw->mac.stop_hw = ngbe_mac_stop_hw_dummy;
hw->mac.get_mac_addr = ngbe_mac_get_mac_addr_dummy;
hw->mac.acquire_swfw_sync = ngbe_mac_acquire_swfw_sync_dummy;
hw->mac.release_swfw_sync = ngbe_mac_release_swfw_sync_dummy;
hw->mac.setup_link = ngbe_mac_setup_link_dummy;
hw->mac.check_link = ngbe_mac_check_link_dummy;
+ hw->mac.get_link_capabilities = ngbe_mac_get_link_capabilities_dummy;
hw->mac.set_rar = ngbe_mac_set_rar_dummy;
hw->mac.clear_rar = ngbe_mac_clear_rar_dummy;
hw->mac.set_vmdq = ngbe_mac_set_vmdq_dummy;
@@ -165,6 +180,7 @@ static inline void ngbe_init_ops_dummy(struct ngbe_hw *hw)
hw->mac.init_thermal_sensor_thresh = ngbe_mac_init_thermal_ssth_dummy;
hw->mac.check_overtemp = ngbe_mac_check_overtemp_dummy;
hw->phy.identify = ngbe_phy_identify_dummy;
+ hw->phy.init_hw = ngbe_phy_init_hw_dummy;
hw->phy.reset_hw = ngbe_phy_reset_hw_dummy;
hw->phy.read_reg = ngbe_phy_read_reg_dummy;
hw->phy.write_reg = ngbe_phy_write_reg_dummy;
diff --git a/drivers/net/ngbe/base/ngbe_hw.c b/drivers/net/ngbe/base/ngbe_hw.c
index 331fc752ff..22ce4e52b9 100644
--- a/drivers/net/ngbe/base/ngbe_hw.c
+++ b/drivers/net/ngbe/base/ngbe_hw.c
@@ -9,6 +9,22 @@
#include "ngbe_mng.h"
#include "ngbe_hw.h"
+/**
+ * ngbe_start_hw - Prepare hardware for Tx/Rx
+ * @hw: pointer to hardware structure
+ *
+ * Starts the hardware.
+ **/
+s32 ngbe_start_hw(struct ngbe_hw *hw)
+{
+ DEBUGFUNC("ngbe_start_hw");
+
+ /* Clear adapter stopped flag */
+ hw->adapter_stopped = false;
+
+ return 0;
+}
+
/**
* ngbe_init_hw - Generic hardware initialization
* @hw: pointer to hardware structure
@@ -27,6 +43,10 @@ s32 ngbe_init_hw(struct ngbe_hw *hw)
/* Reset the hardware */
status = hw->mac.reset_hw(hw);
+ if (status == 0) {
+ /* Start the HW */
+ status = hw->mac.start_hw(hw);
+ }
if (status != 0)
DEBUGOUT("Failed to initialize HW, STATUS = %d\n", status);
@@ -633,6 +653,29 @@ s32 ngbe_check_mac_link_em(struct ngbe_hw *hw, u32 *speed,
return status;
}
+s32 ngbe_get_link_capabilities_em(struct ngbe_hw *hw,
+ u32 *speed,
+ bool *autoneg)
+{
+ s32 status = 0;
+
+ DEBUGFUNC("ngbe_get_link_capabilities_em");
+
+ hw->mac.autoneg = *autoneg;
+
+ switch (hw->sub_device_id) {
+ case NGBE_SUB_DEV_ID_EM_RTL_SGMII:
+ *speed = NGBE_LINK_SPEED_1GB_FULL |
+ NGBE_LINK_SPEED_100M_FULL |
+ NGBE_LINK_SPEED_10M_FULL;
+ break;
+ default:
+ break;
+ }
+
+ return status;
+}
+
s32 ngbe_setup_mac_link_em(struct ngbe_hw *hw,
u32 speed,
bool autoneg_wait_to_complete)
@@ -842,6 +885,7 @@ s32 ngbe_init_ops_pf(struct ngbe_hw *hw)
/* MAC */
mac->init_hw = ngbe_init_hw;
mac->reset_hw = ngbe_reset_hw_em;
+ mac->start_hw = ngbe_start_hw;
mac->get_mac_addr = ngbe_get_mac_addr;
mac->stop_hw = ngbe_stop_hw;
mac->acquire_swfw_sync = ngbe_acquire_swfw_sync;
@@ -855,6 +899,7 @@ s32 ngbe_init_ops_pf(struct ngbe_hw *hw)
mac->clear_vmdq = ngbe_clear_vmdq;
/* Link */
+ mac->get_link_capabilities = ngbe_get_link_capabilities_em;
mac->check_link = ngbe_check_mac_link_em;
mac->setup_link = ngbe_setup_mac_link_em;
@@ -871,6 +916,10 @@ s32 ngbe_init_ops_pf(struct ngbe_hw *hw)
mac->max_rx_queues = NGBE_EM_MAX_RX_QUEUES;
mac->max_tx_queues = NGBE_EM_MAX_TX_QUEUES;
+ mac->default_speeds = NGBE_LINK_SPEED_10M_FULL |
+ NGBE_LINK_SPEED_100M_FULL |
+ NGBE_LINK_SPEED_1GB_FULL;
+
return 0;
}
diff --git a/drivers/net/ngbe/base/ngbe_hw.h b/drivers/net/ngbe/base/ngbe_hw.h
index 1689223168..4fee5735ac 100644
--- a/drivers/net/ngbe/base/ngbe_hw.h
+++ b/drivers/net/ngbe/base/ngbe_hw.h
@@ -14,6 +14,7 @@
#define NGBE_EM_MC_TBL_SIZE 32
s32 ngbe_init_hw(struct ngbe_hw *hw);
+s32 ngbe_start_hw(struct ngbe_hw *hw);
s32 ngbe_reset_hw_em(struct ngbe_hw *hw);
s32 ngbe_stop_hw(struct ngbe_hw *hw);
s32 ngbe_get_mac_addr(struct ngbe_hw *hw, u8 *mac_addr);
@@ -22,6 +23,9 @@ void ngbe_set_lan_id_multi_port(struct ngbe_hw *hw);
s32 ngbe_check_mac_link_em(struct ngbe_hw *hw, u32 *speed,
bool *link_up, bool link_up_wait_to_complete);
+s32 ngbe_get_link_capabilities_em(struct ngbe_hw *hw,
+ u32 *speed,
+ bool *autoneg);
s32 ngbe_setup_mac_link_em(struct ngbe_hw *hw,
u32 speed,
bool autoneg_wait_to_complete);
diff --git a/drivers/net/ngbe/base/ngbe_phy.c b/drivers/net/ngbe/base/ngbe_phy.c
index 471656cc51..0041cc9a90 100644
--- a/drivers/net/ngbe/base/ngbe_phy.c
+++ b/drivers/net/ngbe/base/ngbe_phy.c
@@ -426,16 +426,19 @@ s32 ngbe_init_phy(struct ngbe_hw *hw)
/* Set necessary function pointers based on PHY type */
switch (hw->phy.type) {
case ngbe_phy_rtl:
+ hw->phy.init_hw = ngbe_init_phy_rtl;
hw->phy.check_link = ngbe_check_phy_link_rtl;
hw->phy.setup_link = ngbe_setup_phy_link_rtl;
break;
case ngbe_phy_mvl:
case ngbe_phy_mvl_sfi:
+ hw->phy.init_hw = ngbe_init_phy_mvl;
hw->phy.check_link = ngbe_check_phy_link_mvl;
hw->phy.setup_link = ngbe_setup_phy_link_mvl;
break;
case ngbe_phy_yt8521s:
case ngbe_phy_yt8521s_sfi:
+ hw->phy.init_hw = ngbe_init_phy_yt;
hw->phy.check_link = ngbe_check_phy_link_yt;
hw->phy.setup_link = ngbe_setup_phy_link_yt;
default:
diff --git a/drivers/net/ngbe/base/ngbe_phy_mvl.c b/drivers/net/ngbe/base/ngbe_phy_mvl.c
index a1c055e238..33d21edfce 100644
--- a/drivers/net/ngbe/base/ngbe_phy_mvl.c
+++ b/drivers/net/ngbe/base/ngbe_phy_mvl.c
@@ -48,6 +48,70 @@ s32 ngbe_write_phy_reg_mvl(struct ngbe_hw *hw,
return 0;
}
+s32 ngbe_init_phy_mvl(struct ngbe_hw *hw)
+{
+ s32 ret_val = 0;
+ u16 value = 0;
+ int i;
+
+ DEBUGFUNC("ngbe_init_phy_mvl");
+
+ /* set RGMII Rx/Tx timing control */
+ ngbe_write_phy_reg_mdi(hw, MVL_PAGE_SEL, 0, 2);
+ ngbe_read_phy_reg_mdi(hw, MVL_RGM_CTL2, 0, &value);
+ value &= ~MVL_RGM_CTL2_TTC;
+ value |= MVL_RGM_CTL2_RTC;
+ ngbe_write_phy_reg_mdi(hw, MVL_RGM_CTL2, 0, value);
+
+ hw->phy.write_reg(hw, MVL_CTRL, 0, MVL_CTRL_RESET);
+ for (i = 0; i < 15; i++) {
+ ngbe_read_phy_reg_mdi(hw, MVL_CTRL, 0, &value);
+ if (value & MVL_CTRL_RESET)
+ msleep(1);
+ else
+ break;
+ }
+
+ if (i == 15) {
+ DEBUGOUT("phy reset exceeds maximum waiting period.\n");
+ return NGBE_ERR_TIMEOUT;
+ }
+
+ ret_val = hw->phy.reset_hw(hw);
+ if (ret_val)
+ return ret_val;
+
+ /* set LED2 to interrupt output and INTn active low */
+ ngbe_write_phy_reg_mdi(hw, MVL_PAGE_SEL, 0, 3);
+ ngbe_read_phy_reg_mdi(hw, MVL_LEDTCR, 0, &value);
+ value |= MVL_LEDTCR_INTR_EN;
+ value &= ~(MVL_LEDTCR_INTR_POL);
+ ngbe_write_phy_reg_mdi(hw, MVL_LEDTCR, 0, value);
+
+ if (hw->phy.type == ngbe_phy_mvl_sfi) {
+ hw->phy.read_reg(hw, MVL_CTRL1, 0, &value);
+ value &= ~MVL_CTRL1_INTR_POL;
+ ngbe_write_phy_reg_mdi(hw, MVL_CTRL1, 0, value);
+ }
+
+ /* enable link status change and AN complete interrupts */
+ value = MVL_INTR_EN_ANC | MVL_INTR_EN_LSC;
+ hw->phy.write_reg(hw, MVL_INTR_EN, 0, value);
+
+ /* LED control */
+ ngbe_write_phy_reg_mdi(hw, MVL_PAGE_SEL, 0, 3);
+ ngbe_read_phy_reg_mdi(hw, MVL_LEDFCR, 0, &value);
+ value &= ~(MVL_LEDFCR_CTL0 | MVL_LEDFCR_CTL1);
+ value |= MVL_LEDFCR_CTL0_CONF | MVL_LEDFCR_CTL1_CONF;
+ ngbe_write_phy_reg_mdi(hw, MVL_LEDFCR, 0, value);
+ ngbe_read_phy_reg_mdi(hw, MVL_LEDPCR, 0, &value);
+ value &= ~(MVL_LEDPCR_CTL0 | MVL_LEDPCR_CTL1);
+ value |= MVL_LEDPCR_CTL0_CONF | MVL_LEDPCR_CTL1_CONF;
+ ngbe_write_phy_reg_mdi(hw, MVL_LEDPCR, 0, value);
+
+ return ret_val;
+}
+
s32 ngbe_setup_phy_link_mvl(struct ngbe_hw *hw, u32 speed,
bool autoneg_wait_to_complete)
{
diff --git a/drivers/net/ngbe/base/ngbe_phy_mvl.h b/drivers/net/ngbe/base/ngbe_phy_mvl.h
index a663a429dd..34cb1e838a 100644
--- a/drivers/net/ngbe/base/ngbe_phy_mvl.h
+++ b/drivers/net/ngbe/base/ngbe_phy_mvl.h
@@ -86,6 +86,7 @@ s32 ngbe_read_phy_reg_mvl(struct ngbe_hw *hw, u32 reg_addr, u32 device_type,
u16 *phy_data);
s32 ngbe_write_phy_reg_mvl(struct ngbe_hw *hw, u32 reg_addr, u32 device_type,
u16 phy_data);
+s32 ngbe_init_phy_mvl(struct ngbe_hw *hw);
s32 ngbe_reset_phy_mvl(struct ngbe_hw *hw);
diff --git a/drivers/net/ngbe/base/ngbe_phy_rtl.c b/drivers/net/ngbe/base/ngbe_phy_rtl.c
index 3401fc7e92..1cd576010e 100644
--- a/drivers/net/ngbe/base/ngbe_phy_rtl.c
+++ b/drivers/net/ngbe/base/ngbe_phy_rtl.c
@@ -38,6 +38,64 @@ s32 ngbe_write_phy_reg_rtl(struct ngbe_hw *hw,
return 0;
}
+s32 ngbe_init_phy_rtl(struct ngbe_hw *hw)
+{
+ int i;
+ u16 value = 0;
+
+ /* enable interrupts; only link status change and AN complete are allowed */
+ value = RTL_INER_LSC | RTL_INER_ANC;
+ hw->phy.write_reg(hw, RTL_INER, 0xa42, value);
+
+ hw->phy.read_reg(hw, RTL_INSR, 0xa43, &value);
+
+ for (i = 0; i < 15; i++) {
+ if (!rd32m(hw, NGBE_STAT,
+ NGBE_STAT_GPHY_IN_RST(hw->bus.lan_id)))
+ break;
+
+ msec_delay(10);
+ }
+ if (i == 15) {
+ DEBUGOUT("GPhy reset exceeds maximum times.\n");
+ return NGBE_ERR_PHY_TIMEOUT;
+ }
+
+ for (i = 0; i < 1000; i++) {
+ hw->phy.read_reg(hw, RTL_INSR, 0xa43, &value);
+ if (value & RTL_INSR_ACCESS)
+ break;
+ }
+
+ hw->phy.write_reg(hw, RTL_SCR, 0xa46, RTL_SCR_EFUSE);
+ for (i = 0; i < 1000; i++) {
+ hw->phy.read_reg(hw, RTL_INSR, 0xa43, &value);
+ if (value & RTL_INSR_ACCESS)
+ break;
+ }
+ if (i == 1000)
+ return NGBE_ERR_PHY_TIMEOUT;
+
+ hw->phy.write_reg(hw, RTL_SCR, 0xa46, RTL_SCR_EXTINI);
+ for (i = 0; i < 1000; i++) {
+ hw->phy.read_reg(hw, RTL_INSR, 0xa43, &value);
+ if (value & RTL_INSR_ACCESS)
+ break;
+ }
+ if (i == 1000)
+ return NGBE_ERR_PHY_TIMEOUT;
+
+ for (i = 0; i < 1000; i++) {
+ hw->phy.read_reg(hw, RTL_GSR, 0xa42, &value);
+ if ((value & RTL_GSR_ST) == RTL_GSR_ST_LANON)
+ break;
+ }
+ if (i == 1000)
+ return NGBE_ERR_PHY_TIMEOUT;
+
+ return 0;
+}
+
/**
* ngbe_setup_phy_link_rtl - Set and restart auto-neg
* @hw: pointer to hardware structure
diff --git a/drivers/net/ngbe/base/ngbe_phy_rtl.h b/drivers/net/ngbe/base/ngbe_phy_rtl.h
index e8bc4a1bd7..e6e7df5254 100644
--- a/drivers/net/ngbe/base/ngbe_phy_rtl.h
+++ b/drivers/net/ngbe/base/ngbe_phy_rtl.h
@@ -80,6 +80,8 @@ s32 ngbe_write_phy_reg_rtl(struct ngbe_hw *hw, u32 reg_addr, u32 device_type,
s32 ngbe_setup_phy_link_rtl(struct ngbe_hw *hw,
u32 speed, bool autoneg_wait_to_complete);
+
+s32 ngbe_init_phy_rtl(struct ngbe_hw *hw);
s32 ngbe_reset_phy_rtl(struct ngbe_hw *hw);
s32 ngbe_check_phy_link_rtl(struct ngbe_hw *hw,
u32 *speed, bool *link_up);
diff --git a/drivers/net/ngbe/base/ngbe_phy_yt.c b/drivers/net/ngbe/base/ngbe_phy_yt.c
index f518dc0af6..94d3430fa4 100644
--- a/drivers/net/ngbe/base/ngbe_phy_yt.c
+++ b/drivers/net/ngbe/base/ngbe_phy_yt.c
@@ -98,6 +98,32 @@ s32 ngbe_write_phy_reg_sds_ext_yt(struct ngbe_hw *hw,
return 0;
}
+s32 ngbe_init_phy_yt(struct ngbe_hw *hw)
+{
+ u16 value = 0;
+
+ DEBUGFUNC("ngbe_init_phy_yt");
+
+ if (hw->phy.type != ngbe_phy_yt8521s_sfi)
+ return 0;
+
+ /* select sds area register */
+ ngbe_write_phy_reg_ext_yt(hw, YT_SMI_PHY, 0, 0);
+ /* enable interrupts */
+ ngbe_write_phy_reg_mdi(hw, YT_INTR, 0, YT_INTR_ENA_MASK);
+
+ /* select fiber_to_rgmii first in multiplex */
+ ngbe_read_phy_reg_ext_yt(hw, YT_MISC, 0, &value);
+ value |= YT_MISC_FIBER_PRIO;
+ ngbe_write_phy_reg_ext_yt(hw, YT_MISC, 0, value);
+
+ hw->phy.read_reg(hw, YT_BCR, 0, &value);
+ value |= YT_BCR_PWDN;
+ hw->phy.write_reg(hw, YT_BCR, 0, value);
+
+ return 0;
+}
+
s32 ngbe_setup_phy_link_yt(struct ngbe_hw *hw, u32 speed,
bool autoneg_wait_to_complete)
{
diff --git a/drivers/net/ngbe/base/ngbe_phy_yt.h b/drivers/net/ngbe/base/ngbe_phy_yt.h
index 26820ecb92..5babd841c1 100644
--- a/drivers/net/ngbe/base/ngbe_phy_yt.h
+++ b/drivers/net/ngbe/base/ngbe_phy_yt.h
@@ -65,6 +65,7 @@ s32 ngbe_read_phy_reg_sds_ext_yt(struct ngbe_hw *hw,
u32 reg_addr, u32 device_type, u16 *phy_data);
s32 ngbe_write_phy_reg_sds_ext_yt(struct ngbe_hw *hw,
u32 reg_addr, u32 device_type, u16 phy_data);
+s32 ngbe_init_phy_yt(struct ngbe_hw *hw);
s32 ngbe_reset_phy_yt(struct ngbe_hw *hw);
diff --git a/drivers/net/ngbe/base/ngbe_type.h b/drivers/net/ngbe/base/ngbe_type.h
index 626c6e2eec..396c19b573 100644
--- a/drivers/net/ngbe/base/ngbe_type.h
+++ b/drivers/net/ngbe/base/ngbe_type.h
@@ -94,15 +94,20 @@ struct ngbe_rom_info {
struct ngbe_mac_info {
s32 (*init_hw)(struct ngbe_hw *hw);
s32 (*reset_hw)(struct ngbe_hw *hw);
+ s32 (*start_hw)(struct ngbe_hw *hw);
s32 (*stop_hw)(struct ngbe_hw *hw);
s32 (*get_mac_addr)(struct ngbe_hw *hw, u8 *mac_addr);
s32 (*acquire_swfw_sync)(struct ngbe_hw *hw, u32 mask);
void (*release_swfw_sync)(struct ngbe_hw *hw, u32 mask);
+ /* Link */
s32 (*setup_link)(struct ngbe_hw *hw, u32 speed,
bool autoneg_wait_to_complete);
s32 (*check_link)(struct ngbe_hw *hw, u32 *speed,
bool *link_up, bool link_up_wait_to_complete);
+ s32 (*get_link_capabilities)(struct ngbe_hw *hw,
+ u32 *speed, bool *autoneg);
+
/* RAR */
s32 (*set_rar)(struct ngbe_hw *hw, u32 index, u8 *addr, u32 vmdq,
u32 enable_addr);
@@ -128,11 +133,13 @@ struct ngbe_mac_info {
bool set_lben;
u32 max_link_up_time;
+ u32 default_speeds;
bool autoneg;
};
struct ngbe_phy_info {
s32 (*identify)(struct ngbe_hw *hw);
+ s32 (*init_hw)(struct ngbe_hw *hw);
s32 (*reset_hw)(struct ngbe_hw *hw);
s32 (*read_reg)(struct ngbe_hw *hw, u32 reg_addr,
u32 device_type, u16 *phy_data);
@@ -180,6 +187,8 @@ struct ngbe_hw {
uint64_t isb_dma;
void IOMEM *isb_mem;
+ u16 nb_rx_queues;
+ u16 nb_tx_queues;
bool is_pf;
};
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 6b4d5ac65b..008326631e 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -15,9 +15,17 @@
#include "ngbe_rxtx.h"
static int ngbe_dev_close(struct rte_eth_dev *dev);
-
+static int ngbe_dev_link_update(struct rte_eth_dev *dev,
+ int wait_to_complete);
+
+static void ngbe_dev_link_status_print(struct rte_eth_dev *dev);
+static int ngbe_dev_lsc_interrupt_setup(struct rte_eth_dev *dev, uint8_t on);
+static int ngbe_dev_macsec_interrupt_setup(struct rte_eth_dev *dev);
+static int ngbe_dev_misc_interrupt_setup(struct rte_eth_dev *dev);
+static int ngbe_dev_rxq_interrupt_setup(struct rte_eth_dev *dev);
static void ngbe_dev_interrupt_handler(void *param);
static void ngbe_dev_interrupt_delayed_handler(void *param);
+static void ngbe_configure_msix(struct rte_eth_dev *dev);
/*
* The set of PCI devices this driver supports
@@ -54,6 +62,25 @@ static const struct rte_eth_desc_lim tx_desc_lim = {
static const struct eth_dev_ops ngbe_eth_dev_ops;
+static inline int32_t
+ngbe_pf_reset_hw(struct ngbe_hw *hw)
+{
+ uint32_t ctrl_ext;
+ int32_t status;
+
+ status = hw->mac.reset_hw(hw);
+
+ ctrl_ext = rd32(hw, NGBE_PORTCTL);
+ /* Set PF Reset Done bit so PF/VF Mail Ops can work */
+ ctrl_ext |= NGBE_PORTCTL_RSTDONE;
+ wr32(hw, NGBE_PORTCTL, ctrl_ext);
+ ngbe_flush(hw);
+
+ if (status == NGBE_ERR_SFP_NOT_PRESENT)
+ status = 0;
+ return status;
+}
+
static inline void
ngbe_enable_intr(struct rte_eth_dev *dev)
{
@@ -200,6 +227,13 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
wr32(hw, NGBE_PORTCTL, ctrl_ext);
ngbe_flush(hw);
+ PMD_INIT_LOG(DEBUG, "MAC: %d, PHY: %d",
+ (int)hw->mac.type, (int)hw->phy.type);
+
+ PMD_INIT_LOG(DEBUG, "port %d vendorID=0x%x deviceID=0x%x",
+ eth_dev->data->port_id, pci_dev->id.vendor_id,
+ pci_dev->id.device_id);
+
rte_intr_callback_register(intr_handle,
ngbe_dev_interrupt_handler, eth_dev);
@@ -274,6 +308,250 @@ ngbe_dev_configure(struct rte_eth_dev *dev)
return 0;
}
+static void
+ngbe_dev_phy_intr_setup(struct rte_eth_dev *dev)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ struct ngbe_interrupt *intr = ngbe_dev_intr(dev);
+
+ wr32(hw, NGBE_GPIODIR, NGBE_GPIODIR_DDR(1));
+ wr32(hw, NGBE_GPIOINTEN, NGBE_GPIOINTEN_INT(3));
+ wr32(hw, NGBE_GPIOINTTYPE, NGBE_GPIOINTTYPE_LEVEL(0));
+ if (hw->phy.type == ngbe_phy_yt8521s_sfi)
+ wr32(hw, NGBE_GPIOINTPOL, NGBE_GPIOINTPOL_ACT(0));
+ else
+ wr32(hw, NGBE_GPIOINTPOL, NGBE_GPIOINTPOL_ACT(3));
+
+ intr->mask_misc |= NGBE_ICRMISC_GPIO;
+}
+
+/*
+ * Configure the device link speed and set up the link.
+ * It returns 0 on success.
+ */
+static int
+ngbe_dev_start(struct rte_eth_dev *dev)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+ struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ uint32_t intr_vector = 0;
+ int err;
+ bool link_up = false, negotiate = false;
+ uint32_t speed = 0;
+ uint32_t allowed_speeds = 0;
+ int status;
+ uint32_t *link_speeds;
+
+ PMD_INIT_FUNC_TRACE();
+
+ /* disable uio/vfio intr/eventfd mapping */
+ rte_intr_disable(intr_handle);
+
+ /* stop adapter */
+ hw->adapter_stopped = 0;
+ ngbe_stop_hw(hw);
+
+ /* reinitialize adapter
+ * this calls reset and start
+ */
+ hw->nb_rx_queues = dev->data->nb_rx_queues;
+ hw->nb_tx_queues = dev->data->nb_tx_queues;
+ status = ngbe_pf_reset_hw(hw);
+ if (status != 0)
+ return -1;
+ hw->mac.start_hw(hw);
+ hw->mac.get_link_status = true;
+
+ ngbe_dev_phy_intr_setup(dev);
+
+ /* check and configure queue intr-vector mapping */
+ if ((rte_intr_cap_multiple(intr_handle) ||
+ !RTE_ETH_DEV_SRIOV(dev).active) &&
+ dev->data->dev_conf.intr_conf.rxq != 0) {
+ intr_vector = dev->data->nb_rx_queues;
+ if (rte_intr_efd_enable(intr_handle, intr_vector))
+ return -1;
+ }
+
+ if (rte_intr_dp_is_en(intr_handle) && intr_handle->intr_vec == NULL) {
+ intr_handle->intr_vec =
+ rte_zmalloc("intr_vec",
+ dev->data->nb_rx_queues * sizeof(int), 0);
+ if (intr_handle->intr_vec == NULL) {
+ PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
+ " intr_vec", dev->data->nb_rx_queues);
+ return -ENOMEM;
+ }
+ }
+
+ /* configure MSI-X for sleep until Rx interrupt */
+ ngbe_configure_msix(dev);
+
+ /* initialize transmission unit */
+ ngbe_dev_tx_init(dev);
+
+ /* This can fail when allocating mbufs for descriptor rings */
+ err = ngbe_dev_rx_init(dev);
+ if (err != 0) {
+ PMD_INIT_LOG(ERR, "Unable to initialize RX hardware");
+ goto error;
+ }
+
+ err = ngbe_dev_rxtx_start(dev);
+ if (err < 0) {
+ PMD_INIT_LOG(ERR, "Unable to start rxtx queues");
+ goto error;
+ }
+
+ err = hw->mac.check_link(hw, &speed, &link_up, 0);
+ if (err != 0)
+ goto error;
+ dev->data->dev_link.link_status = link_up;
+
+ link_speeds = &dev->data->dev_conf.link_speeds;
+ if (*link_speeds == ETH_LINK_SPEED_AUTONEG)
+ negotiate = true;
+
+ err = hw->mac.get_link_capabilities(hw, &speed, &negotiate);
+ if (err != 0)
+ goto error;
+
+ allowed_speeds = 0;
+ if (hw->mac.default_speeds & NGBE_LINK_SPEED_1GB_FULL)
+ allowed_speeds |= ETH_LINK_SPEED_1G;
+ if (hw->mac.default_speeds & NGBE_LINK_SPEED_100M_FULL)
+ allowed_speeds |= ETH_LINK_SPEED_100M;
+ if (hw->mac.default_speeds & NGBE_LINK_SPEED_10M_FULL)
+ allowed_speeds |= ETH_LINK_SPEED_10M;
+
+ if (*link_speeds & ~allowed_speeds) {
+ PMD_INIT_LOG(ERR, "Invalid link setting");
+ goto error;
+ }
+
+ speed = 0x0;
+ if (*link_speeds == ETH_LINK_SPEED_AUTONEG) {
+ speed = hw->mac.default_speeds;
+ } else {
+ if (*link_speeds & ETH_LINK_SPEED_1G)
+ speed |= NGBE_LINK_SPEED_1GB_FULL;
+ if (*link_speeds & ETH_LINK_SPEED_100M)
+ speed |= NGBE_LINK_SPEED_100M_FULL;
+ if (*link_speeds & ETH_LINK_SPEED_10M)
+ speed |= NGBE_LINK_SPEED_10M_FULL;
+ }
+
+ hw->phy.init_hw(hw);
+ err = hw->mac.setup_link(hw, speed, link_up);
+ if (err != 0)
+ goto error;
+
+ if (rte_intr_allow_others(intr_handle)) {
+ ngbe_dev_misc_interrupt_setup(dev);
+ /* check if lsc interrupt is enabled */
+ if (dev->data->dev_conf.intr_conf.lsc != 0)
+ ngbe_dev_lsc_interrupt_setup(dev, TRUE);
+ else
+ ngbe_dev_lsc_interrupt_setup(dev, FALSE);
+ ngbe_dev_macsec_interrupt_setup(dev);
+ ngbe_set_ivar_map(hw, -1, 1, NGBE_MISC_VEC_ID);
+ } else {
+ rte_intr_callback_unregister(intr_handle,
+ ngbe_dev_interrupt_handler, dev);
+ if (dev->data->dev_conf.intr_conf.lsc != 0)
+ PMD_INIT_LOG(INFO, "lsc won't enable because of"
+ " no intr multiplex");
+ }
+
+ /* check if rxq interrupt is enabled */
+ if (dev->data->dev_conf.intr_conf.rxq != 0 &&
+ rte_intr_dp_is_en(intr_handle))
+ ngbe_dev_rxq_interrupt_setup(dev);
+
+ /* enable UIO/VFIO intr/eventfd mapping */
+ rte_intr_enable(intr_handle);
+
+ /* resume enabled intr since HW reset */
+ ngbe_enable_intr(dev);
+
+ if ((hw->sub_system_id & NGBE_OEM_MASK) == NGBE_LY_M88E1512_SFP ||
+ (hw->sub_system_id & NGBE_OEM_MASK) == NGBE_LY_YT8521S_SFP) {
+ /* GPIO0 is used for power on/off control */
+ wr32(hw, NGBE_GPIODATA, 0);
+ }
+
+ /*
+ * Update link status right before return, because it may
+ * start link configuration process in a separate thread.
+ */
+ ngbe_dev_link_update(dev, 0);
+
+ return 0;
+
+error:
+ PMD_INIT_LOG(ERR, "failure in dev start: %d", err);
+ ngbe_dev_clear_queues(dev);
+ return -EIO;
+}
+
+/*
+ * Stop device: disable Rx and Tx functions to allow reconfiguration.
+ */
+static int
+ngbe_dev_stop(struct rte_eth_dev *dev)
+{
+ struct rte_eth_link link;
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+ struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+
+ if (hw->adapter_stopped)
+ return 0;
+
+ PMD_INIT_FUNC_TRACE();
+
+ if ((hw->sub_system_id & NGBE_OEM_MASK) == NGBE_LY_M88E1512_SFP ||
+ (hw->sub_system_id & NGBE_OEM_MASK) == NGBE_LY_YT8521S_SFP) {
+ /* GPIO0 is used for power on/off control */
+ wr32(hw, NGBE_GPIODATA, NGBE_GPIOBIT_0);
+ }
+
+ /* disable interrupts */
+ ngbe_disable_intr(hw);
+
+ /* reset the NIC */
+ ngbe_pf_reset_hw(hw);
+ hw->adapter_stopped = 0;
+
+ /* stop adapter */
+ ngbe_stop_hw(hw);
+
+ ngbe_dev_clear_queues(dev);
+
+ /* Clear recorded link status */
+ memset(&link, 0, sizeof(link));
+ rte_eth_linkstatus_set(dev, &link);
+
+ if (!rte_intr_allow_others(intr_handle))
+ /* resume to the default handler */
+ rte_intr_callback_register(intr_handle,
+ ngbe_dev_interrupt_handler,
+ (void *)dev);
+
+ /* Clean datapath event and queue/vec mapping */
+ rte_intr_efd_disable(intr_handle);
+ if (intr_handle->intr_vec != NULL) {
+ rte_free(intr_handle->intr_vec);
+ intr_handle->intr_vec = NULL;
+ }
+
+ hw->adapter_stopped = true;
+ dev->data->dev_started = 0;
+
+ return 0;
+}
+
/*
* Reset and stop device.
*/
@@ -417,6 +695,106 @@ ngbe_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete)
return ngbe_dev_link_update_share(dev, wait_to_complete);
}
+/**
+ * It clears the interrupt causes and enables the interrupt.
+ * It will be called only once during NIC initialization.
+ *
+ * @param dev
+ * Pointer to struct rte_eth_dev.
+ * @param on
+ * Enable or Disable.
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+static int
+ngbe_dev_lsc_interrupt_setup(struct rte_eth_dev *dev, uint8_t on)
+{
+ struct ngbe_interrupt *intr = ngbe_dev_intr(dev);
+
+ ngbe_dev_link_status_print(dev);
+ if (on != 0) {
+ intr->mask_misc |= NGBE_ICRMISC_PHY;
+ intr->mask_misc |= NGBE_ICRMISC_GPIO;
+ } else {
+ intr->mask_misc &= ~NGBE_ICRMISC_PHY;
+ intr->mask_misc &= ~NGBE_ICRMISC_GPIO;
+ }
+
+ return 0;
+}
+
+/**
+ * It clears the interrupt causes and enables the interrupt.
+ * It will be called only once during NIC initialization.
+ *
+ * @param dev
+ * Pointer to struct rte_eth_dev.
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+static int
+ngbe_dev_misc_interrupt_setup(struct rte_eth_dev *dev)
+{
+ struct ngbe_interrupt *intr = ngbe_dev_intr(dev);
+ u64 mask;
+
+ mask = NGBE_ICR_MASK;
+ mask &= (1ULL << NGBE_MISC_VEC_ID);
+ intr->mask |= mask;
+ intr->mask_misc |= NGBE_ICRMISC_GPIO;
+
+ return 0;
+}
+
+/**
+ * It clears the interrupt causes and enables the interrupt.
+ * It will be called only once during NIC initialization.
+ *
+ * @param dev
+ * Pointer to struct rte_eth_dev.
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+static int
+ngbe_dev_rxq_interrupt_setup(struct rte_eth_dev *dev)
+{
+ struct ngbe_interrupt *intr = ngbe_dev_intr(dev);
+ u64 mask;
+
+ mask = NGBE_ICR_MASK;
+ mask &= ~((1ULL << NGBE_RX_VEC_START) - 1);
+ intr->mask |= mask;
+
+ return 0;
+}
+
+/**
+ * It clears the interrupt causes and enables the interrupt.
+ * It will be called only once during NIC initialization.
+ *
+ * @param dev
+ * Pointer to struct rte_eth_dev.
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+static int
+ngbe_dev_macsec_interrupt_setup(struct rte_eth_dev *dev)
+{
+ struct ngbe_interrupt *intr = ngbe_dev_intr(dev);
+
+ intr->mask_misc |= NGBE_ICRMISC_LNKSEC;
+
+ return 0;
+}
+
/*
* It reads ICR and sets flag for the link_update.
*
@@ -623,9 +1001,102 @@ ngbe_dev_interrupt_handler(void *param)
ngbe_dev_interrupt_action(dev);
}
+/**
+ * set the IVAR registers, mapping interrupt causes to vectors
+ * @param hw
+ * pointer to ngbe_hw struct
+ * @param direction
+ * 0 for Rx, 1 for Tx, -1 for other causes
+ * @param queue
+ * queue to map the corresponding interrupt to
+ * @param msix_vector
+ * the vector to map to the corresponding queue
+ */
+void
+ngbe_set_ivar_map(struct ngbe_hw *hw, int8_t direction,
+ uint8_t queue, uint8_t msix_vector)
+{
+ uint32_t tmp, idx;
+
+ if (direction == -1) {
+ /* other causes */
+ msix_vector |= NGBE_IVARMISC_VLD;
+ idx = 0;
+ tmp = rd32(hw, NGBE_IVARMISC);
+ tmp &= ~(0xFF << idx);
+ tmp |= (msix_vector << idx);
+ wr32(hw, NGBE_IVARMISC, tmp);
+ } else {
+ /* rx or tx causes */
+ /* Workaround for lost ICR */
+ idx = ((16 * (queue & 1)) + (8 * direction));
+ tmp = rd32(hw, NGBE_IVAR(queue >> 1));
+ tmp &= ~(0xFF << idx);
+ tmp |= (msix_vector << idx);
+ wr32(hw, NGBE_IVAR(queue >> 1), tmp);
+ }
+}
+
+/**
+ * Sets up the hardware to properly generate MSI-X interrupts
+ * @param dev
+ * pointer to the rte_eth_dev structure
+ */
+static void
+ngbe_configure_msix(struct rte_eth_dev *dev)
+{
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+ struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ uint32_t queue_id, base = NGBE_MISC_VEC_ID;
+ uint32_t vec = NGBE_MISC_VEC_ID;
+ uint32_t gpie;
+
+ /* Do not configure the MSI-X registers if no mapping is done
+ * between interrupt vectors and event fds;
+ * but if MSI-X has already been enabled, auto clean,
+ * auto mask and throttling still need to be configured.
+ */
+ gpie = rd32(hw, NGBE_GPIE);
+ if (!rte_intr_dp_is_en(intr_handle) &&
+ !(gpie & NGBE_GPIE_MSIX))
+ return;
+
+ if (rte_intr_allow_others(intr_handle)) {
+ base = NGBE_RX_VEC_START;
+ vec = base;
+ }
+
+ /* setup GPIE for MSI-X mode */
+ gpie = rd32(hw, NGBE_GPIE);
+ gpie |= NGBE_GPIE_MSIX;
+ wr32(hw, NGBE_GPIE, gpie);
+
+ /* Populate the IVAR table and set the ITR values to the
+ * corresponding register.
+ */
+ if (rte_intr_dp_is_en(intr_handle)) {
+ for (queue_id = 0; queue_id < dev->data->nb_rx_queues;
+ queue_id++) {
+ /* by default, 1:1 mapping */
+ ngbe_set_ivar_map(hw, 0, queue_id, vec);
+ intr_handle->intr_vec[queue_id] = vec;
+ if (vec < base + intr_handle->nb_efd - 1)
+ vec++;
+ }
+
+ ngbe_set_ivar_map(hw, -1, 1, NGBE_MISC_VEC_ID);
+ }
+ wr32(hw, NGBE_ITR(NGBE_MISC_VEC_ID),
+ NGBE_ITR_IVAL_1G(NGBE_QUEUE_ITR_INTERVAL_DEFAULT)
+ | NGBE_ITR_WRDSA);
+}
+
static const struct eth_dev_ops ngbe_eth_dev_ops = {
.dev_configure = ngbe_dev_configure,
.dev_infos_get = ngbe_dev_info_get,
+ .dev_start = ngbe_dev_start,
+ .dev_stop = ngbe_dev_stop,
.link_update = ngbe_dev_link_update,
.rx_queue_setup = ngbe_dev_rx_queue_setup,
.rx_queue_release = ngbe_dev_rx_queue_release,
diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
index c52cac2ca1..5277f175cd 100644
--- a/drivers/net/ngbe/ngbe_ethdev.h
+++ b/drivers/net/ngbe/ngbe_ethdev.h
@@ -13,7 +13,10 @@
#define NGBE_FLAG_MACSEC (uint32_t)(1 << 3)
#define NGBE_FLAG_NEED_LINK_CONFIG (uint32_t)(1 << 4)
+#define NGBE_QUEUE_ITR_INTERVAL_DEFAULT 500 /* 500us */
+
#define NGBE_MISC_VEC_ID RTE_INTR_VEC_ZERO_OFFSET
+#define NGBE_RX_VEC_START RTE_INTR_VEC_RXTX_OFFSET
/* structure for interrupt relative data */
struct ngbe_interrupt {
@@ -59,6 +62,11 @@ ngbe_dev_intr(struct rte_eth_dev *dev)
return intr;
}
+/*
+ * RX/TX function prototypes
+ */
+void ngbe_dev_clear_queues(struct rte_eth_dev *dev);
+
void ngbe_dev_rx_queue_release(void *rxq);
void ngbe_dev_tx_queue_release(void *txq);
@@ -72,12 +80,21 @@ int ngbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
uint16_t nb_tx_desc, unsigned int socket_id,
const struct rte_eth_txconf *tx_conf);
+int ngbe_dev_rx_init(struct rte_eth_dev *dev);
+
+void ngbe_dev_tx_init(struct rte_eth_dev *dev);
+
+int ngbe_dev_rxtx_start(struct rte_eth_dev *dev);
+
uint16_t ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts);
uint16_t ngbe_xmit_pkts_simple(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
+void ngbe_set_ivar_map(struct ngbe_hw *hw, int8_t direction,
+ uint8_t queue, uint8_t msix_vector);
+
int
ngbe_dev_link_update_share(struct rte_eth_dev *dev,
int wait_to_complete);
diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c
index 6dde996659..d4680afaae 100644
--- a/drivers/net/ngbe/ngbe_rxtx.c
+++ b/drivers/net/ngbe/ngbe_rxtx.c
@@ -908,3 +908,62 @@ ngbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
return 0;
}
+void __rte_cold
+ngbe_dev_clear_queues(struct rte_eth_dev *dev)
+{
+ unsigned int i;
+ struct ngbe_adapter *adapter = ngbe_dev_adapter(dev);
+
+ PMD_INIT_FUNC_TRACE();
+
+ for (i = 0; i < dev->data->nb_tx_queues; i++) {
+ struct ngbe_tx_queue *txq = dev->data->tx_queues[i];
+
+ if (txq != NULL) {
+ txq->ops->release_mbufs(txq);
+ txq->ops->reset(txq);
+ }
+ }
+
+ for (i = 0; i < dev->data->nb_rx_queues; i++) {
+ struct ngbe_rx_queue *rxq = dev->data->rx_queues[i];
+
+ if (rxq != NULL) {
+ ngbe_rx_queue_release_mbufs(rxq);
+ ngbe_reset_rx_queue(adapter, rxq);
+ }
+ }
+}
+
+/*
+ * Initializes Receive Unit.
+ */
+int __rte_cold
+ngbe_dev_rx_init(struct rte_eth_dev *dev)
+{
+ RTE_SET_USED(dev);
+
+ return -EINVAL;
+}
+
+/*
+ * Initializes Transmit Unit.
+ */
+void __rte_cold
+ngbe_dev_tx_init(struct rte_eth_dev *dev)
+{
+ RTE_SET_USED(dev);
+
+}
+
+/*
+ * Start Transmit and Receive Units.
+ */
+int __rte_cold
+ngbe_dev_rxtx_start(struct rte_eth_dev *dev)
+{
+ RTE_SET_USED(dev);
+
+ return -EINVAL;
+}
+
--
2.21.0.windows.1
* [dpdk-dev] [PATCH v6 17/19] net/ngbe: add Tx queue start and stop
2021-06-17 10:59 [dpdk-dev] [PATCH v6 00/19] net: ngbe PMD Jiawen Wu
` (15 preceding siblings ...)
2021-06-17 11:00 ` [dpdk-dev] [PATCH v6 16/19] net/ngbe: add device start and stop operations Jiawen Wu
@ 2021-06-17 11:00 ` Jiawen Wu
2021-07-02 16:55 ` Andrew Rybchenko
2021-06-17 11:00 ` [dpdk-dev] [PATCH v6 18/19] net/ngbe: add Rx " Jiawen Wu
2021-06-17 11:00 ` [dpdk-dev] [PATCH v6 19/19] net/ngbe: support to close and reset device Jiawen Wu
18 siblings, 1 reply; 42+ messages in thread
From: Jiawen Wu @ 2021-06-17 11:00 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu
Initialize the transmit unit, and support starting and stopping the transmit
unit for specified queues.
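For reference, a hedged usage sketch (not part of the patch; the port is
assumed to be configured already, and the queue id and ring size are made up)
of how an application drives the new per-queue Tx operations, typically
together with tx_deferred_start:

#include <rte_ethdev.h>
#include <rte_lcore.h>

/* Illustrative only: defer Tx queue 0 at setup time, then start and
 * stop it explicitly through the per-queue ethdev operations.
 */
static int
example_tx_queue_start_stop(uint16_t port_id)
{
    struct rte_eth_txconf txconf = { .tx_deferred_start = 1 };
    uint16_t q = 0; /* assumed queue id */
    int ret;

    ret = rte_eth_tx_queue_setup(port_id, q, 512, rte_socket_id(), &txconf);
    if (ret < 0)
        return ret;
    ret = rte_eth_dev_start(port_id); /* deferred queue q stays stopped */
    if (ret < 0)
        return ret;

    ret = rte_eth_dev_tx_queue_start(port_id, q); /* ngbe_dev_tx_queue_start() */
    if (ret < 0)
        return ret;
    /* ... transmit ... */
    return rte_eth_dev_tx_queue_stop(port_id, q); /* ngbe_dev_tx_queue_stop() */
}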
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
doc/guides/nics/features/ngbe.ini | 1 +
drivers/net/ngbe/base/ngbe_type.h | 1 +
drivers/net/ngbe/ngbe_ethdev.c | 3 +
drivers/net/ngbe/ngbe_ethdev.h | 7 ++
drivers/net/ngbe/ngbe_rxtx.c | 162 +++++++++++++++++++++++++++++-
drivers/net/ngbe/ngbe_rxtx.h | 3 +
6 files changed, 175 insertions(+), 2 deletions(-)
diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini
index 291a542a42..08d5f1b0dc 100644
--- a/doc/guides/nics/features/ngbe.ini
+++ b/doc/guides/nics/features/ngbe.ini
@@ -7,6 +7,7 @@
Speed capabilities = Y
Link status = Y
Link status event = Y
+Queue start/stop = Y
Multiprocess aware = Y
Linux = Y
ARMv8 = Y
diff --git a/drivers/net/ngbe/base/ngbe_type.h b/drivers/net/ngbe/base/ngbe_type.h
index 396c19b573..9312a1ee44 100644
--- a/drivers/net/ngbe/base/ngbe_type.h
+++ b/drivers/net/ngbe/base/ngbe_type.h
@@ -190,6 +190,7 @@ struct ngbe_hw {
u16 nb_rx_queues;
u16 nb_tx_queues;
+ u32 q_tx_regs[8 * 4];
bool is_pf;
};
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 008326631e..bd32f20c38 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -601,6 +601,7 @@ ngbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
ETH_LINK_SPEED_10M;
/* Driver-preferred Rx/Tx parameters */
+ dev_info->default_txportconf.burst_size = 32;
dev_info->default_rxportconf.nb_queues = 1;
dev_info->default_txportconf.nb_queues = 1;
dev_info->default_rxportconf.ring_size = 256;
@@ -1098,6 +1099,8 @@ static const struct eth_dev_ops ngbe_eth_dev_ops = {
.dev_start = ngbe_dev_start,
.dev_stop = ngbe_dev_stop,
.link_update = ngbe_dev_link_update,
+ .tx_queue_start = ngbe_dev_tx_queue_start,
+ .tx_queue_stop = ngbe_dev_tx_queue_stop,
.rx_queue_setup = ngbe_dev_rx_queue_setup,
.rx_queue_release = ngbe_dev_rx_queue_release,
.tx_queue_setup = ngbe_dev_tx_queue_setup,
diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
index 5277f175cd..40a28711f1 100644
--- a/drivers/net/ngbe/ngbe_ethdev.h
+++ b/drivers/net/ngbe/ngbe_ethdev.h
@@ -86,6 +86,13 @@ void ngbe_dev_tx_init(struct rte_eth_dev *dev);
int ngbe_dev_rxtx_start(struct rte_eth_dev *dev);
+void ngbe_dev_save_tx_queue(struct ngbe_hw *hw, uint16_t tx_queue_id);
+void ngbe_dev_store_tx_queue(struct ngbe_hw *hw, uint16_t tx_queue_id);
+
+int ngbe_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+
+int ngbe_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+
uint16_t ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts);
diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c
index d4680afaae..b05cb0ec34 100644
--- a/drivers/net/ngbe/ngbe_rxtx.c
+++ b/drivers/net/ngbe/ngbe_rxtx.c
@@ -952,8 +952,32 @@ ngbe_dev_rx_init(struct rte_eth_dev *dev)
void __rte_cold
ngbe_dev_tx_init(struct rte_eth_dev *dev)
{
- RTE_SET_USED(dev);
+ struct ngbe_hw *hw;
+ struct ngbe_tx_queue *txq;
+ uint64_t bus_addr;
+ uint16_t i;
+
+ PMD_INIT_FUNC_TRACE();
+ hw = ngbe_dev_hw(dev);
+ wr32m(hw, NGBE_SECTXCTL, NGBE_SECTXCTL_ODSA, NGBE_SECTXCTL_ODSA);
+ wr32m(hw, NGBE_SECTXCTL, NGBE_SECTXCTL_XDSA, 0);
+
+ /* Setup the Base and Length of the Tx Descriptor Rings */
+ for (i = 0; i < dev->data->nb_tx_queues; i++) {
+ txq = dev->data->tx_queues[i];
+
+ bus_addr = txq->tx_ring_phys_addr;
+ wr32(hw, NGBE_TXBAL(txq->reg_idx),
+ (uint32_t)(bus_addr & BIT_MASK32));
+ wr32(hw, NGBE_TXBAH(txq->reg_idx),
+ (uint32_t)(bus_addr >> 32));
+ wr32m(hw, NGBE_TXCFG(txq->reg_idx), NGBE_TXCFG_BUFLEN_MASK,
+ NGBE_TXCFG_BUFLEN(txq->nb_tx_desc));
+ /* Setup the HW Tx Head and TX Tail descriptor pointers */
+ wr32(hw, NGBE_TXRP(txq->reg_idx), 0);
+ wr32(hw, NGBE_TXWP(txq->reg_idx), 0);
+ }
}
/*
@@ -962,8 +986,142 @@ ngbe_dev_tx_init(struct rte_eth_dev *dev)
int __rte_cold
ngbe_dev_rxtx_start(struct rte_eth_dev *dev)
{
- RTE_SET_USED(dev);
+ struct ngbe_hw *hw;
+ struct ngbe_tx_queue *txq;
+ uint32_t dmatxctl;
+ uint16_t i;
+ int ret = 0;
+
+ PMD_INIT_FUNC_TRACE();
+ hw = ngbe_dev_hw(dev);
+
+ for (i = 0; i < dev->data->nb_tx_queues; i++) {
+ txq = dev->data->tx_queues[i];
+ /* Setup Transmit Threshold Registers */
+ wr32m(hw, NGBE_TXCFG(txq->reg_idx),
+ NGBE_TXCFG_HTHRESH_MASK |
+ NGBE_TXCFG_WTHRESH_MASK,
+ NGBE_TXCFG_HTHRESH(txq->hthresh) |
+ NGBE_TXCFG_WTHRESH(txq->wthresh));
+ }
+
+ dmatxctl = rd32(hw, NGBE_DMATXCTRL);
+ dmatxctl |= NGBE_DMATXCTRL_ENA;
+ wr32(hw, NGBE_DMATXCTRL, dmatxctl);
+
+ for (i = 0; i < dev->data->nb_tx_queues; i++) {
+ txq = dev->data->tx_queues[i];
+ if (txq->tx_deferred_start == 0) {
+ ret = ngbe_dev_tx_queue_start(dev, i);
+ if (ret < 0)
+ return ret;
+ }
+ }
return -EINVAL;
}
+void
+ngbe_dev_save_tx_queue(struct ngbe_hw *hw, uint16_t tx_queue_id)
+{
+ u32 *reg = &hw->q_tx_regs[tx_queue_id * 8];
+ *(reg++) = rd32(hw, NGBE_TXBAL(tx_queue_id));
+ *(reg++) = rd32(hw, NGBE_TXBAH(tx_queue_id));
+ *(reg++) = rd32(hw, NGBE_TXCFG(tx_queue_id));
+}
+
+void
+ngbe_dev_store_tx_queue(struct ngbe_hw *hw, uint16_t tx_queue_id)
+{
+ u32 *reg = &hw->q_tx_regs[tx_queue_id * 8];
+ wr32(hw, NGBE_TXBAL(tx_queue_id), *(reg++));
+ wr32(hw, NGBE_TXBAH(tx_queue_id), *(reg++));
+ wr32(hw, NGBE_TXCFG(tx_queue_id), *(reg++) & ~NGBE_TXCFG_ENA);
+}
+
+/*
+ * Start Transmit Units for specified queue.
+ */
+int __rte_cold
+ngbe_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ struct ngbe_tx_queue *txq;
+ uint32_t txdctl;
+ int poll_ms;
+
+ PMD_INIT_FUNC_TRACE();
+
+ txq = dev->data->tx_queues[tx_queue_id];
+ wr32m(hw, NGBE_TXCFG(txq->reg_idx), NGBE_TXCFG_ENA, NGBE_TXCFG_ENA);
+
+ /* Wait until Tx Enable ready */
+ poll_ms = RTE_NGBE_REGISTER_POLL_WAIT_10_MS;
+ do {
+ rte_delay_ms(1);
+ txdctl = rd32(hw, NGBE_TXCFG(txq->reg_idx));
+ } while (--poll_ms && !(txdctl & NGBE_TXCFG_ENA));
+ if (poll_ms == 0)
+ PMD_INIT_LOG(ERR, "Could not enable "
+ "Tx Queue %d", tx_queue_id);
+
+ rte_wmb();
+ wr32(hw, NGBE_TXWP(txq->reg_idx), txq->tx_tail);
+ dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
+
+ return 0;
+}
+
+/*
+ * Stop Transmit Units for specified queue.
+ */
+int __rte_cold
+ngbe_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ struct ngbe_tx_queue *txq;
+ uint32_t txdctl;
+ uint32_t txtdh, txtdt;
+ int poll_ms;
+
+ PMD_INIT_FUNC_TRACE();
+
+ txq = dev->data->tx_queues[tx_queue_id];
+
+ /* Wait until Tx queue is empty */
+ poll_ms = RTE_NGBE_REGISTER_POLL_WAIT_10_MS;
+ do {
+ rte_delay_us(RTE_NGBE_WAIT_100_US);
+ txtdh = rd32(hw, NGBE_TXRP(txq->reg_idx));
+ txtdt = rd32(hw, NGBE_TXWP(txq->reg_idx));
+ } while (--poll_ms && (txtdh != txtdt));
+ if (poll_ms == 0)
+ PMD_INIT_LOG(ERR,
+ "Tx Queue %d is not empty when stopping.",
+ tx_queue_id);
+
+ ngbe_dev_save_tx_queue(hw, txq->reg_idx);
+ wr32m(hw, NGBE_TXCFG(txq->reg_idx), NGBE_TXCFG_ENA, 0);
+
+ /* Wait until Tx Enable bit clear */
+ poll_ms = RTE_NGBE_REGISTER_POLL_WAIT_10_MS;
+ do {
+ rte_delay_ms(1);
+ txdctl = rd32(hw, NGBE_TXCFG(txq->reg_idx));
+ } while (--poll_ms && (txdctl & NGBE_TXCFG_ENA));
+ if (poll_ms == 0)
+ PMD_INIT_LOG(ERR, "Could not disable Tx Queue %d",
+ tx_queue_id);
+
+ rte_delay_us(RTE_NGBE_WAIT_100_US);
+ ngbe_dev_store_tx_queue(hw, txq->reg_idx);
+
+ if (txq->ops != NULL) {
+ txq->ops->release_mbufs(txq);
+ txq->ops->reset(txq);
+ }
+ dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+ return 0;
+}
+
diff --git a/drivers/net/ngbe/ngbe_rxtx.h b/drivers/net/ngbe/ngbe_rxtx.h
index 616b41a300..d906b45a73 100644
--- a/drivers/net/ngbe/ngbe_rxtx.h
+++ b/drivers/net/ngbe/ngbe_rxtx.h
@@ -181,6 +181,9 @@ struct ngbe_tx_desc {
#define rte_packet_prefetch(p) rte_prefetch1(p)
+#define RTE_NGBE_REGISTER_POLL_WAIT_10_MS 10
+#define RTE_NGBE_WAIT_100_US 100
+
#define NGBE_TX_MAX_SEG 40
#ifndef DEFAULT_TX_FREE_THRESH
--
2.21.0.windows.1
* [dpdk-dev] [PATCH v6 18/19] net/ngbe: add Rx queue start and stop
2021-06-17 10:59 [dpdk-dev] [PATCH v6 00/19] net: ngbe PMD Jiawen Wu
` (16 preceding siblings ...)
2021-06-17 11:00 ` [dpdk-dev] [PATCH v6 17/19] net/ngbe: add Tx queue start and stop Jiawen Wu
@ 2021-06-17 11:00 ` Jiawen Wu
2021-06-17 11:00 ` [dpdk-dev] [PATCH v6 19/19] net/ngbe: support to close and reset device Jiawen Wu
18 siblings, 0 replies; 42+ messages in thread
From: Jiawen Wu @ 2021-06-17 11:00 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu
Initialize the receive unit, and support starting and stopping the receive
unit for specified queues.
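For reference, a hedged usage sketch (not part of the patch; the port is
assumed to be configured already, and the queue id, ring size and mempool are
made up) of how an application drives the new per-queue Rx operations,
typically together with rx_deferred_start:

#include <rte_ethdev.h>
#include <rte_lcore.h>

/* Illustrative only: mb_pool is assumed to be an existing mbuf pool;
 * defer Rx queue 0 at setup time, then start and stop it explicitly.
 */
static int
example_rx_queue_start_stop(uint16_t port_id, struct rte_mempool *mb_pool)
{
    struct rte_eth_rxconf rxconf = { .rx_deferred_start = 1 };
    uint16_t q = 0; /* assumed queue id */
    int ret;

    ret = rte_eth_rx_queue_setup(port_id, q, 512, rte_socket_id(),
                                 &rxconf, mb_pool);
    if (ret < 0)
        return ret;
    ret = rte_eth_dev_start(port_id); /* deferred queue q stays stopped */
    if (ret < 0)
        return ret;

    ret = rte_eth_dev_rx_queue_start(port_id, q); /* ngbe_dev_rx_queue_start() */
    if (ret < 0)
        return ret;
    /* ... receive ... */
    return rte_eth_dev_rx_queue_stop(port_id, q); /* ngbe_dev_rx_queue_stop() */
}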
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
drivers/net/ngbe/base/ngbe_dummy.h | 15 ++
drivers/net/ngbe/base/ngbe_hw.c | 105 ++++++++++++++
drivers/net/ngbe/base/ngbe_hw.h | 4 +
drivers/net/ngbe/base/ngbe_type.h | 4 +
drivers/net/ngbe/ngbe_ethdev.c | 5 +
drivers/net/ngbe/ngbe_ethdev.h | 6 +
drivers/net/ngbe/ngbe_rxtx.c | 215 ++++++++++++++++++++++++++++-
7 files changed, 351 insertions(+), 3 deletions(-)
diff --git a/drivers/net/ngbe/base/ngbe_dummy.h b/drivers/net/ngbe/base/ngbe_dummy.h
index dfc7b13192..384631b4f1 100644
--- a/drivers/net/ngbe/base/ngbe_dummy.h
+++ b/drivers/net/ngbe/base/ngbe_dummy.h
@@ -59,6 +59,18 @@ static inline s32 ngbe_mac_get_mac_addr_dummy(struct ngbe_hw *TUP0, u8 *TUP1)
{
return NGBE_ERR_OPS_DUMMY;
}
+static inline s32 ngbe_mac_enable_rx_dma_dummy(struct ngbe_hw *TUP0, u32 TUP1)
+{
+ return NGBE_ERR_OPS_DUMMY;
+}
+static inline s32 ngbe_mac_disable_sec_rx_path_dummy(struct ngbe_hw *TUP0)
+{
+ return NGBE_ERR_OPS_DUMMY;
+}
+static inline s32 ngbe_mac_enable_sec_rx_path_dummy(struct ngbe_hw *TUP0)
+{
+ return NGBE_ERR_OPS_DUMMY;
+}
static inline s32 ngbe_mac_acquire_swfw_sync_dummy(struct ngbe_hw *TUP0,
u32 TUP1)
{
@@ -167,6 +179,9 @@ static inline void ngbe_init_ops_dummy(struct ngbe_hw *hw)
hw->mac.start_hw = ngbe_mac_start_hw_dummy;
hw->mac.stop_hw = ngbe_mac_stop_hw_dummy;
hw->mac.get_mac_addr = ngbe_mac_get_mac_addr_dummy;
+ hw->mac.enable_rx_dma = ngbe_mac_enable_rx_dma_dummy;
+ hw->mac.disable_sec_rx_path = ngbe_mac_disable_sec_rx_path_dummy;
+ hw->mac.enable_sec_rx_path = ngbe_mac_enable_sec_rx_path_dummy;
hw->mac.acquire_swfw_sync = ngbe_mac_acquire_swfw_sync_dummy;
hw->mac.release_swfw_sync = ngbe_mac_release_swfw_sync_dummy;
hw->mac.setup_link = ngbe_mac_setup_link_dummy;
diff --git a/drivers/net/ngbe/base/ngbe_hw.c b/drivers/net/ngbe/base/ngbe_hw.c
index 22ce4e52b9..45cbfaa76c 100644
--- a/drivers/net/ngbe/base/ngbe_hw.c
+++ b/drivers/net/ngbe/base/ngbe_hw.c
@@ -536,6 +536,63 @@ void ngbe_release_swfw_sync(struct ngbe_hw *hw, u32 mask)
ngbe_release_eeprom_semaphore(hw);
}
+/**
+ * ngbe_disable_sec_rx_path - Stops the receive data path
+ * @hw: pointer to hardware structure
+ *
+ * Stops the receive data path and waits for the HW to internally empty
+ * the Rx security block
+ **/
+s32 ngbe_disable_sec_rx_path(struct ngbe_hw *hw)
+{
+#define NGBE_MAX_SECRX_POLL 4000
+
+ int i;
+ u32 secrxreg;
+
+ DEBUGFUNC("ngbe_disable_sec_rx_path");
+
+
+ secrxreg = rd32(hw, NGBE_SECRXCTL);
+ secrxreg |= NGBE_SECRXCTL_XDSA;
+ wr32(hw, NGBE_SECRXCTL, secrxreg);
+ for (i = 0; i < NGBE_MAX_SECRX_POLL; i++) {
+ secrxreg = rd32(hw, NGBE_SECRXSTAT);
+ if (!(secrxreg & NGBE_SECRXSTAT_RDY))
+ /* Use interrupt-safe sleep just in case */
+ usec_delay(10);
+ else
+ break;
+ }
+
+ /* For informational purposes only */
+ if (i >= NGBE_MAX_SECRX_POLL)
+ DEBUGOUT("Rx unit being enabled before security "
+ "path fully disabled. Continuing with init.\n");
+
+ return 0;
+}
+
+/**
+ * ngbe_enable_sec_rx_path - Enables the receive data path
+ * @hw: pointer to hardware structure
+ *
+ * Enables the receive data path.
+ **/
+s32 ngbe_enable_sec_rx_path(struct ngbe_hw *hw)
+{
+ u32 secrxreg;
+
+ DEBUGFUNC("ngbe_enable_sec_rx_path");
+
+ secrxreg = rd32(hw, NGBE_SECRXCTL);
+ secrxreg &= ~NGBE_SECRXCTL_XDSA;
+ wr32(hw, NGBE_SECRXCTL, secrxreg);
+ ngbe_flush(hw);
+
+ return 0;
+}
+
/**
* ngbe_clear_vmdq - Disassociate a VMDq pool index from a rx address
* @hw: pointer to hardware struct
@@ -756,6 +813,21 @@ void ngbe_disable_rx(struct ngbe_hw *hw)
wr32m(hw, NGBE_MACRXCFG, NGBE_MACRXCFG_ENA, 0);
}
+void ngbe_enable_rx(struct ngbe_hw *hw)
+{
+ u32 pfdtxgswc;
+
+ wr32m(hw, NGBE_MACRXCFG, NGBE_MACRXCFG_ENA, NGBE_MACRXCFG_ENA);
+ wr32m(hw, NGBE_PBRXCTL, NGBE_PBRXCTL_ENA, NGBE_PBRXCTL_ENA);
+
+ if (hw->mac.set_lben) {
+ pfdtxgswc = rd32(hw, NGBE_PSRCTL);
+ pfdtxgswc |= NGBE_PSRCTL_LBENA;
+ wr32(hw, NGBE_PSRCTL, pfdtxgswc);
+ hw->mac.set_lben = false;
+ }
+}
+
/**
* ngbe_set_mac_type - Sets MAC type
* @hw: pointer to the HW structure
@@ -802,6 +874,36 @@ s32 ngbe_set_mac_type(struct ngbe_hw *hw)
return err;
}
+/**
+ * ngbe_enable_rx_dma - Enable the Rx DMA unit
+ * @hw: pointer to hardware structure
+ * @regval: register value to write to RXCTRL
+ *
+ * Enables the Rx DMA unit
+ **/
+s32 ngbe_enable_rx_dma(struct ngbe_hw *hw, u32 regval)
+{
+ DEBUGFUNC("ngbe_enable_rx_dma");
+
+ /*
+ * Workaround silicon errata when enabling the Rx datapath.
+ * If traffic is incoming before we enable the Rx unit, it could hang
+ * the Rx DMA unit. Therefore, make sure the security engine is
+ * completely disabled prior to enabling the Rx unit.
+ */
+
+ hw->mac.disable_sec_rx_path(hw);
+
+ if (regval & NGBE_PBRXCTL_ENA)
+ ngbe_enable_rx(hw);
+ else
+ ngbe_disable_rx(hw);
+
+ hw->mac.enable_sec_rx_path(hw);
+
+ return 0;
+}
+
void ngbe_map_device_id(struct ngbe_hw *hw)
{
u16 oem = hw->sub_system_id & NGBE_OEM_MASK;
@@ -886,11 +988,14 @@ s32 ngbe_init_ops_pf(struct ngbe_hw *hw)
mac->init_hw = ngbe_init_hw;
mac->reset_hw = ngbe_reset_hw_em;
mac->start_hw = ngbe_start_hw;
+ mac->enable_rx_dma = ngbe_enable_rx_dma;
mac->get_mac_addr = ngbe_get_mac_addr;
mac->stop_hw = ngbe_stop_hw;
mac->acquire_swfw_sync = ngbe_acquire_swfw_sync;
mac->release_swfw_sync = ngbe_release_swfw_sync;
+ mac->disable_sec_rx_path = ngbe_disable_sec_rx_path;
+ mac->enable_sec_rx_path = ngbe_enable_sec_rx_path;
/* RAR */
mac->set_rar = ngbe_set_rar;
mac->clear_rar = ngbe_clear_rar;
diff --git a/drivers/net/ngbe/base/ngbe_hw.h b/drivers/net/ngbe/base/ngbe_hw.h
index 4fee5735ac..01f41fe9b3 100644
--- a/drivers/net/ngbe/base/ngbe_hw.h
+++ b/drivers/net/ngbe/base/ngbe_hw.h
@@ -34,6 +34,8 @@ s32 ngbe_set_rar(struct ngbe_hw *hw, u32 index, u8 *addr, u32 vmdq,
u32 enable_addr);
s32 ngbe_clear_rar(struct ngbe_hw *hw, u32 index);
s32 ngbe_init_rx_addrs(struct ngbe_hw *hw);
+s32 ngbe_disable_sec_rx_path(struct ngbe_hw *hw);
+s32 ngbe_enable_sec_rx_path(struct ngbe_hw *hw);
s32 ngbe_validate_mac_addr(u8 *mac_addr);
s32 ngbe_acquire_swfw_sync(struct ngbe_hw *hw, u32 mask);
@@ -46,10 +48,12 @@ s32 ngbe_init_uta_tables(struct ngbe_hw *hw);
s32 ngbe_init_thermal_sensor_thresh(struct ngbe_hw *hw);
s32 ngbe_mac_check_overtemp(struct ngbe_hw *hw);
void ngbe_disable_rx(struct ngbe_hw *hw);
+void ngbe_enable_rx(struct ngbe_hw *hw);
s32 ngbe_init_shared_code(struct ngbe_hw *hw);
s32 ngbe_set_mac_type(struct ngbe_hw *hw);
s32 ngbe_init_ops_pf(struct ngbe_hw *hw);
s32 ngbe_init_phy(struct ngbe_hw *hw);
+s32 ngbe_enable_rx_dma(struct ngbe_hw *hw, u32 regval);
void ngbe_map_device_id(struct ngbe_hw *hw);
#endif /* _NGBE_HW_H_ */
diff --git a/drivers/net/ngbe/base/ngbe_type.h b/drivers/net/ngbe/base/ngbe_type.h
index 9312a1ee44..a06e6d240e 100644
--- a/drivers/net/ngbe/base/ngbe_type.h
+++ b/drivers/net/ngbe/base/ngbe_type.h
@@ -97,6 +97,9 @@ struct ngbe_mac_info {
s32 (*start_hw)(struct ngbe_hw *hw);
s32 (*stop_hw)(struct ngbe_hw *hw);
s32 (*get_mac_addr)(struct ngbe_hw *hw, u8 *mac_addr);
+ s32 (*enable_rx_dma)(struct ngbe_hw *hw, u32 regval);
+ s32 (*disable_sec_rx_path)(struct ngbe_hw *hw);
+ s32 (*enable_sec_rx_path)(struct ngbe_hw *hw);
s32 (*acquire_swfw_sync)(struct ngbe_hw *hw, u32 mask);
void (*release_swfw_sync)(struct ngbe_hw *hw, u32 mask);
@@ -190,6 +193,7 @@ struct ngbe_hw {
u16 nb_rx_queues;
u16 nb_tx_queues;
+ u32 q_rx_regs[8 * 4];
u32 q_tx_regs[8 * 4];
bool is_pf;
};
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index bd32f20c38..ef8ea5ecff 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -572,6 +572,8 @@ ngbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->max_rx_queues = (uint16_t)hw->mac.max_rx_queues;
dev_info->max_tx_queues = (uint16_t)hw->mac.max_tx_queues;
+ dev_info->min_rx_bufsize = 1024;
+ dev_info->max_rx_pktlen = 15872;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
.rx_thresh = {
@@ -601,6 +603,7 @@ ngbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
ETH_LINK_SPEED_10M;
/* Driver-preferred Rx/Tx parameters */
+ dev_info->default_rxportconf.burst_size = 32;
dev_info->default_txportconf.burst_size = 32;
dev_info->default_rxportconf.nb_queues = 1;
dev_info->default_txportconf.nb_queues = 1;
@@ -1099,6 +1102,8 @@ static const struct eth_dev_ops ngbe_eth_dev_ops = {
.dev_start = ngbe_dev_start,
.dev_stop = ngbe_dev_stop,
.link_update = ngbe_dev_link_update,
+ .rx_queue_start = ngbe_dev_rx_queue_start,
+ .rx_queue_stop = ngbe_dev_rx_queue_stop,
.tx_queue_start = ngbe_dev_tx_queue_start,
.tx_queue_stop = ngbe_dev_tx_queue_stop,
.rx_queue_setup = ngbe_dev_rx_queue_setup,
diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
index 40a28711f1..d576eef5d6 100644
--- a/drivers/net/ngbe/ngbe_ethdev.h
+++ b/drivers/net/ngbe/ngbe_ethdev.h
@@ -86,9 +86,15 @@ void ngbe_dev_tx_init(struct rte_eth_dev *dev);
int ngbe_dev_rxtx_start(struct rte_eth_dev *dev);
+void ngbe_dev_save_rx_queue(struct ngbe_hw *hw, uint16_t rx_queue_id);
+void ngbe_dev_store_rx_queue(struct ngbe_hw *hw, uint16_t rx_queue_id);
void ngbe_dev_save_tx_queue(struct ngbe_hw *hw, uint16_t tx_queue_id);
void ngbe_dev_store_tx_queue(struct ngbe_hw *hw, uint16_t tx_queue_id);
+int ngbe_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+
+int ngbe_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+
int ngbe_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
int ngbe_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c
index b05cb0ec34..0801424245 100644
--- a/drivers/net/ngbe/ngbe_rxtx.c
+++ b/drivers/net/ngbe/ngbe_rxtx.c
@@ -935,15 +935,111 @@ ngbe_dev_clear_queues(struct rte_eth_dev *dev)
}
}
+static int __rte_cold
+ngbe_alloc_rx_queue_mbufs(struct ngbe_rx_queue *rxq)
+{
+ struct ngbe_rx_entry *rxe = rxq->sw_ring;
+ uint64_t dma_addr;
+ unsigned int i;
+
+ /* Initialize software ring entries */
+ for (i = 0; i < rxq->nb_rx_desc; i++) {
+ /* the ring can also be modified by hardware */
+ volatile struct ngbe_rx_desc *rxd;
+ struct rte_mbuf *mbuf = rte_mbuf_raw_alloc(rxq->mb_pool);
+
+ if (mbuf == NULL) {
+ PMD_INIT_LOG(ERR, "Rx mbuf alloc failed queue_id=%u port_id=%u",
+ (unsigned int)rxq->queue_id,
+ (unsigned int)rxq->port_id);
+ return -ENOMEM;
+ }
+
+ mbuf->data_off = RTE_PKTMBUF_HEADROOM;
+ mbuf->port = rxq->port_id;
+
+ dma_addr =
+ rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf));
+ rxd = &rxq->rx_ring[i];
+ NGBE_RXD_HDRADDR(rxd, 0);
+ NGBE_RXD_PKTADDR(rxd, dma_addr);
+ rxe[i].mbuf = mbuf;
+ }
+
+ return 0;
+}
+
/*
* Initializes Receive Unit.
*/
int __rte_cold
ngbe_dev_rx_init(struct rte_eth_dev *dev)
{
- RTE_SET_USED(dev);
+ struct ngbe_hw *hw;
+ struct ngbe_rx_queue *rxq;
+ uint64_t bus_addr;
+ uint32_t fctrl;
+ uint32_t hlreg0;
+ uint32_t srrctl;
+ uint16_t buf_size;
+ uint16_t i;
- return -EINVAL;
+ PMD_INIT_FUNC_TRACE();
+ hw = ngbe_dev_hw(dev);
+
+ /*
+ * Make sure receives are disabled while setting
+ * up the Rx context (registers, descriptor rings, etc.).
+ */
+ wr32m(hw, NGBE_MACRXCFG, NGBE_MACRXCFG_ENA, 0);
+ wr32m(hw, NGBE_PBRXCTL, NGBE_PBRXCTL_ENA, 0);
+
+ /* Enable receipt of broadcast frames */
+ fctrl = rd32(hw, NGBE_PSRCTL);
+ fctrl |= NGBE_PSRCTL_BCA;
+ wr32(hw, NGBE_PSRCTL, fctrl);
+
+ hlreg0 = rd32(hw, NGBE_SECRXCTL);
+ hlreg0 &= ~NGBE_SECRXCTL_XDSA;
+ wr32(hw, NGBE_SECRXCTL, hlreg0);
+
+ wr32m(hw, NGBE_FRMSZ, NGBE_FRMSZ_MAX_MASK,
+ NGBE_FRMSZ_MAX(NGBE_FRAME_SIZE_DFT));
+
+ /* Setup Rx queues */
+ for (i = 0; i < dev->data->nb_rx_queues; i++) {
+ rxq = dev->data->rx_queues[i];
+
+ /* Setup the Base and Length of the Rx Descriptor Rings */
+ bus_addr = rxq->rx_ring_phys_addr;
+ wr32(hw, NGBE_RXBAL(rxq->reg_idx),
+ (uint32_t)(bus_addr & BIT_MASK32));
+ wr32(hw, NGBE_RXBAH(rxq->reg_idx),
+ (uint32_t)(bus_addr >> 32));
+ wr32(hw, NGBE_RXRP(rxq->reg_idx), 0);
+ wr32(hw, NGBE_RXWP(rxq->reg_idx), 0);
+
+ srrctl = NGBE_RXCFG_RNGLEN(rxq->nb_rx_desc);
+
+ /* Set if packets are dropped when no descriptors available */
+ if (rxq->drop_en)
+ srrctl |= NGBE_RXCFG_DROP;
+
+ /*
+ * Configure the Rx buffer size in the PKTLEN field of
+ * the RXCFG register of the queue.
+ * The value is in 1 KB resolution. Valid values can be from
+ * 1 KB to 16 KB.
+ */
+ buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mb_pool) -
+ RTE_PKTMBUF_HEADROOM);
+ buf_size = ROUND_DOWN(buf_size, 0x1 << 10);
+ srrctl |= NGBE_RXCFG_PKTLEN(buf_size);
+
+ wr32(hw, NGBE_RXCFG(rxq->reg_idx), srrctl);
+ }
+
+ return 0;
}
/*
@@ -988,7 +1084,9 @@ ngbe_dev_rxtx_start(struct rte_eth_dev *dev)
{
struct ngbe_hw *hw;
struct ngbe_tx_queue *txq;
+ struct ngbe_rx_queue *rxq;
uint32_t dmatxctl;
+ uint32_t rxctrl;
uint16_t i;
int ret = 0;
@@ -1018,7 +1116,39 @@ ngbe_dev_rxtx_start(struct rte_eth_dev *dev)
}
}
- return -EINVAL;
+ for (i = 0; i < dev->data->nb_rx_queues; i++) {
+ rxq = dev->data->rx_queues[i];
+ if (rxq->rx_deferred_start == 0) {
+ ret = ngbe_dev_rx_queue_start(dev, i);
+ if (ret < 0)
+ return ret;
+ }
+ }
+
+ /* Enable Receive engine */
+ rxctrl = rd32(hw, NGBE_PBRXCTL);
+ rxctrl |= NGBE_PBRXCTL_ENA;
+ hw->mac.enable_rx_dma(hw, rxctrl);
+
+ return 0;
+}
+
+void
+ngbe_dev_save_rx_queue(struct ngbe_hw *hw, uint16_t rx_queue_id)
+{
+ u32 *reg = &hw->q_rx_regs[rx_queue_id * 8];
+ *(reg++) = rd32(hw, NGBE_RXBAL(rx_queue_id));
+ *(reg++) = rd32(hw, NGBE_RXBAH(rx_queue_id));
+ *(reg++) = rd32(hw, NGBE_RXCFG(rx_queue_id));
+}
+
+void
+ngbe_dev_store_rx_queue(struct ngbe_hw *hw, uint16_t rx_queue_id)
+{
+ u32 *reg = &hw->q_rx_regs[rx_queue_id * 8];
+ wr32(hw, NGBE_RXBAL(rx_queue_id), *(reg++));
+ wr32(hw, NGBE_RXBAH(rx_queue_id), *(reg++));
+ wr32(hw, NGBE_RXCFG(rx_queue_id), *(reg++) & ~NGBE_RXCFG_ENA);
}
void
@@ -1039,6 +1169,85 @@ ngbe_dev_store_tx_queue(struct ngbe_hw *hw, uint16_t tx_queue_id)
wr32(hw, NGBE_TXCFG(tx_queue_id), *(reg++) & ~NGBE_TXCFG_ENA);
}
+/*
+ * Start Receive Units for specified queue.
+ */
+int __rte_cold
+ngbe_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ struct ngbe_rx_queue *rxq;
+ uint32_t rxdctl;
+ int poll_ms;
+
+ PMD_INIT_FUNC_TRACE();
+
+ rxq = dev->data->rx_queues[rx_queue_id];
+
+ /* Allocate buffers for descriptor rings */
+ if (ngbe_alloc_rx_queue_mbufs(rxq) != 0) {
+ PMD_INIT_LOG(ERR, "Could not alloc mbuf for queue:%d",
+ rx_queue_id);
+ return -1;
+ }
+ rxdctl = rd32(hw, NGBE_RXCFG(rxq->reg_idx));
+ rxdctl |= NGBE_RXCFG_ENA;
+ wr32(hw, NGBE_RXCFG(rxq->reg_idx), rxdctl);
+
+ /* Wait until Rx Enable ready */
+ poll_ms = RTE_NGBE_REGISTER_POLL_WAIT_10_MS;
+ do {
+ rte_delay_ms(1);
+ rxdctl = rd32(hw, NGBE_RXCFG(rxq->reg_idx));
+ } while (--poll_ms && !(rxdctl & NGBE_RXCFG_ENA));
+ if (poll_ms == 0)
+ PMD_INIT_LOG(ERR, "Could not enable Rx Queue %d", rx_queue_id);
+ rte_wmb();
+ wr32(hw, NGBE_RXRP(rxq->reg_idx), 0);
+ wr32(hw, NGBE_RXWP(rxq->reg_idx), rxq->nb_rx_desc - 1);
+ dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
+
+ return 0;
+}
+
+/*
+ * Stop Receive Units for specified queue.
+ */
+int __rte_cold
+ngbe_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ struct ngbe_adapter *adapter = ngbe_dev_adapter(dev);
+ struct ngbe_rx_queue *rxq;
+ uint32_t rxdctl;
+ int poll_ms;
+
+ PMD_INIT_FUNC_TRACE();
+
+ rxq = dev->data->rx_queues[rx_queue_id];
+
+ ngbe_dev_save_rx_queue(hw, rxq->reg_idx);
+ wr32m(hw, NGBE_RXCFG(rxq->reg_idx), NGBE_RXCFG_ENA, 0);
+
+ /* Wait until Rx Enable bit clear */
+ poll_ms = RTE_NGBE_REGISTER_POLL_WAIT_10_MS;
+ do {
+ rte_delay_ms(1);
+ rxdctl = rd32(hw, NGBE_RXCFG(rxq->reg_idx));
+ } while (--poll_ms && (rxdctl & NGBE_RXCFG_ENA));
+ if (poll_ms == 0)
+ PMD_INIT_LOG(ERR, "Could not disable Rx Queue %d", rx_queue_id);
+
+ rte_delay_us(RTE_NGBE_WAIT_100_US);
+ ngbe_dev_store_rx_queue(hw, rxq->reg_idx);
+
+ ngbe_rx_queue_release_mbufs(rxq);
+ ngbe_reset_rx_queue(adapter, rxq);
+ dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+ return 0;
+}
+
/*
* Start Transmit Units for specified queue.
*/
--
2.21.0.windows.1
* [dpdk-dev] [PATCH v6 19/19] net/ngbe: support to close and reset device
2021-06-17 10:59 [dpdk-dev] [PATCH v6 00/19] net: ngbe PMD Jiawen Wu
` (17 preceding siblings ...)
2021-06-17 11:00 ` [dpdk-dev] [PATCH v6 18/19] net/ngbe: add Rx " Jiawen Wu
@ 2021-06-17 11:00 ` Jiawen Wu
18 siblings, 0 replies; 42+ messages in thread
From: Jiawen Wu @ 2021-06-17 11:00 UTC (permalink / raw)
To: dev; +Cc: Jiawen Wu
Support closing and resetting the device.
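For reference, a hedged usage sketch (not part of the patch; the port id is an
assumption) showing how the new dev_reset and dev_close callbacks are reached
from the application side:

#include <rte_ethdev.h>

/* Illustrative only: stop, reset and finally close an assumed port 0. */
static int
example_port_reset_close(void)
{
    uint16_t port_id = 0; /* assumed port id */
    int ret;

    rte_eth_dev_stop(port_id);
    ret = rte_eth_dev_reset(port_id); /* ngbe_dev_reset(): uninit then re-init */
    if (ret == 0) {
        /* After a reset the port must be reconfigured from scratch:
         * rte_eth_dev_configure(), queue setup, rte_eth_dev_start(), ...
         */
    }

    return rte_eth_dev_close(port_id); /* ngbe_dev_close(): release resources */
}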
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
drivers/net/ngbe/ngbe_ethdev.c | 61 ++++++++++++++++++++++++++++++++--
drivers/net/ngbe/ngbe_ethdev.h | 2 ++
drivers/net/ngbe/ngbe_rxtx.c | 20 +++++++++++
3 files changed, 81 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index ef8ea5ecff..695c1d26c8 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -558,11 +558,66 @@ ngbe_dev_stop(struct rte_eth_dev *dev)
static int
ngbe_dev_close(struct rte_eth_dev *dev)
{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+ struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ int retries = 0;
+ int ret;
+
PMD_INIT_FUNC_TRACE();
- RTE_SET_USED(dev);
+ ngbe_pf_reset_hw(hw);
+
+ ret = ngbe_dev_stop(dev);
+
+ ngbe_dev_free_queues(dev);
+
+ /* reprogram the RAR[0] in case user changed it. */
+ ngbe_set_rar(hw, 0, hw->mac.addr, 0, true);
+
+ /* Unlock any pending hardware semaphore */
+ ngbe_swfw_lock_reset(hw);
+
+ /* disable uio intr before callback unregister */
+ rte_intr_disable(intr_handle);
+
+ do {
+ ret = rte_intr_callback_unregister(intr_handle,
+ ngbe_dev_interrupt_handler, dev);
+ if (ret >= 0 || ret == -ENOENT) {
+ break;
+ } else if (ret != -EAGAIN) {
+ PMD_INIT_LOG(ERR,
+ "intr callback unregister failed: %d",
+ ret);
+ }
+ rte_delay_ms(100);
+ } while (retries++ < (10 + NGBE_LINK_UP_TIME));
+
+ rte_free(dev->data->mac_addrs);
+ dev->data->mac_addrs = NULL;
+
+ rte_free(dev->data->hash_mac_addrs);
+ dev->data->hash_mac_addrs = NULL;
+
+ return ret;
+}
+
+/*
+ * Reset PF device.
+ */
+static int
+ngbe_dev_reset(struct rte_eth_dev *dev)
+{
+ int ret;
+
+ ret = eth_ngbe_dev_uninit(dev);
+ if (ret)
+ return ret;
+
+ ret = eth_ngbe_dev_init(dev, NULL);
- return -EINVAL;
+ return ret;
}
static int
@@ -1101,6 +1156,8 @@ static const struct eth_dev_ops ngbe_eth_dev_ops = {
.dev_infos_get = ngbe_dev_info_get,
.dev_start = ngbe_dev_start,
.dev_stop = ngbe_dev_stop,
+ .dev_close = ngbe_dev_close,
+ .dev_reset = ngbe_dev_reset,
.link_update = ngbe_dev_link_update,
.rx_queue_start = ngbe_dev_rx_queue_start,
.rx_queue_stop = ngbe_dev_rx_queue_stop,
diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
index d576eef5d6..3893853645 100644
--- a/drivers/net/ngbe/ngbe_ethdev.h
+++ b/drivers/net/ngbe/ngbe_ethdev.h
@@ -67,6 +67,8 @@ ngbe_dev_intr(struct rte_eth_dev *dev)
*/
void ngbe_dev_clear_queues(struct rte_eth_dev *dev);
+void ngbe_dev_free_queues(struct rte_eth_dev *dev);
+
void ngbe_dev_rx_queue_release(void *rxq);
void ngbe_dev_tx_queue_release(void *txq);
diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c
index 0801424245..62b1523c77 100644
--- a/drivers/net/ngbe/ngbe_rxtx.c
+++ b/drivers/net/ngbe/ngbe_rxtx.c
@@ -935,6 +935,26 @@ ngbe_dev_clear_queues(struct rte_eth_dev *dev)
}
}
+void
+ngbe_dev_free_queues(struct rte_eth_dev *dev)
+{
+ unsigned int i;
+
+ PMD_INIT_FUNC_TRACE();
+
+ for (i = 0; i < dev->data->nb_rx_queues; i++) {
+ ngbe_dev_rx_queue_release(dev->data->rx_queues[i]);
+ dev->data->rx_queues[i] = NULL;
+ }
+ dev->data->nb_rx_queues = 0;
+
+ for (i = 0; i < dev->data->nb_tx_queues; i++) {
+ ngbe_dev_tx_queue_release(dev->data->tx_queues[i]);
+ dev->data->tx_queues[i] = NULL;
+ }
+ dev->data->nb_tx_queues = 0;
+}
+
static int __rte_cold
ngbe_alloc_rx_queue_mbufs(struct ngbe_rx_queue *rxq)
{
--
2.21.0.windows.1
* Re: [dpdk-dev] [PATCH v6 01/19] net/ngbe: add build and doc infrastructure
2021-06-17 10:59 ` [dpdk-dev] [PATCH v6 01/19] net/ngbe: add build and doc infrastructure Jiawen Wu
@ 2021-07-02 13:07 ` Andrew Rybchenko
2021-07-05 2:52 ` Jiawen Wu
0 siblings, 1 reply; 42+ messages in thread
From: Andrew Rybchenko @ 2021-07-02 13:07 UTC (permalink / raw)
To: Jiawen Wu, dev
On 6/17/21 1:59 PM, Jiawen Wu wrote:
> Adding bare minimum PMD library and doc build infrastructure
> and claim the maintainership for ngbe PMD.
>
> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
Just one nit below.
[snip]
> diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
> new file mode 100644
> index 0000000000..f8e19066de
> --- /dev/null
> +++ b/drivers/net/ngbe/ngbe_ethdev.c
> @@ -0,0 +1,29 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
> + * Copyright(c) 2010-2017 Intel Corporation
> + */
> +
> +#include <errno.h>
> +#include <rte_common.h>
> +#include <ethdev_pci.h>
> +
> +static int
> +eth_ngbe_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
> + struct rte_pci_device *pci_dev)
> +{
> + RTE_SET_USED(pci_dev);
> + return -EINVAL;
> +}
> +
> +static int eth_ngbe_pci_remove(struct rte_pci_device *pci_dev)
> +{
> + RTE_SET_USED(pci_dev);
> + return -EINVAL;
> +}
Why is a different style of unused-parameter suppression used
above: __rte_unused vs RTE_SET_USED?
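For illustration only, a minimal sketch of one consistent style (marking the parameter with __rte_unused in both stubs); this is not the submitted patch:

static int
eth_ngbe_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
                   struct rte_pci_device *pci_dev __rte_unused)
{
        return -EINVAL;
}

static int eth_ngbe_pci_remove(struct rte_pci_device *pci_dev __rte_unused)
{
        return -EINVAL;
}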
[snip]
* Re: [dpdk-dev] [PATCH v6 03/19] net/ngbe: add log type and error type
2021-06-17 10:59 ` [dpdk-dev] [PATCH v6 03/19] net/ngbe: add log type and error type Jiawen Wu
@ 2021-07-02 13:22 ` Andrew Rybchenko
0 siblings, 0 replies; 42+ messages in thread
From: Andrew Rybchenko @ 2021-07-02 13:22 UTC (permalink / raw)
To: Jiawen Wu, dev; +Cc: david.marchand
On 6/17/21 1:59 PM, Jiawen Wu wrote:
> Add log type and error type to trace functions.
>
> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
> ---
> doc/guides/nics/ngbe.rst | 21 +++++++++
> drivers/net/ngbe/base/ngbe_status.h | 73 +++++++++++++++++++++++++++++
> drivers/net/ngbe/ngbe_ethdev.c | 14 ++++++
> drivers/net/ngbe/ngbe_logs.h | 46 ++++++++++++++++++
> 4 files changed, 154 insertions(+)
> create mode 100644 drivers/net/ngbe/base/ngbe_status.h
> create mode 100644 drivers/net/ngbe/ngbe_logs.h
>
> diff --git a/doc/guides/nics/ngbe.rst b/doc/guides/nics/ngbe.rst
> index 37502627a3..54d0665db9 100644
> --- a/doc/guides/nics/ngbe.rst
> +++ b/doc/guides/nics/ngbe.rst
> @@ -17,6 +17,27 @@ Prerequisites
> - Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to setup the basic DPDK environment.
>
>
> +Pre-Installation Configuration
> +------------------------------
> +
> +Dynamic Logging Parameters
> +~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +One may leverage EAL option "--log-level" to change default levels
> +for the log types supported by the driver. The option is used with
> +an argument typically consisting of two parts separated by a colon.
> +
> +NGBE PMD provides the following log types available for control:
> +
> +- ``pmd.net.ngbe.driver`` (default level is **notice**)
> +
> + Affects driver-wide messages unrelated to any particular devices.
> +
> +- ``pmd.net.ngbe.init`` (default level is **notice**)
> +
> + Extra logging of the messages during PMD initialization.
> +
> +
> Driver compilation and testing
> ------------------------------
>
> diff --git a/drivers/net/ngbe/base/ngbe_status.h b/drivers/net/ngbe/base/ngbe_status.h
> new file mode 100644
> index 0000000000..917b7da8aa
> --- /dev/null
> +++ b/drivers/net/ngbe/base/ngbe_status.h
> @@ -0,0 +1,73 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
2021?
> + * Copyright(c) 2010-2017 Intel Corporation
> + */
> +
> +#ifndef _NGBE_STATUS_H_
> +#define _NGBE_STATUS_H_
> +
> +/* Error Codes:
> + * common error
> + * module error(simple)
> + * module error(detailed)
> + *
> + * (-256, 256): reserved for non-ngbe defined error code
> + */
> +#define TERR_BASE (0x100)
> +
> +/* WARNING: just for legacy compatibility */
> +#define NGBE_NOT_IMPLEMENTED 0x7FFFFFFF
> +#define NGBE_ERR_OPS_DUMMY 0x3FFFFFFF
> +
> +/* Error Codes */
> +#define NGBE_ERR_EEPROM -(TERR_BASE + 1)
> +#define NGBE_ERR_EEPROM_CHECKSUM -(TERR_BASE + 2)
> +#define NGBE_ERR_PHY -(TERR_BASE + 3)
> +#define NGBE_ERR_CONFIG -(TERR_BASE + 4)
> +#define NGBE_ERR_PARAM -(TERR_BASE + 5)
> +#define NGBE_ERR_MAC_TYPE -(TERR_BASE + 6)
> +#define NGBE_ERR_UNKNOWN_PHY -(TERR_BASE + 7)
> +#define NGBE_ERR_LINK_SETUP -(TERR_BASE + 8)
> +#define NGBE_ERR_ADAPTER_STOPPED -(TERR_BASE + 9)
> +#define NGBE_ERR_INVALID_MAC_ADDR -(TERR_BASE + 10)
> +#define NGBE_ERR_DEVICE_NOT_SUPPORTED -(TERR_BASE + 11)
> +#define NGBE_ERR_MASTER_REQUESTS_PENDING -(TERR_BASE + 12)
> +#define NGBE_ERR_INVALID_LINK_SETTINGS -(TERR_BASE + 13)
> +#define NGBE_ERR_AUTONEG_NOT_COMPLETE -(TERR_BASE + 14)
> +#define NGBE_ERR_RESET_FAILED -(TERR_BASE + 15)
> +#define NGBE_ERR_SWFW_SYNC -(TERR_BASE + 16)
> +#define NGBE_ERR_PHY_ADDR_INVALID -(TERR_BASE + 17)
> +#define NGBE_ERR_I2C -(TERR_BASE + 18)
> +#define NGBE_ERR_SFP_NOT_SUPPORTED -(TERR_BASE + 19)
> +#define NGBE_ERR_SFP_NOT_PRESENT -(TERR_BASE + 20)
> +#define NGBE_ERR_SFP_NO_INIT_SEQ_PRESENT -(TERR_BASE + 21)
> +#define NGBE_ERR_NO_SAN_ADDR_PTR -(TERR_BASE + 22)
> +#define NGBE_ERR_FDIR_REINIT_FAILED -(TERR_BASE + 23)
> +#define NGBE_ERR_EEPROM_VERSION -(TERR_BASE + 24)
> +#define NGBE_ERR_NO_SPACE -(TERR_BASE + 25)
> +#define NGBE_ERR_OVERTEMP -(TERR_BASE + 26)
> +#define NGBE_ERR_FC_NOT_NEGOTIATED -(TERR_BASE + 27)
> +#define NGBE_ERR_FC_NOT_SUPPORTED -(TERR_BASE + 28)
> +#define NGBE_ERR_SFP_SETUP_NOT_COMPLETE -(TERR_BASE + 30)
> +#define NGBE_ERR_PBA_SECTION -(TERR_BASE + 31)
> +#define NGBE_ERR_INVALID_ARGUMENT -(TERR_BASE + 32)
> +#define NGBE_ERR_HOST_INTERFACE_COMMAND -(TERR_BASE + 33)
> +#define NGBE_ERR_OUT_OF_MEM -(TERR_BASE + 34)
> +#define NGBE_ERR_FEATURE_NOT_SUPPORTED -(TERR_BASE + 36)
> +#define NGBE_ERR_EEPROM_PROTECTED_REGION -(TERR_BASE + 37)
> +#define NGBE_ERR_FDIR_CMD_INCOMPLETE -(TERR_BASE + 38)
> +#define NGBE_ERR_FW_RESP_INVALID -(TERR_BASE + 39)
> +#define NGBE_ERR_TOKEN_RETRY -(TERR_BASE + 40)
> +#define NGBE_ERR_FLASH_LOADING_FAILED -(TERR_BASE + 41)
> +
> +#define NGBE_ERR_NOSUPP -(TERR_BASE + 42)
> +#define NGBE_ERR_UNDERTEMP -(TERR_BASE + 43)
> +#define NGBE_ERR_XPCS_POWER_UP_FAILED -(TERR_BASE + 44)
> +#define NGBE_ERR_PHY_INIT_NOT_DONE -(TERR_BASE + 45)
> +#define NGBE_ERR_TIMEOUT -(TERR_BASE + 46)
> +#define NGBE_ERR_REGISTER -(TERR_BASE + 47)
> +#define NGBE_ERR_MNG_ACCESS_FAILED -(TERR_BASE + 49)
> +#define NGBE_ERR_PHY_TYPE -(TERR_BASE + 50)
> +#define NGBE_ERR_PHY_TIMEOUT -(TERR_BASE + 51)
> +
> +#endif /* _NGBE_STATUS_H_ */
> diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
> index d8df7ef896..e05766752a 100644
> --- a/drivers/net/ngbe/ngbe_ethdev.c
> +++ b/drivers/net/ngbe/ngbe_ethdev.c
> @@ -7,6 +7,7 @@
> #include <rte_common.h>
> #include <ethdev_pci.h>
>
> +#include "ngbe_logs.h"
> #include <base/ngbe_devids.h>
> #include "ngbe_ethdev.h"
>
> @@ -34,6 +35,8 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
> {
> struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
>
> + PMD_INIT_FUNC_TRACE();
> +
> if (rte_eal_process_type() != RTE_PROC_PRIMARY)
> return 0;
>
> @@ -45,6 +48,8 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
> static int
> eth_ngbe_dev_uninit(struct rte_eth_dev *eth_dev)
> {
> + PMD_INIT_FUNC_TRACE();
> +
> if (rte_eal_process_type() != RTE_PROC_PRIMARY)
> return 0;
>
> @@ -85,3 +90,12 @@ RTE_PMD_REGISTER_PCI(net_ngbe, rte_ngbe_pmd);
> RTE_PMD_REGISTER_PCI_TABLE(net_ngbe, pci_id_ngbe_map);
> RTE_PMD_REGISTER_KMOD_DEP(net_ngbe, "* igb_uio | uio_pci_generic | vfio-pci");
>
> +RTE_LOG_REGISTER(ngbe_logtype_init, pmd.net.ngbe.init, NOTICE);
> +RTE_LOG_REGISTER(ngbe_logtype_driver, pmd.net.ngbe.driver, NOTICE);
> +
> +#ifdef RTE_ETHDEV_DEBUG_RX
> + RTE_LOG_REGISTER(ngbe_logtype_rx, pmd.net.ngbe.rx, DEBUG);
> +#endif
> +#ifdef RTE_ETHDEV_DEBUG_TX
> + RTE_LOG_REGISTER(ngbe_logtype_tx, pmd.net.ngbe.tx, DEBUG);
> +#endif
Sorry for misguiding you. Please, follow David's
review notes to use RTE_LOG_REGISTER_SUFFIX.
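As a sketch of what the suggested registration could look like (assuming the driver's default logtype prefix, e.g. pmd.net.ngbe, is provided by the meson build infrastructure):

RTE_LOG_REGISTER_SUFFIX(ngbe_logtype_init, init, NOTICE);
RTE_LOG_REGISTER_SUFFIX(ngbe_logtype_driver, driver, NOTICE);
#ifdef RTE_ETHDEV_DEBUG_RX
RTE_LOG_REGISTER_SUFFIX(ngbe_logtype_rx, rx, DEBUG);
#endif
#ifdef RTE_ETHDEV_DEBUG_TX
RTE_LOG_REGISTER_SUFFIX(ngbe_logtype_tx, tx, DEBUG);
#endif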
> diff --git a/drivers/net/ngbe/ngbe_logs.h b/drivers/net/ngbe/ngbe_logs.h
> new file mode 100644
> index 0000000000..c5d1ab0930
> --- /dev/null
> +++ b/drivers/net/ngbe/ngbe_logs.h
> @@ -0,0 +1,46 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
2021?
> + * Copyright(c) 2010-2017 Intel Corporation
> + */
> +
> +#ifndef _NGBE_LOGS_H_
> +#define _NGBE_LOGS_H_
> +
> +/*
> + * PMD_USER_LOG: for user
> + */
> +extern int ngbe_logtype_init;
> +#define PMD_INIT_LOG(level, fmt, args...) \
> + rte_log(RTE_LOG_ ## level, ngbe_logtype_init, \
> + "%s(): " fmt "\n", __func__, ##args)
> +
> +extern int ngbe_logtype_driver;
> +#define PMD_DRV_LOG(level, fmt, args...) \
> + rte_log(RTE_LOG_ ## level, ngbe_logtype_driver, \
> + "%s(): " fmt "\n", __func__, ##args)
> +
> +#ifdef RTE_ETHDEV_DEBUG_RX
> +extern int ngbe_logtype_rx;
> +#define PMD_RX_LOG(level, fmt, args...) \
> + rte_log(RTE_LOG_ ## level, ngbe_logtype_rx, \
> + "%s(): " fmt "\n", __func__, ##args)
> +#else
> +#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
> +#endif
> +
> +#ifdef RTE_ETHDEV_DEBUG_TX
> +extern int ngbe_logtype_tx;
> +#define PMD_TX_LOG(level, fmt, args...) \
> + rte_log(RTE_LOG_ ## level, ngbe_logtype_tx, \
> + "%s(): " fmt "\n", __func__, ##args)
> +#else
> +#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
> +#endif
> +
> +#define TLOG_DEBUG(fmt, args...) PMD_DRV_LOG(DEBUG, fmt, ##args)
> +
> +#define DEBUGOUT(fmt, args...) TLOG_DEBUG(fmt, ##args)
> +#define PMD_INIT_FUNC_TRACE() TLOG_DEBUG(" >>")
> +#define DEBUGFUNC(fmt) TLOG_DEBUG(fmt)
> +
> +#endif /* _NGBE_LOGS_H_ */
>
* Re: [dpdk-dev] [PATCH v6 02/19] net/ngbe: support probe and remove
2021-06-17 10:59 ` [dpdk-dev] [PATCH v6 02/19] net/ngbe: support probe and remove Jiawen Wu
@ 2021-07-02 13:22 ` Andrew Rybchenko
0 siblings, 0 replies; 42+ messages in thread
From: Andrew Rybchenko @ 2021-07-02 13:22 UTC (permalink / raw)
To: Jiawen Wu, dev
On 6/17/21 1:59 PM, Jiawen Wu wrote:
> Add device IDs for Wangxun 1Gb NICs, map device IDs to register ngbe
> PMD. Add basic PCIe ethdev probe and remove.
>
> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
> ---
> doc/guides/nics/features/ngbe.ini | 1 +
> drivers/net/ngbe/base/meson.build | 18 +++++++
> drivers/net/ngbe/base/ngbe_devids.h | 83 +++++++++++++++++++++++++++++
> drivers/net/ngbe/meson.build | 6 +++
> drivers/net/ngbe/ngbe_ethdev.c | 66 +++++++++++++++++++++--
> drivers/net/ngbe/ngbe_ethdev.h | 16 ++++++
> 6 files changed, 186 insertions(+), 4 deletions(-)
> create mode 100644 drivers/net/ngbe/base/meson.build
> create mode 100644 drivers/net/ngbe/base/ngbe_devids.h
> create mode 100644 drivers/net/ngbe/ngbe_ethdev.h
>
> diff --git a/doc/guides/nics/features/ngbe.ini b/doc/guides/nics/features/ngbe.ini
> index a7a524defc..977286ac04 100644
> --- a/doc/guides/nics/features/ngbe.ini
> +++ b/doc/guides/nics/features/ngbe.ini
> @@ -4,6 +4,7 @@
> ; Refer to default.ini for the full list of available PMD features.
> ;
> [Features]
> +Multiprocess aware = Y
> Linux = Y
> ARMv8 = Y
> x86-32 = Y
> diff --git a/drivers/net/ngbe/base/meson.build b/drivers/net/ngbe/base/meson.build
> new file mode 100644
> index 0000000000..c5f6467743
> --- /dev/null
> +++ b/drivers/net/ngbe/base/meson.build
> @@ -0,0 +1,18 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
> +
> +sources = []
> +
> +error_cflags = []
> +
> +c_args = cflags
> +foreach flag: error_cflags
> + if cc.has_argument(flag)
> + c_args += flag
> + endif
> +endforeach
It is dead code since error_cflags is empty.
It should be added only when the first flag is added to
error_cflags.
> +
> +base_lib = static_library('ngbe_base', sources,
> + dependencies: [static_rte_eal, static_rte_ethdev, static_rte_bus_pci],
> + c_args: c_args)
> +base_objs = base_lib.extract_all_objects()
[snip]
> diff --git a/drivers/net/ngbe/base/ngbe_devids.h b/drivers/net/ngbe/base/ngbe_devids.h
> diff --git a/drivers/net/ngbe/meson.build b/drivers/net/ngbe/meson.build
> index de2d7be716..81173fa7f0 100644
> --- a/drivers/net/ngbe/meson.build
> +++ b/drivers/net/ngbe/meson.build
> @@ -7,6 +7,12 @@ if is_windows
> subdir_done()
> endif
>
> +subdir('base')
> +objs = [base_objs]
> +
> sources = files(
> 'ngbe_ethdev.c',
> )
> +
> +includes += include_directories('base')
> +
Trailing empty line should be avoided
> diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
> index f8e19066de..d8df7ef896 100644
> --- a/drivers/net/ngbe/ngbe_ethdev.c
> +++ b/drivers/net/ngbe/ngbe_ethdev.c
> @@ -7,23 +7,81 @@
> #include <rte_common.h>
> #include <ethdev_pci.h>
>
> +#include <base/ngbe_devids.h>
Shouldn't it be #include "ngbe_devids.h" since base is in
includes? It definitely should be in double-quotes since
it is not a system header.
> +#include "ngbe_ethdev.h"
> +
> +/*
> + * The set of PCI devices this driver supports
> + */
> +static const struct rte_pci_id pci_id_ngbe_map[] = {
> + { RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860A2) },
> + { RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860A2S) },
> + { RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860A4) },
> + { RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860A4S) },
> + { RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860AL2) },
> + { RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860AL2S) },
> + { RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860AL4) },
> + { RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860AL4S) },
> + { RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860NCSI) },
> + { RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860A1) },
> + { RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860A1L) },
> + { RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, NGBE_DEV_ID_EM_WX1860AL_W) },
> + { .vendor_id = 0, /* sentinel */ },
> +};
> +
> +static int
> +eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
> +{
> + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
> +
> + if (rte_eal_process_type() != RTE_PROC_PRIMARY)
> + return 0;
> +
> + rte_eth_copy_pci_info(eth_dev, pci_dev);
> +
> + return -EINVAL;
> +}
> +
> +static int
> +eth_ngbe_dev_uninit(struct rte_eth_dev *eth_dev)
> +{
> + if (rte_eal_process_type() != RTE_PROC_PRIMARY)
> + return 0;
> +
> + RTE_SET_USED(eth_dev);
> +
> + return -EINVAL;
> +}
> +
> static int
> eth_ngbe_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
> struct rte_pci_device *pci_dev)
> {
> - RTE_SET_USED(pci_dev);
> - return -EINVAL;
> + return rte_eth_dev_create(&pci_dev->device, pci_dev->device.name,
> + sizeof(struct ngbe_adapter),
> + eth_dev_pci_specific_init, pci_dev,
> + eth_ngbe_dev_init, NULL);
> }
>
> static int eth_ngbe_pci_remove(struct rte_pci_device *pci_dev)
> {
> - RTE_SET_USED(pci_dev);
> - return -EINVAL;
> + struct rte_eth_dev *ethdev;
> +
> + ethdev = rte_eth_dev_allocated(pci_dev->device.name);
> + if (ethdev == NULL)
> + return 0;
> +
> + return rte_eth_dev_destroy(ethdev, eth_ngbe_dev_uninit);
> }
>
> static struct rte_pci_driver rte_ngbe_pmd = {
> + .id_table = pci_id_ngbe_map,
> + .drv_flags = RTE_PCI_DRV_NEED_MAPPING,
> .probe = eth_ngbe_pci_probe,
> .remove = eth_ngbe_pci_remove,
> };
>
> RTE_PMD_REGISTER_PCI(net_ngbe, rte_ngbe_pmd);
> +RTE_PMD_REGISTER_PCI_TABLE(net_ngbe, pci_id_ngbe_map);
> +RTE_PMD_REGISTER_KMOD_DEP(net_ngbe, "* igb_uio | uio_pci_generic | vfio-pci");
> +
> diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
> new file mode 100644
> index 0000000000..38f55f7fb1
> --- /dev/null
> +++ b/drivers/net/ngbe/ngbe_ethdev.h
> @@ -0,0 +1,16 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
2021?
> + * Copyright(c) 2010-2017 Intel Corporation
> + */
> +
> +#ifndef _NGBE_ETHDEV_H_
> +#define _NGBE_ETHDEV_H_
> +
> +/*
> + * Structure to store private data for each driver instance (for each port).
> + */
> +struct ngbe_adapter {
> + void *back;
There should be a comment above it explaining what it is and why it is added.
> +};
> +
> +#endif /* _NGBE_ETHDEV_H_ */
>
* Re: [dpdk-dev] [PATCH v6 05/19] net/ngbe: set MAC type and LAN ID with device initialization
2021-06-17 10:59 ` [dpdk-dev] [PATCH v6 05/19] net/ngbe: set MAC type and LAN ID with device initialization Jiawen Wu
@ 2021-07-02 16:05 ` Andrew Rybchenko
0 siblings, 0 replies; 42+ messages in thread
From: Andrew Rybchenko @ 2021-07-02 16:05 UTC (permalink / raw)
To: Jiawen Wu, dev
On 6/17/21 1:59 PM, Jiawen Wu wrote:
> Add basic init and uninit function.
> Map device IDs and subsystem IDs to single ID for easy operation.
> Then initialize the shared code.
>
> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
[snip]
> diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
> index e05766752a..a355c7dc29 100644
> --- a/drivers/net/ngbe/ngbe_ethdev.c
> +++ b/drivers/net/ngbe/ngbe_ethdev.c
> @@ -53,9 +71,9 @@ eth_ngbe_dev_uninit(struct rte_eth_dev *eth_dev)
> if (rte_eal_process_type() != RTE_PROC_PRIMARY)
> return 0;
>
> - RTE_SET_USED(eth_dev);
> + ngbe_dev_close(eth_dev);
Why is the return value ignored?
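A possible shape, as a sketch only, propagating the status from close:

static int
eth_ngbe_dev_uninit(struct rte_eth_dev *eth_dev)
{
        PMD_INIT_FUNC_TRACE();

        if (rte_eal_process_type() != RTE_PROC_PRIMARY)
                return 0;

        return ngbe_dev_close(eth_dev);
}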
>
> - return -EINVAL;
> + return 0;
> }
>
> static int
> @@ -86,6 +104,19 @@ static struct rte_pci_driver rte_ngbe_pmd = {
> .remove = eth_ngbe_pci_remove,
> };
>
> +/*
> + * Reset and stop device.
> + */
> +static int
> +ngbe_dev_close(struct rte_eth_dev *dev)
> +{
> + PMD_INIT_FUNC_TRACE();
> +
> + RTE_SET_USED(dev);
> +
> + return -EINVAL;
Is it really a problem to implement close here for
symmetry? Such asymmetry will result in failures if
I try to run, for example, testpmd on this patch.
> +}
> +
> RTE_PMD_REGISTER_PCI(net_ngbe, rte_ngbe_pmd);
> RTE_PMD_REGISTER_PCI_TABLE(net_ngbe, pci_id_ngbe_map);
> RTE_PMD_REGISTER_KMOD_DEP(net_ngbe, "* igb_uio | uio_pci_generic | vfio-pci");
[snip]
* Re: [dpdk-dev] [PATCH v6 07/19] net/ngbe: add HW initialization
2021-06-17 10:59 ` [dpdk-dev] [PATCH v6 07/19] net/ngbe: add HW initialization Jiawen Wu
@ 2021-07-02 16:08 ` Andrew Rybchenko
0 siblings, 0 replies; 42+ messages in thread
From: Andrew Rybchenko @ 2021-07-02 16:08 UTC (permalink / raw)
To: Jiawen Wu, dev
On 6/17/21 1:59 PM, Jiawen Wu wrote:
> Initialize the hardware by resetting the hardware in base code.
>
> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
[snip]
> diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
> index c779c46dd5..31d4dda976 100644
> --- a/drivers/net/ngbe/ngbe_ethdev.c
> +++ b/drivers/net/ngbe/ngbe_ethdev.c
> @@ -60,6 +60,7 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
> {
> struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
> struct ngbe_hw *hw = ngbe_dev_hw(eth_dev);
> + const struct rte_memzone *mz;
> int err;
>
> PMD_INIT_FUNC_TRACE();
> @@ -76,6 +77,15 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
> ngbe_map_device_id(hw);
> hw->hw_addr = (void *)pci_dev->mem_resource[0].addr;
>
> + /* Reserve memory for interrupt status block */
> + mz = rte_eth_dma_zone_reserve(eth_dev, "ngbe_driver", -1,
> + NGBE_ISB_SIZE, NGBE_ALIGN, SOCKET_ID_ANY);
> + if (mz == NULL)
> + return -ENOMEM;
> +
> + hw->isb_dma = TMZ_PADDR(mz);
> + hw->isb_mem = TMZ_VADDR(mz);
> +
> /* Initialize the shared code (base driver) */
> err = ngbe_init_shared_code(hw);
> if (err != 0) {
> @@ -99,6 +109,13 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
> return -EIO;
> }
>
> + err = hw->mac.init_hw(hw);
> +
Please, remove the empty line to avoid logical separation
of the operation and its result check.
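That is, as a sketch:

        err = hw->mac.init_hw(hw);
        if (err != 0) {
                PMD_INIT_LOG(ERR, "Hardware Initialization Failure: %d", err);
                return -EIO;
        }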
> + if (err != 0) {
> + PMD_INIT_LOG(ERR, "Hardware Initialization Failure: %d", err);
> + return -EIO;
> + }
> +
> return 0;
> }
>
>
* Re: [dpdk-dev] [PATCH v6 09/19] net/ngbe: store MAC address
2021-06-17 10:59 ` [dpdk-dev] [PATCH v6 09/19] net/ngbe: store MAC address Jiawen Wu
@ 2021-07-02 16:12 ` Andrew Rybchenko
0 siblings, 0 replies; 42+ messages in thread
From: Andrew Rybchenko @ 2021-07-02 16:12 UTC (permalink / raw)
To: Jiawen Wu, dev
On 6/17/21 1:59 PM, Jiawen Wu wrote:
> Store MAC addresses and init receive address filters.
>
> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
[snip]
> diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
> index 31d4dda976..deca64137d 100644
> --- a/drivers/net/ngbe/ngbe_ethdev.c
> +++ b/drivers/net/ngbe/ngbe_ethdev.c
> @@ -116,6 +116,31 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
> return -EIO;
> }
>
> + /* Allocate memory for storing MAC addresses */
> + eth_dev->data->mac_addrs = rte_zmalloc("ngbe", RTE_ETHER_ADDR_LEN *
> + hw->mac.num_rar_entries, 0);
> + if (eth_dev->data->mac_addrs == NULL) {
> + PMD_INIT_LOG(ERR,
> + "Failed to allocate %u bytes needed to store "
> + "MAC addresses",
Do not split format strings. Splitting makes it hard to
find the line in the code by grepping for a message
seen in the log.
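For example (sketch), keeping the format string on a single line even if it exceeds the usual line-length limit:

        PMD_INIT_LOG(ERR,
                     "Failed to allocate %u bytes needed to store MAC addresses",
                     RTE_ETHER_ADDR_LEN * hw->mac.num_rar_entries);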
[snip]
> + RTE_ETHER_ADDR_LEN * hw->mac.num_rar_entries);
> + return -ENOMEM;
> + }
> +
> + /* Copy the permanent MAC address */
> + rte_ether_addr_copy((struct rte_ether_addr *)hw->mac.perm_addr,
> + &eth_dev->data->mac_addrs[0]);
> +
> + /* Allocate memory for storing hash filter MAC addresses */
> + eth_dev->data->hash_mac_addrs = rte_zmalloc("ngbe",
> + RTE_ETHER_ADDR_LEN * NGBE_VMDQ_NUM_UC_MAC, 0);
> + if (eth_dev->data->hash_mac_addrs == NULL) {
> + PMD_INIT_LOG(ERR,
> + "Failed to allocate %d bytes needed to store MAC addresses",
> + RTE_ETHER_ADDR_LEN * NGBE_VMDQ_NUM_UC_MAC);
Shouldn't we free eth_dev->data->mac_addrs here?
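A sketch of the error path being suggested here (releasing the earlier allocation before returning):

        if (eth_dev->data->hash_mac_addrs == NULL) {
                PMD_INIT_LOG(ERR,
                             "Failed to allocate %d bytes needed to store MAC addresses",
                             RTE_ETHER_ADDR_LEN * NGBE_VMDQ_NUM_UC_MAC);
                rte_free(eth_dev->data->mac_addrs);
                eth_dev->data->mac_addrs = NULL;
                return -ENOMEM;
        }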
> + return -ENOMEM;
> + }
> +
> return 0;
> }
>
* Re: [dpdk-dev] [PATCH v6 10/19] net/ngbe: support link update
2021-06-17 10:59 ` [dpdk-dev] [PATCH v6 10/19] net/ngbe: support link update Jiawen Wu
@ 2021-07-02 16:24 ` Andrew Rybchenko
2021-07-05 7:10 ` Jiawen Wu
0 siblings, 1 reply; 42+ messages in thread
From: Andrew Rybchenko @ 2021-07-02 16:24 UTC (permalink / raw)
To: Jiawen Wu, dev
On 6/17/21 1:59 PM, Jiawen Wu wrote:
> Register to handle device interrupt.
>
> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
[snip]
> diff --git a/doc/guides/nics/ngbe.rst b/doc/guides/nics/ngbe.rst
> index 54d0665db9..0918cc2918 100644
> --- a/doc/guides/nics/ngbe.rst
> +++ b/doc/guides/nics/ngbe.rst
> @@ -8,6 +8,11 @@ The NGBE PMD (librte_pmd_ngbe) provides poll mode driver support
> for Wangxun 1 Gigabit Ethernet NICs.
>
>
> +Features
> +--------
> +
> +- Link state information
> +
Two empty lines before the section.
> Prerequisites
> -------------
>
[snip]
> diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
> index deca64137d..c952023e8b 100644
> --- a/drivers/net/ngbe/ngbe_ethdev.c
> +++ b/drivers/net/ngbe/ngbe_ethdev.c
> @@ -141,6 +175,23 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
> return -ENOMEM;
> }
>
> + ctrl_ext = rd32(hw, NGBE_PORTCTL);
> + /* let hardware know driver is loaded */
> + ctrl_ext |= NGBE_PORTCTL_DRVLOAD;
> + /* Set PF Reset Done bit so PF/VF Mail Ops can work */
> + ctrl_ext |= NGBE_PORTCTL_RSTDONE;
> + wr32(hw, NGBE_PORTCTL, ctrl_ext);
> + ngbe_flush(hw);
> +
> + rte_intr_callback_register(intr_handle,
> + ngbe_dev_interrupt_handler, eth_dev);
> +
> + /* enable uio/vfio intr/eventfd mapping */
> + rte_intr_enable(intr_handle);
> +
> + /* enable support intr */
> + ngbe_enable_intr(eth_dev);
> +
I don't understand why it is done unconditionally regardless
of the corresponding bit in dev_conf.
> return 0;
> }
>
> @@ -180,11 +231,25 @@ static int eth_ngbe_pci_remove(struct rte_pci_device *pci_dev)
>
> static struct rte_pci_driver rte_ngbe_pmd = {
> .id_table = pci_id_ngbe_map,
> - .drv_flags = RTE_PCI_DRV_NEED_MAPPING,
> + .drv_flags = RTE_PCI_DRV_NEED_MAPPING |
> + RTE_PCI_DRV_INTR_LSC,
> .probe = eth_ngbe_pci_probe,
> .remove = eth_ngbe_pci_remove,
> };
>
> +static int
> +ngbe_dev_configure(struct rte_eth_dev *dev)
> +{
> + struct ngbe_interrupt *intr = ngbe_dev_intr(dev);
> +
> + PMD_INIT_FUNC_TRACE();
> +
> + /* set flag to update link status after init */
> + intr->flags |= NGBE_FLAG_NEED_LINK_UPDATE;
> +
> + return 0;
> +}
> +
> /*
> * Reset and stop device.
> */
> @@ -198,6 +263,315 @@ ngbe_dev_close(struct rte_eth_dev *dev)
> return -EINVAL;
> }
>
> +static int
> +ngbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
> +{
> + RTE_SET_USED(dev);
> +
> + dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_100M |
> + ETH_LINK_SPEED_10M;
> +
> + return 0;
> +}
> +
> +/* return 0 means link status changed, -1 means not changed */
> +int
> +ngbe_dev_link_update_share(struct rte_eth_dev *dev,
> + int wait_to_complete)
> +{
> + struct ngbe_hw *hw = ngbe_dev_hw(dev);
> + struct rte_eth_link link;
> + u32 link_speed = NGBE_LINK_SPEED_UNKNOWN;
> + u32 lan_speed = 0;
> + struct ngbe_interrupt *intr = ngbe_dev_intr(dev);
> + bool link_up;
> + int err;
> + int wait = 1;
> +
> + memset(&link, 0, sizeof(link));
> + link.link_status = ETH_LINK_DOWN;
> + link.link_speed = ETH_SPEED_NUM_NONE;
> + link.link_duplex = ETH_LINK_HALF_DUPLEX;
> + link.link_autoneg = !(dev->data->dev_conf.link_speeds &
> + ~ETH_LINK_SPEED_AUTONEG);
> +
> + hw->mac.get_link_status = true;
> +
> + if (intr->flags & NGBE_FLAG_NEED_LINK_CONFIG)
> + return rte_eth_linkstatus_set(dev, &link);
> +
> + /* check if it needs to wait to complete, if lsc interrupt is enabled */
> + if (wait_to_complete == 0 || dev->data->dev_conf.intr_conf.lsc != 0)
> + wait = 0;
> +
> + err = hw->mac.check_link(hw, &link_speed, &link_up, wait);
> +
Please remove the empty line, which adds unnecessary logical
separation between the operation and its result check.
> + if (err != 0) {
> + link.link_speed = ETH_SPEED_NUM_100M;
> + link.link_duplex = ETH_LINK_FULL_DUPLEX;
> + return rte_eth_linkstatus_set(dev, &link);
> + }
> +
> + if (!link_up)
> + return rte_eth_linkstatus_set(dev, &link);
> +
> + intr->flags &= ~NGBE_FLAG_NEED_LINK_CONFIG;
> + link.link_status = ETH_LINK_UP;
> + link.link_duplex = ETH_LINK_FULL_DUPLEX;
> +
> + switch (link_speed) {
> + default:
> + case NGBE_LINK_SPEED_UNKNOWN:
> + link.link_duplex = ETH_LINK_FULL_DUPLEX;
> + link.link_speed = ETH_SPEED_NUM_100M;
Maybe ETH_SPEED_NUM_NONE?
> + break;
> +
> + case NGBE_LINK_SPEED_10M_FULL:
> + link.link_speed = ETH_SPEED_NUM_10M;
> + lan_speed = 0;
> + break;
> +
> + case NGBE_LINK_SPEED_100M_FULL:
> + link.link_speed = ETH_SPEED_NUM_100M;
> + lan_speed = 1;
> + break;
> +
> + case NGBE_LINK_SPEED_1GB_FULL:
> + link.link_speed = ETH_SPEED_NUM_1G;
> + lan_speed = 2;
> + break;
> + }
> +
> + if (hw->is_pf) {
> + wr32m(hw, NGBE_LAN_SPEED, NGBE_LAN_SPEED_MASK, lan_speed);
> + if (link_speed & (NGBE_LINK_SPEED_1GB_FULL |
> + NGBE_LINK_SPEED_100M_FULL | NGBE_LINK_SPEED_10M_FULL)) {
Such indentation is very confusing since it is the same as on the
next line. Please add an extra TAB to indent the line above further.
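For example (sketch), with the condition continuation indented one level further than the block body:

        if (link_speed & (NGBE_LINK_SPEED_1GB_FULL |
                        NGBE_LINK_SPEED_100M_FULL |
                        NGBE_LINK_SPEED_10M_FULL)) {
                wr32m(hw, NGBE_MACTXCFG, NGBE_MACTXCFG_SPEED_MASK,
                      NGBE_MACTXCFG_SPEED_1G | NGBE_MACTXCFG_TE);
        }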
> + wr32m(hw, NGBE_MACTXCFG, NGBE_MACTXCFG_SPEED_MASK,
> + NGBE_MACTXCFG_SPEED_1G | NGBE_MACTXCFG_TE);
> + }
> + }
> +
> + return rte_eth_linkstatus_set(dev, &link);
> +}
> +
> +static int
> +ngbe_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete)
> +{
> + return ngbe_dev_link_update_share(dev, wait_to_complete);
> +}
> +
> +/*
> + * It reads ICR and sets flag for the link_update.
> + *
> + * @param dev
> + * Pointer to struct rte_eth_dev.
> + *
> + * @return
> + * - On success, zero.
> + * - On failure, a negative value.
> + */
> +static int
> +ngbe_dev_interrupt_get_status(struct rte_eth_dev *dev)
> +{
> + uint32_t eicr;
> + struct ngbe_hw *hw = ngbe_dev_hw(dev);
> + struct ngbe_interrupt *intr = ngbe_dev_intr(dev);
> +
> + /* clear all cause mask */
> + ngbe_disable_intr(hw);
> +
> + /* read-on-clear nic registers here */
> + eicr = ((u32 *)hw->isb_mem)[NGBE_ISB_MISC];
> + PMD_DRV_LOG(DEBUG, "eicr %x", eicr);
> +
> + intr->flags = 0;
> +
> + /* set flag for async link update */
> + if (eicr & NGBE_ICRMISC_PHY)
> + intr->flags |= NGBE_FLAG_NEED_LINK_UPDATE;
> +
> + if (eicr & NGBE_ICRMISC_VFMBX)
> + intr->flags |= NGBE_FLAG_MAILBOX;
> +
> + if (eicr & NGBE_ICRMISC_LNKSEC)
> + intr->flags |= NGBE_FLAG_MACSEC;
> +
> + if (eicr & NGBE_ICRMISC_GPIO)
> + intr->flags |= NGBE_FLAG_NEED_LINK_UPDATE;
> +
> + return 0;
> +}
> +
> +/**
> + * It gets and then prints the link status.
> + *
> + * @param dev
> + * Pointer to struct rte_eth_dev.
> + *
> + * @return
> + * - On success, zero.
> + * - On failure, a negative value.
> + */
> +static void
> +ngbe_dev_link_status_print(struct rte_eth_dev *dev)
> +{
> + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
> + struct rte_eth_link link;
> +
> + rte_eth_linkstatus_get(dev, &link);
> +
> + if (link.link_status == ETH_LINK_UP) {
> + PMD_INIT_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
> + (int)(dev->data->port_id),
> + (unsigned int)link.link_speed,
> + link.link_duplex == ETH_LINK_FULL_DUPLEX ?
> + "full-duplex" : "half-duplex");
> + } else {
> + PMD_INIT_LOG(INFO, " Port %d: Link Down",
> + (int)(dev->data->port_id));
> + }
> + PMD_INIT_LOG(DEBUG, "PCI Address: " PCI_PRI_FMT,
> + pci_dev->addr.domain,
> + pci_dev->addr.bus,
> + pci_dev->addr.devid,
> + pci_dev->addr.function);
> +}
> +
> +/*
> + * It executes link_update after knowing an interrupt occurred.
> + *
> + * @param dev
> + * Pointer to struct rte_eth_dev.
> + *
> + * @return
> + * - On success, zero.
> + * - On failure, a negative value.
> + */
> +static int
> +ngbe_dev_interrupt_action(struct rte_eth_dev *dev)
> +{
> + struct ngbe_interrupt *intr = ngbe_dev_intr(dev);
> + int64_t timeout;
> +
> + PMD_DRV_LOG(DEBUG, "intr action type %d", intr->flags);
> +
> + if (intr->flags & NGBE_FLAG_NEED_LINK_UPDATE) {
> + struct rte_eth_link link;
> +
> + /*get the link status before link update, for predicting later*/
> + rte_eth_linkstatus_get(dev, &link);
> +
> + ngbe_dev_link_update(dev, 0);
> +
> + /* likely to up */
> + if (link.link_status != ETH_LINK_UP)
> + /* handle it 1 sec later, wait it being stable */
> + timeout = NGBE_LINK_UP_CHECK_TIMEOUT;
> + /* likely to down */
> + else
> + /* handle it 4 sec later, wait it being stable */
> + timeout = NGBE_LINK_DOWN_CHECK_TIMEOUT;
> +
> + ngbe_dev_link_status_print(dev);
> + if (rte_eal_alarm_set(timeout * 1000,
> + ngbe_dev_interrupt_delayed_handler,
> + (void *)dev) < 0) {
> + PMD_DRV_LOG(ERR, "Error setting alarm");
> + } else {
> + /* remember original mask */
> + intr->mask_misc_orig = intr->mask_misc;
> + /* only disable lsc interrupt */
> + intr->mask_misc &= ~NGBE_ICRMISC_PHY;
> +
> + intr->mask_orig = intr->mask;
> + /* only disable all misc interrupts */
> + intr->mask &= ~(1ULL << NGBE_MISC_VEC_ID);
> + }
> + }
> +
> + PMD_DRV_LOG(DEBUG, "enable intr immediately");
> + ngbe_enable_intr(dev);
> +
> + return 0;
> +}
> +
> +/**
> + * Interrupt handler which shall be registered for alarm callback for delayed
> + * handling specific interrupt to wait for the stable nic state. As the
> + * NIC interrupt state is not stable for ngbe after link is just down,
> + * it needs to wait 4 seconds to get the stable status.
> + *
> + * @param handle
> + * Pointer to interrupt handle.
> + * @param param
> + * The address of parameter (struct rte_eth_dev *) registered before.
> + *
> + * @return
> + * void
Such documentation of void functions is useless.
It may simply be omitted.
> + */
> +static void
> +ngbe_dev_interrupt_delayed_handler(void *param)
> +{
> + struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
> + struct ngbe_interrupt *intr = ngbe_dev_intr(dev);
> + struct ngbe_hw *hw = ngbe_dev_hw(dev);
> + uint32_t eicr;
> +
> + ngbe_disable_intr(hw);
> +
> + eicr = ((u32 *)hw->isb_mem)[NGBE_ISB_MISC];
> +
> + if (intr->flags & NGBE_FLAG_NEED_LINK_UPDATE) {
> + ngbe_dev_link_update(dev, 0);
> + intr->flags &= ~NGBE_FLAG_NEED_LINK_UPDATE;
> + ngbe_dev_link_status_print(dev);
> + rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_LSC,
> + NULL);
> + }
> +
> + if (intr->flags & NGBE_FLAG_MACSEC) {
> + rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_MACSEC,
> + NULL);
> + intr->flags &= ~NGBE_FLAG_MACSEC;
> + }
> +
> + /* restore original mask */
> + intr->mask_misc = intr->mask_misc_orig;
> + intr->mask_misc_orig = 0;
> + intr->mask = intr->mask_orig;
> + intr->mask_orig = 0;
> +
> + PMD_DRV_LOG(DEBUG, "enable intr in delayed handler S[%08x]", eicr);
> + ngbe_enable_intr(dev);
> +}
> +
> +/**
> + * Interrupt handler triggered by NIC for handling
> + * specific interrupt.
> + *
> + * @param handle
> + * Pointer to interrupt handle.
> + * @param param
> + * The address of parameter (struct rte_eth_dev *) registered before.
> + *
> + * @return
> + * void
Such documentation of void functions is useless.
It may simply be omitted.
> + */
> +static void
> +ngbe_dev_interrupt_handler(void *param)
> +{
> + struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
> +
> + ngbe_dev_interrupt_get_status(dev);
> + ngbe_dev_interrupt_action(dev);
> +}
> +
> +static const struct eth_dev_ops ngbe_eth_dev_ops = {
> + .dev_configure = ngbe_dev_configure,
> + .dev_infos_get = ngbe_dev_info_get,
> + .link_update = ngbe_dev_link_update,
> +};
> +
> RTE_PMD_REGISTER_PCI(net_ngbe, rte_ngbe_pmd);
> RTE_PMD_REGISTER_PCI_TABLE(net_ngbe, pci_id_ngbe_map);
> RTE_PMD_REGISTER_KMOD_DEP(net_ngbe, "* igb_uio | uio_pci_generic | vfio-pci");
> diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
> index 87cc1cff6b..b67508a3de 100644
> --- a/drivers/net/ngbe/ngbe_ethdev.h
> +++ b/drivers/net/ngbe/ngbe_ethdev.h
> @@ -6,11 +6,30 @@
> #ifndef _NGBE_ETHDEV_H_
> #define _NGBE_ETHDEV_H_
>
> +/* need update link, bit flag */
> +#define NGBE_FLAG_NEED_LINK_UPDATE (uint32_t)(1 << 0)
> +#define NGBE_FLAG_MAILBOX (uint32_t)(1 << 1)
> +#define NGBE_FLAG_PHY_INTERRUPT (uint32_t)(1 << 2)
> +#define NGBE_FLAG_MACSEC (uint32_t)(1 << 3)
> +#define NGBE_FLAG_NEED_LINK_CONFIG (uint32_t)(1 << 4)
> +
> +#define NGBE_MISC_VEC_ID RTE_INTR_VEC_ZERO_OFFSET
> +
> +/* structure for interrupt relative data */
> +struct ngbe_interrupt {
> + uint32_t flags;
> + uint32_t mask_misc;
> + uint32_t mask_misc_orig; /* save mask during delayed handler */
> + uint64_t mask;
> + uint64_t mask_orig; /* save mask during delayed handler */
> +};
> +
> /*
> * Structure to store private data for each driver instance (for each port).
> */
> struct ngbe_adapter {
> struct ngbe_hw hw;
> + struct ngbe_interrupt intr;
> };
>
> static inline struct ngbe_adapter *
> @@ -30,6 +49,21 @@ ngbe_dev_hw(struct rte_eth_dev *dev)
> return hw;
> }
>
> +static inline struct ngbe_interrupt *
> +ngbe_dev_intr(struct rte_eth_dev *dev)
> +{
> + struct ngbe_adapter *ad = ngbe_dev_adapter(dev);
> + struct ngbe_interrupt *intr = &ad->intr;
> +
> + return intr;
> +}
> +
> +int
> +ngbe_dev_link_update_share(struct rte_eth_dev *dev,
> + int wait_to_complete);
> +
> +#define NGBE_LINK_DOWN_CHECK_TIMEOUT 4000 /* ms */
> +#define NGBE_LINK_UP_CHECK_TIMEOUT 1000 /* ms */
> #define NGBE_VMDQ_NUM_UC_MAC 4096 /* Maximum nb. of UC MAC addr. */
>
> #endif /* _NGBE_ETHDEV_H_ */
>
* Re: [dpdk-dev] [PATCH v6 12/19] net/ngbe: add Rx queue setup and release
2021-06-17 10:59 ` [dpdk-dev] [PATCH v6 12/19] net/ngbe: add Rx queue setup and release Jiawen Wu
@ 2021-07-02 16:35 ` Andrew Rybchenko
2021-07-05 8:36 ` Jiawen Wu
0 siblings, 1 reply; 42+ messages in thread
From: Andrew Rybchenko @ 2021-07-02 16:35 UTC (permalink / raw)
To: Jiawen Wu, dev
On 6/17/21 1:59 PM, Jiawen Wu wrote:
> Setup device Rx queue and release Rx queue.
>
> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
> ---
> drivers/net/ngbe/meson.build | 1 +
> drivers/net/ngbe/ngbe_ethdev.c | 37 +++-
> drivers/net/ngbe/ngbe_ethdev.h | 16 ++
> drivers/net/ngbe/ngbe_rxtx.c | 308 +++++++++++++++++++++++++++++++++
> drivers/net/ngbe/ngbe_rxtx.h | 96 ++++++++++
> 5 files changed, 457 insertions(+), 1 deletion(-)
> create mode 100644 drivers/net/ngbe/ngbe_rxtx.c
> create mode 100644 drivers/net/ngbe/ngbe_rxtx.h
>
> diff --git a/drivers/net/ngbe/meson.build b/drivers/net/ngbe/meson.build
> index 81173fa7f0..9e75b82f1c 100644
> --- a/drivers/net/ngbe/meson.build
> +++ b/drivers/net/ngbe/meson.build
> @@ -12,6 +12,7 @@ objs = [base_objs]
>
> sources = files(
> 'ngbe_ethdev.c',
> + 'ngbe_rxtx.c',
> )
>
> includes += include_directories('base')
> diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
> index c952023e8b..e73606c5f3 100644
> --- a/drivers/net/ngbe/ngbe_ethdev.c
> +++ b/drivers/net/ngbe/ngbe_ethdev.c
> @@ -12,6 +12,7 @@
> #include "ngbe_logs.h"
> #include "base/ngbe.h"
> #include "ngbe_ethdev.h"
> +#include "ngbe_rxtx.h"
>
> static int ngbe_dev_close(struct rte_eth_dev *dev);
>
> @@ -37,6 +38,12 @@ static const struct rte_pci_id pci_id_ngbe_map[] = {
> { .vendor_id = 0, /* sentinel */ },
> };
>
> +static const struct rte_eth_desc_lim rx_desc_lim = {
> + .nb_max = NGBE_RING_DESC_MAX,
> + .nb_min = NGBE_RING_DESC_MIN,
> + .nb_align = NGBE_RXD_ALIGN,
> +};
> +
> static const struct eth_dev_ops ngbe_eth_dev_ops;
>
> static inline void
> @@ -241,12 +248,19 @@ static int
> ngbe_dev_configure(struct rte_eth_dev *dev)
> {
> struct ngbe_interrupt *intr = ngbe_dev_intr(dev);
> + struct ngbe_adapter *adapter = ngbe_dev_adapter(dev);
>
> PMD_INIT_FUNC_TRACE();
>
> /* set flag to update link status after init */
> intr->flags |= NGBE_FLAG_NEED_LINK_UPDATE;
>
> + /*
> + * Initialize to TRUE. If any of Rx queues doesn't meet the bulk
> + * allocation Rx preconditions we will reset it.
> + */
> + adapter->rx_bulk_alloc_allowed = true;
> +
> return 0;
> }
>
> @@ -266,11 +280,30 @@ ngbe_dev_close(struct rte_eth_dev *dev)
> static int
> ngbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
> {
> - RTE_SET_USED(dev);
> + struct ngbe_hw *hw = ngbe_dev_hw(dev);
> +
> + dev_info->max_rx_queues = (uint16_t)hw->mac.max_rx_queues;
> +
> + dev_info->default_rxconf = (struct rte_eth_rxconf) {
> + .rx_thresh = {
> + .pthresh = NGBE_DEFAULT_RX_PTHRESH,
> + .hthresh = NGBE_DEFAULT_RX_HTHRESH,
> + .wthresh = NGBE_DEFAULT_RX_WTHRESH,
> + },
> + .rx_free_thresh = NGBE_DEFAULT_RX_FREE_THRESH,
> + .rx_drop_en = 0,
> + .offloads = 0,
> + };
> +
> + dev_info->rx_desc_lim = rx_desc_lim;
>
> dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_100M |
> ETH_LINK_SPEED_10M;
>
> + /* Driver-preferred Rx/Tx parameters */
> + dev_info->default_rxportconf.nb_queues = 1;
> + dev_info->default_rxportconf.ring_size = 256;
> +
> return 0;
> }
>
> @@ -570,6 +603,8 @@ static const struct eth_dev_ops ngbe_eth_dev_ops = {
> .dev_configure = ngbe_dev_configure,
> .dev_infos_get = ngbe_dev_info_get,
> .link_update = ngbe_dev_link_update,
> + .rx_queue_setup = ngbe_dev_rx_queue_setup,
> + .rx_queue_release = ngbe_dev_rx_queue_release,
> };
>
> RTE_PMD_REGISTER_PCI(net_ngbe, rte_ngbe_pmd);
> diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
> index b67508a3de..6580d288c8 100644
> --- a/drivers/net/ngbe/ngbe_ethdev.h
> +++ b/drivers/net/ngbe/ngbe_ethdev.h
> @@ -30,6 +30,7 @@ struct ngbe_interrupt {
> struct ngbe_adapter {
> struct ngbe_hw hw;
> struct ngbe_interrupt intr;
> + bool rx_bulk_alloc_allowed;
Shouldn't it be aligned in the same way as the fields above?
> };
>
> static inline struct ngbe_adapter *
> @@ -58,6 +59,13 @@ ngbe_dev_intr(struct rte_eth_dev *dev)
> return intr;
> }
>
> +void ngbe_dev_rx_queue_release(void *rxq);
> +
> +int ngbe_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
> + uint16_t nb_rx_desc, unsigned int socket_id,
> + const struct rte_eth_rxconf *rx_conf,
> + struct rte_mempool *mb_pool);
> +
> int
> ngbe_dev_link_update_share(struct rte_eth_dev *dev,
> int wait_to_complete);
> @@ -66,4 +74,12 @@ ngbe_dev_link_update_share(struct rte_eth_dev *dev,
> #define NGBE_LINK_UP_CHECK_TIMEOUT 1000 /* ms */
> #define NGBE_VMDQ_NUM_UC_MAC 4096 /* Maximum nb. of UC MAC addr. */
>
> +/*
> + * Default values for Rx/Tx configuration
> + */
> +#define NGBE_DEFAULT_RX_FREE_THRESH 32
> +#define NGBE_DEFAULT_RX_PTHRESH 8
> +#define NGBE_DEFAULT_RX_HTHRESH 8
> +#define NGBE_DEFAULT_RX_WTHRESH 0
> +
> #endif /* _NGBE_ETHDEV_H_ */
> diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c
> new file mode 100644
> index 0000000000..df0b64dc01
> --- /dev/null
> +++ b/drivers/net/ngbe/ngbe_rxtx.c
> @@ -0,0 +1,308 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
> + * Copyright(c) 2010-2017 Intel Corporation
> + */
> +
> +#include <sys/queue.h>
> +
> +#include <stdint.h>
> +#include <rte_ethdev.h>
> +#include <ethdev_driver.h>
> +#include <rte_malloc.h>
> +
> +#include "ngbe_logs.h"
> +#include "base/ngbe.h"
> +#include "ngbe_ethdev.h"
> +#include "ngbe_rxtx.h"
> +
> +/**
> + * ngbe_free_sc_cluster - free the not-yet-completed scattered cluster
> + *
> + * The "next" pointer of the last segment of (not-yet-completed) RSC clusters
> + * in the sw_sc_ring is not set to NULL but rather points to the next
> + * mbuf of this RSC aggregation (that has not been completed yet and still
> + * resides on the HW ring). So, instead of calling for rte_pktmbuf_free() we
> + * will just free first "nb_segs" segments of the cluster explicitly by calling
> + * an rte_pktmbuf_free_seg().
> + *
> + * @m scattered cluster head
> + */
> +static void __rte_cold
> +ngbe_free_sc_cluster(struct rte_mbuf *m)
> +{
> + uint16_t i, nb_segs = m->nb_segs;
> + struct rte_mbuf *next_seg;
> +
> + for (i = 0; i < nb_segs; i++) {
> + next_seg = m->next;
> + rte_pktmbuf_free_seg(m);
> + m = next_seg;
> + }
> +}
> +
> +static void __rte_cold
> +ngbe_rx_queue_release_mbufs(struct ngbe_rx_queue *rxq)
> +{
> + unsigned int i;
> +
> + if (rxq->sw_ring != NULL) {
> + for (i = 0; i < rxq->nb_rx_desc; i++) {
> + if (rxq->sw_ring[i].mbuf != NULL) {
> + rte_pktmbuf_free_seg(rxq->sw_ring[i].mbuf);
> + rxq->sw_ring[i].mbuf = NULL;
> + }
> + }
> + if (rxq->rx_nb_avail) {
Compare vs 0 explicitly. However, it looks like the
check is not required at all; the body may be executed unconditionally.
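For illustration (sketch): since the loop is already a no-op when rx_nb_avail is 0, the staged-mbuf cleanup could simply run unconditionally:

        for (i = 0; i < rxq->rx_nb_avail; ++i)
                rte_pktmbuf_free_seg(rxq->rx_stage[rxq->rx_next_avail + i]);
        rxq->rx_nb_avail = 0;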
> + for (i = 0; i < rxq->rx_nb_avail; ++i) {
> + struct rte_mbuf *mb;
> +
> + mb = rxq->rx_stage[rxq->rx_next_avail + i];
> + rte_pktmbuf_free_seg(mb);
> + }
> + rxq->rx_nb_avail = 0;
> + }
> + }
> +
> + if (rxq->sw_sc_ring != NULL)
> + for (i = 0; i < rxq->nb_rx_desc; i++)
> + if (rxq->sw_sc_ring[i].fbuf) {
Compare vs NULL
> + ngbe_free_sc_cluster(rxq->sw_sc_ring[i].fbuf);
> + rxq->sw_sc_ring[i].fbuf = NULL;
> + }
> +}
> +
> +static void __rte_cold
> +ngbe_rx_queue_release(struct ngbe_rx_queue *rxq)
> +{
> + if (rxq != NULL) {
> + ngbe_rx_queue_release_mbufs(rxq);
> + rte_free(rxq->sw_ring);
> + rte_free(rxq->sw_sc_ring);
> + rte_free(rxq);
> + }
> +}
> +
> +void __rte_cold
> +ngbe_dev_rx_queue_release(void *rxq)
> +{
> + ngbe_rx_queue_release(rxq);
> +}
> +
> +/*
> + * Check if Rx Burst Bulk Alloc function can be used.
> + * Return
> + * 0: the preconditions are satisfied and the bulk allocation function
> + * can be used.
> + * -EINVAL: the preconditions are NOT satisfied and the default Rx burst
> + * function must be used.
> + */
> +static inline int __rte_cold
> +check_rx_burst_bulk_alloc_preconditions(struct ngbe_rx_queue *rxq)
> +{
> + int ret = 0;
> +
> + /*
> + * Make sure the following pre-conditions are satisfied:
> + * rxq->rx_free_thresh >= RTE_PMD_NGBE_RX_MAX_BURST
> + * rxq->rx_free_thresh < rxq->nb_rx_desc
> + * (rxq->nb_rx_desc % rxq->rx_free_thresh) == 0
> + * Scattered packets are not supported. This should be checked
> + * outside of this function.
> + */
> + if (!(rxq->rx_free_thresh >= RTE_PMD_NGBE_RX_MAX_BURST)) {
Isn't it simpler to read and understand:
rxq->rx_free_thresh < RTE_PMD_NGBE_RX_MAX_BURST
> + PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: "
> + "rxq->rx_free_thresh=%d, "
> + "RTE_PMD_NGBE_RX_MAX_BURST=%d",
Do not split format string.
> + rxq->rx_free_thresh, RTE_PMD_NGBE_RX_MAX_BURST);
> + ret = -EINVAL;
> + } else if (!(rxq->rx_free_thresh < rxq->nb_rx_desc)) {
rxq->rx_free_thresh >= rxq->nb_rx_desc
is simpler to read
> + PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: "
> + "rxq->rx_free_thresh=%d, "
> + "rxq->nb_rx_desc=%d",
Do not split format string.
> + rxq->rx_free_thresh, rxq->nb_rx_desc);
> + ret = -EINVAL;
> + } else if (!((rxq->nb_rx_desc % rxq->rx_free_thresh) == 0)) {
(rxq->nb_rx_desc % rxq->rx_free_thresh) != 0
is easier to read
> + PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: "
> + "rxq->nb_rx_desc=%d, "
> + "rxq->rx_free_thresh=%d",
> + rxq->nb_rx_desc, rxq->rx_free_thresh);
Do not split format string
> + ret = -EINVAL;
> + }
> +
> + return ret;
> +}
> +
> +/* Reset dynamic ngbe_rx_queue fields back to defaults */
> +static void __rte_cold
> +ngbe_reset_rx_queue(struct ngbe_adapter *adapter, struct ngbe_rx_queue *rxq)
> +{
> + static const struct ngbe_rx_desc zeroed_desc = {
> + {{0}, {0} }, {{0}, {0} } };
> + unsigned int i;
> + uint16_t len = rxq->nb_rx_desc;
> +
> + /*
> + * By default, the Rx queue setup function allocates enough memory for
> + * NGBE_RING_DESC_MAX. The Rx Burst bulk allocation function requires
> + * extra memory at the end of the descriptor ring to be zero'd out.
> + */
> + if (adapter->rx_bulk_alloc_allowed)
> + /* zero out extra memory */
> + len += RTE_PMD_NGBE_RX_MAX_BURST;
> +
> + /*
> + * Zero out HW ring memory. Zero out extra memory at the end of
> + * the H/W ring so look-ahead logic in Rx Burst bulk alloc function
> + * reads extra memory as zeros.
> + */
> + for (i = 0; i < len; i++)
> + rxq->rx_ring[i] = zeroed_desc;
> +
> + /*
> + * initialize extra software ring entries. Space for these extra
> + * entries is always allocated
> + */
> + memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf));
> + for (i = rxq->nb_rx_desc; i < len; ++i)
> + rxq->sw_ring[i].mbuf = &rxq->fake_mbuf;
> +
> + rxq->rx_nb_avail = 0;
> + rxq->rx_next_avail = 0;
> + rxq->rx_free_trigger = (uint16_t)(rxq->rx_free_thresh - 1);
> + rxq->rx_tail = 0;
> + rxq->nb_rx_hold = 0;
> + rxq->pkt_first_seg = NULL;
> + rxq->pkt_last_seg = NULL;
> +}
> +
> +int __rte_cold
> +ngbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
> + uint16_t queue_idx,
> + uint16_t nb_desc,
> + unsigned int socket_id,
> + const struct rte_eth_rxconf *rx_conf,
> + struct rte_mempool *mp)
> +{
> + const struct rte_memzone *rz;
> + struct ngbe_rx_queue *rxq;
> + struct ngbe_hw *hw;
> + uint16_t len;
> + struct ngbe_adapter *adapter = ngbe_dev_adapter(dev);
> +
> + PMD_INIT_FUNC_TRACE();
> + hw = ngbe_dev_hw(dev);
> +
> + /*
> + * Validate number of receive descriptors.
> + * It must not exceed hardware maximum, and must be multiple
> + * of NGBE_ALIGN.
> + */
> + if (nb_desc % NGBE_RXD_ALIGN != 0 ||
> + nb_desc > NGBE_RING_DESC_MAX ||
> + nb_desc < NGBE_RING_DESC_MIN) {
> + return -EINVAL;
> + }
rte_eth_rx_queue_setup already takes care of this check
> +
> + /* Free memory prior to re-allocation if needed... */
> + if (dev->data->rx_queues[queue_idx] != NULL) {
> + ngbe_rx_queue_release(dev->data->rx_queues[queue_idx]);
> + dev->data->rx_queues[queue_idx] = NULL;
> + }
> +
> + /* First allocate the Rx queue data structure */
> + rxq = rte_zmalloc_socket("ethdev RX queue",
> + sizeof(struct ngbe_rx_queue),
> + RTE_CACHE_LINE_SIZE, socket_id);
> + if (rxq == NULL)
> + return -ENOMEM;
> + rxq->mb_pool = mp;
> + rxq->nb_rx_desc = nb_desc;
> + rxq->rx_free_thresh = rx_conf->rx_free_thresh;
> + rxq->queue_id = queue_idx;
> + rxq->reg_idx = queue_idx;
> + rxq->port_id = dev->data->port_id;
> + rxq->drop_en = rx_conf->rx_drop_en;
> + rxq->rx_deferred_start = rx_conf->rx_deferred_start;
> +
> + /*
> + * Allocate Rx ring hardware descriptors. A memzone large enough to
> + * handle the maximum ring size is allocated in order to allow for
> + * resizing in later calls to the queue setup function.
> + */
> + rz = rte_eth_dma_zone_reserve(dev, "rx_ring", queue_idx,
> + RX_RING_SZ, NGBE_ALIGN, socket_id);
> + if (rz == NULL) {
> + ngbe_rx_queue_release(rxq);
> + return -ENOMEM;
> + }
> +
> + /*
> + * Zero init all the descriptors in the ring.
> + */
> + memset(rz->addr, 0, RX_RING_SZ);
> +
> + rxq->rdt_reg_addr = NGBE_REG_ADDR(hw, NGBE_RXWP(rxq->reg_idx));
> + rxq->rdh_reg_addr = NGBE_REG_ADDR(hw, NGBE_RXRP(rxq->reg_idx));
> +
> + rxq->rx_ring_phys_addr = TMZ_PADDR(rz);
> + rxq->rx_ring = (struct ngbe_rx_desc *)TMZ_VADDR(rz);
> +
> + /*
> + * Certain constraints must be met in order to use the bulk buffer
> + * allocation Rx burst function. If any of Rx queues doesn't meet them
> + * the feature should be disabled for the whole port.
> + */
> + if (check_rx_burst_bulk_alloc_preconditions(rxq)) {
> + PMD_INIT_LOG(DEBUG, "queue[%d] doesn't meet Rx Bulk Alloc "
> + "preconditions - canceling the feature for "
> + "the whole port[%d]",
Do not split format string.
> + rxq->queue_id, rxq->port_id);
> + adapter->rx_bulk_alloc_allowed = false;
> + }
> +
> + /*
> + * Allocate software ring. Allow for space at the end of the
> + * S/W ring to make sure look-ahead logic in bulk alloc Rx burst
> + * function does not access an invalid memory region.
> + */
> + len = nb_desc;
> + if (adapter->rx_bulk_alloc_allowed)
> + len += RTE_PMD_NGBE_RX_MAX_BURST;
> +
> + rxq->sw_ring = rte_zmalloc_socket("rxq->sw_ring",
> + sizeof(struct ngbe_rx_entry) * len,
> + RTE_CACHE_LINE_SIZE, socket_id);
> + if (rxq->sw_ring == NULL) {
> + ngbe_rx_queue_release(rxq);
> + return -ENOMEM;
> + }
> +
> + /*
> + * Always allocate even if it's not going to be needed in order to
> + * simplify the code.
> + *
> + * This ring is used in Scattered Rx cases and Scattered Rx may
> + * be requested in ngbe_dev_rx_init(), which is called later from
> + * dev_start() flow.
> + */
> + rxq->sw_sc_ring =
> + rte_zmalloc_socket("rxq->sw_sc_ring",
> + sizeof(struct ngbe_scattered_rx_entry) * len,
> + RTE_CACHE_LINE_SIZE, socket_id);
> + if (rxq->sw_sc_ring == NULL) {
> + ngbe_rx_queue_release(rxq);
> + return -ENOMEM;
> + }
> +
> + PMD_INIT_LOG(DEBUG, "sw_ring=%p sw_sc_ring=%p hw_ring=%p "
> + "dma_addr=0x%" PRIx64,
Do not split format string.
> + rxq->sw_ring, rxq->sw_sc_ring, rxq->rx_ring,
> + rxq->rx_ring_phys_addr);
> +
> + dev->data->rx_queues[queue_idx] = rxq;
> +
> + ngbe_reset_rx_queue(adapter, rxq);
> +
> + return 0;
> +}
> +
> diff --git a/drivers/net/ngbe/ngbe_rxtx.h b/drivers/net/ngbe/ngbe_rxtx.h
> new file mode 100644
> index 0000000000..92b9a9fd1b
> --- /dev/null
> +++ b/drivers/net/ngbe/ngbe_rxtx.h
> @@ -0,0 +1,96 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
> + * Copyright(c) 2010-2017 Intel Corporation
> + */
> +
> +#ifndef _NGBE_RXTX_H_
> +#define _NGBE_RXTX_H_
> +
> +/*****************************************************************************
> + * Receive Descriptor
> + *****************************************************************************/
> +struct ngbe_rx_desc {
> + struct {
> + union {
> + rte_le32_t dw0;
> + struct {
> + rte_le16_t pkt;
> + rte_le16_t hdr;
> + } lo;
> + };
> + union {
> + rte_le32_t dw1;
> + struct {
> + rte_le16_t ipid;
> + rte_le16_t csum;
> + } hi;
> + };
> + } qw0; /* also as r.pkt_addr */
> + struct {
> + union {
> + rte_le32_t dw2;
> + struct {
> + rte_le32_t status;
> + } lo;
> + };
> + union {
> + rte_le32_t dw3;
> + struct {
> + rte_le16_t len;
> + rte_le16_t tag;
> + } hi;
> + };
> + } qw1; /* also as r.hdr_addr */
> +};
> +
> +#define RTE_PMD_NGBE_RX_MAX_BURST 32
> +
> +#define RX_RING_SZ ((NGBE_RING_DESC_MAX + RTE_PMD_NGBE_RX_MAX_BURST) * \
> + sizeof(struct ngbe_rx_desc))
> +
> +
> +/**
> + * Structure associated with each descriptor of the Rx ring of a Rx queue.
> + */
> +struct ngbe_rx_entry {
> + struct rte_mbuf *mbuf; /**< mbuf associated with Rx descriptor. */
> +};
> +
> +struct ngbe_scattered_rx_entry {
> + struct rte_mbuf *fbuf; /**< First segment of the fragmented packet. */
> +};
> +
> +/**
> + * Structure associated with each Rx queue.
> + */
> +struct ngbe_rx_queue {
> + struct rte_mempool *mb_pool; /**< mbuf pool to populate Rx ring. */
> + volatile struct ngbe_rx_desc *rx_ring; /**< Rx ring virtual address. */
> + uint64_t rx_ring_phys_addr; /**< Rx ring DMA address. */
> + volatile uint32_t *rdt_reg_addr; /**< RDT register address. */
> + volatile uint32_t *rdh_reg_addr; /**< RDH register address. */
> + struct ngbe_rx_entry *sw_ring; /**< address of Rx software ring. */
> + /**< address of scattered Rx software ring. */
> + struct ngbe_scattered_rx_entry *sw_sc_ring;
> + struct rte_mbuf *pkt_first_seg; /**< First segment of current packet. */
> + struct rte_mbuf *pkt_last_seg; /**< Last segment of current packet. */
> + uint16_t nb_rx_desc; /**< number of Rx descriptors. */
> + uint16_t rx_tail; /**< current value of RDT register. */
> + uint16_t nb_rx_hold; /**< number of held free Rx desc. */
> + uint16_t rx_nb_avail; /**< nr of staged pkts ready to ret to app */
> + uint16_t rx_next_avail; /**< idx of next staged pkt to ret to app */
> + uint16_t rx_free_trigger; /**< triggers rx buffer allocation */
> + uint16_t rx_free_thresh; /**< max free Rx desc to hold. */
> + uint16_t queue_id; /**< RX queue index. */
> + uint16_t reg_idx; /**< RX queue register index. */
> + /**< Packet type mask for different NICs. */
> + uint16_t port_id; /**< Device port identifier. */
> + uint8_t drop_en; /**< If not 0, set SRRCTL.Drop_En. */
> + uint8_t rx_deferred_start; /**< not in global dev start. */
> + /** need to alloc dummy mbuf, for wraparound when scanning hw ring */
> + struct rte_mbuf fake_mbuf;
> + /** hold packets to return to application */
> + struct rte_mbuf *rx_stage[RTE_PMD_NGBE_RX_MAX_BURST * 2];
> +};
> +
> +#endif /* _NGBE_RXTX_H_ */
>
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [dpdk-dev] [PATCH v6 13/19] net/ngbe: add Tx queue setup and release
2021-06-17 10:59 ` [dpdk-dev] [PATCH v6 13/19] net/ngbe: add Tx " Jiawen Wu
@ 2021-07-02 16:39 ` Andrew Rybchenko
0 siblings, 0 replies; 42+ messages in thread
From: Andrew Rybchenko @ 2021-07-02 16:39 UTC (permalink / raw)
To: Jiawen Wu, dev
On 6/17/21 1:59 PM, Jiawen Wu wrote:
> Setup device Tx queue and release Tx queue.
>
> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
[snip]
> +int __rte_cold
> +ngbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
> + uint16_t queue_idx,
> + uint16_t nb_desc,
> + unsigned int socket_id,
> + const struct rte_eth_txconf *tx_conf)
> +{
> + const struct rte_memzone *tz;
> + struct ngbe_tx_queue *txq;
> + struct ngbe_hw *hw;
> + uint16_t tx_free_thresh;
> +
> + PMD_INIT_FUNC_TRACE();
> + hw = ngbe_dev_hw(dev);
> +
> + /*
> + * Validate number of transmit descriptors.
> + * It must not exceed hardware maximum, and must be multiple
> + * of NGBE_ALIGN.
> + */
> + if (nb_desc % NGBE_TXD_ALIGN != 0 ||
> + nb_desc > NGBE_RING_DESC_MAX ||
> + nb_desc < NGBE_RING_DESC_MIN) {
> + return -EINVAL;
> + }
rte_eth_tx_queue_setup cares about it
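For reference, the ethdev layer already rejects an out-of-range descriptor count before the
PMD callback is reached, roughly as follows (a paraphrase of the rte_eth_tx_queue_setup()
check against the tx_desc_lim reported via dev_infos_get; not the exact upstream code):

/* sketch of the generic check done in rte_eth_tx_queue_setup() */
if (nb_tx_desc > dev_info.tx_desc_lim.nb_max ||
    nb_tx_desc < dev_info.tx_desc_lim.nb_min ||
    nb_tx_desc % dev_info.tx_desc_lim.nb_align != 0)
	return -EINVAL;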
> +
> + /*
> + * The Tx descriptor ring will be cleaned after txq->tx_free_thresh
> + * descriptors are used or if the number of descriptors required
> + * to transmit a packet is greater than the number of free Tx
> + * descriptors.
> + * One descriptor in the Tx ring is used as a sentinel to avoid a
> + * H/W race condition, hence the maximum threshold constraints.
> + * When set to zero use default values.
> + */
> + tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh) ?
> + tx_conf->tx_free_thresh : DEFAULT_TX_FREE_THRESH);
> + if (tx_free_thresh >= (nb_desc - 3)) {
> + PMD_INIT_LOG(ERR, "tx_free_thresh must be less than the number of "
> + "TX descriptors minus 3. (tx_free_thresh=%u "
> + "port=%d queue=%d)",
Do not split format string.
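For example, keeping the whole format string on one line (an illustrative rewrite, not a
required wording):

PMD_INIT_LOG(ERR,
	"tx_free_thresh must be less than the number of Tx descriptors minus 3. (tx_free_thresh=%u port=%d queue=%d)",
	(unsigned int)tx_free_thresh,
	(int)dev->data->port_id, (int)queue_idx);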
> + (unsigned int)tx_free_thresh,
> + (int)dev->data->port_id, (int)queue_idx);
> + return -(EINVAL);
> + }
> +
> + if (nb_desc % tx_free_thresh != 0) {
> + PMD_INIT_LOG(ERR, "tx_free_thresh must be a divisor of the "
> + "number of Tx descriptors. (tx_free_thresh=%u "
> + "port=%d queue=%d)", (unsigned int)tx_free_thresh,
Do not split format string.
> + (int)dev->data->port_id, (int)queue_idx);
> + return -(EINVAL);
> + }
> +
> + /* Free memory prior to re-allocation if needed... */
> + if (dev->data->tx_queues[queue_idx] != NULL) {
> + ngbe_tx_queue_release(dev->data->tx_queues[queue_idx]);
> + dev->data->tx_queues[queue_idx] = NULL;
> + }
> +
> + /* First allocate the Tx queue data structure */
> + txq = rte_zmalloc_socket("ethdev Tx queue",
> + sizeof(struct ngbe_tx_queue),
> + RTE_CACHE_LINE_SIZE, socket_id);
> + if (txq == NULL)
> + return -ENOMEM;
> +
> + /*
> + * Allocate Tx ring hardware descriptors. A memzone large enough to
> + * handle the maximum ring size is allocated in order to allow for
> + * resizing in later calls to the queue setup function.
> + */
> + tz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx,
> + sizeof(struct ngbe_tx_desc) * NGBE_RING_DESC_MAX,
> + NGBE_ALIGN, socket_id);
> + if (tz == NULL) {
> + ngbe_tx_queue_release(txq);
> + return -ENOMEM;
> + }
> +
> + txq->nb_tx_desc = nb_desc;
> + txq->tx_free_thresh = tx_free_thresh;
> + txq->pthresh = tx_conf->tx_thresh.pthresh;
> + txq->hthresh = tx_conf->tx_thresh.hthresh;
> + txq->wthresh = tx_conf->tx_thresh.wthresh;
> + txq->queue_id = queue_idx;
> + txq->reg_idx = queue_idx;
> + txq->port_id = dev->data->port_id;
> + txq->ops = &def_txq_ops;
> + txq->tx_deferred_start = tx_conf->tx_deferred_start;
> +
> + txq->tdt_reg_addr = NGBE_REG_ADDR(hw, NGBE_TXWP(txq->reg_idx));
> + txq->tdc_reg_addr = NGBE_REG_ADDR(hw, NGBE_TXCFG(txq->reg_idx));
> +
> + txq->tx_ring_phys_addr = TMZ_PADDR(tz);
> + txq->tx_ring = (struct ngbe_tx_desc *)TMZ_VADDR(tz);
> +
> + /* Allocate software ring */
> + txq->sw_ring = rte_zmalloc_socket("txq->sw_ring",
> + sizeof(struct ngbe_tx_entry) * nb_desc,
> + RTE_CACHE_LINE_SIZE, socket_id);
> + if (txq->sw_ring == NULL) {
> + ngbe_tx_queue_release(txq);
> + return -ENOMEM;
> + }
> + PMD_INIT_LOG(DEBUG, "sw_ring=%p hw_ring=%p dma_addr=0x%" PRIx64,
> + txq->sw_ring, txq->tx_ring, txq->tx_ring_phys_addr);
> +
> + txq->ops->reset(txq);
> +
> + dev->data->tx_queues[queue_idx] = txq;
> +
> + return 0;
> +}
> +
> /**
> * ngbe_free_sc_cluster - free the not-yet-completed scattered cluster
[snip]
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [dpdk-dev] [PATCH v6 14/19] net/ngbe: add simple Rx flow
2021-06-17 11:00 ` [dpdk-dev] [PATCH v6 14/19] net/ngbe: add simple Rx flow Jiawen Wu
@ 2021-07-02 16:42 ` Andrew Rybchenko
0 siblings, 0 replies; 42+ messages in thread
From: Andrew Rybchenko @ 2021-07-02 16:42 UTC (permalink / raw)
To: Jiawen Wu, dev
On 6/17/21 2:00 PM, Jiawen Wu wrote:
> Initialize device with the simplest receive function.
>
> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
The patch cannot be tested before device start up.
So, it should go after the patch which implements
device and Rx queues start up.
[snip]
> diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c
> index da9150b2f1..f97fceaf7c 100644
> --- a/drivers/net/ngbe/ngbe_rxtx.c
> +++ b/drivers/net/ngbe/ngbe_rxtx.c
> @@ -15,6 +15,174 @@
> #include "ngbe_ethdev.h"
> #include "ngbe_rxtx.h"
>
> +/*
> + * Prefetch a cache line into all cache levels.
> + */
> +#define rte_ngbe_prefetch(p) rte_prefetch0(p)
> +
> +/*********************************************************************
> + *
> + * Rx functions
> + *
> + **********************************************************************/
> +uint16_t
> +ngbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
> + uint16_t nb_pkts)
> +{
> + struct ngbe_rx_queue *rxq;
> + volatile struct ngbe_rx_desc *rx_ring;
> + volatile struct ngbe_rx_desc *rxdp;
> + struct ngbe_rx_entry *sw_ring;
> + struct ngbe_rx_entry *rxe;
> + struct rte_mbuf *rxm;
> + struct rte_mbuf *nmb;
> + struct ngbe_rx_desc rxd;
> + uint64_t dma_addr;
> + uint32_t staterr;
> + uint16_t pkt_len;
> + uint16_t rx_id;
> + uint16_t nb_rx;
> + uint16_t nb_hold;
> +
> + nb_rx = 0;
> + nb_hold = 0;
> + rxq = rx_queue;
> + rx_id = rxq->rx_tail;
> + rx_ring = rxq->rx_ring;
> + sw_ring = rxq->sw_ring;
> + struct rte_eth_dev *dev = &rte_eth_devices[rxq->port_id];
> + while (nb_rx < nb_pkts) {
> + /*
> + * The order of operations here is important as the DD status
> + * bit must not be read after any other descriptor fields.
> + * rx_ring and rxdp are pointing to volatile data so the order
> + * of accesses cannot be reordered by the compiler. If they were
> + * not volatile, they could be reordered which could lead to
> + * using invalid descriptor fields when read from rxd.
> + */
> + rxdp = &rx_ring[rx_id];
> + staterr = rxdp->qw1.lo.status;
> + if (!(staterr & rte_cpu_to_le_32(NGBE_RXD_STAT_DD)))
> + break;
> + rxd = *rxdp;
> +
> + /*
> + * End of packet.
> + *
> + * If the NGBE_RXD_STAT_EOP flag is not set, the Rx packet
> + * is likely to be invalid and to be dropped by the various
> + * validation checks performed by the network stack.
> + *
> + * Allocate a new mbuf to replenish the RX ring descriptor.
> + * If the allocation fails:
> + * - arrange for that Rx descriptor to be the first one
> + * being parsed the next time the receive function is
> + * invoked [on the same queue].
> + *
> + * - Stop parsing the Rx ring and return immediately.
> + *
> + * This policy does not drop the packet received in the Rx
> + * descriptor for which the allocation of a new mbuf failed.
> + * Thus, it allows that packet to be later retrieved if
> + * mbufs have been freed in the meantime.
> + * As a side effect, holding Rx descriptors instead of
> + * systematically giving them back to the NIC may lead to
> + * Rx ring exhaustion situations.
> + * However, the NIC can gracefully prevent such situations
> + * from happening by sending specific "back-pressure" flow control
> + * frames to its peer(s).
> + */
> + PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_id=%u "
> + "ext_err_stat=0x%08x pkt_len=%u",
Do not split format string
> + (uint16_t)rxq->port_id, (uint16_t)rxq->queue_id,
> + (uint16_t)rx_id, (uint32_t)staterr,
> + (uint16_t)rte_le_to_cpu_16(rxd.qw1.hi.len));
> +
> + nmb = rte_mbuf_raw_alloc(rxq->mb_pool);
> + if (nmb == NULL) {
> + PMD_RX_LOG(DEBUG, "Rx mbuf alloc failed port_id=%u "
> + "queue_id=%u", (uint16_t)rxq->port_id,
Do not split format string
> + (uint16_t)rxq->queue_id);
> + dev->data->rx_mbuf_alloc_failed++;
> + break;
> + }
> +
> + nb_hold++;
> + rxe = &sw_ring[rx_id];
> + rx_id++;
> + if (rx_id == rxq->nb_rx_desc)
> + rx_id = 0;
> +
> + /* Prefetch next mbuf while processing current one. */
> + rte_ngbe_prefetch(sw_ring[rx_id].mbuf);
> +
> + /*
> + * When next Rx descriptor is on a cache-line boundary,
> + * prefetch the next 4 Rx descriptors and the next 8 pointers
> + * to mbufs.
> + */
> + if ((rx_id & 0x3) == 0) {
> + rte_ngbe_prefetch(&rx_ring[rx_id]);
> + rte_ngbe_prefetch(&sw_ring[rx_id]);
> + }
> +
> + rxm = rxe->mbuf;
> + rxe->mbuf = nmb;
> + dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
> + NGBE_RXD_HDRADDR(rxdp, 0);
> + NGBE_RXD_PKTADDR(rxdp, dma_addr);
> +
> + /*
> + * Initialize the returned mbuf.
> + * setup generic mbuf fields:
> + * - number of segments,
> + * - next segment,
> + * - packet length,
> + * - Rx port identifier.
> + */
> + pkt_len = (uint16_t)(rte_le_to_cpu_16(rxd.qw1.hi.len));
> + rxm->data_off = RTE_PKTMBUF_HEADROOM;
> + rte_packet_prefetch((char *)rxm->buf_addr + rxm->data_off);
> + rxm->nb_segs = 1;
> + rxm->next = NULL;
> + rxm->pkt_len = pkt_len;
> + rxm->data_len = pkt_len;
> + rxm->port = rxq->port_id;
> +
> + /*
> + * Store the mbuf address into the next entry of the array
> + * of returned packets.
> + */
> + rx_pkts[nb_rx++] = rxm;
> + }
> + rxq->rx_tail = rx_id;
> +
> + /*
> + * If the number of free Rx descriptors is greater than the Rx free
> + * threshold of the queue, advance the Receive Descriptor Tail (RDT)
> + * register.
> + * Update the RDT with the value of the last processed Rx descriptor
> + * minus 1, to guarantee that the RDT register is never equal to the
> + * RDH register, which creates a "full" ring situation from the
> + * hardware point of view...
> + */
> + nb_hold = (uint16_t)(nb_hold + rxq->nb_rx_hold);
> + if (nb_hold > rxq->rx_free_thresh) {
> + PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_tail=%u "
> + "nb_hold=%u nb_rx=%u",
Do not split format string
> + (uint16_t)rxq->port_id, (uint16_t)rxq->queue_id,
> + (uint16_t)rx_id, (uint16_t)nb_hold,
> + (uint16_t)nb_rx);
> + rx_id = (uint16_t)((rx_id == 0) ?
> + (rxq->nb_rx_desc - 1) : (rx_id - 1));
> + ngbe_set32(rxq->rdt_reg_addr, rx_id);
> + nb_hold = 0;
> + }
> + rxq->nb_rx_hold = nb_hold;
> + return nb_rx;
> +}
> +
> +
> /*********************************************************************
> *
> * Queue management functions
[snip]
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [dpdk-dev] [PATCH v6 15/19] net/ngbe: add simple Tx flow
2021-06-17 11:00 ` [dpdk-dev] [PATCH v6 15/19] net/ngbe: add simple Tx flow Jiawen Wu
@ 2021-07-02 16:45 ` Andrew Rybchenko
0 siblings, 0 replies; 42+ messages in thread
From: Andrew Rybchenko @ 2021-07-02 16:45 UTC (permalink / raw)
To: Jiawen Wu, dev
On 6/17/21 2:00 PM, Jiawen Wu wrote:
> Initialize device with the simplest transmit functions.
>
> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
The patch cannot be tested before device start up.
So, it should go after corresponding patches.
[snip]
> +uint16_t
> +ngbe_xmit_pkts_simple(void *tx_queue, struct rte_mbuf **tx_pkts,
> + uint16_t nb_pkts)
> +{
> + uint16_t nb_tx;
> +
> + /* Try to transmit at least chunks of TX_MAX_BURST pkts */
> + if (likely(nb_pkts <= RTE_PMD_NGBE_TX_MAX_BURST))
> + return tx_xmit_pkts(tx_queue, tx_pkts, nb_pkts);
> +
> + /* transmit more than the max burst, in chunks of TX_MAX_BURST */
> + nb_tx = 0;
> + while (nb_pkts) {
Compare vs 0 explicitly
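For illustration, the explicit form would be:

while (nb_pkts != 0) {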
> + uint16_t ret, n;
> +
> + n = (uint16_t)RTE_MIN(nb_pkts, RTE_PMD_NGBE_TX_MAX_BURST);
> + ret = tx_xmit_pkts(tx_queue, &tx_pkts[nb_tx], n);
> + nb_tx = (uint16_t)(nb_tx + ret);
> + nb_pkts = (uint16_t)(nb_pkts - ret);
> + if (ret < n)
> + break;
> + }
> +
> + return nb_tx;
> +}
> +
> /*********************************************************************
> *
> * Rx functions
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [dpdk-dev] [PATCH v6 16/19] net/ngbe: add device start and stop operations
2021-06-17 11:00 ` [dpdk-dev] [PATCH v6 16/19] net/ngbe: add device start and stop operations Jiawen Wu
@ 2021-07-02 16:52 ` Andrew Rybchenko
0 siblings, 0 replies; 42+ messages in thread
From: Andrew Rybchenko @ 2021-07-02 16:52 UTC (permalink / raw)
To: Jiawen Wu, dev
On 6/17/21 2:00 PM, Jiawen Wu wrote:
> Setup MSI-X interrupt, complete PHY configuration and set device link
> speed to start device. Disable interrupt, stop hardware and clear queues
> to stop device.
>
> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
[snip]
> diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
> index 6b4d5ac65b..008326631e 100644
> --- a/drivers/net/ngbe/ngbe_ethdev.c
> +++ b/drivers/net/ngbe/ngbe_ethdev.c
> @@ -274,6 +308,250 @@ ngbe_dev_configure(struct rte_eth_dev *dev)
> return 0;
> }
>
> +static void
> +ngbe_dev_phy_intr_setup(struct rte_eth_dev *dev)
> +{
> + struct ngbe_hw *hw = ngbe_dev_hw(dev);
> + struct ngbe_interrupt *intr = ngbe_dev_intr(dev);
> +
> + wr32(hw, NGBE_GPIODIR, NGBE_GPIODIR_DDR(1));
> + wr32(hw, NGBE_GPIOINTEN, NGBE_GPIOINTEN_INT(3));
> + wr32(hw, NGBE_GPIOINTTYPE, NGBE_GPIOINTTYPE_LEVEL(0));
> + if (hw->phy.type == ngbe_phy_yt8521s_sfi)
> + wr32(hw, NGBE_GPIOINTPOL, NGBE_GPIOINTPOL_ACT(0));
> + else
> + wr32(hw, NGBE_GPIOINTPOL, NGBE_GPIOINTPOL_ACT(3));
> +
> + intr->mask_misc |= NGBE_ICRMISC_GPIO;
> +}
> +
> +/*
> + * Configure device link speed and setup link.
> + * It returns 0 on success.
> + */
> +static int
> +ngbe_dev_start(struct rte_eth_dev *dev)
> +{
> + struct ngbe_hw *hw = ngbe_dev_hw(dev);
> + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
> + struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
> + uint32_t intr_vector = 0;
> + int err;
> + bool link_up = false, negotiate = false;
> + uint32_t speed = 0;
> + uint32_t allowed_speeds = 0;
> + int status;
> + uint32_t *link_speeds;
> +
> + PMD_INIT_FUNC_TRACE();
> +
> + /* disable uio/vfio intr/eventfd mapping */
> + rte_intr_disable(intr_handle);
> +
> + /* stop adapter */
> + hw->adapter_stopped = 0;
> + ngbe_stop_hw(hw);
> +
> + /* reinitialize adapter
> + * this calls reset and start
> + */
Wrong style of the multi-line comment. Should be:
/*
* Long comment
*/
However, the above could simply fit on a single line.
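For example:

/* Reinitialize the adapter: this calls reset and start */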
> + hw->nb_rx_queues = dev->data->nb_rx_queues;
> + hw->nb_tx_queues = dev->data->nb_tx_queues;
> + status = ngbe_pf_reset_hw(hw);
> + if (status != 0)
> + return -1;
> + hw->mac.start_hw(hw);
> + hw->mac.get_link_status = true;
> +
> + ngbe_dev_phy_intr_setup(dev);
> +
> + /* check and configure queue intr-vector mapping */
> + if ((rte_intr_cap_multiple(intr_handle) ||
> + !RTE_ETH_DEV_SRIOV(dev).active) &&
> + dev->data->dev_conf.intr_conf.rxq != 0) {
> + intr_vector = dev->data->nb_rx_queues;
> + if (rte_intr_efd_enable(intr_handle, intr_vector))
> + return -1;
> + }
> +
> + if (rte_intr_dp_is_en(intr_handle) && intr_handle->intr_vec == NULL) {
> + intr_handle->intr_vec =
> + rte_zmalloc("intr_vec",
> + dev->data->nb_rx_queues * sizeof(int), 0);
> + if (intr_handle->intr_vec == NULL) {
> + PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
> + " intr_vec", dev->data->nb_rx_queues);
Do not split format string
> + return -ENOMEM;
> + }
> + }
> +
> + /* configure MSI-X for sleep until Rx interrupt */
> + ngbe_configure_msix(dev);
> +
> + /* initialize transmission unit */
> + ngbe_dev_tx_init(dev);
> +
> + /* This can fail when allocating mbufs for descriptor rings */
> + err = ngbe_dev_rx_init(dev);
> + if (err != 0) {
> + PMD_INIT_LOG(ERR, "Unable to initialize RX hardware");
RX -> Rx
> + goto error;
> + }
> +
> + err = ngbe_dev_rxtx_start(dev);
> + if (err < 0) {
> + PMD_INIT_LOG(ERR, "Unable to start rxtx queues");
> + goto error;
> + }
> +
> + err = hw->mac.check_link(hw, &speed, &link_up, 0);
> + if (err != 0)
> + goto error;
> + dev->data->dev_link.link_status = link_up;
> +
> + link_speeds = &dev->data->dev_conf.link_speeds;
> + if (*link_speeds == ETH_LINK_SPEED_AUTONEG)
> + negotiate = true;
> +
> + err = hw->mac.get_link_capabilities(hw, &speed, &negotiate);
> + if (err != 0)
> + goto error;
> +
> + allowed_speeds = 0;
> + if (hw->mac.default_speeds & NGBE_LINK_SPEED_1GB_FULL)
> + allowed_speeds |= ETH_LINK_SPEED_1G;
> + if (hw->mac.default_speeds & NGBE_LINK_SPEED_100M_FULL)
> + allowed_speeds |= ETH_LINK_SPEED_100M;
> + if (hw->mac.default_speeds & NGBE_LINK_SPEED_10M_FULL)
> + allowed_speeds |= ETH_LINK_SPEED_10M;
> +
> + if (*link_speeds & ~allowed_speeds) {
> + PMD_INIT_LOG(ERR, "Invalid link setting");
> + goto error;
> + }
> +
> + speed = 0x0;
> + if (*link_speeds == ETH_LINK_SPEED_AUTONEG) {
> + speed = hw->mac.default_speeds;
> + } else {
> + if (*link_speeds & ETH_LINK_SPEED_1G)
> + speed |= NGBE_LINK_SPEED_1GB_FULL;
> + if (*link_speeds & ETH_LINK_SPEED_100M)
> + speed |= NGBE_LINK_SPEED_100M_FULL;
> + if (*link_speeds & ETH_LINK_SPEED_10M)
> + speed |= NGBE_LINK_SPEED_10M_FULL;
> + }
> +
> + hw->phy.init_hw(hw);
> + err = hw->mac.setup_link(hw, speed, link_up);
> + if (err != 0)
> + goto error;
> +
> + if (rte_intr_allow_others(intr_handle)) {
> + ngbe_dev_misc_interrupt_setup(dev);
> + /* check if lsc interrupt is enabled */
> + if (dev->data->dev_conf.intr_conf.lsc != 0)
> + ngbe_dev_lsc_interrupt_setup(dev, TRUE);
> + else
> + ngbe_dev_lsc_interrupt_setup(dev, FALSE);
> + ngbe_dev_macsec_interrupt_setup(dev);
> + ngbe_set_ivar_map(hw, -1, 1, NGBE_MISC_VEC_ID);
> + } else {
> + rte_intr_callback_unregister(intr_handle,
> + ngbe_dev_interrupt_handler, dev);
> + if (dev->data->dev_conf.intr_conf.lsc != 0)
> + PMD_INIT_LOG(INFO, "lsc won't enable because of"
> + " no intr multiplex");
Do not split format string
lsc -> LSC
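A possible single-line form (illustrative wording only):

PMD_INIT_LOG(INFO, "LSC won't enable because of no intr multiplex");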
> + }
> +
> + /* check if rxq interrupt is enabled */
> + if (dev->data->dev_conf.intr_conf.rxq != 0 &&
> + rte_intr_dp_is_en(intr_handle))
> + ngbe_dev_rxq_interrupt_setup(dev);
> +
> + /* enable UIO/VFIO intr/eventfd mapping */
> + rte_intr_enable(intr_handle);
> +
> + /* resume enabled intr since HW reset */
> + ngbe_enable_intr(dev);
> +
> + if ((hw->sub_system_id & NGBE_OEM_MASK) == NGBE_LY_M88E1512_SFP ||
> + (hw->sub_system_id & NGBE_OEM_MASK) == NGBE_LY_YT8521S_SFP) {
> + /* gpio0 is used to power on/off control*/
> + wr32(hw, NGBE_GPIODATA, 0);
> + }
> +
> + /*
> + * Update link status right before return, because it may
> + * start link configuration process in a separate thread.
> + */
> + ngbe_dev_link_update(dev, 0);
> +
> + return 0;
> +
> +error:
> + PMD_INIT_LOG(ERR, "failure in dev start: %d", err);
> + ngbe_dev_clear_queues(dev);
> + return -EIO;
> +}
> +
> +/*
> + * Stop device: disable rx and tx functions to allow for reconfiguring.
> + */
> +static int
> +ngbe_dev_stop(struct rte_eth_dev *dev)
> +{
> + struct rte_eth_link link;
> + struct ngbe_hw *hw = ngbe_dev_hw(dev);
> + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
> + struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
> +
> + if (hw->adapter_stopped)
> + return 0;
> +
> + PMD_INIT_FUNC_TRACE();
> +
> + if ((hw->sub_system_id & NGBE_OEM_MASK) == NGBE_LY_M88E1512_SFP ||
> + (hw->sub_system_id & NGBE_OEM_MASK) == NGBE_LY_YT8521S_SFP) {
> + /* gpio0 is used to power on/off control*/
> + wr32(hw, NGBE_GPIODATA, NGBE_GPIOBIT_0);
> + }
> +
> + /* disable interrupts */
> + ngbe_disable_intr(hw);
> +
> + /* reset the NIC */
> + ngbe_pf_reset_hw(hw);
> + hw->adapter_stopped = 0;
> +
> + /* stop adapter */
> + ngbe_stop_hw(hw);
> +
> + ngbe_dev_clear_queues(dev);
> +
> + /* Clear recorded link status */
> + memset(&link, 0, sizeof(link));
> + rte_eth_linkstatus_set(dev, &link);
> +
> + if (!rte_intr_allow_others(intr_handle))
> + /* resume to the default handler */
> + rte_intr_callback_register(intr_handle,
> + ngbe_dev_interrupt_handler,
> + (void *)dev);
> +
> + /* Clean datapath event and queue/vec mapping */
> + rte_intr_efd_disable(intr_handle);
> + if (intr_handle->intr_vec != NULL) {
> + rte_free(intr_handle->intr_vec);
> + intr_handle->intr_vec = NULL;
> + }
> +
> + hw->adapter_stopped = true;
> + dev->data->dev_started = 0;
> +
> + return 0;
> +}
> +
> /*
> * Reset and stop device.
> */
> @@ -417,6 +695,106 @@ ngbe_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete)
> return ngbe_dev_link_update_share(dev, wait_to_complete);
> }
>
> +/**
> + * It clears the interrupt causes and enables the interrupt.
> + * It will be called only once during NIC initialization.
> + *
> + * @param dev
> + * Pointer to struct rte_eth_dev.
> + * @param on
> + * Enable or Disable.
> + *
> + * @return
> + * - On success, zero.
> + * - On failure, a negative value.
> + */
> +static int
> +ngbe_dev_lsc_interrupt_setup(struct rte_eth_dev *dev, uint8_t on)
> +{
> + struct ngbe_interrupt *intr = ngbe_dev_intr(dev);
> +
> + ngbe_dev_link_status_print(dev);
> + if (on != 0) {
> + intr->mask_misc |= NGBE_ICRMISC_PHY;
> + intr->mask_misc |= NGBE_ICRMISC_GPIO;
> + } else {
> + intr->mask_misc &= ~NGBE_ICRMISC_PHY;
> + intr->mask_misc &= ~NGBE_ICRMISC_GPIO;
> + }
> +
> + return 0;
> +}
> +
> +/**
> + * It clears the interrupt causes and enables the interrupt.
> + * It will be called only once during NIC initialization.
> + *
> + * @param dev
> + * Pointer to struct rte_eth_dev.
> + *
> + * @return
> + * - On success, zero.
> + * - On failure, a negative value.
> + */
> +static int
> +ngbe_dev_misc_interrupt_setup(struct rte_eth_dev *dev)
> +{
> + struct ngbe_interrupt *intr = ngbe_dev_intr(dev);
> + u64 mask;
> +
> + mask = NGBE_ICR_MASK;
> + mask &= (1ULL << NGBE_MISC_VEC_ID);
> + intr->mask |= mask;
> + intr->mask_misc |= NGBE_ICRMISC_GPIO;
> +
> + return 0;
> +}
> +
> +/**
> + * It clears the interrupt causes and enables the interrupt.
> + * It will be called only once during NIC initialization.
> + *
> + * @param dev
> + * Pointer to struct rte_eth_dev.
> + *
> + * @return
> + * - On success, zero.
> + * - On failure, a negative value.
> + */
> +static int
> +ngbe_dev_rxq_interrupt_setup(struct rte_eth_dev *dev)
> +{
> + struct ngbe_interrupt *intr = ngbe_dev_intr(dev);
> + u64 mask;
> +
> + mask = NGBE_ICR_MASK;
> + mask &= ~((1ULL << NGBE_RX_VEC_START) - 1);
> + intr->mask |= mask;
> +
> + return 0;
> +}
> +
> +/**
> + * It clears the interrupt causes and enables the interrupt.
> + * It will be called only once during NIC initialization.
> + *
> + * @param dev
> + * Pointer to struct rte_eth_dev.
> + *
> + * @return
> + * - On success, zero.
> + * - On failure, a negative value.
> + */
> +static int
> +ngbe_dev_macsec_interrupt_setup(struct rte_eth_dev *dev)
> +{
> + struct ngbe_interrupt *intr = ngbe_dev_intr(dev);
> +
> + intr->mask_misc |= NGBE_ICRMISC_LNKSEC;
> +
> + return 0;
> +}
> +
> /*
> * It reads ICR and sets flag for the link_update.
> *
> @@ -623,9 +1001,102 @@ ngbe_dev_interrupt_handler(void *param)
> ngbe_dev_interrupt_action(dev);
> }
>
> +/**
> + * set the IVAR registers, mapping interrupt causes to vectors
set -> Set
> + * @param hw
> + * pointer to ngbe_hw struct
> + * @direction
> + * 0 for Rx, 1 for Tx, -1 for other causes
> + * @queue
> + * queue to map the corresponding interrupt to
> + * @msix_vector
> + * the vector to map to the corresponding queue
> + */
> +void
> +ngbe_set_ivar_map(struct ngbe_hw *hw, int8_t direction,
> + uint8_t queue, uint8_t msix_vector)
> +{
> + uint32_t tmp, idx;
> +
> + if (direction == -1) {
> + /* other causes */
> + msix_vector |= NGBE_IVARMISC_VLD;
> + idx = 0;
> + tmp = rd32(hw, NGBE_IVARMISC);
> + tmp &= ~(0xFF << idx);
> + tmp |= (msix_vector << idx);
> + wr32(hw, NGBE_IVARMISC, tmp);
> + } else {
> + /* rx or tx causes */
> + /* Workaround for ICR lost */
> + idx = ((16 * (queue & 1)) + (8 * direction));
> + tmp = rd32(hw, NGBE_IVAR(queue >> 1));
> + tmp &= ~(0xFF << idx);
> + tmp |= (msix_vector << idx);
> + wr32(hw, NGBE_IVAR(queue >> 1), tmp);
> + }
> +}
> +
> +/**
> + * Sets up the hardware to properly generate MSI-X interrupts
> + * @hw
> + * board private structure
> + */
> +static void
> +ngbe_configure_msix(struct rte_eth_dev *dev)
> +{
> + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
> + struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
> + struct ngbe_hw *hw = ngbe_dev_hw(dev);
> + uint32_t queue_id, base = NGBE_MISC_VEC_ID;
> + uint32_t vec = NGBE_MISC_VEC_ID;
> + uint32_t gpie;
> +
> + /* won't configure MSI-X register if no mapping is done
> + * between intr vector and event fd
> + * but if MSI-X has been enabled already, need to configure
> + * auto clean, auto mask and throttling.
> + */
Wrong style of multi-line comment, should be:
/*
* Long comment here
*/
> + gpie = rd32(hw, NGBE_GPIE);
> + if (!rte_intr_dp_is_en(intr_handle) &&
> + !(gpie & NGBE_GPIE_MSIX))
> + return;
> +
> + if (rte_intr_allow_others(intr_handle)) {
> + base = NGBE_RX_VEC_START;
> + vec = base;
> + }
> +
> + /* setup GPIE for MSI-X mode */
> + gpie = rd32(hw, NGBE_GPIE);
> + gpie |= NGBE_GPIE_MSIX;
> + wr32(hw, NGBE_GPIE, gpie);
> +
> + /* Populate the IVAR table and set the ITR values to the
> + * corresponding register.
> + */
> + if (rte_intr_dp_is_en(intr_handle)) {
> + for (queue_id = 0; queue_id < dev->data->nb_rx_queues;
> + queue_id++) {
> + /* by default, 1:1 mapping */
> + ngbe_set_ivar_map(hw, 0, queue_id, vec);
> + intr_handle->intr_vec[queue_id] = vec;
> + if (vec < base + intr_handle->nb_efd - 1)
> + vec++;
> + }
> +
> + ngbe_set_ivar_map(hw, -1, 1, NGBE_MISC_VEC_ID);
> + }
> + wr32(hw, NGBE_ITR(NGBE_MISC_VEC_ID),
> + NGBE_ITR_IVAL_1G(NGBE_QUEUE_ITR_INTERVAL_DEFAULT)
> + | NGBE_ITR_WRDSA);
> +}
> +
> static const struct eth_dev_ops ngbe_eth_dev_ops = {
> .dev_configure = ngbe_dev_configure,
> .dev_infos_get = ngbe_dev_info_get,
> + .dev_start = ngbe_dev_start,
> + .dev_stop = ngbe_dev_stop,
> .link_update = ngbe_dev_link_update,
> .rx_queue_setup = ngbe_dev_rx_queue_setup,
> .rx_queue_release = ngbe_dev_rx_queue_release,
> diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
> index c52cac2ca1..5277f175cd 100644
> --- a/drivers/net/ngbe/ngbe_ethdev.h
> +++ b/drivers/net/ngbe/ngbe_ethdev.h
> @@ -13,7 +13,10 @@
> #define NGBE_FLAG_MACSEC (uint32_t)(1 << 3)
> #define NGBE_FLAG_NEED_LINK_CONFIG (uint32_t)(1 << 4)
>
> +#define NGBE_QUEUE_ITR_INTERVAL_DEFAULT 500 /* 500us */
> +
> #define NGBE_MISC_VEC_ID RTE_INTR_VEC_ZERO_OFFSET
> +#define NGBE_RX_VEC_START RTE_INTR_VEC_RXTX_OFFSET
>
> /* structure for interrupt relative data */
> struct ngbe_interrupt {
> @@ -59,6 +62,11 @@ ngbe_dev_intr(struct rte_eth_dev *dev)
> return intr;
> }
>
> +/*
> + * RX/TX function prototypes
RX -> Rx, TX -> Tx
[snip]
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [dpdk-dev] [PATCH v6 17/19] net/ngbe: add Tx queue start and stop
2021-06-17 11:00 ` [dpdk-dev] [PATCH v6 17/19] net/ngbe: add Tx queue start and stop Jiawen Wu
@ 2021-07-02 16:55 ` Andrew Rybchenko
0 siblings, 0 replies; 42+ messages in thread
From: Andrew Rybchenko @ 2021-07-02 16:55 UTC (permalink / raw)
To: Jiawen Wu, dev
On 6/17/21 2:00 PM, Jiawen Wu wrote:
> Initializes transmit unit, support to start and stop transmit unit for
> specified queues.
>
> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
[snip]
> diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c
> index d4680afaae..b05cb0ec34 100644
> --- a/drivers/net/ngbe/ngbe_rxtx.c
> +++ b/drivers/net/ngbe/ngbe_rxtx.c
> +/*
> + * Start Transmit Units for specified queue.
> + */
> +int __rte_cold
> +ngbe_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
> +{
> + struct ngbe_hw *hw = ngbe_dev_hw(dev);
> + struct ngbe_tx_queue *txq;
> + uint32_t txdctl;
> + int poll_ms;
> +
> + PMD_INIT_FUNC_TRACE();
> +
> + txq = dev->data->tx_queues[tx_queue_id];
> + wr32m(hw, NGBE_TXCFG(txq->reg_idx), NGBE_TXCFG_ENA, NGBE_TXCFG_ENA);
> +
> + /* Wait until Tx Enable ready */
> + poll_ms = RTE_NGBE_REGISTER_POLL_WAIT_10_MS;
> + do {
> + rte_delay_ms(1);
> + txdctl = rd32(hw, NGBE_TXCFG(txq->reg_idx));
> + } while (--poll_ms && !(txdctl & NGBE_TXCFG_ENA));
> + if (poll_ms == 0)
> + PMD_INIT_LOG(ERR, "Could not enable "
> + "Tx Queue %d", tx_queue_id);
Do not split format string
[snip]
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [dpdk-dev] [PATCH v6 01/19] net/ngbe: add build and doc infrastructure
2021-07-02 13:07 ` Andrew Rybchenko
@ 2021-07-05 2:52 ` Jiawen Wu
2021-07-05 8:54 ` Andrew Rybchenko
0 siblings, 1 reply; 42+ messages in thread
From: Jiawen Wu @ 2021-07-05 2:52 UTC (permalink / raw)
To: 'Andrew Rybchenko', dev
On July 2, 2021 9:08 PM, Andrew Rybchenko wrote:
> On 6/17/21 1:59 PM, Jiawen Wu wrote:
> > Adding bare minimum PMD library and doc build infrastructure and claim
> > the maintainership for ngbe PMD.
> >
> > Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
>
> Just one nit below.
>
> [snip]
>
> > diff --git a/drivers/net/ngbe/ngbe_ethdev.c
> > b/drivers/net/ngbe/ngbe_ethdev.c new file mode 100644 index
> > 0000000000..f8e19066de
> > --- /dev/null
> > +++ b/drivers/net/ngbe/ngbe_ethdev.c
> > @@ -0,0 +1,29 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
> > + * Copyright(c) 2010-2017 Intel Corporation */
> > +
> > +#include <errno.h>
> > +#include <rte_common.h>
> > +#include <ethdev_pci.h>
> > +
> > +static int
> > +eth_ngbe_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
> > + struct rte_pci_device *pci_dev)
> > +{
> > + RTE_SET_USED(pci_dev);
> > + return -EINVAL;
> > +}
> > +
> > +static int eth_ngbe_pci_remove(struct rte_pci_device *pci_dev) {
> > + RTE_SET_USED(pci_dev);
> > + return -EINVAL;
> > +}
>
> Why is a different style of unused-parameter suppression used
> above: __rte_unused vs RTE_SET_USED?
>
> [snip]
I guess 'pci_drv' will not be used in the future implementation of the probe function.
So I just marked it '__rte_unused' when I separated the patches.
Does this have to be corrected?
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [dpdk-dev] [PATCH v6 10/19] net/ngbe: support link update
2021-07-02 16:24 ` Andrew Rybchenko
@ 2021-07-05 7:10 ` Jiawen Wu
2021-07-05 8:58 ` Andrew Rybchenko
0 siblings, 1 reply; 42+ messages in thread
From: Jiawen Wu @ 2021-07-05 7:10 UTC (permalink / raw)
To: 'Andrew Rybchenko', dev
On July 3, 2021 12:24 AM, Andrew Rybchenko wrote:
> On 6/17/21 1:59 PM, Jiawen Wu wrote:
> > Register to handle device interrupt.
> >
> > Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
>
> [snip]
>
> > diff --git a/doc/guides/nics/ngbe.rst b/doc/guides/nics/ngbe.rst index
> > 54d0665db9..0918cc2918 100644
> > --- a/doc/guides/nics/ngbe.rst
> > +++ b/doc/guides/nics/ngbe.rst
> > @@ -8,6 +8,11 @@ The NGBE PMD (librte_pmd_ngbe) provides poll mode
> > driver support for Wangxun 1 Gigabit Ethernet NICs.
> >
> >
> > +Features
> > +--------
> > +
> > +- Link state information
> > +
>
> Two empty lines before the section.
>
> > Prerequisites
> > -------------
> >
>
> [snip]
>
> > diff --git a/drivers/net/ngbe/ngbe_ethdev.c
> > b/drivers/net/ngbe/ngbe_ethdev.c index deca64137d..c952023e8b 100644
> > --- a/drivers/net/ngbe/ngbe_ethdev.c
> > +++ b/drivers/net/ngbe/ngbe_ethdev.c
> > @@ -141,6 +175,23 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev,
> void *init_params __rte_unused)
> > return -ENOMEM;
> > }
> >
> > + ctrl_ext = rd32(hw, NGBE_PORTCTL);
> > + /* let hardware know driver is loaded */
> > + ctrl_ext |= NGBE_PORTCTL_DRVLOAD;
> > + /* Set PF Reset Done bit so PF/VF Mail Ops can work */
> > + ctrl_ext |= NGBE_PORTCTL_RSTDONE;
> > + wr32(hw, NGBE_PORTCTL, ctrl_ext);
> > + ngbe_flush(hw);
> > +
> > + rte_intr_callback_register(intr_handle,
> > + ngbe_dev_interrupt_handler, eth_dev);
> > +
> > + /* enable uio/vfio intr/eventfd mapping */
> > + rte_intr_enable(intr_handle);
> > +
> > + /* enable support intr */
> > + ngbe_enable_intr(eth_dev);
> > +
>
> I don't understand why it is done unconditionally regardless of the
> corresponding bit in dev_conf.
Sorry...I don't quite understand what you mean.
>
> > return 0;
> > }
> >
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [dpdk-dev] [PATCH v6 12/19] net/ngbe: add Rx queue setup and release
2021-07-02 16:35 ` Andrew Rybchenko
@ 2021-07-05 8:36 ` Jiawen Wu
2021-07-05 9:08 ` Andrew Rybchenko
0 siblings, 1 reply; 42+ messages in thread
From: Jiawen Wu @ 2021-07-05 8:36 UTC (permalink / raw)
To: 'Andrew Rybchenko', dev
On July 3, 2021 12:36 AM, Andrew Rybchenko wrote:
> On 6/17/21 1:59 PM, Jiawen Wu wrote:
> > Setup device Rx queue and release Rx queue.
> >
> > Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
> > ---
> > drivers/net/ngbe/meson.build | 1 +
> > drivers/net/ngbe/ngbe_ethdev.c | 37 +++-
> > drivers/net/ngbe/ngbe_ethdev.h | 16 ++
> > drivers/net/ngbe/ngbe_rxtx.c | 308 +++++++++++++++++++++++++++++++++
> > drivers/net/ngbe/ngbe_rxtx.h | 96 ++++++++++
> > 5 files changed, 457 insertions(+), 1 deletion(-) create mode 100644
> > drivers/net/ngbe/ngbe_rxtx.c create mode 100644
> > drivers/net/ngbe/ngbe_rxtx.h
> >
> > diff --git a/drivers/net/ngbe/meson.build
> > b/drivers/net/ngbe/meson.build index 81173fa7f0..9e75b82f1c 100644
> > --- a/drivers/net/ngbe/meson.build
> > +++ b/drivers/net/ngbe/meson.build
> > @@ -12,6 +12,7 @@ objs = [base_objs]
> >
> > sources = files(
> > 'ngbe_ethdev.c',
> > + 'ngbe_rxtx.c',
> > )
> >
> > includes += include_directories('base') diff --git
> > a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
> > index c952023e8b..e73606c5f3 100644
> > --- a/drivers/net/ngbe/ngbe_ethdev.c
> > +++ b/drivers/net/ngbe/ngbe_ethdev.c
> > @@ -12,6 +12,7 @@
> > #include "ngbe_logs.h"
> > #include "base/ngbe.h"
> > #include "ngbe_ethdev.h"
> > +#include "ngbe_rxtx.h"
> >
> > static int ngbe_dev_close(struct rte_eth_dev *dev);
> >
> > @@ -37,6 +38,12 @@ static const struct rte_pci_id pci_id_ngbe_map[] = {
> > { .vendor_id = 0, /* sentinel */ },
> > };
> >
> > +static const struct rte_eth_desc_lim rx_desc_lim = {
> > + .nb_max = NGBE_RING_DESC_MAX,
> > + .nb_min = NGBE_RING_DESC_MIN,
> > + .nb_align = NGBE_RXD_ALIGN,
> > +};
> > +
> > static const struct eth_dev_ops ngbe_eth_dev_ops;
> >
> > static inline void
> > @@ -241,12 +248,19 @@ static int
> > ngbe_dev_configure(struct rte_eth_dev *dev) {
> > struct ngbe_interrupt *intr = ngbe_dev_intr(dev);
> > + struct ngbe_adapter *adapter = ngbe_dev_adapter(dev);
> >
> > PMD_INIT_FUNC_TRACE();
> >
> > /* set flag to update link status after init */
> > intr->flags |= NGBE_FLAG_NEED_LINK_UPDATE;
> >
> > + /*
> > + * Initialize to TRUE. If any of Rx queues doesn't meet the bulk
> > + * allocation Rx preconditions we will reset it.
> > + */
> > + adapter->rx_bulk_alloc_allowed = true;
> > +
> > return 0;
> > }
> >
> > @@ -266,11 +280,30 @@ ngbe_dev_close(struct rte_eth_dev *dev) static
> > int ngbe_dev_info_get(struct rte_eth_dev *dev, struct
> > rte_eth_dev_info *dev_info) {
> > - RTE_SET_USED(dev);
> > + struct ngbe_hw *hw = ngbe_dev_hw(dev);
> > +
> > + dev_info->max_rx_queues = (uint16_t)hw->mac.max_rx_queues;
> > +
> > + dev_info->default_rxconf = (struct rte_eth_rxconf) {
> > + .rx_thresh = {
> > + .pthresh = NGBE_DEFAULT_RX_PTHRESH,
> > + .hthresh = NGBE_DEFAULT_RX_HTHRESH,
> > + .wthresh = NGBE_DEFAULT_RX_WTHRESH,
> > + },
> > + .rx_free_thresh = NGBE_DEFAULT_RX_FREE_THRESH,
> > + .rx_drop_en = 0,
> > + .offloads = 0,
> > + };
> > +
> > + dev_info->rx_desc_lim = rx_desc_lim;
> >
> > dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_100M |
> > ETH_LINK_SPEED_10M;
> >
> > + /* Driver-preferred Rx/Tx parameters */
> > + dev_info->default_rxportconf.nb_queues = 1;
> > + dev_info->default_rxportconf.ring_size = 256;
> > +
> > return 0;
> > }
> >
> > @@ -570,6 +603,8 @@ static const struct eth_dev_ops ngbe_eth_dev_ops =
> {
> > .dev_configure = ngbe_dev_configure,
> > .dev_infos_get = ngbe_dev_info_get,
> > .link_update = ngbe_dev_link_update,
> > + .rx_queue_setup = ngbe_dev_rx_queue_setup,
> > + .rx_queue_release = ngbe_dev_rx_queue_release,
> > };
> >
> > RTE_PMD_REGISTER_PCI(net_ngbe, rte_ngbe_pmd); diff --git
> > a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
> > index b67508a3de..6580d288c8 100644
> > --- a/drivers/net/ngbe/ngbe_ethdev.h
> > +++ b/drivers/net/ngbe/ngbe_ethdev.h
> > @@ -30,6 +30,7 @@ struct ngbe_interrupt { struct ngbe_adapter {
> > struct ngbe_hw hw;
> > struct ngbe_interrupt intr;
> > + bool rx_bulk_alloc_allowed;
>
> Shouldn't it be aligned as well as above fields?
>
> > };
> >
> > static inline struct ngbe_adapter *
> > @@ -58,6 +59,13 @@ ngbe_dev_intr(struct rte_eth_dev *dev)
> > return intr;
> > }
> >
> > +void ngbe_dev_rx_queue_release(void *rxq);
> > +
> > +int ngbe_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
> > + uint16_t nb_rx_desc, unsigned int socket_id,
> > + const struct rte_eth_rxconf *rx_conf,
> > + struct rte_mempool *mb_pool);
> > +
> > int
> > ngbe_dev_link_update_share(struct rte_eth_dev *dev,
> > int wait_to_complete);
> > @@ -66,4 +74,12 @@ ngbe_dev_link_update_share(struct rte_eth_dev *dev,
> > #define NGBE_LINK_UP_CHECK_TIMEOUT 1000 /* ms */
> > #define NGBE_VMDQ_NUM_UC_MAC 4096 /* Maximum nb. of UC MAC addr. */
> >
> > +/*
> > + * Default values for Rx/Tx configuration */ #define
> > +NGBE_DEFAULT_RX_FREE_THRESH 32
> > +#define NGBE_DEFAULT_RX_PTHRESH 8
> > +#define NGBE_DEFAULT_RX_HTHRESH 8
> > +#define NGBE_DEFAULT_RX_WTHRESH 0
> > +
> > #endif /* _NGBE_ETHDEV_H_ */
> > diff --git a/drivers/net/ngbe/ngbe_rxtx.c
> > b/drivers/net/ngbe/ngbe_rxtx.c new file mode 100644 index
> > 0000000000..df0b64dc01
> > --- /dev/null
> > +++ b/drivers/net/ngbe/ngbe_rxtx.c
> > @@ -0,0 +1,308 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
> > + * Copyright(c) 2010-2017 Intel Corporation */
> > +
> > +#include <sys/queue.h>
> > +
> > +#include <stdint.h>
> > +#include <rte_ethdev.h>
> > +#include <ethdev_driver.h>
> > +#include <rte_malloc.h>
> > +
> > +#include "ngbe_logs.h"
> > +#include "base/ngbe.h"
> > +#include "ngbe_ethdev.h"
> > +#include "ngbe_rxtx.h"
> > +
> > +/**
> > + * ngbe_free_sc_cluster - free the not-yet-completed scattered
> > +cluster
> > + *
> > + * The "next" pointer of the last segment of (not-yet-completed) RSC
> > +clusters
> > + * in the sw_sc_ring is not set to NULL but rather points to the next
> > + * mbuf of this RSC aggregation (that has not been completed yet and
> > +still
> > + * resides on the HW ring). So, instead of calling for
> > +rte_pktmbuf_free() we
> > + * will just free first "nb_segs" segments of the cluster explicitly
> > +by calling
> > + * an rte_pktmbuf_free_seg().
> > + *
> > + * @m scattered cluster head
> > + */
> > +static void __rte_cold
> > +ngbe_free_sc_cluster(struct rte_mbuf *m) {
> > + uint16_t i, nb_segs = m->nb_segs;
> > + struct rte_mbuf *next_seg;
> > +
> > + for (i = 0; i < nb_segs; i++) {
> > + next_seg = m->next;
> > + rte_pktmbuf_free_seg(m);
> > + m = next_seg;
> > + }
> > +}
> > +
> > +static void __rte_cold
> > +ngbe_rx_queue_release_mbufs(struct ngbe_rx_queue *rxq) {
> > + unsigned int i;
> > +
> > + if (rxq->sw_ring != NULL) {
> > + for (i = 0; i < rxq->nb_rx_desc; i++) {
> > + if (rxq->sw_ring[i].mbuf != NULL) {
> > + rte_pktmbuf_free_seg(rxq->sw_ring[i].mbuf);
> > + rxq->sw_ring[i].mbuf = NULL;
> > + }
> > + }
> > + if (rxq->rx_nb_avail) {
>
>> Compare vs 0 explicitly. However, it looks like the check is not required at
>> all; the body may be done unconditionally.
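>> For illustration, the unconditional form would be roughly:
>>
>> for (i = 0; i < rxq->rx_nb_avail; ++i)
>> 	rte_pktmbuf_free_seg(rxq->rx_stage[rxq->rx_next_avail + i]);
>> rxq->rx_nb_avail = 0;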
>
> > + for (i = 0; i < rxq->rx_nb_avail; ++i) {
> > + struct rte_mbuf *mb;
> > +
> > + mb = rxq->rx_stage[rxq->rx_next_avail + i];
> > + rte_pktmbuf_free_seg(mb);
> > + }
> > + rxq->rx_nb_avail = 0;
> > + }
> > + }
> > +
> > + if (rxq->sw_sc_ring != NULL)
> > + for (i = 0; i < rxq->nb_rx_desc; i++)
> > + if (rxq->sw_sc_ring[i].fbuf) {
>
> Compare vs NULL
>
> > + ngbe_free_sc_cluster(rxq->sw_sc_ring[i].fbuf);
> > + rxq->sw_sc_ring[i].fbuf = NULL;
> > + }
> > +}
> > +
> > +static void __rte_cold
> > +ngbe_rx_queue_release(struct ngbe_rx_queue *rxq) {
> > + if (rxq != NULL) {
> > + ngbe_rx_queue_release_mbufs(rxq);
> > + rte_free(rxq->sw_ring);
> > + rte_free(rxq->sw_sc_ring);
> > + rte_free(rxq);
> > + }
> > +}
> > +
> > +void __rte_cold
> > +ngbe_dev_rx_queue_release(void *rxq)
> > +{
> > + ngbe_rx_queue_release(rxq);
> > +}
> > +
> > +/*
> > + * Check if Rx Burst Bulk Alloc function can be used.
> > + * Return
> > + * 0: the preconditions are satisfied and the bulk allocation function
> > + * can be used.
> > + * -EINVAL: the preconditions are NOT satisfied and the default Rx burst
> > + * function must be used.
> > + */
> > +static inline int __rte_cold
> > +check_rx_burst_bulk_alloc_preconditions(struct ngbe_rx_queue *rxq) {
> > + int ret = 0;
> > +
> > + /*
> > + * Make sure the following pre-conditions are satisfied:
> > + * rxq->rx_free_thresh >= RTE_PMD_NGBE_RX_MAX_BURST
> > + * rxq->rx_free_thresh < rxq->nb_rx_desc
> > + * (rxq->nb_rx_desc % rxq->rx_free_thresh) == 0
> > + * Scattered packets are not supported. This should be checked
> > + * outside of this function.
> > + */
> > + if (!(rxq->rx_free_thresh >= RTE_PMD_NGBE_RX_MAX_BURST)) {
>
>> Isn't it simpler to read and understand:
> rxq->rx_free_thresh < RTE_PMD_NGBE_RX_MAX_BURST
>
> > + PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: "
> > + "rxq->rx_free_thresh=%d, "
> > + "RTE_PMD_NGBE_RX_MAX_BURST=%d",
>
> Do not split format string.
>
> > + rxq->rx_free_thresh, RTE_PMD_NGBE_RX_MAX_BURST);
> > + ret = -EINVAL;
> > + } else if (!(rxq->rx_free_thresh < rxq->nb_rx_desc)) {
>
> rxq->rx_free_thresh >= rxq->nb_rx_desc
> is simpler to read
>
> > + PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: "
> > + "rxq->rx_free_thresh=%d, "
> > + "rxq->nb_rx_desc=%d",
>
> Do not split format string.
>
> > + rxq->rx_free_thresh, rxq->nb_rx_desc);
> > + ret = -EINVAL;
> > + } else if (!((rxq->nb_rx_desc % rxq->rx_free_thresh) == 0)) {
>
> (rxq->nb_rx_desc % rxq->rx_free_thresh) != 0 is easier to read
>
> > + PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: "
> > + "rxq->nb_rx_desc=%d, "
> > + "rxq->rx_free_thresh=%d",
> > + rxq->nb_rx_desc, rxq->rx_free_thresh);
>
> Do not split format string
>
> > + ret = -EINVAL;
> > + }
> > +
> > + return ret;
> > +}
> > +
> > +/* Reset dynamic ngbe_rx_queue fields back to defaults */ static void
> > +__rte_cold ngbe_reset_rx_queue(struct ngbe_adapter *adapter, struct
> > +ngbe_rx_queue *rxq) {
> > + static const struct ngbe_rx_desc zeroed_desc = {
> > + {{0}, {0} }, {{0}, {0} } };
> > + unsigned int i;
> > + uint16_t len = rxq->nb_rx_desc;
> > +
> > + /*
> > + * By default, the Rx queue setup function allocates enough memory for
> > + * NGBE_RING_DESC_MAX. The Rx Burst bulk allocation function requires
> > + * extra memory at the end of the descriptor ring to be zero'd out.
> > + */
> > + if (adapter->rx_bulk_alloc_allowed)
> > + /* zero out extra memory */
> > + len += RTE_PMD_NGBE_RX_MAX_BURST;
> > +
> > + /*
> > + * Zero out HW ring memory. Zero out extra memory at the end of
> > + * the H/W ring so look-ahead logic in Rx Burst bulk alloc function
> > + * reads extra memory as zeros.
> > + */
> > + for (i = 0; i < len; i++)
> > + rxq->rx_ring[i] = zeroed_desc;
> > +
> > + /*
> > + * initialize extra software ring entries. Space for these extra
> > + * entries is always allocated
> > + */
> > + memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf));
> > + for (i = rxq->nb_rx_desc; i < len; ++i)
> > + rxq->sw_ring[i].mbuf = &rxq->fake_mbuf;
> > +
> > + rxq->rx_nb_avail = 0;
> > + rxq->rx_next_avail = 0;
> > + rxq->rx_free_trigger = (uint16_t)(rxq->rx_free_thresh - 1);
> > + rxq->rx_tail = 0;
> > + rxq->nb_rx_hold = 0;
> > + rxq->pkt_first_seg = NULL;
> > + rxq->pkt_last_seg = NULL;
> > +}
> > +
> > +int __rte_cold
> > +ngbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
> > + uint16_t queue_idx,
> > + uint16_t nb_desc,
> > + unsigned int socket_id,
> > + const struct rte_eth_rxconf *rx_conf,
> > + struct rte_mempool *mp)
> > +{
> > + const struct rte_memzone *rz;
> > + struct ngbe_rx_queue *rxq;
> > + struct ngbe_hw *hw;
> > + uint16_t len;
> > + struct ngbe_adapter *adapter = ngbe_dev_adapter(dev);
> > +
> > + PMD_INIT_FUNC_TRACE();
> > + hw = ngbe_dev_hw(dev);
> > +
> > + /*
> > + * Validate number of receive descriptors.
> > + * It must not exceed hardware maximum, and must be multiple
> > + * of NGBE_ALIGN.
> > + */
> > + if (nb_desc % NGBE_RXD_ALIGN != 0 ||
> > + nb_desc > NGBE_RING_DESC_MAX ||
> > + nb_desc < NGBE_RING_DESC_MIN) {
> > + return -EINVAL;
> > + }
>
> rte_eth_rx_queue_setup cares about it
>
I don't quite understand.
> > +
> > + /* Free memory prior to re-allocation if needed... */
> > + if (dev->data->rx_queues[queue_idx] != NULL) {
> > + ngbe_rx_queue_release(dev->data->rx_queues[queue_idx]);
> > + dev->data->rx_queues[queue_idx] = NULL;
> > + }
> > +
> > + /* First allocate the Rx queue data structure */
> > + rxq = rte_zmalloc_socket("ethdev RX queue",
> > + sizeof(struct ngbe_rx_queue),
> > + RTE_CACHE_LINE_SIZE, socket_id);
> > + if (rxq == NULL)
> > + return -ENOMEM;
> > + rxq->mb_pool = mp;
> > + rxq->nb_rx_desc = nb_desc;
> > + rxq->rx_free_thresh = rx_conf->rx_free_thresh;
> > + rxq->queue_id = queue_idx;
> > + rxq->reg_idx = queue_idx;
> > + rxq->port_id = dev->data->port_id;
> > + rxq->drop_en = rx_conf->rx_drop_en;
> > + rxq->rx_deferred_start = rx_conf->rx_deferred_start;
> > +
> > + /*
> > + * Allocate Rx ring hardware descriptors. A memzone large enough to
> > + * handle the maximum ring size is allocated in order to allow for
> > + * resizing in later calls to the queue setup function.
> > + */
> > + rz = rte_eth_dma_zone_reserve(dev, "rx_ring", queue_idx,
> > + RX_RING_SZ, NGBE_ALIGN, socket_id);
> > + if (rz == NULL) {
> > + ngbe_rx_queue_release(rxq);
> > + return -ENOMEM;
> > + }
> > +
> > + /*
> > + * Zero init all the descriptors in the ring.
> > + */
> > + memset(rz->addr, 0, RX_RING_SZ);
> > +
> > + rxq->rdt_reg_addr = NGBE_REG_ADDR(hw, NGBE_RXWP(rxq->reg_idx));
> > + rxq->rdh_reg_addr = NGBE_REG_ADDR(hw, NGBE_RXRP(rxq->reg_idx));
> > +
> > + rxq->rx_ring_phys_addr = TMZ_PADDR(rz);
> > + rxq->rx_ring = (struct ngbe_rx_desc *)TMZ_VADDR(rz);
> > +
> > + /*
> > + * Certain constraints must be met in order to use the bulk buffer
> > + * allocation Rx burst function. If any of Rx queues doesn't meet them
> > + * the feature should be disabled for the whole port.
> > + */
> > + if (check_rx_burst_bulk_alloc_preconditions(rxq)) {
> > + PMD_INIT_LOG(DEBUG, "queue[%d] doesn't meet Rx Bulk Alloc "
> > + "preconditions - canceling the feature for "
> > + "the whole port[%d]",
>
> Do not split format string.
>
> > + rxq->queue_id, rxq->port_id);
> > + adapter->rx_bulk_alloc_allowed = false;
> > + }
> > +
> > + /*
> > + * Allocate software ring. Allow for space at the end of the
> > + * S/W ring to make sure look-ahead logic in bulk alloc Rx burst
> > + * function does not access an invalid memory region.
> > + */
> > + len = nb_desc;
> > + if (adapter->rx_bulk_alloc_allowed)
> > + len += RTE_PMD_NGBE_RX_MAX_BURST;
> > +
> > + rxq->sw_ring = rte_zmalloc_socket("rxq->sw_ring",
> > + sizeof(struct ngbe_rx_entry) * len,
> > + RTE_CACHE_LINE_SIZE, socket_id);
> > + if (rxq->sw_ring == NULL) {
> > + ngbe_rx_queue_release(rxq);
> > + return -ENOMEM;
> > + }
> > +
> > + /*
> > + * Always allocate even if it's not going to be needed in order to
> > + * simplify the code.
> > + *
> > + * This ring is used in Scattered Rx cases and Scattered Rx may
> > + * be requested in ngbe_dev_rx_init(), which is called later from
> > + * dev_start() flow.
> > + */
> > + rxq->sw_sc_ring =
> > + rte_zmalloc_socket("rxq->sw_sc_ring",
> > + sizeof(struct ngbe_scattered_rx_entry) * len,
> > + RTE_CACHE_LINE_SIZE, socket_id);
> > + if (rxq->sw_sc_ring == NULL) {
> > + ngbe_rx_queue_release(rxq);
> > + return -ENOMEM;
> > + }
> > +
> > + PMD_INIT_LOG(DEBUG, "sw_ring=%p sw_sc_ring=%p hw_ring=%p "
> > + "dma_addr=0x%" PRIx64,
>
> Do not split format string.
>
> > + rxq->sw_ring, rxq->sw_sc_ring, rxq->rx_ring,
> > + rxq->rx_ring_phys_addr);
> > +
> > + dev->data->rx_queues[queue_idx] = rxq;
> > +
> > + ngbe_reset_rx_queue(adapter, rxq);
> > +
> > + return 0;
> > +}
> > +
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [dpdk-dev] [PATCH v6 01/19] net/ngbe: add build and doc infrastructure
2021-07-05 2:52 ` Jiawen Wu
@ 2021-07-05 8:54 ` Andrew Rybchenko
0 siblings, 0 replies; 42+ messages in thread
From: Andrew Rybchenko @ 2021-07-05 8:54 UTC (permalink / raw)
To: Jiawen Wu, dev
On 7/5/21 5:52 AM, Jiawen Wu wrote:
> On July 2, 2021 9:08 PM, Andrew Rybchenko wrote:
>> On 6/17/21 1:59 PM, Jiawen Wu wrote:
>>> Adding bare minimum PMD library and doc build infrastructure and claim
>>> the maintainership for ngbe PMD.
>>>
>>> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
>>
>> Just one nit below.
>>
>> [snip]
>>
>>> diff --git a/drivers/net/ngbe/ngbe_ethdev.c
>>> b/drivers/net/ngbe/ngbe_ethdev.c new file mode 100644 index
>>> 0000000000..f8e19066de
>>> --- /dev/null
>>> +++ b/drivers/net/ngbe/ngbe_ethdev.c
>>> @@ -0,0 +1,29 @@
>>> +/* SPDX-License-Identifier: BSD-3-Clause
>>> + * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
>>> + * Copyright(c) 2010-2017 Intel Corporation */
>>> +
>>> +#include <errno.h>
>>> +#include <rte_common.h>
>>> +#include <ethdev_pci.h>
>>> +
>>> +static int
>>> +eth_ngbe_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
>>> + struct rte_pci_device *pci_dev)
>>> +{
>>> + RTE_SET_USED(pci_dev);
>>> + return -EINVAL;
>>> +}
>>> +
>>> +static int eth_ngbe_pci_remove(struct rte_pci_device *pci_dev) {
>>> + RTE_SET_USED(pci_dev);
>>> + return -EINVAL;
>>> +}
>>
>> Why is a different style of unused-parameter suppression used
>> above: __rte_unused vs RTE_SET_USED?
>>
>> [snip]
>
> I guess 'pci_drv' will not be used in the future implementation of the probe function.
> So I just marked it '__rte_unused' when I separated the patches.
> Does this have to be corrected?
You never know. So, it is better to be consistent. Yes, please.
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [dpdk-dev] [PATCH v6 10/19] net/ngbe: support link update
2021-07-05 7:10 ` Jiawen Wu
@ 2021-07-05 8:58 ` Andrew Rybchenko
0 siblings, 0 replies; 42+ messages in thread
From: Andrew Rybchenko @ 2021-07-05 8:58 UTC (permalink / raw)
To: Jiawen Wu, dev
On 7/5/21 10:10 AM, Jiawen Wu wrote:
> On July 3, 2021 12:24 AM, Andrew Rybchenko wrote:
>> On 6/17/21 1:59 PM, Jiawen Wu wrote:
>>> Register to handle device interrupt.
>>>
>>> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
>>
>> [snip]
>>
>>> diff --git a/doc/guides/nics/ngbe.rst b/doc/guides/nics/ngbe.rst index
>>> 54d0665db9..0918cc2918 100644
>>> --- a/doc/guides/nics/ngbe.rst
>>> +++ b/doc/guides/nics/ngbe.rst
>>> @@ -8,6 +8,11 @@ The NGBE PMD (librte_pmd_ngbe) provides poll mode
>>> driver support for Wangxun 1 Gigabit Ethernet NICs.
>>>
>>>
>>> +Features
>>> +--------
>>> +
>>> +- Link state information
>>> +
>>
>> Two empty lines before the section.
>>
>>> Prerequisites
>>> -------------
>>>
>>
>> [snip]
>>
>>> diff --git a/drivers/net/ngbe/ngbe_ethdev.c
>>> b/drivers/net/ngbe/ngbe_ethdev.c index deca64137d..c952023e8b 100644
>>> --- a/drivers/net/ngbe/ngbe_ethdev.c
>>> +++ b/drivers/net/ngbe/ngbe_ethdev.c
>>> @@ -141,6 +175,23 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev,
>> void *init_params __rte_unused)
>>> return -ENOMEM;
>>> }
>>>
>>> + ctrl_ext = rd32(hw, NGBE_PORTCTL);
>>> + /* let hardware know driver is loaded */
>>> + ctrl_ext |= NGBE_PORTCTL_DRVLOAD;
>>> + /* Set PF Reset Done bit so PF/VF Mail Ops can work */
>>> + ctrl_ext |= NGBE_PORTCTL_RSTDONE;
>>> + wr32(hw, NGBE_PORTCTL, ctrl_ext);
>>> + ngbe_flush(hw);
>>> +
>>> + rte_intr_callback_register(intr_handle,
>>> + ngbe_dev_interrupt_handler, eth_dev);
>>> +
>>> + /* enable uio/vfio intr/eventfd mapping */
>>> + rte_intr_enable(intr_handle);
>>> +
>>> + /* enable support intr */
>>> + ngbe_enable_intr(eth_dev);
>>> +
>>
>> I don't understand why it is done unconditionally regardless of the
>> corresponding bit in dev_conf.
>
> Sorry...I don't quite understand what you mean.
Maybe it is OK and it is specific to your HW, but typically
interrupts are only enabled and used if requested via either
dev_conf->intr_conf.lsc for link status change OR
dev_conf->intr_conf.rxq for Rx queues.
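For illustration, the usual gating looks roughly like this (a sketch only; whether it lives
in the start path or the init path is up to the driver, and the helper names are the ones
from this series):

/* only hook up interrupts if the application asked for them */
if (dev->data->dev_conf.intr_conf.lsc != 0 ||
    dev->data->dev_conf.intr_conf.rxq != 0) {
	rte_intr_callback_register(intr_handle,
				   ngbe_dev_interrupt_handler, dev);
	rte_intr_enable(intr_handle);
	ngbe_enable_intr(dev);
}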
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [dpdk-dev] [PATCH v6 12/19] net/ngbe: add Rx queue setup and release
2021-07-05 8:36 ` Jiawen Wu
@ 2021-07-05 9:08 ` Andrew Rybchenko
2021-07-06 7:53 ` Jiawen Wu
0 siblings, 1 reply; 42+ messages in thread
From: Andrew Rybchenko @ 2021-07-05 9:08 UTC (permalink / raw)
To: Jiawen Wu, dev
On 7/5/21 11:36 AM, Jiawen Wu wrote:
> On July 3, 2021 12:36 AM, Andrew Rybchenko wrote:
>> On 6/17/21 1:59 PM, Jiawen Wu wrote:
>>> Setup device Rx queue and release Rx queue.
>>>
>>> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
>>> ---
>>> drivers/net/ngbe/meson.build | 1 +
>>> drivers/net/ngbe/ngbe_ethdev.c | 37 +++-
>>> drivers/net/ngbe/ngbe_ethdev.h | 16 ++
>>> drivers/net/ngbe/ngbe_rxtx.c | 308 +++++++++++++++++++++++++++++++++
>>> drivers/net/ngbe/ngbe_rxtx.h | 96 ++++++++++
>>> 5 files changed, 457 insertions(+), 1 deletion(-) create mode 100644
>>> drivers/net/ngbe/ngbe_rxtx.c create mode 100644
>>> drivers/net/ngbe/ngbe_rxtx.h
>>>
>>> diff --git a/drivers/net/ngbe/meson.build
>>> b/drivers/net/ngbe/meson.build index 81173fa7f0..9e75b82f1c 100644
>>> --- a/drivers/net/ngbe/meson.build
>>> +++ b/drivers/net/ngbe/meson.build
>>> @@ -12,6 +12,7 @@ objs = [base_objs]
>>>
>>> sources = files(
>>> 'ngbe_ethdev.c',
>>> + 'ngbe_rxtx.c',
>>> )
>>>
>>> includes += include_directories('base') diff --git
>>> a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
>>> index c952023e8b..e73606c5f3 100644
>>> --- a/drivers/net/ngbe/ngbe_ethdev.c
>>> +++ b/drivers/net/ngbe/ngbe_ethdev.c
>>> @@ -12,6 +12,7 @@
>>> #include "ngbe_logs.h"
>>> #include "base/ngbe.h"
>>> #include "ngbe_ethdev.h"
>>> +#include "ngbe_rxtx.h"
>>>
>>> static int ngbe_dev_close(struct rte_eth_dev *dev);
>>>
>>> @@ -37,6 +38,12 @@ static const struct rte_pci_id pci_id_ngbe_map[] = {
>>> { .vendor_id = 0, /* sentinel */ },
>>> };
>>>
>>> +static const struct rte_eth_desc_lim rx_desc_lim = {
>>> + .nb_max = NGBE_RING_DESC_MAX,
>>> + .nb_min = NGBE_RING_DESC_MIN,
>>> + .nb_align = NGBE_RXD_ALIGN,
>>> +};
>>> +
>>> static const struct eth_dev_ops ngbe_eth_dev_ops;
>>>
>>> static inline void
>>> @@ -241,12 +248,19 @@ static int
>>> ngbe_dev_configure(struct rte_eth_dev *dev) {
>>> struct ngbe_interrupt *intr = ngbe_dev_intr(dev);
>>> + struct ngbe_adapter *adapter = ngbe_dev_adapter(dev);
>>>
>>> PMD_INIT_FUNC_TRACE();
>>>
>>> /* set flag to update link status after init */
>>> intr->flags |= NGBE_FLAG_NEED_LINK_UPDATE;
>>>
>>> + /*
>>> + * Initialize to TRUE. If any of Rx queues doesn't meet the bulk
>>> + * allocation Rx preconditions we will reset it.
>>> + */
>>> + adapter->rx_bulk_alloc_allowed = true;
>>> +
>>> return 0;
>>> }
>>>
>>> @@ -266,11 +280,30 @@ ngbe_dev_close(struct rte_eth_dev *dev) static
>>> int ngbe_dev_info_get(struct rte_eth_dev *dev, struct
>>> rte_eth_dev_info *dev_info) {
>>> - RTE_SET_USED(dev);
>>> + struct ngbe_hw *hw = ngbe_dev_hw(dev);
>>> +
>>> + dev_info->max_rx_queues = (uint16_t)hw->mac.max_rx_queues;
>>> +
>>> + dev_info->default_rxconf = (struct rte_eth_rxconf) {
>>> + .rx_thresh = {
>>> + .pthresh = NGBE_DEFAULT_RX_PTHRESH,
>>> + .hthresh = NGBE_DEFAULT_RX_HTHRESH,
>>> + .wthresh = NGBE_DEFAULT_RX_WTHRESH,
>>> + },
>>> + .rx_free_thresh = NGBE_DEFAULT_RX_FREE_THRESH,
>>> + .rx_drop_en = 0,
>>> + .offloads = 0,
>>> + };
>>> +
>>> + dev_info->rx_desc_lim = rx_desc_lim;
>>>
>>> dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_100M |
>>> ETH_LINK_SPEED_10M;
>>>
>>> + /* Driver-preferred Rx/Tx parameters */
>>> + dev_info->default_rxportconf.nb_queues = 1;
>>> + dev_info->default_rxportconf.ring_size = 256;
>>> +
>>> return 0;
>>> }
>>>
>>> @@ -570,6 +603,8 @@ static const struct eth_dev_ops ngbe_eth_dev_ops =
>> {
>>> .dev_configure = ngbe_dev_configure,
>>> .dev_infos_get = ngbe_dev_info_get,
>>> .link_update = ngbe_dev_link_update,
>>> + .rx_queue_setup = ngbe_dev_rx_queue_setup,
>>> + .rx_queue_release = ngbe_dev_rx_queue_release,
>>> };
>>>
>>> RTE_PMD_REGISTER_PCI(net_ngbe, rte_ngbe_pmd);
>>>
>>> diff --git a/drivers/net/ngbe/ngbe_ethdev.h b/drivers/net/ngbe/ngbe_ethdev.h
>>> index b67508a3de..6580d288c8 100644
>>> --- a/drivers/net/ngbe/ngbe_ethdev.h
>>> +++ b/drivers/net/ngbe/ngbe_ethdev.h
>>> @@ -30,6 +30,7 @@ struct ngbe_interrupt {
>>> struct ngbe_adapter {
>>> struct ngbe_hw hw;
>>> struct ngbe_interrupt intr;
>>> + bool rx_bulk_alloc_allowed;
>>
>> Shouldn't it be aligned in the same way as the fields above?
>>
>>> };
>>>
>>> static inline struct ngbe_adapter *
>>> @@ -58,6 +59,13 @@ ngbe_dev_intr(struct rte_eth_dev *dev)
>>> return intr;
>>> }
>>>
>>> +void ngbe_dev_rx_queue_release(void *rxq);
>>> +
>>> +int ngbe_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
>>> + uint16_t nb_rx_desc, unsigned int socket_id,
>>> + const struct rte_eth_rxconf *rx_conf,
>>> + struct rte_mempool *mb_pool);
>>> +
>>> int
>>> ngbe_dev_link_update_share(struct rte_eth_dev *dev,
>>> int wait_to_complete);
>>> @@ -66,4 +74,12 @@ ngbe_dev_link_update_share(struct rte_eth_dev *dev,
>>> #define NGBE_LINK_UP_CHECK_TIMEOUT 1000 /* ms */
>>> #define NGBE_VMDQ_NUM_UC_MAC 4096 /* Maximum nb. of UC MAC addr. */
>>>
>>> +/*
>>> + * Default values for Rx/Tx configuration
>>> + */
>>> +#define NGBE_DEFAULT_RX_FREE_THRESH 32
>>> +#define NGBE_DEFAULT_RX_PTHRESH 8
>>> +#define NGBE_DEFAULT_RX_HTHRESH 8
>>> +#define NGBE_DEFAULT_RX_WTHRESH 0
>>> +
>>> #endif /* _NGBE_ETHDEV_H_ */
>>> diff --git a/drivers/net/ngbe/ngbe_rxtx.c b/drivers/net/ngbe/ngbe_rxtx.c
>>> new file mode 100644
>>> index 0000000000..df0b64dc01
>>> --- /dev/null
>>> +++ b/drivers/net/ngbe/ngbe_rxtx.c
>>> @@ -0,0 +1,308 @@
>>> +/* SPDX-License-Identifier: BSD-3-Clause
>>> + * Copyright(c) 2018-2020 Beijing WangXun Technology Co., Ltd.
>>> + * Copyright(c) 2010-2017 Intel Corporation */
>>> +
>>> +#include <sys/queue.h>
>>> +
>>> +#include <stdint.h>
>>> +#include <rte_ethdev.h>
>>> +#include <ethdev_driver.h>
>>> +#include <rte_malloc.h>
>>> +
>>> +#include "ngbe_logs.h"
>>> +#include "base/ngbe.h"
>>> +#include "ngbe_ethdev.h"
>>> +#include "ngbe_rxtx.h"
>>> +
>>> +/**
>>> + * ngbe_free_sc_cluster - free the not-yet-completed scattered cluster
>>> + *
>>> + * The "next" pointer of the last segment of (not-yet-completed) RSC clusters
>>> + * in the sw_sc_ring is not set to NULL but rather points to the next
>>> + * mbuf of this RSC aggregation (that has not been completed yet and still
>>> + * resides on the HW ring). So, instead of calling for rte_pktmbuf_free() we
>>> + * will just free first "nb_segs" segments of the cluster explicitly by calling
>>> + * an rte_pktmbuf_free_seg().
>>> + *
>>> + * @m scattered cluster head
>>> + */
>>> +static void __rte_cold
>>> +ngbe_free_sc_cluster(struct rte_mbuf *m)
>>> +{
>>> + uint16_t i, nb_segs = m->nb_segs;
>>> + struct rte_mbuf *next_seg;
>>> +
>>> + for (i = 0; i < nb_segs; i++) {
>>> + next_seg = m->next;
>>> + rte_pktmbuf_free_seg(m);
>>> + m = next_seg;
>>> + }
>>> +}
>>> +
>>> +static void __rte_cold
>>> +ngbe_rx_queue_release_mbufs(struct ngbe_rx_queue *rxq) {
>>> + unsigned int i;
>>> +
>>> + if (rxq->sw_ring != NULL) {
>>> + for (i = 0; i < rxq->nb_rx_desc; i++) {
>>> + if (rxq->sw_ring[i].mbuf != NULL) {
>>> + rte_pktmbuf_free_seg(rxq->sw_ring[i].mbuf);
>>> + rxq->sw_ring[i].mbuf = NULL;
>>> + }
>>> + }
>>> + if (rxq->rx_nb_avail) {
>>
>> Compare vs 0 explicitly. However, it looks like the check is not required at
>> all; the body may be executed unconditionally.
>>
>>> + for (i = 0; i < rxq->rx_nb_avail; ++i) {
>>> + struct rte_mbuf *mb;
>>> +
>>> + mb = rxq->rx_stage[rxq->rx_next_avail + i];
>>> + rte_pktmbuf_free_seg(mb);
>>> + }
>>> + rxq->rx_nb_avail = 0;
>>> + }
>>> + }
>>> +
>>> + if (rxq->sw_sc_ring != NULL)
>>> + for (i = 0; i < rxq->nb_rx_desc; i++)
>>> + if (rxq->sw_sc_ring[i].fbuf) {
>>
>> Compare vs NULL
>>
>>> + ngbe_free_sc_cluster(rxq->sw_sc_ring[i].fbuf);
>>> + rxq->sw_sc_ring[i].fbuf = NULL;
>>> + }
>>> +}
>>> +
>>> +static void __rte_cold
>>> +ngbe_rx_queue_release(struct ngbe_rx_queue *rxq) {
>>> + if (rxq != NULL) {
>>> + ngbe_rx_queue_release_mbufs(rxq);
>>> + rte_free(rxq->sw_ring);
>>> + rte_free(rxq->sw_sc_ring);
>>> + rte_free(rxq);
>>> + }
>>> +}
>>> +
>>> +void __rte_cold
>>> +ngbe_dev_rx_queue_release(void *rxq)
>>> +{
>>> + ngbe_rx_queue_release(rxq);
>>> +}
>>> +
>>> +/*
>>> + * Check if Rx Burst Bulk Alloc function can be used.
>>> + * Return
>>> + * 0: the preconditions are satisfied and the bulk allocation function
>>> + * can be used.
>>> + * -EINVAL: the preconditions are NOT satisfied and the default Rx burst
>>> + * function must be used.
>>> + */
>>> +static inline int __rte_cold
>>> +check_rx_burst_bulk_alloc_preconditions(struct ngbe_rx_queue *rxq) {
>>> + int ret = 0;
>>> +
>>> + /*
>>> + * Make sure the following pre-conditions are satisfied:
>>> + * rxq->rx_free_thresh >= RTE_PMD_NGBE_RX_MAX_BURST
>>> + * rxq->rx_free_thresh < rxq->nb_rx_desc
>>> + * (rxq->nb_rx_desc % rxq->rx_free_thresh) == 0
>>> + * Scattered packets are not supported. This should be checked
>>> + * outside of this function.
>>> + */
>>> + if (!(rxq->rx_free_thresh >= RTE_PMD_NGBE_RX_MAX_BURST)) {
>>
>> Isn't it simpler to read and understand:
>> rxq->rx_free_thresh < RTE_PMD_NGBE_RX_MAX_BURST
>>
>>> + PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: "
>>> + "rxq->rx_free_thresh=%d, "
>>> + "RTE_PMD_NGBE_RX_MAX_BURST=%d",
>>
>> Do not split format string.
>>
>>> + rxq->rx_free_thresh, RTE_PMD_NGBE_RX_MAX_BURST);
>>> + ret = -EINVAL;
>>> + } else if (!(rxq->rx_free_thresh < rxq->nb_rx_desc)) {
>>
>> rxq->rx_free_thresh >= rxq->nb_rx_desc
>> is simpler to read
>>
>>> + PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: "
>>> + "rxq->rx_free_thresh=%d, "
>>> + "rxq->nb_rx_desc=%d",
>>
>> Do not split format string.
>>
>>> + rxq->rx_free_thresh, rxq->nb_rx_desc);
>>> + ret = -EINVAL;
>>> + } else if (!((rxq->nb_rx_desc % rxq->rx_free_thresh) == 0)) {
>>
>> (rxq->nb_rx_desc % rxq->rx_free_thresh) != 0 is easier to read
>>
>>> + PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: "
>>> + "rxq->nb_rx_desc=%d, "
>>> + "rxq->rx_free_thresh=%d",
>>> + rxq->nb_rx_desc, rxq->rx_free_thresh);
>>
>> Do not split format string
>>
>>> + ret = -EINVAL;
>>> + }
>>> +
>>> + return ret;
>>> +}
>>> +
>>> +/* Reset dynamic ngbe_rx_queue fields back to defaults */
>>> +static void __rte_cold
>>> +ngbe_reset_rx_queue(struct ngbe_adapter *adapter, struct ngbe_rx_queue *rxq)
>>> +{
>>> + static const struct ngbe_rx_desc zeroed_desc = {
>>> + {{0}, {0} }, {{0}, {0} } };
>>> + unsigned int i;
>>> + uint16_t len = rxq->nb_rx_desc;
>>> +
>>> + /*
>>> + * By default, the Rx queue setup function allocates enough memory for
>>> + * NGBE_RING_DESC_MAX. The Rx Burst bulk allocation function requires
>>> + * extra memory at the end of the descriptor ring to be zero'd out.
>>> + */
>>> + if (adapter->rx_bulk_alloc_allowed)
>>> + /* zero out extra memory */
>>> + len += RTE_PMD_NGBE_RX_MAX_BURST;
>>> +
>>> + /*
>>> + * Zero out HW ring memory. Zero out extra memory at the end of
>>> + * the H/W ring so look-ahead logic in Rx Burst bulk alloc function
>>> + * reads extra memory as zeros.
>>> + */
>>> + for (i = 0; i < len; i++)
>>> + rxq->rx_ring[i] = zeroed_desc;
>>> +
>>> + /*
>>> + * initialize extra software ring entries. Space for these extra
>>> + * entries is always allocated
>>> + */
>>> + memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf));
>>> + for (i = rxq->nb_rx_desc; i < len; ++i)
>>> + rxq->sw_ring[i].mbuf = &rxq->fake_mbuf;
>>> +
>>> + rxq->rx_nb_avail = 0;
>>> + rxq->rx_next_avail = 0;
>>> + rxq->rx_free_trigger = (uint16_t)(rxq->rx_free_thresh - 1);
>>> + rxq->rx_tail = 0;
>>> + rxq->nb_rx_hold = 0;
>>> + rxq->pkt_first_seg = NULL;
>>> + rxq->pkt_last_seg = NULL;
>>> +}
>>> +
>>> +int __rte_cold
>>> +ngbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
>>> + uint16_t queue_idx,
>>> + uint16_t nb_desc,
>>> + unsigned int socket_id,
>>> + const struct rte_eth_rxconf *rx_conf,
>>> + struct rte_mempool *mp)
>>> +{
>>> + const struct rte_memzone *rz;
>>> + struct ngbe_rx_queue *rxq;
>>> + struct ngbe_hw *hw;
>>> + uint16_t len;
>>> + struct ngbe_adapter *adapter = ngbe_dev_adapter(dev);
>>> +
>>> + PMD_INIT_FUNC_TRACE();
>>> + hw = ngbe_dev_hw(dev);
>>> +
>>> + /*
>>> + * Validate number of receive descriptors.
>>> + * It must not exceed hardware maximum, and must be multiple
>>> + * of NGBE_ALIGN.
>>> + */
>>> + if (nb_desc % NGBE_RXD_ALIGN != 0 ||
>>> + nb_desc > NGBE_RING_DESC_MAX ||
>>> + nb_desc < NGBE_RING_DESC_MIN) {
>>> + return -EINVAL;
>>> + }
>>
>> rte_eth_rx_queue_setup cares about it
>>
>
> I don't quite understand.
ethdev does the check based on dev_info provided by the driver:
	if (nb_rx_desc > dev_info.rx_desc_lim.nb_max ||
	    nb_rx_desc < dev_info.rx_desc_lim.nb_min ||
	    nb_rx_desc % dev_info.rx_desc_lim.nb_align != 0) {
		RTE_ETHDEV_LOG(ERR,
			"Invalid value for nb_rx_desc(=%hu), should be: <= %hu, >= %hu, and a product of %hu\n",
			nb_rx_desc, dev_info.rx_desc_lim.nb_max,
			dev_info.rx_desc_lim.nb_min,
			dev_info.rx_desc_lim.nb_align);
		return -EINVAL;
	}
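In other words, rte_eth_rx_queue_setup() validates nb_rx_desc against the limits the PMD reports before the PMD's own rx_queue_setup callback is ever invoked. Roughly (a simplified sketch only, with error handling and offload checks omitted; not the exact ethdev code):

	int
	rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
			       uint16_t nb_rx_desc, unsigned int socket_id,
			       const struct rte_eth_rxconf *rx_conf,
			       struct rte_mempool *mb_pool)
	{
		struct rte_eth_dev *dev = &rte_eth_devices[port_id];
		struct rte_eth_dev_info dev_info;

		/* rx_desc_lim is filled by the PMD's dev_infos_get callback */
		rte_eth_dev_info_get(port_id, &dev_info);

		if (nb_rx_desc > dev_info.rx_desc_lim.nb_max ||
		    nb_rx_desc < dev_info.rx_desc_lim.nb_min ||
		    nb_rx_desc % dev_info.rx_desc_lim.nb_align != 0)
			return -EINVAL;

		/* Only after the check passes is the driver callback called */
		return (*dev->dev_ops->rx_queue_setup)(dev, rx_queue_id, nb_rx_desc,
						       socket_id, rx_conf, mb_pool);
	}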
* Re: [dpdk-dev] [PATCH v6 12/19] net/ngbe: add Rx queue setup and release
2021-07-05 9:08 ` Andrew Rybchenko
@ 2021-07-06 7:53 ` Jiawen Wu
2021-07-06 8:06 ` Andrew Rybchenko
0 siblings, 1 reply; 42+ messages in thread
From: Jiawen Wu @ 2021-07-06 7:53 UTC (permalink / raw)
To: 'Andrew Rybchenko', dev
On July 5, 2021 5:08 PM, Andrew Rybchenko wrote:
> On 7/5/21 11:36 AM, Jiawen Wu wrote:
> > On July 3, 2021 12:36 AM, Andrew Rybchenko wrote:
> >> On 6/17/21 1:59 PM, Jiawen Wu wrote:
> >>> Setup device Rx queue and release Rx queue.
> >>>
> >>> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
> >>> ---
> >>> drivers/net/ngbe/meson.build | 1 +
> >>> drivers/net/ngbe/ngbe_ethdev.c | 37 +++-
> >>> drivers/net/ngbe/ngbe_ethdev.h | 16 ++
> >>> drivers/net/ngbe/ngbe_rxtx.c | 308 +++++++++++++++++++++++++++++++++
> >>> drivers/net/ngbe/ngbe_rxtx.h | 96 ++++++++++
> >>> 5 files changed, 457 insertions(+), 1 deletion(-) create mode
> >>> 100644 drivers/net/ngbe/ngbe_rxtx.c create mode 100644
> >>> drivers/net/ngbe/ngbe_rxtx.h
> >>>
> >>> a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
> >>> index c952023e8b..e73606c5f3 100644
> >>> --- a/drivers/net/ngbe/ngbe_ethdev.c
> >>> +++ b/drivers/net/ngbe/ngbe_ethdev.c
> >>> @@ -37,6 +38,12 @@ static const struct rte_pci_id pci_id_ngbe_map[] = {
> >>> { .vendor_id = 0, /* sentinel */ }, };
> >>>
> >>> +static const struct rte_eth_desc_lim rx_desc_lim = {
> >>> + .nb_max = NGBE_RING_DESC_MAX,
> >>> + .nb_min = NGBE_RING_DESC_MIN,
> >>> + .nb_align = NGBE_RXD_ALIGN,
> >>> +};
> >>> +
> >>> static const struct eth_dev_ops ngbe_eth_dev_ops;
> >>>
> >>> static inline void
> >>> @@ -266,11 +280,30 @@ ngbe_dev_close(struct rte_eth_dev *dev)
> >>> static int ngbe_dev_info_get(struct rte_eth_dev *dev, struct
> >>> rte_eth_dev_info *dev_info) {
> >>> - RTE_SET_USED(dev);
> >>> + struct ngbe_hw *hw = ngbe_dev_hw(dev);
> >>> +
> >>> + dev_info->max_rx_queues = (uint16_t)hw->mac.max_rx_queues;
> >>> +
> >>> + dev_info->default_rxconf = (struct rte_eth_rxconf) {
> >>> + .rx_thresh = {
> >>> + .pthresh = NGBE_DEFAULT_RX_PTHRESH,
> >>> + .hthresh = NGBE_DEFAULT_RX_HTHRESH,
> >>> + .wthresh = NGBE_DEFAULT_RX_WTHRESH,
> >>> + },
> >>> + .rx_free_thresh = NGBE_DEFAULT_RX_FREE_THRESH,
> >>> + .rx_drop_en = 0,
> >>> + .offloads = 0,
> >>> + };
> >>> +
> >>> + dev_info->rx_desc_lim = rx_desc_lim;
> >>>
<...>
> >>> +int __rte_cold
> >>> +ngbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
> >>> + uint16_t queue_idx,
> >>> + uint16_t nb_desc,
> >>> + unsigned int socket_id,
> >>> + const struct rte_eth_rxconf *rx_conf,
> >>> + struct rte_mempool *mp)
> >>> +{
> >>> + const struct rte_memzone *rz;
> >>> + struct ngbe_rx_queue *rxq;
> >>> + struct ngbe_hw *hw;
> >>> + uint16_t len;
> >>> + struct ngbe_adapter *adapter = ngbe_dev_adapter(dev);
> >>> +
> >>> + PMD_INIT_FUNC_TRACE();
> >>> + hw = ngbe_dev_hw(dev);
> >>> +
> >>> + /*
> >>> + * Validate number of receive descriptors.
> >>> + * It must not exceed hardware maximum, and must be multiple
> >>> + * of NGBE_ALIGN.
> >>> + */
> >>> + if (nb_desc % NGBE_RXD_ALIGN != 0 ||
> >>> + nb_desc > NGBE_RING_DESC_MAX ||
> >>> + nb_desc < NGBE_RING_DESC_MIN) {
> >>> + return -EINVAL;
> >>> + }
> >>
> >> rte_eth_rx_queue_setup cares about it
> >>
> >
> > I don't quite understand.
>
> ethdev does the check based on dev_info provided by the driver:
>
> if (nb_rx_desc > dev_info.rx_desc_lim.nb_max ||
> nb_rx_desc < dev_info.rx_desc_lim.nb_min ||
> nb_rx_desc % dev_info.rx_desc_lim.nb_align != 0) {
> RTE_ETHDEV_LOG(ERR,
> "Invalid value for nb_rx_desc(=%hu), should be: <= %hu, >= %hu,
> and a product of %hu\n",
> nb_rx_desc, dev_info.rx_desc_lim.nb_max,
> dev_info.rx_desc_lim.nb_min,
> dev_info.rx_desc_lim.nb_align);
> return -EINVAL;
> }
'dev_info.rx_desc_lim' is filled in 'ngbe_dev_info_get' from the 'rx_desc_lim' structure, which is built from these macros.
So I think it is appropriate to check against the macros here as well. However, a log message can be added.
* Re: [dpdk-dev] [PATCH v6 12/19] net/ngbe: add Rx queue setup and release
2021-07-06 7:53 ` Jiawen Wu
@ 2021-07-06 8:06 ` Andrew Rybchenko
2021-07-06 8:33 ` Jiawen Wu
0 siblings, 1 reply; 42+ messages in thread
From: Andrew Rybchenko @ 2021-07-06 8:06 UTC (permalink / raw)
To: Jiawen Wu, dev
On 7/6/21 10:53 AM, Jiawen Wu wrote:
> On July 5, 2021 5:08 PM, Andrew Rybchenko wrote:
>> On 7/5/21 11:36 AM, Jiawen Wu wrote:
>>> On July 3, 2021 12:36 AM, Andrew Rybchenko wrote:
>>>> On 6/17/21 1:59 PM, Jiawen Wu wrote:
>>>>> Setup device Rx queue and release Rx queue.
>>>>>
>>>>> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
>>>>> ---
>>>>> drivers/net/ngbe/meson.build | 1 +
>>>>> drivers/net/ngbe/ngbe_ethdev.c | 37 +++-
>>>>> drivers/net/ngbe/ngbe_ethdev.h | 16 ++
>>>>> drivers/net/ngbe/ngbe_rxtx.c | 308 +++++++++++++++++++++++++++++++++
>>>>> drivers/net/ngbe/ngbe_rxtx.h | 96 ++++++++++
>>>>> 5 files changed, 457 insertions(+), 1 deletion(-) create mode
>>>>> 100644 drivers/net/ngbe/ngbe_rxtx.c create mode 100644
>>>>> drivers/net/ngbe/ngbe_rxtx.h
>>>>>
>>>>> a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
>>>>> index c952023e8b..e73606c5f3 100644
>>>>> --- a/drivers/net/ngbe/ngbe_ethdev.c
>>>>> +++ b/drivers/net/ngbe/ngbe_ethdev.c
>>>>> @@ -37,6 +38,12 @@ static const struct rte_pci_id pci_id_ngbe_map[] = {
>>>>> { .vendor_id = 0, /* sentinel */ }, };
>>>>>
>>>>> +static const struct rte_eth_desc_lim rx_desc_lim = {
>>>>> + .nb_max = NGBE_RING_DESC_MAX,
>>>>> + .nb_min = NGBE_RING_DESC_MIN,
>>>>> + .nb_align = NGBE_RXD_ALIGN,
>>>>> +};
>>>>> +
>>>>> static const struct eth_dev_ops ngbe_eth_dev_ops;
>>>>>
>>>>> static inline void
>>>>> @@ -266,11 +280,30 @@ ngbe_dev_close(struct rte_eth_dev *dev)
>>>>> static int ngbe_dev_info_get(struct rte_eth_dev *dev, struct
>>>>> rte_eth_dev_info *dev_info) {
>>>>> - RTE_SET_USED(dev);
>>>>> + struct ngbe_hw *hw = ngbe_dev_hw(dev);
>>>>> +
>>>>> + dev_info->max_rx_queues = (uint16_t)hw->mac.max_rx_queues;
>>>>> +
>>>>> + dev_info->default_rxconf = (struct rte_eth_rxconf) {
>>>>> + .rx_thresh = {
>>>>> + .pthresh = NGBE_DEFAULT_RX_PTHRESH,
>>>>> + .hthresh = NGBE_DEFAULT_RX_HTHRESH,
>>>>> + .wthresh = NGBE_DEFAULT_RX_WTHRESH,
>>>>> + },
>>>>> + .rx_free_thresh = NGBE_DEFAULT_RX_FREE_THRESH,
>>>>> + .rx_drop_en = 0,
>>>>> + .offloads = 0,
>>>>> + };
>>>>> +
>>>>> + dev_info->rx_desc_lim = rx_desc_lim;
>>>>>
> <...>
>>>>> +int __rte_cold
>>>>> +ngbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
>>>>> + uint16_t queue_idx,
>>>>> + uint16_t nb_desc,
>>>>> + unsigned int socket_id,
>>>>> + const struct rte_eth_rxconf *rx_conf,
>>>>> + struct rte_mempool *mp)
>>>>> +{
>>>>> + const struct rte_memzone *rz;
>>>>> + struct ngbe_rx_queue *rxq;
>>>>> + struct ngbe_hw *hw;
>>>>> + uint16_t len;
>>>>> + struct ngbe_adapter *adapter = ngbe_dev_adapter(dev);
>>>>> +
>>>>> + PMD_INIT_FUNC_TRACE();
>>>>> + hw = ngbe_dev_hw(dev);
>>>>> +
>>>>> + /*
>>>>> + * Validate number of receive descriptors.
>>>>> + * It must not exceed hardware maximum, and must be multiple
>>>>> + * of NGBE_ALIGN.
>>>>> + */
>>>>> + if (nb_desc % NGBE_RXD_ALIGN != 0 ||
>>>>> + nb_desc > NGBE_RING_DESC_MAX ||
>>>>> + nb_desc < NGBE_RING_DESC_MIN) {
>>>>> + return -EINVAL;
>>>>> + }
>>>>
>>>> rte_eth_rx_queue_setup cares about it
>>>>
>>>
>>> I don't quite understand.
>>
>> ethdev does the check based on dev_info provided by the driver:
>>
>> if (nb_rx_desc > dev_info.rx_desc_lim.nb_max ||
>> nb_rx_desc < dev_info.rx_desc_lim.nb_min ||
>> nb_rx_desc % dev_info.rx_desc_lim.nb_align != 0) {
>> RTE_ETHDEV_LOG(ERR,
>> "Invalid value for nb_rx_desc(=%hu), should be: <= %hu, >= %hu,
>> and a product of %hu\n",
>> nb_rx_desc, dev_info.rx_desc_lim.nb_max,
>> dev_info.rx_desc_lim.nb_min,
>> dev_info.rx_desc_lim.nb_align);
>> return -EINVAL;
>> }
>
> 'dev_info.rx_desc_lim' was set up in function 'ngbe_dev_info_get' with the struct 'rx_desc_lim' defined.
> So I think it is appropriate to check with macros. However, the log can be
Sorry, but I don't understand why you want to duplicate a
check which is already done in the generic code.
* Re: [dpdk-dev] [PATCH v6 12/19] net/ngbe: add Rx queue setup and release
2021-07-06 8:06 ` Andrew Rybchenko
@ 2021-07-06 8:33 ` Jiawen Wu
0 siblings, 0 replies; 42+ messages in thread
From: Jiawen Wu @ 2021-07-06 8:33 UTC (permalink / raw)
To: 'Andrew Rybchenko', dev
On July 6, 2021 4:06 PM, Andrew Rybchenko wrote:
> On 7/6/21 10:53 AM, Jiawen Wu wrote:
> > On July 5, 2021 5:08 PM, Andrew Rybchenko wrote:
> >> On 7/5/21 11:36 AM, Jiawen Wu wrote:
> >>> On July 3, 2021 12:36 AM, Andrew Rybchenko wrote:
> >>>> On 6/17/21 1:59 PM, Jiawen Wu wrote:
> >>>>> Setup device Rx queue and release Rx queue.
> >>>>>
> >>>>> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
> >>>>> ---
> >>>>> drivers/net/ngbe/meson.build | 1 +
> >>>>> drivers/net/ngbe/ngbe_ethdev.c | 37 +++-
> >>>>> drivers/net/ngbe/ngbe_ethdev.h | 16 ++
> >>>>> drivers/net/ngbe/ngbe_rxtx.c | 308
> +++++++++++++++++++++++++++++++++
> >>>>> drivers/net/ngbe/ngbe_rxtx.h | 96 ++++++++++
> >>>>> 5 files changed, 457 insertions(+), 1 deletion(-) create mode
> >>>>> 100644 drivers/net/ngbe/ngbe_rxtx.c create mode 100644
> >>>>> drivers/net/ngbe/ngbe_rxtx.h
> >>>>>
> >>>>> a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
> >>>>> index c952023e8b..e73606c5f3 100644
> >>>>> --- a/drivers/net/ngbe/ngbe_ethdev.c
> >>>>> +++ b/drivers/net/ngbe/ngbe_ethdev.c
> >>>>> @@ -37,6 +38,12 @@ static const struct rte_pci_id pci_id_ngbe_map[]
> = {
> >>>>> { .vendor_id = 0, /* sentinel */ }, };
> >>>>>
> >>>>> +static const struct rte_eth_desc_lim rx_desc_lim = {
> >>>>> + .nb_max = NGBE_RING_DESC_MAX,
> >>>>> + .nb_min = NGBE_RING_DESC_MIN,
> >>>>> + .nb_align = NGBE_RXD_ALIGN,
> >>>>> +};
> >>>>> +
> >>>>> static const struct eth_dev_ops ngbe_eth_dev_ops;
> >>>>>
> >>>>> static inline void
> >>>>> @@ -266,11 +280,30 @@ ngbe_dev_close(struct rte_eth_dev *dev)
> >>>>> static int ngbe_dev_info_get(struct rte_eth_dev *dev, struct
> >>>>> rte_eth_dev_info *dev_info) {
> >>>>> - RTE_SET_USED(dev);
> >>>>> + struct ngbe_hw *hw = ngbe_dev_hw(dev);
> >>>>> +
> >>>>> + dev_info->max_rx_queues = (uint16_t)hw->mac.max_rx_queues;
> >>>>> +
> >>>>> + dev_info->default_rxconf = (struct rte_eth_rxconf) {
> >>>>> + .rx_thresh = {
> >>>>> + .pthresh = NGBE_DEFAULT_RX_PTHRESH,
> >>>>> + .hthresh = NGBE_DEFAULT_RX_HTHRESH,
> >>>>> + .wthresh = NGBE_DEFAULT_RX_WTHRESH,
> >>>>> + },
> >>>>> + .rx_free_thresh = NGBE_DEFAULT_RX_FREE_THRESH,
> >>>>> + .rx_drop_en = 0,
> >>>>> + .offloads = 0,
> >>>>> + };
> >>>>> +
> >>>>> + dev_info->rx_desc_lim = rx_desc_lim;
> >>>>>
> > <...>
> >>>>> +int __rte_cold
> >>>>> +ngbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
> >>>>> + uint16_t queue_idx,
> >>>>> + uint16_t nb_desc,
> >>>>> + unsigned int socket_id,
> >>>>> + const struct rte_eth_rxconf *rx_conf,
> >>>>> + struct rte_mempool *mp)
> >>>>> +{
> >>>>> + const struct rte_memzone *rz;
> >>>>> + struct ngbe_rx_queue *rxq;
> >>>>> + struct ngbe_hw *hw;
> >>>>> + uint16_t len;
> >>>>> + struct ngbe_adapter *adapter = ngbe_dev_adapter(dev);
> >>>>> +
> >>>>> + PMD_INIT_FUNC_TRACE();
> >>>>> + hw = ngbe_dev_hw(dev);
> >>>>> +
> >>>>> + /*
> >>>>> + * Validate number of receive descriptors.
> >>>>> + * It must not exceed hardware maximum, and must be multiple
> >>>>> + * of NGBE_ALIGN.
> >>>>> + */
> >>>>> + if (nb_desc % NGBE_RXD_ALIGN != 0 ||
> >>>>> + nb_desc > NGBE_RING_DESC_MAX ||
> >>>>> + nb_desc < NGBE_RING_DESC_MIN) {
> >>>>> + return -EINVAL;
> >>>>> + }
> >>>>
> >>>> rte_eth_rx_queue_setup cares about it
> >>>>
> >>>
> >>> I don't quite understand.
> >>
> >> ethdev does the check based on dev_info provided by the driver:
> >>
> >> if (nb_rx_desc > dev_info.rx_desc_lim.nb_max ||
> >> nb_rx_desc < dev_info.rx_desc_lim.nb_min ||
> >> nb_rx_desc % dev_info.rx_desc_lim.nb_align != 0) {
> >> RTE_ETHDEV_LOG(ERR,
> >> "Invalid value for nb_rx_desc(=%hu), should be: <= %hu,
> >> >= %hu, and a product of %hu\n",
> >> nb_rx_desc, dev_info.rx_desc_lim.nb_max,
> >> dev_info.rx_desc_lim.nb_min,
> >> dev_info.rx_desc_lim.nb_align);
> >> return -EINVAL;
> >> }
> >
> > 'dev_info.rx_desc_lim' was set up in function 'ngbe_dev_info_get' with the
> struct 'rx_desc_lim' defined.
> > So I think it is appropriate to check with macros. However, the log
> > can be
> Sorry, but I don't understand why you want to duplicate a check which is
> already done in the generic code.
This check can be omitted, thanks.
The ngbe driver code was ported from some earlier code, so there may be some redundant code left.
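For reference, with the redundant check dropped, the start of the setup routine might look roughly like this (an illustrative sketch only, based on the patch above, not the final code):

	int __rte_cold
	ngbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
				uint16_t queue_idx,
				uint16_t nb_desc,
				unsigned int socket_id,
				const struct rte_eth_rxconf *rx_conf,
				struct rte_mempool *mp)
	{
		PMD_INIT_FUNC_TRACE();

		/*
		 * No local range check on nb_desc: rte_eth_rx_queue_setup()
		 * has already validated it against the rx_desc_lim limits
		 * (NGBE_RING_DESC_MIN/MAX, NGBE_RXD_ALIGN) reported by
		 * ngbe_dev_info_get(), so any value reaching this point is valid.
		 */

		/* ... allocate and initialize the Rx queue as in the patch ... */

		return 0;
	}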
Thread overview: 42+ messages
2021-06-17 10:59 [dpdk-dev] [PATCH v6 00/19] net: ngbe PMD Jiawen Wu
2021-06-17 10:59 ` [dpdk-dev] [PATCH v6 01/19] net/ngbe: add build and doc infrastructure Jiawen Wu
2021-07-02 13:07 ` Andrew Rybchenko
2021-07-05 2:52 ` Jiawen Wu
2021-07-05 8:54 ` Andrew Rybchenko
2021-06-17 10:59 ` [dpdk-dev] [PATCH v6 02/19] net/ngbe: support probe and remove Jiawen Wu
2021-07-02 13:22 ` Andrew Rybchenko
2021-06-17 10:59 ` [dpdk-dev] [PATCH v6 03/19] net/ngbe: add log type and error type Jiawen Wu
2021-07-02 13:22 ` Andrew Rybchenko
2021-06-17 10:59 ` [dpdk-dev] [PATCH v6 04/19] net/ngbe: define registers Jiawen Wu
2021-06-17 10:59 ` [dpdk-dev] [PATCH v6 05/19] net/ngbe: set MAC type and LAN ID with device initialization Jiawen Wu
2021-07-02 16:05 ` Andrew Rybchenko
2021-06-17 10:59 ` [dpdk-dev] [PATCH v6 06/19] net/ngbe: init and validate EEPROM Jiawen Wu
2021-06-17 10:59 ` [dpdk-dev] [PATCH v6 07/19] net/ngbe: add HW initialization Jiawen Wu
2021-07-02 16:08 ` Andrew Rybchenko
2021-06-17 10:59 ` [dpdk-dev] [PATCH v6 08/19] net/ngbe: identify PHY and reset PHY Jiawen Wu
2021-06-17 10:59 ` [dpdk-dev] [PATCH v6 09/19] net/ngbe: store MAC address Jiawen Wu
2021-07-02 16:12 ` Andrew Rybchenko
2021-06-17 10:59 ` [dpdk-dev] [PATCH v6 10/19] net/ngbe: support link update Jiawen Wu
2021-07-02 16:24 ` Andrew Rybchenko
2021-07-05 7:10 ` Jiawen Wu
2021-07-05 8:58 ` Andrew Rybchenko
2021-06-17 10:59 ` [dpdk-dev] [PATCH v6 11/19] net/ngbe: setup the check PHY link Jiawen Wu
2021-06-17 10:59 ` [dpdk-dev] [PATCH v6 12/19] net/ngbe: add Rx queue setup and release Jiawen Wu
2021-07-02 16:35 ` Andrew Rybchenko
2021-07-05 8:36 ` Jiawen Wu
2021-07-05 9:08 ` Andrew Rybchenko
2021-07-06 7:53 ` Jiawen Wu
2021-07-06 8:06 ` Andrew Rybchenko
2021-07-06 8:33 ` Jiawen Wu
2021-06-17 10:59 ` [dpdk-dev] [PATCH v6 13/19] net/ngbe: add Tx " Jiawen Wu
2021-07-02 16:39 ` Andrew Rybchenko
2021-06-17 11:00 ` [dpdk-dev] [PATCH v6 14/19] net/ngbe: add simple Rx flow Jiawen Wu
2021-07-02 16:42 ` Andrew Rybchenko
2021-06-17 11:00 ` [dpdk-dev] [PATCH v6 15/19] net/ngbe: add simple Tx flow Jiawen Wu
2021-07-02 16:45 ` Andrew Rybchenko
2021-06-17 11:00 ` [dpdk-dev] [PATCH v6 16/19] net/ngbe: add device start and stop operations Jiawen Wu
2021-07-02 16:52 ` Andrew Rybchenko
2021-06-17 11:00 ` [dpdk-dev] [PATCH v6 17/19] net/ngbe: add Tx queue start and stop Jiawen Wu
2021-07-02 16:55 ` Andrew Rybchenko
2021-06-17 11:00 ` [dpdk-dev] [PATCH v6 18/19] net/ngbe: add Rx " Jiawen Wu
2021-06-17 11:00 ` [dpdk-dev] [PATCH v6 19/19] net/ngbe: support to close and reset device Jiawen Wu