DPDK patches and discussions
* [dpdk-dev] [PATCH v1 01/42] net/txgbe: add build and doc infrastructure
@ 2020-09-01 11:50 Jiawen Wu
  2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 02/42] net/txgbe: add ethdev probe and remove Jiawen Wu
                   ` (41 more replies)
  0 siblings, 42 replies; 49+ messages in thread
From: Jiawen Wu @ 2020-09-01 11:50 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Add the bare minimum PMD library and doc build infrastructure, and claim maintainership of the txgbe PMD.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 MAINTAINERS                                 |  7 +++
 config/common_base                          | 10 +++
 doc/guides/nics/features/txgbe.ini          | 52 ++++++++++++++++
 doc/guides/nics/txgbe.rst                   | 67 +++++++++++++++++++++
 drivers/net/meson.build                     |  1 +
 drivers/net/txgbe/meson.build               |  9 +++
 drivers/net/txgbe/rte_pmd_txgbe_version.map |  3 +
 drivers/net/txgbe/txgbe_ethdev.c            |  4 ++
 drivers/net/txgbe/txgbe_ethdev.h            |  4 ++
 mk/rte.app.mk                               |  1 +
 10 files changed, 158 insertions(+)
 create mode 100644 doc/guides/nics/features/txgbe.ini
 create mode 100644 doc/guides/nics/txgbe.rst
 create mode 100644 drivers/net/txgbe/meson.build
 create mode 100644 drivers/net/txgbe/rte_pmd_txgbe_version.map
 create mode 100644 drivers/net/txgbe/txgbe_ethdev.c
 create mode 100644 drivers/net/txgbe/txgbe_ethdev.h

diff --git a/MAINTAINERS b/MAINTAINERS
index ed163f5d5..155ae17c4 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -882,6 +882,13 @@ F: drivers/net/vmxnet3/
 F: doc/guides/nics/vmxnet3.rst
 F: doc/guides/nics/features/vmxnet3.ini
 
+Wangxun txgbe
+M: Jiawen Wu <jiawenwu@trustnetic.com>
+M: Jian Wang <jianwang@trustnetic.com>
+F: drivers/net/txgbe/
+F: doc/guides/nics/txgbe.rst
+F: doc/guides/nics/features/txgbe.ini
+
 Vhost-user
 M: Maxime Coquelin <maxime.coquelin@redhat.com>
 M: Chenbo Xia <chenbo.xia@intel.com>
diff --git a/config/common_base b/config/common_base
index fbf0ee70c..037aea6a7 100644
--- a/config/common_base
+++ b/config/common_base
@@ -389,6 +389,16 @@ CONFIG_RTE_LIBRTE_MLX5_VDPA_PMD=n
 CONFIG_RTE_IBVERBS_LINK_DLOPEN=n
 CONFIG_RTE_IBVERBS_LINK_STATIC=n
 
+#
+# Compile burst-oriented TXGBE PMD driver
+#
+CONFIG_RTE_LIBRTE_TXGBE_PMD=y
+CONFIG_RTE_LIBRTE_TXGBE_DEBUG_RX=n
+CONFIG_RTE_LIBRTE_TXGBE_DEBUG_TX=n
+CONFIG_RTE_LIBRTE_TXGBE_DEBUG_TX_FREE=n
+CONFIG_RTE_LIBRTE_TXGBE_PF_DISABLE_STRIP_CRC=n
+CONFIG_RTE_LIBRTE_TXGBE_BYPASS=n
+
 #
 # Compile burst-oriented Netronome NFP PMD driver
 #
diff --git a/doc/guides/nics/features/txgbe.ini b/doc/guides/nics/features/txgbe.ini
new file mode 100644
index 000000000..4de458669
--- /dev/null
+++ b/doc/guides/nics/features/txgbe.ini
@@ -0,0 +1,52 @@
+;
+; Supported features of the 'txgbe' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Speed capabilities   = Y
+Link status          = Y
+Link status event    = Y
+Rx interrupt         = Y
+Queue start/stop     = Y
+MTU update           = Y
+Jumbo frame          = Y
+Scattered Rx         = Y
+LRO                  = Y
+TSO                  = Y
+Promiscuous mode     = Y
+Allmulticast mode    = Y
+Unicast MAC filter   = Y
+Multicast MAC filter = Y
+RSS hash             = Y
+RSS key update       = Y
+RSS reta update      = Y
+DCB                  = Y
+VLAN filter          = Y
+Flow control         = Y
+Flow API             = Y
+Rate limitation      = Y
+Traffic mirroring    = Y
+Inline crypto        = Y
+CRC offload          = P
+VLAN offload         = P
+QinQ offload         = P
+L3 checksum offload  = P
+L4 checksum offload  = P
+MACsec offload       = P
+Inner L3 checksum    = P
+Inner L4 checksum    = P
+Packet type parsing  = Y
+Timesync             = Y
+Rx descriptor status = Y
+Tx descriptor status = Y
+Basic stats          = Y
+Extended stats       = Y
+Stats per queue      = Y
+FW version           = Y
+EEPROM dump          = Y
+Module EEPROM dump   = Y
+Multiprocess aware   = Y
+BSD nic_uio          = Y
+Linux UIO            = Y
+Linux VFIO           = Y
diff --git a/doc/guides/nics/txgbe.rst b/doc/guides/nics/txgbe.rst
new file mode 100644
index 000000000..133e17bc0
--- /dev/null
+++ b/doc/guides/nics/txgbe.rst
@@ -0,0 +1,67 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+    Copyright(c) 2015-2020.
+
+TXGBE Poll Mode Driver
+======================
+
+The TXGBE PMD (librte_pmd_txgbe) provides poll mode driver support
+for Wangxun 10 Gigabit Ethernet NICs.
+
+Features
+--------
+
+- Multiple queues for TX and RX
+- Receive Side Scaling (RSS)
+- MAC/VLAN filtering
+- Packet type information
+- Checksum offload
+- VLAN/QinQ stripping and inserting
+- TSO offload
+- Promiscuous mode
+- Multicast mode
+- Port hardware statistics
+- Jumbo frames
+- Link state information
+- Link flow control
+- Interrupt mode for RX
+- Scatter for RX and gather for TX
+- DCB
+- IEEE 1588
+- FW version
+- LRO
+- Generic flow API
+
+Prerequisites
+-------------
+
+- Learn about Wangxun 10 Gigabit Ethernet NICs from the product page at
+  `<https://www.net-swift.com/c/product.html>`_.
+
+- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to setup the basic DPDK environment.
+
+Pre-Installation Configuration
+------------------------------
+
+Config File Options
+~~~~~~~~~~~~~~~~~~~
+
+The following options can be modified in the ``config`` file.
+
+- ``CONFIG_RTE_LIBRTE_TXGBE_PMD`` (default ``y``)
+
+  Toggle compilation of the ``librte_pmd_txgbe`` driver.
+
+- ``CONFIG_RTE_LIBRTE_TXGBE_DEBUG_*`` (default ``n``)
+
+  Toggle display of generic debugging messages.
+
+Driver compilation and testing
+------------------------------
+
+Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
+for details.
+
+Limitations or Known issues
+---------------------------
+Building with ICC is not supported yet.
+x86-32, Power8, ARMv7 and BSD are not supported yet.
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index d56b24051..8a240134f 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -50,6 +50,7 @@ drivers = ['af_packet',
 	'szedata2',
 	'tap',
 	'thunderx',
+	'txgbe',
 	'vdev_netvsc',
 	'vhost',
 	'virtio',
diff --git a/drivers/net/txgbe/meson.build b/drivers/net/txgbe/meson.build
new file mode 100644
index 000000000..605fcba78
--- /dev/null
+++ b/drivers/net/txgbe/meson.build
@@ -0,0 +1,9 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2015-2020
+
+cflags += ['-DRTE_LIBRTE_TXGBE_BYPASS']
+
+sources = files(
+	'txgbe_ethdev.c',
+)
+
diff --git a/drivers/net/txgbe/rte_pmd_txgbe_version.map b/drivers/net/txgbe/rte_pmd_txgbe_version.map
new file mode 100644
index 000000000..4a76d1d52
--- /dev/null
+++ b/drivers/net/txgbe/rte_pmd_txgbe_version.map
@@ -0,0 +1,3 @@
+DPDK_21 {
+	local: *;
+};
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
new file mode 100644
index 000000000..cb758762d
--- /dev/null
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -0,0 +1,4 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2015-2020
+ */
+
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
new file mode 100644
index 000000000..cb758762d
--- /dev/null
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -0,0 +1,4 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2015-2020
+ */
+
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index a54425997..85e3e8b52 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -239,6 +239,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_NFB_PMD)        += -lrte_pmd_nfb
 _LDLIBS-$(CONFIG_RTE_LIBRTE_NFB_PMD)        +=  $(shell command -v pkg-config > /dev/null 2>&1 && pkg-config --libs netcope-common)
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_TAP)        += -lrte_pmd_tap
 _LDLIBS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += -lrte_pmd_thunderx_nicvf
+_LDLIBS-$(CONFIG_RTE_LIBRTE_TXGBE_PMD)      += -lrte_pmd_txgbe
 _LDLIBS-$(CONFIG_RTE_LIBRTE_VDEV_NETVSC_PMD) += -lrte_pmd_vdev_netvsc
 _LDLIBS-$(CONFIG_RTE_LIBRTE_VIRTIO_PMD)     += -lrte_pmd_virtio
 ifeq ($(CONFIG_RTE_LIBRTE_VHOST),y)
-- 
2.18.4




^ permalink raw reply	[flat|nested] 49+ messages in thread
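As an aside for readers new to DPDK's meson build: a new PMD is wired in exactly as this patch does it, by adding the driver directory to the `drivers/net` list and giving it a per-driver `meson.build` that names its sources. An illustrative sketch of the pattern (abbreviated; the real files are in the diff above):

```meson
# 1) drivers/net/meson.build: add the driver directory to the list, e.g.
#        drivers = [..., 'thunderx', 'txgbe', 'vdev_netvsc', ...]
# 2) drivers/net/txgbe/meson.build: name the sources to compile
sources = files(
	'txgbe_ethdev.c',
)
```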

* [dpdk-dev] [PATCH v1 02/42] net/txgbe: add ethdev probe and remove
  2020-09-01 11:50 [dpdk-dev] [PATCH v1 01/42] net/txgbe: add build and doc infrastructure Jiawen Wu
@ 2020-09-01 11:50 ` Jiawen Wu
  2020-09-09 17:50   ` Ferruh Yigit
  2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 03/42] net/txgbe: add device init and uninit Jiawen Wu
                   ` (40 subsequent siblings)
  41 siblings, 1 reply; 49+ messages in thread
From: Jiawen Wu @ 2020-09-01 11:50 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Add basic PCIe ethdev probe and remove functions.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/base/meson.build       |  21 +++
 drivers/net/txgbe/base/txgbe.h           |  10 ++
 drivers/net/txgbe/base/txgbe_devids.h    |  40 ++++++
 drivers/net/txgbe/base/txgbe_type.h      |  14 ++
 drivers/net/txgbe/meson.build            |   5 +
 drivers/net/txgbe/txgbe_ethdev.c         | 161 +++++++++++++++++++++++
 drivers/net/txgbe/txgbe_ethdev.h         |  37 ++++++
 drivers/net/txgbe/txgbe_logs.h           | 123 +++++++++++++++++
 drivers/net/txgbe/txgbe_vf_representor.c |  27 ++++
 9 files changed, 438 insertions(+)
 create mode 100644 drivers/net/txgbe/base/meson.build
 create mode 100644 drivers/net/txgbe/base/txgbe.h
 create mode 100644 drivers/net/txgbe/base/txgbe_devids.h
 create mode 100644 drivers/net/txgbe/base/txgbe_type.h
 create mode 100644 drivers/net/txgbe/txgbe_logs.h
 create mode 100644 drivers/net/txgbe/txgbe_vf_representor.c

diff --git a/drivers/net/txgbe/base/meson.build b/drivers/net/txgbe/base/meson.build
new file mode 100644
index 000000000..8cc8395d1
--- /dev/null
+++ b/drivers/net/txgbe/base/meson.build
@@ -0,0 +1,21 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2015-2020
+
+sources = [
+
+]
+
+error_cflags = ['-Wno-unused-value',
+				'-Wno-unused-parameter',
+				'-Wno-unused-but-set-variable']
+c_args = cflags
+foreach flag: error_cflags
+	if cc.has_argument(flag)
+		c_args += flag
+	endif
+endforeach
+
+base_lib = static_library('txgbe_base', sources,
+	dependencies: static_rte_eal,
+	c_args: c_args)
+base_objs = base_lib.extract_all_objects()
diff --git a/drivers/net/txgbe/base/txgbe.h b/drivers/net/txgbe/base/txgbe.h
new file mode 100644
index 000000000..9aee9738a
--- /dev/null
+++ b/drivers/net/txgbe/base/txgbe.h
@@ -0,0 +1,10 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2015-2020
+ */
+
+#ifndef _TXGBE_H_
+#define _TXGBE_H_
+
+#include "txgbe_type.h"
+
+#endif /* _TXGBE_H_ */
diff --git a/drivers/net/txgbe/base/txgbe_devids.h b/drivers/net/txgbe/base/txgbe_devids.h
new file mode 100644
index 000000000..744f2f3b5
--- /dev/null
+++ b/drivers/net/txgbe/base/txgbe_devids.h
@@ -0,0 +1,40 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2015-2020
+ */
+
+#ifndef _TXGBE_DEVIDS_H_
+#define _TXGBE_DEVIDS_H_
+
+/*
+ * Vendor ID
+ */
+#ifndef PCI_VENDOR_ID_WANGXUN
+#define PCI_VENDOR_ID_WANGXUN                   0x8088
+#endif
+
+/*
+ * Device IDs
+ */
+#define TXGBE_DEV_ID_RAPTOR_VF                  0x1000
+#define TXGBE_DEV_ID_RAPTOR_SFP                 0x1001 /* fiber */
+#define TXGBE_DEV_ID_RAPTOR_KR_KX_KX4           0x1002 /* backplane */
+#define TXGBE_DEV_ID_RAPTOR_XAUI                0x1003 /* copper */
+#define TXGBE_DEV_ID_RAPTOR_SGMII               0x1004 /* copper */
+#define TXGBE_DEV_ID_RAPTOR_QSFP                0x1011 /* fiber */
+#define TXGBE_DEV_ID_RAPTOR_VF_HV               0x2000
+#define TXGBE_DEV_ID_RAPTOR_T3_LOM              0x2001
+
+#define TXGBE_DEV_ID_WX1820_SFP                 0x2001
+
+/*
+ * Subdevice IDs
+ */
+#define TXGBE_SUBDEV_ID_RAPTOR			0x0000
+#define TXGBE_SUBDEV_ID_MPW			0x0001
+
+#define TXGBE_ETHERTYPE_FLOW_CTRL   0x8808
+#define TXGBE_ETHERTYPE_IEEE_VLAN   0x8100  /* 802.1q protocol */
+
+#define TXGBE_VXLAN_PORT 4789
+
+#endif /* _TXGBE_DEVIDS_H_ */
diff --git a/drivers/net/txgbe/base/txgbe_type.h b/drivers/net/txgbe/base/txgbe_type.h
new file mode 100644
index 000000000..8ed324a1b
--- /dev/null
+++ b/drivers/net/txgbe/base/txgbe_type.h
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2015-2020
+ */
+
+#ifndef _TXGBE_TYPE_H_
+#define _TXGBE_TYPE_H_
+
+#include "txgbe_devids.h"
+
+struct txgbe_hw {
+	void *back;
+};
+
+#endif /* _TXGBE_TYPE_H_ */
diff --git a/drivers/net/txgbe/meson.build b/drivers/net/txgbe/meson.build
index 605fcba78..f45b04b1c 100644
--- a/drivers/net/txgbe/meson.build
+++ b/drivers/net/txgbe/meson.build
@@ -3,7 +3,12 @@
 
 cflags += ['-DRTE_LIBRTE_TXGBE_BYPASS']
 
+subdir('base')
+objs = [base_objs]
+
 sources = files(
 	'txgbe_ethdev.c',
+	'txgbe_vf_representor.c',
 )
 
+includes += include_directories('base')
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index cb758762d..86d2b9064 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -2,3 +2,164 @@
  * Copyright(c) 2015-2020
  */
 
+#include <sys/queue.h>
+#include <stdio.h>
+#include <errno.h>
+#include <stdint.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdarg.h>
+#include <inttypes.h>
+#include <netinet/in.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_cycles.h>
+#include <rte_ethdev_driver.h>
+#include <rte_ethdev_pci.h>
+
+#include <rte_interrupts.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_pci.h>
+#include <rte_branch_prediction.h>
+#include <rte_memory.h>
+#include <rte_eal.h>
+#include <rte_alarm.h>
+#include <rte_ether.h>
+#include <rte_malloc.h>
+#include <rte_random.h>
+#include <rte_dev.h>
+
+#include "txgbe_logs.h"
+#include "base/txgbe.h"
+#include "txgbe_ethdev.h"
+
+/*
+ * The set of PCI devices this driver supports
+ */
+static const struct rte_pci_id pci_id_txgbe_map[] = {
+	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, TXGBE_DEV_ID_RAPTOR_SFP) },
+	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_WANGXUN, TXGBE_DEV_ID_WX1820_SFP) },
+	{ .vendor_id = 0, /* sentinel */ },
+};
+
+static int
+eth_txgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
+{
+
+	RTE_SET_USED(eth_dev);
+
+	return 0;
+}
+
+static int
+eth_txgbe_dev_uninit(struct rte_eth_dev *eth_dev)
+{
+
+	RTE_SET_USED(eth_dev);
+
+	return 0;
+}
+
+static int
+eth_txgbe_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+		struct rte_pci_device *pci_dev)
+{
+	char name[RTE_ETH_NAME_MAX_LEN];
+	struct rte_eth_dev *pf_ethdev;
+	struct rte_eth_devargs eth_da;
+	int i, retval;
+
+	if (pci_dev->device.devargs) {
+		retval = rte_eth_devargs_parse(pci_dev->device.devargs->args,
+				&eth_da);
+		if (retval)
+			return retval;
+	} else
+		memset(&eth_da, 0, sizeof(eth_da));
+
+	retval = rte_eth_dev_create(&pci_dev->device, pci_dev->device.name,
+		sizeof(struct txgbe_adapter),
+		eth_dev_pci_specific_init, pci_dev,
+		eth_txgbe_dev_init, NULL);
+
+	if (retval || eth_da.nb_representor_ports < 1)
+		return retval;
+
+	pf_ethdev = rte_eth_dev_allocated(pci_dev->device.name);
+	if (pf_ethdev == NULL)
+		return -ENODEV;
+
+	/* probe VF representor ports */
+	for (i = 0; i < eth_da.nb_representor_ports; i++) {
+		struct txgbe_vf_info *vfinfo;
+		struct txgbe_vf_representor representor;
+
+		vfinfo = *TXGBE_DEV_VFDATA(pf_ethdev);
+		if (vfinfo == NULL) {
+			PMD_DRV_LOG(ERR,
+				"no virtual functions supported by PF");
+			break;
+		}
+
+		representor.vf_id = eth_da.representor_ports[i];
+		representor.switch_domain_id = vfinfo->switch_domain_id;
+		representor.pf_ethdev = pf_ethdev;
+
+		/* representor port net_bdf_port */
+		snprintf(name, sizeof(name), "net_%s_representor_%d",
+			pci_dev->device.name,
+			eth_da.representor_ports[i]);
+
+		retval = rte_eth_dev_create(&pci_dev->device, name,
+			sizeof(struct txgbe_vf_representor), NULL, NULL,
+			txgbe_vf_representor_init, &representor);
+
+		if (retval)
+			PMD_DRV_LOG(ERR, "failed to create txgbe vf "
+				"representor %s.", name);
+	}
+
+	return 0;
+}
+
+static int eth_txgbe_pci_remove(struct rte_pci_device *pci_dev)
+{
+	struct rte_eth_dev *ethdev;
+
+	ethdev = rte_eth_dev_allocated(pci_dev->device.name);
+	if (!ethdev)
+		return -ENODEV;
+
+	if (ethdev->data->dev_flags & RTE_ETH_DEV_REPRESENTOR)
+		return rte_eth_dev_destroy(ethdev, txgbe_vf_representor_uninit);
+	else
+		return rte_eth_dev_destroy(ethdev, eth_txgbe_dev_uninit);
+}
+
+static struct rte_pci_driver rte_txgbe_pmd = {
+	.id_table = pci_id_txgbe_map,
+	.drv_flags = RTE_PCI_DRV_NEED_MAPPING |
+		     RTE_PCI_DRV_INTR_LSC,
+	.probe = eth_txgbe_pci_probe,
+	.remove = eth_txgbe_pci_remove,
+};
+
+
+RTE_PMD_REGISTER_PCI(net_txgbe, rte_txgbe_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(net_txgbe, pci_id_txgbe_map);
+RTE_PMD_REGISTER_KMOD_DEP(net_txgbe, "* igb_uio | uio_pci_generic | vfio-pci");
+
+RTE_LOG_REGISTER(txgbe_logtype_init, pmd.net.txgbe.init, NOTICE);
+RTE_LOG_REGISTER(txgbe_logtype_driver, pmd.net.txgbe.driver, NOTICE);
+
+#ifdef RTE_LIBRTE_TXGBE_DEBUG_RX
+	RTE_LOG_REGISTER(txgbe_logtype_rx, pmd.net.txgbe.rx, DEBUG);
+#endif
+#ifdef RTE_LIBRTE_TXGBE_DEBUG_TX
+	RTE_LOG_REGISTER(txgbe_logtype_tx, pmd.net.txgbe.tx, DEBUG);
+#endif
+
+#ifdef RTE_LIBRTE_TXGBE_DEBUG_TX_FREE
+	RTE_LOG_REGISTER(txgbe_logtype_tx_free, pmd.net.txgbe.tx_free, DEBUG);
+#endif
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index cb758762d..8dbc4a64a 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -2,3 +2,40 @@
  * Copyright(c) 2015-2020
  */
 
+#ifndef _TXGBE_ETHDEV_H_
+#define _TXGBE_ETHDEV_H_
+
+#include <stdint.h>
+
+#include "base/txgbe.h"
+#include <rte_pci.h>
+#include <rte_bus_pci.h>
+#include <rte_tm_driver.h>
+
+
+struct txgbe_vf_info {
+	uint8_t api_version;
+	uint16_t switch_domain_id;
+};
+
+/*
+ * Structure to store private data for each driver instance (for each port).
+ */
+struct txgbe_adapter {
+	struct txgbe_hw             hw;
+	struct txgbe_vf_info        *vfdata;
+};
+
+struct txgbe_vf_representor {
+	uint16_t vf_id;
+	uint16_t switch_domain_id;
+	struct rte_eth_dev *pf_ethdev;
+};
+
+int txgbe_vf_representor_init(struct rte_eth_dev *ethdev, void *init_params);
+int txgbe_vf_representor_uninit(struct rte_eth_dev *ethdev);
+
+#define TXGBE_DEV_VFDATA(dev) \
+	(&((struct txgbe_adapter *)(dev)->data->dev_private)->vfdata)
+
+#endif /* _TXGBE_ETHDEV_H_ */
diff --git a/drivers/net/txgbe/txgbe_logs.h b/drivers/net/txgbe/txgbe_logs.h
new file mode 100644
index 000000000..ba17a128a
--- /dev/null
+++ b/drivers/net/txgbe/txgbe_logs.h
@@ -0,0 +1,123 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2015-2020
+ */
+
+#ifndef _TXGBE_LOGS_H_
+#define _TXGBE_LOGS_H_
+
+/*
+ * PMD_USER_LOG: for user
+ */
+extern int txgbe_logtype_init;
+#define PMD_INIT_LOG(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, txgbe_logtype_init, \
+		"%s(): " fmt "\n", __func__, ##args)
+
+extern int txgbe_logtype_driver;
+#define PMD_DRV_LOG(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, txgbe_logtype_driver, \
+		"%s(): " fmt "\n", __func__, ##args)
+
+#ifdef RTE_LIBRTE_TXGBE_DEBUG_RX
+extern int txgbe_logtype_rx;
+#define PMD_RX_LOG(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, txgbe_logtype_rx,	\
+		"%s(): " fmt "\n", __func__, ##args)
+#else
+#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_TXGBE_DEBUG_TX
+extern int txgbe_logtype_tx;
+#define PMD_TX_LOG(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, txgbe_logtype_tx,	\
+		"%s(): " fmt "\n", __func__, ##args)
+#else
+#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_TXGBE_DEBUG_TX_FREE
+extern int txgbe_logtype_tx_free;
+#define PMD_TX_FREE_LOG(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, txgbe_logtype_tx_free,	\
+		"%s(): " fmt "\n", __func__, ##args)
+#else
+#define PMD_TX_FREE_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_TXGBE_DEBUG_INIT
+#define PMD_TLOG_INIT(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, txgbe_logtype_init, \
+		"%s(): " fmt, __func__, ##args)
+#else
+#define PMD_TLOG_INIT(level, fmt, args...)   do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_TXGBE_DEBUG_DRIVER
+#define PMD_TLOG_DRIVER(level, fmt, args...) \
+	rte_log(RTE_LOG_ ## level, txgbe_logtype_driver, \
+		"%s(): " fmt, __func__, ##args)
+#else
+#define PMD_TLOG_DRIVER(level, fmt, args...) do { } while (0)
+#endif
+
+/*
+ * PMD_DEBUG_LOG: for debugger
+ */
+#define TLOG_EMERG(fmt, args...)    PMD_TLOG_DRIVER(EMERG, fmt, ##args)
+#define TLOG_ALERT(fmt, args...)    PMD_TLOG_DRIVER(ALERT, fmt, ##args)
+#define TLOG_CRIT(fmt, args...)     PMD_TLOG_DRIVER(CRIT, fmt, ##args)
+#define TLOG_ERR(fmt, args...)      PMD_TLOG_DRIVER(ERR, fmt, ##args)
+#define TLOG_WARN(fmt, args...)     PMD_TLOG_DRIVER(WARNING, fmt, ##args)
+#define TLOG_NOTICE(fmt, args...)   PMD_TLOG_DRIVER(NOTICE, fmt, ##args)
+#define TLOG_INFO(fmt, args...)     PMD_TLOG_DRIVER(INFO, fmt, ##args)
+#define TLOG_DEBUG(fmt, args...)    PMD_TLOG_DRIVER(DEBUG, fmt, ##args)
+
+/* to be deleted */
+#define DEBUGOUT(fmt, args...)    TLOG_DEBUG(fmt, ##args)
+#define PMD_INIT_FUNC_TRACE()     TLOG_DEBUG(" >>")
+#define DEBUGFUNC(fmt)            TLOG_DEBUG(fmt)
+
+/*
+ * PMD_TEMP_LOG: for tester
+ */
+#ifdef RTE_LIBRTE_TXGBE_DEBUG
+#define wjmsg_line(fmt, ...) \
+    do { \
+	RTE_LOG(CRIT, PMD, "%s(%d): " fmt, \
+	       __FUNCTION__, __LINE__, ## __VA_ARGS__); \
+    } while (0)
+#define wjmsg_stack(fmt, ...) \
+    do { \
+	wjmsg_line(fmt, ## __VA_ARGS__); \
+	rte_dump_stack(); \
+    } while (0)
+#define wjmsg wjmsg_line
+
+#define wjdump(mb) { \
+	int j; char buf[128] = ""; \
+	wjmsg("data_len=%d pkt_len=%d vlan_tci=%d " \
+		"packet_type=0x%08x ol_flags=0x%016lx " \
+		"hash.rss=0x%08x hash.fdir.hash=0x%04x hash.fdir.id=%d\n", \
+		mb->data_len, mb->pkt_len, mb->vlan_tci, \
+		mb->packet_type, mb->ol_flags, \
+		mb->hash.rss, mb->hash.fdir.hash, mb->hash.fdir.id); \
+	for (j = 0; j < mb->data_len; j++) { \
+		sprintf(buf + strlen(buf), "%02x ", \
+			((uint8_t *)(mb->buf_addr) + mb->data_off)[j]); \
+		if (j % 8 == 7) {\
+			wjmsg("%s\n", buf); \
+			buf[0] = '\0'; \
+		} \
+	} \
+	wjmsg("%s\n", buf); \
+}
+#else /* RTE_LIBRTE_TXGBE_DEBUG */
+#define wjmsg_line(fmt, args...) do {} while (0)
+#define wjmsg_limit(fmt, args...) do {} while (0)
+#define wjmsg_stack(fmt, args...) do {} while (0)
+#define wjmsg(fmt, args...) do {} while (0)
+#define wjdump(fmt, args...) do {} while (0)
+#endif /* RTE_LIBRTE_TXGBE_DEBUG */
+
+#endif /* _TXGBE_LOGS_H_ */
diff --git a/drivers/net/txgbe/txgbe_vf_representor.c b/drivers/net/txgbe/txgbe_vf_representor.c
new file mode 100644
index 000000000..df9ae8cc7
--- /dev/null
+++ b/drivers/net/txgbe/txgbe_vf_representor.c
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2015-2020
+ */
+
+#include <rte_ethdev.h>
+#include <rte_pci.h>
+#include <rte_malloc.h>
+
+#include "txgbe_ethdev.h"
+
+int
+txgbe_vf_representor_init(struct rte_eth_dev *ethdev, void *init_params)
+{
+
+	RTE_SET_USED(ethdev);
+	RTE_SET_USED(init_params);
+
+	return 0;
+}
+
+int
+txgbe_vf_representor_uninit(struct rte_eth_dev *ethdev)
+{
+	RTE_SET_USED(ethdev);
+
+	return 0;
+}
\ No newline at end of file
-- 
2.18.4




^ permalink raw reply	[flat|nested] 49+ messages in thread

* [dpdk-dev] [PATCH v1 03/42] net/txgbe: add device init and uninit
  2020-09-01 11:50 [dpdk-dev] [PATCH v1 01/42] net/txgbe: add build and doc infrastructure Jiawen Wu
  2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 02/42] net/txgbe: add ethdev probe and remove Jiawen Wu
@ 2020-09-01 11:50 ` Jiawen Wu
  2020-09-09 17:52   ` Ferruh Yigit
  2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 04/42] net/txgbe: add error types and dummy function Jiawen Wu
                   ` (39 subsequent siblings)
  41 siblings, 1 reply; 49+ messages in thread
From: Jiawen Wu @ 2020-09-01 11:50 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Add basic init and uninit functions, plus register and macro definitions, to prepare for the hardware infrastructure.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/base/meson.build    |    3 +-
 drivers/net/txgbe/base/txgbe.h        |    2 +
 drivers/net/txgbe/base/txgbe_eeprom.c |   39 +
 drivers/net/txgbe/base/txgbe_eeprom.h |   11 +
 drivers/net/txgbe/base/txgbe_hw.c     |   32 +
 drivers/net/txgbe/base/txgbe_hw.h     |   16 +
 drivers/net/txgbe/base/txgbe_osdep.h  |  184 +++
 drivers/net/txgbe/base/txgbe_regs.h   | 1895 +++++++++++++++++++++++++
 drivers/net/txgbe/base/txgbe_type.h   |  115 ++
 drivers/net/txgbe/meson.build         |    2 +
 drivers/net/txgbe/txgbe_ethdev.c      |  256 +++-
 drivers/net/txgbe/txgbe_ethdev.h      |   23 +
 drivers/net/txgbe/txgbe_pf.c          |   34 +
 drivers/net/txgbe/txgbe_rxtx.c        |   45 +
 drivers/net/txgbe/txgbe_rxtx.h        |   24 +
 15 files changed, 2678 insertions(+), 3 deletions(-)
 create mode 100644 drivers/net/txgbe/base/txgbe_eeprom.c
 create mode 100644 drivers/net/txgbe/base/txgbe_eeprom.h
 create mode 100644 drivers/net/txgbe/base/txgbe_hw.c
 create mode 100644 drivers/net/txgbe/base/txgbe_hw.h
 create mode 100644 drivers/net/txgbe/base/txgbe_osdep.h
 create mode 100644 drivers/net/txgbe/base/txgbe_regs.h
 create mode 100644 drivers/net/txgbe/txgbe_pf.c
 create mode 100644 drivers/net/txgbe/txgbe_rxtx.c
 create mode 100644 drivers/net/txgbe/txgbe_rxtx.h

diff --git a/drivers/net/txgbe/base/meson.build b/drivers/net/txgbe/base/meson.build
index 8cc8395d1..72f1e73c9 100644
--- a/drivers/net/txgbe/base/meson.build
+++ b/drivers/net/txgbe/base/meson.build
@@ -2,7 +2,8 @@
 # Copyright(c) 2015-2020
 
 sources = [
-
+	'txgbe_eeprom.c',
+	'txgbe_hw.c',
 ]
 
 error_cflags = ['-Wno-unused-value',
diff --git a/drivers/net/txgbe/base/txgbe.h b/drivers/net/txgbe/base/txgbe.h
index 9aee9738a..32867f5aa 100644
--- a/drivers/net/txgbe/base/txgbe.h
+++ b/drivers/net/txgbe/base/txgbe.h
@@ -6,5 +6,7 @@
 #define _TXGBE_H_
 
 #include "txgbe_type.h"
+#include "txgbe_eeprom.h"
+#include "txgbe_hw.h"
 
 #endif /* _TXGBE_H_ */
diff --git a/drivers/net/txgbe/base/txgbe_eeprom.c b/drivers/net/txgbe/base/txgbe_eeprom.c
new file mode 100644
index 000000000..287233dda
--- /dev/null
+++ b/drivers/net/txgbe/base/txgbe_eeprom.c
@@ -0,0 +1,39 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2015-2020
+ */
+
+#include "txgbe_hw.h"
+
+#include "txgbe_eeprom.h"
+
+/**
+ *  txgbe_init_eeprom_params - Initialize EEPROM params
+ *  @hw: pointer to hardware structure
+ *
+ *  Initializes the EEPROM parameters txgbe_rom_info within the
+ *  txgbe_hw struct in order to set up EEPROM access.
+ **/
+s32 txgbe_init_eeprom_params(struct txgbe_hw *hw)
+{
+	RTE_SET_USED(hw);
+
+	return 0;
+}
+
+/**
+ *  txgbe_validate_eeprom_checksum - Validate EEPROM checksum
+ *  @hw: pointer to hardware structure
+ *  @checksum_val: calculated checksum
+ *
+ *  Performs checksum calculation and validates the EEPROM checksum.  If the
+ *  caller does not need checksum_val, the value can be NULL.
+ **/
+s32 txgbe_validate_eeprom_checksum(struct txgbe_hw *hw,
+					   u16 *checksum_val)
+{
+	RTE_SET_USED(hw);
+	RTE_SET_USED(checksum_val);
+
+	return 0;
+}
+
diff --git a/drivers/net/txgbe/base/txgbe_eeprom.h b/drivers/net/txgbe/base/txgbe_eeprom.h
new file mode 100644
index 000000000..e845492f3
--- /dev/null
+++ b/drivers/net/txgbe/base/txgbe_eeprom.h
@@ -0,0 +1,11 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2015-2020
+ */
+
+#ifndef _TXGBE_EEPROM_H_
+#define _TXGBE_EEPROM_H_
+
+s32 txgbe_init_eeprom_params(struct txgbe_hw *hw);
+s32 txgbe_validate_eeprom_checksum(struct txgbe_hw *hw, u16 *checksum_val);
+
+#endif /* _TXGBE_EEPROM_H_ */
diff --git a/drivers/net/txgbe/base/txgbe_hw.c b/drivers/net/txgbe/base/txgbe_hw.c
new file mode 100644
index 000000000..17ccd0b65
--- /dev/null
+++ b/drivers/net/txgbe/base/txgbe_hw.c
@@ -0,0 +1,32 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2015-2020
+ */
+
+#include "txgbe_type.h"
+#include "txgbe_eeprom.h"
+#include "txgbe_hw.h"
+
+s32 txgbe_set_rar(struct txgbe_hw *hw, u32 index, u8 *addr, u32 vmdq,
+			  u32 enable_addr)
+{
+	RTE_SET_USED(hw);
+	RTE_SET_USED(index);
+	RTE_SET_USED(addr);
+	RTE_SET_USED(vmdq);
+	RTE_SET_USED(enable_addr);
+
+	return 0;
+}
+
+s32 txgbe_init_shared_code(struct txgbe_hw *hw)
+{
+	RTE_SET_USED(hw);
+	return 0;
+}
+
+s32 txgbe_init_hw(struct txgbe_hw *hw)
+{
+	RTE_SET_USED(hw);
+	return 0;
+}
+
diff --git a/drivers/net/txgbe/base/txgbe_hw.h b/drivers/net/txgbe/base/txgbe_hw.h
new file mode 100644
index 000000000..cd738245f
--- /dev/null
+++ b/drivers/net/txgbe/base/txgbe_hw.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2015-2020
+ */
+
+#ifndef _TXGBE_HW_H_
+#define _TXGBE_HW_H_
+
+#include "txgbe_type.h"
+
+s32 txgbe_init_hw(struct txgbe_hw *hw);
+
+s32 txgbe_set_rar(struct txgbe_hw *hw, u32 index, u8 *addr, u32 vmdq,
+			  u32 enable_addr);
+s32 txgbe_init_shared_code(struct txgbe_hw *hw);
+
+#endif /* _TXGBE_HW_H_ */
diff --git a/drivers/net/txgbe/base/txgbe_osdep.h b/drivers/net/txgbe/base/txgbe_osdep.h
new file mode 100644
index 000000000..348df8de0
--- /dev/null
+++ b/drivers/net/txgbe/base/txgbe_osdep.h
@@ -0,0 +1,184 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2015-2020
+ */
+
+#ifndef _TXGBE_OS_H_
+#define _TXGBE_OS_H_
+
+#include <string.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <stdarg.h>
+#include <rte_version.h>
+#include <rte_common.h>
+#include <rte_debug.h>
+#include <rte_cycles.h>
+#include <rte_log.h>
+#include <rte_byteorder.h>
+#include <rte_config.h>
+#include <rte_io.h>
+
+#include "../txgbe_logs.h"
+
+#define RTE_LIBRTE_TXGBE_TM        DCPV(1, 0)
+#define TMZ_PADDR(mz)  ((mz)->iova)
+#define TMZ_VADDR(mz)  ((mz)->addr)
+#define TDEV_NAME(eth_dev)  ((eth_dev)->device->name)
+
+#define ASSERT(x) do {			\
+	if (!(x))			\
+		rte_panic("TXGBE: " #x);	\
+} while (0)
+
+#define usec_delay(x) rte_delay_us(x)
+#define msec_delay(x) rte_delay_ms(x)
+#define usleep(x)     rte_delay_us(x)
+#define msleep(x)     rte_delay_ms(x)
+
+#define FALSE               0
+#define TRUE                1
+
+#define false               0
+#define true                1
+#define min(a, b)	RTE_MIN(a, b)
+#define max(a, b)	RTE_MAX(a, b)
+
+/* Bunch of defines for shared code bogosity */
+
+static inline void UNREFERENCED(const char *a __rte_unused, ...) {}
+#define UNREFERENCED_PARAMETER(args...) UNREFERENCED("", ##args)
+
+#define STATIC static
+
+typedef uint8_t		u8;
+typedef int8_t		s8;
+typedef uint16_t	u16;
+typedef int16_t		s16;
+typedef uint32_t	u32;
+typedef int32_t		s32;
+typedef uint64_t	u64;
+typedef int64_t		s64;
+
+/* Little Endian defines */
+#ifndef __le16
+#define __le16  u16
+#define __le32  u32
+#define __le64  u64
+#endif
+#ifndef __be16
+#define __be16  u16
+#define __be32  u32
+#define __be64  u64
+#endif
+
+/* Bit shift and mask */
+#define BIT_MASK4                 (0x0000000FU)
+#define BIT_MASK8                 (0x000000FFU)
+#define BIT_MASK16                (0x0000FFFFU)
+#define BIT_MASK32                (0xFFFFFFFFU)
+#define BIT_MASK64                (0xFFFFFFFFFFFFFFFFUL)
+
+#ifndef cpu_to_le32
+#define cpu_to_le16(v)          rte_cpu_to_le_16((u16)(v))
+#define cpu_to_le32(v)          rte_cpu_to_le_32((u32)(v))
+#define cpu_to_le64(v)          rte_cpu_to_le_64((u64)(v))
+#define le_to_cpu16(v)          rte_le_to_cpu_16((u16)(v))
+#define le_to_cpu32(v)          rte_le_to_cpu_32((u32)(v))
+#define le_to_cpu64(v)          rte_le_to_cpu_64((u64)(v))
+
+#define cpu_to_be16(v)          rte_cpu_to_be_16((u16)(v))
+#define cpu_to_be32(v)          rte_cpu_to_be_32((u32)(v))
+#define cpu_to_be64(v)          rte_cpu_to_be_64((u64)(v))
+#define be_to_cpu16(v)          rte_be_to_cpu_16((u16)(v))
+#define be_to_cpu32(v)          rte_be_to_cpu_32((u32)(v))
+#define be_to_cpu64(v)          rte_be_to_cpu_64((u64)(v))
+
+#define le_to_be16(v)           rte_bswap16((u16)(v))
+#define le_to_be32(v)           rte_bswap32((u32)(v))
+#define le_to_be64(v)           rte_bswap64((u64)(v))
+#define be_to_le16(v)           rte_bswap16((u16)(v))
+#define be_to_le32(v)           rte_bswap32((u32)(v))
+#define be_to_le64(v)           rte_bswap64((u64)(v))
+
+#define npu_to_le16(v)          (v)
+#define npu_to_le32(v)          (v)
+#define npu_to_le64(v)          (v)
+#define le_to_npu16(v)          (v)
+#define le_to_npu32(v)          (v)
+#define le_to_npu64(v)          (v)
+
+#define npu_to_be16(v)          le_to_be16((u16)(v))
+#define npu_to_be32(v)          le_to_be32((u32)(v))
+#define npu_to_be64(v)          le_to_be64((u64)(v))
+#define be_to_npu16(v)          be_to_le16((u16)(v))
+#define be_to_npu32(v)          be_to_le32((u32)(v))
+#define be_to_npu64(v)          be_to_le64((u64)(v))
+#endif /* !cpu_to_le32 */
+
+static inline u16 REVERT_BIT_MASK16(u16 mask)
+{
+	mask = ((mask & 0x5555) << 1) | ((mask & 0xAAAA) >> 1);
+	mask = ((mask & 0x3333) << 2) | ((mask & 0xCCCC) >> 2);
+	mask = ((mask & 0x0F0F) << 4) | ((mask & 0xF0F0) >> 4);
+	return ((mask & 0x00FF) << 8) | ((mask & 0xFF00) >> 8);
+}
+
+static inline u32 REVERT_BIT_MASK32(u32 mask)
+{
+	mask = ((mask & 0x55555555) << 1) | ((mask & 0xAAAAAAAA) >> 1);
+	mask = ((mask & 0x33333333) << 2) | ((mask & 0xCCCCCCCC) >> 2);
+	mask = ((mask & 0x0F0F0F0F) << 4) | ((mask & 0xF0F0F0F0) >> 4);
+	mask = ((mask & 0x00FF00FF) << 8) | ((mask & 0xFF00FF00) >> 8);
+	return ((mask & 0x0000FFFF) << 16) | ((mask & 0xFFFF0000) >> 16);
+}
+
+static inline u64 REVERT_BIT_MASK64(u64 mask)
+{
+	mask = ((mask & 0x5555555555555555) << 1) |
+	       ((mask & 0xAAAAAAAAAAAAAAAA) >> 1);
+	mask = ((mask & 0x3333333333333333) << 2) |
+	       ((mask & 0xCCCCCCCCCCCCCCCC) >> 2);
+	mask = ((mask & 0x0F0F0F0F0F0F0F0F) << 4) |
+	       ((mask & 0xF0F0F0F0F0F0F0F0) >> 4);
+	mask = ((mask & 0x00FF00FF00FF00FF) << 8) |
+	       ((mask & 0xFF00FF00FF00FF00) >> 8);
+	mask = ((mask & 0x0000FFFF0000FFFF) << 16) |
+	       ((mask & 0xFFFF0000FFFF0000) >> 16);
+	return ((mask & 0x00000000FFFFFFFF) << 32) |
+	       ((mask & 0xFFFFFFFF00000000) >> 32);
+}
+
+#define mb()	rte_mb()
+#define wmb()	rte_wmb()
+#define rmb()	rte_rmb()
+
+#ifndef __rte_weak
+#define __rte_weak __attribute__((__weak__))
+#endif
+
+#define IOMEM
+
+#define prefetch(x) rte_prefetch0(x)
+
+#define ARRAY_SIZE(x) ((int32_t)RTE_DIM(x))
+
+#ifndef MAX_UDELAY_MS
+#define MAX_UDELAY_MS 5
+#endif
+
+#define ETH_ADDR_LEN	6
+#define ETH_FCS_LEN	4
+
+/* Check whether an address is multicast. This check is little-endian specific. */
+#define TXGBE_IS_MULTICAST(Address) \
+		(bool)(((u8 *)(Address))[0] & ((u8)0x01))
+
+/* Check whether an address is broadcast. */
+#define TXGBE_IS_BROADCAST(Address) \
+		((((u8 *)(Address))[0] == ((u8)0xff)) && \
+		(((u8 *)(Address))[1] == ((u8)0xff)))
+
+#define ETH_P_8021Q      0x8100
+#define ETH_P_8021AD     0x88A8
+
+#endif /* _TXGBE_OS_H_ */
diff --git a/drivers/net/txgbe/base/txgbe_regs.h b/drivers/net/txgbe/base/txgbe_regs.h
new file mode 100644
index 000000000..a2100bee2
--- /dev/null
+++ b/drivers/net/txgbe/base/txgbe_regs.h
@@ -0,0 +1,1895 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2015-2020
+ */
+
+#ifndef _TXGBE_REGS_H_
+#define _TXGBE_REGS_H_
+
+#define TXGBE_PVMBX_QSIZE          (16) /* 16*4B */
+#define TXGBE_PVMBX_BSIZE          (TXGBE_PVMBX_QSIZE * 4)
+
+#define TXGBE_REMOVED(a) (0)
+
+#define TXGBE_REG_DUMMY             0xFFFFFF
+
+#define MS8(shift, mask)          (((u8)(mask)) << (shift))
+#define LS8(val, shift, mask)     (((u8)(val) & (u8)(mask)) << (shift))
+#define RS8(reg, shift, mask)     (((u8)(reg) >> (shift)) & (u8)(mask))
+
+#define MS16(shift, mask)         (((u16)(mask)) << (shift))
+#define LS16(val, shift, mask)    (((u16)(val) & (u16)(mask)) << (shift))
+#define RS16(reg, shift, mask)    (((u16)(reg) >> (shift)) & (u16)(mask))
+
+#define MS32(shift, mask)         (((u32)(mask)) << (shift))
+#define LS32(val, shift, mask)    (((u32)(val) & (u32)(mask)) << (shift))
+#define RS32(reg, shift, mask)    (((u32)(reg) >> (shift)) & (u32)(mask))
+
+#define MS64(shift, mask)         (((u64)(mask)) << (shift))
+#define LS64(val, shift, mask)    (((u64)(val) & (u64)(mask)) << (shift))
+#define RS64(reg, shift, mask)    (((u64)(reg) >> (shift)) & (u64)(mask))
+
+#define MS(shift, mask)           MS32(shift, mask)
+#define LS(val, shift, mask)      LS32(val, shift, mask)
+#define RS(reg, shift, mask)      RS32(reg, shift, mask)
+
+#define ROUND_UP(x, y)          (((x) + (y) - 1) / (y) * (y))
+#define ROUND_DOWN(x, y)        ((x) / (y) * (y))
+#define ROUND_OVER(x, maxbits, unitbits) \
+	(((x) >= (1 << (maxbits))) ? 0 : ((x) >> (unitbits)))
+
+/* autoc bits definition */
+#define TXGBE_AUTOC                       TXGBE_REG_DUMMY
+#define   TXGBE_AUTOC_FLU                 MS64(0, 0x1)
+#define   TXGBE_AUTOC_10G_PMA_PMD_MASK    MS64(7, 0x3) /* parallel */
+#define   TXGBE_AUTOC_10G_XAUI            LS64(0, 7, 0x3)
+#define   TXGBE_AUTOC_10G_KX4             LS64(1, 7, 0x3)
+#define   TXGBE_AUTOC_10G_CX4             LS64(2, 7, 0x3)
+#define   TXGBE_AUTOC_10G_KR              LS64(3, 7, 0x3) /* fixme */
+#define   TXGBE_AUTOC_1G_PMA_PMD_MASK     MS64(9, 0x7)
+#define   TXGBE_AUTOC_1G_BX               LS64(0, 9, 0x7)
+#define   TXGBE_AUTOC_1G_KX               LS64(1, 9, 0x7)
+#define   TXGBE_AUTOC_1G_SFI              LS64(0, 9, 0x7)
+#define   TXGBE_AUTOC_1G_KX_BX            LS64(1, 9, 0x7)
+#define   TXGBE_AUTOC_AN_RESTART          MS64(12, 0x1)
+#define   TXGBE_AUTOC_LMS_MASK            MS64(13, 0x7)
+#define   TXGBE_AUTOC_LMS_10Gs            LS64(3, 13, 0x7)
+#define   TXGBE_AUTOC_LMS_KX4_KX_KR       LS64(4, 13, 0x7)
+#define   TXGBE_AUTOC_LMS_SGMII_1G_100M   LS64(5, 13, 0x7)
+#define   TXGBE_AUTOC_LMS_KX4_KX_KR_1G_AN LS64(6, 13, 0x7)
+#define   TXGBE_AUTOC_LMS_KX4_KX_KR_SGMII LS64(7, 13, 0x7)
+#define   TXGBE_AUTOC_LMS_1G_LINK_NO_AN   LS64(0, 13, 0x7)
+#define   TXGBE_AUTOC_LMS_10G_LINK_NO_AN  LS64(1, 13, 0x7)
+#define   TXGBE_AUTOC_LMS_1G_AN           LS64(2, 13, 0x7)
+#define   TXGBE_AUTOC_LMS_KX4_AN          LS64(4, 13, 0x7)
+#define   TXGBE_AUTOC_LMS_KX4_AN_1G_AN    LS64(6, 13, 0x7)
+#define   TXGBE_AUTOC_LMS_ATTACH_TYPE     LS64(7, 13, 0x7)
+#define   TXGBE_AUTOC_LMS_AN              MS64(15, 0x7)
+
+#define   TXGBE_AUTOC_KR_SUPP             MS64(16, 0x1)
+#define   TXGBE_AUTOC_FECR                MS64(17, 0x1)
+#define   TXGBE_AUTOC_FECA                MS64(18, 0x1)
+#define   TXGBE_AUTOC_AN_RX_ALIGN         MS64(18, 0x1F) /* fixme */
+#define   TXGBE_AUTOC_AN_RX_DRIFT         MS64(23, 0x3)
+#define   TXGBE_AUTOC_AN_RX_LOOSE         MS64(24, 0x3)
+#define   TXGBE_AUTOC_PD_TMR              MS64(25, 0x3)
+#define   TXGBE_AUTOC_RF                  MS64(27, 0x1)
+#define   TXGBE_AUTOC_ASM_PAUSE           MS64(29, 0x1)
+#define   TXGBE_AUTOC_SYM_PAUSE           MS64(28, 0x1)
+#define   TXGBE_AUTOC_PAUSE               MS64(28, 0x3)
+#define   TXGBE_AUTOC_KX_SUPP             MS64(30, 0x1)
+#define   TXGBE_AUTOC_KX4_SUPP            MS64(31, 0x1)
+
+#define   TXGBE_AUTOC_10Gs_PMA_PMD_MASK   MS64(48, 0x3)  /* serial */
+#define   TXGBE_AUTOC_10Gs_KR             LS64(0, 48, 0x3)
+#define   TXGBE_AUTOC_10Gs_XFI            LS64(1, 48, 0x3)
+#define   TXGBE_AUTOC_10Gs_SFI            LS64(2, 48, 0x3)
+#define   TXGBE_AUTOC_LINK_DIA_MASK       MS64(60, 0x7)
+#define   TXGBE_AUTOC_LINK_DIA_D3_MASK    LS64(5, 60, 0x7)
+
+#define   TXGBE_AUTOC_SPEED_MASK          MS64(32, 0xFFFF)
+#define   TXGBD_AUTOC_SPEED(r)            RS64(r, 32, 0xFFFF)
+#define   TXGBE_AUTOC_SPEED(v)            LS64(v, 32, 0xFFFF)
+#define     TXGBE_LINK_SPEED_UNKNOWN      0
+#define     TXGBE_LINK_SPEED_10M_FULL     0x0002
+#define     TXGBE_LINK_SPEED_100M_FULL    0x0008
+#define     TXGBE_LINK_SPEED_1GB_FULL     0x0020
+#define     TXGBE_LINK_SPEED_2_5GB_FULL   0x0400
+#define     TXGBE_LINK_SPEED_5GB_FULL     0x0800
+#define     TXGBE_LINK_SPEED_10GB_FULL    0x0080
+#define     TXGBE_LINK_SPEED_40GB_FULL    0x0100
+#define   TXGBE_AUTOC_AUTONEG             MS64(63, 0x1)
+
+
+/* Hardware Datapath:
+ *  RX:     / Queue <- Filter \
+ *      Host     |             TC <=> SEC <=> MAC <=> PHY
+ *  TX:     \ Queue -> Filter /
+ *
+ * Packet Filter:
+ *  RX: RSS < FDIR < Filter < Encrypt
+ *
+ * Macro Argument Naming:
+ *   rp = ring pair         [0,127]
+ *   tc = traffic class     [0,7]
+ *   up = user priority     [0,7]
+ *   pi = pool index        [0,63]
+ *   r  = register
+ *   v  = value
+ *   s  = shift
+ *   m  = mask
+ *   i,j,k  = array index
+ *   H,L    = high/low bits
+ *   HI,LO  = high/low state
+ */
+
+#define TXGBE_ETHPHYIF                  TXGBE_REG_DUMMY
+#define   TXGBE_ETHPHYIF_MDIO_ACT       MS(1, 0x1)
+#define   TXGBE_ETHPHYIF_MDIO_MODE      MS(2, 0x1)
+#define   TXGBE_ETHPHYIF_MDIO_BASE(r)   RS(r, 3, 0x1F)
+#define   TXGBE_ETHPHYIF_MDIO_SHARED    MS(13, 0x1)
+#define   TXGBE_ETHPHYIF_SPEED_10M      MS(17, 0x1)
+#define   TXGBE_ETHPHYIF_SPEED_100M     MS(18, 0x1)
+#define   TXGBE_ETHPHYIF_SPEED_1G       MS(19, 0x1)
+#define   TXGBE_ETHPHYIF_SPEED_2_5G     MS(20, 0x1)
+#define   TXGBE_ETHPHYIF_SPEED_10G      MS(21, 0x1)
+#define   TXGBE_ETHPHYIF_SGMII_ENABLE   MS(25, 0x1)
+#define   TXGBE_ETHPHYIF_INT_PHY_MODE   MS(24, 0x1)
+#define   TXGBE_ETHPHYIF_IO_XPCS        MS(30, 0x1)
+#define   TXGBE_ETHPHYIF_IO_EPHY        MS(31, 0x1)
+
+/******************************************************************************
+ * Chip Registers
+ ******************************************************************************/
+/**
+ * Chip Status
+ **/
+#define TXGBE_PWR                  0x010000
+#define   TXGBE_PWR_LAN(r)         RS(r, 30, 0x3)
+#define     TXGBE_PWR_LAN_0          (1)
+#define     TXGBE_PWR_LAN_1          (2)
+#define     TXGBE_PWR_LAN_A          (3)
+#define TXGBE_CTL                  0x010004
+#define TXGBE_LOCKPF               0x010008
+#define TXGBE_RST                  0x01000C
+#define   TXGBE_RST_SW             MS(0, 0x1)
+#define   TXGBE_RST_LAN(i)         MS(((i)+1), 0x1)
+#define   TXGBE_RST_FW             MS(3, 0x1)
+#define   TXGBE_RST_ETH(i)         MS(((i)+29), 0x1)
+#define   TXGBE_RST_GLB            MS(31, 0x1)
+#define   TXGBE_RST_DEFAULT        (TXGBE_RST_SW | \
+				   TXGBE_RST_LAN(0) | \
+				   TXGBE_RST_LAN(1))
+
+#define TXGBE_STAT			0x010028
+#define   TXGBE_STAT_MNGINIT		MS(0, 0x1)
+#define   TXGBE_STAT_MNGVETO		MS(8, 0x1)
+#define   TXGBE_STAT_ECCLAN0		MS(16, 0x1)
+#define   TXGBE_STAT_ECCLAN1		MS(17, 0x1)
+#define   TXGBE_STAT_ECCMNG		MS(18, 0x1)
+#define   TXGBE_STAT_ECCPCIE		MS(19, 0x1)
+#define   TXGBE_STAT_ECCPCIW		MS(20, 0x1)
+#define TXGBE_RSTSTAT                   0x010030
+#define   TXGBE_RSTSTAT_PROG            MS(20, 0x1)
+#define   TXGBE_RSTSTAT_PREP            MS(19, 0x1)
+#define   TXGBE_RSTSTAT_TYPE_MASK       MS(16, 0x7)
+#define   TXGBE_RSTSTAT_TYPE(r)         RS(r, 16, 0x7)
+#define   TXGBE_RSTSTAT_TYPE_PE         LS(0, 16, 0x7)
+#define   TXGBE_RSTSTAT_TYPE_PWR        LS(1, 16, 0x7)
+#define   TXGBE_RSTSTAT_TYPE_HOT        LS(2, 16, 0x7)
+#define   TXGBE_RSTSTAT_TYPE_SW         LS(3, 16, 0x7)
+#define   TXGBE_RSTSTAT_TYPE_FW         LS(4, 16, 0x7)
+#define   TXGBE_RSTSTAT_TMRINIT_MASK    MS(8, 0xFF)
+#define   TXGBE_RSTSTAT_TMRINIT(v)      LS(v, 8, 0xFF)
+#define   TXGBE_RSTSTAT_TMRCNT_MASK     MS(0, 0xFF)
+#define   TXGBE_RSTSTAT_TMRCNT(v)       LS(v, 0, 0xFF)
+#define TXGBE_PWRTMR			0x010034
+
+/**
+ * SPI(Flash)
+ **/
+#define TXGBE_SPICMD               0x010104
+#define   TXGBE_SPICMD_ADDR(v)     LS(v, 0, 0xFFFFFF)
+#define   TXGBE_SPICMD_CLK(v)      LS(v, 25, 0x7)
+#define   TXGBE_SPICMD_CMD(v)      LS(v, 28, 0x7)
+#define TXGBE_SPIDAT               0x010108
+#define   TXGBE_SPIDAT_BYPASS      MS(31, 0x1)
+#define   TXGBE_SPIDAT_STATUS(v)   LS(v, 16, 0xFF)
+#define   TXGBE_SPIDAT_OPDONE      MS(0, 0x1)
+#define TXGBE_SPISTATUS            0x01010C
+#define   TXGBE_SPISTATUS_OPDONE   MS(0, 0x1)
+#define   TXGBE_SPISTATUS_BYPASS   MS(31, 0x1)
+#define TXGBE_SPIUSRCMD            0x010110
+#define TXGBE_SPICFG0              0x010114
+#define TXGBE_SPICFG1              0x010118
+#define TXGBE_FLASH                0x010120
+#define   TXGBE_FLASH_PERSTD       MS(0, 0x1)
+#define   TXGBE_FLASH_PWRRSTD      MS(1, 0x1)
+#define   TXGBE_FLASH_SWRSTD       MS(7, 0x1)
+#define   TXGBE_FLASH_LANRSTD(i)   MS(((i)+9), 0x1)
+#define TXGBE_SRAM                 0x010124
+#define   TXGBE_SRAM_SZ(v)         LS(v, 28, 0x7)
+#define TXGBE_SRAMCTLECC           0x010130
+#define TXGBE_SRAMINJECC           0x010134
+#define TXGBE_SRAMECC              0x010138
+
+/**
+ * Thermal Sensor
+ **/
+#define TXGBE_TSCTL                0x010300
+#define   TXGBE_TSCTL_MODE         MS(31, 0x1)
+#define TXGBE_TSREVAL              0x010304
+#define   TXGBE_TSREVAL_EA         MS(0, 0x1)
+#define TXGBE_TSDAT                0x010308
+#define   TXGBE_TSDAT_TMP(r)       ((r) & 0x3FF)
+#define   TXGBE_TSDAT_VLD          MS(16, 0x1)
+#define TXGBE_TSALMWTRHI           0x01030C
+#define   TXGBE_TSALMWTRHI_VAL(v)  (((v) & 0x3FF))
+#define TXGBE_TSALMWTRLO           0x010310
+#define   TXGBE_TSALMWTRLO_VAL(v)  (((v) & 0x3FF))
+#define TXGBE_TSINTWTR             0x010314
+#define   TXGBE_TSINTWTR_HI        MS(0, 0x1)
+#define   TXGBE_TSINTWTR_LO        MS(1, 0x1)
+#define TXGBE_TSALM                0x010318
+#define   TXGBE_TSALM_LO           MS(0, 0x1)
+#define   TXGBE_TSALM_HI           MS(1, 0x1)
+
+/**
+ * Management
+ **/
+#define TXGBE_MNGTC                0x01CD10
+#define TXGBE_MNGFWSYNC            0x01E000
+#define   TXGBE_MNGFWSYNC_REQ      MS(0, 0x1)
+#define TXGBE_MNGSWSYNC            0x01E004
+#define   TXGBE_MNGSWSYNC_REQ      MS(0, 0x1)
+#define TXGBE_SWSEM                0x01002C
+#define   TXGBE_SWSEM_PF           MS(0, 0x1)
+#define TXGBE_MNGSEM               0x01E008
+#define   TXGBE_MNGSEM_SW(v)       LS(v, 0, 0xFFFF)
+#define   TXGBE_MNGSEM_SWPHY       MS(0, 0x1)
+#define   TXGBE_MNGSEM_SWMBX       MS(2, 0x1)
+#define   TXGBE_MNGSEM_SWFLASH     MS(3, 0x1)
+#define   TXGBE_MNGSEM_FW(v)       LS(v, 16, 0xFFFF)
+#define   TXGBE_MNGSEM_FWPHY       MS(16, 0x1)
+#define   TXGBE_MNGSEM_FWMBX       MS(18, 0x1)
+#define   TXGBE_MNGSEM_FWFLASH     MS(19, 0x1)
+#define TXGBE_MNGMBXCTL            0x01E044
+#define   TXGBE_MNGMBXCTL_SWRDY    MS(0, 0x1)
+#define   TXGBE_MNGMBXCTL_SWACK    MS(1, 0x1)
+#define   TXGBE_MNGMBXCTL_FWRDY    MS(2, 0x1)
+#define   TXGBE_MNGMBXCTL_FWACK    MS(3, 0x1)
+#define TXGBE_MNGMBX               0x01E100
+
+/******************************************************************************
+ * Port Registers
+ ******************************************************************************/
+/* Port Control */
+#define TXGBE_PORTCTL                   0x014400
+#define   TXGBE_PORTCTL_VLANEXT         MS(0, 0x1)
+#define   TXGBE_PORTCTL_ETAG            MS(1, 0x1)
+#define   TXGBE_PORTCTL_QINQ            MS(2, 0x1)
+#define   TXGBE_PORTCTL_DRVLOAD         MS(3, 0x1)
+#define   TXGBE_PORTCTL_UPLNK           MS(4, 0x1)
+#define   TXGBE_PORTCTL_DCB             MS(10, 0x1)
+#define   TXGBE_PORTCTL_NUMTC_MASK      MS(11, 0x1)
+#define   TXGBE_PORTCTL_NUMTC_4         LS(0, 11, 0x1)
+#define   TXGBE_PORTCTL_NUMTC_8         LS(1, 11, 0x1)
+#define   TXGBE_PORTCTL_NUMVT_MASK      MS(12, 0x3)
+#define   TXGBE_PORTCTL_NUMVT_16        LS(1, 12, 0x3)
+#define   TXGBE_PORTCTL_NUMVT_32        LS(2, 12, 0x3)
+#define   TXGBE_PORTCTL_NUMVT_64        LS(3, 12, 0x3)
+#define   TXGBE_PORTCTL_RSTDONE         MS(14, 0x1)
+#define   TXGBE_PORTCTL_TEREDODIA       MS(27, 0x1)
+#define   TXGBE_PORTCTL_GENEVEDIA       MS(28, 0x1)
+#define   TXGBE_PORTCTL_VXLANGPEDIA     MS(30, 0x1)
+#define   TXGBE_PORTCTL_VXLANDIA        MS(31, 0x1)
+
+#define TXGBE_PORT                      0x014404
+#define   TXGBE_PORT_LINKUP             MS(0, 0x1)
+#define   TXGBE_PORT_LINK10G            MS(1, 0x1)
+#define   TXGBE_PORT_LINK1000M          MS(2, 0x1)
+#define   TXGBE_PORT_LINK100M           MS(3, 0x1)
+#define   TXGBE_PORT_LANID(r)           RS(r, 8, 0x1)
+#define TXGBE_EXTAG                     0x014408
+#define   TXGBE_EXTAG_ETAG_MASK         MS(0, 0xFFFF)
+#define   TXGBE_EXTAG_ETAG(v)           LS(v, 0, 0xFFFF)
+#define   TXGBE_EXTAG_VLAN_MASK         MS(16, 0xFFFF)
+#define   TXGBE_EXTAG_VLAN(v)           LS(v, 16, 0xFFFF)
+#define TXGBE_VXLANPORT                 0x014410
+#define TXGBE_VXLANPORTGPE              0x014414
+#define TXGBE_GENEVEPORT                0x014418
+#define TXGBE_TEREDOPORT                0x01441C
+#define TXGBE_LEDCTL                    0x014424
+#define   TXGBE_LEDCTL_SEL_MASK         MS(0, 0xFFFF)
+#define   TXGBE_LEDCTL_SEL(s)           MS((s), 0x1)
+#define   TXGBE_LEDCTL_OD_MASK          MS(16, 0xFFFF)
+#define   TXGBE_LEDCTL_OD(s)            MS(((s)+16), 0x1)
+	/* s=UP(0),10G(1),1G(2),100M(3),BSY(4) */
+#define   TXGBE_LEDCTL_ACTIVE      (TXGBE_LEDCTL_SEL(4) | TXGBE_LEDCTL_OD(4))
+#define TXGBE_TAGTPID(i)                (0x014430 + (i) * 4) /* 0-3 */
+#define   TXGBE_TAGTPID_LSB_MASK        MS(0, 0xFFFF)
+#define   TXGBE_TAGTPID_LSB(v)          LS(v, 0, 0xFFFF)
+#define   TXGBE_TAGTPID_MSB_MASK        MS(16, 0xFFFF)
+#define   TXGBE_TAGTPID_MSB(v)          LS(v, 16, 0xFFFF)
+
+/**
+ * GPIO Control
+ * P0: link speed change
+ * P1:
+ * P2:
+ * P3: optical laser disable
+ * P4:
+ * P5: link speed selection
+ * P6:
+ * P7: external phy event
+ **/
+#define TXGBE_SDP                  0x014800
+#define   TXGBE_SDP_0              MS(0, 0x1)
+#define   TXGBE_SDP_1              MS(1, 0x1)
+#define   TXGBE_SDP_2              MS(2, 0x1)
+#define   TXGBE_SDP_3              MS(3, 0x1)
+#define   TXGBE_SDP_4              MS(4, 0x1)
+#define   TXGBE_SDP_5              MS(5, 0x1)
+#define   TXGBE_SDP_6              MS(6, 0x1)
+#define   TXGBE_SDP_7              MS(7, 0x1)
+#define TXGBE_SDPDIR               0x014804
+#define TXGBE_SDPCTL               0x014808
+#define TXGBE_SDPINTEA             0x014830
+#define TXGBE_SDPINTMSK            0x014834
+#define TXGBE_SDPINTTYP            0x014838
+#define TXGBE_SDPINTPOL            0x01483C
+#define TXGBE_SDPINT               0x014840
+#define TXGBE_SDPINTDB             0x014848
+#define TXGBE_SDPINTEND            0x01484C
+#define TXGBE_SDPDAT               0x014850
+#define TXGBE_SDPLVLSYN            0x014854
+
+/**
+ * MDIO(PHY)
+ **/
+#define TXGBE_MDIOSCA                   0x011200
+#define   TXGBE_MDIOSCA_REG(v)          LS(v, 0, 0xFFFF)
+#define   TXGBE_MDIOSCA_PORT(v)         LS(v, 16, 0x1F)
+#define   TXGBE_MDIOSCA_DEV(v)          LS(v, 21, 0x1F)
+#define TXGBE_MDIOSCD                   0x011204
+#define   TXGBD_MDIOSCD_DAT(r)          RS(r, 0, 0xFFFF)
+#define   TXGBE_MDIOSCD_DAT(v)          LS(v, 0, 0xFFFF)
+#define   TXGBE_MDIOSCD_CMD_PREAD       LS(1, 16, 0x3)
+#define   TXGBE_MDIOSCD_CMD_WRITE       LS(2, 16, 0x3)
+#define   TXGBE_MDIOSCD_CMD_READ        LS(3, 16, 0x3)
+#define   TXGBE_MDIOSCD_SADDR           MS(18, 0x1)
+#define   TXGBE_MDIOSCD_CLOCK(v)        LS(v, 19, 0x7)
+#define   TXGBE_MDIOSCD_BUSY            MS(22, 0x1)
+
+/**
+ * I2C (SFP)
+ **/
+#define TXGBE_I2CCTL               0x014900
+#define   TXGBE_I2CCTL_MAEA        MS(0, 0x1)
+#define   TXGBE_I2CCTL_SPEED(v)    LS(v, 1, 0x3)
+#define   TXGBE_I2CCTL_RESTART     MS(5, 0x1)
+#define   TXGBE_I2CCTL_SLDA        MS(6, 0x1)
+#define TXGBE_I2CTGT               0x014904
+#define   TXGBE_I2CTGT_ADDR(v)     LS(v, 0, 0x3FF)
+#define TXGBE_I2CCMD               0x014910
+#define   TXGBE_I2CCMD_READ        (MS(9, 0x1) | 0x100)
+#define   TXGBE_I2CCMD_WRITE       (MS(9, 0x1))
+#define TXGBE_I2CSCLHITM           0x014914
+#define TXGBE_I2CSCLLOTM           0x014918
+#define TXGBE_I2CINT               0x014934
+#define   TXGBE_I2CINT_RXFULL      MS(2, 0x1)
+#define   TXGBE_I2CINT_TXEMPTY     MS(4, 0x1)
+#define TXGBE_I2CINTMSK            0x014930
+#define TXGBE_I2CRXFIFO            0x014938
+#define TXGBE_I2CTXFIFO            0x01493C
+#define TXGBE_I2CEA                0x01496C
+#define TXGBE_I2CST                0x014970
+#define   TXGBE_I2CST_ACT          MS(5, 0x1)
+#define TXGBE_I2CSCLTM             0x0149AC
+#define TXGBE_I2CSDATM             0x0149B0
+
+/**
+ * TPH
+ **/
+#define TXGBE_TPHCFG               0x014F00
+
+/******************************************************************************
+ * Pool Registers
+ ******************************************************************************/
+#define TXGBE_POOLETHCTL(pl)            (0x015600 + (pl) * 4)
+#define   TXGBE_POOLETHCTL_LBDIA        MS(0, 0x1)
+#define   TXGBE_POOLETHCTL_LLBDIA       MS(1, 0x1)
+#define   TXGBE_POOLETHCTL_LLB          MS(2, 0x1)
+#define   TXGBE_POOLETHCTL_UCP          MS(4, 0x1)
+#define   TXGBE_POOLETHCTL_ETP          MS(5, 0x1)
+#define   TXGBE_POOLETHCTL_VLA          MS(6, 0x1)
+#define   TXGBE_POOLETHCTL_VLP          MS(7, 0x1)
+#define   TXGBE_POOLETHCTL_UTA          MS(8, 0x1)
+#define   TXGBE_POOLETHCTL_MCHA         MS(9, 0x1)
+#define   TXGBE_POOLETHCTL_UCHA         MS(10, 0x1)
+#define   TXGBE_POOLETHCTL_BCA          MS(11, 0x1)
+#define   TXGBE_POOLETHCTL_MCP          MS(12, 0x1)
+
+/* DMA Control */
+#define TXGBE_POOLRXENA(i)              (0x012004 + (i) * 4) /* 0-1 */
+#define TXGBE_POOLRXDNA(i)              (0x012060 + (i) * 4) /* 0-1 */
+#define TXGBE_POOLTXENA(i)              (0x018004 + (i) * 4) /* 0-1 */
+#define TXGBE_POOLTXDSA(i)              (0x0180A0 + (i) * 4) /* 0-1 */
+#define TXGBE_POOLTXLBET(i)             (0x018050 + (i) * 4) /* 0-1 */
+#define TXGBE_POOLTXASET(i)             (0x018058 + (i) * 4) /* 0-1 */
+#define TXGBE_POOLTXASMAC(i)            (0x018060 + (i) * 4) /* 0-1 */
+#define TXGBE_POOLTXASVLAN(i)           (0x018070 + (i) * 4) /* 0-1 */
+#define TXGBE_POOLDROPSWBK(i)           (0x0151C8 + (i) * 4) /* 0-1 */
+
+#define TXGBE_POOLTAG(pl)               (0x018100 + (pl) * 4)
+#define   TXGBE_POOLTAG_VTAG(v)         LS(v, 0, 0xFFFF)
+#define   TXGBE_POOLTAG_VTAG_MASK       MS(0, 0xFFFF)
+#define   TXGBD_POOLTAG_VTAG_UP(r)	RS(r, 13, 0x7)
+#define   TXGBE_POOLTAG_TPIDSEL(v)      LS(v, 24, 0x7)
+#define   TXGBE_POOLTAG_ETAG_MASK       MS(27, 0x3)
+#define   TXGBE_POOLTAG_ETAG            LS(2, 27, 0x3)
+#define   TXGBE_POOLTAG_ACT_MASK        MS(30, 0x3)
+#define   TXGBE_POOLTAG_ACT_ALWAYS      LS(1, 30, 0x3)
+#define   TXGBE_POOLTAG_ACT_NEVER       LS(2, 30, 0x3)
+#define TXGBE_POOLTXARB                 0x018204
+#define   TXGBE_POOLTXARB_WRR           MS(1, 0x1)
+#define TXGBE_POOLETAG(pl)              (0x018700 + (pl) * 4)
+
+/* RSS Hash */
+#define TXGBE_POOLRSS(pl)          (0x019300 + (pl) * 4)
+#define   TXGBE_POOLRSS_L4HDR      MS(1, 0x1)
+#define   TXGBE_POOLRSS_L3HDR      MS(2, 0x1)
+#define   TXGBE_POOLRSS_L2HDR      MS(3, 0x1)
+#define   TXGBE_POOLRSS_L2TUN      MS(4, 0x1)
+#define   TXGBE_POOLRSS_TUNHDR     MS(5, 0x1)
+#define TXGBE_POOLRSSKEY(pl, i)    (0x01A000 + (pl) * 0x40 + (i) * 4)
+#define TXGBE_POOLRSSMAP(pl, i)    (0x01B000 + (pl) * 0x40 + (i) * 4)
+
+/******************************************************************************
+ * Packet Buffer
+ ******************************************************************************/
+/* Flow Control */
+#define TXGBE_FCXOFFTM(i)               (0x019200 + (i) * 4) /* 0-3 */
+#define TXGBE_FCWTRLO(tc)               (0x019220 + (tc) * 4)
+#define   TXGBE_FCWTRLO_TH(v)           LS(v, 10, 0x1FF) /* KB */
+#define   TXGBE_FCWTRLO_XON             MS(31, 0x1)
+#define TXGBE_FCWTRHI(tc)               (0x019260 + (tc) * 4)
+#define   TXGBE_FCWTRHI_TH(v)           LS(v, 10, 0x1FF) /* KB */
+#define   TXGBE_FCWTRHI_XOFF            MS(31, 0x1)
+#define TXGBE_RXFCRFSH                  0x0192A0
+#define   TXGBE_RXFCFSH_TIME(v)         LS(v, 0, 0xFFFF)
+#define TXGBE_FCSTAT                    0x01CE00
+#define   TXGBE_FCSTAT_DLNK(tc)         MS((tc), 0x1)
+#define   TXGBE_FCSTAT_ULNK(tc)         MS((tc) + 8, 0x1)
+
+#define TXGBE_RXFCCFG                   0x011090
+#define   TXGBE_RXFCCFG_FC              MS(0, 0x1)
+#define   TXGBE_RXFCCFG_PFC             MS(8, 0x1)
+#define TXGBE_TXFCCFG                   0x0192A4
+#define   TXGBE_TXFCCFG_FC              MS(3, 0x1)
+#define   TXGBE_TXFCCFG_PFC             MS(4, 0x1)
+
+/* Data Buffer */
+#define TXGBE_PBRXCTL                   0x019000
+#define   TXGBE_PBRXCTL_ST              MS(0, 0x1)
+#define   TXGBE_PBRXCTL_ENA             MS(31, 0x1)
+#define TXGBE_PBRXUP2TC                 0x019008
+#define TXGBE_PBTXUP2TC                 0x01C800
+#define   TXGBE_DCBUP2TC_MAP(tc, v)     LS(v, 3 * (tc), 0x7)
+#define   TXGBE_DCBUP2TC_DEC(tc, r)     RS(r, 3 * (tc), 0x7)
+#define TXGBE_PBRXSIZE(tc)              (0x019020 + (tc) * 4)
+#define   TXGBE_PBRXSIZE_KB(v)          LS(v, 10, 0x3FF)
+
+#define TXGBE_PBRXOFTMR                 0x019094
+#define TXGBE_PBRXDBGCMD                0x019090
+#define TXGBE_PBRXDBGDAT(tc)            (0x0190A0 + (tc) * 4)
+#define TXGBE_PBTXDMATH(tc)             (0x018020 + (tc) * 4)
+#define TXGBE_PBTXSIZE(tc)              (0x01CC00 + (tc) * 4)
+
+/* LLI */
+#define TXGBE_PBRXLLI              0x019080
+#define   TXGBE_PBRXLLI_SZLT(v)    LS(v, 0, 0xFFF)
+#define   TXGBE_PBRXLLI_UPLT(v)    LS(v, 16, 0x7)
+#define   TXGBE_PBRXLLI_UPEA       MS(19, 0x1)
+#define   TXGBE_PBRXLLI_CNM        MS(20, 0x1)
+
+/* Port Arbiter(QoS) */
+#define TXGBE_PARBTXCTL            0x01CD00
+#define   TXGBE_PARBTXCTL_SP       MS(5, 0x1)
+#define   TXGBE_PARBTXCTL_DA       MS(6, 0x1)
+#define   TXGBE_PARBTXCTL_RECYC    MS(8, 0x1)
+#define TXGBE_PARBTXCFG(tc)        (0x01CD20 + (tc) * 4)
+#define   TXGBE_PARBTXCFG_CRQ(v)   LS(v, 0, 0x1FF)
+#define   TXGBE_PARBTXCFG_BWG(v)   LS(v, 9, 0x7)
+#define   TXGBE_PARBTXCFG_MCL(v)   LS(v, 12, 0xFFF)
+#define   TXGBE_PARBTXCFG_GSP      MS(30, 0x1)
+#define   TXGBE_PARBTXCFG_LSP      MS(31, 0x1)
+
+/******************************************************************************
+ * Queue Registers
+ ******************************************************************************/
+/* Queue Control */
+#define TXGBE_QPRXDROP(i)               (0x012080 + (i) * 4) /* 0-3 */
+#define TXGBE_QPRXSTRPVLAN(i)           (0x012090 + (i) * 4) /* 0-3 */
+#define TXGBE_QPTXLLI(i)                (0x018040 + (i) * 4) /* 0-3 */
+
+/* Queue Arbiter(QoS) */
+#define TXGBE_QARBRXCTL            0x012000
+#define   TXGBE_QARBRXCTL_RC       MS(1, 0x1)
+#define   TXGBE_QARBRXCTL_WSP      MS(2, 0x1)
+#define   TXGBE_QARBRXCTL_DA       MS(6, 0x1)
+#define TXGBE_QARBRXCFG(tc)        (0x012040 + (tc) * 4)
+#define   TXGBE_QARBRXCFG_CRQ(v)   LS(v, 0, 0x1FF)
+#define   TXGBE_QARBRXCFG_BWG(v)   LS(v, 9, 0x7)
+#define   TXGBE_QARBRXCFG_MCL(v)   LS(v, 12, 0xFFF)
+#define   TXGBE_QARBRXCFG_GSP      MS(30, 0x1)
+#define   TXGBE_QARBRXCFG_LSP      MS(31, 0x1)
+#define TXGBE_QARBRXTC             0x0194F8
+#define   TXGBE_QARBRXTC_RR        MS(0, 0x1)
+
+#define TXGBE_QARBTXCTL            0x018200
+#define   TXGBE_QARBTXCTL_WSP      MS(1, 0x1)
+#define   TXGBE_QARBTXCTL_RECYC    MS(4, 0x1)
+#define   TXGBE_QARBTXCTL_DA       MS(6, 0x1)
+#define TXGBE_QARBTXCFG(tc)        (0x018220 + (tc) * 4)
+#define   TXGBE_QARBTXCFG_CRQ(v)   LS(v, 0, 0x1FF)
+#define   TXGBE_QARBTXCFG_BWG(v)   LS(v, 9, 0x7)
+#define   TXGBE_QARBTXCFG_MCL(v)   LS(v, 12, 0xFFF)
+#define   TXGBE_QARBTXCFG_GSP      MS(30, 0x1)
+#define   TXGBE_QARBTXCFG_LSP      MS(31, 0x1)
+#define TXGBE_QARBTXMMW            0x018208
+#define     TXGBE_QARBTXMMW_DEF     (4)
+#define     TXGBE_QARBTXMMW_JF      (20)
+#define TXGBE_QARBTXRATEI          0x01820C
+#define TXGBE_QARBTXRATE           0x018404
+#define   TXGBE_QARBTXRATE_MIN(v)  LS(v, 0, 0x3FFF)
+#define   TXGBE_QARBTXRATE_MAX(v)  LS(v, 16, 0x3FFF)
+#define TXGBE_QARBTXCRED(rp)       (0x018500 + (rp) * 4)
+
+/* QCN */
+#define TXGBE_QCNADJ               0x018210
+#define TXGBE_QCNRP                0x018400
+#define TXGBE_QCNRPRATE            0x018404
+#define TXGBE_QCNRPADJ             0x018408
+#define TXGBE_QCNRPRLD             0x01840C
+
+/* Misc Control */
+#define TXGBE_RSECCTL                    0x01200C
+#define   TXGBE_RSECCTL_TSRSC            MS(0, 0x1)
+#define TXGBE_DMATXCTRL                  0x018000
+#define   TXGBE_DMATXCTRL_ENA            MS(0, 0x1)
+#define   TXGBE_DMATXCTRL_TPID_MASK      MS(16, 0xFFFF)
+#define   TXGBE_DMATXCTRL_TPID(v)        LS(v, 16, 0xFFFF)
+
+/******************************************************************************
+ * Packet Filter (L2-7)
+ ******************************************************************************/
+/**
+ * Receive Scaling
+ **/
+#define TXGBE_RSSTBL(i)                 (0x019400 + (i) * 4) /* 32 */
+#define TXGBE_RSSKEY(i)                 (0x019480 + (i) * 4) /* 10 */
+#define TXGBE_RSSPBHASH                 0x0194F0
+#define   TXGBE_RSSPBHASH_BITS(tc, v)   LS(v, 3 * (tc), 0x7)
+#define TXGBE_RACTL                     0x0194F4
+#define   TXGBE_RACTL_RSSMKEY           MS(0, 0x1)
+#define   TXGBE_RACTL_RSSENA            MS(2, 0x1)
+#define   TXGBE_RACTL_RSSMASK           MS(16, 0xFFFF)
+#define   TXGBE_RACTL_RSSIPV4TCP        MS(16, 0x1)
+#define   TXGBE_RACTL_RSSIPV4           MS(17, 0x1)
+#define   TXGBE_RACTL_RSSIPV6           MS(20, 0x1)
+#define   TXGBE_RACTL_RSSIPV6TCP        MS(21, 0x1)
+#define   TXGBE_RACTL_RSSIPV4UDP        MS(22, 0x1)
+#define   TXGBE_RACTL_RSSIPV6UDP        MS(23, 0x1)
+
+/**
+ * Flow Director
+ **/
+#define PERFECT_BUCKET_64KB_HASH_MASK   0x07FF  /* 11 bits */
+#define PERFECT_BUCKET_128KB_HASH_MASK  0x0FFF  /* 12 bits */
+#define PERFECT_BUCKET_256KB_HASH_MASK  0x1FFF  /* 13 bits */
+#define SIG_BUCKET_64KB_HASH_MASK       0x1FFF  /* 13 bits */
+#define SIG_BUCKET_128KB_HASH_MASK      0x3FFF  /* 14 bits */
+#define SIG_BUCKET_256KB_HASH_MASK      0x7FFF  /* 15 bits */
+
+#define TXGBE_FDIRCTL                   0x019500
+#define   TXGBE_FDIRCTL_BUF_MASK        MS(0, 0x3)
+#define   TXGBE_FDIRCTL_BUF_64K         LS(1, 0, 0x3)
+#define   TXGBE_FDIRCTL_BUF_128K        LS(2, 0, 0x3)
+#define   TXGBE_FDIRCTL_BUF_256K        LS(3, 0, 0x3)
+#define   TXGBD_FDIRCTL_BUF_BYTE(r)     (1 << (15 + RS(r, 0, 0x3)))
+#define   TXGBE_FDIRCTL_INITDONE        MS(3, 0x1)
+#define   TXGBE_FDIRCTL_PERFECT         MS(4, 0x1)
+#define   TXGBE_FDIRCTL_REPORT_MASK     MS(5, 0x7)
+#define   TXGBE_FDIRCTL_REPORT_MATCH    LS(1, 5, 0x7)
+#define   TXGBE_FDIRCTL_REPORT_ALWAYS   LS(5, 5, 0x7)
+#define   TXGBE_FDIRCTL_DROPQP_MASK     MS(8, 0x7F)
+#define   TXGBE_FDIRCTL_DROPQP(v)       LS(v, 8, 0x7F)
+#define   TXGBE_FDIRCTL_HASHBITS_MASK   MS(20, 0xF)
+#define   TXGBE_FDIRCTL_HASHBITS(v)     LS(v, 20, 0xF)
+#define   TXGBE_FDIRCTL_MAXLEN(v)       LS(v, 24, 0xF)
+#define   TXGBE_FDIRCTL_FULLTHR(v)      LS(v, 28, 0xF)
+#define TXGBE_FDIRFLEXCFG(i)            (0x019580 + (i) * 4) /* 0-15 */
+#define   TXGBD_FDIRFLEXCFG_ALL(r, i)   RS(r, (i) << 3, 0xFF)
+#define   TXGBE_FDIRFLEXCFG_ALL(v, i)   LS(v, (i) << 3, 0xFF)
+#define   TXGBE_FDIRFLEXCFG_BASE_MAC    LS(0, 0, 0x3)
+#define   TXGBE_FDIRFLEXCFG_BASE_L2     LS(1, 0, 0x3)
+#define   TXGBE_FDIRFLEXCFG_BASE_L3     LS(2, 0, 0x3)
+#define   TXGBE_FDIRFLEXCFG_BASE_PAY    LS(3, 0, 0x3)
+#define   TXGBE_FDIRFLEXCFG_DIA         MS(2, 0x1)
+#define   TXGBE_FDIRFLEXCFG_OFST_MASK   MS(3, 0x1F)
+#define   TXGBD_FDIRFLEXCFG_OFST(r)     RS(r, 3, 0x1F)
+#define   TXGBE_FDIRFLEXCFG_OFST(v)     LS(v, 3, 0x1F)
+#define TXGBE_FDIRBKTHKEY               0x019568
+#define TXGBE_FDIRSIGHKEY               0x01956C
+
+/* Common Mask */
+#define TXGBE_FDIRDIP4MSK               0x01953C
+#define TXGBE_FDIRSIP4MSK               0x019540
+#define TXGBE_FDIRIP6MSK                0x019574
+#define   TXGBE_FDIRIP6MSK_SRC(v)       LS(v, 0, 0xFFFF)
+#define   TXGBE_FDIRIP6MSK_DST(v)       LS(v, 16, 0xFFFF)
+#define TXGBE_FDIRTCPMSK                0x019544
+#define   TXGBE_FDIRTCPMSK_SRC(v)       LS(v, 0, 0xFFFF)
+#define   TXGBE_FDIRTCPMSK_DST(v)       LS(v, 16, 0xFFFF)
+#define TXGBE_FDIRUDPMSK                0x019548
+#define   TXGBE_FDIRUDPMSK_SRC(v)       LS(v, 0, 0xFFFF)
+#define   TXGBE_FDIRUDPMSK_DST(v)       LS(v, 16, 0xFFFF)
+#define TXGBE_FDIRSCTPMSK               0x019560
+#define   TXGBE_FDIRSCTPMSK_SRC(v)      LS(v, 0, 0xFFFF)
+#define   TXGBE_FDIRSCTPMSK_DST(v)      LS(v, 16, 0xFFFF)
+#define TXGBE_FDIRMSK                   0x019570
+#define   TXGBE_FDIRMSK_POOL            MS(2, 0x1)
+#define   TXGBE_FDIRMSK_L4P             MS(3, 0x1)
+#define   TXGBE_FDIRMSK_L3P             MS(4, 0x1)
+#define   TXGBE_FDIRMSK_TUNTYPE         MS(5, 0x1)
+#define   TXGBE_FDIRMSK_TUNIP           MS(6, 0x1)
+#define   TXGBE_FDIRMSK_TUNPKT          MS(7, 0x1)
+
+/* Programming Interface */
+#define TXGBE_FDIRPIPORT                0x019520
+#define   TXGBE_FDIRPIPORT_SRC(v)       LS(v, 0, 0xFFFF)
+#define   TXGBE_FDIRPIPORT_DST(v)       LS(v, 16, 0xFFFF)
+#define TXGBE_FDIRPISIP6(i)             (0x01950C + (i) * 4) /* 0-2 */
+#define TXGBE_FDIRPISIP4                0x019518
+#define TXGBE_FDIRPIDIP4                0x01951C
+#define TXGBE_FDIRPIFLEX                0x019524
+#define   TXGBE_FDIRPIFLEX_PTYPE(v)     LS(v, 0, 0xFF)
+#define   TXGBE_FDIRPIFLEX_FLEX(v)      LS(v, 16, 0xFFFF)
+#define TXGBE_FDIRPIHASH                0x019528
+#define   TXGBE_FDIRPIHASH_BKT(v)       LS(v, 0, 0x7FFF)
+#define   TXGBE_FDIRPIHASH_VLD          MS(15, 0x1)
+#define   TXGBE_FDIRPIHASH_SIG(v)       LS(v, 16, 0x7FFF)
+#define   TXGBE_FDIRPIHASH_IDX(v)       LS(v, 16, 0xFFFF)
+#define TXGBE_FDIRPICMD                 0x01952C
+#define   TXGBE_FDIRPICMD_OP_MASK       MS(0, 0x3)
+#define   TXGBE_FDIRPICMD_OP_ADD        LS(1, 0, 0x3)
+#define   TXGBE_FDIRPICMD_OP_REM        LS(2, 0, 0x3)
+#define   TXGBE_FDIRPICMD_OP_QRY        LS(3, 0, 0x3)
+#define   TXGBE_FDIRPICMD_VLD           MS(2, 0x1)
+#define   TXGBE_FDIRPICMD_UPD           MS(3, 0x1)
+#define   TXGBE_FDIRPICMD_DIP6          MS(4, 0x1)
+#define   TXGBE_FDIRPICMD_FT(v)         LS(v, 5, 0x3)
+#define   TXGBE_FDIRPICMD_FT_MASK       MS(5, 0x3)
+#define   TXGBE_FDIRPICMD_FT_UDP        LS(1, 5, 0x3)
+#define   TXGBE_FDIRPICMD_FT_TCP        LS(2, 5, 0x3)
+#define   TXGBE_FDIRPICMD_FT_SCTP       LS(3, 5, 0x3)
+#define   TXGBE_FDIRPICMD_IP6           MS(7, 0x1)
+#define   TXGBE_FDIRPICMD_CLR           MS(8, 0x1)
+#define   TXGBE_FDIRPICMD_DROP          MS(9, 0x1)
+#define   TXGBE_FDIRPICMD_LLI           MS(10, 0x1)
+#define   TXGBE_FDIRPICMD_LAST          MS(11, 0x1)
+#define   TXGBE_FDIRPICMD_COLLI         MS(12, 0x1)
+#define   TXGBE_FDIRPICMD_QPENA         MS(15, 0x1)
+#define   TXGBE_FDIRPICMD_QP(v)         LS(v, 16, 0x7F)
+#define   TXGBE_FDIRPICMD_POOL(v)       LS(v, 24, 0x3F)
+
+/**
+ * 5-tuple Filter
+ **/
+#define TXGBE_5TFSADDR(i)               (0x019600 + (i) * 4) /* 0-127 */
+#define TXGBE_5TFDADDR(i)               (0x019800 + (i) * 4) /* 0-127 */
+#define TXGBE_5TFPORT(i)                (0x019A00 + (i) * 4) /* 0-127 */
+#define   TXGBE_5TFPORT_SRC(v)          LS(v, 0, 0xFFFF)
+#define   TXGBE_5TFPORT_DST(v)          LS(v, 16, 0xFFFF)
+#define TXGBE_5TFCTL0(i)                (0x019C00 + (i) * 4) /* 0-127 */
+#define   TXGBE_5TFCTL0_PROTO(v)        LS(v, 0, 0x3)
+enum txgbe_5tuple_protocol {
+	TXGBE_5TF_PROT_TCP = 0,
+	TXGBE_5TF_PROT_UDP,
+	TXGBE_5TF_PROT_SCTP,
+	TXGBE_5TF_PROT_NONE,
+};
+#define   TXGBE_5TFCTL0_PRI(v)          LS(v, 2, 0x7)
+#define   TXGBE_5TFCTL0_POOL(v)         LS(v, 8, 0x3F)
+#define   TXGBE_5TFCTL0_MASK            MS(25, 0x3F)
+#define     TXGBE_5TFCTL0_MSADDR        MS(25, 0x1)
+#define     TXGBE_5TFCTL0_MDADDR        MS(26, 0x1)
+#define     TXGBE_5TFCTL0_MSPORT        MS(27, 0x1)
+#define     TXGBE_5TFCTL0_MDPORT        MS(28, 0x1)
+#define     TXGBE_5TFCTL0_MPROTO        MS(29, 0x1)
+#define     TXGBE_5TFCTL0_MPOOL         MS(30, 0x1)
+#define   TXGBE_5TFCTL0_ENA             MS(31, 0x1)
+#define TXGBE_5TFCTL1(i)                (0x019E00 + (i) * 4) /* 0-127 */
+#define   TXGBE_5TFCTL1_CHKSZ           MS(12, 0x1)
+#define   TXGBE_5TFCTL1_LLI             MS(20, 0x1)
+#define   TXGBE_5TFCTL1_QP(v)           LS(v, 21, 0x7F)
+
+/**
+ * Storm Control
+ **/
+#define TXGBE_STRMCTL              0x015004
+#define   TXGBE_STRMCTL_MCPNSH     MS(0, 0x1)
+#define   TXGBE_STRMCTL_MCDROP     MS(1, 0x1)
+#define   TXGBE_STRMCTL_BCPNSH     MS(2, 0x1)
+#define   TXGBE_STRMCTL_BCDROP     MS(3, 0x1)
+#define   TXGBE_STRMCTL_DFTPOOL    MS(4, 0x1)
+#define   TXGBE_STRMCTL_ITVL(v)    LS(v, 8, 0x3FF)
+#define TXGBE_STRMTH               0x015008
+#define   TXGBE_STRMTH_MC(v)       LS(v, 0, 0xFFFF)
+#define   TXGBE_STRMTH_BC(v)       LS(v, 16, 0xFFFF)
+
+/******************************************************************************
+ * Ether Flow
+ ******************************************************************************/
+#define TXGBE_PSRCTL                    0x015000
+#define   TXGBE_PSRCTL_TPE              MS(4, 0x1)
+#define   TXGBE_PSRCTL_ADHF12_MASK      MS(5, 0x3)
+#define   TXGBE_PSRCTL_ADHF12(v)        LS(v, 5, 0x3)
+#define   TXGBE_PSRCTL_UCHFENA          MS(7, 0x1)
+#define   TXGBE_PSRCTL_MCHFENA          MS(7, 0x1)
+#define   TXGBE_PSRCTL_MCP              MS(8, 0x1)
+#define   TXGBE_PSRCTL_UCP              MS(9, 0x1)
+#define   TXGBE_PSRCTL_BCA              MS(10, 0x1)
+#define   TXGBE_PSRCTL_L4CSUM           MS(12, 0x1)
+#define   TXGBE_PSRCTL_PCSD             MS(13, 0x1)
+#define   TXGBE_PSRCTL_RSCPUSH          MS(15, 0x1)
+#define   TXGBE_PSRCTL_RSCDIA           MS(16, 0x1)
+#define   TXGBE_PSRCTL_RSCACK           MS(17, 0x1)
+#define   TXGBE_PSRCTL_LBENA            MS(18, 0x1)
+#define TXGBE_FRMSZ                     0x015020
+#define   TXGBE_FRMSZ_MAX_MASK          MS(0, 0xFFFF)
+#define   TXGBE_FRMSZ_MAX(v)            LS((v) + 4, 0, 0xFFFF)
+#define TXGBE_VLANCTL                   0x015088
+#define   TXGBE_VLANCTL_TPID_MASK       MS(0, 0xFFFF)
+#define   TXGBE_VLANCTL_TPID(v)         LS(v, 0, 0xFFFF)
+#define   TXGBE_VLANCTL_CFI             MS(28, 0x1)
+#define   TXGBE_VLANCTL_CFIENA          MS(29, 0x1)
+#define   TXGBE_VLANCTL_VFE             MS(30, 0x1)
+#define TXGBE_POOLCTL                   0x0151B0
+#define   TXGBE_POOLCTL_DEFDSA          MS(29, 0x1)
+#define   TXGBE_POOLCTL_RPLEN           MS(30, 0x1)
+#define   TXGBE_POOLCTL_MODE_MASK       MS(16, 0x3)
+#define     TXGBE_PSRPOOL_MODE_MAC      LS(0, 16, 0x3)
+#define     TXGBE_PSRPOOL_MODE_ETAG     LS(1, 16, 0x3)
+#define   TXGBE_POOLCTL_DEFPL(v)        LS(v, 7, 0x3F)
+#define     TXGBE_POOLCTL_DEFPL_MASK    MS(7, 0x3F)
+
+#define TXGBE_ETFLT(i)                  (0x015128 + (i) * 4) /* 0-7 */
+#define   TXGBE_ETFLT_ETID(v)           LS(v, 0, 0xFFFF)
+#define   TXGBE_ETFLT_ETID_MASK         MS(0, 0xFFFF)
+#define   TXGBE_ETFLT_POOL(v)           LS(v, 20, 0x3FF)
+#define   TXGBE_ETFLT_POOLENA           MS(26, 0x1)
+#define   TXGBE_ETFLT_FCOE              MS(27, 0x1)
+#define   TXGBE_ETFLT_TXAS              MS(29, 0x1)
+#define   TXGBE_ETFLT_1588              MS(30, 0x1)
+#define   TXGBE_ETFLT_ENA               MS(31, 0x1)
+#define TXGBE_ETCLS(i)                  (0x019100 + (i) * 4) /* 0-7 */
+#define   TXGBE_ETCLS_QPID(v)           LS(v, 16, 0x7F)
+#define   TXGBD_ETCLS_QPID(r)           RS(r, 16, 0x7F)
+#define   TXGBE_ETCLS_LLI               MS(29, 0x1)
+#define   TXGBE_ETCLS_QENA              MS(31, 0x1)
+#define TXGBE_SYNCLS                    0x019130
+#define   TXGBE_SYNCLS_ENA              MS(0, 0x1)
+#define   TXGBE_SYNCLS_QPID(v)          LS(v, 1, 0x7F)
+#define   TXGBD_SYNCLS_QPID(r)          RS(r, 1, 0x7F)
+#define   TXGBE_SYNCLS_QPID_MASK        MS(1, 0x7F)
+#define   TXGBE_SYNCLS_HIPRIO           MS(31, 0x1)
+
+/* MAC & VLAN & NVE */
+#define TXGBE_PSRVLANIDX           0x016230 /* 0-63 */
+#define TXGBE_PSRVLAN              0x016220
+#define   TXGBE_PSRVLAN_VID(v)     LS(v, 0, 0xFFF)
+#define   TXGBE_PSRVLAN_EA         MS(31, 0x1)
+#define TXGBE_PSRVLANPLM(i)        (0x016224 + (i) * 4) /* 0-1 */
+
+#define TXGBE_PSRNVEI              0x016260 /* 256 */
+#define TXGBE_PSRNVEADDR(i)        (0x016240 + (i) * 4) /* 0-3 */
+#define TXGBE_PSRNVE               0x016250
+#define   TXGBE_PSRNVE_KEY(v)      LS(v, 0, 0xFFFFFF)
+#define   TXGBE_PSRNVE_TYPE(v)     LS(v, 24, 0x3)
+#define TXGBE_PSRNVECTL            0x016254
+#define   TXGBE_PSRNVECTL_MKEY     MS(0, 0x1)
+#define   TXGBE_PSRNVECTL_MADDR    MS(1, 0x1)
+#define   TXGBE_PSRNVECTL_SEL(v)   LS(v, 8, 0x3)
+#define     TXGBE_PSRNVECTL_SEL_ODIP    (0)
+#define     TXGBE_PSRNVECTL_SEL_IDMAC   (1)
+#define     TXGBE_PSRNVECTL_SEL_IDIP    (2)
+#define   TXGBE_PSRNVECTL_EA       MS(31, 0x1)
+#define TXGBE_PSRNVEPM(i)          (0x016258 + (i) * 4) /* 0-1 */
+
+/**
+ * FCoE
+ **/
+#define TXGBE_FCCTL                0x015100
+#define   TXGBE_FCCTL_LLI          MS(0, 0x1)
+#define   TXGBE_FCCTL_SAVBAD       MS(1, 0x1)
+#define   TXGBE_FCCTL_FRSTRDH      MS(2, 0x1)
+#define   TXGBE_FCCTL_LSEQH        MS(3, 0x1)
+#define   TXGBE_FCCTL_ALLH         MS(4, 0x1)
+#define   TXGBE_FCCTL_FSEQH        MS(5, 0x1)
+#define   TXGBE_FCCTL_ICRC         MS(6, 0x1)
+#define   TXGBE_FCCTL_CRCBO        MS(7, 0x1)
+#define   TXGBE_FCCTL_VER(v)       LS(v, 8, 0xF)
+#define TXGBE_FCRSSCTL             0x019140
+#define   TXGBE_FCRSSCTL_EA        MS(0, 0x1)
+#define TXGBE_FCRSSTBL(i)          (0x019160 + (i) * 4) /* 0-7 */
+#define   TXGBE_FCRSSTBL_QUE(v)    LS(v, 0, 0x7F)
+
+#define TXGBE_FCRXEOF              0x015158
+#define TXGBE_FCRXSOF              0x0151F8
+#define TXGBE_FCTXEOF              0x018384
+#define TXGBE_FCTXSOF              0x018380
+#define TXGBE_FCRXFCDESC(i)        (0x012410 + (i) * 4) /* 0-1 */
+#define TXGBE_FCRXFCBUF            0x012418
+#define TXGBE_FCRXFCDDP            0x012420
+#define TXGBE_FCRXCTXINVL(i)       (0x0190C0 + (i) * 4) /* 0-15 */
+
+/* Programming Interface */
+#define TXGBE_FCCTXT               0x015110
+#define   TXGBE_FCCTXT_ID(v)       ((v) & 0x1FF) /* 512 */
+#define   TXGBE_FCCTXT_REVA        LS(0x1, 13, 0x1)
+#define   TXGBE_FCCTXT_WREA        LS(0x1, 14, 0x1)
+#define   TXGBE_FCCTXT_RDEA        LS(0x1, 15, 0x1)
+#define TXGBE_FCCTXTCTL            0x015108
+#define   TXGBE_FCCTXTCTL_EA       MS(0, 0x1)
+#define   TXGBE_FCCTXTCTL_FIRST    MS(1, 0x1)
+#define   TXGBE_FCCTXTCTL_WR       MS(2, 0x1)
+#define   TXGBE_FCCTXTCTL_SEQID(v) LS(v, 8, 0xFF)
+#define   TXGBE_FCCTXTCTL_SEQNR(v) LS(v, 16, 0xFFFF)
+#define TXGBE_FCCTXTPARM           0x0151D8
+
+/**
+ * Mirror Rules
+ **/
+#define TXGBE_MIRRCTL(i)           (0x015B00 + (i) * 4)
+#define  TXGBE_MIRRCTL_POOL        MS(0, 0x1)
+#define  TXGBE_MIRRCTL_UPLINK      MS(1, 0x1)
+#define  TXGBE_MIRRCTL_DNLINK      MS(2, 0x1)
+#define  TXGBE_MIRRCTL_VLAN        MS(3, 0x1)
+#define  TXGBE_MIRRCTL_DESTP(v)    LS(v, 8, 0x3F)
+#define TXGBE_MIRRVLANL(i)         (0x015B10 + (i) * 8)
+#define TXGBE_MIRRVLANH(i)         (0x015B14 + (i) * 8)
+#define TXGBE_MIRRPOOLL(i)         (0x015B30 + (i) * 8)
+#define TXGBE_MIRRPOOLH(i)         (0x015B34 + (i) * 8)
+
+/**
+ * Time Stamp
+ **/
+#define TXGBE_TSRXCTL              0x015188
+#define   TXGBE_TSRXCTL_VLD        MS(0, 0x1)
+#define   TXGBE_TSRXCTL_TYPE(v)    LS(v, 1, 0x7)
+#define     TXGBE_TSRXCTL_TYPE_V2L2         (0)
+#define     TXGBE_TSRXCTL_TYPE_V1L4         (1)
+#define     TXGBE_TSRXCTL_TYPE_V2L24        (2)
+#define     TXGBE_TSRXCTL_TYPE_V2EVENT      (5)
+#define   TXGBE_TSRXCTL_ENA        MS(4, 0x1)
+#define TXGBE_TSRXSTMPL            0x0151E8
+#define TXGBE_TSRXSTMPH            0x0151A4
+#define TXGBE_TSTXCTL              0x01D400
+#define   TXGBE_TSTXCTL_VLD        MS(0, 0x1)
+#define   TXGBE_TSTXCTL_ENA        MS(4, 0x1)
+#define TXGBE_TSTXSTMPL            0x01D404
+#define TXGBE_TSTXSTMPH            0x01D408
+#define TXGBE_TSTIMEL              0x01D40C
+#define TXGBE_TSTIMEH              0x01D410
+#define TXGBE_TSTIMEINC            0x01D414
+#define   TXGBE_TSTIMEINC_IV(v)    LS(v, 0, 0xFFFFFF)
+#define   TXGBE_TSTIMEINC_IP(v)    LS(v, 24, 0xFF)
+#define   TXGBE_TSTIMEINC_VP(v, p) \
+			(((v) & MS(0, 0xFFFFFF)) | TXGBE_TSTIMEINC_IP(p))
+
+/**
+ * Wake on Lan
+ **/
+#define TXGBE_WOLCTL               0x015B80
+#define TXGBE_WOLIPCTL             0x015B84
+#define TXGBE_WOLIP4(i)            (0x015BC0 + (i) * 4) /* 0-3 */
+#define TXGBE_WOLIP6(i)            (0x015BE0 + (i) * 4) /* 0-3 */
+
+#define TXGBE_WOLFLEXCTL           0x015CFC
+#define TXGBE_WOLFLEXI             0x015B8C
+#define TXGBE_WOLFLEXDAT(i)        (0x015C00 + (i) * 16) /* 0-15 */
+#define TXGBE_WOLFLEXMSK(i)        (0x015C08 + (i) * 16) /* 0-15 */
+
+/******************************************************************************
+ * Security Registers
+ ******************************************************************************/
+#define TXGBE_SECRXCTL             0x017000
+#define   TXGBE_SECRXCTL_ODSA      MS(0, 0x1)
+#define   TXGBE_SECRXCTL_XDSA      MS(1, 0x1)
+#define   TXGBE_SECRXCTL_CRCSTRIP  MS(2, 0x1)
+#define   TXGBE_SECRXCTL_SAVEBAD   MS(6, 0x1)
+#define TXGBE_SECRXSTAT            0x017004
+#define   TXGBE_SECRXSTAT_RDY      MS(0, 0x1)
+#define   TXGBE_SECRXSTAT_ECC      MS(1, 0x1)
+
+#define TXGBE_SECTXCTL             0x01D000
+#define   TXGBE_SECTXCTL_ODSA      MS(0, 0x1)
+#define   TXGBE_SECTXCTL_XDSA      MS(1, 0x1)
+#define   TXGBE_SECTXCTL_STFWD     MS(2, 0x1)
+#define   TXGBE_SECTXCTL_MSKIV     MS(3, 0x1)
+#define TXGBE_SECTXSTAT            0x01D004
+#define   TXGBE_SECTXSTAT_RDY      MS(0, 0x1)
+#define   TXGBE_SECTXSTAT_ECC      MS(1, 0x1)
+#define TXGBE_SECTXBUFAF           0x01D008
+#define TXGBE_SECTXBUFAE           0x01D00C
+#define TXGBE_SECTXIFG             0x01D020
+#define   TXGBE_SECTXIFG_MIN(v)    LS(v, 0, 0xF)
+#define   TXGBE_SECTXIFG_MIN_MASK  MS(0, 0xF)
+
+
+/**
+ * LinkSec
+ **/
+#define TXGBE_LSECRXCAP                0x017200
+#define TXGBE_LSECRXCTL                0x017204
+	/* disabled(0), check(1), strict(2), drop(3) */
+#define   TXGBE_LSECRXCTL_MODE_MASK    MS(2, 0x3)
+#define   TXGBE_LSECRXCTL_MODE_STRICT  LS(2, 2, 0x3)
+#define   TXGBE_LSECRXCTL_POSTHDR      MS(6, 0x1)
+#define   TXGBE_LSECRXCTL_REPLAY       MS(7, 0x1)
+#define TXGBE_LSECRXSCIL               0x017208
+#define TXGBE_LSECRXSCIH               0x01720C
+#define TXGBE_LSECRXSA(i)              (0x017210 + (i) * 4) /* 0-1 */
+#define TXGBE_LSECRXPN(i)              (0x017218 + (i) * 4) /* 0-1 */
+#define TXGBE_LSECRXKEY(n, i)          (0x017220 + 0x10 * (n) + 4 * (i)) /* 0-3 */
+#define TXGBE_LSECTXCAP                0x01D200
+#define TXGBE_LSECTXCTL                0x01D204
+	/* disabled(0), auth(1), auth+encrypt(2) */
+#define   TXGBE_LSECTXCTL_MODE_MASK    MS(0, 0x3)
+#define   TXGBE_LSECTXCTL_MODE_AUTH    LS(1, 0, 0x3)
+#define   TXGBE_LSECTXCTL_MODE_AENC    LS(2, 0, 0x3)
+#define   TXGBE_LSECTXCTL_PNTRH_MASK   MS(8, 0xFFFFFF)
+#define   TXGBE_LSECTXCTL_PNTRH(v)     LS(v, 8, 0xFFFFFF)
+#define TXGBE_LSECTXSCIL               0x01D208
+#define TXGBE_LSECTXSCIH               0x01D20C
+#define TXGBE_LSECTXSA                 0x01D210
+#define TXGBE_LSECTXPN0                0x01D214
+#define TXGBE_LSECTXPN1                0x01D218
+#define TXGBE_LSECTXKEY0(i)            (0x01D21C + (i) * 4) /* 0-3 */
+#define TXGBE_LSECTXKEY1(i)            (0x01D22C + (i) * 4) /* 0-3 */
+
+#define TXGBE_LSECRX_UTPKT             0x017240
+#define TXGBE_LSECRX_DECOCT            0x017244
+#define TXGBE_LSECRX_VLDOCT            0x017248
+#define TXGBE_LSECRX_BTPKT             0x01724C
+#define TXGBE_LSECRX_NOSCIPKT          0x017250
+#define TXGBE_LSECRX_UNSCIPKT          0x017254
+#define TXGBE_LSECRX_UNCHKPKT          0x017258
+#define TXGBE_LSECRX_DLYPKT            0x01725C
+#define TXGBE_LSECRX_LATEPKT           0x017260
+#define TXGBE_LSECRX_OKPKT(i)          (0x017264 + (i) * 4) /* 0-1 */
+#define TXGBE_LSECRX_BADPKT(i)         (0x01726C + (i) * 4) /* 0-1 */
+#define TXGBE_LSECRX_INVPKT(i)         (0x017274 + (i) * 4) /* 0-1 */
+#define TXGBE_LSECRX_BADSAPKT          0x01727C
+#define TXGBE_LSECRX_INVSAPKT          0x017280
+#define TXGBE_LSECTX_UTPKT             0x01D23C
+#define TXGBE_LSECTX_ENCPKT            0x01D240
+#define TXGBE_LSECTX_PROTPKT           0x01D244
+#define TXGBE_LSECTX_ENCOCT            0x01D248
+#define TXGBE_LSECTX_PROTOCT           0x01D24C
+
+/**
+ * IpSec
+ **/
+#define TXGBE_ISECRXIDX            0x017100
+#define TXGBE_ISECRXADDR(i)        (0x017104 + (i) * 4) /*0-3*/
+#define TXGBE_ISECRXSPI            0x017114
+#define TXGBE_ISECRXIPIDX          0x017118
+#define TXGBE_ISECRXKEY(i)         (0x01711C + (i) * 4) /*0-3*/
+#define TXGBE_ISECRXSALT           0x01712C
+#define TXGBE_ISECRXMODE           0x017130
+
+#define TXGBE_ISECTXIDX            0x01D100
+#define   TXGBE_ISECTXIDX_WT       0x80000000U
+#define   TXGBE_ISECTXIDX_RD       0x40000000U
+#define   TXGBE_ISECTXIDX_SDIDX    0x0U
+#define   TXGBE_ISECTXIDX_ENA      0x00000001U
+
+#define TXGBE_ISECTXSALT           0x01D104
+#define TXGBE_ISECTXKEY(i)         (0x01D108 + (i) * 4) /* 0-3 */
+
+/******************************************************************************
+ * MAC Registers
+ ******************************************************************************/
+#define TXGBE_MACRXCFG                  0x011004
+#define   TXGBE_MACRXCFG_ENA            MS(0, 0x1)
+#define   TXGBE_MACRXCFG_JUMBO          MS(8, 0x1)
+#define   TXGBE_MACRXCFG_LB             MS(10, 0x1)
+#define TXGBE_MACCNTCTL                 0x011800
+#define   TXGBE_MACCNTCTL_RC            MS(2, 0x1)
+
+#define TXGBE_MACRXFLT                  0x011008
+#define   TXGBE_MACRXFLT_PROMISC        MS(0, 0x1)
+#define   TXGBE_MACRXFLT_CTL_MASK       MS(6, 0x3)
+#define   TXGBE_MACRXFLT_CTL_DROP       LS(0, 6, 0x3)
+#define   TXGBE_MACRXFLT_CTL_NOPS       LS(1, 6, 0x3)
+#define   TXGBE_MACRXFLT_CTL_NOFT       LS(2, 6, 0x3)
+#define   TXGBE_MACRXFLT_CTL_PASS       LS(3, 6, 0x3)
+#define   TXGBE_MACRXFLT_RXALL          MS(31, 0x1)
+
+/******************************************************************************
+ * Statistic Registers
+ ******************************************************************************/
+/* Ring Counter */
+#define TXGBE_QPRXPKT(rp)                 (0x001014 + 0x40 * (rp))
+#define TXGBE_QPRXOCTL(rp)                (0x001018 + 0x40 * (rp))
+#define TXGBE_QPRXOCTH(rp)                (0x00101C + 0x40 * (rp))
+#define TXGBE_QPTXPKT(rp)                 (0x003014 + 0x40 * (rp))
+#define TXGBE_QPTXOCTL(rp)                (0x003018 + 0x40 * (rp))
+#define TXGBE_QPTXOCTH(rp)                (0x00301C + 0x40 * (rp))
+#define TXGBE_QPRXMPKT(rp)                (0x001020 + 0x40 * (rp))
+
+/* Host DMA Counter */
+#define TXGBE_DMATXDROP                   0x018300
+#define TXGBE_DMATXSECDROP                0x018304
+#define TXGBE_DMATXPKT                    0x018308
+#define TXGBE_DMATXOCTL                   0x01830C
+#define TXGBE_DMATXOCTH                   0x018310
+#define TXGBE_DMATXMNG                    0x018314
+#define TXGBE_DMARXDROP                   0x012500
+#define TXGBE_DMARXPKT                    0x012504
+#define TXGBE_DMARXOCTL                   0x012508
+#define TXGBE_DMARXOCTH                   0x01250C
+#define TXGBE_DMARXMNG                    0x012510
+
+/* Packet Buffer Counter */
+#define TXGBE_PBRXMISS(tc)                (0x019040 + (tc) * 4)
+#define TXGBE_PBRXPKT                     0x019060
+#define TXGBE_PBRXREP                     0x019064
+#define TXGBE_PBRXDROP                    0x019068
+#define TXGBE_PBRXLNKXOFF                 0x011988
+#define TXGBE_PBRXLNKXON                  0x011E0C
+#define TXGBE_PBRXUPXON(up)               (0x011E30 + (up) * 4)
+#define TXGBE_PBRXUPXOFF(up)              (0x011E10 + (up) * 4)
+
+#define TXGBE_PBTXLNKXOFF                 0x019218
+#define TXGBE_PBTXLNKXON                  0x01921C
+#define TXGBE_PBTXUPXON(up)               (0x0192E0 + (up) * 4)
+#define TXGBE_PBTXUPXOFF(up)              (0x0192C0 + (up) * 4)
+#define TXGBE_PBTXUPOFF(up)               (0x019280 + (up) * 4)
+
+#define TXGBE_PBLPBK                      0x01CF08
+
+/* Ether Flow Counter */
+#define TXGBE_LANPKTDROP                  0x0151C0
+#define TXGBE_MNGPKTDROP                  0x0151C4
+
+/* MAC Counter */
+#define TXGBE_MACRXERRCRCL           0x011928
+#define TXGBE_MACRXERRCRCH           0x01192C
+#define TXGBE_MACRXERRLENL           0x011978
+#define TXGBE_MACRXERRLENH           0x01197C
+#define TXGBE_MACRX1to64L            0x001940
+#define TXGBE_MACRX1to64H            0x001944
+#define TXGBE_MACRX65to127L          0x001948
+#define TXGBE_MACRX65to127H          0x00194C
+#define TXGBE_MACRX128to255L         0x001950
+#define TXGBE_MACRX128to255H         0x001954
+#define TXGBE_MACRX256to511L         0x001958
+#define TXGBE_MACRX256to511H         0x00195C
+#define TXGBE_MACRX512to1023L        0x001960
+#define TXGBE_MACRX512to1023H        0x001964
+#define TXGBE_MACRX1024toMAXL        0x001968
+#define TXGBE_MACRX1024toMAXH        0x00196C
+#define TXGBE_MACTX1to64L            0x001834
+#define TXGBE_MACTX1to64H            0x001838
+#define TXGBE_MACTX65to127L          0x00183C
+#define TXGBE_MACTX65to127H          0x001840
+#define TXGBE_MACTX128to255L         0x001844
+#define TXGBE_MACTX128to255H         0x001848
+#define TXGBE_MACTX256to511L         0x00184C
+#define TXGBE_MACTX256to511H         0x001850
+#define TXGBE_MACTX512to1023L        0x001854
+#define TXGBE_MACTX512to1023H        0x001858
+#define TXGBE_MACTX1024toMAXL        0x00185C
+#define TXGBE_MACTX1024toMAXH        0x001860
+
+#define TXGBE_MACRXUNDERSIZE         0x011938
+#define TXGBE_MACRXOVERSIZE          0x01193C
+#define TXGBE_MACRXJABBER            0x011934
+
+#define TXGBE_MACRXPKTL                0x011900
+#define TXGBE_MACRXPKTH                0x011904
+#define TXGBE_MACTXPKTL                0x01181C
+#define TXGBE_MACTXPKTH                0x011820
+#define TXGBE_MACRXGBOCTL              0x011908
+#define TXGBE_MACRXGBOCTH              0x01190C
+#define TXGBE_MACTXGBOCTL              0x011814
+#define TXGBE_MACTXGBOCTH              0x011818
+
+#define TXGBE_MACRXOCTL                0x011918
+#define TXGBE_MACRXOCTH                0x01191C
+#define TXGBE_MACRXMPKTL               0x011920
+#define TXGBE_MACRXMPKTH               0x011924
+#define TXGBE_MACTXOCTL                0x011824
+#define TXGBE_MACTXOCTH                0x011828
+#define TXGBE_MACTXMPKTL               0x01182C
+#define TXGBE_MACTXMPKTH               0x011830
+
+/* Management Counter */
+#define TXGBE_MNGOUT              0x01CF00
+#define TXGBE_MNGIN               0x01CF04
+
+/* MAC SEC Counter */
+#define TXGBE_LSECRXUNTAG         0x017240
+#define TXGBE_LSECRXDECOCT        0x017244
+#define TXGBE_LSECRXVLDOCT        0x017248
+#define TXGBE_LSECRXBADTAG        0x01724C
+#define TXGBE_LSECRXNOSCI         0x017250
+#define TXGBE_LSECRXUKSCI         0x017254
+#define TXGBE_LSECRXUNCHK         0x017258
+#define TXGBE_LSECRXDLY           0x01725C
+#define TXGBE_LSECRXLATE          0x017260
+#define TXGBE_LSECRXGOOD          0x017264
+#define TXGBE_LSECRXBAD           0x01726C
+#define TXGBE_LSECRXUK            0x017274
+#define TXGBE_LSECRXBADSA         0x01727C
+#define TXGBE_LSECRXUKSA          0x017280
+#define TXGBE_LSECTXUNTAG         0x01D23C
+#define TXGBE_LSECTXENC           0x01D240
+#define TXGBE_LSECTXPTT           0x01D244
+#define TXGBE_LSECTXENCOCT        0x01D248
+#define TXGBE_LSECTXPTTOCT        0x01D24C
+
+/* IP SEC Counter */
+
+/* FDIR Counter */
+#define TXGBE_FDIRFREE                  0x019538
+#define   TXGBE_FDIRFREE_FLT(r)         RS(r, 0, 0xFFFF)
+#define TXGBE_FDIRLEN                   0x01954C
+#define   TXGBE_FDIRLEN_BKTLEN(r)       RS(r, 0, 0x3F)
+#define   TXGBE_FDIRLEN_MAXLEN(r)       RS(r, 8, 0x3F)
+#define TXGBE_FDIRUSED                  0x019550
+#define   TXGBE_FDIRUSED_ADD(r)         RS(r, 0, 0xFFFF)
+#define   TXGBE_FDIRUSED_REM(r)         RS(r, 16, 0xFFFF)
+#define TXGBE_FDIRFAIL                  0x019554
+#define   TXGBE_FDIRFAIL_ADD(r)         RS(r, 0, 0xFF)
+#define   TXGBE_FDIRFAIL_REM(r)         RS(r, 8, 0xFF)
+#define TXGBE_FDIRMATCH                 0x019558
+#define TXGBE_FDIRMISS                  0x01955C
+
+/* FCOE Counter */
+#define TXGBE_FCOECRC                   0x015160
+#define TXGBE_FCOERPDC                  0x012514
+#define TXGBE_FCOELAST                  0x012518
+#define TXGBE_FCOEPRC                   0x015164
+#define TXGBE_FCOEDWRC                  0x015168
+#define TXGBE_FCOEPTC                   0x018318
+#define TXGBE_FCOEDWTC                  0x01831C
+
+/* Management Counter */
+#define TXGBE_MNGOS2BMC                 0x01E094
+#define TXGBE_MNGBMC2OS                 0x01E090
+
+/******************************************************************************
+ * PF(Physical Function) Registers
+ ******************************************************************************/
+/* Interrupt */
+#define TXGBE_ICRMISC          0x000100
+#define   TXGBE_ICRMISC_MASK   MS(8, 0xFFFFFF)
+#define   TXGBE_ICRMISC_LNKDN  MS(8, 0x1) /* eth link down */
+#define   TXGBE_ICRMISC_RST    MS(10, 0x1) /* device reset event */
+#define   TXGBE_ICRMISC_TS     MS(11, 0x1) /* time sync */
+#define   TXGBE_ICRMISC_STALL  MS(12, 0x1) /* Tx or Rx path is stalled */
+#define   TXGBE_ICRMISC_LNKSEC MS(13, 0x1) /* Tx LinkSec require key exchange */
+#define   TXGBE_ICRMISC_ERRBUF MS(14, 0x1) /* Packet Buffer Overrun */
+#define   TXGBE_ICRMISC_FDIR   MS(15, 0x1) /* FDir Exception */
+#define   TXGBE_ICRMISC_I2C    MS(16, 0x1) /* I2C interrupt */
+#define   TXGBE_ICRMISC_ERRMAC MS(17, 0x1) /* err reported by MAC */
+#define   TXGBE_ICRMISC_LNKUP  MS(18, 0x1) /* link up */
+#define   TXGBE_ICRMISC_ANDONE MS(19, 0x1) /* link auto-negotiation done */
+#define   TXGBE_ICRMISC_ERRIG  MS(20, 0x1) /* integrity error */
+#define   TXGBE_ICRMISC_SPI    MS(21, 0x1) /* SPI interface */
+#define   TXGBE_ICRMISC_VFMBX  MS(22, 0x1) /* VF-PF message box */
+#define   TXGBE_ICRMISC_GPIO   MS(26, 0x1) /* GPIO interrupt */
+#define   TXGBE_ICRMISC_ERRPCI MS(27, 0x1) /* pcie request error */
+#define   TXGBE_ICRMISC_HEAT   MS(28, 0x1) /* overheat detection */
+#define   TXGBE_ICRMISC_PROBE  MS(29, 0x1) /* probe match */
+#define   TXGBE_ICRMISC_MNGMBX MS(30, 0x1) /* mng mailbox */
+#define   TXGBE_ICRMISC_TIMER  MS(31, 0x1) /* tcp timer */
+#define   TXGBE_ICRMISC_DEFAULT ( \
+			TXGBE_ICRMISC_LNKDN | \
+			TXGBE_ICRMISC_RST | \
+			TXGBE_ICRMISC_ERRMAC | \
+			TXGBE_ICRMISC_LNKUP | \
+			TXGBE_ICRMISC_ANDONE | \
+			TXGBE_ICRMISC_ERRIG | \
+			TXGBE_ICRMISC_VFMBX | \
+			TXGBE_ICRMISC_MNGMBX | \
+			TXGBE_ICRMISC_STALL | \
+			TXGBE_ICRMISC_TIMER)
+#define   TXGBE_ICRMISC_LSC ( \
+			TXGBE_ICRMISC_LNKDN | \
+			TXGBE_ICRMISC_LNKUP)
+#define TXGBE_ICSMISC                   0x000104
+#define TXGBE_IENMISC                   0x000108
+#define TXGBE_IVARMISC                  0x0004FC
+#define   TXGBE_IVARMISC_VEC(v)         LS(v, 0, 0x7)
+#define   TXGBE_IVARMISC_VLD            MS(7, 0x1)
+#define TXGBE_ICR(i)                    (0x000120 + (i) * 4) /* 0-1 */
+#define   TXGBE_ICR_MASK                MS(0, 0xFFFFFFFF)
+#define TXGBE_ICS(i)                    (0x000130 + (i) * 4) /* 0-1 */
+#define   TXGBE_ICS_MASK                TXGBE_ICR_MASK
+#define TXGBE_IMS(i)                    (0x000140 + (i) * 4) /* 0-1 */
+#define   TXGBE_IMS_MASK                TXGBE_ICR_MASK
+#define TXGBE_IMC(i)                    (0x000150 + (i) * 4) /* 0-1 */
+#define   TXGBE_IMC_MASK                TXGBE_ICR_MASK
+#define TXGBE_IVAR(i)                   (0x000500 + (i) * 4) /* 0-3 */
+#define   TXGBE_IVAR_VEC(v)             LS(v, 0, 0x7)
+#define   TXGBE_IVAR_VLD                MS(7, 0x1)
+#define TXGBE_TCPTMR                    0x000170
+#define TXGBE_ITRSEL                    0x000180
+
+/* P2V Mailbox */
+#define TXGBE_MBMEM(i)           (0x005000 + 0x40 * (i)) /* 0-63 */
+#define TXGBE_MBCTL(i)           (0x000600 + 4 * (i)) /* 0-63 */
+#define   TXGBE_MBCTL_STS        MS(0, 0x1) /* Initiate message send to VF */
+#define   TXGBE_MBCTL_ACK        MS(1, 0x1) /* Ack message recv'd from VF */
+#define   TXGBE_MBCTL_VFU        MS(2, 0x1) /* VF owns the mailbox buffer */
+#define   TXGBE_MBCTL_PFU        MS(3, 0x1) /* PF owns the mailbox buffer */
+#define   TXGBE_MBCTL_RVFU       MS(4, 0x1) /* Reset VFU - used when VF stuck */
+#define TXGBE_MBVFICR(i)                (0x000480 + 4 * (i)) /* 0-3 */
+#define   TXGBE_MBVFICR_INDEX(vf)       ((vf) >> 4)
+#define   TXGBE_MBVFICR_VFREQ_MASK      (0x0000FFFF) /* bits for VF messages */
+#define   TXGBE_MBVFICR_VFREQ_VF1       (0x00000001) /* bit for VF 1 message */
+#define   TXGBE_MBVFICR_VFACK_MASK      (0xFFFF0000) /* bits for VF acks */
+#define   TXGBE_MBVFICR_VFACK_VF1       (0x00010000) /* bit for VF 1 ack */
+#define TXGBE_FLRVFP(i)                 (0x000490 + 4 * (i)) /* 0-1 */
+#define TXGBE_FLRVFE(i)                 (0x0004A0 + 4 * (i)) /* 0-1 */
+#define TXGBE_FLRVFEC(i)                (0x0004A8 + 4 * (i)) /* 0-1 */
+
+/******************************************************************************
+ * VF(Virtual Function) Registers
+ ******************************************************************************/
+#define TXGBE_VFPBWRAP                  0x000000
+#define   TXGBE_VFPBWRAP_WRAP(r, tc)    (((0x7 << (4 * (tc))) & (r)) >> (4 * (tc)))
+#define   TXGBE_VFPBWRAP_EMPT(r, tc)    (((0x8 << (4 * (tc))) & (r)) >> (4 * (tc)))
+#define TXGBE_VFSTATUS                  0x000004
+#define   TXGBE_VFSTATUS_UP             MS(0, 0x1)
+#define   TXGBE_VFSTATUS_BW_MASK        MS(1, 0x7)
+#define     TXGBE_VFSTATUS_BW_10G       LS(0x1, 1, 0x7)
+#define     TXGBE_VFSTATUS_BW_1G        LS(0x2, 1, 0x7)
+#define     TXGBE_VFSTATUS_BW_100M      LS(0x4, 1, 0x7)
+#define   TXGBE_VFSTATUS_BUSY           MS(4, 0x1)
+#define   TXGBE_VFSTATUS_LANID          MS(8, 0x1)
+#define TXGBE_VFRST                     0x000008
+#define   TXGBE_VFRST_SET               MS(0, 0x1)
+#define TXGBE_VFPLCFG                   0x000078
+#define   TXGBE_VFPLCFG_RSV             MS(0, 0x1)
+#define   TXGBE_VFPLCFG_PSR(v)          LS(v, 1, 0x1F)
+#define     TXGBE_VFPLCFG_PSRL4HDR      (0x1)
+#define     TXGBE_VFPLCFG_PSRL3HDR      (0x2)
+#define     TXGBE_VFPLCFG_PSRL2HDR      (0x4)
+#define     TXGBE_VFPLCFG_PSRTUNHDR     (0x8)
+#define     TXGBE_VFPLCFG_PSRTUNMAC     (0x10)
+#define   TXGBE_VFPLCFG_RSSMASK         MS(16, 0xFF)
+#define   TXGBE_VFPLCFG_RSSIPV4TCP      MS(16, 0x1)
+#define   TXGBE_VFPLCFG_RSSIPV4         MS(17, 0x1)
+#define   TXGBE_VFPLCFG_RSSIPV6         MS(20, 0x1)
+#define   TXGBE_VFPLCFG_RSSIPV6TCP      MS(21, 0x1)
+#define   TXGBE_VFPLCFG_RSSIPV4UDP      MS(22, 0x1)
+#define   TXGBE_VFPLCFG_RSSIPV6UDP      MS(23, 0x1)
+#define   TXGBE_VFPLCFG_RSSENA          MS(24, 0x1)
+#define   TXGBE_VFPLCFG_RSSHASH(v)      LS(v, 29, 0x7)
+#define TXGBE_VFRSSKEY(i)               (0x000080 + (i) * 4) /* 0-9 */
+#define TXGBE_VFRSSTBL(i)               (0x0000C0 + (i) * 4) /* 0-15 */
+#define TXGBE_VFICR                     0x000100
+#define   TXGBE_VFICR_MASK              LS(7, 0, 0x7)
+#define   TXGBE_VFICR_MBX               MS(0, 0x1)
+#define   TXGBE_VFICR_DONE1             MS(1, 0x1)
+#define   TXGBE_VFICR_DONE2             MS(2, 0x1)
+#define TXGBE_VFICS                     0x000104
+#define   TXGBE_VFICS_MASK              TXGBE_VFICR_MASK
+#define TXGBE_VFIMS                     0x000108
+#define   TXGBE_VFIMS_MASK              TXGBE_VFICR_MASK
+#define TXGBE_VFIMC                     0x00010C
+#define   TXGBE_VFIMC_MASK              TXGBE_VFICR_MASK
+#define TXGBE_VFGPIE                    0x000118
+#define TXGBE_VFIVAR(i)                 (0x000240 + 4 * (i)) /* 0-3 */
+#define TXGBE_VFIVARMISC                0x000260
+#define   TXGBE_VFIVAR_ALLOC(v)         LS(v, 0, 0x3)
+#define   TXGBE_VFIVAR_VLD              MS(7, 0x1)
+
+#define TXGBE_VFMBCTL                   0x000600
+#define   TXGBE_VFMBCTL_REQ     MS(0, 0x1) /* Request for PF Ready bit */
+#define   TXGBE_VFMBCTL_ACK     MS(1, 0x1) /* Ack PF message received */
+#define   TXGBE_VFMBCTL_VFU     MS(2, 0x1) /* VF owns the mailbox buffer */
+#define   TXGBE_VFMBCTL_PFU     MS(3, 0x1) /* PF owns the mailbox buffer */
+#define   TXGBE_VFMBCTL_PFSTS   MS(4, 0x1) /* PF wrote a message in the MB */
+#define   TXGBE_VFMBCTL_PFACK   MS(5, 0x1) /* PF ack the previous VF msg */
+#define   TXGBE_VFMBCTL_RSTI    MS(6, 0x1) /* PF has reset indication */
+#define   TXGBE_VFMBCTL_RSTD    MS(7, 0x1) /* PF has indicated reset done */
+#define   TXGBE_VFMBCTL_R2C_BITS        (TXGBE_VFMBCTL_RSTD | \
+					 TXGBE_VFMBCTL_PFSTS | \
+					 TXGBE_VFMBCTL_PFACK)
+#define TXGBE_VFMBX                     0x000C00 /* 0-15 */
+#define TXGBE_VFTPHCTL(i)               (0x000D00 + 4 * (i)) /* 0-7 */
+
+/******************************************************************************
+ * PF&VF TxRx Interface
+ ******************************************************************************/
+#define RNGLEN(v)     ROUND_OVER(v, 13, 7)
+#define HDRLEN(v)     ROUND_OVER(v, 10, 6)
+#define PKTLEN(v)     ROUND_OVER(v, 14, 10)
+#define INTTHR(v)     ROUND_OVER(v, 4,  0)
+
+#define TXGBE_RING_DESC_ALIGN	128
+#define TXGBE_RING_DESC_MIN	128
+#define TXGBE_RING_DESC_MAX	8192
+#define TXGBE_RXD_ALIGN		TXGBE_RING_DESC_ALIGN
+#define TXGBE_TXD_ALIGN		TXGBE_RING_DESC_ALIGN
+
+/* receive ring */
+#define TXGBE_RXBAL(rp)                 (0x001000 + 0x40 * (rp))
+#define TXGBE_RXBAH(rp)                 (0x001004 + 0x40 * (rp))
+#define TXGBE_RXRP(rp)                  (0x00100C + 0x40 * (rp))
+#define TXGBE_RXWP(rp)                  (0x001008 + 0x40 * (rp))
+#define TXGBE_RXCFG(rp)                 (0x001010 + 0x40 * (rp))
+#define   TXGBE_RXCFG_ENA               MS(0, 0x1)
+#define   TXGBE_RXCFG_RNGLEN(v)         LS(RNGLEN(v), 1, 0x3F)
+#define   TXGBE_RXCFG_PKTLEN(v)         LS(PKTLEN(v), 8, 0xF)
+#define     TXGBE_RXCFG_PKTLEN_MASK     MS(8, 0xF)
+#define   TXGBE_RXCFG_HDRLEN(v)         LS(HDRLEN(v), 12, 0xF)
+#define     TXGBE_RXCFG_HDRLEN_MASK     MS(12, 0xF)
+#define   TXGBE_RXCFG_WTHRESH(v)        LS(v, 16, 0x7)
+#define   TXGBE_RXCFG_ETAG              MS(22, 0x1)
+#define   TXGBE_RXCFG_RSCMAX_MASK       MS(23, 0x3)
+#define     TXGBE_RXCFG_RSCMAX_1        LS(0, 23, 0x3)
+#define     TXGBE_RXCFG_RSCMAX_4        LS(1, 23, 0x3)
+#define     TXGBE_RXCFG_RSCMAX_8        LS(2, 23, 0x3)
+#define     TXGBE_RXCFG_RSCMAX_16       LS(3, 23, 0x3)
+#define   TXGBE_RXCFG_STALL             MS(25, 0x1)
+#define   TXGBE_RXCFG_SPLIT             MS(26, 0x1)
+#define   TXGBE_RXCFG_RSCMODE           MS(27, 0x1)
+#define   TXGBE_RXCFG_CNTAG             MS(28, 0x1)
+#define   TXGBE_RXCFG_RSCENA            MS(29, 0x1)
+#define   TXGBE_RXCFG_DROP              MS(30, 0x1)
+#define   TXGBE_RXCFG_VLAN              MS(31, 0x1)
+
+/* transmit ring */
+#define TXGBE_TXBAL(rp)                 (0x003000 + 0x40 * (rp))
+#define TXGBE_TXBAH(rp)                 (0x003004 + 0x40 * (rp))
+#define TXGBE_TXWP(rp)                  (0x003008 + 0x40 * (rp))
+#define TXGBE_TXRP(rp)                  (0x00300C + 0x40 * (rp))
+#define TXGBE_TXCFG(rp)                 (0x003010 + 0x40 * (rp))
+#define   TXGBE_TXCFG_ENA               MS(0, 0x1)
+#define   TXGBE_TXCFG_BUFLEN_MASK       MS(1, 0x3F)
+#define   TXGBE_TXCFG_BUFLEN(v)         LS(RNGLEN(v), 1, 0x3F)
+#define   TXGBE_TXCFG_HTHRESH_MASK      MS(8, 0xF)
+#define   TXGBE_TXCFG_HTHRESH(v)        LS(v, 8, 0xF)
+#define   TXGBE_TXCFG_WTHRESH_MASK      MS(16, 0x7F)
+#define   TXGBE_TXCFG_WTHRESH(v)        LS(v, 16, 0x7F)
+#define   TXGBE_TXCFG_FLUSH             MS(26, 0x1)
+
+/* interrupt registers */
+#define TXGBE_ITRI                      0x000180
+#define TXGBE_ITR(i)                    (0x000200 + 4 * (i))
+#define   TXGBE_ITR_IVAL_MASK           MS(2, 0x3FE)
+#define   TXGBE_ITR_IVAL(v)             LS(v, 2, 0x3FE)
+#define     TXGBE_ITR_IVAL_1G(us)       TXGBE_ITR_IVAL((us) / 2)
+#define     TXGBE_ITR_IVAL_10G(us)      TXGBE_ITR_IVAL((us) / 20)
+#define   TXGBE_ITR_LLIEA               MS(15, 0x1)
+#define   TXGBE_ITR_LLICREDIT(v)        LS(v, 16, 0x1F)
+#define   TXGBE_ITR_CNT(v)              LS(v, 21, 0x7F)
+#define   TXGBE_ITR_WRDSA               MS(31, 0x1)
+#define TXGBE_GPIE                      0x000118
+#define   TXGBE_GPIE_MSIX               MS(0, 0x1)
+#define   TXGBE_GPIE_LLIEA              MS(1, 0x1)
+#define   TXGBE_GPIE_LLIVAL(v)          LS(v, 4, 0xF)
+#define   TXGBE_GPIE_RSCDLY(v)          LS(v, 8, 0x7)
+
+/******************************************************************************
+ * Debug Registers
+ ******************************************************************************/
+/**
+ * Probe
+ **/
+#define TXGBE_PROB                      0x010010
+#define TXGBE_IODRV                     0x010024
+
+#define TXGBE_PRBCTL                    0x010200
+#define TXGBE_PRBSTA                    0x010204
+#define TXGBE_PRBDAT                    0x010220
+#define TXGBE_PRBPTN                    0x010224
+#define TXGBE_PRBCNT                    0x010228
+#define TXGBE_PRBMSK                    0x01022C
+
+#define TXGBE_PRBPCI                    0x01F010
+#define TXGBE_PRBRDMA                   0x012010
+#define TXGBE_PRBTDMA                   0x018010
+#define TXGBE_PRBPSR                    0x015010
+#define TXGBE_PRBRDB                    0x019010
+#define TXGBE_PRBTDB                    0x01C010
+#define TXGBE_PRBRSEC                   0x017010
+#define TXGBE_PRBTSEC                   0x01D010
+#define TXGBE_PRBMNG                    0x01E010
+#define TXGBE_PRBRMAC                   0x011014
+#define TXGBE_PRBTMAC                   0x011010
+#define TXGBE_PRBREMAC                  0x011E04
+#define TXGBE_PRBTEMAC                  0x011E00
+
+/**
+ * ECC
+ **/
+#define TXGBE_ECCRXDMACTL               0x012014
+#define TXGBE_ECCRXDMAINJ               0x012018
+#define TXGBE_ECCRXDMA                  0x01201C
+#define TXGBE_ECCTXDMACTL               0x018014
+#define TXGBE_ECCTXDMAINJ               0x018018
+#define TXGBE_ECCTXDMA                  0x01801C
+
+#define TXGBE_ECCRXPBCTL                0x019014
+#define TXGBE_ECCRXPBINJ                0x019018
+#define TXGBE_ECCRXPB                   0x01901C
+#define TXGBE_ECCTXPBCTL                0x01C014
+#define TXGBE_ECCTXPBINJ                0x01C018
+#define TXGBE_ECCTXPB                   0x01C01C
+
+#define TXGBE_ECCRXETHCTL               0x015014
+#define TXGBE_ECCRXETHINJ               0x015018
+#define TXGBE_ECCRXETH                  0x01401C
+
+#define TXGBE_ECCRXSECCTL               0x017014
+#define TXGBE_ECCRXSECINJ               0x017018
+#define TXGBE_ECCRXSEC                  0x01701C
+#define TXGBE_ECCTXSECCTL               0x01D014
+#define TXGBE_ECCTXSECINJ               0x01D018
+#define TXGBE_ECCTXSEC                  0x01D01C
+
+/**
+ * Inspection
+ **/
+#define TXGBE_PBLBSTAT                  0x01906C
+#define   TXGBE_PBLBSTAT_FREE(r)        RS(r, 0, 0x3FF)
+#define   TXGBE_PBLBSTAT_FULL           MS(11, 0x1)
+#define TXGBE_PBRXSTAT                  0x019004
+#define   TXGBE_PBRXSTAT_WRAP(tc, r)    ((7u << 4 * (tc) & (r)) >> 4 * (tc))
+#define   TXGBE_PBRXSTAT_EMPT(tc, r)    ((8u << 4 * (tc) & (r)) >> 4 * (tc))
+#define TXGBE_PBRXSTAT2(tc)             (0x019180 + (tc) * 4)
+#define   TXGBE_PBRXSTAT2_USED(r)       RS(r, 0, 0xFFFF)
+#define TXGBE_PBRXWRPTR(tc)             (0x019180 + (tc) * 4)
+#define   TXGBE_PBRXWRPTR_HEAD(r)       RS(r, 0, 0xFFFF)
+#define   TXGBE_PBRXWRPTR_TAIL(r)       RS(r, 16, 0xFFFF)
+#define TXGBE_PBRXRDPTR(tc)             (0x0191A0 + (tc) * 4)
+#define   TXGBE_PBRXRDPTR_HEAD(r)       RS(r, 0, 0xFFFF)
+#define   TXGBE_PBRXRDPTR_TAIL(r)       RS(r, 16, 0xFFFF)
+#define TXGBE_PBRXDATA(tc)              (0x0191C0 + (tc) * 4)
+#define   TXGBE_PBRXDATA_RDPTR(r)       RS(r, 0, 0xFFFF)
+#define   TXGBE_PBRXDATA_WRPTR(r)       RS(r, 16, 0xFFFF)
+#define TXGBE_PBTXSTAT                  0x01C004
+#define   TXGBE_PBTXSTAT_EMPT(tc, r)    ((1 << (tc) & (r)) >> (tc))
+
+#define TXGBE_RXPBPFCDMACL              0x019210
+#define TXGBE_RXPBPFCDMACH              0x019214
+
+#define TXGBE_PSRLANPKTCNT              0x0151B8
+#define TXGBE_PSRMNGPKTCNT              0x0151BC
+
+#define TXGBE_P2VMBX_SIZE          (16) /* 16*4B */
+#define TXGBE_P2MMBX_SIZE          (64) /* 64*4B */
+
+/**************** Global Registers ****************************/
+/* chip control Registers */
+#define TXGBE_PWR                       0x010000
+#define   TXGBE_PWR_LANID(r)            RS(r, 30, 0x3)
+#define   TXGBE_PWR_LANID_SWAP          LS(2, 30, 0x3)
+
+/* Sensors for PVT(Process Voltage Temperature) */
+#define TXGBE_TSCTRL                    0x010300
+#define   TXGBE_TSCTRL_EVALMD           MS(31, 0x1)
+#define TXGBE_TSEN                      0x010304
+#define   TXGBE_TSEN_ENA                MS(0, 0x1)
+#define TXGBE_TSSTAT                    0x010308
+#define   TXGBE_TSSTAT_VLD              MS(16, 0x1)
+#define   TXGBE_TSSTAT_DATA(r)          RS(r, 0, 0x3FF)
+
+#define TXGBE_TSATHRE                   0x01030C
+#define TXGBE_TSDTHRE                   0x010310
+#define TXGBE_TSINTR                    0x010314
+#define   TXGBE_TSINTR_AEN              MS(0, 0x1)
+#define   TXGBE_TSINTR_DEN              MS(1, 0x1)
+#define TXGBE_TS_ALARM_ST               0x10318
+#define TXGBE_TS_ALARM_ST_DALARM        0x00000002U
+#define TXGBE_TS_ALARM_ST_ALARM         0x00000001U
+
+/* FMGR Registers */
+#define TXGBE_ILDRSTAT                  0x010120
+#define   TXGBE_ILDRSTAT_PCIRST         MS(0, 0x1)
+#define   TXGBE_ILDRSTAT_PWRRST         MS(1, 0x1)
+#define   TXGBE_ILDRSTAT_SWRST          MS(7, 0x1)
+#define   TXGBE_ILDRSTAT_SWRST_LAN0     MS(9, 0x1)
+#define   TXGBE_ILDRSTAT_SWRST_LAN1     MS(10, 0x1)
+
+#define TXGBE_SPISTAT                   0x01010C
+#define   TXGBE_SPISTAT_OPDONE          MS(0, 0x1)
+#define   TXGBE_SPISTAT_BPFLASH         MS(31, 0x1)
+
+/************************* Port Registers ************************************/
+/* I2C registers */
+#define TXGBE_I2CCON                 0x014900 /* I2C Control */
+#define   TXGBE_I2CCON_SDIA          ((1 << 6))
+#define   TXGBE_I2CCON_RESTART       ((1 << 5))
+#define   TXGBE_I2CCON_M10BITADDR    ((1 << 4))
+#define   TXGBE_I2CCON_S10BITADDR    ((1 << 3))
+#define   TXGBE_I2CCON_SPEED(v)      (((v) & 0x3) << 1)
+#define   TXGBE_I2CCON_MENA          ((1 << 0))
+#define TXGBE_I2CTAR                 0x014904 /* I2C Target Address */
+#define TXGBE_I2CDATA                0x014910 /* I2C Rx/Tx Data Buf and Cmd */
+#define   TXGBE_I2CDATA_STOP         ((1 << 9))
+#define   TXGBE_I2CDATA_READ         ((1 << 8) | TXGBE_I2CDATA_STOP)
+#define   TXGBE_I2CDATA_WRITE        ((0 << 8) | TXGBE_I2CDATA_STOP)
+#define TXGBE_I2CSSSCLHCNT           0x014914 /* Standard speed I2C Clock SCL High Count */
+#define TXGBE_I2CSSSCLLCNT           0x014918 /* Standard speed I2C Clock SCL Low Count */
+#define TXGBE_I2CICR                 0x014934 /* I2C Raw Interrupt Status */
+#define   TXGBE_I2CICR_RXFULL        ((0x1) << 2)
+#define   TXGBE_I2CICR_TXEMPTY       ((0x1) << 4)
+#define TXGBE_I2CICM                 0x014930 /* I2C Interrupt Mask */
+#define TXGBE_I2CRXTL                0x014938 /* I2C Receive FIFO Threshold */
+#define TXGBE_I2CTXTL                0x01493C /* I2C TX FIFO Threshold */
+#define TXGBE_I2CENA                 0x01496C /* I2C Enable */
+#define TXGBE_I2CSTAT                0x014970 /* I2C Status register */
+#define   TXGBE_I2CSTAT_MST          ((1U << 5))
+#define TXGBE_I2CSCLTMOUT            0x0149AC /* I2C SCL stuck at low timeout register */
+#define TXGBE_I2CSDATMOUT            0x0149B0 /* I2C SDA Stuck at Low Timeout */
+
+/* port cfg Registers */
+#define TXGBE_PORTSTAT                  0x014404
+#define   TXGBE_PORTSTAT_UP             MS(0, 0x1)
+#define   TXGBE_PORTSTAT_BW_MASK        MS(1, 0x7)
+#define     TXGBE_PORTSTAT_BW_10G       MS(1, 0x1)
+#define     TXGBE_PORTSTAT_BW_1G        MS(2, 0x1)
+#define     TXGBE_PORTSTAT_BW_100M      MS(3, 0x1)
+#define   TXGBE_PORTSTAT_ID(r)          RS(r, 8, 0x1)
+
+#define TXGBE_VXLAN                     0x014410
+#define TXGBE_VXLAN_GPE                 0x014414
+#define TXGBE_GENEVE                    0x014418
+#define TXGBE_TEREDO                    0x01441C
+#define TXGBE_TCPTIME                   0x014420
+
+/* GPIO Registers */
+#define TXGBE_GPIODATA                  0x014800
+#define   TXGBE_GPIOBIT_0      MS(0, 0x1) /* O:tx fault */
+#define   TXGBE_GPIOBIT_1      MS(1, 0x1) /* O:tx disabled */
+#define   TXGBE_GPIOBIT_2      MS(2, 0x1) /* I:sfp module absent */
+#define   TXGBE_GPIOBIT_3      MS(3, 0x1) /* I:rx signal lost */
+#define   TXGBE_GPIOBIT_4      MS(4, 0x1) /* O:rate select, 1G(0) 10G(1) */
+#define   TXGBE_GPIOBIT_5      MS(5, 0x1) /* O:rate select, 1G(0) 10G(1) */
+#define   TXGBE_GPIOBIT_6      MS(6, 0x1) /* I:ext phy interrupt */
+#define   TXGBE_GPIOBIT_7      MS(7, 0x1) /* I:fan speed alarm */
+#define TXGBE_GPIODIR                   0x014804
+#define TXGBE_GPIOCTL                   0x014808
+#define TXGBE_GPIOINTEN                 0x014830
+#define TXGBE_GPIOINTMASK               0x014834
+#define TXGBE_GPIOINTTYPE               0x014838
+#define TXGBE_GPIOINTSTAT               0x014840
+#define TXGBE_GPIOEOI                   0x01484C
+
+
+#define TXGBE_ARBPOOLIDX                0x01820C
+#define TXGBE_ARBTXRATE                 0x018404
+#define   TXGBE_ARBTXRATE_MIN(v)        LS(v, 0, 0x3FFF)
+#define   TXGBE_ARBTXRATE_MAX(v)        LS(v, 16, 0x3FFF)
+
+/* qos */
+#define TXGBE_ARBTXCTL                  0x018200
+#define   TXGBE_ARBTXCTL_RRM            MS(1, 0x1)
+#define   TXGBE_ARBTXCTL_WSP            MS(2, 0x1)
+#define   TXGBE_ARBTXCTL_DIA            MS(6, 0x1)
+#define TXGBE_ARBTXMMW                  0x018208
+
+/**************************** Receive DMA registers **************************/
+/* receive control */
+#define TXGBE_ARBRXCTL                  0x012000
+#define   TXGBE_ARBRXCTL_RRM            MS(1, 0x1)
+#define   TXGBE_ARBRXCTL_WSP            MS(2, 0x1)
+#define   TXGBE_ARBRXCTL_DIA            MS(6, 0x1)
+
+#define TXGBE_RPUP2TC                   0x019008
+#define   TXGBE_RPUP2TC_UP_SHIFT        3
+#define   TXGBE_RPUP2TC_UP_MASK         0x7
+
+/* mac switcher */
+#define TXGBE_ETHADDRL                  0x016200
+#define   TXGBE_ETHADDRL_AD0(v)         LS(v, 0, 0xFF)
+#define   TXGBE_ETHADDRL_AD1(v)         LS(v, 8, 0xFF)
+#define   TXGBE_ETHADDRL_AD2(v)         LS(v, 16, 0xFF)
+#define   TXGBE_ETHADDRL_AD3(v)         LS(v, 24, 0xFF)
+#define   TXGBE_ETHADDRL_ETAG(r)        RS(r, 0, 0x3FFF)
+#define TXGBE_ETHADDRH                  0x016204
+#define   TXGBE_ETHADDRH_AD4(v)         LS(v, 0, 0xFF)
+#define   TXGBE_ETHADDRH_AD5(v)         LS(v, 8, 0xFF)
+#define   TXGBE_ETHADDRH_AD_MASK        MS(0, 0xFFFF)
+#define   TXGBE_ETHADDRH_ETAG           MS(30, 0x1)
+#define   TXGBE_ETHADDRH_VLD            MS(31, 0x1)
+#define TXGBE_ETHADDRASSL               0x016208
+#define TXGBE_ETHADDRASSH               0x01620C
+#define TXGBE_ETHADDRIDX                0x016210
+
+/* Outmost Barrier Filters */
+#define TXGBE_MCADDRTBL(i)              (0x015200 + (i) * 4) /* 0-127 */
+#define TXGBE_UCADDRTBL(i)              (0x015400 + (i) * 4) /* 0-127 */
+#define TXGBE_VLANTBL(i)                (0x016000 + (i) * 4) /* 0-127 */
+
+#define TXGBE_MNGFLEXSEL                0x1582C
+#define TXGBE_MNGFLEXDWL(i)             (0x15A00 + ((i) * 16))
+#define TXGBE_MNGFLEXDWH(i)             (0x15A04 + ((i) * 16))
+#define TXGBE_MNGFLEXMSK(i)             (0x15A08 + ((i) * 16))
+
+#define TXGBE_LANFLEXSEL                0x15B8C
+#define TXGBE_LANFLEXDWL(i)             (0x15C00 + ((i) * 16))
+#define TXGBE_LANFLEXDWH(i)             (0x15C04 + ((i) * 16))
+#define TXGBE_LANFLEXMSK(i)             (0x15C08 + ((i) * 16))
+#define TXGBE_LANFLEXCTL                0x15CFC
+
+/* ipsec */
+#define TXGBE_IPSRXIDX                  0x017100
+#define   TXGBE_IPSRXIDX_ENA            MS(0, 0x1)
+#define   TXGBE_IPSRXIDX_TB_MASK        MS(1, 0x3)
+#define   TXGBE_IPSRXIDX_TB_IP          LS(1, 1, 0x3)
+#define   TXGBE_IPSRXIDX_TB_SPI         LS(2, 1, 0x3)
+#define   TXGBE_IPSRXIDX_TB_KEY         LS(3, 1, 0x3)
+#define   TXGBE_IPSRXIDX_TBIDX(v)       LS(v, 3, 0x3FF)
+#define   TXGBE_IPSRXIDX_READ           MS(30, 0x1)
+#define   TXGBE_IPSRXIDX_WRITE          MS(31, 0x1)
+#define TXGBE_IPSRXADDR(i)              (0x017104 + (i) * 4)
+
+#define TXGBE_IPSRXSPI                  0x017114
+#define TXGBE_IPSRXADDRIDX              0x017118
+#define TXGBE_IPSRXKEY(i)               (0x01711C + (i) * 4)
+#define TXGBE_IPSRXSALT                 0x01712C
+#define TXGBE_IPSRXMODE                 0x017130
+#define   TXGBE_IPSRXMODE_IPV6          0x00000010
+#define   TXGBE_IPSRXMODE_DEC           0x00000008
+#define   TXGBE_IPSRXMODE_ESP           0x00000004
+#define   TXGBE_IPSRXMODE_AH            0x00000002
+#define   TXGBE_IPSRXMODE_VLD           0x00000001
+#define TXGBE_IPSTXIDX                  0x01D100
+#define   TXGBE_IPSTXIDX_ENA            MS(0, 0x1)
+#define   TXGBE_IPSTXIDX_SAIDX(v)       LS(v, 3, 0x3FF)
+#define   TXGBE_IPSTXIDX_READ           MS(30, 0x1)
+#define   TXGBE_IPSTXIDX_WRITE          MS(31, 0x1)
+#define TXGBE_IPSTXSALT                 0x01D104
+#define TXGBE_IPSTXKEY(i)               (0x01D108 + (i) * 4)
+
+#define TXGBE_MACTXCFG                  0x011000
+#define   TXGBE_MACTXCFG_TE             MS(0, 0x1)
+#define   TXGBE_MACTXCFG_SPEED_MASK     MS(29, 0x3)
+#define   TXGBE_MACTXCFG_SPEED(v)       LS(v, 29, 0x3)
+#define   TXGBE_MACTXCFG_SPEED_10G      LS(0, 29, 0x3)
+#define   TXGBE_MACTXCFG_SPEED_1G       LS(3, 29, 0x3)
+
+#define TXGBE_ISBADDRL                  0x000160
+#define TXGBE_ISBADDRH                  0x000164
+
+#define NVM_OROM_OFFSET		0x17
+#define NVM_OROM_BLK_LOW	0x83
+#define NVM_OROM_BLK_HI		0x84
+#define NVM_OROM_PATCH_MASK	0xFF
+#define NVM_OROM_SHIFT		8
+#define NVM_VER_MASK		0x00FF /* version mask */
+#define NVM_VER_SHIFT		8     /* version bit shift */
+#define NVM_OEM_PROD_VER_PTR	0x1B  /* OEM Product version block pointer */
+#define NVM_OEM_PROD_VER_CAP_OFF 0x1  /* OEM Product version format offset */
+#define NVM_OEM_PROD_VER_OFF_L	0x2   /* OEM Product version offset low */
+#define NVM_OEM_PROD_VER_OFF_H	0x3   /* OEM Product version offset high */
+#define NVM_OEM_PROD_VER_CAP_MASK 0xF /* OEM Product version cap mask */
+#define NVM_OEM_PROD_VER_MOD_LEN 0x3  /* OEM Product version module length */
+#define NVM_ETK_OFF_LOW		0x2D  /* version low order word */
+#define NVM_ETK_OFF_HI		0x2E  /* version high order word */
+#define NVM_ETK_SHIFT		16    /* high version word shift */
+#define NVM_VER_INVALID		0xFFFF
+#define NVM_ETK_VALID		0x8000
+#define NVM_INVALID_PTR		0xFFFF
+#define NVM_VER_SIZE		32    /* version string size */
+
+#define TXGBE_REG_RSSTBL   TXGBE_RSSTBL(0)
+#define TXGBE_REG_RSSKEY   TXGBE_RSSKEY(0)
+
+/**
+ * register operations
+ **/
+#define TXGBE_REG_READ32(addr)               rte_read32(addr)
+#define TXGBE_REG_READ32_RELAXED(addr)       rte_read32_relaxed(addr)
+#define TXGBE_REG_WRITE32(addr, val)         rte_write32(val, addr)
+#define TXGBE_REG_WRITE32_RELAXED(addr, val) rte_write32_relaxed(val, addr)
+
+#define TXGBE_DEAD_READ_REG         0xdeadbeefU
+#define TXGBE_FAILED_READ_REG       0xffffffffU
+#define TXGBE_REG_ADDR(hw, reg) \
+	((volatile u32 *)((char *)(hw)->hw_addr + (reg)))
+
+static inline u32
+txgbe_get32(volatile u32 *addr)
+{
+	u32 val = TXGBE_REG_READ32(addr);
+	return rte_le_to_cpu_32(val);
+}
+
+static inline void
+txgbe_set32(volatile u32 *addr, u32 val)
+{
+	val = rte_cpu_to_le_32(val);
+	TXGBE_REG_WRITE32(addr, val);
+}
+
+static inline u32
+txgbe_get32_masked(volatile u32 *addr, u32 mask)
+{
+	u32 val = txgbe_get32(addr);
+	val &= mask;
+	return val;
+}
+
+static inline void
+txgbe_set32_masked(volatile u32 *addr, u32 mask, u32 field)
+{
+	u32 val = txgbe_get32(addr);
+	val = ((val & ~mask) | (field & mask));
+	txgbe_set32(addr, val);
+}
+
+static inline u32
+txgbe_get32_relaxed(volatile u32 *addr)
+{
+	u32 val = TXGBE_REG_READ32_RELAXED(addr);
+	return rte_le_to_cpu_32(val);
+}
+
+static inline void
+txgbe_set32_relaxed(volatile u32 *addr, u32 val)
+{
+	val = rte_cpu_to_le_32(val);
+	TXGBE_REG_WRITE32_RELAXED(addr, val);
+}
+
+static inline u32
+rd32(struct txgbe_hw *hw, u32 reg)
+{
+	if (reg == TXGBE_REG_DUMMY)
+		return 0;
+	return txgbe_get32(TXGBE_REG_ADDR(hw, reg));
+}
+
+static inline void
+wr32(struct txgbe_hw *hw, u32 reg, u32 val)
+{
+	if (reg == TXGBE_REG_DUMMY)
+		return;
+	txgbe_set32(TXGBE_REG_ADDR(hw, reg), val);
+}
+
+static inline u32
+rd32m(struct txgbe_hw *hw, u32 reg, u32 mask)
+{
+	u32 val = rd32(hw, reg);
+	val &= mask;
+	return val;
+}
+
+static inline void
+wr32m(struct txgbe_hw *hw, u32 reg, u32 mask, u32 field)
+{
+	u32 val = rd32(hw, reg);
+	val = ((val & ~mask) | (field & mask));
+	wr32(hw, reg, val);
+}
+
+static inline u64
+rd64(struct txgbe_hw *hw, u32 reg)
+{
+	u64 lsb = rd32(hw, reg);
+	u64 msb = rd32(hw, reg + 4);
+	return (lsb | msb << 32);
+}
+
+static inline void
+wr64(struct txgbe_hw *hw, u32 reg, u64 val)
+{
+	wr32(hw, reg, (u32)val);
+	wr32(hw, reg + 4, (u32)(val >> 32));
+}
+
+/* poll register */
+static inline u32
+po32m(struct txgbe_hw *hw, u32 reg, u32 mask, u32 expect, u32 *actual,
+	u32 loop, u32 slice)
+{
+	bool usec = true;
+	u32 value = 0, all = 0;
+
+	if (slice > 1000 * MAX_UDELAY_MS) {
+		usec = false;
+		slice = (slice + 500) / 1000;
+	}
+
+	do {
+		all |= rd32(hw, reg);
+		value |= mask & all;
+		if (value == expect)
+			break;
+
+		usec ? usec_delay(slice) : msec_delay(slice);
+	} while (--loop > 0);
+
+	if (actual)
+		*actual = all;
+
+	return loop;
+}
+
+/* flush all write operations */
+#define txgbe_flush(hw) rd32(hw, 0x00100C)
+
+#define rd32a(hw, reg, idx) ( \
+	rd32((hw), (reg) + ((idx) << 2)))
+#define wr32a(hw, reg, idx, val) \
+	wr32((hw), (reg) + ((idx) << 2), (val))
+
+#define rd32at(hw, reg, idx) \
+		rd32a(hw, txgbe_map_reg(hw, reg), idx)
+#define wr32at(hw, reg, idx, val) \
+		wr32a(hw, txgbe_map_reg(hw, reg), idx, val)
+
+#define rd32w(hw, reg, mask, slice) do { \
+	rd32((hw), reg); \
+	po32m((hw), reg, mask, mask, NULL, 5, slice); \
+} while (0)
+
+#define wr32w(hw, reg, val, mask, slice) do { \
+	wr32((hw), reg, val); \
+	po32m((hw), reg, mask, mask, NULL, 5, slice); \
+} while (0)
+
+#define TXGBE_XPCS_IDAADDR    0x13000
+#define TXGBE_XPCS_IDADATA    0x13004
+#define TXGBE_EPHY_IDAADDR    0x13008
+#define TXGBE_EPHY_IDADATA    0x1300C
+static inline u32
+rd32_epcs(struct txgbe_hw *hw, u32 addr)
+{
+	u32 data;
+	wr32(hw, TXGBE_XPCS_IDAADDR, addr);
+	data = rd32(hw, TXGBE_XPCS_IDADATA);
+	return data;
+}
+
+static inline void
+wr32_epcs(struct txgbe_hw *hw, u32 addr, u32 data)
+{
+	wr32(hw, TXGBE_XPCS_IDAADDR, addr);
+	wr32(hw, TXGBE_XPCS_IDADATA, data);
+}
+
+static inline u32
+rd32_ephy(struct txgbe_hw *hw, u32 addr)
+{
+	u32 data;
+	wr32(hw, TXGBE_EPHY_IDAADDR, addr);
+	data = rd32(hw, TXGBE_EPHY_IDADATA);
+	return data;
+}
+
+static inline void
+wr32_ephy(struct txgbe_hw *hw, u32 addr, u32 data)
+{
+	wr32(hw, TXGBE_EPHY_IDAADDR, addr);
+	wr32(hw, TXGBE_EPHY_IDADATA, data);
+}
+
+#endif /* _TXGBE_REGS_H_ */
diff --git a/drivers/net/txgbe/base/txgbe_type.h b/drivers/net/txgbe/base/txgbe_type.h
index 8ed324a1b..ad4eba21a 100644
--- a/drivers/net/txgbe/base/txgbe_type.h
+++ b/drivers/net/txgbe/base/txgbe_type.h
@@ -5,10 +5,125 @@
 #ifndef _TXGBE_TYPE_H_
 #define _TXGBE_TYPE_H_
 
+#define TXGBE_ALIGN				128 /* as intel did */
+
+#include "txgbe_osdep.h"
 #include "txgbe_devids.h"
 
+enum txgbe_mac_type {
+	txgbe_mac_unknown = 0,
+	txgbe_mac_raptor,
+	txgbe_mac_raptor_vf,
+	txgbe_num_macs
+};
+
+enum txgbe_phy_type {
+	txgbe_phy_unknown = 0,
+	txgbe_phy_none,
+	txgbe_phy_tn,
+	txgbe_phy_aq,
+	txgbe_phy_ext_1g_t,
+	txgbe_phy_cu_mtd,
+	txgbe_phy_cu_unknown,
+	txgbe_phy_qt,
+	txgbe_phy_xaui,
+	txgbe_phy_nl,
+	txgbe_phy_sfp_tyco_passive,
+	txgbe_phy_sfp_unknown_passive,
+	txgbe_phy_sfp_unknown_active,
+	txgbe_phy_sfp_avago,
+	txgbe_phy_sfp_ftl,
+	txgbe_phy_sfp_ftl_active,
+	txgbe_phy_sfp_unknown,
+	txgbe_phy_sfp_intel,
+	txgbe_phy_qsfp_unknown_passive,
+	txgbe_phy_qsfp_unknown_active,
+	txgbe_phy_qsfp_intel,
+	txgbe_phy_qsfp_unknown,
+	txgbe_phy_sfp_unsupported, /* Enforce bit set with unsupported module */
+	txgbe_phy_sgmii,
+	txgbe_phy_fw,
+	txgbe_phy_generic
+};
+
+/*
+ * SFP+ module type IDs:
+ *
+ * ID	Module Type
+ * =============
+ * 0	SFP_DA_CU
+ * 1	SFP_SR
+ * 2	SFP_LR
+ * 3	SFP_DA_CU_CORE0 - chip-specific
+ * 4	SFP_DA_CU_CORE1 - chip-specific
+ * 5	SFP_SR/LR_CORE0 - chip-specific
+ * 6	SFP_SR/LR_CORE1 - chip-specific
+ */
+enum txgbe_sfp_type {
+	txgbe_sfp_type_unknown = 0,
+	txgbe_sfp_type_da_cu,
+	txgbe_sfp_type_sr,
+	txgbe_sfp_type_lr,
+	txgbe_sfp_type_da_cu_core0,
+	txgbe_sfp_type_da_cu_core1,
+	txgbe_sfp_type_srlr_core0,
+	txgbe_sfp_type_srlr_core1,
+	txgbe_sfp_type_da_act_lmt_core0,
+	txgbe_sfp_type_da_act_lmt_core1,
+	txgbe_sfp_type_1g_cu_core0,
+	txgbe_sfp_type_1g_cu_core1,
+	txgbe_sfp_type_1g_sx_core0,
+	txgbe_sfp_type_1g_sx_core1,
+	txgbe_sfp_type_1g_lx_core0,
+	txgbe_sfp_type_1g_lx_core1,
+	txgbe_sfp_type_not_present = 0xFFFE,
+	txgbe_sfp_type_not_known = 0xFFFF
+};
+
+enum txgbe_media_type {
+	txgbe_media_type_unknown = 0,
+	txgbe_media_type_fiber,
+	txgbe_media_type_fiber_qsfp,
+	txgbe_media_type_copper,
+	txgbe_media_type_backplane,
+	txgbe_media_type_cx4,
+	txgbe_media_type_virtual
+};
+
+struct txgbe_hw_stats {
+	u64 counter;
+};
+
+struct txgbe_mac_info {
+	enum txgbe_mac_type type;
+	u8 addr[ETH_ADDR_LEN];
+	u8 perm_addr[ETH_ADDR_LEN];
+	u8 san_addr[ETH_ADDR_LEN];
+
+	u32 num_rar_entries;
+};
+
+struct txgbe_phy_info {
+	enum txgbe_phy_type type;
+	enum txgbe_sfp_type sfp_type;
+};
+
 struct txgbe_hw {
+	void IOMEM *hw_addr;
 	void *back;
+	struct txgbe_mac_info mac;
+	struct txgbe_phy_info phy;
+
+	u16 device_id;
+	u16 vendor_id;
+	u16 subsystem_device_id;
+	u16 subsystem_vendor_id;
+
+	bool allow_unsupported_sfp;
+
+	uint64_t isb_dma;
+	void IOMEM *isb_mem;
 };
 
+#include "txgbe_regs.h"
+
 #endif /* _TXGBE_TYPE_H_ */
diff --git a/drivers/net/txgbe/meson.build b/drivers/net/txgbe/meson.build
index f45b04b1c..88b05ad83 100644
--- a/drivers/net/txgbe/meson.build
+++ b/drivers/net/txgbe/meson.build
@@ -8,6 +8,8 @@ objs = [base_objs]
 
 sources = files(
 	'txgbe_ethdev.c',
+	'txgbe_pf.c',
+	'txgbe_rxtx.c',
 	'txgbe_vf_representor.c',
 )
 
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index 86d2b9064..165132908 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -33,6 +33,10 @@
 #include "txgbe_logs.h"
 #include "base/txgbe.h"
 #include "txgbe_ethdev.h"
+#include "txgbe_rxtx.h"
+
+static void txgbe_dev_close(struct rte_eth_dev *dev);
+static int txgbe_dev_stats_reset(struct rte_eth_dev *dev);
 
 /*
  * The set of PCI devices this driver supports
@@ -43,11 +47,176 @@ static const struct rte_pci_id pci_id_txgbe_map[] = {
 	{ .vendor_id = 0, /* sentinel */ },
 };
 
+static const struct eth_dev_ops txgbe_eth_dev_ops;
+
+static inline int
+txgbe_is_sfp(struct txgbe_hw *hw)
+{
+	RTE_SET_USED(hw);
+	return 0;
+}
+
+static inline int32_t
+txgbe_pf_reset_hw(struct txgbe_hw *hw)
+{
+	RTE_SET_USED(hw);
+	return 0;
+}
+
+static inline void
+txgbe_enable_intr(struct rte_eth_dev *dev)
+{
+	RTE_SET_USED(dev);
+}
+
+static void
+txgbe_disable_intr(struct txgbe_hw *hw)
+{
+	RTE_SET_USED(hw);
+}
+
 static int
 eth_txgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
 {
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+	struct txgbe_hw *hw = TXGBE_DEV_HW(eth_dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+	const struct rte_memzone *mz;
+	uint32_t ctrl_ext;
+	uint16_t csum;
+	int err;
+
+	PMD_INIT_FUNC_TRACE();
+
+	eth_dev->dev_ops = &txgbe_eth_dev_ops;
+
+	/*
+	 * For secondary processes, we don't initialise any further as primary
+	 * has already done this work. Only check we don't need a different
+	 * RX and TX function.
+	 */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		struct txgbe_tx_queue *txq;
+		/* TX queue function in primary, set by last queue initialized
+		 * Tx queue may not have been initialized by primary process
+		 */
+		if (eth_dev->data->tx_queues) {
+			txq = eth_dev->data->tx_queues[eth_dev->data->nb_tx_queues - 1];
+			txgbe_set_tx_function(eth_dev, txq);
+		} else {
+			/* Use default TX function if we get here */
+			PMD_INIT_LOG(NOTICE, "No TX queues configured yet. "
+				     "Using default TX function.");
+		}
+
+		txgbe_set_rx_function(eth_dev);
+
+		return 0;
+	}
+
+	rte_eth_copy_pci_info(eth_dev, pci_dev);
+
+	/* Vendor and Device ID need to be set before init of shared code */
+	hw->device_id = pci_dev->id.device_id;
+	hw->vendor_id = pci_dev->id.vendor_id;
+	hw->hw_addr = (void *)pci_dev->mem_resource[0].addr;
+	hw->allow_unsupported_sfp = 1;
+
+	/* Reserve memory for interrupt status block */
+	mz = rte_eth_dma_zone_reserve(eth_dev, "txgbe_driver", -1,
+		16, TXGBE_ALIGN, SOCKET_ID_ANY);
+	if (mz == NULL)
+		return -ENOMEM;
+	hw->isb_dma = TMZ_PADDR(mz);
+	hw->isb_mem = TMZ_VADDR(mz);
+
+	/* Initialize the shared code (base driver) */
+	err = txgbe_init_shared_code(hw);
+	if (err != 0) {
+		PMD_INIT_LOG(ERR, "Shared code init failed: %d", err);
+		return -EIO;
+	}
+
+	err = txgbe_init_eeprom_params(hw);
+	if (err != 0) {
+		PMD_INIT_LOG(ERR, "The EEPROM init failed: %d", err);
+		return -EIO;
+	}
+
+	/* Make sure we have a good EEPROM before we read from it */
+	err = txgbe_validate_eeprom_checksum(hw, &csum);
+	if (err != 0) {
+		PMD_INIT_LOG(ERR, "The EEPROM checksum is not valid: %d", err);
+		return -EIO;
+	}
 
-	RTE_SET_USED(eth_dev);
+	err = txgbe_init_hw(hw);
+	if (err != 0) {
+		PMD_INIT_LOG(ERR, "Hardware initialization failed: %d", err);
+		return -EIO;
+	}
+
+	/* Reset the hw statistics */
+	txgbe_dev_stats_reset(eth_dev);
+
+	/* disable interrupt */
+	txgbe_disable_intr(hw);
+
+	/* Allocate memory for storing MAC addresses */
+	eth_dev->data->mac_addrs = rte_zmalloc("txgbe", RTE_ETHER_ADDR_LEN *
+					       hw->mac.num_rar_entries, 0);
+	if (eth_dev->data->mac_addrs == NULL) {
+		PMD_INIT_LOG(ERR,
+			     "Failed to allocate %u bytes needed to store "
+			     "MAC addresses",
+			     RTE_ETHER_ADDR_LEN * hw->mac.num_rar_entries);
+		return -ENOMEM;
+	}
+
+	/* Copy the permanent MAC address */
+	rte_ether_addr_copy((struct rte_ether_addr *)hw->mac.perm_addr,
+			&eth_dev->data->mac_addrs[0]);
+
+	/* Allocate memory for storing hash filter MAC addresses */
+	eth_dev->data->hash_mac_addrs = rte_zmalloc("txgbe", RTE_ETHER_ADDR_LEN *
+						    TXGBE_VMDQ_NUM_UC_MAC, 0);
+	if (eth_dev->data->hash_mac_addrs == NULL) {
+		PMD_INIT_LOG(ERR,
+			     "Failed to allocate %d bytes needed to store MAC addresses",
+			     RTE_ETHER_ADDR_LEN * TXGBE_VMDQ_NUM_UC_MAC);
+		return -ENOMEM;
+	}
+
+	/* Pass the information to the rte_eth_dev_close() that it should also
+	 * release the private port resources.
+	 */
+	eth_dev->data->dev_flags |= RTE_ETH_DEV_CLOSE_REMOVE;
+
+	/* initialize PF if max_vfs not zero */
+	txgbe_pf_host_init(eth_dev);
+
+	ctrl_ext = rd32(hw, TXGBE_PORTCTL);
+	/* let hardware know driver is loaded */
+	ctrl_ext |= TXGBE_PORTCTL_DRVLOAD;
+	/* Set PF Reset Done bit so PF/VF Mail Ops can work */
+	ctrl_ext |= TXGBE_PORTCTL_RSTDONE;
+	wr32(hw, TXGBE_PORTCTL, ctrl_ext);
+	txgbe_flush(hw);
+
+	if (txgbe_is_sfp(hw) && hw->phy.sfp_type != txgbe_sfp_type_not_present)
+		PMD_INIT_LOG(DEBUG, "MAC: %d, PHY: %d, SFP+: %d",
+			     (int)hw->mac.type, (int)hw->phy.type,
+			     (int)hw->phy.sfp_type);
+	else
+		PMD_INIT_LOG(DEBUG, "MAC: %d, PHY: %d",
+			     (int)hw->mac.type, (int)hw->phy.type);
+
+	PMD_INIT_LOG(DEBUG, "port %d vendorID=0x%x deviceID=0x%x",
+		     eth_dev->data->port_id, pci_dev->id.vendor_id,
+		     pci_dev->id.device_id);
+
+	/* enable uio/vfio intr/eventfd mapping */
+	rte_intr_enable(intr_handle);
+
+	/* enable support intr */
+	txgbe_enable_intr(eth_dev);
 
 	return 0;
 }
@@ -55,8 +224,12 @@ eth_txgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
 static int
 eth_txgbe_dev_uninit(struct rte_eth_dev *eth_dev)
 {
+	PMD_INIT_FUNC_TRACE();
+
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return 0;
 
-	RTE_SET_USED(eth_dev);
+	txgbe_dev_close(eth_dev);
 
 	return 0;
 }
@@ -145,6 +318,85 @@ static struct rte_pci_driver rte_txgbe_pmd = {
 	.remove = eth_txgbe_pci_remove,
 };
 
+static int
+txgbe_dev_start(struct rte_eth_dev *dev)
+{
+	RTE_SET_USED(dev);
+	return 0;
+}
+
+static void
+txgbe_dev_stop(struct rte_eth_dev *dev)
+{
+	RTE_SET_USED(dev);
+}
+
+/*
+ * Reset and stop device.
+ */
+static void
+txgbe_dev_close(struct rte_eth_dev *dev)
+{
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+
+	PMD_INIT_FUNC_TRACE();
+
+	txgbe_pf_reset_hw(hw);
+
+	txgbe_dev_stop(dev);
+
+	txgbe_dev_free_queues(dev);
+
+	/* reprogram the RAR[0] in case user changed it. */
+	txgbe_set_rar(hw, 0, hw->mac.addr, 0, true);
+
+	dev->dev_ops = NULL;
+
+	/* disable uio intr before callback unregister */
+	rte_intr_disable(intr_handle);
+
+	/* uninitialize PF if max_vfs not zero */
+	txgbe_pf_host_uninit(dev);
+
+	rte_free(dev->data->mac_addrs);
+	dev->data->mac_addrs = NULL;
+
+	rte_free(dev->data->hash_mac_addrs);
+	dev->data->hash_mac_addrs = NULL;
+}
+
+static int
+txgbe_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+{
+	RTE_SET_USED(dev);
+	RTE_SET_USED(stats);
+
+	return 0;
+}
+
+static int
+txgbe_dev_stats_reset(struct rte_eth_dev *dev)
+{
+	struct txgbe_hw_stats *hw_stats = TXGBE_DEV_STATS(dev);
+
+	/* HW registers are cleared on read */
+	txgbe_dev_stats_get(dev, NULL);
+
+	/* Reset software totals */
+	memset(hw_stats, 0, sizeof(*hw_stats));
+
+	return 0;
+}
+
+static const struct eth_dev_ops txgbe_eth_dev_ops = {
+	.dev_start                  = txgbe_dev_start,
+	.dev_stop                   = txgbe_dev_stop,
+	.dev_close                  = txgbe_dev_close,
+	.stats_get                  = txgbe_dev_stats_get,
+	.stats_reset                = txgbe_dev_stats_reset,
+};
 
 RTE_PMD_REGISTER_PCI(net_txgbe, rte_txgbe_pmd);
 RTE_PMD_REGISTER_PCI_TABLE(net_txgbe, pci_id_txgbe_map);
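The `txgbe_eth_dev_ops` table above follows ethdev's standard dispatch pattern: the PMD fills a `const` table of function pointers, and the generic layer calls through it. A miniature sketch of that pattern, with simplified stand-in types rather than the real `rte_eth_dev` structures:

```c
#include <assert.h>

/* Miniature version of the eth_dev_ops pattern: a const table of
 * function pointers that a generic layer dispatches through.
 * Names are illustrative, not the real rte_eth_dev API. */
struct mini_dev;

struct mini_dev_ops {
	int (*dev_start)(struct mini_dev *dev);
	void (*dev_stop)(struct mini_dev *dev);
};

struct mini_dev {
	const struct mini_dev_ops *ops;
	int running;
};

static int mini_start(struct mini_dev *dev) { dev->running = 1; return 0; }
static void mini_stop(struct mini_dev *dev) { dev->running = 0; }

static const struct mini_dev_ops mini_ops = {
	.dev_start = mini_start,
	.dev_stop  = mini_stop,
};

/* Generic layer: dispatch through the table, much as
 * rte_eth_dev_start() dispatches to dev_ops->dev_start. */
static int mini_dev_start(struct mini_dev *dev)
{
	return dev->ops->dev_start(dev);
}
```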
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index 8dbc4a64a..e6d533141 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -23,6 +23,7 @@ struct txgbe_vf_info {
  */
 struct txgbe_adapter {
 	struct txgbe_hw             hw;
+	struct txgbe_hw_stats       stats;
 	struct txgbe_vf_info        *vfdata;
 };
 
@@ -35,7 +36,29 @@ struct txgbe_vf_representor {
 int txgbe_vf_representor_init(struct rte_eth_dev *ethdev, void *init_params);
 int txgbe_vf_representor_uninit(struct rte_eth_dev *ethdev);
 
+#define TXGBE_DEV_ADAPTER(dev) \
+	((struct txgbe_adapter *)(dev)->data->dev_private)
+
+#define TXGBE_DEV_HW(dev) \
+	(&((struct txgbe_adapter *)(dev)->data->dev_private)->hw)
+
+#define TXGBE_DEV_STATS(dev) \
+	(&((struct txgbe_adapter *)(dev)->data->dev_private)->stats)
+
 #define TXGBE_DEV_VFDATA(dev) \
 	(&((struct txgbe_adapter *)(dev)->data->dev_private)->vfdata)
 
+/*
+ * RX/TX function prototypes
+ */
+void txgbe_dev_free_queues(struct rte_eth_dev *dev);
+
+/*
+ * misc function prototypes
+ */
+void txgbe_pf_host_init(struct rte_eth_dev *eth_dev);
+
+void txgbe_pf_host_uninit(struct rte_eth_dev *eth_dev);
+
+#define TXGBE_VMDQ_NUM_UC_MAC         4096 /* Maximum nb. of UC MAC addr. */
 #endif /* _TXGBE_ETHDEV_H_ */
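The `TXGBE_DEV_HW()`/`TXGBE_DEV_STATS()` macros above all follow one idiom: `dev_private` holds a single `txgbe_adapter`, and each macro casts it and takes the address of one member. A standalone mock of the idiom, with simplified types in place of the real ones:

```c
#include <assert.h>

/* Mock of the TXGBE_DEV_HW()/TXGBE_DEV_STATS() accessor style:
 * dev_private points at one adapter struct, and each macro casts
 * and picks out a member. Types are simplified stand-ins. */
struct mock_hw { int mac_type; };
struct mock_stats { unsigned long rx_packets; };

struct mock_adapter {
	struct mock_hw    hw;
	struct mock_stats stats;
};

struct mock_eth_dev_data { void *dev_private; };
struct mock_eth_dev { struct mock_eth_dev_data *data; };

#define MOCK_DEV_HW(dev) \
	(&((struct mock_adapter *)(dev)->data->dev_private)->hw)
#define MOCK_DEV_STATS(dev) \
	(&((struct mock_adapter *)(dev)->data->dev_private)->stats)
```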
diff --git a/drivers/net/txgbe/txgbe_pf.c b/drivers/net/txgbe/txgbe_pf.c
new file mode 100644
index 000000000..0fac19c5d
--- /dev/null
+++ b/drivers/net/txgbe/txgbe_pf.c
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2015-2020
+ */
+
+#include <stdio.h>
+#include <errno.h>
+#include <stdint.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <stdarg.h>
+#include <inttypes.h>
+
+#include <rte_interrupts.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_eal.h>
+#include <rte_ether.h>
+#include <rte_ethdev_driver.h>
+#include <rte_memcpy.h>
+#include <rte_malloc.h>
+#include <rte_random.h>
+
+#include "base/txgbe.h"
+#include "txgbe_ethdev.h"
+
+void txgbe_pf_host_init(struct rte_eth_dev *eth_dev)
+{
+	RTE_SET_USED(eth_dev);
+}
+
+void txgbe_pf_host_uninit(struct rte_eth_dev *eth_dev)
+{
+	RTE_SET_USED(eth_dev);
+}
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
new file mode 100644
index 000000000..8236807d1
--- /dev/null
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2015-2020
+ */
+
+#include <sys/queue.h>
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <errno.h>
+#include <stdint.h>
+#include <stdarg.h>
+#include <unistd.h>
+#include <inttypes.h>
+
+#include <rte_ethdev.h>
+
+#include "txgbe_logs.h"
+#include "base/txgbe.h"
+#include "txgbe_ethdev.h"
+#include "txgbe_rxtx.h"
+
+/* Takes an ethdev and a queue and sets up the TX function to be used based on
+ * the queue parameters. Used in tx_queue_setup by the primary process and
+ * then in dev_init by a secondary process when attaching to an existing ethdev.
+ */
+void __rte_cold
+txgbe_set_tx_function(struct rte_eth_dev *dev, struct txgbe_tx_queue *txq)
+{
+	RTE_SET_USED(dev);
+	RTE_SET_USED(txq);
+}
+
+void __rte_cold
+txgbe_set_rx_function(struct rte_eth_dev *dev)
+{
+	RTE_SET_USED(dev);
+}
+
+void
+txgbe_dev_free_queues(struct rte_eth_dev *dev)
+{
+	RTE_SET_USED(dev);
+}
+
diff --git a/drivers/net/txgbe/txgbe_rxtx.h b/drivers/net/txgbe/txgbe_rxtx.h
new file mode 100644
index 000000000..c5e2e56d3
--- /dev/null
+++ b/drivers/net/txgbe/txgbe_rxtx.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2015-2020
+ */
+
+#ifndef _TXGBE_RXTX_H_
+#define _TXGBE_RXTX_H_
+
+/**
+ * Structure associated with each TX queue.
+ */
+struct txgbe_tx_queue {
+	uint64_t            tx_ring_phys_addr; /**< TX ring DMA address. */
+};
+
+/* Takes an ethdev and a queue and sets up the TX function to be used based on
+ * the queue parameters. Used in tx_queue_setup by the primary process and
+ * then in dev_init by a secondary process when attaching to an existing ethdev.
+ */
+void txgbe_set_tx_function(struct rte_eth_dev *dev, struct txgbe_tx_queue *txq);
+
+void txgbe_set_rx_function(struct rte_eth_dev *dev);
+
+
+#endif /* _TXGBE_RXTX_H_ */
-- 
2.18.4

^ permalink raw reply	[flat|nested] 49+ messages in thread

* [dpdk-dev] [PATCH v1 04/42] net/txgbe: add error types and dummy function
  2020-09-01 11:50 [dpdk-dev] [PATCH v1 01/42] net/txgbe: add build and doc infrastructure Jiawen Wu
  2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 02/42] net/txgbe: add ethdev probe and remove Jiawen Wu
  2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 03/42] net/txgbe: add device init and uninit Jiawen Wu
@ 2020-09-01 11:50 ` Jiawen Wu
  2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 05/42] net/txgbe: add mac type and HW ops dummy Jiawen Wu
                   ` (38 subsequent siblings)
  41 siblings, 0 replies; 49+ messages in thread
From: Jiawen Wu @ 2020-09-01 11:50 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Add error types and dummy functions, so that hardware ops left unset
return TXGBE_ERR_OPS_DUMMY instead of crashing through a NULL pointer.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/base/txgbe_dummy.h  | 739 ++++++++++++++++++++++++++
 drivers/net/txgbe/base/txgbe_status.h | 122 +++++
 drivers/net/txgbe/base/txgbe_type.h   | 263 ++++++++-
 3 files changed, 1123 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/txgbe/base/txgbe_dummy.h
 create mode 100644 drivers/net/txgbe/base/txgbe_status.h

diff --git a/drivers/net/txgbe/base/txgbe_dummy.h b/drivers/net/txgbe/base/txgbe_dummy.h
new file mode 100644
index 000000000..2039fa596
--- /dev/null
+++ b/drivers/net/txgbe/base/txgbe_dummy.h
@@ -0,0 +1,739 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2015-2020
+ */
+
+#ifndef _TXGBE_TYPE_DUMMY_H_
+#define _TXGBE_TYPE_DUMMY_H_
+
+#ifdef TUP
+#elif defined(__GNUC__)
+#define TUP(x) x##_unused __attribute__((unused))
+#elif defined(__LCLINT__)
+#define TUP(x) x /*@unused@*/
+#else
+#define TUP(x) x
+#endif /*TUP*/
+#define TUP0 TUP(p0)
+#define TUP1 TUP(p1)
+#define TUP2 TUP(p2)
+#define TUP3 TUP(p3)
+#define TUP4 TUP(p4)
+#define TUP5 TUP(p5)
+#define TUP6 TUP(p6)
+#define TUP7 TUP(p7)
+#define TUP8 TUP(p8)
+#define TUP9 TUP(p9)
+
+/* struct txgbe_bus_operations */
+static inline s32 txgbe_bus_get_bus_info_dummy(struct txgbe_hw *TUP0)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline void txgbe_bus_set_lan_id_dummy(struct txgbe_hw *TUP0)
+{
+	return;
+}
+/* struct txgbe_rom_operations */
+static inline s32 txgbe_rom_init_params_dummy(struct txgbe_hw *TUP0)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_rom_read16_dummy(struct txgbe_hw *TUP0, u32 TUP1, u16 *TUP2)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_rom_readw_buffer_dummy(struct txgbe_hw *TUP0, u32 TUP1, u32 TUP2, void *TUP3)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_rom_readw_sw_dummy(struct txgbe_hw *TUP0, u32 TUP1, u16 *TUP2)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_rom_read32_dummy(struct txgbe_hw *TUP0, u32 TUP1, u32 *TUP2)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_rom_read_buffer_dummy(struct txgbe_hw *TUP0, u32 TUP1, u32 TUP2, void *TUP3)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_rom_write16_dummy(struct txgbe_hw *TUP0, u32 TUP1, u16 TUP2)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_rom_writew_buffer_dummy(struct txgbe_hw *TUP0, u32 TUP1, u32 TUP2, void *TUP3)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_rom_writew_sw_dummy(struct txgbe_hw *TUP0, u32 TUP1, u16 TUP2)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_rom_write32_dummy(struct txgbe_hw *TUP0, u32 TUP1, u32 TUP2)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_rom_write_buffer_dummy(struct txgbe_hw *TUP0, u32 TUP1, u32 TUP2, void *TUP3)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_rom_validate_checksum_dummy(struct txgbe_hw *TUP0, u16 *TUP1)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_rom_update_checksum_dummy(struct txgbe_hw *TUP0)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_rom_calc_checksum_dummy(struct txgbe_hw *TUP0)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+
+/* struct txgbe_mac_operations */
+static inline s32 txgbe_mac_init_hw_dummy(struct txgbe_hw *TUP0)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mac_reset_hw_dummy(struct txgbe_hw *TUP0)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mac_start_hw_dummy(struct txgbe_hw *TUP0)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mac_stop_hw_dummy(struct txgbe_hw *TUP0)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mac_clear_hw_cntrs_dummy(struct txgbe_hw *TUP0)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline void txgbe_mac_enable_relaxed_ordering_dummy(struct txgbe_hw *TUP0)
+{
+	return;
+}
+static inline u64 txgbe_mac_get_supported_physical_layer_dummy(struct txgbe_hw *TUP0)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mac_get_mac_addr_dummy(struct txgbe_hw *TUP0, u8 *TUP1)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mac_get_san_mac_addr_dummy(struct txgbe_hw *TUP0, u8 *TUP1)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mac_set_san_mac_addr_dummy(struct txgbe_hw *TUP0, u8 *TUP1)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mac_get_device_caps_dummy(struct txgbe_hw *TUP0, u16 *TUP1)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mac_get_wwn_prefix_dummy(struct txgbe_hw *TUP0, u16 *TUP1, u16 *TUP2)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mac_get_fcoe_boot_status_dummy(struct txgbe_hw *TUP0, u16 *TUP1)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mac_read_analog_reg8_dummy(struct txgbe_hw *TUP0, u32 TUP1, u8 *TUP2)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mac_write_analog_reg8_dummy(struct txgbe_hw *TUP0, u32 TUP1, u8 TUP2)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mac_setup_sfp_dummy(struct txgbe_hw *TUP0)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mac_enable_rx_dma_dummy(struct txgbe_hw *TUP0, u32 TUP1)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mac_disable_sec_rx_path_dummy(struct txgbe_hw *TUP0)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mac_enable_sec_rx_path_dummy(struct txgbe_hw *TUP0)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mac_disable_sec_tx_path_dummy(struct txgbe_hw *TUP0)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mac_enable_sec_tx_path_dummy(struct txgbe_hw *TUP0)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mac_acquire_swfw_sync_dummy(struct txgbe_hw *TUP0, u32 TUP1)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline void txgbe_mac_release_swfw_sync_dummy(struct txgbe_hw *TUP0, u32 TUP1)
+{
+	return;
+}
+static inline void txgbe_mac_init_swfw_sync_dummy(struct txgbe_hw *TUP0)
+{
+	return;
+}
+static inline u64 txgbe_mac_autoc_read_dummy(struct txgbe_hw *TUP0)
+{
+	return 0;
+}
+static inline void txgbe_mac_autoc_write_dummy(struct txgbe_hw *TUP0, u64 TUP1)
+{
+	return;
+}
+static inline s32 txgbe_mac_prot_autoc_read_dummy(struct txgbe_hw *TUP0, bool *TUP1, u64 *TUP2)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mac_prot_autoc_write_dummy(struct txgbe_hw *TUP0, bool TUP1, u64 TUP2)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mac_negotiate_api_version_dummy(struct txgbe_hw *TUP0, int TUP1)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline void txgbe_mac_disable_tx_laser_dummy(struct txgbe_hw *TUP0)
+{
+	return;
+}
+static inline void txgbe_mac_enable_tx_laser_dummy(struct txgbe_hw *TUP0)
+{
+	return;
+}
+static inline void txgbe_mac_flap_tx_laser_dummy(struct txgbe_hw *TUP0)
+{
+	return;
+}
+static inline s32 txgbe_mac_setup_link_dummy(struct txgbe_hw *TUP0, u32 TUP1, bool TUP2)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mac_setup_mac_link_dummy(struct txgbe_hw *TUP0, u32 TUP1, bool TUP2)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mac_check_link_dummy(struct txgbe_hw *TUP0, u32 *TUP1, bool *TUP2, bool TUP3)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mac_get_link_capabilities_dummy(struct txgbe_hw *TUP0, u32 *TUP1, bool *TUP2)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline void txgbe_mac_set_rate_select_speed_dummy(struct txgbe_hw *TUP0, u32 TUP1)
+{
+	return;
+}
+static inline void txgbe_mac_setup_pba_dummy(struct txgbe_hw *TUP0, int TUP1, u32 TUP2, int TUP3)
+{
+	return;
+}
+static inline s32 txgbe_mac_led_on_dummy(struct txgbe_hw *TUP0, u32 TUP1)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mac_led_off_dummy(struct txgbe_hw *TUP0, u32 TUP1)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mac_blink_led_start_dummy(struct txgbe_hw *TUP0, u32 TUP1)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mac_blink_led_stop_dummy(struct txgbe_hw *TUP0, u32 TUP1)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mac_init_led_link_act_dummy(struct txgbe_hw *TUP0)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mac_set_rar_dummy(struct txgbe_hw *TUP0, u32 TUP1, u8 *TUP2, u32 TUP3, u32 TUP4)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mac_set_uc_addr_dummy(struct txgbe_hw *TUP0, u32 TUP1, u8 *TUP2)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mac_clear_rar_dummy(struct txgbe_hw *TUP0, u32 TUP1)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mac_insert_mac_addr_dummy(struct txgbe_hw *TUP0, u8 *TUP1, u32 TUP2)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mac_set_vmdq_dummy(struct txgbe_hw *TUP0, u32 TUP1, u32 TUP2)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mac_set_vmdq_san_mac_dummy(struct txgbe_hw *TUP0, u32 TUP1)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mac_clear_vmdq_dummy(struct txgbe_hw *TUP0, u32 TUP1, u32 TUP2)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mac_init_rx_addrs_dummy(struct txgbe_hw *TUP0)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mac_update_uc_addr_list_dummy(struct txgbe_hw *TUP0, u8 *TUP1, u32 TUP2, txgbe_mc_addr_itr TUP3)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mac_update_mc_addr_list_dummy(struct txgbe_hw *TUP0, u8 *TUP1, u32 TUP2, txgbe_mc_addr_itr TUP3, bool TUP4)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mac_enable_mc_dummy(struct txgbe_hw *TUP0)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mac_disable_mc_dummy(struct txgbe_hw *TUP0)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mac_clear_vfta_dummy(struct txgbe_hw *TUP0)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mac_set_vfta_dummy(struct txgbe_hw *TUP0, u32 TUP1, u32 TUP2, bool TUP3, bool TUP4)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mac_set_vlvf_dummy(struct txgbe_hw *TUP0, u32 TUP1, u32 TUP2, bool TUP3, u32 *TUP4, u32 TUP5, bool TUP6)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mac_init_uta_tables_dummy(struct txgbe_hw *TUP0)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline void txgbe_mac_set_mac_anti_spoofing_dummy(struct txgbe_hw *TUP0, bool TUP1, int TUP2)
+{
+	return;
+}
+static inline void txgbe_mac_set_vlan_anti_spoofing_dummy(struct txgbe_hw *TUP0, bool TUP1, int TUP2)
+{
+	return;
+}
+static inline s32 txgbe_mac_update_xcast_mode_dummy(struct txgbe_hw *TUP0, int TUP1)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mac_set_rlpml_dummy(struct txgbe_hw *TUP0, u16 TUP1)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mac_fc_enable_dummy(struct txgbe_hw *TUP0)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mac_setup_fc_dummy(struct txgbe_hw *TUP0)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline void txgbe_mac_fc_autoneg_dummy(struct txgbe_hw *TUP0)
+{
+	return;
+}
+static inline s32 txgbe_mac_set_fw_drv_ver_dummy(struct txgbe_hw *TUP0, u8 TUP1, u8 TUP2, u8 TUP3, u8 TUP4, u16 TUP5, const char *TUP6)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mac_get_thermal_sensor_data_dummy(struct txgbe_hw *TUP0)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mac_init_thermal_sensor_thresh_dummy(struct txgbe_hw *TUP0)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline void txgbe_mac_get_rtrup2tc_dummy(struct txgbe_hw *TUP0, u8 *TUP1)
+{
+	return;
+}
+static inline void txgbe_mac_disable_rx_dummy(struct txgbe_hw *TUP0)
+{
+	return;
+}
+static inline void txgbe_mac_enable_rx_dummy(struct txgbe_hw *TUP0)
+{
+	return;
+}
+static inline void txgbe_mac_set_source_address_pruning_dummy(struct txgbe_hw *TUP0, bool TUP1, unsigned int TUP2)
+{
+	return;
+}
+static inline void txgbe_mac_set_ethertype_anti_spoofing_dummy(struct txgbe_hw *TUP0, bool TUP1, int TUP2)
+{
+	return;
+}
+static inline s32 txgbe_mac_dmac_update_tcs_dummy(struct txgbe_hw *TUP0)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mac_dmac_config_tcs_dummy(struct txgbe_hw *TUP0)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mac_dmac_config_dummy(struct txgbe_hw *TUP0)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mac_setup_eee_dummy(struct txgbe_hw *TUP0, bool TUP1)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mac_read_iosf_sb_reg_dummy(struct txgbe_hw *TUP0, u32 TUP1, u32 TUP2, u32 *TUP3)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mac_write_iosf_sb_reg_dummy(struct txgbe_hw *TUP0, u32 TUP1, u32 TUP2, u32 TUP3)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline void txgbe_mac_disable_mdd_dummy(struct txgbe_hw *TUP0)
+{
+	return;
+}
+static inline void txgbe_mac_enable_mdd_dummy(struct txgbe_hw *TUP0)
+{
+	return;
+}
+static inline void txgbe_mac_mdd_event_dummy(struct txgbe_hw *TUP0, u32 *TUP1)
+{
+	return;
+}
+static inline void txgbe_mac_restore_mdd_vf_dummy(struct txgbe_hw *TUP0, u32 TUP1)
+{
+	return;
+}
+static inline bool txgbe_mac_fw_recovery_mode_dummy(struct txgbe_hw *TUP0)
+{
+	return false;
+}
+
+/* struct txgbe_phy_operations */
+static inline u32 txgbe_phy_get_media_type_dummy(struct txgbe_hw *TUP0)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_phy_identify_dummy(struct txgbe_hw *TUP0)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_phy_identify_sfp_dummy(struct txgbe_hw *TUP0)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_phy_init_dummy(struct txgbe_hw *TUP0)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_phy_reset_dummy(struct txgbe_hw *TUP0)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_phy_read_reg_dummy(struct txgbe_hw *TUP0, u32 TUP1, u32 TUP2, u16 *TUP3)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_phy_write_reg_dummy(struct txgbe_hw *TUP0, u32 TUP1, u32 TUP2, u16 TUP3)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_phy_read_reg_mdi_dummy(struct txgbe_hw *TUP0, u32 TUP1, u32 TUP2, u16 *TUP3)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_phy_write_reg_mdi_dummy(struct txgbe_hw *TUP0, u32 TUP1, u32 TUP2, u16 TUP3)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_phy_setup_link_dummy(struct txgbe_hw *TUP0)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_phy_setup_internal_link_dummy(struct txgbe_hw *TUP0)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_phy_setup_link_speed_dummy(struct txgbe_hw *TUP0, u32 TUP1, bool TUP2)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_phy_check_link_dummy(struct txgbe_hw *TUP0, u32 *TUP1, bool *TUP2)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_phy_get_firmware_version_dummy(struct txgbe_hw *TUP0, u32 *TUP1)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_phy_read_i2c_byte_dummy(struct txgbe_hw *TUP0, u8 TUP1, u8 TUP2, u8 *TUP3)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_phy_write_i2c_byte_dummy(struct txgbe_hw *TUP0, u8 TUP1, u8 TUP2, u8 TUP3)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_phy_read_i2c_sff8472_dummy(struct txgbe_hw *TUP0, u8 TUP1, u8 *TUP2)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_phy_read_i2c_eeprom_dummy(struct txgbe_hw *TUP0, u8 TUP1, u8 *TUP2)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_phy_write_i2c_eeprom_dummy(struct txgbe_hw *TUP0, u8 TUP1, u8 TUP2)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline void txgbe_phy_i2c_bus_clear_dummy(struct txgbe_hw *TUP0)
+{
+	return;
+}
+static inline s32 txgbe_phy_check_overtemp_dummy(struct txgbe_hw *TUP0)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_phy_set_phy_power_dummy(struct txgbe_hw *TUP0, bool TUP1)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_phy_enter_lplu_dummy(struct txgbe_hw *TUP0)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_phy_handle_lasi_dummy(struct txgbe_hw *TUP0)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_phy_read_i2c_byte_unlocked_dummy(struct txgbe_hw *TUP0, u8 TUP1, u8 TUP2, u8 *TUP3)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_phy_write_i2c_byte_unlocked_dummy(struct txgbe_hw *TUP0, u8 TUP1, u8 TUP2, u8 TUP3)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+
+/* struct txgbe_link_operations */
+static inline s32 txgbe_link_read_link_dummy(struct txgbe_hw *TUP0, u8 TUP1, u16 TUP2, u16 *TUP3)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_link_read_link_unlocked_dummy(struct txgbe_hw *TUP0, u8 TUP1, u16 TUP2, u16 *TUP3)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_link_write_link_dummy(struct txgbe_hw *TUP0, u8 TUP1, u16 TUP2, u16 TUP3)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_link_write_link_unlocked_dummy(struct txgbe_hw *TUP0, u8 TUP1, u16 TUP2, u16 TUP3)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+
+/* struct txgbe_mbx_operations */
+static inline void txgbe_mbx_init_params_dummy(struct txgbe_hw *TUP0)
+{
+	return;
+}
+static inline s32 txgbe_mbx_read_dummy(struct txgbe_hw *TUP0, u32 *TUP1, u16 TUP2, u16 TUP3)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mbx_write_dummy(struct txgbe_hw *TUP0, u32 *TUP1, u16 TUP2, u16 TUP3)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mbx_read_posted_dummy(struct txgbe_hw *TUP0, u32 *TUP1, u16 TUP2, u16 TUP3)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mbx_write_posted_dummy(struct txgbe_hw *TUP0, u32 *TUP1, u16 TUP2, u16 TUP3)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mbx_check_for_msg_dummy(struct txgbe_hw *TUP0, u16 TUP1)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mbx_check_for_ack_dummy(struct txgbe_hw *TUP0, u16 TUP1)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+static inline s32 txgbe_mbx_check_for_rst_dummy(struct txgbe_hw *TUP0, u16 TUP1)
+{
+	return TXGBE_ERR_OPS_DUMMY;
+}
+
+
+static inline void txgbe_init_ops_dummy(struct txgbe_hw *hw)
+{
+	hw->bus.get_bus_info = txgbe_bus_get_bus_info_dummy;
+	hw->bus.set_lan_id = txgbe_bus_set_lan_id_dummy;
+	hw->rom.init_params = txgbe_rom_init_params_dummy;
+	hw->rom.read16 = txgbe_rom_read16_dummy;
+	hw->rom.readw_buffer = txgbe_rom_readw_buffer_dummy;
+	hw->rom.readw_sw = txgbe_rom_readw_sw_dummy;
+	hw->rom.read32 = txgbe_rom_read32_dummy;
+	hw->rom.read_buffer = txgbe_rom_read_buffer_dummy;
+	hw->rom.write16 = txgbe_rom_write16_dummy;
+	hw->rom.writew_buffer = txgbe_rom_writew_buffer_dummy;
+	hw->rom.writew_sw = txgbe_rom_writew_sw_dummy;
+	hw->rom.write32 = txgbe_rom_write32_dummy;
+	hw->rom.write_buffer = txgbe_rom_write_buffer_dummy;
+	hw->rom.validate_checksum = txgbe_rom_validate_checksum_dummy;
+	hw->rom.update_checksum = txgbe_rom_update_checksum_dummy;
+	hw->rom.calc_checksum = txgbe_rom_calc_checksum_dummy;
+	hw->mac.init_hw = txgbe_mac_init_hw_dummy;
+	hw->mac.reset_hw = txgbe_mac_reset_hw_dummy;
+	hw->mac.start_hw = txgbe_mac_start_hw_dummy;
+	hw->mac.stop_hw = txgbe_mac_stop_hw_dummy;
+	hw->mac.clear_hw_cntrs = txgbe_mac_clear_hw_cntrs_dummy;
+	hw->mac.enable_relaxed_ordering = txgbe_mac_enable_relaxed_ordering_dummy;
+	hw->mac.get_supported_physical_layer = txgbe_mac_get_supported_physical_layer_dummy;
+	hw->mac.get_mac_addr = txgbe_mac_get_mac_addr_dummy;
+	hw->mac.get_san_mac_addr = txgbe_mac_get_san_mac_addr_dummy;
+	hw->mac.set_san_mac_addr = txgbe_mac_set_san_mac_addr_dummy;
+	hw->mac.get_device_caps = txgbe_mac_get_device_caps_dummy;
+	hw->mac.get_wwn_prefix = txgbe_mac_get_wwn_prefix_dummy;
+	hw->mac.get_fcoe_boot_status = txgbe_mac_get_fcoe_boot_status_dummy;
+	hw->mac.read_analog_reg8 = txgbe_mac_read_analog_reg8_dummy;
+	hw->mac.write_analog_reg8 = txgbe_mac_write_analog_reg8_dummy;
+	hw->mac.setup_sfp = txgbe_mac_setup_sfp_dummy;
+	hw->mac.enable_rx_dma = txgbe_mac_enable_rx_dma_dummy;
+	hw->mac.disable_sec_rx_path = txgbe_mac_disable_sec_rx_path_dummy;
+	hw->mac.enable_sec_rx_path = txgbe_mac_enable_sec_rx_path_dummy;
+	hw->mac.disable_sec_tx_path = txgbe_mac_disable_sec_tx_path_dummy;
+	hw->mac.enable_sec_tx_path = txgbe_mac_enable_sec_tx_path_dummy;
+	hw->mac.acquire_swfw_sync = txgbe_mac_acquire_swfw_sync_dummy;
+	hw->mac.release_swfw_sync = txgbe_mac_release_swfw_sync_dummy;
+	hw->mac.init_swfw_sync = txgbe_mac_init_swfw_sync_dummy;
+	hw->mac.autoc_read = txgbe_mac_autoc_read_dummy;
+	hw->mac.autoc_write = txgbe_mac_autoc_write_dummy;
+	hw->mac.prot_autoc_read = txgbe_mac_prot_autoc_read_dummy;
+	hw->mac.prot_autoc_write = txgbe_mac_prot_autoc_write_dummy;
+	hw->mac.negotiate_api_version = txgbe_mac_negotiate_api_version_dummy;
+	hw->mac.disable_tx_laser = txgbe_mac_disable_tx_laser_dummy;
+	hw->mac.enable_tx_laser = txgbe_mac_enable_tx_laser_dummy;
+	hw->mac.flap_tx_laser = txgbe_mac_flap_tx_laser_dummy;
+	hw->mac.setup_link = txgbe_mac_setup_link_dummy;
+	hw->mac.setup_mac_link = txgbe_mac_setup_mac_link_dummy;
+	hw->mac.check_link = txgbe_mac_check_link_dummy;
+	hw->mac.get_link_capabilities = txgbe_mac_get_link_capabilities_dummy;
+	hw->mac.set_rate_select_speed = txgbe_mac_set_rate_select_speed_dummy;
+	hw->mac.setup_pba = txgbe_mac_setup_pba_dummy;
+	hw->mac.led_on = txgbe_mac_led_on_dummy;
+	hw->mac.led_off = txgbe_mac_led_off_dummy;
+	hw->mac.blink_led_start = txgbe_mac_blink_led_start_dummy;
+	hw->mac.blink_led_stop = txgbe_mac_blink_led_stop_dummy;
+	hw->mac.init_led_link_act = txgbe_mac_init_led_link_act_dummy;
+	hw->mac.set_rar = txgbe_mac_set_rar_dummy;
+	hw->mac.set_uc_addr = txgbe_mac_set_uc_addr_dummy;
+	hw->mac.clear_rar = txgbe_mac_clear_rar_dummy;
+	hw->mac.insert_mac_addr = txgbe_mac_insert_mac_addr_dummy;
+	hw->mac.set_vmdq = txgbe_mac_set_vmdq_dummy;
+	hw->mac.set_vmdq_san_mac = txgbe_mac_set_vmdq_san_mac_dummy;
+	hw->mac.clear_vmdq = txgbe_mac_clear_vmdq_dummy;
+	hw->mac.init_rx_addrs = txgbe_mac_init_rx_addrs_dummy;
+	hw->mac.update_uc_addr_list = txgbe_mac_update_uc_addr_list_dummy;
+	hw->mac.update_mc_addr_list = txgbe_mac_update_mc_addr_list_dummy;
+	hw->mac.enable_mc = txgbe_mac_enable_mc_dummy;
+	hw->mac.disable_mc = txgbe_mac_disable_mc_dummy;
+	hw->mac.clear_vfta = txgbe_mac_clear_vfta_dummy;
+	hw->mac.set_vfta = txgbe_mac_set_vfta_dummy;
+	hw->mac.set_vlvf = txgbe_mac_set_vlvf_dummy;
+	hw->mac.init_uta_tables = txgbe_mac_init_uta_tables_dummy;
+	hw->mac.set_mac_anti_spoofing = txgbe_mac_set_mac_anti_spoofing_dummy;
+	hw->mac.set_vlan_anti_spoofing = txgbe_mac_set_vlan_anti_spoofing_dummy;
+	hw->mac.update_xcast_mode = txgbe_mac_update_xcast_mode_dummy;
+	hw->mac.set_rlpml = txgbe_mac_set_rlpml_dummy;
+	hw->mac.fc_enable = txgbe_mac_fc_enable_dummy;
+	hw->mac.setup_fc = txgbe_mac_setup_fc_dummy;
+	hw->mac.fc_autoneg = txgbe_mac_fc_autoneg_dummy;
+	hw->mac.set_fw_drv_ver = txgbe_mac_set_fw_drv_ver_dummy;
+	hw->mac.get_thermal_sensor_data = txgbe_mac_get_thermal_sensor_data_dummy;
+	hw->mac.init_thermal_sensor_thresh = txgbe_mac_init_thermal_sensor_thresh_dummy;
+	hw->mac.get_rtrup2tc = txgbe_mac_get_rtrup2tc_dummy;
+	hw->mac.disable_rx = txgbe_mac_disable_rx_dummy;
+	hw->mac.enable_rx = txgbe_mac_enable_rx_dummy;
+	hw->mac.set_source_address_pruning = txgbe_mac_set_source_address_pruning_dummy;
+	hw->mac.set_ethertype_anti_spoofing = txgbe_mac_set_ethertype_anti_spoofing_dummy;
+	hw->mac.dmac_update_tcs = txgbe_mac_dmac_update_tcs_dummy;
+	hw->mac.dmac_config_tcs = txgbe_mac_dmac_config_tcs_dummy;
+	hw->mac.dmac_config = txgbe_mac_dmac_config_dummy;
+	hw->mac.setup_eee = txgbe_mac_setup_eee_dummy;
+	hw->mac.read_iosf_sb_reg = txgbe_mac_read_iosf_sb_reg_dummy;
+	hw->mac.write_iosf_sb_reg = txgbe_mac_write_iosf_sb_reg_dummy;
+	hw->mac.disable_mdd = txgbe_mac_disable_mdd_dummy;
+	hw->mac.enable_mdd = txgbe_mac_enable_mdd_dummy;
+	hw->mac.mdd_event = txgbe_mac_mdd_event_dummy;
+	hw->mac.restore_mdd_vf = txgbe_mac_restore_mdd_vf_dummy;
+	hw->mac.fw_recovery_mode = txgbe_mac_fw_recovery_mode_dummy;
+	hw->phy.get_media_type = txgbe_phy_get_media_type_dummy;
+	hw->phy.identify = txgbe_phy_identify_dummy;
+	hw->phy.identify_sfp = txgbe_phy_identify_sfp_dummy;
+	hw->phy.init = txgbe_phy_init_dummy;
+	hw->phy.reset = txgbe_phy_reset_dummy;
+	hw->phy.read_reg = txgbe_phy_read_reg_dummy;
+	hw->phy.write_reg = txgbe_phy_write_reg_dummy;
+	hw->phy.read_reg_mdi = txgbe_phy_read_reg_mdi_dummy;
+	hw->phy.write_reg_mdi = txgbe_phy_write_reg_mdi_dummy;
+	hw->phy.setup_link = txgbe_phy_setup_link_dummy;
+	hw->phy.setup_internal_link = txgbe_phy_setup_internal_link_dummy;
+	hw->phy.setup_link_speed = txgbe_phy_setup_link_speed_dummy;
+	hw->phy.check_link = txgbe_phy_check_link_dummy;
+	hw->phy.get_firmware_version = txgbe_phy_get_firmware_version_dummy;
+	hw->phy.read_i2c_byte = txgbe_phy_read_i2c_byte_dummy;
+	hw->phy.write_i2c_byte = txgbe_phy_write_i2c_byte_dummy;
+	hw->phy.read_i2c_sff8472 = txgbe_phy_read_i2c_sff8472_dummy;
+	hw->phy.read_i2c_eeprom = txgbe_phy_read_i2c_eeprom_dummy;
+	hw->phy.write_i2c_eeprom = txgbe_phy_write_i2c_eeprom_dummy;
+	hw->phy.i2c_bus_clear = txgbe_phy_i2c_bus_clear_dummy;
+	hw->phy.check_overtemp = txgbe_phy_check_overtemp_dummy;
+	hw->phy.set_phy_power = txgbe_phy_set_phy_power_dummy;
+	hw->phy.enter_lplu = txgbe_phy_enter_lplu_dummy;
+	hw->phy.handle_lasi = txgbe_phy_handle_lasi_dummy;
+	hw->phy.read_i2c_byte_unlocked = txgbe_phy_read_i2c_byte_unlocked_dummy;
+	hw->phy.write_i2c_byte_unlocked = txgbe_phy_write_i2c_byte_unlocked_dummy;
+	hw->link.read_link = txgbe_link_read_link_dummy;
+	hw->link.read_link_unlocked = txgbe_link_read_link_unlocked_dummy;
+	hw->link.write_link = txgbe_link_write_link_dummy;
+	hw->link.write_link_unlocked = txgbe_link_write_link_unlocked_dummy;
+	hw->mbx.init_params = txgbe_mbx_init_params_dummy;
+	hw->mbx.read = txgbe_mbx_read_dummy;
+	hw->mbx.write = txgbe_mbx_write_dummy;
+	hw->mbx.read_posted = txgbe_mbx_read_posted_dummy;
+	hw->mbx.write_posted = txgbe_mbx_write_posted_dummy;
+	hw->mbx.check_for_msg = txgbe_mbx_check_for_msg_dummy;
+	hw->mbx.check_for_ack = txgbe_mbx_check_for_ack_dummy;
+	hw->mbx.check_for_rst = txgbe_mbx_check_for_rst_dummy;
+}
+
+#endif /* _TXGBE_TYPE_DUMMY_H_ */
diff --git a/drivers/net/txgbe/base/txgbe_status.h b/drivers/net/txgbe/base/txgbe_status.h
new file mode 100644
index 000000000..db5e521e4
--- /dev/null
+++ b/drivers/net/txgbe/base/txgbe_status.h
@@ -0,0 +1,122 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2015-2020
+ */
+
+#ifndef _TXGBE_STATUS_H_
+#define _TXGBE_STATUS_H_
+
+/* Error Codes:
+ * common error
+ * module error(simple)
+ * module error(detailed)
+ *
+ * (-256, 256): reserved for non-txgbe defined error codes
+ */
+#define TERR_BASE (0x100)
+enum txgbe_error {
+	TERR_NULL = TERR_BASE,
+	TERR_ANY,
+	TERR_NOSUPP,
+	TERR_NOIMPL,
+	TERR_NOMEM,
+	TERR_NOSPACE,
+	TERR_NOENTRY,
+	TERR_CONFIG,
+	TERR_ARGS,
+	TERR_PARAM,
+	TERR_INVALID,
+	TERR_TIMEOUT,
+	TERR_VERSION,
+	TERR_REGISTER,
+	TERR_FEATURE,
+	TERR_RESET,
+	TERR_AUTONEG,
+	TERR_MBX,
+	TERR_I2C,
+	TERR_FC,
+	TERR_FLASH,
+	TERR_DEVICE,
+	TERR_HOSTIF,
+	TERR_SRAM,
+	TERR_EEPROM,
+	TERR_EEPROM_CHECKSUM,
+	TERR_EEPROM_PROTECT,
+	TERR_EEPROM_VERSION,
+	TERR_MAC,
+	TERR_MAC_ADDR,
+	TERR_SFP,
+	TERR_SFP_INITSEQ,
+	TERR_SFP_PRESENT,
+	TERR_SFP_SUPPORT,
+	TERR_SFP_SETUP,
+	TERR_PHY,
+	TERR_PHY_ADDR,
+	TERR_PHY_INIT,
+	TERR_FDIR_CMD,
+	TERR_FDIR_REINIT,
+	TERR_SWFW_SYNC,
+	TERR_SWFW_COMMAND,
+	TERR_FC_CFG,
+	TERR_FC_NEGO,
+	TERR_LINK_SETUP,
+	TERR_PCIE_PENDING,
+	TERR_PBA_SECTION,
+	TERR_OVERTEMP,
+	TERR_UNDERTEMP,
+	TERR_XPCS_POWERUP,
+};
+
+/* WARNING: just for legacy compatibility */
+#define TXGBE_NOT_IMPLEMENTED 0x7FFFFFFF
+#define TXGBE_ERR_OPS_DUMMY   0x3FFFFFFF
+
+/* Error Codes */
+#define TXGBE_ERR_EEPROM			-(TERR_BASE + 1)
+#define TXGBE_ERR_EEPROM_CHECKSUM		-(TERR_BASE + 2)
+#define TXGBE_ERR_PHY				-(TERR_BASE + 3)
+#define TXGBE_ERR_CONFIG			-(TERR_BASE + 4)
+#define TXGBE_ERR_PARAM				-(TERR_BASE + 5)
+#define TXGBE_ERR_MAC_TYPE			-(TERR_BASE + 6)
+#define TXGBE_ERR_UNKNOWN_PHY			-(TERR_BASE + 7)
+#define TXGBE_ERR_LINK_SETUP			-(TERR_BASE + 8)
+#define TXGBE_ERR_ADAPTER_STOPPED		-(TERR_BASE + 9)
+#define TXGBE_ERR_INVALID_MAC_ADDR		-(TERR_BASE + 10)
+#define TXGBE_ERR_DEVICE_NOT_SUPPORTED		-(TERR_BASE + 11)
+#define TXGBE_ERR_MASTER_REQUESTS_PENDING	-(TERR_BASE + 12)
+#define TXGBE_ERR_INVALID_LINK_SETTINGS		-(TERR_BASE + 13)
+#define TXGBE_ERR_AUTONEG_NOT_COMPLETE		-(TERR_BASE + 14)
+#define TXGBE_ERR_RESET_FAILED			-(TERR_BASE + 15)
+#define TXGBE_ERR_SWFW_SYNC			-(TERR_BASE + 16)
+#define TXGBE_ERR_PHY_ADDR_INVALID		-(TERR_BASE + 17)
+#define TXGBE_ERR_I2C				-(TERR_BASE + 18)
+#define TXGBE_ERR_SFP_NOT_SUPPORTED		-(TERR_BASE + 19)
+#define TXGBE_ERR_SFP_NOT_PRESENT		-(TERR_BASE + 20)
+#define TXGBE_ERR_SFP_NO_INIT_SEQ_PRESENT	-(TERR_BASE + 21)
+#define TXGBE_ERR_NO_SAN_ADDR_PTR		-(TERR_BASE + 22)
+#define TXGBE_ERR_FDIR_REINIT_FAILED		-(TERR_BASE + 23)
+#define TXGBE_ERR_EEPROM_VERSION		-(TERR_BASE + 24)
+#define TXGBE_ERR_NO_SPACE			-(TERR_BASE + 25)
+#define TXGBE_ERR_OVERTEMP			-(TERR_BASE + 26)
+#define TXGBE_ERR_FC_NOT_NEGOTIATED		-(TERR_BASE + 27)
+#define TXGBE_ERR_FC_NOT_SUPPORTED		-(TERR_BASE + 28)
+#define TXGBE_ERR_SFP_SETUP_NOT_COMPLETE	-(TERR_BASE + 30)
+#define TXGBE_ERR_PBA_SECTION			-(TERR_BASE + 31)
+#define TXGBE_ERR_INVALID_ARGUMENT		-(TERR_BASE + 32)
+#define TXGBE_ERR_HOST_INTERFACE_COMMAND	-(TERR_BASE + 33)
+#define TXGBE_ERR_OUT_OF_MEM			-(TERR_BASE + 34)
+#define TXGBE_ERR_FEATURE_NOT_SUPPORTED		-(TERR_BASE + 36)
+#define TXGBE_ERR_EEPROM_PROTECTED_REGION	-(TERR_BASE + 37)
+#define TXGBE_ERR_FDIR_CMD_INCOMPLETE		-(TERR_BASE + 38)
+#define TXGBE_ERR_FW_RESP_INVALID		-(TERR_BASE + 39)
+#define TXGBE_ERR_TOKEN_RETRY			-(TERR_BASE + 40)
+#define TXGBE_ERR_FLASH_LOADING_FAILED          -(TERR_BASE + 41)
+
+#define TXGBE_ERR_NOSUPP                        -(TERR_BASE + 42)
+#define TXGBE_ERR_UNDERTEMP                     -(TERR_BASE + 43)
+#define TXGBE_ERR_XPCS_POWER_UP_FAILED          -(TERR_BASE + 44)
+#define TXGBE_ERR_PHY_INIT_NOT_DONE             -(TERR_BASE + 45)
+#define TXGBE_ERR_TIMEOUT                       -(TERR_BASE + 46)
+#define TXGBE_ERR_REGISTER                      -(TERR_BASE + 47)
+#define TXGBE_ERR_MNG_ACCESS_FAILED             -(TERR_BASE + 49)
+
+#endif /* _TXGBE_STATUS_H_ */
diff --git a/drivers/net/txgbe/base/txgbe_type.h b/drivers/net/txgbe/base/txgbe_type.h
index ad4eba21a..1264a83d9 100644
--- a/drivers/net/txgbe/base/txgbe_type.h
+++ b/drivers/net/txgbe/base/txgbe_type.h
@@ -7,9 +7,17 @@
 
 #define TXGBE_ALIGN				128 /* as intel did */
 
+#include "txgbe_status.h"
 #include "txgbe_osdep.h"
 #include "txgbe_devids.h"
 
+enum txgbe_eeprom_type {
+	txgbe_eeprom_unknown = 0,
+	txgbe_eeprom_spi,
+	txgbe_eeprom_flash,
+	txgbe_eeprom_none /* No NVM support */
+};
+
 enum txgbe_mac_type {
 	txgbe_mac_unknown = 0,
 	txgbe_mac_raptor,
@@ -90,10 +98,218 @@ enum txgbe_media_type {
 	txgbe_media_type_virtual
 };
 
+/* PCI bus types */
+enum txgbe_bus_type {
+	txgbe_bus_type_unknown = 0,
+	txgbe_bus_type_pci,
+	txgbe_bus_type_pcix,
+	txgbe_bus_type_pci_express,
+	txgbe_bus_type_internal,
+	txgbe_bus_type_reserved
+};
+
+/* PCI bus speeds */
+enum txgbe_bus_speed {
+	txgbe_bus_speed_unknown	= 0,
+	txgbe_bus_speed_33	= 33,
+	txgbe_bus_speed_66	= 66,
+	txgbe_bus_speed_100	= 100,
+	txgbe_bus_speed_120	= 120,
+	txgbe_bus_speed_133	= 133,
+	txgbe_bus_speed_2500	= 2500,
+	txgbe_bus_speed_5000	= 5000,
+	txgbe_bus_speed_8000	= 8000,
+	txgbe_bus_speed_reserved
+};
+
+/* PCI bus widths */
+enum txgbe_bus_width {
+	txgbe_bus_width_unknown	= 0,
+	txgbe_bus_width_pcie_x1	= 1,
+	txgbe_bus_width_pcie_x2	= 2,
+	txgbe_bus_width_pcie_x4	= 4,
+	txgbe_bus_width_pcie_x8	= 8,
+	txgbe_bus_width_32	= 32,
+	txgbe_bus_width_64	= 64,
+	txgbe_bus_width_reserved
+};
+
+struct txgbe_hw;
+
+/* Bus parameters */
+struct txgbe_bus_info {
+	s32 (*get_bus_info)(struct txgbe_hw *);
+	void (*set_lan_id)(struct txgbe_hw *);
+
+	enum txgbe_bus_speed speed;
+	enum txgbe_bus_width width;
+	enum txgbe_bus_type type;
+
+	u16 func;
+	u8 lan_id;
+	u16 instance_id;
+};
 struct txgbe_hw_stats {
 	u64 counter;
 };
+
+/* iterator type for walking multicast address lists */
+typedef u8* (*txgbe_mc_addr_itr) (struct txgbe_hw *hw, u8 **mc_addr_ptr,
+				  u32 *vmdq);
+
+struct txgbe_link_info {
+	s32 (*read_link)(struct txgbe_hw *, u8 addr, u16 reg, u16 *val);
+	s32 (*read_link_unlocked)(struct txgbe_hw *, u8 addr, u16 reg,
+				  u16 *val);
+	s32 (*write_link)(struct txgbe_hw *, u8 addr, u16 reg, u16 val);
+	s32 (*write_link_unlocked)(struct txgbe_hw *, u8 addr, u16 reg,
+				   u16 val);
+
+	u8 addr;
+};
+
+struct txgbe_rom_info {
+	s32 (*init_params)(struct txgbe_hw *);
+	s32 (*read16)(struct txgbe_hw *, u32, u16 *);
+	s32 (*readw_sw)(struct txgbe_hw *, u32, u16 *);
+	s32 (*readw_buffer)(struct txgbe_hw *, u32, u32, void *);
+	s32 (*read32)(struct txgbe_hw *, u32, u32 *);
+	s32 (*read_buffer)(struct txgbe_hw *, u32, u32, void *);
+	s32 (*write16)(struct txgbe_hw *, u32, u16);
+	s32 (*writew_sw)(struct txgbe_hw *, u32, u16);
+	s32 (*writew_buffer)(struct txgbe_hw *, u32, u32, void *);
+	s32 (*write32)(struct txgbe_hw *, u32, u32);
+	s32 (*write_buffer)(struct txgbe_hw *, u32, u32, void *);
+	s32 (*validate_checksum)(struct txgbe_hw *, u16 *);
+	s32 (*update_checksum)(struct txgbe_hw *);
+	s32 (*calc_checksum)(struct txgbe_hw *);
+
+	enum txgbe_eeprom_type type;
+	u32 semaphore_delay;
+	u16 word_size;
+	u16 address_bits;
+	u16 word_page_size;
+	u16 ctrl_word_3;
+
+	u32 sw_addr;
+};
+
+struct txgbe_flash_info {
+	s32 (*init_params)(struct txgbe_hw *);
+	s32 (*read_buffer)(struct txgbe_hw *, u32, u32, u32 *);
+	s32 (*write_buffer)(struct txgbe_hw *, u32, u32, u32 *);
+	u32 semaphore_delay;
+	u32 dword_size;
+	u16 address_bits;
+};
+
 struct txgbe_mac_info {
+	s32 (*init_hw)(struct txgbe_hw *);
+	s32 (*reset_hw)(struct txgbe_hw *);
+	s32 (*start_hw)(struct txgbe_hw *);
+	s32 (*stop_hw)(struct txgbe_hw *);
+	s32 (*clear_hw_cntrs)(struct txgbe_hw *);
+	void (*enable_relaxed_ordering)(struct txgbe_hw *);
+	u64 (*get_supported_physical_layer)(struct txgbe_hw *);
+	s32 (*get_mac_addr)(struct txgbe_hw *, u8 *);
+	s32 (*get_san_mac_addr)(struct txgbe_hw *, u8 *);
+	s32 (*set_san_mac_addr)(struct txgbe_hw *, u8 *);
+	s32 (*get_device_caps)(struct txgbe_hw *, u16 *);
+	s32 (*get_wwn_prefix)(struct txgbe_hw *, u16 *, u16 *);
+	s32 (*get_fcoe_boot_status)(struct txgbe_hw *, u16 *);
+	s32 (*read_analog_reg8)(struct txgbe_hw*, u32, u8*);
+	s32 (*write_analog_reg8)(struct txgbe_hw*, u32, u8);
+	s32 (*setup_sfp)(struct txgbe_hw *);
+	s32 (*enable_rx_dma)(struct txgbe_hw *, u32);
+	s32 (*disable_sec_rx_path)(struct txgbe_hw *);
+	s32 (*enable_sec_rx_path)(struct txgbe_hw *);
+	s32 (*disable_sec_tx_path)(struct txgbe_hw *);
+	s32 (*enable_sec_tx_path)(struct txgbe_hw *);
+	s32 (*acquire_swfw_sync)(struct txgbe_hw *, u32);
+	void (*release_swfw_sync)(struct txgbe_hw *, u32);
+	void (*init_swfw_sync)(struct txgbe_hw *);
+	u64 (*autoc_read)(struct txgbe_hw *);
+	void (*autoc_write)(struct txgbe_hw *, u64);
+	s32 (*prot_autoc_read)(struct txgbe_hw *, bool *, u64 *);
+	s32 (*prot_autoc_write)(struct txgbe_hw *, bool, u64);
+	s32 (*negotiate_api_version)(struct txgbe_hw *hw, int api);
+
+	/* Link */
+	void (*disable_tx_laser)(struct txgbe_hw *);
+	void (*enable_tx_laser)(struct txgbe_hw *);
+	void (*flap_tx_laser)(struct txgbe_hw *);
+	s32 (*setup_link)(struct txgbe_hw *, u32, bool);
+	s32 (*setup_mac_link)(struct txgbe_hw *, u32, bool);
+	s32 (*check_link)(struct txgbe_hw *, u32 *, bool *, bool);
+	s32 (*get_link_capabilities)(struct txgbe_hw *, u32 *,
+				     bool *);
+	void (*set_rate_select_speed)(struct txgbe_hw *, u32);
+
+	/* Packet Buffer manipulation */
+	void (*setup_pba)(struct txgbe_hw *, int, u32, int);
+
+	/* LED */
+	s32 (*led_on)(struct txgbe_hw *, u32);
+	s32 (*led_off)(struct txgbe_hw *, u32);
+	s32 (*blink_led_start)(struct txgbe_hw *, u32);
+	s32 (*blink_led_stop)(struct txgbe_hw *, u32);
+	s32 (*init_led_link_act)(struct txgbe_hw *);
+
+	/* RAR, Multicast, VLAN */
+	s32 (*set_rar)(struct txgbe_hw *, u32, u8 *, u32, u32);
+	s32 (*set_uc_addr)(struct txgbe_hw *, u32, u8 *);
+	s32 (*clear_rar)(struct txgbe_hw *, u32);
+	s32 (*insert_mac_addr)(struct txgbe_hw *, u8 *, u32);
+	s32 (*set_vmdq)(struct txgbe_hw *, u32, u32);
+	s32 (*set_vmdq_san_mac)(struct txgbe_hw *, u32);
+	s32 (*clear_vmdq)(struct txgbe_hw *, u32, u32);
+	s32 (*init_rx_addrs)(struct txgbe_hw *);
+	s32 (*update_uc_addr_list)(struct txgbe_hw *, u8 *, u32,
+				   txgbe_mc_addr_itr);
+	s32 (*update_mc_addr_list)(struct txgbe_hw *, u8 *, u32,
+				   txgbe_mc_addr_itr, bool clear);
+	s32 (*enable_mc)(struct txgbe_hw *);
+	s32 (*disable_mc)(struct txgbe_hw *);
+	s32 (*clear_vfta)(struct txgbe_hw *);
+	s32 (*set_vfta)(struct txgbe_hw *, u32, u32, bool, bool);
+	s32 (*set_vlvf)(struct txgbe_hw *, u32, u32, bool, u32 *, u32,
+			bool);
+	s32 (*init_uta_tables)(struct txgbe_hw *);
+	void (*set_mac_anti_spoofing)(struct txgbe_hw *, bool, int);
+	void (*set_vlan_anti_spoofing)(struct txgbe_hw *, bool, int);
+	s32 (*update_xcast_mode)(struct txgbe_hw *, int);
+	s32 (*set_rlpml)(struct txgbe_hw *, u16);
+
+	/* Flow Control */
+	s32 (*fc_enable)(struct txgbe_hw *);
+	s32 (*setup_fc)(struct txgbe_hw *);
+	void (*fc_autoneg)(struct txgbe_hw *);
+
+	/* Manageability interface */
+	s32 (*set_fw_drv_ver)(struct txgbe_hw *, u8, u8, u8, u8, u16,
+			      const char *);
+	s32 (*get_thermal_sensor_data)(struct txgbe_hw *);
+	s32 (*init_thermal_sensor_thresh)(struct txgbe_hw *hw);
+	void (*get_rtrup2tc)(struct txgbe_hw *hw, u8 *map);
+	void (*disable_rx)(struct txgbe_hw *hw);
+	void (*enable_rx)(struct txgbe_hw *hw);
+	void (*set_source_address_pruning)(struct txgbe_hw *, bool,
+					   unsigned int);
+	void (*set_ethertype_anti_spoofing)(struct txgbe_hw *, bool, int);
+	s32 (*dmac_update_tcs)(struct txgbe_hw *hw);
+	s32 (*dmac_config_tcs)(struct txgbe_hw *hw);
+	s32 (*dmac_config)(struct txgbe_hw *hw);
+	s32 (*setup_eee)(struct txgbe_hw *hw, bool enable_eee);
+	s32 (*read_iosf_sb_reg)(struct txgbe_hw *, u32, u32, u32 *);
+	s32 (*write_iosf_sb_reg)(struct txgbe_hw *, u32, u32, u32);
+	void (*disable_mdd)(struct txgbe_hw *hw);
+	void (*enable_mdd)(struct txgbe_hw *hw);
+	void (*mdd_event)(struct txgbe_hw *hw, u32 *vf_bitmap);
+	void (*restore_mdd_vf)(struct txgbe_hw *hw, u32 vf);
+	bool (*fw_recovery_mode)(struct txgbe_hw *hw);
+
 	enum txgbe_mac_type type;
 	u8 addr[ETH_ADDR_LEN];
 	u8 perm_addr[ETH_ADDR_LEN];
@@ -103,16 +319,60 @@ struct txgbe_mac_info {
 };
 
 struct txgbe_phy_info {
+	u32 (*get_media_type)(struct txgbe_hw *);
+	s32 (*identify)(struct txgbe_hw *);
+	s32 (*identify_sfp)(struct txgbe_hw *);
+	s32 (*init)(struct txgbe_hw *);
+	s32 (*reset)(struct txgbe_hw *);
+	s32 (*read_reg)(struct txgbe_hw *, u32, u32, u16 *);
+	s32 (*write_reg)(struct txgbe_hw *, u32, u32, u16);
+	s32 (*read_reg_mdi)(struct txgbe_hw *, u32, u32, u16 *);
+	s32 (*write_reg_mdi)(struct txgbe_hw *, u32, u32, u16);
+	s32 (*setup_link)(struct txgbe_hw *);
+	s32 (*setup_internal_link)(struct txgbe_hw *);
+	s32 (*setup_link_speed)(struct txgbe_hw *, u32, bool);
+	s32 (*check_link)(struct txgbe_hw *, u32 *, bool *);
+	s32 (*get_firmware_version)(struct txgbe_hw *, u32 *);
+	s32 (*read_i2c_byte)(struct txgbe_hw *, u8, u8, u8 *);
+	s32 (*write_i2c_byte)(struct txgbe_hw *, u8, u8, u8);
+	s32 (*read_i2c_sff8472)(struct txgbe_hw *, u8, u8 *);
+	s32 (*read_i2c_eeprom)(struct txgbe_hw *, u8, u8 *);
+	s32 (*write_i2c_eeprom)(struct txgbe_hw *, u8, u8);
+	void (*i2c_bus_clear)(struct txgbe_hw *);
+	s32 (*check_overtemp)(struct txgbe_hw *);
+	s32 (*set_phy_power)(struct txgbe_hw *, bool on);
+	s32 (*enter_lplu)(struct txgbe_hw *);
+	s32 (*handle_lasi)(struct txgbe_hw *hw);
+	s32 (*read_i2c_byte_unlocked)(struct txgbe_hw *, u8 offset, u8 addr,
+				      u8 *value);
+	s32 (*write_i2c_byte_unlocked)(struct txgbe_hw *, u8 offset, u8 addr,
+				       u8 value);
+
 	enum txgbe_phy_type type;
 	enum txgbe_sfp_type sfp_type;
 };
 
+struct txgbe_mbx_info {
+	void (*init_params)(struct txgbe_hw *hw);
+	s32  (*read)(struct txgbe_hw *, u32 *, u16,  u16);
+	s32  (*write)(struct txgbe_hw *, u32 *, u16, u16);
+	s32  (*read_posted)(struct txgbe_hw *, u32 *, u16,  u16);
+	s32  (*write_posted)(struct txgbe_hw *, u32 *, u16, u16);
+	s32  (*check_for_msg)(struct txgbe_hw *, u16);
+	s32  (*check_for_ack)(struct txgbe_hw *, u16);
+	s32  (*check_for_rst)(struct txgbe_hw *, u16);
+};
+
 struct txgbe_hw {
 	void IOMEM *hw_addr;
 	void *back;
 	struct txgbe_mac_info mac;
 	struct txgbe_phy_info phy;
-
+	struct txgbe_link_info link;
+	struct txgbe_rom_info rom;
+	struct txgbe_flash_info flash;
+	struct txgbe_bus_info bus;
+	struct txgbe_mbx_info mbx;
 	u16 device_id;
 	u16 vendor_id;
 	u16 subsystem_device_id;
@@ -125,5 +385,6 @@ struct txgbe_hw {
 };
 
 #include "txgbe_regs.h"
+#include "txgbe_dummy.h"
 
 #endif /* _TXGBE_TYPE_H_ */
-- 
2.18.4




^ permalink raw reply	[flat|nested] 49+ messages in thread

* [dpdk-dev] [PATCH v1 05/42] net/txgbe: add mac type and HW ops dummy
  2020-09-01 11:50 [dpdk-dev] [PATCH v1 01/42] net/txgbe: add build and doc infrastructure Jiawen Wu
                   ` (2 preceding siblings ...)
  2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 04/42] net/txgbe: add error types and dummy function Jiawen Wu
@ 2020-09-01 11:50 ` Jiawen Wu
  2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 06/42] net/txgbe: add EEPROM functions Jiawen Wu
                   ` (37 subsequent siblings)
  41 siblings, 0 replies; 49+ messages in thread
From: Jiawen Wu @ 2020-09-01 11:50 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Add MAC type identification and hardware ops to the base driver shared code, starting from the dummy functions.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/base/meson.build  |  1 +
 drivers/net/txgbe/base/txgbe_hw.c   | 90 ++++++++++++++++++++++++++++-
 drivers/net/txgbe/base/txgbe_hw.h   |  3 +-
 drivers/net/txgbe/base/txgbe_type.h |  3 +
 drivers/net/txgbe/base/txgbe_vf.c   | 14 +++++
 drivers/net/txgbe/base/txgbe_vf.h   | 12 ++++
 6 files changed, 121 insertions(+), 2 deletions(-)
 create mode 100644 drivers/net/txgbe/base/txgbe_vf.c
 create mode 100644 drivers/net/txgbe/base/txgbe_vf.h

diff --git a/drivers/net/txgbe/base/meson.build b/drivers/net/txgbe/base/meson.build
index 72f1e73c9..1cce9b679 100644
--- a/drivers/net/txgbe/base/meson.build
+++ b/drivers/net/txgbe/base/meson.build
@@ -4,6 +4,7 @@
 sources = [
 	'txgbe_eeprom.c',
 	'txgbe_hw.c',
+	'txgbe_vf.c',
 ]
 
 error_cflags = ['-Wno-unused-value',
diff --git a/drivers/net/txgbe/base/txgbe_hw.c b/drivers/net/txgbe/base/txgbe_hw.c
index 17ccd0b65..5ff3983d9 100644
--- a/drivers/net/txgbe/base/txgbe_hw.c
+++ b/drivers/net/txgbe/base/txgbe_hw.c
@@ -3,6 +3,7 @@
  */
 
 #include "txgbe_type.h"
+#include "txgbe_vf.h"
 #include "txgbe_eeprom.h"
 #include "txgbe_hw.h"
 
@@ -19,12 +20,99 @@ s32 txgbe_set_rar(struct txgbe_hw *hw, u32 index, u8 *addr, u32 vmdq,
 }
 
 s32 txgbe_init_shared_code(struct txgbe_hw *hw)
+{
+	s32 status;
+
+	DEBUGFUNC("txgbe_init_shared_code");
+
+	/*
+	 * Set the mac type
+	 */
+	txgbe_set_mac_type(hw);
+
+	txgbe_init_ops_dummy(hw);
+	switch (hw->mac.type) {
+	case txgbe_mac_raptor:
+		status = txgbe_init_ops_pf(hw);
+		break;
+	case txgbe_mac_raptor_vf:
+		status = txgbe_init_ops_vf(hw);
+		break;
+	default:
+		status = TXGBE_ERR_DEVICE_NOT_SUPPORTED;
+		break;
+	}
+	hw->mac.max_link_up_time = TXGBE_LINK_UP_TIME;
+
+	hw->bus.set_lan_id(hw);
+
+	return status;
+}
+
+/**
+ *  txgbe_set_mac_type - Sets MAC type
+ *  @hw: pointer to the HW structure
+ *
+ *  This function sets the mac type of the adapter based on the
+ *  vendor ID and device ID stored in the hw structure.
+ **/
+s32 txgbe_set_mac_type(struct txgbe_hw *hw)
+{
+	s32 err = 0;
+
+	DEBUGFUNC("txgbe_set_mac_type");
+
+	if (hw->vendor_id != PCI_VENDOR_ID_WANGXUN) {
+		DEBUGOUT("Unsupported vendor id: %x", hw->vendor_id);
+		return TXGBE_ERR_DEVICE_NOT_SUPPORTED;
+	}
+
+	switch (hw->device_id) {
+	case TXGBE_DEV_ID_RAPTOR_KR_KX_KX4:
+		hw->mac.type = txgbe_mac_raptor;
+		break;
+	case TXGBE_DEV_ID_RAPTOR_XAUI:
+	case TXGBE_DEV_ID_RAPTOR_SGMII:
+		hw->mac.type = txgbe_mac_raptor;
+		break;
+	case TXGBE_DEV_ID_RAPTOR_SFP:
+	case TXGBE_DEV_ID_WX1820_SFP:
+		hw->mac.type = txgbe_mac_raptor;
+		break;
+	case TXGBE_DEV_ID_RAPTOR_QSFP:
+		hw->mac.type = txgbe_mac_raptor;
+		break;
+	case TXGBE_DEV_ID_RAPTOR_VF:
+	case TXGBE_DEV_ID_RAPTOR_VF_HV:
+		hw->mac.type = txgbe_mac_raptor_vf;
+		break;
+	default:
+		err = TXGBE_ERR_DEVICE_NOT_SUPPORTED;
+		DEBUGOUT("Unsupported device id: %x", hw->device_id);
+		break;
+	}
+
+	DEBUGOUT("txgbe_set_mac_type found mac: %d, returns: %d\n",
+		  hw->mac.type, err);
+	return err;
+}
+
+s32 txgbe_init_hw(struct txgbe_hw *hw)
 {
 	RTE_SET_USED(hw);
 	return 0;
 }
 
-s32 txgbe_init_hw(struct txgbe_hw *hw)
+
+/**
+ *  txgbe_init_ops_pf - Inits func ptrs and MAC type
+ *  @hw: pointer to hardware structure
+ *
+ *  Initialize the function pointers and assign the MAC type.
+ *  Does not touch the hardware.
+ **/
+s32 txgbe_init_ops_pf(struct txgbe_hw *hw)
 {
 	RTE_SET_USED(hw);
 	return 0;
diff --git a/drivers/net/txgbe/base/txgbe_hw.h b/drivers/net/txgbe/base/txgbe_hw.h
index cd738245f..adcc5fc48 100644
--- a/drivers/net/txgbe/base/txgbe_hw.h
+++ b/drivers/net/txgbe/base/txgbe_hw.h
@@ -12,5 +12,6 @@ s32 txgbe_init_hw(struct txgbe_hw *hw);
 s32 txgbe_set_rar(struct txgbe_hw *hw, u32 index, u8 *addr, u32 vmdq,
 			  u32 enable_addr);
 s32 txgbe_init_shared_code(struct txgbe_hw *hw);
-
+s32 txgbe_set_mac_type(struct txgbe_hw *hw);
+s32 txgbe_init_ops_pf(struct txgbe_hw *hw);
 #endif /* _TXGBE_HW_H_ */
diff --git a/drivers/net/txgbe/base/txgbe_type.h b/drivers/net/txgbe/base/txgbe_type.h
index 1264a83d9..5524e5de0 100644
--- a/drivers/net/txgbe/base/txgbe_type.h
+++ b/drivers/net/txgbe/base/txgbe_type.h
@@ -5,6 +5,8 @@
 #ifndef _TXGBE_TYPE_H_
 #define _TXGBE_TYPE_H_
 
+#define TXGBE_LINK_UP_TIME	90 /* 9.0 Seconds */
+
 #define TXGBE_ALIGN				128 /* as intel did */
 
 #include "txgbe_status.h"
@@ -316,6 +318,7 @@ struct txgbe_mac_info {
 	u8 san_addr[ETH_ADDR_LEN];
 
 	u32 num_rar_entries;
+	u32 max_link_up_time;
 };
 
 struct txgbe_phy_info {
diff --git a/drivers/net/txgbe/base/txgbe_vf.c b/drivers/net/txgbe/base/txgbe_vf.c
new file mode 100644
index 000000000..d96b57ec6
--- /dev/null
+++ b/drivers/net/txgbe/base/txgbe_vf.c
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2015-2020
+ */
+
+#include "txgbe_type.h"
+#include "txgbe_vf.h"
+
+s32 txgbe_init_ops_vf(struct txgbe_hw *hw)
+{
+	RTE_SET_USED(hw);
+
+	return 0;
+}
diff --git a/drivers/net/txgbe/base/txgbe_vf.h b/drivers/net/txgbe/base/txgbe_vf.h
new file mode 100644
index 000000000..9572845c8
--- /dev/null
+++ b/drivers/net/txgbe/base/txgbe_vf.h
@@ -0,0 +1,12 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2015-2020
+ */
+
+#ifndef _TXGBE_VF_H_
+#define _TXGBE_VF_H_
+
+#include "txgbe_type.h"
+
+s32 txgbe_init_ops_vf(struct txgbe_hw *hw);
+
+#endif /* _TXGBE_VF_H_ */
-- 
2.18.4




^ permalink raw reply	[flat|nested] 49+ messages in thread

* [dpdk-dev] [PATCH v1 06/42] net/txgbe: add EEPROM functions
  2020-09-01 11:50 [dpdk-dev] [PATCH v1 01/42] net/txgbe: add build and doc infrastructure Jiawen Wu
                   ` (3 preceding siblings ...)
  2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 05/42] net/txgbe: add mac type and HW ops dummy Jiawen Wu
@ 2020-09-01 11:50 ` Jiawen Wu
  2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 07/42] net/txgbe: add HW init function Jiawen Wu
                   ` (36 subsequent siblings)
  41 siblings, 0 replies; 49+ messages in thread
From: Jiawen Wu @ 2020-09-01 11:50 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Add EEPROM functions.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/base/meson.build    |   1 +
 drivers/net/txgbe/base/txgbe.h        |   1 +
 drivers/net/txgbe/base/txgbe_eeprom.c | 553 +++++++++++++++++++++++++-
 drivers/net/txgbe/base/txgbe_eeprom.h |  36 ++
 drivers/net/txgbe/base/txgbe_hw.c     |  17 +-
 drivers/net/txgbe/base/txgbe_mng.c    | 399 +++++++++++++++++++
 drivers/net/txgbe/base/txgbe_mng.h    | 175 ++++++++
 drivers/net/txgbe/base/txgbe_type.h   |   5 +
 drivers/net/txgbe/txgbe_ethdev.c      |   4 +-
 9 files changed, 1183 insertions(+), 8 deletions(-)
 create mode 100644 drivers/net/txgbe/base/txgbe_mng.c
 create mode 100644 drivers/net/txgbe/base/txgbe_mng.h

diff --git a/drivers/net/txgbe/base/meson.build b/drivers/net/txgbe/base/meson.build
index 1cce9b679..94cdfcfc1 100644
--- a/drivers/net/txgbe/base/meson.build
+++ b/drivers/net/txgbe/base/meson.build
@@ -4,6 +4,7 @@
 sources = [
 	'txgbe_eeprom.c',
 	'txgbe_hw.c',
+	'txgbe_mng.c',
 	'txgbe_vf.c',
 ]
 
diff --git a/drivers/net/txgbe/base/txgbe.h b/drivers/net/txgbe/base/txgbe.h
index 32867f5aa..329764be0 100644
--- a/drivers/net/txgbe/base/txgbe.h
+++ b/drivers/net/txgbe/base/txgbe.h
@@ -6,6 +6,7 @@
 #define _TXGBE_H_
 
 #include "txgbe_type.h"
+#include "txgbe_mng.h"
 #include "txgbe_eeprom.h"
 #include "txgbe_hw.h"
 
diff --git a/drivers/net/txgbe/base/txgbe_eeprom.c b/drivers/net/txgbe/base/txgbe_eeprom.c
index 287233dda..d4eeadd8e 100644
--- a/drivers/net/txgbe/base/txgbe_eeprom.c
+++ b/drivers/net/txgbe/base/txgbe_eeprom.c
@@ -3,7 +3,7 @@
  */
 
 #include "txgbe_hw.h"
-
+#include "txgbe_mng.h"
 #include "txgbe_eeprom.h"
 
 /**
@@ -15,11 +15,485 @@
  **/
 s32 txgbe_init_eeprom_params(struct txgbe_hw *hw)
 {
-	RTE_SET_USED(hw);
+	struct txgbe_rom_info *eeprom = &hw->rom;
+	u32 eec;
+	u16 eeprom_size;
+	int err = 0;
+
+	DEBUGFUNC("txgbe_init_eeprom_params");
+
+	if (eeprom->type != txgbe_eeprom_unknown)
+		return 0;
+
+	eeprom->type = txgbe_eeprom_none;
+	/* Set default semaphore delay to 10ms which is a well
+	 * tested value */
+	eeprom->semaphore_delay = 10; /*ms*/
+	/* Clear EEPROM page size, it will be initialized as needed */
+	eeprom->word_page_size = 0;
+
+	/*
+	 * Check for EEPROM present first.
+	 * If not present leave as none
+	 */
+	eec = rd32(hw, TXGBE_SPISTAT);
+	if (!(eec & TXGBE_SPISTAT_BPFLASH)) {
+		eeprom->type = txgbe_eeprom_flash;
+
+		/*
+		 * SPI EEPROM is assumed here.  This code would need to
+		 * change if a future EEPROM is not SPI.
+		 */
+		eeprom_size = 4096;
+		eeprom->word_size = eeprom_size >> 1;
+	}
+
+	eeprom->address_bits = 16;
+
+	err = eeprom->read32(hw, TXGBE_SW_REGION_PTR << 1, &eeprom->sw_addr);
+	if (err) {
+		DEBUGOUT("EEPROM read failed.\n");
+		return err;
+	}
+
+	DEBUGOUT("eeprom params: type = %d, size = %d, address bits: %d, "
+		  "sw_addr: 0x%x\n", eeprom->type, eeprom->word_size,
+		  eeprom->address_bits, eeprom->sw_addr);
 
 	return 0;
 }
 
+/**
+ *  txgbe_get_eeprom_semaphore - Get hardware semaphore
+ *  @hw: pointer to hardware structure
+ *
+ *  Sets the hardware semaphores so EEPROM access can occur for bit-bang method
+ **/
+s32 txgbe_get_eeprom_semaphore(struct txgbe_hw *hw)
+{
+	s32 status = TXGBE_ERR_EEPROM;
+	u32 timeout = 2000;
+	u32 i;
+	u32 swsm;
+
+	DEBUGFUNC("txgbe_get_eeprom_semaphore");
+
+	/* Get SMBI software semaphore between device drivers first */
+	for (i = 0; i < timeout; i++) {
+		/*
+		 * If the SMBI bit is 0 when we read it, then the bit will be
+		 * set and we have the semaphore
+		 */
+		swsm = rd32(hw, TXGBE_SWSEM);
+		if (!(swsm & TXGBE_SWSEM_PF)) {
+			status = 0;
+			break;
+		}
+		usec_delay(50);
+	}
+
+	if (i == timeout) {
+		DEBUGOUT("Driver can't access the eeprom - SMBI Semaphore "
+			 "not granted.\n");
+		/*
+		 * this release is particularly important because our attempts
+		 * above to get the semaphore may have succeeded, and if there
+		 * was a timeout, we should unconditionally clear the semaphore
+		 * bits to free the driver to make progress
+		 */
+		txgbe_release_eeprom_semaphore(hw);
+
+		usec_delay(50);
+		/*
+		 * one last try
+		 * If the SMBI bit is 0 when we read it, then the bit will be
+		 * set and we have the semaphore
+		 */
+		swsm = rd32(hw, TXGBE_SWSEM);
+		if (!(swsm & TXGBE_SWSEM_PF))
+			status = 0;
+	}
+
+	/* Now get the semaphore between SW/FW through the SWESMBI bit */
+	if (status == 0) {
+		for (i = 0; i < timeout; i++) {
+			/* Set the SW EEPROM semaphore bit to request access */
+			wr32m(hw, TXGBE_MNGSWSYNC,
+				TXGBE_MNGSWSYNC_REQ, TXGBE_MNGSWSYNC_REQ);
+
+			/*
+			 * If we set the bit successfully then we got the
+			 * semaphore.
+			 */
+			swsm = rd32(hw, TXGBE_MNGSWSYNC);
+			if (swsm & TXGBE_MNGSWSYNC_REQ)
+				break;
+
+			usec_delay(50);
+		}
+
+		/*
+		 * Release semaphores and return error if SW EEPROM semaphore
+		 * was not granted because we don't have access to the EEPROM
+		 */
+		if (i >= timeout) {
+			DEBUGOUT("SWESMBI Software EEPROM semaphore not granted.\n");
+			txgbe_release_eeprom_semaphore(hw);
+			status = TXGBE_ERR_EEPROM;
+		}
+	} else {
+		DEBUGOUT("Software semaphore SMBI between device drivers "
+			 "not granted.\n");
+	}
+
+	return status;
+}
+
+/**
+ *  txgbe_release_eeprom_semaphore - Release hardware semaphore
+ *  @hw: pointer to hardware structure
+ *
+ *  This function clears hardware semaphore bits.
+ **/
+void txgbe_release_eeprom_semaphore(struct txgbe_hw *hw)
+{
+	DEBUGFUNC("txgbe_release_eeprom_semaphore");
+
+	wr32m(hw, TXGBE_MNGSWSYNC, TXGBE_MNGSWSYNC_REQ, 0);
+	wr32m(hw, TXGBE_SWSEM, TXGBE_SWSEM_PF, 0);
+	txgbe_flush(hw);
+}
+
+/**
+ *  txgbe_ee_read16 - Read EEPROM word using a host interface cmd
+ *  @hw: pointer to hardware structure
+ *  @offset: offset of word in the EEPROM to read
+ *  @data: word read from the EEPROM
+ *
+ *  Reads a 16 bit word from the EEPROM using the hostif.
+ **/
+s32 txgbe_ee_read16(struct txgbe_hw *hw, u32 offset,
+			      u16 *data)
+{
+	const u32 mask = TXGBE_MNGSEM_SWMBX | TXGBE_MNGSEM_SWFLASH;
+	u32 addr = (offset << 1);
+	int err;
+
+	err = hw->mac.acquire_swfw_sync(hw, mask);
+	if (err)
+		return err;
+
+	err = txgbe_hic_sr_read(hw, addr, (u8 *)data, 2);
+
+	hw->mac.release_swfw_sync(hw, mask);
+
+	return err;
+}
+
+/**
+ *  txgbe_ee_readw_buffer - Read EEPROM word(s) using hostif
+ *  @hw: pointer to hardware structure
+ *  @offset: offset of word in the EEPROM to read
+ *  @words: number of words
+ *  @data: word(s) read from the EEPROM
+ *
+ *  Reads 16 bit word(s) from the EEPROM using the hostif.
+ **/
+s32 txgbe_ee_readw_buffer(struct txgbe_hw *hw,
+				     u32 offset, u32 words, void *data)
+{
+	const u32 mask = TXGBE_MNGSEM_SWMBX | TXGBE_MNGSEM_SWFLASH;
+	u32 addr = (offset << 1);
+	u32 len = (words << 1);
+	u8 *buf = (u8 *)data;
+	int err;
+
+	err = hw->mac.acquire_swfw_sync(hw, mask);
+	if (err)
+		return err;
+
+	while (len) {
+		u32 seg = (len <= TXGBE_PMMBX_DATA_SIZE
+				? len : TXGBE_PMMBX_DATA_SIZE);
+
+		err = txgbe_hic_sr_read(hw, addr, buf, seg);
+		if (err)
+			break;
+
+		len -= seg;
+		addr += seg;
+		buf += seg;
+	}
+
+	hw->mac.release_swfw_sync(hw, mask);
+	return err;
+}
+
+s32 txgbe_ee_readw_sw(struct txgbe_hw *hw, u32 offset,
+			      u16 *data)
+{
+	const u32 mask = TXGBE_MNGSEM_SWMBX | TXGBE_MNGSEM_SWFLASH;
+	u32 addr = hw->rom.sw_addr + (offset << 1);
+	int err;
+
+	err = hw->mac.acquire_swfw_sync(hw, mask);
+	if (err)
+		return err;
+
+	err = txgbe_hic_sr_read(hw, addr, (u8 *)data, 2);
+
+	hw->mac.release_swfw_sync(hw, mask);
+
+	return err;
+}
+
+/**
+ *  txgbe_ee_read32 - Read a 32 bit EEPROM word using a host interface cmd
+ *  @hw: pointer to hardware structure
+ *  @addr: address of the word in the EEPROM to read
+ *  @data: word read from the EEPROM
+ *
+ *  Reads a 32 bit word from the EEPROM using the hostif.
+ **/
+s32 txgbe_ee_read32(struct txgbe_hw *hw, u32 addr, u32 *data)
+{
+	const u32 mask = TXGBE_MNGSEM_SWMBX | TXGBE_MNGSEM_SWFLASH;
+	int err;
+
+	err = hw->mac.acquire_swfw_sync(hw, mask);
+	if (err)
+		return err;
+
+	err = txgbe_hic_sr_read(hw, addr, (u8 *)data, 4);
+
+	hw->mac.release_swfw_sync(hw, mask);
+
+	return err;
+}
+
+/**
+ *  txgbe_ee_read_buffer - Read EEPROM byte(s) using hostif
+ *  @hw: pointer to hardware structure
+ *  @addr: offset of bytes in the EEPROM to read
+ *  @len: number of bytes
+ *  @data: byte(s) read from the EEPROM
+ *
+ *  Reads 8 bit byte(s) from the EEPROM using the hostif.
+ **/
+s32 txgbe_ee_read_buffer(struct txgbe_hw *hw,
+				     u32 addr, u32 len, void *data)
+{
+	const u32 mask = TXGBE_MNGSEM_SWMBX | TXGBE_MNGSEM_SWFLASH;
+	u8 *buf = (u8 *)data;
+	int err;
+
+	err = hw->mac.acquire_swfw_sync(hw, mask);
+	if (err)
+		return err;
+
+	while (len) {
+		u32 seg = (len <= TXGBE_PMMBX_DATA_SIZE
+				? len : TXGBE_PMMBX_DATA_SIZE);
+
+		err = txgbe_hic_sr_read(hw, addr, buf, seg);
+		if (err)
+			break;
+
+		len -= seg;
+		addr += seg;
+		buf += seg;
+	}
+
+	hw->mac.release_swfw_sync(hw, mask);
+	return err;
+}
+
+/**
+ *  txgbe_ee_write16 - Write EEPROM word using hostif
+ *  @hw: pointer to hardware structure
+ *  @offset: offset of word in the EEPROM to write
+ *  @data: word to write to the EEPROM
+ *
+ *  Write a 16 bit word to the EEPROM using the hostif.
+ **/
+s32 txgbe_ee_write16(struct txgbe_hw *hw, u32 offset,
+			       u16 data)
+{
+	const u32 mask = TXGBE_MNGSEM_SWMBX | TXGBE_MNGSEM_SWFLASH;
+	u32 addr = (offset << 1);
+	int err;
+
+	DEBUGFUNC("txgbe_ee_write16");
+
+	err = hw->mac.acquire_swfw_sync(hw, mask);
+	if (err)
+		return err;
+
+	err = txgbe_hic_sr_write(hw, addr, (u8 *)&data, 2);
+
+	hw->mac.release_swfw_sync(hw, mask);
+
+	return err;
+}
+
+/**
+ *  txgbe_ee_writew_buffer - Write EEPROM word(s) using hostif
+ *  @hw: pointer to hardware structure
+ *  @offset: offset of word in the EEPROM to write
+ *  @words: number of words
+ *  @data: word(s) to write to the EEPROM
+ *
+ *  Writes 16 bit word(s) to the EEPROM using the hostif.
+ **/
+s32 txgbe_ee_writew_buffer(struct txgbe_hw *hw,
+				      u32 offset, u32 words, void *data)
+{
+	const u32 mask = TXGBE_MNGSEM_SWMBX | TXGBE_MNGSEM_SWFLASH;
+	u32 addr = (offset << 1);
+	u32 len = (words << 1);
+	u8 *buf = (u8 *)data;
+	int err;
+
+	err = hw->mac.acquire_swfw_sync(hw, mask);
+	if (err)
+		return err;
+
+	while (len) {
+		u32 seg = (len <= TXGBE_PMMBX_DATA_SIZE
+				? len : TXGBE_PMMBX_DATA_SIZE);
+
+		err = txgbe_hic_sr_write(hw, addr, buf, seg);
+		if (err)
+			break;
+
+		len -= seg;
+		buf += seg;
+	}
+
+	hw->mac.release_swfw_sync(hw, mask);
+	return err;
+}
+
+s32 txgbe_ee_writew_sw(struct txgbe_hw *hw, u32 offset,
+			       u16 data)
+{
+	const u32 mask = TXGBE_MNGSEM_SWMBX | TXGBE_MNGSEM_SWFLASH;
+	u32 addr = hw->rom.sw_addr + (offset << 1);
+	int err;
+
+	DEBUGFUNC("txgbe_ee_writew_sw");
+
+	err = hw->mac.acquire_swfw_sync(hw, mask);
+	if (err)
+		return err;
+
+	err = txgbe_hic_sr_write(hw, addr, (u8 *)&data, 2);
+
+	hw->mac.release_swfw_sync(hw, mask);
+
+	return err;
+}
+
+/**
+ *  txgbe_ee_write32 - Write EEPROM dword using a host interface cmd
+ *  @hw: pointer to hardware structure
+ *  @addr: offset of dword in the EEPROM to write
+ *  @data: dword to write to the EEPROM
+ *
+ *  Writes a 32 bit word to the EEPROM using the hostif.
+ **/
+s32 txgbe_ee_write32(struct txgbe_hw *hw, u32 addr, u32 data)
+{
+	const u32 mask = TXGBE_MNGSEM_SWMBX | TXGBE_MNGSEM_SWFLASH;
+	int err;
+
+	err = hw->mac.acquire_swfw_sync(hw, mask);
+	if (err)
+		return err;
+
+	err = txgbe_hic_sr_write(hw, addr, (u8 *)&data, 4);
+
+	hw->mac.release_swfw_sync(hw, mask);
+
+	return err;
+}
+
+/**
+ *  txgbe_ee_write_buffer - Write EEPROM byte(s) using hostif
+ *  @hw: pointer to hardware structure
+ *  @addr: offset of bytes in the EEPROM to write
+ *  @len: number of bytes
+ *  @data: byte(s) to write to the EEPROM
+ *
+ *  Writes 8 bit byte(s) to the EEPROM using the hostif.
+ **/
+s32 txgbe_ee_write_buffer(struct txgbe_hw *hw,
+				      u32 addr, u32 len, void *data)
+{
+	const u32 mask = TXGBE_MNGSEM_SWMBX | TXGBE_MNGSEM_SWFLASH;
+	u8 *buf = (u8 *)data;
+	int err;
+
+	err = hw->mac.acquire_swfw_sync(hw, mask);
+	if (err)
+		return err;
+
+	while (len) {
+		u32 seg = (len <= TXGBE_PMMBX_DATA_SIZE
+				? len : TXGBE_PMMBX_DATA_SIZE);
+
+		err = txgbe_hic_sr_write(hw, addr, buf, seg);
+		if (err)
+			break;
+
+		len -= seg;
+		buf += seg;
+	}
+
+	hw->mac.release_swfw_sync(hw, mask);
+	return err;
+}
+
+/**
+ *  txgbe_calc_eeprom_checksum - Calculates and returns the checksum
+ *  @hw: pointer to hardware structure
+ *
+ *  Returns a negative error code on error, or the 16-bit checksum
+ **/
+#define BUFF_SIZE  64
+s32 txgbe_calc_eeprom_checksum(struct txgbe_hw *hw)
+{
+	u16 checksum = 0, read_checksum = 0;
+	int i, j, seg;
+	int err;
+	u16 buffer[BUFF_SIZE];
+
+	DEBUGFUNC("txgbe_calc_eeprom_checksum");
+
+	err = hw->rom.readw_sw(hw, TXGBE_EEPROM_CHECKSUM, &read_checksum);
+	if (err) {
+		DEBUGOUT("EEPROM read failed\n");
+		return err;
+	}
+
+	for (i = 0; i < TXGBE_EE_CSUM_MAX; i += seg) {
+		seg = (i + BUFF_SIZE < TXGBE_EE_CSUM_MAX
+		       ? BUFF_SIZE : TXGBE_EE_CSUM_MAX - i);
+		err = hw->rom.readw_buffer(hw, i, seg, buffer);
+		if (err)
+			return err;
+		for (j = 0; j < seg; j++)
+			checksum += buffer[j];
+	}
+
+	checksum = (u16)TXGBE_EEPROM_SUM - checksum + read_checksum;
+
+	return (s32)checksum;
+}
+
 /**
  *  txgbe_validate_eeprom_checksum - Validate EEPROM checksum
  *  @hw: pointer to hardware structure
@@ -31,9 +505,78 @@ s32 txgbe_init_eeprom_params(struct txgbe_hw *hw)
 s32 txgbe_validate_eeprom_checksum(struct txgbe_hw *hw,
 					   u16 *checksum_val)
 {
-	RTE_SET_USED(hw);
-	RTE_SET_USED(checksum_val);
+	u16 checksum;
+	u16 read_checksum = 0;
+	int err;
 
-	return 0;
+	DEBUGFUNC("txgbe_validate_eeprom_checksum");
+
+	/* Read the first word from the EEPROM. If this times out or fails, do
+	 * not continue or we could be in for a very long wait while every
+	 * EEPROM read fails
+	 */
+	err = hw->rom.read16(hw, 0, &checksum);
+	if (err) {
+		DEBUGOUT("EEPROM read failed\n");
+		return err;
+	}
+
+	err = hw->rom.calc_checksum(hw);
+	if (err < 0)
+		return err;
+
+	checksum = (u16)(err & 0xffff);
+
+	err = hw->rom.readw_sw(hw, TXGBE_EEPROM_CHECKSUM, &read_checksum);
+	if (err) {
+		DEBUGOUT("EEPROM read failed\n");
+		return err;
+	}
+
+	/* Verify read checksum from EEPROM is the same as
+	 * calculated checksum
+	 */
+	if (read_checksum != checksum) {
+		err = TXGBE_ERR_EEPROM_CHECKSUM;
+		DEBUGOUT("EEPROM checksum error\n");
+	}
+
+	/* If the user cares, return the calculated checksum */
+	if (checksum_val)
+		*checksum_val = checksum;
+
+	return err;
+}
+
+/**
+ *  txgbe_update_eeprom_checksum - Updates the EEPROM checksum
+ *  @hw: pointer to hardware structure
+ **/
+s32 txgbe_update_eeprom_checksum(struct txgbe_hw *hw)
+{
+	s32 status;
+	u16 checksum;
+
+	DEBUGFUNC("txgbe_update_eeprom_checksum");
+
+	/* Read the first word from the EEPROM. If this times out or fails, do
+	 * not continue or we could be in for a very long wait while every
+	 * EEPROM read fails
+	 */
+	status = hw->rom.read16(hw, 0, &checksum);
+	if (status) {
+		DEBUGOUT("EEPROM read failed\n");
+		return status;
+	}
+
+	status = hw->rom.calc_checksum(hw);
+	if (status < 0)
+		return status;
+
+	checksum = (u16)(status & 0xffff);
+
+	status = hw->rom.writew_sw(hw, TXGBE_EEPROM_CHECKSUM, checksum);
+
+	return status;
 }
 
diff --git a/drivers/net/txgbe/base/txgbe_eeprom.h b/drivers/net/txgbe/base/txgbe_eeprom.h
index e845492f3..5858e185c 100644
--- a/drivers/net/txgbe/base/txgbe_eeprom.h
+++ b/drivers/net/txgbe/base/txgbe_eeprom.h
@@ -5,7 +5,43 @@
 #ifndef _TXGBE_EEPROM_H_
 #define _TXGBE_EEPROM_H_
 
+/* Checksum and EEPROM pointers */
+#define TXGBE_PBANUM_PTR_GUARD		0xFAFA
+#define TXGBE_EEPROM_SUM		0xBABA
+
+#define TXGBE_FW_PTR			0x0F
+#define TXGBE_PBANUM0_PTR		0x05
+#define TXGBE_PBANUM1_PTR		0x06
+#define TXGBE_SW_REGION_PTR             0x1C
+
+#define TXGBE_EE_CSUM_MAX		0x800
+#define TXGBE_EEPROM_CHECKSUM		0x2F
+
+#define TXGBE_SAN_MAC_ADDR_PTR		0x18
+#define TXGBE_DEVICE_CAPS		0x1C
+#define TXGBE_EEPROM_VERSION_L          0x1D
+#define TXGBE_EEPROM_VERSION_H          0x1E
+#define TXGBE_ISCSI_BOOT_CONFIG         0x07
+
 s32 txgbe_init_eeprom_params(struct txgbe_hw *hw);
+s32 txgbe_calc_eeprom_checksum(struct txgbe_hw *hw);
 s32 txgbe_validate_eeprom_checksum(struct txgbe_hw *hw, u16 *checksum_val);
+s32 txgbe_update_eeprom_checksum(struct txgbe_hw *hw);
+s32 txgbe_get_eeprom_semaphore(struct txgbe_hw *hw);
+void txgbe_release_eeprom_semaphore(struct txgbe_hw *hw);
+
+s32 txgbe_ee_read16(struct txgbe_hw *hw, u32 offset, u16 *data);
+s32 txgbe_ee_readw_sw(struct txgbe_hw *hw, u32 offset, u16 *data);
+s32 txgbe_ee_readw_buffer(struct txgbe_hw *hw, u32 offset, u32 words, void *data);
+s32 txgbe_ee_read32(struct txgbe_hw *hw, u32 addr, u32 *data);
+s32 txgbe_ee_read_buffer(struct txgbe_hw *hw, u32 addr, u32 len, void *data);
+
+s32 txgbe_ee_write16(struct txgbe_hw *hw, u32 offset, u16 data);
+s32 txgbe_ee_writew_sw(struct txgbe_hw *hw, u32 offset, u16 data);
+s32 txgbe_ee_writew_buffer(struct txgbe_hw *hw, u32 offset, u32 words, void *data);
+s32 txgbe_ee_write32(struct txgbe_hw *hw, u32 addr, u32 data);
+s32 txgbe_ee_write_buffer(struct txgbe_hw *hw, u32 addr, u32 len, void *data);
+
 
 #endif /* _TXGBE_EEPROM_H_ */
diff --git a/drivers/net/txgbe/base/txgbe_hw.c b/drivers/net/txgbe/base/txgbe_hw.c
index 5ff3983d9..358872d30 100644
--- a/drivers/net/txgbe/base/txgbe_hw.c
+++ b/drivers/net/txgbe/base/txgbe_hw.c
@@ -114,7 +114,22 @@ s32 txgbe_init_hw(struct txgbe_hw *hw)
  **/
 s32 txgbe_init_ops_pf(struct txgbe_hw *hw)
 {
-	RTE_SET_USED(hw);
+	struct txgbe_rom_info *rom = &hw->rom;
+
+	/* EEPROM */
+	rom->init_params = txgbe_init_eeprom_params;
+	rom->read16 = txgbe_ee_read16;
+	rom->readw_buffer = txgbe_ee_readw_buffer;
+	rom->readw_sw = txgbe_ee_readw_sw;
+	rom->read32 = txgbe_ee_read32;
+	rom->write16 = txgbe_ee_write16;
+	rom->writew_buffer = txgbe_ee_writew_buffer;
+	rom->writew_sw = txgbe_ee_writew_sw;
+	rom->write32 = txgbe_ee_write32;
+	rom->validate_checksum = txgbe_validate_eeprom_checksum;
+	rom->update_checksum = txgbe_update_eeprom_checksum;
+	rom->calc_checksum = txgbe_calc_eeprom_checksum;
+
 	return 0;
 }
 
diff --git a/drivers/net/txgbe/base/txgbe_mng.c b/drivers/net/txgbe/base/txgbe_mng.c
new file mode 100644
index 000000000..f7ca9c10f
--- /dev/null
+++ b/drivers/net/txgbe/base/txgbe_mng.c
@@ -0,0 +1,399 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2015-2020
+ */
+
+#include "txgbe_type.h"
+#include "txgbe_mng.h"
+
+/**
+ *  txgbe_calculate_checksum - Calculate checksum for buffer
+ *  @buffer: pointer to the buffer to checksum
+ *  @length: number of bytes over which to calculate the checksum
+ *
+ *  Calculates the 8 bit two's complement checksum of the buffer over
+ *  the specified length.  The calculated checksum is returned.
+ **/
+static u8
+txgbe_calculate_checksum(u8 *buffer, u32 length)
+{
+	u32 i;
+	u8 sum = 0;
+
+	for (i = 0; i < length; i++)
+		sum += buffer[i];
+
+	return (u8) (0 - sum);
+}
+
+/**
+ *  txgbe_hic_unlocked - Issue command to manageability block unlocked
+ *  @hw: pointer to the HW structure
+ *  @buffer: command to write and where the return status will be placed
+ *  @length: length of buffer, must be multiple of 4 bytes
+ *  @timeout: time in ms to wait for command completion
+ *
+ *  Communicates with the manageability block. On success return 0
+ *  else returns semaphore error when encountering an error acquiring
+ *  semaphore or TXGBE_ERR_HOST_INTERFACE_COMMAND when command fails.
+ *
+ *  This function assumes that the TXGBE_MNGSEM_SWMBX semaphore is held
+ *  by the caller.
+ **/
+static s32
+txgbe_hic_unlocked(struct txgbe_hw *hw, u32 *buffer, u32 length, u32 timeout)
+{
+	u32 value, loop;
+	u16 i, dword_len;
+
+	DEBUGFUNC("txgbe_hic_unlocked");
+
+	if (!length || length > TXGBE_PMMBX_BSIZE) {
+		DEBUGOUT("Buffer length failure buffersize=%d.\n", length);
+		return TXGBE_ERR_HOST_INTERFACE_COMMAND;
+	}
+
+	/* Calculate length in DWORDs. We must be DWORD aligned */
+	if (length % sizeof(u32)) {
+		DEBUGOUT("Buffer length failure, not aligned to dword");
+		return TXGBE_ERR_INVALID_ARGUMENT;
+	}
+
+	dword_len = length >> 2;
+
+	/* The device driver writes the relevant command block
+	 * into the ram area.
+	 */
+	for (i = 0; i < dword_len; i++) {
+		wr32a(hw, TXGBE_MNGMBX, i, cpu_to_le32(buffer[i]));
+		buffer[i] = rd32a(hw, TXGBE_MNGMBX, i);
+	}
+	txgbe_flush(hw);
+
+	/* Setting this bit tells the ARC that a new command is pending. */
+	wr32m(hw, TXGBE_MNGMBXCTL,
+	      TXGBE_MNGMBXCTL_SWRDY, TXGBE_MNGMBXCTL_SWRDY);
+
+	/* Check command completion */
+	loop = po32m(hw, TXGBE_MNGMBXCTL,
+		TXGBE_MNGMBXCTL_FWRDY, TXGBE_MNGMBXCTL_FWRDY,
+		&value, timeout, 1000);
+	if (!loop || !(value & TXGBE_MNGMBXCTL_FWACK)) {
+		DEBUGOUT("Command has failed with no status valid.\n");
+		return TXGBE_ERR_HOST_INTERFACE_COMMAND;
+	}
+
+	return 0;
+}
+
+/**
+ *  txgbe_host_interface_command - Issue command to manageability block
+ *  @hw: pointer to the HW structure
+ *  @buffer: contains the command to write and where the return status will
+ *   be placed
+ *  @length: length of buffer, must be multiple of 4 bytes
+ *  @timeout: time in ms to wait for command completion
+ *  @return_data: read and return data from the buffer (true) or not (false)
+ *   Needed because FW structures are big endian and decoding of
+ *   these fields can be 8 bit or 16 bit based on command. Decoding
+ *   is not easily understood without making a table of commands.
+ *   So we will leave this up to the caller to read back the data
+ *   in these cases.
+ *
+ *  Communicates with the manageability block. On success return 0
+ *  else returns semaphore error when encountering an error acquiring
+ *  semaphore or TXGBE_ERR_HOST_INTERFACE_COMMAND when command fails.
+ **/
+static s32
+txgbe_host_interface_command(struct txgbe_hw *hw, u32 *buffer,
+				 u32 length, u32 timeout, bool return_data)
+{
+	u32 hdr_size = sizeof(struct txgbe_hic_hdr);
+	struct txgbe_hic_hdr *resp = (struct txgbe_hic_hdr *)buffer;
+	u16 buf_len;
+	s32 err;
+	u32 bi;
+	u32 dword_len;
+
+	DEBUGFUNC("txgbe_host_interface_command");
+
+	if (length == 0 || length > TXGBE_PMMBX_BSIZE) {
+		DEBUGOUT("Buffer length failure buffersize=%d.\n", length);
+		return TXGBE_ERR_HOST_INTERFACE_COMMAND;
+	}
+
+	/* Take management host interface semaphore */
+	err = hw->mac.acquire_swfw_sync(hw, TXGBE_MNGSEM_SWMBX);
+	if (err)
+		return err;
+
+	err = txgbe_hic_unlocked(hw, buffer, length, timeout);
+	if (err)
+		goto rel_out;
+
+	if (!return_data)
+		goto rel_out;
+
+	/* Calculate length in DWORDs */
+	dword_len = hdr_size >> 2;
+
+	/* first pull in the header so we know the buffer length */
+	for (bi = 0; bi < dword_len; bi++)
+		buffer[bi] = rd32a(hw, TXGBE_MNGMBX, bi);
+
+	/*
+	 * If there is anything in the data position, pull it in.
+	 * The Read Flash command requires reading the buffer length
+	 * from two bytes instead of one byte.
+	 */
+	if (resp->cmd == 0x30) {
+		for (; bi < dword_len + 2; bi++)
+			buffer[bi] = rd32a(hw, TXGBE_MNGMBX, bi);
+		buf_len = (((u16)(resp->cmd_or_resp.ret_status) << 3)
+				  & 0xF00) | resp->buf_len;
+		hdr_size += (2 << 2);
+	} else {
+		buf_len = resp->buf_len;
+	}
+	if (!buf_len)
+		goto rel_out;
+
+	if (length < buf_len + hdr_size) {
+		DEBUGOUT("Buffer not large enough for reply message.\n");
+		err = TXGBE_ERR_HOST_INTERFACE_COMMAND;
+		goto rel_out;
+	}
+
+	/* Calculate length in DWORDs, add 3 for odd lengths */
+	dword_len = (buf_len + 3) >> 2;
+
+	/* Pull in the rest of the buffer (bi is where we left off) */
+	for (; bi <= dword_len; bi++)
+		buffer[bi] = rd32a(hw, TXGBE_MNGMBX, bi);
+
+rel_out:
+	hw->mac.release_swfw_sync(hw, TXGBE_MNGSEM_SWMBX);
+
+	return err;
+}
+
+/**
+ *  txgbe_hic_sr_read - Read EEPROM bytes using a host interface cmd
+ *  assuming that the semaphore is already obtained.
+ *  @hw: pointer to hardware structure
+ *  @addr: offset of the bytes in the EEPROM to read
+ *  @buf: buffer for the bytes read from the EEPROM
+ *  @len: number of bytes to read
+ *
+ *  Reads up to TXGBE_PMMBX_DATA_SIZE bytes from the EEPROM using the hostif.
+ **/
+s32 txgbe_hic_sr_read(struct txgbe_hw *hw, u32 addr, u8 *buf, int len)
+{
+	struct txgbe_hic_read_shadow_ram command;
+	u32 value;
+	int err, i = 0, j = 0;
+
+	if (len > TXGBE_PMMBX_DATA_SIZE)
+		return TXGBE_ERR_HOST_INTERFACE_COMMAND;
+
+	memset(&command, 0, sizeof(command));
+	command.hdr.req.cmd = FW_READ_SHADOW_RAM_CMD;
+	command.hdr.req.buf_lenh = 0;
+	command.hdr.req.buf_lenl = FW_READ_SHADOW_RAM_LEN;
+	command.hdr.req.checksum = FW_DEFAULT_CHECKSUM;
+	command.address = cpu_to_be32(addr);
+	command.length = cpu_to_be16(len);
+
+	err = txgbe_hic_unlocked(hw, (u32 *)&command,
+			sizeof(command), TXGBE_HI_COMMAND_TIMEOUT);
+	if (err)
+		return err;
+
+	while (i < (len >> 2)) {
+		value = rd32a(hw, TXGBE_MNGMBX, FW_NVM_DATA_OFFSET + i);
+		((u32 *)buf)[i] = value;
+		i++;
+	}
+
+	value = rd32a(hw, TXGBE_MNGMBX, FW_NVM_DATA_OFFSET + i);
+	for (i <<= 2; i < len; i++)
+		((u8 *)buf)[i] = ((u8 *)&value)[j++];
+
+	return 0;
+}
+
+/**
+ *  txgbe_hic_sr_write - Write EEPROM bytes using a host interface cmd
+ *  @hw: pointer to hardware structure
+ *  @addr: offset of the bytes in the EEPROM to write
+ *  @buf: bytes to write to the EEPROM
+ *  @len: number of bytes to write
+ *
+ *  Writes up to TXGBE_PMMBX_DATA_SIZE bytes to the EEPROM using the hostif.
+ **/
+s32 txgbe_hic_sr_write(struct txgbe_hw *hw, u32 addr, u8 *buf, int len)
+{
+	struct txgbe_hic_write_shadow_ram command;
+	u32 value;
+	int err = 0, i = 0, j = 0;
+
+	if (len > TXGBE_PMMBX_DATA_SIZE)
+		return TXGBE_ERR_HOST_INTERFACE_COMMAND;
+
+	memset(&command, 0, sizeof(command));
+	command.hdr.req.cmd = FW_WRITE_SHADOW_RAM_CMD;
+	command.hdr.req.buf_lenh = 0;
+	command.hdr.req.buf_lenl = FW_WRITE_SHADOW_RAM_LEN;
+	command.hdr.req.checksum = FW_DEFAULT_CHECKSUM;
+	command.address = cpu_to_be32(addr);
+	command.length = cpu_to_be16(len);
+
+	while (i < (len >> 2)) {
+		value = ((u32 *)buf)[i];
+		wr32a(hw, TXGBE_MNGMBX, FW_NVM_DATA_OFFSET + i, value);
+		i++;
+	}
+
+	for (i <<= 2; i < len; i++)
+		((u8 *)&value)[j++] = ((u8 *)buf)[i];
+	wr32a(hw, TXGBE_MNGMBX, FW_NVM_DATA_OFFSET + (i >> 2), value);
+
+	UNREFERENCED_PARAMETER(&command);
+
+	return err;
+}
+
+/**
+ *  txgbe_hic_set_drv_ver - Sends driver version to firmware
+ *  @hw: pointer to the HW structure
+ *  @maj: driver version major number
+ *  @min: driver version minor number
+ *  @build: driver version build number
+ *  @sub: driver version sub build number
+ *  @len: unused
+ *  @driver_ver: unused
+ *
+ *  Sends driver version number to firmware through the manageability
+ *  block.  On success return 0
+ *  else returns TXGBE_ERR_SWFW_SYNC when encountering an error acquiring
+ *  semaphore or TXGBE_ERR_HOST_INTERFACE_COMMAND when command fails.
+ **/
+s32 txgbe_hic_set_drv_ver(struct txgbe_hw *hw, u8 maj, u8 min,
+				 u8 build, u8 sub, u16 len,
+				 const char *driver_ver)
+{
+	struct txgbe_hic_drv_info fw_cmd;
+	int i;
+	s32 ret_val = 0;
+
+	DEBUGFUNC("txgbe_hic_set_drv_ver");
+	UNREFERENCED_PARAMETER(len, driver_ver);
+
+	fw_cmd.hdr.cmd = FW_CEM_CMD_DRIVER_INFO;
+	fw_cmd.hdr.buf_len = FW_CEM_CMD_DRIVER_INFO_LEN;
+	fw_cmd.hdr.cmd_or_resp.cmd_resv = FW_CEM_CMD_RESERVED;
+	fw_cmd.port_num = (u8)hw->bus.func;
+	fw_cmd.ver_maj = maj;
+	fw_cmd.ver_min = min;
+	fw_cmd.ver_build = build;
+	fw_cmd.ver_sub = sub;
+	fw_cmd.hdr.checksum = 0;
+	fw_cmd.pad = 0;
+	fw_cmd.pad2 = 0;
+	fw_cmd.hdr.checksum = txgbe_calculate_checksum((u8 *)&fw_cmd,
+				(FW_CEM_HDR_LEN + fw_cmd.hdr.buf_len));
+
+	for (i = 0; i <= FW_CEM_MAX_RETRIES; i++) {
+		ret_val = txgbe_host_interface_command(hw, (u32 *)&fw_cmd,
+						       sizeof(fw_cmd),
+						       TXGBE_HI_COMMAND_TIMEOUT,
+						       true);
+		if (ret_val != 0)
+			continue;
+
+		if (fw_cmd.hdr.cmd_or_resp.ret_status ==
+		    FW_CEM_RESP_STATUS_SUCCESS)
+			ret_val = 0;
+		else
+			ret_val = TXGBE_ERR_HOST_INTERFACE_COMMAND;
+
+		break;
+	}
+
+	return ret_val;
+}
+
+/**
+ *  txgbe_hic_reset - send reset cmd to fw
+ *  @hw: pointer to hardware structure
+ *
+ *  Sends reset cmd to firmware through the manageability
+ *  block.  On success return 0
+ *  else returns TXGBE_ERR_SWFW_SYNC when encountering an error acquiring
+ *  semaphore or TXGBE_ERR_HOST_INTERFACE_COMMAND when command fails.
+ **/
+s32
+txgbe_hic_reset(struct txgbe_hw *hw)
+{
+	struct txgbe_hic_reset reset_cmd;
+	int i;
+	s32 err = 0;
+
+	DEBUGFUNC("txgbe_hic_reset");
+
+	reset_cmd.hdr.cmd = FW_RESET_CMD;
+	reset_cmd.hdr.buf_len = FW_RESET_LEN;
+	reset_cmd.hdr.cmd_or_resp.cmd_resv = FW_CEM_CMD_RESERVED;
+	reset_cmd.lan_id = hw->bus.lan_id;
+	reset_cmd.reset_type = (u16)hw->reset_type;
+	reset_cmd.hdr.checksum = 0;
+	reset_cmd.hdr.checksum = txgbe_calculate_checksum((u8 *)&reset_cmd,
+				(FW_CEM_HDR_LEN + reset_cmd.hdr.buf_len));
+
+	for (i = 0; i <= FW_CEM_MAX_RETRIES; i++) {
+		err = txgbe_host_interface_command(hw, (u32 *)&reset_cmd,
+						       sizeof(reset_cmd),
+						       TXGBE_HI_COMMAND_TIMEOUT,
+						       true);
+		if (err != 0)
+			continue;
+
+		if (reset_cmd.hdr.cmd_or_resp.ret_status ==
+		    FW_CEM_RESP_STATUS_SUCCESS)
+			err = 0;
+		else
+			err = TXGBE_ERR_HOST_INTERFACE_COMMAND;
+
+		break;
+	}
+
+	return err;
+}
+
+/**
+ * txgbe_mng_present - returns true when management capability is present
+ * @hw: pointer to hardware structure
+ */
+bool
+txgbe_mng_present(struct txgbe_hw *hw)
+{
+	if (hw->mac.type == txgbe_mac_unknown)
+		return false;
+
+	return !!rd32m(hw, TXGBE_STAT, TXGBE_STAT_MNGINIT);
+}
+
+/**
+ * txgbe_mng_enabled - Is the manageability engine enabled?
+ * @hw: pointer to hardware structure
+ *
+ * Returns true if the manageability engine is enabled.
+ **/
+bool
+txgbe_mng_enabled(struct txgbe_hw *hw)
+{
+	UNREFERENCED_PARAMETER(hw);
+	/* firmware doesn't control laser */
+	return false;
+}
diff --git a/drivers/net/txgbe/base/txgbe_mng.h b/drivers/net/txgbe/base/txgbe_mng.h
new file mode 100644
index 000000000..61a9dd891
--- /dev/null
+++ b/drivers/net/txgbe/base/txgbe_mng.h
@@ -0,0 +1,175 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2015-2020
+ */
+
+#ifndef _TXGBE_MNG_H_
+#define _TXGBE_MNG_H_
+
+#include "txgbe_type.h"
+
+
+#define TXGBE_PMMBX_QSIZE       64 /* Num of dwords in range */
+#define TXGBE_PMMBX_BSIZE       (TXGBE_PMMBX_QSIZE * 4)
+#define TXGBE_PMMBX_DATA_SIZE   (TXGBE_PMMBX_BSIZE - FW_NVM_DATA_OFFSET * 4)
+#define TXGBE_HI_COMMAND_TIMEOUT        5000 /* Process HI command limit */
+#define TXGBE_HI_FLASH_ERASE_TIMEOUT    5000 /* Process Erase command limit */
+#define TXGBE_HI_FLASH_UPDATE_TIMEOUT   5000 /* Process Update command limit */
+#define TXGBE_HI_FLASH_VERIFY_TIMEOUT   60000 /* Process Apply command limit */
+#define TXGBE_HI_PHY_MGMT_REQ_TIMEOUT   2000 /* Wait up to 2 seconds */
+
+/* CEM Support */
+#define FW_CEM_HDR_LEN                  0x4
+#define FW_CEM_CMD_DRIVER_INFO          0xDD
+#define FW_CEM_CMD_DRIVER_INFO_LEN      0x5
+#define FW_CEM_CMD_RESERVED             0x0
+#define FW_CEM_UNUSED_VER               0x0
+#define FW_CEM_MAX_RETRIES              3
+#define FW_CEM_RESP_STATUS_SUCCESS      0x1
+#define FW_READ_SHADOW_RAM_CMD          0x31
+#define FW_READ_SHADOW_RAM_LEN          0x6
+#define FW_WRITE_SHADOW_RAM_CMD         0x33
+#define FW_WRITE_SHADOW_RAM_LEN         0xA /* 8 plus 1 WORD to write */
+#define FW_SHADOW_RAM_DUMP_CMD          0x36
+#define FW_SHADOW_RAM_DUMP_LEN          0
+#define FW_DEFAULT_CHECKSUM             0xFF /* checksum always 0xFF */
+#define FW_NVM_DATA_OFFSET              3
+#define FW_MAX_READ_BUFFER_SIZE         244
+#define FW_DISABLE_RXEN_CMD             0xDE
+#define FW_DISABLE_RXEN_LEN             0x1
+#define FW_PHY_MGMT_REQ_CMD             0x20
+#define FW_RESET_CMD                    0xDF
+#define FW_RESET_LEN                    0x2
+#define FW_SETUP_MAC_LINK_CMD           0xE0
+#define FW_SETUP_MAC_LINK_LEN           0x2
+#define FW_FLASH_UPGRADE_START_CMD      0xE3
+#define FW_FLASH_UPGRADE_START_LEN      0x1
+#define FW_FLASH_UPGRADE_WRITE_CMD      0xE4
+#define FW_FLASH_UPGRADE_VERIFY_CMD     0xE5
+#define FW_FLASH_UPGRADE_VERIFY_LEN     0x4
+#define FW_PHY_ACT_DATA_COUNT		4
+#define FW_PHY_TOKEN_DELAY		5	/* milliseconds */
+#define FW_PHY_TOKEN_WAIT		5	/* seconds */
+#define FW_PHY_TOKEN_RETRIES ((FW_PHY_TOKEN_WAIT * 1000) / FW_PHY_TOKEN_DELAY)
+
+/* Host Interface Command Structures */
+struct txgbe_hic_hdr {
+	u8 cmd;
+	u8 buf_len;
+	union {
+		u8 cmd_resv;
+		u8 ret_status;
+	} cmd_or_resp;
+	u8 checksum;
+};
+
+struct txgbe_hic_hdr2_req {
+	u8 cmd;
+	u8 buf_lenh;
+	u8 buf_lenl;
+	u8 checksum;
+};
+
+struct txgbe_hic_hdr2_rsp {
+	u8 cmd;
+	u8 buf_lenl;
+	u8 buf_lenh_status;     /* 7-5: high bits of buf_len, 4-0: status */
+	u8 checksum;
+};
+
+union txgbe_hic_hdr2 {
+	struct txgbe_hic_hdr2_req req;
+	struct txgbe_hic_hdr2_rsp rsp;
+};
+
+struct txgbe_hic_drv_info {
+	struct txgbe_hic_hdr hdr;
+	u8 port_num;
+	u8 ver_sub;
+	u8 ver_build;
+	u8 ver_min;
+	u8 ver_maj;
+	u8 pad; /* end spacing to ensure length is mult. of dword */
+	u16 pad2; /* end spacing to ensure length is mult. of dword2 */
+};
+
+/* These need to be dword aligned */
+struct txgbe_hic_read_shadow_ram {
+	union txgbe_hic_hdr2 hdr;
+	u32 address;
+	u16 length;
+	u16 pad2;
+	u16 data;
+	u16 pad3;
+};
+
+struct txgbe_hic_write_shadow_ram {
+	union txgbe_hic_hdr2 hdr;
+	u32 address;
+	u16 length;
+	u16 pad2;
+	u16 data;
+	u16 pad3;
+};
+
+struct txgbe_hic_disable_rxen {
+	struct txgbe_hic_hdr hdr;
+	u8  port_number;
+	u8  pad2;
+	u16 pad3;
+};
+
+struct txgbe_hic_reset {
+	struct txgbe_hic_hdr hdr;
+	u16 lan_id;
+	u16 reset_type;
+};
+
+struct txgbe_hic_phy_cfg {
+	struct txgbe_hic_hdr hdr;
+	u8 lan_id;
+	u8 phy_mode;
+	u16 phy_speed;
+};
+
+enum txgbe_module_id {
+	TXGBE_MODULE_EEPROM = 0,
+	TXGBE_MODULE_FIRMWARE,
+	TXGBE_MODULE_HARDWARE,
+	TXGBE_MODULE_PCIE
+};
+
+struct txgbe_hic_upg_start {
+	struct txgbe_hic_hdr hdr;
+	u8 module_id;
+	u8  pad2;
+	u16 pad3;
+};
+
+struct txgbe_hic_upg_write {
+	struct txgbe_hic_hdr hdr;
+	u8 data_len;
+	u8 eof_flag;
+	u16 check_sum;
+	u32 data[62];
+};
+
+enum txgbe_upg_flag {
+	TXGBE_RESET_NONE = 0,
+	TXGBE_RESET_FIRMWARE,
+	TXGBE_RELOAD_EEPROM,
+	TXGBE_RESET_LAN
+};
+
+struct txgbe_hic_upg_verify {
+	struct txgbe_hic_hdr hdr;
+	u32 action_flag;
+};
+
+s32 txgbe_hic_sr_read(struct txgbe_hw *hw, u32 addr, u8 *buf, int len);
+s32 txgbe_hic_sr_write(struct txgbe_hw *hw, u32 addr, u8 *buf, int len);
+
+s32 txgbe_hic_set_drv_ver(struct txgbe_hw *hw, u8 maj, u8 min, u8 build,
+			  u8 sub, u16 len, const char *driver_ver);
+s32 txgbe_hic_reset(struct txgbe_hw *hw);
+bool txgbe_mng_present(struct txgbe_hw *hw);
+bool txgbe_mng_enabled(struct txgbe_hw *hw);
+#endif /* _TXGBE_MNG_H_ */
diff --git a/drivers/net/txgbe/base/txgbe_type.h b/drivers/net/txgbe/base/txgbe_type.h
index 5524e5de0..8b7cfd8ff 100644
--- a/drivers/net/txgbe/base/txgbe_type.h
+++ b/drivers/net/txgbe/base/txgbe_type.h
@@ -385,6 +385,11 @@ struct txgbe_hw {
 
 	uint64_t isb_dma;
 	void IOMEM *isb_mem;
+	enum txgbe_reset_type {
+		TXGBE_LAN_RESET = 0,
+		TXGBE_SW_RESET,
+		TXGBE_GLOBAL_RESET
+	} reset_type;
 };
 
 #include "txgbe_regs.h"
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index 165132908..1cae321f1 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -138,14 +138,14 @@ eth_txgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
 		return -EIO;
 	}
 
-	err = txgbe_init_eeprom_params(hw);
+	err = hw->rom.init_params(hw);
 	if (err != 0) {
 		PMD_INIT_LOG(ERR, "The EEPROM init failed: %d", err);
 		return -EIO;
 	}
 
 	/* Make sure we have a good EEPROM before we read from it */
-	err = txgbe_validate_eeprom_checksum(hw, &csum);
+	err = hw->rom.validate_checksum(hw, &csum);
 	if (err != 0) {
 		PMD_INIT_LOG(ERR, "The EEPROM checksum is not valid: %d", err);
 		return -EIO;
-- 
2.18.4




^ permalink raw reply	[flat|nested] 49+ messages in thread

* [dpdk-dev] [PATCH v1 07/42] net/txgbe: add HW init function
  2020-09-01 11:50 [dpdk-dev] [PATCH v1 01/42] net/txgbe: add build and doc infrastructure Jiawen Wu
                   ` (4 preceding siblings ...)
  2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 06/42] net/txgbe: add EEPROM functions Jiawen Wu
@ 2020-09-01 11:50 ` Jiawen Wu
  2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 08/42] net/txgbe: add HW reset operation Jiawen Wu
                   ` (35 subsequent siblings)
  41 siblings, 0 replies; 49+ messages in thread
From: Jiawen Wu @ 2020-09-01 11:50 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Add hardware init functions in the MAC layer.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/base/txgbe_hw.c   | 103 ++++++++++++++++++++++++++--
 drivers/net/txgbe/base/txgbe_hw.h   |   4 ++
 drivers/net/txgbe/base/txgbe_type.h |   1 +
 drivers/net/txgbe/txgbe_ethdev.c    |   2 +-
 4 files changed, 102 insertions(+), 8 deletions(-)

diff --git a/drivers/net/txgbe/base/txgbe_hw.c b/drivers/net/txgbe/base/txgbe_hw.c
index 358872d30..c644de864 100644
--- a/drivers/net/txgbe/base/txgbe_hw.c
+++ b/drivers/net/txgbe/base/txgbe_hw.c
@@ -7,6 +7,68 @@
 #include "txgbe_eeprom.h"
 #include "txgbe_hw.h"
 
+/**
+ *  txgbe_start_hw - Prepare hardware for Tx/Rx
+ *  @hw: pointer to hardware structure
+ *
+ *  Starts the hardware by filling the bus info structure and media type, clears
+ *  all on chip counters, initializes receive address registers, multicast
+ *  table, VLAN filter table, calls routine to set up link and flow control
+ *  settings, and leaves transmit and receive units disabled and uninitialized
+ **/
+s32 txgbe_start_hw(struct txgbe_hw *hw)
+{
+	RTE_SET_USED(hw);
+
+	return 0;
+}
+
+/**
+ *  txgbe_start_hw_gen2 - Init sequence for common device family
+ *  @hw: pointer to hw structure
+ *
+ * Performs the init sequence common to the second generation
+ * of 10 GbE devices.
+ **/
+s32 txgbe_start_hw_gen2(struct txgbe_hw *hw)
+{
+	RTE_SET_USED(hw);
+
+	return 0;
+}
+
+/**
+ *  txgbe_init_hw - Generic hardware initialization
+ *  @hw: pointer to hardware structure
+ *
+ *  Initialize the hardware by resetting the hardware, filling the bus info
+ *  structure and media type, clears all on chip counters, initializes receive
+ *  address registers, multicast table, VLAN filter table, calls routine to set
+ *  up link and flow control settings, and leaves transmit and receive units
+ *  disabled and uninitialized
+ **/
+s32 txgbe_init_hw(struct txgbe_hw *hw)
+{
+	s32 status;
+
+	DEBUGFUNC("txgbe_init_hw");
+
+	/* Reset the hardware */
+	status = hw->mac.reset_hw(hw);
+	if (status == 0 || status == TXGBE_ERR_SFP_NOT_PRESENT) {
+		/* Start the HW */
+		status = hw->mac.start_hw(hw);
+	}
+
+	/* Initialize the LED link active for LED blink support */
+	hw->mac.init_led_link_act(hw);
+
+	if (status != 0)
+		DEBUGOUT("Failed to initialize HW, STATUS = %d\n", status);
+
+	return status;
+}
+
 s32 txgbe_set_rar(struct txgbe_hw *hw, u32 index, u8 *addr, u32 vmdq,
 			  u32 enable_addr)
 {
@@ -98,13 +160,6 @@ s32 txgbe_set_mac_type(struct txgbe_hw *hw)
 	return err;
 }
 
-s32 txgbe_init_hw(struct txgbe_hw *hw)
-{
-	RTE_SET_USED(hw);
-	return 0;
-}
-
-
 /**
  *  txgbe_init_ops_pf - Inits func ptrs and MAC type
  *  @hw: pointer to hardware structure
@@ -114,6 +169,7 @@ s32 txgbe_init_hw(struct txgbe_hw *hw)
  **/
 s32 txgbe_init_ops_pf(struct txgbe_hw *hw)
 {
+	struct txgbe_mac_info *mac = &hw->mac;
 	struct txgbe_rom_info *rom = &hw->rom;
 
 	/* EEPROM */
@@ -130,6 +186,39 @@ s32 txgbe_init_ops_pf(struct txgbe_hw *hw)
 	rom->update_checksum = txgbe_update_eeprom_checksum;
 	rom->calc_checksum = txgbe_calc_eeprom_checksum;
 
+	/* MAC */
+	mac->init_hw = txgbe_init_hw;
+	mac->start_hw = txgbe_start_hw_raptor;
+
 	return 0;
 }
 
+/**
+ *  txgbe_start_hw_raptor - Prepare hardware for Tx/Rx
+ *  @hw: pointer to hardware structure
+ *
+ *  Starts the hardware using the generic start_hw function
+ * and the generation-specific start_hw function.
+ *  Then performs revision-specific operations, if any.
+ **/
+s32 txgbe_start_hw_raptor(struct txgbe_hw *hw)
+{
+	s32 err = 0;
+
+	DEBUGFUNC("txgbe_start_hw_raptor");
+
+	err = txgbe_start_hw(hw);
+	if (err != 0)
+		goto out;
+
+	err = txgbe_start_hw_gen2(hw);
+	if (err != 0)
+		goto out;
+
+	/* We need to run link autotry after the driver loads */
+	hw->mac.autotry_restart = true;
+
+out:
+	return err;
+}
+
diff --git a/drivers/net/txgbe/base/txgbe_hw.h b/drivers/net/txgbe/base/txgbe_hw.h
index adcc5fc48..55b1b60de 100644
--- a/drivers/net/txgbe/base/txgbe_hw.h
+++ b/drivers/net/txgbe/base/txgbe_hw.h
@@ -8,6 +8,10 @@
 #include "txgbe_type.h"
 
 s32 txgbe_init_hw(struct txgbe_hw *hw);
+s32 txgbe_start_hw(struct txgbe_hw *hw);
+s32 txgbe_stop_hw(struct txgbe_hw *hw);
+s32 txgbe_start_hw_gen2(struct txgbe_hw *hw);
+s32 txgbe_start_hw_raptor(struct txgbe_hw *hw);
 
 s32 txgbe_set_rar(struct txgbe_hw *hw, u32 index, u8 *addr, u32 vmdq,
 			  u32 enable_addr);
diff --git a/drivers/net/txgbe/base/txgbe_type.h b/drivers/net/txgbe/base/txgbe_type.h
index 8b7cfd8ff..92068b6f7 100644
--- a/drivers/net/txgbe/base/txgbe_type.h
+++ b/drivers/net/txgbe/base/txgbe_type.h
@@ -318,6 +318,7 @@ struct txgbe_mac_info {
 	u8 san_addr[ETH_ADDR_LEN];
 
 	u32 num_rar_entries;
+	bool autotry_restart;
 	u32  max_link_up_time;
 };
 
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index 1cae321f1..921a75f25 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -151,7 +151,7 @@ eth_txgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
 		return -EIO;
 	}
 
-	err = txgbe_init_hw(hw);
+	err = hw->mac.init_hw(hw);
 
 	/* Reset the hw statistics */
 	txgbe_dev_stats_reset(eth_dev);
-- 
2.18.4




^ permalink raw reply	[flat|nested] 49+ messages in thread

* [dpdk-dev] [PATCH v1 08/42] net/txgbe: add HW reset operation
  2020-09-01 11:50 [dpdk-dev] [PATCH v1 01/42] net/txgbe: add build and doc infrastructure Jiawen Wu
                   ` (5 preceding siblings ...)
  2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 07/42] net/txgbe: add HW init function Jiawen Wu
@ 2020-09-01 11:50 ` Jiawen Wu
  2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 09/42] net/txgbe: add PHY init Jiawen Wu
                   ` (34 subsequent siblings)
  41 siblings, 0 replies; 49+ messages in thread
From: Jiawen Wu @ 2020-09-01 11:50 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Add hardware reset operation.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/base/txgbe_hw.c   | 298 +++++++++++++++++++++++++++-
 drivers/net/txgbe/base/txgbe_hw.h   |   5 +
 drivers/net/txgbe/base/txgbe_type.h |  12 ++
 3 files changed, 304 insertions(+), 11 deletions(-)

diff --git a/drivers/net/txgbe/base/txgbe_hw.c b/drivers/net/txgbe/base/txgbe_hw.c
index c644de864..9a77adc72 100644
--- a/drivers/net/txgbe/base/txgbe_hw.c
+++ b/drivers/net/txgbe/base/txgbe_hw.c
@@ -5,6 +5,7 @@
 #include "txgbe_type.h"
 #include "txgbe_vf.h"
 #include "txgbe_eeprom.h"
+#include "txgbe_mng.h"
 #include "txgbe_hw.h"
 
 /**
@@ -69,18 +70,131 @@ s32 txgbe_init_hw(struct txgbe_hw *hw)
 	return status;
 }
 
+/**
+ *  txgbe_validate_mac_addr - Validate MAC address
+ *  @mac_addr: pointer to MAC address.
+ *
+ *  Tests a MAC address to ensure it is a valid Individual Address.
+ **/
+s32 txgbe_validate_mac_addr(u8 *mac_addr)
+{
+	s32 status = 0;
+
+	DEBUGFUNC("txgbe_validate_mac_addr");
+
+	/* Make sure it is not a multicast address */
+	if (TXGBE_IS_MULTICAST(mac_addr)) {
+		status = TXGBE_ERR_INVALID_MAC_ADDR;
+	/* Not a broadcast address */
+	} else if (TXGBE_IS_BROADCAST(mac_addr)) {
+		status = TXGBE_ERR_INVALID_MAC_ADDR;
+	/* Reject the zero address */
+	} else if (mac_addr[0] == 0 && mac_addr[1] == 0 && mac_addr[2] == 0 &&
+		   mac_addr[3] == 0 && mac_addr[4] == 0 && mac_addr[5] == 0) {
+		status = TXGBE_ERR_INVALID_MAC_ADDR;
+	}
+	return status;
+}
+
 s32 txgbe_set_rar(struct txgbe_hw *hw, u32 index, u8 *addr, u32 vmdq,
 			  u32 enable_addr)
 {
-	RTE_SET_USED(hw);
-	RTE_SET_USED(index);
-	RTE_SET_USED(addr);
-	RTE_SET_USED(vmdq);
-	RTE_SET_USED(enable_addr);
+	u32 rar_low, rar_high;
+	u32 rar_entries = hw->mac.num_rar_entries;
 
-	return 0;
+	DEBUGFUNC("txgbe_set_rar");
+
+	/* Make sure we are using a valid rar index range */
+	if (index >= rar_entries) {
+		DEBUGOUT("RAR index %d is out of range.\n", index);
+		return TXGBE_ERR_INVALID_ARGUMENT;
+	}
+
+	/* setup VMDq pool selection before this RAR gets enabled */
+	hw->mac.set_vmdq(hw, index, vmdq);
+
+	/*
+	 * HW expects these in little endian so we reverse the byte
+	 * order from network order (big endian) to little endian
+	 */
+	rar_low = TXGBE_ETHADDRL_AD0(addr[5]) |
+		  TXGBE_ETHADDRL_AD1(addr[4]) |
+		  TXGBE_ETHADDRL_AD2(addr[3]) |
+		  TXGBE_ETHADDRL_AD3(addr[2]);
+	/*
+	 * Some parts put the VMDq setting in the extra RAH bits,
+	 * so save everything except the lower 16 bits that hold part
+	 * of the address and the address valid bit.
+	 */
+	rar_high = rd32(hw, TXGBE_ETHADDRH);
+	rar_high &= ~TXGBE_ETHADDRH_AD_MASK;
+	rar_high |= (TXGBE_ETHADDRH_AD4(addr[1]) |
+		     TXGBE_ETHADDRH_AD5(addr[0]));
+
+	rar_high &= ~TXGBE_ETHADDRH_VLD;
+	if (enable_addr != 0)
+		rar_high |= TXGBE_ETHADDRH_VLD;
+
+	wr32(hw, TXGBE_ETHADDRIDX, index);
+	wr32(hw, TXGBE_ETHADDRL, rar_low);
+	wr32(hw, TXGBE_ETHADDRH, rar_high);
+}
+
+/**
+ * txgbe_clear_tx_pending - Clear pending TX work from the PCIe fifo
+ * @hw: pointer to the hardware structure
+ *
+ * The MACs can experience issues if TX work is still pending
+ * when a reset occurs.  This function prevents this by flushing the PCIe
+ * buffers on the system.
+ **/
+void txgbe_clear_tx_pending(struct txgbe_hw *hw)
+{
+	u32 hlreg0, i, poll;
+
+	/*
+	 * If double reset is not requested then all transactions should
+	 * already be clear and as such there is no work to do
+	 */
+	if (!(hw->mac.flags & TXGBE_FLAGS_DOUBLE_RESET_REQUIRED))
+		return;
+
+	hlreg0 = rd32(hw, TXGBE_PSRCTL);
+	wr32(hw, TXGBE_PSRCTL, hlreg0 | TXGBE_PSRCTL_LBENA);
+
+	/* Wait for a last completion before clearing buffers */
+	txgbe_flush(hw);
+	msec_delay(3);
+
+	/*
+	 * Before proceeding, make sure that the PCIe block does not have
+	 * transactions pending.
+	 */
+	poll = (800 * 11) / 10;
+	for (i = 0; i < poll; i++)
+		usec_delay(100);
+
+	/* Flush all writes and allow 20usec for all transactions to clear */
+	txgbe_flush(hw);
+	usec_delay(20);
+
+	/* restore previous register values */
+	wr32(hw, TXGBE_PSRCTL, hlreg0);
 }
 
+/**
+ *  txgbe_init_shared_code - Initialize the shared code
+ *  @hw: pointer to hardware structure
+ *
+ *  This will assign function pointers and assign the MAC type and PHY code.
+ *  Does not touch the hardware. This function must be called prior to any
+ *  other function in the shared code. The txgbe_hw structure should be
+ *  memset to 0 prior to calling this function.  The following fields in
+ *  hw structure should be filled in prior to calling this function:
+ *  hw_addr, back, device_id, vendor_id, subsystem_device_id,
+ *  subsystem_vendor_id, and revision_id
+ **/
 s32 txgbe_init_shared_code(struct txgbe_hw *hw)
 {
 	s32 status;
@@ -109,7 +223,6 @@ s32 txgbe_init_shared_code(struct txgbe_hw *hw)
 	hw->bus.set_lan_id(hw);
 
 	return status;
-
 }
 
 /**
@@ -172,6 +285,11 @@ s32 txgbe_init_ops_pf(struct txgbe_hw *hw)
 	struct txgbe_mac_info *mac = &hw->mac;
 	struct txgbe_rom_info *rom = &hw->rom;
 
+	/* MAC */
+	mac->init_hw = txgbe_init_hw;
+	mac->start_hw = txgbe_start_hw_raptor;
+	mac->reset_hw = txgbe_reset_hw;
+
 	/* EEPROM */
 	rom->init_params = txgbe_init_eeprom_params;
 	rom->read16 = txgbe_ee_read16;
@@ -186,13 +304,171 @@ s32 txgbe_init_ops_pf(struct txgbe_hw *hw)
 	rom->update_checksum = txgbe_update_eeprom_checksum;
 	rom->calc_checksum = txgbe_calc_eeprom_checksum;
 
-	/* MAC */
-	mac->init_hw = txgbe_init_hw;
-	mac->start_hw = txgbe_start_hw_raptor;
-
 	return 0;
 }
 
+static int
+txgbe_check_flash_load(struct txgbe_hw *hw, u32 check_bit)
+{
+	u32 reg = 0;
+	u32 i;
+	int err = 0;
+	/* if there's flash existing */
+	if (!(rd32(hw, TXGBE_SPISTAT) & TXGBE_SPISTAT_BPFLASH)) {
+		/* wait hw load flash done */
+		for (i = 0; i < 10; i++) {
+			reg = rd32(hw, TXGBE_ILDRSTAT);
+			if (!(reg & check_bit)) {
+				/* done */
+				break;
+			}
+			msleep(100);
+		}
+		if (i == 10)
+			err = TXGBE_ERR_FLASH_LOADING_FAILED;
+	}
+	return err;
+}
+
+static void
+txgbe_reset_misc(struct txgbe_hw *hw)
+{
+	RTE_SET_USED(hw);
+}
+
+/**
+ *  txgbe_reset_hw - Perform hardware reset
+ *  @hw: pointer to hardware structure
+ *
+ *  Resets the hardware by resetting the transmit and receive units, masks
+ *  and clears all interrupts, perform a PHY reset, and perform a link (MAC)
+ *  reset.
+ **/
+s32 txgbe_reset_hw(struct txgbe_hw *hw)
+{
+	s32 status;
+	u32 autoc;
+
+	DEBUGFUNC("txgbe_reset_hw");
+
+	/* Call adapter stop to disable tx/rx and clear interrupts */
+	status = hw->mac.stop_hw(hw);
+	if (status != 0)
+		return status;
+
+	/* flush pending Tx transactions */
+	txgbe_clear_tx_pending(hw);
+
+	/* Identify PHY and related function pointers */
+	status = hw->phy.init(hw);
+	if (status == TXGBE_ERR_SFP_NOT_SUPPORTED)
+		return status;
+
+	/* Setup SFP module if there is one present. */
+	if (hw->phy.sfp_setup_needed) {
+		status = hw->mac.setup_sfp(hw);
+		hw->phy.sfp_setup_needed = false;
+	}
+	if (status == TXGBE_ERR_SFP_NOT_SUPPORTED)
+		return status;
+
+	/* Reset PHY */
+	if (hw->phy.reset_disable == false)
+		hw->phy.reset(hw);
+
+	/* remember AUTOC from before we reset */
+	autoc = hw->mac.autoc_read(hw);
+
+mac_reset_top:
+	/*
+	 * Issue global reset to the MAC.  Needs to be SW reset if link is up.
+	 * If link reset is used when link is up, it might reset the PHY when
+	 * mng is using it.  If link is down or the flag to force full link
+	 * reset is set, then perform link reset.
+	 */
+	if (txgbe_mng_present(hw)) {
+		txgbe_hic_reset(hw);
+	} else {
+		wr32(hw, TXGBE_RST, TXGBE_RST_LAN(hw->bus.lan_id));
+		txgbe_flush(hw);
+	}
+	usec_delay(10);
+
+	txgbe_reset_misc(hw);
+
+	if (hw->bus.lan_id == 0) {
+		status = txgbe_check_flash_load(hw,
+				TXGBE_ILDRSTAT_SWRST_LAN0);
+	} else {
+		status = txgbe_check_flash_load(hw,
+				TXGBE_ILDRSTAT_SWRST_LAN1);
+	}
+	if (status != 0)
+		return status;
+
+	msec_delay(50);
+
+	/*
+	 * Double resets are required for recovery from certain error
+	 * conditions.  Between resets, it is necessary to stall to
+	 * allow time for any pending HW events to complete.
+	 */
+	if (hw->mac.flags & TXGBE_FLAGS_DOUBLE_RESET_REQUIRED) {
+		hw->mac.flags &= ~TXGBE_FLAGS_DOUBLE_RESET_REQUIRED;
+		goto mac_reset_top;
+	}
+
+	/*
+	 * Store the original AUTOC/AUTOC2 values if they have not been
+	 * stored off yet.  Otherwise restore the stored original
+	 * values since the reset operation sets back to defaults.
+	 */
+	if (hw->mac.orig_link_settings_stored == false) {
+		hw->mac.orig_autoc = hw->mac.autoc_read(hw);
+		hw->mac.autoc_write(hw, hw->mac.orig_autoc);
+		hw->mac.orig_link_settings_stored = true;
+	} else {
+		hw->mac.orig_autoc = autoc;
+	}
+
+	/* Store the permanent mac address */
+	hw->mac.get_mac_addr(hw, hw->mac.perm_addr);
+
+	/*
+	 * Store MAC address from RAR0, clear receive address registers, and
+	 * clear the multicast table.  Also reset num_rar_entries to 128,
+	 * since we modify this value when programming the SAN MAC address.
+	 */
+	hw->mac.num_rar_entries = 128;
+	hw->mac.init_rx_addrs(hw);
+
+	/* Store the permanent SAN mac address */
+	hw->mac.get_san_mac_addr(hw, hw->mac.san_addr);
+
+	/* Add the SAN MAC address to the RAR only if it's a valid address */
+	if (txgbe_validate_mac_addr(hw->mac.san_addr) == 0) {
+		/* Save the SAN MAC RAR index */
+		hw->mac.san_mac_rar_index = hw->mac.num_rar_entries - 1;
+
+		hw->mac.set_rar(hw, hw->mac.san_mac_rar_index,
+				    hw->mac.san_addr, 0, true);
+
+		/* clear VMDq pool/queue selection for this RAR */
+		hw->mac.clear_vmdq(hw, hw->mac.san_mac_rar_index,
+				       BIT_MASK32);
+
+		/* Reserve the last RAR for the SAN MAC address */
+		hw->mac.num_rar_entries--;
+	}
+
+	/* Store the alternative WWNN/WWPN prefix */
+	hw->mac.get_wwn_prefix(hw, &hw->mac.wwnn_prefix,
+				   &hw->mac.wwpn_prefix);
+
+	return status;
+}
+
 /**
  *  txgbe_start_hw_raptor - Prepare hardware for Tx/Rx
  *  @hw: pointer to hardware structure
diff --git a/drivers/net/txgbe/base/txgbe_hw.h b/drivers/net/txgbe/base/txgbe_hw.h
index 55b1b60de..a2816c40a 100644
--- a/drivers/net/txgbe/base/txgbe_hw.h
+++ b/drivers/net/txgbe/base/txgbe_hw.h
@@ -15,7 +15,12 @@ s32 txgbe_start_hw_raptor(struct txgbe_hw *hw);
 
 s32 txgbe_set_rar(struct txgbe_hw *hw, u32 index, u8 *addr, u32 vmdq,
 			  u32 enable_addr);
+
+s32 txgbe_validate_mac_addr(u8 *mac_addr);
+void txgbe_clear_tx_pending(struct txgbe_hw *hw);
 s32 txgbe_init_shared_code(struct txgbe_hw *hw);
 s32 txgbe_set_mac_type(struct txgbe_hw *hw);
 s32 txgbe_init_ops_pf(struct txgbe_hw *hw);
+s32 txgbe_reset_hw(struct txgbe_hw *hw);
+s32 txgbe_start_hw_raptor(struct txgbe_hw *hw);
 #endif /* _TXGBE_HW_H_ */
diff --git a/drivers/net/txgbe/base/txgbe_type.h b/drivers/net/txgbe/base/txgbe_type.h
index 92068b6f7..f8ac41fe9 100644
--- a/drivers/net/txgbe/base/txgbe_type.h
+++ b/drivers/net/txgbe/base/txgbe_type.h
@@ -207,6 +207,7 @@ struct txgbe_flash_info {
 	u16 address_bits;
 };
 
+#define TXGBE_FLAGS_DOUBLE_RESET_REQUIRED	0x01
 struct txgbe_mac_info {
 	s32 (*init_hw)(struct txgbe_hw *);
 	s32 (*reset_hw)(struct txgbe_hw *);
@@ -316,9 +317,18 @@ struct txgbe_mac_info {
 	u8 addr[ETH_ADDR_LEN];
 	u8 perm_addr[ETH_ADDR_LEN];
 	u8 san_addr[ETH_ADDR_LEN];
+	/* prefix for World Wide Node Name (WWNN) */
+	u16 wwnn_prefix;
+	/* prefix for World Wide Port Name (WWPN) */
+	u16 wwpn_prefix;
 
 	u32 num_rar_entries;
+
+	u8  san_mac_rar_index;
+	u64 orig_autoc;  /* cached value of AUTOC */
+	bool orig_link_settings_stored;
 	bool autotry_restart;
+	u8 flags;
 	u32  max_link_up_time;
 };
 
@@ -354,6 +364,8 @@ struct txgbe_phy_info {
 
 	enum txgbe_phy_type type;
 	enum txgbe_sfp_type sfp_type;
+	bool sfp_setup_needed;
+	bool reset_disable;
 };
 
 struct txgbe_mbx_info {
-- 
2.18.4




^ permalink raw reply	[flat|nested] 49+ messages in thread
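Two pieces of this patch lend themselves to a compact illustration: txgbe_validate_mac_addr() rejects multicast, broadcast, and all-zero addresses, and txgbe_set_rar() reverses the MAC bytes from network order into two little-endian register words. A hedged standalone sketch (the exact bit positions of the TXGBE_ETHADDRL/H_ADx() macros are an assumption here):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical helper mirroring the checks in txgbe_validate_mac_addr():
 * a usable unicast MAC is not multicast, not broadcast, and not all zero. */
static bool mac_addr_valid(const uint8_t mac[6])
{
	bool zero = true, bcast = true;
	int i;

	for (i = 0; i < 6; i++) {
		if (mac[i] != 0x00)
			zero = false;
		if (mac[i] != 0xFF)
			bcast = false;
	}
	if (mac[0] & 0x01)	/* I/G bit set: multicast (covers broadcast) */
		return false;
	return !zero && !bcast;
}

/* Pack a MAC the way txgbe_set_rar() does: reverse the bytes from network
 * order (big endian) into two little-endian register words. The shift
 * positions assumed here stand in for the TXGBE_ETHADDRL/H_ADx() macros. */
static void mac_to_rar(const uint8_t mac[6], uint32_t *lo, uint32_t *hi)
{
	*lo = (uint32_t)mac[5] | ((uint32_t)mac[4] << 8) |
	      ((uint32_t)mac[3] << 16) | ((uint32_t)mac[2] << 24);
	*hi = (uint32_t)mac[1] | ((uint32_t)mac[0] << 8);
}
```

In the driver the high word additionally preserves the VMDq bits already in the register and carries the address-valid flag, which is why txgbe_set_rar() masks TXGBE_ETHADDRH before OR-ing in the new bytes.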

* [dpdk-dev] [PATCH v1 09/42] net/txgbe: add PHY init
  2020-09-01 11:50 [dpdk-dev] [PATCH v1 01/42] net/txgbe: add build and doc infrastructure Jiawen Wu
                   ` (6 preceding siblings ...)
  2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 08/42] net/txgbe: add HW reset operation Jiawen Wu
@ 2020-09-01 11:50 ` Jiawen Wu
  2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 10/42] net/txgbe: add module identify Jiawen Wu
                   ` (33 subsequent siblings)
  41 siblings, 0 replies; 49+ messages in thread
From: Jiawen Wu @ 2020-09-01 11:50 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Add PHY init functions to get the PHY type and identify the module.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/base/meson.build  |   1 +
 drivers/net/txgbe/base/txgbe.h      |   1 +
 drivers/net/txgbe/base/txgbe_hw.c   |  57 +++++
 drivers/net/txgbe/base/txgbe_hw.h   |   2 +
 drivers/net/txgbe/base/txgbe_phy.c  | 253 +++++++++++++++++++++
 drivers/net/txgbe/base/txgbe_phy.h  | 336 ++++++++++++++++++++++++++++
 drivers/net/txgbe/base/txgbe_type.h |   7 +
 7 files changed, 657 insertions(+)
 create mode 100644 drivers/net/txgbe/base/txgbe_phy.c
 create mode 100644 drivers/net/txgbe/base/txgbe_phy.h

diff --git a/drivers/net/txgbe/base/meson.build b/drivers/net/txgbe/base/meson.build
index 94cdfcfc1..069879a7c 100644
--- a/drivers/net/txgbe/base/meson.build
+++ b/drivers/net/txgbe/base/meson.build
@@ -5,6 +5,7 @@ sources = [
 	'txgbe_eeprom.c',
 	'txgbe_hw.c',
 	'txgbe_mng.c',
+	'txgbe_phy.c',
 	'txgbe_vf.c',
 ]
 
diff --git a/drivers/net/txgbe/base/txgbe.h b/drivers/net/txgbe/base/txgbe.h
index 329764be0..764caa439 100644
--- a/drivers/net/txgbe/base/txgbe.h
+++ b/drivers/net/txgbe/base/txgbe.h
@@ -8,6 +8,7 @@
 #include "txgbe_type.h"
 #include "txgbe_mng.h"
 #include "txgbe_eeprom.h"
+#include "txgbe_phy.h"
 #include "txgbe_hw.h"
 
 #endif /* _TXGBE_H_ */
diff --git a/drivers/net/txgbe/base/txgbe_hw.c b/drivers/net/txgbe/base/txgbe_hw.c
index 9a77adc72..8090d68f9 100644
--- a/drivers/net/txgbe/base/txgbe_hw.c
+++ b/drivers/net/txgbe/base/txgbe_hw.c
@@ -3,6 +3,7 @@
  */
 
 #include "txgbe_type.h"
+#include "txgbe_phy.h"
 #include "txgbe_vf.h"
 #include "txgbe_eeprom.h"
 #include "txgbe_mng.h"
@@ -138,6 +139,8 @@ s32 txgbe_set_rar(struct txgbe_hw *hw, u32 index, u8 *addr, u32 vmdq,
 	wr32(hw, TXGBE_ETHADDRIDX, index);
 	wr32(hw, TXGBE_ETHADDRL, rar_low);
 	wr32(hw, TXGBE_ETHADDRH, rar_high);
+
+	return 0;
 }
 
 /**
@@ -273,6 +276,55 @@ s32 txgbe_set_mac_type(struct txgbe_hw *hw)
 	return err;
 }
 
+void txgbe_init_mac_link_ops(struct txgbe_hw *hw)
+{
+	struct txgbe_mac_info *mac = &hw->mac;
+
+	DEBUGFUNC("txgbe_init_mac_link_ops");
+
+	/*
+	 * enable the laser control functions for SFP+ fiber
+	 * and MNG not enabled
+	 */
+	RTE_SET_USED(mac);
+}
+
+/**
+ *  txgbe_init_phy_raptor - PHY/SFP specific init
+ *  @hw: pointer to hardware structure
+ *
+ *  Initialize any function pointers that were not able to be
+ *  set during init_shared_code because the PHY/SFP type was
+ *  not known.  Perform the SFP init if necessary.
+ *
+ **/
+s32 txgbe_init_phy_raptor(struct txgbe_hw *hw)
+{
+	struct txgbe_phy_info *phy = &hw->phy;
+	s32 err = 0;
+
+	DEBUGFUNC("txgbe_init_phy_raptor");
+
+	if (hw->device_id == TXGBE_DEV_ID_RAPTOR_QSFP) {
+		/* Store flag indicating I2C bus access control unit. */
+		hw->phy.qsfp_shared_i2c_bus = TRUE;
+
+		/* Initialize access to QSFP+ I2C bus */
+		txgbe_flush(hw);
+	}
+
+	/* Identify the PHY or SFP module */
+	err = phy->identify(hw);
+	if (err == TXGBE_ERR_SFP_NOT_SUPPORTED)
+		goto init_phy_ops_out;
+
+	/* Setup function pointers based on detected SFP module and speeds */
+	txgbe_init_mac_link_ops(hw);
+
+init_phy_ops_out:
+	return err;
+}
+
 /**
  *  txgbe_init_ops_pf - Inits func ptrs and MAC type
  *  @hw: pointer to hardware structure
@@ -283,8 +335,13 @@ s32 txgbe_set_mac_type(struct txgbe_hw *hw)
 s32 txgbe_init_ops_pf(struct txgbe_hw *hw)
 {
 	struct txgbe_mac_info *mac = &hw->mac;
+	struct txgbe_phy_info *phy = &hw->phy;
 	struct txgbe_rom_info *rom = &hw->rom;
 
+	/* PHY */
+	phy->identify = txgbe_identify_phy;
+	phy->init = txgbe_init_phy_raptor;
+
 	/* MAC */
 	mac->init_hw = txgbe_init_hw;
 	mac->start_hw = txgbe_start_hw_raptor;
diff --git a/drivers/net/txgbe/base/txgbe_hw.h b/drivers/net/txgbe/base/txgbe_hw.h
index a2816c40a..a70b0340a 100644
--- a/drivers/net/txgbe/base/txgbe_hw.h
+++ b/drivers/net/txgbe/base/txgbe_hw.h
@@ -21,6 +21,8 @@ void txgbe_clear_tx_pending(struct txgbe_hw *hw);
 s32 txgbe_init_shared_code(struct txgbe_hw *hw);
 s32 txgbe_set_mac_type(struct txgbe_hw *hw);
 s32 txgbe_init_ops_pf(struct txgbe_hw *hw);
+void txgbe_init_mac_link_ops(struct txgbe_hw *hw);
 s32 txgbe_reset_hw(struct txgbe_hw *hw);
 s32 txgbe_start_hw_raptor(struct txgbe_hw *hw);
+s32 txgbe_init_phy_raptor(struct txgbe_hw *hw);
 #endif /* _TXGBE_HW_H_ */
diff --git a/drivers/net/txgbe/base/txgbe_phy.c b/drivers/net/txgbe/base/txgbe_phy.c
new file mode 100644
index 000000000..f2f79475c
--- /dev/null
+++ b/drivers/net/txgbe/base/txgbe_phy.c
@@ -0,0 +1,253 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2015-2020
+ */
+
+#include "txgbe_hw.h"
+#include "txgbe_eeprom.h"
+#include "txgbe_mng.h"
+#include "txgbe_phy.h"
+
+/**
+ * txgbe_identify_extphy - Identify a single address for a PHY
+ * @hw: pointer to hardware structure
+ * @phy_addr: PHY address to probe
+ *
+ * Returns true if PHY found
+ */
+static bool txgbe_identify_extphy(struct txgbe_hw *hw)
+{
+	u16 phy_addr = 0;
+
+	if (!txgbe_validate_phy_addr(hw, phy_addr)) {
+		DEBUGOUT("Unable to validate PHY address 0x%04X\n",
+			phy_addr);
+		return false;
+	}
+
+	if (txgbe_get_phy_id(hw))
+		return false;
+
+	hw->phy.type = txgbe_get_phy_type_from_id(hw->phy.id);
+	if (hw->phy.type == txgbe_phy_unknown) {
+		u16 ext_ability = 0;
+		hw->phy.read_reg(hw, TXGBE_MD_PHY_EXT_ABILITY,
+				 TXGBE_MD_DEV_PMA_PMD,
+				 &ext_ability);
+
+		if (ext_ability & (TXGBE_MD_PHY_10GBASET_ABILITY |
+			TXGBE_MD_PHY_1000BASET_ABILITY))
+			hw->phy.type = txgbe_phy_cu_unknown;
+		else
+			hw->phy.type = txgbe_phy_generic;
+	}
+
+	return true;
+}
+
+/**
+ *  txgbe_read_phy_if - Read TXGBE_ETHPHYIF register
+ *  @hw: pointer to hardware structure
+ *
+ *  Read the TXGBE_ETHPHYIF register, save its field values, and check that
+ *  they are valid.
+ **/
+static s32 txgbe_read_phy_if(struct txgbe_hw *hw)
+{
+	hw->phy.media_type = hw->phy.get_media_type(hw);
+
+	/* Save NW management interface connected on board. This is used
+	 * to determine internal PHY mode.
+	 */
+	hw->phy.nw_mng_if_sel = rd32(hw, TXGBE_ETHPHYIF);
+
+	/* If MDIO is connected to external PHY, then set PHY address. */
+	if (hw->phy.nw_mng_if_sel & TXGBE_ETHPHYIF_MDIO_ACT)
+		hw->phy.addr = TXGBE_ETHPHYIF_MDIO_BASE(hw->phy.nw_mng_if_sel);
+
+	if (!hw->phy.phy_semaphore_mask) {
+		if (hw->bus.lan_id)
+			hw->phy.phy_semaphore_mask = TXGBE_MNGSEM_SWPHY;
+		else
+			hw->phy.phy_semaphore_mask = TXGBE_MNGSEM_SWPHY;
+	}
+
+	return 0;
+}
+
+/**
+ *  txgbe_identify_phy - Get physical layer module
+ *  @hw: pointer to hardware structure
+ *
+ *  Determines the physical layer module found on the current adapter.
+ **/
+s32 txgbe_identify_phy(struct txgbe_hw *hw)
+{
+	s32 err = TXGBE_ERR_PHY_ADDR_INVALID;
+
+	DEBUGFUNC("txgbe_identify_phy");
+
+	txgbe_read_phy_if(hw);
+
+	if (hw->phy.type != txgbe_phy_unknown)
+		return 0;
+
+	/* Raptor 10GBASE-T requires an external PHY */
+	if (hw->phy.media_type == txgbe_media_type_copper) {
+		err = txgbe_identify_extphy(hw);
+	} else if (hw->phy.media_type == txgbe_media_type_fiber) {
+		err = txgbe_identify_module(hw);
+	} else {
+		hw->phy.type = txgbe_phy_none;
+		return 0;
+	}
+
+	/* Return error if SFP module has been detected but is not supported */
+	if (hw->phy.type == txgbe_phy_sfp_unsupported)
+		return TXGBE_ERR_SFP_NOT_SUPPORTED;
+
+	return err;
+}
+
+/**
+ *  txgbe_validate_phy_addr - Determines phy address is valid
+ *  @hw: pointer to hardware structure
+ *  @phy_addr: PHY address
+ *
+ **/
+bool txgbe_validate_phy_addr(struct txgbe_hw *hw, u32 phy_addr)
+{
+	u16 phy_id = 0;
+	bool valid = false;
+
+	DEBUGFUNC("txgbe_validate_phy_addr");
+
+	hw->phy.addr = phy_addr;
+	hw->phy.read_reg(hw, TXGBE_MD_PHY_ID_HIGH,
+			     TXGBE_MD_DEV_PMA_PMD, &phy_id);
+
+	if (phy_id != 0xFFFF && phy_id != 0x0)
+		valid = true;
+
+	DEBUGOUT("PHY ID HIGH is 0x%04X\n", phy_id);
+
+	return valid;
+}
+
+/**
+ *  txgbe_get_phy_id - Get the phy type
+ *  @hw: pointer to hardware structure
+ *
+ **/
+s32 txgbe_get_phy_id(struct txgbe_hw *hw)
+{
+	u32 err;
+	u16 phy_id_high = 0;
+	u16 phy_id_low = 0;
+
+	DEBUGFUNC("txgbe_get_phy_id");
+
+	err = hw->phy.read_reg(hw, TXGBE_MD_PHY_ID_HIGH,
+				      TXGBE_MD_DEV_PMA_PMD,
+				      &phy_id_high);
+
+	if (err == 0) {
+		hw->phy.id = (u32)(phy_id_high << 16);
+		err = hw->phy.read_reg(hw, TXGBE_MD_PHY_ID_LOW,
+					      TXGBE_MD_DEV_PMA_PMD,
+					      &phy_id_low);
+		hw->phy.id |= (u32)(phy_id_low & TXGBE_PHY_REVISION_MASK);
+		hw->phy.revision = (u32)(phy_id_low & ~TXGBE_PHY_REVISION_MASK);
+	}
+	DEBUGOUT("PHY_ID_HIGH 0x%04X, PHY_ID_LOW 0x%04X\n",
+		  phy_id_high, phy_id_low);
+
+	return err;
+}
+
+/**
+ *  txgbe_get_phy_type_from_id - Get the phy type
+ *  @phy_id: PHY ID information
+ *
+ **/
+enum txgbe_phy_type txgbe_get_phy_type_from_id(u32 phy_id)
+{
+	enum txgbe_phy_type phy_type;
+
+	DEBUGFUNC("txgbe_get_phy_type_from_id");
+
+	switch (phy_id) {
+	case TXGBE_PHYID_TN1010:
+		phy_type = txgbe_phy_tn;
+		break;
+	case TXGBE_PHYID_QT2022:
+		phy_type = txgbe_phy_qt;
+		break;
+	case TXGBE_PHYID_ATH:
+		phy_type = txgbe_phy_nl;
+		break;
+	case TXGBE_PHYID_MTD3310:
+		phy_type = txgbe_phy_cu_mtd;
+		break;
+	default:
+		phy_type = txgbe_phy_unknown;
+		break;
+	}
+
+	return phy_type;
+}
+
+/**
+ *  txgbe_identify_module - Identifies module type
+ *  @hw: pointer to hardware structure
+ *
+ *  Determines HW type and calls appropriate function.
+ **/
+s32 txgbe_identify_module(struct txgbe_hw *hw)
+{
+	s32 err = TXGBE_ERR_SFP_NOT_PRESENT;
+
+	DEBUGFUNC("txgbe_identify_module");
+
+	switch (hw->phy.media_type) {
+	case txgbe_media_type_fiber:
+		err = txgbe_identify_sfp_module(hw);
+		break;
+
+	case txgbe_media_type_fiber_qsfp:
+		err = txgbe_identify_qsfp_module(hw);
+		break;
+
+	default:
+		hw->phy.sfp_type = txgbe_sfp_type_not_present;
+		err = TXGBE_ERR_SFP_NOT_PRESENT;
+		break;
+	}
+
+	return err;
+}
+
+/**
+ *  txgbe_identify_sfp_module - Identifies SFP modules
+ *  @hw: pointer to hardware structure
+ *
+ *  Searches for and identifies the SFP module and assigns appropriate PHY type.
+ **/
+s32 txgbe_identify_sfp_module(struct txgbe_hw *hw)
+{
+	RTE_SET_USED(hw);
+	return 0;
+}
+
+/**
+ *  txgbe_identify_qsfp_module - Identifies QSFP modules
+ *  @hw: pointer to hardware structure
+ *
+ *  Searches for and identifies the QSFP module and assigns appropriate PHY type
+ **/
+s32 txgbe_identify_qsfp_module(struct txgbe_hw *hw)
+{
+	RTE_SET_USED(hw);
+	return 0;
+}
+
diff --git a/drivers/net/txgbe/base/txgbe_phy.h b/drivers/net/txgbe/base/txgbe_phy.h
new file mode 100644
index 000000000..73ed734b2
--- /dev/null
+++ b/drivers/net/txgbe/base/txgbe_phy.h
@@ -0,0 +1,336 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2015-2020
+ */
+
+#ifndef _TXGBE_PHY_H_
+#define _TXGBE_PHY_H_
+
+#include "txgbe_type.h"
+
+#define TXGBE_SFP_DETECT_RETRIES	10
+#define TXGBE_MD_COMMAND_TIMEOUT	100 /* PHY Timeout for 1 GB mode */
+
+/* ETH PHY Registers */
+#define SR_XS_PCS_MMD_STATUS1           0x030001
+#define SR_XS_PCS_CTRL2                 0x030007
+#define   SR_PCS_CTRL2_TYPE_SEL         MS16(0, 0x3)
+#define   SR_PCS_CTRL2_TYPE_SEL_R       LS16(0, 0, 0x3)
+#define   SR_PCS_CTRL2_TYPE_SEL_X       LS16(1, 0, 0x3)
+#define   SR_PCS_CTRL2_TYPE_SEL_W       LS16(2, 0, 0x3)
+#define SR_PMA_CTRL1                    0x010000
+#define   SR_PMA_CTRL1_SS13             MS16(13, 0x1)
+#define   SR_PMA_CTRL1_SS13_KX          LS16(0, 13, 0x1)
+#define   SR_PMA_CTRL1_SS13_KX4         LS16(1, 13, 0x1)
+#define   SR_PMA_CTRL1_LB               MS16(0, 0x1)
+#define SR_MII_MMD_CTL                  0x1F0000
+#define   SR_MII_MMD_CTL_AN_EN              0x1000
+#define   SR_MII_MMD_CTL_RESTART_AN         0x0200
+#define SR_MII_MMD_DIGI_CTL             0x1F8000
+#define SR_MII_MMD_AN_CTL               0x1F8001
+#define SR_MII_MMD_AN_ADV               0x1F0004
+#define   SR_MII_MMD_AN_ADV_PAUSE(v)    ((0x3 & (v)) << 7)
+#define   SR_MII_MMD_AN_ADV_PAUSE_ASM   0x80
+#define   SR_MII_MMD_AN_ADV_PAUSE_SYM   0x100
+#define SR_MII_MMD_LP_BABL              0x1F0005
+#define SR_AN_CTRL                      0x070000
+#define   SR_AN_CTRL_RSTRT_AN           MS16(9, 0x1)
+#define   SR_AN_CTRL_AN_EN              MS16(12, 0x1)
+#define SR_AN_MMD_ADV_REG1                0x070010
+#define   SR_AN_MMD_ADV_REG1_PAUSE(v)      ((0x3 & (v)) << 10)
+#define   SR_AN_MMD_ADV_REG1_PAUSE_SYM      0x400
+#define   SR_AN_MMD_ADV_REG1_PAUSE_ASM      0x800
+#define SR_AN_MMD_ADV_REG2                0x070011
+#define   SR_AN_MMD_ADV_REG2_BP_TYPE_KX4    0x40
+#define   SR_AN_MMD_ADV_REG2_BP_TYPE_KX     0x20
+#define   SR_AN_MMD_ADV_REG2_BP_TYPE_KR     0x80
+#define   SR_AN_MMD_ADV_REG2_BP_TYPE_MASK   0xFFFF
+#define SR_AN_MMD_LP_ABL1                 0x070013
+#define VR_AN_KR_MODE_CL                  0x078003
+#define VR_XS_OR_PCS_MMD_DIGI_CTL1        0x038000
+#define   VR_XS_OR_PCS_MMD_DIGI_CTL1_ENABLE 0x1000
+#define   VR_XS_OR_PCS_MMD_DIGI_CTL1_VR_RST 0x8000
+#define VR_XS_OR_PCS_MMD_DIGI_STATUS      0x038010
+#define   VR_XS_OR_PCS_MMD_DIGI_STATUS_PSEQ_MASK            0x1C
+#define   VR_XS_OR_PCS_MMD_DIGI_STATUS_PSEQ_POWER_GOOD      0x10
+
+#define TXGBE_PHY_MPLLA_CTL0                    0x018071
+#define TXGBE_PHY_MPLLA_CTL3                    0x018077
+#define TXGBE_PHY_MISC_CTL0                     0x018090
+#define TXGBE_PHY_VCO_CAL_LD0                   0x018092
+#define TXGBE_PHY_VCO_CAL_LD1                   0x018093
+#define TXGBE_PHY_VCO_CAL_LD2                   0x018094
+#define TXGBE_PHY_VCO_CAL_LD3                   0x018095
+#define TXGBE_PHY_VCO_CAL_REF0                  0x018096
+#define TXGBE_PHY_VCO_CAL_REF1                  0x018097
+#define TXGBE_PHY_RX_AD_ACK                     0x018098
+#define TXGBE_PHY_AFE_DFE_ENABLE                0x01805D
+#define TXGBE_PHY_DFE_TAP_CTL0                  0x01805E
+#define TXGBE_PHY_RX_EQ_ATT_LVL0                0x018057
+#define TXGBE_PHY_RX_EQ_CTL0                    0x018058
+#define TXGBE_PHY_RX_EQ_CTL                     0x01805C
+#define TXGBE_PHY_TX_EQ_CTL0                    0x018036
+#define TXGBE_PHY_TX_EQ_CTL1                    0x018037
+#define TXGBE_PHY_TX_RATE_CTL                   0x018034
+#define TXGBE_PHY_RX_RATE_CTL                   0x018054
+#define TXGBE_PHY_TX_GEN_CTL2                   0x018032
+#define TXGBE_PHY_RX_GEN_CTL2                   0x018052
+#define TXGBE_PHY_RX_GEN_CTL3                   0x018053
+#define TXGBE_PHY_MPLLA_CTL2                    0x018073
+#define TXGBE_PHY_RX_POWER_ST_CTL               0x018055
+#define TXGBE_PHY_TX_POWER_ST_CTL               0x018035
+#define TXGBE_PHY_TX_GENCTRL1                   0x018031
+
+#define TXGBE_PHY_MPLLA_CTL0_MULTIPLIER_1GBASEX_KX              32
+#define TXGBE_PHY_MPLLA_CTL0_MULTIPLIER_10GBASER_KR             33
+#define TXGBE_PHY_MPLLA_CTL0_MULTIPLIER_OTHER                   40
+#define TXGBE_PHY_MPLLA_CTL0_MULTIPLIER_MASK                    0xFF
+#define TXGBE_PHY_MPLLA_CTL3_MULTIPLIER_BW_1GBASEX_KX           0x46
+#define TXGBE_PHY_MPLLA_CTL3_MULTIPLIER_BW_10GBASER_KR          0x7B
+#define TXGBE_PHY_MPLLA_CTL3_MULTIPLIER_BW_OTHER                0x56
+#define TXGBE_PHY_MPLLA_CTL3_MULTIPLIER_BW_MASK                 0x7FF
+#define TXGBE_PHY_MISC_CTL0_TX2RX_LB_EN_0                       0x1
+#define TXGBE_PHY_MISC_CTL0_TX2RX_LB_EN_3_1                     0xE
+#define TXGBE_PHY_MISC_CTL0_RX_VREF_CTRL                        0x1F00
+#define TXGBE_PHY_VCO_CAL_LD0_1GBASEX_KX                        1344
+#define TXGBE_PHY_VCO_CAL_LD0_10GBASER_KR                       1353
+#define TXGBE_PHY_VCO_CAL_LD0_OTHER                             1360
+#define TXGBE_PHY_VCO_CAL_LD0_MASK                              0x1000
+#define TXGBE_PHY_VCO_CAL_REF0_LD0_1GBASEX_KX                   42
+#define TXGBE_PHY_VCO_CAL_REF0_LD0_10GBASER_KR                  41
+#define TXGBE_PHY_VCO_CAL_REF0_LD0_OTHER                        34
+#define TXGBE_PHY_VCO_CAL_REF0_LD0_MASK                         0x3F
+#define TXGBE_PHY_AFE_DFE_ENABLE_DFE_EN0                        0x10
+#define TXGBE_PHY_AFE_DFE_ENABLE_AFE_EN0                        0x1
+#define TXGBE_PHY_AFE_DFE_ENABLE_MASK                           0xFF
+#define TXGBE_PHY_RX_EQ_CTL_CONT_ADAPT0                         0x1
+#define TXGBE_PHY_RX_EQ_CTL_CONT_ADAPT_MASK                     0xF
+#define TXGBE_PHY_TX_RATE_CTL_TX0_RATE_10GBASER_KR              0x0
+#define TXGBE_PHY_TX_RATE_CTL_TX0_RATE_RXAUI                    0x1
+#define TXGBE_PHY_TX_RATE_CTL_TX0_RATE_1GBASEX_KX               0x3
+#define TXGBE_PHY_TX_RATE_CTL_TX0_RATE_OTHER                    0x2
+#define TXGBE_PHY_TX_RATE_CTL_TX1_RATE_OTHER                    0x20
+#define TXGBE_PHY_TX_RATE_CTL_TX2_RATE_OTHER                    0x200
+#define TXGBE_PHY_TX_RATE_CTL_TX3_RATE_OTHER                    0x2000
+#define TXGBE_PHY_TX_RATE_CTL_TX0_RATE_MASK                     0x7
+#define TXGBE_PHY_TX_RATE_CTL_TX1_RATE_MASK                     0x70
+#define TXGBE_PHY_TX_RATE_CTL_TX2_RATE_MASK                     0x700
+#define TXGBE_PHY_TX_RATE_CTL_TX3_RATE_MASK                     0x7000
+#define TXGBE_PHY_RX_RATE_CTL_RX0_RATE_10GBASER_KR              0x0
+#define TXGBE_PHY_RX_RATE_CTL_RX0_RATE_RXAUI                    0x1
+#define TXGBE_PHY_RX_RATE_CTL_RX0_RATE_1GBASEX_KX               0x3
+#define TXGBE_PHY_RX_RATE_CTL_RX0_RATE_OTHER                    0x2
+#define TXGBE_PHY_RX_RATE_CTL_RX1_RATE_OTHER                    0x20
+#define TXGBE_PHY_RX_RATE_CTL_RX2_RATE_OTHER                    0x200
+#define TXGBE_PHY_RX_RATE_CTL_RX3_RATE_OTHER                    0x2000
+#define TXGBE_PHY_RX_RATE_CTL_RX0_RATE_MASK                     0x7
+#define TXGBE_PHY_RX_RATE_CTL_RX1_RATE_MASK                     0x70
+#define TXGBE_PHY_RX_RATE_CTL_RX2_RATE_MASK                     0x700
+#define TXGBE_PHY_RX_RATE_CTL_RX3_RATE_MASK                     0x7000
+#define TXGBE_PHY_TX_GEN_CTL2_TX0_WIDTH_10GBASER_KR             0x200
+#define TXGBE_PHY_TX_GEN_CTL2_TX0_WIDTH_10GBASER_KR_RXAUI       0x300
+#define TXGBE_PHY_TX_GEN_CTL2_TX0_WIDTH_OTHER                   0x100
+#define TXGBE_PHY_TX_GEN_CTL2_TX0_WIDTH_MASK                    0x300
+#define TXGBE_PHY_TX_GEN_CTL2_TX1_WIDTH_OTHER                   0x400
+#define TXGBE_PHY_TX_GEN_CTL2_TX1_WIDTH_MASK                    0xC00
+#define TXGBE_PHY_TX_GEN_CTL2_TX2_WIDTH_OTHER                   0x1000
+#define TXGBE_PHY_TX_GEN_CTL2_TX2_WIDTH_MASK                    0x3000
+#define TXGBE_PHY_TX_GEN_CTL2_TX3_WIDTH_OTHER                   0x4000
+#define TXGBE_PHY_TX_GEN_CTL2_TX3_WIDTH_MASK                    0xC000
+#define TXGBE_PHY_RX_GEN_CTL2_RX0_WIDTH_10GBASER_KR             0x200
+#define TXGBE_PHY_RX_GEN_CTL2_RX0_WIDTH_10GBASER_KR_RXAUI       0x300
+#define TXGBE_PHY_RX_GEN_CTL2_RX0_WIDTH_OTHER                   0x100
+#define TXGBE_PHY_RX_GEN_CTL2_RX0_WIDTH_MASK                    0x300
+#define TXGBE_PHY_RX_GEN_CTL2_RX1_WIDTH_OTHER                   0x400
+#define TXGBE_PHY_RX_GEN_CTL2_RX1_WIDTH_MASK                    0xC00
+#define TXGBE_PHY_RX_GEN_CTL2_RX2_WIDTH_OTHER                   0x1000
+#define TXGBE_PHY_RX_GEN_CTL2_RX2_WIDTH_MASK                    0x3000
+#define TXGBE_PHY_RX_GEN_CTL2_RX3_WIDTH_OTHER                   0x4000
+#define TXGBE_PHY_RX_GEN_CTL2_RX3_WIDTH_MASK                    0xC000
+#define TXGBE_PHY_MPLLA_CTL2_DIV_CLK_EN_8                       0x100
+#define TXGBE_PHY_MPLLA_CTL2_DIV_CLK_EN_10                      0x200
+#define TXGBE_PHY_MPLLA_CTL2_DIV_CLK_EN_16P5                    0x400
+#define TXGBE_PHY_MPLLA_CTL2_DIV_CLK_EN_MASK                    0x700
+
+/******************************************************************************
+ * SFP I2C Registers:
+ ******************************************************************************/
+/* SFP IDs: format of OUI is 0x[byte0][byte1][byte2][00] */
+#define TXGBE_SFF_VENDOR_OUI_TYCO	0x00407600
+#define TXGBE_SFF_VENDOR_OUI_FTL	0x00906500
+#define TXGBE_SFF_VENDOR_OUI_AVAGO	0x00176A00
+#define TXGBE_SFF_VENDOR_OUI_INTEL	0x001B2100
+
+/* EEPROM (dev_addr = 0xA0) */
+#define TXGBE_I2C_EEPROM_DEV_ADDR	0xA0
+#define TXGBE_SFF_IDENTIFIER		0x00
+#define TXGBE_SFF_IDENTIFIER_SFP	0x03
+#define TXGBE_SFF_VENDOR_OUI_BYTE0	0x25
+#define TXGBE_SFF_VENDOR_OUI_BYTE1	0x26
+#define TXGBE_SFF_VENDOR_OUI_BYTE2	0x27
+#define TXGBE_SFF_1GBE_COMP_CODES	0x06
+#define TXGBE_SFF_10GBE_COMP_CODES	0x03
+#define TXGBE_SFF_CABLE_TECHNOLOGY	0x08
+#define   TXGBE_SFF_CABLE_DA_PASSIVE    0x4
+#define   TXGBE_SFF_CABLE_DA_ACTIVE     0x8
+#define TXGBE_SFF_CABLE_SPEC_COMP	0x3C
+#define TXGBE_SFF_SFF_8472_SWAP		0x5C
+#define TXGBE_SFF_SFF_8472_COMP		0x5E
+#define TXGBE_SFF_SFF_8472_OSCB		0x6E
+#define TXGBE_SFF_SFF_8472_ESCB		0x76
+
+#define TXGBE_SFF_IDENTIFIER_QSFP_PLUS	0x0D
+#define TXGBE_SFF_QSFP_VENDOR_OUI_BYTE0	0xA5
+#define TXGBE_SFF_QSFP_VENDOR_OUI_BYTE1	0xA6
+#define TXGBE_SFF_QSFP_VENDOR_OUI_BYTE2	0xA7
+#define TXGBE_SFF_QSFP_CONNECTOR	0x82
+#define TXGBE_SFF_QSFP_10GBE_COMP	0x83
+#define TXGBE_SFF_QSFP_1GBE_COMP	0x86
+#define TXGBE_SFF_QSFP_CABLE_LENGTH	0x92
+#define TXGBE_SFF_QSFP_DEVICE_TECH	0x93
+
+/* Bitmasks */
+#define TXGBE_SFF_DA_SPEC_ACTIVE_LIMITING	0x4
+#define TXGBE_SFF_1GBASESX_CAPABLE		0x1
+#define TXGBE_SFF_1GBASELX_CAPABLE		0x2
+#define TXGBE_SFF_1GBASET_CAPABLE		0x8
+#define TXGBE_SFF_10GBASESR_CAPABLE		0x10
+#define TXGBE_SFF_10GBASELR_CAPABLE		0x20
+#define TXGBE_SFF_SOFT_RS_SELECT_MASK		0x8
+#define TXGBE_SFF_SOFT_RS_SELECT_10G		0x8
+#define TXGBE_SFF_SOFT_RS_SELECT_1G		0x0
+#define TXGBE_SFF_ADDRESSING_MODE		0x4
+#define TXGBE_SFF_QSFP_DA_ACTIVE_CABLE		0x1
+#define TXGBE_SFF_QSFP_DA_PASSIVE_CABLE		0x8
+#define TXGBE_SFF_QSFP_CONNECTOR_NOT_SEPARABLE	0x23
+#define TXGBE_SFF_QSFP_TRANSMITER_850NM_VCSEL	0x0
+#define TXGBE_I2C_EEPROM_READ_MASK		0x100
+#define TXGBE_I2C_EEPROM_STATUS_MASK		0x3
+#define TXGBE_I2C_EEPROM_STATUS_NO_OPERATION	0x0
+#define TXGBE_I2C_EEPROM_STATUS_PASS		0x1
+#define TXGBE_I2C_EEPROM_STATUS_FAIL		0x2
+#define TXGBE_I2C_EEPROM_STATUS_IN_PROGRESS	0x3
+
+/* EEPROM for SFF-8472 (dev_addr = 0xA2) */
+#define TXGBE_I2C_EEPROM_DEV_ADDR2	0xA2
+
+/* SFP+ SFF-8472 Compliance */
+#define TXGBE_SFF_SFF_8472_UNSUP	0x00
+
+/******************************************************************************
+ * PHY MDIO Registers:
+ ******************************************************************************/
+#define TXGBE_MAX_PHY_ADDR		32
+/* PHY IDs */
+#define TXGBE_PHYID_MTD3310             0x00000000U
+#define TXGBE_PHYID_TN1010              0x00A19410U
+#define TXGBE_PHYID_QT2022              0x0043A400U
+#define TXGBE_PHYID_ATH                 0x03429050U
+
+/* (dev_type = 1) */
+#define TXGBE_MD_DEV_PMA_PMD		0x1
+#define TXGBE_MD_PHY_ID_HIGH		0x2 /* PHY ID High Reg*/
+#define TXGBE_MD_PHY_ID_LOW		0x3 /* PHY ID Low Reg*/
+#define   TXGBE_PHY_REVISION_MASK	0xFFFFFFF0
+#define TXGBE_MD_PHY_SPEED_ABILITY	0x4 /* Speed Ability Reg */
+#define TXGBE_MD_PHY_SPEED_10G		0x0001 /* 10G capable */
+#define TXGBE_MD_PHY_SPEED_1G		0x0010 /* 1G capable */
+#define TXGBE_MD_PHY_SPEED_100M		0x0020 /* 100M capable */
+#define TXGBE_MD_PHY_EXT_ABILITY	0xB /* Ext Ability Reg */
+#define TXGBE_MD_PHY_10GBASET_ABILITY	0x0004 /* 10GBaseT capable */
+#define TXGBE_MD_PHY_1000BASET_ABILITY	0x0020 /* 1000BaseT capable */
+#define TXGBE_MD_PHY_100BASETX_ABILITY	0x0080 /* 100BaseTX capable */
+#define TXGBE_MD_PHY_SET_LOW_POWER_MODE	0x0800 /* Set low power mode */
+
+#define TXGBE_MD_TX_VENDOR_ALARMS_3	0xCC02 /* Vendor Alarms 3 Reg */
+#define TXGBE_MD_PMA_PMD_SDA_SCL_ADDR	0xC30A /* PHY_XS SDA/SCL Addr Reg */
+#define TXGBE_MD_PMA_PMD_SDA_SCL_DATA	0xC30B /* PHY_XS SDA/SCL Data Reg */
+#define TXGBE_MD_PMA_PMD_SDA_SCL_STAT	0xC30C /* PHY_XS SDA/SCL Status Reg */
+
+#define TXGBE_MD_FW_REV_LO		0xC011
+#define TXGBE_MD_FW_REV_HI		0xC012
+
+#define TXGBE_TN_LASI_STATUS_REG	0x9005
+#define TXGBE_TN_LASI_STATUS_TEMP_ALARM	0x0008
+
+/* (dev_type = 3) */
+#define TXGBE_MD_DEV_PCS	0x3
+#define TXGBE_PCRC8ECL		0x0E810 /* PCR CRC-8 Error Count Lo */
+#define TXGBE_PCRC8ECH		0x0E811 /* PCR CRC-8 Error Count Hi */
+#define   TXGBE_PCRC8ECH_MASK	0x1F
+#define TXGBE_LDPCECL		0x0E820 /* PCR Uncorrected Error Count Lo */
+#define TXGBE_LDPCECH		0x0E821 /* PCR Uncorrected Error Count Hi */
+
+/* (dev_type = 4) */
+#define TXGBE_MD_DEV_PHY_XS		0x4
+#define TXGBE_MD_PHY_XS_CONTROL		0x0 /* PHY_XS Control Reg */
+#define TXGBE_MD_PHY_XS_RESET		0x8000 /* PHY_XS Reset */
+
+/* (dev_type = 7) */
+#define TXGBE_MD_DEV_AUTO_NEG		0x7
+
+#define TXGBE_MD_AUTO_NEG_CONTROL	   0x0 /* AUTO_NEG Control Reg */
+#define TXGBE_MD_AUTO_NEG_STATUS           0x1 /* AUTO_NEG Status Reg */
+#define TXGBE_MD_AUTO_NEG_VENDOR_STAT      0xC800 /* AUTO_NEG Vendor Status Reg */
+#define TXGBE_MD_AUTO_NEG_VENDOR_TX_ALARM  0xCC00 /* AUTO_NEG Vendor TX Reg */
+#define TXGBE_MD_AUTO_NEG_VENDOR_TX_ALARM2 0xCC01 /* AUTO_NEG Vendor Tx Reg */
+#define TXGBE_MD_AUTO_NEG_VEN_LSC	   0x1 /* AUTO_NEG Vendor Tx LSC */
+#define TXGBE_MD_AUTO_NEG_ADVT		   0x10 /* AUTO_NEG Advt Reg */
+#define   TXGBE_TAF_SYM_PAUSE		   MS16(10, 0x3)
+#define   TXGBE_TAF_ASM_PAUSE		   MS16(11, 0x3)
+
+#define TXGBE_MD_AUTO_NEG_LP		0x13 /* AUTO_NEG LP Status Reg */
+#define TXGBE_MD_AUTO_NEG_EEE_ADVT	0x3C /* AUTO_NEG EEE Advt Reg */
+/* PHY address definitions for new protocol MDIO commands */
+#define TXGBE_MII_10GBASE_T_AUTONEG_CTRL_REG	0x20   /* 10G Control Reg */
+#define TXGBE_MII_AUTONEG_VENDOR_PROVISION_1_REG 0xC400 /* 1G Provisioning 1 */
+#define TXGBE_MII_AUTONEG_XNP_TX_REG		0x17   /* 1G XNP Transmit */
+#define TXGBE_MII_AUTONEG_ADVERTISE_REG		0x10   /* 100M Advertisement */
+#define TXGBE_MII_10GBASE_T_ADVERTISE		0x1000 /* full duplex, bit:12*/
+#define TXGBE_MII_1GBASE_T_ADVERTISE_XNP_TX	0x4000 /* full duplex, bit:14*/
+#define TXGBE_MII_1GBASE_T_ADVERTISE		0x8000 /* full duplex, bit:15*/
+#define TXGBE_MII_2_5GBASE_T_ADVERTISE		0x0400
+#define TXGBE_MII_5GBASE_T_ADVERTISE		0x0800
+#define TXGBE_MII_100BASE_T_ADVERTISE		0x0100 /* full duplex, bit:8 */
+#define TXGBE_MII_100BASE_T_ADVERTISE_HALF	0x0080 /* half duplex, bit:7 */
+#define TXGBE_MII_RESTART			0x200
+#define TXGBE_MII_AUTONEG_COMPLETE		0x20
+#define TXGBE_MII_AUTONEG_LINK_UP		0x04
+#define TXGBE_MII_AUTONEG_REG			0x0
+#define TXGBE_MD_PMA_TX_VEN_LASI_INT_MASK 0xD401 /* PHY TX Vendor LASI */
+#define TXGBE_MD_PMA_TX_VEN_LASI_INT_EN   0x1 /* PHY TX Vendor LASI enable */
+#define TXGBE_MD_PMD_STD_TX_DISABLE_CNTR 0x9 /* Standard Transmit Dis Reg */
+#define TXGBE_MD_PMD_GLOBAL_TX_DISABLE 0x0001 /* PMD Global Transmit Dis */
+
+/* (dev_type = 30) */
+#define TXGBE_MD_DEV_VENDOR_1	30
+#define TXGBE_MD_DEV_XFI_DSP	30
+#define TNX_FW_REV		0xB
+#define TXGBE_MD_VENDOR_SPECIFIC_1_CONTROL		0x0 /* VS1 Ctrl Reg */
+#define TXGBE_MD_VENDOR_SPECIFIC_1_STATUS		0x1 /* VS1 Status Reg */
+#define TXGBE_MD_VENDOR_SPECIFIC_1_LINK_STATUS		0x0008 /* 1 = Link Up */
+#define TXGBE_MD_VENDOR_SPECIFIC_1_SPEED_STATUS		0x0010 /* 0-10G, 1-1G */
+#define TXGBE_MD_VENDOR_SPECIFIC_1_10G_SPEED		0x0018
+#define TXGBE_MD_VENDOR_SPECIFIC_1_1G_SPEED		0x0010
+
+/* (dev_type = 31) */
+#define TXGBE_MD_DEV_GENERAL          31
+#define TXGBE_MD_PORT_CTRL            0xF001
+#define   TXGBE_MD_PORT_CTRL_RESET    MS16(14, 0x1)
+
+/******************************************************************************
+ * SFP I2C Registers:
+ ******************************************************************************/
+#define TXGBE_I2C_SLAVEADDR            (0x50)
+
+bool txgbe_validate_phy_addr(struct txgbe_hw *hw, u32 phy_addr);
+enum txgbe_phy_type txgbe_get_phy_type_from_id(u32 phy_id);
+s32 txgbe_get_phy_id(struct txgbe_hw *hw);
+s32 txgbe_identify_phy(struct txgbe_hw *hw);
+
+/* PHY specific */
+s32 txgbe_identify_module(struct txgbe_hw *hw);
+s32 txgbe_identify_sfp_module(struct txgbe_hw *hw);
+s32 txgbe_identify_qsfp_module(struct txgbe_hw *hw);
+
+#endif /* _TXGBE_PHY_H_ */
diff --git a/drivers/net/txgbe/base/txgbe_type.h b/drivers/net/txgbe/base/txgbe_type.h
index f8ac41fe9..9bbb04d20 100644
--- a/drivers/net/txgbe/base/txgbe_type.h
+++ b/drivers/net/txgbe/base/txgbe_type.h
@@ -363,9 +363,16 @@ struct txgbe_phy_info {
 				       u8 value);
 
 	enum txgbe_phy_type type;
+	u32 addr;
+	u32 id;
 	enum txgbe_sfp_type sfp_type;
 	bool sfp_setup_needed;
+	u32 revision;
+	u32 media_type;
+	u32 phy_semaphore_mask;
 	bool reset_disable;
+	bool qsfp_shared_i2c_bus;
+	u32 nw_mng_if_sel;
 };
 
 struct txgbe_mbx_info {
-- 
2.18.4




^ permalink raw reply	[flat|nested] 49+ messages in thread

* [dpdk-dev] [PATCH v1 10/42] net/txgbe: add module identify
  2020-09-01 11:50 [dpdk-dev] [PATCH v1 01/42] net/txgbe: add build and doc infrastructure Jiawen Wu
                   ` (7 preceding siblings ...)
  2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 09/42] net/txgbe: add PHY init Jiawen Wu
@ 2020-09-01 11:50 ` Jiawen Wu
  2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 11/42] net/txgbe: add PHY reset Jiawen Wu
                   ` (32 subsequent siblings)
  41 siblings, 0 replies; 49+ messages in thread
From: Jiawen Wu @ 2020-09-01 11:50 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Add SFP and QSFP module identification, along with I2C start and stop.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/base/txgbe_eeprom.h |   1 +
 drivers/net/txgbe/base/txgbe_hw.c     |   4 +
 drivers/net/txgbe/base/txgbe_phy.c    | 590 +++++++++++++++++++++++++-
 drivers/net/txgbe/base/txgbe_phy.h    |  12 +
 drivers/net/txgbe/base/txgbe_type.h   |   1 +
 5 files changed, 605 insertions(+), 3 deletions(-)

diff --git a/drivers/net/txgbe/base/txgbe_eeprom.h b/drivers/net/txgbe/base/txgbe_eeprom.h
index 5858e185c..29973e624 100644
--- a/drivers/net/txgbe/base/txgbe_eeprom.h
+++ b/drivers/net/txgbe/base/txgbe_eeprom.h
@@ -23,6 +23,7 @@
 #define TXGBE_EEPROM_VERSION_H          0x1E
 #define TXGBE_ISCSI_BOOT_CONFIG         0x07
 
+#define TXGBE_DEVICE_CAPS_ALLOW_ANY_SFP		0x1
 
 s32 txgbe_init_eeprom_params(struct txgbe_hw *hw);
 s32 txgbe_calc_eeprom_checksum(struct txgbe_hw *hw);
diff --git a/drivers/net/txgbe/base/txgbe_hw.c b/drivers/net/txgbe/base/txgbe_hw.c
index 8090d68f9..64fc14478 100644
--- a/drivers/net/txgbe/base/txgbe_hw.c
+++ b/drivers/net/txgbe/base/txgbe_hw.c
@@ -341,6 +341,10 @@ s32 txgbe_init_ops_pf(struct txgbe_hw *hw)
 	/* PHY */
 	phy->identify = txgbe_identify_phy;
 	phy->init = txgbe_init_phy_raptor;
+	phy->read_i2c_byte = txgbe_read_i2c_byte;
+	phy->write_i2c_byte = txgbe_write_i2c_byte;
+	phy->read_i2c_eeprom = txgbe_read_i2c_eeprom;
+	phy->write_i2c_eeprom = txgbe_write_i2c_eeprom;
 
 	/* MAC */
 	mac->init_hw = txgbe_init_hw;
diff --git a/drivers/net/txgbe/base/txgbe_phy.c b/drivers/net/txgbe/base/txgbe_phy.c
index f2f79475c..540bc9ce9 100644
--- a/drivers/net/txgbe/base/txgbe_phy.c
+++ b/drivers/net/txgbe/base/txgbe_phy.c
@@ -7,6 +7,9 @@
 #include "txgbe_mng.h"
 #include "txgbe_phy.h"
 
+STATIC void txgbe_i2c_start(struct txgbe_hw *hw);
+STATIC void txgbe_i2c_stop(struct txgbe_hw *hw);
+
 /**
  * txgbe_identify_extphy - Identify a single address for a PHY
  * @hw: pointer to hardware structure
@@ -235,8 +238,204 @@ s32 txgbe_identify_module(struct txgbe_hw *hw)
  **/
 s32 txgbe_identify_sfp_module(struct txgbe_hw *hw)
 {
-	RTE_SET_USED(hw);
-	return 0;
+	s32 err = TXGBE_ERR_PHY_ADDR_INVALID;
+	u32 vendor_oui = 0;
+	enum txgbe_sfp_type stored_sfp_type = hw->phy.sfp_type;
+	u8 identifier = 0;
+	u8 comp_codes_1g = 0;
+	u8 comp_codes_10g = 0;
+	u8 oui_bytes[3] = {0, 0, 0};
+	u8 cable_tech = 0;
+	u8 cable_spec = 0;
+	u16 enforce_sfp = 0;
+
+	DEBUGFUNC("txgbe_identify_sfp_module");
+
+	if (hw->phy.media_type != txgbe_media_type_fiber) {
+		hw->phy.sfp_type = txgbe_sfp_type_not_present;
+		return TXGBE_ERR_SFP_NOT_PRESENT;
+	}
+
+	err = hw->phy.read_i2c_eeprom(hw, TXGBE_SFF_IDENTIFIER,
+					     &identifier);
+	if (err != 0) {
+ERR_I2C:
+		hw->phy.sfp_type = txgbe_sfp_type_not_present;
+		if (hw->phy.type != txgbe_phy_nl) {
+			hw->phy.id = 0;
+			hw->phy.type = txgbe_phy_unknown;
+		}
+		return TXGBE_ERR_SFP_NOT_PRESENT;
+	}
+
+	if (identifier != TXGBE_SFF_IDENTIFIER_SFP) {
+		hw->phy.type = txgbe_phy_sfp_unsupported;
+		return TXGBE_ERR_SFP_NOT_SUPPORTED;
+	}
+
+	err = hw->phy.read_i2c_eeprom(hw, TXGBE_SFF_1GBE_COMP_CODES,
+					     &comp_codes_1g);
+	if (err != 0)
+		goto ERR_I2C;
+
+	err = hw->phy.read_i2c_eeprom(hw, TXGBE_SFF_10GBE_COMP_CODES,
+					     &comp_codes_10g);
+	if (err != 0)
+		goto ERR_I2C;
+
+	err = hw->phy.read_i2c_eeprom(hw, TXGBE_SFF_CABLE_TECHNOLOGY,
+					     &cable_tech);
+	if (err != 0)
+		goto ERR_I2C;
+
+	 /* ID Module
+	  * =========
+	  * 0   SFP_DA_CU
+	  * 1   SFP_SR
+	  * 2   SFP_LR
+	  * 3   SFP_DA_CORE0 - chip-specific
+	  * 4   SFP_DA_CORE1 - chip-specific
+	  * 5   SFP_SR/LR_CORE0 - chip-specific
+	  * 6   SFP_SR/LR_CORE1 - chip-specific
+	  * 7   SFP_act_lmt_DA_CORE0 - chip-specific
+	  * 8   SFP_act_lmt_DA_CORE1 - chip-specific
+	  * 9   SFP_1g_cu_CORE0 - chip-specific
+	  * 10  SFP_1g_cu_CORE1 - chip-specific
+	  * 11  SFP_1g_sx_CORE0 - chip-specific
+	  * 12  SFP_1g_sx_CORE1 - chip-specific
+	  */
+	if (cable_tech & TXGBE_SFF_CABLE_DA_PASSIVE) {
+		if (hw->bus.lan_id == 0)
+			hw->phy.sfp_type = txgbe_sfp_type_da_cu_core0;
+		else
+			hw->phy.sfp_type = txgbe_sfp_type_da_cu_core1;
+	} else if (cable_tech & TXGBE_SFF_CABLE_DA_ACTIVE) {
+		err = hw->phy.read_i2c_eeprom(hw,
+			TXGBE_SFF_CABLE_SPEC_COMP, &cable_spec);
+		if (err != 0)
+			goto ERR_I2C;
+		if (cable_spec & TXGBE_SFF_DA_SPEC_ACTIVE_LIMITING) {
+			hw->phy.sfp_type = (hw->bus.lan_id == 0
+				? txgbe_sfp_type_da_act_lmt_core0
+				: txgbe_sfp_type_da_act_lmt_core1);
+		} else {
+			hw->phy.sfp_type = txgbe_sfp_type_unknown;
+		}
+	} else if (comp_codes_10g &
+		   (TXGBE_SFF_10GBASESR_CAPABLE |
+		    TXGBE_SFF_10GBASELR_CAPABLE)) {
+		hw->phy.sfp_type = (hw->bus.lan_id == 0
+				? txgbe_sfp_type_srlr_core0
+				: txgbe_sfp_type_srlr_core1);
+	} else if (comp_codes_1g & TXGBE_SFF_1GBASET_CAPABLE) {
+		hw->phy.sfp_type = (hw->bus.lan_id == 0
+				? txgbe_sfp_type_1g_cu_core0
+				: txgbe_sfp_type_1g_cu_core1);
+	} else if (comp_codes_1g & TXGBE_SFF_1GBASESX_CAPABLE) {
+		hw->phy.sfp_type = (hw->bus.lan_id == 0
+				? txgbe_sfp_type_1g_sx_core0
+				: txgbe_sfp_type_1g_sx_core1);
+	} else if (comp_codes_1g & TXGBE_SFF_1GBASELX_CAPABLE) {
+		hw->phy.sfp_type = (hw->bus.lan_id == 0
+				? txgbe_sfp_type_1g_lx_core0
+				: txgbe_sfp_type_1g_lx_core1);
+	} else {
+		hw->phy.sfp_type = txgbe_sfp_type_unknown;
+	}
+
+	if (hw->phy.sfp_type != stored_sfp_type)
+		hw->phy.sfp_setup_needed = true;
+
+	/* Determine if the SFP+ PHY is dual speed or not. */
+	hw->phy.multispeed_fiber = false;
+	if (((comp_codes_1g & TXGBE_SFF_1GBASESX_CAPABLE) &&
+	     (comp_codes_10g & TXGBE_SFF_10GBASESR_CAPABLE)) ||
+	    ((comp_codes_1g & TXGBE_SFF_1GBASELX_CAPABLE) &&
+	     (comp_codes_10g & TXGBE_SFF_10GBASELR_CAPABLE)))
+		hw->phy.multispeed_fiber = true;
+
+	/* Determine PHY vendor */
+	if (hw->phy.type != txgbe_phy_nl) {
+		hw->phy.id = identifier;
+		err = hw->phy.read_i2c_eeprom(hw,
+			TXGBE_SFF_VENDOR_OUI_BYTE0, &oui_bytes[0]);
+		if (err != 0)
+			goto ERR_I2C;
+
+		err = hw->phy.read_i2c_eeprom(hw,
+			TXGBE_SFF_VENDOR_OUI_BYTE1, &oui_bytes[1]);
+		if (err != 0)
+			goto ERR_I2C;
+
+		err = hw->phy.read_i2c_eeprom(hw,
+			TXGBE_SFF_VENDOR_OUI_BYTE2, &oui_bytes[2]);
+		if (err != 0)
+			goto ERR_I2C;
+
+		vendor_oui = ((u32)oui_bytes[0] << 24) |
+			     ((u32)oui_bytes[1] << 16) |
+			     ((u32)oui_bytes[2] << 8);
+		switch (vendor_oui) {
+		case TXGBE_SFF_VENDOR_OUI_TYCO:
+			if (cable_tech & TXGBE_SFF_CABLE_DA_PASSIVE)
+				hw->phy.type = txgbe_phy_sfp_tyco_passive;
+			break;
+		case TXGBE_SFF_VENDOR_OUI_FTL:
+			if (cable_tech & TXGBE_SFF_CABLE_DA_ACTIVE)
+				hw->phy.type = txgbe_phy_sfp_ftl_active;
+			else
+				hw->phy.type = txgbe_phy_sfp_ftl;
+			break;
+		case TXGBE_SFF_VENDOR_OUI_AVAGO:
+			hw->phy.type = txgbe_phy_sfp_avago;
+			break;
+		case TXGBE_SFF_VENDOR_OUI_INTEL:
+			hw->phy.type = txgbe_phy_sfp_intel;
+			break;
+		default:
+			if (cable_tech & TXGBE_SFF_CABLE_DA_PASSIVE)
+				hw->phy.type = txgbe_phy_sfp_unknown_passive;
+			else if (cable_tech & TXGBE_SFF_CABLE_DA_ACTIVE)
+				hw->phy.type = txgbe_phy_sfp_unknown_active;
+			else
+				hw->phy.type = txgbe_phy_sfp_unknown;
+			break;
+		}
+	}
+
+	/* Allow any DA cable vendor */
+	if (cable_tech & (TXGBE_SFF_CABLE_DA_PASSIVE |
+			  TXGBE_SFF_CABLE_DA_ACTIVE)) {
+		return 0;
+	}
+
+	/* Verify supported 1G SFP modules */
+	if (comp_codes_10g == 0 &&
+	    !(hw->phy.sfp_type == txgbe_sfp_type_1g_cu_core1 ||
+	      hw->phy.sfp_type == txgbe_sfp_type_1g_cu_core0 ||
+	      hw->phy.sfp_type == txgbe_sfp_type_1g_lx_core0 ||
+	      hw->phy.sfp_type == txgbe_sfp_type_1g_lx_core1 ||
+	      hw->phy.sfp_type == txgbe_sfp_type_1g_sx_core0 ||
+	      hw->phy.sfp_type == txgbe_sfp_type_1g_sx_core1)) {
+		hw->phy.type = txgbe_phy_sfp_unsupported;
+		return TXGBE_ERR_SFP_NOT_SUPPORTED;
+	}
+
+	hw->mac.get_device_caps(hw, &enforce_sfp);
+	if (!(enforce_sfp & TXGBE_DEVICE_CAPS_ALLOW_ANY_SFP) &&
+	    !hw->allow_unsupported_sfp &&
+	    !(hw->phy.sfp_type == txgbe_sfp_type_1g_cu_core0 ||
+	      hw->phy.sfp_type == txgbe_sfp_type_1g_cu_core1 ||
+	      hw->phy.sfp_type == txgbe_sfp_type_1g_lx_core0 ||
+	      hw->phy.sfp_type == txgbe_sfp_type_1g_lx_core1 ||
+	      hw->phy.sfp_type == txgbe_sfp_type_1g_sx_core0 ||
+	      hw->phy.sfp_type == txgbe_sfp_type_1g_sx_core1)) {
+		DEBUGOUT("SFP+ module not supported\n");
+		hw->phy.type = txgbe_phy_sfp_unsupported;
+		return TXGBE_ERR_SFP_NOT_SUPPORTED;
+	}
+
+	return err;
 }
 
 /**
@@ -247,7 +446,392 @@ s32 txgbe_identify_sfp_module(struct txgbe_hw *hw)
  **/
 s32 txgbe_identify_qsfp_module(struct txgbe_hw *hw)
 {
-	RTE_SET_USED(hw);
+	s32 err = TXGBE_ERR_PHY_ADDR_INVALID;
+	u32 vendor_oui = 0;
+	enum txgbe_sfp_type stored_sfp_type = hw->phy.sfp_type;
+	u8 identifier = 0;
+	u8 comp_codes_1g = 0;
+	u8 comp_codes_10g = 0;
+	u8 oui_bytes[3] = {0, 0, 0};
+	u16 enforce_sfp = 0;
+	u8 connector = 0;
+	u8 cable_length = 0;
+	u8 device_tech = 0;
+	bool active_cable = false;
+
+	DEBUGFUNC("txgbe_identify_qsfp_module");
+
+	if (hw->phy.media_type != txgbe_media_type_fiber_qsfp) {
+		hw->phy.sfp_type = txgbe_sfp_type_not_present;
+		err = TXGBE_ERR_SFP_NOT_PRESENT;
+		goto out;
+	}
+
+	err = hw->phy.read_i2c_eeprom(hw, TXGBE_SFF_IDENTIFIER,
+					     &identifier);
+ERR_I2C:
+	if (err != 0) {
+		hw->phy.sfp_type = txgbe_sfp_type_not_present;
+		hw->phy.id = 0;
+		hw->phy.type = txgbe_phy_unknown;
+		return TXGBE_ERR_SFP_NOT_PRESENT;
+	}
+	if (identifier != TXGBE_SFF_IDENTIFIER_QSFP_PLUS) {
+		hw->phy.type = txgbe_phy_sfp_unsupported;
+		err = TXGBE_ERR_SFP_NOT_SUPPORTED;
+		goto out;
+	}
+
+	hw->phy.id = identifier;
+
+	err = hw->phy.read_i2c_eeprom(hw, TXGBE_SFF_QSFP_10GBE_COMP,
+					     &comp_codes_10g);
+
+	if (err != 0)
+		goto ERR_I2C;
+
+	err = hw->phy.read_i2c_eeprom(hw, TXGBE_SFF_QSFP_1GBE_COMP,
+					     &comp_codes_1g);
+
+	if (err != 0)
+		goto ERR_I2C;
+
+	if (comp_codes_10g & TXGBE_SFF_QSFP_DA_PASSIVE_CABLE) {
+		hw->phy.type = txgbe_phy_qsfp_unknown_passive;
+		if (hw->bus.lan_id == 0)
+			hw->phy.sfp_type = txgbe_sfp_type_da_cu_core0;
+		else
+			hw->phy.sfp_type = txgbe_sfp_type_da_cu_core1;
+	} else if (comp_codes_10g & (TXGBE_SFF_10GBASESR_CAPABLE |
+				     TXGBE_SFF_10GBASELR_CAPABLE)) {
+		if (hw->bus.lan_id == 0)
+			hw->phy.sfp_type = txgbe_sfp_type_srlr_core0;
+		else
+			hw->phy.sfp_type = txgbe_sfp_type_srlr_core1;
+	} else {
+		if (comp_codes_10g & TXGBE_SFF_QSFP_DA_ACTIVE_CABLE)
+			active_cable = true;
+
+		if (!active_cable) {
+			hw->phy.read_i2c_eeprom(hw,
+					TXGBE_SFF_QSFP_CONNECTOR,
+					&connector);
+
+			hw->phy.read_i2c_eeprom(hw,
+					TXGBE_SFF_QSFP_CABLE_LENGTH,
+					&cable_length);
+
+			hw->phy.read_i2c_eeprom(hw,
+					TXGBE_SFF_QSFP_DEVICE_TECH,
+					&device_tech);
+
+			if ((connector ==
+				     TXGBE_SFF_QSFP_CONNECTOR_NOT_SEPARABLE) &&
+			    (cable_length > 0) &&
+			    ((device_tech >> 4) ==
+				     TXGBE_SFF_QSFP_TRANSMITER_850NM_VCSEL))
+				active_cable = true;
+		}
+
+		if (active_cable) {
+			hw->phy.type = txgbe_phy_qsfp_unknown_active;
+			if (hw->bus.lan_id == 0)
+				hw->phy.sfp_type =
+					txgbe_sfp_type_da_act_lmt_core0;
+			else
+				hw->phy.sfp_type =
+					txgbe_sfp_type_da_act_lmt_core1;
+		} else {
+			/* unsupported module type */
+			hw->phy.type = txgbe_phy_sfp_unsupported;
+			err = TXGBE_ERR_SFP_NOT_SUPPORTED;
+			goto out;
+		}
+	}
+
+	if (hw->phy.sfp_type != stored_sfp_type)
+		hw->phy.sfp_setup_needed = true;
+
+	/* Determine if the QSFP+ PHY is dual speed or not. */
+	hw->phy.multispeed_fiber = false;
+	if (((comp_codes_1g & TXGBE_SFF_1GBASESX_CAPABLE) &&
+	   (comp_codes_10g & TXGBE_SFF_10GBASESR_CAPABLE)) ||
+	   ((comp_codes_1g & TXGBE_SFF_1GBASELX_CAPABLE) &&
+	   (comp_codes_10g & TXGBE_SFF_10GBASELR_CAPABLE)))
+		hw->phy.multispeed_fiber = true;
+
+	/* Determine PHY vendor for optical modules */
+	if (comp_codes_10g & (TXGBE_SFF_10GBASESR_CAPABLE |
+			      TXGBE_SFF_10GBASELR_CAPABLE))  {
+		err = hw->phy.read_i2c_eeprom(hw,
+					    TXGBE_SFF_QSFP_VENDOR_OUI_BYTE0,
+					    &oui_bytes[0]);
+
+		if (err != 0)
+			goto ERR_I2C;
+
+		err = hw->phy.read_i2c_eeprom(hw,
+					    TXGBE_SFF_QSFP_VENDOR_OUI_BYTE1,
+					    &oui_bytes[1]);
+
+		if (err != 0)
+			goto ERR_I2C;
+
+		err = hw->phy.read_i2c_eeprom(hw,
+					    TXGBE_SFF_QSFP_VENDOR_OUI_BYTE2,
+					    &oui_bytes[2]);
+
+		if (err != 0)
+			goto ERR_I2C;
+
+		vendor_oui =
+		  ((oui_bytes[0] << 24) |
+		   (oui_bytes[1] << 16) |
+		   (oui_bytes[2] << 8));
+
+		if (vendor_oui == TXGBE_SFF_VENDOR_OUI_INTEL)
+			hw->phy.type = txgbe_phy_qsfp_intel;
+		else
+			hw->phy.type = txgbe_phy_qsfp_unknown;
+
+		hw->mac.get_device_caps(hw, &enforce_sfp);
+		if (!(enforce_sfp & TXGBE_DEVICE_CAPS_ALLOW_ANY_SFP)) {
+			/* Make sure we're a supported PHY type */
+			if (hw->phy.type == txgbe_phy_qsfp_intel) {
+				err = 0;
+			} else {
+				if (hw->allow_unsupported_sfp == true) {
+					DEBUGOUT(
+						"WARNING: Wangxun (R) Network Connections are quality tested using Wangxun (R) Ethernet Optics. "
+						"Using untested modules is not supported and may cause unstable operation or damage to the module or the adapter. "
+						"Wangxun Corporation is not responsible for any harm caused by using untested modules.\n");
+					err = 0;
+				} else {
+					DEBUGOUT("QSFP module not supported\n");
+					hw->phy.type =
+						txgbe_phy_sfp_unsupported;
+					err = TXGBE_ERR_SFP_NOT_SUPPORTED;
+				}
+			}
+		} else {
+			err = 0;
+		}
+	}
+
+out:
+	return err;
+}
+
+/**
+ *  txgbe_read_i2c_eeprom - Reads 8 bit EEPROM word over I2C interface
+ *  @hw: pointer to hardware structure
+ *  @byte_offset: EEPROM byte offset to read
+ *  @eeprom_data: value read
+ *
+ *  Performs byte read operation to SFP module's EEPROM over I2C interface.
+ **/
+s32 txgbe_read_i2c_eeprom(struct txgbe_hw *hw, u8 byte_offset,
+				  u8 *eeprom_data)
+{
+	DEBUGFUNC("txgbe_read_i2c_eeprom");
+
+	return hw->phy.read_i2c_byte(hw, byte_offset,
+					 TXGBE_I2C_EEPROM_DEV_ADDR,
+					 eeprom_data);
+}
+
+/**
+ *  txgbe_write_i2c_eeprom - Writes 8 bit EEPROM word over I2C interface
+ *  @hw: pointer to hardware structure
+ *  @byte_offset: EEPROM byte offset to write
+ *  @eeprom_data: value to write
+ *
+ *  Performs byte write operation to SFP module's EEPROM over I2C interface.
+ **/
+s32 txgbe_write_i2c_eeprom(struct txgbe_hw *hw, u8 byte_offset,
+				   u8 eeprom_data)
+{
+	DEBUGFUNC("txgbe_write_i2c_eeprom");
+
+	return hw->phy.write_i2c_byte(hw, byte_offset,
+					  TXGBE_I2C_EEPROM_DEV_ADDR,
+					  eeprom_data);
+}
+
+/**
+ *  txgbe_read_i2c_byte_unlocked - Reads 8 bit word over I2C
+ *  @hw: pointer to hardware structure
+ *  @byte_offset: byte offset to read
+ *  @dev_addr: address to read from
+ *  @data: value read
+ *
+ *  Performs byte read operation to SFP module's EEPROM over I2C interface at
+ *  a specified device address.
+ **/
+s32 txgbe_read_i2c_byte_unlocked(struct txgbe_hw *hw, u8 byte_offset,
+					   u8 dev_addr, u8 *data)
+{
+	UNREFERENCED_PARAMETER(dev_addr);
+
+	DEBUGFUNC("txgbe_read_i2c_byte");
+
+	txgbe_i2c_start(hw);
+
+	/* wait tx empty */
+	if (!po32m(hw, TXGBE_I2CICR, TXGBE_I2CICR_TXEMPTY,
+		TXGBE_I2CICR_TXEMPTY, NULL, 100, 100)) {
+		return -TERR_TIMEOUT;
+	}
+
+	/* read data */
+	wr32(hw, TXGBE_I2CDATA,
+			byte_offset | TXGBE_I2CDATA_STOP);
+	wr32(hw, TXGBE_I2CDATA, TXGBE_I2CDATA_READ);
+
+	/* wait for read complete */
+	if (!po32m(hw, TXGBE_I2CICR, TXGBE_I2CICR_RXFULL,
+		TXGBE_I2CICR_RXFULL, NULL, 100, 100)) {
+		return -TERR_TIMEOUT;
+	}
+
+	txgbe_i2c_stop(hw);
+
+	*data = 0xFF & rd32(hw, TXGBE_I2CDATA);
+
 	return 0;
 }
 
+/**
+ *  txgbe_read_i2c_byte - Reads 8 bit word over I2C
+ *  @hw: pointer to hardware structure
+ *  @byte_offset: byte offset to read
+ *  @dev_addr: address to read from
+ *  @data: value read
+ *
+ *  Performs byte read operation to SFP module's EEPROM over I2C interface at
+ *  a specified device address.
+ **/
+s32 txgbe_read_i2c_byte(struct txgbe_hw *hw, u8 byte_offset,
+				u8 dev_addr, u8 *data)
+{
+	u32 swfw_mask = hw->phy.phy_semaphore_mask;
+	int err = 0;
+
+	if (hw->mac.acquire_swfw_sync(hw, swfw_mask))
+		return TXGBE_ERR_SWFW_SYNC;
+	err = txgbe_read_i2c_byte_unlocked(hw, byte_offset, dev_addr, data);
+	hw->mac.release_swfw_sync(hw, swfw_mask);
+	return err;
+}
+
+/**
+ *  txgbe_write_i2c_byte_unlocked - Writes 8 bit word over I2C
+ *  @hw: pointer to hardware structure
+ *  @byte_offset: byte offset to write
+ *  @dev_addr: address to write to
+ *  @data: value to write
+ *
+ *  Performs byte write operation to SFP module's EEPROM over I2C interface at
+ *  a specified device address.
+ **/
+s32 txgbe_write_i2c_byte_unlocked(struct txgbe_hw *hw, u8 byte_offset,
+					    u8 dev_addr, u8 data)
+{
+	UNREFERENCED_PARAMETER(dev_addr);
+
+	DEBUGFUNC("txgbe_write_i2c_byte");
+
+	txgbe_i2c_start(hw);
+
+	/* wait tx empty */
+	if (!po32m(hw, TXGBE_I2CICR, TXGBE_I2CICR_TXEMPTY,
+		TXGBE_I2CICR_TXEMPTY, NULL, 100, 100)) {
+		return -TERR_TIMEOUT;
+	}
+
+	wr32(hw, TXGBE_I2CDATA, byte_offset | TXGBE_I2CDATA_STOP);
+	wr32(hw, TXGBE_I2CDATA, data | TXGBE_I2CDATA_WRITE);
+
+	/* wait for write complete */
+	if (!po32m(hw, TXGBE_I2CICR, TXGBE_I2CICR_RXFULL,
+		TXGBE_I2CICR_RXFULL, NULL, 100, 100)) {
+		return -TERR_TIMEOUT;
+	}
+	txgbe_i2c_stop(hw);
+
+	return 0;
+}
+
+/**
+ *  txgbe_write_i2c_byte - Writes 8 bit word over I2C
+ *  @hw: pointer to hardware structure
+ *  @byte_offset: byte offset to write
+ *  @dev_addr: address to write to
+ *  @data: value to write
+ *
+ *  Performs byte write operation to SFP module's EEPROM over I2C interface at
+ *  a specified device address.
+ **/
+s32 txgbe_write_i2c_byte(struct txgbe_hw *hw, u8 byte_offset,
+				 u8 dev_addr, u8 data)
+{
+	u32 swfw_mask = hw->phy.phy_semaphore_mask;
+	int err = 0;
+
+	if (hw->mac.acquire_swfw_sync(hw, swfw_mask))
+		return TXGBE_ERR_SWFW_SYNC;
+	err = txgbe_write_i2c_byte_unlocked(hw, byte_offset, dev_addr, data);
+	hw->mac.release_swfw_sync(hw, swfw_mask);
+
+	return err;
+}
+
+/**
+ *  txgbe_i2c_start - Sets I2C start condition
+ *  @hw: pointer to hardware structure
+ *
+ *  Sets I2C start condition (High -> Low on SDA while SCL is High)
+ **/
+STATIC void txgbe_i2c_start(struct txgbe_hw *hw)
+{
+	DEBUGFUNC("txgbe_i2c_start");
+
+	wr32(hw, TXGBE_I2CENA, 0);
+
+	wr32(hw, TXGBE_I2CCON,
+		(TXGBE_I2CCON_MENA |
+		TXGBE_I2CCON_SPEED(1) |
+		TXGBE_I2CCON_RESTART |
+		TXGBE_I2CCON_SDIA));
+	wr32(hw, TXGBE_I2CTAR, TXGBE_I2C_SLAVEADDR);
+	wr32(hw, TXGBE_I2CSSSCLHCNT, 600);
+	wr32(hw, TXGBE_I2CSSSCLLCNT, 600);
+	wr32(hw, TXGBE_I2CRXTL, 0); /* 1byte for rx full signal */
+	wr32(hw, TXGBE_I2CTXTL, 4);
+	wr32(hw, TXGBE_I2CSCLTMOUT, 0xFFFFFF);
+	wr32(hw, TXGBE_I2CSDATMOUT, 0xFFFFFF);
+
+	wr32(hw, TXGBE_I2CICM, 0);
+	wr32(hw, TXGBE_I2CENA, 1);
+
+}
+
+/**
+ *  txgbe_i2c_stop - Sets I2C stop condition
+ *  @hw: pointer to hardware structure
+ *
+ *  Sets I2C stop condition (Low -> High on SDA while SCL is High)
+ **/
+STATIC void txgbe_i2c_stop(struct txgbe_hw *hw)
+{
+	DEBUGFUNC("txgbe_i2c_stop");
+
+	/* wait for completion */
+	if (!po32m(hw, TXGBE_I2CSTAT, TXGBE_I2CSTAT_MST,
+		0, NULL, 100, 100)) {
+		DEBUGOUT("i2c stop timeout.\n");
+	}
+
+	wr32(hw, TXGBE_I2CENA, 0);
+}
+
diff --git a/drivers/net/txgbe/base/txgbe_phy.h b/drivers/net/txgbe/base/txgbe_phy.h
index 73ed734b2..b13ac2c60 100644
--- a/drivers/net/txgbe/base/txgbe_phy.h
+++ b/drivers/net/txgbe/base/txgbe_phy.h
@@ -332,5 +332,17 @@ s32 txgbe_identify_phy(struct txgbe_hw *hw);
 s32 txgbe_identify_module(struct txgbe_hw *hw);
 s32 txgbe_identify_sfp_module(struct txgbe_hw *hw);
 s32 txgbe_identify_qsfp_module(struct txgbe_hw *hw);
+s32 txgbe_read_i2c_byte(struct txgbe_hw *hw, u8 byte_offset,
+				u8 dev_addr, u8 *data);
+s32 txgbe_read_i2c_byte_unlocked(struct txgbe_hw *hw, u8 byte_offset,
+					 u8 dev_addr, u8 *data);
+s32 txgbe_write_i2c_byte(struct txgbe_hw *hw, u8 byte_offset,
+				 u8 dev_addr, u8 data);
+s32 txgbe_write_i2c_byte_unlocked(struct txgbe_hw *hw, u8 byte_offset,
+					  u8 dev_addr, u8 data);
+s32 txgbe_read_i2c_eeprom(struct txgbe_hw *hw, u8 byte_offset,
+				  u8 *eeprom_data);
+s32 txgbe_write_i2c_eeprom(struct txgbe_hw *hw, u8 byte_offset,
+				   u8 eeprom_data);
 
 #endif /* _TXGBE_PHY_H_ */
diff --git a/drivers/net/txgbe/base/txgbe_type.h b/drivers/net/txgbe/base/txgbe_type.h
index 9bbb04d20..f4c861497 100644
--- a/drivers/net/txgbe/base/txgbe_type.h
+++ b/drivers/net/txgbe/base/txgbe_type.h
@@ -371,6 +371,7 @@ struct txgbe_phy_info {
 	u32 media_type;
 	u32 phy_semaphore_mask;
 	bool reset_disable;
+	bool multispeed_fiber;
 	bool qsfp_shared_i2c_bus;
 	u32 nw_mng_if_sel;
 };
-- 
2.18.4




^ permalink raw reply	[flat|nested] 49+ messages in thread

* [dpdk-dev] [PATCH v1 11/42] net/txgbe: add PHY reset
  2020-09-01 11:50 [dpdk-dev] [PATCH v1 01/42] net/txgbe: add build and doc infrastructure Jiawen Wu
                   ` (8 preceding siblings ...)
  2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 10/42] net/txgbe: add module identify Jiawen Wu
@ 2020-09-01 11:50 ` Jiawen Wu
  2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 12/42] net/txgbe: add device start and stop Jiawen Wu
                   ` (31 subsequent siblings)
  41 siblings, 0 replies; 49+ messages in thread
From: Jiawen Wu @ 2020-09-01 11:50 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Add PHY reset function, and support reading and writing PHY registers.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/base/txgbe_hw.c  |   5 +
 drivers/net/txgbe/base/txgbe_phy.c | 226 +++++++++++++++++++++++++++++
 drivers/net/txgbe/base/txgbe_phy.h |  10 ++
 3 files changed, 241 insertions(+)

diff --git a/drivers/net/txgbe/base/txgbe_hw.c b/drivers/net/txgbe/base/txgbe_hw.c
index 64fc14478..21745905d 100644
--- a/drivers/net/txgbe/base/txgbe_hw.c
+++ b/drivers/net/txgbe/base/txgbe_hw.c
@@ -341,10 +341,15 @@ s32 txgbe_init_ops_pf(struct txgbe_hw *hw)
 	/* PHY */
 	phy->identify = txgbe_identify_phy;
 	phy->init = txgbe_init_phy_raptor;
+	phy->read_reg = txgbe_read_phy_reg;
+	phy->write_reg = txgbe_write_phy_reg;
+	phy->read_reg_mdi = txgbe_read_phy_reg_mdi;
+	phy->write_reg_mdi = txgbe_write_phy_reg_mdi;
 	phy->read_i2c_byte = txgbe_read_i2c_byte;
 	phy->write_i2c_byte = txgbe_write_i2c_byte;
 	phy->read_i2c_eeprom = txgbe_read_i2c_eeprom;
 	phy->write_i2c_eeprom = txgbe_write_i2c_eeprom;
+	phy->reset = txgbe_reset_phy;
 
 	/* MAC */
 	mac->init_hw = txgbe_init_hw;
diff --git a/drivers/net/txgbe/base/txgbe_phy.c b/drivers/net/txgbe/base/txgbe_phy.c
index 540bc9ce9..5e42dfa23 100644
--- a/drivers/net/txgbe/base/txgbe_phy.c
+++ b/drivers/net/txgbe/base/txgbe_phy.c
@@ -112,6 +112,30 @@ s32 txgbe_identify_phy(struct txgbe_hw *hw)
 	return err;
 }
 
+/**
+ * txgbe_check_reset_blocked - check status of MNG FW veto bit
+ * @hw: pointer to the hardware structure
+ *
+ * This function checks the STAT.MNGVETO bit to see if there are
+ * any constraints on link from manageability.  For MACs that don't
+ * have this bit, just return false since the link cannot be blocked
+ * via this method.
+ **/
+s32 txgbe_check_reset_blocked(struct txgbe_hw *hw)
+{
+	u32 mmngc;
+
+	DEBUGFUNC("txgbe_check_reset_blocked");
+
+	mmngc = rd32(hw, TXGBE_STAT);
+	if (mmngc & TXGBE_STAT_MNGVETO) {
+		DEBUGOUT("MNG_VETO bit detected.\n");
+		return true;
+	}
+
+	return false;
+}
+
 /**
  *  txgbe_validate_phy_addr - Determines phy address is valid
  *  @hw: pointer to hardware structure
@@ -200,6 +224,208 @@ enum txgbe_phy_type txgbe_get_phy_type_from_id(u32 phy_id)
 	return phy_type;
 }
 
+static s32
+txgbe_reset_extphy(struct txgbe_hw *hw)
+{
+	u16 ctrl = 0;
+	int err, i;
+
+	err = hw->phy.read_reg(hw, TXGBE_MD_PORT_CTRL,
+			TXGBE_MD_DEV_GENERAL, &ctrl);
+	if (err != 0)
+		return err;
+	ctrl |= TXGBE_MD_PORT_CTRL_RESET;
+	err = hw->phy.write_reg(hw, TXGBE_MD_PORT_CTRL,
+			TXGBE_MD_DEV_GENERAL, ctrl);
+	if (err != 0)
+		return err;
+
+	/*
+	 * Poll for reset bit to self-clear indicating reset is complete.
+	 * Some PHYs could take up to 3 seconds to complete and need about
+	 * 1.7 usec delay after the reset is complete.
+	 */
+	for (i = 0; i < 30; i++) {
+		msec_delay(100);
+		err = hw->phy.read_reg(hw, TXGBE_MD_PORT_CTRL,
+			TXGBE_MD_DEV_GENERAL, &ctrl);
+		if (err != 0)
+			return err;
+
+		if (!(ctrl & TXGBE_MD_PORT_CTRL_RESET)) {
+			usec_delay(2);
+			break;
+		}
+	}
+
+	if (ctrl & TXGBE_MD_PORT_CTRL_RESET) {
+		err = TXGBE_ERR_RESET_FAILED;
+		DEBUGOUT("PHY reset polling failed to complete.\n");
+	}
+
+	return err;
+}
+
+/**
+ *  txgbe_reset_phy - Performs a PHY reset
+ *  @hw: pointer to hardware structure
+ **/
+s32 txgbe_reset_phy(struct txgbe_hw *hw)
+{
+	s32 err = 0;
+
+	DEBUGFUNC("txgbe_reset_phy");
+
+	if (hw->phy.type == txgbe_phy_unknown)
+		err = txgbe_identify_phy(hw);
+
+	if (err != 0 || hw->phy.type == txgbe_phy_none)
+		return err;
+
+	/* Don't reset PHY if it's shut down due to overtemp. */
+	if (hw->phy.check_overtemp(hw) == TXGBE_ERR_OVERTEMP)
+		return err;
+
+	/* Blocked by MNG FW so bail */
+	if (txgbe_check_reset_blocked(hw))
+		return err;
+
+	switch (hw->phy.type) {
+	case txgbe_phy_cu_mtd:
+		err = txgbe_reset_extphy(hw);
+		break;
+	default:
+		break;
+	}
+
+	return err;
+}
+
+/**
+ *  txgbe_read_phy_reg_mdi - Reads a value from a specified PHY register without
+ *  the SWFW lock
+ *  @hw: pointer to hardware structure
+ *  @reg_addr: 32 bit address of PHY register to read
+ *  @device_type: 5 bit device type
+ *  @phy_data: Pointer to read data from PHY register
+ **/
+s32 txgbe_read_phy_reg_mdi(struct txgbe_hw *hw, u32 reg_addr, u32 device_type,
+			   u16 *phy_data)
+{
+	u32 command, data;
+
+	/* Setup and write the address cycle command */
+	command = TXGBE_MDIOSCA_REG(reg_addr) |
+		  TXGBE_MDIOSCA_DEV(device_type) |
+		  TXGBE_MDIOSCA_PORT(hw->phy.addr);
+	wr32(hw, TXGBE_MDIOSCA, command);
+
+	command = TXGBE_MDIOSCD_CMD_READ |
+		  TXGBE_MDIOSCD_BUSY;
+	wr32(hw, TXGBE_MDIOSCD, command);
+
+	/*
+	 * Check every 10 usec to see if the address cycle completed.
+	 * The MDI Command bit will clear when the operation is
+	 * complete
+	 */
+	if (!po32m(hw, TXGBE_MDIOSCD, TXGBE_MDIOSCD_BUSY,
+		0, NULL, 100, 100)) {
+		DEBUGOUT("PHY address command did not complete\n");
+		return TXGBE_ERR_PHY;
+	}
+
+	data = rd32(hw, TXGBE_MDIOSCD);
+	*phy_data = (u16)TXGBD_MDIOSCD_DAT(data);
+
+	return 0;
+}
+
+/**
+ *  txgbe_read_phy_reg - Reads a value from a specified PHY register
+ *  using the SWFW lock - this function is needed in most cases
+ *  @hw: pointer to hardware structure
+ *  @reg_addr: 32 bit address of PHY register to read
+ *  @device_type: 5 bit device type
+ *  @phy_data: Pointer to read data from PHY register
+ **/
+s32 txgbe_read_phy_reg(struct txgbe_hw *hw, u32 reg_addr,
+			       u32 device_type, u16 *phy_data)
+{
+	s32 err;
+	u32 gssr = hw->phy.phy_semaphore_mask;
+
+	DEBUGFUNC("txgbe_read_phy_reg");
+
+	if (hw->mac.acquire_swfw_sync(hw, gssr))
+		return TXGBE_ERR_SWFW_SYNC;
+
+	err = hw->phy.read_reg_mdi(hw, reg_addr, device_type, phy_data);
+
+	hw->mac.release_swfw_sync(hw, gssr);
+
+	return err;
+}
+
+/**
+ *  txgbe_write_phy_reg_mdi - Writes a value to specified PHY register
+ *  without SWFW lock
+ *  @hw: pointer to hardware structure
+ *  @reg_addr: 32 bit PHY register to write
+ *  @device_type: 5 bit device type
+ *  @phy_data: Data to write to the PHY register
+ **/
+s32 txgbe_write_phy_reg_mdi(struct txgbe_hw *hw, u32 reg_addr,
+				u32 device_type, u16 phy_data)
+{
+	u32 command;
+
+	/* write command */
+	command = TXGBE_MDIOSCA_REG(reg_addr) |
+		  TXGBE_MDIOSCA_DEV(device_type) |
+		  TXGBE_MDIOSCA_PORT(hw->phy.addr);
+	wr32(hw, TXGBE_MDIOSCA, command);
+
+	command = TXGBE_MDIOSCD_CMD_WRITE |
+		  TXGBE_MDIOSCD_DAT(phy_data) |
+		  TXGBE_MDIOSCD_BUSY;
+	wr32(hw, TXGBE_MDIOSCD, command);
+
+	/* wait for completion */
+	if (!po32m(hw, TXGBE_MDIOSCD, TXGBE_MDIOSCD_BUSY,
+		0, NULL, 100, 100)) {
+		DEBUGOUT("PHY write command did not complete\n");
+		return TXGBE_ERR_PHY;
+	}
+
+	return 0;
+}
+
+/**
+ *  txgbe_write_phy_reg - Writes a value to specified PHY register
+ *  using SWFW lock - this function is needed in most cases
+ *  @hw: pointer to hardware structure
+ *  @reg_addr: 32 bit PHY register to write
+ *  @device_type: 5 bit device type
+ *  @phy_data: Data to write to the PHY register
+ **/
+s32 txgbe_write_phy_reg(struct txgbe_hw *hw, u32 reg_addr,
+				u32 device_type, u16 phy_data)
+{
+	s32 err;
+	u32 gssr = hw->phy.phy_semaphore_mask;
+
+	DEBUGFUNC("txgbe_write_phy_reg");
+
+	if (hw->mac.acquire_swfw_sync(hw, gssr))
+		return TXGBE_ERR_SWFW_SYNC;
+
+	err = hw->phy.write_reg_mdi(hw, reg_addr, device_type,
+					 phy_data);
+	hw->mac.release_swfw_sync(hw, gssr);
+
+	return err;
+}
 /**
  *  txgbe_identify_module - Identifies module type
  *  @hw: pointer to hardware structure
diff --git a/drivers/net/txgbe/base/txgbe_phy.h b/drivers/net/txgbe/base/txgbe_phy.h
index b13ac2c60..318dca61c 100644
--- a/drivers/net/txgbe/base/txgbe_phy.h
+++ b/drivers/net/txgbe/base/txgbe_phy.h
@@ -327,6 +327,16 @@ bool txgbe_validate_phy_addr(struct txgbe_hw *hw, u32 phy_addr);
 enum txgbe_phy_type txgbe_get_phy_type_from_id(u32 phy_id);
 s32 txgbe_get_phy_id(struct txgbe_hw *hw);
 s32 txgbe_identify_phy(struct txgbe_hw *hw);
+s32 txgbe_reset_phy(struct txgbe_hw *hw);
+s32 txgbe_read_phy_reg_mdi(struct txgbe_hw *hw, u32 reg_addr, u32 device_type,
+			   u16 *phy_data);
+s32 txgbe_write_phy_reg_mdi(struct txgbe_hw *hw, u32 reg_addr, u32 device_type,
+			    u16 phy_data);
+s32 txgbe_read_phy_reg(struct txgbe_hw *hw, u32 reg_addr,
+			       u32 device_type, u16 *phy_data);
+s32 txgbe_write_phy_reg(struct txgbe_hw *hw, u32 reg_addr,
+				u32 device_type, u16 phy_data);
+s32 txgbe_check_reset_blocked(struct txgbe_hw *hw);
 
 /* PHY specific */
 s32 txgbe_identify_module(struct txgbe_hw *hw);
-- 
2.18.4




^ permalink raw reply	[flat|nested] 49+ messages in thread

* [dpdk-dev] [PATCH v1 12/42] net/txgbe: add device start and stop
  2020-09-01 11:50 [dpdk-dev] [PATCH v1 01/42] net/txgbe: add build and doc infrastructure Jiawen Wu
                   ` (9 preceding siblings ...)
  2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 11/42] net/txgbe: add PHY reset Jiawen Wu
@ 2020-09-01 11:50 ` Jiawen Wu
  2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 13/42] net/txgbe: add interrupt operation Jiawen Wu
                   ` (30 subsequent siblings)
  41 siblings, 0 replies; 49+ messages in thread
From: Jiawen Wu @ 2020-09-01 11:50 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Add device start and stop operations.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/base/txgbe_eeprom.h |   1 +
 drivers/net/txgbe/base/txgbe_hw.c     | 197 ++++++++++++++++-
 drivers/net/txgbe/base/txgbe_hw.h     |   3 +
 drivers/net/txgbe/base/txgbe_type.h   |   8 +-
 drivers/net/txgbe/txgbe_ethdev.c      | 302 +++++++++++++++++++++++++-
 drivers/net/txgbe/txgbe_ethdev.h      |   6 +
 drivers/net/txgbe/txgbe_rxtx.c        |  35 ++-
 7 files changed, 541 insertions(+), 11 deletions(-)

diff --git a/drivers/net/txgbe/base/txgbe_eeprom.h b/drivers/net/txgbe/base/txgbe_eeprom.h
index 29973e624..47b6a2f2b 100644
--- a/drivers/net/txgbe/base/txgbe_eeprom.h
+++ b/drivers/net/txgbe/base/txgbe_eeprom.h
@@ -24,6 +24,7 @@
 #define TXGBE_ISCSI_BOOT_CONFIG         0x07
 
 #define TXGBE_DEVICE_CAPS_ALLOW_ANY_SFP		0x1
+#define TXGBE_DEVICE_CAPS_NO_CROSSTALK_WR	(1 << 7)
 
 s32 txgbe_init_eeprom_params(struct txgbe_hw *hw);
 s32 txgbe_calc_eeprom_checksum(struct txgbe_hw *hw);
diff --git a/drivers/net/txgbe/base/txgbe_hw.c b/drivers/net/txgbe/base/txgbe_hw.c
index 21745905d..215900895 100644
--- a/drivers/net/txgbe/base/txgbe_hw.c
+++ b/drivers/net/txgbe/base/txgbe_hw.c
@@ -20,7 +20,32 @@
  **/
 s32 txgbe_start_hw(struct txgbe_hw *hw)
 {
-	RTE_SET_USED(hw);
+	u16 device_caps;
+
+	DEBUGFUNC("txgbe_start_hw");
+
+	/* Set the media type */
+	hw->phy.media_type = hw->phy.get_media_type(hw);
+
+	/* Clear statistics registers */
+	hw->mac.clear_hw_cntrs(hw);
+
+	/* Cache bit indicating need for crosstalk fix */
+	switch (hw->mac.type) {
+	case txgbe_mac_raptor:
+		hw->mac.get_device_caps(hw, &device_caps);
+		if (device_caps & TXGBE_DEVICE_CAPS_NO_CROSSTALK_WR)
+			hw->need_crosstalk_fix = false;
+		else
+			hw->need_crosstalk_fix = true;
+		break;
+	default:
+		hw->need_crosstalk_fix = false;
+		break;
+	}
+
+	/* Clear adapter stopped flag */
+	hw->adapter_stopped = false;
 
 	return 0;
 }
@@ -34,7 +59,17 @@ s32 txgbe_start_hw(struct txgbe_hw *hw)
  **/
 s32 txgbe_start_hw_gen2(struct txgbe_hw *hw)
 {
-	RTE_SET_USED(hw);
+	u32 i;
+
+	/* Clear the rate limiters */
+	for (i = 0; i < hw->mac.max_tx_queues; i++) {
+		wr32(hw, TXGBE_ARBPOOLIDX, i);
+		wr32(hw, TXGBE_ARBTXRATE, 0);
+	}
+	txgbe_flush(hw);
+
+	/* We need to run link autotry after the driver loads */
+	hw->mac.autotry_restart = true;
 
 	return 0;
 }
@@ -71,6 +106,56 @@ s32 txgbe_init_hw(struct txgbe_hw *hw)
 	return status;
 }
 
+/**
+ *  txgbe_stop_hw - Generic stop Tx/Rx units
+ *  @hw: pointer to hardware structure
+ *
+ *  Sets the adapter_stopped flag within txgbe_hw struct. Clears interrupts,
+ *  disables transmit and receive units. The adapter_stopped flag is used by
+ *  the shared code and drivers to determine if the adapter is in a stopped
+ *  state and should not touch the hardware.
+ **/
+s32 txgbe_stop_hw(struct txgbe_hw *hw)
+{
+	u32 reg_val;
+	u16 i;
+
+	DEBUGFUNC("txgbe_stop_hw");
+
+	/*
+	 * Set the adapter_stopped flag so other driver functions stop touching
+	 * the hardware
+	 */
+	hw->adapter_stopped = true;
+
+	/* Clear interrupt mask to stop interrupts from being generated */
+	wr32(hw, TXGBE_IENMISC, 0);
+	wr32(hw, TXGBE_IMS(0), TXGBE_IMS_MASK);
+	wr32(hw, TXGBE_IMS(1), TXGBE_IMS_MASK);
+
+	/* Clear any pending interrupts, flush previous writes */
+	wr32(hw, TXGBE_ICRMISC, TXGBE_ICRMISC_MASK);
+	wr32(hw, TXGBE_ICR(0), TXGBE_ICR_MASK);
+	wr32(hw, TXGBE_ICR(1), TXGBE_ICR_MASK);
+
+	/* Disable the transmit unit.  Each queue must be disabled. */
+	for (i = 0; i < hw->mac.max_tx_queues; i++)
+		wr32(hw, TXGBE_TXCFG(i), TXGBE_TXCFG_FLUSH);
+
+	/* Disable the receive unit by stopping each queue */
+	for (i = 0; i < hw->mac.max_rx_queues; i++) {
+		reg_val = rd32(hw, TXGBE_RXCFG(i));
+		reg_val &= ~TXGBE_RXCFG_ENA;
+		wr32(hw, TXGBE_RXCFG(i), reg_val);
+	}
+
+	/* flush all queues disables */
+	txgbe_flush(hw);
+	msec_delay(2);
+
+	return 0;
+}
+
 /**
  *  txgbe_validate_mac_addr - Validate MAC address
  *  @mac_addr: pointer to MAC address.
@@ -143,6 +228,24 @@ s32 txgbe_set_rar(struct txgbe_hw *hw, u32 index, u8 *addr, u32 vmdq,
 	return 0;
 }
 
+
+/**
+ *  txgbe_get_device_caps - Get additional device capabilities
+ *  @hw: pointer to hardware structure
+ *  @device_caps: the EEPROM word with the extra device capabilities
+ *
+ *  This function will read the EEPROM location for the device capabilities,
+ *  and return the word through device_caps.
+ **/
+s32 txgbe_get_device_caps(struct txgbe_hw *hw, u16 *device_caps)
+{
+	DEBUGFUNC("txgbe_get_device_caps");
+
+	hw->rom.readw_sw(hw, TXGBE_DEVICE_CAPS, device_caps);
+
+	return 0;
+}
+
 /**
  * txgbe_clear_tx_pending - Clear pending TX work from the PCIe fifo
  * @hw: pointer to the hardware structure
@@ -248,21 +351,26 @@ s32 txgbe_set_mac_type(struct txgbe_hw *hw)
 
 	switch (hw->device_id) {
 	case TXGBE_DEV_ID_RAPTOR_KR_KX_KX4:
+		hw->phy.media_type = txgbe_media_type_backplane;
 		hw->mac.type = txgbe_mac_raptor;
 		break;
 	case TXGBE_DEV_ID_RAPTOR_XAUI:
 	case TXGBE_DEV_ID_RAPTOR_SGMII:
+		hw->phy.media_type = txgbe_media_type_copper;
 		hw->mac.type = txgbe_mac_raptor;
 		break;
 	case TXGBE_DEV_ID_RAPTOR_SFP:
 	case TXGBE_DEV_ID_WX1820_SFP:
+		hw->phy.media_type = txgbe_media_type_fiber;
 		hw->mac.type = txgbe_mac_raptor;
 		break;
 	case TXGBE_DEV_ID_RAPTOR_QSFP:
+		hw->phy.media_type = txgbe_media_type_fiber_qsfp;
 		hw->mac.type = txgbe_mac_raptor;
 		break;
 	case TXGBE_DEV_ID_RAPTOR_VF:
 	case TXGBE_DEV_ID_RAPTOR_VF_HV:
+		hw->phy.media_type = txgbe_media_type_virtual;
 		hw->mac.type = txgbe_mac_raptor_vf;
 		break;
 	default:
@@ -271,8 +379,8 @@ s32 txgbe_set_mac_type(struct txgbe_hw *hw)
 		break;
 	}
 
-	DEBUGOUT("txgbe_set_mac_type found mac: %d, returns: %d\n",
-		  hw->mac.type, err);
+	DEBUGOUT("txgbe_set_mac_type found mac: %d media: %d, returns: %d\n",
+		  hw->mac.type, hw->phy.media_type, err);
 	return err;
 }
 
@@ -325,6 +433,38 @@ s32 txgbe_init_phy_raptor(struct txgbe_hw *hw)
 	return err;
 }
 
+s32 txgbe_setup_sfp_modules(struct txgbe_hw *hw)
+{
+	s32 err = 0;
+
+	DEBUGFUNC("txgbe_setup_sfp_modules");
+
+	if (hw->phy.sfp_type == txgbe_sfp_type_unknown)
+		return 0;
+
+	txgbe_init_mac_link_ops(hw);
+
+	/* PHY config will finish before releasing the semaphore */
+	err = hw->mac.acquire_swfw_sync(hw, TXGBE_MNGSEM_SWPHY);
+	if (err != 0)
+		return TXGBE_ERR_SWFW_SYNC;
+
+	/* Release the semaphore */
+	hw->mac.release_swfw_sync(hw, TXGBE_MNGSEM_SWPHY);
+
+	/* Delay obtaining semaphore again to allow FW access;
+	 * prot_autoc_write uses the semaphore too.
+	 */
+	msec_delay(hw->rom.semaphore_delay);
+
+	if (err) {
+		DEBUGOUT("sfp module setup not complete\n");
+		return TXGBE_ERR_SFP_SETUP_NOT_COMPLETE;
+	}
+
+	return err;
+}
+
 /**
  *  txgbe_init_ops_pf - Inits func ptrs and MAC type
  *  @hw: pointer to hardware structure
@@ -339,6 +479,7 @@ s32 txgbe_init_ops_pf(struct txgbe_hw *hw)
 	struct txgbe_rom_info *rom = &hw->rom;
 
 	/* PHY */
+	phy->get_media_type = txgbe_get_media_type_raptor;
 	phy->identify = txgbe_identify_phy;
 	phy->init = txgbe_init_phy_raptor;
 	phy->read_reg = txgbe_read_phy_reg;
@@ -356,6 +497,8 @@ s32 txgbe_init_ops_pf(struct txgbe_hw *hw)
 	mac->start_hw = txgbe_start_hw_raptor;
 	mac->reset_hw = txgbe_reset_hw;
 
+	mac->get_device_caps = txgbe_get_device_caps;
+
 	/* EEPROM */
 	rom->init_params = txgbe_init_eeprom_params;
 	rom->read16 = txgbe_ee_read16;
@@ -373,6 +516,52 @@ s32 txgbe_init_ops_pf(struct txgbe_hw *hw)
 	return 0;
 }
 
+
+/**
+ *  txgbe_get_media_type_raptor - Get media type
+ *  @hw: pointer to hardware structure
+ *
+ *  Returns the media type (fiber, copper, backplane)
+ **/
+u32 txgbe_get_media_type_raptor(struct txgbe_hw *hw)
+{
+	u32 media_type;
+
+	DEBUGFUNC("txgbe_get_media_type_raptor");
+
+	/* Detect if there is a copper PHY attached. */
+	switch (hw->phy.type) {
+	case txgbe_phy_cu_unknown:
+	case txgbe_phy_tn:
+		media_type = txgbe_media_type_copper;
+		return media_type;
+	default:
+		break;
+	}
+
+	switch (hw->device_id) {
+	case TXGBE_DEV_ID_RAPTOR_KR_KX_KX4:
+		/* Default device ID is mezzanine card KX/KX4 */
+		media_type = txgbe_media_type_backplane;
+		break;
+	case TXGBE_DEV_ID_RAPTOR_SFP:
+	case TXGBE_DEV_ID_WX1820_SFP:
+		media_type = txgbe_media_type_fiber;
+		break;
+	case TXGBE_DEV_ID_RAPTOR_QSFP:
+		media_type = txgbe_media_type_fiber_qsfp;
+		break;
+	case TXGBE_DEV_ID_RAPTOR_XAUI:
+	case TXGBE_DEV_ID_RAPTOR_SGMII:
+		media_type = txgbe_media_type_copper;
+		break;
+	default:
+		media_type = txgbe_media_type_unknown;
+		break;
+	}
+
+	return media_type;
+}
 static int
 txgbe_check_flash_load(struct txgbe_hw *hw, u32 check_bit)
 {
diff --git a/drivers/net/txgbe/base/txgbe_hw.h b/drivers/net/txgbe/base/txgbe_hw.h
index a70b0340a..884d24124 100644
--- a/drivers/net/txgbe/base/txgbe_hw.h
+++ b/drivers/net/txgbe/base/txgbe_hw.h
@@ -17,10 +17,13 @@ s32 txgbe_set_rar(struct txgbe_hw *hw, u32 index, u8 *addr, u32 vmdq,
 			  u32 enable_addr);
 
 s32 txgbe_validate_mac_addr(u8 *mac_addr);
+s32 txgbe_get_device_caps(struct txgbe_hw *hw, u16 *device_caps);
 void txgbe_clear_tx_pending(struct txgbe_hw *hw);
 s32 txgbe_init_shared_code(struct txgbe_hw *hw);
 s32 txgbe_set_mac_type(struct txgbe_hw *hw);
 s32 txgbe_init_ops_pf(struct txgbe_hw *hw);
+u32 txgbe_get_media_type_raptor(struct txgbe_hw *hw);
+s32 txgbe_setup_sfp_modules(struct txgbe_hw *hw);
 void txgbe_init_mac_link_ops(struct txgbe_hw *hw);
 s32 txgbe_reset_hw(struct txgbe_hw *hw);
 s32 txgbe_start_hw_raptor(struct txgbe_hw *hw);
diff --git a/drivers/net/txgbe/base/txgbe_type.h b/drivers/net/txgbe/base/txgbe_type.h
index f4c861497..7eff5c05b 100644
--- a/drivers/net/txgbe/base/txgbe_type.h
+++ b/drivers/net/txgbe/base/txgbe_type.h
@@ -323,8 +323,11 @@ struct txgbe_mac_info {
 	u16 wwpn_prefix;
 
 	u32 num_rar_entries;
+	u32 max_tx_queues;
+	u32 max_rx_queues;
 
 	u8  san_mac_rar_index;
+	bool get_link_status;
 	u64 orig_autoc;  /* cached value of AUTOC */
 	bool orig_link_settings_stored;
 	bool autotry_restart;
@@ -401,11 +404,14 @@ struct txgbe_hw {
 	u16 vendor_id;
 	u16 subsystem_device_id;
 	u16 subsystem_vendor_id;
-
+	bool adapter_stopped;
 	bool allow_unsupported_sfp;
+	bool need_crosstalk_fix;
 
 	uint64_t isb_dma;
 	void IOMEM *isb_mem;
+	u16 nb_rx_queues;
+	u16 nb_tx_queues;
 	enum txgbe_reset_type {
 		TXGBE_LAN_RESET = 0,
 		TXGBE_SW_RESET,
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index 921a75f25..f29bd2112 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -36,6 +36,8 @@
 #include "txgbe_rxtx.h"
 
 static void txgbe_dev_close(struct rte_eth_dev *dev);
+static int txgbe_dev_link_update(struct rte_eth_dev *dev,
+				int wait_to_complete);
 static int txgbe_dev_stats_reset(struct rte_eth_dev *dev);
 
 /*
@@ -52,8 +54,17 @@ static const struct eth_dev_ops txgbe_eth_dev_ops;
 static inline int
 txgbe_is_sfp(struct txgbe_hw *hw)
 {
-	RTE_SET_USED(hw);
-	return 0;
+	switch (hw->phy.type) {
+	case txgbe_phy_sfp_avago:
+	case txgbe_phy_sfp_ftl:
+	case txgbe_phy_sfp_intel:
+	case txgbe_phy_sfp_unknown:
+	case txgbe_phy_sfp_tyco_passive:
+	case txgbe_phy_sfp_unknown_passive:
+		return 1;
+	default:
+		return 0;
+	}
 }
 
 static inline int32_t
@@ -153,6 +164,38 @@ eth_txgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
 
 	err = hw->mac.init_hw(hw);
 
+	/*
+	 * Devices with copper PHYs will fail to initialise if txgbe_init_hw()
+	 * is called too soon after the kernel driver unbinding/binding occurs.
+	 * The failure occurs in txgbe_identify_phy() for all devices,
+	 * but for non-copper devices, txgbe_identify_sfp_module() is
+	 * also called. See txgbe_identify_phy(). The reason for the
+	 * failure is not known, and only occurs when virtualisation features
+	 * are disabled in the BIOS. A delay of 200ms was found to be enough by
+	 * trial-and-error, and is doubled to be safe.
+	 */
+	if (err && (hw->phy.media_type == txgbe_media_type_copper)) {
+		rte_delay_ms(200);
+		err = hw->mac.init_hw(hw);
+	}
+
+	if (err == TXGBE_ERR_SFP_NOT_PRESENT)
+		err = 0;
+
+	if (err == TXGBE_ERR_EEPROM_VERSION) {
+		PMD_INIT_LOG(ERR, "This device is a pre-production adapter/"
+			     "LOM.  Please be aware there may be issues associated "
+			     "with your hardware.");
+		PMD_INIT_LOG(ERR, "If you are experiencing problems "
+			     "please contact your hardware representative "
+			     "who provided you with this hardware.");
+	} else if (err == TXGBE_ERR_SFP_NOT_SUPPORTED)
+		PMD_INIT_LOG(ERR, "Unsupported SFP+ Module");
+	if (err) {
+		PMD_INIT_LOG(ERR, "Hardware Initialization Failure: %d", err);
+		return -EIO;
+	}
+
 	/* Reset the hw statistics */
 	txgbe_dev_stats_reset(eth_dev);
 
@@ -318,17 +361,227 @@ static struct rte_pci_driver rte_txgbe_pmd = {
 	.remove = eth_txgbe_pci_remove,
 };
 
+
+/*
+ * Configure device link speed and setup link.
+ * It returns 0 on success.
+ */
 static int
 txgbe_dev_start(struct rte_eth_dev *dev)
 {
-	RTE_SET_USED(dev);
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+	uint32_t intr_vector = 0;
+	int err;
+	bool link_up = false, negotiate = false;
+	uint32_t speed = 0;
+	uint32_t allowed_speeds = 0;
+	int status;
+	uint32_t *link_speeds;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* TXGBE devices don't support:
+	 *    - half duplex (checked afterwards for valid speeds)
+	 *    - fixed speed: TODO implement
+	 */
+	if (dev->data->dev_conf.link_speeds & ETH_LINK_SPEED_FIXED) {
+		PMD_INIT_LOG(ERR,
+		"Invalid link_speeds for port %u, fixed speed not supported",
+				dev->data->port_id);
+		return -EINVAL;
+	}
+
+	/* Stop the link setup handler before resetting the HW. */
+	rte_eal_alarm_cancel(txgbe_dev_setup_link_alarm_handler, dev);
+
+	/* disable uio/vfio intr/eventfd mapping */
+	rte_intr_disable(intr_handle);
+
+	/* stop adapter */
+	hw->adapter_stopped = 0;
+	txgbe_stop_hw(hw);
+
+	/* reinitialize adapter
+	 * this calls reset and start
+	 */
+	hw->nb_rx_queues = dev->data->nb_rx_queues;
+	hw->nb_tx_queues = dev->data->nb_tx_queues;
+	status = txgbe_pf_reset_hw(hw);
+	if (status != 0)
+		return -1;
+	hw->mac.start_hw(hw);
+	hw->mac.get_link_status = true;
+
+	/* check and configure queue intr-vector mapping */
+	if ((rte_intr_cap_multiple(intr_handle) ||
+	     !RTE_ETH_DEV_SRIOV(dev).active) &&
+	    dev->data->dev_conf.intr_conf.rxq != 0) {
+		intr_vector = dev->data->nb_rx_queues;
+		if (rte_intr_efd_enable(intr_handle, intr_vector))
+			return -1;
+	}
+
+	if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
+		intr_handle->intr_vec =
+			rte_zmalloc("intr_vec",
+				    dev->data->nb_rx_queues * sizeof(int), 0);
+		if (intr_handle->intr_vec == NULL) {
+			PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
+				     " intr_vec", dev->data->nb_rx_queues);
+			return -ENOMEM;
+		}
+	}
+
+	/* initialize transmission unit */
+	txgbe_dev_tx_init(dev);
+
+	/* This can fail when allocating mbufs for descriptor rings */
+	err = txgbe_dev_rx_init(dev);
+	if (err) {
+		PMD_INIT_LOG(ERR, "Unable to initialize RX hardware");
+		goto error;
+	}
+
+	err = txgbe_dev_rxtx_start(dev);
+	if (err < 0) {
+		PMD_INIT_LOG(ERR, "Unable to start rxtx queues");
+		goto error;
+	}
+
+	/* Skip link setup if loopback mode is enabled. */
+	if (hw->mac.type == txgbe_mac_raptor &&
+	    dev->data->dev_conf.lpbk_mode)
+		goto skip_link_setup;
+
+	if (txgbe_is_sfp(hw) && hw->phy.multispeed_fiber) {
+		err = hw->mac.setup_sfp(hw);
+		if (err)
+			goto error;
+	}
+
+	if (hw->phy.media_type == txgbe_media_type_copper) {
+		/* Turn on the copper */
+		hw->phy.set_phy_power(hw, true);
+	} else {
+		/* Turn on the laser */
+		hw->mac.enable_tx_laser(hw);
+	}
+
+	err = hw->mac.check_link(hw, &speed, &link_up, 0);
+	if (err)
+		goto error;
+	dev->data->dev_link.link_status = link_up;
+
+	err = hw->mac.get_link_capabilities(hw, &speed, &negotiate);
+	if (err)
+		goto error;
+
+	allowed_speeds = ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G |
+			ETH_LINK_SPEED_10G;
+
+	link_speeds = &dev->data->dev_conf.link_speeds;
+	if (*link_speeds & ~allowed_speeds) {
+		PMD_INIT_LOG(ERR, "Invalid link setting");
+		goto error;
+	}
+
+	speed = 0x0;
+	if (*link_speeds == ETH_LINK_SPEED_AUTONEG) {
+		speed = (TXGBE_LINK_SPEED_100M_FULL |
+			 TXGBE_LINK_SPEED_1GB_FULL |
+			 TXGBE_LINK_SPEED_10GB_FULL);
+	} else {
+		if (*link_speeds & ETH_LINK_SPEED_10G)
+			speed |= TXGBE_LINK_SPEED_10GB_FULL;
+		if (*link_speeds & ETH_LINK_SPEED_5G)
+			speed |= TXGBE_LINK_SPEED_5GB_FULL;
+		if (*link_speeds & ETH_LINK_SPEED_2_5G)
+			speed |= TXGBE_LINK_SPEED_2_5GB_FULL;
+		if (*link_speeds & ETH_LINK_SPEED_1G)
+			speed |= TXGBE_LINK_SPEED_1GB_FULL;
+		if (*link_speeds & ETH_LINK_SPEED_100M)
+			speed |= TXGBE_LINK_SPEED_100M_FULL;
+	}
+
+	err = hw->mac.setup_link(hw, speed, link_up);
+	if (err)
+		goto error;
+
+skip_link_setup:
+
+	/* enable uio/vfio intr/eventfd mapping */
+	rte_intr_enable(intr_handle);
+
+	/* resume enabled intr since hw reset */
+	txgbe_enable_intr(dev);
+
+	/*
+	 * Update link status right before return, because it may
+	 * start link configuration process in a separate thread.
+	 */
+	txgbe_dev_link_update(dev, 0);
+
 	return 0;
+
+error:
+	PMD_INIT_LOG(ERR, "failure in txgbe_dev_start(): %d", err);
+	return -EIO;
 }
 
+/*
+ * Stop device: disable rx and tx functions to allow for reconfiguring.
+ */
 static void
 txgbe_dev_stop(struct rte_eth_dev *dev)
 {
-	RTE_SET_USED(dev);
+	struct rte_eth_link link;
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+
+	if (hw->adapter_stopped)
+		return;
+
+	PMD_INIT_FUNC_TRACE();
+
+	rte_eal_alarm_cancel(txgbe_dev_setup_link_alarm_handler, dev);
+
+	/* disable interrupts */
+	txgbe_disable_intr(hw);
+
+	/* reset the NIC */
+	txgbe_pf_reset_hw(hw);
+	hw->adapter_stopped = 0;
+
+	/* stop adapter */
+	txgbe_stop_hw(hw);
+
+	if (hw->phy.media_type == txgbe_media_type_copper) {
+		/* Turn off the copper */
+		hw->phy.set_phy_power(hw, false);
+	} else {
+		/* Turn off the laser */
+		hw->mac.disable_tx_laser(hw);
+	}
+
+	/* Clear stored conf */
+	dev->data->scattered_rx = 0;
+	dev->data->lro = 0;
+
+	/* Clear recorded link status */
+	memset(&link, 0, sizeof(link));
+	rte_eth_linkstatus_set(dev, &link);
+
+	/* Clean datapath event and queue/vec mapping */
+	rte_intr_efd_disable(intr_handle);
+	if (intr_handle->intr_vec != NULL) {
+		rte_free(intr_handle->intr_vec);
+		intr_handle->intr_vec = NULL;
+	}
+
+	hw->adapter_stopped = true;
 }
 
 /*
@@ -367,6 +620,31 @@ txgbe_dev_close(struct rte_eth_dev *dev)
 	dev->data->hash_mac_addrs = NULL;
 }
 
+/*
+ * Reset PF device.
+ */
+static int
+txgbe_dev_reset(struct rte_eth_dev *dev)
+{
+	int ret;
+
+	/* When a DPDK PMD PF begin to reset PF port, it should notify all
+	 * its VF to make them align with it. The detailed notification
+	 * mechanism is PMD specific. As to txgbe PF, it is rather complex.
+	 * To avoid unexpected behavior in VF, currently reset of PF with
+	 * SR-IOV activation is not supported. It might be supported later.
+	 */
+	if (dev->data->sriov.active)
+		return -ENOTSUP;
+
+	ret = eth_txgbe_dev_uninit(dev);
+	if (ret)
+		return ret;
+
+	ret = eth_txgbe_dev_init(dev, NULL);
+
+	return ret;
+}
 static int
 txgbe_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 {
@@ -390,10 +668,26 @@ txgbe_dev_stats_reset(struct rte_eth_dev *dev)
 	return 0;
 }
 
+void
+txgbe_dev_setup_link_alarm_handler(void *param)
+{
+	RTE_SET_USED(param);
+}
+
+static int
+txgbe_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete)
+{
+	RTE_SET_USED(dev);
+	RTE_SET_USED(wait_to_complete);
+	return 0;
+}
+
 static const struct eth_dev_ops txgbe_eth_dev_ops = {
 	.dev_start                  = txgbe_dev_start,
 	.dev_stop                   = txgbe_dev_stop,
 	.dev_close                  = txgbe_dev_close,
+	.dev_reset                  = txgbe_dev_reset,
+	.link_update                = txgbe_dev_link_update,
 	.stats_get                  = txgbe_dev_stats_get,
 	.stats_reset                = txgbe_dev_stats_reset,
 };
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index e6d533141..eb9f29b97 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -53,6 +53,11 @@ int txgbe_vf_representor_uninit(struct rte_eth_dev *ethdev);
  */
 void txgbe_dev_free_queues(struct rte_eth_dev *dev);
 
+int txgbe_dev_rx_init(struct rte_eth_dev *dev);
+
+void txgbe_dev_tx_init(struct rte_eth_dev *dev);
+
+int txgbe_dev_rxtx_start(struct rte_eth_dev *dev);
 /*
  * misc function prototypes
  */
@@ -61,4 +66,5 @@ void txgbe_pf_host_init(struct rte_eth_dev *eth_dev);
 void txgbe_pf_host_uninit(struct rte_eth_dev *eth_dev);
 
 #define TXGBE_VMDQ_NUM_UC_MAC         4096 /* Maximum nb. of UC MAC addr. */
+void txgbe_dev_setup_link_alarm_handler(void *param);
 #endif /* _TXGBE_ETHDEV_H_ */
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index 8236807d1..cb067d4f4 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -31,15 +31,46 @@ txgbe_set_tx_function(struct rte_eth_dev *dev, struct txgbe_tx_queue *txq)
 	RTE_SET_USED(txq);
 }
 
+
+void
+txgbe_dev_free_queues(struct rte_eth_dev *dev)
+{
+	RTE_SET_USED(dev);
+}
+
 void __rte_cold
 txgbe_set_rx_function(struct rte_eth_dev *dev)
 {
 	RTE_SET_USED(dev);
 }
 
-void
-txgbe_dev_free_queues(struct rte_eth_dev *dev)
+/*
+ * Initializes Receive Unit.
+ */
+int __rte_cold
+txgbe_dev_rx_init(struct rte_eth_dev *dev)
+{
+	RTE_SET_USED(dev);
+
+	return 0;
+}
+
+/*
+ * Initializes Transmit Unit.
+ */
+void __rte_cold
+txgbe_dev_tx_init(struct rte_eth_dev *dev)
+{
+	RTE_SET_USED(dev);
+}
+
+/*
+ * Start Transmit and Receive Units.
+ */
+int __rte_cold
+txgbe_dev_rxtx_start(struct rte_eth_dev *dev)
 {
 	RTE_SET_USED(dev);
+	return 0;
 }
 
-- 
2.18.4




^ permalink raw reply	[flat|nested] 49+ messages in thread

* [dpdk-dev] [PATCH v1 13/42] net/txgbe: add interrupt operation
  2020-09-01 11:50 [dpdk-dev] [PATCH v1 01/42] net/txgbe: add build and doc infrastructure Jiawen Wu
                   ` (10 preceding siblings ...)
  2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 12/42] net/txgbe: add device start and stop Jiawen Wu
@ 2020-09-01 11:50 ` Jiawen Wu
  2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 14/42] net/txgbe: add link status change Jiawen Wu
                   ` (29 subsequent siblings)
  41 siblings, 0 replies; 49+ messages in thread
From: Jiawen Wu @ 2020-09-01 11:50 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Add device interrupt handler and set up MSI-X interrupts.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/base/txgbe_type.h |   8 +
 drivers/net/txgbe/txgbe_ethdev.c    | 457 +++++++++++++++++++++++++++-
 drivers/net/txgbe/txgbe_ethdev.h    |  32 ++
 drivers/net/txgbe/txgbe_pf.c        |   6 +
 4 files changed, 501 insertions(+), 2 deletions(-)

diff --git a/drivers/net/txgbe/base/txgbe_type.h b/drivers/net/txgbe/base/txgbe_type.h
index 7eff5c05b..5bde3c642 100644
--- a/drivers/net/txgbe/base/txgbe_type.h
+++ b/drivers/net/txgbe/base/txgbe_type.h
@@ -390,6 +390,14 @@ struct txgbe_mbx_info {
 	s32  (*check_for_rst)(struct txgbe_hw *, u16);
 };
 
+enum txgbe_isb_idx {
+	TXGBE_ISB_HEADER,
+	TXGBE_ISB_MISC,
+	TXGBE_ISB_VEC0,
+	TXGBE_ISB_VEC1,
+	TXGBE_ISB_MAX
+};
+
 struct txgbe_hw {
 	void IOMEM *hw_addr;
 	void *back;
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index f29bd2112..88967dede 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -40,6 +40,17 @@ static int txgbe_dev_link_update(struct rte_eth_dev *dev,
 				int wait_to_complete);
 static int txgbe_dev_stats_reset(struct rte_eth_dev *dev);
 
+static void txgbe_dev_link_status_print(struct rte_eth_dev *dev);
+static int txgbe_dev_lsc_interrupt_setup(struct rte_eth_dev *dev, uint8_t on);
+static int txgbe_dev_macsec_interrupt_setup(struct rte_eth_dev *dev);
+static int txgbe_dev_rxq_interrupt_setup(struct rte_eth_dev *dev);
+static int txgbe_dev_interrupt_get_status(struct rte_eth_dev *dev);
+static int txgbe_dev_interrupt_action(struct rte_eth_dev *dev,
+				      struct rte_intr_handle *handle);
+static void txgbe_dev_interrupt_handler(void *param);
+static void txgbe_dev_interrupt_delayed_handler(void *param);
+static void txgbe_configure_msix(struct rte_eth_dev *dev);
+
 /*
  * The set of PCI devices this driver supports
  */
@@ -77,13 +88,24 @@ txgbe_pf_reset_hw(struct txgbe_hw *hw)
 static inline void
 txgbe_enable_intr(struct rte_eth_dev *dev)
 {
-	RTE_SET_USED(dev);
+	struct txgbe_interrupt *intr = TXGBE_DEV_INTR(dev);
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+
+	wr32(hw, TXGBE_IENMISC, intr->mask_misc);
+	wr32(hw, TXGBE_IMC(0), TXGBE_IMC_MASK);
+	wr32(hw, TXGBE_IMC(1), TXGBE_IMC_MASK);
+	txgbe_flush(hw);
 }
 
 static void
 txgbe_disable_intr(struct txgbe_hw *hw)
 {
-	RTE_SET_USED(hw);
+	PMD_INIT_FUNC_TRACE();
+
+	wr32(hw, TXGBE_IENMISC, ~BIT_MASK32);
+	wr32(hw, TXGBE_IMS(0), TXGBE_IMC_MASK);
+	wr32(hw, TXGBE_IMS(1), TXGBE_IMC_MASK);
+	txgbe_flush(hw);
 }
 
 static int
@@ -255,6 +277,9 @@ eth_txgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
 		     eth_dev->data->port_id, pci_dev->id.vendor_id,
 		     pci_dev->id.device_id);
 
+	rte_intr_callback_register(intr_handle,
+				   txgbe_dev_interrupt_handler, eth_dev);
+
 	/* enable uio/vfio intr/eventfd mapping */
 	rte_intr_enable(intr_handle);
 
@@ -362,6 +387,20 @@ static struct rte_pci_driver rte_txgbe_pmd = {
 };
 
 
+
+static void
+txgbe_dev_phy_intr_setup(struct rte_eth_dev *dev)
+{
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+	struct txgbe_interrupt *intr = TXGBE_DEV_INTR(dev);
+	uint32_t gpie;
+
+	gpie = rd32(hw, TXGBE_GPIOINTEN);
+	gpie |= TXGBE_GPIOBIT_6;
+	wr32(hw, TXGBE_GPIOINTEN, gpie);
+	intr->mask_misc |= TXGBE_ICRMISC_GPIO;
+}
+
 /*
  * Configure device link speed and setup link.
  * It returns 0 on success.
@@ -414,6 +453,8 @@ txgbe_dev_start(struct rte_eth_dev *dev)
 	hw->mac.start_hw(hw);
 	hw->mac.get_link_status = true;
 
+	txgbe_dev_phy_intr_setup(dev);
+
 	/* check and configure queue intr-vector mapping */
 	if ((rte_intr_cap_multiple(intr_handle) ||
 	     !RTE_ETH_DEV_SRIOV(dev).active) &&
@@ -434,6 +475,9 @@ txgbe_dev_start(struct rte_eth_dev *dev)
 		}
 	}
 
+	/* configure MSI-X for sleep until Rx interrupt */
+	txgbe_configure_msix(dev);
+
 	/* initialize transmission unit */
 	txgbe_dev_tx_init(dev);
 
@@ -511,6 +555,27 @@ txgbe_dev_start(struct rte_eth_dev *dev)
 
 skip_link_setup:
 
+	if (rte_intr_allow_others(intr_handle)) {
+		/* check if lsc interrupt is enabled */
+		if (dev->data->dev_conf.intr_conf.lsc != 0)
+			txgbe_dev_lsc_interrupt_setup(dev, TRUE);
+		else
+			txgbe_dev_lsc_interrupt_setup(dev, FALSE);
+		txgbe_dev_macsec_interrupt_setup(dev);
+		txgbe_set_ivar_map(hw, -1, 1, TXGBE_MISC_VEC_ID);
+	} else {
+		rte_intr_callback_unregister(intr_handle,
+					     txgbe_dev_interrupt_handler, dev);
+		if (dev->data->dev_conf.intr_conf.lsc != 0)
+			PMD_INIT_LOG(INFO, "lsc won't be enabled:"
+				     " no intr multiplexing");
+	}
+
+	/* check if rxq interrupt is enabled */
+	if (dev->data->dev_conf.intr_conf.rxq != 0 &&
+	    rte_intr_dp_is_en(intr_handle))
+		txgbe_dev_rxq_interrupt_setup(dev);
+
 	/* enable uio/vfio intr/eventfd mapping */
 	rte_intr_enable(intr_handle);
 
@@ -574,6 +639,12 @@ txgbe_dev_stop(struct rte_eth_dev *dev)
 	memset(&link, 0, sizeof(link));
 	rte_eth_linkstatus_set(dev, &link);
 
+	if (!rte_intr_allow_others(intr_handle))
+		/* resume to the default handler */
+		rte_intr_callback_register(intr_handle,
+					   txgbe_dev_interrupt_handler,
+					   (void *)dev);
+
 	/* Clean datapath event and queue/vec mapping */
 	rte_intr_efd_disable(intr_handle);
 	if (intr_handle->intr_vec != NULL) {
@@ -593,6 +664,8 @@ txgbe_dev_close(struct rte_eth_dev *dev)
 	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+	int retries = 0;
+	int ret;
 
 	PMD_INIT_FUNC_TRACE();
 
@@ -610,6 +683,22 @@ txgbe_dev_close(struct rte_eth_dev *dev)
 	/* disable uio intr before callback unregister */
 	rte_intr_disable(intr_handle);
 
+	do {
+		ret = rte_intr_callback_unregister(intr_handle,
+				txgbe_dev_interrupt_handler, dev);
+		if (ret >= 0 || ret == -ENOENT) {
+			break;
+		} else if (ret != -EAGAIN) {
+			PMD_INIT_LOG(ERR,
+				"intr callback unregister failed: %d",
+				ret);
+		}
+		rte_delay_ms(100);
+	} while (retries++ < (10 + TXGBE_LINK_UP_TIME));
+
+	/* cancel the delay handler before remove dev */
+	rte_eal_alarm_cancel(txgbe_dev_interrupt_delayed_handler, dev);
+
 	/* uninitialize PF if max_vfs not zero */
 	txgbe_pf_host_uninit(dev);
 
@@ -682,6 +771,370 @@ txgbe_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 	return 0;
 }
 
+
+/**
+ * It clears the interrupt causes and enables the interrupt.
+ * It will be called only once during NIC initialization.
+ *
+ * @param dev
+ *  Pointer to struct rte_eth_dev.
+ * @param on
+ *  Enable or Disable.
+ *
+ * @return
+ *  - On success, zero.
+ *  - On failure, a negative value.
+ */
+static int
+txgbe_dev_lsc_interrupt_setup(struct rte_eth_dev *dev, uint8_t on)
+{
+	struct txgbe_interrupt *intr = TXGBE_DEV_INTR(dev);
+
+	txgbe_dev_link_status_print(dev);
+	if (on)
+		intr->mask_misc |= TXGBE_ICRMISC_LSC;
+	else
+		intr->mask_misc &= ~TXGBE_ICRMISC_LSC;
+
+	return 0;
+}
+
+/**
+ * It clears the interrupt causes and enables the interrupt.
+ * It will be called only once during NIC initialization.
+ *
+ * @param dev
+ *  Pointer to struct rte_eth_dev.
+ *
+ * @return
+ *  - On success, zero.
+ *  - On failure, a negative value.
+ */
+static int
+txgbe_dev_rxq_interrupt_setup(struct rte_eth_dev *dev)
+{
+	struct txgbe_interrupt *intr = TXGBE_DEV_INTR(dev);
+
+	intr->mask[0] |= TXGBE_ICR_MASK;
+	intr->mask[1] |= TXGBE_ICR_MASK;
+
+	return 0;
+}
+
+/**
+ * It clears the interrupt causes and enables the interrupt.
+ * It will be called only once during NIC initialization.
+ *
+ * @param dev
+ *  Pointer to struct rte_eth_dev.
+ *
+ * @return
+ *  - On success, zero.
+ *  - On failure, a negative value.
+ */
+static int
+txgbe_dev_macsec_interrupt_setup(struct rte_eth_dev *dev)
+{
+	struct txgbe_interrupt *intr = TXGBE_DEV_INTR(dev);
+
+	intr->mask_misc |= TXGBE_ICRMISC_LNKSEC;
+
+	return 0;
+}
+
+/*
+ * It reads the ICR and sets the flags (e.g. TXGBE_FLAG_NEED_LINK_UPDATE) for link_update.
+ *
+ * @param dev
+ *  Pointer to struct rte_eth_dev.
+ *
+ * @return
+ *  - On success, zero.
+ *  - On failure, a negative value.
+ */
+static int
+txgbe_dev_interrupt_get_status(struct rte_eth_dev *dev)
+{
+	uint32_t eicr;
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+	struct txgbe_interrupt *intr = TXGBE_DEV_INTR(dev);
+
+	/* clear all cause mask */
+	txgbe_disable_intr(hw);
+
+	/* read-on-clear nic registers here */
+	eicr = ((u32 *)hw->isb_mem)[TXGBE_ISB_MISC];
+	PMD_DRV_LOG(DEBUG, "eicr %x", eicr);
+
+	intr->flags = 0;
+
+	/* set flag for async link update */
+	if (eicr & TXGBE_ICRMISC_LSC)
+		intr->flags |= TXGBE_FLAG_NEED_LINK_UPDATE;
+
+	if (eicr & TXGBE_ICRMISC_VFMBX)
+		intr->flags |= TXGBE_FLAG_MAILBOX;
+
+	if (eicr & TXGBE_ICRMISC_LNKSEC)
+		intr->flags |= TXGBE_FLAG_MACSEC;
+
+	if (eicr & TXGBE_ICRMISC_GPIO)
+		intr->flags |= TXGBE_FLAG_PHY_INTERRUPT;
+
+	return 0;
+}
+
+/**
+ * It gets and then prints the link status.
+ *
+ * @param dev
+ *  Pointer to struct rte_eth_dev.
+ *
+ * @return
+ *  void
+ */
+static void
+txgbe_dev_link_status_print(struct rte_eth_dev *dev)
+{
+	RTE_SET_USED(dev);
+}
+
+/*
+ * It executes link_update after knowing an interrupt occurred.
+ *
+ * @param dev
+ *  Pointer to struct rte_eth_dev.
+ *
+ * @return
+ *  - On success, zero.
+ *  - On failure, a negative value.
+ */
+static int
+txgbe_dev_interrupt_action(struct rte_eth_dev *dev,
+			   struct rte_intr_handle *intr_handle)
+{
+	struct txgbe_interrupt *intr = TXGBE_DEV_INTR(dev);
+	int64_t timeout;
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+
+	PMD_DRV_LOG(DEBUG, "intr action type %d", intr->flags);
+
+	if (intr->flags & TXGBE_FLAG_MAILBOX) {
+		txgbe_pf_mbx_process(dev);
+		intr->flags &= ~TXGBE_FLAG_MAILBOX;
+	}
+
+	if (intr->flags & TXGBE_FLAG_PHY_INTERRUPT) {
+		hw->phy.handle_lasi(hw);
+		intr->flags &= ~TXGBE_FLAG_PHY_INTERRUPT;
+	}
+
+	if (intr->flags & TXGBE_FLAG_NEED_LINK_UPDATE) {
+		struct rte_eth_link link;
+
+		/* get the link status before link update, for predicting later */
+		rte_eth_linkstatus_get(dev, &link);
+
+		txgbe_dev_link_update(dev, 0);
+
+		/* likely to up */
+		if (!link.link_status)
+			/* handle it 1 sec later, wait for it to stabilize */
+			timeout = TXGBE_LINK_UP_CHECK_TIMEOUT;
+		/* likely to down */
+		else
+			/* handle it 4 sec later, wait for it to stabilize */
+			timeout = TXGBE_LINK_DOWN_CHECK_TIMEOUT;
+
+		txgbe_dev_link_status_print(dev);
+		if (rte_eal_alarm_set(timeout * 1000,
+				      txgbe_dev_interrupt_delayed_handler,
+				      (void *)dev) < 0)
+			PMD_DRV_LOG(ERR, "Error setting alarm");
+		else {
+			/* remember original mask */
+			intr->mask_misc_orig = intr->mask_misc;
+			/* only disable lsc interrupt */
+			intr->mask_misc &= ~TXGBE_ICRMISC_LSC;
+		}
+	}
+
+	PMD_DRV_LOG(DEBUG, "enable intr immediately");
+	txgbe_enable_intr(dev);
+	rte_intr_enable(intr_handle);
+
+	return 0;
+}
+
+/**
+ * Interrupt handler to be registered as an alarm callback for the delayed
+ * handling of a specific interrupt, waiting for the NIC state to stabilize.
+ * Since the txgbe interrupt state is not stable right after the link goes
+ * down, it needs to wait 4 seconds for the status to settle.
+ *
+ * @param handle
+ *  Pointer to interrupt handle.
+ * @param param
+ *  The address of the parameter (struct rte_eth_dev *) registered before.
+ *
+ * @return
+ *  void
+ */
+static void
+txgbe_dev_interrupt_delayed_handler(void *param)
+{
+	struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+	struct txgbe_interrupt *intr = TXGBE_DEV_INTR(dev);
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+	uint32_t eicr;
+
+	txgbe_disable_intr(hw);
+
+	eicr = ((u32 *)hw->isb_mem)[TXGBE_ISB_MISC];
+	if (eicr & TXGBE_ICRMISC_VFMBX)
+		txgbe_pf_mbx_process(dev);
+
+	if (intr->flags & TXGBE_FLAG_PHY_INTERRUPT) {
+		hw->phy.handle_lasi(hw);
+		intr->flags &= ~TXGBE_FLAG_PHY_INTERRUPT;
+	}
+
+	if (intr->flags & TXGBE_FLAG_NEED_LINK_UPDATE) {
+		txgbe_dev_link_update(dev, 0);
+		intr->flags &= ~TXGBE_FLAG_NEED_LINK_UPDATE;
+		txgbe_dev_link_status_print(dev);
+		_rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_LSC,
+					      NULL);
+	}
+
+	if (intr->flags & TXGBE_FLAG_MACSEC) {
+		_rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_MACSEC,
+					      NULL);
+		intr->flags &= ~TXGBE_FLAG_MACSEC;
+	}
+
+	/* restore original mask */
+	intr->mask_misc = intr->mask_misc_orig;
+	intr->mask_misc_orig = 0;
+
+	PMD_DRV_LOG(DEBUG, "enable intr in delayed handler S[%08x]", eicr);
+	txgbe_enable_intr(dev);
+	rte_intr_enable(intr_handle);
+}
+
+/**
+ * Interrupt handler triggered by the NIC for handling a
+ * specific interrupt.
+ *
+ * @param handle
+ *  Pointer to interrupt handle.
+ * @param param
+ *  The address of the parameter (struct rte_eth_dev *) registered before.
+ *
+ * @return
+ *  void
+ */
+static void
+txgbe_dev_interrupt_handler(void *param)
+{
+	struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
+
+	txgbe_dev_interrupt_get_status(dev);
+	txgbe_dev_interrupt_action(dev, dev->intr_handle);
+}
+
+/**
+ * set the IVAR registers, mapping interrupt causes to vectors
+ * @param hw
+ *  pointer to txgbe_hw struct
+ * @direction
+ *  0 for Rx, 1 for Tx, -1 for other causes
+ * @queue
+ *  queue to map the corresponding interrupt to
+ * @msix_vector
+ *  the vector to map to the corresponding queue
+ */
+void
+txgbe_set_ivar_map(struct txgbe_hw *hw, int8_t direction,
+		   uint8_t queue, uint8_t msix_vector)
+{
+	uint32_t tmp, idx;
+
+	if (direction == -1) {
+		/* other causes */
+		msix_vector |= TXGBE_IVARMISC_VLD;
+		idx = 0;
+		tmp = rd32(hw, TXGBE_IVARMISC);
+		tmp &= ~(0xFF << idx);
+		tmp |= (msix_vector << idx);
+		wr32(hw, TXGBE_IVARMISC, tmp);
+	} else {
+		/* rx or tx causes */
+		/* Workaround for lost ICR */
+		idx = ((16 * (queue & 1)) + (8 * direction));
+		tmp = rd32(hw, TXGBE_IVAR(queue >> 1));
+		tmp &= ~(0xFF << idx);
+		tmp |= (msix_vector << idx);
+		wr32(hw, TXGBE_IVAR(queue >> 1), tmp);
+	}
+}
+
+/**
+ * Sets up the hardware to properly generate MSI-X interrupts
+ * @hw
+ *  board private structure
+ */
+static void
+txgbe_configure_msix(struct rte_eth_dev *dev)
+{
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+	uint32_t queue_id, base = TXGBE_MISC_VEC_ID;
+	uint32_t vec = TXGBE_MISC_VEC_ID;
+	uint32_t gpie;
+
+	/* Won't configure the MSI-X register if no mapping is done
+	 * between intr vector and event fd,
+	 * but if MSI-X has been enabled already, we still need to
+	 * configure auto clean, auto mask and throttling.
+	 */
+	gpie = rd32(hw, TXGBE_GPIE);
+	if (!rte_intr_dp_is_en(intr_handle) &&
+	    !(gpie & TXGBE_GPIE_MSIX))
+		return;
+
+	if (rte_intr_allow_others(intr_handle))
+		vec = base = TXGBE_RX_VEC_START;
+
+	/* setup GPIE for MSI-x mode */
+	gpie = rd32(hw, TXGBE_GPIE);
+	gpie |= TXGBE_GPIE_MSIX;
+	wr32(hw, TXGBE_GPIE, gpie);
+
+	/* Populate the IVAR table and set the ITR values to the
+	 * corresponding register.
+	 */
+	if (rte_intr_dp_is_en(intr_handle)) {
+		for (queue_id = 0; queue_id < dev->data->nb_rx_queues;
+			queue_id++) {
+			/* by default, 1:1 mapping */
+			txgbe_set_ivar_map(hw, 0, queue_id, vec);
+			intr_handle->intr_vec[queue_id] = vec;
+			if (vec < base + intr_handle->nb_efd - 1)
+				vec++;
+		}
+
+		txgbe_set_ivar_map(hw, -1, 1, TXGBE_MISC_VEC_ID);
+	}
+	wr32(hw, TXGBE_ITR(TXGBE_MISC_VEC_ID),
+			TXGBE_ITR_IVAL_10G(TXGBE_QUEUE_ITR_INTERVAL_DEFAULT)
+			| TXGBE_ITR_WRDSA);
+}
+
 static const struct eth_dev_ops txgbe_eth_dev_ops = {
 	.dev_start                  = txgbe_dev_start,
 	.dev_stop                   = txgbe_dev_stop,
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index eb9f29b97..11f19650b 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -12,6 +12,26 @@
 #include <rte_bus_pci.h>
 #include <rte_tm_driver.h>
 
+/* need update link, bit flag */
+#define TXGBE_FLAG_NEED_LINK_UPDATE (uint32_t)(1 << 0)
+#define TXGBE_FLAG_MAILBOX          (uint32_t)(1 << 1)
+#define TXGBE_FLAG_PHY_INTERRUPT    (uint32_t)(1 << 2)
+#define TXGBE_FLAG_MACSEC           (uint32_t)(1 << 3)
+#define TXGBE_FLAG_NEED_LINK_CONFIG (uint32_t)(1 << 4)
+
+#define TXGBE_QUEUE_ITR_INTERVAL_DEFAULT	500 /* 500us */
+
+#define TXGBE_MISC_VEC_ID               RTE_INTR_VEC_ZERO_OFFSET
+#define TXGBE_RX_VEC_START              RTE_INTR_VEC_RXTX_OFFSET
+
+/* structure for interrupt relative data */
+struct txgbe_interrupt {
+	uint32_t flags;
+	uint32_t mask_misc;
+	/* to save original mask during delayed handler */
+	uint32_t mask_misc_orig;
+	uint32_t mask[2];
+};
 
 struct txgbe_vf_info {
 	uint8_t api_version;
@@ -24,6 +44,7 @@ struct txgbe_vf_info {
 struct txgbe_adapter {
 	struct txgbe_hw             hw;
 	struct txgbe_hw_stats       stats;
+	struct txgbe_interrupt      intr;
 	struct txgbe_vf_info        *vfdata;
 };
 
@@ -45,6 +66,9 @@ int txgbe_vf_representor_uninit(struct rte_eth_dev *ethdev);
 #define TXGBE_DEV_STATS(dev) \
 	(&((struct txgbe_adapter *)(dev)->data->dev_private)->stats)
 
+#define TXGBE_DEV_INTR(dev) \
+	(&((struct txgbe_adapter *)(dev)->data->dev_private)->intr)
+
 #define TXGBE_DEV_VFDATA(dev) \
 	(&((struct txgbe_adapter *)(dev)->data->dev_private)->vfdata)
 
@@ -58,6 +82,9 @@ int txgbe_dev_rx_init(struct rte_eth_dev *dev);
 void txgbe_dev_tx_init(struct rte_eth_dev *dev);
 
 int txgbe_dev_rxtx_start(struct rte_eth_dev *dev);
+void txgbe_set_ivar_map(struct txgbe_hw *hw, int8_t direction,
+			       uint8_t queue, uint8_t msix_vector);
+
 /*
  * misc function prototypes
  */
@@ -65,6 +92,11 @@ void txgbe_pf_host_init(struct rte_eth_dev *eth_dev);
 
 void txgbe_pf_host_uninit(struct rte_eth_dev *eth_dev);
 
+void txgbe_pf_mbx_process(struct rte_eth_dev *eth_dev);
+
+
+#define TXGBE_LINK_DOWN_CHECK_TIMEOUT 4000 /* ms */
+#define TXGBE_LINK_UP_CHECK_TIMEOUT   1000 /* ms */
 #define TXGBE_VMDQ_NUM_UC_MAC         4096 /* Maximum nb. of UC MAC addr. */
 void txgbe_dev_setup_link_alarm_handler(void *param);
 #endif /* _TXGBE_ETHDEV_H_ */
diff --git a/drivers/net/txgbe/txgbe_pf.c b/drivers/net/txgbe/txgbe_pf.c
index 0fac19c5d..de2b8b0e6 100644
--- a/drivers/net/txgbe/txgbe_pf.c
+++ b/drivers/net/txgbe/txgbe_pf.c
@@ -32,3 +32,9 @@ void txgbe_pf_host_uninit(struct rte_eth_dev *eth_dev)
 {
 	RTE_SET_USED(eth_dev);
 }
+
+void txgbe_pf_mbx_process(struct rte_eth_dev *eth_dev)
+{
+	RTE_SET_USED(eth_dev);
+}
+
-- 
2.18.4

^ permalink raw reply	[flat|nested] 49+ messages in thread

* [dpdk-dev] [PATCH v1 14/42] net/txgbe: add link status change
  2020-09-01 11:50 [dpdk-dev] [PATCH v1 01/42] net/txgbe: add build and doc infrastructure Jiawen Wu
                   ` (11 preceding siblings ...)
  2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 13/42] net/txgbe: add interrupt operation Jiawen Wu
@ 2020-09-01 11:50 ` Jiawen Wu
  2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 15/42] net/txgbe: add multi-speed link setup Jiawen Wu
                   ` (28 subsequent siblings)
  41 siblings, 0 replies; 49+ messages in thread
From: Jiawen Wu @ 2020-09-01 11:50 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Add the ethdev link interrupt handler, MAC link setup, link status check, and link capabilities query.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/base/txgbe_eeprom.h |   3 +
 drivers/net/txgbe/base/txgbe_hw.c     | 508 +++++++++++++++++++++++++-
 drivers/net/txgbe/base/txgbe_hw.h     |  15 +
 drivers/net/txgbe/base/txgbe_phy.c    | 312 ++++++++++++++++
 drivers/net/txgbe/base/txgbe_phy.h    |  12 +
 drivers/net/txgbe/base/txgbe_type.h   |  18 +
 drivers/net/txgbe/txgbe_ethdev.c      | 164 ++++++++-
 drivers/net/txgbe/txgbe_ethdev.h      |   5 +
 8 files changed, 1026 insertions(+), 11 deletions(-)

diff --git a/drivers/net/txgbe/base/txgbe_eeprom.h b/drivers/net/txgbe/base/txgbe_eeprom.h
index 47b6a2f2b..21de7e9b5 100644
--- a/drivers/net/txgbe/base/txgbe_eeprom.h
+++ b/drivers/net/txgbe/base/txgbe_eeprom.h
@@ -25,6 +25,9 @@
 
 #define TXGBE_DEVICE_CAPS_ALLOW_ANY_SFP		0x1
 #define TXGBE_DEVICE_CAPS_NO_CROSSTALK_WR	(1 << 7)
+#define TXGBE_FW_LESM_PARAMETERS_PTR		0x2
+#define TXGBE_FW_LESM_STATE_1			0x1
+#define TXGBE_FW_LESM_STATE_ENABLED		0x8000 /* LESM Enable bit */
 
 s32 txgbe_init_eeprom_params(struct txgbe_hw *hw);
 s32 txgbe_calc_eeprom_checksum(struct txgbe_hw *hw);
diff --git a/drivers/net/txgbe/base/txgbe_hw.c b/drivers/net/txgbe/base/txgbe_hw.c
index 215900895..26593c5f6 100644
--- a/drivers/net/txgbe/base/txgbe_hw.c
+++ b/drivers/net/txgbe/base/txgbe_hw.c
@@ -9,6 +9,11 @@
 #include "txgbe_mng.h"
 #include "txgbe_hw.h"
 
+
+STATIC s32 txgbe_setup_copper_link_raptor(struct txgbe_hw *hw,
+					 u32 speed,
+					 bool autoneg_wait_to_complete);
+
 /**
  *  txgbe_start_hw - Prepare hardware for Tx/Rx
  *  @hw: pointer to hardware structure
@@ -229,6 +234,118 @@ s32 txgbe_set_rar(struct txgbe_hw *hw, u32 index, u8 *addr, u32 vmdq,
 }
 
 
+/**
+ *  txgbe_need_crosstalk_fix - Determine if we need to do cross talk fix
+ *  @hw: pointer to hardware structure
+ *
+ *  Contains the logic to identify if we need to verify link for the
+ *  crosstalk fix
+ **/
+static bool txgbe_need_crosstalk_fix(struct txgbe_hw *hw)
+{
+
+	/* Does FW say we need the fix */
+	if (!hw->need_crosstalk_fix)
+		return false;
+
+	/* Only consider SFP+ PHYs i.e. media type fiber */
+	switch (hw->phy.media_type) {
+	case txgbe_media_type_fiber:
+	case txgbe_media_type_fiber_qsfp:
+		break;
+	default:
+		return false;
+	}
+
+	return true;
+}
+
+/**
+ *  txgbe_check_mac_link - Determine link and speed status
+ *  @hw: pointer to hardware structure
+ *  @speed: pointer to link speed
+ *  @link_up: true when link is up
+ *  @link_up_wait_to_complete: bool used to wait for link up or not
+ *
+ *  Reads the links register to determine if link is up and the current speed
+ **/
+s32 txgbe_check_mac_link(struct txgbe_hw *hw, u32 *speed,
+				 bool *link_up, bool link_up_wait_to_complete)
+{
+	u32 links_reg, links_orig;
+	u32 i;
+
+	DEBUGFUNC("txgbe_check_mac_link");
+
+	/* If the crosstalk fix is enabled, do the sanity check of making sure
+	 * the SFP+ cage is full.
+	 */
+	if (txgbe_need_crosstalk_fix(hw)) {
+		u32 sfp_cage_full;
+
+		switch (hw->mac.type) {
+		case txgbe_mac_raptor:
+			sfp_cage_full = !rd32m(hw, TXGBE_GPIODATA,
+					TXGBE_GPIOBIT_2);
+			break;
+		default:
+			/* sanity check - No SFP+ devices here */
+			sfp_cage_full = false;
+			break;
+		}
+
+		if (!sfp_cage_full) {
+			*link_up = false;
+			*speed = TXGBE_LINK_SPEED_UNKNOWN;
+			return 0;
+		}
+	}
+
+	/* clear the old state */
+	links_orig = rd32(hw, TXGBE_PORTSTAT);
+
+	links_reg = rd32(hw, TXGBE_PORTSTAT);
+
+	if (links_orig != links_reg) {
+		DEBUGOUT("LINKS changed from %08X to %08X\n",
+			  links_orig, links_reg);
+	}
+
+	if (link_up_wait_to_complete) {
+		for (i = 0; i < hw->mac.max_link_up_time; i++) {
+			if (links_reg & TXGBE_PORTSTAT_UP) {
+				*link_up = true;
+				break;
+			} else {
+				*link_up = false;
+			}
+			msec_delay(100);
+			links_reg = rd32(hw, TXGBE_PORTSTAT);
+		}
+	} else {
+		if (links_reg & TXGBE_PORTSTAT_UP)
+			*link_up = true;
+		else
+			*link_up = false;
+	}
+
+	switch (links_reg & TXGBE_PORTSTAT_BW_MASK) {
+	case TXGBE_PORTSTAT_BW_10G:
+		*speed = TXGBE_LINK_SPEED_10GB_FULL;
+		break;
+	case TXGBE_PORTSTAT_BW_1G:
+		*speed = TXGBE_LINK_SPEED_1GB_FULL;
+		break;
+	case TXGBE_PORTSTAT_BW_100M:
+		*speed = TXGBE_LINK_SPEED_100M_FULL;
+		break;
+	default:
+		*speed = TXGBE_LINK_SPEED_UNKNOWN;
+	}
+
+	return 0;
+}
+
 /**
  *  txgbe_get_device_caps - Get additional device capabilities
  *  @hw: pointer to hardware structure
@@ -390,11 +507,7 @@ void txgbe_init_mac_link_ops(struct txgbe_hw *hw)
 
 	DEBUGFUNC("txgbe_init_mac_link_ops");
 
-	/*
-	 * enable the laser control functions for SFP+ fiber
-	 * and MNG not enabled
-	 */
-	RTE_SET_USED(mac);
+	mac->setup_link = txgbe_setup_mac_link;
 }
 
 /**
@@ -408,6 +521,7 @@ void txgbe_init_mac_link_ops(struct txgbe_hw *hw)
  **/
 s32 txgbe_init_phy_raptor(struct txgbe_hw *hw)
 {
+	struct txgbe_mac_info *mac = &hw->mac;
 	struct txgbe_phy_info *phy = &hw->phy;
 	s32 err = 0;
 
@@ -429,6 +543,22 @@ s32 txgbe_init_phy_raptor(struct txgbe_hw *hw)
 	/* Setup function pointers based on detected SFP module and speeds */
 	txgbe_init_mac_link_ops(hw);
 
+	/* If copper media, overwrite with copper function pointers */
+	if (phy->media_type == txgbe_media_type_copper) {
+		mac->setup_link = txgbe_setup_copper_link_raptor;
+		mac->get_link_capabilities =
+				  txgbe_get_copper_link_capabilities;
+	}
+
+	/* Set necessary function pointers based on PHY type */
+	switch (hw->phy.type) {
+	case txgbe_phy_tn:
+		phy->setup_link = txgbe_setup_phy_link_tnx;
+		phy->check_link = txgbe_check_phy_link_tnx;
+		break;
+	default:
+		break;
+	}
 init_phy_ops_out:
 	return err;
 }
@@ -478,6 +608,8 @@ s32 txgbe_init_ops_pf(struct txgbe_hw *hw)
 	struct txgbe_phy_info *phy = &hw->phy;
 	struct txgbe_rom_info *rom = &hw->rom;
 
+	DEBUGFUNC("txgbe_init_ops_pf");
+
 	/* PHY */
 	phy->get_media_type = txgbe_get_media_type_raptor;
 	phy->identify = txgbe_identify_phy;
@@ -486,6 +618,8 @@ s32 txgbe_init_ops_pf(struct txgbe_hw *hw)
 	phy->write_reg = txgbe_write_phy_reg;
 	phy->read_reg_mdi = txgbe_read_phy_reg_mdi;
 	phy->write_reg_mdi = txgbe_write_phy_reg_mdi;
+	phy->setup_link = txgbe_setup_phy_link;
+	phy->setup_link_speed = txgbe_setup_phy_link_speed;
 	phy->read_i2c_byte = txgbe_read_i2c_byte;
 	phy->write_i2c_byte = txgbe_write_i2c_byte;
 	phy->read_i2c_eeprom = txgbe_read_i2c_eeprom;
@@ -499,6 +633,10 @@ s32 txgbe_init_ops_pf(struct txgbe_hw *hw)
 
 	mac->get_device_caps = txgbe_get_device_caps;
 
+	/* Link */
+	mac->get_link_capabilities = txgbe_get_link_capabilities_raptor;
+	mac->check_link = txgbe_check_mac_link;
+
 	/* EEPROM */
 	rom->init_params = txgbe_init_eeprom_params;
 	rom->read16 = txgbe_ee_read16;
@@ -516,6 +654,102 @@ s32 txgbe_init_ops_pf(struct txgbe_hw *hw)
 	return 0;
 }
 
+/**
+ *  txgbe_get_link_capabilities_raptor - Determines link capabilities
+ *  @hw: pointer to hardware structure
+ *  @speed: pointer to link speed
+ *  @autoneg: true when autoneg or autotry is enabled
+ *
+ *  Determines the link capabilities by reading the AUTOC register.
+ **/
+s32 txgbe_get_link_capabilities_raptor(struct txgbe_hw *hw,
+				      u32 *speed,
+				      bool *autoneg)
+{
+	s32 status = 0;
+	u32 autoc = 0;
+
+	DEBUGFUNC("txgbe_get_link_capabilities_raptor");
+
+	/* Check if 1G SFP module. */
+	if (hw->phy.sfp_type == txgbe_sfp_type_1g_cu_core0 ||
+	    hw->phy.sfp_type == txgbe_sfp_type_1g_cu_core1 ||
+	    hw->phy.sfp_type == txgbe_sfp_type_1g_lx_core0 ||
+	    hw->phy.sfp_type == txgbe_sfp_type_1g_lx_core1 ||
+	    hw->phy.sfp_type == txgbe_sfp_type_1g_sx_core0 ||
+	    hw->phy.sfp_type == txgbe_sfp_type_1g_sx_core1) {
+		*speed = TXGBE_LINK_SPEED_1GB_FULL;
+		*autoneg = true;
+		return 0;
+	}
+
+	/*
+	 * Determine link capabilities based on the stored value of AUTOC,
+	 * which represents EEPROM defaults.  If AUTOC value has not
+	 * been stored, use the current register values.
+	 */
+	if (hw->mac.orig_link_settings_stored)
+		autoc = hw->mac.orig_autoc;
+	else
+		autoc = hw->mac.autoc_read(hw);
+
+	switch (autoc & TXGBE_AUTOC_LMS_MASK) {
+	case TXGBE_AUTOC_LMS_1G_LINK_NO_AN:
+		*speed = TXGBE_LINK_SPEED_1GB_FULL;
+		*autoneg = false;
+		break;
+
+	case TXGBE_AUTOC_LMS_10G_LINK_NO_AN:
+		*speed = TXGBE_LINK_SPEED_10GB_FULL;
+		*autoneg = false;
+		break;
+
+	case TXGBE_AUTOC_LMS_1G_AN:
+		*speed = TXGBE_LINK_SPEED_1GB_FULL;
+		*autoneg = true;
+		break;
+
+	case TXGBE_AUTOC_LMS_10Gs:
+		*speed = TXGBE_LINK_SPEED_10GB_FULL;
+		*autoneg = false;
+		break;
+
+	case TXGBE_AUTOC_LMS_KX4_KX_KR:
+	case TXGBE_AUTOC_LMS_KX4_KX_KR_1G_AN:
+		*speed = TXGBE_LINK_SPEED_UNKNOWN;
+		if (autoc & TXGBE_AUTOC_KR_SUPP)
+			*speed |= TXGBE_LINK_SPEED_10GB_FULL;
+		if (autoc & TXGBE_AUTOC_KX4_SUPP)
+			*speed |= TXGBE_LINK_SPEED_10GB_FULL;
+		if (autoc & TXGBE_AUTOC_KX_SUPP)
+			*speed |= TXGBE_LINK_SPEED_1GB_FULL;
+		*autoneg = true;
+		break;
+
+	case TXGBE_AUTOC_LMS_KX4_KX_KR_SGMII:
+		*speed = TXGBE_LINK_SPEED_100M_FULL;
+		if (autoc & TXGBE_AUTOC_KR_SUPP)
+			*speed |= TXGBE_LINK_SPEED_10GB_FULL;
+		if (autoc & TXGBE_AUTOC_KX4_SUPP)
+			*speed |= TXGBE_LINK_SPEED_10GB_FULL;
+		if (autoc & TXGBE_AUTOC_KX_SUPP)
+			*speed |= TXGBE_LINK_SPEED_1GB_FULL;
+		*autoneg = true;
+		break;
+
+	case TXGBE_AUTOC_LMS_SGMII_1G_100M:
+		*speed = TXGBE_LINK_SPEED_1GB_FULL |
+			 TXGBE_LINK_SPEED_100M_FULL |
+			 TXGBE_LINK_SPEED_10M_FULL;
+		*autoneg = false;
+		break;
+
+	default:
+		return TXGBE_ERR_LINK_SETUP;
+	}
+
+	return status;
+}
 
 /**
  *  txgbe_get_media_type_raptor - Get media type
@@ -562,6 +796,193 @@ u32 txgbe_get_media_type_raptor(struct txgbe_hw *hw)
 
 	return media_type;
 }
+
+/**
+ *  txgbe_start_mac_link_raptor - Setup MAC link settings
+ *  @hw: pointer to hardware structure
+ *  @autoneg_wait_to_complete: true when waiting for completion is needed
+ *
+ *  Configures link settings based on values in the txgbe_hw struct.
+ *  Restarts the link.  Performs autonegotiation if needed.
+ **/
+s32 txgbe_start_mac_link_raptor(struct txgbe_hw *hw,
+			       bool autoneg_wait_to_complete)
+{
+	s32 status = 0;
+	bool got_lock = false;
+
+	DEBUGFUNC("txgbe_start_mac_link_raptor");
+
+	/*  reset_pipeline requires us to hold this lock as it writes to
+	 *  AUTOC.
+	 */
+	if (txgbe_verify_lesm_fw_enabled_raptor(hw)) {
+		status = hw->mac.acquire_swfw_sync(hw, TXGBE_MNGSEM_SWPHY);
+		if (status != 0)
+			goto out;
+
+		got_lock = true;
+	}
+
+	/* Restart link */
+	txgbe_reset_pipeline_raptor(hw);
+
+	if (got_lock)
+		hw->mac.release_swfw_sync(hw, TXGBE_MNGSEM_SWPHY);
+
+	/* Add delay to filter out noise during initial link setup */
+	msec_delay(50);
+
+out:
+	return status;
+}
+
+/**
+ *  txgbe_setup_mac_link - Set MAC link speed
+ *  @hw: pointer to hardware structure
+ *  @speed: new link speed
+ *  @autoneg_wait_to_complete: true when waiting for completion is needed
+ *
+ *  Set the link speed in the AUTOC register and restarts link.
+ **/
+s32 txgbe_setup_mac_link(struct txgbe_hw *hw,
+			       u32 speed,
+			       bool autoneg_wait_to_complete)
+{
+	bool autoneg = false;
+	s32 status = 0;
+
+	u64 autoc = hw->mac.autoc_read(hw);
+	u64 pma_pmd_10gs = autoc & TXGBE_AUTOC_10Gs_PMA_PMD_MASK;
+	u64 pma_pmd_1g = autoc & TXGBE_AUTOC_1G_PMA_PMD_MASK;
+	u64 link_mode = autoc & TXGBE_AUTOC_LMS_MASK;
+	u64 current_autoc = autoc;
+	u64 orig_autoc = 0;
+	u32 links_reg;
+	u32 i;
+	u32 link_capabilities = TXGBE_LINK_SPEED_UNKNOWN;
+
+	DEBUGFUNC("txgbe_setup_mac_link");
+
+	/* Check to see if speed passed in is supported. */
+	status = hw->mac.get_link_capabilities(hw,
+			&link_capabilities, &autoneg);
+	if (status)
+		return status;
+
+	speed &= link_capabilities;
+	if (speed == TXGBE_LINK_SPEED_UNKNOWN) {
+		return TXGBE_ERR_LINK_SETUP;
+	}
+
+	/* Use stored value (EEPROM defaults) of AUTOC to find KR/KX4 support*/
+	if (hw->mac.orig_link_settings_stored)
+		orig_autoc = hw->mac.orig_autoc;
+	else
+		orig_autoc = autoc;
+
+	link_mode = autoc & TXGBE_AUTOC_LMS_MASK;
+	pma_pmd_1g = autoc & TXGBE_AUTOC_1G_PMA_PMD_MASK;
+
+	if (link_mode == TXGBE_AUTOC_LMS_KX4_KX_KR ||
+	    link_mode == TXGBE_AUTOC_LMS_KX4_KX_KR_1G_AN ||
+	    link_mode == TXGBE_AUTOC_LMS_KX4_KX_KR_SGMII) {
+		/* Set KX4/KX/KR support according to speed requested */
+		autoc &= ~(TXGBE_AUTOC_KX_SUPP |
+			   TXGBE_AUTOC_KX4_SUPP |
+			   TXGBE_AUTOC_KR_SUPP);
+		if (speed & TXGBE_LINK_SPEED_10GB_FULL) {
+			if (orig_autoc & TXGBE_AUTOC_KX4_SUPP)
+				autoc |= TXGBE_AUTOC_KX4_SUPP;
+			if ((orig_autoc & TXGBE_AUTOC_KR_SUPP) &&
+			    (hw->phy.smart_speed_active == false))
+				autoc |= TXGBE_AUTOC_KR_SUPP;
+		}
+		if (speed & TXGBE_LINK_SPEED_1GB_FULL)
+			autoc |= TXGBE_AUTOC_KX_SUPP;
+	} else if ((pma_pmd_1g == TXGBE_AUTOC_1G_SFI) &&
+		   (link_mode == TXGBE_AUTOC_LMS_1G_LINK_NO_AN ||
+		    link_mode == TXGBE_AUTOC_LMS_1G_AN)) {
+		/* Switch from 1G SFI to 10G SFI if requested */
+		if ((speed == TXGBE_LINK_SPEED_10GB_FULL) &&
+		    (pma_pmd_10gs == TXGBE_AUTOC_10Gs_SFI)) {
+			autoc &= ~TXGBE_AUTOC_LMS_MASK;
+			autoc |= TXGBE_AUTOC_LMS_10Gs;
+		}
+	} else if ((pma_pmd_10gs == TXGBE_AUTOC_10Gs_SFI) &&
+		   (link_mode == TXGBE_AUTOC_LMS_10Gs)) {
+		/* Switch from 10G SFI to 1G SFI if requested */
+		if ((speed == TXGBE_LINK_SPEED_1GB_FULL) &&
+		    (pma_pmd_1g == TXGBE_AUTOC_1G_SFI)) {
+			autoc &= ~TXGBE_AUTOC_LMS_MASK;
+			if (autoneg || hw->phy.type == txgbe_phy_qsfp_intel)
+				autoc |= TXGBE_AUTOC_LMS_1G_AN;
+			else
+				autoc |= TXGBE_AUTOC_LMS_1G_LINK_NO_AN;
+		}
+	}
+
+	if (autoc == current_autoc) {
+		return status;
+	}
+
+	autoc &= ~TXGBE_AUTOC_SPEED_MASK;
+	autoc |= TXGBE_AUTOC_SPEED(speed);
+	autoc |= (autoneg ? TXGBE_AUTOC_AUTONEG : 0);
+
+	/* Restart link */
+	hw->mac.autoc_write(hw, autoc);
+
+	/* Only poll for autoneg to complete if specified to do so */
+	if (autoneg_wait_to_complete) {
+		if (link_mode == TXGBE_AUTOC_LMS_KX4_KX_KR ||
+		    link_mode == TXGBE_AUTOC_LMS_KX4_KX_KR_1G_AN ||
+		    link_mode == TXGBE_AUTOC_LMS_KX4_KX_KR_SGMII) {
+			links_reg = 0; /* just in case autoneg time = 0 */
+			for (i = 0; i < TXGBE_AUTO_NEG_TIME; i++) {
+				links_reg = rd32(hw, TXGBE_PORTSTAT);
+				if (links_reg & TXGBE_PORTSTAT_UP)
+					break;
+				msec_delay(100);
+			}
+			if (!(links_reg & TXGBE_PORTSTAT_UP)) {
+				status = TXGBE_ERR_AUTONEG_NOT_COMPLETE;
+				DEBUGOUT("Autoneg did not complete.\n");
+			}
+		}
+	}
+
+	/* Add delay to filter out noises during initial link setup */
+	msec_delay(50);
+
+	return status;
+}
+
+/**
+ *  txgbe_setup_copper_link_raptor - Setup the copper PHY and MAC link
+ *  @hw: pointer to hardware structure
+ *  @speed: new link speed
+ *  @autoneg_wait_to_complete: true if waiting is needed to complete
+ *
+ *  Restarts link on PHY and MAC based on settings passed in.
+ **/
+STATIC s32 txgbe_setup_copper_link_raptor(struct txgbe_hw *hw,
+					 u32 speed,
+					 bool autoneg_wait_to_complete)
+{
+	s32 status;
+
+	DEBUGFUNC("txgbe_setup_copper_link_raptor");
+
+	/* Setup the PHY according to input speed */
+	status = hw->phy.setup_link_speed(hw, speed,
+					      autoneg_wait_to_complete);
+	/* Set up MAC */
+	txgbe_start_mac_link_raptor(hw, autoneg_wait_to_complete);
+
+	return status;
+}
+
 static int
 txgbe_check_flash_load(struct txgbe_hw *hw, u32 check_bit)
 {
@@ -753,3 +1174,80 @@ s32 txgbe_start_hw_raptor(struct txgbe_hw *hw)
 	return err;
 }
 
+
+/**
+ *  txgbe_verify_lesm_fw_enabled_raptor - Checks LESM FW module state.
+ *  @hw: pointer to hardware structure
+ *
+ *  Returns true if the LESM FW module is present and enabled. Otherwise
+ *  returns false. Smart Speed must be disabled if LESM FW module is enabled.
+ **/
+bool txgbe_verify_lesm_fw_enabled_raptor(struct txgbe_hw *hw)
+{
+	bool lesm_enabled = false;
+	u16 fw_offset, fw_lesm_param_offset, fw_lesm_state;
+	s32 status;
+
+	DEBUGFUNC("txgbe_verify_lesm_fw_enabled_raptor");
+
+	/* get the offset to the Firmware Module block */
+	status = hw->rom.read16(hw, TXGBE_FW_PTR, &fw_offset);
+
+	if ((status != 0) ||
+	    (fw_offset == 0) || (fw_offset == 0xFFFF))
+		goto out;
+
+	/* get the offset to the LESM Parameters block */
+	status = hw->rom.read16(hw, (fw_offset +
+				     TXGBE_FW_LESM_PARAMETERS_PTR),
+				     &fw_lesm_param_offset);
+
+	if ((status != 0) ||
+	    (fw_lesm_param_offset == 0) || (fw_lesm_param_offset == 0xFFFF))
+		goto out;
+
+	/* get the LESM state word */
+	status = hw->rom.read16(hw, (fw_lesm_param_offset +
+				     TXGBE_FW_LESM_STATE_1),
+				     &fw_lesm_state);
+
+	if ((status == 0) &&
+	    (fw_lesm_state & TXGBE_FW_LESM_STATE_ENABLED))
+		lesm_enabled = true;
+
+out:
+	return lesm_enabled;
+}
+}
+
+/**
+ * txgbe_reset_pipeline_raptor - perform pipeline reset
+ *
+ *  @hw: pointer to hardware structure
+ *
+ * Reset pipeline by asserting Restart_AN together with LMS change to ensure
+ * full pipeline reset.  This function assumes the SW/FW lock is held.
+ **/
+s32 txgbe_reset_pipeline_raptor(struct txgbe_hw *hw)
+{
+	s32 err = 0;
+	u64 autoc;
+
+	autoc = hw->mac.autoc_read(hw);
+
+	/* Enable link if disabled in NVM */
+	if (autoc & TXGBE_AUTOC_LINK_DIA_MASK)
+		autoc &= ~TXGBE_AUTOC_LINK_DIA_MASK;
+
+	autoc |= TXGBE_AUTOC_AN_RESTART;
+	/* Write AUTOC register with toggled LMS[2] bit and Restart_AN */
+	hw->mac.autoc_write(hw, autoc ^ TXGBE_AUTOC_LMS_AN);
+
+	/* Write AUTOC register with original LMS field and Restart_AN */
+	hw->mac.autoc_write(hw, autoc);
+	txgbe_flush(hw);
+
+	return err;
+}
+
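The pipeline reset above hinges on two back-to-back AUTOC writes: the first with the LMS[2] bit (TXGBE_AUTOC_LMS_AN) toggled and Restart_AN set, the second restoring the original LMS value. A minimal sketch of that bit manipulation, with register I/O replaced by plain variables and illustrative bit positions (not the real TXGBE register layout):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative bit positions -- not the real AUTOC register layout */
#define AUTOC_LMS_AN      (1u << 18)  /* stand-in for TXGBE_AUTOC_LMS_AN */
#define AUTOC_AN_RESTART  (1u << 12)  /* stand-in for TXGBE_AUTOC_AN_RESTART */

/* Compute the two values txgbe_reset_pipeline_raptor writes to AUTOC:
 * first with the LMS bit toggled (forcing a link-mode change), then
 * with the original LMS restored; Restart_AN is set in both writes. */
static void pipeline_reset_writes(uint32_t autoc,
				  uint32_t *first, uint32_t *second)
{
	autoc |= AUTOC_AN_RESTART;
	*first = autoc ^ AUTOC_LMS_AN;
	*second = autoc;
}
```

The two writes differ only in the LMS bit, which is what guarantees the hardware sees a link-mode change and performs a full pipeline reset rather than a plain AN restart.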
diff --git a/drivers/net/txgbe/base/txgbe_hw.h b/drivers/net/txgbe/base/txgbe_hw.h
index 884d24124..e64c09950 100644
--- a/drivers/net/txgbe/base/txgbe_hw.h
+++ b/drivers/net/txgbe/base/txgbe_hw.h
@@ -17,15 +17,30 @@ s32 txgbe_set_rar(struct txgbe_hw *hw, u32 index, u8 *addr, u32 vmdq,
 			  u32 enable_addr);
 
 s32 txgbe_validate_mac_addr(u8 *mac_addr);
+
+s32 txgbe_check_mac_link(struct txgbe_hw *hw,
+			       u32 *speed,
+			       bool *link_up, bool link_up_wait_to_complete);
+
 s32 txgbe_get_device_caps(struct txgbe_hw *hw, u16 *device_caps);
 void txgbe_clear_tx_pending(struct txgbe_hw *hw);
+
+extern s32 txgbe_reset_pipeline_raptor(struct txgbe_hw *hw);
+
 s32 txgbe_init_shared_code(struct txgbe_hw *hw);
 s32 txgbe_set_mac_type(struct txgbe_hw *hw);
 s32 txgbe_init_ops_pf(struct txgbe_hw *hw);
+s32 txgbe_get_link_capabilities_raptor(struct txgbe_hw *hw,
+				      u32 *speed, bool *autoneg);
 u32 txgbe_get_media_type_raptor(struct txgbe_hw *hw);
+s32 txgbe_start_mac_link_raptor(struct txgbe_hw *hw,
+			       bool autoneg_wait_to_complete);
+s32 txgbe_setup_mac_link(struct txgbe_hw *hw, u32 speed,
+			       bool autoneg_wait_to_complete);
 s32 txgbe_setup_sfp_modules(struct txgbe_hw *hw);
 void txgbe_init_mac_link_ops(struct txgbe_hw *hw);
 s32 txgbe_reset_hw(struct txgbe_hw *hw);
 s32 txgbe_start_hw_raptor(struct txgbe_hw *hw);
 s32 txgbe_init_phy_raptor(struct txgbe_hw *hw);
+bool txgbe_verify_lesm_fw_enabled_raptor(struct txgbe_hw *hw);
 #endif /* _TXGBE_HW_H_ */
diff --git a/drivers/net/txgbe/base/txgbe_phy.c b/drivers/net/txgbe/base/txgbe_phy.c
index 5e42dfa23..59d28506e 100644
--- a/drivers/net/txgbe/base/txgbe_phy.c
+++ b/drivers/net/txgbe/base/txgbe_phy.c
@@ -426,6 +426,318 @@ s32 txgbe_write_phy_reg(struct txgbe_hw *hw, u32 reg_addr,
 
 	return err;
 }
+
+/**
+ *  txgbe_setup_phy_link - Set and restart auto-neg
+ *  @hw: pointer to hardware structure
+ *
+ *  Restarts auto-negotiation on the PHY and waits for completion.
+ **/
+s32 txgbe_setup_phy_link(struct txgbe_hw *hw)
+{
+	s32 err = 0;
+	u16 autoneg_reg = TXGBE_MII_AUTONEG_REG;
+	bool autoneg = false;
+	u32 speed;
+
+	DEBUGFUNC("txgbe_setup_phy_link");
+
+	txgbe_get_copper_link_capabilities(hw, &speed, &autoneg);
+
+	/* Set or unset auto-negotiation 10G advertisement */
+	hw->phy.read_reg(hw, TXGBE_MII_10GBASE_T_AUTONEG_CTRL_REG,
+			     TXGBE_MD_DEV_AUTO_NEG,
+			     &autoneg_reg);
+
+	autoneg_reg &= ~TXGBE_MII_10GBASE_T_ADVERTISE;
+	if ((hw->phy.autoneg_advertised & TXGBE_LINK_SPEED_10GB_FULL) &&
+	    (speed & TXGBE_LINK_SPEED_10GB_FULL))
+		autoneg_reg |= TXGBE_MII_10GBASE_T_ADVERTISE;
+
+	hw->phy.write_reg(hw, TXGBE_MII_10GBASE_T_AUTONEG_CTRL_REG,
+			      TXGBE_MD_DEV_AUTO_NEG,
+			      autoneg_reg);
+
+	hw->phy.read_reg(hw, TXGBE_MII_AUTONEG_VENDOR_PROVISION_1_REG,
+			     TXGBE_MD_DEV_AUTO_NEG,
+			     &autoneg_reg);
+
+	/* Set or unset auto-negotiation 5G advertisement */
+	autoneg_reg &= ~TXGBE_MII_5GBASE_T_ADVERTISE;
+	if ((hw->phy.autoneg_advertised & TXGBE_LINK_SPEED_5GB_FULL) &&
+	    (speed & TXGBE_LINK_SPEED_5GB_FULL))
+		autoneg_reg |= TXGBE_MII_5GBASE_T_ADVERTISE;
+
+	/* Set or unset auto-negotiation 2.5G advertisement */
+	autoneg_reg &= ~TXGBE_MII_2_5GBASE_T_ADVERTISE;
+	if ((hw->phy.autoneg_advertised &
+	     TXGBE_LINK_SPEED_2_5GB_FULL) &&
+	    (speed & TXGBE_LINK_SPEED_2_5GB_FULL))
+		autoneg_reg |= TXGBE_MII_2_5GBASE_T_ADVERTISE;
+	/* Set or unset auto-negotiation 1G advertisement */
+	autoneg_reg &= ~TXGBE_MII_1GBASE_T_ADVERTISE;
+	if ((hw->phy.autoneg_advertised & TXGBE_LINK_SPEED_1GB_FULL) &&
+	    (speed & TXGBE_LINK_SPEED_1GB_FULL))
+		autoneg_reg |= TXGBE_MII_1GBASE_T_ADVERTISE;
+
+	hw->phy.write_reg(hw, TXGBE_MII_AUTONEG_VENDOR_PROVISION_1_REG,
+			      TXGBE_MD_DEV_AUTO_NEG,
+			      autoneg_reg);
+
+	/* Set or unset auto-negotiation 100M advertisement */
+	hw->phy.read_reg(hw, TXGBE_MII_AUTONEG_ADVERTISE_REG,
+			     TXGBE_MD_DEV_AUTO_NEG,
+			     &autoneg_reg);
+
+	autoneg_reg &= ~(TXGBE_MII_100BASE_T_ADVERTISE |
+			 TXGBE_MII_100BASE_T_ADVERTISE_HALF);
+	if ((hw->phy.autoneg_advertised & TXGBE_LINK_SPEED_100M_FULL) &&
+	    (speed & TXGBE_LINK_SPEED_100M_FULL))
+		autoneg_reg |= TXGBE_MII_100BASE_T_ADVERTISE;
+
+	hw->phy.write_reg(hw, TXGBE_MII_AUTONEG_ADVERTISE_REG,
+			      TXGBE_MD_DEV_AUTO_NEG,
+			      autoneg_reg);
+
+	/* Blocked by MNG FW so don't reset PHY */
+	if (txgbe_check_reset_blocked(hw))
+		return err;
+
+	/* Restart PHY auto-negotiation. */
+	hw->phy.read_reg(hw, TXGBE_MD_AUTO_NEG_CONTROL,
+			     TXGBE_MD_DEV_AUTO_NEG, &autoneg_reg);
+
+	autoneg_reg |= TXGBE_MII_RESTART;
+
+	hw->phy.write_reg(hw, TXGBE_MD_AUTO_NEG_CONTROL,
+			      TXGBE_MD_DEV_AUTO_NEG, autoneg_reg);
+
+	return err;
+}
+
+/**
+ *  txgbe_setup_phy_link_speed - Sets the auto advertised capabilities
+ *  @hw: pointer to hardware structure
+ *  @speed: new link speed
+ *  @autoneg_wait_to_complete: unused
+ **/
+s32 txgbe_setup_phy_link_speed(struct txgbe_hw *hw,
+				       u32 speed,
+				       bool autoneg_wait_to_complete)
+{
+	UNREFERENCED_PARAMETER(autoneg_wait_to_complete);
+
+	DEBUGFUNC("txgbe_setup_phy_link_speed");
+
+	/*
+	 * Clear autoneg_advertised and set new values based on input link
+	 * speed.
+	 */
+	hw->phy.autoneg_advertised = 0;
+
+	if (speed & TXGBE_LINK_SPEED_10GB_FULL)
+		hw->phy.autoneg_advertised |= TXGBE_LINK_SPEED_10GB_FULL;
+
+	if (speed & TXGBE_LINK_SPEED_5GB_FULL)
+		hw->phy.autoneg_advertised |= TXGBE_LINK_SPEED_5GB_FULL;
+
+	if (speed & TXGBE_LINK_SPEED_2_5GB_FULL)
+		hw->phy.autoneg_advertised |= TXGBE_LINK_SPEED_2_5GB_FULL;
+
+	if (speed & TXGBE_LINK_SPEED_1GB_FULL)
+		hw->phy.autoneg_advertised |= TXGBE_LINK_SPEED_1GB_FULL;
+
+	if (speed & TXGBE_LINK_SPEED_100M_FULL)
+		hw->phy.autoneg_advertised |= TXGBE_LINK_SPEED_100M_FULL;
+
+	if (speed & TXGBE_LINK_SPEED_10M_FULL)
+		hw->phy.autoneg_advertised |= TXGBE_LINK_SPEED_10M_FULL;
+
+	/* Setup link based on the new speed settings */
+	hw->phy.setup_link(hw);
+
+	return 0;
+}
+
+/**
+ * txgbe_get_copper_speeds_supported - Get copper link speeds from phy
+ * @hw: pointer to hardware structure
+ *
+ * Determines the supported link capabilities by reading the PHY auto
+ * negotiation register.
+ **/
+static s32 txgbe_get_copper_speeds_supported(struct txgbe_hw *hw)
+{
+	s32 err;
+	u16 speed_ability;
+
+	err = hw->phy.read_reg(hw, TXGBE_MD_PHY_SPEED_ABILITY,
+				      TXGBE_MD_DEV_PMA_PMD,
+				      &speed_ability);
+	if (err)
+		return err;
+
+	if (speed_ability & TXGBE_MD_PHY_SPEED_10G)
+		hw->phy.speeds_supported |= TXGBE_LINK_SPEED_10GB_FULL;
+	if (speed_ability & TXGBE_MD_PHY_SPEED_1G)
+		hw->phy.speeds_supported |= TXGBE_LINK_SPEED_1GB_FULL;
+	if (speed_ability & TXGBE_MD_PHY_SPEED_100M)
+		hw->phy.speeds_supported |= TXGBE_LINK_SPEED_100M_FULL;
+
+	return err;
+}
+
+/**
+ *  txgbe_get_copper_link_capabilities - Determines link capabilities
+ *  @hw: pointer to hardware structure
+ *  @speed: pointer to link speed
+ *  @autoneg: boolean auto-negotiation value
+ **/
+s32 txgbe_get_copper_link_capabilities(struct txgbe_hw *hw,
+					       u32 *speed,
+					       bool *autoneg)
+{
+	s32 err = 0;
+
+	DEBUGFUNC("txgbe_get_copper_link_capabilities");
+
+	*autoneg = true;
+	if (!hw->phy.speeds_supported)
+		err = txgbe_get_copper_speeds_supported(hw);
+
+	*speed = hw->phy.speeds_supported;
+	return err;
+}
+
+/**
+ *  txgbe_check_phy_link_tnx - Determine link and speed status
+ *  @hw: pointer to hardware structure
+ *  @speed: current link speed
+ *  @link_up: true if link is up, false otherwise
+ *
+ *  Reads the VS1 register to determine if link is up and the current speed for
+ *  the PHY.
+ **/
+s32 txgbe_check_phy_link_tnx(struct txgbe_hw *hw, u32 *speed,
+			     bool *link_up)
+{
+	s32 err = 0;
+	u32 time_out;
+	u32 max_time_out = 10;
+	u16 phy_link = 0;
+	u16 phy_speed = 0;
+	u16 phy_data = 0;
+
+	DEBUGFUNC("txgbe_check_phy_link_tnx");
+
+	/* Initialize speed and link to default case */
+	*link_up = false;
+	*speed = TXGBE_LINK_SPEED_10GB_FULL;
+
+	/*
+	 * Check the current speed and link status of the PHY register.
+	 * This is a vendor specific register and may have to
+	 * be changed for other copper PHYs.
+	 */
+	for (time_out = 0; time_out < max_time_out; time_out++) {
+		usec_delay(10);
+		err = hw->phy.read_reg(hw,
+					TXGBE_MD_VENDOR_SPECIFIC_1_STATUS,
+					TXGBE_MD_DEV_VENDOR_1,
+					&phy_data);
+		phy_link = phy_data & TXGBE_MD_VENDOR_SPECIFIC_1_LINK_STATUS;
+		phy_speed = phy_data &
+				 TXGBE_MD_VENDOR_SPECIFIC_1_SPEED_STATUS;
+		if (phy_link == TXGBE_MD_VENDOR_SPECIFIC_1_LINK_STATUS) {
+			*link_up = true;
+			if (phy_speed ==
+			    TXGBE_MD_VENDOR_SPECIFIC_1_SPEED_STATUS)
+				*speed = TXGBE_LINK_SPEED_1GB_FULL;
+			break;
+		}
+	}
+
+	return err;
+}
+
+/**
+ *  txgbe_setup_phy_link_tnx - Set and restart auto-neg
+ *  @hw: pointer to hardware structure
+ *
+ *  Restarts auto-negotiation on the PHY and waits for completion.
+ **/
+s32 txgbe_setup_phy_link_tnx(struct txgbe_hw *hw)
+{
+	s32 err = 0;
+	u16 autoneg_reg = TXGBE_MII_AUTONEG_REG;
+	bool autoneg = false;
+	u32 speed;
+
+	DEBUGFUNC("txgbe_setup_phy_link_tnx");
+
+	txgbe_get_copper_link_capabilities(hw, &speed, &autoneg);
+
+	if (speed & TXGBE_LINK_SPEED_10GB_FULL) {
+		/* Set or unset auto-negotiation 10G advertisement */
+		hw->phy.read_reg(hw, TXGBE_MII_10GBASE_T_AUTONEG_CTRL_REG,
+				     TXGBE_MD_DEV_AUTO_NEG,
+				     &autoneg_reg);
+
+		autoneg_reg &= ~TXGBE_MII_10GBASE_T_ADVERTISE;
+		if (hw->phy.autoneg_advertised & TXGBE_LINK_SPEED_10GB_FULL)
+			autoneg_reg |= TXGBE_MII_10GBASE_T_ADVERTISE;
+
+		hw->phy.write_reg(hw, TXGBE_MII_10GBASE_T_AUTONEG_CTRL_REG,
+				      TXGBE_MD_DEV_AUTO_NEG,
+				      autoneg_reg);
+	}
+
+	if (speed & TXGBE_LINK_SPEED_1GB_FULL) {
+		/* Set or unset auto-negotiation 1G advertisement */
+		hw->phy.read_reg(hw, TXGBE_MII_AUTONEG_XNP_TX_REG,
+				     TXGBE_MD_DEV_AUTO_NEG,
+				     &autoneg_reg);
+
+		autoneg_reg &= ~TXGBE_MII_1GBASE_T_ADVERTISE_XNP_TX;
+		if (hw->phy.autoneg_advertised & TXGBE_LINK_SPEED_1GB_FULL)
+			autoneg_reg |= TXGBE_MII_1GBASE_T_ADVERTISE_XNP_TX;
+
+		hw->phy.write_reg(hw, TXGBE_MII_AUTONEG_XNP_TX_REG,
+				      TXGBE_MD_DEV_AUTO_NEG,
+				      autoneg_reg);
+	}
+
+	if (speed & TXGBE_LINK_SPEED_100M_FULL) {
+		/* Set or unset auto-negotiation 100M advertisement */
+		hw->phy.read_reg(hw, TXGBE_MII_AUTONEG_ADVERTISE_REG,
+				     TXGBE_MD_DEV_AUTO_NEG,
+				     &autoneg_reg);
+
+		autoneg_reg &= ~TXGBE_MII_100BASE_T_ADVERTISE;
+		if (hw->phy.autoneg_advertised & TXGBE_LINK_SPEED_100M_FULL)
+			autoneg_reg |= TXGBE_MII_100BASE_T_ADVERTISE;
+
+		hw->phy.write_reg(hw, TXGBE_MII_AUTONEG_ADVERTISE_REG,
+				      TXGBE_MD_DEV_AUTO_NEG,
+				      autoneg_reg);
+	}
+
+	/* Blocked by MNG FW so don't reset PHY */
+	if (txgbe_check_reset_blocked(hw))
+		return err;
+
+	/* Restart PHY auto-negotiation. */
+	hw->phy.read_reg(hw, TXGBE_MD_AUTO_NEG_CONTROL,
+			     TXGBE_MD_DEV_AUTO_NEG, &autoneg_reg);
+
+	autoneg_reg |= TXGBE_MII_RESTART;
+
+	hw->phy.write_reg(hw, TXGBE_MD_AUTO_NEG_CONTROL,
+			      TXGBE_MD_DEV_AUTO_NEG, autoneg_reg);
+
+	return err;
+}
+
 /**
  *  txgbe_identify_module - Identifies module type
  *  @hw: pointer to hardware structure
diff --git a/drivers/net/txgbe/base/txgbe_phy.h b/drivers/net/txgbe/base/txgbe_phy.h
index 318dca61c..56959b837 100644
--- a/drivers/net/txgbe/base/txgbe_phy.h
+++ b/drivers/net/txgbe/base/txgbe_phy.h
@@ -336,9 +336,21 @@ s32 txgbe_read_phy_reg(struct txgbe_hw *hw, u32 reg_addr,
 			       u32 device_type, u16 *phy_data);
 s32 txgbe_write_phy_reg(struct txgbe_hw *hw, u32 reg_addr,
 				u32 device_type, u16 phy_data);
+s32 txgbe_setup_phy_link(struct txgbe_hw *hw);
+s32 txgbe_setup_phy_link_speed(struct txgbe_hw *hw,
+				       u32 speed,
+				       bool autoneg_wait_to_complete);
+s32 txgbe_get_copper_link_capabilities(struct txgbe_hw *hw,
+					       u32 *speed,
+					       bool *autoneg);
 s32 txgbe_check_reset_blocked(struct txgbe_hw *hw);
 
 /* PHY specific */
+s32 txgbe_check_phy_link_tnx(struct txgbe_hw *hw,
+			     u32 *speed,
+			     bool *link_up);
+s32 txgbe_setup_phy_link_tnx(struct txgbe_hw *hw);
+
 s32 txgbe_identify_module(struct txgbe_hw *hw);
 s32 txgbe_identify_sfp_module(struct txgbe_hw *hw);
 s32 txgbe_identify_qsfp_module(struct txgbe_hw *hw);
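txgbe_get_copper_speeds_supported above translates the PHY's speed-ability word into the driver's TXGBE_LINK_SPEED_* bitmask, one bit test per speed. The shape of that mapping can be sketched standalone; the bit assignments below are placeholders, not the hardware encodings:

```c
#include <assert.h>
#include <stdint.h>

/* Placeholder bit assignments -- not the real TXGBE_MD_PHY_SPEED_* or
 * TXGBE_LINK_SPEED_* encodings */
#define ABILITY_10G   (1u << 0)
#define ABILITY_1G    (1u << 4)
#define ABILITY_100M  (1u << 5)

#define SPEED_10G     (1u << 3)
#define SPEED_1G      (1u << 2)
#define SPEED_100M    (1u << 1)

/* Translate a PHY speed-ability word into the driver's speed bitmask,
 * mirroring the per-bit checks in txgbe_get_copper_speeds_supported. */
static uint32_t decode_speed_ability(uint16_t ability)
{
	uint32_t speeds = 0;

	if (ability & ABILITY_10G)
		speeds |= SPEED_10G;
	if (ability & ABILITY_1G)
		speeds |= SPEED_1G;
	if (ability & ABILITY_100M)
		speeds |= SPEED_100M;
	return speeds;
}
```

The driver caches the result in hw->phy.speeds_supported so the register is only read once per port.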
diff --git a/drivers/net/txgbe/base/txgbe_type.h b/drivers/net/txgbe/base/txgbe_type.h
index 5bde3c642..b94217b8b 100644
--- a/drivers/net/txgbe/base/txgbe_type.h
+++ b/drivers/net/txgbe/base/txgbe_type.h
@@ -6,6 +6,7 @@
 #define _TXGBE_TYPE_H_
 
 #define TXGBE_LINK_UP_TIME	90 /* 9.0 Seconds */
+#define TXGBE_AUTO_NEG_TIME	45 /* 4.5 Seconds */
 
 #define TXGBE_ALIGN				128 /* as intel did */
 
@@ -101,6 +102,14 @@ enum txgbe_media_type {
 };
 
 
+/* Smart Speed Settings */
+#define TXGBE_SMARTSPEED_MAX_RETRIES	3
+enum txgbe_smart_speed {
+	txgbe_smart_speed_auto = 0,
+	txgbe_smart_speed_on,
+	txgbe_smart_speed_off
+};
+
 /* PCI bus types */
 enum txgbe_bus_type {
 	txgbe_bus_type_unknown = 0,
@@ -374,6 +383,10 @@ struct txgbe_phy_info {
 	u32 media_type;
 	u32 phy_semaphore_mask;
 	bool reset_disable;
+	u32 autoneg_advertised;
+	u32 speeds_supported;
+	enum txgbe_smart_speed smart_speed;
+	bool smart_speed_active;
 	bool multispeed_fiber;
 	bool qsfp_shared_i2c_bus;
 	u32 nw_mng_if_sel;
@@ -420,6 +433,11 @@ struct txgbe_hw {
 	void IOMEM *isb_mem;
 	u16 nb_rx_queues;
 	u16 nb_tx_queues;
+	enum txgbe_link_status {
+		TXGBE_LINK_STATUS_NONE = 0,
+		TXGBE_LINK_STATUS_KX,
+		TXGBE_LINK_STATUS_KX4
+	} link_status;
 	enum txgbe_reset_type {
 		TXGBE_LAN_RESET = 0,
 		TXGBE_SW_RESET,
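txgbe_check_phy_link_tnx polls a vendor-specific status register up to max_time_out times with a short delay between reads. That bounded-poll pattern, stripped of the hardware access (the callback and names here are hypothetical, and the delay is elided), looks like:

```c
#include <assert.h>
#include <stdbool.h>

typedef bool (*poll_fn)(void *ctx);

/* Bounded poll: invoke check() up to max_tries times, giving up if the
 * condition never becomes true.  txgbe_check_phy_link_tnx follows this
 * pattern with a usec_delay(10) between register reads. */
static bool poll_until(poll_fn check, void *ctx, unsigned int max_tries)
{
	unsigned int i;

	for (i = 0; i < max_tries; i++) {
		if (check(ctx))
			return true;
		/* usec_delay(10) would sit here in the driver */
	}
	return false;
}

/* test stub: condition becomes true on the third call */
static bool third_call(void *ctx)
{
	int *calls = ctx;

	return ++(*calls) >= 3;
}
```

Bounding the loop keeps a dead PHY from hanging the control path; the caller simply reports link-down when the budget is exhausted.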
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index 88967dede..16008ea4e 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -35,6 +35,8 @@
 #include "txgbe_ethdev.h"
 #include "txgbe_rxtx.h"
 
+static int  txgbe_dev_set_link_up(struct rte_eth_dev *dev);
+static int  txgbe_dev_set_link_down(struct rte_eth_dev *dev);
 static void txgbe_dev_close(struct rte_eth_dev *dev);
 static int txgbe_dev_link_update(struct rte_eth_dev *dev,
 				int wait_to_complete);
@@ -655,6 +657,46 @@ txgbe_dev_stop(struct rte_eth_dev *dev)
 	hw->adapter_stopped = true;
 }
 
+/*
+ * Set device link up: enable tx.
+ */
+static int
+txgbe_dev_set_link_up(struct rte_eth_dev *dev)
+{
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+
+	if (hw->phy.media_type == txgbe_media_type_copper) {
+		/* Turn on the copper */
+		hw->phy.set_phy_power(hw, true);
+	} else {
+		/* Turn on the laser */
+		hw->mac.enable_tx_laser(hw);
+		txgbe_dev_link_update(dev, 0);
+	}
+
+	return 0;
+}
+
+/*
+ * Set device link down: disable tx.
+ */
+static int
+txgbe_dev_set_link_down(struct rte_eth_dev *dev)
+{
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+
+	if (hw->phy.media_type == txgbe_media_type_copper) {
+		/* Turn off the copper */
+		hw->phy.set_phy_power(hw, false);
+	} else {
+		/* Turn off the laser */
+		hw->mac.disable_tx_laser(hw);
+		txgbe_dev_link_update(dev, 0);
+	}
+
+	return 0;
+}
+
 /*
  * Reset and stop device.
  */
@@ -760,18 +802,107 @@ txgbe_dev_stats_reset(struct rte_eth_dev *dev)
 void
 txgbe_dev_setup_link_alarm_handler(void *param)
 {
-	RTE_SET_USED(param);
+	struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+	struct txgbe_interrupt *intr = TXGBE_DEV_INTR(dev);
+	u32 speed;
+	bool autoneg = false;
+
+	speed = hw->phy.autoneg_advertised;
+	if (!speed)
+		hw->mac.get_link_capabilities(hw, &speed, &autoneg);
+
+	hw->mac.setup_link(hw, speed, true);
+
+	intr->flags &= ~TXGBE_FLAG_NEED_LINK_CONFIG;
+}
+
+/* return 0 means link status changed, -1 means not changed */
+int
+txgbe_dev_link_update_share(struct rte_eth_dev *dev,
+			    int wait_to_complete)
+{
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+	struct rte_eth_link link;
+	u32 link_speed = TXGBE_LINK_SPEED_UNKNOWN;
+	struct txgbe_interrupt *intr = TXGBE_DEV_INTR(dev);
+	bool link_up;
+	int err;
+	int wait = 1;
+
+	memset(&link, 0, sizeof(link));
+	link.link_status = ETH_LINK_DOWN;
+	link.link_speed = ETH_SPEED_NUM_NONE;
+	link.link_duplex = ETH_LINK_HALF_DUPLEX;
+	link.link_autoneg = ETH_LINK_AUTONEG;
+
+	hw->mac.get_link_status = true;
+
+	if (intr->flags & TXGBE_FLAG_NEED_LINK_CONFIG)
+		return rte_eth_linkstatus_set(dev, &link);
+
+	/* check if it needs to wait to complete, if lsc interrupt is enabled */
+	if (wait_to_complete == 0 || dev->data->dev_conf.intr_conf.lsc != 0)
+		wait = 0;
+
+	err = hw->mac.check_link(hw, &link_speed, &link_up, wait);
+
+	if (err != 0) {
+		link.link_speed = ETH_SPEED_NUM_100M;
+		link.link_duplex = ETH_LINK_FULL_DUPLEX;
+		return rte_eth_linkstatus_set(dev, &link);
+	}
+
+	if (link_up == 0) {
+		if (hw->phy.media_type == txgbe_media_type_fiber) {
+			intr->flags |= TXGBE_FLAG_NEED_LINK_CONFIG;
+			rte_eal_alarm_set(10,
+				txgbe_dev_setup_link_alarm_handler, dev);
+		}
+		return rte_eth_linkstatus_set(dev, &link);
+	}
+
+	intr->flags &= ~TXGBE_FLAG_NEED_LINK_CONFIG;
+	link.link_status = ETH_LINK_UP;
+	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+
+	switch (link_speed) {
+	default:
+	case TXGBE_LINK_SPEED_UNKNOWN:
+		link.link_duplex = ETH_LINK_FULL_DUPLEX;
+		link.link_speed = ETH_SPEED_NUM_100M;
+		break;
+
+	case TXGBE_LINK_SPEED_100M_FULL:
+		link.link_speed = ETH_SPEED_NUM_100M;
+		break;
+
+	case TXGBE_LINK_SPEED_1GB_FULL:
+		link.link_speed = ETH_SPEED_NUM_1G;
+		break;
+
+	case TXGBE_LINK_SPEED_2_5GB_FULL:
+		link.link_speed = ETH_SPEED_NUM_2_5G;
+		break;
+
+	case TXGBE_LINK_SPEED_5GB_FULL:
+		link.link_speed = ETH_SPEED_NUM_5G;
+		break;
+
+	case TXGBE_LINK_SPEED_10GB_FULL:
+		link.link_speed = ETH_SPEED_NUM_10G;
+		break;
+	}
+
+	return rte_eth_linkstatus_set(dev, &link);
 }
 
 static int
 txgbe_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 {
-	RTE_SET_USED(dev);
-	RTE_SET_USED(wait_to_complete);
-	return 0;
+	return txgbe_dev_link_update_share(dev, wait_to_complete);
 }
 
-
 /**
  * It clears the interrupt causes and enables the interrupt.
  * It will be called once only during nic initialized.
@@ -897,7 +1028,26 @@ txgbe_dev_interrupt_get_status(struct rte_eth_dev *dev)
 static void
 txgbe_dev_link_status_print(struct rte_eth_dev *dev)
 {
-	RTE_SET_USED(dev);
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct rte_eth_link link;
+
+	rte_eth_linkstatus_get(dev, &link);
+
+	if (link.link_status) {
+		PMD_INIT_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
+					(int)(dev->data->port_id),
+					(unsigned)link.link_speed,
+			link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+					"full-duplex" : "half-duplex");
+	} else {
+		PMD_INIT_LOG(INFO, " Port %d: Link Down",
+				(int)(dev->data->port_id));
+	}
+	PMD_INIT_LOG(DEBUG, "PCI Address: " PCI_PRI_FMT,
+				pci_dev->addr.domain,
+				pci_dev->addr.bus,
+				pci_dev->addr.devid,
+				pci_dev->addr.function);
 }
 
 /*
@@ -1138,6 +1288,8 @@ txgbe_configure_msix(struct rte_eth_dev *dev)
 static const struct eth_dev_ops txgbe_eth_dev_ops = {
 	.dev_start                  = txgbe_dev_start,
 	.dev_stop                   = txgbe_dev_stop,
+	.dev_set_link_up            = txgbe_dev_set_link_up,
+	.dev_set_link_down          = txgbe_dev_set_link_down,
 	.dev_close                  = txgbe_dev_close,
 	.dev_reset                  = txgbe_dev_reset,
 	.link_update                = txgbe_dev_link_update,
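txgbe_dev_link_update_share maps each TXGBE_LINK_SPEED_* flag to the matching ETH_SPEED_NUM_* value, falling back to 100 Mbps full duplex for unknown speeds. A standalone sketch of that policy, using placeholder speed codes rather than the real constants:

```c
#include <assert.h>
#include <stdint.h>

/* Placeholder flags standing in for TXGBE_LINK_SPEED_* */
#define LS_UNKNOWN  0u
#define LS_100M     (1u << 1)
#define LS_1G       (1u << 2)
#define LS_10G      (1u << 3)

/* Map a hardware speed flag to Mbps.  Unknown values fall back to
 * 100 Mbps, the same conservative default the link-update switch uses. */
static uint32_t speed_flag_to_mbps(uint32_t flag)
{
	switch (flag) {
	case LS_100M:
		return 100;
	case LS_1G:
		return 1000;
	case LS_10G:
		return 10000;
	case LS_UNKNOWN:
	default:
		return 100;
	}
}
```

Reporting the lowest supported speed on an unknown code is a defensive choice: applications see a usable (if pessimistic) link rather than garbage.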
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index 11f19650b..ff2b36f02 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -85,6 +85,11 @@ int txgbe_dev_rxtx_start(struct rte_eth_dev *dev);
 void txgbe_set_ivar_map(struct txgbe_hw *hw, int8_t direction,
 			       uint8_t queue, uint8_t msix_vector);
 
+
+int
+txgbe_dev_link_update_share(struct rte_eth_dev *dev,
+		int wait_to_complete);
+
 /*
  * misc function prototypes
  */
-- 
2.18.4




^ permalink raw reply	[flat|nested] 49+ messages in thread

* [dpdk-dev] [PATCH v1 15/42] net/txgbe: add multi-speed link setup
  2020-09-01 11:50 [dpdk-dev] [PATCH v1 01/42] net/txgbe: add build and doc infrastructure Jiawen Wu
                   ` (12 preceding siblings ...)
  2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 14/42] net/txgbe: add link status change Jiawen Wu
@ 2020-09-01 11:50 ` Jiawen Wu
  2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 16/42] net/txgbe: add autoc read and write Jiawen Wu
                   ` (27 subsequent siblings)
  41 siblings, 0 replies; 49+ messages in thread
From: Jiawen Wu @ 2020-09-01 11:50 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Add multi-speed fiber link setup and Tx laser control.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/base/txgbe_hw.c | 400 +++++++++++++++++++++++++++++-
 drivers/net/txgbe/base/txgbe_hw.h |  11 +
 2 files changed, 410 insertions(+), 1 deletion(-)

diff --git a/drivers/net/txgbe/base/txgbe_hw.c b/drivers/net/txgbe/base/txgbe_hw.c
index 26593c5f6..b494a57e8 100644
--- a/drivers/net/txgbe/base/txgbe_hw.c
+++ b/drivers/net/txgbe/base/txgbe_hw.c
@@ -406,6 +406,152 @@ void txgbe_clear_tx_pending(struct txgbe_hw *hw)
 	wr32(hw, TXGBE_PSRCTL, hlreg0);
 }
 
+
+/**
+ *  txgbe_setup_mac_link_multispeed_fiber - Set MAC link speed
+ *  @hw: pointer to hardware structure
+ *  @speed: new link speed
+ *  @autoneg_wait_to_complete: true when waiting for completion is needed
+ *
+ *  Set the link speed in the MAC and/or PHY register and restarts link.
+ **/
+s32 txgbe_setup_mac_link_multispeed_fiber(struct txgbe_hw *hw,
+					  u32 speed,
+					  bool autoneg_wait_to_complete)
+{
+	u32 link_speed = TXGBE_LINK_SPEED_UNKNOWN;
+	u32 highest_link_speed = TXGBE_LINK_SPEED_UNKNOWN;
+	s32 status = 0;
+	u32 speedcnt = 0;
+	u32 i = 0;
+	bool autoneg, link_up = false;
+
+	DEBUGFUNC("txgbe_setup_mac_link_multispeed_fiber");
+
+	/* Mask off requested but non-supported speeds */
+	status = hw->mac.get_link_capabilities(hw, &link_speed, &autoneg);
+	if (status != 0)
+		return status;
+
+	speed &= link_speed;
+
+	/* Try each speed one by one, highest priority first.  We do this in
+	 * software because 10Gb fiber doesn't support speed autonegotiation.
+	 */
+	if (speed & TXGBE_LINK_SPEED_10GB_FULL) {
+		speedcnt++;
+		highest_link_speed = TXGBE_LINK_SPEED_10GB_FULL;
+
+		/* Set the module link speed */
+		switch (hw->phy.media_type) {
+		case txgbe_media_type_fiber:
+			hw->mac.set_rate_select_speed(hw,
+				TXGBE_LINK_SPEED_10GB_FULL);
+			break;
+		case txgbe_media_type_fiber_qsfp:
+			/* QSFP module automatically detects MAC link speed */
+			break;
+		default:
+			DEBUGOUT("Unexpected media type.\n");
+			break;
+		}
+
+		/* Allow module to change analog characteristics (1G->10G) */
+		msec_delay(40);
+
+		status = hw->mac.setup_mac_link(hw,
+				TXGBE_LINK_SPEED_10GB_FULL,
+				autoneg_wait_to_complete);
+		if (status != 0)
+			return status;
+
+		/* Flap the Tx laser if it has not already been done */
+		hw->mac.flap_tx_laser(hw);
+
+		/* Wait for the controller to acquire link.  Per IEEE 802.3ap,
+		 * Section 73.10.2, we may have to wait up to 500ms if KR is
+		 * attempted.  The same timing is used for 10G SFI.
+		 */
+		for (i = 0; i < 5; i++) {
+			/* Wait for the link partner to also set speed */
+			msec_delay(100);
+
+			/* If we have link, just jump out */
+			status = hw->mac.check_link(hw, &link_speed,
+				&link_up, false);
+			if (status != 0)
+				return status;
+
+			if (link_up)
+				goto out;
+		}
+	}
+
+	if (speed & TXGBE_LINK_SPEED_1GB_FULL) {
+		speedcnt++;
+		if (highest_link_speed == TXGBE_LINK_SPEED_UNKNOWN)
+			highest_link_speed = TXGBE_LINK_SPEED_1GB_FULL;
+
+		/* Set the module link speed */
+		switch (hw->phy.media_type) {
+		case txgbe_media_type_fiber:
+			hw->mac.set_rate_select_speed(hw,
+				TXGBE_LINK_SPEED_1GB_FULL);
+			break;
+		case txgbe_media_type_fiber_qsfp:
+			/* QSFP module automatically detects link speed */
+			break;
+		default:
+			DEBUGOUT("Unexpected media type.\n");
+			break;
+		}
+
+		/* Allow module to change analog characteristics (10G->1G) */
+		msec_delay(40);
+
+		status = hw->mac.setup_mac_link(hw,
+				TXGBE_LINK_SPEED_1GB_FULL,
+				autoneg_wait_to_complete);
+		if (status != 0)
+			return status;
+
+		/* Flap the Tx laser if it has not already been done */
+		hw->mac.flap_tx_laser(hw);
+
+		/* Wait for the link partner to also set speed */
+		msec_delay(100);
+
+		/* If we have link, just jump out */
+		status = hw->mac.check_link(hw, &link_speed, &link_up, false);
+		if (status != 0)
+			return status;
+
+		if (link_up)
+			goto out;
+	}
+
+	/* We didn't get link.  Configure back to the highest speed we tried
+	 * (if there was more than one).  We call ourselves back with just the
+	 * single highest speed that the user requested.
+	 */
+	if (speedcnt > 1)
+		status = txgbe_setup_mac_link_multispeed_fiber(hw,
+						      highest_link_speed,
+						      autoneg_wait_to_complete);
+
+out:
+	/* Set autoneg_advertised value based on input link speed */
+	hw->phy.autoneg_advertised = 0;
+
+	if (speed & TXGBE_LINK_SPEED_10GB_FULL)
+		hw->phy.autoneg_advertised |= TXGBE_LINK_SPEED_10GB_FULL;
+
+	if (speed & TXGBE_LINK_SPEED_1GB_FULL)
+		hw->phy.autoneg_advertised |= TXGBE_LINK_SPEED_1GB_FULL;
+
+	return status;
+}
+
 /**
  *  txgbe_init_shared_code - Initialize the shared code
  *  @hw: pointer to hardware structure
@@ -507,7 +653,35 @@ void txgbe_init_mac_link_ops(struct txgbe_hw *hw)
 
 	DEBUGFUNC("txgbe_init_mac_link_ops");
 
-	mac->setup_link = txgbe_setup_mac_link;
+	/*
+	 * enable the laser control functions for SFP+ fiber
+	 * and MNG not enabled
+	 */
+	if ((hw->phy.media_type == txgbe_media_type_fiber) &&
+	    !txgbe_mng_enabled(hw)) {
+		mac->disable_tx_laser =
+			txgbe_disable_tx_laser_multispeed_fiber;
+		mac->enable_tx_laser =
+			txgbe_enable_tx_laser_multispeed_fiber;
+		mac->flap_tx_laser =
+			txgbe_flap_tx_laser_multispeed_fiber;
+	}
+
+	if ((hw->phy.media_type == txgbe_media_type_fiber ||
+	     hw->phy.media_type == txgbe_media_type_fiber_qsfp) &&
+	    hw->phy.multispeed_fiber) {
+		/* Set up dual speed SFP+ support */
+		mac->setup_link = txgbe_setup_mac_link_multispeed_fiber;
+		mac->setup_mac_link = txgbe_setup_mac_link;
+		mac->set_rate_select_speed = txgbe_set_hard_rate_select_speed;
+	} else if ((hw->phy.media_type == txgbe_media_type_backplane) &&
+		    (hw->phy.smart_speed == txgbe_smart_speed_auto ||
+		     hw->phy.smart_speed == txgbe_smart_speed_on) &&
+		     !txgbe_verify_lesm_fw_enabled_raptor(hw)) {
+		mac->setup_link = txgbe_setup_mac_link_smartspeed;
+	} else {
+		mac->setup_link = txgbe_setup_mac_link;
+	}
 }
 
 /**
@@ -629,6 +803,7 @@ s32 txgbe_init_ops_pf(struct txgbe_hw *hw)
 	/* MAC */
 	mac->init_hw = txgbe_init_hw;
 	mac->start_hw = txgbe_start_hw_raptor;
+	mac->stop_hw = txgbe_stop_hw;
 	mac->reset_hw = txgbe_reset_hw;
 
 	mac->get_device_caps = txgbe_get_device_caps;
@@ -748,6 +923,19 @@ s32 txgbe_get_link_capabilities_raptor(struct txgbe_hw *hw,
 		return TXGBE_ERR_LINK_SETUP;
 	}
 
+	if (hw->phy.multispeed_fiber) {
+		*speed |= TXGBE_LINK_SPEED_10GB_FULL |
+			  TXGBE_LINK_SPEED_1GB_FULL;
+
+		/* QSFP must not enable full auto-negotiation
+		 * Limited autoneg is enabled at 1G
+		 */
+		if (hw->phy.media_type == txgbe_media_type_fiber_qsfp)
+			*autoneg = false;
+		else
+			*autoneg = true;
+	}
+
 	return status;
 }
 
@@ -837,6 +1025,216 @@ s32 txgbe_start_mac_link_raptor(struct txgbe_hw *hw,
 	return status;
 }
 
+/**
+ *  txgbe_disable_tx_laser_multispeed_fiber - Disable Tx laser
+ *  @hw: pointer to hardware structure
+ *
+ *  The base drivers may require better control over SFP+ module
+ *  PHY states.  This includes selectively shutting down the Tx
+ *  laser on the PHY, effectively halting physical link.
+ **/
+void txgbe_disable_tx_laser_multispeed_fiber(struct txgbe_hw *hw)
+{
+	u32 esdp_reg = rd32(hw, TXGBE_GPIODATA);
+
+	/* Blocked by MNG FW so bail */
+	if (txgbe_check_reset_blocked(hw))
+		return;
+
+	/* Disable Tx laser; allow 100us to go dark per spec */
+	esdp_reg |= (TXGBE_GPIOBIT_0 | TXGBE_GPIOBIT_1);
+	wr32(hw, TXGBE_GPIODATA, esdp_reg);
+	txgbe_flush(hw);
+	usec_delay(100);
+}
+
+/**
+ *  txgbe_enable_tx_laser_multispeed_fiber - Enable Tx laser
+ *  @hw: pointer to hardware structure
+ *
+ *  The base drivers may require better control over SFP+ module
+ *  PHY states.  This includes selectively turning on the Tx
+ *  laser on the PHY, effectively starting physical link.
+ **/
+void txgbe_enable_tx_laser_multispeed_fiber(struct txgbe_hw *hw)
+{
+	u32 esdp_reg = rd32(hw, TXGBE_GPIODATA);
+
+	/* Enable Tx laser; allow 100ms to light up */
+	esdp_reg &= ~(TXGBE_GPIOBIT_0 | TXGBE_GPIOBIT_1);
+	wr32(hw, TXGBE_GPIODATA, esdp_reg);
+	txgbe_flush(hw);
+	msec_delay(100);
+}
+
+/**
+ *  txgbe_flap_tx_laser_multispeed_fiber - Flap Tx laser
+ *  @hw: pointer to hardware structure
+ *
+ *  When the driver changes the link speeds that it can support,
+ *  it sets autotry_restart to true to indicate that we need to
+ *  initiate a new autotry session with the link partner.  To do
+ *  so, we set the speed then disable and re-enable the Tx laser, to
+ *  alert the link partner that it also needs to restart autotry on its
+ *  end.  This is consistent with true clause 37 autoneg, which also
+ *  involves a loss of signal.
+ **/
+void txgbe_flap_tx_laser_multispeed_fiber(struct txgbe_hw *hw)
+{
+	DEBUGFUNC("txgbe_flap_tx_laser_multispeed_fiber");
+
+	/* Blocked by MNG FW so bail */
+	if (txgbe_check_reset_blocked(hw))
+		return;
+
+	if (hw->mac.autotry_restart) {
+		txgbe_disable_tx_laser_multispeed_fiber(hw);
+		txgbe_enable_tx_laser_multispeed_fiber(hw);
+		hw->mac.autotry_restart = false;
+	}
+}
+
+/**
+ *  txgbe_set_hard_rate_select_speed - Set module link speed
+ *  @hw: pointer to hardware structure
+ *  @speed: link speed to set
+ *
+ *  Set module link speed via RS0/RS1 rate select pins.
+ */
+void txgbe_set_hard_rate_select_speed(struct txgbe_hw *hw,
+					u32 speed)
+{
+	u32 esdp_reg = rd32(hw, TXGBE_GPIODATA);
+
+	switch (speed) {
+	case TXGBE_LINK_SPEED_10GB_FULL:
+		esdp_reg |= (TXGBE_GPIOBIT_4 | TXGBE_GPIOBIT_5);
+		break;
+	case TXGBE_LINK_SPEED_1GB_FULL:
+		esdp_reg &= ~(TXGBE_GPIOBIT_4 | TXGBE_GPIOBIT_5);
+		break;
+	default:
+		DEBUGOUT("Invalid fixed module speed\n");
+		return;
+	}
+
+	wr32(hw, TXGBE_GPIODATA, esdp_reg);
+	txgbe_flush(hw);
+}
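The three helpers above (laser disable/enable and hard rate select) all follow the same GPIO read-modify-write pattern: read TXGBE_GPIODATA, set or clear a bit pair, write it back. A standalone sketch of that pattern, using a plain variable in place of the real MMIO accessors and illustrative bit positions (not the actual TXGBE_GPIOBIT_* encodings):

```c
#include <assert.h>
#include <stdint.h>

/* Simulated GPIO data register; the real driver goes through
 * rd32(hw, TXGBE_GPIODATA) / wr32(hw, TXGBE_GPIODATA, v). */
static uint32_t gpio_data;

/* Illustrative masks standing in for TXGBE_GPIOBIT_0/1 (Tx laser
 * disable) and TXGBE_GPIOBIT_4/5 (RS0/RS1 rate select pins). */
#define LASER_BITS ((1u << 0) | (1u << 1))
#define RATE_BITS  ((1u << 4) | (1u << 5))

/* Setting the laser bits shuts the Tx laser down; clearing them
 * turns it back on. */
static void laser_disable(void) { gpio_data |= LASER_BITS; }
static void laser_enable(void)  { gpio_data &= ~LASER_BITS; }

/* Rate select: both pins high selects 10G, both low selects 1G. */
static void rate_select_10g(void) { gpio_data |= RATE_BITS; }
static void rate_select_1g(void)  { gpio_data &= ~RATE_BITS; }
```

The flap helper is just laser_disable() followed by laser_enable(), which forces a loss of signal so the link partner restarts negotiation.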
+
+/**
+ *  txgbe_setup_mac_link_smartspeed - Set MAC link speed using SmartSpeed
+ *  @hw: pointer to hardware structure
+ *  @speed: new link speed
+ *  @autoneg_wait_to_complete: true when waiting for completion is needed
+ *
+ *  Implements the Intel SmartSpeed algorithm.
+ **/
+s32 txgbe_setup_mac_link_smartspeed(struct txgbe_hw *hw,
+				    u32 speed,
+				    bool autoneg_wait_to_complete)
+{
+	s32 status = 0;
+	u32 link_speed = TXGBE_LINK_SPEED_UNKNOWN;
+	s32 i, j;
+	bool link_up = false;
+	u32 autoc_reg = rd32_epcs(hw, SR_AN_MMD_ADV_REG1);
+
+	DEBUGFUNC("txgbe_setup_mac_link_smartspeed");
+
+	/* Set autoneg_advertised value based on input link speed */
+	hw->phy.autoneg_advertised = 0;
+
+	if (speed & TXGBE_LINK_SPEED_10GB_FULL)
+		hw->phy.autoneg_advertised |= TXGBE_LINK_SPEED_10GB_FULL;
+
+	if (speed & TXGBE_LINK_SPEED_1GB_FULL)
+		hw->phy.autoneg_advertised |= TXGBE_LINK_SPEED_1GB_FULL;
+
+	if (speed & TXGBE_LINK_SPEED_100M_FULL)
+		hw->phy.autoneg_advertised |= TXGBE_LINK_SPEED_100M_FULL;
+
+	/*
+	 * Implement Intel SmartSpeed algorithm.  SmartSpeed will reduce the
+	 * autoneg advertisement if link is unable to be established at the
+	 * highest negotiated rate.  This can sometimes happen due to integrity
+	 * issues with the physical media connection.
+	 */
+
+	/* First, try to get link with full advertisement */
+	hw->phy.smart_speed_active = false;
+	for (j = 0; j < TXGBE_SMARTSPEED_MAX_RETRIES; j++) {
+		status = txgbe_setup_mac_link(hw, speed,
+						    autoneg_wait_to_complete);
+		if (status != 0)
+			goto out;
+
+		/*
+		 * Wait for the controller to acquire link.  Per IEEE 802.3ap,
+		 * Section 73.10.2, we may have to wait up to 500ms if KR is
+		 * attempted, or 200ms if KX/KX4/BX/BX4 is attempted, per
+		 * Table 9 in the AN MAS.
+		 */
+		for (i = 0; i < 5; i++) {
+			msec_delay(100);
+
+			/* If we have link, just jump out */
+			status = hw->mac.check_link(hw, &link_speed, &link_up,
+						  false);
+			if (status != 0)
+				goto out;
+
+			if (link_up)
+				goto out;
+		}
+	}
+
+	/*
+	 * We didn't get link.  If we advertised KR plus one of KX4/KX
+	 * (or BX4/BX), then disable KR and try again.
+	 */
+	if (((autoc_reg & TXGBE_AUTOC_KR_SUPP) == 0) ||
+	    ((autoc_reg & TXGBE_AUTOC_KX_SUPP) == 0 &&
+	     (autoc_reg & TXGBE_AUTOC_KX4_SUPP) == 0))
+		goto out;
+
+	/* Turn SmartSpeed on to disable KR support */
+	hw->phy.smart_speed_active = true;
+	status = txgbe_setup_mac_link(hw, speed,
+					    autoneg_wait_to_complete);
+	if (status != 0)
+		goto out;
+
+	/*
+	 * Wait for the controller to acquire link.  600ms will allow for
+	 * the AN link_fail_inhibit_timer as well for multiple cycles of
+	 * parallel detect, both 10g and 1g. This allows for the maximum
+	 * connect attempts as defined in the AN MAS table 73-7.
+	 */
+	for (i = 0; i < 6; i++) {
+		msec_delay(100);
+
+		/* If we have link, just jump out */
+		status = hw->mac.check_link(hw, &link_speed, &link_up, false);
+		if (status != 0)
+			goto out;
+
+		if (link_up)
+			goto out;
+	}
+
+	/* We didn't get link.  Turn SmartSpeed back off. */
+	hw->phy.smart_speed_active = false;
+	status = txgbe_setup_mac_link(hw, speed,
+					    autoneg_wait_to_complete);
+
+out:
+	if (link_up && (link_speed == TXGBE_LINK_SPEED_1GB_FULL))
+		DEBUGOUT("Smartspeed has downgraded the link speed "
+			 "from the maximum advertised\n");
+	return status;
+}
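The control flow of the SmartSpeed routine above reduces to: try to link with the full advertisement a few times; if that fails, set smart_speed_active (which masks KR) and retry at the reduced rate. A toy model of that flow, with a boolean standing in for a faulty medium and a stub in place of hw->mac.check_link():

```c
#include <assert.h>
#include <stdbool.h>

/* When true, the simulated medium only links with KR masked,
 * mimicking an integrity problem at the highest rate. */
static bool faulty_medium;

/* Stand-in for hw->mac.check_link() polling. */
static bool link_up_at(bool smart_speed_active)
{
	return !faulty_medium || smart_speed_active;
}

/* Returns true if link came up; *downgraded reports whether the
 * SmartSpeed fallback (KR disabled) was needed. */
static bool smartspeed_bringup(int max_retries, bool *downgraded)
{
	bool active = false;
	int j;

	/* First pass: full advertisement, several retries. */
	for (j = 0; j < max_retries; j++) {
		if (link_up_at(active)) {
			*downgraded = false;
			return true;
		}
	}
	/* Full advertisement failed: mask KR and retry. */
	active = true;
	*downgraded = link_up_at(active);
	return *downgraded;
}
```

This mirrors the real function's structure only; the real code also waits out the IEEE 802.3ap clause 73 timers between checks.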
+
 /**
  *  txgbe_setup_mac_link - Set MAC link speed
  *  @hw: pointer to hardware structure
diff --git a/drivers/net/txgbe/base/txgbe_hw.h b/drivers/net/txgbe/base/txgbe_hw.h
index e64c09950..d361f6590 100644
--- a/drivers/net/txgbe/base/txgbe_hw.h
+++ b/drivers/net/txgbe/base/txgbe_hw.h
@@ -27,12 +27,23 @@ void txgbe_clear_tx_pending(struct txgbe_hw *hw);
 
 extern s32 txgbe_reset_pipeline_raptor(struct txgbe_hw *hw);
 
+s32 txgbe_setup_mac_link_multispeed_fiber(struct txgbe_hw *hw,
+					  u32 speed,
+					  bool autoneg_wait_to_complete);
 s32 txgbe_init_shared_code(struct txgbe_hw *hw);
 s32 txgbe_set_mac_type(struct txgbe_hw *hw);
 s32 txgbe_init_ops_pf(struct txgbe_hw *hw);
 s32 txgbe_get_link_capabilities_raptor(struct txgbe_hw *hw,
 				      u32 *speed, bool *autoneg);
 u32 txgbe_get_media_type_raptor(struct txgbe_hw *hw);
+void txgbe_disable_tx_laser_multispeed_fiber(struct txgbe_hw *hw);
+void txgbe_enable_tx_laser_multispeed_fiber(struct txgbe_hw *hw);
+void txgbe_flap_tx_laser_multispeed_fiber(struct txgbe_hw *hw);
+void txgbe_set_hard_rate_select_speed(struct txgbe_hw *hw,
+					u32 speed);
+s32 txgbe_setup_mac_link_smartspeed(struct txgbe_hw *hw,
+				    u32 speed,
+				    bool autoneg_wait_to_complete);
 s32 txgbe_start_mac_link_raptor(struct txgbe_hw *hw,
 			       bool autoneg_wait_to_complete);
 s32 txgbe_setup_mac_link(struct txgbe_hw *hw, u32 speed,
-- 
2.18.4


* [dpdk-dev] [PATCH v1 16/42] net/txgbe: add autoc read and write
  2020-09-01 11:50 [dpdk-dev] [PATCH v1 01/42] net/txgbe: add build and doc infrastructure Jiawen Wu
                   ` (13 preceding siblings ...)
  2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 15/42] net/txgbe: add multi-speed link setup Jiawen Wu
@ 2020-09-01 11:50 ` Jiawen Wu
  2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 17/42] net/txgbe: support device LED on and off Jiawen Wu
                   ` (26 subsequent siblings)
  41 siblings, 0 replies; 49+ messages in thread
From: Jiawen Wu @ 2020-09-01 11:50 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Add AUTOC register read and write operations for KR/KX/KX4/SFI links.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/base/txgbe_hw.c   |   2 +
 drivers/net/txgbe/base/txgbe_phy.c  | 848 ++++++++++++++++++++++++++++
 drivers/net/txgbe/base/txgbe_phy.h  |   2 +
 drivers/net/txgbe/base/txgbe_type.h |  21 +
 4 files changed, 873 insertions(+)

diff --git a/drivers/net/txgbe/base/txgbe_hw.c b/drivers/net/txgbe/base/txgbe_hw.c
index b494a57e8..37f55c1fc 100644
--- a/drivers/net/txgbe/base/txgbe_hw.c
+++ b/drivers/net/txgbe/base/txgbe_hw.c
@@ -807,6 +807,8 @@ s32 txgbe_init_ops_pf(struct txgbe_hw *hw)
 	mac->reset_hw = txgbe_reset_hw;
 
 	mac->get_device_caps = txgbe_get_device_caps;
+	mac->autoc_read = txgbe_autoc_read;
+	mac->autoc_write = txgbe_autoc_write;
 
 	/* Link */
 	mac->get_link_capabilities = txgbe_get_link_capabilities_raptor;
diff --git a/drivers/net/txgbe/base/txgbe_phy.c b/drivers/net/txgbe/base/txgbe_phy.c
index 59d28506e..7981fb2f8 100644
--- a/drivers/net/txgbe/base/txgbe_phy.c
+++ b/drivers/net/txgbe/base/txgbe_phy.c
@@ -1373,3 +1373,851 @@ STATIC void txgbe_i2c_stop(struct txgbe_hw *hw)
 	wr32(hw, TXGBE_I2CENA, 0);
 }
 
+static s32
+txgbe_set_sgmii_an37_ability(struct txgbe_hw *hw)
+{
+	u32 value;
+
+	wr32_epcs(hw, VR_XS_OR_PCS_MMD_DIGI_CTL1, 0x3002);
+	wr32_epcs(hw, SR_MII_MMD_AN_CTL, 0x0105);
+	wr32_epcs(hw, SR_MII_MMD_DIGI_CTL, 0x0200);
+	value = rd32_epcs(hw, SR_MII_MMD_CTL);
+	value = (value & ~0x1200) | (0x1 << 12) | (0x1 << 9);
+	wr32_epcs(hw, SR_MII_MMD_CTL, value);
+	return 0;
+}
+
+static s32
+txgbe_set_link_to_kr(struct txgbe_hw *hw, bool autoneg)
+{
+	u32 i;
+	s32 err = 0;
+
+	/* 1. Wait xpcs power-up good */
+	for (i = 0; i < 100; i++) {
+		if ((rd32_epcs(hw, VR_XS_OR_PCS_MMD_DIGI_STATUS) &
+			VR_XS_OR_PCS_MMD_DIGI_STATUS_PSEQ_MASK) ==
+			VR_XS_OR_PCS_MMD_DIGI_STATUS_PSEQ_POWER_GOOD)
+			break;
+		msleep(10);
+	}
+	if (i == 100) {
+		err = TXGBE_ERR_XPCS_POWER_UP_FAILED;
+		goto out;
+	}
+
+	if (!autoneg) {
+		/* 2. Disable xpcs AN-73 */
+		wr32_epcs(hw, SR_AN_CTRL, 0x0);
+		/* Disable PHY MPLLA for eth mode change(after ECO) */
+		wr32_ephy(hw, 0x4, 0x243A);
+		txgbe_flush(hw);
+		msleep(1);
+		/* Set the eth change_mode bit first in mis_rst register
+		 * for corresponding LAN port
+		 */
+		wr32(hw, TXGBE_RST, TXGBE_RST_ETH(hw->bus.lan_id));
+
+		/* 3. Set VR_XS_PMA_Gen5_12G_MPLLA_CTRL3 Register
+		 * Bit[10:0](MPLLA_BANDWIDTH) = 11'd123 (default: 11'd16)
+		 */
+		wr32_epcs(hw, TXGBE_PHY_MPLLA_CTL3,
+			TXGBE_PHY_MPLLA_CTL3_MULTIPLIER_BW_10GBASER_KR);
+
+		/* 4. Set VR_XS_PMA_Gen5_12G_MISC_CTRL0 Register
+		 * Bit[12:8](RX_VREF_CTRL) = 5'hF (default: 5'h11)
+		 */
+		wr32_epcs(hw, TXGBE_PHY_MISC_CTL0, 0xCF00);
+
+		/* 5. Set VR_XS_PMA_Gen5_12G_RX_EQ_CTRL0 Register
+		 * Bit[15:8](VGA1/2_GAIN_0) = 8'h77
+		 * Bit[7:5](CTLE_POLE_0) = 3'h2
+		 * Bit[4:0](CTLE_BOOST_0) = 4'hA
+		 */
+		wr32_epcs(hw, TXGBE_PHY_RX_EQ_CTL0, 0x774A);
+
+		/* 6. Set VR_MII_Gen5_12G_RX_GENCTRL3 Register
+		 * Bit[2:0](LOS_TRSHLD_0) = 3'h4 (default: 3)
+		 */
+		wr32_epcs(hw, TXGBE_PHY_RX_GEN_CTL3, 0x0004);
+
+		/* 7. Initialize the mode by setting VR XS or PCS MMD Digital
+		 * Control1 Register Bit[15](VR_RST)
+		 */
+		wr32_epcs(hw, VR_XS_OR_PCS_MMD_DIGI_CTL1, 0xA000);
+
+		/* Wait phy initialization done */
+		for (i = 0; i < 100; i++) {
+			if ((rd32_epcs(hw,
+				VR_XS_OR_PCS_MMD_DIGI_CTL1) &
+				VR_XS_OR_PCS_MMD_DIGI_CTL1_VR_RST) == 0)
+				break;
+			msleep(100);
+		}
+		if (i == 100) {
+			err = TXGBE_ERR_PHY_INIT_NOT_DONE;
+			goto out;
+		}
+	} else {
+		wr32_epcs(hw, VR_AN_KR_MODE_CL, 0x1);
+	}
+out:
+	return err;
+}
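Every txgbe_set_link_to_*() helper opens with the same "wait xpcs power-up good" loop: poll a status register until a field reaches the expected value, giving up after a fixed number of tries. A self-contained sketch of that poll-with-timeout pattern, with a simulated register and illustrative mask/value constants (not the real VR_XS_OR_PCS_MMD_DIGI_STATUS encoding):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-ins for the PSEQ field mask and its
 * "power good" value. */
#define PSEQ_MASK 0x1Cu
#define PSEQ_GOOD 0x10u

/* Simulated status register; the real code uses rd32_epcs(). */
static uint32_t reg_value;
static uint32_t read_status(void) { return reg_value; }

/* Poll until the PSEQ field reads "power good", up to `tries`
 * attempts; returns 0 on success, -1 on timeout. */
static int wait_power_good(int tries)
{
	int i;

	for (i = 0; i < tries; i++) {
		if ((read_status() & PSEQ_MASK) == PSEQ_GOOD)
			return 0;
		/* the real loop sleeps 10 ms here (msleep(10)) */
	}
	return -1;
}
```

Checking `i == 100` after the loop, as the driver does, is the same timeout test expressed inline.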
+
+static s32
+txgbe_set_link_to_kx4(struct txgbe_hw *hw, bool autoneg)
+{
+	u32 i;
+	s32 err = 0;
+	u32 value;
+
+	/* Check link status, if already set, skip setting it again */
+	if (hw->link_status == TXGBE_LINK_STATUS_KX4)
+		goto out;
+
+	/* 1. Wait xpcs power-up good */
+	for (i = 0; i < 100; i++) {
+		if ((rd32_epcs(hw, VR_XS_OR_PCS_MMD_DIGI_STATUS) &
+			VR_XS_OR_PCS_MMD_DIGI_STATUS_PSEQ_MASK) ==
+			VR_XS_OR_PCS_MMD_DIGI_STATUS_PSEQ_POWER_GOOD)
+			break;
+		msleep(10);
+	}
+	if (i == 100) {
+		err = TXGBE_ERR_XPCS_POWER_UP_FAILED;
+		goto out;
+	}
+
+	wr32m(hw, TXGBE_MACTXCFG, TXGBE_MACTXCFG_TE,
+			~TXGBE_MACTXCFG_TE);
+
+	/* 2. Disable xpcs AN-73 */
+	if (!autoneg)
+		wr32_epcs(hw, SR_AN_CTRL, 0x0);
+	else
+		wr32_epcs(hw, SR_AN_CTRL, 0x3000);
+
+	/* Disable PHY MPLLA for eth mode change(after ECO) */
+	wr32_ephy(hw, 0x4, 0x250A);
+	txgbe_flush(hw);
+	msleep(1);
+
+	/* Set the eth change_mode bit first in mis_rst register
+	 * for corresponding LAN port
+	 */
+	wr32(hw, TXGBE_RST, TXGBE_RST_ETH(hw->bus.lan_id));
+
+	/* Set SR PCS Control2 Register Bits[1:0] = 2'b01
+	 * PCS_TYPE_SEL: non KR
+	 */
+	wr32_epcs(hw, SR_XS_PCS_CTRL2,
+			SR_PCS_CTRL2_TYPE_SEL_X);
+
+	/* Set SR PMA MMD Control1 Register Bit[13] = 1'b1
+	 * SS13: 10G speed
+	 */
+	wr32_epcs(hw, SR_PMA_CTRL1,
+			SR_PMA_CTRL1_SS13_KX4);
+
+	value = (0xf5f0 & ~0x7F0) | (0x5 << 8) | (0x7 << 5) | 0x10;
+	wr32_epcs(hw, TXGBE_PHY_TX_GENCTRL1, value);
+
+	wr32_epcs(hw, TXGBE_PHY_MISC_CTL0, 0x4F00);
+
+	value = (0x1804 & ~0x3F3F);
+	wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0, value);
+
+	value = (0x50 & ~0x7F) | 40 | (1 << 6);
+	wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1, value);
+
+	for (i = 0; i < 4; i++) {
+		if (i == 0)
+			value = (0x45 & ~0xFFFF) | (0x7 << 12) | (0x7 << 8) | 0x6;
+		else
+			value = (0xff06 & ~0xFFFF) | (0x7 << 12) | (0x7 << 8) | 0x6;
+		wr32_epcs(hw, TXGBE_PHY_RX_EQ_CTL0 + i, value);
+	}
+
+	value = 0x0 & ~0x7777;
+	wr32_epcs(hw, TXGBE_PHY_RX_EQ_ATT_LVL0, value);
+
+	wr32_epcs(hw, TXGBE_PHY_DFE_TAP_CTL0, 0x0);
+
+	value = (0x6db & ~0xFFF) | (0x1 << 9) | (0x1 << 6) | (0x1 << 3) | 0x1;
+	wr32_epcs(hw, TXGBE_PHY_RX_GEN_CTL3, value);
+
+	/* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY MPLLA
+	 * Control 0 Register Bit[7:0] = 8'd40  //MPLLA_MULTIPLIER
+	 */
+	wr32_epcs(hw, TXGBE_PHY_MPLLA_CTL0,
+			TXGBE_PHY_MPLLA_CTL0_MULTIPLIER_OTHER);
+
+	/* Set VR XS, PMA or MII Synopsys Enterprise Gen5 12G PHY MPLLA
+	 * Control 3 Register Bit[10:0] = 11'd86  //MPLLA_BANDWIDTH
+	 */
+	wr32_epcs(hw, TXGBE_PHY_MPLLA_CTL3,
+			TXGBE_PHY_MPLLA_CTL3_MULTIPLIER_BW_OTHER);
+
+	/* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY VCO
+	 * Calibration Load 0 Register  Bit[12:0] = 13'd1360  //VCO_LD_VAL_0
+	 */
+	wr32_epcs(hw, TXGBE_PHY_VCO_CAL_LD0,
+			TXGBE_PHY_VCO_CAL_LD0_OTHER);
+
+	/* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY VCO
+	 * Calibration Load 1 Register  Bit[12:0] = 13'd1360  //VCO_LD_VAL_1
+	 */
+	wr32_epcs(hw, TXGBE_PHY_VCO_CAL_LD1,
+			TXGBE_PHY_VCO_CAL_LD0_OTHER);
+
+	/* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY VCO
+	 * Calibration Load 2 Register  Bit[12:0] = 13'd1360  //VCO_LD_VAL_2
+	 */
+	wr32_epcs(hw, TXGBE_PHY_VCO_CAL_LD2,
+			TXGBE_PHY_VCO_CAL_LD0_OTHER);
+	/* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY VCO
+	 * Calibration Load 3 Register  Bit[12:0] = 13'd1360  //VCO_LD_VAL_3
+	 */
+	wr32_epcs(hw, TXGBE_PHY_VCO_CAL_LD3,
+			TXGBE_PHY_VCO_CAL_LD0_OTHER);
+	/* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY VCO
+	 * Calibration Reference 0 Register Bit[5:0] = 6'd34  //VCO_REF_LD_0/1
+	 */
+	wr32_epcs(hw, TXGBE_PHY_VCO_CAL_REF0, 0x2222);
+
+	/* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY VCO
+	 * Calibration Reference 1 Register Bit[5:0] = 6'd34  //VCO_REF_LD_2/3
+	 */
+	wr32_epcs(hw, TXGBE_PHY_VCO_CAL_REF1, 0x2222);
+
+	/* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY AFE-DFE
+	 * Enable Register Bit[7:0] = 8'd0  //AFE_EN_0/3_1, DFE_EN_0/3_1
+	 */
+	wr32_epcs(hw, TXGBE_PHY_AFE_DFE_ENABLE, 0x0);
+
+	/* Set  VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY Rx
+	 * Equalization Control 4 Register Bit[3:0] = 4'd0  //CONT_ADAPT_0/3_1
+	 */
+	wr32_epcs(hw, TXGBE_PHY_RX_EQ_CTL, 0x00F0);
+
+	/* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY Tx Rate
+	 * Control Register Bit[14:12], Bit[10:8], Bit[6:4], Bit[2:0],
+	 * all rates to 3'b010  //TX0/1/2/3_RATE
+	 */
+	wr32_epcs(hw, TXGBE_PHY_TX_RATE_CTL, 0x2222);
+
+	/* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY Rx Rate
+	 * Control Register Bit[13:12], Bit[9:8], Bit[5:4], Bit[1:0],
+	 * all rates to 2'b10  //RX0/1/2/3_RATE
+	 */
+	wr32_epcs(hw, TXGBE_PHY_RX_RATE_CTL, 0x2222);
+
+	/* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY Tx General
+	 * Control 2 Register Bit[15:8] = 2'b01  //TX0/1/2/3_WIDTH: 10bits
+	 */
+	wr32_epcs(hw, TXGBE_PHY_TX_GEN_CTL2, 0x5500);
+
+	/* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY Rx General
+	 * Control 2 Register Bit[15:8] = 2'b01  //RX0/1/2/3_WIDTH: 10bits
+	 */
+	wr32_epcs(hw, TXGBE_PHY_RX_GEN_CTL2, 0x5500);
+
+	/* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY MPLLA Control
+	 * 2 Register Bit[10:8] = 3'b010
+	 * MPLLA_DIV16P5_CLK_EN=0, MPLLA_DIV10_CLK_EN=1, MPLLA_DIV8_CLK_EN=0
+	 */
+	wr32_epcs(hw, TXGBE_PHY_MPLLA_CTL2,
+			TXGBE_PHY_MPLLA_CTL2_DIV_CLK_EN_10);
+
+	wr32_epcs(hw, 0x1f0000, 0x0);
+	wr32_epcs(hw, 0x1f8001, 0x0);
+	wr32_epcs(hw, SR_MII_MMD_DIGI_CTL, 0x0);
+
+	/* 10. Initialize the mode by setting VR XS or PCS MMD Digital Control1
+	 * Register Bit[15](VR_RST)
+	 */
+	wr32_epcs(hw, VR_XS_OR_PCS_MMD_DIGI_CTL1, 0xA000);
+
+	/* Wait phy initialization done */
+	for (i = 0; i < 100; i++) {
+		if ((rd32_epcs(hw, VR_XS_OR_PCS_MMD_DIGI_CTL1) &
+			VR_XS_OR_PCS_MMD_DIGI_CTL1_VR_RST) == 0)
+			break;
+		msleep(100);
+	}
+
+	/* If success, set link status */
+	hw->link_status = TXGBE_LINK_STATUS_KX4;
+
+	if (i == 100) {
+		err = TXGBE_ERR_PHY_INIT_NOT_DONE;
+		goto out;
+	}
+
+out:
+	return err;
+}
+
+static s32
+txgbe_set_link_to_kx(struct txgbe_hw *hw,
+			       u32 speed,
+			       bool autoneg)
+{
+	u32 i;
+	s32 err = 0;
+	u32 wdata = 0;
+	u32 value;
+
+	/* Check link status, if already set, skip setting it again */
+	if (hw->link_status == TXGBE_LINK_STATUS_KX)
+		goto out;
+
+	/* 1. Wait xpcs power-up good */
+	for (i = 0; i < 100; i++) {
+		if ((rd32_epcs(hw, VR_XS_OR_PCS_MMD_DIGI_STATUS) &
+			VR_XS_OR_PCS_MMD_DIGI_STATUS_PSEQ_MASK) ==
+			VR_XS_OR_PCS_MMD_DIGI_STATUS_PSEQ_POWER_GOOD)
+			break;
+		msleep(10);
+	}
+	if (i == 100) {
+		err = TXGBE_ERR_XPCS_POWER_UP_FAILED;
+		goto out;
+	}
+
+	wr32m(hw, TXGBE_MACTXCFG, TXGBE_MACTXCFG_TE,
+				~TXGBE_MACTXCFG_TE);
+
+	/* 2. Disable xpcs AN-73 */
+	if (!autoneg)
+		wr32_epcs(hw, SR_AN_CTRL, 0x0);
+	else
+		wr32_epcs(hw, SR_AN_CTRL, 0x3000);
+
+	/* Disable PHY MPLLA for eth mode change(after ECO) */
+	wr32_ephy(hw, 0x4, 0x240A);
+	txgbe_flush(hw);
+	msleep(1);
+
+	/* Set the eth change_mode bit first in mis_rst register
+	 * for corresponding LAN port
+	 */
+	wr32(hw, TXGBE_RST, TXGBE_RST_ETH(hw->bus.lan_id));
+
+	/* Set SR PCS Control2 Register Bits[1:0] = 2'b01
+	 * PCS_TYPE_SEL: non KR
+	 */
+	wr32_epcs(hw, SR_XS_PCS_CTRL2,
+			SR_PCS_CTRL2_TYPE_SEL_X);
+
+	/* Set SR PMA MMD Control1 Register Bit[13] = 1'b0
+	 * SS13: 1G speed
+	 */
+	wr32_epcs(hw, SR_PMA_CTRL1,
+			SR_PMA_CTRL1_SS13_KX);
+
+	/* Set SR MII MMD Control Register to corresponding speed: {Bit[6],
+	 * Bit[13]}=[2'b00,2'b01,2'b10]->[10M,100M,1G]
+	 */
+	if (speed == TXGBE_LINK_SPEED_100M_FULL)
+		wdata = 0x2100;
+	else if (speed == TXGBE_LINK_SPEED_1GB_FULL)
+		wdata = 0x0140;
+	else if (speed == TXGBE_LINK_SPEED_10M_FULL)
+		wdata = 0x0100;
+	wr32_epcs(hw, SR_MII_MMD_CTL,
+			wdata);
+
+	value = (0xf5f0 & ~0x710) | (0x5 << 8);
+	wr32_epcs(hw, TXGBE_PHY_TX_GENCTRL1, value);
+
+	wr32_epcs(hw, TXGBE_PHY_MISC_CTL0, 0x4F00);
+
+	value = (0x1804 & ~0x3F3F) | (24 << 8) | 4;
+	wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0, value);
+
+	value = (0x50 & ~0x7F) | 16 | (1 << 6);
+	wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1, value);
+
+	for (i = 0; i < 4; i++) {
+		if (i) {
+			value = 0xff06;
+		} else {
+			value = (0x45 & ~0xFFFF) | (0x7 << 12) |
+				(0x7 << 8) | 0x6;
+		}
+		wr32_epcs(hw, TXGBE_PHY_RX_EQ_CTL0 + i, value);
+	}
+
+	value = 0x0 & ~0x7;
+	wr32_epcs(hw, TXGBE_PHY_RX_EQ_ATT_LVL0, value);
+
+	wr32_epcs(hw, TXGBE_PHY_DFE_TAP_CTL0, 0x0);
+
+	value = (0x6db & ~0x7) | 0x4;
+	wr32_epcs(hw, TXGBE_PHY_RX_GEN_CTL3, value);
+
+	/* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY MPLLA Control
+	 * 0 Register Bit[7:0] = 8'd32  //MPLLA_MULTIPLIER
+	 */
+	wr32_epcs(hw, TXGBE_PHY_MPLLA_CTL0,
+			TXGBE_PHY_MPLLA_CTL0_MULTIPLIER_1GBASEX_KX);
+
+	/* Set VR XS, PMA or MII Synopsys Enterprise Gen5 12G PHY MPLLA Control
+	 * 3 Register Bit[10:0] = 11'd70  //MPLLA_BANDWIDTH
+	 */
+	wr32_epcs(hw, TXGBE_PHY_MPLLA_CTL3,
+			TXGBE_PHY_MPLLA_CTL3_MULTIPLIER_BW_1GBASEX_KX);
+
+	/* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY VCO
+	 * Calibration Load 0 Register  Bit[12:0] = 13'd1344  //VCO_LD_VAL_0
+	 */
+	wr32_epcs(hw, TXGBE_PHY_VCO_CAL_LD0,
+			TXGBE_PHY_VCO_CAL_LD0_1GBASEX_KX);
+
+	wr32_epcs(hw, TXGBE_PHY_VCO_CAL_LD1, 0x549);
+	wr32_epcs(hw, TXGBE_PHY_VCO_CAL_LD2, 0x549);
+	wr32_epcs(hw, TXGBE_PHY_VCO_CAL_LD3, 0x549);
+
+	/* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY VCO
+	 * Calibration Reference 0 Register Bit[5:0] = 6'd42  //VCO_REF_LD_0
+	 */
+	wr32_epcs(hw, TXGBE_PHY_VCO_CAL_REF0,
+			TXGBE_PHY_VCO_CAL_REF0_LD0_1GBASEX_KX);
+
+	wr32_epcs(hw, TXGBE_PHY_VCO_CAL_REF1, 0x2929);
+
+	/* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY AFE-DFE
+	 * Enable Register Bit[4], Bit[0] = 1'b0  //AFE_EN_0, DFE_EN_0
+	 */
+	wr32_epcs(hw, TXGBE_PHY_AFE_DFE_ENABLE,
+			0x0);
+	/* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY Rx
+	 * Equalization Control 4 Register Bit[0] = 1'b0  //CONT_ADAPT_0
+	 */
+	wr32_epcs(hw, TXGBE_PHY_RX_EQ_CTL,
+			0x0010);
+	/* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY Tx Rate
+	 * Control Register Bit[2:0] = 3'b011  //TX0_RATE
+	 */
+	wr32_epcs(hw, TXGBE_PHY_TX_RATE_CTL,
+			TXGBE_PHY_TX_RATE_CTL_TX0_RATE_1GBASEX_KX);
+
+	/* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY Rx Rate
+	 * Control Register Bit[2:0] = 3'b011 //RX0_RATE
+	 */
+	wr32_epcs(hw, TXGBE_PHY_RX_RATE_CTL,
+			TXGBE_PHY_RX_RATE_CTL_RX0_RATE_1GBASEX_KX);
+
+	/* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY Tx General
+	 * Control 2 Register Bit[9:8] = 2'b01  //TX0_WIDTH: 10bits
+	 */
+	wr32_epcs(hw, TXGBE_PHY_TX_GEN_CTL2,
+			TXGBE_PHY_TX_GEN_CTL2_TX0_WIDTH_OTHER);
+	/* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY Rx General
+	 * Control 2 Register Bit[9:8] = 2'b01  //RX0_WIDTH: 10bits
+	 */
+	wr32_epcs(hw, TXGBE_PHY_RX_GEN_CTL2,
+			TXGBE_PHY_RX_GEN_CTL2_RX0_WIDTH_OTHER);
+	/* Set VR XS, PMA, or MII Synopsys Enterprise Gen5 12G PHY MPLLA Control
+	 * 2 Register Bit[10:8] = 3'b010   //MPLLA_DIV16P5_CLK_EN=0,
+	 * MPLLA_DIV10_CLK_EN=1, MPLLA_DIV8_CLK_EN=0
+	 */
+	wr32_epcs(hw, TXGBE_PHY_MPLLA_CTL2,
+			TXGBE_PHY_MPLLA_CTL2_DIV_CLK_EN_10);
+
+	/* VR MII MMD AN Control Register Bit[8] = 1'b1 //MII_CTRL
+	 * Set to 8bit MII (required in 10M/100M SGMII)
+	 */
+	wr32_epcs(hw, SR_MII_MMD_AN_CTL,
+			0x0100);
+
+	/* 10. Initialize the mode by setting VR XS or PCS MMD Digital Control1
+	 * Register Bit[15](VR_RST)
+	 */
+	wr32_epcs(hw, VR_XS_OR_PCS_MMD_DIGI_CTL1, 0xA000);
+
+	/* Wait phy initialization done */
+	for (i = 0; i < 100; i++) {
+		if ((rd32_epcs(hw, VR_XS_OR_PCS_MMD_DIGI_CTL1) &
+			VR_XS_OR_PCS_MMD_DIGI_CTL1_VR_RST) == 0)
+			break;
+		msleep(100);
+	}
+
+	/* If success, set link status */
+	hw->link_status = TXGBE_LINK_STATUS_KX;
+
+	if (i == 100) {
+		err = TXGBE_ERR_PHY_INIT_NOT_DONE;
+		goto out;
+	}
+
+out:
+	return err;
+}
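The KX path above encodes the requested speed into the SR MII MMD Control register via the {Bit[6], Bit[13]} pair noted in the comment, with bit 8 set in all three wdata values. A small sketch of that mapping (enum tags and the base-bit interpretation are illustrative; the three register values match the ones written above):

```c
#include <assert.h>
#include <stdint.h>

/* Placeholder speed tags for the three SGMII rates. */
enum { SP_10M, SP_100M, SP_1G };

/* {bit 6, bit 13} = 00/01/10 selects 10M/100M/1G; bit 8 stays
 * set in every case, giving 0x0100, 0x2100 and 0x0140. */
static uint32_t mii_ctl_for_speed(int speed)
{
	uint32_t wdata = 0x0100;     /* base: bit 8 set */

	switch (speed) {
	case SP_100M:
		wdata |= 1u << 13;   /* -> 0x2100 */
		break;
	case SP_1G:
		wdata |= 1u << 6;    /* -> 0x0140 */
		break;
	default:                     /* 10M -> 0x0100 */
		break;
	}
	return wdata;
}
```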
+
+static s32
+txgbe_set_link_to_sfi(struct txgbe_hw *hw,
+			       u32 speed)
+{
+	u32 i;
+	s32 err = 0;
+	u32 value = 0;
+
+	/* Set the module link speed */
+	hw->mac.set_rate_select_speed(hw, speed);
+	/* 1. Wait xpcs power-up good */
+	for (i = 0; i < 100; i++) {
+		if ((rd32_epcs(hw, VR_XS_OR_PCS_MMD_DIGI_STATUS) &
+			VR_XS_OR_PCS_MMD_DIGI_STATUS_PSEQ_MASK) ==
+			VR_XS_OR_PCS_MMD_DIGI_STATUS_PSEQ_POWER_GOOD)
+			break;
+		msleep(10);
+	}
+	if (i == 100) {
+		err = TXGBE_ERR_XPCS_POWER_UP_FAILED;
+		goto out;
+	}
+
+	wr32m(hw, TXGBE_MACTXCFG, TXGBE_MACTXCFG_TE,
+			~TXGBE_MACTXCFG_TE);
+
+	/* 2. Disable xpcs AN-73 */
+	wr32_epcs(hw, SR_AN_CTRL, 0x0);
+
+	/* Disable PHY MPLLA for eth mode change(after ECO) */
+	wr32_ephy(hw, 0x4, 0x243A);
+	txgbe_flush(hw);
+	msleep(1);
+	/* Set the eth change_mode bit first in mis_rst register
+	 * for corresponding LAN port
+	 */
+	wr32(hw, TXGBE_RST, TXGBE_RST_ETH(hw->bus.lan_id));
+
+	if (speed == TXGBE_LINK_SPEED_10GB_FULL) {
+		/* Set SR PCS Control2 Register Bits[1:0] = 2'b00
+		 * PCS_TYPE_SEL: KR
+		 */
+		wr32_epcs(hw, SR_XS_PCS_CTRL2, 0);
+		value = rd32_epcs(hw, SR_PMA_CTRL1);
+		value = value | 0x2000;
+		wr32_epcs(hw, SR_PMA_CTRL1, value);
+		/* Set VR_XS_PMA_Gen5_12G_MPLLA_CTRL0 Register Bit[7:0] = 8'd33
+		 * MPLLA_MULTIPLIER
+		 */
+		wr32_epcs(hw, TXGBE_PHY_MPLLA_CTL0, 0x0021);
+		/* 3. Set VR_XS_PMA_Gen5_12G_MPLLA_CTRL3 Register
+		 * Bit[10:0](MPLLA_BANDWIDTH) = 11'd0
+		 */
+		wr32_epcs(hw, TXGBE_PHY_MPLLA_CTL3, 0);
+		value = rd32_epcs(hw, TXGBE_PHY_TX_GENCTRL1);
+		value = (value & ~0x700) | 0x500;
+		wr32_epcs(hw, TXGBE_PHY_TX_GENCTRL1, value);
+		/* 4. Set VR_XS_PMA_Gen5_12G_MISC_CTRL0 Register
+		 * Bit[12:8](RX_VREF_CTRL) = 5'hF
+		 */
+		wr32_epcs(hw, TXGBE_PHY_MISC_CTL0, 0xCF00);
+		/* Set VR_XS_PMA_Gen5_12G_VCO_CAL_LD0 Register
+		 * Bit[12:0] = 13'd1353  //VCO_LD_VAL_0
+		 */
+		wr32_epcs(hw, TXGBE_PHY_VCO_CAL_LD0, 0x0549);
+		/* Set VR_XS_PMA_Gen5_12G_VCO_CAL_REF0 Register
+		 * Bit[5:0] = 6'd41  //VCO_REF_LD_0
+		 */
+		wr32_epcs(hw, TXGBE_PHY_VCO_CAL_REF0, 0x0029);
+		/* Set VR_XS_PMA_Gen5_12G_TX_RATE_CTRL Register
+		 * Bit[2:0] = 3'b000  //TX0_RATE
+		 */
+		wr32_epcs(hw, TXGBE_PHY_TX_RATE_CTL, 0);
+		/* Set VR_XS_PMA_Gen5_12G_RX_RATE_CTRL Register
+		 * Bit[2:0] = 3'b000  //RX0_RATE
+		 */
+		wr32_epcs(hw, TXGBE_PHY_RX_RATE_CTL, 0);
+		/* Set VR_XS_PMA_Gen5_12G_TX_GENCTRL2 Register Bit[9:8] = 2'b11
+		 * TX0_WIDTH: 20bits
+		 */
+		wr32_epcs(hw, TXGBE_PHY_TX_GEN_CTL2, 0x0300);
+		/* Set VR_XS_PMA_Gen5_12G_RX_GENCTRL2 Register Bit[9:8] = 2'b11
+		 * RX0_WIDTH: 20bits
+		 */
+		wr32_epcs(hw, TXGBE_PHY_RX_GEN_CTL2, 0x0300);
+		/* Set VR_XS_PMA_Gen5_12G_MPLLA_CTRL2 Register
+		 * Bit[10:8] = 3'b110
+		 * MPLLA_DIV16P5_CLK_EN=1
+		 * MPLLA_DIV10_CLK_EN=1
+		 * MPLLA_DIV8_CLK_EN=0
+		 */
+		wr32_epcs(hw, TXGBE_PHY_MPLLA_CTL2, 0x0600);
+		/* 5. Set VR_XS_PMA_Gen5_12G_TX_EQ_CTRL0 Register
+		 * Bit[13:8](TX_EQ_MAIN) = 6'd24, Bit[5:0](TX_EQ_PRE) = 6'd4
+		 */
+		value = rd32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0);
+		value = (value & ~0x3F3F) | (24 << 8) | 4;
+		wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0, value);
+		/* 6. Set VR_XS_PMA_Gen5_12G_TX_EQ_CTRL1 Register
+		 * Bit[6](TX_EQ_OVR_RIDE) = 1'b1, Bit[5:0](TX_EQ_POST) = 6'd16
+		 */
+		value = rd32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1);
+		value = (value & ~0x7F) | 16 | (1 << 6);
+		wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1, value);
+		if (hw->phy.sfp_type == txgbe_sfp_type_da_cu_core0 ||
+			hw->phy.sfp_type == txgbe_sfp_type_da_cu_core1) {
+			/* 7. Set VR_XS_PMA_Gen5_12G_RX_EQ_CTRL0 Register
+			 * Bit[15:8](VGA1/2_GAIN_0) = 8'h77
+			 * Bit[7:5](CTLE_POLE_0) = 3'h2
+			 * Bit[4:0](CTLE_BOOST_0) = 4'hF
+			 */
+			wr32_epcs(hw, TXGBE_PHY_RX_EQ_CTL0, 0x774F);
+
+		} else {
+			/* 7. Set VR_XS_PMA_Gen5_12G_RX_EQ_CTRL0 Register
+			 * Bit[15:8](VGA1/2_GAIN_0) = 8'h00
+			 * Bit[7:5](CTLE_POLE_0) = 3'h2
+			 * Bit[4:0](CTLE_BOOST_0) = 5'h5
+			 */
+			value = rd32_epcs(hw, TXGBE_PHY_RX_EQ_CTL0);
+			value = (value & ~0xFFFF) | (2 << 5) | 0x05;
+			wr32_epcs(hw, TXGBE_PHY_RX_EQ_CTL0, value);
+		}
+		value = rd32_epcs(hw, TXGBE_PHY_RX_EQ_ATT_LVL0);
+		value = (value & ~0x7) | 0x0;
+		wr32_epcs(hw, TXGBE_PHY_RX_EQ_ATT_LVL0, value);
+
+		if (hw->phy.sfp_type == txgbe_sfp_type_da_cu_core0 ||
+			hw->phy.sfp_type == txgbe_sfp_type_da_cu_core1) {
+			/* 8. Set VR_XS_PMA_Gen5_12G_DFE_TAP_CTRL0 Register
+			 * Bit[7:0](DFE_TAP1_0) = 8'd20
+			 */
+			wr32_epcs(hw, TXGBE_PHY_DFE_TAP_CTL0, 0x0014);
+			value = rd32_epcs(hw, TXGBE_PHY_AFE_DFE_ENABLE);
+			value = (value & ~0x11) | 0x11;
+			wr32_epcs(hw, TXGBE_PHY_AFE_DFE_ENABLE, value);
+		} else {
+			/* 8. Set VR_XS_PMA_Gen5_12G_DFE_TAP_CTRL0 Register
+			 * Bit[7:0](DFE_TAP1_0) = 8'd190
+			 */
+			wr32_epcs(hw, TXGBE_PHY_DFE_TAP_CTL0, 0xBE);
+			/* 9. Set VR_MII_Gen5_12G_AFE_DFE_EN_CTRL Register
+			 * Bit[4](DFE_EN_0) = 1'b0, Bit[0](AFE_EN_0) = 1'b0
+			 */
+			value = rd32_epcs(hw, TXGBE_PHY_AFE_DFE_ENABLE);
+			value = (value & ~0x11) | 0x0;
+			wr32_epcs(hw, TXGBE_PHY_AFE_DFE_ENABLE, value);
+		}
+		value = rd32_epcs(hw, TXGBE_PHY_RX_EQ_CTL);
+		value = value & ~0x1;
+		wr32_epcs(hw, TXGBE_PHY_RX_EQ_CTL, value);
+	} else {
+		/* Set SR PCS Control2 Register Bits[1:0] = 2'b01
+		 * PCS_TYPE_SEL: non KR
+		 */
+		wr32_epcs(hw, SR_XS_PCS_CTRL2, 0x1);
+		/* Set SR PMA MMD Control1 Register Bit[13] = 1'b0
+		 * SS13: 1G speed
+		 */
+		wr32_epcs(hw, SR_PMA_CTRL1, 0x0000);
+		/* Set SR MII MMD Control Register to corresponding speed */
+		wr32_epcs(hw, SR_MII_MMD_CTL, 0x0140);
+
+		value = rd32_epcs(hw, TXGBE_PHY_TX_GENCTRL1);
+		value = (value & ~0x710) | 0x500;
+		wr32_epcs(hw, TXGBE_PHY_TX_GENCTRL1, value);
+		/* 4. Set VR_XS_PMA_Gen5_12G_MISC_CTRL0 Register
+		 * Bit[12:8](RX_VREF_CTRL) = 5'hF
+		 */
+		wr32_epcs(hw, TXGBE_PHY_MISC_CTL0, 0xCF00);
+		/* 5. Set VR_XS_PMA_Gen5_12G_TX_EQ_CTRL0 Register
+		 * Bit[13:8](TX_EQ_MAIN) = 6'd24, Bit[5:0](TX_EQ_PRE) = 6'd4
+		 */
+		value = rd32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0);
+		value = (value & ~0x3F3F) | (24 << 8) | 4;
+		wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL0, value);
+		/* 6. Set VR_XS_PMA_Gen5_12G_TX_EQ_CTRL1 Register Bit[6]
+		 * (TX_EQ_OVR_RIDE) = 1'b1, Bit[5:0](TX_EQ_POST) = 6'd16
+		 */
+		value = rd32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1);
+		value = (value & ~0x7F) | 16 | (1 << 6);
+		wr32_epcs(hw, TXGBE_PHY_TX_EQ_CTL1, value);
+		if (hw->phy.sfp_type == txgbe_sfp_type_da_cu_core0 ||
+			hw->phy.sfp_type == txgbe_sfp_type_da_cu_core1) {
+			wr32_epcs(hw, TXGBE_PHY_RX_EQ_CTL0, 0x774F);
+		} else {
+			/* 7. Set VR_XS_PMA_Gen5_12G_RX_EQ_CTRL0 Register
+			 * Bit[15:8](VGA1/2_GAIN_0) = 8'h77
+			 * Bit[7:5](CTLE_POLE_0) = 3'h0
+			 * Bit[4:0](CTLE_BOOST_0) = 5'h6
+			 */
+			value = rd32_epcs(hw, TXGBE_PHY_RX_EQ_CTL0);
+			value = (value & ~0xFFFF) | 0x7706;
+			wr32_epcs(hw, TXGBE_PHY_RX_EQ_CTL0, value);
+		}
+		value = rd32_epcs(hw, TXGBE_PHY_RX_EQ_ATT_LVL0);
+		value = (value & ~0x7) | 0x0;
+		wr32_epcs(hw, TXGBE_PHY_RX_EQ_ATT_LVL0, value);
+		/* 8. Set VR_XS_PMA_Gen5_12G_DFE_TAP_CTRL0 Register
+		 * Bit[7:0](DFE_TAP1_0) = 8'd0
+		 */
+		wr32_epcs(hw, TXGBE_PHY_DFE_TAP_CTL0, 0x0);
+		/* 9. Set VR_MII_Gen5_12G_AFE_DFE_EN_CTRL Register
+		 * Bit[4](DFE_EN_0) = 1'b0, Bit[0](AFE_EN_0) = 1'b0
+		 */
+		value = rd32_epcs(hw, TXGBE_PHY_RX_GEN_CTL3);
+		value = (value & ~0x7) | 0x4;
+		wr32_epcs(hw, TXGBE_PHY_RX_GEN_CTL3, value);
+		wr32_epcs(hw, TXGBE_PHY_MPLLA_CTL0, 0x0020);
+		wr32_epcs(hw, TXGBE_PHY_MPLLA_CTL3, 0x0046);
+		wr32_epcs(hw, TXGBE_PHY_VCO_CAL_LD0, 0x0540);
+		wr32_epcs(hw, TXGBE_PHY_VCO_CAL_REF0, 0x002A);
+		wr32_epcs(hw, TXGBE_PHY_AFE_DFE_ENABLE, 0x0);
+		wr32_epcs(hw, TXGBE_PHY_RX_EQ_CTL, 0x0010);
+		wr32_epcs(hw, TXGBE_PHY_TX_RATE_CTL, 0x0003);
+		wr32_epcs(hw, TXGBE_PHY_RX_RATE_CTL, 0x0003);
+		wr32_epcs(hw, TXGBE_PHY_TX_GEN_CTL2, 0x0100);
+		wr32_epcs(hw, TXGBE_PHY_RX_GEN_CTL2, 0x0100);
+		wr32_epcs(hw, TXGBE_PHY_MPLLA_CTL2, 0x0200);
+		wr32_epcs(hw, SR_MII_MMD_AN_CTL, 0x0100);
+	}
+	/* 10. Initialize the mode by setting VR XS or PCS MMD Digital Control1
+	 * Register Bit[15](VR_RST)
+	 */
+	wr32_epcs(hw, VR_XS_OR_PCS_MMD_DIGI_CTL1, 0xA000);
+
+	/* Wait phy initialization done */
+	for (i = 0; i < 100; i++) {
+		if ((rd32_epcs(hw, VR_XS_OR_PCS_MMD_DIGI_CTL1) &
+			VR_XS_OR_PCS_MMD_DIGI_CTL1_VR_RST) == 0)
+			break;
+		msleep(100);
+	}
+	if (i == 100) {
+		err = TXGBE_ERR_PHY_INIT_NOT_DONE;
+		goto out;
+	}
+
+out:
+	return err;
+}
+
+/**
+ *  txgbe_autoc_read - Hides MAC differences needed for AUTOC read
+ *  @hw: pointer to hardware structure
+ */
+u64 txgbe_autoc_read(struct txgbe_hw *hw)
+{
+	u64 autoc = 0;
+	u32 sr_pcs_ctl;
+	u32 sr_pma_ctl1;
+	u32 sr_an_ctl;
+	u32 sr_an_adv_reg2;
+
+	if (hw->phy.multispeed_fiber) {
+		autoc |= TXGBE_AUTOC_LMS_10Gs;
+	} else if (hw->device_id == TXGBE_DEV_ID_RAPTOR_SFP ||
+		   hw->device_id == TXGBE_DEV_ID_WX1820_SFP) {
+		autoc |= TXGBE_AUTOC_LMS_10Gs |
+			 TXGBE_AUTOC_10Gs_SFI;
+	} else if (hw->device_id == TXGBE_DEV_ID_RAPTOR_QSFP) {
+		autoc = 0; /*TBD*/
+	} else if (hw->device_id == TXGBE_DEV_ID_RAPTOR_XAUI) {
+		autoc |= TXGBE_AUTOC_LMS_10G_LINK_NO_AN |
+			 TXGBE_AUTOC_10G_XAUI;
+		hw->phy.link_mode = TXGBE_PHYSICAL_LAYER_10GBASE_T;
+	} else if (hw->device_id == TXGBE_DEV_ID_RAPTOR_SGMII) {
+		autoc |= TXGBE_AUTOC_LMS_SGMII_1G_100M;
+		hw->phy.link_mode = TXGBE_PHYSICAL_LAYER_1000BASE_T |
+				TXGBE_PHYSICAL_LAYER_100BASE_TX;
+	}
+
+	if (hw->device_id != TXGBE_DEV_ID_RAPTOR_SGMII)
+		return autoc;
+
+	sr_pcs_ctl = rd32_epcs(hw, SR_XS_PCS_CTRL2);
+	sr_pma_ctl1 = rd32_epcs(hw, SR_PMA_CTRL1);
+	sr_an_ctl = rd32_epcs(hw, SR_AN_CTRL);
+	sr_an_adv_reg2 = rd32_epcs(hw, SR_AN_MMD_ADV_REG2);
+
+	if ((sr_pcs_ctl & SR_PCS_CTRL2_TYPE_SEL) == SR_PCS_CTRL2_TYPE_SEL_X &&
+	    (sr_pma_ctl1 & SR_PMA_CTRL1_SS13) == SR_PMA_CTRL1_SS13_KX &&
+	    (sr_an_ctl & SR_AN_CTRL_AN_EN) == 0) {
+		/* 1G or KX - no backplane auto-negotiation */
+		autoc |= TXGBE_AUTOC_LMS_1G_LINK_NO_AN |
+			 TXGBE_AUTOC_1G_KX;
+		hw->phy.link_mode = TXGBE_PHYSICAL_LAYER_1000BASE_KX;
+	} else if ((sr_pcs_ctl & SR_PCS_CTRL2_TYPE_SEL) ==
+		SR_PCS_CTRL2_TYPE_SEL_X &&
+		(sr_pma_ctl1 & SR_PMA_CTRL1_SS13) == SR_PMA_CTRL1_SS13_KX4 &&
+		(sr_an_ctl & SR_AN_CTRL_AN_EN) == 0) {
+		autoc |= TXGBE_AUTOC_LMS_10Gs |
+			 TXGBE_AUTOC_10G_KX4;
+		hw->phy.link_mode = TXGBE_PHYSICAL_LAYER_10GBASE_KX4;
+	} else if ((sr_pcs_ctl & SR_PCS_CTRL2_TYPE_SEL) ==
+		SR_PCS_CTRL2_TYPE_SEL_R &&
+		(sr_an_ctl & SR_AN_CTRL_AN_EN) == 0) {
+		/* 10 GbE serial link (KR - no backplane auto-negotiation) */
+		autoc |= TXGBE_AUTOC_LMS_10Gs |
+			 TXGBE_AUTOC_10Gs_KR;
+		hw->phy.link_mode = TXGBE_PHYSICAL_LAYER_10GBASE_KR;
+	} else if ((sr_an_ctl & SR_AN_CTRL_AN_EN)) {
+		/* KX/KX4/KR backplane auto-negotiation enable */
+		if (sr_an_adv_reg2 & SR_AN_MMD_ADV_REG2_BP_TYPE_KR) {
+			autoc |= TXGBE_AUTOC_10G_KR;
+		}
+		if (sr_an_adv_reg2 & SR_AN_MMD_ADV_REG2_BP_TYPE_KX4) {
+			autoc |= TXGBE_AUTOC_10G_KX4;
+		}
+		if (sr_an_adv_reg2 & SR_AN_MMD_ADV_REG2_BP_TYPE_KX) {
+			autoc |= TXGBE_AUTOC_1G_KX;
+		}
+		autoc |= TXGBE_AUTOC_LMS_KX4_KX_KR;
+		hw->phy.link_mode = TXGBE_PHYSICAL_LAYER_10GBASE_KR |
+				TXGBE_PHYSICAL_LAYER_10GBASE_KX4 |
+				TXGBE_PHYSICAL_LAYER_1000BASE_KX;
+	}
+
+	return autoc;
+}
+
+/**
+ * txgbe_autoc_write - Hides MAC differences needed for AUTOC write
+ * @hw: pointer to hardware structure
+ * @autoc: value to write to AUTOC
+ */
+void txgbe_autoc_write(struct txgbe_hw *hw, u64 autoc)
+{
+	bool autoneg;
+	u32 speed;
+	u32 mactxcfg = 0;
+
+	speed = TXGBE_AUTOC_SPEED(autoc);
+	autoc &= ~TXGBE_AUTOC_SPEED_MASK;
+	autoneg = (autoc & TXGBE_AUTOC_AUTONEG ? true : false);
+	autoc &= ~TXGBE_AUTOC_AUTONEG;
+
+	if (hw->device_id == TXGBE_DEV_ID_RAPTOR_KR_KX_KX4) {
+		if (!autoneg) {
+			switch (hw->phy.link_mode) {
+			case TXGBE_PHYSICAL_LAYER_10GBASE_KR:
+				txgbe_set_link_to_kr(hw, autoneg);
+				break;
+			case TXGBE_PHYSICAL_LAYER_10GBASE_KX4:
+				txgbe_set_link_to_kx4(hw, autoneg);
+				break;
+			case TXGBE_PHYSICAL_LAYER_1000BASE_KX:
+				txgbe_set_link_to_kx(hw, speed, autoneg);
+				break;
+			default:
+				return;
+			}
+		}
+	} else if (hw->device_id == TXGBE_DEV_ID_RAPTOR_XAUI ||
+		   hw->device_id == TXGBE_DEV_ID_RAPTOR_SGMII) {
+		if (speed == TXGBE_LINK_SPEED_10GB_FULL) {
+			txgbe_set_link_to_kx4(hw, autoneg);
+		} else {
+			txgbe_set_link_to_kx(hw, speed, 0);
+			txgbe_set_sgmii_an37_ability(hw);
+		}
+	} else if (hw->device_id == TXGBE_DEV_ID_RAPTOR_SFP ||
+		   hw->device_id == TXGBE_DEV_ID_WX1820_SFP) {
+		txgbe_set_link_to_sfi(hw, speed);
+	}
+
+	if (speed == TXGBE_LINK_SPEED_10GB_FULL) {
+		mactxcfg = TXGBE_MACTXCFG_SPEED_10G;
+	} else if (speed == TXGBE_LINK_SPEED_1GB_FULL) {
+		mactxcfg = TXGBE_MACTXCFG_SPEED_1G;
+	}
+	/* enable mac transmitter */
+	wr32m(hw, TXGBE_MACTXCFG, TXGBE_MACTXCFG_SPEED_MASK, mactxcfg);
+}
+
diff --git a/drivers/net/txgbe/base/txgbe_phy.h b/drivers/net/txgbe/base/txgbe_phy.h
index 56959b837..fbef67e78 100644
--- a/drivers/net/txgbe/base/txgbe_phy.h
+++ b/drivers/net/txgbe/base/txgbe_phy.h
@@ -366,5 +366,7 @@ s32 txgbe_read_i2c_eeprom(struct txgbe_hw *hw, u8 byte_offset,
 				  u8 *eeprom_data);
 s32 txgbe_write_i2c_eeprom(struct txgbe_hw *hw, u8 byte_offset,
 				   u8 eeprom_data);
+u64 txgbe_autoc_read(struct txgbe_hw *hw);
+void txgbe_autoc_write(struct txgbe_hw *hw, u64 value);
 
 #endif /* _TXGBE_PHY_H_ */
diff --git a/drivers/net/txgbe/base/txgbe_type.h b/drivers/net/txgbe/base/txgbe_type.h
index b94217b8b..5fd51fece 100644
--- a/drivers/net/txgbe/base/txgbe_type.h
+++ b/drivers/net/txgbe/base/txgbe_type.h
@@ -14,6 +14,26 @@
 #include "txgbe_osdep.h"
 #include "txgbe_devids.h"
 
+/* Physical layer type */
+#define TXGBE_PHYSICAL_LAYER_UNKNOWN		0
+#define TXGBE_PHYSICAL_LAYER_10GBASE_T		0x00001
+#define TXGBE_PHYSICAL_LAYER_1000BASE_T		0x00002
+#define TXGBE_PHYSICAL_LAYER_100BASE_TX		0x00004
+#define TXGBE_PHYSICAL_LAYER_SFP_PLUS_CU	0x00008
+#define TXGBE_PHYSICAL_LAYER_10GBASE_LR		0x00010
+#define TXGBE_PHYSICAL_LAYER_10GBASE_LRM	0x00020
+#define TXGBE_PHYSICAL_LAYER_10GBASE_SR		0x00040
+#define TXGBE_PHYSICAL_LAYER_10GBASE_KX4	0x00080
+#define TXGBE_PHYSICAL_LAYER_10GBASE_CX4	0x00100
+#define TXGBE_PHYSICAL_LAYER_1000BASE_KX	0x00200
+#define TXGBE_PHYSICAL_LAYER_1000BASE_BX	0x00400
+#define TXGBE_PHYSICAL_LAYER_10GBASE_KR		0x00800
+#define TXGBE_PHYSICAL_LAYER_10GBASE_XAUI	0x01000
+#define TXGBE_PHYSICAL_LAYER_SFP_ACTIVE_DA	0x02000
+#define TXGBE_PHYSICAL_LAYER_1000BASE_SX	0x04000
+#define TXGBE_PHYSICAL_LAYER_10BASE_T		0x08000
+#define TXGBE_PHYSICAL_LAYER_2500BASE_KX	0x10000
+
 enum txgbe_eeprom_type {
 	txgbe_eeprom_unknown = 0,
 	txgbe_eeprom_spi,
@@ -390,6 +410,7 @@ struct txgbe_phy_info {
 	bool multispeed_fiber;
 	bool qsfp_shared_i2c_bus;
 	u32 nw_mng_if_sel;
+	u32 link_mode;
 };
 
 struct txgbe_mbx_info {
-- 
2.18.4




^ permalink raw reply	[flat|nested] 49+ messages in thread
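The backplane auto-negotiation branch of txgbe_autoc_read() folds each advertised link type from the AN advertisement register into a capability bit of the AUTOC word, then forces the combined KX4/KX/KR link-mode select. That mapping can be sketched in isolation; note the bit values below are hypothetical placeholders chosen for illustration, not the real TXGBE_AUTOC_* / SR_AN_MMD_ADV_REG2_* encodings from the driver headers.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical bit positions for illustration only -- the real
 * encodings live in txgbe_regs.h / txgbe_phy.h. */
#define ADV_BP_TYPE_KX       0x1u
#define ADV_BP_TYPE_KX4      0x2u
#define ADV_BP_TYPE_KR       0x4u

#define AUTOC_1G_KX          0x010u
#define AUTOC_10G_KX4        0x020u
#define AUTOC_10G_KR         0x040u
#define AUTOC_LMS_KX4_KX_KR  0x100u

/* Mirror the AN-enabled branch of txgbe_autoc_read(): each backplane
 * type advertised in SR_AN_MMD_ADV_REG2 sets the matching AUTOC
 * capability bit, and the link-mode select is always the combined
 * KX4/KX/KR mode. */
static uint32_t autoc_from_an_adv(uint32_t adv_reg2)
{
	uint32_t autoc = 0;

	if (adv_reg2 & ADV_BP_TYPE_KR)
		autoc |= AUTOC_10G_KR;
	if (adv_reg2 & ADV_BP_TYPE_KX4)
		autoc |= AUTOC_10G_KX4;
	if (adv_reg2 & ADV_BP_TYPE_KX)
		autoc |= AUTOC_1G_KX;

	return autoc | AUTOC_LMS_KX4_KX_KR;
}
```

With nothing advertised the function still reports the KX4/KX/KR link-mode select, matching the patch, where TXGBE_AUTOC_LMS_KX4_KX_KR is OR-ed in unconditionally once the AN-enable bit is seen.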

* [dpdk-dev] [PATCH v1 17/42] net/txgbe: support device LED on and off
  2020-09-01 11:50 [dpdk-dev] [PATCH v1 01/42] net/txgbe: add build and doc infrastructure Jiawen Wu
                   ` (14 preceding siblings ...)
  2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 16/42] net/txgbe: add autoc read and write Jiawen Wu
@ 2020-09-01 11:50 ` Jiawen Wu
  2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 18/42] net/txgbe: add rx and tx init Jiawen Wu
                   ` (25 subsequent siblings)
  41 siblings, 0 replies; 49+ messages in thread
From: Jiawen Wu @ 2020-09-01 11:50 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Support device LED on and off.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/base/txgbe_hw.c | 46 +++++++++++++++++++++++++++++++
 drivers/net/txgbe/base/txgbe_hw.h |  3 ++
 drivers/net/txgbe/txgbe_ethdev.c  | 23 ++++++++++++++++
 3 files changed, 72 insertions(+)

diff --git a/drivers/net/txgbe/base/txgbe_hw.c b/drivers/net/txgbe/base/txgbe_hw.c
index 37f55c1fc..13f79741a 100644
--- a/drivers/net/txgbe/base/txgbe_hw.c
+++ b/drivers/net/txgbe/base/txgbe_hw.c
@@ -161,6 +161,52 @@ s32 txgbe_stop_hw(struct txgbe_hw *hw)
 	return 0;
 }
 
+/**
+ *  txgbe_led_on - Turns on the software controllable LEDs.
+ *  @hw: pointer to hardware structure
+ *  @index: led number to turn on
+ **/
+s32 txgbe_led_on(struct txgbe_hw *hw, u32 index)
+{
+	u32 led_reg = rd32(hw, TXGBE_LEDCTL);
+
+	DEBUGFUNC("txgbe_led_on");
+
+	if (index > 4)
+		return TXGBE_ERR_PARAM;
+
+	/* To turn on the LED, set mode to ON. */
+	led_reg |= TXGBE_LEDCTL_SEL(index);
+	led_reg |= TXGBE_LEDCTL_OD(index);
+	wr32(hw, TXGBE_LEDCTL, led_reg);
+	txgbe_flush(hw);
+
+	return 0;
+}
+
+/**
+ *  txgbe_led_off - Turns off the software controllable LEDs.
+ *  @hw: pointer to hardware structure
+ *  @index: led number to turn off
+ **/
+s32 txgbe_led_off(struct txgbe_hw *hw, u32 index)
+{
+	u32 led_reg = rd32(hw, TXGBE_LEDCTL);
+
+	DEBUGFUNC("txgbe_led_off");
+
+	if (index > 4)
+		return TXGBE_ERR_PARAM;
+
+	/* To turn off the LED, set mode to OFF. */
+	led_reg &= ~(TXGBE_LEDCTL_SEL(index));
+	led_reg &= ~(TXGBE_LEDCTL_OD(index));
+	wr32(hw, TXGBE_LEDCTL, led_reg);
+	txgbe_flush(hw);
+
+	return 0;
+}
+
 /**
  *  txgbe_validate_mac_addr - Validate MAC address
  *  @mac_addr: pointer to MAC address.
diff --git a/drivers/net/txgbe/base/txgbe_hw.h b/drivers/net/txgbe/base/txgbe_hw.h
index d361f6590..f57c26bee 100644
--- a/drivers/net/txgbe/base/txgbe_hw.h
+++ b/drivers/net/txgbe/base/txgbe_hw.h
@@ -13,6 +13,9 @@ s32 txgbe_stop_hw(struct txgbe_hw *hw);
 s32 txgbe_start_hw_gen2(struct txgbe_hw *hw);
 s32 txgbe_start_hw_raptor(struct txgbe_hw *hw);
 
+s32 txgbe_led_on(struct txgbe_hw *hw, u32 index);
+s32 txgbe_led_off(struct txgbe_hw *hw, u32 index);
+
 s32 txgbe_set_rar(struct txgbe_hw *hw, u32 index, u8 *addr, u32 vmdq,
 			  u32 enable_addr);
 
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index 16008ea4e..1803ace01 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -590,6 +590,8 @@ txgbe_dev_start(struct rte_eth_dev *dev)
 	 */
 	txgbe_dev_link_update(dev, 0);
 
+	wr32m(hw, TXGBE_LEDCTL, 0xFFFFFFFF, TXGBE_LEDCTL_OD_MASK);
+
 	return 0;
 
 error:
@@ -654,6 +656,8 @@ txgbe_dev_stop(struct rte_eth_dev *dev)
 		intr_handle->intr_vec = NULL;
 	}
 
+	wr32m(hw, TXGBE_LEDCTL, 0xFFFFFFFF, TXGBE_LEDCTL_SEL_MASK);
+
 	hw->adapter_stopped = true;
 }
 
@@ -1196,6 +1200,23 @@ txgbe_dev_interrupt_handler(void *param)
 	txgbe_dev_interrupt_action(dev, dev->intr_handle);
 }
 
+static int
+txgbe_dev_led_on(struct rte_eth_dev *dev)
+{
+	struct txgbe_hw *hw;
+
+	hw = TXGBE_DEV_HW(dev);
+	return txgbe_led_on(hw, 4) == 0 ? 0 : -ENOTSUP;
+}
+
+static int
+txgbe_dev_led_off(struct rte_eth_dev *dev)
+{
+	struct txgbe_hw *hw;
+
+	hw = TXGBE_DEV_HW(dev);
+	return txgbe_led_off(hw, 4) == 0 ? 0 : -ENOTSUP;
+}
 /**
  * set the IVAR registers, mapping interrupt causes to vectors
  * @param hw
@@ -1295,6 +1316,8 @@ static const struct eth_dev_ops txgbe_eth_dev_ops = {
 	.link_update                = txgbe_dev_link_update,
 	.stats_get                  = txgbe_dev_stats_get,
 	.stats_reset                = txgbe_dev_stats_reset,
+	.dev_led_on                 = txgbe_dev_led_on,
+	.dev_led_off                = txgbe_dev_led_off,
 };
 
 RTE_PMD_REGISTER_PCI(net_txgbe, rte_txgbe_pmd);
-- 
2.18.4




^ permalink raw reply	[flat|nested] 49+ messages in thread
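txgbe_led_on()/txgbe_led_off() above are plain read-modify-write sequences on LEDCTL: set (or clear) the select and output-drive bits for one LED index without disturbing the other LEDs. A pure-computation sketch of that pattern, with a hypothetical register layout (one SEL bit per index in the low byte, one OD bit per index in the second byte -- the real TXGBE_LEDCTL_SEL()/TXGBE_LEDCTL_OD() macros may pack the fields differently):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical LEDCTL layout for illustration only. */
#define LEDCTL_SEL(i)	(1u << (i))
#define LEDCTL_OD(i)	(1u << ((i) + 8))

/* Read-modify-write as in txgbe_led_on(): set both the select and
 * output-drive bits for the chosen index only. */
static uint32_t led_on(uint32_t led_reg, unsigned int index)
{
	led_reg |= LEDCTL_SEL(index);
	led_reg |= LEDCTL_OD(index);
	return led_reg;
}

/* Inverse operation, as in txgbe_led_off(). */
static uint32_t led_off(uint32_t led_reg, unsigned int index)
{
	led_reg &= ~LEDCTL_SEL(index);
	led_reg &= ~LEDCTL_OD(index);
	return led_reg;
}
```

Because only the chosen index's bits are touched, turning LED 4 off after turning it on restores exactly the original register value, which is why the driver can round-trip through rd32()/wr32() safely.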

* [dpdk-dev] [PATCH v1 18/42] net/txgbe: add rx and tx init
  2020-09-01 11:50 [dpdk-dev] [PATCH v1 01/42] net/txgbe: add build and doc infrastructure Jiawen Wu
                   ` (15 preceding siblings ...)
  2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 17/42] net/txgbe: support device LED on and off Jiawen Wu
@ 2020-09-01 11:50 ` Jiawen Wu
  2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 19/42] net/txgbe: add RX and TX start Jiawen Wu
                   ` (24 subsequent siblings)
  41 siblings, 0 replies; 49+ messages in thread
From: Jiawen Wu @ 2020-09-01 11:50 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Add receive and transmit unit initialization.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/base/txgbe_type.h |   3 +
 drivers/net/txgbe/txgbe_ethdev.c    |   3 +
 drivers/net/txgbe/txgbe_ethdev.h    |  28 +++
 drivers/net/txgbe/txgbe_rxtx.c      | 330 +++++++++++++++++++++++++++-
 drivers/net/txgbe/txgbe_rxtx.h      |  23 ++
 5 files changed, 381 insertions(+), 6 deletions(-)

diff --git a/drivers/net/txgbe/base/txgbe_type.h b/drivers/net/txgbe/base/txgbe_type.h
index 5fd51fece..6229d8acc 100644
--- a/drivers/net/txgbe/base/txgbe_type.h
+++ b/drivers/net/txgbe/base/txgbe_type.h
@@ -8,6 +8,9 @@
 #define TXGBE_LINK_UP_TIME	90 /* 9.0 Seconds */
 #define TXGBE_AUTO_NEG_TIME	45 /* 4.5 Seconds */
 
+#define TXGBE_FRAME_SIZE_MAX	(9728) /* Maximum frame size, +FCS */
+#define TXGBE_FRAME_SIZE_DFT	(1518) /* Default frame size, +FCS */
+
 #define TXGBE_ALIGN				128 /* as intel did */
 
 #include "txgbe_status.h"
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index 1803ace01..abc457109 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -124,6 +124,9 @@ eth_txgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
 	PMD_INIT_FUNC_TRACE();
 
 	eth_dev->dev_ops = &txgbe_eth_dev_ops;
+	eth_dev->rx_pkt_burst = &txgbe_recv_pkts;
+	eth_dev->tx_pkt_burst = &txgbe_xmit_pkts;
+	eth_dev->tx_pkt_prepare = &txgbe_prep_pkts;
 
 	/*
 	 * For secondary processes, we don't initialise any further as primary
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index ff2b36f02..6739b580c 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -19,6 +19,12 @@
 #define TXGBE_FLAG_MACSEC           (uint32_t)(1 << 3)
 #define TXGBE_FLAG_NEED_LINK_CONFIG (uint32_t)(1 << 4)
 
+/*
+ * Defines that were not part of txgbe_type.h as they are not used by the
+ * FreeBSD driver.
+ */
+#define TXGBE_VLAN_TAG_SIZE 4
+
 #define TXGBE_QUEUE_ITR_INTERVAL_DEFAULT	500 /* 500us */
 
 #define TXGBE_MISC_VEC_ID               RTE_INTR_VEC_ZERO_OFFSET
@@ -46,6 +52,7 @@ struct txgbe_adapter {
 	struct txgbe_hw_stats       stats;
 	struct txgbe_interrupt      intr;
 	struct txgbe_vf_info        *vfdata;
+	bool rx_bulk_alloc_allowed;
 };
 
 struct txgbe_vf_representor {
@@ -82,6 +89,27 @@ int txgbe_dev_rx_init(struct rte_eth_dev *dev);
 void txgbe_dev_tx_init(struct rte_eth_dev *dev);
 
 int txgbe_dev_rxtx_start(struct rte_eth_dev *dev);
+
+uint16_t txgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+		uint16_t nb_pkts);
+
+uint16_t txgbe_recv_pkts_bulk_alloc(void *rx_queue, struct rte_mbuf **rx_pkts,
+				    uint16_t nb_pkts);
+
+uint16_t txgbe_recv_pkts_lro_single_alloc(void *rx_queue,
+		struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
+uint16_t txgbe_recv_pkts_lro_bulk_alloc(void *rx_queue,
+		struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
+
+uint16_t txgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+		uint16_t nb_pkts);
+
+uint16_t txgbe_xmit_pkts_simple(void *tx_queue, struct rte_mbuf **tx_pkts,
+		uint16_t nb_pkts);
+
+uint16_t txgbe_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+		uint16_t nb_pkts);
+
 void txgbe_set_ivar_map(struct txgbe_hw *hw, int8_t direction,
 			       uint8_t queue, uint8_t msix_vector);
 
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index cb067d4f4..d3782f44d 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -20,6 +20,87 @@
 #include "txgbe_ethdev.h"
 #include "txgbe_rxtx.h"
 
+uint16_t
+txgbe_xmit_pkts_simple(void *tx_queue, struct rte_mbuf **tx_pkts,
+		       uint16_t nb_pkts)
+{
+	RTE_SET_USED(tx_queue);
+	RTE_SET_USED(tx_pkts);
+	RTE_SET_USED(nb_pkts);
+
+	return 0;
+}
+
+uint16_t
+txgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+		uint16_t nb_pkts)
+{
+	RTE_SET_USED(tx_queue);
+	RTE_SET_USED(tx_pkts);
+	RTE_SET_USED(nb_pkts);
+
+	return 0;
+}
+
+uint16_t
+txgbe_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+	RTE_SET_USED(tx_queue);
+	RTE_SET_USED(tx_pkts);
+	RTE_SET_USED(nb_pkts);
+
+	return 0;
+}
+
+/* split requests into chunks of size RTE_PMD_TXGBE_RX_MAX_BURST */
+uint16_t
+txgbe_recv_pkts_bulk_alloc(void *rx_queue, struct rte_mbuf **rx_pkts,
+			   uint16_t nb_pkts)
+{
+	RTE_SET_USED(rx_queue);
+	RTE_SET_USED(rx_pkts);
+	RTE_SET_USED(nb_pkts);
+
+	return 0;
+}
+
+uint16_t
+txgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+		uint16_t nb_pkts)
+{
+	RTE_SET_USED(rx_queue);
+	RTE_SET_USED(rx_pkts);
+	RTE_SET_USED(nb_pkts);
+
+	return 0;
+}
+
+static inline uint16_t
+txgbe_recv_pkts_lro(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts,
+		    bool bulk_alloc)
+{
+	RTE_SET_USED(rx_queue);
+	RTE_SET_USED(rx_pkts);
+	RTE_SET_USED(nb_pkts);
+	RTE_SET_USED(bulk_alloc);
+
+	return 0;
+}
+
+uint16_t
+txgbe_recv_pkts_lro_single_alloc(void *rx_queue, struct rte_mbuf **rx_pkts,
+				 uint16_t nb_pkts)
+{
+	return txgbe_recv_pkts_lro(rx_queue, rx_pkts, nb_pkts, false);
+}
+
+uint16_t
+txgbe_recv_pkts_lro_bulk_alloc(void *rx_queue, struct rte_mbuf **rx_pkts,
+			       uint16_t nb_pkts)
+{
+	return txgbe_recv_pkts_lro(rx_queue, rx_pkts, nb_pkts, true);
+}
+
 /* Takes an ethdev and a queue and sets up the tx function to be used based on
  * the queue parameters. Used in tx_queue_setup by primary process and then
  * in dev_init by secondary process when attaching to an existing ethdev.
@@ -27,11 +108,26 @@
 void __rte_cold
 txgbe_set_tx_function(struct rte_eth_dev *dev, struct txgbe_tx_queue *txq)
 {
-	RTE_SET_USED(dev);
-	RTE_SET_USED(txq);
+	/* Use a simple Tx queue (no offloads, no multi segs) if possible */
+	if ((txq->offloads == 0) &&
+			(txq->tx_free_thresh >= RTE_PMD_TXGBE_TX_MAX_BURST)) {
+		PMD_INIT_LOG(DEBUG, "Using simple tx code path");
+		dev->tx_pkt_burst = txgbe_xmit_pkts_simple;
+		dev->tx_pkt_prepare = NULL;
+	} else {
+		PMD_INIT_LOG(DEBUG, "Using full-featured tx code path");
+		PMD_INIT_LOG(DEBUG,
+				" - offloads = 0x%" PRIx64,
+				txq->offloads);
+		PMD_INIT_LOG(DEBUG,
+				" - tx_free_thresh = %lu [RTE_PMD_TXGBE_TX_MAX_BURST=%lu]",
+				(unsigned long)txq->tx_free_thresh,
+				(unsigned long)RTE_PMD_TXGBE_TX_MAX_BURST);
+		dev->tx_pkt_burst = txgbe_xmit_pkts;
+		dev->tx_pkt_prepare = txgbe_prep_pkts;
+	}
 }
 
-
 void
 txgbe_dev_free_queues(struct rte_eth_dev *dev)
 {
@@ -41,7 +137,66 @@ txgbe_dev_free_queues(struct rte_eth_dev *dev)
 void __rte_cold
 txgbe_set_rx_function(struct rte_eth_dev *dev)
 {
-	RTE_SET_USED(dev);
+	struct txgbe_adapter *adapter = TXGBE_DEV_ADAPTER(dev);
+
+	/*
+	 * Initialize the appropriate LRO callback.
+	 *
+	 * If all queues satisfy the bulk allocation preconditions
+	 * (adapter->rx_bulk_alloc_allowed is TRUE) then we may use
+	 * bulk allocation. Otherwise use a single allocation version.
+	 */
+	if (dev->data->lro) {
+		if (adapter->rx_bulk_alloc_allowed) {
+			PMD_INIT_LOG(DEBUG, "LRO is requested. Using a bulk "
+					   "allocation version");
+			dev->rx_pkt_burst = txgbe_recv_pkts_lro_bulk_alloc;
+		} else {
+			PMD_INIT_LOG(DEBUG, "LRO is requested. Using a single "
+					   "allocation version");
+			dev->rx_pkt_burst = txgbe_recv_pkts_lro_single_alloc;
+		}
+	} else if (dev->data->scattered_rx) {
+		/*
+		 * Set the non-LRO scattered callback: there are bulk and
+		 * single allocation versions.
+		 */
+		if (adapter->rx_bulk_alloc_allowed) {
+			PMD_INIT_LOG(DEBUG, "Using a Scattered with bulk "
+					   "allocation callback (port=%d).",
+				     dev->data->port_id);
+			dev->rx_pkt_burst = txgbe_recv_pkts_lro_bulk_alloc;
+		} else {
+			PMD_INIT_LOG(DEBUG, "Using Regular (non-vector, "
+					    "single allocation) "
+					    "Scattered Rx callback "
+					    "(port=%d).",
+				     dev->data->port_id);
+
+			dev->rx_pkt_burst = txgbe_recv_pkts_lro_single_alloc;
+		}
+	/*
+	 * Below we set "simple" callbacks according to port/queues parameters.
+	 * If parameters allow we are going to choose between the following
+	 * callbacks:
+	 *    - Bulk Allocation
+	 *    - Single buffer allocation (the simplest one)
+	 */
+	} else if (adapter->rx_bulk_alloc_allowed) {
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are "
+				    "satisfied. Rx Burst Bulk Alloc function "
+				    "will be used on port=%d.",
+			     dev->data->port_id);
+
+		dev->rx_pkt_burst = txgbe_recv_pkts_bulk_alloc;
+	} else {
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are not "
+				    "satisfied, or Scattered Rx is requested "
+				    "(port=%d).",
+			     dev->data->port_id);
+
+		dev->rx_pkt_burst = txgbe_recv_pkts;
+	}
 }
 
 /*
@@ -50,7 +205,148 @@ txgbe_set_rx_function(struct rte_eth_dev *dev)
 int __rte_cold
 txgbe_dev_rx_init(struct rte_eth_dev *dev)
 {
-	RTE_SET_USED(dev);
+	struct txgbe_hw *hw;
+	struct txgbe_rx_queue *rxq;
+	uint64_t bus_addr;
+	uint32_t fctrl;
+	uint32_t hlreg0;
+	uint32_t srrctl;
+	uint32_t rdrxctl;
+	uint32_t rxcsum;
+	uint16_t buf_size;
+	uint16_t i;
+	struct rte_eth_rxmode *rx_conf = &dev->data->dev_conf.rxmode;
+
+	PMD_INIT_FUNC_TRACE();
+	hw = TXGBE_DEV_HW(dev);
+
+	/*
+	 * Make sure receives are disabled while setting
+	 * up the RX context (registers, descriptor rings, etc.).
+	 */
+	wr32m(hw, TXGBE_MACRXCFG, TXGBE_MACRXCFG_ENA, 0);
+	wr32m(hw, TXGBE_PBRXCTL, TXGBE_PBRXCTL_ENA, 0);
+
+	/* Enable receipt of broadcasted frames */
+	fctrl = rd32(hw, TXGBE_PSRCTL);
+	fctrl |= TXGBE_PSRCTL_BCA;
+	wr32(hw, TXGBE_PSRCTL, fctrl);
+
+	/*
+	 * Configure CRC stripping, if any.
+	 */
+	hlreg0 = rd32(hw, TXGBE_SECRXCTL);
+	if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		hlreg0 &= ~TXGBE_SECRXCTL_CRCSTRIP;
+	else
+		hlreg0 |= TXGBE_SECRXCTL_CRCSTRIP;
+	wr32(hw, TXGBE_SECRXCTL, hlreg0);
+
+	/*
+	 * Configure jumbo frame support, if any.
+	 */
+	if (rx_conf->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+		wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
+			TXGBE_FRMSZ_MAX(rx_conf->max_rx_pkt_len));
+	} else {
+		wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
+			TXGBE_FRMSZ_MAX(TXGBE_FRAME_SIZE_DFT));
+	}
+
+	/*
+	 * If loopback mode is configured, set LPBK bit.
+	 */
+	hlreg0 = rd32(hw, TXGBE_PSRCTL);
+	if (hw->mac.type == txgbe_mac_raptor &&
+	    dev->data->dev_conf.lpbk_mode)
+		hlreg0 |= TXGBE_PSRCTL_LBENA;
+	else
+		hlreg0 &= ~TXGBE_PSRCTL_LBENA;
+
+	wr32(hw, TXGBE_PSRCTL, hlreg0);
+
+	/*
+	 * Assume no header split and no VLAN strip support
+	 * on any Rx queue first.
+	 */
+	rx_conf->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+
+	/* Setup RX queues */
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+
+		/*
+		 * Reset crc_len in case it was changed after queue setup by a
+		 * call to configure.
+		 */
+		if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+			rxq->crc_len = RTE_ETHER_CRC_LEN;
+		else
+			rxq->crc_len = 0;
+
+		/* Setup the Base and Length of the Rx Descriptor Rings */
+		bus_addr = rxq->rx_ring_phys_addr;
+		wr32(hw, TXGBE_RXBAL(rxq->reg_idx),
+				(uint32_t)(bus_addr & BIT_MASK32));
+		wr32(hw, TXGBE_RXBAH(rxq->reg_idx),
+				(uint32_t)(bus_addr >> 32));
+		wr32(hw, TXGBE_RXRP(rxq->reg_idx), 0);
+		wr32(hw, TXGBE_RXWP(rxq->reg_idx), 0);
+
+		srrctl = TXGBE_RXCFG_RNGLEN(rxq->nb_rx_desc);
+
+		/* Set if packets are dropped when no descriptors available */
+		if (rxq->drop_en)
+			srrctl |= TXGBE_RXCFG_DROP;
+
+		/*
+		 * Configure the RX buffer size in the PKTLEN field of
+		 * the RXCFG register of the queue.
+		 * The value is in 1 KB resolution. Valid values can be from
+		 * 1 KB to 16 KB.
+		 */
+		buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mb_pool) -
+			RTE_PKTMBUF_HEADROOM);
+		buf_size = ROUND_UP(buf_size, 0x1 << 10);
+		srrctl |= TXGBE_RXCFG_PKTLEN(buf_size);
+
+		wr32(hw, TXGBE_RXCFG(rxq->reg_idx), srrctl);
+
+		/* It adds dual VLAN length for supporting dual VLAN */
+		if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
+					    2 * TXGBE_VLAN_TAG_SIZE > buf_size)
+			dev->data->scattered_rx = 1;
+		if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+			rx_conf->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+	}
+
+	if (rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER)
+		dev->data->scattered_rx = 1;
+
+	/*
+	 * Setup the Checksum Register.
+	 * Disable Full-Packet Checksum which is mutually exclusive with RSS.
+	 * Enable IP/L4 checksum computation by hardware if requested to do so.
+	 */
+	rxcsum = rd32(hw, TXGBE_PSRCTL);
+	rxcsum |= TXGBE_PSRCTL_PCSD;
+	if (rx_conf->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+		rxcsum |= TXGBE_PSRCTL_L4CSUM;
+	else
+		rxcsum &= ~TXGBE_PSRCTL_L4CSUM;
+
+	wr32(hw, TXGBE_PSRCTL, rxcsum);
+
+	if (hw->mac.type == txgbe_mac_raptor) {
+		rdrxctl = rd32(hw, TXGBE_SECRXCTL);
+		if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+			rdrxctl &= ~TXGBE_SECRXCTL_CRCSTRIP;
+		else
+			rdrxctl |= TXGBE_SECRXCTL_CRCSTRIP;
+		wr32(hw, TXGBE_SECRXCTL, rdrxctl);
+	}
+
+	txgbe_set_rx_function(dev);
 
 	return 0;
 }
@@ -61,7 +357,29 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
 void __rte_cold
 txgbe_dev_tx_init(struct rte_eth_dev *dev)
 {
-	RTE_SET_USED(dev);
+	struct txgbe_hw     *hw;
+	struct txgbe_tx_queue *txq;
+	uint64_t bus_addr;
+	uint16_t i;
+
+	PMD_INIT_FUNC_TRACE();
+	hw = TXGBE_DEV_HW(dev);
+
+	/* Setup the Base and Length of the Tx Descriptor Rings */
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+
+		bus_addr = txq->tx_ring_phys_addr;
+		wr32(hw, TXGBE_TXBAL(txq->reg_idx),
+				(uint32_t)(bus_addr & BIT_MASK32));
+		wr32(hw, TXGBE_TXBAH(txq->reg_idx),
+				(uint32_t)(bus_addr >> 32));
+		wr32m(hw, TXGBE_TXCFG(txq->reg_idx), TXGBE_TXCFG_BUFLEN_MASK,
+			TXGBE_TXCFG_BUFLEN(txq->nb_tx_desc));
+		/* Setup the HW Tx Head and TX Tail descriptor pointers */
+		wr32(hw, TXGBE_TXRP(txq->reg_idx), 0);
+		wr32(hw, TXGBE_TXWP(txq->reg_idx), 0);
+	}
 }
 
 /*
diff --git a/drivers/net/txgbe/txgbe_rxtx.h b/drivers/net/txgbe/txgbe_rxtx.h
index c5e2e56d3..2d337c46a 100644
--- a/drivers/net/txgbe/txgbe_rxtx.h
+++ b/drivers/net/txgbe/txgbe_rxtx.h
@@ -5,11 +5,34 @@
 #ifndef _TXGBE_RXTX_H_
 #define _TXGBE_RXTX_H_
 
+
+#define RTE_PMD_TXGBE_TX_MAX_BURST 32
+#define RTE_PMD_TXGBE_RX_MAX_BURST 32
+
+/**
+ * Structure associated with each RX queue.
+ */
+struct txgbe_rx_queue {
+	struct rte_mempool  *mb_pool; /**< mbuf pool to populate RX ring. */
+	uint64_t            rx_ring_phys_addr; /**< RX ring DMA address. */
+	uint16_t            nb_rx_desc; /**< number of RX descriptors. */
+	uint16_t            reg_idx;  /**< RX queue register index. */
+	uint8_t             crc_len;  /**< 0 if CRC stripped, 4 otherwise. */
+	uint8_t             drop_en;  /**< If not 0, set SRRCTL.Drop_En. */
+	uint64_t	    offloads; /**< Rx offloads with DEV_RX_OFFLOAD_* */
+};
+
 /**
  * Structure associated with each TX queue.
  */
 struct txgbe_tx_queue {
 	uint64_t            tx_ring_phys_addr; /**< TX ring DMA address. */
+	uint16_t            nb_tx_desc;    /**< number of TX descriptors. */
+	/**< Start freeing TX buffers if there are less free descriptors than
+	     this value. */
+	uint16_t            tx_free_thresh;
+	uint16_t            reg_idx;       /**< TX queue register index. */
+	uint64_t offloads; /**< Tx offload flags of DEV_TX_OFFLOAD_* */
 };
 
 /* Takes an ethdev and a queue and sets up the tx function to be used based on
-- 
2.18.4




^ permalink raw reply	[flat|nested] 49+ messages in thread
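Two small computations in txgbe_dev_rx_init() above decide the Rx data path: the per-queue buffer size is rounded up to the 1 KB granularity that the RXCFG PKTLEN field expects, and scattered Rx is enabled whenever a maximum-size frame plus two VLAN tags (QinQ) cannot fit in one buffer. A minimal sketch of both, assuming only the TXGBE_VLAN_TAG_SIZE constant from the patch (the helper names here are illustrative, not driver API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define VLAN_TAG_SIZE 4	/* TXGBE_VLAN_TAG_SIZE in the patch */

/* Round the usable mbuf data room up to 1 KB, mirroring
 * ROUND_UP(buf_size, 0x1 << 10): the RXCFG PKTLEN field takes the
 * buffer size in 1 KB units (valid range 1 KB .. 16 KB). */
static uint16_t rx_buf_size(uint16_t data_room, uint16_t headroom)
{
	uint16_t buf = (uint16_t)(data_room - headroom);

	return (uint16_t)((buf + 1023u) & ~1023u);
}

/* Scattered Rx is needed when a max-size packet plus a double VLAN
 * header would overflow a single Rx buffer. */
static bool needs_scattered_rx(uint32_t max_rx_pkt_len, uint16_t buf_size)
{
	return max_rx_pkt_len + 2 * VLAN_TAG_SIZE > buf_size;
}
```

For the common 2176-byte mempool data room with a 128-byte headroom, the usable 2048 bytes round to exactly 2048, so standard 1518-byte frames stay on the non-scattered path while jumbo frames force dev->data->scattered_rx = 1.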

* [dpdk-dev] [PATCH v1 19/42] net/txgbe: add RX and TX start
  2020-09-01 11:50 [dpdk-dev] [PATCH v1 01/42] net/txgbe: add build and doc infrastructure Jiawen Wu
                   ` (16 preceding siblings ...)
  2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 18/42] net/txgbe: add rx and tx init Jiawen Wu
@ 2020-09-01 11:50 ` Jiawen Wu
  2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 20/42] net/txgbe: add RX and TX stop Jiawen Wu
                   ` (23 subsequent siblings)
  41 siblings, 0 replies; 49+ messages in thread
From: Jiawen Wu @ 2020-09-01 11:50 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Add start of receive and transmit units for a specified queue.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/base/txgbe_hw.h |   1 +
 drivers/net/txgbe/txgbe_ethdev.c  |   2 +
 drivers/net/txgbe/txgbe_ethdev.h  |   4 +
 drivers/net/txgbe/txgbe_rxtx.c    | 175 +++++++++++++++++++++++++++++-
 drivers/net/txgbe/txgbe_rxtx.h    |  62 +++++++++++
 5 files changed, 243 insertions(+), 1 deletion(-)

diff --git a/drivers/net/txgbe/base/txgbe_hw.h b/drivers/net/txgbe/base/txgbe_hw.h
index f57c26bee..a597383b8 100644
--- a/drivers/net/txgbe/base/txgbe_hw.h
+++ b/drivers/net/txgbe/base/txgbe_hw.h
@@ -56,5 +56,6 @@ void txgbe_init_mac_link_ops(struct txgbe_hw *hw);
 s32 txgbe_reset_hw(struct txgbe_hw *hw);
 s32 txgbe_start_hw_raptor(struct txgbe_hw *hw);
 s32 txgbe_init_phy_raptor(struct txgbe_hw *hw);
+s32 txgbe_enable_rx_dma_raptor(struct txgbe_hw *hw, u32 regval);
 bool txgbe_verify_lesm_fw_enabled_raptor(struct txgbe_hw *hw);
 #endif /* _TXGBE_HW_H_ */
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index abc457109..4fab88c5c 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -1319,6 +1319,8 @@ static const struct eth_dev_ops txgbe_eth_dev_ops = {
 	.link_update                = txgbe_dev_link_update,
 	.stats_get                  = txgbe_dev_stats_get,
 	.stats_reset                = txgbe_dev_stats_reset,
+	.rx_queue_start	            = txgbe_dev_rx_queue_start,
+	.tx_queue_start	            = txgbe_dev_tx_queue_start,
 	.dev_led_on                 = txgbe_dev_led_on,
 	.dev_led_off                = txgbe_dev_led_off,
 };
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index 6739b580c..2dc0327cb 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -90,6 +90,10 @@ void txgbe_dev_tx_init(struct rte_eth_dev *dev);
 
 int txgbe_dev_rxtx_start(struct rte_eth_dev *dev);
 
+int txgbe_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+
+int txgbe_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+
 uint16_t txgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		uint16_t nb_pkts);
 
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index d3782f44d..ad5d1d22f 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -14,6 +14,7 @@
 #include <inttypes.h>
 
 #include <rte_ethdev.h>
+#include <rte_ethdev_driver.h>
 
 #include "txgbe_logs.h"
 #include "base/txgbe.h"
@@ -134,6 +135,38 @@ txgbe_dev_free_queues(struct rte_eth_dev *dev)
 	RTE_SET_USED(dev);
 }
 
+static int __rte_cold
+txgbe_alloc_rx_queue_mbufs(struct txgbe_rx_queue *rxq)
+{
+	struct txgbe_rx_entry *rxe = rxq->sw_ring;
+	uint64_t dma_addr;
+	unsigned int i;
+
+	/* Initialize software ring entries */
+	for (i = 0; i < rxq->nb_rx_desc; i++) {
+		volatile struct txgbe_rx_desc *rxd;
+		struct rte_mbuf *mbuf = rte_mbuf_raw_alloc(rxq->mb_pool);
+
+		if (mbuf == NULL) {
+			PMD_INIT_LOG(ERR, "RX mbuf alloc failed queue_id=%u",
+				     (unsigned) rxq->queue_id);
+			return -ENOMEM;
+		}
+
+		mbuf->data_off = RTE_PKTMBUF_HEADROOM;
+		mbuf->port = rxq->port_id;
+
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf));
+		rxd = &rxq->rx_ring[i];
+		TXGBE_RXD_HDRADDR(rxd, 0);
+		TXGBE_RXD_PKTADDR(rxd, dma_addr);
+		rxe[i].mbuf = mbuf;
+	}
+
+	return 0;
+}
+
 void __rte_cold
 txgbe_set_rx_function(struct rte_eth_dev *dev)
 {
@@ -382,13 +415,153 @@ txgbe_dev_tx_init(struct rte_eth_dev *dev)
 	}
 }
 
+/*
+ * Set up link loopback mode Tx->Rx.
+ */
+static inline void __rte_cold
+txgbe_setup_loopback_link_raptor(struct txgbe_hw *hw)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	wr32m(hw, TXGBE_MACRXCFG, TXGBE_MACRXCFG_LB, TXGBE_MACRXCFG_LB);
+
+	msec_delay(50);
+}
+
 /*
  * Start Transmit and Receive Units.
  */
 int __rte_cold
 txgbe_dev_rxtx_start(struct rte_eth_dev *dev)
 {
-	RTE_SET_USED(dev);
+	struct txgbe_hw     *hw;
+	struct txgbe_tx_queue *txq;
+	struct txgbe_rx_queue *rxq;
+	uint32_t dmatxctl;
+	uint32_t rxctrl;
+	uint16_t i;
+	int ret = 0;
+
+	PMD_INIT_FUNC_TRACE();
+	hw = TXGBE_DEV_HW(dev);
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		/* Setup Transmit Threshold Registers */
+		wr32m(hw, TXGBE_TXCFG(txq->reg_idx),
+		      TXGBE_TXCFG_HTHRESH_MASK |
+		      TXGBE_TXCFG_WTHRESH_MASK,
+		      TXGBE_TXCFG_HTHRESH(txq->hthresh) |
+		      TXGBE_TXCFG_WTHRESH(txq->wthresh));
+	}
+
+	dmatxctl = rd32(hw, TXGBE_DMATXCTRL);
+	dmatxctl |= TXGBE_DMATXCTRL_ENA;
+	wr32(hw, TXGBE_DMATXCTRL, dmatxctl);
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if (!txq->tx_deferred_start) {
+			ret = txgbe_dev_tx_queue_start(dev, i);
+			if (ret < 0)
+				return ret;
+		}
+	}
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		if (!rxq->rx_deferred_start) {
+			ret = txgbe_dev_rx_queue_start(dev, i);
+			if (ret < 0)
+				return ret;
+		}
+	}
+
+	/* Enable Receive engine */
+	rxctrl = rd32(hw, TXGBE_PBRXCTL);
+	rxctrl |= TXGBE_PBRXCTL_ENA;
+	hw->mac.enable_rx_dma(hw, rxctrl);
+
+	/* If loopback mode is enabled, set up the link accordingly */
+	if (hw->mac.type == txgbe_mac_raptor &&
+	    dev->data->dev_conf.lpbk_mode)
+		txgbe_setup_loopback_link_raptor(hw);
+
+	return 0;
+}
+
+
+/*
+ * Start Receive Units for specified queue.
+ */
+int __rte_cold
+txgbe_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+	struct txgbe_rx_queue *rxq;
+	uint32_t rxdctl;
+	int poll_ms;
+
+	PMD_INIT_FUNC_TRACE();
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+
+	/* Allocate buffers for descriptor rings */
+	if (txgbe_alloc_rx_queue_mbufs(rxq) != 0) {
+		PMD_INIT_LOG(ERR, "Could not alloc mbuf for queue:%d",
+			     rx_queue_id);
+		return -1;
+	}
+	rxdctl = rd32(hw, TXGBE_RXCFG(rxq->reg_idx));
+	rxdctl |= TXGBE_RXCFG_ENA;
+	wr32(hw, TXGBE_RXCFG(rxq->reg_idx), rxdctl);
+
+	/* Wait until RX Enable ready */
+	poll_ms = RTE_TXGBE_REGISTER_POLL_WAIT_10_MS;
+	do {
+		rte_delay_ms(1);
+		rxdctl = rd32(hw, TXGBE_RXCFG(rxq->reg_idx));
+	} while (--poll_ms && !(rxdctl & TXGBE_RXCFG_ENA));
+	if (!poll_ms)
+		PMD_INIT_LOG(ERR, "Could not enable Rx Queue %d", rx_queue_id);
+	rte_wmb();
+	wr32(hw, TXGBE_RXRP(rxq->reg_idx), 0);
+	wr32(hw, TXGBE_RXWP(rxq->reg_idx), rxq->nb_rx_desc - 1);
+	dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
+
+	return 0;
+}
+
+/*
+ * Start Transmit Units for specified queue.
+ */
+int __rte_cold
+txgbe_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+	struct txgbe_tx_queue *txq;
+	uint32_t txdctl;
+	int poll_ms;
+
+	PMD_INIT_FUNC_TRACE();
+
+	txq = dev->data->tx_queues[tx_queue_id];
+	wr32m(hw, TXGBE_TXCFG(txq->reg_idx), TXGBE_TXCFG_ENA, TXGBE_TXCFG_ENA);
+
+	/* Wait until TX Enable ready */
+	poll_ms = RTE_TXGBE_REGISTER_POLL_WAIT_10_MS;
+	do {
+		rte_delay_ms(1);
+		txdctl = rd32(hw, TXGBE_TXCFG(txq->reg_idx));
+	} while (--poll_ms && !(txdctl & TXGBE_TXCFG_ENA));
+	if (!poll_ms)
+		PMD_INIT_LOG(ERR, "Could not enable Tx Queue %d", tx_queue_id);
+
+	rte_wmb();
+	wr32(hw, TXGBE_TXWP(txq->reg_idx), txq->tx_tail);
+	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
+
 	return 0;
 }
 
diff --git a/drivers/net/txgbe/txgbe_rxtx.h b/drivers/net/txgbe/txgbe_rxtx.h
index 2d337c46a..b8ca83672 100644
--- a/drivers/net/txgbe/txgbe_rxtx.h
+++ b/drivers/net/txgbe/txgbe_rxtx.h
@@ -5,20 +5,78 @@
 #ifndef _TXGBE_RXTX_H_
 #define _TXGBE_RXTX_H_
 
+/*****************************************************************************
+ * Receive Descriptor
+ *****************************************************************************/
+struct txgbe_rx_desc {
+	struct {
+		union {
+			__le32 dw0;
+			struct {
+				__le16 pkt;
+				__le16 hdr;
+			} lo;
+		};
+		union {
+			__le32 dw1;
+			struct {
+				__le16 ipid;
+				__le16 csum;
+			} hi;
+		};
+	} qw0; /* also as r.pkt_addr */
+	struct {
+		union {
+			__le32 dw2;
+			struct {
+				__le32 status;
+			} lo;
+		};
+		union {
+			__le32 dw3;
+			struct {
+				__le16 len;
+				__le16 tag;
+			} hi;
+		};
+	} qw1; /* also as r.hdr_addr */
+};
+
+/* @txgbe_rx_desc.qw0 */
+#define TXGBE_RXD_PKTADDR(rxd, v)  \
+	(((volatile __le64 *)(rxd))[0] = cpu_to_le64(v))
+
+/* @txgbe_rx_desc.qw1 */
+#define TXGBE_RXD_HDRADDR(rxd, v)  \
+	(((volatile __le64 *)(rxd))[1] = cpu_to_le64(v))
 
 #define RTE_PMD_TXGBE_TX_MAX_BURST 32
 #define RTE_PMD_TXGBE_RX_MAX_BURST 32
 
+#define RTE_TXGBE_REGISTER_POLL_WAIT_10_MS  10
+
+/**
+ * Structure associated with each descriptor of the RX ring of a RX queue.
+ */
+struct txgbe_rx_entry {
+	struct rte_mbuf *mbuf; /**< mbuf associated with RX descriptor. */
+};
+
 /**
  * Structure associated with each RX queue.
  */
 struct txgbe_rx_queue {
 	struct rte_mempool  *mb_pool; /**< mbuf pool to populate RX ring. */
+	volatile struct txgbe_rx_desc *rx_ring; /**< RX ring virtual address. */
 	uint64_t            rx_ring_phys_addr; /**< RX ring DMA address. */
+	struct txgbe_rx_entry *sw_ring; /**< address of RX software ring. */
 	uint16_t            nb_rx_desc; /**< number of RX descriptors. */
+	uint16_t            queue_id; /**< RX queue index. */
 	uint16_t            reg_idx;  /**< RX queue register index. */
+	uint16_t            port_id;  /**< Device port identifier. */
 	uint8_t             crc_len;  /**< 0 if CRC stripped, 4 otherwise. */
 	uint8_t             drop_en;  /**< If not 0, set SRRCTL.Drop_En. */
+	uint8_t             rx_deferred_start; /**< not in global dev start. */
 	uint64_t	    offloads; /**< Rx offloads with DEV_RX_OFFLOAD_* */
 };
 
@@ -28,11 +86,15 @@ struct txgbe_rx_queue {
 struct txgbe_tx_queue {
 	uint64_t            tx_ring_phys_addr; /**< TX ring DMA address. */
 	uint16_t            nb_tx_desc;    /**< number of TX descriptors. */
+	uint16_t            tx_tail;       /**< current value of TDT reg. */
 	/**< Start freeing TX buffers if there are less free descriptors than
 	     this value. */
 	uint16_t            tx_free_thresh;
 	uint16_t            reg_idx;       /**< TX queue register index. */
+	uint8_t             hthresh;       /**< Host threshold register. */
+	uint8_t             wthresh;       /**< Write-back threshold reg. */
 	uint64_t offloads; /**< Tx offload flags of DEV_TX_OFFLOAD_* */
+	uint8_t             tx_deferred_start; /**< not in global dev start. */
 };
 
 /* Takes an ethdev and a queue and sets up the tx function to be used based on
-- 
2.18.4

^ permalink raw reply	[flat|nested] 49+ messages in thread

* [dpdk-dev] [PATCH v1 20/42] net/txgbe: add RX and TX stop
  2020-09-01 11:50 [dpdk-dev] [PATCH v1 01/42] net/txgbe: add build and doc infrastructure Jiawen Wu
                   ` (17 preceding siblings ...)
  2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 19/42] net/txgbe: add RX and TX start Jiawen Wu
@ 2020-09-01 11:50 ` Jiawen Wu
  2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 21/42] net/txgbe: add RX and TX queues setup Jiawen Wu
                   ` (22 subsequent siblings)
  41 siblings, 0 replies; 49+ messages in thread
From: Jiawen Wu @ 2020-09-01 11:50 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Add support for stopping the receive and transmit units of a specified queue, releasing the mbufs and freeing the queues.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/base/txgbe_type.h |   3 +
 drivers/net/txgbe/txgbe_ethdev.c    |   7 +
 drivers/net/txgbe/txgbe_ethdev.h    |  15 ++
 drivers/net/txgbe/txgbe_rxtx.c      | 305 +++++++++++++++++++++++++++-
 drivers/net/txgbe/txgbe_rxtx.h      |  25 +++
 5 files changed, 354 insertions(+), 1 deletion(-)

diff --git a/drivers/net/txgbe/base/txgbe_type.h b/drivers/net/txgbe/base/txgbe_type.h
index 6229d8acc..c05e8e8b1 100644
--- a/drivers/net/txgbe/base/txgbe_type.h
+++ b/drivers/net/txgbe/base/txgbe_type.h
@@ -467,6 +467,9 @@ struct txgbe_hw {
 		TXGBE_SW_RESET,
 		TXGBE_GLOBAL_RESET
 	} reset_type;
+
+	u32 q_rx_regs[128 * 8];
+	u32 q_tx_regs[128 * 8];
 };
 
 #include "txgbe_regs.h"
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index 4fab88c5c..80470c6e7 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -599,6 +599,7 @@ txgbe_dev_start(struct rte_eth_dev *dev)
 
 error:
 	PMD_INIT_LOG(ERR, "failure in txgbe_dev_start(): %d", err);
+	txgbe_dev_clear_queues(dev);
 	return -EIO;
 }
 
@@ -638,6 +639,8 @@ txgbe_dev_stop(struct rte_eth_dev *dev)
 		hw->mac.disable_tx_laser(hw);
 	}
 
+	txgbe_dev_clear_queues(dev);
+
 	/* Clear stored conf */
 	dev->data->scattered_rx = 0;
 	dev->data->lro = 0;
@@ -1320,7 +1323,11 @@ static const struct eth_dev_ops txgbe_eth_dev_ops = {
 	.stats_get                  = txgbe_dev_stats_get,
 	.stats_reset                = txgbe_dev_stats_reset,
 	.rx_queue_start	            = txgbe_dev_rx_queue_start,
+	.rx_queue_stop              = txgbe_dev_rx_queue_stop,
 	.tx_queue_start	            = txgbe_dev_tx_queue_start,
+	.tx_queue_stop              = txgbe_dev_tx_queue_stop,
+	.rx_queue_release           = txgbe_dev_rx_queue_release,
+	.tx_queue_release           = txgbe_dev_tx_queue_release,
 	.dev_led_on                 = txgbe_dev_led_on,
 	.dev_led_off                = txgbe_dev_led_off,
 };
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index 2dc0327cb..f5ee1cae6 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -82,18 +82,33 @@ int txgbe_vf_representor_uninit(struct rte_eth_dev *ethdev);
 /*
  * RX/TX function prototypes
  */
+void txgbe_dev_clear_queues(struct rte_eth_dev *dev);
+
 void txgbe_dev_free_queues(struct rte_eth_dev *dev);
 
+void txgbe_dev_rx_queue_release(void *rxq);
+
+void txgbe_dev_tx_queue_release(void *txq);
+
 int txgbe_dev_rx_init(struct rte_eth_dev *dev);
 
 void txgbe_dev_tx_init(struct rte_eth_dev *dev);
 
 int txgbe_dev_rxtx_start(struct rte_eth_dev *dev);
 
+void txgbe_dev_save_rx_queue(struct txgbe_hw *hw, uint16_t rx_queue_id);
+void txgbe_dev_store_rx_queue(struct txgbe_hw *hw, uint16_t rx_queue_id);
+void txgbe_dev_save_tx_queue(struct txgbe_hw *hw, uint16_t tx_queue_id);
+void txgbe_dev_store_tx_queue(struct txgbe_hw *hw, uint16_t tx_queue_id);
+
 int txgbe_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 
+int txgbe_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+
 int txgbe_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 
+int txgbe_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+
 uint16_t txgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		uint16_t nb_pkts);
 
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index ad5d1d22f..58824045b 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -15,6 +15,8 @@
 
 #include <rte_ethdev.h>
 #include <rte_ethdev_driver.h>
+#include <rte_malloc.h>
+#include <rte_mbuf.h>
 
 #include "txgbe_logs.h"
 #include "base/txgbe.h"
@@ -102,6 +104,22 @@ txgbe_recv_pkts_lro_bulk_alloc(void *rx_queue, struct rte_mbuf **rx_pkts,
 	return txgbe_recv_pkts_lro(rx_queue, rx_pkts, nb_pkts, true);
 }
 
+static void __rte_cold
+txgbe_tx_queue_release(struct txgbe_tx_queue *txq)
+{
+	if (txq != NULL && txq->ops != NULL) {
+		txq->ops->release_mbufs(txq);
+		txq->ops->free_swring(txq);
+		rte_free(txq);
+	}
+}
+
+void __rte_cold
+txgbe_dev_tx_queue_release(void *txq)
+{
+	txgbe_tx_queue_release(txq);
+}
+
 /* Takes an ethdev and a queue and sets up the tx function to be used based on
  * the queue parameters. Used in tx_queue_setup by primary process and then
  * in dev_init by secondary process when attaching to an existing ethdev.
@@ -129,10 +147,169 @@ txgbe_set_tx_function(struct rte_eth_dev *dev, struct txgbe_tx_queue *txq)
 	}
 }
 
+/**
+ * txgbe_free_sc_cluster - free the not-yet-completed scattered cluster
+ *
+ * The "next" pointer of the last segment of (not-yet-completed) RSC clusters
+ * in the sw_rsc_ring is not set to NULL but rather points to the next
+ * mbuf of this RSC aggregation (that has not been completed yet and still
+ * resides on the HW ring). So, instead of calling rte_pktmbuf_free(), we
+ * will just free the first "nb_segs" segments of the cluster explicitly by
+ * calling rte_pktmbuf_free_seg().
+ *
+ * @m scattered cluster head
+ */
+static void __rte_cold
+txgbe_free_sc_cluster(struct rte_mbuf *m)
+{
+	uint16_t i, nb_segs = m->nb_segs;
+	struct rte_mbuf *next_seg;
+
+	for (i = 0; i < nb_segs; i++) {
+		next_seg = m->next;
+		rte_pktmbuf_free_seg(m);
+		m = next_seg;
+	}
+}
+
+static void __rte_cold
+txgbe_rx_queue_release_mbufs(struct txgbe_rx_queue *rxq)
+{
+	unsigned i;
+
+	if (rxq->sw_ring != NULL) {
+		for (i = 0; i < rxq->nb_rx_desc; i++) {
+			if (rxq->sw_ring[i].mbuf != NULL) {
+				rte_pktmbuf_free_seg(rxq->sw_ring[i].mbuf);
+				rxq->sw_ring[i].mbuf = NULL;
+			}
+		}
+		if (rxq->rx_nb_avail) {
+			for (i = 0; i < rxq->rx_nb_avail; ++i) {
+				struct rte_mbuf *mb;
+
+				mb = rxq->rx_stage[rxq->rx_next_avail + i];
+				rte_pktmbuf_free_seg(mb);
+			}
+			rxq->rx_nb_avail = 0;
+		}
+	}
+
+	if (rxq->sw_sc_ring)
+		for (i = 0; i < rxq->nb_rx_desc; i++)
+			if (rxq->sw_sc_ring[i].fbuf) {
+				txgbe_free_sc_cluster(rxq->sw_sc_ring[i].fbuf);
+				rxq->sw_sc_ring[i].fbuf = NULL;
+			}
+}
+
+static void __rte_cold
+txgbe_rx_queue_release(struct txgbe_rx_queue *rxq)
+{
+	if (rxq != NULL) {
+		txgbe_rx_queue_release_mbufs(rxq);
+		rte_free(rxq->sw_ring);
+		rte_free(rxq->sw_sc_ring);
+		rte_free(rxq);
+	}
+}
+
+void __rte_cold
+txgbe_dev_rx_queue_release(void *rxq)
+{
+	txgbe_rx_queue_release(rxq);
+}
+
+/* Reset dynamic txgbe_rx_queue fields back to defaults */
+static void __rte_cold
+txgbe_reset_rx_queue(struct txgbe_adapter *adapter, struct txgbe_rx_queue *rxq)
+{
+	static const struct txgbe_rx_desc zeroed_desc = {{{0}, {0} }, {{0}, {0} } };
+	unsigned i;
+	uint16_t len = rxq->nb_rx_desc;
+
+	/*
+	 * By default, the Rx queue setup function allocates enough memory for
+	 * TXGBE_RING_DESC_MAX.  The Rx Burst bulk allocation function requires
+	 * extra memory at the end of the descriptor ring to be zeroed out.
+	 */
+	if (adapter->rx_bulk_alloc_allowed)
+		/* zero out extra memory */
+		len += RTE_PMD_TXGBE_RX_MAX_BURST;
+
+	/*
+	 * Zero out HW ring memory. Zero out extra memory at the end of
+	 * the H/W ring so look-ahead logic in Rx Burst bulk alloc function
+	 * reads extra memory as zeros.
+	 */
+	for (i = 0; i < len; i++) {
+		rxq->rx_ring[i] = zeroed_desc;
+	}
+
+	/*
+	 * initialize extra software ring entries. Space for these extra
+	 * entries is always allocated
+	 */
+	memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf));
+	for (i = rxq->nb_rx_desc; i < len; ++i) {
+		rxq->sw_ring[i].mbuf = &rxq->fake_mbuf;
+	}
+
+	rxq->rx_nb_avail = 0;
+	rxq->rx_next_avail = 0;
+	rxq->rx_free_trigger = (uint16_t)(rxq->rx_free_thresh - 1);
+	rxq->rx_tail = 0;
+	rxq->nb_rx_hold = 0;
+	rxq->pkt_first_seg = NULL;
+	rxq->pkt_last_seg = NULL;
+
+}
+
+void __rte_cold
+txgbe_dev_clear_queues(struct rte_eth_dev *dev)
+{
+	unsigned i;
+	struct txgbe_adapter *adapter = TXGBE_DEV_ADAPTER(dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		struct txgbe_tx_queue *txq = dev->data->tx_queues[i];
+
+		if (txq != NULL) {
+			txq->ops->release_mbufs(txq);
+			txq->ops->reset(txq);
+		}
+	}
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		struct txgbe_rx_queue *rxq = dev->data->rx_queues[i];
+
+		if (rxq != NULL) {
+			txgbe_rx_queue_release_mbufs(rxq);
+			txgbe_reset_rx_queue(adapter, rxq);
+		}
+	}
+}
+
 void
 txgbe_dev_free_queues(struct rte_eth_dev *dev)
 {
-	RTE_SET_USED(dev);
+	unsigned i;
+
+	PMD_INIT_FUNC_TRACE();
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		txgbe_dev_rx_queue_release(dev->data->rx_queues[i]);
+		dev->data->rx_queues[i] = NULL;
+	}
+	dev->data->nb_rx_queues = 0;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txgbe_dev_tx_queue_release(dev->data->tx_queues[i]);
+		dev->data->tx_queues[i] = NULL;
+	}
+	dev->data->nb_tx_queues = 0;
 }
 
 static int __rte_cold
@@ -490,6 +667,41 @@ txgbe_dev_rxtx_start(struct rte_eth_dev *dev)
 	return 0;
 }
 
+void
+txgbe_dev_save_rx_queue(struct txgbe_hw *hw, uint16_t rx_queue_id)
+{
+	u32 *reg = &hw->q_rx_regs[rx_queue_id * 8];
+	*(reg++) = rd32(hw, TXGBE_RXBAL(rx_queue_id));
+	*(reg++) = rd32(hw, TXGBE_RXBAH(rx_queue_id));
+	*(reg++) = rd32(hw, TXGBE_RXCFG(rx_queue_id));
+}
+
+void
+txgbe_dev_store_rx_queue(struct txgbe_hw *hw, uint16_t rx_queue_id)
+{
+	u32 *reg = &hw->q_rx_regs[rx_queue_id * 8];
+	wr32(hw, TXGBE_RXBAL(rx_queue_id), *(reg++));
+	wr32(hw, TXGBE_RXBAH(rx_queue_id), *(reg++));
+	wr32(hw, TXGBE_RXCFG(rx_queue_id), *(reg++) & ~TXGBE_RXCFG_ENA);
+}
+
+void
+txgbe_dev_save_tx_queue(struct txgbe_hw *hw, uint16_t tx_queue_id)
+{
+	u32 *reg = &hw->q_tx_regs[tx_queue_id * 8];
+	*(reg++) = rd32(hw, TXGBE_TXBAL(tx_queue_id));
+	*(reg++) = rd32(hw, TXGBE_TXBAH(tx_queue_id));
+	*(reg++) = rd32(hw, TXGBE_TXCFG(tx_queue_id));
+}
+
+void
+txgbe_dev_store_tx_queue(struct txgbe_hw *hw, uint16_t tx_queue_id)
+{
+	u32 *reg = &hw->q_tx_regs[tx_queue_id * 8];
+	wr32(hw, TXGBE_TXBAL(tx_queue_id), *(reg++));
+	wr32(hw, TXGBE_TXBAH(tx_queue_id), *(reg++));
+	wr32(hw, TXGBE_TXCFG(tx_queue_id), *(reg++) & ~TXGBE_TXCFG_ENA);
+}
 
 /*
  * Start Receive Units for specified queue.
@@ -532,6 +744,44 @@ txgbe_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	return 0;
 }
 
+/*
+ * Stop Receive Units for specified queue.
+ */
+int __rte_cold
+txgbe_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+	struct txgbe_adapter *adapter = TXGBE_DEV_ADAPTER(dev);
+	struct txgbe_rx_queue *rxq;
+	uint32_t rxdctl;
+	int poll_ms;
+
+	PMD_INIT_FUNC_TRACE();
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+
+	txgbe_dev_save_rx_queue(hw, rxq->reg_idx);
+	wr32m(hw, TXGBE_RXCFG(rxq->reg_idx), TXGBE_RXCFG_ENA, 0);
+
+	/* Wait until RX Enable bit clear */
+	poll_ms = RTE_TXGBE_REGISTER_POLL_WAIT_10_MS;
+	do {
+		rte_delay_ms(1);
+		rxdctl = rd32(hw, TXGBE_RXCFG(rxq->reg_idx));
+	} while (--poll_ms && (rxdctl & TXGBE_RXCFG_ENA));
+	if (!poll_ms)
+		PMD_INIT_LOG(ERR, "Could not disable Rx Queue %d", rx_queue_id);
+
+	rte_delay_us(RTE_TXGBE_WAIT_100_US);
+	txgbe_dev_store_rx_queue(hw, rxq->reg_idx);
+
+	txgbe_rx_queue_release_mbufs(rxq);
+	txgbe_reset_rx_queue(adapter, rxq);
+	dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	return 0;
+}
+
 /*
  * Start Transmit Units for specified queue.
  */
@@ -565,3 +815,56 @@ txgbe_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	return 0;
 }
 
+/*
+ * Stop Transmit Units for specified queue.
+ */
+int __rte_cold
+txgbe_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+	struct txgbe_tx_queue *txq;
+	uint32_t txdctl;
+	uint32_t txtdh, txtdt;
+	int poll_ms;
+
+	PMD_INIT_FUNC_TRACE();
+
+	txq = dev->data->tx_queues[tx_queue_id];
+
+	/* Wait until TX queue is empty */
+	poll_ms = RTE_TXGBE_REGISTER_POLL_WAIT_10_MS;
+	do {
+		rte_delay_us(RTE_TXGBE_WAIT_100_US);
+		txtdh = rd32(hw, TXGBE_TXRP(txq->reg_idx));
+		txtdt = rd32(hw, TXGBE_TXWP(txq->reg_idx));
+	} while (--poll_ms && (txtdh != txtdt));
+	if (!poll_ms)
+		PMD_INIT_LOG(ERR,
+			"Tx Queue %d is not empty when stopping.",
+			tx_queue_id);
+
+	txgbe_dev_save_tx_queue(hw, txq->reg_idx);
+	wr32m(hw, TXGBE_TXCFG(txq->reg_idx), TXGBE_TXCFG_ENA, 0);
+
+	/* Wait until TX Enable bit clear */
+	poll_ms = RTE_TXGBE_REGISTER_POLL_WAIT_10_MS;
+	do {
+		rte_delay_ms(1);
+		txdctl = rd32(hw, TXGBE_TXCFG(txq->reg_idx));
+	} while (--poll_ms && (txdctl & TXGBE_TXCFG_ENA));
+	if (!poll_ms)
+		PMD_INIT_LOG(ERR, "Could not disable Tx Queue %d",
+			tx_queue_id);
+
+	rte_delay_us(RTE_TXGBE_WAIT_100_US);
+	txgbe_dev_store_tx_queue(hw, txq->reg_idx);
+
+	if (txq->ops != NULL) {
+		txq->ops->release_mbufs(txq);
+		txq->ops->reset(txq);
+	}
+	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	return 0;
+}
+
diff --git a/drivers/net/txgbe/txgbe_rxtx.h b/drivers/net/txgbe/txgbe_rxtx.h
index b8ca83672..72cbf1f87 100644
--- a/drivers/net/txgbe/txgbe_rxtx.h
+++ b/drivers/net/txgbe/txgbe_rxtx.h
@@ -54,6 +54,7 @@ struct txgbe_rx_desc {
 #define RTE_PMD_TXGBE_RX_MAX_BURST 32
 
 #define RTE_TXGBE_REGISTER_POLL_WAIT_10_MS  10
+#define RTE_TXGBE_WAIT_100_US               100
 
 /**
  * Structure associated with each descriptor of the RX ring of a RX queue.
@@ -62,6 +63,10 @@ struct txgbe_rx_entry {
 	struct rte_mbuf *mbuf; /**< mbuf associated with RX descriptor. */
 };
 
+struct txgbe_scattered_rx_entry {
+	struct rte_mbuf *fbuf; /**< First segment of the fragmented packet. */
+};
+
 /**
  * Structure associated with each RX queue.
  */
@@ -70,7 +75,16 @@ struct txgbe_rx_queue {
 	volatile struct txgbe_rx_desc *rx_ring; /**< RX ring virtual address. */
 	uint64_t            rx_ring_phys_addr; /**< RX ring DMA address. */
 	struct txgbe_rx_entry *sw_ring; /**< address of RX software ring. */
+	struct txgbe_scattered_rx_entry *sw_sc_ring; /**< address of scattered Rx software ring. */
+	struct rte_mbuf *pkt_first_seg; /**< First segment of current packet. */
+	struct rte_mbuf *pkt_last_seg; /**< Last segment of current packet. */
 	uint16_t            nb_rx_desc; /**< number of RX descriptors. */
+	uint16_t            rx_tail;  /**< current value of RDT register. */
+	uint16_t            nb_rx_hold; /**< number of held free RX desc. */
+	uint16_t rx_nb_avail; /**< nr of staged pkts ready to ret to app */
+	uint16_t rx_next_avail; /**< idx of next staged pkt to ret to app */
+	uint16_t rx_free_trigger; /**< triggers rx buffer allocation */
+	uint16_t            rx_free_thresh; /**< max free RX desc to hold. */
 	uint16_t            queue_id; /**< RX queue index. */
 	uint16_t            reg_idx;  /**< RX queue register index. */
 	uint16_t            port_id;  /**< Device port identifier. */
@@ -78,6 +92,10 @@ struct txgbe_rx_queue {
 	uint8_t             drop_en;  /**< If not 0, set SRRCTL.Drop_En. */
 	uint8_t             rx_deferred_start; /**< not in global dev start. */
 	uint64_t	    offloads; /**< Rx offloads with DEV_RX_OFFLOAD_* */
+	/** need to alloc dummy mbuf, for wraparound when scanning hw ring */
+	struct rte_mbuf fake_mbuf;
+	/** hold packets to return to application */
+	struct rte_mbuf *rx_stage[RTE_PMD_TXGBE_RX_MAX_BURST * 2];
 };
 
 /**
@@ -94,9 +112,16 @@ struct txgbe_tx_queue {
 	uint8_t             hthresh;       /**< Host threshold register. */
 	uint8_t             wthresh;       /**< Write-back threshold reg. */
 	uint64_t offloads; /**< Tx offload flags of DEV_TX_OFFLOAD_* */
+	const struct txgbe_txq_ops *ops;       /**< txq ops */
 	uint8_t             tx_deferred_start; /**< not in global dev start. */
 };
 
+struct txgbe_txq_ops {
+	void (*release_mbufs)(struct txgbe_tx_queue *txq);
+	void (*free_swring)(struct txgbe_tx_queue *txq);
+	void (*reset)(struct txgbe_tx_queue *txq);
+};
+
 /* Takes an ethdev and a queue and sets up the tx function to be used based on
  * the queue parameters. Used in tx_queue_setup by primary process and then
  * in dev_init by secondary process when attaching to an existing ethdev.
-- 
2.18.4

^ permalink raw reply	[flat|nested] 49+ messages in thread

* [dpdk-dev] [PATCH v1 21/42] net/txgbe: add RX and TX queues setup
  2020-09-01 11:50 [dpdk-dev] [PATCH v1 01/42] net/txgbe: add build and doc infrastructure Jiawen Wu
                   ` (18 preceding siblings ...)
  2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 20/42] net/txgbe: add RX and TX stop Jiawen Wu
@ 2020-09-01 11:50 ` Jiawen Wu
  2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 22/42] net/txgbe: add packet type Jiawen Wu
                   ` (21 subsequent siblings)
  41 siblings, 0 replies; 49+ messages in thread
From: Jiawen Wu @ 2020-09-01 11:50 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Add setup functions for the receive and transmit queues.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/txgbe_ethdev.c |   2 +
 drivers/net/txgbe/txgbe_ethdev.h |   9 +
 drivers/net/txgbe/txgbe_rxtx.c   | 365 +++++++++++++++++++++++++++++++
 drivers/net/txgbe/txgbe_rxtx.h   |  44 ++++
 4 files changed, 420 insertions(+)

diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index 80470c6e7..d2a355524 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -1326,7 +1326,9 @@ static const struct eth_dev_ops txgbe_eth_dev_ops = {
 	.rx_queue_stop              = txgbe_dev_rx_queue_stop,
 	.tx_queue_start	            = txgbe_dev_tx_queue_start,
 	.tx_queue_stop              = txgbe_dev_tx_queue_stop,
+	.rx_queue_setup             = txgbe_dev_rx_queue_setup,
 	.rx_queue_release           = txgbe_dev_rx_queue_release,
+	.tx_queue_setup             = txgbe_dev_tx_queue_setup,
 	.tx_queue_release           = txgbe_dev_tx_queue_release,
 	.dev_led_on                 = txgbe_dev_led_on,
 	.dev_led_off                = txgbe_dev_led_off,
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index f5ee1cae6..d38021538 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -90,6 +90,15 @@ void txgbe_dev_rx_queue_release(void *rxq);
 
 void txgbe_dev_tx_queue_release(void *txq);
 
+int  txgbe_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
+		uint16_t nb_rx_desc, unsigned int socket_id,
+		const struct rte_eth_rxconf *rx_conf,
+		struct rte_mempool *mb_pool);
+
+int  txgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
+		uint16_t nb_tx_desc, unsigned int socket_id,
+		const struct rte_eth_txconf *tx_conf);
+
 int txgbe_dev_rx_init(struct rte_eth_dev *dev);
 
 void txgbe_dev_tx_init(struct rte_eth_dev *dev);
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index 58824045b..2288332ce 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -15,6 +15,8 @@
 
 #include <rte_ethdev.h>
 #include <rte_ethdev_driver.h>
+#include <rte_memzone.h>
+#include <rte_mempool.h>
 #include <rte_malloc.h>
 #include <rte_mbuf.h>
 
@@ -34,6 +36,10 @@ txgbe_xmit_pkts_simple(void *tx_queue, struct rte_mbuf **tx_pkts,
 	return 0;
 }
 
+#ifndef DEFAULT_TX_FREE_THRESH
+#define DEFAULT_TX_FREE_THRESH 32
+#endif
+
 uint16_t
 txgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		uint16_t nb_pkts)
@@ -104,6 +110,29 @@ txgbe_recv_pkts_lro_bulk_alloc(void *rx_queue, struct rte_mbuf **rx_pkts,
 	return txgbe_recv_pkts_lro(rx_queue, rx_pkts, nb_pkts, true);
 }
 
+static void __rte_cold
+txgbe_tx_queue_release_mbufs(struct txgbe_tx_queue *txq)
+{
+	unsigned i;
+
+	if (txq->sw_ring != NULL) {
+		for (i = 0; i < txq->nb_tx_desc; i++) {
+			if (txq->sw_ring[i].mbuf != NULL) {
+				rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
+				txq->sw_ring[i].mbuf = NULL;
+			}
+		}
+	}
+}
+
+static void __rte_cold
+txgbe_tx_free_swring(struct txgbe_tx_queue *txq)
+{
+	if (txq != NULL &&
+	    txq->sw_ring != NULL)
+		rte_free(txq->sw_ring);
+}
+
 static void __rte_cold
 txgbe_tx_queue_release(struct txgbe_tx_queue *txq)
 {
@@ -120,6 +149,11 @@ txgbe_dev_tx_queue_release(void *txq)
 	txgbe_tx_queue_release(txq);
 }
 
+static const struct txgbe_txq_ops def_txq_ops = {
+	.release_mbufs = txgbe_tx_queue_release_mbufs,
+	.free_swring = txgbe_tx_free_swring,
+};
+
 /* Takes an ethdev and a queue and sets up the tx function to be used based on
  * the queue parameters. Used in tx_queue_setup by primary process and then
  * in dev_init by secondary process when attaching to an existing ethdev.
@@ -147,6 +181,136 @@ txgbe_set_tx_function(struct rte_eth_dev *dev, struct txgbe_tx_queue *txq)
 	}
 }
 
+int __rte_cold
+txgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
+			 uint16_t queue_idx,
+			 uint16_t nb_desc,
+			 unsigned int socket_id,
+			 const struct rte_eth_txconf *tx_conf)
+{
+	const struct rte_memzone *tz;
+	struct txgbe_tx_queue *txq;
+	struct txgbe_hw     *hw;
+	uint16_t tx_free_thresh;
+	uint64_t offloads;
+
+	PMD_INIT_FUNC_TRACE();
+	hw = TXGBE_DEV_HW(dev);
+
+	offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
+
+	/*
+	 * Validate number of transmit descriptors.
+	 * It must not exceed hardware maximum, and must be multiple
+	 * of TXGBE_ALIGN.
+	 */
+	if (nb_desc % TXGBE_TXD_ALIGN != 0 ||
+	    (nb_desc > TXGBE_RING_DESC_MAX) ||
+	    (nb_desc < TXGBE_RING_DESC_MIN)) {
+		return -EINVAL;
+	}
+
+	/*
+	 * The TX descriptor ring will be cleaned after txq->tx_free_thresh
+	 * descriptors are used or if the number of descriptors required
+	 * to transmit a packet is greater than the number of free TX
+	 * descriptors.
+	 * One descriptor in the TX ring is used as a sentinel to avoid a
+	 * H/W race condition, hence the maximum threshold constraints.
+	 * When set to zero use default values.
+	 */
+	tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh) ?
+			tx_conf->tx_free_thresh : DEFAULT_TX_FREE_THRESH);
+	if (tx_free_thresh >= (nb_desc - 3)) {
+		PMD_INIT_LOG(ERR, "tx_free_thresh must be less than the number of "
+			     "TX descriptors minus 3. (tx_free_thresh=%u "
+			     "port=%d queue=%d)",
+			     (unsigned int)tx_free_thresh,
+			     (int)dev->data->port_id, (int)queue_idx);
+		return -EINVAL;
+	}
+
+	if ((nb_desc % tx_free_thresh) != 0) {
+		PMD_INIT_LOG(ERR, "tx_free_thresh must be a divisor of the "
+			     "number of TX descriptors. (tx_free_thresh=%u "
+			     "port=%d queue=%d)", (unsigned int)tx_free_thresh,
+			     (int)dev->data->port_id, (int)queue_idx);
+		return -EINVAL;
+	}
+
+	/* Free memory prior to re-allocation if needed... */
+	if (dev->data->tx_queues[queue_idx] != NULL) {
+		txgbe_tx_queue_release(dev->data->tx_queues[queue_idx]);
+		dev->data->tx_queues[queue_idx] = NULL;
+	}
+
+	/* First allocate the tx queue data structure */
+	txq = rte_zmalloc_socket("ethdev TX queue", sizeof(struct txgbe_tx_queue),
+				 RTE_CACHE_LINE_SIZE, socket_id);
+	if (txq == NULL)
+		return -ENOMEM;
+
+	/*
+	 * Allocate TX ring hardware descriptors. A memzone large enough to
+	 * handle the maximum ring size is allocated in order to allow for
+	 * resizing in later calls to the queue setup function.
+	 */
+	tz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx,
+			sizeof(struct txgbe_tx_desc) * TXGBE_RING_DESC_MAX,
+			TXGBE_ALIGN, socket_id);
+	if (tz == NULL) {
+		txgbe_tx_queue_release(txq);
+		return -ENOMEM;
+	}
+
+	txq->nb_tx_desc = nb_desc;
+	txq->tx_free_thresh = tx_free_thresh;
+	txq->pthresh = tx_conf->tx_thresh.pthresh;
+	txq->hthresh = tx_conf->tx_thresh.hthresh;
+	txq->wthresh = tx_conf->tx_thresh.wthresh;
+	txq->queue_id = queue_idx;
+	txq->reg_idx = (uint16_t)((RTE_ETH_DEV_SRIOV(dev).active == 0) ?
+		queue_idx : RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx + queue_idx);
+	txq->port_id = dev->data->port_id;
+	txq->offloads = offloads;
+	txq->ops = &def_txq_ops;
+	txq->tx_deferred_start = tx_conf->tx_deferred_start;
+
+	/*
+	 * Use the VF queue registers for the tail pointer if a VF is detected.
+	 */
+	if (hw->mac.type == txgbe_mac_raptor_vf) {
+		txq->tdt_reg_addr = TXGBE_REG_ADDR(hw, TXGBE_TXWP(queue_idx));
+		txq->tdc_reg_addr = TXGBE_REG_ADDR(hw, TXGBE_TXCFG(queue_idx));
+	} else {
+		txq->tdt_reg_addr = TXGBE_REG_ADDR(hw, TXGBE_TXWP(txq->reg_idx));
+		txq->tdc_reg_addr = TXGBE_REG_ADDR(hw, TXGBE_TXCFG(txq->reg_idx));
+	}
+
+	txq->tx_ring_phys_addr = TMZ_PADDR(tz);
+	txq->tx_ring = (struct txgbe_tx_desc *) TMZ_VADDR(tz);
+
+	/* Allocate software ring */
+	txq->sw_ring = rte_zmalloc_socket("txq->sw_ring",
+				sizeof(struct txgbe_tx_entry) * nb_desc,
+				RTE_CACHE_LINE_SIZE, socket_id);
+	if (txq->sw_ring == NULL) {
+		txgbe_tx_queue_release(txq);
+		return -ENOMEM;
+	}
+	PMD_INIT_LOG(DEBUG, "sw_ring=%p hw_ring=%p dma_addr=0x%"PRIx64,
+		     txq->sw_ring, txq->tx_ring, txq->tx_ring_phys_addr);
+
+	/* set up scalar TX function as appropriate */
+	txgbe_set_tx_function(dev, txq);
+
+	txq->ops->reset(txq);
+
+	dev->data->tx_queues[queue_idx] = txq;
+
+	return 0;
+}
+
 /**
  * txgbe_free_sc_cluster - free the not-yet-completed scattered cluster
  *
@@ -220,6 +384,50 @@ txgbe_dev_rx_queue_release(void *rxq)
 	txgbe_rx_queue_release(rxq);
 }
 
+/*
+ * Check if Rx Burst Bulk Alloc function can be used.
+ * Return
+ *        0: the preconditions are satisfied and the bulk allocation function
+ *           can be used.
+ *  -EINVAL: the preconditions are NOT satisfied and the default Rx burst
+ *           function must be used.
+ */
+static inline int __rte_cold
+check_rx_burst_bulk_alloc_preconditions(struct txgbe_rx_queue *rxq)
+{
+	int ret = 0;
+
+	/*
+	 * Make sure the following pre-conditions are satisfied:
+	 *   rxq->rx_free_thresh >= RTE_PMD_TXGBE_RX_MAX_BURST
+	 *   rxq->rx_free_thresh < rxq->nb_rx_desc
+	 *   (rxq->nb_rx_desc % rxq->rx_free_thresh) == 0
+	 * Scattered packets are not supported.  This should be checked
+	 * outside of this function.
+	 */
+	if (!(rxq->rx_free_thresh >= RTE_PMD_TXGBE_RX_MAX_BURST)) {
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: "
+			     "rxq->rx_free_thresh=%d, "
+			     "RTE_PMD_TXGBE_RX_MAX_BURST=%d",
+			     rxq->rx_free_thresh, RTE_PMD_TXGBE_RX_MAX_BURST);
+		ret = -EINVAL;
+	} else if (!(rxq->rx_free_thresh < rxq->nb_rx_desc)) {
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: "
+			     "rxq->rx_free_thresh=%d, "
+			     "rxq->nb_rx_desc=%d",
+			     rxq->rx_free_thresh, rxq->nb_rx_desc);
+		ret = -EINVAL;
+	} else if (!((rxq->nb_rx_desc % rxq->rx_free_thresh) == 0)) {
+		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: "
+			     "rxq->nb_rx_desc=%d, "
+			     "rxq->rx_free_thresh=%d",
+			     rxq->nb_rx_desc, rxq->rx_free_thresh);
+		ret = -EINVAL;
+	}
+
+	return ret;
+}
+
 /* Reset dynamic txgbe_rx_queue fields back to defaults */
 static void __rte_cold
 txgbe_reset_rx_queue(struct txgbe_adapter *adapter, struct txgbe_rx_queue *rxq)
@@ -265,6 +473,163 @@ txgbe_reset_rx_queue(struct txgbe_adapter *adapter, struct txgbe_rx_queue *rxq)
 
 }
 
+int __rte_cold
+txgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
+			 uint16_t queue_idx,
+			 uint16_t nb_desc,
+			 unsigned int socket_id,
+			 const struct rte_eth_rxconf *rx_conf,
+			 struct rte_mempool *mp)
+{
+	const struct rte_memzone *rz;
+	struct txgbe_rx_queue *rxq;
+	struct txgbe_hw     *hw;
+	uint16_t len;
+	struct txgbe_adapter *adapter = TXGBE_DEV_ADAPTER(dev);
+	uint64_t offloads;
+
+	PMD_INIT_FUNC_TRACE();
+	hw = TXGBE_DEV_HW(dev);
+
+	offloads = rx_conf->offloads | dev->data->dev_conf.rxmode.offloads;
+
+	/*
+	 * Validate number of receive descriptors.
+	 * It must not exceed the hardware maximum and must be a multiple
+	 * of TXGBE_RXD_ALIGN.
+	 */
+	if (nb_desc % TXGBE_RXD_ALIGN != 0 ||
+			(nb_desc > TXGBE_RING_DESC_MAX) ||
+			(nb_desc < TXGBE_RING_DESC_MIN)) {
+		return -EINVAL;
+	}
+
+	/* Free memory prior to re-allocation if needed... */
+	if (dev->data->rx_queues[queue_idx] != NULL) {
+		txgbe_rx_queue_release(dev->data->rx_queues[queue_idx]);
+		dev->data->rx_queues[queue_idx] = NULL;
+	}
+
+	/* First allocate the rx queue data structure */
+	rxq = rte_zmalloc_socket("ethdev RX queue", sizeof(struct txgbe_rx_queue),
+				 RTE_CACHE_LINE_SIZE, socket_id);
+	if (rxq == NULL)
+		return -ENOMEM;
+	rxq->mb_pool = mp;
+	rxq->nb_rx_desc = nb_desc;
+	rxq->rx_free_thresh = rx_conf->rx_free_thresh;
+	rxq->queue_id = queue_idx;
+	rxq->reg_idx = (uint16_t)((RTE_ETH_DEV_SRIOV(dev).active == 0) ?
+		queue_idx : RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx + queue_idx);
+	rxq->port_id = dev->data->port_id;
+	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		rxq->crc_len = RTE_ETHER_CRC_LEN;
+	else
+		rxq->crc_len = 0;
+	rxq->drop_en = rx_conf->rx_drop_en;
+	rxq->rx_deferred_start = rx_conf->rx_deferred_start;
+	rxq->offloads = offloads;
+
+	/*
+	 * The packet type in the RX descriptor differs between NICs,
+	 * so set a per-NIC mask.
+	 */
+	rxq->pkt_type_mask = TXGBE_PTID_MASK;
+
+	/*
+	 * Allocate RX ring hardware descriptors. A memzone large enough to
+	 * handle the maximum ring size is allocated in order to allow for
+	 * resizing in later calls to the queue setup function.
+	 */
+	rz = rte_eth_dma_zone_reserve(dev, "rx_ring", queue_idx,
+				      RX_RING_SZ, TXGBE_ALIGN, socket_id);
+	if (rz == NULL) {
+		txgbe_rx_queue_release(rxq);
+		return -ENOMEM;
+	}
+
+	/*
+	 * Zero init all the descriptors in the ring.
+	 */
+	memset(rz->addr, 0, RX_RING_SZ);
+
+	/*
+	 * Set up the RX tail/head register addresses: VFs index the
+	 * registers by queue_idx, the PF by the queue's reg_idx.
+	 */
+	if (hw->mac.type == txgbe_mac_raptor_vf) {
+		rxq->rdt_reg_addr =
+			TXGBE_REG_ADDR(hw, TXGBE_RXWP(queue_idx));
+		rxq->rdh_reg_addr =
+			TXGBE_REG_ADDR(hw, TXGBE_RXRP(queue_idx));
+	} else {
+		rxq->rdt_reg_addr =
+			TXGBE_REG_ADDR(hw, TXGBE_RXWP(rxq->reg_idx));
+		rxq->rdh_reg_addr =
+			TXGBE_REG_ADDR(hw, TXGBE_RXRP(rxq->reg_idx));
+	}
+
+	rxq->rx_ring_phys_addr = TMZ_PADDR(rz);
+	rxq->rx_ring = (struct txgbe_rx_desc *)TMZ_VADDR(rz);
+
+	/*
+	 * Certain constraints must be met in order to use the bulk buffer
+	 * allocation Rx burst function. If any Rx queue does not meet them,
+	 * the feature is disabled for the whole port.
+	 */
+	if (check_rx_burst_bulk_alloc_preconditions(rxq)) {
+		PMD_INIT_LOG(DEBUG, "queue[%d] doesn't meet Rx Bulk Alloc "
+				    "preconditions - canceling the feature for "
+				    "the whole port[%d]",
+			     rxq->queue_id, rxq->port_id);
+		adapter->rx_bulk_alloc_allowed = false;
+	}
+
+	/*
+	 * Allocate software ring. Allow for space at the end of the
+	 * S/W ring to make sure look-ahead logic in bulk alloc Rx burst
+	 * function does not access an invalid memory region.
+	 */
+	len = nb_desc;
+	if (adapter->rx_bulk_alloc_allowed)
+		len += RTE_PMD_TXGBE_RX_MAX_BURST;
+
+	rxq->sw_ring = rte_zmalloc_socket("rxq->sw_ring",
+					  sizeof(struct txgbe_rx_entry) * len,
+					  RTE_CACHE_LINE_SIZE, socket_id);
+	if (!rxq->sw_ring) {
+		txgbe_rx_queue_release(rxq);
+		return -ENOMEM;
+	}
+
+	/*
+	 * Always allocate even if it's not going to be needed in order to
+	 * simplify the code.
+	 *
+	 * This ring is used in LRO and Scattered Rx cases and Scattered Rx may
+	 * be requested in txgbe_dev_rx_init(), which is called later from
+	 * dev_start() flow.
+	 */
+	rxq->sw_sc_ring =
+		rte_zmalloc_socket("rxq->sw_sc_ring",
+				   sizeof(struct txgbe_scattered_rx_entry) * len,
+				   RTE_CACHE_LINE_SIZE, socket_id);
+	if (!rxq->sw_sc_ring) {
+		txgbe_rx_queue_release(rxq);
+		return -ENOMEM;
+	}
+
+	PMD_INIT_LOG(DEBUG, "sw_ring=%p sw_sc_ring=%p hw_ring=%p "
+			    "dma_addr=0x%"PRIx64,
+		     rxq->sw_ring, rxq->sw_sc_ring, rxq->rx_ring,
+		     rxq->rx_ring_phys_addr);
+
+	dev->data->rx_queues[queue_idx] = rxq;
+
+	txgbe_reset_rx_queue(adapter, rxq);
+
+	return 0;
+}
+
 void __rte_cold
 txgbe_dev_clear_queues(struct rte_eth_dev *dev)
 {
diff --git a/drivers/net/txgbe/txgbe_rxtx.h b/drivers/net/txgbe/txgbe_rxtx.h
index 72cbf1f87..763ce3439 100644
--- a/drivers/net/txgbe/txgbe_rxtx.h
+++ b/drivers/net/txgbe/txgbe_rxtx.h
@@ -50,12 +50,26 @@ struct txgbe_rx_desc {
 #define TXGBE_RXD_HDRADDR(rxd, v)  \
 	(((volatile __le64 *)(rxd))[1] = cpu_to_le64(v))
 
+/**
+ * Transmit Data Descriptor (TXGBE_TXD_TYP=DATA)
+ **/
+struct txgbe_tx_desc {
+	__le64 qw0; /* r.buffer_addr ,  w.reserved    */
+	__le32 dw2; /* r.cmd_type_len,  w.nxtseq_seed */
+	__le32 dw3; /* r.olinfo_status, w.status      */
+};
+
 #define RTE_PMD_TXGBE_TX_MAX_BURST 32
 #define RTE_PMD_TXGBE_RX_MAX_BURST 32
 
+#define RX_RING_SZ ((TXGBE_RING_DESC_MAX + RTE_PMD_TXGBE_RX_MAX_BURST) * \
+		    sizeof(struct txgbe_rx_desc))
+
 #define RTE_TXGBE_REGISTER_POLL_WAIT_10_MS  10
 #define RTE_TXGBE_WAIT_100_US               100
 
+#define TXGBE_PTID_MASK                 0xFF
+
 /**
  * Structure associated with each descriptor of the RX ring of a RX queue.
  */
@@ -67,6 +81,22 @@ struct txgbe_scattered_rx_entry {
 	struct rte_mbuf *fbuf; /**< First segment of the fragmented packet. */
 };
 
+/**
+ * Structure associated with each descriptor of the TX ring of a TX queue.
+ */
+struct txgbe_tx_entry {
+	struct rte_mbuf *mbuf; /**< mbuf associated with TX desc, if any. */
+	uint16_t next_id; /**< Index of next descriptor in ring. */
+	uint16_t last_id; /**< Index of last scattered descriptor. */
+};
+
+/**
+ * Structure associated with each descriptor of the TX ring of a TX queue,
+ * used by the vector PMD.
+ */
+struct txgbe_tx_entry_v {
+	struct rte_mbuf *mbuf; /**< mbuf associated with TX desc, if any. */
+};
+
 /**
  * Structure associated with each RX queue.
  */
@@ -74,6 +104,8 @@ struct txgbe_rx_queue {
 	struct rte_mempool  *mb_pool; /**< mbuf pool to populate RX ring. */
 	volatile struct txgbe_rx_desc *rx_ring; /**< RX ring virtual address. */
 	uint64_t            rx_ring_phys_addr; /**< RX ring DMA address. */
+	volatile uint32_t   *rdt_reg_addr; /**< RDT register address. */
+	volatile uint32_t   *rdh_reg_addr; /**< RDH register address. */
 	struct txgbe_rx_entry *sw_ring; /**< address of RX software ring. */
 	struct txgbe_scattered_rx_entry *sw_sc_ring; /**< address of scattered Rx software ring. */
 	struct rte_mbuf *pkt_first_seg; /**< First segment of current packet. */
@@ -87,6 +119,7 @@ struct txgbe_rx_queue {
 	uint16_t            rx_free_thresh; /**< max free RX desc to hold. */
 	uint16_t            queue_id; /**< RX queue index. */
 	uint16_t            reg_idx;  /**< RX queue register index. */
+	uint16_t            pkt_type_mask;  /**< Packet type mask for different NICs. */
 	uint16_t            port_id;  /**< Device port identifier. */
 	uint8_t             crc_len;  /**< 0 if CRC stripped, 4 otherwise. */
 	uint8_t             drop_en;  /**< If not 0, set SRRCTL.Drop_En. */
@@ -102,13 +135,24 @@ struct txgbe_rx_queue {
  * Structure associated with each TX queue.
  */
 struct txgbe_tx_queue {
+	/** TX ring virtual address. */
+	volatile struct txgbe_tx_desc *tx_ring;
 	uint64_t            tx_ring_phys_addr; /**< TX ring DMA address. */
+	union {
+		struct txgbe_tx_entry *sw_ring; /**< address of SW ring for scalar PMD. */
+		struct txgbe_tx_entry_v *sw_ring_v; /**< address of SW ring for vector PMD */
+	};
+	volatile uint32_t   *tdt_reg_addr; /**< Address of TDT register. */
+	volatile uint32_t   *tdc_reg_addr; /**< Address of TDC register. */
 	uint16_t            nb_tx_desc;    /**< number of TX descriptors. */
 	uint16_t            tx_tail;       /**< current value of TDT reg. */
 	/**< Start freeing TX buffers if there are less free descriptors than
 	     this value. */
 	uint16_t            tx_free_thresh;
+	uint16_t            queue_id;      /**< TX queue index. */
 	uint16_t            reg_idx;       /**< TX queue register index. */
+	uint16_t            port_id;       /**< Device port identifier. */
+	uint8_t             pthresh;       /**< Prefetch threshold register. */
 	uint8_t             hthresh;       /**< Host threshold register. */
 	uint8_t             wthresh;       /**< Write-back threshold reg. */
 	uint64_t offloads; /**< Tx offload flags of DEV_TX_OFFLOAD_* */
-- 
2.18.4

^ permalink raw reply	[flat|nested] 49+ messages in thread

* [dpdk-dev] [PATCH v1 22/42] net/txgbe: add packet type
  2020-09-01 11:50 [dpdk-dev] [PATCH v1 01/42] net/txgbe: add build and doc infrastructure Jiawen Wu
                   ` (19 preceding siblings ...)
  2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 21/42] net/txgbe: add RX and TX queues setup Jiawen Wu
@ 2020-09-01 11:50 ` Jiawen Wu
  2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 23/42] net/txgbe: fill simple transmit function Jiawen Wu
                   ` (20 subsequent siblings)
  41 siblings, 0 replies; 49+ messages in thread
From: Jiawen Wu @ 2020-09-01 11:50 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Add packet type macro definitions and convert ptype to ptid.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/meson.build    |   1 +
 drivers/net/txgbe/txgbe_ethdev.h |   1 +
 drivers/net/txgbe/txgbe_ptypes.c | 676 +++++++++++++++++++++++++++++++
 drivers/net/txgbe/txgbe_ptypes.h | 351 ++++++++++++++++
 drivers/net/txgbe/txgbe_rxtx.h   |   2 -
 5 files changed, 1029 insertions(+), 2 deletions(-)
 create mode 100644 drivers/net/txgbe/txgbe_ptypes.c
 create mode 100644 drivers/net/txgbe/txgbe_ptypes.h

diff --git a/drivers/net/txgbe/meson.build b/drivers/net/txgbe/meson.build
index 88b05ad83..beb2052b0 100644
--- a/drivers/net/txgbe/meson.build
+++ b/drivers/net/txgbe/meson.build
@@ -8,6 +8,7 @@ objs = [base_objs]
 
 sources = files(
 	'txgbe_ethdev.c',
+	'txgbe_ptypes.c',
 	'txgbe_pf.c',
 	'txgbe_rxtx.c',
 	'txgbe_vf_representor.c',
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index d38021538..be6876823 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -8,6 +8,7 @@
 #include <stdint.h>
 
 #include "base/txgbe.h"
+#include "txgbe_ptypes.h"
 #include <rte_pci.h>
 #include <rte_bus_pci.h>
 #include <rte_tm_driver.h>
diff --git a/drivers/net/txgbe/txgbe_ptypes.c b/drivers/net/txgbe/txgbe_ptypes.c
new file mode 100644
index 000000000..e76b4001d
--- /dev/null
+++ b/drivers/net/txgbe/txgbe_ptypes.c
@@ -0,0 +1,676 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2015-2020
+ */
+
+#include <rte_mbuf.h>
+#include <rte_memory.h>
+
+#include "base/txgbe_type.h"
+#include "txgbe_ptypes.h"
+
+/* The txgbe_ptype_lookup is used to convert from the 8-bit ptid in the
+ * hardware to a bit-field that can be used by SW to more easily determine the
+ * packet type.
+ *
+ * Macros are used to shorten the table lines and make this table human
+ * readable.
+ *
+ * We store the PTYPE in the top byte of the bit field - this is just so that
+ * we can check that the table doesn't have a row missing, as the index into
+ * the table should be the PTYPE.
+ *
+ * Typical work flow:
+ *
+ * IF NOT txgbe_ptype_lookup[ptid].known
+ * THEN
+ *      Packet is unknown
+ * ELSE IF txgbe_ptype_lookup[ptid].mac == TXGBE_DEC_PTYPE_MAC_IP
+ *      Use the rest of the fields to look at the tunnels, inner protocols, etc
+ * ELSE
+ *      Use the enum txgbe_l2_ptypes to decode the packet type
+ * ENDIF
+ */
+#define TPTE(ptid, l2, l3, l4, tun, el2, el3, el4) \
+  [ptid] = (RTE_PTYPE_L2_##l2 | \
+		RTE_PTYPE_L3_##l3 | \
+		RTE_PTYPE_L4_##l4 | \
+		RTE_PTYPE_TUNNEL_##tun | \
+		RTE_PTYPE_INNER_L2_##el2 | \
+		RTE_PTYPE_INNER_L3_##el3 | \
+		RTE_PTYPE_INNER_L4_##el4)
+
+#define RTE_PTYPE_L2_NONE               0
+#define RTE_PTYPE_L3_NONE               0
+#define RTE_PTYPE_L4_NONE               0
+#define RTE_PTYPE_TUNNEL_NONE           0
+#define RTE_PTYPE_INNER_L2_NONE         0
+#define RTE_PTYPE_INNER_L3_NONE         0
+#define RTE_PTYPE_INNER_L4_NONE         0
+
+static u32 txgbe_ptype_lookup[TXGBE_PTID_MAX] __rte_cache_aligned = {
+  /* L2:0-3 L3:4-7 L4:8-11 TUN:12-15 EL2:16-19 EL3:20-23 EL4:24-27 */
+  /* L2: ETH */
+  TPTE(0x11, ETHER,          NONE, NONE, NONE, NONE, NONE, NONE),
+  TPTE(0x12, ETHER_TIMESYNC, NONE, NONE, NONE, NONE, NONE, NONE),
+  TPTE(0x13, ETHER_FIP,      NONE, NONE, NONE, NONE, NONE, NONE),
+  TPTE(0x14, ETHER_LLDP,     NONE, NONE, NONE, NONE, NONE, NONE),
+  TPTE(0x15, ETHER_CNM,      NONE, NONE, NONE, NONE, NONE, NONE),
+  TPTE(0x16, ETHER_EAPOL,    NONE, NONE, NONE, NONE, NONE, NONE),
+  TPTE(0x17, ETHER_ARP,      NONE, NONE, NONE, NONE, NONE, NONE),
+  /* L2: Ethertype Filter */
+  TPTE(0x18, ETHER_FILTER,   NONE, NONE, NONE, NONE, NONE, NONE),
+  TPTE(0x19, ETHER_FILTER,   NONE, NONE, NONE, NONE, NONE, NONE),
+  TPTE(0x1A, ETHER_FILTER,   NONE, NONE, NONE, NONE, NONE, NONE),
+  TPTE(0x1B, ETHER_FILTER,   NONE, NONE, NONE, NONE, NONE, NONE),
+  TPTE(0x1C, ETHER_FILTER,   NONE, NONE, NONE, NONE, NONE, NONE),
+  TPTE(0x1D, ETHER_FILTER,   NONE, NONE, NONE, NONE, NONE, NONE),
+  TPTE(0x1E, ETHER_FILTER,   NONE, NONE, NONE, NONE, NONE, NONE),
+  TPTE(0x1F, ETHER_FILTER,   NONE, NONE, NONE, NONE, NONE, NONE),
+  /* L3: IP */
+  TPTE(0x21, ETHER, IPV4, FRAG,    NONE, NONE, NONE, NONE),
+  TPTE(0x22, ETHER, IPV4, NONFRAG, NONE, NONE, NONE, NONE),
+  TPTE(0x23, ETHER, IPV4, UDP,     NONE, NONE, NONE, NONE),
+  TPTE(0x24, ETHER, IPV4, TCP,     NONE, NONE, NONE, NONE),
+  TPTE(0x25, ETHER, IPV4, SCTP,    NONE, NONE, NONE, NONE),
+  TPTE(0x29, ETHER, IPV6, FRAG,    NONE, NONE, NONE, NONE),
+  TPTE(0x2A, ETHER, IPV6, NONFRAG, NONE, NONE, NONE, NONE),
+  TPTE(0x2B, ETHER, IPV6, UDP,     NONE, NONE, NONE, NONE),
+  TPTE(0x2C, ETHER, IPV6, TCP,     NONE, NONE, NONE, NONE),
+  TPTE(0x2D, ETHER, IPV6, SCTP,    NONE, NONE, NONE, NONE),
+  /* L2: FCoE */
+  TPTE(0x30, ETHER_FCOE, NONE, NONE, NONE, NONE, NONE, NONE),
+  TPTE(0x31, ETHER_FCOE, NONE, NONE, NONE, NONE, NONE, NONE),
+  TPTE(0x32, ETHER_FCOE, NONE, NONE, NONE, NONE, NONE, NONE),
+  TPTE(0x33, ETHER_FCOE, NONE, NONE, NONE, NONE, NONE, NONE),
+  TPTE(0x34, ETHER_FCOE, NONE, NONE, NONE, NONE, NONE, NONE),
+  TPTE(0x35, ETHER_FCOE, NONE, NONE, NONE, NONE, NONE, NONE),
+  TPTE(0x36, ETHER_FCOE, NONE, NONE, NONE, NONE, NONE, NONE),
+  TPTE(0x37, ETHER_FCOE, NONE, NONE, NONE, NONE, NONE, NONE),
+  TPTE(0x38, ETHER_FCOE, NONE, NONE, NONE, NONE, NONE, NONE),
+  TPTE(0x39, ETHER_FCOE, NONE, NONE, NONE, NONE, NONE, NONE),
+  /* IPv4 -> IPv4/IPv6 */
+  TPTE(0x81, ETHER, IPV4, NONE, IP, NONE, IPV4, FRAG),
+  TPTE(0x82, ETHER, IPV4, NONE, IP, NONE, IPV4, NONFRAG),
+  TPTE(0x83, ETHER, IPV4, NONE, IP, NONE, IPV4, UDP),
+  TPTE(0x84, ETHER, IPV4, NONE, IP, NONE, IPV4, TCP),
+  TPTE(0x85, ETHER, IPV4, NONE, IP, NONE, IPV4, SCTP),
+  TPTE(0x89, ETHER, IPV4, NONE, IP, NONE, IPV6, FRAG),
+  TPTE(0x8A, ETHER, IPV4, NONE, IP, NONE, IPV6, NONFRAG),
+  TPTE(0x8B, ETHER, IPV4, NONE, IP, NONE, IPV6, UDP),
+  TPTE(0x8C, ETHER, IPV4, NONE, IP, NONE, IPV6, TCP),
+  TPTE(0x8D, ETHER, IPV4, NONE, IP, NONE, IPV6, SCTP),
+  /* IPv4 -> GRE/Teredo/VXLAN -> NONE/IPv4/IPv6 */
+  TPTE(0x90, ETHER, IPV4, NONE, GRENAT, NONE, NONE,  NONE),
+  TPTE(0x91, ETHER, IPV4, NONE, GRENAT, NONE, IPV4, FRAG),
+  TPTE(0x92, ETHER, IPV4, NONE, GRENAT, NONE, IPV4, NONFRAG),
+  TPTE(0x93, ETHER, IPV4, NONE, GRENAT, NONE, IPV4, UDP),
+  TPTE(0x94, ETHER, IPV4, NONE, GRENAT, NONE, IPV4, TCP),
+  TPTE(0x95, ETHER, IPV4, NONE, GRENAT, NONE, IPV4, SCTP),
+  TPTE(0x99, ETHER, IPV4, NONE, GRENAT, NONE, IPV6, FRAG),
+  TPTE(0x9A, ETHER, IPV4, NONE, GRENAT, NONE, IPV6, NONFRAG),
+  TPTE(0x9B, ETHER, IPV4, NONE, GRENAT, NONE, IPV6, UDP),
+  TPTE(0x9C, ETHER, IPV4, NONE, GRENAT, NONE, IPV6, TCP),
+  TPTE(0x9D, ETHER, IPV4, NONE, GRENAT, NONE, IPV6, SCTP),
+  /* IPv4 -> GRE/Teredo/VXLAN -> MAC -> NONE/IPv4/IPv6 */
+  TPTE(0xA0, ETHER, IPV4, NONE, GRENAT, ETHER, NONE,  NONE),
+  TPTE(0xA1, ETHER, IPV4, NONE, GRENAT, ETHER, IPV4, FRAG),
+  TPTE(0xA2, ETHER, IPV4, NONE, GRENAT, ETHER, IPV4, NONFRAG),
+  TPTE(0xA3, ETHER, IPV4, NONE, GRENAT, ETHER, IPV4, UDP),
+  TPTE(0xA4, ETHER, IPV4, NONE, GRENAT, ETHER, IPV4, TCP),
+  TPTE(0xA5, ETHER, IPV4, NONE, GRENAT, ETHER, IPV4, SCTP),
+  TPTE(0xA9, ETHER, IPV4, NONE, GRENAT, ETHER, IPV6, FRAG),
+  TPTE(0xAA, ETHER, IPV4, NONE, GRENAT, ETHER, IPV6, NONFRAG),
+  TPTE(0xAB, ETHER, IPV4, NONE, GRENAT, ETHER, IPV6, UDP),
+  TPTE(0xAC, ETHER, IPV4, NONE, GRENAT, ETHER, IPV6, TCP),
+  TPTE(0xAD, ETHER, IPV4, NONE, GRENAT, ETHER, IPV6, SCTP),
+  /* IPv4 -> GRE/Teredo/VXLAN -> MAC+VLAN -> NONE/IPv4/IPv6 */
+  TPTE(0xB0, ETHER, IPV4, NONE, GRENAT, ETHER_VLAN, NONE,  NONE),
+  TPTE(0xB1, ETHER, IPV4, NONE, GRENAT, ETHER_VLAN, IPV4, FRAG),
+  TPTE(0xB2, ETHER, IPV4, NONE, GRENAT, ETHER_VLAN, IPV4, NONFRAG),
+  TPTE(0xB3, ETHER, IPV4, NONE, GRENAT, ETHER_VLAN, IPV4, UDP),
+  TPTE(0xB4, ETHER, IPV4, NONE, GRENAT, ETHER_VLAN, IPV4, TCP),
+  TPTE(0xB5, ETHER, IPV4, NONE, GRENAT, ETHER_VLAN, IPV4, SCTP),
+  TPTE(0xB9, ETHER, IPV4, NONE, GRENAT, ETHER_VLAN, IPV6, FRAG),
+  TPTE(0xBA, ETHER, IPV4, NONE, GRENAT, ETHER_VLAN, IPV6, NONFRAG),
+  TPTE(0xBB, ETHER, IPV4, NONE, GRENAT, ETHER_VLAN, IPV6, UDP),
+  TPTE(0xBC, ETHER, IPV4, NONE, GRENAT, ETHER_VLAN, IPV6, TCP),
+  TPTE(0xBD, ETHER, IPV4, NONE, GRENAT, ETHER_VLAN, IPV6, SCTP),
+  /* IPv6 -> IPv4/IPv6 */
+  TPTE(0xC1, ETHER, IPV6, NONE, IP, NONE, IPV4, FRAG),
+  TPTE(0xC2, ETHER, IPV6, NONE, IP, NONE, IPV4, NONFRAG),
+  TPTE(0xC3, ETHER, IPV6, NONE, IP, NONE, IPV4, UDP),
+  TPTE(0xC4, ETHER, IPV6, NONE, IP, NONE, IPV4, TCP),
+  TPTE(0xC5, ETHER, IPV6, NONE, IP, NONE, IPV4, SCTP),
+  TPTE(0xC9, ETHER, IPV6, NONE, IP, NONE, IPV6, FRAG),
+  TPTE(0xCA, ETHER, IPV6, NONE, IP, NONE, IPV6, NONFRAG),
+  TPTE(0xCB, ETHER, IPV6, NONE, IP, NONE, IPV6, UDP),
+  TPTE(0xCC, ETHER, IPV6, NONE, IP, NONE, IPV6, TCP),
+  TPTE(0xCD, ETHER, IPV6, NONE, IP, NONE, IPV6, SCTP),
+  /* IPv6 -> GRE/Teredo/VXLAN -> NONE/IPv4/IPv6 */
+  TPTE(0xD0, ETHER, IPV6, NONE, GRENAT, NONE, NONE,  NONE),
+  TPTE(0xD1, ETHER, IPV6, NONE, GRENAT, NONE, IPV4, FRAG),
+  TPTE(0xD2, ETHER, IPV6, NONE, GRENAT, NONE, IPV4, NONFRAG),
+  TPTE(0xD3, ETHER, IPV6, NONE, GRENAT, NONE, IPV4, UDP),
+  TPTE(0xD4, ETHER, IPV6, NONE, GRENAT, NONE, IPV4, TCP),
+  TPTE(0xD5, ETHER, IPV6, NONE, GRENAT, NONE, IPV4, SCTP),
+  TPTE(0xD9, ETHER, IPV6, NONE, GRENAT, NONE, IPV6, FRAG),
+  TPTE(0xDA, ETHER, IPV6, NONE, GRENAT, NONE, IPV6, NONFRAG),
+  TPTE(0xDB, ETHER, IPV6, NONE, GRENAT, NONE, IPV6, UDP),
+  TPTE(0xDC, ETHER, IPV6, NONE, GRENAT, NONE, IPV6, TCP),
+  TPTE(0xDD, ETHER, IPV6, NONE, GRENAT, NONE, IPV6, SCTP),
+  /* IPv6 -> GRE/Teredo/VXLAN -> MAC -> NONE/IPv4/IPv6 */
+  TPTE(0xE0, ETHER, IPV6, NONE, GRENAT, ETHER, NONE,  NONE),
+  TPTE(0xE1, ETHER, IPV6, NONE, GRENAT, ETHER, IPV4, FRAG),
+  TPTE(0xE2, ETHER, IPV6, NONE, GRENAT, ETHER, IPV4, NONFRAG),
+  TPTE(0xE3, ETHER, IPV6, NONE, GRENAT, ETHER, IPV4, UDP),
+  TPTE(0xE4, ETHER, IPV6, NONE, GRENAT, ETHER, IPV4, TCP),
+  TPTE(0xE5, ETHER, IPV6, NONE, GRENAT, ETHER, IPV4, SCTP),
+  TPTE(0xE9, ETHER, IPV6, NONE, GRENAT, ETHER, IPV6, FRAG),
+  TPTE(0xEA, ETHER, IPV6, NONE, GRENAT, ETHER, IPV6, NONFRAG),
+  TPTE(0xEB, ETHER, IPV6, NONE, GRENAT, ETHER, IPV6, UDP),
+  TPTE(0xEC, ETHER, IPV6, NONE, GRENAT, ETHER, IPV6, TCP),
+  TPTE(0xED, ETHER, IPV6, NONE, GRENAT, ETHER, IPV6, SCTP),
+  /* IPv6 -> GRE/Teredo/VXLAN -> MAC+VLAN -> NONE/IPv4/IPv6 */
+  TPTE(0xF0, ETHER, IPV6, NONE, GRENAT, ETHER_VLAN, NONE,  NONE),
+  TPTE(0xF1, ETHER, IPV6, NONE, GRENAT, ETHER_VLAN, IPV4, FRAG),
+  TPTE(0xF2, ETHER, IPV6, NONE, GRENAT, ETHER_VLAN, IPV4, NONFRAG),
+  TPTE(0xF3, ETHER, IPV6, NONE, GRENAT, ETHER_VLAN, IPV4, UDP),
+  TPTE(0xF4, ETHER, IPV6, NONE, GRENAT, ETHER_VLAN, IPV4, TCP),
+  TPTE(0xF5, ETHER, IPV6, NONE, GRENAT, ETHER_VLAN, IPV4, SCTP),
+  TPTE(0xF9, ETHER, IPV6, NONE, GRENAT, ETHER_VLAN, IPV6, FRAG),
+  TPTE(0xFA, ETHER, IPV6, NONE, GRENAT, ETHER_VLAN, IPV6, NONFRAG),
+  TPTE(0xFB, ETHER, IPV6, NONE, GRENAT, ETHER_VLAN, IPV6, UDP),
+  TPTE(0xFC, ETHER, IPV6, NONE, GRENAT, ETHER_VLAN, IPV6, TCP),
+  TPTE(0xFD, ETHER, IPV6, NONE, GRENAT, ETHER_VLAN, IPV6, SCTP),
+};
+
+u32 *txgbe_get_supported_ptypes(void)
+{
+	static u32 ptypes[] = {
+		/* For non-vec functions,
+		 * refer to txgbe_rxd_pkt_info_to_pkt_type();
+		 * for vec functions,
+		 * refer to _recv_raw_pkts_vec().
+		 */
+		RTE_PTYPE_L2_ETHER,
+		RTE_PTYPE_L3_IPV4,
+		RTE_PTYPE_L3_IPV4_EXT,
+		RTE_PTYPE_L3_IPV6,
+		RTE_PTYPE_L3_IPV6_EXT,
+		RTE_PTYPE_L4_SCTP,
+		RTE_PTYPE_L4_TCP,
+		RTE_PTYPE_L4_UDP,
+		RTE_PTYPE_TUNNEL_IP,
+		RTE_PTYPE_INNER_L3_IPV6,
+		RTE_PTYPE_INNER_L3_IPV6_EXT,
+		RTE_PTYPE_INNER_L4_TCP,
+		RTE_PTYPE_INNER_L4_UDP,
+		RTE_PTYPE_UNKNOWN
+	};
+
+	return ptypes;
+}
+
+static inline u8
+txgbe_encode_ptype_fcoe(u32 ptype)
+{
+	u8 ptid;
+
+	UNREFERENCED_PARAMETER(ptype);
+	ptid = TXGBE_PTID_PKT_FCOE;
+
+	return ptid;
+}
+
+static inline u8
+txgbe_encode_ptype_mac(u32 ptype)
+{
+	u8 ptid;
+
+	ptid = TXGBE_PTID_PKT_MAC;
+
+	switch (ptype & RTE_PTYPE_L2_MASK) {
+	case RTE_PTYPE_L2_ETHER_FCOE:
+		ptid = txgbe_encode_ptype_fcoe(ptype);
+		break;
+	case RTE_PTYPE_UNKNOWN:
+		break;
+	case RTE_PTYPE_L2_ETHER_TIMESYNC:
+		ptid |= TXGBE_PTID_TYP_TS;
+		break;
+	case RTE_PTYPE_L2_ETHER_ARP:
+		ptid |= TXGBE_PTID_TYP_ARP;
+		break;
+	case RTE_PTYPE_L2_ETHER_LLDP:
+		ptid |= TXGBE_PTID_TYP_LLDP;
+		break;
+	default:
+		ptid |= TXGBE_PTID_TYP_MAC;
+		break;
+	}
+
+	return ptid;
+}
+
+static inline u8
+txgbe_encode_ptype_ip(u32 ptype)
+{
+	u8 ptid;
+
+	ptid = TXGBE_PTID_PKT_IP;
+
+	switch (ptype & RTE_PTYPE_L3_MASK) {
+	case RTE_PTYPE_L3_IPV4:
+	case RTE_PTYPE_L3_IPV4_EXT:
+	case RTE_PTYPE_L3_IPV4_EXT_UNKNOWN:
+		break;
+	case RTE_PTYPE_L3_IPV6:
+	case RTE_PTYPE_L3_IPV6_EXT:
+	case RTE_PTYPE_L3_IPV6_EXT_UNKNOWN:
+		ptid |= TXGBE_PTID_PKT_IPV6;
+		break;
+	default:
+		return txgbe_encode_ptype_mac(ptype);
+	}
+
+	switch (ptype & RTE_PTYPE_L4_MASK) {
+	case RTE_PTYPE_L4_TCP:
+		ptid |= TXGBE_PTID_TYP_TCP;
+		break;
+	case RTE_PTYPE_L4_UDP:
+		ptid |= TXGBE_PTID_TYP_UDP;
+		break;
+	case RTE_PTYPE_L4_SCTP:
+		ptid |= TXGBE_PTID_TYP_SCTP;
+		break;
+	case RTE_PTYPE_L4_FRAG:
+		ptid |= TXGBE_PTID_TYP_IPFRAG;
+		break;
+	default:
+		ptid |= TXGBE_PTID_TYP_IPDATA;
+		break;
+	}
+
+	return ptid;
+}
+
+static inline u8
+txgbe_encode_ptype_tunnel(u32 ptype)
+{
+	u8 ptid;
+
+	ptid = TXGBE_PTID_PKT_TUN;
+
+	switch (ptype & RTE_PTYPE_L3_MASK) {
+	case RTE_PTYPE_L3_IPV4:
+	case RTE_PTYPE_L3_IPV4_EXT:
+	case RTE_PTYPE_L3_IPV4_EXT_UNKNOWN:
+		break;
+	case RTE_PTYPE_L3_IPV6:
+	case RTE_PTYPE_L3_IPV6_EXT:
+	case RTE_PTYPE_L3_IPV6_EXT_UNKNOWN:
+		ptid |= TXGBE_PTID_TUN_IPV6;
+		break;
+	default:
+		return txgbe_encode_ptype_ip(ptype);
+	}
+
+	switch (ptype & RTE_PTYPE_TUNNEL_MASK) {
+	case RTE_PTYPE_TUNNEL_IP:
+		ptid |= TXGBE_PTID_TUN_EI;
+		break;
+	case RTE_PTYPE_TUNNEL_GRE:
+		ptid |= TXGBE_PTID_TUN_EIG;
+		break;
+	case RTE_PTYPE_TUNNEL_VXLAN:
+	case RTE_PTYPE_TUNNEL_VXLAN_GPE:
+	case RTE_PTYPE_TUNNEL_NVGRE:
+	case RTE_PTYPE_TUNNEL_GENEVE:
+	case RTE_PTYPE_TUNNEL_GRENAT:
+		break;
+	default:
+		return ptid;
+	}
+
+	switch (ptype & RTE_PTYPE_INNER_L2_MASK) {
+	case RTE_PTYPE_INNER_L2_ETHER:
+		ptid |= TXGBE_PTID_TUN_EIGM;
+		break;
+	case RTE_PTYPE_INNER_L2_ETHER_VLAN:
+		ptid |= TXGBE_PTID_TUN_EIGMV;
+		break;
+	case RTE_PTYPE_INNER_L2_ETHER_QINQ:
+		ptid |= TXGBE_PTID_TUN_EIGMV;
+		return ptid;
+	default:
+		break;
+	}
+
+	switch (ptype & RTE_PTYPE_INNER_L3_MASK) {
+	case RTE_PTYPE_INNER_L3_IPV4:
+	case RTE_PTYPE_INNER_L3_IPV4_EXT:
+	case RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN:
+		break;
+	case RTE_PTYPE_INNER_L3_IPV6:
+	case RTE_PTYPE_INNER_L3_IPV6_EXT:
+	case RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN:
+		ptid |= TXGBE_PTID_PKT_IPV6;
+		break;
+	default:
+		return ptid;
+	}
+
+	switch (ptype & RTE_PTYPE_INNER_L4_MASK) {
+	case RTE_PTYPE_INNER_L4_TCP:
+		ptid |= TXGBE_PTID_TYP_TCP;
+		break;
+	case RTE_PTYPE_INNER_L4_UDP:
+		ptid |= TXGBE_PTID_TYP_UDP;
+		break;
+	case RTE_PTYPE_INNER_L4_SCTP:
+		ptid |= TXGBE_PTID_TYP_SCTP;
+		break;
+	case RTE_PTYPE_INNER_L4_FRAG:
+		ptid |= TXGBE_PTID_TYP_IPFRAG;
+		break;
+	default:
+		ptid |= TXGBE_PTID_TYP_IPDATA;
+		break;
+	}
+
+	return ptid;
+}
+
+u32 txgbe_decode_ptype(u8 ptid)
+{
+	if (-1 != txgbe_etflt_id(ptid))
+		return RTE_PTYPE_UNKNOWN;
+
+	return txgbe_ptype_lookup[ptid];
+}
+
+u8 txgbe_encode_ptype(u32 ptype)
+{
+	u8 ptid = 0;
+
+	if (ptype & RTE_PTYPE_TUNNEL_MASK) {
+		ptid = txgbe_encode_ptype_tunnel(ptype);
+	} else if (ptype & RTE_PTYPE_L3_MASK) {
+		ptid = txgbe_encode_ptype_ip(ptype);
+	} else if (ptype & RTE_PTYPE_L2_MASK) {
+		ptid = txgbe_encode_ptype_mac(ptype);
+	} else {
+		ptid = TXGBE_PTID_NULL;
+	}
+
+	return ptid;
+}
+
+/**
+ * Use two different tables, one for normal packets and one for
+ * tunnel packets, to save space.
+ */
+const u32
+txgbe_ptype_table[TXGBE_PTID_MAX] __rte_cache_aligned = {
+	[TXGBE_PT_ETHER] = RTE_PTYPE_L2_ETHER,
+	[TXGBE_PT_IPV4] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4,
+	[TXGBE_PT_IPV4_TCP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP,
+	[TXGBE_PT_IPV4_UDP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP,
+	[TXGBE_PT_IPV4_SCTP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_SCTP,
+	[TXGBE_PT_IPV4_EXT] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT,
+	[TXGBE_PT_IPV4_EXT_TCP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_TCP,
+	[TXGBE_PT_IPV4_EXT_UDP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_UDP,
+	[TXGBE_PT_IPV4_EXT_SCTP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_L4_SCTP,
+	[TXGBE_PT_IPV6] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV6,
+	[TXGBE_PT_IPV6_TCP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_TCP,
+	[TXGBE_PT_IPV6_UDP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP,
+	[TXGBE_PT_IPV6_SCTP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_SCTP,
+	[TXGBE_PT_IPV6_EXT] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV6_EXT,
+	[TXGBE_PT_IPV6_EXT_TCP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_TCP,
+	[TXGBE_PT_IPV6_EXT_UDP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_UDP,
+	[TXGBE_PT_IPV6_EXT_SCTP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV6_EXT | RTE_PTYPE_L4_SCTP,
+	[TXGBE_PT_IPV4_IPV6] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+		RTE_PTYPE_INNER_L3_IPV6,
+	[TXGBE_PT_IPV4_IPV6_TCP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+		RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_TCP,
+	[TXGBE_PT_IPV4_IPV6_UDP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+		RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_UDP,
+	[TXGBE_PT_IPV4_IPV6_SCTP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+		RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_SCTP,
+	[TXGBE_PT_IPV4_EXT_IPV6] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_TUNNEL_IP |
+		RTE_PTYPE_INNER_L3_IPV6,
+	[TXGBE_PT_IPV4_EXT_IPV6_TCP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_TUNNEL_IP |
+		RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_TCP,
+	[TXGBE_PT_IPV4_EXT_IPV6_UDP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_TUNNEL_IP |
+		RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_UDP,
+	[TXGBE_PT_IPV4_EXT_IPV6_SCTP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_TUNNEL_IP |
+		RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_SCTP,
+	[TXGBE_PT_IPV4_IPV6_EXT] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+		RTE_PTYPE_INNER_L3_IPV6_EXT,
+	[TXGBE_PT_IPV4_IPV6_EXT_TCP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+		RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_TCP,
+	[TXGBE_PT_IPV4_IPV6_EXT_UDP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+		RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_UDP,
+	[TXGBE_PT_IPV4_IPV6_EXT_SCTP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_IP |
+		RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_SCTP,
+	[TXGBE_PT_IPV4_EXT_IPV6_EXT] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_TUNNEL_IP |
+		RTE_PTYPE_INNER_L3_IPV6_EXT,
+	[TXGBE_PT_IPV4_EXT_IPV6_EXT_TCP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_TUNNEL_IP |
+		RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_TCP,
+	[TXGBE_PT_IPV4_EXT_IPV6_EXT_UDP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_TUNNEL_IP |
+		RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_UDP,
+	[TXGBE_PT_IPV4_EXT_IPV6_EXT_SCTP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT | RTE_PTYPE_TUNNEL_IP |
+		RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_SCTP,
+};
+
+const u32
+txgbe_ptype_table_tn[TXGBE_PTID_MAX] __rte_cache_aligned = {
+	[TXGBE_PT_NVGRE] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
+		RTE_PTYPE_INNER_L2_ETHER,
+	[TXGBE_PT_NVGRE_IPV4] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
+		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV4,
+	[TXGBE_PT_NVGRE_IPV4_EXT] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
+		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV4_EXT,
+	[TXGBE_PT_NVGRE_IPV6] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
+		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV6,
+	[TXGBE_PT_NVGRE_IPV4_IPV6] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
+		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV4,
+	[TXGBE_PT_NVGRE_IPV6_EXT] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
+		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV6_EXT,
+	[TXGBE_PT_NVGRE_IPV4_IPV6_EXT] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
+		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV4,
+	[TXGBE_PT_NVGRE_IPV4_TCP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
+		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV4 |
+		RTE_PTYPE_INNER_L4_TCP,
+	[TXGBE_PT_NVGRE_IPV6_TCP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
+		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV6 |
+		RTE_PTYPE_INNER_L4_TCP,
+	[TXGBE_PT_NVGRE_IPV4_IPV6_TCP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
+		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV4,
+	[TXGBE_PT_NVGRE_IPV6_EXT_TCP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
+		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV6_EXT |
+		RTE_PTYPE_INNER_L4_TCP,
+	[TXGBE_PT_NVGRE_IPV4_IPV6_EXT_TCP] =
+		RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		RTE_PTYPE_TUNNEL_GRE | RTE_PTYPE_INNER_L2_ETHER |
+		RTE_PTYPE_INNER_L3_IPV4,
+	[TXGBE_PT_NVGRE_IPV4_UDP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
+		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV4 |
+		RTE_PTYPE_INNER_L4_UDP,
+	[TXGBE_PT_NVGRE_IPV6_UDP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
+		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV6 |
+		RTE_PTYPE_INNER_L4_UDP,
+	[TXGBE_PT_NVGRE_IPV6_SCTP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
+		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV6 |
+		RTE_PTYPE_INNER_L4_SCTP,
+	[TXGBE_PT_NVGRE_IPV4_IPV6_UDP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
+		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV4,
+	[TXGBE_PT_NVGRE_IPV6_EXT_UDP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
+		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV6_EXT |
+		RTE_PTYPE_INNER_L4_UDP,
+	[TXGBE_PT_NVGRE_IPV6_EXT_SCTP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
+		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV6_EXT |
+		RTE_PTYPE_INNER_L4_SCTP,
+	[TXGBE_PT_NVGRE_IPV4_IPV6_EXT_UDP] =
+		RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		RTE_PTYPE_TUNNEL_GRE | RTE_PTYPE_INNER_L2_ETHER |
+		RTE_PTYPE_INNER_L3_IPV4,
+	[TXGBE_PT_NVGRE_IPV4_SCTP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
+		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV4 |
+		RTE_PTYPE_INNER_L4_SCTP,
+	[TXGBE_PT_NVGRE_IPV4_EXT_SCTP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
+		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV4_EXT |
+		RTE_PTYPE_INNER_L4_SCTP,
+	[TXGBE_PT_NVGRE_IPV4_EXT_TCP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
+		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV4_EXT |
+		RTE_PTYPE_INNER_L4_TCP,
+	[TXGBE_PT_NVGRE_IPV4_EXT_UDP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_TUNNEL_GRE |
+		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV4_EXT |
+		RTE_PTYPE_INNER_L4_UDP,
+
+	[TXGBE_PT_VXLAN] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
+		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER,
+	[TXGBE_PT_VXLAN_IPV4] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
+		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
+		RTE_PTYPE_INNER_L3_IPV4,
+	[TXGBE_PT_VXLAN_IPV4_EXT] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
+		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
+		RTE_PTYPE_INNER_L3_IPV4_EXT,
+	[TXGBE_PT_VXLAN_IPV6] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
+		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
+		RTE_PTYPE_INNER_L3_IPV6,
+	[TXGBE_PT_VXLAN_IPV4_IPV6] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
+		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
+		RTE_PTYPE_INNER_L3_IPV4,
+	[TXGBE_PT_VXLAN_IPV6_EXT] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
+		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
+		RTE_PTYPE_INNER_L3_IPV6_EXT,
+	[TXGBE_PT_VXLAN_IPV4_IPV6_EXT] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
+		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
+		RTE_PTYPE_INNER_L3_IPV4,
+	[TXGBE_PT_VXLAN_IPV4_TCP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
+		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
+		RTE_PTYPE_INNER_L3_IPV4 | RTE_PTYPE_INNER_L4_TCP,
+	[TXGBE_PT_VXLAN_IPV6_TCP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
+		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
+		RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_TCP,
+	[TXGBE_PT_VXLAN_IPV4_IPV6_TCP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
+		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
+		RTE_PTYPE_INNER_L3_IPV4,
+	[TXGBE_PT_VXLAN_IPV6_EXT_TCP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
+		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
+		RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_TCP,
+	[TXGBE_PT_VXLAN_IPV4_IPV6_EXT_TCP] =
+		RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		RTE_PTYPE_L4_UDP | RTE_PTYPE_TUNNEL_VXLAN |
+		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV4,
+	[TXGBE_PT_VXLAN_IPV4_UDP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
+		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
+		RTE_PTYPE_INNER_L3_IPV4 | RTE_PTYPE_INNER_L4_UDP,
+	[TXGBE_PT_VXLAN_IPV6_UDP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
+		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
+		RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_UDP,
+	[TXGBE_PT_VXLAN_IPV6_SCTP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
+		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
+		RTE_PTYPE_INNER_L3_IPV6 | RTE_PTYPE_INNER_L4_SCTP,
+	[TXGBE_PT_VXLAN_IPV4_IPV6_UDP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
+		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
+		RTE_PTYPE_INNER_L3_IPV4,
+	[TXGBE_PT_VXLAN_IPV6_EXT_UDP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
+		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
+		RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_UDP,
+	[TXGBE_PT_VXLAN_IPV6_EXT_SCTP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
+		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
+		RTE_PTYPE_INNER_L3_IPV6_EXT | RTE_PTYPE_INNER_L4_SCTP,
+	[TXGBE_PT_VXLAN_IPV4_IPV6_EXT_UDP] =
+		RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
+		RTE_PTYPE_L4_UDP | RTE_PTYPE_TUNNEL_VXLAN |
+		RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV4,
+	[TXGBE_PT_VXLAN_IPV4_SCTP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
+		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
+		RTE_PTYPE_INNER_L3_IPV4 | RTE_PTYPE_INNER_L4_SCTP,
+	[TXGBE_PT_VXLAN_IPV4_EXT_SCTP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
+		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
+		RTE_PTYPE_INNER_L3_IPV4_EXT | RTE_PTYPE_INNER_L4_SCTP,
+	[TXGBE_PT_VXLAN_IPV4_EXT_TCP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
+		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
+		RTE_PTYPE_INNER_L3_IPV4_EXT | RTE_PTYPE_INNER_L4_TCP,
+	[TXGBE_PT_VXLAN_IPV4_EXT_UDP] = RTE_PTYPE_L2_ETHER |
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP |
+		RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_INNER_L2_ETHER |
+		RTE_PTYPE_INNER_L3_IPV4_EXT | RTE_PTYPE_INNER_L4_UDP,
+};
+
diff --git a/drivers/net/txgbe/txgbe_ptypes.h b/drivers/net/txgbe/txgbe_ptypes.h
new file mode 100644
index 000000000..6af4b0ded
--- /dev/null
+++ b/drivers/net/txgbe/txgbe_ptypes.h
@@ -0,0 +1,351 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2015-2020
+ */
+
+#ifndef _TXGBE_PTYPE_H_
+#define _TXGBE_PTYPE_H_
+
+/**
+ * PTID (Packet Type Identifier, 8 bits)
+ * - Bits 3:0: detailed types.
+ * - Bits 5:4: basic types.
+ * - Bits 7:6: tunnel types.
+ **/
+#define TXGBE_PTID_NULL                 0
+#define TXGBE_PTID_MAX                  256
+#define TXGBE_PTID_MASK                 0xFF
+#define TXGBE_PTID_MASK_TUNNEL          0x7F
+
+/* TUN */
+#define TXGBE_PTID_TUN_IPV6             0x40
+#define TXGBE_PTID_TUN_EI               0x00 /* IP */
+#define TXGBE_PTID_TUN_EIG              0x10 /* IP+GRE */
+#define TXGBE_PTID_TUN_EIGM             0x20 /* IP+GRE+MAC */
+#define TXGBE_PTID_TUN_EIGMV            0x30 /* IP+GRE+MAC+VLAN */
+
+/* PKT for !TUN */
+#define TXGBE_PTID_PKT_TUN             (0x80)
+#define TXGBE_PTID_PKT_MAC             (0x10)
+#define TXGBE_PTID_PKT_IP              (0x20)
+#define TXGBE_PTID_PKT_FCOE            (0x30)
+
+/* TYP for PKT=mac */
+#define TXGBE_PTID_TYP_MAC             (0x01)
+#define TXGBE_PTID_TYP_TS              (0x02) /* time sync */
+#define TXGBE_PTID_TYP_FIP             (0x03)
+#define TXGBE_PTID_TYP_LLDP            (0x04)
+#define TXGBE_PTID_TYP_CNM             (0x05)
+#define TXGBE_PTID_TYP_EAPOL           (0x06)
+#define TXGBE_PTID_TYP_ARP             (0x07)
+#define TXGBE_PTID_TYP_ETF             (0x08)
+
+/* TYP for PKT=ip */
+#define TXGBE_PTID_PKT_IPV6            (0x08)
+#define TXGBE_PTID_TYP_IPFRAG          (0x01)
+#define TXGBE_PTID_TYP_IPDATA          (0x02)
+#define TXGBE_PTID_TYP_UDP             (0x03)
+#define TXGBE_PTID_TYP_TCP             (0x04)
+#define TXGBE_PTID_TYP_SCTP            (0x05)
+
+/* TYP for PKT=fcoe */
+#define TXGBE_PTID_PKT_VFT             (0x08)
+#define TXGBE_PTID_TYP_FCOE            (0x00)
+#define TXGBE_PTID_TYP_FCDATA          (0x01)
+#define TXGBE_PTID_TYP_FCRDY           (0x02)
+#define TXGBE_PTID_TYP_FCRSP           (0x03)
+#define TXGBE_PTID_TYP_FCOTHER         (0x04)
+
+/* packet type non-ip values */
+enum txgbe_l2_ptids {
+	TXGBE_PTID_L2_ABORTED = (TXGBE_PTID_PKT_MAC),
+	TXGBE_PTID_L2_MAC = (TXGBE_PTID_PKT_MAC | TXGBE_PTID_TYP_MAC),
+	TXGBE_PTID_L2_TMST = (TXGBE_PTID_PKT_MAC | TXGBE_PTID_TYP_TS),
+	TXGBE_PTID_L2_FIP = (TXGBE_PTID_PKT_MAC | TXGBE_PTID_TYP_FIP),
+	TXGBE_PTID_L2_LLDP = (TXGBE_PTID_PKT_MAC | TXGBE_PTID_TYP_LLDP),
+	TXGBE_PTID_L2_CNM = (TXGBE_PTID_PKT_MAC | TXGBE_PTID_TYP_CNM),
+	TXGBE_PTID_L2_EAPOL = (TXGBE_PTID_PKT_MAC | TXGBE_PTID_TYP_EAPOL),
+	TXGBE_PTID_L2_ARP = (TXGBE_PTID_PKT_MAC | TXGBE_PTID_TYP_ARP),
+
+	TXGBE_PTID_L2_IPV4_FRAG = (TXGBE_PTID_PKT_IP | TXGBE_PTID_TYP_IPFRAG),
+	TXGBE_PTID_L2_IPV4 = (TXGBE_PTID_PKT_IP | TXGBE_PTID_TYP_IPDATA),
+	TXGBE_PTID_L2_IPV4_UDP = (TXGBE_PTID_PKT_IP | TXGBE_PTID_TYP_UDP),
+	TXGBE_PTID_L2_IPV4_TCP = (TXGBE_PTID_PKT_IP | TXGBE_PTID_TYP_TCP),
+	TXGBE_PTID_L2_IPV4_SCTP = (TXGBE_PTID_PKT_IP | TXGBE_PTID_TYP_SCTP),
+	TXGBE_PTID_L2_IPV6_FRAG = (TXGBE_PTID_PKT_IP | TXGBE_PTID_PKT_IPV6 |
+			TXGBE_PTID_TYP_IPFRAG),
+	TXGBE_PTID_L2_IPV6 = (TXGBE_PTID_PKT_IP | TXGBE_PTID_PKT_IPV6 |
+			TXGBE_PTID_TYP_IPDATA),
+	TXGBE_PTID_L2_IPV6_UDP = (TXGBE_PTID_PKT_IP | TXGBE_PTID_PKT_IPV6 |
+			TXGBE_PTID_TYP_UDP),
+	TXGBE_PTID_L2_IPV6_TCP = (TXGBE_PTID_PKT_IP | TXGBE_PTID_PKT_IPV6 |
+			TXGBE_PTID_TYP_TCP),
+	TXGBE_PTID_L2_IPV6_SCTP = (TXGBE_PTID_PKT_IP | TXGBE_PTID_PKT_IPV6 |
+			TXGBE_PTID_TYP_SCTP),
+
+	TXGBE_PTID_L2_FCOE = (TXGBE_PTID_PKT_FCOE |
+			TXGBE_PTID_TYP_FCOE),
+	TXGBE_PTID_L2_FCOE_FCDATA = (TXGBE_PTID_PKT_FCOE |
+			TXGBE_PTID_TYP_FCDATA),
+	TXGBE_PTID_L2_FCOE_FCRDY = (TXGBE_PTID_PKT_FCOE |
+			TXGBE_PTID_TYP_FCRDY),
+	TXGBE_PTID_L2_FCOE_FCRSP = (TXGBE_PTID_PKT_FCOE |
+			TXGBE_PTID_TYP_FCRSP),
+	TXGBE_PTID_L2_FCOE_FCOTHER = (TXGBE_PTID_PKT_FCOE |
+			TXGBE_PTID_TYP_FCOTHER),
+	TXGBE_PTID_L2_FCOE_VFT = (TXGBE_PTID_PKT_FCOE |
+			TXGBE_PTID_PKT_VFT),
+	TXGBE_PTID_L2_FCOE_VFT_FCDATA = (TXGBE_PTID_PKT_FCOE |
+			TXGBE_PTID_PKT_VFT | TXGBE_PTID_TYP_FCDATA),
+	TXGBE_PTID_L2_FCOE_VFT_FCRDY = (TXGBE_PTID_PKT_FCOE |
+			TXGBE_PTID_PKT_VFT | TXGBE_PTID_TYP_FCRDY),
+	TXGBE_PTID_L2_FCOE_VFT_FCRSP = (TXGBE_PTID_PKT_FCOE |
+			TXGBE_PTID_PKT_VFT | TXGBE_PTID_TYP_FCRSP),
+	TXGBE_PTID_L2_FCOE_VFT_FCOTHER = (TXGBE_PTID_PKT_FCOE |
+			TXGBE_PTID_PKT_VFT | TXGBE_PTID_TYP_FCOTHER),
+
+	TXGBE_PTID_L2_TUN4_MAC = (TXGBE_PTID_PKT_TUN |
+			TXGBE_PTID_TUN_EIGM),
+	TXGBE_PTID_L2_TUN6_MAC = (TXGBE_PTID_PKT_TUN |
+			TXGBE_PTID_TUN_IPV6 | TXGBE_PTID_TUN_EIGM),
+};
+
+
+/*
+ * PTYPE (Packet Type, 32 bits)
+ * - Bits 3:0 are for L2 types.
+ * - Bits 7:4 are for L3 or outer L3 (for tunneling) types.
+ * - Bits 11:8 are for L4 or outer L4 (for tunneling) types.
+ * - Bits 15:12 are for tunnel types.
+ * - Bits 19:16 are for inner L2 types.
+ * - Bits 23:20 are for inner L3 types.
+ * - Bits 27:24 are for inner L4 types.
+ * - Bits 31:28 are reserved.
+ * Please refer to rte_mbuf.h: rte_mbuf.packet_type.
+ */
+struct rte_txgbe_ptype {
+	u32 l2:4;  /* outer mac */
+	u32 l3:4;  /* outer internet protocol */
+	u32 l4:4;  /* outer transport protocol */
+	u32 tun:4; /* tunnel protocol */
+
+	u32 el2:4; /* inner mac */
+	u32 el3:4; /* inner internet protocol */
+	u32 el4:4; /* inner transport protocol */
+	u32 rsv:3;
+	u32 known:1;
+};
+
+#ifndef RTE_PTYPE_UNKNOWN
+#define RTE_PTYPE_UNKNOWN                   0x00000000
+#define RTE_PTYPE_L2_ETHER                  0x00000001
+#define RTE_PTYPE_L2_ETHER_TIMESYNC         0x00000002
+#define RTE_PTYPE_L2_ETHER_ARP              0x00000003
+#define RTE_PTYPE_L2_ETHER_LLDP             0x00000004
+#define RTE_PTYPE_L2_ETHER_NSH              0x00000005
+#define RTE_PTYPE_L2_ETHER_FCOE             0x00000009
+#define RTE_PTYPE_L3_IPV4                   0x00000010
+#define RTE_PTYPE_L3_IPV4_EXT               0x00000030
+#define RTE_PTYPE_L3_IPV6                   0x00000040
+#define RTE_PTYPE_L3_IPV4_EXT_UNKNOWN       0x00000090
+#define RTE_PTYPE_L3_IPV6_EXT               0x000000c0
+#define RTE_PTYPE_L3_IPV6_EXT_UNKNOWN       0x000000e0
+#define RTE_PTYPE_L4_TCP                    0x00000100
+#define RTE_PTYPE_L4_UDP                    0x00000200
+#define RTE_PTYPE_L4_FRAG                   0x00000300
+#define RTE_PTYPE_L4_SCTP                   0x00000400
+#define RTE_PTYPE_L4_ICMP                   0x00000500
+#define RTE_PTYPE_L4_NONFRAG                0x00000600
+#define RTE_PTYPE_TUNNEL_IP                 0x00001000
+#define RTE_PTYPE_TUNNEL_GRE                0x00002000
+#define RTE_PTYPE_TUNNEL_VXLAN              0x00003000
+#define RTE_PTYPE_TUNNEL_NVGRE              0x00004000
+#define RTE_PTYPE_TUNNEL_GENEVE             0x00005000
+#define RTE_PTYPE_TUNNEL_GRENAT             0x00006000
+#define RTE_PTYPE_INNER_L2_ETHER            0x00010000
+#define RTE_PTYPE_INNER_L2_ETHER_VLAN       0x00020000
+#define RTE_PTYPE_INNER_L3_IPV4             0x00100000
+#define RTE_PTYPE_INNER_L3_IPV4_EXT         0x00200000
+#define RTE_PTYPE_INNER_L3_IPV6             0x00300000
+#define RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN 0x00400000
+#define RTE_PTYPE_INNER_L3_IPV6_EXT         0x00500000
+#define RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN 0x00600000
+#define RTE_PTYPE_INNER_L4_TCP              0x01000000
+#define RTE_PTYPE_INNER_L4_UDP              0x02000000
+#define RTE_PTYPE_INNER_L4_FRAG             0x03000000
+#define RTE_PTYPE_INNER_L4_SCTP             0x04000000
+#define RTE_PTYPE_INNER_L4_ICMP             0x05000000
+#define RTE_PTYPE_INNER_L4_NONFRAG          0x06000000
+#endif /* !RTE_PTYPE_UNKNOWN */
+#define RTE_PTYPE_L3_IPV4u                  RTE_PTYPE_L3_IPV4_EXT_UNKNOWN
+#define RTE_PTYPE_L3_IPV6u                  RTE_PTYPE_L3_IPV6_EXT_UNKNOWN
+#define RTE_PTYPE_INNER_L3_IPV4u            RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN
+#define RTE_PTYPE_INNER_L3_IPV6u            RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN
+#define RTE_PTYPE_L2_ETHER_FIP              RTE_PTYPE_L2_ETHER
+#define RTE_PTYPE_L2_ETHER_CNM              RTE_PTYPE_L2_ETHER
+#define RTE_PTYPE_L2_ETHER_EAPOL            RTE_PTYPE_L2_ETHER
+#define RTE_PTYPE_L2_ETHER_FILTER           RTE_PTYPE_L2_ETHER
+
+u32 *txgbe_get_supported_ptypes(void);
+u32 txgbe_decode_ptype(u8 ptid);
+u8 txgbe_encode_ptype(u32 ptype);
+
+/**
+ * PT (Packet Type, 32 bits)
+ * - Bits 3:0 are for L2 types.
+ * - Bits 7:4 are for L3 or outer L3 (for tunneling) types.
+ * - Bits 11:8 are for L4 or outer L4 (for tunneling) types.
+ * - Bits 15:12 are for tunnel types.
+ * - Bits 19:16 are for inner L2 types.
+ * - Bits 23:20 are for inner L3 types.
+ * - Bits 27:24 are for inner L4 types.
+ * - Bits 31:28 are reserved.
+ * PT is a more accurate version of PTYPE.
+ **/
+#define TXGBE_PT_ETHER                   0x00
+#define TXGBE_PT_IPV4                    0x01
+#define TXGBE_PT_IPV4_TCP                0x11
+#define TXGBE_PT_IPV4_UDP                0x21
+#define TXGBE_PT_IPV4_SCTP               0x41
+#define TXGBE_PT_IPV4_EXT                0x03
+#define TXGBE_PT_IPV4_EXT_TCP            0x13
+#define TXGBE_PT_IPV4_EXT_UDP            0x23
+#define TXGBE_PT_IPV4_EXT_SCTP           0x43
+#define TXGBE_PT_IPV6                    0x04
+#define TXGBE_PT_IPV6_TCP                0x14
+#define TXGBE_PT_IPV6_UDP                0x24
+#define TXGBE_PT_IPV6_SCTP               0x44
+#define TXGBE_PT_IPV6_EXT                0x0C
+#define TXGBE_PT_IPV6_EXT_TCP            0x1C
+#define TXGBE_PT_IPV6_EXT_UDP            0x2C
+#define TXGBE_PT_IPV6_EXT_SCTP           0x4C
+#define TXGBE_PT_IPV4_IPV6               0x05
+#define TXGBE_PT_IPV4_IPV6_TCP           0x15
+#define TXGBE_PT_IPV4_IPV6_UDP           0x25
+#define TXGBE_PT_IPV4_IPV6_SCTP          0x45
+#define TXGBE_PT_IPV4_EXT_IPV6           0x07
+#define TXGBE_PT_IPV4_EXT_IPV6_TCP       0x17
+#define TXGBE_PT_IPV4_EXT_IPV6_UDP       0x27
+#define TXGBE_PT_IPV4_EXT_IPV6_SCTP      0x47
+#define TXGBE_PT_IPV4_IPV6_EXT           0x0D
+#define TXGBE_PT_IPV4_IPV6_EXT_TCP       0x1D
+#define TXGBE_PT_IPV4_IPV6_EXT_UDP       0x2D
+#define TXGBE_PT_IPV4_IPV6_EXT_SCTP      0x4D
+#define TXGBE_PT_IPV4_EXT_IPV6_EXT       0x0F
+#define TXGBE_PT_IPV4_EXT_IPV6_EXT_TCP   0x1F
+#define TXGBE_PT_IPV4_EXT_IPV6_EXT_UDP   0x2F
+#define TXGBE_PT_IPV4_EXT_IPV6_EXT_SCTP  0x4F
+
+#define TXGBE_PT_NVGRE                   0x00
+#define TXGBE_PT_NVGRE_IPV4              0x01
+#define TXGBE_PT_NVGRE_IPV4_TCP          0x11
+#define TXGBE_PT_NVGRE_IPV4_UDP          0x21
+#define TXGBE_PT_NVGRE_IPV4_SCTP         0x41
+#define TXGBE_PT_NVGRE_IPV4_EXT          0x03
+#define TXGBE_PT_NVGRE_IPV4_EXT_TCP      0x13
+#define TXGBE_PT_NVGRE_IPV4_EXT_UDP      0x23
+#define TXGBE_PT_NVGRE_IPV4_EXT_SCTP     0x43
+#define TXGBE_PT_NVGRE_IPV6              0x04
+#define TXGBE_PT_NVGRE_IPV6_TCP          0x14
+#define TXGBE_PT_NVGRE_IPV6_UDP          0x24
+#define TXGBE_PT_NVGRE_IPV6_SCTP         0x44
+#define TXGBE_PT_NVGRE_IPV6_EXT          0x0C
+#define TXGBE_PT_NVGRE_IPV6_EXT_TCP      0x1C
+#define TXGBE_PT_NVGRE_IPV6_EXT_UDP      0x2C
+#define TXGBE_PT_NVGRE_IPV6_EXT_SCTP     0x4C
+#define TXGBE_PT_NVGRE_IPV4_IPV6         0x05
+#define TXGBE_PT_NVGRE_IPV4_IPV6_TCP     0x15
+#define TXGBE_PT_NVGRE_IPV4_IPV6_UDP     0x25
+#define TXGBE_PT_NVGRE_IPV4_IPV6_EXT     0x0D
+#define TXGBE_PT_NVGRE_IPV4_IPV6_EXT_TCP 0x1D
+#define TXGBE_PT_NVGRE_IPV4_IPV6_EXT_UDP 0x2D
+
+#define TXGBE_PT_VXLAN                   0x80
+#define TXGBE_PT_VXLAN_IPV4              0x81
+#define TXGBE_PT_VXLAN_IPV4_TCP          0x91
+#define TXGBE_PT_VXLAN_IPV4_UDP          0xA1
+#define TXGBE_PT_VXLAN_IPV4_SCTP         0xC1
+#define TXGBE_PT_VXLAN_IPV4_EXT          0x83
+#define TXGBE_PT_VXLAN_IPV4_EXT_TCP      0x93
+#define TXGBE_PT_VXLAN_IPV4_EXT_UDP      0xA3
+#define TXGBE_PT_VXLAN_IPV4_EXT_SCTP     0xC3
+#define TXGBE_PT_VXLAN_IPV6              0x84
+#define TXGBE_PT_VXLAN_IPV6_TCP          0x94
+#define TXGBE_PT_VXLAN_IPV6_UDP          0xA4
+#define TXGBE_PT_VXLAN_IPV6_SCTP         0xC4
+#define TXGBE_PT_VXLAN_IPV6_EXT          0x8C
+#define TXGBE_PT_VXLAN_IPV6_EXT_TCP      0x9C
+#define TXGBE_PT_VXLAN_IPV6_EXT_UDP      0xAC
+#define TXGBE_PT_VXLAN_IPV6_EXT_SCTP     0xCC
+#define TXGBE_PT_VXLAN_IPV4_IPV6         0x85
+#define TXGBE_PT_VXLAN_IPV4_IPV6_TCP     0x95
+#define TXGBE_PT_VXLAN_IPV4_IPV6_UDP     0xA5
+#define TXGBE_PT_VXLAN_IPV4_IPV6_EXT     0x8D
+#define TXGBE_PT_VXLAN_IPV4_IPV6_EXT_TCP 0x9D
+#define TXGBE_PT_VXLAN_IPV4_IPV6_EXT_UDP 0xAD
+
+#define TXGBE_PT_MAX    256
+extern const u32 txgbe_ptype_table[TXGBE_PT_MAX];
+extern const u32 txgbe_ptype_table_tn[TXGBE_PT_MAX];
+
+
+/* Ether type filter list: one static filter per filter consumer,
+ * to avoid filter collisions later.
+ * Add new filters here:
+ *      EAPOL 802.1x (0x888e): Filter 0
+ *      FCoE (0x8906):   Filter 2
+ *      1588 (0x88f7):   Filter 3
+ *      FIP  (0x8914):   Filter 4
+ *      LLDP (0x88CC):   Filter 5
+ *      LACP (0x8809):   Filter 6
+ *      FC   (0x8808):   Filter 7
+ */
+#define TXGBE_ETF_ID_EAPOL        0
+#define TXGBE_ETF_ID_FCOE         2
+#define TXGBE_ETF_ID_1588         3
+#define TXGBE_ETF_ID_FIP          4
+#define TXGBE_ETF_ID_LLDP         5
+#define TXGBE_ETF_ID_LACP         6
+#define TXGBE_ETF_ID_FC           7
+#define TXGBE_ETF_ID_MAX          8
+
+#define TXGBE_PTID_ETF_MIN  0x18
+#define TXGBE_PTID_ETF_MAX  0x1F
+static inline int txgbe_etflt_id(u8 ptid)
+{
+	if (ptid >= TXGBE_PTID_ETF_MIN && ptid <= TXGBE_PTID_ETF_MAX)
+		return ptid - TXGBE_PTID_ETF_MIN;
+	else
+		return -1;
+}
+
+struct txgbe_udphdr {
+	__be16	source;
+	__be16	dest;
+	__be16	len;
+	__be16	check;
+};
+
+struct txgbe_vxlanhdr {
+	__be32 vx_flags;
+	__be32 vx_vni;
+};
+
+struct txgbe_genevehdr {
+	u8 opt_len:6;
+	u8 ver:2;
+	u8 rsvd1:6;
+	u8 critical:1;
+	u8 oam:1;
+	__be16 proto_type;
+
+	u8 vni[3];
+	u8 rsvd2;
+};
+
+struct txgbe_nvgrehdr {
+	__be16 flags;
+	__be16 proto;
+	__be32 tni;
+};
+
+#endif /* _TXGBE_PTYPE_H_ */
diff --git a/drivers/net/txgbe/txgbe_rxtx.h b/drivers/net/txgbe/txgbe_rxtx.h
index 763ce3439..f30dc68b4 100644
--- a/drivers/net/txgbe/txgbe_rxtx.h
+++ b/drivers/net/txgbe/txgbe_rxtx.h
@@ -68,8 +68,6 @@ struct txgbe_tx_desc {
 #define RTE_TXGBE_REGISTER_POLL_WAIT_10_MS  10
 #define RTE_TXGBE_WAIT_100_US               100
 
-#define TXGBE_PTID_MASK                 0xFF
-
 /**
  * Structure associated with each descriptor of the RX ring of a RX queue.
  */
-- 
2.18.4
^ permalink raw reply	[flat|nested] 49+ messages in thread

* [dpdk-dev] [PATCH v1 23/42] net/txgbe: fill simple transmit function
  2020-09-01 11:50 [dpdk-dev] [PATCH v1 01/42] net/txgbe: add build and doc infrastructure Jiawen Wu
                   ` (20 preceding siblings ...)
  2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 22/42] net/txgbe: add packet type Jiawen Wu
@ 2020-09-01 11:50 ` Jiawen Wu
  2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 24/42] net/txgbe: fill transmit function with hardware offload Jiawen Wu
                   ` (19 subsequent siblings)
  41 siblings, 0 replies; 49+ messages in thread
From: Jiawen Wu @ 2020-09-01 11:50 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Fill in the simple transmit function and define the transmit descriptor.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/txgbe_rxtx.c | 226 ++++++++++++++++++++++++++++++++-
 drivers/net/txgbe/txgbe_rxtx.h |  82 ++++++++++++
 2 files changed, 304 insertions(+), 4 deletions(-)

diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index 2288332ce..3db9d314f 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -25,15 +25,233 @@
 #include "txgbe_ethdev.h"
 #include "txgbe_rxtx.h"
 
+/*********************************************************************
+ *
+ *  TX functions
+ *
+ **********************************************************************/
+
+/*
+ * Check for descriptors with their DD bit set and free mbufs.
+ * Return the total number of buffers freed.
+ */
+static __rte_always_inline int
+txgbe_tx_free_bufs(struct txgbe_tx_queue *txq)
+{
+	struct txgbe_tx_entry *txep;
+	uint32_t status;
+	int i, nb_free = 0;
+	struct rte_mbuf *m, *free[RTE_TXGBE_TX_MAX_FREE_BUF_SZ];
+
+	/* check DD bit on threshold descriptor */
+	status = txq->tx_ring[txq->tx_next_dd].dw3;
+	if (!(status & rte_cpu_to_le_32(TXGBE_TXD_DD))) {
+		if (txq->nb_tx_free >> 1 < txq->tx_free_thresh)
+			txgbe_set32_masked(txq->tdc_reg_addr,
+				TXGBE_TXCFG_FLUSH, TXGBE_TXCFG_FLUSH);
+		return 0;
+	}
+
+	/*
+	 * first buffer to free from S/W ring is at index
+	 * tx_next_dd - (tx_free_thresh-1)
+	 */
+	txep = &(txq->sw_ring[txq->tx_next_dd - (txq->tx_free_thresh - 1)]);
+	for (i = 0; i < txq->tx_free_thresh; ++i, ++txep) {
+		/* free buffers one at a time */
+		m = rte_pktmbuf_prefree_seg(txep->mbuf);
+		txep->mbuf = NULL;
+
+		if (unlikely(m == NULL))
+			continue;
+
+		if (nb_free >= RTE_TXGBE_TX_MAX_FREE_BUF_SZ ||
+		    (nb_free > 0 && m->pool != free[0]->pool)) {
+			rte_mempool_put_bulk(free[0]->pool,
+					     (void **)free, nb_free);
+			nb_free = 0;
+		}
+
+		free[nb_free++] = m;
+	}
+
+	if (nb_free > 0)
+		rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
+
+	/* buffers were freed, update counters */
+	txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_free_thresh);
+	txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_free_thresh);
+	if (txq->tx_next_dd >= txq->nb_tx_desc)
+		txq->tx_next_dd = (uint16_t)(txq->tx_free_thresh - 1);
+
+	return txq->tx_free_thresh;
+}
+
+/* Populate 4 descriptors with data from 4 mbufs */
+static inline void
+tx4(volatile struct txgbe_tx_desc *txdp, struct rte_mbuf **pkts)
+{
+	uint64_t buf_dma_addr;
+	uint32_t pkt_len;
+	int i;
+
+	for (i = 0; i < 4; ++i, ++txdp, ++pkts) {
+		buf_dma_addr = rte_mbuf_data_iova(*pkts);
+		pkt_len = (*pkts)->data_len;
+
+		/* write data to descriptor */
+		txdp->qw0 = rte_cpu_to_le_64(buf_dma_addr);
+		txdp->dw2 = cpu_to_le32(TXGBE_TXD_FLAGS |
+					TXGBE_TXD_DATLEN(pkt_len));
+		txdp->dw3 = cpu_to_le32(TXGBE_TXD_PAYLEN(pkt_len));
+
+		rte_prefetch0(&(*pkts)->pool);
+	}
+}
+
+/* Populate 1 descriptor with data from 1 mbuf */
+static inline void
+tx1(volatile struct txgbe_tx_desc *txdp, struct rte_mbuf **pkts)
+{
+	uint64_t buf_dma_addr;
+	uint32_t pkt_len;
+
+	buf_dma_addr = rte_mbuf_data_iova(*pkts);
+	pkt_len = (*pkts)->data_len;
+
+	/* write data to descriptor */
+	txdp->qw0 = cpu_to_le64(buf_dma_addr);
+	txdp->dw2 = cpu_to_le32(TXGBE_TXD_FLAGS |
+				TXGBE_TXD_DATLEN(pkt_len));
+	txdp->dw3 = cpu_to_le32(TXGBE_TXD_PAYLEN(pkt_len));
+
+	rte_prefetch0(&(*pkts)->pool);
+}
+
+/*
+ * Fill H/W descriptor ring with mbuf data.
+ * Copy mbuf pointers to the S/W ring.
+ */
+static inline void
+txgbe_tx_fill_hw_ring(struct txgbe_tx_queue *txq, struct rte_mbuf **pkts,
+		      uint16_t nb_pkts)
+{
+	volatile struct txgbe_tx_desc *txdp = &(txq->tx_ring[txq->tx_tail]);
+	struct txgbe_tx_entry *txep = &(txq->sw_ring[txq->tx_tail]);
+	const int N_PER_LOOP = 4;
+	const int N_PER_LOOP_MASK = N_PER_LOOP-1;
+	int mainpart, leftover;
+	int i, j;
+
+	/*
+	 * Process most of the packets in chunks of N pkts.  Any
+	 * leftover packets will get processed one at a time.
+	 */
+	mainpart = (nb_pkts & ((uint32_t) ~N_PER_LOOP_MASK));
+	leftover = (nb_pkts & ((uint32_t)  N_PER_LOOP_MASK));
+	for (i = 0; i < mainpart; i += N_PER_LOOP) {
+		/* Copy N mbuf pointers to the S/W ring */
+		for (j = 0; j < N_PER_LOOP; ++j) {
+			(txep + i + j)->mbuf = *(pkts + i + j);
+		}
+		tx4(txdp + i, pkts + i);
+	}
+
+	if (unlikely(leftover > 0)) {
+		for (i = 0; i < leftover; ++i) {
+			(txep + mainpart + i)->mbuf = *(pkts + mainpart + i);
+			tx1(txdp + mainpart + i, pkts + mainpart + i);
+		}
+	}
+}
+
+static inline uint16_t
+tx_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+	     uint16_t nb_pkts)
+{
+	struct txgbe_tx_queue *txq = (struct txgbe_tx_queue *)tx_queue;
+	uint16_t n = 0;
+
+	/*
+	 * Begin scanning the H/W ring for done descriptors when the
+	 * number of available descriptors drops below tx_free_thresh.  For
+	 * each done descriptor, free the associated buffer.
+	 */
+	if (txq->nb_tx_free < txq->tx_free_thresh)
+		txgbe_tx_free_bufs(txq);
+
+	/* Only use descriptors that are available */
+	nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
+	if (unlikely(nb_pkts == 0))
+		return 0;
+
+	/* Use exactly nb_pkts descriptors */
+	txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
+
+	/*
+	 * At this point, we know there are enough descriptors in the
+	 * ring to transmit all the packets.  This assumes that each
+	 * mbuf contains a single segment, and that no new offloads
+	 * are expected, which would require a new context descriptor.
+	 */
+
+	/*
+	 * See if we're going to wrap-around. If so, handle the top
+	 * of the descriptor ring first, then do the bottom.  If not,
+	 * the processing looks just like the "bottom" part anyway...
+	 */
+	if ((txq->tx_tail + nb_pkts) > txq->nb_tx_desc) {
+		n = (uint16_t)(txq->nb_tx_desc - txq->tx_tail);
+		txgbe_tx_fill_hw_ring(txq, tx_pkts, n);
+		txq->tx_tail = 0;
+	}
+
+	/* Fill H/W descriptor ring with mbuf data */
+	txgbe_tx_fill_hw_ring(txq, tx_pkts + n, (uint16_t)(nb_pkts - n));
+	txq->tx_tail = (uint16_t)(txq->tx_tail + (nb_pkts - n));
+
+	/*
+	 * Check for wrap-around. This would only happen if we used
+	 * up to the last descriptor in the ring, no more, no less.
+	 */
+	if (txq->tx_tail >= txq->nb_tx_desc)
+		txq->tx_tail = 0;
+
+	PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u tx_tail=%u nb_tx=%u",
+		   (unsigned) txq->port_id, (unsigned) txq->queue_id,
+		   (unsigned) txq->tx_tail, (unsigned) nb_pkts);
+
+	/* update tail pointer */
+	rte_wmb();
+	txgbe_set32_relaxed(txq->tdt_reg_addr, txq->tx_tail);
+
+	return nb_pkts;
+}
+
 uint16_t
 txgbe_xmit_pkts_simple(void *tx_queue, struct rte_mbuf **tx_pkts,
 		       uint16_t nb_pkts)
 {
-	RTE_SET_USED(tx_queue);
-	RTE_SET_USED(tx_pkts);
-	RTE_SET_USED(nb_pkts);
+	uint16_t nb_tx;
+
+	/* Try to transmit at least chunks of TX_MAX_BURST pkts */
+	if (likely(nb_pkts <= RTE_PMD_TXGBE_TX_MAX_BURST))
+		return tx_xmit_pkts(tx_queue, tx_pkts, nb_pkts);
+
+	/* transmit more than the max burst, in chunks of TX_MAX_BURST */
+	nb_tx = 0;
+	while (nb_pkts) {
+		uint16_t ret, n;
+
+		n = (uint16_t)RTE_MIN(nb_pkts, RTE_PMD_TXGBE_TX_MAX_BURST);
+		ret = tx_xmit_pkts(tx_queue, &(tx_pkts[nb_tx]), n);
+		nb_tx = (uint16_t)(nb_tx + ret);
+		nb_pkts = (uint16_t)(nb_pkts - ret);
+		if (ret < n)
+			break;
+	}
 
-	return 0;
+	return nb_tx;
 }
 
 #ifndef DEFAULT_TX_FREE_THRESH
diff --git a/drivers/net/txgbe/txgbe_rxtx.h b/drivers/net/txgbe/txgbe_rxtx.h
index f30dc68b4..421e17d67 100644
--- a/drivers/net/txgbe/txgbe_rxtx.h
+++ b/drivers/net/txgbe/txgbe_rxtx.h
@@ -50,6 +50,59 @@ struct txgbe_rx_desc {
 #define TXGBE_RXD_HDRADDR(rxd, v)  \
 	(((volatile __le64 *)(rxd))[1] = cpu_to_le64(v))
 
+/******************************************************************************
+ * Transmit Descriptor
+******************************************************************************/
+/**
+ * Transmit Context Descriptor (TXGBE_TXD_TYP=CTXT)
+ **/
+struct txgbe_tx_ctx_desc {
+	__le32 dw0; /* w.vlan_macip_lens  */
+	__le32 dw1; /* w.seqnum_seed      */
+	__le32 dw2; /* w.type_tucmd_mlhl  */
+	__le32 dw3; /* w.mss_l4len_idx    */
+};
+
+/* @txgbe_tx_ctx_desc.dw0 */
+#define TXGBE_TXD_IPLEN(v)         LS(v, 0, 0x1FF) /* ip/fcoe header end */
+#define TXGBE_TXD_MACLEN(v)        LS(v, 9, 0x7F) /* desc mac len */
+#define TXGBE_TXD_VLAN(v)          LS(v, 16, 0xFFFF) /* vlan tag */
+
+/* @txgbe_tx_ctx_desc.dw1 */
+/*** bit 0-31, when TXGBE_TXD_DTYP_FCOE=0 ***/
+#define TXGBE_TXD_IPSEC_SAIDX(v)   LS(v, 0, 0x3FF) /* ipsec SA index */
+#define TXGBE_TXD_ETYPE(v)         LS(v, 11, 0x1) /* tunnel type */
+#define TXGBE_TXD_ETYPE_UDP        LS(0, 11, 0x1)
+#define TXGBE_TXD_ETYPE_GRE        LS(1, 11, 0x1)
+#define TXGBE_TXD_EIPLEN(v)        LS(v, 12, 0x7F) /* tunnel ip header */
+#define TXGBE_TXD_DTYP_FCOE        MS(16, 0x1) /* FCoE/IP descriptor */
+#define TXGBE_TXD_ETUNLEN(v)       LS(v, 21, 0xFF) /* tunnel header */
+#define TXGBE_TXD_DECTTL(v)        LS(v, 29, 0xF) /* decrease ip TTL */
+/*** bit 0-31, when TXGBE_TXD_DTYP_FCOE=1 ***/
+#define TXGBE_TXD_FCOEF_EOF_MASK   MS(10, 0x3) /* FC EOF index */
+#define TXGBE_TXD_FCOEF_EOF_N      LS(0, 10, 0x3) /* EOFn */
+#define TXGBE_TXD_FCOEF_EOF_T      LS(1, 10, 0x3) /* EOFt */
+#define TXGBE_TXD_FCOEF_EOF_NI     LS(2, 10, 0x3) /* EOFni */
+#define TXGBE_TXD_FCOEF_EOF_A      LS(3, 10, 0x3) /* EOFa */
+#define TXGBE_TXD_FCOEF_SOF        MS(12, 0x1) /* FC SOF index */
+#define TXGBE_TXD_FCOEF_PARINC     MS(13, 0x1) /* Rel_Off in F_CTL */
+#define TXGBE_TXD_FCOEF_ORIE       MS(14, 0x1) /* orientation end */
+#define TXGBE_TXD_FCOEF_ORIS       MS(15, 0x1) /* orientation start */
+
+/* @txgbe_tx_ctx_desc.dw2 */
+#define TXGBE_TXD_IPSEC_ESPLEN(v)  LS(v, 1, 0x1FF) /* ipsec ESP length */
+#define TXGBE_TXD_SNAP             MS(10, 0x1) /* SNAP indication */
+#define TXGBE_TXD_TPID_SEL(v)      LS(v, 11, 0x7) /* vlan tag index */
+#define TXGBE_TXD_IPSEC_ESP        MS(14, 0x1) /* ipsec type: esp=1 ah=0 */
+#define TXGBE_TXD_IPSEC_ESPENC     MS(15, 0x1) /* ESP encrypt */
+#define TXGBE_TXD_CTXT             MS(20, 0x1) /* context descriptor */
+#define TXGBE_TXD_PTID(v)          LS(v, 24, 0xFF) /* packet type */
+/* @txgbe_tx_ctx_desc.dw3 */
+#define TXGBE_TXD_DD               MS(0, 0x1) /* descriptor done */
+#define TXGBE_TXD_IDX(v)           LS(v, 4, 0x1) /* ctxt desc index */
+#define TXGBE_TXD_L4LEN(v)         LS(v, 8, 0xFF) /* l4 header length */
+#define TXGBE_TXD_MSS(v)           LS(v, 16, 0xFFFF) /* l4 MSS */
+
 /**
  * Transmit Data Descriptor (TXGBE_TXD_TYP=DATA)
  **/
@@ -58,9 +111,36 @@ struct txgbe_tx_desc {
 	__le32 dw2; /* r.cmd_type_len,  w.nxtseq_seed */
 	__le32 dw3; /* r.olinfo_status, w.status      */
 };
+/* @txgbe_tx_desc.qw0 */
+
+/* @txgbe_tx_desc.dw2 */
+#define TXGBE_TXD_DATLEN(v)        ((0xFFFF & (v))) /* data buffer length */
+#define TXGBE_TXD_1588             ((0x1) << 19) /* IEEE1588 time stamp */
+#define TXGBE_TXD_DATA             ((0x0) << 20) /* data descriptor */
+#define TXGBE_TXD_EOP              ((0x1) << 24) /* End of Packet */
+#define TXGBE_TXD_FCS              ((0x1) << 25) /* Insert FCS */
+#define TXGBE_TXD_LINKSEC          ((0x1) << 26) /* Insert LinkSec */
+#define TXGBE_TXD_ECU              ((0x1) << 28) /* forward to ECU */
+#define TXGBE_TXD_CNTAG            ((0x1) << 29) /* insert CN tag */
+#define TXGBE_TXD_VLE              ((0x1) << 30) /* insert VLAN tag */
+#define TXGBE_TXD_TSE              ((0x1) << 31) /* transmit segmentation */
+
+#define TXGBE_TXD_FLAGS (TXGBE_TXD_FCS | TXGBE_TXD_EOP)
+
+/* @txgbe_tx_desc.dw3 */
+#define TXGBE_TXD_DD_UNUSED        TXGBE_TXD_DD
+#define TXGBE_TXD_IDX_UNUSED(v)    TXGBE_TXD_IDX(v)
+#define TXGBE_TXD_CC               ((0x1) << 7) /* check context */
+#define TXGBE_TXD_IPSEC            ((0x1) << 8) /* request ipsec offload */
+#define TXGBE_TXD_L4CS             ((0x1) << 9) /* insert TCP/UDP/SCTP csum */
+#define TXGBE_TXD_IPCS             ((0x1) << 10) /* insert IPv4 csum */
+#define TXGBE_TXD_EIPCS            ((0x1) << 11) /* insert outer IP csum */
+#define TXGBE_TXD_MNGFLT           ((0x1) << 12) /* enable management filter */
+#define TXGBE_TXD_PAYLEN(v)        ((0x7FFFF & (v)) << 13) /* payload length */
 
 #define RTE_PMD_TXGBE_TX_MAX_BURST 32
 #define RTE_PMD_TXGBE_RX_MAX_BURST 32
+#define RTE_TXGBE_TX_MAX_FREE_BUF_SZ 64
 
 #define RX_RING_SZ ((TXGBE_RING_DESC_MAX + RTE_PMD_TXGBE_RX_MAX_BURST) * \
 		    sizeof(struct txgbe_rx_desc))
@@ -147,6 +227,8 @@ struct txgbe_tx_queue {
 	/**< Start freeing TX buffers if there are less free descriptors than
 	     this value. */
 	uint16_t            tx_free_thresh;
+	uint16_t            nb_tx_free;
+	uint16_t tx_next_dd; /**< next desc to scan for DD bit */
 	uint16_t            queue_id;      /**< TX queue index. */
 	uint16_t            reg_idx;       /**< TX queue register index. */
 	uint16_t            port_id;       /**< Device port identifier. */
-- 
2.18.4




^ permalink raw reply	[flat|nested] 49+ messages in thread

* [dpdk-dev] [PATCH v1 24/42] net/txgbe: fill transmit function with hardware offload
  2020-09-01 11:50 [dpdk-dev] [PATCH v1 01/42] net/txgbe: add build and doc infrastructure Jiawen Wu
                   ` (21 preceding siblings ...)
  2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 23/42] net/txgbe: fill simple transmit function Jiawen Wu
@ 2020-09-01 11:50 ` Jiawen Wu
  2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 25/42] net/txgbe: fill receive functions Jiawen Wu
                   ` (18 subsequent siblings)
  41 siblings, 0 replies; 49+ messages in thread
From: Jiawen Wu @ 2020-09-01 11:50 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Fill transmit function with hardware offload.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/txgbe_rxtx.c | 662 ++++++++++++++++++++++++++++++++-
 drivers/net/txgbe/txgbe_rxtx.h |  45 +++
 2 files changed, 703 insertions(+), 4 deletions(-)

diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index 3db9d314f..39055b4d1 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -25,6 +25,19 @@
 #include "txgbe_ethdev.h"
 #include "txgbe_rxtx.h"
 
+/* Bit Mask to indicate what bits required for building TX context */
+static const u64 TXGBE_TX_OFFLOAD_MASK = (
+		PKT_TX_OUTER_IPV6 |
+		PKT_TX_OUTER_IPV4 |
+		PKT_TX_IPV6 |
+		PKT_TX_IPV4 |
+		PKT_TX_VLAN_PKT |
+		PKT_TX_IP_CKSUM |
+		PKT_TX_L4_MASK |
+		PKT_TX_TCP_SEG |
+		PKT_TX_TUNNEL_MASK |
+		PKT_TX_OUTER_IP_CKSUM);
+
 /*********************************************************************
  *
  *  TX functions
@@ -254,19 +267,660 @@ txgbe_xmit_pkts_simple(void *tx_queue, struct rte_mbuf **tx_pkts,
 	return nb_tx;
 }
 
+static inline void
+txgbe_set_xmit_ctx(struct txgbe_tx_queue *txq,
+		volatile struct txgbe_tx_ctx_desc *ctx_txd,
+		uint64_t ol_flags, union txgbe_tx_offload tx_offload,
+		__rte_unused uint64_t *mdata)
+{
+	union txgbe_tx_offload tx_offload_mask;
+	uint32_t type_tucmd_mlhl;
+	uint32_t mss_l4len_idx;
+	uint32_t ctx_idx;
+	uint32_t vlan_macip_lens;
+	uint32_t tunnel_seed;
+
+	ctx_idx = txq->ctx_curr;
+	tx_offload_mask.data[0] = 0;
+	tx_offload_mask.data[1] = 0;
+
+	/* Specify which HW CTX to upload. */
+	mss_l4len_idx = TXGBE_TXD_IDX(ctx_idx);
+	type_tucmd_mlhl = TXGBE_TXD_CTXT;
+
+	tx_offload_mask.ptid |= ~0;
+	type_tucmd_mlhl |= TXGBE_TXD_PTID(tx_offload.ptid);
+
+	/* check if TCP segmentation required for this packet */
+	if (ol_flags & PKT_TX_TCP_SEG) {
+		tx_offload_mask.l2_len |= ~0;
+		tx_offload_mask.l3_len |= ~0;
+		tx_offload_mask.l4_len |= ~0;
+		tx_offload_mask.tso_segsz |= ~0;
+		mss_l4len_idx |= TXGBE_TXD_MSS(tx_offload.tso_segsz);
+		mss_l4len_idx |= TXGBE_TXD_L4LEN(tx_offload.l4_len);
+	} else { /* no TSO, check if hardware checksum is needed */
+		if (ol_flags & PKT_TX_IP_CKSUM) {
+			tx_offload_mask.l2_len |= ~0;
+			tx_offload_mask.l3_len |= ~0;
+		}
+
+		switch (ol_flags & PKT_TX_L4_MASK) {
+		case PKT_TX_UDP_CKSUM:
+			mss_l4len_idx |=
+				TXGBE_TXD_L4LEN(sizeof(struct rte_udp_hdr));
+			tx_offload_mask.l2_len |= ~0;
+			tx_offload_mask.l3_len |= ~0;
+			break;
+		case PKT_TX_TCP_CKSUM:
+			mss_l4len_idx |=
+				TXGBE_TXD_L4LEN(sizeof(struct rte_tcp_hdr));
+			tx_offload_mask.l2_len |= ~0;
+			tx_offload_mask.l3_len |= ~0;
+			break;
+		case PKT_TX_SCTP_CKSUM:
+			mss_l4len_idx |=
+				TXGBE_TXD_L4LEN(sizeof(struct rte_sctp_hdr));
+			tx_offload_mask.l2_len |= ~0;
+			tx_offload_mask.l3_len |= ~0;
+			break;
+		default:
+			break;
+		}
+	}
+
+	vlan_macip_lens = TXGBE_TXD_IPLEN(tx_offload.l3_len >> 1);
+
+	if (ol_flags & PKT_TX_TUNNEL_MASK) {
+		tx_offload_mask.outer_tun_len |= ~0;
+		tx_offload_mask.outer_l2_len |= ~0;
+		tx_offload_mask.outer_l3_len |= ~0;
+		tx_offload_mask.l2_len |= ~0;
+		tunnel_seed = TXGBE_TXD_ETUNLEN(tx_offload.outer_tun_len >> 1);
+		tunnel_seed |= TXGBE_TXD_EIPLEN(tx_offload.outer_l3_len >> 2);
+
+		switch (ol_flags & PKT_TX_TUNNEL_MASK) {
+		case PKT_TX_TUNNEL_IPIP:
+			/* for non UDP / GRE tunneling, set to 0b */
+			break;
+		case PKT_TX_TUNNEL_VXLAN:
+		case PKT_TX_TUNNEL_GENEVE:
+			tunnel_seed |= TXGBE_TXD_ETYPE_UDP;
+			break;
+		case PKT_TX_TUNNEL_GRE:
+			tunnel_seed |= TXGBE_TXD_ETYPE_GRE;
+			break;
+		default:
+			PMD_TX_LOG(ERR, "Tunnel type not supported");
+			return;
+		}
+		vlan_macip_lens |= TXGBE_TXD_MACLEN(tx_offload.outer_l2_len);
+	} else {
+		tunnel_seed = 0;
+		vlan_macip_lens |= TXGBE_TXD_MACLEN(tx_offload.l2_len);
+	}
+
+	if (ol_flags & PKT_TX_VLAN_PKT) {
+		tx_offload_mask.vlan_tci |= ~0;
+		vlan_macip_lens |= TXGBE_TXD_VLAN(tx_offload.vlan_tci);
+	}
+
+	txq->ctx_cache[ctx_idx].flags = ol_flags;
+	txq->ctx_cache[ctx_idx].tx_offload.data[0] =
+		tx_offload_mask.data[0] & tx_offload.data[0];
+	txq->ctx_cache[ctx_idx].tx_offload.data[1] =
+		tx_offload_mask.data[1] & tx_offload.data[1];
+	txq->ctx_cache[ctx_idx].tx_offload_mask = tx_offload_mask;
+
+	ctx_txd->dw0 = rte_cpu_to_le_32(vlan_macip_lens);
+	ctx_txd->dw1 = rte_cpu_to_le_32(tunnel_seed);
+	ctx_txd->dw2 = rte_cpu_to_le_32(type_tucmd_mlhl);
+	ctx_txd->dw3 = rte_cpu_to_le_32(mss_l4len_idx);
+}
+
+/*
+ * Check which hardware context can be used. Use the existing match
+ * or create a new context descriptor.
+ */
+static inline uint32_t
+what_ctx_update(struct txgbe_tx_queue *txq, uint64_t flags,
+		   union txgbe_tx_offload tx_offload)
+{
+	/* If match with the current used context */
+	if (likely((txq->ctx_cache[txq->ctx_curr].flags == flags) &&
+		   (txq->ctx_cache[txq->ctx_curr].tx_offload.data[0] ==
+		    (txq->ctx_cache[txq->ctx_curr].tx_offload_mask.data[0]
+		     & tx_offload.data[0])) &&
+		   (txq->ctx_cache[txq->ctx_curr].tx_offload.data[1] ==
+		    (txq->ctx_cache[txq->ctx_curr].tx_offload_mask.data[1]
+		     & tx_offload.data[1]))))
+		return txq->ctx_curr;
+
+	/* Check whether the other context matches */
+	txq->ctx_curr ^= 1;
+	if (likely((txq->ctx_cache[txq->ctx_curr].flags == flags) &&
+		   (txq->ctx_cache[txq->ctx_curr].tx_offload.data[0] ==
+		    (txq->ctx_cache[txq->ctx_curr].tx_offload_mask.data[0]
+		     & tx_offload.data[0])) &&
+		   (txq->ctx_cache[txq->ctx_curr].tx_offload.data[1] ==
+		    (txq->ctx_cache[txq->ctx_curr].tx_offload_mask.data[1]
+		     & tx_offload.data[1]))))
+		return txq->ctx_curr;
+
+	/* Mismatch: a new context descriptor is needed */
+	return TXGBE_CTX_NUM;
+}
+
+static inline uint32_t
+tx_desc_cksum_flags_to_olinfo(uint64_t ol_flags)
+{
+	uint32_t tmp = 0;
+
+	if ((ol_flags & PKT_TX_L4_MASK) != PKT_TX_L4_NO_CKSUM) {
+		tmp |= TXGBE_TXD_CC;
+		tmp |= TXGBE_TXD_L4CS;
+	}
+	if (ol_flags & PKT_TX_IP_CKSUM) {
+		tmp |= TXGBE_TXD_CC;
+		tmp |= TXGBE_TXD_IPCS;
+	}
+	if (ol_flags & PKT_TX_OUTER_IP_CKSUM) {
+		tmp |= TXGBE_TXD_CC;
+		tmp |= TXGBE_TXD_EIPCS;
+	}
+	if (ol_flags & PKT_TX_TCP_SEG) {
+		tmp |= TXGBE_TXD_CC;
+		/* implies IPv4 cksum */
+		if (ol_flags & PKT_TX_IPV4)
+			tmp |= TXGBE_TXD_IPCS;
+		tmp |= TXGBE_TXD_L4CS;
+	}
+	if (ol_flags & PKT_TX_VLAN_PKT) {
+		tmp |= TXGBE_TXD_CC;
+	}
+
+	return tmp;
+}
+
+static inline uint32_t
+tx_desc_ol_flags_to_cmdtype(uint64_t ol_flags)
+{
+	uint32_t cmdtype = 0;
+
+	if (ol_flags & PKT_TX_VLAN_PKT)
+		cmdtype |= TXGBE_TXD_VLE;
+	if (ol_flags & PKT_TX_TCP_SEG)
+		cmdtype |= TXGBE_TXD_TSE;
+	if (ol_flags & PKT_TX_MACSEC)
+		cmdtype |= TXGBE_TXD_LINKSEC;
+	return cmdtype;
+}
+
+static inline uint8_t
+tx_desc_ol_flags_to_ptid(uint64_t oflags, uint32_t ptype)
+{
+	bool tun;
+
+	if (ptype)
+		return txgbe_encode_ptype(ptype);
+
+	/* Only support flags in TXGBE_TX_OFFLOAD_MASK */
+	tun = !!(oflags & PKT_TX_TUNNEL_MASK);
+
+	/* L2 level */
+	ptype = RTE_PTYPE_L2_ETHER;
+	if (oflags & PKT_TX_VLAN) {
+		ptype |= RTE_PTYPE_L2_ETHER_VLAN;
+	}
+
+	/* L3 level */
+	if (oflags & (PKT_TX_OUTER_IPV4 | PKT_TX_OUTER_IP_CKSUM)) {
+		ptype |= RTE_PTYPE_L3_IPV4;
+	} else if (oflags & (PKT_TX_OUTER_IPV6)) {
+		ptype |= RTE_PTYPE_L3_IPV6;
+	}
+
+	if (oflags & (PKT_TX_IPV4 | PKT_TX_IP_CKSUM)) {
+		ptype |= (tun ? RTE_PTYPE_INNER_L3_IPV4 : RTE_PTYPE_L3_IPV4);
+	} else if (oflags & (PKT_TX_IPV6)) {
+		ptype |= (tun ? RTE_PTYPE_INNER_L3_IPV6 : RTE_PTYPE_L3_IPV6);
+	}
+
+	/* L4 level */
+	switch (oflags & (PKT_TX_L4_MASK)) {
+	case PKT_TX_TCP_CKSUM:
+		ptype |= (tun ? RTE_PTYPE_INNER_L4_TCP : RTE_PTYPE_L4_TCP);
+		break;
+	case PKT_TX_UDP_CKSUM:
+		ptype |= (tun ? RTE_PTYPE_INNER_L4_UDP : RTE_PTYPE_L4_UDP);
+		break;
+	case PKT_TX_SCTP_CKSUM:
+		ptype |= (tun ? RTE_PTYPE_INNER_L4_SCTP : RTE_PTYPE_L4_SCTP);
+		break;
+	}
+
+	if (oflags & PKT_TX_TCP_SEG) {
+		ptype |= (tun ? RTE_PTYPE_INNER_L4_TCP : RTE_PTYPE_L4_TCP);
+	}
+
+	/* Tunnel */
+	switch (oflags & PKT_TX_TUNNEL_MASK) {
+	case PKT_TX_TUNNEL_VXLAN:
+		ptype |= RTE_PTYPE_L2_ETHER |
+			 RTE_PTYPE_L3_IPV4 |
+			 RTE_PTYPE_TUNNEL_VXLAN;
+		ptype |= RTE_PTYPE_INNER_L2_ETHER;
+		break;
+	case PKT_TX_TUNNEL_GRE:
+		ptype |= RTE_PTYPE_L2_ETHER |
+			 RTE_PTYPE_L3_IPV4 |
+			 RTE_PTYPE_TUNNEL_GRE;
+		ptype |= RTE_PTYPE_INNER_L2_ETHER;
+		break;
+	case PKT_TX_TUNNEL_GENEVE:
+		ptype |= RTE_PTYPE_L2_ETHER |
+			 RTE_PTYPE_L3_IPV4 |
+			 RTE_PTYPE_TUNNEL_GENEVE;
+		ptype |= RTE_PTYPE_INNER_L2_ETHER;
+		break;
+	case PKT_TX_TUNNEL_VXLAN_GPE:
+		ptype |= RTE_PTYPE_L2_ETHER |
+			 RTE_PTYPE_L3_IPV4 |
+			 RTE_PTYPE_TUNNEL_VXLAN_GPE;
+		ptype |= RTE_PTYPE_INNER_L2_ETHER;
+		break;
+	case PKT_TX_TUNNEL_IPIP:
+	case PKT_TX_TUNNEL_IP:
+		ptype |= RTE_PTYPE_L2_ETHER |
+			 RTE_PTYPE_L3_IPV4 |
+			 RTE_PTYPE_TUNNEL_IP;
+		break;
+	}
+
+	return txgbe_encode_ptype(ptype);
+}
+
 #ifndef DEFAULT_TX_FREE_THRESH
 #define DEFAULT_TX_FREE_THRESH 32
 #endif
 
+/* Reset transmit descriptors after they have been used */
+static inline int
+txgbe_xmit_cleanup(struct txgbe_tx_queue *txq)
+{
+	struct txgbe_tx_entry *sw_ring = txq->sw_ring;
+	volatile struct txgbe_tx_desc *txr = txq->tx_ring;
+	uint16_t last_desc_cleaned = txq->last_desc_cleaned;
+	uint16_t nb_tx_desc = txq->nb_tx_desc;
+	uint16_t desc_to_clean_to;
+	uint16_t nb_tx_to_clean;
+	uint32_t status;
+
+	/* Determine the last descriptor needing to be cleaned */
+	desc_to_clean_to = (uint16_t)(last_desc_cleaned + txq->tx_free_thresh);
+	if (desc_to_clean_to >= nb_tx_desc)
+		desc_to_clean_to = (uint16_t)(desc_to_clean_to - nb_tx_desc);
+
+	/* Check to make sure the last descriptor to clean is done */
+	desc_to_clean_to = sw_ring[desc_to_clean_to].last_id;
+	status = txr[desc_to_clean_to].dw3;
+	if (!(status & rte_cpu_to_le_32(TXGBE_TXD_DD))) {
+		PMD_TX_FREE_LOG(DEBUG,
+				"TX descriptor %4u is not done "
+				"(port=%d queue=%d)",
+				desc_to_clean_to,
+				txq->port_id, txq->queue_id);
+		if (txq->nb_tx_free >> 1 < txq->tx_free_thresh)
+			txgbe_set32_masked(txq->tdc_reg_addr,
+				TXGBE_TXCFG_FLUSH, TXGBE_TXCFG_FLUSH);
+		/* Failed to clean any descriptors, better luck next time */
+		return -(1);
+	}
+
+	/* Figure out how many descriptors will be cleaned */
+	if (last_desc_cleaned > desc_to_clean_to)
+		nb_tx_to_clean = (uint16_t)((nb_tx_desc - last_desc_cleaned) +
+							desc_to_clean_to);
+	else
+		nb_tx_to_clean = (uint16_t)(desc_to_clean_to -
+						last_desc_cleaned);
+
+	PMD_TX_FREE_LOG(DEBUG,
+			"Cleaning %4u TX descriptors: %4u to %4u "
+			"(port=%d queue=%d)",
+			nb_tx_to_clean, last_desc_cleaned, desc_to_clean_to,
+			txq->port_id, txq->queue_id);
+
+	/*
+	 * The last descriptor to clean is done, so that means all the
+	 * descriptors from the last descriptor that was cleaned
+	 * up to the last descriptor with the RS bit set
+	 * are done. Only reset the threshold descriptor.
+	 */
+	txr[desc_to_clean_to].dw3 = 0;
+
+	/* Update the txq to reflect the last descriptor that was cleaned */
+	txq->last_desc_cleaned = desc_to_clean_to;
+	txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + nb_tx_to_clean);
+
+	/* No Error */
+	return 0;
+}
+
+static inline uint8_t
+txgbe_get_tun_len(struct rte_mbuf *mbuf)
+{
+	struct txgbe_genevehdr genevehdr;
+	const struct txgbe_genevehdr *gh;
+	uint8_t tun_len;
+
+	switch (mbuf->ol_flags & PKT_TX_TUNNEL_MASK) {
+	case PKT_TX_TUNNEL_IPIP:
+		tun_len = 0;
+		break;
+	case PKT_TX_TUNNEL_VXLAN:
+	case PKT_TX_TUNNEL_VXLAN_GPE:
+		tun_len = sizeof(struct txgbe_udphdr)
+			+ sizeof(struct txgbe_vxlanhdr);
+		break;
+	case PKT_TX_TUNNEL_GRE:
+		tun_len = sizeof(struct txgbe_nvgrehdr);
+		break;
+	case PKT_TX_TUNNEL_GENEVE:
+		gh = rte_pktmbuf_read(mbuf,
+			mbuf->outer_l2_len + mbuf->outer_l3_len,
+			sizeof(genevehdr), &genevehdr);
+		tun_len = sizeof(struct txgbe_udphdr)
+			+ sizeof(struct txgbe_genevehdr)
+			+ (gh->opt_len << 2);
+		break;
+	default:
+		tun_len = 0;
+	}
+
+	return tun_len;
+}
+
 uint16_t
 txgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		uint16_t nb_pkts)
 {
-	RTE_SET_USED(tx_queue);
-	RTE_SET_USED(tx_pkts);
-	RTE_SET_USED(nb_pkts);
+	struct txgbe_tx_queue *txq;
+	struct txgbe_tx_entry *sw_ring;
+	struct txgbe_tx_entry *txe, *txn;
+	volatile struct txgbe_tx_desc *txr;
+	volatile struct txgbe_tx_desc *txd;
+	struct rte_mbuf     *tx_pkt;
+	struct rte_mbuf     *m_seg;
+	uint64_t buf_dma_addr;
+	uint32_t olinfo_status;
+	uint32_t cmd_type_len;
+	uint32_t pkt_len;
+	uint16_t slen;
+	uint64_t ol_flags;
+	uint16_t tx_id;
+	uint16_t tx_last;
+	uint16_t nb_tx;
+	uint16_t nb_used;
+	uint64_t tx_ol_req;
+	uint32_t ctx = 0;
+	uint32_t new_ctx;
+	union txgbe_tx_offload tx_offload;
+
+	tx_offload.data[0] = 0;
+	tx_offload.data[1] = 0;
+	txq = tx_queue;
+	sw_ring = txq->sw_ring;
+	txr     = txq->tx_ring;
+	tx_id   = txq->tx_tail;
+	txe = &sw_ring[tx_id];
+
+	/* Determine if the descriptor ring needs to be cleaned. */
+	if (txq->nb_tx_free < txq->tx_free_thresh)
+		txgbe_xmit_cleanup(txq);
 
-	return 0;
+	rte_prefetch0(&txe->mbuf->pool);
+
+	/* TX loop */
+	for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
+		new_ctx = 0;
+		tx_pkt = *tx_pkts++;
+		pkt_len = tx_pkt->pkt_len;
+
+		/*
+		 * Determine how many (if any) context descriptors
+		 * are needed for offload functionality.
+		 */
+		ol_flags = tx_pkt->ol_flags;
+
+		/* If hardware offload required */
+		tx_ol_req = ol_flags & TXGBE_TX_OFFLOAD_MASK;
+		if (tx_ol_req) {
+			tx_offload.ptid = tx_desc_ol_flags_to_ptid(
+					tx_ol_req, tx_pkt->packet_type);
+			tx_offload.l2_len = tx_pkt->l2_len;
+			tx_offload.l3_len = tx_pkt->l3_len;
+			tx_offload.l4_len = tx_pkt->l4_len;
+			tx_offload.vlan_tci = tx_pkt->vlan_tci;
+			tx_offload.tso_segsz = tx_pkt->tso_segsz;
+			tx_offload.outer_l2_len = tx_pkt->outer_l2_len;
+			tx_offload.outer_l3_len = tx_pkt->outer_l3_len;
+			tx_offload.outer_tun_len = txgbe_get_tun_len(tx_pkt);
+
+			/* Build a new context or reuse an existing one. */
+			ctx = what_ctx_update(txq, tx_ol_req, tx_offload);
+			/* Only allocate a context descriptor if required. */
+			new_ctx = (ctx == TXGBE_CTX_NUM);
+			ctx = txq->ctx_curr;
+		}
+
+		/*
+		 * Keep track of how many descriptors are used in this loop.
+		 * This will always be the number of segments plus the number
+		 * of context descriptors required to transmit the packet.
+		 */
+		nb_used = (uint16_t)(tx_pkt->nb_segs + new_ctx);
+
+		/*
+		 * The number of descriptors that must be allocated for a
+		 * packet is the number of segments of that packet, plus 1
+		 * Context Descriptor for the hardware offload, if any.
+		 * Determine the last TX descriptor to allocate in the TX ring
+		 * for the packet, starting from the current position (tx_id)
+		 * in the ring.
+		 */
+		tx_last = (uint16_t) (tx_id + nb_used - 1);
+
+		/* Circular ring */
+		if (tx_last >= txq->nb_tx_desc)
+			tx_last = (uint16_t) (tx_last - txq->nb_tx_desc);
+
+		PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u pktlen=%u"
+			   " tx_first=%u tx_last=%u",
+			   (unsigned) txq->port_id,
+			   (unsigned) txq->queue_id,
+			   (unsigned) pkt_len,
+			   (unsigned) tx_id,
+			   (unsigned) tx_last);
+
+		/*
+		 * Make sure there are enough TX descriptors available to
+		 * transmit the entire packet.
+		 * nb_used better be less than or equal to txq->tx_free_thresh
+		 */
+		if (nb_used > txq->nb_tx_free) {
+			PMD_TX_FREE_LOG(DEBUG,
+					"Not enough free TX descriptors "
+					"nb_used=%4u nb_free=%4u "
+					"(port=%d queue=%d)",
+					nb_used, txq->nb_tx_free,
+					txq->port_id, txq->queue_id);
+
+			if (txgbe_xmit_cleanup(txq) != 0) {
+				/* Could not clean any descriptors */
+				if (nb_tx == 0)
+					return 0;
+				goto end_of_tx;
+			}
+
+			/* nb_used better be <= txq->tx_free_thresh */
+			if (unlikely(nb_used > txq->tx_free_thresh)) {
+				PMD_TX_FREE_LOG(DEBUG,
+					"The number of descriptors needed to "
+					"transmit the packet exceeds the "
+					"RS bit threshold. This will impact "
+					"performance. "
+					"nb_used=%4u nb_free=%4u "
+					"tx_free_thresh=%4u. "
+					"(port=%d queue=%d)",
+					nb_used, txq->nb_tx_free,
+					txq->tx_free_thresh,
+					txq->port_id, txq->queue_id);
+				/*
+				 * Loop here until there are enough TX
+				 * descriptors or until the ring cannot be
+				 * cleaned.
+				 */
+				while (nb_used > txq->nb_tx_free) {
+					if (txgbe_xmit_cleanup(txq) != 0) {
+						/*
+						 * Could not clean any
+						 * descriptors
+						 */
+						if (nb_tx == 0)
+							return 0;
+						goto end_of_tx;
+					}
+				}
+			}
+		}
+
+		/*
+		 * By now there are enough free TX descriptors to transmit
+		 * the packet.
+		 */
+
+		/*
+		 * Set common flags of all TX Data Descriptors.
+		 *
+		 * The following bits must be set in all Data Descriptors:
+		 *   - TXGBE_TXD_DTYP_DATA
+		 *   - TXGBE_TXD_DCMD_DEXT
+		 *
+		 * The following bits must be set in the first Data Descriptor
+		 * and are ignored in the other ones:
+		 *   - TXGBE_TXD_DCMD_IFCS
+		 *   - TXGBE_TXD_MAC_1588
+		 *   - TXGBE_TXD_DCMD_VLE
+		 *
+		 * The following bits must only be set in the last Data
+		 * Descriptor:
+		 *   - TXGBE_TXD_CMD_EOP
+		 *
+		 * The following bits can be set in any Data Descriptor, but
+		 * are only set in the last Data Descriptor:
+		 *   - TXGBE_TXD_CMD_RS
+		 */
+		cmd_type_len = TXGBE_TXD_FCS;
+
+		olinfo_status = 0;
+		if (tx_ol_req) {
+
+			if (ol_flags & PKT_TX_TCP_SEG) {
+				/* when TSO is on, the paylen in the descriptor
+				 * is not the packet len but the tcp payload len */
+				pkt_len -= (tx_offload.l2_len +
+					tx_offload.l3_len + tx_offload.l4_len);
+				pkt_len -=
+					(tx_pkt->ol_flags & PKT_TX_TUNNEL_MASK)
+					? tx_offload.outer_l2_len +
+					  tx_offload.outer_l3_len : 0;
+			}
+
+			/*
+			 * Setup the TX Advanced Context Descriptor if required
+			 */
+			if (new_ctx) {
+				volatile struct txgbe_tx_ctx_desc *ctx_txd;
+
+				ctx_txd = (volatile struct txgbe_tx_ctx_desc *)
+				    &txr[tx_id];
+
+				txn = &sw_ring[txe->next_id];
+				rte_prefetch0(&txn->mbuf->pool);
+
+				if (txe->mbuf != NULL) {
+					rte_pktmbuf_free_seg(txe->mbuf);
+					txe->mbuf = NULL;
+				}
+
+				txgbe_set_xmit_ctx(txq, ctx_txd, tx_ol_req,
+					tx_offload, &tx_pkt->udata64);
+
+				txe->last_id = tx_last;
+				tx_id = txe->next_id;
+				txe = txn;
+			}
+
+			/*
+			 * Set up the TX advanced data descriptor.
+			 * This path is taken whether the context
+			 * descriptor is new or reused.
+			 */
+			cmd_type_len  |= tx_desc_ol_flags_to_cmdtype(ol_flags);
+			olinfo_status |= tx_desc_cksum_flags_to_olinfo(ol_flags);
+			olinfo_status |= TXGBE_TXD_IDX(ctx);
+		}
+
+		olinfo_status |= TXGBE_TXD_PAYLEN(pkt_len);
+
+		m_seg = tx_pkt;
+		do {
+			txd = &txr[tx_id];
+			txn = &sw_ring[txe->next_id];
+			rte_prefetch0(&txn->mbuf->pool);
+
+			if (txe->mbuf != NULL)
+				rte_pktmbuf_free_seg(txe->mbuf);
+			txe->mbuf = m_seg;
+
+			/*
+			 * Set up Transmit Data Descriptor.
+			 */
+			slen = m_seg->data_len;
+			buf_dma_addr = rte_mbuf_data_iova(m_seg);
+			txd->qw0 = rte_cpu_to_le_64(buf_dma_addr);
+			txd->dw2 = rte_cpu_to_le_32(cmd_type_len | slen);
+			txd->dw3 = rte_cpu_to_le_32(olinfo_status);
+			txe->last_id = tx_last;
+			tx_id = txe->next_id;
+			txe = txn;
+			m_seg = m_seg->next;
+		} while (m_seg != NULL);
+
+		/*
+		 * The last packet data descriptor needs End Of Packet (EOP)
+		 */
+		cmd_type_len |= TXGBE_TXD_EOP;
+		txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_used);
+
+		txd->dw2 |= rte_cpu_to_le_32(cmd_type_len);
+	}
+
+end_of_tx:
+
+	rte_wmb();
+
+	/*
+	 * Set the Transmit Descriptor Tail (TDT)
+	 */
+	PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u tx_tail=%u nb_tx=%u",
+		   (unsigned) txq->port_id, (unsigned) txq->queue_id,
+		   (unsigned) tx_id, (unsigned) nb_tx);
+	txgbe_set32_relaxed(txq->tdt_reg_addr, tx_id);
+	txq->tx_tail = tx_id;
+
+	return nb_tx;
 }
 
 uint16_t
diff --git a/drivers/net/txgbe/txgbe_rxtx.h b/drivers/net/txgbe/txgbe_rxtx.h
index 421e17d67..5f01068de 100644
--- a/drivers/net/txgbe/txgbe_rxtx.h
+++ b/drivers/net/txgbe/txgbe_rxtx.h
@@ -209,6 +209,45 @@ struct txgbe_rx_queue {
 	struct rte_mbuf *rx_stage[RTE_PMD_TXGBE_RX_MAX_BURST*2];
 };
 
+/**
+ * TXGBE CTX Constants
+ */
+enum txgbe_ctx_num {
+	TXGBE_CTX_0    = 0, /**< CTX0 */
+	TXGBE_CTX_1    = 1, /**< CTX1  */
+	TXGBE_CTX_NUM  = 2, /**< CTX NUMBER  */
+};
+
+/** Offload features */
+union txgbe_tx_offload {
+	uint64_t data[2];
+	struct {
+		uint64_t ptid:8; /**< Packet Type Identifier. */
+		uint64_t l2_len:7; /**< L2 (MAC) Header Length. */
+		uint64_t l3_len:9; /**< L3 (IP) Header Length. */
+		uint64_t l4_len:8; /**< L4 (TCP/UDP) Header Length. */
+		uint64_t tso_segsz:16; /**< TCP TSO segment size */
+		uint64_t vlan_tci:16;
+		/**< VLAN Tag Control Identifier (CPU order). */
+
+		/* fields for TX offloading of tunnels */
+		uint64_t outer_tun_len:8; /**< Outer TUN (Tunnel) Hdr Length. */
+		uint64_t outer_l2_len:8; /**< Outer L2 (MAC) Hdr Length. */
+		uint64_t outer_l3_len:16; /**< Outer L3 (IP) Hdr Length. */
+	};
+};
+
+/**
+ * Structure to check whether a new context needs to be built.
+ */
+struct txgbe_ctx_info {
+	uint64_t flags;           /**< ol_flags for context build. */
+	/**< tx offload: vlan, tso, l2-l3-l4 lengths. */
+	union txgbe_tx_offload tx_offload;
+	/** compare mask for tx offload. */
+	union txgbe_tx_offload tx_offload_mask;
+};
+
 /**
  * Structure associated with each TX queue.
  */
@@ -227,6 +266,9 @@ struct txgbe_tx_queue {
 	/**< Start freeing TX buffers if there are less free descriptors than
 	     this value. */
 	uint16_t            tx_free_thresh;
+	/** Index to last TX descriptor to have been cleaned. */
+	uint16_t            last_desc_cleaned;
+	/** Total number of TX descriptors ready to be allocated. */
 	uint16_t            nb_tx_free;
 	uint16_t tx_next_dd; /**< next desc to scan for DD bit */
 	uint16_t            queue_id;      /**< TX queue index. */
@@ -236,6 +278,9 @@ struct txgbe_tx_queue {
 	uint8_t             hthresh;       /**< Host threshold register. */
 	uint8_t             wthresh;       /**< Write-back threshold reg. */
 	uint64_t offloads; /**< Tx offload flags of DEV_TX_OFFLOAD_* */
+	uint32_t            ctx_curr;      /**< Hardware context states. */
+	/** Hardware context history (one entry per context). */
+	struct txgbe_ctx_info ctx_cache[TXGBE_CTX_NUM];
 	const struct txgbe_txq_ops *ops;       /**< txq ops */
 	uint8_t             tx_deferred_start; /**< not in global dev start. */
 };
-- 
2.18.4




^ permalink raw reply	[flat|nested] 49+ messages in thread

* [dpdk-dev] [PATCH v1 25/42] net/txgbe: fill receive functions
  2020-09-01 11:50 [dpdk-dev] [PATCH v1 01/42] net/txgbe: add build and doc infrastructure Jiawen Wu
                   ` (22 preceding siblings ...)
  2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 24/42] net/txgbe: fill transmit function with hardware offload Jiawen Wu
@ 2020-09-01 11:50 ` Jiawen Wu
  2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 26/42] net/txgbe: fill TX prepare function Jiawen Wu
                   ` (17 subsequent siblings)
  41 siblings, 0 replies; 49+ messages in thread
From: Jiawen Wu @ 2020-09-01 11:50 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Fill receive functions and define receive descriptor.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/base/txgbe_type.h |   2 +
 drivers/net/txgbe/txgbe_ethdev.c    |  13 +
 drivers/net/txgbe/txgbe_ethdev.h    |   2 +
 drivers/net/txgbe/txgbe_ptypes.c    |   2 -
 drivers/net/txgbe/txgbe_rxtx.c      | 872 +++++++++++++++++++++++++++-
 drivers/net/txgbe/txgbe_rxtx.h      | 102 ++++
 6 files changed, 978 insertions(+), 15 deletions(-)

diff --git a/drivers/net/txgbe/base/txgbe_type.h b/drivers/net/txgbe/base/txgbe_type.h
index c05e8e8b1..1c16257da 100644
--- a/drivers/net/txgbe/base/txgbe_type.h
+++ b/drivers/net/txgbe/base/txgbe_type.h
@@ -37,6 +37,8 @@
 #define TXGBE_PHYSICAL_LAYER_10BASE_T		0x08000
 #define TXGBE_PHYSICAL_LAYER_2500BASE_KX	0x10000
 
+#define TXGBE_ATR_HASH_MASK			0x7fff
+
 enum txgbe_eeprom_type {
 	txgbe_eeprom_unknown = 0,
 	txgbe_eeprom_spi,
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index d2a355524..08b31f66e 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -809,6 +809,18 @@ txgbe_dev_stats_reset(struct rte_eth_dev *dev)
 	return 0;
 }
 
+const uint32_t *
+txgbe_dev_supported_ptypes_get(struct rte_eth_dev *dev)
+{
+	if (dev->rx_pkt_burst == txgbe_recv_pkts ||
+	    dev->rx_pkt_burst == txgbe_recv_pkts_lro_single_alloc ||
+	    dev->rx_pkt_burst == txgbe_recv_pkts_lro_bulk_alloc ||
+	    dev->rx_pkt_burst == txgbe_recv_pkts_bulk_alloc)
+		return txgbe_get_supported_ptypes();
+
+	return NULL;
+}
+
 void
 txgbe_dev_setup_link_alarm_handler(void *param)
 {
@@ -1322,6 +1334,7 @@ static const struct eth_dev_ops txgbe_eth_dev_ops = {
 	.link_update                = txgbe_dev_link_update,
 	.stats_get                  = txgbe_dev_stats_get,
 	.stats_reset                = txgbe_dev_stats_reset,
+	.dev_supported_ptypes_get   = txgbe_dev_supported_ptypes_get,
 	.rx_queue_start	            = txgbe_dev_rx_queue_start,
 	.rx_queue_stop              = txgbe_dev_rx_queue_stop,
 	.tx_queue_start	            = txgbe_dev_tx_queue_start,
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index be6876823..dceb88d2f 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -160,5 +160,7 @@ void txgbe_pf_mbx_process(struct rte_eth_dev *eth_dev);
 #define TXGBE_LINK_DOWN_CHECK_TIMEOUT 4000 /* ms */
 #define TXGBE_LINK_UP_CHECK_TIMEOUT   1000 /* ms */
 #define TXGBE_VMDQ_NUM_UC_MAC         4096 /* Maximum nb. of UC MAC addr. */
+
+const uint32_t *txgbe_dev_supported_ptypes_get(struct rte_eth_dev *dev);
 void txgbe_dev_setup_link_alarm_handler(void *param);
 #endif /* _TXGBE_ETHDEV_H_ */
diff --git a/drivers/net/txgbe/txgbe_ptypes.c b/drivers/net/txgbe/txgbe_ptypes.c
index e76b4001d..9b841bff8 100644
--- a/drivers/net/txgbe/txgbe_ptypes.c
+++ b/drivers/net/txgbe/txgbe_ptypes.c
@@ -189,8 +189,6 @@ u32 *txgbe_get_supported_ptypes(void)
 	static u32 ptypes[] = {
 		/* For non-vec functions,
 		 * refers to txgbe_rxd_pkt_info_to_pkt_type();
-		 * for vec functions,
-		 * refers to _recv_raw_pkts_vec().
 		 */
 		RTE_PTYPE_L2_ETHER,
 		RTE_PTYPE_L3_IPV4,
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index 39055b4d1..0c35d3c9e 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -13,12 +13,35 @@
 #include <unistd.h>
 #include <inttypes.h>
 
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_cycles.h>
+#include <rte_log.h>
+#include <rte_debug.h>
 #include <rte_ethdev.h>
 #include <rte_ethdev_driver.h>
+#include <rte_interrupts.h>
+#include <rte_pci.h>
+#include <rte_memory.h>
 #include <rte_memzone.h>
+#include <rte_launch.h>
+#include <rte_eal.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
 #include <rte_mempool.h>
 #include <rte_malloc.h>
 #include <rte_mbuf.h>
+#include <rte_ether.h>
+#include <rte_prefetch.h>
+#include <rte_udp.h>
+#include <rte_tcp.h>
+#include <rte_sctp.h>
+#include <rte_string_fns.h>
+#include <rte_errno.h>
+#include <rte_ip.h>
+#include <rte_net.h>
 
 #include "txgbe_logs.h"
 #include "base/txgbe.h"
@@ -38,6 +61,19 @@ static const u64 TXGBE_TX_OFFLOAD_MASK = (
 		PKT_TX_TUNNEL_MASK |
 		PKT_TX_OUTER_IP_CKSUM);
 
+#if 1
+#define RTE_PMD_USE_PREFETCH
+#endif
+
+#ifdef RTE_PMD_USE_PREFETCH
+/*
+ * Prefetch a cache line into all cache levels.
+ */
+#define rte_txgbe_prefetch(p)   rte_prefetch0(p)
+#else
+#define rte_txgbe_prefetch(p)   do {} while (0)
+#endif
+
 /*********************************************************************
  *
  *  TX functions
@@ -933,39 +969,849 @@ txgbe_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 	return 0;
 }
 
+/*********************************************************************
+ *
+ *  RX functions
+ *
+ **********************************************************************/
+/* @note: update txgbe_dev_supported_ptypes_get() if anything changes here. */
+static inline uint32_t
+txgbe_rxd_pkt_info_to_pkt_type(uint32_t pkt_info, uint16_t ptid_mask)
+{
+	uint16_t ptid = TXGBE_RXD_PTID(pkt_info);
+
+	ptid &= ptid_mask;
+
+	return txgbe_decode_ptype(ptid);
+}
+
+static inline uint64_t
+txgbe_rxd_pkt_info_to_pkt_flags(uint32_t pkt_info)
+{
+	static uint64_t ip_rss_types_map[16] __rte_cache_aligned = {
+		0, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
+		0, PKT_RX_RSS_HASH, 0, PKT_RX_RSS_HASH,
+		PKT_RX_RSS_HASH, 0, 0, 0,
+		0, 0, 0,  PKT_RX_FDIR,
+	};
+
+	return ip_rss_types_map[TXGBE_RXD_RSSTYPE(pkt_info)];
+}
+
+static inline uint64_t
+rx_desc_status_to_pkt_flags(uint32_t rx_status, uint64_t vlan_flags)
+{
+	uint64_t pkt_flags;
+
+	/*
+	 * Check if VLAN present only.
+	 * Do not check whether the L3/L4 Rx checksum was done by the NIC;
+	 * that can be found from the rte_eth_rxmode.offloads flag.
+	 */
+	pkt_flags = (rx_status & TXGBE_RXD_STAT_VLAN &&
+		     vlan_flags & PKT_RX_VLAN_STRIPPED)
+		    ? vlan_flags : 0;
+
+	return pkt_flags;
+}
+
+static inline uint64_t
+rx_desc_error_to_pkt_flags(uint32_t rx_status)
+{
+	uint64_t pkt_flags = 0;
+
+	/* checksum offload can't be disabled */
+	if (rx_status & TXGBE_RXD_STAT_IPCS) {
+		pkt_flags |= (rx_status & TXGBE_RXD_ERR_IPCS
+				? PKT_RX_IP_CKSUM_BAD : PKT_RX_IP_CKSUM_GOOD);
+	}
+
+	if (rx_status & TXGBE_RXD_STAT_L4CS) {
+		pkt_flags |= (rx_status & TXGBE_RXD_ERR_L4CS
+				? PKT_RX_L4_CKSUM_BAD : PKT_RX_L4_CKSUM_GOOD);
+	}
+
+	if (rx_status & TXGBE_RXD_STAT_EIPCS &&
+	    rx_status & TXGBE_RXD_ERR_EIPCS) {
+		pkt_flags |= PKT_RX_EIP_CKSUM_BAD;
+	}
+
+	return pkt_flags;
+}
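The error-to-flags helpers above follow one fixed decode pattern: a checksum verdict is reported only when the STAT bit says the NIC actually ran the check, and the ERR bit then selects good vs bad. A minimal standalone sketch of that pattern, using made-up `DEMO_*` bit values (the real `TXGBE_RXD_*` and `PKT_RX_*` definitions live in txgbe_rxtx.h and rte_mbuf.h):

```c
#include <stdint.h>

/* Hypothetical bit values for illustration only; they do not match the
 * real TXGBE_RXD_* / PKT_RX_* constants. */
#define DEMO_STAT_IPCS      (1u << 8)   /* NIC performed the IP checksum */
#define DEMO_ERR_IPCS       (1u << 31)  /* ... and found it wrong */
#define DEMO_IP_CKSUM_GOOD  0x1ull
#define DEMO_IP_CKSUM_BAD   0x2ull

/* Mirror of rx_desc_error_to_pkt_flags(): report a verdict only when the
 * STAT bit is set, then let the ERR bit pick GOOD vs BAD. */
static uint64_t demo_error_to_flags(uint32_t rx_status)
{
	uint64_t flags = 0;

	if (rx_status & DEMO_STAT_IPCS)
		flags |= (rx_status & DEMO_ERR_IPCS) ?
				DEMO_IP_CKSUM_BAD : DEMO_IP_CKSUM_GOOD;
	return flags;
}
```

Note that an ERR bit without its matching STAT bit yields no flag at all, which is why checksum reporting stays silent when the NIC skipped the check.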
+
+/*
+ * LOOK_AHEAD defines how many desc statuses to check beyond the
+ * current descriptor.
+ * It must be a pound define for optimal performance.
+ * Do not change the value of LOOK_AHEAD, as the txgbe_rx_scan_hw_ring
+ * function only works with LOOK_AHEAD=8.
+ */
+#define LOOK_AHEAD 8
+#if (LOOK_AHEAD != 8)
+#error "PMD TXGBE: LOOK_AHEAD must be 8\n"
+#endif
+static inline int
+txgbe_rx_scan_hw_ring(struct txgbe_rx_queue *rxq)
+{
+	volatile struct txgbe_rx_desc *rxdp;
+	struct txgbe_rx_entry *rxep;
+	struct rte_mbuf *mb;
+	uint16_t pkt_len;
+	uint64_t pkt_flags;
+	int nb_dd;
+	uint32_t s[LOOK_AHEAD];
+	uint32_t pkt_info[LOOK_AHEAD];
+	int i, j, nb_rx = 0;
+	uint32_t status;
+
+	/* get references to current descriptor and S/W ring entry */
+	rxdp = &rxq->rx_ring[rxq->rx_tail];
+	rxep = &rxq->sw_ring[rxq->rx_tail];
+
+	status = rxdp->qw1.lo.status;
+	/* check to make sure there is at least 1 packet to receive */
+	if (!(status & rte_cpu_to_le_32(TXGBE_RXD_STAT_DD)))
+		return 0;
+
+	/*
+	 * Scan LOOK_AHEAD descriptors at a time to determine which descriptors
+	 * reference packets that are ready to be received.
+	 */
+	for (i = 0; i < RTE_PMD_TXGBE_RX_MAX_BURST;
+	     i += LOOK_AHEAD, rxdp += LOOK_AHEAD, rxep += LOOK_AHEAD) {
+		/* Read desc statuses backwards to avoid race condition */
+		for (j = 0; j < LOOK_AHEAD; j++)
+			s[j] = rte_le_to_cpu_32(rxdp[j].qw1.lo.status);
+
+		rte_smp_rmb();
+
+		/* Compute how many status bits were set */
+		for (nb_dd = 0; nb_dd < LOOK_AHEAD &&
+				(s[nb_dd] & TXGBE_RXD_STAT_DD); nb_dd++)
+			;
+
+		for (j = 0; j < nb_dd; j++)
+			pkt_info[j] = rte_le_to_cpu_32(rxdp[j].qw0.dw0);
+
+		nb_rx += nb_dd;
+
+		/* Translate descriptor info to mbuf format */
+		for (j = 0; j < nb_dd; ++j) {
+			mb = rxep[j].mbuf;
+			pkt_len = rte_le_to_cpu_16(rxdp[j].qw1.hi.len) -
+				  rxq->crc_len;
+			mb->data_len = pkt_len;
+			mb->pkt_len = pkt_len;
+			mb->vlan_tci = rte_le_to_cpu_16(rxdp[j].qw1.hi.tag);
+
+			/* convert descriptor fields to rte mbuf flags */
+			pkt_flags = rx_desc_status_to_pkt_flags(s[j],
+					rxq->vlan_flags);
+			pkt_flags |= rx_desc_error_to_pkt_flags(s[j]);
+			pkt_flags |= txgbe_rxd_pkt_info_to_pkt_flags(
+					pkt_info[j]);
+			mb->ol_flags = pkt_flags;
+			mb->packet_type = txgbe_rxd_pkt_info_to_pkt_type(
+					pkt_info[j], rxq->pkt_type_mask);
+
+			if (likely(pkt_flags & PKT_RX_RSS_HASH))
+				mb->hash.rss = rte_le_to_cpu_32(
+				    rxdp[j].qw0.dw1);
+			else if (pkt_flags & PKT_RX_FDIR) {
+				mb->hash.fdir.hash = rte_le_to_cpu_16(
+				    rxdp[j].qw0.hi.csum) &
+				    TXGBE_ATR_HASH_MASK;
+				mb->hash.fdir.id = rte_le_to_cpu_16(
+				    rxdp[j].qw0.hi.ipid);
+			}
+		}
+
+		/* Move mbuf pointers from the S/W ring to the stage */
+		for (j = 0; j < LOOK_AHEAD; ++j)
+			rxq->rx_stage[i + j] = rxep[j].mbuf;
+
+		/* stop if all requested packets could not be received */
+		if (nb_dd != LOOK_AHEAD)
+			break;
+	}
+
+	/* clear software ring entries so we can cleanup correctly */
+	for (i = 0; i < nb_rx; ++i)
+		rxq->sw_ring[rxq->rx_tail + i].mbuf = NULL;
+
+	return nb_rx;
+}
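The inner loop of the scan stops at the first descriptor whose DD (descriptor done) bit is clear, so only a consecutive prefix of each LOOK_AHEAD group is harvested. A standalone sketch of that counting step, with an assumed `DEMO_STAT_DD` value standing in for `TXGBE_RXD_STAT_DD`:

```c
#include <stdint.h>

#define DEMO_LOOK_AHEAD 8
#define DEMO_STAT_DD    0x1u  /* assumed stand-in for TXGBE_RXD_STAT_DD */

/* Count consecutive "descriptor done" statuses, stopping at the first
 * descriptor the NIC has not written back yet - the same shape as the
 * nb_dd loop in txgbe_rx_scan_hw_ring(). */
static int demo_count_done(const uint32_t *s)
{
	int nb_dd;

	for (nb_dd = 0; nb_dd < DEMO_LOOK_AHEAD &&
			(s[nb_dd] & DEMO_STAT_DD); nb_dd++)
		;
	return nb_dd;
}
```

A done descriptor after a gap is deliberately ignored; it will be picked up on a later call once the earlier slots complete.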
+
+static inline int
+txgbe_rx_alloc_bufs(struct txgbe_rx_queue *rxq, bool reset_mbuf)
+{
+	volatile struct txgbe_rx_desc *rxdp;
+	struct txgbe_rx_entry *rxep;
+	struct rte_mbuf *mb;
+	uint16_t alloc_idx;
+	__le64 dma_addr;
+	int diag, i;
+
+	/* allocate buffers in bulk directly into the S/W ring */
+	alloc_idx = rxq->rx_free_trigger - (rxq->rx_free_thresh - 1);
+	rxep = &rxq->sw_ring[alloc_idx];
+	diag = rte_mempool_get_bulk(rxq->mb_pool, (void *)rxep,
+				    rxq->rx_free_thresh);
+	if (unlikely(diag != 0))
+		return -ENOMEM;
+
+	rxdp = &rxq->rx_ring[alloc_idx];
+	for (i = 0; i < rxq->rx_free_thresh; ++i) {
+		/* populate the static rte mbuf fields */
+		mb = rxep[i].mbuf;
+		if (reset_mbuf)
+			mb->port = rxq->port_id;
+
+		rte_mbuf_refcnt_set(mb, 1);
+		mb->data_off = RTE_PKTMBUF_HEADROOM;
+
+		/* populate the descriptors */
+		dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mb));
+		TXGBE_RXD_HDRADDR(&rxdp[i], 0);
+		TXGBE_RXD_PKTADDR(&rxdp[i], dma_addr);
+	}
+
+	/* update state of internal queue structure */
+	rxq->rx_free_trigger = rxq->rx_free_trigger + rxq->rx_free_thresh;
+	if (rxq->rx_free_trigger >= rxq->nb_rx_desc)
+		rxq->rx_free_trigger = rxq->rx_free_thresh - 1;
+
+	/* no errors */
+	return 0;
+}
+
+static inline uint16_t
+txgbe_rx_fill_from_stage(struct txgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
+			 uint16_t nb_pkts)
+{
+	struct rte_mbuf **stage = &rxq->rx_stage[rxq->rx_next_avail];
+	int i;
+
+	/* how many packets are ready to return? */
+	nb_pkts = (uint16_t)RTE_MIN(nb_pkts, rxq->rx_nb_avail);
+
+	/* copy mbuf pointers to the application's packet list */
+	for (i = 0; i < nb_pkts; ++i)
+		rx_pkts[i] = stage[i];
+
+	/* update internal queue state */
+	rxq->rx_nb_avail = (uint16_t)(rxq->rx_nb_avail - nb_pkts);
+	rxq->rx_next_avail = (uint16_t)(rxq->rx_next_avail + nb_pkts);
+
+	return nb_pkts;
+}
+
+static inline uint16_t
+txgbe_rx_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+	     uint16_t nb_pkts)
+{
+	struct txgbe_rx_queue *rxq = (struct txgbe_rx_queue *)rx_queue;
+	uint16_t nb_rx = 0;
+
+	/* Any previously recv'd pkts will be returned from the Rx stage */
+	if (rxq->rx_nb_avail)
+		return txgbe_rx_fill_from_stage(rxq, rx_pkts, nb_pkts);
+
+	/* Scan the H/W ring for packets to receive */
+	nb_rx = (uint16_t)txgbe_rx_scan_hw_ring(rxq);
+
+	/* update internal queue state */
+	rxq->rx_next_avail = 0;
+	rxq->rx_nb_avail = nb_rx;
+	rxq->rx_tail = (uint16_t)(rxq->rx_tail + nb_rx);
+
+	/* if required, allocate new buffers to replenish descriptors */
+	if (rxq->rx_tail > rxq->rx_free_trigger) {
+		uint16_t cur_free_trigger = rxq->rx_free_trigger;
+
+		if (txgbe_rx_alloc_bufs(rxq, true) != 0) {
+			int i, j;
+
+			PMD_RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u "
+				   "queue_id=%u", (unsigned) rxq->port_id,
+				   (unsigned) rxq->queue_id);
+
+			rte_eth_devices[rxq->port_id].data->rx_mbuf_alloc_failed +=
+				rxq->rx_free_thresh;
+
+			/*
+			 * Need to rewind any previous receives if we cannot
+			 * allocate new buffers to replenish the old ones.
+			 */
+			rxq->rx_nb_avail = 0;
+			rxq->rx_tail = (uint16_t)(rxq->rx_tail - nb_rx);
+			for (i = 0, j = rxq->rx_tail; i < nb_rx; ++i, ++j)
+				rxq->sw_ring[j].mbuf = rxq->rx_stage[i];
+
+			return 0;
+		}
+
+		/* update tail pointer */
+		rte_wmb();
+		txgbe_set32_relaxed(rxq->rdt_reg_addr, cur_free_trigger);
+	}
+
+	if (rxq->rx_tail >= rxq->nb_rx_desc)
+		rxq->rx_tail = 0;
+
+	/* received any packets this loop? */
+	if (rxq->rx_nb_avail)
+		return txgbe_rx_fill_from_stage(rxq, rx_pkts, nb_pkts);
+
+	return 0;
+}
+
 /* split requests into chunks of size RTE_PMD_TXGBE_RX_MAX_BURST */
 uint16_t
 txgbe_recv_pkts_bulk_alloc(void *rx_queue, struct rte_mbuf **rx_pkts,
 			   uint16_t nb_pkts)
 {
-	RTE_SET_USED(rx_queue);
-	RTE_SET_USED(rx_pkts);
-	RTE_SET_USED(nb_pkts);
+	uint16_t nb_rx;
 
-	return 0;
+	if (unlikely(nb_pkts == 0))
+		return 0;
+
+	if (likely(nb_pkts <= RTE_PMD_TXGBE_RX_MAX_BURST))
+		return txgbe_rx_recv_pkts(rx_queue, rx_pkts, nb_pkts);
+
+	/* request is relatively large, chunk it up */
+	nb_rx = 0;
+	while (nb_pkts) {
+		uint16_t ret, n;
+
+		n = (uint16_t)RTE_MIN(nb_pkts, RTE_PMD_TXGBE_RX_MAX_BURST);
+		ret = txgbe_rx_recv_pkts(rx_queue, &rx_pkts[nb_rx], n);
+		nb_rx = (uint16_t)(nb_rx + ret);
+		nb_pkts = (uint16_t)(nb_pkts - ret);
+		if (ret < n)
+			break;
+	}
+
+	return nb_rx;
 }
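The chunking loop above splits an oversized request into RTE_PMD_TXGBE_RX_MAX_BURST pieces and stops as soon as a burst comes back short, since a short burst means the ring ran dry. A self-contained sketch of the same control flow, with an invented `demo_recv_burst()` standing in for the per-burst receive call:

```c
#include <stdint.h>

#define DEMO_MAX_BURST 32  /* assumed stand-in for RTE_PMD_TXGBE_RX_MAX_BURST */

/* Stand-in for the per-burst receive: returns at most 'n' packets,
 * bounded by how many are available, mimicking a ring that runs dry. */
static uint16_t demo_recv_burst(uint16_t *avail, uint16_t n)
{
	uint16_t got = (n < *avail) ? n : *avail;

	*avail -= got;
	return got;
}

/* Same chunking shape as txgbe_recv_pkts_bulk_alloc(): split the request
 * into DEMO_MAX_BURST pieces, stop early when a burst returns short. */
static uint16_t demo_recv_chunked(uint16_t *avail, uint16_t nb_pkts)
{
	uint16_t nb_rx = 0;

	while (nb_pkts) {
		uint16_t n = (nb_pkts < DEMO_MAX_BURST) ?
				nb_pkts : DEMO_MAX_BURST;
		uint16_t ret = demo_recv_burst(avail, n);

		nb_rx = (uint16_t)(nb_rx + ret);
		nb_pkts = (uint16_t)(nb_pkts - ret);
		if (ret < n)
			break;
	}
	return nb_rx;
}
```

With 100 packets available, a request for 70 is served in bursts of 32, 32, and 6; with only 20 available, the first short burst ends the loop.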
 
 uint16_t
 txgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		uint16_t nb_pkts)
 {
-	RTE_SET_USED(rx_queue);
-	RTE_SET_USED(rx_pkts);
-	RTE_SET_USED(nb_pkts);
+	struct txgbe_rx_queue *rxq;
+	volatile struct txgbe_rx_desc *rx_ring;
+	volatile struct txgbe_rx_desc *rxdp;
+	struct txgbe_rx_entry *sw_ring;
+	struct txgbe_rx_entry *rxe;
+	struct rte_mbuf *rxm;
+	struct rte_mbuf *nmb;
+	struct txgbe_rx_desc rxd;
+	uint64_t dma_addr;
+	uint32_t staterr;
+	uint32_t pkt_info;
+	uint16_t pkt_len;
+	uint16_t rx_id;
+	uint16_t nb_rx;
+	uint16_t nb_hold;
+	uint64_t pkt_flags;
+
+	nb_rx = 0;
+	nb_hold = 0;
+	rxq = rx_queue;
+	rx_id = rxq->rx_tail;
+	rx_ring = rxq->rx_ring;
+	sw_ring = rxq->sw_ring;
+	while (nb_rx < nb_pkts) {
+		/*
+		 * The order of operations here is important as the DD status
+		 * bit must not be read after any other descriptor fields.
+		 * rx_ring and rxdp are pointing to volatile data so the order
+		 * of accesses cannot be reordered by the compiler. If they were
+		 * not volatile, they could be reordered which could lead to
+		 * using invalid descriptor fields when read from rxd.
+		 */
+		rxdp = &rx_ring[rx_id];
+		staterr = rxdp->qw1.lo.status;
+		if (!(staterr & rte_cpu_to_le_32(TXGBE_RXD_STAT_DD)))
+			break;
+		rxd = *rxdp;
 
-	return 0;
+		/*
+		 * End of packet.
+		 *
+		 * If the TXGBE_RXD_STAT_EOP flag is not set, the RX packet
+		 * is likely to be invalid and to be dropped by the various
+		 * validation checks performed by the network stack.
+		 *
+		 * Allocate a new mbuf to replenish the RX ring descriptor.
+		 * If the allocation fails:
+		 *    - arrange for that RX descriptor to be the first one
+		 *      being parsed the next time the receive function is
+		 *      invoked [on the same queue].
+		 *
+		 *    - Stop parsing the RX ring and return immediately.
+		 *
+		 * This policy does not drop the packet received in the RX
+		 * descriptor for which the allocation of a new mbuf failed.
+		 * Thus, it allows that packet to be retrieved later, once
+		 * mbufs have been freed in the meantime.
+		 * As a side effect, holding RX descriptors instead of
+		 * systematically giving them back to the NIC may lead to
+		 * RX ring exhaustion situations.
+		 * However, the NIC can gracefully prevent such situations
+		 * from happening by sending specific "back-pressure" flow
+		 * control frames to its peer(s).
+		 */
+		PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_id=%u "
+			   "ext_err_stat=0x%08x pkt_len=%u",
+			   (unsigned) rxq->port_id, (unsigned) rxq->queue_id,
+			   (unsigned) rx_id, (unsigned) staterr,
+			   (unsigned) rte_le_to_cpu_16(rxd.qw1.hi.len));
+
+		nmb = rte_mbuf_raw_alloc(rxq->mb_pool);
+		if (nmb == NULL) {
+			PMD_RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u "
+				   "queue_id=%u", (unsigned) rxq->port_id,
+				   (unsigned) rxq->queue_id);
+			rte_eth_devices[rxq->port_id].data->rx_mbuf_alloc_failed++;
+			break;
+		}
+
+		nb_hold++;
+		rxe = &sw_ring[rx_id];
+		rx_id++;
+		if (rx_id == rxq->nb_rx_desc)
+			rx_id = 0;
+
+		/* Prefetch next mbuf while processing current one. */
+		rte_txgbe_prefetch(sw_ring[rx_id].mbuf);
+
+		/*
+		 * When next RX descriptor is on a cache-line boundary,
+		 * prefetch the next 4 RX descriptors and the next 8 pointers
+		 * to mbufs.
+		 */
+		if ((rx_id & 0x3) == 0) {
+			rte_txgbe_prefetch(&rx_ring[rx_id]);
+			rte_txgbe_prefetch(&sw_ring[rx_id]);
+		}
+
+		rxm = rxe->mbuf;
+		rxe->mbuf = nmb;
+		dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
+		TXGBE_RXD_HDRADDR(rxdp, 0);
+		TXGBE_RXD_PKTADDR(rxdp, dma_addr);
+
+		/*
+		 * Initialize the returned mbuf.
+		 * 1) setup generic mbuf fields:
+		 *    - number of segments,
+		 *    - next segment,
+		 *    - packet length,
+		 *    - RX port identifier.
+		 * 2) integrate hardware offload data, if any:
+		 *    - RSS flag & hash,
+		 *    - IP checksum flag,
+		 *    - VLAN TCI, if any,
+		 *    - error flags.
+		 */
+		pkt_len = (uint16_t) (rte_le_to_cpu_16(rxd.qw1.hi.len) -
+				      rxq->crc_len);
+		rxm->data_off = RTE_PKTMBUF_HEADROOM;
+		rte_packet_prefetch((char *)rxm->buf_addr + rxm->data_off);
+		rxm->nb_segs = 1;
+		rxm->next = NULL;
+		rxm->pkt_len = pkt_len;
+		rxm->data_len = pkt_len;
+		rxm->port = rxq->port_id;
+
+		pkt_info = rte_le_to_cpu_32(rxd.qw0.dw0);
+		/* Only valid if PKT_RX_VLAN set in pkt_flags */
+		rxm->vlan_tci = rte_le_to_cpu_16(rxd.qw1.hi.tag);
+
+		pkt_flags = rx_desc_status_to_pkt_flags(staterr,
+					rxq->vlan_flags);
+		pkt_flags |= rx_desc_error_to_pkt_flags(staterr);
+		pkt_flags |= txgbe_rxd_pkt_info_to_pkt_flags(pkt_info);
+		rxm->ol_flags = pkt_flags;
+		rxm->packet_type = txgbe_rxd_pkt_info_to_pkt_type(pkt_info,
+						       rxq->pkt_type_mask);
+
+		if (likely(pkt_flags & PKT_RX_RSS_HASH))
+			rxm->hash.rss = rte_le_to_cpu_32(rxd.qw0.dw1);
+		else if (pkt_flags & PKT_RX_FDIR) {
+			rxm->hash.fdir.hash = rte_le_to_cpu_16(
+					rxd.qw0.hi.csum) &
+					TXGBE_ATR_HASH_MASK;
+			rxm->hash.fdir.id = rte_le_to_cpu_16(
+					rxd.qw0.hi.ipid);
+		}
+		/*
+		 * Store the mbuf address into the next entry of the array
+		 * of returned packets.
+		 */
+		rx_pkts[nb_rx++] = rxm;
+	}
+	rxq->rx_tail = rx_id;
+
+	/*
+	 * If the number of free RX descriptors is greater than the RX free
+	 * threshold of the queue, advance the Receive Descriptor Tail (RDT)
+	 * register.
+	 * Update the RDT with the value of the last processed RX descriptor
+	 * minus 1, to guarantee that the RDT register is never equal to the
+	 * RDH register, which creates a "full" ring situation from the
+	 * hardware point of view...
+	 */
+	nb_hold = (uint16_t) (nb_hold + rxq->nb_rx_hold);
+	if (nb_hold > rxq->rx_free_thresh) {
+		PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_tail=%u "
+			   "nb_hold=%u nb_rx=%u",
+			   (unsigned) rxq->port_id, (unsigned) rxq->queue_id,
+			   (unsigned) rx_id, (unsigned) nb_hold,
+			   (unsigned) nb_rx);
+		rx_id = (uint16_t) ((rx_id == 0) ?
+				     (rxq->nb_rx_desc - 1) : (rx_id - 1));
+		txgbe_set32(rxq->rdt_reg_addr, rx_id);
+		nb_hold = 0;
+	}
+	rxq->nb_rx_hold = nb_hold;
+	return nb_rx;
 }
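The tail write-back above keeps RDT one descriptor behind the next probe position, wrapping from slot 0 back to the end of the ring, so RDT can never equal RDH. A minimal sketch of just that computation, assuming a hypothetical ring size:

```c
#include <stdint.h>

/* RDT is programmed to the last processed descriptor minus one so it
 * never catches up to RDH; slot 0 wraps back to the ring end. Mirrors
 * the rx_id adjustment at the bottom of txgbe_recv_pkts(). */
static uint16_t demo_rdt_writeback(uint16_t rx_id, uint16_t nb_rx_desc)
{
	return (uint16_t)((rx_id == 0) ? (nb_rx_desc - 1) : (rx_id - 1));
}
```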
 
+/**
+ * txgbe_fill_cluster_head_buf - fill the first mbuf of the returned packet
+ *
+ * Fill the following info in the HEAD buffer of the Rx cluster:
+ *    - RX port identifier
+ *    - hardware offload data, if any:
+ *      - RSS flag & hash
+ *      - IP checksum flag
+ *      - VLAN TCI, if any
+ *      - error flags
+ * @head HEAD of the packet cluster
+ * @desc HW descriptor to get data from
+ * @rxq Pointer to the Rx queue
+ * @staterr Status/error word read from the descriptor
+ */
+static inline void
+txgbe_fill_cluster_head_buf(
+	struct rte_mbuf *head,
+	struct txgbe_rx_desc *desc,
+	struct txgbe_rx_queue *rxq,
+	uint32_t staterr)
+{
+	uint32_t pkt_info;
+	uint64_t pkt_flags;
+
+	head->port = rxq->port_id;
+
+	/* The vlan_tci field is only valid when PKT_RX_VLAN is
+	 * set in the pkt_flags field.
+	 */
+	head->vlan_tci = rte_le_to_cpu_16(desc->qw1.hi.tag);
+	pkt_info = rte_le_to_cpu_32(desc->qw0.dw0);
+	pkt_flags = rx_desc_status_to_pkt_flags(staterr, rxq->vlan_flags);
+	pkt_flags |= rx_desc_error_to_pkt_flags(staterr);
+	pkt_flags |= txgbe_rxd_pkt_info_to_pkt_flags(pkt_info);
+	head->ol_flags = pkt_flags;
+	head->packet_type = txgbe_rxd_pkt_info_to_pkt_type(pkt_info,
+						rxq->pkt_type_mask);
+
+	if (likely(pkt_flags & PKT_RX_RSS_HASH))
+		head->hash.rss = rte_le_to_cpu_32(desc->qw0.dw1);
+	else if (pkt_flags & PKT_RX_FDIR) {
+		head->hash.fdir.hash = rte_le_to_cpu_16(desc->qw0.hi.csum)
+				& TXGBE_ATR_HASH_MASK;
+		head->hash.fdir.id = rte_le_to_cpu_16(desc->qw0.hi.ipid);
+	}
+}
+
+/**
+ * txgbe_recv_pkts_lro - receive handler for the LRO case.
+ *
+ * @rx_queue Rx queue handle
+ * @rx_pkts table of received packets
+ * @nb_pkts size of rx_pkts table
+ * @bulk_alloc if TRUE bulk allocation is used for a HW ring refilling
+ *
+ * Handles the Rx HW ring completions when the RSC feature is configured.
+ * Uses an additional ring of txgbe_rsc_entry's that will hold the relevant
+ * RSC info.
+ *
+ * We use the same logic as in Linux and in FreeBSD txgbe drivers:
+ * 1) When non-EOP RSC completion arrives:
+ *    a) Update the HEAD of the current RSC aggregation cluster with the new
+ *       segment's data length.
+ *    b) Set the "next" pointer of the current segment to point to the segment
+ *       at the NEXTP index.
+ *    c) Pass the HEAD of RSC aggregation cluster on to the next NEXTP entry
+ *       in the sw_rsc_ring.
+ * 2) When EOP arrives we just update the cluster's total length and offload
+ *    flags and deliver the cluster up to the upper layers. In our case - put it
+ *    in the rx_pkts table.
+ *
+ * Returns the number of received packets/clusters (according to the "bulk
+ * receive" interface).
+ */
 static inline uint16_t
 txgbe_recv_pkts_lro(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts,
 		    bool bulk_alloc)
 {
-	RTE_SET_USED(rx_queue);
-	RTE_SET_USED(rx_pkts);
-	RTE_SET_USED(nb_pkts);
-	RTE_SET_USED(bulk_alloc);
+	struct txgbe_rx_queue *rxq = rx_queue;
+	volatile struct txgbe_rx_desc *rx_ring = rxq->rx_ring;
+	struct txgbe_rx_entry *sw_ring = rxq->sw_ring;
+	struct txgbe_scattered_rx_entry *sw_sc_ring = rxq->sw_sc_ring;
+	uint16_t rx_id = rxq->rx_tail;
+	uint16_t nb_rx = 0;
+	uint16_t nb_hold = rxq->nb_rx_hold;
+	uint16_t prev_id = rxq->rx_tail;
+
+	while (nb_rx < nb_pkts) {
+		bool eop;
+		struct txgbe_rx_entry *rxe;
+		struct txgbe_scattered_rx_entry *sc_entry;
+		struct txgbe_scattered_rx_entry *next_sc_entry = NULL;
+		struct txgbe_rx_entry *next_rxe = NULL;
+		struct rte_mbuf *first_seg;
+		struct rte_mbuf *rxm;
+		struct rte_mbuf *nmb = NULL;
+		struct txgbe_rx_desc rxd;
+		uint16_t data_len;
+		uint16_t next_id;
+		volatile struct txgbe_rx_desc *rxdp;
+		uint32_t staterr;
+
+next_desc:
+		/*
+		 * The code in this whole file uses the volatile pointer to
+		 * ensure the read ordering of the status and the rest of the
+		 * descriptor fields (on the compiler level only!!!). This is so
+		 * UGLY - why not just use the compiler barrier instead? DPDK
+		 * even has the rte_compiler_barrier() for that.
+		 *
+		 * But most importantly this is just wrong because this doesn't
+		 * ensure memory ordering in a general case at all. For
+		 * instance, DPDK is supposed to work on Power CPUs where
+		 * compiler barrier may just not be enough!
+		 *
+		 * I tried to write only this function properly to have a
+		 * starting point (as a part of an LRO/RSC series) but the
+		 * compiler cursed at me when I tried to cast away the
+		 * "volatile" from rx_ring (yes, it's volatile too!!!). So, I'm
+		 * keeping it the way it is for now.
+		 *
+		 * The code in this file is broken in so many other places and
+		 * will just not work on a big endian CPU anyway therefore the
+		 * lines below will have to be revisited together with the rest
+		 * of the txgbe PMD.
+		 *
+		 * TODO:
+		 *    - Get rid of "volatile" and let the compiler do its job.
+		 *    - Use the proper memory barrier (rte_rmb()) to ensure the
+		 *      memory ordering below.
+		 */
+		rxdp = &rx_ring[rx_id];
+		staterr = rte_le_to_cpu_32(rxdp->qw1.lo.status);
 
-	return 0;
+		if (!(staterr & TXGBE_RXD_STAT_DD))
+			break;
+
+		rxd = *rxdp;
+
+		PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_id=%u "
+				  "staterr=0x%x data_len=%u",
+			   rxq->port_id, rxq->queue_id, rx_id, staterr,
+			   rte_le_to_cpu_16(rxd.qw1.hi.len));
+
+		if (!bulk_alloc) {
+			nmb = rte_mbuf_raw_alloc(rxq->mb_pool);
+			if (nmb == NULL) {
+				PMD_RX_LOG(DEBUG, "RX mbuf alloc failed "
+						  "port_id=%u queue_id=%u",
+					   rxq->port_id, rxq->queue_id);
+
+				rte_eth_devices[rxq->port_id].data->
+							rx_mbuf_alloc_failed++;
+				break;
+			}
+		} else if (nb_hold > rxq->rx_free_thresh) {
+			uint16_t next_rdt = rxq->rx_free_trigger;
+
+			if (!txgbe_rx_alloc_bufs(rxq, false)) {
+				rte_wmb();
+				txgbe_set32_relaxed(rxq->rdt_reg_addr,
+							    next_rdt);
+				nb_hold -= rxq->rx_free_thresh;
+			} else {
+				PMD_RX_LOG(DEBUG, "RX bulk alloc failed "
+						  "port_id=%u queue_id=%u",
+					   rxq->port_id, rxq->queue_id);
+
+				rte_eth_devices[rxq->port_id].data->
+							rx_mbuf_alloc_failed++;
+				break;
+			}
+		}
+
+		nb_hold++;
+		rxe = &sw_ring[rx_id];
+		eop = staterr & TXGBE_RXD_STAT_EOP;
+
+		next_id = rx_id + 1;
+		if (next_id == rxq->nb_rx_desc)
+			next_id = 0;
+
+		/* Prefetch next mbuf while processing current one. */
+		rte_txgbe_prefetch(sw_ring[next_id].mbuf);
+
+		/*
+		 * When next RX descriptor is on a cache-line boundary,
+		 * prefetch the next 4 RX descriptors and the next 4 pointers
+		 * to mbufs.
+		 */
+		if ((next_id & 0x3) == 0) {
+			rte_txgbe_prefetch(&rx_ring[next_id]);
+			rte_txgbe_prefetch(&sw_ring[next_id]);
+		}
+
+		rxm = rxe->mbuf;
+
+		if (!bulk_alloc) {
+			__le64 dma =
+			  rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
+			/*
+			 * Update RX descriptor with the physical address of the
+			 * new data buffer of the new allocated mbuf.
+			 */
+			rxe->mbuf = nmb;
+
+			rxm->data_off = RTE_PKTMBUF_HEADROOM;
+			TXGBE_RXD_HDRADDR(rxdp, 0);
+			TXGBE_RXD_PKTADDR(rxdp, dma);
+		} else {
+			rxe->mbuf = NULL;
+		}
+
+		/*
+		 * Set data length & data buffer address of mbuf.
+		 */
+		data_len = rte_le_to_cpu_16(rxd.qw1.hi.len);
+		rxm->data_len = data_len;
+
+		if (!eop) {
+			uint16_t nextp_id;
+			/*
+			 * Get next descriptor index:
+			 *  - For RSC it's in the NEXTP field.
+			 *  - For a scattered packet - it's just a following
+			 *    descriptor.
+			 */
+			if (TXGBE_RXD_RSCCNT(rxd.qw0.dw0))
+				nextp_id = TXGBE_RXD_NEXTP(staterr);
+			else
+				nextp_id = next_id;
+
+			next_sc_entry = &sw_sc_ring[nextp_id];
+			next_rxe = &sw_ring[nextp_id];
+			rte_txgbe_prefetch(next_rxe);
+		}
+
+		sc_entry = &sw_sc_ring[rx_id];
+		first_seg = sc_entry->fbuf;
+		sc_entry->fbuf = NULL;
+
+		/*
+		 * If this is the first buffer of the received packet,
+		 * set the pointer to the first mbuf of the packet and
+		 * initialize its context.
+		 * Otherwise, update the total length and the number of segments
+		 * of the current scattered packet, and update the pointer to
+		 * the last mbuf of the current packet.
+		 */
+		if (first_seg == NULL) {
+			first_seg = rxm;
+			first_seg->pkt_len = data_len;
+			first_seg->nb_segs = 1;
+		} else {
+			first_seg->pkt_len += data_len;
+			first_seg->nb_segs++;
+		}
+
+		prev_id = rx_id;
+		rx_id = next_id;
+
+		/*
+		 * If this is not the last buffer of the received packet, update
+		 * the pointer to the first mbuf at the NEXTP entry in the
+		 * sw_sc_ring and continue to parse the RX ring.
+		 */
+		if (!eop && next_rxe) {
+			rxm->next = next_rxe->mbuf;
+			next_sc_entry->fbuf = first_seg;
+			goto next_desc;
+		}
+
+		/* Initialize the first mbuf of the returned packet */
+		txgbe_fill_cluster_head_buf(first_seg, &rxd, rxq, staterr);
+
+		/*
+		 * Deal with the case when HW CRC strip is disabled.
+		 * That can't happen when LRO is enabled, but still could
+		 * happen for scattered RX mode.
+		 */
+		first_seg->pkt_len -= rxq->crc_len;
+		if (unlikely(rxm->data_len <= rxq->crc_len)) {
+			struct rte_mbuf *lp;
+
+			for (lp = first_seg; lp->next != rxm; lp = lp->next)
+				;
+
+			first_seg->nb_segs--;
+			lp->data_len -= rxq->crc_len - rxm->data_len;
+			lp->next = NULL;
+			rte_pktmbuf_free_seg(rxm);
+		} else {
+			rxm->data_len -= rxq->crc_len;
+		}
+
+		/* Prefetch data of first segment, if configured to do so. */
+		rte_packet_prefetch((char *)first_seg->buf_addr +
+			first_seg->data_off);
+
+		/*
+		 * Store the mbuf address into the next entry of the array
+		 * of returned packets.
+		 */
+		rx_pkts[nb_rx++] = first_seg;
+	}
+
+	/*
+	 * Record index of the next RX descriptor to probe.
+	 */
+	rxq->rx_tail = rx_id;
+
+	/*
+	 * If the number of free RX descriptors is greater than the RX free
+	 * threshold of the queue, advance the Receive Descriptor Tail (RDT)
+	 * register.
+	 * Update the RDT with the value of the last processed RX descriptor
+	 * minus 1, to guarantee that the RDT register is never equal to the
+	 * RDH register, which creates a "full" ring situation from the
+	 * hardware point of view...
+	 */
+	if (!bulk_alloc && nb_hold > rxq->rx_free_thresh) {
+		PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_tail=%u "
+			   "nb_hold=%u nb_rx=%u",
+			   rxq->port_id, rxq->queue_id, rx_id, nb_hold, nb_rx);
+
+		rte_wmb();
+		txgbe_set32_relaxed(rxq->rdt_reg_addr, prev_id);
+		nb_hold = 0;
+	}
+
+	rxq->nb_rx_hold = nb_hold;
+	return nb_rx;
 }
 
 uint16_t
diff --git a/drivers/net/txgbe/txgbe_rxtx.h b/drivers/net/txgbe/txgbe_rxtx.h
index 5f01068de..f9abb5ab8 100644
--- a/drivers/net/txgbe/txgbe_rxtx.h
+++ b/drivers/net/txgbe/txgbe_rxtx.h
@@ -50,6 +50,100 @@ struct txgbe_rx_desc {
 #define TXGBE_RXD_HDRADDR(rxd, v)  \
 	(((volatile __le64 *)(rxd))[1] = cpu_to_le64(v))
 
+/* @txgbe_rx_desc.dw0 */
+#define TXGBE_RXD_RSSTYPE(dw)      RS(dw, 0, 0xF)
+#define   TXGBE_RSSTYPE_NONE       0
+#define   TXGBE_RSSTYPE_IPV4TCP    1
+#define   TXGBE_RSSTYPE_IPV4       2
+#define   TXGBE_RSSTYPE_IPV6TCP    3
+#define   TXGBE_RSSTYPE_IPV4SCTP   4
+#define   TXGBE_RSSTYPE_IPV6       5
+#define   TXGBE_RSSTYPE_IPV6SCTP   6
+#define   TXGBE_RSSTYPE_IPV4UDP    7
+#define   TXGBE_RSSTYPE_IPV6UDP    8
+#define   TXGBE_RSSTYPE_FDIR       15
+#define TXGBE_RXD_SECTYPE(dw)      RS(dw, 4, 0x3)
+#define TXGBE_RXD_SECTYPE_NONE     LS(0, 4, 0x3)
+#define TXGBE_RXD_SECTYPE_LINKSEC  LS(1, 4, 0x3)
+#define TXGBE_RXD_SECTYPE_IPSECESP LS(2, 4, 0x3)
+#define TXGBE_RXD_SECTYPE_IPSECAH  LS(3, 4, 0x3)
+#define TXGBE_RXD_TPIDSEL(dw)      RS(dw, 6, 0x7)
+#define TXGBE_RXD_PTID(dw)         RS(dw, 9, 0xFF)
+#define TXGBE_RXD_RSCCNT(dw)       RS(dw, 17, 0xF)
+#define TXGBE_RXD_HDRLEN(dw)       RS(dw, 21, 0x3FF)
+#define TXGBE_RXD_SPH              MS(31, 0x1)
+
+/* @txgbe_rx_desc.dw1 */
+/** bit 0-31, as rss hash when  **/
+#define TXGBE_RXD_RSSHASH(rxd)     ((rxd)->qw0.dw1)
+
+/** bit 0-31, as ip csum when  **/
+#define TXGBE_RXD_IPID(rxd)        ((rxd)->qw0.hi.ipid)
+#define TXGBE_RXD_CSUM(rxd)        ((rxd)->qw0.hi.csum)
+
+/** bit 0-31, as fdir id when  **/
+#define TXGBE_RXD_FDIRID(rxd)      ((rxd)->qw0.hi.dw1)
+
+/* @txgbe_rx_desc.dw2 */
+#define TXGBE_RXD_STATUS(rxd)      ((rxd)->qw1.lo.status)
+/** bit 0-1 **/
+#define TXGBE_RXD_STAT_DD          MS(0, 0x1) /* Descriptor Done */
+#define TXGBE_RXD_STAT_EOP         MS(1, 0x1) /* End of Packet */
+/** bit 2-31, when EOP=0 **/
+#define TXGBE_RXD_NEXTP_RESV(v)    LS(v, 2, 0x3)
+#define TXGBE_RXD_NEXTP(dw)        RS(dw, 4, 0xFFFF) /* Next Descriptor */
+/** bit 2-31, when EOP=1 **/
+#define TXGBE_RXD_PKT_CLS_MASK     MS(2, 0x7) /* Packet Class */
+#define TXGBE_RXD_PKT_CLS_TC_RSS   LS(0, 2, 0x7) /* RSS Hash */
+#define TXGBE_RXD_PKT_CLS_FLM      LS(1, 2, 0x7) /* FDir Match */
+#define TXGBE_RXD_PKT_CLS_SYN      LS(2, 2, 0x7) /* TCP Sync */
+#define TXGBE_RXD_PKT_CLS_5TUPLE   LS(3, 2, 0x7) /* 5 Tuple */
+#define TXGBE_RXD_PKT_CLS_ETF      LS(4, 2, 0x7) /* Ethertype Filter */
+#define TXGBE_RXD_STAT_VLAN        MS(5, 0x1) /* IEEE VLAN Packet */
+#define TXGBE_RXD_STAT_UDPCS       MS(6, 0x1) /* UDP xsum calculated */
+#define TXGBE_RXD_STAT_L4CS        MS(7, 0x1) /* L4 xsum calculated */
+#define TXGBE_RXD_STAT_IPCS        MS(8, 0x1) /* IP xsum calculated */
+#define TXGBE_RXD_STAT_PIF         MS(9, 0x1) /* Non-unicast address */
+#define TXGBE_RXD_STAT_EIPCS       MS(10, 0x1) /* Encap IP xsum calculated */
+#define TXGBE_RXD_STAT_VEXT        MS(11, 0x1) /* Multi-VLAN */
+#define TXGBE_RXD_STAT_IPV6EX      MS(12, 0x1) /* IPv6 with option header */
+#define TXGBE_RXD_STAT_LLINT       MS(13, 0x1) /* Pkt caused LLI */
+#define TXGBE_RXD_STAT_1588        MS(14, 0x1) /* IEEE1588 Time Stamp */
+#define TXGBE_RXD_STAT_SECP        MS(15, 0x1) /* Security Processing */
+#define TXGBE_RXD_STAT_LB          MS(16, 0x1) /* Loopback Status */
+/*** bit 17-30, when PTYPE=IP ***/
+#define TXGBE_RXD_STAT_BMC         MS(17, 0x1) /* PTYPE=IP, BMC status */
+#define TXGBE_RXD_ERR_FDIR_LEN     MS(20, 0x1) /* FDIR Length error */
+#define TXGBE_RXD_ERR_FDIR_DROP    MS(21, 0x1) /* FDIR Drop error */
+#define TXGBE_RXD_ERR_FDIR_COLL    MS(22, 0x1) /* FDIR Collision error */
+#define TXGBE_RXD_ERR_HBO          MS(23, 0x1) /* Header Buffer Overflow */
+#define TXGBE_RXD_ERR_EIPCS        MS(26, 0x1) /* Encap IP header error */
+#define TXGBE_RXD_ERR_SECERR       MS(27, 0x1) /* macsec or ipsec error */
+#define TXGBE_RXD_ERR_RXE          MS(29, 0x1) /* Any MAC Error */
+#define TXGBE_RXD_ERR_L4CS         MS(30, 0x1) /* TCP/UDP xsum error */
+#define TXGBE_RXD_ERR_IPCS         MS(31, 0x1) /* IP xsum error */
+#define TXGBE_RXD_ERR_CSUM(dw)     RS(dw, 30, 0x3)
+/*** bit 17-30, when PTYPE=FCOE ***/
+#define TXGBE_RXD_STAT_FCOEFS      MS(17, 0x1) /* PTYPE=FCOE, FCoE EOF/SOF */
+#define TXGBE_RXD_FCSTAT_MASK      MS(18, 0x3) /* FCoE Pkt Stat */
+#define TXGBE_RXD_FCSTAT_NOMTCH    LS(0, 18, 0x3) /* No Ctxt Match */
+#define TXGBE_RXD_FCSTAT_NODDP     LS(1, 18, 0x3) /* Ctxt w/o DDP */
+#define TXGBE_RXD_FCSTAT_FCPRSP    LS(2, 18, 0x3) /* Recv. FCP_RSP */
+#define TXGBE_RXD_FCSTAT_DDP       LS(3, 18, 0x3) /* Ctxt w/ DDP */
+#define TXGBE_RXD_FCERR_MASK       MS(20, 0x7) /* FCERR */
+#define TXGBE_RXD_FCERR_0          LS(0, 20, 0x7)
+#define TXGBE_RXD_FCERR_1          LS(1, 20, 0x7)
+#define TXGBE_RXD_FCERR_2          LS(2, 20, 0x7)
+#define TXGBE_RXD_FCERR_3          LS(3, 20, 0x7)
+#define TXGBE_RXD_FCERR_4          LS(4, 20, 0x7)
+#define TXGBE_RXD_FCERR_5          LS(5, 20, 0x7)
+#define TXGBE_RXD_FCERR_6          LS(6, 20, 0x7)
+#define TXGBE_RXD_FCERR_7          LS(7, 20, 0x7)
+
+/* @txgbe_rx_desc.dw3 */
+#define TXGBE_RXD_LENGTH(rxd)           ((rxd)->qw1.hi.len)
+#define TXGBE_RXD_VLAN(rxd)             ((rxd)->qw1.hi.tag)
+
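The LS/MS/RS helpers these definitions rely on are not part of this hunk. Assuming the usual mask-and-shift semantics (LS builds a shifted field, MS builds a shifted mask, RS reads a field back out of a descriptor word), they behave like the `DEMO_*` sketch below; the names and exact definitions are an illustration, not the driver's headers:

```c
#include <stdint.h>

/* Assumed semantics of the helpers used throughout txgbe_rxtx.h:
 * LS() places a value into a bit field, MS() builds the field's mask,
 * RS() extracts the field from a 32-bit descriptor word. */
#define DEMO_LS(v, shift, mask)  (((uint32_t)(v) & (uint32_t)(mask)) << (shift))
#define DEMO_MS(shift, mask)     ((uint32_t)(mask) << (shift))
#define DEMO_RS(dw, shift, mask) (((uint32_t)(dw) >> (shift)) & (uint32_t)(mask))

/* e.g. a packet-type id occupying bits 9-16 of dw0, like TXGBE_RXD_PTID */
#define DEMO_PTID(dw) DEMO_RS(dw, 9, 0xFF)
```

With these definitions, writing a field with LS and reading it back with RS at the same shift/mask round-trips the value, which is the invariant the descriptor macros depend on.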
 /******************************************************************************
  * Transmit Descriptor
 ******************************************************************************/
@@ -145,6 +239,12 @@ struct txgbe_tx_desc {
 #define RX_RING_SZ ((TXGBE_RING_DESC_MAX + RTE_PMD_TXGBE_RX_MAX_BURST) * \
 		    sizeof(struct txgbe_rx_desc))
 
+#ifdef RTE_PMD_PACKET_PREFETCH
+#define rte_packet_prefetch(p)  rte_prefetch1(p)
+#else
+#define rte_packet_prefetch(p)  do {} while (0)
+#endif
+
 #define RTE_TXGBE_REGISTER_POLL_WAIT_10_MS  10
 #define RTE_TXGBE_WAIT_100_US               100
 
@@ -202,6 +302,8 @@ struct txgbe_rx_queue {
 	uint8_t             crc_len;  /**< 0 if CRC stripped, 4 otherwise. */
 	uint8_t             drop_en;  /**< If not 0, set SRRCTL.Drop_En. */
 	uint8_t             rx_deferred_start; /**< not in global dev start. */
+	/** flags to set in mbuf when a vlan is detected. */
+	uint64_t            vlan_flags;
 	uint64_t	    offloads; /**< Rx offloads with DEV_RX_OFFLOAD_* */
 	/** need to alloc dummy mbuf, for wraparound when scanning hw ring */
 	struct rte_mbuf fake_mbuf;
-- 
2.18.4




^ permalink raw reply	[flat|nested] 49+ messages in thread

* [dpdk-dev] [PATCH v1 26/42] net/txgbe: fill TX prepare function
  2020-09-01 11:50 [dpdk-dev] [PATCH v1 01/42] net/txgbe: add build and doc infrastructure Jiawen Wu
                   ` (23 preceding siblings ...)
  2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 25/42] net/txgbe: fill receive functions Jiawen Wu
@ 2020-09-01 11:50 ` Jiawen Wu
  2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 27/42] net/txgbe: add device stats get Jiawen Wu
                   ` (16 subsequent siblings)
  41 siblings, 0 replies; 49+ messages in thread
From: Jiawen Wu @ 2020-09-01 11:50 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Fill transmit prepare function.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/txgbe_rxtx.c | 52 +++++++++++++++++++++++++++++++---
 drivers/net/txgbe/txgbe_rxtx.h |  2 ++
 2 files changed, 50 insertions(+), 4 deletions(-)

diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index 0c35d3c9e..ef3d63b01 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -61,6 +61,9 @@ static const u64 TXGBE_TX_OFFLOAD_MASK = (
 		PKT_TX_TUNNEL_MASK |
 		PKT_TX_OUTER_IP_CKSUM);
 
+#define TXGBE_TX_OFFLOAD_NOTSUP_MASK \
+		(PKT_TX_OFFLOAD_MASK ^ TXGBE_TX_OFFLOAD_MASK)
+
 #if 1
 #define RTE_PMD_USE_PREFETCH
 #endif
@@ -959,14 +962,55 @@ txgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 	return nb_tx;
 }
 
+/*********************************************************************
+ *
+ *  TX prep functions
+ *
+ **********************************************************************/
 uint16_t
 txgbe_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 {
-	RTE_SET_USED(tx_queue);
-	RTE_SET_USED(tx_pkts);
-	RTE_SET_USED(nb_pkts);
+	int i, ret;
+	uint64_t ol_flags;
+	struct rte_mbuf *m;
+	struct txgbe_tx_queue *txq = (struct txgbe_tx_queue *)tx_queue;
 
-	return 0;
+	for (i = 0; i < nb_pkts; i++) {
+		m = tx_pkts[i];
+		ol_flags = m->ol_flags;
+
+		/*
+		 * Check that the packet meets the requirement on the
+		 * number of segments.
+		 * NOTE: for txgbe it is always (40 - WTHRESH), for both
+		 *       TSO and non-TSO.
+		 */
+
+		if (m->nb_segs > TXGBE_TX_MAX_SEG - txq->wthresh) {
+			rte_errno = EINVAL;
+			return i;
+		}
+
+		if (ol_flags & TXGBE_TX_OFFLOAD_NOTSUP_MASK) {
+			rte_errno = ENOTSUP;
+			return i;
+		}
+
+#ifdef RTE_LIBRTE_ETHDEV_DEBUG
+		ret = rte_validate_tx_offload(m);
+		if (ret != 0) {
+			rte_errno = -ret;
+			return i;
+		}
+#endif
+		ret = rte_net_intel_cksum_prepare(m);
+		if (ret != 0) {
+			rte_errno = -ret;
+			return i;
+		}
+	}
+
+	return i;
 }
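The prepare loop above builds its unsupported-offload mask by XOR-ing the supported mask against the mask of all Tx offload flags, so any request outside the supported set is rejected before transmit. A self-contained sketch of that validation logic, using made-up flag bits (the real `PKT_TX_*` values and the errno plumbing via `rte_errno` differ):

```c
#include <stdint.h>

/* Illustrative flag bits, not the real PKT_TX_* values. */
#define DEMO_TX_IP_CKSUM   (1ULL << 0)
#define DEMO_TX_TCP_SEG    (1ULL << 1)
#define DEMO_TX_VLAN       (1ULL << 2)
#define DEMO_TX_MACSEC     (1ULL << 3)   /* not in the supported set below */

#define DEMO_TX_OFFLOAD_MASK \
	(DEMO_TX_IP_CKSUM | DEMO_TX_TCP_SEG | DEMO_TX_VLAN | DEMO_TX_MACSEC)
#define DEMO_SUPPORTED_MASK \
	(DEMO_TX_IP_CKSUM | DEMO_TX_TCP_SEG | DEMO_TX_VLAN)

/* Same construction as the patch: everything in the full mask
 * that is not in the supported mask. */
#define DEMO_NOTSUP_MASK (DEMO_TX_OFFLOAD_MASK ^ DEMO_SUPPORTED_MASK)

/* Returns 0 when the packet is acceptable, a positive errno-style
 * code otherwise (the driver would store this in rte_errno). */
int demo_prep_check(uint64_t ol_flags, int nb_segs, int max_seg, int wthresh)
{
	if (nb_segs > max_seg - wthresh)
		return 22;	/* EINVAL: too many segments */
	if (ol_flags & DEMO_NOTSUP_MASK)
		return 95;	/* ENOTSUP: offload not supported */
	return 0;
}
```

With `max_seg = 40` and `wthresh = 8`, a 4-segment packet with only supported flags passes, while any packet carrying `DEMO_TX_MACSEC` fails the mask test.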
 
 /*********************************************************************
diff --git a/drivers/net/txgbe/txgbe_rxtx.h b/drivers/net/txgbe/txgbe_rxtx.h
index f9abb5ab8..296e34475 100644
--- a/drivers/net/txgbe/txgbe_rxtx.h
+++ b/drivers/net/txgbe/txgbe_rxtx.h
@@ -248,6 +248,8 @@ struct txgbe_tx_desc {
 #define RTE_TXGBE_REGISTER_POLL_WAIT_10_MS  10
 #define RTE_TXGBE_WAIT_100_US               100
 
+#define TXGBE_TX_MAX_SEG                    40
+
 /**
  * Structure associated with each descriptor of the RX ring of a RX queue.
  */
-- 
2.18.4




^ permalink raw reply	[flat|nested] 49+ messages in thread

* [dpdk-dev] [PATCH v1 27/42] net/txgbe: add device stats get
  2020-09-01 11:50 [dpdk-dev] [PATCH v1 01/42] net/txgbe: add build and doc infrastructure Jiawen Wu
                   ` (24 preceding siblings ...)
  2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 26/42] net/txgbe: fill TX prepare function Jiawen Wu
@ 2020-09-01 11:50 ` Jiawen Wu
  2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 28/42] net/txgbe: add device xstats get Jiawen Wu
                   ` (15 subsequent siblings)
  41 siblings, 0 replies; 49+ messages in thread
From: Jiawen Wu @ 2020-09-01 11:50 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Add device stats get, reading the statistic counters from hardware registers.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/base/txgbe_type.h | 153 ++++++++++++++++-
 drivers/net/txgbe/txgbe_ethdev.c    | 245 +++++++++++++++++++++++++++-
 drivers/net/txgbe/txgbe_ethdev.h    |  16 ++
 3 files changed, 411 insertions(+), 3 deletions(-)

diff --git a/drivers/net/txgbe/base/txgbe_type.h b/drivers/net/txgbe/base/txgbe_type.h
index 1c16257da..f9a18d581 100644
--- a/drivers/net/txgbe/base/txgbe_type.h
+++ b/drivers/net/txgbe/base/txgbe_type.h
@@ -10,6 +10,8 @@
 
 #define TXGBE_FRAME_SIZE_MAX	(9728) /* Maximum frame size, +FCS */
 #define TXGBE_FRAME_SIZE_DFT	(1518) /* Default frame size, +FCS */
+#define TXGBE_MAX_UP		8
+#define TXGBE_MAX_QP		(128)
 
 #define TXGBE_ALIGN				128 /* as intel did */
 
@@ -186,8 +188,149 @@ struct txgbe_bus_info {
 	u8 lan_id;
 	u16 instance_id;
 };
+/* Statistics counters collected by the MAC */
+/* PB[] RxTx */
+struct txgbe_pb_stats {
+	u64 tx_pb_xon_packets;
+	u64 rx_pb_xon_packets;
+	u64 tx_pb_xoff_packets;
+	u64 rx_pb_xoff_packets;
+	u64 rx_pb_dropped;
+	u64 rx_pb_mbuf_alloc_errors;
+	u64 tx_pb_xon2off_packets;
+};
+
+/* QP[] RxTx */
+struct txgbe_qp_stats {
+	u64 rx_qp_packets;
+	u64 tx_qp_packets;
+	u64 rx_qp_bytes;
+	u64 tx_qp_bytes;
+	u64 rx_qp_mc_packets;
+};
+
 struct txgbe_hw_stats {
-	u64 counter;
+	/* MNG RxTx */
+	u64 mng_bmc2host_packets;
+	u64 mng_host2bmc_packets;
+	/* Basic RxTx */
+	u64 rx_packets;
+	u64 tx_packets;
+	u64 rx_bytes;
+	u64 tx_bytes;
+	u64 rx_total_bytes;
+	u64 rx_total_packets;
+	u64 tx_total_packets;
+	u64 rx_total_missed_packets;
+	u64 rx_broadcast_packets;
+	u64 tx_broadcast_packets;
+	u64 rx_multicast_packets;
+	u64 tx_multicast_packets;
+	u64 rx_management_packets;
+	u64 tx_management_packets;
+	u64 rx_management_dropped;
+	u64 rx_drop_packets;
+
+	/* Basic Error */
+	u64 rx_crc_errors;
+	u64 rx_illegal_byte_errors;
+	u64 rx_error_bytes;
+	u64 rx_mac_short_packet_dropped;
+	u64 rx_length_errors;
+	u64 rx_undersize_errors;
+	u64 rx_fragment_errors;
+	u64 rx_oversize_errors;
+	u64 rx_jabber_errors;
+	u64 rx_l3_l4_xsum_error;
+	u64 mac_local_errors;
+	u64 mac_remote_errors;
+
+	/* Flow Director */
+	u64 flow_director_added_filters;
+	u64 flow_director_removed_filters;
+	u64 flow_director_filter_add_errors;
+	u64 flow_director_filter_remove_errors;
+	u64 flow_director_matched_filters;
+	u64 flow_director_missed_filters;
+
+	/* FCoE */
+	u64 rx_fcoe_crc_errors;
+	u64 rx_fcoe_mbuf_allocation_errors;
+	u64 rx_fcoe_dropped;
+	u64 rx_fcoe_packets;
+	u64 tx_fcoe_packets;
+	u64 rx_fcoe_bytes;
+	u64 tx_fcoe_bytes;
+	u64 rx_fcoe_no_ddp;
+	u64 rx_fcoe_no_ddp_ext_buff;
+
+	/* MACSEC */
+	u64 tx_macsec_pkts_untagged;
+	u64 tx_macsec_pkts_encrypted;
+	u64 tx_macsec_pkts_protected;
+	u64 tx_macsec_octets_encrypted;
+	u64 tx_macsec_octets_protected;
+	u64 rx_macsec_pkts_untagged;
+	u64 rx_macsec_pkts_badtag;
+	u64 rx_macsec_pkts_nosci;
+	u64 rx_macsec_pkts_unknownsci;
+	u64 rx_macsec_octets_decrypted;
+	u64 rx_macsec_octets_validated;
+	u64 rx_macsec_sc_pkts_unchecked;
+	u64 rx_macsec_sc_pkts_delayed;
+	u64 rx_macsec_sc_pkts_late;
+	u64 rx_macsec_sa_pkts_ok;
+	u64 rx_macsec_sa_pkts_invalid;
+	u64 rx_macsec_sa_pkts_notvalid;
+	u64 rx_macsec_sa_pkts_unusedsa;
+	u64 rx_macsec_sa_pkts_notusingsa;
+
+	/* MAC RxTx */
+	u64 rx_size_64_packets;
+	u64 rx_size_65_to_127_packets;
+	u64 rx_size_128_to_255_packets;
+	u64 rx_size_256_to_511_packets;
+	u64 rx_size_512_to_1023_packets;
+	u64 rx_size_1024_to_max_packets;
+	u64 tx_size_64_packets;
+	u64 tx_size_65_to_127_packets;
+	u64 tx_size_128_to_255_packets;
+	u64 tx_size_256_to_511_packets;
+	u64 tx_size_512_to_1023_packets;
+	u64 tx_size_1024_to_max_packets;
+
+	/* Flow Control */
+	u64 tx_xon_packets;
+	u64 rx_xon_packets;
+	u64 tx_xoff_packets;
+	u64 rx_xoff_packets;
+
+	/* PB[] RxTx */
+	struct {
+		u64 rx_up_packets;
+		u64 tx_up_packets;
+		u64 rx_up_bytes;
+		u64 tx_up_bytes;
+		u64 rx_up_drop_packets;
+
+		u64 tx_up_xon_packets;
+		u64 rx_up_xon_packets;
+		u64 tx_up_xoff_packets;
+		u64 rx_up_xoff_packets;
+		u64 rx_up_dropped;
+		u64 rx_up_mbuf_alloc_errors;
+		u64 tx_up_xon2off_packets;
+	} up[TXGBE_MAX_UP];
+
+	/* QP[] RxTx */
+	struct {
+		u64 rx_qp_packets;
+		u64 tx_qp_packets;
+		u64 rx_qp_bytes;
+		u64 tx_qp_bytes;
+		u64 rx_qp_mc_packets;
+	} qp[TXGBE_MAX_QP];
+
 };
 
 /* iterator type for walking multicast address lists */
@@ -472,6 +615,14 @@ struct txgbe_hw {
 
 	u32 q_rx_regs[128 * 4];
 	u32 q_tx_regs[128 * 4];
+	bool offset_loaded;
+	struct {
+		u64 rx_qp_packets;
+		u64 tx_qp_packets;
+		u64 rx_qp_bytes;
+		u64 tx_qp_bytes;
+		u64 rx_qp_mc_packets;
+	} qp_last[TXGBE_MAX_QP];
 };
 
 #include "txgbe_regs.h"
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index 08b31f66e..63f811d93 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -414,6 +414,7 @@ static int
 txgbe_dev_start(struct rte_eth_dev *dev)
 {
 	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+	struct txgbe_hw_stats *hw_stats = TXGBE_DEV_STATS(dev);
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
 	uint32_t intr_vector = 0;
@@ -595,6 +596,9 @@ txgbe_dev_start(struct rte_eth_dev *dev)
 
 	wr32m(hw, TXGBE_LEDCTL, 0xFFFFFFFF, TXGBE_LEDCTL_OD_MASK);
 
+	txgbe_read_stats_registers(hw, hw_stats);
+	hw->offset_loaded = 1;
+
 	return 0;
 
 error:
@@ -731,6 +735,9 @@ txgbe_dev_close(struct rte_eth_dev *dev)
 	txgbe_set_rar(hw, 0, hw->mac.addr, 0, true);
 
 	dev->dev_ops = NULL;
+	dev->rx_pkt_burst = NULL;
+	dev->tx_pkt_burst = NULL;
+
 
 	/* disable uio intr before callback unregister */
 	rte_intr_disable(intr_handle);
@@ -786,22 +793,256 @@ txgbe_dev_reset(struct rte_eth_dev *dev)
 
 	return ret;
 }
+
+#define UPDATE_QP_COUNTER_32bit(reg, last_counter, counter)     \
+	{                                                       \
+		uint32_t current_counter = rd32(hw, reg);       \
+		if (current_counter < last_counter)             \
+			current_counter += 0x100000000LL;       \
+		if (!hw->offset_loaded)                         \
+			last_counter = current_counter;         \
+		counter = current_counter - last_counter;       \
+		counter &= 0xFFFFFFFFLL;                        \
+	}
+
+#define UPDATE_QP_COUNTER_36bit(reg_lsb, reg_msb, last_counter, counter) \
+	{                                                                \
+		uint64_t current_counter_lsb = rd32(hw, reg_lsb);        \
+		uint64_t current_counter_msb = rd32(hw, reg_msb);        \
+		uint64_t current_counter = (current_counter_msb << 32) | \
+			current_counter_lsb;                             \
+		if (current_counter < last_counter)                      \
+			current_counter += 0x1000000000LL;               \
+		if (!hw->offset_loaded)                                  \
+			last_counter = current_counter;                  \
+		counter = current_counter - last_counter;                \
+		counter &= 0xFFFFFFFFFLL;                                \
+	}
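The `UPDATE_QP_COUNTER_36bit` macro above widens two 32-bit register reads into a 36-bit value and compensates when the running counter wraps past 2^36. The arithmetic can be checked in isolation with a plain function standing in for the register-reading macro:

```c
#include <stdint.h>

/* Mirror of the 36-bit update logic: 'current' is the latest
 * hardware reading, 'last' the snapshot taken at dev_start.
 * Returns the delta, adding 2^36 when the counter wrapped. */
uint64_t qp_counter_36bit_delta(uint64_t current, uint64_t last)
{
	if (current < last)
		current += 0x1000000000ULL;	/* 2^36 wraparound */
	return (current - last) & 0xFFFFFFFFFULL;
}
```

For example, a reading of 5 against a snapshot of `0xFFFFFFFFF` (the 36-bit maximum) yields a delta of 6, which is what the accumulation in `txgbe_read_stats_registers()` relies on.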
+
+void
+txgbe_read_stats_registers(struct txgbe_hw *hw,
+			   struct txgbe_hw_stats *hw_stats)
+{
+	unsigned i;
+
+	/* QP Stats */
+	for (i = 0; i < hw->nb_rx_queues; i++) {
+		UPDATE_QP_COUNTER_32bit(TXGBE_QPRXPKT(i),
+			hw->qp_last[i].rx_qp_packets,
+			hw_stats->qp[i].rx_qp_packets);
+		UPDATE_QP_COUNTER_36bit(TXGBE_QPRXOCTL(i), TXGBE_QPRXOCTH(i),
+			hw->qp_last[i].rx_qp_bytes,
+			hw_stats->qp[i].rx_qp_bytes);
+		UPDATE_QP_COUNTER_32bit(TXGBE_QPRXMPKT(i),
+			hw->qp_last[i].rx_qp_mc_packets,
+			hw_stats->qp[i].rx_qp_mc_packets);
+	}
+
+	for (i = 0; i < hw->nb_tx_queues; i++) {
+		UPDATE_QP_COUNTER_32bit(TXGBE_QPTXPKT(i),
+			hw->qp_last[i].tx_qp_packets,
+			hw_stats->qp[i].tx_qp_packets);
+		UPDATE_QP_COUNTER_36bit(TXGBE_QPTXOCTL(i), TXGBE_QPTXOCTH(i),
+			hw->qp_last[i].tx_qp_bytes,
+			hw_stats->qp[i].tx_qp_bytes);
+	}
+	/* PB Stats */
+	for (i = 0; i < TXGBE_MAX_UP; i++) {
+		hw_stats->up[i].rx_up_xon_packets +=
+				rd32(hw, TXGBE_PBRXUPXON(i));
+		hw_stats->up[i].rx_up_xoff_packets +=
+				rd32(hw, TXGBE_PBRXUPXOFF(i));
+		hw_stats->up[i].tx_up_xon_packets +=
+				rd32(hw, TXGBE_PBTXUPXON(i));
+		hw_stats->up[i].tx_up_xoff_packets +=
+				rd32(hw, TXGBE_PBTXUPXOFF(i));
+		hw_stats->up[i].tx_up_xon2off_packets +=
+				rd32(hw, TXGBE_PBTXUPOFF(i));
+		hw_stats->up[i].rx_up_dropped +=
+				rd32(hw, TXGBE_PBRXMISS(i));
+	}
+	hw_stats->rx_xon_packets += rd32(hw, TXGBE_PBRXLNKXON);
+	hw_stats->rx_xoff_packets += rd32(hw, TXGBE_PBRXLNKXOFF);
+	hw_stats->tx_xon_packets += rd32(hw, TXGBE_PBTXLNKXON);
+	hw_stats->tx_xoff_packets += rd32(hw, TXGBE_PBTXLNKXOFF);
+
+	/* DMA Stats */
+	hw_stats->rx_packets += rd32(hw, TXGBE_DMARXPKT);
+	hw_stats->tx_packets += rd32(hw, TXGBE_DMATXPKT);
+
+	hw_stats->rx_bytes += rd64(hw, TXGBE_DMARXOCTL);
+	hw_stats->tx_bytes += rd64(hw, TXGBE_DMATXOCTL);
+	hw_stats->rx_drop_packets += rd32(hw, TXGBE_PBRXDROP);
+
+	/* MAC Stats */
+	hw_stats->rx_crc_errors += rd64(hw, TXGBE_MACRXERRCRCL);
+	hw_stats->rx_multicast_packets += rd64(hw, TXGBE_MACRXMPKTL);
+	hw_stats->tx_multicast_packets += rd64(hw, TXGBE_MACTXMPKTL);
+
+	hw_stats->rx_total_packets += rd64(hw, TXGBE_MACRXPKTL);
+	hw_stats->tx_total_packets += rd64(hw, TXGBE_MACTXPKTL);
+	hw_stats->rx_total_bytes += rd64(hw, TXGBE_MACRXGBOCTL);
+
+	hw_stats->rx_broadcast_packets += rd64(hw, TXGBE_MACRXOCTL);
+	hw_stats->tx_broadcast_packets += rd32(hw, TXGBE_MACTXOCTL);
+
+	hw_stats->rx_size_64_packets += rd64(hw, TXGBE_MACRX1to64L);
+	hw_stats->rx_size_65_to_127_packets += rd64(hw, TXGBE_MACRX65to127L);
+	hw_stats->rx_size_128_to_255_packets += rd64(hw, TXGBE_MACRX128to255L);
+	hw_stats->rx_size_256_to_511_packets += rd64(hw, TXGBE_MACRX256to511L);
+	hw_stats->rx_size_512_to_1023_packets += rd64(hw, TXGBE_MACRX512to1023L);
+	hw_stats->rx_size_1024_to_max_packets += rd64(hw, TXGBE_MACRX1024toMAXL);
+	hw_stats->tx_size_64_packets += rd64(hw, TXGBE_MACTX1to64L);
+	hw_stats->tx_size_65_to_127_packets += rd64(hw, TXGBE_MACTX65to127L);
+	hw_stats->tx_size_128_to_255_packets += rd64(hw, TXGBE_MACTX128to255L);
+	hw_stats->tx_size_256_to_511_packets += rd64(hw, TXGBE_MACTX256to511L);
+	hw_stats->tx_size_512_to_1023_packets += rd64(hw, TXGBE_MACTX512to1023L);
+	hw_stats->tx_size_1024_to_max_packets += rd64(hw, TXGBE_MACTX1024toMAXL);
+
+	hw_stats->rx_undersize_errors += rd64(hw, TXGBE_MACRXERRLENL);
+	hw_stats->rx_oversize_errors += rd32(hw, TXGBE_MACRXOVERSIZE);
+	hw_stats->rx_jabber_errors += rd32(hw, TXGBE_MACRXJABBER);
+
+	/* MNG Stats */
+	hw_stats->mng_bmc2host_packets = rd32(hw, TXGBE_MNGBMC2OS);
+	hw_stats->mng_host2bmc_packets = rd32(hw, TXGBE_MNGOS2BMC);
+	hw_stats->rx_management_packets = rd32(hw, TXGBE_DMARXMNG);
+	hw_stats->tx_management_packets = rd32(hw, TXGBE_DMATXMNG);
+
+	/* FCoE Stats */
+	hw_stats->rx_fcoe_crc_errors += rd32(hw, TXGBE_FCOECRC);
+	hw_stats->rx_fcoe_mbuf_allocation_errors += rd32(hw, TXGBE_FCOELAST);
+	hw_stats->rx_fcoe_dropped += rd32(hw, TXGBE_FCOERPDC);
+	hw_stats->rx_fcoe_packets += rd32(hw, TXGBE_FCOEPRC);
+	hw_stats->tx_fcoe_packets += rd32(hw, TXGBE_FCOEPTC);
+	hw_stats->rx_fcoe_bytes += rd32(hw, TXGBE_FCOEDWRC);
+	hw_stats->tx_fcoe_bytes += rd32(hw, TXGBE_FCOEDWTC);
+
+	/* Flow Director Stats */
+	hw_stats->flow_director_matched_filters += rd32(hw, TXGBE_FDIRMATCH);
+	hw_stats->flow_director_missed_filters += rd32(hw, TXGBE_FDIRMISS);
+	hw_stats->flow_director_added_filters +=
+		TXGBE_FDIRUSED_ADD(rd32(hw, TXGBE_FDIRUSED));
+	hw_stats->flow_director_removed_filters +=
+		TXGBE_FDIRUSED_REM(rd32(hw, TXGBE_FDIRUSED));
+	hw_stats->flow_director_filter_add_errors +=
+		TXGBE_FDIRFAIL_ADD(rd32(hw, TXGBE_FDIRFAIL));
+	hw_stats->flow_director_filter_remove_errors +=
+		TXGBE_FDIRFAIL_REM(rd32(hw, TXGBE_FDIRFAIL));
+
+	/* MACsec Stats */
+	hw_stats->tx_macsec_pkts_untagged += rd32(hw, TXGBE_LSECTX_UTPKT);
+	hw_stats->tx_macsec_pkts_encrypted +=
+			rd32(hw, TXGBE_LSECTX_ENCPKT);
+	hw_stats->tx_macsec_pkts_protected +=
+			rd32(hw, TXGBE_LSECTX_PROTPKT);
+	hw_stats->tx_macsec_octets_encrypted +=
+			rd32(hw, TXGBE_LSECTX_ENCOCT);
+	hw_stats->tx_macsec_octets_protected +=
+			rd32(hw, TXGBE_LSECTX_PROTOCT);
+	hw_stats->rx_macsec_pkts_untagged += rd32(hw, TXGBE_LSECRX_UTPKT);
+	hw_stats->rx_macsec_pkts_badtag += rd32(hw, TXGBE_LSECRX_BTPKT);
+	hw_stats->rx_macsec_pkts_nosci += rd32(hw, TXGBE_LSECRX_NOSCIPKT);
+	hw_stats->rx_macsec_pkts_unknownsci += rd32(hw, TXGBE_LSECRX_UNSCIPKT);
+	hw_stats->rx_macsec_octets_decrypted += rd32(hw, TXGBE_LSECRX_DECOCT);
+	hw_stats->rx_macsec_octets_validated += rd32(hw, TXGBE_LSECRX_VLDOCT);
+	hw_stats->rx_macsec_sc_pkts_unchecked += rd32(hw, TXGBE_LSECRX_UNCHKPKT);
+	hw_stats->rx_macsec_sc_pkts_delayed += rd32(hw, TXGBE_LSECRX_DLYPKT);
+	hw_stats->rx_macsec_sc_pkts_late += rd32(hw, TXGBE_LSECRX_LATEPKT);
+	for (i = 0; i < 2; i++) {
+		hw_stats->rx_macsec_sa_pkts_ok +=
+			rd32(hw, TXGBE_LSECRX_OKPKT(i));
+		hw_stats->rx_macsec_sa_pkts_invalid +=
+			rd32(hw, TXGBE_LSECRX_INVPKT(i));
+		hw_stats->rx_macsec_sa_pkts_notvalid +=
+			rd32(hw, TXGBE_LSECRX_BADPKT(i));
+	}
+	hw_stats->rx_macsec_sa_pkts_unusedsa +=
+			rd32(hw, TXGBE_LSECRX_INVSAPKT);
+	hw_stats->rx_macsec_sa_pkts_notusingsa +=
+			rd32(hw, TXGBE_LSECRX_BADSAPKT);
+
+	hw_stats->rx_total_missed_packets = 0;
+	for (i = 0; i < TXGBE_MAX_UP; i++) {
+		hw_stats->rx_total_missed_packets +=
+			hw_stats->up[i].rx_up_dropped;
+	}
+}
+
 static int
 txgbe_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 {
-	RTE_SET_USED(dev);
-	RTE_SET_USED(stats);
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+	struct txgbe_hw_stats *hw_stats = TXGBE_DEV_STATS(dev);
+	struct txgbe_stat_mappings *stat_mappings =
+			TXGBE_DEV_STAT_MAPPINGS(dev);
+	uint32_t i, j;
 
+	txgbe_read_stats_registers(hw, hw_stats);
+
+	if (stats == NULL)
+		return -EINVAL;
+
+	/* Fill out the rte_eth_stats statistics structure */
+	stats->ipackets = hw_stats->rx_packets;
+	stats->ibytes = hw_stats->rx_bytes;
+	stats->opackets = hw_stats->tx_packets;
+	stats->obytes = hw_stats->tx_bytes;
+
+	memset(&stats->q_ipackets, 0, sizeof(stats->q_ipackets));
+	memset(&stats->q_opackets, 0, sizeof(stats->q_opackets));
+	memset(&stats->q_ibytes, 0, sizeof(stats->q_ibytes));
+	memset(&stats->q_obytes, 0, sizeof(stats->q_obytes));
+	memset(&stats->q_errors, 0, sizeof(stats->q_errors));
+	for (i = 0; i < TXGBE_MAX_QP; i++) {
+		uint32_t n = i / NB_QMAP_FIELDS_PER_QSM_REG;
+		uint32_t offset = (i % NB_QMAP_FIELDS_PER_QSM_REG) * 8;
+		uint32_t q_map;
+
+		q_map = (stat_mappings->rqsm[n] >> offset)
+				& QMAP_FIELD_RESERVED_BITS_MASK;
+		j = (q_map < RTE_ETHDEV_QUEUE_STAT_CNTRS
+		     ? q_map : q_map % RTE_ETHDEV_QUEUE_STAT_CNTRS);
+		stats->q_ipackets[j] += hw_stats->qp[i].rx_qp_packets;
+		stats->q_ibytes[j] += hw_stats->qp[i].rx_qp_bytes;
+
+		q_map = (stat_mappings->tqsm[n] >> offset)
+				& QMAP_FIELD_RESERVED_BITS_MASK;
+		j = (q_map < RTE_ETHDEV_QUEUE_STAT_CNTRS
+		     ? q_map : q_map % RTE_ETHDEV_QUEUE_STAT_CNTRS);
+		stats->q_opackets[j] += hw_stats->qp[i].tx_qp_packets;
+		stats->q_obytes[j] += hw_stats->qp[i].tx_qp_bytes;
+	}
+
+	/* Rx Errors */
+	stats->imissed  = hw_stats->rx_total_missed_packets;
+	stats->ierrors  = hw_stats->rx_crc_errors +
+			  hw_stats->rx_mac_short_packet_dropped +
+			  hw_stats->rx_length_errors +
+			  hw_stats->rx_undersize_errors +
+			  hw_stats->rx_oversize_errors +
+			  hw_stats->rx_drop_packets +
+			  hw_stats->rx_illegal_byte_errors +
+			  hw_stats->rx_error_bytes +
+			  hw_stats->rx_fragment_errors +
+			  hw_stats->rx_fcoe_crc_errors +
+			  hw_stats->rx_fcoe_mbuf_allocation_errors;
+
+	/* Tx Errors */
+	stats->oerrors  = 0;
 	return 0;
 }
 
 static int
 txgbe_dev_stats_reset(struct rte_eth_dev *dev)
 {
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
 	struct txgbe_hw_stats *hw_stats = TXGBE_DEV_STATS(dev);
 
 	/* HW registers are cleared on read */
+	hw->offset_loaded = 0;
 	txgbe_dev_stats_get(dev, NULL);
+	hw->offset_loaded = 1;
 
 	/* Reset software totals */
 	memset(hw_stats, 0, sizeof(*hw_stats));
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index dceb88d2f..d896b7775 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -40,6 +40,15 @@ struct txgbe_interrupt {
 	uint32_t mask[2];
 };
 
+#define TXGBE_NB_STAT_MAPPING  32
+#define QSM_REG_NB_BITS_PER_QMAP_FIELD 8
+#define NB_QMAP_FIELDS_PER_QSM_REG 4
+#define QMAP_FIELD_RESERVED_BITS_MASK 0x0f
+struct txgbe_stat_mappings {
+	uint32_t tqsm[TXGBE_NB_STAT_MAPPING];
+	uint32_t rqsm[TXGBE_NB_STAT_MAPPING];
+};
+
 struct txgbe_vf_info {
 	uint8_t api_version;
 	uint16_t switch_domain_id;
@@ -52,6 +61,7 @@ struct txgbe_adapter {
 	struct txgbe_hw             hw;
 	struct txgbe_hw_stats       stats;
 	struct txgbe_interrupt      intr;
+	struct txgbe_stat_mappings  stat_mappings;
 	struct txgbe_vf_info        *vfdata;
 	bool rx_bulk_alloc_allowed;
 };
@@ -77,6 +87,9 @@ int txgbe_vf_representor_uninit(struct rte_eth_dev *ethdev);
 #define TXGBE_DEV_INTR(dev) \
 	(&((struct txgbe_adapter *)(dev)->data->dev_private)->intr)
 
+#define TXGBE_DEV_STAT_MAPPINGS(dev) \
+	(&((struct txgbe_adapter *)(dev)->data->dev_private)->stat_mappings)
+
 #define TXGBE_DEV_VFDATA(dev) \
 	(&((struct txgbe_adapter *)(dev)->data->dev_private)->vfdata)
 
@@ -163,4 +176,7 @@ void txgbe_pf_mbx_process(struct rte_eth_dev *eth_dev);
 
 const uint32_t *txgbe_dev_supported_ptypes_get(struct rte_eth_dev *dev);
 void txgbe_dev_setup_link_alarm_handler(void *param);
+void txgbe_read_stats_registers(struct txgbe_hw *hw,
+			   struct txgbe_hw_stats *hw_stats);
+
 #endif /* _TXGBE_ETHDEV_H_ */
-- 
2.18.4




^ permalink raw reply	[flat|nested] 49+ messages in thread

* [dpdk-dev] [PATCH v1 28/42] net/txgbe: add device xstats get
  2020-09-01 11:50 [dpdk-dev] [PATCH v1 01/42] net/txgbe: add build and doc infrastructure Jiawen Wu
                   ` (25 preceding siblings ...)
  2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 27/42] net/txgbe: add device stats get Jiawen Wu
@ 2020-09-01 11:50 ` Jiawen Wu
  2020-09-09 17:53   ` Ferruh Yigit
  2020-09-01 11:51 ` [dpdk-dev] [PATCH v1 29/42] net/txgbe: add queue stats mapping and enable RX DMA unit Jiawen Wu
                   ` (14 subsequent siblings)
  41 siblings, 1 reply; 49+ messages in thread
From: Jiawen Wu @ 2020-09-01 11:50 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Add device extended stats get, reading the statistic counters from hardware registers.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/txgbe_ethdev.c | 383 +++++++++++++++++++++++++++++++
 drivers/net/txgbe/txgbe_ethdev.h |   6 +
 2 files changed, 389 insertions(+)

diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index 63f811d93..51554844e 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -64,6 +64,144 @@ static const struct rte_pci_id pci_id_txgbe_map[] = {
 
 static const struct eth_dev_ops txgbe_eth_dev_ops;
 
+#define HW_XSTAT(m) {#m, offsetof(struct txgbe_hw_stats, m)}
+#define HW_XSTAT_NAME(m, n) {n, offsetof(struct txgbe_hw_stats, m)}
+static const struct rte_txgbe_xstats_name_off rte_txgbe_stats_strings[] = {
+	/* MNG RxTx */
+	HW_XSTAT(mng_bmc2host_packets),
+	HW_XSTAT(mng_host2bmc_packets),
+	/* Basic RxTx */
+	HW_XSTAT(rx_packets),
+	HW_XSTAT(tx_packets),
+	HW_XSTAT(rx_bytes),
+	HW_XSTAT(tx_bytes),
+	HW_XSTAT(rx_total_bytes),
+	HW_XSTAT(rx_total_packets),
+	HW_XSTAT(tx_total_packets),
+	HW_XSTAT(rx_total_missed_packets),
+	HW_XSTAT(rx_broadcast_packets),
+	HW_XSTAT(rx_multicast_packets),
+	HW_XSTAT(rx_management_packets),
+	HW_XSTAT(tx_management_packets),
+	HW_XSTAT(rx_management_dropped),
+
+	/* Basic Error */
+	HW_XSTAT(rx_crc_errors),
+	HW_XSTAT(rx_illegal_byte_errors),
+	HW_XSTAT(rx_error_bytes),
+	HW_XSTAT(rx_mac_short_packet_dropped),
+	HW_XSTAT(rx_length_errors),
+	HW_XSTAT(rx_undersize_errors),
+	HW_XSTAT(rx_fragment_errors),
+	HW_XSTAT(rx_oversize_errors),
+	HW_XSTAT(rx_jabber_errors),
+	HW_XSTAT(rx_l3_l4_xsum_error),
+	HW_XSTAT(mac_local_errors),
+	HW_XSTAT(mac_remote_errors),
+
+	/* Flow Director */
+	HW_XSTAT(flow_director_added_filters),
+	HW_XSTAT(flow_director_removed_filters),
+	HW_XSTAT(flow_director_filter_add_errors),
+	HW_XSTAT(flow_director_filter_remove_errors),
+	HW_XSTAT(flow_director_matched_filters),
+	HW_XSTAT(flow_director_missed_filters),
+
+	/* FCoE */
+	HW_XSTAT(rx_fcoe_crc_errors),
+	HW_XSTAT(rx_fcoe_mbuf_allocation_errors),
+	HW_XSTAT(rx_fcoe_dropped),
+	HW_XSTAT(rx_fcoe_packets),
+	HW_XSTAT(tx_fcoe_packets),
+	HW_XSTAT(rx_fcoe_bytes),
+	HW_XSTAT(tx_fcoe_bytes),
+	HW_XSTAT(rx_fcoe_no_ddp),
+	HW_XSTAT(rx_fcoe_no_ddp_ext_buff),
+
+	/* MACSEC */
+	HW_XSTAT(tx_macsec_pkts_untagged),
+	HW_XSTAT(tx_macsec_pkts_encrypted),
+	HW_XSTAT(tx_macsec_pkts_protected),
+	HW_XSTAT(tx_macsec_octets_encrypted),
+	HW_XSTAT(tx_macsec_octets_protected),
+	HW_XSTAT(rx_macsec_pkts_untagged),
+	HW_XSTAT(rx_macsec_pkts_badtag),
+	HW_XSTAT(rx_macsec_pkts_nosci),
+	HW_XSTAT(rx_macsec_pkts_unknownsci),
+	HW_XSTAT(rx_macsec_octets_decrypted),
+	HW_XSTAT(rx_macsec_octets_validated),
+	HW_XSTAT(rx_macsec_sc_pkts_unchecked),
+	HW_XSTAT(rx_macsec_sc_pkts_delayed),
+	HW_XSTAT(rx_macsec_sc_pkts_late),
+	HW_XSTAT(rx_macsec_sa_pkts_ok),
+	HW_XSTAT(rx_macsec_sa_pkts_invalid),
+	HW_XSTAT(rx_macsec_sa_pkts_notvalid),
+	HW_XSTAT(rx_macsec_sa_pkts_unusedsa),
+	HW_XSTAT(rx_macsec_sa_pkts_notusingsa),
+
+	/* MAC RxTx */
+	HW_XSTAT(rx_size_64_packets),
+	HW_XSTAT(rx_size_65_to_127_packets),
+	HW_XSTAT(rx_size_128_to_255_packets),
+	HW_XSTAT(rx_size_256_to_511_packets),
+	HW_XSTAT(rx_size_512_to_1023_packets),
+	HW_XSTAT(rx_size_1024_to_max_packets),
+	HW_XSTAT(tx_size_64_packets),
+	HW_XSTAT(tx_size_65_to_127_packets),
+	HW_XSTAT(tx_size_128_to_255_packets),
+	HW_XSTAT(tx_size_256_to_511_packets),
+	HW_XSTAT(tx_size_512_to_1023_packets),
+	HW_XSTAT(tx_size_1024_to_max_packets),
+
+	/* Flow Control */
+	HW_XSTAT(tx_xon_packets),
+	HW_XSTAT(rx_xon_packets),
+	HW_XSTAT(tx_xoff_packets),
+	HW_XSTAT(rx_xoff_packets),
+
+	HW_XSTAT_NAME(tx_xon_packets, "tx_flow_control_xon_packets"),
+	HW_XSTAT_NAME(rx_xon_packets, "rx_flow_control_xon_packets"),
+	HW_XSTAT_NAME(tx_xoff_packets, "tx_flow_control_xoff_packets"),
+	HW_XSTAT_NAME(rx_xoff_packets, "rx_flow_control_xoff_packets"),
+};
+
+#define TXGBE_NB_HW_STATS (sizeof(rte_txgbe_stats_strings) / \
+			   sizeof(rte_txgbe_stats_strings[0]))
+
+/* Per-priority statistics */
+#define UP_XSTAT(m) {#m, offsetof(struct txgbe_hw_stats, up[0].m)}
+static const struct rte_txgbe_xstats_name_off rte_txgbe_up_strings[] = {
+	UP_XSTAT(rx_up_packets),
+	UP_XSTAT(tx_up_packets),
+	UP_XSTAT(rx_up_bytes),
+	UP_XSTAT(tx_up_bytes),
+	UP_XSTAT(rx_up_drop_packets),
+
+	UP_XSTAT(tx_up_xon_packets),
+	UP_XSTAT(rx_up_xon_packets),
+	UP_XSTAT(tx_up_xoff_packets),
+	UP_XSTAT(rx_up_xoff_packets),
+	UP_XSTAT(rx_up_dropped),
+	UP_XSTAT(rx_up_mbuf_alloc_errors),
+	UP_XSTAT(tx_up_xon2off_packets),
+};
+
+#define TXGBE_NB_UP_STATS (sizeof(rte_txgbe_up_strings) / \
+			   sizeof(rte_txgbe_up_strings[0]))
+
+/* Per-queue statistics */
+#define QP_XSTAT(m) {#m, offsetof(struct txgbe_hw_stats, qp[0].m)}
+static const struct rte_txgbe_xstats_name_off rte_txgbe_qp_strings[] = {
+	QP_XSTAT(rx_qp_packets),
+	QP_XSTAT(tx_qp_packets),
+	QP_XSTAT(rx_qp_bytes),
+	QP_XSTAT(tx_qp_bytes),
+	QP_XSTAT(rx_qp_mc_packets),
+};
+
+#define TXGBE_NB_QP_STATS (sizeof(rte_txgbe_qp_strings) / \
+			   sizeof(rte_txgbe_qp_strings[0]))
+
 static inline int
 txgbe_is_sfp(struct txgbe_hw *hw)
 {
@@ -1050,6 +1188,246 @@ txgbe_dev_stats_reset(struct rte_eth_dev *dev)
 	return 0;
 }
 
+/* This function calculates the number of xstats based on the current config */
+static unsigned
+txgbe_xstats_calc_num(struct rte_eth_dev *dev)
+{
+	int nb_queues = max(dev->data->nb_rx_queues, dev->data->nb_tx_queues);
+	return TXGBE_NB_HW_STATS +
+	       TXGBE_NB_UP_STATS * TXGBE_MAX_UP +
+	       TXGBE_NB_QP_STATS * nb_queues;
+}
+
+static inline int
+txgbe_get_name_by_id(uint32_t id, char *name, uint32_t size)
+{
+	int nb, st;
+
+	/* Extended stats from txgbe_hw_stats */
+	if (id < TXGBE_NB_HW_STATS) {
+		snprintf(name, size, "[hw]%s",
+			rte_txgbe_stats_strings[id].name);
+		return 0;
+	}
+	id -= TXGBE_NB_HW_STATS;
+
+	/* Priority Stats */
+	if (id < TXGBE_NB_UP_STATS * TXGBE_MAX_UP) {
+		nb = id / TXGBE_NB_UP_STATS;
+		st = id % TXGBE_NB_UP_STATS;
+		snprintf(name, size, "[p%u]%s", nb,
+			rte_txgbe_up_strings[st].name);
+		return 0;
+	}
+	id -= TXGBE_NB_UP_STATS * TXGBE_MAX_UP;
+
+	/* Queue Stats */
+	if (id < TXGBE_NB_QP_STATS * TXGBE_MAX_QP) {
+		nb = id / TXGBE_NB_QP_STATS;
+		st = id % TXGBE_NB_QP_STATS;
+		snprintf(name, size, "[q%u]%s", nb,
+			rte_txgbe_qp_strings[st].name);
+		return 0;
+	}
+	id -= TXGBE_NB_QP_STATS * TXGBE_MAX_QP;
+
+	return -(int)(id + 1);
+}
+
+static inline int
+txgbe_get_offset_by_id(uint32_t id, uint32_t *offset)
+{
+	int nb, st;
+
+	/* Extended stats from txgbe_hw_stats */
+	if (id < TXGBE_NB_HW_STATS) {
+		*offset = rte_txgbe_stats_strings[id].offset;
+		return 0;
+	}
+	id -= TXGBE_NB_HW_STATS;
+
+	/* Priority Stats */
+	if (id < TXGBE_NB_UP_STATS * TXGBE_MAX_UP) {
+		nb = id / TXGBE_NB_UP_STATS;
+		st = id % TXGBE_NB_UP_STATS;
+		*offset = rte_txgbe_up_strings[st].offset +
+			nb * (TXGBE_NB_UP_STATS * sizeof(uint64_t));
+		return 0;
+	}
+	id -= TXGBE_NB_UP_STATS * TXGBE_MAX_UP;
+
+	/* Queue Stats */
+	if (id < TXGBE_NB_QP_STATS * TXGBE_MAX_QP) {
+		nb = id / TXGBE_NB_QP_STATS;
+		st = id % TXGBE_NB_QP_STATS;
+		*offset = rte_txgbe_qp_strings[st].offset +
+			nb * (TXGBE_NB_QP_STATS * sizeof(uint64_t));
+		return 0;
+	}
+	id -= TXGBE_NB_QP_STATS * TXGBE_MAX_QP;
+
+	return -(int)(id + 1);
+}
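`txgbe_get_name_by_id()` and `txgbe_get_offset_by_id()` above both peel group after group off a flat xstat id: first the basic HW stats, then the per-priority array, then the per-queue array. The decomposition can be sketched standalone; `NB_HW_STATS` below is an illustrative count, not the real table size:

```c
#include <stdint.h>

/* Group sizes as laid out in the patch series; NB_HW_STATS is a
 * placeholder for the real string-table size. */
#define NB_HW_STATS 83
#define NB_UP_STATS 12
#define MAX_UP       8
#define NB_QP_STATS  5
#define MAX_QP     128

/* Decompose a flat xstat id into (group, instance, stat), mirroring
 * the lookup helpers. Returns 0 on success, -1 if out of range. */
int decode_xstat_id(uint32_t id, int *group, int *inst, int *stat)
{
	if (id < NB_HW_STATS) {
		*group = 0; *inst = 0; *stat = (int)id;
		return 0;
	}
	id -= NB_HW_STATS;

	if (id < NB_UP_STATS * MAX_UP) {
		*group = 1; *inst = id / NB_UP_STATS; *stat = id % NB_UP_STATS;
		return 0;
	}
	id -= NB_UP_STATS * MAX_UP;

	if (id < NB_QP_STATS * MAX_QP) {
		*group = 2; *inst = id / NB_QP_STATS; *stat = id % NB_QP_STATS;
		return 0;
	}
	return -1;
}
```

The offset variant then scales the instance index by the per-instance struct size, which works because `up[]` and `qp[]` are contiguous arrays of `u64` fields inside `txgbe_hw_stats`.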
+
+static int txgbe_dev_xstats_get_names(struct rte_eth_dev *dev,
+	struct rte_eth_xstat_name *xstats_names, unsigned int limit)
+{
+	unsigned i, count;
+
+	count = txgbe_xstats_calc_num(dev);
+	if (xstats_names == NULL) {
+		return count;
+	}
+
+	/* Note: limit >= cnt_stats checked upstream
+	 * in rte_eth_xstats_names()
+	 */
+	limit = min(limit, count);
+
+	/* Extended stats from txgbe_hw_stats */
+	for (i = 0; i < limit; i++) {
+		if (txgbe_get_name_by_id(i, xstats_names[i].name,
+			sizeof(xstats_names[i].name))) {
+			PMD_INIT_LOG(WARNING, "id value %d isn't valid", i);
+			break;
+		}
+	}
+
+	return i;
+}
+
+static int txgbe_dev_xstats_get_names_by_id(struct rte_eth_dev *dev,
+	struct rte_eth_xstat_name *xstats_names,
+	const uint64_t *ids,
+	unsigned int limit)
+{
+	unsigned i;
+
+	if (ids == NULL) {
+		return txgbe_dev_xstats_get_names(dev, xstats_names, limit);
+	}
+
+	for (i = 0; i < limit; i++) {
+		if (txgbe_get_name_by_id(ids[i], xstats_names[i].name,
+				sizeof(xstats_names[i].name))) {
+			PMD_INIT_LOG(WARNING, "id value %d isn't valid", i);
+			return -1;
+		}
+	}
+
+	return i;
+}
+
+static int
+txgbe_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
+					 unsigned limit)
+{
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+	struct txgbe_hw_stats *hw_stats = TXGBE_DEV_STATS(dev);
+	unsigned i, count;
+
+	txgbe_read_stats_registers(hw, hw_stats);
+
+	/* When xstats is NULL the caller only wants the count; the
+	 * registers have already been read (and latched) above.
+	 */
+	count = txgbe_xstats_calc_num(dev);
+	if (xstats == NULL) {
+		return count;
+	}
+
+	limit = min(limit, txgbe_xstats_calc_num(dev));
+
+	/* Extended stats from txgbe_hw_stats */
+	for (i = 0; i < limit; i++) {
+		uint32_t offset = 0;
+
+		if (txgbe_get_offset_by_id(i, &offset)) {
+			PMD_INIT_LOG(WARNING, "id value %d isn't valid", i);
+			break;
+		}
+		xstats[i].value = *(uint64_t *)(((char *)hw_stats) + offset);
+		xstats[i].id = i;
+	}
+
+	return i;
+}
+
+static int
+txgbe_dev_xstats_get_(struct rte_eth_dev *dev, uint64_t *values,
+					 unsigned limit)
+{
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+	struct txgbe_hw_stats *hw_stats = TXGBE_DEV_STATS(dev);
+	unsigned i, count;
+
+	txgbe_read_stats_registers(hw, hw_stats);
+
+	/* When values is NULL the caller only wants the count; the
+	 * registers have already been read (and latched) above.
+	 */
+	count = txgbe_xstats_calc_num(dev);
+	if (values == NULL) {
+		return count;
+	}
+
+	limit = min(limit, txgbe_xstats_calc_num(dev));
+
+	/* Extended stats from txgbe_hw_stats */
+	for (i = 0; i < limit; i++) {
+		uint32_t offset;
+
+		if (txgbe_get_offset_by_id(i, &offset)) {
+			PMD_INIT_LOG(WARNING, "id value %d isn't valid", i);
+			break;
+		}
+		values[i] = *(uint64_t *)(((char *)hw_stats) + offset);
+	}
+
+	return i;
+}
+
+static int
+txgbe_dev_xstats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids,
+		uint64_t *values, unsigned int limit)
+{
+	struct txgbe_hw_stats *hw_stats = TXGBE_DEV_STATS(dev);
+	unsigned i;
+
+	if (ids == NULL) {
+		return txgbe_dev_xstats_get_(dev, values, limit);
+	}
+
+	for (i = 0; i < limit; i++) {
+		uint32_t offset;
+
+		if (txgbe_get_offset_by_id(ids[i], &offset)) {
+			PMD_INIT_LOG(WARNING, "id value %d isn't valid", i);
+			break;
+		}
+		values[i] = *(uint64_t *)(((char *)hw_stats) + offset);
+	}
+
+	return i;
+}
+
+static int
+txgbe_dev_xstats_reset(struct rte_eth_dev *dev)
+{
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+	struct txgbe_hw_stats *hw_stats = TXGBE_DEV_STATS(dev);
+
+	/* HW registers are cleared on read */
+	hw->offset_loaded = 0;
+	txgbe_read_stats_registers(hw, hw_stats);
+	hw->offset_loaded = 1;
+
+	/* Reset software totals */
+	memset(hw_stats, 0, sizeof(*hw_stats));
+
+	return 0;
+}
+
 const uint32_t *
 txgbe_dev_supported_ptypes_get(struct rte_eth_dev *dev)
 {
@@ -1574,7 +1952,12 @@ static const struct eth_dev_ops txgbe_eth_dev_ops = {
 	.dev_reset                  = txgbe_dev_reset,
 	.link_update                = txgbe_dev_link_update,
 	.stats_get                  = txgbe_dev_stats_get,
+	.xstats_get                 = txgbe_dev_xstats_get,
+	.xstats_get_by_id           = txgbe_dev_xstats_get_by_id,
 	.stats_reset                = txgbe_dev_stats_reset,
+	.xstats_reset               = txgbe_dev_xstats_reset,
+	.xstats_get_names           = txgbe_dev_xstats_get_names,
+	.xstats_get_names_by_id     = txgbe_dev_xstats_get_names_by_id,
 	.dev_supported_ptypes_get   = txgbe_dev_supported_ptypes_get,
 	.rx_queue_start	            = txgbe_dev_rx_queue_start,
 	.rx_queue_stop              = txgbe_dev_rx_queue_stop,
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index d896b7775..ffff4ee11 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -174,6 +174,12 @@ void txgbe_pf_mbx_process(struct rte_eth_dev *eth_dev);
 #define TXGBE_LINK_UP_CHECK_TIMEOUT   1000 /* ms */
 #define TXGBE_VMDQ_NUM_UC_MAC         4096 /* Maximum nb. of UC MAC addr. */
 
+/* store statistics names and their offsets in the stats structure */
+struct rte_txgbe_xstats_name_off {
+	char name[RTE_ETH_XSTATS_NAME_SIZE];
+	unsigned offset;
+};
+
 const uint32_t *txgbe_dev_supported_ptypes_get(struct rte_eth_dev *dev);
 void txgbe_dev_setup_link_alarm_handler(void *param);
 void txgbe_read_stats_registers(struct txgbe_hw *hw,
-- 
2.18.4




^ permalink raw reply	[flat|nested] 49+ messages in thread
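The xstats patch above retrieves each extended statistic by pairing a name table with a byte offset into `struct txgbe_hw_stats`, then reading a `uint64_t` at that offset. A minimal, self-contained sketch of that lookup pattern (the `demo_*` names and fields are illustrative stand-ins, not the driver's real types):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical stand-in for txgbe_hw_stats: only the fields we index here. */
struct demo_hw_stats {
	uint64_t rx_packets;
	uint64_t tx_packets;
	uint64_t rx_crc_errors;
};

/* Name/offset table, mirroring the rte_txgbe_xstats_name_off pattern. */
struct demo_xstats_name_off {
	char name[64];
	unsigned int offset;
};

static const struct demo_xstats_name_off demo_xstats[] = {
	{"rx_packets",    offsetof(struct demo_hw_stats, rx_packets)},
	{"tx_packets",    offsetof(struct demo_hw_stats, tx_packets)},
	{"rx_crc_errors", offsetof(struct demo_hw_stats, rx_crc_errors)},
};

/* Fetch one stat by table index the way the driver does: treat the stats
 * struct as a byte array and read a u64 at the recorded offset. */
static uint64_t
demo_xstat_value(const struct demo_hw_stats *st, unsigned int id)
{
	return *(const uint64_t *)((const char *)st + demo_xstats[id].offset);
}
```

This keeps the get, get-by-id, and get-names callbacks in lockstep: all three walk the same table, so an id is valid exactly when it indexes the table.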

* [dpdk-dev] [PATCH v1 29/42] net/txgbe: add queue stats mapping and enable RX DMA unit
  2020-09-01 11:50 [dpdk-dev] [PATCH v1 01/42] net/txgbe: add build and doc infrastructure Jiawen Wu
                   ` (26 preceding siblings ...)
  2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 28/42] net/txgbe: add device xstats get Jiawen Wu
@ 2020-09-01 11:51 ` Jiawen Wu
  2020-09-09 17:54   ` Ferruh Yigit
  2020-09-01 11:51 ` [dpdk-dev] [PATCH v1 30/42] net/txgbe: add device info get Jiawen Wu
                   ` (13 subsequent siblings)
  41 siblings, 1 reply; 49+ messages in thread
From: Jiawen Wu @ 2020-09-01 11:51 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Add queue statistics mapping, and complete the receive and transmit units with DMA and security path enablement.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/base/txgbe_hw.c   | 389 +++++++++++++++++++++++++++-
 drivers/net/txgbe/base/txgbe_hw.h   |   9 +
 drivers/net/txgbe/base/txgbe_type.h |   1 +
 drivers/net/txgbe/txgbe_ethdev.c    |  52 ++++
 4 files changed, 450 insertions(+), 1 deletion(-)

diff --git a/drivers/net/txgbe/base/txgbe_hw.c b/drivers/net/txgbe/base/txgbe_hw.c
index 13f79741a..05f323a07 100644
--- a/drivers/net/txgbe/base/txgbe_hw.c
+++ b/drivers/net/txgbe/base/txgbe_hw.c
@@ -9,6 +9,8 @@
 #include "txgbe_mng.h"
 #include "txgbe_hw.h"
 
+#define TXGBE_RAPTOR_MAX_TX_QUEUES 128
+#define TXGBE_RAPTOR_MAX_RX_QUEUES 128
 
 STATIC s32 txgbe_setup_copper_link_raptor(struct txgbe_hw *hw,
 					 u32 speed,
@@ -111,6 +113,149 @@ s32 txgbe_init_hw(struct txgbe_hw *hw)
 	return status;
 }
 
+/**
+ *  txgbe_clear_hw_cntrs - Generic clear hardware counters
+ *  @hw: pointer to hardware structure
+ *
+ *  Clears all hardware statistics counters by reading them from the hardware.
+ *  Statistics counters are clear-on-read.
+ **/
+s32 txgbe_clear_hw_cntrs(struct txgbe_hw *hw)
+{
+	u16 i = 0;
+
+	DEBUGFUNC("txgbe_clear_hw_cntrs");
+
+	/* QP Stats */
+	/* don't write clear queue stats */
+	for (i = 0; i < TXGBE_MAX_QP; i++) {
+		hw->qp_last[i].rx_qp_packets = 0;
+		hw->qp_last[i].tx_qp_packets = 0;
+		hw->qp_last[i].rx_qp_bytes = 0;
+		hw->qp_last[i].tx_qp_bytes = 0;
+		hw->qp_last[i].rx_qp_mc_packets = 0;
+	}
+
+	/* PB Stats */
+	for (i = 0; i < TXGBE_MAX_UP; i++) {
+		rd32(hw, TXGBE_PBRXUPXON(i));
+		rd32(hw, TXGBE_PBRXUPXOFF(i));
+		rd32(hw, TXGBE_PBTXUPXON(i));
+		rd32(hw, TXGBE_PBTXUPXOFF(i));
+		rd32(hw, TXGBE_PBTXUPOFF(i));
+
+		rd32(hw, TXGBE_PBRXMISS(i));
+	}
+	rd32(hw, TXGBE_PBRXLNKXON);
+	rd32(hw, TXGBE_PBRXLNKXOFF);
+	rd32(hw, TXGBE_PBTXLNKXON);
+	rd32(hw, TXGBE_PBTXLNKXOFF);
+
+	/* DMA Stats */
+	rd32(hw, TXGBE_DMARXPKT);
+	rd32(hw, TXGBE_DMATXPKT);
+
+	rd64(hw, TXGBE_DMARXOCTL);
+	rd64(hw, TXGBE_DMATXOCTL);
+
+	/* MAC Stats */
+	rd64(hw, TXGBE_MACRXERRCRCL);
+	rd64(hw, TXGBE_MACRXMPKTL);
+	rd64(hw, TXGBE_MACTXMPKTL);
+
+	rd64(hw, TXGBE_MACRXPKTL);
+	rd64(hw, TXGBE_MACTXPKTL);
+	rd64(hw, TXGBE_MACRXGBOCTL);
+
+	rd64(hw, TXGBE_MACRXOCTL);
+	rd32(hw, TXGBE_MACTXOCTL);
+
+	rd64(hw, TXGBE_MACRX1to64L);
+	rd64(hw, TXGBE_MACRX65to127L);
+	rd64(hw, TXGBE_MACRX128to255L);
+	rd64(hw, TXGBE_MACRX256to511L);
+	rd64(hw, TXGBE_MACRX512to1023L);
+	rd64(hw, TXGBE_MACRX1024toMAXL);
+	rd64(hw, TXGBE_MACTX1to64L);
+	rd64(hw, TXGBE_MACTX65to127L);
+	rd64(hw, TXGBE_MACTX128to255L);
+	rd64(hw, TXGBE_MACTX256to511L);
+	rd64(hw, TXGBE_MACTX512to1023L);
+	rd64(hw, TXGBE_MACTX1024toMAXL);
+
+	rd64(hw, TXGBE_MACRXERRLENL);
+	rd32(hw, TXGBE_MACRXOVERSIZE);
+	rd32(hw, TXGBE_MACRXJABBER);
+
+	/* FCoE Stats */
+	rd32(hw, TXGBE_FCOECRC);
+	rd32(hw, TXGBE_FCOELAST);
+	rd32(hw, TXGBE_FCOERPDC);
+	rd32(hw, TXGBE_FCOEPRC);
+	rd32(hw, TXGBE_FCOEPTC);
+	rd32(hw, TXGBE_FCOEDWRC);
+	rd32(hw, TXGBE_FCOEDWTC);
+
+	/* Flow Director Stats */
+	rd32(hw, TXGBE_FDIRMATCH);
+	rd32(hw, TXGBE_FDIRMISS);
+	rd32(hw, TXGBE_FDIRUSED);
+	rd32(hw, TXGBE_FDIRUSED);
+	rd32(hw, TXGBE_FDIRFAIL);
+	rd32(hw, TXGBE_FDIRFAIL);
+
+	/* MACsec Stats */
+	rd32(hw, TXGBE_LSECTX_UTPKT);
+	rd32(hw, TXGBE_LSECTX_ENCPKT);
+	rd32(hw, TXGBE_LSECTX_PROTPKT);
+	rd32(hw, TXGBE_LSECTX_ENCOCT);
+	rd32(hw, TXGBE_LSECTX_PROTOCT);
+	rd32(hw, TXGBE_LSECRX_UTPKT);
+	rd32(hw, TXGBE_LSECRX_BTPKT);
+	rd32(hw, TXGBE_LSECRX_NOSCIPKT);
+	rd32(hw, TXGBE_LSECRX_UNSCIPKT);
+	rd32(hw, TXGBE_LSECRX_DECOCT);
+	rd32(hw, TXGBE_LSECRX_VLDOCT);
+	rd32(hw, TXGBE_LSECRX_UNCHKPKT);
+	rd32(hw, TXGBE_LSECRX_DLYPKT);
+	rd32(hw, TXGBE_LSECRX_LATEPKT);
+	for (i = 0; i < 2; i++) {
+		rd32(hw, TXGBE_LSECRX_OKPKT(i));
+		rd32(hw, TXGBE_LSECRX_INVPKT(i));
+		rd32(hw, TXGBE_LSECRX_BADPKT(i));
+	}
+	rd32(hw, TXGBE_LSECRX_INVSAPKT);
+	rd32(hw, TXGBE_LSECRX_BADSAPKT);
+
+	return 0;
+}
+
+/**
+ *  txgbe_set_lan_id_multi_port - Set LAN id for PCIe multiple port devices
+ *  @hw: pointer to the HW structure
+ *
+ *  Determines the LAN function id by reading memory-mapped registers, swaps
+ *  the port value if requested, and sets the MAC instance for devices that
+ *  share a CS4227.
+ **/
+void txgbe_set_lan_id_multi_port(struct txgbe_hw *hw)
+{
+	struct txgbe_bus_info *bus = &hw->bus;
+	u32 reg;
+
+	DEBUGFUNC("txgbe_set_lan_id_multi_port_pcie");
+
+	reg = rd32(hw, TXGBE_PORTSTAT);
+	bus->lan_id = TXGBE_PORTSTAT_ID(reg);
+
+	/* check for single port */
+	reg = rd32(hw, TXGBE_PWR);
+	if (TXGBE_PWR_LANID_SWAP == TXGBE_PWR_LANID(reg))
+		bus->func = 0;
+	else
+		bus->func = bus->lan_id;
+}
+
 /**
  *  txgbe_stop_hw - Generic stop Tx/Rx units
  *  @hw: pointer to hardware structure
@@ -133,6 +278,9 @@ s32 txgbe_stop_hw(struct txgbe_hw *hw)
 	 */
 	hw->adapter_stopped = true;
 
+	/* Disable the receive unit */
+	txgbe_disable_rx(hw);
+
 	/* Clear interrupt mask to stop interrupts from being generated */
 	wr32(hw, TXGBE_IENMISC, 0);
 	wr32(hw, TXGBE_IMS(0), TXGBE_IMS_MASK);
@@ -279,6 +427,113 @@ s32 txgbe_set_rar(struct txgbe_hw *hw, u32 index, u8 *addr, u32 vmdq,
 	return 0;
 }
 
+/**
+ *  txgbe_disable_sec_rx_path - Stops the receive data path
+ *  @hw: pointer to hardware structure
+ *
+ *  Stops the receive data path and waits for the HW to internally empty
+ *  the Rx security block
+ **/
+s32 txgbe_disable_sec_rx_path(struct txgbe_hw *hw)
+{
+#define TXGBE_MAX_SECRX_POLL 4000
+
+	int i;
+	u32 secrxreg;
+
+	DEBUGFUNC("txgbe_disable_sec_rx_path");
+
+	secrxreg = rd32(hw, TXGBE_SECRXCTL);
+	secrxreg |= TXGBE_SECRXCTL_XDSA;
+	wr32(hw, TXGBE_SECRXCTL, secrxreg);
+	for (i = 0; i < TXGBE_MAX_SECRX_POLL; i++) {
+		secrxreg = rd32(hw, TXGBE_SECRXSTAT);
+		if (secrxreg & TXGBE_SECRXSTAT_RDY)
+			break;
+		else
+			/* Use interrupt-safe sleep just in case */
+			usec_delay(10);
+	}
+
+	/* For informational purposes only */
+	if (i >= TXGBE_MAX_SECRX_POLL)
+		DEBUGOUT("Rx unit being enabled before security "
+			 "path fully disabled.  Continuing with init.\n");
+
+	return 0;
+}
+
+/**
+ *  txgbe_enable_sec_rx_path - Enables the receive data path
+ *  @hw: pointer to hardware structure
+ *
+ *  Enables the receive data path.
+ **/
+s32 txgbe_enable_sec_rx_path(struct txgbe_hw *hw)
+{
+	u32 secrxreg;
+
+	DEBUGFUNC("txgbe_enable_sec_rx_path");
+
+	secrxreg = rd32(hw, TXGBE_SECRXCTL);
+	secrxreg &= ~TXGBE_SECRXCTL_XDSA;
+	wr32(hw, TXGBE_SECRXCTL, secrxreg);
+	txgbe_flush(hw);
+
+	return 0;
+}
+
+/**
+ *  txgbe_disable_sec_tx_path - Stops the transmit data path
+ *  @hw: pointer to hardware structure
+ *
+ *  Stops the transmit data path and waits for the HW to internally empty
+ *  the Tx security block
+ **/
+int txgbe_disable_sec_tx_path(struct txgbe_hw *hw)
+{
+#define TXGBE_MAX_SECTX_POLL 40
+
+	int i;
+	u32 sectxreg;
+
+	sectxreg = rd32(hw, TXGBE_SECTXCTL);
+	sectxreg |= TXGBE_SECTXCTL_XDSA;
+	wr32(hw, TXGBE_SECTXCTL, sectxreg);
+	for (i = 0; i < TXGBE_MAX_SECTX_POLL; i++) {
+		sectxreg = rd32(hw, TXGBE_SECTXSTAT);
+		if (sectxreg & TXGBE_SECTXSTAT_RDY)
+			break;
+		/* Use interrupt-safe sleep just in case */
+		usec_delay(1000);
+	}
+
+	/* For informational purposes only */
+	if (i >= TXGBE_MAX_SECTX_POLL)
+		PMD_DRV_LOG(DEBUG, "Tx unit being enabled before security "
+			 "path fully disabled.  Continuing with init.");
+
+	return 0;
+}
+
+/**
+ *  txgbe_enable_sec_tx_path - Enables the transmit data path
+ *  @hw: pointer to hardware structure
+ *
+ *  Enables the transmit data path.
+ **/
+int txgbe_enable_sec_tx_path(struct txgbe_hw *hw)
+{
+	uint32_t sectxreg;
+
+	sectxreg = rd32(hw, TXGBE_SECTXCTL);
+	sectxreg &= ~TXGBE_SECTXCTL_XDSA;
+	wr32(hw, TXGBE_SECTXCTL, sectxreg);
+	txgbe_flush(hw);
+
+	return 0;
+}
+
 
 /**
  *  txgbe_need_crosstalk_fix - Determine if we need to do cross talk fix
@@ -453,6 +708,38 @@ void txgbe_clear_tx_pending(struct txgbe_hw *hw)
 }
 
 
+void txgbe_disable_rx(struct txgbe_hw *hw)
+{
+	u32 pfdtxgswc;
+
+	pfdtxgswc = rd32(hw, TXGBE_PSRCTL);
+	if (pfdtxgswc & TXGBE_PSRCTL_LBENA) {
+		pfdtxgswc &= ~TXGBE_PSRCTL_LBENA;
+		wr32(hw, TXGBE_PSRCTL, pfdtxgswc);
+		hw->mac.set_lben = true;
+	} else {
+		hw->mac.set_lben = false;
+	}
+
+	wr32m(hw, TXGBE_PBRXCTL, TXGBE_PBRXCTL_ENA, 0);
+	wr32m(hw, TXGBE_MACRXCFG, TXGBE_MACRXCFG_ENA, 0);
+}
+
+void txgbe_enable_rx(struct txgbe_hw *hw)
+{
+	u32 pfdtxgswc;
+
+	wr32m(hw, TXGBE_MACRXCFG, TXGBE_MACRXCFG_ENA, TXGBE_MACRXCFG_ENA);
+	wr32m(hw, TXGBE_PBRXCTL, TXGBE_PBRXCTL_ENA, TXGBE_PBRXCTL_ENA);
+
+	if (hw->mac.set_lben) {
+		pfdtxgswc = rd32(hw, TXGBE_PSRCTL);
+		pfdtxgswc |= TXGBE_PSRCTL_LBENA;
+		wr32(hw, TXGBE_PSRCTL, pfdtxgswc);
+		hw->mac.set_lben = false;
+	}
+}
+
 /**
  *  txgbe_setup_mac_link_multispeed_fiber - Set MAC link speed
  *  @hw: pointer to hardware structure
@@ -824,12 +1111,16 @@ s32 txgbe_setup_sfp_modules(struct txgbe_hw *hw)
  **/
 s32 txgbe_init_ops_pf(struct txgbe_hw *hw)
 {
+	struct txgbe_bus_info *bus = &hw->bus;
 	struct txgbe_mac_info *mac = &hw->mac;
 	struct txgbe_phy_info *phy = &hw->phy;
 	struct txgbe_rom_info *rom = &hw->rom;
 
 	DEBUGFUNC("txgbe_init_ops_pf");
 
+	/* BUS */
+	bus->set_lan_id = txgbe_set_lan_id_multi_port;
+
 	/* PHY */
 	phy->get_media_type = txgbe_get_media_type_raptor;
 	phy->identify = txgbe_identify_phy;
@@ -849,13 +1140,21 @@ s32 txgbe_init_ops_pf(struct txgbe_hw *hw)
 	/* MAC */
 	mac->init_hw = txgbe_init_hw;
 	mac->start_hw = txgbe_start_hw_raptor;
+	mac->clear_hw_cntrs = txgbe_clear_hw_cntrs;
+	mac->enable_rx_dma = txgbe_enable_rx_dma_raptor;
 	mac->stop_hw = txgbe_stop_hw;
 	mac->reset_hw = txgbe_reset_hw;
 
+	mac->disable_sec_rx_path = txgbe_disable_sec_rx_path;
+	mac->enable_sec_rx_path = txgbe_enable_sec_rx_path;
+	mac->disable_sec_tx_path = txgbe_disable_sec_tx_path;
+	mac->enable_sec_tx_path = txgbe_enable_sec_tx_path;
 	mac->get_device_caps = txgbe_get_device_caps;
 	mac->autoc_read = txgbe_autoc_read;
 	mac->autoc_write = txgbe_autoc_write;
 
+	mac->enable_rx = txgbe_enable_rx;
+	mac->disable_rx = txgbe_disable_rx;
 	/* Link */
 	mac->get_link_capabilities = txgbe_get_link_capabilities_raptor;
 	mac->check_link = txgbe_check_mac_link;
@@ -873,6 +1172,8 @@ s32 txgbe_init_ops_pf(struct txgbe_hw *hw)
 	rom->validate_checksum = txgbe_validate_eeprom_checksum;
 	rom->update_checksum = txgbe_update_eeprom_checksum;
 	rom->calc_checksum = txgbe_calc_eeprom_checksum;
+	mac->max_rx_queues	= TXGBE_RAPTOR_MAX_RX_QUEUES;
+	mac->max_tx_queues	= TXGBE_RAPTOR_MAX_TX_QUEUES;
 
 	return 0;
 }
@@ -1456,7 +1757,63 @@ txgbe_check_flash_load(struct txgbe_hw *hw, u32 check_bit)
 static void
 txgbe_reset_misc(struct txgbe_hw *hw)
 {
-	RTE_SET_USED(hw);
+	int i;
+	u32 value;
+
+	wr32(hw, TXGBE_ISBADDRL, hw->isb_dma & 0x00000000FFFFFFFF);
+	wr32(hw, TXGBE_ISBADDRH, hw->isb_dma >> 32);
+
+	value = rd32_epcs(hw, SR_XS_PCS_CTRL2);
+	if ((value & 0x3) != SR_PCS_CTRL2_TYPE_SEL_X) {
+		hw->link_status = TXGBE_LINK_STATUS_NONE;
+	}
+
+	/* receive packets with size > 2048 */
+	wr32m(hw, TXGBE_MACRXCFG,
+		TXGBE_MACRXCFG_JUMBO, TXGBE_MACRXCFG_JUMBO);
+
+	wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
+		TXGBE_FRMSZ_MAX(TXGBE_FRAME_SIZE_DFT));
+
+	/* clear counters on read */
+	wr32m(hw, TXGBE_MACCNTCTL,
+		TXGBE_MACCNTCTL_RC, TXGBE_MACCNTCTL_RC);
+
+	wr32m(hw, TXGBE_RXFCCFG,
+		TXGBE_RXFCCFG_FC, TXGBE_RXFCCFG_FC);
+	wr32m(hw, TXGBE_TXFCCFG,
+		TXGBE_TXFCCFG_FC, TXGBE_TXFCCFG_FC);
+
+	wr32m(hw, TXGBE_MACRXFLT,
+		TXGBE_MACRXFLT_PROMISC, TXGBE_MACRXFLT_PROMISC);
+
+	wr32m(hw, TXGBE_RSTSTAT,
+		TXGBE_RSTSTAT_TMRINIT_MASK, TXGBE_RSTSTAT_TMRINIT(30));
+
+	/* errata 4: initialize mng flex tbl and wakeup flex tbl */
+	wr32(hw, TXGBE_MNGFLEXSEL, 0);
+	for (i = 0; i < 16; i++) {
+		wr32(hw, TXGBE_MNGFLEXDWL(i), 0);
+		wr32(hw, TXGBE_MNGFLEXDWH(i), 0);
+		wr32(hw, TXGBE_MNGFLEXMSK(i), 0);
+	}
+	wr32(hw, TXGBE_LANFLEXSEL, 0);
+	for (i = 0; i < 16; i++) {
+		wr32(hw, TXGBE_LANFLEXDWL(i), 0);
+		wr32(hw, TXGBE_LANFLEXDWH(i), 0);
+		wr32(hw, TXGBE_LANFLEXMSK(i), 0);
+	}
+
+	/* set pause frame dst mac addr */
+	wr32(hw, TXGBE_RXPBPFCDMACL, 0xC2000001);
+	wr32(hw, TXGBE_RXPBPFCDMACH, 0x0180);
+
+	/* enable mac transmitter */
+	wr32m(hw, TXGBE_MACTXCFG, TXGBE_MACTXCFG_TE, TXGBE_MACTXCFG_TE);
+
+	for (i = 0; i < 4; i++) {
+		wr32m(hw, TXGBE_IVAR(i), 0x80808080, 0);
+	}
 }
 
 /**
@@ -1620,6 +1977,36 @@ s32 txgbe_start_hw_raptor(struct txgbe_hw *hw)
 	return err;
 }
 
+/**
+ *  txgbe_enable_rx_dma_raptor - Enable the Rx DMA unit
+ *  @hw: pointer to hardware structure
+ *  @regval: register value to write to RXCTRL
+ *
+ *  Enables the Rx DMA unit
+ **/
+s32 txgbe_enable_rx_dma_raptor(struct txgbe_hw *hw, u32 regval)
+{
+
+	DEBUGFUNC("txgbe_enable_rx_dma_raptor");
+
+	/*
+	 * Workaround silicon errata when enabling the Rx datapath.
+	 * If traffic is incoming before we enable the Rx unit, it could hang
+	 * the Rx DMA unit.  Therefore, make sure the security engine is
+	 * completely disabled prior to enabling the Rx unit.
+	 */
+
+	hw->mac.disable_sec_rx_path(hw);
+
+	if (regval & TXGBE_PBRXCTL_ENA)
+		txgbe_enable_rx(hw);
+	else
+		txgbe_disable_rx(hw);
+
+	hw->mac.enable_sec_rx_path(hw);
+
+	return 0;
+}
 
 /**
  *  txgbe_verify_lesm_fw_enabled_raptor - Checks LESM FW module state.
diff --git a/drivers/net/txgbe/base/txgbe_hw.h b/drivers/net/txgbe/base/txgbe_hw.h
index a597383b8..86b616d38 100644
--- a/drivers/net/txgbe/base/txgbe_hw.h
+++ b/drivers/net/txgbe/base/txgbe_hw.h
@@ -12,12 +12,19 @@ s32 txgbe_start_hw(struct txgbe_hw *hw);
 s32 txgbe_stop_hw(struct txgbe_hw *hw);
 s32 txgbe_start_hw_gen2(struct txgbe_hw *hw);
 s32 txgbe_start_hw_raptor(struct txgbe_hw *hw);
+s32 txgbe_clear_hw_cntrs(struct txgbe_hw *hw);
+
+void txgbe_set_lan_id_multi_port(struct txgbe_hw *hw);
 
 s32 txgbe_led_on(struct txgbe_hw *hw, u32 index);
 s32 txgbe_led_off(struct txgbe_hw *hw, u32 index);
 
 s32 txgbe_set_rar(struct txgbe_hw *hw, u32 index, u8 *addr, u32 vmdq,
 			  u32 enable_addr);
+s32 txgbe_disable_sec_rx_path(struct txgbe_hw *hw);
+s32 txgbe_enable_sec_rx_path(struct txgbe_hw *hw);
+s32 txgbe_disable_sec_tx_path(struct txgbe_hw *hw);
+s32 txgbe_enable_sec_tx_path(struct txgbe_hw *hw);
 
 s32 txgbe_validate_mac_addr(u8 *mac_addr);
 
@@ -30,6 +37,8 @@ void txgbe_clear_tx_pending(struct txgbe_hw *hw);
 
 extern s32 txgbe_reset_pipeline_raptor(struct txgbe_hw *hw);
 
+void txgbe_disable_rx(struct txgbe_hw *hw);
+void txgbe_enable_rx(struct txgbe_hw *hw);
 s32 txgbe_setup_mac_link_multispeed_fiber(struct txgbe_hw *hw,
 					  u32 speed,
 					  bool autoneg_wait_to_complete);
diff --git a/drivers/net/txgbe/base/txgbe_type.h b/drivers/net/txgbe/base/txgbe_type.h
index f9a18d581..35a8ed3eb 100644
--- a/drivers/net/txgbe/base/txgbe_type.h
+++ b/drivers/net/txgbe/base/txgbe_type.h
@@ -509,6 +509,7 @@ struct txgbe_mac_info {
 	bool orig_link_settings_stored;
 	bool autotry_restart;
 	u8 flags;
+	bool set_lben;
 	u32  max_link_up_time;
 };
 
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index 51554844e..c43d5b56f 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -248,6 +248,57 @@ txgbe_disable_intr(struct txgbe_hw *hw)
 	txgbe_flush(hw);
 }
 
+static int
+txgbe_dev_queue_stats_mapping_set(struct rte_eth_dev *eth_dev,
+				  uint16_t queue_id,
+				  uint8_t stat_idx,
+				  uint8_t is_rx)
+{
+	struct txgbe_hw *hw = TXGBE_DEV_HW(eth_dev);
+	struct txgbe_stat_mappings *stat_mappings =
+		TXGBE_DEV_STAT_MAPPINGS(eth_dev);
+	uint32_t qsmr_mask = 0;
+	uint32_t clearing_mask = QMAP_FIELD_RESERVED_BITS_MASK;
+	uint32_t q_map;
+	uint8_t n, offset;
+
+	if (hw->mac.type != txgbe_mac_raptor)
+		return -ENOSYS;
+
+	PMD_INIT_LOG(DEBUG, "Setting port %d, %s queue_id %d to stat index %d",
+		     (int)(eth_dev->data->port_id), is_rx ? "RX" : "TX",
+		     queue_id, stat_idx);
+
+	n = (uint8_t)(queue_id / NB_QMAP_FIELDS_PER_QSM_REG);
+	if (n >= TXGBE_NB_STAT_MAPPING) {
+		PMD_INIT_LOG(ERR, "Nb of stat mapping registers exceeded");
+		return -EIO;
+	}
+	offset = (uint8_t)(queue_id % NB_QMAP_FIELDS_PER_QSM_REG);
+
+	/* Now clear any previous stat_idx set */
+	clearing_mask <<= (QSM_REG_NB_BITS_PER_QMAP_FIELD * offset);
+	if (!is_rx)
+		stat_mappings->tqsm[n] &= ~clearing_mask;
+	else
+		stat_mappings->rqsm[n] &= ~clearing_mask;
+
+	q_map = (uint32_t)stat_idx;
+	q_map &= QMAP_FIELD_RESERVED_BITS_MASK;
+	qsmr_mask = q_map << (QSM_REG_NB_BITS_PER_QMAP_FIELD * offset);
+	if (!is_rx)
+		stat_mappings->tqsm[n] |= qsmr_mask;
+	else
+		stat_mappings->rqsm[n] |= qsmr_mask;
+
+	PMD_INIT_LOG(DEBUG, "Set port %d, %s queue_id %d to stat index %d",
+		     (int)(eth_dev->data->port_id), is_rx ? "RX" : "TX",
+		     queue_id, stat_idx);
+	PMD_INIT_LOG(DEBUG, "%s[%d] = 0x%08x", is_rx ? "RQSMR" : "TQSM", n,
+		     is_rx ? stat_mappings->rqsm[n] : stat_mappings->tqsm[n]);
+	return 0;
+}
+
 static int
 eth_txgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
 {
@@ -1958,6 +2009,7 @@ static const struct eth_dev_ops txgbe_eth_dev_ops = {
 	.xstats_reset               = txgbe_dev_xstats_reset,
 	.xstats_get_names           = txgbe_dev_xstats_get_names,
 	.xstats_get_names_by_id     = txgbe_dev_xstats_get_names_by_id,
+	.queue_stats_mapping_set    = txgbe_dev_queue_stats_mapping_set,
 	.dev_supported_ptypes_get   = txgbe_dev_supported_ptypes_get,
 	.rx_queue_start	            = txgbe_dev_rx_queue_start,
 	.rx_queue_stop              = txgbe_dev_rx_queue_stop,
-- 
2.18.4




^ permalink raw reply	[flat|nested] 49+ messages in thread
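The queue stats mapping in `txgbe_dev_queue_stats_mapping_set` above packs several per-queue stat indexes into one 32-bit shadow register: it clears the target field with a shifted mask, then ORs in the new index. A sketch of that clear-then-or arithmetic, using assumed field sizes (the `DEMO_*` constants stand in for `QSM_REG_NB_BITS_PER_QMAP_FIELD`, `NB_QMAP_FIELDS_PER_QSM_REG`, and `QMAP_FIELD_RESERVED_BITS_MASK`, whose real values live in the driver headers):

```c
#include <assert.h>
#include <stdint.h>

/* Assumed layout: four 8-bit fields per register, low nibble significant. */
#define DEMO_QMAP_BITS_PER_FIELD   8	/* bits per queue-map field */
#define DEMO_QMAP_FIELDS_PER_REG   4	/* fields packed per register */
#define DEMO_QMAP_RESERVED_MASK 0x0F	/* valid bits of a stat index */

/* Pack stat_idx for queue_id into the shadow register value, clearing the
 * queue's old field first -- the same sequence the driver uses. */
static uint32_t
demo_map_queue_stat(uint32_t reg, uint16_t queue_id, uint8_t stat_idx)
{
	uint8_t offset = queue_id % DEMO_QMAP_FIELDS_PER_REG;
	uint32_t clearing = (uint32_t)DEMO_QMAP_RESERVED_MASK
			    << (DEMO_QMAP_BITS_PER_FIELD * offset);
	uint32_t q_map = (uint32_t)stat_idx & DEMO_QMAP_RESERVED_MASK;

	reg &= ~clearing;	/* drop any previous mapping for this queue */
	reg |= q_map << (DEMO_QMAP_BITS_PER_FIELD * offset);
	return reg;
}
```

The register index itself is `queue_id / DEMO_QMAP_FIELDS_PER_REG`, which is why the driver rejects queue ids whose register number exceeds `TXGBE_NB_STAT_MAPPING`.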

* [dpdk-dev] [PATCH v1 30/42] net/txgbe: add device info get
  2020-09-01 11:50 [dpdk-dev] [PATCH v1 01/42] net/txgbe: add build and doc infrastructure Jiawen Wu
                   ` (27 preceding siblings ...)
  2020-09-01 11:51 ` [dpdk-dev] [PATCH v1 29/42] net/txgbe: add queue stats mapping and enable RX DMA unit Jiawen Wu
@ 2020-09-01 11:51 ` Jiawen Wu
  2020-09-09 17:54   ` Ferruh Yigit
  2020-09-01 11:51 ` [dpdk-dev] [PATCH v1 31/42] net/txgbe: add MAC address operations Jiawen Wu
                   ` (12 subsequent siblings)
  41 siblings, 1 reply; 49+ messages in thread
From: Jiawen Wu @ 2020-09-01 11:51 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Add device information get operation.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/txgbe_ethdev.c | 78 +++++++++++++++++++++++++
 drivers/net/txgbe/txgbe_ethdev.h | 25 ++++++++
 drivers/net/txgbe/txgbe_rxtx.c   | 99 ++++++++++++++++++++++++++++++++
 drivers/net/txgbe/txgbe_rxtx.h   |  4 ++
 4 files changed, 206 insertions(+)

diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index c43d5b56f..682519726 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -62,6 +62,20 @@ static const struct rte_pci_id pci_id_txgbe_map[] = {
 	{ .vendor_id = 0, /* sentinel */ },
 };
 
+static const struct rte_eth_desc_lim rx_desc_lim = {
+	.nb_max = TXGBE_RING_DESC_MAX,
+	.nb_min = TXGBE_RING_DESC_MIN,
+	.nb_align = TXGBE_RXD_ALIGN,
+};
+
+static const struct rte_eth_desc_lim tx_desc_lim = {
+	.nb_max = TXGBE_RING_DESC_MAX,
+	.nb_min = TXGBE_RING_DESC_MIN,
+	.nb_align = TXGBE_TXD_ALIGN,
+	.nb_seg_max = TXGBE_TX_MAX_SEG,
+	.nb_mtu_seg_max = TXGBE_TX_MAX_SEG,
+};
+
 static const struct eth_dev_ops txgbe_eth_dev_ops;
 
 #define HW_XSTAT(m) {#m, offsetof(struct txgbe_hw_stats, m)}
@@ -1479,6 +1493,69 @@ txgbe_dev_xstats_reset(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int
+txgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
+{
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+
+	dev_info->max_rx_queues = (uint16_t)hw->mac.max_rx_queues;
+	dev_info->max_tx_queues = (uint16_t)hw->mac.max_tx_queues;
+	dev_info->min_rx_bufsize = 1024;
+	dev_info->max_rx_pktlen = 15872;
+	dev_info->max_mac_addrs = hw->mac.num_rar_entries;
+	dev_info->max_hash_mac_addrs = TXGBE_VMDQ_NUM_UC_MAC;
+	dev_info->max_vfs = pci_dev->max_vfs;
+	dev_info->max_vmdq_pools = ETH_64_POOLS;
+	dev_info->vmdq_queue_num = dev_info->max_rx_queues;
+	dev_info->rx_queue_offload_capa = txgbe_get_rx_queue_offloads(dev);
+	dev_info->rx_offload_capa = (txgbe_get_rx_port_offloads(dev) |
+				     dev_info->rx_queue_offload_capa);
+	dev_info->tx_queue_offload_capa = txgbe_get_tx_queue_offloads(dev);
+	dev_info->tx_offload_capa = txgbe_get_tx_port_offloads(dev);
+
+	dev_info->default_rxconf = (struct rte_eth_rxconf) {
+		.rx_thresh = {
+			.pthresh = TXGBE_DEFAULT_RX_PTHRESH,
+			.hthresh = TXGBE_DEFAULT_RX_HTHRESH,
+			.wthresh = TXGBE_DEFAULT_RX_WTHRESH,
+		},
+		.rx_free_thresh = TXGBE_DEFAULT_RX_FREE_THRESH,
+		.rx_drop_en = 0,
+		.offloads = 0,
+	};
+
+	dev_info->default_txconf = (struct rte_eth_txconf) {
+		.tx_thresh = {
+			.pthresh = TXGBE_DEFAULT_TX_PTHRESH,
+			.hthresh = TXGBE_DEFAULT_TX_HTHRESH,
+			.wthresh = TXGBE_DEFAULT_TX_WTHRESH,
+		},
+		.tx_free_thresh = TXGBE_DEFAULT_TX_FREE_THRESH,
+		.offloads = 0,
+	};
+
+	dev_info->rx_desc_lim = rx_desc_lim;
+	dev_info->tx_desc_lim = tx_desc_lim;
+
+	dev_info->hash_key_size = TXGBE_HKEY_MAX_INDEX * sizeof(uint32_t);
+	dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+	dev_info->flow_type_rss_offloads = TXGBE_RSS_OFFLOAD_ALL;
+
+	dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G;
+	dev_info->speed_capa |= ETH_LINK_SPEED_100M;
+
+	/* Driver-preferred Rx/Tx parameters */
+	dev_info->default_rxportconf.burst_size = 32;
+	dev_info->default_txportconf.burst_size = 32;
+	dev_info->default_rxportconf.nb_queues = 1;
+	dev_info->default_txportconf.nb_queues = 1;
+	dev_info->default_rxportconf.ring_size = 256;
+	dev_info->default_txportconf.ring_size = 256;
+
+	return 0;
+}
+
 const uint32_t *
 txgbe_dev_supported_ptypes_get(struct rte_eth_dev *dev)
 {
@@ -2010,6 +2087,7 @@ static const struct eth_dev_ops txgbe_eth_dev_ops = {
 	.xstats_get_names           = txgbe_dev_xstats_get_names,
 	.xstats_get_names_by_id     = txgbe_dev_xstats_get_names_by_id,
 	.queue_stats_mapping_set    = txgbe_dev_queue_stats_mapping_set,
+	.dev_infos_get              = txgbe_dev_info_get,
 	.dev_supported_ptypes_get   = txgbe_dev_supported_ptypes_get,
 	.rx_queue_start	            = txgbe_dev_rx_queue_start,
 	.rx_queue_stop              = txgbe_dev_rx_queue_stop,
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index ffff4ee11..61f4aa772 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -25,9 +25,21 @@
  * FreeBSD driver.
  */
 #define TXGBE_VLAN_TAG_SIZE 4
+#define TXGBE_HKEY_MAX_INDEX 10
 
 #define TXGBE_QUEUE_ITR_INTERVAL_DEFAULT	500 /* 500us */
 
+#define TXGBE_RSS_OFFLOAD_ALL ( \
+	ETH_RSS_IPV4 | \
+	ETH_RSS_NONFRAG_IPV4_TCP | \
+	ETH_RSS_NONFRAG_IPV4_UDP | \
+	ETH_RSS_IPV6 | \
+	ETH_RSS_NONFRAG_IPV6_TCP | \
+	ETH_RSS_NONFRAG_IPV6_UDP | \
+	ETH_RSS_IPV6_EX | \
+	ETH_RSS_IPV6_TCP_EX | \
+	ETH_RSS_IPV6_UDP_EX)
+
 #define TXGBE_MISC_VEC_ID               RTE_INTR_VEC_ZERO_OFFSET
 #define TXGBE_RX_VEC_START              RTE_INTR_VEC_RXTX_OFFSET
 
@@ -174,6 +186,19 @@ void txgbe_pf_mbx_process(struct rte_eth_dev *eth_dev);
 #define TXGBE_LINK_UP_CHECK_TIMEOUT   1000 /* ms */
 #define TXGBE_VMDQ_NUM_UC_MAC         4096 /* Maximum nb. of UC MAC addr. */
 
+/*
+ *  Default values for RX/TX configuration
+ */
+#define TXGBE_DEFAULT_RX_FREE_THRESH  32
+#define TXGBE_DEFAULT_RX_PTHRESH      8
+#define TXGBE_DEFAULT_RX_HTHRESH      8
+#define TXGBE_DEFAULT_RX_WTHRESH      0
+
+#define TXGBE_DEFAULT_TX_FREE_THRESH  32
+#define TXGBE_DEFAULT_TX_PTHRESH      32
+#define TXGBE_DEFAULT_TX_HTHRESH      0
+#define TXGBE_DEFAULT_TX_WTHRESH      0
+
 /* store statistics names and their offsets in the stats structure */
 struct rte_txgbe_xstats_name_off {
 	char name[RTE_ETH_XSTATS_NAME_SIZE];
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index ef3d63b01..f50bc82ce 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -77,6 +77,19 @@ static const u64 TXGBE_TX_OFFLOAD_MASK = (
 #define rte_txgbe_prefetch(p)   do {} while (0)
 #endif
 
+static int
+txgbe_is_vf(struct rte_eth_dev *dev)
+{
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+
+	switch (hw->mac.type) {
+	case txgbe_mac_raptor_vf:
+		return 1;
+	default:
+		return 0;
+	}
+}
+
 /*********************************************************************
  *
  *  TX functions
@@ -1943,6 +1956,45 @@ txgbe_set_tx_function(struct rte_eth_dev *dev, struct txgbe_tx_queue *txq)
 	}
 }
 
+uint64_t
+txgbe_get_tx_queue_offloads(struct rte_eth_dev *dev)
+{
+	RTE_SET_USED(dev);
+
+	return 0;
+}
+
+uint64_t
+txgbe_get_tx_port_offloads(struct rte_eth_dev *dev)
+{
+	uint64_t tx_offload_capa;
+
+	tx_offload_capa =
+		DEV_TX_OFFLOAD_VLAN_INSERT |
+		DEV_TX_OFFLOAD_IPV4_CKSUM  |
+		DEV_TX_OFFLOAD_UDP_CKSUM   |
+		DEV_TX_OFFLOAD_TCP_CKSUM   |
+		DEV_TX_OFFLOAD_SCTP_CKSUM  |
+		DEV_TX_OFFLOAD_TCP_TSO     |
+		DEV_TX_OFFLOAD_UDP_TSO	   |
+		DEV_TX_OFFLOAD_UDP_TNL_TSO	|
+		DEV_TX_OFFLOAD_IP_TNL_TSO	|
+		DEV_TX_OFFLOAD_VXLAN_TNL_TSO	|
+		DEV_TX_OFFLOAD_GRE_TNL_TSO	|
+		DEV_TX_OFFLOAD_IPIP_TNL_TSO	|
+		DEV_TX_OFFLOAD_GENEVE_TNL_TSO	|
+		DEV_TX_OFFLOAD_MULTI_SEGS;
+
+	if (!txgbe_is_vf(dev))
+		tx_offload_capa |= DEV_TX_OFFLOAD_QINQ_INSERT;
+
+	tx_offload_capa |= DEV_TX_OFFLOAD_MACSEC_INSERT;
+
+	tx_offload_capa |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+
+	return tx_offload_capa;
+}
+
 int __rte_cold
 txgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
 			 uint16_t queue_idx,
@@ -2235,6 +2287,53 @@ txgbe_reset_rx_queue(struct txgbe_adapter *adapter, struct txgbe_rx_queue *rxq)
 
 }
 
+uint64_t
+txgbe_get_rx_queue_offloads(struct rte_eth_dev *dev __rte_unused)
+{
+	uint64_t offloads = 0;
+
+	offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+
+	return offloads;
+}
+
+uint64_t
+txgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
+{
+	uint64_t offloads;
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+	struct rte_eth_dev_sriov *sriov = &RTE_ETH_DEV_SRIOV(dev);
+
+	offloads = DEV_RX_OFFLOAD_IPV4_CKSUM  |
+		   DEV_RX_OFFLOAD_UDP_CKSUM   |
+		   DEV_RX_OFFLOAD_TCP_CKSUM   |
+		   DEV_RX_OFFLOAD_KEEP_CRC    |
+		   DEV_RX_OFFLOAD_JUMBO_FRAME |
+		   DEV_RX_OFFLOAD_VLAN_FILTER |
+		   DEV_RX_OFFLOAD_RSS_HASH |
+		   DEV_RX_OFFLOAD_SCATTER;
+
+	if (!txgbe_is_vf(dev))
+		offloads |= (DEV_RX_OFFLOAD_VLAN_FILTER |
+			     DEV_RX_OFFLOAD_QINQ_STRIP |
+			     DEV_RX_OFFLOAD_VLAN_EXTEND);
+
+	/*
+	 * RSC is only supported by PF devices in a non-SR-IOV
+	 * mode.
+	 */
+	if ((hw->mac.type == txgbe_mac_raptor) &&
+	    !sriov->active)
+		offloads |= DEV_RX_OFFLOAD_TCP_LRO;
+
+	if (hw->mac.type == txgbe_mac_raptor)
+		offloads |= DEV_RX_OFFLOAD_MACSEC_STRIP;
+
+	offloads |= DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+
+	return offloads;
+}
+
 int __rte_cold
 txgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
 			 uint16_t queue_idx,
diff --git a/drivers/net/txgbe/txgbe_rxtx.h b/drivers/net/txgbe/txgbe_rxtx.h
index 296e34475..958ca2e97 100644
--- a/drivers/net/txgbe/txgbe_rxtx.h
+++ b/drivers/net/txgbe/txgbe_rxtx.h
@@ -403,5 +403,9 @@ void txgbe_set_tx_function(struct rte_eth_dev *dev, struct txgbe_tx_queue *txq);
 
 void txgbe_set_rx_function(struct rte_eth_dev *dev);
 
+uint64_t txgbe_get_tx_port_offloads(struct rte_eth_dev *dev);
+uint64_t txgbe_get_rx_queue_offloads(struct rte_eth_dev *dev);
+uint64_t txgbe_get_rx_port_offloads(struct rte_eth_dev *dev);
+uint64_t txgbe_get_tx_queue_offloads(struct rte_eth_dev *dev);
 
 #endif /* _TXGBE_RXTX_H_ */
-- 
2.18.4




^ permalink raw reply	[flat|nested] 49+ messages in thread
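The device-info patch above composes offload capability masks conditionally: a base set of flags, PF-only additions gated on `txgbe_is_vf()`, and LRO only when SR-IOV is inactive. A hedged sketch of that composition pattern, with made-up flag values (the real `DEV_RX_OFFLOAD_*` constants come from `rte_ethdev.h`):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical offload flag bits, for illustration only. */
#define DEMO_RX_OFFLOAD_IPV4_CKSUM  (1ULL << 0)
#define DEMO_RX_OFFLOAD_VLAN_STRIP  (1ULL << 1)
#define DEMO_RX_OFFLOAD_TCP_LRO     (1ULL << 2)
#define DEMO_RX_OFFLOAD_QINQ_STRIP  (1ULL << 3)

/* Build the port-level Rx capability mask the way the driver does:
 * a base set, plus PF-only flags, plus LRO when SR-IOV is inactive. */
static uint64_t
demo_rx_port_offloads(int is_vf, int sriov_active)
{
	uint64_t offloads = DEMO_RX_OFFLOAD_IPV4_CKSUM;

	if (!is_vf)
		offloads |= DEMO_RX_OFFLOAD_QINQ_STRIP;
	if (!sriov_active)
		offloads |= DEMO_RX_OFFLOAD_TCP_LRO;
	return offloads;
}
```

Note how `txgbe_dev_info_get` then reports `rx_offload_capa` as the port mask ORed with the per-queue mask: ethdev expects every per-queue capability to also appear in the port-level capabilities.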

* [dpdk-dev] [PATCH v1 31/42] net/txgbe: add MAC address operations
  2020-09-01 11:50 [dpdk-dev] [PATCH v1 01/42] net/txgbe: add build and doc infrastructure Jiawen Wu
                   ` (28 preceding siblings ...)
  2020-09-01 11:51 ` [dpdk-dev] [PATCH v1 30/42] net/txgbe: add device info get Jiawen Wu
@ 2020-09-01 11:51 ` Jiawen Wu
  2020-09-01 11:51 ` [dpdk-dev] [PATCH v1 32/42] net/txgbe: add FW version get operation Jiawen Wu
                   ` (11 subsequent siblings)
  41 siblings, 0 replies; 49+ messages in thread
From: Jiawen Wu @ 2020-09-01 11:51 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Add MAC address related operations.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/base/txgbe_eeprom.h |   2 +
 drivers/net/txgbe/base/txgbe_hw.c     | 434 ++++++++++++++++++++++++++
 drivers/net/txgbe/base/txgbe_hw.h     |  11 +
 drivers/net/txgbe/base/txgbe_type.h   |  14 +-
 drivers/net/txgbe/txgbe_ethdev.c      |  61 ++++
 drivers/net/txgbe/txgbe_ethdev.h      |   3 +
 6 files changed, 524 insertions(+), 1 deletion(-)

diff --git a/drivers/net/txgbe/base/txgbe_eeprom.h b/drivers/net/txgbe/base/txgbe_eeprom.h
index 21de7e9b5..44f555bbc 100644
--- a/drivers/net/txgbe/base/txgbe_eeprom.h
+++ b/drivers/net/txgbe/base/txgbe_eeprom.h
@@ -23,6 +23,8 @@
 #define TXGBE_EEPROM_VERSION_H          0x1E
 #define TXGBE_ISCSI_BOOT_CONFIG         0x07
 
+#define TXGBE_SAN_MAC_ADDR_PORT0_OFFSET		0x0
+#define TXGBE_SAN_MAC_ADDR_PORT1_OFFSET		0x3
 #define TXGBE_DEVICE_CAPS_ALLOW_ANY_SFP		0x1
 #define TXGBE_DEVICE_CAPS_NO_CROSSTALK_WR	(1 << 7)
 #define TXGBE_FW_LESM_PARAMETERS_PTR		0x2
diff --git a/drivers/net/txgbe/base/txgbe_hw.c b/drivers/net/txgbe/base/txgbe_hw.c
index 05f323a07..088fa0aab 100644
--- a/drivers/net/txgbe/base/txgbe_hw.c
+++ b/drivers/net/txgbe/base/txgbe_hw.c
@@ -11,11 +11,17 @@
 
 #define TXGBE_RAPTOR_MAX_TX_QUEUES 128
 #define TXGBE_RAPTOR_MAX_RX_QUEUES 128
+#define TXGBE_RAPTOR_RAR_ENTRIES   128
+#define TXGBE_RAPTOR_MC_TBL_SIZE   128
 
 STATIC s32 txgbe_setup_copper_link_raptor(struct txgbe_hw *hw,
 					 u32 speed,
 					 bool autoneg_wait_to_complete);
 
+STATIC s32 txgbe_mta_vector(struct txgbe_hw *hw, u8 *mc_addr);
+STATIC s32 txgbe_get_san_mac_addr_offset(struct txgbe_hw *hw,
+					 u16 *san_mac_offset);
+
 /**
  *  txgbe_start_hw - Prepare hardware for Tx/Rx
  *  @hw: pointer to hardware structure
@@ -230,6 +236,36 @@ s32 txgbe_clear_hw_cntrs(struct txgbe_hw *hw)
 	return 0;
 }
 
+/**
+ *  txgbe_get_mac_addr - Generic get MAC address
+ *  @hw: pointer to hardware structure
+ *  @mac_addr: Adapter MAC address
+ *
+ *  Reads the adapter's MAC address from the first Receive Address Register
+ *  (RAR0). A reset of the adapter must be performed prior to calling this
+ *  function so that the MAC address has been loaded from the EEPROM into RAR0.
+ **/
+s32 txgbe_get_mac_addr(struct txgbe_hw *hw, u8 *mac_addr)
+{
+	u32 rar_high;
+	u32 rar_low;
+	u16 i;
+
+	DEBUGFUNC("txgbe_get_mac_addr");
+
+	wr32(hw, TXGBE_ETHADDRIDX, 0);
+	rar_high = rd32(hw, TXGBE_ETHADDRH);
+	rar_low = rd32(hw, TXGBE_ETHADDRL);
+
+	for (i = 0; i < 2; i++)
+		mac_addr[i] = (u8)(rar_high >> (1 - i) * 8);
+
+	for (i = 0; i < 4; i++)
+		mac_addr[i + 2] = (u8)(rar_low >> (3 - i) * 8);
+
+	return 0;
+}
+
 /**
  *  txgbe_set_lan_id_multi_port_pcie - Set LAN id for PCIe multiple port devices
  *  @hw: pointer to the HW structure
@@ -381,6 +417,16 @@ s32 txgbe_validate_mac_addr(u8 *mac_addr)
 	return status;
 }
 
+/**
+ *  txgbe_set_rar - Set Rx address register
+ *  @hw: pointer to hardware structure
+ *  @index: Receive address register to write
+ *  @addr: Address to put into receive address register
+ *  @vmdq: VMDq "set" or "pool" index
+ *  @enable_addr: set flag that address is active
+ *
+ *  Puts an Ethernet address into a receive address register.
+ **/
 s32 txgbe_set_rar(struct txgbe_hw *hw, u32 index, u8 *addr, u32 vmdq,
 			  u32 enable_addr)
 {
@@ -427,6 +473,250 @@ s32 txgbe_set_rar(struct txgbe_hw *hw, u32 index, u8 *addr, u32 vmdq,
 	return 0;
 }
 
+/**
+ *  txgbe_clear_rar - Remove Rx address register
+ *  @hw: pointer to hardware structure
+ *  @index: Receive address register to write
+ *
+ *  Clears an Ethernet address from a receive address register.
+ **/
+s32 txgbe_clear_rar(struct txgbe_hw *hw, u32 index)
+{
+	u32 rar_high;
+	u32 rar_entries = hw->mac.num_rar_entries;
+
+	DEBUGFUNC("txgbe_clear_rar");
+
+	/* Make sure we are using a valid rar index range */
+	if (index >= rar_entries) {
+		DEBUGOUT("RAR index %d is out of range.\n", index);
+		return TXGBE_ERR_INVALID_ARGUMENT;
+	}
+
+	/*
+	 * Some parts put the VMDq setting in the extra RAH bits,
+	 * so save everything except the lower 16 bits that hold part
+	 * of the address and the address valid bit.
+	 */
+	wr32(hw, TXGBE_ETHADDRIDX, index);
+	rar_high = rd32(hw, TXGBE_ETHADDRH);
+	rar_high &= ~(TXGBE_ETHADDRH_AD_MASK | TXGBE_ETHADDRH_VLD);
+
+	wr32(hw, TXGBE_ETHADDRL, 0);
+	wr32(hw, TXGBE_ETHADDRH, rar_high);
+
+	/* clear VMDq pool/queue selection for this RAR */
+	hw->mac.clear_vmdq(hw, index, BIT_MASK32);
+
+	return 0;
+}
+
+/**
+ *  txgbe_init_rx_addrs - Initializes receive address filters.
+ *  @hw: pointer to hardware structure
+ *
+ *  Places the MAC address in receive address register 0 and clears the rest
+ *  of the receive address registers. Clears the multicast table. Assumes
+ *  the receiver is in reset when the routine is called.
+ **/
+s32 txgbe_init_rx_addrs(struct txgbe_hw *hw)
+{
+	u32 i;
+	u32 psrctl;
+	u32 rar_entries = hw->mac.num_rar_entries;
+
+	DEBUGFUNC("txgbe_init_rx_addrs");
+
+	/*
+	 * If the current mac address is valid, assume it is a software override
+	 * to the permanent address.
+	 * Otherwise, use the permanent address from the eeprom.
+	 */
+	if (txgbe_validate_mac_addr(hw->mac.addr) ==
+	    TXGBE_ERR_INVALID_MAC_ADDR) {
+		/* Get the MAC address from the RAR0 for later reference */
+		hw->mac.get_mac_addr(hw, hw->mac.addr);
+
+		DEBUGOUT(" Keeping Current RAR0 Addr =%.2X %.2X %.2X ",
+			  hw->mac.addr[0], hw->mac.addr[1],
+			  hw->mac.addr[2]);
+		DEBUGOUT("%.2X %.2X %.2X\n", hw->mac.addr[3],
+			  hw->mac.addr[4], hw->mac.addr[5]);
+	} else {
+		/* Setup the receive address. */
+		DEBUGOUT("Overriding MAC Address in RAR[0]\n");
+		DEBUGOUT(" New MAC Addr =%.2X %.2X %.2X ",
+			  hw->mac.addr[0], hw->mac.addr[1],
+			  hw->mac.addr[2]);
+		DEBUGOUT("%.2X %.2X %.2X\n", hw->mac.addr[3],
+			  hw->mac.addr[4], hw->mac.addr[5]);
+
+		hw->mac.set_rar(hw, 0, hw->mac.addr, 0, true);
+	}
+
+	/* clear VMDq pool/queue selection for RAR 0 */
+	hw->mac.clear_vmdq(hw, 0, BIT_MASK32);
+
+	hw->addr_ctrl.overflow_promisc = 0;
+
+	hw->addr_ctrl.rar_used_count = 1;
+
+	/* Zero out the other receive addresses. */
+	DEBUGOUT("Clearing RAR[1-%d]\n", rar_entries - 1);
+	for (i = 1; i < rar_entries; i++) {
+		wr32(hw, TXGBE_ETHADDRIDX, i);
+		wr32(hw, TXGBE_ETHADDRL, 0);
+		wr32(hw, TXGBE_ETHADDRH, 0);
+	}
+
+	/* Clear the MTA */
+	hw->addr_ctrl.mta_in_use = 0;
+	psrctl = rd32(hw, TXGBE_PSRCTL);
+	psrctl &= ~(TXGBE_PSRCTL_ADHF12_MASK | TXGBE_PSRCTL_MCHFENA);
+	psrctl |= TXGBE_PSRCTL_ADHF12(hw->mac.mc_filter_type);
+	wr32(hw, TXGBE_PSRCTL, psrctl);
+
+	DEBUGOUT(" Clearing MTA\n");
+	for (i = 0; i < hw->mac.mcft_size; i++)
+		wr32(hw, TXGBE_MCADDRTBL(i), 0);
+
+	txgbe_init_uta_tables(hw);
+
+	return 0;
+}
+
+/**
+ *  txgbe_mta_vector - Determines bit-vector in multicast table to set
+ *  @hw: pointer to hardware structure
+ *  @mc_addr: the multicast address
+ *
+ *  Extracts 12 bits from a multicast address to determine which
+ *  bit-vector to set in the multicast table. The hardware uses 12 bits of
+ *  each incoming Rx multicast address to determine the bit-vector to check
+ *  in the MTA. Which of the 4 combinations of 12 bits the hardware uses is
+ *  selected by the MO field of PSRCTL. The MO field is set during
+ *  initialization to mc_filter_type.
+ **/
+STATIC s32 txgbe_mta_vector(struct txgbe_hw *hw, u8 *mc_addr)
+{
+	u32 vector = 0;
+
+	DEBUGFUNC("txgbe_mta_vector");
+
+	switch (hw->mac.mc_filter_type) {
+	case 0:   /* use bits [47:36] of the address */
+		vector = ((mc_addr[4] >> 4) | (((u16)mc_addr[5]) << 4));
+		break;
+	case 1:   /* use bits [46:35] of the address */
+		vector = ((mc_addr[4] >> 3) | (((u16)mc_addr[5]) << 5));
+		break;
+	case 2:   /* use bits [45:34] of the address */
+		vector = ((mc_addr[4] >> 2) | (((u16)mc_addr[5]) << 6));
+		break;
+	case 3:   /* use bits [43:32] of the address */
+		vector = ((mc_addr[4]) | (((u16)mc_addr[5]) << 8));
+		break;
+	default:  /* Invalid mc_filter_type */
+		DEBUGOUT("MC filter type param set incorrectly\n");
+		ASSERT(0);
+		break;
+	}
+
+	/* vector can only be 12-bits or boundary will be exceeded */
+	vector &= 0xFFF;
+	return vector;
+}
+
+/**
+ *  txgbe_set_mta - Set bit-vector in multicast table
+ *  @hw: pointer to hardware structure
+ *  @mc_addr: Multicast address
+ *
+ *  Sets the bit-vector in the multicast table.
+ **/
+void txgbe_set_mta(struct txgbe_hw *hw, u8 *mc_addr)
+{
+	u32 vector;
+	u32 vector_bit;
+	u32 vector_reg;
+
+	DEBUGFUNC("txgbe_set_mta");
+
+	hw->addr_ctrl.mta_in_use++;
+
+	vector = txgbe_mta_vector(hw, mc_addr);
+	DEBUGOUT(" bit-vector = 0x%03X\n", vector);
+
+	/*
+	 * The MTA is a register array of 128 32-bit registers. It is treated
+	 * like an array of 4096 bits.  We want to set bit
+	 * BitArray[vector_value]. So we figure out what register the bit is
+	 * in, read it, OR in the new bit, then write back the new value.  The
+	 * register is determined by the upper 7 bits of the vector value and
+	 * the bit within that register is determined by the lower 5 bits of
+	 * the value.
+	 */
+	vector_reg = (vector >> 5) & 0x7F;
+	vector_bit = vector & 0x1F;
+	hw->mac.mta_shadow[vector_reg] |= (1 << vector_bit);
+}
+
+/**
+ *  txgbe_update_mc_addr_list - Updates MAC list of multicast addresses
+ *  @hw: pointer to hardware structure
+ *  @mc_addr_list: the list of new multicast addresses
+ *  @mc_addr_count: number of addresses
+ *  @next: iterator function to walk the multicast address list
+ *  @clear: flag, when set clears the table beforehand
+ *
+ *  When the clear flag is set, the given list replaces any existing list.
+ *  Hashes the given addresses into the multicast table.
+ **/
+s32 txgbe_update_mc_addr_list(struct txgbe_hw *hw, u8 *mc_addr_list,
+				      u32 mc_addr_count, txgbe_mc_addr_itr next,
+				      bool clear)
+{
+	u32 i;
+	u32 vmdq;
+
+	DEBUGFUNC("txgbe_update_mc_addr_list");
+
+	/*
+	 * Set the new number of MC addresses that we are being requested to
+	 * use.
+	 */
+	hw->addr_ctrl.num_mc_addrs = mc_addr_count;
+	hw->addr_ctrl.mta_in_use = 0;
+
+	/* Clear mta_shadow */
+	if (clear) {
+		DEBUGOUT(" Clearing MTA\n");
+		memset(&hw->mac.mta_shadow, 0, sizeof(hw->mac.mta_shadow));
+	}
+
+	/* Update mta_shadow */
+	for (i = 0; i < mc_addr_count; i++) {
+		DEBUGOUT(" Adding the multicast addresses:\n");
+		txgbe_set_mta(hw, next(hw, &mc_addr_list, &vmdq));
+	}
+
+	/* Enable mta */
+	for (i = 0; i < hw->mac.mcft_size; i++)
+		wr32a(hw, TXGBE_MCADDRTBL(0), i,
+				      hw->mac.mta_shadow[i]);
+
+	if (hw->addr_ctrl.mta_in_use > 0) {
+		u32 psrctl = rd32(hw, TXGBE_PSRCTL);
+		psrctl &= ~(TXGBE_PSRCTL_ADHF12_MASK | TXGBE_PSRCTL_MCHFENA);
+		psrctl |= TXGBE_PSRCTL_MCHFENA |
+			 TXGBE_PSRCTL_ADHF12(hw->mac.mc_filter_type);
+		wr32(hw, TXGBE_PSRCTL, psrctl);
+	}
+
+	DEBUGOUT("txgbe_update_mc_addr_list Complete\n");
+	return 0;
+}
+
 /**
  *  txgbe_disable_sec_rx_path - Stops the receive data path
  *  @hw: pointer to hardware structure
@@ -534,6 +824,139 @@ int txgbe_enable_sec_tx_path(struct txgbe_hw *hw)
 	return 0;
 }
 
+/**
+ *  txgbe_get_san_mac_addr_offset - Get SAN MAC address offset from the EEPROM
+ *  @hw: pointer to hardware structure
+ *  @san_mac_offset: SAN MAC address offset
+ *
+ *  This function reads the EEPROM location for the SAN MAC address
+ *  pointer, and returns the value at that location.  This is used in both
+ *  get and set mac_addr routines.
+ **/
+STATIC s32 txgbe_get_san_mac_addr_offset(struct txgbe_hw *hw,
+					 u16 *san_mac_offset)
+{
+	s32 err;
+
+	DEBUGFUNC("txgbe_get_san_mac_addr_offset");
+
+	/*
+	 * First read the EEPROM pointer to see if the MAC addresses are
+	 * available.
+	 */
+	err = hw->rom.readw_sw(hw, TXGBE_SAN_MAC_ADDR_PTR,
+				      san_mac_offset);
+	if (err) {
+		DEBUGOUT("eeprom read at offset %d failed",
+			 TXGBE_SAN_MAC_ADDR_PTR);
+	}
+
+	return err;
+}
+
+/**
+ *  txgbe_get_san_mac_addr - SAN MAC address retrieval from the EEPROM
+ *  @hw: pointer to hardware structure
+ *  @san_mac_addr: SAN MAC address
+ *
+ *  Reads the SAN MAC address from the EEPROM, if it's available.  This is
+ *  per-port, so set_lan_id() must be called before reading the addresses.
+ *  set_lan_id() is called by identify_sfp(), but this cannot be relied
+ *  upon for non-SFP connections, so we must call it here.
+ **/
+s32 txgbe_get_san_mac_addr(struct txgbe_hw *hw, u8 *san_mac_addr)
+{
+	u16 san_mac_data, san_mac_offset;
+	u8 i;
+	s32 err;
+
+	DEBUGFUNC("txgbe_get_san_mac_addr");
+
+	/*
+	 * First read the EEPROM pointer to see if the MAC addresses are
+	 * available. If they're not, no point in calling set_lan_id() here.
+	 */
+	err = txgbe_get_san_mac_addr_offset(hw, &san_mac_offset);
+	if (err || san_mac_offset == 0 || san_mac_offset == 0xFFFF)
+		goto san_mac_addr_out;
+
+	/* apply the port offset to the address offset */
+	san_mac_offset += hw->bus.func ? TXGBE_SAN_MAC_ADDR_PORT1_OFFSET :
+					 TXGBE_SAN_MAC_ADDR_PORT0_OFFSET;
+	for (i = 0; i < 3; i++) {
+		err = hw->rom.read16(hw, san_mac_offset,
+					      &san_mac_data);
+		if (err) {
+			DEBUGOUT("eeprom read at offset %d failed",
+				 san_mac_offset);
+			goto san_mac_addr_out;
+		}
+		san_mac_addr[i * 2] = (u8)(san_mac_data);
+		san_mac_addr[i * 2 + 1] = (u8)(san_mac_data >> 8);
+		san_mac_offset++;
+	}
+	return 0;
+
+san_mac_addr_out:
+	/*
+	 * No addresses available in this EEPROM.  It's not an
+	 * error though, so just wipe the local address and return.
+	 */
+	for (i = 0; i < 6; i++)
+		san_mac_addr[i] = 0xFF;
+	return 0;
+}
+
+/**
+ *  txgbe_set_san_mac_addr - Write the SAN MAC address to the EEPROM
+ *  @hw: pointer to hardware structure
+ *  @san_mac_addr: SAN MAC address
+ *
+ *  Write a SAN MAC address to the EEPROM.
+ **/
+s32 txgbe_set_san_mac_addr(struct txgbe_hw *hw, u8 *san_mac_addr)
+{
+	s32 err;
+	u16 san_mac_data, san_mac_offset;
+	u8 i;
+
+	DEBUGFUNC("txgbe_set_san_mac_addr");
+
+	/* Look for SAN mac address pointer.  If not defined, return */
+	err = txgbe_get_san_mac_addr_offset(hw, &san_mac_offset);
+	if (err || san_mac_offset == 0 || san_mac_offset == 0xFFFF)
+		return TXGBE_ERR_NO_SAN_ADDR_PTR;
+
+	/* Apply the port offset to the address offset */
+	san_mac_offset += hw->bus.func ? TXGBE_SAN_MAC_ADDR_PORT1_OFFSET :
+					 TXGBE_SAN_MAC_ADDR_PORT0_OFFSET;
+
+	for (i = 0; i < 3; i++) {
+		san_mac_data = (u16)((u16)(san_mac_addr[i * 2 + 1]) << 8);
+		san_mac_data |= (u16)(san_mac_addr[i * 2]);
+		hw->rom.write16(hw, san_mac_offset, san_mac_data);
+		san_mac_offset++;
+	}
+
+	return 0;
+}
+
+/**
+ *  txgbe_init_uta_tables - Initialize the Unicast Table Array
+ *  @hw: pointer to hardware structure
+ **/
+s32 txgbe_init_uta_tables(struct txgbe_hw *hw)
+{
+	int i;
+
+	DEBUGFUNC("txgbe_init_uta_tables");
+	DEBUGOUT(" Clearing UTA\n");
+
+	for (i = 0; i < 128; i++)
+		wr32(hw, TXGBE_UCADDRTBL(i), 0);
+
+	return 0;
+}
 
 /**
 *  txgbe_need_crosstalk_fix - Determine if we need to do the crosstalk fix
@@ -1142,6 +1565,7 @@ s32 txgbe_init_ops_pf(struct txgbe_hw *hw)
 	mac->start_hw = txgbe_start_hw_raptor;
 	mac->clear_hw_cntrs = txgbe_clear_hw_cntrs;
 	mac->enable_rx_dma = txgbe_enable_rx_dma_raptor;
+	mac->get_mac_addr = txgbe_get_mac_addr;
 	mac->stop_hw = txgbe_stop_hw;
 	mac->reset_hw = txgbe_reset_hw;
 
@@ -1149,12 +1573,19 @@ s32 txgbe_init_ops_pf(struct txgbe_hw *hw)
 	mac->enable_sec_rx_path = txgbe_enable_sec_rx_path;
 	mac->disable_sec_tx_path = txgbe_disable_sec_tx_path;
 	mac->enable_sec_tx_path = txgbe_enable_sec_tx_path;
+	mac->get_san_mac_addr = txgbe_get_san_mac_addr;
+	mac->set_san_mac_addr = txgbe_set_san_mac_addr;
 	mac->get_device_caps = txgbe_get_device_caps;
 	mac->autoc_read = txgbe_autoc_read;
 	mac->autoc_write = txgbe_autoc_write;
 
+	mac->set_rar = txgbe_set_rar;
+	mac->clear_rar = txgbe_clear_rar;
+	mac->init_rx_addrs = txgbe_init_rx_addrs;
 	mac->enable_rx = txgbe_enable_rx;
 	mac->disable_rx = txgbe_disable_rx;
+	mac->init_uta_tables = txgbe_init_uta_tables;
+	mac->setup_sfp = txgbe_setup_sfp_modules;
 	/* Link */
 	mac->get_link_capabilities = txgbe_get_link_capabilities_raptor;
 	mac->check_link = txgbe_check_mac_link;
@@ -1172,6 +1603,9 @@ s32 txgbe_init_ops_pf(struct txgbe_hw *hw)
 	rom->validate_checksum = txgbe_validate_eeprom_checksum;
 	rom->update_checksum = txgbe_update_eeprom_checksum;
 	rom->calc_checksum = txgbe_calc_eeprom_checksum;
+
+	mac->mcft_size		= TXGBE_RAPTOR_MC_TBL_SIZE;
+	mac->num_rar_entries	= TXGBE_RAPTOR_RAR_ENTRIES;
 	mac->max_rx_queues	= TXGBE_RAPTOR_MAX_RX_QUEUES;
 	mac->max_tx_queues	= TXGBE_RAPTOR_MAX_TX_QUEUES;
 
diff --git a/drivers/net/txgbe/base/txgbe_hw.h b/drivers/net/txgbe/base/txgbe_hw.h
index 86b616d38..61bbb5e0a 100644
--- a/drivers/net/txgbe/base/txgbe_hw.h
+++ b/drivers/net/txgbe/base/txgbe_hw.h
@@ -13,6 +13,7 @@ s32 txgbe_stop_hw(struct txgbe_hw *hw);
 s32 txgbe_start_hw_gen2(struct txgbe_hw *hw);
 s32 txgbe_start_hw_raptor(struct txgbe_hw *hw);
 s32 txgbe_clear_hw_cntrs(struct txgbe_hw *hw);
+s32 txgbe_get_mac_addr(struct txgbe_hw *hw, u8 *mac_addr);
 
 void txgbe_set_lan_id_multi_port(struct txgbe_hw *hw);
 
@@ -21,6 +22,11 @@ s32 txgbe_led_off(struct txgbe_hw *hw, u32 index);
 
 s32 txgbe_set_rar(struct txgbe_hw *hw, u32 index, u8 *addr, u32 vmdq,
 			  u32 enable_addr);
+s32 txgbe_clear_rar(struct txgbe_hw *hw, u32 index);
+s32 txgbe_init_rx_addrs(struct txgbe_hw *hw);
+s32 txgbe_update_mc_addr_list(struct txgbe_hw *hw, u8 *mc_addr_list,
+				      u32 mc_addr_count,
+				      txgbe_mc_addr_itr func, bool clear);
 s32 txgbe_disable_sec_rx_path(struct txgbe_hw *hw);
 s32 txgbe_enable_sec_rx_path(struct txgbe_hw *hw);
 s32 txgbe_disable_sec_tx_path(struct txgbe_hw *hw);
@@ -28,6 +34,10 @@ s32 txgbe_enable_sec_tx_path(struct txgbe_hw *hw);
 
 s32 txgbe_validate_mac_addr(u8 *mac_addr);
 
+s32 txgbe_get_san_mac_addr(struct txgbe_hw *hw, u8 *san_mac_addr);
+s32 txgbe_set_san_mac_addr(struct txgbe_hw *hw, u8 *san_mac_addr);
+
+s32 txgbe_init_uta_tables(struct txgbe_hw *hw);
 s32 txgbe_check_mac_link(struct txgbe_hw *hw,
 			       u32 *speed,
 			       bool *link_up, bool link_up_wait_to_complete);
@@ -42,6 +52,7 @@ void txgbe_enable_rx(struct txgbe_hw *hw);
 s32 txgbe_setup_mac_link_multispeed_fiber(struct txgbe_hw *hw,
 					  u32 speed,
 					  bool autoneg_wait_to_complete);
+void txgbe_set_mta(struct txgbe_hw *hw, u8 *mc_addr);
 s32 txgbe_init_shared_code(struct txgbe_hw *hw);
 s32 txgbe_set_mac_type(struct txgbe_hw *hw);
 s32 txgbe_init_ops_pf(struct txgbe_hw *hw);
diff --git a/drivers/net/txgbe/base/txgbe_type.h b/drivers/net/txgbe/base/txgbe_type.h
index 35a8ed3eb..86fb6e259 100644
--- a/drivers/net/txgbe/base/txgbe_type.h
+++ b/drivers/net/txgbe/base/txgbe_type.h
@@ -175,6 +175,14 @@ enum txgbe_bus_width {
 
 struct txgbe_hw;
 
+struct txgbe_addr_filter_info {
+	u32 num_mc_addrs;
+	u32 rar_used_count;
+	u32 mta_in_use;
+	u32 overflow_promisc;
+	bool user_set_promisc;
+};
+
 /* Bus parameters */
 struct txgbe_bus_info {
 	s32 (*get_bus_info)(struct txgbe_hw *);
@@ -498,7 +506,10 @@ struct txgbe_mac_info {
 	u16 wwnn_prefix;
 	/* prefix for World Wide Port Name (WWPN) */
 	u16 wwpn_prefix;
-
+#define TXGBE_MAX_MTA			128
+	u32 mta_shadow[TXGBE_MAX_MTA];
+	s32 mc_filter_type;
+	u32 mcft_size;
 	u32 num_rar_entries;
 	u32 max_tx_queues;
 	u32 max_rx_queues;
@@ -585,6 +596,7 @@ struct txgbe_hw {
 	void IOMEM *hw_addr;
 	void *back;
 	struct txgbe_mac_info mac;
+	struct txgbe_addr_filter_info addr_ctrl;
 	struct txgbe_phy_info phy;
 	struct txgbe_link_info link;
 	struct txgbe_rom_info rom;
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index 682519726..4922a9ca0 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -1982,6 +1982,37 @@ txgbe_dev_led_off(struct rte_eth_dev *dev)
 	hw = TXGBE_DEV_HW(dev);
 	return txgbe_led_off(hw, 4) == 0 ? 0 : -ENOTSUP;
 }
+
+static int
+txgbe_add_rar(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr,
+				uint32_t index, uint32_t pool)
+{
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+	uint32_t enable_addr = 1;
+
+	return txgbe_set_rar(hw, index, mac_addr->addr_bytes,
+			     pool, enable_addr);
+}
+
+static void
+txgbe_remove_rar(struct rte_eth_dev *dev, uint32_t index)
+{
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+
+	txgbe_clear_rar(hw, index);
+}
+
+static int
+txgbe_set_default_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *addr)
+{
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+
+	txgbe_remove_rar(dev, 0);
+	txgbe_add_rar(dev, addr, 0, pci_dev->max_vfs);
+
+	return 0;
+}
+
 /**
  * set the IVAR registers, mapping interrupt causes to vectors
  * @param hw
@@ -2071,6 +2102,32 @@ txgbe_configure_msix(struct rte_eth_dev *dev)
 			| TXGBE_ITR_WRDSA);
 }
 
+static u8 *
+txgbe_dev_addr_list_itr(__rte_unused struct txgbe_hw *hw,
+			u8 **mc_addr_ptr, u32 *vmdq)
+{
+	u8 *mc_addr;
+
+	*vmdq = 0;
+	mc_addr = *mc_addr_ptr;
+	*mc_addr_ptr = (mc_addr + sizeof(struct rte_ether_addr));
+	return mc_addr;
+}
+
+int
+txgbe_dev_set_mc_addr_list(struct rte_eth_dev *dev,
+			  struct rte_ether_addr *mc_addr_set,
+			  uint32_t nb_mc_addr)
+{
+	struct txgbe_hw *hw;
+	u8 *mc_addr_list;
+
+	hw = TXGBE_DEV_HW(dev);
+	mc_addr_list = (u8 *)mc_addr_set;
+	return txgbe_update_mc_addr_list(hw, mc_addr_list, nb_mc_addr,
+					 txgbe_dev_addr_list_itr, TRUE);
+}
+
 static const struct eth_dev_ops txgbe_eth_dev_ops = {
 	.dev_start                  = txgbe_dev_start,
 	.dev_stop                   = txgbe_dev_stop,
@@ -2099,6 +2156,10 @@ static const struct eth_dev_ops txgbe_eth_dev_ops = {
 	.tx_queue_release           = txgbe_dev_tx_queue_release,
 	.dev_led_on                 = txgbe_dev_led_on,
 	.dev_led_off                = txgbe_dev_led_off,
+	.mac_addr_add               = txgbe_add_rar,
+	.mac_addr_remove            = txgbe_remove_rar,
+	.mac_addr_set               = txgbe_set_default_mac_addr,
+	.set_mc_addr_list           = txgbe_dev_set_mc_addr_list,
 };
 
 RTE_PMD_REGISTER_PCI(net_txgbe, rte_txgbe_pmd);
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index 61f4aa772..b25846721 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -206,6 +206,9 @@ struct rte_txgbe_xstats_name_off {
 };
 
 const uint32_t *txgbe_dev_supported_ptypes_get(struct rte_eth_dev *dev);
+int txgbe_dev_set_mc_addr_list(struct rte_eth_dev *dev,
+				      struct rte_ether_addr *mc_addr_set,
+				      uint32_t nb_mc_addr);
 void txgbe_dev_setup_link_alarm_handler(void *param);
 void txgbe_read_stats_registers(struct txgbe_hw *hw,
 			   struct txgbe_hw_stats *hw_stats);
-- 
2.18.4


* [dpdk-dev] [PATCH v1 32/42] net/txgbe: add FW version get operation
  2020-09-01 11:50 [dpdk-dev] [PATCH v1 01/42] net/txgbe: add build and doc infrastructure Jiawen Wu
                   ` (29 preceding siblings ...)
  2020-09-01 11:51 ` [dpdk-dev] [PATCH v1 31/42] net/txgbe: add MAC address operations Jiawen Wu
@ 2020-09-01 11:51 ` Jiawen Wu
  2020-09-01 11:51 ` [dpdk-dev] [PATCH v1 33/42] net/txgbe: add EEPROM info " Jiawen Wu
                   ` (10 subsequent siblings)
  41 siblings, 0 replies; 49+ messages in thread
From: Jiawen Wu @ 2020-09-01 11:51 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Add firmware version get operation.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/txgbe_ethdev.c | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index 4922a9ca0..f5a986309 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -1493,6 +1493,27 @@ txgbe_dev_xstats_reset(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int
+txgbe_fw_version_get(struct rte_eth_dev *dev, char *fw_version, size_t fw_size)
+{
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+	u16 eeprom_verh, eeprom_verl;
+	u32 etrack_id;
+	int ret;
+
+	hw->rom.readw_sw(hw, TXGBE_EEPROM_VERSION_H, &eeprom_verh);
+	hw->rom.readw_sw(hw, TXGBE_EEPROM_VERSION_L, &eeprom_verl);
+
+	etrack_id = (eeprom_verh << 16) | eeprom_verl;
+	ret = snprintf(fw_version, fw_size, "0x%08x", etrack_id);
+
+	ret += 1; /* add the size of '\0' */
+	if (fw_size < (u32)ret)
+		return ret;
+	else
+		return 0;
+}
+
 static int
 txgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 {
@@ -2144,6 +2165,7 @@ static const struct eth_dev_ops txgbe_eth_dev_ops = {
 	.xstats_get_names           = txgbe_dev_xstats_get_names,
 	.xstats_get_names_by_id     = txgbe_dev_xstats_get_names_by_id,
 	.queue_stats_mapping_set    = txgbe_dev_queue_stats_mapping_set,
+	.fw_version_get             = txgbe_fw_version_get,
 	.dev_infos_get              = txgbe_dev_info_get,
 	.dev_supported_ptypes_get   = txgbe_dev_supported_ptypes_get,
 	.rx_queue_start	            = txgbe_dev_rx_queue_start,
-- 
2.18.4


* [dpdk-dev] [PATCH v1 33/42] net/txgbe: add EEPROM info get operation
  2020-09-01 11:50 [dpdk-dev] [PATCH v1 01/42] net/txgbe: add build and doc infrastructure Jiawen Wu
                   ` (30 preceding siblings ...)
  2020-09-01 11:51 ` [dpdk-dev] [PATCH v1 32/42] net/txgbe: add FW version get operation Jiawen Wu
@ 2020-09-01 11:51 ` Jiawen Wu
  2020-09-01 11:51 ` [dpdk-dev] [PATCH v1 34/42] net/txgbe: add remaining RX and TX queue operations Jiawen Wu
                   ` (9 subsequent siblings)
  41 siblings, 0 replies; 49+ messages in thread
From: Jiawen Wu @ 2020-09-01 11:51 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Add EEPROM information get operations.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/base/txgbe_eeprom.h |   8 ++
 drivers/net/txgbe/base/txgbe_hw.c     | 137 +++++++++++++++++++++++
 drivers/net/txgbe/base/txgbe_hw.h     |   5 +
 drivers/net/txgbe/base/txgbe_phy.c    |  16 +++
 drivers/net/txgbe/base/txgbe_phy.h    |   3 +
 drivers/net/txgbe/txgbe_ethdev.c      | 154 ++++++++++++++++++++++++++
 6 files changed, 323 insertions(+)

diff --git a/drivers/net/txgbe/base/txgbe_eeprom.h b/drivers/net/txgbe/base/txgbe_eeprom.h
index 44f555bbc..34bcb3feb 100644
--- a/drivers/net/txgbe/base/txgbe_eeprom.h
+++ b/drivers/net/txgbe/base/txgbe_eeprom.h
@@ -30,6 +30,14 @@
 #define TXGBE_FW_LESM_PARAMETERS_PTR		0x2
 #define TXGBE_FW_LESM_STATE_1			0x1
 #define TXGBE_FW_LESM_STATE_ENABLED		0x8000 /* LESM Enable bit */
+#define TXGBE_ALT_SAN_MAC_ADDR_BLK_PTR		0x27 /* Alt. SAN MAC block */
+#define TXGBE_ALT_SAN_MAC_ADDR_CAPS_OFFSET	0x0 /* Alt SAN MAC capability */
+#define TXGBE_ALT_SAN_MAC_ADDR_PORT0_OFFSET	0x1 /* Alt SAN MAC 0 offset */
+#define TXGBE_ALT_SAN_MAC_ADDR_PORT1_OFFSET	0x4 /* Alt SAN MAC 1 offset */
+#define TXGBE_ALT_SAN_MAC_ADDR_WWNN_OFFSET	0x7 /* Alt WWNN prefix offset */
+#define TXGBE_ALT_SAN_MAC_ADDR_WWPN_OFFSET	0x8 /* Alt WWPN prefix offset */
+#define TXGBE_ALT_SAN_MAC_ADDR_CAPS_SANMAC	0x0 /* Alt SAN MAC exists */
+#define TXGBE_ALT_SAN_MAC_ADDR_CAPS_ALTWWN	0x1 /* Alt WWN base exists */
 
 s32 txgbe_init_eeprom_params(struct txgbe_hw *hw);
 s32 txgbe_calc_eeprom_checksum(struct txgbe_hw *hw);
diff --git a/drivers/net/txgbe/base/txgbe_hw.c b/drivers/net/txgbe/base/txgbe_hw.c
index 088fa0aab..80ecec34d 100644
--- a/drivers/net/txgbe/base/txgbe_hw.c
+++ b/drivers/net/txgbe/base/txgbe_hw.c
@@ -717,6 +717,77 @@ s32 txgbe_update_mc_addr_list(struct txgbe_hw *hw, u8 *mc_addr_list,
 	return 0;
 }
 
+/**
+ *  txgbe_acquire_swfw_sync - Acquire SWFW semaphore
+ *  @hw: pointer to hardware structure
+ *  @mask: Mask to specify which semaphore to acquire
+ *
+ *  Acquires the SWFW semaphore through the MNGSEM register for the specified
+ *  function (CSR, PHY0, PHY1, EEPROM, Flash)
+ **/
+s32 txgbe_acquire_swfw_sync(struct txgbe_hw *hw, u32 mask)
+{
+	u32 mngsem = 0;
+	u32 swmask = TXGBE_MNGSEM_SW(mask);
+	u32 fwmask = TXGBE_MNGSEM_FW(mask);
+	u32 timeout = 200;
+	u32 i;
+
+	DEBUGFUNC("txgbe_acquire_swfw_sync");
+
+	for (i = 0; i < timeout; i++) {
+		/*
+		 * SW NVM semaphore bit is used for access to all
+		 * SW_FW_SYNC bits (not just NVM)
+		 */
+		if (txgbe_get_eeprom_semaphore(hw))
+			return TXGBE_ERR_SWFW_SYNC;
+
+		mngsem = rd32(hw, TXGBE_MNGSEM);
+		if (!(mngsem & (fwmask | swmask))) {
+			mngsem |= swmask;
+			wr32(hw, TXGBE_MNGSEM, mngsem);
+			txgbe_release_eeprom_semaphore(hw);
+			return 0;
+		} else {
+			/* Resource is currently in use by FW or SW */
+			txgbe_release_eeprom_semaphore(hw);
+			msec_delay(5);
+		}
+	}
+
+	/* If time expired, clear the bits holding the lock before failing */
+	if (mngsem & (fwmask | swmask))
+		txgbe_release_swfw_sync(hw, mngsem & (fwmask | swmask));
+
+	msec_delay(5);
+	return TXGBE_ERR_SWFW_SYNC;
+}
+
+/**
+ *  txgbe_release_swfw_sync - Release SWFW semaphore
+ *  @hw: pointer to hardware structure
+ *  @mask: Mask to specify which semaphore to release
+ *
+ *  Releases the SWFW semaphore through the MNGSEM register for the specified
+ *  function (CSR, PHY0, PHY1, EEPROM, Flash)
+ **/
+void txgbe_release_swfw_sync(struct txgbe_hw *hw, u32 mask)
+{
+	u32 mngsem;
+	u32 swmask = mask;
+
+	DEBUGFUNC("txgbe_release_swfw_sync");
+
+	txgbe_get_eeprom_semaphore(hw);
+
+	mngsem = rd32(hw, TXGBE_MNGSEM);
+	mngsem &= ~swmask;
+	wr32(hw, TXGBE_MNGSEM, mngsem);
+
+	txgbe_release_eeprom_semaphore(hw);
+}
+
 /**
  *  txgbe_disable_sec_rx_path - Stops the receive data path
  *  @hw: pointer to hardware structure
@@ -1070,6 +1141,62 @@ s32 txgbe_check_mac_link(struct txgbe_hw *hw, u32 *speed,
 	return 0;
 }
 
+/**
+ *  txgbe_get_wwn_prefix - Get alternative WWNN/WWPN prefix from
+ *  the EEPROM
+ *  @hw: pointer to hardware structure
+ *  @wwnn_prefix: the alternative WWNN prefix
+ *  @wwpn_prefix: the alternative WWPN prefix
+ *
+ *  This function reads the alternative SAN MAC address block in the
+ *  EEPROM to check for support of the alternative WWNN/WWPN prefixes.
+ **/
+s32 txgbe_get_wwn_prefix(struct txgbe_hw *hw, u16 *wwnn_prefix,
+				 u16 *wwpn_prefix)
+{
+	u16 offset, caps;
+	u16 alt_san_mac_blk_offset;
+
+	DEBUGFUNC("txgbe_get_wwn_prefix");
+
+	/* clear output first */
+	*wwnn_prefix = 0xFFFF;
+	*wwpn_prefix = 0xFFFF;
+
+	/* check if alternative SAN MAC is supported */
+	offset = TXGBE_ALT_SAN_MAC_ADDR_BLK_PTR;
+	if (hw->rom.readw_sw(hw, offset, &alt_san_mac_blk_offset))
+		goto wwn_prefix_err;
+
+	if ((alt_san_mac_blk_offset == 0) ||
+	    (alt_san_mac_blk_offset == 0xFFFF))
+		goto wwn_prefix_out;
+
+	/* check capability in alternative san mac address block */
+	offset = alt_san_mac_blk_offset + TXGBE_ALT_SAN_MAC_ADDR_CAPS_OFFSET;
+	if (hw->rom.read16(hw, offset, &caps))
+		goto wwn_prefix_err;
+	if (!(caps & TXGBE_ALT_SAN_MAC_ADDR_CAPS_ALTWWN))
+		goto wwn_prefix_out;
+
+	/* get the corresponding prefix for WWNN/WWPN */
+	offset = alt_san_mac_blk_offset + TXGBE_ALT_SAN_MAC_ADDR_WWNN_OFFSET;
+	if (hw->rom.read16(hw, offset, wwnn_prefix)) {
+		DEBUGOUT("eeprom read at offset %d failed", offset);
+	}
+
+	offset = alt_san_mac_blk_offset + TXGBE_ALT_SAN_MAC_ADDR_WWPN_OFFSET;
+	if (hw->rom.read16(hw, offset, wwpn_prefix))
+		goto wwn_prefix_err;
+
+wwn_prefix_out:
+	return 0;
+
+wwn_prefix_err:
+	DEBUGOUT("eeprom read at offset %d failed", offset);
+	return 0;
+}
+
 /**
  *  txgbe_get_device_caps - Get additional device capabilities
  *  @hw: pointer to hardware structure
@@ -1556,8 +1683,12 @@ s32 txgbe_init_ops_pf(struct txgbe_hw *hw)
 	phy->setup_link_speed = txgbe_setup_phy_link_speed;
 	phy->read_i2c_byte = txgbe_read_i2c_byte;
 	phy->write_i2c_byte = txgbe_write_i2c_byte;
+	phy->read_i2c_sff8472 = txgbe_read_i2c_sff8472;
 	phy->read_i2c_eeprom = txgbe_read_i2c_eeprom;
 	phy->write_i2c_eeprom = txgbe_write_i2c_eeprom;
+	phy->identify_sfp = txgbe_identify_module;
+	phy->read_i2c_byte_unlocked = txgbe_read_i2c_byte_unlocked;
+	phy->write_i2c_byte_unlocked = txgbe_write_i2c_byte_unlocked;
 	phy->reset = txgbe_reset_phy;
 
 	/* MAC */
@@ -1567,6 +1698,8 @@ s32 txgbe_init_ops_pf(struct txgbe_hw *hw)
 	mac->enable_rx_dma = txgbe_enable_rx_dma_raptor;
 	mac->get_mac_addr = txgbe_get_mac_addr;
 	mac->stop_hw = txgbe_stop_hw;
+	mac->acquire_swfw_sync = txgbe_acquire_swfw_sync;
+	mac->release_swfw_sync = txgbe_release_swfw_sync;
 	mac->reset_hw = txgbe_reset_hw;
 
 	mac->disable_sec_rx_path = txgbe_disable_sec_rx_path;
@@ -1576,6 +1709,7 @@ s32 txgbe_init_ops_pf(struct txgbe_hw *hw)
 	mac->get_san_mac_addr = txgbe_get_san_mac_addr;
 	mac->set_san_mac_addr = txgbe_set_san_mac_addr;
 	mac->get_device_caps = txgbe_get_device_caps;
+	mac->get_wwn_prefix = txgbe_get_wwn_prefix;
 	mac->autoc_read = txgbe_autoc_read;
 	mac->autoc_write = txgbe_autoc_write;
 
@@ -1590,6 +1724,9 @@ s32 txgbe_init_ops_pf(struct txgbe_hw *hw)
 	mac->get_link_capabilities = txgbe_get_link_capabilities_raptor;
 	mac->check_link = txgbe_check_mac_link;
 
+	/* Manageability interface */
+	mac->set_fw_drv_ver = txgbe_hic_set_drv_ver;
+
 	/* EEPROM */
 	rom->init_params = txgbe_init_eeprom_params;
 	rom->read16 = txgbe_ee_read16;
diff --git a/drivers/net/txgbe/base/txgbe_hw.h b/drivers/net/txgbe/base/txgbe_hw.h
index 61bbb5e0a..a5ee3ec0a 100644
--- a/drivers/net/txgbe/base/txgbe_hw.h
+++ b/drivers/net/txgbe/base/txgbe_hw.h
@@ -33,6 +33,8 @@ s32 txgbe_disable_sec_tx_path(struct txgbe_hw *hw);
 s32 txgbe_enable_sec_tx_path(struct txgbe_hw *hw);
 
 s32 txgbe_validate_mac_addr(u8 *mac_addr);
+s32 txgbe_acquire_swfw_sync(struct txgbe_hw *hw, u32 mask);
+void txgbe_release_swfw_sync(struct txgbe_hw *hw, u32 mask);
 
 s32 txgbe_get_san_mac_addr(struct txgbe_hw *hw, u8 *san_mac_addr);
 s32 txgbe_set_san_mac_addr(struct txgbe_hw *hw, u8 *san_mac_addr);
@@ -42,6 +44,9 @@ s32 txgbe_check_mac_link(struct txgbe_hw *hw,
 			       u32 *speed,
 			       bool *link_up, bool link_up_wait_to_complete);
 
+s32 txgbe_get_wwn_prefix(struct txgbe_hw *hw, u16 *wwnn_prefix,
+				 u16 *wwpn_prefix);
+
 s32 txgbe_get_device_caps(struct txgbe_hw *hw, u16 *device_caps);
 void txgbe_clear_tx_pending(struct txgbe_hw *hw);
 
diff --git a/drivers/net/txgbe/base/txgbe_phy.c b/drivers/net/txgbe/base/txgbe_phy.c
index 7981fb2f8..e9b096df9 100644
--- a/drivers/net/txgbe/base/txgbe_phy.c
+++ b/drivers/net/txgbe/base/txgbe_phy.c
@@ -1178,6 +1178,22 @@ s32 txgbe_read_i2c_eeprom(struct txgbe_hw *hw, u8 byte_offset,
 					 eeprom_data);
 }
 
+/**
+ *  txgbe_read_i2c_sff8472 - Reads 8 bit word over I2C interface
+ *  @hw: pointer to hardware structure
+ *  @byte_offset: byte offset at address 0xA2
+ *  @sff8472_data: value read
+ *
+ *  Performs byte read operation to SFP module's SFF-8472 data over I2C
+ **/
+s32 txgbe_read_i2c_sff8472(struct txgbe_hw *hw, u8 byte_offset,
+					  u8 *sff8472_data)
+{
+	return hw->phy.read_i2c_byte(hw, byte_offset,
+					 TXGBE_I2C_EEPROM_DEV_ADDR2,
+					 sff8472_data);
+}
+
 /**
  *  txgbe_write_i2c_eeprom - Writes 8 bit EEPROM word over I2C interface
  *  @hw: pointer to hardware structure
diff --git a/drivers/net/txgbe/base/txgbe_phy.h b/drivers/net/txgbe/base/txgbe_phy.h
index fbef67e78..33afa367a 100644
--- a/drivers/net/txgbe/base/txgbe_phy.h
+++ b/drivers/net/txgbe/base/txgbe_phy.h
@@ -354,6 +354,7 @@ s32 txgbe_setup_phy_link_tnx(struct txgbe_hw *hw);
 s32 txgbe_identify_module(struct txgbe_hw *hw);
 s32 txgbe_identify_sfp_module(struct txgbe_hw *hw);
 s32 txgbe_identify_qsfp_module(struct txgbe_hw *hw);
+
 s32 txgbe_read_i2c_byte(struct txgbe_hw *hw, u8 byte_offset,
 				u8 dev_addr, u8 *data);
 s32 txgbe_read_i2c_byte_unlocked(struct txgbe_hw *hw, u8 byte_offset,
@@ -362,6 +363,8 @@ s32 txgbe_write_i2c_byte(struct txgbe_hw *hw, u8 byte_offset,
 				 u8 dev_addr, u8 data);
 s32 txgbe_write_i2c_byte_unlocked(struct txgbe_hw *hw, u8 byte_offset,
 					  u8 dev_addr, u8 data);
+s32 txgbe_read_i2c_sff8472(struct txgbe_hw *hw, u8 byte_offset,
+					  u8 *sff8472_data);
 s32 txgbe_read_i2c_eeprom(struct txgbe_hw *hw, u8 byte_offset,
 				  u8 *eeprom_data);
 s32 txgbe_write_i2c_eeprom(struct txgbe_hw *hw, u8 byte_offset,
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index f5a986309..ba2849a82 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -313,6 +313,29 @@ txgbe_dev_queue_stats_mapping_set(struct rte_eth_dev *eth_dev,
 	return 0;
 }
 
+/*
+ * Ensure that all locks are released before first NVM or PHY access
+ */
+static void
+txgbe_swfw_lock_reset(struct txgbe_hw *hw)
+{
+	uint16_t mask;
+
+	/*
+	 * These locks are trickier since they are common to all ports; but
+	 * swfw_sync retries last long enough (1s) to be almost sure that if
+	 * the lock cannot be taken, it is due to an improper lock of the
+	 * semaphore.
+	 */
+	mask = TXGBE_MNGSEM_SWPHY |
+	       TXGBE_MNGSEM_SWMBX |
+	       TXGBE_MNGSEM_SWFLASH;
+	if (hw->mac.acquire_swfw_sync(hw, mask) < 0) {
+		PMD_DRV_LOG(DEBUG, "SWFW common locks released");
+	}
+	hw->mac.release_swfw_sync(hw, mask);
+}
+
 static int
 eth_txgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
 {
@@ -379,6 +402,9 @@ eth_txgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
 		return -EIO;
 	}
 
+	/* Unlock any pending hardware semaphore */
+	txgbe_swfw_lock_reset(hw);
+
 	err = hw->rom.init_params(hw);
 	if (err != 0) {
 		PMD_INIT_LOG(ERR, "The EEPROM init failed: %d", err);
@@ -941,6 +967,8 @@ txgbe_dev_close(struct rte_eth_dev *dev)
 	dev->rx_pkt_burst = NULL;
 	dev->tx_pkt_burst = NULL;
 
+	/* Unlock any pending hardware semaphore */
+	txgbe_swfw_lock_reset(hw);
 
 	/* disable uio intr before callback unregister */
 	rte_intr_disable(intr_handle);
@@ -2149,6 +2177,127 @@ txgbe_dev_set_mc_addr_list(struct rte_eth_dev *dev,
 					 txgbe_dev_addr_list_itr, TRUE);
 }
 
+static int
+txgbe_get_eeprom_length(struct rte_eth_dev *dev)
+{
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+
+	/* Return unit is byte count */
+	return hw->rom.word_size * 2;
+}
+
+static int
+txgbe_get_eeprom(struct rte_eth_dev *dev,
+		struct rte_dev_eeprom_info *in_eeprom)
+{
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+	struct txgbe_rom_info *eeprom = &hw->rom;
+	uint16_t *data = in_eeprom->data;
+	int first, length;
+
+	first = in_eeprom->offset >> 1;
+	length = in_eeprom->length >> 1;
+	if ((first > hw->rom.word_size) ||
+	    ((first + length) > hw->rom.word_size))
+		return -EINVAL;
+
+	in_eeprom->magic = hw->vendor_id | (hw->device_id << 16);
+
+	return eeprom->readw_buffer(hw, first, length, data);
+}
+
+static int
+txgbe_set_eeprom(struct rte_eth_dev *dev,
+		struct rte_dev_eeprom_info *in_eeprom)
+{
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+	struct txgbe_rom_info *eeprom = &hw->rom;
+	uint16_t *data = in_eeprom->data;
+	int first, length;
+
+	first = in_eeprom->offset >> 1;
+	length = in_eeprom->length >> 1;
+	if ((first > hw->rom.word_size) ||
+	    ((first + length) > hw->rom.word_size))
+		return -EINVAL;
+
+	in_eeprom->magic = hw->vendor_id | (hw->device_id << 16);
+
+	return eeprom->writew_buffer(hw,  first, length, data);
+}
+
+static int
+txgbe_get_module_info(struct rte_eth_dev *dev,
+		      struct rte_eth_dev_module_info *modinfo)
+{
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+	uint32_t status;
+	uint8_t sff8472_rev, addr_mode;
+	bool page_swap = false;
+
+	/* Check whether we support SFF-8472 or not */
+	status = hw->phy.read_i2c_eeprom(hw,
+					     TXGBE_SFF_SFF_8472_COMP,
+					     &sff8472_rev);
+	if (status != 0)
+		return -EIO;
+
+	/* addressing mode is not supported */
+	status = hw->phy.read_i2c_eeprom(hw,
+					     TXGBE_SFF_SFF_8472_SWAP,
+					     &addr_mode);
+	if (status != 0)
+		return -EIO;
+
+	if (addr_mode & TXGBE_SFF_ADDRESSING_MODE) {
+		PMD_DRV_LOG(ERR,
+			    "Address change required to access page 0xA2, "
+			    "but not supported. Please report the module "
+			    "type to the driver maintainers.");
+		page_swap = true;
+	}
+
+	if (sff8472_rev == TXGBE_SFF_SFF_8472_UNSUP || page_swap) {
+		/* We have a SFP, but it does not support SFF-8472 */
+		modinfo->type = RTE_ETH_MODULE_SFF_8079;
+		modinfo->eeprom_len = RTE_ETH_MODULE_SFF_8079_LEN;
+	} else {
+		/* We have a SFP which supports a revision of SFF-8472. */
+		modinfo->type = RTE_ETH_MODULE_SFF_8472;
+		modinfo->eeprom_len = RTE_ETH_MODULE_SFF_8472_LEN;
+	}
+
+	return 0;
+}
+
+static int
+txgbe_get_module_eeprom(struct rte_eth_dev *dev,
+			struct rte_dev_eeprom_info *info)
+{
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+	uint32_t status = TXGBE_ERR_PHY_ADDR_INVALID;
+	uint8_t databyte = 0xFF;
+	uint8_t *data = info->data;
+	uint32_t i = 0;
+
+	if (info->length == 0)
+		return -EINVAL;
+
+	for (i = info->offset; i < info->offset + info->length; i++) {
+		if (i < RTE_ETH_MODULE_SFF_8079_LEN)
+			status = hw->phy.read_i2c_eeprom(hw, i, &databyte);
+		else
+			status = hw->phy.read_i2c_sff8472(hw, i, &databyte);
+
+		if (status != 0)
+			return -EIO;
+
+		data[i - info->offset] = databyte;
+	}
+
+	return 0;
+}
+
 static const struct eth_dev_ops txgbe_eth_dev_ops = {
 	.dev_start                  = txgbe_dev_start,
 	.dev_stop                   = txgbe_dev_stop,
@@ -2182,6 +2331,11 @@ static const struct eth_dev_ops txgbe_eth_dev_ops = {
 	.mac_addr_remove            = txgbe_remove_rar,
 	.mac_addr_set               = txgbe_set_default_mac_addr,
 	.set_mc_addr_list           = txgbe_dev_set_mc_addr_list,
+	.get_eeprom_length          = txgbe_get_eeprom_length,
+	.get_eeprom                 = txgbe_get_eeprom,
+	.set_eeprom                 = txgbe_set_eeprom,
+	.get_module_info            = txgbe_get_module_info,
+	.get_module_eeprom          = txgbe_get_module_eeprom,
 };
 
 RTE_PMD_REGISTER_PCI(net_txgbe, rte_txgbe_pmd);
-- 
2.18.4

^ permalink raw reply	[flat|nested] 49+ messages in thread

* [dpdk-dev] [PATCH v1 34/42] net/txgbe: add remaining RX and TX queue operations
  2020-09-01 11:50 [dpdk-dev] [PATCH v1 01/42] net/txgbe: add build and doc infrastructure Jiawen Wu
                   ` (31 preceding siblings ...)
  2020-09-01 11:51 ` [dpdk-dev] [PATCH v1 33/42] net/txgbe: add EEPROM info " Jiawen Wu
@ 2020-09-01 11:51 ` Jiawen Wu
  2020-09-09 18:15   ` Ferruh Yigit
  2020-09-01 11:51 ` [dpdk-dev] [PATCH v1 35/42] net/txgbe: add VLAN handle support Jiawen Wu
                   ` (8 subsequent siblings)
  41 siblings, 1 reply; 49+ messages in thread
From: Jiawen Wu @ 2020-09-01 11:51 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Add the remaining receive and transmit queue operations.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/txgbe_ethdev.c | 123 +++++++++++++++
 drivers/net/txgbe/txgbe_ethdev.h |  16 ++
 drivers/net/txgbe/txgbe_rxtx.c   | 259 +++++++++++++++++++++++++++++++
 drivers/net/txgbe/txgbe_rxtx.h   |   1 +
 4 files changed, 399 insertions(+)

diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index ba2849a82..54c97f81c 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -622,6 +622,46 @@ static struct rte_pci_driver rte_txgbe_pmd = {
 
 
 
+static int
+txgbe_check_mq_mode(struct rte_eth_dev *dev)
+{
+	RTE_SET_USED(dev);
+
+	return 0;
+}
+
+static int
+txgbe_dev_configure(struct rte_eth_dev *dev)
+{
+	struct txgbe_interrupt *intr = TXGBE_DEV_INTR(dev);
+	struct txgbe_adapter *adapter = TXGBE_DEV_ADAPTER(dev);
+	int ret;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+
+	/* multiple queue mode checking */
+	ret  = txgbe_check_mq_mode(dev);
+	if (ret != 0) {
+		PMD_DRV_LOG(ERR, "txgbe_check_mq_mode fails with %d.",
+			    ret);
+		return ret;
+	}
+
+	/* set flag to update link status after init */
+	intr->flags |= TXGBE_FLAG_NEED_LINK_UPDATE;
+
+	/*
+	 * Initialize to TRUE. If any of the Rx queues does not meet the
+	 * bulk allocation preconditions, it will be reset.
+	 */
+	adapter->rx_bulk_alloc_allowed = true;
+
+	return 0;
+}
+
 static void
 txgbe_dev_phy_intr_setup(struct rte_eth_dev *dev)
 {
@@ -2062,6 +2102,47 @@ txgbe_set_default_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *addr)
 	return 0;
 }
 
+static int
+txgbe_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
+{
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+	uint32_t mask;
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+
+	if (queue_id < 32) {
+		mask = rd32(hw, TXGBE_IMS(0));
+		mask &= (1 << queue_id);
+		wr32(hw, TXGBE_IMS(0), mask);
+	} else if (queue_id < 64) {
+		mask = rd32(hw, TXGBE_IMS(1));
+		mask &= (1 << (queue_id - 32));
+		wr32(hw, TXGBE_IMS(1), mask);
+	}
+	rte_intr_enable(intr_handle);
+
+	return 0;
+}
+
+static int
+txgbe_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
+{
+	uint32_t mask;
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+
+	if (queue_id < 32) {
+		mask = rd32(hw, TXGBE_IMS(0));
+		mask &= ~(1 << queue_id);
+		wr32(hw, TXGBE_IMS(0), mask);
+	} else if (queue_id < 64) {
+		mask = rd32(hw, TXGBE_IMS(1));
+		mask &= ~(1 << (queue_id - 32));
+		wr32(hw, TXGBE_IMS(1), mask);
+	}
+
+	return 0;
+}
+
 /**
  * set the IVAR registers, mapping interrupt causes to vectors
  * @param hw
@@ -2151,6 +2232,37 @@ txgbe_configure_msix(struct rte_eth_dev *dev)
 			| TXGBE_ITR_WRDSA);
 }
 
+int
+txgbe_set_queue_rate_limit(struct rte_eth_dev *dev,
+			   uint16_t queue_idx, uint16_t tx_rate)
+{
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+	uint32_t bcnrc_val;
+
+	if (queue_idx >= hw->mac.max_tx_queues)
+		return -EINVAL;
+
+	if (tx_rate != 0) {
+		bcnrc_val = TXGBE_ARBTXRATE_MAX(tx_rate);
+		bcnrc_val |= TXGBE_ARBTXRATE_MIN(tx_rate / 2);
+	} else {
+		bcnrc_val = 0;
+	}
+
+	/*
+	 * Set global transmit compensation time to the MMW_SIZE in ARBTXMMW
+	 * register. MMW_SIZE=0x014 if 9728-byte jumbo is supported.
+	 */
+	wr32(hw, TXGBE_ARBTXMMW, 0x14);
+
+	/* Set ARBTXRATE of queue X */
+	wr32(hw, TXGBE_ARBPOOLIDX, queue_idx);
+	wr32(hw, TXGBE_ARBTXRATE, bcnrc_val);
+	txgbe_flush(hw);
+
+	return 0;
+}
+
 static u8 *
 txgbe_dev_addr_list_itr(__rte_unused struct txgbe_hw *hw,
 			u8 **mc_addr_ptr, u32 *vmdq)
@@ -2299,6 +2411,7 @@ txgbe_get_module_eeprom(struct rte_eth_dev *dev,
 }
 
 static const struct eth_dev_ops txgbe_eth_dev_ops = {
+	.dev_configure              = txgbe_dev_configure,
 	.dev_start                  = txgbe_dev_start,
 	.dev_stop                   = txgbe_dev_stop,
 	.dev_set_link_up            = txgbe_dev_set_link_up,
@@ -2322,7 +2435,13 @@ static const struct eth_dev_ops txgbe_eth_dev_ops = {
 	.tx_queue_start	            = txgbe_dev_tx_queue_start,
 	.tx_queue_stop              = txgbe_dev_tx_queue_stop,
 	.rx_queue_setup             = txgbe_dev_rx_queue_setup,
+	.rx_queue_intr_enable       = txgbe_dev_rx_queue_intr_enable,
+	.rx_queue_intr_disable      = txgbe_dev_rx_queue_intr_disable,
 	.rx_queue_release           = txgbe_dev_rx_queue_release,
+	.rx_queue_count             = txgbe_dev_rx_queue_count,
+	.rx_descriptor_done         = txgbe_dev_rx_descriptor_done,
+	.rx_descriptor_status       = txgbe_dev_rx_descriptor_status,
+	.tx_descriptor_status       = txgbe_dev_tx_descriptor_status,
 	.tx_queue_setup             = txgbe_dev_tx_queue_setup,
 	.tx_queue_release           = txgbe_dev_tx_queue_release,
 	.dev_led_on                 = txgbe_dev_led_on,
@@ -2330,12 +2449,16 @@ static const struct eth_dev_ops txgbe_eth_dev_ops = {
 	.mac_addr_add               = txgbe_add_rar,
 	.mac_addr_remove            = txgbe_remove_rar,
 	.mac_addr_set               = txgbe_set_default_mac_addr,
+	.set_queue_rate_limit       = txgbe_set_queue_rate_limit,
 	.set_mc_addr_list           = txgbe_dev_set_mc_addr_list,
+	.rxq_info_get               = txgbe_rxq_info_get,
+	.txq_info_get               = txgbe_txq_info_get,
 	.get_eeprom_length          = txgbe_get_eeprom_length,
 	.get_eeprom                 = txgbe_get_eeprom,
 	.set_eeprom                 = txgbe_set_eeprom,
 	.get_module_info            = txgbe_get_module_info,
 	.get_module_eeprom          = txgbe_get_module_eeprom,
+	.tx_done_cleanup            = txgbe_dev_tx_done_cleanup,
 };
 
 RTE_PMD_REGISTER_PCI(net_txgbe, rte_txgbe_pmd);
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index b25846721..017d708ae 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -125,6 +125,14 @@ int  txgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
 		uint16_t nb_tx_desc, unsigned int socket_id,
 		const struct rte_eth_txconf *tx_conf);
 
+uint32_t txgbe_dev_rx_queue_count(struct rte_eth_dev *dev,
+		uint16_t rx_queue_id);
+
+int txgbe_dev_rx_descriptor_done(void *rx_queue, uint16_t offset);
+
+int txgbe_dev_rx_descriptor_status(void *rx_queue, uint16_t offset);
+int txgbe_dev_tx_descriptor_status(void *tx_queue, uint16_t offset);
+
 int txgbe_dev_rx_init(struct rte_eth_dev *dev);
 
 void txgbe_dev_tx_init(struct rte_eth_dev *dev);
@@ -144,6 +152,12 @@ int txgbe_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 
 int txgbe_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 
+void txgbe_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+	struct rte_eth_rxq_info *qinfo);
+
+void txgbe_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+	struct rte_eth_txq_info *qinfo);
+
 uint16_t txgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		uint16_t nb_pkts);
 
@@ -182,6 +196,8 @@ void txgbe_pf_host_uninit(struct rte_eth_dev *eth_dev);
 void txgbe_pf_mbx_process(struct rte_eth_dev *eth_dev);
 
 
+int txgbe_set_queue_rate_limit(struct rte_eth_dev *dev, uint16_t queue_idx,
+			       uint16_t tx_rate);
 #define TXGBE_LINK_DOWN_CHECK_TIMEOUT 4000 /* ms */
 #define TXGBE_LINK_UP_CHECK_TIMEOUT   1000 /* ms */
 #define TXGBE_VMDQ_NUM_UC_MAC         4096 /* Maximum nb. of UC MAC addr. */
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index f50bc82ce..df094408f 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -1900,6 +1900,97 @@ txgbe_tx_queue_release_mbufs(struct txgbe_tx_queue *txq)
 	}
 }
 
+static int
+txgbe_tx_done_cleanup_full(struct txgbe_tx_queue *txq, uint32_t free_cnt)
+{
+	struct txgbe_tx_entry *swr_ring = txq->sw_ring;
+	uint16_t i, tx_last, tx_id;
+	uint16_t nb_tx_free_last;
+	uint16_t nb_tx_to_clean;
+	uint32_t pkt_cnt;
+
+	/* Start free mbuf from the next of tx_tail */
+	tx_last = txq->tx_tail;
+	tx_id  = swr_ring[tx_last].next_id;
+
+	if (txq->nb_tx_free == 0 && txgbe_xmit_cleanup(txq))
+		return 0;
+
+	nb_tx_to_clean = txq->nb_tx_free;
+	nb_tx_free_last = txq->nb_tx_free;
+	if (!free_cnt)
+		free_cnt = txq->nb_tx_desc;
+
+	/* Loop through swr_ring to count the number of
+	 * freeable mbufs and packets.
+	 */
+	for (pkt_cnt = 0; pkt_cnt < free_cnt; ) {
+		for (i = 0; i < nb_tx_to_clean &&
+			pkt_cnt < free_cnt &&
+			tx_id != tx_last; i++) {
+			if (swr_ring[tx_id].mbuf != NULL) {
+				rte_pktmbuf_free_seg(swr_ring[tx_id].mbuf);
+				swr_ring[tx_id].mbuf = NULL;
+
+				/*
+				 * last segment in the packet,
+				 * increment packet count
+				 */
+				pkt_cnt += (swr_ring[tx_id].last_id == tx_id);
+			}
+
+			tx_id = swr_ring[tx_id].next_id;
+		}
+
+		if (pkt_cnt < free_cnt) {
+			if (txgbe_xmit_cleanup(txq))
+				break;
+
+			nb_tx_to_clean = txq->nb_tx_free - nb_tx_free_last;
+			nb_tx_free_last = txq->nb_tx_free;
+		}
+	}
+
+	return (int)pkt_cnt;
+}
+
+static int
+txgbe_tx_done_cleanup_simple(struct txgbe_tx_queue *txq,
+			uint32_t free_cnt)
+{
+	int i, n, cnt;
+
+	if (free_cnt == 0 || free_cnt > txq->nb_tx_desc)
+		free_cnt = txq->nb_tx_desc;
+
+	cnt = free_cnt - free_cnt % txq->tx_free_thresh;
+
+	for (i = 0; i < cnt; i += n) {
+		if (txq->nb_tx_desc - txq->nb_tx_free < txq->tx_free_thresh)
+			break;
+
+		n = txgbe_tx_free_bufs(txq);
+
+		if (n == 0)
+			break;
+	}
+
+	return i;
+}
+
+int
+txgbe_dev_tx_done_cleanup(void *tx_queue, uint32_t free_cnt)
+{
+	struct txgbe_tx_queue *txq = (struct txgbe_tx_queue *)tx_queue;
+	if (txq->offloads == 0 &&
+		txq->tx_free_thresh >= RTE_PMD_TXGBE_TX_MAX_BURST) {
+
+		return txgbe_tx_done_cleanup_simple(txq, free_cnt);
+	}
+
+	return txgbe_tx_done_cleanup_full(txq, free_cnt);
+}
+
 static void __rte_cold
 txgbe_tx_free_swring(struct txgbe_tx_queue *txq)
 {
@@ -1924,9 +2015,49 @@ txgbe_dev_tx_queue_release(void *txq)
 	txgbe_tx_queue_release(txq);
 }
 
+/* (Re)set dynamic txgbe_tx_queue fields to defaults */
+static void __rte_cold
+txgbe_reset_tx_queue(struct txgbe_tx_queue *txq)
+{
+	static const struct txgbe_tx_desc zeroed_desc = {0};
+	struct txgbe_tx_entry *txe = txq->sw_ring;
+	uint16_t prev, i;
+
+	/* Zero out HW ring memory */
+	for (i = 0; i < txq->nb_tx_desc; i++) {
+		txq->tx_ring[i] = zeroed_desc;
+	}
+
+	/* Initialize SW ring entries */
+	prev = (uint16_t) (txq->nb_tx_desc - 1);
+	for (i = 0; i < txq->nb_tx_desc; i++) {
+		volatile struct txgbe_tx_desc *txd = &txq->tx_ring[i];
+
+		txd->dw3 = rte_cpu_to_le_32(TXGBE_TXD_DD);
+		txe[i].mbuf = NULL;
+		txe[i].last_id = i;
+		txe[prev].next_id = i;
+		prev = i;
+	}
+
+	txq->tx_next_dd = (uint16_t)(txq->tx_free_thresh - 1);
+	txq->tx_tail = 0;
+
+	/*
+	 * Always allow 1 descriptor to be un-allocated to avoid
+	 * a H/W race condition
+	 */
+	txq->last_desc_cleaned = (uint16_t)(txq->nb_tx_desc - 1);
+	txq->nb_tx_free = (uint16_t)(txq->nb_tx_desc - 1);
+	txq->ctx_curr = 0;
+	memset((void *)&txq->ctx_cache, 0,
+		TXGBE_CTX_NUM * sizeof(struct txgbe_ctx_info));
+}
+
 static const struct txgbe_txq_ops def_txq_ops = {
 	.release_mbufs = txgbe_tx_queue_release_mbufs,
 	.free_swring = txgbe_tx_free_swring,
+	.reset = txgbe_reset_tx_queue,
 };
 
 /* Takes an ethdev and a queue and sets up the tx function to be used based on
@@ -2491,6 +2622,97 @@ txgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
 	return 0;
 }
 
+uint32_t
+txgbe_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+#define TXGBE_RXQ_SCAN_INTERVAL 4
+	volatile struct txgbe_rx_desc *rxdp;
+	struct txgbe_rx_queue *rxq;
+	uint32_t desc = 0;
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+	rxdp = &(rxq->rx_ring[rxq->rx_tail]);
+
+	while ((desc < rxq->nb_rx_desc) &&
+		(rxdp->qw1.lo.status &
+			rte_cpu_to_le_32(TXGBE_RXD_STAT_DD))) {
+		desc += TXGBE_RXQ_SCAN_INTERVAL;
+		rxdp += TXGBE_RXQ_SCAN_INTERVAL;
+		if (rxq->rx_tail + desc >= rxq->nb_rx_desc)
+			rxdp = &(rxq->rx_ring[rxq->rx_tail +
+				desc - rxq->nb_rx_desc]);
+	}
+
+	return desc;
+}
+
+int
+txgbe_dev_rx_descriptor_done(void *rx_queue, uint16_t offset)
+{
+	volatile struct txgbe_rx_desc *rxdp;
+	struct txgbe_rx_queue *rxq = rx_queue;
+	uint32_t desc;
+
+	if (unlikely(offset >= rxq->nb_rx_desc))
+		return 0;
+	desc = rxq->rx_tail + offset;
+	if (desc >= rxq->nb_rx_desc)
+		desc -= rxq->nb_rx_desc;
+
+	rxdp = &rxq->rx_ring[desc];
+	return !!(rxdp->qw1.lo.status &
+			rte_cpu_to_le_32(TXGBE_RXD_STAT_DD));
+}
+
+int
+txgbe_dev_rx_descriptor_status(void *rx_queue, uint16_t offset)
+{
+	struct txgbe_rx_queue *rxq = rx_queue;
+	volatile uint32_t *status;
+	uint32_t nb_hold, desc;
+
+	if (unlikely(offset >= rxq->nb_rx_desc))
+		return -EINVAL;
+
+	nb_hold = rxq->nb_rx_hold;
+	if (offset >= rxq->nb_rx_desc - nb_hold)
+		return RTE_ETH_RX_DESC_UNAVAIL;
+
+	desc = rxq->rx_tail + offset;
+	if (desc >= rxq->nb_rx_desc)
+		desc -= rxq->nb_rx_desc;
+
+	status = &rxq->rx_ring[desc].qw1.lo.status;
+	if (*status & rte_cpu_to_le_32(TXGBE_RXD_STAT_DD))
+		return RTE_ETH_RX_DESC_DONE;
+
+	return RTE_ETH_RX_DESC_AVAIL;
+}
+
+int
+txgbe_dev_tx_descriptor_status(void *tx_queue, uint16_t offset)
+{
+	struct txgbe_tx_queue *txq = tx_queue;
+	volatile uint32_t *status;
+	uint32_t desc;
+
+	if (unlikely(offset >= txq->nb_tx_desc))
+		return -EINVAL;
+
+	desc = txq->tx_tail + offset;
+	if (desc >= txq->nb_tx_desc) {
+		desc -= txq->nb_tx_desc;
+		if (desc >= txq->nb_tx_desc)
+			desc -= txq->nb_tx_desc;
+	}
+
+	status = &txq->tx_ring[desc].dw3;
+	if (*status & rte_cpu_to_le_32(TXGBE_TXD_DD))
+		return RTE_ETH_TX_DESC_DONE;
+
+	return RTE_ETH_TX_DESC_FULL;
+}
+
 void __rte_cold
 txgbe_dev_clear_queues(struct rte_eth_dev *dev)
 {
@@ -3094,3 +3316,40 @@ txgbe_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	return 0;
 }
 
+void
+txgbe_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+	struct rte_eth_rxq_info *qinfo)
+{
+	struct txgbe_rx_queue *rxq;
+
+	rxq = dev->data->rx_queues[queue_id];
+
+	qinfo->mp = rxq->mb_pool;
+	qinfo->scattered_rx = dev->data->scattered_rx;
+	qinfo->nb_desc = rxq->nb_rx_desc;
+
+	qinfo->conf.rx_free_thresh = rxq->rx_free_thresh;
+	qinfo->conf.rx_drop_en = rxq->drop_en;
+	qinfo->conf.rx_deferred_start = rxq->rx_deferred_start;
+	qinfo->conf.offloads = rxq->offloads;
+}
+
+void
+txgbe_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+	struct rte_eth_txq_info *qinfo)
+{
+	struct txgbe_tx_queue *txq;
+
+	txq = dev->data->tx_queues[queue_id];
+
+	qinfo->nb_desc = txq->nb_tx_desc;
+
+	qinfo->conf.tx_thresh.pthresh = txq->pthresh;
+	qinfo->conf.tx_thresh.hthresh = txq->hthresh;
+	qinfo->conf.tx_thresh.wthresh = txq->wthresh;
+
+	qinfo->conf.tx_free_thresh = txq->tx_free_thresh;
+	qinfo->conf.offloads = txq->offloads;
+	qinfo->conf.tx_deferred_start = txq->tx_deferred_start;
+}
+
diff --git a/drivers/net/txgbe/txgbe_rxtx.h b/drivers/net/txgbe/txgbe_rxtx.h
index 958ca2e97..f773357a3 100644
--- a/drivers/net/txgbe/txgbe_rxtx.h
+++ b/drivers/net/txgbe/txgbe_rxtx.h
@@ -402,6 +402,7 @@ struct txgbe_txq_ops {
 void txgbe_set_tx_function(struct rte_eth_dev *dev, struct txgbe_tx_queue *txq);
 
 void txgbe_set_rx_function(struct rte_eth_dev *dev);
+int txgbe_dev_tx_done_cleanup(void *tx_queue, uint32_t free_cnt);
 
 uint64_t txgbe_get_tx_port_offloads(struct rte_eth_dev *dev);
 uint64_t txgbe_get_rx_queue_offloads(struct rte_eth_dev *dev);
-- 
2.18.4

^ permalink raw reply	[flat|nested] 49+ messages in thread

* [dpdk-dev] [PATCH v1 35/42] net/txgbe: add VLAN handle support
  2020-09-01 11:50 [dpdk-dev] [PATCH v1 01/42] net/txgbe: add build and doc infrastructure Jiawen Wu
                   ` (32 preceding siblings ...)
  2020-09-01 11:51 ` [dpdk-dev] [PATCH v1 34/42] net/txgbe: add remaining RX and TX queue operations Jiawen Wu
@ 2020-09-01 11:51 ` Jiawen Wu
  2020-09-01 11:51 ` [dpdk-dev] [PATCH v1 36/42] net/txgbe: add flow control support Jiawen Wu
                   ` (7 subsequent siblings)
  41 siblings, 0 replies; 49+ messages in thread
From: Jiawen Wu @ 2020-09-01 11:51 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Add VLAN filter, tpid, offload and strip set support.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/txgbe_ethdev.c | 370 +++++++++++++++++++++++++++++++
 drivers/net/txgbe/txgbe_ethdev.h |  32 +++
 2 files changed, 402 insertions(+)

diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index 54c97f81c..5e0b800ef 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -41,6 +41,8 @@ static void txgbe_dev_close(struct rte_eth_dev *dev);
 static int txgbe_dev_link_update(struct rte_eth_dev *dev,
 				int wait_to_complete);
 static int txgbe_dev_stats_reset(struct rte_eth_dev *dev);
+static void txgbe_vlan_hw_strip_enable(struct rte_eth_dev *dev, uint16_t queue);
+static void txgbe_vlan_hw_strip_disable(struct rte_eth_dev *dev, uint16_t queue);
 
 static void txgbe_dev_link_status_print(struct rte_eth_dev *dev);
 static int txgbe_dev_lsc_interrupt_setup(struct rte_eth_dev *dev, uint8_t on);
@@ -53,6 +55,24 @@ static void txgbe_dev_interrupt_handler(void *param);
 static void txgbe_dev_interrupt_delayed_handler(void *param);
 static void txgbe_configure_msix(struct rte_eth_dev *dev);
 
+#define TXGBE_SET_HWSTRIP(h, q) do {\
+		uint32_t idx = (q) / (sizeof((h)->bitmap[0]) * NBBY); \
+		uint32_t bit = (q) % (sizeof((h)->bitmap[0]) * NBBY); \
+		(h)->bitmap[idx] |= 1 << bit;\
+	} while (0)
+
+#define TXGBE_CLEAR_HWSTRIP(h, q) do {\
+		uint32_t idx = (q) / (sizeof((h)->bitmap[0]) * NBBY); \
+		uint32_t bit = (q) % (sizeof((h)->bitmap[0]) * NBBY); \
+		(h)->bitmap[idx] &= ~(1 << bit);\
+	} while (0)
+
+#define TXGBE_GET_HWSTRIP(h, q, r) do {\
+		uint32_t idx = (q) / (sizeof((h)->bitmap[0]) * NBBY); \
+		uint32_t bit = (q) % (sizeof((h)->bitmap[0]) * NBBY); \
+		(r) = (h)->bitmap[idx] >> bit & 1;\
+	} while (0)
+
 /*
  * The set of PCI devices this driver supports
  */
@@ -341,6 +361,8 @@ eth_txgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
 {
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
 	struct txgbe_hw *hw = TXGBE_DEV_HW(eth_dev);
+	struct txgbe_vfta *shadow_vfta = TXGBE_DEV_VFTA(eth_dev);
+	struct txgbe_hwstrip *hwstrip = TXGBE_DEV_HWSTRIP(eth_dev);
 	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
 	const struct rte_memzone *mz;
 	uint32_t ctrl_ext;
@@ -488,6 +510,12 @@ eth_txgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
 	 */
 	eth_dev->data->dev_flags |= RTE_ETH_DEV_CLOSE_REMOVE;
 
+	/* initialize the vfta */
+	memset(shadow_vfta, 0, sizeof(*shadow_vfta));
+
+	/* initialize the hw strip bitmap */
+	memset(hwstrip, 0, sizeof(*hwstrip));
+
 	/* initialize PF if max_vfs not zero */
 	txgbe_pf_host_init(eth_dev);
 
@@ -620,6 +648,335 @@ static struct rte_pci_driver rte_txgbe_pmd = {
 	.remove = eth_txgbe_pci_remove,
 };
 
+static int
+txgbe_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
+{
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+	struct txgbe_vfta *shadow_vfta = TXGBE_DEV_VFTA(dev);
+	uint32_t vfta;
+	uint32_t vid_idx;
+	uint32_t vid_bit;
+
+	vid_idx = (uint32_t)((vlan_id >> 5) & 0x7F);
+	vid_bit = (uint32_t)(1 << (vlan_id & 0x1F));
+	vfta = rd32(hw, TXGBE_VLANTBL(vid_idx));
+	if (on)
+		vfta |= vid_bit;
+	else
+		vfta &= ~vid_bit;
+	wr32(hw, TXGBE_VLANTBL(vid_idx), vfta);
+
+	/* update local VFTA copy */
+	shadow_vfta->vfta[vid_idx] = vfta;
+
+	return 0;
+}
+
+static void
+txgbe_vlan_strip_queue_set(struct rte_eth_dev *dev, uint16_t queue, int on)
+{
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+	struct txgbe_rx_queue *rxq;
+	bool restart;
+	uint32_t rxcfg, rxbal, rxbah;
+
+	if (on)
+		txgbe_vlan_hw_strip_enable(dev, queue);
+	else
+		txgbe_vlan_hw_strip_disable(dev, queue);
+
+	rxq = dev->data->rx_queues[queue];
+	rxbal = rd32(hw, TXGBE_RXBAL(rxq->reg_idx));
+	rxbah = rd32(hw, TXGBE_RXBAH(rxq->reg_idx));
+	rxcfg = rd32(hw, TXGBE_RXCFG(rxq->reg_idx));
+	if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+		restart = (rxcfg & TXGBE_RXCFG_ENA) &&
+			!(rxcfg & TXGBE_RXCFG_VLAN);
+		rxcfg |= TXGBE_RXCFG_VLAN;
+	} else {
+		restart = (rxcfg & TXGBE_RXCFG_ENA) &&
+			(rxcfg & TXGBE_RXCFG_VLAN);
+		rxcfg &= ~TXGBE_RXCFG_VLAN;
+	}
+	rxcfg &= ~TXGBE_RXCFG_ENA;
+
+	if (restart) {
+		/* set vlan strip for ring */
+		txgbe_dev_rx_queue_stop(dev, queue);
+		wr32(hw, TXGBE_RXBAL(rxq->reg_idx), rxbal);
+		wr32(hw, TXGBE_RXBAH(rxq->reg_idx), rxbah);
+		wr32(hw, TXGBE_RXCFG(rxq->reg_idx), rxcfg);
+		txgbe_dev_rx_queue_start(dev, queue);
+	}
+}
+
+static int
+txgbe_vlan_tpid_set(struct rte_eth_dev *dev,
+		    enum rte_vlan_type vlan_type,
+		    uint16_t tpid)
+{
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+	int ret = 0;
+	uint32_t portctrl, vlan_ext, qinq;
+
+	portctrl = rd32(hw, TXGBE_PORTCTL);
+
+	vlan_ext = (portctrl & TXGBE_PORTCTL_VLANEXT);
+	qinq = vlan_ext && (portctrl & TXGBE_PORTCTL_QINQ);
+	switch (vlan_type) {
+	case ETH_VLAN_TYPE_INNER:
+		if (vlan_ext) {
+			wr32m(hw, TXGBE_VLANCTL,
+				TXGBE_VLANCTL_TPID_MASK,
+				TXGBE_VLANCTL_TPID(tpid));
+			wr32m(hw, TXGBE_DMATXCTRL,
+				TXGBE_DMATXCTRL_TPID_MASK,
+				TXGBE_DMATXCTRL_TPID(tpid));
+		} else {
+			ret = -ENOTSUP;
+			PMD_DRV_LOG(ERR, "Inner type is not supported"
+				    " by single VLAN");
+		}
+
+		if (qinq) {
+			wr32m(hw, TXGBE_TAGTPID(0),
+				TXGBE_TAGTPID_LSB_MASK,
+				TXGBE_TAGTPID_LSB(tpid));
+		}
+		break;
+	case ETH_VLAN_TYPE_OUTER:
+		if (vlan_ext) {
+			/* Only the high 16 bits are valid */
+			wr32m(hw, TXGBE_EXTAG,
+				TXGBE_EXTAG_VLAN_MASK,
+				TXGBE_EXTAG_VLAN(tpid));
+		} else {
+			wr32m(hw, TXGBE_VLANCTL,
+				TXGBE_VLANCTL_TPID_MASK,
+				TXGBE_VLANCTL_TPID(tpid));
+			wr32m(hw, TXGBE_DMATXCTRL,
+				TXGBE_DMATXCTRL_TPID_MASK,
+				TXGBE_DMATXCTRL_TPID(tpid));
+		}
+
+		if (qinq) {
+			wr32m(hw, TXGBE_TAGTPID(0),
+				TXGBE_TAGTPID_MSB_MASK,
+				TXGBE_TAGTPID_MSB(tpid));
+		}
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "Unsupported VLAN type %d", vlan_type);
+		return -EINVAL;
+	}
+
+	return ret;
+}
+
+void
+txgbe_vlan_hw_filter_disable(struct rte_eth_dev *dev)
+{
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+	uint32_t vlnctrl;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* Filter Table Disable */
+	vlnctrl = rd32(hw, TXGBE_VLANCTL);
+	vlnctrl &= ~TXGBE_VLANCTL_VFE;
+	wr32(hw, TXGBE_VLANCTL, vlnctrl);
+}
+
+void
+txgbe_vlan_hw_filter_enable(struct rte_eth_dev *dev)
+{
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+	struct txgbe_vfta *shadow_vfta = TXGBE_DEV_VFTA(dev);
+	uint32_t vlnctrl;
+	uint16_t i;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* Filter Table Enable */
+	vlnctrl = rd32(hw, TXGBE_VLANCTL);
+	vlnctrl &= ~TXGBE_VLANCTL_CFIENA;
+	vlnctrl |= TXGBE_VLANCTL_VFE;
+	wr32(hw, TXGBE_VLANCTL, vlnctrl);
+
+	/* write whatever is in local vfta copy */
+	for (i = 0; i < TXGBE_VFTA_SIZE; i++)
+		wr32(hw, TXGBE_VLANTBL(i), shadow_vfta->vfta[i]);
+}
+
+void
+txgbe_vlan_hw_strip_bitmap_set(struct rte_eth_dev *dev, uint16_t queue, bool on)
+{
+	struct txgbe_hwstrip *hwstrip = TXGBE_DEV_HWSTRIP(dev);
+	struct txgbe_rx_queue *rxq;
+
+	if (queue >= TXGBE_MAX_RX_QUEUE_NUM)
+		return;
+
+	if (on)
+		TXGBE_SET_HWSTRIP(hwstrip, queue);
+	else
+		TXGBE_CLEAR_HWSTRIP(hwstrip, queue);
+
+	if (queue >= dev->data->nb_rx_queues)
+		return;
+
+	rxq = dev->data->rx_queues[queue];
+
+	if (on) {
+		rxq->vlan_flags = PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+	} else {
+		rxq->vlan_flags = PKT_RX_VLAN;
+		rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+	}
+}
+
+static void
+txgbe_vlan_hw_strip_disable(struct rte_eth_dev *dev, uint16_t queue)
+{
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+	uint32_t ctrl;
+
+	PMD_INIT_FUNC_TRACE();
+
+	ctrl = rd32(hw, TXGBE_RXCFG(queue));
+	ctrl &= ~TXGBE_RXCFG_VLAN;
+	wr32(hw, TXGBE_RXCFG(queue), ctrl);
+
+	/* record this setting for per-queue HW strip */
+	txgbe_vlan_hw_strip_bitmap_set(dev, queue, 0);
+}
+
+static void
+txgbe_vlan_hw_strip_enable(struct rte_eth_dev *dev, uint16_t queue)
+{
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+	uint32_t ctrl;
+
+	PMD_INIT_FUNC_TRACE();
+
+	ctrl = rd32(hw, TXGBE_RXCFG(queue));
+	ctrl |= TXGBE_RXCFG_VLAN;
+	wr32(hw, TXGBE_RXCFG(queue), ctrl);
+
+	/* record this setting for per-queue HW strip */
+	txgbe_vlan_hw_strip_bitmap_set(dev, queue, 1);
+}
+
+static void
+txgbe_vlan_hw_extend_disable(struct rte_eth_dev *dev)
+{
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+	uint32_t ctrl;
+
+	PMD_INIT_FUNC_TRACE();
+
+	ctrl = rd32(hw, TXGBE_PORTCTL);
+	ctrl &= ~TXGBE_PORTCTL_VLANEXT;
+	ctrl &= ~TXGBE_PORTCTL_QINQ;
+	wr32(hw, TXGBE_PORTCTL, ctrl);
+}
+
+static void
+txgbe_vlan_hw_extend_enable(struct rte_eth_dev *dev)
+{
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+	struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
+	struct rte_eth_txmode *txmode = &dev->data->dev_conf.txmode;
+	uint32_t ctrl;
+
+	PMD_INIT_FUNC_TRACE();
+
+	ctrl  = rd32(hw, TXGBE_PORTCTL);
+	ctrl |= TXGBE_PORTCTL_VLANEXT;
+	if (rxmode->offloads & DEV_RX_OFFLOAD_QINQ_STRIP ||
+	    txmode->offloads & DEV_TX_OFFLOAD_QINQ_INSERT)
+		ctrl |= TXGBE_PORTCTL_QINQ;
+	wr32(hw, TXGBE_PORTCTL, ctrl);
+}
+
+void
+txgbe_vlan_hw_strip_config(struct rte_eth_dev *dev)
+{
+	struct txgbe_rx_queue *rxq;
+	uint16_t i;
+
+	PMD_INIT_FUNC_TRACE();
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+
+		if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+			txgbe_vlan_strip_queue_set(dev, i, 1);
+		else
+			txgbe_vlan_strip_queue_set(dev, i, 0);
+	}
+}
+
+void
+txgbe_config_vlan_strip_on_all_queues(struct rte_eth_dev *dev, int mask)
+{
+	uint16_t i;
+	struct rte_eth_rxmode *rxmode;
+	struct txgbe_rx_queue *rxq;
+
+	if (mask & ETH_VLAN_STRIP_MASK) {
+		rxmode = &dev->data->dev_conf.rxmode;
+		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+			for (i = 0; i < dev->data->nb_rx_queues; i++) {
+				rxq = dev->data->rx_queues[i];
+				rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+			}
+		else
+			for (i = 0; i < dev->data->nb_rx_queues; i++) {
+				rxq = dev->data->rx_queues[i];
+				rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+			}
+	}
+}
+
+static int
+txgbe_vlan_offload_config(struct rte_eth_dev *dev, int mask)
+{
+	struct rte_eth_rxmode *rxmode;
+	rxmode = &dev->data->dev_conf.rxmode;
+
+	if (mask & ETH_VLAN_STRIP_MASK)
+		txgbe_vlan_hw_strip_config(dev);
+
+	if (mask & ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+			txgbe_vlan_hw_filter_enable(dev);
+		else
+			txgbe_vlan_hw_filter_disable(dev);
+	}
+
+	if (mask & ETH_VLAN_EXTEND_MASK) {
+		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+			txgbe_vlan_hw_extend_enable(dev);
+		else
+			txgbe_vlan_hw_extend_disable(dev);
+	}
+
+	return 0;
+}
+
+static int
+txgbe_vlan_offload_set(struct rte_eth_dev *dev, int mask)
+{
+	txgbe_config_vlan_strip_on_all_queues(dev, mask);
+
+	txgbe_vlan_offload_config(dev, mask);
+
+	return 0;
+}
 
 
 static int
@@ -691,6 +1048,7 @@ txgbe_dev_start(struct rte_eth_dev *dev)
 	bool link_up = false, negotiate = 0;
 	uint32_t speed = 0;
 	uint32_t allowed_speeds = 0;
+	int mask = 0;
 	int status;
 	uint32_t *link_speeds;
 
@@ -763,6 +1121,14 @@ txgbe_dev_start(struct rte_eth_dev *dev)
 		goto error;
 	}
 
+	mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
+		ETH_VLAN_EXTEND_MASK;
+	err = txgbe_vlan_offload_config(dev, mask);
+	if (err) {
+		PMD_INIT_LOG(ERR, "Unable to set VLAN offload");
+		goto error;
+	}
+
 	err = txgbe_dev_rxtx_start(dev);
 	if (err < 0) {
 		PMD_INIT_LOG(ERR, "Unable to start rxtx queues");
@@ -2430,6 +2796,10 @@ static const struct eth_dev_ops txgbe_eth_dev_ops = {
 	.fw_version_get             = txgbe_fw_version_get,
 	.dev_infos_get              = txgbe_dev_info_get,
 	.dev_supported_ptypes_get   = txgbe_dev_supported_ptypes_get,
+	.vlan_filter_set            = txgbe_vlan_filter_set,
+	.vlan_tpid_set              = txgbe_vlan_tpid_set,
+	.vlan_offload_set           = txgbe_vlan_offload_set,
+	.vlan_strip_queue_set       = txgbe_vlan_strip_queue_set,
 	.rx_queue_start	            = txgbe_dev_rx_queue_start,
 	.rx_queue_stop              = txgbe_dev_rx_queue_stop,
 	.tx_queue_start	            = txgbe_dev_tx_queue_start,
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index 017d708ae..5319f42b3 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -24,8 +24,16 @@
  * Defines that were not part of txgbe_type.h as they are not used by the
  * FreeBSD driver.
  */
+#define TXGBE_VFTA_SIZE 128
 #define TXGBE_VLAN_TAG_SIZE 4
 #define TXGBE_HKEY_MAX_INDEX 10
+/* Default value of max Rx queue number */
+#define TXGBE_MAX_RX_QUEUE_NUM	128
+
+#ifndef NBBY
+#define NBBY	8	/* number of bits in a byte */
+#endif
+#define TXGBE_HWSTRIP_BITMAP_SIZE (TXGBE_MAX_RX_QUEUE_NUM / (sizeof(uint32_t) * NBBY))
 
 #define TXGBE_QUEUE_ITR_INTERVAL_DEFAULT	500 /* 500us */
 
@@ -61,6 +69,14 @@ struct txgbe_stat_mappings {
 	uint32_t rqsm[TXGBE_NB_STAT_MAPPING];
 };
 
+struct txgbe_vfta {
+	uint32_t vfta[TXGBE_VFTA_SIZE];
+};
+
+struct txgbe_hwstrip {
+	uint32_t bitmap[TXGBE_HWSTRIP_BITMAP_SIZE];
+};
+
 struct txgbe_vf_info {
 	uint8_t api_version;
 	uint16_t switch_domain_id;
@@ -74,6 +90,8 @@ struct txgbe_adapter {
 	struct txgbe_hw_stats       stats;
 	struct txgbe_interrupt      intr;
 	struct txgbe_stat_mappings  stat_mappings;
+	struct txgbe_vfta           shadow_vfta;
+	struct txgbe_hwstrip        hwstrip;
 	struct txgbe_vf_info        *vfdata;
 	bool rx_bulk_alloc_allowed;
 };
@@ -102,6 +120,12 @@ int txgbe_vf_representor_uninit(struct rte_eth_dev *ethdev);
 #define TXGBE_DEV_STAT_MAPPINGS(dev) \
 	(&((struct txgbe_adapter *)(dev)->data->dev_private)->stat_mappings)
 
+#define TXGBE_DEV_VFTA(dev) \
+	(&((struct txgbe_adapter *)(dev)->data->dev_private)->shadow_vfta)
+
+#define TXGBE_DEV_HWSTRIP(dev) \
+	(&((struct txgbe_adapter *)(dev)->data->dev_private)->hwstrip)
+
 #define TXGBE_DEV_VFDATA(dev) \
 	(&((struct txgbe_adapter *)(dev)->data->dev_private)->vfdata)
 
@@ -229,4 +253,12 @@ void txgbe_dev_setup_link_alarm_handler(void *param);
 void txgbe_read_stats_registers(struct txgbe_hw *hw,
 			   struct txgbe_hw_stats *hw_stats);
 
+void txgbe_vlan_hw_filter_enable(struct rte_eth_dev *dev);
+void txgbe_vlan_hw_filter_disable(struct rte_eth_dev *dev);
+void txgbe_vlan_hw_strip_config(struct rte_eth_dev *dev);
+void txgbe_vlan_hw_strip_bitmap_set(struct rte_eth_dev *dev,
+		uint16_t queue, bool on);
+void txgbe_config_vlan_strip_on_all_queues(struct rte_eth_dev *dev,
+						  int mask);
+
 #endif /* _TXGBE_ETHDEV_H_ */
-- 
2.18.4





* [dpdk-dev] [PATCH v1 36/42] net/txgbe: add flow control support
  2020-09-01 11:50 [dpdk-dev] [PATCH v1 01/42] net/txgbe: add build and doc infrastructure Jiawen Wu
                   ` (33 preceding siblings ...)
  2020-09-01 11:51 ` [dpdk-dev] [PATCH v1 35/42] net/txgbe: add VLAN handle support Jiawen Wu
@ 2020-09-01 11:51 ` Jiawen Wu
  2020-09-01 11:51 ` [dpdk-dev] [PATCH v1 37/42] net/txgbe: add FC auto negotiation support Jiawen Wu
                   ` (6 subsequent siblings)
  41 siblings, 0 replies; 49+ messages in thread
From: Jiawen Wu @ 2020-09-01 11:51 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Add flow control support.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/base/txgbe_hw.c   | 426 ++++++++++++++++++++++++++++
 drivers/net/txgbe/base/txgbe_hw.h   |   6 +
 drivers/net/txgbe/base/txgbe_type.h |  24 ++
 drivers/net/txgbe/txgbe_ethdev.c    | 118 +++++++-
 drivers/net/txgbe/txgbe_ethdev.h    |   8 +
 5 files changed, 581 insertions(+), 1 deletion(-)

diff --git a/drivers/net/txgbe/base/txgbe_hw.c b/drivers/net/txgbe/base/txgbe_hw.c
index 80ecec34d..34e7c3d1e 100644
--- a/drivers/net/txgbe/base/txgbe_hw.c
+++ b/drivers/net/txgbe/base/txgbe_hw.c
@@ -22,6 +22,212 @@ STATIC s32 txgbe_mta_vector(struct txgbe_hw *hw, u8 *mc_addr);
 STATIC s32 txgbe_get_san_mac_addr_offset(struct txgbe_hw *hw,
 					 u16 *san_mac_offset);
 
+/**
+ * txgbe_device_supports_autoneg_fc - Check if device supports autonegotiation
+ * of flow control
+ * @hw: pointer to hardware structure
+ *
+ * This function returns true if the device supports flow control
+ * autonegotiation, and false if it does not.
+ *
+ **/
+bool txgbe_device_supports_autoneg_fc(struct txgbe_hw *hw)
+{
+	bool supported = false;
+	u32 speed;
+	bool link_up;
+
+	DEBUGFUNC("txgbe_device_supports_autoneg_fc");
+
+	switch (hw->phy.media_type) {
+	case txgbe_media_type_fiber_qsfp:
+	case txgbe_media_type_fiber:
+		hw->mac.check_link(hw, &speed, &link_up, false);
+		/* if link is down, assume supported */
+		if (link_up)
+			supported = (speed == TXGBE_LINK_SPEED_1GB_FULL);
+		else
+			supported = true;
+
+		break;
+	case txgbe_media_type_backplane:
+		supported = true;
+		break;
+	case txgbe_media_type_copper:
+		/* only some copper devices support flow control autoneg */
+		switch (hw->device_id) {
+		case TXGBE_DEV_ID_RAPTOR_XAUI:
+		case TXGBE_DEV_ID_RAPTOR_SGMII:
+			supported = true;
+			break;
+		default:
+			supported = false;
+		}
+		break;
+	default:
+		break;
+	}
+
+	if (!supported)
+		DEBUGOUT("Device %x does not support flow control autoneg",
+			      hw->device_id);
+	return supported;
+}
+
+/**
+ *  txgbe_setup_fc - Set up flow control
+ *  @hw: pointer to hardware structure
+ *
+ *  Called at init time to set up flow control.
+ **/
+s32 txgbe_setup_fc(struct txgbe_hw *hw)
+{
+	s32 err = 0;
+	u32 reg = 0;
+	u16 reg_cu = 0;
+	u32 value = 0;
+	u64 reg_bp = 0;
+	bool locked = false;
+
+	DEBUGFUNC("txgbe_setup_fc");
+
+	/* Validate the requested mode */
+	if (hw->fc.strict_ieee && hw->fc.requested_mode == txgbe_fc_rx_pause) {
+		DEBUGOUT("txgbe_fc_rx_pause not valid in strict IEEE mode\n");
+		err = TXGBE_ERR_INVALID_LINK_SETTINGS;
+		goto out;
+	}
+
+	/*
+	 * 10gig parts do not have a word in the EEPROM to determine the
+	 * default flow control setting, so we explicitly set it to full.
+	 */
+	if (hw->fc.requested_mode == txgbe_fc_default)
+		hw->fc.requested_mode = txgbe_fc_full;
+
+	/*
+	 * Set up the 1G and 10G flow control advertisement registers so the
+	 * HW will be able to do fc autoneg once the cable is plugged in.  If
+	 * we link at 10G, the 1G advertisement is harmless and vice versa.
+	 */
+	switch (hw->phy.media_type) {
+	case txgbe_media_type_backplane:
+		/* some MAC's need RMW protection on AUTOC */
+		err = hw->mac.prot_autoc_read(hw, &locked, &reg_bp);
+		if (err != 0)
+			goto out;
+
+		/* fall through - only backplane uses autoc */
+	case txgbe_media_type_fiber_qsfp:
+	case txgbe_media_type_fiber:
+	case txgbe_media_type_copper:
+		hw->phy.read_reg(hw, TXGBE_MD_AUTO_NEG_ADVT,
+				     TXGBE_MD_DEV_AUTO_NEG, &reg_cu);
+		break;
+	default:
+		break;
+	}
+
+	/*
+	 * The possible values of fc.requested_mode are:
+	 * 0: Flow control is completely disabled
+	 * 1: Rx flow control is enabled (we can receive pause frames,
+	 *    but not send pause frames).
+	 * 2: Tx flow control is enabled (we can send pause frames but
+	 *    we do not support receiving pause frames).
+	 * 3: Both Rx and Tx flow control (symmetric) are enabled.
+	 * other: Invalid.
+	 */
+	switch (hw->fc.requested_mode) {
+	case txgbe_fc_none:
+		/* Flow control completely disabled by software override. */
+		reg &= ~(SR_MII_MMD_AN_ADV_PAUSE_SYM |
+			SR_MII_MMD_AN_ADV_PAUSE_ASM);
+		if (hw->phy.media_type == txgbe_media_type_backplane)
+			reg_bp &= ~(TXGBE_AUTOC_SYM_PAUSE |
+				    TXGBE_AUTOC_ASM_PAUSE);
+		else if (hw->phy.media_type == txgbe_media_type_copper)
+			reg_cu &= ~(TXGBE_TAF_SYM_PAUSE | TXGBE_TAF_ASM_PAUSE);
+		break;
+	case txgbe_fc_tx_pause:
+		/*
+		 * Tx Flow control is enabled, and Rx Flow control is
+		 * disabled by software override.
+		 */
+		reg |= SR_MII_MMD_AN_ADV_PAUSE_ASM;
+		reg &= ~SR_MII_MMD_AN_ADV_PAUSE_SYM;
+		if (hw->phy.media_type == txgbe_media_type_backplane) {
+			reg_bp |= TXGBE_AUTOC_ASM_PAUSE;
+			reg_bp &= ~TXGBE_AUTOC_SYM_PAUSE;
+		} else if (hw->phy.media_type == txgbe_media_type_copper) {
+			reg_cu |= TXGBE_TAF_ASM_PAUSE;
+			reg_cu &= ~TXGBE_TAF_SYM_PAUSE;
+		}
+		reg_bp |= SR_AN_MMD_ADV_REG1_PAUSE_ASM;
+		break;
+	case txgbe_fc_rx_pause:
+		/*
+		 * Rx Flow control is enabled and Tx Flow control is
+		 * disabled by software override. Since there really
+		 * isn't a way to advertise that we are capable of RX
+		 * Pause ONLY, we will advertise that we support both
+		 * symmetric and asymmetric Rx PAUSE, as such we fall
+		 * through to the fc_full statement.  Later, we will
+		 * disable the adapter's ability to send PAUSE frames.
+		 */
+	case txgbe_fc_full:
+		/* Flow control (both Rx and Tx) is enabled by SW override. */
+		reg |= SR_MII_MMD_AN_ADV_PAUSE_SYM |
+			SR_MII_MMD_AN_ADV_PAUSE_ASM;
+		if (hw->phy.media_type == txgbe_media_type_backplane)
+			reg_bp |= TXGBE_AUTOC_SYM_PAUSE |
+				  TXGBE_AUTOC_ASM_PAUSE;
+		else if (hw->phy.media_type == txgbe_media_type_copper)
+			reg_cu |= TXGBE_TAF_SYM_PAUSE | TXGBE_TAF_ASM_PAUSE;
+		reg_bp |= SR_AN_MMD_ADV_REG1_PAUSE_SYM |
+			SR_AN_MMD_ADV_REG1_PAUSE_ASM;
+		break;
+	default:
+		DEBUGOUT("Flow control param set incorrectly\n");
+		err = TXGBE_ERR_CONFIG;
+		goto out;
+	}
+
+	/*
+	 * Enable auto-negotiation between the MAC & PHY;
+	 * the MAC will advertise clause 37 flow control.
+	 */
+	value = rd32_epcs(hw, SR_MII_MMD_AN_ADV);
+	value = (value & ~(SR_MII_MMD_AN_ADV_PAUSE_ASM |
+		SR_MII_MMD_AN_ADV_PAUSE_SYM)) | reg;
+	wr32_epcs(hw, SR_MII_MMD_AN_ADV, value);
+
+	/*
+	 * AUTOC restart handles negotiation of 1G and 10G on backplane
+	 * and copper. There is no need to set the PCS1GCTL register.
+	 *
+	 */
+	if (hw->phy.media_type == txgbe_media_type_backplane) {
+		value = rd32_epcs(hw, SR_AN_MMD_ADV_REG1);
+		value = (value & ~(SR_AN_MMD_ADV_REG1_PAUSE_ASM |
+			SR_AN_MMD_ADV_REG1_PAUSE_SYM)) |
+			reg_bp;
+		wr32_epcs(hw, SR_AN_MMD_ADV_REG1, value);
+	} else if ((hw->phy.media_type == txgbe_media_type_copper) &&
+		    (txgbe_device_supports_autoneg_fc(hw))) {
+		hw->phy.write_reg(hw, TXGBE_MD_AUTO_NEG_ADVT,
+				      TXGBE_MD_DEV_AUTO_NEG, reg_cu);
+	}
+
+	DEBUGOUT("Set up FC; reg = 0x%08X\n", reg);
+out:
+	return err;
+}
+
 /**
  *  txgbe_start_hw - Prepare hardware for Tx/Rx
  *  @hw: pointer to hardware structure
@@ -33,6 +239,7 @@ STATIC s32 txgbe_get_san_mac_addr_offset(struct txgbe_hw *hw,
  **/
 s32 txgbe_start_hw(struct txgbe_hw *hw)
 {
+	s32 err;
 	u16 device_caps;
 
 	DEBUGFUNC("txgbe_start_hw");
@@ -43,6 +250,13 @@ s32 txgbe_start_hw(struct txgbe_hw *hw)
 	/* Clear statistics registers */
 	hw->mac.clear_hw_cntrs(hw);
 
+	/* Setup flow control */
+	err = txgbe_setup_fc(hw);
+	if (err != 0 && err != TXGBE_NOT_IMPLEMENTED) {
+		DEBUGOUT("Flow control setup failed, returning %d\n", err);
+		return err;
+	}
+
 	/* Cache bit indicating need for crosstalk fix */
 	switch (hw->mac.type) {
 	case txgbe_mac_raptor:
@@ -717,6 +931,136 @@ s32 txgbe_update_mc_addr_list(struct txgbe_hw *hw, u8 *mc_addr_list,
 	return 0;
 }
 
+/**
+ *  txgbe_fc_enable - Enable flow control
+ *  @hw: pointer to hardware structure
+ *
+ *  Enable flow control according to the current settings.
+ **/
+s32 txgbe_fc_enable(struct txgbe_hw *hw)
+{
+	s32 err = 0;
+	u32 mflcn_reg, fccfg_reg;
+	u32 pause_time;
+	u32 fcrtl, fcrth;
+	int i;
+
+	DEBUGFUNC("txgbe_fc_enable");
+
+	/* Validate the water mark configuration */
+	if (!hw->fc.pause_time) {
+		err = TXGBE_ERR_INVALID_LINK_SETTINGS;
+		goto out;
+	}
+
+	/* Low water mark of zero causes XOFF floods */
+	for (i = 0; i < TXGBE_DCB_TC_MAX; i++) {
+		if ((hw->fc.current_mode & txgbe_fc_tx_pause) &&
+		    hw->fc.high_water[i]) {
+			if (!hw->fc.low_water[i] ||
+			    hw->fc.low_water[i] >= hw->fc.high_water[i]) {
+				DEBUGOUT("Invalid water mark configuration\n");
+				err = TXGBE_ERR_INVALID_LINK_SETTINGS;
+				goto out;
+			}
+		}
+	}
+
+	/* Negotiate the fc mode to use */
+	hw->mac.fc_autoneg(hw);
+
+	/* Disable any previous flow control settings */
+	mflcn_reg = rd32(hw, TXGBE_RXFCCFG);
+	mflcn_reg &= ~(TXGBE_RXFCCFG_FC | TXGBE_RXFCCFG_PFC);
+
+	fccfg_reg = rd32(hw, TXGBE_TXFCCFG);
+	fccfg_reg &= ~(TXGBE_TXFCCFG_FC | TXGBE_TXFCCFG_PFC);
+
+	/*
+	 * The possible values of fc.current_mode are:
+	 * 0: Flow control is completely disabled
+	 * 1: Rx flow control is enabled (we can receive pause frames,
+	 *    but not send pause frames).
+	 * 2: Tx flow control is enabled (we can send pause frames but
+	 *    we do not support receiving pause frames).
+	 * 3: Both Rx and Tx flow control (symmetric) are enabled.
+	 * other: Invalid.
+	 */
+	switch (hw->fc.current_mode) {
+	case txgbe_fc_none:
+		/*
+		 * Flow control is disabled by software override or autoneg.
+		 * The code below will actually disable it in the HW.
+		 */
+		break;
+	case txgbe_fc_rx_pause:
+		/*
+		 * Rx Flow control is enabled and Tx Flow control is
+		 * disabled by software override. Since there really
+		 * isn't a way to advertise that we are capable of RX
+		 * Pause ONLY, we will advertise that we support both
+		 * symmetric and asymmetric Rx PAUSE.  Later, we will
+		 * disable the adapter's ability to send PAUSE frames.
+		 */
+		mflcn_reg |= TXGBE_RXFCCFG_FC;
+		break;
+	case txgbe_fc_tx_pause:
+		/*
+		 * Tx Flow control is enabled, and Rx Flow control is
+		 * disabled by software override.
+		 */
+		fccfg_reg |= TXGBE_TXFCCFG_FC;
+		break;
+	case txgbe_fc_full:
+		/* Flow control (both Rx and Tx) is enabled by SW override. */
+		mflcn_reg |= TXGBE_RXFCCFG_FC;
+		fccfg_reg |= TXGBE_TXFCCFG_FC;
+		break;
+	default:
+		DEBUGOUT("Flow control param set incorrectly\n");
+		err = TXGBE_ERR_CONFIG;
+		goto out;
+	}
+
+	/* Set 802.3x based flow control settings. */
+	wr32(hw, TXGBE_RXFCCFG, mflcn_reg);
+	wr32(hw, TXGBE_TXFCCFG, fccfg_reg);
+
+	/* Set up and enable Rx high/low water mark thresholds, enable XON. */
+	for (i = 0; i < TXGBE_DCB_TC_MAX; i++) {
+		if ((hw->fc.current_mode & txgbe_fc_tx_pause) &&
+		    hw->fc.high_water[i]) {
+			fcrtl = TXGBE_FCWTRLO_TH(hw->fc.low_water[i]) |
+				TXGBE_FCWTRLO_XON;
+			fcrth = TXGBE_FCWTRHI_TH(hw->fc.high_water[i]) |
+				TXGBE_FCWTRHI_XOFF;
+		} else {
+			/*
+			 * In order to prevent Tx hangs when the internal Tx
+			 * switch is enabled we must set the high water mark
+			 * to the Rx packet buffer size - 24KB.  This allows
+			 * the Tx switch to function even under heavy Rx
+			 * workloads.
+			 */
+			fcrtl = 0;
+			fcrth = rd32(hw, TXGBE_PBRXSIZE(i)) - 24576;
+		}
+		wr32(hw, TXGBE_FCWTRLO(i), fcrtl);
+		wr32(hw, TXGBE_FCWTRHI(i), fcrth);
+	}
+
+	/* Configure pause time (2 TCs per register) */
+	pause_time = TXGBE_RXFCFSH_TIME(hw->fc.pause_time);
+	for (i = 0; i < (TXGBE_DCB_TC_MAX / 2); i++)
+		wr32(hw, TXGBE_FCXOFFTM(i), pause_time * 0x00010001);
+
+	/* Configure flow control refresh threshold value */
+	wr32(hw, TXGBE_RXFCRFSH, hw->fc.pause_time / 2);
+
+out:
+	return err;
+}
+
 /**
  *  txgbe_acquire_swfw_sync - Acquire SWFW semaphore
  *  @hw: pointer to hardware structure
@@ -1652,6 +1996,82 @@ s32 txgbe_setup_sfp_modules(struct txgbe_hw *hw)
 	return err;
 }
 
+/**
+ *  txgbe_prot_autoc_read_raptor - Hides MAC differences needed for AUTOC read
+ *  @hw: pointer to hardware structure
+ *  @locked: Return whether we locked for this read.
+ *  @value: Value we read from AUTOC
+ *
+ *  For this part we need to wrap read-modify-writes with a possible
+ *  FW/SW lock.  It is assumed this lock will be freed with the next
+ *  prot_autoc_write_raptor().
+ */
+s32 txgbe_prot_autoc_read_raptor(struct txgbe_hw *hw, bool *locked, u64 *value)
+{
+	s32 err;
+	bool lock_state = false;
+
+	/* If LESM is on then we need to hold the SW/FW semaphore. */
+	if (txgbe_verify_lesm_fw_enabled_raptor(hw)) {
+		err = hw->mac.acquire_swfw_sync(hw,
+					TXGBE_MNGSEM_SWPHY);
+		if (err != 0)
+			return TXGBE_ERR_SWFW_SYNC;
+
+		lock_state = true;
+	}
+
+	if (locked)
+		*locked = lock_state;
+
+	*value = txgbe_autoc_read(hw);
+	return 0;
+}
+
+/**
+ * txgbe_prot_autoc_write_raptor - Hides MAC differences needed for AUTOC write
+ * @hw: pointer to hardware structure
+ * @autoc: value to write to AUTOC
+ * @locked: bool to indicate whether the SW/FW lock was already taken by
+ *           previous prot_autoc_read_raptor.
+ *
+ * This part may need to hold the SW/FW lock around all writes to
+ * AUTOC. Likewise after a write we need to do a pipeline reset.
+ */
+s32 txgbe_prot_autoc_write_raptor(struct txgbe_hw *hw, bool locked, u64 autoc)
+{
+	int err = 0;
+
+	/* Blocked by MNG FW so bail */
+	if (txgbe_check_reset_blocked(hw))
+		goto out;
+
+	/* We only need to get the lock if:
+	 *  - We didn't do it already (in the read part of a read-modify-write)
+	 *  - LESM is enabled.
+	 */
+	if (!locked && txgbe_verify_lesm_fw_enabled_raptor(hw)) {
+		err = hw->mac.acquire_swfw_sync(hw,
+					TXGBE_MNGSEM_SWPHY);
+		if (err != 0)
+			return TXGBE_ERR_SWFW_SYNC;
+
+		locked = true;
+	}
+
+	txgbe_autoc_write(hw, autoc);
+	err = txgbe_reset_pipeline_raptor(hw);
+
+out:
+	/* Free the SW/FW semaphore as we either grabbed it here or
+	 * already had it when this function was called.
+	 */
+	if (locked)
+		hw->mac.release_swfw_sync(hw, TXGBE_MNGSEM_SWPHY);
+
+	return err;
+}
+
 /**
  *  txgbe_init_ops_pf - Inits func ptrs and MAC type
  *  @hw: pointer to hardware structure
@@ -1712,6 +2132,8 @@ s32 txgbe_init_ops_pf(struct txgbe_hw *hw)
 	mac->get_wwn_prefix = txgbe_get_wwn_prefix;
 	mac->autoc_read = txgbe_autoc_read;
 	mac->autoc_write = txgbe_autoc_write;
+	mac->prot_autoc_read = txgbe_prot_autoc_read_raptor;
+	mac->prot_autoc_write = txgbe_prot_autoc_write_raptor;
 
 	mac->set_rar = txgbe_set_rar;
 	mac->clear_rar = txgbe_clear_rar;
@@ -1720,6 +2142,10 @@ s32 txgbe_init_ops_pf(struct txgbe_hw *hw)
 	mac->disable_rx = txgbe_disable_rx;
 	mac->init_uta_tables = txgbe_init_uta_tables;
 	mac->setup_sfp = txgbe_setup_sfp_modules;
+
+	/* Flow Control */
+	mac->fc_enable = txgbe_fc_enable;
+	mac->setup_fc = txgbe_setup_fc;
 	/* Link */
 	mac->get_link_capabilities = txgbe_get_link_capabilities_raptor;
 	mac->check_link = txgbe_check_mac_link;
diff --git a/drivers/net/txgbe/base/txgbe_hw.h b/drivers/net/txgbe/base/txgbe_hw.h
index a5ee3ec0a..1604d1fca 100644
--- a/drivers/net/txgbe/base/txgbe_hw.h
+++ b/drivers/net/txgbe/base/txgbe_hw.h
@@ -32,6 +32,10 @@ s32 txgbe_enable_sec_rx_path(struct txgbe_hw *hw);
 s32 txgbe_disable_sec_tx_path(struct txgbe_hw *hw);
 s32 txgbe_enable_sec_tx_path(struct txgbe_hw *hw);
 
+s32 txgbe_fc_enable(struct txgbe_hw *hw);
+bool txgbe_device_supports_autoneg_fc(struct txgbe_hw *hw);
+s32 txgbe_setup_fc(struct txgbe_hw *hw);
+
 s32 txgbe_validate_mac_addr(u8 *mac_addr);
 s32 txgbe_acquire_swfw_sync(struct txgbe_hw *hw, u32 mask);
 void txgbe_release_swfw_sync(struct txgbe_hw *hw, u32 mask);
@@ -82,5 +86,7 @@ s32 txgbe_reset_hw(struct txgbe_hw *hw);
 s32 txgbe_start_hw_raptor(struct txgbe_hw *hw);
 s32 txgbe_init_phy_raptor(struct txgbe_hw *hw);
 s32 txgbe_enable_rx_dma_raptor(struct txgbe_hw *hw, u32 regval);
+s32 txgbe_prot_autoc_read_raptor(struct txgbe_hw *hw, bool *locked, u64 *value);
+s32 txgbe_prot_autoc_write_raptor(struct txgbe_hw *hw, bool locked, u64 value);
 bool txgbe_verify_lesm_fw_enabled_raptor(struct txgbe_hw *hw);
 #endif /* _TXGBE_HW_H_ */
diff --git a/drivers/net/txgbe/base/txgbe_type.h b/drivers/net/txgbe/base/txgbe_type.h
index 86fb6e259..4a30a99db 100644
--- a/drivers/net/txgbe/base/txgbe_type.h
+++ b/drivers/net/txgbe/base/txgbe_type.h
@@ -5,6 +5,7 @@
 #ifndef _TXGBE_TYPE_H_
 #define _TXGBE_TYPE_H_
 
+#define TXGBE_DCB_TC_MAX	TXGBE_MAX_UP
 #define TXGBE_LINK_UP_TIME	90 /* 9.0 Seconds */
 #define TXGBE_AUTO_NEG_TIME	45 /* 4.5 Seconds */
 
@@ -128,6 +129,14 @@ enum txgbe_media_type {
 	txgbe_media_type_virtual
 };
 
+/* Flow Control Settings */
+enum txgbe_fc_mode {
+	txgbe_fc_none = 0,
+	txgbe_fc_rx_pause,
+	txgbe_fc_tx_pause,
+	txgbe_fc_full,
+	txgbe_fc_default
+};
 
 /* Smart Speed Settings */
 #define TXGBE_SMARTSPEED_MAX_RETRIES	3
@@ -196,6 +205,20 @@ struct txgbe_bus_info {
 	u8 lan_id;
 	u16 instance_id;
 };
+
+/* Flow control parameters */
+struct txgbe_fc_info {
+	u32 high_water[TXGBE_DCB_TC_MAX]; /* Flow Ctrl High-water */
+	u32 low_water[TXGBE_DCB_TC_MAX]; /* Flow Ctrl Low-water */
+	u16 pause_time; /* Flow Control Pause timer */
+	bool send_xon; /* Flow control send XON */
+	bool strict_ieee; /* Strict IEEE mode */
+	bool disable_fc_autoneg; /* Do not autonegotiate FC */
+	bool fc_was_autonegged; /* Is current_mode the result of autonegging? */
+	enum txgbe_fc_mode current_mode; /* FC mode in effect */
+	enum txgbe_fc_mode requested_mode; /* FC mode requested by caller */
+};
+
 /* Statistics counters collected by the MAC */
 /* PB[] RxTx */
 struct txgbe_pb_stats {
@@ -597,6 +620,7 @@ struct txgbe_hw {
 	void *back;
 	struct txgbe_mac_info mac;
 	struct txgbe_addr_filter_info addr_ctrl;
+	struct txgbe_fc_info fc;
 	struct txgbe_phy_info phy;
 	struct txgbe_link_info link;
 	struct txgbe_rom_info rom;
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index 5e0b800ef..cab89f5f8 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -367,7 +367,7 @@ eth_txgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
 	const struct rte_memzone *mz;
 	uint32_t ctrl_ext;
 	uint16_t csum;
-	int err;
+	int err, i;
 
 	PMD_INIT_FUNC_TRACE();
 
@@ -427,6 +427,16 @@ eth_txgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
 	/* Unlock any pending hardware semaphore */
 	txgbe_swfw_lock_reset(hw);
 
+	/* Get Hardware Flow Control setting */
+	hw->fc.requested_mode = txgbe_fc_full;
+	hw->fc.current_mode = txgbe_fc_full;
+	hw->fc.pause_time = TXGBE_FC_PAUSE_TIME;
+	for (i = 0; i < TXGBE_DCB_TC_MAX; i++) {
+		hw->fc.low_water[i] = TXGBE_FC_XON_LOTH;
+		hw->fc.high_water[i] = TXGBE_FC_XOFF_HITH;
+	}
+	hw->fc.send_xon = 1;
+
 	err = hw->rom.init_params(hw);
 	if (err != 0) {
 		PMD_INIT_LOG(ERR, "The EEPROM init failed: %d", err);
@@ -2438,6 +2448,110 @@ txgbe_dev_led_off(struct rte_eth_dev *dev)
 	return txgbe_led_off(hw, 4) == 0 ? 0 : -ENOTSUP;
 }
 
+static int
+txgbe_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
+{
+	struct txgbe_hw *hw;
+	uint32_t mflcn_reg;
+	uint32_t fccfg_reg;
+	int rx_pause;
+	int tx_pause;
+
+	hw = TXGBE_DEV_HW(dev);
+
+	fc_conf->pause_time = hw->fc.pause_time;
+	fc_conf->high_water = hw->fc.high_water[0];
+	fc_conf->low_water = hw->fc.low_water[0];
+	fc_conf->send_xon = hw->fc.send_xon;
+	fc_conf->autoneg = !hw->fc.disable_fc_autoneg;
+
+	/*
+	 * Return rx_pause status according to actual setting of
+	 * RXFCCFG register.
+	 */
+	mflcn_reg = rd32(hw, TXGBE_RXFCCFG);
+	if (mflcn_reg & (TXGBE_RXFCCFG_FC | TXGBE_RXFCCFG_PFC))
+		rx_pause = 1;
+	else
+		rx_pause = 0;
+
+	/*
+	 * Return tx_pause status according to actual setting of
+	 * TXFCCFG register.
+	 */
+	fccfg_reg = rd32(hw, TXGBE_TXFCCFG);
+	if (fccfg_reg & (TXGBE_TXFCCFG_FC | TXGBE_TXFCCFG_PFC))
+		tx_pause = 1;
+	else
+		tx_pause = 0;
+
+	if (rx_pause && tx_pause)
+		fc_conf->mode = RTE_FC_FULL;
+	else if (rx_pause)
+		fc_conf->mode = RTE_FC_RX_PAUSE;
+	else if (tx_pause)
+		fc_conf->mode = RTE_FC_TX_PAUSE;
+	else
+		fc_conf->mode = RTE_FC_NONE;
+
+	return 0;
+}
+
+static int
+txgbe_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
+{
+	struct txgbe_hw *hw;
+	int err;
+	uint32_t rx_buf_size;
+	uint32_t max_high_water;
+	enum txgbe_fc_mode rte_fcmode_2_txgbe_fcmode[] = {
+		txgbe_fc_none,
+		txgbe_fc_rx_pause,
+		txgbe_fc_tx_pause,
+		txgbe_fc_full
+	};
+
+	PMD_INIT_FUNC_TRACE();
+
+	hw = TXGBE_DEV_HW(dev);
+	rx_buf_size = rd32(hw, TXGBE_PBRXSIZE(0));
+	PMD_INIT_LOG(DEBUG, "Rx packet buffer size = 0x%x", rx_buf_size);
+
+	/*
+	 * Reserve at least one Ethernet frame for the watermark:
+	 * high_water/low_water are in kilobytes for txgbe.
+	 */
+	max_high_water = (rx_buf_size - RTE_ETHER_MAX_LEN) >> 10;
+	if ((fc_conf->high_water > max_high_water) ||
+	    (fc_conf->high_water < fc_conf->low_water)) {
+		PMD_INIT_LOG(ERR, "Invalid high/low water setup value in KB");
+		PMD_INIT_LOG(ERR, "High_water must be <= 0x%x", max_high_water);
+		return -EINVAL;
+	}
+
+	hw->fc.requested_mode = rte_fcmode_2_txgbe_fcmode[fc_conf->mode];
+	hw->fc.pause_time     = fc_conf->pause_time;
+	hw->fc.high_water[0]  = fc_conf->high_water;
+	hw->fc.low_water[0]   = fc_conf->low_water;
+	hw->fc.send_xon       = fc_conf->send_xon;
+	hw->fc.disable_fc_autoneg = !fc_conf->autoneg;
+
+	err = txgbe_fc_enable(hw);
+
+	/* Not negotiated is not an error case */
+	if ((err == 0) || (err == TXGBE_ERR_FC_NOT_NEGOTIATED)) {
+		wr32m(hw, TXGBE_MACRXFLT, TXGBE_MACRXFLT_CTL_MASK,
+		      (fc_conf->mac_ctrl_frame_fwd
+		       ? TXGBE_MACRXFLT_CTL_NOPS : TXGBE_MACRXFLT_CTL_DROP));
+		txgbe_flush(hw);
+
+		return 0;
+	}
+
+	PMD_INIT_LOG(ERR, "txgbe_fc_enable = 0x%x", err);
+	return -EIO;
+}
+
 static int
 txgbe_add_rar(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr,
 				uint32_t index, uint32_t pool)
@@ -2816,6 +2930,8 @@ static const struct eth_dev_ops txgbe_eth_dev_ops = {
 	.tx_queue_release           = txgbe_dev_tx_queue_release,
 	.dev_led_on                 = txgbe_dev_led_on,
 	.dev_led_off                = txgbe_dev_led_off,
+	.flow_ctrl_get              = txgbe_flow_ctrl_get,
+	.flow_ctrl_set              = txgbe_flow_ctrl_set,
 	.mac_addr_add               = txgbe_add_rar,
 	.mac_addr_remove            = txgbe_remove_rar,
 	.mac_addr_set               = txgbe_set_default_mac_addr,
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index 5319f42b3..667b11127 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -222,6 +222,14 @@ void txgbe_pf_mbx_process(struct rte_eth_dev *eth_dev);
 
 int txgbe_set_queue_rate_limit(struct rte_eth_dev *dev, uint16_t queue_idx,
 			       uint16_t tx_rate);
+/* High threshold controlling when to start sending XOFF frames. */
+#define TXGBE_FC_XOFF_HITH              128 /* KB */
+/* Low threshold controlling when to start sending XON frames. */
+#define TXGBE_FC_XON_LOTH               64 /* KB */
+
+/* Timer value included in XOFF frames. */
+#define TXGBE_FC_PAUSE_TIME 0x680
+
 #define TXGBE_LINK_DOWN_CHECK_TIMEOUT 4000 /* ms */
 #define TXGBE_LINK_UP_CHECK_TIMEOUT   1000 /* ms */
 #define TXGBE_VMDQ_NUM_UC_MAC         4096 /* Maximum nb. of UC MAC addr. */
-- 
2.18.4





* [dpdk-dev] [PATCH v1 37/42] net/txgbe: add FC auto negotiation support
  2020-09-01 11:50 [dpdk-dev] [PATCH v1 01/42] net/txgbe: add build and doc infrastructure Jiawen Wu
                   ` (34 preceding siblings ...)
  2020-09-01 11:51 ` [dpdk-dev] [PATCH v1 36/42] net/txgbe: add flow control support Jiawen Wu
@ 2020-09-01 11:51 ` Jiawen Wu
  2020-09-01 11:51 ` [dpdk-dev] [PATCH v1 38/42] net/txgbe: add DCB packet buffer allocation Jiawen Wu
                   ` (5 subsequent siblings)
  41 siblings, 0 replies; 49+ messages in thread
From: Jiawen Wu @ 2020-09-01 11:51 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Add flow control negotiation with the link partner.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/base/txgbe_hw.c | 201 ++++++++++++++++++++++++++++++
 drivers/net/txgbe/base/txgbe_hw.h |   4 +-
 2 files changed, 204 insertions(+), 1 deletion(-)

diff --git a/drivers/net/txgbe/base/txgbe_hw.c b/drivers/net/txgbe/base/txgbe_hw.c
index 34e7c3d1e..164d3b5b8 100644
--- a/drivers/net/txgbe/base/txgbe_hw.c
+++ b/drivers/net/txgbe/base/txgbe_hw.c
@@ -1061,6 +1061,205 @@ s32 txgbe_fc_enable(struct txgbe_hw *hw)
 	return err;
 }
 
+/**
+ *  txgbe_negotiate_fc - Negotiate flow control
+ *  @hw: pointer to hardware structure
+ *  @adv_reg: flow control advertised settings
+ *  @lp_reg: link partner's flow control settings
+ *  @adv_sym: symmetric pause bit in advertisement
+ *  @adv_asm: asymmetric pause bit in advertisement
+ *  @lp_sym: symmetric pause bit in link partner advertisement
+ *  @lp_asm: asymmetric pause bit in link partner advertisement
+ *
+ *  Find the intersection between advertised settings and link partner's
+ *  advertised settings
+ **/
+s32 txgbe_negotiate_fc(struct txgbe_hw *hw, u32 adv_reg, u32 lp_reg,
+		       u32 adv_sym, u32 adv_asm, u32 lp_sym, u32 lp_asm)
+{
+	if ((!(adv_reg)) ||  (!(lp_reg))) {
+		DEBUGOUT("Local or link partner's advertised flow control "
+			      "settings are NULL. Local: %x, link partner: %x\n",
+			      adv_reg, lp_reg);
+		return TXGBE_ERR_FC_NOT_NEGOTIATED;
+	}
+
+	if ((adv_reg & adv_sym) && (lp_reg & lp_sym)) {
+		/*
+		 * Now we need to check if the user selected RX ONLY
+		 * pause frames.  In this case, we had to advertise
+		 * FULL flow control because we could not advertise RX
+		 * ONLY. Hence, we must now check to see if we need to
+		 * turn OFF the TRANSMISSION of PAUSE frames.
+		 */
+		if (hw->fc.requested_mode == txgbe_fc_full) {
+			hw->fc.current_mode = txgbe_fc_full;
+			DEBUGOUT("Flow Control = FULL.\n");
+		} else {
+			hw->fc.current_mode = txgbe_fc_rx_pause;
+			DEBUGOUT("Flow Control=RX PAUSE frames only\n");
+		}
+	} else if (!(adv_reg & adv_sym) && (adv_reg & adv_asm) &&
+		   (lp_reg & lp_sym) && (lp_reg & lp_asm)) {
+		hw->fc.current_mode = txgbe_fc_tx_pause;
+		DEBUGOUT("Flow Control = TX PAUSE frames only.\n");
+	} else if ((adv_reg & adv_sym) && (adv_reg & adv_asm) &&
+		   !(lp_reg & lp_sym) && (lp_reg & lp_asm)) {
+		hw->fc.current_mode = txgbe_fc_rx_pause;
+		DEBUGOUT("Flow Control = RX PAUSE frames only.\n");
+	} else {
+		hw->fc.current_mode = txgbe_fc_none;
+		DEBUGOUT("Flow Control = NONE.\n");
+	}
+	return 0;
+}
+
+/**
+ *  txgbe_fc_autoneg_fiber - Enable flow control on 1 gig fiber
+ *  @hw: pointer to hardware structure
+ *
+ *  Enable flow control according to IEEE clause 37 on 1 gig fiber.
+ **/
+STATIC s32 txgbe_fc_autoneg_fiber(struct txgbe_hw *hw)
+{
+	u32 pcs_anadv_reg, pcs_lpab_reg;
+	s32 err = TXGBE_ERR_FC_NOT_NEGOTIATED;
+
+	/*
+	 * On multispeed fiber at 1g, bail out if
+	 * - link is up but AN did not complete, or if
+	 * - link is up and AN completed but timed out
+	 */
+
+	pcs_anadv_reg = rd32_epcs(hw, SR_MII_MMD_AN_ADV);
+	pcs_lpab_reg = rd32_epcs(hw, SR_MII_MMD_LP_BABL);
+
+	err =  txgbe_negotiate_fc(hw, pcs_anadv_reg,
+				      pcs_lpab_reg,
+				      SR_MII_MMD_AN_ADV_PAUSE_SYM,
+				      SR_MII_MMD_AN_ADV_PAUSE_ASM,
+				      SR_MII_MMD_AN_ADV_PAUSE_SYM,
+				      SR_MII_MMD_AN_ADV_PAUSE_ASM);
+
+	return err;
+}
+
+/**
+ *  txgbe_fc_autoneg_backplane - Enable flow control IEEE clause 37
+ *  @hw: pointer to hardware structure
+ *
+ *  Enable flow control according to IEEE clause 37.
+ **/
+STATIC s32 txgbe_fc_autoneg_backplane(struct txgbe_hw *hw)
+{
+	u32 anlp1_reg, autoc_reg;
+	s32 err = TXGBE_ERR_FC_NOT_NEGOTIATED;
+
+	/*
+	 * Read the 10g AN autoc and LP ability registers and resolve
+	 * local flow control settings accordingly
+	 */
+	autoc_reg = rd32_epcs(hw, SR_AN_MMD_ADV_REG1);
+	anlp1_reg = rd32_epcs(hw, SR_AN_MMD_LP_ABL1);
+
+	err = txgbe_negotiate_fc(hw, autoc_reg,
+		anlp1_reg,
+		SR_AN_MMD_ADV_REG1_PAUSE_SYM,
+		SR_AN_MMD_ADV_REG1_PAUSE_ASM,
+		SR_AN_MMD_ADV_REG1_PAUSE_SYM,
+		SR_AN_MMD_ADV_REG1_PAUSE_ASM);
+
+	return err;
+}
+
+/**
+ *  txgbe_fc_autoneg_copper - Enable flow control IEEE clause 37
+ *  @hw: pointer to hardware structure
+ *
+ *  Enable flow control according to IEEE clause 37.
+ **/
+STATIC s32 txgbe_fc_autoneg_copper(struct txgbe_hw *hw)
+{
+	u16 technology_ability_reg = 0;
+	u16 lp_technology_ability_reg = 0;
+
+	hw->phy.read_reg(hw, TXGBE_MD_AUTO_NEG_ADVT,
+			     TXGBE_MD_DEV_AUTO_NEG,
+			     &technology_ability_reg);
+	hw->phy.read_reg(hw, TXGBE_MD_AUTO_NEG_LP,
+			     TXGBE_MD_DEV_AUTO_NEG,
+			     &lp_technology_ability_reg);
+
+	return txgbe_negotiate_fc(hw, (u32)technology_ability_reg,
+				  (u32)lp_technology_ability_reg,
+				  TXGBE_TAF_SYM_PAUSE, TXGBE_TAF_ASM_PAUSE,
+				  TXGBE_TAF_SYM_PAUSE, TXGBE_TAF_ASM_PAUSE);
+}
+
+/**
+ *  txgbe_fc_autoneg - Configure flow control
+ *  @hw: pointer to hardware structure
+ *
+ *  Compares our advertised flow control capabilities to those advertised by
+ *  our link partner, and determines the proper flow control mode to use.
+ **/
+void txgbe_fc_autoneg(struct txgbe_hw *hw)
+{
+	s32 err = TXGBE_ERR_FC_NOT_NEGOTIATED;
+	u32 speed;
+	bool link_up;
+
+	DEBUGFUNC("txgbe_fc_autoneg");
+
+	/*
+	 * AN should have completed when the cable was plugged in.
+	 * Look for reasons to bail out.  Bail out if:
+	 * - FC autoneg is disabled, or if
+	 * - link is not up.
+	 */
+	if (hw->fc.disable_fc_autoneg) {
+		DEBUGOUT("Flow control autoneg is disabled");
+		goto out;
+	}
+
+	hw->mac.check_link(hw, &speed, &link_up, false);
+	if (!link_up) {
+		DEBUGOUT("The link is down");
+		goto out;
+	}
+
+	switch (hw->phy.media_type) {
+	/* Autoneg flow control on fiber adapters */
+	case txgbe_media_type_fiber_qsfp:
+	case txgbe_media_type_fiber:
+		if (speed == TXGBE_LINK_SPEED_1GB_FULL)
+			err = txgbe_fc_autoneg_fiber(hw);
+		break;
+
+	/* Autoneg flow control on backplane adapters */
+	case txgbe_media_type_backplane:
+		err = txgbe_fc_autoneg_backplane(hw);
+		break;
+
+	/* Autoneg flow control on copper adapters */
+	case txgbe_media_type_copper:
+		if (txgbe_device_supports_autoneg_fc(hw))
+			err = txgbe_fc_autoneg_copper(hw);
+		break;
+
+	default:
+		break;
+	}
+
+out:
+	if (err == 0) {
+		hw->fc.fc_was_autonegged = true;
+	} else {
+		hw->fc.fc_was_autonegged = false;
+		hw->fc.current_mode = hw->fc.requested_mode;
+	}
+}
+
 /**
  *  txgbe_acquire_swfw_sync - Acquire SWFW semaphore
  *  @hw: pointer to hardware structure
@@ -2146,6 +2345,8 @@ s32 txgbe_init_ops_pf(struct txgbe_hw *hw)
 	/* Flow Control */
 	mac->fc_enable = txgbe_fc_enable;
 	mac->setup_fc = txgbe_setup_fc;
+	mac->fc_autoneg = txgbe_fc_autoneg;
+
 	/* Link */
 	mac->get_link_capabilities = txgbe_get_link_capabilities_raptor;
 	mac->check_link = txgbe_check_mac_link;
diff --git a/drivers/net/txgbe/base/txgbe_hw.h b/drivers/net/txgbe/base/txgbe_hw.h
index 1604d1fca..047c71ecf 100644
--- a/drivers/net/txgbe/base/txgbe_hw.h
+++ b/drivers/net/txgbe/base/txgbe_hw.h
@@ -11,7 +11,6 @@ s32 txgbe_init_hw(struct txgbe_hw *hw);
 s32 txgbe_start_hw(struct txgbe_hw *hw);
 s32 txgbe_stop_hw(struct txgbe_hw *hw);
 s32 txgbe_start_hw_gen2(struct txgbe_hw *hw);
-s32 txgbe_start_hw_raptor(struct txgbe_hw *hw);
 s32 txgbe_clear_hw_cntrs(struct txgbe_hw *hw);
 s32 txgbe_get_mac_addr(struct txgbe_hw *hw, u8 *mac_addr);
 
@@ -34,6 +33,7 @@ s32 txgbe_enable_sec_tx_path(struct txgbe_hw *hw);
 
 s32 txgbe_fc_enable(struct txgbe_hw *hw);
 bool txgbe_device_supports_autoneg_fc(struct txgbe_hw *hw);
+void txgbe_fc_autoneg(struct txgbe_hw *hw);
 s32 txgbe_setup_fc(struct txgbe_hw *hw);
 
 s32 txgbe_validate_mac_addr(u8 *mac_addr);
@@ -62,6 +62,8 @@ s32 txgbe_setup_mac_link_multispeed_fiber(struct txgbe_hw *hw,
 					  u32 speed,
 					  bool autoneg_wait_to_complete);
 void txgbe_set_mta(struct txgbe_hw *hw, u8 *mc_addr);
+s32 txgbe_negotiate_fc(struct txgbe_hw *hw, u32 adv_reg, u32 lp_reg,
+			u32 adv_sym, u32 adv_asm, u32 lp_sym, u32 lp_asm);
 s32 txgbe_init_shared_code(struct txgbe_hw *hw);
 s32 txgbe_set_mac_type(struct txgbe_hw *hw);
 s32 txgbe_init_ops_pf(struct txgbe_hw *hw);
-- 
2.18.4
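The intersection logic of txgbe_negotiate_fc() above follows the IEEE 802.3 clause 37 pause resolution rules over the local and link-partner symmetric (PAUSE) and asymmetric (ASM_DIR) bits, including the demotion of "full" to Rx-only pause when that was all the user requested. A self-contained sketch of the same decision table — the bit masks, enum, and function name here are assumptions for illustration, not the driver's identifiers:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the clause-37 resolution implemented by txgbe_negotiate_fc():
 * intersect local and link-partner PAUSE/ASM_DIR advertisement bits. */
enum fc_mode { FC_NONE, FC_RX_PAUSE, FC_TX_PAUSE, FC_FULL };

#define SYM 0x1 /* symmetric pause bit (PAUSE) */
#define ASM 0x2 /* asymmetric pause bit (ASM_DIR) */

static enum fc_mode resolve_fc(uint32_t adv, uint32_t lp,
			       enum fc_mode requested)
{
	if ((adv & SYM) && (lp & SYM))
		/* both symmetric: full, unless we only wanted Rx pause */
		return requested == FC_FULL ? FC_FULL : FC_RX_PAUSE;
	if (!(adv & SYM) && (adv & ASM) && (lp & SYM) && (lp & ASM))
		return FC_TX_PAUSE;
	if ((adv & SYM) && (adv & ASM) && !(lp & SYM) && (lp & ASM))
		return FC_RX_PAUSE;
	return FC_NONE;
}
```

For example, a local advertisement of ASM only against a partner advertising SYM|ASM resolves to Tx-pause only, mirroring the second branch in the patch.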




^ permalink raw reply	[flat|nested] 49+ messages in thread

* [dpdk-dev] [PATCH v1 38/42] net/txgbe: add DCB packet buffer allocation
  2020-09-01 11:50 [dpdk-dev] [PATCH v1 01/42] net/txgbe: add build and doc infrastructure Jiawen Wu
                   ` (35 preceding siblings ...)
  2020-09-01 11:51 ` [dpdk-dev] [PATCH v1 37/42] net/txgbe: add FC auto negotiation support Jiawen Wu
@ 2020-09-01 11:51 ` Jiawen Wu
  2020-09-01 11:51 ` [dpdk-dev] [PATCH v1 39/42] net/txgbe: configure DCB HW resources Jiawen Wu
                   ` (4 subsequent siblings)
  41 siblings, 0 replies; 49+ messages in thread
From: Jiawen Wu @ 2020-09-01 11:51 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Add DCB packet buffer allocation and priority flow control support.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/base/meson.build  |   1 +
 drivers/net/txgbe/base/txgbe.h      |   1 +
 drivers/net/txgbe/base/txgbe_dcb.c  | 180 ++++++++++++++++++++++++++++
 drivers/net/txgbe/base/txgbe_dcb.h  |  86 +++++++++++++
 drivers/net/txgbe/base/txgbe_hw.c   |  63 ++++++++++
 drivers/net/txgbe/base/txgbe_hw.h   |   2 +
 drivers/net/txgbe/base/txgbe_type.h |  13 ++
 drivers/net/txgbe/txgbe_ethdev.c    |  98 +++++++++++++++
 drivers/net/txgbe/txgbe_ethdev.h    |   6 +
 drivers/net/txgbe/txgbe_rxtx.c      |  51 ++++++++
 10 files changed, 501 insertions(+)
 create mode 100644 drivers/net/txgbe/base/txgbe_dcb.c
 create mode 100644 drivers/net/txgbe/base/txgbe_dcb.h

diff --git a/drivers/net/txgbe/base/meson.build b/drivers/net/txgbe/base/meson.build
index 069879a7c..13b418f19 100644
--- a/drivers/net/txgbe/base/meson.build
+++ b/drivers/net/txgbe/base/meson.build
@@ -2,6 +2,7 @@
 # Copyright(c) 2015-2020
 
 sources = [
+	'txgbe_dcb.c',
 	'txgbe_eeprom.c',
 	'txgbe_hw.c',
 	'txgbe_mng.c',
diff --git a/drivers/net/txgbe/base/txgbe.h b/drivers/net/txgbe/base/txgbe.h
index 764caa439..1bb8f3af8 100644
--- a/drivers/net/txgbe/base/txgbe.h
+++ b/drivers/net/txgbe/base/txgbe.h
@@ -10,5 +10,6 @@
 #include "txgbe_eeprom.h"
 #include "txgbe_phy.h"
 #include "txgbe_hw.h"
+#include "txgbe_dcb.h"
 
 #endif /* _TXGBE_H_ */
diff --git a/drivers/net/txgbe/base/txgbe_dcb.c b/drivers/net/txgbe/base/txgbe_dcb.c
new file mode 100644
index 000000000..6366da92a
--- /dev/null
+++ b/drivers/net/txgbe/base/txgbe_dcb.c
@@ -0,0 +1,180 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2015-2020
+ */
+
+#include "txgbe_type.h"
+#include "txgbe_hw.h"
+#include "txgbe_dcb.h"
+
+/**
+ *  txgbe_dcb_pfc_enable - Enable priority flow control
+ *  @hw: pointer to hardware structure
+ *  @tc_num: traffic class number
+ *  Enable flow control according to the current settings.
+ */
+int
+txgbe_dcb_pfc_enable(struct txgbe_hw *hw, uint8_t tc_num)
+{
+	int ret_val = 0;
+	uint32_t mflcn_reg, fccfg_reg;
+	uint32_t pause_time;
+	uint32_t fcrtl, fcrth;
+	uint8_t i;
+	uint8_t nb_rx_en;
+
+	/* Validate the water mark configuration */
+	if (!hw->fc.pause_time) {
+		ret_val = TXGBE_ERR_INVALID_LINK_SETTINGS;
+		goto out;
+	}
+
+	/* Low water mark of zero causes XOFF floods */
+	if (hw->fc.current_mode & txgbe_fc_tx_pause) {
+		 /* High/Low water can not be 0 */
+		if (!hw->fc.high_water[tc_num] ||
+		    !hw->fc.low_water[tc_num]) {
+			PMD_INIT_LOG(ERR, "Invalid water mark configuration");
+			ret_val = TXGBE_ERR_INVALID_LINK_SETTINGS;
+			goto out;
+		}
+
+		if (hw->fc.low_water[tc_num] >= hw->fc.high_water[tc_num]) {
+			PMD_INIT_LOG(ERR, "Invalid water mark configuration");
+			ret_val = TXGBE_ERR_INVALID_LINK_SETTINGS;
+			goto out;
+		}
+	}
+	/* Negotiate the fc mode to use */
+	txgbe_fc_autoneg(hw);
+
+	/* Disable any previous flow control settings */
+	mflcn_reg = rd32(hw, TXGBE_RXFCCFG);
+	mflcn_reg &= ~(TXGBE_RXFCCFG_FC | TXGBE_RXFCCFG_PFC);
+
+	fccfg_reg = rd32(hw, TXGBE_TXFCCFG);
+	fccfg_reg &= ~(TXGBE_TXFCCFG_FC | TXGBE_TXFCCFG_PFC);
+
+	switch (hw->fc.current_mode) {
+	case txgbe_fc_none:
+		/*
+		 * If more than one RX priority flow control is enabled,
+		 * then TX pause cannot be disabled.
+		 */
+		nb_rx_en = 0;
+		for (i = 0; i < TXGBE_DCB_TC_MAX; i++) {
+			uint32_t reg = rd32(hw, TXGBE_FCWTRHI(i));
+			if (reg & TXGBE_FCWTRHI_XOFF)
+				nb_rx_en++;
+		}
+		if (nb_rx_en > 1)
+			fccfg_reg |= TXGBE_TXFCCFG_PFC;
+		break;
+	case txgbe_fc_rx_pause:
+		/*
+		 * Rx Flow control is enabled and Tx Flow control is
+		 * disabled by software override. Since there really
+		 * isn't a way to advertise that we are capable of RX
+		 * Pause ONLY, we will advertise that we support both
+		 * symmetric and asymmetric Rx PAUSE.  Later, we will
+		 * disable the adapter's ability to send PAUSE frames.
+		 */
+		mflcn_reg |= TXGBE_RXFCCFG_PFC;
+		/*
+		 * If more than one RX priority flow control is enabled,
+		 * then TX pause cannot be disabled.
+		 */
+		nb_rx_en = 0;
+		for (i = 0; i < TXGBE_DCB_TC_MAX; i++) {
+			uint32_t reg = rd32(hw, TXGBE_FCWTRHI(i));
+			if (reg & TXGBE_FCWTRHI_XOFF)
+				nb_rx_en++;
+		}
+		if (nb_rx_en > 1)
+			fccfg_reg |= TXGBE_TXFCCFG_PFC;
+		break;
+	case txgbe_fc_tx_pause:
+		/*
+		 * Tx Flow control is enabled, and Rx Flow control is
+		 * disabled by software override.
+		 */
+		fccfg_reg |= TXGBE_TXFCCFG_PFC;
+		break;
+	case txgbe_fc_full:
+		/* Flow control (both Rx and Tx) is enabled by SW override. */
+		mflcn_reg |= TXGBE_RXFCCFG_PFC;
+		fccfg_reg |= TXGBE_TXFCCFG_PFC;
+		break;
+	default:
+		PMD_DRV_LOG(DEBUG, "Flow control param set incorrectly");
+		ret_val = TXGBE_ERR_CONFIG;
+		goto out;
+	}
+
+	/* Set 802.3x based flow control settings. */
+	wr32(hw, TXGBE_RXFCCFG, mflcn_reg);
+	wr32(hw, TXGBE_TXFCCFG, fccfg_reg);
+
+	/* Set up and enable Rx high/low water mark thresholds, enable XON. */
+	if ((hw->fc.current_mode & txgbe_fc_tx_pause) &&
+		hw->fc.high_water[tc_num]) {
+		fcrtl = TXGBE_FCWTRLO_TH(hw->fc.low_water[tc_num]) |
+			TXGBE_FCWTRLO_XON;
+		fcrth = TXGBE_FCWTRHI_TH(hw->fc.high_water[tc_num]) |
+			TXGBE_FCWTRHI_XOFF;
+	} else {
+		/*
+		 * In order to prevent Tx hangs when the internal Tx
+		 * switch is enabled we must set the high water mark
+		 * to the maximum FCRTH value.  This allows the Tx
+		 * switch to function even under heavy Rx workloads.
+		 */
+		fcrtl = 0;
+		fcrth = rd32(hw, TXGBE_PBRXSIZE(tc_num)) - 32;
+	}
+	wr32(hw, TXGBE_FCWTRLO(tc_num), fcrtl);
+	wr32(hw, TXGBE_FCWTRHI(tc_num), fcrth);
+
+	/* Configure pause time (2 TCs per register) */
+	pause_time = TXGBE_RXFCFSH_TIME(hw->fc.pause_time);
+	for (i = 0; i < (TXGBE_DCB_TC_MAX / 2); i++)
+		wr32(hw, TXGBE_FCXOFFTM(i), pause_time * 0x00010001);
+
+	/* Configure flow control refresh threshold value */
+	wr32(hw, TXGBE_RXFCRFSH, pause_time / 2);
+
+out:
+	return ret_val;
+}
+
+u8 txgbe_dcb_get_tc_from_up(struct txgbe_dcb_config *cfg, int direction, u8 up)
+{
+	struct txgbe_dcb_tc_config *tc_config = &cfg->tc_config[0];
+	u8 prio_mask = 1 << up;
+	u8 tc = cfg->num_tcs.pg_tcs;
+
+	/* If tc is 0 then DCB is likely not enabled or supported */
+	if (!tc)
+		goto out;
+
+	/*
+	 * Test from maximum TC to 1 and report the first match we find.  If
+	 * we find no match we can assume that the TC is 0 since the TC must
+	 * be set for all user priorities
+	 */
+	for (tc--; tc; tc--) {
+		if (prio_mask & tc_config[tc].path[direction].up_to_tc_bitmap)
+			break;
+	}
+out:
+	return tc;
+}
+
+void txgbe_dcb_unpack_map_cee(struct txgbe_dcb_config *cfg, int direction,
+			      u8 *map)
+{
+	u8 up;
+
+	for (up = 0; up < TXGBE_DCB_UP_MAX; up++)
+		map[up] = txgbe_dcb_get_tc_from_up(cfg, direction, up);
+}
+
diff --git a/drivers/net/txgbe/base/txgbe_dcb.h b/drivers/net/txgbe/base/txgbe_dcb.h
new file mode 100644
index 000000000..67de5c54b
--- /dev/null
+++ b/drivers/net/txgbe/base/txgbe_dcb.h
@@ -0,0 +1,86 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2015-2020
+ */
+
+#ifndef _TXGBE_DCB_H_
+#define _TXGBE_DCB_H_
+
+#include "txgbe_type.h"
+
+#define TXGBE_DCB_TX_CONFIG		0
+#define TXGBE_DCB_RX_CONFIG		1
+
+struct txgbe_dcb_support {
+	u32 capabilities; /* DCB capabilities */
+
+	/* Each bit represents a number of TCs configurable in the hw.
+	 * If 8 traffic classes can be configured, the value is 0x80. */
+	u8 traffic_classes;
+	u8 pfc_traffic_classes;
+};
+
+enum txgbe_dcb_tsa {
+	txgbe_dcb_tsa_ets = 0,
+	txgbe_dcb_tsa_group_strict_cee,
+	txgbe_dcb_tsa_strict
+};
+
+/* Traffic class bandwidth allocation per direction */
+struct txgbe_dcb_tc_path {
+	u8 bwg_id; /* Bandwidth Group (BWG) ID */
+	u8 bwg_percent; /* % of BWG's bandwidth */
+	u8 link_percent; /* % of link bandwidth */
+	u8 up_to_tc_bitmap; /* User Priority to Traffic Class mapping */
+	u16 data_credits_refill; /* Credit refill amount in 64B granularity */
+	u16 data_credits_max; /* Max credits for a configured packet buffer
+			       * in 64B granularity.*/
+	enum txgbe_dcb_tsa tsa; /* Link or Group Strict Priority */
+};
+
+enum txgbe_dcb_pfc {
+	txgbe_dcb_pfc_disabled = 0,
+	txgbe_dcb_pfc_enabled,
+	txgbe_dcb_pfc_enabled_txonly,
+	txgbe_dcb_pfc_enabled_rxonly
+};
+
+/* Traffic class configuration */
+struct txgbe_dcb_tc_config {
+	struct txgbe_dcb_tc_path path[2]; /* One each for Tx/Rx */
+	enum txgbe_dcb_pfc pfc; /* Class based flow control setting */
+
+	u16 desc_credits_max; /* For Tx Descriptor arbitration */
+	u8 tc; /* Traffic class (TC) */
+};
+
+enum txgbe_dcb_pba {
+	/* PBA[0-7] each use 64KB FIFO */
+	txgbe_dcb_pba_equal = PBA_STRATEGY_EQUAL,
+	/* PBA[0-3] each use 80KB, PBA[4-7] each use 48KB */
+	txgbe_dcb_pba_80_48 = PBA_STRATEGY_WEIGHTED
+};
+
+struct txgbe_dcb_num_tcs {
+	u8 pg_tcs;
+	u8 pfc_tcs;
+};
+
+struct txgbe_dcb_config {
+	struct txgbe_dcb_tc_config tc_config[TXGBE_DCB_TC_MAX];
+	struct txgbe_dcb_support support;
+	struct txgbe_dcb_num_tcs num_tcs;
+	u8 bw_percentage[TXGBE_DCB_BWG_MAX][2]; /* One each for Tx/Rx */
+	bool pfc_mode_enable;
+	bool round_robin_enable;
+
+	enum txgbe_dcb_pba rx_pba_cfg;
+
+	u32 link_speed; /* For bandwidth allocation validation purpose */
+	bool vt_mode;
+};
+
+int txgbe_dcb_pfc_enable(struct txgbe_hw *hw, u8 tc_num);
+void txgbe_dcb_unpack_map_cee(struct txgbe_dcb_config *, int, u8 *);
+u8 txgbe_dcb_get_tc_from_up(struct txgbe_dcb_config *, int, u8);
+
+#endif /* _TXGBE_DCB_H_ */
diff --git a/drivers/net/txgbe/base/txgbe_hw.c b/drivers/net/txgbe/base/txgbe_hw.c
index 164d3b5b8..15ab0213d 100644
--- a/drivers/net/txgbe/base/txgbe_hw.c
+++ b/drivers/net/txgbe/base/txgbe_hw.c
@@ -1757,6 +1757,68 @@ s32 txgbe_get_device_caps(struct txgbe_hw *hw, u16 *device_caps)
 	return 0;
 }
 
+/**
+ * txgbe_set_pba - Initialize Rx packet buffer
+ * @hw: pointer to hardware structure
+ * @num_pb: number of packet buffers to allocate
+ * @headroom: reserve n KB of headroom
+ * @strategy: packet buffer allocation strategy
+ **/
+void txgbe_set_pba(struct txgbe_hw *hw, int num_pb, u32 headroom,
+			     int strategy)
+{
+	u32 pbsize = hw->mac.rx_pb_size;
+	int i = 0;
+	u32 rxpktsize, txpktsize, txpbthresh;
+
+	UNREFERENCED_PARAMETER(hw);
+
+	/* Reserve headroom */
+	pbsize -= headroom;
+
+	if (!num_pb)
+		num_pb = 1;
+
+	/* Divide remaining packet buffer space amongst the number of packet
+	 * buffers requested using supplied strategy.
+	 */
+	switch (strategy) {
+	case PBA_STRATEGY_WEIGHTED:
+		/* The txgbe_dcb_pba_80_48 strategy weights the first half
+		 * of the packet buffers with 5/8 of the packet buffer space.
+		 */
+		rxpktsize = (pbsize * 5) / (num_pb * 4);
+		pbsize -= rxpktsize * (num_pb / 2);
+		rxpktsize <<= 10;
+		for (; i < (num_pb / 2); i++)
+			wr32(hw, TXGBE_PBRXSIZE(i), rxpktsize);
+		/* fall through - configure remaining packet buffers */
+	case PBA_STRATEGY_EQUAL:
+		rxpktsize = (pbsize / (num_pb - i));
+		rxpktsize <<= 10;
+		for (; i < num_pb; i++)
+			wr32(hw, TXGBE_PBRXSIZE(i), rxpktsize);
+		break;
+	default:
+		break;
+	}
+
+	/* Only support an equally distributed Tx packet buffer strategy. */
+	txpktsize = TXGBE_PBTXSIZE_MAX / num_pb;
+	txpbthresh = (txpktsize / 1024) - TXGBE_TXPKT_SIZE_MAX;
+	for (i = 0; i < num_pb; i++) {
+		wr32(hw, TXGBE_PBTXSIZE(i), txpktsize);
+		wr32(hw, TXGBE_PBTXDMATH(i), txpbthresh);
+	}
+
+	/* Clear unused TCs, if any, to zero buffer size */
+	for (; i < TXGBE_MAX_UP; i++) {
+		wr32(hw, TXGBE_PBRXSIZE(i), 0);
+		wr32(hw, TXGBE_PBTXSIZE(i), 0);
+		wr32(hw, TXGBE_PBTXDMATH(i), 0);
+	}
+}
+
 /**
  * txgbe_clear_tx_pending - Clear pending TX work from the PCIe fifo
  * @hw: pointer to the hardware structure
@@ -2350,6 +2412,7 @@ s32 txgbe_init_ops_pf(struct txgbe_hw *hw)
 	/* Link */
 	mac->get_link_capabilities = txgbe_get_link_capabilities_raptor;
 	mac->check_link = txgbe_check_mac_link;
+	mac->setup_pba = txgbe_set_pba;
 
 	/* Manageability interface */
 	mac->set_fw_drv_ver = txgbe_hic_set_drv_ver;
diff --git a/drivers/net/txgbe/base/txgbe_hw.h b/drivers/net/txgbe/base/txgbe_hw.h
index 047c71ecf..ea65e14bf 100644
--- a/drivers/net/txgbe/base/txgbe_hw.h
+++ b/drivers/net/txgbe/base/txgbe_hw.h
@@ -52,6 +52,8 @@ s32 txgbe_get_wwn_prefix(struct txgbe_hw *hw, u16 *wwnn_prefix,
 				 u16 *wwpn_prefix);
 
 s32 txgbe_get_device_caps(struct txgbe_hw *hw, u16 *device_caps);
+void txgbe_set_pba(struct txgbe_hw *hw, int num_pb, u32 headroom,
+			     int strategy);
 void txgbe_clear_tx_pending(struct txgbe_hw *hw);
 
 extern s32 txgbe_reset_pipeline_raptor(struct txgbe_hw *hw);
diff --git a/drivers/net/txgbe/base/txgbe_type.h b/drivers/net/txgbe/base/txgbe_type.h
index 4a30a99db..fcc44ece8 100644
--- a/drivers/net/txgbe/base/txgbe_type.h
+++ b/drivers/net/txgbe/base/txgbe_type.h
@@ -6,11 +6,15 @@
 #define _TXGBE_TYPE_H_
 
 #define TXGBE_DCB_TC_MAX	TXGBE_MAX_UP
+#define TXGBE_DCB_UP_MAX	TXGBE_MAX_UP
+#define TXGBE_DCB_BWG_MAX	TXGBE_MAX_UP
 #define TXGBE_LINK_UP_TIME	90 /* 9.0 Seconds */
 #define TXGBE_AUTO_NEG_TIME	45 /* 4.5 Seconds */
 
 #define TXGBE_FRAME_SIZE_MAX	(9728) /* Maximum frame size, +FCS */
 #define TXGBE_FRAME_SIZE_DFT	(1518) /* Default frame size, +FCS */
+#define TXGBE_PBTXSIZE_MAX	0x00028000 /* 160KB Packet Buffer */
+#define TXGBE_TXPKT_SIZE_MAX	0xA /* Max Tx Packet size */
 #define TXGBE_MAX_UP		8
 #define TXGBE_MAX_QP		(128)
 
@@ -19,6 +23,14 @@
 #include "txgbe_status.h"
 #include "txgbe_osdep.h"
 #include "txgbe_devids.h"
+/* Packet buffer allocation strategies */
+enum {
+	PBA_STRATEGY_EQUAL	= 0, /* Distribute PB space equally */
+#define PBA_STRATEGY_EQUAL	PBA_STRATEGY_EQUAL
+	PBA_STRATEGY_WEIGHTED	= 1, /* Weight front half of TCs */
+#define PBA_STRATEGY_WEIGHTED	PBA_STRATEGY_WEIGHTED
+};
+
 
 /* Physical layer type */
 #define TXGBE_PHYSICAL_LAYER_UNKNOWN		0
@@ -534,6 +546,7 @@ struct txgbe_mac_info {
 	s32 mc_filter_type;
 	u32 mcft_size;
 	u32 num_rar_entries;
+	u32 rx_pb_size;
 	u32 max_tx_queues;
 	u32 max_rx_queues;
 
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index cab89f5f8..a72994d08 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -333,6 +333,43 @@ txgbe_dev_queue_stats_mapping_set(struct rte_eth_dev *eth_dev,
 	return 0;
 }
 
+static void
+txgbe_dcb_init(struct txgbe_hw *hw, struct txgbe_dcb_config *dcb_config)
+{
+	int i;
+	u8 bwgp;
+	struct txgbe_dcb_tc_config *tc;
+
+	UNREFERENCED_PARAMETER(hw);
+
+	dcb_config->num_tcs.pg_tcs = TXGBE_DCB_TC_MAX;
+	dcb_config->num_tcs.pfc_tcs = TXGBE_DCB_TC_MAX;
+	bwgp = (u8)(100 / TXGBE_DCB_TC_MAX);
+	for (i = 0; i < TXGBE_DCB_TC_MAX; i++) {
+		tc = &dcb_config->tc_config[i];
+		tc->path[TXGBE_DCB_TX_CONFIG].bwg_id = i;
+		tc->path[TXGBE_DCB_TX_CONFIG].bwg_percent = bwgp + (i & 1);
+		tc->path[TXGBE_DCB_RX_CONFIG].bwg_id = i;
+		tc->path[TXGBE_DCB_RX_CONFIG].bwg_percent = bwgp + (i & 1);
+		tc->pfc = txgbe_dcb_pfc_disabled;
+	}
+
+	/* Initialize default user to priority mapping, UPx->TC0 */
+	tc = &dcb_config->tc_config[0];
+	tc->path[TXGBE_DCB_TX_CONFIG].up_to_tc_bitmap = 0xFF;
+	tc->path[TXGBE_DCB_RX_CONFIG].up_to_tc_bitmap = 0xFF;
+	for (i = 0; i < TXGBE_DCB_BWG_MAX; i++) {
+		dcb_config->bw_percentage[i][TXGBE_DCB_TX_CONFIG] = 100;
+		dcb_config->bw_percentage[i][TXGBE_DCB_RX_CONFIG] = 100;
+	}
+	dcb_config->rx_pba_cfg = txgbe_dcb_pba_equal;
+	dcb_config->pfc_mode_enable = false;
+	dcb_config->vt_mode = true;
+	dcb_config->round_robin_enable = false;
+	/* support all DCB capabilities */
+	dcb_config->support.capabilities = 0xFF;
+}
+
 /*
  * Ensure that all locks are released before first NVM or PHY access
  */
@@ -363,6 +400,7 @@ eth_txgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
 	struct txgbe_hw *hw = TXGBE_DEV_HW(eth_dev);
 	struct txgbe_vfta *shadow_vfta = TXGBE_DEV_VFTA(eth_dev);
 	struct txgbe_hwstrip *hwstrip = TXGBE_DEV_HWSTRIP(eth_dev);
+	struct txgbe_dcb_config *dcb_config = TXGBE_DEV_DCB_CONFIG(eth_dev);
 	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
 	const struct rte_memzone *mz;
 	uint32_t ctrl_ext;
@@ -427,6 +465,10 @@ eth_txgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
 	/* Unlock any pending hardware semaphore */
 	txgbe_swfw_lock_reset(hw);
 
+	/* Initialize DCB configuration*/
+	memset(dcb_config, 0, sizeof(struct txgbe_dcb_config));
+	txgbe_dcb_init(hw, dcb_config);
+
 	/* Get Hardware Flow Control setting */
 	hw->fc.requested_mode = txgbe_fc_full;
 	hw->fc.current_mode = txgbe_fc_full;
@@ -1139,6 +1181,9 @@ txgbe_dev_start(struct rte_eth_dev *dev)
 		goto error;
 	}
 
+	txgbe_configure_pb(dev);
+	txgbe_configure_port(dev);
+
 	err = txgbe_dev_rxtx_start(dev);
 	if (err < 0) {
 		PMD_INIT_LOG(ERR, "Unable to start rxtx queues");
@@ -2552,6 +2597,58 @@ txgbe_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	return -EIO;
 }
 
+static int
+txgbe_priority_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_pfc_conf *pfc_conf)
+{
+	int err;
+	uint32_t rx_buf_size;
+	uint32_t max_high_water;
+	uint8_t tc_num;
+	uint8_t  map[TXGBE_DCB_UP_MAX] = { 0 };
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+	struct txgbe_dcb_config *dcb_config = TXGBE_DEV_DCB_CONFIG(dev);
+
+	enum txgbe_fc_mode rte_fcmode_2_txgbe_fcmode[] = {
+		txgbe_fc_none,
+		txgbe_fc_rx_pause,
+		txgbe_fc_tx_pause,
+		txgbe_fc_full
+	};
+
+	PMD_INIT_FUNC_TRACE();
+
+	txgbe_dcb_unpack_map_cee(dcb_config, TXGBE_DCB_RX_CONFIG, map);
+	tc_num = map[pfc_conf->priority];
+	rx_buf_size = rd32(hw, TXGBE_PBRXSIZE(tc_num));
+	PMD_INIT_LOG(DEBUG, "Rx packet buffer size = 0x%x", rx_buf_size);
+	/*
+	 * At least reserve one Ethernet frame for watermark
+	 * high_water/low_water in kilo bytes for txgbe
+	 */
+	max_high_water = (rx_buf_size - RTE_ETHER_MAX_LEN) >> 10;
+	if ((pfc_conf->fc.high_water > max_high_water) ||
+	    (pfc_conf->fc.high_water <= pfc_conf->fc.low_water)) {
+		PMD_INIT_LOG(ERR, "Invalid high/low water setup value in KB");
+		PMD_INIT_LOG(ERR, "High_water must <= 0x%x", max_high_water);
+		return -EINVAL;
+	}
+
+	hw->fc.requested_mode = rte_fcmode_2_txgbe_fcmode[pfc_conf->fc.mode];
+	hw->fc.pause_time = pfc_conf->fc.pause_time;
+	hw->fc.send_xon = pfc_conf->fc.send_xon;
+	hw->fc.low_water[tc_num] =  pfc_conf->fc.low_water;
+	hw->fc.high_water[tc_num] = pfc_conf->fc.high_water;
+
+	err = txgbe_dcb_pfc_enable(hw, tc_num);
+
+	/* Not negotiated is not an error case */
+	if ((err == 0) || (err == TXGBE_ERR_FC_NOT_NEGOTIATED))
+		return 0;
+
+	PMD_INIT_LOG(ERR, "txgbe_dcb_pfc_enable = 0x%x", err);
+	return -EIO;
+}
+
 static int
 txgbe_add_rar(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr,
 				uint32_t index, uint32_t pool)
@@ -2932,6 +3029,7 @@ static const struct eth_dev_ops txgbe_eth_dev_ops = {
 	.dev_led_off                = txgbe_dev_led_off,
 	.flow_ctrl_get              = txgbe_flow_ctrl_get,
 	.flow_ctrl_set              = txgbe_flow_ctrl_set,
+	.priority_flow_ctrl_set     = txgbe_priority_flow_ctrl_set,
 	.mac_addr_add               = txgbe_add_rar,
 	.mac_addr_remove            = txgbe_remove_rar,
 	.mac_addr_set               = txgbe_set_default_mac_addr,
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index 667b11127..1166c151d 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -92,6 +92,7 @@ struct txgbe_adapter {
 	struct txgbe_stat_mappings  stat_mappings;
 	struct txgbe_vfta           shadow_vfta;
 	struct txgbe_hwstrip        hwstrip;
+	struct txgbe_dcb_config     dcb_config;
 	struct txgbe_vf_info        *vfdata;
 	bool rx_bulk_alloc_allowed;
 };
@@ -126,6 +127,9 @@ int txgbe_vf_representor_uninit(struct rte_eth_dev *ethdev);
 #define TXGBE_DEV_HWSTRIP(dev) \
 	(&((struct txgbe_adapter *)(dev)->data->dev_private)->hwstrip)
 
+#define TXGBE_DEV_DCB_CONFIG(dev) \
+	(&((struct txgbe_adapter *)(dev)->data->dev_private)->dcb_config)
+
 #define TXGBE_DEV_VFDATA(dev) \
 	(&((struct txgbe_adapter *)(dev)->data->dev_private)->vfdata)
 
@@ -205,6 +209,8 @@ uint16_t txgbe_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 void txgbe_set_ivar_map(struct txgbe_hw *hw, int8_t direction,
 			       uint8_t queue, uint8_t msix_vector);
 
+void txgbe_configure_pb(struct rte_eth_dev *dev);
+void txgbe_configure_port(struct rte_eth_dev *dev);
 
 int
 txgbe_dev_link_update_share(struct rte_eth_dev *dev,
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index df094408f..e2ab86568 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -2760,6 +2760,57 @@ txgbe_dev_free_queues(struct rte_eth_dev *dev)
 	dev->data->nb_tx_queues = 0;
 }
 
+void txgbe_configure_pb(struct rte_eth_dev *dev)
+{
+	struct rte_eth_conf *dev_conf = &(dev->data->dev_conf);
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+
+	int hdrm;
+	int tc = dev_conf->rx_adv_conf.dcb_rx_conf.nb_tcs;
+
+	/* Reserve 256KB (of the 512KB Rx buffer) for fdir */
+	hdrm = 256; /*KB*/
+
+	hw->mac.setup_pba(hw, tc, hdrm, PBA_STRATEGY_EQUAL);
+}
+
+void txgbe_configure_port(struct rte_eth_dev *dev)
+{
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+	int i = 0;
+	uint16_t tpids[8] = {RTE_ETHER_TYPE_VLAN, RTE_ETHER_TYPE_QINQ,
+				0x9100, 0x9200,
+				0x0000, 0x0000,
+				0x0000, 0x0000};
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* default outer vlan tpid */
+	wr32(hw, TXGBE_EXTAG,
+		TXGBE_EXTAG_ETAG(RTE_ETHER_TYPE_ETAG) |
+		TXGBE_EXTAG_VLAN(RTE_ETHER_TYPE_QINQ));
+
+	/* default inner vlan tpid */
+	wr32m(hw, TXGBE_VLANCTL,
+		TXGBE_VLANCTL_TPID_MASK,
+		TXGBE_VLANCTL_TPID(RTE_ETHER_TYPE_VLAN));
+	wr32m(hw, TXGBE_DMATXCTRL,
+		TXGBE_DMATXCTRL_TPID_MASK,
+		TXGBE_DMATXCTRL_TPID(RTE_ETHER_TYPE_VLAN));
+
+	/* default vlan tpid filters */
+	for (i = 0; i < 8; i++) {
+		wr32m(hw, TXGBE_TAGTPID(i/2),
+			(i % 2 ? TXGBE_TAGTPID_MSB_MASK
+			       : TXGBE_TAGTPID_LSB_MASK),
+			(i % 2 ? TXGBE_TAGTPID_MSB(tpids[i])
+			       : TXGBE_TAGTPID_LSB(tpids[i])));
+	}
+
+	/* default vxlan port */
+	wr32(hw, TXGBE_VXLANPORT, 4789);
+}
+
 static int __rte_cold
 txgbe_alloc_rx_queue_mbufs(struct txgbe_rx_queue *rxq)
 {
-- 
2.18.4
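The PBA_STRATEGY_WEIGHTED branch of txgbe_set_pba() above gives each buffer in the first half 5/(4*num_pb) of the space (so the first half collectively gets 5/8) and splits the remainder equally among the rest. A sketch of the resulting per-buffer size in KB — the helper name is illustrative, not driver code; with a 512 KB buffer and 8 buffers it reproduces the 80 KB / 48 KB split described in the txgbe_dcb.h comment:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative per-buffer size under the weighted strategy, in KB.
 * Mirrors the arithmetic in txgbe_set_pba(): the first num_pb/2 buffers
 * each get (pbsize * 5) / (num_pb * 4); the rest share what remains. */
static uint32_t pba_weighted_kb(uint32_t pbsize_kb, int num_pb, int idx)
{
	uint32_t big = (pbsize_kb * 5) / (num_pb * 4);

	if (idx < num_pb / 2)
		return big;
	return (pbsize_kb - big * (num_pb / 2)) / (num_pb - num_pb / 2);
}
```

PBA_STRATEGY_EQUAL is simply the idx >= num_pb/2 branch applied to all buffers.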




^ permalink raw reply	[flat|nested] 49+ messages in thread
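txgbe_dcb_pfc_enable() above programs the pause time for two TCs per 32-bit TXGBE_FCXOFFTM register by multiplying the 16-bit value by 0x00010001, which replicates it into both register halves. A tiny sketch of that packing (the helper name is an assumption):

```c
#include <assert.h>
#include <stdint.h>

/* Replicate a 16-bit pause time into both halves of a 32-bit register,
 * as done for TXGBE_FCXOFFTM (two TCs per register). */
static uint32_t pack_pause_time(uint16_t pause_time)
{
	return (uint32_t)pause_time * 0x00010001u;
}
```

E.g. the default pause time 0x680 packs to 0x06800680, covering two traffic classes with one register write.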

* [dpdk-dev] [PATCH v1 39/42] net/txgbe: configure DCB HW resources
  2020-09-01 11:50 [dpdk-dev] [PATCH v1 01/42] net/txgbe: add build and doc infrastructure Jiawen Wu
                   ` (36 preceding siblings ...)
  2020-09-01 11:51 ` [dpdk-dev] [PATCH v1 38/42] net/txgbe: add DCB packet buffer allocation Jiawen Wu
@ 2020-09-01 11:51 ` Jiawen Wu
  2020-09-01 11:51 ` [dpdk-dev] [PATCH v1 40/42] net/txgbe: add device promiscuous and allmulticast mode Jiawen Wu
                   ` (3 subsequent siblings)
  41 siblings, 0 replies; 49+ messages in thread
From: Jiawen Wu @ 2020-09-01 11:51 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Add DCB transmit and receive mode configurations.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/base/meson.build    |   1 +
 drivers/net/txgbe/base/txgbe_dcb.c    | 180 ++++++++++++
 drivers/net/txgbe/base/txgbe_dcb.h    |  27 ++
 drivers/net/txgbe/base/txgbe_dcb_hw.c | 283 +++++++++++++++++++
 drivers/net/txgbe/base/txgbe_dcb_hw.h |  23 ++
 drivers/net/txgbe/base/txgbe_hw.c     |   1 +
 drivers/net/txgbe/txgbe_ethdev.c      |   6 +
 drivers/net/txgbe/txgbe_ethdev.h      |  10 +
 drivers/net/txgbe/txgbe_rxtx.c        | 383 ++++++++++++++++++++++++++
 9 files changed, 914 insertions(+)
 create mode 100644 drivers/net/txgbe/base/txgbe_dcb_hw.c
 create mode 100644 drivers/net/txgbe/base/txgbe_dcb_hw.h

diff --git a/drivers/net/txgbe/base/meson.build b/drivers/net/txgbe/base/meson.build
index 13b418f19..d240a4335 100644
--- a/drivers/net/txgbe/base/meson.build
+++ b/drivers/net/txgbe/base/meson.build
@@ -2,6 +2,7 @@
 # Copyright(c) 2015-2020
 
 sources = [
+	'txgbe_dcb_hw.c',
 	'txgbe_dcb.c',
 	'txgbe_eeprom.c',
 	'txgbe_hw.c',
diff --git a/drivers/net/txgbe/base/txgbe_dcb.c b/drivers/net/txgbe/base/txgbe_dcb.c
index 6366da92a..7e9a16cfe 100644
--- a/drivers/net/txgbe/base/txgbe_dcb.c
+++ b/drivers/net/txgbe/base/txgbe_dcb.c
@@ -5,6 +5,7 @@
 #include "txgbe_type.h"
 #include "txgbe_hw.h"
 #include "txgbe_dcb.h"
+#include "txgbe_dcb_hw.h"
 
 /**
  *  txgbe_pfc_enable - Enable flow control
@@ -146,6 +147,177 @@ txgbe_dcb_pfc_enable(struct txgbe_hw *hw, uint8_t tc_num)
 	return ret_val;
 }
 
+/**
+ * txgbe_dcb_calculate_tc_credits_cee - Calculates traffic class credits
+ * @hw: pointer to hardware structure
+ * @dcb_config: Struct containing DCB settings
+ * @max_frame_size: Maximum frame size
+ * @direction: Configuring either Tx or Rx
+ *
+ * This function calculates the credits allocated to each traffic class.
+ * It should be called only after the rules are checked by
+ * txgbe_dcb_check_config_cee().
+ */
+s32 txgbe_dcb_calculate_tc_credits_cee(struct txgbe_hw *hw,
+				   struct txgbe_dcb_config *dcb_config,
+				   u32 max_frame_size, u8 direction)
+{
+	struct txgbe_dcb_tc_path *p;
+	u32 min_multiplier	= 0;
+	u16 min_percent		= 100;
+	s32 ret_val =		0;
+	/* Initialization values default for Tx settings */
+	u32 min_credit		= 0;
+	u32 credit_refill	= 0;
+	u32 credit_max		= 0;
+	u16 link_percentage	= 0;
+	u8  bw_percent		= 0;
+	u8  i;
+
+	UNREFERENCED_PARAMETER(hw);
+
+	if (dcb_config == NULL) {
+		ret_val = TXGBE_ERR_CONFIG;
+		goto out;
+	}
+
+	min_credit = ((max_frame_size / 2) + TXGBE_DCB_CREDIT_QUANTUM - 1) /
+		     TXGBE_DCB_CREDIT_QUANTUM;
+
+	/* Find smallest link percentage */
+	for (i = 0; i < TXGBE_DCB_TC_MAX; i++) {
+		p = &dcb_config->tc_config[i].path[direction];
+		bw_percent = dcb_config->bw_percentage[p->bwg_id][direction];
+		link_percentage = p->bwg_percent;
+
+		link_percentage = (link_percentage * bw_percent) / 100;
+
+		if (link_percentage && link_percentage < min_percent)
+			min_percent = link_percentage;
+	}
+
+	/*
+	 * The ratio between traffic classes will control the bandwidth
+	 * percentages seen on the wire. To calculate this ratio we use
+	 * a multiplier. It is required that the refill credits must be
+	 * larger than the max frame size so here we find the smallest
+	 * multiplier that will allow all bandwidth percentages to be
+	 * greater than the max frame size.
+	 */
+	min_multiplier = (min_credit / min_percent) + 1;
+
+	/* Find out the link percentage for each TC first */
+	for (i = 0; i < TXGBE_DCB_TC_MAX; i++) {
+		p = &dcb_config->tc_config[i].path[direction];
+		bw_percent = dcb_config->bw_percentage[p->bwg_id][direction];
+
+		link_percentage = p->bwg_percent;
+		/* Must be careful of integer division for very small nums */
+		link_percentage = (link_percentage * bw_percent) / 100;
+		if (p->bwg_percent > 0 && link_percentage == 0)
+			link_percentage = 1;
+
+		/* Save link_percentage for reference */
+		p->link_percent = (u8)link_percentage;
+
+		/* Calculate credit refill ratio using multiplier */
+		credit_refill = min(link_percentage * min_multiplier,
+				    (u32)TXGBE_DCB_MAX_CREDIT_REFILL);
+
+		/* Refill at least minimum credit */
+		if (credit_refill < min_credit)
+			credit_refill = min_credit;
+
+		p->data_credits_refill = (u16)credit_refill;
+
+		/* Calculate maximum credit for the TC */
+		credit_max = (link_percentage * TXGBE_DCB_MAX_CREDIT) / 100;
+
+		/*
+		 * Adjustment based on rule checking, if the percentage
+		 * of a TC is too small, the maximum credit may not be
+		 * enough to send out a jumbo frame in data plane arbitration.
+		 */
+		if (credit_max < min_credit)
+			credit_max = min_credit;
+
+		if (direction == TXGBE_DCB_TX_CONFIG) {
+			dcb_config->tc_config[i].desc_credits_max =
+								(u16)credit_max;
+		}
+
+		p->data_credits_max = (u16)credit_max;
+	}
+
+out:
+	return ret_val;
+}
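The credit arithmetic in txgbe_dcb_calculate_tc_credits_cee() reduces to two helper quantities: a minimum refill credit of half the max frame rounded up to 64-byte quanta, and the smallest multiplier that keeps each class's refill above that minimum. A standalone sketch of that arithmetic (not driver code):

```c
#include <assert.h>
#include <stdint.h>

#define QUANTUM 64	/* one credit = 64 bytes (TXGBE_DCB_CREDIT_QUANTUM) */

/* Minimum refill credit: half the max frame, rounded up to whole quanta. */
static uint32_t min_credit(uint32_t max_frame_size)
{
	return (max_frame_size / 2 + QUANTUM - 1) / QUANTUM;
}

/* Smallest multiplier keeping every class's refill above min_credit. */
static uint32_t min_multiplier(uint32_t min_cred, uint32_t min_percent)
{
	return min_cred / min_percent + 1;
}
```

For a standard 1518-byte frame the minimum credit is 12 quanta; for a 9728-byte jumbo frame it is 76.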
+
+/**
+ * txgbe_dcb_unpack_pfc_cee - Unpack dcb_config PFC info
+ * @cfg: dcb configuration to unpack into hardware consumable fields
+ * @map: user priority to traffic class map
+ * @pfc_up: u8 to store user priority PFC bitmask
+ *
+ * This unpacks the dcb configuration PFC info which is stored per
+ * traffic class into an 8-bit user priority bitmask that can be
+ * consumed by hardware routines. The priority to tc map must be
+ * updated before calling this routine to use the current UP-to-TC maps.
+ */
+void txgbe_dcb_unpack_pfc_cee(struct txgbe_dcb_config *cfg, u8 *map, u8 *pfc_up)
+{
+	struct txgbe_dcb_tc_config *tc_config = &cfg->tc_config[0];
+	int up;
+
+	/*
+	 * If the TC for this user priority has PFC enabled then set the
+	 * matching bit in 'pfc_up' to reflect that PFC is enabled.
+	 */
+	for (*pfc_up = 0, up = 0; up < TXGBE_DCB_UP_MAX; up++) {
+		if (tc_config[map[up]].pfc != txgbe_dcb_pfc_disabled)
+			*pfc_up |= 1 << up;
+	}
+}
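txgbe_dcb_unpack_pfc_cee() above folds the per-TC PFC flags through the UP-to-TC map into one 8-bit user-priority mask. A self-contained sketch of the same fold, with plain arrays standing in for the driver structures:

```c
#include <assert.h>
#include <stdint.h>

#define UP_MAX 8	/* TXGBE_DCB_UP_MAX */

/* For each user priority, set its bit if PFC is enabled on the TC
 * that priority maps to (mirrors the unpack routine; illustrative). */
static uint8_t unpack_pfc(const uint8_t pfc_per_tc[], const uint8_t map[UP_MAX])
{
	uint8_t pfc_up = 0;

	for (int up = 0; up < UP_MAX; up++) {
		if (pfc_per_tc[map[up]])
			pfc_up |= (uint8_t)(1 << up);
	}
	return pfc_up;
}
```

With priorities paired onto four TCs and PFC enabled on TC1 and TC3, the mask covers UPs 2, 3, 6 and 7 (0xCC).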
+
+void txgbe_dcb_unpack_refill_cee(struct txgbe_dcb_config *cfg, int direction,
+			     u16 *refill)
+{
+	struct txgbe_dcb_tc_config *tc_config = &cfg->tc_config[0];
+	int tc;
+
+	for (tc = 0; tc < TXGBE_DCB_TC_MAX; tc++)
+		refill[tc] = tc_config[tc].path[direction].data_credits_refill;
+}
+
+void txgbe_dcb_unpack_max_cee(struct txgbe_dcb_config *cfg, u16 *max)
+{
+	struct txgbe_dcb_tc_config *tc_config = &cfg->tc_config[0];
+	int tc;
+
+	for (tc = 0; tc < TXGBE_DCB_TC_MAX; tc++)
+		max[tc] = tc_config[tc].desc_credits_max;
+}
+
+void txgbe_dcb_unpack_bwgid_cee(struct txgbe_dcb_config *cfg, int direction,
+			    u8 *bwgid)
+{
+	struct txgbe_dcb_tc_config *tc_config = &cfg->tc_config[0];
+	int tc;
+
+	for (tc = 0; tc < TXGBE_DCB_TC_MAX; tc++)
+		bwgid[tc] = tc_config[tc].path[direction].bwg_id;
+}
+
+void txgbe_dcb_unpack_tsa_cee(struct txgbe_dcb_config *cfg, int direction,
+			   u8 *tsa)
+{
+	struct txgbe_dcb_tc_config *tc_config = &cfg->tc_config[0];
+	int tc;
+
+	for (tc = 0; tc < TXGBE_DCB_TC_MAX; tc++)
+		tsa[tc] = tc_config[tc].path[direction].tsa;
+}
+
 u8 txgbe_dcb_get_tc_from_up(struct txgbe_dcb_config *cfg, int direction, u8 up)
 {
 	struct txgbe_dcb_tc_config *tc_config = &cfg->tc_config[0];
@@ -178,3 +350,11 @@ void txgbe_dcb_unpack_map_cee(struct txgbe_dcb_config *cfg, int direction,
 		map[up] = txgbe_dcb_get_tc_from_up(cfg, direction, up);
 }
 
+/* Helper routine to abstract HW specifics from DCB config ops */
+s32 txgbe_dcb_config_pfc(struct txgbe_hw *hw, u8 pfc_en, u8 *map)
+{
+	return txgbe_dcb_config_pfc_raptor(hw, pfc_en, map);
+}
+}
+
diff --git a/drivers/net/txgbe/base/txgbe_dcb.h b/drivers/net/txgbe/base/txgbe_dcb.h
index 67de5c54b..c679f1d75 100644
--- a/drivers/net/txgbe/base/txgbe_dcb.h
+++ b/drivers/net/txgbe/base/txgbe_dcb.h
@@ -7,6 +7,17 @@
 
 #include "txgbe_type.h"
 
+/* DCB defines */
+/* DCB credit calculation defines */
+#define TXGBE_DCB_CREDIT_QUANTUM	64
+#define TXGBE_DCB_MAX_CREDIT_REFILL	200   /* 200 * 64B = 12800B */
+#define TXGBE_DCB_MAX_TSO_SIZE		(32 * 1024) /* Max TSO pkt size in DCB*/
+#define TXGBE_DCB_MAX_CREDIT		(2 * TXGBE_DCB_MAX_CREDIT_REFILL)
+
+/* 513 for 32KB TSO packet */
+#define TXGBE_DCB_MIN_TSO_CREDIT	\
+	((TXGBE_DCB_MAX_TSO_SIZE / TXGBE_DCB_CREDIT_QUANTUM) + 1)
+
 #define TXGBE_DCB_TX_CONFIG		0
 #define TXGBE_DCB_RX_CONFIG		1
 
@@ -80,7 +91,23 @@ struct txgbe_dcb_config {
 };
 
 int txgbe_dcb_pfc_enable(struct txgbe_hw *hw, u8 tc_num);
+
+/* DCB credits calculation */
+s32 txgbe_dcb_calculate_tc_credits_cee(struct txgbe_hw *,
+				       struct txgbe_dcb_config *, u32, u8);
+
+/* DCB PFC */
+s32 txgbe_dcb_config_pfc(struct txgbe_hw *, u8, u8 *);
+
+/* DCB unpack routines */
+void txgbe_dcb_unpack_pfc_cee(struct txgbe_dcb_config *, u8 *, u8 *);
+void txgbe_dcb_unpack_refill_cee(struct txgbe_dcb_config *, int, u16 *);
+void txgbe_dcb_unpack_max_cee(struct txgbe_dcb_config *, u16 *);
+void txgbe_dcb_unpack_bwgid_cee(struct txgbe_dcb_config *, int, u8 *);
+void txgbe_dcb_unpack_tsa_cee(struct txgbe_dcb_config *, int, u8 *);
 void txgbe_dcb_unpack_map_cee(struct txgbe_dcb_config *, int, u8 *);
 u8 txgbe_dcb_get_tc_from_up(struct txgbe_dcb_config *, int, u8);
 
+#include "txgbe_dcb_hw.h"
+
 #endif /* _TXGBE_DCB_H_ */
diff --git a/drivers/net/txgbe/base/txgbe_dcb_hw.c b/drivers/net/txgbe/base/txgbe_dcb_hw.c
new file mode 100644
index 000000000..68901012b
--- /dev/null
+++ b/drivers/net/txgbe/base/txgbe_dcb_hw.c
@@ -0,0 +1,283 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2015-2020
+ */
+
+#include "txgbe_type.h"
+
+#include "txgbe_dcb.h"
+
+/**
+ * txgbe_dcb_config_rx_arbiter_raptor - Config Rx Data arbiter
+ * @hw: pointer to hardware structure
+ * @refill: refill credits indexed by traffic class
+ * @max: max credits indexed by traffic class
+ * @bwg_id: bandwidth grouping indexed by traffic class
+ * @tsa: transmission selection algorithm indexed by traffic class
+ * @map: priority to tc assignments indexed by priority
+ *
+ * Configure Rx Packet Arbiter and credits for each traffic class.
+ */
+s32 txgbe_dcb_config_rx_arbiter_raptor(struct txgbe_hw *hw, u16 *refill,
+				      u16 *max, u8 *bwg_id, u8 *tsa,
+				      u8 *map)
+{
+	u32 reg = 0;
+	u32 credit_refill = 0;
+	u32 credit_max = 0;
+	u8  i = 0;
+
+	/*
+	 * Disable the arbiter before changing parameters
+	 * (always enable recycle mode; WSP)
+	 */
+	reg = TXGBE_ARBRXCTL_RRM | TXGBE_ARBRXCTL_WSP |
+	      TXGBE_ARBRXCTL_DIA;
+	wr32(hw, TXGBE_ARBRXCTL, reg);
+
+	/*
+	 * Map all UPs to TCs. up_to_tc_bitmap for each TC has the
+	 * corresponding bits set for the UPs that need to be mapped to that
+	 * TC, e.g. if priorities 6 and 7 are to be mapped to a TC then the
+	 * up_to_tc_bitmap value for that TC will be 11000000 in binary.
+	 */
+	reg = 0;
+	for (i = 0; i < TXGBE_DCB_UP_MAX; i++)
+		reg |= (map[i] << (i * TXGBE_RPUP2TC_UP_SHIFT));
+
+	wr32(hw, TXGBE_RPUP2TC, reg);
+
+	/* Configure traffic class credits and priority */
+	for (i = 0; i < TXGBE_DCB_TC_MAX; i++) {
+		credit_refill = refill[i];
+		credit_max = max[i];
+		reg = TXGBE_QARBRXCFG_CRQ(credit_refill) |
+		      TXGBE_QARBRXCFG_MCL(credit_max) |
+		      TXGBE_QARBRXCFG_BWG(bwg_id[i]);
+
+		if (tsa[i] == txgbe_dcb_tsa_strict)
+			reg |= TXGBE_QARBRXCFG_LSP;
+
+		wr32(hw, TXGBE_QARBRXCFG(i), reg);
+	}
+
+	/*
+	 * Configure Rx packet plane (recycle mode; WSP) and
+	 * enable arbiter
+	 */
+	reg = TXGBE_ARBRXCTL_RRM | TXGBE_ARBRXCTL_WSP;
+	wr32(hw, TXGBE_ARBRXCTL, reg);
+
+	return 0;
+}
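The RPUP2TC write above packs the eight UP-to-TC map entries into one register, one small field per user priority. A sketch of that packing; the 4-bit field width used in the test is an assumption standing in for the driver's TXGBE_RPUP2TC_UP_SHIFT:

```c
#include <assert.h>
#include <stdint.h>

/* Pack the 8-entry UP->TC map into one register value, `shift` bits
 * per priority (field width is an assumption, not the real macro). */
static uint32_t pack_up2tc(const uint8_t map[8], unsigned int shift)
{
	uint32_t reg = 0;

	for (int i = 0; i < 8; i++)
		reg |= (uint32_t)map[i] << (i * shift);
	return reg;
}
```

Mapping every priority to TC1 with 4-bit fields yields the easy-to-read value 0x11111111.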
+
+/**
+ * txgbe_dcb_config_tx_desc_arbiter_raptor - Config Tx Desc. arbiter
+ * @hw: pointer to hardware structure
+ * @refill: refill credits indexed by traffic class
+ * @max: max credits indexed by traffic class
+ * @bwg_id: bandwidth grouping indexed by traffic class
+ * @tsa: transmission selection algorithm indexed by traffic class
+ *
+ * Configure Tx Descriptor Arbiter and credits for each traffic class.
+ */
+s32 txgbe_dcb_config_tx_desc_arbiter_raptor(struct txgbe_hw *hw, u16 *refill,
+					   u16 *max, u8 *bwg_id, u8 *tsa)
+{
+	u32 reg, max_credits;
+	u8  i;
+
+	/* Clear the per-Tx queue credits; we use per-TC instead */
+	for (i = 0; i < 128; i++) {
+		wr32(hw, TXGBE_QARBTXCRED(i), 0);
+	}
+
+	/* Configure traffic class credits and priority */
+	for (i = 0; i < TXGBE_DCB_TC_MAX; i++) {
+		max_credits = max[i];
+		reg = TXGBE_QARBTXCFG_MCL(max_credits) |
+		      TXGBE_QARBTXCFG_CRQ(refill[i]) |
+		      TXGBE_QARBTXCFG_BWG(bwg_id[i]);
+
+		if (tsa[i] == txgbe_dcb_tsa_group_strict_cee)
+			reg |= TXGBE_QARBTXCFG_GSP;
+
+		if (tsa[i] == txgbe_dcb_tsa_strict)
+			reg |= TXGBE_QARBTXCFG_LSP;
+
+		wr32(hw, TXGBE_QARBTXCFG(i), reg);
+	}
+
+	/*
+	 * Configure Tx descriptor plane (recycle mode; WSP) and
+	 * enable arbiter
+	 */
+	reg = TXGBE_ARBTXCTL_WSP | TXGBE_ARBTXCTL_RRM;
+	wr32(hw, TXGBE_ARBTXCTL, reg);
+
+	return 0;
+}
+
+/**
+ * txgbe_dcb_config_tx_data_arbiter_raptor - Config Tx Data arbiter
+ * @hw: pointer to hardware structure
+ * @refill: refill credits indexed by traffic class
+ * @max: max credits indexed by traffic class
+ * @bwg_id: bandwidth grouping indexed by traffic class
+ * @tsa: transmission selection algorithm indexed by traffic class
+ * @map: priority to tc assignments indexed by priority
+ *
+ * Configure Tx Packet Arbiter and credits for each traffic class.
+ */
+s32 txgbe_dcb_config_tx_data_arbiter_raptor(struct txgbe_hw *hw, u16 *refill,
+					   u16 *max, u8 *bwg_id, u8 *tsa,
+					   u8 *map)
+{
+	u32 reg;
+	u8 i;
+
+	/*
+	 * Disable the arbiter before changing parameters
+	 * (always enable recycle mode; SP; arb delay)
+	 */
+	reg = TXGBE_PARBTXCTL_SP |
+	      TXGBE_PARBTXCTL_RECYC |
+	      TXGBE_PARBTXCTL_DA;
+	wr32(hw, TXGBE_PARBTXCTL, reg);
+
+	/*
+	 * Map all UPs to TCs. up_to_tc_bitmap for each TC has the
+	 * corresponding bits set for the UPs that need to be mapped to that
+	 * TC, e.g. if priorities 6 and 7 are to be mapped to a TC then the
+	 * up_to_tc_bitmap value for that TC will be 11000000 in binary.
+	 */
+	reg = 0;
+	for (i = 0; i < TXGBE_DCB_UP_MAX; i++)
+		reg |= TXGBE_DCBUP2TC_MAP(i, map[i]);
+
+	wr32(hw, TXGBE_PBRXUP2TC, reg);
+
+	/* Configure traffic class credits and priority */
+	for (i = 0; i < TXGBE_DCB_TC_MAX; i++) {
+		reg = TXGBE_PARBTXCFG_CRQ(refill[i]) |
+		      TXGBE_PARBTXCFG_MCL(max[i]) |
+		      TXGBE_PARBTXCFG_BWG(bwg_id[i]);
+
+		if (tsa[i] == txgbe_dcb_tsa_group_strict_cee)
+			reg |= TXGBE_PARBTXCFG_GSP;
+
+		if (tsa[i] == txgbe_dcb_tsa_strict)
+			reg |= TXGBE_PARBTXCFG_LSP;
+
+		wr32(hw, TXGBE_PARBTXCFG(i), reg);
+	}
+
+	/*
+	 * Configure Tx packet plane (recycle mode; SP; arb delay) and
+	 * enable arbiter
+	 */
+	reg = TXGBE_PARBTXCTL_SP | TXGBE_PARBTXCTL_RECYC;
+	wr32(hw, TXGBE_PARBTXCTL, reg);
+
+	return 0;
+}
+
+/**
+ * txgbe_dcb_config_pfc_raptor - Configure priority flow control
+ * @hw: pointer to hardware structure
+ * @pfc_en: enabled pfc bitmask
+ * @map: priority to tc assignments indexed by priority
+ *
+ * Configure Priority Flow Control (PFC) for each traffic class.
+ */
+s32 txgbe_dcb_config_pfc_raptor(struct txgbe_hw *hw, u8 pfc_en, u8 *map)
+{
+	u32 i, j, fcrtl, reg;
+	u8 max_tc = 0;
+
+	/* Enable Transmit Priority Flow Control */
+	wr32(hw, TXGBE_TXFCCFG, TXGBE_TXFCCFG_PFC);
+
+	/* Enable Receive Priority Flow Control */
+	wr32m(hw, TXGBE_RXFCCFG, TXGBE_RXFCCFG_PFC,
+		pfc_en ? TXGBE_RXFCCFG_PFC : 0);
+
+	for (i = 0; i < TXGBE_DCB_UP_MAX; i++) {
+		if (map[i] > max_tc)
+			max_tc = map[i];
+	}
+
+	/* Configure PFC Tx thresholds per TC */
+	for (i = 0; i <= max_tc; i++) {
+		int enabled = 0;
+
+		for (j = 0; j < TXGBE_DCB_UP_MAX; j++) {
+			if ((map[j] == i) && (pfc_en & (1 << j))) {
+				enabled = 1;
+				break;
+			}
+		}
+
+		if (enabled) {
+			reg = TXGBE_FCWTRHI_TH(hw->fc.high_water[i]) |
+			      TXGBE_FCWTRHI_XOFF;
+			fcrtl = TXGBE_FCWTRLO_TH(hw->fc.low_water[i]) |
+				TXGBE_FCWTRLO_XON;
+			wr32(hw, TXGBE_FCWTRLO(i), fcrtl);
+		} else {
+			/*
+			 * In order to prevent Tx hangs when the internal Tx
+			 * switch is enabled we must set the high water mark
+			 * to the Rx packet buffer size - 24KB.  This allows
+			 * the Tx switch to function even under heavy Rx
+			 * workloads.
+			 */
+			reg = rd32(hw, TXGBE_PBRXSIZE(i)) - 24576;
+			wr32(hw, TXGBE_FCWTRLO(i), 0);
+		}
+
+		wr32(hw, TXGBE_FCWTRHI(i), reg);
+	}
+
+	for (; i < TXGBE_DCB_TC_MAX; i++) {
+		wr32(hw, TXGBE_FCWTRLO(i), 0);
+		wr32(hw, TXGBE_FCWTRHI(i), 0);
+	}
+
+	/* Configure pause time (2 TCs per register) */
+	reg = hw->fc.pause_time | (hw->fc.pause_time << 16);
+	for (i = 0; i < (TXGBE_DCB_TC_MAX / 2); i++)
+		wr32(hw, TXGBE_FCXOFFTM(i), reg);
+
+	/* Configure flow control refresh threshold value */
+	wr32(hw, TXGBE_RXFCRFSH, hw->fc.pause_time / 2);
+
+	return 0;
+}
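The pause-time loop above replicates the 16-bit pause time into both halves of each XOFF-time register (two TCs per register), and the refresh threshold is programmed at half the pause time. A small sketch of those two register values:

```c
#include <assert.h>
#include <stdint.h>

/* One XOFF-time register serves two TCs: the 16-bit pause time is
 * replicated into both 16-bit halves (mirrors the loop above). */
static uint32_t xoff_time_reg(uint16_t pause_time)
{
	return (uint32_t)pause_time | ((uint32_t)pause_time << 16);
}

/* Flow control refresh threshold is half the pause time. */
static uint16_t refresh_threshold(uint16_t pause_time)
{
	return pause_time / 2;
}
```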
+
+/**
+ * txgbe_dcb_config_tc_stats_raptor - Config traffic class statistics
+ * @hw: pointer to hardware structure
+ * @dcb_config: pointer to txgbe_dcb_config structure
+ *
+ * Configure queue statistics registers; all queues belonging to the same
+ * traffic class use a single set of queue statistics counters.
+ */
+s32 txgbe_dcb_config_tc_stats_raptor(struct txgbe_hw *hw,
+				    struct txgbe_dcb_config *dcb_config)
+{
+	u8 tc_count = 8;
+	bool vt_mode = false;
+
+	UNREFERENCED_PARAMETER(hw);
+
+	if (dcb_config != NULL) {
+		tc_count = dcb_config->num_tcs.pg_tcs;
+		vt_mode = dcb_config->vt_mode;
+	}
+
+	if (!((tc_count == 8 && vt_mode == false) || tc_count == 4))
+		return TXGBE_ERR_PARAM;
+
+	return 0;
+}
+
diff --git a/drivers/net/txgbe/base/txgbe_dcb_hw.h b/drivers/net/txgbe/base/txgbe_dcb_hw.h
new file mode 100644
index 000000000..d31a70f1d
--- /dev/null
+++ b/drivers/net/txgbe/base/txgbe_dcb_hw.h
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2015-2020
+ */
+
+#ifndef _TXGBE_DCB_HW_H_
+#define _TXGBE_DCB_HW_H_
+
+/* DCB PFC */
+s32 txgbe_dcb_config_pfc_raptor(struct txgbe_hw *, u8, u8 *);
+
+/* DCB stats */
+s32 txgbe_dcb_config_tc_stats_raptor(struct txgbe_hw *,
+				    struct txgbe_dcb_config *);
+
+/* DCB config arbiters */
+s32 txgbe_dcb_config_tx_desc_arbiter_raptor(struct txgbe_hw *, u16 *, u16 *,
+					   u8 *, u8 *);
+s32 txgbe_dcb_config_tx_data_arbiter_raptor(struct txgbe_hw *, u16 *, u16 *,
+					   u8 *, u8 *, u8 *);
+s32 txgbe_dcb_config_rx_arbiter_raptor(struct txgbe_hw *, u16 *, u16 *, u8 *,
+				      u8 *, u8 *);
+
+#endif /* _TXGBE_DCB_HW_H_ */
diff --git a/drivers/net/txgbe/base/txgbe_hw.c b/drivers/net/txgbe/base/txgbe_hw.c
index 15ab0213d..465106009 100644
--- a/drivers/net/txgbe/base/txgbe_hw.c
+++ b/drivers/net/txgbe/base/txgbe_hw.c
@@ -4,6 +4,7 @@
 
 #include "txgbe_type.h"
 #include "txgbe_phy.h"
+#include "txgbe_dcb.h"
 #include "txgbe_vf.h"
 #include "txgbe_eeprom.h"
 #include "txgbe_mng.h"
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index a72994d08..7a2f16d63 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -401,6 +401,7 @@ eth_txgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
 	struct txgbe_vfta *shadow_vfta = TXGBE_DEV_VFTA(eth_dev);
 	struct txgbe_hwstrip *hwstrip = TXGBE_DEV_HWSTRIP(eth_dev);
 	struct txgbe_dcb_config *dcb_config = TXGBE_DEV_DCB_CONFIG(eth_dev);
+	struct txgbe_bw_conf *bw_conf = TXGBE_DEV_BW_CONF(eth_dev);
 	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
 	const struct rte_memzone *mz;
 	uint32_t ctrl_ext;
@@ -600,6 +601,9 @@ eth_txgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
 	/* enable support intr */
 	txgbe_enable_intr(eth_dev);
 
+	/* initialize bandwidth configuration info */
+	memset(bw_conf, 0, sizeof(struct txgbe_bw_conf));
+
 	return 0;
 }
 
@@ -1181,8 +1185,10 @@ txgbe_dev_start(struct rte_eth_dev *dev)
 		goto error;
 	}
 
+	/* Configure DCB hw */
 	txgbe_configure_pb(dev);
 	txgbe_configure_port(dev);
+	txgbe_configure_dcb(dev);
 
 	err = txgbe_dev_rxtx_start(dev);
 	if (err < 0) {
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index 1166c151d..8a3c56a56 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -82,6 +82,11 @@ struct txgbe_vf_info {
 	uint16_t switch_domain_id;
 };
 
+/* The configuration of bandwidth */
+struct txgbe_bw_conf {
+	uint8_t tc_num; /* Number of TCs. */
+};
+
 /*
  * Structure to store private data for each driver instance (for each port).
  */
@@ -94,6 +99,7 @@ struct txgbe_adapter {
 	struct txgbe_hwstrip        hwstrip;
 	struct txgbe_dcb_config     dcb_config;
 	struct txgbe_vf_info        *vfdata;
+	struct txgbe_bw_conf        bw_conf;
 	bool rx_bulk_alloc_allowed;
 };
 
@@ -132,6 +138,9 @@ int txgbe_vf_representor_uninit(struct rte_eth_dev *ethdev);
 
 #define TXGBE_DEV_VFDATA(dev) \
 	(&((struct txgbe_adapter *)(dev)->data->dev_private)->vfdata)
+#define TXGBE_DEV_BW_CONF(dev) \
+	(&((struct txgbe_adapter *)(dev)->data->dev_private)->bw_conf)
+
 
 /*
  * RX/TX function prototypes
@@ -211,6 +220,7 @@ void txgbe_set_ivar_map(struct txgbe_hw *hw, int8_t direction,
 
 void txgbe_configure_pb(struct rte_eth_dev *dev);
 void txgbe_configure_port(struct rte_eth_dev *dev);
+void txgbe_configure_dcb(struct rte_eth_dev *dev);
 
 int
 txgbe_dev_link_update_share(struct rte_eth_dev *dev,
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index e2ab86568..a1d1c83da 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -2760,6 +2760,365 @@ txgbe_dev_free_queues(struct rte_eth_dev *dev)
 	dev->data->nb_tx_queues = 0;
 }
 
+#define NUM_VFTA_REGISTERS 128
+#define NIC_RX_BUFFER_SIZE 0x200
+
+/**
+ * txgbe_dcb_tx_hw_config - Configure general DCB TX parameters
+ * @dev: pointer to eth_dev structure
+ * @dcb_config: pointer to txgbe_dcb_config structure
+ */
+static void
+txgbe_dcb_tx_hw_config(struct rte_eth_dev *dev,
+		       struct txgbe_dcb_config *dcb_config)
+{
+	uint32_t reg;
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* Disable the Tx desc arbiter */
+	reg = rd32(hw, TXGBE_ARBTXCTL);
+	reg |= TXGBE_ARBTXCTL_DIA;
+	wr32(hw, TXGBE_ARBTXCTL, reg);
+
+	/* Enable DCB for Tx and set the number of TCs */
+	reg = rd32(hw, TXGBE_PORTCTL);
+	reg &= ~TXGBE_PORTCTL_NUMTC_MASK;
+	reg |= TXGBE_PORTCTL_DCB;
+	if (dcb_config->num_tcs.pg_tcs == 8)
+		reg |= TXGBE_PORTCTL_NUMTC_8;
+	else
+		reg |= TXGBE_PORTCTL_NUMTC_4;
+	wr32(hw, TXGBE_PORTCTL, reg);
+
+	/* Enable the Tx desc arbiter */
+	reg = rd32(hw, TXGBE_ARBTXCTL);
+	reg &= ~TXGBE_ARBTXCTL_DIA;
+	wr32(hw, TXGBE_ARBTXCTL, reg);
+}
+
+static void
+txgbe_dcb_rx_config(struct rte_eth_dev *dev,
+		struct txgbe_dcb_config *dcb_config)
+{
+	struct rte_eth_dcb_rx_conf *rx_conf =
+			&dev->data->dev_conf.rx_adv_conf.dcb_rx_conf;
+	struct txgbe_dcb_tc_config *tc;
+	uint8_t i, j;
+
+	dcb_config->num_tcs.pg_tcs = (uint8_t)rx_conf->nb_tcs;
+	dcb_config->num_tcs.pfc_tcs = (uint8_t)rx_conf->nb_tcs;
+
+	/* Initialize User Priority to Traffic Class mapping */
+	for (j = 0; j < TXGBE_DCB_TC_MAX; j++) {
+		tc = &dcb_config->tc_config[j];
+		tc->path[TXGBE_DCB_RX_CONFIG].up_to_tc_bitmap = 0;
+	}
+
+	/* User Priority to Traffic Class mapping */
+	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+		j = rx_conf->dcb_tc[i];
+		tc = &dcb_config->tc_config[j];
+		tc->path[TXGBE_DCB_RX_CONFIG].up_to_tc_bitmap |=
+						(uint8_t)(1 << i);
+	}
+}
+
+static void
+txgbe_dcb_tx_config(struct rte_eth_dev *dev,
+		struct txgbe_dcb_config *dcb_config)
+{
+	struct rte_eth_dcb_tx_conf *tx_conf =
+			&dev->data->dev_conf.tx_adv_conf.dcb_tx_conf;
+	struct txgbe_dcb_tc_config *tc;
+	uint8_t i, j;
+
+	dcb_config->num_tcs.pg_tcs = (uint8_t)tx_conf->nb_tcs;
+	dcb_config->num_tcs.pfc_tcs = (uint8_t)tx_conf->nb_tcs;
+
+	/* Initialize User Priority to Traffic Class mapping */
+	for (j = 0; j < TXGBE_DCB_TC_MAX; j++) {
+		tc = &dcb_config->tc_config[j];
+		tc->path[TXGBE_DCB_TX_CONFIG].up_to_tc_bitmap = 0;
+	}
+
+	/* User Priority to Traffic Class mapping */
+	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+		j = tx_conf->dcb_tc[i];
+		tc = &dcb_config->tc_config[j];
+		tc->path[TXGBE_DCB_TX_CONFIG].up_to_tc_bitmap |=
+						(uint8_t)(1 << i);
+	}
+}
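Both txgbe_dcb_rx_config() and txgbe_dcb_tx_config() invert the rte_eth_conf `dcb_tc[]` priority-to-TC array into per-TC up_to_tc_bitmap masks. A minimal sketch of that inversion, with plain arrays standing in for the driver structures:

```c
#include <assert.h>
#include <stdint.h>

/* Invert a priority->TC array into per-TC priority bitmaps:
 * bit i of bitmap[tc] is set if user priority i maps to tc. */
static void build_up2tc_bitmaps(const uint8_t dcb_tc[8], uint8_t bitmap[8])
{
	int i;

	for (i = 0; i < 8; i++)
		bitmap[i] = 0;
	for (i = 0; i < 8; i++)
		bitmap[dcb_tc[i]] |= (uint8_t)(1 << i);
}
```

This matches the example in the arbiter comments: priorities 6 and 7 mapped to one TC produce a bitmap of 11000000 binary (0xC0) for that TC.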
+
+/**
+ * txgbe_dcb_rx_hw_config - Configure general DCB RX HW parameters
+ * @dev: pointer to eth_dev structure
+ * @dcb_config: pointer to txgbe_dcb_config structure
+ */
+static void
+txgbe_dcb_rx_hw_config(struct rte_eth_dev *dev,
+		       struct txgbe_dcb_config *dcb_config)
+{
+	uint32_t reg;
+	uint32_t vlanctrl;
+	uint8_t i;
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+
+	PMD_INIT_FUNC_TRACE();
+	/*
+	 * Disable the arbiter before changing parameters
+	 * (always enable recycle mode; WSP)
+	 */
+	reg = TXGBE_ARBRXCTL_RRM | TXGBE_ARBRXCTL_WSP | TXGBE_ARBRXCTL_DIA;
+	wr32(hw, TXGBE_ARBRXCTL, reg);
+
+	reg = rd32(hw, TXGBE_PORTCTL);
+	reg &= ~(TXGBE_PORTCTL_NUMTC_MASK | TXGBE_PORTCTL_NUMVT_MASK);
+	if (dcb_config->num_tcs.pg_tcs == 4) {
+		reg |= TXGBE_PORTCTL_NUMTC_4;
+		if (dcb_config->vt_mode)
+			reg |= TXGBE_PORTCTL_NUMVT_32;
+		else
+			wr32(hw, TXGBE_POOLCTL, 0);
+	}
+
+	if (dcb_config->num_tcs.pg_tcs == 8) {
+		reg |= TXGBE_PORTCTL_NUMTC_8;
+		if (dcb_config->vt_mode)
+			reg |= TXGBE_PORTCTL_NUMVT_16;
+		else
+			wr32(hw, TXGBE_POOLCTL, 0);
+	}
+
+	wr32(hw, TXGBE_PORTCTL, reg);
+
+	/* VLANCTL: enable vlan filtering and allow all vlan tags through */
+	vlanctrl = rd32(hw, TXGBE_VLANCTL);
+	vlanctrl |= TXGBE_VLANCTL_VFE; /* enable vlan filters */
+	wr32(hw, TXGBE_VLANCTL, vlanctrl);
+
+	/* VLANTBL - enable all vlan filters */
+	for (i = 0; i < NUM_VFTA_REGISTERS; i++) {
+		wr32(hw, TXGBE_VLANTBL(i), 0xFFFFFFFF);
+	}
+
+	/*
+	 * Configure Rx packet plane (recycle mode; WSP) and
+	 * enable arbiter
+	 */
+	reg = TXGBE_ARBRXCTL_RRM | TXGBE_ARBRXCTL_WSP;
+	wr32(hw, TXGBE_ARBRXCTL, reg);
+}
+
+static void
+txgbe_dcb_hw_arbite_rx_config(struct txgbe_hw *hw, uint16_t *refill,
+			uint16_t *max, uint8_t *bwg_id, uint8_t *tsa, uint8_t *map)
+{
+	txgbe_dcb_config_rx_arbiter_raptor(hw, refill, max, bwg_id,
+					  tsa, map);
+}
+
+static void
+txgbe_dcb_hw_arbite_tx_config(struct txgbe_hw *hw, uint16_t *refill, uint16_t *max,
+			    uint8_t *bwg_id, uint8_t *tsa, uint8_t *map)
+{
+	switch (hw->mac.type) {
+	case txgbe_mac_raptor:
+		txgbe_dcb_config_tx_desc_arbiter_raptor(hw, refill, max, bwg_id, tsa);
+		txgbe_dcb_config_tx_data_arbiter_raptor(hw, refill, max, bwg_id, tsa, map);
+		break;
+	default:
+		break;
+	}
+}
+
+#define DCB_RX_CONFIG  1
+#define DCB_TX_CONFIG  1
+#define DCB_TX_PB      1024
+/**
+ * txgbe_dcb_hw_configure - Enable DCB and configure
+ * general DCB in VT mode and non-VT mode parameters
+ * @dev: pointer to rte_eth_dev structure
+ * @dcb_config: pointer to txgbe_dcb_config structure
+ */
+static int
+txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
+			struct txgbe_dcb_config *dcb_config)
+{
+	int     ret = 0;
+	uint8_t i, pfc_en, nb_tcs;
+	uint16_t pbsize, rx_buffer_size;
+	uint8_t config_dcb_rx = 0;
+	uint8_t config_dcb_tx = 0;
+	uint8_t tsa[TXGBE_DCB_TC_MAX] = {0};
+	uint8_t bwgid[TXGBE_DCB_TC_MAX] = {0};
+	uint16_t refill[TXGBE_DCB_TC_MAX] = {0};
+	uint16_t max[TXGBE_DCB_TC_MAX] = {0};
+	uint8_t map[TXGBE_DCB_TC_MAX] = {0};
+	struct txgbe_dcb_tc_config *tc;
+	uint32_t max_frame = dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+	struct txgbe_bw_conf *bw_conf = TXGBE_DEV_BW_CONF(dev);
+
+	switch (dev->data->dev_conf.rxmode.mq_mode) {
+	case ETH_MQ_RX_DCB:
+	case ETH_MQ_RX_DCB_RSS:
+		dcb_config->vt_mode = false;
+		config_dcb_rx = DCB_RX_CONFIG;
+		/* Get dcb RX configuration parameters from rte_eth_conf */
+		txgbe_dcb_rx_config(dev, dcb_config);
+		/* Configure general DCB RX parameters */
+		txgbe_dcb_rx_hw_config(dev, dcb_config);
+		break;
+	default:
+		PMD_INIT_LOG(ERR, "Incorrect DCB RX mode configuration");
+		break;
+	}
+	switch (dev->data->dev_conf.txmode.mq_mode) {
+	case ETH_MQ_TX_DCB:
+		dcb_config->vt_mode = false;
+		config_dcb_tx = DCB_TX_CONFIG;
+		/* get DCB TX configuration parameters from rte_eth_conf */
+		txgbe_dcb_tx_config(dev, dcb_config);
+		/* Configure general DCB TX parameters */
+		txgbe_dcb_tx_hw_config(dev, dcb_config);
+		break;
+	default:
+		PMD_INIT_LOG(ERR, "Incorrect DCB TX mode configuration");
+		break;
+	}
+
+	nb_tcs = dcb_config->num_tcs.pfc_tcs;
+	/* Unpack map */
+	txgbe_dcb_unpack_map_cee(dcb_config, TXGBE_DCB_RX_CONFIG, map);
+	if (nb_tcs == ETH_4_TCS) {
+		/* Avoid unconfigured priorities mapping to TC0 */
+		uint8_t j = 4;
+		uint8_t mask = 0xFF;
+
+		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES - 4; i++)
+			mask = (uint8_t)(mask & (~(1 << map[i])));
+		for (i = 0; mask && (i < TXGBE_DCB_TC_MAX); i++) {
+			if ((mask & 0x1) && (j < ETH_DCB_NUM_USER_PRIORITIES))
+				map[j++] = i;
+			mask >>= 1;
+		}
+		/* Re-configure 4 TCs BW */
+		for (i = 0; i < nb_tcs; i++) {
+			tc = &dcb_config->tc_config[i];
+			if (bw_conf->tc_num != nb_tcs)
+				tc->path[TXGBE_DCB_TX_CONFIG].bwg_percent =
+					(uint8_t)(100 / nb_tcs);
+			tc->path[TXGBE_DCB_RX_CONFIG].bwg_percent =
+						(uint8_t)(100 / nb_tcs);
+		}
+		for (; i < TXGBE_DCB_TC_MAX; i++) {
+			tc = &dcb_config->tc_config[i];
+			tc->path[TXGBE_DCB_TX_CONFIG].bwg_percent = 0;
+			tc->path[TXGBE_DCB_RX_CONFIG].bwg_percent = 0;
+		}
+	} else {
+		/* Re-configure 8 TCs BW */
+		for (i = 0; i < nb_tcs; i++) {
+			tc = &dcb_config->tc_config[i];
+			if (bw_conf->tc_num != nb_tcs)
+				tc->path[TXGBE_DCB_TX_CONFIG].bwg_percent =
+					(uint8_t)(100 / nb_tcs + (i & 1));
+			tc->path[TXGBE_DCB_RX_CONFIG].bwg_percent =
+				(uint8_t)(100 / nb_tcs + (i & 1));
+		}
+	}
+
+	rx_buffer_size = NIC_RX_BUFFER_SIZE;
+
+	if (config_dcb_rx) {
+		/* Set RX buffer size */
+		pbsize = (uint16_t)(rx_buffer_size / nb_tcs);
+		uint32_t rxpbsize = pbsize << 10;
+
+		for (i = 0; i < nb_tcs; i++) {
+			wr32(hw, TXGBE_PBRXSIZE(i), rxpbsize);
+		}
+		/* zero alloc all unused TCs */
+		for (; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+			wr32(hw, TXGBE_PBRXSIZE(i), 0);
+		}
+	}
+	if (config_dcb_tx) {
+		/* Only support an equally distributed
+		 *  Tx packet buffer strategy.
+		 */
+		uint32_t txpktsize = TXGBE_PBTXSIZE_MAX / nb_tcs;
+		uint32_t txpbthresh = (txpktsize / DCB_TX_PB) - TXGBE_TXPKT_SIZE_MAX;
+
+		for (i = 0; i < nb_tcs; i++) {
+			wr32(hw, TXGBE_PBTXSIZE(i), txpktsize);
+			wr32(hw, TXGBE_PBTXDMATH(i), txpbthresh);
+		}
+		/* Clear unused TCs, if any, to zero buffer size*/
+		for (; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+			wr32(hw, TXGBE_PBTXSIZE(i), 0);
+			wr32(hw, TXGBE_PBTXDMATH(i), 0);
+		}
+	}
+
+	/* Calculate traffic class credits */
+	txgbe_dcb_calculate_tc_credits_cee(hw, dcb_config, max_frame,
+				TXGBE_DCB_TX_CONFIG);
+	txgbe_dcb_calculate_tc_credits_cee(hw, dcb_config, max_frame,
+				TXGBE_DCB_RX_CONFIG);
+
+	if (config_dcb_rx) {
+		/* Unpack CEE standard containers */
+		txgbe_dcb_unpack_refill_cee(dcb_config, TXGBE_DCB_RX_CONFIG, refill);
+		txgbe_dcb_unpack_max_cee(dcb_config, max);
+		txgbe_dcb_unpack_bwgid_cee(dcb_config, TXGBE_DCB_RX_CONFIG, bwgid);
+		txgbe_dcb_unpack_tsa_cee(dcb_config, TXGBE_DCB_RX_CONFIG, tsa);
+		/* Configure PG(ETS) RX */
+		txgbe_dcb_hw_arbite_rx_config(hw, refill, max, bwgid, tsa, map);
+	}
+
+	if (config_dcb_tx) {
+		/* Unpack CEE standard containers */
+		txgbe_dcb_unpack_refill_cee(dcb_config, TXGBE_DCB_TX_CONFIG, refill);
+		txgbe_dcb_unpack_max_cee(dcb_config, max);
+		txgbe_dcb_unpack_bwgid_cee(dcb_config, TXGBE_DCB_TX_CONFIG, bwgid);
+		txgbe_dcb_unpack_tsa_cee(dcb_config, TXGBE_DCB_TX_CONFIG, tsa);
+		/* Configure PG(ETS) TX */
+		txgbe_dcb_hw_arbite_tx_config(hw, refill, max, bwgid, tsa, map);
+	}
+
+	/* Configure queue statistics registers */
+	txgbe_dcb_config_tc_stats_raptor(hw, dcb_config);
+
+	/* Check if the PFC is supported */
+	if (dev->data->dev_conf.dcb_capability_en & ETH_DCB_PFC_SUPPORT) {
+		pbsize = (uint16_t)(rx_buffer_size / nb_tcs);
+		for (i = 0; i < nb_tcs; i++) {
+			/*
+			 * If the TC count is 8 and the default high_water is 48,
+			 * the low_water is 16 by default.
+			 */
+			hw->fc.high_water[i] = (pbsize * 3) / 4;
+			hw->fc.low_water[i] = pbsize / 4;
+			/* Enable pfc for this TC */
+			tc = &dcb_config->tc_config[i];
+			tc->pfc = txgbe_dcb_pfc_enabled;
+		}
+		txgbe_dcb_unpack_pfc_cee(dcb_config, map, &pfc_en);
+		if (dcb_config->num_tcs.pfc_tcs == ETH_4_TCS)
+			pfc_en &= 0x0F;
+		ret = txgbe_dcb_config_pfc(hw, pfc_en, map);
+	}
+
+	return ret;
+}
+
 void txgbe_configure_pb(struct rte_eth_dev *dev)
 {
 	struct rte_eth_conf *dev_conf = &(dev->data->dev_conf);
@@ -2811,6 +3170,30 @@ void txgbe_configure_port(struct rte_eth_dev *dev)
 	wr32(hw, TXGBE_VXLANPORT, 4789);
 }
 
+/**
+ * txgbe_configure_dcb - Configure DCB hardware
+ * @dev: pointer to rte_eth_dev
+ */
+void txgbe_configure_dcb(struct rte_eth_dev *dev)
+{
+	struct txgbe_dcb_config *dcb_cfg = TXGBE_DEV_DCB_CONFIG(dev);
+	struct rte_eth_conf *dev_conf = &(dev->data->dev_conf);
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* check support mq_mode for DCB */
+	if ((dev_conf->rxmode.mq_mode != ETH_MQ_RX_VMDQ_DCB) &&
+	    (dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB) &&
+	    (dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB_RSS))
+		return;
+
+	if (dev->data->nb_rx_queues > ETH_DCB_NUM_QUEUES)
+		return;
+
+	/* Configure DCB hardware */
+	txgbe_dcb_hw_configure(dev, dcb_cfg);
+}
+
 static int __rte_cold
 txgbe_alloc_rx_queue_mbufs(struct txgbe_rx_queue *rxq)
 {
-- 
2.18.4

^ permalink raw reply	[flat|nested] 49+ messages in thread
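The PFC watermark split in txgbe_dcb_hw_configure above (high_water is 3/4 of the per-TC packet buffer, low_water is 1/4) can be checked with a standalone sketch; the 512-unit buffer size is an assumed value for illustration:

```c
#include <stdint.h>

/* Hypothetical mirror of the per-TC watermark split in the patch:
 * high_water = 3/4 of the per-TC packet buffer, low_water = 1/4. */
void compute_watermarks(uint32_t rx_buffer_size, int nb_tcs,
			uint16_t *high, uint16_t *low)
{
	uint16_t pbsize = (uint16_t)(rx_buffer_size / nb_tcs);

	*high = (pbsize * 3) / 4;
	*low = pbsize / 4;
}
```

With 8 TCs and a 512-unit buffer this reproduces the figures mentioned in the patch comment: high_water 48 and low_water 16.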

* [dpdk-dev] [PATCH v1 40/42] net/txgbe: add device promiscuous and allmulticast mode
  2020-09-01 11:50 [dpdk-dev] [PATCH v1 01/42] net/txgbe: add build and doc infrastructure Jiawen Wu
                   ` (37 preceding siblings ...)
  2020-09-01 11:51 ` [dpdk-dev] [PATCH v1 39/42] net/txgbe: configure DCB HW resources Jiawen Wu
@ 2020-09-01 11:51 ` Jiawen Wu
  2020-09-01 11:51 ` [dpdk-dev] [PATCH v1 41/42] net/txgbe: add MTU set operation Jiawen Wu
                   ` (2 subsequent siblings)
  41 siblings, 0 replies; 49+ messages in thread
From: Jiawen Wu @ 2020-09-01 11:51 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Add device promiscuous and allmulticast modes.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/txgbe_ethdev.c | 63 ++++++++++++++++++++++++++++++++
 1 file changed, 63 insertions(+)

diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index 7a2f16d63..a2a8f2726 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -2188,6 +2188,65 @@ txgbe_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 	return txgbe_dev_link_update_share(dev, wait_to_complete);
 }
 
+static int
+txgbe_dev_promiscuous_enable(struct rte_eth_dev *dev)
+{
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+	uint32_t fctrl;
+
+	fctrl = rd32(hw, TXGBE_PSRCTL);
+	fctrl |= (TXGBE_PSRCTL_UCP | TXGBE_PSRCTL_MCP);
+	wr32(hw, TXGBE_PSRCTL, fctrl);
+
+	return 0;
+}
+
+static int
+txgbe_dev_promiscuous_disable(struct rte_eth_dev *dev)
+{
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+	uint32_t fctrl;
+
+	fctrl = rd32(hw, TXGBE_PSRCTL);
+	fctrl &= (~TXGBE_PSRCTL_UCP);
+	if (dev->data->all_multicast == 1)
+		fctrl |= TXGBE_PSRCTL_MCP;
+	else
+		fctrl &= (~TXGBE_PSRCTL_MCP);
+	wr32(hw, TXGBE_PSRCTL, fctrl);
+
+	return 0;
+}
+
+static int
+txgbe_dev_allmulticast_enable(struct rte_eth_dev *dev)
+{
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+	uint32_t fctrl;
+
+	fctrl = rd32(hw, TXGBE_PSRCTL);
+	fctrl |= TXGBE_PSRCTL_MCP;
+	wr32(hw, TXGBE_PSRCTL, fctrl);
+
+	return 0;
+}
+
+static int
+txgbe_dev_allmulticast_disable(struct rte_eth_dev *dev)
+{
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+	uint32_t fctrl;
+
+	if (dev->data->promiscuous == 1)
+		return 0; /* must remain in all_multicast mode */
+
+	fctrl = rd32(hw, TXGBE_PSRCTL);
+	fctrl &= (~TXGBE_PSRCTL_MCP);
+	wr32(hw, TXGBE_PSRCTL, fctrl);
+
+	return 0;
+}
+
 /**
  * It clears the interrupt causes and enables the interrupt.
  * It will be called once only during nic initialized.
@@ -3001,6 +3060,10 @@ static const struct eth_dev_ops txgbe_eth_dev_ops = {
 	.dev_set_link_down          = txgbe_dev_set_link_down,
 	.dev_close                  = txgbe_dev_close,
 	.dev_reset                  = txgbe_dev_reset,
+	.promiscuous_enable         = txgbe_dev_promiscuous_enable,
+	.promiscuous_disable        = txgbe_dev_promiscuous_disable,
+	.allmulticast_enable        = txgbe_dev_allmulticast_enable,
+	.allmulticast_disable       = txgbe_dev_allmulticast_disable,
 	.link_update                = txgbe_dev_link_update,
 	.stats_get                  = txgbe_dev_stats_get,
 	.xstats_get                 = txgbe_dev_xstats_get,
-- 
2.18.4

^ permalink raw reply	[flat|nested] 49+ messages in thread
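The PSRCTL bit handling in txgbe_dev_promiscuous_disable above has one subtlety: multicast promiscuous (MCP) must stay set if allmulticast mode is still enabled. A standalone sketch of that logic, using stand-in bit positions (the real TXGBE_PSRCTL_UCP/MCP encodings are defined in the base code and may differ):

```c
#include <stdint.h>

/* Stand-in bit values; the real TXGBE_PSRCTL_UCP/MCP encodings live in the
 * base code and are not shown in this patch. */
#define PSRCTL_UCP (1u << 0)
#define PSRCTL_MCP (1u << 1)

/* Mirrors txgbe_dev_promiscuous_disable: drop unicast promiscuous, but keep
 * multicast promiscuous if allmulticast mode is still on. */
uint32_t psrctl_on_promisc_disable(uint32_t fctrl, int all_multicast)
{
	fctrl &= ~PSRCTL_UCP;
	if (all_multicast)
		fctrl |= PSRCTL_MCP;
	else
		fctrl &= ~PSRCTL_MCP;
	return fctrl;
}
```

The symmetric case is handled in txgbe_dev_allmulticast_disable, which returns early while promiscuous mode is active for the same reason.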

* [dpdk-dev] [PATCH v1 41/42] net/txgbe: add MTU set operation
  2020-09-01 11:50 [dpdk-dev] [PATCH v1 01/42] net/txgbe: add build and doc infrastructure Jiawen Wu
                   ` (38 preceding siblings ...)
  2020-09-01 11:51 ` [dpdk-dev] [PATCH v1 40/42] net/txgbe: add device promiscuous and allmulticast mode Jiawen Wu
@ 2020-09-01 11:51 ` Jiawen Wu
  2020-09-01 11:51 ` [dpdk-dev] [PATCH v1 42/42] net/txgbe: add register dump support Jiawen Wu
  2020-09-09 17:48 ` [dpdk-dev] [PATCH v1 01/42] net/txgbe: add build and doc infrastructure Ferruh Yigit
  41 siblings, 0 replies; 49+ messages in thread
From: Jiawen Wu @ 2020-09-01 11:51 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Add MTU set operation.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/base/txgbe_type.h |  2 ++
 drivers/net/txgbe/txgbe_ethdev.c    | 41 +++++++++++++++++++++++++++++
 2 files changed, 43 insertions(+)

diff --git a/drivers/net/txgbe/base/txgbe_type.h b/drivers/net/txgbe/base/txgbe_type.h
index fcc44ece8..8d1a3a986 100644
--- a/drivers/net/txgbe/base/txgbe_type.h
+++ b/drivers/net/txgbe/base/txgbe_type.h
@@ -652,6 +652,8 @@ struct txgbe_hw {
 	void IOMEM *isb_mem;
 	u16 nb_rx_queues;
 	u16 nb_tx_queues;
+
+	u32 mode;
 	enum txgbe_link_status {
 		TXGBE_LINK_STATUS_NONE = 0,
 		TXGBE_LINK_STATUS_KX,
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index a2a8f2726..8a6b7e483 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -2744,6 +2744,46 @@ txgbe_set_default_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *addr)
 	return 0;
 }
 
+static int
+txgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+{
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+	struct rte_eth_dev_info dev_info;
+	uint32_t frame_size = mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
+	struct rte_eth_dev_data *dev_data = dev->data;
+	int ret;
+
+	ret = txgbe_dev_info_get(dev, &dev_info);
+	if (ret != 0)
+		return ret;
+
+	/* check that mtu is within the allowed range */
+	if ((mtu < RTE_ETHER_MIN_MTU) || (frame_size > dev_info.max_rx_pktlen))
+		return -EINVAL;
+
+	/* If device is started, refuse mtu that requires the support of
+	 * scattered packets when this feature has not been enabled before.
+	 */
+	if (dev_data->dev_started && !dev_data->scattered_rx &&
+	    (frame_size + 2 * TXGBE_VLAN_TAG_SIZE >
+	     dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM)) {
+		PMD_INIT_LOG(ERR, "Stop port first.");
+		return -EINVAL;
+	}
+
+	/* update max frame size */
+	dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
+
+	if (hw->mode)
+		wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
+			TXGBE_FRAME_SIZE_MAX);
+	else
+		wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
+			TXGBE_FRMSZ_MAX(dev->data->dev_conf.rxmode.max_rx_pkt_len));
+
+	return 0;
+}
+
 static int
 txgbe_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
 {
@@ -3076,6 +3116,7 @@ static const struct eth_dev_ops txgbe_eth_dev_ops = {
 	.fw_version_get             = txgbe_fw_version_get,
 	.dev_infos_get              = txgbe_dev_info_get,
 	.dev_supported_ptypes_get   = txgbe_dev_supported_ptypes_get,
+	.mtu_set                    = txgbe_dev_mtu_set,
 	.vlan_filter_set            = txgbe_vlan_filter_set,
 	.vlan_tpid_set              = txgbe_vlan_tpid_set,
 	.vlan_offload_set           = txgbe_vlan_offload_set,
-- 
2.18.4

^ permalink raw reply	[flat|nested] 49+ messages in thread
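The range check in txgbe_dev_mtu_set above derives the frame size from the MTU plus the standard Ethernet header (14 bytes) and CRC (4 bytes). A standalone sketch of that arithmetic; the constants mirror RTE_ETHER_HDR_LEN, RTE_ETHER_CRC_LEN and RTE_ETHER_MIN_MTU, and max_rx_pktlen is passed in directly rather than read from dev_info:

```c
#include <stdint.h>

/* Standard Ethernet overhead used by the patch's frame-size check. */
#define ETHER_HDR_LEN 14
#define ETHER_CRC_LEN 4
#define ETHER_MIN_MTU 68

/* Mirrors the range check in txgbe_dev_mtu_set; returns -1 (the driver
 * returns -EINVAL) when the MTU is out of the allowed range. */
int mtu_to_frame_size(uint16_t mtu, uint32_t max_rx_pktlen,
		      uint32_t *frame_size)
{
	uint32_t fs = (uint32_t)mtu + ETHER_HDR_LEN + ETHER_CRC_LEN;

	if (mtu < ETHER_MIN_MTU || fs > max_rx_pktlen)
		return -1;
	*frame_size = fs;
	return 0;
}
```

For the default 1500-byte MTU this yields the familiar 1518-byte maximum frame size.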

* [dpdk-dev] [PATCH v1 42/42] net/txgbe: add register dump support
  2020-09-01 11:50 [dpdk-dev] [PATCH v1 01/42] net/txgbe: add build and doc infrastructure Jiawen Wu
                   ` (39 preceding siblings ...)
  2020-09-01 11:51 ` [dpdk-dev] [PATCH v1 41/42] net/txgbe: add MTU set operation Jiawen Wu
@ 2020-09-01 11:51 ` Jiawen Wu
  2020-09-09 17:48 ` [dpdk-dev] [PATCH v1 01/42] net/txgbe: add build and doc infrastructure Ferruh Yigit
  41 siblings, 0 replies; 49+ messages in thread
From: Jiawen Wu @ 2020-09-01 11:51 UTC (permalink / raw)
  To: dev; +Cc: Jiawen Wu

Add register dump support.

Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
 drivers/net/txgbe/base/txgbe_type.h  |   1 +
 drivers/net/txgbe/txgbe_ethdev.c     | 113 +++++++++++++++++++++++++++
 drivers/net/txgbe/txgbe_regs_group.h |  54 +++++++++++++
 3 files changed, 168 insertions(+)
 create mode 100644 drivers/net/txgbe/txgbe_regs_group.h

diff --git a/drivers/net/txgbe/base/txgbe_type.h b/drivers/net/txgbe/base/txgbe_type.h
index 8d1a3a986..0d3d8d99d 100644
--- a/drivers/net/txgbe/base/txgbe_type.h
+++ b/drivers/net/txgbe/base/txgbe_type.h
@@ -644,6 +644,7 @@ struct txgbe_hw {
 	u16 vendor_id;
 	u16 subsystem_device_id;
 	u16 subsystem_vendor_id;
+	u8 revision_id;
 	bool adapter_stopped;
 	bool allow_unsupported_sfp;
 	bool need_crosstalk_fix;
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index 8a6b7e483..aca595862 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -34,6 +34,72 @@
 #include "base/txgbe.h"
 #include "txgbe_ethdev.h"
 #include "txgbe_rxtx.h"
+#include "txgbe_regs_group.h"
+
+static const struct reg_info txgbe_regs_general[] = {
+	{TXGBE_RST, 1, 1, "TXGBE_RST"},
+	{TXGBE_STAT, 1, 1, "TXGBE_STAT"},
+	{TXGBE_PORTCTL, 1, 1, "TXGBE_PORTCTL"},
+	{TXGBE_SDP, 1, 1, "TXGBE_SDP"},
+	{TXGBE_SDPCTL, 1, 1, "TXGBE_SDPCTL"},
+	{TXGBE_LEDCTL, 1, 1, "TXGBE_LEDCTL"},
+	{0, 0, 0, ""}
+};
+
+static const struct reg_info txgbe_regs_nvm[] = {
+	{0, 0, 0, ""}
+};
+
+static const struct reg_info txgbe_regs_interrupt[] = {
+	{0, 0, 0, ""}
+};
+
+static const struct reg_info txgbe_regs_fctl_others[] = {
+	{0, 0, 0, ""}
+};
+
+static const struct reg_info txgbe_regs_rxdma[] = {
+	{0, 0, 0, ""}
+};
+
+static const struct reg_info txgbe_regs_rx[] = {
+	{0, 0, 0, ""}
+};
+
+static struct reg_info txgbe_regs_tx[] = {
+	{0, 0, 0, ""}
+};
+
+static const struct reg_info txgbe_regs_wakeup[] = {
+	{0, 0, 0, ""}
+};
+
+static const struct reg_info txgbe_regs_dcb[] = {
+	{0, 0, 0, ""}
+};
+
+static const struct reg_info txgbe_regs_mac[] = {
+	{0, 0, 0, ""}
+};
+
+static const struct reg_info txgbe_regs_diagnostic[] = {
+	{0, 0, 0, ""},
+};
+
+/* PF registers */
+static const struct reg_info *txgbe_regs_others[] = {
+				txgbe_regs_general,
+				txgbe_regs_nvm,
+				txgbe_regs_interrupt,
+				txgbe_regs_fctl_others,
+				txgbe_regs_rxdma,
+				txgbe_regs_rx,
+				txgbe_regs_tx,
+				txgbe_regs_wakeup,
+				txgbe_regs_dcb,
+				txgbe_regs_mac,
+				txgbe_regs_diagnostic,
+				NULL};
 
 static int  txgbe_dev_set_link_up(struct rte_eth_dev *dev);
 static int  txgbe_dev_set_link_down(struct rte_eth_dev *dev);
@@ -2971,6 +3037,52 @@ txgbe_dev_set_mc_addr_list(struct rte_eth_dev *dev,
 					 txgbe_dev_addr_list_itr, TRUE);
 }
 
+static int
+txgbe_get_reg_length(struct rte_eth_dev *dev __rte_unused)
+{
+	int count = 0;
+	int g_ind = 0;
+	const struct reg_info *reg_group;
+	const struct reg_info **reg_set = txgbe_regs_others;
+
+	while ((reg_group = reg_set[g_ind++]))
+		count += txgbe_regs_group_count(reg_group);
+
+	return count;
+}
+
+static int
+txgbe_get_regs(struct rte_eth_dev *dev,
+	      struct rte_dev_reg_info *regs)
+{
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+	uint32_t *data = regs->data;
+	int g_ind = 0;
+	int count = 0;
+	const struct reg_info *reg_group;
+	const struct reg_info **reg_set = txgbe_regs_others;
+
+	if (data == NULL) {
+		regs->length = txgbe_get_reg_length(dev);
+		regs->width = sizeof(uint32_t);
+		return 0;
+	}
+
+	/* Support only full register dump */
+	if ((regs->length == 0) ||
+	    (regs->length == (uint32_t)txgbe_get_reg_length(dev))) {
+		regs->version = hw->mac.type << 24 |
+				hw->revision_id << 16 |
+				hw->device_id;
+		while ((reg_group = reg_set[g_ind++]))
+			count += txgbe_read_regs_group(dev, &data[count],
+						      reg_group);
+		return 0;
+	}
+
+	return -ENOTSUP;
+}
+
 static int
 txgbe_get_eeprom_length(struct rte_eth_dev *dev)
 {
@@ -3147,6 +3259,7 @@ static const struct eth_dev_ops txgbe_eth_dev_ops = {
 	.set_mc_addr_list           = txgbe_dev_set_mc_addr_list,
 	.rxq_info_get               = txgbe_rxq_info_get,
 	.txq_info_get               = txgbe_txq_info_get,
+	.get_reg                    = txgbe_get_regs,
 	.get_eeprom_length          = txgbe_get_eeprom_length,
 	.get_eeprom                 = txgbe_get_eeprom,
 	.set_eeprom                 = txgbe_set_eeprom,
diff --git a/drivers/net/txgbe/txgbe_regs_group.h b/drivers/net/txgbe/txgbe_regs_group.h
new file mode 100644
index 000000000..6f8f0bc29
--- /dev/null
+++ b/drivers/net/txgbe/txgbe_regs_group.h
@@ -0,0 +1,54 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2015-2020
+ */
+
+#ifndef _TXGBE_REGS_GROUP_H_
+#define _TXGBE_REGS_GROUP_H_
+
+#include "txgbe_ethdev.h"
+
+struct txgbe_hw;
+struct reg_info {
+	uint32_t base_addr;
+	uint32_t count;
+	uint32_t stride;
+	const char *name;
+};
+
+static inline int
+txgbe_read_regs(struct txgbe_hw *hw, const struct reg_info *reg,
+	uint32_t *reg_buf)
+{
+	unsigned int i;
+
+	for (i = 0; i < reg->count; i++)
+		reg_buf[i] = rd32(hw,
+					reg->base_addr + i * reg->stride);
+	return reg->count;
+};
+
+static inline int
+txgbe_regs_group_count(const struct reg_info *regs)
+{
+	int count = 0;
+	int i = 0;
+
+	while (regs[i].count)
+		count += regs[i++].count;
+	return count;
+};
+
+static inline int
+txgbe_read_regs_group(struct rte_eth_dev *dev, uint32_t *reg_buf,
+					  const struct reg_info *regs)
+{
+	int count = 0;
+	int i = 0;
+	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
+
+	while (regs[i].count)
+		count += txgbe_read_regs(hw, &regs[i++], &reg_buf[count]);
+	return count;
+};
+
+#endif /* _TXGBE_REGS_GROUP_H_ */
-- 
2.18.4

^ permalink raw reply	[flat|nested] 49+ messages in thread
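The helpers added in txgbe_regs_group.h above walk reg_info tables that end in a zero-count sentinel entry. A standalone sketch of the counting walk, with a made-up table standing in for txgbe_regs_general (the addresses and names are not real):

```c
#include <stdint.h>

struct reg_info {
	uint32_t base_addr;
	uint32_t count;
	uint32_t stride;
	const char *name;
};

/* Same walk as txgbe_regs_group_count(): sum the 'count' fields until the
 * zero-count sentinel entry is reached. */
int regs_group_count(const struct reg_info *regs)
{
	int count = 0;
	int i = 0;

	while (regs[i].count)
		count += regs[i++].count;
	return count;
}

/* Made-up register group for illustration only. */
const struct reg_info demo_regs[] = {
	{0x1000, 1, 4, "DEMO_RST"},
	{0x2000, 2, 4, "DEMO_STAT"},
	{0x3000, 4, 8, "DEMO_QUEUE"},
	{0, 0, 0, ""}
};
```

txgbe_read_regs_group() then uses the same sentinel walk, reading 'count' registers at 'stride' spacing from each entry's base address into a flat buffer.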

* Re: [dpdk-dev] [PATCH v1 01/42] net/txgbe: add build and doc infrastructure
  2020-09-01 11:50 [dpdk-dev] [PATCH v1 01/42] net/txgbe: add build and doc infrastructure Jiawen Wu
                   ` (40 preceding siblings ...)
  2020-09-01 11:51 ` [dpdk-dev] [PATCH v1 42/42] net/txgbe: add register dump support Jiawen Wu
@ 2020-09-09 17:48 ` Ferruh Yigit
  41 siblings, 0 replies; 49+ messages in thread
From: Ferruh Yigit @ 2020-09-09 17:48 UTC (permalink / raw)
  To: Jiawen Wu, dev

On 9/1/2020 12:50 PM, Jiawen Wu wrote:
> Add the bare minimum PMD library and doc build infrastructure, and claim the maintainership for the txgbe PMD.
> 
> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>

<...>

>  
> +Wangxun txgbe
> +M: Jiawen Wu <jiawenwu@trustnetic.com>
> +M: Jian Wang <jianwang@trustnetic.com>
> +F: drivers/net/txgbe/
> +F: doc/guides/nics/txgbe.rst
> +F: doc/guides/nics/features/txgbe.ini
> +

You can move the block above vmxnet3, since the paravirtual device block
starts with vmxnet3 (yes, the boundaries are not very clear)

<...>

> --- a/config/common_base
> +++ b/config/common_base
> @@ -389,6 +389,16 @@ CONFIG_RTE_LIBRTE_MLX5_VDPA_PMD=n
>  CONFIG_RTE_IBVERBS_LINK_DLOPEN=n
>  CONFIG_RTE_IBVERBS_LINK_STATIC=n
>  
> +#
> +# Compile burst-oriented TXGBE PMD driver
> +#
> +CONFIG_RTE_LIBRTE_TXGBE_PMD=y
> +CONFIG_RTE_LIBRTE_TXGBE_DEBUG_RX=n
> +CONFIG_RTE_LIBRTE_TXGBE_DEBUG_TX=n
> +CONFIG_RTE_LIBRTE_TXGBE_DEBUG_TX_FREE=n
> +CONFIG_RTE_LIBRTE_TXGBE_PF_DISABLE_STRIP_CRC=n
> +CONFIG_RTE_LIBRTE_TXGBE_BYPASS=n


Make support is gone; in the next version can you please drop all Make build
related changes?
Also, it is harder to add compile-time flags with meson, so it is better to
eliminate them as much as possible.

<...>

> +++ b/doc/guides/nics/features/txgbe.ini
> @@ -0,0 +1,52 @@
> +;
> +; Supported features of the 'txgbe' network poll mode driver.
> +;
> +; Refer to default.ini for the full list of available PMD features.
> +;
> +[Features]
> +Speed capabilities   = Y
> +Link status          = Y
> +Link status event    = Y
> +Rx interrupt         = Y
> +Queue start/stop     = Y
> +MTU update           = Y
> +Jumbo frame          = Y
> +Scattered Rx         = Y
> +LRO                  = Y
> +TSO                  = Y
> +Promiscuous mode     = Y
> +Allmulticast mode    = Y
> +Unicast MAC filter   = Y
> +Multicast MAC filter = Y
> +RSS hash             = Y
> +RSS key update       = Y
> +RSS reta update      = Y
> +DCB                  = Y
> +VLAN filter          = Y
> +Flow control         = Y
> +Flow API             = Y
> +Rate limitation      = Y
> +Traffic mirroring    = Y
> +Inline crypto        = Y
> +CRC offload          = P
> +VLAN offload         = P
> +QinQ offload         = P
> +L3 checksum offload  = P
> +L4 checksum offload  = P
> +MACsec offload       = P
> +Inner L3 checksum    = P
> +Inner L4 checksum    = P
> +Packet type parsing  = Y
> +Timesync             = Y
> +Rx descriptor status = Y
> +Tx descriptor status = Y
> +Basic stats          = Y
> +Extended stats       = Y
> +Stats per queue      = Y
> +FW version           = Y
> +EEPROM dump          = Y
> +Module EEPROM dump   = Y
> +Multiprocess aware   = Y
> +BSD nic_uio          = Y
> +Linux UIO            = Y
> +Linux VFIO           = Y

This file should be updated as each claimed feature is added into the code,
instead of marking them all in one go.

> diff --git a/doc/guides/nics/txgbe.rst b/doc/guides/nics/txgbe.rst
> new file mode 100644
> index 000000000..133e17bc0
> --- /dev/null
> +++ b/doc/guides/nics/txgbe.rst
> @@ -0,0 +1,67 @@
> +..  SPDX-License-Identifier: BSD-3-Clause
> +    Copyright(c) 2015-2020.
> +
> +TXGBE Poll Mode Driver
> +======================
> +
> +The TXGBE PMD (librte_pmd_txgbe) provides poll mode driver support
> +for Wangxun 10 Gigabit Ethernet NICs.

Can you please add a link to the NIC? I can see a link exists below, but it is
to the general product web page; it would be good to have details of this
specific NIC.

> +
> +Features
> +--------
> +
> +- Multiple queues for TX and RX
> +- Receiver Side Scaling (RSS)
> +- MAC/VLAN filtering
> +- Packet type information
> +- Checksum offload
> +- VLAN/QinQ stripping and inserting
> +- TSO offload
> +- Promiscuous mode
> +- Multicast mode
> +- Port hardware statistics
> +- Jumbo frames
> +- Link state information
> +- Link flow control
> +- Interrupt mode for RX
> +- Scattered and gather for TX and RX
> +- DCB
> +- IEEE 1588
> +- FW version
> +- LRO
> +- Generic flow API


Similar comment to the .ini file: the feature list should be built up
gradually as the code adds the mentioned features.

> +
> +Prerequisites
> +-------------
> +
> +- Learning about Wangxun 10 Gigabit Ethernet NICs using
> +  `<https://www.net-swift.com/c/product.html>`_.

Not sure this is a prerequisite :) What do you think about moving the link to
the "TXGBE Poll Mode Driver" section?

> +
> +- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to setup the basic DPDK environment.
> +
> +Pre-Installation Configuration
> +------------------------------
> +
> +Config File Options
> +~~~~~~~~~~~~~~~~~~~
> +
> +The following options can be modified in the ``config`` file.
> +
> +- ``CONFIG_RTE_LIBRTE_TXGBE_PMD`` (default ``y``)
> +
> +  Toggle compilation of the ``librte_pmd_txgbe`` driver.
> +
> +- ``CONFIG_RTE_LIBRTE_TXGBE_DEBUG_*`` (default ``n``)
> +
> +  Toggle display of generic debugging messages.


These also should go away since Makefile is going away.

<...>

> +++ b/drivers/net/txgbe/meson.build
> @@ -0,0 +1,9 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2015-2020
> +
> +cflags += ['-DRTE_LIBRTE_TXGBE_BYPASS']
>

Why is this compile flag required? At least it is not needed in this patch;
can you add it when it is used?

And can it be removed completely, or converted to a runtime config such as a
device parameter?
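To make the suggestion above concrete: a compile-time flag such as RTE_LIBRTE_TXGBE_BYPASS could become a runtime device argument (e.g. "bypass=1") parsed at probe time. The real driver would use the rte_kvargs API for this; the standalone sketch below only illustrates the idea with plain string handling, and the "bypass" key name is hypothetical:

```c
#include <string.h>
#include <stddef.h>

/* Toy stand-in for devargs parsing: return 1 if the (hypothetical)
 * "bypass=1" key=value pair appears in the devargs string. A real
 * implementation would use rte_kvargs_parse()/rte_kvargs_process(). */
int devarg_bypass_enabled(const char *devargs)
{
	const char *p;

	if (devargs == NULL)
		return 0;
	p = strstr(devargs, "bypass=");
	if (p == NULL)
		return 0;
	return p[strlen("bypass=")] == '1';
}
```

The probe path would evaluate this once and store the result in the adapter state, replacing the compile-time #ifdef.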

<...>

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [dpdk-dev] [PATCH v1 02/42] net/txgbe: add ethdev probe and remove
  2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 02/42] net/txgbe: add ethdev probe and remove Jiawen Wu
@ 2020-09-09 17:50   ` Ferruh Yigit
  0 siblings, 0 replies; 49+ messages in thread
From: Ferruh Yigit @ 2020-09-09 17:50 UTC (permalink / raw)
  To: Jiawen Wu, dev

On 9/1/2020 12:50 PM, Jiawen Wu wrote:
> Add basic PCIe ethdev probe and remove.
> 
> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>

<...>

> +++ b/drivers/net/txgbe/base/meson.build
> @@ -0,0 +1,21 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2015-2020
> +
> +sources = [
> +
> +]
> +
> +error_cflags = ['-Wno-unused-value',
> +				'-Wno-unused-parameter',
> +				'-Wno-unused-but-set-variable']

Why are these warnings disabled? Can't they be fixed in the code instead? Can
you please remove them?

<...>

> --- a/drivers/net/txgbe/txgbe_ethdev.c
> +++ b/drivers/net/txgbe/txgbe_ethdev.c
> @@ -2,3 +2,164 @@
>   * Copyright(c) 2015-2020
>   */
>  
> +#include <sys/queue.h>
> +#include <stdio.h>
> +#include <errno.h>
> +#include <stdint.h>
> +#include <string.h>
> +#include <unistd.h>
> +#include <stdarg.h>
> +#include <inttypes.h>
> +#include <netinet/in.h>
> +#include <rte_byteorder.h>
> +#include <rte_common.h>
> +#include <rte_cycles.h>
> +#include <rte_ethdev_driver.h>
> +#include <rte_ethdev_pci.h>
> +
> +#include <rte_interrupts.h>
> +#include <rte_log.h>
> +#include <rte_debug.h>
> +#include <rte_pci.h>
> +#include <rte_branch_prediction.h>
> +#include <rte_memory.h>
> +#include <rte_eal.h>
> +#include <rte_alarm.h>
> +#include <rte_ether.h>
> +#include <rte_malloc.h>
> +#include <rte_random.h>
> +#include <rte_dev.h>

Are all these headers needed at this stage? Can they be added as they are needed?

<...>

> +static int
> +eth_txgbe_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
> +		struct rte_pci_device *pci_dev)
> +{
> +	char name[RTE_ETH_NAME_MAX_LEN];
> +	struct rte_eth_dev *pf_ethdev;
> +	struct rte_eth_devargs eth_da;
> +	int i, retval;
> +
> +	if (pci_dev->device.devargs) {
> +		retval = rte_eth_devargs_parse(pci_dev->device.devargs->args,
> +				&eth_da);
> +		if (retval)
> +			return retval;
> +	} else
> +		memset(&eth_da, 0, sizeof(eth_da));
> +
> +	retval = rte_eth_dev_create(&pci_dev->device, pci_dev->device.name,
> +		sizeof(struct txgbe_adapter),
> +		eth_dev_pci_specific_init, pci_dev,
> +		eth_txgbe_dev_init, NULL);

Better to indent with a double tab to differentiate the line continuation.

<...>

> +
> +		if (retval)
> +			PMD_DRV_LOG(ERR, "failed to create txgbe vf "
> +				"representor %s.", name);

You can join the split log message:
PMD_DRV_LOG(ERR,
	"failed to create txgbe vf representor %s.",
	name);


> +	}
> +
> +	return 0;
> +}
> +
> +static int eth_txgbe_pci_remove(struct rte_pci_device *pci_dev)
> +{
> +	struct rte_eth_dev *ethdev;
> +
> +	ethdev = rte_eth_dev_allocated(pci_dev->device.name);
> +	if (!ethdev)
> +		return -ENODEV;
> +
> +	if (ethdev->data->dev_flags & RTE_ETH_DEV_REPRESENTOR)

At least 'txgbe_vf_representor_init()' should set the 'RTE_ETH_DEV_REPRESENTOR'
in this patch for this check to make sense.

<...>

> +
> +#ifdef RTE_LIBRTE_TXGBE_DEBUG_INIT
> +#define PMD_TLOG_INIT(level, fmt, args...) \
> +	rte_log(RTE_LOG_ ## level, txgbe_logtype_init, \
> +		"%s(): " fmt, __func__, ##args)
> +#else
> +#define PMD_TLOG_INIT(level, fmt, args...)   do { } while (0)
> +#endif
> +
> +#ifdef RTE_LIBRTE_TXGBE_DEBUG_DRIVER
> +#define PMD_TLOG_DRIVER(level, fmt, args...) \
> +	rte_log(RTE_LOG_ ## level, txgbe_logtype_driver, \
> +		"%s(): " fmt, __func__, ##args)
> +#else
> +#define PMD_TLOG_DRIVER(level, fmt, args...) do { } while (0)
> +#endif

There are no config options for 'RTE_LIBRTE_TXGBE_DEBUG_INIT' &
'RTE_LIBRTE_TXGBE_DEBUG_DRIVER', and above there are already dynamic log types
for them, so these look redundant.

> +
> +/*
> + * PMD_DEBUG_LOG: for debugger
> + */
> +#define TLOG_EMERG(fmt, args...)    PMD_TLOG_DRIVER(EMERG, fmt, ##args)
> +#define TLOG_ALERT(fmt, args...)    PMD_TLOG_DRIVER(ALERT, fmt, ##args)
> +#define TLOG_CRIT(fmt, args...)     PMD_TLOG_DRIVER(CRIT, fmt, ##args)
> +#define TLOG_ERR(fmt, args...)      PMD_TLOG_DRIVER(ERR, fmt, ##args)
> +#define TLOG_WARN(fmt, args...)     PMD_TLOG_DRIVER(WARNING, fmt, ##args)
> +#define TLOG_NOTICE(fmt, args...)   PMD_TLOG_DRIVER(NOTICE, fmt, ##args)
> +#define TLOG_INFO(fmt, args...)     PMD_TLOG_DRIVER(INFO, fmt, ##args)
> +#define TLOG_DEBUG(fmt, args...)    PMD_TLOG_DRIVER(DEBUG, fmt, ##args)

These can be dropped as well if 'PMD_TLOG_DRIVER' is removed.

> +
> +/* to be deleted */
> +#define DEBUGOUT(fmt, args...)    TLOG_DEBUG(fmt, ##args)
> +#define PMD_INIT_FUNC_TRACE()     TLOG_DEBUG(" >>")
> +#define DEBUGFUNC(fmt)            TLOG_DEBUG(fmt)

It looks like deleting them was forgotten.

> +
> +/*
> + * PMD_TEMP_LOG: for tester
> + */
> +#ifdef RTE_LIBRTE_TXGBE_DEBUG
> +#define wjmsg_line(fmt, ...) \
> +    do { \
> +	RTE_LOG(CRIT, PMD, "%s(%d): " fmt, \
> +	       __FUNCTION__, __LINE__, ## __VA_ARGS__); \
> +    } while (0)
> +#define wjmsg_stack(fmt, ...) \
> +    do { \
> +	wjmsg_line(fmt, ## __VA_ARGS__); \
> +	rte_dump_stack(); \
> +    } while (0)
> +#define wjmsg wjmsg_line
> +
> +#define wjdump(mb) { \
> +	int j; char buf[128] = ""; \
> +	wjmsg("data_len=%d pkt_len=%d vlan_tci=%d " \
> +		"packet_type=0x%08x ol_flags=0x%016lx " \
> +		"hash.rss=0x%08x hash.fdir.hash=0x%04x hash.fdir.id=%d\n", \
> +		mb->data_len, mb->pkt_len, mb->vlan_tci, \
> +		mb->packet_type, mb->ol_flags, \
> +		mb->hash.rss, mb->hash.fdir.hash, mb->hash.fdir.id); \
> +	for (j = 0; j < mb->data_len; j++) { \
> +		sprintf(buf + strlen(buf), "%02x ", \
> +			((uint8_t *)(mb->buf_addr) + mb->data_off)[j]); \
> +		if (j % 8 == 7) {\
> +			wjmsg("%s\n", buf); \
> +			buf[0] = '\0'; \
> +		} \
> +	} \
> +	wjmsg("%s\n", buf); \
> +}
> +#else /* RTE_LIBRTE_TXGBE_DEBUG */
> +#define wjmsg_line(fmt, args...) do {} while (0)
> +#define wjmsg_limit(fmt, args...) do {} while (0)
> +#define wjmsg_stack(fmt, args...) do {} while (0)
> +#define wjmsg(fmt, args...) do {} while (0)
> +#define wjdump(fmt, args...) do {} while (0)
> +#endif /* RTE_LIBRTE_TXGBE_DEBUG */

The 'RTE_LIBRTE_TXGBE_DEBUG' also doesn't exist.

<...>

> \ No newline at end of file

Can you please add the EOL.


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [dpdk-dev] [PATCH v1 03/42] net/txgbe: add device init and uninit
  2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 03/42] net/txgbe: add device init and uninit Jiawen Wu
@ 2020-09-09 17:52   ` Ferruh Yigit
  0 siblings, 0 replies; 49+ messages in thread
From: Ferruh Yigit @ 2020-09-09 17:52 UTC (permalink / raw)
  To: Jiawen Wu, dev

On 9/1/2020 12:50 PM, Jiawen Wu wrote:
> Add basic init and uninit functions, plus registers and some macro definitions to prepare for the hardware infrastructure.
> 
> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>

<...>

> +static const struct eth_dev_ops txgbe_eth_dev_ops = {
> +	.dev_start                  = txgbe_dev_start,
> +	.dev_stop                   = txgbe_dev_stop,
> +	.dev_close                  = txgbe_dev_close,
> +	.stats_get                  = txgbe_dev_stats_get,
> +	.stats_reset                = txgbe_dev_stats_reset,

What do you think about adding '.stats_get' & '.stats_reset' when
'txgbe_dev_stats_get()' & 'txgbe_dev_stats_reset()' are implemented (patch 27/42)?

Same for '.dev_start' & '.dev_stop'. (It will work if you drop
'txgbe_dev_stop()' from 'txgbe_dev_close()'; since the device was not started
in the first place it should be OK, and they can be added where device
start/stop support is added.)

I see you are using empty functions to construct the driver. I think it is
better to reduce the number of empty functions as much as possible, although
sometimes you may have to add them; you know it better.

<...>

> +++ b/drivers/net/txgbe/txgbe_pf.c
> @@ -0,0 +1,34 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2015-2020
> + */
> +
> +#include <stdio.h>
> +#include <errno.h>
> +#include <stdint.h>
> +#include <stdlib.h>
> +#include <unistd.h>
> +#include <stdarg.h>
> +#include <inttypes.h>
> +
> +#include <rte_interrupts.h>
> +#include <rte_log.h>
> +#include <rte_debug.h>
> +#include <rte_eal.h>
> +#include <rte_ether.h>
> +#include <rte_ethdev_driver.h>
> +#include <rte_memcpy.h>
> +#include <rte_malloc.h>
> +#include <rte_random.h>

Similar comment for all new files: is the include list really required, or is
it left over from copy/paste? Can you keep only the includes that are needed,
and add the others as they are needed?

<...>

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [dpdk-dev] [PATCH v1 28/42] net/txgbe: add device xstats get
  2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 28/42] net/txgbe: add device xstats get Jiawen Wu
@ 2020-09-09 17:53   ` Ferruh Yigit
  0 siblings, 0 replies; 49+ messages in thread
From: Ferruh Yigit @ 2020-09-09 17:53 UTC (permalink / raw)
  To: Jiawen Wu, dev

On 9/1/2020 12:50 PM, Jiawen Wu wrote:
> Add device xstats get from reading hardware registers.
> 
> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>

<...>

> +
> +static int txgbe_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
> +	struct rte_eth_xstat_name *xstats_names, unsigned int limit)
> +{

'dev' is used in this function, so the '__rte_unused' attribute should be dropped.

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [dpdk-dev] [PATCH v1 29/42] net/txgbe: add queue stats mapping and enable RX DMA unit
  2020-09-01 11:51 ` [dpdk-dev] [PATCH v1 29/42] net/txgbe: add queue stats mapping and enable RX DMA unit Jiawen Wu
@ 2020-09-09 17:54   ` Ferruh Yigit
  0 siblings, 0 replies; 49+ messages in thread
From: Ferruh Yigit @ 2020-09-09 17:54 UTC (permalink / raw)
  To: Jiawen Wu, dev

On 9/1/2020 12:51 PM, Jiawen Wu wrote:
> Add queue stats mapping set, and complete the receive and transmit units with the DMA and security paths.
> 
> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>

<...>

> +static int
> +txgbe_dev_queue_stats_mapping_set(struct rte_eth_dev *eth_dev,
> +				  uint16_t queue_id,
> +				  uint8_t stat_idx,
> +				  uint8_t is_rx)
> +{
> +	struct txgbe_hw *hw = TXGBE_DEV_HW(eth_dev);
> +	struct txgbe_stat_mappings *stat_mappings =
> +		TXGBE_DEV_STAT_MAPPINGS(eth_dev);
> +	uint32_t qsmr_mask = 0;
> +	uint32_t clearing_mask = QMAP_FIELD_RESERVED_BITS_MASK;
> +	uint32_t q_map;
> +	uint8_t n, offset;
> +
> +	if (hw->mac.type != txgbe_mac_raptor)
> +		return -ENOSYS;
> +
> +	PMD_INIT_LOG(DEBUG, "Setting port %d, %s queue_id %d to stat index %d",
> +		     (int)(eth_dev->data->port_id), is_rx ? "RX" : "TX",
> +		     queue_id, stat_idx);
> +
> +	n = (uint8_t)(queue_id / NB_QMAP_FIELDS_PER_QSM_REG);
> +	if (n >= TXGBE_NB_STAT_MAPPING) {
> +		PMD_INIT_LOG(ERR, "Nb of stat mapping registers exceeded");
> +		return -EIO;
> +	}
> +	offset = (uint8_t)(queue_id % NB_QMAP_FIELDS_PER_QSM_REG);
> +
> +	/* Now clear any previous stat_idx set */
> +	clearing_mask <<= (QSM_REG_NB_BITS_PER_QMAP_FIELD * offset);
> +	if (!is_rx)
> +		stat_mappings->tqsm[n] &= ~clearing_mask;
> +	else
> +		stat_mappings->rqsm[n] &= ~clearing_mask;
> +
> +	q_map = (uint32_t)stat_idx;
> +	q_map &= QMAP_FIELD_RESERVED_BITS_MASK;

A check for "'stat_idx' > QMAP_FIELD_RESERVED_BITS_MASK" would be good;
although the extra bits are masked out, the user would silently get a
different stat index than requested.
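A sketch of the suggested check; the mask value 0x0f is an assumption for illustration (the real QMAP_FIELD_RESERVED_BITS_MASK is defined in the driver):

```c
#include <stdint.h>

#define QMAP_FIELD_RESERVED_BITS_MASK 0x0f /* assumed width for illustration */

/* The bounds check suggested above: reject a stat index that would
 * otherwise be silently truncated by the mask. Returns -22 (-EINVAL)
 * on out-of-range input, as the driver would. */
int check_stat_idx(uint8_t stat_idx)
{
	if (stat_idx > QMAP_FIELD_RESERVED_BITS_MASK)
		return -22;
	return 0;
}
```

The check would go in txgbe_dev_queue_stats_mapping_set() before 'q_map &= QMAP_FIELD_RESERVED_BITS_MASK'.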


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [dpdk-dev] [PATCH v1 30/42] net/txgbe: add device info get
  2020-09-01 11:51 ` [dpdk-dev] [PATCH v1 30/42] net/txgbe: add device info get Jiawen Wu
@ 2020-09-09 17:54   ` Ferruh Yigit
  0 siblings, 0 replies; 49+ messages in thread
From: Ferruh Yigit @ 2020-09-09 17:54 UTC (permalink / raw)
  To: Jiawen Wu, dev

On 9/1/2020 12:51 PM, Jiawen Wu wrote:
> Add device information get operation.
> 
> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>

<...>

> +uint64_t
> +txgbe_get_rx_queue_offloads(struct rte_eth_dev *dev __rte_unused)
> +{
> +	uint64_t offloads = 0;
> +
> +	offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
>

Instead of initializing to zero and OR-ing in the value, you can just assign it.
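The two styles side by side, using an assumed flag value for illustration
(the real DEV_RX_OFFLOAD_VLAN_STRIP is defined in rte_ethdev.h):

```c
#include <stdint.h>

/* Assumed value for illustration only. */
#define DEV_RX_OFFLOAD_VLAN_STRIP 0x0001ULL

/* As in the patch: initialize to zero, then OR in the flag. */
static uint64_t
get_rx_offloads_or(void)
{
	uint64_t offloads = 0;

	offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
	return offloads;
}

/* As suggested: a direct assignment says the same thing in one step. */
static uint64_t
get_rx_offloads_assign(void)
{
	uint64_t offloads = DEV_RX_OFFLOAD_VLAN_STRIP;

	return offloads;
}
```

The OR style only pays off once there are several conditionally-added flags;
with a single unconditional flag, assignment is simpler.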

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [dpdk-dev] [PATCH v1 34/42] net/txgbe: add remaining RX and TX queue operations
  2020-09-01 11:51 ` [dpdk-dev] [PATCH v1 34/42] net/txgbe: add remaining RX and TX queue operations Jiawen Wu
@ 2020-09-09 18:15   ` Ferruh Yigit
  0 siblings, 0 replies; 49+ messages in thread
From: Ferruh Yigit @ 2020-09-09 18:15 UTC (permalink / raw)
  To: Jiawen Wu, dev

On 9/1/2020 12:51 PM, Jiawen Wu wrote:
> Add remaining receive and transmit queue operations.
> 
> Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
> ---
>  drivers/net/txgbe/txgbe_ethdev.c | 123 +++++++++++++++
>  drivers/net/txgbe/txgbe_ethdev.h |  16 ++
>  drivers/net/txgbe/txgbe_rxtx.c   | 259 +++++++++++++++++++++++++++++++
>  drivers/net/txgbe/txgbe_rxtx.h   |   1 +
>  4 files changed, 399 insertions(+)
> 
> diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
> index ba2849a82..54c97f81c 100644
> --- a/drivers/net/txgbe/txgbe_ethdev.c
> +++ b/drivers/net/txgbe/txgbe_ethdev.c
> @@ -622,6 +622,46 @@ static struct rte_pci_driver rte_txgbe_pmd = {
>  
>  
>  
> +static int
> +txgbe_check_mq_mode(struct rte_eth_dev *dev)
> +{
> +	RTE_SET_USED(dev);
> +
> +	return 0;
> +}
> +
> +static int
> +txgbe_dev_configure(struct rte_eth_dev *dev)
> +{
> +	struct txgbe_interrupt *intr = TXGBE_DEV_INTR(dev);
> +	struct txgbe_adapter *adapter = TXGBE_DEV_ADAPTER(dev);
> +	int ret;
> +
> +	PMD_INIT_FUNC_TRACE();
> +
> +	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
> +		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
> +
> +	/* multiple queue mode checking */
> +	ret  = txgbe_check_mq_mode(dev);
> +	if (ret != 0) {
> +		PMD_DRV_LOG(ERR, "txgbe_check_mq_mode fails with %d.",
> +			    ret);
> +		return ret;
> +	}
> +
> +	/* set flag to update link status after init */
> +	intr->flags |= TXGBE_FLAG_NEED_LINK_UPDATE;
> +
> +	/*
> +	 * Initialize to TRUE. If any of Rx queues doesn't meet the bulk
> +	 * allocation Rx preconditions we will reset it.
> +	 */
> +	adapter->rx_bulk_alloc_allowed = true;
> +
> +	return 0;
> +}

'.dev_configure' is a relatively more important function for the driver; I think
it would be better to introduce it at an earlier stage of the patchset, if possible.

There is no guideline or requirement for the ordering, but if re-ordering the
patches won't cause too much work, I would suggest the following order as a
guideline, if it helps (please don't take it too strictly):
- basic infrastructure
  - build files, initial doc, log, probe()/init() functions, base files (hw
    config files) ...
- device configuration
  - .dev_configure, .dev_infos_get, interrupt configuration, mac set, link
    status ...
- Data path
  - Rx/Tx init, queue setup, start/stop, data path implementations, ...
- More features
  - stats, vlan, flow ctrl, promiscuous and allmulticast, mtu ...
- Optional features
  - fw version, dump registers, led, eeprom get, descriptor_status ...


^ permalink raw reply	[flat|nested] 49+ messages in thread

end of thread, other threads:[~2020-09-09 18:15 UTC | newest]

Thread overview: 49+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-09-01 11:50 [dpdk-dev] [PATCH v1 01/42] net/txgbe: add build and doc infrastructure Jiawen Wu
2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 02/42] net/txgbe: add ethdev probe and remove Jiawen Wu
2020-09-09 17:50   ` Ferruh Yigit
2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 03/42] net/txgbe: add device init and uninit Jiawen Wu
2020-09-09 17:52   ` Ferruh Yigit
2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 04/42] net/txgbe: add error types and dummy function Jiawen Wu
2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 05/42] net/txgbe: add mac type and HW ops dummy Jiawen Wu
2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 06/42] net/txgbe: add EEPROM functions Jiawen Wu
2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 07/42] net/txgbe: add HW init function Jiawen Wu
2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 08/42] net/txgbe: add HW reset operation Jiawen Wu
2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 09/42] net/txgbe: add PHY init Jiawen Wu
2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 10/42] net/txgbe: add module identify Jiawen Wu
2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 11/42] net/txgbe: add PHY reset Jiawen Wu
2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 12/42] net/txgbe: add device start and stop Jiawen Wu
2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 13/42] net/txgbe: add interrupt operation Jiawen Wu
2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 14/42] net/txgbe: add link status change Jiawen Wu
2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 15/42] net/txgbe: add multi-speed link setup Jiawen Wu
2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 16/42] net/txgbe: add autoc read and write Jiawen Wu
2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 17/42] net/txgbe: support device LED on and off Jiawen Wu
2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 18/42] net/txgbe: add rx and tx init Jiawen Wu
2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 19/42] net/txgbe: add RX and TX start Jiawen Wu
2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 20/42] net/txgbe: add RX and TX stop Jiawen Wu
2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 21/42] net/txgbe: add RX and TX queues setup Jiawen Wu
2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 22/42] net/txgbe: add packet type Jiawen Wu
2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 23/42] net/txgbe: fill simple transmit function Jiawen Wu
2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 24/42] net/txgbe: fill transmit function with hardware offload Jiawen Wu
2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 25/42] net/txgbe: fill receive functions Jiawen Wu
2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 26/42] net/txgbe: fill TX prepare function Jiawen Wu
2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 27/42] net/txgbe: add device stats get Jiawen Wu
2020-09-01 11:50 ` [dpdk-dev] [PATCH v1 28/42] net/txgbe: add device xstats get Jiawen Wu
2020-09-09 17:53   ` Ferruh Yigit
2020-09-01 11:51 ` [dpdk-dev] [PATCH v1 29/42] net/txgbe: add queue stats mapping and enable RX DMA unit Jiawen Wu
2020-09-09 17:54   ` Ferruh Yigit
2020-09-01 11:51 ` [dpdk-dev] [PATCH v1 30/42] net/txgbe: add device info get Jiawen Wu
2020-09-09 17:54   ` Ferruh Yigit
2020-09-01 11:51 ` [dpdk-dev] [PATCH v1 31/42] net/txgbe: add MAC address operations Jiawen Wu
2020-09-01 11:51 ` [dpdk-dev] [PATCH v1 32/42] net/txgbe: add FW version get operation Jiawen Wu
2020-09-01 11:51 ` [dpdk-dev] [PATCH v1 33/42] net/txgbe: add EEPROM info " Jiawen Wu
2020-09-01 11:51 ` [dpdk-dev] [PATCH v1 34/42] net/txgbe: add remaining RX and TX queue operations Jiawen Wu
2020-09-09 18:15   ` Ferruh Yigit
2020-09-01 11:51 ` [dpdk-dev] [PATCH v1 35/42] net/txgbe: add VLAN handle support Jiawen Wu
2020-09-01 11:51 ` [dpdk-dev] [PATCH v1 36/42] net/txgbe: add flow control support Jiawen Wu
2020-09-01 11:51 ` [dpdk-dev] [PATCH v1 37/42] net/txgbe: add FC auto negotiation support Jiawen Wu
2020-09-01 11:51 ` [dpdk-dev] [PATCH v1 38/42] net/txgbe: add DCB packet buffer allocation Jiawen Wu
2020-09-01 11:51 ` [dpdk-dev] [PATCH v1 39/42] net/txgbe: configure DCB HW resources Jiawen Wu
2020-09-01 11:51 ` [dpdk-dev] [PATCH v1 40/42] net/txgbe: add device promiscuous and allmulticast mode Jiawen Wu
2020-09-01 11:51 ` [dpdk-dev] [PATCH v1 41/42] net/txgbe: add MTU set operation Jiawen Wu
2020-09-01 11:51 ` [dpdk-dev] [PATCH v1 42/42] net/txgbe: add register dump support Jiawen Wu
2020-09-09 17:48 ` [dpdk-dev] [PATCH v1 01/42] net/txgbe: add build and doc infrastructure Ferruh Yigit
