* [PATCH v4 0/8] [v4]drivers/net Add Support mucse N10 Pmd Driver
@ 2023-08-07 2:16 Wenbo Cao
2023-08-07 2:16 ` [PATCH v5 1/8] net/rnp: add skeleton Wenbo Cao
` (7 more replies)
0 siblings, 8 replies; 12+ messages in thread
From: Wenbo Cao @ 2023-08-07 2:16 UTC (permalink / raw)
Cc: dev, ferruh.yigit, andrew.rybchenko, yaojun, Wenbo Cao
This patchset only adds the basic chip init work, so the user can
see the eth_dev but cannot control much more yet.
For now only the 2*10G NIC is supported; the chip itself can support
2*10G, 4*10G, 4*1G, 8*1G and 8*10G.
On the Rx side the hardware can support Rx checksum offload, RSS, VLAN filter,
flow_clow, unicast filter, multicast filter, 1588 and jumbo frames.
On the Tx side it can support Tx checksum offload, TSO, VXLAN TSO, and a
flow director based on ntuple patterns of TCP/UDP/IP/eth_hdr->type.
SR-IOV is also supported.
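For reference, once these offloads are wired up they will be requested through
the standard ethdev configuration API; a minimal, driver-independent sketch
(illustrative only, plain DPDK API, the name example_configure is made up):

    #include <rte_ethdev.h>

    static int
    example_configure(uint16_t port_id)
    {
        struct rte_eth_conf conf = {
            .rxmode = {
                /* Rx checksum offload + RSS, as listed above */
                .mq_mode = RTE_ETH_MQ_RX_RSS,
                .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
            },
            .rx_adv_conf = {
                .rss_conf = { .rss_hf = RTE_ETH_RSS_IP },
            },
            .txmode = {
                /* Tx checksum offload + TSO, as listed above */
                .offloads = RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
                            RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
                            RTE_ETH_TX_OFFLOAD_TCP_TSO,
            },
        };

        /* one Rx queue and one Tx queue are enough for the illustration */
        return rte_eth_dev_configure(port_id, 1, 1, &conf);
    }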
Because of a chip design limitation, in multi-port mode a single PCI BDF
exposes multiple ports (up to four), so this code must take care of
initializing multiple ports from one BDF.
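To make that concrete, the driver probes the single PCI device as usual and
then allocates the extra ethdev ports by hand. A simplified sketch of that
per-port allocation (illustrative only; the real code is rnp_alloc_eth_port()
in patch 3/8, and struct example_port here is just a placeholder):

    #include <stdio.h>
    #include <ethdev_pci.h>
    #include <rte_malloc.h>

    /* placeholder per-port private data */
    struct example_port {
        uint8_t port_index;
    };

    static struct rte_eth_dev *
    example_alloc_extra_port(struct rte_pci_device *pci_dev, uint8_t idx)
    {
        char name[RTE_ETH_NAME_MAX_LEN];
        struct rte_eth_dev *eth_dev;
        struct example_port *port;

        /* ports 1..3 share the PCI BDF, so derive a unique name per port */
        snprintf(name, sizeof(name), "%s_%u", pci_dev->device.name, idx);
        eth_dev = rte_eth_dev_allocate(name);
        if (eth_dev == NULL)
            return NULL;
        port = rte_zmalloc_socket(name, sizeof(*port), RTE_CACHE_LINE_SIZE,
                                  pci_dev->device.numa_node);
        if (port == NULL) {
            rte_eth_dev_release_port(eth_dev);
            return NULL;
        }
        port->port_index = idx;
        eth_dev->data->dev_private = port;
        eth_dev->device = &pci_dev->device;

        return eth_dev;
    }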
v4:
* one patch was missing from the previous upload :(
v3:
* fixed the FreeBSD 13 compile issue at http://dpdk.org/patch/129830
* changed the iobar type to void as suggested by Stephen Hemminger
* added KMOD_DEP support for vfio-pci
* changed the run-cmd argument parsing to check for invalid extra_args
v2:
* fixed the MAINTAINERS mailing-list full-name format
* fixed the ordering of the new driver in the drivers/net/meson.build driver list
* improved the virtual function-pointer usage as suggested by Stephen Hemminger
Wenbo Cao (8):
net/rnp: add skeleton
net/rnp: add ethdev probe and remove
net/rnp: add device init and uninit
net/rnp: add mbx basic api feature
net/rnp: add reset code for chip init process
net/rnp: add port info resource init
net/rnp: add devargs runtime parsing functions
net/rnp: handle device interrupts
MAINTAINERS | 6 +
doc/guides/nics/features/rnp.ini | 8 +
doc/guides/nics/index.rst | 1 +
doc/guides/nics/rnp.rst | 43 ++
drivers/net/meson.build | 1 +
drivers/net/rnp/base/rnp_api.c | 71 +++
drivers/net/rnp/base/rnp_api.h | 17 +
drivers/net/rnp/base/rnp_cfg.h | 7 +
drivers/net/rnp/base/rnp_dma_regs.h | 73 +++
drivers/net/rnp/base/rnp_eth_regs.h | 124 +++++
drivers/net/rnp/base/rnp_hw.h | 206 +++++++
drivers/net/rnp/base/rnp_mac_regs.h | 279 ++++++++++
drivers/net/rnp/meson.build | 18 +
drivers/net/rnp/rnp.h | 218 ++++++++
drivers/net/rnp/rnp_ethdev.c | 822 ++++++++++++++++++++++++++++
drivers/net/rnp/rnp_logs.h | 43 ++
drivers/net/rnp/rnp_mbx.c | 524 ++++++++++++++++++
drivers/net/rnp/rnp_mbx.h | 140 +++++
drivers/net/rnp/rnp_mbx_fw.c | 781 ++++++++++++++++++++++++++
drivers/net/rnp/rnp_mbx_fw.h | 401 ++++++++++++++
drivers/net/rnp/rnp_osdep.h | 30 +
drivers/net/rnp/rnp_rxtx.c | 83 +++
drivers/net/rnp/rnp_rxtx.h | 14 +
23 files changed, 3910 insertions(+)
create mode 100644 doc/guides/nics/features/rnp.ini
create mode 100644 doc/guides/nics/rnp.rst
create mode 100644 drivers/net/rnp/base/rnp_api.c
create mode 100644 drivers/net/rnp/base/rnp_api.h
create mode 100644 drivers/net/rnp/base/rnp_cfg.h
create mode 100644 drivers/net/rnp/base/rnp_dma_regs.h
create mode 100644 drivers/net/rnp/base/rnp_eth_regs.h
create mode 100644 drivers/net/rnp/base/rnp_hw.h
create mode 100644 drivers/net/rnp/base/rnp_mac_regs.h
create mode 100644 drivers/net/rnp/meson.build
create mode 100644 drivers/net/rnp/rnp.h
create mode 100644 drivers/net/rnp/rnp_ethdev.c
create mode 100644 drivers/net/rnp/rnp_logs.h
create mode 100644 drivers/net/rnp/rnp_mbx.c
create mode 100644 drivers/net/rnp/rnp_mbx.h
create mode 100644 drivers/net/rnp/rnp_mbx_fw.c
create mode 100644 drivers/net/rnp/rnp_mbx_fw.h
create mode 100644 drivers/net/rnp/rnp_osdep.h
create mode 100644 drivers/net/rnp/rnp_rxtx.c
create mode 100644 drivers/net/rnp/rnp_rxtx.h
--
2.27.0
^ permalink raw reply [flat|nested] 12+ messages in thread
* [PATCH v5 1/8] net/rnp: add skeleton
2023-08-07 2:16 [PATCH v4 0/8] [v4]drivers/net Add Support mucse N10 Pmd Driver Wenbo Cao
@ 2023-08-07 2:16 ` Wenbo Cao
2023-08-15 11:10 ` Thomas Monjalon
2023-08-07 2:16 ` [PATCH v5 2/8] net/rnp: add ethdev probe and remove Wenbo Cao
` (6 subsequent siblings)
7 siblings, 1 reply; 12+ messages in thread
From: Wenbo Cao @ 2023-08-07 2:16 UTC (permalink / raw)
To: Thomas Monjalon, Wenbo Cao; +Cc: dev, ferruh.yigit, andrew.rybchenko, yaojun
Add basic PMD library and doc build infrastructure.
Update the MAINTAINERS file to claim responsibility.
Signed-off-by: Wenbo Cao <caowenbo@mucse.com>
---
MAINTAINERS | 6 +++++
doc/guides/nics/features/rnp.ini | 8 ++++++
doc/guides/nics/index.rst | 1 +
doc/guides/nics/rnp.rst | 43 ++++++++++++++++++++++++++++++++
drivers/net/meson.build | 1 +
drivers/net/rnp/meson.build | 11 ++++++++
drivers/net/rnp/rnp_ethdev.c | 3 +++
7 files changed, 73 insertions(+)
create mode 100644 doc/guides/nics/features/rnp.ini
create mode 100644 doc/guides/nics/rnp.rst
create mode 100644 drivers/net/rnp/meson.build
create mode 100644 drivers/net/rnp/rnp_ethdev.c
diff --git a/MAINTAINERS b/MAINTAINERS
index a5219926ab..29c130b280 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -955,6 +955,12 @@ F: drivers/net/qede/
F: doc/guides/nics/qede.rst
F: doc/guides/nics/features/qede*.ini
+Mucse rnp
+M: Wenbo Cao <caowenbo@mucse.com>
+F: drivers/net/rnp
+F: doc/guides/nics/rnp.rst
+F: doc/guides/nics/features/rnp.ini
+
Solarflare sfc_efx
M: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
F: drivers/common/sfc_efx/
diff --git a/doc/guides/nics/features/rnp.ini b/doc/guides/nics/features/rnp.ini
new file mode 100644
index 0000000000..2ad04ee330
--- /dev/null
+++ b/doc/guides/nics/features/rnp.ini
@@ -0,0 +1,8 @@
+;
+; Supported features of the 'rnp' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Linux = Y
+x86-64 = Y
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 5c9d1edf5e..cc89d3154a 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -61,6 +61,7 @@ Network Interface Controller Drivers
pcap_ring
pfe
qede
+ rnp
sfc_efx
softnic
tap
diff --git a/doc/guides/nics/rnp.rst b/doc/guides/nics/rnp.rst
new file mode 100644
index 0000000000..5b3a3d0483
--- /dev/null
+++ b/doc/guides/nics/rnp.rst
@@ -0,0 +1,43 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2023 Mucse IC Design Ltd.
+
+RNP Poll Mode driver
+==========================
+
+The RNP ETHDEV PMD (**librte_net_rnp**) provides poll mode ethdev
+driver support for the inbuilt network device found in the **Mucse RNP** SoC family.
+
+Prerequisites
+-------------
+More information can be found at `Mucse, Official Website
+<https://mucse.com/productDetail>`_.
+
+Supported RNP SoCs
+------------------------
+
+- N10
+
+Driver compilation and testing
+------------------------------
+
+Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
+for details.
+
+#. Running testpmd:
+
+ Follow instructions available in the document
+ :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
+ to run testpmd.
+
+Limitations or Known issues
+----------------------------
+Build with ICC is not supported yet.
+CRC stripping
+~~~~~~~~~~~~~~
+The RNP SoC family NICs strip the CRC for every packet coming into the
+host interface irrespective of the offload configuration.
+Disabling the CRC offload also influences the Rx checksum offload.
+VLAN Strip
+~~~~~~~~~~~
+For VLAN stripping, RNP only supports the CVLAN (0x8100) type; if the VLAN type
+is SVLAN (0x88a8), VLAN filter/strip does not affect the packet and it is passed to the host.
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index b1df17ce8c..f9e013d38e 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -54,6 +54,7 @@ drivers = [
'pfe',
'qede',
'ring',
+ 'rnp',
'sfc',
'softnic',
'tap',
diff --git a/drivers/net/rnp/meson.build b/drivers/net/rnp/meson.build
new file mode 100644
index 0000000000..4f37c6b456
--- /dev/null
+++ b/drivers/net/rnp/meson.build
@@ -0,0 +1,11 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(C) 2023 Mucse IC Design Ltd.
+#
+if not is_linux
+ build = false
+ reason = 'only supported on Linux'
+endif
+
+sources = files(
+ 'rnp_ethdev.c',
+)
diff --git a/drivers/net/rnp/rnp_ethdev.c b/drivers/net/rnp/rnp_ethdev.c
new file mode 100644
index 0000000000..9ce3c0b497
--- /dev/null
+++ b/drivers/net/rnp/rnp_ethdev.c
@@ -0,0 +1,3 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Mucse IC Design Ltd.
+ */
--
2.27.0
^ permalink raw reply [flat|nested] 12+ messages in thread
* [PATCH v5 2/8] net/rnp: add ethdev probe and remove
2023-08-07 2:16 [PATCH v4 0/8] [v4]drivers/net Add Support mucse N10 Pmd Driver Wenbo Cao
2023-08-07 2:16 ` [PATCH v5 1/8] net/rnp: add skeleton Wenbo Cao
@ 2023-08-07 2:16 ` Wenbo Cao
2023-08-07 2:16 ` [PATCH v5 3/8] net/rnp: add device init and uninit Wenbo Cao
` (5 subsequent siblings)
7 siblings, 0 replies; 12+ messages in thread
From: Wenbo Cao @ 2023-08-07 2:16 UTC (permalink / raw)
To: Wenbo Cao, Anatoly Burakov; +Cc: dev, ferruh.yigit, andrew.rybchenko, yaojun
Add basic PCIe ethdev probe and remove.
Signed-off-by: Wenbo Cao <caowenbo@mucse.com>
---
drivers/net/rnp/rnp.h | 13 ++++++
drivers/net/rnp/rnp_ethdev.c | 83 ++++++++++++++++++++++++++++++++++++
2 files changed, 96 insertions(+)
create mode 100644 drivers/net/rnp/rnp.h
diff --git a/drivers/net/rnp/rnp.h b/drivers/net/rnp/rnp.h
new file mode 100644
index 0000000000..76d281cc0a
--- /dev/null
+++ b/drivers/net/rnp/rnp.h
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Mucse IC Design Ltd.
+ */
+#ifndef __RNP_H__
+#define __RNP_H__
+
+#define PCI_VENDOR_ID_MUCSE (0x8848)
+#define RNP_DEV_ID_N10G (0x1000)
+
+struct rnp_eth_port {
+} __rte_cache_aligned;
+
+#endif /* __RNP_H__ */
diff --git a/drivers/net/rnp/rnp_ethdev.c b/drivers/net/rnp/rnp_ethdev.c
index 9ce3c0b497..390f2e7743 100644
--- a/drivers/net/rnp/rnp_ethdev.c
+++ b/drivers/net/rnp/rnp_ethdev.c
@@ -1,3 +1,86 @@
/* SPDX-License-Identifier: BSD-3-Clause
* Copyright(C) 2023 Mucse IC Design Ltd.
*/
+
+#include <ethdev_pci.h>
+#include <rte_io.h>
+#include <rte_malloc.h>
+
+#include "rnp.h"
+
+static int
+rnp_eth_dev_init(struct rte_eth_dev *eth_dev)
+{
+ RTE_SET_USED(eth_dev);
+
+ return -ENODEV;
+}
+
+static int
+rnp_eth_dev_uninit(struct rte_eth_dev *eth_dev)
+{
+ RTE_SET_USED(eth_dev);
+
+ return -ENODEV;
+}
+
+static int
+rnp_pci_remove(struct rte_pci_device *pci_dev)
+{
+ struct rte_eth_dev *eth_dev;
+ int rc;
+
+ eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
+
+ if (eth_dev) {
+ /* Cleanup eth dev */
+ rc = rte_eth_dev_pci_generic_remove(pci_dev,
+ rnp_eth_dev_uninit);
+ if (rc)
+ return rc;
+ }
+ /* Nothing to be done for secondary processes */
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return 0;
+
+ return 0;
+}
+
+static int
+rnp_pci_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
+{
+ int rc;
+
+ RTE_SET_USED(pci_drv);
+
+ rc = rte_eth_dev_pci_generic_probe(pci_dev, sizeof(struct rnp_eth_port),
+ rnp_eth_dev_init);
+
+ /* On error on secondary, recheck if port exists in primary or
+ * in mid of detach state.
+ */
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY && rc)
+ if (!rte_eth_dev_allocated(pci_dev->device.name))
+ return 0;
+ return rc;
+}
+
+static const struct rte_pci_id pci_id_rnp_map[] = {
+ {
+ RTE_PCI_DEVICE(PCI_VENDOR_ID_MUCSE, RNP_DEV_ID_N10G)
+ },
+ {
+ .vendor_id = 0,
+ },
+};
+
+static struct rte_pci_driver rte_rnp_pmd = {
+ .id_table = pci_id_rnp_map,
+ .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC,
+ .probe = rnp_pci_probe,
+ .remove = rnp_pci_remove,
+};
+
+RTE_PMD_REGISTER_PCI(net_rnp, rte_rnp_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(net_rnp, pci_id_rnp_map);
+RTE_PMD_REGISTER_KMOD_DEP(net_rnp, "igb_uio | uio_pci_generic | vfio-pci");
--
2.27.0
^ permalink raw reply [flat|nested] 12+ messages in thread
* [PATCH v5 3/8] net/rnp: add device init and uninit
2023-08-07 2:16 [PATCH v4 0/8] [v4]drivers/net Add Support mucse N10 Pmd Driver Wenbo Cao
2023-08-07 2:16 ` [PATCH v5 1/8] net/rnp: add skeleton Wenbo Cao
2023-08-07 2:16 ` [PATCH v5 2/8] net/rnp: add ethdev probe and remove Wenbo Cao
@ 2023-08-07 2:16 ` Wenbo Cao
2023-08-07 2:16 ` [PATCH v5 4/8] net/rnp: add mbx basic api feature Wenbo Cao
` (4 subsequent siblings)
7 siblings, 0 replies; 12+ messages in thread
From: Wenbo Cao @ 2023-08-07 2:16 UTC (permalink / raw)
To: Wenbo Cao, Anatoly Burakov; +Cc: dev, ferruh.yigit, andrew.rybchenko, yaojun
Add basic device init and uninit functions.
Signed-off-by: Wenbo Cao <caowenbo@mucse.com>
---
drivers/net/rnp/base/rnp_hw.h | 19 ++++
drivers/net/rnp/meson.build | 1 +
drivers/net/rnp/rnp.h | 25 +++++
drivers/net/rnp/rnp_ethdev.c | 196 +++++++++++++++++++++++++++++++++-
drivers/net/rnp/rnp_logs.h | 34 ++++++
drivers/net/rnp/rnp_osdep.h | 30 ++++++
6 files changed, 300 insertions(+), 5 deletions(-)
create mode 100644 drivers/net/rnp/base/rnp_hw.h
create mode 100644 drivers/net/rnp/rnp_logs.h
create mode 100644 drivers/net/rnp/rnp_osdep.h
diff --git a/drivers/net/rnp/base/rnp_hw.h b/drivers/net/rnp/base/rnp_hw.h
new file mode 100644
index 0000000000..d80d23f4b4
--- /dev/null
+++ b/drivers/net/rnp/base/rnp_hw.h
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Mucse IC Design Ltd.
+ */
+#ifndef __RNP_HW_H__
+#define __RNP_HW_H__
+
+struct rnp_eth_adapter;
+struct rnp_hw {
+ struct rnp_eth_adapter *back;
+ void *iobar0;
+ uint32_t iobar0_len;
+ void *iobar4;
+ uint32_t iobar4_len;
+
+ uint16_t device_id;
+ uint16_t vendor_id;
+} __rte_cache_aligned;
+
+#endif /* __RNP_HW_H__ */
diff --git a/drivers/net/rnp/meson.build b/drivers/net/rnp/meson.build
index 4f37c6b456..f85d597e68 100644
--- a/drivers/net/rnp/meson.build
+++ b/drivers/net/rnp/meson.build
@@ -9,3 +9,4 @@ endif
sources = files(
'rnp_ethdev.c',
)
+includes += include_directories('base')
diff --git a/drivers/net/rnp/rnp.h b/drivers/net/rnp/rnp.h
index 76d281cc0a..c7959c64aa 100644
--- a/drivers/net/rnp/rnp.h
+++ b/drivers/net/rnp/rnp.h
@@ -4,10 +4,35 @@
#ifndef __RNP_H__
#define __RNP_H__
+#include "base/rnp_hw.h"
+
#define PCI_VENDOR_ID_MUCSE (0x8848)
#define RNP_DEV_ID_N10G (0x1000)
+#define RNP_MAX_PORT_OF_PF (4)
+#define RNP_CFG_BAR (4)
+#define RNP_PF_INFO_BAR (0)
struct rnp_eth_port {
+ struct rnp_eth_adapter *adapt;
+ struct rte_eth_dev *eth_dev;
+} __rte_cache_aligned;
+
+struct rnp_share_ops {
} __rte_cache_aligned;
+struct rnp_eth_adapter {
+ struct rnp_hw hw;
+ struct rte_pci_device *pdev;
+ struct rte_eth_dev *eth_dev; /* master eth_dev */
+ struct rnp_eth_port *ports[RNP_MAX_PORT_OF_PF];
+ struct rnp_share_ops *share_priv;
+
+ uint8_t num_ports; /* number of physical ports on this PF */
+} __rte_cache_aligned;
+
+#define RNP_DEV_TO_PORT(eth_dev) \
+ (((struct rnp_eth_port *)((eth_dev)->data->dev_private)))
+#define RNP_DEV_TO_ADAPTER(eth_dev) \
+ ((struct rnp_eth_adapter *)(RNP_DEV_TO_PORT(eth_dev)->adapt))
+
#endif /* __RNP_H__ */
diff --git a/drivers/net/rnp/rnp_ethdev.c b/drivers/net/rnp/rnp_ethdev.c
index 390f2e7743..357375ee39 100644
--- a/drivers/net/rnp/rnp_ethdev.c
+++ b/drivers/net/rnp/rnp_ethdev.c
@@ -5,23 +5,198 @@
#include <ethdev_pci.h>
#include <rte_io.h>
#include <rte_malloc.h>
+#include <ethdev_driver.h>
#include "rnp.h"
+#include "rnp_logs.h"
static int
-rnp_eth_dev_init(struct rte_eth_dev *eth_dev)
+rnp_mac_rx_disable(struct rte_eth_dev *dev)
{
- RTE_SET_USED(eth_dev);
+ RTE_SET_USED(dev);
- return -ENODEV;
+ return 0;
+}
+
+static int
+rnp_mac_tx_disable(struct rte_eth_dev *dev)
+{
+ RTE_SET_USED(dev);
+
+ return 0;
+}
+
+static int rnp_dev_close(struct rte_eth_dev *dev)
+{
+ RTE_SET_USED(dev);
+
+ return 0;
+}
+
+/* Features supported by this driver */
+static const struct eth_dev_ops rnp_eth_dev_ops = {
+};
+
+static int
+rnp_init_port_resource(struct rnp_eth_adapter *adapter,
+ struct rte_eth_dev *dev,
+ char *name,
+ uint8_t p_id)
+{
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(dev);
+
+ port->eth_dev = dev;
+ adapter->ports[p_id] = port;
+ dev->dev_ops = &rnp_eth_dev_ops;
+ RTE_SET_USED(name);
+
+ return 0;
+}
+
+static struct rte_eth_dev *
+rnp_alloc_eth_port(struct rte_pci_device *master_pci, char *name)
+{
+ struct rnp_eth_port *port;
+ struct rte_eth_dev *eth_dev;
+
+ eth_dev = rte_eth_dev_allocate(name);
+ if (!eth_dev) {
+ RNP_PMD_DRV_LOG(ERR, "Could not allocate "
+ "eth_dev for %s\n", name);
+ return NULL;
+ }
+ port = rte_zmalloc_socket(name,
+ sizeof(*port),
+ RTE_CACHE_LINE_SIZE,
+ master_pci->device.numa_node);
+ if (!port) {
+ RNP_PMD_DRV_LOG(ERR, "Could not allocate "
+ "rnp_eth_port for %s\n", name);
+ return NULL;
+ }
+ eth_dev->data->dev_private = port;
+ eth_dev->process_private = calloc(1, sizeof(struct rnp_share_ops));
+ if (!eth_dev->process_private) {
+ RNP_PMD_DRV_LOG(ERR, "Could not calloc "
+ "for Process_priv\n");
+ goto fail_calloc;
+ }
+ return eth_dev;
+fail_calloc:
+ rte_free(port);
+ rte_eth_dev_release_port(eth_dev);
+
+ return NULL;
+}
+
+static int
+rnp_eth_dev_init(struct rte_eth_dev *dev)
+{
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+ struct rnp_eth_adapter *adapter = NULL;
+ char name[RTE_ETH_NAME_MAX_LEN] = " ";
+ struct rnp_eth_port *port = NULL;
+ struct rte_eth_dev *eth_dev;
+ struct rnp_hw *hw = NULL;
+ int32_t p_id;
+ int ret;
+
+ PMD_INIT_FUNC_TRACE();
+
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return 0;
+ memset(name, 0, sizeof(name));
+ snprintf(name, sizeof(name), "rnp_adapter_%d", dev->data->port_id);
+ adapter = rte_zmalloc(name, sizeof(struct rnp_eth_adapter), 0);
+ if (!adapter) {
+ RNP_PMD_DRV_LOG(ERR, "zmalloc for adapter failed\n");
+ return -ENOMEM;
+ }
+ hw = &adapter->hw;
+ adapter->pdev = pci_dev;
+ adapter->eth_dev = dev;
+ adapter->num_ports = 1;
+ hw->back = adapter;
+ hw->iobar4 = pci_dev->mem_resource[RNP_CFG_BAR].addr;
+ hw->iobar0 = pci_dev->mem_resource[RNP_PF_INFO_BAR].addr;
+ hw->iobar4_len = pci_dev->mem_resource[RNP_CFG_BAR].len;
+ hw->iobar0_len = pci_dev->mem_resource[RNP_PF_INFO_BAR].len;
+ hw->device_id = pci_dev->id.device_id;
+ hw->vendor_id = pci_dev->id.vendor_id;
+ hw->device_id = pci_dev->id.device_id;
+ for (p_id = 0; p_id < adapter->num_ports; p_id++) {
+ /* port 0 resource has already been allocated during probe */
+ if (!p_id) {
+ eth_dev = dev;
+ } else {
+ snprintf(name, sizeof(name), "%s_%d",
+ adapter->pdev->device.name,
+ p_id);
+ eth_dev = rnp_alloc_eth_port(pci_dev, name);
+ if (eth_dev)
+ rte_memcpy(eth_dev->process_private,
+ adapter->share_priv,
+ sizeof(*adapter->share_priv));
+ if (!eth_dev) {
+ ret = -ENOMEM;
+ goto eth_alloc_error;
+ }
+ }
+ ret = rnp_init_port_resource(adapter, eth_dev, name, p_id);
+ if (ret)
+ goto eth_alloc_error;
+
+ rnp_mac_rx_disable(eth_dev);
+ rnp_mac_tx_disable(eth_dev);
+ }
+
+ return 0;
+eth_alloc_error:
+ for (p_id = 0; p_id < adapter->num_ports; p_id++) {
+ port = adapter->ports[p_id];
+ if (!port)
+ continue;
+ if (port->eth_dev) {
+ rnp_dev_close(port->eth_dev);
+ rte_eth_dev_release_port(port->eth_dev);
+ if (port->eth_dev->process_private)
+ free(port->eth_dev->process_private);
+ }
+ rte_free(port);
+ }
+ rte_free(adapter);
+
+ return 0;
}
static int
rnp_eth_dev_uninit(struct rte_eth_dev *eth_dev)
{
- RTE_SET_USED(eth_dev);
+ struct rnp_eth_adapter *adapter = RNP_DEV_TO_ADAPTER(eth_dev);
+ struct rnp_eth_port *port = NULL;
+ uint8_t p_id;
+
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return 0;
- return -ENODEV;
+ if (adapter->eth_dev != eth_dev) {
+ RNP_PMD_DRV_LOG(ERR, "Input Argument ethdev "
+ "Isn't Master Ethdev\n");
+ return -EINVAL;
+ }
+ for (p_id = 0; p_id < adapter->num_ports; p_id++) {
+ port = adapter->ports[p_id];
+ if (!port)
+ continue;
+ if (port->eth_dev) {
+ rnp_dev_close(port->eth_dev);
+ /* only release non-master ports allocated by the PMD */
+ if (p_id)
+ rte_eth_dev_release_port(port->eth_dev);
+ }
+ }
+
+ return 0;
}
static int
@@ -84,3 +259,14 @@ static struct rte_pci_driver rte_rnp_pmd = {
RTE_PMD_REGISTER_PCI(net_rnp, rte_rnp_pmd);
RTE_PMD_REGISTER_PCI_TABLE(net_rnp, pci_id_rnp_map);
RTE_PMD_REGISTER_KMOD_DEP(net_rnp, "igb_uio | uio_pci_generic | vfio-pci");
+
+RTE_LOG_REGISTER_SUFFIX(rnp_init_logtype, init, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(rnp_drv_logtype, driver, NOTICE);
+
+#ifdef RTE_LIBRTE_RNP_DEBUG_RX
+ RTE_LOG_REGISTER_SUFFIX(rnp_rx_logtype, rx, DEBUG);
+#endif
+
+#ifdef RTE_LIBRTE_RNP_DEBUG_TX
+ RTE_LOG_REGISTER_SUFFIX(rnp_tx_logtype, tx, DEBUG);
+#endif
diff --git a/drivers/net/rnp/rnp_logs.h b/drivers/net/rnp/rnp_logs.h
new file mode 100644
index 0000000000..1b3ee33745
--- /dev/null
+++ b/drivers/net/rnp/rnp_logs.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Mucse IC Design Ltd.
+ */
+#ifndef __RNP_LOGS_H__
+#define __RNP_LOGS_H__
+extern int rnp_init_logtype;
+
+#define RNP_PMD_INIT_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_##level, rnp_init_logtype, \
+ "%s() " fmt, __func__, ##args)
+#define PMD_INIT_FUNC_TRACE() RNP_PMD_INIT_LOG(DEBUG, " >>")
+extern int rnp_drv_logtype;
+#define RNP_PMD_DRV_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_##level, rnp_drv_logtype, \
+ "%s() " fmt, __func__, ##args)
+#ifdef RTE_LIBRTE_RNP_DEBUG_RX
+extern int rnp_rx_logtype;
+#define RNP_PMD_RX_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, rnp_rx_logtype, \
+ "%s(): " fmt "\n", __func__, ##args)
+#else
+#define RNP_PMD_RX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_RNP_DEBUG_TX
+extern int rnp_tx_logtype;
+#define PMD_TX_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, rnp_tx_logtype, \
+ "%s(): " fmt "\n", __func__, ##args)
+#else
+#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#endif /* __RNP_LOGS_H__ */
diff --git a/drivers/net/rnp/rnp_osdep.h b/drivers/net/rnp/rnp_osdep.h
new file mode 100644
index 0000000000..5685dd2404
--- /dev/null
+++ b/drivers/net/rnp/rnp_osdep.h
@@ -0,0 +1,30 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Mucse IC Design Ltd.
+ */
+#ifndef __RNP_OSDEP_H__
+#define __RNP_OSDEP_H__
+#include <stdint.h>
+
+#include <rte_byteorder.h>
+
+#define __iomem
+#define _RING_(off) ((off) + 0x000000)
+#define _DMA_(off) ((off))
+#define _GLB_(off) ((off) + 0x000000)
+#define _NIC_(off) ((off) + 0x000000)
+#define _ETH_(off) ((off))
+#define _MAC_(off) ((off))
+#define BIT(n) (1UL << (n))
+#define BIT64(n) (1ULL << (n))
+#define BITS_PER_LONG (__SIZEOF_LONG__ * 8)
+#define GENMASK(h, l) \
+ (((~0UL) << (l)) & (~0UL >> (BITS_PER_LONG - 1 - (h))))
+
+typedef uint8_t u8;
+typedef uint16_t u16;
+typedef uint32_t u32;
+typedef uint64_t u64;
+typedef int32_t s32;
+typedef int16_t s16;
+typedef int8_t s8;
+#endif /* __RNP_OSDEP_H__ */
--
2.27.0
^ permalink raw reply [flat|nested] 12+ messages in thread
* [PATCH v5 4/8] net/rnp: add mbx basic api feature
2023-08-07 2:16 [PATCH v4 0/8] [v4]drivers/net Add Support mucse N10 Pmd Driver Wenbo Cao
` (2 preceding siblings ...)
2023-08-07 2:16 ` [PATCH v5 3/8] net/rnp: add device init and uninit Wenbo Cao
@ 2023-08-07 2:16 ` Wenbo Cao
2023-08-07 2:16 ` [PATCH v5 5/8] net/rnp add reset code for Chip Init process Wenbo Cao
` (3 subsequent siblings)
7 siblings, 0 replies; 12+ messages in thread
From: Wenbo Cao @ 2023-08-07 2:16 UTC (permalink / raw)
To: Wenbo Cao; +Cc: dev, ferruh.yigit, andrew.rybchenko, yaojun, Stephen Hemminger
The mailbox (mbx) base code is used to communicate with the firmware.
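The intended call pattern for these ops, as used by the firmware mailbox
layer later in this patch, is roughly the following. This is a simplified
sketch built on the rnp_mbx_api / RNP_DEV_TO_MBX_OPS definitions added here;
example_fw_request() and the flat request/reply buffers are only placeholders:

    /* send a request to the firmware and wait for its reply */
    static int
    example_fw_request(struct rte_eth_dev *dev, uint32_t *req,
                       uint32_t *rep, uint16_t words)
    {
        const struct rnp_mbx_api *ops = RNP_DEV_TO_MBX_OPS(dev);
        int err;

        /* write_posted() grabs the mailbox lock, copies the message
         * into the shared memory window and waits for the peer ack.
         */
        err = ops->write_posted(dev, req, words, MBX_FW);
        if (err)
            return err;

        /* read_posted() polls the request counter and then copies the
         * reply back out of the shared memory window.
         */
        return ops->read_posted(dev, rep, words, MBX_FW);
    }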
Signed-off-by: Wenbo Cao <caowenbo@mucse.com>
Suggested-by: Stephen Hemminger <stephen@networkplumber.org>
---
drivers/net/rnp/base/rnp_api.c | 23 ++
drivers/net/rnp/base/rnp_api.h | 7 +
drivers/net/rnp/base/rnp_cfg.h | 7 +
drivers/net/rnp/base/rnp_dma_regs.h | 73 ++++
drivers/net/rnp/base/rnp_eth_regs.h | 124 +++++++
drivers/net/rnp/base/rnp_hw.h | 112 +++++-
drivers/net/rnp/meson.build | 1 +
drivers/net/rnp/rnp.h | 35 ++
drivers/net/rnp/rnp_ethdev.c | 70 +++-
drivers/net/rnp/rnp_logs.h | 9 +
drivers/net/rnp/rnp_mbx.c | 522 ++++++++++++++++++++++++++++
drivers/net/rnp/rnp_mbx.h | 139 ++++++++
drivers/net/rnp/rnp_mbx_fw.c | 271 +++++++++++++++
drivers/net/rnp/rnp_mbx_fw.h | 22 ++
14 files changed, 1412 insertions(+), 3 deletions(-)
create mode 100644 drivers/net/rnp/base/rnp_api.c
create mode 100644 drivers/net/rnp/base/rnp_api.h
create mode 100644 drivers/net/rnp/base/rnp_cfg.h
create mode 100644 drivers/net/rnp/base/rnp_dma_regs.h
create mode 100644 drivers/net/rnp/base/rnp_eth_regs.h
create mode 100644 drivers/net/rnp/rnp_mbx.c
create mode 100644 drivers/net/rnp/rnp_mbx.h
create mode 100644 drivers/net/rnp/rnp_mbx_fw.c
create mode 100644 drivers/net/rnp/rnp_mbx_fw.h
diff --git a/drivers/net/rnp/base/rnp_api.c b/drivers/net/rnp/base/rnp_api.c
new file mode 100644
index 0000000000..550da6217d
--- /dev/null
+++ b/drivers/net/rnp/base/rnp_api.c
@@ -0,0 +1,23 @@
+#include "rnp.h"
+#include "rnp_api.h"
+
+int
+rnp_init_hw(struct rte_eth_dev *dev)
+{
+ const struct rnp_mac_api *ops = RNP_DEV_TO_MAC_OPS(dev);
+ struct rnp_hw *hw = RNP_DEV_TO_HW(dev);
+
+ if (ops->init_hw)
+ return ops->init_hw(hw);
+ return -EOPNOTSUPP;
+}
+
+int
+rnp_reset_hw(struct rte_eth_dev *dev, struct rnp_hw *hw)
+{
+ const struct rnp_mac_api *ops = RNP_DEV_TO_MAC_OPS(dev);
+
+ if (ops->reset_hw)
+ return ops->reset_hw(hw);
+ return -EOPNOTSUPP;
+}
diff --git a/drivers/net/rnp/base/rnp_api.h b/drivers/net/rnp/base/rnp_api.h
new file mode 100644
index 0000000000..df574dab77
--- /dev/null
+++ b/drivers/net/rnp/base/rnp_api.h
@@ -0,0 +1,7 @@
+#ifndef __RNP_API_H__
+#define __RNP_API_H__
+int
+rnp_init_hw(struct rte_eth_dev *dev);
+int
+rnp_reset_hw(struct rte_eth_dev *dev, struct rnp_hw *hw);
+#endif /* __RNP_API_H__ */
diff --git a/drivers/net/rnp/base/rnp_cfg.h b/drivers/net/rnp/base/rnp_cfg.h
new file mode 100644
index 0000000000..90f25268ad
--- /dev/null
+++ b/drivers/net/rnp/base/rnp_cfg.h
@@ -0,0 +1,7 @@
+#ifndef __RNP_CFG_H__
+#define __RNP_CFG_H__
+#include "rnp_osdep.h"
+
+#define RNP_NIC_RESET _NIC_(0x0010)
+#define RNP_TX_QINQ_WORKAROUND _NIC_(0x801c)
+#endif /* __RNP_CFG_H__ */
diff --git a/drivers/net/rnp/base/rnp_dma_regs.h b/drivers/net/rnp/base/rnp_dma_regs.h
new file mode 100644
index 0000000000..bfe87e534d
--- /dev/null
+++ b/drivers/net/rnp/base/rnp_dma_regs.h
@@ -0,0 +1,73 @@
+#ifndef __RNP_DMA_REGS_H__
+#define __RNP_DMA_REGS_H__
+
+#include "rnp_osdep.h"
+
+/* mac address offset */
+#define RNP_DMA_CTRL (0x4)
+#define RNP_VEB_BYPASS_EN BIT(4)
+#define RNP_DMA_MEM_CFG_LE (0 << 5)
+#define RNP_DMA_MEM_CFG_BE (1 << 5)
+#define RNP_DMA_SCATTER_MEM_SHIFT (16)
+
+#define RNP_FIRMWARE_SYNC (0xc)
+#define RNP_FIRMWARE_SYNC_MASK GENMASK(31, 16)
+#define RNP_FIRMWARE_SYNC_MAGIC (0xa5a40000)
+#define RNP_DRIVER_REMOVE (0x5a000000)
+/* 1BIT <-> 16 bytes Dma Addr Size*/
+#define RNP_DMA_SCATTER_MEM_MASK GENMASK(31, 16)
+#define RNP_DMA_TX_MAP_MODE_SHIFT (12)
+#define RNP_DMA_TX_MAP_MODE_MASK GENMASK(15, 12)
+#define RNP_DMA_RX_MEM_PAD_EN BIT(8)
+/* === queue register ===== */
+/* enable */
+#define RNP_DMA_RXQ_START(qid) _RING_(0x0010 + 0x100 * (qid))
+#define RNP_DMA_RXQ_READY(qid) _RING_(0x0014 + 0x100 * (qid))
+#define RNP_DMA_TXQ_START(qid) _RING_(0x0018 + 0x100 * (qid))
+#define RNP_DMA_TXQ_READY(qid) _RING_(0x001c + 0x100 * (qid))
+
+#define RNP_DMA_INT_STAT(qid) _RING_(0x0020 + 0x100 * (qid))
+#define RNP_DMA_INT_MASK(qid) _RING_(0x0024 + 0x100 * (qid))
+#define RNP_TX_INT_MASK BIT(1)
+#define RNP_RX_INT_MASK BIT(0)
+#define RNP_DMA_INT_CLER(qid) _RING_(0x0028 + 0x100 * (qid))
+
+/* rx-queue */
+#define RNP_DMA_RXQ_BASE_ADDR_HI(qid) _RING_(0x0030 + 0x100 * (qid))
+#define RNP_DMA_RXQ_BASE_ADDR_LO(qid) _RING_(0x0034 + 0x100 * (qid))
+#define RNP_DMA_RXQ_LEN(qid) _RING_(0x0038 + 0x100 * (qid))
+#define RNP_DMA_RXQ_HEAD(qid) _RING_(0x003c + 0x100 * (qid))
+#define RNP_DMA_RXQ_TAIL(qid) _RING_(0x0040 + 0x100 * (qid))
+#define RNP_DMA_RXQ_DESC_FETCH_CTRL(qid) _RING_(0x0044 + 0x100 * (qid))
+#define RNP_DMA_RXQ_INT_DELAY_TIMER(qid) _RING_(0x0048 + 0x100 * (qid))
+#define RNP_DMA_RXQ_INT_DELAY_PKTCNT(qid) _RING_(0x004c + 0x100 * (qid))
+#define RNP_DMA_RXQ_RX_PRI_LVL(qid) _RING_(0x0050 + 0x100 * (qid))
+#define RNP_DMA_RXQ_DROP_TIMEOUT_TH(qid) _RING_(0x0054 + 0x100 * (qid))
+/* tx-queue */
+#define RNP_DMA_TXQ_BASE_ADDR_HI(qid) _RING_(0x0060 + 0x100 * (qid))
+#define RNP_DMA_TXQ_BASE_ADDR_LO(qid) _RING_(0x0064 + 0x100 * (qid))
+#define RNP_DMA_TXQ_LEN(qid) _RING_(0x0068 + 0x100 * (qid))
+#define RNP_DMA_TXQ_HEAD(qid) _RING_(0x006c + 0x100 * (qid))
+#define RNP_DMA_TXQ_TAIL(qid) _RING_(0x0070 + 0x100 * (qid))
+#define RNP_DMA_TXQ_DESC_FETCH_CTRL(qid) _RING_(0x0074 + 0x100 * (qid))
+#define RNP_DMA_TXQ_INT_DELAY_TIMER(qid) _RING_(0x0078 + 0x100 * (qid))
+#define RNP_DMA_TXQ_INT_DELAY_PKTCNT(qid) _RING_(0x007c + 0x100 * (qid))
+
+#define RNP_DMA_TXQ_PRI_LVL(qid) _RING_(0x0080 + 0x100 * (qid))
+#define RNP_DMA_TXQ_RATE_CTRL_TH(qid) _RING_(0x0084 + 0x100 * (qid))
+#define RNP_DMA_TXQ_RATE_CTRL_TM(qid) _RING_(0x0088 + 0x100 * (qid))
+
+/* VEB Table Register */
+#define RNP_VBE_MAC_LO(port, nr) _RING_(0x00a0 + (4 * (port)) + \
+ (0x100 * (nr)))
+#define RNP_VBE_MAC_HI(port, nr) _RING_(0x00b0 + (4 * (port)) + \
+ (0x100 * (nr)))
+#define RNP_VEB_VID_CFG(port, nr) _RING_(0x00c0 + (4 * (port)) + \
+ (0x100 * (nr)))
+#define RNP_VEB_VF_RING(port, nr) _RING_(0x00d0 + (4 * (port)) + \
+ (0x100 * (nr)))
+#define RNP_MAX_VEB_TB (64)
+#define RNP_VEB_RING_CFG_OFFSET (8)
+#define RNP_VEB_SWITCH_VF_EN BIT(7)
+#define MAX_VEB_TABLES_NUM (4)
+#endif /* __RNP_DMA_REGS_H__ */
diff --git a/drivers/net/rnp/base/rnp_eth_regs.h b/drivers/net/rnp/base/rnp_eth_regs.h
new file mode 100644
index 0000000000..88e8e1e552
--- /dev/null
+++ b/drivers/net/rnp/base/rnp_eth_regs.h
@@ -0,0 +1,124 @@
+#ifndef _RNP_ETH_REGS_H_
+#define _RNP_ETH_REGS_H_
+
+#include "rnp_osdep.h"
+
+/* PTP 1588 TM Offload */
+#define RNP_ETH_PTP_TX_STATUS(n) _ETH_(0x0400 + ((n) * 0x14))
+#define RNP_ETH_PTP_TX_HTIMES(n) _ETH_(0x0404 + ((n) * 0x14))
+#define RNP_ETH_PTP_TX_LTIMES(n) _ETH_(0x0408 + ((n) * 0x14))
+#define RNP_ETH_PTP_TX_TS_ST(n) _ETH_(0x040c + ((n) * 0x14))
+#define RNP_ETH_PTP_TX_CLEAR(n) _ETH_(0x0410 + ((n) * 0x14))
+
+#define RNP_ETH_ENGINE_BYPASS _ETH_(0x8000)
+#define RNP_EN_TUNNEL_VXLAN_PARSE _ETH_(0x8004)
+#define RNP_ETH_MAC_LOOPBACK _ETH_(0x8008)
+#define RNP_ETH_FIFO_CTRL _ETH_(0x800c)
+#define RNP_ETH_FOUR_FIFO BIT(0)
+#define RNP_ETH_TWO_FIFO BIT(1)
+#define RNP_ETH_ONE_FIFO BIT(2)
+#define RNP_FIFO_CFG_EN (0x1221)
+#define RNP_ETH_VXLAN_PORT_CTRL _ETH_(0x8010)
+#define RNP_ETH_VXLAN_DEF_PORT (4789)
+#define RNP_HOST_FILTER_EN _ETH_(0x801c)
+#define RNP_HW_SCTP_CKSUM_CTRL _ETH_(0x8038)
+#define RNP_HW_CHECK_ERR_CTRL _ETH_(0x8060)
+#define RNP_HW_ERR_HDR_LEN BIT(0)
+#define RNP_HW_ERR_PKTLEN BIT(1)
+#define RNP_HW_L3_CKSUM_ERR BIT(2)
+#define RNP_HW_L4_CKSUM_ERR BIT(3)
+#define RNP_HW_SCTP_CKSUM_ERR BIT(4)
+#define RNP_HW_INNER_L3_CKSUM_ERR BIT(5)
+#define RNP_HW_INNER_L4_CKSUM_ERR BIT(6)
+#define RNP_HW_CKSUM_ERR_MASK GENMASK(6, 2)
+#define RNP_HW_CHECK_ERR_MASK GENMASK(6, 0)
+#define RNP_HW_ERR_RX_ALL_MASK GENMASK(1, 0)
+
+#define RNP_REDIR_CTRL _ETH_(0x8030)
+#define RNP_VLAN_Q_STRIP_CTRL(n) _ETH_(0x8040 + 0x4 * ((n) / 32))
+/* This Just VLAN Master Switch */
+#define RNP_VLAN_TUNNEL_STRIP_EN _ETH_(0x8050)
+#define RNP_VLAN_TUNNEL_STRIP_MODE _ETH_(0x8054)
+#define RNP_VLAN_TUNNEL_STRIP_OUTER (0)
+#define RNP_VLAN_TUNNEL_STRIP_INNER (1)
+#define RNP_RSS_INNER_CTRL _ETH_(0x805c)
+#define RNP_INNER_RSS_EN (1)
+
+#define RNP_ETH_DEFAULT_RX_RING _ETH_(0x806c)
+#define RNP_RX_FC_HI_WATER(n) _ETH_(0x80c0 + ((n) * 0x8))
+#define RNP_RX_FC_LO_WATER(n) _ETH_(0x80c4 + ((n) * 0x8))
+
+#define RNP_RX_FIFO_FULL_THRETH(n) _ETH_(0x8070 + ((n) * 0x8))
+#define RNP_RX_WORKAROUND_VAL _ETH_(0x7ff)
+#define RNP_RX_DEFAULT_VAL _ETH_(0x270)
+
+#define RNP_MIN_FRAME_CTRL _ETH_(0x80f0)
+#define RNP_MAX_FRAME_CTRL _ETH_(0x80f4)
+
+#define RNP_RX_FC_ENABLE _ETH_(0x8520)
+#define RNP_RING_FC_EN(n) _ETH_(0x8524 + 0x4 * ((n) / 32))
+#define RNP_RING_FC_THRESH(n) _ETH_(0x8a00 + 0x4 * (n))
+
+/* Mac Host Filter */
+#define RNP_MAC_FCTRL _ETH_(0x9110)
+#define RNP_MAC_FCTRL_MPE BIT(8) /* Multicast Promiscuous En */
+#define RNP_MAC_FCTRL_UPE BIT(9) /* Unicast Promiscuous En */
+#define RNP_MAC_FCTRL_BAM BIT(10) /* Broadcast Accept Mode */
+#define RNP_MAC_FCTRL_BYPASS (RNP_MAC_FCTRL_MPE | \
+ RNP_MAC_FCTRL_UPE | \
+ RNP_MAC_FCTRL_BAM)
+/* MC UC Mac Hash Filter Ctrl */
+#define RNP_MAC_MCSTCTRL _ETH_(0x9114)
+#define RNP_MAC_HASH_MASK GENMASK(11, 0)
+#define RNP_MAC_MULTICASE_TBL_EN BIT(2)
+#define RNP_MAC_UNICASE_TBL_EN BIT(3)
+#define RNP_UC_HASH_TB(n) _ETH_(0xA800 + ((n) * 0x4))
+#define RNP_MC_HASH_TB(n) _ETH_(0xAC00 + ((n) * 0x4))
+
+#define RNP_VLAN_FILTER_CTRL _ETH_(0x9118)
+#define RNP_L2TYPE_FILTER_CTRL (RNP_VLAN_FILTER_CTRL)
+#define RNP_L2TYPE_FILTER_EN BIT(31)
+#define RNP_VLAN_FILTER_EN BIT(30)
+
+#define RNP_FC_PAUSE_FWD_ACT _ETH_(0x9280)
+#define RNP_FC_PAUSE_DROP BIT(31)
+#define RNP_FC_PAUSE_PASS (0)
+#define RNP_FC_PAUSE_TYPE _ETH_(0x9284)
+#define RNP_FC_PAUSE_POLICY_EN BIT(31)
+#define RNP_PAUSE_TYPE _ETH_(0x8808)
+
+#define RNP_INPUT_USE_CTRL _ETH_(0x91d0)
+#define RNP_INPUT_VALID_MASK (0xf)
+#define RNP_INPUT_POLICY(n) _ETH_(0x91e0 + ((n) * 0x4))
+/* RSS */
+#define RNP_RSS_MRQC_ADDR _ETH_(0x92a0)
+#define RNP_SRIOV_CTRL RNP_RSS_MRQC_ADDR
+#define RNP_SRIOV_ENABLE BIT(3)
+
+#define RNP_RSS_REDIR_TB(mac, idx) _ETH_(0xe000 + \
+ ((mac) * 0x200) + ((idx) * 0x4))
+#define RNP_RSS_KEY_TABLE(idx) _ETH_(0x92d0 + ((idx) * 0x4))
+/*=======================================================================
+ *HOST_MAC_ADDRESS_FILTER
+ *=======================================================================
+ */
+#define RNP_RAL_BASE_ADDR(vf_id) _ETH_(0xA000 + 0x04 * (vf_id))
+#define RNP_RAH_BASE_ADDR(vf_id) _ETH_(0xA400 + 0x04 * (vf_id))
+#define RNP_MAC_FILTER_EN BIT(31)
+
+/* ETH Statistic */
+#define RNP_ETH_RXTRANS_DROP(p_id) _ETH_((0x8904) + ((p_id) * (0x40)))
+#define RNP_ETH_RXTRANS_CAT_ERR(p_id) _ETH_((0x8928) + ((p_id) * (0x40)))
+#define RNP_ETH_TXTM_DROP _ETH_(0X0470)
+
+#define RNP_VFTA_BASE_ADDR _ETH_(0xB000)
+#define RNP_VFTA_HASH_TABLE(id) (RNP_VFTA_BASE_ADDR + 0x4 * (id))
+#define RNP_ETYPE_BASE_ADDR _ETH_(0xB300)
+#define RNP_MPSAR_BASE_ADDR(vf_id) _ETH_(0xB400 + 0x04 * (vf_id))
+#define RNP_PFVLVF_BASE_ADDR _ETH_(0xB600)
+#define RNP_PFVLVFB_BASE_ADDR _ETH_(0xB700)
+#define RNP_TUNNEL_PFVLVF_BASE_ADDR _ETH_(0xB800)
+#define RNP_TUNNEL_PFVLVFB_BASE_ADDR _ETH_(0xB900)
+
+#define RNP_TC_PORT_MAP_TB(port) _ETH_(0xe840 + 0x04 * (port))
+#endif /* _RNP_ETH_REGS_H_ */
diff --git a/drivers/net/rnp/base/rnp_hw.h b/drivers/net/rnp/base/rnp_hw.h
index d80d23f4b4..1db966cf21 100644
--- a/drivers/net/rnp/base/rnp_hw.h
+++ b/drivers/net/rnp/base/rnp_hw.h
@@ -4,16 +4,126 @@
#ifndef __RNP_HW_H__
#define __RNP_HW_H__
+#include <rte_io.h>
+#include <ethdev_driver.h>
+
+#include "rnp_osdep.h"
+
+static inline unsigned int rnp_rd_reg(volatile void *addr)
+{
+ unsigned int v = rte_read32(addr);
+
+ return v;
+}
+
+static inline void rnp_wr_reg(volatile void *reg, int val)
+{
+ rte_write32_relaxed((val), (reg));
+}
+
+#define mbx_rd32(_hw, _off) \
+ rnp_rd_reg((uint8_t *)((_hw)->iobar4) + (_off))
+#define mbx_wr32(_hw, _off, _val) \
+ rnp_wr_reg((uint8_t *)((_hw)->iobar4) + (_off), (_val))
+#define rnp_io_rd(_base, _off) \
+ rnp_rd_reg((uint8_t *)(_base) + (_off))
+#define rnp_io_wr(_base, _off, _val) \
+ rnp_wr_reg((uint8_t *)(_base) + (_off), (_val))
+
+struct rnp_hw;
+/* Mbx Operate info */
+enum MBX_ID {
+ MBX_PF = 0,
+ MBX_VF,
+ MBX_CM3CPU,
+ MBX_FW = MBX_CM3CPU,
+ MBX_VFCNT
+};
+struct rnp_mbx_api {
+ void (*init_mbx)(struct rnp_hw *hw);
+ int32_t (*read)(struct rnp_hw *hw,
+ uint32_t *msg,
+ uint16_t size,
+ enum MBX_ID);
+ int32_t (*write)(struct rnp_hw *hw,
+ uint32_t *msg,
+ uint16_t size,
+ enum MBX_ID);
+ int32_t (*read_posted)(struct rte_eth_dev *dev,
+ uint32_t *msg,
+ uint16_t size,
+ enum MBX_ID);
+ int32_t (*write_posted)(struct rte_eth_dev *dev,
+ uint32_t *msg,
+ uint16_t size,
+ enum MBX_ID);
+ int32_t (*check_for_msg)(struct rnp_hw *hw, enum MBX_ID);
+ int32_t (*check_for_ack)(struct rnp_hw *hw, enum MBX_ID);
+ int32_t (*check_for_rst)(struct rnp_hw *hw, enum MBX_ID);
+};
+
+struct rnp_mbx_stats {
+ u32 msgs_tx;
+ u32 msgs_rx;
+
+ u32 acks;
+ u32 reqs;
+ u32 rsts;
+};
+
+struct rnp_mbx_info {
+ struct rnp_mbx_api ops;
+ uint32_t usec_delay; /* retry interval delay time */
+ uint32_t timeout; /* retry ops timeout limit */
+ uint16_t size; /* data buffer size*/
+ uint16_t vf_num; /* Virtual Function num */
+ uint16_t pf_num; /* Physical Function num */
+ uint16_t sriov_st; /* Sriov state */
+ bool irq_enabled;
+ union {
+ struct {
+ unsigned short pf_req;
+ unsigned short pf_ack;
+ };
+ struct {
+ unsigned short cpu_req;
+ unsigned short cpu_ack;
+ };
+ };
+ unsigned short vf_req[64];
+ unsigned short vf_ack[64];
+
+ struct rnp_mbx_stats stats;
+
+ rte_atomic16_t state;
+} __rte_cache_aligned;
+
struct rnp_eth_adapter;
+#define RNP_MAX_HW_PORT_PERR_PF (4)
struct rnp_hw {
struct rnp_eth_adapter *back;
void *iobar0;
uint32_t iobar0_len;
void *iobar4;
uint32_t iobar4_len;
+ void *link_sync;
+ void *dma_base;
+ void *eth_base;
+ void *veb_base;
+ void *mac_base[RNP_MAX_HW_PORT_PERR_PF];
+ void *msix_base;
+ /* === dma == */
+ void *dma_axi_en;
+ void *dma_axi_st;
uint16_t device_id;
uint16_t vendor_id;
-} __rte_cache_aligned;
+ uint16_t function;
+ uint16_t pf_vf_num;
+ uint16_t max_vfs;
+ void *cookie_pool;
+ char cookie_p_name[RTE_MEMZONE_NAMESIZE];
+ struct rnp_mbx_info mbx;
+} __rte_cache_aligned;
#endif /* __RNP_HW_H__ */
diff --git a/drivers/net/rnp/meson.build b/drivers/net/rnp/meson.build
index f85d597e68..60bba486fc 100644
--- a/drivers/net/rnp/meson.build
+++ b/drivers/net/rnp/meson.build
@@ -8,5 +8,6 @@ endif
sources = files(
'rnp_ethdev.c',
+ 'rnp_mbx.c',
)
includes += include_directories('base')
diff --git a/drivers/net/rnp/rnp.h b/drivers/net/rnp/rnp.h
index c7959c64aa..086667cec1 100644
--- a/drivers/net/rnp/rnp.h
+++ b/drivers/net/rnp/rnp.h
@@ -3,6 +3,7 @@
*/
#ifndef __RNP_H__
#define __RNP_H__
+#include <rte_log.h>
#include "base/rnp_hw.h"
@@ -14,14 +15,17 @@
struct rnp_eth_port {
struct rnp_eth_adapter *adapt;
+ struct rnp_hw *hw;
struct rte_eth_dev *eth_dev;
} __rte_cache_aligned;
struct rnp_share_ops {
+ const struct rnp_mbx_api *mbx_api;
} __rte_cache_aligned;
struct rnp_eth_adapter {
struct rnp_hw hw;
+ uint16_t max_vfs;
struct rte_pci_device *pdev;
struct rte_eth_dev *eth_dev; /* master eth_dev */
struct rnp_eth_port *ports[RNP_MAX_PORT_OF_PF];
@@ -34,5 +38,36 @@ struct rnp_eth_adapter {
(((struct rnp_eth_port *)((eth_dev)->data->dev_private)))
#define RNP_DEV_TO_ADAPTER(eth_dev) \
((struct rnp_eth_adapter *)(RNP_DEV_TO_PORT(eth_dev)->adapt))
+#define RNP_DEV_TO_HW(eth_dev) \
+ (&((struct rnp_eth_adapter *)(RNP_DEV_TO_PORT((eth_dev))->adapt))->hw)
+#define RNP_DEV_PP_PRIV_TO_MBX_OPS(dev) \
+ (((struct rnp_share_ops *)(dev)->process_private)->mbx_api)
+#define RNP_DEV_TO_MBX_OPS(dev) RNP_DEV_PP_PRIV_TO_MBX_OPS(dev)
+static inline void rnp_reg_offset_init(struct rnp_hw *hw)
+{
+ uint16_t i;
+
+ if (hw->device_id == RNP_DEV_ID_N10G && hw->mbx.pf_num) {
+ hw->iobar4 = (void *)((uint8_t *)hw->iobar4 + 0x100000);
+ hw->msix_base = (void *)((uint8_t *)hw->iobar4 + 0xa4000);
+ hw->msix_base = (void *)((uint8_t *)hw->msix_base + 0x200);
+ } else {
+ hw->msix_base = (void *)((uint8_t *)hw->iobar4 + 0xa4000);
+ }
+ /* === dma status/config====== */
+ hw->link_sync = (void *)((uint8_t *)hw->iobar4 + 0x000c);
+ hw->dma_axi_en = (void *)((uint8_t *)hw->iobar4 + 0x0010);
+ hw->dma_axi_st = (void *)((uint8_t *)hw->iobar4 + 0x0014);
+
+ if (hw->mbx.pf_num)
+ hw->msix_base = (void *)((uint8_t *)0x200);
+ /* === queue registers === */
+ hw->dma_base = (void *)((uint8_t *)hw->iobar4 + 0x08000);
+ hw->veb_base = (void *)((uint8_t *)hw->iobar4 + 0x0);
+ hw->eth_base = (void *)((uint8_t *)hw->iobar4 + 0x10000);
+ /* mac */
+ for (i = 0; i < RNP_MAX_HW_PORT_PERR_PF; i++)
+ hw->mac_base[i] = (void *)((uint8_t *)hw->iobar4 + 0x60000 + 0x10000 * i);
+}
#endif /* __RNP_H__ */
diff --git a/drivers/net/rnp/rnp_ethdev.c b/drivers/net/rnp/rnp_ethdev.c
index 357375ee39..8a6635951b 100644
--- a/drivers/net/rnp/rnp_ethdev.c
+++ b/drivers/net/rnp/rnp_ethdev.c
@@ -8,6 +8,7 @@
#include <ethdev_driver.h>
#include "rnp.h"
+#include "rnp_mbx.h"
#include "rnp_logs.h"
static int
@@ -89,6 +90,58 @@ rnp_alloc_eth_port(struct rte_pci_device *master_pci, char *name)
return NULL;
}
+static void rnp_get_nic_attr(struct rnp_eth_adapter *adapter)
+{
+ RTE_SET_USED(adapter);
+}
+
+static int
+rnp_process_resource_init(struct rte_eth_dev *eth_dev)
+{
+ struct rnp_share_ops *share_priv;
+
+ /* Allocate the process_private memory with plain calloc(): it
+ * must not come from the DPDK memory manager (rte_malloc,
+ * rte_memzone, ...).
+ */
+ /* process_private holds per-process function pointers used to
+ * support secondary processes. Function pointers are virtual
+ * addresses only valid inside the process that filled them in;
+ * if a secondary process called a pointer set up by the primary
+ * process it would dereference a primary-process address and
+ * crash. Keeping this structure in per-process memory (never in
+ * hugepage shared memory) avoids that, and also keeps secondary
+ * processes from freeing DPDK memory they do not own. Be
+ * careful with pointers shared between processes.
+ */
+ share_priv = calloc(1, sizeof(*share_priv));
+ if (!share_priv) {
+ PMD_DRV_LOG(ERR, "calloc share_priv failed");
+ return -ENOMEM;
+ }
+ memset(share_priv, 0, sizeof(*share_priv));
+ eth_dev->process_private = share_priv;
+
+ return 0;
+}
+
+static void
+rnp_common_ops_init(struct rnp_eth_adapter *adapter)
+{
+ struct rnp_share_ops *share_priv;
+
+ share_priv = adapter->share_priv;
+ share_priv->mbx_api = &rnp_mbx_pf_ops;
+}
+
+static int
+rnp_special_ops_init(struct rte_eth_dev *eth_dev)
+{
+ RTE_SET_USED(eth_dev);
+
+ return 0;
+}
+
static int
rnp_eth_dev_init(struct rte_eth_dev *dev)
{
@@ -124,6 +177,20 @@ rnp_eth_dev_init(struct rte_eth_dev *dev)
hw->device_id = pci_dev->id.device_id;
hw->vendor_id = pci_dev->id.vendor_id;
hw->device_id = pci_dev->id.device_id;
+ adapter->max_vfs = pci_dev->max_vfs;
+ ret = rnp_process_resource_init(dev);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "share prive resource init failed");
+ return ret;
+ }
+ adapter->share_priv = dev->process_private;
+ rnp_common_ops_init(adapter);
+ rnp_get_nic_attr(adapter);
+ /* use the device ID to select the resource mode */
+ rnp_special_ops_init(dev);
+ port->adapt = adapter;
+ port->hw = hw;
+ rnp_init_mbx_ops_pf(hw);
for (p_id = 0; p_id < adapter->num_ports; p_id++) {
/* port 0 resource has already been allocated during probe */
if (!p_id) {
@@ -158,11 +225,10 @@ rnp_eth_dev_init(struct rte_eth_dev *dev)
continue;
if (port->eth_dev) {
rnp_dev_close(port->eth_dev);
- rte_eth_dev_release_port(port->eth_dev);
if (port->eth_dev->process_private)
free(port->eth_dev->process_private);
+ rte_eth_dev_release_port(port->eth_dev);
}
- rte_free(port);
}
rte_free(adapter);
diff --git a/drivers/net/rnp/rnp_logs.h b/drivers/net/rnp/rnp_logs.h
index 1b3ee33745..f1648aabb5 100644
--- a/drivers/net/rnp/rnp_logs.h
+++ b/drivers/net/rnp/rnp_logs.h
@@ -13,6 +13,15 @@ extern int rnp_drv_logtype;
#define RNP_PMD_DRV_LOG(level, fmt, args...) \
rte_log(RTE_LOG_##level, rnp_drv_logtype, \
"%s() " fmt, __func__, ##args)
+#define PMD_DRV_LOG_RAW(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, rnp_drv_logtype, "%s(): " fmt, \
+ __func__, ## args)
+#define PMD_DRV_LOG(level, fmt, args...) \
+ PMD_DRV_LOG_RAW(level, fmt "\n", ## args)
+
+#define RNP_PMD_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_##level, rnp_drv_logtype, \
+ "rnp_net: (%d) " fmt, __LINE__, ##args)
#ifdef RTE_LIBRTE_RNP_DEBUG_RX
extern int rnp_rx_logtype;
#define RNP_PMD_RX_LOG(level, fmt, args...) \
diff --git a/drivers/net/rnp/rnp_mbx.c b/drivers/net/rnp/rnp_mbx.c
new file mode 100644
index 0000000000..29aedc554b
--- /dev/null
+++ b/drivers/net/rnp/rnp_mbx.c
@@ -0,0 +1,522 @@
+#include <rte_cycles.h>
+#include <rte_log.h>
+
+#include "rnp.h"
+#include "rnp_hw.h"
+#include "rnp_mbx.h"
+#include "rnp_mbx_fw.h"
+#include "rnp_logs.h"
+
+#define RNP_MAX_VF_FUNCTIONS (64)
+/* == VEC == */
+#define VF2PF_MBOX_VEC(VF) (0xa5100 + 4 * (VF))
+#define CPU2PF_MBOX_VEC (0xa5300)
+
+/* == PF <--> VF mailbox ==== */
+#define SHARE_MEM_BYTES (64) /* 64bytes */
+/* for PF1 rtl will remap 6000 to 0xb000 */
+#define PF_VF_SHM(vf) ((0xa6000) + (64 * (vf)))
+#define PF2VF_COUNTER(vf) (PF_VF_SHM(vf) + 0)
+#define VF2PF_COUNTER(vf) (PF_VF_SHM(vf) + 4)
+#define PF_VF_SHM_DATA(vf) (PF_VF_SHM(vf) + 8)
+#define PF2VF_MBOX_CTRL(vf) ((0xa7100) + (4 * (vf)))
+#define PF_VF_MBOX_MASK_LO ((0xa7200))
+#define PF_VF_MBOX_MASK_HI ((0xa7300))
+
+/* === CPU <--> PF === */
+#define CPU_PF_SHM (0xaa000)
+#define CPU2PF_COUNTER (CPU_PF_SHM + 0)
+#define PF2CPU_COUNTER (CPU_PF_SHM + 4)
+#define CPU_PF_SHM_DATA (CPU_PF_SHM + 8)
+#define PF2CPU_MBOX_CTRL (0xaa100)
+#define CPU_PF_MBOX_MASK (0xaa300)
+
+/* === CPU <--> VF === */
+#define CPU_VF_SHM(vf) (0xa8000 + (64 * (vf)))
+#define CPU2VF_COUNTER(vf) (CPU_VF_SHM(vf) + 0)
+#define VF2CPU_COUNTER(vf) (CPU_VF_SHM(vf) + 4)
+#define CPU_VF_SHM_DATA(vf) (CPU_VF_SHM(vf) + 8)
+#define VF2CPU_MBOX_CTRL(vf) (0xa9000 + 64 * (vf))
+#define CPU_VF_MBOX_MASK_LO(vf) (0xa9200 + 64 * (vf))
+#define CPU_VF_MBOX_MASK_HI(vf) (0xa9300 + 64 * (vf))
+
+#define MBOX_CTRL_REQ (1 << 0) /* WO */
+/* VF:WR, PF:RO */
+#define MBOX_CTRL_PF_HOLD_SHM (1 << 3) /* VF:RO, PF:WR */
+
+#define MBOX_IRQ_EN (0)
+#define MBOX_IRQ_DISABLE (1)
+
+/****************************PF MBX OPS************************************/
+static inline u16 rnp_mbx_get_req(struct rnp_hw *hw, int reg)
+{
+ rte_mb();
+ return mbx_rd32(hw, reg) & 0xffff;
+}
+
+static inline u16 rnp_mbx_get_ack(struct rnp_hw *hw, int reg)
+{
+ rte_mb();
+ return (mbx_rd32(hw, reg) >> 16) & 0xffff;
+}
+
+static inline void rnp_mbx_inc_pf_req(struct rnp_hw *hw, enum MBX_ID mbx_id)
+{
+ int reg = (mbx_id == MBX_CM3CPU) ?
+ PF2CPU_COUNTER : PF2VF_COUNTER(mbx_id);
+ u32 v = mbx_rd32(hw, reg);
+ u16 req;
+
+ req = (v & 0xffff);
+ req++;
+ v &= ~(0x0000ffff);
+ v |= req;
+
+ rte_mb();
+ mbx_wr32(hw, reg, v);
+
+ /* update stats */
+ /* hw->mbx.stats.msgs_tx++; */
+}
+
+static inline void rnp_mbx_inc_pf_ack(struct rnp_hw *hw, enum MBX_ID mbx_id)
+{
+ int reg = (mbx_id == MBX_CM3CPU) ?
+ PF2CPU_COUNTER : PF2VF_COUNTER(mbx_id);
+ u32 v = mbx_rd32(hw, reg);
+ u16 ack;
+
+ ack = (v >> 16) & 0xffff;
+ ack++;
+ v &= ~(0xffff0000);
+ v |= (ack << 16);
+
+ rte_mb();
+ mbx_wr32(hw, reg, v);
+
+ /* update stats */
+ /* hw->mbx.stats.msgs_rx++; */
+}
+
+/**
+ * rnp_poll_for_msg - Wait for message notification
+ * @hw: pointer to the HW structure
+ * @mbx_id: id of mailbox to write
+ *
+ * returns SUCCESS if it successfully received a message notification
+ **/
+static int32_t rnp_poll_for_msg(struct rte_eth_dev *dev, enum MBX_ID mbx_id)
+{
+ const struct rnp_mbx_api *ops = RNP_DEV_TO_MBX_OPS(dev);
+ struct rnp_hw *hw = RNP_DEV_TO_HW(dev);
+ struct rnp_mbx_info *mbx = &hw->mbx;
+ int countdown = mbx->timeout;
+
+ if (!countdown || !ops->check_for_msg)
+ goto out;
+
+ while (countdown && ops->check_for_msg(hw, mbx_id)) {
+ countdown--;
+ if (!countdown)
+ break;
+ rte_delay_us_block(mbx->usec_delay);
+ }
+
+out:
+ return countdown ? 0 : -ETIMEDOUT;
+}
+
+/**
+ * rnp_poll_for_ack - Wait for message acknowledgment
+ * @dev: pointer to the eth_dev structure
+ * @mbx_id: id of mailbox to poll
+ *
+ * returns SUCCESS if it successfully received a message acknowledgment
+ **/
+static int32_t rnp_poll_for_ack(struct rte_eth_dev *dev, enum MBX_ID mbx_id)
+{
+ const struct rnp_mbx_api *ops = RNP_DEV_TO_MBX_OPS(dev);
+ struct rnp_hw *hw = RNP_DEV_TO_HW(dev);
+ struct rnp_mbx_info *mbx = &hw->mbx;
+ int countdown = mbx->timeout;
+
+ if (!countdown || !ops->check_for_ack)
+ goto out;
+
+ while (countdown && ops->check_for_ack(hw, mbx_id)) {
+ countdown--;
+ if (!countdown)
+ break;
+ rte_delay_us_block(mbx->usec_delay);
+ }
+
+out:
+ return countdown ? 0 : -ETIMEDOUT;
+}
+
+/**
+ * rnp_read_posted_mbx - Wait for message notification and receive message
+ * @dev: pointer to the eth_dev structure
+ * @msg: The message buffer
+ * @size: Length of buffer
+ * @mbx_id: id of mailbox to read
+ *
+ * returns SUCCESS if it successfully received a message notification and
+ * copied it into the receive buffer.
+ **/
+static int32_t
+rnp_read_posted_mbx_pf(struct rte_eth_dev *dev, u32 *msg, u16 size,
+ enum MBX_ID mbx_id)
+{
+ const struct rnp_mbx_api *ops = RNP_DEV_TO_MBX_OPS(dev);
+ struct rnp_hw *hw = RNP_DEV_TO_HW(dev);
+ struct rnp_mbx_info *mbx = &hw->mbx;
+ int countdown = mbx->timeout;
+ int32_t ret_val = -ETIMEDOUT;
+
+ if (!ops->read || !countdown)
+ return -EOPNOTSUPP;
+
+ ret_val = rnp_poll_for_msg(dev, mbx_id);
+
+ /* if ack received read message, otherwise we timed out */
+ if (!ret_val)
+ return ops->read(hw, msg, size, mbx_id);
+ return ret_val;
+}
+
+/**
+ * rnp_write_posted_mbx - Write a message to the mailbox, wait for ack
+ * @dev: pointer to the eth_dev structure
+ * @msg: The message buffer
+ * @size: Length of buffer
+ * @mbx_id: id of mailbox to write
+ *
+ * returns SUCCESS if it successfully copied message into the buffer and
+ * received an ack to that message within delay * timeout period
+ **/
+static int32_t
+rnp_write_posted_mbx_pf(struct rte_eth_dev *dev, u32 *msg, u16 size,
+ enum MBX_ID mbx_id)
+{
+ const struct rnp_mbx_api *ops = RNP_DEV_TO_MBX_OPS(dev);
+ struct rnp_hw *hw = RNP_DEV_TO_HW(dev);
+ struct rnp_mbx_info *mbx = &hw->mbx;
+ int32_t ret_val = -ETIMEDOUT;
+
+ /* exit if either we can't write or there isn't a defined timeout */
+ if (!ops->write || !mbx->timeout)
+ goto out;
+
+ /* send msg and hold buffer lock */
+ if (ops->write)
+ ret_val = ops->write(hw, msg, size, mbx_id);
+
+ /* if msg sent wait until we receive an ack */
+ if (!ret_val)
+ ret_val = rnp_poll_for_ack(dev, mbx_id);
+out:
+ return ret_val;
+}
+
+/**
+ * rnp_check_for_msg_pf - checks to see if the VF has sent mail
+ * @hw: pointer to the HW structure
+ * @mbx_id: the VF index or CPU
+ *
+ * returns SUCCESS if the VF has set the Status bit or else ERR_MBX
+ **/
+static int32_t rnp_check_for_msg_pf(struct rnp_hw *hw, enum MBX_ID mbx_id)
+{
+ int32_t ret_val = -ETIMEDOUT;
+
+ if (mbx_id == MBX_CM3CPU) {
+ if (rnp_mbx_get_req(hw, CPU2PF_COUNTER) != hw->mbx.cpu_req) {
+ ret_val = 0;
+ /* hw->mbx.stats.reqs++; */
+ }
+ } else {
+ if (rnp_mbx_get_req(hw, VF2PF_COUNTER(mbx_id)) !=
+ hw->mbx.vf_req[mbx_id]) {
+ ret_val = 0;
+ /* hw->mbx.stats.reqs++; */
+ }
+ }
+
+ return ret_val;
+}
+
+/**
+ * rnp_check_for_ack_pf - checks to see if the VF has ACKed
+ * @hw: pointer to the HW structure
+ * @mbx_id: the VF index or CPU
+ *
+ * returns SUCCESS if the VF has set the Status bit or else ERR_MBX
+ **/
+static int32_t rnp_check_for_ack_pf(struct rnp_hw *hw, enum MBX_ID mbx_id)
+{
+ int32_t ret_val = -ETIMEDOUT;
+
+ if (mbx_id == MBX_CM3CPU) {
+ if (rnp_mbx_get_ack(hw, CPU2PF_COUNTER) != hw->mbx.cpu_ack) {
+ ret_val = 0;
+ /* hw->mbx.stats.acks++; */
+ }
+ } else {
+ if (rnp_mbx_get_ack(hw, VF2PF_COUNTER(mbx_id)) != hw->mbx.vf_ack[mbx_id]) {
+ ret_val = 0;
+ /* hw->mbx.stats.acks++; */
+ }
+ }
+
+ return ret_val;
+}
+
+/**
+ * rnp_obtain_mbx_lock_pf - obtain mailbox lock
+ * @hw: pointer to the HW structure
+ * @mbx_id: the VF index or CPU
+ *
+ * return SUCCESS if we obtained the mailbox lock
+ **/
+static int32_t rnp_obtain_mbx_lock_pf(struct rnp_hw *hw, enum MBX_ID mbx_id)
+{
+ int32_t ret_val = -ETIMEDOUT;
+ int try_cnt = 5000; /* 500ms */
+ u32 CTRL_REG = (mbx_id == MBX_CM3CPU) ?
+ PF2CPU_MBOX_CTRL : PF2VF_MBOX_CTRL(mbx_id);
+
+ while (try_cnt-- > 0) {
+ /* Take ownership of the buffer */
+ mbx_wr32(hw, CTRL_REG, MBOX_CTRL_PF_HOLD_SHM);
+
+ /* reserve mailbox for cm3 use */
+ if (mbx_rd32(hw, CTRL_REG) & MBOX_CTRL_PF_HOLD_SHM)
+ return 0;
+ rte_delay_us_block(100);
+ }
+
+ RNP_PMD_LOG(WARNING, "%s: failed to get:%d lock\n",
+ __func__, mbx_id);
+ return ret_val;
+}
+
+/**
+ * rnp_write_mbx_pf - Places a message in the mailbox
+ * @hw: pointer to the HW structure
+ * @msg: The message buffer
+ * @size: Length of buffer
+ * @mbx_id: the VF index
+ *
+ * returns SUCCESS if it successfully copied message into the buffer
+ **/
+static int32_t rnp_write_mbx_pf(struct rnp_hw *hw, u32 *msg,
+ u16 size, enum MBX_ID mbx_id)
+{
+ u32 DATA_REG = (mbx_id == MBX_CM3CPU) ?
+ CPU_PF_SHM_DATA : PF_VF_SHM_DATA(mbx_id);
+ u32 CTRL_REG = (mbx_id == MBX_CM3CPU) ?
+ PF2CPU_MBOX_CTRL : PF2VF_MBOX_CTRL(mbx_id);
+ int32_t ret_val = 0;
+ u32 stat __rte_unused;
+ u16 i;
+
+ if (size > RNP_VFMAILBOX_SIZE) {
+ RNP_PMD_LOG(ERR, "%s: size:%d should <%d\n", __func__,
+ size, RNP_VFMAILBOX_SIZE);
+ return -EINVAL;
+ }
+
+ /* lock the mailbox to prevent pf/vf/cpu race condition */
+ ret_val = rnp_obtain_mbx_lock_pf(hw, mbx_id);
+ if (ret_val) {
+ RNP_PMD_LOG(WARNING, "PF[%d] Can't Get Mbx-Lock Try Again\n",
+ hw->function);
+ return ret_val;
+ }
+
+ /* copy the caller specified message to the mailbox memory buffer */
+ for (i = 0; i < size; i++) {
+#ifdef MBX_WR_DEBUG
+ mbx_pwr32(hw, DATA_REG + i * 4, msg[i]);
+#else
+ mbx_wr32(hw, DATA_REG + i * 4, msg[i]);
+#endif
+ }
+
+ /* flush msg and acks as we are overwriting the message buffer */
+ if (mbx_id == MBX_CM3CPU)
+ hw->mbx.cpu_ack = rnp_mbx_get_ack(hw, CPU2PF_COUNTER);
+ else
+ hw->mbx.vf_ack[mbx_id] = rnp_mbx_get_ack(hw, VF2PF_COUNTER(mbx_id));
+
+ rnp_mbx_inc_pf_req(hw, mbx_id);
+ rte_mb();
+
+ rte_delay_us(300);
+
+ /* Interrupt VF/CM3 to tell it a message
+ * has been sent and release buffer
+ */
+ mbx_wr32(hw, CTRL_REG, MBOX_CTRL_REQ);
+
+ return 0;
+}
+
+/**
+ * rnp_read_mbx_pf - Read a message from the mailbox
+ * @hw: pointer to the HW structure
+ * @msg: The message buffer
+ * @size: Length of buffer
+ * @mbx_id: the VF index or CPU
+ *
+ * This function copies a message from the mailbox buffer to the caller's
+ * memory buffer. The presumption is that the caller knows that there was
+ * a message due to a VF/CPU request so no polling for message is needed.
+ **/
+static int32_t rnp_read_mbx_pf(struct rnp_hw *hw, u32 *msg,
+ u16 size, enum MBX_ID mbx_id)
+{
+ u32 BUF_REG = (mbx_id == MBX_CM3CPU) ?
+ CPU_PF_SHM_DATA : PF_VF_SHM_DATA(mbx_id);
+ u32 CTRL_REG = (mbx_id == MBX_CM3CPU) ?
+ PF2CPU_MBOX_CTRL : PF2VF_MBOX_CTRL(mbx_id);
+ int32_t ret_val = -EIO;
+ u32 stat __rte_unused, i;
+ if (size > RNP_VFMAILBOX_SIZE) {
+ RNP_PMD_LOG(ERR, "%s: size:%d should <%d\n", __func__,
+ size, RNP_VFMAILBOX_SIZE);
+ return -EINVAL;
+ }
+ /* lock the mailbox to prevent pf/vf race condition */
+ ret_val = rnp_obtain_mbx_lock_pf(hw, mbx_id);
+ if (ret_val)
+ goto out_no_read;
+
+ /* copy the message from the mailbox memory buffer */
+ for (i = 0; i < size; i++) {
+#ifdef MBX_RD_DEBUG
+ msg[i] = mbx_prd32(hw, BUF_REG + 4 * i);
+#else
+ msg[i] = mbx_rd32(hw, BUF_REG + 4 * i);
+#endif
+ }
+ mbx_wr32(hw, BUF_REG, 0);
+
+ /* update req. used by rnpvf_check_for_msg_vf */
+ if (mbx_id == MBX_CM3CPU)
+ hw->mbx.cpu_req = rnp_mbx_get_req(hw, CPU2PF_COUNTER);
+ else
+ hw->mbx.vf_req[mbx_id] = rnp_mbx_get_req(hw, VF2PF_COUNTER(mbx_id));
+
+ /* this ack may be too early? */
+ /* Acknowledge receipt and release mailbox, then we're done */
+ rnp_mbx_inc_pf_ack(hw, mbx_id);
+
+ rte_mb();
+
+ /* free ownership of the buffer */
+ mbx_wr32(hw, CTRL_REG, 0);
+
+out_no_read:
+
+ return ret_val;
+}
+
+static void rnp_mbx_reset_pf(struct rnp_hw *hw)
+{
+ int v;
+
+ /* reset pf->cm3 status */
+ v = mbx_rd32(hw, CPU2PF_COUNTER);
+ hw->mbx.cpu_req = v & 0xffff;
+ hw->mbx.cpu_ack = (v >> 16) & 0xffff;
+ /* release pf->cm3 buffer lock */
+ mbx_wr32(hw, PF2CPU_MBOX_CTRL, 0);
+
+ rte_mb();
+ /* enable irq to fw */
+ mbx_wr32(hw, CPU_PF_MBOX_MASK, 0);
+}
+
+static int get_pfvfnum(struct rnp_hw *hw)
+{
+ uint32_t addr_mask;
+ uint32_t offset;
+ uint32_t val;
+#define RNP_PF_NUM_REG (0x75f000)
+#define RNP_PFVF_SHIFT (4)
+#define RNP_PF_SHIFT (6)
+#define RNP_PF_BIT_MASK BIT(6)
+ addr_mask = hw->iobar0_len - 1;
+ offset = RNP_PF_NUM_REG & addr_mask;
+ val = rnp_io_rd(hw->iobar0, offset);
+
+ return val >> RNP_PFVF_SHIFT;
+}
+
+const struct rnp_mbx_api rnp_mbx_pf_ops = {
+ .read = rnp_read_mbx_pf,
+ .write = rnp_write_mbx_pf,
+ .read_posted = rnp_read_posted_mbx_pf,
+ .write_posted = rnp_write_posted_mbx_pf,
+ .check_for_msg = rnp_check_for_msg_pf,
+ .check_for_ack = rnp_check_for_ack_pf,
+};
+
+void *rnp_memzone_reserve(const char *name, unsigned int size)
+{
+#define NO_FLAGS 0
+ const struct rte_memzone *mz = NULL;
+
+ if (name) {
+ if (size) {
+ mz = rte_memzone_reserve(name, size,
+ rte_socket_id(), NO_FLAGS);
+ if (mz)
+ memset(mz->addr, 0, size);
+ } else {
+ mz = rte_memzone_lookup(name);
+ }
+ return mz ? mz->addr : NULL;
+ }
+ return NULL;
+}
+
+void rnp_init_mbx_ops_pf(struct rnp_hw *hw)
+{
+ struct rnp_eth_adapter *adapter = hw->back;
+ struct rnp_mbx_info *mbx = &hw->mbx;
+ struct mbx_req_cookie *cookie;
+ uint32_t vf_isolat_off;
+
+ mbx->size = RNP_VFMAILBOX_SIZE;
+ mbx->usec_delay = RNP_MBX_DELAY_US;
+ mbx->timeout = (RNP_MBX_TIMEOUT_SECONDS * 1000 * 1000) /
+ mbx->usec_delay;
+ if (hw->device_id == RNP_DEV_ID_N10G) {
+ vf_isolat_off = RNP_VF_ISOLATE_CTRL &
+ (hw->iobar0_len - 1);
+ rnp_io_wr(hw->iobar0, vf_isolat_off, 0);
+ }
+ mbx->sriov_st = 0;
+ hw->pf_vf_num = get_pfvfnum(hw);
+ mbx->vf_num = UINT16_MAX;
+ mbx->pf_num = (hw->pf_vf_num & RNP_PF_BIT_MASK) >> RNP_PF_SHIFT;
+ hw->function = mbx->pf_num;
+ /* Retrieving and storing the HW base address of device */
+ rnp_reg_offset_init(hw);
+ snprintf(hw->cookie_p_name, RTE_MEMZONE_NAMESIZE, "mbx_req_cookie%d_%d",
+ hw->function, adapter->eth_dev->data->port_id);
+ hw->cookie_pool = rnp_memzone_reserve(hw->cookie_p_name,
+ sizeof(struct mbx_req_cookie));
+
+ cookie = (struct mbx_req_cookie *)hw->cookie_pool;
+ if (cookie) {
+ cookie->timeout_ms = 1000;
+ cookie->magic = COOKIE_MAGIC;
+ cookie->priv_len = RNP_MAX_SHARE_MEM;
+ }
+
+ rnp_mbx_reset_pf(hw);
+}
diff --git a/drivers/net/rnp/rnp_mbx.h b/drivers/net/rnp/rnp_mbx.h
new file mode 100644
index 0000000000..87949c1726
--- /dev/null
+++ b/drivers/net/rnp/rnp_mbx.h
@@ -0,0 +1,139 @@
+#ifndef __TSRN10_MBX_H__
+#define __TSRN10_MBX_H__
+
+#define VF_NUM_MASK_TEMP (0xff0)
+#define VF_NUM_OFF (4)
+#define RNP_VF_NUM (0x75f000)
+#define RNP_VF_NB_MASK (0x3f)
+#define RNP_PF_NB_MASK (0x40)
+#define RNP_VF_ISOLATE_CTRL (0x7982fc)
+#define RNP_IS_SRIOV BIT(7)
+#define RNP_SRIOV_ST_SHIFT (24)
+#define RNP_VF_DEFAULT_PORT (0)
+
+/* Mbx Ctrl state */
+#define RNP_VFMAILBOX_SIZE (14) /* 14 32-bit data words (64-byte SHM minus two counter words) */
+#define TSRN10_VFMBX_SIZE (RNP_VFMAILBOX_SIZE)
+#define RNP_VT_MSGTYPE_ACK (0x80000000)
+
+#define RNP_VT_MSGTYPE_NACK (0x40000000)
+/* Messages below, OR'd with this, are the NACK */
+#define RNP_VT_MSGTYPE_CTS (0x20000000)
+/* Indicates that the VF is still
+ * clear to send requests
+ */
+#define RNP_VT_MSGINFO_SHIFT (16)
+
+#define RNP_VT_MSGINFO_MASK (0xFF << RNP_VT_MSGINFO_SHIFT)
+/* The mailbox memory size is 64 bytes accessed by 32-bit registers */
+#define RNP_VLVF_VIEN (0x80000000) /* filter is valid */
+#define RNP_VLVF_ENTRIES (64)
+#define RNP_VLVF_VLANID_MASK (0x00000FFF)
+/* Every VF owns 64 bytes of mailbox memory, accessed as 32-bit words */
+
+#define RNP_VF_RESET (0x01) /* VF requests reset */
+#define RNP_VF_SET_MAC_ADDR (0x02) /* VF requests PF to set MAC addr */
+#define RNP_VF_SET_MULTICAST (0x03) /* VF requests PF to set MC addr */
+#define RNP_VF_SET_VLAN (0x04) /* VF requests PF to set VLAN */
+
+#define RNP_VF_SET_LPE (0x05) /* VF requests PF to set VMOLR.LPE */
+#define RNP_VF_SET_MACVLAN (0x06) /* VF requests PF for unicast filter */
+#define RNP_VF_GET_MACVLAN (0x07) /* VF requests mac */
+#define RNP_VF_API_NEGOTIATE (0x08) /* negotiate API version */
+#define RNP_VF_GET_QUEUES (0x09) /* get queue configuration */
+#define RNP_VF_GET_LINK (0x10) /* get link status */
+
+#define RNP_VF_SET_VLAN_STRIP (0x0a) /* VF Requests PF to set VLAN STRIP */
+#define RNP_VF_REG_RD (0x0b) /* VF Read Reg */
+#define RNP_VF_GET_MAX_MTU (0x0c) /* VF Get Max Mtu */
+#define RNP_VF_SET_MTU (0x0d) /* VF Set Mtu */
+#define RNP_VF_GET_FW (0x0e) /* VF Get Firmware Version */
+
+#define RNP_PF_VFNUM_MASK GENMASK(26, 21)
+
+#define RNP_PF_SET_FCS (0x10) /* PF set fcs status */
+#define RNP_PF_SET_PAUSE (0x11) /* PF set pause status */
+#define RNP_PF_SET_FT_PADDING (0x12) /* PF set ft padding status */
+#define RNP_PF_SET_VLAN_FILTER (0x13) /* PF set ntuple status */
+#define RNP_PF_SET_VLAN (0x14)
+#define RNP_PF_SET_LINK (0x15)
+#define RNP_PF_SET_SPEED_40G BIT(8)
+#define RNP_PF_SET_SPEED_10G BIT(7)
+#define RNP_PF_SET_SPEED_1G BIT(5)
+#define RNP_PF_SET_SPEED_100M BIT(3)
+
+#define RNP_PF_SET_MTU (0x16)
+#define RNP_PF_SET_RESET (0x17)
+#define RNP_PF_LINK_UP BIT(31)
+#define RNP_PF_SPEED_MASK GENMASK(15, 0)
+
+/* Define mailbox register bits */
+#define RNP_PF_REMOVE (0x0f)
+
+/* Mailbox API ID VF Request */
+/* length of permanent address message returned from PF */
+#define RNP_VF_PERMADDR_MSG_LEN (11)
+#define RNP_VF_TX_QUEUES (1) /* number of Tx queues supported */
+#define RNP_VF_RX_QUEUES (2) /* number of Rx queues supported */
+#define RNP_VF_TRANS_VLAN (3) /* Indication of port vlan */
+#define RNP_VF_DEF_QUEUE (4) /* Default queue offset */
+/* word in permanent address message with the current multicast type */
+#define RNP_VF_VLAN_WORD (5)
+#define RNP_VF_PHY_TYPE_WORD (6)
+#define RNP_VF_FW_VERSION_WORD (7)
+#define RNP_VF_LINK_STATUS_WORD (8)
+#define RNP_VF_AXI_MHZ (9)
+#define RNP_VF_RNP_VF_FEATURE (10)
+#define RNP_VF_RNP_VF_FILTER_EN BIT(0)
+
+#define RNP_LINK_SPEED_UNKNOWN 0
+#define RNP_LINK_SPEED_10_FULL BIT(2)
+#define RNP_LINK_SPEED_100_FULL BIT(3)
+#define RNP_LINK_SPEED_1GB_FULL BIT(4)
+#define RNP_LINK_SPEED_10GB_FULL BIT(5)
+#define RNP_LINK_SPEED_40GB_FULL BIT(6)
+#define RNP_LINK_SPEED_25GB_FULL BIT(7)
+#define RNP_LINK_SPEED_50GB_FULL BIT(8)
+#define RNP_LINK_SPEED_100GB_FULL BIT(9)
+#define RNP_LINK_SPEED_10_HALF BIT(10)
+#define RNP_LINK_SPEED_100_HALF BIT(11)
+#define RNP_LINK_SPEED_1GB_HALF BIT(12)
+
+/* Mailbox API ID PF Request */
+#define RNP_VF_MC_TYPE_WORD (3)
+#define RNP_VF_DMA_VERSION_WORD (4)
+/* Get Queue write-back reference value */
+#define RNP_PF_CONTROL_PRING_MSG (0x0100) /* PF control message */
+
+#define TSRN10_MBX_VECTOR_ID (0)
+#define TSRN10_PF2VF_MBX_VEC_CTR(n) (0xa5000 + 0x4 * (n))
+
+#define RNP_VF_INIT_TIMEOUT (200) /* Number of retries to clear RSTI */
+#define RNP_VF_MBX_INIT_TIMEOUT (2000) /* number of retries on mailbox */
+
+#define MBOX_CTRL_REQ (1 << 0) /* WO */
+#define MBOX_CTRL_VF_HOLD_SHM (1 << 2) /* VF:WR, PF:RO */
+#define VF_NUM_MASK 0x3f
+#define VFNUM(num) ((num) & VF_NUM_MASK)
+
+#define PF_VF_SHM(vf) \
+ ((0xa6000) + (64 * (vf))) /* for PF1 rtl will remap 6000 to 0xb000 */
+#define PF2VF_COUNTER(vf) (PF_VF_SHM(vf) + 0)
+#define VF2PF_COUNTER(vf) (PF_VF_SHM(vf) + 4)
+#define PF_VF_SHM_DATA(vf) (PF_VF_SHM(vf) + 8)
+#define VF2PF_MBOX_CTRL(vf) ((0xa7000) + (4 * (vf)))
+
+/* Error Codes */
+#define RNP_ERR_INVALID_MAC_ADDR (-1)
+#define RNP_ERR_MBX (-100)
+
+#define RNP_MBX_DELAY_US (100) /* Delay us for Retry */
+/* Max Retry Time */
+#define RNP_MBX_TIMEOUT_SECONDS (2) /* Max Retry Time 2s */
+#define RNP_ARRAY_OPCODE_OFFSET (0)
+#define RNP_ARRAY_CTRL_OFFSET (1)
+
+void rnp_init_mbx_ops_pf(struct rnp_hw *hw);
+extern const struct rnp_mbx_api rnp_mbx_pf_ops;
+void *rnp_memzone_reserve(const char *name, unsigned int size);
+#endif
diff --git a/drivers/net/rnp/rnp_mbx_fw.c b/drivers/net/rnp/rnp_mbx_fw.c
new file mode 100644
index 0000000000..6fe008351b
--- /dev/null
+++ b/drivers/net/rnp/rnp_mbx_fw.c
@@ -0,0 +1,271 @@
+#include <stdio.h>
+
+#include <rte_version.h>
+#include <ethdev_pci.h>
+#include <rte_malloc.h>
+#include <rte_alarm.h>
+
+#include "rnp.h"
+#include "rnp_mbx.h"
+#include "rnp_mbx_fw.h"
+#include "rnp_logs.h"
+
+static int
+rnp_fw_send_cmd_wait(struct rte_eth_dev *dev, struct mbx_fw_cmd_req *req,
+ struct mbx_fw_cmd_reply *reply)
+{
+ const struct rnp_mbx_api *ops = RNP_DEV_TO_MBX_OPS(dev);
+ struct rnp_hw *hw = RNP_DEV_TO_HW(dev);
+ int err;
+
+ rte_spinlock_lock(&hw->fw_lock);
+
+ err = ops->write_posted(dev, (u32 *)req,
+ (req->datalen + MBX_REQ_HDR_LEN) / 4, MBX_FW);
+ if (err) {
+ RNP_PMD_LOG(ERR, "%s: write_posted failed! err:0x%x\n",
+ __func__, err);
+ rte_spinlock_unlock(&hw->fw_lock);
+ return err;
+ }
+
+ err = ops->read_posted(dev, (u32 *)reply, sizeof(*reply) / 4, MBX_FW);
+ rte_spinlock_unlock(&hw->fw_lock);
+ if (err) {
+ RNP_PMD_LOG(ERR,
+ "%s: read_posted failed! err:0x%x. "
+ "req-op:0x%x\n",
+ __func__,
+ err,
+ req->opcode);
+ goto err_quit;
+ }
+
+ if (reply->error_code) {
+ RNP_PMD_LOG(ERR,
+ "%s: reply err:0x%x. req-op:0x%x\n",
+ __func__,
+ reply->error_code,
+ req->opcode);
+ err = -reply->error_code;
+ goto err_quit;
+ }
+
+ return 0;
+err_quit:
+ RNP_PMD_LOG(ERR,
+ "%s:PF[%d]: req:%08x_%08x_%08x_%08x "
+ "reply:%08x_%08x_%08x_%08x\n",
+ __func__,
+ hw->function,
+ ((int *)req)[0],
+ ((int *)req)[1],
+ ((int *)req)[2],
+ ((int *)req)[3],
+ ((int *)reply)[0],
+ ((int *)reply)[1],
+ ((int *)reply)[2],
+ ((int *)reply)[3]);
+
+ return err;
+}
+
+static int rnp_mbx_fw_post_req(struct rte_eth_dev *dev,
+ struct mbx_fw_cmd_req *req,
+ struct mbx_req_cookie *cookie)
+{
+ const struct rnp_mbx_api *ops = RNP_DEV_TO_MBX_OPS(dev);
+ struct rnp_hw *hw = RNP_DEV_TO_HW(dev);
+ int err = 0;
+ int timeout_cnt;
+#define WAIT_MS 10
+
+ cookie->done = 0;
+
+ rte_spinlock_lock(&hw->fw_lock);
+
+ /* down_interruptible(&pf_cpu_lock); */
+ err = ops->write(hw, (u32 *)req,
+ (req->datalen + MBX_REQ_HDR_LEN) / 4, MBX_FW);
+ if (err) {
+ RNP_PMD_LOG(ERR, "rnp_write_mbx failed!\n");
+ goto quit;
+ }
+
+ timeout_cnt = cookie->timeout_ms / WAIT_MS;
+ while (timeout_cnt > 0) {
+ rte_delay_ms(WAIT_MS);
+ timeout_cnt--;
+ if (cookie->done)
+ break;
+ }
+
+quit:
+ rte_spinlock_unlock(&hw->fw_lock);
+ return err;
+}
+
+static int rnp_fw_get_capablity(struct rte_eth_dev *dev,
+ struct phy_abilities *abil)
+{
+ struct mbx_fw_cmd_reply reply;
+ struct mbx_fw_cmd_req req;
+ int err;
+
+ memset(&req, 0, sizeof(req));
+ memset(&reply, 0, sizeof(reply));
+
+ build_phy_abalities_req(&req, &req);
+
+ err = rnp_fw_send_cmd_wait(dev, &req, &reply);
+ if (err)
+ return err;
+
+ memcpy(abil, &reply.phy_abilities, sizeof(*abil));
+
+ return 0;
+}
+
+#define RNP_MBX_API_MAX_RETRY (10)
+int rnp_mbx_get_capability(struct rte_eth_dev *dev,
+ int *lane_mask,
+ int *nic_mode)
+{
+ struct rnp_hw *hw = RNP_DEV_TO_HW(dev);
+ struct phy_abilities ablity;
+ uint16_t temp_lmask;
+ uint16_t lane_bit = 0;
+ uint16_t retry = 0;
+ int lane_cnt = 0;
+ uint8_t lane_idx;
+ int err = -EIO;
+ uint8_t idx;
+
+ memset(&ablity, 0, sizeof(ablity));
+
+ /* enable CM3CPU to PF MBX IRQ */
+ do {
+ err = rnp_fw_get_capablity(dev, &ablity);
+ if (retry > RNP_MBX_API_MAX_RETRY)
+ break;
+ retry++;
+ } while (err);
+ if (!err) {
+ hw->lane_mask = ablity.lane_mask;
+ hw->nic_mode = ablity.nic_mode;
+ hw->pfvfnum = ablity.pfnum;
+ hw->fw_version = ablity.fw_version;
+ hw->axi_mhz = ablity.axi_mhz;
+ hw->fw_uid = ablity.fw_uid;
+ if (ablity.phy_type == PHY_TYPE_SGMII) {
+ hw->is_sgmii = 1;
+ hw->sgmii_phy_id = ablity.phy_id;
+ }
+
+ if (ablity.ext_ablity != 0xffffffff && ablity.e.valid) {
+ hw->ncsi_en = (ablity.e.ncsi_en == 1);
+ hw->ncsi_rar_entries = 1;
+ }
+
+ if (hw->nic_mode == RNP_SINGLE_10G &&
+ hw->fw_version >= 0x00050201 &&
+ ablity.speed == RTE_ETH_SPEED_NUM_10G) {
+ hw->force_speed_stat = FORCE_SPEED_STAT_DISABLED;
+ hw->force_10g_1g_speed_ablity = 1;
+ }
+
+ if (lane_mask)
+ *lane_mask = hw->lane_mask;
+ if (nic_mode)
+ *nic_mode = hw->nic_mode;
+
+ lane_cnt = __builtin_popcount(hw->lane_mask);
+ temp_lmask = hw->lane_mask;
+ for (idx = 0; idx < lane_cnt; idx++) {
+ hw->phy_port_ids[idx] = ablity.port_ids[idx];
+ lane_bit = ffs(temp_lmask) - 1;
+ lane_idx = ablity.port_ids[idx] % lane_cnt;
+ hw->lane_of_port[lane_idx] = lane_bit;
+ temp_lmask &= ~BIT(lane_bit);
+ }
+ hw->max_port_num = lane_cnt;
+ }
+
+ RNP_PMD_LOG(INFO,
+ "%s: nic-mode:%d lane_cnt:%d lane_mask:0x%x "
+ "pfvfnum:0x%x, fw_version:0x%08x, ports:%d-%d-%d-%d ncsi:en:%d\n",
+ __func__,
+ hw->nic_mode,
+ lane_cnt,
+ hw->lane_mask,
+ hw->pfvfnum,
+ ablity.fw_version,
+ ablity.port_ids[0],
+ ablity.port_ids[1],
+ ablity.port_ids[2],
+ ablity.port_ids[3],
+ hw->ncsi_en);
+
+ if (lane_cnt <= 0 || lane_cnt > 4)
+ return -EIO;
+
+ return err;
+}
+
+int rnp_mbx_link_event_enable(struct rte_eth_dev *dev, int enable)
+{
+ const struct rnp_mbx_api *ops = RNP_DEV_TO_MBX_OPS(dev);
+ struct rnp_hw *hw = RNP_DEV_TO_HW(dev);
+ struct mbx_fw_cmd_reply reply;
+ struct mbx_fw_cmd_req req;
+ int err, v;
+
+ memset(&req, 0, sizeof(req));
+ memset(&reply, 0, sizeof(reply));
+
+ rte_spinlock_lock(&hw->fw_lock);
+ if (enable) {
+ v = rnp_rd_reg(hw->link_sync);
+ v &= ~RNP_FIRMWARE_SYNC_MASK;
+ v |= RNP_FIRMWARE_SYNC_MAGIC;
+ rnp_wr_reg(hw->link_sync, v);
+ } else {
+ rnp_wr_reg(hw->link_sync, 0);
+ }
+ rte_spinlock_unlock(&hw->fw_lock);
+
+ build_link_set_event_mask(&req, BIT(EVT_LINK_UP),
+ (enable & 1) << EVT_LINK_UP, &req);
+
+ rte_spinlock_lock(&hw->fw_lock);
+ err = ops->write_posted(dev, (u32 *)&req,
+ (req.datalen + MBX_REQ_HDR_LEN) / 4, MBX_FW);
+ rte_spinlock_unlock(&hw->fw_lock);
+
+ rte_delay_ms(200);
+
+ return err;
+}
+
+int rnp_mbx_fw_reset_phy(struct rte_eth_dev *dev)
+{
+ struct rnp_hw *hw = RNP_DEV_TO_HW(dev);
+ struct mbx_fw_cmd_reply reply;
+ struct mbx_req_cookie *cookie;
+ struct mbx_fw_cmd_req req;
+
+ memset(&req, 0, sizeof(req));
+ memset(&reply, 0, sizeof(reply));
+
+ if (hw->mbx.irq_enabled) {
+ cookie = rnp_memzone_reserve(hw->cookie_p_name, 0);
+ if (!cookie)
+ return -ENOMEM;
+ memset(cookie->priv, 0, cookie->priv_len);
+ build_reset_phy_req(&req, cookie);
+ return rnp_mbx_fw_post_req(dev, &req, cookie);
+ }
+ build_reset_phy_req(&req, &req);
+
+ return rnp_fw_send_cmd_wait(dev, &req, &reply);
+}
diff --git a/drivers/net/rnp/rnp_mbx_fw.h b/drivers/net/rnp/rnp_mbx_fw.h
new file mode 100644
index 0000000000..439090b5a3
--- /dev/null
+++ b/drivers/net/rnp/rnp_mbx_fw.h
@@ -0,0 +1,22 @@
+#ifndef __RNP_MBX_FW_H__
+#define __RNP_MBX_FW_H__
+
+struct mbx_fw_cmd_reply;
+typedef void (*cookie_cb)(struct mbx_fw_cmd_reply *reply, void *priv);
+#define RNP_MAX_SHARE_MEM (8 * 8)
+struct mbx_req_cookie {
+ int magic;
+#define COOKIE_MAGIC 0xCE
+ cookie_cb cb;
+ int timeout_ms;
+ int errcode;
+
+ /* wait_queue_head_t wait; */
+ volatile int done;
+ int priv_len;
+ char priv[RNP_MAX_SHARE_MEM];
+};
+struct mbx_fw_cmd_reply {
+} __rte_cache_aligned;
+
+#endif /* __RNP_MBX_FW_H__*/
--
2.27.0
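For orientation, the request/reply primitive this patch provides can be
driven roughly as follows. This is a sketch only: the wrapper name is made
up, while the ops table, lock and MBX_FW id are the ones added above.

static int rnp_example_fw_cmd(struct rte_eth_dev *dev,
			      u32 *req, u16 req_words,
			      u32 *rep, u16 rep_words)
{
	const struct rnp_mbx_api *ops = RNP_DEV_TO_MBX_OPS(dev);
	struct rnp_hw *hw = RNP_DEV_TO_HW(dev);
	int err;

	rte_spinlock_lock(&hw->fw_lock);
	/* Post the request words to the PF<->firmware shared memory. */
	err = ops->write_posted(dev, req, req_words, MBX_FW);
	if (!err)
		/* Poll the counters for the reply and copy it back. */
		err = ops->read_posted(dev, rep, rep_words, MBX_FW);
	rte_spinlock_unlock(&hw->fw_lock);

	return err;
}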
* [PATCH v5 5/8] net/rnp add reset code for Chip Init process
2023-08-07 2:16 [PATCH v4 0/8] [v4]drivers/net Add Support mucse N10 Pmd Driver Wenbo Cao
` (3 preceding siblings ...)
2023-08-07 2:16 ` [PATCH v5 4/8] net/rnp: add mbx basic api feature Wenbo Cao
@ 2023-08-07 2:16 ` Wenbo Cao
2023-08-07 2:16 ` [PATCH v5 6/8] net/rnp add port info resource init Wenbo Cao
` (2 subsequent siblings)
7 siblings, 0 replies; 12+ messages in thread
From: Wenbo Cao @ 2023-08-07 2:16 UTC (permalink / raw)
To: Wenbo Cao; +Cc: dev, ferruh.yigit, andrew.rybchenko, yaojun
We must get the NIC shape information from the firmware before resetting,
so the related code first queries the firmware capabilities and then
resets the chip, as sketched below.
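A minimal sketch of that ordering, using the functions added in this
series; the wrapper below is illustrative only, not the actual probe path:

static int rnp_example_chip_bringup(struct rte_eth_dev *dev, struct rnp_hw *hw)
{
	int lane_mask = 0, nic_mode = 0, err;

	/* 1. Ask firmware for the NIC shape (lanes, work mode, speed). */
	err = rnp_mbx_get_capability(dev, &lane_mask, &nic_mode);
	if (err < 0 || !lane_mask)
		return -EIO;

	/* 2. Only then reset the hardware; rnp_reset_hw() dispatches to
	 *    rnp_reset_hw_pf(), which also asks firmware to reset the PHY.
	 */
	return rnp_reset_hw(dev, hw);
}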
Signed-off-by: Wenbo Cao <caowenbo@mucse.com>
---
drivers/net/rnp/base/rnp_hw.h | 56 +++++++++++-
drivers/net/rnp/meson.build | 3 +
drivers/net/rnp/rnp.h | 27 ++++++
drivers/net/rnp/rnp_ethdev.c | 93 ++++++++++++++++++-
drivers/net/rnp/rnp_mbx_fw.h | 163 +++++++++++++++++++++++++++++++++-
5 files changed, 339 insertions(+), 3 deletions(-)
diff --git a/drivers/net/rnp/base/rnp_hw.h b/drivers/net/rnp/base/rnp_hw.h
index 1db966cf21..57b7dc75a0 100644
--- a/drivers/net/rnp/base/rnp_hw.h
+++ b/drivers/net/rnp/base/rnp_hw.h
@@ -8,6 +8,9 @@
#include <ethdev_driver.h>
#include "rnp_osdep.h"
+#include "rnp_dma_regs.h"
+#include "rnp_eth_regs.h"
+#include "rnp_cfg.h"
static inline unsigned int rnp_rd_reg(volatile void *addr)
{
@@ -29,7 +32,18 @@ static inline void rnp_wr_reg(volatile void *reg, int val)
rnp_rd_reg((uint8_t *)(_base) + (_off))
#define rnp_io_wr(_base, _off, _val) \
rnp_wr_reg((uint8_t *)(_base) + (_off), (_val))
-
+#define rnp_eth_rd(_hw, _off) \
+ rnp_rd_reg((uint8_t *)((_hw)->eth_base) + (_off))
+#define rnp_eth_wr(_hw, _off, _val) \
+ rnp_wr_reg((uint8_t *)((_hw)->eth_base) + (_off), (_val))
+#define rnp_dma_rd(_hw, _off) \
+ rnp_rd_reg((uint8_t *)((_hw)->dma_base) + (_off))
+#define rnp_dma_wr(_hw, _off, _val) \
+ rnp_wr_reg((uint8_t *)((_hw)->dma_base) + (_off), (_val))
+#define rnp_top_rd(_hw, _off) \
+ rnp_rd_reg((uint8_t *)((_hw)->comm_reg_base) + (_off))
+#define rnp_top_wr(_hw, _off, _val) \
+ rnp_wr_reg((uint8_t *)((_hw)->comm_reg_base) + (_off), (_val))
struct rnp_hw;
/* Mbx Operate info */
enum MBX_ID {
@@ -98,6 +112,17 @@ struct rnp_mbx_info {
rte_atomic16_t state;
} __rte_cache_aligned;
+struct rnp_mac_api {
+ int32_t (*init_hw)(struct rnp_hw *hw);
+ int32_t (*reset_hw)(struct rnp_hw *hw);
+};
+
+struct rnp_mac_info {
+ uint8_t assign_addr[RTE_ETHER_ADDR_LEN];
+ uint8_t set_addr[RTE_ETHER_ADDR_LEN];
+ struct rnp_mac_api ops;
+} __rte_cache_aligned;
+
struct rnp_eth_adapter;
#define RNP_MAX_HW_PORT_PERR_PF (4)
struct rnp_hw {
@@ -111,8 +136,10 @@ struct rnp_hw {
void *eth_base;
void *veb_base;
void *mac_base[RNP_MAX_HW_PORT_PERR_PF];
+ void *comm_reg_base;
void *msix_base;
/* === dma == */
+ void *dev_version;
void *dma_axi_en;
void *dma_axi_st;
@@ -120,10 +147,37 @@ struct rnp_hw {
uint16_t vendor_id;
uint16_t function;
uint16_t pf_vf_num;
+ int pfvfnum;
uint16_t max_vfs;
+
+ bool ncsi_en;
+ uint8_t ncsi_rar_entries;
+
+ int sgmii_phy_id;
+ int is_sgmii;
+ u16 phy_type;
+ uint8_t force_10g_1g_speed_ablity;
+ uint8_t force_speed_stat;
+#define FORCE_SPEED_STAT_DISABLED (0)
+#define FORCE_SPEED_STAT_1G (1)
+#define FORCE_SPEED_STAT_10G (2)
+ uint32_t speed;
+ unsigned int axi_mhz;
+
+ int fw_version; /* Primary FW Version */
+ uint32_t fw_uid; /* Subclass Fw Version */
+
+ int nic_mode;
+ unsigned char lane_mask;
+ int lane_of_port[4];
+ char phy_port_ids[4]; /* port id: for lane0~3: value: 0 ~ 7 */
+ uint8_t max_port_num; /* Max Port Num This PF Have */
+
void *cookie_pool;
char cookie_p_name[RTE_MEMZONE_NAMESIZE];
+ struct rnp_mac_info mac;
struct rnp_mbx_info mbx;
+ rte_spinlock_t fw_lock;
} __rte_cache_aligned;
#endif /* __RNP_H__*/
diff --git a/drivers/net/rnp/meson.build b/drivers/net/rnp/meson.build
index 60bba486fc..855c894032 100644
--- a/drivers/net/rnp/meson.build
+++ b/drivers/net/rnp/meson.build
@@ -9,5 +9,8 @@ endif
sources = files(
'rnp_ethdev.c',
'rnp_mbx.c',
+ 'rnp_mbx_fw.c',
+ 'base/rnp_api.c',
)
+
includes += include_directories('base')
diff --git a/drivers/net/rnp/rnp.h b/drivers/net/rnp/rnp.h
index 086667cec1..f6c9231eb1 100644
--- a/drivers/net/rnp/rnp.h
+++ b/drivers/net/rnp/rnp.h
@@ -13,6 +13,20 @@
#define RNP_CFG_BAR (4)
#define RNP_PF_INFO_BAR (0)
+enum rnp_resource_share_m {
+ RNP_SHARE_CORPORATE = 0,
+ RNP_SHARE_INDEPEND,
+};
+/*
+ * Structure to store private data for each driver instance (for each port).
+ */
+enum rnp_work_mode {
+ RNP_SINGLE_40G = 0,
+ RNP_SINGLE_10G = 1,
+ RNP_DUAL_10G = 2,
+ RNP_QUAD_10G = 3,
+};
+
struct rnp_eth_port {
struct rnp_eth_adapter *adapt;
struct rnp_hw *hw;
@@ -21,9 +35,12 @@ struct rnp_eth_port {
struct rnp_share_ops {
const struct rnp_mbx_api *mbx_api;
+ const struct rnp_mac_api *mac_api;
} __rte_cache_aligned;
struct rnp_eth_adapter {
+ enum rnp_work_mode mode;
+ enum rnp_resource_share_m s_mode; /* Port Resource Share Policy */
struct rnp_hw hw;
uint16_t max_vfs;
struct rte_pci_device *pdev;
@@ -31,7 +48,9 @@ struct rnp_eth_adapter {
struct rnp_eth_port *ports[RNP_MAX_PORT_OF_PF];
struct rnp_share_ops *share_priv;
+ int max_link_speed;
uint8_t num_ports; /* Cur Pf Has physical Port Num */
+ uint8_t lane_mask;
} __rte_cache_aligned;
#define RNP_DEV_TO_PORT(eth_dev) \
@@ -40,9 +59,14 @@ struct rnp_eth_adapter {
((struct rnp_eth_adapter *)(RNP_DEV_TO_PORT(eth_dev)->adapt))
#define RNP_DEV_TO_HW(eth_dev) \
(&((struct rnp_eth_adapter *)(RNP_DEV_TO_PORT((eth_dev))->adapt))->hw)
+#define RNP_HW_TO_ADAPTER(hw) \
+ ((struct rnp_eth_adapter *)((hw)->back))
#define RNP_DEV_PP_PRIV_TO_MBX_OPS(dev) \
(((struct rnp_share_ops *)(dev)->process_private)->mbx_api)
#define RNP_DEV_TO_MBX_OPS(dev) RNP_DEV_PP_PRIV_TO_MBX_OPS(dev)
+#define RNP_DEV_PP_PRIV_TO_MAC_OPS(dev) \
+ (((struct rnp_share_ops *)(dev)->process_private)->mac_api)
+#define RNP_DEV_TO_MAC_OPS(dev) RNP_DEV_PP_PRIV_TO_MAC_OPS(dev)
static inline void rnp_reg_offset_init(struct rnp_hw *hw)
{
@@ -56,6 +80,7 @@ static inline void rnp_reg_offset_init(struct rnp_hw *hw)
hw->msix_base = (void *)((uint8_t *)hw->iobar4 + 0xa4000);
}
/* === dma status/config====== */
+ hw->dev_version = (void *)((uint8_t *)hw->iobar4 + 0x0000);
hw->link_sync = (void *)((uint8_t *)hw->iobar4 + 0x000c);
hw->dma_axi_en = (void *)((uint8_t *)hw->iobar4 + 0x0010);
hw->dma_axi_st = (void *)((uint8_t *)hw->iobar4 + 0x0014);
@@ -69,5 +94,7 @@ static inline void rnp_reg_offset_init(struct rnp_hw *hw)
/* mac */
for (i = 0; i < RNP_MAX_HW_PORT_PERR_PF; i++)
hw->mac_base[i] = (void *)((uint8_t *)hw->iobar4 + 0x60000 + 0x10000 * i);
+ /* === top reg === */
+ hw->comm_reg_base = (void *)((uint8_t *)hw->iobar4 + 0x30000);
}
#endif /* __RNP_H__ */
diff --git a/drivers/net/rnp/rnp_ethdev.c b/drivers/net/rnp/rnp_ethdev.c
index 8a6635951b..13d03a23c5 100644
--- a/drivers/net/rnp/rnp_ethdev.c
+++ b/drivers/net/rnp/rnp_ethdev.c
@@ -8,7 +8,9 @@
#include <ethdev_driver.h>
#include "rnp.h"
+#include "rnp_api.h"
#include "rnp_mbx.h"
+#include "rnp_mbx_fw.h"
#include "rnp_logs.h"
static int
@@ -92,7 +94,30 @@ rnp_alloc_eth_port(struct rte_pci_device *master_pci, char *name)
static void rnp_get_nic_attr(struct rnp_eth_adapter *adapter)
{
- RTE_SET_USED(adapter);
+ struct rnp_hw *hw = &adapter->hw;
+ int lane_mask = 0, err, mode = 0;
+
+ rnp_mbx_link_event_enable(adapter->eth_dev, false);
+
+ err = rnp_mbx_get_capability(adapter->eth_dev, &lane_mask, &mode);
+ if (err < 0 || !lane_mask) {
+ PMD_DRV_LOG(ERR, "%s: mbx_get_capability error! errcode=%d\n",
+ __func__, err);
+ return;
+ }
+
+ adapter->num_ports = __builtin_popcount(lane_mask);
+ adapter->max_link_speed = hw->speed;
+ adapter->lane_mask = lane_mask;
+ adapter->mode = hw->nic_mode;
+
+ PMD_DRV_LOG(INFO, "max link speed:%d lane_mask:0x%x nic-mode:0x%x\n",
+ (int)adapter->max_link_speed,
+ adapter->lane_mask, adapter->mode);
+ if (adapter->num_ports && adapter->num_ports == 1)
+ adapter->s_mode = RNP_SHARE_CORPORATE;
+ else
+ adapter->s_mode = RNP_SHARE_INDEPEND;
}
static int
@@ -125,6 +150,72 @@ rnp_process_resource_init(struct rte_eth_dev *eth_dev)
return 0;
}
+static int32_t rnp_init_hw_pf(struct rnp_hw *hw)
+{
+ struct rnp_eth_adapter *adapter = RNP_HW_TO_ADAPTER(hw);
+ uint32_t version;
+ uint32_t reg;
+
+ PMD_INIT_FUNC_TRACE();
+ version = rnp_rd_reg(hw->dev_version);
+ PMD_DRV_LOG(INFO, "NIC HW Version:0x%.2x\n", version);
+
+ /* Disable Rx/Tx Dma */
+ rnp_wr_reg(hw->dma_axi_en, false);
+ /* Check Dma Chanle Status */
+ while (rnp_rd_reg(hw->dma_axi_st) == 0)
+ ;
+
+ /* Reset all NIC hardware */
+ if (rnp_reset_hw(adapter->eth_dev, hw))
+ return -EPERM;
+
+ /* Rx Proto Offload No-BYPASS */
+ rnp_eth_wr(hw, RNP_ETH_ENGINE_BYPASS, false);
+ /* Enable Flow Filter Engine */
+ rnp_eth_wr(hw, RNP_HOST_FILTER_EN, true);
+ /* Enable VXLAN Parse */
+ rnp_eth_wr(hw, RNP_EN_TUNNEL_VXLAN_PARSE, true);
+ /* Enable REDIR action */
+ rnp_eth_wr(hw, RNP_REDIR_CTRL, true);
+
+ /* Setup Scatter DMA Mem Size */
+ reg = ((RTE_ETHER_MAX_LEN / 16) << RNP_DMA_SCATTER_MEM_SHIFT);
+ rnp_dma_wr(hw, RNP_DMA_CTRL, reg);
+#ifdef PHYTIUM_SUPPORT
+#define RNP_DMA_PADDING (1 << 8)
+ reg = rnp_dma_rd(hw, RNP_DMA_CTRL);
+ reg |= RNP_DMA_PADDING;
+ rnp_dma_wr(hw, RNP_DMA_CTRL, reg);
+#endif
+ /* Enable Rx/Tx Dma */
+ rnp_wr_reg(hw->dma_axi_en, 0b1111);
+
+ rnp_top_wr(hw, RNP_TX_QINQ_WORKAROUND, 1);
+
+ return 0;
+}
+
+static int32_t rnp_reset_hw_pf(struct rnp_hw *hw)
+{
+ struct rnp_eth_adapter *adapter = hw->back;
+
+ rnp_top_wr(hw, RNP_NIC_RESET, 0);
+ rte_wmb();
+ rnp_top_wr(hw, RNP_NIC_RESET, 1);
+
+ rnp_mbx_fw_reset_phy(adapter->eth_dev);
+
+ PMD_DRV_LOG(INFO, "PF[%d] reset nic finish\n",
+ hw->function);
+ return 0;
+}
+
+const struct rnp_mac_api rnp_mac_ops = {
+ .reset_hw = rnp_reset_hw_pf,
+ .init_hw = rnp_init_hw_pf
+};
+
static void
rnp_common_ops_init(struct rnp_eth_adapter *adapter)
{
diff --git a/drivers/net/rnp/rnp_mbx_fw.h b/drivers/net/rnp/rnp_mbx_fw.h
index 439090b5a3..44ffe56908 100644
--- a/drivers/net/rnp/rnp_mbx_fw.h
+++ b/drivers/net/rnp/rnp_mbx_fw.h
@@ -16,7 +16,168 @@ struct mbx_req_cookie {
int priv_len;
char priv[RNP_MAX_SHARE_MEM];
};
+enum GENERIC_CMD {
+ /* link configuration admin commands */
+ GET_PHY_ABALITY = 0x0601,
+ RESET_PHY = 0x0603,
+ SET_EVENT_MASK = 0x0613,
+};
+
+enum link_event_mask {
+ EVT_LINK_UP = 1,
+ EVT_NO_MEDIA = 2,
+ EVT_LINK_FAULT = 3,
+ EVT_PHY_TEMP_ALARM = 4,
+ EVT_EXCESSIVE_ERRORS = 5,
+ EVT_SIGNAL_DETECT = 6,
+ EVT_AUTO_NEGOTIATION_DONE = 7,
+ EVT_MODULE_QUALIFICATION_FAILD = 8,
+ EVT_PORT_TX_SUSPEND = 9,
+};
+
+enum pma_type {
+ PHY_TYPE_NONE = 0,
+ PHY_TYPE_1G_BASE_KX,
+ PHY_TYPE_SGMII,
+ PHY_TYPE_10G_BASE_KR,
+ PHY_TYPE_25G_BASE_KR,
+ PHY_TYPE_40G_BASE_KR4,
+ PHY_TYPE_10G_BASE_SR,
+ PHY_TYPE_40G_BASE_SR4,
+ PHY_TYPE_40G_BASE_CR4,
+ PHY_TYPE_40G_BASE_LR4,
+ PHY_TYPE_10G_BASE_LR,
+ PHY_TYPE_10G_BASE_ER,
+};
+
+struct phy_abilities {
+ unsigned char link_stat;
+ unsigned char lane_mask;
+
+ int speed;
+ short phy_type;
+ short nic_mode;
+ short pfnum;
+ unsigned int fw_version;
+ unsigned int axi_mhz;
+ uint8_t port_ids[4];
+ uint32_t fw_uid;
+ uint32_t phy_id;
+
+ int wol_status;
+
+ union {
+ unsigned int ext_ablity;
+ struct {
+ unsigned int valid : 1;
+ unsigned int wol_en : 1;
+ unsigned int pci_preset_runtime_en : 1;
+ unsigned int smbus_en : 1;
+ unsigned int ncsi_en : 1;
+ unsigned int rpu_en : 1;
+ unsigned int v2 : 1;
+ unsigned int pxe_en : 1;
+ unsigned int mctp_en : 1;
+ } e;
+ };
+} __rte_packed __rte_aligned(4);
+
+/* firmware -> driver */
struct mbx_fw_cmd_reply {
-} __rte_cache_aligned;
+ /* fw must set: DD, CMP, Error(if error), copy value */
+ unsigned short flags;
+ /* from command: LB,RD,VFC,BUF,SI,EI,FE */
+ unsigned short opcode; /* 2-3: copy from req */
+ unsigned short error_code; /* 4-5: 0 if no error */
+ unsigned short datalen; /* 6-7: */
+ union {
+ struct {
+ unsigned int cookie_lo; /* 8-11: */
+ unsigned int cookie_hi; /* 12-15: */
+ };
+ void *cookie;
+ };
+ /* ===== data ==== [16-64] */
+ union {
+ struct phy_abilities phy_abilities;
+ };
+} __rte_packed __rte_aligned(4);
+
+#define MBX_REQ_HDR_LEN 24
+/* driver -> firmware */
+struct mbx_fw_cmd_req {
+ unsigned short flags; /* 0-1 */
+ unsigned short opcode; /* 2-3 enum LINK_ADM_CMD */
+ unsigned short datalen; /* 4-5 */
+ unsigned short ret_value; /* 6-7 */
+ union {
+ struct {
+ unsigned int cookie_lo; /* 8-11 */
+ unsigned int cookie_hi; /* 12-15 */
+ };
+ void *cookie;
+ };
+ unsigned int reply_lo; /* 16-19 5dw */
+ unsigned int reply_hi; /* 20-23 */
+ /* === data === [24-64] 7dw */
+ union {
+ struct {
+ int requester;
+#define REQUEST_BY_DPDK 0xa1
+#define REQUEST_BY_DRV 0xa2
+#define REQUEST_BY_PXE 0xa3
+ } get_phy_ablity;
+
+ struct {
+ unsigned short enable_stat;
+ unsigned short event_mask; /* enum link_event_mask */
+ } stat_event_mask;
+ };
+} __rte_packed __rte_aligned(4);
+
+static inline void
+build_phy_abalities_req(struct mbx_fw_cmd_req *req, void *cookie)
+{
+ req->flags = 0;
+ req->opcode = GET_PHY_ABALITY;
+ req->datalen = 0;
+ req->reply_lo = 0;
+ req->reply_hi = 0;
+ req->cookie = cookie;
+}
+
+/* enum link_event_mask or */
+static inline void
+build_link_set_event_mask(struct mbx_fw_cmd_req *req,
+ unsigned short event_mask,
+ unsigned short enable,
+ void *cookie)
+{
+ req->flags = 0;
+ req->opcode = SET_EVENT_MASK;
+ req->datalen = sizeof(req->stat_event_mask);
+ req->cookie = cookie;
+ req->reply_lo = 0;
+ req->reply_hi = 0;
+ req->stat_event_mask.event_mask = event_mask;
+ req->stat_event_mask.enable_stat = enable;
+}
+
+static inline void
+build_reset_phy_req(struct mbx_fw_cmd_req *req,
+ void *cookie)
+{
+ req->flags = 0;
+ req->opcode = RESET_PHY;
+ req->datalen = 0;
+ req->reply_lo = 0;
+ req->reply_hi = 0;
+ req->cookie = cookie;
+}
+int rnp_mbx_get_capability(struct rte_eth_dev *dev,
+ int *lane_mask,
+ int *nic_mode);
+int rnp_mbx_link_event_enable(struct rte_eth_dev *dev, int enable);
+int rnp_mbx_fw_reset_phy(struct rte_eth_dev *dev);
#endif /* __RNP_MBX_FW_H__*/
--
2.27.0
* [PATCH v5 6/8] net/rnp add port info resource init
2023-08-07 2:16 [PATCH v4 0/8] [v4]drivers/net Add Support mucse N10 Pmd Driver Wenbo Cao
` (4 preceding siblings ...)
2023-08-07 2:16 ` [PATCH v5 5/8] net/rnp add reset code for Chip Init process Wenbo Cao
@ 2023-08-07 2:16 ` Wenbo Cao
2023-08-07 2:16 ` [PATCH v5 7/8] net/rnp add devargs runtime parsing functions Wenbo Cao
2023-08-07 2:16 ` [PATCH v5 8/8] net/rnp handle device interrupts Wenbo Cao
7 siblings, 0 replies; 12+ messages in thread
From: Wenbo Cao @ 2023-08-07 2:16 UTC (permalink / raw)
To: Wenbo Cao; +Cc: dev, ferruh.yigit, andrew.rybchenko, yaojun
Add an API for getting MAC info from firmware, and port resource init
code for the different NIC shapes; a condensed sketch of the intended
per-port flow follows below.
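In short, the intended per-port flow is roughly the following (a condensed
sketch of what rnp_init_port_resource() does in this patch, not the literal
code; the wrapper name is illustrative):

static int rnp_example_port_mac_setup(struct rte_eth_dev *dev)
{
	struct rnp_eth_port *port = RNP_DEV_TO_PORT(dev);
	int err;

	/* Ask firmware for the permanent MAC of this port's lane. */
	err = rnp_get_mac_addr(dev, port->mac_addr);
	if (err)
		return err;

	/* Program it into the receive address filter table through the
	 * per-PF ops (rnp_set_rafb() writes RAL/RAH at
	 * hw_idx = nr_port * max_mac_addrs + index).
	 */
	return rnp_set_default_mac(dev, port->mac_addr);
}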
Signed-off-by: Wenbo Cao <caowenbo@mucse.com>
---
drivers/net/rnp/base/rnp_api.c | 48 +++++++
drivers/net/rnp/base/rnp_api.h | 10 ++
drivers/net/rnp/base/rnp_hw.h | 18 +++
drivers/net/rnp/meson.build | 1 +
drivers/net/rnp/rnp.h | 88 +++++++++++++
drivers/net/rnp/rnp_ethdev.c | 224 +++++++++++++++++++++++++++++++--
drivers/net/rnp/rnp_mbx_fw.c | 112 +++++++++++++++++
drivers/net/rnp/rnp_mbx_fw.h | 115 +++++++++++++++++
drivers/net/rnp/rnp_rxtx.c | 83 ++++++++++++
drivers/net/rnp/rnp_rxtx.h | 14 +++
10 files changed, 706 insertions(+), 7 deletions(-)
create mode 100644 drivers/net/rnp/rnp_rxtx.c
create mode 100644 drivers/net/rnp/rnp_rxtx.h
diff --git a/drivers/net/rnp/base/rnp_api.c b/drivers/net/rnp/base/rnp_api.c
index 550da6217d..cf74769fb6 100644
--- a/drivers/net/rnp/base/rnp_api.c
+++ b/drivers/net/rnp/base/rnp_api.c
@@ -21,3 +21,51 @@ rnp_reset_hw(struct rte_eth_dev *dev, struct rnp_hw *hw)
return ops->reset_hw(hw);
return -EOPNOTSUPP;
}
+
+int
+rnp_get_mac_addr(struct rte_eth_dev *dev, uint8_t *macaddr)
+{
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(dev);
+ const struct rnp_mac_api *ops = RNP_DEV_TO_MAC_OPS(dev);
+
+ if (!macaddr)
+ return -EINVAL;
+ if (ops->get_mac_addr)
+ return ops->get_mac_addr(port, port->attr.nr_lane, macaddr);
+ return -EOPNOTSUPP;
+}
+
+int
+rnp_set_default_mac(struct rte_eth_dev *dev, uint8_t *mac_addr)
+{
+ const struct rnp_mac_api *ops = RNP_DEV_TO_MAC_OPS(dev);
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(dev);
+
+ if (ops->set_default_mac)
+ return ops->set_default_mac(port, mac_addr);
+ return -EOPNOTSUPP;
+}
+
+int
+rnp_set_rafb(struct rte_eth_dev *dev, uint8_t *addr,
+ uint8_t vm_pool, uint8_t index)
+{
+ const struct rnp_mac_api *ops = RNP_DEV_TO_MAC_OPS(dev);
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(dev);
+
+ if (ops->set_rafb)
+ return ops->set_rafb(port, addr, vm_pool, index);
+ return -EOPNOTSUPP;
+}
+
+int
+rnp_clear_rafb(struct rte_eth_dev *dev,
+ uint8_t vm_pool, uint8_t index)
+{
+ const struct rnp_mac_api *ops = RNP_DEV_TO_MAC_OPS(dev);
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(dev);
+
+ if (ops->clear_rafb)
+ return ops->clear_rafb(port, vm_pool, index);
+ return -EOPNOTSUPP;
+}
diff --git a/drivers/net/rnp/base/rnp_api.h b/drivers/net/rnp/base/rnp_api.h
index df574dab77..b998b11237 100644
--- a/drivers/net/rnp/base/rnp_api.h
+++ b/drivers/net/rnp/base/rnp_api.h
@@ -4,4 +4,14 @@ int
rnp_init_hw(struct rte_eth_dev *dev);
int
rnp_reset_hw(struct rte_eth_dev *dev, struct rnp_hw *hw);
+int
+rnp_get_mac_addr(struct rte_eth_dev *dev, uint8_t *macaddr);
+int
+rnp_set_default_mac(struct rte_eth_dev *dev, uint8_t *mac_addr);
+int
+rnp_set_rafb(struct rte_eth_dev *dev, uint8_t *addr,
+ uint8_t vm_pool, uint8_t index);
+int
+rnp_clear_rafb(struct rte_eth_dev *dev,
+ uint8_t vm_pool, uint8_t index);
#endif /* __RNP_API_H__ */
diff --git a/drivers/net/rnp/base/rnp_hw.h b/drivers/net/rnp/base/rnp_hw.h
index 57b7dc75a0..395b9d5c71 100644
--- a/drivers/net/rnp/base/rnp_hw.h
+++ b/drivers/net/rnp/base/rnp_hw.h
@@ -44,6 +44,10 @@ static inline void rnp_wr_reg(volatile void *reg, int val)
rnp_rd_reg((uint8_t *)((_hw)->comm_reg_base) + (_off))
#define rnp_top_wr(_hw, _off, _val) \
rnp_wr_reg((uint8_t *)((_hw)->comm_reg_base) + (_off), (_val))
+#define RNP_MACADDR_UPDATE_LO(hw, hw_idx, val) \
+ rnp_eth_wr(hw, RNP_RAL_BASE_ADDR(hw_idx), val)
+#define RNP_MACADDR_UPDATE_HI(hw, hw_idx, val) \
+ rnp_eth_wr(hw, RNP_RAH_BASE_ADDR(hw_idx), val)
struct rnp_hw;
/* Mbx Operate info */
enum MBX_ID {
@@ -112,9 +116,23 @@ struct rnp_mbx_info {
rte_atomic16_t state;
} __rte_cache_aligned;
+struct rnp_eth_port;
struct rnp_mac_api {
int32_t (*init_hw)(struct rnp_hw *hw);
int32_t (*reset_hw)(struct rnp_hw *hw);
+ /* MAC Address */
+ int32_t (*get_mac_addr)(struct rnp_eth_port *port,
+ uint8_t lane,
+ uint8_t *macaddr);
+ int32_t (*set_default_mac)(struct rnp_eth_port *port, uint8_t *mac);
+ /* Receive Address Filter Table */
+ int32_t (*set_rafb)(struct rnp_eth_port *port,
+ uint8_t *mac,
+ uint8_t vm_pool,
+ uint8_t index);
+ int32_t (*clear_rafb)(struct rnp_eth_port *port,
+ uint8_t vm_pool,
+ uint8_t index);
};
struct rnp_mac_info {
diff --git a/drivers/net/rnp/meson.build b/drivers/net/rnp/meson.build
index 855c894032..f72815b396 100644
--- a/drivers/net/rnp/meson.build
+++ b/drivers/net/rnp/meson.build
@@ -10,6 +10,7 @@ sources = files(
'rnp_ethdev.c',
'rnp_mbx.c',
'rnp_mbx_fw.c',
+ 'rnp_rxtx.c',
'base/rnp_api.c',
)
diff --git a/drivers/net/rnp/rnp.h b/drivers/net/rnp/rnp.h
index f6c9231eb1..6f216cc5ca 100644
--- a/drivers/net/rnp/rnp.h
+++ b/drivers/net/rnp/rnp.h
@@ -9,14 +9,90 @@
#define PCI_VENDOR_ID_MUCSE (0x8848)
#define RNP_DEV_ID_N10G (0x1000)
+#define RNP_DEV_ID_N400L_X4 (0x1021)
#define RNP_MAX_PORT_OF_PF (4)
#define RNP_CFG_BAR (4)
#define RNP_PF_INFO_BAR (0)
+/* Peer Port Own Independent Resource */
+#define RNP_PORT_MAX_MACADDR (32)
+#define RNP_PORT_MAX_UC_MAC_SIZE (256)
+#define RNP_PORT_MAX_VLAN_HASH (12)
+#define RNP_PORT_MAX_UC_HASH_TB (8)
+
+/* Hardware Resource info */
+#define RNP_MAX_RX_QUEUE_NUM (128)
+#define RNP_MAX_TX_QUEUE_NUM (128)
+#define RNP_N400_MAX_RX_QUEUE_NUM (8)
+#define RNP_N400_MAX_TX_QUEUE_NUM (8)
+#define RNP_MAX_HASH_KEY_SIZE (10)
+#define RNP_MAX_MAC_ADDRS (128)
+#define RNP_MAX_SUPPORT_VF_NUM (64)
+#define RNP_MAX_VFTA_SIZE (128)
+#define RNP_MAX_TC_SUPPORT (4)
+
+#define RNP_MAX_UC_MAC_SIZE (4096) /* Max Num of Unicast MAC addr */
+#define RNP_MAX_UC_HASH_TB (128)
+#define RNP_MAX_MC_MAC_SIZE (4096) /* Max Num of Multicast MAC addr */
+#define RNP_MAC_MC_HASH_TB (128)
+#define RNP_MAX_VLAN_HASH_TB_SIZE (4096)
+
+#define RNP_MAX_UC_HASH_TABLE (128)
+#define RNP_MAC_MC_HASH_TABLE (128)
+#define RNP_UTA_BIT_SHIFT (5)
+
enum rnp_resource_share_m {
RNP_SHARE_CORPORATE = 0,
RNP_SHARE_INDEPEND,
};
+
+/* media type */
+enum rnp_media_type {
+ RNP_MEDIA_TYPE_UNKNOWN,
+ RNP_MEDIA_TYPE_FIBER,
+ RNP_MEDIA_TYPE_COPPER,
+ RNP_MEDIA_TYPE_BACKPLANE,
+ RNP_MEDIA_TYPE_NONE,
+};
+
+struct rnp_phy_meta {
+ uint16_t phy_type;
+ uint32_t speed_cap;
+ uint32_t supported_link;
+ uint16_t link_duplex;
+ uint16_t link_autoneg;
+ uint8_t media_type;
+ bool is_sgmii;
+ bool is_backplane;
+ bool fec;
+ uint32_t phy_identifier;
+};
+
+struct rnp_port_attr {
+ uint16_t max_mac_addrs; /* Max Support Mac Address */
+ uint16_t uc_hash_tb_size; /* Unicast Hash Table Size */
+ uint16_t max_uc_mac_hash; /* Max Num of hash MAC addr for UC */
+ uint16_t mc_hash_tb_size; /* Multicast Hash Table Size */
+ uint16_t max_mc_mac_hash; /* Max Num Of Hash Mac addr For MC */
+ uint16_t max_vlan_hash; /* Max Num Of Hash For Vlan ID*/
+ uint32_t hash_table_shift;
+ uint16_t rte_pid; /* Dpdk Manage Port Sequence Id */
+ uint8_t max_rx_queues; /* Belong To This Port Rxq Resource */
+ uint8_t max_tx_queues; /* Belong To This Port Txq Resource */
+ uint8_t queue_ring_base;
+ uint8_t port_offset; /* Use For Redir Table Dma Ring Offset Of Port */
+ union {
+ uint8_t nr_lane; /* phy lane of This PF:0~3 */
+ uint8_t nr_port; /* phy lane of This PF:0~3 */
+ };
+ struct rnp_phy_meta phy_meta;
+ bool link_ready;
+ bool pre_link;
+ uint32_t speed;
+ uint16_t max_rx_pktlen; /* Current Port Max Support Packet Len */
+ uint16_t max_mtu;
+};
+
/*
* Structure to store private data for each driver instance (for each port).
*/
@@ -29,8 +105,16 @@ enum rnp_work_mode {
struct rnp_eth_port {
struct rnp_eth_adapter *adapt;
+ uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
struct rnp_hw *hw;
struct rte_eth_dev *eth_dev;
+ struct rnp_port_attr attr;
+ /* Receive MAC address record table */
+ uint8_t mac_use_tb[RNP_MAX_MAC_ADDRS];
+ uint8_t use_num_mac;
+ bool port_stopped;
+ bool port_closed;
+ enum rnp_resource_share_m s_mode; /* Independent Port Resource */
} __rte_cache_aligned;
struct rnp_share_ops {
@@ -61,6 +145,10 @@ struct rnp_eth_adapter {
(&((struct rnp_eth_adapter *)(RNP_DEV_TO_PORT((eth_dev))->adapt))->hw)
#define RNP_HW_TO_ADAPTER(hw) \
((struct rnp_eth_adapter *)((hw)->back))
+#define RNP_PORT_TO_HW(port) \
+ (&(((struct rnp_eth_adapter *)(port)->adapt)->hw))
+#define RNP_PORT_TO_ADAPTER(port) \
+ ((struct rnp_eth_adapter *)((port)->adapt))
#define RNP_DEV_PP_PRIV_TO_MBX_OPS(dev) \
(((struct rnp_share_ops *)(dev)->process_private)->mbx_api)
#define RNP_DEV_TO_MBX_OPS(dev) RNP_DEV_PP_PRIV_TO_MBX_OPS(dev)
diff --git a/drivers/net/rnp/rnp_ethdev.c b/drivers/net/rnp/rnp_ethdev.c
index 13d03a23c5..ad99f99d4a 100644
--- a/drivers/net/rnp/rnp_ethdev.c
+++ b/drivers/net/rnp/rnp_ethdev.c
@@ -11,6 +11,7 @@
#include "rnp_api.h"
#include "rnp_mbx.h"
#include "rnp_mbx_fw.h"
+#include "rnp_rxtx.h"
#include "rnp_logs.h"
static int
@@ -40,6 +41,62 @@ static int rnp_dev_close(struct rte_eth_dev *dev)
static const struct eth_dev_ops rnp_eth_dev_ops = {
};
+static void
+rnp_setup_port_attr(struct rnp_eth_port *port,
+ struct rte_eth_dev *dev,
+ uint8_t num_ports,
+ uint8_t p_id)
+{
+ struct rnp_port_attr *attr = &port->attr;
+ struct rnp_hw *hw = RNP_DEV_TO_HW(dev);
+ uint32_t lane_bit;
+
+ if (port->s_mode == RNP_SHARE_INDEPEND) {
+ attr->max_mac_addrs = RNP_PORT_MAX_MACADDR;
+ attr->max_uc_mac_hash = RNP_PORT_MAX_UC_MAC_SIZE;
+ attr->uc_hash_tb_size = RNP_PORT_MAX_UC_HASH_TB;
+ attr->max_mc_mac_hash = RNP_PORT_MAX_MACADDR;
+ attr->max_vlan_hash = RNP_PORT_MAX_VLAN_HASH;
+ attr->hash_table_shift = 26 - (attr->max_uc_mac_hash >> 7);
+ } else {
+ attr->max_mac_addrs = RNP_MAX_MAC_ADDRS / num_ports;
+ attr->max_uc_mac_hash = RNP_MAX_UC_MAC_SIZE / num_ports;
+ attr->uc_hash_tb_size = RNP_MAX_UC_HASH_TB;
+ attr->max_mc_mac_hash = RNP_MAX_MC_MAC_SIZE / num_ports;
+ attr->mc_hash_tb_size = RNP_MAC_MC_HASH_TB;
+ attr->max_vlan_hash = RNP_MAX_VLAN_HASH_TB_SIZE / num_ports;
+ attr->hash_table_shift = RNP_UTA_BIT_SHIFT;
+ }
+ if (hw->ncsi_en)
+ attr->uc_hash_tb_size -= hw->ncsi_rar_entries;
+ if (hw->device_id == RNP_DEV_ID_N400L_X4) {
+ attr->max_rx_queues = RNP_N400_MAX_RX_QUEUE_NUM;
+ attr->max_tx_queues = RNP_N400_MAX_TX_QUEUE_NUM;
+ } else {
+ attr->max_rx_queues = RNP_MAX_RX_QUEUE_NUM / num_ports;
+ attr->max_tx_queues = RNP_MAX_TX_QUEUE_NUM / num_ports;
+ }
+
+ attr->rte_pid = dev->data->port_id;
+ lane_bit = hw->phy_port_ids[p_id] & (hw->max_port_num - 1);
+
+ attr->nr_port = lane_bit;
+ attr->port_offset = rnp_eth_rd(hw, RNP_TC_PORT_MAP_TB(attr->nr_port));
+
+ rnp_mbx_get_lane_stat(dev);
+
+ PMD_DRV_LOG(INFO, "PF[%d] SW-ETH-PORT[%d]<->PHY_LANE[%d]\n",
+ hw->function, p_id, lane_bit);
+}
+
+static void
+rnp_init_filter_setup(struct rnp_eth_port *port,
+ uint8_t num_ports)
+{
+ RTE_SET_USED(port);
+ RTE_SET_USED(num_ports);
+}
+
static int
rnp_init_port_resource(struct rnp_eth_adapter *adapter,
struct rte_eth_dev *dev,
@@ -47,11 +104,53 @@ rnp_init_port_resource(struct rnp_eth_adapter *adapter,
uint8_t p_id)
{
struct rnp_eth_port *port = RNP_DEV_TO_PORT(dev);
+ struct rte_pci_device *pci_dev = adapter->pdev;
+ struct rnp_hw *hw = &adapter->hw;
+ port->adapt = adapter;
+ port->s_mode = adapter->s_mode;
+ port->port_stopped = 1;
+ port->hw = hw;
port->eth_dev = dev;
- adapter->ports[p_id] = port;
+
+ dev->device = &pci_dev->device;
+ rte_eth_copy_pci_info(dev, pci_dev);
dev->dev_ops = &rnp_eth_dev_ops;
- RTE_SET_USED(name);
+ dev->rx_queue_count = rnp_dev_rx_queue_count;
+ dev->rx_descriptor_status = rnp_dev_rx_descriptor_status;
+ dev->tx_descriptor_status = rnp_dev_tx_descriptor_status;
+ dev->rx_pkt_burst = rnp_recv_pkts;
+ dev->tx_pkt_burst = rnp_xmit_pkts;
+ dev->tx_pkt_prepare = rnp_prep_pkts;
+
+ rnp_setup_port_attr(port, dev, adapter->num_ports, p_id);
+ rnp_init_filter_setup(port, adapter->num_ports);
+ rnp_get_mac_addr(dev, port->mac_addr);
+ dev->data->mac_addrs = rte_zmalloc(name, sizeof(struct rte_ether_addr) *
+ port->attr.max_mac_addrs, 0);
+ if (!dev->data->mac_addrs) {
+ RNP_PMD_DRV_LOG(ERR, "Memory allocation "
+ "for MAC failed! Exiting.\n");
+ return -ENOMEM;
+ }
+ /* Allocate memory for storing hash filter MAC addresses */
+ dev->data->hash_mac_addrs = rte_zmalloc(name,
+ RTE_ETHER_ADDR_LEN * port->attr.max_uc_mac_hash, 0);
+ if (dev->data->hash_mac_addrs == NULL) {
+ RNP_PMD_INIT_LOG(ERR, "Failed to allocate %d bytes "
+ "needed to store MAC addresses",
+ RTE_ETHER_ADDR_LEN * port->attr.max_uc_mac_hash);
+ return -ENOMEM;
+ }
+
+ rnp_set_default_mac(dev, port->mac_addr);
+ rte_ether_addr_copy((const struct rte_ether_addr *)port->mac_addr,
+ dev->data->mac_addrs);
+ /* MTU */
+ dev->data->mtu = RTE_ETHER_MAX_LEN -
+ RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN;
+ adapter->ports[p_id] = port;
+ rte_eth_dev_probing_finish(dev);
return 0;
}
@@ -211,9 +310,116 @@ static int32_t rnp_reset_hw_pf(struct rnp_hw *hw)
return 0;
}
+static void
+rnp_mac_res_take_in(struct rnp_eth_port *port,
+ uint8_t index)
+{
+ if (!port->mac_use_tb[index]) {
+ port->mac_use_tb[index] = true;
+ port->use_num_mac++;
+ }
+}
+
+static void
+rnp_mac_res_remove(struct rnp_eth_port *port,
+ uint8_t index)
+{
+ if (port->mac_use_tb[index]) {
+ port->mac_use_tb[index] = false;
+ port->use_num_mac--;
+ }
+}
+
+static int32_t rnp_set_mac_addr_pf(struct rnp_eth_port *port,
+ uint8_t *mac, uint8_t vm_pool,
+ uint8_t index)
+{
+ struct rnp_hw *hw = RNP_PORT_TO_HW(port);
+ struct rnp_port_attr *attr = &port->attr;
+ uint8_t hw_idx;
+ uint32_t value;
+
+ if (port->use_num_mac > port->attr.max_mac_addrs ||
+ index > port->attr.max_mac_addrs)
+ return -ENOMEM;
+
+ if (vm_pool != UINT8_MAX)
+ hw_idx = (attr->nr_port * attr->max_mac_addrs) + vm_pool + index;
+ else
+ hw_idx = (attr->nr_port * attr->max_mac_addrs) + index;
+
+ rnp_mac_res_take_in(port, hw_idx);
+
+ value = (mac[0] << 8) | mac[1];
+ value |= RNP_MAC_FILTER_EN;
+ RNP_MACADDR_UPDATE_HI(hw, hw_idx, value);
+
+ value = (mac[2] << 24) | (mac[3] << 16) | (mac[4] << 8) | mac[5];
+ RNP_MACADDR_UPDATE_LO(hw, hw_idx, value);
+
+ return 0;
+}
+
+static void
+rnp_remove_mac_from_hw(struct rnp_eth_port *port,
+ uint8_t vm_pool, uint8_t index)
+{
+ struct rnp_hw *hw = RNP_PORT_TO_HW(port);
+ struct rnp_port_attr *attr = &port->attr;
+ uint16_t hw_idx;
+
+ if (vm_pool != UINT8_MAX)
+ hw_idx = (attr->nr_port * attr->max_mac_addrs) + vm_pool + index;
+ else
+ hw_idx = (attr->nr_port * attr->max_mac_addrs) + index;
+
+ rnp_mac_res_remove(port, hw_idx);
+
+ rnp_eth_wr(hw, RNP_RAL_BASE_ADDR(hw_idx), 0);
+ rnp_eth_wr(hw, RNP_RAH_BASE_ADDR(hw_idx), 0);
+}
+
+static int32_t
+rnp_clear_mac_addr_pf(struct rnp_eth_port *port,
+ uint8_t vm_pool, uint8_t index)
+{
+ rnp_remove_mac_from_hw(port, vm_pool, index);
+
+ return 0;
+}
+
+static int32_t rnp_get_mac_addr_pf(struct rnp_eth_port *port,
+ uint8_t lane,
+ uint8_t *macaddr)
+{
+ struct rnp_hw *hw = RNP_DEV_TO_HW(port->eth_dev);
+
+ return rnp_fw_get_macaddr(port->eth_dev, hw->pf_vf_num, macaddr, lane);
+}
+
+static int32_t
+rnp_set_default_mac_pf(struct rnp_eth_port *port,
+ uint8_t *mac)
+{
+ struct rnp_eth_adapter *adap = RNP_PORT_TO_ADAPTER(port);
+ uint16_t max_vfs;
+
+ if (port->s_mode == RNP_SHARE_INDEPEND)
+ return rnp_set_rafb(port->eth_dev, (uint8_t *)mac,
+ UINT8_MAX, 0);
+
+ max_vfs = adap->max_vfs;
+
+ return rnp_set_rafb(port->eth_dev, mac, max_vfs, 0);
+}
+
const struct rnp_mac_api rnp_mac_ops = {
.reset_hw = rnp_reset_hw_pf,
- .init_hw = rnp_init_hw_pf
+ .init_hw = rnp_init_hw_pf,
+ .get_mac_addr = rnp_get_mac_addr_pf,
+ .set_default_mac = rnp_set_default_mac_pf,
+ .set_rafb = rnp_set_mac_addr_pf,
+ .clear_rafb = rnp_clear_mac_addr_pf
};
static void
@@ -228,7 +434,11 @@ rnp_common_ops_init(struct rnp_eth_adapter *adapter)
static int
rnp_special_ops_init(struct rte_eth_dev *eth_dev)
{
- RTE_SET_USED(eth_dev);
+ struct rnp_eth_adapter *adapter = RNP_DEV_TO_ADAPTER(eth_dev);
+ struct rnp_share_ops *share_priv;
+
+ share_priv = adapter->share_priv;
+ share_priv->mac_api = &rnp_mac_ops;
return 0;
}
@@ -237,9 +447,9 @@ static int
rnp_eth_dev_init(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(dev);
struct rnp_eth_adapter *adapter = NULL;
char name[RTE_ETH_NAME_MAX_LEN] = " ";
- struct rnp_eth_port *port = NULL;
struct rte_eth_dev *eth_dev;
struct rnp_hw *hw = NULL;
int32_t p_id;
@@ -275,13 +485,13 @@ rnp_eth_dev_init(struct rte_eth_dev *dev)
return ret;
}
adapter->share_priv = dev->process_private;
+ port->adapt = adapter;
rnp_common_ops_init(adapter);
+ rnp_init_mbx_ops_pf(hw);
rnp_get_nic_attr(adapter);
/* We need Use Device Id To Change The Resource Mode */
rnp_special_ops_init(dev);
- port->adapt = adapter;
port->hw = hw;
- rnp_init_mbx_ops_pf(hw);
for (p_id = 0; p_id < adapter->num_ports; p_id++) {
/* port 0 resource has been alloced When Probe */
if (!p_id) {
diff --git a/drivers/net/rnp/rnp_mbx_fw.c b/drivers/net/rnp/rnp_mbx_fw.c
index 6fe008351b..856b3f956b 100644
--- a/drivers/net/rnp/rnp_mbx_fw.c
+++ b/drivers/net/rnp/rnp_mbx_fw.c
@@ -269,3 +269,115 @@ int rnp_mbx_fw_reset_phy(struct rte_eth_dev *dev)
return rnp_fw_send_cmd_wait(dev, &req, &reply);
}
+
+int
+rnp_fw_get_macaddr(struct rte_eth_dev *dev,
+ int pfvfnum,
+ u8 *mac_addr,
+ int nr_lane)
+{
+ struct rnp_hw *hw = RNP_DEV_TO_HW(dev);
+ struct mbx_req_cookie *cookie;
+ struct mbx_fw_cmd_reply reply;
+ struct mbx_fw_cmd_req req;
+ struct mac_addr *mac;
+ int err;
+
+ memset(&req, 0, sizeof(req));
+ memset(&reply, 0, sizeof(reply));
+
+ if (!mac_addr)
+ return -EINVAL;
+
+ if (hw->mbx.irq_enabled) {
+ cookie = rnp_memzone_reserve(hw->cookie_p_name, 0);
+ if (!cookie)
+ return -ENOMEM;
+ memset(cookie->priv, 0, cookie->priv_len);
+ mac = (struct mac_addr *)cookie->priv;
+ build_get_macaddress_req(&req, 1 << nr_lane, pfvfnum, cookie);
+ err = rnp_mbx_fw_post_req(dev, &req, cookie);
+ if (err)
+ goto quit;
+
+ if ((1 << nr_lane) & mac->lanes) {
+ memcpy(mac_addr, mac->addrs[nr_lane].mac, 6);
+ err = 0;
+ } else {
+ err = -EIO;
+ }
+quit:
+ return err;
+ }
+ build_get_macaddress_req(&req, 1 << nr_lane, pfvfnum, &req);
+ err = rnp_fw_send_cmd_wait(dev, &req, &reply);
+ if (err) {
+ RNP_PMD_LOG(ERR, "%s: failed. err:%d\n", __func__, err);
+ return err;
+ }
+
+ if ((1 << nr_lane) & reply.mac_addr.lanes) {
+ memcpy(mac_addr, reply.mac_addr.addrs[nr_lane].mac, 6);
+ return 0;
+ }
+
+ return -EIO;
+}
+
+int rnp_mbx_get_lane_stat(struct rte_eth_dev *dev)
+{
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(dev);
+ struct rnp_phy_meta *phy_meta = &port->attr.phy_meta;
+ struct rnp_hw *hw = RNP_DEV_TO_HW(dev);
+ struct lane_stat_data *lane_stat;
+ int nr_lane = port->attr.nr_lane;
+ struct mbx_req_cookie *cookie;
+ struct mbx_fw_cmd_reply reply;
+ struct mbx_fw_cmd_req req;
+ int err = 0;
+
+ memset(&req, 0, sizeof(req));
+
+ if (hw->mbx.irq_enabled) {
+ cookie = rnp_memzone_reserve(hw->cookie_p_name, 0);
+
+ if (!cookie)
+ return -ENOMEM;
+ memset(cookie->priv, 0, cookie->priv_len);
+ lane_stat = (struct lane_stat_data *)cookie->priv;
+ build_get_lane_status_req(&req, nr_lane, cookie);
+ err = rnp_mbx_fw_post_req(dev, &req, cookie);
+ if (err)
+ goto quit;
+ } else {
+ memset(&reply, 0, sizeof(reply));
+ build_get_lane_status_req(&req, nr_lane, &req);
+ err = rnp_fw_send_cmd_wait(dev, &req, &reply);
+ if (err)
+ goto quit;
+ lane_stat = (struct lane_stat_data *)reply.data;
+ }
+
+ phy_meta->supported_link = lane_stat->supported_link;
+ phy_meta->is_backplane = lane_stat->is_backplane;
+ phy_meta->phy_identifier = lane_stat->phy_addr;
+ phy_meta->link_autoneg = lane_stat->autoneg;
+ phy_meta->link_duplex = lane_stat->duplex;
+ phy_meta->phy_type = lane_stat->phy_type;
+ phy_meta->is_sgmii = lane_stat->is_sgmii;
+ phy_meta->fec = lane_stat->fec;
+
+ if (phy_meta->is_sgmii) {
+ phy_meta->media_type = RNP_MEDIA_TYPE_COPPER;
+ phy_meta->supported_link |=
+ RNP_SPEED_CAP_100M_HALF | RNP_SPEED_CAP_10M_HALF;
+ } else if (phy_meta->is_backplane) {
+ phy_meta->media_type = RNP_MEDIA_TYPE_BACKPLANE;
+ } else {
+ phy_meta->media_type = RNP_MEDIA_TYPE_FIBER;
+ }
+
+ return 0;
+quit:
+ return err;
+}
diff --git a/drivers/net/rnp/rnp_mbx_fw.h b/drivers/net/rnp/rnp_mbx_fw.h
index 44ffe56908..7bf5c2a865 100644
--- a/drivers/net/rnp/rnp_mbx_fw.h
+++ b/drivers/net/rnp/rnp_mbx_fw.h
@@ -19,7 +19,9 @@ struct mbx_req_cookie {
enum GENERIC_CMD {
/* link configuration admin commands */
GET_PHY_ABALITY = 0x0601,
+ GET_MAC_ADDRES = 0x0602,
RESET_PHY = 0x0603,
+ GET_LANE_STATUS = 0x0610,
SET_EVENT_MASK = 0x0613,
};
@@ -82,6 +84,61 @@ struct phy_abilities {
};
} __rte_packed __rte_aligned(4);
+#define RNP_SPEED_CAP_UNKNOWN (0)
+#define RNP_SPEED_CAP_10M_FULL BIT(2)
+#define RNP_SPEED_CAP_100M_FULL BIT(3)
+#define RNP_SPEED_CAP_1GB_FULL BIT(4)
+#define RNP_SPEED_CAP_10GB_FULL BIT(5)
+#define RNP_SPEED_CAP_40GB_FULL BIT(6)
+#define RNP_SPEED_CAP_25GB_FULL BIT(7)
+#define RNP_SPEED_CAP_50GB_FULL BIT(8)
+#define RNP_SPEED_CAP_100GB_FULL BIT(9)
+#define RNP_SPEED_CAP_10M_HALF BIT(10)
+#define RNP_SPEED_CAP_100M_HALF BIT(11)
+#define RNP_SPEED_CAP_1GB_HALF BIT(12)
+
+struct lane_stat_data {
+ u8 nr_lane; /* 0-3: hw lane corresponding to the current port */
+ u8 pci_gen : 4; /* nic cur pci speed genX: 1,2,3 */
+ u8 pci_lanes : 4; /* nic cur pci x1 x2 x4 x8 x16 */
+ u8 pma_type;
+ u8 phy_type; /* interface media type */
+
+ u16 linkup : 1; /* cur port link state */
+ u16 duplex : 1; /* duplex state only RJ45 valid */
+ u16 autoneg : 1; /* autoneg state */
+ u16 fec : 1; /* fec state */
+ u16 rev_an : 1;
+ u16 link_traing : 1; /* link-training state */
+ u16 media_availble : 1;
+ u16 is_sgmii : 1; /* 1: Twisted Pair 0: FIBRE */
+ u16 link_fault : 4;
+#define LINK_LINK_FAULT BIT(0)
+#define LINK_TX_FAULT BIT(1)
+#define LINK_RX_FAULT BIT(2)
+#define LINK_REMOTE_FAULT BIT(3)
+ u16 is_backplane : 1; /* 1: Backplane Mode */
+ union {
+ u8 phy_addr; /* Phy MDIO address */
+ struct {
+ u8 mod_abs : 1;
+ u8 fault : 1;
+ u8 tx_dis : 1;
+ u8 los : 1;
+ } sfp;
+ };
+ u8 sfp_connector;
+ u32 speed; /* Current Speed Value */
+
+ u32 si_main;
+ u32 si_pre;
+ u32 si_post;
+ u32 si_tx_boost;
+ u32 supported_link; /* Cur nic Support Link cap */
+ u32 phy_id;
+ u32 advertised_link; /* autoneg mode advertised cap */
+} __rte_packed __rte_aligned(4);
+
/* firmware -> driver */
struct mbx_fw_cmd_reply {
/* fw must set: DD, CMP, Error(if error), copy value */
@@ -99,6 +156,19 @@ struct mbx_fw_cmd_reply {
};
/* ===== data ==== [16-64] */
union {
+ char data[0];
+
+ struct mac_addr {
+ int lanes;
+ struct _addr {
+ /* for macaddr:01:02:03:04:05:06
+ * mac-hi=0x01020304 mac-lo=0x05060000
+ */
+ unsigned char mac[8];
+ } addrs[4];
+ } mac_addr;
+
+ struct lane_stat_data lanestat;
struct phy_abilities phy_abilities;
};
} __rte_packed __rte_aligned(4);
@@ -128,10 +198,19 @@ struct mbx_fw_cmd_req {
#define REQUEST_BY_PXE 0xa3
} get_phy_ablity;
+ struct {
+ int lane_mask;
+ int pfvf_num;
+ } get_mac_addr;
+
struct {
unsigned short enable_stat;
unsigned short event_mask; /* enum link_event_mask */
} stat_event_mask;
+
+ struct {
+ int nr_lane;
+ } get_lane_st;
};
} __rte_packed __rte_aligned(4);
@@ -146,6 +225,23 @@ build_phy_abalities_req(struct mbx_fw_cmd_req *req, void *cookie)
req->cookie = cookie;
}
+static inline void
+build_get_macaddress_req(struct mbx_fw_cmd_req *req,
+ int lane_mask,
+ int pfvfnum,
+ void *cookie)
+{
+ req->flags = 0;
+ req->opcode = GET_MAC_ADDRES;
+ req->datalen = sizeof(req->get_mac_addr);
+ req->cookie = cookie;
+ req->reply_lo = 0;
+ req->reply_hi = 0;
+
+ req->get_mac_addr.lane_mask = lane_mask;
+ req->get_mac_addr.pfvf_num = pfvfnum;
+}
+
/* enum link_event_mask or */
static inline void
build_link_set_event_mask(struct mbx_fw_cmd_req *req,
@@ -175,9 +271,28 @@ build_reset_phy_req(struct mbx_fw_cmd_req *req,
req->cookie = cookie;
}
+static inline void
+build_get_lane_status_req(struct mbx_fw_cmd_req *req,
+ int nr_lane, void *cookie)
+{
+ req->flags = 0;
+ req->opcode = GET_LANE_STATUS;
+ req->datalen = sizeof(req->get_lane_st);
+ req->cookie = cookie;
+ req->reply_lo = 0;
+ req->reply_hi = 0;
+ req->get_lane_st.nr_lane = nr_lane;
+}
+
int rnp_mbx_get_capability(struct rte_eth_dev *dev,
int *lane_mask,
int *nic_mode);
int rnp_mbx_link_event_enable(struct rte_eth_dev *dev, int enable);
int rnp_mbx_fw_reset_phy(struct rte_eth_dev *dev);
+int
+rnp_fw_get_macaddr(struct rte_eth_dev *dev,
+ int pfvfnum,
+ u8 *mac_addr,
+ int nr_lane);
+int rnp_mbx_get_lane_stat(struct rte_eth_dev *dev);
#endif /* __RNP_MBX_FW_H__*/
diff --git a/drivers/net/rnp/rnp_rxtx.c b/drivers/net/rnp/rnp_rxtx.c
new file mode 100644
index 0000000000..679c0649a7
--- /dev/null
+++ b/drivers/net/rnp/rnp_rxtx.c
@@ -0,0 +1,83 @@
+#include <stdbool.h>
+#include <stdint.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <stdio.h>
+#include <assert.h>
+
+#include <rte_version.h>
+#include <rte_ether.h>
+#include <rte_cycles.h>
+#include <rte_ethdev.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_vxlan.h>
+#include <rte_gre.h>
+#ifdef RTE_ARCH_ARM64
+#include <rte_cpuflags_64.h>
+#elif defined(RTE_ARCH_ARM)
+#include <rte_cpuflags_32.h>
+#endif
+
+#include "base/rnp_hw.h"
+#include "rnp.h"
+#include "rnp_rxtx.h"
+#include "rnp_logs.h"
+
+int
+rnp_dev_rx_descriptor_status(void *rx_queue, uint16_t offset)
+{
+ RTE_SET_USED(rx_queue);
+ RTE_SET_USED(offset);
+
+ return 0;
+}
+
+int
+rnp_dev_tx_descriptor_status(void *tx_queue, uint16_t offset)
+{
+ RTE_SET_USED(tx_queue);
+ RTE_SET_USED(offset);
+
+ return 0;
+}
+
+uint32_t
+rnp_dev_rx_queue_count(void *rx_queue)
+{
+ RTE_SET_USED(rx_queue);
+
+ return 0;
+}
+
+__rte_always_inline uint16_t
+rnp_recv_pkts(void *_rxq,
+ struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+ RTE_SET_USED(_rxq);
+ RTE_SET_USED(rx_pkts);
+ RTE_SET_USED(nb_pkts);
+
+ return 0;
+}
+
+__rte_always_inline uint16_t
+rnp_xmit_pkts(void *_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+ RTE_SET_USED(_txq);
+ RTE_SET_USED(tx_pkts);
+ RTE_SET_USED(nb_pkts);
+
+ return 0;
+}
+
+uint16_t rnp_prep_pkts(void *tx_queue,
+ struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts)
+{
+ RTE_SET_USED(tx_queue);
+ RTE_SET_USED(tx_pkts);
+ RTE_SET_USED(nb_pkts);
+
+ return 0;
+}
diff --git a/drivers/net/rnp/rnp_rxtx.h b/drivers/net/rnp/rnp_rxtx.h
new file mode 100644
index 0000000000..0352971fcb
--- /dev/null
+++ b/drivers/net/rnp/rnp_rxtx.h
@@ -0,0 +1,14 @@
+#ifndef __RNP_RXTX_H__
+#define __RNP_RXTX_H__
+
+uint32_t rnp_dev_rx_queue_count(void *rx_queue);
+int rnp_dev_rx_descriptor_status(void *rx_queue, uint16_t offset);
+int rnp_dev_tx_descriptor_status(void *tx_queue, uint16_t offset);
+uint16_t
+rnp_recv_pkts(void *_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
+uint16_t
+rnp_xmit_pkts(void *_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts);
+uint16_t rnp_prep_pkts(void *tx_queue,
+ struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts);
+#endif /* __RNP_RXTX_H__ */
--
2.27.0
* [PATCH v5 7/8] net/rnp add devargs runtime parsing functions
2023-08-07 2:16 [PATCH v4 0/8] [v4]drivers/net Add Support mucse N10 Pmd Driver Wenbo Cao
` (5 preceding siblings ...)
2023-08-07 2:16 ` [PATCH v5 6/8] net/rnp add port info resource init Wenbo Cao
@ 2023-08-07 2:16 ` Wenbo Cao
2023-08-07 2:16 ` [PATCH v5 8/8] net/rnp handle device interrupts Wenbo Cao
7 siblings, 0 replies; 12+ messages in thread
From: Wenbo Cao @ 2023-08-07 2:16 UTC (permalink / raw)
To: Wenbo Cao; +Cc: dev, ferruh.yigit, andrew.rybchenko, yaojun
Add the various runtime devargs command-line options supported by
this driver; a parsing sketch is shown below.
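As context, a minimal sketch of how one of these options is parsed with the
standard rte_kvargs API. The helper below is hypothetical; the actual
parsing lives in rnp_parse_devargs() in this patch, and
rnp_parse_handle_devarg() is the handler it adds.

#include <rte_devargs.h>
#include <rte_kvargs.h>

/* Hypothetical example: extract the "hw_loopback" devarg into a uint64_t. */
static int rnp_example_parse_loopback(struct rte_devargs *devargs,
				      uint64_t *loopback)
{
	static const char *const keys[] = { "hw_loopback", NULL };
	struct rte_kvargs *kvlist;
	int ret = 0;

	if (devargs == NULL)
		return 0;
	kvlist = rte_kvargs_parse(devargs->args, keys);
	if (kvlist == NULL)
		return -EINVAL;
	if (rte_kvargs_count(kvlist, "hw_loopback"))
		ret = rte_kvargs_process(kvlist, "hw_loopback",
					 rnp_parse_handle_devarg, loopback);
	rte_kvargs_free(kvlist);
	return ret;
}

On the application side this corresponds to something like
"-a <pci bdf>,hw_loopback=1" on the EAL command line (assuming the usual
device allowlist syntax).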
Signed-off-by: Wenbo Cao <caowenbo@mucse.com>
---
drivers/net/rnp/rnp.h | 22 +++++
drivers/net/rnp/rnp_ethdev.c | 166 +++++++++++++++++++++++++++++++++++
drivers/net/rnp/rnp_mbx_fw.c | 164 ++++++++++++++++++++++++++++++++++
drivers/net/rnp/rnp_mbx_fw.h | 69 +++++++++++++++
4 files changed, 421 insertions(+)
diff --git a/drivers/net/rnp/rnp.h b/drivers/net/rnp/rnp.h
index 6f216cc5ca..933cdc6007 100644
--- a/drivers/net/rnp/rnp.h
+++ b/drivers/net/rnp/rnp.h
@@ -107,6 +107,8 @@ struct rnp_eth_port {
struct rnp_eth_adapter *adapt;
uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
struct rnp_hw *hw;
+ uint8_t rx_func_sec; /* force set io rx_func */
+ uint8_t tx_func_sec; /* force set io tx func */
struct rte_eth_dev *eth_dev;
struct rnp_port_attr attr;
/* Recvice Mac Address Record Table */
@@ -122,6 +124,13 @@ struct rnp_share_ops {
const struct rnp_mac_api *mac_api;
} __rte_cache_aligned;
+enum {
+ RNP_IO_FUNC_USE_NONE = 0,
+ RNP_IO_FUNC_USE_VEC,
+ RNP_IO_FUNC_USE_SIMPLE,
+ RNP_IO_FUNC_USE_COMMON,
+};
+
struct rnp_eth_adapter {
enum rnp_work_mode mode;
enum rnp_resource_share_m s_mode; /* Port Resource Share Policy */
@@ -135,6 +144,19 @@ struct rnp_eth_adapter {
int max_link_speed;
uint8_t num_ports; /* Cur Pf Has physical Port Num */
uint8_t lane_mask;
+
+ uint8_t rx_func_sec; /* force set io rx_func */
+ uint8_t tx_func_sec; /* force set io tx func */
+ /* fw-update */
+ bool do_fw_update;
+ char *fw_path;
+
+ bool loopback_en;
+ bool fw_sfp_10g_1g_auto_det;
+ int fw_force_speed_1g;
+#define FOCE_SPEED_1G_NOT_SET (-1)
+#define FOCE_SPEED_1G_DISABLED (0)
+#define FOCE_SPEED_1G_ENABLED (1)
} __rte_cache_aligned;
#define RNP_DEV_TO_PORT(eth_dev) \
diff --git a/drivers/net/rnp/rnp_ethdev.c b/drivers/net/rnp/rnp_ethdev.c
index ad99f99d4a..5313dae5a2 100644
--- a/drivers/net/rnp/rnp_ethdev.c
+++ b/drivers/net/rnp/rnp_ethdev.c
@@ -6,6 +6,7 @@
#include <rte_io.h>
#include <rte_malloc.h>
#include <ethdev_driver.h>
+#include <rte_kvargs.h>
#include "rnp.h"
#include "rnp_api.h"
@@ -14,6 +15,13 @@
#include "rnp_rxtx.h"
#include "rnp_logs.h"
+#define RNP_HW_MAC_LOOPBACK_ARG "hw_loopback"
+#define RNP_FW_UPDATE "fw_update"
+#define RNP_RX_FUNC_SELECT "rx_func_sec"
+#define RNP_TX_FUNC_SELECT "tx_func_sec"
+#define RNP_FW_4X10G_10G_1G_DET "fw_4x10g_10g_1g_auto_det"
+#define RNP_FW_FORCE_SPEED_1G "fw_force_1g_speed"
+
static int
rnp_mac_rx_disable(struct rte_eth_dev *dev)
{
@@ -108,6 +116,8 @@ rnp_init_port_resource(struct rnp_eth_adapter *adapter,
struct rnp_hw *hw = &adapter->hw;
port->adapt = adapter;
+ port->rx_func_sec = adapter->rx_func_sec;
+ port->tx_func_sec = adapter->tx_func_sec;
port->s_mode = adapter->s_mode;
port->port_stopped = 1;
port->hw = hw;
@@ -443,6 +453,154 @@ rnp_special_ops_init(struct rte_eth_dev *eth_dev)
return 0;
}
+static const char *const rnp_valid_arguments[] = {
+ RNP_HW_MAC_LOOPBACK_ARG,
+ RNP_FW_UPDATE,
+ RNP_RX_FUNC_SELECT,
+ RNP_TX_FUNC_SELECT,
+ RNP_FW_4X10G_10G_1G_DET,
+ RNP_FW_FORCE_SPEED_1G,
+ NULL
+};
+
+static int
+rnp_parse_handle_devarg(const char *key, const char *value,
+ void *extra_args)
+{
+ struct rnp_eth_adapter *adapter = NULL;
+
+ if (value == NULL || extra_args == NULL)
+ return -EINVAL;
+
+ if (strcmp(key, RNP_HW_MAC_LOOPBACK_ARG) == 0) {
+ bool *n = extra_args;
+ *n = (strtoul(value, NULL, 10) != 0);
+ if (errno == ERANGE) {
+ RNP_PMD_DRV_LOG(ERR, "invalid extra param value\n");
+ return -1;
+ }
+ } else if (strcmp(key, RNP_FW_UPDATE) == 0) {
+ adapter = (struct rnp_eth_adapter *)extra_args;
+ adapter->do_fw_update = true;
+ adapter->fw_path = strdup(value);
+ } else if (strcmp(key, RNP_FW_4X10G_10G_1G_DET) == 0) {
+ adapter = (struct rnp_eth_adapter *)extra_args;
+ if (adapter->num_ports == 2 && adapter->hw.speed == 10 * 1000) {
+ adapter->fw_sfp_10g_1g_auto_det =
+ (strcmp(value, "on") == 0) ? true : false;
+ } else {
+ adapter->fw_sfp_10g_1g_auto_det = false;
+ }
+ } else if (strcmp(key, RNP_FW_FORCE_SPEED_1G) == 0) {
+ adapter = (struct rnp_eth_adapter *)extra_args;
+ if (adapter->num_ports == 2) {
+ if (strcmp(value, "on") == 0)
+ adapter->fw_force_speed_1g = FOCE_SPEED_1G_ENABLED;
+ else if (strcmp(value, "off") == 0)
+ adapter->fw_force_speed_1g = FOCE_SPEED_1G_DISABLED;
+ }
+ } else {
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
+rnp_parse_io_select_func(const char *key, const char *value, void *extra_args)
+{
+ uint8_t select = RNP_IO_FUNC_USE_NONE;
+
+ RTE_SET_USED(key);
+
+ if (strcmp(value, "vec") == 0)
+ select = RNP_IO_FUNC_USE_VEC;
+ else if (strcmp(value, "simple") == 0)
+ select = RNP_IO_FUNC_USE_SIMPLE;
+ else if (strcmp(value, "common") == 0)
+ select = RNP_IO_FUNC_USE_COMMON;
+
+ *(uint8_t *)extra_args = select;
+
+ return 0;
+}
+
+static int
+rnp_parse_devargs(struct rnp_eth_adapter *adapter,
+ struct rte_devargs *devargs)
+{
+ uint8_t rx_io_func = RNP_IO_FUNC_USE_NONE;
+ uint8_t tx_io_func = RNP_IO_FUNC_USE_NONE;
+ struct rte_kvargs *kvlist;
+ bool loopback_en = false;
+ int ret = 0;
+
+ adapter->do_fw_update = false;
+ adapter->fw_sfp_10g_1g_auto_det = false;
+ adapter->fw_force_speed_1g = FOCE_SPEED_1G_NOT_SET;
+
+ if (!devargs)
+ goto def;
+
+ kvlist = rte_kvargs_parse(devargs->args, rnp_valid_arguments);
+ if (kvlist == NULL)
+ goto def;
+
+ if (rte_kvargs_count(kvlist, RNP_HW_MAC_LOOPBACK_ARG) == 1)
+ ret = rte_kvargs_process(kvlist, RNP_HW_MAC_LOOPBACK_ARG,
+ &rnp_parse_handle_devarg, &loopback_en);
+
+ if (rte_kvargs_count(kvlist, RNP_FW_4X10G_10G_1G_DET) == 1)
+ ret = rte_kvargs_process(kvlist,
+ RNP_FW_4X10G_10G_1G_DET,
+ &rnp_parse_handle_devarg,
+ adapter);
+
+ if (rte_kvargs_count(kvlist, RNP_FW_FORCE_SPEED_1G) == 1)
+ ret = rte_kvargs_process(kvlist,
+ RNP_FW_FORCE_SPEED_1G,
+ &rnp_parse_handle_devarg,
+ adapter);
+
+ if (rte_kvargs_count(kvlist, RNP_FW_UPDATE) == 1)
+ ret = rte_kvargs_process(kvlist, RNP_FW_UPDATE,
+ &rnp_parse_handle_devarg, adapter);
+ if (rte_kvargs_count(kvlist, RNP_RX_FUNC_SELECT) == 1)
+ ret = rte_kvargs_process(kvlist, RNP_RX_FUNC_SELECT,
+ &rnp_parse_io_select_func, &rx_io_func);
+ if (rte_kvargs_count(kvlist, RNP_TX_FUNC_SELECT) == 1)
+ ret = rte_kvargs_process(kvlist, RNP_TX_FUNC_SELECT,
+ &rnp_parse_io_select_func, &tx_io_func);
+ rte_kvargs_free(kvlist);
+def:
+ adapter->loopback_en = loopback_en;
+ adapter->rx_func_sec = rx_io_func;
+ adapter->tx_func_sec = tx_io_func;
+
+ return ret;
+}
+
+static int rnp_post_handle(struct rnp_eth_adapter *adapter)
+{
+ bool on = false;
+
+ if (!adapter->eth_dev)
+ return -ENOMEM;
+ if (adapter->do_fw_update && adapter->fw_path) {
+ rnp_fw_update(adapter);
+ adapter->do_fw_update = 0;
+ }
+
+ if (adapter->fw_sfp_10g_1g_auto_det)
+ return rnp_hw_set_fw_10g_1g_auto_detch(adapter->eth_dev, 1);
+
+ on = (adapter->fw_force_speed_1g == FOCE_SPEED_1G_ENABLED) ? 1 : 0;
+ if (adapter->fw_force_speed_1g != FOCE_SPEED_1G_NOT_SET)
+ return rnp_hw_set_fw_force_speed_1g(adapter->eth_dev, on);
+
+ return 0;
+}
+
static int
rnp_eth_dev_init(struct rte_eth_dev *dev)
{
@@ -492,6 +650,11 @@ rnp_eth_dev_init(struct rte_eth_dev *dev)
/* We need Use Device Id To Change The Resource Mode */
rnp_special_ops_init(dev);
port->hw = hw;
+ ret = rnp_parse_devargs(adapter, pci_dev->device.devargs);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "parse_devargs failed");
+ return ret;
+ }
for (p_id = 0; p_id < adapter->num_ports; p_id++) {
/* port 0 resource has been alloced When Probe */
if (!p_id) {
@@ -517,6 +680,9 @@ rnp_eth_dev_init(struct rte_eth_dev *dev)
rnp_mac_rx_disable(eth_dev);
rnp_mac_tx_disable(eth_dev);
}
+ ret = rnp_post_handle(adapter);
+ if (ret)
+ goto eth_alloc_error;
return 0;
eth_alloc_error:
diff --git a/drivers/net/rnp/rnp_mbx_fw.c b/drivers/net/rnp/rnp_mbx_fw.c
index 856b3f956b..0c3f499cf2 100644
--- a/drivers/net/rnp/rnp_mbx_fw.c
+++ b/drivers/net/rnp/rnp_mbx_fw.c
@@ -105,6 +105,27 @@ static int rnp_mbx_fw_post_req(struct rte_eth_dev *dev,
return err;
}
+static int
+rnp_mbx_write_posted_locked(struct rte_eth_dev *dev, struct mbx_fw_cmd_req *req)
+{
+ const struct rnp_mbx_api *ops = RNP_DEV_TO_MBX_OPS(dev);
+ struct rnp_hw *hw = RNP_DEV_TO_HW(dev);
+ int err = 0;
+
+ rte_spinlock_lock(&hw->fw_lock);
+
+ err = ops->write_posted(dev, (u32 *)req,
+ (req->datalen + MBX_REQ_HDR_LEN) / 4, MBX_FW);
+ if (err) {
+ RNP_PMD_LOG(ERR, "%s failed!\n", __func__);
+ goto quit;
+ }
+
+quit:
+ rte_spinlock_unlock(&hw->fw_lock);
+ return err;
+}
+
static int rnp_fw_get_capablity(struct rte_eth_dev *dev,
struct phy_abilities *abil)
{
@@ -381,3 +402,146 @@ int rnp_mbx_get_lane_stat(struct rte_eth_dev *dev)
quit:
return err;
}
+
+static int rnp_maintain_req(struct rte_eth_dev *dev,
+ int cmd,
+ int arg0,
+ int req_data_bytes,
+ int reply_bytes,
+ phys_addr_t dma_phy_addr)
+{
+ struct rnp_hw *hw = RNP_DEV_TO_HW(dev);
+ struct mbx_req_cookie *cookie = NULL;
+ struct mbx_fw_cmd_req req;
+ int err;
+
+ if (!hw->mbx.irq_enabled)
+ return -EIO;
+ cookie = rnp_memzone_reserve(hw->cookie_p_name, 0);
+ if (!cookie)
+ return -ENOMEM;
+ memset(&req, 0, sizeof(req));
+ cookie->timeout_ms = 60 * 1000; /* 60s */
+
+ build_maintain_req(&req,
+ cookie,
+ cmd,
+ arg0,
+ req_data_bytes,
+ reply_bytes,
+ dma_phy_addr & 0xffffffff,
+ (dma_phy_addr >> 32) & 0xffffffff);
+
+ err = rnp_mbx_fw_post_req(dev, &req, cookie);
+
+ return (err) ? -EIO : 0;
+}
+
+int rnp_fw_update(struct rnp_eth_adapter *adapter)
+{
+ const struct rte_memzone *rz = NULL;
+ struct maintain_req *mt;
+ FILE *file;
+ int fsz;
+#define MAX_FW_BIN_SZ (552 * 1024)
+#define FW_256KB (256 * 1024)
+
+ RNP_PMD_LOG(INFO, "%s: %s\n", __func__, adapter->fw_path);
+
+ file = fopen(adapter->fw_path, "rb");
+ if (!file) {
+ RNP_PMD_LOG(ERR,
+ "RNP: [%s] %s can't open for read\n",
+ __func__,
+ adapter->fw_path);
+ return -ENOENT;
+ }
+ /* get dma */
+ rz = rte_memzone_reserve("fw_update", MAX_FW_BIN_SZ, SOCKET_ID_ANY, 4);
+ if (rz == NULL) {
+ RNP_PMD_LOG(ERR, "RNP: [%s] no memory:%d\n", __func__,
+ MAX_FW_BIN_SZ);
+ return -EFBIG;
+ }
+ memset(rz->addr, 0xff, rz->len);
+ mt = (struct maintain_req *)rz->addr;
+
+ /* read data */
+ fsz = fread(mt->data, 1, rz->len, file);
+ if (fsz <= 0) {
+ RNP_PMD_LOG(INFO, "RNP: [%s] read failed! err:%d\n",
+ __func__, fsz);
+ return -EIO;
+ }
+ fclose(file);
+
+ if (fsz > ((256 + 4) * 1024)) {
+ printf("fw length:%d is two big. not supported!\n", fsz);
+ return -EINVAL;
+ }
+ RNP_PMD_LOG(NOTICE, "RNP: fw update ...\n");
+ fflush(stdout);
+
+ /* ==== update fw */
+ mt->magic = MAINTAIN_MAGIC;
+ mt->cmd = MT_WRITE_FLASH;
+ mt->arg0 = 1;
+ mt->req_data_bytes = (fsz > FW_256KB) ? FW_256KB : fsz;
+ mt->reply_bytes = 0;
+
+ if (rnp_maintain_req(adapter->eth_dev, mt->cmd, mt->arg0,
+ mt->req_data_bytes, mt->reply_bytes, rz->iova))
+ RNP_PMD_LOG(ERR, "maintain request failed!\n");
+ else
+ RNP_PMD_LOG(INFO, "maintail request done!\n");
+
+ /* ==== update cfg */
+ if (fsz > FW_256KB) {
+ mt->magic = MAINTAIN_MAGIC;
+ mt->cmd = MT_WRITE_FLASH;
+ mt->arg0 = 2;
+ mt->req_data_bytes = 4096;
+ mt->reply_bytes = 0;
+ memcpy(mt->data, mt->data + FW_256KB, mt->req_data_bytes);
+
+ if (rnp_maintain_req(adapter->eth_dev,
+ mt->cmd, mt->arg0, mt->req_data_bytes,
+ mt->reply_bytes, rz->iova))
+ RNP_PMD_LOG(ERR, "maintain request failed!\n");
+ else
+ RNP_PMD_LOG(INFO, "maintail request done!\n");
+ }
+
+ RNP_PMD_LOG(NOTICE, "done\n");
+ fflush(stdout);
+
+ rte_memzone_free(rz);
+
+ exit(0);
+
+ return 0;
+}
+
+static int rnp_mbx_set_dump(struct rte_eth_dev *dev, int flag)
+{
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(dev);
+ struct mbx_fw_cmd_req req;
+ int err;
+
+ memset(&req, 0, sizeof(req));
+ build_set_dump(&req, port->attr.nr_lane, flag);
+
+ err = rnp_mbx_write_posted_locked(dev, &req);
+
+ return err;
+}
+
+int rnp_hw_set_fw_10g_1g_auto_detch(struct rte_eth_dev *dev, int enable)
+{
+ return rnp_mbx_set_dump(dev, 0x01140000 | (enable & 1));
+}
+
+int rnp_hw_set_fw_force_speed_1g(struct rte_eth_dev *dev, int enable)
+{
+ return rnp_mbx_set_dump(dev, 0x01150000 | (enable & 1));
+}
diff --git a/drivers/net/rnp/rnp_mbx_fw.h b/drivers/net/rnp/rnp_mbx_fw.h
index 7bf5c2a865..051ffd1bdc 100644
--- a/drivers/net/rnp/rnp_mbx_fw.h
+++ b/drivers/net/rnp/rnp_mbx_fw.h
@@ -16,6 +16,17 @@ struct mbx_req_cookie {
int priv_len;
char priv[RNP_MAX_SHARE_MEM];
};
+struct maintain_req {
+ int magic;
+#define MAINTAIN_MAGIC 0xa6a7a8a9
+
+ int cmd;
+ int arg0;
+ int req_data_bytes;
+ int reply_bytes;
+ char data[0];
+} __rte_packed;
+
enum GENERIC_CMD {
/* link configuration admin commands */
GET_PHY_ABALITY = 0x0601,
@@ -23,6 +34,9 @@ enum GENERIC_CMD {
RESET_PHY = 0x0603,
GET_LANE_STATUS = 0x0610,
SET_EVENT_MASK = 0x0613,
+ /* fw update */
+ FW_MAINTAIN = 0x0701,
+ SET_DUMP = 0x0a10,
};
enum link_event_mask {
@@ -211,6 +225,21 @@ struct mbx_fw_cmd_req {
struct {
int nr_lane;
} get_lane_st;
+
+ struct {
+ int cmd;
+#define MT_WRITE_FLASH 1
+ int arg0;
+ int req_bytes;
+ int reply_bytes;
+ int ddr_lo;
+ int ddr_hi;
+ } maintain;
+
+ struct {
+ int flag;
+ int nr_lane;
+ } set_dump;
};
} __rte_packed __rte_aligned(4);
@@ -284,6 +313,43 @@ build_get_lane_status_req(struct mbx_fw_cmd_req *req,
req->get_lane_st.nr_lane = nr_lane;
}
+static inline void
+build_maintain_req(struct mbx_fw_cmd_req *req,
+ void *cookie,
+ int cmd,
+ int arg0,
+ int req_bytes,
+ int reply_bytes,
+ u32 dma_phy_lo,
+ u32 dma_phy_hi)
+{
+ req->flags = 0;
+ req->opcode = FW_MAINTAIN;
+ req->datalen = sizeof(req->maintain);
+ req->cookie = cookie;
+ req->reply_lo = 0;
+ req->reply_hi = 0;
+ req->maintain.cmd = cmd;
+ req->maintain.arg0 = arg0;
+ req->maintain.req_bytes = req_bytes;
+ req->maintain.reply_bytes = reply_bytes;
+ req->maintain.ddr_lo = dma_phy_lo;
+ req->maintain.ddr_hi = dma_phy_hi;
+}
+
+static inline void
+build_set_dump(struct mbx_fw_cmd_req *req, int nr_lane, int flag)
+{
+ req->flags = 0;
+ req->opcode = SET_DUMP;
+ req->datalen = sizeof(req->set_dump);
+ req->cookie = NULL;
+ req->reply_lo = 0;
+ req->reply_hi = 0;
+ req->set_dump.flag = flag;
+ req->set_dump.nr_lane = nr_lane;
+}
+
int rnp_mbx_get_capability(struct rte_eth_dev *dev,
int *lane_mask,
int *nic_mode);
@@ -295,4 +361,7 @@ rnp_fw_get_macaddr(struct rte_eth_dev *dev,
u8 *mac_addr,
int nr_lane);
int rnp_mbx_get_lane_stat(struct rte_eth_dev *dev);
+int rnp_fw_update(struct rnp_eth_adapter *adapter);
+int rnp_hw_set_fw_10g_1g_auto_detch(struct rte_eth_dev *dev, int enable);
+int rnp_hw_set_fw_force_speed_1g(struct rte_eth_dev *dev, int enable);
#endif /* __RNP_MBX_FW_H__*/
--
2.27.0
* [PATCH v5 8/8] net/rnp handle device interrupts
2023-08-07 2:16 [PATCH v4 0/8] [v4]drivers/net Add Support mucse N10 Pmd Driver Wenbo Cao
` (6 preceding siblings ...)
2023-08-07 2:16 ` [PATCH v5 7/8] net/rnp add devargs runtime parsing functions Wenbo Cao
@ 2023-08-07 2:16 ` Wenbo Cao
7 siblings, 0 replies; 12+ messages in thread
From: Wenbo Cao @ 2023-08-07 2:16 UTC (permalink / raw)
To: Wenbo Cao; +Cc: dev, ferruh.yigit, andrew.rybchenko, yaojun
Handle the device LSC (link status change) interrupt event.
Signed-off-by: Wenbo Cao <caowenbo@mucse.com>
---
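As a consumer-side sketch only (not part of this patch), the snippet below
shows how an application would receive the RTE_ETH_EVENT_INTR_LSC events that
rnp_link_report() raises through rte_eth_dev_callback_process(); the callback
body and the register_lsc_callback() helper are illustrative assumptions:

/*
 * Sketch: register an LSC callback and read the link state the PMD
 * published with rte_eth_linkstatus_set() before raising the event.
 */
#include <stdio.h>
#include <rte_ethdev.h>

static int
lsc_event_cb(uint16_t port_id, enum rte_eth_event_type event,
             void *cb_arg, void *ret_param)
{
        struct rte_eth_link link;

        (void)event;
        (void)cb_arg;
        (void)ret_param;
        if (rte_eth_link_get_nowait(port_id, &link) == 0)
                printf("port %u link %s, speed %u Mbps\n",
                       (unsigned int)port_id,
                       link.link_status == RTE_ETH_LINK_UP ? "up" : "down",
                       (unsigned int)link.link_speed);
        return 0;
}

static void
register_lsc_callback(uint16_t port_id)
{
        rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_INTR_LSC,
                                      lsc_event_cb, NULL);
}

For the interrupt path to be exercised the application also needs to set
intr_conf.lsc = 1 in its struct rte_eth_conf before rte_eth_dev_configure().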
drivers/net/rnp/base/rnp_hw.h | 5 +
drivers/net/rnp/base/rnp_mac_regs.h | 279 ++++++++++++++++++++++++++++
drivers/net/rnp/rnp.h | 8 +
drivers/net/rnp/rnp_ethdev.c | 17 ++
drivers/net/rnp/rnp_mbx.h | 3 +-
drivers/net/rnp/rnp_mbx_fw.c | 233 +++++++++++++++++++++++
drivers/net/rnp/rnp_mbx_fw.h | 38 +++-
7 files changed, 580 insertions(+), 3 deletions(-)
create mode 100644 drivers/net/rnp/base/rnp_mac_regs.h
diff --git a/drivers/net/rnp/base/rnp_hw.h b/drivers/net/rnp/base/rnp_hw.h
index 395b9d5c71..5c50484c6c 100644
--- a/drivers/net/rnp/base/rnp_hw.h
+++ b/drivers/net/rnp/base/rnp_hw.h
@@ -10,6 +10,7 @@
#include "rnp_osdep.h"
#include "rnp_dma_regs.h"
#include "rnp_eth_regs.h"
+#include "rnp_mac_regs.h"
#include "rnp_cfg.h"
static inline unsigned int rnp_rd_reg(volatile void *addr)
@@ -48,6 +49,10 @@ static inline void rnp_wr_reg(volatile void *reg, int val)
rnp_eth_wr(hw, RNP_RAL_BASE_ADDR(hw_idx), val)
#define RNP_MACADDR_UPDATE_HI(hw, hw_idx, val) \
rnp_eth_wr(hw, RNP_RAH_BASE_ADDR(hw_idx), val)
+#define rnp_mac_rd(hw, id, off) \
+ rnp_rd_reg((char *)(hw)->mac_base[id] + (off))
+#define rnp_mac_wr(hw, id, off, val) \
+ rnp_wr_reg((char *)(hw)->mac_base[id] + (off), val)
struct rnp_hw;
/* Mbx Operate info */
enum MBX_ID {
diff --git a/drivers/net/rnp/base/rnp_mac_regs.h b/drivers/net/rnp/base/rnp_mac_regs.h
new file mode 100644
index 0000000000..f9466b3841
--- /dev/null
+++ b/drivers/net/rnp/base/rnp_mac_regs.h
@@ -0,0 +1,279 @@
+#ifndef __RNP_MAC_REGS_H__
+#define __RNP_MAC_REGS_H__
+
+#include "rnp_osdep.h"
+#define RNP_MAC_TX_CFG (0x0)
+
+/* Transmitter Enable */
+#define RNP_MAC_TE BIT(0)
+/* Jabber Disable */
+#define RNP_MAC_JD BIT(16)
+#define RNP_SPEED_SEL_1G (BIT(30) | BIT(29) | BIT(28))
+#define RNP_SPEED_SEL_10G BIT(30)
+#define RNP_SPEED_SEL_40G (0)
+#define RNP_MAC_RX_CFG (0x4)
+/* Receiver Enable */
+#define RNP_MAC_RE BIT(0)
+/* Automatic Pad or CRC Stripping */
+#define RNP_MAC_ACS BIT(1)
+/* CRC stripping for Type packets */
+#define RNP_MAC_CST BIT(2)
+/* Disable CRC Check */
+#define RNP_MAC_DCRCC BIT(3)
+/* Enable Max Frame Size Limit */
+#define RNP_MAC_GPSLCE BIT(6)
+/* Watchdog Disable */
+#define RNP_MAC_WD BIT(7)
+/* Jumbo Packet Support En */
+#define RNP_MAC_JE BIT(8)
+/* Loopback Mode */
+#define RNP_MAC_LM BIT(10)
+/* Giant Packet Size Limit */
+#define RNP_MAC_GPSL_MASK GENMASK(29, 16)
+#define RNP_MAC_MAX_GPSL (1518)
+#define RNP_MAC_CPSL_SHIFT (16)
+
+#define RNP_MAC_PKT_FLT_CTRL (0x8)
+
+/* Receive All */
+#define RNP_MAC_RA BIT(31)
+/* Pass Control Packets */
+#define RNP_MAC_PCF GENMASK(7, 6)
+#define RNP_MAC_PCF_OFFSET (6)
+/* Mac Filter ALL Ctrl Frame */
+#define RNP_MAC_PCF_FAC (0)
+/* Mac Forward ALL Ctrl Frame Except Pause */
+#define RNP_MAC_PCF_NO_PAUSE (1)
+/* Mac Forward All Ctrl Pkt */
+#define RNP_MAC_PCF_PA (2)
+/* Mac Forward Ctrl Frame Match Unicast */
+#define RNP_MAC_PCF_PUN (3)
+/* Promiscuous Mode */
+#define RNP_MAC_PROMISC_EN BIT(0)
+/* Hash Unicast */
+#define RNP_MAC_HUC BIT(1)
+/* Hash Multicast */
+#define RNP_MAC_HMC BIT(2)
+/* Pass All Multicast */
+#define RNP_MAC_PM BIT(4)
+/* Disable Broadcast Packets */
+#define RNP_MAC_DBF BIT(5)
+/* Hash or Perfect Filter */
+#define RNP_MAC_HPF BIT(10)
+#define RNP_MAC_VTFE BIT(16)
+/* Interrupt Status */
+#define RNP_MAC_INT_STATUS _MAC_(0xb0)
+#define RNP_MAC_LS_MASK GENMASK(25, 24)
+#define RNP_MAC_LS_UP (0)
+#define RNP_MAC_LS_LOCAL_FAULT BIT(25)
+#define RNP_MAC_LS_REMOTE_FAULT (BIT(25) | BIT(24))
+/* Unicast Mac Hash Table */
+#define RNP_MAC_UC_HASH_TB(n) _MAC_(0x10 + ((n) * 0x4))
+
+
+#define RNP_MAC_LPI_CTRL (0xd0)
+
+/* PHY Link Status Disable */
+#define RNP_MAC_PLSDIS BIT(18)
+/* PHY Link Status */
+#define RNP_MAC_PLS BIT(17)
+
+/* MAC VLAN CTRL Strip REG */
+#define RNP_MAC_VLAN_TAG (0x50)
+
+/* En Inner VLAN Strip Action */
+#define RNP_MAC_EIVLS GENMASK(29, 28)
+/* Inner VLAN Strip Action Shift */
+#define RNP_MAC_IV_EIVLS_SHIFT (28)
+/* Inner Vlan Don't Strip*/
+#define RNP_MAC_IV_STRIP_NONE (0x0)
+/* Inner Vlan Strip When Filter Match Success */
+#define RNP_MAC_IV_STRIP_PASS (0x1)
+/* Inner Vlan STRIP When Filter Match FAIL */
+#define RNP_MAC_IV_STRIP_FAIL (0x2)
+/* Inner Vlan STRIP Always */
+#define RNP_MAC_IV_STRIP_ALL (0X3)
+/* VLAN Strip Mode Ctrl Shift */
+#define RNP_VLAN_TAG_CTRL_EVLS_SHIFT (21)
+/* En Double Vlan Processing */
+#define RNP_MAC_VLAN_EDVLP BIT(26)
+/* VLAN Tag Hash Table Match Enable */
+#define RNP_MAC_VLAN_VTHM BIT(25)
+/* Enable VLAN Tag in Rx status */
+#define RNP_MAC_VLAN_EVLRXS BIT(24)
+/* Disable VLAN Type Check */
+#define RNP_MAC_VLAN_DOVLTC BIT(20)
+/* Enable S-VLAN */
+#define RNP_MAC_VLAN_ESVL BIT(18)
+/* Enable 12-Bit VLAN Tag Comparison Filter */
+#define RNP_MAC_VLAN_ETV BIT(16)
+#define RNP_MAC_VLAN_HASH_EN GENMASK(15, 0)
+#define RNP_MAC_VLAN_VID GENMASK(15, 0)
+/* VLAN Don't Strip */
+#define RNP_MAC_VLAN_STRIP_NONE (0x0 << RNP_VLAN_TAG_CTRL_EVLS_SHIFT)
+/* VLAN Filter Success Then STRIP */
+#define RNP_MAC_VLAN_STRIP_PASS (0x1 << RNP_VLAN_TAG_CTRL_EVLS_SHIFT)
+/* VLAN Filter Failed Then STRIP */
+#define RNP_MAC_VLAN_STRIP_FAIL (0x2 << RNP_VLAN_TAG_CTRL_EVLS_SHIFT)
+/* All Vlan Will Strip */
+#define RNP_MAC_VLAN_STRIP_ALL (0x3 << RNP_VLAN_TAG_CTRL_EVLS_SHIFT)
+
+#define RNP_MAC_VLAN_HASH_TB (0x58)
+#define RNP_MAC_VLAN_HASH_MASK GENMASK(15, 0)
+
+/* MAC VLAN CTRL INSERT REG */
+#define RNP_MAC_VLAN_INCL (0x60)
+#define RNP_MAC_INVLAN_INCL (0x64)
+
+/* VLAN Tag Input */
+/* VLAN_Tag Insert From Description */
+#define RNP_MAC_VLAN_VLTI BIT(20)
+/* C-VLAN or S-VLAN */
+#define RNP_MAC_VLAN_CSVL BIT(19)
+#define RNP_MAC_VLAN_INSERT_CVLAN (0 << 19)
+#define RNP_MAC_VLAN_INSERT_SVLAN (1 << 19)
+/* VLAN Tag Control in Transmit Packets */
+#define RNP_MAC_VLAN_VLC GENMASK(17, 16)
+/* VLAN Tag Control Offset Bit */
+#define RNP_MAC_VLAN_VLC_SHIFT (16)
+/* Don't Do Anything On TX VLAN */
+#define RNP_MAC_VLAN_VLC_NONE (0x0 << RNP_MAC_VLAN_VLC_SHIFT)
+/* MAC Delete VLAN */
+#define RNP_MAC_VLAN_VLC_DEL (0x1 << RNP_MAC_VLAN_VLC_SHIFT)
+/* MAC Add VLAN */
+#define RNP_MAC_VLAN_VLC_ADD (0x2 << RNP_MAC_VLAN_VLC_SHIFT)
+/* MAC Replace VLAN */
+#define RNP_MAC_VLAN_VLC_REPLACE (0x3 << RNP_MAC_VLAN_VLC_SHIFT)
+/* VLAN Tag for Transmit Packets For Insert/Remove */
+#define RNP_MAC_VLAN_VLT GENMASK(15, 0)
+/* TX Peer TC Flow Ctrl */
+
+#define RNP_MAC_Q0_TX_FC(n) (0x70 + ((n) * 0x4))
+
+/* Edit Pause Time */
+#define RNP_MAC_FC_PT GENMASK(31, 16)
+#define RNP_MAC_FC_PT_OFFSET (16)
+/* Disable Zero-Quanta Pause */
+#define RNP_MAC_FC_DZPQ BIT(7)
+/* Pause Low Threshold */
+#define RNP_MAC_FC_PLT GENMASK(6, 4)
+#define RNP_MAC_FC_PLT_OFFSET (4)
+#define RNP_MAC_FC_PLT_4_SLOT (0)
+#define RNP_MAC_FC_PLT_28_SLOT (1)
+#define RNP_MAC_FC_PLT_36_SLOT (2)
+#define RNP_MAC_FC_PLT_144_SLOT (3)
+#define RNP_MAC_FC_PLT_256_SLOT (4)
+/* Transmit Flow Control Enable */
+#define RNP_MAC_FC_TEE BIT(1)
+/* Transmit Flow Control Busy Immediately */
+#define RNP_MAC_FC_FCB BIT(0)
+/* Mac RX Flow Ctrl*/
+
+#define RNP_MAC_RX_FC (0x90)
+
+/* Rx Priority Based Flow Control Enable */
+#define RNP_MAC_RX_FC_PFCE BIT(8)
+/* Unicast Pause Packet Detect */
+#define RNP_MAC_RX_FC_UP BIT(1)
+/* Receive Flow Control Enable */
+#define RNP_MAC_RX_FC_RFE BIT(0)
+
+/* Rx Mac Address Base */
+#define RNP_MAC_ADDR_DEF_HI _MAC_(0x0300)
+
+#define RNP_MAC_AE BIT(31)
+#define RNP_MAC_ADDR_LO(n) _MAC_((0x0304) + ((n) * 0x8))
+#define RNP_MAC_ADDR_HI(n) _MAC_((0x0300) + ((n) * 0x8))
+
+/* Mac Manage Counts */
+#define RNP_MMC_CTRL _MAC_(0x0800)
+#define RNP_MMC_RSTONRD BIT(2)
+/* Tx Good And Bad Bytes Base */
+#define RNP_MMC_TX_GBOCTGB _MAC_(0x0814)
+/* Tx Good And Bad Frame Num Base */
+#define RNP_MMC_TX_GBFRMB _MAC_(0x081c)
+/* Tx Good Broadcast Frame Num Base */
+#define RNP_MMC_TX_BCASTB _MAC_(0x0824)
+/* Tx Good Multicast Frame Num Base */
+#define RNP_MMC_TX_MCASTB _MAC_(0x082c)
+/* Tx 64Bytes Frame Num */
+#define RNP_MMC_TX_64_BYTESB _MAC_(0x0834)
+#define RNP_MMC_TX_65TO127_BYTESB _MAC_(0x083c)
+#define RNP_MMC_TX_128TO255_BYTEB _MAC_(0x0844)
+#define RNP_MMC_TX_256TO511_BYTEB _MAC_(0x084c)
+#define RNP_MMC_TX_512TO1023_BYTEB _MAC_(0x0854)
+#define RNP_MMC_TX_1024TOMAX_BYTEB _MAC_(0x085c)
+/* Tx Good And Bad Unicast Frame Num Base */
+#define RNP_MMC_TX_GBUCASTB _MAC_(0x0864)
+/* Tx Good And Bad Multicast Frame Num Base */
+#define RNP_MMC_TX_GBMCASTB _MAC_(0x086c)
+/* Tx Good And Bad Broadcast Frame NUM Base */
+#define RNP_MMC_TX_GBBCASTB _MAC_(0x0874)
+/* Tx Frame Underflow Error */
+#define RNP_MMC_TX_UNDRFLWB _MAC_(0x087c)
+/* Tx Good Frame Bytes Base */
+#define RNP_MMC_TX_GBYTESB _MAC_(0x0884)
+/* Tx Good Frame Num Base*/
+#define RNP_MMC_TX_GBRMB _MAC_(0x088c)
+/* Tx Good Pause Frame Num Base */
+#define RNP_MMC_TX_PAUSEB _MAC_(0x0894)
+/* Tx Good Vlan Frame Num Base */
+#define RNP_MMC_TX_VLANB _MAC_(0x089c)
+
+/* Rx Good And Bad Frames Num Base */
+#define RNP_MMC_RX_GBFRMB _MAC_(0x0900)
+/* Rx Good And Bad Frames Bytes Base */
+#define RNP_MMC_RX_GBOCTGB _MAC_(0x0908)
+/* Rx Good Frame Bytes Base */
+#define RNP_MMC_RX_GOCTGB _MAC_(0x0910)
+/* Rx Good Broadcast Frames Num Base */
+#define RNP_MMC_RX_BCASTGB _MAC_(0x0918)
+/* Rx Good Multicast Frames Num Base */
+#define RNP_MMC_RX_MCASTGB _MAC_(0x0920)
+/* Rx Crc Error Frames Num Base */
+#define RNP_MMC_RX_CRCERB _MAC_(0x0928)
+/* Rx Less Than 64Bytes with Crc Err Base */
+#define RNP_MMC_RX_RUNTERB _MAC_(0x0930)
+/* Receive Jumbo Frame Error */
+#define RNP_MMC_RX_JABBER_ERR _MAC_(0x0934)
+/* Shorter Than 64Bytes Without Any Errors Base */
+#define RNP_MMC_RX_USIZEGB _MAC_(0x0938)
+/* Len Oversize Than Support */
+#define RNP_MMC_RX_OSIZEGB _MAC_(0x093c)
+/* Rx 64Byes Frame Num Base */
+#define RNP_MMC_RX_64_BYTESB _MAC_(0x0940)
+/* Rx 65Bytes To 127Bytes Frame Num Base */
+#define RNP_MMC_RX_65TO127_BYTESB _MAC_(0x0948)
+/* Rx 128Bytes To 255Bytes Frame Num Base */
+#define RNP_MMC_RX_128TO255_BYTESB _MAC_(0x0950)
+/* Rx 256Bytes To 511Bytes Frame Num Base */
+#define RNP_MMC_RX_256TO511_BYTESB _MAC_(0x0958)
+/* Rx 512Bytes To 1023Bytes Frame Num Base */
+#define RNP_MMC_RX_512TO1203_BYTESB _MAC_(0x0960)
+/* Rx Len Bigger Than 1024Bytes Base */
+#define RNP_MMC_RX_1024TOMAX_BYTESB _MAC_(0x0968)
+/* Rx Unicast Frame Good Num Base */
+#define RNP_MMC_RX_UCASTGB _MAC_(0x0970)
+/* Rx Length Error Of Frame Part */
+#define RNP_MMC_RX_LENERRB _MAC_(0x0978)
+/* Rx received with a Length field not equal to the valid frame size */
+#define RNP_MMC_RX_OUTOF_RANGE _MAC_(0x0980)
+/* Rx Pause Frame Good Num Base */
+#define RNP_MMC_RX_PAUSEB _MAC_(0x0988)
+/* Rx Vlan Frame Good Num Base */
+#define RNP_MMC_RX_VLANGB _MAC_(0x0998)
+/* Rx With A Watchdog Timeout Err Frame Base */
+#define RNP_MMC_RX_WDOGERRB _MAC_(0x09a0)
+
+/* 1588 */
+#define RNP_MAC_TS_CTRL _MAC_(0X0d00)
+#define RNP_MAC_SUB_SECOND_INCREMENT _MAC_(0x0d04)
+#define RNP_MAC_SYS_TIME_SEC_CFG _MAC_(0x0d08)
+#define RNP_MAC_SYS_TIME_NANOSEC_CFG _MAC_(0x0d0c)
+#define RNP_MAC_SYS_TIME_SEC_UPDATE _MAC_(0x0d10)
+#define RNP_MAC_SYS_TIME_NANOSEC_UPDATE _MAC_(0x0d14)
+#define RNP_MAC_TS_ADDEND _MAC_(0x0d18)
+#define RNP_MAC_TS_STATS _MAC_(0x0d20)
+#define RNP_MAC_INTERRUPT_ENABLE _MAC_(0x00b4)
+
+#endif /* __RNP_MAC_REGS_H__ */
diff --git a/drivers/net/rnp/rnp.h b/drivers/net/rnp/rnp.h
index 933cdc6007..61adb20909 100644
--- a/drivers/net/rnp/rnp.h
+++ b/drivers/net/rnp/rnp.h
@@ -111,6 +111,8 @@ struct rnp_eth_port {
uint8_t tx_func_sec; /* force set io tx func */
struct rte_eth_dev *eth_dev;
struct rnp_port_attr attr;
+ uint64_t state;
+ rte_spinlock_t rx_mac_lock; /* Lock For Mac_cfg resource write */
/* Recvice Mac Address Record Table */
uint8_t mac_use_tb[RNP_MAX_MAC_ADDRS];
uint8_t use_num_mac;
@@ -131,6 +133,12 @@ enum {
RNP_IO_FUNC_USE_COMMON,
};
+enum rnp_port_state {
+ RNP_PORT_STATE_PAUSE = 0,
+ RNP_PORT_STATE_FINISH,
+ RNP_PORT_STATE_SETTING,
+};
+
struct rnp_eth_adapter {
enum rnp_work_mode mode;
enum rnp_resource_share_m s_mode; /* Port Resource Share Policy */
diff --git a/drivers/net/rnp/rnp_ethdev.c b/drivers/net/rnp/rnp_ethdev.c
index 5313dae5a2..ddbe84180d 100644
--- a/drivers/net/rnp/rnp_ethdev.c
+++ b/drivers/net/rnp/rnp_ethdev.c
@@ -601,10 +601,23 @@ static int rnp_post_handle(struct rnp_eth_adapter *adapter)
return 0;
}
+static void rnp_dev_interrupt_handler(void *param)
+{
+ struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
+ struct rnp_eth_adapter *adapter = RNP_DEV_TO_ADAPTER(dev);
+
+ rte_intr_disable(intr_handle);
+ rnp_fw_msg_handler(adapter);
+ rte_intr_enable(intr_handle);
+}
+
static int
rnp_eth_dev_init(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct rnp_eth_port *port = RNP_DEV_TO_PORT(dev);
struct rnp_eth_adapter *adapter = NULL;
char name[RTE_ETH_NAME_MAX_LEN] = " ";
@@ -680,6 +693,10 @@ rnp_eth_dev_init(struct rte_eth_dev *dev)
rnp_mac_rx_disable(eth_dev);
rnp_mac_tx_disable(eth_dev);
}
+ rte_intr_disable(intr_handle);
+ /* Enable Link Update Event Interrupt */
+ rte_intr_callback_register(intr_handle,
+ rnp_dev_interrupt_handler, dev);
ret = rnp_post_handle(adapter);
if (ret)
goto eth_alloc_error;
diff --git a/drivers/net/rnp/rnp_mbx.h b/drivers/net/rnp/rnp_mbx.h
index 87949c1726..d6b78e32a7 100644
--- a/drivers/net/rnp/rnp_mbx.h
+++ b/drivers/net/rnp/rnp_mbx.h
@@ -13,7 +13,8 @@
/* Mbx Ctrl state */
#define RNP_VFMAILBOX_SIZE (14) /* 16 32 bit words - 64 bytes */
-#define TSRN10_VFMBX_SIZE (RNP_VFMAILBOX_SIZE)
+#define RNP_FW_MAILBOX_SIZE RNP_VFMAILBOX_SIZE
+#define RNP_VFMBX_SIZE (RNP_VFMAILBOX_SIZE)
#define RNP_VT_MSGTYPE_ACK (0x80000000)
#define RNP_VT_MSGTYPE_NACK (0x40000000)
diff --git a/drivers/net/rnp/rnp_mbx_fw.c b/drivers/net/rnp/rnp_mbx_fw.c
index 0c3f499cf2..a0a163e98c 100644
--- a/drivers/net/rnp/rnp_mbx_fw.c
+++ b/drivers/net/rnp/rnp_mbx_fw.c
@@ -545,3 +545,236 @@ int rnp_hw_set_fw_force_speed_1g(struct rte_eth_dev *dev, int enable)
{
return rnp_mbx_set_dump(dev, 0x01150000 | (enable & 1));
}
+
+static inline int
+rnp_mbx_fw_reply_handler(struct rnp_eth_adapter *adapter __rte_unused,
+ struct mbx_fw_cmd_reply *reply)
+{
+ struct mbx_req_cookie *cookie;
+ /* dbg_here; */
+ cookie = reply->cookie;
+ if (!cookie || cookie->magic != COOKIE_MAGIC) {
+ RNP_PMD_LOG(ERR,
+ "[%s] invalid cookie:%p opcode: "
+ "0x%x v0:0x%x\n",
+ __func__,
+ cookie,
+ reply->opcode,
+ *((int *)reply));
+ return -EIO;
+ }
+
+ if (cookie->priv_len > 0)
+ memcpy(cookie->priv, reply->data, cookie->priv_len);
+
+ cookie->done = 1;
+
+ if (reply->flags & FLAGS_ERR)
+ cookie->errcode = reply->error_code;
+ else
+ cookie->errcode = 0;
+
+ return 0;
+}
+
+void rnp_link_stat_mark(struct rnp_hw *hw, int nr_lane, int up)
+{
+ u32 v;
+
+ rte_spinlock_lock(&hw->fw_lock);
+ v = rnp_rd_reg(hw->link_sync);
+ v &= ~(0xffff0000);
+ v |= 0xa5a40000;
+ if (up)
+ v |= BIT(nr_lane);
+ else
+ v &= ~BIT(nr_lane);
+ rnp_wr_reg(hw->link_sync, v);
+
+ rte_spinlock_unlock(&hw->fw_lock);
+}
+
+void rnp_link_report(struct rte_eth_dev *dev, bool link_en)
+{
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(dev);
+ struct rnp_hw *hw = RNP_DEV_TO_HW(dev);
+ struct rte_eth_link link;
+
+ link.link_duplex = link_en ? port->attr.phy_meta.link_duplex :
+ RTE_ETH_LINK_FULL_DUPLEX;
+ link.link_status = link_en ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
+ link.link_speed = link_en ? port->attr.speed :
+ RTE_ETH_SPEED_NUM_UNKNOWN;
+ RNP_PMD_LOG(INFO,
+ "\nPF[%d]link changed: changed_lane:0x%x, "
+ "status:0x%x\n",
+ hw->pf_vf_num & RNP_PF_NB_MASK ? 1 : 0,
+ port->attr.nr_port,
+ link_en);
+ link.link_autoneg = port->attr.phy_meta.link_autoneg
+ ? RTE_ETH_LINK_SPEED_AUTONEG
+ : RTE_ETH_LINK_SPEED_FIXED;
+ /* Report Link Info To Upper Framework */
+ rte_eth_linkstatus_set(dev, &link);
+ /* Notice Event Process Link Status Change */
+ rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_LSC, NULL);
+ /* Notice Firmware LSC Event SW Received */
+ rnp_link_stat_mark(hw, port->attr.nr_port, link_en);
+}
+
+static void rnp_dev_alarm_link_handler(void *param)
+{
+ struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(dev);
+ uint32_t status;
+
+ status = port->attr.link_ready;
+ rnp_link_report(dev, status);
+}
+
+static void rnp_link_event(struct rnp_eth_adapter *adapter,
+ struct mbx_fw_cmd_req *req)
+{
+ struct rnp_hw *hw = &adapter->hw;
+ struct rnp_eth_port *port;
+ bool link_change = false;
+ uint32_t lane_bit;
+ uint32_t sync_bit;
+ uint32_t link_en;
+ uint32_t ctrl;
+ int i;
+
+ for (i = 0; i < adapter->num_ports; i++) {
+ port = adapter->ports[i];
+ if (port == NULL)
+ continue;
+ link_change = false;
+ lane_bit = port->attr.nr_port;
+ if (__atomic_load_n(&port->state, __ATOMIC_RELAXED)
+ != RNP_PORT_STATE_FINISH)
+ continue;
+ if (!(BIT(lane_bit) & req->link_stat.changed_lanes))
+ continue;
+ link_en = BIT(lane_bit) & req->link_stat.lane_status;
+ sync_bit = BIT(lane_bit) & rnp_rd_reg(hw->link_sync);
+
+ if (link_en) {
+ /* Port Link Change To Up */
+ if (!port->attr.link_ready) {
+ link_change = true;
+ port->attr.link_ready = true;
+ }
+ if (req->link_stat.port_st_magic == SPEED_VALID_MAGIC) {
+ port->attr.speed = req->link_stat.st[lane_bit].speed;
+ port->attr.phy_meta.link_duplex =
+ req->link_stat.st[lane_bit].duplex;
+ port->attr.phy_meta.link_autoneg =
+ req->link_stat.st[lane_bit].autoneg;
+ RNP_PMD_INIT_LOG(INFO,
+ "phy_id %d speed %d duplex "
+ "%d issgmii %d PortID %d\n",
+ req->link_stat.st[lane_bit].phy_addr,
+ req->link_stat.st[lane_bit].speed,
+ req->link_stat.st[lane_bit].duplex,
+ req->link_stat.st[lane_bit].is_sgmii,
+ port->attr.rte_pid);
+ }
+ } else {
+ /* Port Link to Down */
+ if (port->attr.link_ready) {
+ link_change = true;
+ port->attr.link_ready = false;
+ }
+ }
+ if (link_change || sync_bit != link_en) {
+ /* Workaround For A Hardware Issue: When The Link Is Down,
+ * The Eth Module Tx-side Can't Drop Packets In Some Conditions,
+ * So Loop The Packets Back To The Rx Side To Drop Them There
+ */
+ /* To Protect Conflict Hw Resource */
+ rte_spinlock_lock(&port->rx_mac_lock);
+ ctrl = rnp_mac_rd(hw, lane_bit, RNP_MAC_RX_CFG);
+ if (port->attr.link_ready) {
+ ctrl &= ~RNP_MAC_LM;
+ rnp_eth_wr(hw,
+ RNP_RX_FIFO_FULL_THRETH(lane_bit),
+ RNP_RX_DEFAULT_VAL);
+ } else {
+ rnp_eth_wr(hw,
+ RNP_RX_FIFO_FULL_THRETH(lane_bit),
+ RNP_RX_WORKAROUND_VAL);
+ ctrl |= RNP_MAC_LM;
+ }
+ rnp_mac_wr(hw, lane_bit, RNP_MAC_RX_CFG, ctrl);
+ rte_spinlock_unlock(&port->rx_mac_lock);
+ rte_eal_alarm_set(RNP_ALARM_INTERVAL,
+ rnp_dev_alarm_link_handler,
+ (void *)port->eth_dev);
+ }
+ }
+}
+
+static inline int
+rnp_mbx_fw_req_handler(struct rnp_eth_adapter *adapter,
+ struct mbx_fw_cmd_req *req)
+{
+ switch (req->opcode) {
+ case LINK_STATUS_EVENT:
+ rnp_link_event(adapter, req);
+ break;
+ default:
+ break;
+ }
+
+ return 0;
+}
+
+static inline int rnp_rcv_msg_from_fw(struct rnp_eth_adapter *adapter)
+{
+ const struct rnp_mbx_api *ops = RNP_DEV_TO_MBX_OPS(adapter->eth_dev);
+ struct rnp_hw *hw = &adapter->hw;
+ u32 msgbuf[RNP_FW_MAILBOX_SIZE];
+ uint16_t check_state;
+ int retval;
+
+ retval = ops->read(hw, msgbuf, RNP_FW_MAILBOX_SIZE, MBX_FW);
+ if (retval) {
+ PMD_DRV_LOG(ERR, "Error receiving message from FW\n");
+ return retval;
+ }
+#define RNP_MBX_SYNC_MASK GENMASK(15, 0)
+
+ check_state = msgbuf[0] & RNP_MBX_SYNC_MASK;
+ /* this is a message we already processed, do nothing */
+ if (check_state & FLAGS_DD)
+ return rnp_mbx_fw_reply_handler(adapter,
+ (struct mbx_fw_cmd_reply *)msgbuf);
+ else
+ return rnp_mbx_fw_req_handler(adapter,
+ (struct mbx_fw_cmd_req *)msgbuf);
+
+ return 0;
+}
+
+static void rnp_rcv_ack_from_fw(struct rnp_eth_adapter *adapter)
+{
+ struct rnp_hw *hw __rte_unused = &adapter->hw;
+ u32 msg __rte_unused = RNP_VT_MSGTYPE_NACK;
+ /* do-nothing */
+}
+
+int rnp_fw_msg_handler(struct rnp_eth_adapter *adapter)
+{
+ const struct rnp_mbx_api *ops = RNP_DEV_TO_MBX_OPS(adapter->eth_dev);
+ struct rnp_hw *hw = &adapter->hw;
+
+ /* == check cpureq */
+ if (!ops->check_for_msg(hw, MBX_FW))
+ rnp_rcv_msg_from_fw(adapter);
+
+ /* process any acks */
+ if (!ops->check_for_ack(hw, MBX_FW))
+ rnp_rcv_ack_from_fw(adapter);
+
+ return 0;
+}
diff --git a/drivers/net/rnp/rnp_mbx_fw.h b/drivers/net/rnp/rnp_mbx_fw.h
index 051ffd1bdc..292ad6dfbe 100644
--- a/drivers/net/rnp/rnp_mbx_fw.h
+++ b/drivers/net/rnp/rnp_mbx_fw.h
@@ -32,6 +32,8 @@ enum GENERIC_CMD {
GET_PHY_ABALITY = 0x0601,
GET_MAC_ADDRES = 0x0602,
RESET_PHY = 0x0603,
+ GET_LINK_STATUS = 0x0607,
+ LINK_STATUS_EVENT = 0x0608,
GET_LANE_STATUS = 0x0610,
SET_EVENT_MASK = 0x0613,
/* fw update */
@@ -98,6 +100,21 @@ struct phy_abilities {
};
} __rte_packed __rte_aligned(4);
+struct port_stat {
+ u8 phy_addr; /* Phy MDIO address */
+
+ u8 duplex : 1; /* FIBRE is always 1,Twisted Pair 1 or 0 */
+ u8 autoneg : 1; /* autoneg state */
+ u8 fec : 1;
+ u8 an_rev : 1;
+ u8 link_traing : 1;
+ u8 is_sgmii : 1; /* valid for fw >= 0.5.0.17 */
+ u16 speed; /* cur port linked speed */
+
+ u16 pause : 4;
+ u16 rev : 12;
+} __rte_packed;
+
#define RNP_SPEED_CAP_UNKNOWN (0)
#define RNP_SPEED_CAP_10M_FULL BIT(2)
#define RNP_SPEED_CAP_100M_FULL BIT(3)
@@ -186,8 +203,14 @@ struct mbx_fw_cmd_reply {
struct phy_abilities phy_abilities;
};
} __rte_packed __rte_aligned(4);
-
-#define MBX_REQ_HDR_LEN 24
+/* == flags == */
+#define FLAGS_DD BIT(0) /* driver clear 0, FW must set 1 */
+#define FLAGS_CMP BIT(1) /* driver clear 0, FW must set */
+/* driver clear 0, FW must set only if it reporting an error */
+#define FLAGS_ERR BIT(2)
+
+#define MBX_REQ_HDR_LEN (24)
+#define RNP_ALARM_INTERVAL (50000) /* unit us */
/* driver -> firmware */
struct mbx_fw_cmd_req {
unsigned short flags; /* 0-1 */
@@ -240,6 +263,14 @@ struct mbx_fw_cmd_req {
int flag;
int nr_lane;
} set_dump;
+
+ struct {
+ unsigned short changed_lanes;
+ unsigned short lane_status;
+ unsigned int port_st_magic;
+#define SPEED_VALID_MAGIC 0xa4a6a8a9
+ struct port_stat st[4];
+ } link_stat; /* FW->RC */
};
} __rte_packed __rte_aligned(4);
@@ -364,4 +395,7 @@ int rnp_mbx_get_lane_stat(struct rte_eth_dev *dev);
int rnp_fw_update(struct rnp_eth_adapter *adapter);
int rnp_hw_set_fw_10g_1g_auto_detch(struct rte_eth_dev *dev, int enable);
int rnp_hw_set_fw_force_speed_1g(struct rte_eth_dev *dev, int enable);
+void rnp_link_stat_mark(struct rnp_hw *hw, int nr_lane, int up);
+void rnp_link_report(struct rte_eth_dev *dev, bool link_en);
+int rnp_fw_msg_handler(struct rnp_eth_adapter *adapter);
#endif /* __RNP_MBX_FW_H__*/
--
2.27.0
* Re: [PATCH v5 1/8] net/rnp: add skeleton
2023-08-07 2:16 ` [PATCH v5 1/8] net/rnp: add skeleton Wenbo Cao
@ 2023-08-15 11:10 ` Thomas Monjalon
2023-08-21 9:32 ` 11
0 siblings, 1 reply; 12+ messages in thread
From: Thomas Monjalon @ 2023-08-15 11:10 UTC (permalink / raw)
To: Wenbo Cao; +Cc: dev, ferruh.yigit, andrew.rybchenko, yaojun
Hi,
Wenbo Cao:
> --- /dev/null
> +++ b/doc/guides/nics/rnp.rst
> @@ -0,0 +1,43 @@
> +.. SPDX-License-Identifier: BSD-3-Clause
> + Copyright(c) 2023 Mucse IC Design Ltd.
> +
> +RNP Poll Mode driver
> +==========================
Please keep underlining the same size as the text above.
> +
> +The RNP ETHDEV PMD (**librte_net_rnp**) provides poll mode ethdev
> +driver support for the inbuilt network device found in the **Mucse RNP**
> +
> +Prerequisites
> +-------------
> +More information can be found at `Mucse, Official Website
> +<https://mucse.com/productDetail>`_.
> +
> +Supported RNP SoCs
> +------------------------
> +
> +- N10
> +
> +Driver compilation and testing
> +------------------------------
> +
> +Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
> +for details.
It was a mistake to originally introduce the anchor "pmd_build_and_test".
You should achieve the same result with the shorter syntax :doc:`build_and_test`
> +
> +#. Running testpmd:
> +
> + Follow instructions available in the document
> + :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
> + to run testpmd.
Do we really need that referencing the same document as above?
> +
> +Limitations or Known issues
> +----------------------------
> +Build with ICC is not supported yet.
> +CRC stripping
> +~~~~~~~~~~~~~~
> +The RNP SoC family NICs strip the CRC for every packets coming into the
> +host interface irrespective of the offload configuration.
> +When You Want To Disable CRC_OFFLOAD The Feature Will Influence The RxCksum Offload
> +VLAN Strip
> +~~~~~~~~~~~
> +For VLAN Strip RNP Just Support CVLAN(0x8100) Type If The Vlan Type Is SVLAN(0X88a8)
> +VLAN Filter Or Strip Will Not Effert For This Packet It Will Bypass To The Host.
Please check the doc contribution guide.
You should add spaces before and after titles.
* RE: [PATCH v5 1/8] net/rnp: add skeleton
2023-08-15 11:10 ` Thomas Monjalon
@ 2023-08-21 9:32 ` 11
2023-08-30 16:27 ` Thomas Monjalon
0 siblings, 1 reply; 12+ messages in thread
From: 11 @ 2023-08-21 9:32 UTC (permalink / raw)
To: 'Thomas Monjalon'; +Cc: dev, ferruh.yigit, andrew.rybchenko, yaojun
Hi Thomas,
Thanks for your useful advice; I had previously focused only on the code format
and ignored the documentation format.
Regards,
Wenbo
> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: 15 August 2023 19:11
> To: Wenbo Cao <caowenbo@mucse.com>
> Cc: dev@dpdk.org; ferruh.yigit@amd.com; andrew.rybchenko@oktetlabs.ru;
> yaojun@mucse.com
> Subject: Re: [PATCH v5 1/8] net/rnp: add skeleton
>
> Hi,
>
> Wenbo Cao:
> > --- /dev/null
> > +++ b/doc/guides/nics/rnp.rst
> > @@ -0,0 +1,43 @@
> > +.. SPDX-License-Identifier: BSD-3-Clause
> > + Copyright(c) 2023 Mucse IC Design Ltd.
> > +
> > +RNP Poll Mode driver
> > +==========================
>
> Please keep underlining the same size as the text above.
Thanks for your kind comment; I was clearly lacking knowledge of the
documentation format.
>
> > +
> > +The RNP ETHDEV PMD (**librte_net_rnp**) provides poll mode ethdev
> > +driver support for the inbuilt network device found in the **Mucse
> > +RNP**
> > +
> > +Prerequisites
> > +-------------
> > +More information can be found at `Mucse, Official Website
> > +<https://mucse.com/productDetail>`_.
> > +
> > +Supported RNP SoCs
> > +------------------------
> > +
> > +- N10
> > +
> > +Driver compilation and testing
> > +------------------------------
> > +
> > +Refer to the document :ref:`compiling and testing a PMD for a NIC
> > +<pmd_build_and_test>` for details.
>
> It was a mistake to originally introduce the anchor "pmd_build_and_test".
> You should achieve the same result with the shorter
> syntax :doc:`build_and_test`
>
> > +
> > +#. Running testpmd:
> > +
> > + Follow instructions available in the document
> > + :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
> > + to run testpmd.
>
> Do we really need that referencing the same document as above?
For this block, there's really no need to add this.
My earlier idea was to add new content as the subsequent code is
submitted.
Do I need to add the full feature set and NIC description in the first
code commit?
>
> > +
> > +Limitations or Known issues
> > +----------------------------
> > +Build with ICC is not supported yet.
> > +CRC stripping
> > +~~~~~~~~~~~~~~
> > +The RNP SoC family NICs strip the CRC for every packets coming into
> > +the host interface irrespective of the offload configuration.
> > +When You Want To Disable CRC_OFFLOAD The Feature Will Influence The
> > +RxCksum Offload VLAN Strip ~~~~~~~~~~~ For VLAN Strip RNP Just
> > +Support CVLAN(0x8100) Type If The Vlan Type Is SVLAN(0X88a8) VLAN
> > +Filter Or Strip Will Not Effert For This Packet It Will Bypass To The
Host.
>
> Please check the doc contribution guide.
> You should add spaces before and after titles.
Yes, this is my fault :). I have read the document and now know to
add 2 blank lines before each section header and
1 blank line after each section header.
>
>
* Re: [PATCH v5 1/8] net/rnp: add skeleton
2023-08-21 9:32 ` 11
@ 2023-08-30 16:27 ` Thomas Monjalon
0 siblings, 0 replies; 12+ messages in thread
From: Thomas Monjalon @ 2023-08-30 16:27 UTC (permalink / raw)
To: 11, yaojun; +Cc: dev, ferruh.yigit, andrew.rybchenko
21/08/2023 11:32, 11:
> Hi Thomas,
>
> Thanks for your useful advice; I had previously focused only on the code format
> and ignored the documentation format.
no worries
> > > +#. Running testpmd:
> > > +
> > > + Follow instructions available in the document
> > > + :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
> > > + to run testpmd.
> >
> > Do we really need that referencing the same document as above?
> For this block, there's really no need to add this.
> Previous ideas I want to add new content as the subsequent code is
> submitted.
> Do I need to add full features and NIC Description at the first code commit
> ?
The best is to introduce doc when code is added, in the same patch.
So no full features in the first patch :)
Thread overview: 12+ messages, newest ~2023-08-30 16:27 UTC
2023-08-07 2:16 [PATCH v4 0/8] [v4]drivers/net Add Support mucse N10 Pmd Driver Wenbo Cao
2023-08-07 2:16 ` [PATCH v5 1/8] net/rnp: add skeleton Wenbo Cao
2023-08-15 11:10 ` Thomas Monjalon
2023-08-21 9:32 ` 11
2023-08-30 16:27 ` Thomas Monjalon
2023-08-07 2:16 ` [PATCH v5 2/8] net/rnp: add ethdev probe and remove Wenbo Cao
2023-08-07 2:16 ` [PATCH v5 3/8] net/rnp: add device init and uninit Wenbo Cao
2023-08-07 2:16 ` [PATCH v5 4/8] net/rnp: add mbx basic api feature Wenbo Cao
2023-08-07 2:16 ` [PATCH v5 5/8] net/rnp add reset code for Chip Init process Wenbo Cao
2023-08-07 2:16 ` [PATCH v5 6/8] net/rnp add port info resource init Wenbo Cao
2023-08-07 2:16 ` [PATCH v5 7/8] net/rnp add devargs runtime parsing functions Wenbo Cao
2023-08-07 2:16 ` [PATCH v5 8/8] net/rnp handle device interrupts Wenbo Cao