* [PATCH v7 01/28] net/rnp: add skeleton
From: Wenbo Cao @ 2025-02-08 2:43 UTC
To: thomas, Wenbo Cao; +Cc: stephen, dev, ferruh.yigit, andrew.rybchenko, yaojun
Add basic PMD library and doc build infrastructure.
Update maintainers file to claim responsibility.
Signed-off-by: Wenbo Cao <caowenbo@mucse.com>
Reviewed-by: Thomas Monjalon <thomas@monjalon.net>
---
MAINTAINERS | 6 +++
doc/guides/nics/features/rnp.ini | 8 ++++
doc/guides/nics/index.rst | 1 +
doc/guides/nics/rnp.rst | 82 ++++++++++++++++++++++++++++++++++++++++
drivers/net/meson.build | 1 +
5 files changed, 98 insertions(+)
create mode 100644 doc/guides/nics/features/rnp.ini
create mode 100644 doc/guides/nics/rnp.rst
diff --git a/MAINTAINERS b/MAINTAINERS
index 812463f..cf4806e 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -974,6 +974,12 @@ F: drivers/net/qede/
F: doc/guides/nics/qede.rst
F: doc/guides/nics/features/qede*.ini
+Mucse rnp
+M: Wenbo Cao <caowenbo@mucse.com>
+F: drivers/net/rnp
+F: doc/guides/nics/rnp.rst
+F: doc/guides/nics/features/rnp.ini
+
Solarflare sfc_efx
M: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
F: drivers/common/sfc_efx/
diff --git a/doc/guides/nics/features/rnp.ini b/doc/guides/nics/features/rnp.ini
new file mode 100644
index 0000000..2ad04ee
--- /dev/null
+++ b/doc/guides/nics/features/rnp.ini
@@ -0,0 +1,8 @@
+;
+; Supported features of the 'rnp' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Linux = Y
+x86-64 = Y
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index c14bc79..b12f409 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -60,6 +60,7 @@ Network Interface Controller Drivers
pcap_ring
pfe
qede
+ rnp
sfc_efx
softnic
tap
diff --git a/doc/guides/nics/rnp.rst b/doc/guides/nics/rnp.rst
new file mode 100644
index 0000000..618baa8
--- /dev/null
+++ b/doc/guides/nics/rnp.rst
@@ -0,0 +1,82 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2023 Mucse IC Design Ltd.
+
+RNP Poll Mode driver
+====================
+
+The RNP ETHDEV PMD (**librte_net_rnp**) provides poll mode ethdev
+driver support for the inbuilt network device found in **Mucse RNP** adapters.
+
+Prerequisites
+-------------
+More information can be found at `Mucse, Official Website
+<https://mucse.com/productDetail>`_.
+The English version of the product manual can be downloaded from
+`<https://muchuang-bucket.oss-cn-beijing.aliyuncs.com/aea70403c0de4fa58cd507632009103dMUCSE%20Product%20Manual%202023.pdf>`_.
+
+Supported Chipsets and NICs
+---------------------------
+
+- MUCSE Ethernet Controller N10 Series for 10GbE or 40GbE (Dual-port)
+
+Chip Basic Overview
+-------------------
+The N10 differs from a traditional PCIe network card: the chip exposes only
+two PCIe physical functions, yet can support up to eight ports.
+
+.. code-block:: console
+
+ +------------------------------------------------+
+ | OS |
+ | PCIE (PF0) |
+ | | | | | |
+ +----|------------|------------|------------|----+
+ | | | |
+ +-|------------|------------|------------|-+
+ | Extend Mac |
+ | VLAN/Unicast/multicast |
+ | Promisc Mode Ctrl |
+ | |
+ +-|------------|------------|------------|-+
+ | | | |
+ +---|---+ +---|---+ +---|---+ +---|---+
+ | | | | | | | |
+ | MAC 0 | | MAC 1 | | MAC 2 | | MAC 3 |
+ | | | | | | | |
+ +---|---+ +---|---+ +---|---+ +---|---+
+ | | | |
+ +---|---+ +---|---+ +---|---+ +---|---+
+ | | | | | | | |
+ | PORT 0| | PORT 1| | PORT 2| | PORT 3|
+ | | | | | | | |
+ +-------+ +-------+ +-------+ +-------+
+
+ +------------------------------------------------+
+ | OS |
+ | PCIE (PF1) |
+ | | | | | |
+ +----|------------|------------|------------|----+
+ | | | |
+ +-|------------|------------|------------|-+
+ | Extend Mac |
+ | VLAN/Unicast/multicast |
+ | Promisc Mode Ctrl |
+ | |
+ +-|------------|------------|------------|-+
+ | | | |
+ +---|---+ +---|---+ +---|---+ +---|---+
+ | | | | | | | |
+ | MAC 4 | | MAC 5 | | MAC 6 | | MAC 7 |
+ | | | | | | | |
+ +---|---+ +---|---+ +---|---+ +---|---+
+ | | | |
+ +---|---+ +---|---+ +---|---+ +---|---+
+ | | | | | | | |
+ | PORT 4| | PORT 5| | PORT 6| | PORT 7|
+ | | | | | | | |
+ +-------+ +-------+ +-------+ +-------+
+
+Limitations or Known issues
+---------------------------
+
+BSD is not supported yet.
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index fb6d34b..9308577 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -53,6 +53,7 @@ drivers = [
'pfe',
'qede',
'ring',
+ 'rnp',
'sfc',
'softnic',
'tap',
--
1.8.3.1
* [PATCH v7 02/28] net/rnp: add ethdev probe and remove
From: Wenbo Cao @ 2025-02-08 2:43 UTC
To: thomas, Wenbo Cao; +Cc: stephen, dev, ferruh.yigit, andrew.rybchenko, yaojun
Add basic PCIe ethdev probe and remove.
Signed-off-by: Wenbo Cao <caowenbo@mucse.com>
Reviewed-by: Stephen Hemminger <stephen@networkplumber.org>
---
drivers/net/rnp/meson.build | 11 +++++++
drivers/net/rnp/rnp.h | 13 ++++++++
drivers/net/rnp/rnp_ethdev.c | 77 ++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 101 insertions(+)
create mode 100644 drivers/net/rnp/meson.build
create mode 100644 drivers/net/rnp/rnp.h
create mode 100644 drivers/net/rnp/rnp_ethdev.c
diff --git a/drivers/net/rnp/meson.build b/drivers/net/rnp/meson.build
new file mode 100644
index 0000000..4f37c6b
--- /dev/null
+++ b/drivers/net/rnp/meson.build
@@ -0,0 +1,11 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(C) 2023 Mucse IC Design Ltd.
+#
+if not is_linux
+ build = false
+ reason = 'only supported on Linux'
+endif
+
+sources = files(
+ 'rnp_ethdev.c',
+)
diff --git a/drivers/net/rnp/rnp.h b/drivers/net/rnp/rnp.h
new file mode 100644
index 0000000..6cd717a
--- /dev/null
+++ b/drivers/net/rnp/rnp.h
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Mucse IC Design Ltd.
+ */
+#ifndef __RNP_H__
+#define __RNP_H__
+
+#define PCI_VENDOR_ID_MUCSE (0x8848)
+#define RNP_DEV_ID_N10G (0x1000)
+
+struct rnp_eth_port {
+};
+
+#endif /* __RNP_H__ */
diff --git a/drivers/net/rnp/rnp_ethdev.c b/drivers/net/rnp/rnp_ethdev.c
new file mode 100644
index 0000000..2f34f88
--- /dev/null
+++ b/drivers/net/rnp/rnp_ethdev.c
@@ -0,0 +1,77 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Mucse IC Design Ltd.
+ */
+
+#include <ethdev_pci.h>
+#include <rte_io.h>
+#include <rte_malloc.h>
+
+#include "rnp.h"
+
+static int
+rnp_eth_dev_init(struct rte_eth_dev *eth_dev)
+{
+ RTE_SET_USED(eth_dev);
+
+ return -ENODEV;
+}
+
+static int
+rnp_eth_dev_uninit(struct rte_eth_dev *eth_dev)
+{
+ RTE_SET_USED(eth_dev);
+
+ return -ENODEV;
+}
+
+static int
+rnp_pci_remove(struct rte_pci_device *pci_dev)
+{
+ struct rte_eth_dev *eth_dev;
+ int rc;
+
+ eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
+
+ if (eth_dev) {
+ /* Cleanup eth dev */
+ rc = rte_eth_dev_pci_generic_remove(pci_dev,
+ rnp_eth_dev_uninit);
+ if (rc)
+ return rc;
+ }
+
+ return 0;
+}
+
+static int
+rnp_pci_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
+{
+ int rc;
+
+ RTE_SET_USED(pci_drv);
+
+ rc = rte_eth_dev_pci_generic_probe(pci_dev, sizeof(struct rnp_eth_port),
+ rnp_eth_dev_init);
+
+ return rc;
+}
+
+static const struct rte_pci_id pci_id_rnp_map[] = {
+ {
+ RTE_PCI_DEVICE(PCI_VENDOR_ID_MUCSE, RNP_DEV_ID_N10G)
+ },
+ {
+ .vendor_id = 0,
+ },
+};
+
+static struct rte_pci_driver rte_rnp_pmd = {
+ .id_table = pci_id_rnp_map,
+ .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC,
+ .probe = rnp_pci_probe,
+ .remove = rnp_pci_remove,
+};
+
+RTE_PMD_REGISTER_PCI(net_rnp, rte_rnp_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(net_rnp, pci_id_rnp_map);
+RTE_PMD_REGISTER_KMOD_DEP(net_rnp, "igb_uio | uio_pci_generic | vfio-pci");
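For readers new to the ethdev PCI helpers: rte_eth_dev_pci_generic_probe()
allocates an ethdev sized for the driver's private data and then invokes the
init callback passed to it. A simplified sketch of that flow (not the exact
ethdev_pci.h implementation) looks like this:

#include <errno.h>

#include <ethdev_pci.h>

/* Simplified sketch of the generic probe flow; the real helper in
 * ethdev_pci.h also handles secondary processes and more corner cases. */
static int
probe_flow_sketch(struct rte_pci_device *pci_dev, size_t priv_size,
		  int (*dev_init)(struct rte_eth_dev *))
{
	struct rte_eth_dev *eth_dev;
	int ret;

	/* allocate the ethdev plus the driver private area */
	eth_dev = rte_eth_dev_pci_allocate(pci_dev, priv_size);
	if (eth_dev == NULL)
		return -ENOMEM;
	/* driver specific init: rnp_eth_dev_init() in this patch */
	ret = dev_init(eth_dev);
	if (ret)
		rte_eth_dev_release_port(eth_dev);
	return ret;
}

The remove path mirrors this via rte_eth_dev_pci_generic_remove(), which looks
the port up by name and runs the uninit callback before releasing it.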
--
1.8.3.1
* [PATCH v7 03/28] net/rnp: add log
From: Wenbo Cao @ 2025-02-08 2:43 UTC
To: thomas, Wenbo Cao; +Cc: stephen, dev, ferruh.yigit, andrew.rybchenko, yaojun
Add log functions for trace and debug.
Signed-off-by: Wenbo Cao <caowenbo@mucse.com>
---
drivers/net/rnp/rnp_ethdev.c | 2 ++
drivers/net/rnp/rnp_logs.h | 37 +++++++++++++++++++++++++++++++++++++
2 files changed, 39 insertions(+)
create mode 100644 drivers/net/rnp/rnp_logs.h
diff --git a/drivers/net/rnp/rnp_ethdev.c b/drivers/net/rnp/rnp_ethdev.c
index 2f34f88..389c6ad 100644
--- a/drivers/net/rnp/rnp_ethdev.c
+++ b/drivers/net/rnp/rnp_ethdev.c
@@ -72,6 +72,8 @@
.remove = rnp_pci_remove,
};
+RTE_LOG_REGISTER_SUFFIX(rnp_init_logtype, init, NOTICE);
+
RTE_PMD_REGISTER_PCI(net_rnp, rte_rnp_pmd);
RTE_PMD_REGISTER_PCI_TABLE(net_rnp, pci_id_rnp_map);
RTE_PMD_REGISTER_KMOD_DEP(net_rnp, "igb_uio | uio_pci_generic | vfio-pci");
diff --git a/drivers/net/rnp/rnp_logs.h b/drivers/net/rnp/rnp_logs.h
new file mode 100644
index 0000000..078c752
--- /dev/null
+++ b/drivers/net/rnp/rnp_logs.h
@@ -0,0 +1,37 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Mucse IC Design Ltd.
+ */
+
+#ifndef __RNP_LOGS_H__
+#define __RNP_LOGS_H__
+
+#include <rte_log.h>
+
+extern int rnp_init_logtype;
+
+#define RNP_PMD_INIT_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, rnp_init_logtype, "%s(): " fmt "\n", \
+ __func__, ##args)
+#define PMD_INIT_FUNC_TRACE() RNP_PMD_INIT_LOG(DEBUG, " >>")
+#define RNP_PMD_DRV_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_##level, rnp_init_logtype, \
+ "%s() " fmt "\n", __func__, ##args)
+#define RNP_PMD_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_##level, rnp_init_logtype, \
+ "rnp_net: (%d) " fmt "\n", __LINE__, ##args)
+#define RNP_PMD_ERR(fmt, args...) \
+ RNP_PMD_LOG(ERR, fmt, ## args)
+#define RNP_PMD_WARN(fmt, args...) \
+ RNP_PMD_LOG(WARNING, fmt, ## args)
+#define RNP_PMD_INFO(fmt, args...) \
+ RNP_PMD_LOG(INFO, fmt, ## args)
+
+#ifdef RTE_LIBRTE_RNP_REG_DEBUG
+#define RNP_PMD_REG_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, rnp_init_logtype, \
+ "%s(): " fmt "\n", __func__, ##args)
+#else
+#define RNP_PMD_REG_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#endif /* __RNP_LOGS_H__ */
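As a usage illustration (a sketch, not part of the patch): the macros wrap
rte_log() with the 'init' log type registered in rnp_ethdev.c, so assuming
the conventional default log name for net drivers, debug output can be raised
at runtime with an EAL option such as --log-level=pmd.net.rnp.init,debug.

#include <errno.h>

#include <ethdev_driver.h>

#include "rnp_logs.h"

/* Illustrative use of the logging macros in a driver entry point. */
static int
rnp_log_usage_sketch(struct rte_eth_dev *dev)
{
	PMD_INIT_FUNC_TRACE();	/* emits "func(): >>" at DEBUG level */
	if (dev == NULL) {
		RNP_PMD_ERR("no ethdev supplied");
		return -EINVAL;
	}
	RNP_PMD_INIT_LOG(INFO, "init port %u", dev->data->port_id);
	return 0;
}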
--
1.8.3.1
* [PATCH v7 04/28] net/rnp: support mailbox basic operate
From: Wenbo Cao @ 2025-02-08 2:43 UTC
To: thomas, Wenbo Cao; +Cc: stephen, dev, ferruh.yigit, andrew.rybchenko, yaojun
This patch adds mailbox support to the rnp PMD. The mailbox
is used for communication between the PF and the firmware,
and between the PF and VF drivers.
Signed-off-by: Wenbo Cao <caowenbo@mucse.com>
Reviewed-by: Stephen Hemminger <stephen@networkplumber.org>
---
drivers/net/rnp/base/meson.build | 22 ++
drivers/net/rnp/base/rnp_hw.h | 76 ++++++
drivers/net/rnp/base/rnp_mbx.c | 512 +++++++++++++++++++++++++++++++++++++++
drivers/net/rnp/base/rnp_mbx.h | 58 +++++
drivers/net/rnp/base/rnp_osdep.h | 53 ++++
drivers/net/rnp/meson.build | 5 +
drivers/net/rnp/rnp.h | 19 ++
7 files changed, 745 insertions(+)
create mode 100644 drivers/net/rnp/base/meson.build
create mode 100644 drivers/net/rnp/base/rnp_hw.h
create mode 100644 drivers/net/rnp/base/rnp_mbx.c
create mode 100644 drivers/net/rnp/base/rnp_mbx.h
create mode 100644 drivers/net/rnp/base/rnp_osdep.h
diff --git a/drivers/net/rnp/base/meson.build b/drivers/net/rnp/base/meson.build
new file mode 100644
index 0000000..9ea88c3
--- /dev/null
+++ b/drivers/net/rnp/base/meson.build
@@ -0,0 +1,22 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(C) 2023 Mucse IC Design Ltd.
+
+sources = [
+ 'rnp_mbx.c',
+]
+
+error_cflags = ['-Wno-unused-value',
+ '-Wno-unused-but-set-variable',
+ '-Wno-unused-parameter',
+ ]
+c_args = cflags
+foreach flag: error_cflags
+ if cc.has_argument(flag)
+ c_args += flag
+ endif
+endforeach
+
+base_lib = static_library('rnp_base', sources,
+ dependencies: [static_rte_eal, static_rte_net, static_rte_ethdev],
+ c_args: c_args)
+base_objs = base_lib.extract_all_objects(recursive: true)
diff --git a/drivers/net/rnp/base/rnp_hw.h b/drivers/net/rnp/base/rnp_hw.h
new file mode 100644
index 0000000..959b4c3
--- /dev/null
+++ b/drivers/net/rnp/base/rnp_hw.h
@@ -0,0 +1,76 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Mucse IC Design Ltd.
+ */
+#ifndef __RNP_HW_H__
+#define __RNP_HW_H__
+
+#include "rnp_osdep.h"
+
+struct rnp_hw;
+/* Mailbox Operate Info */
+enum RNP_MBX_ID {
+ RNP_MBX_PF = 0,
+ RNP_MBX_VF,
+ RNP_MBX_FW = 64,
+};
+
+struct rnp_mbx_ops {
+ int (*read)(struct rnp_hw *hw,
+ u32 *msg,
+ u16 size,
+ enum RNP_MBX_ID);
+ int (*write)(struct rnp_hw *hw,
+ u32 *msg,
+ u16 size,
+ enum RNP_MBX_ID);
+ int (*read_posted)(struct rnp_hw *hw,
+ u32 *msg,
+ u16 size,
+ enum RNP_MBX_ID);
+ int (*write_posted)(struct rnp_hw *hw,
+ u32 *msg,
+ u16 size,
+ enum RNP_MBX_ID);
+ int (*check_for_msg)(struct rnp_hw *hw, enum RNP_MBX_ID);
+ int (*check_for_ack)(struct rnp_hw *hw, enum RNP_MBX_ID);
+ int (*check_for_rst)(struct rnp_hw *hw, enum RNP_MBX_ID);
+};
+
+struct rnp_mbx_sync {
+ u16 req;
+ u16 ack;
+};
+
+struct rnp_mbx_info {
+ const struct rnp_mbx_ops *ops;
+ u32 usec_delay; /* retry interval delay time */
+ u32 timeout; /* retry ops timeout limit */
+ u16 size; /* data buffer size */
+ u16 vf_num; /* Virtual Function num */
+ u16 pf_num; /* Physical Function num */
+ u16 sriov_st; /* Sriov state */
+ u16 en_vfs; /* user enabled vf num */
+ bool is_pf;
+
+ struct rnp_mbx_sync syncs[RNP_MBX_FW];
+};
+
+struct rnp_eth_adapter;
+
+/* hw device description */
+struct rnp_hw {
+ struct rnp_eth_adapter *back; /* backup to the adapter handle */
+ void __iomem *e_ctrl; /* ethernet control bar */
+ void __iomem *c_ctrl; /* crypto control bar */
+ u32 c_blen; /* crypto bar size */
+
+ /* pci device info */
+ u16 device_id;
+ u16 vendor_id;
+ u16 max_vfs; /* device max support vf */
+
+ u16 pf_vf_num;
+ struct rnp_mbx_info mbx;
+};
+
+#endif /* __RNP_HW_H__ */
diff --git a/drivers/net/rnp/base/rnp_mbx.c b/drivers/net/rnp/base/rnp_mbx.c
new file mode 100644
index 0000000..a53404a
--- /dev/null
+++ b/drivers/net/rnp/base/rnp_mbx.c
@@ -0,0 +1,512 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Mucse IC Design Ltd.
+ */
+
+#include <string.h>
+
+#include "rnp_hw.h"
+#include "rnp_mbx.h"
+#include "../rnp.h"
+
+/****************************PF MBX OPS************************************/
+static inline u16
+rnp_mbx_get_req(struct rnp_hw *hw, enum RNP_MBX_ID mbx_id)
+{
+ u32 reg = 0;
+
+ if (mbx_id == RNP_MBX_FW)
+ reg = RNP_FW2PF_SYNC;
+ else
+ reg = RNP_VF2PF_SYNC(mbx_id);
+ mb();
+
+ return RNP_E_REG_RD(hw, reg) & RNP_MBX_SYNC_REQ_MASK;
+}
+
+static inline u16
+rnp_mbx_get_ack(struct rnp_hw *hw, enum RNP_MBX_ID mbx_id)
+{
+ u32 reg = 0;
+ u32 v = 0;
+
+ if (mbx_id == RNP_MBX_FW)
+ reg = RNP_FW2PF_SYNC;
+ else
+ reg = RNP_VF2PF_SYNC(mbx_id);
+ mb();
+ v = RNP_E_REG_RD(hw, reg);
+
+ return (v & RNP_MBX_SYNC_ACK_MASK) >> RNP_MBX_SYNC_ACK_S;
+}
+
+/*
+ * rnp_mbx_inc_pf_req - increase the req count of the mailbox sync info
+ * @hw: pointer to the HW structure
+ * @mbx_id: id of the mailbox to operate on
+ */
+static inline void
+rnp_mbx_inc_pf_req(struct rnp_hw *hw, enum RNP_MBX_ID mbx_id)
+{
+ u32 sync_base;
+ u32 req;
+ u32 v;
+
+ if (mbx_id == RNP_MBX_FW)
+ sync_base = RNP_PF2FW_SYNC;
+ else
+ sync_base = RNP_PF2VF_SYNC(mbx_id);
+ v = RNP_E_REG_RD(hw, sync_base);
+ req = (v & RNP_MBX_SYNC_REQ_MASK);
+ req++;
+ /* clear sync req value */
+ v &= ~(RNP_MBX_SYNC_REQ_MASK);
+ v |= req;
+
+ mb();
+ RNP_E_REG_WR(hw, sync_base, v);
+}
+
+/*
+ * rnp_mbx_inc_pf_ack - increase the ack count of the mailbox sync info
+ * @hw: pointer to the HW structure
+ * @mbx_id: id of the mailbox to operate on
+ */
+static inline void
+rnp_mbx_inc_pf_ack(struct rnp_hw *hw, enum RNP_MBX_ID mbx_id)
+{
+ u32 ack;
+ u32 reg;
+ u32 v;
+
+ if (mbx_id == RNP_MBX_FW)
+ reg = RNP_PF2FW_SYNC;
+ else
+ reg = RNP_PF2VF_SYNC(mbx_id);
+ v = RNP_E_REG_RD(hw, reg);
+ ack = (v & RNP_MBX_SYNC_ACK_MASK) >> RNP_MBX_SYNC_ACK_S;
+ ack++;
+ /* clear old sync ack */
+ v &= ~RNP_MBX_SYNC_ACK_MASK;
+ v |= (ack << RNP_MBX_SYNC_ACK_S);
+ mb();
+ RNP_E_REG_WR(hw, reg, v);
+}
+
+static void
+rnp_mbx_write_msg(struct rnp_hw *hw,
+ u32 *msg, u16 size,
+ enum RNP_MBX_ID mbx_id)
+{
+ u32 msg_base;
+ u16 i = 0;
+
+ if (mbx_id == RNP_MBX_FW)
+ msg_base = RNP_FW2PF_MSG_DATA;
+ else
+ msg_base = RNP_PF2VF_MSG_DATA(mbx_id);
+ for (i = 0; i < size; i++)
+ RNP_E_REG_WR(hw, msg_base + i * 4, msg[i]);
+}
+
+static void
+rnp_mbx_read_msg(struct rnp_hw *hw,
+ u32 *msg, u16 size,
+ enum RNP_MBX_ID mbx_id)
+{
+ u32 msg_base;
+ u16 i = 0;
+
+ if (mbx_id == RNP_MBX_FW)
+ msg_base = RNP_FW2PF_MSG_DATA;
+ else
+ msg_base = RNP_PF2VF_MSG_DATA(mbx_id);
+ for (i = 0; i < size; i++)
+ msg[i] = RNP_E_REG_RD(hw, msg_base + 4 * i);
+ mb();
+ /* clear msg cmd */
+ RNP_E_REG_WR(hw, msg_base, 0);
+}
+
+/*
+ * rnp_poll_for_msg - Wait for message notification
+ * @hw: pointer to the HW structure
+ * @mbx_id: id of mailbox to write
+ *
+ * returns SUCCESS if it successfully received a message notification
+ */
+static int
+rnp_poll_for_msg(struct rnp_hw *hw, enum RNP_MBX_ID mbx_id)
+{
+ struct rnp_mbx_info *mbx = &hw->mbx;
+ u32 countdown = mbx->timeout;
+
+ if (!countdown || !mbx->ops->check_for_msg)
+ goto out;
+
+ while (countdown && mbx->ops->check_for_msg(hw, mbx_id)) {
+ countdown--;
+ if (!countdown)
+ break;
+ udelay(mbx->usec_delay);
+ }
+out:
+ return countdown ? 0 : -ETIME;
+}
+
+/*
+ * rnp_poll_for_ack - Wait for message acknowledgment
+ * @hw: pointer to the HW structure
+ * @mbx_id: id of mailbox to write
+ *
+ * returns SUCCESS if it successfully received a message acknowledgment
+ */
+static int
+rnp_poll_for_ack(struct rnp_hw *hw, enum RNP_MBX_ID mbx_id)
+{
+ struct rnp_mbx_info *mbx = &hw->mbx;
+ int countdown = mbx->timeout;
+
+ if (!countdown || !mbx->ops->check_for_ack)
+ goto out;
+
+ while (countdown && mbx->ops->check_for_ack(hw, mbx_id)) {
+ countdown--;
+ if (!countdown)
+ break;
+ udelay(mbx->usec_delay);
+ }
+
+out:
+ return countdown ? 0 : -ETIME;
+}
+
+static int
+rnp_read_mbx_msg(struct rnp_hw *hw, u32 *msg, u16 size,
+ enum RNP_MBX_ID mbx_id)
+{
+ struct rnp_mbx_info *mbx = &hw->mbx;
+ int ret = RNP_ERR_MBX;
+
+ if (size > mbx->size)
+ return -EINVAL;
+ if (mbx->ops->read)
+ return mbx->ops->read(hw, msg, size, mbx_id);
+ return ret;
+}
+
+static int
+rnp_write_mbx_msg(struct rnp_hw *hw, u32 *msg, u16 size,
+ enum RNP_MBX_ID mbx_id)
+{
+ struct rnp_mbx_info *mbx = &hw->mbx;
+ int ret = RNP_ERR_MBX;
+
+ /* exit if either we can't write or there isn't a defined timeout */
+ if (size > mbx->size)
+ return -EINVAL;
+ if (mbx->ops->write)
+ return mbx->ops->write(hw, msg, size, mbx_id);
+ return ret;
+}
+
+/*
+ * rnp_obtain_mbx_lock_pf - obtain mailbox lock
+ * @hw: pointer to the HW structure
+ * @ctrl_base: ctrl mbx addr
+ *
+ * return SUCCESS if we obtained the mailbox lock
+ */
+static int rnp_obtain_mbx_lock_pf(struct rnp_hw *hw, enum RNP_MBX_ID mbx_id)
+{
+ int ret_val = -ETIME;
+ u32 try_cnt = 5000; /* 500ms */
+ u32 ctrl_base;
+
+ if (mbx_id == RNP_MBX_FW)
+ ctrl_base = RNP_PF2FW_MBX_CTRL;
+ else
+ ctrl_base = RNP_PF2VF_MBX_CTRL(mbx_id);
+ while (try_cnt-- > 0) {
+ /* take ownership of the buffer */
+ RNP_E_REG_WR(hw, ctrl_base, RNP_MBX_CTRL_PF_HOLD);
+ wmb();
+ /* reserve mailbox for pf used */
+ if (RNP_E_REG_RD(hw, ctrl_base) & RNP_MBX_CTRL_PF_HOLD)
+ return 0;
+ udelay(100);
+ }
+
+ RNP_PMD_LOG(WARNING, "%s: failed to get mbx lock\n",
+ __func__);
+ return ret_val;
+}
+
+static void
+rnp_obtain_mbx_unlock_pf(struct rnp_hw *hw, enum RNP_MBX_ID mbx_id)
+{
+ u32 ctrl_base;
+
+ if (mbx_id == RNP_MBX_FW)
+ ctrl_base = RNP_PF2FW_MBX_CTRL;
+ else
+ ctrl_base = RNP_PF2VF_MBX_CTRL(mbx_id);
+ RNP_E_REG_WR(hw, ctrl_base, 0);
+}
+
+static void
+rnp_mbx_send_irq_pf(struct rnp_hw *hw, enum RNP_MBX_ID mbx_id)
+{
+ u32 ctrl_base;
+
+ if (mbx_id == RNP_MBX_FW)
+ ctrl_base = RNP_PF2FW_MBX_CTRL;
+ else
+ ctrl_base = RNP_PF2VF_MBX_CTRL(mbx_id);
+
+ RNP_E_REG_WR(hw, ctrl_base, RNP_MBX_CTRL_REQ);
+}
+/*
+ * rnp_read_mbx_pf - Read a message from the mailbox
+ * @hw: pointer to the HW structure
+ * @msg: The message buffer
+ * @size: Length of buffer
+ * @mbx_id: id of request mbx target
+ *
+ * This function copies a message from the mailbox buffer to the caller's
+ * memory buffer. The presumption is that the caller knows that there was
+ * a message due to a VF/FW request so no polling for message is needed.
+ */
+static int rnp_read_mbx_pf(struct rnp_hw *hw, u32 *msg,
+ u16 size, enum RNP_MBX_ID mbx_id)
+{
+ struct rnp_mbx_sync *sync = &hw->mbx.syncs[mbx_id];
+ struct rnp_mbx_info *mbx = &hw->mbx;
+ int ret_val = -EBUSY;
+
+ if (size > mbx->size) {
+ RNP_PMD_LOG(ERR, "%s: mbx msg block size:%d should <%d\n",
+ __func__, size, mbx->size);
+ return -EINVAL;
+ }
+ memset(msg, 0, sizeof(*msg) * size);
+ /* lock the mailbox to prevent pf/vf race condition */
+ ret_val = rnp_obtain_mbx_lock_pf(hw, mbx_id);
+ if (ret_val)
+ goto out_no_read;
+ /* copy the message from the mailbox memory buffer */
+ rnp_mbx_read_msg(hw, msg, size, mbx_id);
+ /* update req. sync with fw or vf */
+ sync->req = rnp_mbx_get_req(hw, mbx_id);
+ /* Acknowledge receipt and release mailbox, then we're done */
+ rnp_mbx_inc_pf_ack(hw, mbx_id);
+ mb();
+ /* free ownership of the buffer */
+ rnp_obtain_mbx_unlock_pf(hw, mbx_id);
+
+out_no_read:
+
+ return ret_val;
+}
+
+/*
+ * rnp_write_mbx_pf - Places a message in the mailbox
+ * @hw: pointer to the HW structure
+ * @msg: The message buffer
+ * @size: Length of buffer
+ * @mbx_id: id of request mbx target
+ *
+ * returns SUCCESS if it successfully copied message into the buffer
+ */
+static int rnp_write_mbx_pf(struct rnp_hw *hw, u32 *msg, u16 size,
+ enum RNP_MBX_ID mbx_id)
+{
+ struct rnp_mbx_sync *sync = &hw->mbx.syncs[mbx_id];
+ struct rnp_mbx_info *mbx = &hw->mbx;
+ int ret_val = 0;
+
+ if (size > mbx->size) {
+ RNP_PMD_LOG(ERR, "%s: size:%d should <%d\n", __func__, size,
+ mbx->size);
+ return -EINVAL;
+ }
+ /* lock the mailbox to prevent pf/vf/cpu race condition */
+ ret_val = rnp_obtain_mbx_lock_pf(hw, mbx_id);
+ if (ret_val) {
+ RNP_PMD_LOG(ERR, "%s: get mbx:%d wlock failed. "
+ "msg:0x%08x-0x%08x\n", __func__, mbx_id,
+ msg[0], msg[1]);
+ goto out_no_write;
+ }
+ /* copy the caller specified message to the mailbox memory buffer */
+ rnp_mbx_write_msg(hw, msg, size, mbx_id);
+ /* flush msg and acks as we are overwriting the message buffer */
+ sync->ack = rnp_mbx_get_ack(hw, mbx_id);
+ rnp_mbx_inc_pf_req(hw, mbx_id);
+ udelay(300);
+ mb();
+ /* interrupt VF/FW to tell it a message has been sent and release buf */
+ rnp_mbx_send_irq_pf(hw, mbx_id);
+
+out_no_write:
+
+ return ret_val;
+}
+
+/*
+ * rnp_read_posted_mbx - Wait for message notification and receive message
+ * @hw: pointer to the HW structure
+ * @msg: The message buffer
+ * @size: Length of buffer
+ * @mbx_id: id of mailbox to write
+ *
+ * returns SUCCESS if it successfully received a message notification and
+ * copied it into the receive buffer.
+ */
+static int32_t
+rnp_read_posted_mbx(struct rnp_hw *hw,
+ u32 *msg, u16 size, enum RNP_MBX_ID mbx_id)
+{
+ int ret_val = -EINVAL;
+
+ ret_val = rnp_poll_for_msg(hw, mbx_id);
+ /* if ack received read message, otherwise we timed out */
+ if (!ret_val)
+ return rnp_read_mbx_msg(hw, msg, size, mbx_id);
+ return ret_val;
+}
+
+/*
+ * rnp_write_posted_mbx - Write a message to the mailbox, wait for ack
+ * @hw: pointer to the HW structure
+ * @msg: The message buffer
+ * @size: Length of buffer
+ * @mbx_id: id of mailbox to write
+ *
+ * returns SUCCESS if it successfully copied message into the buffer and
+ * received an ack to that message within delay * timeout period
+ */
+static int rnp_write_posted_mbx(struct rnp_hw *hw,
+ u32 *msg, u16 size,
+ enum RNP_MBX_ID mbx_id)
+{
+ int ret_val = RNP_ERR_MBX;
+
+ ret_val = rnp_write_mbx_msg(hw, msg, size, mbx_id);
+ if (ret_val)
+ return ret_val;
+ /* if msg sent wait until we receive an ack */
+ if (!ret_val)
+ ret_val = rnp_poll_for_ack(hw, mbx_id);
+ return ret_val;
+}
+
+/*
+ * rnp_check_for_msg_pf - checks to see if the VF/FW has sent mail
+ * @hw: pointer to the HW structure
+ * @mbx_id: id of mailbox to write
+ *
+ * returns SUCCESS if the VF has set the Status bit or else ERR_MBX
+ */
+static int rnp_check_for_msg_pf(struct rnp_hw *hw, enum RNP_MBX_ID mbx_id)
+{
+ struct rnp_mbx_sync *sync = &hw->mbx.syncs[mbx_id];
+ int ret_val = RNP_ERR_MBX;
+
+ if (rnp_mbx_get_req(hw, mbx_id) != sync->req)
+ ret_val = 0;
+
+ return ret_val;
+}
+
+/*
+ * rnp_check_for_ack_pf - checks to see if the VF/FW has ACKed
+ * @hw: pointer to the HW structure
+ * @mbx_id: id of mailbox to write
+ *
+ * returns SUCCESS if the VF has set the Status bit or else ERR_MBX
+ */
+static int rnp_check_for_ack_pf(struct rnp_hw *hw, enum RNP_MBX_ID mbx_id)
+{
+ struct rnp_mbx_sync *sync = &hw->mbx.syncs[mbx_id];
+ int ret_val = RNP_ERR_MBX;
+
+ if (rnp_mbx_get_ack(hw, mbx_id) != sync->ack)
+ ret_val = 0;
+
+ return ret_val;
+}
+
+const struct rnp_mbx_ops rnp_mbx_ops_pf = {
+ .read = rnp_read_mbx_pf,
+ .write = rnp_write_mbx_pf,
+ .read_posted = rnp_read_posted_mbx,
+ .write_posted = rnp_write_posted_mbx,
+ .check_for_msg = rnp_check_for_msg_pf,
+ .check_for_ack = rnp_check_for_ack_pf,
+};
+
+static int rnp_get_pfvfnum(struct rnp_hw *hw)
+{
+ u32 addr_mask;
+ u32 offset;
+ u32 val;
+
+ addr_mask = hw->c_blen - 1;
+ offset = RNP_SRIOV_INFO & addr_mask;
+ val = RNP_REG_RD(hw->c_ctrl, offset);
+
+ return val >> RNP_PFVF_SHIFT;
+}
+
+static void rnp_mbx_reset(struct rnp_hw *hw)
+{
+ struct rnp_mbx_sync *sync = hw->mbx.syncs;
+ int idx = 0;
+ u32 v = 0;
+
+ for (idx = 0; idx < hw->mbx.en_vfs; idx++) {
+ v = RNP_E_REG_RD(hw, RNP_VF2PF_SYNC(idx));
+ sync[idx].ack = (v & RNP_MBX_SYNC_ACK_MASK) >> RNP_MBX_SYNC_ACK_S;
+ sync[idx].req = v & RNP_MBX_SYNC_REQ_MASK;
+ /* release pf<->vf pf used buffer lock */
+ RNP_E_REG_WR(hw, RNP_PF2VF_MBX_CTRL(idx), 0);
+ }
+ /* reset pf->fw status */
+ v = RNP_E_REG_RD(hw, RNP_FW2PF_SYNC);
+ sync[RNP_MBX_FW].ack = (v & RNP_MBX_SYNC_ACK_MASK) >> RNP_MBX_SYNC_ACK_S;
+ sync[RNP_MBX_FW].req = v & RNP_MBX_SYNC_REQ_MASK;
+
+ RNP_PMD_LOG(INFO, "now fw_req %d fw_ack %d\n",
+ sync[RNP_MBX_FW].req, sync[RNP_MBX_FW].ack);
+ /* release pf->fw buffer lock */
+ RNP_E_REG_WR(hw, RNP_PF2FW_MBX_CTRL, 0);
+ /* setup mailbox vec id */
+ RNP_E_REG_WR(hw, RNP_FW2PF_MBOX_VEC, RNP_MISC_VEC_ID);
+ /* enable 0-31 vf interrupt */
+ RNP_E_REG_WR(hw, RNP_PF2VF_INTR_MASK(0), 0);
+ /* enable 32-63 vf interrupt */
+ RNP_E_REG_WR(hw, RNP_PF2VF_INTR_MASK(33), 0);
+ /* enable firmware interrupt */
+ RNP_E_REG_WR(hw, RNP_FW2PF_INTR_MASK, 0);
+}
+
+int rnp_init_mbx_pf(struct rnp_hw *hw)
+{
+ struct rnp_proc_priv *proc_priv = RNP_DEV_TO_PROC_PRIV(hw->back->eth_dev);
+ struct rnp_mbx_info *mbx = &hw->mbx;
+ u32 pf_vf_num;
+
+ pf_vf_num = rnp_get_pfvfnum(hw);
+ mbx->usec_delay = RNP_MBX_DELAY_US;
+ mbx->timeout = RNP_MBX_MAX_TM_SEC / mbx->usec_delay;
+ mbx->size = RNP_MBX_MSG_BLOCK_SIZE;
+ mbx->pf_num = (pf_vf_num & RNP_PF_BIT_MASK) ? 1 : 0;
+ mbx->vf_num = UINT16_MAX;
+ mbx->ops = &rnp_mbx_ops_pf;
+ proc_priv->mbx_ops = &rnp_mbx_ops_pf;
+ hw->pf_vf_num = pf_vf_num;
+ mbx->is_pf = 1;
+ rnp_mbx_reset(hw);
+
+ return 0;
+}
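To make the ops indirection concrete: a synchronous PF->FW exchange goes
through the posted operations registered by rnp_init_mbx_pf(). A minimal
sketch, assuming request and reply share the mailbox data area:

#include "rnp_hw.h"
#include "rnp_mbx.h"

/* Sketch: one synchronous PF->FW request/reply round trip through the
 * ops table registered by rnp_init_mbx_pf(); 'words' counts u32 units. */
static int
rnp_fw_roundtrip_sketch(struct rnp_hw *hw, u32 *req, u32 *rep, u16 words)
{
	const struct rnp_mbx_ops *ops = hw->mbx.ops;
	int err;

	err = ops->write_posted(hw, req, words, RNP_MBX_FW); /* send, wait ack */
	if (err)
		return err;
	return ops->read_posted(hw, rep, words, RNP_MBX_FW); /* wait, read reply */
}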
diff --git a/drivers/net/rnp/base/rnp_mbx.h b/drivers/net/rnp/base/rnp_mbx.h
new file mode 100644
index 0000000..b241657
--- /dev/null
+++ b/drivers/net/rnp/base/rnp_mbx.h
@@ -0,0 +1,58 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Mucse IC Design Ltd.
+ */
+
+#ifndef __RNP_MBX_H__
+#define __RNP_MBX_H__
+
+#include "rnp_osdep.h"
+
+#include "rnp_hw.h"
+
+#define RNP_ISOLATE_CTRL (0x7982fc)
+#define RNP_SRIOV_INFO (0x75f000)
+#define RNP_PFVF_SHIFT (4)
+#define RNP_PF_BIT_MASK RTE_BIT32(6)
+
+#define RNP_MBX_MSG_BLOCK_SIZE (14)
+/* mailbox share memory detail divide */
+/*|0------------------15|---------------------31|
+ *|------master-req-----|-------master-ack------|
+ *|------slave-req------|-------slave-ack-------|
+ *|---------------------|-----------------------|
+ *| data(56 bytes) |
+ *----------------------------------------------|
+ */
+/* FW <--> PF */
+#define RNP_FW2PF_MBOX_VEC _MSI_(0x5300)
+#define RNP_FW2PF_MEM_BASE _MSI_(0xa000)
+#define RNP_FW2PF_SYNC (RNP_FW2PF_MEM_BASE + 0)
+#define RNP_PF2FW_SYNC (RNP_FW2PF_MEM_BASE + 4)
+#define RNP_FW2PF_MSG_DATA (RNP_FW2PF_MEM_BASE + 8)
+#define RNP_PF2FW_MBX_CTRL _MSI_(0xa100)
+#define RNP_FW2PF_MBX_CTRL _MSI_(0xa200)
+#define RNP_FW2PF_INTR_MASK _MSI_(0xa300)
+/* PF <-> VF */
+#define RNP_PF2VF_MBOX_VEC(vf) _MSI_(0x5100 + (4 * (vf)))
+#define RNP_PF2VF_MEM_BASE(vf) _MSI_(0x6000 + (64 * (vf)))
+#define RNP_PF2VF_SYNC(vf) (RNP_PF2VF_MEM_BASE(vf) + 0)
+#define RNP_VF2PF_SYNC(vf) (RNP_PF2VF_MEM_BASE(vf) + 4)
+#define RNP_PF2VF_MSG_DATA(vf) (RNP_PF2VF_MEM_BASE(vf) + 8)
+#define RNP_VF2PF_MBX_CTRL(vf) _MSI_(0x7000 + ((vf) * 4))
+#define RNP_PF2VF_MBX_CTRL(vf) _MSI_(0x7100 + ((vf) * 4))
+#define RNP_PF2VF_INTR_MASK(vf) _MSI_(0x7200 + ((((vf) & 32) / 32) * 0x4))
+/* sync memory define */
+#define RNP_MBX_SYNC_REQ_MASK RTE_GENMASK32(15, 0)
+#define RNP_MBX_SYNC_ACK_MASK RTE_GENMASK32(31, 16)
+#define RNP_MBX_SYNC_ACK_S (16)
+/* for pf <--> fw/vf */
+#define RNP_MBX_CTRL_PF_HOLD RTE_BIT32(3) /* VF:RO, PF:WR */
+#define RNP_MBX_CTRL_REQ RTE_BIT32(0) /* msg write request */
+
+#define RNP_MBX_DELAY_US (100) /* delay us for retry */
+#define RNP_MBX_MAX_TM_SEC (4 * 1000 * 1000) /* 4 sec */
+
+#define RNP_ERR_MBX (-100)
+int rnp_init_mbx_pf(struct rnp_hw *hw);
+
+#endif /* __RNP_MBX_H__ */
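The sync word in the layout above packs the request counter into bits 15:0
and the ack counter into bits 31:16 of a single 32-bit register. An
illustrative helper (not part of the patch) that unpacks one sync word:

#include "rnp_mbx.h"

/* Illustrative helper: split one 32-bit sync word into req/ack halves. */
static inline void
rnp_sync_unpack_sketch(u32 v, u16 *req, u16 *ack)
{
	*req = v & RNP_MBX_SYNC_REQ_MASK;                          /* bits 15:0 */
	*ack = (v & RNP_MBX_SYNC_ACK_MASK) >> RNP_MBX_SYNC_ACK_S;  /* bits 31:16 */
}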
diff --git a/drivers/net/rnp/base/rnp_osdep.h b/drivers/net/rnp/base/rnp_osdep.h
new file mode 100644
index 0000000..b0b3f34
--- /dev/null
+++ b/drivers/net/rnp/base/rnp_osdep.h
@@ -0,0 +1,53 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Mucse IC Design Ltd.
+ */
+
+#ifndef _RNP_OSDEP_H
+#define _RNP_OSDEP_H
+
+#include <stdint.h>
+#include <stdbool.h>
+#include <inttypes.h>
+
+#include <rte_io.h>
+#include <rte_log.h>
+#include <rte_cycles.h>
+
+#include "../rnp_logs.h"
+
+typedef uint8_t u8;
+typedef uint16_t u16;
+typedef uint32_t u32;
+
+#define mb() rte_mb()
+#define wmb() rte_wmb()
+
+#define udelay(x) rte_delay_us(x)
+
+#define _MSI_(off) ((off) + (0xA0000))
+
+#define __iomem
+static inline u32
+rnp_reg_read32(void *base, size_t offset)
+{
+ unsigned int v = rte_read32(((u8 *)base + offset));
+
+ RNP_PMD_REG_LOG(DEBUG, "offset=0x%08lx val=0x%04x",
+ (unsigned long)offset, v);
+ return v;
+}
+
+static inline void
+rnp_reg_write32(void *base, size_t offset, u32 val)
+{
+ RNP_PMD_REG_LOG(DEBUG, "offset=0x%08lx val=0x%08x",
+ (unsigned long)offset, val);
+ rte_write32(val, ((u8 *)base + offset));
+}
+
+#define RNP_REG_RD(base, offset) rnp_reg_read32(base, offset)
+#define RNP_REG_WR(base, offset, val) rnp_reg_write32(base, offset, val)
+#define RNP_E_REG_WR(hw, off, value) rnp_reg_write32((hw)->e_ctrl, (off), (value))
+#define RNP_E_REG_RD(hw, off) rnp_reg_read32((hw)->e_ctrl, (off))
+
+#endif /* _RNP_OSDEP_H */
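For illustration, the wrappers above reduce BAR access to simple
read/modify/write sequences; the register offset below is hypothetical and
stands in for the real offsets added by later patches:

#include "rnp_hw.h"
#include "rnp_osdep.h"

#define RNP_EXAMPLE_REG (0x0010) /* hypothetical offset, illustration only */

/* Sketch: read/modify/write on the ethernet control BAR via the wrappers. */
static void
rnp_reg_rmw_sketch(struct rnp_hw *hw, u32 set_bits)
{
	u32 v = RNP_E_REG_RD(hw, RNP_EXAMPLE_REG);

	v |= set_bits;
	wmb(); /* order the MMIO write after prior CPU stores */
	RNP_E_REG_WR(hw, RNP_EXAMPLE_REG, v);
}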
diff --git a/drivers/net/rnp/meson.build b/drivers/net/rnp/meson.build
index 4f37c6b..d6cb380 100644
--- a/drivers/net/rnp/meson.build
+++ b/drivers/net/rnp/meson.build
@@ -6,6 +6,11 @@ if not is_linux
reason = 'only supported on Linux'
endif
+subdir('base')
+objs = [base_objs]
+
+includes += include_directories('base')
+
sources = files(
'rnp_ethdev.c',
)
diff --git a/drivers/net/rnp/rnp.h b/drivers/net/rnp/rnp.h
index 6cd717a..904b7ad 100644
--- a/drivers/net/rnp/rnp.h
+++ b/drivers/net/rnp/rnp.h
@@ -4,10 +4,29 @@
#ifndef __RNP_H__
#define __RNP_H__
+#include <ethdev_driver.h>
+#include <rte_interrupts.h>
+
+#include "base/rnp_hw.h"
+
#define PCI_VENDOR_ID_MUCSE (0x8848)
#define RNP_DEV_ID_N10G (0x1000)
+#define RNP_MAX_VF_NUM (64)
+#define RNP_MISC_VEC_ID RTE_INTR_VEC_ZERO_OFFSET
+
+struct rnp_proc_priv {
+ const struct rnp_mbx_ops *mbx_ops;
+};
struct rnp_eth_port {
};
+struct rnp_eth_adapter {
+ struct rnp_hw hw;
+ struct rte_eth_dev *eth_dev; /* alloc eth_dev by platform */
+};
+
+#define RNP_DEV_TO_PROC_PRIV(eth_dev) \
+ ((struct rnp_proc_priv *)(eth_dev)->process_private)
+
#endif /* __RNP_H__ */
--
1.8.3.1
* [PATCH v7 05/28] net/rnp: add device init and uninit
From: Wenbo Cao @ 2025-02-08 2:43 UTC
To: thomas, Wenbo Cao, Anatoly Burakov
Cc: stephen, dev, ferruh.yigit, andrew.rybchenko, yaojun
Add the firmware communication method and basic device
init, uninit and resource close functions.
Signed-off-by: Wenbo Cao <caowenbo@mucse.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@amd.com>
---
drivers/net/rnp/base/meson.build | 4 +
drivers/net/rnp/base/rnp_common.c | 73 ++++++++
drivers/net/rnp/base/rnp_common.h | 12 ++
drivers/net/rnp/base/rnp_dma_regs.h | 13 ++
drivers/net/rnp/base/rnp_eth_regs.h | 15 ++
drivers/net/rnp/base/rnp_fw_cmd.c | 75 ++++++++
drivers/net/rnp/base/rnp_fw_cmd.h | 216 +++++++++++++++++++++++
drivers/net/rnp/base/rnp_hw.h | 39 +++++
drivers/net/rnp/base/rnp_mac.c | 28 +++
drivers/net/rnp/base/rnp_mac.h | 14 ++
drivers/net/rnp/base/rnp_mbx_fw.c | 338 ++++++++++++++++++++++++++++++++++++
drivers/net/rnp/base/rnp_mbx_fw.h | 18 ++
drivers/net/rnp/base/rnp_osdep.h | 100 ++++++++++-
drivers/net/rnp/meson.build | 1 +
drivers/net/rnp/rnp.h | 40 +++++
drivers/net/rnp/rnp_ethdev.c | 317 ++++++++++++++++++++++++++++++++-
16 files changed, 1290 insertions(+), 13 deletions(-)
create mode 100644 drivers/net/rnp/base/rnp_common.c
create mode 100644 drivers/net/rnp/base/rnp_common.h
create mode 100644 drivers/net/rnp/base/rnp_dma_regs.h
create mode 100644 drivers/net/rnp/base/rnp_eth_regs.h
create mode 100644 drivers/net/rnp/base/rnp_fw_cmd.c
create mode 100644 drivers/net/rnp/base/rnp_fw_cmd.h
create mode 100644 drivers/net/rnp/base/rnp_mac.c
create mode 100644 drivers/net/rnp/base/rnp_mac.h
create mode 100644 drivers/net/rnp/base/rnp_mbx_fw.c
create mode 100644 drivers/net/rnp/base/rnp_mbx_fw.h
diff --git a/drivers/net/rnp/base/meson.build b/drivers/net/rnp/base/meson.build
index 9ea88c3..b9db033 100644
--- a/drivers/net/rnp/base/meson.build
+++ b/drivers/net/rnp/base/meson.build
@@ -3,6 +3,10 @@
sources = [
'rnp_mbx.c',
+ 'rnp_fw_cmd.c',
+ 'rnp_mbx_fw.c',
+ 'rnp_common.c',
+ 'rnp_mac.c',
]
error_cflags = ['-Wno-unused-value',
diff --git a/drivers/net/rnp/base/rnp_common.c b/drivers/net/rnp/base/rnp_common.c
new file mode 100644
index 0000000..47a979b
--- /dev/null
+++ b/drivers/net/rnp/base/rnp_common.c
@@ -0,0 +1,73 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Mucse IC Design Ltd.
+ */
+
+#include "rnp_osdep.h"
+#include "rnp_hw.h"
+#include "rnp_eth_regs.h"
+#include "rnp_dma_regs.h"
+#include "rnp_common.h"
+#include "rnp_mbx_fw.h"
+#include "rnp_mac.h"
+#include "../rnp.h"
+
+static void
+rnp_hw_reset(struct rnp_hw *hw)
+{
+ PMD_INIT_FUNC_TRACE();
+
+ RNP_E_REG_WR(hw, RNP_NIC_RESET, 0);
+ /* hardware reset valid must be 0 -> 1 */
+ wmb();
+ RNP_E_REG_WR(hw, RNP_NIC_RESET, 1);
+ RNP_PMD_DRV_LOG(INFO, "PF[%d] nic reset done\n", hw->mbx.pf_num);
+}
+
+int rnp_init_hw(struct rnp_hw *hw)
+{
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(hw->back->eth_dev);
+ u32 version = 0;
+ int ret = -1;
+ u32 state;
+
+ PMD_INIT_FUNC_TRACE();
+ version = RNP_E_REG_RD(hw, RNP_DMA_VERSION);
+ RNP_PMD_DRV_LOG(INFO, "nic hw version:0x%.2x\n", version);
+ rnp_fw_init(hw);
+ RNP_E_REG_WR(hw, RNP_DMA_HW_EN, FALSE);
+ do {
+ state = RNP_E_REG_RD(hw, RNP_DMA_HW_STATE);
+ } while (state == 0);
+ ret = rnp_mbx_fw_get_capability(port);
+ if (ret) {
+ RNP_PMD_ERR("mbx_get_capability error! errcode=%d\n", ret);
+ return ret;
+ }
+ rnp_hw_reset(hw);
+ rnp_mbx_fw_reset_phy(hw);
+ /* rx packet protocol engine bypass */
+ RNP_E_REG_WR(hw, RNP_E_ENG_BYPASS, FALSE);
+ /* enable host filter */
+ RNP_E_REG_WR(hw, RNP_E_FILTER_EN, TRUE);
+ /* enable vxlan parse */
+ RNP_E_REG_WR(hw, RNP_E_VXLAN_PARSE_EN, TRUE);
+ /* enable flow direct engine */
+ RNP_E_REG_WR(hw, RNP_E_REDIR_EN, TRUE);
+ /* enable dma engine */
+ RNP_E_REG_WR(hw, RNP_DMA_HW_EN, RNP_DMA_EN_ALL);
+#define RNP_TARGET_TC_PORT (2)
+#define RNP_PORT_OFF_QUEUE_NUM (2)
+ if (hw->nic_mode == RNP_DUAL_10G && hw->max_port_num == 2)
+ RNP_E_REG_WR(hw, RNP_TC_PORT_OFFSET(RNP_TARGET_TC_PORT),
+ RNP_PORT_OFF_QUEUE_NUM);
+
+ return 0;
+}
+
+int
+rnp_setup_common_ops(struct rnp_hw *hw)
+{
+ rnp_mac_ops_init(hw);
+
+ return 0;
+}
diff --git a/drivers/net/rnp/base/rnp_common.h b/drivers/net/rnp/base/rnp_common.h
new file mode 100644
index 0000000..aaf77a6
--- /dev/null
+++ b/drivers/net/rnp/base/rnp_common.h
@@ -0,0 +1,12 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Mucse IC Design Ltd.
+ */
+
+#ifndef _RNP_COMMON_H_
+#define _RNP_COMMON_H_
+
+#define RNP_NIC_RESET _NIC_(0x0010)
+int rnp_init_hw(struct rnp_hw *hw);
+int rnp_setup_common_ops(struct rnp_hw *hw);
+
+#endif /* _RNP_COMMON_H_ */
diff --git a/drivers/net/rnp/base/rnp_dma_regs.h b/drivers/net/rnp/base/rnp_dma_regs.h
new file mode 100644
index 0000000..00f8aff
--- /dev/null
+++ b/drivers/net/rnp/base/rnp_dma_regs.h
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Mucse IC Design Ltd.
+ */
+
+#ifndef _RNP_DMA_REGS_H_
+#define _RNP_DMA_REGS_H_
+
+#define RNP_DMA_VERSION (0)
+#define RNP_DMA_HW_EN (0x10)
+#define RNP_DMA_EN_ALL (0b1111)
+#define RNP_DMA_HW_STATE (0x14)
+
+#endif /* _RNP_DMA_REGS_H_ */
diff --git a/drivers/net/rnp/base/rnp_eth_regs.h b/drivers/net/rnp/base/rnp_eth_regs.h
new file mode 100644
index 0000000..6957866
--- /dev/null
+++ b/drivers/net/rnp/base/rnp_eth_regs.h
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Mucse IC Design Ltd.
+ */
+
+#ifndef _RNP_ETH_REGS_H
+#define _RNP_ETH_REGS_H
+
+#define RNP_E_ENG_BYPASS _ETH_(0x8000)
+#define RNP_E_VXLAN_PARSE_EN _ETH_(0x8004)
+#define RNP_E_FILTER_EN _ETH_(0x801c)
+#define RNP_E_REDIR_EN _ETH_(0x8030)
+
+#define RNP_TC_PORT_OFFSET(lane) _ETH_(0xe840 + 0x04 * (lane))
+
+#endif /* _RNP_ETH_REGS_H */
diff --git a/drivers/net/rnp/base/rnp_fw_cmd.c b/drivers/net/rnp/base/rnp_fw_cmd.c
new file mode 100644
index 0000000..064ba9e
--- /dev/null
+++ b/drivers/net/rnp/base/rnp_fw_cmd.c
@@ -0,0 +1,75 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Mucse IC Design Ltd.
+ */
+
+#include "rnp_fw_cmd.h"
+
+static void
+rnp_build_phy_abalities_req(struct rnp_mbx_fw_cmd_req *req,
+ struct rnp_fw_req_arg *req_arg,
+ void *cookie)
+{
+ struct rnp_get_phy_ablity *arg = (struct rnp_get_phy_ablity *)req->data;
+
+ req->flags = 0;
+ req->opcode = RNP_GET_PHY_ABALITY;
+ req->datalen = sizeof(*arg);
+ req->cookie = cookie;
+ req->reply_lo = 0;
+ req->reply_hi = 0;
+
+ arg->requester = RNP_REQUEST_BY_DPDK;
+}
+
+static void
+rnp_build_reset_phy_req(struct rnp_mbx_fw_cmd_req *req,
+ void *cookie)
+{
+ req->flags = 0;
+ req->opcode = RNP_RESET_PHY;
+ req->datalen = 0;
+ req->reply_lo = 0;
+ req->reply_hi = 0;
+ req->cookie = cookie;
+}
+
+static void
+rnp_build_get_macaddress_req(struct rnp_mbx_fw_cmd_req *req,
+ struct rnp_fw_req_arg *req_arg,
+ void *cookie)
+{
+ struct rnp_mac_addr_req *arg = (struct rnp_mac_addr_req *)req->data;
+
+ req->flags = 0;
+ req->opcode = RNP_GET_MAC_ADDRESS;
+ req->datalen = sizeof(*arg);
+ req->cookie = cookie;
+ req->reply_lo = 0;
+ req->reply_hi = 0;
+
+ arg->lane_mask = RTE_BIT32(req_arg->param0);
+ arg->pfvf_num = req_arg->param1;
+}
+
+int rnp_build_fwcmd_req(struct rnp_mbx_fw_cmd_req *req,
+ struct rnp_fw_req_arg *arg,
+ void *cookie)
+{
+ int err = 0;
+
+ switch (arg->opcode) {
+ case RNP_GET_PHY_ABALITY:
+ rnp_build_phy_abalities_req(req, arg, cookie);
+ break;
+ case RNP_RESET_PHY:
+ rnp_build_reset_phy_req(req, cookie);
+ break;
+ case RNP_GET_MAC_ADDRESS:
+ rnp_build_get_macaddress_req(req, arg, cookie);
+ break;
+ default:
+ err = -EOPNOTSUPP;
+ }
+
+ return err;
+}
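In the polled path used by this patch, the request structure itself doubles
as the completion cookie (see rnp_fw_send_norep_cmd() in rnp_mbx_fw.c below).
A sketch of building a RNP_GET_MAC_ADDRESS request that way:

#include <string.h>

#include "rnp_fw_cmd.h"

/* Sketch: build a RNP_GET_MAC_ADDRESS request for lane 0 in polled mode,
 * passing the request itself as the completion cookie. */
static void
rnp_build_req_sketch(struct rnp_mbx_fw_cmd_req *req, u32 pf_vf_num)
{
	struct rnp_fw_req_arg arg;

	memset(req, 0, sizeof(*req));
	memset(&arg, 0, sizeof(arg));
	arg.opcode = RNP_GET_MAC_ADDRESS;
	arg.param0 = 0;         /* lane index, becomes a lane_mask bit */
	arg.param1 = pf_vf_num; /* pf/vf routing info */
	rnp_build_fwcmd_req(req, &arg, req);
}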
diff --git a/drivers/net/rnp/base/rnp_fw_cmd.h b/drivers/net/rnp/base/rnp_fw_cmd.h
new file mode 100644
index 0000000..fb7a0af
--- /dev/null
+++ b/drivers/net/rnp/base/rnp_fw_cmd.h
@@ -0,0 +1,216 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Mucse IC Design Ltd.
+ */
+
+#ifndef _RNP_FW_CMD_H_
+#define _RNP_FW_CMD_H_
+
+#include "rnp_osdep.h"
+
+#define RNP_FW_LINK_SYNC _NIC_(0x000c)
+#define RNP_LINK_MAGIC_CODE (0xa5a40000)
+#define RNP_LINK_MAGIC_MASK RTE_GENMASK32(31, 16)
+
+enum RNP_GENERIC_CMD {
+ /* general */
+ RNP_GET_FW_VERSION = 0x0001,
+ RNP_READ_REG = 0xFF03,
+ RNP_WRITE_REG = 0xFF04,
+ RNP_MODIFY_REG = 0xFF07,
+
+ /* virtualization */
+ RNP_IFUP_DOWN = 0x0800,
+ RNP_PTP_EVENT = 0x0801,
+ RNP_DRIVER_INSMOD = 0x0803,
+ RNP_SYSTEM_SUSPUSE = 0x0804,
+ RNP_FORCE_LINK_ON_CLOSE = 0x0805,
+
+ /* link configuration admin commands */
+ RNP_GET_PHY_ABALITY = 0x0601,
+ RNP_GET_MAC_ADDRESS = 0x0602,
+ RNP_RESET_PHY = 0x0603,
+ RNP_LED_SET = 0x0604,
+ RNP_GET_LINK_STATUS = 0x0607,
+ RNP_LINK_STATUS_EVENT = 0x0608,
+ RNP_SET_LANE_FUN = 0x0609,
+ RNP_GET_LANE_STATUS = 0x0610,
+ RNP_SFP_SPEED_CHANGED_EVENT = 0x0611,
+ RNP_SET_EVENT_MASK = 0x0613,
+ RNP_SET_LANE_EVENT_EN = 0x0614,
+ RNP_SET_LOOPBACK_MODE = 0x0618,
+ RNP_PLUG_EVENT = 0x0620,
+ RNP_SET_PHY_REG = 0x0628,
+ RNP_GET_PHY_REG = 0x0629,
+ RNP_PHY_LINK_SET = 0x0630,
+ RNP_GET_PHY_STATISTICS = 0x0631,
+ RNP_GET_PCS_REG = 0x0633,
+ RNP_MODIFY_PCS_REG = 0x0634,
+ RNP_MODIFY_PHY_REG = 0x0635,
+
+ /* sfp module */
+ RNP_SFP_MODULE_READ = 0x0900,
+ RNP_SFP_MODULE_WRITE = 0x0901,
+
+ /* fw update */
+ RNP_FW_UPDATE = 0x0700,
+ RNP_FW_MAINTAIN = 0x0701,
+ RNP_EEPROM_OP = 0x0705,
+ RNP_EMI_SYNC = 0x0706,
+
+ RNP_GET_DUMP = 0x0a00,
+ RNP_SET_DUMP = 0x0a10,
+ RNP_GET_TEMP = 0x0a11,
+ RNP_SET_WOL = 0x0a12,
+ RNP_LLDP_TX_CTL = 0x0a13,
+ RNP_LLDP_STAT = 0x0a14,
+ RNP_SFC_OP = 0x0a15,
+ RNP_SRIOV_SET = 0x0a16,
+ RNP_SRIOV_STAT = 0x0a17,
+
+ RNP_SN_PN = 0x0b00,
+
+ RNP_ATU_OBOUND_SET = 0xFF10,
+ RNP_SET_DDR_CSL = 0xFF11,
+};
+
+/* firmware -> driver reply */
+struct rnp_phy_abilities_rep {
+ u8 link_stat;
+ u8 lane_mask;
+
+ u32 speed;
+ u16 phy_type;
+ u16 nic_mode;
+ u16 pfnum;
+ u32 fw_version;
+ u32 nic_clock;
+ union {
+ u8 port_ids[4];
+ u32 port_idf;
+ };
+ u32 fw_ext;
+ u32 phy_id;
+ u32 wol_status; /* bit0-3: wol supported; bit4-7: wol enabled */
+ union {
+ u32 ext_ablity;
+ struct {
+ u32 valid : 1; /* 0 */
+ u32 wol_en : 1; /* 1 */
+ u32 pci_preset_runtime_en : 1; /* 2 */
+ u32 smbus_en : 1; /* 3 */
+ u32 ncsi_en : 1; /* 4 */
+ u32 rpu_en : 1; /* 5 */
+ u32 v2 : 1; /* 6 */
+ u32 pxe_en : 1; /* 7 */
+ u32 mctp_en : 1; /* 8 */
+ u32 yt8614 : 1; /* 9 */
+ u32 pci_ext_reset : 1; /* 10 */
+ u32 rpu_availble : 1; /* 11 */
+ u32 fw_lldp_ablity : 1; /* 12 */
+ u32 lldp_enabled : 1; /* 13 */
+ u32 only_1g : 1; /* 14 */
+ u32 force_link_down_en : 4; /* lane0 - lane4 */
+ u32 force_link_supported : 1;
+ u32 ports_is_sgmii_valid : 1;
+ u32 lane_is_sgmii : 4; /* 24 bit */
+ u32 rsvd : 7;
+ } e;
+ };
+} _PACKED_ALIGN4;
+
+struct rnp_mac_addr_rep {
+ u32 lanes;
+ struct _addr {
+ /* for macaddr:01:02:03:04:05:06
+ * mac-hi=0x01020304 mac-lo=0x05060000
+ */
+ u8 mac[8];
+ } addrs[4];
+ u32 pcode;
+};
+
+#define RNP_FW_REP_DATA_NUM (40)
+struct rnp_mbx_fw_cmd_reply {
+ u16 flags;
+ u16 opcode;
+ u16 error_code;
+ u16 datalen;
+ union {
+ struct {
+ u32 cookie_lo;
+ u32 cookie_hi;
+ };
+ void *cookie;
+ };
+ u8 data[RNP_FW_REP_DATA_NUM];
+} _PACKED_ALIGN4;
+
+struct rnp_fw_req_arg {
+ u16 opcode;
+ u32 param0;
+ u32 param1;
+ u32 param2;
+ u32 param3;
+ u32 param4;
+ u32 param5;
+};
+
+static_assert(sizeof(struct rnp_mbx_fw_cmd_reply) == 56,
+ "firmware request cmd size changed: rnp_mbx_fw_cmd_reply");
+
+#define RNP_FW_REQ_DATA_NUM (32)
+/* driver op -> firmware */
+struct rnp_mac_addr_req {
+ u32 lane_mask;
+ u32 pfvf_num;
+ u32 rsv[2];
+} _PACKED_ALIGN4;
+
+struct rnp_get_phy_ablity {
+ u32 requester;
+#define RNP_REQUEST_BY_DPDK (0xa1)
+#define RNP_REQUEST_BY_DRV (0xa2)
+#define RNP_REQUEST_BY_PXE (0xa3)
+ u32 rsv[7];
+} _PACKED_ALIGN4;
+
+struct rnp_mbx_fw_cmd_req {
+ u16 flags;
+ u16 opcode;
+ u16 datalen;
+ u16 ret_value;
+ union {
+ struct {
+ u32 cookie_lo; /* 8-11 */
+ u32 cookie_hi; /* 12-15 */
+ };
+ void *cookie;
+ };
+ u32 reply_lo;
+ u32 reply_hi;
+
+ u8 data[RNP_FW_REQ_DATA_NUM];
+} _PACKED_ALIGN4;
+
+static_assert(sizeof(struct rnp_mbx_fw_cmd_req) == 56,
+ "firmware request cmd size changed: rnp_mbx_fw_cmd_req");
+
+#define RNP_MBX_REQ_HDR_LEN (24)
+#define RNP_MBX_REPLYHDR_LEN (16)
+#define RNP_MAX_SHARE_MEM (8 * 8)
+struct rnp_mbx_req_cookie {
+ u32 magic;
+#define RNP_COOKIE_MAGIC (0xCE)
+ u32 timeout_ms;
+ u32 errcode;
+
+ /* wait_queue_head_t wait; */
+ volatile u32 done;
+ u32 priv_len;
+ u8 priv[RNP_MAX_SHARE_MEM];
+};
+
+int rnp_build_fwcmd_req(struct rnp_mbx_fw_cmd_req *req,
+ struct rnp_fw_req_arg *arg,
+ void *cookie);
+#endif /* _RNP_FW_CMD_H_ */
diff --git a/drivers/net/rnp/base/rnp_hw.h b/drivers/net/rnp/base/rnp_hw.h
index 959b4c3..e150543 100644
--- a/drivers/net/rnp/base/rnp_hw.h
+++ b/drivers/net/rnp/base/rnp_hw.h
@@ -6,6 +6,8 @@
#include "rnp_osdep.h"
+#define RNP_MAX_PORT_OF_PF (4)
+
struct rnp_hw;
/* Mailbox Operate Info */
enum RNP_MBX_ID {
@@ -55,7 +57,34 @@ struct rnp_mbx_info {
struct rnp_mbx_sync syncs[RNP_MBX_FW];
};
+struct rnp_eth_port;
+/* mac operations */
+struct rnp_mac_ops {
+ /* update mac packet filter mode */
+ int (*get_macaddr)(struct rnp_eth_port *port, u8 *mac);
+};
+
struct rnp_eth_adapter;
+struct rnp_fw_info {
+ char cookie_name[RTE_MEMZONE_NAMESIZE];
+ struct rnp_dma_mem mem;
+ void *cookie_pool;
+ bool fw_irq_en;
+ bool msg_alloced;
+
+ u64 fw_features;
+ spinlock_t fw_lock;
+};
+
+#define rnp_call_hwif_impl(port, func, arg...) \
+ (((func) != NULL) ? ((func) (port, arg)) : (-ENODEV))
+
+enum rnp_nic_mode {
+ RNP_SINGLE_40G = 0,
+ RNP_SINGLE_10G = 1,
+ RNP_DUAL_10G = 2,
+ RNP_QUAD_10G = 3,
+};
/* hw device description */
struct rnp_hw {
@@ -69,8 +98,18 @@ struct rnp_hw {
u16 vendor_id;
u16 max_vfs; /* device max support vf */
+ char device_name[RTE_DEV_NAME_MAX_LEN];
+
+ u8 max_port_num; /* max sub port of this nic */
+ u8 lane_mask; /* lane enabled bit */
+ u8 nic_mode;
u16 pf_vf_num;
+ /* hardware port sequence info */
+ u8 phy_port_ids[RNP_MAX_PORT_OF_PF]; /* port id: for lane0~3, value: 0 ~ 7 */
+ u8 lane_of_port[RNP_MAX_PORT_OF_PF]; /* lane id: hw lane to port mapping, e.g. 1:0 0:1 or 0:0 1:1 */
+ bool lane_is_sgmii[RNP_MAX_PORT_OF_PF];
struct rnp_mbx_info mbx;
+ struct rnp_fw_info fw_info;
};
#endif /* __RNP_HW_H__ */
diff --git a/drivers/net/rnp/base/rnp_mac.c b/drivers/net/rnp/base/rnp_mac.c
new file mode 100644
index 0000000..b063f4c
--- /dev/null
+++ b/drivers/net/rnp/base/rnp_mac.c
@@ -0,0 +1,28 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Mucse IC Design Ltd.
+ */
+
+#include "rnp_osdep.h"
+
+#include "rnp_mbx_fw.h"
+#include "rnp_mac.h"
+#include "../rnp.h"
+
+const struct rnp_mac_ops rnp_mac_ops_pf = {
+ .get_macaddr = rnp_mbx_fw_get_macaddr,
+};
+
+int rnp_get_mac_addr(struct rnp_eth_port *port, u8 *mac)
+{
+ const struct rnp_mac_ops *mac_ops =
+ RNP_DEV_PP_TO_MAC_OPS(port->eth_dev);
+
+ return rnp_call_hwif_impl(port, mac_ops->get_macaddr, mac);
+}
+
+void rnp_mac_ops_init(struct rnp_hw *hw)
+{
+ struct rnp_proc_priv *proc_priv = RNP_DEV_TO_PROC_PRIV(hw->back->eth_dev);
+
+ proc_priv->mac_ops = &rnp_mac_ops_pf;
+}
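A typical caller (for example device init in a later patch) would fetch the
permanent address through this wrapper and fall back to a random address if
the firmware has none; a hedged sketch:

#include <rte_ether.h>

#include "rnp_mac.h"
#include "../rnp.h"

/* Sketch: fetch the port MAC through the ops indirection, falling back
 * to a random locally administered address on failure. */
static void
rnp_macaddr_sketch(struct rnp_eth_port *port, struct rte_ether_addr *mac)
{
	if (rnp_get_mac_addr(port, mac->addr_bytes))
		rte_eth_random_addr(mac->addr_bytes);
}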
diff --git a/drivers/net/rnp/base/rnp_mac.h b/drivers/net/rnp/base/rnp_mac.h
new file mode 100644
index 0000000..8a12aa4
--- /dev/null
+++ b/drivers/net/rnp/base/rnp_mac.h
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Mucse IC Design Ltd.
+ */
+
+#ifndef _RNP_MAC_H_
+#define _RNP_MAC_H_
+
+#include "rnp_osdep.h"
+#include "rnp_hw.h"
+
+void rnp_mac_ops_init(struct rnp_hw *hw);
+int rnp_get_mac_addr(struct rnp_eth_port *port, u8 *mac);
+
+#endif /* _RNP_MAC_H_ */
diff --git a/drivers/net/rnp/base/rnp_mbx_fw.c b/drivers/net/rnp/base/rnp_mbx_fw.c
new file mode 100644
index 0000000..6c6f713
--- /dev/null
+++ b/drivers/net/rnp/base/rnp_mbx_fw.c
@@ -0,0 +1,338 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Mucse IC Design Ltd.
+ */
+
+#include <string.h>
+
+#include "rnp_mbx_fw.h"
+#include "rnp_fw_cmd.h"
+#include "rnp_mbx.h"
+#include "../rnp.h"
+
+#define RNP_MBX_API_MAX_RETRY (10)
+#define RNP_POLL_WAIT_MS (10)
+
+static int rnp_mbx_fw_post_req(struct rnp_eth_port *port,
+ struct rnp_mbx_fw_cmd_req *req,
+ struct rnp_mbx_req_cookie *cookie)
+{
+ const struct rnp_mbx_ops *ops = RNP_DEV_PP_TO_MBX_OPS(port->eth_dev);
+ struct rnp_hw *hw = port->hw;
+ u32 timeout_cnt;
+ int err = 0;
+
+ cookie->done = 0;
+
+ spin_lock(&hw->fw_info.fw_lock);
+
+ /* down_interruptible(&pf_cpu_lock); */
+ err = ops->write(hw, (u32 *)req,
+ (req->datalen + RNP_MBX_REQ_HDR_LEN) / 4, RNP_MBX_FW);
+ if (err) {
+ RNP_PMD_LOG(ERR, "rnp_write_mbx failed!\n");
+ goto quit;
+ }
+
+ timeout_cnt = cookie->timeout_ms / RNP_POLL_WAIT_MS;
+ while (timeout_cnt > 0) {
+ mdelay(RNP_POLL_WAIT_MS);
+ timeout_cnt--;
+ if (cookie->done)
+ break;
+ }
+quit:
+ spin_unlock(&hw->fw_info.fw_lock);
+ return err;
+}
+
+static int
+rnp_fw_send_cmd_wait(struct rnp_eth_port *port,
+ struct rnp_mbx_fw_cmd_req *req,
+ struct rnp_mbx_fw_cmd_reply *reply)
+{
+ const struct rnp_mbx_ops *ops = RNP_DEV_PP_TO_MBX_OPS(port->eth_dev);
+ struct rnp_hw *hw = port->hw;
+ u16 try_count = 0;
+ int err = 0;
+
+ if (ops == NULL || ops->write_posted == NULL)
+ return -EINVAL;
+ spin_lock(&hw->fw_info.fw_lock);
+ err = ops->write_posted(hw, (u32 *)req,
+ (req->datalen + RNP_MBX_REQ_HDR_LEN) / 4, RNP_MBX_FW);
+ if (err) {
+ RNP_PMD_LOG(ERR, "%s: write_posted failed!"
+ " err:0x%x", __func__, err);
+ spin_unlock(&hw->fw_info.fw_lock);
+ return err;
+ }
+ /* ignore replies that do not target this request */
+fw_api_try:
+ err = ops->read_posted(hw, (u32 *)reply,
+ sizeof(*reply) / 4, RNP_MBX_FW);
+ if (err) {
+ RNP_PMD_LOG(ERR,
+ "%s: read_posted failed! err:0x%x"
+ " req-op:0x%x",
+ __func__,
+ err,
+ req->opcode);
+ goto err_quit;
+ }
+ if (req->opcode != reply->opcode) {
+ try_count++;
+ if (try_count < RNP_MBX_API_MAX_RETRY)
+ goto fw_api_try;
+ RNP_PMD_LOG(ERR,
+ "%s: read reply msg failed! err:0x%x"
+ " req-op:0x%x",
+ __func__,
+ err,
+ req->opcode);
+ err = -EIO;
+ }
+ if (reply->error_code) {
+ RNP_PMD_LOG(ERR,
+ "%s: reply err:0x%x. req-op:0x%x\n",
+ __func__,
+ reply->error_code,
+ req->opcode);
+ err = -reply->error_code;
+ goto err_quit;
+ }
+ spin_unlock(&hw->fw_info.fw_lock);
+
+ return err;
+err_quit:
+
+ spin_unlock(&hw->fw_info.fw_lock);
+ RNP_PMD_LOG(ERR,
+ "%s:PF[%d]: req:%08x_%08x_%08x_%08x "
+ "reply:%08x_%08x_%08x_%08x",
+ __func__,
+ hw->mbx.pf_num,
+ ((int *)req)[0],
+ ((int *)req)[1],
+ ((int *)req)[2],
+ ((int *)req)[3],
+ ((int *)reply)[0],
+ ((int *)reply)[1],
+ ((int *)reply)[2],
+ ((int *)reply)[3]);
+
+ return err;
+}
+
+static int
+rnp_fw_send_norep_cmd(struct rnp_eth_port *port,
+ struct rnp_fw_req_arg *arg)
+{
+ const struct rnp_mbx_ops *ops = RNP_DEV_PP_TO_MBX_OPS(port->eth_dev);
+ struct rnp_mbx_fw_cmd_req req;
+ struct rnp_hw *hw = port->hw;
+ int err = 0;
+
+ if (ops == NULL || ops->write_posted == NULL)
+ return -EINVAL;
+ memset(&req, 0, sizeof(req));
+ spin_lock(&hw->fw_info.fw_lock);
+ rnp_build_fwcmd_req(&req, arg, &req);
+ err = ops->write_posted(hw, (u32 *)&req,
+ (req.datalen + RNP_MBX_REQ_HDR_LEN) / 4, RNP_MBX_FW);
+ if (err) {
+ RNP_PMD_LOG(ERR, "%s: write_posted failed!"
+ " err:0x%x", __func__, err);
+ spin_unlock(&hw->fw_info.fw_lock);
+ return err;
+ }
+ spin_unlock(&hw->fw_info.fw_lock);
+
+ return 0;
+}
+
+static int
+rnp_fw_send_cmd(struct rnp_eth_port *port,
+ struct rnp_fw_req_arg *arg,
+ void *respond)
+{
+ struct rnp_mbx_req_cookie *cookie;
+ struct rnp_mbx_fw_cmd_reply reply;
+ struct rnp_mbx_fw_cmd_req req;
+ struct rnp_hw *hw = port->hw;
+ int err = 0;
+
+ memset(&req, 0, sizeof(req));
+ memset(&reply, 0, sizeof(reply));
+ if (hw->fw_info.fw_irq_en) {
+ cookie = rnp_dma_mem_alloc(hw, &hw->fw_info.mem,
+ sizeof(*cookie), hw->fw_info.cookie_name);
+ if (!cookie)
+ return -ENOMEM;
+ memset(cookie->priv, 0, cookie->priv_len);
+ rnp_build_fwcmd_req(&req, arg, cookie);
+ err = rnp_mbx_fw_post_req(port, &req, cookie);
+ if (err)
+ return err;
+ if (respond)
+ memcpy(respond, cookie->priv, RNP_FW_REP_DATA_NUM);
+ } else {
+ rnp_build_fwcmd_req(&req, arg, &req);
+ err = rnp_fw_send_cmd_wait(port, &req, &reply);
+ if (err)
+ return err;
+ if (respond)
+ memcpy(respond, reply.data, RNP_FW_REP_DATA_NUM);
+ }
+
+ return 0;
+}
+
+int rnp_fw_init(struct rnp_hw *hw)
+{
+ struct rnp_fw_info *fw_info = &hw->fw_info;
+ struct rnp_mbx_req_cookie *cookie = NULL;
+
+ snprintf(fw_info->cookie_name, RTE_MEMZONE_NAMESIZE,
+ "fw_req_cookie_%s",
+ hw->device_name);
+ fw_info->cookie_pool = rnp_dma_mem_alloc(hw, &fw_info->mem,
+ sizeof(struct rnp_mbx_req_cookie),
+ fw_info->cookie_name);
+ cookie = (struct rnp_mbx_req_cookie *)fw_info->cookie_pool;
+ if (cookie == NULL)
+ return -ENOMEM;
+ cookie->timeout_ms = 1000;
+ cookie->magic = RNP_COOKIE_MAGIC;
+ cookie->priv_len = RNP_MAX_SHARE_MEM;
+ spin_lock_init(&fw_info->fw_lock);
+ fw_info->fw_irq_en = false;
+
+ return 0;
+}
+
+static int
+rnp_fw_get_phy_capability(struct rnp_eth_port *port,
+ struct rnp_phy_abilities_rep *abil)
+{
+ u8 data[RNP_FW_REP_DATA_NUM] = {0};
+ struct rnp_fw_req_arg arg;
+ int err;
+
+ RTE_BUILD_BUG_ON(sizeof(*abil) != RNP_FW_REP_DATA_NUM);
+
+ memset(&arg, 0, sizeof(arg));
+ arg.opcode = RNP_GET_PHY_ABALITY;
+ err = rnp_fw_send_cmd(port, &arg, &data);
+ if (err)
+ return err;
+ memcpy(abil, &data, sizeof(*abil));
+
+ return 0;
+}
+
+int rnp_mbx_fw_get_capability(struct rnp_eth_port *port)
+{
+ struct rnp_phy_abilities_rep ablity;
+ struct rnp_hw *hw = port->hw;
+ u32 is_sgmii_bits = 0;
+ bool is_sgmii = false;
+ u16 lane_bit = 0;
+ u32 lane_cnt = 0;
+ int err = -EIO;
+ u16 temp_mask;
+ u8 lane_idx;
+ u8 idx;
+
+ memset(&ablity, 0, sizeof(ablity));
+ err = rnp_fw_get_phy_capability(port, &ablity);
+ if (!err) {
+ hw->lane_mask = ablity.lane_mask;
+ hw->nic_mode = ablity.nic_mode;
+ /* get phy<->lane mapping info */
+ lane_cnt = __builtin_popcount(hw->lane_mask);
+ temp_mask = hw->lane_mask;
+ if (ablity.e.ports_is_sgmii_valid)
+ is_sgmii_bits = ablity.e.lane_is_sgmii;
+ for (idx = 0; idx < lane_cnt; idx++) {
+ hw->phy_port_ids[idx] = ablity.port_ids[idx];
+ lane_bit = ffs(temp_mask) - 1;
+ lane_idx = ablity.port_ids[idx] % lane_cnt;
+ hw->lane_of_port[lane_idx] = lane_bit;
+ is_sgmii = lane_bit & is_sgmii_bits ? 1 : 0;
+ hw->lane_is_sgmii[lane_idx] = is_sgmii;
+ temp_mask &= ~RTE_BIT32(lane_bit);
+ }
+ hw->max_port_num = lane_cnt;
+ }
+ if (lane_cnt <= 0 || lane_cnt > 4)
+ return -EIO;
+
+ RNP_PMD_LOG(INFO,
+ "%s: nic-mode:%d lane_cnt:%d lane_mask:0x%x"
+ " pfvfnum:0x%x, fw_version:0x%08x,"
+ " ports:%d-%d-%d-%d ncsi_en:%d\n",
+ __func__,
+ hw->nic_mode,
+ lane_cnt,
+ hw->lane_mask,
+ hw->pf_vf_num,
+ ablity.fw_version,
+ ablity.port_ids[0],
+ ablity.port_ids[1],
+ ablity.port_ids[2],
+ ablity.port_ids[3],
+ ablity.e.ncsi_en);
+
+ return err;
+}
+
+int rnp_mbx_fw_reset_phy(struct rnp_hw *hw)
+{
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(hw->back->eth_dev);
+ struct rnp_fw_req_arg arg;
+ int err;
+
+ memset(&arg, 0, sizeof(arg));
+ arg.opcode = RNP_RESET_PHY;
+ err = rnp_fw_send_norep_cmd(port, &arg);
+ if (err) {
+ RNP_PMD_LOG(ERR, "%s: failed. err:%d", __func__, err);
+ return err;
+ }
+
+ return 0;
+}
+
+int
+rnp_mbx_fw_get_macaddr(struct rnp_eth_port *port,
+ u8 *mac_addr)
+{
+ u8 data[RNP_FW_REP_DATA_NUM] = {0};
+ u32 nr_lane = port->attr.nr_lane;
+ struct rnp_mac_addr_rep *mac;
+ struct rnp_fw_req_arg arg;
+ int err;
+
+ if (!mac_addr)
+ return -EINVAL;
+ RTE_BUILD_BUG_ON(sizeof(*mac) != RNP_FW_REP_DATA_NUM);
+ memset(&arg, 0, sizeof(arg));
+ mac = (struct rnp_mac_addr_rep *)&data;
+ arg.opcode = RNP_GET_MAC_ADDRESS;
+ arg.param0 = nr_lane;
+ arg.param1 = port->hw->pf_vf_num;
+
+ err = rnp_fw_send_cmd(port, &arg, &data);
+ if (err) {
+ RNP_PMD_LOG(ERR, "%s: failed. err:%d\n", __func__, err);
+ return err;
+ }
+ if (RTE_BIT32(nr_lane) & mac->lanes) {
+ memcpy(mac_addr, mac->addrs[nr_lane].mac, 6);
+
+ return 0;
+ }
+
+ return -ENODATA;
+}
diff --git a/drivers/net/rnp/base/rnp_mbx_fw.h b/drivers/net/rnp/base/rnp_mbx_fw.h
new file mode 100644
index 0000000..255d913
--- /dev/null
+++ b/drivers/net/rnp/base/rnp_mbx_fw.h
@@ -0,0 +1,18 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Mucse IC Design Ltd.
+ */
+
+#ifndef _RNP_MBX_FW_H_
+#define _RNP_MBX_FW_H_
+
+#include "rnp_osdep.h"
+#include "rnp_hw.h"
+
+struct rnp_eth_port;
+
+int rnp_mbx_fw_get_macaddr(struct rnp_eth_port *port, u8 *mac_addr);
+int rnp_mbx_fw_get_capability(struct rnp_eth_port *port);
+int rnp_mbx_fw_reset_phy(struct rnp_hw *hw);
+int rnp_fw_init(struct rnp_hw *hw);
+
+#endif /* _RNP_MBX_FW_H_ */
diff --git a/drivers/net/rnp/base/rnp_osdep.h b/drivers/net/rnp/base/rnp_osdep.h
index b0b3f34..3f31f9b 100644
--- a/drivers/net/rnp/base/rnp_osdep.h
+++ b/drivers/net/rnp/base/rnp_osdep.h
@@ -11,26 +11,57 @@
#include <rte_io.h>
#include <rte_log.h>
+#include <rte_bitops.h>
#include <rte_cycles.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_memcpy.h>
+#include <rte_memzone.h>
+#include <rte_memory.h>
+#include <rte_dev.h>
#include "../rnp_logs.h"
typedef uint8_t u8;
+typedef int8_t s8;
typedef uint16_t u16;
typedef uint32_t u32;
+typedef uint64_t u64;
+
+typedef rte_iova_t dma_addr_t;
#define mb() rte_mb()
#define wmb() rte_wmb()
#define udelay(x) rte_delay_us(x)
+#define mdelay(x) rte_delay_ms(x)
+#define memcpy rte_memcpy
+
+#define spinlock_t rte_spinlock_t
+#define spin_lock_init(spinlock_v) rte_spinlock_init(spinlock_v)
+#define spin_lock(spinlock_v) rte_spinlock_lock(spinlock_v)
+#define spin_unlock(spinlock_v) rte_spinlock_unlock(spinlock_v)
+#define _ETH_(off) ((off) + (0x10000))
+#define _NIC_(off) ((off) + (0x30000))
#define _MSI_(off) ((off) + (0xA0000))
+#ifndef _PACKED_ALIGN4
+#define _PACKED_ALIGN4 __attribute__((packed, aligned(4)))
+#endif
+
+#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
+ #error "__BIG_ENDIAN is not supported yet."
+#endif
+
+#define FALSE 0
+#define TRUE 1
+
#define __iomem
static inline u32
-rnp_reg_read32(void *base, size_t offset)
+rnp_reg_read32(volatile void *base, size_t offset)
{
- unsigned int v = rte_read32(((u8 *)base + offset));
+ unsigned int v = rte_read32(((volatile u8 *)base + offset));
RNP_PMD_REG_LOG(DEBUG, "offset=0x%08lx val=0x%04x",
(unsigned long)offset, v);
@@ -38,15 +69,74 @@
}
static inline void
-rnp_reg_write32(void *base, size_t offset, u32 val)
+rnp_reg_write32(volatile void *base, size_t offset, u32 val)
{
RNP_PMD_REG_LOG(DEBUG, "offset=0x%08lx val=0x%08x",
(unsigned long)offset, val);
- rte_write32(val, ((u8 *)base + offset));
+ rte_write32(val, ((volatile u8 *)base + offset));
+}
+
+struct rnp_dma_mem {
+ void *va;
+ dma_addr_t pa;
+ u32 size;
+ const void *mz;
+};
+
+struct rnp_hw;
+
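+/* reserve a memzone bounded to a 2M page (or return the existing
+ * zone when a name is given and already registered) and record its
+ * va/iova in mem
+ */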
+static inline void *
+rnp_dma_mem_alloc(__rte_unused struct rnp_hw *hw,
+ struct rnp_dma_mem *mem, u64 size, const char *name)
+{
+ static RTE_ATOMIC(uint64_t) rnp_dma_memzone_id;
+ const struct rte_memzone *mz = NULL;
+ char z_name[RTE_MEMZONE_NAMESIZE];
+
+ if (!mem)
+ return NULL;
+ if (name) {
+ snprintf(z_name, sizeof(z_name), "%s", name);
+ mz = rte_memzone_lookup(z_name);
+ if (mz)
+ return mem->va;
+ } else {
+ snprintf(z_name, sizeof(z_name), "rnp_dma_%" PRIu64,
+ rte_atomic_fetch_add_explicit(&rnp_dma_memzone_id, 1,
+ rte_memory_order_relaxed));
+ }
+ mz = rte_memzone_reserve_bounded(z_name, size, SOCKET_ID_ANY, 0,
+ 0, RTE_PGSIZE_2M);
+ if (!mz)
+ return NULL;
+
+ mem->size = size;
+ mem->va = mz->addr;
+ mem->pa = mz->iova;
+ mem->mz = (const void *)mz;
+ RNP_PMD_DRV_LOG(DEBUG, "memzone %s allocated with physical address: "
+ "%"PRIu64, mz->name, mem->pa);
+
+ return mem->va;
+}
+
+static inline void
+rnp_dma_mem_free(__rte_unused struct rnp_hw *hw,
+ struct rnp_dma_mem *mem)
+{
+ RNP_PMD_DRV_LOG(DEBUG, "memzone %s to be freed with physical address: "
+ "%"PRIu64, ((const struct rte_memzone *)mem->mz)->name,
+ mem->pa);
+ if (mem->mz) {
+ rte_memzone_free((const struct rte_memzone *)mem->mz);
+ mem->mz = NULL;
+ mem->va = NULL;
+ mem->pa = (dma_addr_t)0;
+ }
}
#define RNP_REG_RD(base, offset) rnp_reg_read32(base, offset)
-#define RNP_REG_WR(base, offset) rnp_reg_write32(base, offset)
+#define RNP_REG_WR(base, offset, val) rnp_reg_write32(base, offset, val)
#define RNP_E_REG_WR(hw, off, value) rnp_reg_write32((hw)->e_ctrl, (off), (value))
#define RNP_E_REG_RD(hw, off) rnp_reg_read32((hw)->e_ctrl, (off))
diff --git a/drivers/net/rnp/meson.build b/drivers/net/rnp/meson.build
index d6cb380..29e6d49 100644
--- a/drivers/net/rnp/meson.build
+++ b/drivers/net/rnp/meson.build
@@ -9,6 +9,7 @@ endif
subdir('base')
objs = [base_objs]
+deps += ['net']
includes += include_directories('base')
sources = files(
diff --git a/drivers/net/rnp/rnp.h b/drivers/net/rnp/rnp.h
index 904b7ad..0b33d5b 100644
--- a/drivers/net/rnp/rnp.h
+++ b/drivers/net/rnp/rnp.h
@@ -13,20 +13,60 @@
#define RNP_DEV_ID_N10G (0x1000)
#define RNP_MAX_VF_NUM (64)
#define RNP_MISC_VEC_ID RTE_INTR_VEC_ZERO_OFFSET
+/* maximum frame size supported */
+#define RNP_MAC_MAXFRM_SIZE (9590)
+
+struct rnp_port_attr {
+ uint16_t max_mac_addrs; /* max supported mac addresses */
+ uint16_t port_id; /* platform manage port sequence id */
+ uint8_t port_offset; /* port queue offset */
+ uint8_t sw_id; /* software port init sequence id */
+ uint16_t nr_lane; /* phy lane of this PF: 0~3 */
+};
struct rnp_proc_priv {
+ const struct rnp_mac_ops *mac_ops;
const struct rnp_mbx_ops *mbx_ops;
};
struct rnp_eth_port {
+ struct rnp_proc_priv *proc_priv;
+ struct rte_ether_addr mac_addr;
+ struct rte_eth_dev *eth_dev;
+ struct rnp_port_attr attr;
+ struct rnp_hw *hw;
};
struct rnp_eth_adapter {
struct rnp_hw hw;
+ struct rte_pci_device *pdev;
struct rte_eth_dev *eth_dev; /* alloc eth_dev by platform */
+
+ struct rnp_eth_port *ports[RNP_MAX_PORT_OF_PF];
+ uint16_t closed_ports;
+ uint16_t inited_ports;
+ bool intr_registed;
};
+#define RNP_DEV_TO_PORT(eth_dev) \
+ ((struct rnp_eth_port *)(eth_dev)->data->dev_private)
+#define RNP_DEV_TO_ADAPTER(eth_dev) \
+ ((struct rnp_eth_adapter *)(RNP_DEV_TO_PORT(eth_dev))->hw->back)
#define RNP_DEV_TO_PROC_PRIV(eth_dev) \
((struct rnp_proc_priv *)(eth_dev)->process_private)
+#define RNP_DEV_PP_TO_MBX_OPS(priv) \
+ (((RNP_DEV_TO_PROC_PRIV(priv))->mbx_ops))
+#define RNP_DEV_PP_TO_MAC_OPS(priv) \
+ (((RNP_DEV_TO_PROC_PRIV(priv))->mac_ops))
+
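+/* number of ports a PF owns, keyed by device-id bits [6:5]:
+ * 0 -> 1 port, 1 -> 2 ports, otherwise 4 ports
+ */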
+#define RNP_PF_OWN_PORTS(id) (((id) == 0) ? 1 : (((id) == 1) ? 2 : 4))
+
+static inline int
+rnp_pf_is_multiple_ports(uint32_t device_id)
+{
+ uint32_t verbit = (device_id >> 5) & 0x3;
+
+ return RNP_PF_OWN_PORTS(verbit) == 1 ? 0 : 1;
+}
#endif /* __RNP_H__ */
diff --git a/drivers/net/rnp/rnp_ethdev.c b/drivers/net/rnp/rnp_ethdev.c
index 389c6ad..ae417a6 100644
--- a/drivers/net/rnp/rnp_ethdev.c
+++ b/drivers/net/rnp/rnp_ethdev.c
@@ -3,39 +3,340 @@
*/
#include <ethdev_pci.h>
+#include <ethdev_driver.h>
#include <rte_io.h>
#include <rte_malloc.h>
#include "rnp.h"
+#include "rnp_logs.h"
+#include "base/rnp_mbx.h"
+#include "base/rnp_mbx_fw.h"
+#include "base/rnp_mac.h"
+#include "base/rnp_common.h"
+
+static struct rte_eth_dev *
+rnp_alloc_eth_port(struct rte_pci_device *pci, char *name)
+{
+ struct rte_eth_dev *eth_dev = NULL;
+ struct rnp_eth_port *port = NULL;
+
+ eth_dev = rte_eth_dev_allocate(name);
+ if (!eth_dev) {
+ RNP_PMD_ERR("Could not allocate eth_dev for %s", name);
+ return NULL;
+ }
+ port = rte_zmalloc_socket(name,
+ sizeof(*port),
+ RTE_CACHE_LINE_SIZE,
+ pci->device.numa_node);
+ if (!port) {
+ RNP_PMD_ERR("Could not allocate rnp_eth_port for %s", name);
+ goto fail_calloc;
+ }
+ rte_eth_copy_pci_info(eth_dev, pci);
+ eth_dev->data->dev_private = port;
+ eth_dev->device = &pci->device;
+
+ return eth_dev;
+fail_calloc:
+ rte_free(port);
+ rte_eth_dev_release_port(eth_dev);
+
+ return NULL;
+}
+
+static void rnp_dev_interrupt_handler(void *param)
+{
+ RTE_SET_USED(param);
+}
+
+static int rnp_dev_stop(struct rte_eth_dev *eth_dev)
+{
+ RTE_SET_USED(eth_dev);
+
+ return 0;
+}
+
+static int rnp_dev_close(struct rte_eth_dev *eth_dev)
+{
+ struct rnp_eth_adapter *adapter = RNP_DEV_TO_ADAPTER(eth_dev);
+ struct rte_pci_device *pci_dev;
+ int ret = 0;
+
+ PMD_INIT_FUNC_TRACE();
+
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return 0;
+ ret = rnp_dev_stop(eth_dev);
+ if (ret < 0)
+ return ret;
+ adapter->closed_ports++;
+ /* tear the adapter down only when the last port is closed;
+ * counting first also avoids touching adapter after rte_free()
+ */
+ if (adapter->closed_ports == adapter->inited_ports) {
+ pci_dev = RTE_DEV_TO_PCI(eth_dev->device);
+ if (adapter->intr_registed) {
+ /* disable uio intr before callback unregister */
+ rte_intr_disable(pci_dev->intr_handle);
+ rte_intr_callback_unregister(pci_dev->intr_handle,
+ rnp_dev_interrupt_handler,
+ (void *)eth_dev);
+ adapter->intr_registed = false;
+ }
+ rnp_dma_mem_free(&adapter->hw, &adapter->hw.fw_info.mem);
+ rte_free(adapter);
+ }
+
+ return 0;
+}
+
+/* Features supported by this driver */
+static const struct eth_dev_ops rnp_eth_dev_ops = {
+ .dev_close = rnp_dev_close,
+ .dev_stop = rnp_dev_stop,
+};
+
+static void
+rnp_setup_port_attr(struct rnp_eth_port *port,
+ struct rte_eth_dev *eth_dev,
+ uint8_t sw_id)
+{
+ struct rnp_port_attr *attr = &port->attr;
+ struct rnp_hw *hw = port->hw;
+ uint32_t lane;
+
+ PMD_INIT_FUNC_TRACE();
+
+ lane = hw->phy_port_ids[sw_id] & (hw->max_port_num - 1);
+ attr->port_id = eth_dev->data->port_id;
+ attr->port_offset = RNP_E_REG_RD(hw, RNP_TC_PORT_OFFSET(lane));
+ attr->nr_lane = lane;
+ attr->sw_id = sw_id;
+ attr->max_mac_addrs = 1;
+
+ RNP_PMD_INFO("PF[%d] SW-ETH-PORT[%d]<->PHY_LANE[%d]\n",
+ hw->mbx.pf_num, sw_id, lane);
+}
+
+static int
+rnp_init_port_resource(struct rnp_eth_adapter *adapter,
+ struct rte_eth_dev *eth_dev,
+ char *name,
+ uint8_t p_id)
+{
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(eth_dev);
+ struct rte_pci_device *pci_dev = adapter->pdev;
+ char mac_str[RTE_ETHER_ADDR_FMT_SIZE] = " ";
+
+ PMD_INIT_FUNC_TRACE();
+
+ port->eth_dev = eth_dev;
+ port->hw = &adapter->hw;
+
+ eth_dev->dev_ops = &rnp_eth_dev_ops;
+ eth_dev->device = &pci_dev->device;
+ eth_dev->data->mtu = RTE_ETHER_MTU;
+
+ rnp_setup_port_attr(port, eth_dev, p_id);
+ eth_dev->data->mac_addrs = rte_zmalloc(name,
+ sizeof(struct rte_ether_addr) *
+ port->attr.max_mac_addrs, 0);
+ if (!eth_dev->data->mac_addrs) {
+ RNP_PMD_ERR("zmalloc for mac failed! Exiting.");
+ return -ENOMEM;
+ }
+ rnp_get_mac_addr(port, port->mac_addr.addr_bytes);
+ rte_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE,
+ &port->mac_addr);
+ RNP_PMD_INFO("get mac addr from firmware %s\n", mac_str);
+ if (!rte_is_valid_assigned_ether_addr(&port->mac_addr)) {
+ RNP_PMD_WARN("get mac_addr is invalid, just use random");
+ rte_eth_random_addr(port->mac_addr.addr_bytes);
+ }
+ rte_ether_addr_copy(&port->mac_addr, ð_dev->data->mac_addrs[0]);
+
+ adapter->ports[p_id] = port;
+ adapter->inited_ports++;
+
+ return 0;
+}
+
+static int
+rnp_proc_priv_init(struct rte_eth_dev *dev)
+{
+ struct rnp_proc_priv *priv;
+
+ priv = rte_zmalloc_socket("rnp_proc_priv",
+ sizeof(struct rnp_proc_priv),
+ RTE_CACHE_LINE_SIZE,
+ dev->device->numa_node);
+ if (!priv)
+ return -ENOMEM;
+ dev->process_private = priv;
+
+ return 0;
+}
static int
rnp_eth_dev_init(struct rte_eth_dev *eth_dev)
{
- RTE_SET_USED(eth_dev);
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(eth_dev);
+ char name[RTE_ETH_NAME_MAX_LEN] = " ";
+ struct rnp_eth_adapter *adapter;
+ struct rte_eth_dev *sub_eth_dev;
+ struct rnp_hw *hw;
+ uint16_t p_id;
+ int ret = -1;
+
+ PMD_INIT_FUNC_TRACE();
- return -ENODEV;
+ snprintf(name, sizeof(name), "rnp_adapter_%d", eth_dev->data->port_id);
+ adapter = rte_zmalloc(name, sizeof(struct rnp_eth_adapter), 0);
+ if (!adapter) {
+ RNP_PMD_ERR("rnp_adapter zmalloc mem failed");
+ return -ENOMEM;
+ }
+ hw = &adapter->hw;
+ adapter->pdev = pci_dev;
+ adapter->eth_dev = eth_dev;
+ adapter->ports[0] = port;
+ hw->back = (void *)adapter;
+ port->eth_dev = eth_dev;
+ port->hw = hw;
+
+ hw->e_ctrl = (u8 *)pci_dev->mem_resource[4].addr;
+ hw->c_ctrl = (u8 *)pci_dev->mem_resource[0].addr;
+ hw->c_blen = pci_dev->mem_resource[0].len;
+ hw->device_id = pci_dev->id.device_id;
+ hw->vendor_id = pci_dev->id.vendor_id;
+ hw->mbx.en_vfs = pci_dev->max_vfs;
+ if (hw->mbx.en_vfs > hw->max_vfs) {
+ ret = -EINVAL;
+ RNP_PMD_ERR("sriov vfs max support 64");
+ goto free_ad;
+ }
+
+ strlcpy(hw->device_name, pci_dev->device.name,
+ sizeof(hw->device_name));
+ ret = rnp_proc_priv_init(eth_dev);
+ if (ret < 0) {
+ RNP_PMD_ERR("proc_priv_alloc failed");
+ goto free_ad;
+ }
+ ret = rnp_init_mbx_pf(hw);
+ if (ret < 0) {
+ RNP_PMD_ERR("mailbox hardware init failed");
+ goto free_ad;
+ }
+ ret = rnp_init_hw(hw);
+ if (ret < 0) {
+ RNP_PMD_ERR("Hardware initialization failed");
+ goto free_ad;
+ }
+ ret = rnp_setup_common_ops(hw);
+ if (ret < 0) {
+ RNP_PMD_ERR("hardware common ops setup failed");
+ goto free_ad;
+ }
+ for (p_id = 0; p_id < hw->max_port_num; p_id++) {
+ /* port 0 resources were already allocated at probe time */
+ if (!p_id) {
+ sub_eth_dev = eth_dev;
+ } else {
+ memset(name, 0, sizeof(name));
+ snprintf(name, sizeof(name),
+ "%s_%d", hw->device_name, p_id);
+ sub_eth_dev = rnp_alloc_eth_port(pci_dev, name);
+ if (!sub_eth_dev) {
+ RNP_PMD_ERR("%s sub_eth alloc failed",
+ hw->device_name);
+ ret = -ENOMEM;
+ goto eth_alloc_error;
+ }
+ ret = rnp_proc_priv_init(sub_eth_dev);
+ if (ret < 0) {
+ RNP_PMD_ERR("proc_priv_alloc failed");
+ goto eth_alloc_error;
+ }
+ rte_memcpy(sub_eth_dev->process_private,
+ eth_dev->process_private,
+ sizeof(struct rnp_proc_priv));
+ }
+ ret = rnp_init_port_resource(adapter, sub_eth_dev, name, p_id);
+ if (ret)
+ goto eth_alloc_error;
+ if (p_id) {
+ /* port 0 will be probed by the platform */
+ rte_eth_dev_probing_finish(sub_eth_dev);
+ }
+ }
+ /* enable link update event interrupt */
+ rte_intr_callback_register(intr_handle,
+ rnp_dev_interrupt_handler, adapter);
+ rte_intr_enable(intr_handle);
+ adapter->intr_registed = true;
+ hw->fw_info.fw_irq_en = true;
+
+ return 0;
+
+eth_alloc_error:
+ for (p_id = 0; p_id < adapter->inited_ports; p_id++) {
+ port = adapter->ports[p_id];
+ if (!port)
+ continue;
+ if (port->eth_dev) {
+ rnp_dev_close(port->eth_dev);
+ /* only release eth_devs we allocated ourselves */
+ if (port->eth_dev != adapter->eth_dev)
+ rte_eth_dev_release_port(port->eth_dev);
+ }
+ }
+free_ad:
+ if (hw->fw_info.cookie_pool)
+ rnp_dma_mem_free(hw, &hw->fw_info.mem);
+ rte_free(adapter);
+
+ return ret;
}
static int
rnp_eth_dev_uninit(struct rte_eth_dev *eth_dev)
{
- RTE_SET_USED(eth_dev);
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ uint16_t port_id;
+ int err = 0;
+
+ /* Free up other ports and all resources */
+ RTE_ETH_FOREACH_DEV_OF(port_id, &pci_dev->device)
+ err |= rte_eth_dev_close(port_id);
- return -ENODEV;
+ return err == 0 ? 0 : -EIO;
}
static int
rnp_pci_remove(struct rte_pci_device *pci_dev)
{
+ char device_name[RTE_ETH_NAME_MAX_LEN] = "";
struct rte_eth_dev *eth_dev;
+ uint16_t idx = 0;
int rc;
- eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
-
+ /* find a port belonging to this PF on which dev_close was not called */
+ for (idx = 0; idx < RNP_MAX_PORT_OF_PF; idx++) {
+ if (idx)
+ snprintf(device_name, sizeof(device_name), "%s_%d",
+ pci_dev->device.name,
+ idx);
+ else
+ snprintf(device_name, sizeof(device_name), "%s",
+ pci_dev->device.name);
+ eth_dev = rte_eth_dev_allocated(device_name);
+ if (eth_dev)
+ break;
+ }
if (eth_dev) {
/* Cleanup eth dev */
- rc = rte_eth_dev_pci_generic_remove(pci_dev,
- rnp_eth_dev_uninit);
+ rc = rnp_eth_dev_uninit(eth_dev);
if (rc)
return rc;
}
--
1.8.3.1
^ permalink raw reply [flat|nested] 29+ messages in thread
* [PATCH v7 06/28] net/rnp: add get device information operation
2025-02-08 2:43 [PATCH v7 00/28] [v6]drivers/net Add Support mucse N10 Pmd Driver Wenbo Cao
` (4 preceding siblings ...)
2025-02-08 2:43 ` [PATCH v7 05/28] net/rnp: add device init and uninit Wenbo Cao
@ 2025-02-08 2:43 ` Wenbo Cao
2025-02-08 2:43 ` [PATCH v7 07/28] net/rnp: add support mac promisc mode Wenbo Cao
` (21 subsequent siblings)
27 siblings, 0 replies; 29+ messages in thread
From: Wenbo Cao @ 2025-02-08 2:43 UTC (permalink / raw)
To: thomas, Wenbo Cao; +Cc: stephen, dev, ferruh.yigit, andrew.rybchenko, yaojun
Add the dev_infos_get operation to report device hardware capabilities.
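For context, applications reach this through the generic ethdev API;
a minimal sketch (hypothetical snippet, assuming an initialized EAL
and a valid port_id):

    struct rte_eth_dev_info info;

    /* invokes the dev_infos_get callback added by this patch */
    if (rte_eth_dev_info_get(port_id, &info) == 0)
        printf("max rxq %u txq %u speed capa 0x%x\n",
               info.max_rx_queues, info.max_tx_queues,
               info.speed_capa);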
Signed-off-by: Wenbo Cao <caowenbo@mucse.com>
---
doc/guides/nics/features/rnp.ini | 1 +
drivers/net/rnp/base/rnp_fw_cmd.c | 20 +++++++
drivers/net/rnp/base/rnp_fw_cmd.h | 80 +++++++++++++++++++++++++++
drivers/net/rnp/base/rnp_mbx_fw.c | 58 +++++++++++++++++++
drivers/net/rnp/base/rnp_mbx_fw.h | 1 +
drivers/net/rnp/rnp.h | 73 +++++++++++++++++++++++-
drivers/net/rnp/rnp_ethdev.c | 113 +++++++++++++++++++++++++++++++++++++-
7 files changed, 344 insertions(+), 2 deletions(-)
diff --git a/doc/guides/nics/features/rnp.ini b/doc/guides/nics/features/rnp.ini
index 2ad04ee..6766130 100644
--- a/doc/guides/nics/features/rnp.ini
+++ b/doc/guides/nics/features/rnp.ini
@@ -4,5 +4,6 @@
; Refer to default.ini for the full list of available PMD features.
;
[Features]
+Speed capabilities = Y
Linux = Y
x86-64 = Y
diff --git a/drivers/net/rnp/base/rnp_fw_cmd.c b/drivers/net/rnp/base/rnp_fw_cmd.c
index 064ba9e..34a88a1 100644
--- a/drivers/net/rnp/base/rnp_fw_cmd.c
+++ b/drivers/net/rnp/base/rnp_fw_cmd.c
@@ -51,6 +51,23 @@
arg->pfvf_num = req_arg->param1;
}
+static inline void
+rnp_build_get_lane_status_req(struct rnp_mbx_fw_cmd_req *req,
+ struct rnp_fw_req_arg *req_arg,
+ void *cookie)
+{
+ struct rnp_get_lane_st_req *arg = (struct rnp_get_lane_st_req *)req->data;
+
+ req->flags = 0;
+ req->opcode = RNP_GET_LANE_STATUS;
+ req->datalen = sizeof(*arg);
+ req->cookie = cookie;
+ req->reply_lo = 0;
+ req->reply_hi = 0;
+
+ arg->nr_lane = req_arg->param0;
+}
+
int rnp_build_fwcmd_req(struct rnp_mbx_fw_cmd_req *req,
struct rnp_fw_req_arg *arg,
void *cookie)
@@ -67,6 +84,9 @@ int rnp_build_fwcmd_req(struct rnp_mbx_fw_cmd_req *req,
case RNP_GET_MAC_ADDRESS:
rnp_build_get_macaddress_req(req, arg, cookie);
break;
+ case RNP_GET_LANE_STATUS:
+ rnp_build_get_lane_status_req(req, arg, cookie);
+ break;
default:
err = -EOPNOTSUPP;
}
diff --git a/drivers/net/rnp/base/rnp_fw_cmd.h b/drivers/net/rnp/base/rnp_fw_cmd.h
index fb7a0af..c34fc5c 100644
--- a/drivers/net/rnp/base/rnp_fw_cmd.h
+++ b/drivers/net/rnp/base/rnp_fw_cmd.h
@@ -129,6 +129,80 @@ struct rnp_mac_addr_rep {
u32 pcode;
};
+#define RNP_SPEED_CAP_UNKNOWN (0)
+#define RNP_SPEED_CAP_10M_FULL RTE_BIT32(2)
+#define RNP_SPEED_CAP_100M_FULL RTE_BIT32(3)
+#define RNP_SPEED_CAP_1GB_FULL RTE_BIT32(4)
+#define RNP_SPEED_CAP_10GB_FULL RTE_BIT32(5)
+#define RNP_SPEED_CAP_40GB_FULL RTE_BIT32(6)
+#define RNP_SPEED_CAP_25GB_FULL RTE_BIT32(7)
+#define RNP_SPEED_CAP_50GB_FULL RTE_BIT32(8)
+#define RNP_SPEED_CAP_100GB_FULL RTE_BIT32(9)
+#define RNP_SPEED_CAP_10M_HALF RTE_BIT32(10)
+#define RNP_SPEED_CAP_100M_HALF RTE_BIT32(11)
+#define RNP_SPEED_CAP_1GB_HALF RTE_BIT32(12)
+
+enum rnp_pma_phy_type {
+ RNP_PHY_TYPE_NONE = 0,
+ RNP_PHY_TYPE_1G_BASE_KX,
+ RNP_PHY_TYPE_SGMII,
+ RNP_PHY_TYPE_10G_BASE_KR,
+ RNP_PHY_TYPE_25G_BASE_KR,
+ RNP_PHY_TYPE_40G_BASE_KR4,
+ RNP_PHY_TYPE_10G_BASE_SR,
+ RNP_PHY_TYPE_40G_BASE_SR4,
+ RNP_PHY_TYPE_40G_BASE_CR4,
+ RNP_PHY_TYPE_40G_BASE_LR4,
+ RNP_PHY_TYPE_10G_BASE_LR,
+ RNP_PHY_TYPE_10G_BASE_ER,
+ RNP_PHY_TYPE_10G_TP,
+};
+
+struct rnp_lane_stat_rep {
+ u8 nr_lane; /* 0-3, the hw lane corresponding to the current port */
+ u8 pci_gen : 4; /* nic cur pci speed genX: 1,2,3 */
+ u8 pci_lanes : 4; /* nic cur pci x1 x2 x4 x8 x16 */
+ u8 pma_type;
+ u8 phy_type; /* interface media type */
+
+ u16 linkup : 1; /* cur port link state */
+ u16 duplex : 1; /* duplex state only RJ45 valid */
+ u16 autoneg : 1; /* autoneg state */
+ u16 fec : 1; /* fec state */
+ u16 rev_an : 1;
+ u16 link_traing : 1; /* link-training state */
+ u16 media_availble : 1;
+ u16 is_sgmii : 1; /* 1: Twisted Pair 0: FIBRE */
+ u16 link_fault : 4;
+#define RNP_LINK_LINK_FAULT RTE_BIT32(0)
+#define RNP_LINK_TX_FAULT RTE_BIT32(1)
+#define RNP_LINK_RX_FAULT RTE_BIT32(2)
+#define RNP_LINK_REMOTE_FAULT RTE_BIT32(3)
+ u16 is_backplane : 1; /* Backplane Mode */
+ u16 is_speed_10G_1G_auto_switch_enabled : 1;
+ u16 rsvd0 : 2;
+ union {
+ u8 phy_addr; /* Phy MDIO address */
+ struct {
+ u8 mod_abs : 1;
+ u8 fault : 1;
+ u8 tx_dis : 1;
+ u8 los : 1;
+ u8 rsvd1 : 4;
+ } sfp;
+ };
+ u8 sfp_connector;
+ u32 speed; /* Current Speed Value */
+
+ u32 si_main;
+ u32 si_pre;
+ u32 si_post;
+ u32 si_tx_boost;
+ u32 supported_link; /* link caps the nic currently supports */
+ u32 phy_id;
+ u32 rsvd;
+} _PACKED_ALIGN4;
+
#define RNP_FW_REP_DATA_NUM (40)
struct rnp_mbx_fw_cmd_reply {
u16 flags;
@@ -174,6 +248,12 @@ struct rnp_get_phy_ablity {
u32 rsv[7];
} _PACKED_ALIGN4;
+struct rnp_get_lane_st_req {
+ u32 nr_lane;
+
+ u32 rsv[7];
+} _PACKED_ALIGN4;
+
struct rnp_mbx_fw_cmd_req {
u16 flags;
u16 opcode;
diff --git a/drivers/net/rnp/base/rnp_mbx_fw.c b/drivers/net/rnp/base/rnp_mbx_fw.c
index 6c6f713..893a460 100644
--- a/drivers/net/rnp/base/rnp_mbx_fw.c
+++ b/drivers/net/rnp/base/rnp_mbx_fw.c
@@ -336,3 +336,61 @@ int rnp_mbx_fw_reset_phy(struct rnp_hw *hw)
return -ENODATA;
}
+
+int
+rnp_mbx_fw_get_lane_stat(struct rnp_eth_port *port)
+{
+ struct rnp_phy_meta *phy_meta = &port->attr.phy_meta;
+ u8 data[RNP_FW_REP_DATA_NUM] = {0};
+ struct rnp_lane_stat_rep *lane_stat;
+ u32 nr_lane = port->attr.nr_lane;
+ struct rnp_fw_req_arg arg;
+ u32 user_set_speed = 0;
+ int err;
+
+ RTE_BUILD_BUG_ON(sizeof(*lane_stat) != RNP_FW_REP_DATA_NUM);
+ memset(&arg, 0, sizeof(arg));
+ lane_stat = (struct rnp_lane_stat_rep *)&data;
+ arg.opcode = RNP_GET_LANE_STATUS;
+ arg.param0 = nr_lane;
+
+ err = rnp_fw_send_cmd(port, &arg, &data);
+ if (err) {
+ RNP_PMD_LOG(ERR, "%s: failed. err:%d\n", __func__, err);
+ return err;
+ }
+ phy_meta->supported_link = lane_stat->supported_link;
+ phy_meta->is_backplane = lane_stat->is_backplane;
+ phy_meta->phy_identifier = lane_stat->phy_addr;
+ phy_meta->link_autoneg = lane_stat->autoneg;
+ phy_meta->link_duplex = lane_stat->duplex;
+ phy_meta->phy_type = lane_stat->phy_type;
+ phy_meta->is_sgmii = lane_stat->is_sgmii;
+ phy_meta->fec = lane_stat->fec;
+
+ if (phy_meta->is_sgmii) {
+ phy_meta->media_type = RNP_MEDIA_TYPE_COPPER;
+ phy_meta->supported_link |=
+ RNP_SPEED_CAP_100M_HALF | RNP_SPEED_CAP_10M_HALF;
+ } else if (phy_meta->is_backplane) {
+ phy_meta->media_type = RNP_MEDIA_TYPE_BACKPLANE;
+ } else {
+ phy_meta->media_type = RNP_MEDIA_TYPE_FIBER;
+ }
+ if (phy_meta->phy_type == RNP_PHY_TYPE_10G_TP) {
+ phy_meta->supported_link |= RNP_SPEED_CAP_1GB_FULL;
+ phy_meta->supported_link |= RNP_SPEED_CAP_10GB_FULL;
+ }
+ if (!phy_meta->link_autoneg) {
+ /* firmware can't report this info, fall back to the user setting */
+ if (phy_meta->media_type == RNP_MEDIA_TYPE_COPPER) {
+ user_set_speed = port->eth_dev->data->dev_conf.link_speeds;
+ if (user_set_speed & RTE_ETH_LINK_SPEED_FIXED)
+ phy_meta->link_autoneg = 0;
+ else
+ phy_meta->link_autoneg = 1;
+ }
+ }
+
+ return 0;
+}
diff --git a/drivers/net/rnp/base/rnp_mbx_fw.h b/drivers/net/rnp/base/rnp_mbx_fw.h
index 255d913..fd0110b 100644
--- a/drivers/net/rnp/base/rnp_mbx_fw.h
+++ b/drivers/net/rnp/base/rnp_mbx_fw.h
@@ -12,6 +12,7 @@
int rnp_mbx_fw_get_macaddr(struct rnp_eth_port *port, u8 *mac_addr);
int rnp_mbx_fw_get_capability(struct rnp_eth_port *port);
+int rnp_mbx_fw_get_lane_stat(struct rnp_eth_port *port);
int rnp_mbx_fw_reset_phy(struct rnp_hw *hw);
int rnp_fw_init(struct rnp_hw *hw);
diff --git a/drivers/net/rnp/rnp.h b/drivers/net/rnp/rnp.h
index 0b33d5b..19ef493 100644
--- a/drivers/net/rnp/rnp.h
+++ b/drivers/net/rnp/rnp.h
@@ -14,10 +14,81 @@
#define RNP_MAX_VF_NUM (64)
#define RNP_MISC_VEC_ID RTE_INTR_VEC_ZERO_OFFSET
/* maximum frame size supported */
+#define RNP_ETH_OVERHEAD \
+ (RTE_ETHER_HDR_LEN + RTE_VLAN_HLEN * 2)
#define RNP_MAC_MAXFRM_SIZE (9590)
+#define RNP_RX_MAX_MTU_SEG (64)
+#define RNP_TX_MAX_MTU_SEG (32)
+#define RNP_RX_MAX_SEG (150)
+#define RNP_TX_MAX_SEG (UINT8_MAX)
+#define RNP_MIN_DMA_BUF_SIZE (1024)
+/* rss support info */
+#define RNP_RSS_INDIR_SIZE (128)
+#define RNP_MAX_HASH_KEY_SIZE (10)
+#define RNP_SUPPORT_RSS_OFFLOAD_ALL ( \
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP |\
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_FRAG_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+ RTE_ETH_RSS_IPV6_EX | \
+ RTE_ETH_RSS_IPV6_TCP_EX | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_IPV6_UDP_EX | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
+/* Ring info special */
+#define RNP_MAX_BD_COUNT (4096)
+#define RNP_MIN_BD_COUNT (128)
+#define RNP_BD_ALIGN (2)
+/* Hardware resource info */
+#define RNP_MAX_MSIX_NUM (64)
+#define RNP_MAX_RX_QUEUE_NUM (128)
+#define RNP_MAX_TX_QUEUE_NUM (128)
+/* l2 filter hardware resource info */
+#define RNP_MAX_MAC_ADDRS (128) /* max unicast extract mac num */
+#define RNP_MAX_HASH_UC_MAC_SIZE (4096) /* max unicast hash mac num */
+#define RNP_MAX_HASH_MC_MAC_SIZE (4096) /* max multicast hash mac num */
+#define RNP_MAX_UC_HASH_TABLE (128) /* max unicast hash mac filter table */
+#define RNP_MAC_MC_HASH_TABLE (128) /* max multicast hash mac filter table*/
+/* hardware media type */
+enum rnp_media_type {
+ RNP_MEDIA_TYPE_UNKNOWN,
+ RNP_MEDIA_TYPE_FIBER,
+ RNP_MEDIA_TYPE_COPPER,
+ RNP_MEDIA_TYPE_BACKPLANE,
+ RNP_MEDIA_TYPE_NONE,
+};
+
+struct rnp_phy_meta {
+ uint32_t speed_cap;
+ uint32_t supported_link;
+ uint16_t link_duplex;
+ uint16_t link_autoneg;
+ uint32_t phy_identifier;
+ uint16_t phy_type;
+ uint8_t media_type;
+ bool is_sgmii;
+ bool is_backplane;
+ bool fec;
+};
+
struct rnp_port_attr {
- uint16_t max_mac_addrs; /* max supported mac addresses */
+ uint16_t max_mac_addrs; /* max supported mac addresses */
+ uint16_t max_uc_mac_hash; /* max hash unicast mac size */
+ uint16_t max_mc_mac_hash; /* max hash multicast mac size */
+ uint16_t uc_hash_tb_size; /* max unicast hash table block num */
+ uint16_t mc_hash_tb_size; /* max multicast hash table block num */
+ uint16_t max_rx_queues; /* belong to this port rxq resource */
+ uint16_t max_tx_queues; /* belong to this port txq resource */
+
+ struct rnp_phy_meta phy_meta;
+
uint16_t port_id; /* platform manage port sequence id */
uint8_t port_offset; /* port queue offset */
uint8_t sw_id; /* software port init sequence id */
diff --git a/drivers/net/rnp/rnp_ethdev.c b/drivers/net/rnp/rnp_ethdev.c
index ae417a6..a7404ee 100644
--- a/drivers/net/rnp/rnp_ethdev.c
+++ b/drivers/net/rnp/rnp_ethdev.c
@@ -10,6 +10,7 @@
#include "rnp.h"
#include "rnp_logs.h"
#include "base/rnp_mbx.h"
+#include "base/rnp_fw_cmd.h"
#include "base/rnp_mbx_fw.h"
#include "base/rnp_mac.h"
#include "base/rnp_common.h"
@@ -88,10 +89,111 @@ static int rnp_dev_close(struct rte_eth_dev *eth_dev)
return 0;
}
+static uint32_t
+rnp_get_speed_caps(struct rte_eth_dev *dev)
+{
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(dev);
+ uint32_t speed_cap = 0;
+ uint32_t i = 0, speed;
+ uint32_t support_link;
+ uint32_t link_types;
+
+ support_link = port->attr.phy_meta.supported_link;
+ link_types = __builtin_popcountl(support_link);
+
+ if (!link_types)
+ return 0;
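+ /* walk the set capability bits: ffs() picks the lowest set bit,
+ * which is translated and then cleared on each iteration
+ */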
+ for (i = 0; i < link_types; i++) {
+ speed = ffs(support_link) - 1;
+ switch (RTE_BIT32(speed)) {
+ case RNP_SPEED_CAP_10M_FULL:
+ speed_cap |= RTE_ETH_LINK_SPEED_10M;
+ break;
+ case RNP_SPEED_CAP_100M_FULL:
+ speed_cap |= RTE_ETH_LINK_SPEED_100M;
+ break;
+ case RNP_SPEED_CAP_1GB_FULL:
+ speed_cap |= RTE_ETH_LINK_SPEED_1G;
+ break;
+ case RNP_SPEED_CAP_10GB_FULL:
+ speed_cap |= RTE_ETH_LINK_SPEED_10G;
+ break;
+ case RNP_SPEED_CAP_40GB_FULL:
+ speed_cap |= RTE_ETH_LINK_SPEED_40G;
+ break;
+ case RNP_SPEED_CAP_25GB_FULL:
+ speed_cap |= RTE_ETH_LINK_SPEED_25G;
+ break;
+ case RNP_SPEED_CAP_10M_HALF:
+ speed_cap |= RTE_ETH_LINK_SPEED_10M_HD;
+ break;
+ case RNP_SPEED_CAP_100M_HALF:
+ speed_cap |= RTE_ETH_LINK_SPEED_100M_HD;
+ break;
+ }
+ support_link &= ~RTE_BIT32(speed);
+ }
+ if (!port->attr.phy_meta.link_autoneg)
+ speed_cap |= RTE_ETH_LINK_SPEED_FIXED;
+
+ return speed_cap;
+}
+
+static int rnp_dev_infos_get(struct rte_eth_dev *eth_dev,
+ struct rte_eth_dev_info *dev_info)
+{
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(eth_dev);
+
+ PMD_INIT_FUNC_TRACE();
+
+ dev_info->rx_desc_lim = (struct rte_eth_desc_lim){
+ .nb_max = RNP_MAX_BD_COUNT,
+ .nb_min = RNP_MIN_BD_COUNT,
+ .nb_align = RNP_BD_ALIGN,
+ .nb_seg_max = RNP_RX_MAX_SEG,
+ .nb_mtu_seg_max = RNP_RX_MAX_MTU_SEG,
+ };
+ dev_info->tx_desc_lim = (struct rte_eth_desc_lim){
+ .nb_max = RNP_MAX_BD_COUNT,
+ .nb_min = RNP_MIN_BD_COUNT,
+ .nb_align = RNP_BD_ALIGN,
+ .nb_seg_max = RNP_TX_MAX_SEG,
+ .nb_mtu_seg_max = RNP_TX_MAX_MTU_SEG,
+ };
+
+ dev_info->max_rx_pktlen = RNP_MAC_MAXFRM_SIZE;
+ dev_info->min_mtu = RTE_ETHER_MIN_MTU;
+ dev_info->max_mtu = dev_info->max_rx_pktlen - RNP_ETH_OVERHEAD;
+ dev_info->min_rx_bufsize = RNP_MIN_DMA_BUF_SIZE;
+ dev_info->max_rx_queues = port->attr.max_rx_queues;
+ dev_info->max_tx_queues = port->attr.max_tx_queues;
+ /* mac filter info */
+ dev_info->max_mac_addrs = port->attr.max_mac_addrs;
+ dev_info->max_hash_mac_addrs = port->attr.max_uc_mac_hash;
+ /* only four-tuple RSS offload is supported */
+ dev_info->flow_type_rss_offloads = RNP_SUPPORT_RSS_OFFLOAD_ALL;
+ dev_info->hash_key_size = RNP_MAX_HASH_KEY_SIZE * sizeof(uint32_t);
+ dev_info->reta_size = RNP_RSS_INDIR_SIZE;
+ /* speed cap info */
+ dev_info->speed_capa = rnp_get_speed_caps(eth_dev);
+
+ dev_info->default_rxconf = (struct rte_eth_rxconf) {
+ .rx_drop_en = 0,
+ .offloads = 0,
+ };
+
+ dev_info->default_txconf = (struct rte_eth_txconf) {
+ .offloads = 0,
+ };
+
+ return 0;
+}
+
/* Features supported by this driver */
static const struct eth_dev_ops rnp_eth_dev_ops = {
.dev_close = rnp_dev_close,
.dev_stop = rnp_dev_stop,
+ .dev_infos_get = rnp_dev_infos_get,
};
static void
@@ -110,7 +212,16 @@ static int rnp_dev_close(struct rte_eth_dev *eth_dev)
attr->port_offset = RNP_E_REG_RD(hw, RNP_TC_PORT_OFFSET(lane));
attr->nr_lane = lane;
attr->sw_id = sw_id;
- attr->max_mac_addrs = 1;
+
+ attr->max_rx_queues = RNP_MAX_RX_QUEUE_NUM / hw->max_port_num;
+ attr->max_tx_queues = RNP_MAX_TX_QUEUE_NUM / hw->max_port_num;
+
+ attr->max_mac_addrs = RNP_MAX_MAC_ADDRS;
+ attr->max_uc_mac_hash = RNP_MAX_HASH_UC_MAC_SIZE;
+ attr->max_mc_mac_hash = RNP_MAX_HASH_MC_MAC_SIZE;
+ attr->uc_hash_tb_size = RNP_MAX_UC_HASH_TABLE;
+ attr->mc_hash_tb_size = RNP_MAC_MC_HASH_TABLE;
+ rnp_mbx_fw_get_lane_stat(port);
RNP_PMD_INFO("PF[%d] SW-ETH-PORT[%d]<->PHY_LANE[%d]\n",
hw->mbx.pf_num, sw_id, lane);
--
1.8.3.1
^ permalink raw reply [flat|nested] 29+ messages in thread
* [PATCH v7 07/28] net/rnp: add support mac promisc mode
2025-02-08 2:43 [PATCH v7 00/28] [v6]drivers/net Add Support mucse N10 Pmd Driver Wenbo Cao
` (5 preceding siblings ...)
2025-02-08 2:43 ` [PATCH v7 06/28] net/rnp: add get device information operation Wenbo Cao
@ 2025-02-08 2:43 ` Wenbo Cao
2025-02-08 2:43 ` [PATCH v7 08/28] net/rnp: add queue setup and release operations Wenbo Cao
` (20 subsequent siblings)
27 siblings, 0 replies; 29+ messages in thread
From: Wenbo Cao @ 2025-02-08 2:43 UTC (permalink / raw)
To: thomas, Wenbo Cao; +Cc: stephen, dev, ferruh.yigit, andrew.rybchenko, yaojun
Add MAC packet filter control supporting unicast promiscuous,
multicast (all-multicast) promiscuous and broadcast accept modes.
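For context, these modes are driven through the standard ethdev
calls; a minimal sketch (hypothetical snippet, port_id assumed
valid and started):

    /* maps to rnp_update_mpfm(port, RNP_MPF_MODE_PROMISC, 1) */
    if (rte_eth_promiscuous_enable(port_id) != 0)
        printf("promiscuous enable failed\n");
    /* maps to rnp_update_mpfm(port, RNP_MPF_MODE_ALLMULTI, 1) */
    if (rte_eth_allmulticast_enable(port_id) != 0)
        printf("allmulticast enable failed\n");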
Signed-off-by: Wenbo Cao <caowenbo@mucse.com>
---
doc/guides/nics/features/rnp.ini | 2 +
doc/guides/nics/rnp.rst | 5 ++
drivers/net/rnp/base/rnp_common.c | 5 ++
drivers/net/rnp/base/rnp_eth_regs.h | 15 +++++
drivers/net/rnp/base/rnp_hw.h | 12 +++-
drivers/net/rnp/base/rnp_mac.c | 114 +++++++++++++++++++++++++++++++++++-
drivers/net/rnp/base/rnp_mac.h | 2 +
drivers/net/rnp/base/rnp_mac_regs.h | 39 ++++++++++++
drivers/net/rnp/base/rnp_osdep.h | 5 ++
drivers/net/rnp/rnp_ethdev.c | 43 ++++++++++++++
10 files changed, 240 insertions(+), 2 deletions(-)
create mode 100644 drivers/net/rnp/base/rnp_mac_regs.h
diff --git a/doc/guides/nics/features/rnp.ini b/doc/guides/nics/features/rnp.ini
index 6766130..65f1ed3 100644
--- a/doc/guides/nics/features/rnp.ini
+++ b/doc/guides/nics/features/rnp.ini
@@ -5,5 +5,7 @@
;
[Features]
Speed capabilities = Y
+Promiscuous mode = Y
+Allmulticast mode = Y
Linux = Y
x86-64 = Y
diff --git a/doc/guides/nics/rnp.rst b/doc/guides/nics/rnp.rst
index 618baa8..62585ac 100644
--- a/doc/guides/nics/rnp.rst
+++ b/doc/guides/nics/rnp.rst
@@ -7,6 +7,11 @@ RNP Poll Mode driver
The RNP ETHDEV PMD (**librte_net_rnp**) provides poll mode ethdev
driver support for the inbuilt network device found in the **Mucse RNP**
+Features
+--------
+
+- Promiscuous mode
+
Prerequisites
-------------
More information can be found at `Mucse, Official Website
diff --git a/drivers/net/rnp/base/rnp_common.c b/drivers/net/rnp/base/rnp_common.c
index 47a979b..3fa2a49 100644
--- a/drivers/net/rnp/base/rnp_common.c
+++ b/drivers/net/rnp/base/rnp_common.c
@@ -4,6 +4,7 @@
#include "rnp_osdep.h"
#include "rnp_hw.h"
+#include "rnp_mac_regs.h"
#include "rnp_eth_regs.h"
#include "rnp_dma_regs.h"
#include "rnp_common.h"
@@ -28,6 +29,7 @@ int rnp_init_hw(struct rnp_hw *hw)
struct rnp_eth_port *port = RNP_DEV_TO_PORT(hw->back->eth_dev);
u32 version = 0;
int ret = -1;
+ u32 idx = 0;
u32 state;
PMD_INIT_FUNC_TRACE();
@@ -60,6 +62,9 @@ int rnp_init_hw(struct rnp_hw *hw)
if (hw->nic_mode == RNP_DUAL_10G && hw->max_port_num == 2)
RNP_E_REG_WR(hw, RNP_TC_PORT_OFFSET(RNP_TARGET_TC_PORT),
RNP_PORT_OFF_QUEUE_NUM);
+ /* setup mac register ctrl base */
+ for (idx = 0; idx < hw->max_port_num; idx++)
+ hw->mac_base[idx] = (u8 *)hw->e_ctrl + RNP_MAC_BASE_OFFSET(idx);
return 0;
}
diff --git a/drivers/net/rnp/base/rnp_eth_regs.h b/drivers/net/rnp/base/rnp_eth_regs.h
index 6957866..c4519ba 100644
--- a/drivers/net/rnp/base/rnp_eth_regs.h
+++ b/drivers/net/rnp/base/rnp_eth_regs.h
@@ -10,6 +10,21 @@
#define RNP_E_FILTER_EN _ETH_(0x801c)
#define RNP_E_REDIR_EN _ETH_(0x8030)
+/* Mac Host Filter */
+#define RNP_MAC_FCTRL _ETH_(0x9110)
+#define RNP_MAC_FCTRL_MPE RTE_BIT32(8) /* Multicast Promiscuous En */
+#define RNP_MAC_FCTRL_UPE RTE_BIT32(9) /* Unicast Promiscuous En */
+#define RNP_MAC_FCTRL_BAM RTE_BIT32(10) /* Broadcast Accept Mode */
+#define RNP_MAC_FCTRL_BYPASS (\
+ RNP_MAC_FCTRL_MPE | \
+ RNP_MAC_FCTRL_UPE | \
+ RNP_MAC_FCTRL_BAM)
+/* Multicast/unicast mac hash filter ctrl */
+#define RNP_MAC_MCSTCTRL _ETH_(0x9114)
+#define RNP_MAC_HASH_MASK RTE_GENMASK32(11, 0)
+#define RNP_MAC_MULTICASE_TBL_EN RTE_BIT32(2)
+#define RNP_MAC_UNICASE_TBL_EN RTE_BIT32(3)
+
#define RNP_TC_PORT_OFFSET(lane) _ETH_(0xe840 + 0x04 * (lane))
#endif /* _RNP_ETH_REGS_H */
diff --git a/drivers/net/rnp/base/rnp_hw.h b/drivers/net/rnp/base/rnp_hw.h
index e150543..1b31362 100644
--- a/drivers/net/rnp/base/rnp_hw.h
+++ b/drivers/net/rnp/base/rnp_hw.h
@@ -59,9 +59,18 @@ struct rnp_mbx_info {
struct rnp_eth_port;
/* mac operations */
+enum rnp_mpf_modes {
+ RNP_MPF_MODE_NONE = 0,
+ RNP_MPF_MODE_MULTI, /* Multicast Filter */
+ RNP_MPF_MODE_ALLMULTI, /* Multicast Promisc */
+ RNP_MPF_MODE_PROMISC, /* Unicast Promisc */
+};
+
struct rnp_mac_ops {
- /* update mac packet filter mode */
+ /* get default mac address */
int (*get_macaddr)(struct rnp_eth_port *port, u8 *mac);
+ /* update mac packet filter mode */
+ int (*update_mpfm)(struct rnp_eth_port *port, u32 mode, bool en);
};
struct rnp_eth_adapter;
@@ -91,6 +100,7 @@ struct rnp_hw {
struct rnp_eth_adapter *back; /* backup to the adapter handle */
void __iomem *e_ctrl; /* ethernet control bar */
void __iomem *c_ctrl; /* crypto control bar */
+ void __iomem *mac_base[RNP_MAX_PORT_OF_PF]; /* mac ctrl register base */
u32 c_blen; /* crypto bar size */
/* pci device info */
diff --git a/drivers/net/rnp/base/rnp_mac.c b/drivers/net/rnp/base/rnp_mac.c
index b063f4c..2c9499f 100644
--- a/drivers/net/rnp/base/rnp_mac.c
+++ b/drivers/net/rnp/base/rnp_mac.c
@@ -6,10 +6,110 @@
#include "rnp_mbx_fw.h"
#include "rnp_mac.h"
+#include "rnp_eth_regs.h"
+#include "rnp_mac_regs.h"
#include "../rnp.h"
+static int
+rnp_update_mpfm_indep(struct rnp_eth_port *port, u32 mode, bool en)
+{
+ u32 nr_lane = port->attr.nr_lane;
+ struct rnp_hw *hw = port->hw;
+ u32 disable = 0, enable = 0;
+ u32 reg;
+
+ reg = RNP_MAC_REG_RD(hw, nr_lane, RNP_MAC_PKT_FLT_CTRL);
+ /* make sure the 'receive all' mode is disabled */
+ reg &= ~RNP_MAC_RA;
+ switch (mode) {
+ case RNP_MPF_MODE_NONE:
+ break;
+ case RNP_MPF_MODE_MULTI:
+ disable = RNP_MAC_PM | RNP_MAC_PROMISC_EN;
+ enable = RNP_MAC_HPF;
+ break;
+ case RNP_MPF_MODE_ALLMULTI:
+ enable = RNP_MAC_PM;
+ disable = 0;
+ break;
+ case RNP_MPF_MODE_PROMISC:
+ enable = RNP_MAC_PROMISC_EN;
+ disable = 0;
+ break;
+ default:
+ RNP_PMD_LOG(ERR, "update_mpfm argument is invalid");
+ return -EINVAL;
+ }
+ if (en) {
+ reg &= ~disable;
+ reg |= enable;
+ } else {
+ reg &= ~enable;
+ reg |= disable;
+ }
+ /* bypass the common eth-level filter in independent mode */
+ reg |= RNP_MAC_HPF;
+ RNP_MAC_REG_WR(hw, nr_lane, RNP_MAC_PKT_FLT_CTRL, reg);
+ RNP_MAC_REG_WR(hw, nr_lane, RNP_MAC_FCTRL, RNP_MAC_FCTRL_BYPASS);
+
+ return 0;
+}
+
+static int
+rnp_update_mpfm_pf(struct rnp_eth_port *port, u32 mode, bool en)
+{
+ u32 nr_lane = port->attr.nr_lane;
+ struct rnp_hw *hw = port->hw;
+ u32 mac_filter_ctrl;
+ u32 filter_ctrl;
+ u32 bypass_ctrl;
+ u32 bypass = 0;
+
+ bypass_ctrl = RNP_E_REG_RD(hw, RNP_MAC_FCTRL);
+ bypass_ctrl |= RNP_MAC_FCTRL_BAM;
+
+ filter_ctrl = RNP_MAC_MULTICASE_TBL_EN | RNP_MAC_UNICASE_TBL_EN;
+ RNP_E_REG_WR(hw, RNP_MAC_MCSTCTRL, filter_ctrl);
+
+ switch (mode) {
+ case RNP_MPF_MODE_NONE:
+ bypass = 0;
+ break;
+ case RNP_MPF_MODE_MULTI:
+ bypass = RNP_MAC_FCTRL_MPE;
+ break;
+ case RNP_MPF_MODE_ALLMULTI:
+ bypass = RNP_MAC_FCTRL_MPE;
+ break;
+ case RNP_MPF_MODE_PROMISC:
+ bypass = RNP_MAC_FCTRL_UPE | RNP_MAC_FCTRL_MPE;
+ break;
+ default:
+ RNP_PMD_LOG(ERR, "update_mpfm argument is invalid");
+ return -EINVAL;
+ }
+ if (en)
+ bypass_ctrl |= bypass;
+ else
+ bypass_ctrl &= ~bypass;
+
+ RNP_E_REG_WR(hw, RNP_MAC_FCTRL, bypass_ctrl);
+ mac_filter_ctrl = RNP_MAC_REG_RD(hw, nr_lane, RNP_MAC_PKT_FLT_CTRL);
+ mac_filter_ctrl |= RNP_MAC_PM | RNP_MAC_PROMISC_EN;
+ mac_filter_ctrl &= ~RNP_MAC_RA;
+ RNP_MAC_REG_WR(hw, nr_lane, RNP_MAC_PKT_FLT_CTRL, mac_filter_ctrl);
+
+ return 0;
+}
+
const struct rnp_mac_ops rnp_mac_ops_pf = {
.get_macaddr = rnp_mbx_fw_get_macaddr,
+ .update_mpfm = rnp_update_mpfm_pf,
+};
+
+const struct rnp_mac_ops rnp_mac_ops_indep = {
+ .get_macaddr = rnp_mbx_fw_get_macaddr,
+ .update_mpfm = rnp_update_mpfm_indep,
};
int rnp_get_mac_addr(struct rnp_eth_port *port, u8 *mac)
@@ -20,9 +120,21 @@ int rnp_get_mac_addr(struct rnp_eth_port *port, u8 *mac)
return rnp_call_hwif_impl(port, mac_ops->get_macaddr, mac);
}
+int rnp_update_mpfm(struct rnp_eth_port *port,
+ u32 mode, bool en)
+{
+ const struct rnp_mac_ops *mac_ops =
+ RNP_DEV_PP_TO_MAC_OPS(port->eth_dev);
+
+ return rnp_call_hwif_impl(port, mac_ops->update_mpfm, mode, en);
+}
+
void rnp_mac_ops_init(struct rnp_hw *hw)
{
struct rnp_proc_priv *proc_priv = RNP_DEV_TO_PROC_PRIV(hw->back->eth_dev);
- proc_priv->mac_ops = &rnp_mac_ops_pf;
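+ /* multi-port PFs use the per-lane (independent) mac filter ops,
+ * single-port PFs use the common eth-level filter ops
+ */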
+ if (rnp_pf_is_multiple_ports(hw->device_id))
+ proc_priv->mac_ops = &rnp_mac_ops_indep;
+ else
+ proc_priv->mac_ops = &rnp_mac_ops_pf;
}
diff --git a/drivers/net/rnp/base/rnp_mac.h b/drivers/net/rnp/base/rnp_mac.h
index 8a12aa4..57cbd9e 100644
--- a/drivers/net/rnp/base/rnp_mac.h
+++ b/drivers/net/rnp/base/rnp_mac.h
@@ -10,5 +10,7 @@
void rnp_mac_ops_init(struct rnp_hw *hw);
int rnp_get_mac_addr(struct rnp_eth_port *port, u8 *mac);
+int rnp_update_mpfm(struct rnp_eth_port *port,
+ u32 mode, bool en);
#endif /* _RNP_MAC_H_ */
diff --git a/drivers/net/rnp/base/rnp_mac_regs.h b/drivers/net/rnp/base/rnp_mac_regs.h
new file mode 100644
index 0000000..1dc0668
--- /dev/null
+++ b/drivers/net/rnp/base/rnp_mac_regs.h
@@ -0,0 +1,39 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Mucse IC Design Ltd.
+ */
+
+#ifndef _RNP_MAC_REGS_H_
+#define _RNP_MAC_REGS_H_
+
+#define RNP_MAC_BASE_OFFSET(n) (_MAC_(0) + ((0x10000) * (n)))
+
+#define RNP_MAC_PKT_FLT_CTRL (0x8)
+/* Receive All */
+#define RNP_MAC_RA RTE_BIT32(31)
+/* Pass Control Packets */
+#define RNP_MAC_PCF RTE_GENMASK32(7, 6)
+#define RNP_MAC_PCF_S (6)
+/* Mac Filter ALL Ctrl Frame */
+#define RNP_MAC_PCF_FAC (0)
+/* Mac Forward ALL Ctrl Frame Except Pause */
+#define RNP_MAC_PCF_NO_PAUSE (1)
+/* Mac Forward All Ctrl Pkt */
+#define RNP_MAC_PCF_PA (2)
+/* Mac Forward Ctrl Frame Match Unicast */
+#define RNP_MAC_PCF_PUN (3)
+/* Promiscuous Mode */
+#define RNP_MAC_PROMISC_EN RTE_BIT32(0)
+/* Hash Unicast */
+#define RNP_MAC_HUC RTE_BIT32(1)
+/* Hash Multicast */
+#define RNP_MAC_HMC RTE_BIT32(2)
+/* Pass All Multicast */
+#define RNP_MAC_PM RTE_BIT32(4)
+/* Disable Broadcast Packets */
+#define RNP_MAC_DBF RTE_BIT32(5)
+/* Hash or Perfect Filter */
+#define RNP_MAC_HPF RTE_BIT32(10)
+#define RNP_MAC_VTFE RTE_BIT32(16)
+
+#endif /* _RNP_MAC_REGS_H_ */
diff --git a/drivers/net/rnp/base/rnp_osdep.h b/drivers/net/rnp/base/rnp_osdep.h
index 3f31f9b..03f6c51 100644
--- a/drivers/net/rnp/base/rnp_osdep.h
+++ b/drivers/net/rnp/base/rnp_osdep.h
@@ -44,6 +44,7 @@
#define _ETH_(off) ((off) + (0x10000))
#define _NIC_(off) ((off) + (0x30000))
+#define _MAC_(off) ((off) + (0x60000))
#define _MSI_(off) ((off) + (0xA0000))
#ifndef _PACKED_ALIGN4
@@ -139,5 +140,9 @@ struct rnp_dma_mem {
#define RNP_REG_WR(base, offset, val) rnp_reg_write32(base, offset, val)
#define RNP_E_REG_WR(hw, off, value) rnp_reg_write32((hw)->e_ctrl, (off), (value))
#define RNP_E_REG_RD(hw, off) rnp_reg_read32((hw)->e_ctrl, (off))
+#define RNP_MAC_REG_WR(hw, lane, off, value) \
+ rnp_reg_write32((hw)->mac_base[lane], (off), (value))
+#define RNP_MAC_REG_RD(hw, lane, off) \
+ rnp_reg_read32((hw)->mac_base[lane], (off))
#endif /* _RNP_OSDEP_H_ */
diff --git a/drivers/net/rnp/rnp_ethdev.c b/drivers/net/rnp/rnp_ethdev.c
index a7404ee..13d949a 100644
--- a/drivers/net/rnp/rnp_ethdev.c
+++ b/drivers/net/rnp/rnp_ethdev.c
@@ -189,11 +189,54 @@ static int rnp_dev_infos_get(struct rte_eth_dev *eth_dev,
return 0;
}
+static int rnp_promiscuous_enable(struct rte_eth_dev *eth_dev)
+{
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(eth_dev);
+
+ PMD_INIT_FUNC_TRACE();
+
+ return rnp_update_mpfm(port, RNP_MPF_MODE_PROMISC, 1);
+}
+
+static int rnp_promiscuous_disable(struct rte_eth_dev *eth_dev)
+{
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(eth_dev);
+
+ PMD_INIT_FUNC_TRACE();
+
+ return rnp_update_mpfm(port, RNP_MPF_MODE_PROMISC, 0);
+}
+
+static int rnp_allmulticast_enable(struct rte_eth_dev *eth_dev)
+{
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(eth_dev);
+
+ PMD_INIT_FUNC_TRACE();
+
+ return rnp_update_mpfm(port, RNP_MPF_MODE_ALLMULTI, 1);
+}
+
+static int rnp_allmulticast_disable(struct rte_eth_dev *eth_dev)
+{
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(eth_dev);
+
+ PMD_INIT_FUNC_TRACE();
+ if (eth_dev->data->promiscuous == 1)
+ return 0;
+ return rnp_update_mpfm(port, RNP_MPF_MODE_ALLMULTI, 0);
+}
+
/* Features supported by this driver */
static const struct eth_dev_ops rnp_eth_dev_ops = {
.dev_close = rnp_dev_close,
.dev_stop = rnp_dev_stop,
.dev_infos_get = rnp_dev_infos_get,
+
+ /* PROMISC */
+ .promiscuous_enable = rnp_promiscuous_enable,
+ .promiscuous_disable = rnp_promiscuous_disable,
+ .allmulticast_enable = rnp_allmulticast_enable,
+ .allmulticast_disable = rnp_allmulticast_disable,
};
static void
--
1.8.3.1
^ permalink raw reply [flat|nested] 29+ messages in thread
* [PATCH v7 08/28] net/rnp: add queue setup and release operations
2025-02-08 2:43 [PATCH v7 00/28] [v6]drivers/net Add Support mucse N10 Pmd Driver Wenbo Cao
` (6 preceding siblings ...)
2025-02-08 2:43 ` [PATCH v7 07/28] net/rnp: add support mac promisc mode Wenbo Cao
@ 2025-02-08 2:43 ` Wenbo Cao
2025-02-08 2:43 ` [PATCH v7 09/28] net/rnp: add queue stop and start operations Wenbo Cao
` (19 subsequent siblings)
27 siblings, 0 replies; 29+ messages in thread
From: Wenbo Cao @ 2025-02-08 2:43 UTC (permalink / raw)
To: thomas, Wenbo Cao; +Cc: stephen, dev, ferruh.yigit, andrew.rybchenko, yaojun
Support Tx/Rx queue setup and release, and add hardware BD queue
reset and software queue reset.
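For context, the new setup paths are exercised through the usual
ethdev calls; a minimal sketch (hypothetical snippet, assuming a
configured port_id and an existing mbuf pool mp; descriptor counts
must stay within RNP_MIN_BD_COUNT/RNP_MAX_BD_COUNT and honor
RNP_BD_ALIGN):

    int ret;

    ret = rte_eth_rx_queue_setup(port_id, 0, 1024, rte_socket_id(),
                                 NULL, mp);
    if (ret == 0)
        ret = rte_eth_tx_queue_setup(port_id, 0, 1024,
                                     rte_socket_id(), NULL);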
Signed-off-by: Wenbo Cao <caowenbo@mucse.com>
---
doc/guides/nics/rnp.rst | 1 +
drivers/net/rnp/base/meson.build | 1 +
drivers/net/rnp/base/rnp_bdq_if.c | 397 ++++++++++++++++++++++++++++++
drivers/net/rnp/base/rnp_bdq_if.h | 149 +++++++++++
drivers/net/rnp/base/rnp_common.h | 4 +
drivers/net/rnp/base/rnp_dma_regs.h | 45 ++++
drivers/net/rnp/base/rnp_eth_regs.h | 4 +
drivers/net/rnp/base/rnp_hw.h | 3 +
drivers/net/rnp/base/rnp_osdep.h | 13 +
drivers/net/rnp/meson.build | 1 +
drivers/net/rnp/rnp.h | 2 +
drivers/net/rnp/rnp_ethdev.c | 29 +++
drivers/net/rnp/rnp_rxtx.c | 476 ++++++++++++++++++++++++++++++++++++
drivers/net/rnp/rnp_rxtx.h | 123 ++++++++++
14 files changed, 1248 insertions(+)
create mode 100644 drivers/net/rnp/base/rnp_bdq_if.c
create mode 100644 drivers/net/rnp/base/rnp_bdq_if.h
create mode 100644 drivers/net/rnp/rnp_rxtx.c
create mode 100644 drivers/net/rnp/rnp_rxtx.h
diff --git a/doc/guides/nics/rnp.rst b/doc/guides/nics/rnp.rst
index 62585ac..5417593 100644
--- a/doc/guides/nics/rnp.rst
+++ b/doc/guides/nics/rnp.rst
@@ -10,6 +10,7 @@ driver support for the inbuilt network device found in the **Mucse RNP**
Features
--------
+- Multiple queues for TX and RX
- Promiscuous mode
Prerequisites
diff --git a/drivers/net/rnp/base/meson.build b/drivers/net/rnp/base/meson.build
index b9db033..c2ef0d0 100644
--- a/drivers/net/rnp/base/meson.build
+++ b/drivers/net/rnp/base/meson.build
@@ -7,6 +7,7 @@ sources = [
'rnp_mbx_fw.c',
'rnp_common.c',
'rnp_mac.c',
+ 'rnp_bdq_if.c',
]
error_cflags = ['-Wno-unused-value',
diff --git a/drivers/net/rnp/base/rnp_bdq_if.c b/drivers/net/rnp/base/rnp_bdq_if.c
new file mode 100644
index 0000000..cc3fe51
--- /dev/null
+++ b/drivers/net/rnp/base/rnp_bdq_if.c
@@ -0,0 +1,397 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Mucse IC Design Ltd.
+ */
+
+#include "rnp_osdep.h"
+
+#include "../rnp.h"
+#include "rnp_dma_regs.h"
+#include "rnp_eth_regs.h"
+#include "rnp_bdq_if.h"
+#include "rnp_common.h"
+#include "../rnp_rxtx.h"
+
+static void
+rnp_read_mac_veb(struct rnp_hw *hw,
+ u16 nr_lane,
+ u16 vf_id,
+ struct rnp_veb_cfg *cfg)
+{
+ cfg->mac_lo = RNP_E_REG_RD(hw, RNP_VEB_MAC_LO(nr_lane, vf_id));
+ cfg->mac_hi = RNP_E_REG_RD(hw, RNP_VEB_MAC_HI(nr_lane, vf_id));
+ cfg->ring = RNP_E_REG_RD(hw, RNP_VEB_VF_RING(nr_lane, vf_id));
+}
+
+static void
+rnp_update_mac_veb(struct rnp_hw *hw,
+ u16 nr_lane,
+ u16 vf_id,
+ struct rnp_veb_cfg *cfg)
+{
+ u32 reg = cfg->ring;
+ u16 idx = 0;
+
+ idx = nr_lane;
+ wmb();
+ RNP_E_REG_WR(hw, RNP_VEB_MAC_LO(idx, vf_id), cfg->mac_lo);
+ RNP_E_REG_WR(hw, RNP_VEB_MAC_HI(idx, vf_id), cfg->mac_hi);
+ reg |= ((RNP_VEB_SWITCH_VF_EN | vf_id) << 8);
+ RNP_E_REG_WR(hw, RNP_VEB_VF_RING(idx, vf_id), reg);
+}
+
+void
+rnp_rxq_flow_disable(struct rnp_hw *hw,
+ u16 hw_idx)
+{
+ u32 fc_ctrl;
+
+ spin_lock(&hw->rxq_reset_lock);
+ fc_ctrl = RNP_E_REG_RD(hw, RNP_RING_FC_EN(hw_idx));
+ wmb();
+ RNP_E_REG_WR(hw, RNP_RING_FC_THRESH(hw_idx), 0);
+ fc_ctrl |= 1 << (hw_idx % 32);
+ wmb();
+ RNP_E_REG_WR(hw, RNP_RING_FC_EN(hw_idx), fc_ctrl);
+}
+
+void
+rnp_rxq_flow_enable(struct rnp_hw *hw,
+ u16 hw_idx)
+{
+ u32 fc_ctrl;
+
+ fc_ctrl = RNP_E_REG_RD(hw, RNP_RING_FC_EN(hw_idx));
+ fc_ctrl &= ~(1 << (hw_idx % 32));
+ wmb();
+ RNP_E_REG_WR(hw, RNP_RING_FC_EN(hw_idx), fc_ctrl);
+
+ spin_unlock(&hw->rxq_reset_lock);
+}
+
+#define RNP_RXQ_RESET_PKT_LEN (64)
+
+static void
+rnp_reset_xmit(struct rnp_tx_queue *txq, u64 pkt_addr)
+{
+ volatile struct rnp_tx_desc *txbd;
+ struct rnp_txsw_entry *tx_entry;
+ u16 timeout = 0;
+ u16 tx_id;
+
+ tx_id = txq->tx_tail;
+ txbd = &txq->tx_bdr[tx_id];
+ tx_entry = &txq->sw_ring[tx_id];
+ memset(tx_entry, 0, sizeof(*tx_entry));
+
+ txbd->d.addr = pkt_addr;
+ txbd->d.blen = RNP_RXQ_RESET_PKT_LEN;
+ wmb();
+ txbd->d.cmd = cpu_to_le16(RNP_CMD_EOP | RNP_CMD_RS);
+ tx_id = (tx_id + 1) & txq->attr.nb_desc_mask;
+ wmb();
+ RNP_REG_WR(txq->tx_tailreg, 0, tx_id);
+ do {
+ if (txbd->d.cmd & RNP_CMD_DD)
+ break;
+ if (timeout == 1000)
+ RNP_PMD_ERR("rx queue %u reset send pkt is hang\n",
+ txq->attr.index);
+ timeout++;
+ udelay(10);
+ } while (1);
+}
+
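+/* reset one rx ring: steer a crafted 64B packet to it through a
+ * temporary veb rule, wait for the hw head to return to zero, then
+ * restore the saved veb entries
+ */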
+void
+rnp_reset_hw_rxq_op(struct rnp_hw *hw,
+ struct rnp_rx_queue *rxq,
+ struct rnp_tx_queue *txq,
+ struct rnp_rxq_reset_res *res)
+{
+ u8 reset_pcap[RNP_RXQ_RESET_PKT_LEN] = {
+ 0x01, 0x02, 0x27, 0xe2, 0x9f, 0xa6, 0x08, 0x00,
+ 0x27, 0xfc, 0x6a, 0xc9, 0x08, 0x00, 0x45, 0x00,
+ 0x01, 0xc4, 0xb5, 0xd0, 0x00, 0x7a, 0x40, 0x01,
+ 0xbc, 0xea, 0x02, 0x01, 0x01, 0x02, 0x02, 0x01,
+ 0x01, 0x01, 0xc8, 0xc9, 0xca, 0xcb, 0xcc, 0xcd,
+ 0xce, 0xcf, 0xd0, 0xd1, 0xd2, 0xd3, 0xd4, 0xd5,
+ 0xd6, 0xd7, 0xd8, 0xd9, 0xda, 0xdb, 0xdc, 0xdd,
+ 0xde, 0xdf, 0xe0, 0xe1, 0xe2, 0xe3, 0xe4, 0xe5};
+ struct rnp_veb_cfg veb_bak_cfg[RNP_MAX_PORT_OF_PF];
+ struct rnp_veb_cfg reset_cfg = {0};
+ volatile struct rnp_rx_desc *rxbd;
+ u16 index = rxq->attr.index;
+ u16 vf_num = hw->mbx.vf_num;
+ u8 *macaddr = res->eth_hdr;
+ u16 timeout = 0;
+ u16 vf_id = 0;
+ u16 head = 0;
+ u16 idx = 0;
+
+ memcpy(macaddr, reset_pcap, RNP_RXQ_RESET_PKT_LEN);
+ macaddr[5] = index;
+ reset_cfg.mac_hi = RNP_GET_MAC_HI(macaddr);
+ reset_cfg.mac_lo = RNP_GET_MAC_LO(macaddr);
+ reset_cfg.ring = index;
+ vf_id = (vf_num != UINT16_MAX) ? vf_num : index / 2;
+ if (hw->mbx.vf_num == UINT16_MAX) {
+ for (idx = 0; idx < RNP_MAX_PORT_OF_PF; idx++) {
+ rnp_read_mac_veb(hw, idx, vf_id, &veb_bak_cfg[idx]);
+ rnp_update_mac_veb(hw, idx, vf_id, &reset_cfg);
+ }
+ } else {
+ idx = rxq->attr.lane_id;
+ rnp_read_mac_veb(hw, idx, vf_id, &veb_bak_cfg[idx]);
+ rnp_update_mac_veb(hw, idx, vf_id, &reset_cfg);
+ }
+ wmb();
+ timeout = 0;
+ do {
+ if (!RNP_E_REG_RD(hw, RNP_RXQ_READY(index)))
+ break;
+ udelay(5);
+ timeout++;
+ } while (timeout < 100);
+ timeout = 0;
+ do {
+ if (RNP_E_REG_RD(hw, RNP_TXQ_READY(index)))
+ break;
+ udelay(10);
+ timeout++;
+ } while (timeout < 100);
+ rxq->rx_tail = RNP_E_REG_RD(hw, RNP_RXQ_HEAD(index));
+ rxbd = &rxq->rx_bdr[rxq->rx_tail];
+ rxbd->d.pkt_addr = res->rx_pkt_addr;
+ if (rxq->rx_tail != rxq->attr.nb_desc_mask)
+ RNP_E_REG_WR(hw, RNP_RXQ_LEN(index), rxq->rx_tail + 1);
+ wmb();
+ RNP_REG_WR(rxq->rx_tailreg, 0, 0);
+ RNP_E_REG_WR(hw, RNP_RXQ_START(index), TRUE);
+ rnp_reset_xmit(txq, res->tx_pkt_addr);
+ timeout = 0;
+ do {
+ if (rxbd->wb.qword1.cmd & cpu_to_le32(RNP_CMD_DD))
+ break;
+ if (timeout == 1000)
+ RNP_PMD_LOG(ERR, "rx_queue[%d] reset queue hang\n",
+ index);
+ udelay(10);
+ timeout++;
+ } while (1);
+ timeout = 0;
+ do {
+ head = RNP_E_REG_RD(hw, RNP_RXQ_HEAD(index));
+ if (head == 0)
+ break;
+ timeout++;
+ if (timeout == 1000)
+ RNP_PMD_LOG(ERR, "rx_queue[%d] reset head to 0 failed",
+ index);
+ udelay(10);
+ } while (1);
+ RNP_E_REG_WR(hw, RNP_RXQ_START(index), FALSE);
+ rxbd->d.pkt_addr = 0;
+ rxbd->d.cmd = 0;
+ if (hw->mbx.vf_num == UINT16_MAX) {
+ for (idx = 0; idx < 4; idx++)
+ rnp_update_mac_veb(hw, idx, vf_id, &veb_bak_cfg[idx]);
+ } else {
+ idx = rxq->attr.lane_id;
+ rnp_update_mac_veb(hw, idx, vf_id, &veb_bak_cfg[idx]);
+ }
+ rxq->rx_tail = head;
+}
+
+void rnp_setup_rxbdr(struct rnp_hw *hw,
+ struct rnp_rx_queue *rxq)
+{
+ u16 max_desc = rxq->attr.nb_desc;
+ u16 idx = rxq->attr.index;
+ phys_addr_t bd_address;
+ u32 dmah, dmal;
+ u32 desc_ctrl;
+
+ RNP_E_REG_WR(hw, RNP_RXQ_START(idx), FALSE);
+ bd_address = (phys_addr_t)rxq->ring_phys_addr;
+ dmah = upper_32_bits((uint64_t)bd_address);
+ dmal = lower_32_bits((uint64_t)bd_address);
+ desc_ctrl = rxq->pburst << RNQ_DESC_FETCH_BURST_S | rxq->pthresh;
+ if (hw->mbx.sriov_st)
+ dmah |= (hw->mbx.sriov_st << 24);
+ /* the sriov state must be packed into the high 8 bits of the dma address for vf isolation
+ * |---8bit-----|----------24bit--------|
+ * |sriov_state-|-------high dma address|
+ * |---------------8bit-----------------|
+ * |7bit | 6bit |5-0bit-----------------|
+ * |vf_en|pf_num|-------vf_num----------|
+ */
+ RNP_E_REG_WR(hw, RNP_RXQ_BASE_ADDR_LO(idx), dmal);
+ RNP_E_REG_WR(hw, RNP_RXQ_BASE_ADDR_HI(idx), dmah);
+ RNP_E_REG_WR(hw, RNP_RXQ_LEN(idx), max_desc);
+ rxq->rx_tailreg = (u32 *)((u8 *)hw->e_ctrl + RNP_RXQ_TAIL(idx));
+ rxq->rx_headreg = (u32 *)((u8 *)hw->e_ctrl + RNP_RXQ_HEAD(idx));
+ rxq->rx_tail = RNP_E_REG_RD(hw, RNP_RXQ_HEAD(idx));
+ RNP_E_REG_WR(hw, RNP_RXQ_DESC_FETCH_CTRL(idx), desc_ctrl);
+ RNP_E_REG_WR(hw, RNP_RXQ_DROP_TIMEOUT_TH(idx),
+ rxq->nodesc_tm_thresh);
+}
+
+int rnp_get_dma_ring_index(struct rnp_eth_port *port, u16 queue_idx)
+{
+ struct rnp_hw *hw = port->hw;
+ u16 lane = port->attr.nr_lane;
+ u16 hwrid = 0;
+
+ switch (hw->nic_mode) {
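+ /* map the port-relative queue index to its global dma ring, e.g.
+ * in RNP_DUAL_10G mode lane 1 owns rings 2,3,6,7 for queues 0-3
+ */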
+ case RNP_DUAL_10G:
+ hwrid = 2 * (queue_idx + lane) - queue_idx % 2;
+ break;
+ case RNP_QUAD_10G:
+ hwrid = 4 * (queue_idx) + lane;
+ break;
+ default:
+ hwrid = queue_idx;
+ }
+
+ return hwrid;
+}
+
+void rnp_setup_txbdr(struct rnp_hw *hw, struct rnp_tx_queue *txq)
+{
+ u16 max_desc = txq->attr.nb_desc;
+ u16 idx = txq->attr.index;
+ phys_addr_t bd_address;
+ u32 desc_ctrl = 0;
+ u32 dmah, dmal;
+
+ bd_address = (phys_addr_t)txq->ring_phys_addr;
+ desc_ctrl = txq->pburst << RNQ_DESC_FETCH_BURST_S | txq->pthresh;
+ dmah = upper_32_bits((u64)bd_address);
+ dmal = lower_32_bits((u64)bd_address);
+ if (hw->mbx.sriov_st)
+ dmah |= (hw->mbx.sriov_st << 24);
+ /* the sriov state must be packed into the high 8 bits of the dma address for vf isolation
+ * |---8bit-----|----------24bit--------|
+ * |sriov_state-|-------high dma address|
+ * |---------------8bit-----------------|
+ * |7bit | 6bit |5-0bit-----------------|
+ * |vf_en|pf_num|-------vf_num----------|
+ */
+ RNP_E_REG_WR(hw, RNP_TXQ_BASE_ADDR_LO(idx), dmal);
+ RNP_E_REG_WR(hw, RNP_TXQ_BASE_ADDR_HI(idx), dmah);
+ RNP_E_REG_WR(hw, RNP_TXQ_LEN(idx), max_desc);
+ RNP_E_REG_WR(hw, RNP_TXQ_DESC_FETCH_CTRL(idx), desc_ctrl);
+ RNP_E_REG_WR(hw, RNP_RXTX_IRQ_MASK(idx), RNP_RXTX_IRQ_MASK_ALL);
+ txq->tx_headreg = (void *)((u8 *)hw->e_ctrl + RNP_TXQ_HEAD(idx));
+ txq->tx_tailreg = (void *)((u8 *)hw->e_ctrl + RNP_TXQ_TAIL(idx));
+
+ txq->tx_tail = RNP_E_REG_RD(hw, RNP_TXQ_HEAD(idx));
+ RNP_E_REG_WR(hw, RNP_TXQ_TAIL(idx), 0);
+}
+
+static void
+rnp_txq_reset_pre(struct rnp_hw *hw)
+{
+ u16 i = 0;
+
+ spin_lock(&hw->txq_reset_lock);
+ for (i = 0; i < RNP_MAX_RX_QUEUE_NUM; i++) {
+ wmb();
+ RNP_E_REG_WR(hw, RNP_RXQ_START(i), 0);
+ }
+}
+
+static void
+rnp_txq_reset_fin(struct rnp_hw *hw)
+{
+ u16 i = 0;
+
+ for (i = 0; i < RNP_MAX_RX_QUEUE_NUM; i++) {
+ wmb();
+ RNP_E_REG_WR(hw, RNP_RXQ_START(i), 1);
+ }
+ spin_unlock(&hw->txq_reset_lock);
+}
+
+static void
+rnp_xmit_nop_frame_ring(struct rnp_hw *hw,
+ struct rnp_tx_queue *txq,
+ u16 head)
+{
+ volatile struct rnp_tx_desc *tx_desc;
+ u16 check_head = 0;
+ u16 timeout = 0;
+ u16 index = 0;
+ u16 tx_id;
+
+ tx_id = head;
+ index = txq->attr.index;
+ tx_desc = &txq->tx_bdr[tx_id];
+
+ /* set length to 0 */
+ tx_desc->d.blen = 0;
+ tx_desc->d.addr = 0;
+ wmb();
+ tx_desc->d.cmd = cpu_to_le16(RNP_CMD_EOP);
+ wmb();
+ /* update tail */
+ RNP_REG_WR(txq->tx_tailreg, 0, 0);
+ do {
+ check_head = RNP_E_REG_RD(hw, RNP_TXQ_HEAD(index));
+ if (check_head == 0)
+ break;
+ if (timeout == 1000)
+ RNP_PMD_ERR("tx_queue[%d] reset may be hang "
+ "check_head %d base head %d\n",
+ index, check_head, head);
+ timeout++;
+ udelay(10);
+ } while (1);
+ /* restore the original state */
+ wmb();
+ RNP_E_REG_WR(hw, RNP_TXQ_LEN(index), txq->attr.nb_desc);
+}
+
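+/* reset one tx ring: wait for the hw head to catch up with the
+ * tail, then post a zero-length descriptor so the head wraps back
+ * to zero before the ring length is restored
+ */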
+void rnp_reset_hw_txq_op(struct rnp_hw *hw,
+ struct rnp_tx_queue *txq)
+{
+ u16 timeout = 0;
+ u16 index = 0;
+ u16 head;
+ u16 tail;
+
+ timeout = 0;
+ /* Disable Tx Queue */
+ index = txq->attr.index;
+ rnp_txq_reset_pre(hw);
+ rmb();
+ tail = RNP_E_REG_RD(hw, RNP_TXQ_TAIL(index));
+ txq->tx_tail = tail;
+ do {
+ /* wait for hw head is stopped */
+ head = RNP_E_REG_RD(hw, RNP_TXQ_HEAD(index));
+ if (head == txq->tx_tail)
+ break;
+ if (timeout > 1000) {
+ RNP_PMD_ERR("txq[%u] 1000*10us can't "
+ "wait for hw head == tail\n", index);
+ break;
+ }
+ udelay(10);
+ } while (1);
+ rmb();
+ head = RNP_E_REG_RD(hw, RNP_TXQ_HEAD(index));
+ /* head is zero, no need to reset */
+ if (head == 0)
+ goto tx_reset_fin;
+ wmb();
+ if (head != txq->attr.nb_desc_mask)
+ RNP_E_REG_WR(hw, RNP_TXQ_LEN(index), head + 1);
+ wmb();
+ /* reset hw head */
+ rnp_xmit_nop_frame_ring(hw, txq, head);
+ rmb();
+ txq->tx_tail = RNP_E_REG_RD(hw, RNP_TXQ_HEAD(index));
+tx_reset_fin:
+ rnp_txq_reset_fin(hw);
+}
diff --git a/drivers/net/rnp/base/rnp_bdq_if.h b/drivers/net/rnp/base/rnp_bdq_if.h
new file mode 100644
index 0000000..61a3832
--- /dev/null
+++ b/drivers/net/rnp/base/rnp_bdq_if.h
@@ -0,0 +1,149 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Mucse IC Design Ltd.
+ */
+
+#ifndef _RNP_BDQ_IF_H_
+#define _RNP_BDQ_IF_H_
+
+#include "rnp_hw.h"
+
+struct rnp_rx_queue;
+struct rnp_tx_queue;
+#pragma pack(push)
+#pragma pack(1)
+/* receive descriptor */
+struct rnp_rx_desc {
+ /* rx buffer descriptor */
+ union {
+ struct {
+ u64 pkt_addr;
+ u16 rsvd[3];
+ u16 cmd;
+ } d;
+ struct {
+ struct {
+ u32 rss_hash;
+ u32 mark_data;
+ } qword0;
+ struct {
+ u32 lens;
+ u16 vlan_tci;
+ u16 cmd;
+ } qword1;
+ } wb;
+ };
+};
+/* tx buffer descriptors (BD) */
+struct rnp_tx_desc {
+ union {
+ struct {
+ u64 addr; /* pkt dma address */
+ u16 blen; /* pkt data Len */
+ u16 mac_ip_len; /* mac ip header len */
+ u16 vlan_tci; /* vlan_tci */
+ u16 cmd; /* ctrl command */
+ } d;
+ struct {
+ struct {
+ u16 mss; /* tso sz */
+ u8 vf_num; /* vf num */
+ u8 l4_len; /* l4 header size */
+ u8 tunnel_len; /* tunnel header size */
+ u16 vlan_tci; /* vlan_tci */
+			u8 veb_tran; /* mark pkt as transmitted by veb */
+ } qword0;
+ struct {
+ u16 rsvd[3];
+ u16 cmd; /* ctrl command */
+ } qword1;
+ } c;
+ };
+};
+#pragma pack(pop)
+/* common command */
+#define RNP_CMD_EOP RTE_BIT32(0) /* End Of Packet */
+#define RNP_CMD_DD RTE_BIT32(1)
+#define RNP_CMD_RS RTE_BIT32(2)
+#define RNP_DESC_TYPE_S (3)
+#define RNP_DATA_DESC (0x00UL << RNP_DESC_TYPE_S)
+#define RNP_CTRL_DESC (0x01UL << RNP_DESC_TYPE_S)
+/* rx data cmd */
+#define RNP_RX_PTYPE_PTP RTE_BIT32(4)
+#define RNP_RX_L3TYPE_S (5)
+#define RNP_RX_L3TYPE_IPV4 (0x00UL << RNP_RX_L3TYPE_S)
+#define RNP_RX_L3TYPE_IPV6 (0x01UL << RNP_RX_L3TYPE_S)
+#define RNP_RX_L4TYPE_S (6)
+#define RNP_RX_L4TYPE_TCP (0x01UL << RNP_RX_L4TYPE_S)
+#define RNP_RX_L4TYPE_SCTP (0x02UL << RNP_RX_L4TYPE_S)
+#define RNP_RX_L4TYPE_UDP (0x03UL << RNP_RX_L4TYPE_S)
+#define RNP_RX_ERR_MASK RTE_GENMASK32(12, 8)
+#define RNP_RX_L3_ERR RTE_BIT32(8)
+#define RNP_RX_L4_ERR RTE_BIT32(9)
+#define RNP_RX_SCTP_ERR RTE_BIT32(10)
+#define RNP_RX_IN_L3_ERR RTE_BIT32(11)
+#define RNP_RX_IN_L4_ERR RTE_BIT32(12)
+#define RNP_RX_TUNNEL_TYPE_S (13)
+#define RNP_RX_PTYPE_VXLAN (0x01UL << RNP_RX_TUNNEL_TYPE_S)
+#define RNP_RX_PTYPE_NVGRE (0x02UL << RNP_RX_TUNNEL_TYPE_S)
+#define RNP_RX_PTYPE_VLAN RTE_BIT32(15)
+/* tx data cmd */
+#define RNP_TX_TSO_EN RTE_BIT32(4)
+#define RNP_TX_L3TYPE_S (5)
+#define RNP_TX_L3TYPE_IPV6 (0x01UL << RNP_TX_L3TYPE_S)
+#define RNP_TX_L3TYPE_IPV4 (0x00UL << RNP_TX_L3TYPE_S)
+#define RNP_TX_L4TYPE_S (6)
+#define RNP_TX_L4TYPE_TCP (0x01UL << RNP_TX_L4TYPE_S)
+#define RNP_TX_L4TYPE_SCTP (0x02UL << RNP_TX_L4TYPE_S)
+#define RNP_TX_L4TYPE_UDP (0x03UL << RNP_TX_L4TYPE_S)
+#define RNP_TX_TUNNEL_TYPE_S (8)
+#define RNP_TX_VXLAN_TUNNEL (0x01UL << RNP_TX_TUNNEL_TYPE_S)
+#define RNP_TX_NVGRE_TUNNEL (0x02UL << RNP_TX_TUNNEL_TYPE_S)
+#define RNP_TX_PTP_EN RTE_BIT32(10)
+#define RNP_TX_IP_CKSUM_EN RTE_BIT32(11)
+#define RNP_TX_L4CKSUM_EN RTE_BIT32(12)
+#define RNP_TX_VLAN_CTRL_S (13)
+#define RNP_TX_VLAN_STRIP (0x01UL << RNP_TX_VLAN_CTRL_S)
+#define RNP_TX_VLAN_INSERT (0x02UL << RNP_TX_VLAN_CTRL_S)
+#define RNP_TX_VLAN_VALID RTE_BIT32(15)
+/* tx data mac_ip len */
+#define RNP_TX_MAC_LEN_S (9)
+/* tx ctrl cmd */
+#define RNP_TX_LEN_PAD_S (8)
+#define RNP_TX_OFF_MAC_PAD (0x01UL << RNP_TX_LEN_PAD_S)
+#define RNP_TX_QINQ_CTRL_S (10)
+#define RNP_TX_QINQ_INSERT (0x02UL << RNP_TX_QINQ_CTRL_S)
+#define RNP_TX_QINQ_STRIP (0x01UL << RNP_TX_QINQ_CTRL_S)
+#define RNP_TX_TO_NPU_EN RTE_BIT32(15)
+/* descriptor op end */
+struct rnp_rxq_reset_res {
+ u64 rx_pkt_addr;
+ u64 tx_pkt_addr;
+ u8 *eth_hdr;
+};
+struct rnp_veb_cfg {
+ uint32_t mac_hi;
+ uint32_t mac_lo;
+ uint32_t vid;
+ uint16_t vf_id;
+ uint16_t ring;
+};
+void
+rnp_rxq_flow_enable(struct rnp_hw *hw,
+ u16 hw_idx);
+void
+rnp_rxq_flow_disable(struct rnp_hw *hw,
+ u16 hw_idx);
+void
+rnp_reset_hw_rxq_op(struct rnp_hw *hw,
+ struct rnp_rx_queue *rxq,
+ struct rnp_tx_queue *txq,
+ struct rnp_rxq_reset_res *res);
+void rnp_reset_hw_txq_op(struct rnp_hw *hw,
+ struct rnp_tx_queue *txq);
+void rnp_setup_rxbdr(struct rnp_hw *hw,
+ struct rnp_rx_queue *rxq);
+void rnp_setup_txbdr(struct rnp_hw *hw,
+ struct rnp_tx_queue *txq);
+int rnp_get_dma_ring_index(struct rnp_eth_port *port, u16 queue_idx);
+
+#endif /* _RNP_BDQ_IF_H_ */
diff --git a/drivers/net/rnp/base/rnp_common.h b/drivers/net/rnp/base/rnp_common.h
index aaf77a6..bd00708 100644
--- a/drivers/net/rnp/base/rnp_common.h
+++ b/drivers/net/rnp/base/rnp_common.h
@@ -6,6 +6,10 @@
#define _RNP_COMMON_H_
#define RNP_NIC_RESET _NIC_(0x0010)
+#define RNP_GET_MAC_HI(macaddr) (((macaddr[0]) << 8) | (macaddr[1]))
+#define RNP_GET_MAC_LO(macaddr) \
+	((macaddr[2] << 24) | (macaddr[3] << 16) | \
+	((macaddr[4] << 8)) | (macaddr[5]))
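+/* e.g. for mac 00:1b:21:aa:bb:cc (illustrative): RNP_GET_MAC_HI
+ * yields 0x001b and RNP_GET_MAC_LO yields 0x21aabbcc
+ */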
int rnp_init_hw(struct rnp_hw *hw);
int rnp_setup_common_ops(struct rnp_hw *hw);
diff --git a/drivers/net/rnp/base/rnp_dma_regs.h b/drivers/net/rnp/base/rnp_dma_regs.h
index 00f8aff..3664c0a 100644
--- a/drivers/net/rnp/base/rnp_dma_regs.h
+++ b/drivers/net/rnp/base/rnp_dma_regs.h
@@ -9,5 +9,50 @@
#define RNP_DMA_HW_EN (0x10)
#define RNP_DMA_EN_ALL (0b1111)
#define RNP_DMA_HW_STATE (0x14)
+/* --- queue register --- */
+/* queue enable */
+#define RNP_RXQ_START(qid) _RING_(0x0010 + 0x100 * (qid))
+#define RNP_RXQ_READY(qid) _RING_(0x0014 + 0x100 * (qid))
+#define RNP_TXQ_START(qid) _RING_(0x0018 + 0x100 * (qid))
+#define RNP_TXQ_READY(qid) _RING_(0x001c + 0x100 * (qid))
+/* queue irq generate ctrl */
+#define RNP_RXTX_IRQ_STAT(qid) _RING_(0x0020 + 0x100 * (qid))
+#define RNP_RXTX_IRQ_MASK(qid) _RING_(0x0024 + 0x100 * (qid))
+#define RNP_TX_IRQ_MASK RTE_BIT32(1)
+#define RNP_RX_IRQ_MASK RTE_BIT32(0)
+#define RNP_RXTX_IRQ_MASK_ALL (RNP_RX_IRQ_MASK | RNP_TX_IRQ_MASK)
+#define RNP_RXTX_IRQ_CLER(qid) _RING_(0x0028 + 0x100 * (qid))
+/* rx-queue setup */
+#define RNP_RXQ_BASE_ADDR_HI(qid) _RING_(0x0030 + 0x100 * (qid))
+#define RNP_RXQ_BASE_ADDR_LO(qid) _RING_(0x0034 + 0x100 * (qid))
+#define RNP_RXQ_LEN(qid) _RING_(0x0038 + 0x100 * (qid))
+#define RNP_RXQ_HEAD(qid) _RING_(0x003c + 0x100 * (qid))
+#define RNP_RXQ_TAIL(qid) _RING_(0x0040 + 0x100 * (qid))
+#define RNP_RXQ_DESC_FETCH_CTRL(qid) _RING_(0x0044 + 0x100 * (qid))
+/* rx queue interrupt generation params */
+#define RNP_RXQ_INT_DELAY_TIMER(qid)	_RING_(0x0048 + 0x100 * (qid))
+#define RNP_RXQ_INT_DELAY_PKTCNT(qid)	_RING_(0x004c + 0x100 * (qid))
+#define RNP_RXQ_RX_PRI_LVL(qid) _RING_(0x0050 + 0x100 * (qid))
+#define RNP_RXQ_DROP_TIMEOUT_TH(qid) _RING_(0x0054 + 0x100 * (qid))
+/* tx queue setup */
+#define RNP_TXQ_BASE_ADDR_HI(qid) _RING_(0x0060 + 0x100 * (qid))
+#define RNP_TXQ_BASE_ADDR_LO(qid) _RING_(0x0064 + 0x100 * (qid))
+#define RNP_TXQ_LEN(qid) _RING_(0x0068 + 0x100 * (qid))
+#define RNP_TXQ_HEAD(qid) _RING_(0x006c + 0x100 * (qid))
+#define RNP_TXQ_TAIL(qid) _RING_(0x0070 + 0x100 * (qid))
+#define RNP_TXQ_DESC_FETCH_CTRL(qid) _RING_(0x0074 + 0x100 * (qid))
+#define RNQ_DESC_FETCH_BURST_S (16)
+/* tx queue interrupt generation params */
+#define RNP_TXQ_INT_DELAY_TIMER(qid) _RING_(0x0078 + 0x100 * (qid))
+#define RNP_TXQ_INT_DELAY_PKTCNT(qid) _RING_(0x007c + 0x100 * (qid))
+/* veb ctrl register */
+#define RNP_VEB_MAC_LO(p, n) _RING_(0x00a0 + (4 * (p)) + (0x100 * (n)))
+#define RNP_VEB_MAC_HI(p, n) _RING_(0x00b0 + (4 * (p)) + (0x100 * (n)))
+#define RNP_VEB_VID_CFG(p, n) _RING_(0x00c0 + (4 * (p)) + (0x100 * (n)))
+#define RNP_VEB_VF_RING(p, n) _RING_(0x00d0 + (4 * (p)) + (0x100 * (n)))
+#define RNP_MAX_VEB_TB (64)
+#define RNP_VEB_RING_CFG_S (8)
+#define RNP_VEB_SWITCH_VF_EN RTE_BIT32(7)
+#define MAX_VEB_TABLES_NUM (4)
#endif /* _RNP_DMA_REGS_H_ */
diff --git a/drivers/net/rnp/base/rnp_eth_regs.h b/drivers/net/rnp/base/rnp_eth_regs.h
index c4519ba..10e3d95 100644
--- a/drivers/net/rnp/base/rnp_eth_regs.h
+++ b/drivers/net/rnp/base/rnp_eth_regs.h
@@ -10,6 +10,10 @@
#define RNP_E_FILTER_EN _ETH_(0x801c)
#define RNP_E_REDIR_EN _ETH_(0x8030)
+/* rx queue flow ctrl */
+#define RNP_RX_FC_ENABLE _ETH_(0x8520)
+#define RNP_RING_FC_EN(n) _ETH_(0x8524 + ((0x4) * ((n) / 32)))
+#define RNP_RING_FC_THRESH(n) _ETH_(0x8a00 + ((0x4) * (n)))
/* Mac Host Filter */
#define RNP_MAC_FCTRL _ETH_(0x9110)
#define RNP_MAC_FCTRL_MPE RTE_BIT32(8) /* Multicast Promiscuous En */
diff --git a/drivers/net/rnp/base/rnp_hw.h b/drivers/net/rnp/base/rnp_hw.h
index 1b31362..4f5a73e 100644
--- a/drivers/net/rnp/base/rnp_hw.h
+++ b/drivers/net/rnp/base/rnp_hw.h
@@ -120,6 +120,9 @@ struct rnp_hw {
bool lane_is_sgmii[RNP_MAX_PORT_OF_PF];
struct rnp_mbx_info mbx;
struct rnp_fw_info fw_info;
+
+ spinlock_t rxq_reset_lock;
+ spinlock_t txq_reset_lock;
};
#endif /* __RNP_H__*/
diff --git a/drivers/net/rnp/base/rnp_osdep.h b/drivers/net/rnp/base/rnp_osdep.h
index 03f6c51..137e0e8 100644
--- a/drivers/net/rnp/base/rnp_osdep.h
+++ b/drivers/net/rnp/base/rnp_osdep.h
@@ -14,6 +14,7 @@
#include <rte_bitops.h>
#include <rte_cycles.h>
#include <rte_byteorder.h>
+#include <rte_spinlock.h>
#include <rte_common.h>
#include <rte_memcpy.h>
#include <rte_memzone.h>
@@ -32,16 +33,28 @@
#define mb() rte_mb()
#define wmb() rte_wmb()
+#define rmb() rte_rmb()
#define udelay(x) rte_delay_us(x)
#define mdelay(x) rte_delay_ms(x)
#define memcpy rte_memcpy
+#ifndef upper_32_bits
+#define upper_32_bits(n) ((u32)(((n) >> 16) >> 16))
+#define lower_32_bits(n) ((u32)((n) & 0xffffffff))
+#endif
+
+#ifndef cpu_to_le32
+#define cpu_to_le16(v) rte_cpu_to_le_16((u16)(v))
+#define cpu_to_le32(v) rte_cpu_to_le_32((u32)(v))
+#endif
+
#define spinlock_t rte_spinlock_t
#define spin_lock_init(spinlock_v) rte_spinlock_init(spinlock_v)
#define spin_lock(spinlock_v) rte_spinlock_lock(spinlock_v)
#define spin_unlock(spinlock_v) rte_spinlock_unlock(spinlock_v)
+#define _RING_(off) ((off) + (0x08000))
#define _ETH_(off) ((off) + (0x10000))
#define _NIC_(off) ((off) + (0x30000))
#define _MAC_(off) ((off) + (0x60000))
diff --git a/drivers/net/rnp/meson.build b/drivers/net/rnp/meson.build
index 29e6d49..ff3dc41 100644
--- a/drivers/net/rnp/meson.build
+++ b/drivers/net/rnp/meson.build
@@ -14,4 +14,5 @@ includes += include_directories('base')
sources = files(
'rnp_ethdev.c',
+ 'rnp_rxtx.c',
)
diff --git a/drivers/net/rnp/rnp.h b/drivers/net/rnp/rnp.h
index 19ef493..ab7bd60 100644
--- a/drivers/net/rnp/rnp.h
+++ b/drivers/net/rnp/rnp.h
@@ -105,6 +105,7 @@ struct rnp_eth_port {
struct rte_ether_addr mac_addr;
struct rte_eth_dev *eth_dev;
struct rnp_port_attr attr;
+ struct rnp_tx_queue *tx_queues[RNP_MAX_RX_QUEUE_NUM];
struct rnp_hw *hw;
};
@@ -113,6 +114,7 @@ struct rnp_eth_adapter {
struct rte_pci_device *pdev;
struct rte_eth_dev *eth_dev; /* alloc eth_dev by platform */
+ struct rte_mempool *reset_pool;
struct rnp_eth_port *ports[RNP_MAX_PORT_OF_PF];
uint16_t closed_ports;
uint16_t inited_ports;
diff --git a/drivers/net/rnp/rnp_ethdev.c b/drivers/net/rnp/rnp_ethdev.c
index 13d949a..d5e5ef7 100644
--- a/drivers/net/rnp/rnp_ethdev.c
+++ b/drivers/net/rnp/rnp_ethdev.c
@@ -14,6 +14,7 @@
#include "base/rnp_mbx_fw.h"
#include "base/rnp_mac.h"
#include "base/rnp_common.h"
+#include "rnp_rxtx.h"
static struct rte_eth_dev *
rnp_alloc_eth_port(struct rte_pci_device *pci, char *name)
@@ -237,6 +238,11 @@ static int rnp_allmulticast_disable(struct rte_eth_dev *eth_dev)
.promiscuous_disable = rnp_promiscuous_disable,
.allmulticast_enable = rnp_allmulticast_enable,
.allmulticast_disable = rnp_allmulticast_disable,
+
+ .rx_queue_setup = rnp_rx_queue_setup,
+ .rx_queue_release = rnp_dev_rx_queue_release,
+ .tx_queue_setup = rnp_tx_queue_setup,
+ .tx_queue_release = rnp_dev_tx_queue_release,
};
static void
@@ -330,6 +336,26 @@ static int rnp_allmulticast_disable(struct rte_eth_dev *eth_dev)
}
static int
+rnp_rx_reset_pool_setup(struct rnp_eth_adapter *adapter)
+{
+ struct rte_eth_dev *eth_dev = adapter->eth_dev;
+ char name[RTE_MEMPOOL_NAMESIZE];
+
+ snprintf(name, sizeof(name), "rx_reset_pool_%d:%d",
+ eth_dev->data->port_id, eth_dev->device->numa_node);
+
+ adapter->reset_pool = rte_pktmbuf_pool_create(name, 2,
+ 0, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
+ eth_dev->device->numa_node);
+ if (adapter->reset_pool == NULL) {
+ RNP_PMD_ERR("mempool %s create failed", name);
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+static int
rnp_eth_dev_init(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
@@ -424,6 +450,9 @@ static int rnp_allmulticast_disable(struct rte_eth_dev *eth_dev)
rte_eth_dev_probing_finish(sub_eth_dev);
}
}
+ ret = rnp_rx_reset_pool_setup(adapter);
+ if (ret)
+ goto eth_alloc_error;
/* enable link update event interrupt */
rte_intr_callback_register(intr_handle,
rnp_dev_interrupt_handler, adapter);
diff --git a/drivers/net/rnp/rnp_rxtx.c b/drivers/net/rnp/rnp_rxtx.c
new file mode 100644
index 0000000..3c34f23
--- /dev/null
+++ b/drivers/net/rnp/rnp_rxtx.c
@@ -0,0 +1,476 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Mucse IC Design Ltd.
+ */
+
+#include <stdint.h>
+
+#include <rte_ethdev.h>
+#include <rte_memzone.h>
+#include <rte_mbuf.h>
+#include <rte_malloc.h>
+
+#include "base/rnp_bdq_if.h"
+#include "base/rnp_dma_regs.h"
+#include "rnp_rxtx.h"
+#include "rnp_logs.h"
+#include "rnp.h"
+
+static void rnp_tx_queue_release_mbuf(struct rnp_tx_queue *txq);
+static void rnp_tx_queue_sw_reset(struct rnp_tx_queue *txq);
+static void rnp_tx_queue_release(void *_txq);
+
+static __rte_always_inline phys_addr_t
+rnp_get_dma_addr(struct rnp_queue_attr *attr, struct rte_mbuf *mbuf)
+{
+ phys_addr_t dma_addr;
+
+ dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova(mbuf));
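+	/* mirror the queue's sriov state into the top byte of the bus
+	 * address so the dma engine can apply vf isolation (same layout
+	 * as the dmah high byte programmed in rnp_setup_txbdr)
+	 */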
+ if (attr->sriov_st)
+ dma_addr |= (attr->sriov_st << 56);
+
+ return dma_addr;
+}
+
+static void rnp_rx_queue_release_mbuf(struct rnp_rx_queue *rxq)
+{
+ uint16_t i;
+
+ if (!rxq)
+ return;
+
+ if (rxq->sw_ring) {
+ for (i = 0; i < rxq->attr.nb_desc; i++) {
+ if (rxq->sw_ring[i].mbuf)
+ rte_pktmbuf_free_seg(rxq->sw_ring[i].mbuf);
+ }
+ memset(rxq->sw_ring, 0,
+ sizeof(rxq->sw_ring[0]) * rxq->attr.nb_desc);
+ }
+}
+
+static void rnp_rx_queue_release(void *_rxq)
+{
+ struct rnp_rx_queue *rxq = (struct rnp_rx_queue *)_rxq;
+
+ PMD_INIT_FUNC_TRACE();
+
+ if (rxq) {
+ rnp_rx_queue_release_mbuf(rxq);
+ if (rxq->rz)
+ rte_memzone_free(rxq->rz);
+ if (rxq->sw_ring)
+ rte_free(rxq->sw_ring);
+ rte_free(rxq);
+ }
+}
+
+static int
+rnp_tx_queue_reset(struct rnp_eth_port *port,
+ struct rnp_tx_queue *txq)
+{
+ struct rnp_hw *hw = port->hw;
+
+ rnp_reset_hw_txq_op(hw, txq);
+
+ return 0;
+}
+
+static int
+rnp_rx_queue_reset(struct rnp_eth_port *port,
+ struct rnp_rx_queue *rxq)
+{
+ struct rte_eth_dev_data *data = port->eth_dev->data;
+ struct rnp_eth_adapter *adapter = port->hw->back;
+ struct rte_eth_dev *dev = port->eth_dev;
+ struct rnp_rxq_reset_res res = {0};
+ uint16_t qidx = rxq->attr.queue_id;
+ struct rnp_tx_queue *txq = NULL;
+	struct rte_eth_txconf def_conf = {0};
+ struct rnp_hw *hw = port->hw;
+ struct rte_mbuf *m_mbuf[2];
+ bool tx_new = false;
+ uint16_t index;
+ int err = 0;
+
+ index = rxq->attr.index;
+ /* disable eth send pkts to this ring */
+ rxq->rx_tail = RNP_E_REG_RD(hw, RNP_RXQ_HEAD(index));
+ if (!rxq->rx_tail)
+ return 0;
+ if (qidx < data->nb_tx_queues && data->tx_queues[qidx]) {
+ txq = (struct rnp_tx_queue *)data->tx_queues[qidx];
+ } else {
+		/* tx queue has been released, or txq num is less than rxq num */
+ def_conf.tx_deferred_start = true;
+ def_conf.tx_free_thresh = 32;
+ def_conf.tx_rs_thresh = 32;
+ if (dev->dev_ops->tx_queue_setup)
+ err = dev->dev_ops->tx_queue_setup(dev, qidx,
+ rxq->attr.nb_desc,
+ dev->data->numa_node, &def_conf);
+ if (err) {
+			RNP_PMD_ERR("rxq[%d] reset pair txq setup failed", qidx);
+ return err;
+ }
+ txq = port->tx_queues[qidx];
+ tx_new = true;
+ }
+ if (unlikely(rte_mempool_get_bulk(adapter->reset_pool, (void *)m_mbuf,
+ 2) < 0)) {
+ RNP_PMD_LOG(WARNING, "port[%d] reset rx queue[%d] failed "
+ "because mbuf alloc failed\n",
+ data->port_id, qidx);
+ return -ENOMEM;
+ }
+ rnp_rxq_flow_disable(hw, index);
+ rte_mbuf_refcnt_set(m_mbuf[0], 1);
+ rte_mbuf_refcnt_set(m_mbuf[1], 1);
+ m_mbuf[0]->data_off = RTE_PKTMBUF_HEADROOM;
+ m_mbuf[1]->data_off = RTE_PKTMBUF_HEADROOM;
+ res.eth_hdr = rte_pktmbuf_mtod(m_mbuf[0], uint8_t *);
+ res.rx_pkt_addr = rnp_get_dma_addr(&rxq->attr, m_mbuf[1]);
+ res.tx_pkt_addr = rnp_get_dma_addr(&txq->attr, m_mbuf[0]);
+ rnp_reset_hw_rxq_op(hw, rxq, txq, &res);
+ if (tx_new)
+ rnp_tx_queue_release(txq);
+ else
+ txq->tx_tail = RNP_E_REG_RD(hw, RNP_TXQ_HEAD(index));
+ if (!tx_new) {
+ if (txq->tx_tail) {
+ rnp_tx_queue_release_mbuf(txq);
+ rnp_tx_queue_reset(port, txq);
+ rnp_tx_queue_sw_reset(txq);
+ }
+ }
+ rte_mempool_put_bulk(adapter->reset_pool, (void **)m_mbuf, 2);
+ rnp_rxq_flow_enable(hw, index);
+ rte_io_wmb();
+ RNP_E_REG_WR(hw, RNP_RXQ_LEN(index), rxq->attr.nb_desc);
+
+ return 0;
+}
+
+static int
+rnp_alloc_rxbdr(struct rte_eth_dev *dev,
+ struct rnp_rx_queue *rxq,
+ uint16_t nb_rx_desc, int socket_id)
+{
+ const struct rte_memzone *rz = NULL;
+ uint32_t size = 0;
+
+ size = (nb_rx_desc + RNP_RX_MAX_BURST_SIZE) *
+ sizeof(struct rnp_rxsw_entry);
+ rxq->sw_ring = rte_zmalloc_socket("rx_swring", size,
+ RTE_CACHE_LINE_SIZE, socket_id);
+ if (rxq->sw_ring == NULL)
+ return -ENOMEM;
+ rz = rte_eth_dma_zone_reserve(dev, "rx_ring", rxq->attr.queue_id,
+ RNP_RX_MAX_RING_SZ, RNP_BD_RING_ALIGN, socket_id);
+ if (rz == NULL) {
+ rte_free(rxq->sw_ring);
+ rxq->sw_ring = NULL;
+ return -ENOMEM;
+ }
+ memset(rz->addr, 0, RNP_RX_MAX_RING_SZ);
+ rxq->rx_bdr = (struct rnp_rx_desc *)rz->addr;
+ rxq->ring_phys_addr = rz->iova;
+ rxq->rz = rz;
+
+ return 0;
+}
+
+static void
+rnp_rx_queue_sw_reset(struct rnp_rx_queue *rxq)
+{
+ uint32_t size = 0;
+ uint32_t idx = 0;
+
+ rxq->nb_rx_free = rxq->attr.nb_desc - 1;
+ rxq->rx_tail = 0;
+
+ size = rxq->attr.nb_desc + RNP_RX_MAX_BURST_SIZE;
+ for (idx = 0; idx < size * sizeof(struct rnp_rx_desc); idx++)
+ ((volatile char *)rxq->rx_bdr)[idx] = 0;
+}
+
+
+int rnp_rx_queue_setup(struct rte_eth_dev *eth_dev,
+ uint16_t qidx,
+ uint16_t nb_rx_desc,
+ unsigned int socket_id,
+ const struct rte_eth_rxconf *rx_conf,
+ struct rte_mempool *mb_pool)
+{
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(eth_dev);
+ struct rte_eth_dev_data *data = eth_dev->data;
+ struct rnp_hw *hw = port->hw;
+ struct rnp_rx_queue *rxq = NULL;
+ uint64_t offloads;
+ int err = 0;
+
+ RNP_PMD_LOG(INFO, "RXQ[%d] setup nb-desc %d\n", qidx, nb_rx_desc);
+ offloads = rx_conf->offloads | data->dev_conf.rxmode.offloads;
+ if (rte_is_power_of_2(nb_rx_desc) == 0) {
+		RNP_PMD_ERR("rx queue desc count must be a power of 2\n");
+ return -EINVAL;
+ }
+ if (nb_rx_desc > RNP_MAX_BD_COUNT)
+ return -EINVAL;
+	/* check whether the queue has been created; if so, release it */
+ if (qidx < data->nb_rx_queues &&
+ data->rx_queues[qidx] != NULL) {
+ rnp_rx_queue_release(data->rx_queues[qidx]);
+ data->rx_queues[qidx] = NULL;
+ }
+ rxq = rte_zmalloc_socket("rnp_rxq", sizeof(struct rnp_rx_queue),
+ RTE_CACHE_LINE_SIZE, socket_id);
+ if (rxq == NULL) {
+ RNP_PMD_ERR("Failed to allocate rx ring memory");
+ return -ENOMEM;
+ }
+ rxq->rx_offloads = offloads;
+ /* queue hw info */
+ rxq->attr.index = rnp_get_dma_ring_index(port, qidx);
+ rxq->attr.nb_desc_mask = nb_rx_desc - 1;
+ rxq->attr.nb_desc = nb_rx_desc;
+ rxq->attr.queue_id = qidx;
+ /* queue map to port hw info */
+ rxq->attr.vf_num = hw->mbx.vf_num;
+ rxq->attr.sriov_st = hw->mbx.sriov_st;
+ rxq->attr.lane_id = port->attr.nr_lane;
+ rxq->attr.port_id = data->port_id;
+#define RNP_RXQ_BD_TIMEOUT (5000000)
+ rxq->nodesc_tm_thresh = RNP_RXQ_BD_TIMEOUT;
+ rxq->rx_buf_len = (uint16_t)(rte_pktmbuf_data_room_size(mb_pool) -
+ RTE_PKTMBUF_HEADROOM);
+ rxq->mb_pool = mb_pool;
+ err = rnp_alloc_rxbdr(eth_dev, rxq, nb_rx_desc, socket_id);
+ if (err)
+ goto fail;
+ RNP_PMD_LOG(INFO, "PF[%d] dev:[%d] hw-lane[%d] rx_qid[%d] "
+ "hw_ridx %d socket %d\n",
+ hw->mbx.pf_num, rxq->attr.port_id,
+ rxq->attr.lane_id, qidx,
+ rxq->attr.index, socket_id);
+ rxq->rx_free_thresh = (rx_conf->rx_free_thresh) ?
+ rx_conf->rx_free_thresh : RNP_DEFAULT_RX_FREE_THRESH;
+ rxq->pthresh = (rx_conf->rx_thresh.pthresh) ?
+ rx_conf->rx_thresh.pthresh : RNP_RX_DESC_FETCH_TH;
+ rxq->pburst = (rx_conf->rx_thresh.hthresh) ?
+ rx_conf->rx_thresh.hthresh : RNP_RX_DESC_FETCH_BURST;
+ rnp_setup_rxbdr(hw, rxq);
+ if (rxq->rx_tail) {
+ err = rnp_rx_queue_reset(port, rxq);
+ if (err) {
+ RNP_PMD_ERR("PF[%d] dev:[%d] lane[%d] rx_qid[%d] "
+ "hw_ridx[%d] bdr setup failed",
+ hw->mbx.pf_num, rxq->attr.port_id,
+ rxq->attr.lane_id, qidx, rxq->attr.index);
+ goto rxbd_setup_failed;
+ }
+ }
+ rnp_rx_queue_sw_reset(rxq);
+ data->rx_queues[qidx] = rxq;
+
+ return 0;
+rxbd_setup_failed:
+ if (rxq->rz)
+ rte_memzone_free(rxq->rz);
+fail:
+ rte_free(rxq);
+
+ return err;
+}
+
+static void rnp_tx_queue_release_mbuf(struct rnp_tx_queue *txq)
+{
+ uint16_t i;
+
+ if (!txq)
+ return;
+ if (txq->sw_ring) {
+ for (i = 0; i < txq->attr.nb_desc; i++) {
+ if (txq->sw_ring[i].mbuf) {
+ rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
+ txq->sw_ring[i].mbuf = NULL;
+ }
+ }
+ }
+}
+
+static void rnp_tx_queue_release(void *_txq)
+{
+ struct rnp_tx_queue *txq = (struct rnp_tx_queue *)_txq;
+
+ PMD_INIT_FUNC_TRACE();
+
+ if (txq) {
+ rnp_tx_queue_release_mbuf(txq);
+
+ if (txq->rz)
+ rte_memzone_free(txq->rz);
+ if (txq->sw_ring)
+ rte_free(txq->sw_ring);
+ rte_free(txq);
+ }
+}
+
+void
+rnp_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
+{
+ rnp_rx_queue_release(dev->data->rx_queues[qid]);
+}
+
+void
+rnp_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
+{
+ rnp_tx_queue_release(dev->data->tx_queues[qid]);
+}
+
+static int rnp_alloc_txbdr(struct rte_eth_dev *dev,
+ struct rnp_tx_queue *txq,
+ uint16_t nb_desc, int socket_id)
+{
+ const struct rte_memzone *rz = NULL;
+ int size;
+
+ size = nb_desc * sizeof(struct rnp_txsw_entry);
+ txq->sw_ring = rte_zmalloc_socket("tx_swq", size,
+ RTE_CACHE_LINE_SIZE, socket_id);
+ if (txq->sw_ring == NULL)
+ return -ENOMEM;
+
+ rz = rte_eth_dma_zone_reserve(dev, "tx_ring", txq->attr.queue_id,
+ RNP_TX_MAX_RING_SZ, RNP_BD_RING_ALIGN, socket_id);
+ if (rz == NULL) {
+ rte_free(txq->sw_ring);
+ txq->sw_ring = NULL;
+ return -ENOMEM;
+ }
+ memset(rz->addr, 0, RNP_TX_MAX_RING_SZ);
+ txq->ring_phys_addr = rz->iova;
+ txq->tx_bdr = rz->addr;
+ txq->rz = rz;
+
+ return 0;
+}
+
+static void
+rnp_tx_queue_sw_reset(struct rnp_tx_queue *txq)
+{
+ struct rnp_txsw_entry *sw_ring = txq->sw_ring;
+ uint32_t idx = 0, prev = 0;
+ uint32_t size = 0;
+
+ prev = (uint16_t)(txq->attr.nb_desc - 1);
+ for (idx = 0; idx < txq->attr.nb_desc; idx++) {
+ sw_ring[idx].mbuf = NULL;
+ sw_ring[idx].last_id = idx;
+ sw_ring[prev].next_id = idx;
+ prev = idx;
+ }
+ txq->nb_tx_free = txq->attr.nb_desc - 1;
+ txq->tx_next_dd = txq->tx_rs_thresh - 1;
+ txq->tx_next_rs = txq->tx_rs_thresh - 1;
+
+ size = (txq->attr.nb_desc + RNP_TX_MAX_BURST_SIZE);
+ for (idx = 0; idx < size * sizeof(struct rnp_tx_desc); idx++)
+ ((volatile char *)txq->tx_bdr)[idx] = 0;
+}
+
+int
+rnp_tx_queue_setup(struct rte_eth_dev *dev,
+ uint16_t qidx, uint16_t nb_desc,
+ unsigned int socket_id,
+ const struct rte_eth_txconf *tx_conf)
+{
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(dev);
+ struct rte_eth_dev_data *data = dev->data;
+ struct rnp_hw *hw = port->hw;
+ struct rnp_tx_queue *txq;
+ uint64_t offloads = 0;
+ int err = 0;
+
+ PMD_INIT_FUNC_TRACE();
+ offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
+ RNP_PMD_INFO("TXQ[%d] setup nb-desc %d\n", qidx, nb_desc);
+ if (rte_is_power_of_2(nb_desc) == 0) {
+		RNP_PMD_ERR("tx queue desc count must be a power of 2\n");
+ return -EINVAL;
+ }
+ if (nb_desc > RNP_MAX_BD_COUNT)
+ return -EINVAL;
+	/* check whether the queue has been created; if so, release it */
+ if (qidx < data->nb_tx_queues && data->tx_queues[qidx]) {
+ rnp_tx_queue_release(data->tx_queues[qidx]);
+ data->tx_queues[qidx] = NULL;
+ }
+ txq = rte_zmalloc_socket("rnp_txq", sizeof(struct rnp_tx_queue),
+ RTE_CACHE_LINE_SIZE, socket_id);
+ if (!txq) {
+ RNP_PMD_ERR("Failed to allocate TX ring memory");
+ return -ENOMEM;
+ }
+ txq->tx_rs_thresh = tx_conf->tx_rs_thresh ?
+ tx_conf->tx_rs_thresh : RNP_DEFAULT_TX_RS_THRESH;
+ txq->tx_free_thresh = tx_conf->tx_free_thresh ?
+ tx_conf->tx_free_thresh : RNP_DEFAULT_TX_FREE_THRESH;
+ if (txq->tx_rs_thresh > txq->tx_free_thresh) {
+ RNP_PMD_ERR("tx_rs_thresh must be less than or "
+ "equal to tx_free_thresh. (tx_free_thresh=%u"
+ " tx_rs_thresh=%u port=%d queue=%d)",
+ (unsigned int)tx_conf->tx_free_thresh,
+ (unsigned int)tx_conf->tx_rs_thresh,
+ (int)data->port_id,
+ (int)qidx);
+ err = -EINVAL;
+ goto txbd_setup_failed;
+ }
+ if (txq->tx_rs_thresh + txq->tx_free_thresh >= nb_desc) {
+		RNP_PMD_ERR("tx_rs_thresh + tx_free_thresh >= nb_desc, "
+				"%d + %d >= %d", txq->tx_rs_thresh,
+ txq->tx_free_thresh, nb_desc);
+ err = -EINVAL;
+ goto txbd_setup_failed;
+ }
+ txq->pthresh = (tx_conf->tx_thresh.pthresh) ?
+ tx_conf->tx_thresh.pthresh : RNP_TX_DESC_FETCH_TH;
+ txq->pburst = (tx_conf->tx_thresh.hthresh) ?
+ tx_conf->tx_thresh.hthresh : RNP_TX_DESC_FETCH_BURST;
+ txq->free_mbufs = rte_zmalloc_socket("txq->free_mbufs",
+ sizeof(struct rte_mbuf *) * txq->tx_rs_thresh,
+ RTE_CACHE_LINE_SIZE, socket_id);
+ txq->attr.index = rnp_get_dma_ring_index(port, qidx);
+ txq->attr.lane_id = port->attr.nr_lane;
+ txq->attr.port_id = dev->data->port_id;
+ txq->attr.nb_desc_mask = nb_desc - 1;
+ txq->attr.vf_num = hw->mbx.vf_num;
+ txq->attr.nb_desc = nb_desc;
+ txq->attr.queue_id = qidx;
+
+ err = rnp_alloc_txbdr(dev, txq, nb_desc, socket_id);
+ if (err)
+ goto txbd_setup_failed;
+ rnp_setup_txbdr(hw, txq);
+ if (txq->tx_tail)
+ rnp_reset_hw_txq_op(hw, txq);
+ rnp_tx_queue_sw_reset(txq);
+ RNP_PMD_LOG(INFO, "PF[%d] dev:[%d] hw-lane[%d] txq queue_id[%d] "
+ "dma_idx %d socket %d\n",
+ hw->mbx.pf_num, txq->attr.port_id,
+ txq->attr.lane_id, qidx,
+ txq->attr.index, socket_id);
+ if (qidx < dev->data->nb_tx_queues)
+ data->tx_queues[qidx] = txq;
+ port->tx_queues[qidx] = txq;
+
+ txq->tx_deferred_start = tx_conf->tx_deferred_start;
+ txq->tx_offloads = offloads;
+
+ return 0;
+txbd_setup_failed:
+
+ rte_free(txq);
+
+ return err;
+}
diff --git a/drivers/net/rnp/rnp_rxtx.h b/drivers/net/rnp/rnp_rxtx.h
new file mode 100644
index 0000000..3ea977c
--- /dev/null
+++ b/drivers/net/rnp/rnp_rxtx.h
@@ -0,0 +1,123 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Mucse IC Design Ltd.
+ */
+
+#ifndef _RNP_RXTX_H_
+#define _RNP_RXTX_H_
+
+#include "rnp.h"
+#include "base/rnp_bdq_if.h"
+
+#define RNP_RX_MAX_BURST_SIZE (32)
+#define RNP_TX_MAX_BURST_SIZE (32)
+#define RNP_BD_RING_ALIGN (128)
+#define RNP_MAX_RING_DESC (4096)
+#define RNP_RX_MAX_RING_SZ \
+ ((RNP_MAX_RING_DESC + \
+ RNP_RX_MAX_BURST_SIZE) * \
+ sizeof(struct rnp_rx_desc))
+#define RNP_TX_MAX_RING_SZ \
+ ((RNP_MAX_RING_DESC + \
+ RNP_TX_MAX_BURST_SIZE) * \
+ sizeof(struct rnp_tx_desc))
+
+#define RNP_RX_DESC_FETCH_TH		(96) /* dma fetch desc threshold */
+#define RNP_RX_DESC_FETCH_BURST		(32) /* max num of descs per fetch */
+#define RNP_TX_DESC_FETCH_TH		(64) /* dma fetch desc threshold */
+#define RNP_TX_DESC_FETCH_BURST		(32) /* max num of descs per fetch */
+
+#define RNP_DEFAULT_TX_FREE_THRESH (32)
+#define RNP_DEFAULT_TX_RS_THRESH (32)
+#define RNP_DEFAULT_RX_FREE_THRESH (32)
+
+/* queue attribute info, shared by rx and tx queues */
+struct rnp_queue_attr {
+ uint64_t sriov_st; /* enable sriov info */
+	uint16_t vf_num; /* ring belongs to which vf */
+
+	uint16_t queue_id; /* sw queue index */
+	uint16_t index; /* hw ring index */
+	uint16_t lane_id; /* ring belongs to which physical lane */
+	uint16_t nb_desc; /* max bds */
+	uint16_t nb_desc_mask; /* mask of bds */
+	uint16_t port_id; /* dpdk managed port sequence id */
+};
+
+struct rnp_rxsw_entry {
+ struct rte_mbuf *mbuf;
+};
+
+struct rnp_rx_queue {
+ struct rte_mempool *mb_pool; /* mbuf pool to populate rx ring. */
+ const struct rte_memzone *rz; /* rx hw ring base alloc memzone */
+ uint64_t ring_phys_addr; /* rx hw ring physical addr */
+	volatile struct rnp_rx_desc *rx_bdr; /* rx hw ring virtual addr */
+ volatile struct rnp_rx_desc zero_desc;
+ struct rnp_rxsw_entry *sw_ring; /* rx software ring addr */
+ volatile void *rx_tailreg; /* hw desc tail register */
+	volatile void *rx_headreg; /* hw desc head register */
+ struct rnp_queue_attr attr;
+
+ uint16_t rx_buf_len; /* mempool mbuf buf len */
+	uint16_t nb_rx_free; /* number of available descs */
+	uint16_t rx_free_thresh; /* rx free desc resource thresh */
+ uint16_t rx_tail;
+
+ uint32_t nodesc_tm_thresh; /* rx queue no desc timeout thresh */
+ uint8_t rx_deferred_start; /* do not start queue with dev_start(). */
+ uint8_t pthresh; /* rx desc prefetch threshold */
+ uint8_t pburst; /* rx desc prefetch burst */
+
+ uint64_t rx_offloads; /* user set hw offload features */
+ struct rte_mbuf **free_mbufs; /* rx bulk alloc reserve of free mbufs */
+};
+
+struct rnp_txsw_entry {
+ struct rte_mbuf *mbuf; /* sync with tx desc dma physical addr */
+ uint16_t next_id; /* next entry resource used */
+ uint16_t last_id; /* last entry resource used */
+};
+
+struct rnp_tx_desc;
+struct rnp_tx_queue {
+ const struct rte_memzone *rz;
+ uint64_t ring_phys_addr; /* tx dma ring physical addr */
+ volatile struct rnp_tx_desc *tx_bdr; /* tx dma ring virtual addr */
+ struct rnp_txsw_entry *sw_ring; /* tx software ring addr */
+ volatile void *tx_tailreg; /* hw desc tail register */
+	volatile void *tx_headreg; /* hw desc head register */
+ struct rnp_queue_attr attr;
+
+ uint16_t nb_tx_free; /* avail desc to set pkts */
+ uint16_t nb_tx_used;
+ uint16_t tx_tail;
+
+ uint16_t tx_next_dd; /* next to scan writeback dd bit */
+	uint16_t tx_rs_thresh; /* interval for setting rs bit */
+	uint16_t tx_next_rs; /* index of next desc to set rs bit */
+ uint16_t tx_free_thresh; /* thresh to free tx desc resource */
+
+	uint8_t tx_deferred_start; /* do not start queue with dev_start(). */
+	uint8_t pthresh; /* tx desc prefetch threshold */
+	uint8_t pburst; /* tx desc prefetch burst */
+
+ uint64_t tx_offloads; /* tx offload features */
+ struct rte_mbuf **free_mbufs; /* tx bulk free reserve of free mbufs */
+};
+
+void
+rnp_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
+void
+rnp_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
+int rnp_rx_queue_setup(struct rte_eth_dev *eth_dev,
+ uint16_t qidx,
+ uint16_t nb_rx_desc,
+ unsigned int socket_id,
+ const struct rte_eth_rxconf *rx_conf,
+ struct rte_mempool *mb_pool);
+int rnp_tx_queue_setup(struct rte_eth_dev *dev,
+ uint16_t qidx, uint16_t nb_desc,
+ unsigned int socket_id,
+ const struct rte_eth_txconf *tx_conf);
+
+#endif /* _RNP_RXTX_H_ */
--
1.8.3.1
^ permalink raw reply [flat|nested] 29+ messages in thread
* [PATCH v7 09/28] net/rnp: add queue stop and start operations
2025-02-08 2:43 [PATCH v7 00/28] [v6]drivers/net Add Support mucse N10 Pmd Driver Wenbo Cao
` (7 preceding siblings ...)
2025-02-08 2:43 ` [PATCH v7 08/28] net/rnp: add queue setup and release operations Wenbo Cao
@ 2025-02-08 2:43 ` Wenbo Cao
2025-02-08 2:43 ` [PATCH v7 10/28] net/rnp: add support device start stop operations Wenbo Cao
` (18 subsequent siblings)
27 siblings, 0 replies; 29+ messages in thread
From: Wenbo Cao @ 2025-02-08 2:43 UTC (permalink / raw)
To: thomas, Wenbo Cao, Anatoly Burakov
Cc: stephen, dev, ferruh.yigit, andrew.rybchenko, yaojun
Support Rx/Tx queue stop/start operations. Stopping an Rx queue
requires resetting it, and all Rx queues must be stopped while
that queue is being reset.
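As a usage sketch (illustrative only: the port/queue ids and the
surrounding setup are assumptions, not part of this patch), an
application reaches these callbacks through the generic ethdev API:

	/* stop, then restart, rx/tx queue 0 of a started port */
	uint16_t port_id = 0, qid = 0;

	if (rte_eth_dev_rx_queue_stop(port_id, qid) == 0)
		rte_eth_dev_rx_queue_start(port_id, qid);
	if (rte_eth_dev_tx_queue_stop(port_id, qid) == 0)
		rte_eth_dev_tx_queue_start(port_id, qid);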
Signed-off-by: Wenbo Cao <caowenbo@mucse.com>
---
doc/guides/nics/features/rnp.ini | 1 +
drivers/net/rnp/base/rnp_common.c | 3 +
drivers/net/rnp/rnp_link.c | 340 ++++++++++++++++++++++++++++++++++++++
drivers/net/rnp/rnp_link.h | 36 ++++
drivers/net/rnp/rnp_rxtx.c | 167 +++++++++++++++++++
drivers/net/rnp/rnp_rxtx.h | 9 +
6 files changed, 556 insertions(+)
create mode 100644 drivers/net/rnp/rnp_link.c
create mode 100644 drivers/net/rnp/rnp_link.h
diff --git a/doc/guides/nics/features/rnp.ini b/doc/guides/nics/features/rnp.ini
index 65f1ed3..fd7d4b9 100644
--- a/doc/guides/nics/features/rnp.ini
+++ b/doc/guides/nics/features/rnp.ini
@@ -5,6 +5,7 @@
;
[Features]
Speed capabilities = Y
+Queue start/stop = Y
Promiscuous mode = Y
Allmulticast mode = Y
Linux = Y
diff --git a/drivers/net/rnp/base/rnp_common.c b/drivers/net/rnp/base/rnp_common.c
index 3fa2a49..7d1f96c 100644
--- a/drivers/net/rnp/base/rnp_common.c
+++ b/drivers/net/rnp/base/rnp_common.c
@@ -65,6 +65,9 @@ int rnp_init_hw(struct rnp_hw *hw)
/* setup mac resiger ctrl base */
for (idx = 0; idx < hw->max_port_num; idx++)
hw->mac_base[idx] = (u8 *)hw->e_ctrl + RNP_MAC_BASE_OFFSET(idx);
+	/* all tx hw queues must be started */
+ for (idx = 0; idx < RNP_MAX_RX_QUEUE_NUM; idx++)
+ RNP_E_REG_WR(hw, RNP_TXQ_START(idx), true);
return 0;
}
diff --git a/drivers/net/rnp/rnp_link.c b/drivers/net/rnp/rnp_link.c
new file mode 100644
index 0000000..2f94397
--- /dev/null
+++ b/drivers/net/rnp/rnp_link.c
@@ -0,0 +1,340 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Mucse IC Design Ltd.
+ */
+
+#include <rte_alarm.h>
+
+#include "base/rnp_mac_regs.h"
+#include "base/rnp_dma_regs.h"
+#include "base/rnp_mac.h"
+#include "base/rnp_fw_cmd.h"
+#include "base/rnp_mbx_fw.h"
+
+#include "rnp.h"
+#include "rnp_rxtx.h"
+#include "rnp_link.h"
+
+static void
+rnp_link_flow_setup(struct rnp_eth_port *port)
+{
+ struct rnp_hw *hw = port->hw;
+ u32 ctrl = 0;
+ u16 lane = 0;
+
+ lane = port->attr.nr_lane;
+ rte_spinlock_lock(&port->rx_mac_lock);
+ ctrl = RNP_MAC_REG_RD(hw, lane, RNP_MAC_RX_CFG);
+ if (port->attr.link_ready) {
+ ctrl &= ~RNP_MAC_LM;
+ RNP_RX_ETH_ENABLE(hw, lane);
+ } else {
+ RNP_RX_ETH_DISABLE(hw, lane);
+ ctrl |= RNP_MAC_LM;
+ }
+ RNP_MAC_REG_WR(hw, lane, RNP_MAC_RX_CFG, ctrl);
+ rte_spinlock_unlock(&port->rx_mac_lock);
+}
+
+static uint64_t
+rnp_parse_speed_code(uint32_t speed_code)
+{
+ uint32_t speed = 0;
+
+ switch (speed_code) {
+ case RNP_LANE_SPEED_10M:
+ speed = RTE_ETH_SPEED_NUM_10M;
+ break;
+ case RNP_LANE_SPEED_100M:
+ speed = RTE_ETH_SPEED_NUM_100M;
+ break;
+ case RNP_LANE_SPEED_1G:
+ speed = RTE_ETH_SPEED_NUM_1G;
+ break;
+ case RNP_LANE_SPEED_10G:
+ speed = RTE_ETH_SPEED_NUM_10G;
+ break;
+ case RNP_LANE_SPEED_25G:
+ speed = RTE_ETH_SPEED_NUM_25G;
+ break;
+ case RNP_LANE_SPEED_40G:
+ speed = RTE_ETH_SPEED_NUM_40G;
+ break;
+ default:
+ speed = RTE_ETH_SPEED_NUM_UNKNOWN;
+ }
+
+ return speed;
+}
+
+static bool
+rnp_update_speed_changed(struct rnp_eth_port *port)
+{
+ struct rnp_hw *hw = port->hw;
+ uint32_t speed_code = 0;
+ bool speed_changed = 0;
+ bool duplex = false;
+ uint32_t magic = 0;
+ uint32_t linkstate;
+ uint64_t speed = 0;
+ uint16_t lane = 0;
+
+ lane = port->attr.nr_lane;
+ linkstate = RNP_E_REG_RD(hw, RNP_DEVICE_LINK_EX);
+ magic = linkstate & 0xF0000000;
+	/* check if the speed has changed, even if the link state has not */
+ if (RNP_SPEED_META_VALID(magic) &&
+ (linkstate & RNP_LINK_STATE(lane))) {
+ speed_code = RNP_LINK_SPEED_CODE(linkstate, lane);
+ speed = rnp_parse_speed_code(speed_code);
+ if (speed != port->attr.speed) {
+ duplex = RNP_LINK_DUPLEX_STATE(linkstate, lane);
+ port->attr.phy_meta.link_duplex = duplex;
+ port->attr.speed = speed;
+ speed_changed = 1;
+ }
+ }
+
+ return speed_changed;
+}
+
+static bool
+rnp_update_link_changed(struct rnp_eth_port *port,
+ struct rnp_link_stat_req *link)
+{
+ uint16_t lane = port->attr.nr_lane;
+ struct rnp_hw *hw = port->hw;
+ uint32_t link_up_bit = 0;
+ bool link_changed = 0;
+ uint32_t sync_bit = 0;
+ bool duplex = 0;
+
+ link_up_bit = link->lane_status & RTE_BIT32(lane);
+ sync_bit = RNP_E_REG_RD(hw, RNP_FW_LINK_SYNC);
+ lane = port->attr.nr_lane;
+ if (link_up_bit) {
+ /* port link down to up */
+ if (!port->attr.link_ready)
+ link_changed = true;
+ port->attr.link_ready = true;
+ if (link->port_st_magic == RNP_SPEED_VALID_MAGIC) {
+ port->attr.speed = link->states[lane].speed;
+ duplex = link->states[lane].duplex;
+ port->attr.duplex = duplex;
+			RNP_PMD_INFO("phy_id %d speed %d duplex "
+					"%d is_sgmii %d PortID %d",
+ link->states[lane].phy_addr,
+ link->states[lane].speed,
+ link->states[lane].duplex,
+ link->states[lane].is_sgmii,
+ port->attr.port_id);
+ }
+ } else {
+ /* port link to down */
+ if (port->attr.link_ready)
+ link_changed = true;
+ port->attr.link_ready = false;
+ }
+ if (!link_changed && sync_bit != link_up_bit)
+ link_changed = 1;
+
+ return link_changed;
+}
+
+static void rnp_link_stat_sync_mark(struct rnp_hw *hw, int lane, int up)
+{
+ uint32_t sync;
+
+ rte_spinlock_lock(&hw->link_sync);
+ sync = RNP_E_REG_RD(hw, RNP_FW_LINK_SYNC);
+ sync &= ~RNP_LINK_MAGIC_MASK;
+ sync |= RNP_LINK_MAGIC_CODE;
+ if (up)
+ sync |= RTE_BIT32(lane);
+ else
+ sync &= ~RTE_BIT32(lane);
+ RNP_E_REG_WR(hw, RNP_FW_LINK_SYNC, sync);
+ rte_spinlock_unlock(&hw->link_sync);
+}
+
+static void rnp_link_report(struct rnp_eth_port *port, bool link_en)
+{
+ struct rte_eth_dev_data *data = port->eth_dev->data;
+ struct rnp_hw *hw = port->hw;
+ struct rnp_rx_queue *rxq;
+ struct rnp_tx_queue *txq;
+ struct rte_eth_link link;
+ uint16_t idx;
+
+ if (data == NULL)
+ return;
+ for (idx = 0; idx < data->nb_rx_queues; idx++) {
+ rxq = data->rx_queues[idx];
+ if (!rxq)
+ continue;
+ rxq->rx_link = link_en;
+ }
+ for (idx = 0; idx < data->nb_tx_queues; idx++) {
+ txq = data->tx_queues[idx];
+ if (!txq)
+ continue;
+ txq->tx_link = link_en;
+ }
+ /* set default link info */
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
+ link.link_status = RTE_ETH_LINK_DOWN;
+ link.link_autoneg = RTE_ETH_LINK_FIXED;
+ if (link_en) {
+ link.link_duplex = port->attr.phy_meta.link_duplex;
+ link.link_speed = port->attr.speed;
+ link.link_status = link_en;
+ }
+ link.link_autoneg = port->attr.phy_meta.link_autoneg;
+	RNP_PMD_LOG(INFO, "PF[%d] link changed: changed_lane:0x%x, "
+ "status:0x%x",
+ hw->mbx.pf_num,
+ port->attr.nr_lane,
+ link_en);
+	/* report link info to the upper framework */
+	rte_eth_linkstatus_set(port->eth_dev, &link);
+	/* notify the event process of the link status change */
+	rte_eth_dev_callback_process(port->eth_dev, RTE_ETH_EVENT_INTR_LSC, NULL);
+	/* notify firmware that sw has received the LSC event */
+	rnp_link_stat_sync_mark(hw, port->attr.nr_lane, link_en);
+}
+
+static void rnp_dev_alarm_link_handler(void *param)
+{
+ struct rnp_eth_port *port = param;
+ uint32_t status;
+
+ if (port == NULL || port->eth_dev == NULL)
+ return;
+ status = port->attr.link_ready;
+ rnp_link_report(port, status);
+}
+
+void rnp_link_event(struct rnp_eth_adapter *adapter,
+ struct rnp_mbx_fw_cmd_req *req)
+{
+ struct rnp_link_stat_req *link = (struct rnp_link_stat_req *)req->data;
+ struct rnp_hw *hw = &adapter->hw;
+ struct rnp_eth_port *port = NULL;
+ bool speed_changed;
+ bool link_changed;
+ uint32_t lane;
+ uint8_t i = 0;
+
+ /* get real-time link && speed info */
+ for (i = 0; i < hw->max_port_num; i++) {
+ port = adapter->ports[i];
+ if (port == NULL)
+ continue;
+ speed_changed = false;
+ link_changed = false;
+ lane = port->attr.nr_lane;
+ if (RNP_LINK_NOCHANGED(lane, link->changed_lanes)) {
+ speed_changed = rnp_update_speed_changed(port);
+ if (!speed_changed)
+ continue;
+ }
+ link_changed = rnp_update_link_changed(port, link);
+ if (link_changed || speed_changed) {
+ rnp_link_flow_setup(port);
+ rte_eal_alarm_set(RNP_ALARM_INTERVAL,
+ rnp_dev_alarm_link_handler,
+ (void *)port);
+ }
+ }
+}
+
+int rnp_dev_link_update(struct rte_eth_dev *eth_dev,
+ int wait_to_complete)
+{
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(eth_dev);
+ struct rnp_phy_meta *phy_meta = &port->attr.phy_meta;
+ uint16_t lane = port->attr.nr_lane;
+ struct rnp_hw *hw = port->hw;
+ struct rte_eth_link link;
+ uint32_t status;
+
+ PMD_INIT_FUNC_TRACE();
+ memset(&link, 0, sizeof(link));
+ if (wait_to_complete && rte_eal_process_type() == RTE_PROC_PRIMARY)
+ rnp_mbx_fw_get_lane_stat(port);
+ status = port->attr.link_ready;
+ link.link_duplex = phy_meta->link_duplex;
+ link.link_status = status ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
+ if (link.link_status)
+ link.link_speed = port->attr.speed;
+ link.link_autoneg = phy_meta->link_autoneg ?
+ RTE_ETH_LINK_AUTONEG : RTE_ETH_LINK_FIXED;
+ rnp_link_stat_sync_mark(hw, lane, link.link_status);
+ rte_eth_linkstatus_set(eth_dev, &link);
+
+ return 0;
+}
+
+static void rnp_dev_link_task(void *param)
+{
+ struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(dev);
+ uint16_t lane = port->attr.nr_lane;
+ struct rnp_hw *hw = port->hw;
+ bool speed_changed = false;
+ bool link_changed = false;
+ bool duplex_attr = false;
+ uint32_t speed_code = 0;
+ uint32_t link_state;
+ bool duplex = false;
+ uint32_t speed = 0;
+
+ link_state = RNP_E_REG_RD(hw, RNP_DEVICE_LINK_EX);
+ if (link_state & RNP_LINK_DUPLEX_ATTR_EN)
+ duplex_attr = true;
+ else
+ link_state = RNP_E_REG_RD(hw, RNP_DEVICE_LINK);
+ if (link_state & RNP_LINK_STATE(lane)) {
+ /* Port link change to up */
+ speed_code = RNP_LINK_SPEED_CODE(link_state, lane);
+ speed = rnp_parse_speed_code(speed_code);
+ if (duplex_attr) {
+ duplex = RNP_LINK_DUPLEX_STATE(link_state, lane);
+ port->attr.phy_meta.link_duplex = duplex;
+ }
+ port->attr.speed = speed;
+ port->attr.pre_link = port->attr.link_ready;
+ port->attr.link_ready = true;
+ } else {
+ /* Port link to down */
+ port->attr.speed = RTE_ETH_SPEED_NUM_UNKNOWN;
+ port->attr.pre_link = port->attr.link_ready;
+ port->attr.link_ready = false;
+ }
+ if (port->attr.pre_link != port->attr.link_ready)
+ link_changed = true;
+ if (!link_changed)
+ speed_changed = rnp_update_speed_changed(port);
+ if (link_changed || speed_changed) {
+ if (!duplex_attr)
+ rnp_mbx_fw_get_lane_stat(port);
+ rnp_link_flow_setup(port);
+ rnp_link_report(port, port->attr.link_ready);
+ }
+ rte_eal_alarm_set(RNP_ALARM_INTERVAL,
+ rnp_dev_link_task,
+ (void *)dev);
+}
+
+void
+rnp_run_link_poll_task(struct rnp_eth_port *port)
+{
+ rte_eal_alarm_set(RNP_ALARM_INTERVAL, rnp_dev_link_task,
+ (void *)port->eth_dev);
+}
+
+void
+rnp_cancel_link_poll_task(struct rnp_eth_port *port)
+{
+ rte_eal_alarm_cancel(rnp_dev_link_task, port->eth_dev);
+}
diff --git a/drivers/net/rnp/rnp_link.h b/drivers/net/rnp/rnp_link.h
new file mode 100644
index 0000000..f0705f1
--- /dev/null
+++ b/drivers/net/rnp/rnp_link.h
@@ -0,0 +1,36 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Mucse IC Design Ltd.
+ */
+
+#ifndef _RNP_LINK_H_
+#define _RNP_LINK_H_
+
+#define RNP_DEVICE_LINK (0x3000c)
+#define RNP_DEVICE_LINK_EX (0xa800 + 64 * 64 - 4)
+#define RNP_LINK_NOCHANGED(lane_bit, change_lane) \
+ (!((RTE_BIT32(lane_bit)) & (change_lane)))
+#define RNP_LINK_DUPLEX_ATTR_EN (0xA0000000)
+#define RNP_SPEED_META_VALID(magic)	((magic) == 0xA0000000)
+#define RNP_LINK_STATE(n) RTE_BIT32(n)
+#define RNP_LINK_SPEED_CODE(sp, n) \
+ (((sp) & RTE_GENMASK32((11) + ((4) * (n)), \
+ (8) + ((4) * (n)))) >> (8 + 4 * (n)))
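+/* e.g. for lane 1 the speed code is taken from bits 15:12 of the
+ * link state word (illustrative reading of the macro above)
+ */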
+#define RNP_LINK_DUPLEX_STATE(sp, n) ((sp) & RTE_BIT32((24) + (n)))
+#define RNP_ALARM_INTERVAL (50000) /* unit us */
+enum rnp_lane_speed {
+ RNP_LANE_SPEED_10M = 0,
+ RNP_LANE_SPEED_100M,
+ RNP_LANE_SPEED_1G,
+ RNP_LANE_SPEED_10G,
+ RNP_LANE_SPEED_25G,
+ RNP_LANE_SPEED_40G,
+};
+
+void rnp_link_event(struct rnp_eth_adapter *adapter,
+ struct rnp_mbx_fw_cmd_req *req);
+int rnp_dev_link_update(struct rte_eth_dev *eth_dev,
+ int wait_to_complete);
+void rnp_run_link_poll_task(struct rnp_eth_port *port);
+void rnp_cancel_link_poll_task(struct rnp_eth_port *port);
+
+#endif /* _RNP_LINK_H_ */
diff --git a/drivers/net/rnp/rnp_rxtx.c b/drivers/net/rnp/rnp_rxtx.c
index 3c34f23..2b172c8 100644
--- a/drivers/net/rnp/rnp_rxtx.c
+++ b/drivers/net/rnp/rnp_rxtx.c
@@ -88,6 +88,7 @@ static void rnp_rx_queue_release(void *_rxq)
struct rte_eth_txconf def_conf;
struct rnp_hw *hw = port->hw;
struct rte_mbuf *m_mbuf[2];
+ bool tx_origin_e = false;
bool tx_new = false;
uint16_t index;
int err = 0;
@@ -123,6 +124,9 @@ static void rnp_rx_queue_release(void *_rxq)
return -ENOMEM;
}
rnp_rxq_flow_disable(hw, index);
+ tx_origin_e = txq->txq_started;
+ rte_io_wmb();
+ txq->txq_started = false;
rte_mbuf_refcnt_set(m_mbuf[0], 1);
rte_mbuf_refcnt_set(m_mbuf[1], 1);
m_mbuf[0]->data_off = RTE_PKTMBUF_HEADROOM;
@@ -141,6 +145,7 @@ static void rnp_rx_queue_release(void *_rxq)
rnp_tx_queue_reset(port, txq);
rnp_tx_queue_sw_reset(txq);
}
+ txq->txq_started = tx_origin_e;
}
rte_mempool_put_bulk(adapter->reset_pool, (void **)m_mbuf, 2);
rnp_rxq_flow_enable(hw, index);
@@ -372,6 +377,7 @@ static int rnp_alloc_txbdr(struct rte_eth_dev *dev,
txq->nb_tx_free = txq->attr.nb_desc - 1;
txq->tx_next_dd = txq->tx_rs_thresh - 1;
txq->tx_next_rs = txq->tx_rs_thresh - 1;
+ txq->tx_tail = 0;
size = (txq->attr.nb_desc + RNP_TX_MAX_BURST_SIZE);
for (idx = 0; idx < size * sizeof(struct rnp_tx_desc); idx++)
@@ -474,3 +480,164 @@ static int rnp_alloc_txbdr(struct rte_eth_dev *dev,
return err;
}
+
+int rnp_tx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx)
+{
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(eth_dev);
+ struct rte_eth_dev_data *data = eth_dev->data;
+ struct rnp_tx_queue *txq;
+
+ PMD_INIT_FUNC_TRACE();
+ txq = eth_dev->data->tx_queues[qidx];
+ if (!txq) {
+ RNP_PMD_ERR("TX queue %u is null or not setup", qidx);
+ return -EINVAL;
+ }
+ if (data->tx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STARTED) {
+ txq->txq_started = 0;
+		/* wait for the tx burst routine to stop sending traffic */
+ rte_delay_us(10);
+ rnp_tx_queue_release_mbuf(txq);
+ rnp_tx_queue_reset(port, txq);
+ rnp_tx_queue_sw_reset(txq);
+ data->tx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STOPPED;
+ }
+
+ return 0;
+}
+
+int rnp_tx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx)
+{
+ struct rte_eth_dev_data *data = eth_dev->data;
+ struct rnp_tx_queue *txq;
+
+ PMD_INIT_FUNC_TRACE();
+
+ txq = data->tx_queues[qidx];
+ if (!txq) {
+		RNP_PMD_ERR("Can't start tx queue %d, it's not set up "
+				"by the tx_queue_setup API", qidx);
+ return -EINVAL;
+ }
+ if (data->tx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STOPPED) {
+ data->tx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STARTED;
+ txq->txq_started = 1;
+ }
+
+ return 0;
+}
+
+int rnp_rx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx)
+{
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(eth_dev);
+ struct rte_eth_dev_data *data = eth_dev->data;
+ bool ori_q_state[RNP_MAX_RX_QUEUE_NUM];
+ struct rnp_hw *hw = port->hw;
+ struct rnp_rx_queue *rxq;
+ uint16_t hwrid;
+ uint16_t i = 0;
+
+ PMD_INIT_FUNC_TRACE();
+ memset(ori_q_state, 0, sizeof(ori_q_state));
+ if (qidx >= data->nb_rx_queues)
+ return -EINVAL;
+ rxq = data->rx_queues[qidx];
+ if (!rxq) {
+ RNP_PMD_ERR("rx queue %u is null or not setup\n", qidx);
+ return -EINVAL;
+ }
+ if (data->rx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STARTED) {
+ hwrid = rxq->attr.index;
+ for (i = 0; i < RNP_MAX_RX_QUEUE_NUM; i++) {
+ RNP_E_REG_WR(hw, RNP_RXQ_DROP_TIMEOUT_TH(i), 16);
+ ori_q_state[i] = RNP_E_REG_RD(hw, RNP_RXQ_START(i));
+ RNP_E_REG_WR(hw, RNP_RXQ_START(i), 0);
+ }
+ rxq->rxq_started = false;
+ rnp_rx_queue_release_mbuf(rxq);
+ RNP_E_REG_WR(hw, RNP_RXQ_START(hwrid), 0);
+ rnp_rx_queue_reset(port, rxq);
+ rnp_rx_queue_sw_reset(rxq);
+ for (i = 0; i < RNP_MAX_RX_QUEUE_NUM; i++) {
+ RNP_E_REG_WR(hw, RNP_RXQ_DROP_TIMEOUT_TH(i),
+ rxq->nodesc_tm_thresh);
+ RNP_E_REG_WR(hw, RNP_RXQ_START(i), ori_q_state[i]);
+ }
+ RNP_E_REG_WR(hw, RNP_RXQ_START(hwrid), 0);
+ data->rx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STOPPED;
+ }
+
+ return 0;
+}
+
+static int rnp_alloc_rxq_mbuf(struct rnp_rx_queue *rxq)
+{
+ struct rnp_rxsw_entry *rx_swbd = rxq->sw_ring;
+ volatile struct rnp_rx_desc *rxd;
+ struct rte_mbuf *mbuf = NULL;
+ uint64_t dma_addr;
+ uint16_t i;
+
+ for (i = 0; i < rxq->attr.nb_desc; i++) {
+ mbuf = rte_mbuf_raw_alloc(rxq->mb_pool);
+ if (!mbuf)
+ goto rx_mb_alloc_failed;
+ rx_swbd[i].mbuf = mbuf;
+
+ rte_mbuf_refcnt_set(mbuf, 1);
+ mbuf->next = NULL;
+ mbuf->data_off = RTE_PKTMBUF_HEADROOM;
+ mbuf->port = rxq->attr.port_id;
+ dma_addr = rnp_get_dma_addr(&rxq->attr, mbuf);
+
+ rxd = &rxq->rx_bdr[i];
+ *rxd = rxq->zero_desc;
+ rxd->d.pkt_addr = dma_addr;
+ rxd->d.cmd = 0;
+ }
+ memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf));
+ for (i = 0; i < RNP_RX_MAX_BURST_SIZE; ++i)
+ rxq->sw_ring[rxq->attr.nb_desc + i].mbuf = &rxq->fake_mbuf;
+
+ return 0;
+rx_mb_alloc_failed:
+ RNP_PMD_ERR("rx queue %u alloc mbuf failed", rxq->attr.queue_id);
+ rnp_rx_queue_release_mbuf(rxq);
+
+ return -ENOMEM;
+}
+
+int rnp_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx)
+{
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(eth_dev);
+ struct rte_eth_dev_data *data = eth_dev->data;
+ struct rnp_hw *hw = port->hw;
+ struct rnp_rx_queue *rxq;
+ uint16_t hwrid;
+
+ PMD_INIT_FUNC_TRACE();
+ rxq = data->rx_queues[qidx];
+ if (!rxq) {
+		RNP_PMD_ERR("RX queue %u is null or not setup\n", qidx);
+ return -EINVAL;
+ }
+ if (data->rx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STOPPED) {
+ hwrid = rxq->attr.index;
+ /* disable ring */
+ rte_io_wmb();
+ RNP_E_REG_WR(hw, RNP_RXQ_START(hwrid), 0);
+ if (rnp_alloc_rxq_mbuf(rxq) != 0) {
+ RNP_PMD_ERR("Could not alloc mbuf for queue:%d", qidx);
+ return -ENOMEM;
+ }
+ rte_io_wmb();
+ RNP_REG_WR(rxq->rx_tailreg, 0, rxq->attr.nb_desc - 1);
+ RNP_E_REG_WR(hw, RNP_RXQ_START(hwrid), 1);
+ rxq->nb_rx_free = rxq->attr.nb_desc - 1;
+ rxq->rxq_started = true;
+
+ data->rx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STARTED;
+ }
+
+ return 0;
+}
diff --git a/drivers/net/rnp/rnp_rxtx.h b/drivers/net/rnp/rnp_rxtx.h
index 3ea977c..94e1f06 100644
--- a/drivers/net/rnp/rnp_rxtx.h
+++ b/drivers/net/rnp/rnp_rxtx.h
@@ -65,11 +65,14 @@ struct rnp_rx_queue {
uint32_t nodesc_tm_thresh; /* rx queue no desc timeout thresh */
uint8_t rx_deferred_start; /* do not start queue with dev_start(). */
+ uint8_t rxq_started; /* rx queue is started */
+ uint8_t rx_link; /* device link state */
uint8_t pthresh; /* rx desc prefetch threshold */
uint8_t pburst; /* rx desc prefetch burst */
uint64_t rx_offloads; /* user set hw offload features */
struct rte_mbuf **free_mbufs; /* rx bulk alloc reserve of free mbufs */
+ struct rte_mbuf fake_mbuf; /* dummy mbuf */
};
struct rnp_txsw_entry {
@@ -98,6 +101,8 @@ struct rnp_tx_queue {
uint16_t tx_free_thresh; /* thresh to free tx desc resource */
uint8_t tx_deferred_start; /* do not start queue with dev_start(). */
+ uint8_t txq_started; /* tx queue is started */
+ uint8_t tx_link; /* device link state */
uint8_t pthresh; /* tx desc prefetch threshold */
uint8_t pburst; /* tx desc prefetch burst */
@@ -115,9 +120,13 @@ int rnp_rx_queue_setup(struct rte_eth_dev *eth_dev,
unsigned int socket_id,
const struct rte_eth_rxconf *rx_conf,
struct rte_mempool *mb_pool);
+int rnp_tx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx);
+int rnp_tx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx);
int rnp_tx_queue_setup(struct rte_eth_dev *dev,
uint16_t qidx, uint16_t nb_desc,
unsigned int socket_id,
const struct rte_eth_txconf *tx_conf);
+int rnp_rx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx);
+int rnp_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx);
#endif /* _RNP_RXTX_H_ */
--
1.8.3.1
^ permalink raw reply [flat|nested] 29+ messages in thread
* [PATCH v7 10/28] net/rnp: add support device start stop operations
2025-02-08 2:43 [PATCH v7 00/28] [v6]drivers/net Add Support mucse N10 Pmd Driver Wenbo Cao
` (8 preceding siblings ...)
2025-02-08 2:43 ` [PATCH v7 09/28] net/rnp: add queue stop and start operations Wenbo Cao
@ 2025-02-08 2:43 ` Wenbo Cao
2025-02-08 2:43 ` [PATCH v7 11/28] net/rnp: add RSS support operations Wenbo Cao
` (17 subsequent siblings)
27 siblings, 0 replies; 29+ messages in thread
From: Wenbo Cao @ 2025-02-08 2:43 UTC (permalink / raw)
To: thomas, Wenbo Cao; +Cc: stephen, dev, ferruh.yigit, andrew.rybchenko, yaojun
Add basic support for device start and stop operations.
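A minimal lifecycle sketch (assuming port_id refers to a probed rnp
port and queue setup is done elsewhere; illustrative, not part of
this patch):

	struct rte_eth_conf conf = {0};

	rte_eth_dev_configure(port_id, 1, 1, &conf);
	/* rx/tx queue setup omitted */
	rte_eth_dev_start(port_id);	/* enters the PMD dev_start op */
	/* ... traffic ... */
	rte_eth_dev_stop(port_id);	/* enters the PMD dev_stop op */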
Signed-off-by: Wenbo Cao <caowenbo@mucse.com>
---
drivers/net/rnp/base/rnp_common.c | 22 +++
drivers/net/rnp/base/rnp_common.h | 1 +
drivers/net/rnp/base/rnp_dma_regs.h | 10 ++
drivers/net/rnp/base/rnp_eth_regs.h | 5 +
drivers/net/rnp/base/rnp_hw.h | 1 +
drivers/net/rnp/base/rnp_mac.h | 14 ++
drivers/net/rnp/base/rnp_mac_regs.h | 42 ++++++
drivers/net/rnp/rnp.h | 3 +
drivers/net/rnp/rnp_ethdev.c | 274 +++++++++++++++++++++++++++++++++++-
9 files changed, 371 insertions(+), 1 deletion(-)
diff --git a/drivers/net/rnp/base/rnp_common.c b/drivers/net/rnp/base/rnp_common.c
index 7d1f96c..38a3f55 100644
--- a/drivers/net/rnp/base/rnp_common.c
+++ b/drivers/net/rnp/base/rnp_common.c
@@ -79,3 +79,25 @@ int rnp_init_hw(struct rnp_hw *hw)
return 0;
}
+
+int rnp_clock_valid_check(struct rnp_hw *hw, u16 nr_lane)
+{
+ uint16_t timeout = 0;
+
+ do {
+ RNP_E_REG_WR(hw, RNP_RSS_REDIR_TB(nr_lane, 0), 0x7f);
+ udelay(10);
+ timeout++;
+ if (timeout >= 1000)
+ break;
+ } while (RNP_E_REG_RD(hw, RNP_RSS_REDIR_TB(nr_lane, 0)) != 0x7f);
+
+ if (timeout >= 1000) {
+ RNP_PMD_ERR("ethernet[%d] eth reg can't be write", nr_lane);
+ return -EPERM;
+ }
+ /* clear the dirty value */
+ RNP_E_REG_WR(hw, RNP_RSS_REDIR_TB(nr_lane, 0), 0);
+
+ return 0;
+}
diff --git a/drivers/net/rnp/base/rnp_common.h b/drivers/net/rnp/base/rnp_common.h
index bd00708..958fcb6 100644
--- a/drivers/net/rnp/base/rnp_common.h
+++ b/drivers/net/rnp/base/rnp_common.h
@@ -12,5 +12,6 @@
((macaddr[4] << 8)) | (macaddr[5]))
int rnp_init_hw(struct rnp_hw *hw);
int rnp_setup_common_ops(struct rnp_hw *hw);
+int rnp_clock_valid_check(struct rnp_hw *hw, u16 nr_lane);
#endif /* _RNP_COMMON_H_ */
diff --git a/drivers/net/rnp/base/rnp_dma_regs.h b/drivers/net/rnp/base/rnp_dma_regs.h
index 3664c0a..32e738a 100644
--- a/drivers/net/rnp/base/rnp_dma_regs.h
+++ b/drivers/net/rnp/base/rnp_dma_regs.h
@@ -6,9 +6,19 @@
#define _RNP_DMA_REGS_H_
#define RNP_DMA_VERSION (0)
+#define RNP_DMA_CTRL (0x4)
+/* 1bit <-> 16 bytes dma addr size */
+#define RNP_DMA_SCATTER_MEM_MASK RTE_GENMASK32(31, 16)
+#define RNP_DMA_SCATTER_MEN_S (16)
+#define RNP_DMA_RX_MEM_PAD_EN RTE_BIT32(8)
+#define RTE_DMA_VEB_BYPASS RTE_BIT32(4)
+#define RNP_DMA_TXRX_LOOP RTE_BIT32(1)
+#define RNP_DMA_TXMRX_LOOP RTE_BIT32(0)
+
#define RNP_DMA_HW_EN (0x10)
#define RNP_DMA_EN_ALL (0b1111)
#define RNP_DMA_HW_STATE (0x14)
+
/* --- queue register --- */
/* queue enable */
#define RNP_RXQ_START(qid) _RING_(0x0010 + 0x100 * (qid))
diff --git a/drivers/net/rnp/base/rnp_eth_regs.h b/drivers/net/rnp/base/rnp_eth_regs.h
index 10e3d95..60766d2 100644
--- a/drivers/net/rnp/base/rnp_eth_regs.h
+++ b/drivers/net/rnp/base/rnp_eth_regs.h
@@ -10,6 +10,9 @@
#define RNP_E_FILTER_EN _ETH_(0x801c)
#define RNP_E_REDIR_EN _ETH_(0x8030)
+#define RNP_RX_ETH_F_CTRL(n) _ETH_(0x8070 + ((n) * 0x8))
+#define RNP_RX_ETH_F_OFF (0x7ff)
+#define RNP_RX_ETH_F_ON (0x270)
/* rx queue flow ctrl */
#define RNP_RX_FC_ENABLE _ETH_(0x8520)
#define RNP_RING_FC_EN(n) _ETH_(0x8524 + ((0x4) * ((n) / 32)))
@@ -28,6 +31,8 @@
#define RNP_MAC_HASH_MASK RTE_GENMASK32(11, 0)
#define RNP_MAC_MULTICASE_TBL_EN RTE_BIT32(2)
#define RNP_MAC_UNICASE_TBL_EN RTE_BIT32(3)
+/* rss function ctrl */
+#define RNP_RSS_REDIR_TB(n, id) _ETH_(0xe000 + ((n) * 0x200) + ((id) * 0x4))
#define RNP_TC_PORT_OFFSET(lane) _ETH_(0xe840 + 0x04 * (lane))
diff --git a/drivers/net/rnp/base/rnp_hw.h b/drivers/net/rnp/base/rnp_hw.h
index 4f5a73e..ed1e7eb 100644
--- a/drivers/net/rnp/base/rnp_hw.h
+++ b/drivers/net/rnp/base/rnp_hw.h
@@ -120,6 +120,7 @@ struct rnp_hw {
bool lane_is_sgmii[RNP_MAX_PORT_OF_PF];
struct rnp_mbx_info mbx;
struct rnp_fw_info fw_info;
+ u16 min_dma_size;
spinlock_t rxq_reset_lock;
spinlock_t txq_reset_lock;
diff --git a/drivers/net/rnp/base/rnp_mac.h b/drivers/net/rnp/base/rnp_mac.h
index 57cbd9e..1dac903 100644
--- a/drivers/net/rnp/base/rnp_mac.h
+++ b/drivers/net/rnp/base/rnp_mac.h
@@ -7,6 +7,20 @@
#include "rnp_osdep.h"
#include "rnp_hw.h"
+#include "rnp_eth_regs.h"
+
+#define RNP_RX_ETH_DISABLE(hw, nr_lane) do { \
+ wmb(); \
+ RNP_E_REG_WR(hw, RNP_RX_ETH_F_CTRL(nr_lane), \
+ RNP_RX_ETH_F_OFF); \
+} while (0)
+
+#define RNP_RX_ETH_ENABLE(hw, nr_lane) do { \
+ wmb(); \
+ RNP_E_REG_WR(hw, RNP_RX_ETH_F_CTRL(nr_lane), \
+ RNP_RX_ETH_F_ON); \
+} while (0)
+
void rnp_mac_ops_init(struct rnp_hw *hw);
int rnp_get_mac_addr(struct rnp_eth_port *port, u8 *mac);
diff --git a/drivers/net/rnp/base/rnp_mac_regs.h b/drivers/net/rnp/base/rnp_mac_regs.h
index 1dc0668..1ae8801 100644
--- a/drivers/net/rnp/base/rnp_mac_regs.h
+++ b/drivers/net/rnp/base/rnp_mac_regs.h
@@ -7,6 +7,41 @@
#define RNP_MAC_BASE_OFFSET(n) (_MAC_(0) + ((0x10000) * (n)))
+#define RNP_MAC_TX_CFG (0x0)
+/* Transmitter Enable */
+#define RNP_MAC_TE RTE_BIT32(0)
+/* Jabber Disable */
+#define RNP_MAC_JD RTE_BIT32(16)
+#define RNP_SPEED_SEL_MASK RTE_GENMASK32(30, 28)
+#define RNP_SPEED_SEL_S (28)
+#define RNP_SPEED_SEL_1G	(0b111 << RNP_SPEED_SEL_S)
+#define RNP_SPEED_SEL_10G	(0b010 << RNP_SPEED_SEL_S)
+#define RNP_SPEED_SEL_40G	(0b000 << RNP_SPEED_SEL_S)
+
+#define RNP_MAC_RX_CFG (0x4)
+/* Receiver Enable */
+#define RNP_MAC_RE RTE_BIT32(0)
+/* Automatic Pad or CRC Stripping */
+#define RNP_MAC_ACS RTE_BIT32(1)
+/* CRC stripping for Type packets */
+#define RNP_MAC_CST RTE_BIT32(2)
+/* Disable CRC Check */
+#define RNP_MAC_DCRCC RTE_BIT32(3)
+/* Enable Max Frame Size Limit */
+#define RNP_MAC_GPSLCE RTE_BIT32(6)
+/* Watchdog Disable */
+#define RNP_MAC_WD RTE_BIT32(7)
+/* Jumbo Packet Support En */
+#define RNP_MAC_JE RTE_BIT32(8)
+/* Enable IPC */
+#define RNP_MAC_IPC RTE_BIT32(9)
+/* Loopback Mode */
+#define RNP_MAC_LM RTE_BIT32(10)
+/* Giant Packet Size Limit */
+#define RNP_MAC_GPSL_MASK RTE_GENMASK32(29, 16)
+#define RNP_MAC_MAX_GPSL (1518)
+#define RNP_MAC_CPSL_SHIFT (16)
+
#define RNP_MAC_PKT_FLT_CTRL (0x8)
/* Receive All */
#define RNP_MAC_RA RTE_BIT32(31)
@@ -35,5 +70,12 @@
#define RNP_MAC_HPF RTE_BIT32(10)
#define RNP_MAC_VTFE RTE_BIT32(16)
+#define RNP_MAC_VFE RTE_BIT32(16)
+/* mac link ctrl */
+#define RNP_MAC_LPI_CTRL (0xd0)
+/* PHY Link Status Disable */
+#define RNP_MAC_PLSDIS RTE_BIT32(18)
+/* PHY Link Status */
+#define RNP_MAC_PLS RTE_BIT32(17)
#endif /* _RNP_MAC_REGS_H_ */
diff --git a/drivers/net/rnp/rnp.h b/drivers/net/rnp/rnp.h
index ab7bd60..086135a 100644
--- a/drivers/net/rnp/rnp.h
+++ b/drivers/net/rnp/rnp.h
@@ -107,6 +107,9 @@ struct rnp_eth_port {
struct rnp_port_attr attr;
struct rnp_tx_queue *tx_queues[RNP_MAX_RX_QUEUE_NUM];
struct rnp_hw *hw;
+
+ rte_spinlock_t rx_mac_lock;
+ bool port_stopped;
};
struct rnp_eth_adapter {
diff --git a/drivers/net/rnp/rnp_ethdev.c b/drivers/net/rnp/rnp_ethdev.c
index d5e5ef7..7b7ed8c 100644
--- a/drivers/net/rnp/rnp_ethdev.c
+++ b/drivers/net/rnp/rnp_ethdev.c
@@ -14,6 +14,8 @@
#include "base/rnp_mbx_fw.h"
#include "base/rnp_mac.h"
#include "base/rnp_common.h"
+#include "base/rnp_dma_regs.h"
+#include "base/rnp_mac_regs.h"
#include "rnp_rxtx.h"
static struct rte_eth_dev *
@@ -52,9 +54,275 @@ static void rnp_dev_interrupt_handler(void *param)
RTE_SET_USED(param);
}
+static void rnp_mac_rx_enable(struct rte_eth_dev *dev)
+{
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(dev);
+ uint16_t lane = port->attr.nr_lane;
+ struct rnp_hw *hw = port->hw;
+ uint32_t mac_cfg;
+
+ rte_spinlock_lock(&port->rx_mac_lock);
+ mac_cfg = RNP_MAC_REG_RD(hw, lane, RNP_MAC_RX_CFG);
+ mac_cfg |= RNP_MAC_RE;
+
+ mac_cfg &= ~RNP_MAC_GPSL_MASK;
+ mac_cfg |= (RNP_MAC_MAX_GPSL << RNP_MAC_CPSL_SHIFT);
+ RNP_MAC_REG_WR(hw, lane, RNP_MAC_RX_CFG, mac_cfg);
+ rte_spinlock_unlock(&port->rx_mac_lock);
+}
+
+static void rnp_mac_rx_disable(struct rte_eth_dev *dev)
+{
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(dev);
+ uint16_t lane = port->attr.nr_lane;
+ struct rnp_hw *hw = port->hw;
+ uint32_t mac_cfg;
+
+ /* to protect conflict hw resource */
+ rte_spinlock_lock(&port->rx_mac_lock);
+ mac_cfg = RNP_MAC_REG_RD(hw, lane, RNP_MAC_RX_CFG);
+ mac_cfg &= ~RNP_MAC_RE;
+
+ RNP_MAC_REG_WR(hw, lane, RNP_MAC_RX_CFG, mac_cfg);
+ rte_spinlock_unlock(&port->rx_mac_lock);
+}
+
+static void rnp_mac_tx_enable(struct rte_eth_dev *dev)
+{
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(dev);
+ uint16_t lane = port->attr.nr_lane;
+ struct rnp_hw *hw = port->hw;
+ uint32_t mac_cfg;
+
+ mac_cfg = RNP_MAC_REG_RD(hw, lane, RNP_MAC_TX_CFG);
+ mac_cfg |= RNP_MAC_TE;
+ RNP_MAC_REG_WR(hw, lane, RNP_MAC_TX_CFG, mac_cfg);
+}
+
+static void rnp_mac_tx_disable(struct rte_eth_dev *dev)
+{
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(dev);
+ uint16_t lane = port->attr.nr_lane;
+ struct rnp_hw *hw = port->hw;
+ uint32_t ctrl;
+
+ /* must wait until the tx side has finished sending
+ * before disabling the tx side
+ */
+ ctrl = RNP_MAC_REG_RD(hw, lane, RNP_MAC_TX_CFG);
+ ctrl &= ~RNP_MAC_TE;
+ RNP_MAC_REG_WR(hw, lane, RNP_MAC_TX_CFG, ctrl);
+}
+
+static void rnp_mac_init(struct rte_eth_dev *dev)
+{
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(dev);
+ uint16_t lane = port->attr.nr_lane;
+ struct rnp_hw *hw = port->hw;
+ uint32_t mac_cfg;
+
+ rnp_mac_tx_enable(dev);
+ rnp_mac_rx_enable(dev);
+
+ mac_cfg = RNP_MAC_REG_RD(hw, lane, RNP_MAC_LPI_CTRL);
+ mac_cfg |= RNP_MAC_PLSDIS | RNP_MAC_PLS;
+ RNP_MAC_REG_WR(hw, lane, RNP_MAC_LPI_CTRL, mac_cfg);
+}
+
+static int
+rnp_rx_scattered_setup(struct rte_eth_dev *dev)
+{
+ uint16_t max_pkt_size =
+ dev->data->dev_conf.rxmode.mtu + RNP_ETH_OVERHEAD;
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(dev);
+ struct rnp_hw *hw = port->hw;
+ struct rnp_rx_queue *rxq;
+ uint16_t dma_buf_size;
+ uint16_t queue_id;
+ uint32_t dma_ctrl;
+
+ if (dev->data->rx_queues == NULL)
+ return -ENOMEM;
+ for (queue_id = 0; queue_id < dev->data->nb_rx_queues; queue_id++) {
+ rxq = dev->data->rx_queues[queue_id];
+ if (!rxq)
+ continue;
+ if (hw->min_dma_size == 0)
+ hw->min_dma_size = rxq->rx_buf_len;
+ else
+ hw->min_dma_size = RTE_MIN(hw->min_dma_size,
+ rxq->rx_buf_len);
+ }
+ if (hw->min_dma_size < RNP_MIN_DMA_BUF_SIZE) {
+ RNP_PMD_ERR("port[%d] scatter dma len is not support %d",
+ dev->data->port_id, hw->min_dma_size);
+ return -ENOTSUP;
+ }
+ dma_buf_size = hw->min_dma_size;
+ /* Setup max dma scatter engine split size */
+ dma_ctrl = RNP_E_REG_RD(hw, RNP_DMA_CTRL);
+ if (max_pkt_size == dma_buf_size)
+ dma_buf_size += (dma_buf_size % 16);
+ RNP_PMD_INFO("PF[%d] MaxPktLen %d MbSize %d MbHeadRoom %d\n",
+ hw->mbx.pf_num, max_pkt_size,
+ dma_buf_size, RTE_PKTMBUF_HEADROOM);
+ dma_ctrl &= ~RNP_DMA_SCATTER_MEM_MASK;
+ dma_ctrl |= ((dma_buf_size / 16) << RNP_DMA_SCATTER_MEM_S);
+ RNP_E_REG_WR(hw, RNP_DMA_CTRL, dma_ctrl);
+
+ return 0;
+}
+
+static int rnp_enable_all_rx_queue(struct rte_eth_dev *dev)
+{
+ struct rnp_rx_queue *rxq;
+ uint16_t idx;
+ int ret = 0;
+
+ for (idx = 0; idx < dev->data->nb_rx_queues; idx++) {
+ rxq = dev->data->rx_queues[idx];
+ if (!rxq || rxq->rx_deferred_start)
+ continue;
+ if (dev->data->rx_queue_state[idx] ==
+ RTE_ETH_QUEUE_STATE_STOPPED) {
+ ret = rnp_rx_queue_start(dev, idx);
+ if (ret < 0)
+ return ret;
+ }
+ }
+
+ return ret;
+}
+
+static int rnp_enable_all_tx_queue(struct rte_eth_dev *dev)
+{
+ struct rnp_tx_queue *txq;
+ uint16_t idx;
+ int ret = 0;
+
+ for (idx = 0; idx < dev->data->nb_tx_queues; idx++) {
+ txq = dev->data->tx_queues[idx];
+ if (!txq || txq->tx_deferred_start)
+ continue;
+ if (dev->data->tx_queue_state[idx] ==
+ RTE_ETH_QUEUE_STATE_STOPPED) {
+ ret = rnp_tx_queue_start(dev, idx);
+ if (ret < 0)
+ return ret;
+ }
+ }
+
+ return ret;
+}
+
+static int rnp_dev_start(struct rte_eth_dev *eth_dev)
+{
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(eth_dev);
+ struct rte_eth_dev_data *data = eth_dev->data;
+ struct rnp_hw *hw = port->hw;
+ uint16_t lane = 0;
+ uint16_t idx = 0;
+ int ret = 0;
+
+ PMD_INIT_FUNC_TRACE();
+ lane = port->attr.nr_lane;
+ ret = rnp_clock_valid_check(hw, lane);
+ if (ret) {
+ RNP_PMD_ERR("port[%d] function[%d] lane[%d] hw clock error",
+ data->port_id, hw->mbx.pf_num, lane);
+ return ret;
+ }
+ /* disable eth rx flow */
+ RNP_RX_ETH_DISABLE(hw, lane);
+ ret = rnp_rx_scattered_setup(eth_dev);
+ if (ret)
+ return ret;
+ ret = rnp_enable_all_tx_queue(eth_dev);
+ if (ret)
+ goto txq_start_failed;
+ ret = rnp_enable_all_rx_queue(eth_dev);
+ if (ret)
+ goto rxq_start_failed;
+ rnp_mac_init(eth_dev);
+ /* enable eth rx flow */
+ RNP_RX_ETH_ENABLE(hw, lane);
+ port->port_stopped = 0;
+
+ return 0;
+rxq_start_failed:
+ for (idx = 0; idx < data->nb_rx_queues; idx++)
+ rnp_rx_queue_stop(eth_dev, idx);
+txq_start_failed:
+ for (idx = 0; idx < data->nb_tx_queues; idx++)
+ rnp_tx_queue_stop(eth_dev, idx);
+
+ return ret;
+}
+
+static int rnp_disable_all_rx_queue(struct rte_eth_dev *dev)
+{
+ struct rnp_rx_queue *rxq;
+ uint16_t idx;
+ int ret = 0;
+
+ for (idx = 0; idx < dev->data->nb_rx_queues; idx++) {
+ rxq = dev->data->rx_queues[idx];
+ if (!rxq || rxq->rx_deferred_start)
+ continue;
+ if (dev->data->rx_queue_state[idx] ==
+ RTE_ETH_QUEUE_STATE_STARTED) {
+ ret = rnp_rx_queue_stop(dev, idx);
+ if (ret < 0)
+ return ret;
+ }
+ }
+
+ return ret;
+}
+
+static int rnp_disable_all_tx_queue(struct rte_eth_dev *dev)
+{
+ struct rnp_tx_queue *txq;
+ uint16_t idx;
+ int ret = 0;
+
+ for (idx = 0; idx < dev->data->nb_tx_queues; idx++) {
+ txq = dev->data->tx_queues[idx];
+ if (!txq || txq->tx_deferred_start)
+ continue;
+ if (dev->data->tx_queue_state[idx] ==
+ RTE_ETH_QUEUE_STATE_STARTED) {
+ ret = rnp_tx_queue_stop(dev, idx);
+ if (ret < 0)
+ return ret;
+ }
+ }
+
+ return ret;
+}
+
static int rnp_dev_stop(struct rte_eth_dev *eth_dev)
{
- RTE_SET_USED(eth_dev);
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(eth_dev);
+ struct rte_eth_link link;
+
+ if (port->port_stopped)
+ return 0;
+ eth_dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ eth_dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
+ eth_dev->tx_pkt_prepare = rte_eth_pkt_burst_dummy;
+
+ /* clear the recorded link status */
+ memset(&link, 0, sizeof(link));
+ rte_eth_linkstatus_set(eth_dev, &link);
+
+ rnp_disable_all_tx_queue(eth_dev);
+ rnp_disable_all_rx_queue(eth_dev);
+ rnp_mac_tx_disable(eth_dev);
+ rnp_mac_rx_disable(eth_dev);
+
+ eth_dev->data->dev_started = 0;
+ port->port_stopped = 1;
return 0;
}
@@ -230,6 +498,7 @@ static int rnp_allmulticast_disable(struct rte_eth_dev *eth_dev)
/* Features supported by this driver */
static const struct eth_dev_ops rnp_eth_dev_ops = {
.dev_close = rnp_dev_close,
+ .dev_start = rnp_dev_start,
.dev_stop = rnp_dev_stop,
.dev_infos_get = rnp_dev_infos_get,
@@ -313,6 +582,7 @@ static int rnp_allmulticast_disable(struct rte_eth_dev *eth_dev)
}
rte_ether_addr_copy(&port->mac_addr, ð_dev->data->mac_addrs[0]);
+ rte_spinlock_init(&port->rx_mac_lock);
adapter->ports[p_id] = port;
adapter->inited_ports++;
@@ -445,6 +715,8 @@ static int rnp_allmulticast_disable(struct rte_eth_dev *eth_dev)
ret = rnp_init_port_resource(adapter, sub_eth_dev, name, p_id);
if (ret)
goto eth_alloc_error;
+ rnp_mac_rx_disable(sub_eth_dev);
+ rnp_mac_tx_disable(sub_eth_dev);
if (p_id) {
/* port 0 will be probed by the platform */
rte_eth_dev_probing_finish(sub_eth_dev);
--
1.8.3.1
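
Side note on the scatter setup in this patch: RNP_DMA_CTRL[31:16] takes the
per-descriptor buffer size in 16-byte units (1 bit <-> 16 bytes), which is
what rnp_rx_scattered_setup() programs. A minimal standalone sketch of that
encoding, assuming only the register layout above; the helper name and the
worked numbers are illustrative, not driver code:

#include <stdint.h>
#include <assert.h>

/* Illustrative helper: encode a DMA buffer size into RNP_DMA_CTRL[31:16].
 * The hardware counts in 16-byte units, so e.g. a 2048-byte mbuf data room
 * minus 128 bytes of headroom gives 1920 bytes -> field value 120. */
static uint32_t
encode_scatter_split(uint32_t dma_ctrl, uint16_t dma_buf_size)
{
	const uint32_t mask = 0xffffu << 16;	/* bits 31:16 */

	assert(dma_buf_size % 16 == 0);
	dma_ctrl &= ~mask;
	dma_ctrl |= (uint32_t)(dma_buf_size / 16) << 16;
	return dma_ctrl;
}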
* [PATCH v7 11/28] net/rnp: add RSS support operations
2025-02-08 2:43 [PATCH v7 00/28] [v6]drivers/net Add Support mucse N10 Pmd Driver Wenbo Cao
` (9 preceding siblings ...)
2025-02-08 2:43 ` [PATCH v7 10/28] net/rnp: add support device start stop operations Wenbo Cao
@ 2025-02-08 2:43 ` Wenbo Cao
2025-02-08 2:43 ` [PATCH v7 12/28] net/rnp: add support link update operations Wenbo Cao
` (16 subsequent siblings)
27 siblings, 0 replies; 29+ messages in thread
From: Wenbo Cao @ 2025-02-08 2:43 UTC (permalink / raw)
To: thomas, Wenbo Cao; +Cc: stephen, dev, ferruh.yigit, andrew.rybchenko, yaojun
Add support for RSS reta update/query and RSS hash update/get.
dev_configure now checks the RSS configuration.
Signed-off-by: Wenbo Cao <caowenbo@mucse.com>
---
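For context, this is roughly how an application exercises the new reta ops
through the generic ethdev API. The 128-entry table size and the round-robin
spread are assumptions for illustration; a real application should take
reta_size from rte_eth_dev_info_get():

#include <string.h>
#include <rte_ethdev.h>

/* Hypothetical helper: spread an assumed 128-entry reta round-robin
 * over the configured rx queues. */
static int
spread_reta(uint16_t port_id, uint16_t nb_rx_queues)
{
	struct rte_eth_rss_reta_entry64 reta_conf[128 / RTE_ETH_RETA_GROUP_SIZE];
	uint16_t i;

	memset(reta_conf, 0, sizeof(reta_conf));
	for (i = 0; i < 128; i++) {
		reta_conf[i / RTE_ETH_RETA_GROUP_SIZE].mask |=
			1ULL << (i % RTE_ETH_RETA_GROUP_SIZE);
		reta_conf[i / RTE_ETH_RETA_GROUP_SIZE].reta[i % RTE_ETH_RETA_GROUP_SIZE] =
			i % nb_rx_queues;
	}
	return rte_eth_dev_rss_reta_update(port_id, reta_conf, 128);
}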
doc/guides/nics/features/rnp.ini | 4 +
doc/guides/nics/rnp.rst | 3 +
drivers/net/rnp/base/rnp_eth_regs.h | 16 ++
drivers/net/rnp/meson.build | 1 +
drivers/net/rnp/rnp.h | 7 +
drivers/net/rnp/rnp_ethdev.c | 23 +++
drivers/net/rnp/rnp_rss.c | 367 ++++++++++++++++++++++++++++++++++++
drivers/net/rnp/rnp_rss.h | 43 +++++
8 files changed, 464 insertions(+)
create mode 100644 drivers/net/rnp/rnp_rss.c
create mode 100644 drivers/net/rnp/rnp_rss.h
diff --git a/doc/guides/nics/features/rnp.ini b/doc/guides/nics/features/rnp.ini
index fd7d4b9..2fc94825f 100644
--- a/doc/guides/nics/features/rnp.ini
+++ b/doc/guides/nics/features/rnp.ini
@@ -8,5 +8,9 @@ Speed capabilities = Y
Queue start/stop = Y
Promiscuous mode = Y
Allmulticast mode = Y
+RSS hash = Y
+RSS key update = Y
+RSS reta update = Y
+Inner RSS = Y
Linux = Y
x86-64 = Y
diff --git a/doc/guides/nics/rnp.rst b/doc/guides/nics/rnp.rst
index 5417593..8f9d38d 100644
--- a/doc/guides/nics/rnp.rst
+++ b/doc/guides/nics/rnp.rst
@@ -11,6 +11,9 @@ Features
--------
- Multiple queues for TX and RX
+- Receive Side Scaling (RSS)
+ Receive Side Scaling (RSS) on IPv4, IPv6, IPv4-TCP/UDP/SCTP, IPv6-TCP/UDP/SCTP
+ Inner RSS is only supported for VXLAN/NVGRE
- Promiscuous mode
Prerequisites
diff --git a/drivers/net/rnp/base/rnp_eth_regs.h b/drivers/net/rnp/base/rnp_eth_regs.h
index 60766d2..be7ed5b 100644
--- a/drivers/net/rnp/base/rnp_eth_regs.h
+++ b/drivers/net/rnp/base/rnp_eth_regs.h
@@ -32,7 +32,23 @@
#define RNP_MAC_MULTICASE_TBL_EN RTE_BIT32(2)
#define RNP_MAC_UNICASE_TBL_EN RTE_BIT32(3)
/* rss function ctrl */
+#define RNP_RSS_INNER_CTRL _ETH_(0x805c)
+#define RNP_INNER_RSS_EN (1)
+#define RNP_INNER_RSS_DIS (0)
#define RNP_RSS_REDIR_TB(n, id) _ETH_(0xe000 + ((n) * 0x200) + ((id) * 0x4))
+#define RNP_RSS_MRQC_ADDR _ETH_(0x92a0)
+/* RSS policy */
+#define RNP_RSS_HASH_CFG_MASK (0x3F30000)
+#define RNP_RSS_HASH_IPV4_TCP RTE_BIT32(16)
+#define RNP_RSS_HASH_IPV4 RTE_BIT32(17)
+#define RNP_RSS_HASH_IPV6 RTE_BIT32(20)
+#define RNP_RSS_HASH_IPV6_TCP RTE_BIT32(21)
+#define RNP_RSS_HASH_IPV4_UDP RTE_BIT32(22)
+#define RNP_RSS_HASH_IPV6_UDP RTE_BIT32(23)
+#define RNP_RSS_HASH_IPV4_SCTP RTE_BIT32(24)
+#define RNP_RSS_HASH_IPV6_SCTP RTE_BIT32(25)
+/* rss hash key */
+#define RNP_RSS_KEY_TABLE(idx) _ETH_(0x92d0 + ((idx) * 0x4))
#define RNP_TC_PORT_OFFSET(lane) _ETH_(0xe840 + 0x04 * (lane))
diff --git a/drivers/net/rnp/meson.build b/drivers/net/rnp/meson.build
index ff3dc41..40b0139 100644
--- a/drivers/net/rnp/meson.build
+++ b/drivers/net/rnp/meson.build
@@ -15,4 +15,5 @@ includes += include_directories('base')
sources = files(
'rnp_ethdev.c',
'rnp_rxtx.c',
+ 'rnp_rss.c',
)
diff --git a/drivers/net/rnp/rnp.h b/drivers/net/rnp/rnp.h
index 086135a..e02de85 100644
--- a/drivers/net/rnp/rnp.h
+++ b/drivers/net/rnp/rnp.h
@@ -108,6 +108,13 @@ struct rnp_eth_port {
struct rnp_tx_queue *tx_queues[RNP_MAX_RX_QUEUE_NUM];
struct rnp_hw *hw;
+ struct rte_eth_rss_conf rss_conf;
+ uint16_t last_rx_num;
+ bool rxq_num_changed;
+ bool reta_has_cfg;
+ bool hw_rss_en;
+ uint32_t indirtbl[RNP_RSS_INDIR_SIZE];
+
rte_spinlock_t rx_mac_lock;
bool port_stopped;
};
diff --git a/drivers/net/rnp/rnp_ethdev.c b/drivers/net/rnp/rnp_ethdev.c
index 7b7ed8c..bd22034 100644
--- a/drivers/net/rnp/rnp_ethdev.c
+++ b/drivers/net/rnp/rnp_ethdev.c
@@ -17,6 +17,7 @@
#include "base/rnp_dma_regs.h"
#include "base/rnp_mac_regs.h"
#include "rnp_rxtx.h"
+#include "rnp_rss.h"
static struct rte_eth_dev *
rnp_alloc_eth_port(struct rte_pci_device *pci, char *name)
@@ -234,6 +235,9 @@ static int rnp_dev_start(struct rte_eth_dev *eth_dev)
}
/* disable eth rx flow */
RNP_RX_ETH_DISABLE(hw, lane);
+ ret = rnp_dev_rss_configure(eth_dev);
+ if (ret)
+ return ret;
ret = rnp_rx_scattered_setup(eth_dev);
if (ret)
return ret;
@@ -301,6 +305,19 @@ static int rnp_disable_all_tx_queue(struct rte_eth_dev *dev)
return ret;
}
+static int rnp_dev_configure(struct rte_eth_dev *eth_dev)
+{
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(eth_dev);
+
+ if (port->last_rx_num != eth_dev->data->nb_rx_queues)
+ port->rxq_num_changed = true;
+ else
+ port->rxq_num_changed = false;
+ port->last_rx_num = eth_dev->data->nb_rx_queues;
+
+ return 0;
+}
+
static int rnp_dev_stop(struct rte_eth_dev *eth_dev)
{
struct rnp_eth_port *port = RNP_DEV_TO_PORT(eth_dev);
@@ -497,6 +514,7 @@ static int rnp_allmulticast_disable(struct rte_eth_dev *eth_dev)
/* Features supported by this driver */
static const struct eth_dev_ops rnp_eth_dev_ops = {
+ .dev_configure = rnp_dev_configure,
.dev_close = rnp_dev_close,
.dev_start = rnp_dev_start,
.dev_stop = rnp_dev_stop,
@@ -512,6 +530,11 @@ static int rnp_allmulticast_disable(struct rte_eth_dev *eth_dev)
.rx_queue_release = rnp_dev_rx_queue_release,
.tx_queue_setup = rnp_tx_queue_setup,
.tx_queue_release = rnp_dev_tx_queue_release,
+ /* rss impl */
+ .reta_update = rnp_dev_rss_reta_update,
+ .reta_query = rnp_dev_rss_reta_query,
+ .rss_hash_update = rnp_dev_rss_hash_update,
+ .rss_hash_conf_get = rnp_dev_rss_hash_conf_get,
};
static void
diff --git a/drivers/net/rnp/rnp_rss.c b/drivers/net/rnp/rnp_rss.c
new file mode 100644
index 0000000..ebbc887
--- /dev/null
+++ b/drivers/net/rnp/rnp_rss.c
@@ -0,0 +1,367 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Mucse IC Design Ltd.
+ */
+
+#include <stdint.h>
+
+#include "base/rnp_bdq_if.h"
+#include "base/rnp_eth_regs.h"
+
+#include "rnp.h"
+#include "rnp_rxtx.h"
+#include "rnp_rss.h"
+
+static const struct rnp_rss_hash_cfg rnp_rss_cfg[] = {
+ {RNP_RSS_IPV4, RNP_RSS_HASH_IPV4, RTE_ETH_RSS_IPV4},
+ {RNP_RSS_IPV4, RNP_RSS_HASH_IPV4, RTE_ETH_RSS_FRAG_IPV4},
+ {RNP_RSS_IPV4, RNP_RSS_HASH_IPV4, RTE_ETH_RSS_NONFRAG_IPV4_OTHER},
+ {RNP_RSS_IPV6, RNP_RSS_HASH_IPV6, RTE_ETH_RSS_IPV6},
+ {RNP_RSS_IPV6, RNP_RSS_HASH_IPV6, RTE_ETH_RSS_FRAG_IPV6},
+ {RNP_RSS_IPV6, RNP_RSS_HASH_IPV6, RTE_ETH_RSS_NONFRAG_IPV6_OTHER},
+ {RNP_RSS_IPV4_TCP, RNP_RSS_HASH_IPV4_TCP, RTE_ETH_RSS_NONFRAG_IPV4_TCP},
+ {RNP_RSS_IPV4_UDP, RNP_RSS_HASH_IPV4_UDP, RTE_ETH_RSS_NONFRAG_IPV4_UDP},
+ {RNP_RSS_IPV4_SCTP, RNP_RSS_HASH_IPV4_SCTP, RTE_ETH_RSS_NONFRAG_IPV4_SCTP},
+ {RNP_RSS_IPV6_TCP, RNP_RSS_HASH_IPV6_TCP, RTE_ETH_RSS_NONFRAG_IPV6_TCP},
+ {RNP_RSS_IPV6_UDP, RNP_RSS_HASH_IPV6_UDP, RTE_ETH_RSS_NONFRAG_IPV6_UDP},
+ {RNP_RSS_IPV6_SCTP, RNP_RSS_HASH_IPV6_SCTP, RTE_ETH_RSS_NONFRAG_IPV6_SCTP}
+};
+
+static uint8_t rnp_rss_default_key[40] = {
+ 0x6D, 0x5A, 0x56, 0xDA, 0x25, 0x5B, 0x0E, 0xC2,
+ 0x41, 0x67, 0x25, 0x3D, 0x43, 0xA3, 0x8F, 0xB0,
+ 0xD0, 0xCA, 0x2B, 0xCB, 0xAE, 0x7B, 0x30, 0xB4,
+ 0x77, 0xCB, 0x2D, 0xA3, 0x80, 0x30, 0xF2, 0x0C,
+ 0x6A, 0x42, 0xB7, 0x3B, 0xBE, 0xAC, 0x01, 0xFA,
+};
+
+int
+rnp_dev_rss_reta_update(struct rte_eth_dev *dev,
+ struct rte_eth_rss_reta_entry64 *reta_conf,
+ uint16_t reta_size)
+{
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(dev);
+ uint16_t port_offset = port->attr.port_offset;
+ uint32_t *indirtbl = &port->indirtbl[0];
+ uint16_t lane = port->attr.nr_lane;
+ struct rnp_hw *hw = port->hw;
+ struct rnp_rx_queue *rxq;
+ uint16_t i, idx, shift;
+ uint16_t hwrid;
+ uint16_t qid = 0;
+
+ if (reta_size > RNP_RSS_INDIR_SIZE) {
+ RNP_PMD_ERR("Invalid reta size, reta_size:%d", reta_size);
+ return -EINVAL;
+ }
+ for (i = 0; i < reta_size; i++) {
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
+ if (reta_conf[idx].mask & (1ULL << shift))
+ indirtbl[i] = reta_conf[idx].reta[shift];
+ }
+ for (i = 0; i < RNP_RSS_INDIR_SIZE; i++) {
+ qid = indirtbl[i];
+ if (qid < dev->data->nb_rx_queues) {
+ rxq = dev->data->rx_queues[qid];
+ hwrid = rxq->attr.index - port_offset;
+ RNP_E_REG_WR(hw, RNP_RSS_REDIR_TB(lane, i), hwrid);
+ rxq->rx_offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
+ } else {
+ RNP_PMD_WARN("port[%d] reta[%d]-queue=%d "
+ "rx queueu is out range of cur Settings\n",
+ dev->data->port_id, i, qid);
+ }
+ }
+ port->reta_has_cfg = true;
+
+ return 0;
+}
+
+static uint16_t
+rnp_hwrid_to_queue_id(struct rte_eth_dev *dev, uint16_t hwrid)
+{
+ struct rnp_rx_queue *rxq;
+ bool find = false;
+ uint16_t idx;
+
+ for (idx = 0; idx < dev->data->nb_rx_queues; idx++) {
+ rxq = dev->data->rx_queues[idx];
+ if (!rxq)
+ continue;
+ if (rxq->attr.index == hwrid) {
+ find = true;
+ break;
+ }
+ }
+ if (find)
+ return rxq->attr.queue_id;
+
+ return UINT16_MAX;
+}
+
+int
+rnp_dev_rss_reta_query(struct rte_eth_dev *dev,
+ struct rte_eth_rss_reta_entry64 *reta_conf,
+ uint16_t reta_size)
+{
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(dev);
+ uint16_t port_offset = port->attr.port_offset;
+ struct rnp_hw *hw = port->hw;
+ uint32_t *indirtbl = &port->indirtbl[0];
+ uint16_t lane = port->attr.nr_lane;
+ uint16_t i, idx, shift;
+ uint16_t hwrid;
+ uint16_t queue_id;
+
+ if (reta_size > RNP_RSS_INDIR_SIZE) {
+ RNP_PMD_ERR("Invalid reta size, reta_size:%d", reta_size);
+ return -EINVAL;
+ }
+ for (i = 0; i < reta_size; i++) {
+ hwrid = RNP_E_REG_RD(hw, RNP_RSS_REDIR_TB(lane, i));
+ hwrid = hwrid + port_offset;
+ queue_id = rnp_hwrid_to_queue_id(dev, hwrid);
+ if (queue_id == UINT16_MAX) {
+ RNP_PMD_ERR("Invalid rss-table value is the"
+ " Sw-queue not Match Hardware?\n");
+ return -EINVAL;
+ }
+ indirtbl[i] = queue_id;
+ }
+ for (i = 0; i < reta_size; i++) {
+ idx = i / RTE_ETH_RETA_GROUP_SIZE;
+ shift = i % RTE_ETH_RETA_GROUP_SIZE;
+ if (reta_conf[idx].mask & (1ULL << shift))
+ reta_conf[idx].reta[shift] = (uint16_t)indirtbl[i];
+ }
+
+ return 0;
+}
+
+static void rnp_disable_rss(struct rte_eth_dev *dev)
+{
+ struct rnp_eth_adapter *adapter = RNP_DEV_TO_ADAPTER(dev);
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(dev);
+ struct rte_eth_rss_conf *conf = &port->rss_conf;
+ struct rnp_rx_queue *rxq = NULL;
+ struct rnp_hw *hw = port->hw;
+ uint8_t rss_disable = 0;
+ uint32_t mrqc_reg = 0;
+ uint16_t lane, index;
+ uint16_t idx;
+
+ memset(conf, 0, sizeof(*conf));
+ lane = port->attr.nr_lane;
+ for (idx = 0; idx < hw->max_port_num; idx++) {
+ if (adapter->ports[idx] == NULL) {
+ rss_disable++;
+ continue;
+ }
+ if (!adapter->ports[idx]->rss_conf.rss_hf)
+ rss_disable++;
+ }
+
+ for (idx = 0; idx < dev->data->nb_rx_queues; idx++) {
+ rxq = dev->data->rx_queues[idx];
+ if (!rxq)
+ continue;
+ rxq->rx_offloads &= ~RTE_ETH_RX_OFFLOAD_RSS_HASH;
+ }
+ /* Multiple port mode is achieved in software:
+ * RSS is disabled by pointing the redirection table
+ * at the default ring. When RSS is re-enabled, the
+ * reta table must be restored to the last state the
+ * user configured.
+ */
+ rxq = dev->data->rx_queues[0];
+ index = rxq->attr.index - port->attr.port_offset;
+ for (idx = 0; idx < RNP_RSS_INDIR_SIZE; idx++)
+ RNP_E_REG_WR(hw, RNP_RSS_REDIR_TB(lane, idx), index);
+ if (rss_disable == hw->max_port_num) {
+ mrqc_reg = RNP_E_REG_RD(hw, RNP_RSS_MRQC_ADDR);
+ mrqc_reg &= ~RNP_RSS_HASH_CFG_MASK;
+ RNP_E_REG_WR(hw, RNP_RSS_MRQC_ADDR, mrqc_reg);
+ }
+}
+
+static void
+rnp_rss_hash_set(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf)
+{
+ uint64_t rss_hash_level = RTE_ETH_RSS_LEVEL(rss_conf->rss_hf);
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(dev);
+ struct rnp_rx_queue *rxq = NULL;
+ struct rnp_hw *hw = port->hw;
+ uint8_t *hash_key;
+ uint32_t mrqc_reg = 0;
+ uint32_t rss_key;
+ uint64_t rss_hf;
+ uint16_t i;
+
+ rss_hf = rss_conf->rss_hf;
+ hash_key = rss_conf->rss_key;
+ if (hash_key != NULL) {
+ for (i = 0; i < RNP_MAX_HASH_KEY_SIZE; i++) {
+ rss_key = hash_key[(i * 4)];
+ rss_key |= hash_key[(i * 4) + 1] << 8;
+ rss_key |= hash_key[(i * 4) + 2] << 16;
+ rss_key |= hash_key[(i * 4) + 3] << 24;
+ rss_key = rte_cpu_to_be_32(rss_key);
+ RNP_E_REG_WR(hw, RNP_RSS_KEY_TABLE(9 - i), rss_key);
+ }
+ }
+ if (rss_hf) {
+ for (i = 0; i < RTE_DIM(rnp_rss_cfg); i++)
+ if (rnp_rss_cfg[i].rss_flag & rss_hf)
+ mrqc_reg |= rnp_rss_cfg[i].reg_val;
+ /* Enable inner rss mode.
+ * If enabled, outer (vxlan/nvgre) headers won't be hashed.
+ */
+ if (rss_hash_level == RTE_ETH_RSS_LEVEL_INNERMOST)
+ RNP_E_REG_WR(hw, RNP_RSS_INNER_CTRL, RNP_INNER_RSS_EN);
+ else
+ RNP_E_REG_WR(hw, RNP_RSS_INNER_CTRL, RNP_INNER_RSS_DIS);
+ RNP_E_REG_WR(hw, RNP_RSS_MRQC_ADDR, mrqc_reg);
+ for (i = 0; i < dev->data->nb_rx_queues; i++) {
+ rxq = dev->data->rx_queues[i];
+ if (!rxq)
+ continue;
+ rxq->rx_offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
+ }
+ }
+}
+
+static void
+rnp_reta_table_update(struct rte_eth_dev *dev)
+{
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(dev);
+ uint16_t port_offset = port->attr.port_offset;
+ uint32_t *indirtbl = &port->indirtbl[0];
+ struct rnp_hw *hw = port->hw;
+ struct rnp_rx_queue *rxq;
+ int i = 0, qid = 0, p_id;
+ uint16_t hwrid;
+
+ p_id = port->attr.nr_lane;
+ for (i = 0; i < RNP_RSS_INDIR_SIZE; i++) {
+ qid = indirtbl[i];
+ if (qid < dev->data->nb_rx_queues) {
+ rxq = dev->data->rx_queues[qid];
+ hwrid = rxq->attr.index - port_offset;
+ RNP_E_REG_WR(hw, RNP_RSS_REDIR_TB(p_id, i), hwrid);
+ rxq->rx_offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
+ } else {
+ RNP_PMD_LOG(WARNING, "port[%d] reta[%d]-queue=%d "
+ "rx queue is out of range of current settings",
+ dev->data->port_id, i, qid);
+ }
+ }
+}
+
+int
+rnp_dev_rss_hash_update(struct rte_eth_dev *dev,
+ struct rte_eth_rss_conf *rss_conf)
+{
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(dev);
+
+ if (rss_conf->rss_key &&
+ rss_conf->rss_key_len > RNP_MAX_HASH_KEY_SIZE) {
+ RNP_PMD_ERR("Invalid rss key, rss_key_len:%d",
+ rss_conf->rss_key_len);
+ return -EINVAL;
+ }
+ if (rss_conf->rss_hf &&
+ (!(rss_conf->rss_hf & RNP_SUPPORT_RSS_OFFLOAD_ALL))) {
+ RNP_PMD_ERR("RSS type don't support 0x%.2lx", rss_conf->rss_hf);
+ return -EINVAL;
+ }
+ if (!rss_conf->rss_hf) {
+ rnp_disable_rss(dev);
+ } else {
+ rnp_rss_hash_set(dev, rss_conf);
+ rnp_reta_table_update(dev);
+ }
+ port->rss_conf = *rss_conf;
+
+ return 0;
+}
+
+int
+rnp_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
+ struct rte_eth_rss_conf *rss_conf)
+{
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(dev);
+ struct rnp_hw *hw = port->hw;
+ uint8_t *hash_key;
+ uint32_t rss_key;
+ uint64_t rss_hf;
+ uint32_t mrqc;
+ uint16_t i;
+
+ hash_key = rss_conf->rss_key;
+ if (hash_key != NULL) {
+ for (i = 0; i < 10; i++) {
+ rss_key = RNP_E_REG_RD(hw, RNP_RSS_KEY_TABLE(9 - i));
+ rss_key = rte_be_to_cpu_32(rss_key);
+ hash_key[(i * 4)] = rss_key & 0x000000FF;
+ hash_key[(i * 4) + 1] = (rss_key >> 8) & 0x000000FF;
+ hash_key[(i * 4) + 2] = (rss_key >> 16) & 0x000000FF;
+ hash_key[(i * 4) + 3] = (rss_key >> 24) & 0x000000FF;
+ }
+ }
+ rss_hf = 0;
+ mrqc = RNP_E_REG_RD(hw, RNP_RSS_MRQC_ADDR) & RNP_RSS_HASH_CFG_MASK;
+ if (mrqc == 0) {
+ rss_conf->rss_hf = 0;
+ return 0;
+ }
+ for (i = 0; i < RTE_DIM(rnp_rss_cfg); i++)
+ if (rnp_rss_cfg[i].reg_val & mrqc)
+ rss_hf |= rnp_rss_cfg[i].rss_flag;
+
+ rss_conf->rss_hf = rss_hf;
+
+ return 0;
+}
+
+int rnp_dev_rss_configure(struct rte_eth_dev *dev)
+{
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(dev);
+ uint32_t *indirtbl = port->indirtbl;
+ enum rte_eth_rx_mq_mode mq_mode = 0;
+ struct rte_eth_rss_conf rss_conf;
+ struct rnp_rx_queue *rxq;
+ int i, j;
+
+ mq_mode = dev->data->dev_conf.rxmode.mq_mode;
+ if (dev->data->rx_queues == NULL) {
+ RNP_PMD_ERR("rx_queue is not setup skip rss set");
+ return -EINVAL;
+ }
+ rss_conf = dev->data->dev_conf.rx_adv_conf.rss_conf;
+ if (!(rss_conf.rss_hf & RNP_SUPPORT_RSS_OFFLOAD_ALL) ||
+ !(mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)) {
+ rnp_disable_rss(dev);
+
+ return 0;
+ }
+ if (rss_conf.rss_key == NULL)
+ rss_conf.rss_key = rnp_rss_default_key;
+
+ if (port->rxq_num_changed || !port->reta_has_cfg) {
+ /* set default reta policy */
+ for (i = 0; i < RNP_RSS_INDIR_SIZE; i++) {
+ j = i % dev->data->nb_rx_queues;
+ rxq = dev->data->rx_queues[j];
+ if (!rxq) {
+ RNP_PMD_ERR("rss Set reta-cfg rxq %d Is Null\n", i);
+ return -EINVAL;
+ }
+ indirtbl[i] = rxq->attr.queue_id;
+ }
+ }
+ rnp_reta_table_update(dev);
+ port->rss_conf = rss_conf;
+ /* setup rss key and hash func */
+ rnp_rss_hash_set(dev, &rss_conf);
+
+ return 0;
+}
diff --git a/drivers/net/rnp/rnp_rss.h b/drivers/net/rnp/rnp_rss.h
new file mode 100644
index 0000000..73f895d
--- /dev/null
+++ b/drivers/net/rnp/rnp_rss.h
@@ -0,0 +1,43 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Mucse IC Design Ltd.
+ */
+
+#ifndef _RNP_RSS_H_
+#define _RNP_RSS_H_
+
+#include "rnp.h"
+
+struct rnp_rss_hash_cfg {
+ uint32_t func_id;
+ uint32_t reg_val;
+ uint64_t rss_flag;
+};
+
+enum rnp_rss_hash_type {
+ RNP_RSS_IPV4,
+ RNP_RSS_IPV6,
+ RNP_RSS_IPV4_TCP,
+ RNP_RSS_IPV4_UDP,
+ RNP_RSS_IPV4_SCTP,
+ RNP_RSS_IPV6_TCP,
+ RNP_RSS_IPV6_UDP,
+ RNP_RSS_IPV6_SCTP,
+};
+
+int
+rnp_dev_rss_reta_update(struct rte_eth_dev *dev,
+ struct rte_eth_rss_reta_entry64 *reta_conf,
+ uint16_t reta_size);
+int
+rnp_dev_rss_reta_query(struct rte_eth_dev *dev,
+ struct rte_eth_rss_reta_entry64 *reta_conf,
+ uint16_t reta_size);
+int
+rnp_dev_rss_hash_update(struct rte_eth_dev *dev,
+ struct rte_eth_rss_conf *rss_conf);
+int
+rnp_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
+ struct rte_eth_rss_conf *rss_conf);
+int rnp_dev_rss_configure(struct rte_eth_dev *dev);
+
+#endif /* _RNP_RSS_H_ */
--
1.8.3.1
* [PATCH v7 12/28] net/rnp: add support link update operations
2025-02-08 2:43 [PATCH v7 00/28] [v6]drivers/net Add Support mucse N10 Pmd Driver Wenbo Cao
` (10 preceding siblings ...)
2025-02-08 2:43 ` [PATCH v7 11/28] net/rnp: add RSS support operations Wenbo Cao
@ 2025-02-08 2:43 ` Wenbo Cao
2025-02-08 2:43 ` [PATCH v7 13/28] net/rnp: add support link setup operations Wenbo Cao
` (15 subsequent siblings)
27 siblings, 0 replies; 29+ messages in thread
From: Wenbo Cao @ 2025-02-08 2:43 UTC (permalink / raw)
To: thomas, Wenbo Cao; +Cc: stephen, dev, ferruh.yigit, andrew.rybchenko, yaojun
Add support for getting the link status in both poll and irq modes.
Signed-off-by: Wenbo Cao <caowenbo@mucse.com>
---
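For context, a sketch of the application side of the two modes this patch
adds: with intr_conf.lsc = 1 the PMD reports link changes through the
RTE_ETH_EVENT_INTR_LSC event from the firmware irq path, otherwise
link_update is polled. The callback body below is illustrative:

#include <stdio.h>
#include <rte_ethdev.h>

static int
on_lsc(uint16_t port_id, enum rte_eth_event_type event,
       void *cb_arg, void *ret_param)
{
	struct rte_eth_link link;

	RTE_SET_USED(event);
	RTE_SET_USED(cb_arg);
	RTE_SET_USED(ret_param);
	rte_eth_link_get_nowait(port_id, &link);
	printf("port %u link %s\n", port_id,
	       link.link_status == RTE_ETH_LINK_UP ? "up" : "down");
	return 0;
}

static void
watch_link(uint16_t port_id)
{
	rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_INTR_LSC,
				      on_lsc, NULL);
}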
doc/guides/nics/features/rnp.ini | 2 +
doc/guides/nics/rnp.rst | 1 +
drivers/net/rnp/base/rnp_fw_cmd.c | 45 +++++++++++++++
drivers/net/rnp/base/rnp_fw_cmd.h | 58 ++++++++++++++++++-
drivers/net/rnp/base/rnp_hw.h | 1 +
drivers/net/rnp/base/rnp_mbx_fw.c | 72 ++++++++++++++++++++++-
drivers/net/rnp/base/rnp_mbx_fw.h | 4 ++
drivers/net/rnp/meson.build | 1 +
drivers/net/rnp/rnp.h | 12 ++++
drivers/net/rnp/rnp_ethdev.c | 116 ++++++++++++++++++++++++++++++++++++--
10 files changed, 304 insertions(+), 8 deletions(-)
diff --git a/doc/guides/nics/features/rnp.ini b/doc/guides/nics/features/rnp.ini
index 2fc94825f..695b9c0 100644
--- a/doc/guides/nics/features/rnp.ini
+++ b/doc/guides/nics/features/rnp.ini
@@ -5,6 +5,8 @@
;
[Features]
Speed capabilities = Y
+Link status = Y
+Link status event = Y
Queue start/stop = Y
Promiscuous mode = Y
Allmulticast mode = Y
diff --git a/doc/guides/nics/rnp.rst b/doc/guides/nics/rnp.rst
index 8f9d38d..82dd2d8 100644
--- a/doc/guides/nics/rnp.rst
+++ b/doc/guides/nics/rnp.rst
@@ -15,6 +15,7 @@ Features
Receive Side Scaling (RSS) on IPv4, IPv6, IPv4-TCP/UDP/SCTP, IPv6-TCP/UDP/SCTP
Inner RSS is only supported for VXLAN/NVGRE
- Promiscuous mode
+- Link state information
Prerequisites
-------------
diff --git a/drivers/net/rnp/base/rnp_fw_cmd.c b/drivers/net/rnp/base/rnp_fw_cmd.c
index 34a88a1..c5ae7b9 100644
--- a/drivers/net/rnp/base/rnp_fw_cmd.c
+++ b/drivers/net/rnp/base/rnp_fw_cmd.c
@@ -68,6 +68,45 @@
arg->nr_lane = req_arg->param0;
}
+static void
+rnp_build_set_event_mask(struct rnp_mbx_fw_cmd_req *req,
+ struct rnp_fw_req_arg *req_arg,
+ void *cookie)
+{
+ struct rnp_set_pf_event_mask *arg =
+ (struct rnp_set_pf_event_mask *)req->data;
+
+ req->flags = 0;
+ req->opcode = RNP_SET_EVENT_MASK;
+ req->datalen = sizeof(*arg);
+ req->cookie = cookie;
+ req->reply_lo = 0;
+ req->reply_hi = 0;
+
+ arg->event_mask = req_arg->param0;
+ arg->event_en = req_arg->param1;
+}
+
+static void
+rnp_build_lane_event_mask(struct rnp_mbx_fw_cmd_req *req,
+ struct rnp_fw_req_arg *req_arg,
+ void *cookie)
+{
+ struct rnp_set_lane_event_mask *arg =
+ (struct rnp_set_lane_event_mask *)req->data;
+
+ req->flags = 0;
+ req->opcode = RNP_SET_LANE_EVENT_EN;
+ req->datalen = sizeof(*arg);
+ req->cookie = cookie;
+ req->reply_lo = 0;
+ req->reply_hi = 0;
+
+ arg->nr_lane = req_arg->param0;
+ arg->event_mask = req_arg->param1;
+ arg->event_en = req_arg->param2;
+}
+
int rnp_build_fwcmd_req(struct rnp_mbx_fw_cmd_req *req,
struct rnp_fw_req_arg *arg,
void *cookie)
@@ -87,6 +126,12 @@ int rnp_build_fwcmd_req(struct rnp_mbx_fw_cmd_req *req,
case RNP_GET_LANE_STATUS:
rnp_build_get_lane_status_req(req, arg, cookie);
break;
+ case RNP_SET_EVENT_MASK:
+ rnp_build_set_event_mask(req, arg, cookie);
+ break;
+ case RNP_SET_LANE_EVENT_EN:
+ rnp_build_lane_event_mask(req, arg, cookie);
+ break;
default:
err = -EOPNOTSUPP;
}
diff --git a/drivers/net/rnp/base/rnp_fw_cmd.h b/drivers/net/rnp/base/rnp_fw_cmd.h
index c34fc5c..c86a32a 100644
--- a/drivers/net/rnp/base/rnp_fw_cmd.h
+++ b/drivers/net/rnp/base/rnp_fw_cmd.h
@@ -6,8 +6,9 @@
#define _RNP_FW_CMD_H_
#include "rnp_osdep.h"
+#include "rnp_hw.h"
-#define RNP_FW_LINK_SYNC _NIC_(0x000c)
+#define RNP_FW_LINK_SYNC (0x000c)
#define RNP_LINK_MAGIC_CODE (0xa5a40000)
#define RNP_LINK_MAGIC_MASK RTE_GENMASK32(31, 16)
@@ -73,6 +74,22 @@ enum RNP_GENERIC_CMD {
RNP_SET_DDR_CSL = 0xFF11,
};
+struct rnp_port_stat {
+ u8 phy_addr; /* Phy MDIO address */
+
+ u8 duplex : 1; /* FIBRE is always 1, Twisted Pair is 1 or 0 */
+ u8 autoneg : 1; /* autoneg state */
+ u8 fec : 1;
+ u8 an_rev : 1;
+ u8 link_traing : 1;
+ u8 is_sgmii : 1; /* valid for fw >= 0.5.0.17 */
+ u8 rsvd0 : 2;
+ u16 speed; /* current port link speed */
+
+ u16 pause : 4;
+ u16 rsvd1 : 12;
+} __packed;
+
/* firmware -> driver reply */
struct rnp_phy_abilities_rep {
u8 link_stat;
@@ -203,6 +220,19 @@ struct rnp_lane_stat_rep {
u32 rsvd;
} _PACKED_ALIGN4;
+
+#define RNP_MBX_SYNC_MASK RTE_GENMASK32(15, 0)
+/* == flags == */
+#define RNP_FLAGS_DD RTE_BIT32(0) /* driver clear 0, FW must set 1 */
+#define RNP_FLAGS_CMP RTE_BIT32(1) /* driver clear 0, FW must set */
+#define RNP_FLAGS_ERR RTE_BIT32(2) /* driver clear 0, FW must set only if it reporting an error */
+#define RNP_FLAGS_LB RTE_BIT32(9)
+#define RNP_FLAGS_RD RTE_BIT32(10) /* set if additional buffer has command parameters */
+#define RNP_FLAGS_BUF RTE_BIT32(12) /* set 1 on indirect command */
+#define RNP_FLAGS_SI RTE_BIT32(13) /* not irq when command complete */
+#define RNP_FLAGS_EI RTE_BIT32(14) /* interrupt on error */
+#define RNP_FLAGS_FE RTE_BIT32(15) /* flush error */
+
#define RNP_FW_REP_DATA_NUM (40)
struct rnp_mbx_fw_cmd_reply {
u16 flags;
@@ -254,6 +284,32 @@ struct rnp_get_lane_st_req {
u32 rsv[7];
} _PACKED_ALIGN4;
+#define RNP_FW_EVENT_LINK_UP RTE_BIT32(0)
+#define RNP_FW_EVENT_PLUG_IN RTE_BIT32(1)
+#define RNP_FW_EVENT_PLUG_OUT RTE_BIT32(2)
+struct rnp_set_pf_event_mask {
+ u16 event_mask;
+ u16 event_en;
+
+ u32 rsv[7];
+};
+
+struct rnp_set_lane_event_mask {
+ u32 nr_lane;
+ u8 event_mask;
+ u8 event_en;
+ u8 rsvd[26];
+};
+
+/* FW op -> driver */
+struct rnp_link_stat_req {
+ u16 changed_lanes;
+ u16 lane_status;
+#define RNP_SPEED_VALID_MAGIC (0xa4a6a8a9)
+ u32 port_st_magic;
+ struct rnp_port_stat states[RNP_MAX_PORT_OF_PF];
+};
+
struct rnp_mbx_fw_cmd_req {
u16 flags;
u16 opcode;
diff --git a/drivers/net/rnp/base/rnp_hw.h b/drivers/net/rnp/base/rnp_hw.h
index ed1e7eb..00707b3 100644
--- a/drivers/net/rnp/base/rnp_hw.h
+++ b/drivers/net/rnp/base/rnp_hw.h
@@ -124,6 +124,7 @@ struct rnp_hw {
spinlock_t rxq_reset_lock;
spinlock_t txq_reset_lock;
+ spinlock_t link_sync;
};
#endif /* __RNP_H__*/
diff --git a/drivers/net/rnp/base/rnp_mbx_fw.c b/drivers/net/rnp/base/rnp_mbx_fw.c
index 893a460..d15a639 100644
--- a/drivers/net/rnp/base/rnp_mbx_fw.c
+++ b/drivers/net/rnp/base/rnp_mbx_fw.c
@@ -295,7 +295,7 @@ int rnp_mbx_fw_reset_phy(struct rnp_hw *hw)
memset(&arg, 0, sizeof(arg));
arg.opcode = RNP_RESET_PHY;
- err = rnp_fw_send_norep_cmd(port, &arg);
+ err = rnp_fw_send_cmd(port, &arg, NULL);
if (err) {
RNP_PMD_LOG(ERR, "%s: failed. err:%d", __func__, err);
return err;
@@ -394,3 +394,73 @@ int rnp_mbx_fw_reset_phy(struct rnp_hw *hw)
return 0;
}
+
+static void
+rnp_link_sync_init(struct rnp_hw *hw, bool en)
+{
+ RNP_E_REG_WR(hw, RNP_FW_LINK_SYNC, en ? RNP_LINK_MAGIC_CODE : 0);
+}
+
+int
+rnp_mbx_fw_pf_link_event_en(struct rnp_eth_port *port, bool en)
+{
+ struct rnp_eth_adapter *adapter = NULL;
+ struct rnp_hw *hw = port->hw;
+ struct rnp_fw_req_arg arg;
+ int err;
+
+ adapter = hw->back;
+ memset(&arg, 0, sizeof(arg));
+ arg.opcode = RNP_SET_EVENT_MASK;
+ arg.param0 = RNP_FW_EVENT_LINK_UP;
+ arg.param1 = en ? RNP_FW_EVENT_LINK_UP : 0;
+
+ err = rnp_fw_send_norep_cmd(port, &arg);
+ if (err) {
+ RNP_PMD_LOG(ERR, "%s: failed. err:%d", __func__, err);
+ return err;
+ }
+ rnp_link_sync_init(hw, en);
+ adapter->intr_registed = en;
+ hw->fw_info.fw_irq_en = en;
+
+ return 0;
+}
+
+int
+rnp_mbx_fw_lane_link_event_en(struct rnp_eth_port *port, bool en)
+{
+ u16 nr_lane = port->attr.nr_lane;
+ struct rnp_fw_req_arg arg;
+ int err;
+
+ memset(&arg, 0, sizeof(arg));
+ arg.opcode = RNP_SET_LANE_EVENT_EN;
+ arg.param0 = nr_lane;
+ arg.param1 = RNP_FW_EVENT_LINK_UP;
+ arg.param2 = en ? RNP_FW_EVENT_LINK_UP : 0;
+
+ err = rnp_fw_send_norep_cmd(port, &arg);
+ if (err) {
+ RNP_PMD_LOG(ERR, "%s: failed. err:%d", __func__, err);
+ return err;
+ }
+
+ return 0;
+}
+
+int
+rnp_rcv_msg_from_fw(struct rnp_eth_adapter *adapter, u32 *msgbuf)
+{
+ const struct rnp_mbx_ops *ops = RNP_DEV_PP_TO_MBX_OPS(adapter->eth_dev);
+ struct rnp_hw *hw = &adapter->hw;
+ int retval;
+
+ retval = ops->read(hw, msgbuf, RNP_MBX_MSG_BLOCK_SIZE, RNP_MBX_FW);
+ if (retval) {
+ RNP_PMD_ERR("Error receiving message from FW");
+ return retval;
+ }
+
+ return 0;
+}
diff --git a/drivers/net/rnp/base/rnp_mbx_fw.h b/drivers/net/rnp/base/rnp_mbx_fw.h
index fd0110b..159a023 100644
--- a/drivers/net/rnp/base/rnp_mbx_fw.h
+++ b/drivers/net/rnp/base/rnp_mbx_fw.h
@@ -14,6 +14,10 @@
int rnp_mbx_fw_get_capability(struct rnp_eth_port *port);
int rnp_mbx_fw_get_lane_stat(struct rnp_eth_port *port);
int rnp_mbx_fw_reset_phy(struct rnp_hw *hw);
+int rnp_mbx_fw_pf_link_event_en(struct rnp_eth_port *port, bool en);
int rnp_fw_init(struct rnp_hw *hw);
+int rnp_rcv_msg_from_fw(struct rnp_eth_adapter *adapter, u32 *msgbuf);
+int rnp_fw_mbx_ifup_down(struct rnp_eth_port *port, int up);
+int rnp_mbx_fw_lane_link_event_en(struct rnp_eth_port *port, bool en);
#endif /* _RNP_MBX_FW_H_ */
diff --git a/drivers/net/rnp/meson.build b/drivers/net/rnp/meson.build
index 40b0139..7c36587 100644
--- a/drivers/net/rnp/meson.build
+++ b/drivers/net/rnp/meson.build
@@ -16,4 +16,5 @@ sources = files(
'rnp_ethdev.c',
'rnp_rxtx.c',
'rnp_rss.c',
+ 'rnp_link.c',
)
diff --git a/drivers/net/rnp/rnp.h b/drivers/net/rnp/rnp.h
index e02de85..97222f3 100644
--- a/drivers/net/rnp/rnp.h
+++ b/drivers/net/rnp/rnp.h
@@ -89,6 +89,11 @@ struct rnp_port_attr {
struct rnp_phy_meta phy_meta;
+ bool link_ready;
+ bool pre_link;
+ bool duplex;
+ uint32_t speed;
+
uint16_t port_id; /* platform manage port sequence id */
uint8_t port_offset; /* port queue offset */
uint8_t sw_id; /* software port init sequence id */
@@ -119,6 +124,12 @@ struct rnp_eth_port {
bool port_stopped;
};
+enum rnp_pf_op {
+ RNP_PF_OP_DONE,
+ RNP_PF_OP_CLOSING = 1,
+ RNP_PF_OP_PROCESS,
+};
+
struct rnp_eth_adapter {
struct rnp_hw hw;
struct rte_pci_device *pdev;
@@ -126,6 +137,7 @@ struct rnp_eth_adapter {
struct rte_mempool *reset_pool;
struct rnp_eth_port *ports[RNP_MAX_PORT_OF_PF];
+ rte_atomic32_t pf_op;
uint16_t closed_ports;
uint16_t inited_ports;
bool intr_registed;
diff --git a/drivers/net/rnp/rnp_ethdev.c b/drivers/net/rnp/rnp_ethdev.c
index bd22034..a3b84db 100644
--- a/drivers/net/rnp/rnp_ethdev.c
+++ b/drivers/net/rnp/rnp_ethdev.c
@@ -18,6 +18,7 @@
#include "base/rnp_mac_regs.h"
#include "rnp_rxtx.h"
#include "rnp_rss.h"
+#include "rnp_link.h"
static struct rte_eth_dev *
rnp_alloc_eth_port(struct rte_pci_device *pci, char *name)
@@ -50,9 +51,82 @@
return NULL;
}
+static int
+rnp_mbx_fw_reply_handler(struct rnp_eth_adapter *adapter,
+ struct rnp_mbx_fw_cmd_reply *reply)
+{
+ struct rnp_mbx_req_cookie *cookie;
+
+ RTE_SET_USED(adapter);
+ cookie = reply->cookie;
+ if (!cookie || cookie->magic != RNP_COOKIE_MAGIC) {
+ RNP_PMD_ERR("[%s] invalid cookie:%p opcode: "
+ "0x%x v0:0x%x\n",
+ __func__,
+ cookie,
+ reply->opcode,
+ *((int *)reply));
+ return -EIO;
+ }
+ if (cookie->priv_len > 0)
+ rte_memcpy(cookie->priv, reply->data, cookie->priv_len);
+
+ cookie->done = 1;
+ if (reply->flags & RNP_FLAGS_ERR)
+ cookie->errcode = reply->error_code;
+ else
+ cookie->errcode = 0;
+
+ return 0;
+}
+
+static int rnp_mbx_fw_req_handler(struct rnp_eth_adapter *adapter,
+ struct rnp_mbx_fw_cmd_req *req)
+{
+ switch (req->opcode) {
+ case RNP_LINK_STATUS_EVENT:
+ rnp_link_event(adapter, req);
+ break;
+ default:
+ break;
+ }
+
+ return 0;
+}
+
+static int rnp_process_fw_msg(struct rnp_eth_adapter *adapter)
+{
+ const struct rnp_mbx_ops *ops = RNP_DEV_PP_TO_MBX_OPS(adapter->eth_dev);
+ uint32_t msgbuf[64];
+ struct rnp_hw *hw = &adapter->hw;
+ uint32_t msg_flag = 0;
+
+ memset(msgbuf, 0, sizeof(msgbuf));
+ /* check fw req */
+ if (!ops->check_for_msg(hw, RNP_MBX_FW)) {
+ rnp_rcv_msg_from_fw(adapter, msgbuf);
+ msg_flag = msgbuf[0] & RNP_MBX_SYNC_MASK;
+ if (msg_flag & RNP_FLAGS_DD)
+ rnp_mbx_fw_reply_handler(adapter,
+ (struct rnp_mbx_fw_cmd_reply *)msgbuf);
+ else
+ rnp_mbx_fw_req_handler(adapter,
+ (struct rnp_mbx_fw_cmd_req *)msgbuf);
+ }
+
+ return 0;
+}
+
static void rnp_dev_interrupt_handler(void *param)
{
- RTE_SET_USED(param);
+ struct rnp_eth_adapter *adapter = param;
+
+ if (!rte_atomic32_cmpset((volatile uint32_t *)&adapter->pf_op,
+ RNP_PF_OP_DONE, RNP_PF_OP_PROCESS))
+ return;
+ rnp_process_fw_msg(adapter);
+ rte_atomic32_set(&adapter->pf_op, RNP_PF_OP_DONE);
}
static void rnp_mac_rx_enable(struct rte_eth_dev *dev)
@@ -220,6 +294,7 @@ static int rnp_dev_start(struct rte_eth_dev *eth_dev)
{
struct rnp_eth_port *port = RNP_DEV_TO_PORT(eth_dev);
struct rte_eth_dev_data *data = eth_dev->data;
+ bool lsc = data->dev_conf.intr_conf.lsc;
struct rnp_hw *hw = port->hw;
uint16_t lane = 0;
uint16_t idx = 0;
@@ -248,6 +323,9 @@ static int rnp_dev_start(struct rte_eth_dev *eth_dev)
if (ret)
goto rxq_start_failed;
rnp_mac_init(eth_dev);
+ rnp_mbx_fw_lane_link_event_en(port, lsc);
+ if (!lsc)
+ rnp_run_link_poll_task(port);
/* enable eth rx flow */
RNP_RX_ETH_ENABLE(hw, lane);
port->port_stopped = 0;
@@ -321,6 +399,7 @@ static int rnp_dev_configure(struct rte_eth_dev *eth_dev)
static int rnp_dev_stop(struct rte_eth_dev *eth_dev)
{
struct rnp_eth_port *port = RNP_DEV_TO_PORT(eth_dev);
+ bool lsc = eth_dev->data->dev_conf.intr_conf.lsc;
struct rte_eth_link link;
if (port->port_stopped)
@@ -332,21 +411,35 @@ static int rnp_dev_stop(struct rte_eth_dev *eth_dev)
/* clear the recorded link status */
memset(&link, 0, sizeof(link));
rte_eth_linkstatus_set(eth_dev, &link);
-
rnp_disable_all_tx_queue(eth_dev);
rnp_disable_all_rx_queue(eth_dev);
rnp_mac_tx_disable(eth_dev);
rnp_mac_rx_disable(eth_dev);
-
+ if (!lsc)
+ rnp_cancel_link_poll_task(port);
+ port->attr.link_ready = false;
+ port->attr.speed = 0;
eth_dev->data->dev_started = 0;
port->port_stopped = 1;
return 0;
}
+static void rnp_change_manage_port(struct rnp_eth_adapter *adapter)
+{
+ uint8_t idx = 0;
+
+ adapter->eth_dev = NULL;
+ for (idx = 0; idx < adapter->inited_ports; idx++) {
+ if (adapter->ports[idx])
+ adapter->eth_dev = adapter->ports[idx]->eth_dev;
+ }
+}
+
static int rnp_dev_close(struct rte_eth_dev *eth_dev)
{
struct rnp_eth_adapter *adapter = RNP_DEV_TO_ADAPTER(eth_dev);
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(eth_dev);
struct rte_pci_device *pci_dev;
int ret = 0;
@@ -357,6 +450,14 @@ static int rnp_dev_close(struct rte_eth_dev *eth_dev)
ret = rnp_dev_stop(eth_dev);
if (ret < 0)
return ret;
+ do {
+ ret = rte_atomic32_cmpset((volatile uint32_t *)&adapter->pf_op,
+ RNP_PF_OP_DONE, RNP_PF_OP_CLOSING);
+ } while (!ret);
+ adapter->closed_ports++;
+ adapter->ports[port->attr.sw_id] = NULL;
+ if (adapter->intr_registed && adapter->eth_dev == eth_dev)
+ rnp_change_manage_port(adapter);
if (adapter->closed_ports == adapter->inited_ports) {
pci_dev = RTE_DEV_TO_PCI(eth_dev->device);
if (adapter->intr_registed) {
@@ -370,7 +471,7 @@ static int rnp_dev_close(struct rte_eth_dev *eth_dev)
rnp_dma_mem_free(&adapter->hw, &adapter->hw.fw_info.mem);
rte_free(adapter);
}
- adapter->closed_ports++;
+ rte_atomic32_set(&adapter->pf_op, RNP_PF_OP_DONE);
return 0;
}
@@ -535,6 +636,8 @@ static int rnp_allmulticast_disable(struct rte_eth_dev *eth_dev)
.reta_query = rnp_dev_rss_reta_query,
.rss_hash_update = rnp_dev_rss_hash_update,
.rss_hash_conf_get = rnp_dev_rss_hash_conf_get,
+ /* link impl */
+ .link_update = rnp_dev_link_update,
};
static void
@@ -673,6 +776,7 @@ static int rnp_allmulticast_disable(struct rte_eth_dev *eth_dev)
adapter->pdev = pci_dev;
adapter->eth_dev = eth_dev;
adapter->ports[0] = port;
+ rte_atomic32_init(&adapter->pf_op);
hw->back = (void *)adapter;
port->eth_dev = eth_dev;
port->hw = hw;
@@ -711,6 +815,7 @@ static int rnp_allmulticast_disable(struct rte_eth_dev *eth_dev)
RNP_PMD_ERR("hardware common ops setup failed");
goto free_ad;
}
+ rnp_mbx_fw_pf_link_event_en(port, false);
for (p_id = 0; p_id < hw->max_port_num; p_id++) {
/* port 0 resource has been allocated when probe */
if (!p_id) {
@@ -752,8 +857,7 @@ static int rnp_allmulticast_disable(struct rte_eth_dev *eth_dev)
rte_intr_callback_register(intr_handle,
rnp_dev_interrupt_handler, adapter);
rte_intr_enable(intr_handle);
- adapter->intr_registed = true;
- hw->fw_info.fw_irq_en = true;
+ rnp_mbx_fw_pf_link_event_en(port, true);
return 0;
--
1.8.3.1
* [PATCH v7 13/28] net/rnp: add support link setup operations
2025-02-08 2:43 [PATCH v7 00/28] [v6]drivers/net Add Support mucse N10 Pmd Driver Wenbo Cao
` (11 preceding siblings ...)
2025-02-08 2:43 ` [PATCH v7 12/28] net/rnp: add support link update operations Wenbo Cao
@ 2025-02-08 2:43 ` Wenbo Cao
2025-02-08 2:43 ` [PATCH v7 14/28] net/rnp: add Rx burst simple support Wenbo Cao
` (14 subsequent siblings)
27 siblings, 0 replies; 29+ messages in thread
From: Wenbo Cao @ 2025-02-08 2:43 UTC (permalink / raw)
To: thomas, Wenbo Cao; +Cc: stephen, dev, ferruh.yigit, andrew.rybchenko, yaojun
Add the set link_down/link_up implementation.
Signed-off-by: Wenbo Cao <caowenbo@mucse.com>
---
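A short usage sketch through the generic ethdev API, which lands in the new
dev ops; the helper name and error handling are illustrative:

#include <stdio.h>
#include <rte_ethdev.h>

/* Hypothetical helper: bounce the link on a started port; both calls
 * land in the new rnp_dev_set_link_down()/rnp_dev_set_link_up() ops. */
static void
bounce_link(uint16_t port_id)
{
	int ret;

	ret = rte_eth_dev_set_link_down(port_id);
	if (ret != 0)
		printf("port %u: set link down failed: %d\n", port_id, ret);
	ret = rte_eth_dev_set_link_up(port_id);
	if (ret != 0)
		printf("port %u: set link up failed: %d\n", port_id, ret);
}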
drivers/net/rnp/base/rnp_eth_regs.h | 3 ++
drivers/net/rnp/base/rnp_fw_cmd.c | 22 +++++++++
drivers/net/rnp/base/rnp_fw_cmd.h | 6 +++
drivers/net/rnp/base/rnp_mbx_fw.c | 33 +++++++++++++
drivers/net/rnp/base/rnp_mbx_fw.h | 1 +
drivers/net/rnp/rnp_ethdev.c | 4 ++
drivers/net/rnp/rnp_link.c | 99 +++++++++++++++++++++++++++++++++++++
drivers/net/rnp/rnp_link.h | 2 +
8 files changed, 170 insertions(+)
diff --git a/drivers/net/rnp/base/rnp_eth_regs.h b/drivers/net/rnp/base/rnp_eth_regs.h
index be7ed5b..c74886e 100644
--- a/drivers/net/rnp/base/rnp_eth_regs.h
+++ b/drivers/net/rnp/base/rnp_eth_regs.h
@@ -5,6 +5,9 @@
#ifndef _RNP_ETH_REGS_H
#define _RNP_ETH_REGS_H
+#define RNP_ETH_TX_FIFO_STATE _ETH_(0x0330)
+#define RNP_ETH_TX_FIFO_EMPT(lane) ((1 << (lane)) | (1 << ((lane) + 4)))
+
#define RNP_E_ENG_BYPASS _ETH_(0x8000)
#define RNP_E_VXLAN_PARSE_EN _ETH_(0x8004)
#define RNP_E_FILTER_EN _ETH_(0x801c)
diff --git a/drivers/net/rnp/base/rnp_fw_cmd.c b/drivers/net/rnp/base/rnp_fw_cmd.c
index c5ae7b9..17d3bb2 100644
--- a/drivers/net/rnp/base/rnp_fw_cmd.c
+++ b/drivers/net/rnp/base/rnp_fw_cmd.c
@@ -107,6 +107,25 @@
arg->event_en = req_arg->param2;
}
+static void
+rnp_build_ifup_down(struct rnp_mbx_fw_cmd_req *req,
+ struct rnp_fw_req_arg *req_arg,
+ void *cookie)
+{
+ struct rnp_ifup_down_req *arg =
+ (struct rnp_ifup_down_req *)req->data;
+
+ req->flags = 0;
+ req->opcode = RNP_IFUP_DOWN;
+ req->datalen = sizeof(*arg);
+ req->cookie = cookie;
+ req->reply_lo = 0;
+ req->reply_hi = 0;
+
+ arg->nr_lane = req_arg->param0;
+ arg->up = req_arg->param1;
+}
+
int rnp_build_fwcmd_req(struct rnp_mbx_fw_cmd_req *req,
struct rnp_fw_req_arg *arg,
void *cookie)
@@ -132,6 +151,9 @@ int rnp_build_fwcmd_req(struct rnp_mbx_fw_cmd_req *req,
case RNP_SET_LANE_EVENT_EN:
rnp_build_lane_evet_mask(req, arg, cookie);
break;
+ case RNP_IFUP_DOWN:
+ rnp_build_ifup_down(req, arg, cookie);
+ break;
default:
err = -EOPNOTSUPP;
}
diff --git a/drivers/net/rnp/base/rnp_fw_cmd.h b/drivers/net/rnp/base/rnp_fw_cmd.h
index c86a32a..6b34396 100644
--- a/drivers/net/rnp/base/rnp_fw_cmd.h
+++ b/drivers/net/rnp/base/rnp_fw_cmd.h
@@ -310,6 +310,12 @@ struct rnp_link_stat_req {
struct rnp_port_stat states[RNP_MAX_PORT_OF_PF];
};
+struct rnp_ifup_down_req {
+ u32 nr_lane;
+ u32 up;
+ u8 rsvd[24];
+};
+
struct rnp_mbx_fw_cmd_req {
u16 flags;
u16 opcode;
diff --git a/drivers/net/rnp/base/rnp_mbx_fw.c b/drivers/net/rnp/base/rnp_mbx_fw.c
index d15a639..8758437 100644
--- a/drivers/net/rnp/base/rnp_mbx_fw.c
+++ b/drivers/net/rnp/base/rnp_mbx_fw.c
@@ -464,3 +464,36 @@ int rnp_mbx_fw_reset_phy(struct rnp_hw *hw)
return 0;
}
+
+static void rnp_link_stat_reset(struct rnp_hw *hw, u16 lane)
+{
+ u32 state;
+
+ spin_lock(&hw->link_sync);
+ state = RNP_E_REG_RD(hw, RNP_FW_LINK_SYNC);
+ state &= ~RNP_LINK_MAGIC_MASK;
+ state |= RNP_LINK_MAGIC_CODE;
+ state &= ~RTE_BIT32(lane);
+
+ RNP_E_REG_WR(hw, RNP_FW_LINK_SYNC, state);
+ rte_spinlock_unlock(&hw->link_sync);
+}
+
+int rnp_mbx_fw_ifup_down(struct rnp_eth_port *port, bool up)
+{
+ u16 nr_lane = port->attr.nr_lane;
+ struct rnp_hw *hw = port->hw;
+ struct rnp_fw_req_arg arg;
+ int err;
+
+ memset(&arg, 0, sizeof(arg));
+ arg.opcode = RNP_IFUP_DOWN;
+ arg.param0 = nr_lane;
+ arg.param1 = up;
+
+ err = rnp_fw_send_norep_cmd(port, &arg);
+ /* force firmware send irq event to dpdk */
+ if (!err && up)
+ rnp_link_stat_reset(hw, nr_lane);
+ return err;
+}
diff --git a/drivers/net/rnp/base/rnp_mbx_fw.h b/drivers/net/rnp/base/rnp_mbx_fw.h
index 159a023..397d2ec 100644
--- a/drivers/net/rnp/base/rnp_mbx_fw.h
+++ b/drivers/net/rnp/base/rnp_mbx_fw.h
@@ -19,5 +19,6 @@
int rnp_rcv_msg_from_fw(struct rnp_eth_adapter *adapter, u32 *msgbuf);
int rnp_fw_mbx_ifup_down(struct rnp_eth_port *port, int up);
int rnp_mbx_fw_lane_link_event_en(struct rnp_eth_port *port, bool en);
+int rnp_mbx_fw_ifup_down(struct rnp_eth_port *port, bool up);
#endif /* _RNP_MBX_FW_H_ */
diff --git a/drivers/net/rnp/rnp_ethdev.c b/drivers/net/rnp/rnp_ethdev.c
index a3b84db..e229b2e 100644
--- a/drivers/net/rnp/rnp_ethdev.c
+++ b/drivers/net/rnp/rnp_ethdev.c
@@ -326,6 +326,7 @@ static int rnp_dev_start(struct rte_eth_dev *eth_dev)
rnp_mbx_fw_lane_link_event_en(port, lsc);
if (!lsc)
rnp_run_link_poll_task(port);
+ rnp_dev_set_link_up(eth_dev);
/* enable eth rx flow */
RNP_RX_ETH_ENABLE(hw, lane);
port->port_stopped = 0;
@@ -411,6 +412,7 @@ static int rnp_dev_stop(struct rte_eth_dev *eth_dev)
/* clear the recorded link status */
memset(&link, 0, sizeof(link));
rte_eth_linkstatus_set(eth_dev, &link);
+ rnp_dev_set_link_down(eth_dev);
rnp_disable_all_tx_queue(eth_dev);
rnp_disable_all_rx_queue(eth_dev);
rnp_mac_tx_disable(eth_dev);
@@ -638,6 +640,8 @@ static int rnp_allmulticast_disable(struct rte_eth_dev *eth_dev)
.rss_hash_conf_get = rnp_dev_rss_hash_conf_get,
/* link impl */
.link_update = rnp_dev_link_update,
+ .dev_set_link_up = rnp_dev_set_link_up,
+ .dev_set_link_down = rnp_dev_set_link_down,
};
static void
diff --git a/drivers/net/rnp/rnp_link.c b/drivers/net/rnp/rnp_link.c
index 2f94397..45f5c2d 100644
--- a/drivers/net/rnp/rnp_link.c
+++ b/drivers/net/rnp/rnp_link.c
@@ -338,3 +338,102 @@ static void rnp_dev_link_task(void *param)
{
rte_eal_alarm_cancel(rnp_dev_link_task, port->eth_dev);
}
+
+int rnp_dev_set_link_up(struct rte_eth_dev *eth_dev)
+{
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(eth_dev);
+ uint16_t nr_lane = port->attr.nr_lane;
+ struct rnp_hw *hw = port->hw;
+ struct rnp_rx_queue *rxq;
+ uint16_t timeout;
+ uint16_t index;
+ uint32_t state;
+ uint16_t idx;
+ int ret = 0;
+
+ PMD_INIT_FUNC_TRACE();
+
+ if (port->attr.link_ready)
+ return 0;
+ /* current link state is down: verify the rx dma queues are empty */
+ if (!port->attr.link_ready) {
+ for (idx = 0; idx < eth_dev->data->nb_rx_queues; idx++) {
+ rxq = eth_dev->data->rx_queues[idx];
+ if (!rxq)
+ continue;
+ index = rxq->attr.index;
+ timeout = 0;
+ do {
+ if (!RNP_E_REG_RD(hw, RNP_RXQ_READY(index)))
+ break;
+ rte_delay_us(10);
+ timeout++;
+ } while (timeout < 1000);
+ }
+ }
+ ret = rnp_mbx_fw_ifup_down(port, TRUE);
+ if (ret) {
+ RNP_PMD_WARN("port[%d] is set linkup failed\n",
+ eth_dev->data->port_id);
+ return ret;
+ }
+ timeout = 0;
+ do {
+ rte_io_rmb();
+ state = RNP_E_REG_RD(hw, RNP_FW_LINK_SYNC);
+ if (state & RTE_BIT32(nr_lane))
+ break;
+ timeout++;
+ rte_delay_us(10);
+ } while (timeout < 100);
+
+ return ret;
+}
+
+int rnp_dev_set_link_down(struct rte_eth_dev *eth_dev)
+{
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(eth_dev);
+ uint16_t nr_lane = port->attr.nr_lane;
+ struct rnp_hw *hw = port->hw;
+ struct rnp_tx_queue *txq;
+ uint32_t timeout = 0;
+ uint32_t check_v;
+ uint32_t state;
+ uint16_t idx;
+
+ PMD_INIT_FUNC_TRACE();
+ RNP_RX_ETH_DISABLE(hw, nr_lane);
+ for (idx = 0; idx < eth_dev->data->nb_tx_queues; idx++) {
+ txq = eth_dev->data->tx_queues[idx];
+ if (!txq)
+ continue;
+ txq->tx_link = false;
+ }
+ /* 2. check that the eth tx fifo is empty */
+ do {
+ state = RNP_E_REG_RD(hw, RNP_ETH_TX_FIFO_STATE);
+ check_v = RNP_ETH_TX_FIFO_EMPT(nr_lane);
+ state &= check_v;
+ if (state == check_v)
+ break;
+ rte_delay_us(10);
+ timeout++;
+ if (timeout >= 1000) {
+ RNP_PMD_WARN("lane[%d] isn't empty of link-down action",
+ nr_lane);
+ break;
+ }
+ } while (1);
+ /* 3. tell firmware to do the link-down event work */
+ rnp_mbx_fw_ifup_down(port, FALSE);
+ /* 4. wait for firmware to finish the link-down handling */
+ timeout = 0;
+ do {
+ if (!port->attr.link_ready)
+ break;
+ rte_delay_us(10);
+ timeout++;
+ } while (timeout < 2000);
+
+ return 0;
+}
diff --git a/drivers/net/rnp/rnp_link.h b/drivers/net/rnp/rnp_link.h
index f0705f1..d7e4a9b 100644
--- a/drivers/net/rnp/rnp_link.h
+++ b/drivers/net/rnp/rnp_link.h
@@ -32,5 +32,7 @@ int rnp_dev_link_update(struct rte_eth_dev *eth_dev,
int wait_to_complete);
void rnp_run_link_poll_task(struct rnp_eth_port *port);
void rnp_cancel_link_poll_task(struct rnp_eth_port *port);
+int rnp_dev_set_link_up(struct rte_eth_dev *eth_dev);
+int rnp_dev_set_link_down(struct rte_eth_dev *eth_dev);
#endif /* _RNP_LINK_H_ */
--
1.8.3.1
* [PATCH v7 14/28] net/rnp: add Rx burst simple support
2025-02-08 2:43 [PATCH v7 00/28] [v6]drivers/net Add Support mucse N10 Pmd Driver Wenbo Cao
` (12 preceding siblings ...)
2025-02-08 2:43 ` [PATCH v7 13/28] net/rnp: add support link setup operations Wenbo Cao
@ 2025-02-08 2:43 ` Wenbo Cao
2025-02-08 2:43 ` [PATCH v7 15/28] net/rnp: add Tx " Wenbo Cao
` (13 subsequent siblings)
27 siblings, 0 replies; 29+ messages in thread
From: Wenbo Cao @ 2025-02-08 2:43 UTC (permalink / raw)
To: thomas, Wenbo Cao; +Cc: stephen, dev, ferruh.yigit, andrew.rybchenko, yaojun
Add simple receive burst support (no offload handling yet).
Signed-off-by: Wenbo Cao <caowenbo@mucse.com>
---
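A minimal polling loop over the new receive path; note that rnp_recv_pkts()
floors nb_pkts to a multiple of RNP_CACHE_FETCH_RX (4), so burst sizes below
4 never return packets. Sketch only: packets are freed instead of processed.

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define RX_BURST 32

static void
rx_drain(uint16_t port_id, uint16_t queue_id)
{
	struct rte_mbuf *pkts[RX_BURST];
	uint16_t nb_rx, i;

	for (;;) {
		nb_rx = rte_eth_rx_burst(port_id, queue_id, pkts, RX_BURST);
		for (i = 0; i < nb_rx; i++)
			rte_pktmbuf_free(pkts[i]); /* a real app would process */
	}
}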
drivers/net/rnp/rnp_ethdev.c | 7 +++
drivers/net/rnp/rnp_rxtx.c | 129 +++++++++++++++++++++++++++++++++++++++++++
drivers/net/rnp/rnp_rxtx.h | 5 ++
3 files changed, 141 insertions(+)
diff --git a/drivers/net/rnp/rnp_ethdev.c b/drivers/net/rnp/rnp_ethdev.c
index e229b2e..e5f984f 100644
--- a/drivers/net/rnp/rnp_ethdev.c
+++ b/drivers/net/rnp/rnp_ethdev.c
@@ -329,6 +329,8 @@ static int rnp_dev_start(struct rte_eth_dev *eth_dev)
rnp_dev_set_link_up(eth_dev);
/* enable eth rx flow */
RNP_RX_ETH_ENABLE(hw, lane);
+ rnp_rx_func_select(eth_dev);
+ rnp_tx_func_select(eth_dev);
port->port_stopped = 0;
return 0;
@@ -568,6 +570,11 @@ static int rnp_dev_infos_get(struct rte_eth_dev *eth_dev,
dev_info->default_rxconf = (struct rte_eth_rxconf) {
.rx_drop_en = 0,
+ .rx_thresh = {
+ .pthresh = RNP_RX_DESC_FETCH_TH,
+ .hthresh = RNP_RX_DESC_FETCH_BURST,
+ },
+ .rx_free_thresh = RNP_DEFAULT_RX_FREE_THRESH,
.offloads = 0,
};
diff --git a/drivers/net/rnp/rnp_rxtx.c b/drivers/net/rnp/rnp_rxtx.c
index 2b172c8..8553fbf 100644
--- a/drivers/net/rnp/rnp_rxtx.c
+++ b/drivers/net/rnp/rnp_rxtx.c
@@ -641,3 +641,132 @@ int rnp_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx)
return 0;
}
+
+#define RNP_CACHE_FETCH_RX (4)
+static __rte_always_inline int
+rnp_refill_rx_ring(struct rnp_rx_queue *rxq)
+{
+ volatile struct rnp_rx_desc *rxbd;
+ struct rnp_rxsw_entry *rx_swbd;
+ struct rte_mbuf *mb;
+ uint16_t j, i;
+ uint16_t rx_id;
+ int ret;
+
+ rxbd = rxq->rx_bdr + rxq->rxrearm_start;
+ rx_swbd = &rxq->sw_ring[rxq->rxrearm_start];
+ ret = rte_mempool_get_bulk(rxq->mb_pool, (void *)rx_swbd,
+ rxq->rx_free_thresh);
+ if (unlikely(ret != 0)) {
+ if (rxq->rxrearm_nb + rxq->rx_free_thresh >= rxq->attr.nb_desc) {
+ for (i = 0; i < RNP_CACHE_FETCH_RX; i++) {
+ rx_swbd[i].mbuf = &rxq->fake_mbuf;
+ rxbd[i].d.pkt_addr = 0;
+ rxbd[i].d.cmd = 0;
+ }
+ }
+ rte_eth_devices[rxq->attr.port_id].data->rx_mbuf_alloc_failed +=
+ rxq->rx_free_thresh;
+ return 0;
+ }
+ for (j = 0; j < rxq->rx_free_thresh; ++j) {
+ mb = rx_swbd[j].mbuf;
+ rte_mbuf_refcnt_set(mb, 1);
+ mb->data_off = RTE_PKTMBUF_HEADROOM;
+ mb->port = rxq->attr.port_id;
+
+ rxbd[j].d.pkt_addr = rnp_get_dma_addr(&rxq->attr, mb);
+ rxbd[j].d.cmd = 0;
+ }
+ rxq->rxrearm_start += rxq->rx_free_thresh;
+ if (rxq->rxrearm_start >= rxq->attr.nb_desc - 1)
+ rxq->rxrearm_start = 0;
+ rxq->rxrearm_nb -= rxq->rx_free_thresh;
+
+ rx_id = (uint16_t)((rxq->rxrearm_start == 0) ?
+ (rxq->attr.nb_desc - 1) : (rxq->rxrearm_start - 1));
+ rte_wmb();
+ RNP_REG_WR(rxq->rx_tailreg, 0, rx_id);
+
+ return j;
+}
+
+static __rte_always_inline uint16_t
+rnp_recv_pkts(void *_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+ struct rnp_rx_queue *rxq = (struct rnp_rx_queue *)_rxq;
+ struct rnp_rxsw_entry *rx_swbd;
+ uint32_t state_cmd[RNP_CACHE_FETCH_RX];
+ uint32_t pkt_len[RNP_CACHE_FETCH_RX] = {0};
+ volatile struct rnp_rx_desc *rxbd;
+ struct rte_mbuf *nmb;
+ int nb_dd, nb_rx = 0;
+ int i, j;
+
+ if (unlikely(!rxq->rxq_started || !rxq->rx_link))
+ return 0;
+ nb_pkts = RTE_ALIGN_FLOOR(nb_pkts, RNP_CACHE_FETCH_RX);
+ rxbd = &rxq->rx_bdr[rxq->rx_tail];
+ rte_prefetch0(rxbd);
+ if (rxq->rxrearm_nb > rxq->rx_free_thresh)
+ rnp_refill_rx_ring(rxq);
+
+ if (!(rxbd->wb.qword1.cmd & RNP_CMD_DD))
+ return 0;
+
+ rx_swbd = &rxq->sw_ring[rxq->rx_tail];
+ for (i = 0; i < nb_pkts;
+ i += RNP_CACHE_FETCH_RX, rxbd += RNP_CACHE_FETCH_RX,
+ rx_swbd += RNP_CACHE_FETCH_RX) {
+ for (j = 0; j < RNP_CACHE_FETCH_RX; j++)
+ state_cmd[j] = rxbd[j].wb.qword1.cmd;
+ rte_atomic_thread_fence(rte_memory_order_acquire);
+
+ for (nb_dd = 0; nb_dd < RNP_CACHE_FETCH_RX &&
+ (state_cmd[nb_dd] & rte_cpu_to_le_16(RNP_CMD_DD));
+ nb_dd++)
+ ;
+ for (j = 0; j < nb_dd; j++)
+ pkt_len[j] = rxbd[j].wb.qword1.lens;
+
+ for (j = 0; j < nb_dd; ++j) {
+ nmb = rx_swbd[j].mbuf;
+
+ nmb->data_off = RTE_PKTMBUF_HEADROOM;
+ nmb->port = rxq->attr.port_id;
+ nmb->data_len = pkt_len[j];
+ nmb->pkt_len = pkt_len[j];
+ nmb->packet_type = 0;
+ nmb->ol_flags = 0;
+ nmb->nb_segs = 1;
+ }
+ for (j = 0; j < nb_dd; ++j) {
+ rx_pkts[i + j] = rx_swbd[j].mbuf;
+ rx_swbd[j].mbuf = NULL;
+ }
+
+ nb_rx += nb_dd;
+ rxq->nb_rx_free -= nb_dd;
+ if (nb_dd != RNP_CACHE_FETCH_RX)
+ break;
+ }
+ rxq->rx_tail = (rxq->rx_tail + nb_rx) & rxq->attr.nb_desc_mask;
+ rxq->rxrearm_nb = rxq->rxrearm_nb + nb_rx;
+
+ return nb_rx;
+}
+
+int rnp_rx_func_select(struct rte_eth_dev *dev)
+{
+ dev->rx_pkt_burst = rnp_recv_pkts;
+
+ return 0;
+}
+
+int rnp_tx_func_select(struct rte_eth_dev *dev)
+{
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->tx_pkt_prepare = rte_eth_pkt_burst_dummy;
+
+ return 0;
+}
diff --git a/drivers/net/rnp/rnp_rxtx.h b/drivers/net/rnp/rnp_rxtx.h
index 94e1f06..39e5184 100644
--- a/drivers/net/rnp/rnp_rxtx.h
+++ b/drivers/net/rnp/rnp_rxtx.h
@@ -63,6 +63,9 @@ struct rnp_rx_queue {
uint16_t rx_free_thresh; /* rx free desc resource thresh */
uint16_t rx_tail;
+ uint16_t rxrearm_start;
+ uint16_t rxrearm_nb;
+
uint32_t nodesc_tm_thresh; /* rx queue no desc timeout thresh */
uint8_t rx_deferred_start; /* do not start queue with dev_start(). */
uint8_t rxq_started; /* rx queue is started */
@@ -128,5 +131,7 @@ int rnp_tx_queue_setup(struct rte_eth_dev *dev,
const struct rte_eth_txconf *tx_conf);
int rnp_rx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx);
int rnp_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx);
+int rnp_rx_func_select(struct rte_eth_dev *dev);
+int rnp_tx_func_select(struct rte_eth_dev *dev);
#endif /* _RNP_RXTX_H_ */
--
1.8.3.1
^ permalink raw reply [flat|nested] 29+ messages in thread
* [PATCH v7 15/28] net/rnp: add Tx burst simple support
2025-02-08 2:43 [PATCH v7 00/28] [v6]drivers/net Add Support mucse N10 Pmd Driver Wenbo Cao
` (13 preceding siblings ...)
2025-02-08 2:43 ` [PATCH v7 14/28] net/rnp: add Rx burst simple support Wenbo Cao
@ 2025-02-08 2:43 ` Wenbo Cao
2025-02-08 2:43 ` [PATCH v7 16/28] net/rnp: add MTU set operation Wenbo Cao
` (12 subsequent siblings)
27 siblings, 0 replies; 29+ messages in thread
From: Wenbo Cao @ 2025-02-08 2:43 UTC (permalink / raw)
To: thomas, Wenbo Cao; +Cc: stephen, dev, ferruh.yigit, andrew.rybchenko, yaojun
Add initial support for the simple Tx burst path, sending single-segment packets only.
Signed-off-by: Wenbo Cao <caowenbo@mucse.com>
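The simple Tx path sets the RS (report status) bit only once every tx_rs_thresh
descriptors, so the hardware writes back completion status per batch rather than
per packet. A minimal standalone sketch of that bookkeeping follows; the names
are hypothetical and ring wrap of the RS pointer is kept deliberately simple:

    #include <stdint.h>

    #define CMD_EOP (1u << 0)	/* hypothetical end-of-packet flag */
    #define CMD_RS  (1u << 1)	/* hypothetical report-status flag */

    struct desc { uint64_t addr; uint32_t len; uint32_t cmd; };

    /* Queue 'n' single-segment packets on a power-of-two ring and
     * request a completion write-back only when the RS threshold
     * boundary is crossed. */
    static uint16_t tx_fill(struct desc *ring, uint16_t mask,
                            uint16_t *tail, uint16_t *next_rs,
                            uint16_t rs_thresh, uint16_t n)
    {
        uint16_t i = *tail, sent;

        for (sent = 0; sent < n; sent++) {
            ring[i].cmd = CMD_EOP;	/* one descriptor per packet */
            i = (i + 1) & mask;
        }
        if ((uint16_t)(*tail + n) > *next_rs) {
            ring[*next_rs & mask].cmd |= CMD_RS; /* one write-back per batch */
            *next_rs += rs_thresh;
        }
        *tail = i;
        return sent;	/* caller then writes the new tail to hardware */
    }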
---
drivers/net/rnp/rnp_ethdev.c | 6 ++++
drivers/net/rnp/rnp_rxtx.c | 85 +++++++++++++++++++++++++++++++++++++++++++-
drivers/net/rnp/rnp_rxtx.h | 1 +
3 files changed, 91 insertions(+), 1 deletion(-)
diff --git a/drivers/net/rnp/rnp_ethdev.c b/drivers/net/rnp/rnp_ethdev.c
index e5f984f..11cf2eb 100644
--- a/drivers/net/rnp/rnp_ethdev.c
+++ b/drivers/net/rnp/rnp_ethdev.c
@@ -579,6 +579,12 @@ static int rnp_dev_infos_get(struct rte_eth_dev *eth_dev,
};
dev_info->default_txconf = (struct rte_eth_txconf) {
+ .tx_thresh = {
+ .pthresh = RNP_TX_DESC_FETCH_TH,
+ .hthresh = RNP_TX_DESC_FETCH_BURST,
+ },
+ .tx_free_thresh = RNP_DEFAULT_TX_FREE_THRESH,
+ .tx_rs_thresh = RNP_DEFAULT_TX_RS_THRESH,
.offloads = 0,
};
diff --git a/drivers/net/rnp/rnp_rxtx.c b/drivers/net/rnp/rnp_rxtx.c
index 8553fbf..e8c1444 100644
--- a/drivers/net/rnp/rnp_rxtx.c
+++ b/drivers/net/rnp/rnp_rxtx.c
@@ -756,6 +756,89 @@ int rnp_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx)
return nb_rx;
}
+static __rte_always_inline int
+rnp_clean_tx_ring(struct rnp_tx_queue *txq)
+{
+ volatile struct rnp_tx_desc *txbd;
+ struct rnp_txsw_entry *tx_swbd;
+ struct rte_mbuf *m;
+ uint16_t next_dd;
+ uint16_t i;
+
+ txbd = &txq->tx_bdr[txq->tx_next_dd];
+ if (!(txbd->d.cmd & RNP_CMD_DD))
+ return 0;
+ *txbd = txq->zero_desc;
+ next_dd = txq->tx_next_dd - (txq->tx_free_thresh - 1);
+ tx_swbd = &txq->sw_ring[next_dd];
+
+ for (i = 0; i < txq->tx_rs_thresh; ++i, ++tx_swbd) {
+ if (tx_swbd->mbuf) {
+ m = tx_swbd->mbuf;
+ rte_pktmbuf_free_seg(m);
+ tx_swbd->mbuf = NULL;
+ }
+ }
+ txq->nb_tx_free = (txq->nb_tx_free + txq->tx_rs_thresh);
+ txq->tx_next_dd = (txq->tx_next_dd + txq->tx_rs_thresh) &
+ txq->attr.nb_desc_mask;
+
+ return 0;
+}
+
+static __rte_always_inline uint16_t
+rnp_xmit_simple(void *_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+ struct rnp_tx_queue *txq = (struct rnp_tx_queue *)_txq;
+ volatile struct rnp_tx_desc *txbd;
+ struct rnp_txsw_entry *tx_swbd;
+ uint64_t phy;
+ uint16_t start;
+ uint16_t i;
+
+ if (unlikely(!txq->txq_started || !txq->tx_link))
+ return 0;
+
+ if (txq->nb_tx_free < txq->tx_free_thresh)
+ rnp_clean_tx_ring(txq);
+
+ nb_pkts = RTE_MIN(txq->nb_tx_free, nb_pkts);
+ if (!nb_pkts)
+ return 0;
+ start = nb_pkts;
+ i = txq->tx_tail;
+
+ while (nb_pkts--) {
+ txbd = &txq->tx_bdr[i];
+ tx_swbd = &txq->sw_ring[i];
+ tx_swbd->mbuf = *tx_pkts++;
+ phy = rnp_get_dma_addr(&txq->attr, tx_swbd->mbuf);
+ txbd->d.addr = phy;
+ if (unlikely(tx_swbd->mbuf->data_len > RNP_MAC_MAXFRM_SIZE))
+ tx_swbd->mbuf->data_len = 0;
+ txbd->d.blen = tx_swbd->mbuf->data_len;
+ txbd->d.cmd = RNP_CMD_EOP;
+
+ i = (i + 1) & txq->attr.nb_desc_mask;
+ }
+ txq->nb_tx_free -= start;
+ if (txq->tx_tail + start > txq->tx_next_rs) {
+ txbd = &txq->tx_bdr[txq->tx_next_rs];
+ txbd->d.cmd |= RNP_CMD_RS;
+ txq->tx_next_rs = (txq->tx_next_rs + txq->tx_rs_thresh);
+
+ if (txq->tx_next_rs > txq->attr.nb_desc)
+ txq->tx_next_rs = txq->tx_rs_thresh - 1;
+ }
+
+ txq->tx_tail = i;
+
+ rte_wmb();
+ RNP_REG_WR(txq->tx_tailreg, 0, i);
+
+ return start;
+}
+
int rnp_rx_func_select(struct rte_eth_dev *dev)
{
dev->rx_pkt_burst = rnp_recv_pkts;
@@ -765,7 +848,7 @@ int rnp_rx_func_select(struct rte_eth_dev *dev)
int rnp_tx_func_select(struct rte_eth_dev *dev)
{
- dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->tx_pkt_burst = rnp_xmit_simple;
dev->tx_pkt_prepare = rte_eth_pkt_burst_dummy;
return 0;
diff --git a/drivers/net/rnp/rnp_rxtx.h b/drivers/net/rnp/rnp_rxtx.h
index 39e5184..a8fd8d0 100644
--- a/drivers/net/rnp/rnp_rxtx.h
+++ b/drivers/net/rnp/rnp_rxtx.h
@@ -89,6 +89,7 @@ struct rnp_tx_queue {
const struct rte_memzone *rz;
uint64_t ring_phys_addr; /* tx dma ring physical addr */
volatile struct rnp_tx_desc *tx_bdr; /* tx dma ring virtual addr */
+ volatile struct rnp_tx_desc zero_desc;
struct rnp_txsw_entry *sw_ring; /* tx software ring addr */
volatile void *tx_tailreg; /* hw desc tail register */
volatile void *tx_headreg; /* hw desc head register*/
--
1.8.3.1
^ permalink raw reply [flat|nested] 29+ messages in thread
* [PATCH v7 16/28] net/rnp: add MTU set operation
2025-02-08 2:43 [PATCH v7 00/28] [v6]drivers/net Add Support mucse N10 Pmd Driver Wenbo Cao
` (14 preceding siblings ...)
2025-02-08 2:43 ` [PATCH v7 15/28] net/rnp: add Tx " Wenbo Cao
@ 2025-02-08 2:43 ` Wenbo Cao
2025-02-08 2:43 ` [PATCH v7 17/28] net/rnp: add Rx scatter segment version Wenbo Cao
` (11 subsequent siblings)
27 siblings, 0 replies; 29+ messages in thread
From: Wenbo Cao @ 2025-02-08 2:43 UTC (permalink / raw)
To: thomas, Wenbo Cao; +Cc: stephen, dev, ferruh.yigit, andrew.rybchenko, yaojun
Add the MTU set operation, with a limit for multiple-port mode:
when several ports share one PF, the largest MTU among those ports
is used as the receive length limit.
Signed-off-by: Wenbo Cao <caowenbo@mucse.com>
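Because all ports behind one PF share a single frame-length control, the value
actually programmed is the maximum MTU requested across those ports. A minimal
standalone sketch of that selection, using stand-in types instead of the
driver's port structures:

    #include <stdint.h>
    #include <stddef.h>

    struct port { uint16_t cur_mtu; };

    /* All ports behind one PF share a single frame-length control,
     * so the value actually programmed is the maximum requested MTU
     * across those ports. */
    static uint16_t effective_mtu(struct port *ports[], int nb_ports,
                                  int target, uint16_t new_mtu)
    {
        uint16_t mtu = new_mtu;
        int i;

        ports[target]->cur_mtu = new_mtu;
        for (i = 0; i < nb_ports; i++) {
            if (ports[i] != NULL && ports[i]->cur_mtu > mtu)
                mtu = ports[i]->cur_mtu;
        }
        return mtu;	/* program this value into the shared control */
    }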
---
doc/guides/nics/features/rnp.ini | 1 +
doc/guides/nics/rnp.rst | 1 +
drivers/net/rnp/base/rnp_eth_regs.h | 3 +
drivers/net/rnp/rnp.h | 3 +
drivers/net/rnp/rnp_ethdev.c | 144 +++++++++++++++++++++++++++++++++++-
5 files changed, 151 insertions(+), 1 deletion(-)
diff --git a/doc/guides/nics/features/rnp.ini b/doc/guides/nics/features/rnp.ini
index 695b9c0..6d13370 100644
--- a/doc/guides/nics/features/rnp.ini
+++ b/doc/guides/nics/features/rnp.ini
@@ -10,6 +10,7 @@ Link status event = Y
Queue start/stop = Y
Promiscuous mode = Y
Allmulticast mode = Y
+MTU update = Y
RSS hash = Y
RSS key update = Y
RSS reta update = Y
diff --git a/doc/guides/nics/rnp.rst b/doc/guides/nics/rnp.rst
index 82dd2d8..9fa7ad9 100644
--- a/doc/guides/nics/rnp.rst
+++ b/doc/guides/nics/rnp.rst
@@ -16,6 +16,7 @@ Features
Inner RSS is only support for vxlan/nvgre
- Promiscuous mode
- Link state information
+- MTU update
Prerequisites
-------------
diff --git a/drivers/net/rnp/base/rnp_eth_regs.h b/drivers/net/rnp/base/rnp_eth_regs.h
index c74886e..91a18dd 100644
--- a/drivers/net/rnp/base/rnp_eth_regs.h
+++ b/drivers/net/rnp/base/rnp_eth_regs.h
@@ -16,6 +16,9 @@
#define RNP_RX_ETH_F_CTRL(n) _ETH_(0x8070 + ((n) * 0x8))
#define RNP_RX_ETH_F_OFF (0x7ff)
#define RNP_RX_ETH_F_ON (0x270)
+/* max/min pkts length receive limit ctrl */
+#define RNP_MIN_FRAME_CTRL _ETH_(0x80f0)
+#define RNP_MAX_FRAME_CTRL _ETH_(0x80f4)
/* rx queue flow ctrl */
#define RNP_RX_FC_ENABLE _ETH_(0x8520)
#define RNP_RING_FC_EN(n) _ETH_(0x8524 + ((0x4) * ((n) / 32)))
diff --git a/drivers/net/rnp/rnp.h b/drivers/net/rnp/rnp.h
index 97222f3..054382e 100644
--- a/drivers/net/rnp/rnp.h
+++ b/drivers/net/rnp/rnp.h
@@ -120,6 +120,9 @@ struct rnp_eth_port {
bool hw_rss_en;
uint32_t indirtbl[RNP_RSS_INDIR_SIZE];
+ uint16_t cur_mtu;
+ bool jumbo_en;
+
rte_spinlock_t rx_mac_lock;
bool port_stopped;
};
diff --git a/drivers/net/rnp/rnp_ethdev.c b/drivers/net/rnp/rnp_ethdev.c
index 11cf2eb..0fcb256 100644
--- a/drivers/net/rnp/rnp_ethdev.c
+++ b/drivers/net/rnp/rnp_ethdev.c
@@ -20,6 +20,7 @@
#include "rnp_rss.h"
#include "rnp_link.h"
+static int rnp_mtu_set(struct rte_eth_dev *dev, uint16_t mtu);
static struct rte_eth_dev *
rnp_alloc_eth_port(struct rte_pci_device *pci, char *name)
{
@@ -140,6 +141,13 @@ static void rnp_mac_rx_enable(struct rte_eth_dev *dev)
mac_cfg = RNP_MAC_REG_RD(hw, lane, RNP_MAC_RX_CFG);
mac_cfg |= RNP_MAC_RE;
+ if (port->jumbo_en) {
+ mac_cfg |= RNP_MAC_JE;
+ mac_cfg |= RNP_MAC_GPSLCE | RNP_MAC_WD;
+ } else {
+ mac_cfg &= ~RNP_MAC_JE;
+ mac_cfg &= ~RNP_MAC_WD;
+ }
mac_cfg &= ~RNP_MAC_GPSL_MASK;
mac_cfg |= (RNP_MAC_MAX_GPSL << RNP_MAC_CPSL_SHIFT);
RNP_MAC_REG_WR(hw, lane, RNP_MAC_RX_CFG, mac_cfg);
@@ -209,6 +217,7 @@ static void rnp_mac_init(struct rte_eth_dev *dev)
{
uint16_t max_pkt_size =
dev->data->dev_conf.rxmode.mtu + RNP_ETH_OVERHEAD;
+ struct rte_eth_conf *dev_conf = &dev->data->dev_conf;
struct rnp_eth_port *port = RNP_DEV_TO_PORT(dev);
struct rnp_hw *hw = port->hw;
struct rnp_rx_queue *rxq;
@@ -234,6 +243,12 @@ static void rnp_mac_init(struct rte_eth_dev *dev)
return -ENOTSUP;
}
dma_buf_size = hw->min_dma_size;
+ if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER ||
+ max_pkt_size > dma_buf_size ||
+ dev->data->mtu + RNP_ETH_OVERHEAD > dma_buf_size)
+ dev->data->scattered_rx = 1;
+ else
+ dev->data->scattered_rx = 0;
/* Setup max dma scatter engine split size */
dma_ctrl = RNP_E_REG_RD(hw, RNP_DMA_CTRL);
if (max_pkt_size == dma_buf_size)
@@ -294,6 +309,7 @@ static int rnp_dev_start(struct rte_eth_dev *eth_dev)
{
struct rnp_eth_port *port = RNP_DEV_TO_PORT(eth_dev);
struct rte_eth_dev_data *data = eth_dev->data;
+ uint16_t max_rx_pkt_len = eth_dev->data->mtu;
bool lsc = data->dev_conf.intr_conf.lsc;
struct rnp_hw *hw = port->hw;
uint16_t lane = 0;
@@ -316,6 +332,9 @@ static int rnp_dev_start(struct rte_eth_dev *eth_dev)
ret = rnp_rx_scattered_setup(eth_dev);
if (ret)
return ret;
+ ret = rnp_mtu_set(eth_dev, max_rx_pkt_len);
+ if (ret)
+ return ret;
ret = rnp_enable_all_tx_queue(eth_dev);
if (ret)
goto txq_start_failed;
@@ -628,6 +647,129 @@ static int rnp_allmulticast_disable(struct rte_eth_dev *eth_dev)
return rnp_update_mpfm(port, RNP_MPF_MODE_ALLMULTI, 0);
}
+static bool
+rnp_verify_pf_scatter(struct rnp_eth_adapter *adapter)
+{
+ struct rnp_hw *hw = &adapter->hw;
+ struct rte_eth_dev *eth_dev;
+ uint8_t i = 0;
+
+ for (i = 0; i < hw->max_port_num; i++) {
+ eth_dev = adapter->ports[i]->eth_dev;
+ /* A sub port of the PF that is not started has no
+ * scattered_rx attribute set up yet, so skip
+ * checking that sub port.
+ */
+ if (!eth_dev || !eth_dev->data->dev_started)
+ continue;
+ if (!eth_dev->data->scattered_rx)
+ return false;
+ }
+
+ return true;
+}
+
+static int
+rnp_update_valid_mtu(struct rnp_eth_port *port, uint16_t *set_mtu)
+{
+ struct rnp_eth_adapter *adapter = port->hw->back;
+ struct rnp_eth_port *sub_port = NULL;
+ struct rnp_hw *hw = port->hw;
+ uint16_t origin_mtu = 0;
+ uint16_t mtu = 0;
+ uint8_t i = 0;
+
+ if (hw->max_port_num == 1) {
+ port->cur_mtu = *set_mtu;
+
+ return 0;
+ }
+ origin_mtu = port->cur_mtu;
+ port->cur_mtu = *set_mtu;
+ mtu = *set_mtu;
+ for (i = 0; i < hw->max_port_num; i++) {
+ sub_port = adapter->ports[i];
+ if (sub_port == NULL)
+ continue;
+ mtu = RTE_MAX(mtu, sub_port->cur_mtu);
+ }
+ if (hw->max_port_num > 1 &&
+ mtu + RNP_ETH_OVERHEAD > hw->min_dma_size) {
+ if (!rnp_verify_pf_scatter(adapter)) {
+ RNP_PMD_ERR("single pf multiple port max_frame_sz "
+ "is bigger than min_dma_size please "
+ "stop all pf port before set mtu.");
+ port->cur_mtu = origin_mtu;
+ return -EINVAL;
+ }
+ }
+ *set_mtu = mtu;
+
+ return 0;
+}
+
+static int
+rnp_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+{
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(dev);
+ uint32_t frame_size = mtu + RNP_ETH_OVERHEAD;
+ uint16_t lane = port->attr.nr_lane;
+ struct rnp_hw *hw = port->hw;
+ bool jumbo_en = false;
+ uint32_t reg;
+ int ret = 0;
+
+ PMD_INIT_FUNC_TRACE();
+ /* check that mtu is within the allowed range */
+ if (frame_size < RTE_ETHER_MIN_LEN ||
+ frame_size > RNP_MAC_MAXFRM_SIZE)
+ return -EINVAL;
+ /*
+ * Refuse mtu that requires the support of scattered packets
+ * when this feature has not been enabled before.
+ */
+ if (dev->data->dev_started && !dev->data->scattered_rx &&
+ frame_size > dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM) {
+ RNP_PMD_ERR("port %d mtu update must be stopped "
+ "before configuration when scatter rx off.",
+ dev->data->port_id);
+
+ return -EBUSY;
+ }
+ if (frame_size < RTE_ETHER_MIN_LEN) {
+ RNP_PMD_ERR("valid packet length must be "
+ "range from %u to %u, "
+ "when Jumbo Frame Feature disabled",
+ (uint32_t)RTE_ETHER_MIN_LEN,
+ (uint32_t)RTE_ETHER_MAX_LEN);
+ return -EINVAL;
+ }
+ /* For one PF with multiple ports, we must set
+ * the biggest MTU among the ports belonging to the PF,
+ * because there is only one hardware control for it.
+ */
+ ret = rnp_update_valid_mtu(port, &mtu);
+ if (ret < 0)
+ return ret;
+ frame_size = mtu + RNP_ETH_OVERHEAD;
+ if (frame_size > RTE_ETHER_MAX_LEN)
+ jumbo_en = true;
+ /* setting the MTU */
+ RNP_E_REG_WR(hw, RNP_MAX_FRAME_CTRL, frame_size);
+ RNP_E_REG_WR(hw, RNP_MIN_FRAME_CTRL, 60);
+ if (jumbo_en) {
+ /* To protect conflict hw resource */
+ rte_spinlock_lock(&port->rx_mac_lock);
+ reg = RNP_MAC_REG_RD(hw, lane, RNP_MAC_RX_CFG);
+ reg |= RNP_MAC_JE;
+ RNP_MAC_REG_WR(hw, lane, RNP_MAC_RX_CFG, reg);
+ rte_spinlock_unlock(&port->rx_mac_lock);
+ }
+ port->jumbo_en = jumbo_en;
+
+ return 0;
+}
+
/* Features supported by this driver */
static const struct eth_dev_ops rnp_eth_dev_ops = {
.dev_configure = rnp_dev_configure,
@@ -641,7 +783,7 @@ static int rnp_allmulticast_disable(struct rte_eth_dev *eth_dev)
.promiscuous_disable = rnp_promiscuous_disable,
.allmulticast_enable = rnp_allmulticast_enable,
.allmulticast_disable = rnp_allmulticast_disable,
-
+ .mtu_set = rnp_mtu_set,
.rx_queue_setup = rnp_rx_queue_setup,
.rx_queue_release = rnp_dev_rx_queue_release,
.tx_queue_setup = rnp_tx_queue_setup,
--
1.8.3.1
^ permalink raw reply [flat|nested] 29+ messages in thread
* [PATCH v7 17/28] net/rnp: add Rx scatter segment version
2025-02-08 2:43 [PATCH v7 00/28] [v6]drivers/net Add Support mucse N10 Pmd Driver Wenbo Cao
` (15 preceding siblings ...)
2025-02-08 2:43 ` [PATCH v7 16/28] net/rnp: add MTU set operation Wenbo Cao
@ 2025-02-08 2:43 ` Wenbo Cao
2025-02-08 2:43 ` [PATCH v7 18/28] net/rnp: add Tx multiple " Wenbo Cao
` (10 subsequent siblings)
27 siblings, 0 replies; 29+ messages in thread
From: Wenbo Cao @ 2025-02-08 2:43 UTC (permalink / raw)
To: thomas, Wenbo Cao; +Cc: stephen, dev, ferruh.yigit, andrew.rybchenko, yaojun
Add support for receiving scattered (multi-segment) packets.
Signed-off-by: Wenbo Cao <caowenbo@mucse.com>
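Scattered receive chains one buffer per descriptor into a single mbuf chain
until a descriptor carries the EOP flag, keeping the in-progress first/last
segment pointers across bursts. A minimal standalone sketch of that state
machine, with a simplified segment type standing in for rte_mbuf:

    #include <stddef.h>
    #include <stdint.h>

    struct seg {
        struct seg *next;
        uint16_t data_len;
        uint32_t pkt_len;	/* valid in the first segment only */
        uint16_t nb_segs;	/* valid in the first segment only */
    };

    /* Attach one received buffer to the packet under assembly; the
     * first/last pointers persist across bursts, exactly so a packet
     * may span two calls. Returns the finished chain on EOP, else NULL. */
    static struct seg *chain_rx_seg(struct seg **first, struct seg **last,
                                    struct seg *cur, int eop)
    {
        struct seg *done;

        if (*first == NULL) {
            *first = cur;
            cur->pkt_len = cur->data_len;
            cur->nb_segs = 1;
        } else {
            (*first)->pkt_len += cur->data_len;
            (*first)->nb_segs++;
            (*last)->next = cur;
        }
        if (!eop) {		/* more segments follow */
            *last = cur;
            return NULL;
        }
        cur->next = NULL;
        done = *first;
        *first = NULL;
        return done;
    }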
---
doc/guides/nics/features/rnp.ini | 2 +
doc/guides/nics/rnp.rst | 2 +
drivers/net/rnp/rnp_rxtx.c | 131 ++++++++++++++++++++++++++++++++++++++-
drivers/net/rnp/rnp_rxtx.h | 2 +
4 files changed, 135 insertions(+), 2 deletions(-)
diff --git a/doc/guides/nics/features/rnp.ini b/doc/guides/nics/features/rnp.ini
index 6d13370..c68d6fb 100644
--- a/doc/guides/nics/features/rnp.ini
+++ b/doc/guides/nics/features/rnp.ini
@@ -15,5 +15,7 @@ RSS hash = Y
RSS key update = Y
RSS reta update = Y
Inner RSS = Y
+Jumbo frame = Y
+Scattered Rx = Y
Linux = Y
x86-64 = Y
diff --git a/doc/guides/nics/rnp.rst b/doc/guides/nics/rnp.rst
index 9fa7ad9..db64104 100644
--- a/doc/guides/nics/rnp.rst
+++ b/doc/guides/nics/rnp.rst
@@ -17,6 +17,8 @@ Features
- Promiscuous mode
- Link state information
- MTU update
+- Jumbo frames
+- Scatter-Gather IO support
Prerequisites
-------------
diff --git a/drivers/net/rnp/rnp_rxtx.c b/drivers/net/rnp/rnp_rxtx.c
index e8c1444..c80cc8b 100644
--- a/drivers/net/rnp/rnp_rxtx.c
+++ b/drivers/net/rnp/rnp_rxtx.c
@@ -830,7 +830,6 @@ int rnp_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx)
if (txq->tx_next_rs > txq->attr.nb_desc)
txq->tx_next_rs = txq->tx_rs_thresh - 1;
}
-
txq->tx_tail = i;
rte_wmb();
@@ -839,9 +838,137 @@ int rnp_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx)
return start;
}
+static int
+rnp_rxq_bulk_alloc(struct rnp_rx_queue *rxq,
+ volatile struct rnp_rx_desc *rxbd,
+ struct rnp_rxsw_entry *rxe,
+ bool bulk_alloc)
+{
+ struct rte_mbuf *nmb = NULL;
+ uint16_t update_tail;
+
+ if (!bulk_alloc) {
+ nmb = rte_mbuf_raw_alloc(rxq->mb_pool);
+ if (unlikely(!nmb)) {
+ rte_eth_devices[rxq->attr.port_id].data->
+ rx_mbuf_alloc_failed++;
+ return -ENOMEM;
+ }
+ rxbd->d.pkt_addr = 0;
+ rxbd->d.cmd = 0;
+ rxe->mbuf = nmb;
+ rxbd->d.pkt_addr = rnp_get_dma_addr(&rxq->attr, nmb);
+ }
+ if (rxq->rxrearm_nb > rxq->rx_free_thresh) {
+ rxq->rxrearm_nb -= rxq->rx_free_thresh;
+ rxq->rxrearm_start += rxq->rx_free_thresh;
+ if (rxq->rxrearm_start >= rxq->attr.nb_desc)
+ rxq->rxrearm_start = 0;
+ update_tail = (uint16_t)((rxq->rxrearm_start == 0) ?
+ (rxq->attr.nb_desc - 1) : (rxq->rxrearm_start - 1));
+ rte_io_wmb();
+ RNP_REG_WR(rxq->rx_tailreg, 0, update_tail);
+ }
+
+ return 0;
+}
+
+static __rte_always_inline uint16_t
+rnp_scattered_rx(void *rx_queue, struct rte_mbuf **rx_pkts,
+ uint16_t nb_pkts)
+{
+ struct rnp_rx_queue *rxq = (struct rnp_rx_queue *)rx_queue;
+ volatile struct rnp_rx_desc *bd_ring = rxq->rx_bdr;
+ struct rte_mbuf *first_seg = rxq->pkt_first_seg;
+ struct rte_mbuf *last_seg = rxq->pkt_last_seg;
+ struct rnp_rxsw_entry *sw_ring = rxq->sw_ring;
+ volatile struct rnp_rx_desc *rxbd;
+ volatile struct rnp_rx_desc rxd;
+ struct rnp_rxsw_entry *rxe;
+ struct rte_mbuf *rxm;
+ uint16_t rx_pkt_len;
+ uint16_t nb_rx = 0;
+ uint16_t rx_status;
+ uint16_t rx_id;
+
+ if (unlikely(!rxq->rxq_started || !rxq->rx_link))
+ return 0;
+ rx_id = rxq->rx_tail;
+ while (nb_rx < nb_pkts) {
+ rxbd = &bd_ring[rx_id];
+ rx_status = rxbd->wb.qword1.cmd;
+ if (!(rx_status & rte_cpu_to_le_16(RNP_CMD_DD)))
+ break;
+ rte_atomic_thread_fence(rte_memory_order_acquire);
+ rxd = *rxbd;
+ rxe = &sw_ring[rx_id];
+ rxm = rxe->mbuf;
+ if (rnp_rxq_bulk_alloc(rxq, rxbd, rxe, false))
+ break;
+ rx_id = (rx_id + 1) & rxq->attr.nb_desc_mask;
+ rte_prefetch0(sw_ring[rx_id].mbuf);
+ if ((rx_id & 0x3) == 0) {
+ rte_prefetch0(&bd_ring[rx_id]);
+ rte_prefetch0(&sw_ring[rx_id]);
+ }
+ rx_pkt_len = rxd.wb.qword1.lens;
+ rxm->data_len = rx_pkt_len;
+ rxm->data_off = RTE_PKTMBUF_HEADROOM;
+ if (!first_seg) {
+ /* first segment pkt */
+ first_seg = rxm;
+ first_seg->nb_segs = 1;
+ first_seg->pkt_len = rx_pkt_len;
+ } else {
+ /* follow-up segment pkt */
+ first_seg->pkt_len =
+ (uint16_t)(first_seg->pkt_len + rx_pkt_len);
+ first_seg->nb_segs++;
+ last_seg->next = rxm;
+ }
+ rxq->rxrearm_nb++;
+ if (!(rx_status & rte_cpu_to_le_16(RNP_CMD_EOP))) {
+ last_seg = rxm;
+ continue;
+ }
+ rxm->next = NULL;
+ first_seg->port = rxq->attr.port_id;
+ /* end of packet: the whole scattered packet has been received */
+ rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr,
+ first_seg->data_off));
+ rx_pkts[nb_rx++] = first_seg;
+ first_seg = NULL;
+ }
+ if (!nb_rx)
+ return 0;
+ /* update sw record point */
+ rxq->rx_tail = rx_id;
+ rxq->pkt_first_seg = first_seg;
+ rxq->pkt_last_seg = last_seg;
+
+ return nb_rx;
+}
+
+static int
+rnp_check_rx_simple_valid(struct rte_eth_dev *dev)
+{
+ uint64_t rx_offloads = dev->data->dev_conf.rxmode.offloads;
+
+ if (dev->data->scattered_rx || rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
+ return -ENOTSUP;
+ return 0;
+}
+
int rnp_rx_func_select(struct rte_eth_dev *dev)
{
- dev->rx_pkt_burst = rnp_recv_pkts;
+ bool simple_allowed = false;
+
+ simple_allowed = rnp_check_rx_simple_valid(dev) == 0;
+ if (simple_allowed)
+ dev->rx_pkt_burst = rnp_recv_pkts;
+ else
+ dev->rx_pkt_burst = rnp_scattered_rx;
return 0;
}
diff --git a/drivers/net/rnp/rnp_rxtx.h b/drivers/net/rnp/rnp_rxtx.h
index a8fd8d0..973b667 100644
--- a/drivers/net/rnp/rnp_rxtx.h
+++ b/drivers/net/rnp/rnp_rxtx.h
@@ -76,6 +76,8 @@ struct rnp_rx_queue {
uint64_t rx_offloads; /* user set hw offload features */
struct rte_mbuf **free_mbufs; /* rx bulk alloc reserve of free mbufs */
struct rte_mbuf fake_mbuf; /* dummy mbuf */
+ struct rte_mbuf *pkt_first_seg; /**< First segment of current packet. */
+ struct rte_mbuf *pkt_last_seg; /**< Last segment of current packet. */
};
struct rnp_txsw_entry {
--
1.8.3.1
^ permalink raw reply [flat|nested] 29+ messages in thread
* [PATCH v7 18/28] net/rnp: add Tx multiple segment version
2025-02-08 2:43 [PATCH v7 00/28] [v6]drivers/net Add Support mucse N10 Pmd Driver Wenbo Cao
` (16 preceding siblings ...)
2025-02-08 2:43 ` [PATCH v7 17/28] net/rnp: add Rx scatter segment version Wenbo Cao
@ 2025-02-08 2:43 ` Wenbo Cao
2025-02-08 2:43 ` [PATCH v7 19/28] net/rnp: add support basic stats operation Wenbo Cao
` (9 subsequent siblings)
27 siblings, 0 replies; 29+ messages in thread
From: Wenbo Cao @ 2025-02-08 2:43 UTC (permalink / raw)
To: thomas, Wenbo Cao; +Cc: stephen, dev, ferruh.yigit, andrew.rybchenko, yaojun
Add support for sending multi-segment mbufs.
Signed-off-by: Wenbo Cao <caowenbo@mucse.com>
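Multi-segment transmit walks the mbuf's next chain, consuming one descriptor
per segment and setting EOP only on the last one. A minimal standalone sketch
of that mapping (stand-in types; buffer address programming and software-ring
bookkeeping are omitted for brevity):

    #include <stddef.h>
    #include <stdint.h>

    struct seg  { struct seg *next; uint16_t data_len; };
    struct desc { uint64_t addr; uint32_t len; uint32_t cmd; };

    #define CMD_EOP (1u << 0)	/* hypothetical end-of-packet flag */

    /* Map one packet (a chain of segments) onto consecutive ring
     * descriptors; only the descriptor of the final segment carries
     * the EOP flag. */
    static uint16_t fill_multiseg(struct desc *ring, uint16_t mask,
                                  uint16_t tail, const struct seg *pkt)
    {
        const struct seg *m;

        for (m = pkt; m != NULL; m = m->next) {
            ring[tail].len = m->data_len;
            ring[tail].cmd = (m->next == NULL) ? CMD_EOP : 0;
            tail = (tail + 1) & mask;
        }
        return tail;	/* new tail position after the whole chain */
    }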
---
drivers/net/rnp/rnp_rxtx.c | 126 ++++++++++++++++++++++++++++++++++++++++++++-
drivers/net/rnp/rnp_rxtx.h | 3 +-
2 files changed, 126 insertions(+), 3 deletions(-)
diff --git a/drivers/net/rnp/rnp_rxtx.c b/drivers/net/rnp/rnp_rxtx.c
index c80cc8b..777ce7b 100644
--- a/drivers/net/rnp/rnp_rxtx.c
+++ b/drivers/net/rnp/rnp_rxtx.c
@@ -374,9 +374,11 @@ static int rnp_alloc_txbdr(struct rte_eth_dev *dev,
sw_ring[prev].next_id = idx;
prev = idx;
}
+ txq->last_desc_cleaned = txq->attr.nb_desc - 1;
txq->nb_tx_free = txq->attr.nb_desc - 1;
txq->tx_next_dd = txq->tx_rs_thresh - 1;
txq->tx_next_rs = txq->tx_rs_thresh - 1;
+ txq->nb_tx_used = 0;
txq->tx_tail = 0;
size = (txq->attr.nb_desc + RNP_TX_MAX_BURST_SIZE);
@@ -860,6 +862,7 @@ int rnp_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx)
rxe->mbuf = nmb;
rxbd->d.pkt_addr = rnp_get_dma_addr(&rxq->attr, nmb);
}
+ rxq->rxrearm_nb++;
if (rxq->rxrearm_nb > rxq->rx_free_thresh) {
rxq->rxrearm_nb -= rxq->rx_free_thresh;
rxq->rxrearm_start += rxq->rx_free_thresh;
@@ -927,7 +930,6 @@ int rnp_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx)
first_seg->nb_segs++;
last_seg->next = rxm;
}
- rxq->rxrearm_nb++;
if (!(rx_status & rte_cpu_to_le_16(RNP_CMD_EOP))) {
last_seg = rxm;
continue;
@@ -950,6 +952,106 @@ int rnp_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx)
return nb_rx;
}
+static __rte_always_inline uint16_t
+rnp_multiseg_clean_txq(struct rnp_tx_queue *txq)
+{
+ uint16_t last_desc_cleaned = txq->last_desc_cleaned;
+ struct rnp_txsw_entry *sw_ring = txq->sw_ring;
+ volatile struct rnp_tx_desc *txbd;
+ uint16_t desc_to_clean_to;
+ uint16_t nb_tx_to_clean;
+
+ desc_to_clean_to = (uint16_t)(last_desc_cleaned + txq->tx_rs_thresh);
+ desc_to_clean_to = desc_to_clean_to & (txq->attr.nb_desc - 1);
+
+ desc_to_clean_to = sw_ring[desc_to_clean_to].last_id;
+ txbd = &txq->tx_bdr[desc_to_clean_to];
+ if (!(txbd->d.cmd & RNP_CMD_DD))
+ return txq->nb_tx_free;
+
+ if (last_desc_cleaned > desc_to_clean_to)
+ nb_tx_to_clean = (uint16_t)((txq->attr.nb_desc -
+ last_desc_cleaned) + desc_to_clean_to);
+ else
+ nb_tx_to_clean = (uint16_t)(desc_to_clean_to -
+ last_desc_cleaned);
+
+ txbd->d.cmd = 0;
+
+ txq->last_desc_cleaned = desc_to_clean_to;
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + nb_tx_to_clean);
+
+ return txq->nb_tx_free;
+}
+
+static __rte_always_inline uint16_t
+rnp_multiseg_xmit_pkts(void *_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+ struct rnp_tx_queue *txq = (struct rnp_tx_queue *)_txq;
+ volatile struct rnp_tx_desc *txbd;
+ struct rnp_txsw_entry *txe, *txn;
+ struct rte_mbuf *tx_pkt, *m_seg;
+ uint16_t send_pkts = 0;
+ uint16_t nb_used_bd;
+ uint16_t tx_last;
+ uint16_t nb_tx;
+ uint16_t tx_id;
+
+ if (unlikely(!txq->txq_started || !txq->tx_link))
+ return 0;
+ if (txq->nb_tx_free < txq->tx_free_thresh)
+ rnp_multiseg_clean_txq(txq);
+ if (unlikely(txq->nb_tx_free == 0))
+ return 0;
+ tx_id = txq->tx_tail;
+ txbd = &txq->tx_bdr[tx_id];
+ txe = &txq->sw_ring[tx_id];
+ for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
+ tx_pkt = tx_pkts[nb_tx];
+ nb_used_bd = tx_pkt->nb_segs;
+ tx_last = (uint16_t)(tx_id + nb_used_bd - 1);
+ if (tx_last >= txq->attr.nb_desc)
+ tx_last = (uint16_t)(tx_last - txq->attr.nb_desc);
+ if (nb_used_bd > txq->nb_tx_free)
+ if (nb_used_bd > rnp_multiseg_clean_txq(txq))
+ break;
+ m_seg = tx_pkt;
+ do {
+ txbd = &txq->tx_bdr[tx_id];
+ txn = &txq->sw_ring[txe->next_id];
+ if (txe->mbuf) {
+ rte_pktmbuf_free_seg(txe->mbuf);
+ txe->mbuf = NULL;
+ }
+ txe->mbuf = m_seg;
+ txe->last_id = tx_last;
+ txbd->d.addr = rnp_get_dma_addr(&txq->attr, m_seg);
+ txbd->d.blen = rte_cpu_to_le_32(m_seg->data_len);
+ txbd->d.cmd &= ~RNP_CMD_EOP;
+ txbd->d.cmd |= RNP_DATA_DESC;
+ m_seg = m_seg->next;
+ tx_id = txe->next_id;
+ txe = txn;
+ } while (m_seg != NULL);
+ txbd->d.cmd |= RNP_CMD_EOP;
+ txq->nb_tx_used = (uint16_t)txq->nb_tx_used + nb_used_bd;
+ txq->nb_tx_free = (uint16_t)txq->nb_tx_free - nb_used_bd;
+ if (txq->nb_tx_used >= txq->tx_rs_thresh) {
+ txq->nb_tx_used = 0;
+ txbd->d.cmd |= RNP_CMD_RS;
+ }
+ send_pkts++;
+ }
+ if (!send_pkts)
+ return 0;
+ txq->tx_tail = tx_id;
+
+ rte_wmb();
+ RNP_REG_WR(txq->tx_tailreg, 0, tx_id);
+
+ return send_pkts;
+}
+
static int
rnp_check_rx_simple_valid(struct rte_eth_dev *dev)
{
@@ -973,9 +1075,29 @@ int rnp_rx_func_select(struct rte_eth_dev *dev)
return 0;
}
+static int
+rnp_check_tx_simple_valid(struct rte_eth_dev *dev, struct rnp_tx_queue *txq)
+{
+ RTE_SET_USED(txq);
+ if (dev->data->scattered_rx)
+ return -ENOTSUP;
+ return 0;
+}
+
int rnp_tx_func_select(struct rte_eth_dev *dev)
{
- dev->tx_pkt_burst = rnp_xmit_simple;
+ bool simple_allowed = false;
+ struct rnp_tx_queue *txq;
+ int idx = 0;
+
+ for (idx = 0; idx < dev->data->nb_tx_queues; idx++) {
+ txq = dev->data->tx_queues[idx];
+ simple_allowed = rnp_check_tx_simple_valid(dev, txq) == 0;
+ if (!simple_allowed)
+ break;
+ }
+ if (simple_allowed)
+ dev->tx_pkt_burst = rnp_xmit_simple;
+ else
+ dev->tx_pkt_burst = rnp_multiseg_xmit_pkts;
dev->tx_pkt_prepare = rte_eth_pkt_burst_dummy;
return 0;
diff --git a/drivers/net/rnp/rnp_rxtx.h b/drivers/net/rnp/rnp_rxtx.h
index 973b667..f631285 100644
--- a/drivers/net/rnp/rnp_rxtx.h
+++ b/drivers/net/rnp/rnp_rxtx.h
@@ -98,7 +98,8 @@ struct rnp_tx_queue {
struct rnp_queue_attr attr;
uint16_t nb_tx_free; /* avail desc to set pkts */
- uint16_t nb_tx_used;
+ uint16_t nb_tx_used; /* multiseg mbuf used num */
+ uint16_t last_desc_cleaned;
uint16_t tx_tail;
uint16_t tx_next_dd; /* next to scan writeback dd bit */
--
1.8.3.1
^ permalink raw reply [flat|nested] 29+ messages in thread
* [PATCH v7 19/28] net/rnp: add support basic stats operation
2025-02-08 2:43 [PATCH v7 00/28] [v6]drivers/net Add Support mucse N10 Pmd Driver Wenbo Cao
` (17 preceding siblings ...)
2025-02-08 2:43 ` [PATCH v7 18/28] net/rnp: add Tx multiple " Wenbo Cao
@ 2025-02-08 2:43 ` Wenbo Cao
2025-02-08 2:43 ` [PATCH v7 20/28] net/rnp: add support xstats operation Wenbo Cao
` (8 subsequent siblings)
27 siblings, 0 replies; 29+ messages in thread
From: Wenbo Cao @ 2025-02-08 2:43 UTC (permalink / raw)
To: thomas, Wenbo Cao; +Cc: stephen, dev, ferruh.yigit, andrew.rybchenko, yaojun
Add basic statistics support: hardware missed-packet counters plus per-queue Rx/Tx packet and byte counts.
Signed-off-by: Wenbo Cao <caowenbo@mucse.com>
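The ETH drop counters are 32-bit hardware registers, so the driver folds each
reading into a 64-bit software counter and must account for register wrap
between polls. A minimal standalone sketch of that accumulation, correct as
long as the register wraps at most once per poll interval:

    #include <stdint.h>

    /* Fold a 32-bit hardware reading into a 64-bit software counter,
     * assuming the register wraps at most once between two polls. */
    static void accumulate_32bit(uint64_t *total, uint32_t *last,
                                 uint32_t now)
    {
        if (now >= *last)
            *total += now - *last;
        else	/* the 32-bit register wrapped around */
            *total += (uint64_t)now + UINT32_MAX + 1 - *last;
        *last = now;
    }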
---
doc/guides/nics/features/rnp.ini | 2 +
doc/guides/nics/rnp.rst | 1 +
drivers/net/rnp/base/rnp_eth_regs.h | 3 +
drivers/net/rnp/rnp.h | 10 ++-
drivers/net/rnp/rnp_ethdev.c | 147 ++++++++++++++++++++++++++++++++++++
drivers/net/rnp/rnp_rxtx.c | 9 +++
drivers/net/rnp/rnp_rxtx.h | 10 +++
7 files changed, 181 insertions(+), 1 deletion(-)
diff --git a/doc/guides/nics/features/rnp.ini b/doc/guides/nics/features/rnp.ini
index c68d6fb..45dae3b 100644
--- a/doc/guides/nics/features/rnp.ini
+++ b/doc/guides/nics/features/rnp.ini
@@ -7,6 +7,8 @@
Speed capabilities = Y
Link status = Y
Link status event = Y
+Basic stats = Y
+Stats per queue = Y
Queue start/stop = Y
Promiscuous mode = Y
Allmulticast mode = Y
diff --git a/doc/guides/nics/rnp.rst b/doc/guides/nics/rnp.rst
index db64104..ec6f3f9 100644
--- a/doc/guides/nics/rnp.rst
+++ b/doc/guides/nics/rnp.rst
@@ -19,6 +19,7 @@ Features
- MTU update
- Jumbo frames
- Scatter-Gather IO support
+- Port hardware statistic
Prerequisites
-------------
diff --git a/drivers/net/rnp/base/rnp_eth_regs.h b/drivers/net/rnp/base/rnp_eth_regs.h
index 91a18dd..391688b 100644
--- a/drivers/net/rnp/base/rnp_eth_regs.h
+++ b/drivers/net/rnp/base/rnp_eth_regs.h
@@ -23,6 +23,9 @@
#define RNP_RX_FC_ENABLE _ETH_(0x8520)
#define RNP_RING_FC_EN(n) _ETH_(0x8524 + ((0x4) * ((n) / 32)))
#define RNP_RING_FC_THRESH(n) _ETH_(0x8a00 + ((0x4) * (n)))
+/* ETH Statistic */
+#define RNP_ETH_RXTRANS_DROP _ETH_(0x8904)
+#define RNP_ETH_RXTRUNC_DROP _ETH_(0x8928)
/* Mac Host Filter */
#define RNP_MAC_FCTRL _ETH_(0x9110)
#define RNP_MAC_FCTRL_MPE RTE_BIT32(8) /* Multicast Promiscuous En */
diff --git a/drivers/net/rnp/rnp.h b/drivers/net/rnp/rnp.h
index 054382e..b4f4f28 100644
--- a/drivers/net/rnp/rnp.h
+++ b/drivers/net/rnp/rnp.h
@@ -10,7 +10,7 @@
#include "base/rnp_hw.h"
#define PCI_VENDOR_ID_MUCSE (0x8848)
-#define RNP_DEV_ID_N10G (0x1000)
+#define RNP_DEV_ID_N10G (0x1020)
#define RNP_MAX_VF_NUM (64)
#define RNP_MISC_VEC_ID RTE_INTR_VEC_ZERO_OFFSET
/* maximum frame size supported */
@@ -105,6 +105,11 @@ struct rnp_proc_priv {
const struct rnp_mbx_ops *mbx_ops;
};
+struct rnp_hw_eth_stats {
+ uint64_t rx_trans_drop; /* Rx drop: eth-to-DMA FIFO full */
+ uint64_t rx_trunc_drop; /* Rx drop: MAC-to-eth-to-host copy FIFO full */
+};
+
struct rnp_eth_port {
struct rnp_proc_priv *proc_priv;
struct rte_ether_addr mac_addr;
@@ -113,6 +118,9 @@ struct rnp_eth_port {
struct rnp_tx_queue *tx_queues[RNP_MAX_RX_QUEUE_NUM];
struct rnp_hw *hw;
+ struct rnp_hw_eth_stats eth_stats_old;
+ struct rnp_hw_eth_stats eth_stats;
+
struct rte_eth_rss_conf rss_conf;
uint16_t last_rx_num;
bool rxq_num_changed;
diff --git a/drivers/net/rnp/rnp_ethdev.c b/drivers/net/rnp/rnp_ethdev.c
index 0fcb256..fa2617b 100644
--- a/drivers/net/rnp/rnp_ethdev.c
+++ b/drivers/net/rnp/rnp_ethdev.c
@@ -770,6 +770,150 @@ static int rnp_allmulticast_disable(struct rte_eth_dev *eth_dev)
return 0;
}
+struct rte_rnp_xstats_name_off {
+ char name[RTE_ETH_XSTATS_NAME_SIZE];
+ uint32_t offset;
+ uint32_t reg_base;
+ bool hi_addr_en;
+};
+
+static const struct rte_rnp_xstats_name_off rte_rnp_rx_eth_stats_str[] = {
+ {"eth rx full drop", offsetof(struct rnp_hw_eth_stats,
+ rx_trans_drop), RNP_ETH_RXTRANS_DROP, false},
+ {"eth_rx_fifo_drop", offsetof(struct rnp_hw_eth_stats,
+ rx_trunc_drop), RNP_ETH_RXTRUNC_DROP, false},
+};
+#define RNP_NB_RX_HW_ETH_STATS (RTE_DIM(rte_rnp_rx_eth_stats_str))
+#define RNP_GET_E_HW_COUNT(stats, offset) \
+ ((uint64_t *)(((char *)stats) + (offset)))
+#define RNP_ADD_INCL_COUNT(stats, offset, val) \
+ ((*(RNP_GET_E_HW_COUNT(stats, (offset)))) += val)
+
+static inline void
+rnp_update_eth_stats_32bit(struct rnp_hw_eth_stats *new,
+ struct rnp_hw_eth_stats *old,
+ uint32_t offset, uint32_t val)
+{
+ uint64_t *last_count = NULL;
+
+ last_count = RNP_GET_E_HW_COUNT(old, offset);
+ if (val >= *last_count)
+ RNP_ADD_INCL_COUNT(new, offset, val - (*last_count));
+ else
+ RNP_ADD_INCL_COUNT(new, offset,
+ (uint64_t)val + UINT32_MAX + 1 - (*last_count));
+ *last_count = val;
+}
+
+static void rnp_get_eth_count(struct rnp_hw *hw,
+ uint16_t lane,
+ struct rnp_hw_eth_stats *new,
+ struct rnp_hw_eth_stats *old,
+ const struct rte_rnp_xstats_name_off *ptr)
+{
+ uint64_t val = 0;
+
+ if (ptr->reg_base) {
+ val = RNP_E_REG_RD(hw, ptr->reg_base + 0x40 * lane);
+ rnp_update_eth_stats_32bit(new, old, ptr->offset, val);
+ }
+}
+
+static void rnp_get_hw_stats(struct rte_eth_dev *dev)
+{
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(dev);
+ struct rnp_hw_eth_stats *old = &port->eth_stats_old;
+ struct rnp_hw_eth_stats *new = &port->eth_stats;
+ const struct rte_rnp_xstats_name_off *ptr;
+ uint16_t lane = port->attr.nr_lane;
+ struct rnp_hw *hw = port->hw;
+ uint16_t i;
+
+ for (i = 0; i < RNP_NB_RX_HW_ETH_STATS; i++) {
+ ptr = &rte_rnp_rx_eth_stats_str[i];
+ rnp_get_eth_count(hw, lane, new, old, ptr);
+ }
+}
+
+static int
+rnp_dev_stats_get(struct rte_eth_dev *dev,
+ struct rte_eth_stats *stats)
+{
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(dev);
+ struct rnp_hw_eth_stats *eth_stats = &port->eth_stats;
+ struct rte_eth_dev_data *data = dev->data;
+ int i = 0;
+
+ PMD_INIT_FUNC_TRACE();
+ rnp_get_hw_stats(dev);
+ for (i = 0; i < data->nb_rx_queues; i++) {
+ if (!data->rx_queues[i])
+ continue;
+ if (i < RTE_ETHDEV_QUEUE_STAT_CNTRS) {
+ stats->q_ipackets[i] = ((struct rnp_rx_queue **)
+ (data->rx_queues))[i]->stats.ipackets;
+ stats->q_ibytes[i] = ((struct rnp_rx_queue **)
+ (data->rx_queues))[i]->stats.ibytes;
+ stats->ipackets += stats->q_ipackets[i];
+ stats->ibytes += stats->q_ibytes[i];
+ } else {
+ stats->ipackets += ((struct rnp_rx_queue **)
+ (data->rx_queues))[i]->stats.ipackets;
+ stats->ibytes += ((struct rnp_rx_queue **)
+ (data->rx_queues))[i]->stats.ibytes;
+ }
+ }
+
+ for (i = 0; i < data->nb_tx_queues; i++) {
+ if (!data->tx_queues[i])
+ continue;
+ if (i < RTE_ETHDEV_QUEUE_STAT_CNTRS) {
+ stats->q_opackets[i] = ((struct rnp_tx_queue **)
+ (data->tx_queues))[i]->stats.opackets;
+ stats->q_obytes[i] = ((struct rnp_tx_queue **)
+ (data->tx_queues))[i]->stats.obytes;
+ stats->opackets += stats->q_opackets[i];
+ stats->obytes += stats->q_obytes[i];
+ } else {
+ stats->opackets += ((struct rnp_tx_queue **)
+ (data->tx_queues))[i]->stats.opackets;
+ stats->obytes += ((struct rnp_tx_queue **)
+ (data->tx_queues))[i]->stats.obytes;
+ }
+ }
+ stats->imissed = eth_stats->rx_trans_drop + eth_stats->rx_trunc_drop;
+
+ return 0;
+}
+
+static int
+rnp_dev_stats_reset(struct rte_eth_dev *dev)
+{
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(dev);
+ struct rnp_hw_eth_stats *eth_stats = &port->eth_stats;
+ struct rnp_rx_queue *rxq;
+ struct rnp_tx_queue *txq;
+ uint16_t idx;
+
+ PMD_INIT_FUNC_TRACE();
+ memset(eth_stats, 0, sizeof(*eth_stats));
+ for (idx = 0; idx < dev->data->nb_rx_queues; idx++) {
+ rxq = ((struct rnp_rx_queue **)
+ (dev->data->rx_queues))[idx];
+ if (!rxq)
+ continue;
+ memset(&rxq->stats, 0, sizeof(struct rnp_queue_stats));
+ }
+ for (idx = 0; idx < dev->data->nb_tx_queues; idx++) {
+ txq = ((struct rnp_tx_queue **)
+ (dev->data->tx_queues))[idx];
+ if (!txq)
+ continue;
+ memset(&txq->stats, 0, sizeof(struct rnp_queue_stats));
+ }
+
+ return 0;
+}
+
/* Features supported by this driver */
static const struct eth_dev_ops rnp_eth_dev_ops = {
.dev_configure = rnp_dev_configure,
@@ -793,6 +937,9 @@ static int rnp_allmulticast_disable(struct rte_eth_dev *eth_dev)
.reta_query = rnp_dev_rss_reta_query,
.rss_hash_update = rnp_dev_rss_hash_update,
.rss_hash_conf_get = rnp_dev_rss_hash_conf_get,
+ /* stats */
+ .stats_get = rnp_dev_stats_get,
+ .stats_reset = rnp_dev_stats_reset,
/* link impl */
.link_update = rnp_dev_link_update,
.dev_set_link_up = rnp_dev_set_link_up,
diff --git a/drivers/net/rnp/rnp_rxtx.c b/drivers/net/rnp/rnp_rxtx.c
index 777ce7b..c351fee 100644
--- a/drivers/net/rnp/rnp_rxtx.c
+++ b/drivers/net/rnp/rnp_rxtx.c
@@ -741,6 +741,8 @@ int rnp_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx)
nmb->packet_type = 0;
nmb->ol_flags = 0;
nmb->nb_segs = 1;
+
+ rxq->stats.ibytes += nmb->data_len;
}
for (j = 0; j < nb_dd; ++j) {
rx_pkts[i + j] = rx_swbd[j].mbuf;
@@ -752,6 +754,7 @@ int rnp_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx)
if (nb_dd != RNP_CACHE_FETCH_RX)
break;
}
+ rxq->stats.ipackets += nb_rx;
rxq->rx_tail = (rxq->rx_tail + nb_rx) & rxq->attr.nb_desc_mask;
rxq->rxrearm_nb = rxq->rxrearm_nb + nb_rx;
@@ -821,6 +824,7 @@ int rnp_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx)
txbd->d.blen = tx_swbd->mbuf->data_len;
txbd->d.cmd = RNP_CMD_EOP;
+ txq->stats.obytes += txbd->d.blen;
i = (i + 1) & txq->attr.nb_desc_mask;
}
txq->nb_tx_free -= start;
@@ -832,6 +836,7 @@ int rnp_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx)
if (txq->tx_next_rs > txq->attr.nb_desc)
txq->tx_next_rs = txq->tx_rs_thresh - 1;
}
+ txq->stats.opackets += start;
txq->tx_tail = i;
rte_wmb();
@@ -936,6 +941,7 @@ int rnp_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx)
}
rxm->next = NULL;
first_seg->port = rxq->attr.port_id;
+ rxq->stats.ibytes += first_seg->pkt_len;
/* this the end of packet the large pkt has been recv finish */
rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr,
first_seg->data_off));
@@ -944,6 +950,7 @@ int rnp_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx)
}
if (!nb_rx)
return 0;
+ rxq->stats.ipackets += nb_rx;
/* update sw record point */
rxq->rx_tail = rx_id;
rxq->pkt_first_seg = first_seg;
@@ -1033,6 +1040,7 @@ int rnp_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx)
tx_id = txe->next_id;
txe = txn;
} while (m_seg != NULL);
+ txq->stats.obytes += tx_pkt->pkt_len;
txbd->d.cmd |= RNP_CMD_EOP;
txq->nb_tx_used = (uint16_t)txq->nb_tx_used + nb_used_bd;
txq->nb_tx_free = (uint16_t)txq->nb_tx_free - nb_used_bd;
@@ -1044,6 +1052,7 @@ int rnp_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx)
}
if (!send_pkts)
return 0;
+ txq->stats.opackets += send_pkts;
txq->tx_tail = tx_id;
rte_wmb();
diff --git a/drivers/net/rnp/rnp_rxtx.h b/drivers/net/rnp/rnp_rxtx.h
index f631285..d26497a 100644
--- a/drivers/net/rnp/rnp_rxtx.h
+++ b/drivers/net/rnp/rnp_rxtx.h
@@ -47,6 +47,14 @@ struct rnp_rxsw_entry {
struct rte_mbuf *mbuf;
};
+struct rnp_queue_stats {
+ uint64_t obytes;
+ uint64_t opackets;
+
+ uint64_t ibytes;
+ uint64_t ipackets;
+};
+
struct rnp_rx_queue {
struct rte_mempool *mb_pool; /* mbuf pool to populate rx ring. */
const struct rte_memzone *rz; /* rx hw ring base alloc memzone */
@@ -73,6 +81,7 @@ struct rnp_rx_queue {
uint8_t pthresh; /* rx desc prefetch threshold */
uint8_t pburst; /* rx desc prefetch burst */
+ struct rnp_queue_stats stats;
uint64_t rx_offloads; /* user set hw offload features */
struct rte_mbuf **free_mbufs; /* rx bulk alloc reserve of free mbufs */
struct rte_mbuf fake_mbuf; /* dummy mbuf */
@@ -113,6 +122,7 @@ struct rnp_tx_queue {
uint8_t pthresh; /* rx desc prefetch threshold */
uint8_t pburst; /* rx desc burst*/
+ struct rnp_queue_stats stats;
uint64_t tx_offloads; /* tx offload features */
struct rte_mbuf **free_mbufs; /* tx bulk free reserve of free mbufs */
};
--
1.8.3.1
^ permalink raw reply [flat|nested] 29+ messages in thread
* [PATCH v7 20/28] net/rnp: add support xstats operation
2025-02-08 2:43 [PATCH v7 00/28] [v6]drivers/net Add Support mucse N10 Pmd Driver Wenbo Cao
` (18 preceding siblings ...)
2025-02-08 2:43 ` [PATCH v7 19/28] net/rnp: add support basic stats operation Wenbo Cao
@ 2025-02-08 2:43 ` Wenbo Cao
2025-02-08 2:43 ` [PATCH v7 21/28] net/rnp: add unicast MAC filter operation Wenbo Cao
` (7 subsequent siblings)
27 siblings, 0 replies; 29+ messages in thread
From: Wenbo Cao @ 2025-02-08 2:43 UTC (permalink / raw)
To: thomas, Wenbo Cao; +Cc: stephen, dev, ferruh.yigit, andrew.rybchenko, yaojun
Add support for MAC and ETH Rx/Tx hardware extended statistics (xstats).
Signed-off-by: Wenbo Cao <caowenbo@mucse.com>
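Most MMC counters are 64-bit values exposed as a low/high register pair at
base and base+4, which is what the hi_addr_en flag selects. A minimal
standalone sketch of composing such a counter, with rd32 as a hypothetical
stand-in for the driver's register read helper:

    #include <stdint.h>

    /* rd32() is a hypothetical stand-in for the driver's register
     * read; MMC counters expose the low 32 bits at 'base' and, for
     * 64-bit counters, the high 32 bits at 'base + 4'. */
    extern uint32_t rd32(uint32_t reg);

    static uint64_t read_mmc_counter(uint32_t base, int is_64bit)
    {
        uint64_t count = rd32(base);

        if (is_64bit)
            count |= (uint64_t)rd32(base + 4) << 32;
        return count;
    }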
---
doc/guides/nics/features/rnp.ini | 1 +
drivers/net/rnp/base/rnp_eth_regs.h | 3 +
drivers/net/rnp/base/rnp_mac_regs.h | 80 ++++++++++++
drivers/net/rnp/rnp.h | 51 ++++++++
drivers/net/rnp/rnp_ethdev.c | 243 ++++++++++++++++++++++++++++++++++++
5 files changed, 378 insertions(+)
diff --git a/doc/guides/nics/features/rnp.ini b/doc/guides/nics/features/rnp.ini
index 45dae3b..c782efe 100644
--- a/doc/guides/nics/features/rnp.ini
+++ b/doc/guides/nics/features/rnp.ini
@@ -9,6 +9,7 @@ Link status = Y
Link status event = Y
Basic stats = Y
Stats per queue = Y
+Extended stats = Y
Queue start/stop = Y
Promiscuous mode = Y
Allmulticast mode = Y
diff --git a/drivers/net/rnp/base/rnp_eth_regs.h b/drivers/net/rnp/base/rnp_eth_regs.h
index 391688b..ada42be 100644
--- a/drivers/net/rnp/base/rnp_eth_regs.h
+++ b/drivers/net/rnp/base/rnp_eth_regs.h
@@ -25,6 +25,9 @@
#define RNP_RING_FC_THRESH(n) _ETH_(0x8a00 + ((0x4) * (n)))
/* ETH Statistic */
#define RNP_ETH_RXTRANS_DROP _ETH_(0x8904)
+#define RNP_ETH_RXGLAN_DROP _ETH_(0x8918)
+#define RNP_ETH_RXIPH_E_DROP _ETH_(0x891c)
+#define RNP_ETH_RXCKSUM_E_DROP _ETH_(0x8920)
#define RNP_ETH_RXTRUNC_DROP _ETH_(0x8928)
/* Mac Host Filter */
#define RNP_MAC_FCTRL _ETH_(0x9110)
diff --git a/drivers/net/rnp/base/rnp_mac_regs.h b/drivers/net/rnp/base/rnp_mac_regs.h
index 1ae8801..94aeba9 100644
--- a/drivers/net/rnp/base/rnp_mac_regs.h
+++ b/drivers/net/rnp/base/rnp_mac_regs.h
@@ -78,4 +78,84 @@
/* PHY Link Status */
#define RNP_MAC_PLS RTE_BIT32(17)
+/* Mac Manage Counts */
+#define RNP_MMC_CTRL (0x0800)
+#define RNP_MMC_RSTONRD RTE_BIT32(2)
+/* Tx Good And Bad Bytes Base */
+#define RNP_MMC_TX_GBOCTGB (0x0814)
+/* Tx Good And Bad Frame Num Base */
+#define RNP_MMC_TX_GBFRMB (0x081c)
+/* Tx Good Broadcast Frame Num Base */
+#define RNP_MMC_TX_BCASTB (0x0824)
+/* Tx Good Multicast Frame Num Base */
+#define RNP_MMC_TX_MCASTB (0x082c)
+/* Tx 64Bytes Frame Num */
+#define RNP_MMC_TX_64_BYTESB (0x0834)
+#define RNP_MMC_TX_65TO127_BYTESB (0x083c)
+#define RNP_MMC_TX_128TO255_BYTEB (0x0844)
+#define RNP_MMC_TX_256TO511_BYTEB (0x084c)
+#define RNP_MMC_TX_512TO1023_BYTEB (0x0854)
+#define RNP_MMC_TX_1024TOMAX_BYTEB (0x085c)
+/* Tx Good And Bad Unicast Frame Num Base */
+#define RNP_MMC_TX_GBUCASTB (0x0864)
+/* Tx Good And Bad Multicast Frame Num Base */
+#define RNP_MMC_TX_GBMCASTB (0x086c)
+/* Tx Good And Bad Broadcast Frame NUM Base */
+#define RNP_MMC_TX_GBBCASTB (0x0874)
+/* Tx Frame Underflow Error */
+#define RNP_MMC_TX_UNDRFLWB (0x087c)
+/* Tx Good Frame Bytes Base */
+#define RNP_MMC_TX_GBYTESB (0x0884)
+/* Tx Good Frame Num Base*/
+#define RNP_MMC_TX_GBRMB (0x088c)
+/* Tx Good Pause Frame Num Base */
+#define RNP_MMC_TX_PAUSEB (0x0894)
+/* Tx Good Vlan Frame Num Base */
+#define RNP_MMC_TX_VLANB (0x089c)
+
+/* Rx Good And Bad Frames Num Base */
+#define RNP_MMC_RX_GBFRMB (0x0900)
+/* Rx Good And Bad Frames Bytes Base */
+#define RNP_MMC_RX_GBOCTGB (0x0908)
+/* Rx Good Frames Bytes Base */
+#define RNP_MMC_RX_GOCTGB (0x0910)
+/* Rx Good Broadcast Frames Num Base */
+#define RNP_MMC_RX_BCASTGB (0x0918)
+/* Rx Good Multicast Frames Num Base */
+#define RNP_MMC_RX_MCASTGB (0x0920)
+/* Rx Crc Error Frames Num Base */
+#define RNP_MMC_RX_CRCERB (0x0928)
+/* Rx Less Than 64Bytes With Crc Err Base */
+#define RNP_MMC_RX_RUNTERB (0x0930)
+/* Receive Jumbo Frame Error */
+#define RNP_MMC_RX_JABBER_ERR (0x0934)
+/* Shorter Than 64Bytes Without Any Errors Base */
+#define RNP_MMC_RX_USIZEGB (0x0938)
+/* Len Oversize Than Support */
+#define RNP_MMC_RX_OSIZEGB (0x093c)
+/* Rx 64Byes Frame Num Base */
+#define RNP_MMC_RX_64_BYTESB (0x0940)
+/* Rx 65Bytes To 127Bytes Frame Num Base */
+#define RNP_MMC_RX_65TO127_BYTESB (0x0948)
+/* Rx 128Bytes To 255Bytes Frame Num Base */
+#define RNP_MMC_RX_128TO255_BYTESB (0x0950)
+/* Rx 256Bytes To 511Bytes Frame Num Base */
+#define RNP_MMC_RX_256TO511_BYTESB (0x0958)
+/* Rx 512Bytes To 1023Bytes Frame Num Base */
+#define RNP_MMC_RX_512TO1203_BYTESB (0x0960)
+/* Rx Len Bigger Than 1024Bytes Base */
+#define RNP_MMC_RX_1024TOMAX_BYTESB (0x0968)
+/* Rx Unicast Frame Good Num Base */
+#define RNP_MMC_RX_UCASTGB (0x0970)
+/* Rx Length Error Of Frame Part */
+#define RNP_MMC_RX_LENERRB (0x0978)
+/* Rx received with a Length field not equal to the valid frame size */
+#define RNP_MMC_RX_OUTOF_RANGE (0x0980)
+/* Rx Pause Frame Good Num Base */
+#define RNP_MMC_RX_PAUSEB (0x0988)
+/* Rx Vlan Frame Good Num Base */
+#define RNP_MMC_RX_VLANGB (0x0998)
+/* Rx With A Watchdog Timeout Err Frame Base */
+#define RNP_MMC_RX_WDOGERRB (0x09a0)
+
#endif /* _RNP_MAC_REGS_H_ */
diff --git a/drivers/net/rnp/rnp.h b/drivers/net/rnp/rnp.h
index b4f4f28..691f9c0 100644
--- a/drivers/net/rnp/rnp.h
+++ b/drivers/net/rnp/rnp.h
@@ -108,6 +108,56 @@ struct rnp_proc_priv {
struct rnp_hw_eth_stats {
uint64_t rx_trans_drop; /* Rx drop: eth-to-DMA FIFO full */
uint64_t rx_trunc_drop; /* Rx drop: MAC-to-eth-to-host copy FIFO full */
+ uint64_t rx_glen_drop; /* pkts length bigger than hw limit */
+ uint64_t rx_cksum_e_drop; /* rx cksum error pkts drop */
+ uint64_t rx_iph_e_drop; /* rx ip header error drop */
+};
+
+struct rnp_hw_mac_stats {
+ uint64_t rx_all_pkts; /* Include Good And Bad Frame Num */
+ uint64_t rx_all_bytes; /* Include Good And Bad Pkts Octets */
+ uint64_t rx_good_pkts;
+ uint64_t rx_good_bytes;
+ uint64_t rx_broadcast;
+ uint64_t rx_multicast;
+ uint64_t rx_crc_err;
+ uint64_t rx_runt_err; /* Frame Less Than 64 Bytes With a CRC Error */
+ uint64_t rx_jabber_err; /* Jumbo Frame CRC Error */
+ uint64_t rx_undersize_err; /* Frame Less Than 64 Bytes Without Error */
+ uint64_t rx_oversize_err; /* Bigger Than Max Support Length Frame */
+ uint64_t rx_64octes_pkts;
+ uint64_t rx_65to127_octes_pkts;
+ uint64_t rx_128to255_octes_pkts;
+ uint64_t rx_256to511_octes_pkts;
+ uint64_t rx_512to1023_octes_pkts;
+ uint64_t rx_1024tomax_octes_pkts;
+ uint64_t rx_unicast;
+ uint64_t rx_len_err; /* Length Bigger Or Smaller Than Supported */
+ uint64_t rx_len_invalid; /* Frame Length Field Not Equal To Real Length */
+ uint64_t rx_pause; /* Rx Pause Frame Num */
+ uint64_t rx_vlan; /* Rx Vlan Frame Num */
+ uint64_t rx_watchdog_err; /* Rx with a watchdog time out error */
+ uint64_t rx_bad_pkts;
+
+ uint64_t tx_all_pkts; /* Include Good And Bad Frame Num */
+ uint64_t tx_all_bytes; /* Include Good And Bad Pkts Octets */
+ uint64_t tx_broadcast;
+ uint64_t tx_multicast;
+ uint64_t tx_64octes_pkts;
+ uint64_t tx_65to127_octes_pkts;
+ uint64_t tx_128to255_octes_pkts;
+ uint64_t tx_256to511_octes_pkts;
+ uint64_t tx_512to1023_octes_pkts;
+ uint64_t tx_1024tomax_octes_pkts;
+ uint64_t tx_all_unicast;
+ uint64_t tx_all_multicast;
+ uint64_t tx_all_broadcast;
+ uint64_t tx_underflow_err;
+ uint64_t tx_good_pkts;
+ uint64_t tx_good_bytes;
+ uint64_t tx_pause_pkts;
+ uint64_t tx_vlan_pkts;
+ uint64_t tx_bad_pkts;
};
struct rnp_eth_port {
@@ -120,6 +170,7 @@ struct rnp_eth_port {
struct rnp_hw_eth_stats eth_stats_old;
struct rnp_hw_eth_stats eth_stats;
+ struct rnp_hw_mac_stats mac_stats;
struct rte_eth_rss_conf rss_conf;
uint16_t last_rx_num;
diff --git a/drivers/net/rnp/rnp_ethdev.c b/drivers/net/rnp/rnp_ethdev.c
index fa2617b..fdbba6f 100644
--- a/drivers/net/rnp/rnp_ethdev.c
+++ b/drivers/net/rnp/rnp_ethdev.c
@@ -782,12 +782,126 @@ struct rte_rnp_xstats_name_off {
rx_trans_drop), RNP_ETH_RXTRANS_DROP, false},
{"eth_rx_fifo_drop", offsetof(struct rnp_hw_eth_stats,
rx_trunc_drop), RNP_ETH_RXTRUNC_DROP, false},
+ {"eth rx pkts bigger than mtu", offsetof(struct rnp_hw_eth_stats,
+ rx_glen_drop), RNP_ETH_RXGLAN_DROP, false},
+ {"eth rx cksum error drop", offsetof(struct rnp_hw_eth_stats,
+ rx_cksum_e_drop), RNP_ETH_RXCKSUM_E_DROP, false},
+ {"eth rx iph error drop", offsetof(struct rnp_hw_eth_stats,
+ rx_iph_e_drop), RNP_ETH_RXIPH_E_DROP, false},
};
+
+static const struct rte_rnp_xstats_name_off rte_rnp_rx_mac_stats_str[] = {
+ {"Rx good bad Pkts", offsetof(struct rnp_hw_mac_stats,
+ rx_all_pkts), RNP_MMC_RX_GBFRMB, true},
+ {"Rx good bad bytes", offsetof(struct rnp_hw_mac_stats,
+ rx_all_bytes), RNP_MMC_RX_GBOCTGB, true},
+ {"Rx good Pkts", offsetof(struct rnp_hw_mac_stats,
+ rx_good_pkts), 0, false},
+ {"RX good Bytes", offsetof(struct rnp_hw_mac_stats,
+ rx_good_bytes), RNP_MMC_RX_GOCTGB, true},
+ {"Rx Broadcast Pkts", offsetof(struct rnp_hw_mac_stats,
+ rx_broadcast), RNP_MMC_RX_BCASTGB, true},
+ {"Rx Multicast Pkts", offsetof(struct rnp_hw_mac_stats,
+ rx_multicast), RNP_MMC_RX_MCASTGB, true},
+ {"Rx Crc Frames Err Pkts", offsetof(struct rnp_hw_mac_stats,
+ rx_crc_err), RNP_MMC_RX_CRCERB, true},
+ {"Rx len Err with Crc err", offsetof(struct rnp_hw_mac_stats,
+ rx_runt_err), RNP_MMC_RX_RUNTERB, false},
+ {"Rx jabber Error ", offsetof(struct rnp_hw_mac_stats,
+ rx_jabber_err), RNP_MMC_RX_JABBER_ERR, false},
+ {"Rx len Err Without Other Error", offsetof(struct rnp_hw_mac_stats,
+ rx_undersize_err), RNP_MMC_RX_USIZEGB, false},
+ {"Rx Len Shorter 64Bytes Without Err", offsetof(struct rnp_hw_mac_stats,
+ rx_undersize_err), RNP_MMC_RX_USIZEGB, false},
+ {"Rx Len Oversize 9K", offsetof(struct rnp_hw_mac_stats,
+ rx_oversize_err), RNP_MMC_RX_OSIZEGB, false},
+ {"Rx 64Bytes Frame Num", offsetof(struct rnp_hw_mac_stats,
+ rx_64octes_pkts), RNP_MMC_RX_64_BYTESB, true},
+ {"Rx 65Bytes To 127Bytes Frame Num", offsetof(struct rnp_hw_mac_stats,
+ rx_65to127_octes_pkts), RNP_MMC_RX_65TO127_BYTESB, true},
+ {"Rx 128Bytes To 255Bytes Frame Num", offsetof(struct rnp_hw_mac_stats,
+ rx_128to255_octes_pkts), RNP_MMC_RX_128TO255_BYTESB, true},
+ {"Rx 256Bytes To 511Bytes Frame Num", offsetof(struct rnp_hw_mac_stats,
+ rx_256to511_octes_pkts), RNP_MMC_RX_256TO511_BYTESB, true},
+ {"Rx 512Bytes To 1023Bytes Frame Num", offsetof(struct rnp_hw_mac_stats,
+ rx_512to1023_octes_pkts), RNP_MMC_RX_512TO1203_BYTESB, true},
+ {"Rx Bigger 1024Bytes Frame Num", offsetof(struct rnp_hw_mac_stats,
+ rx_1024tomax_octes_pkts), RNP_MMC_RX_1024TOMAX_BYTESB, true},
+ {"Rx Unicast Frame Num", offsetof(struct rnp_hw_mac_stats,
+ rx_unicast), RNP_MMC_RX_UCASTGB, true},
+ {"Rx Len Err Frame Num", offsetof(struct rnp_hw_mac_stats,
+ rx_len_err), RNP_MMC_RX_LENERRB, true},
+ {"Rx Len Not Equal Real data_len", offsetof(struct rnp_hw_mac_stats,
+ rx_len_invalid), RNP_MMC_RX_OUTOF_RANGE, true},
+ {"Rx Pause Frame Num", offsetof(struct rnp_hw_mac_stats,
+ rx_pause), RNP_MMC_RX_PAUSEB, true},
+ {"Rx Vlan Frame Num", offsetof(struct rnp_hw_mac_stats,
+ rx_vlan), RNP_MMC_RX_VLANGB, true},
+ {"Rx Hw Watchdog Frame Err", offsetof(struct rnp_hw_mac_stats,
+ rx_watchdog_err), RNP_MMC_RX_WDOGERRB, true},
+};
+
+static const struct rte_rnp_xstats_name_off rte_rnp_tx_mac_stats_str[] = {
+ {"Tx Good Bad Pkts Num", offsetof(struct rnp_hw_mac_stats,
+ tx_all_pkts), RNP_MMC_TX_GBFRMB, true},
+ {"Tx Good Bad Bytes", offsetof(struct rnp_hw_mac_stats,
+ tx_all_bytes), RNP_MMC_TX_GBOCTGB, true},
+ {"Tx Good Broadcast Frame Num", offsetof(struct rnp_hw_mac_stats,
+ tx_broadcast), RNP_MMC_TX_BCASTB, true},
+ {"Tx Good Multicast Frame Num", offsetof(struct rnp_hw_mac_stats,
+ tx_multicast), RNP_MMC_TX_MCASTB, true},
+ {"Tx 64Bytes Frame Num", offsetof(struct rnp_hw_mac_stats,
+ tx_64octes_pkts), RNP_MMC_TX_64_BYTESB, true},
+ {"Tx 65 To 127 Bytes Frame Num", offsetof(struct rnp_hw_mac_stats,
+ tx_65to127_octes_pkts), RNP_MMC_TX_65TO127_BYTESB, true},
+ {"Tx 128 To 255 Bytes Frame Num", offsetof(struct rnp_hw_mac_stats,
+ tx_128to255_octes_pkts), RNP_MMC_TX_128TO255_BYTEB, true},
+ {"Tx 256 To 511 Bytes Frame Num", offsetof(struct rnp_hw_mac_stats,
+ tx_256to511_octes_pkts), RNP_MMC_TX_256TO511_BYTEB, true},
+ {"Tx 512 To 1023 Bytes Frame Num", offsetof(struct rnp_hw_mac_stats,
+ tx_512to1023_octes_pkts), RNP_MMC_TX_512TO1023_BYTEB, true},
+ {"Tx Bigger Than 1024 Frame Num", offsetof(struct rnp_hw_mac_stats,
+ tx_1024tomax_octes_pkts), RNP_MMC_TX_1024TOMAX_BYTEB, true},
+ {"Tx Good And Bad Unicast Frame Num", offsetof(struct rnp_hw_mac_stats,
+ tx_all_unicast), RNP_MMC_TX_GBUCASTB, true},
+ {"Tx Good And Bad Multicast Frame Num", offsetof(struct rnp_hw_mac_stats,
+ tx_all_multicast), RNP_MMC_TX_GBMCASTB, true},
+ {"Tx Good And Bad Broadcast Frame Num", offsetof(struct rnp_hw_mac_stats,
+ tx_all_broadcast), RNP_MMC_TX_GBBCASTB, true},
+ {"Tx Underflow Frame Err Num", offsetof(struct rnp_hw_mac_stats,
+ tx_underflow_err), RNP_MMC_TX_UNDRFLWB, true},
+ {"Tx Good Frame Bytes", offsetof(struct rnp_hw_mac_stats,
+ tx_good_bytes), RNP_MMC_TX_GBYTESB, true},
+ {"Tx Good Frame Num", offsetof(struct rnp_hw_mac_stats,
+ tx_good_pkts), RNP_MMC_TX_GBFRMB, true},
+ {"Tx Pause Frame Num", offsetof(struct rnp_hw_mac_stats,
+ tx_pause_pkts), RNP_MMC_TX_PAUSEB, true},
+ {"Tx Vlan Frame Num", offsetof(struct rnp_hw_mac_stats,
+ tx_vlan_pkts), RNP_MMC_TX_VLANB, true},
+};
+
+#define RNP_NB_RX_HW_MAC_STATS (RTE_DIM(rte_rnp_rx_mac_stats_str))
+#define RNP_NB_TX_HW_MAC_STATS (RTE_DIM(rte_rnp_tx_mac_stats_str))
#define RNP_NB_RX_HW_ETH_STATS (RTE_DIM(rte_rnp_rx_eth_stats_str))
#define RNP_GET_E_HW_COUNT(stats, offset) \
((uint64_t *)(((char *)stats) + (offset)))
#define RNP_ADD_INCL_COUNT(stats, offset, val) \
((*(RNP_GET_E_HW_COUNT(stats, (offset)))) += val)
+static inline void
+rnp_store_hw_stats(struct rnp_hw_mac_stats *stats,
+ uint32_t offset, uint64_t val)
+{
+ *(uint64_t *)(((char *)stats) + offset) = val;
+}
+
+static uint32_t rnp_dev_cal_xstats_num(void)
+{
+ uint32_t cnt = RNP_NB_RX_HW_MAC_STATS + RNP_NB_TX_HW_MAC_STATS;
+
+ cnt += RNP_NB_RX_HW_ETH_STATS;
+
+ return cnt;
+}
static inline void
rnp_update_eth_stats_32bit(struct rnp_hw_eth_stats *new,
@@ -818,11 +932,33 @@ static void rnp_get_eth_count(struct rnp_hw *hw,
}
}
+static void
+rnp_get_mmc_info(struct rnp_hw *hw,
+ uint16_t lane,
+ struct rnp_hw_mac_stats *stats,
+ const struct rte_rnp_xstats_name_off *ptr)
+{
+ uint64_t count = 0;
+ uint32_t offset;
+ uint64_t hi_reg;
+
+ if (ptr->reg_base) {
+ count = RNP_MAC_REG_RD(hw, lane, ptr->reg_base);
+ if (ptr->hi_addr_en) {
+ offset = ptr->reg_base + 4;
+ hi_reg = RNP_MAC_REG_RD(hw, lane, offset);
+ count += (hi_reg << 32);
+ }
+ rnp_store_hw_stats(stats, ptr->offset, count);
+ }
+}
+
static void rnp_get_hw_stats(struct rte_eth_dev *dev)
{
struct rnp_eth_port *port = RNP_DEV_TO_PORT(dev);
struct rnp_hw_eth_stats *old = &port->eth_stats_old;
struct rnp_hw_eth_stats *new = &port->eth_stats;
+ struct rnp_hw_mac_stats *stats = &port->mac_stats;
const struct rte_rnp_xstats_name_off *ptr;
uint16_t lane = port->attr.nr_lane;
struct rnp_hw *hw = port->hw;
@@ -832,6 +968,19 @@ static void rnp_get_hw_stats(struct rte_eth_dev *dev)
ptr = &rte_rnp_rx_eth_stats_str[i];
rnp_get_eth_count(hw, lane, new, old, ptr);
}
+ for (i = 0; i < RNP_NB_RX_HW_MAC_STATS; i++) {
+ ptr = &rte_rnp_rx_mac_stats_str[i];
+ rnp_get_mmc_info(hw, lane, stats, ptr);
+ }
+ for (i = 0; i < RNP_NB_TX_HW_MAC_STATS; i++) {
+ ptr = &rte_rnp_tx_mac_stats_str[i];
+ rnp_get_mmc_info(hw, lane, stats, ptr);
+ }
+ stats->rx_good_pkts = stats->rx_all_pkts - stats->rx_crc_err -
+ stats->rx_len_err - stats->rx_watchdog_err;
+ stats->rx_bad_pkts = stats->rx_crc_err + stats->rx_len_err +
+ stats->rx_watchdog_err;
+ stats->tx_bad_pkts = stats->tx_underflow_err;
}
static int
@@ -914,6 +1063,97 @@ static void rnp_get_hw_stats(struct rte_eth_dev *dev)
return 0;
}
+static int
+rnp_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
+ unsigned int n __rte_unused)
+{
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(dev);
+ struct rnp_hw_eth_stats *eth_stats = &port->eth_stats;
+ struct rnp_hw_mac_stats *mac_stats = &port->mac_stats;
+ uint32_t count = 0;
+ uint16_t i;
+
+ if (xstats != NULL) {
+ rnp_get_hw_stats(dev);
+ for (i = 0; i < RNP_NB_RX_HW_MAC_STATS; i++) {
+ xstats[count].value = *(uint64_t *)(((char *)mac_stats) +
+ rte_rnp_rx_mac_stats_str[i].offset);
+ xstats[count].id = count;
+ count++;
+ }
+ for (i = 0; i < RNP_NB_TX_HW_MAC_STATS; i++) {
+ xstats[count].value = *(uint64_t *)(((char *)mac_stats) +
+ rte_rnp_tx_mac_stats_str[i].offset);
+ xstats[count].id = count;
+ count++;
+ }
+ for (i = 0; i < RNP_NB_RX_HW_ETH_STATS; i++) {
+ xstats[count].value = *(uint64_t *)(((char *)eth_stats) +
+ rte_rnp_rx_eth_stats_str[i].offset);
+ xstats[count].id = count;
+ count++;
+ }
+ } else {
+ return rnp_dev_cal_xstats_num();
+ }
+
+ return count;
+}
+
+static int
+rnp_dev_xstats_reset(struct rte_eth_dev *dev)
+{
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(dev);
+ uint16_t lane = port->attr.nr_lane;
+ struct rnp_hw *hw = port->hw;
+ uint32_t reg;
+
+ /* set MMC to reset the hw counters on each read */
+ reg = RNP_MAC_REG_RD(hw, lane, RNP_MMC_CTRL);
+ reg |= RNP_MMC_RSTONRD;
+ RNP_MAC_REG_WR(hw, lane, RNP_MMC_CTRL, reg);
+
+ rnp_dev_stats_reset(dev);
+ rnp_get_hw_stats(dev);
+ reg = RNP_MAC_REG_RD(hw, lane, RNP_MMC_CTRL);
+ reg &= ~RNP_MMC_RSTONRD;
+ RNP_MAC_REG_WR(hw, lane, RNP_MMC_CTRL, reg);
+
+ return 0;
+}
+
+static int
+rnp_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
+ struct rte_eth_xstat_name *xstats_names,
+ unsigned int size)
+{
+ uint32_t xstats_cnt = rnp_dev_cal_xstats_num();
+ uint32_t i, count = 0;
+
+ if (xstats_names != NULL && size >= xstats_cnt) {
+ for (i = 0; i < RNP_NB_RX_HW_MAC_STATS; i++) {
+ strlcpy(xstats_names[count].name,
+ rte_rnp_rx_mac_stats_str[i].name,
+ sizeof(xstats_names[count].name));
+ count++;
+ }
+
+ for (i = 0; i < RNP_NB_TX_HW_MAC_STATS; i++) {
+ strlcpy(xstats_names[count].name,
+ rte_rnp_tx_mac_stats_str[i].name,
+ sizeof(xstats_names[count].name));
+ count++;
+ }
+ for (i = 0; i < RNP_NB_RX_HW_ETH_STATS; i++) {
+ strlcpy(xstats_names[count].name,
+ rte_rnp_rx_eth_stats_str[i].name,
+ sizeof(xstats_names[count].name));
+ count++;
+ }
+ }
+
+ return xstats_cnt;
+}
+
/* Features supported by this driver */
static const struct eth_dev_ops rnp_eth_dev_ops = {
.dev_configure = rnp_dev_configure,
@@ -940,6 +1180,9 @@ static void rnp_get_hw_stats(struct rte_eth_dev *dev)
/* stats */
.stats_get = rnp_dev_stats_get,
.stats_reset = rnp_dev_stats_reset,
+ .xstats_get = rnp_dev_xstats_get,
+ .xstats_reset = rnp_dev_xstats_reset,
+ .xstats_get_names = rnp_dev_xstats_get_names,
/* link impl */
.link_update = rnp_dev_link_update,
.dev_set_link_up = rnp_dev_set_link_up,
--
1.8.3.1
^ permalink raw reply [flat|nested] 29+ messages in thread
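A minimal usage sketch for the xstats support above (hypothetical
application code; the helper name is illustrative, not part of the
patch). The count is queried first, then names and values fetched:

    #include <stdio.h>
    #include <stdlib.h>
    #include <inttypes.h>
    #include <rte_ethdev.h>

    /* query the xstats count first, then fetch names and values */
    static int example_dump_xstats(uint16_t port_id)
    {
            int n = rte_eth_xstats_get(port_id, NULL, 0);
            struct rte_eth_xstat_name *names;
            struct rte_eth_xstat *values;
            int i;

            if (n <= 0)
                    return n;
            names = calloc(n, sizeof(*names));
            values = calloc(n, sizeof(*values));
            if (names != NULL && values != NULL &&
                rte_eth_xstats_get_names(port_id, names, n) == n &&
                rte_eth_xstats_get(port_id, values, n) == n) {
                    for (i = 0; i < n; i++)
                            printf("%s: %" PRIu64 "\n",
                                   names[i].name, values[i].value);
            }
            free(names);
            free(values);
            return 0;
    }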
* [PATCH v7 21/28] net/rnp: add unicast MAC filter operation
2025-02-08 2:43 [PATCH v7 00/28] [v6]drivers/net Add Support mucse N10 Pmd Driver Wenbo Cao
` (19 preceding siblings ...)
2025-02-08 2:43 ` [PATCH v7 20/28] net/rnp: add support xstats operation Wenbo Cao
@ 2025-02-08 2:43 ` Wenbo Cao
2025-02-08 2:43 ` [PATCH v7 22/28] net/rnp: add supported packet types Wenbo Cao
` (6 subsequent siblings)
27 siblings, 0 replies; 29+ messages in thread
From: Wenbo Cao @ 2025-02-08 2:43 UTC (permalink / raw)
To: thomas, Wenbo Cao; +Cc: stephen, dev, ferruh.yigit, andrew.rybchenko, yaojun
Add unicast MAC filter support for single and multiple port modes.
Signed-off-by: Wenbo Cao <caowenbo@mucse.com>
---
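A minimal usage sketch (hypothetical application code; the helper name
and MAC value are illustrative) showing how the new set_rafb/clear_rafb
hooks are reached through the standard ethdev API:

    #include <rte_ethdev.h>
    #include <rte_ether.h>

    /* add, then remove, one unicast filter entry on a configured port */
    static int example_uc_filter(uint16_t port_id)
    {
            struct rte_ether_addr addr = {
                    .addr_bytes = { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55 },
            };
            int ret;

            ret = rte_eth_dev_mac_addr_add(port_id, &addr, 0 /* pool */);
            if (ret != 0)
                    return ret;
            return rte_eth_dev_mac_addr_remove(port_id, &addr);
    }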
doc/guides/nics/features/rnp.ini | 1 +
doc/guides/nics/rnp.rst | 1 +
drivers/net/rnp/base/rnp_eth_regs.h | 4 ++
drivers/net/rnp/base/rnp_hw.h | 3 ++
drivers/net/rnp/base/rnp_mac.c | 91 +++++++++++++++++++++++++++++++++++++
drivers/net/rnp/base/rnp_mac.h | 2 +
drivers/net/rnp/base/rnp_mac_regs.h | 5 +-
drivers/net/rnp/rnp.h | 4 ++
drivers/net/rnp/rnp_ethdev.c | 62 ++++++++++++++++++++++---
9 files changed, 166 insertions(+), 7 deletions(-)
diff --git a/doc/guides/nics/features/rnp.ini b/doc/guides/nics/features/rnp.ini
index c782efe..de7e72c 100644
--- a/doc/guides/nics/features/rnp.ini
+++ b/doc/guides/nics/features/rnp.ini
@@ -14,6 +14,7 @@ Queue start/stop = Y
Promiscuous mode = Y
Allmulticast mode = Y
MTU update = Y
+Unicast MAC filter = Y
RSS hash = Y
RSS key update = Y
RSS reta update = Y
diff --git a/doc/guides/nics/rnp.rst b/doc/guides/nics/rnp.rst
index ec6f3f9..3315ff7 100644
--- a/doc/guides/nics/rnp.rst
+++ b/doc/guides/nics/rnp.rst
@@ -17,6 +17,7 @@ Features
- Promiscuous mode
- Link state information
- MTU update
+- MAC filtering
- Jumbo frames
- Scatter-Gather IO support
- Port hardware statistic
diff --git a/drivers/net/rnp/base/rnp_eth_regs.h b/drivers/net/rnp/base/rnp_eth_regs.h
index ada42be..8a448b9 100644
--- a/drivers/net/rnp/base/rnp_eth_regs.h
+++ b/drivers/net/rnp/base/rnp_eth_regs.h
@@ -63,5 +63,9 @@
#define RNP_RSS_KEY_TABLE(idx) _ETH_(0x92d0 + ((idx) * 0x4))
#define RNP_TC_PORT_OFFSET(lane) _ETH_(0xe840 + 0x04 * (lane))
+/* host mac address filter */
+#define RNP_RAL_BASE_ADDR(n) _ETH_(0xA000 + (0x04 * (n)))
+#define RNP_RAH_BASE_ADDR(n) _ETH_(0xA400 + (0x04 * (n)))
+#define RNP_MAC_FILTER_EN RTE_BIT32(31)
#endif /* _RNP_ETH_REGS_H */
diff --git a/drivers/net/rnp/base/rnp_hw.h b/drivers/net/rnp/base/rnp_hw.h
index 00707b3..a1cf45a 100644
--- a/drivers/net/rnp/base/rnp_hw.h
+++ b/drivers/net/rnp/base/rnp_hw.h
@@ -71,6 +71,9 @@ struct rnp_mac_ops {
int (*get_macaddr)(struct rnp_eth_port *port, u8 *mac);
/* update mac packet filter mode */
int (*update_mpfm)(struct rnp_eth_port *port, u32 mode, bool en);
+ /* Receive Address Filter table */
+ int (*set_rafb)(struct rnp_eth_port *port, u8 *mac, u32 index);
+ int (*clear_rafb)(struct rnp_eth_port *port, u32 index);
};
struct rnp_eth_adapter;
diff --git a/drivers/net/rnp/base/rnp_mac.c b/drivers/net/rnp/base/rnp_mac.c
index 2c9499f..01929fd 100644
--- a/drivers/net/rnp/base/rnp_mac.c
+++ b/drivers/net/rnp/base/rnp_mac.c
@@ -102,14 +102,89 @@
return 0;
}
+static int
+rnp_set_mac_addr_pf(struct rnp_eth_port *port,
+ u8 *addr, u32 index)
+{
+ struct rnp_hw *hw = port->hw;
+ u32 addr_hi = 0, addr_lo = 0;
+ u8 *mac = NULL;
+
+ mac = (u8 *)&addr_hi;
+ mac[0] = addr[1];
+ mac[1] = addr[0];
+ mac = (u8 *)&addr_lo;
+ mac[0] = addr[5];
+ mac[1] = addr[4];
+ mac[2] = addr[3];
+ mac[3] = addr[2];
+ addr_hi |= RNP_MAC_FILTER_EN;
+ RNP_E_REG_WR(hw, RNP_RAH_BASE_ADDR(index), addr_hi);
+ RNP_E_REG_WR(hw, RNP_RAL_BASE_ADDR(index), addr_lo);
+
+ return 0;
+}
+
+static int
+rnp_set_mac_addr_indep(struct rnp_eth_port *port,
+ u8 *addr, u32 index)
+{
+ u16 lane = port->attr.nr_lane;
+ struct rnp_hw *hw = port->hw;
+ u32 addr_hi = 0, addr_lo = 0;
+ u8 *mac = NULL;
+
+ mac = (u8 *)&addr_lo;
+ mac[0] = addr[0];
+ mac[1] = addr[1];
+ mac[2] = addr[2];
+ mac[3] = addr[3];
+ mac = (u8 *)&addr_hi;
+ mac[0] = addr[4];
+ mac[1] = addr[5];
+
+ addr_hi |= RNP_MAC_AE;
+ RNP_MAC_REG_WR(hw, lane, RNP_MAC_ADDR_HI(index), addr_hi);
+ RNP_MAC_REG_WR(hw, lane, RNP_MAC_ADDR_LO(index), addr_lo);
+
+ return 0;
+}
+
+static int
+rnp_clear_mac_pf(struct rnp_eth_port *port, u32 index)
+{
+ struct rnp_hw *hw = port->hw;
+
+ RNP_E_REG_WR(hw, RNP_RAL_BASE_ADDR(index), 0);
+ RNP_E_REG_WR(hw, RNP_RAH_BASE_ADDR(index), 0);
+
+ return 0;
+}
+
+static int
+rnp_clear_mac_indep(struct rnp_eth_port *port, u32 index)
+{
+ u16 lane = port->attr.nr_lane;
+ struct rnp_hw *hw = port->hw;
+
+ RNP_MAC_REG_WR(hw, lane, RNP_MAC_ADDR_HI(index), 0);
+ RNP_MAC_REG_WR(hw, lane, RNP_MAC_ADDR_LO(index), 0);
+
+ return 0;
+}
+
const struct rnp_mac_ops rnp_mac_ops_pf = {
.get_macaddr = rnp_mbx_fw_get_macaddr,
.update_mpfm = rnp_update_mpfm_pf,
+ .set_rafb = rnp_set_mac_addr_pf,
+ .clear_rafb = rnp_clear_mac_pf
};
const struct rnp_mac_ops rnp_mac_ops_indep = {
.get_macaddr = rnp_mbx_fw_get_macaddr,
.update_mpfm = rnp_update_mpfm_indep,
+ .set_rafb = rnp_set_mac_addr_indep,
+ .clear_rafb = rnp_clear_mac_indep,
};
int rnp_get_mac_addr(struct rnp_eth_port *port, u8 *mac)
@@ -129,6 +204,22 @@ int rnp_update_mpfm(struct rnp_eth_port *port,
return rnp_call_hwif_impl(port, mac_ops->update_mpfm, mode, en);
}
+int rnp_set_macaddr(struct rnp_eth_port *port, u8 *mac, u32 index)
+{
+ const struct rnp_mac_ops *mac_ops =
+ RNP_DEV_PP_TO_MAC_OPS(port->eth_dev);
+
+ return rnp_call_hwif_impl(port, mac_ops->set_rafb, mac, index);
+}
+
+int rnp_clear_macaddr(struct rnp_eth_port *port, u32 index)
+{
+ const struct rnp_mac_ops *mac_ops =
+ RNP_DEV_PP_TO_MAC_OPS(port->eth_dev);
+
+ return rnp_call_hwif_impl(port, mac_ops->clear_rafb, index);
+}
+
void rnp_mac_ops_init(struct rnp_hw *hw)
{
struct rnp_proc_priv *proc_priv = RNP_DEV_TO_PROC_PRIV(hw->back->eth_dev);
diff --git a/drivers/net/rnp/base/rnp_mac.h b/drivers/net/rnp/base/rnp_mac.h
index 1dac903..865fc34 100644
--- a/drivers/net/rnp/base/rnp_mac.h
+++ b/drivers/net/rnp/base/rnp_mac.h
@@ -24,6 +24,8 @@
void rnp_mac_ops_init(struct rnp_hw *hw);
int rnp_get_mac_addr(struct rnp_eth_port *port, u8 *mac);
+int rnp_set_macaddr(struct rnp_eth_port *port, u8 *mac, u32 index);
+int rnp_clear_macaddr(struct rnp_eth_port *port, u32 index);
int rnp_update_mpfm(struct rnp_eth_port *port,
u32 mode, bool en);
diff --git a/drivers/net/rnp/base/rnp_mac_regs.h b/drivers/net/rnp/base/rnp_mac_regs.h
index 94aeba9..85308a7 100644
--- a/drivers/net/rnp/base/rnp_mac_regs.h
+++ b/drivers/net/rnp/base/rnp_mac_regs.h
@@ -77,7 +77,10 @@
#define RNP_MAC_PLSDIS RTE_BIT32(18)
/* PHY Link Status */
#define RNP_MAC_PLS RTE_BIT32(17)
-
+/* Rx macaddr filter ctrl */
+#define RNP_MAC_ADDR_HI(n) (0x0300 + ((n) * 0x8))
+#define RNP_MAC_AE RTE_BIT32(31)
+#define RNP_MAC_ADDR_LO(n) (0x0304 + ((n) * 0x8))
/* Mac Manage Counts */
#define RNP_MMC_CTRL (0x0800)
#define RNP_MMC_RSTONRD RTE_BIT32(2)
diff --git a/drivers/net/rnp/rnp.h b/drivers/net/rnp/rnp.h
index 691f9c0..eb9d44a 100644
--- a/drivers/net/rnp/rnp.h
+++ b/drivers/net/rnp/rnp.h
@@ -56,6 +56,10 @@
#define RNP_MAX_HASH_MC_MAC_SIZE (4096) /* max multicast hash mac num */
#define RNP_MAX_UC_HASH_TABLE (128) /* max unicast hash mac filter table */
#define RNP_MAC_MC_HASH_TABLE (128) /* max multicast hash mac filter table*/
+/* per port independent resources */
+#define RNP_PORT_MAX_MACADDR (32)
+#define RNP_PORT_MAX_UC_HASH_TB (8)
+#define RNP_PORT_MAX_UC_MAC_SIZE (RNP_PORT_MAX_UC_HASH_TB * 32)
/* hardware media type */
enum rnp_media_type {
RNP_MEDIA_TYPE_UNKNOWN,
diff --git a/drivers/net/rnp/rnp_ethdev.c b/drivers/net/rnp/rnp_ethdev.c
index fdbba6f..f97d12f 100644
--- a/drivers/net/rnp/rnp_ethdev.c
+++ b/drivers/net/rnp/rnp_ethdev.c
@@ -1154,6 +1154,44 @@ static void rnp_get_hw_stats(struct rte_eth_dev *dev)
return xstats_cnt;
}
+static int
+rnp_dev_mac_addr_set(struct rte_eth_dev *dev,
+ struct rte_ether_addr *mac_addr)
+{
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(dev);
+
+ return rnp_set_macaddr(port, (u8 *)mac_addr, 0);
+}
+
+static int
+rnp_dev_mac_addr_add(struct rte_eth_dev *dev,
+ struct rte_ether_addr *mac_addr,
+ uint32_t index,
+ uint32_t vmdq __rte_unused)
+{
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(dev);
+
+ if (index >= port->attr.max_mac_addrs) {
+ RNP_PMD_ERR("mac addr index %u is out of range", index);
+ return -EINVAL;
+ }
+
+ return rnp_set_macaddr(port, (u8 *)mac_addr, index);
+}
+
+static void
+rnp_dev_mac_addr_remove(struct rte_eth_dev *dev,
+ uint32_t index)
+{
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(dev);
+
+ if (index >= port->attr.max_mac_addrs) {
+ RNP_PMD_ERR("mac addr index %u is out of range", index);
+ return;
+ }
+ rnp_clear_macaddr(port, index);
+}
+
/* Features supported by this driver */
static const struct eth_dev_ops rnp_eth_dev_ops = {
.dev_configure = rnp_dev_configure,
@@ -1187,6 +1225,10 @@ static void rnp_get_hw_stats(struct rte_eth_dev *dev)
.link_update = rnp_dev_link_update,
.dev_set_link_up = rnp_dev_set_link_up,
.dev_set_link_down = rnp_dev_set_link_down,
+ /* mac address filter */
+ .mac_addr_set = rnp_dev_mac_addr_set,
+ .mac_addr_add = rnp_dev_mac_addr_add,
+ .mac_addr_remove = rnp_dev_mac_addr_remove,
};
static void
@@ -1208,12 +1250,19 @@ static void rnp_get_hw_stats(struct rte_eth_dev *dev)
attr->max_rx_queues = RNP_MAX_RX_QUEUE_NUM / hw->max_port_num;
attr->max_tx_queues = RNP_MAX_TX_QUEUE_NUM / hw->max_port_num;
-
- attr->max_mac_addrs = RNP_MAX_MAC_ADDRS;
- attr->max_uc_mac_hash = RNP_MAX_HASH_UC_MAC_SIZE;
- attr->max_mc_mac_hash = RNP_MAX_HASH_MC_MAC_SIZE;
- attr->uc_hash_tb_size = RNP_MAX_UC_HASH_TABLE;
- attr->mc_hash_tb_size = RNP_MAC_MC_HASH_TABLE;
+ if (hw->nic_mode > RNP_SINGLE_10G) {
+ attr->max_mac_addrs = RNP_PORT_MAX_MACADDR;
+ attr->max_uc_mac_hash = RNP_PORT_MAX_UC_MAC_SIZE;
+ attr->max_mc_mac_hash = 0;
+ attr->uc_hash_tb_size = RNP_PORT_MAX_UC_HASH_TB;
+ attr->mc_hash_tb_size = 0;
+ } else {
+ attr->max_mac_addrs = RNP_MAX_MAC_ADDRS;
+ attr->max_uc_mac_hash = RNP_MAX_HASH_UC_MAC_SIZE;
+ attr->max_mc_mac_hash = RNP_MAX_HASH_MC_MAC_SIZE;
+ attr->uc_hash_tb_size = RNP_MAX_UC_HASH_TABLE;
+ attr->mc_hash_tb_size = RNP_MAC_MC_HASH_TABLE;
+ }
rnp_mbx_fw_get_lane_stat(port);
RNP_PMD_INFO("PF[%d] SW-ETH-PORT[%d]<->PHY_LANE[%d]\n",
@@ -1256,6 +1305,7 @@ static void rnp_get_hw_stats(struct rte_eth_dev *dev)
rte_eth_random_addr(port->mac_addr.addr_bytes);
}
rte_ether_addr_copy(&port->mac_addr, ð_dev->data->mac_addrs[0]);
+ rnp_set_macaddr(port, (u8 *)&port->mac_addr, 0);
rte_spinlock_init(&port->rx_mac_lock);
adapter->ports[p_id] = port;
--
1.8.3.1
^ permalink raw reply [flat|nested] 29+ messages in thread
* [PATCH v7 22/28] net/rnp: add supported packet types
2025-02-08 2:43 [PATCH v7 00/28] [v6]drivers/net Add Support mucse N10 Pmd Driver Wenbo Cao
` (20 preceding siblings ...)
2025-02-08 2:43 ` [PATCH v7 21/28] net/rnp: add unicast MAC filter operation Wenbo Cao
@ 2025-02-08 2:43 ` Wenbo Cao
2025-02-08 2:44 ` [PATCH v7 23/28] net/rnp: add support Rx checksum offload Wenbo Cao
` (5 subsequent siblings)
27 siblings, 0 replies; 29+ messages in thread
From: Wenbo Cao @ 2025-02-08 2:43 UTC (permalink / raw)
To: thomas, Wenbo Cao; +Cc: stephen, dev, ferruh.yigit, andrew.rybchenko, yaojun
Add support for parsing hardware packet type results.
Signed-off-by: Wenbo Cao <caowenbo@mucse.com>
---
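A minimal usage sketch (hypothetical application code; the helper name
is illustrative) of consuming the packet types that rnp_dev_rx_parse()
fills in from the Rx descriptor:

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    /* count TCP packets in a received burst using the parsed ptype */
    static uint64_t example_count_tcp(struct rte_mbuf **pkts, uint16_t nb)
    {
            uint64_t tcp = 0;
            uint16_t i;

            for (i = 0; i < nb; i++) {
                    if ((pkts[i]->packet_type & RTE_PTYPE_L4_MASK) ==
                        RTE_PTYPE_L4_TCP)
                            tcp++;
            }
            return tcp;
    }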
doc/guides/nics/features/rnp.ini | 1 +
doc/guides/nics/rnp.rst | 1 +
drivers/net/rnp/base/rnp_bdq_if.h | 4 ++++
drivers/net/rnp/rnp_rxtx.c | 45 +++++++++++++++++++++++++++++++++++++++
4 files changed, 51 insertions(+)
diff --git a/doc/guides/nics/features/rnp.ini b/doc/guides/nics/features/rnp.ini
index de7e72c..b81f11d 100644
--- a/doc/guides/nics/features/rnp.ini
+++ b/doc/guides/nics/features/rnp.ini
@@ -7,6 +7,7 @@
Speed capabilities = Y
Link status = Y
Link status event = Y
+Packet type parsing = Y
Basic stats = Y
Stats per queue = Y
Extended stats = Y
diff --git a/doc/guides/nics/rnp.rst b/doc/guides/nics/rnp.rst
index 3315ff7..39ea2d1 100644
--- a/doc/guides/nics/rnp.rst
+++ b/doc/guides/nics/rnp.rst
@@ -21,6 +21,7 @@ Features
- Jumbo frames
- Scatter-Gather IO support
- Port hardware statistic
+- Packet type parsing
Prerequisites
-------------
diff --git a/drivers/net/rnp/base/rnp_bdq_if.h b/drivers/net/rnp/base/rnp_bdq_if.h
index 61a3832..a7d27bd 100644
--- a/drivers/net/rnp/base/rnp_bdq_if.h
+++ b/drivers/net/rnp/base/rnp_bdq_if.h
@@ -73,6 +73,7 @@ struct rnp_tx_desc {
#define RNP_RX_L3TYPE_IPV4 (0x00UL << RNP_RX_L3TYPE_S)
#define RNP_RX_L3TYPE_IPV6 (0x01UL << RNP_RX_L3TYPE_S)
#define RNP_RX_L4TYPE_S (6)
+#define RNP_RX_L4TYPE_MASK RTE_GENMASK32(7, 6)
#define RNP_RX_L4TYPE_TCP (0x01UL << RNP_RX_L4TYPE_S)
#define RNP_RX_L4TYPE_SCTP (0x02UL << RNP_RX_L4TYPE_S)
#define RNP_RX_L4TYPE_UDP (0x03UL << RNP_RX_L4TYPE_S)
@@ -83,9 +84,12 @@ struct rnp_tx_desc {
#define RNP_RX_IN_L3_ERR RTE_BIT32(11)
#define RNP_RX_IN_L4_ERR RTE_BIT32(12)
#define RNP_RX_TUNNEL_TYPE_S (13)
+#define RNP_RX_TUNNEL_MASK RTE_GENMASK32(14, 13)
#define RNP_RX_PTYPE_VXLAN (0x01UL << RNP_RX_TUNNEL_TYPE_S)
#define RNP_RX_PTYPE_NVGRE (0x02UL << RNP_RX_TUNNEL_TYPE_S)
#define RNP_RX_PTYPE_VLAN RTE_BIT32(15)
+/* mark_data */
+#define RNP_RX_L3TYPE_VALID RTE_BIT32(31)
/* tx data cmd */
#define RNP_TX_TSO_EN RTE_BIT32(4)
#define RNP_TX_L3TYPE_S (5)
diff --git a/drivers/net/rnp/rnp_rxtx.c b/drivers/net/rnp/rnp_rxtx.c
index c351fee..229c97f 100644
--- a/drivers/net/rnp/rnp_rxtx.c
+++ b/drivers/net/rnp/rnp_rxtx.c
@@ -644,6 +644,49 @@ int rnp_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx)
return 0;
}
+static __rte_always_inline void
+rnp_dev_rx_parse(struct rnp_rx_queue *rxq __rte_unused,
+ struct rte_mbuf *m,
+ volatile struct rnp_rx_desc rxbd)
+{
+ uint32_t mark_data = rxbd.wb.qword0.mark_data;
+ uint16_t vlan_tci = rxbd.wb.qword1.vlan_tci;
+ uint32_t cmd = rxbd.wb.qword1.cmd;
+
+ /* clear mbuf packet_type and ol_flags */
+ m->packet_type = 0;
+ m->ol_flags = 0;
+ if (mark_data & RNP_RX_L3TYPE_VALID) {
+ if (cmd & RNP_RX_L3TYPE_IPV6)
+ m->packet_type |= RTE_PTYPE_L3_IPV6;
+ else
+ m->packet_type |= RTE_PTYPE_L3_IPV4;
+ }
+ if (vlan_tci)
+ m->packet_type |= RTE_PTYPE_L2_ETHER_VLAN;
+ switch (cmd & RNP_RX_L4TYPE_MASK) {
+ case RNP_RX_L4TYPE_UDP:
+ m->packet_type |= RTE_PTYPE_L4_UDP;
+ break;
+ case RNP_RX_L4TYPE_TCP:
+ m->packet_type |= RTE_PTYPE_L4_TCP;
+ break;
+ case RNP_RX_L4TYPE_SCTP:
+ m->packet_type |= RTE_PTYPE_L4_SCTP;
+ break;
+ }
+ switch (cmd & RNP_RX_TUNNEL_MASK) {
+ case RNP_RX_PTYPE_VXLAN:
+ m->packet_type |= RTE_PTYPE_TUNNEL_VXLAN;
+ break;
+ case RNP_RX_PTYPE_NVGRE:
+ m->packet_type |= RTE_PTYPE_TUNNEL_NVGRE;
+ break;
+ }
+ if (!(m->packet_type & RTE_PTYPE_L2_MASK))
+ m->packet_type |= RTE_PTYPE_L2_ETHER;
+}
+
#define RNP_CACHE_FETCH_RX (4)
static __rte_always_inline int
rnp_refill_rx_ring(struct rnp_rx_queue *rxq)
@@ -742,6 +785,7 @@ int rnp_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx)
nmb->ol_flags = 0;
nmb->nb_segs = 1;
+ rnp_dev_rx_parse(rxq, nmb, rxbd[j]);
rxq->stats.ibytes += nmb->data_len;
}
for (j = 0; j < nb_dd; ++j) {
@@ -941,6 +985,7 @@ int rnp_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx)
}
rxm->next = NULL;
first_seg->port = rxq->attr.port_id;
+ rnp_dev_rx_parse(rxq, first_seg, rxd);
rxq->stats.ibytes += first_seg->pkt_len;
/* this the end of packet the large pkt has been recv finish */
rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr,
--
1.8.3.1
^ permalink raw reply [flat|nested] 29+ messages in thread
* [PATCH v7 23/28] net/rnp: add support Rx checksum offload
2025-02-08 2:43 [PATCH v7 00/28] [v6]drivers/net Add Support mucse N10 Pmd Driver Wenbo Cao
` (21 preceding siblings ...)
2025-02-08 2:43 ` [PATCH v7 22/28] net/rnp: add supported packet types Wenbo Cao
@ 2025-02-08 2:44 ` Wenbo Cao
2025-02-08 2:44 ` [PATCH v7 24/28] net/rnp: add support Tx TSO offload Wenbo Cao
` (4 subsequent siblings)
27 siblings, 0 replies; 29+ messages in thread
From: Wenbo Cao @ 2025-02-08 2:44 UTC (permalink / raw)
To: thomas, Wenbo Cao; +Cc: stephen, dev, ferruh.yigit, andrew.rybchenko, yaojun
Add support for Rx L3/L4 checksum offload, including tunnel
inner L3/L4 and outer L3 checksum.
Signed-off-by: Wenbo Cao <caowenbo@mucse.com>
---
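A minimal usage sketch (hypothetical application code; queue counts are
illustrative) of requesting these Rx checksum offloads at configure time
and checking the per-packet verdict:

    #include <string.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    static int example_enable_rx_cksum(uint16_t port_id)
    {
            struct rte_eth_conf conf;

            memset(&conf, 0, sizeof(conf));
            conf.rxmode.offloads = RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
                                   RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
                                   RTE_ETH_RX_OFFLOAD_UDP_CKSUM;
            return rte_eth_dev_configure(port_id, 1, 1, &conf);
    }

    /* after rte_eth_rx_burst(), a bad L3 checksum shows up as: */
    static inline int example_ip_cksum_bad(const struct rte_mbuf *m)
    {
            return (m->ol_flags & RTE_MBUF_F_RX_IP_CKSUM_MASK) ==
                   RTE_MBUF_F_RX_IP_CKSUM_BAD;
    }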
doc/guides/nics/features/rnp.ini | 4 ++
doc/guides/nics/rnp.rst | 1 +
drivers/net/rnp/base/rnp_eth_regs.h | 13 +++++
drivers/net/rnp/rnp.h | 7 +++
drivers/net/rnp/rnp_ethdev.c | 65 ++++++++++++++++++++++++-
drivers/net/rnp/rnp_rxtx.c | 97 ++++++++++++++++++++++++++++++++++++-
6 files changed, 185 insertions(+), 2 deletions(-)
diff --git a/doc/guides/nics/features/rnp.ini b/doc/guides/nics/features/rnp.ini
index b81f11d..7e97da9 100644
--- a/doc/guides/nics/features/rnp.ini
+++ b/doc/guides/nics/features/rnp.ini
@@ -8,6 +8,10 @@ Speed capabilities = Y
Link status = Y
Link status event = Y
Packet type parsing = Y
+L3 checksum offload = P
+L4 checksum offload = P
+Inner L3 checksum = P
+Inner L4 checksum = P
Basic stats = Y
Stats per queue = Y
Extended stats = Y
diff --git a/doc/guides/nics/rnp.rst b/doc/guides/nics/rnp.rst
index 39ea2d1..8f667a4 100644
--- a/doc/guides/nics/rnp.rst
+++ b/doc/guides/nics/rnp.rst
@@ -22,6 +22,7 @@ Features
- Scatter-Gather IO support
- Port hardware statistic
- Packet type parsing
+- Checksum offload
Prerequisites
-------------
diff --git a/drivers/net/rnp/base/rnp_eth_regs.h b/drivers/net/rnp/base/rnp_eth_regs.h
index 8a448b9..b0961a1 100644
--- a/drivers/net/rnp/base/rnp_eth_regs.h
+++ b/drivers/net/rnp/base/rnp_eth_regs.h
@@ -16,6 +16,19 @@
#define RNP_RX_ETH_F_CTRL(n) _ETH_(0x8070 + ((n) * 0x8))
#define RNP_RX_ETH_F_OFF (0x7ff)
#define RNP_RX_ETH_F_ON (0x270)
+/* rx checksum ctrl */
+#define RNP_HW_SCTP_CKSUM_CTRL _ETH_(0x8038)
+#define RNP_HW_CHECK_ERR_CTRL _ETH_(0x8060)
+#define RNP_HW_ERR_HDR_LEN RTE_BIT32(0)
+#define RNP_HW_ERR_PKTLEN RTE_BIT32(1)
+#define RNP_HW_L3_CKSUM_ERR RTE_BIT32(2)
+#define RNP_HW_L4_CKSUM_ERR RTE_BIT32(3)
+#define RNP_HW_SCTP_CKSUM_ERR RTE_BIT32(4)
+#define RNP_HW_INNER_L3_CKSUM_ERR RTE_BIT32(5)
+#define RNP_HW_INNER_L4_CKSUM_ERR RTE_BIT32(6)
+#define RNP_HW_CKSUM_ERR_MASK RTE_GENMASK32(6, 2)
+#define RNP_HW_CHECK_ERR_MASK RTE_GENMASK32(6, 0)
+#define RNP_HW_ERR_RX_ALL_MASK RTE_GENMASK32(1, 0)
/* max/min pkts length receive limit ctrl */
#define RNP_MIN_FRAME_CTRL _ETH_(0x80f0)
#define RNP_MAX_FRAME_CTRL _ETH_(0x80f4)
diff --git a/drivers/net/rnp/rnp.h b/drivers/net/rnp/rnp.h
index eb9d44a..702bbd0 100644
--- a/drivers/net/rnp/rnp.h
+++ b/drivers/net/rnp/rnp.h
@@ -42,6 +42,13 @@
RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
RTE_ETH_RSS_IPV6_UDP_EX | \
RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
+/* rx checksum offload */
+#define RNP_RX_CHECKSUM_SUPPORT ( \
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM)
/* Ring info special */
#define RNP_MAX_BD_COUNT (4096)
#define RNP_MIN_BD_COUNT (128)
diff --git a/drivers/net/rnp/rnp_ethdev.c b/drivers/net/rnp/rnp_ethdev.c
index f97d12f..5886894 100644
--- a/drivers/net/rnp/rnp_ethdev.c
+++ b/drivers/net/rnp/rnp_ethdev.c
@@ -405,6 +405,67 @@ static int rnp_disable_all_tx_queue(struct rte_eth_dev *dev)
return ret;
}
+static void rnp_set_rx_cksum_offload(struct rte_eth_dev *dev)
+{
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(dev);
+ struct rnp_hw *hw = port->hw;
+ uint32_t cksum_ctrl;
+ uint64_t offloads;
+
+ offloads = dev->data->dev_conf.rxmode.offloads;
+ cksum_ctrl = RNP_HW_CHECK_ERR_MASK;
+ /* enable rx checksum feature */
+ if (!rnp_pf_is_multiple_ports(hw->device_id)) {
+ if (offloads & RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM) {
+ /* tunnel mode l4 cksum option */
+ cksum_ctrl &= ~RNP_HW_L4_CKSUM_ERR;
+ if (offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM))
+ cksum_ctrl &= ~RNP_HW_INNER_L4_CKSUM_ERR;
+ else
+ cksum_ctrl |= RNP_HW_INNER_L4_CKSUM_ERR;
+ } else {
+ /* non-tunnel mode l4 cksum option */
+ cksum_ctrl |= RNP_HW_INNER_L4_CKSUM_ERR;
+ if (offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM))
+ cksum_ctrl &= ~RNP_HW_L4_CKSUM_ERR;
+ else
+ cksum_ctrl |= RNP_HW_L4_CKSUM_ERR;
+ }
+ if (offloads & RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM) {
+ /* tunnel mode l3 cksum option */
+ cksum_ctrl &= ~RNP_HW_L3_CKSUM_ERR;
+ if (offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM)
+ cksum_ctrl &= ~RNP_HW_INNER_L3_CKSUM_ERR;
+ else
+ cksum_ctrl |= RNP_HW_INNER_L3_CKSUM_ERR;
+ } else {
+ /* non-tunnel mode l3 cksum option */
+ cksum_ctrl |= RNP_HW_INNER_L3_CKSUM_ERR;
+ if (offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM)
+ cksum_ctrl &= ~RNP_HW_L3_CKSUM_ERR;
+ else
+ cksum_ctrl |= RNP_HW_L3_CKSUM_ERR;
+ }
+ /* sctp option */
+ if (offloads & RTE_ETH_RX_OFFLOAD_SCTP_CKSUM) {
+ cksum_ctrl &= ~RNP_HW_SCTP_CKSUM_ERR;
+ RNP_E_REG_WR(hw, RNP_HW_SCTP_CKSUM_CTRL, true);
+ } else {
+ RNP_E_REG_WR(hw, RNP_HW_SCTP_CKSUM_CTRL, false);
+ }
+ RNP_E_REG_WR(hw, RNP_HW_CHECK_ERR_CTRL, cksum_ctrl);
+ } else {
+ /* Enable all supported checksum features.
+ * In multiple port mode, per-port rx checksum
+ * enable/disable is handled in software.
+ */
+ RNP_E_REG_WR(hw, RNP_HW_CHECK_ERR_CTRL, RNP_HW_ERR_RX_ALL_MASK);
+ RNP_E_REG_WR(hw, RNP_HW_SCTP_CKSUM_CTRL, true);
+ }
+}
+
static int rnp_dev_configure(struct rte_eth_dev *eth_dev)
{
struct rnp_eth_port *port = RNP_DEV_TO_PORT(eth_dev);
@@ -414,6 +475,7 @@ static int rnp_dev_configure(struct rte_eth_dev *eth_dev)
else
port->rxq_num_changed = false;
port->last_rx_num = eth_dev->data->nb_rx_queues;
+ rnp_set_rx_cksum_offload(eth_dev);
return 0;
}
@@ -586,7 +648,8 @@ static int rnp_dev_infos_get(struct rte_eth_dev *eth_dev,
dev_info->reta_size = RNP_RSS_INDIR_SIZE;
/* speed cap info */
dev_info->speed_capa = rnp_get_speed_caps(eth_dev);
-
+ /* rx support offload cap */
+ dev_info->rx_offload_capa = RNP_RX_CHECKSUM_SUPPORT;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
.rx_drop_en = 0,
.rx_thresh = {
diff --git a/drivers/net/rnp/rnp_rxtx.c b/drivers/net/rnp/rnp_rxtx.c
index 229c97f..5493da4 100644
--- a/drivers/net/rnp/rnp_rxtx.c
+++ b/drivers/net/rnp/rnp_rxtx.c
@@ -644,8 +644,102 @@ int rnp_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx)
return 0;
}
+struct rnp_rx_cksum_parse {
+ uint64_t offloads;
+ uint64_t packet_type;
+ uint16_t hw_offload;
+ uint64_t good;
+ uint64_t bad;
+};
+
+#define RNP_RX_OFFLOAD_L4_CKSUM (RTE_ETH_RX_OFFLOAD_TCP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
+ RTE_ETH_RX_OFFLOAD_SCTP_CKSUM)
+static const struct rnp_rx_cksum_parse rnp_rx_cksum_tunnel[] = {
+ { RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM,
+ RTE_PTYPE_L3_IPV4 | RTE_PTYPE_TUNNEL_MASK, RNP_RX_L3_ERR,
+ RTE_MBUF_F_RX_IP_CKSUM_GOOD, RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD
+ },
+ { RTE_ETH_RX_OFFLOAD_IPV4_CKSUM,
+ RTE_PTYPE_L3_IPV4, RNP_RX_IN_L3_ERR,
+ RTE_MBUF_F_RX_IP_CKSUM_GOOD, RTE_MBUF_F_RX_IP_CKSUM_BAD
+ },
+ { RNP_RX_OFFLOAD_L4_CKSUM, RTE_PTYPE_L4_MASK,
+ RNP_RX_IN_L4_ERR | RNP_RX_SCTP_ERR,
+ RTE_MBUF_F_RX_L4_CKSUM_GOOD, RTE_MBUF_F_RX_L4_CKSUM_BAD
+ }
+};
+
+static const struct rnp_rx_cksum_parse rnp_rx_cksum[] = {
+ { RTE_ETH_RX_OFFLOAD_IPV4_CKSUM,
+ RTE_PTYPE_L3_IPV4, RNP_RX_L3_ERR,
+ RTE_MBUF_F_RX_IP_CKSUM_GOOD, RTE_MBUF_F_RX_IP_CKSUM_BAD
+ },
+ { RNP_RX_OFFLOAD_L4_CKSUM,
+ RTE_PTYPE_L4_MASK, RNP_RX_L4_ERR | RNP_RX_SCTP_ERR,
+ RTE_MBUF_F_RX_L4_CKSUM_GOOD, RTE_MBUF_F_RX_L4_CKSUM_BAD
+ }
+};
+
+static void
+rnp_rx_parse_tunnel_cksum(struct rnp_rx_queue *rxq,
+ struct rte_mbuf *m, uint16_t cksum_cmd)
+{
+ uint16_t idx = 0;
+
+ for (idx = 0; idx < RTE_DIM(rnp_rx_cksum_tunnel); idx++) {
+ if (rxq->rx_offloads & rnp_rx_cksum_tunnel[idx].offloads &&
+ m->packet_type & rnp_rx_cksum_tunnel[idx].packet_type) {
+ if (cksum_cmd & rnp_rx_cksum_tunnel[idx].hw_offload)
+ m->ol_flags |= rnp_rx_cksum_tunnel[idx].bad;
+ else
+ m->ol_flags |= rnp_rx_cksum_tunnel[idx].good;
+ }
+ }
+}
+
+static void
+rnp_rx_parse_cksum(struct rnp_rx_queue *rxq,
+ struct rte_mbuf *m, uint16_t cksum_cmd)
+{
+ uint16_t idx = 0;
+
+ for (idx = 0; idx < RTE_DIM(rnp_rx_cksum); idx++) {
+ if (rxq->rx_offloads & rnp_rx_cksum[idx].offloads &&
+ m->packet_type & rnp_rx_cksum[idx].packet_type) {
+ if (cksum_cmd & rnp_rx_cksum[idx].hw_offload)
+ m->ol_flags |= rnp_rx_cksum[idx].bad;
+ else
+ m->ol_flags |= rnp_rx_cksum[idx].good;
+ }
+ }
+}
+
+static __rte_always_inline void
+rnp_dev_rx_offload(struct rnp_rx_queue *rxq,
+ struct rte_mbuf *m,
+ volatile struct rnp_rx_desc rxbd)
+{
+ uint32_t rss = rte_le_to_cpu_32(rxbd.wb.qword0.rss_hash);
+ uint16_t cmd = rxbd.wb.qword1.cmd;
+
+ if (rxq->rx_offloads & RNP_RX_CHECKSUM_SUPPORT) {
+ if (m->packet_type & RTE_PTYPE_TUNNEL_MASK) {
+ rnp_rx_parse_tunnel_cksum(rxq, m, cmd);
+ } else {
+ if (m->packet_type & RTE_PTYPE_L3_MASK ||
+ m->packet_type & RTE_PTYPE_L4_MASK)
+ rnp_rx_parse_cksum(rxq, m, cmd);
+ }
+ }
+ if (rxq->rx_offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH && rss) {
+ m->hash.rss = rss;
+ m->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
+ }
+}
+
static __rte_always_inline void
-rnp_dev_rx_parse(struct rnp_rx_queue *rxq __rte_unused,
+rnp_dev_rx_parse(struct rnp_rx_queue *rxq,
struct rte_mbuf *m,
volatile struct rnp_rx_desc rxbd)
{
@@ -685,6 +779,7 @@ int rnp_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx)
}
if (!(m->packet_type & RTE_PTYPE_L2_MASK))
m->packet_type |= RTE_PTYPE_L2_ETHER;
+ rnp_dev_rx_offload(rxq, m, rxbd);
}
#define RNP_CACHE_FETCH_RX (4)
--
1.8.3.1
^ permalink raw reply [flat|nested] 29+ messages in thread
* [PATCH v7 24/28] net/rnp: add support Tx TSO offload
2025-02-08 2:43 [PATCH v7 00/28] [v6]drivers/net Add Support mucse N10 Pmd Driver Wenbo Cao
` (22 preceding siblings ...)
2025-02-08 2:44 ` [PATCH v7 23/28] net/rnp: add support Rx checksum offload Wenbo Cao
@ 2025-02-08 2:44 ` Wenbo Cao
2025-02-08 2:44 ` [PATCH v7 25/28] net/rnp: support VLAN offloads Wenbo Cao
` (3 subsequent siblings)
27 siblings, 0 replies; 29+ messages in thread
From: Wenbo Cao @ 2025-02-08 2:44 UTC (permalink / raw)
To: thomas, Wenbo Cao; +Cc: stephen, dev, ferruh.yigit, andrew.rybchenko, yaojun
Add support for Tx TSO and tunnel TSO.
Only VXLAN/NVGRE tunnels are supported.
Signed-off-by: Wenbo Cao <caowenbo@mucse.com>
---
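A minimal usage sketch (hypothetical application code; the MSS value is
illustrative) of the mbuf fields that rnp_tx_pkt_prepare() and
rnp_setup_tx_offload() consume for TSO:

    #include <rte_ether.h>
    #include <rte_ip.h>
    #include <rte_tcp.h>
    #include <rte_mbuf.h>

    /* request TCP segmentation offload on an IPv4/TCP mbuf chain */
    static void example_request_tso(struct rte_mbuf *m)
    {
            m->ol_flags |= RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
                           RTE_MBUF_F_TX_TCP_SEG;
            m->l2_len = sizeof(struct rte_ether_hdr);
            m->l3_len = sizeof(struct rte_ipv4_hdr);
            m->l4_len = sizeof(struct rte_tcp_hdr);
            m->tso_segsz = 1460; /* MSS */
    }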
drivers/net/rnp/base/rnp_bdq_if.h | 1 +
drivers/net/rnp/rnp.h | 2 +-
drivers/net/rnp/rnp_ethdev.c | 16 ++
drivers/net/rnp/rnp_rxtx.c | 457 +++++++++++++++++++++++++++++++++++++-
drivers/net/rnp/rnp_rxtx.h | 1 +
5 files changed, 471 insertions(+), 6 deletions(-)
diff --git a/drivers/net/rnp/base/rnp_bdq_if.h b/drivers/net/rnp/base/rnp_bdq_if.h
index a7d27bd..7a6d0b2 100644
--- a/drivers/net/rnp/base/rnp_bdq_if.h
+++ b/drivers/net/rnp/base/rnp_bdq_if.h
@@ -111,6 +111,7 @@ struct rnp_tx_desc {
#define RNP_TX_VLAN_VALID RTE_BIT32(15)
/* tx data mac_ip len */
#define RNP_TX_MAC_LEN_S (9)
+#define RNP_TX_MAC_LEN_MASK RTE_GENMASK32(15, 9)
/* tx ctrl cmd */
#define RNP_TX_LEN_PAD_S (8)
#define RNP_TX_OFF_MAC_PAD (0x01UL << RNP_TX_LEN_PAD_S)
diff --git a/drivers/net/rnp/rnp.h b/drivers/net/rnp/rnp.h
index 702bbd0..d0afef3 100644
--- a/drivers/net/rnp/rnp.h
+++ b/drivers/net/rnp/rnp.h
@@ -17,7 +17,7 @@
#define RNP_ETH_OVERHEAD \
(RTE_ETHER_HDR_LEN + RTE_VLAN_HLEN * 2)
#define RNP_MAC_MAXFRM_SIZE (9590)
-
+#define RNP_MAX_TSO_PKT (16 * 1024)
#define RNP_RX_MAX_MTU_SEG (64)
#define RNP_TX_MAX_MTU_SEG (32)
#define RNP_RX_MAX_SEG (150)
diff --git a/drivers/net/rnp/rnp_ethdev.c b/drivers/net/rnp/rnp_ethdev.c
index 5886894..47d4771 100644
--- a/drivers/net/rnp/rnp_ethdev.c
+++ b/drivers/net/rnp/rnp_ethdev.c
@@ -650,6 +650,17 @@ static int rnp_dev_infos_get(struct rte_eth_dev *eth_dev,
dev_info->speed_capa = rnp_get_speed_caps(eth_dev);
/* rx support offload cap */
dev_info->rx_offload_capa = RNP_RX_CHECKSUM_SUPPORT;
+ /* tx support offload cap */
+ dev_info->tx_offload_capa = 0 |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
.rx_drop_en = 0,
.rx_thresh = {
@@ -1083,13 +1094,18 @@ static void rnp_get_hw_stats(struct rte_eth_dev *dev)
(data->tx_queues))[i]->stats.opackets;
stats->q_obytes[i] = ((struct rnp_tx_queue **)
(data->tx_queues))[i]->stats.obytes;
+ stats->oerrors += ((struct rnp_tx_queue **)
+ (data->tx_queues))[i]->stats.errors;
stats->opackets += stats->q_opackets[i];
stats->obytes += stats->q_obytes[i];
} else {
stats->opackets += ((struct rnp_tx_queue **)
(data->tx_queues))[i]->stats.opackets;
stats->obytes += ((struct rnp_tx_queue **)
(data->tx_queues))[i]->stats.obytes;
+ stats->oerrors += ((struct rnp_tx_queue **)
+ (data->tx_queues))[i]->stats.errors;
}
}
stats->imissed = eth_stats->rx_trans_drop + eth_stats->rx_trunc_drop;
diff --git a/drivers/net/rnp/rnp_rxtx.c b/drivers/net/rnp/rnp_rxtx.c
index 5493da4..bacbfca 100644
--- a/drivers/net/rnp/rnp_rxtx.c
+++ b/drivers/net/rnp/rnp_rxtx.c
@@ -1130,6 +1130,198 @@ struct rnp_rx_cksum_parse {
return txq->nb_tx_free;
}
+static inline uint32_t
+rnp_cal_tso_seg(struct rte_mbuf *mbuf)
+{
+ uint32_t hdr_len;
+
+ hdr_len = mbuf->l2_len + mbuf->l3_len + mbuf->l4_len;
+
+ hdr_len += (mbuf->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) ?
+ mbuf->outer_l2_len + mbuf->outer_l3_len : 0;
+
+ return (mbuf->tso_segsz) ? mbuf->tso_segsz : hdr_len;
+}
+
+static inline bool
+rnp_need_ctrl_desc(uint64_t flags)
+{
+ static uint64_t mask = RTE_MBUF_F_TX_OUTER_IP_CKSUM |
+ RTE_MBUF_F_TX_TCP_SEG |
+ RTE_MBUF_F_TX_TUNNEL_VXLAN |
+ RTE_MBUF_F_TX_TUNNEL_GRE;
+ return (flags & mask) ? 1 : 0;
+}
+
+static void
+rnp_build_tx_control_desc(struct rnp_tx_queue *txq,
+ volatile struct rnp_tx_desc *txbd,
+ struct rte_mbuf *mbuf)
+{
+ struct rte_gre_hdr *gre_hdr;
+ uint16_t tunnel_len = 0;
+ uint64_t flags;
+
+ *txbd = txq->zero_desc;
+ /* For outer checksum offload, l2_len is the
+ * L2 (MAC) header length of a non-tunnel packet.
+ * For inner checksum offload, l2_len is
+ * Outer_L4_len + ... + Inner_L2_len (inner L2 header length)
+ * of a tunnel packet.
+ */
+ if (!mbuf)
+ return;
+ flags = mbuf->ol_flags;
+ if (flags & RTE_MBUF_F_TX_TCP_SEG) {
+ txbd->c.qword0.mss = rnp_cal_tso_seg(mbuf);
+ txbd->c.qword0.l4_len = mbuf->l4_len;
+ }
+#define GRE_TUNNEL_KEY (4)
+#define GRE_TUNNEL_SEQ (4)
+ switch (flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
+ case RTE_MBUF_F_TX_TUNNEL_VXLAN:
+ tunnel_len = mbuf->outer_l2_len + mbuf->outer_l3_len +
+ sizeof(struct rte_udp_hdr) +
+ sizeof(struct rte_vxlan_hdr);
+ break;
+ case RTE_MBUF_F_TX_TUNNEL_GRE:
+ gre_hdr = rte_pktmbuf_mtod_offset(mbuf, struct rte_gre_hdr *,
+ mbuf->outer_l2_len + mbuf->outer_l3_len);
+ tunnel_len = mbuf->outer_l2_len + mbuf->outer_l3_len +
+ sizeof(struct rte_gre_hdr);
+ if (gre_hdr->k)
+ tunnel_len += GRE_TUNNEL_KEY;
+ if (gre_hdr->s)
+ tunnel_len += GRE_TUNNEL_SEQ;
+ break;
+ }
+ txbd->c.qword0.tunnel_len = tunnel_len;
+ txbd->c.qword1.cmd |= RNP_CTRL_DESC;
+}
+
+static void
+rnp_padding_hdr_len(volatile struct rnp_tx_desc *txbd,
+ struct rte_mbuf *m)
+{
+ struct rte_ether_hdr *eth_hdr = NULL;
+ struct rte_vlan_hdr *vlan_hdr = NULL;
+ int ethertype, l2_len;
+ uint16_t l3_len = 0;
+
+ if (m->l2_len == 0) {
+ eth_hdr = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
+ l2_len = RTE_ETHER_HDR_LEN;
+ ethertype = rte_be_to_cpu_16(eth_hdr->ether_type);
+ if (ethertype == RTE_ETHER_TYPE_VLAN) {
+ vlan_hdr = (struct rte_vlan_hdr *)(eth_hdr + 1);
+ l2_len += RTE_VLAN_HLEN;
+ ethertype = rte_be_to_cpu_16(vlan_hdr->eth_proto);
+ }
+ switch (ethertype) {
+ case RTE_ETHER_TYPE_IPV4:
+ l3_len = sizeof(struct rte_ipv4_hdr);
+ break;
+ case RTE_ETHER_TYPE_IPV6:
+ l3_len = sizeof(struct rte_ipv6_hdr);
+ break;
+ }
+ } else {
+ l2_len = m->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK ?
+ m->outer_l2_len : m->l2_len;
+ l3_len = m->l3_len;
+ }
+ txbd->d.mac_ip_len = l2_len << RNP_TX_MAC_LEN_S;
+ txbd->d.mac_ip_len |= l3_len;
+}
+
+static void
+rnp_check_inner_eth_hdr(struct rte_mbuf *mbuf,
+ volatile struct rnp_tx_desc *txbd)
+{
+ struct rte_ether_hdr *eth_hdr;
+ uint16_t inner_l2_offset = 0;
+ struct rte_vlan_hdr *vlan_hdr;
+ uint16_t ext_l2_len = 0;
+ uint16_t l2_offset = 0;
+ uint16_t l2_type;
+
+ inner_l2_offset = mbuf->outer_l2_len + mbuf->outer_l3_len +
+ sizeof(struct rte_udp_hdr) +
+ sizeof(struct rte_vxlan_hdr);
+ eth_hdr = rte_pktmbuf_mtod_offset(mbuf,
+ struct rte_ether_hdr *, inner_l2_offset);
+ l2_type = eth_hdr->ether_type;
+ l2_offset = txbd->d.mac_ip_len >> RNP_TX_MAC_LEN_S;
+ while (l2_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN) ||
+ l2_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_QINQ)) {
+ vlan_hdr = (struct rte_vlan_hdr *)
+ ((char *)eth_hdr + l2_offset);
+ l2_offset += RTE_VLAN_HLEN;
+ ext_l2_len += RTE_VLAN_HLEN;
+ l2_type = vlan_hdr->eth_proto;
+ }
+ txbd->d.mac_ip_len += (ext_l2_len << RNP_TX_MAC_LEN_S);
+}
+
+#define RNP_TX_L4_OFFLOAD_ALL (RTE_MBUF_F_TX_SCTP_CKSUM | \
+ RTE_MBUF_F_TX_TCP_CKSUM | \
+ RTE_MBUF_F_TX_UDP_CKSUM)
+static inline void
+rnp_setup_csum_offload(struct rte_mbuf *mbuf,
+ volatile struct rnp_tx_desc *tx_desc)
+{
+ tx_desc->d.cmd |= (mbuf->ol_flags & RTE_MBUF_F_TX_IP_CKSUM) ?
+ RNP_TX_IP_CKSUM_EN : 0;
+ tx_desc->d.cmd |= (mbuf->ol_flags & RTE_MBUF_F_TX_IPV6) ?
+ RNP_TX_L3TYPE_IPV6 : 0;
+ tx_desc->d.cmd |= (mbuf->ol_flags & RNP_TX_L4_OFFLOAD_ALL) ?
+ RNP_TX_L4CKSUM_EN : 0;
+ switch ((mbuf->ol_flags & RTE_MBUF_F_TX_L4_MASK)) {
+ case RTE_MBUF_F_TX_TCP_CKSUM:
+ tx_desc->d.cmd |= RNP_TX_L4TYPE_TCP;
+ break;
+ case RTE_MBUF_F_TX_UDP_CKSUM:
+ tx_desc->d.cmd |= RNP_TX_L4TYPE_UDP;
+ break;
+ case RTE_MBUF_F_TX_SCTP_CKSUM:
+ tx_desc->d.cmd |= RNP_TX_L4TYPE_SCTP;
+ break;
+ }
+ tx_desc->d.mac_ip_len = mbuf->l2_len << RNP_TX_MAC_LEN_S;
+ tx_desc->d.mac_ip_len |= mbuf->l3_len;
+ if (mbuf->ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
+ tx_desc->d.cmd |= RNP_TX_IP_CKSUM_EN;
+ tx_desc->d.cmd |= RNP_TX_L4CKSUM_EN;
+ tx_desc->d.cmd |= RNP_TX_L4TYPE_TCP;
+ tx_desc->d.cmd |= RNP_TX_TSO_EN;
+ }
+ if (mbuf->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
+ /* need inner l2 l3 lens for inner checksum offload */
+ tx_desc->d.mac_ip_len &= ~RNP_TX_MAC_LEN_MASK;
+ tx_desc->d.mac_ip_len |= RTE_ETHER_HDR_LEN << RNP_TX_MAC_LEN_S;
+ rnp_check_inner_eth_hdr(mbuf, tx_desc);
+ switch (mbuf->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
+ case RTE_MBUF_F_TX_TUNNEL_VXLAN:
+ tx_desc->d.cmd |= RNP_TX_VXLAN_TUNNEL;
+ break;
+ case RTE_MBUF_F_TX_TUNNEL_GRE:
+ tx_desc->d.cmd |= RNP_TX_NVGRE_TUNNEL;
+ break;
+ }
+ }
+}
+
+static void
+rnp_setup_tx_offload(struct rnp_tx_queue *txq,
+ volatile struct rnp_tx_desc *txbd,
+ uint64_t flags, struct rte_mbuf *tx_pkt)
+{
+ *txbd = txq->zero_desc;
+ if (flags & RTE_MBUF_F_TX_L4_MASK ||
+ flags & RTE_MBUF_F_TX_TCP_SEG ||
+ flags & RTE_MBUF_F_TX_IP_CKSUM)
+ rnp_setup_csum_offload(tx_pkt, txbd);
+}
static __rte_always_inline uint16_t
rnp_multiseg_xmit_pkts(void *_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
@@ -1140,6 +1332,8 @@ struct rnp_rx_cksum_parse {
struct rte_mbuf *tx_pkt, *m_seg;
uint16_t send_pkts = 0;
uint16_t nb_used_bd;
+ uint8_t ctx_desc_use;
+ uint8_t first_seg;
uint16_t tx_last;
uint16_t nb_tx;
uint16_t tx_id;
@@ -1155,17 +1349,39 @@ struct rnp_rx_cksum_parse {
txe = &txq->sw_ring[tx_id];
for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
tx_pkt = tx_pkts[nb_tx];
- nb_used_bd = tx_pkt->nb_segs;
+ ctx_desc_use = rnp_need_ctrl_desc(tx_pkt->ol_flags);
+ nb_used_bd = tx_pkt->nb_segs + ctx_desc_use;
tx_last = (uint16_t)(tx_id + nb_used_bd - 1);
if (tx_last >= txq->attr.nb_desc)
tx_last = (uint16_t)(tx_last - txq->attr.nb_desc);
if (nb_used_bd > txq->nb_tx_free)
if (nb_used_bd > rnp_multiseg_clean_txq(txq))
break;
+ if (ctx_desc_use) {
+ txbd = &txq->tx_bdr[tx_id];
+ txn = &txq->sw_ring[txe->next_id];
+ RTE_MBUF_PREFETCH_TO_FREE(txn->mbuf);
+ if (txe->mbuf) {
+ rte_pktmbuf_free_seg(txe->mbuf);
+ txe->mbuf = NULL;
+ }
+ rnp_build_tx_control_desc(txq, txbd, tx_pkt);
+ txe->last_id = tx_last;
+ tx_id = txe->next_id;
+ txe = txn;
+ }
m_seg = tx_pkt;
+ first_seg = 1;
do {
txbd = &txq->tx_bdr[tx_id];
txn = &txq->sw_ring[txe->next_id];
+ if (first_seg && m_seg->ol_flags) {
+ rnp_setup_tx_offload(txq, txbd,
+ m_seg->ol_flags, m_seg);
+ if (!txbd->d.mac_ip_len)
+ rnp_padding_hdr_len(txbd, m_seg);
+ first_seg = 0;
+ }
if (txe->mbuf) {
rte_pktmbuf_free_seg(txe->mbuf);
txe->mbuf = NULL;
@@ -1201,6 +1417,231 @@ struct rnp_rx_cksum_parse {
return send_pkts;
}
+#define RNP_TX_TUNNEL_NOSUP_TSO_MASK (RTE_MBUF_F_TX_TUNNEL_MASK ^ \
+ (RTE_MBUF_F_TX_TUNNEL_VXLAN | \
+ RTE_MBUF_F_TX_TUNNEL_GRE))
+static inline bool
+rnp_check_tx_tso_valid(struct rte_mbuf *m)
+{
+ uint16_t max_seg = m->nb_segs;
+ uint32_t remain_len = 0;
+ struct rte_mbuf *m_seg;
+ uint32_t total_len = 0;
+ uint32_t limit_len = 0;
+ uint32_t tso = 0;
+
+ if (likely(!(m->ol_flags & RTE_MBUF_F_TX_TCP_SEG))) {
+ /* non tso mode */
+ if (unlikely(m->pkt_len > RNP_MAC_MAXFRM_SIZE)) {
+ return false;
+ } else if (max_seg <= RNP_TX_MAX_MTU_SEG) {
+ m_seg = m;
+ do {
+ total_len += m_seg->data_len;
+ m_seg = m_seg->next;
+ } while (m_seg != NULL);
+ if (total_len > RNP_MAC_MAXFRM_SIZE)
+ return false;
+ return true;
+ }
+ } else {
+ if (unlikely(m->ol_flags & RNP_TX_TUNNEL_NOSUP_TSO_MASK))
+ return false;
+ if (max_seg > RNP_TX_MAX_MTU_SEG)
+ return false;
+ tso = rnp_cal_tso_seg(m);
+ m_seg = m;
+ do {
+ remain_len = RTE_MAX(remain_len, m_seg->data_len % tso);
+ m_seg = m_seg->next;
+ } while (m_seg != NULL);
+ /* TSO segmentation can leave remainder bytes per segment;
+ * size the limit check for the worst case.
+ */
+ limit_len = remain_len * max_seg + tso;
+
+ if (limit_len > RNP_MAX_TSO_PKT)
+ return false;
+ }
+
+ return true;
+}
+
+static inline int
+rnp_net_cksum_flags_prepare(struct rte_mbuf *m, uint64_t ol_flags)
+{
+ struct rte_ipv4_hdr *ipv4_hdr = NULL;
+ uint64_t inner_l3_offset = m->l2_len;
+ struct rte_ipv6_hdr *ipv6_hdr;
+ struct rte_sctp_hdr *sctp_hdr;
+ struct rte_tcp_hdr *tcp_hdr;
+ struct rte_udp_hdr *udp_hdr;
+
+ if (!(ol_flags & (RTE_MBUF_F_TX_IP_CKSUM |
+ RTE_MBUF_F_TX_L4_MASK |
+ RTE_MBUF_F_TX_TCP_SEG)))
+ return 0;
+ if (ol_flags & (RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_OUTER_IPV6)) {
+ if ((ol_flags & RTE_MBUF_F_TX_L4_MASK) ==
+ RTE_MBUF_F_TX_TCP_CKSUM ||
+ (ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
+ /* hardware requires the outer ip checksum field to be
+ * zero when vxlan-tso is enabled
+ */
+ ipv4_hdr = rte_pktmbuf_mtod_offset(m,
+ struct rte_ipv4_hdr *, m->outer_l2_len);
+ ipv4_hdr->hdr_checksum = 0;
+ }
+ inner_l3_offset += m->outer_l2_len + m->outer_l3_len;
+ }
+ if (unlikely(rte_pktmbuf_data_len(m) <
+ inner_l3_offset + m->l3_len + m->l4_len))
+ return -ENOTSUP;
+ if (ol_flags & RTE_MBUF_F_TX_IPV4) {
+ ipv4_hdr = rte_pktmbuf_mtod_offset(m, struct rte_ipv4_hdr *,
+ inner_l3_offset);
+ if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
+ ipv4_hdr->hdr_checksum = 0;
+ }
+ if ((ol_flags & RTE_MBUF_F_TX_L4_MASK) == RTE_MBUF_F_TX_UDP_CKSUM) {
+ if (ol_flags & RTE_MBUF_F_TX_IPV4) {
+ udp_hdr = (struct rte_udp_hdr *)((char *)ipv4_hdr +
+ m->l3_len);
+ udp_hdr->dgram_cksum = rte_ipv4_phdr_cksum(ipv4_hdr,
+ ol_flags);
+ } else {
+ ipv6_hdr = rte_pktmbuf_mtod_offset(m,
+ struct rte_ipv6_hdr *, inner_l3_offset);
+ /* non-TSO udp */
+ udp_hdr = rte_pktmbuf_mtod_offset(m,
+ struct rte_udp_hdr *,
+ inner_l3_offset + m->l3_len);
+ udp_hdr->dgram_cksum = rte_ipv6_phdr_cksum(ipv6_hdr,
+ ol_flags);
+ }
+ } else if ((ol_flags & RTE_MBUF_F_TX_L4_MASK) == RTE_MBUF_F_TX_TCP_CKSUM ||
+ (ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
+ if (ol_flags & RTE_MBUF_F_TX_IPV4) {
+ /* non-TSO tcp or TSO */
+ tcp_hdr = (struct rte_tcp_hdr *)((char *)ipv4_hdr + m->l3_len);
+ tcp_hdr->cksum = rte_ipv4_phdr_cksum(ipv4_hdr,
+ ol_flags);
+ } else {
+ ipv6_hdr = rte_pktmbuf_mtod_offset(m,
+ struct rte_ipv6_hdr *, inner_l3_offset);
+ /* non-TSO tcp or TSO */
+ tcp_hdr = rte_pktmbuf_mtod_offset(m,
+ struct rte_tcp_hdr *,
+ inner_l3_offset + m->l3_len);
+ tcp_hdr->cksum = rte_ipv6_phdr_cksum(ipv6_hdr,
+ ol_flags);
+ }
+ } else if ((ol_flags & RTE_MBUF_F_TX_L4_MASK) == RTE_MBUF_F_TX_SCTP_CKSUM) {
+ if (ol_flags & RTE_MBUF_F_TX_IPV4) {
+ sctp_hdr = (struct rte_sctp_hdr *)((char *)ipv4_hdr +
+ m->l3_len);
+ /* SCTP checksum uses CRC32; clear it for hw offload */
+ sctp_hdr->cksum = 0;
+ } else {
+ ipv6_hdr = rte_pktmbuf_mtod_offset(m,
+ struct rte_ipv6_hdr *, inner_l3_offset);
+ /* NON-TSO SCTP */
+ sctp_hdr = rte_pktmbuf_mtod_offset(m,
+ struct rte_sctp_hdr *,
+ inner_l3_offset + m->l3_len);
+ sctp_hdr->cksum = 0;
+ }
+ }
+ if (ol_flags & RTE_MBUF_F_TX_IP_CKSUM && !(ol_flags &
+ (RTE_MBUF_F_TX_L4_MASK | RTE_MBUF_F_TX_TCP_SEG))) {
+ /* the hardware L4 checksum follows the L3 checksum.
+ * when ol_flags request hw L3 but sw L4 checksum offload,
+ * the pseudo header must still be prepared to avoid
+ * an L4 checksum error
+ */
+ if (ol_flags & RTE_MBUF_F_TX_IPV4) {
+ ipv4_hdr = rte_pktmbuf_mtod_offset(m,
+ struct rte_ipv4_hdr *, inner_l3_offset);
+ switch (ipv4_hdr->next_proto_id) {
+ case IPPROTO_UDP:
+ udp_hdr = (struct rte_udp_hdr *)((char *)ipv4_hdr +
+ m->l3_len);
+ udp_hdr->dgram_cksum =
+ rte_ipv4_phdr_cksum(ipv4_hdr, ol_flags);
+ break;
+ case IPPROTO_TCP:
+ tcp_hdr = (struct rte_tcp_hdr *)((char *)ipv4_hdr +
+ m->l3_len);
+ tcp_hdr->cksum = rte_ipv4_phdr_cksum(ipv4_hdr,
+ ol_flags);
+ break;
+ default:
+ break;
+ }
+ } else {
+ ipv6_hdr = rte_pktmbuf_mtod_offset(m,
+ struct rte_ipv6_hdr *, inner_l3_offset);
+ switch (ipv6_hdr->proto) {
+ case IPPROTO_UDP:
+ udp_hdr = (struct rte_udp_hdr *)((char *)ipv6_hdr +
+ m->l3_len);
+ udp_hdr->dgram_cksum =
+ rte_ipv6_phdr_cksum(ipv6_hdr, ol_flags);
+ break;
+ case IPPROTO_TCP:
+ tcp_hdr = (struct rte_tcp_hdr *)((char *)ipv6_hdr +
+ m->l3_len);
+ tcp_hdr->cksum = rte_ipv6_phdr_cksum(ipv6_hdr,
+ ol_flags);
+ break;
+ default:
+ break;
+ }
+ }
+ }
+
+ return 0;
+}
+
+static uint16_t
+rnp_tx_pkt_prepare(void *tx_queue,
+ struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts)
+{
+ struct rnp_tx_queue *txq = (struct rnp_tx_queue *)tx_queue;
+ struct rte_mbuf *m;
+ int i, ret;
+
+ PMD_INIT_FUNC_TRACE();
+ for (i = 0; i < nb_pkts; i++) {
+ m = tx_pkts[i];
+ if (unlikely(!rnp_check_tx_tso_valid(m))) {
+ txq->stats.errors++;
+ rte_errno = EINVAL;
+ return i;
+ }
+ if (m->nb_segs > 10) {
+ txq->stats.errors++;
+ rte_errno = EINVAL;
+ return i;
+ }
+#ifdef RTE_ETHDEV_DEBUG_TX
+ ret = rte_validate_tx_offload(m);
+ if (ret != 0) {
+ rte_errno = -ret;
+ return i;
+ }
+#endif
+ ret = rnp_net_cksum_flags_prepare(m, m->ol_flags);
+ if (ret != 0) {
+ rte_errno = -ret;
+ return i;
+ }
+ }
+
+ return i;
+}
+
static int
rnp_check_rx_simple_valid(struct rte_eth_dev *dev)
{
@@ -1227,9 +1668,14 @@ int rnp_rx_func_select(struct rte_eth_dev *dev)
static int
rnp_check_tx_simple_valid(struct rte_eth_dev *dev, struct rnp_tx_queue *txq)
{
- RTE_SET_USED(txq);
+ uint64_t tx_offloads = dev->data->dev_conf.txmode.offloads;
+
+ tx_offloads |= txq->tx_offloads;
+ if (tx_offloads != RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
+ return -ENOTSUP;
if (dev->data->scattered_rx)
return -ENOTSUP;
+
return 0;
}
@@ -1243,11 +1689,12 @@ int rnp_tx_func_select(struct rte_eth_dev *dev)
txq = dev->data->tx_queues[idx];
simple_allowed = rnp_check_tx_simple_valid(dev, txq) == 0;
}
- if (simple_allowed)
+ if (simple_allowed) {
dev->tx_pkt_burst = rnp_xmit_simple;
- else
+ } else {
dev->tx_pkt_burst = rnp_multiseg_xmit_pkts;
- dev->tx_pkt_prepare = rte_eth_pkt_burst_dummy;
+ dev->tx_pkt_prepare = rnp_tx_pkt_prepare;
+ }
return 0;
}
diff --git a/drivers/net/rnp/rnp_rxtx.h b/drivers/net/rnp/rnp_rxtx.h
index d26497a..51e5d4b 100644
--- a/drivers/net/rnp/rnp_rxtx.h
+++ b/drivers/net/rnp/rnp_rxtx.h
@@ -53,6 +53,7 @@ struct rnp_queue_stats {
uint64_t ibytes;
uint64_t ipackets;
+ uint64_t errors;
};
struct rnp_rx_queue {
--
1.8.3.1
^ permalink raw reply [flat|nested] 29+ messages in thread
* [PATCH v7 25/28] net/rnp: support VLAN offloads
2025-02-08 2:43 [PATCH v7 00/28] [v6]drivers/net Add Support mucse N10 Pmd Driver Wenbo Cao
` (23 preceding siblings ...)
2025-02-08 2:44 ` [PATCH v7 24/28] net/rnp: add support Tx TSO offload Wenbo Cao
@ 2025-02-08 2:44 ` Wenbo Cao
2025-02-08 2:44 ` [PATCH v7 26/28] net/rnp: add support VLAN filters operations Wenbo Cao
` (2 subsequent siblings)
27 siblings, 0 replies; 29+ messages in thread
From: Wenbo Cao @ 2025-02-08 2:44 UTC (permalink / raw)
To: thomas, Wenbo Cao; +Cc: stephen, dev, ferruh.yigit, andrew.rybchenko, yaojun
Add support for Rx VLAN strip/filter and Tx VLAN/QinQ insert.
Signed-off-by: Wenbo Cao <caowenbo@mucse.com>
---
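A minimal usage sketch (hypothetical application code) of toggling Rx
VLAN stripping at runtime, which lands in the new vlan_offload_set and
vlan_strip_queue_set callbacks:

    #include <rte_ethdev.h>

    static int example_enable_vlan_strip(uint16_t port_id)
    {
            int mask = rte_eth_dev_get_vlan_offload(port_id);

            if (mask < 0)
                    return mask;
            mask |= RTE_ETH_VLAN_STRIP_OFFLOAD;
            return rte_eth_dev_set_vlan_offload(port_id, mask);
    }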
doc/guides/nics/features/rnp.ini | 2 +
doc/guides/nics/rnp.rst | 1 +
drivers/net/rnp/base/rnp_bdq_if.h | 2 +-
drivers/net/rnp/base/rnp_eth_regs.h | 5 +
drivers/net/rnp/base/rnp_hw.h | 2 +
drivers/net/rnp/base/rnp_mac.c | 53 ++++++++++-
drivers/net/rnp/base/rnp_mac.h | 1 +
drivers/net/rnp/base/rnp_mac_regs.h | 41 ++++++++-
drivers/net/rnp/rnp.h | 7 ++
drivers/net/rnp/rnp_ethdev.c | 177 +++++++++++++++++++++++++++++++++++-
drivers/net/rnp/rnp_rxtx.c | 22 ++++-
11 files changed, 306 insertions(+), 7 deletions(-)
diff --git a/doc/guides/nics/features/rnp.ini b/doc/guides/nics/features/rnp.ini
index 7e97da9..18ec4bc 100644
--- a/doc/guides/nics/features/rnp.ini
+++ b/doc/guides/nics/features/rnp.ini
@@ -20,6 +20,8 @@ Promiscuous mode = Y
Allmulticast mode = Y
MTU update = Y
Unicast MAC filter = Y
+VLAN offload = Y
+QinQ offload = Y
RSS hash = Y
RSS key update = Y
RSS reta update = Y
diff --git a/doc/guides/nics/rnp.rst b/doc/guides/nics/rnp.rst
index 8f667a4..febdaf8 100644
--- a/doc/guides/nics/rnp.rst
+++ b/doc/guides/nics/rnp.rst
@@ -23,6 +23,7 @@ Features
- Port hardware statistic
- Packet type parsing
- Checksum offload
+- VLAN stripping and VLAN/QinQ insertion
Prerequisites
-------------
diff --git a/drivers/net/rnp/base/rnp_bdq_if.h b/drivers/net/rnp/base/rnp_bdq_if.h
index 7a6d0b2..182a8a7 100644
--- a/drivers/net/rnp/base/rnp_bdq_if.h
+++ b/drivers/net/rnp/base/rnp_bdq_if.h
@@ -87,7 +87,7 @@ struct rnp_tx_desc {
#define RNP_RX_TUNNEL_MASK RTE_GENMASK32(14, 13)
#define RNP_RX_PTYPE_VXLAN (0x01UL << RNP_RX_TUNNEL_TYPE_S)
#define RNP_RX_PTYPE_NVGRE (0x02UL << RNP_RX_TUNNEL_TYPE_S)
-#define RNP_RX_PTYPE_VLAN RTE_BIT32(15)
+#define RNP_RX_STRIP_VLAN RTE_BIT32(15)
/* mark_data */
#define RNP_RX_L3TYPE_VALID RTE_BIT32(31)
/* tx data cmd */
diff --git a/drivers/net/rnp/base/rnp_eth_regs.h b/drivers/net/rnp/base/rnp_eth_regs.h
index b0961a1..802a127 100644
--- a/drivers/net/rnp/base/rnp_eth_regs.h
+++ b/drivers/net/rnp/base/rnp_eth_regs.h
@@ -56,6 +56,11 @@
#define RNP_MAC_HASH_MASK RTE_GENMASK32(11, 0)
#define RNP_MAC_MULTICASE_TBL_EN RTE_BIT32(2)
#define RNP_MAC_UNICASE_TBL_EN RTE_BIT32(3)
+/* vlan strip ctrl */
+#define RNP_VLAN_Q_STRIP_CTRL(n) _ETH_(0x8040 + 0x4 * ((n) / 32))
+/* vlan filter ctrl */
+#define RNP_VLAN_FILTER_CTRL _ETH_(0x9118)
+#define RNP_VLAN_FILTER_EN RTE_BIT32(30)
/* rss function ctrl */
#define RNP_RSS_INNER_CTRL _ETH_(0x805c)
#define RNP_INNER_RSS_EN (1)
diff --git a/drivers/net/rnp/base/rnp_hw.h b/drivers/net/rnp/base/rnp_hw.h
index a1cf45a..6d07480 100644
--- a/drivers/net/rnp/base/rnp_hw.h
+++ b/drivers/net/rnp/base/rnp_hw.h
@@ -74,6 +74,8 @@ struct rnp_mac_ops {
/* Receive Address Filter table */
int (*set_rafb)(struct rnp_eth_port *port, u8 *mac, u32 index);
int (*clear_rafb)(struct rnp_eth_port *port, u32 index);
+ /* receive vlan filter */
+ int (*vlan_f_en)(struct rnp_eth_port *port, bool en);
};
struct rnp_eth_adapter;
diff --git a/drivers/net/rnp/base/rnp_mac.c b/drivers/net/rnp/base/rnp_mac.c
index 01929fd..ddf2a36 100644
--- a/drivers/net/rnp/base/rnp_mac.c
+++ b/drivers/net/rnp/base/rnp_mac.c
@@ -173,11 +173,53 @@
return 0;
}
+static int
+rnp_en_vlan_filter_pf(struct rnp_eth_port *port, bool en)
+{
+ struct rnp_hw *hw = port->hw;
+ u32 ctrl;
+
+ /* enable/disable all vlan filter configuration */
+ ctrl = RNP_E_REG_RD(hw, RNP_VLAN_FILTER_CTRL);
+ if (en)
+ ctrl |= RNP_VLAN_FILTER_EN;
+ else
+ ctrl &= ~RNP_VLAN_FILTER_EN;
+ RNP_E_REG_WR(hw, RNP_VLAN_FILTER_CTRL, ctrl);
+
+ return 0;
+}
+
+static int
+rnp_en_vlan_filter_indep(struct rnp_eth_port *port, bool en)
+{
+ u16 lane = port->attr.nr_lane;
+ struct rnp_hw *hw = port->hw;
+ u32 flt_reg, vlan_reg;
+
+ flt_reg = RNP_MAC_REG_RD(hw, lane, RNP_MAC_PKT_FLT_CTRL);
+ vlan_reg = RNP_MAC_REG_RD(hw, lane, RNP_MAC_VLAN_TAG);
+ if (en) {
+ flt_reg |= RNP_MAC_VTFE;
+ vlan_reg |= (RNP_MAC_VLAN_VTHM | RNP_MAC_VLAN_ETV);
+ vlan_reg |= RNP_MAC_VLAN_HASH_EN;
+ } else {
+ flt_reg &= ~RNP_MAC_VTFE;
+ vlan_reg &= ~(RNP_MAC_VLAN_VTHM | RNP_MAC_VLAN_ETV);
+ vlan_reg &= ~RNP_MAC_VLAN_HASH_EN;
+ }
+ RNP_MAC_REG_WR(hw, lane, RNP_MAC_PKT_FLT_CTRL, flt_reg);
+ RNP_MAC_REG_WR(hw, lane, RNP_MAC_VLAN_TAG, vlan_reg);
+
+ return 0;
+}
+
const struct rnp_mac_ops rnp_mac_ops_pf = {
.get_macaddr = rnp_mbx_fw_get_macaddr,
.update_mpfm = rnp_update_mpfm_pf,
.set_rafb = rnp_set_mac_addr_pf,
- .clear_rafb = rnp_clear_mac_pf
+ .clear_rafb = rnp_clear_mac_pf,
+ .vlan_f_en = rnp_en_vlan_filter_pf,
};
const struct rnp_mac_ops rnp_mac_ops_indep = {
@@ -185,6 +227,7 @@
.update_mpfm = rnp_update_mpfm_indep,
.set_rafb = rnp_set_mac_addr_indep,
.clear_rafb = rnp_clear_mac_indep,
+ .vlan_f_en = rnp_en_vlan_filter_indep,
};
int rnp_get_mac_addr(struct rnp_eth_port *port, u8 *mac)
@@ -220,6 +263,14 @@ int rnp_clear_macaddr(struct rnp_eth_port *port, u32 index)
return rnp_call_hwif_impl(port, mac_ops->clear_rafb, index);
}
+int rnp_rx_vlan_filter_en(struct rnp_eth_port *port, bool en)
+{
+ const struct rnp_mac_ops *mac_ops =
+ RNP_DEV_PP_TO_MAC_OPS(port->eth_dev);
+
+ return rnp_call_hwif_impl(port, mac_ops->vlan_f_en, en);
+}
+
void rnp_mac_ops_init(struct rnp_hw *hw)
{
struct rnp_proc_priv *proc_priv = RNP_DEV_TO_PROC_PRIV(hw->back->eth_dev);
diff --git a/drivers/net/rnp/base/rnp_mac.h b/drivers/net/rnp/base/rnp_mac.h
index 865fc34..4a5206d 100644
--- a/drivers/net/rnp/base/rnp_mac.h
+++ b/drivers/net/rnp/base/rnp_mac.h
@@ -28,5 +28,6 @@
int rnp_clear_macaddr(struct rnp_eth_port *port, u32 index);
int rnp_update_mpfm(struct rnp_eth_port *port,
u32 mode, bool en);
+int rnp_rx_vlan_filter_en(struct rnp_eth_port *port, bool en);
#endif /* _RNP_MAC_H_ */
diff --git a/drivers/net/rnp/base/rnp_mac_regs.h b/drivers/net/rnp/base/rnp_mac_regs.h
index 85308a7..43e0aed 100644
--- a/drivers/net/rnp/base/rnp_mac_regs.h
+++ b/drivers/net/rnp/base/rnp_mac_regs.h
@@ -69,8 +69,45 @@
/* Hash or Perfect Filter */
#define RNP_MAC_HPF RTE_BIT32(10)
#define RNP_MAC_VTFE RTE_BIT32(16)
-
-#define RNP_MAC_VFE RTE_BIT32(16)
+/* mac vlan ctrl reg */
+#define RNP_MAC_VLAN_TAG (0x50)
+/* En Double Vlan Processing */
+#define RNP_MAC_VLAN_EDVLP RTE_BIT32(26)
+/* VLAN Tag Hash Table Match Enable */
+#define RNP_MAC_VLAN_VTHM RTE_BIT32(25)
+/* Enable VLAN Tag in Rx status */
+#define RNP_MAC_VLAN_EVLRXS RTE_BIT32(24)
+/* Disable VLAN Type Check */
+#define RNP_MAC_VLAN_DOVLTC RTE_BIT32(20)
+/* Enable S-VLAN */
+#define RNP_MAC_VLAN_ESVL RTE_BIT32(18)
+/* Enable 12-Bit VLAN Tag Comparison Filter */
+#define RNP_MAC_VLAN_ETV RTE_BIT32(16)
+/* VLAN tag hash match enable bits */
+#define RNP_MAC_VLAN_HASH_EN RTE_GENMASK32(15, 0)
+/* MAC VLAN CTRL INSERT REG */
+#define RNP_MAC_VLAN_INCL (0x60)
+#define RNP_MAC_INNER_VLAN_INCL (0x64)
+/* VLAN_Tag Insert From Description */
+#define RNP_MAC_VLAN_VLTI RTE_BIT32(20)
+/* C-VLAN or S-VLAN */
+#define RNP_MAC_VLAN_CSVL RTE_BIT32(19)
+#define RNP_MAC_VLAN_INSERT_CVLAN (0 << 19)
+#define RNP_MAC_VLAN_INSERT_SVLAN (1 << 19)
+/* VLAN Tag Control in Transmit Packets */
+#define RNP_MAC_VLAN_VLC RTE_GENMASK32(17, 16)
+/* VLAN Tag Control Offset Bit */
+#define RNP_MAC_VLAN_VLC_SHIFT (16)
+/* Do nothing on Tx VLAN */
+#define RNP_MAC_VLAN_VLC_NONE (0x0 << RNP_MAC_VLAN_VLC_SHIFT)
+/* MAC Delete VLAN */
+#define RNP_MAC_VLAN_VLC_DEL (0x1 << RNP_MAC_VLAN_VLC_SHIFT)
+/* MAC Add VLAN */
+#define RNP_MAC_VLAN_VLC_ADD (0x2 << RNP_MAC_VLAN_VLC_SHIFT)
+/* MAC Replace VLAN */
+#define RNP_MAC_VLAN_VLC_REPLACE (0x3 << RNP_MAC_VLAN_VLC_SHIFT)
+/* VLAN Tag for Transmit Packets For Insert/Remove */
+#define RNP_MAC_VLAN_VLT RTE_GENMASK32(15, 0)
/* mac link ctrl */
#define RNP_MAC_LPI_CTRL (0xd0)
/* PHY Link Status Disable */
diff --git a/drivers/net/rnp/rnp.h b/drivers/net/rnp/rnp.h
index d0afef3..a4790d3 100644
--- a/drivers/net/rnp/rnp.h
+++ b/drivers/net/rnp/rnp.h
@@ -171,6 +171,11 @@ struct rnp_hw_mac_stats {
uint64_t tx_bad_pkts;
};
+enum rnp_vlan_type {
+ RNP_CVLAN_TYPE = 0,
+ RNP_SVLAN_TYPE = 1,
+};
+
struct rnp_eth_port {
struct rnp_proc_priv *proc_priv;
struct rte_ether_addr mac_addr;
@@ -193,6 +198,8 @@ struct rnp_eth_port {
uint16_t cur_mtu;
bool jumbo_en;
+ enum rnp_vlan_type outvlan_type;
+ enum rnp_vlan_type invlan_type;
rte_spinlock_t rx_mac_lock;
bool port_stopped;
};
diff --git a/drivers/net/rnp/rnp_ethdev.c b/drivers/net/rnp/rnp_ethdev.c
index 47d4771..bddf0d5 100644
--- a/drivers/net/rnp/rnp_ethdev.c
+++ b/drivers/net/rnp/rnp_ethdev.c
@@ -305,6 +305,94 @@ static int rnp_enable_all_tx_queue(struct rte_eth_dev *dev)
return ret;
}
+static void
+rnp_vlan_strip_queue_set(struct rte_eth_dev *dev, uint16_t queue,
+ int on)
+{
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(dev);
+ struct rnp_rx_queue *rxq = NULL;
+ struct rnp_hw *hw = port->hw;
+ uint16_t index;
+ uint32_t reg;
+
+ rxq = dev->data->rx_queues[queue];
+ if (rxq) {
+ index = rxq->attr.index;
+ reg = RNP_E_REG_RD(hw, RNP_VLAN_Q_STRIP_CTRL(index));
+ if (on) {
+ reg |= 1 << (index % 32);
+ rxq->rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
+ } else {
+ reg &= ~(1 << (index % 32));
+ rxq->rx_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
+ }
+ RNP_E_REG_WR(hw, RNP_VLAN_Q_STRIP_CTRL(index), reg);
+ }
+}
+
+static void
+rnp_vlan_strip_enable(struct rnp_eth_port *port, bool en)
+{
+ int i = 0;
+
+ for (i = 0; i < port->eth_dev->data->nb_rx_queues; i++) {
+ if (port->eth_dev->data->rx_queues[i] == NULL) {
+ RNP_PMD_ERR("Strip queue[%d] is NULL.", i);
+ continue;
+ }
+ rnp_vlan_strip_queue_set(port->eth_dev, i, en);
+ }
+}
+
+static void
+rnp_double_vlan_enable(struct rnp_eth_port *port, bool on)
+{
+ uint16_t lane = port->attr.nr_lane;
+ struct rnp_hw *hw = port->hw;
+ uint32_t ctrl;
+
+ /* En Double Vlan Engine */
+ ctrl = RNP_MAC_REG_RD(hw, lane, RNP_MAC_VLAN_TAG);
+ if (on)
+ ctrl |= RNP_MAC_VLAN_EDVLP | RNP_MAC_VLAN_ESVL;
+ else
+ ctrl &= ~(RNP_MAC_VLAN_EDVLP | RNP_MAC_VLAN_ESVL);
+ RNP_MAC_REG_WR(hw, lane, RNP_MAC_VLAN_TAG, ctrl);
+}
+
+static int
+rnp_vlan_offload_set(struct rte_eth_dev *dev, int mask)
+{
+ struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(dev);
+
+ if (mask & RTE_ETH_QINQ_STRIP_MASK) {
+ RNP_PMD_ERR("QinQ Strip isn't supported.");
+ return -ENOTSUP;
+ }
+ if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
+ rnp_rx_vlan_filter_en(port, true);
+ else
+ rnp_rx_vlan_filter_en(port, false);
+ }
+ if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+ rnp_vlan_strip_enable(port, true);
+ else
+ rnp_vlan_strip_enable(port, false);
+ }
+ if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
+ rnp_double_vlan_enable(port, true);
+ else
+ rnp_double_vlan_enable(port, false);
+ }
+
+ return 0;
+}
+
static int rnp_dev_start(struct rte_eth_dev *eth_dev)
{
struct rnp_eth_port *port = RNP_DEV_TO_PORT(eth_dev);
@@ -466,10 +554,82 @@ static void rnp_set_rx_cksum_offload(struct rte_eth_dev *dev)
}
}
+static void
+rnp_qinq_insert_offload_en(struct rnp_eth_port *port, bool on)
+{
+ uint16_t lane = port->attr.nr_lane;
+ struct rnp_hw *hw = port->hw;
+ uint32_t cvlan_ctrl, svlan_ctrl;
+
+ /* en double vlan engine */
+ rnp_double_vlan_enable(port, on);
+ /* setup inner vlan mode */
+ cvlan_ctrl = RNP_MAC_REG_RD(hw, lane, RNP_MAC_INNER_VLAN_INCL);
+ if (on) {
+ cvlan_ctrl |= RNP_MAC_VLAN_VLTI;
+ cvlan_ctrl &= ~RNP_MAC_VLAN_CSVL;
+ if (port->invlan_type)
+ cvlan_ctrl |= RNP_MAC_VLAN_INSERT_SVLAN;
+ else
+ cvlan_ctrl |= RNP_MAC_VLAN_INSERT_CVLAN;
+
+ cvlan_ctrl &= ~RNP_MAC_VLAN_VLC;
+ cvlan_ctrl |= RNP_MAC_VLAN_VLC_ADD;
+ } else {
+ cvlan_ctrl = 0;
+ }
+ /* setup outer vlan mode */
+ svlan_ctrl = RNP_MAC_REG_RD(hw, lane, RNP_MAC_VLAN_INCL);
+ if (on) {
+ svlan_ctrl |= RNP_MAC_VLAN_VLTI;
+ svlan_ctrl &= ~RNP_MAC_VLAN_CSVL;
+ if (port->outvlan_type)
+ svlan_ctrl |= RNP_MAC_VLAN_INSERT_SVLAN;
+ else
+ svlan_ctrl |= RNP_MAC_VLAN_INSERT_CVLAN;
+ svlan_ctrl &= ~RNP_MAC_VLAN_VLC;
+ svlan_ctrl |= RNP_MAC_VLAN_VLC_ADD;
+ } else {
+ svlan_ctrl = 0;
+ }
+ RNP_MAC_REG_WR(hw, lane, RNP_MAC_INNER_VLAN_INCL, cvlan_ctrl);
+ RNP_MAC_REG_WR(hw, lane, RNP_MAC_VLAN_INCL, svlan_ctrl);
+}
+
+static void
+rnp_vlan_insert_offload_en(struct rnp_eth_port *port, bool on)
+{
+ uint16_t lane = port->attr.nr_lane;
+ struct rnp_hw *hw = port->hw;
+ uint32_t ctrl;
+
+ ctrl = RNP_MAC_REG_RD(hw, lane, RNP_MAC_VLAN_INCL);
+ if (on) {
+ ctrl |= RNP_MAC_VLAN_VLTI;
+ ctrl |= RNP_MAC_VLAN_INSERT_CVLAN;
+ ctrl &= ~RNP_MAC_VLAN_VLC;
+ ctrl |= RNP_MAC_VLAN_VLC_ADD;
+ } else {
+ ctrl = 0;
+ }
+ RNP_MAC_REG_WR(hw, lane, RNP_MAC_VLAN_INCL, ctrl);
+}
+
static int rnp_dev_configure(struct rte_eth_dev *eth_dev)
{
+ struct rte_eth_txmode *txmode = &eth_dev->data->dev_conf.txmode;
struct rnp_eth_port *port = RNP_DEV_TO_PORT(eth_dev);
+ if (txmode->offloads & RTE_ETH_TX_OFFLOAD_QINQ_INSERT &&
+ txmode->offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
+ rnp_qinq_insert_offload_en(port, true);
+ else
+ rnp_qinq_insert_offload_en(port, false);
+ if (txmode->offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT &&
+ !(txmode->offloads & RTE_ETH_TX_OFFLOAD_QINQ_INSERT))
+ rnp_vlan_insert_offload_en(port, true);
+ if (!(txmode->offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT))
+ rnp_vlan_insert_offload_en(port, false);
if (port->last_rx_num != eth_dev->data->nb_rx_queues)
port->rxq_num_changed = true;
else
@@ -648,8 +808,13 @@ static int rnp_dev_infos_get(struct rte_eth_dev *eth_dev,
dev_info->reta_size = RNP_RSS_INDIR_SIZE;
/* speed cap info */
dev_info->speed_capa = rnp_get_speed_caps(eth_dev);
+ /* per queue offload */
+ dev_info->rx_queue_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
/* rx support offload cap */
- dev_info->rx_offload_capa = RNP_RX_CHECKSUM_SUPPORT;
+ dev_info->rx_offload_capa = RNP_RX_CHECKSUM_SUPPORT |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
+ dev_info->rx_offload_capa |= dev_info->rx_queue_offload_capa;
/* tx support offload cap */
dev_info->tx_offload_capa = 0 |
RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
@@ -660,7 +825,9 @@ static int rnp_dev_infos_get(struct rte_eth_dev *eth_dev,
RTE_ETH_TX_OFFLOAD_TCP_TSO |
RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
- RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT;
dev_info->default_rxconf = (struct rte_eth_rxconf) {
.rx_drop_en = 0,
.rx_thresh = {
@@ -1308,6 +1475,9 @@ static void rnp_get_hw_stats(struct rte_eth_dev *dev)
.mac_addr_set = rnp_dev_mac_addr_set,
.mac_addr_add = rnp_dev_mac_addr_add,
.mac_addr_remove = rnp_dev_mac_addr_remove,
+ /* vlan offload */
+ .vlan_offload_set = rnp_vlan_offload_set,
+ .vlan_strip_queue_set = rnp_vlan_strip_queue_set,
};
static void
@@ -1342,6 +1512,9 @@ static void rnp_get_hw_stats(struct rte_eth_dev *dev)
attr->uc_hash_tb_size = RNP_MAX_UC_HASH_TABLE;
attr->mc_hash_tb_size = RNP_MAC_MC_HASH_TABLE;
}
+ port->outvlan_type = RNP_SVLAN_TYPE;
+ port->invlan_type = RNP_CVLAN_TYPE;
+
rnp_mbx_fw_get_lane_stat(port);
RNP_PMD_INFO("PF[%d] SW-ETH-PORT[%d]<->PHY_LANE[%d]\n",
diff --git a/drivers/net/rnp/rnp_rxtx.c b/drivers/net/rnp/rnp_rxtx.c
index bacbfca..c021efa 100644
--- a/drivers/net/rnp/rnp_rxtx.c
+++ b/drivers/net/rnp/rnp_rxtx.c
@@ -721,6 +721,7 @@ struct rnp_rx_cksum_parse {
volatile struct rnp_rx_desc rxbd)
{
uint32_t rss = rte_le_to_cpu_32(rxbd.wb.qword0.rss_hash);
+ uint16_t vlan_tci = rte_le_to_cpu_16(rxbd.wb.qword1.vlan_tci);
uint16_t cmd = rxbd.wb.qword1.cmd;
if (rxq->rx_offloads & RNP_RX_CHECKSUM_SUPPORT) {
@@ -736,6 +737,13 @@ struct rnp_rx_cksum_parse {
m->hash.rss = rss;
m->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
}
+ if (rxq->rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
+ if (vlan_tci && cmd & RNP_RX_STRIP_VLAN) {
+ m->ol_flags |= RTE_MBUF_F_RX_VLAN |
+ RTE_MBUF_F_RX_VLAN_STRIPPED;
+ m->vlan_tci = vlan_tci;
+ }
+ }
}
static __rte_always_inline void
@@ -1149,7 +1157,8 @@ struct rnp_rx_cksum_parse {
static uint64_t mask = RTE_MBUF_F_TX_OUTER_IP_CKSUM |
RTE_MBUF_F_TX_TCP_SEG |
RTE_MBUF_F_TX_TUNNEL_VXLAN |
- RTE_MBUF_F_TX_TUNNEL_GRE;
+ RTE_MBUF_F_TX_TUNNEL_GRE |
+ RTE_MBUF_F_TX_QINQ;
return (flags & mask) ? 1 : 0;
}
@@ -1176,6 +1185,10 @@ struct rnp_rx_cksum_parse {
txbd->c.qword0.mss = rnp_cal_tso_seg(mbuf);
txbd->c.qword0.l4_len = mbuf->l4_len;
}
+ if (flags & RTE_MBUF_F_TX_QINQ) {
+ txbd->c.qword0.vlan_tci = mbuf->vlan_tci;
+ txbd->c.qword1.cmd |= RNP_TX_QINQ_INSERT;
+ }
#define GRE_TUNNEL_KEY (4)
#define GRE_TUNNEL_SEQ (4)
switch (flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
@@ -1321,6 +1334,13 @@ struct rnp_rx_cksum_parse {
flags & RTE_MBUF_F_TX_TCP_SEG ||
flags & RTE_MBUF_F_TX_IP_CKSUM)
rnp_setup_csum_offload(tx_pkt, txbd);
+ if (flags & (RTE_MBUF_F_TX_VLAN |
+ RTE_MBUF_F_TX_QINQ)) {
+ txbd->d.cmd |= RNP_TX_VLAN_VALID;
+ txbd->d.vlan_tci = (flags & RTE_MBUF_F_TX_QINQ) ?
+ tx_pkt->vlan_tci_outer : tx_pkt->vlan_tci;
+ txbd->d.cmd |= RNP_TX_VLAN_INSERT;
+ }
}
static __rte_always_inline uint16_t
--
1.8.3.1
^ permalink raw reply [flat|nested] 29+ messages in thread
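For context, a minimal usage sketch for the offloads wired up in this patch, assuming a port already probed by this PMD; setup_vlan_offloads() and request_tx_vlan() are hypothetical helpers, and the port/queue ids are placeholders. The ethdev calls route into rnp_vlan_offload_set(), rnp_vlan_strip_queue_set() and the Tx descriptor path above:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Hypothetical helper: enable VLAN strip + filter on a port (offload
 * bits absent from the mask, e.g. extend/QinQ strip, are disabled),
 * then enable stripping on Rx queue 0 only. */
static int setup_vlan_offloads(uint16_t port_id)
{
	int ret;

	ret = rte_eth_dev_set_vlan_offload(port_id,
			RTE_ETH_VLAN_STRIP_OFFLOAD |
			RTE_ETH_VLAN_FILTER_OFFLOAD);
	if (ret < 0)
		return ret;
	return rte_eth_dev_set_vlan_strip_on_queue(port_id, 0, 1);
}

/* Hypothetical helper: request per-packet Tx VLAN insertion; the
 * burst path reads vlan_tci (or vlan_tci_outer for QinQ) from the
 * mbuf and sets RNP_TX_VLAN_INSERT in the descriptor. */
static void request_tx_vlan(struct rte_mbuf *m, uint16_t tci)
{
	m->ol_flags |= RTE_MBUF_F_TX_VLAN;
	m->vlan_tci = tci;
}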
* [PATCH v7 26/28] net/rnp: add support for VLAN filter operations
2025-02-08 2:43 [PATCH v7 00/28] [v6]drivers/net Add Support mucse N10 Pmd Driver Wenbo Cao
` (24 preceding siblings ...)
2025-02-08 2:44 ` [PATCH v7 25/28] net/rnp: support VLAN offloads Wenbo Cao
@ 2025-02-08 2:44 ` Wenbo Cao
2025-02-08 2:44 ` [PATCH v7 27/28] net/rnp: add queue info operation Wenbo Cao
2025-02-08 2:44 ` [PATCH v7 28/28] net/rnp: support Rx/Tx burst mode info Wenbo Cao
27 siblings, 0 replies; 29+ messages in thread
From: Wenbo Cao @ 2025-02-08 2:44 UTC (permalink / raw)
To: thomas, Wenbo Cao; +Cc: stephen, dev, ferruh.yigit, andrew.rybchenko, yaojun
Add support for adding and removing VLAN IDs (vid) in the receive VLAN filter.
Signed-off-by: Wenbo Cao <caowenbo@mucse.com>
---
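A minimal usage sketch: the new dev op is reached through the standard ethdev entry point; allow_vid() is a hypothetical helper and port_id/vid are placeholders.

#include <stdio.h>
#include <rte_ethdev.h>

/* Hypothetical helper: allow (on=1) or block (on=0) a VID on a port.
 * The call lands in rnp_vlan_filter_set() -> rnp_update_vlan_filter();
 * on the PF path, e.g. VID 100 maps to VFTA entry (100 >> 5) & 0x7f = 3,
 * bit (100 & 0x1f) = 4. RTE_ETH_RX_OFFLOAD_VLAN_FILTER must be enabled
 * for the filter to take effect. */
static int allow_vid(uint16_t port_id, uint16_t vid, int on)
{
	int ret = rte_eth_dev_vlan_filter(port_id, vid, on);

	if (ret < 0)
		printf("vlan filter update failed: %d\n", ret);
	return ret;
}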
doc/guides/nics/features/rnp.ini | 1 +
doc/guides/nics/rnp.rst | 2 +-
drivers/net/rnp/base/meson.build | 1 +
drivers/net/rnp/base/rnp_bitrev.h | 64 ++++++++++++++++++++++++++++
drivers/net/rnp/base/rnp_crc32.c | 37 ++++++++++++++++
drivers/net/rnp/base/rnp_crc32.h | 10 +++++
drivers/net/rnp/base/rnp_eth_regs.h | 1 +
drivers/net/rnp/base/rnp_hw.h | 1 +
drivers/net/rnp/base/rnp_mac.c | 85 +++++++++++++++++++++++++++++++++++++
drivers/net/rnp/base/rnp_mac.h | 1 +
drivers/net/rnp/base/rnp_mac_regs.h | 6 +++
drivers/net/rnp/base/rnp_osdep.h | 13 ++++++
drivers/net/rnp/rnp.h | 11 +++++
drivers/net/rnp/rnp_ethdev.c | 10 +++++
14 files changed, 242 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/rnp/base/rnp_bitrev.h
create mode 100644 drivers/net/rnp/base/rnp_crc32.c
create mode 100644 drivers/net/rnp/base/rnp_crc32.h
diff --git a/doc/guides/nics/features/rnp.ini b/doc/guides/nics/features/rnp.ini
index 18ec4bc..ce7381e 100644
--- a/doc/guides/nics/features/rnp.ini
+++ b/doc/guides/nics/features/rnp.ini
@@ -20,6 +20,7 @@ Promiscuous mode = Y
Allmulticast mode = Y
MTU update = Y
Unicast MAC filter = Y
+VLAN filter = Y
VLAN offload = Y
QinQ offload = Y
RSS hash = Y
diff --git a/doc/guides/nics/rnp.rst b/doc/guides/nics/rnp.rst
index febdaf8..f1d4a06 100644
--- a/doc/guides/nics/rnp.rst
+++ b/doc/guides/nics/rnp.rst
@@ -17,7 +17,7 @@ Features
- Promiscuous mode
- Link state information
- MTU update
-- MAC filtering
+- MAC/VLAN filtering
- Jumbo frames
- Scatter-Gather IO support
- Port hardware statistic
diff --git a/drivers/net/rnp/base/meson.build b/drivers/net/rnp/base/meson.build
index c2ef0d0..6b78de8 100644
--- a/drivers/net/rnp/base/meson.build
+++ b/drivers/net/rnp/base/meson.build
@@ -8,6 +8,7 @@ sources = [
'rnp_common.c',
'rnp_mac.c',
'rnp_bdq_if.c',
+ 'rnp_crc32.c',
]
error_cflags = ['-Wno-unused-value',
diff --git a/drivers/net/rnp/base/rnp_bitrev.h b/drivers/net/rnp/base/rnp_bitrev.h
new file mode 100644
index 0000000..05c36ca
--- /dev/null
+++ b/drivers/net/rnp/base/rnp_bitrev.h
@@ -0,0 +1,64 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Mucse IC Design Ltd.
+ */
+
+#ifndef _RNP_BITREV_H_
+#define _RNP_BITREV_H_
+
+#include "rnp_osdep.h"
+
+static const u8 byte_rev_table[256] = {
+ 0x00, 0x80, 0x40, 0xc0, 0x20, 0xa0, 0x60, 0xe0,
+ 0x10, 0x90, 0x50, 0xd0, 0x30, 0xb0, 0x70, 0xf0,
+ 0x08, 0x88, 0x48, 0xc8, 0x28, 0xa8, 0x68, 0xe8,
+ 0x18, 0x98, 0x58, 0xd8, 0x38, 0xb8, 0x78, 0xf8,
+ 0x04, 0x84, 0x44, 0xc4, 0x24, 0xa4, 0x64, 0xe4,
+ 0x14, 0x94, 0x54, 0xd4, 0x34, 0xb4, 0x74, 0xf4,
+ 0x0c, 0x8c, 0x4c, 0xcc, 0x2c, 0xac, 0x6c, 0xec,
+ 0x1c, 0x9c, 0x5c, 0xdc, 0x3c, 0xbc, 0x7c, 0xfc,
+ 0x02, 0x82, 0x42, 0xc2, 0x22, 0xa2, 0x62, 0xe2,
+ 0x12, 0x92, 0x52, 0xd2, 0x32, 0xb2, 0x72, 0xf2,
+ 0x0a, 0x8a, 0x4a, 0xca, 0x2a, 0xaa, 0x6a, 0xea,
+ 0x1a, 0x9a, 0x5a, 0xda, 0x3a, 0xba, 0x7a, 0xfa,
+ 0x06, 0x86, 0x46, 0xc6, 0x26, 0xa6, 0x66, 0xe6,
+ 0x16, 0x96, 0x56, 0xd6, 0x36, 0xb6, 0x76, 0xf6,
+ 0x0e, 0x8e, 0x4e, 0xce, 0x2e, 0xae, 0x6e, 0xee,
+ 0x1e, 0x9e, 0x5e, 0xde, 0x3e, 0xbe, 0x7e, 0xfe,
+ 0x01, 0x81, 0x41, 0xc1, 0x21, 0xa1, 0x61, 0xe1,
+ 0x11, 0x91, 0x51, 0xd1, 0x31, 0xb1, 0x71, 0xf1,
+ 0x09, 0x89, 0x49, 0xc9, 0x29, 0xa9, 0x69, 0xe9,
+ 0x19, 0x99, 0x59, 0xd9, 0x39, 0xb9, 0x79, 0xf9,
+ 0x05, 0x85, 0x45, 0xc5, 0x25, 0xa5, 0x65, 0xe5,
+ 0x15, 0x95, 0x55, 0xd5, 0x35, 0xb5, 0x75, 0xf5,
+ 0x0d, 0x8d, 0x4d, 0xcd, 0x2d, 0xad, 0x6d, 0xed,
+ 0x1d, 0x9d, 0x5d, 0xdd, 0x3d, 0xbd, 0x7d, 0xfd,
+ 0x03, 0x83, 0x43, 0xc3, 0x23, 0xa3, 0x63, 0xe3,
+ 0x13, 0x93, 0x53, 0xd3, 0x33, 0xb3, 0x73, 0xf3,
+ 0x0b, 0x8b, 0x4b, 0xcb, 0x2b, 0xab, 0x6b, 0xeb,
+ 0x1b, 0x9b, 0x5b, 0xdb, 0x3b, 0xbb, 0x7b, 0xfb,
+ 0x07, 0x87, 0x47, 0xc7, 0x27, 0xa7, 0x67, 0xe7,
+ 0x17, 0x97, 0x57, 0xd7, 0x37, 0xb7, 0x77, 0xf7,
+ 0x0f, 0x8f, 0x4f, 0xcf, 0x2f, 0xaf, 0x6f, 0xef,
+ 0x1f, 0x9f, 0x5f, 0xdf, 0x3f, 0xbf, 0x7f, 0xff,
+};
+
+static inline u8 bitrev8(u8 byte)
+{
+ return byte_rev_table[byte];
+}
+
+static inline u16 bitrev16(u16 x)
+{
+ return (bitrev8(x & 0xff) << 8) | bitrev8(x >> 8);
+}
+
+/**
+ * bitrev32 - reverse the order of bits in a u32 value
+ * @x: value to be bit-reversed
+ */
+static inline u32 bitrev32(u32 x)
+{
+ return (bitrev16(x & 0xffff) << 16) | bitrev16(x >> 16);
+}
+
+#endif /* _RNP_BITREV_H_ */
diff --git a/drivers/net/rnp/base/rnp_crc32.c b/drivers/net/rnp/base/rnp_crc32.c
new file mode 100644
index 0000000..c287b35
--- /dev/null
+++ b/drivers/net/rnp/base/rnp_crc32.c
@@ -0,0 +1,37 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Mucse IC Design Ltd.
+ */
+
+#include "rnp_osdep.h"
+#include "rnp_crc32.h"
+
+static inline int get_bitmask_order(u32 count)
+{
+ int order;
+
+ order = fls(count);
+
+ return order; /* number of significant bits in count */
+}
+
+u32 rnp_vid_crc32_calc(u32 crc_init, u16 vid_le)
+{
+ u8 *data = (u8 *)&vid_le;
+ u32 crc = crc_init;
+ u8 data_byte = 0;
+ u32 temp = 0;
+ int i, bits;
+
+ bits = get_bitmask_order(VLAN_VID_MASK);
+ for (i = 0; i < bits; i++) {
+ if ((i % 8) == 0)
+ data_byte = data[i / 8];
+ temp = ((crc & 1) ^ data_byte) & 1;
+ crc >>= 1;
+ data_byte >>= 1;
+ if (temp)
+ crc ^= 0xedb88320;
+ }
+
+ return crc;
+}
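For reference, the hash derivation can be reproduced standalone. A minimal sketch, assuming a little-endian host (so a host-order VID already has the little-endian byte layout the MAC hashes); vid_crc32_demo() and bitrev32_demo() are local stand-ins for rnp_vid_crc32_calc() and bitrev32():

#include <stdint.h>
#include <stdio.h>

static uint32_t bitrev32_demo(uint32_t x)
{
	uint32_t r = 0;
	int i;

	for (i = 0; i < 32; i++)
		r |= ((x >> i) & 1u) << (31 - i);
	return r;
}

/* Bit-serial reflected CRC-32 (poly 0xedb88320) over the 12 VID bits,
 * mirroring rnp_vid_crc32_calc() with crc_init fixed to ~0. */
static uint32_t vid_crc32_demo(uint16_t vid_le)
{
	uint8_t *data = (uint8_t *)&vid_le;
	uint32_t crc = ~0u;
	uint8_t byte = 0;
	int i;

	for (i = 0; i < 12; i++) {
		if ((i % 8) == 0)
			byte = data[i / 8];
		if (((crc ^ byte) & 1u) != 0) {
			crc >>= 1;
			crc ^= 0xedb88320u;
		} else {
			crc >>= 1;
		}
		byte >>= 1;
	}
	return crc;
}

int main(void)
{
	uint16_t vid = 100;
	uint32_t crc = bitrev32_demo(~vid_crc32_demo(vid));

	/* the top four bits select one of 16 bits in RNP_MAC_VLAN_HASH */
	printf("vid %u -> hash bit %u\n", vid, crc >> 28);
	return 0;
}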
diff --git a/drivers/net/rnp/base/rnp_crc32.h b/drivers/net/rnp/base/rnp_crc32.h
new file mode 100644
index 0000000..e117dcf
--- /dev/null
+++ b/drivers/net/rnp/base/rnp_crc32.h
@@ -0,0 +1,10 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2023 Mucse IC Design Ltd.
+ */
+
+#ifndef _RNP_CRC32_H_
+#define _RNP_CRC32_H_
+
+u32 rnp_vid_crc32_calc(u32 crc_init, u16 vid_le);
+
+#endif /* _RNP_CRC32_H_ */
diff --git a/drivers/net/rnp/base/rnp_eth_regs.h b/drivers/net/rnp/base/rnp_eth_regs.h
index 802a127..ac9cea4 100644
--- a/drivers/net/rnp/base/rnp_eth_regs.h
+++ b/drivers/net/rnp/base/rnp_eth_regs.h
@@ -61,6 +61,7 @@
/* vlan filter ctrl */
#define RNP_VLAN_FILTER_CTRL _ETH_(0x9118)
#define RNP_VLAN_FILTER_EN RTE_BIT32(30)
+#define RNP_VFTA_HASH_TABLE(id) _ETH_(0xB000 + 0x4 * (id))
/* rss function ctrl */
#define RNP_RSS_INNER_CTRL _ETH_(0x805c)
#define RNP_INNER_RSS_EN (1)
diff --git a/drivers/net/rnp/base/rnp_hw.h b/drivers/net/rnp/base/rnp_hw.h
index 6d07480..1a3a341 100644
--- a/drivers/net/rnp/base/rnp_hw.h
+++ b/drivers/net/rnp/base/rnp_hw.h
@@ -76,6 +76,7 @@ struct rnp_mac_ops {
int (*clear_rafb)(struct rnp_eth_port *port, u32 index);
/* receive vlan filter */
int (*vlan_f_en)(struct rnp_eth_port *port, bool en);
+ int (*update_vlan)(struct rnp_eth_port *port, u16 vid, bool en);
};
struct rnp_eth_adapter;
diff --git a/drivers/net/rnp/base/rnp_mac.c b/drivers/net/rnp/base/rnp_mac.c
index ddf2a36..cf93cb0 100644
--- a/drivers/net/rnp/base/rnp_mac.c
+++ b/drivers/net/rnp/base/rnp_mac.c
@@ -8,6 +8,8 @@
#include "rnp_mac.h"
#include "rnp_eth_regs.h"
#include "rnp_mac_regs.h"
+#include "rnp_bitrev.h"
+#include "rnp_crc32.h"
#include "../rnp.h"
static int
@@ -214,12 +216,86 @@
return 0;
}
+static int
+rnp_update_vlan_filter_pf(struct rnp_eth_port *port,
+ u16 vlan, bool add)
+{
+ struct rnp_vlan_filter *vfta_tb = &port->vfta;
+ struct rnp_hw *hw = port->hw;
+ u32 vid_idx;
+ u32 vid_bit;
+ u32 vfta;
+
+ vid_idx = (u32)((vlan >> 5) & 0x7F);
+ vid_bit = (u32)(1 << (vlan & 0x1F));
+ vfta = RNP_E_REG_RD(hw, RNP_VFTA_HASH_TABLE(vid_idx));
+ if (add)
+ vfta |= vid_bit;
+ else
+ vfta &= ~vid_bit;
+ RNP_E_REG_WR(hw, RNP_VFTA_HASH_TABLE(vid_idx), vfta);
+ /* update local VFTA copy */
+ vfta_tb->vfta_entries[vid_idx] = vfta;
+
+ return 0;
+}
+
+static void
+rnp_update_vlan_hash_indep(struct rnp_eth_port *port)
+{
+ struct rnp_hw *hw = port->hw;
+ u16 lane = port->attr.nr_lane;
+ u64 vid_idx, vid_bit;
+ u16 hash = 0;
+ u16 vid_le;
+ u32 crc;
+ u16 vid;
+
+ /* Generate VLAN Hash Table */
+ for (vid = 0; vid < VLAN_N_VID; vid++) {
+ vid_idx = RNP_VLAN_BITMAP_IDX(vid);
+ vid_bit = port->vfta.vlans_bitmap[vid_idx];
+ vid_bit = (u64)vid_bit >> (vid & 0x3F);
+ /* skip vids that are not set in the bitmap */
+ if (!(vid_bit & 1))
+ continue;
+ vid_le = cpu_to_le16(vid);
+ crc = bitrev32(~rnp_vid_crc32_calc(~0, vid_le));
+ crc >>= RNP_MAC_VLAN_HASH_SHIFT;
+ hash |= (1 << crc);
+ }
+ /* Update vlan hash table */
+ printf("hash 0x%.2x\n", hash);
+ RNP_MAC_REG_WR(hw, lane, RNP_MAC_VLAN_HASH, hash);
+}
+
+static int
+rnp_update_vlan_filter_indep(struct rnp_eth_port *port,
+ u16 vid,
+ bool add)
+{
+ u64 vid_bit, vid_idx;
+
+ vid_bit = RNP_VLAN_BITMAP_BIT(vid);
+ vid_idx = RNP_VLAN_BITMAP_IDX(vid);
+ if (add)
+ port->vfta.vlans_bitmap[vid_idx] |= vid_bit;
+ else
+ port->vfta.vlans_bitmap[vid_idx] &= ~vid_bit;
+
+ rnp_update_vlan_hash_indep(port);
+
+ return 0;
+}
+
const struct rnp_mac_ops rnp_mac_ops_pf = {
.get_macaddr = rnp_mbx_fw_get_macaddr,
.update_mpfm = rnp_update_mpfm_pf,
.set_rafb = rnp_set_mac_addr_pf,
.clear_rafb = rnp_clear_mac_pf,
.vlan_f_en = rnp_en_vlan_filter_pf,
+ .update_vlan = rnp_update_vlan_filter_pf,
};
const struct rnp_mac_ops rnp_mac_ops_indep = {
@@ -228,6 +304,7 @@
.set_rafb = rnp_set_mac_addr_indep,
.clear_rafb = rnp_clear_mac_indep,
.vlan_f_en = rnp_en_vlan_filter_indep,
+ .update_vlan = rnp_update_vlan_filter_indep,
};
int rnp_get_mac_addr(struct rnp_eth_port *port, u8 *mac)
@@ -271,6 +348,14 @@ int rnp_rx_vlan_filter_en(struct rnp_eth_port *port, bool en)
return rnp_call_hwif_impl(port, mac_ops->vlan_f_en, en);
}
+int rnp_update_vlan_filter(struct rnp_eth_port *port, u16 vid, bool en)
+{
+ const struct rnp_mac_ops *mac_ops =
+ RNP_DEV_PP_TO_MAC_OPS(port->eth_dev);
+
+ return rnp_call_hwif_impl(port, mac_ops->update_vlan, vid, en);
+}
+
void rnp_mac_ops_init(struct rnp_hw *hw)
{
struct rnp_proc_priv *proc_priv = RNP_DEV_TO_PROC_PRIV(hw->back->eth_dev);
diff --git a/drivers/net/rnp/base/rnp_mac.h b/drivers/net/rnp/base/rnp_mac.h
index 4a5206d..6f22c82 100644
--- a/drivers/net/rnp/base/rnp_mac.h
+++ b/drivers/net/rnp/base/rnp_mac.h
@@ -29,5 +29,6 @@
int rnp_update_mpfm(struct rnp_eth_port *port,
u32 mode, bool en);
int rnp_rx_vlan_filter_en(struct rnp_eth_port *port, bool en);
+int rnp_update_vlan_filter(struct rnp_eth_port *port, u16 vid, bool en);
#endif /* _RNP_MAC_H_ */
diff --git a/drivers/net/rnp/base/rnp_mac_regs.h b/drivers/net/rnp/base/rnp_mac_regs.h
index 43e0aed..9c1d440 100644
--- a/drivers/net/rnp/base/rnp_mac_regs.h
+++ b/drivers/net/rnp/base/rnp_mac_regs.h
@@ -85,6 +85,12 @@
#define RNP_MAC_VLAN_ETV RTE_BIT32(16)
/* enable vid valid */
#define RNP_MAC_VLAN_HASH_EN RTE_GENMASK32(15, 0)
+/* mac vlan hash filter */
+#define RNP_MAC_VLAN_HASH (0x58)
+#define RNP_MAC_VLAN_HASH_MASK RTE_GENMASK32(15, 0)
+#define RNP_MAC_VLAN_HASH_SHIFT (28)
+#define RNP_VLAN_BITMAP_BIT(vlan_id) (1UL << ((vlan_id) & 0x3F))
+#define RNP_VLAN_BITMAP_IDX(vlan_id) ((vlan_id) >> 6)
/* MAC VLAN CTRL INSERT REG */
#define RNP_MAC_VLAN_INCL (0x60)
#define RNP_MAC_INNER_VLAN_INCL (0x64)
diff --git a/drivers/net/rnp/base/rnp_osdep.h b/drivers/net/rnp/base/rnp_osdep.h
index 137e0e8..6332517 100644
--- a/drivers/net/rnp/base/rnp_osdep.h
+++ b/drivers/net/rnp/base/rnp_osdep.h
@@ -49,6 +49,19 @@
#define cpu_to_le32(v) rte_cpu_to_le_32((u32)(v))
#endif
+#ifndef DIV_ROUND_UP
+#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
+#define BITS_PER_BYTE (8)
+#define BITS_TO_LONGS(nr) DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(long))
+#endif
+
+#define fls(n) rte_fls_u32(n)
+
+#ifndef VLAN_N_VID
+#define VLAN_N_VID (4096)
+#define VLAN_VID_MASK (0x0fff)
+#endif
+
#define spinlock_t rte_spinlock_t
#define spin_lock_init(spinlock_v) rte_spinlock_init(spinlock_v)
#define spin_lock(spinlock_v) rte_spinlock_lock(spinlock_v)
diff --git a/drivers/net/rnp/rnp.h b/drivers/net/rnp/rnp.h
index a4790d3..1a0c2d2 100644
--- a/drivers/net/rnp/rnp.h
+++ b/drivers/net/rnp/rnp.h
@@ -63,6 +63,7 @@
#define RNP_MAX_HASH_MC_MAC_SIZE (4096) /* max multicast hash mac num */
#define RNP_MAX_UC_HASH_TABLE (128) /* max unicast hash mac filter table */
#define RNP_MAC_MC_HASH_TABLE (128) /* max multicast hash mac filter table*/
+#define RNP_MAX_VFTA_SIZE (128) /* max pf vlan hash table size */
/* Peer port own independent resource */
#define RNP_PORT_MAX_MACADDR (32)
#define RNP_PORT_MAX_UC_HASH_TB (8)
@@ -176,6 +177,15 @@ enum rnp_vlan_type {
RNP_SVLAN_TYPE = 1,
};
+struct rnp_vlan_filter {
+ union {
+ /* indep vlan hash filter table used */
+ uint64_t vlans_bitmap[BITS_TO_LONGS(VLAN_N_VID)];
+ /* PF vlan filter table used */
+ uint32_t vfta_entries[RNP_MAX_VFTA_SIZE];
+ };
+};
+
struct rnp_eth_port {
struct rnp_proc_priv *proc_priv;
struct rte_ether_addr mac_addr;
@@ -200,6 +210,7 @@ struct rnp_eth_port {
enum rnp_vlan_type outvlan_type;
enum rnp_vlan_type invlan_type;
+ struct rnp_vlan_filter vfta;
rte_spinlock_t rx_mac_lock;
bool port_stopped;
};
diff --git a/drivers/net/rnp/rnp_ethdev.c b/drivers/net/rnp/rnp_ethdev.c
index bddf0d5..52693f4 100644
--- a/drivers/net/rnp/rnp_ethdev.c
+++ b/drivers/net/rnp/rnp_ethdev.c
@@ -393,6 +393,15 @@ static int rnp_enable_all_tx_queue(struct rte_eth_dev *dev)
return 0;
}
+static int
+rnp_vlan_filter_set(struct rte_eth_dev *dev,
+ uint16_t vlan_id, int on)
+{
+ struct rnp_eth_port *port = RNP_DEV_TO_PORT(dev);
+
+ return rnp_update_vlan_filter(port, vlan_id, on);
+}
+
static int rnp_dev_start(struct rte_eth_dev *eth_dev)
{
struct rnp_eth_port *port = RNP_DEV_TO_PORT(eth_dev);
@@ -1478,6 +1487,7 @@ static void rnp_get_hw_stats(struct rte_eth_dev *dev)
/* vlan offload */
.vlan_offload_set = rnp_vlan_offload_set,
.vlan_strip_queue_set = rnp_vlan_strip_queue_set,
+ .vlan_filter_set = rnp_vlan_filter_set,
};
static void
--
1.8.3.1
^ permalink raw reply [flat|nested] 29+ messages in thread
* [PATCH v7 27/28] net/rnp: add queue info operation
2025-02-08 2:43 [PATCH v7 00/28] [v6]drivers/net Add Support mucse N10 Pmd Driver Wenbo Cao
` (25 preceding siblings ...)
2025-02-08 2:44 ` [PATCH v7 26/28] net/rnp: add support for VLAN filter operations Wenbo Cao
@ 2025-02-08 2:44 ` Wenbo Cao
2025-02-08 2:44 ` [PATCH v7 28/28] net/rnp: support Rx/Tx burst mode info Wenbo Cao
27 siblings, 0 replies; 29+ messages in thread
From: Wenbo Cao @ 2025-02-08 2:44 UTC (permalink / raw)
To: thomas, Wenbo Cao; +Cc: stephen, dev, ferruh.yigit, andrew.rybchenko, yaojun
Add support for getting queue configuration info, for user debugging.
Signed-off-by: Wenbo Cao <caowenbo@mucse.com>
---
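A minimal usage sketch: the hooks below back the generic ethdev queue-info queries; dump_rxq0() is a hypothetical debug helper and the port/queue ids are placeholders.

#include <stdio.h>
#include <inttypes.h>
#include <rte_ethdev.h>

/* Hypothetical helper: dump Rx queue 0 configuration through the
 * rxq_info_get hook added below. */
static void dump_rxq0(uint16_t port_id)
{
	struct rte_eth_rxq_info qinfo;

	if (rte_eth_rx_queue_info_get(port_id, 0, &qinfo) != 0)
		return;
	printf("rxq0: %u descs, rx buf %u bytes, offloads 0x%" PRIx64 "\n",
	       (unsigned int)qinfo.nb_desc, (unsigned int)qinfo.rx_buf_size,
	       qinfo.conf.offloads);
}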
drivers/net/rnp/rnp_ethdev.c | 2 ++
drivers/net/rnp/rnp_rxtx.c | 42 ++++++++++++++++++++++++++++++++++++++++++
drivers/net/rnp/rnp_rxtx.h | 4 ++++
3 files changed, 48 insertions(+)
diff --git a/drivers/net/rnp/rnp_ethdev.c b/drivers/net/rnp/rnp_ethdev.c
index 52693f4..4fdeb19 100644
--- a/drivers/net/rnp/rnp_ethdev.c
+++ b/drivers/net/rnp/rnp_ethdev.c
@@ -1465,6 +1465,8 @@ static void rnp_get_hw_stats(struct rte_eth_dev *dev)
.rx_queue_release = rnp_dev_rx_queue_release,
.tx_queue_setup = rnp_tx_queue_setup,
.tx_queue_release = rnp_dev_tx_queue_release,
+ .rxq_info_get = rnp_rx_queue_info_get,
+ .txq_info_get = rnp_tx_queue_info_get,
/* rss impl */
.reta_update = rnp_dev_rss_reta_update,
.reta_query = rnp_dev_rss_reta_query,
diff --git a/drivers/net/rnp/rnp_rxtx.c b/drivers/net/rnp/rnp_rxtx.c
index c021efa..60a49c3 100644
--- a/drivers/net/rnp/rnp_rxtx.c
+++ b/drivers/net/rnp/rnp_rxtx.c
@@ -1718,3 +1718,45 @@ int rnp_tx_func_select(struct rte_eth_dev *dev)
return 0;
}
+
+void
+rnp_rx_queue_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+ struct rte_eth_rxq_info *qinfo)
+{
+ struct rnp_rx_queue *rxq;
+
+ rxq = dev->data->rx_queues[queue_id];
+ if (!rxq)
+ return;
+ qinfo->mp = rxq->mb_pool;
+ qinfo->scattered_rx = dev->data->scattered_rx;
+ qinfo->queue_state = rxq->rxq_started;
+ qinfo->nb_desc = rxq->attr.nb_desc;
+ qinfo->rx_buf_size = rxq->rx_buf_len;
+
+ qinfo->conf.rx_deferred_start = rxq->rx_deferred_start;
+ qinfo->conf.rx_free_thresh = rxq->rx_free_thresh;
+ qinfo->conf.rx_thresh.pthresh = rxq->pthresh;
+ qinfo->conf.rx_thresh.hthresh = rxq->pburst;
+ qinfo->conf.offloads = rxq->rx_offloads;
+}
+
+void
+rnp_tx_queue_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+ struct rte_eth_txq_info *qinfo)
+{
+ struct rnp_tx_queue *txq;
+
+ txq = dev->data->tx_queues[queue_id];
+ if (!txq)
+ return;
+ qinfo->queue_state = txq->txq_started;
+ qinfo->nb_desc = txq->attr.nb_desc;
+
+ qinfo->conf.tx_deferred_start = txq->tx_deferred_start;
+ qinfo->conf.tx_free_thresh = txq->tx_free_thresh;
+ qinfo->conf.tx_rs_thresh = txq->tx_rs_thresh;
+ qinfo->conf.tx_thresh.pthresh = txq->pthresh;
+ qinfo->conf.tx_thresh.hthresh = txq->pburst;
+ qinfo->conf.offloads = txq->tx_offloads;
+}
diff --git a/drivers/net/rnp/rnp_rxtx.h b/drivers/net/rnp/rnp_rxtx.h
index 51e5d4b..dc4a8ea 100644
--- a/drivers/net/rnp/rnp_rxtx.h
+++ b/drivers/net/rnp/rnp_rxtx.h
@@ -148,5 +148,9 @@ int rnp_tx_queue_setup(struct rte_eth_dev *dev,
int rnp_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx);
int rnp_rx_func_select(struct rte_eth_dev *dev);
int rnp_tx_func_select(struct rte_eth_dev *dev);
+void rnp_rx_queue_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+ struct rte_eth_rxq_info *qinfo);
+void rnp_tx_queue_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
+ struct rte_eth_txq_info *qinfo);
#endif /* _RNP_RXTX_H_ */
--
1.8.3.1
^ permalink raw reply [flat|nested] 29+ messages in thread
* [PATCH v7 28/28] net/rnp: support Rx/Tx burst mode info
2025-02-08 2:43 [PATCH v7 00/28] [v6]drivers/net Add Support mucse N10 Pmd Driver Wenbo Cao
` (26 preceding siblings ...)
2025-02-08 2:44 ` [PATCH v7 27/28] net/rnp: add queue info operation Wenbo Cao
@ 2025-02-08 2:44 ` Wenbo Cao
27 siblings, 0 replies; 29+ messages in thread
From: Wenbo Cao @ 2025-02-08 2:44 UTC (permalink / raw)
To: thomas, Wenbo Cao; +Cc: stephen, dev, ferruh.yigit, andrew.rybchenko, yaojun
Add a method to report which Rx/Tx burst function has been
selected, identified by function name.
Signed-off-by: Wenbo Cao <caowenbo@mucse.com>
---
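A minimal usage sketch: show_rx_burst_mode() is a hypothetical helper; the call reaches rnp_rx_burst_mode_get() and returns one of the strings registered below ("Scalar" or "Scalar Scattered").

#include <stdio.h>
#include <rte_ethdev.h>

static void show_rx_burst_mode(uint16_t port_id)
{
	struct rte_eth_burst_mode mode;

	if (rte_eth_rx_burst_mode_get(port_id, 0, &mode) == 0)
		printf("port %u rx burst mode: %s\n", port_id, mode.info);
}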
drivers/net/rnp/rnp_ethdev.c | 2 ++
drivers/net/rnp/rnp_rxtx.c | 58 ++++++++++++++++++++++++++++++++++++++++++++
drivers/net/rnp/rnp_rxtx.h | 6 +++++
3 files changed, 66 insertions(+)
diff --git a/drivers/net/rnp/rnp_ethdev.c b/drivers/net/rnp/rnp_ethdev.c
index 4fdeb19..a4e8a00 100644
--- a/drivers/net/rnp/rnp_ethdev.c
+++ b/drivers/net/rnp/rnp_ethdev.c
@@ -1467,6 +1467,8 @@ static void rnp_get_hw_stats(struct rte_eth_dev *dev)
.tx_queue_release = rnp_dev_tx_queue_release,
.rxq_info_get = rnp_rx_queue_info_get,
.txq_info_get = rnp_tx_queue_info_get,
+ .rx_burst_mode_get = rnp_rx_burst_mode_get,
+ .tx_burst_mode_get = rnp_tx_burst_mode_get,
/* rss impl */
.reta_update = rnp_dev_rss_reta_update,
.reta_query = rnp_dev_rss_reta_query,
diff --git a/drivers/net/rnp/rnp_rxtx.c b/drivers/net/rnp/rnp_rxtx.c
index 60a49c3..6fd5fe0 100644
--- a/drivers/net/rnp/rnp_rxtx.c
+++ b/drivers/net/rnp/rnp_rxtx.c
@@ -1760,3 +1760,61 @@ int rnp_tx_func_select(struct rte_eth_dev *dev)
qinfo->conf.tx_thresh.hthresh = txq->pburst;
qinfo->conf.offloads = txq->tx_offloads;
}
+
+static const struct {
+ eth_rx_burst_t pkt_burst;
+ const char *info;
+} rnp_rx_burst_infos[] = {
+ { rnp_scattered_rx, "Scalar Scattered" },
+ { rnp_recv_pkts, "Scalar" },
+};
+
+static const struct {
+ eth_tx_burst_t pkt_burst;
+ const char *info;
+} rnp_tx_burst_infos[] = {
+ { rnp_xmit_simple, "Scalar Simple" },
+ { rnp_multiseg_xmit_pkts, "Scalar" },
+};
+
+int
+rnp_rx_burst_mode_get(struct rte_eth_dev *dev,
+ __rte_unused uint16_t queue_id,
+ struct rte_eth_burst_mode *mode)
+{
+ eth_rx_burst_t pkt_burst = dev->rx_pkt_burst;
+ int ret = -EINVAL;
+ unsigned int i;
+
+ for (i = 0; i < RTE_DIM(rnp_rx_burst_infos); ++i) {
+ if (pkt_burst == rnp_rx_burst_infos[i].pkt_burst) {
+ snprintf(mode->info, sizeof(mode->info), "%s",
+ rnp_rx_burst_infos[i].info);
+ ret = 0;
+ break;
+ }
+ }
+
+ return ret;
+}
+
+int
+rnp_tx_burst_mode_get(struct rte_eth_dev *dev,
+ __rte_unused uint16_t queue_id,
+ struct rte_eth_burst_mode *mode)
+{
+ eth_tx_burst_t pkt_burst = dev->tx_pkt_burst;
+ int ret = -EINVAL;
+ unsigned int i;
+
+ for (i = 0; i < RTE_DIM(rnp_tx_burst_infos); ++i) {
+ if (pkt_burst == rnp_tx_burst_infos[i].pkt_burst) {
+ snprintf(mode->info, sizeof(mode->info), "%s",
+ rnp_tx_burst_infos[i].info);
+ ret = 0;
+ break;
+ }
+ }
+
+ return ret;
+}
diff --git a/drivers/net/rnp/rnp_rxtx.h b/drivers/net/rnp/rnp_rxtx.h
index dc4a8ea..8639f08 100644
--- a/drivers/net/rnp/rnp_rxtx.h
+++ b/drivers/net/rnp/rnp_rxtx.h
@@ -152,5 +152,11 @@ void rnp_rx_queue_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
struct rte_eth_rxq_info *qinfo);
void rnp_tx_queue_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
struct rte_eth_txq_info *qinfo);
+int rnp_rx_burst_mode_get(struct rte_eth_dev *dev,
+ __rte_unused uint16_t queue_id,
+ struct rte_eth_burst_mode *mode);
+int rnp_tx_burst_mode_get(struct rte_eth_dev *dev,
+ __rte_unused uint16_t queue_id,
+ struct rte_eth_burst_mode *mode);
#endif /* _RNP_RXTX_H_ */
--
1.8.3.1
^ permalink raw reply [flat|nested] 29+ messages in thread