* [PATCH v1 00/17] NBL PMD for Nebulamatrix NICs
@ 2025-06-12 8:58 Kyo Liu
2025-06-12 8:58 ` [PATCH v1 01/17] net/nbl: add doc and minimum nbl build framework Kyo Liu
` (19 more replies)
0 siblings, 20 replies; 27+ messages in thread
From: Kyo Liu @ 2025-06-12 8:58 UTC (permalink / raw)
To: kyo.liu, dev
This nbl PMD (**librte_net_nbl**) provides poll mode driver support for
NebulaMatrix series NICs.
Features:
---------
- MTU update
- promisc mode set
- xstats
- Basic stats
Supported NICs:
---------------
- S1205CQ-A00CHT
- S1105AS-A00CHT
- S1055AS-A00CHT
- S1052AS-A00CHT
- S1051AS-A00CHT
- S1045XS-A00CHT
- S1205CQ-A00CSP
- S1055AS-A00CSP
- S1052AS-A00CSP
Kyo Liu (17):
net/nbl: add doc and minimum nbl build framework
net/nbl: add simple probe/remove and log module
net/nbl: add PHY layer definitions and implementation
net/nbl: add Channel layer definitions and implementation
net/nbl: add Resource layer definitions and implementation
net/nbl: add Dispatch layer definitions and implementation
net/nbl: add Dev layer definitions and implementation
net/nbl: add complete device init and uninit functionality
net/nbl: add uio and vfio mode for nbl
net/nbl: bus/pci: introduce get_iova_mode for pci dev
net/nbl: add nbl coexistence mode for nbl
net/nbl: add nbl ethdev configuration
net/nbl: add nbl device rxtx queue setup and release ops
net/nbl: add nbl device start and stop ops
net/nbl: add nbl device tx and rx burst
net/nbl: add nbl device xstats and stats
net/nbl: nbl device support set mtu and promisc
.mailmap | 5 +
MAINTAINERS | 9 +
doc/guides/nics/features/nbl.ini | 9 +
doc/guides/nics/index.rst | 1 +
doc/guides/nics/nbl.rst | 42 +
doc/guides/rel_notes/release_25_07.rst | 5 +
drivers/bus/pci/bus_pci_driver.h | 11 +
drivers/bus/pci/linux/pci.c | 2 +
drivers/net/meson.build | 1 +
drivers/net/nbl/meson.build | 26 +
drivers/net/nbl/nbl_common/nbl_common.c | 47 +
drivers/net/nbl/nbl_common/nbl_common.h | 10 +
drivers/net/nbl/nbl_common/nbl_thread.c | 88 ++
drivers/net/nbl/nbl_common/nbl_userdev.c | 758 ++++++++++
drivers/net/nbl/nbl_common/nbl_userdev.h | 21 +
drivers/net/nbl/nbl_core.c | 100 ++
drivers/net/nbl/nbl_core.h | 98 ++
drivers/net/nbl/nbl_dev/nbl_dev.c | 1007 ++++++++++++++
drivers/net/nbl/nbl_dev/nbl_dev.h | 65 +
drivers/net/nbl/nbl_dispatch.c | 1226 +++++++++++++++++
drivers/net/nbl/nbl_dispatch.h | 31 +
drivers/net/nbl/nbl_ethdev.c | 167 +++
drivers/net/nbl/nbl_ethdev.h | 32 +
drivers/net/nbl/nbl_hw/nbl_channel.c | 853 ++++++++++++
drivers/net/nbl/nbl_hw/nbl_channel.h | 127 ++
.../nbl_hw_leonis/nbl_phy_leonis_snic.c | 230 ++++
.../nbl_hw_leonis/nbl_phy_leonis_snic.h | 53 +
.../nbl/nbl_hw/nbl_hw_leonis/nbl_res_leonis.c | 253 ++++
.../nbl/nbl_hw/nbl_hw_leonis/nbl_res_leonis.h | 10 +
drivers/net/nbl/nbl_hw/nbl_phy.h | 28 +
drivers/net/nbl/nbl_hw/nbl_resource.c | 5 +
drivers/net/nbl/nbl_hw/nbl_resource.h | 153 ++
drivers/net/nbl/nbl_hw/nbl_txrx.c | 906 ++++++++++++
drivers/net/nbl/nbl_hw/nbl_txrx.h | 136 ++
drivers/net/nbl/nbl_hw/nbl_txrx_ops.h | 91 ++
drivers/net/nbl/nbl_include/nbl_def_channel.h | 434 ++++++
drivers/net/nbl/nbl_include/nbl_def_common.h | 128 ++
drivers/net/nbl/nbl_include/nbl_def_dev.h | 107 ++
.../net/nbl/nbl_include/nbl_def_dispatch.h | 95 ++
drivers/net/nbl/nbl_include/nbl_def_phy.h | 35 +
.../net/nbl/nbl_include/nbl_def_resource.h | 87 ++
drivers/net/nbl/nbl_include/nbl_include.h | 212 +++
drivers/net/nbl/nbl_include/nbl_logs.h | 25 +
.../net/nbl/nbl_include/nbl_product_base.h | 37 +
44 files changed, 7766 insertions(+)
create mode 100644 doc/guides/nics/features/nbl.ini
create mode 100644 doc/guides/nics/nbl.rst
create mode 100644 drivers/net/nbl/meson.build
create mode 100644 drivers/net/nbl/nbl_common/nbl_common.c
create mode 100644 drivers/net/nbl/nbl_common/nbl_common.h
create mode 100644 drivers/net/nbl/nbl_common/nbl_thread.c
create mode 100644 drivers/net/nbl/nbl_common/nbl_userdev.c
create mode 100644 drivers/net/nbl/nbl_common/nbl_userdev.h
create mode 100644 drivers/net/nbl/nbl_core.c
create mode 100644 drivers/net/nbl/nbl_core.h
create mode 100644 drivers/net/nbl/nbl_dev/nbl_dev.c
create mode 100644 drivers/net/nbl/nbl_dev/nbl_dev.h
create mode 100644 drivers/net/nbl/nbl_dispatch.c
create mode 100644 drivers/net/nbl/nbl_dispatch.h
create mode 100644 drivers/net/nbl/nbl_ethdev.c
create mode 100644 drivers/net/nbl/nbl_ethdev.h
create mode 100644 drivers/net/nbl/nbl_hw/nbl_channel.c
create mode 100644 drivers/net/nbl/nbl_hw/nbl_channel.h
create mode 100644 drivers/net/nbl/nbl_hw/nbl_hw_leonis/nbl_phy_leonis_snic.c
create mode 100644 drivers/net/nbl/nbl_hw/nbl_hw_leonis/nbl_phy_leonis_snic.h
create mode 100644 drivers/net/nbl/nbl_hw/nbl_hw_leonis/nbl_res_leonis.c
create mode 100644 drivers/net/nbl/nbl_hw/nbl_hw_leonis/nbl_res_leonis.h
create mode 100644 drivers/net/nbl/nbl_hw/nbl_phy.h
create mode 100644 drivers/net/nbl/nbl_hw/nbl_resource.c
create mode 100644 drivers/net/nbl/nbl_hw/nbl_resource.h
create mode 100644 drivers/net/nbl/nbl_hw/nbl_txrx.c
create mode 100644 drivers/net/nbl/nbl_hw/nbl_txrx.h
create mode 100644 drivers/net/nbl/nbl_hw/nbl_txrx_ops.h
create mode 100644 drivers/net/nbl/nbl_include/nbl_def_channel.h
create mode 100644 drivers/net/nbl/nbl_include/nbl_def_common.h
create mode 100644 drivers/net/nbl/nbl_include/nbl_def_dev.h
create mode 100644 drivers/net/nbl/nbl_include/nbl_def_dispatch.h
create mode 100644 drivers/net/nbl/nbl_include/nbl_def_phy.h
create mode 100644 drivers/net/nbl/nbl_include/nbl_def_resource.h
create mode 100644 drivers/net/nbl/nbl_include/nbl_include.h
create mode 100644 drivers/net/nbl/nbl_include/nbl_logs.h
create mode 100644 drivers/net/nbl/nbl_include/nbl_product_base.h
--
2.43.0
* [PATCH v1 01/17] net/nbl: add doc and minimum nbl build framework
2025-06-12 8:58 [PATCH v1 00/17] NBL PMD for Nebulamatrix NICs Kyo Liu
@ 2025-06-12 8:58 ` Kyo Liu
2025-06-12 8:58 ` [PATCH v1 02/17] net/nbl: add simple probe/remove and log module Kyo Liu
` (18 subsequent siblings)
19 siblings, 0 replies; 27+ messages in thread
From: Kyo Liu @ 2025-06-12 8:58 UTC (permalink / raw)
To: kyo.liu, dev; +Cc: Thomas Monjalon, Dimon Zhao, Leon Yu, Sam Chen
Add minimum PMD code, documentation and build infrastructure for the nbl driver.
Signed-off-by: Kyo Liu <kyo.liu@nebula-matrix.com>
---
.mailmap | 5 ++++
MAINTAINERS | 9 +++++++
doc/guides/nics/features/nbl.ini | 9 +++++++
doc/guides/nics/index.rst | 1 +
doc/guides/nics/nbl.rst | 42 ++++++++++++++++++++++++++++++++
drivers/net/meson.build | 1 +
drivers/net/nbl/meson.build | 11 +++++++++
drivers/net/nbl/nbl_ethdev.c | 3 +++
8 files changed, 81 insertions(+)
create mode 100644 doc/guides/nics/features/nbl.ini
create mode 100644 doc/guides/nics/nbl.rst
create mode 100644 drivers/net/nbl/meson.build
create mode 100644 drivers/net/nbl/nbl_ethdev.c
diff --git a/.mailmap b/.mailmap
index 3dec1492aa..c8b592e0de 100644
--- a/.mailmap
+++ b/.mailmap
@@ -362,6 +362,7 @@ Diana Wang <na.wang@corigine.com>
Didier Pallard <didier.pallard@6wind.com>
Dilshod Urazov <dilshod.urazov@oktetlabs.ru>
Dima Ruinskiy <dima.ruinskiy@intel.com>
+Dimon Zhao <dimon.zhao@nebula-matrix.com>
Ding Zhi <zhi.ding@6wind.com>
Diogo Behrens <diogo.behrens@huawei.com>
Dirk-Holger Lenz <dirk.lenz@ng4t.com>
@@ -832,6 +833,7 @@ Kumar Amber <kumar.amber@intel.com>
Kumara Parameshwaran <kumaraparamesh92@gmail.com> <kparameshwar@vmware.com>
Kumar Sanghvi <kumaras@chelsio.com>
Kyle Larose <klarose@sandvine.com>
+Kyo Liu <kyo.liu@nebula-matrix.com>
Lance Richardson <lance.richardson@broadcom.com>
Laszlo Ersek <lersek@redhat.com>
Laura Stroe <laura.stroe@intel.com>
@@ -846,6 +848,7 @@ Lei Gong <arei.gonglei@huawei.com>
Lei Ji <jilei8@huawei.com>
Lei Yao <lei.a.yao@intel.com>
Leonid Myravjev <myravjev@amicon.ru>
+Leon Yu <leon.yu@nebula-matrix.com>
Leo Xu <yongquanx@nvidia.com>
Leszek Zygo <leszek.zygo@intel.com>
Levend Sayar <levendsayar@gmail.com>
@@ -1351,6 +1354,7 @@ Saikrishna Edupuganti <saikrishna.edupuganti@intel.com>
Saleh Alsouqi <salehals@nvidia.com> <salehals@mellanox.com>
Salem Sol <salems@nvidia.com>
Sam Andrew <samandrew@microsoft.com>
+Sam Chen <sam.chen@nebula-matrix.com>
Sameh Gobriel <sameh.gobriel@intel.com>
Sam Grove <sam.grove@sifive.com>
Samik Gupta <samik.gupta@broadcom.com>
@@ -1849,3 +1853,4 @@ Ziye Yang <ziye.yang@intel.com>
Zoltan Kiss <zoltan.kiss@schaman.hu> <zoltan.kiss@linaro.org>
Zorik Machulsky <zorik@amazon.com>
Zyta Szpak <zyta@marvell.com> <zr@semihalf.com> <zyta.szpak@semihalf.com>
+Leon Yu <leon.yu@nebula-matrix.com>
diff --git a/MAINTAINERS b/MAINTAINERS
index 57679d40bc..ec4cc568f2 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1026,6 +1026,15 @@ F: drivers/net/sfc/
F: doc/guides/nics/sfc_efx.rst
F: doc/guides/nics/features/sfc.ini
+nebulamatrix nbl
+M: Dimon Zhao <dimon.zhao@nebula-matrix.com>
+M: Kyo Liu <kyo.liu@nebula-matrix.com>
+M: Leon Yu <leon.yu@nebula-matrix.com>
+M: Sam Chen <sam.chen@nebula-matrix.com>
+F: drivers/net/nbl
+F: doc/guides/nics/nbl.rst
+F: doc/guides/nics/features/nbl.ini
+
Wangxun ngbe
M: Jiawen Wu <jiawenwu@trustnetic.com>
F: drivers/net/ngbe/
diff --git a/doc/guides/nics/features/nbl.ini b/doc/guides/nics/features/nbl.ini
new file mode 100644
index 0000000000..6daabe6ed3
--- /dev/null
+++ b/doc/guides/nics/features/nbl.ini
@@ -0,0 +1,9 @@
+;
+; Supported features of the 'nbl' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Linux = Y
+ARMv8 = Y
+x86-64 = Y
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 618c52d618..a82dfcf5c7 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -50,6 +50,7 @@ Network Interface Controller Drivers
mvneta
mvpp2
netvsc
+ nbl
nfb
nfp
ngbe
diff --git a/doc/guides/nics/nbl.rst b/doc/guides/nics/nbl.rst
new file mode 100644
index 0000000000..6cd09fe97f
--- /dev/null
+++ b/doc/guides/nics/nbl.rst
@@ -0,0 +1,42 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright 2025 Nebulamatrix Technology Co., Ltd
+
+NBL Poll Mode Driver
+====================
+
+The NBL PMD (**librte_net_nbl**) provides poll mode driver support for
+10/25/50/100/200 Gbps Nebulamatrix Series Network Adapters.
+
+
+Supported NICs
+--------------
+
+The following Nebulamatrix device models are supported by the same nbl driver:
+
+ - S1205CQ-A00CHT
+ - S1105AS-A00CHT
+ - S1055AS-A00CHT
+ - S1052AS-A00CHT
+ - S1051AS-A00CHT
+ - S1045XS-A00CHT
+ - S1205CQ-A00CSP
+ - S1055AS-A00CSP
+ - S1052AS-A00CSP
+
+
+Prerequisites
+-------------
+
+- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>`
+ to setup the basic DPDK environment.
+
+- Learn about `Nebulamatrix Series NICs
+ <https://www.nebula-matrix.com/main>`_.
+
+
+Limitations or Known Issues
+---------------------------
+
+32-bit architectures are not supported.
+
+Windows and BSD are not supported yet.
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index 61f8cddb30..517e78d18b 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -40,6 +40,7 @@ drivers = [
'mlx5',
'mvneta',
'mvpp2',
+ 'nbl',
'netvsc',
'nfb',
'nfp',
diff --git a/drivers/net/nbl/meson.build b/drivers/net/nbl/meson.build
new file mode 100644
index 0000000000..4cfbdb023f
--- /dev/null
+++ b/drivers/net/nbl/meson.build
@@ -0,0 +1,11 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright 2025 NebulaMatrix Technology Co., Ltd.
+
+if not is_linux
+ build = false
+ reason = 'only supported on Linux'
+endif
+
+sources = files(
+ 'nbl_ethdev.c',
+)
diff --git a/drivers/net/nbl/nbl_ethdev.c b/drivers/net/nbl/nbl_ethdev.c
new file mode 100644
index 0000000000..3ad8e4033a
--- /dev/null
+++ b/drivers/net/nbl/nbl_ethdev.c
@@ -0,0 +1,3 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2025 Nebulamatrix Technology Co., Ltd.
+ */
--
2.43.0
* [PATCH v1 02/17] net/nbl: add simple probe/remove and log module
2025-06-12 8:58 [PATCH v1 00/17] NBL PMD for Nebulamatrix NICs Kyo Liu
2025-06-12 8:58 ` [PATCH v1 01/17] net/nbl: add doc and minimum nbl build framework Kyo Liu
@ 2025-06-12 8:58 ` Kyo Liu
2025-06-12 17:49 ` Stephen Hemminger
2025-06-12 8:58 ` [PATCH v1 03/17] net/nbl: add PHY layer definitions and implementation Kyo Liu
` (17 subsequent siblings)
19 siblings, 1 reply; 27+ messages in thread
From: Kyo Liu @ 2025-06-12 8:58 UTC (permalink / raw)
To: kyo.liu, dev; +Cc: Dimon Zhao, Leon Yu, Sam Chen
Our driver architecture is relatively complex because the code
is highly reusable and designed to support multiple features.
For example, our driver can support open-source UIO/VFIO drivers
while also coexisting with kernel drivers.
Additionally, the codebase supports multiple chip variants,
each with distinct hardware-software interactions.
To ensure compatibility, our architecture is divided
into the following layers:
1. Dev Layer (Device Layer)
The top-level business logic layer where all DPDK operations
are device-centric. Every operation is performed relative
to the device context.
2. Dispatch Layer
The dispatching layer determines whether tasks from the
Dev Layer should be routed to the Resource Layer or
Channel Layer based on two criteria:
2.1 The driver type in use (UIO/VFIO vs. vendor-specific driver)
2.2 Whether the task requires hardware access
3. Resource Layer
Handles tasks dispatched from the Dev Layer (via Dispatch Layer)
in userspace. These tasks fall into two categories:
3.1 Hardware control in userspace (common with UIO/VFIO drivers):
The Resource Layer further invokes the PHY Layer
when hardware access is needed, as only the PHY Layer
has OS-level privileges.
3.2 Software resource management: Operations like packet
statistics collection that don't require hardware access.
4. PHY Layer (Physical Layer)
Serves the Resource Layer by interacting with different
hardware chipsets. It writes to hardware registers to drive
the hardware based on Resource Layer directives.
5. Channel Layer
Dedicated to coexistence mode with kernel drivers.
When a BDF device is shared between two drivers
(with the kernel driver as primary), all hardware operations from
DPDK are forwarded through this layer to the kernel driver.
Exceptions include performance-sensitive operations
(e.g., doorbell ringing) handled directly in userspace.
Architecture Flow Summary
Our top-down architecture varies by driver mode:
1. UIO/VFIO Mode (e.g., configuring a hardware queue)
Dev Layer → Dispatch Layer → Resource Layer → PHY Layer
The Dispatch Layer routes tasks to the Resource Layer,
which interacts with the PHY Layer for hardware writes.
2. Coexistence Mode
Dev Layer → Dispatch Layer → Channel Layer
The Dispatch Layer redirects hooks to the Channel Layer,
which communicates with the kernel driver to configure hardware.
This layered approach ensures flexibility across driver types
and hardware variants. My subsequent patches will
iteratively define and implement each layer’s functionality.
Let me know if further clarification would be helpful
for the review process.
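As a rough illustration of the routing decision described above, the sketch
below shows how a dispatch layer might pick between the Resource/PHY path and
the Channel path. All names here (nbl_dispatch_select, NBL_MODE_*, the
queue-setup ops) are hypothetical, invented for illustration; they are not the
driver's actual API.

```c
/* Hypothetical sketch of the dispatch-layer routing; names are
 * illustrative only, not the nbl driver's real symbols. */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/* Driver mode, decided at probe time */
enum nbl_drv_mode { NBL_MODE_UIO_VFIO, NBL_MODE_COEXIST };

/* Both backend layers export the same op signature; dispatch picks one */
typedef int (*nbl_setup_queue_fn)(int queue_id);

static int nbl_res_setup_queue(int queue_id)
{
	/* Resource layer: would invoke the PHY layer to write registers */
	printf("resource/PHY layer: setup queue %d\n", queue_id);
	return 0;
}

static int nbl_chan_setup_queue(int queue_id)
{
	/* Channel layer: would forward the request to the kernel driver */
	printf("channel layer: forward queue %d setup to kernel driver\n", queue_id);
	return 0;
}

/* Dispatch layer: route by driver mode and whether hardware access is needed */
static nbl_setup_queue_fn
nbl_dispatch_select(enum nbl_drv_mode mode, bool needs_hw)
{
	if (mode == NBL_MODE_COEXIST && needs_hw)
		return nbl_chan_setup_queue; /* kernel driver owns the hardware */
	return nbl_res_setup_queue;          /* userspace handles it directly */
}
```

In the actual driver the same idea is realized with per-layer ops tables
installed at init time (see the later dispatch-layer patches); this sketch
only captures the routing decision, including the coexistence-mode exception
where non-hardware (or performance-sensitive) work stays in userspace.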
Signed-off-by: Kyo Liu <kyo.liu@nebula-matrix.com>
---
drivers/net/nbl/meson.build | 3 +
drivers/net/nbl/nbl_core.c | 30 ++++++
drivers/net/nbl/nbl_core.h | 40 ++++++++
drivers/net/nbl/nbl_ethdev.c | 113 ++++++++++++++++++++++
drivers/net/nbl/nbl_ethdev.h | 13 +++
drivers/net/nbl/nbl_include/nbl_include.h | 53 ++++++++++
drivers/net/nbl/nbl_include/nbl_logs.h | 25 +++++
7 files changed, 277 insertions(+)
create mode 100644 drivers/net/nbl/nbl_core.c
create mode 100644 drivers/net/nbl/nbl_core.h
create mode 100644 drivers/net/nbl/nbl_ethdev.h
create mode 100644 drivers/net/nbl/nbl_include/nbl_include.h
create mode 100644 drivers/net/nbl/nbl_include/nbl_logs.h
diff --git a/drivers/net/nbl/meson.build b/drivers/net/nbl/meson.build
index 4cfbdb023f..013a4126f6 100644
--- a/drivers/net/nbl/meson.build
+++ b/drivers/net/nbl/meson.build
@@ -6,6 +6,9 @@ if not is_linux
reason = 'only supported on Linux'
endif
+includes += include_directories('nbl_include')
+
sources = files(
'nbl_ethdev.c',
+ 'nbl_core.c',
)
diff --git a/drivers/net/nbl/nbl_core.c b/drivers/net/nbl/nbl_core.c
new file mode 100644
index 0000000000..63b707f0ba
--- /dev/null
+++ b/drivers/net/nbl/nbl_core.c
@@ -0,0 +1,30 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2025 Nebulamatrix Technology Co., Ltd.
+ */
+
+#include "nbl_core.h"
+
+int nbl_core_init(const struct nbl_adapter *adapter, const struct rte_eth_dev *eth_dev)
+{
+ RTE_SET_USED(adapter);
+ RTE_SET_USED(eth_dev);
+
+ return 0;
+}
+
+void nbl_core_remove(const struct nbl_adapter *adapter)
+{
+ RTE_SET_USED(adapter);
+}
+
+int nbl_core_start(const struct nbl_adapter *adapter)
+{
+ RTE_SET_USED(adapter);
+
+ return 0;
+}
+
+void nbl_core_stop(const struct nbl_adapter *adapter)
+{
+ RTE_SET_USED(adapter);
+}
diff --git a/drivers/net/nbl/nbl_core.h b/drivers/net/nbl/nbl_core.h
new file mode 100644
index 0000000000..4ba13e5bd7
--- /dev/null
+++ b/drivers/net/nbl/nbl_core.h
@@ -0,0 +1,40 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2025 Nebulamatrix Technology Co., Ltd.
+ */
+
+#ifndef _NBL_CORE_H_
+#define _NBL_CORE_H_
+
+#include "nbl_include.h"
+
+#define NBL_VENDOR_ID (0x1F0F)
+#define NBL_DEVICE_ID_M18110 (0x3403)
+#define NBL_DEVICE_ID_M18110_LX (0x3404)
+#define NBL_DEVICE_ID_M18110_BASE_T (0x3405)
+#define NBL_DEVICE_ID_M18110_LX_BASE_T (0x3406)
+#define NBL_DEVICE_ID_M18110_OCP (0x3407)
+#define NBL_DEVICE_ID_M18110_LX_OCP (0x3408)
+#define NBL_DEVICE_ID_M18110_BASE_T_OCP (0x3409)
+#define NBL_DEVICE_ID_M18110_LX_BASE_T_OCP (0x340a)
+#define NBL_DEVICE_ID_M18120 (0x340b)
+#define NBL_DEVICE_ID_M18120_LX (0x340c)
+#define NBL_DEVICE_ID_M18120_BASE_T (0x340d)
+#define NBL_DEVICE_ID_M18120_LX_BASE_T (0x340e)
+#define NBL_DEVICE_ID_M18120_OCP (0x340f)
+#define NBL_DEVICE_ID_M18120_LX_OCP (0x3410)
+#define NBL_DEVICE_ID_M18120_BASE_T_OCP (0x3411)
+#define NBL_DEVICE_ID_M18120_LX_BASE_T_OCP (0x3412)
+#define NBL_DEVICE_ID_M18100_VF (0x3413)
+
+#define NBL_MAX_INSTANCE_CNT 516
+struct nbl_adapter {
+ TAILQ_ENTRY(nbl_adapter) next;
+ struct rte_pci_device *pci_dev;
+};
+
+int nbl_core_init(const struct nbl_adapter *adapter, const struct rte_eth_dev *eth_dev);
+void nbl_core_remove(const struct nbl_adapter *adapter);
+int nbl_core_start(const struct nbl_adapter *adapter);
+void nbl_core_stop(const struct nbl_adapter *adapter);
+
+#endif
diff --git a/drivers/net/nbl/nbl_ethdev.c b/drivers/net/nbl/nbl_ethdev.c
index 3ad8e4033a..9f44d21b8e 100644
--- a/drivers/net/nbl/nbl_ethdev.c
+++ b/drivers/net/nbl/nbl_ethdev.c
@@ -1,3 +1,116 @@
/* SPDX-License-Identifier: BSD-3-Clause
* Copyright 2025 Nebulamatrix Technology Co., Ltd.
*/
+
+#include "nbl_ethdev.h"
+
+RTE_LOG_REGISTER_SUFFIX(nbl_logtype_init, init, INFO);
+RTE_LOG_REGISTER_SUFFIX(nbl_logtype_driver, driver, INFO);
+
+static int nbl_dev_release_pf(struct rte_eth_dev *eth_dev)
+{
+ const struct nbl_adapter *adapter = ETH_DEV_TO_NBL_DEV_PF_PRIV(eth_dev);
+
+ if (!adapter)
+ return -EINVAL;
+ NBL_LOG(INFO, "start to close device %s", eth_dev->device->name);
+ nbl_core_stop(adapter);
+ nbl_core_remove(adapter);
+ return 0;
+}
+
+static int nbl_dev_close(struct rte_eth_dev *eth_dev)
+{
+ rte_free(eth_dev->data->mac_addrs);
+ eth_dev->data->mac_addrs = NULL;
+
+ return nbl_dev_release_pf(eth_dev);
+}
+
+struct eth_dev_ops nbl_eth_dev_ops = {
+ .dev_close = nbl_dev_close,
+};
+
+static int nbl_eth_dev_init(struct rte_eth_dev *eth_dev)
+{
+ const struct nbl_adapter *adapter = ETH_DEV_TO_NBL_DEV_PF_PRIV(eth_dev);
+ int ret;
+
+ PMD_INIT_FUNC_TRACE();
+ ret = nbl_core_init(adapter, eth_dev);
+ if (ret) {
+ NBL_LOG(INFO, "core init failed ret %d", ret);
+ goto eth_init_failed;
+ }
+
+ ret = nbl_core_start(adapter);
+ if (ret) {
+ NBL_LOG(INFO, "core start failed ret %d", ret);
+ nbl_core_remove(adapter);
+ goto eth_init_failed;
+ }
+
+ eth_dev->dev_ops = &nbl_eth_dev_ops;
+ return 0;
+
+eth_init_failed:
+ return ret;
+}
+
+/**
+ * @brief: nbl device pci probe
+ * @param[in]: {rte_pci_driver} *pci_drv
+ * @param[in]: {rte_pci_device} *pci_dev
+ * @return: {0-success,negative-fail}
+ */
+static int nbl_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+ struct rte_pci_device *pci_dev)
+{
+ return rte_eth_dev_pci_generic_probe(pci_dev, sizeof(struct nbl_adapter),
+ nbl_eth_dev_init);
+}
+
+static int nbl_eth_dev_uninit(struct rte_eth_dev *eth_dev)
+{
+ PMD_INIT_FUNC_TRACE();
+ return nbl_dev_close(eth_dev);
+}
+
+static int nbl_pci_remove(struct rte_pci_device *pci_dev)
+{
+ PMD_INIT_FUNC_TRACE();
+ return rte_eth_dev_pci_generic_remove(pci_dev, nbl_eth_dev_uninit);
+}
+
+static const struct rte_pci_id pci_id_nbl_map[] = {
+ { RTE_PCI_DEVICE(NBL_VENDOR_ID, NBL_DEVICE_ID_M18110) },
+ { RTE_PCI_DEVICE(NBL_VENDOR_ID, NBL_DEVICE_ID_M18110_LX) },
+ { RTE_PCI_DEVICE(NBL_VENDOR_ID, NBL_DEVICE_ID_M18110_BASE_T) },
+ { RTE_PCI_DEVICE(NBL_VENDOR_ID, NBL_DEVICE_ID_M18110_LX_BASE_T) },
+ { RTE_PCI_DEVICE(NBL_VENDOR_ID, NBL_DEVICE_ID_M18110_OCP) },
+ { RTE_PCI_DEVICE(NBL_VENDOR_ID, NBL_DEVICE_ID_M18110_LX_OCP) },
+ { RTE_PCI_DEVICE(NBL_VENDOR_ID, NBL_DEVICE_ID_M18110_BASE_T_OCP) },
+ { RTE_PCI_DEVICE(NBL_VENDOR_ID, NBL_DEVICE_ID_M18110_LX_BASE_T_OCP) },
+ { RTE_PCI_DEVICE(NBL_VENDOR_ID, NBL_DEVICE_ID_M18120) },
+ { RTE_PCI_DEVICE(NBL_VENDOR_ID, NBL_DEVICE_ID_M18120_LX) },
+ { RTE_PCI_DEVICE(NBL_VENDOR_ID, NBL_DEVICE_ID_M18120_BASE_T) },
+ { RTE_PCI_DEVICE(NBL_VENDOR_ID, NBL_DEVICE_ID_M18120_LX_BASE_T) },
+ { RTE_PCI_DEVICE(NBL_VENDOR_ID, NBL_DEVICE_ID_M18120_OCP) },
+ { RTE_PCI_DEVICE(NBL_VENDOR_ID, NBL_DEVICE_ID_M18120_LX_OCP) },
+ { RTE_PCI_DEVICE(NBL_VENDOR_ID, NBL_DEVICE_ID_M18120_BASE_T_OCP) },
+ { RTE_PCI_DEVICE(NBL_VENDOR_ID, NBL_DEVICE_ID_M18120_LX_BASE_T_OCP) },
+ { RTE_PCI_DEVICE(NBL_VENDOR_ID, NBL_DEVICE_ID_M18100_VF) },
+ { .vendor_id = 0, /* sentinel */ },
+};
+
+static struct rte_pci_driver nbl_pmd = {
+ .id_table = pci_id_nbl_map,
+ .drv_flags =
+ RTE_PCI_DRV_INTR_LSC |
+ RTE_PCI_DRV_PROBE_AGAIN,
+ .probe = nbl_pci_probe,
+ .remove = nbl_pci_remove,
+};
+
+RTE_PMD_REGISTER_PCI(net_nbl, nbl_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(net_nbl, pci_id_nbl_map);
diff --git a/drivers/net/nbl/nbl_ethdev.h b/drivers/net/nbl/nbl_ethdev.h
new file mode 100644
index 0000000000..e20a7b940e
--- /dev/null
+++ b/drivers/net/nbl/nbl_ethdev.h
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2025 Nebulamatrix Technology Co., Ltd.
+ */
+
+#ifndef _NBL_ETHDEV_H_
+#define _NBL_ETHDEV_H_
+
+#include "nbl_core.h"
+
+#define ETH_DEV_TO_NBL_DEV_PF_PRIV(eth_dev) \
+ ((struct nbl_adapter *)((eth_dev)->data->dev_private))
+
+#endif
diff --git a/drivers/net/nbl/nbl_include/nbl_include.h b/drivers/net/nbl/nbl_include/nbl_include.h
new file mode 100644
index 0000000000..1697f50a75
--- /dev/null
+++ b/drivers/net/nbl/nbl_include/nbl_include.h
@@ -0,0 +1,53 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2025 Nebulamatrix Technology Co., Ltd.
+ */
+
+#ifndef _NBL_INCLUDE_H_
+#define _NBL_INCLUDE_H_
+
+#include <ctype.h>
+#include <dirent.h>
+#include <errno.h>
+#include <fcntl.h>
+#include <inttypes.h>
+#include <limits.h>
+#include <linux/netlink.h>
+#include <linux/rtnetlink.h>
+#include <linux/genetlink.h>
+#include <linux/ethtool.h>
+#include <netinet/in.h>
+#include <net/if.h>
+#include <net/if_arp.h>
+#include <pthread.h>
+#include <signal.h>
+#include <stdarg.h>
+#include <stdbool.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/eventfd.h>
+#include <sys/ioctl.h>
+#include <sys/mman.h>
+#include <sys/queue.h>
+#include <sys/stat.h>
+#include <sys/types.h>
+#include <unistd.h>
+
+#include <rte_ethdev.h>
+#include <ethdev_driver.h>
+#include <ethdev_pci.h>
+#include <bus_pci_driver.h>
+
+#include "nbl_logs.h"
+
+typedef uint64_t u64;
+typedef uint32_t u32;
+typedef uint16_t u16;
+typedef uint8_t u8;
+typedef int64_t s64;
+typedef int32_t s32;
+typedef int16_t s16;
+typedef int8_t s8;
+
+#endif
diff --git a/drivers/net/nbl/nbl_include/nbl_logs.h b/drivers/net/nbl/nbl_include/nbl_logs.h
new file mode 100644
index 0000000000..161e07b29d
--- /dev/null
+++ b/drivers/net/nbl/nbl_include/nbl_logs.h
@@ -0,0 +1,25 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2025 Nebulamatrix Technology Co., Ltd.
+ */
+
+#ifndef _NBL_LOGS_H_
+#define _NBL_LOGS_H_
+
+#include <rte_log.h>
+
+extern int nbl_logtype_init;
+#define RTE_LOGTYPE_NBL_INIT nbl_logtype_init
+#define PMD_INIT_LOG(level, ...) \
+ RTE_LOG_LINE_PREFIX(level, NBL_INIT, "%s(): ", __func__, __VA_ARGS__)
+
+extern int nbl_logtype_driver;
+#define RTE_LOGTYPE_NBL_DRIVER nbl_logtype_driver
+
+#define NBL_LOG(level, ...) \
+ RTE_LOG_LINE_PREFIX(level, NBL_DRIVER, "%s: ", __func__, __VA_ARGS__)
+
+#define NBL_ASSERT(exp) RTE_VERIFY(exp)
+
+#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, ">>")
+
+#endif
--
2.43.0
* [PATCH v1 03/17] net/nbl: add PHY layer definitions and implementation
2025-06-12 8:58 [PATCH v1 00/17] NBL PMD for Nebulamatrix NICs Kyo Liu
2025-06-12 8:58 ` [PATCH v1 01/17] net/nbl: add doc and minimum nbl build framework Kyo Liu
2025-06-12 8:58 ` [PATCH v1 02/17] net/nbl: add simple probe/remove and log module Kyo Liu
@ 2025-06-12 8:58 ` Kyo Liu
2025-06-12 8:58 ` [PATCH v1 04/17] net/nbl: add Channel " Kyo Liu
` (16 subsequent siblings)
19 siblings, 0 replies; 27+ messages in thread
From: Kyo Liu @ 2025-06-12 8:58 UTC (permalink / raw)
To: kyo.liu, dev; +Cc: Dimon Zhao, Leon Yu, Sam Chen
Add PHY layer related definitions and product ops.
Signed-off-by: Kyo Liu <kyo.liu@nebula-matrix.com>
---
drivers/net/nbl/meson.build | 2 +
drivers/net/nbl/nbl_core.c | 54 ++++++++--
drivers/net/nbl/nbl_core.h | 30 +++++-
drivers/net/nbl/nbl_ethdev.c | 4 +-
.../nbl_hw_leonis/nbl_phy_leonis_snic.c | 99 +++++++++++++++++++
.../nbl_hw_leonis/nbl_phy_leonis_snic.h | 10 ++
drivers/net/nbl/nbl_hw/nbl_phy.h | 28 ++++++
drivers/net/nbl/nbl_include/nbl_def_phy.h | 35 +++++++
drivers/net/nbl/nbl_include/nbl_include.h | 15 +++
.../net/nbl/nbl_include/nbl_product_base.h | 37 +++++++
10 files changed, 300 insertions(+), 14 deletions(-)
create mode 100644 drivers/net/nbl/nbl_hw/nbl_hw_leonis/nbl_phy_leonis_snic.c
create mode 100644 drivers/net/nbl/nbl_hw/nbl_hw_leonis/nbl_phy_leonis_snic.h
create mode 100644 drivers/net/nbl/nbl_hw/nbl_phy.h
create mode 100644 drivers/net/nbl/nbl_include/nbl_def_phy.h
create mode 100644 drivers/net/nbl/nbl_include/nbl_product_base.h
diff --git a/drivers/net/nbl/meson.build b/drivers/net/nbl/meson.build
index 013a4126f6..4ec1273100 100644
--- a/drivers/net/nbl/meson.build
+++ b/drivers/net/nbl/meson.build
@@ -7,8 +7,10 @@ if not is_linux
endif
includes += include_directories('nbl_include')
+includes += include_directories('nbl_hw')
sources = files(
'nbl_ethdev.c',
'nbl_core.c',
+ 'nbl_hw/nbl_hw_leonis/nbl_phy_leonis_snic.c',
)
diff --git a/drivers/net/nbl/nbl_core.c b/drivers/net/nbl/nbl_core.c
index 63b707f0ba..fc7222d526 100644
--- a/drivers/net/nbl/nbl_core.c
+++ b/drivers/net/nbl/nbl_core.c
@@ -4,27 +4,67 @@
#include "nbl_core.h"
-int nbl_core_init(const struct nbl_adapter *adapter, const struct rte_eth_dev *eth_dev)
+static struct nbl_product_core_ops nbl_product_core_ops[NBL_PRODUCT_MAX] = {
+ {
+ .phy_init = nbl_phy_init_leonis_snic,
+ .phy_remove = nbl_phy_remove_leonis_snic,
+ .res_init = NULL,
+ .res_remove = NULL,
+ .chan_init = NULL,
+ .chan_remove = NULL,
+ },
+};
+
+static struct nbl_product_core_ops *nbl_core_get_product_ops(enum nbl_product_type product_type)
{
- RTE_SET_USED(adapter);
- RTE_SET_USED(eth_dev);
+ return &nbl_product_core_ops[product_type];
+}
+
+static void nbl_init_func_caps(struct rte_pci_device *pci_dev, struct nbl_func_caps *caps)
+{
+ if (pci_dev->id.device_id >= NBL_DEVICE_ID_M18110 &&
+ pci_dev->id.device_id <= NBL_DEVICE_ID_M18100_VF)
+ caps->product_type = NBL_LEONIS_TYPE;
+}
+
+int nbl_core_init(struct nbl_adapter *adapter, struct rte_eth_dev *eth_dev)
+{
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ struct nbl_product_core_ops *product_base_ops = NULL;
+ int ret = 0;
+
+ nbl_init_func_caps(pci_dev, &adapter->caps);
+
+ product_base_ops = nbl_core_get_product_ops(adapter->caps.product_type);
+
+	/* each product's phy/chan/res layer differs greatly, so call its own init ops */
+ ret = product_base_ops->phy_init(adapter);
+ if (ret)
+ goto phy_init_fail;
return 0;
+
+phy_init_fail:
+ return -EINVAL;
}
-void nbl_core_remove(const struct nbl_adapter *adapter)
+void nbl_core_remove(struct nbl_adapter *adapter)
{
- RTE_SET_USED(adapter);
+ struct nbl_product_core_ops *product_base_ops = NULL;
+
+ product_base_ops = nbl_core_get_product_ops(adapter->caps.product_type);
+
+ product_base_ops->phy_remove(adapter);
}
-int nbl_core_start(const struct nbl_adapter *adapter)
+int nbl_core_start(struct nbl_adapter *adapter)
{
RTE_SET_USED(adapter);
return 0;
}
-void nbl_core_stop(const struct nbl_adapter *adapter)
+void nbl_core_stop(struct nbl_adapter *adapter)
{
RTE_SET_USED(adapter);
}
diff --git a/drivers/net/nbl/nbl_core.h b/drivers/net/nbl/nbl_core.h
index 4ba13e5bd7..2d0e39afa2 100644
--- a/drivers/net/nbl/nbl_core.h
+++ b/drivers/net/nbl/nbl_core.h
@@ -5,7 +5,8 @@
#ifndef _NBL_CORE_H_
#define _NBL_CORE_H_
-#include "nbl_include.h"
+#include "nbl_product_base.h"
+#include "nbl_def_phy.h"
#define NBL_VENDOR_ID (0x1F0F)
#define NBL_DEVICE_ID_M18110 (0x3403)
@@ -27,14 +28,33 @@
#define NBL_DEVICE_ID_M18100_VF (0x3413)
#define NBL_MAX_INSTANCE_CNT 516
+
+#define NBL_ADAPTER_TO_PHY_MGT(adapter) ((adapter)->core.phy_mgt)
+#define NBL_ADAPTER_TO_PHY_OPS_TBL(adapter) ((adapter)->intf.phy_ops_tbl)
+
+struct nbl_core {
+ void *phy_mgt;
+ void *res_mgt;
+ void *disp_mgt;
+ void *chan_mgt;
+ void *dev_mgt;
+};
+
+struct nbl_interface {
+ struct nbl_phy_ops_tbl *phy_ops_tbl;
+};
+
struct nbl_adapter {
TAILQ_ENTRY(nbl_adapter) next;
struct rte_pci_device *pci_dev;
+ struct nbl_core core;
+ struct nbl_interface intf;
+ struct nbl_func_caps caps;
};
-int nbl_core_init(const struct nbl_adapter *adapter, const struct rte_eth_dev *eth_dev);
-void nbl_core_remove(const struct nbl_adapter *adapter);
-int nbl_core_start(const struct nbl_adapter *adapter);
-void nbl_core_stop(const struct nbl_adapter *adapter);
+int nbl_core_init(struct nbl_adapter *adapter, struct rte_eth_dev *eth_dev);
+void nbl_core_remove(struct nbl_adapter *adapter);
+int nbl_core_start(struct nbl_adapter *adapter);
+void nbl_core_stop(struct nbl_adapter *adapter);
#endif
diff --git a/drivers/net/nbl/nbl_ethdev.c b/drivers/net/nbl/nbl_ethdev.c
index 9f44d21b8e..261f8a522a 100644
--- a/drivers/net/nbl/nbl_ethdev.c
+++ b/drivers/net/nbl/nbl_ethdev.c
@@ -9,7 +9,7 @@ RTE_LOG_REGISTER_SUFFIX(nbl_logtype_driver, driver, INFO);
static int nbl_dev_release_pf(struct rte_eth_dev *eth_dev)
{
- const struct nbl_adapter *adapter = ETH_DEV_TO_NBL_DEV_PF_PRIV(eth_dev);
+ struct nbl_adapter *adapter = ETH_DEV_TO_NBL_DEV_PF_PRIV(eth_dev);
if (!adapter)
return -EINVAL;
@@ -33,7 +33,7 @@ struct eth_dev_ops nbl_eth_dev_ops = {
static int nbl_eth_dev_init(struct rte_eth_dev *eth_dev)
{
- const struct nbl_adapter *adapter = ETH_DEV_TO_NBL_DEV_PF_PRIV(eth_dev);
+ struct nbl_adapter *adapter = ETH_DEV_TO_NBL_DEV_PF_PRIV(eth_dev);
int ret;
PMD_INIT_FUNC_TRACE();
diff --git a/drivers/net/nbl/nbl_hw/nbl_hw_leonis/nbl_phy_leonis_snic.c b/drivers/net/nbl/nbl_hw/nbl_hw_leonis/nbl_phy_leonis_snic.c
new file mode 100644
index 0000000000..febee34edd
--- /dev/null
+++ b/drivers/net/nbl/nbl_hw/nbl_hw_leonis/nbl_phy_leonis_snic.c
@@ -0,0 +1,99 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2025 Nebulamatrix Technology Co., Ltd.
+ */
+
+#include "nbl_phy_leonis_snic.h"
+
+static inline void nbl_wr32(void *priv, u64 reg, u32 value)
+{
+ struct nbl_phy_mgt *phy_mgt = (struct nbl_phy_mgt *)priv;
+
+ rte_write32(rte_cpu_to_le_32(value), ((phy_mgt)->hw_addr + (reg)));
+}
+
+static void nbl_phy_update_tail_ptr(void *priv, u16 notify_qid, u16 tail_ptr)
+{
+ nbl_wr32(priv, NBL_NOTIFY_ADDR, ((u32)tail_ptr << NBL_TAIL_PTR_OFT | (u32)notify_qid));
+}
+
+static u8 *nbl_phy_get_tail_ptr(void *priv)
+{
+ struct nbl_phy_mgt *phy_mgt = (struct nbl_phy_mgt *)priv;
+
+ return phy_mgt->hw_addr;
+}
+
+static struct nbl_phy_ops phy_ops = {
+ .update_tail_ptr = nbl_phy_update_tail_ptr,
+ .get_tail_ptr = nbl_phy_get_tail_ptr,
+};
+
+static int nbl_phy_setup_ops(struct nbl_phy_ops_tbl **phy_ops_tbl,
+ struct nbl_phy_mgt_leonis_snic *phy_mgt_leonis_snic)
+{
+ *phy_ops_tbl = rte_zmalloc("nbl_phy_ops", sizeof(struct nbl_phy_ops_tbl), 0);
+ if (!*phy_ops_tbl)
+ return -ENOMEM;
+
+ NBL_PHY_OPS_TBL_TO_OPS(*phy_ops_tbl) = &phy_ops;
+ NBL_PHY_OPS_TBL_TO_PRIV(*phy_ops_tbl) = phy_mgt_leonis_snic;
+
+ return 0;
+}
+
+static void nbl_phy_remove_ops(struct nbl_phy_ops_tbl **phy_ops_tbl)
+{
+ rte_free(*phy_ops_tbl);
+ *phy_ops_tbl = NULL;
+}
+
+int nbl_phy_init_leonis_snic(void *p)
+{
+ struct nbl_phy_mgt_leonis_snic **phy_mgt_leonis_snic;
+ struct nbl_phy_mgt *phy_mgt;
+ struct nbl_phy_ops_tbl **phy_ops_tbl;
+ struct nbl_adapter *adapter = (struct nbl_adapter *)p;
+ struct rte_pci_device *pci_dev = adapter->pci_dev;
+ int ret = 0;
+
+ phy_mgt_leonis_snic = (struct nbl_phy_mgt_leonis_snic **)&NBL_ADAPTER_TO_PHY_MGT(adapter);
+ phy_ops_tbl = &NBL_ADAPTER_TO_PHY_OPS_TBL(adapter);
+
+ *phy_mgt_leonis_snic = rte_zmalloc("nbl_phy_mgt",
+ sizeof(struct nbl_phy_mgt_leonis_snic), 0);
+ if (!*phy_mgt_leonis_snic) {
+ ret = -ENOMEM;
+ goto alloc_phy_mgt_failed;
+ }
+
+ phy_mgt = &(*phy_mgt_leonis_snic)->phy_mgt;
+
+ phy_mgt->hw_addr = (u8 *)pci_dev->mem_resource[0].addr;
+ phy_mgt->memory_bar_pa = pci_dev->mem_resource[0].phys_addr;
+ phy_mgt->mailbox_bar_hw_addr = (u8 *)pci_dev->mem_resource[2].addr;
+
+ ret = nbl_phy_setup_ops(phy_ops_tbl, *phy_mgt_leonis_snic);
+ if (ret)
+ goto setup_ops_failed;
+
+ return ret;
+
+setup_ops_failed:
+ rte_free(*phy_mgt_leonis_snic);
+alloc_phy_mgt_failed:
+ return ret;
+}
+
+void nbl_phy_remove_leonis_snic(void *p)
+{
+ struct nbl_phy_mgt_leonis_snic **phy_mgt_leonis_snic;
+ struct nbl_phy_ops_tbl **phy_ops_tbl;
+ struct nbl_adapter *adapter = (struct nbl_adapter *)p;
+
+ phy_mgt_leonis_snic = (struct nbl_phy_mgt_leonis_snic **)&adapter->core.phy_mgt;
+ phy_ops_tbl = &NBL_ADAPTER_TO_PHY_OPS_TBL(adapter);
+
+ rte_free(*phy_mgt_leonis_snic);
+
+ nbl_phy_remove_ops(phy_ops_tbl);
+}
diff --git a/drivers/net/nbl/nbl_hw/nbl_hw_leonis/nbl_phy_leonis_snic.h b/drivers/net/nbl/nbl_hw/nbl_hw_leonis/nbl_phy_leonis_snic.h
new file mode 100644
index 0000000000..5440cf41be
--- /dev/null
+++ b/drivers/net/nbl/nbl_hw/nbl_hw_leonis/nbl_phy_leonis_snic.h
@@ -0,0 +1,10 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2025 Nebulamatrix Technology Co., Ltd.
+ */
+
+#ifndef _NBL_PHY_LEONIS_SNIC_H_
+#define _NBL_PHY_LEONIS_SNIC_H_
+
+#include "../nbl_phy.h"
+
+#endif
diff --git a/drivers/net/nbl/nbl_hw/nbl_phy.h b/drivers/net/nbl/nbl_hw/nbl_phy.h
new file mode 100644
index 0000000000..38b7fc2ec5
--- /dev/null
+++ b/drivers/net/nbl/nbl_hw/nbl_phy.h
@@ -0,0 +1,28 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2025 Nebulamatrix Technology Co., Ltd.
+ */
+
+#ifndef _NBL_PHY_H_
+#define _NBL_PHY_H_
+
+#include "nbl_ethdev.h"
+
+#define NBL_NOTIFY_ADDR (0x00000000)
+#define NBL_BYTES_IN_REG (4)
+#define NBL_TAIL_PTR_OFT (16)
+#define NBL_LO_DWORD(x) ((u32)((x) & 0xFFFFFFFF))
+#define NBL_HI_DWORD(x) ((u32)(((x) >> 32) & 0xFFFFFFFF))
+
+struct nbl_phy_mgt {
+ u8 *hw_addr;
+ u64 memory_bar_pa;
+ u8 *mailbox_bar_hw_addr;
+ u64 notify_addr;
+ u32 version;
+};
+
+struct nbl_phy_mgt_leonis_snic {
+ struct nbl_phy_mgt phy_mgt;
+};
+
+#endif
diff --git a/drivers/net/nbl/nbl_include/nbl_def_phy.h b/drivers/net/nbl/nbl_include/nbl_def_phy.h
new file mode 100644
index 0000000000..09a3e125a9
--- /dev/null
+++ b/drivers/net/nbl/nbl_include/nbl_def_phy.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2025 Nebulamatrix Technology Co., Ltd.
+ */
+
+#ifndef _NBL_DEF_PHY_H_
+#define _NBL_DEF_PHY_H_
+
+#include "nbl_include.h"
+
+#define NBL_PHY_OPS_TBL_TO_OPS(phy_ops_tbl) ((phy_ops_tbl)->ops)
+#define NBL_PHY_OPS_TBL_TO_PRIV(phy_ops_tbl) ((phy_ops_tbl)->priv)
+
+struct nbl_phy_ops {
+ /* queue */
+ void (*update_tail_ptr)(void *priv, u16 notify_qid, u16 tail_ptr);
+ u8 *(*get_tail_ptr)(void *priv);
+
+ /* mailbox */
+ void (*config_mailbox_rxq)(void *priv, uint64_t dma_addr, int size_bwid);
+ void (*config_mailbox_txq)(void *priv, uint64_t dma_addr, int size_bwid);
+ void (*stop_mailbox_rxq)(void *priv);
+ void (*stop_mailbox_txq)(void *priv);
+ uint16_t (*get_mailbox_rx_tail_ptr)(void *priv);
+ void (*update_mailbox_queue_tail_ptr)(void *priv, uint16_t tail_ptr, uint8_t txrx);
+};
+
+struct nbl_phy_ops_tbl {
+ struct nbl_phy_ops *ops;
+ void *priv;
+};
+
+int nbl_phy_init_leonis_snic(void *adapter);
+void nbl_phy_remove_leonis_snic(void *adapter);
+
+#endif
diff --git a/drivers/net/nbl/nbl_include/nbl_include.h b/drivers/net/nbl/nbl_include/nbl_include.h
index 1697f50a75..493ee58411 100644
--- a/drivers/net/nbl/nbl_include/nbl_include.h
+++ b/drivers/net/nbl/nbl_include/nbl_include.h
@@ -38,6 +38,7 @@
#include <ethdev_driver.h>
#include <ethdev_pci.h>
#include <bus_pci_driver.h>
+#include <rte_io.h>
#include "nbl_logs.h"
@@ -50,4 +51,18 @@ typedef int32_t s32;
typedef int16_t s16;
typedef int8_t s8;
+enum nbl_product_type {
+ NBL_LEONIS_TYPE,
+ NBL_DRACO_TYPE,
+ NBL_BOOTIS_TYPE,
+ NBL_PRODUCT_MAX,
+};
+
+struct nbl_func_caps {
+ enum nbl_product_type product_type;
+ u32 is_vf:1;
+ u32 is_user:1;
+ u32 rsv:30;
+};
+
#endif
diff --git a/drivers/net/nbl/nbl_include/nbl_product_base.h b/drivers/net/nbl/nbl_include/nbl_product_base.h
new file mode 100644
index 0000000000..7b9c2ccf72
--- /dev/null
+++ b/drivers/net/nbl/nbl_include/nbl_product_base.h
@@ -0,0 +1,37 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2025 Nebulamatrix Technology Co., Ltd.
+ */
+
+#ifndef _NBL_DEF_PRODUCT_BASE_H_
+#define _NBL_DEF_PRODUCT_BASE_H_
+
+#include "nbl_include.h"
+
+struct nbl_product_core_ops {
+ int (*phy_init)(void *p);
+ void (*phy_remove)(void *p);
+ int (*res_init)(void *p, struct rte_eth_dev *eth_dev);
+ void (*res_remove)(void *p);
+ int (*chan_init)(void *p);
+ void (*chan_remove)(void *p);
+};
+
+struct nbl_product_dev_ops {
+ int (*dev_init)(void *adapter);
+ void (*dev_uninit)(void *adapter);
+ int (*dev_start)(void *adapter);
+ void (*dev_stop)(void *adapter);
+};
+
+struct nbl_product_dispatch_ops {
+ int (*dispatch_init)(void *mgt);
+ int (*dispatch_uninit)(void *mgt);
+};
+
+struct nbl_product_dev_external_ops {
+ int (*external_pf_ops_get)(struct rte_eth_dev *dev, void *arg);
+ int (*external_rep_ops_get)(struct rte_eth_dev *dev, void *arg);
+ int (*external_bond_ops_get)(struct rte_eth_dev *dev, void *arg);
+};
+
+#endif
--
2.43.0
* [PATCH v1 04/17] net/nbl: add Channel layer definitions and implementation
2025-06-12 8:58 [PATCH v1 00/17] NBL PMD for Nebulamatrix NICs Kyo Liu
` (2 preceding siblings ...)
2025-06-12 8:58 ` [PATCH v1 03/17] net/nbl: add PHY layer definitions and implementation Kyo Liu
@ 2025-06-12 8:58 ` Kyo Liu
2025-06-12 8:58 ` [PATCH v1 05/17] net/nbl: add Resource " Kyo Liu
` (15 subsequent siblings)
19 siblings, 0 replies; 27+ messages in thread
From: Kyo Liu @ 2025-06-12 8:58 UTC (permalink / raw)
To: kyo.liu, dev; +Cc: Dimon Zhao, Leon Yu, Sam Chen
add Channel layer related definitions and the nbl_thread
worker used for mailbox (mbx) interaction
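The mailbox interaction below rests on two small pieces of ring bookkeeping: a doorbell value that packs the tail pointer and queue id into a single 32-bit register write (matching NBL_TAIL_PTR_OFT in nbl_phy.h), and index advance with wraparound as done for next_to_use. A minimal standalone sketch for reviewers; the helper names are illustrative and not part of this patch:

```c
#include <assert.h>
#include <stdint.h>

#define NBL_TAIL_PTR_OFT 16	/* same shift as in nbl_phy.h */

/* Doorbell value written to the notify register: tail pointer in the
 * high 16 bits, notify queue id in the low 16 bits. */
static inline uint32_t nbl_notify_value(uint16_t tail_ptr, uint16_t qid)
{
	return ((uint32_t)tail_ptr << NBL_TAIL_PTR_OFT) | (uint32_t)qid;
}

/* Advance a ring index with wraparound, as done for next_to_use and
 * next_to_clean in the mailbox tx/rx rings. */
static inline uint16_t nbl_ring_advance(uint16_t idx, uint16_t ring_size)
{
	idx++;
	return (idx == ring_size) ? 0 : idx;
}
```

Note that the hardware-visible tail_ptr is a free-running counter (only its low bits matter to the doorbell), while the ring indices wrap at the ring size.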
Signed-off-by: Kyo Liu <kyo.liu@nebula-matrix.com>
---
drivers/net/nbl/meson.build | 3 +
drivers/net/nbl/nbl_common/nbl_common.c | 47 ++
drivers/net/nbl/nbl_common/nbl_common.h | 10 +
drivers/net/nbl/nbl_common/nbl_thread.c | 88 +++
drivers/net/nbl/nbl_core.c | 11 +-
drivers/net/nbl/nbl_core.h | 6 +
drivers/net/nbl/nbl_hw/nbl_channel.c | 672 ++++++++++++++++++
drivers/net/nbl/nbl_hw/nbl_channel.h | 116 +++
.../nbl_hw_leonis/nbl_phy_leonis_snic.c | 124 ++++
.../nbl_hw_leonis/nbl_phy_leonis_snic.h | 43 ++
drivers/net/nbl/nbl_include/nbl_def_channel.h | 326 +++++++++
drivers/net/nbl/nbl_include/nbl_def_common.h | 40 ++
drivers/net/nbl/nbl_include/nbl_include.h | 7 +
13 files changed, 1491 insertions(+), 2 deletions(-)
create mode 100644 drivers/net/nbl/nbl_common/nbl_common.c
create mode 100644 drivers/net/nbl/nbl_common/nbl_common.h
create mode 100644 drivers/net/nbl/nbl_common/nbl_thread.c
create mode 100644 drivers/net/nbl/nbl_hw/nbl_channel.c
create mode 100644 drivers/net/nbl/nbl_hw/nbl_channel.h
create mode 100644 drivers/net/nbl/nbl_include/nbl_def_channel.h
create mode 100644 drivers/net/nbl/nbl_include/nbl_def_common.h
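The tx descriptor fill policy in nbl_chan_update_txqueue (small payloads embedded directly in the descriptor with data_len set, larger ones copied to the DMA buffer with only buf_addr/buf_len set) can be sketched standalone as below; the struct layout and threshold value here are illustrative, not the exact definitions from nbl_def_channel.h:

```c
#include <stdint.h>
#include <string.h>

#define DEMO_EMBEDDED_DATA_LEN 16	/* illustrative threshold */

/* Simplified stand-in for struct nbl_chan_tx_desc. */
struct demo_tx_desc {
	uint64_t buf_addr;
	uint16_t buf_len;
	uint16_t data_len;
	uint8_t  data[DEMO_EMBEDDED_DATA_LEN];
};

/* Payloads up to the embedded limit live in the descriptor itself
 * (data_len != 0, buf_len == 0); anything larger is copied into the
 * per-entry DMA buffer and referenced by address (buf_len != 0). */
static void demo_fill_desc(struct demo_tx_desc *d, uint64_t pa, void *va,
			   const void *arg, uint16_t arg_len)
{
	if (arg_len > DEMO_EMBEDDED_DATA_LEN) {
		memcpy(va, arg, arg_len);
		d->buf_addr = pa;
		d->buf_len = arg_len;
		d->data_len = 0;
	} else {
		memcpy(d->data, arg, arg_len);
		d->buf_len = 0;
		d->data_len = arg_len;
	}
}
```

The receiver in nbl_chan_recv_msg mirrors this: a non-zero data_len means the payload follows inside the descriptor, otherwise it reads buf_len bytes from the buffer right after it.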
diff --git a/drivers/net/nbl/meson.build b/drivers/net/nbl/meson.build
index 4ec1273100..c849cab185 100644
--- a/drivers/net/nbl/meson.build
+++ b/drivers/net/nbl/meson.build
@@ -12,5 +12,8 @@ includes += include_directories('nbl_hw')
sources = files(
'nbl_ethdev.c',
'nbl_core.c',
+ 'nbl_common/nbl_common.c',
+ 'nbl_common/nbl_thread.c',
+ 'nbl_hw/nbl_channel.c',
'nbl_hw/nbl_hw_leonis/nbl_phy_leonis_snic.c',
)
diff --git a/drivers/net/nbl/nbl_common/nbl_common.c b/drivers/net/nbl/nbl_common/nbl_common.c
new file mode 100644
index 0000000000..9fcf03b015
--- /dev/null
+++ b/drivers/net/nbl/nbl_common/nbl_common.c
@@ -0,0 +1,47 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2025 Nebulamatrix Technology Co., Ltd.
+ */
+
+#include "nbl_common.h"
+
+/**
+ * @brief: allocate a contiguous DMA memory region for the command buffer
+ * @mem: output, the memory object containing the va, pa and size of the region
+ * @size: input, memory size in bytes
+ * @return: memory virtual address for CPU usage
+ */
+void *nbl_alloc_dma_mem(struct nbl_dma_mem *mem, uint32_t size)
+{
+ static uint64_t nbl_dma_memzone_id;
+ const struct rte_memzone *mz = NULL;
+ char z_name[RTE_MEMZONE_NAMESIZE];
+
+ if (!mem)
+ return NULL;
+
+ snprintf(z_name, sizeof(z_name), "nbl_dma_%" NBL_PRIU64 "",
+ rte_atomic_fetch_add_explicit(&nbl_dma_memzone_id, 1, rte_memory_order_relaxed));
+ mz = rte_memzone_reserve_bounded(z_name, size, SOCKET_ID_ANY, 0,
+ 0, RTE_PGSIZE_2M);
+ if (!mz)
+ return NULL;
+
+ mem->size = size;
+ mem->va = mz->addr;
+ mem->pa = mz->iova;
+ mem->zone = (const void *)mz;
+
+ return mem->va;
+}
+
+/**
+ * @brief: free a DMA memory region allocated by nbl_alloc_dma_mem
+ * @mem: input, the memory object
+ */
+void nbl_free_dma_mem(struct nbl_dma_mem *mem)
+{
+ rte_memzone_free((const struct rte_memzone *)mem->zone);
+ mem->zone = NULL;
+ mem->va = NULL;
+ mem->pa = (uint64_t)0;
+}
diff --git a/drivers/net/nbl/nbl_common/nbl_common.h b/drivers/net/nbl/nbl_common/nbl_common.h
new file mode 100644
index 0000000000..7ff028f5a9
--- /dev/null
+++ b/drivers/net/nbl/nbl_common/nbl_common.h
@@ -0,0 +1,10 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2025 Nebulamatrix Technology Co., Ltd.
+ */
+
+#ifndef _NBL_COMMON_H_
+#define _NBL_COMMON_H_
+
+#include "nbl_ethdev.h"
+
+#endif
diff --git a/drivers/net/nbl/nbl_common/nbl_thread.c b/drivers/net/nbl/nbl_common/nbl_thread.c
new file mode 100644
index 0000000000..c3f560a57f
--- /dev/null
+++ b/drivers/net/nbl/nbl_common/nbl_thread.c
@@ -0,0 +1,88 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2025 Nebulamatrix Technology Co., Ltd.
+ */
+
+#include "nbl_common.h"
+
+static pthread_mutex_t nbl_work_list_lock = PTHREAD_MUTEX_INITIALIZER;
+TAILQ_HEAD(nbl_work_list_head, nbl_work);
+rte_thread_t nbl_work_tid;
+static bool thread_exit;
+
+static struct nbl_work_list_head nbl_work_list = TAILQ_HEAD_INITIALIZER(nbl_work_list);
+
+static uint32_t nbl_thread_polling_task(__rte_unused void *param)
+{
+ struct timespec time;
+ struct nbl_work *work;
+ struct nbl_work *work_tmp;
+ int i = 0;
+
+ time.tv_sec = 0;
+ time.tv_nsec = 100000;
+
+ while (true) {
+ i++;
+ pthread_mutex_lock(&nbl_work_list_lock);
+ RTE_TAILQ_FOREACH_SAFE(work, &nbl_work_list, next, work_tmp) {
+ if (work->no_run)
+ continue;
+
+ if (work->run_once) {
+ work->handler(work->params);
+ TAILQ_REMOVE(&nbl_work_list, work, next);
+ } else {
+ if (i % work->tick == work->random)
+ work->handler(work->params);
+ }
+ }
+
+ pthread_mutex_unlock(&nbl_work_list_lock);
+ nanosleep(&time, 0);
+ }
+
+ return 0;
+}
+
+int nbl_thread_add_work(struct nbl_work *work)
+{
+ int ret = 0;
+
+ work->random = rte_rand() % work->tick;
+ pthread_mutex_lock(&nbl_work_list_lock);
+
+ if (thread_exit) {
+ rte_thread_join(nbl_work_tid, NULL);
+ nbl_work_tid.opaque_id = 0;
+ thread_exit = 0;
+ }
+
+ if (!nbl_work_tid.opaque_id) {
+ ret = rte_thread_create_internal_control(&nbl_work_tid, "nbl_thread",
+ nbl_thread_polling_task, NULL);
+
+ if (ret) {
+ NBL_LOG(ERR, "create thread failed, ret %d", ret);
+ pthread_mutex_unlock(&nbl_work_list_lock);
+ return ret;
+ }
+ }
+
+ NBL_ASSERT(nbl_work_tid.opaque_id);
+ TAILQ_INSERT_HEAD(&nbl_work_list, work, next);
+ pthread_mutex_unlock(&nbl_work_list_lock);
+
+ return 0;
+}
+
+void nbl_thread_del_work(struct nbl_work *work)
+{
+ pthread_mutex_lock(&nbl_work_list_lock);
+ TAILQ_REMOVE(&nbl_work_list, work, next);
+ if (TAILQ_EMPTY(&nbl_work_list)) {
+ pthread_cancel((pthread_t)nbl_work_tid.opaque_id);
+ thread_exit = 1;
+ }
+
+ pthread_mutex_unlock(&nbl_work_list_lock);
+}
diff --git a/drivers/net/nbl/nbl_core.c b/drivers/net/nbl/nbl_core.c
index fc7222d526..f4388fe3b5 100644
--- a/drivers/net/nbl/nbl_core.c
+++ b/drivers/net/nbl/nbl_core.c
@@ -10,8 +10,8 @@ static struct nbl_product_core_ops nbl_product_core_ops[NBL_PRODUCT_MAX] = {
.phy_remove = nbl_phy_remove_leonis_snic,
.res_init = NULL,
.res_remove = NULL,
- .chan_init = NULL,
- .chan_remove = NULL,
+ .chan_init = nbl_chan_init_leonis,
+ .chan_remove = nbl_chan_remove_leonis,
},
};
@@ -42,8 +42,14 @@ int nbl_core_init(struct nbl_adapter *adapter, struct rte_eth_dev *eth_dev)
if (ret)
goto phy_init_fail;
+ ret = product_base_ops->chan_init(adapter);
+ if (ret)
+ goto chan_init_fail;
+
return 0;
+chan_init_fail:
+ product_base_ops->phy_remove(adapter);
phy_init_fail:
return -EINVAL;
}
@@ -54,6 +60,7 @@ void nbl_core_remove(struct nbl_adapter *adapter)
product_base_ops = nbl_core_get_product_ops(adapter->caps.product_type);
+ product_base_ops->chan_remove(adapter);
product_base_ops->phy_remove(adapter);
}
diff --git a/drivers/net/nbl/nbl_core.h b/drivers/net/nbl/nbl_core.h
index 2d0e39afa2..a6c1103c77 100644
--- a/drivers/net/nbl/nbl_core.h
+++ b/drivers/net/nbl/nbl_core.h
@@ -6,7 +6,9 @@
#define _NBL_CORE_H_
#include "nbl_product_base.h"
+#include "nbl_def_common.h"
#include "nbl_def_phy.h"
+#include "nbl_def_channel.h"
#define NBL_VENDOR_ID (0x1F0F)
#define NBL_DEVICE_ID_M18110 (0x3403)
@@ -30,7 +32,10 @@
#define NBL_MAX_INSTANCE_CNT 516
#define NBL_ADAPTER_TO_PHY_MGT(adapter) ((adapter)->core.phy_mgt)
+#define NBL_ADAPTER_TO_CHAN_MGT(adapter) ((adapter)->core.chan_mgt)
+
#define NBL_ADAPTER_TO_PHY_OPS_TBL(adapter) ((adapter)->intf.phy_ops_tbl)
+#define NBL_ADAPTER_TO_CHAN_OPS_TBL(adapter) ((adapter)->intf.channel_ops_tbl)
struct nbl_core {
void *phy_mgt;
@@ -42,6 +47,7 @@ struct nbl_core {
struct nbl_interface {
struct nbl_phy_ops_tbl *phy_ops_tbl;
+ struct nbl_channel_ops_tbl *channel_ops_tbl;
};
struct nbl_adapter {
diff --git a/drivers/net/nbl/nbl_hw/nbl_channel.c b/drivers/net/nbl/nbl_hw/nbl_channel.c
new file mode 100644
index 0000000000..09f1870ed0
--- /dev/null
+++ b/drivers/net/nbl/nbl_hw/nbl_channel.c
@@ -0,0 +1,672 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2025 Nebulamatrix Technology Co., Ltd.
+ */
+
+#include "nbl_channel.h"
+
+static int nbl_chan_send_ack(void *priv, struct nbl_chan_ack_info *chan_ack);
+
+static void nbl_chan_init_queue_param(union nbl_chan_info *chan_info,
+ u16 num_txq_entries, u16 num_rxq_entries,
+ u16 txq_buf_size, u16 rxq_buf_size)
+{
+ rte_spinlock_init(&chan_info->mailbox.txq_lock);
+ chan_info->mailbox.num_txq_entries = num_txq_entries;
+ chan_info->mailbox.num_rxq_entries = num_rxq_entries;
+ chan_info->mailbox.txq_buf_size = txq_buf_size;
+ chan_info->mailbox.rxq_buf_size = rxq_buf_size;
+}
+
+static int nbl_chan_init_tx_queue(union nbl_chan_info *chan_info)
+{
+ struct nbl_chan_ring *txq = &chan_info->mailbox.txq;
+ size_t size = chan_info->mailbox.num_txq_entries * sizeof(struct nbl_chan_tx_desc);
+
+ txq->desc = nbl_alloc_dma_mem(&txq->desc_mem, size);
+ if (!txq->desc) {
+ NBL_LOG(ERR, "Allocate DMA for chan tx descriptor ring failed");
+ return -ENOMEM;
+ }
+
+ chan_info->mailbox.wait = rte_calloc("nbl_chan_wait", chan_info->mailbox.num_txq_entries,
+ sizeof(struct nbl_chan_waitqueue_head), 0);
+ if (!chan_info->mailbox.wait) {
+ NBL_LOG(ERR, "Allocate Txq wait_queue_head array failed");
+ goto req_wait_queue_failed;
+ }
+
+ size = chan_info->mailbox.num_txq_entries * chan_info->mailbox.txq_buf_size;
+ txq->buf = nbl_alloc_dma_mem(&txq->buf_mem, size);
+ if (!txq->buf) {
+ NBL_LOG(ERR, "Allocate memory for chan tx buffer arrays failed");
+ goto req_num_txq_entries;
+ }
+
+ return 0;
+
+req_num_txq_entries:
+ rte_free(chan_info->mailbox.wait);
+req_wait_queue_failed:
+ nbl_free_dma_mem(&txq->desc_mem);
+ txq->desc = NULL;
+ chan_info->mailbox.wait = NULL;
+
+ return -ENOMEM;
+}
+
+static int nbl_chan_init_rx_queue(union nbl_chan_info *chan_info)
+{
+ struct nbl_chan_ring *rxq = &chan_info->mailbox.rxq;
+ size_t size = chan_info->mailbox.num_rxq_entries * sizeof(struct nbl_chan_rx_desc);
+
+ rxq->desc = nbl_alloc_dma_mem(&rxq->desc_mem, size);
+ if (!rxq->desc) {
+ NBL_LOG(ERR, "Allocate DMA for chan rx descriptor ring failed");
+ return -ENOMEM;
+ }
+
+ size = chan_info->mailbox.num_rxq_entries * chan_info->mailbox.rxq_buf_size;
+ rxq->buf = nbl_alloc_dma_mem(&rxq->buf_mem, size);
+ if (!rxq->buf) {
+ NBL_LOG(ERR, "Allocate memory for chan rx buffer arrays failed");
+ nbl_free_dma_mem(&rxq->desc_mem);
+ rxq->desc = NULL;
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+static void nbl_chan_remove_tx_queue(union nbl_chan_info *chan_info)
+{
+ struct nbl_chan_ring *txq = &chan_info->mailbox.txq;
+
+ nbl_free_dma_mem(&txq->buf_mem);
+ txq->buf = NULL;
+
+ rte_free(chan_info->mailbox.wait);
+ chan_info->mailbox.wait = NULL;
+
+ nbl_free_dma_mem(&txq->desc_mem);
+ txq->desc = NULL;
+}
+
+static void nbl_chan_remove_rx_queue(union nbl_chan_info *chan_info)
+{
+ struct nbl_chan_ring *rxq = &chan_info->mailbox.rxq;
+
+ nbl_free_dma_mem(&rxq->buf_mem);
+ rxq->buf = NULL;
+
+ nbl_free_dma_mem(&rxq->desc_mem);
+ rxq->desc = NULL;
+}
+
+static int nbl_chan_init_queue(union nbl_chan_info *chan_info)
+{
+ int err;
+
+ err = nbl_chan_init_tx_queue(chan_info);
+ if (err)
+ return err;
+
+ err = nbl_chan_init_rx_queue(chan_info);
+ if (err)
+ goto setup_rx_queue_err;
+
+ return 0;
+
+setup_rx_queue_err:
+ nbl_chan_remove_tx_queue(chan_info);
+ return err;
+}
+
+static void nbl_chan_config_queue(struct nbl_channel_mgt *chan_mgt,
+ union nbl_chan_info *chan_info)
+{
+ struct nbl_phy_ops *phy_ops;
+ struct nbl_chan_ring *rxq = &chan_info->mailbox.rxq;
+ struct nbl_chan_ring *txq = &chan_info->mailbox.txq;
+ int size_bwid = rte_log2_u32(chan_info->mailbox.num_rxq_entries);
+
+ phy_ops = NBL_CHAN_MGT_TO_PHY_OPS(chan_mgt);
+
+ phy_ops->config_mailbox_rxq(NBL_CHAN_MGT_TO_PHY_PRIV(chan_mgt),
+ rxq->desc_mem.pa, size_bwid);
+ phy_ops->config_mailbox_txq(NBL_CHAN_MGT_TO_PHY_PRIV(chan_mgt),
+ txq->desc_mem.pa, size_bwid);
+}
+
+#define NBL_UPDATE_QUEUE_TAIL_PTR(chan_info, phy_ops, chan_mgt, tail_ptr, qid) \
+do { \
+ typeof(phy_ops) _phy_ops = (phy_ops); \
+ typeof(chan_mgt) _chan_mgt = (chan_mgt); \
+ typeof(tail_ptr) _tail_ptr = (tail_ptr); \
+ typeof(qid) _qid = (qid); \
+ (_phy_ops)->update_mailbox_queue_tail_ptr(NBL_CHAN_MGT_TO_PHY_PRIV(_chan_mgt), \
+ _tail_ptr, _qid); \
+} while (0)
+
+static int nbl_chan_prepare_rx_bufs(struct nbl_channel_mgt *chan_mgt,
+ union nbl_chan_info *chan_info)
+{
+ struct nbl_phy_ops *phy_ops;
+ struct nbl_chan_ring *rxq = &chan_info->mailbox.rxq;
+ struct nbl_chan_rx_desc *desc;
+ void *phy_priv;
+ u16 rx_tail_ptr;
+ u32 retry_times = 0;
+ u16 i;
+
+ phy_ops = NBL_CHAN_MGT_TO_PHY_OPS(chan_mgt);
+ desc = rxq->desc;
+ for (i = 0; i < chan_info->mailbox.num_rxq_entries - 1; i++) {
+ desc[i].flags = NBL_CHAN_RX_DESC_AVAIL;
+ desc[i].buf_addr = rxq->buf_mem.pa + i * chan_info->mailbox.rxq_buf_size;
+ desc[i].buf_len = chan_info->mailbox.rxq_buf_size;
+ }
+
+ rxq->next_to_clean = 0;
+ rxq->next_to_use = chan_info->mailbox.num_rxq_entries - 1;
+ rxq->tail_ptr = chan_info->mailbox.num_rxq_entries - 1;
+ rte_mb();
+
+ NBL_UPDATE_QUEUE_TAIL_PTR(chan_info, phy_ops, chan_mgt, rxq->tail_ptr, NBL_MB_RX_QID);
+
+ while (retry_times < 100) {
+ phy_priv = NBL_CHAN_MGT_TO_PHY_PRIV(chan_mgt);
+
+ rx_tail_ptr = phy_ops->get_mailbox_rx_tail_ptr(phy_priv);
+
+ if (rx_tail_ptr != rxq->tail_ptr)
+ NBL_UPDATE_QUEUE_TAIL_PTR(chan_info, phy_ops, chan_mgt,
+ rxq->tail_ptr, NBL_MB_RX_QID);
+ else
+ break;
+
+ rte_delay_us(NBL_CHAN_TX_WAIT_US * 50);
+ retry_times++;
+ }
+
+ return 0;
+}
+
+static void nbl_chan_stop_queue(struct nbl_channel_mgt *chan_mgt)
+{
+ struct nbl_phy_ops *phy_ops;
+
+ phy_ops = NBL_CHAN_MGT_TO_PHY_OPS(chan_mgt);
+
+ phy_ops->stop_mailbox_rxq(NBL_CHAN_MGT_TO_PHY_PRIV(chan_mgt));
+ phy_ops->stop_mailbox_txq(NBL_CHAN_MGT_TO_PHY_PRIV(chan_mgt));
+}
+
+static void nbl_chan_remove_queue(union nbl_chan_info *chan_info)
+{
+ nbl_chan_remove_tx_queue(chan_info);
+ nbl_chan_remove_rx_queue(chan_info);
+}
+
+static int nbl_chan_kick_tx_ring(struct nbl_channel_mgt *chan_mgt,
+ union nbl_chan_info *chan_info)
+{
+ struct nbl_phy_ops *phy_ops;
+ struct nbl_chan_ring *txq;
+ struct nbl_chan_tx_desc *tx_desc;
+ int i;
+
+ phy_ops = NBL_CHAN_MGT_TO_PHY_OPS(chan_mgt);
+
+ txq = &chan_info->mailbox.txq;
+ rte_mb();
+
+ NBL_UPDATE_QUEUE_TAIL_PTR(chan_info, phy_ops, chan_mgt, txq->tail_ptr, NBL_MB_TX_QID);
+
+ tx_desc = NBL_CHAN_TX_DESC(txq, txq->next_to_clean);
+
+ i = 0;
+ while (!(tx_desc->flags & NBL_CHAN_TX_DESC_USED)) {
+ rte_delay_us(NBL_CHAN_TX_WAIT_US);
+ i++;
+
+ if (!(i % NBL_CHAN_TX_REKICK_WAIT_TIMES)) {
+ NBL_UPDATE_QUEUE_TAIL_PTR(chan_info, phy_ops, chan_mgt, txq->tail_ptr,
+ NBL_MB_TX_QID);
+ }
+
+ if (i == NBL_CHAN_TX_WAIT_TIMES) {
+ NBL_LOG(ERR, "chan send message type: %d timeout",
+ tx_desc->msg_type);
+ return -1;
+ }
+ }
+
+ txq->next_to_clean = txq->next_to_use;
+ return 0;
+}
+
+static void nbl_chan_recv_ack_msg(void *priv, uint16_t srcid, uint16_t msgid,
+ void *data, uint32_t data_len)
+{
+ struct nbl_channel_mgt *chan_mgt = (struct nbl_channel_mgt *)priv;
+ union nbl_chan_info *chan_info = NULL;
+ struct nbl_chan_waitqueue_head *wait_head;
+ uint32_t *payload = (uint32_t *)data;
+ uint32_t ack_msgid;
+ uint32_t ack_msgtype;
+ uint32_t copy_len;
+
+ chan_info = NBL_CHAN_MGT_TO_CHAN_INFO(chan_mgt);
+ ack_msgtype = *payload;
+ ack_msgid = *(payload + 1);
+ wait_head = &chan_info->mailbox.wait[ack_msgid];
+ wait_head->ack_err = *(payload + 2);
+
+ if (wait_head->ack_err >= 0 && (data_len > 3 * sizeof(uint32_t))) {
+ if (data_len - 3 * sizeof(uint32_t) != wait_head->ack_data_len)
+ NBL_LOG(INFO, "payload_len does not match ack_len,"
+ " srcid:%u, msgtype:%u, msgid:%u, ack_msgid %u,"
+ " data_len:%u, ack_data_len:%u",
+ srcid, ack_msgtype, msgid,
+ ack_msgid, data_len, wait_head->ack_data_len);
+ copy_len = RTE_MIN((u32)wait_head->ack_data_len,
+ (u32)data_len - 3 * sizeof(uint32_t));
+ rte_memcpy(wait_head->ack_data, payload + 3, copy_len);
+ }
+
+ /* wmb */
+ rte_wmb();
+ wait_head->acked = 1;
+}
+
+static void nbl_chan_recv_msg(struct nbl_channel_mgt *chan_mgt, void *data)
+{
+ struct nbl_chan_ack_info chan_ack;
+ struct nbl_chan_tx_desc *tx_desc;
+ struct nbl_chan_msg_handler *msg_handler;
+ u16 msg_type, payload_len, srcid, msgid;
+ void *payload;
+
+ tx_desc = data;
+ msg_type = tx_desc->msg_type;
+
+ srcid = tx_desc->srcid;
+ msgid = tx_desc->msgid;
+ if (msg_type >= NBL_CHAN_MSG_MAX) {
+ NBL_LOG(ERR, "Invalid chan message type %hu", msg_type);
+ return;
+ }
+
+ if (tx_desc->data_len) {
+ payload = (void *)tx_desc->data;
+ payload_len = tx_desc->data_len;
+ } else {
+ payload = (void *)(tx_desc + 1);
+ payload_len = tx_desc->buf_len;
+ }
+
+ msg_handler = &chan_mgt->msg_handler[msg_type];
+ if (!msg_handler->func) {
+ NBL_CHAN_ACK(chan_ack, srcid, msg_type, msgid, -EPERM, NULL, 0);
+ nbl_chan_send_ack(chan_mgt, &chan_ack);
+ NBL_LOG(ERR, "msg:%u has no handler, check that the AF driver is working", msg_type);
+ return;
+ }
+
+ msg_handler->func(msg_handler->priv, srcid, msgid, payload, payload_len);
+}
+
+static void nbl_chan_advance_rx_ring(struct nbl_channel_mgt *chan_mgt,
+ union nbl_chan_info *chan_info,
+ struct nbl_chan_ring *rxq)
+{
+ struct nbl_phy_ops *phy_ops;
+ struct nbl_chan_rx_desc *rx_desc;
+ u16 next_to_use;
+
+ phy_ops = NBL_CHAN_MGT_TO_PHY_OPS(chan_mgt);
+
+ next_to_use = rxq->next_to_use;
+ rx_desc = NBL_CHAN_RX_DESC(rxq, next_to_use);
+
+ rx_desc->flags = NBL_CHAN_RX_DESC_AVAIL;
+ rx_desc->buf_addr = rxq->buf_mem.pa + chan_info->mailbox.rxq_buf_size * next_to_use;
+ rx_desc->buf_len = chan_info->mailbox.rxq_buf_size;
+
+ /* wmb */
+ rte_wmb();
+ rxq->next_to_use++;
+ if (rxq->next_to_use == chan_info->mailbox.num_rxq_entries)
+ rxq->next_to_use = 0;
+ rxq->tail_ptr++;
+
+ NBL_UPDATE_QUEUE_TAIL_PTR(chan_info, phy_ops, chan_mgt, rxq->tail_ptr, NBL_MB_RX_QID);
+}
+
+static void nbl_chan_clean_queue(void *priv)
+{
+ struct nbl_channel_mgt *chan_mgt = (struct nbl_channel_mgt *)priv;
+ union nbl_chan_info *chan_info = NBL_CHAN_MGT_TO_CHAN_INFO(chan_mgt);
+ struct nbl_chan_ring *rxq = &chan_info->mailbox.rxq;
+ struct nbl_chan_rx_desc *rx_desc;
+ u8 *data;
+ u16 next_to_clean;
+
+ next_to_clean = rxq->next_to_clean;
+ rx_desc = NBL_CHAN_RX_DESC(rxq, next_to_clean);
+ data = (u8 *)rxq->buf + next_to_clean * chan_info->mailbox.rxq_buf_size;
+ while (rx_desc->flags & NBL_CHAN_RX_DESC_USED) {
+ rte_rmb();
+ nbl_chan_recv_msg(chan_mgt, data);
+
+ nbl_chan_advance_rx_ring(chan_mgt, chan_info, rxq);
+
+ next_to_clean++;
+ if (next_to_clean == chan_info->mailbox.num_rxq_entries)
+ next_to_clean = 0;
+ rx_desc = NBL_CHAN_RX_DESC(rxq, next_to_clean);
+ data = (u8 *)rxq->buf + next_to_clean * chan_info->mailbox.rxq_buf_size;
+ }
+ rxq->next_to_clean = next_to_clean;
+}
+
+static uint16_t nbl_chan_update_txqueue(union nbl_chan_info *chan_info,
+ uint16_t dstid,
+ enum nbl_chan_msg_type msg_type,
+ void *arg, size_t arg_len)
+{
+ struct nbl_chan_ring *txq;
+ struct nbl_chan_tx_desc *tx_desc;
+ uint64_t pa;
+ void *va;
+ uint16_t next_to_use;
+
+ txq = &chan_info->mailbox.txq;
+ next_to_use = txq->next_to_use;
+ va = (u8 *)txq->buf + next_to_use * chan_info->mailbox.txq_buf_size;
+ pa = txq->buf_mem.pa + next_to_use * chan_info->mailbox.txq_buf_size;
+ tx_desc = NBL_CHAN_TX_DESC(txq, next_to_use);
+
+ tx_desc->dstid = dstid;
+ tx_desc->msg_type = msg_type;
+ tx_desc->msgid = next_to_use;
+ if (arg_len > NBL_CHAN_BUF_LEN - sizeof(*tx_desc)) {
+ NBL_LOG(ERR, "arg_len: %" NBL_PRIU64 ", too long!", arg_len);
+ return -1;
+ }
+
+ if (arg_len > NBL_CHAN_TX_DESC_EMBEDDED_DATA_LEN) {
+ memcpy(va, arg, arg_len);
+ tx_desc->buf_addr = pa;
+ tx_desc->buf_len = arg_len;
+ tx_desc->data_len = 0;
+ } else {
+ memcpy(tx_desc->data, arg, arg_len);
+ tx_desc->buf_len = 0;
+ tx_desc->data_len = arg_len;
+ }
+ tx_desc->flags = NBL_CHAN_TX_DESC_AVAIL;
+
+ /* wmb */
+ rte_wmb();
+ txq->next_to_use++;
+ if (txq->next_to_use == chan_info->mailbox.num_txq_entries)
+ txq->next_to_use = 0;
+ txq->tail_ptr++;
+
+ return next_to_use;
+}
+
+static int nbl_chan_send_msg(void *priv, struct nbl_chan_send_info *chan_send)
+{
+ struct nbl_channel_mgt *chan_mgt = (struct nbl_channel_mgt *)priv;
+ union nbl_chan_info *chan_info = NULL;
+ struct nbl_chan_waitqueue_head *wait_head;
+ uint16_t msgid;
+ int ret;
+ int retry_time = 0;
+
+ if (chan_mgt->state)
+ return -EIO;
+
+ chan_info = NBL_CHAN_MGT_TO_CHAN_INFO(chan_mgt);
+
+ rte_spinlock_lock(&chan_info->mailbox.txq_lock);
+ msgid = nbl_chan_update_txqueue(chan_info, chan_send->dstid,
+ chan_send->msg_type,
+ chan_send->arg, chan_send->arg_len);
+
+ if (msgid == 0xFFFF) {
+ rte_spinlock_unlock(&chan_info->mailbox.txq_lock);
+ NBL_LOG(ERR, "chan tx queue full, send msgtype:%u"
+ " to dstid:%u failed",
+ chan_send->msg_type, chan_send->dstid);
+ return -ECOMM;
+ }
+
+ if (!chan_send->ack) {
+ ret = nbl_chan_kick_tx_ring(chan_mgt, chan_info);
+ rte_spinlock_unlock(&chan_info->mailbox.txq_lock);
+ return ret;
+ }
+
+ wait_head = &chan_info->mailbox.wait[msgid];
+ wait_head->ack_data = chan_send->resp;
+ wait_head->ack_data_len = chan_send->resp_len;
+ wait_head->acked = 0;
+ wait_head->msg_type = chan_send->msg_type;
+ rte_wmb();
+ nbl_chan_kick_tx_ring(chan_mgt, chan_info);
+ rte_spinlock_unlock(&chan_info->mailbox.txq_lock);
+
+ while (1) {
+ if (wait_head->acked) {
+ rte_rmb();
+ return wait_head->ack_err;
+ }
+
+ rte_delay_us(50);
+ retry_time++;
+ if (retry_time > NBL_CHAN_RETRY_TIMES)
+ return -EIO;
+ }
+
+ return 0;
+}
+
+static int nbl_chan_send_ack(void *priv, struct nbl_chan_ack_info *chan_ack)
+{
+ struct nbl_channel_mgt *chan_mgt = (struct nbl_channel_mgt *)priv;
+ struct nbl_chan_send_info chan_send;
+ u32 *tmp;
+ u32 len = 3 * sizeof(u32) + chan_ack->data_len;
+
+ tmp = rte_zmalloc("nbl_chan_send_tmp", len, 0);
+ if (!tmp) {
+ NBL_LOG(ERR, "Chan send ack data malloc failed");
+ return -ENOMEM;
+ }
+
+ tmp[0] = chan_ack->msg_type;
+ tmp[1] = chan_ack->msgid;
+ tmp[2] = (u32)chan_ack->err;
+ if (chan_ack->data && chan_ack->data_len)
+ memcpy(&tmp[3], chan_ack->data, chan_ack->data_len);
+
+ NBL_CHAN_SEND(chan_send, chan_ack->dstid, NBL_CHAN_MSG_ACK, tmp, len, NULL, 0, 0);
+ nbl_chan_send_msg(chan_mgt, &chan_send);
+ rte_free(tmp);
+
+ return 0;
+}
+
+static int nbl_chan_register_msg(void *priv, uint16_t msg_type, nbl_chan_resp func,
+ void *callback_priv)
+{
+ struct nbl_channel_mgt *chan_mgt = (struct nbl_channel_mgt *)priv;
+
+ chan_mgt->msg_handler[msg_type].priv = callback_priv;
+ chan_mgt->msg_handler[msg_type].func = func;
+
+ return 0;
+}
+
+static int nbl_chan_teardown_queue(void *priv)
+{
+ struct nbl_channel_mgt *chan_mgt = (struct nbl_channel_mgt *)priv;
+ union nbl_chan_info *chan_info = NBL_CHAN_MGT_TO_CHAN_INFO(chan_mgt);
+
+ nbl_thread_del_work(&chan_info->mailbox.work);
+ nbl_chan_stop_queue(chan_mgt);
+
+ nbl_chan_remove_queue(chan_info);
+
+ return 0;
+}
+
+static int nbl_chan_setup_queue(void *priv)
+{
+ struct nbl_channel_mgt *chan_mgt = (struct nbl_channel_mgt *)priv;
+ union nbl_chan_info *chan_info = NBL_CHAN_MGT_TO_CHAN_INFO(chan_mgt);
+ int err;
+
+ nbl_chan_init_queue_param(chan_info, NBL_CHAN_QUEUE_LEN,
+ NBL_CHAN_QUEUE_LEN, NBL_CHAN_BUF_LEN,
+ NBL_CHAN_BUF_LEN);
+
+ err = nbl_chan_init_queue(chan_info);
+ if (err)
+ return err;
+
+ chan_info->mailbox.work.handler = nbl_chan_clean_queue;
+ chan_info->mailbox.work.tick = 1;
+ chan_info->mailbox.work.params = (void *)chan_mgt;
+
+ err = nbl_thread_add_work(&chan_info->mailbox.work);
+ if (err)
+ goto tear_down;
+
+ nbl_chan_config_queue(chan_mgt, chan_info);
+
+ err = nbl_chan_prepare_rx_bufs(chan_mgt, chan_info);
+ if (err)
+ goto tear_down;
+
+ return 0;
+
+tear_down:
+ nbl_chan_teardown_queue(chan_mgt);
+ return err;
+}
+
+static void nbl_chan_set_state(void *priv, enum nbl_chan_state state)
+{
+ struct nbl_channel_mgt *chan_mgt = (struct nbl_channel_mgt *)priv;
+
+ chan_mgt->state = state;
+}
+
+static struct nbl_channel_ops chan_ops = {
+ .send_msg = nbl_chan_send_msg,
+ .send_ack = nbl_chan_send_ack,
+ .register_msg = nbl_chan_register_msg,
+ .setup_queue = nbl_chan_setup_queue,
+ .teardown_queue = nbl_chan_teardown_queue,
+ .set_state = nbl_chan_set_state,
+};
+
+static int nbl_chan_setup_chan_mgt(struct nbl_adapter *adapter,
+ struct nbl_channel_mgt_leonis **chan_mgt_leonis)
+{
+ struct nbl_phy_ops_tbl *phy_ops_tbl;
+ union nbl_chan_info *mailbox;
+
+ phy_ops_tbl = NBL_ADAPTER_TO_PHY_OPS_TBL(adapter);
+
+ *chan_mgt_leonis = rte_zmalloc("nbl_chan_mgt", sizeof(struct nbl_channel_mgt_leonis), 0);
+ if (!*chan_mgt_leonis)
+ goto alloc_channel_mgt_leonis_fail;
+
+ (*chan_mgt_leonis)->chan_mgt.phy_ops_tbl = phy_ops_tbl;
+
+ mailbox = rte_zmalloc("nbl_mailbox", sizeof(union nbl_chan_info), 0);
+ if (!mailbox)
+ goto alloc_mailbox_fail;
+
+ NBL_CHAN_MGT_TO_CHAN_INFO(&(*chan_mgt_leonis)->chan_mgt) = mailbox;
+
+ return 0;
+
+alloc_mailbox_fail:
+ rte_free(*chan_mgt_leonis);
+alloc_channel_mgt_leonis_fail:
+ return -ENOMEM;
+}
+
+static void nbl_chan_remove_chan_mgt(struct nbl_channel_mgt_leonis **chan_mgt_leonis)
+{
+ rte_free(NBL_CHAN_MGT_TO_CHAN_INFO(&(*chan_mgt_leonis)->chan_mgt));
+ rte_free(*chan_mgt_leonis);
+ *chan_mgt_leonis = NULL;
+}
+
+static void nbl_chan_remove_ops(struct nbl_channel_ops_tbl **chan_ops_tbl)
+{
+ rte_free(*chan_ops_tbl);
+ *chan_ops_tbl = NULL;
+}
+
+static int nbl_chan_setup_ops(struct nbl_channel_ops_tbl **chan_ops_tbl,
+ struct nbl_channel_mgt_leonis *chan_mgt_leonis)
+{
+ *chan_ops_tbl = rte_zmalloc("nbl_chan_ops_tbl", sizeof(struct nbl_channel_ops_tbl), 0);
+ if (!*chan_ops_tbl)
+ return -ENOMEM;
+
+ NBL_CHAN_OPS_TBL_TO_OPS(*chan_ops_tbl) = &chan_ops;
+ NBL_CHAN_OPS_TBL_TO_PRIV(*chan_ops_tbl) = chan_mgt_leonis;
+
+ chan_mgt_leonis->chan_mgt.msg_handler[NBL_CHAN_MSG_ACK].func = nbl_chan_recv_ack_msg;
+ chan_mgt_leonis->chan_mgt.msg_handler[NBL_CHAN_MSG_ACK].priv = chan_mgt_leonis;
+
+ return 0;
+}
+
+int nbl_chan_init_leonis(void *p)
+{
+ struct nbl_adapter *adapter = (struct nbl_adapter *)p;
+ struct nbl_channel_mgt_leonis **chan_mgt_leonis;
+ struct nbl_channel_ops_tbl **chan_ops_tbl;
+ int ret = 0;
+
+ chan_mgt_leonis = (struct nbl_channel_mgt_leonis **)&NBL_ADAPTER_TO_CHAN_MGT(adapter);
+ chan_ops_tbl = &NBL_ADAPTER_TO_CHAN_OPS_TBL(adapter);
+
+ ret = nbl_chan_setup_chan_mgt(adapter, chan_mgt_leonis);
+ if (ret)
+ goto setup_mgt_fail;
+
+ ret = nbl_chan_setup_ops(chan_ops_tbl, *chan_mgt_leonis);
+ if (ret)
+ goto setup_ops_fail;
+
+ return 0;
+
+setup_ops_fail:
+ nbl_chan_remove_chan_mgt(chan_mgt_leonis);
+setup_mgt_fail:
+ return ret;
+}
+
+void nbl_chan_remove_leonis(void *p)
+{
+ struct nbl_adapter *adapter = (struct nbl_adapter *)p;
+ struct nbl_channel_mgt_leonis **chan_mgt_leonis;
+ struct nbl_channel_ops_tbl **chan_ops_tbl;
+
+ chan_mgt_leonis = (struct nbl_channel_mgt_leonis **)&NBL_ADAPTER_TO_CHAN_MGT(adapter);
+ chan_ops_tbl = &NBL_ADAPTER_TO_CHAN_OPS_TBL(adapter);
+
+ nbl_chan_remove_chan_mgt(chan_mgt_leonis);
+ nbl_chan_remove_ops(chan_ops_tbl);
+}
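nbl_chan_init_leonis above follows the driver's goto-unwind convention: each failure label tears down exactly the resources that were set up before the failing step, in reverse order. A minimal standalone sketch of that pattern (setup_a/setup_b and the call-order bookkeeping are hypothetical, not driver code):

```c
#include <assert.h>
#include <stdbool.h>

static int calls;   /* records setup/teardown ordering for this sketch */
static bool fail_b; /* force the second setup step to fail */

static int setup_a(void)   { calls = calls * 10 + 1; return 0; }
static void remove_a(void) { calls = calls * 10 + 4; }
static int setup_b(void)   { calls = calls * 10 + 2; return fail_b ? -1 : 0; }

/* Mirrors the shape of nbl_chan_init_leonis: when setup_b fails,
 * only setup_a's work is undone before propagating the error. */
static int init(void)
{
	int ret;

	ret = setup_a();
	if (ret)
		goto setup_a_fail;

	ret = setup_b();
	if (ret)
		goto setup_b_fail;

	return 0;

setup_b_fail:
	remove_a();
setup_a_fail:
	return ret;
}
```

The labels name the step that failed, so adding a third setup step only requires one new label between the existing two.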
diff --git a/drivers/net/nbl/nbl_hw/nbl_channel.h b/drivers/net/nbl/nbl_hw/nbl_channel.h
new file mode 100644
index 0000000000..df2222d995
--- /dev/null
+++ b/drivers/net/nbl/nbl_hw/nbl_channel.h
@@ -0,0 +1,116 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2025 Nebulamatrix Technology Co., Ltd.
+ */
+
+#ifndef _NBL_CHANNEL_H_
+#define _NBL_CHANNEL_H_
+
+#include "nbl_ethdev.h"
+
+#define NBL_CHAN_MGT_TO_PHY_OPS_TBL(chan_mgt) ((chan_mgt)->phy_ops_tbl)
+#define NBL_CHAN_MGT_TO_PHY_OPS(chan_mgt) (NBL_CHAN_MGT_TO_PHY_OPS_TBL(chan_mgt)->ops)
+#define NBL_CHAN_MGT_TO_PHY_PRIV(chan_mgt) (NBL_CHAN_MGT_TO_PHY_OPS_TBL(chan_mgt)->priv)
+#define NBL_CHAN_MGT_TO_CHAN_INFO(chan_mgt) ((chan_mgt)->chan_info)
+
+#define NBL_CHAN_TX_DESC(tx_ring, i) \
+ (&(((struct nbl_chan_tx_desc *)((tx_ring)->desc))[i]))
+#define NBL_CHAN_RX_DESC(rx_ring, i) \
+ (&(((struct nbl_chan_rx_desc *)((rx_ring)->desc))[i]))
+
+#define NBL_CHAN_QUEUE_LEN 64
+#define NBL_CHAN_BUF_LEN 4096
+
+#define NBL_CHAN_TX_WAIT_US 100
+#define NBL_CHAN_TX_REKICK_WAIT_TIMES 2000
+#define NBL_CHAN_TX_WAIT_TIMES 10000
+#define NBL_CHAN_RETRY_TIMES 20000
+#define NBL_CHAN_TX_DESC_EMBEDDED_DATA_LEN 16
+
+#define NBL_CHAN_TX_DESC_AVAIL BIT(0)
+#define NBL_CHAN_TX_DESC_USED BIT(1)
+#define NBL_CHAN_RX_DESC_WRITE BIT(1)
+#define NBL_CHAN_RX_DESC_AVAIL BIT(3)
+#define NBL_CHAN_RX_DESC_USED BIT(4)
+
+enum {
+ NBL_MB_RX_QID = 0,
+ NBL_MB_TX_QID = 1,
+};
+
+struct __rte_packed_begin nbl_chan_tx_desc {
+ uint16_t flags;
+ uint16_t srcid;
+ uint16_t dstid;
+ uint16_t data_len;
+ uint16_t buf_len;
+ uint64_t buf_addr;
+ uint16_t msg_type;
+ uint8_t data[16];
+ uint16_t msgid;
+ uint8_t rsv[26];
+} __rte_packed_end;
+
+struct __rte_packed_begin nbl_chan_rx_desc {
+ uint16_t flags;
+ uint32_t buf_len;
+ uint16_t buf_id;
+ uint64_t buf_addr;
+} __rte_packed_end;
+
+struct nbl_chan_ring {
+ struct nbl_dma_mem desc_mem;
+ struct nbl_dma_mem buf_mem;
+ void *desc;
+ void *buf;
+
+ uint16_t next_to_use;
+ uint16_t tail_ptr;
+ uint16_t next_to_clean;
+};
+
+struct nbl_chan_waitqueue_head {
+ char *ack_data;
+ int acked;
+ int ack_err;
+ uint16_t ack_data_len;
+ uint16_t msg_type;
+};
+
+union nbl_chan_info {
+ struct {
+ struct nbl_chan_ring txq;
+ struct nbl_chan_ring rxq;
+ struct nbl_chan_waitqueue_head *wait;
+
+ rte_spinlock_t txq_lock;
+ uint16_t num_txq_entries;
+ uint16_t num_rxq_entries;
+ uint16_t txq_buf_size;
+ uint16_t rxq_buf_size;
+
+ struct nbl_work work;
+ } mailbox;
+};
+
+struct nbl_chan_msg_handler {
+ void (*func)(void *priv, uint16_t srcid, uint16_t msgid, void *data, uint32_t len);
+ void *priv;
+};
+
+struct nbl_channel_mgt {
+ uint32_t mode;
+ struct nbl_phy_ops_tbl *phy_ops_tbl;
+ union nbl_chan_info *chan_info;
+ struct nbl_chan_msg_handler msg_handler[NBL_CHAN_MSG_MAX];
+ enum nbl_chan_state state;
+};
+
+/* Mgt structure for each product.
+ * Every individual mgt must have the common mgt as its first member, and contain its
+ * product-specific data in the rest of it.
+ */
+struct nbl_channel_mgt_leonis {
+ struct nbl_channel_mgt chan_mgt;
+};
+
+#endif
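The layout rule stated in the header's comment (every product-specific mgt embeds the common mgt as its first member) is what makes the casts between nbl_channel_mgt and nbl_channel_mgt_leonis valid: both pointers alias the same address. A standalone sketch with hypothetical names, not driver code:

```c
#include <assert.h>

/* Common management block, in the role of struct nbl_channel_mgt. */
struct common_mgt {
	int mode;
};

/* Product-specific block embeds the common one as its FIRST member,
 * so a pointer to either struct refers to the same address. */
struct leonis_mgt {
	struct common_mgt common;   /* must stay first */
	int leonis_only;
};

/* Upcast: product-specific -> common, always safe. */
static struct common_mgt *to_common(struct leonis_mgt *l)
{
	return &l->common;
}

/* Downcast: common -> product-specific, valid only when the object
 * really was allocated as a leonis_mgt. */
static struct leonis_mgt *to_leonis(struct common_mgt *c)
{
	return (struct leonis_mgt *)c;
}
```

C guarantees a pointer to a struct and a pointer to its first member are interchangeable after conversion, which is why the comment insists the common mgt stays first.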
diff --git a/drivers/net/nbl/nbl_hw/nbl_hw_leonis/nbl_phy_leonis_snic.c b/drivers/net/nbl/nbl_hw/nbl_hw_leonis/nbl_phy_leonis_snic.c
index febee34edd..49ada3b525 100644
--- a/drivers/net/nbl/nbl_hw/nbl_hw_leonis/nbl_phy_leonis_snic.c
+++ b/drivers/net/nbl/nbl_hw/nbl_hw_leonis/nbl_phy_leonis_snic.c
@@ -11,6 +11,43 @@ static inline void nbl_wr32(void *priv, u64 reg, u32 value)
rte_write32(rte_cpu_to_le_32(value), ((phy_mgt)->hw_addr + (reg)));
}
+static inline u32 nbl_mbx_rd32(struct nbl_phy_mgt *phy_mgt, u64 reg)
+{
+ return rte_le_to_cpu_32(rte_read32(phy_mgt->mailbox_bar_hw_addr + reg));
+}
+
+static inline void nbl_mbx_wr32(void *priv, u64 reg, u32 value)
+{
+ struct nbl_phy_mgt *phy_mgt = (struct nbl_phy_mgt *)priv;
+
+ rte_write32(rte_cpu_to_le_32(value), ((phy_mgt)->mailbox_bar_hw_addr + (reg)));
+ rte_delay_us(NBL_DELAY_MIN_TIME_FOR_REGS);
+}
+
+static void nbl_hw_read_mbx_regs(struct nbl_phy_mgt *phy_mgt, u64 reg,
+ u8 *data, u32 len)
+{
+ u32 i = 0;
+
+ if (len % 4)
+ return;
+
+ for (i = 0; i < len / 4; i++)
+ *(u32 *)(data + i * sizeof(u32)) = nbl_mbx_rd32(phy_mgt, reg + i * sizeof(u32));
+}
+
+static void nbl_hw_write_mbx_regs(struct nbl_phy_mgt *phy_mgt, u64 reg,
+ u8 *data, u32 len)
+{
+ u32 i = 0;
+
+ if (len % 4)
+ return;
+
+ for (i = 0; i < len / 4; i++)
+ nbl_mbx_wr32(phy_mgt, reg + i * sizeof(u32), *(u32 *)(data + i * sizeof(u32)));
+}
+
static void nbl_phy_update_tail_ptr(void *priv, u16 notify_qid, u16 tail_ptr)
{
nbl_wr32(priv, NBL_NOTIFY_ADDR, ((u32)tail_ptr << NBL_TAIL_PTR_OFT | (u32)notify_qid));
@@ -23,9 +60,96 @@ static u8 *nbl_phy_get_tail_ptr(void *priv)
return phy_mgt->hw_addr;
}
+static void nbl_phy_config_mailbox_rxq(void *priv, u64 dma_addr, int size_bwid)
+{
+ struct nbl_mailbox_qinfo_cfg_rx_table qinfo_cfg_rx_table = { 0 };
+
+ qinfo_cfg_rx_table.rx_queue_rst = 1;
+ nbl_hw_write_mbx_regs(priv, NBL_MAILBOX_QINFO_CFG_RX_TABLE_ADDR,
+ (u8 *)&qinfo_cfg_rx_table,
+ sizeof(qinfo_cfg_rx_table));
+
+ qinfo_cfg_rx_table.rx_queue_base_addr_l = NBL_LO_DWORD(dma_addr);
+ qinfo_cfg_rx_table.rx_queue_base_addr_h = NBL_HI_DWORD(dma_addr);
+ qinfo_cfg_rx_table.rx_queue_size_bwind = (u32)size_bwid;
+ qinfo_cfg_rx_table.rx_queue_rst = 0;
+ qinfo_cfg_rx_table.rx_queue_en = 1;
+ nbl_hw_write_mbx_regs(priv, NBL_MAILBOX_QINFO_CFG_RX_TABLE_ADDR,
+ (u8 *)&qinfo_cfg_rx_table,
+ sizeof(qinfo_cfg_rx_table));
+}
+
+static void nbl_phy_config_mailbox_txq(void *priv, u64 dma_addr, int size_bwid)
+{
+ struct nbl_mailbox_qinfo_cfg_tx_table qinfo_cfg_tx_table = { 0 };
+
+ qinfo_cfg_tx_table.tx_queue_rst = 1;
+ nbl_hw_write_mbx_regs(priv, NBL_MAILBOX_QINFO_CFG_TX_TABLE_ADDR,
+ (u8 *)&qinfo_cfg_tx_table,
+ sizeof(qinfo_cfg_tx_table));
+
+ qinfo_cfg_tx_table.tx_queue_base_addr_l = NBL_LO_DWORD(dma_addr);
+ qinfo_cfg_tx_table.tx_queue_base_addr_h = NBL_HI_DWORD(dma_addr);
+ qinfo_cfg_tx_table.tx_queue_size_bwind = (u32)size_bwid;
+ qinfo_cfg_tx_table.tx_queue_rst = 0;
+ qinfo_cfg_tx_table.tx_queue_en = 1;
+ nbl_hw_write_mbx_regs(priv, NBL_MAILBOX_QINFO_CFG_TX_TABLE_ADDR,
+ (u8 *)&qinfo_cfg_tx_table,
+ sizeof(qinfo_cfg_tx_table));
+}
+
+static void nbl_phy_stop_mailbox_rxq(void *priv)
+{
+ struct nbl_mailbox_qinfo_cfg_rx_table qinfo_cfg_rx_table = { 0 };
+
+ nbl_hw_write_mbx_regs(priv, NBL_MAILBOX_QINFO_CFG_RX_TABLE_ADDR,
+ (u8 *)&qinfo_cfg_rx_table,
+ sizeof(qinfo_cfg_rx_table));
+}
+
+static void nbl_phy_stop_mailbox_txq(void *priv)
+{
+ struct nbl_mailbox_qinfo_cfg_tx_table qinfo_cfg_tx_table = { 0 };
+
+ nbl_hw_write_mbx_regs(priv, NBL_MAILBOX_QINFO_CFG_TX_TABLE_ADDR,
+ (u8 *)&qinfo_cfg_tx_table,
+ sizeof(qinfo_cfg_tx_table));
+}
+
+static u16 nbl_phy_get_mailbox_rx_tail_ptr(void *priv)
+{
+ struct nbl_mailbox_qinfo_cfg_dbg_tbl cfg_dbg_tbl = { 0 };
+
+ nbl_hw_read_mbx_regs(priv, NBL_MAILBOX_QINFO_CFG_DBG_TABLE_ADDR,
+ (u8 *)&cfg_dbg_tbl, sizeof(cfg_dbg_tbl));
+ return cfg_dbg_tbl.rx_tail_ptr;
+}
+
+static void nbl_phy_update_mailbox_queue_tail_ptr(void *priv, u16 tail_ptr,
+ u8 txrx)
+{
+ struct nbl_phy_mgt *phy_mgt = (struct nbl_phy_mgt *)priv;
+
+ /* local_qid 0 and 1 denote rx and tx queue respectively */
+ u32 local_qid = txrx;
+ u32 value = ((u32)tail_ptr << NBL_TAIL_PTR_OFT) | local_qid;
+
+ rte_wmb();
+ nbl_mbx_wr32(phy_mgt, NBL_MAILBOX_NOTIFY_ADDR, value);
+ rte_delay_us(NBL_NOTIFY_DELAY_MIN_TIME_FOR_REGS);
+}
+
static struct nbl_phy_ops phy_ops = {
.update_tail_ptr = nbl_phy_update_tail_ptr,
.get_tail_ptr = nbl_phy_get_tail_ptr,
+
+ /* mailbox */
+ .config_mailbox_rxq = nbl_phy_config_mailbox_rxq,
+ .config_mailbox_txq = nbl_phy_config_mailbox_txq,
+ .stop_mailbox_rxq = nbl_phy_stop_mailbox_rxq,
+ .stop_mailbox_txq = nbl_phy_stop_mailbox_txq,
+ .get_mailbox_rx_tail_ptr = nbl_phy_get_mailbox_rx_tail_ptr,
+ .update_mailbox_queue_tail_ptr = nbl_phy_update_mailbox_queue_tail_ptr,
};
static int nbl_phy_setup_ops(struct nbl_phy_ops_tbl **phy_ops_tbl,
diff --git a/drivers/net/nbl/nbl_hw/nbl_hw_leonis/nbl_phy_leonis_snic.h b/drivers/net/nbl/nbl_hw/nbl_hw_leonis/nbl_phy_leonis_snic.h
index 5440cf41be..00454e08d9 100644
--- a/drivers/net/nbl/nbl_hw/nbl_hw_leonis/nbl_phy_leonis_snic.h
+++ b/drivers/net/nbl/nbl_hw/nbl_hw_leonis/nbl_phy_leonis_snic.h
@@ -7,4 +7,47 @@
#include "../nbl_phy.h"
+/* MAILBOX BAR2 */
+#define NBL_MAILBOX_NOTIFY_ADDR (0x00000000)
+#define NBL_MAILBOX_QINFO_CFG_RX_TABLE_ADDR (0x10)
+#define NBL_MAILBOX_QINFO_CFG_TX_TABLE_ADDR (0x20)
+#define NBL_MAILBOX_QINFO_CFG_DBG_TABLE_ADDR (0x30)
+
+#define NBL_DELAY_MIN_TIME_FOR_REGS 400
+#define NBL_NOTIFY_DELAY_MIN_TIME_FOR_REGS 200
+
+/* mailbox BAR qinfo_cfg_rx_table */
+struct nbl_mailbox_qinfo_cfg_rx_table {
+ u32 rx_queue_base_addr_l;
+ u32 rx_queue_base_addr_h;
+ u32 rx_queue_size_bwind:4;
+ u32 rsv1:28;
+ u32 rx_queue_rst:1;
+ u32 rx_queue_en:1;
+ u32 rsv2:30;
+};
+
+/* mailbox BAR qinfo_cfg_tx_table */
+struct nbl_mailbox_qinfo_cfg_tx_table {
+ u32 tx_queue_base_addr_l;
+ u32 tx_queue_base_addr_h;
+ u32 tx_queue_size_bwind:4;
+ u32 rsv1:28;
+ u32 tx_queue_rst:1;
+ u32 tx_queue_en:1;
+ u32 rsv2:30;
+};
+
+/* mailbox BAR qinfo_cfg_dbg_table */
+struct nbl_mailbox_qinfo_cfg_dbg_tbl {
+ u16 rx_drop;
+ u16 rx_get;
+ u16 tx_drop;
+ u16 tx_out;
+ u16 rx_hd_ptr;
+ u16 tx_hd_ptr;
+ u16 rx_tail_ptr;
+ u16 tx_tail_ptr;
+};
+
#endif
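The rx/tx queue setup above programs a 64-bit DMA base address into two 32-bit register fields via NBL_LO_DWORD/NBL_HI_DWORD. Assuming those are the usual low/high dword helpers (their definitions are not part of this patch), the split can be sketched as:

```c
#include <stdint.h>

/* Hypothetical equivalents of NBL_LO_DWORD / NBL_HI_DWORD: split a
 * 64-bit DMA address into the two 32-bit queue-table fields. */
static uint32_t lo_dword(uint64_t addr)
{
	return (uint32_t)(addr & 0xffffffffu);
}

static uint32_t hi_dword(uint64_t addr)
{
	return (uint32_t)(addr >> 32);
}

/* Recombine the two fields, e.g. when dumping the programmed base. */
static uint64_t join_dwords(uint32_t hi, uint32_t lo)
{
	return ((uint64_t)hi << 32) | lo;
}
```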
diff --git a/drivers/net/nbl/nbl_include/nbl_def_channel.h b/drivers/net/nbl/nbl_include/nbl_def_channel.h
new file mode 100644
index 0000000000..faf5d3ed3d
--- /dev/null
+++ b/drivers/net/nbl/nbl_include/nbl_def_channel.h
@@ -0,0 +1,326 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2025 Nebulamatrix Technology Co., Ltd.
+ */
+
+#ifndef _NBL_DEF_CHANNEL_H_
+#define _NBL_DEF_CHANNEL_H_
+
+#include "nbl_include.h"
+
+#define NBL_CHAN_OPS_TBL_TO_OPS(chan_ops_tbl) ((chan_ops_tbl)->ops)
+#define NBL_CHAN_OPS_TBL_TO_PRIV(chan_ops_tbl) ((chan_ops_tbl)->priv)
+
+#define NBL_CHAN_SEND(chan_send, dst_id, mesg_type, \
+ argument, arg_length, response, resp_length, need_ack) \
+do { \
+ typeof(chan_send) *__chan_send = &(chan_send); \
+ __chan_send->dstid = (dst_id); \
+ __chan_send->msg_type = (mesg_type); \
+ __chan_send->arg = (argument); \
+ __chan_send->arg_len = (arg_length); \
+ __chan_send->resp = (response); \
+ __chan_send->resp_len = (resp_length); \
+ __chan_send->ack = (need_ack); \
+} while (0)
+
+#define NBL_CHAN_ACK(chan_ack, dst_id, mesg_type, msg_id, err_code, ack_data, data_length) \
+do { \
+ typeof(chan_ack) *__chan_ack = &(chan_ack); \
+ __chan_ack->dstid = (dst_id); \
+ __chan_ack->msg_type = (mesg_type); \
+ __chan_ack->msgid = (msg_id); \
+ __chan_ack->err = (err_code); \
+ __chan_ack->data = (ack_data); \
+ __chan_ack->data_len = (data_length); \
+} while (0)
+
+typedef void (*nbl_chan_resp)(void *, uint16_t, uint16_t, void *, uint32_t);
+
+enum {
+ NBL_CHAN_RESP_OK,
+ NBL_CHAN_RESP_ERR,
+};
+
+enum nbl_chan_msg_type {
+ NBL_CHAN_MSG_ACK,
+ NBL_CHAN_MSG_ADD_MACVLAN,
+ NBL_CHAN_MSG_DEL_MACVLAN,
+ NBL_CHAN_MSG_ADD_MULTI_RULE,
+ NBL_CHAN_MSG_DEL_MULTI_RULE,
+ NBL_CHAN_MSG_SETUP_MULTI_GROUP,
+ NBL_CHAN_MSG_REMOVE_MULTI_GROUP,
+ NBL_CHAN_MSG_REGISTER_NET,
+ NBL_CHAN_MSG_UNREGISTER_NET,
+ NBL_CHAN_MSG_ALLOC_TXRX_QUEUES,
+ NBL_CHAN_MSG_FREE_TXRX_QUEUES,
+ NBL_CHAN_MSG_SETUP_QUEUE,
+ NBL_CHAN_MSG_REMOVE_ALL_QUEUES,
+ NBL_CHAN_MSG_CFG_DSCH,
+ NBL_CHAN_MSG_SETUP_CQS,
+ NBL_CHAN_MSG_REMOVE_CQS,
+ NBL_CHAN_MSG_CFG_QDISC_MQPRIO,
+ NBL_CHAN_MSG_CONFIGURE_MSIX_MAP,
+ NBL_CHAN_MSG_DESTROY_MSIX_MAP,
+ NBL_CHAN_MSG_MAILBOX_ENABLE_IRQ,
+ NBL_CHAN_MSG_GET_GLOBAL_VECTOR,
+ NBL_CHAN_MSG_GET_VSI_ID,
+ NBL_CHAN_MSG_SET_PROSISC_MODE,
+ NBL_CHAN_MSG_GET_FIRMWARE_VERSION,
+ NBL_CHAN_MSG_GET_QUEUE_ERR_STATS,
+ NBL_CHAN_MSG_GET_COALESCE,
+ NBL_CHAN_MSG_SET_COALESCE,
+ NBL_CHAN_MSG_SET_SPOOF_CHECK_ADDR,
+ NBL_CHAN_MSG_SET_VF_SPOOF_CHECK,
+ NBL_CHAN_MSG_GET_RXFH_INDIR_SIZE,
+ NBL_CHAN_MSG_GET_RXFH_INDIR,
+ NBL_CHAN_MSG_GET_RXFH_RSS_KEY,
+ NBL_CHAN_MSG_GET_RXFH_RSS_ALG_SEL,
+ NBL_CHAN_MSG_GET_PHY_CAPS,
+ NBL_CHAN_MSG_GET_PHY_STATE,
+ NBL_CHAN_MSG_REGISTER_RDMA,
+ NBL_CHAN_MSG_UNREGISTER_RDMA,
+ NBL_CHAN_MSG_GET_REAL_HW_ADDR,
+ NBL_CHAN_MSG_GET_REAL_BDF,
+ NBL_CHAN_MSG_GRC_PROCESS,
+ NBL_CHAN_MSG_SET_SFP_STATE,
+ NBL_CHAN_MSG_SET_ETH_LOOPBACK,
+ NBL_CHAN_MSG_CHECK_ACTIVE_VF,
+ NBL_CHAN_MSG_GET_PRODUCT_FLEX_CAP,
+ NBL_CHAN_MSG_ALLOC_KTLS_TX_INDEX,
+ NBL_CHAN_MSG_FREE_KTLS_TX_INDEX,
+ NBL_CHAN_MSG_CFG_KTLS_TX_KEYMAT,
+ NBL_CHAN_MSG_ALLOC_KTLS_RX_INDEX,
+ NBL_CHAN_MSG_FREE_KTLS_RX_INDEX,
+ NBL_CHAN_MSG_CFG_KTLS_RX_KEYMAT,
+ NBL_CHAN_MSG_CFG_KTLS_RX_RECORD,
+ NBL_CHAN_MSG_ADD_KTLS_RX_FLOW,
+ NBL_CHAN_MSG_DEL_KTLS_RX_FLOW,
+ NBL_CHAN_MSG_ALLOC_IPSEC_TX_INDEX,
+ NBL_CHAN_MSG_FREE_IPSEC_TX_INDEX,
+ NBL_CHAN_MSG_ALLOC_IPSEC_RX_INDEX,
+ NBL_CHAN_MSG_FREE_IPSEC_RX_INDEX,
+ NBL_CHAN_MSG_CFG_IPSEC_TX_SAD,
+ NBL_CHAN_MSG_CFG_IPSEC_RX_SAD,
+ NBL_CHAN_MSG_ADD_IPSEC_TX_FLOW,
+ NBL_CHAN_MSG_DEL_IPSEC_TX_FLOW,
+ NBL_CHAN_MSG_ADD_IPSEC_RX_FLOW,
+ NBL_CHAN_MSG_DEL_IPSEC_RX_FLOW,
+ NBL_CHAN_MSG_NOTIFY_IPSEC_HARD_EXPIRE,
+ NBL_CHAN_MSG_GET_MBX_IRQ_NUM,
+ NBL_CHAN_MSG_CLEAR_FLOW,
+ NBL_CHAN_MSG_CLEAR_QUEUE,
+ NBL_CHAN_MSG_GET_ETH_ID,
+ NBL_CHAN_MSG_SET_OFFLOAD_STATUS,
+
+ NBL_CHAN_MSG_INIT_OFLD,
+ NBL_CHAN_MSG_INIT_CMDQ,
+ NBL_CHAN_MSG_DESTROY_CMDQ,
+ NBL_CHAN_MSG_RESET_CMDQ,
+ NBL_CHAN_MSG_INIT_FLOW,
+ NBL_CHAN_MSG_DEINIT_FLOW,
+ NBL_CHAN_MSG_OFFLOAD_FLOW_RULE,
+ NBL_CHAN_MSG_GET_ACL_SWITCH,
+ NBL_CHAN_MSG_GET_VSI_GLOBAL_QUEUE_ID,
+ NBL_CHAN_MSG_INIT_REP,
+ NBL_CHAN_MSG_GET_LINE_RATE_INFO,
+
+ NBL_CHAN_MSG_REGISTER_NET_REP,
+ NBL_CHAN_MSG_UNREGISTER_NET_REP,
+ NBL_CHAN_MSG_REGISTER_ETH_REP,
+ NBL_CHAN_MSG_UNREGISTER_ETH_REP,
+ NBL_CHAN_MSG_REGISTER_UPCALL_PORT,
+ NBL_CHAN_MSG_UNREGISTER_UPCALL_PORT,
+ NBL_CHAN_MSG_GET_PORT_STATE,
+ NBL_CHAN_MSG_SET_PORT_ADVERTISING,
+ NBL_CHAN_MSG_GET_MODULE_INFO,
+ NBL_CHAN_MSG_GET_MODULE_EEPROM,
+ NBL_CHAN_MSG_GET_LINK_STATE,
+ NBL_CHAN_MSG_NOTIFY_LINK_STATE,
+
+ NBL_CHAN_MSG_GET_QUEUE_CXT,
+ NBL_CHAN_MSG_CFG_LOG,
+ NBL_CHAN_MSG_INIT_VDPAQ,
+ NBL_CHAN_MSG_DESTROY_VDPAQ,
+ NBL_CHAN_GET_UPCALL_PORT,
+ NBL_CHAN_MSG_NOTIFY_ETH_REP_LINK_STATE,
+ NBL_CHAN_MSG_SET_ETH_MAC_ADDR,
+ NBL_CHAN_MSG_GET_FUNCTION_ID,
+ NBL_CHAN_MSG_GET_CHIP_TEMPERATURE,
+
+ NBL_CHAN_MSG_DISABLE_PHY_FLOW,
+ NBL_CHAN_MSG_ENABLE_PHY_FLOW,
+ NBL_CHAN_MSG_SET_UPCALL_RULE,
+ NBL_CHAN_MSG_UNSET_UPCALL_RULE,
+
+ NBL_CHAN_MSG_GET_REG_DUMP,
+ NBL_CHAN_MSG_GET_REG_DUMP_LEN,
+
+ NBL_CHAN_MSG_CFG_LAG_HASH_ALGORITHM,
+ NBL_CHAN_MSG_CFG_LAG_MEMBER_FWD,
+ NBL_CHAN_MSG_CFG_LAG_MEMBER_LIST,
+ NBL_CHAN_MSG_CFG_LAG_MEMBER_UP_ATTR,
+ NBL_CHAN_MSG_ADD_LAG_FLOW,
+ NBL_CHAN_MSG_DEL_LAG_FLOW,
+
+ NBL_CHAN_MSG_SWITCHDEV_INIT_CMDQ,
+ NBL_CHAN_MSG_SWITCHDEV_DEINIT_CMDQ,
+ NBL_CHAN_MSG_SET_TC_FLOW_INFO,
+ NBL_CHAN_MSG_UNSET_TC_FLOW_INFO,
+ NBL_CHAN_MSG_INIT_ACL,
+ NBL_CHAN_MSG_UNINIT_ACL,
+
+ NBL_CHAN_MSG_CFG_LAG_MCC,
+
+ NBL_CHAN_MSG_REGISTER_VSI2Q,
+ NBL_CHAN_MSG_SETUP_Q2VSI,
+ NBL_CHAN_MSG_REMOVE_Q2VSI,
+ NBL_CHAN_MSG_SETUP_RSS,
+ NBL_CHAN_MSG_REMOVE_RSS,
+ NBL_CHAN_MSG_GET_REP_QUEUE_INFO,
+ NBL_CHAN_MSG_CTRL_PORT_LED,
+ NBL_CHAN_MSG_NWAY_RESET,
+ NBL_CHAN_MSG_SET_INTL_SUPPRESS_LEVEL,
+ NBL_CHAN_MSG_GET_ETH_STATS,
+ NBL_CHAN_MSG_GET_MODULE_TEMPERATURE,
+ NBL_CHAN_MSG_GET_BOARD_INFO,
+
+ NBL_CHAN_MSG_GET_P4_USED,
+ NBL_CHAN_MSG_GET_VF_BASE_VSI_ID,
+
+ NBL_CHAN_MSG_ADD_LLDP_FLOW,
+ NBL_CHAN_MSG_DEL_LLDP_FLOW,
+
+ NBL_CHAN_MSG_CFG_ETH_BOND_INFO,
+ NBL_CHAN_MSG_CFG_DUPPKT_MCC,
+
+ NBL_CHAN_MSG_ADD_ND_UPCALL_FLOW,
+ NBL_CHAN_MSG_DEL_ND_UPCALL_FLOW,
+
+ NBL_CHAN_MSG_GET_BOARD_ID,
+ NBL_CHAN_MSG_SET_SHAPING_DPORT_VLD,
+ NBL_CHAN_MSG_SET_DPORT_FC_TH_VLD,
+ NBL_CHAN_MSG_REGISTER_RDMA_BOND,
+ NBL_CHAN_MSG_UNREGISTER_RDMA_BOND,
+ NBL_CHAN_MSG_RESTORE_NETDEV_QUEUE,
+ NBL_CHAN_MSG_RESTART_NETDEV_QUEUE,
+ NBL_CHAN_MSG_RESTORE_HW_QUEUE,
+ NBL_CHAN_MSG_KEEP_ALIVE,
+ NBL_CHAN_MSG_GET_BASE_MAC_ADDR,
+ NBL_CHAN_MSG_CFG_BOND_SHAPING,
+ NBL_CHAN_MSG_CFG_BGID_BACK_PRESSURE,
+ NBL_CHAN_MSG_ALLOC_KT_BLOCK,
+ NBL_CHAN_MSG_FREE_KT_BLOCK,
+ NBL_CHAN_MSG_GET_DAEMON_QUEUE_INFO,
+ NBL_CHAN_MSG_GET_ETH_BOND_INFO,
+ NBL_CHAN_MSG_CLEAR_ACCEL_FLOW,
+ NBL_CHAN_MSG_SET_BRIDGE_MODE,
+ NBL_CHAN_MSG_GET_VF_FUNCTION_ID,
+ NBL_CHAN_MSG_NOTIFY_LINK_FORCED,
+ NBL_CHAN_MSG_SET_PMD_DEBUG,
+ NBL_CHAN_MSG_REGISTER_FUNC_MAC,
+ NBL_CHAN_MSG_SET_TX_RATE,
+ NBL_CHAN_MSG_REGISTER_FUNC_LINK_FORCED,
+ NBL_CHAN_MSG_GET_LINK_FORCED,
+ NBL_CHAN_MSG_REGISTER_FUNC_VLAN,
+ NBL_CHAN_MSG_GET_FD_FLOW,
+ NBL_CHAN_MSG_GET_FD_FLOW_CNT,
+ NBL_CHAN_MSG_GET_FD_FLOW_ALL,
+ NBL_CHAN_MSG_GET_FD_FLOW_MAX,
+ NBL_CHAN_MSG_REPLACE_FD_FLOW,
+ NBL_CHAN_MSG_REMOVE_FD_FLOW,
+ NBL_CHAN_MSG_CFG_FD_FLOW_STATE,
+ NBL_CHAN_MSG_REGISTER_FUNC_RATE,
+ NBL_CHAN_MSG_NOTIFY_VLAN,
+ NBL_CHAN_MSG_GET_XDP_QUEUE_INFO,
+
+ NBL_CHAN_MSG_STOP_ABNORMAL_SW_QUEUE,
+ NBL_CHAN_MSG_STOP_ABNORMAL_HW_QUEUE,
+ NBL_CHAN_MSG_NOTIFY_RESET_EVENT,
+ NBL_CHAN_MSG_ACK_RESET_EVENT,
+ NBL_CHAN_MSG_GET_VF_VSI_ID,
+
+ NBL_CHAN_MSG_CONFIGURE_QOS,
+ NBL_CHAN_MSG_GET_PFC_BUFFER_SIZE,
+ NBL_CHAN_MSG_SET_PFC_BUFFER_SIZE,
+ NBL_CHAN_MSG_GET_VF_STATS,
+ NBL_CHAN_MSG_REGISTER_FUNC_TRUST,
+ NBL_CHAN_MSG_NOTIFY_TRUST,
+ NBL_CHAN_CHECK_VF_IS_ACTIVE,
+ NBL_CHAN_MSG_GET_ETH_ABNORMAL_STATS,
+ NBL_CHAN_MSG_GET_ETH_CTRL_STATS,
+ NBL_CHAN_MSG_GET_PAUSE_STATS,
+ NBL_CHAN_MSG_GET_ETH_MAC_STATS,
+ NBL_CHAN_MSG_GET_FEC_STATS,
+ NBL_CHAN_MSG_CFG_MULTI_MCAST_RULE,
+
+ NBL_CHAN_MSG_MTU_SET = 501,
+ NBL_CHAN_MSG_SET_RXFH_INDIR = 506,
+
+ /* mailbox msg end */
+ NBL_CHAN_MSG_MAILBOX_MAX,
+
+ /* adminq msg */
+ NBL_CHAN_MSG_ADMINQ_GET_EMP_VERSION = 0x8101, /* Deprecated, should not be used */
+ NBL_CHAN_MSG_ADMINQ_GET_NVM_VERSION = 0x8102,
+ NBL_CHAN_MSG_ADMINQ_REBOOT = 0x8104,
+ NBL_CHAN_MSG_ADMINQ_FLR_NOTIFY = 0x8105,
+ NBL_CHAN_MSG_ADMINQ_FLASH_ERASE = 0x8201,
+ NBL_CHAN_MSG_ADMINQ_FLASH_READ = 0x8202,
+ NBL_CHAN_MSG_ADMINQ_FLASH_WRITE = 0x8203,
+ NBL_CHAN_MSG_ADMINQ_FLASH_ACTIVATE = 0x8204,
+ NBL_CHAN_MSG_ADMINQ_LOAD_P4 = 0x8107,
+ NBL_CHAN_MSG_ADMINQ_LOAD_P4_DEFAULT = 0x8108,
+ NBL_CHAN_MSG_ADMINQ_MANAGE_PORT_ATTRIBUTES = 0x8300,
+ NBL_CHAN_MSG_ADMINQ_PORT_NOTIFY = 0x8301,
+ NBL_CHAN_MSG_ADMINQ_GET_MODULE_EEPROM = 0x8302,
+ NBL_CHAN_MSG_ADMINQ_GET_ETH_STATS = 0x8303,
+ /* TODO: newer kernels and ethtool versions support showing FEC stats */
+ NBL_CHAN_MSG_ADMINQ_GET_FEC_STATS = 0x408,
+ NBL_CHAN_MSG_ADMINQ_EMP_CONSOLE_WRITE = 0x8F01,
+ NBL_CHAN_MSG_ADMINQ_EMP_CONSOLE_READ = 0x8F02,
+ NBL_CHAN_MSG_MAX,
+};
+
+struct nbl_chan_send_info {
+ uint16_t dstid;
+ uint16_t msg_type;
+ void *arg;
+ size_t arg_len;
+ void *resp;
+ size_t resp_len;
+ uint16_t ack;
+};
+
+struct nbl_chan_ack_info {
+ uint16_t dstid;
+ uint16_t msg_type;
+ uint16_t msgid;
+ int err;
+ void *data;
+ uint32_t data_len;
+};
+
+enum nbl_chan_state {
+ NBL_CHAN_INLINE,
+ NBL_CHAN_OFFLINE,
+ NBL_CHAN_STATE_MAX
+};
+
+struct nbl_channel_ops {
+ int (*send_msg)(void *priv, struct nbl_chan_send_info *chan_send);
+ int (*send_ack)(void *priv, struct nbl_chan_ack_info *chan_ack);
+ int (*register_msg)(void *priv, u16 msg_type, nbl_chan_resp func, void *callback_priv);
+ int (*setup_queue)(void *priv);
+ int (*teardown_queue)(void *priv);
+ void (*set_state)(void *priv, enum nbl_chan_state state);
+};
+
+struct nbl_channel_ops_tbl {
+ struct nbl_channel_ops *ops;
+ void *priv;
+};
+
+int nbl_chan_init_leonis(void *p);
+void nbl_chan_remove_leonis(void *p);
+
+#endif
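NBL_CHAN_SEND above is a fill-in helper for struct nbl_chan_send_info: a caller builds the request on the stack and hands it to the channel ops table. A self-contained sketch of that calling pattern, using trimmed copies of the header's struct and macro (the mtu request itself is hypothetical; 501 mirrors NBL_CHAN_MSG_MTU_SET from the enum):

```c
#include <stddef.h>
#include <stdint.h>

/* Trimmed copy of struct nbl_chan_send_info from the header. */
struct nbl_chan_send_info {
	uint16_t dstid;
	uint16_t msg_type;
	void *arg;
	size_t arg_len;
	void *resp;
	size_t resp_len;
	uint16_t ack;
};

/* Trimmed copy of NBL_CHAN_SEND (__typeof__ for portability). */
#define NBL_CHAN_SEND(chan_send, dst_id, mesg_type, \
		argument, arg_length, response, resp_length, need_ack) \
do { \
	__typeof__(chan_send) *__chan_send = &(chan_send); \
	__chan_send->dstid = (dst_id); \
	__chan_send->msg_type = (mesg_type); \
	__chan_send->arg = (argument); \
	__chan_send->arg_len = (arg_length); \
	__chan_send->resp = (response); \
	__chan_send->resp_len = (resp_length); \
	__chan_send->ack = (need_ack); \
} while (0)

/* Hypothetical caller: build an MTU-set request with an ack expected
 * and no response payload. */
static struct nbl_chan_send_info build_mtu_request(uint16_t dstid, uint16_t *mtu)
{
	struct nbl_chan_send_info chan_send;

	NBL_CHAN_SEND(chan_send, dstid, 501, mtu, sizeof(*mtu), NULL, 0, 1);
	return chan_send;
}
```

In the driver, such a struct would then be passed to chan_ops->send_msg() via the ops table.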
diff --git a/drivers/net/nbl/nbl_include/nbl_def_common.h b/drivers/net/nbl/nbl_include/nbl_def_common.h
new file mode 100644
index 0000000000..0bfc6a233b
--- /dev/null
+++ b/drivers/net/nbl/nbl_include/nbl_def_common.h
@@ -0,0 +1,40 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2025 Nebulamatrix Technology Co., Ltd.
+ */
+
+#ifndef _NBL_DEF_COMMON_H_
+#define _NBL_DEF_COMMON_H_
+
+#include "nbl_include.h"
+
+# if __WORDSIZE == 64
+# define NBL_PRIU64 "lu"
+# else
+# define NBL_PRIU64 "llu"
+# endif
+
+struct nbl_dma_mem {
+ void *va;
+ uint64_t pa;
+ uint32_t size;
+ const void *zone;
+};
+
+struct nbl_work {
+ TAILQ_ENTRY(nbl_work) next;
+ void *params;
+ void (*handler)(void *priv);
+ uint32_t tick;
+ uint32_t random;
+ bool run_once;
+ bool no_run;
+ uint8_t resv[2];
+};
+
+void *nbl_alloc_dma_mem(struct nbl_dma_mem *mem, uint32_t size);
+void nbl_free_dma_mem(struct nbl_dma_mem *mem);
+
+int nbl_thread_add_work(struct nbl_work *work);
+void nbl_thread_del_work(struct nbl_work *work);
+
+#endif
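struct nbl_work above carries a tick interval alongside the handler, and the mailbox code sets work.tick = 1 so the service thread polls it on every pass. How the dispatcher uses tick is an assumption here (the real loop lives in nbl_thread.c, which is not in this hunk); one plausible shape, where a work item with tick == N fires on every Nth pass, can be sketched as:

```c
#include <stdint.h>

/* Hypothetical sketch of tick-gated dispatch for struct nbl_work. */
struct work_sketch {
	uint32_t tick;    /* fire once every 'tick' service passes */
	uint32_t counter; /* passes since the handler last fired */
	uint32_t runs;    /* how many times the handler has fired */
};

static void service_pass(struct work_sketch *w)
{
	if (++w->counter >= w->tick) {
		w->counter = 0;
		w->runs++;  /* stands in for w->handler(w->params) */
	}
}
```

With tick = 1, as the mailbox uses, the handler runs on every pass.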
diff --git a/drivers/net/nbl/nbl_include/nbl_include.h b/drivers/net/nbl/nbl_include/nbl_include.h
index 493ee58411..72a5a9a078 100644
--- a/drivers/net/nbl/nbl_include/nbl_include.h
+++ b/drivers/net/nbl/nbl_include/nbl_include.h
@@ -39,6 +39,11 @@
#include <ethdev_pci.h>
#include <bus_pci_driver.h>
#include <rte_io.h>
+#include <rte_tailq.h>
+#include <rte_lcore.h>
+#include <rte_common.h>
+#include <rte_thread.h>
+#include <rte_stdatomic.h>
#include "nbl_logs.h"
@@ -65,4 +70,6 @@ struct nbl_func_caps {
u32 rsv:30;
};
+#define BIT(a) (1UL << (a))
+
#endif
--
2.43.0
* [PATCH v1 05/17] net/nbl: add Resource layer definitions and implementation
2025-06-12 8:58 [PATCH v1 00/17] NBL PMD for Nebulamatrix NICs Kyo Liu
` (3 preceding siblings ...)
2025-06-12 8:58 ` [PATCH v1 04/17] net/nbl: add Channel " Kyo Liu
@ 2025-06-12 8:58 ` Kyo Liu
2025-06-12 8:58 ` [PATCH v1 06/17] net/nbl: add Dispatch " Kyo Liu
` (14 subsequent siblings)
19 siblings, 0 replies; 27+ messages in thread
From: Kyo Liu @ 2025-06-12 8:58 UTC (permalink / raw)
To: kyo.liu, dev; +Cc: Dimon Zhao, Leon Yu, Sam Chen
add Resource layer related definitions
Signed-off-by: Kyo Liu <kyo.liu@nebula-matrix.com>
---
drivers/net/nbl/meson.build | 3 +
drivers/net/nbl/nbl_core.c | 11 +-
drivers/net/nbl/nbl_core.h | 4 +
.../nbl/nbl_hw/nbl_hw_leonis/nbl_res_leonis.c | 137 ++++++++++++++++
.../nbl/nbl_hw/nbl_hw_leonis/nbl_res_leonis.h | 10 ++
drivers/net/nbl/nbl_hw/nbl_resource.c | 5 +
drivers/net/nbl/nbl_hw/nbl_resource.h | 51 ++++++
drivers/net/nbl/nbl_hw/nbl_txrx.c | 148 ++++++++++++++++++
drivers/net/nbl/nbl_hw/nbl_txrx.h | 10 ++
.../net/nbl/nbl_include/nbl_def_resource.h | 47 ++++++
drivers/net/nbl/nbl_include/nbl_include.h | 20 +++
11 files changed, 444 insertions(+), 2 deletions(-)
create mode 100644 drivers/net/nbl/nbl_hw/nbl_hw_leonis/nbl_res_leonis.c
create mode 100644 drivers/net/nbl/nbl_hw/nbl_hw_leonis/nbl_res_leonis.h
create mode 100644 drivers/net/nbl/nbl_hw/nbl_resource.c
create mode 100644 drivers/net/nbl/nbl_hw/nbl_resource.h
create mode 100644 drivers/net/nbl/nbl_hw/nbl_txrx.c
create mode 100644 drivers/net/nbl/nbl_hw/nbl_txrx.h
create mode 100644 drivers/net/nbl/nbl_include/nbl_def_resource.h
diff --git a/drivers/net/nbl/meson.build b/drivers/net/nbl/meson.build
index c849cab185..f34121260e 100644
--- a/drivers/net/nbl/meson.build
+++ b/drivers/net/nbl/meson.build
@@ -15,5 +15,8 @@ sources = files(
'nbl_common/nbl_common.c',
'nbl_common/nbl_thread.c',
'nbl_hw/nbl_channel.c',
+ 'nbl_hw/nbl_resource.c',
+ 'nbl_hw/nbl_txrx.c',
'nbl_hw/nbl_hw_leonis/nbl_phy_leonis_snic.c',
+ 'nbl_hw/nbl_hw_leonis/nbl_res_leonis.c',
)
diff --git a/drivers/net/nbl/nbl_core.c b/drivers/net/nbl/nbl_core.c
index f4388fe3b5..70600401fe 100644
--- a/drivers/net/nbl/nbl_core.c
+++ b/drivers/net/nbl/nbl_core.c
@@ -8,8 +8,8 @@ static struct nbl_product_core_ops nbl_product_core_ops[NBL_PRODUCT_MAX] = {
{
.phy_init = nbl_phy_init_leonis_snic,
.phy_remove = nbl_phy_remove_leonis_snic,
- .res_init = NULL,
- .res_remove = NULL,
+ .res_init = nbl_res_init_leonis,
+ .res_remove = nbl_res_remove_leonis,
.chan_init = nbl_chan_init_leonis,
.chan_remove = nbl_chan_remove_leonis,
},
@@ -46,8 +46,14 @@ int nbl_core_init(struct nbl_adapter *adapter, struct rte_eth_dev *eth_dev)
if (ret)
goto chan_init_fail;
+ ret = product_base_ops->res_init(adapter, eth_dev);
+ if (ret)
+ goto res_init_fail;
+
return 0;
+res_init_fail:
+ product_base_ops->chan_remove(adapter);
chan_init_fail:
product_base_ops->phy_remove(adapter);
phy_init_fail:
@@ -60,6 +66,7 @@ void nbl_core_remove(struct nbl_adapter *adapter)
product_base_ops = nbl_core_get_product_ops(adapter->caps.product_type);
+ product_base_ops->res_remove(adapter);
product_base_ops->chan_remove(adapter);
product_base_ops->phy_remove(adapter);
}
diff --git a/drivers/net/nbl/nbl_core.h b/drivers/net/nbl/nbl_core.h
index a6c1103c77..f693913b47 100644
--- a/drivers/net/nbl/nbl_core.h
+++ b/drivers/net/nbl/nbl_core.h
@@ -9,6 +9,7 @@
#include "nbl_def_common.h"
#include "nbl_def_phy.h"
#include "nbl_def_channel.h"
+#include "nbl_def_resource.h"
#define NBL_VENDOR_ID (0x1F0F)
#define NBL_DEVICE_ID_M18110 (0x3403)
@@ -33,9 +34,11 @@
#define NBL_ADAPTER_TO_PHY_MGT(adapter) ((adapter)->core.phy_mgt)
#define NBL_ADAPTER_TO_CHAN_MGT(adapter) ((adapter)->core.chan_mgt)
+#define NBL_ADAPTER_TO_RES_MGT(adapter) ((adapter)->core.res_mgt)
#define NBL_ADAPTER_TO_PHY_OPS_TBL(adapter) ((adapter)->intf.phy_ops_tbl)
#define NBL_ADAPTER_TO_CHAN_OPS_TBL(adapter) ((adapter)->intf.channel_ops_tbl)
+#define NBL_ADAPTER_TO_RES_OPS_TBL(adapter) ((adapter)->intf.resource_ops_tbl)
struct nbl_core {
void *phy_mgt;
@@ -48,6 +51,7 @@ struct nbl_core {
struct nbl_interface {
struct nbl_phy_ops_tbl *phy_ops_tbl;
struct nbl_channel_ops_tbl *channel_ops_tbl;
+ struct nbl_resource_ops_tbl *resource_ops_tbl;
};
struct nbl_adapter {
diff --git a/drivers/net/nbl/nbl_hw/nbl_hw_leonis/nbl_res_leonis.c b/drivers/net/nbl/nbl_hw/nbl_hw_leonis/nbl_res_leonis.c
new file mode 100644
index 0000000000..6327aa55b4
--- /dev/null
+++ b/drivers/net/nbl/nbl_hw/nbl_hw_leonis/nbl_res_leonis.c
@@ -0,0 +1,137 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2025 Nebulamatrix Technology Co., Ltd.
+ */
+
+#include "nbl_res_leonis.h"
+
+static struct nbl_resource_ops res_ops = {
+};
+
+static bool is_ops_inited;
+
+static void nbl_res_remove_ops(struct nbl_resource_ops_tbl **res_ops_tbl)
+{
+ rte_free(*res_ops_tbl);
+ *res_ops_tbl = NULL;
+}
+
+static int nbl_res_setup_ops(struct nbl_resource_ops_tbl **res_ops_tbl,
+ struct nbl_resource_mgt_leonis *res_mgt_leonis)
+{
+ int ret = 0;
+
+ *res_ops_tbl = rte_zmalloc("nbl_res_ops", sizeof(struct nbl_resource_ops_tbl), 0);
+ if (!*res_ops_tbl)
+ return -ENOMEM;
+
+ if (!is_ops_inited) {
+ ret = nbl_txrx_setup_ops(&res_ops);
+ if (ret)
+ goto set_ops_failed;
+
+ is_ops_inited = true;
+ }
+
+ NBL_RES_OPS_TBL_TO_OPS(*res_ops_tbl) = &res_ops;
+ NBL_RES_OPS_TBL_TO_PRIV(*res_ops_tbl) = res_mgt_leonis;
+
+ return ret;
+
+set_ops_failed:
+ rte_free(*res_ops_tbl);
+ return ret;
+}
+
+static void nbl_res_stop(struct nbl_resource_mgt_leonis *res_mgt_leonis)
+{
+ struct nbl_resource_mgt *res_mgt = &res_mgt_leonis->res_mgt;
+
+ nbl_txrx_mgt_stop(res_mgt);
+}
+
+static int nbl_res_start(struct nbl_resource_mgt_leonis *res_mgt_leonis)
+{
+ struct nbl_resource_mgt *res_mgt = &res_mgt_leonis->res_mgt;
+ int ret;
+
+ ret = nbl_txrx_mgt_start(res_mgt);
+ if (ret)
+ goto txrx_failed;
+
+ return 0;
+
+txrx_failed:
+ return ret;
+}
+
+static void
+nbl_res_remove_res_mgt(struct nbl_resource_mgt_leonis **res_mgt_leonis)
+{
+ rte_free(*res_mgt_leonis);
+ *res_mgt_leonis = NULL;
+}
+
+static int
+nbl_res_setup_res_mgt(struct nbl_resource_mgt_leonis **res_mgt_leonis)
+{
+ *res_mgt_leonis = rte_zmalloc("nbl_res_mgt",
+ sizeof(struct nbl_resource_mgt_leonis),
+ 0);
+ if (!*res_mgt_leonis)
+ return -ENOMEM;
+
+ return 0;
+}
+
+int nbl_res_init_leonis(void *p, struct rte_eth_dev *eth_dev)
+{
+ struct nbl_resource_mgt_leonis **res_mgt_leonis;
+ struct nbl_resource_ops_tbl **res_ops_tbl;
+ struct nbl_phy_ops_tbl *phy_ops_tbl;
+ struct nbl_channel_ops_tbl *chan_ops_tbl;
+ struct nbl_adapter *adapter = (struct nbl_adapter *)p;
+ int ret = 0;
+
+ res_mgt_leonis = (struct nbl_resource_mgt_leonis **)&NBL_ADAPTER_TO_RES_MGT(adapter);
+ res_ops_tbl = &NBL_ADAPTER_TO_RES_OPS_TBL(adapter);
+ phy_ops_tbl = NBL_ADAPTER_TO_PHY_OPS_TBL(adapter);
+ chan_ops_tbl = NBL_ADAPTER_TO_CHAN_OPS_TBL(adapter);
+
+ ret = nbl_res_setup_res_mgt(res_mgt_leonis);
+ if (ret)
+ goto setup_mgt_fail;
+
+ NBL_RES_MGT_TO_CHAN_OPS_TBL(&(*res_mgt_leonis)->res_mgt) = chan_ops_tbl;
+ NBL_RES_MGT_TO_PHY_OPS_TBL(&(*res_mgt_leonis)->res_mgt) = phy_ops_tbl;
+ NBL_RES_MGT_TO_ETH_DEV(&(*res_mgt_leonis)->res_mgt) = eth_dev;
+
+ ret = nbl_res_start(*res_mgt_leonis);
+ if (ret)
+ goto start_fail;
+
+ ret = nbl_res_setup_ops(res_ops_tbl, *res_mgt_leonis);
+ if (ret)
+ goto setup_ops_fail;
+
+ return 0;
+
+setup_ops_fail:
+ nbl_res_stop(*res_mgt_leonis);
+start_fail:
+ nbl_res_remove_res_mgt(res_mgt_leonis);
+setup_mgt_fail:
+ return ret;
+}
+
+void nbl_res_remove_leonis(void *p)
+{
+ struct nbl_resource_mgt_leonis **res_mgt_leonis;
+ struct nbl_resource_ops_tbl **res_ops_tbl;
+ struct nbl_adapter *adapter = (struct nbl_adapter *)p;
+
+ res_mgt_leonis = (struct nbl_resource_mgt_leonis **)&NBL_ADAPTER_TO_RES_MGT(adapter);
+ res_ops_tbl = &NBL_ADAPTER_TO_RES_OPS_TBL(adapter);
+
+ nbl_res_remove_ops(res_ops_tbl);
+ nbl_res_stop(*res_mgt_leonis);
+ nbl_res_remove_res_mgt(res_mgt_leonis);
+}
diff --git a/drivers/net/nbl/nbl_hw/nbl_hw_leonis/nbl_res_leonis.h b/drivers/net/nbl/nbl_hw/nbl_hw_leonis/nbl_res_leonis.h
new file mode 100644
index 0000000000..d14f868d55
--- /dev/null
+++ b/drivers/net/nbl/nbl_hw/nbl_hw_leonis/nbl_res_leonis.h
@@ -0,0 +1,10 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2025 Nebulamatrix Technology Co., Ltd.
+ */
+
+#ifndef _NBL_RES_LEONIS_H_
+#define _NBL_RES_LEONIS_H_
+
+#include "nbl_resource.h"
+
+#endif
diff --git a/drivers/net/nbl/nbl_hw/nbl_resource.c b/drivers/net/nbl/nbl_hw/nbl_resource.c
new file mode 100644
index 0000000000..47baaa2a91
--- /dev/null
+++ b/drivers/net/nbl/nbl_hw/nbl_resource.c
@@ -0,0 +1,5 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2025 Nebulamatrix Technology Co., Ltd.
+ */
+
+#include "nbl_resource.h"
diff --git a/drivers/net/nbl/nbl_hw/nbl_resource.h b/drivers/net/nbl/nbl_hw/nbl_resource.h
new file mode 100644
index 0000000000..2ea79563cc
--- /dev/null
+++ b/drivers/net/nbl/nbl_hw/nbl_resource.h
@@ -0,0 +1,51 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2025 Nebulamatrix Technology Co., Ltd.
+ */
+
+#ifndef _NBL_RESOURCE_H_
+#define _NBL_RESOURCE_H_
+
+#include "nbl_ethdev.h"
+#include "nbl_include.h"
+#include <stdint.h>
+
+#define NBL_RES_MGT_TO_PHY_OPS_TBL(res_mgt) ((res_mgt)->phy_ops_tbl)
+#define NBL_RES_MGT_TO_PHY_OPS(res_mgt) (NBL_RES_MGT_TO_PHY_OPS_TBL(res_mgt)->ops)
+#define NBL_RES_MGT_TO_PHY_PRIV(res_mgt) (NBL_RES_MGT_TO_PHY_OPS_TBL(res_mgt)->priv)
+#define NBL_RES_MGT_TO_CHAN_OPS_TBL(res_mgt) ((res_mgt)->chan_ops_tbl)
+#define NBL_RES_MGT_TO_CHAN_OPS(res_mgt) (NBL_RES_MGT_TO_CHAN_OPS_TBL(res_mgt)->ops)
+#define NBL_RES_MGT_TO_CHAN_PRIV(res_mgt) (NBL_RES_MGT_TO_CHAN_OPS_TBL(res_mgt)->priv)
+#define NBL_RES_MGT_TO_ETH_DEV(res_mgt) ((res_mgt)->eth_dev)
+#define NBL_RES_MGT_TO_TXRX_MGT(res_mgt) ((res_mgt)->txrx_mgt)
+
+struct nbl_res_tx_ring {
+};
+
+struct nbl_res_rx_ring {
+};
+
+struct nbl_txrx_mgt {
+ rte_spinlock_t tx_lock;
+ struct nbl_res_tx_ring **tx_rings;
+ struct nbl_res_rx_ring **rx_rings;
+ u8 tx_ring_num;
+ u8 rx_ring_num;
+};
+
+struct nbl_resource_mgt {
+ struct rte_eth_dev *eth_dev;
+ struct nbl_channel_ops_tbl *chan_ops_tbl;
+ struct nbl_phy_ops_tbl *phy_ops_tbl;
+ struct nbl_txrx_mgt *txrx_mgt;
+};
+
+struct nbl_resource_mgt_leonis {
+ struct nbl_resource_mgt res_mgt;
+};
+
+int nbl_txrx_mgt_start(struct nbl_resource_mgt *res_mgt);
+void nbl_txrx_mgt_stop(struct nbl_resource_mgt *res_mgt);
+int nbl_txrx_setup_ops(struct nbl_resource_ops *resource_ops);
+void nbl_txrx_remove_ops(struct nbl_resource_ops *resource_ops);
+
+#endif
diff --git a/drivers/net/nbl/nbl_hw/nbl_txrx.c b/drivers/net/nbl/nbl_hw/nbl_txrx.c
new file mode 100644
index 0000000000..0df204e425
--- /dev/null
+++ b/drivers/net/nbl/nbl_hw/nbl_txrx.c
@@ -0,0 +1,148 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2025 Nebulamatrix Technology Co., Ltd.
+ */
+
+#include "nbl_txrx.h"
+#include "nbl_include.h"
+
+static int nbl_res_txrx_alloc_rings(void *priv, u16 tx_num, u16 rx_num, u16 queue_offset)
+{
+ RTE_SET_USED(priv);
+ RTE_SET_USED(tx_num);
+ RTE_SET_USED(rx_num);
+ RTE_SET_USED(queue_offset);
+ return 0;
+}
+
+static void nbl_res_txrx_remove_rings(void *priv)
+{
+ RTE_SET_USED(priv);
+}
+
+static int nbl_res_txrx_start_tx_ring(void *priv,
+ struct nbl_start_tx_ring_param *param,
+ u64 *dma_addr)
+{
+ RTE_SET_USED(priv);
+ RTE_SET_USED(param);
+ RTE_SET_USED(dma_addr);
+ return 0;
+}
+
+static void nbl_res_txrx_stop_tx_ring(void *priv, u16 queue_idx)
+{
+ RTE_SET_USED(priv);
+ RTE_SET_USED(queue_idx);
+}
+
+static void nbl_res_txrx_release_txring(void *priv, u16 queue_idx)
+{
+ RTE_SET_USED(priv);
+ RTE_SET_USED(queue_idx);
+}
+
+static int nbl_res_txrx_start_rx_ring(void *priv,
+ struct nbl_start_rx_ring_param *param,
+ u64 *dma_addr)
+{
+ RTE_SET_USED(priv);
+ RTE_SET_USED(param);
+ RTE_SET_USED(dma_addr);
+ return 0;
+}
+
+static int nbl_res_alloc_rx_bufs(void *priv, u16 queue_idx)
+{
+ RTE_SET_USED(priv);
+ RTE_SET_USED(queue_idx);
+ return 0;
+}
+
+static void nbl_res_txrx_stop_rx_ring(void *priv, u16 queue_idx)
+{
+ RTE_SET_USED(priv);
+ RTE_SET_USED(queue_idx);
+}
+
+static void nbl_res_txrx_release_rx_ring(void *priv, u16 queue_idx)
+{
+ RTE_SET_USED(priv);
+ RTE_SET_USED(queue_idx);
+}
+
+static void nbl_res_txrx_update_rx_ring(void *priv, u16 index)
+{
+ RTE_SET_USED(priv);
+ RTE_SET_USED(index);
+}
+
+/* NBL_TXRX_SET_OPS(ops_name, func)
+ *
+ * Use X macros to avoid duplicating the setup and remove code.
+ */
+#define NBL_TXRX_OPS_TBL \
+do { \
+ NBL_TXRX_SET_OPS(alloc_rings, nbl_res_txrx_alloc_rings); \
+ NBL_TXRX_SET_OPS(remove_rings, nbl_res_txrx_remove_rings); \
+ NBL_TXRX_SET_OPS(start_tx_ring, nbl_res_txrx_start_tx_ring); \
+ NBL_TXRX_SET_OPS(stop_tx_ring, nbl_res_txrx_stop_tx_ring); \
+ NBL_TXRX_SET_OPS(release_tx_ring, nbl_res_txrx_release_txring); \
+ NBL_TXRX_SET_OPS(start_rx_ring, nbl_res_txrx_start_rx_ring); \
+ NBL_TXRX_SET_OPS(alloc_rx_bufs, nbl_res_alloc_rx_bufs); \
+ NBL_TXRX_SET_OPS(stop_rx_ring, nbl_res_txrx_stop_rx_ring); \
+ NBL_TXRX_SET_OPS(release_rx_ring, nbl_res_txrx_release_rx_ring); \
+ NBL_TXRX_SET_OPS(update_rx_ring, nbl_res_txrx_update_rx_ring); \
+} while (0)
+
+/* Structure management starts here; adding an op should not require changes below */
+static int nbl_txrx_setup_mgt(struct nbl_txrx_mgt **txrx_mgt)
+{
+ *txrx_mgt = rte_zmalloc("nbl_txrx_mgt", sizeof(struct nbl_txrx_mgt), 0);
+ if (!*txrx_mgt)
+ return -ENOMEM;
+
+ return 0;
+}
+
+static void nbl_txrx_remove_mgt(struct nbl_txrx_mgt **txrx_mgt)
+{
+ rte_free(*txrx_mgt);
+ *txrx_mgt = NULL;
+}
+
+int nbl_txrx_mgt_start(struct nbl_resource_mgt *res_mgt)
+{
+ struct nbl_txrx_mgt **txrx_mgt;
+
+ txrx_mgt = &NBL_RES_MGT_TO_TXRX_MGT(res_mgt);
+
+ return nbl_txrx_setup_mgt(txrx_mgt);
+}
+
+void nbl_txrx_mgt_stop(struct nbl_resource_mgt *res_mgt)
+{
+ struct nbl_txrx_mgt **txrx_mgt;
+
+ txrx_mgt = &NBL_RES_MGT_TO_TXRX_MGT(res_mgt);
+
+ if (!(*txrx_mgt))
+ return;
+
+ nbl_txrx_remove_mgt(txrx_mgt);
+}
+
+int nbl_txrx_setup_ops(struct nbl_resource_ops *res_ops)
+{
+#define NBL_TXRX_SET_OPS(name, func) do { res_ops->NBL_NAME(name) = func; } while (0)
+ NBL_TXRX_OPS_TBL;
+#undef NBL_TXRX_SET_OPS
+
+ return 0;
+}
+
+void nbl_txrx_remove_ops(struct nbl_resource_ops *res_ops)
+{
+#define NBL_TXRX_SET_OPS(name, func) do { res_ops->NBL_NAME(name) = NULL; } while (0)
+ NBL_TXRX_OPS_TBL;
+#undef NBL_TXRX_SET_OPS
+}
diff --git a/drivers/net/nbl/nbl_hw/nbl_txrx.h b/drivers/net/nbl/nbl_hw/nbl_txrx.h
new file mode 100644
index 0000000000..56dbd3c587
--- /dev/null
+++ b/drivers/net/nbl/nbl_hw/nbl_txrx.h
@@ -0,0 +1,10 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2025 Nebulamatrix Technology Co., Ltd.
+ */
+
+#ifndef _NBL_TXRX_H_
+#define _NBL_TXRX_H_
+
+#include "nbl_resource.h"
+
+#endif
diff --git a/drivers/net/nbl/nbl_include/nbl_def_resource.h b/drivers/net/nbl/nbl_include/nbl_def_resource.h
new file mode 100644
index 0000000000..c1cf041c74
--- /dev/null
+++ b/drivers/net/nbl/nbl_include/nbl_def_resource.h
@@ -0,0 +1,47 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2025 Nebulamatrix Technology Co., Ltd.
+ */
+
+#ifndef _NBL_DEF_RESOURCE_H_
+#define _NBL_DEF_RESOURCE_H_
+
+#include "nbl_include.h"
+#include "rte_ethdev_core.h"
+
+#define NBL_RES_OPS_TBL_TO_OPS(res_ops_tbl) ((res_ops_tbl)->ops)
+#define NBL_RES_OPS_TBL_TO_PRIV(res_ops_tbl) ((res_ops_tbl)->priv)
+
+struct nbl_resource_ops {
+ int (*alloc_rings)(void *priv, u16 tx_num, u16 rx_num, u16 queue_offset);
+ void (*remove_rings)(void *priv);
+ int (*start_tx_ring)(void *priv, struct nbl_start_tx_ring_param *param, u64 *dma_addr);
+ void (*stop_tx_ring)(void *priv, u16 queue_idx);
+ void (*release_tx_ring)(void *priv, u16 queue_idx);
+ int (*start_rx_ring)(void *priv, struct nbl_start_rx_ring_param *param, u64 *dma_addr);
+ int (*alloc_rx_bufs)(void *priv, u16 queue_idx);
+ void (*stop_rx_ring)(void *priv, u16 queue_idx);
+ void (*release_rx_ring)(void *priv, u16 queue_idx);
+ int (*get_stats)(void *priv, struct rte_eth_stats *rte_stats);
+ int (*reset_stats)(void *priv);
+ void (*update_rx_ring)(void *priv, u16 queue_idx);
+ u16 (*get_tx_ehdr_len)(void *priv);
+ u64 (*restore_abnormal_ring)(void *priv, u16 local_queue_id, int type);
+ int (*restart_abnormal_ring)(void *priv, int ring_index, int type);
+ void (*cfg_txrx_vlan)(void *priv, u16 vlan_tci, u16 vlan_proto);
+ int (*txrx_burst_mode_get)(void *priv, struct rte_eth_dev *dev,
+ struct rte_eth_burst_mode *mode, bool is_tx);
+ int (*get_txrx_xstats_cnt)(void *priv, u16 *xstats_cnt);
+ int (*get_txrx_xstats)(void *priv, struct rte_eth_xstat *xstats, u16 *xstats_cnt);
+ int (*get_txrx_xstats_names)(void *priv, struct rte_eth_xstat_name *xstats_names,
+ u16 *xstats_cnt);
+};
+
+struct nbl_resource_ops_tbl {
+ struct nbl_resource_ops *ops;
+ void *priv;
+};
+
+int nbl_res_init_leonis(void *p, struct rte_eth_dev *eth_dev);
+void nbl_res_remove_leonis(void *p);
+
+#endif
diff --git a/drivers/net/nbl/nbl_include/nbl_include.h b/drivers/net/nbl/nbl_include/nbl_include.h
index 72a5a9a078..caf77dc8d6 100644
--- a/drivers/net/nbl/nbl_include/nbl_include.h
+++ b/drivers/net/nbl/nbl_include/nbl_include.h
@@ -56,6 +56,9 @@ typedef int32_t s32;
typedef int16_t s16;
typedef int8_t s8;
+/* Used for macros to pass checkpatch */
+#define NBL_NAME(x) x
+
enum nbl_product_type {
NBL_LEONIS_TYPE,
NBL_DRACO_TYPE,
@@ -72,4 +75,21 @@ struct nbl_func_caps {
#define BIT(a) (1UL << (a))
+struct nbl_start_rx_ring_param {
+ u16 queue_idx;
+ u16 nb_desc;
+ u32 socket_id;
+ enum nbl_product_type product;
+ const struct rte_eth_rxconf *conf;
+ struct rte_mempool *mempool;
+};
+
+struct nbl_start_tx_ring_param {
+ u16 queue_idx;
+ u16 nb_desc;
+ u32 socket_id;
+ enum nbl_product_type product;
+ const struct rte_eth_txconf *conf;
+};
+
#endif
--
2.43.0
* [PATCH v1 06/17] net/nbl: add Dispatch layer definitions and implementation
2025-06-12 8:58 [PATCH v1 00/17] NBL PMD for Nebulamatrix NICs Kyo Liu
2025-06-12 8:58 ` [PATCH v1 05/17] net/nbl: add Resource " Kyo Liu
@ 2025-06-12 8:58 ` Kyo Liu
2025-06-12 8:58 ` [PATCH v1 07/17] net/nbl: add Dev " Kyo Liu
From: Kyo Liu @ 2025-06-12 8:58 UTC (permalink / raw)
To: kyo.liu, dev; +Cc: Dimon Zhao, Leon Yu, Sam Chen
add Dispatch layer related definitions
Signed-off-by: Kyo Liu <kyo.liu@nebula-matrix.com>
---
drivers/net/nbl/meson.build | 1 +
drivers/net/nbl/nbl_core.c | 7 +
drivers/net/nbl/nbl_core.h | 4 +
drivers/net/nbl/nbl_dispatch.c | 466 ++++++++++++++++++
drivers/net/nbl/nbl_dispatch.h | 29 ++
drivers/net/nbl/nbl_include/nbl_def_channel.h | 18 +
drivers/net/nbl/nbl_include/nbl_def_common.h | 4 +
.../net/nbl/nbl_include/nbl_def_dispatch.h | 77 +++
.../net/nbl/nbl_include/nbl_def_resource.h | 5 +
drivers/net/nbl/nbl_include/nbl_include.h | 17 +
10 files changed, 628 insertions(+)
create mode 100644 drivers/net/nbl/nbl_dispatch.c
create mode 100644 drivers/net/nbl/nbl_dispatch.h
create mode 100644 drivers/net/nbl/nbl_include/nbl_def_dispatch.h
diff --git a/drivers/net/nbl/meson.build b/drivers/net/nbl/meson.build
index f34121260e..23601727ef 100644
--- a/drivers/net/nbl/meson.build
+++ b/drivers/net/nbl/meson.build
@@ -12,6 +12,7 @@ includes += include_directories('nbl_hw')
sources = files(
'nbl_ethdev.c',
'nbl_core.c',
+ 'nbl_dispatch.c',
'nbl_common/nbl_common.c',
'nbl_common/nbl_thread.c',
'nbl_hw/nbl_channel.c',
diff --git a/drivers/net/nbl/nbl_core.c b/drivers/net/nbl/nbl_core.c
index 70600401fe..548eb3a2fd 100644
--- a/drivers/net/nbl/nbl_core.c
+++ b/drivers/net/nbl/nbl_core.c
@@ -50,8 +50,14 @@ int nbl_core_init(struct nbl_adapter *adapter, struct rte_eth_dev *eth_dev)
if (ret)
goto res_init_fail;
+ ret = nbl_disp_init(adapter);
+ if (ret)
+ goto disp_init_fail;
+
return 0;
+disp_init_fail:
+ product_base_ops->res_remove(adapter);
res_init_fail:
product_base_ops->chan_remove(adapter);
chan_init_fail:
@@ -66,6 +72,7 @@ void nbl_core_remove(struct nbl_adapter *adapter)
product_base_ops = nbl_core_get_product_ops(adapter->caps.product_type);
+ nbl_disp_remove(adapter);
product_base_ops->res_remove(adapter);
product_base_ops->chan_remove(adapter);
product_base_ops->phy_remove(adapter);
diff --git a/drivers/net/nbl/nbl_core.h b/drivers/net/nbl/nbl_core.h
index f693913b47..2730539050 100644
--- a/drivers/net/nbl/nbl_core.h
+++ b/drivers/net/nbl/nbl_core.h
@@ -10,6 +10,7 @@
#include "nbl_def_phy.h"
#include "nbl_def_channel.h"
#include "nbl_def_resource.h"
+#include "nbl_def_dispatch.h"
#define NBL_VENDOR_ID (0x1F0F)
#define NBL_DEVICE_ID_M18110 (0x3403)
@@ -35,10 +36,12 @@
#define NBL_ADAPTER_TO_PHY_MGT(adapter) ((adapter)->core.phy_mgt)
#define NBL_ADAPTER_TO_CHAN_MGT(adapter) ((adapter)->core.chan_mgt)
#define NBL_ADAPTER_TO_RES_MGT(adapter) ((adapter)->core.res_mgt)
+#define NBL_ADAPTER_TO_DISP_MGT(adapter) ((adapter)->core.disp_mgt)
#define NBL_ADAPTER_TO_PHY_OPS_TBL(adapter) ((adapter)->intf.phy_ops_tbl)
#define NBL_ADAPTER_TO_CHAN_OPS_TBL(adapter) ((adapter)->intf.channel_ops_tbl)
#define NBL_ADAPTER_TO_RES_OPS_TBL(adapter) ((adapter)->intf.resource_ops_tbl)
+#define NBL_ADAPTER_TO_DISP_OPS_TBL(adapter) ((adapter)->intf.dispatch_ops_tbl)
struct nbl_core {
void *phy_mgt;
@@ -52,6 +55,7 @@ struct nbl_interface {
struct nbl_phy_ops_tbl *phy_ops_tbl;
struct nbl_channel_ops_tbl *channel_ops_tbl;
struct nbl_resource_ops_tbl *resource_ops_tbl;
+ struct nbl_dispatch_ops_tbl *dispatch_ops_tbl;
};
struct nbl_adapter {
diff --git a/drivers/net/nbl/nbl_dispatch.c b/drivers/net/nbl/nbl_dispatch.c
new file mode 100644
index 0000000000..ffeeba3048
--- /dev/null
+++ b/drivers/net/nbl/nbl_dispatch.c
@@ -0,0 +1,466 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2025 Nebulamatrix Technology Co., Ltd.
+ */
+
+#include "nbl_dispatch.h"
+
+static int nbl_disp_alloc_txrx_queues(void *priv, u16 vsi_id, u16 queue_num)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_resource_ops *res_ops;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ return res_ops->alloc_txrx_queues(NBL_DISP_MGT_TO_RES_PRIV(disp_mgt),
+ vsi_id, queue_num);
+}
+
+static int nbl_disp_chan_alloc_txrx_queues_req(void *priv, u16 vsi_id,
+ u16 queue_num)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_channel_ops *chan_ops;
+ struct nbl_chan_param_alloc_txrx_queues param = {0};
+ struct nbl_chan_param_alloc_txrx_queues result = {0};
+ struct nbl_chan_send_info chan_send;
+
+ chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+
+ param.vsi_id = vsi_id;
+ param.queue_num = queue_num;
+
+ NBL_CHAN_SEND(chan_send, 0, NBL_CHAN_MSG_ALLOC_TXRX_QUEUES, ¶m,
+ sizeof(param), &result, sizeof(result), 1);
+ chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send);
+
+ return 0;
+}
+
+static void nbl_disp_free_txrx_queues(void *priv, u16 vsi_id)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_resource_ops *res_ops;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ res_ops->free_txrx_queues(NBL_DISP_MGT_TO_RES_PRIV(disp_mgt), vsi_id);
+}
+
+static void nbl_disp_chan_free_txrx_queues_req(void *priv, u16 vsi_id)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_channel_ops *chan_ops;
+ struct nbl_chan_param_free_txrx_queues param = {0};
+ struct nbl_chan_send_info chan_send;
+
+ chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+
+ param.vsi_id = vsi_id;
+
+ NBL_CHAN_SEND(chan_send, 0, NBL_CHAN_MSG_FREE_TXRX_QUEUES, ¶m,
+ sizeof(param), NULL, 0, 1);
+ chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send);
+}
+
+static void nbl_disp_clear_queues(void *priv, u16 vsi_id)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+
+ NBL_OPS_CALL(res_ops->clear_queues, (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt), vsi_id));
+}
+
+static void nbl_disp_chan_clear_queues_req(void *priv, u16 vsi_id)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ struct nbl_chan_send_info chan_send = {0};
+
+ NBL_CHAN_SEND(chan_send, 0, NBL_CHAN_MSG_CLEAR_QUEUE, &vsi_id, sizeof(vsi_id),
+ NULL, 0, 1);
+ chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send);
+}
+
+static int nbl_disp_start_tx_ring(void *priv,
+ struct nbl_start_tx_ring_param *param,
+ u64 *dma_addr)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_resource_ops *res_ops;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ return res_ops->start_tx_ring(NBL_DISP_MGT_TO_RES_PRIV(disp_mgt),
+ param, dma_addr);
+}
+
+static void nbl_disp_release_tx_ring(void *priv, u16 queue_idx)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_resource_ops *res_ops;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ return res_ops->release_tx_ring(NBL_DISP_MGT_TO_RES_PRIV(disp_mgt),
+ queue_idx);
+}
+
+static void nbl_disp_stop_tx_ring(void *priv, u16 queue_idx)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_resource_ops *res_ops;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ return res_ops->stop_tx_ring(NBL_DISP_MGT_TO_RES_PRIV(disp_mgt),
+ queue_idx);
+}
+
+static int nbl_disp_start_rx_ring(void *priv,
+ struct nbl_start_rx_ring_param *param,
+ u64 *dma_addr)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_resource_ops *res_ops;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ return res_ops->start_rx_ring(NBL_DISP_MGT_TO_RES_PRIV(disp_mgt),
+ param, dma_addr);
+}
+
+static int nbl_disp_alloc_rx_bufs(void *priv, u16 queue_idx)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_resource_ops *res_ops;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ return res_ops->alloc_rx_bufs(NBL_DISP_MGT_TO_RES_PRIV(disp_mgt),
+ queue_idx);
+}
+
+static void nbl_disp_release_rx_ring(void *priv, u16 queue_idx)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_resource_ops *res_ops;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ return res_ops->release_rx_ring(NBL_DISP_MGT_TO_RES_PRIV(disp_mgt),
+ queue_idx);
+}
+
+static void nbl_disp_stop_rx_ring(void *priv, u16 queue_idx)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_resource_ops *res_ops;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ return res_ops->stop_rx_ring(NBL_DISP_MGT_TO_RES_PRIV(disp_mgt),
+ queue_idx);
+}
+
+static void nbl_disp_update_rx_ring(void *priv, u16 index)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_resource_ops *res_ops;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ res_ops->update_rx_ring(NBL_DISP_MGT_TO_RES_PRIV(disp_mgt), index);
+}
+
+static int nbl_disp_alloc_rings(void *priv, u16 tx_num, u16 rx_num, u16 queue_offset)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_resource_ops *res_ops;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ return res_ops->alloc_rings(NBL_DISP_MGT_TO_RES_PRIV(disp_mgt),
+ tx_num, rx_num, queue_offset);
+}
+
+static void nbl_disp_remove_rings(void *priv)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_resource_ops *res_ops;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ res_ops->remove_rings(NBL_DISP_MGT_TO_RES_PRIV(disp_mgt));
+}
+
+static int
+nbl_disp_setup_queue(void *priv, struct nbl_txrx_queue_param *param, bool is_tx)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_resource_ops *res_ops;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ return res_ops->setup_queue(NBL_DISP_MGT_TO_RES_PRIV(disp_mgt),
+ param, is_tx);
+}
+
+static int
+nbl_disp_chan_setup_queue_req(void *priv,
+ struct nbl_txrx_queue_param *queue_param,
+ bool is_tx)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_channel_ops *chan_ops;
+ struct nbl_chan_param_setup_queue param = {0};
+ struct nbl_chan_send_info chan_send;
+
+ chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+
+ memcpy(¶m.queue_param, queue_param, sizeof(param.queue_param));
+ param.is_tx = is_tx;
+
+ NBL_CHAN_SEND(chan_send, 0, NBL_CHAN_MSG_SETUP_QUEUE, ¶m,
+ sizeof(param), NULL, 0, 1);
+ return chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send);
+}
+
+static void nbl_disp_remove_all_queues(void *priv, u16 vsi_id)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_resource_ops *res_ops;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ res_ops->remove_all_queues(NBL_DISP_MGT_TO_RES_PRIV(disp_mgt), vsi_id);
+}
+
+static void nbl_disp_chan_remove_all_queues_req(void *priv, u16 vsi_id)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_channel_ops *chan_ops;
+ struct nbl_chan_param_remove_all_queues param = {0};
+ struct nbl_chan_send_info chan_send;
+
+ chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+
+ param.vsi_id = vsi_id;
+
+ NBL_CHAN_SEND(chan_send, 0, NBL_CHAN_MSG_REMOVE_ALL_QUEUES,
+ ¶m, sizeof(param), NULL, 0, 1);
+ chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send);
+}
+
+#define NBL_DISP_OPS_TBL \
+do { \
+ NBL_DISP_SET_OPS(alloc_txrx_queues, nbl_disp_alloc_txrx_queues, \
+ NBL_DISP_CTRL_LVL_MGT, \
+ NBL_CHAN_MSG_ALLOC_TXRX_QUEUES, \
+ nbl_disp_chan_alloc_txrx_queues_req, \
+ NULL); \
+ NBL_DISP_SET_OPS(free_txrx_queues, nbl_disp_free_txrx_queues, \
+ NBL_DISP_CTRL_LVL_MGT, \
+ NBL_CHAN_MSG_FREE_TXRX_QUEUES, \
+ nbl_disp_chan_free_txrx_queues_req, \
+ NULL); \
+ NBL_DISP_SET_OPS(alloc_rings, nbl_disp_alloc_rings, \
+ NBL_DISP_CTRL_LVL_ALWAYS, -1, \
+ NULL, NULL); \
+ NBL_DISP_SET_OPS(remove_rings, nbl_disp_remove_rings, \
+ NBL_DISP_CTRL_LVL_ALWAYS, -1, \
+ NULL, NULL); \
+ NBL_DISP_SET_OPS(start_tx_ring, nbl_disp_start_tx_ring, \
+ NBL_DISP_CTRL_LVL_ALWAYS, -1, NULL, NULL); \
+ NBL_DISP_SET_OPS(stop_tx_ring, nbl_disp_stop_tx_ring, \
+ NBL_DISP_CTRL_LVL_ALWAYS, -1, NULL, NULL); \
+ NBL_DISP_SET_OPS(release_tx_ring, nbl_disp_release_tx_ring, \
+ NBL_DISP_CTRL_LVL_ALWAYS, -1, NULL, NULL); \
+ NBL_DISP_SET_OPS(start_rx_ring, nbl_disp_start_rx_ring, \
+ NBL_DISP_CTRL_LVL_ALWAYS, -1, NULL, NULL); \
+ NBL_DISP_SET_OPS(alloc_rx_bufs, nbl_disp_alloc_rx_bufs, \
+ NBL_DISP_CTRL_LVL_ALWAYS, -1, NULL, NULL); \
+ NBL_DISP_SET_OPS(stop_rx_ring, nbl_disp_stop_rx_ring, \
+ NBL_DISP_CTRL_LVL_ALWAYS, -1, NULL, NULL); \
+ NBL_DISP_SET_OPS(release_rx_ring, nbl_disp_release_rx_ring, \
+ NBL_DISP_CTRL_LVL_ALWAYS, -1, NULL, NULL); \
+ NBL_DISP_SET_OPS(update_rx_ring, nbl_disp_update_rx_ring, \
+ NBL_DISP_CTRL_LVL_ALWAYS, -1, \
+ NULL, NULL); \
+ NBL_DISP_SET_OPS(setup_queue, nbl_disp_setup_queue, \
+ NBL_DISP_CTRL_LVL_MGT, \
+ NBL_CHAN_MSG_SETUP_QUEUE, \
+ nbl_disp_chan_setup_queue_req, NULL); \
+ NBL_DISP_SET_OPS(remove_all_queues, nbl_disp_remove_all_queues, \
+ NBL_DISP_CTRL_LVL_MGT, \
+ NBL_CHAN_MSG_REMOVE_ALL_QUEUES, \
+ nbl_disp_chan_remove_all_queues_req, NULL); \
+ NBL_DISP_SET_OPS(clear_queues, nbl_disp_clear_queues, \
+ NBL_DISP_CTRL_LVL_MGT, \
+ NBL_CHAN_MSG_CLEAR_QUEUE, \
+ nbl_disp_chan_clear_queues_req, NULL); \
+} while (0)
+
+/* Structure management starts here; adding an op should not require changes below */
+static int nbl_disp_setup_msg(struct nbl_dispatch_mgt *disp_mgt)
+{
+ struct nbl_channel_ops *chan_ops;
+ int ret = 0;
+
+ chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+
+#define NBL_DISP_SET_OPS(disp_op, res_func, ctrl_lvl2, msg_type, msg_req, msg_resp) \
+do { \
+ typeof(msg_type) _msg_type = (msg_type); \
+ typeof(msg_resp) _msg_resp = (msg_resp); \
+ uint32_t _ctrl_lvl = rte_bit_relaxed_get32(ctrl_lvl2, &disp_mgt->ctrl_lvl); \
+ if (_msg_type >= 0 && _msg_resp != NULL && _ctrl_lvl) \
+ ret += chan_ops->register_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), \
+ _msg_type, _msg_resp, disp_mgt); \
+} while (0)
+ NBL_DISP_OPS_TBL;
+#undef NBL_DISP_SET_OPS
+
+ return ret;
+}
+
+/* Ctrl lvl means that if a certain level is set, all disp_ops declared at this lvl
+ * will call res_ops directly, rather than sending a channel msg, and vice versa.
+ */
+static int nbl_disp_setup_ctrl_lvl(struct nbl_dispatch_mgt *disp_mgt, u32 lvl)
+{
+ struct nbl_dispatch_ops *disp_ops;
+
+ disp_ops = NBL_DISP_MGT_TO_DISP_OPS(disp_mgt);
+
+ rte_bit_relaxed_set32(lvl, &disp_mgt->ctrl_lvl);
+
+#define NBL_DISP_SET_OPS(disp_op, res_func, ctrl, msg_type, msg_req, msg_resp) \
+do { \
+ disp_ops->NBL_NAME(disp_op) = \
+		rte_bit_relaxed_get32(ctrl, &disp_mgt->ctrl_lvl) ? res_func : msg_req; \
+} while (0)
+ NBL_DISP_OPS_TBL;
+#undef NBL_DISP_SET_OPS
+
+ return 0;
+}
+
+static int nbl_disp_setup_disp_mgt(struct nbl_dispatch_mgt **disp_mgt)
+{
+ *disp_mgt = rte_zmalloc("nbl_disp_mgt", sizeof(struct nbl_dispatch_mgt), 0);
+ if (!*disp_mgt)
+ return -ENOMEM;
+
+ return 0;
+}
+
+static void nbl_disp_remove_disp_mgt(struct nbl_dispatch_mgt **disp_mgt)
+{
+ rte_free(*disp_mgt);
+ *disp_mgt = NULL;
+}
+
+static void nbl_disp_remove_ops(struct nbl_dispatch_ops_tbl **disp_ops_tbl)
+{
+ rte_free(NBL_DISP_OPS_TBL_TO_OPS(*disp_ops_tbl));
+ rte_free(*disp_ops_tbl);
+ *disp_ops_tbl = NULL;
+}
+
+static int nbl_disp_setup_ops(struct nbl_dispatch_ops_tbl **disp_ops_tbl,
+ struct nbl_dispatch_mgt *disp_mgt)
+{
+ struct nbl_dispatch_ops *disp_ops;
+
+ *disp_ops_tbl = rte_zmalloc("nbl_disp_ops_tbl", sizeof(struct nbl_dispatch_ops_tbl), 0);
+ if (!*disp_ops_tbl)
+ return -ENOMEM;
+
+ disp_ops = rte_zmalloc("nbl_dispatch_ops", sizeof(struct nbl_dispatch_ops), 0);
+ if (!disp_ops) {
+ rte_free(*disp_ops_tbl);
+ return -ENOMEM;
+ }
+
+ NBL_DISP_OPS_TBL_TO_OPS(*disp_ops_tbl) = disp_ops;
+ NBL_DISP_OPS_TBL_TO_PRIV(*disp_ops_tbl) = disp_mgt;
+
+ return 0;
+}
+
+int nbl_disp_init(void *p)
+{
+ struct nbl_adapter *adapter = (struct nbl_adapter *)p;
+ struct nbl_dispatch_mgt **disp_mgt;
+ struct nbl_dispatch_ops_tbl **disp_ops_tbl;
+ struct nbl_resource_ops_tbl *res_ops_tbl;
+ struct nbl_channel_ops_tbl *chan_ops_tbl;
+ struct nbl_product_dispatch_ops *disp_product_ops = NULL;
+ int ret = 0;
+
+ disp_mgt = (struct nbl_dispatch_mgt **)&NBL_ADAPTER_TO_DISP_MGT(adapter);
+ disp_ops_tbl = &NBL_ADAPTER_TO_DISP_OPS_TBL(adapter);
+ res_ops_tbl = NBL_ADAPTER_TO_RES_OPS_TBL(adapter);
+ chan_ops_tbl = NBL_ADAPTER_TO_CHAN_OPS_TBL(adapter);
+ disp_product_ops = nbl_dispatch_get_product_ops(adapter->caps.product_type);
+
+ ret = nbl_disp_setup_disp_mgt(disp_mgt);
+ if (ret)
+ return ret;
+
+ ret = nbl_disp_setup_ops(disp_ops_tbl, *disp_mgt);
+ if (ret)
+ goto setup_ops_fail;
+
+ NBL_DISP_MGT_TO_RES_OPS_TBL(*disp_mgt) = res_ops_tbl;
+ NBL_DISP_MGT_TO_CHAN_OPS_TBL(*disp_mgt) = chan_ops_tbl;
+ NBL_DISP_MGT_TO_DISP_OPS_TBL(*disp_mgt) = *disp_ops_tbl;
+
+ if (disp_product_ops->dispatch_init) {
+ ret = disp_product_ops->dispatch_init(*disp_mgt);
+ if (ret)
+ goto dispatch_init_fail;
+ }
+
+ ret = nbl_disp_setup_ctrl_lvl(*disp_mgt, NBL_DISP_CTRL_LVL_ALWAYS);
+ if (ret)
+ goto setup_ctrl_lvl_fail;
+ return 0;
+
+setup_ctrl_lvl_fail:
+ disp_product_ops->dispatch_uninit(*disp_mgt);
+dispatch_init_fail:
+ nbl_disp_remove_ops(disp_ops_tbl);
+setup_ops_fail:
+ nbl_disp_remove_disp_mgt(disp_mgt);
+
+ return ret;
+}
+
+void nbl_disp_remove(void *p)
+{
+ struct nbl_adapter *adapter = (struct nbl_adapter *)p;
+ struct nbl_dispatch_mgt **disp_mgt;
+ struct nbl_dispatch_ops_tbl **disp_ops_tbl;
+
+ disp_mgt = (struct nbl_dispatch_mgt **)&NBL_ADAPTER_TO_DISP_MGT(adapter);
+ disp_ops_tbl = &NBL_ADAPTER_TO_DISP_OPS_TBL(adapter);
+
+ nbl_disp_remove_ops(disp_ops_tbl);
+ nbl_disp_remove_disp_mgt(disp_mgt);
+}
+
+static int nbl_disp_leonis_init(void *p)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)p;
+ int ret;
+
+ nbl_disp_setup_ctrl_lvl(disp_mgt, NBL_DISP_CTRL_LVL_NET);
+ ret = nbl_disp_setup_msg(disp_mgt);
+
+ return ret;
+}
+
+static int nbl_disp_leonis_uninit(void *p)
+{
+ RTE_SET_USED(p);
+ return 0;
+}
+
+static struct nbl_product_dispatch_ops nbl_product_dispatch_ops[NBL_PRODUCT_MAX] = {
+ {
+ .dispatch_init = nbl_disp_leonis_init,
+ .dispatch_uninit = nbl_disp_leonis_uninit,
+ },
+};
+
+struct nbl_product_dispatch_ops *nbl_dispatch_get_product_ops(enum nbl_product_type product_type)
+{
+ return &nbl_product_dispatch_ops[product_type];
+}
diff --git a/drivers/net/nbl/nbl_dispatch.h b/drivers/net/nbl/nbl_dispatch.h
new file mode 100644
index 0000000000..dcdf87576a
--- /dev/null
+++ b/drivers/net/nbl/nbl_dispatch.h
@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2025 Nebulamatrix Technology Co., Ltd.
+ */
+
+#ifndef _NBL_DISPATCH_H_
+#define _NBL_DISPATCH_H_
+
+#include "nbl_ethdev.h"
+
+#define NBL_DISP_MGT_TO_RES_OPS_TBL(disp_mgt) ((disp_mgt)->res_ops_tbl)
+#define NBL_DISP_MGT_TO_RES_OPS(disp_mgt) (NBL_DISP_MGT_TO_RES_OPS_TBL(disp_mgt)->ops)
+#define NBL_DISP_MGT_TO_RES_PRIV(disp_mgt) (NBL_DISP_MGT_TO_RES_OPS_TBL(disp_mgt)->priv)
+#define NBL_DISP_MGT_TO_CHAN_OPS_TBL(disp_mgt) ((disp_mgt)->chan_ops_tbl)
+#define NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt) (NBL_DISP_MGT_TO_CHAN_OPS_TBL(disp_mgt)->ops)
+#define NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt) (NBL_DISP_MGT_TO_CHAN_OPS_TBL(disp_mgt)->priv)
+#define NBL_DISP_MGT_TO_DISP_OPS_TBL(disp_mgt) ((disp_mgt)->disp_ops_tbl)
+#define NBL_DISP_MGT_TO_DISP_OPS(disp_mgt) (NBL_DISP_MGT_TO_DISP_OPS_TBL(disp_mgt)->ops)
+#define NBL_DISP_MGT_TO_DISP_PRIV(disp_mgt) (NBL_DISP_MGT_TO_DISP_OPS_TBL(disp_mgt)->priv)
+
+struct nbl_dispatch_mgt {
+ struct nbl_resource_ops_tbl *res_ops_tbl;
+ struct nbl_channel_ops_tbl *chan_ops_tbl;
+ struct nbl_dispatch_ops_tbl *disp_ops_tbl;
+ uint32_t ctrl_lvl;
+};
+
+struct nbl_product_dispatch_ops *nbl_dispatch_get_product_ops(enum nbl_product_type product_type);
+
+#endif
diff --git a/drivers/net/nbl/nbl_include/nbl_def_channel.h b/drivers/net/nbl/nbl_include/nbl_def_channel.h
index faf5d3ed3d..25d54a435d 100644
--- a/drivers/net/nbl/nbl_include/nbl_def_channel.h
+++ b/drivers/net/nbl/nbl_include/nbl_def_channel.h
@@ -281,6 +281,24 @@ enum nbl_chan_msg_type {
NBL_CHAN_MSG_MAX,
};
+struct nbl_chan_param_alloc_txrx_queues {
+ u16 vsi_id;
+ u16 queue_num;
+};
+
+struct nbl_chan_param_free_txrx_queues {
+ u16 vsi_id;
+};
+
+struct nbl_chan_param_setup_queue {
+ struct nbl_txrx_queue_param queue_param;
+ bool is_tx;
+};
+
+struct nbl_chan_param_remove_all_queues {
+ u16 vsi_id;
+};
+
struct nbl_chan_send_info {
uint16_t dstid;
uint16_t msg_type;
diff --git a/drivers/net/nbl/nbl_include/nbl_def_common.h b/drivers/net/nbl/nbl_include/nbl_def_common.h
index 0bfc6a233b..fb2ccb28bf 100644
--- a/drivers/net/nbl/nbl_include/nbl_def_common.h
+++ b/drivers/net/nbl/nbl_include/nbl_def_common.h
@@ -13,6 +13,10 @@
# define NBL_PRIU64 "llu"
# endif
+#define NBL_OPS_CALL(func, para) \
+ ({ typeof(func) _func = (func); \
+ (!_func) ? 0 : _func para; })
+
struct nbl_dma_mem {
void *va;
uint64_t pa;
diff --git a/drivers/net/nbl/nbl_include/nbl_def_dispatch.h b/drivers/net/nbl/nbl_include/nbl_def_dispatch.h
new file mode 100644
index 0000000000..5fd890b699
--- /dev/null
+++ b/drivers/net/nbl/nbl_include/nbl_def_dispatch.h
@@ -0,0 +1,77 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2025 Nebulamatrix Technology Co., Ltd.
+ */
+
+#ifndef _NBL_DEF_DISPATCH_H_
+#define _NBL_DEF_DISPATCH_H_
+
+#include "nbl_include.h"
+
+#define NBL_DISP_OPS_TBL_TO_OPS(disp_ops_tbl) ((disp_ops_tbl)->ops)
+#define NBL_DISP_OPS_TBL_TO_PRIV(disp_ops_tbl) ((disp_ops_tbl)->priv)
+
+enum {
+ NBL_DISP_CTRL_LVL_NEVER = 0,
+ NBL_DISP_CTRL_LVL_MGT,
+ NBL_DISP_CTRL_LVL_NET,
+ NBL_DISP_CTRL_LVL_ALWAYS,
+ NBL_DISP_CTRL_LVL_MAX,
+};
+
+struct nbl_dispatch_ops {
+ int (*add_macvlan)(void *priv, u8 *mac, u16 vlan_id, u16 vsi_id);
+ int (*get_mac_addr)(void *priv, u8 *mac);
+ void (*del_macvlan)(void *priv, u8 *mac, u16 vlan_id, u16 vsi_id);
+ int (*add_multi_rule)(void *priv, u16 vsi);
+ void (*del_multi_rule)(void *priv, u16 vsi);
+ int (*cfg_multi_mcast)(void *priv, u16 vsi, u16 enable);
+ void (*clear_flow)(void *priv, u16 vsi_id);
+	void (*get_firmware_version)(void *priv, char *firmware_version, u8 max_len);
+ int (*set_promisc_mode)(void *priv, u16 vsi_id, u16 mode);
+ int (*alloc_txrx_queues)(void *priv, u16 vsi_id, u16 queue_num);
+ void (*free_txrx_queues)(void *priv, u16 vsi_id);
+ u16 (*get_vsi_id)(void *priv);
+ void (*get_eth_id)(void *priv, u16 vsi_id, u8 *eth_mode, u8 *eth_id);
+ int (*setup_txrx_queues)(void *priv, u16 vsi_id, u16 queue_num);
+ void (*remove_txrx_queues)(void *priv, u16 vsi_id);
+ int (*alloc_rings)(void *priv, u16 tx_num, u16 rx_num, u16 queue_offset);
+ void (*remove_rings)(void *priv);
+ int (*start_tx_ring)(void *priv, struct nbl_start_tx_ring_param *param, u64 *dma_addr);
+ void (*stop_tx_ring)(void *priv, u16 queue_idx);
+ void (*release_tx_ring)(void *priv, u16 queue_idx);
+ int (*start_rx_ring)(void *priv, struct nbl_start_rx_ring_param *param, u64 *dma_addr);
+ int (*alloc_rx_bufs)(void *priv, u16 queue_idx);
+ void (*stop_rx_ring)(void *priv, u16 queue_idx);
+ void (*release_rx_ring)(void *priv, u16 queue_idx);
+ void (*update_rx_ring)(void *priv, u16 index);
+ u16 (*get_tx_ehdr_len)(void *priv);
+ void (*cfg_txrx_vlan)(void *priv, u16 vlan_tci, u16 vlan_proto);
+ int (*setup_queue)(void *priv, struct nbl_txrx_queue_param *param, bool is_tx);
+ void (*remove_all_queues)(void *priv, u16 vsi_id);
+ int (*register_vsi2q)(void *priv, u16 vsi_index, u16 vsi_id,
+ u16 queue_offset, u16 queue_num);
+ int (*setup_q2vsi)(void *priv, u16 vsi_id);
+ void (*remove_q2vsi)(void *priv, u16 vsi_id);
+ int (*setup_rss)(void *priv, u16 vsi_id);
+ void (*remove_rss)(void *priv, u16 vsi_id);
+ int (*cfg_dsch)(void *priv, u16 vsi_id, bool vld);
+ int (*setup_cqs)(void *priv, u16 vsi_id, u16 real_qps);
+ void (*remove_cqs)(void *priv, u16 vsi_id);
+ int (*set_rxfh_indir)(void *priv, u16 vsi_id, u32 *indir, u32 indir_size);
+ void (*clear_queues)(void *priv, u16 vsi_id);
+ u16 (*xmit_pkts)(void *priv, void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts);
+ u16 (*recv_pkts)(void *priv, void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts);
+ u16 (*get_vsi_global_qid)(void *priv, u16 vsi_id, u16 local_qid);
+
+ void (*dummy_func)(void *priv);
+};
+
+struct nbl_dispatch_ops_tbl {
+ struct nbl_dispatch_ops *ops;
+ void *priv;
+};
+
+int nbl_disp_init(void *p);
+void nbl_disp_remove(void *p);
+
+#endif
diff --git a/drivers/net/nbl/nbl_include/nbl_def_resource.h b/drivers/net/nbl/nbl_include/nbl_def_resource.h
index c1cf041c74..43302df842 100644
--- a/drivers/net/nbl/nbl_include/nbl_def_resource.h
+++ b/drivers/net/nbl/nbl_include/nbl_def_resource.h
@@ -25,6 +25,11 @@ struct nbl_resource_ops {
int (*reset_stats)(void *priv);
void (*update_rx_ring)(void *priv, u16 queue_idx);
u16 (*get_tx_ehdr_len)(void *priv);
+ int (*alloc_txrx_queues)(void *priv, u16 vsi_id, u16 queue_num);
+ void (*free_txrx_queues)(void *priv, u16 vsi_id);
+ void (*clear_queues)(void *priv, u16 vsi_id);
+ int (*setup_queue)(void *priv, struct nbl_txrx_queue_param *param, bool is_tx);
+ void (*remove_all_queues)(void *priv, u16 vsi_id);
u64 (*restore_abnormal_ring)(void *priv, u16 local_queue_id, int type);
int (*restart_abnormal_ring)(void *priv, int ring_index, int type);
void (*cfg_txrx_vlan)(void *priv, u16 vlan_tci, u16 vlan_proto);
diff --git a/drivers/net/nbl/nbl_include/nbl_include.h b/drivers/net/nbl/nbl_include/nbl_include.h
index caf77dc8d6..9337666d16 100644
--- a/drivers/net/nbl/nbl_include/nbl_include.h
+++ b/drivers/net/nbl/nbl_include/nbl_include.h
@@ -92,4 +92,21 @@ struct nbl_start_tx_ring_param {
const struct rte_eth_txconf *conf;
};
+struct nbl_txrx_queue_param {
+ u16 vsi_id;
+ u64 dma;
+ u64 avail;
+ u64 used;
+ u16 desc_num;
+ u16 local_queue_id;
+ u16 intr_en;
+ u16 intr_mask;
+ u16 global_vector_id;
+ u16 half_offload_en;
+ u16 split;
+ u16 extend_header;
+ u16 cxt;
+ u16 rxcsum;
+};
+
#endif
--
2.43.0
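The dispatch layer in this patch pairs a struct of function pointers with an opaque `priv` pointer, accessed through accessor macros such as `NBL_DISP_OPS_TBL_TO_OPS` and `NBL_DISP_OPS_TBL_TO_PRIV`. A minimal standalone sketch of that ops-table pattern follows; all `demo_*` names are hypothetical and only illustrate the indirection, they are not part of the driver:

```c
#include <assert.h>

/* Accessor macros mirroring the NBL_DISP_OPS_TBL_TO_OPS/PRIV style. */
#define DEMO_TBL_TO_OPS(tbl)	((tbl)->ops)
#define DEMO_TBL_TO_PRIV(tbl)	((tbl)->priv)

struct demo_ops {
	int (*get_vsi_id)(void *priv);
};

struct demo_ops_tbl {
	struct demo_ops *ops;
	void *priv;
};

struct demo_priv {
	int vsi_id;
};

static int demo_get_vsi_id(void *priv)
{
	return ((struct demo_priv *)priv)->vsi_id;
}

static struct demo_ops demo_ops = {
	.get_vsi_id = demo_get_vsi_id,
};

int demo_call(void)
{
	struct demo_priv priv = { .vsi_id = 7 };
	struct demo_ops_tbl tbl = { .ops = &demo_ops, .priv = &priv };

	/* Callers never touch priv directly; every call goes through
	 * the ops table with priv passed back as the first argument. */
	return DEMO_TBL_TO_OPS(&tbl)->get_vsi_id(DEMO_TBL_TO_PRIV(&tbl));
}
```

The same shape lets each layer (channel, resource, dispatch, dev) swap implementations per product type without callers knowing the concrete management struct behind `priv`.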
* [PATCH v1 07/17] net/nbl: add Dev layer definitions and implementation
2025-06-12 8:58 [PATCH v1 00/17] NBL PMD for Nebulamatrix NICs Kyo Liu
` (5 preceding siblings ...)
2025-06-12 8:58 ` [PATCH v1 06/17] net/nbl: add Dispatch " Kyo Liu
@ 2025-06-12 8:58 ` Kyo Liu
2025-06-12 8:58 ` [PATCH v1 08/17] net/nbl: add complete device init and uninit functionality Kyo Liu
` (12 subsequent siblings)
19 siblings, 0 replies; 27+ messages in thread
From: Kyo Liu @ 2025-06-12 8:58 UTC (permalink / raw)
To: kyo.liu, dev; +Cc: Dimon Zhao, Leon Yu, Sam Chen
add Dev layer related definitions
Signed-off-by: Kyo Liu <kyo.liu@nebula-matrix.com>
---
drivers/net/nbl/meson.build | 2 +
drivers/net/nbl/nbl_core.c | 14 +-
drivers/net/nbl/nbl_core.h | 16 ++
drivers/net/nbl/nbl_dev/nbl_dev.c | 200 ++++++++++++++++++++++
drivers/net/nbl/nbl_dev/nbl_dev.h | 24 +++
drivers/net/nbl/nbl_include/nbl_def_dev.h | 107 ++++++++++++
6 files changed, 360 insertions(+), 3 deletions(-)
create mode 100644 drivers/net/nbl/nbl_dev/nbl_dev.c
create mode 100644 drivers/net/nbl/nbl_dev/nbl_dev.h
create mode 100644 drivers/net/nbl/nbl_include/nbl_def_dev.h
diff --git a/drivers/net/nbl/meson.build b/drivers/net/nbl/meson.build
index 23601727ef..7f4abd3db0 100644
--- a/drivers/net/nbl/meson.build
+++ b/drivers/net/nbl/meson.build
@@ -8,6 +8,7 @@ endif
includes += include_directories('nbl_include')
includes += include_directories('nbl_hw')
+includes += include_directories('nbl_common')
sources = files(
'nbl_ethdev.c',
@@ -15,6 +16,7 @@ sources = files(
'nbl_dispatch.c',
'nbl_common/nbl_common.c',
'nbl_common/nbl_thread.c',
+ 'nbl_dev/nbl_dev.c',
'nbl_hw/nbl_channel.c',
'nbl_hw/nbl_resource.c',
'nbl_hw/nbl_txrx.c',
diff --git a/drivers/net/nbl/nbl_core.c b/drivers/net/nbl/nbl_core.c
index 548eb3a2fd..1a6a6bc11d 100644
--- a/drivers/net/nbl/nbl_core.c
+++ b/drivers/net/nbl/nbl_core.c
@@ -54,8 +54,14 @@ int nbl_core_init(struct nbl_adapter *adapter, struct rte_eth_dev *eth_dev)
if (ret)
goto disp_init_fail;
+ ret = nbl_dev_init(adapter, eth_dev);
+ if (ret)
+ goto dev_init_fail;
+
return 0;
+dev_init_fail:
+ nbl_disp_remove(adapter);
disp_init_fail:
product_base_ops->res_remove(adapter);
res_init_fail:
@@ -72,6 +78,7 @@ void nbl_core_remove(struct nbl_adapter *adapter)
product_base_ops = nbl_core_get_product_ops(adapter->caps.product_type);
+ nbl_dev_remove(adapter);
nbl_disp_remove(adapter);
product_base_ops->res_remove(adapter);
product_base_ops->chan_remove(adapter);
@@ -80,12 +87,13 @@ void nbl_core_remove(struct nbl_adapter *adapter)
int nbl_core_start(struct nbl_adapter *adapter)
{
- RTE_SET_USED(adapter);
+ int ret = 0;
- return 0;
+ ret = nbl_dev_start(adapter);
+ return ret;
}
void nbl_core_stop(struct nbl_adapter *adapter)
{
- RTE_SET_USED(adapter);
+ nbl_dev_stop(adapter);
}
diff --git a/drivers/net/nbl/nbl_core.h b/drivers/net/nbl/nbl_core.h
index 2730539050..9a05bbee48 100644
--- a/drivers/net/nbl/nbl_core.h
+++ b/drivers/net/nbl/nbl_core.h
@@ -11,6 +11,7 @@
#include "nbl_def_channel.h"
#include "nbl_def_resource.h"
#include "nbl_def_dispatch.h"
+#include "nbl_def_dev.h"
#define NBL_VENDOR_ID (0x1F0F)
#define NBL_DEVICE_ID_M18110 (0x3403)
@@ -37,11 +38,13 @@
#define NBL_ADAPTER_TO_CHAN_MGT(adapter) ((adapter)->core.chan_mgt)
#define NBL_ADAPTER_TO_RES_MGT(adapter) ((adapter)->core.res_mgt)
#define NBL_ADAPTER_TO_DISP_MGT(adapter) ((adapter)->core.disp_mgt)
+#define NBL_ADAPTER_TO_DEV_MGT(adapter) ((adapter)->core.dev_mgt)
#define NBL_ADAPTER_TO_PHY_OPS_TBL(adapter) ((adapter)->intf.phy_ops_tbl)
#define NBL_ADAPTER_TO_CHAN_OPS_TBL(adapter) ((adapter)->intf.channel_ops_tbl)
#define NBL_ADAPTER_TO_RES_OPS_TBL(adapter) ((adapter)->intf.resource_ops_tbl)
#define NBL_ADAPTER_TO_DISP_OPS_TBL(adapter) ((adapter)->intf.dispatch_ops_tbl)
+#define NBL_ADAPTER_TO_DEV_OPS_TBL(adapter) ((adapter)->intf.dev_ops_tbl)
struct nbl_core {
void *phy_mgt;
@@ -51,11 +54,23 @@ struct nbl_core {
void *dev_mgt;
};
+enum nbl_ethdev_state {
+ NBL_ETHDEV_UNINITIALIZED = 0,
+ NBL_ETHDEV_INITIALIZED,
+ NBL_ETHDEV_CONFIGURING,
+ NBL_ETHDEV_CONFIGURED,
+ NBL_ETHDEV_CLOSING,
+ NBL_ETHDEV_STARTING,
+ NBL_ETHDEV_STARTED,
+ NBL_ETHDEV_STOPPING,
+};
+
struct nbl_interface {
struct nbl_phy_ops_tbl *phy_ops_tbl;
struct nbl_channel_ops_tbl *channel_ops_tbl;
struct nbl_resource_ops_tbl *resource_ops_tbl;
struct nbl_dispatch_ops_tbl *dispatch_ops_tbl;
+ struct nbl_dev_ops_tbl *dev_ops_tbl;
};
struct nbl_adapter {
@@ -64,6 +79,7 @@ struct nbl_adapter {
struct nbl_core core;
struct nbl_interface intf;
struct nbl_func_caps caps;
+ enum nbl_ethdev_state state;
};
int nbl_core_init(struct nbl_adapter *adapter, struct rte_eth_dev *eth_dev);
diff --git a/drivers/net/nbl/nbl_dev/nbl_dev.c b/drivers/net/nbl/nbl_dev/nbl_dev.c
new file mode 100644
index 0000000000..86006d6762
--- /dev/null
+++ b/drivers/net/nbl/nbl_dev/nbl_dev.c
@@ -0,0 +1,200 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2025 Nebulamatrix Technology Co., Ltd.
+ */
+
+#include "nbl_dev.h"
+
+static int nbl_dev_configure(struct rte_eth_dev *eth_dev)
+{
+ RTE_SET_USED(eth_dev);
+ return 0;
+}
+
+static int nbl_dev_port_start(struct rte_eth_dev *eth_dev)
+{
+ RTE_SET_USED(eth_dev);
+ return 0;
+}
+
+static int nbl_dev_port_stop(struct rte_eth_dev *eth_dev)
+{
+ RTE_SET_USED(eth_dev);
+ return 0;
+}
+
+static int nbl_dev_close(struct rte_eth_dev *eth_dev)
+{
+ RTE_SET_USED(eth_dev);
+ return 0;
+}
+
+struct nbl_dev_ops dev_ops = {
+ .dev_configure = nbl_dev_configure,
+ .dev_start = nbl_dev_port_start,
+ .dev_stop = nbl_dev_port_stop,
+ .dev_close = nbl_dev_close,
+};
+
+static int nbl_dev_setup_chan_queue(struct nbl_adapter *adapter)
+{
+ struct nbl_dev_mgt *dev_mgt = NBL_ADAPTER_TO_DEV_MGT(adapter);
+ struct nbl_channel_ops *chan_ops = NBL_DEV_MGT_TO_CHAN_OPS(dev_mgt);
+ int ret = 0;
+
+ ret = chan_ops->setup_queue(NBL_DEV_MGT_TO_CHAN_PRIV(dev_mgt));
+
+ return ret;
+}
+
+static int nbl_dev_teardown_chan_queue(struct nbl_adapter *adapter)
+{
+ struct nbl_dev_mgt *dev_mgt = NBL_ADAPTER_TO_DEV_MGT(adapter);
+ struct nbl_channel_ops *chan_ops = NBL_DEV_MGT_TO_CHAN_OPS(dev_mgt);
+ int ret = 0;
+
+ ret = chan_ops->teardown_queue(NBL_DEV_MGT_TO_CHAN_PRIV(dev_mgt));
+
+ return ret;
+}
+
+static int nbl_dev_leonis_init(void *adapter)
+{
+ return nbl_dev_setup_chan_queue((struct nbl_adapter *)adapter);
+}
+
+static void nbl_dev_leonis_uninit(void *adapter)
+{
+ nbl_dev_teardown_chan_queue((struct nbl_adapter *)adapter);
+}
+
+static int nbl_dev_leonis_start(void *p)
+{
+ RTE_SET_USED(p);
+ return 0;
+}
+
+static void nbl_dev_leonis_stop(void *p)
+{
+ RTE_SET_USED(p);
+}
+
+static void nbl_dev_remove_ops(struct nbl_dev_ops_tbl **dev_ops_tbl)
+{
+ rte_free(*dev_ops_tbl);
+ *dev_ops_tbl = NULL;
+}
+
+static int nbl_dev_setup_ops(struct nbl_dev_ops_tbl **dev_ops_tbl,
+ struct nbl_adapter *adapter)
+{
+ *dev_ops_tbl = rte_zmalloc("nbl_dev_ops", sizeof(struct nbl_dev_ops_tbl), 0);
+ if (!*dev_ops_tbl)
+ return -ENOMEM;
+
+ NBL_DEV_OPS_TBL_TO_OPS(*dev_ops_tbl) = &dev_ops;
+ NBL_DEV_OPS_TBL_TO_PRIV(*dev_ops_tbl) = adapter;
+
+ return 0;
+}
+
+int nbl_dev_init(void *p, __rte_unused struct rte_eth_dev *eth_dev)
+{
+ struct nbl_adapter *adapter = (struct nbl_adapter *)p;
+ struct nbl_dev_mgt **dev_mgt;
+ struct nbl_dev_ops_tbl **dev_ops_tbl;
+ struct nbl_channel_ops_tbl *chan_ops_tbl;
+ struct nbl_dispatch_ops_tbl *dispatch_ops_tbl;
+ struct nbl_product_dev_ops *product_dev_ops = NULL;
+ int ret = 0;
+
+ dev_mgt = (struct nbl_dev_mgt **)&NBL_ADAPTER_TO_DEV_MGT(adapter);
+ dev_ops_tbl = &NBL_ADAPTER_TO_DEV_OPS_TBL(adapter);
+ chan_ops_tbl = NBL_ADAPTER_TO_CHAN_OPS_TBL(adapter);
+ dispatch_ops_tbl = NBL_ADAPTER_TO_DISP_OPS_TBL(adapter);
+ product_dev_ops = nbl_dev_get_product_ops(adapter->caps.product_type);
+
+ *dev_mgt = rte_zmalloc("nbl_dev_mgt", sizeof(struct nbl_dev_mgt), 0);
+ if (*dev_mgt == NULL) {
+ NBL_LOG(ERR, "Failed to allocate nbl_dev_mgt memory");
+ return -ENOMEM;
+ }
+
+ NBL_DEV_MGT_TO_CHAN_OPS_TBL(*dev_mgt) = chan_ops_tbl;
+ NBL_DEV_MGT_TO_DISP_OPS_TBL(*dev_mgt) = dispatch_ops_tbl;
+
+ if (product_dev_ops->dev_init)
+ ret = product_dev_ops->dev_init(adapter);
+
+ if (ret)
+ goto init_dev_failed;
+
+ ret = nbl_dev_setup_ops(dev_ops_tbl, adapter);
+ if (ret)
+ goto set_ops_failed;
+
+ adapter->state = NBL_ETHDEV_INITIALIZED;
+
+ return 0;
+
+set_ops_failed:
+ if (product_dev_ops->dev_uninit)
+ product_dev_ops->dev_uninit(adapter);
+init_dev_failed:
+ rte_free(*dev_mgt);
+ *dev_mgt = NULL;
+ return ret;
+}
+
+void nbl_dev_remove(void *p)
+{
+ struct nbl_adapter *adapter = (struct nbl_adapter *)p;
+ struct nbl_dev_mgt **dev_mgt;
+ struct nbl_dev_ops_tbl **dev_ops_tbl;
+ struct nbl_product_dev_ops *product_dev_ops = NULL;
+
+ dev_mgt = (struct nbl_dev_mgt **)&NBL_ADAPTER_TO_DEV_MGT(adapter);
+ dev_ops_tbl = &NBL_ADAPTER_TO_DEV_OPS_TBL(adapter);
+ product_dev_ops = nbl_dev_get_product_ops(adapter->caps.product_type);
+
+ nbl_dev_remove_ops(dev_ops_tbl);
+ if (product_dev_ops->dev_uninit)
+ product_dev_ops->dev_uninit(adapter);
+
+ rte_free(*dev_mgt);
+ *dev_mgt = NULL;
+}
+
+void nbl_dev_stop(void *p)
+{
+ struct nbl_adapter *adapter = (struct nbl_adapter *)p;
+ struct nbl_product_dev_ops *product_dev_ops = NULL;
+
+ product_dev_ops = nbl_dev_get_product_ops(adapter->caps.product_type);
+ if (product_dev_ops->dev_stop)
+ return product_dev_ops->dev_stop(p);
+}
+
+int nbl_dev_start(void *p)
+{
+ struct nbl_adapter *adapter = (struct nbl_adapter *)p;
+ struct nbl_product_dev_ops *product_dev_ops = NULL;
+
+ product_dev_ops = nbl_dev_get_product_ops(adapter->caps.product_type);
+ if (product_dev_ops->dev_start)
+ return product_dev_ops->dev_start(p);
+ return 0;
+}
+
+static struct nbl_product_dev_ops nbl_product_dev_ops[NBL_PRODUCT_MAX] = {
+ {
+ .dev_init = nbl_dev_leonis_init,
+ .dev_uninit = nbl_dev_leonis_uninit,
+ .dev_start = nbl_dev_leonis_start,
+ .dev_stop = nbl_dev_leonis_stop,
+ },
+};
+
+struct nbl_product_dev_ops *nbl_dev_get_product_ops(enum nbl_product_type product_type)
+{
+ return &nbl_product_dev_ops[product_type];
+}
diff --git a/drivers/net/nbl/nbl_dev/nbl_dev.h b/drivers/net/nbl/nbl_dev/nbl_dev.h
new file mode 100644
index 0000000000..ccc9c02531
--- /dev/null
+++ b/drivers/net/nbl/nbl_dev/nbl_dev.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2025 Nebulamatrix Technology Co., Ltd.
+ */
+
+#ifndef _NBL_DEV_H_
+#define _NBL_DEV_H_
+
+#include "nbl_common.h"
+
+#define NBL_DEV_MGT_TO_DISP_OPS_TBL(dev_mgt) ((dev_mgt)->disp_ops_tbl)
+#define NBL_DEV_MGT_TO_DISP_OPS(dev_mgt) (NBL_DEV_MGT_TO_DISP_OPS_TBL(dev_mgt)->ops)
+#define NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt) (NBL_DEV_MGT_TO_DISP_OPS_TBL(dev_mgt)->priv)
+#define NBL_DEV_MGT_TO_CHAN_OPS_TBL(dev_mgt) ((dev_mgt)->chan_ops_tbl)
+#define NBL_DEV_MGT_TO_CHAN_OPS(dev_mgt) (NBL_DEV_MGT_TO_CHAN_OPS_TBL(dev_mgt)->ops)
+#define NBL_DEV_MGT_TO_CHAN_PRIV(dev_mgt) (NBL_DEV_MGT_TO_CHAN_OPS_TBL(dev_mgt)->priv)
+
+struct nbl_dev_mgt {
+ struct nbl_dispatch_ops_tbl *disp_ops_tbl;
+ struct nbl_channel_ops_tbl *chan_ops_tbl;
+};
+
+struct nbl_product_dev_ops *nbl_dev_get_product_ops(enum nbl_product_type product_type);
+
+#endif
diff --git a/drivers/net/nbl/nbl_include/nbl_def_dev.h b/drivers/net/nbl/nbl_include/nbl_def_dev.h
new file mode 100644
index 0000000000..b6e38bc2d3
--- /dev/null
+++ b/drivers/net/nbl/nbl_include/nbl_def_dev.h
@@ -0,0 +1,107 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2025 Nebulamatrix Technology Co., Ltd.
+ */
+
+#ifndef _NBL_DEF_DEV_H_
+#define _NBL_DEF_DEV_H_
+
+#include "nbl_include.h"
+
+#define NBL_DEV_OPS_TBL_TO_OPS(dev_ops_tbl) ((dev_ops_tbl)->ops)
+#define NBL_DEV_OPS_TBL_TO_PRIV(dev_ops_tbl) ((dev_ops_tbl)->priv)
+
+struct nbl_dev_ops {
+ eth_dev_configure_t dev_configure; /**< Configure device */
+ eth_dev_start_t dev_start; /**< Start device */
+ eth_dev_stop_t dev_stop; /**< Stop device */
+ eth_dev_set_link_up_t dev_set_link_up; /**< Device link up */
+ eth_dev_set_link_down_t dev_set_link_down; /**< Device link down */
+ eth_dev_close_t dev_close; /**< Close device */
+ eth_dev_reset_t dev_reset; /**< Reset device */
+ eth_link_update_t link_update; /**< Get device link state */
+ eth_speed_lanes_get_t speed_lanes_get; /**< Get link speed active lanes */
+ eth_speed_lanes_set_t speed_lanes_set; /**< Set link speeds supported lanes */
+ /** Get link speed lanes capability */
+ eth_speed_lanes_get_capability_t speed_lanes_get_capa;
+ /** Check if the device was physically removed */
+ eth_is_removed_t is_removed;
+
+ eth_promiscuous_enable_t promiscuous_enable; /**< Promiscuous ON */
+ eth_promiscuous_disable_t promiscuous_disable;/**< Promiscuous OFF */
+ eth_allmulticast_enable_t allmulticast_enable;/**< Rx multicast ON */
+ eth_allmulticast_disable_t allmulticast_disable;/**< Rx multicast OFF */
+ eth_mac_addr_remove_t mac_addr_remove; /**< Remove MAC address */
+ eth_mac_addr_add_t mac_addr_add; /**< Add a MAC address */
+ eth_mac_addr_set_t mac_addr_set; /**< Set a MAC address */
+ /** Set list of multicast addresses */
+ eth_set_mc_addr_list_t set_mc_addr_list;
+ mtu_set_t mtu_set; /**< Set MTU */
+
+ /** Get generic device statistics */
+ eth_stats_get_t stats_get;
+ /** Reset generic device statistics */
+ eth_stats_reset_t stats_reset;
+ /** Get extended device statistics */
+ eth_xstats_get_t xstats_get;
+ /** Reset extended device statistics */
+ eth_xstats_reset_t xstats_reset;
+ /** Get names of extended statistics */
+ eth_xstats_get_names_t xstats_get_names;
+ /** Configure per queue stat counter mapping */
+ eth_queue_stats_mapping_set_t queue_stats_mapping_set;
+
+ eth_get_module_info_t get_module_info;
+ eth_get_module_eeprom_t get_module_eeprom;
+ reta_update_t reta_update; /** Update redirection table. */
+ reta_query_t reta_query; /** Query redirection table. */
+ rss_hash_conf_get_t rss_hash_conf_get; /** Get current RSS hash configuration. */
+
+ eth_fec_get_capability_t fec_get_capability;
+ /**< Get Forward Error Correction(FEC) capability. */
+ eth_fec_get_t fec_get;
+ /**< Get Forward Error Correction(FEC) mode. */
+ eth_fec_set_t fec_set;
+ /**< Set Forward Error Correction(FEC) mode. */
+
+ eth_dev_infos_get_t dev_infos_get; /**< Get device info. */
+ eth_rxq_info_get_t rxq_info_get; /**< retrieve RX queue information. */
+ eth_txq_info_get_t txq_info_get; /**< retrieve TX queue information. */
+ eth_burst_mode_get_t rx_burst_mode_get; /**< Get RX burst mode */
+ eth_burst_mode_get_t tx_burst_mode_get; /**< Get TX burst mode */
+ eth_fw_version_get_t fw_version_get; /**< Get firmware version. */
+ eth_dev_supported_ptypes_get_t dev_supported_ptypes_get;
+ /**< Get packet types supported and identified by device. */
+ eth_dev_ptypes_set_t dev_ptypes_set;
+ /**< Inform Ethernet device about reduced range of packet types to handle. */
+
+ vlan_filter_set_t vlan_filter_set; /**< Filter VLAN Setup. */
+ vlan_tpid_set_t vlan_tpid_set; /**< Outer/Inner VLAN TPID Setup. */
+ vlan_strip_queue_set_t vlan_strip_queue_set; /**< VLAN Stripping on queue. */
+ vlan_offload_set_t vlan_offload_set; /**< Set VLAN Offload. */
+ vlan_pvid_set_t vlan_pvid_set; /**< Set port based TX VLAN insertion. */
+
+ eth_queue_start_t rx_queue_start;/**< Start RX for a queue. */
+ eth_queue_stop_t rx_queue_stop; /**< Stop RX for a queue. */
+ eth_queue_start_t tx_queue_start;/**< Start TX for a queue. */
+ eth_queue_stop_t tx_queue_stop; /**< Stop TX for a queue. */
+ eth_rx_queue_setup_t rx_queue_setup;/**< Set up device RX queue. */
+ eth_queue_release_t rx_queue_release; /**< Release RX queue. */
+
+ eth_tx_queue_setup_t tx_queue_setup;/**< Set up device TX queue. */
+ eth_queue_release_t tx_queue_release; /**< Release TX queue. */
+ eth_get_eeprom_length_t get_eeprom_length; /**< Get eeprom length. */
+ eth_get_eeprom_t get_eeprom; /**< Get eeprom data. */
+ eth_set_eeprom_t set_eeprom; /**< Set eeprom. */
+};
+
+struct nbl_dev_ops_tbl {
+ struct nbl_dev_ops *ops;
+ void *priv;
+};
+
+int nbl_dev_init(void *p, struct rte_eth_dev *eth_dev);
+void nbl_dev_remove(void *p);
+int nbl_dev_start(void *p);
+void nbl_dev_stop(void *p);
+
+#endif
--
2.43.0
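The init paths in this patch (`nbl_core_init()`, `nbl_dev_init()`) use the common goto-unwind error-handling idiom: each successful setup step gains a matching teardown label, and a later failure jumps into the chain so completed steps are undone in reverse order. A minimal sketch of the idiom, with all names hypothetical:

```c
#include <assert.h>

static int steps_done;	/* bitmask of completed steps, for illustration */

int demo_steps(void)
{
	return steps_done;
}

static int step_a(void)		{ steps_done |= 1; return 0; }
static void undo_a(void)	{ steps_done &= ~1; }
static int step_b(int fail)	{ if (fail) return -1; steps_done |= 2; return 0; }

int demo_init(int fail_b)
{
	int ret;

	steps_done = 0;

	ret = step_a();
	if (ret)
		return ret;	/* nothing to unwind yet */

	ret = step_b(fail_b);
	if (ret)
		goto b_failed;

	return 0;

b_failed:
	undo_a();		/* unwind completed steps in reverse order */
	return ret;
}
```

On success `demo_init(0)` leaves both steps recorded; on a step-b failure it returns the error with step a already rolled back, exactly the shape of the `dev_init_fail:`/`disp_init_fail:` chain in `nbl_core_init()`.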
* [PATCH v1 08/17] net/nbl: add complete device init and uninit functionality
2025-06-12 8:58 [PATCH v1 00/17] NBL PMD for Nebulamatrix NICs Kyo Liu
` (6 preceding siblings ...)
2025-06-12 8:58 ` [PATCH v1 07/17] net/nbl: add Dev " Kyo Liu
@ 2025-06-12 8:58 ` Kyo Liu
2025-06-12 8:58 ` [PATCH v1 09/17] net/nbl: add uio and vfio mode for nbl Kyo Liu
` (11 subsequent siblings)
19 siblings, 0 replies; 27+ messages in thread
From: Kyo Liu @ 2025-06-12 8:58 UTC (permalink / raw)
To: kyo.liu, dev; +Cc: Dimon Zhao, Leon Yu, Sam Chen
The NBL device is a low-level device concept used to manage hardware
resources and to interact with firmware.
Signed-off-by: Kyo Liu <kyo.liu@nebula-matrix.com>
---
drivers/net/nbl/nbl_core.c | 8 +-
drivers/net/nbl/nbl_core.h | 7 +
drivers/net/nbl/nbl_dev/nbl_dev.c | 248 +++++++-
drivers/net/nbl/nbl_dev/nbl_dev.h | 32 +
drivers/net/nbl/nbl_dispatch.c | 548 +++++++++++++++---
drivers/net/nbl/nbl_ethdev.c | 26 +
drivers/net/nbl/nbl_hw/nbl_resource.h | 1 +
drivers/net/nbl/nbl_hw/nbl_txrx.c | 30 +-
drivers/net/nbl/nbl_include/nbl_def_channel.h | 51 ++
drivers/net/nbl/nbl_include/nbl_def_common.h | 7 +
.../net/nbl/nbl_include/nbl_def_dispatch.h | 7 +-
.../net/nbl/nbl_include/nbl_def_resource.h | 18 +
drivers/net/nbl/nbl_include/nbl_include.h | 61 ++
13 files changed, 955 insertions(+), 89 deletions(-)
diff --git a/drivers/net/nbl/nbl_core.c b/drivers/net/nbl/nbl_core.c
index 1a6a6bc11d..f4ddc9e219 100644
--- a/drivers/net/nbl/nbl_core.c
+++ b/drivers/net/nbl/nbl_core.c
@@ -20,7 +20,7 @@ static struct nbl_product_core_ops *nbl_core_get_product_ops(enum nbl_product_ty
return &nbl_product_core_ops[product_type];
}
-static void nbl_init_func_caps(struct rte_pci_device *pci_dev, struct nbl_func_caps *caps)
+static void nbl_init_func_caps(const struct rte_pci_device *pci_dev, struct nbl_func_caps *caps)
{
if (pci_dev->id.device_id >= NBL_DEVICE_ID_M18110 &&
pci_dev->id.device_id <= NBL_DEVICE_ID_M18100_VF)
@@ -29,8 +29,8 @@ static void nbl_init_func_caps(struct rte_pci_device *pci_dev, struct nbl_func_c
int nbl_core_init(struct nbl_adapter *adapter, struct rte_eth_dev *eth_dev)
{
- struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct nbl_product_core_ops *product_base_ops = NULL;
+ const struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ const struct nbl_product_core_ops *product_base_ops = NULL;
int ret = 0;
nbl_init_func_caps(pci_dev, &adapter->caps);
@@ -74,7 +74,7 @@ int nbl_core_init(struct nbl_adapter *adapter, struct rte_eth_dev *eth_dev)
void nbl_core_remove(struct nbl_adapter *adapter)
{
- struct nbl_product_core_ops *product_base_ops = NULL;
+ const struct nbl_product_core_ops *product_base_ops = NULL;
product_base_ops = nbl_core_get_product_ops(adapter->caps.product_type);
diff --git a/drivers/net/nbl/nbl_core.h b/drivers/net/nbl/nbl_core.h
index 9a05bbee48..bdf31e15da 100644
--- a/drivers/net/nbl/nbl_core.h
+++ b/drivers/net/nbl/nbl_core.h
@@ -46,6 +46,12 @@
#define NBL_ADAPTER_TO_DISP_OPS_TBL(adapter) ((adapter)->intf.dispatch_ops_tbl)
#define NBL_ADAPTER_TO_DEV_OPS_TBL(adapter) ((adapter)->intf.dev_ops_tbl)
+#define NBL_ADAPTER_TO_COMMON(adapter) (&((adapter)->common))
+
+#define NBL_IS_NOT_COEXISTENCE(common) ({ typeof(common) _common = (common); \
+ _common->nl_socket_route < 0 || \
+ _common->ifindex < 0; })
+
struct nbl_core {
void *phy_mgt;
void *res_mgt;
@@ -80,6 +86,7 @@ struct nbl_adapter {
struct nbl_interface intf;
struct nbl_func_caps caps;
enum nbl_ethdev_state state;
+ struct nbl_common_info common;
};
int nbl_core_init(struct nbl_adapter *adapter, struct rte_eth_dev *eth_dev);
diff --git a/drivers/net/nbl/nbl_dev/nbl_dev.c b/drivers/net/nbl/nbl_dev/nbl_dev.c
index 86006d6762..f02ed7f94e 100644
--- a/drivers/net/nbl/nbl_dev/nbl_dev.c
+++ b/drivers/net/nbl/nbl_dev/nbl_dev.c
@@ -38,7 +38,7 @@ struct nbl_dev_ops dev_ops = {
static int nbl_dev_setup_chan_queue(struct nbl_adapter *adapter)
{
struct nbl_dev_mgt *dev_mgt = NBL_ADAPTER_TO_DEV_MGT(adapter);
- struct nbl_channel_ops *chan_ops = NBL_DEV_MGT_TO_CHAN_OPS(dev_mgt);
+ const struct nbl_channel_ops *chan_ops = NBL_DEV_MGT_TO_CHAN_OPS(dev_mgt);
int ret = 0;
ret = chan_ops->setup_queue(NBL_DEV_MGT_TO_CHAN_PRIV(dev_mgt));
@@ -49,7 +49,7 @@ static int nbl_dev_setup_chan_queue(struct nbl_adapter *adapter)
static int nbl_dev_teardown_chan_queue(struct nbl_adapter *adapter)
{
struct nbl_dev_mgt *dev_mgt = NBL_ADAPTER_TO_DEV_MGT(adapter);
- struct nbl_channel_ops *chan_ops = NBL_DEV_MGT_TO_CHAN_OPS(dev_mgt);
+ const struct nbl_channel_ops *chan_ops = NBL_DEV_MGT_TO_CHAN_OPS(dev_mgt);
int ret = 0;
ret = chan_ops->teardown_queue(NBL_DEV_MGT_TO_CHAN_PRIV(dev_mgt));
@@ -67,15 +67,67 @@ static void nbl_dev_leonis_uninit(void *adapter)
nbl_dev_teardown_chan_queue((struct nbl_adapter *)adapter);
}
+static int nbl_dev_common_start(struct nbl_dev_mgt *dev_mgt)
+{
+ const struct nbl_dispatch_ops *disp_ops = NBL_DEV_MGT_TO_DISP_OPS(dev_mgt);
+ struct nbl_dev_net_mgt *net_dev = dev_mgt->net_dev;
+ struct nbl_common_info *common = dev_mgt->common;
+ struct nbl_board_port_info *board_info;
+ u8 *mac;
+ int ret;
+
+ board_info = &dev_mgt->common->board_info;
+ disp_ops->get_board_info(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt), board_info);
+ mac = net_dev->eth_dev->data->mac_addrs->addr_bytes;
+
+ disp_ops->clear_flow(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt), net_dev->vsi_id);
+
+ if (NBL_IS_NOT_COEXISTENCE(common)) {
+ ret = disp_ops->add_macvlan(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt),
+ mac, 0, net_dev->vsi_id);
+ if (ret)
+ return ret;
+
+ ret = disp_ops->add_multi_rule(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt), net_dev->vsi_id);
+ if (ret)
+ goto add_multi_rule_failed;
+ }
+
+ return 0;
+
+add_multi_rule_failed:
+ disp_ops->del_macvlan(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt), mac, 0, net_dev->vsi_id);
+
+ return ret;
+}
+
static int nbl_dev_leonis_start(void *p)
{
- RTE_SET_USED(p);
+ struct nbl_adapter *adapter = (struct nbl_adapter *)p;
+ struct nbl_dev_mgt *dev_mgt = NBL_ADAPTER_TO_DEV_MGT(adapter);
+ int ret = 0;
+
+ dev_mgt->common = NBL_ADAPTER_TO_COMMON(adapter);
+ ret = nbl_dev_common_start(dev_mgt);
+ if (ret)
+ return ret;
return 0;
}
static void nbl_dev_leonis_stop(void *p)
{
- RTE_SET_USED(p);
+ struct nbl_adapter *adapter = (struct nbl_adapter *)p;
+ struct nbl_dev_mgt *dev_mgt = NBL_ADAPTER_TO_DEV_MGT(adapter);
+ struct nbl_dev_net_mgt *net_dev = dev_mgt->net_dev;
+ const struct nbl_common_info *common = dev_mgt->common;
+ const struct nbl_dispatch_ops *disp_ops = NBL_DEV_MGT_TO_DISP_OPS(dev_mgt);
+ u8 *mac;
+
+ mac = net_dev->eth_dev->data->mac_addrs->addr_bytes;
+ if (NBL_IS_NOT_COEXISTENCE(common)) {
+ disp_ops->del_multi_rule(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt), net_dev->vsi_id);
+ disp_ops->del_macvlan(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt), mac, 0, net_dev->vsi_id);
+ }
}
static void nbl_dev_remove_ops(struct nbl_dev_ops_tbl **dev_ops_tbl)
@@ -97,6 +149,154 @@ static int nbl_dev_setup_ops(struct nbl_dev_ops_tbl **dev_ops_tbl,
return 0;
}
+static int nbl_dev_setup_rings(struct nbl_dev_ring_mgt *ring_mgt)
+{
+ int i;
+ u8 ring_num = ring_mgt->rx_ring_num;
+
+ ring_num = ring_mgt->rx_ring_num;
+ ring_mgt->rx_rings = rte_calloc("nbl_dev_rxring", ring_num,
+ sizeof(*ring_mgt->rx_rings), 0);
+ if (!ring_mgt->rx_rings)
+ return -ENOMEM;
+
+ for (i = 0; i < ring_num; i++)
+ ring_mgt->rx_rings[i].index = i;
+
+ ring_num = ring_mgt->tx_ring_num;
+ ring_mgt->tx_rings = rte_calloc("nbl_dev_txring", ring_num,
+ sizeof(*ring_mgt->tx_rings), 0);
+ if (!ring_mgt->tx_rings) {
+ rte_free(ring_mgt->rx_rings);
+ ring_mgt->rx_rings = NULL;
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < ring_num; i++)
+ ring_mgt->tx_rings[i].index = i;
+
+ return 0;
+}
+
+static void nbl_dev_remove_rings(struct nbl_dev_ring_mgt *ring_mgt)
+{
+ rte_free(ring_mgt->rx_rings);
+ ring_mgt->rx_rings = NULL;
+
+ rte_free(ring_mgt->tx_rings);
+ ring_mgt->tx_rings = NULL;
+}
+
+static void nbl_dev_remove_net_dev(struct nbl_dev_mgt *dev_mgt)
+{
+ struct nbl_dev_net_mgt *net_dev = NBL_DEV_MGT_TO_NET_DEV(dev_mgt);
+ struct nbl_dev_ring_mgt *ring_mgt = &net_dev->ring_mgt;
+ const struct nbl_dispatch_ops *disp_ops = NBL_DEV_MGT_TO_DISP_OPS(dev_mgt);
+
+ disp_ops->remove_rss(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt), net_dev->vsi_id);
+ disp_ops->remove_q2vsi(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt), net_dev->vsi_id);
+ disp_ops->free_txrx_queues(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt), net_dev->vsi_id);
+ disp_ops->remove_rings(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt));
+ nbl_dev_remove_rings(ring_mgt);
+ disp_ops->unregister_net(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt));
+
+ rte_free(net_dev);
+ NBL_DEV_MGT_TO_NET_DEV(dev_mgt) = NULL;
+}
+
+static int nbl_dev_setup_net_dev(struct nbl_dev_mgt *dev_mgt,
+ struct rte_eth_dev *eth_dev,
+ struct nbl_common_info *common)
+{
+ struct nbl_dev_net_mgt *net_dev;
+ const struct nbl_dispatch_ops *disp_ops = NBL_DEV_MGT_TO_DISP_OPS(dev_mgt);
+ struct nbl_register_net_param register_param = { 0 };
+ struct nbl_register_net_result register_result = { 0 };
+ struct nbl_dev_ring_mgt *ring_mgt;
+ const struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ int ret = 0;
+
+ net_dev = rte_zmalloc("nbl_dev_net", sizeof(struct nbl_dev_net_mgt), 0);
+ if (!net_dev)
+ return -ENOMEM;
+
+ NBL_DEV_MGT_TO_NET_DEV(dev_mgt) = net_dev;
+ NBL_DEV_MGT_TO_ETH_DEV(dev_mgt) = eth_dev;
+ ring_mgt = &net_dev->ring_mgt;
+
+ register_param.pf_bar_start = pci_dev->mem_resource[0].phys_addr;
+ ret = disp_ops->register_net(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt),
+ &register_param, &register_result);
+ if (ret)
+ goto register_net_failed;
+
+ ring_mgt->tx_ring_num = register_result.tx_queue_num;
+ ring_mgt->rx_ring_num = register_result.rx_queue_num;
+ ring_mgt->queue_offset = register_result.queue_offset;
+
+ net_dev->vsi_id = disp_ops->get_vsi_id(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt));
+ disp_ops->get_eth_id(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt), net_dev->vsi_id,
+ &net_dev->eth_mode, &net_dev->eth_id);
+ net_dev->trust = register_result.trusted;
+
+ if (net_dev->eth_mode == NBL_TWO_ETHERNET_PORT)
+ net_dev->max_mac_num = NBL_TWO_ETHERNET_MAX_MAC_NUM;
+ else if (net_dev->eth_mode == NBL_FOUR_ETHERNET_PORT)
+ net_dev->max_mac_num = NBL_FOUR_ETHERNET_MAX_MAC_NUM;
+
+ common->vsi_id = net_dev->vsi_id;
+ common->eth_id = net_dev->eth_id;
+
+ disp_ops->clear_queues(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt), net_dev->vsi_id);
+ disp_ops->register_vsi2q(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt), NBL_VSI_DATA, net_dev->vsi_id,
+ register_result.queue_offset, ring_mgt->tx_ring_num);
+ ret = nbl_dev_setup_rings(ring_mgt);
+ if (ret)
+ goto setup_rings_failed;
+
+ ret = disp_ops->alloc_rings(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt),
+ register_result.tx_queue_num,
+ register_result.rx_queue_num,
+ register_result.queue_offset);
+ if (ret) {
+ NBL_LOG(ERR, "alloc_rings failed ret %d", ret);
+ goto alloc_rings_failed;
+ }
+
+ ret = disp_ops->alloc_txrx_queues(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt),
+ net_dev->vsi_id,
+ register_result.tx_queue_num);
+ if (ret) {
+ NBL_LOG(ERR, "alloc_txrx_queues failed ret %d", ret);
+ goto alloc_txrx_queues_failed;
+ }
+
+ ret = disp_ops->setup_q2vsi(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt), net_dev->vsi_id);
+ if (ret) {
+ NBL_LOG(ERR, "setup_q2vsi failed ret %d", ret);
+ goto setup_q2vsi_failed;
+ }
+
+ ret = disp_ops->setup_rss(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt),
+ net_dev->vsi_id);
+
+ return ret;
+
+setup_q2vsi_failed:
+ disp_ops->free_txrx_queues(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt),
+ net_dev->vsi_id);
+alloc_txrx_queues_failed:
+ disp_ops->remove_rings(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt));
+alloc_rings_failed:
+ nbl_dev_remove_rings(ring_mgt);
+setup_rings_failed:
+ disp_ops->unregister_net(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt));
+register_net_failed:
+ rte_free(net_dev);
+
+ return ret;
+}
+
int nbl_dev_init(void *p, __rte_unused struct rte_eth_dev *eth_dev)
{
struct nbl_adapter *adapter = (struct nbl_adapter *)p;
@@ -104,13 +304,16 @@ int nbl_dev_init(void *p, __rte_unused struct rte_eth_dev *eth_dev)
struct nbl_dev_ops_tbl **dev_ops_tbl;
struct nbl_channel_ops_tbl *chan_ops_tbl;
struct nbl_dispatch_ops_tbl *dispatch_ops_tbl;
- struct nbl_product_dev_ops *product_dev_ops = NULL;
+ const struct nbl_product_dev_ops *product_dev_ops = NULL;
+ struct nbl_common_info *common = NULL;
+ const struct nbl_dispatch_ops *disp_ops;
int ret = 0;
dev_mgt = (struct nbl_dev_mgt **)&NBL_ADAPTER_TO_DEV_MGT(adapter);
dev_ops_tbl = &NBL_ADAPTER_TO_DEV_OPS_TBL(adapter);
chan_ops_tbl = NBL_ADAPTER_TO_CHAN_OPS_TBL(adapter);
dispatch_ops_tbl = NBL_ADAPTER_TO_DISP_OPS_TBL(adapter);
+ common = NBL_ADAPTER_TO_COMMON(adapter);
product_dev_ops = nbl_dev_get_product_ops(adapter->caps.product_type);
*dev_mgt = rte_zmalloc("nbl_dev_mgt", sizeof(struct nbl_dev_mgt), 0);
@@ -121,6 +324,7 @@ int nbl_dev_init(void *p, __rte_unused struct rte_eth_dev *eth_dev)
NBL_DEV_MGT_TO_CHAN_OPS_TBL(*dev_mgt) = chan_ops_tbl;
NBL_DEV_MGT_TO_DISP_OPS_TBL(*dev_mgt) = dispatch_ops_tbl;
+ disp_ops = NBL_DEV_MGT_TO_DISP_OPS(*dev_mgt);
if (product_dev_ops->dev_init)
ret = product_dev_ops->dev_init(adapter);
@@ -132,10 +336,28 @@ int nbl_dev_init(void *p, __rte_unused struct rte_eth_dev *eth_dev)
if (ret)
goto set_ops_failed;
+ ret = nbl_dev_setup_net_dev(*dev_mgt, eth_dev, common);
+ if (ret)
+ goto setup_net_dev_failed;
+
+ eth_dev->data->mac_addrs =
+ rte_zmalloc("nbl", RTE_ETHER_ADDR_LEN * (*dev_mgt)->net_dev->max_mac_num, 0);
+ if (!eth_dev->data->mac_addrs) {
+ NBL_LOG(ERR, "failed to allocate memory for MAC addresses");
+ ret = -ENOMEM;
+ goto alloc_mac_addrs_failed;
+ }
+ disp_ops->get_mac_addr(NBL_DEV_MGT_TO_DISP_PRIV(*dev_mgt),
+ eth_dev->data->mac_addrs[0].addr_bytes);
+
adapter->state = NBL_ETHDEV_INITIALIZED;
return 0;
+alloc_mac_addrs_failed:
+ nbl_dev_remove_net_dev(*dev_mgt);
+setup_net_dev_failed:
+ nbl_dev_remove_ops(dev_ops_tbl);
set_ops_failed:
if (product_dev_ops->dev_uninit)
product_dev_ops->dev_uninit(adapter);
@@ -150,12 +372,18 @@ void nbl_dev_remove(void *p)
struct nbl_adapter *adapter = (struct nbl_adapter *)p;
struct nbl_dev_mgt **dev_mgt;
struct nbl_dev_ops_tbl **dev_ops_tbl;
- struct nbl_product_dev_ops *product_dev_ops = NULL;
+ const struct nbl_product_dev_ops *product_dev_ops = NULL;
+ struct rte_eth_dev *eth_dev;
dev_mgt = (struct nbl_dev_mgt **)&NBL_ADAPTER_TO_DEV_MGT(adapter);
dev_ops_tbl = &NBL_ADAPTER_TO_DEV_OPS_TBL(adapter);
product_dev_ops = nbl_dev_get_product_ops(adapter->caps.product_type);
+ eth_dev = (*dev_mgt)->net_dev->eth_dev;
+
+ rte_free(eth_dev->data->mac_addrs);
+ eth_dev->data->mac_addrs = NULL;
+ nbl_dev_remove_net_dev(*dev_mgt);
nbl_dev_remove_ops(dev_ops_tbl);
if (product_dev_ops->dev_uninit)
product_dev_ops->dev_uninit(adapter);
@@ -166,8 +394,8 @@ void nbl_dev_remove(void *p)
void nbl_dev_stop(void *p)
{
- struct nbl_adapter *adapter = (struct nbl_adapter *)p;
- struct nbl_product_dev_ops *product_dev_ops = NULL;
+ const struct nbl_adapter *adapter = (struct nbl_adapter *)p;
+ const struct nbl_product_dev_ops *product_dev_ops = NULL;
product_dev_ops = nbl_dev_get_product_ops(adapter->caps.product_type);
if (product_dev_ops->dev_stop)
@@ -176,8 +404,8 @@ void nbl_dev_stop(void *p)
int nbl_dev_start(void *p)
{
- struct nbl_adapter *adapter = (struct nbl_adapter *)p;
- struct nbl_product_dev_ops *product_dev_ops = NULL;
+ const struct nbl_adapter *adapter = (struct nbl_adapter *)p;
+ const struct nbl_product_dev_ops *product_dev_ops = NULL;
product_dev_ops = nbl_dev_get_product_ops(adapter->caps.product_type);
if (product_dev_ops->dev_start)
diff --git a/drivers/net/nbl/nbl_dev/nbl_dev.h b/drivers/net/nbl/nbl_dev/nbl_dev.h
index ccc9c02531..44deea3f3b 100644
--- a/drivers/net/nbl/nbl_dev/nbl_dev.h
+++ b/drivers/net/nbl/nbl_dev/nbl_dev.h
@@ -13,10 +13,42 @@
#define NBL_DEV_MGT_TO_CHAN_OPS_TBL(dev_mgt) ((dev_mgt)->chan_ops_tbl)
#define NBL_DEV_MGT_TO_CHAN_OPS(dev_mgt) (NBL_DEV_MGT_TO_CHAN_OPS_TBL(dev_mgt)->ops)
#define NBL_DEV_MGT_TO_CHAN_PRIV(dev_mgt) (NBL_DEV_MGT_TO_CHAN_OPS_TBL(dev_mgt)->priv)
+#define NBL_DEV_MGT_TO_NET_DEV(dev_mgt) ((dev_mgt)->net_dev)
+#define NBL_DEV_MGT_TO_ETH_DEV(dev_mgt) ((dev_mgt)->net_dev->eth_dev)
+#define NBL_DEV_MGT_TO_COMMON(dev_mgt) ((dev_mgt)->common)
+
+struct nbl_dev_ring {
+ u16 index;
+ u64 dma;
+ u16 local_queue_id;
+ u16 global_queue_id;
+ u32 desc_num;
+};
+
+struct nbl_dev_ring_mgt {
+ struct nbl_dev_ring *tx_rings;
+ struct nbl_dev_ring *rx_rings;
+ u16 queue_offset;
+ u8 tx_ring_num;
+ u8 rx_ring_num;
+ u8 active_ring_num;
+};
+
+struct nbl_dev_net_mgt {
+ struct rte_eth_dev *eth_dev;
+ struct nbl_dev_ring_mgt ring_mgt;
+ u16 vsi_id;
+ u8 eth_mode;
+ u8 eth_id;
+ u16 max_mac_num;
+ bool trust;
+};
struct nbl_dev_mgt {
struct nbl_dispatch_ops_tbl *disp_ops_tbl;
struct nbl_channel_ops_tbl *chan_ops_tbl;
+ struct nbl_dev_net_mgt *net_dev;
+ struct nbl_common_info *common;
};
struct nbl_product_dev_ops *nbl_dev_get_product_ops(enum nbl_product_type product_type);
diff --git a/drivers/net/nbl/nbl_dispatch.c b/drivers/net/nbl/nbl_dispatch.c
index ffeeba3048..4265e5309c 100644
--- a/drivers/net/nbl/nbl_dispatch.c
+++ b/drivers/net/nbl/nbl_dispatch.c
@@ -7,24 +7,21 @@
static int nbl_disp_alloc_txrx_queues(void *priv, u16 vsi_id, u16 queue_num)
{
struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
- struct nbl_resource_ops *res_ops;
+ const struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
- res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
- return res_ops->alloc_txrx_queues(NBL_DISP_MGT_TO_RES_PRIV(disp_mgt),
- vsi_id, queue_num);
+ return NBL_OPS_CALL(res_ops->alloc_txrx_queues,
+ (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt), vsi_id, queue_num));
}
static int nbl_disp_chan_alloc_txrx_queues_req(void *priv, u16 vsi_id,
u16 queue_num)
{
struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
- struct nbl_channel_ops *chan_ops;
+ const struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
struct nbl_chan_param_alloc_txrx_queues param = {0};
struct nbl_chan_param_alloc_txrx_queues result = {0};
struct nbl_chan_send_info chan_send;
- chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
-
param.vsi_id = vsi_id;
param.queue_num = queue_num;
@@ -38,21 +35,18 @@ static int nbl_disp_chan_alloc_txrx_queues_req(void *priv, u16 vsi_id,
static void nbl_disp_free_txrx_queues(void *priv, u16 vsi_id)
{
struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
- struct nbl_resource_ops *res_ops;
+ const struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
- res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
- res_ops->free_txrx_queues(NBL_DISP_MGT_TO_RES_PRIV(disp_mgt), vsi_id);
+ NBL_OPS_CALL(res_ops->free_txrx_queues, (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt), vsi_id));
}
static void nbl_disp_chan_free_txrx_queues_req(void *priv, u16 vsi_id)
{
struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
- struct nbl_channel_ops *chan_ops;
+ const struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
struct nbl_chan_param_free_txrx_queues param = {0};
struct nbl_chan_send_info chan_send;
- chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
-
param.vsi_id = vsi_id;
NBL_CHAN_SEND(chan_send, 0, NBL_CHAN_MSG_FREE_TXRX_QUEUES, &param,
@@ -63,7 +57,7 @@ static void nbl_disp_chan_free_txrx_queues_req(void *priv, u16 vsi_id)
static void nbl_disp_clear_queues(void *priv, u16 vsi_id)
{
struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
- struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ const struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
NBL_OPS_CALL(res_ops->clear_queues, (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt), vsi_id));
}
@@ -71,7 +65,7 @@ static void nbl_disp_clear_queues(void *priv, u16 vsi_id)
static void nbl_disp_chan_clear_queues_req(void *priv, u16 vsi_id)
{
struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
- struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ const struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
struct nbl_chan_send_info chan_send = {0};
NBL_CHAN_SEND(chan_send, 0, NBL_CHAN_MSG_CLEAR_QUEUE, &vsi_id, sizeof(vsi_id),
@@ -84,31 +78,26 @@ static int nbl_disp_start_tx_ring(void *priv,
u64 *dma_addr)
{
struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
- struct nbl_resource_ops *res_ops;
+ const struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
- res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
- return res_ops->start_tx_ring(NBL_DISP_MGT_TO_RES_PRIV(disp_mgt),
- param, dma_addr);
+ return NBL_OPS_CALL(res_ops->start_tx_ring,
+ (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt), param, dma_addr));
}
static void nbl_disp_release_tx_ring(void *priv, u16 queue_idx)
{
struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
- struct nbl_resource_ops *res_ops;
+ const struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
- res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
- return res_ops->release_tx_ring(NBL_DISP_MGT_TO_RES_PRIV(disp_mgt),
- queue_idx);
+ NBL_OPS_CALL(res_ops->release_tx_ring, (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt), queue_idx));
}
static void nbl_disp_stop_tx_ring(void *priv, u16 queue_idx)
{
struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
- struct nbl_resource_ops *res_ops;
+ const struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
- res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
- return res_ops->stop_tx_ring(NBL_DISP_MGT_TO_RES_PRIV(disp_mgt),
- queue_idx);
+ NBL_OPS_CALL(res_ops->stop_tx_ring, (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt), queue_idx));
}
static int nbl_disp_start_rx_ring(void *priv,
@@ -116,80 +105,72 @@ static int nbl_disp_start_rx_ring(void *priv,
u64 *dma_addr)
{
struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
- struct nbl_resource_ops *res_ops;
+ const struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
- res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
- return res_ops->start_rx_ring(NBL_DISP_MGT_TO_RES_PRIV(disp_mgt),
- param, dma_addr);
+ return NBL_OPS_CALL(res_ops->start_rx_ring,
+ (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt), param, dma_addr));
}
static int nbl_disp_alloc_rx_bufs(void *priv, u16 queue_idx)
{
struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
- struct nbl_resource_ops *res_ops;
+ const struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
- res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
- return res_ops->alloc_rx_bufs(NBL_DISP_MGT_TO_RES_PRIV(disp_mgt),
- queue_idx);
+ return NBL_OPS_CALL(res_ops->alloc_rx_bufs,
+ (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt), queue_idx));
}
static void nbl_disp_release_rx_ring(void *priv, u16 queue_idx)
{
struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
- struct nbl_resource_ops *res_ops;
+ const struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
- res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
- return res_ops->release_rx_ring(NBL_DISP_MGT_TO_RES_PRIV(disp_mgt),
- queue_idx);
+ return NBL_OPS_CALL(res_ops->release_rx_ring,
+ (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt), queue_idx));
}
static void nbl_disp_stop_rx_ring(void *priv, u16 queue_idx)
{
struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
- struct nbl_resource_ops *res_ops;
+ const struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
- res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
- return res_ops->stop_rx_ring(NBL_DISP_MGT_TO_RES_PRIV(disp_mgt),
- queue_idx);
+ return NBL_OPS_CALL(res_ops->stop_rx_ring,
+ (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt), queue_idx));
}
static void nbl_disp_update_rx_ring(void *priv, u16 index)
{
struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
- struct nbl_resource_ops *res_ops;
+ const struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
- res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
- res_ops->update_rx_ring(NBL_DISP_MGT_TO_RES_PRIV(disp_mgt), index);
+ NBL_OPS_CALL(res_ops->update_rx_ring, (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt), index));
}
static int nbl_disp_alloc_rings(void *priv, u16 tx_num, u16 rx_num, u16 queue_offset)
{
struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
- struct nbl_resource_ops *res_ops;
+ const struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
- res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
- return res_ops->alloc_rings(NBL_DISP_MGT_TO_RES_PRIV(disp_mgt),
- tx_num, rx_num, queue_offset);
+ return NBL_OPS_CALL(res_ops->alloc_rings,
+ (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt), tx_num, rx_num, queue_offset));
}
static void nbl_disp_remove_rings(void *priv)
{
struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
- struct nbl_resource_ops *res_ops;
+ const struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
- res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
- res_ops->remove_rings(NBL_DISP_MGT_TO_RES_PRIV(disp_mgt));
+ NBL_OPS_CALL(res_ops->remove_rings, (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt)));
}
static int
nbl_disp_setup_queue(void *priv, struct nbl_txrx_queue_param *param, bool is_tx)
{
struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
- struct nbl_resource_ops *res_ops;
+ const struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
- res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
- return res_ops->setup_queue(NBL_DISP_MGT_TO_RES_PRIV(disp_mgt),
- param, is_tx);
+ return NBL_OPS_CALL(res_ops->setup_queue,
+ (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt), param, is_tx));
}
static int
@@ -198,12 +179,10 @@ nbl_disp_chan_setup_queue_req(void *priv,
bool is_tx)
{
struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
- struct nbl_channel_ops *chan_ops;
+ const struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
struct nbl_chan_param_setup_queue param = {0};
struct nbl_chan_send_info chan_send;
- chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
-
memcpy(&param.queue_param, queue_param, sizeof(param.queue_param));
param.is_tx = is_tx;
@@ -215,21 +194,18 @@ nbl_disp_chan_setup_queue_req(void *priv,
static void nbl_disp_remove_all_queues(void *priv, u16 vsi_id)
{
struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
- struct nbl_resource_ops *res_ops;
+ const struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
- res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
- res_ops->remove_all_queues(NBL_DISP_MGT_TO_RES_PRIV(disp_mgt), vsi_id);
+ NBL_OPS_CALL(res_ops->remove_all_queues, (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt), vsi_id));
}
static void nbl_disp_chan_remove_all_queues_req(void *priv, u16 vsi_id)
{
struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
- struct nbl_channel_ops *chan_ops;
+ const struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
struct nbl_chan_param_remove_all_queues param = {0};
struct nbl_chan_send_info chan_send;
- chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
-
param.vsi_id = vsi_id;
NBL_CHAN_SEND(chan_send, 0, NBL_CHAN_MSG_REMOVE_ALL_QUEUES,
@@ -237,6 +213,382 @@ static void nbl_disp_chan_remove_all_queues_req(void *priv, u16 vsi_id)
chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send);
}
+static int nbl_disp_get_mac_addr(void *priv __rte_unused, u8 *mac)
+{
+ rte_eth_random_addr(mac);
+
+ return 0;
+}
+
+static int nbl_disp_get_mac_addr_req(void *priv __rte_unused, u8 *mac)
+{
+ rte_eth_random_addr(mac);
+
+ return 0;
+}
+
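Both the local and the request path currently fall back to a random address via `rte_eth_random_addr()`. The two flag bits that make such an address safe to fabricate can be sketched without DPDK; this mirrors the documented behavior of `rte_eth_random_addr()` (the helper name here is hypothetical):

```c
#include <stdint.h>
#include <stdlib.h>

/* Fill a 6-byte MAC with random octets, then force the two flag bits
 * that rte_eth_random_addr() is documented to set: clear bit 0 of the
 * first octet (unicast, not multicast) and set bit 1 (locally
 * administered, so it cannot collide with a vendor-assigned OUI). */
static void random_laa_mac(uint8_t mac[6])
{
	int i;

	for (i = 0; i < 6; i++)
		mac[i] = (uint8_t)rand();
	mac[0] &= 0xfe; /* unicast */
	mac[0] |= 0x02; /* locally administered */
}
```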
+static int nbl_disp_register_net(void *priv,
+ struct nbl_register_net_param *register_param,
+ struct nbl_register_net_result *register_result)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ const struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+
+ return NBL_OPS_CALL(res_ops->register_net,
+ (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt), register_param, register_result));
+}
+
+static int nbl_disp_chan_register_net_req(void *priv,
+ struct nbl_register_net_param *register_param,
+ struct nbl_register_net_result *register_result)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ const struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ struct nbl_chan_param_register_net_info param = {0};
+ struct nbl_chan_send_info chan_send;
+ int ret = 0;
+
+ param.pf_bar_start = register_param->pf_bar_start;
+ param.pf_bdf = register_param->pf_bdf;
+ param.vf_bar_start = register_param->vf_bar_start;
+ param.vf_bar_size = register_param->vf_bar_size;
+ param.total_vfs = register_param->total_vfs;
+ param.offset = register_param->offset;
+ param.stride = register_param->stride;
+
+ NBL_CHAN_SEND(chan_send, 0, NBL_CHAN_MSG_REGISTER_NET,
+ &param, sizeof(param),
+ (void *)register_result, sizeof(*register_result), 1);
+
+ ret = chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send);
+ return ret;
+}
+
+static int nbl_disp_unregister_net(void *priv)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ const struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+
+ return NBL_OPS_CALL(res_ops->unregister_net, (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt)));
+}
+
+static int nbl_disp_chan_unregister_net_req(void *priv)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ const struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ struct nbl_chan_send_info chan_send;
+
+ NBL_CHAN_SEND(chan_send, 0, NBL_CHAN_MSG_UNREGISTER_NET, NULL,
+ 0, NULL, 0, 1);
+
+ return chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt),
+ &chan_send);
+}
+
+static u16 nbl_disp_get_vsi_id(void *priv)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ const struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+
+ return NBL_OPS_CALL(res_ops->get_vsi_id, (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt)));
+}
+
+static u16 nbl_disp_chan_get_vsi_id_req(void *priv)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ const struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ struct nbl_chan_param_get_vsi_id param = {0};
+ struct nbl_chan_param_get_vsi_id result = {0};
+ struct nbl_chan_send_info chan_send;
+
+ NBL_CHAN_SEND(chan_send, 0, NBL_CHAN_MSG_GET_VSI_ID, &param,
+ sizeof(param), &result, sizeof(result), 1);
+ chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send);
+
+ return result.vsi_id;
+}
+
+static void nbl_disp_get_eth_id(void *priv, u16 vsi_id, u8 *eth_mode, u8 *eth_id)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ const struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+
+ NBL_OPS_CALL(res_ops->get_eth_id, (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt),
+ vsi_id, eth_mode, eth_id));
+}
+
+static void nbl_disp_chan_get_eth_id_req(void *priv, u16 vsi_id, u8 *eth_mode, u8 *eth_id)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ const struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ struct nbl_chan_param_get_eth_id param = {0};
+ struct nbl_chan_param_get_eth_id result = {0};
+ struct nbl_chan_send_info chan_send;
+
+ param.vsi_id = vsi_id;
+
+ NBL_CHAN_SEND(chan_send, 0, NBL_CHAN_MSG_GET_ETH_ID, &param, sizeof(param),
+ &result, sizeof(result), 1);
+ chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send);
+
+ *eth_mode = result.eth_mode;
+ *eth_id = result.eth_id;
+}
+
+static int nbl_disp_chan_setup_q2vsi(void *priv, u16 vsi_id)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ const struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+
+ return NBL_OPS_CALL(res_ops->setup_q2vsi,
+ (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt), vsi_id));
+}
+
+static int nbl_disp_chan_setup_q2vsi_req(void *priv, u16 vsi_id)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ const struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ struct nbl_chan_param_cfg_q2vsi param = {0};
+ struct nbl_chan_send_info chan_send;
+
+ param.vsi_id = vsi_id;
+
+ NBL_CHAN_SEND(chan_send, 0, NBL_CHAN_MSG_SETUP_Q2VSI, &param,
+ sizeof(param), NULL, 0, 1);
+ return chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send);
+}
+
+static void nbl_disp_chan_remove_q2vsi(void *priv, u16 vsi_id)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ const struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+
+ NBL_OPS_CALL(res_ops->remove_q2vsi,
+ (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt), vsi_id));
+}
+
+static void nbl_disp_chan_remove_q2vsi_req(void *priv, u16 vsi_id)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ const struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ struct nbl_chan_param_cfg_q2vsi param = {0};
+ struct nbl_chan_send_info chan_send;
+
+ param.vsi_id = vsi_id;
+
+ NBL_CHAN_SEND(chan_send, 0, NBL_CHAN_MSG_REMOVE_Q2VSI, &param,
+ sizeof(param), NULL, 0, 1);
+ chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send);
+}
+
+static int nbl_disp_chan_register_vsi2q(void *priv, u16 vsi_index, u16 vsi_id,
+ u16 queue_offset, u16 queue_num)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ const struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+
+ return NBL_OPS_CALL(res_ops->register_vsi2q,
+ (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt), vsi_index,
+ vsi_id, queue_offset, queue_num));
+}
+
+static int nbl_disp_chan_register_vsi2q_req(void *priv, u16 vsi_index, u16 vsi_id,
+ u16 queue_offset, u16 queue_num)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ const struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ struct nbl_chan_param_register_vsi2q param = {0};
+ struct nbl_chan_send_info chan_send;
+
+ param.vsi_index = vsi_index;
+ param.vsi_id = vsi_id;
+ param.queue_offset = queue_offset;
+ param.queue_num = queue_num;
+
+ NBL_CHAN_SEND(chan_send, 0, NBL_CHAN_MSG_REGISTER_VSI2Q, &param, sizeof(param),
+ NULL, 0, 1);
+
+ return chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send);
+}
+
+static int nbl_disp_chan_setup_rss(void *priv, u16 vsi_id)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ const struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+
+ return NBL_OPS_CALL(res_ops->setup_rss,
+ (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt), vsi_id));
+}
+
+static int nbl_disp_chan_setup_rss_req(void *priv, u16 vsi_id)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ const struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ struct nbl_chan_param_cfg_rss param = {0};
+ struct nbl_chan_send_info chan_send;
+
+ param.vsi_id = vsi_id;
+
+ NBL_CHAN_SEND(chan_send, 0, NBL_CHAN_MSG_SETUP_RSS, &param,
+ sizeof(param), NULL, 0, 1);
+ return chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send);
+}
+
+static void nbl_disp_chan_remove_rss(void *priv, u16 vsi_id)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ const struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+
+ NBL_OPS_CALL(res_ops->remove_rss,
+ (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt), vsi_id));
+}
+
+static void nbl_disp_chan_remove_rss_req(void *priv, u16 vsi_id)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ const struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ struct nbl_chan_param_cfg_rss param = {0};
+ struct nbl_chan_send_info chan_send;
+
+ param.vsi_id = vsi_id;
+
+ NBL_CHAN_SEND(chan_send, 0, NBL_CHAN_MSG_REMOVE_RSS, &param,
+ sizeof(param), NULL, 0, 1);
+ chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send);
+}
+
+static void nbl_disp_chan_get_board_info(void *priv, struct nbl_board_port_info *board_info)
+{
+ RTE_SET_USED(priv);
+ RTE_SET_USED(board_info);
+}
+
+static void nbl_disp_chan_get_board_info_req(void *priv, struct nbl_board_port_info *board_info)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ const struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ struct nbl_chan_send_info chan_send;
+
+ NBL_CHAN_SEND(chan_send, 0, NBL_CHAN_MSG_GET_BOARD_INFO, NULL,
+ 0, board_info, sizeof(*board_info), 1);
+ chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send);
+}
+
+static void nbl_disp_clear_flow(void *priv, u16 vsi_id)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ const struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+
+ NBL_OPS_CALL(res_ops->clear_flow, (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt), vsi_id));
+}
+
+static void nbl_disp_chan_clear_flow_req(void *priv, u16 vsi_id)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ const struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ struct nbl_chan_send_info chan_send = {0};
+
+ NBL_CHAN_SEND(chan_send, 0, NBL_CHAN_MSG_CLEAR_FLOW, &vsi_id, sizeof(vsi_id),
+ NULL, 0, 1);
+ chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send);
+}
+
+static int nbl_disp_add_macvlan(void *priv, u8 *mac, u16 vlan_id, u16 vsi_id)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ const struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+
+ return NBL_OPS_CALL(res_ops->add_macvlan,
+ (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt), mac, vlan_id, vsi_id));
+}
+
+static int
+nbl_disp_chan_add_macvlan_req(void *priv, u8 *mac, u16 vlan_id, u16 vsi_id)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ const struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ struct nbl_chan_send_info chan_send = {0};
+ struct nbl_chan_param_macvlan_cfg param = {0};
+
+ rte_memcpy(&param.mac, mac, sizeof(param.mac));
+ param.vlan = vlan_id;
+ param.vsi = vsi_id;
+
+ NBL_CHAN_SEND(chan_send, 0, NBL_CHAN_MSG_ADD_MACVLAN,
+ &param, sizeof(param), NULL, 0, 1);
+ return chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send);
+}
+
+static void nbl_disp_del_macvlan(void *priv, u8 *mac, u16 vlan_id, u16 vsi_id)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ const struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+
+ NBL_OPS_CALL(res_ops->del_macvlan,
+ (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt), mac, vlan_id, vsi_id));
+}
+
+static void
+nbl_disp_chan_del_macvlan_req(void *priv, u8 *mac, u16 vlan_id, u16 vsi_id)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ const struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ struct nbl_chan_send_info chan_send = {0};
+ struct nbl_chan_param_macvlan_cfg param = {0};
+
+ rte_memcpy(&param.mac, mac, sizeof(param.mac));
+ param.vlan = vlan_id;
+ param.vsi = vsi_id;
+
+ NBL_CHAN_SEND(chan_send, 0, NBL_CHAN_MSG_DEL_MACVLAN,
+ &param, sizeof(param), NULL, 0, 1);
+ chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send);
+}
+
+static int nbl_disp_add_multi_rule(void *priv, u16 vsi_id)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ const struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+
+ return NBL_OPS_CALL(res_ops->add_multi_rule, (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt), vsi_id));
+}
+
+static int nbl_disp_chan_add_multi_rule_req(void *priv, u16 vsi_id)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ const struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ struct nbl_chan_param_add_multi_rule param = {0};
+ struct nbl_chan_send_info chan_send;
+
+ param.vsi = vsi_id;
+
+ NBL_CHAN_SEND(chan_send, 0, NBL_CHAN_MSG_ADD_MULTI_RULE,
+ &param, sizeof(param), NULL, 0, 1);
+ return chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send);
+}
+
+static void nbl_disp_del_multi_rule(void *priv, u16 vsi_id)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ const struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+
+ NBL_OPS_CALL(res_ops->del_multi_rule, (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt), vsi_id));
+}
+
+static void nbl_disp_chan_del_multi_rule_req(void *priv, u16 vsi)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ const struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ struct nbl_chan_param_del_multi_rule param = {0};
+ struct nbl_chan_send_info chan_send;
+
+ param.vsi = vsi;
+
+ NBL_CHAN_SEND(chan_send, 0, NBL_CHAN_MSG_DEL_MULTI_RULE,
+ &param, sizeof(param), NULL, 0, 1);
+ chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send);
+}
+
#define NBL_DISP_OPS_TBL \
do { \
NBL_DISP_SET_OPS(alloc_txrx_queues, nbl_disp_alloc_txrx_queues, \
@@ -284,16 +636,74 @@ do { \
NBL_DISP_CTRL_LVL_MGT, \
NBL_CHAN_MSG_CLEAR_QUEUE, \
nbl_disp_chan_clear_queues_req, NULL); \
+ NBL_DISP_SET_OPS(get_mac_addr, nbl_disp_get_mac_addr, \
+ NBL_DISP_CTRL_LVL_MGT, \
+ -1, nbl_disp_get_mac_addr_req, NULL); \
+ NBL_DISP_SET_OPS(register_net, nbl_disp_register_net, \
+ NBL_DISP_CTRL_LVL_MGT, \
+ NBL_CHAN_MSG_REGISTER_NET, \
+ nbl_disp_chan_register_net_req, \
+ NULL); \
+ NBL_DISP_SET_OPS(unregister_net, nbl_disp_unregister_net, \
+ NBL_DISP_CTRL_LVL_MGT, \
+ NBL_CHAN_MSG_UNREGISTER_NET, \
+ nbl_disp_chan_unregister_net_req, NULL); \
+ NBL_DISP_SET_OPS(get_vsi_id, nbl_disp_get_vsi_id, \
+ NBL_DISP_CTRL_LVL_MGT, NBL_CHAN_MSG_GET_VSI_ID,\
+ nbl_disp_chan_get_vsi_id_req, NULL); \
+ NBL_DISP_SET_OPS(get_eth_id, nbl_disp_get_eth_id, \
+ NBL_DISP_CTRL_LVL_MGT, NBL_CHAN_MSG_GET_ETH_ID,\
+ nbl_disp_chan_get_eth_id_req, NULL); \
+ NBL_DISP_SET_OPS(setup_q2vsi, nbl_disp_chan_setup_q2vsi, \
+ NBL_DISP_CTRL_LVL_MGT, \
+ NBL_CHAN_MSG_SETUP_Q2VSI, \
+ nbl_disp_chan_setup_q2vsi_req, NULL); \
+ NBL_DISP_SET_OPS(remove_q2vsi, nbl_disp_chan_remove_q2vsi, \
+ NBL_DISP_CTRL_LVL_MGT, \
+ NBL_CHAN_MSG_REMOVE_Q2VSI, \
+ nbl_disp_chan_remove_q2vsi_req, NULL); \
+ NBL_DISP_SET_OPS(register_vsi2q, nbl_disp_chan_register_vsi2q, \
+ NBL_DISP_CTRL_LVL_MGT, \
+ NBL_CHAN_MSG_REGISTER_VSI2Q, \
+ nbl_disp_chan_register_vsi2q_req, NULL); \
+ NBL_DISP_SET_OPS(setup_rss, nbl_disp_chan_setup_rss, \
+ NBL_DISP_CTRL_LVL_MGT, NBL_CHAN_MSG_SETUP_RSS, \
+ nbl_disp_chan_setup_rss_req, NULL); \
+ NBL_DISP_SET_OPS(remove_rss, nbl_disp_chan_remove_rss, \
+ NBL_DISP_CTRL_LVL_MGT, NBL_CHAN_MSG_REMOVE_RSS,\
+ nbl_disp_chan_remove_rss_req, NULL); \
+ NBL_DISP_SET_OPS(get_board_info, nbl_disp_chan_get_board_info, \
+ NBL_DISP_CTRL_LVL_MGT, \
+ NBL_CHAN_MSG_GET_BOARD_INFO, \
+ nbl_disp_chan_get_board_info_req, NULL); \
+ NBL_DISP_SET_OPS(clear_flow, nbl_disp_clear_flow, \
+ NBL_DISP_CTRL_LVL_MGT, \
+ NBL_CHAN_MSG_CLEAR_FLOW, \
+ nbl_disp_chan_clear_flow_req, NULL); \
+ NBL_DISP_SET_OPS(add_macvlan, nbl_disp_add_macvlan, \
+ NBL_DISP_CTRL_LVL_MGT, \
+ NBL_CHAN_MSG_ADD_MACVLAN, \
+ nbl_disp_chan_add_macvlan_req, NULL); \
+ NBL_DISP_SET_OPS(del_macvlan, nbl_disp_del_macvlan, \
+ NBL_DISP_CTRL_LVL_MGT, \
+ NBL_CHAN_MSG_DEL_MACVLAN, \
+ nbl_disp_chan_del_macvlan_req, NULL); \
+ NBL_DISP_SET_OPS(add_multi_rule, nbl_disp_add_multi_rule, \
+ NBL_DISP_CTRL_LVL_MGT, \
+ NBL_CHAN_MSG_ADD_MULTI_RULE, \
+ nbl_disp_chan_add_multi_rule_req, NULL); \
+ NBL_DISP_SET_OPS(del_multi_rule, nbl_disp_del_multi_rule, \
+ NBL_DISP_CTRL_LVL_MGT, \
+ NBL_CHAN_MSG_DEL_MULTI_RULE, \
+ nbl_disp_chan_del_multi_rule_req, NULL); \
} while (0)
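The `NBL_DISP_OPS_TBL` / `NBL_DISP_SET_OPS` pair is an X-macro: the table names every op exactly once, and each consumer redefines `NBL_DISP_SET_OPS` before expanding it to decide what "installing an op" means. A stripped-down sketch of the pattern, with hypothetical op names rather than the driver's:

```c
static int do_add(int a, int b) { return a + b; }
static int do_sub(int a, int b) { return a - b; }

/* The table lists every op once; SET_OP is left undefined here. */
#define OPS_TBL \
do { \
	SET_OP(add, do_add); \
	SET_OP(sub, do_sub); \
} while (0)

struct ops {
	int (*add)(int, int);
	int (*sub)(int, int);
};

/* One consumer: define SET_OP as "store the function pointer",
 * expand the table, then undefine it so other consumers can give
 * the same table a different meaning. */
static void fill_ops(struct ops *t)
{
#define SET_OP(name, fn) do { t->name = fn; } while (0)
	OPS_TBL;
#undef SET_OP
}
```

Adding an op then means touching only the table and the ops struct, which is why the comment after the table warns against modifying anything below it.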
/* Structure starts here, adding an op should not modify anything below */
static int nbl_disp_setup_msg(struct nbl_dispatch_mgt *disp_mgt)
{
- struct nbl_channel_ops *chan_ops;
+ const struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
int ret = 0;
- chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
-
#define NBL_DISP_SET_OPS(disp_op, res_func, ctrl_lvl2, msg_type, msg_req, msg_resp) \
do { \
typeof(msg_type) _msg_type = (msg_type); \
diff --git a/drivers/net/nbl/nbl_ethdev.c b/drivers/net/nbl/nbl_ethdev.c
index 261f8a522a..90b1487567 100644
--- a/drivers/net/nbl/nbl_ethdev.c
+++ b/drivers/net/nbl/nbl_ethdev.c
@@ -31,6 +31,31 @@ struct eth_dev_ops nbl_eth_dev_ops = {
.dev_close = nbl_dev_close,
};
+#define NBL_DEV_NET_OPS_TBL \
+do { \
+ NBL_DEV_NET_OPS(dev_configure, dev_ops->dev_configure);\
+ NBL_DEV_NET_OPS(dev_start, dev_ops->dev_start); \
+ NBL_DEV_NET_OPS(dev_stop, dev_ops->dev_stop); \
+} while (0)
+
+static void nbl_set_eth_dev_ops(struct nbl_adapter *adapter,
+ struct eth_dev_ops *nbl_eth_dev_ops)
+{
+ struct nbl_dev_ops_tbl *dev_ops_tbl;
+ struct nbl_dev_ops *dev_ops;
+ static bool inited;
+
+ if (!inited) {
+ dev_ops_tbl = NBL_ADAPTER_TO_DEV_OPS_TBL(adapter);
+ dev_ops = NBL_DEV_OPS_TBL_TO_OPS(dev_ops_tbl);
+#define NBL_DEV_NET_OPS(ops, func) \
+ do { nbl_eth_dev_ops->NBL_NAME(ops) = func; } while (0)
+ NBL_DEV_NET_OPS_TBL;
+#undef NBL_DEV_NET_OPS
+ inited = true;
+ }
+}
+
static int nbl_eth_dev_init(struct rte_eth_dev *eth_dev)
{
struct nbl_adapter *adapter = ETH_DEV_TO_NBL_DEV_PF_PRIV(eth_dev);
@@ -50,6 +75,7 @@ static int nbl_eth_dev_init(struct rte_eth_dev *eth_dev)
goto eth_init_failed;
}
+ nbl_set_eth_dev_ops(adapter, &nbl_eth_dev_ops);
eth_dev->dev_ops = &nbl_eth_dev_ops;
return 0;
diff --git a/drivers/net/nbl/nbl_hw/nbl_resource.h b/drivers/net/nbl/nbl_hw/nbl_resource.h
index 2ea79563cc..07e6327259 100644
--- a/drivers/net/nbl/nbl_hw/nbl_resource.h
+++ b/drivers/net/nbl/nbl_hw/nbl_resource.h
@@ -28,6 +28,7 @@ struct nbl_txrx_mgt {
rte_spinlock_t tx_lock;
struct nbl_res_tx_ring **tx_rings;
struct nbl_res_rx_ring **rx_rings;
+ u16 queue_offset;
u8 tx_ring_num;
u8 rx_ring_num;
};
diff --git a/drivers/net/nbl/nbl_hw/nbl_txrx.c b/drivers/net/nbl/nbl_hw/nbl_txrx.c
index 0df204e425..eaa7e4c69d 100644
--- a/drivers/net/nbl/nbl_hw/nbl_txrx.c
+++ b/drivers/net/nbl/nbl_hw/nbl_txrx.c
@@ -7,16 +7,36 @@
static int nbl_res_txrx_alloc_rings(void *priv, u16 tx_num, u16 rx_num, u16 queue_offset)
{
- RTE_SET_USED(priv);
- RTE_SET_USED(tx_num);
- RTE_SET_USED(rx_num);
- RTE_SET_USED(queue_offset);
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+ struct nbl_txrx_mgt *txrx_mgt = res_mgt->txrx_mgt;
+
+ txrx_mgt->tx_rings = rte_calloc("nbl_txrings", tx_num,
+ sizeof(struct nbl_res_tx_ring *), 0);
+ if (!txrx_mgt->tx_rings) {
+ NBL_LOG(ERR, "Allocate the tx rings array failed");
+ return -ENOMEM;
+ }
+
+ txrx_mgt->rx_rings = rte_calloc("nbl_rxrings", rx_num,
+ sizeof(struct nbl_res_rx_ring *), 0);
+ if (!txrx_mgt->rx_rings) {
+ NBL_LOG(ERR, "Allocate the rx rings array failed");
+ rte_free(txrx_mgt->tx_rings);
+ return -ENOMEM;
+ }
+
+ txrx_mgt->queue_offset = queue_offset;
+
return 0;
}
static void nbl_res_txrx_remove_rings(void *priv)
{
- RTE_SET_USED(priv);
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+ struct nbl_txrx_mgt *txrx_mgt = res_mgt->txrx_mgt;
+
+ rte_free(txrx_mgt->tx_rings);
+ rte_free(txrx_mgt->rx_rings);
}
static int nbl_res_txrx_start_tx_ring(void *priv,
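The corrected alloc path above must test the array it just allocated (rx_rings, not tx_rings) and roll back the first allocation on failure of the second. A hedged standalone sketch of that paired-allocation pattern, with libc calloc standing in for rte_calloc (all names invented):

```c
#include <assert.h>
#include <stdlib.h>

struct ring;	/* opaque, stands in for nbl_res_tx_ring / nbl_res_rx_ring */

struct ring_mgt {
	struct ring **tx_rings;
	struct ring **rx_rings;
	unsigned int queue_offset;
};

/* Allocate both ring-pointer arrays; if the second allocation fails,
 * free the first so the manager is left in a clean state. */
static int alloc_rings(struct ring_mgt *mgt, size_t tx_num, size_t rx_num,
		       unsigned int queue_offset)
{
	mgt->tx_rings = calloc(tx_num, sizeof(*mgt->tx_rings));
	if (!mgt->tx_rings)
		return -1;

	mgt->rx_rings = calloc(rx_num, sizeof(*mgt->rx_rings));
	if (!mgt->rx_rings) {	/* check the array just allocated, not tx_rings */
		free(mgt->tx_rings);
		mgt->tx_rings = NULL;
		return -1;
	}

	mgt->queue_offset = queue_offset;
	return 0;
}

static void remove_rings(struct ring_mgt *mgt)
{
	free(mgt->tx_rings);
	free(mgt->rx_rings);
	mgt->tx_rings = NULL;
	mgt->rx_rings = NULL;
}
```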
diff --git a/drivers/net/nbl/nbl_include/nbl_def_channel.h b/drivers/net/nbl/nbl_include/nbl_def_channel.h
index 25d54a435d..829014fa16 100644
--- a/drivers/net/nbl/nbl_include/nbl_def_channel.h
+++ b/drivers/net/nbl/nbl_include/nbl_def_channel.h
@@ -299,6 +299,57 @@ struct nbl_chan_param_remove_all_queues {
u16 vsi_id;
};
+struct nbl_chan_param_register_net_info {
+ u16 pf_bdf;
+ u64 vf_bar_start;
+ u64 vf_bar_size;
+ u16 total_vfs;
+ u16 offset;
+ u16 stride;
+ u64 pf_bar_start;
+};
+
+struct nbl_chan_param_get_vsi_id {
+ u16 vsi_id;
+ u16 type;
+};
+
+struct nbl_chan_param_get_eth_id {
+ u16 vsi_id;
+ u8 eth_mode;
+ u8 eth_id;
+ u8 logic_eth_id;
+};
+
+struct nbl_chan_param_register_vsi2q {
+ u16 vsi_index;
+ u16 vsi_id;
+ u16 queue_offset;
+ u16 queue_num;
+};
+
+struct nbl_chan_param_cfg_q2vsi {
+ u16 vsi_id;
+};
+
+struct nbl_chan_param_cfg_rss {
+ u16 vsi_id;
+};
+
+struct nbl_chan_param_macvlan_cfg {
+ u8 mac[RTE_ETHER_ADDR_LEN];
+ u16 vlan;
+ u16 vsi;
+};
+
+struct nbl_chan_param_add_multi_rule {
+ u16 vsi;
+};
+
+struct nbl_chan_param_del_multi_rule {
+ u16 vsi;
+};
+
struct nbl_chan_send_info {
uint16_t dstid;
uint16_t msg_type;
diff --git a/drivers/net/nbl/nbl_include/nbl_def_common.h b/drivers/net/nbl/nbl_include/nbl_def_common.h
index fb2ccb28bf..b7955abfab 100644
--- a/drivers/net/nbl/nbl_include/nbl_def_common.h
+++ b/drivers/net/nbl/nbl_include/nbl_def_common.h
@@ -17,6 +17,13 @@
({ typeof(func) _func = (func); \
(!_func) ? 0 : _func para; })
+#define NBL_ONE_ETHERNET_PORT (1)
+#define NBL_TWO_ETHERNET_PORT (2)
+#define NBL_FOUR_ETHERNET_PORT (4)
+
+#define NBL_TWO_ETHERNET_MAX_MAC_NUM (512)
+#define NBL_FOUR_ETHERNET_MAX_MAC_NUM (1024)
+
struct nbl_dma_mem {
void *va;
uint64_t pa;
diff --git a/drivers/net/nbl/nbl_include/nbl_def_dispatch.h b/drivers/net/nbl/nbl_include/nbl_def_dispatch.h
index 5fd890b699..ac261db26a 100644
--- a/drivers/net/nbl/nbl_include/nbl_def_dispatch.h
+++ b/drivers/net/nbl/nbl_include/nbl_def_dispatch.h
@@ -19,8 +19,12 @@ enum {
};
struct nbl_dispatch_ops {
- int (*add_macvlan)(void *priv, u8 *mac, u16 vlan_id, u16 vsi_id);
+ int (*register_net)(void *priv,
+ struct nbl_register_net_param *register_param,
+ struct nbl_register_net_result *register_result);
+ int (*unregister_net)(void *priv);
int (*get_mac_addr)(void *priv, u8 *mac);
+ int (*add_macvlan)(void *priv, u8 *mac, u16 vlan_id, u16 vsi_id);
void (*del_macvlan)(void *priv, u8 *mac, u16 vlan_id, u16 vsi_id);
int (*add_multi_rule)(void *priv, u16 vsi);
void (*del_multi_rule)(void *priv, u16 vsi);
@@ -62,6 +66,7 @@ struct nbl_dispatch_ops {
u16 (*xmit_pkts)(void *priv, void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts);
u16 (*recv_pkts)(void *priv, void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts);
u16 (*get_vsi_global_qid)(void *priv, u16 vsi_id, u16 local_qid);
+ void (*get_board_info)(void *priv, struct nbl_board_port_info *board_info);
void (*dummy_func)(void *priv);
};
diff --git a/drivers/net/nbl/nbl_include/nbl_def_resource.h b/drivers/net/nbl/nbl_include/nbl_def_resource.h
index 43302df842..a40ccc4fd8 100644
--- a/drivers/net/nbl/nbl_include/nbl_def_resource.h
+++ b/drivers/net/nbl/nbl_include/nbl_def_resource.h
@@ -12,6 +12,18 @@
#define NBL_RES_OPS_TBL_TO_PRIV(res_ops_tbl) ((res_ops_tbl)->priv)
struct nbl_resource_ops {
+ int (*register_net)(void *priv,
+ struct nbl_register_net_param *register_param,
+ struct nbl_register_net_result *register_result);
+ int (*unregister_net)(void *priv);
+ u16 (*get_vsi_id)(void *priv);
+ void (*get_eth_id)(void *priv, u16 vsi_id, u8 *eth_mode, u8 *eth_id);
+ int (*setup_q2vsi)(void *priv, u16 vsi_id);
+ void (*remove_q2vsi)(void *priv, u16 vsi_id);
+ int (*register_vsi2q)(void *priv, u16 vsi_index, u16 vsi_id,
+ u16 queue_offset, u16 queue_num);
+ int (*setup_rss)(void *priv, u16 vsi_id);
+ void (*remove_rss)(void *priv, u16 vsi_id);
int (*alloc_rings)(void *priv, u16 tx_num, u16 rx_num, u16 queue_offset);
void (*remove_rings)(void *priv);
int (*start_tx_ring)(void *priv, struct nbl_start_tx_ring_param *param, u64 *dma_addr);
@@ -39,6 +51,12 @@ struct nbl_resource_ops {
int (*get_txrx_xstats)(void *priv, struct rte_eth_xstat *xstats, u16 *xstats_cnt);
int (*get_txrx_xstats_names)(void *priv, struct rte_eth_xstat_name *xstats_names,
u16 *xstats_cnt);
+ int (*add_macvlan)(void *priv, u8 *mac, u16 vlan_id, u16 vsi_id);
+ void (*del_macvlan)(void *priv, u8 *mac, u16 vlan_id, u16 vsi_id);
+ int (*add_multi_rule)(void *priv, u16 vsi_id);
+ void (*del_multi_rule)(void *priv, u16 vsi_id);
+ int (*cfg_multi_mcast)(void *priv, u16 vsi_id, u16 enable);
+ void (*clear_flow)(void *priv, u16 vsi_id);
};
struct nbl_resource_ops_tbl {
diff --git a/drivers/net/nbl/nbl_include/nbl_include.h b/drivers/net/nbl/nbl_include/nbl_include.h
index 9337666d16..44d157d2a7 100644
--- a/drivers/net/nbl/nbl_include/nbl_include.h
+++ b/drivers/net/nbl/nbl_include/nbl_include.h
@@ -59,6 +59,13 @@ typedef int8_t s8;
/* Used for macros to pass checkpatch */
#define NBL_NAME(x) x
+enum {
+ NBL_VSI_DATA = 0, /* default vsi in kernel or independent dpdk */
+ NBL_VSI_CTRL,
+ NBL_VSI_USER, /* dpdk used vsi in coexist dpdk */
+ NBL_VSI_MAX,
+};
+
enum nbl_product_type {
NBL_LEONIS_TYPE,
NBL_DRACO_TYPE,
@@ -109,4 +116,58 @@ struct nbl_txrx_queue_param {
u16 rxcsum;
};
+struct nbl_board_port_info {
+ u8 eth_num;
+ u8 speed;
+ u8 rsv[6];
+};
+
+struct nbl_common_info {
+ struct rte_eth_dev *dev;
+ u16 vsi_id;
+ u16 instance_id;
+ int devfd;
+ int eventfd;
+ int ifindex;
+ int iommu_group_num;
+ int nl_socket_route;
+ int dma_limit_msb;
+ u8 eth_id;
+ /* isolate 1 means kernel network, 0 means user network */
+ u8 isolate:1;
+ /* curr_network 0 means kernel network, 1 means user network */
+ u8 curr_network:1;
+ u8 is_vf:1;
+ u8 specific_dma:1;
+ u8 dma_set_msb:1;
+ u8 rsv:3;
+ struct nbl_board_port_info board_info;
+};
+
+struct nbl_register_net_param {
+ u16 pf_bdf;
+ u64 vf_bar_start;
+ u64 vf_bar_size;
+ u16 total_vfs;
+ u16 offset;
+ u16 stride;
+ u64 pf_bar_start;
+};
+
+struct nbl_register_net_result {
+ u16 tx_queue_num;
+ u16 rx_queue_num;
+ u16 queue_size;
+ u16 rdma_enable;
+ u64 hw_features;
+ u64 features;
+ u16 max_mtu;
+ u16 queue_offset;
+ u8 mac[RTE_ETHER_ADDR_LEN];
+ u16 vlan_proto;
+ u16 vlan_tci;
+ u32 rate;
+ bool trusted;
+};
+
#endif
--
2.43.0
* [PATCH v1 09/17] net/nbl: add uio and vfio mode for nbl
2025-06-12 8:58 [PATCH v1 00/17] NBL PMD for Nebulamatrix NICs Kyo Liu
` (7 preceding siblings ...)
2025-06-12 8:58 ` [PATCH v1 08/17] net/nbl: add complete device init and uninit functionality Kyo Liu
@ 2025-06-12 8:58 ` Kyo Liu
2025-06-12 8:58 ` [PATCH v1 10/17] net/nbl: bus/pci: introduce get_iova_mode for pci dev Kyo Liu
` (10 subsequent siblings)
19 siblings, 0 replies; 27+ messages in thread
From: Kyo Liu @ 2025-06-12 8:58 UTC (permalink / raw)
To: kyo.liu, dev; +Cc: Dimon Zhao, Leon Yu, Sam Chen
NBL device supports UIO/VFIO.
Signed-off-by: Kyo Liu <kyo.liu@nebula-matrix.com>
---
drivers/net/nbl/meson.build | 1 +
drivers/net/nbl/nbl_common/nbl_userdev.c | 24 +++++++++++++++++++
drivers/net/nbl/nbl_common/nbl_userdev.h | 10 ++++++++
.../nbl_hw_leonis/nbl_phy_leonis_snic.c | 7 ++++++
drivers/net/nbl/nbl_include/nbl_def_common.h | 4 ++++
5 files changed, 46 insertions(+)
create mode 100644 drivers/net/nbl/nbl_common/nbl_userdev.c
create mode 100644 drivers/net/nbl/nbl_common/nbl_userdev.h
diff --git a/drivers/net/nbl/meson.build b/drivers/net/nbl/meson.build
index 7f4abd3db0..a3e700d93d 100644
--- a/drivers/net/nbl/meson.build
+++ b/drivers/net/nbl/meson.build
@@ -16,6 +16,7 @@ sources = files(
'nbl_dispatch.c',
'nbl_common/nbl_common.c',
'nbl_common/nbl_thread.c',
+ 'nbl_common/nbl_userdev.c',
'nbl_dev/nbl_dev.c',
'nbl_hw/nbl_channel.c',
'nbl_hw/nbl_resource.c',
diff --git a/drivers/net/nbl/nbl_common/nbl_userdev.c b/drivers/net/nbl/nbl_common/nbl_userdev.c
new file mode 100644
index 0000000000..87b943ccd7
--- /dev/null
+++ b/drivers/net/nbl/nbl_common/nbl_userdev.c
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2025 Nebulamatrix Technology Co., Ltd.
+ */
+
+#include "nbl_userdev.h"
+
+int nbl_pci_map_device(struct nbl_adapter *adapter)
+{
+ struct rte_pci_device *pci_dev = adapter->pci_dev;
+ int ret = 0;
+
+ ret = rte_pci_map_device(pci_dev);
+ if (ret)
+ NBL_LOG(ERR, "device %s uio or vfio map failed", pci_dev->device.name);
+
+ return ret;
+}
+
+void nbl_pci_unmap_device(struct nbl_adapter *adapter)
+{
+ struct rte_pci_device *pci_dev = adapter->pci_dev;
+
+ rte_pci_unmap_device(pci_dev);
+}
diff --git a/drivers/net/nbl/nbl_common/nbl_userdev.h b/drivers/net/nbl/nbl_common/nbl_userdev.h
new file mode 100644
index 0000000000..11cc29999c
--- /dev/null
+++ b/drivers/net/nbl/nbl_common/nbl_userdev.h
@@ -0,0 +1,10 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2025 Nebulamatrix Technology Co., Ltd.
+ */
+
+#ifndef _NBL_USERDEV_H_
+#define _NBL_USERDEV_H_
+
+#include "nbl_ethdev.h"
+
+#endif
diff --git a/drivers/net/nbl/nbl_hw/nbl_hw_leonis/nbl_phy_leonis_snic.c b/drivers/net/nbl/nbl_hw/nbl_hw_leonis/nbl_phy_leonis_snic.c
index 49ada3b525..9ed375bc1e 100644
--- a/drivers/net/nbl/nbl_hw/nbl_hw_leonis/nbl_phy_leonis_snic.c
+++ b/drivers/net/nbl/nbl_hw/nbl_hw_leonis/nbl_phy_leonis_snic.c
@@ -183,6 +183,11 @@ int nbl_phy_init_leonis_snic(void *p)
phy_mgt_leonis_snic = (struct nbl_phy_mgt_leonis_snic **)&NBL_ADAPTER_TO_PHY_MGT(adapter);
phy_ops_tbl = &NBL_ADAPTER_TO_PHY_OPS_TBL(adapter);
+ /* map device */
+ ret = nbl_pci_map_device(adapter);
+ if (ret)
+ return ret;
+
*phy_mgt_leonis_snic = rte_zmalloc("nbl_phy_mgt",
sizeof(struct nbl_phy_mgt_leonis_snic), 0);
if (!*phy_mgt_leonis_snic) {
@@ -205,6 +210,7 @@ int nbl_phy_init_leonis_snic(void *p)
setup_ops_failed:
rte_free(*phy_mgt_leonis_snic);
alloc_phy_mgt_failed:
+ nbl_pci_unmap_device(adapter);
return ret;
}
@@ -220,4 +226,5 @@ void nbl_phy_remove_leonis_snic(void *p)
rte_free(*phy_mgt_leonis_snic);
nbl_phy_remove_ops(phy_ops_tbl);
+ nbl_pci_unmap_device(adapter);
}
diff --git a/drivers/net/nbl/nbl_include/nbl_def_common.h b/drivers/net/nbl/nbl_include/nbl_def_common.h
index b7955abfab..0b87c3003d 100644
--- a/drivers/net/nbl/nbl_include/nbl_def_common.h
+++ b/drivers/net/nbl/nbl_include/nbl_def_common.h
@@ -48,4 +48,8 @@ void nbl_free_dma_mem(struct nbl_dma_mem *mem);
int nbl_thread_add_work(struct nbl_work *work);
void nbl_thread_del_work(struct nbl_work *work);
+struct nbl_adapter;
+int nbl_pci_map_device(struct nbl_adapter *adapter);
+void nbl_pci_unmap_device(struct nbl_adapter *adapter);
+
#endif
--
2.43.0
* [PATCH v1 10/17] net/nbl: bus/pci: introduce get_iova_mode for pci dev
2025-06-12 8:58 [PATCH v1 00/17] NBL PMD for Nebulamatrix NICs Kyo Liu
` (8 preceding siblings ...)
2025-06-12 8:58 ` [PATCH v1 09/17] net/nbl: add uio and vfio mode for nbl Kyo Liu
@ 2025-06-12 8:58 ` Kyo Liu
2025-06-12 17:40 ` Stephen Hemminger
2025-06-12 8:58 ` [PATCH v1 11/17] net/nbl: add nbl coexistence mode for nbl Kyo Liu
` (9 subsequent siblings)
19 siblings, 1 reply; 27+ messages in thread
From: Kyo Liu @ 2025-06-12 8:58 UTC (permalink / raw)
To: kyo.liu, dev; +Cc: Chenbo Xia, Nipun Gupta
I propose this patch for DPDK to enable coexistence between
DPDK and kernel drivers for regular NICs. This solution requires
adding a new pci_ops in rte_pci_driver, through which DPDK will
retrieve the required IOVA mode from the vendor driver.
This mechanism is necessary to handle different IOMMU
configurations and operating modes. Below is a detailed
analysis of various scenarios:
1. When IOMMU is enabled:
1.1 With PT (Pass-Through) enabled:
In this case, the domain type is IOMMU_DOMAIN_IDENTITY,
which prevents vendor drivers from setting IOVA->PA mapping tables.
Therefore, DPDK must use PA mode. To achieve this:
The vendor kernel driver will register a character device (cdev) to
communicate with DPDK. This cdev handles device operations
(open, mmap, etc.) and ultimately
programs the hardware registers.
1.2 With PT disabled:
Here, the vendor driver doesn't enforce specific IOVA mode requirements.
Our implementation will:
Integrate a mediated device (mdev) in the vendor driver.
This mdev interacts with DPDK and manages IOVA->PA mapping configurations.
2. When IOMMU is disabled:
The vendor driver mandates PA mode (consistent with DPDK's PA mode
requirement in this scenario).
A character device (cdev) will similarly be registered for DPDK
communication.
Summary:
The solution leverages multiple technologies:
mdev for IOVA management when IOMMU is partially enabled.
VFIO for device passthrough operations.
cdev for register programming coordination.
A new pci_ops interface in DPDK to dynamically determine IOVA modes.
This architecture enables clean coexistence by establishing standardized
communication channels between DPDK and vendor drivers across different
IOMMU configurations.
Motivation for the Patch:
This patch is introduced to prepare for the upcoming open-source
contribution of our NebulaMatrix SNIC driver to DPDK. We aim to
ensure that our SNIC can seamlessly coexist with kernel drivers
using this mechanism. By adopting the proposed
architecture—leveraging dynamic IOVA mode negotiation via pci_ops,
mediated devices (mdev), and character device (cdev)
interactions—we enable our SNIC to operate in hybrid environments
where both DPDK and kernel drivers may manage the same hardware.
This design aligns with DPDK’s scalability goals and ensures
compatibility across diverse IOMMU configurations, which is critical
for real-world deployment scenarios.
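The selection fallback this patch adds to pci_device_iova_mode, plus a vendor callback matching scenario 1.1, can be sketched in plain C. Everything below uses mocked-up stand-ins, not the real bus/pci headers; all names are invented:

```c
#include <assert.h>
#include <stddef.h>

/* Mock stand-ins for the DPDK types involved (not the real headers). */
enum iova_mode { IOVA_DC = 0, IOVA_PA = 1, IOVA_VA = 2 };
#define DRV_NEED_IOVA_AS_VA 0x1u

struct pci_device { int iommu_pt; /* 1 if IOMMU is in pass-through */ };

struct pci_driver {
	unsigned int drv_flags;
	enum iova_mode (*get_iova_mode)(const struct pci_device *dev);
};

/* Selection order mirroring the patch: the NEED_IOVA_AS_VA flag wins,
 * otherwise ask the driver's callback if one is provided. */
static enum iova_mode device_iova_mode(const struct pci_driver *drv,
				       const struct pci_device *dev)
{
	enum iova_mode mode = IOVA_DC;

	if (drv->drv_flags & DRV_NEED_IOVA_AS_VA)
		mode = IOVA_VA;
	else if (drv->get_iova_mode)
		mode = drv->get_iova_mode(dev);
	return mode;
}

/* Example vendor callback: demand PA when the IOMMU is in pass-through,
 * since the identity domain rules out IOVA->PA mappings (scenario 1.1). */
static enum iova_mode vendor_get_iova_mode(const struct pci_device *dev)
{
	return dev->iommu_pt ? IOVA_PA : IOVA_VA;
}
```

Drivers that never set the callback keep today's behavior, since the new branch only runs when get_iova_mode is non-NULL.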
Signed-off-by: Kyo Liu <kyo.liu@nebula-matrix.com>
---
doc/guides/rel_notes/release_25_07.rst | 5 +++++
drivers/bus/pci/bus_pci_driver.h | 11 +++++++++++
drivers/bus/pci/linux/pci.c | 2 ++
3 files changed, 18 insertions(+)
diff --git a/doc/guides/rel_notes/release_25_07.rst b/doc/guides/rel_notes/release_25_07.rst
index 9afc4520a6..a282b3e5a9 100644
--- a/doc/guides/rel_notes/release_25_07.rst
+++ b/doc/guides/rel_notes/release_25_07.rst
@@ -55,6 +55,11 @@ New Features
Also, make sure to start the actual text at the margin.
=======================================================
+* **Added get_iova_mode for rte_pci_driver.**
+
+ Introduced a new ``get_iova_mode`` callback in ``rte_pci_driver``,
+ through which a PCI driver can report its required IOVA mode to the PCI bus.
+
* **Added PMU library.**
Added a Performance Monitoring Unit (PMU) library which allows Linux applications
diff --git a/drivers/bus/pci/bus_pci_driver.h b/drivers/bus/pci/bus_pci_driver.h
index 2cc1119072..e5e36b0a5c 100644
--- a/drivers/bus/pci/bus_pci_driver.h
+++ b/drivers/bus/pci/bus_pci_driver.h
@@ -125,6 +125,16 @@ typedef int (pci_dma_map_t)(struct rte_pci_device *dev, void *addr,
typedef int (pci_dma_unmap_t)(struct rte_pci_device *dev, void *addr,
uint64_t iova, size_t len);
+/**
+ * Retrieve the required IOVA mode from the vendor driver.
+ *
+ * @param pdev
+ * Pointer to the PCI device.
+ * @return
+ * The IOVA mode (``enum rte_iova_mode``) required by the driver.
+ */
+typedef int (pci_get_iova_mode)(const struct rte_pci_device *pdev);
+
/**
* A structure describing a PCI driver.
*/
@@ -136,6 +146,7 @@ struct rte_pci_driver {
pci_dma_map_t *dma_map; /**< device dma map function. */
pci_dma_unmap_t *dma_unmap; /**< device dma unmap function. */
const struct rte_pci_id *id_table; /**< ID table, NULL terminated. */
+ pci_get_iova_mode *get_iova_mode; /**< Device get iova_mode function */
uint32_t drv_flags; /**< Flags RTE_PCI_DRV_*. */
};
diff --git a/drivers/bus/pci/linux/pci.c b/drivers/bus/pci/linux/pci.c
index c20d159218..fd69a02989 100644
--- a/drivers/bus/pci/linux/pci.c
+++ b/drivers/bus/pci/linux/pci.c
@@ -624,6 +624,8 @@ pci_device_iova_mode(const struct rte_pci_driver *pdrv,
default:
if ((pdrv->drv_flags & RTE_PCI_DRV_NEED_IOVA_AS_VA) != 0)
iova_mode = RTE_IOVA_VA;
+ else if (pdrv->get_iova_mode)
+ iova_mode = pdrv->get_iova_mode(pdev);
break;
}
return iova_mode;
--
2.43.0
* [PATCH v1 11/17] net/nbl: add nbl coexistence mode for nbl
2025-06-12 8:58 [PATCH v1 00/17] NBL PMD for Nebulamatrix NICs Kyo Liu
` (9 preceding siblings ...)
2025-06-12 8:58 ` [PATCH v1 10/17] net/nbl: bus/pci: introduce get_iova_mode for pci dev Kyo Liu
@ 2025-06-12 8:58 ` Kyo Liu
2025-06-12 8:58 ` [PATCH v1 12/17] net/nbl: add nbl ethdev configuration Kyo Liu
` (8 subsequent siblings)
19 siblings, 0 replies; 27+ messages in thread
From: Kyo Liu @ 2025-06-12 8:58 UTC (permalink / raw)
To: kyo.liu, dev; +Cc: Dimon Zhao, Leon Yu, Sam Chen
NBL device supports coexistence mode.
Signed-off-by: Kyo Liu <kyo.liu@nebula-matrix.com>
---
drivers/net/nbl/nbl_common/nbl_userdev.c | 744 +++++++++++++++++-
drivers/net/nbl/nbl_common/nbl_userdev.h | 11 +
drivers/net/nbl/nbl_core.c | 3 +-
drivers/net/nbl/nbl_core.h | 1 -
drivers/net/nbl/nbl_ethdev.c | 6 +
drivers/net/nbl/nbl_hw/nbl_channel.c | 185 ++++-
drivers/net/nbl/nbl_hw/nbl_channel.h | 11 +
.../nbl_hw_leonis/nbl_phy_leonis_snic.c | 2 +-
drivers/net/nbl/nbl_include/nbl_def_common.h | 65 +-
drivers/net/nbl/nbl_include/nbl_include.h | 2 +
10 files changed, 1019 insertions(+), 11 deletions(-)
diff --git a/drivers/net/nbl/nbl_common/nbl_userdev.c b/drivers/net/nbl/nbl_common/nbl_userdev.c
index 87b943ccd7..26643c862b 100644
--- a/drivers/net/nbl/nbl_common/nbl_userdev.c
+++ b/drivers/net/nbl/nbl_common/nbl_userdev.c
@@ -3,15 +3,720 @@
*/
#include "nbl_userdev.h"
+#include <rte_vfio.h>
-int nbl_pci_map_device(struct nbl_adapter *adapter)
+#define NBL_USERDEV_EVENT_CLB_NAME "nbl_userspace_mem_event_clb"
+#define NBL_USERDEV_BAR0_SIZE 65536
+#define NBL_USERDEV_DMA_LIMIT 0xFFFFFFFFFFFF
+
+/* Size of the buffer to receive kernel messages */
+#define NBL_NL_BUF_SIZE (32 * 1024)
+/* Send buffer size for the Netlink socket */
+#define NBL_SEND_BUF_SIZE 32768
+/* Receive buffer size for the Netlink socket */
+#define NBL_RECV_BUF_SIZE 32768
+
+struct nbl_userdev_map_record {
+ TAILQ_ENTRY(nbl_userdev_map_record) next;
+ u64 vaddr;
+ u64 iova;
+ u64 len;
+};
+
+static int nbl_default_container = -1;
+static int nbl_group_count;
+
+TAILQ_HEAD(nbl_adapter_list_head, nbl_adapter);
+static struct nbl_adapter_list_head nbl_adapter_list =
+ TAILQ_HEAD_INITIALIZER(nbl_adapter_list);
+
+TAILQ_HEAD(nbl_userdev_map_record_head, nbl_userdev_map_record);
+static struct nbl_userdev_map_record_head nbl_map_list =
+ TAILQ_HEAD_INITIALIZER(nbl_map_list);
+
+static int
+nbl_userdev_dma_mem_map(int devfd, uint64_t vaddr, uint64_t iova, uint64_t len)
{
- struct rte_pci_device *pci_dev = adapter->pci_dev;
+ struct nbl_dev_user_dma_map dma_map;
int ret = 0;
- ret = rte_pci_map_device(pci_dev);
+ memset(&dma_map, 0, sizeof(dma_map));
+ dma_map.argsz = sizeof(struct nbl_dev_user_dma_map);
+ dma_map.vaddr = vaddr;
+ dma_map.size = len;
+ dma_map.iova = iova;
+ dma_map.flags = NBL_DEV_USER_DMA_MAP_FLAG_READ |
+ NBL_DEV_USER_DMA_MAP_FLAG_WRITE;
+
+ ret = ioctl(devfd, NBL_DEV_USER_MAP_DMA, &dma_map);
+ if (ret) {
+ /**
+ * In case the mapping was already done EEXIST will be
+ * returned from kernel.
+ */
+ if (errno == EEXIST) {
+ NBL_LOG(ERR,
+ "nbl container Memory segment is already mapped, skipping");
+ ret = 0;
+ } else {
+ NBL_LOG(ERR,
+ "nbl container cannot set up DMA remapping, error %i (%s), ret %d",
+ errno, strerror(errno), ret);
+ }
+ }
+
+ return ret;
+}
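Both mapping helpers in this file treat EEXIST from the kernel as success, since the segment may have been mapped by an earlier walk. A standalone sketch of that pattern, where do_map stands in for the NBL_DEV_USER_MAP_DMA / VFIO_IOMMU_MAP_DMA ioctl (all names invented, stubs for illustration only):

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/* A map primitive returns 0, or -1 with errno set, like the MAP_DMA ioctl. */
typedef int (*map_fn)(uint64_t vaddr, uint64_t iova, uint64_t len);

/* Tolerate EEXIST: a segment already mapped by an earlier walk is not
 * an error, so the caller keeps walking instead of failing probe. */
static int dma_mem_map(map_fn do_map, uint64_t vaddr, uint64_t iova,
		       uint64_t len)
{
	if (do_map(vaddr, iova, len) == 0)
		return 0;
	if (errno == EEXIST)
		return 0;	/* already mapped: skip, as the driver does */
	return -1;
}

/* Stub mappers standing in for the real ioctl. */
static int map_ok(uint64_t v, uint64_t i, uint64_t l)
{
	(void)v; (void)i; (void)l;
	return 0;
}

static int map_exists(uint64_t v, uint64_t i, uint64_t l)
{
	(void)v; (void)i; (void)l;
	errno = EEXIST;
	return -1;
}

static int map_fails(uint64_t v, uint64_t i, uint64_t l)
{
	(void)v; (void)i; (void)l;
	errno = EINVAL;
	return -1;
}
```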
+
+static int
+nbl_vfio_dma_mem_map(int vfio_container_fd, uint64_t vaddr, uint64_t len, int do_map)
+{
+ struct vfio_iommu_type1_dma_map dma_map;
+ struct vfio_iommu_type1_dma_unmap dma_unmap;
+ int ret;
+
+ if (do_map != 0) {
+ memset(&dma_map, 0, sizeof(dma_map));
+ dma_map.argsz = sizeof(struct vfio_iommu_type1_dma_map);
+ dma_map.vaddr = vaddr;
+ dma_map.size = len;
+ dma_map.iova = vaddr;
+ dma_map.flags = VFIO_DMA_MAP_FLAG_READ |
+ VFIO_DMA_MAP_FLAG_WRITE;
+
+ ret = ioctl(vfio_container_fd, VFIO_IOMMU_MAP_DMA, &dma_map);
+ if (ret) {
+ /**
+ * In case the mapping was already done EEXIST will be
+ * returned from kernel.
+ */
+ if (errno == EEXIST) {
+ NBL_LOG(ERR,
+ "Memory segment is already mapped, skipping");
+ } else {
+ NBL_LOG(ERR,
+ "cannot set up DMA remapping, error %i (%s)",
+ errno, strerror(errno));
+ return -1;
+ }
+ }
+ } else {
+ memset(&dma_unmap, 0, sizeof(dma_unmap));
+ dma_unmap.argsz = sizeof(struct vfio_iommu_type1_dma_unmap);
+ dma_unmap.size = len;
+ dma_unmap.iova = vaddr;
+
+ ret = ioctl(vfio_container_fd, VFIO_IOMMU_UNMAP_DMA, &dma_unmap);
+ if (ret) {
+ NBL_LOG(ERR, "cannot clear DMA remapping, error %i (%s)",
+ errno, strerror(errno));
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+static int
+vfio_map_contig(const struct rte_memseg_list *msl, const struct rte_memseg *ms,
+ size_t len, void *arg)
+{
+ struct nbl_userdev_map_record *record;
+ int *vfio_container_fd = arg;
+ int ret;
+
+ if (msl->external)
+ return 0;
+
+ ret = nbl_vfio_dma_mem_map(*vfio_container_fd, ms->addr_64, len, 1);
+ if (ret)
+ return ret;
+
+ record = malloc(sizeof(*record));
+ if (!record)
+ return -ENOMEM;
+
+ record->vaddr = ms->addr_64;
+ record->iova = ms->iova;
+ record->len = len;
+ TAILQ_INSERT_TAIL(&nbl_map_list, record, next);
+
+ return 0;
+}
+
+static int
+vfio_map(const struct rte_memseg_list *msl, const struct rte_memseg *ms, void *arg)
+{
+ struct nbl_userdev_map_record *record;
+ int *vfio_container_fd = arg;
+ int ret;
+
+ /* skip external memory that isn't a heap */
+ if (msl->external && !msl->heap)
+ return 0;
+
+ /* skip any segments with invalid IOVA addresses */
+ if (ms->iova == RTE_BAD_IOVA)
+ return 0;
+
+ /* if IOVA mode is VA, we've already mapped the internal segments */
+ if (!msl->external && rte_eal_iova_mode() == RTE_IOVA_VA)
+ return 0;
+
+ ret = nbl_vfio_dma_mem_map(*vfio_container_fd, ms->addr_64, ms->len, 1);
if (ret)
- NBL_LOG(ERR, "device %s uio or vfio map failed", pci_dev->device.name);
+ return ret;
+
+ record = malloc(sizeof(*record));
+ if (!record)
+ return -ENOMEM;
+
+ record->vaddr = ms->addr_64;
+ record->iova = ms->iova;
+ record->len = ms->len;
+ TAILQ_INSERT_TAIL(&nbl_map_list, record, next);
+
+ return 0;
+}
+
+static int nbl_userdev_vfio_dma_map(int vfio_container_fd)
+{
+ if (rte_eal_iova_mode() == RTE_IOVA_VA) {
+ /* with IOVA as VA mode, we can get away with mapping contiguous
+ * chunks rather than going page-by-page.
+ */
+ int ret = rte_memseg_contig_walk(vfio_map_contig,
+ &vfio_container_fd);
+ if (ret)
+ return ret;
+ /* we have to continue the walk because we've skipped the
+ * external segments during the config walk.
+ */
+ }
+ return rte_memseg_walk(vfio_map, &vfio_container_fd);
+}
+
+static int nbl_userdev_dma_map(struct nbl_adapter *adapter)
+{
+ struct nbl_common_info *common = &adapter->common;
+ struct nbl_userdev_map_record *record;
+ rte_iova_t iova;
+
+ rte_mcfg_mem_read_lock();
+ TAILQ_FOREACH(record, &nbl_map_list, next) {
+ iova = record->iova;
+ if (common->dma_set_msb)
+ iova |= (1UL << common->dma_limit_msb);
+ nbl_userdev_dma_mem_map(common->devfd, record->vaddr, iova, record->len);
+ }
+ TAILQ_INSERT_TAIL(&nbl_adapter_list, adapter, next);
+ rte_mcfg_mem_read_unlock();
+
+ return 0;
+}
+
+static void *nbl_userdev_mmap(int devfd, __rte_unused int bar_index, size_t size)
+{
+ void *addr;
+
+ addr = rte_mem_map(NULL, size, RTE_PROT_READ | RTE_PROT_WRITE,
+ RTE_MAP_SHARED, devfd, 0);
+ if (!addr)
+ NBL_LOG(ERR, "usermap mmap bar failed");
+
+ return addr;
+}
+
+static int nbl_userdev_add_record(u64 vaddr, u64 iova, u64 len)
+{
+ struct nbl_userdev_map_record *record;
+ struct nbl_adapter *adapter;
+ u64 dma_iova;
+ int ret;
+
+ record = malloc(sizeof(*record));
+ if (!record)
+ return -ENOMEM;
+
+ ret = nbl_vfio_dma_mem_map(nbl_default_container, vaddr, len, 1);
+ if (ret) {
+ free(record);
+ return ret;
+ }
+
+ record->iova = iova;
+ record->len = len;
+ record->vaddr = vaddr;
+
+ TAILQ_INSERT_TAIL(&nbl_map_list, record, next);
+ TAILQ_FOREACH(adapter, &nbl_adapter_list, next) {
+ dma_iova = record->iova;
+ if (adapter->common.dma_set_msb)
+ dma_iova |= (1UL << adapter->common.dma_limit_msb);
+ nbl_userdev_dma_mem_map(adapter->common.devfd, record->vaddr,
+ dma_iova, record->len);
+ }
+
+ return 0;
+}
+
+static int nbl_userdev_free_record(u64 vaddr, u64 iova __rte_unused, u64 len __rte_unused)
+{
+ struct nbl_userdev_map_record *record, *tmp_record;
+
+ RTE_TAILQ_FOREACH_SAFE(record, &nbl_map_list, next, tmp_record) {
+ if (record->vaddr != vaddr)
+ continue;
+ nbl_vfio_dma_mem_map(nbl_default_container, vaddr, record->len, 0);
+ TAILQ_REMOVE(&nbl_map_list, record, next);
+ free(record);
+ }
+
+ return 0;
+}
+
+static void nbl_userdev_dma_free(void)
+{
+ struct nbl_userdev_map_record *record, *tmp_record;
+
+ RTE_TAILQ_FOREACH_SAFE(record, &nbl_map_list, next, tmp_record) {
+ TAILQ_REMOVE(&nbl_map_list, record, next);
+ free(record);
+ }
+}
+
+static void
+nbl_userdev_mem_event_callback(enum rte_mem_event type, const void *addr, size_t len,
+ void *arg __rte_unused)
+{
+ rte_iova_t iova_start, iova_expected;
+ struct rte_memseg_list *msl;
+ struct rte_memseg *ms;
+ size_t cur_len = 0;
+ u64 va_start;
+ u64 vfio_va;
+
+ if (!nbl_group_count)
+ return;
+
+ msl = rte_mem_virt2memseg_list(addr);
+
+ /* for IOVA as VA mode, no need to care for IOVA addresses */
+ if (rte_eal_iova_mode() == RTE_IOVA_VA && msl->external == 0) {
+ vfio_va = (u64)(uintptr_t)addr;
+ if (type == RTE_MEM_EVENT_ALLOC)
+ nbl_userdev_add_record(vfio_va, vfio_va, len);
+ else
+ nbl_userdev_free_record(vfio_va, vfio_va, len);
+ return;
+ }
+
+ /* memsegs are contiguous in memory */
+ ms = rte_mem_virt2memseg(addr, msl);
+
+ /* This memory is not guaranteed to be contiguous, but it still could
+ * be, or it could have some small contiguous chunks. Since the number
+ * of VFIO mappings is limited, and VFIO appears to not concatenate
+ * adjacent mappings, we have to do this ourselves.
+ * So, find contiguous chunks, then map them.
+ */
+ va_start = ms->addr_64;
+ iova_start = ms->iova;
+ iova_expected = ms->iova;
+ while (cur_len < len) {
+ bool new_contig_area = ms->iova != iova_expected;
+ bool last_seg = (len - cur_len) == ms->len;
+ bool skip_last = false;
+
+ /* only do mappings when current contiguous area ends */
+ if (new_contig_area) {
+ if (type == RTE_MEM_EVENT_ALLOC)
+ nbl_userdev_add_record(va_start, iova_start,
+ iova_expected - iova_start);
+ else
+ nbl_userdev_free_record(va_start, iova_start,
+ iova_expected - iova_start);
+ va_start = ms->addr_64;
+ iova_start = ms->iova;
+ }
+ /* some memory segments may have invalid IOVA */
+ if (ms->iova == RTE_BAD_IOVA) {
+ NBL_LOG(INFO, "Memory segment at %p has bad IOVA, skipping",
+ ms->addr);
+ skip_last = true;
+ }
+ iova_expected = ms->iova + ms->len;
+ cur_len += ms->len;
+ ++ms;
+
+ /* don't count previous segment, and don't attempt to
+ * dereference a potentially invalid pointer.
+ */
+ if (skip_last && !last_seg) {
+ iova_expected = ms->iova;
+ iova_start = ms->iova;
+ va_start = ms->addr_64;
+ } else if (!skip_last && last_seg) {
+ /* this is the last segment and we're not skipping */
+ if (type == RTE_MEM_EVENT_ALLOC)
+ nbl_userdev_add_record(va_start, iova_start,
+ iova_expected - iova_start);
+ else
+ nbl_userdev_free_record(va_start, iova_start,
+ iova_expected - iova_start);
+ }
+ }
+}
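The callback above coalesces memsegs into IOVA-contiguous runs and issues one map/unmap per run, because VFIO limits the number of mappings and does not concatenate adjacent ones. The run-detection logic can be sketched standalone over an ordered segment array (shapes invented, not driver code):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* A memory segment as seen during the walk (minimal invented shape). */
struct seg {
	uint64_t iova;
	uint64_t len;
};

/* Count IOVA-contiguous runs: a new run starts whenever a segment's iova
 * differs from the previous segment's expected end (iova + len). The
 * driver's callback performs one map/unmap per such run, not per segment. */
static size_t count_contig_runs(const struct seg *segs, size_t n)
{
	size_t runs = 0;
	uint64_t expected = 0;

	for (size_t i = 0; i < n; i++) {
		if (i == 0 || segs[i].iova != expected)
			runs++;		/* previous contiguous area ended */
		expected = segs[i].iova + segs[i].len;
	}
	return runs;
}
```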
+
+static int nbl_mdev_map_device(struct nbl_adapter *adapter)
+{
+ const struct rte_pci_device *pci_dev = adapter->pci_dev;
+ struct nbl_common_info *common = &adapter->common;
+ char dev_name[RTE_DEV_NAME_MAX_LEN] = {0};
+ char pathname[PATH_MAX];
+ struct vfio_device_info device_info = { .argsz = sizeof(device_info) };
+ struct vfio_group_status group_status = {
+ .argsz = sizeof(group_status)
+ };
+ u64 dma_limit = NBL_USERDEV_DMA_LIMIT;
+ int ret, container_create = 0, container_set = 0;
+ int vfio_group_fd, container = nbl_default_container;
+
+ rte_pci_device_name(&pci_dev->addr, dev_name, RTE_DEV_NAME_MAX_LEN);
+ snprintf(pathname, sizeof(pathname),
+ "%s/%s/", rte_pci_get_sysfs_path(), dev_name);
+
+ ret = rte_vfio_get_group_num(pathname, dev_name, &common->iommu_group_num);
+ if (ret <= 0) {
+ NBL_LOG(ERR, "failed to get nbl vfio group number");
+ return -1;
+ }
+
+ NBL_LOG(INFO, "nbl vfio group number %d", common->iommu_group_num);
+ /* vfio_container */
+ if (nbl_default_container < 0) {
+ container = rte_vfio_container_create();
+ container_create = 1;
+
+ if (container < 0) {
+ NBL_LOG(ERR, "nbl vfio container create failed");
+ return -1;
+ }
+ }
+
+ NBL_LOG(INFO, "nbl vfio container %d", container);
+ vfio_group_fd = rte_vfio_container_group_bind(container, common->iommu_group_num);
+ if (vfio_group_fd < 0) {
+ NBL_LOG(ERR, "nbl vfio group bind failed, %d", vfio_group_fd);
+ goto free_container;
+ }
+
+ /* check if the group is viable */
+ ret = ioctl(vfio_group_fd, VFIO_GROUP_GET_STATUS, &group_status);
+ if (ret) {
+ NBL_LOG(ERR, "%s cannot get group status,error %i (%s)", dev_name,
+ errno, strerror(errno));
+ goto free_group;
+ } else if (!(group_status.flags & VFIO_GROUP_FLAGS_VIABLE)) {
+ NBL_LOG(ERR, "%s VFIO group is not viable!", dev_name);
+ goto free_group;
+ }
+
+ if (!(group_status.flags & VFIO_GROUP_FLAGS_CONTAINER_SET)) {
+ /* add group to a container */
+ ret = ioctl(vfio_group_fd, VFIO_GROUP_SET_CONTAINER, &container);
+ if (ret) {
+ NBL_LOG(ERR, "%s cannot add VFIO group to container, error %i (%s)",
+ dev_name, errno, strerror(errno));
+ goto free_group;
+ }
+
+ nbl_group_count++;
+ container_set = 1;
+ /* set an IOMMU type for container */
+
+ if (container_create || nbl_group_count == 1) {
+ if (ioctl(container, VFIO_CHECK_EXTENSION, VFIO_TYPE1v2_IOMMU)) {
+ ret = ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1v2_IOMMU);
+ if (ret) {
+ NBL_LOG(ERR, "Failed to setup VFIO iommu");
+ goto unset_container;
+ }
+ } else {
+ NBL_LOG(ERR, "No supported IOMMU available");
+ goto unset_container;
+ }
+
+ rte_mcfg_mem_read_lock();
+ ret = nbl_userdev_vfio_dma_map(container);
+ if (ret) {
+ rte_mcfg_mem_read_unlock();
+ NBL_LOG(WARNING, "nbl vfio container dma map failed, %d", ret);
+ goto free_dma_map;
+ }
+ ret = rte_mem_event_callback_register(NBL_USERDEV_EVENT_CLB_NAME,
+ nbl_userdev_mem_event_callback, NULL);
+ rte_mcfg_mem_read_unlock();
+ if (ret && rte_errno != ENOTSUP) {
+ NBL_LOG(WARNING, "nbl vfio mem event register callback failed, %d",
+ ret);
+ goto free_dma_map;
+ }
+ }
+ }
+
+ /* get a file descriptor for the device */
+ common->devfd = ioctl(vfio_group_fd, VFIO_GROUP_GET_DEVICE_FD, dev_name);
+ if (common->devfd < 0) {
+ /* if we cannot get a device fd, this implies a problem with
+ * the VFIO group or the container not having IOMMU configured.
+ */
+
+ NBL_LOG(WARNING, "Getting a vfio_dev_fd for %s failed, %d",
+ dev_name, common->devfd);
+ goto unregister_mem_event;
+ }
+
+ if (container_create)
+ nbl_default_container = container;
+
+ common->specific_dma = true;
+ if (rte_eal_iova_mode() == RTE_IOVA_PA)
+ common->dma_set_msb = true;
+ ioctl(common->devfd, NBL_DEV_USER_GET_DMA_LIMIT, &dma_limit);
+ common->dma_limit_msb = rte_fls_u64(dma_limit) - 1;
+ if (common->dma_limit_msb < 38) {
+ NBL_LOG(ERR, "iommu dma limit msb %u, low 3-level page table",
+ common->dma_limit_msb);
+ goto close_fd;
+ }
+
+ nbl_userdev_dma_map(adapter);
+
+ return 0;
+
+close_fd:
+ close(common->devfd);
+unregister_mem_event:
+ if (nbl_group_count == 1) {
+ rte_mcfg_mem_read_lock();
+ rte_mem_event_callback_unregister(NBL_USERDEV_EVENT_CLB_NAME, NULL);
+ rte_mcfg_mem_read_unlock();
+ }
+free_dma_map:
+ if (nbl_group_count == 1) {
+ rte_mcfg_mem_read_lock();
+ nbl_userdev_dma_free();
+ rte_mcfg_mem_read_unlock();
+ }
+unset_container:
+ if (container_set) {
+ ioctl(vfio_group_fd, VFIO_GROUP_UNSET_CONTAINER, &container);
+ nbl_group_count--;
+ }
+free_group:
+ close(vfio_group_fd);
+ rte_vfio_clear_group(vfio_group_fd);
+free_container:
+ if (container_create)
+ rte_vfio_container_destroy(container);
+ return -1;
+}
+
+static int nbl_mdev_unmap_device(struct nbl_adapter *adapter)
+{
+ struct nbl_common_info *common = &adapter->common;
+ int vfio_group_fd, ret;
+
+ close(common->devfd);
+ rte_mcfg_mem_read_lock();
+ vfio_group_fd = rte_vfio_container_group_bind(nbl_default_container,
+ common->iommu_group_num);
+ NBL_LOG(INFO, "close vfio_group_fd %d", vfio_group_fd);
+ ret = ioctl(vfio_group_fd, VFIO_GROUP_UNSET_CONTAINER, &nbl_default_container);
+ if (ret)
+ NBL_LOG(ERR, "unset container, error %i (%s) %d",
+ errno, strerror(errno), ret);
+ nbl_group_count--;
+ ret = rte_vfio_container_group_unbind(nbl_default_container, common->iommu_group_num);
+ if (ret)
+ NBL_LOG(ERR, "vfio container group unbind failed %d", ret);
+ if (!nbl_group_count) {
+ rte_mem_event_callback_unregister(NBL_USERDEV_EVENT_CLB_NAME, NULL);
+ nbl_userdev_dma_free();
+ }
+ rte_mcfg_mem_read_unlock();
+
+ return 0;
+}
+
+static int nbl_userdev_get_ifindex(int devfd)
+{
+ int ifindex = -1, ret;
+
+ ret = ioctl(devfd, NBL_DEV_USER_GET_IFINDEX, &ifindex);
+ if (ret)
+ NBL_LOG(ERR, "get ifindex failed %d", ret);
+
+ NBL_LOG(INFO, "get ifindex %d", ifindex);
+
+ return ifindex;
+}
+
+static int nbl_userdev_nl_init(int protocol)
+{
+ int fd;
+ int sndbuf_size = NBL_SEND_BUF_SIZE;
+ int rcvbuf_size = NBL_RECV_BUF_SIZE;
+ struct sockaddr_nl local = {
+ .nl_family = AF_NETLINK,
+ };
+ int ret;
+
+ fd = socket(AF_NETLINK, SOCK_RAW | SOCK_CLOEXEC, protocol);
+ if (fd == -1) {
+ rte_errno = errno;
+ return -rte_errno;
+ }
+ ret = setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sndbuf_size, sizeof(int));
+ if (ret == -1) {
+ rte_errno = errno;
+ goto error;
+ }
+ ret = setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf_size, sizeof(int));
+ if (ret == -1) {
+ rte_errno = errno;
+ goto error;
+ }
+ ret = bind(fd, (struct sockaddr *)&local, sizeof(local));
+ if (ret == -1) {
+ rte_errno = errno;
+ goto error;
+ }
+ return fd;
+error:
+ close(fd);
+ return -rte_errno;
+}
+
+int nbl_userdev_port_config(struct nbl_adapter *adapter, int start)
+{
+ struct nbl_common_info *common = &adapter->common;
+ int ret;
+
+ if (NBL_IS_NOT_COEXISTENCE(common))
+ return 0;
+
+ if (common->isolate)
+ return 0;
+
+ ret = ioctl(common->devfd, NBL_DEV_USER_SWITCH_NETWORK, &start);
+ if (ret) {
+ NBL_LOG(ERR, "userspace switch network ret %d", ret);
+ return ret;
+ }
+
+ common->curr_network = start;
+ return ret;
+}
+
+int nbl_userdev_port_isolate(struct nbl_adapter *adapter, int set, struct rte_flow_error *error)
+{
+ struct nbl_common_info *common = &adapter->common;
+ int ret = 0, stat = !set;
+
+ if (NBL_IS_NOT_COEXISTENCE(common)) {
+ /* special use of isolate: in offload mode, isolate is ignored when the PF is bound to vfio/uio */
+ rte_flow_error_set(error, EREMOTEIO,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "nbl isolate switch failed.");
+ return -EREMOTEIO;
+ }
+
+ if (common->curr_network != stat)
+ ret = ioctl(common->devfd, NBL_DEV_USER_SWITCH_NETWORK, &stat);
+
+ if (ret) {
+ rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "nbl isolate switch failed.");
+ return ret;
+ }
+
+ common->curr_network = !set;
+ common->isolate = set;
+
+ return ret;
+}
+
+int nbl_pci_map_device(struct nbl_adapter *adapter)
+{
+ struct rte_pci_device *pci_dev = adapter->pci_dev;
+ const struct rte_pci_addr *loc = &pci_dev->addr;
+ struct nbl_common_info *common = &adapter->common;
+ char pathname[PATH_MAX];
+ int ret = 0, fd;
+ enum rte_iova_mode iova_mode;
+ size_t bar_size = NBL_USERDEV_BAR0_SIZE;
+
+ NBL_USERDEV_INIT_COMMON(common);
+ iova_mode = rte_eal_iova_mode();
+ if (iova_mode == RTE_IOVA_PA) {
+ /* check iommu disable */
+ snprintf(pathname, sizeof(pathname),
+ "/dev/nbl_userdev/" PCI_PRI_FMT, loc->domain,
+ loc->bus, loc->devid, loc->function);
+ common->devfd = open(pathname, O_RDWR);
+ if (common->devfd >= 0)
+ goto mmap;
+
+ NBL_LOG(INFO, "%s char device open failed", pci_dev->device.name);
+ }
+
+ /* check iommu translate mode */
+ ret = nbl_mdev_map_device(adapter);
+ if (ret) {
+ ret = rte_pci_map_device(pci_dev);
+ if (ret)
+ NBL_LOG(ERR, "uio/vfio %s map device failed", pci_dev->device.name);
+ return ret;
+ }
+
+mmap:
+ fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
+ if (fd < 0) {
+ NBL_LOG(ERR, "nbl userdev get event fd failed");
+ ret = -1;
+ goto close_fd;
+ }
+
+ ret = ioctl(common->devfd, NBL_DEV_USER_SET_EVENTFD, &fd);
+ if (ret) {
+ NBL_LOG(ERR, "nbl userdev set event fd failed");
+ goto close_eventfd;
+ }
+
+ common->eventfd = fd;
+ ioctl(common->devfd, NBL_DEV_USER_GET_BAR_SIZE, &bar_size);
+
+ if (!ret) {
+ pci_dev->mem_resource[0].addr = nbl_userdev_mmap(common->devfd, 0, bar_size);
+ pci_dev->mem_resource[0].phys_addr = 0;
+ pci_dev->mem_resource[0].len = bar_size;
+ pci_dev->mem_resource[2].addr = 0;
+
+ common->ifindex = nbl_userdev_get_ifindex(common->devfd);
+ common->nl_socket_route = nbl_userdev_nl_init(NETLINK_ROUTE);
+ }
+
+ return ret;
+
+close_eventfd:
+ close(fd);
+close_fd:
+ if (common->specific_dma)
+ nbl_mdev_unmap_device(adapter);
+ else
+ close(common->devfd);
return ret;
}
@@ -19,6 +724,35 @@ int nbl_pci_map_device(struct nbl_adapter *adapter)
void nbl_pci_unmap_device(struct nbl_adapter *adapter)
{
struct rte_pci_device *pci_dev = adapter->pci_dev;
+ struct nbl_common_info *common = &adapter->common;
+
+ if (NBL_IS_NOT_COEXISTENCE(common)) {
+ rte_pci_unmap_device(pci_dev);
+ return;
+ }
+
+ rte_mem_unmap(pci_dev->mem_resource[0].addr, pci_dev->mem_resource[0].len);
+ ioctl(common->devfd, NBL_DEV_USER_CLEAR_EVENTFD, 0);
+ close(common->eventfd);
+ close(common->nl_socket_route);
+
+ if (!common->specific_dma) {
+ close(common->devfd);
+ return;
+ }
+
+ nbl_mdev_unmap_device(adapter);
+}
+
+int nbl_userdev_get_iova_mode(const struct rte_pci_device *dev)
+{
+ char pathname[PATH_MAX];
+ const struct rte_pci_addr *loc = &dev->addr;
+
+ snprintf(pathname, sizeof(pathname),
+ "/dev/nbl_userdev/" PCI_PRI_FMT, loc->domain,
+ loc->bus, loc->devid, loc->function);
+
+ if (!access(pathname, F_OK))
+ return RTE_IOVA_PA;
- return rte_pci_unmap_device(pci_dev);
+ return RTE_IOVA_DC;
}
diff --git a/drivers/net/nbl/nbl_common/nbl_userdev.h b/drivers/net/nbl/nbl_common/nbl_userdev.h
index 11cc29999c..2221e19c67 100644
--- a/drivers/net/nbl/nbl_common/nbl_userdev.h
+++ b/drivers/net/nbl/nbl_common/nbl_userdev.h
@@ -6,5 +6,16 @@
#define _NBL_USERDEV_H_
#include "nbl_ethdev.h"
+#include "nbl_common.h"
+
+#define NBL_USERDEV_INIT_COMMON(common) do { \
+ typeof(common) _comm = (common); \
+ _comm->devfd = -1; \
+ _comm->eventfd = -1; \
+ _comm->specific_dma = false; \
+ _comm->dma_set_msb = false; \
+ _comm->ifindex = -1; \
+ _comm->nl_socket_route = -1; \
+} while (0)
#endif
diff --git a/drivers/net/nbl/nbl_core.c b/drivers/net/nbl/nbl_core.c
index f4ddc9e219..4a7b03a01f 100644
--- a/drivers/net/nbl/nbl_core.c
+++ b/drivers/net/nbl/nbl_core.c
@@ -29,10 +29,11 @@ static void nbl_init_func_caps(const struct rte_pci_device *pci_dev, struct nbl_
int nbl_core_init(struct nbl_adapter *adapter, struct rte_eth_dev *eth_dev)
{
- const struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
const struct nbl_product_core_ops *product_base_ops = NULL;
int ret = 0;
+ adapter->pci_dev = pci_dev;
nbl_init_func_caps(pci_dev, &adapter->caps);
product_base_ops = nbl_core_get_product_ops(adapter->caps.product_type);
diff --git a/drivers/net/nbl/nbl_core.h b/drivers/net/nbl/nbl_core.h
index bdf31e15da..997544b112 100644
--- a/drivers/net/nbl/nbl_core.h
+++ b/drivers/net/nbl/nbl_core.h
@@ -51,7 +51,6 @@
#define NBL_IS_NOT_COEXISTENCE(common) ({ typeof(common) _common = (common); \
_common->nl_socket_route < 0 || \
_common->ifindex < 0; })
-
struct nbl_core {
void *phy_mgt;
void *res_mgt;
diff --git a/drivers/net/nbl/nbl_ethdev.c b/drivers/net/nbl/nbl_ethdev.c
index 90b1487567..e7694988ce 100644
--- a/drivers/net/nbl/nbl_ethdev.c
+++ b/drivers/net/nbl/nbl_ethdev.c
@@ -108,6 +108,11 @@ static int nbl_pci_remove(struct rte_pci_device *pci_dev)
return rte_eth_dev_pci_generic_remove(pci_dev, nbl_eth_dev_uninit);
}
+static int nbl_pci_get_iova_mode(const struct rte_pci_device *dev)
+{
+ return nbl_userdev_get_iova_mode(dev);
+}
+
static const struct rte_pci_id pci_id_nbl_map[] = {
{ RTE_PCI_DEVICE(NBL_VENDOR_ID, NBL_DEVICE_ID_M18110) },
{ RTE_PCI_DEVICE(NBL_VENDOR_ID, NBL_DEVICE_ID_M18110_LX) },
@@ -136,6 +141,7 @@ static struct rte_pci_driver nbl_pmd = {
RTE_PCI_DRV_PROBE_AGAIN,
.probe = nbl_pci_probe,
.remove = nbl_pci_remove,
+ .get_iova_mode = nbl_pci_get_iova_mode,
};
RTE_PMD_REGISTER_PCI(net_nbl, nbl_pmd);
diff --git a/drivers/net/nbl/nbl_hw/nbl_channel.c b/drivers/net/nbl/nbl_hw/nbl_channel.c
index 09f1870ed0..c8998ae3e5 100644
--- a/drivers/net/nbl/nbl_hw/nbl_channel.c
+++ b/drivers/net/nbl/nbl_hw/nbl_channel.c
@@ -575,6 +575,181 @@ static struct nbl_channel_ops chan_ops = {
.set_state = nbl_chan_set_state,
};
+static int nbl_chan_userdev_send_msg(void *priv, struct nbl_chan_send_info *chan_send)
+{
+ struct nbl_channel_mgt *chan_mgt = (struct nbl_channel_mgt *)priv;
+ struct nbl_common_info *common = NBL_CHAN_MGT_TO_COMMON(chan_mgt);
+ struct nbl_dev_user_channel_msg msg;
+ uint32_t *result;
+ int ret;
+
+ if (chan_mgt->state)
+ return -EIO;
+
+ msg.msg_type = chan_send->msg_type;
+ msg.dst_id = chan_send->dstid;
+ msg.arg_len = chan_send->arg_len;
+ msg.ack = chan_send->ack;
+ msg.ack_length = chan_send->resp_len;
+ rte_memcpy(&msg.data, chan_send->arg, chan_send->arg_len);
+
+ ret = ioctl(common->devfd, NBL_DEV_USER_CHANNEL, &msg);
+ if (ret) {
+ NBL_LOG(ERR, "user mailbox failed, type %u, ret %d", msg.msg_type, ret);
+ return -1;
+ }
+
+ /* 4-byte alignment */
+ result = (uint32_t *)RTE_PTR_ALIGN(((unsigned char *)msg.data) + chan_send->arg_len, 4);
+ rte_memcpy(chan_send->resp, result, RTE_MIN(chan_send->resp_len, msg.ack_length));
+
+ return msg.ack_err;
+}
+
+static int nbl_chan_userdev_send_ack(void *priv, struct nbl_chan_ack_info *chan_ack)
+{
+ struct nbl_channel_mgt *chan_mgt = (struct nbl_channel_mgt *)priv;
+ struct nbl_chan_send_info chan_send;
+ u32 *tmp;
+ u32 len = 3 * sizeof(u32) + chan_ack->data_len;
+
+ tmp = rte_zmalloc("nbl_chan_send_tmp", len, 0);
+ if (!tmp) {
+ NBL_LOG(ERR, "Chan send ack data malloc failed");
+ return -ENOMEM;
+ }
+
+ tmp[0] = chan_ack->msg_type;
+ tmp[1] = chan_ack->msgid;
+ tmp[2] = (u32)chan_ack->err;
+ if (chan_ack->data && chan_ack->data_len)
+ memcpy(&tmp[3], chan_ack->data, chan_ack->data_len);
+
+ NBL_CHAN_SEND(chan_send, chan_ack->dstid, NBL_CHAN_MSG_ACK, tmp, len, NULL, 0, 0);
+ nbl_chan_userdev_send_msg(chan_mgt, &chan_send);
+ rte_free(tmp);
+
+ return 0;
+}
+
+static void nbl_chan_userdev_eventfd_handler(void *cn_arg)
+{
+ size_t page_size = rte_mem_page_size();
+ char *bak_buf = malloc(page_size);
+ struct nbl_channel_mgt *chan_mgt = (struct nbl_channel_mgt *)cn_arg;
+ union nbl_chan_info *chan_info = NBL_CHAN_MGT_TO_CHAN_INFO(chan_mgt);
+ char *data = (char *)chan_info->userdev.shm_msg_ring + 8;
+ char *payload;
+ u64 buf;
+ int nbytes __rte_unused;
+ u32 total_len;
+ u32 *head = (u32 *)chan_info->userdev.shm_msg_ring;
+ u32 *tail = (u32 *)chan_info->userdev.shm_msg_ring + 1, tmp_tail;
+ u32 shmmsgbuf_size = page_size - 8;
+
+ if (!bak_buf) {
+ NBL_LOG(ERR, "nbl chan handler malloc failed");
+ return;
+ }
+ tmp_tail = *tail;
+ nbytes = read(chan_info->userdev.eventfd, &buf, sizeof(buf));
+
+ while (*head != tmp_tail) {
+ total_len = *(u32 *)(data + tmp_tail);
+ if (tmp_tail + total_len > shmmsgbuf_size) {
+ u32 copy_len;
+
+ copy_len = shmmsgbuf_size - tmp_tail;
+ memcpy(bak_buf, data + tmp_tail, copy_len);
+ memcpy(bak_buf + copy_len, data, total_len - copy_len);
+ payload = bak_buf;
+ } else {
+ payload = (data + tmp_tail);
+ }
+
+ nbl_chan_recv_msg(chan_mgt, payload + 4);
+ tmp_tail += total_len;
+ if (tmp_tail >= shmmsgbuf_size)
+ tmp_tail -= shmmsgbuf_size;
+ }
+
+ free(bak_buf);
+ *tail = tmp_tail;
+}
+
+static int nbl_chan_userdev_setup_queue(void *priv)
+{
+ size_t page_size = rte_mem_page_size();
+ struct nbl_channel_mgt *chan_mgt = (struct nbl_channel_mgt *)priv;
+ union nbl_chan_info *chan_info = NBL_CHAN_MGT_TO_CHAN_INFO(chan_mgt);
+ struct nbl_common_info *common = NBL_CHAN_MGT_TO_COMMON(chan_mgt);
+ int ret;
+
+ if (common->devfd < 0 || common->eventfd < 0)
+ return -EINVAL;
+
+ chan_info->userdev.eventfd = common->eventfd;
+ chan_info->userdev.intr_handle.fd = common->eventfd;
+ chan_info->userdev.intr_handle.type = RTE_INTR_HANDLE_EXT;
+
+ ret = rte_intr_callback_register(&chan_info->userdev.intr_handle,
+ nbl_chan_userdev_eventfd_handler, chan_mgt);
+
+ if (ret) {
+ NBL_LOG(ERR, "channel userdev event handler register failed, %d", ret);
+ return ret;
+ }
+
+ chan_info->userdev.shm_msg_ring = rte_mem_map(NULL, page_size,
+ RTE_PROT_READ | RTE_PROT_WRITE,
+ RTE_MAP_SHARED, common->devfd,
+ NBL_DEV_USER_INDEX_TO_OFFSET
+ (NBL_DEV_SHM_MSG_RING_INDEX));
+ if (!chan_info->userdev.shm_msg_ring) {
+ rte_intr_callback_unregister(&chan_info->userdev.intr_handle,
+ nbl_chan_userdev_eventfd_handler, chan_mgt);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int nbl_chan_userdev_teardown_queue(void *priv)
+{
+ struct nbl_channel_mgt *chan_mgt = (struct nbl_channel_mgt *)priv;
+ union nbl_chan_info *chan_info = NBL_CHAN_MGT_TO_CHAN_INFO(chan_mgt);
+
+ rte_mem_unmap(chan_info->userdev.shm_msg_ring, rte_mem_page_size());
+ rte_intr_callback_unregister(&chan_info->userdev.intr_handle,
+ nbl_chan_userdev_eventfd_handler, chan_mgt);
+
+ return 0;
+}
+
+static int nbl_chan_userdev_register_msg(void *priv, uint16_t msg_type, nbl_chan_resp func,
+ void *callback_priv)
+{
+ struct nbl_channel_mgt *chan_mgt = (struct nbl_channel_mgt *)priv;
+ struct nbl_common_info *common = NBL_CHAN_MGT_TO_COMMON(chan_mgt);
+ int ret, type;
+
+ type = msg_type;
+ nbl_chan_register_msg(priv, msg_type, func, callback_priv);
+ ret = ioctl(common->devfd, NBL_DEV_USER_SET_LISTENER, &type);
+
+ return ret;
+}
+
+static struct nbl_channel_ops userdev_ops = {
+ .send_msg = nbl_chan_userdev_send_msg,
+ .send_ack = nbl_chan_userdev_send_ack,
+ .register_msg = nbl_chan_userdev_register_msg,
+ .setup_queue = nbl_chan_userdev_setup_queue,
+ .teardown_queue = nbl_chan_userdev_teardown_queue,
+ .set_state = nbl_chan_set_state,
+};
+
static int nbl_chan_setup_chan_mgt(struct nbl_adapter *adapter,
struct nbl_channel_mgt_leonis **chan_mgt_leonis)
{
@@ -594,7 +769,7 @@ static int nbl_chan_setup_chan_mgt(struct nbl_adapter *adapter,
goto alloc_mailbox_fail;
NBL_CHAN_MGT_TO_CHAN_INFO(&(*chan_mgt_leonis)->chan_mgt) = mailbox;
-
+ NBL_CHAN_MGT_TO_COMMON(&(*chan_mgt_leonis)->chan_mgt) = &adapter->common;
return 0;
alloc_mailbox_fail:
@@ -619,11 +794,17 @@ static void nbl_chan_remove_ops(struct nbl_channel_ops_tbl **chan_ops_tbl)
static int nbl_chan_setup_ops(struct nbl_channel_ops_tbl **chan_ops_tbl,
struct nbl_channel_mgt_leonis *chan_mgt_leonis)
{
+ struct nbl_common_info *common;
+
*chan_ops_tbl = rte_zmalloc("nbl_chan_ops_tbl", sizeof(struct nbl_channel_ops_tbl), 0);
if (!*chan_ops_tbl)
return -ENOMEM;
- NBL_CHAN_OPS_TBL_TO_OPS(*chan_ops_tbl) = &chan_ops;
+ common = NBL_CHAN_MGT_TO_COMMON(&chan_mgt_leonis->chan_mgt);
+ if (NBL_IS_NOT_COEXISTENCE(common))
+ NBL_CHAN_OPS_TBL_TO_OPS(*chan_ops_tbl) = &chan_ops;
+ else
+ NBL_CHAN_OPS_TBL_TO_OPS(*chan_ops_tbl) = &userdev_ops;
NBL_CHAN_OPS_TBL_TO_PRIV(*chan_ops_tbl) = chan_mgt_leonis;
chan_mgt_leonis->chan_mgt.msg_handler[NBL_CHAN_MSG_ACK].func = nbl_chan_recv_ack_msg;
diff --git a/drivers/net/nbl/nbl_hw/nbl_channel.h b/drivers/net/nbl/nbl_hw/nbl_channel.h
index df2222d995..a6ba9fcd71 100644
--- a/drivers/net/nbl/nbl_hw/nbl_channel.h
+++ b/drivers/net/nbl/nbl_hw/nbl_channel.h
@@ -7,6 +7,10 @@
#include "nbl_ethdev.h"
+#define NBL_CHAN_MAX_PAGE_SIZE (64 * 1024)
+
+#define NBL_CHAN_MGT_TO_COMMON(chan_mgt) ((chan_mgt)->common)
+#define NBL_CHAN_MGT_TO_DEV(chan_mgt) NBL_COMMON_TO_DEV(NBL_CHAN_MGT_TO_COMMON(chan_mgt))
#define NBL_CHAN_MGT_TO_PHY_OPS_TBL(chan_mgt) ((chan_mgt)->phy_ops_tbl)
#define NBL_CHAN_MGT_TO_PHY_OPS(chan_mgt) (NBL_CHAN_MGT_TO_PHY_OPS_TBL(chan_mgt)->ops)
#define NBL_CHAN_MGT_TO_PHY_PRIV(chan_mgt) (NBL_CHAN_MGT_TO_PHY_OPS_TBL(chan_mgt)->priv)
@@ -90,6 +94,12 @@ union nbl_chan_info {
struct nbl_work work;
} mailbox;
+
+ struct {
+ struct rte_intr_handle intr_handle;
+ void *shm_msg_ring;
+ int eventfd;
+ } userdev;
};
struct nbl_chan_msg_handler {
@@ -99,6 +109,7 @@ struct nbl_chan_msg_handler {
struct nbl_channel_mgt {
uint32_t mode;
+ struct nbl_common_info *common;
struct nbl_phy_ops_tbl *phy_ops_tbl;
union nbl_chan_info *chan_info;
struct nbl_chan_msg_handler msg_handler[NBL_CHAN_MSG_MAX];
diff --git a/drivers/net/nbl/nbl_hw/nbl_hw_leonis/nbl_phy_leonis_snic.c b/drivers/net/nbl/nbl_hw/nbl_hw_leonis/nbl_phy_leonis_snic.c
index 9ed375bc1e..bfb86455ae 100644
--- a/drivers/net/nbl/nbl_hw/nbl_hw_leonis/nbl_phy_leonis_snic.c
+++ b/drivers/net/nbl/nbl_hw/nbl_hw_leonis/nbl_phy_leonis_snic.c
@@ -177,7 +177,7 @@ int nbl_phy_init_leonis_snic(void *p)
struct nbl_phy_mgt *phy_mgt;
struct nbl_phy_ops_tbl **phy_ops_tbl;
struct nbl_adapter *adapter = (struct nbl_adapter *)p;
- struct rte_pci_device *pci_dev = adapter->pci_dev;
+ const struct rte_pci_device *pci_dev = adapter->pci_dev;
int ret = 0;
phy_mgt_leonis_snic = (struct nbl_phy_mgt_leonis_snic **)&NBL_ADAPTER_TO_PHY_MGT(adapter);
diff --git a/drivers/net/nbl/nbl_include/nbl_def_common.h b/drivers/net/nbl/nbl_include/nbl_def_common.h
index 0b87c3003d..795679576e 100644
--- a/drivers/net/nbl/nbl_include/nbl_def_common.h
+++ b/drivers/net/nbl/nbl_include/nbl_def_common.h
@@ -24,6 +24,67 @@
#define NBL_TWO_ETHERNET_MAX_MAC_NUM (512)
#define NBL_FOUR_ETHERNET_MAX_MAC_NUM (1024)
+#define NBL_DEV_USER_TYPE ('n')
+#define NBL_DEV_USER_DATA_LEN (2044)
+
+#define NBL_DEV_USER_PCI_OFFSET_SHIFT 40
+#define NBL_DEV_USER_OFFSET_TO_INDEX(off) ((off) >> NBL_DEV_USER_PCI_OFFSET_SHIFT)
+#define NBL_DEV_USER_INDEX_TO_OFFSET(index) ((u64)(index) << NBL_DEV_USER_PCI_OFFSET_SHIFT)
+#define NBL_DEV_SHM_MSG_RING_INDEX (6)
+
+struct nbl_dev_user_channel_msg {
+ u16 msg_type;
+ u16 dst_id;
+ u32 arg_len;
+ u32 ack_err;
+ u16 ack_length;
+ u16 ack;
+ u32 data[NBL_DEV_USER_DATA_LEN];
+};
+
+#define NBL_DEV_USER_CHANNEL _IO(NBL_DEV_USER_TYPE, 0)
+
+struct nbl_dev_user_dma_map {
+ uint32_t argsz;
+ uint32_t flags;
+#define NBL_DEV_USER_DMA_MAP_FLAG_READ (RTE_BIT64(0)) /* readable from device */
+#define NBL_DEV_USER_DMA_MAP_FLAG_WRITE (RTE_BIT64(1)) /* writable from device */
+ uint64_t vaddr; /* Process virtual address */
+ uint64_t iova; /* IO virtual address */
+ uint64_t size; /* Size of mapping (bytes) */
+};
+
+#define NBL_DEV_USER_MAP_DMA _IO(NBL_DEV_USER_TYPE, 1)
+
+struct nbl_dev_user_dma_unmap {
+ uint32_t argsz;
+ uint32_t flags;
+ uint64_t vaddr; /* Process virtual address */
+ uint64_t iova; /* IO virtual address */
+ uint64_t size; /* Size of mapping (bytes) */
+};
+
+#define NBL_DEV_USER_UNMAP_DMA _IO(NBL_DEV_USER_TYPE, 2)
+
+#define NBL_KERNEL_NETWORK 0
+#define NBL_USER_NETWORK 1
+
+#define NBL_DEV_USER_SWITCH_NETWORK _IO(NBL_DEV_USER_TYPE, 3)
+#define NBL_DEV_USER_GET_IFINDEX _IO(NBL_DEV_USER_TYPE, 4)
+
+struct nbl_dev_user_link_stat {
+ u8 state;
+ u8 flush;
+};
+
+#define NBL_DEV_USER_SET_EVENTFD _IO(NBL_DEV_USER_TYPE, 5)
+#define NBL_DEV_USER_CLEAR_EVENTFD _IO(NBL_DEV_USER_TYPE, 6)
+#define NBL_DEV_USER_SET_LISTENER _IO(NBL_DEV_USER_TYPE, 7)
+#define NBL_DEV_USER_GET_BAR_SIZE _IO(NBL_DEV_USER_TYPE, 8)
+#define NBL_DEV_USER_GET_DMA_LIMIT _IO(NBL_DEV_USER_TYPE, 9)
+#define NBL_DEV_USER_SET_PROMISC_MODE _IO(NBL_DEV_USER_TYPE, 10)
+#define NBL_DEV_USER_SET_MCAST_MODE _IO(NBL_DEV_USER_TYPE, 11)
+
struct nbl_dma_mem {
void *va;
uint64_t pa;
@@ -49,7 +110,9 @@ int nbl_thread_add_work(struct nbl_work *work);
void nbl_thread_del_work(struct nbl_work *work);
struct nbl_adapter;
+int nbl_userdev_port_config(struct nbl_adapter *adapter, int start);
+int nbl_userdev_port_isolate(struct nbl_adapter *adapter, int set, struct rte_flow_error *error);
int nbl_pci_map_device(struct nbl_adapter *adapter);
void nbl_pci_unmap_device(struct nbl_adapter *adapter);
-
+int nbl_userdev_get_iova_mode(const struct rte_pci_device *dev);
#endif
diff --git a/drivers/net/nbl/nbl_include/nbl_include.h b/drivers/net/nbl/nbl_include/nbl_include.h
index 44d157d2a7..55ab7ac8bd 100644
--- a/drivers/net/nbl/nbl_include/nbl_include.h
+++ b/drivers/net/nbl/nbl_include/nbl_include.h
@@ -44,6 +44,8 @@
#include <rte_common.h>
#include <rte_thread.h>
#include <rte_stdatomic.h>
+#include <rte_eal_paging.h>
+#include <eal_interrupts.h>
#include "nbl_logs.h"
--
2.43.0
^ permalink raw reply [flat|nested] 27+ messages in thread
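[Aside, not part of the patch: the eventfd handler in nbl_chan_userdev_eventfd_handler() above drains a single shared page whose first two u32 words are the producer head and consumer tail, followed by a data area of page_size - 8 bytes; a length-prefixed message that wraps past the end of the data area is reassembled into a bounce buffer before being parsed. A minimal standalone sketch of that wraparound read, with all names invented for the example:]

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Illustrative sketch only: mirrors the wraparound handling in the patch's
 * nbl_chan_userdev_eventfd_handler(). The u32 length prefix counts itself,
 * matching total_len in the driver. */

#define RING_DATA_SIZE 64u /* the driver uses page_size - 8 */

struct shm_ring {
	uint32_t head;                 /* producer position */
	uint32_t tail;                 /* consumer position */
	uint8_t data[RING_DATA_SIZE];
};

/* Read one length-prefixed message starting at the current tail into out,
 * handling wraparound; returns the new tail. */
static uint32_t ring_read_one(const struct shm_ring *r, uint8_t *out,
			      uint32_t *out_len)
{
	uint32_t tail = r->tail;
	uint32_t total_len;

	memcpy(&total_len, &r->data[tail], sizeof(total_len));
	if (tail + total_len > RING_DATA_SIZE) {
		/* message wraps: copy the two halves into the bounce buffer */
		uint32_t first = RING_DATA_SIZE - tail;

		memcpy(out, &r->data[tail], first);
		memcpy(out + first, r->data, total_len - first);
	} else {
		memcpy(out, &r->data[tail], total_len);
	}
	*out_len = total_len;
	tail += total_len;
	if (tail >= RING_DATA_SIZE)
		tail -= RING_DATA_SIZE;
	return tail;
}
```

[As in the driver, only the consumer advances tail, so no atomics are needed on the consumer side beyond the eventfd wakeup.]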
* [PATCH v1 12/17] net/nbl: add nbl ethdev configuration
2025-06-12 8:58 [PATCH v1 00/17] NBL PMD for Nebulamatrix NICs Kyo Liu
` (10 preceding siblings ...)
2025-06-12 8:58 ` [PATCH v1 11/17] net/nbl: add nbl coexistence mode for nbl Kyo Liu
@ 2025-06-12 8:58 ` Kyo Liu
2025-06-12 8:58 ` [PATCH v1 13/17] net/nbl: add nbl device rxtx queue setup and release ops Kyo Liu
` (7 subsequent siblings)
19 siblings, 0 replies; 27+ messages in thread
From: Kyo Liu @ 2025-06-12 8:58 UTC (permalink / raw)
To: kyo.liu, dev; +Cc: Dimon Zhao, Leon Yu, Sam Chen
Add ethdev configuration support for NBL devices
Signed-off-by: Kyo Liu <kyo.liu@nebula-matrix.com>
---
drivers/net/nbl/nbl_dev/nbl_dev.c | 33 +++++++++++++++++++++--
drivers/net/nbl/nbl_include/nbl_include.h | 3 ++-
2 files changed, 33 insertions(+), 3 deletions(-)
diff --git a/drivers/net/nbl/nbl_dev/nbl_dev.c b/drivers/net/nbl/nbl_dev/nbl_dev.c
index f02ed7f94e..4eea07c1ff 100644
--- a/drivers/net/nbl/nbl_dev/nbl_dev.c
+++ b/drivers/net/nbl/nbl_dev/nbl_dev.c
@@ -4,15 +4,44 @@
#include "nbl_dev.h"
-static int nbl_dev_configure(struct rte_eth_dev *eth_dev)
+static int nbl_dev_port_configure(struct nbl_adapter *adapter)
{
- RTE_SET_USED(eth_dev);
+ adapter->state = NBL_ETHDEV_CONFIGURED;
+
return 0;
}
+static int nbl_dev_configure(struct rte_eth_dev *eth_dev)
+{
+ struct rte_eth_dev_data *dev_data = eth_dev->data;
+ struct nbl_adapter *adapter = ETH_DEV_TO_NBL_DEV_PF_PRIV(eth_dev);
+ int ret;
+
+ if (dev_data == NULL || adapter == NULL)
+ return -EINVAL;
+
+ NBL_LOG(INFO, "Begin to configure the device, state: %d", adapter->state);
+
+ dev_data->dev_conf.intr_conf.lsc = 0;
+
+ switch (adapter->state) {
+ case NBL_ETHDEV_CONFIGURED:
+ case NBL_ETHDEV_INITIALIZED:
+ ret = nbl_dev_port_configure(adapter);
+ break;
+ default:
+ ret = -EINVAL;
+ break;
+ }
+
+ NBL_LOG(INFO, "configure the device done %d", ret);
+ return ret;
+}
+
static int nbl_dev_port_start(struct rte_eth_dev *eth_dev)
{
RTE_SET_USED(eth_dev);
+
return 0;
}
diff --git a/drivers/net/nbl/nbl_include/nbl_include.h b/drivers/net/nbl/nbl_include/nbl_include.h
index 55ab7ac8bd..5b77881851 100644
--- a/drivers/net/nbl/nbl_include/nbl_include.h
+++ b/drivers/net/nbl/nbl_include/nbl_include.h
@@ -140,9 +140,10 @@ struct nbl_common_info {
/* curr_network 0 means kernel network, 1 means user network */
u8 curr_network:1;
u8 is_vf:1;
+ u8 pf_start:1;
u8 specific_dma:1;
u8 dma_set_msb:1;
- u8 rsv:3;
+ u8 rsv:2;
struct nbl_board_port_info board_info;
};
--
2.43.0
^ permalink raw reply [flat|nested] 27+ messages in thread
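[Aside, not part of the patch: the nbl_userdev character device introduced in patch 11 multiplexes several mappable resources through a single mmap() offset by packing a resource index into the high bits of the 64-bit offset, as NBL_DEV_USER_PCI_OFFSET_SHIFT and its companion macros do in nbl_def_common.h. With a shift of 40, the low 40 bits remain available for an offset inside the selected resource; NBL_DEV_SHM_MSG_RING_INDEX (6) is what nbl_chan_userdev_setup_queue() converts to an offset for rte_mem_map(). A standalone sketch of the encoding, with local names:]

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative sketch of the mmap-offset encoding from nbl_def_common.h;
 * this file is not part of the patch. */

#define PCI_OFFSET_SHIFT 40

/* Pack a resource index into the high bits of a 64-bit mmap offset. */
static uint64_t index_to_offset(uint64_t index)
{
	return index << PCI_OFFSET_SHIFT;
}

/* Recover the resource index; the low 40 bits are the in-resource offset. */
static uint64_t offset_to_index(uint64_t off)
{
	return off >> PCI_OFFSET_SHIFT;
}
```

[The round trip is lossless for the index, and any in-resource offset below 1 << 40 leaves the index untouched.]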
* [PATCH v1 13/17] net/nbl: add nbl device rxtx queue setup and release ops
2025-06-12 8:58 [PATCH v1 00/17] NBL PMD for Nebulamatrix NICs Kyo Liu
` (11 preceding siblings ...)
2025-06-12 8:58 ` [PATCH v1 12/17] net/nbl: add nbl ethdev configuration Kyo Liu
@ 2025-06-12 8:58 ` Kyo Liu
2025-06-12 8:58 ` [PATCH v1 14/17] net/nbl: add nbl device start and stop ops Kyo Liu
` (6 subsequent siblings)
19 siblings, 0 replies; 27+ messages in thread
From: Kyo Liu @ 2025-06-12 8:58 UTC (permalink / raw)
To: kyo.liu, dev; +Cc: Dimon Zhao, Leon Yu, Sam Chen
Implement NBL device Rx and Tx queue setup and release functions
Signed-off-by: Kyo Liu <kyo.liu@nebula-matrix.com>
---
drivers/net/nbl/nbl_dev/nbl_dev.c | 81 +++++
.../nbl/nbl_hw/nbl_hw_leonis/nbl_res_leonis.c | 2 +
drivers/net/nbl/nbl_hw/nbl_resource.h | 99 ++++++
drivers/net/nbl/nbl_hw/nbl_txrx.c | 287 ++++++++++++++++--
drivers/net/nbl/nbl_hw/nbl_txrx.h | 99 ++++++
drivers/net/nbl/nbl_include/nbl_def_common.h | 5 +
drivers/net/nbl/nbl_include/nbl_include.h | 1 +
7 files changed, 552 insertions(+), 22 deletions(-)
diff --git a/drivers/net/nbl/nbl_dev/nbl_dev.c b/drivers/net/nbl/nbl_dev/nbl_dev.c
index 4eea07c1ff..4faa58ace8 100644
--- a/drivers/net/nbl/nbl_dev/nbl_dev.c
+++ b/drivers/net/nbl/nbl_dev/nbl_dev.c
@@ -57,11 +57,92 @@ static int nbl_dev_close(struct rte_eth_dev *eth_dev)
return 0;
}
+static int nbl_tx_queue_setup(struct rte_eth_dev *eth_dev, u16 queue_idx,
+ u16 nb_desc, unsigned int socket_id,
+ const struct rte_eth_txconf *conf)
+{
+ struct nbl_adapter *adapter = ETH_DEV_TO_NBL_DEV_PF_PRIV(eth_dev);
+ struct nbl_dev_mgt *dev_mgt = NBL_ADAPTER_TO_DEV_MGT(adapter);
+ struct nbl_dispatch_ops *disp_ops = NBL_DEV_MGT_TO_DISP_OPS(dev_mgt);
+ struct nbl_dev_ring_mgt *ring_mgt = &dev_mgt->net_dev->ring_mgt;
+ struct nbl_dev_ring *tx_ring = &ring_mgt->tx_rings[queue_idx];
+ struct nbl_start_tx_ring_param param = { 0 };
+ int ret;
+
+ param.queue_idx = queue_idx;
+ param.nb_desc = nb_desc;
+ param.socket_id = socket_id;
+ param.conf = conf;
+ param.product = adapter->caps.product_type;
+ param.bond_broadcast_check = NULL;
+ ret = disp_ops->start_tx_ring(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt), &param, &tx_ring->dma);
+ if (ret) {
+ NBL_LOG(ERR, "start_tx_ring failed %d", ret);
+ return ret;
+ }
+
+ tx_ring->desc_num = nb_desc;
+
+ return ret;
+}
+
+static int nbl_rx_queue_setup(struct rte_eth_dev *eth_dev, u16 queue_idx,
+ u16 nb_desc, unsigned int socket_id,
+ const struct rte_eth_rxconf *conf,
+ struct rte_mempool *mempool)
+{
+ struct nbl_adapter *adapter = ETH_DEV_TO_NBL_DEV_PF_PRIV(eth_dev);
+ struct nbl_dev_mgt *dev_mgt = NBL_ADAPTER_TO_DEV_MGT(adapter);
+ struct nbl_dispatch_ops *disp_ops = NBL_DEV_MGT_TO_DISP_OPS(dev_mgt);
+ struct nbl_dev_ring_mgt *ring_mgt = &dev_mgt->net_dev->ring_mgt;
+ struct nbl_dev_ring *rx_ring = &ring_mgt->rx_rings[queue_idx];
+ struct nbl_start_rx_ring_param param = { 0 };
+ int ret;
+
+ param.queue_idx = queue_idx;
+ param.nb_desc = nb_desc;
+ param.socket_id = socket_id;
+ param.conf = conf;
+ param.mempool = mempool;
+ param.product = adapter->caps.product_type;
+ ret = disp_ops->start_rx_ring(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt), &param, &rx_ring->dma);
+ if (ret) {
+ NBL_LOG(ERR, "start_rx_ring failed %d", ret);
+ return ret;
+ }
+
+ rx_ring->desc_num = nb_desc;
+
+ return ret;
+}
+
+static void nbl_tx_queues_release(struct rte_eth_dev *eth_dev, uint16_t queue_id)
+{
+ struct nbl_adapter *adapter = ETH_DEV_TO_NBL_DEV_PF_PRIV(eth_dev);
+ struct nbl_dev_mgt *dev_mgt = NBL_ADAPTER_TO_DEV_MGT(adapter);
+ struct nbl_dispatch_ops *disp_ops = NBL_DEV_MGT_TO_DISP_OPS(dev_mgt);
+
+ disp_ops->release_tx_ring(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt), queue_id);
+}
+
+static void nbl_rx_queues_release(struct rte_eth_dev *eth_dev, uint16_t queue_id)
+{
+ struct nbl_adapter *adapter = ETH_DEV_TO_NBL_DEV_PF_PRIV(eth_dev);
+ struct nbl_dev_mgt *dev_mgt = NBL_ADAPTER_TO_DEV_MGT(adapter);
+ struct nbl_dispatch_ops *disp_ops = NBL_DEV_MGT_TO_DISP_OPS(dev_mgt);
+
+ disp_ops->release_rx_ring(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt), queue_id);
+}
+
struct nbl_dev_ops dev_ops = {
.dev_configure = nbl_dev_configure,
.dev_start = nbl_dev_port_start,
.dev_stop = nbl_dev_port_stop,
.dev_close = nbl_dev_close,
+ .tx_queue_setup = nbl_tx_queue_setup,
+ .rx_queue_setup = nbl_rx_queue_setup,
+ .tx_queue_release = nbl_tx_queues_release,
+ .rx_queue_release = nbl_rx_queues_release,
};
static int nbl_dev_setup_chan_queue(struct nbl_adapter *adapter)
diff --git a/drivers/net/nbl/nbl_hw/nbl_hw_leonis/nbl_res_leonis.c b/drivers/net/nbl/nbl_hw/nbl_hw_leonis/nbl_res_leonis.c
index 6327aa55b4..b785774f67 100644
--- a/drivers/net/nbl/nbl_hw/nbl_hw_leonis/nbl_res_leonis.c
+++ b/drivers/net/nbl/nbl_hw/nbl_hw_leonis/nbl_res_leonis.c
@@ -53,6 +53,8 @@ static int nbl_res_start(struct nbl_resource_mgt_leonis *res_mgt_leonis)
struct nbl_resource_mgt *res_mgt = &res_mgt_leonis->res_mgt;
int ret;
+ res_mgt->res_info.base_qid = 0;
+
ret = nbl_txrx_mgt_start(res_mgt);
if (ret)
goto txrx_failed;
diff --git a/drivers/net/nbl/nbl_hw/nbl_resource.h b/drivers/net/nbl/nbl_hw/nbl_resource.h
index 07e6327259..543054a2cb 100644
--- a/drivers/net/nbl/nbl_hw/nbl_resource.h
+++ b/drivers/net/nbl/nbl_hw/nbl_resource.h
@@ -16,12 +16,102 @@
#define NBL_RES_MGT_TO_CHAN_OPS(res_mgt) (NBL_RES_MGT_TO_CHAN_OPS_TBL(res_mgt)->ops)
#define NBL_RES_MGT_TO_CHAN_PRIV(res_mgt) (NBL_RES_MGT_TO_CHAN_OPS_TBL(res_mgt)->priv)
#define NBL_RES_MGT_TO_ETH_DEV(res_mgt) ((res_mgt)->eth_dev)
+#define NBL_RES_MGT_TO_COMMON(res_mgt) ((res_mgt)->common)
#define NBL_RES_MGT_TO_TXRX_MGT(res_mgt) ((res_mgt)->txrx_mgt)
+#define NBL_RES_MGT_TO_TX_RING(res_mgt, index) \
+ (NBL_RES_MGT_TO_TXRX_MGT(res_mgt)->tx_rings[(index)])
+#define NBL_RES_MGT_TO_RX_RING(res_mgt, index) \
+ (NBL_RES_MGT_TO_TXRX_MGT(res_mgt)->rx_rings[(index)])
+
+struct nbl_packed_desc {
+ rte_le64_t addr;
+ rte_le32_t len;
+ rte_le16_t id;
+ rte_le16_t flags;
+};
+
+struct nbl_tx_entry {
+ struct rte_mbuf *mbuf;
+ uint16_t first_id;
+};
+
+struct nbl_rx_entry {
+ struct rte_mbuf *mbuf;
+};
struct nbl_res_tx_ring {
+ volatile struct nbl_packed_desc *desc;
+ struct nbl_tx_entry *tx_entry;
+ const struct rte_memzone *net_hdr_mz;
+ volatile uint8_t *notify;
+ struct rte_eth_dev *eth_dev;
+ struct nbl_common_info *common;
+ u64 default_hdr[2];
+
+ enum nbl_product_type product;
+ int dma_limit_msb;
+ bool dma_set_msb;
+ u16 nb_desc;
+ u16 next_to_clean;
+ u16 next_to_use;
+
+ u16 avail_used_flags;
+ bool used_wrap_counter;
+ u16 notify_qid;
+ u16 exthdr_len;
+
+ u16 vlan_proto;
+ u16 vlan_tci;
+ u16 lag_id;
+ u16 vq_free_cnt;
+ /* Start freeing TX buffers if there are fewer free descriptors than this value */
+ u16 tx_free_thresh;
+ /* Number of Tx descriptors to use before RS bit is set */
+ u16 tx_rs_thresh;
+
+ unsigned int size;
+
+ u16 queue_id;
+
+ u64 offloads;
+ u64 ring_phys_addr;
+
+ u16 (*prep_tx_ehdr)(void *priv, struct rte_mbuf *mbuf);
};
struct nbl_res_rx_ring {
+ volatile struct nbl_packed_desc *desc;
+ struct nbl_rx_entry *rx_entry;
+ struct rte_mempool *mempool;
+ volatile uint8_t *notify;
+ struct rte_eth_dev *eth_dev;
+ struct nbl_common_info *common;
+ uint64_t mbuf_initializer; /**< value to init mbufs */
+ struct rte_mbuf fake_mbuf;
+
+ enum nbl_product_type product;
+ int dma_limit_msb;
+ unsigned int size;
+ bool dma_set_msb;
+ u16 nb_desc;
+ u16 next_to_clean;
+ u16 next_to_use;
+
+ u16 avail_used_flags;
+ bool used_wrap_counter;
+ u16 notify_qid;
+ u16 exthdr_len;
+
+ u16 vlan_proto;
+ u16 vlan_tci;
+ u16 vq_free_cnt;
+ u16 port_id;
+
+ u16 queue_id;
+ u16 buf_length;
+
+ u64 offloads;
+ u64 ring_phys_addr;
};
struct nbl_txrx_mgt {
@@ -33,11 +123,20 @@ struct nbl_txrx_mgt {
u8 rx_ring_num;
};
+struct nbl_res_info {
+ u16 base_qid;
+ u16 lcore_max;
+ u16 *pf_qid_to_lcore_id;
+ rte_atomic16_t tx_current_queue;
+};
+
struct nbl_resource_mgt {
struct rte_eth_dev *eth_dev;
struct nbl_channel_ops_tbl *chan_ops_tbl;
struct nbl_phy_ops_tbl *phy_ops_tbl;
struct nbl_txrx_mgt *txrx_mgt;
+ struct nbl_common_info *common;
+ struct nbl_res_info res_info;
};
struct nbl_resource_mgt_leonis {
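The `nbl_packed_desc` layout and the AVAIL/USED flag bit positions added above follow the virtio packed-virtqueue convention, where a descriptor's state is read by comparing both flag bits against the ring's current wrap counter. A minimal standalone sketch of that check (not part of the patch; `nbl_desc_is_used` is an illustrative name):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define NBL_PACKED_DESC_F_AVAIL (7)   /* bit positions, as in nbl_txrx.h */
#define NBL_PACKED_DESC_F_USED  (15)

/* In the packed-ring convention, a descriptor has been consumed by the
 * device when BOTH the AVAIL and USED bits match the ring's current
 * wrap counter; the wrap counter flips each time the ring wraps. */
static bool nbl_desc_is_used(uint16_t flags, bool wrap_counter)
{
	bool avail = !!(flags & (1u << NBL_PACKED_DESC_F_AVAIL));
	bool used  = !!(flags & (1u << NBL_PACKED_DESC_F_USED));

	return avail == wrap_counter && used == wrap_counter;
}
```

This is why `avail_used_flags` and `used_wrap_counter` are reset together whenever a ring is stopped or restarted.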
diff --git a/drivers/net/nbl/nbl_hw/nbl_txrx.c b/drivers/net/nbl/nbl_hw/nbl_txrx.c
index eaa7e4c69d..941b3b50dc 100644
--- a/drivers/net/nbl/nbl_hw/nbl_txrx.c
+++ b/drivers/net/nbl/nbl_hw/nbl_txrx.c
@@ -39,55 +39,298 @@ static void nbl_res_txrx_remove_rings(void *priv)
rte_free(txrx_mgt->rx_rings);
}
-static int nbl_res_txrx_start_tx_ring(void *priv,
- struct nbl_start_tx_ring_param *param,
- u64 *dma_addr)
+static inline u16 nbl_prep_tx_ehdr_leonis(void *priv, struct rte_mbuf *mbuf)
{
RTE_SET_USED(priv);
- RTE_SET_USED(param);
- RTE_SET_USED(dma_addr);
+ RTE_SET_USED(mbuf);
return 0;
}
static void nbl_res_txrx_stop_tx_ring(void *priv, u16 queue_idx)
{
- RTE_SET_USED(priv);
- RTE_SET_USED(queue_idx);
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+ struct nbl_res_tx_ring *tx_ring = NBL_RES_MGT_TO_TX_RING(res_mgt, queue_idx);
+ int i;
+
+ if (!tx_ring)
+ return;
+
+ for (i = 0; i < tx_ring->nb_desc; i++) {
+ if (tx_ring->tx_entry[i].mbuf != NULL) {
+ rte_pktmbuf_free_seg(tx_ring->tx_entry[i].mbuf);
+ memset(&tx_ring->tx_entry[i], 0, sizeof(*tx_ring->tx_entry));
+ }
+ tx_ring->desc[i].flags = 0;
+ }
+
+ tx_ring->avail_used_flags = BIT(NBL_PACKED_DESC_F_AVAIL);
+ tx_ring->used_wrap_counter = 1;
+ tx_ring->next_to_clean = NBL_TX_RS_THRESH - 1;
+ tx_ring->next_to_use = 0;
+ tx_ring->vq_free_cnt = tx_ring->nb_desc;
}
-static void nbl_res_txrx_release_txring(void *priv, u16 queue_idx)
+static void nbl_res_txrx_release_tx_ring(void *priv, u16 queue_idx)
{
- RTE_SET_USED(priv);
- RTE_SET_USED(queue_idx);
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+ struct nbl_txrx_mgt *txrx_mgt = NBL_RES_MGT_TO_TXRX_MGT(res_mgt);
+ struct nbl_res_tx_ring *tx_ring = NBL_RES_MGT_TO_TX_RING(res_mgt, queue_idx);
+ if (!tx_ring)
+ return;
+ rte_free(tx_ring->tx_entry);
+ rte_free(tx_ring);
+ txrx_mgt->tx_rings[queue_idx] = NULL;
}
-static int nbl_res_txrx_start_rx_ring(void *priv,
- struct nbl_start_rx_ring_param *param,
+static int nbl_res_txrx_start_tx_ring(void *priv,
+ struct nbl_start_tx_ring_param *param,
u64 *dma_addr)
{
- RTE_SET_USED(priv);
- RTE_SET_USED(param);
- RTE_SET_USED(dma_addr);
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+ struct nbl_txrx_mgt *txrx_mgt = NBL_RES_MGT_TO_TXRX_MGT(res_mgt);
+ struct nbl_res_tx_ring *tx_ring = NBL_RES_MGT_TO_TX_RING(res_mgt, param->queue_idx);
+ struct rte_eth_dev *eth_dev = NBL_RES_MGT_TO_ETH_DEV(res_mgt);
+ struct nbl_common_info *common = NBL_RES_MGT_TO_COMMON(res_mgt);
+ struct nbl_phy_ops *phy_ops = NBL_RES_MGT_TO_PHY_OPS(res_mgt);
+ const struct rte_memzone *memzone;
+ const struct rte_memzone *net_hdr_mz;
+ char vq_hdr_name[NBL_VQ_HDR_NAME_MAXSIZE];
+ struct nbl_tx_ehdr_leonis ext_hdr = {0};
+ uint64_t offloads;
+ u32 size;
+
+ offloads = param->conf->offloads | eth_dev->data->dev_conf.txmode.offloads;
+
+ if (eth_dev->data->tx_queues[param->queue_idx] != NULL) {
+ NBL_LOG(WARNING, "re-setup an already allocated tx queue");
+ nbl_res_txrx_stop_tx_ring(priv, param->queue_idx);
+ eth_dev->data->tx_queues[param->queue_idx] = NULL;
+ }
+
+ tx_ring = rte_zmalloc("nbl_txring", sizeof(*tx_ring), RTE_CACHE_LINE_SIZE);
+ if (!tx_ring) {
+ NBL_LOG(ERR, "allocate tx queue data structure failed");
+ return -ENOMEM;
+ }
+ memset(&tx_ring->default_hdr, 0, sizeof(tx_ring->default_hdr));
+ switch (param->product) {
+ case NBL_LEONIS_TYPE:
+ tx_ring->exthdr_len = sizeof(struct nbl_tx_ehdr_leonis);
+ tx_ring->prep_tx_ehdr = nbl_prep_tx_ehdr_leonis;
+ ext_hdr.fwd = NBL_TX_FWD_TYPE_NORMAL;
+ rte_memcpy(&tx_ring->default_hdr, &ext_hdr, sizeof(struct nbl_tx_ehdr_leonis));
+ break;
+ default:
+ tx_ring->exthdr_len = sizeof(union nbl_tx_extend_head);
+ break;
+ }
+
+ tx_ring->tx_entry = rte_calloc("nbl_tx_entry",
+ param->nb_desc, sizeof(*tx_ring->tx_entry), 0);
+ if (!tx_ring->tx_entry) {
+ NBL_LOG(ERR, "allocate tx queue %d software ring failed", param->queue_idx);
+ goto alloc_tx_entry_failed;
+ }
+
+ /* Reserve DMA memory for the descriptor ring, with the size rounded up to a 4K boundary */
+ size = RTE_ALIGN_CEIL(sizeof(tx_ring->desc[0]) * param->nb_desc, 4096);
+ memzone = rte_eth_dma_zone_reserve(eth_dev, "tx_ring", param->queue_idx,
+ size, RTE_CACHE_LINE_SIZE,
+ param->socket_id);
+ if (memzone == NULL) {
+ NBL_LOG(ERR, "reserve dma zone for tx ring failed");
+ goto alloc_dma_zone_failed;
+ }
+
+ /* The mbuf has no room reserved for the extend header, so allocate a dedicated memzone to hold the per-packet headers */
+ size = param->nb_desc * NBL_TX_HEADER_LEN;
+ snprintf(vq_hdr_name, sizeof(vq_hdr_name), "port%d_vq%d_hdr",
+ eth_dev->data->port_id, param->queue_idx);
+ net_hdr_mz = rte_memzone_reserve_aligned(vq_hdr_name, size,
+ param->socket_id,
+ RTE_MEMZONE_IOVA_CONTIG,
+ RTE_CACHE_LINE_SIZE);
+ if (net_hdr_mz == NULL) {
+ if (rte_errno == EEXIST)
+ net_hdr_mz = rte_memzone_lookup(vq_hdr_name);
+ if (net_hdr_mz == NULL) {
+ NBL_LOG(ERR, "reserve net_hdr_mz dma zone for tx ring failed");
+ goto reserve_net_hdr_mz_failed;
+ }
+ }
+
+ tx_ring->product = param->product;
+ tx_ring->nb_desc = param->nb_desc;
+ tx_ring->vq_free_cnt = param->nb_desc;
+ tx_ring->queue_id = param->queue_idx;
+ tx_ring->notify_qid =
+ (res_mgt->res_info.base_qid + txrx_mgt->queue_offset + param->queue_idx) * 2 + 1;
+ tx_ring->ring_phys_addr = (u64)NBL_DMA_ADDERSS_FULL_TRANSLATE(common, memzone->iova);
+ tx_ring->avail_used_flags = BIT(NBL_PACKED_DESC_F_AVAIL);
+ tx_ring->used_wrap_counter = 1;
+ tx_ring->next_to_clean = NBL_TX_RS_THRESH - 1;
+ tx_ring->next_to_use = 0;
+ tx_ring->desc = (struct nbl_packed_desc *)memzone->addr;
+ tx_ring->net_hdr_mz = net_hdr_mz;
+ tx_ring->eth_dev = eth_dev;
+ tx_ring->dma_set_msb = common->dma_set_msb;
+ tx_ring->dma_limit_msb = common->dma_limit_msb;
+ tx_ring->notify = phy_ops->get_tail_ptr(NBL_RES_MGT_TO_PHY_PRIV(res_mgt));
+ tx_ring->offloads = offloads;
+ tx_ring->common = common;
+
+ eth_dev->data->tx_queues[param->queue_idx] = tx_ring;
+
+ NBL_LOG(INFO, "tx_ring %d desc dma 0x%" NBL_PRIU64 "",
+ param->queue_idx, tx_ring->ring_phys_addr);
+ txrx_mgt->tx_rings[param->queue_idx] = tx_ring;
+ txrx_mgt->tx_ring_num++;
+
+ *dma_addr = tx_ring->ring_phys_addr;
+
return 0;
+
+reserve_net_hdr_mz_failed:
+ rte_memzone_free(memzone);
+alloc_dma_zone_failed:
+ rte_free(tx_ring->tx_entry);
+ tx_ring->tx_entry = NULL;
+ tx_ring->size = 0;
+alloc_tx_entry_failed:
+ rte_free(tx_ring);
+ return -ENOMEM;
}
-static int nbl_res_alloc_rx_bufs(void *priv, u16 queue_idx)
+static void nbl_res_txrx_stop_rx_ring(void *priv, u16 queue_idx)
{
- RTE_SET_USED(priv);
- RTE_SET_USED(queue_idx);
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+ struct nbl_res_rx_ring *rx_ring =
+ NBL_RES_MGT_TO_RX_RING(res_mgt, queue_idx);
+ u16 i;
+
+ if (!rx_ring)
+ return;
+ if (rx_ring->rx_entry != NULL) {
+ for (i = 0; i < rx_ring->nb_desc; i++) {
+ if (rx_ring->rx_entry[i].mbuf != NULL) {
+ rte_pktmbuf_free_seg(rx_ring->rx_entry[i].mbuf);
+ rx_ring->rx_entry[i].mbuf = NULL;
+ }
+ rx_ring->desc[i].flags = 0;
+ }
+
+ for (i = rx_ring->nb_desc; i < rx_ring->nb_desc + NBL_DESC_PER_LOOP_VEC_MAX; i++)
+ rx_ring->desc[i].flags = 0;
+ }
+
+ rx_ring->next_to_clean = 0;
+ rx_ring->next_to_use = 0;
+}
+
+static int nbl_res_txrx_start_rx_ring(void *priv,
+ struct nbl_start_rx_ring_param *param,
+ u64 *dma_addr)
+{
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+ struct nbl_txrx_mgt *txrx_mgt = NBL_RES_MGT_TO_TXRX_MGT(res_mgt);
+ struct nbl_res_rx_ring *rx_ring = NBL_RES_MGT_TO_RX_RING(res_mgt, param->queue_idx);
+ struct rte_eth_dev *eth_dev = NBL_RES_MGT_TO_ETH_DEV(res_mgt);
+ struct nbl_phy_ops *phy_ops = NBL_RES_MGT_TO_PHY_OPS(res_mgt);
+ struct nbl_common_info *common = NBL_RES_MGT_TO_COMMON(res_mgt);
+ const struct rte_memzone *memzone;
+ u32 size;
+
+ if (eth_dev->data->rx_queues[param->queue_idx] != NULL) {
+ NBL_LOG(WARNING, "re-setup an already allocated rx queue");
+ nbl_res_txrx_stop_rx_ring(priv, param->queue_idx);
+ eth_dev->data->rx_queues[param->queue_idx] = NULL;
+ }
+
+ rx_ring = rte_zmalloc_socket("nbl_rxring", sizeof(*rx_ring),
+ RTE_CACHE_LINE_SIZE, param->socket_id);
+ if (rx_ring == NULL) {
+ NBL_LOG(ERR, "allocate rx queue data structure failed");
+ return -ENOMEM;
+ }
+
+ size = sizeof(rx_ring->rx_entry[0]) * (param->nb_desc + NBL_DESC_PER_LOOP_VEC_MAX);
+ rx_ring->rx_entry = rte_zmalloc_socket("rxq rx_entry", size,
+ RTE_CACHE_LINE_SIZE,
+ param->socket_id);
+ if (rx_ring->rx_entry == NULL) {
+ NBL_LOG(ERR, "allocate rx queue %d software ring failed", param->queue_idx);
+ goto alloc_rx_entry_failed;
+ }
+
+ size = sizeof(rx_ring->desc[0]) * (param->nb_desc + NBL_DESC_PER_LOOP_VEC_MAX);
+ memzone = rte_eth_dma_zone_reserve(eth_dev, "rx_ring", param->queue_idx,
+ size, RTE_CACHE_LINE_SIZE,
+ param->socket_id);
+ if (memzone == NULL) {
+ NBL_LOG(ERR, "reserve dma zone for rx ring failed");
+ goto alloc_dma_zone_failed;
+ }
+
+ rx_ring->product = param->product;
+ rx_ring->mempool = param->mempool;
+ rx_ring->nb_desc = param->nb_desc;
+ rx_ring->queue_id = param->queue_idx;
+ rx_ring->notify_qid =
+ (res_mgt->res_info.base_qid + txrx_mgt->queue_offset + param->queue_idx) * 2;
+ rx_ring->ring_phys_addr = NBL_DMA_ADDERSS_FULL_TRANSLATE(common, memzone->iova);
+ rx_ring->desc = (struct nbl_packed_desc *)memzone->addr;
+ rx_ring->port_id = eth_dev->data->port_id;
+ rx_ring->eth_dev = eth_dev;
+ rx_ring->dma_set_msb = common->dma_set_msb;
+ rx_ring->dma_limit_msb = common->dma_limit_msb;
+ rx_ring->common = common;
+ rx_ring->notify = phy_ops->get_tail_ptr(NBL_RES_MGT_TO_PHY_PRIV(res_mgt));
+
+ switch (param->product) {
+ case NBL_LEONIS_TYPE:
+ rx_ring->exthdr_len = sizeof(struct nbl_rx_ehdr_leonis);
+ break;
+ default:
+ rx_ring->exthdr_len = sizeof(union nbl_rx_extend_head);
+ break;
+ }
+
+ eth_dev->data->rx_queues[param->queue_idx] = rx_ring;
+
+ txrx_mgt->rx_rings[param->queue_idx] = rx_ring;
+ txrx_mgt->rx_ring_num++;
+
+ *dma_addr = rx_ring->ring_phys_addr;
+
return 0;
+
+alloc_dma_zone_failed:
+ rte_free(rx_ring->rx_entry);
+ rx_ring->rx_entry = NULL;
+ rx_ring->size = 0;
+alloc_rx_entry_failed:
+ rte_free(rx_ring);
+ return -ENOMEM;
}
-static void nbl_res_txrx_stop_rx_ring(void *priv, u16 queue_idx)
+static int nbl_res_alloc_rx_bufs(void *priv, u16 queue_idx)
{
RTE_SET_USED(priv);
RTE_SET_USED(queue_idx);
+ return 0;
}
static void nbl_res_txrx_release_rx_ring(void *priv, u16 queue_idx)
{
- RTE_SET_USED(priv);
- RTE_SET_USED(queue_idx);
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+ struct nbl_txrx_mgt *txrx_mgt = NBL_RES_MGT_TO_TXRX_MGT(res_mgt);
+ struct nbl_res_rx_ring *rx_ring =
+ NBL_RES_MGT_TO_RX_RING(res_mgt, queue_idx);
+ if (!rx_ring)
+ return;
+
+ rte_free(rx_ring->rx_entry);
+ rte_free(rx_ring);
+ txrx_mgt->rx_rings[queue_idx] = NULL;
}
static void nbl_res_txrx_update_rx_ring(void *priv, u16 index)
@@ -106,7 +349,7 @@ do { \
NBL_TXRX_SET_OPS(remove_rings, nbl_res_txrx_remove_rings); \
NBL_TXRX_SET_OPS(start_tx_ring, nbl_res_txrx_start_tx_ring); \
NBL_TXRX_SET_OPS(stop_tx_ring, nbl_res_txrx_stop_tx_ring); \
- NBL_TXRX_SET_OPS(release_tx_ring, nbl_res_txrx_release_txring); \
+ NBL_TXRX_SET_OPS(release_tx_ring, nbl_res_txrx_release_tx_ring); \
NBL_TXRX_SET_OPS(start_rx_ring, nbl_res_txrx_start_rx_ring); \
NBL_TXRX_SET_OPS(alloc_rx_bufs, nbl_res_alloc_rx_bufs); \
NBL_TXRX_SET_OPS(stop_rx_ring, nbl_res_txrx_stop_rx_ring); \
diff --git a/drivers/net/nbl/nbl_hw/nbl_txrx.h b/drivers/net/nbl/nbl_hw/nbl_txrx.h
index 56dbd3c587..83696dbc72 100644
--- a/drivers/net/nbl/nbl_hw/nbl_txrx.h
+++ b/drivers/net/nbl/nbl_hw/nbl_txrx.h
@@ -7,4 +7,103 @@
#include "nbl_resource.h"
+#define NBL_PACKED_DESC_F_AVAIL (7)
+#define NBL_PACKED_DESC_F_USED (15)
+#define NBL_VRING_DESC_F_NEXT (1 << 0)
+#define NBL_VRING_DESC_F_WRITE (1 << 1)
+
+#define NBL_TX_RS_THRESH (32)
+#define NBL_TX_HEADER_LEN (32)
+#define NBL_VQ_HDR_NAME_MAXSIZE (32)
+
+#define NBL_DESC_PER_LOOP_VEC_MAX (8)
+
+union nbl_tx_extend_head {
+ struct nbl_tx_ehdr_leonis {
+ /* DW0 */
+ u32 mac_len :5;
+ u32 ip_len :5;
+ u32 l4_len :4;
+ u32 l4_type :2;
+ u32 inner_ip_type :2;
+ u32 external_ip_type :2;
+ u32 external_ip_len :5;
+ u32 l4_tunnel_type :2;
+ u32 l4_tunnel_len :5;
+ /* DW1 */
+ u32 l4s_sid :10;
+ u32 l4s_sync_ind :1;
+ u32 l4s_redun_ind :1;
+ u32 l4s_redun_head_ind :1;
+ u32 l4s_hdl_ind :1;
+ u32 l4s_pbrac_mode :1;
+ u32 rsv0 :2;
+ u32 mss :14;
+ u32 tso :1;
+ /* DW2 */
+ /* if dport = NBL_TX_DPORT_ETH; dport_info = 0
+ * if dport = NBL_TX_DPORT_HOST; dport_info = host queue id
+ * if dport = NBL_TX_DPORT_ECPU; dport_info = ecpu queue_id
+ */
+ u32 dport_info :11;
+ /* if dport = NBL_TX_DPORT_ETH; dport_id[3:0] = eth port id, dport_id[9:4] = lag id
+ * if dport = NBL_TX_DPORT_HOST; dport_id[9:0] = host vsi_id
+ * if dport = NBL_TX_DPORT_ECPU; dport_id[9:0] = ecpu vsi_id
+ */
+ u32 dport_id :10;
+#define NBL_TX_DPORT_ID_LAG_OFT_LEONIS (4)
+ u32 dport :3;
+#define NBL_TX_DPORT_ETH (0)
+#define NBL_TX_DPORT_HOST (1)
+#define NBL_TX_DPORT_ECPU (2)
+#define NBL_TX_DPORT_EMP (3)
+#define NBL_TX_DPORT_BMC (4)
+#define NBL_TX_DPORT_EMP_DRACO (2)
+#define NBL_TX_DPORT_BMC_DRACO (3)
+ u32 fwd :2;
+#define NBL_TX_FWD_TYPE_DROP (0)
+#define NBL_TX_FWD_TYPE_NORMAL (1)
+#define NBL_TX_FWD_TYPE_RSV (2)
+#define NBL_TX_FWD_TYPE_CPU_ASSIGNED (3)
+ u32 rss_lag_en :1;
+ u32 l4_csum_en :1;
+ u32 l3_csum_en :1;
+ u32 rsv1 :3;
+ } leonis;
+};
+
+union nbl_rx_extend_head {
+ struct nbl_rx_ehdr_leonis {
+ /* DW0 */
+ /* 0x0:eth, 0x1:host, 0x2:ecpu, 0x3:emp, 0x4:bmc */
+ u32 sport :3;
+ u32 dport_info :11;
+ /* sport = 0, sport_id[3:0] = eth id,
+ * sport = 1, sport_id[9:0] = host vsi_id,
+ * sport = 2, sport_id[9:0] = ecpu vsi_id,
+ */
+ u32 sport_id :10;
+ /* 0x0:drop, 0x1:normal, 0x2:cpu upcall */
+ u32 fwd :2;
+ u32 rsv0 :6;
+ /* DW1 */
+ u32 error_code :6;
+ u32 ptype :10;
+ u32 profile_id :4;
+ u32 checksum_status :1;
+ u32 rsv1 :1;
+ u32 l4s_sid :10;
+ /* DW2 */
+ u32 rsv3 :2;
+ u32 l4s_hdl_ind :1;
+ u32 l4s_tcp_offset :14;
+ u32 l4s_resync_ind :1;
+ u32 l4s_check_ind :1;
+ u32 l4s_dec_ind :1;
+ u32 rsv2 :4;
+ u32 num_buffers :8;
+ u32 hash_value;
+ } leonis;
+};
+
#endif
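As a sanity check on the extend-header layouts above, each doubleword's bit-field widths must pack to exactly 32 bits, otherwise the compiler would insert padding and break the wire format. The widths below are copied from the three DWs of `struct nbl_tx_ehdr_leonis`:

```c
#include <assert.h>

/* Bit widths of the three doublewords of struct nbl_tx_ehdr_leonis,
 * copied from the declaration above; each must sum to 32 bits. */
static const int tx_dw0[] = {5, 5, 4, 2, 2, 2, 5, 2, 5};
static const int tx_dw1[] = {10, 1, 1, 1, 1, 1, 2, 14, 1};
static const int tx_dw2[] = {11, 10, 3, 2, 1, 1, 1, 3};

static int sum_bits(const int *widths, int n)
{
	int i, s = 0;

	for (i = 0; i < n; i++)
		s += widths[i];
	return s;
}
```

The same arithmetic holds for `struct nbl_rx_ehdr_leonis` (three 32-bit DWs plus the trailing `hash_value` word, 16 bytes total).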
diff --git a/drivers/net/nbl/nbl_include/nbl_def_common.h b/drivers/net/nbl/nbl_include/nbl_def_common.h
index 795679576e..9773efc246 100644
--- a/drivers/net/nbl/nbl_include/nbl_def_common.h
+++ b/drivers/net/nbl/nbl_include/nbl_def_common.h
@@ -85,6 +85,11 @@ struct nbl_dev_user_link_stat {
#define NBL_DEV_USER_SET_PROMISC_MODE _IO(NBL_DEV_USER_TYPE, 10)
#define NBL_DEV_USER_SET_MCAST_MODE _IO(NBL_DEV_USER_TYPE, 11)
+#define NBL_DMA_ADDERSS_FULL_TRANSLATE(hw, address) \
+ ({ typeof(hw) _hw = (hw); \
+ ((((u64)((_hw)->dma_set_msb)) << ((u64)((_hw)->dma_limit_msb))) | (address)); \
+ })
+
struct nbl_dma_mem {
void *va;
uint64_t pa;
diff --git a/drivers/net/nbl/nbl_include/nbl_include.h b/drivers/net/nbl/nbl_include/nbl_include.h
index 5b77881851..0efeb11b46 100644
--- a/drivers/net/nbl/nbl_include/nbl_include.h
+++ b/drivers/net/nbl/nbl_include/nbl_include.h
@@ -99,6 +99,7 @@ struct nbl_start_tx_ring_param {
u32 socket_id;
enum nbl_product_type product;
const struct rte_eth_txconf *conf;
+ bool (*bond_broadcast_check)(struct rte_mbuf *mbuf);
};
struct nbl_txrx_queue_param {
--
2.43.0
* [PATCH v1 14/17] net/nbl: add nbl device start and stop ops
2025-06-12 8:58 [PATCH v1 00/17] NBL PMD for Nebulamatrix NICs Kyo Liu
` (12 preceding siblings ...)
2025-06-12 8:58 ` [PATCH v1 13/17] net/nbl: add nbl device rxtx queue setup and release ops Kyo Liu
@ 2025-06-12 8:58 ` Kyo Liu
2025-06-12 8:58 ` [PATCH v1 15/17] net/nbl: add nbl device tx and rx burst Kyo Liu
` (5 subsequent siblings)
19 siblings, 0 replies; 27+ messages in thread
From: Kyo Liu @ 2025-06-12 8:58 UTC (permalink / raw)
To: kyo.liu, dev; +Cc: Dimon Zhao, Leon Yu, Sam Chen
Implement NBL device start and stop functions
Signed-off-by: Kyo Liu <kyo.liu@nebula-matrix.com>
---
drivers/net/nbl/nbl_dev/nbl_dev.c | 173 +++++++++++++++++-
drivers/net/nbl/nbl_dispatch.c | 121 +++++++++++-
drivers/net/nbl/nbl_ethdev.c | 5 +
drivers/net/nbl/nbl_hw/nbl_txrx.c | 72 +++++++-
drivers/net/nbl/nbl_hw/nbl_txrx.h | 14 +-
drivers/net/nbl/nbl_include/nbl_def_channel.h | 20 ++
drivers/net/nbl/nbl_include/nbl_def_common.h | 6 +-
.../net/nbl/nbl_include/nbl_def_dispatch.h | 2 +-
.../net/nbl/nbl_include/nbl_def_resource.h | 4 +
drivers/net/nbl/nbl_include/nbl_include.h | 6 +-
10 files changed, 392 insertions(+), 31 deletions(-)
diff --git a/drivers/net/nbl/nbl_dev/nbl_dev.c b/drivers/net/nbl/nbl_dev/nbl_dev.c
index 4faa58ace8..bdd06613e6 100644
--- a/drivers/net/nbl/nbl_dev/nbl_dev.c
+++ b/drivers/net/nbl/nbl_dev/nbl_dev.c
@@ -38,22 +38,179 @@ static int nbl_dev_configure(struct rte_eth_dev *eth_dev)
return ret;
}
+static int nbl_dev_txrx_start(struct rte_eth_dev *eth_dev)
+{
+ struct nbl_adapter *adapter = ETH_DEV_TO_NBL_DEV_PF_PRIV(eth_dev);
+ struct nbl_dev_mgt *dev_mgt = NBL_ADAPTER_TO_DEV_MGT(adapter);
+ struct nbl_dev_ring_mgt *ring_mgt = &dev_mgt->net_dev->ring_mgt;
+ struct nbl_dispatch_ops *disp_ops = NBL_DEV_MGT_TO_DISP_OPS(dev_mgt);
+ struct nbl_txrx_queue_param param = {0};
+ struct nbl_dev_ring *ring;
+ int ret = 0;
+ int i;
+
+ eth_dev->data->scattered_rx = 0;
+ for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
+ ring = &ring_mgt->tx_rings[i];
+ param.desc_num = ring->desc_num;
+ param.vsi_id = dev_mgt->net_dev->vsi_id;
+ param.dma = ring->dma;
+ param.local_queue_id = i + ring_mgt->queue_offset;
+ param.intr_en = 0;
+ param.intr_mask = 0;
+ param.extend_header = 1;
+ param.split = 0;
+
+ ret = disp_ops->setup_queue(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt), &param, true);
+ if (ret) {
+ NBL_LOG(ERR, "setup_tx_queue failed %d", ret);
+ return ret;
+ }
+
+ ring->global_queue_id =
+ disp_ops->get_vsi_global_qid(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt),
+ param.vsi_id, param.local_queue_id);
+ eth_dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED;
+ }
+
+ for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
+ ring = &ring_mgt->rx_rings[i];
+ param.desc_num = ring->desc_num;
+ param.vsi_id = dev_mgt->net_dev->vsi_id;
+ param.dma = ring->dma;
+ param.local_queue_id = i + ring_mgt->queue_offset;
+ param.intr_en = 0;
+ param.intr_mask = 0;
+ param.half_offload_en = 1;
+ param.extend_header = 1;
+ param.split = 0;
+ param.rxcsum = 1;
+
+ ret = disp_ops->setup_queue(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt), &param, false);
+ if (ret) {
+ NBL_LOG(ERR, "setup_rx_queue failed %d", ret);
+ return ret;
+ }
+
+ ret = disp_ops->alloc_rx_bufs(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt), i);
+ if (ret) {
+ NBL_LOG(ERR, "alloc_rx_bufs failed %d", ret);
+ return ret;
+ }
+
+ ring->global_queue_id =
+ disp_ops->get_vsi_global_qid(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt),
+ param.vsi_id, param.local_queue_id);
+ disp_ops->update_rx_ring(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt), i);
+ eth_dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED;
+ }
+
+ ret = disp_ops->cfg_dsch(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt), dev_mgt->net_dev->vsi_id, true);
+ if (ret) {
+ NBL_LOG(ERR, "cfg_dsch failed %d", ret);
+ goto cfg_dsch_fail;
+ }
+ ret = disp_ops->setup_cqs(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt),
+ dev_mgt->net_dev->vsi_id, eth_dev->data->nb_rx_queues, true);
+ if (ret)
+ goto setup_cqs_fail;
+
+ return ret;
+
+setup_cqs_fail:
+ disp_ops->cfg_dsch(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt), dev_mgt->net_dev->vsi_id, false);
+cfg_dsch_fail:
+ disp_ops->remove_all_queues(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt),
+ dev_mgt->net_dev->vsi_id);
+
+ return ret;
+}
+
static int nbl_dev_port_start(struct rte_eth_dev *eth_dev)
{
- RTE_SET_USED(eth_dev);
+ struct nbl_adapter *adapter = ETH_DEV_TO_NBL_DEV_PF_PRIV(eth_dev);
+ struct nbl_common_info *common;
+ int ret;
+
+ if (adapter == NULL)
+ return -EINVAL;
+
+ common = NBL_ADAPTER_TO_COMMON(adapter);
+ ret = nbl_userdev_port_config(adapter, NBL_USER_NETWORK);
+ if (ret)
+ return ret;
+
+ ret = nbl_dev_txrx_start(eth_dev);
+ if (ret) {
+ NBL_LOG(ERR, "dev_txrx_start failed %d", ret);
+ nbl_userdev_port_config(adapter, NBL_KERNEL_NETWORK);
+ return ret;
+ }
+
+ common->pf_start = 1;
return 0;
}
+static void nbl_clear_queues(struct rte_eth_dev *eth_dev)
+{
+ struct nbl_adapter *adapter = ETH_DEV_TO_NBL_DEV_PF_PRIV(eth_dev);
+ struct nbl_dev_mgt *dev_mgt = NBL_ADAPTER_TO_DEV_MGT(adapter);
+ struct nbl_dispatch_ops *disp_ops = NBL_DEV_MGT_TO_DISP_OPS(dev_mgt);
+ int i;
+
+ for (i = 0; i < eth_dev->data->nb_tx_queues; i++)
+ disp_ops->stop_tx_ring(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt), i);
+
+ for (i = 0; i < eth_dev->data->nb_rx_queues; i++)
+ disp_ops->stop_rx_ring(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt), i);
+}
+
+static void nbl_dev_txrx_stop(struct rte_eth_dev *eth_dev)
+{
+ struct nbl_adapter *adapter = ETH_DEV_TO_NBL_DEV_PF_PRIV(eth_dev);
+ struct nbl_dev_mgt *dev_mgt = NBL_ADAPTER_TO_DEV_MGT(adapter);
+ struct nbl_dispatch_ops *disp_ops = NBL_DEV_MGT_TO_DISP_OPS(dev_mgt);
+
+ disp_ops->cfg_dsch(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt), dev_mgt->net_dev->vsi_id, false);
+ disp_ops->remove_cqs(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt), dev_mgt->net_dev->vsi_id);
+ disp_ops->remove_all_queues(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt), dev_mgt->net_dev->vsi_id);
+}
+
static int nbl_dev_port_stop(struct rte_eth_dev *eth_dev)
{
- RTE_SET_USED(eth_dev);
+ struct nbl_adapter *adapter = ETH_DEV_TO_NBL_DEV_PF_PRIV(eth_dev);
+ struct nbl_common_info *common = NBL_ADAPTER_TO_COMMON(adapter);
+ common->pf_start = 0;
+ rte_delay_ms(NBL_SAFE_THREADS_WAIT_TIME);
+
+ nbl_clear_queues(eth_dev);
+ nbl_dev_txrx_stop(eth_dev);
+ nbl_userdev_port_config(adapter, NBL_KERNEL_NETWORK);
return 0;
}
+static void nbl_release_queues(struct rte_eth_dev *eth_dev)
+{
+ struct nbl_adapter *adapter = ETH_DEV_TO_NBL_DEV_PF_PRIV(eth_dev);
+ struct nbl_dev_mgt *dev_mgt = NBL_ADAPTER_TO_DEV_MGT(adapter);
+ struct nbl_dispatch_ops *disp_ops = NBL_DEV_MGT_TO_DISP_OPS(dev_mgt);
+ int i;
+
+ for (i = 0; i < eth_dev->data->nb_tx_queues; i++)
+ disp_ops->release_tx_ring(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt), i);
+
+ for (i = 0; i < eth_dev->data->nb_rx_queues; i++)
+ disp_ops->release_rx_ring(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt), i);
+}
+
static int nbl_dev_close(struct rte_eth_dev *eth_dev)
{
- RTE_SET_USED(eth_dev);
+ struct nbl_adapter *adapter = ETH_DEV_TO_NBL_DEV_PF_PRIV(eth_dev);
+ struct nbl_common_info *common = NBL_ADAPTER_TO_COMMON(adapter);
+
+ /* The PF may never have been started, in which case there are no queues to release */
+ if (common->pf_start)
+ nbl_release_queues(eth_dev);
+
return 0;
}
@@ -180,13 +337,13 @@ static void nbl_dev_leonis_uninit(void *adapter)
static int nbl_dev_common_start(struct nbl_dev_mgt *dev_mgt)
{
const struct nbl_dispatch_ops *disp_ops = NBL_DEV_MGT_TO_DISP_OPS(dev_mgt);
- struct nbl_dev_net_mgt *net_dev = dev_mgt->net_dev;
- struct nbl_common_info *common = dev_mgt->common;
+ struct nbl_dev_net_mgt *net_dev = NBL_DEV_MGT_TO_NET_DEV(dev_mgt);
+ struct nbl_common_info *common = NBL_DEV_MGT_TO_COMMON(dev_mgt);
struct nbl_board_port_info *board_info;
u8 *mac;
int ret;
- board_info = &dev_mgt->common->board_info;
+ board_info = &common->board_info;
disp_ops->get_board_info(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt), board_info);
mac = net_dev->eth_dev->data->mac_addrs->addr_bytes;
@@ -228,8 +385,8 @@ static void nbl_dev_leonis_stop(void *p)
{
struct nbl_adapter *adapter = (struct nbl_adapter *)p;
struct nbl_dev_mgt *dev_mgt = NBL_ADAPTER_TO_DEV_MGT(adapter);
- struct nbl_dev_net_mgt *net_dev = dev_mgt->net_dev;
- const struct nbl_common_info *common = dev_mgt->common;
+ struct nbl_dev_net_mgt *net_dev = NBL_DEV_MGT_TO_NET_DEV(dev_mgt);
+ const struct nbl_common_info *common = NBL_DEV_MGT_TO_COMMON(dev_mgt);
const struct nbl_dispatch_ops *disp_ops = NBL_DEV_MGT_TO_DISP_OPS(dev_mgt);
u8 *mac;
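`nbl_dev_txrx_start()` above uses the usual kernel-style goto unwind: each setup step that fails must tear down only the steps that already succeeded, in reverse order (a failed `setup_cqs` undoes `cfg_dsch` and removes the queues; a failed `cfg_dsch` only removes the queues). A reduced sketch of that control flow, with hypothetical stand-in functions in place of the dispatch ops:

```c
#include <assert.h>

static int undo_count;

/* Stand-ins for cfg_dsch(..., false) and remove_all_queues(). */
static void undo_dsch(void)   { undo_count++; }
static void undo_queues(void) { undo_count++; }

/* cfg_ok / setup_ok emulate the outcomes of cfg_dsch() and setup_cqs(). */
static int txrx_start_sketch(int cfg_ok, int setup_ok)
{
	int ret;

	ret = cfg_ok ? 0 : -1;          /* disp_ops->cfg_dsch(...) */
	if (ret)
		goto cfg_dsch_fail;
	ret = setup_ok ? 0 : -1;        /* disp_ops->setup_cqs(...) */
	if (ret)
		goto setup_cqs_fail;
	return 0;

setup_cqs_fail:
	undo_dsch();                    /* cfg_dsch(..., false) */
cfg_dsch_fail:
	undo_queues();                  /* remove_all_queues(...) */
	return ret;
}
```

The fall-through from one label to the next is what gives the reverse-order teardown without duplicating cleanup code.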
diff --git a/drivers/net/nbl/nbl_dispatch.c b/drivers/net/nbl/nbl_dispatch.c
index 4265e5309c..2d44909aab 100644
--- a/drivers/net/nbl/nbl_dispatch.c
+++ b/drivers/net/nbl/nbl_dispatch.c
@@ -335,6 +335,31 @@ static void nbl_disp_chan_get_eth_id_req(void *priv, u16 vsi_id, u8 *eth_mode, u
*eth_id = result.eth_id;
}
+static u16 nbl_disp_get_vsi_global_qid(void *priv, u16 vsi_id, u16 local_qid)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+
+ return NBL_OPS_CALL(res_ops->get_vsi_global_qid,
+ (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt), vsi_id, local_qid));
+}
+
+static u16
+nbl_disp_chan_get_vsi_global_qid_req(void *priv, u16 vsi_id, u16 local_qid)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ struct nbl_chan_vsi_qid_info param = {0};
+ struct nbl_chan_send_info chan_send;
+
+ param.vsi_id = vsi_id;
+ param.local_qid = local_qid;
+
+ NBL_CHAN_SEND(chan_send, 0, NBL_CHAN_MSG_GET_VSI_GLOBAL_QUEUE_ID,
+ &param, sizeof(param), NULL, 0, 1);
+ return chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send);
+}
+
static int nbl_disp_chan_setup_q2vsi(void *priv, u16 vsi_id)
{
struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
@@ -488,8 +513,7 @@ static void nbl_disp_chan_clear_flow_req(void *priv, u16 vsi_id)
struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
struct nbl_chan_send_info chan_send = {0};
- NBL_CHAN_SEND(chan_send, 0, NBL_CHAN_MSG_CLEAR_FLOW, &vsi_id, sizeof(vsi_id),
- NULL, 0, 1);
+ NBL_CHAN_SEND(chan_send, 0, NBL_CHAN_MSG_CLEAR_FLOW, &vsi_id, sizeof(vsi_id), NULL, 0, 1);
chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send);
}
@@ -514,8 +538,7 @@ nbl_disp_chan_add_macvlan_req(void *priv, u8 *mac, u16 vlan_id, u16 vsi_id)
param.vlan = vlan_id;
param.vsi = vsi_id;
- NBL_CHAN_SEND(chan_send, 0, NBL_CHAN_MSG_ADD_MACVLAN,
- &param, sizeof(param), NULL, 0, 1);
+ NBL_CHAN_SEND(chan_send, 0, NBL_CHAN_MSG_ADD_MACVLAN, &param, sizeof(param), NULL, 0, 1);
return chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send);
}
@@ -540,8 +563,7 @@ nbl_disp_chan_del_macvlan_req(void *priv, u8 *mac, u16 vlan_id, u16 vsi_id)
param.vlan = vlan_id;
param.vsi = vsi_id;
- NBL_CHAN_SEND(chan_send, 0, NBL_CHAN_MSG_DEL_MACVLAN,
- &param, sizeof(param), NULL, 0, 1);
+ NBL_CHAN_SEND(chan_send, 0, NBL_CHAN_MSG_DEL_MACVLAN, &param, sizeof(param), NULL, 0, 1);
chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send);
}
@@ -562,8 +584,7 @@ static int nbl_disp_chan_add_multi_rule_req(void *priv, u16 vsi_id)
param.vsi = vsi_id;
- NBL_CHAN_SEND(chan_send, 0, NBL_CHAN_MSG_ADD_MULTI_RULE,
- &param, sizeof(param), NULL, 0, 1);
+ NBL_CHAN_SEND(chan_send, 0, NBL_CHAN_MSG_ADD_MULTI_RULE, &param, sizeof(param), NULL, 0, 1);
return chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send);
}
@@ -584,8 +605,74 @@ static void nbl_disp_chan_del_multi_rule_req(void *priv, u16 vsi)
param.vsi = vsi;
- NBL_CHAN_SEND(chan_send, 0, NBL_CHAN_MSG_DEL_MULTI_RULE,
- &param, sizeof(param), NULL, 0, 1);
+ NBL_CHAN_SEND(chan_send, 0, NBL_CHAN_MSG_DEL_MULTI_RULE, &param, sizeof(param), NULL, 0, 1);
+ chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send);
+}
+
+static int nbl_disp_cfg_dsch(void *priv, u16 vsi_id, bool vld)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+
+ return NBL_OPS_CALL(res_ops->cfg_dsch, (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt), vsi_id, vld));
+}
+
+static int nbl_disp_chan_cfg_dsch_req(void *priv, u16 vsi_id, bool vld)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ struct nbl_chan_param_cfg_dsch param = {0};
+ struct nbl_chan_send_info chan_send;
+
+ param.vsi_id = vsi_id;
+ param.vld = vld;
+
+ NBL_CHAN_SEND(chan_send, 0, NBL_CHAN_MSG_CFG_DSCH, &param, sizeof(param), NULL, 0, 1);
+ return chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send);
+}
+
+static int nbl_disp_setup_cqs(void *priv, u16 vsi_id, u16 real_qps, bool rss_indir_set)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+
+ return NBL_OPS_CALL(res_ops->setup_cqs,
+ (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt), vsi_id, real_qps, rss_indir_set));
+}
+
+static int nbl_disp_chan_setup_cqs_req(void *priv, u16 vsi_id, u16 real_qps, bool rss_indir_set)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ struct nbl_chan_param_setup_cqs param = {0};
+ struct nbl_chan_send_info chan_send;
+
+ param.vsi_id = vsi_id;
+ param.real_qps = real_qps;
+ param.rss_indir_set = rss_indir_set;
+
+ NBL_CHAN_SEND(chan_send, 0, NBL_CHAN_MSG_SETUP_CQS, &param, sizeof(param), NULL, 0, 1);
+ return chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send);
+}
+
+static void nbl_disp_remove_cqs(void *priv, u16 vsi_id)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+
+ NBL_OPS_CALL(res_ops->remove_cqs, (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt), vsi_id));
+}
+
+static void nbl_disp_chan_remove_cqs_req(void *priv, u16 vsi_id)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ struct nbl_chan_param_remove_cqs param = {0};
+ struct nbl_chan_send_info chan_send;
+
+ param.vsi_id = vsi_id;
+
+ NBL_CHAN_SEND(chan_send, 0, NBL_CHAN_MSG_REMOVE_CQS, &param, sizeof(param), NULL, 0, 1);
chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send);
}
@@ -654,6 +741,11 @@ do { \
NBL_DISP_SET_OPS(get_eth_id, nbl_disp_get_eth_id, \
NBL_DISP_CTRL_LVL_MGT, NBL_CHAN_MSG_GET_ETH_ID,\
nbl_disp_chan_get_eth_id_req, NULL); \
+ NBL_DISP_SET_OPS(get_vsi_global_qid, \
+ nbl_disp_get_vsi_global_qid, \
+ NBL_DISP_CTRL_LVL_MGT, \
+ NBL_CHAN_MSG_GET_VSI_GLOBAL_QUEUE_ID, \
+ nbl_disp_chan_get_vsi_global_qid_req, NULL); \
NBL_DISP_SET_OPS(setup_q2vsi, nbl_disp_chan_setup_q2vsi, \
NBL_DISP_CTRL_LVL_MGT, \
NBL_CHAN_MSG_SETUP_Q2VSI, \
@@ -696,6 +788,15 @@ do { \
NBL_DISP_CTRL_LVL_MGT, \
NBL_CHAN_MSG_DEL_MULTI_RULE, \
nbl_disp_chan_del_multi_rule_req, NULL); \
+ NBL_DISP_SET_OPS(cfg_dsch, nbl_disp_cfg_dsch, \
+ NBL_DISP_CTRL_LVL_MGT, NBL_CHAN_MSG_CFG_DSCH, \
+ nbl_disp_chan_cfg_dsch_req, NULL); \
+ NBL_DISP_SET_OPS(setup_cqs, nbl_disp_setup_cqs, \
+ NBL_DISP_CTRL_LVL_MGT, NBL_CHAN_MSG_SETUP_CQS, \
+ nbl_disp_chan_setup_cqs_req, NULL); \
+ NBL_DISP_SET_OPS(remove_cqs, nbl_disp_remove_cqs, \
+ NBL_DISP_CTRL_LVL_MGT, NBL_CHAN_MSG_REMOVE_CQS,\
+ nbl_disp_chan_remove_cqs_req, NULL); \
} while (0)
/* Structure starts here, adding an op should not modify anything below */
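The `NBL_DISP_SET_OPS` entries above bind each op either to a direct resource-layer implementation (when the function holds the control level) or to a `*_req` wrapper that sends a channel message to the managing function. A minimal, illustrative-only sketch of that selection idea (names are not the driver's real API):

```c
#include <assert.h>

typedef int (*cfg_dsch_fn)(void *priv, unsigned short vsi_id, int vld);

static int local_cfg_dsch(void *priv, unsigned short vsi_id, int vld)
{
	(void)priv; (void)vsi_id; (void)vld;
	return 0;                 /* program the hardware directly */
}

static int chan_cfg_dsch_req(void *priv, unsigned short vsi_id, int vld)
{
	(void)priv; (void)vsi_id; (void)vld;
	return 1;                 /* marker: request went via the mailbox */
}

/* Mirrors the ctrl-level check NBL_DISP_SET_OPS performs per op. */
static cfg_dsch_fn pick_cfg_dsch(int has_ctrl_lvl)
{
	return has_ctrl_lvl ? local_cfg_dsch : chan_cfg_dsch_req;
}
```

Because the selection happens once at op-table setup, the hot path always goes through a single function pointer with no per-call branching on privilege.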
diff --git a/drivers/net/nbl/nbl_ethdev.c b/drivers/net/nbl/nbl_ethdev.c
index e7694988ce..5cbeed6a33 100644
--- a/drivers/net/nbl/nbl_ethdev.c
+++ b/drivers/net/nbl/nbl_ethdev.c
@@ -10,10 +10,15 @@ RTE_LOG_REGISTER_SUFFIX(nbl_logtype_driver, driver, INFO);
static int nbl_dev_release_pf(struct rte_eth_dev *eth_dev)
{
struct nbl_adapter *adapter = ETH_DEV_TO_NBL_DEV_PF_PRIV(eth_dev);
+ struct nbl_dev_ops_tbl *dev_ops_tbl;
+ struct nbl_dev_ops *dev_ops;
if (!adapter)
return -EINVAL;
NBL_LOG(INFO, "start to close device %s", eth_dev->device->name);
+ dev_ops_tbl = NBL_ADAPTER_TO_DEV_OPS_TBL(adapter);
+ dev_ops = NBL_DEV_OPS_TBL_TO_OPS(dev_ops_tbl);
+ dev_ops->dev_close(eth_dev);
nbl_core_stop(adapter);
nbl_core_remove(adapter);
return 0;
diff --git a/drivers/net/nbl/nbl_hw/nbl_txrx.c b/drivers/net/nbl/nbl_hw/nbl_txrx.c
index 941b3b50dc..27b549beda 100644
--- a/drivers/net/nbl/nbl_hw/nbl_txrx.c
+++ b/drivers/net/nbl/nbl_hw/nbl_txrx.c
@@ -63,7 +63,7 @@ static void nbl_res_txrx_stop_tx_ring(void *priv, u16 queue_idx)
tx_ring->desc[i].flags = 0;
}
- tx_ring->avail_used_flags = BIT(NBL_PACKED_DESC_F_AVAIL);
+ tx_ring->avail_used_flags = NBL_PACKED_DESC_F_AVAIL_BIT;
tx_ring->used_wrap_counter = 1;
tx_ring->next_to_clean = NBL_TX_RS_THRESH - 1;
tx_ring->next_to_use = 0;
@@ -166,7 +166,7 @@ static int nbl_res_txrx_start_tx_ring(void *priv,
tx_ring->notify_qid =
(res_mgt->res_info.base_qid + txrx_mgt->queue_offset + param->queue_idx) * 2 + 1;
tx_ring->ring_phys_addr = (u64)NBL_DMA_ADDERSS_FULL_TRANSLATE(common, memzone->iova);
- tx_ring->avail_used_flags = BIT(NBL_PACKED_DESC_F_AVAIL);
+ tx_ring->avail_used_flags = NBL_PACKED_DESC_F_AVAIL_BIT;
tx_ring->used_wrap_counter = 1;
tx_ring->next_to_clean = NBL_TX_RS_THRESH - 1;
tx_ring->next_to_use = 0;
@@ -314,8 +314,62 @@ static int nbl_res_txrx_start_rx_ring(void *priv,
static int nbl_res_alloc_rx_bufs(void *priv, u16 queue_idx)
{
- RTE_SET_USED(priv);
- RTE_SET_USED(queue_idx);
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+ struct nbl_res_rx_ring *rxq = NBL_RES_MGT_TO_RX_RING(res_mgt, queue_idx);
+ struct nbl_rx_entry *rx_entry = rxq->rx_entry;
+ volatile struct nbl_packed_desc *rx_desc;
+ struct nbl_rx_entry *rxe;
+ struct rte_mbuf *mbuf;
+ u64 dma_addr;
+ int i;
+ u32 frame_size = rxq->eth_dev->data->mtu + NBL_ETH_OVERHEAD + rxq->exthdr_len;
+ u16 buf_length;
+
+ rxq->avail_used_flags = NBL_PACKED_DESC_F_AVAIL_BIT | NBL_PACKED_DESC_F_WRITE_BIT;
+ rxq->used_wrap_counter = 1;
+
+ for (i = 0; i < rxq->nb_desc; i++) {
+ mbuf = rte_mbuf_raw_alloc(rxq->mempool);
+ if (mbuf == NULL) {
+ NBL_LOG(ERR, "RX mbuf alloc failed for queue %u", rxq->queue_id);
+ return -ENOMEM;
+ }
+ dma_addr = NBL_DMA_ADDERSS_FULL_TRANSLATE(rxq, rte_mbuf_data_iova_default(mbuf));
+ rx_desc = &rxq->desc[i];
+ rxe = &rx_entry[i];
+ rx_desc->addr = dma_addr;
+ rx_desc->len = mbuf->buf_len - RTE_PKTMBUF_HEADROOM;
+ rx_desc->flags = rxq->avail_used_flags;
+ mbuf->data_off = RTE_PKTMBUF_HEADROOM;
+ rxe->mbuf = mbuf;
+ }
+
+ rxq->next_to_clean = 0;
+ rxq->next_to_use = 0;
+ rxq->vq_free_cnt = 0;
+ rxq->avail_used_flags ^= NBL_PACKED_DESC_F_AVAIL_USED;
+
+ buf_length = rte_pktmbuf_data_room_size(rxq->mempool) - RTE_PKTMBUF_HEADROOM;
+ if (buf_length >= NBL_BUF_LEN_16K) {
+ rxq->buf_length = NBL_BUF_LEN_16K;
+ } else if (buf_length >= NBL_BUF_LEN_8K) {
+ rxq->buf_length = NBL_BUF_LEN_8K;
+ } else if (buf_length >= NBL_BUF_LEN_4K) {
+ rxq->buf_length = NBL_BUF_LEN_4K;
+ } else if (buf_length >= NBL_BUF_LEN_2K) {
+ rxq->buf_length = NBL_BUF_LEN_2K;
+ } else {
+ NBL_LOG(ERR, "mempool mbuf data room must be at least 2KB, but is only %u",
+ buf_length);
+ nbl_res_txrx_stop_rx_ring(res_mgt, queue_idx);
+ return -EINVAL;
+ }
+
+ if (frame_size > rxq->buf_length)
+ rxq->eth_dev->data->scattered_rx = 1;
+
+ rxq->buf_length = rxq->buf_length - RTE_PKTMBUF_HEADROOM;
+
return 0;
}
@@ -335,8 +389,14 @@ static void nbl_res_txrx_release_rx_ring(void *priv, u16 queue_idx)
static void nbl_res_txrx_update_rx_ring(void *priv, u16 index)
{
- RTE_SET_USED(priv);
- RTE_SET_USED(index);
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+ struct nbl_phy_ops *phy_ops = NBL_RES_MGT_TO_PHY_OPS(res_mgt);
+ struct nbl_res_rx_ring *rx_ring = NBL_RES_MGT_TO_RX_RING(res_mgt, index);
+
+ phy_ops->update_tail_ptr(NBL_RES_MGT_TO_PHY_PRIV(res_mgt),
+ rx_ring->notify_qid,
+ ((!!(rx_ring->avail_used_flags & NBL_PACKED_DESC_F_AVAIL_BIT)) |
+ rx_ring->next_to_use));
}
/* NBL_TXRX_SET_OPS(ops_name, func)
diff --git a/drivers/net/nbl/nbl_hw/nbl_txrx.h b/drivers/net/nbl/nbl_hw/nbl_txrx.h
index 83696dbc72..5cf6e83c3f 100644
--- a/drivers/net/nbl/nbl_hw/nbl_txrx.h
+++ b/drivers/net/nbl/nbl_hw/nbl_txrx.h
@@ -7,16 +7,26 @@
#include "nbl_resource.h"
+#define NBL_PACKED_DESC_F_NEXT (0)
+#define NBL_PACKED_DESC_F_WRITE (1)
#define NBL_PACKED_DESC_F_AVAIL (7)
#define NBL_PACKED_DESC_F_USED (15)
-#define NBL_VRING_DESC_F_NEXT (1 << 0)
-#define NBL_VRING_DESC_F_WRITE (1 << 1)
+#define NBL_PACKED_DESC_F_NEXT_BIT (1 << NBL_PACKED_DESC_F_NEXT)
+#define NBL_PACKED_DESC_F_WRITE_BIT (1 << NBL_PACKED_DESC_F_WRITE)
+#define NBL_PACKED_DESC_F_AVAIL_BIT (1 << NBL_PACKED_DESC_F_AVAIL)
+#define NBL_PACKED_DESC_F_USED_BIT (1 << NBL_PACKED_DESC_F_USED)
+#define NBL_PACKED_DESC_F_AVAIL_USED (NBL_PACKED_DESC_F_AVAIL_BIT | \
+ NBL_PACKED_DESC_F_USED_BIT)
#define NBL_TX_RS_THRESH (32)
#define NBL_TX_HEADER_LEN (32)
#define NBL_VQ_HDR_NAME_MAXSIZE (32)
#define NBL_DESC_PER_LOOP_VEC_MAX (8)
+#define NBL_BUF_LEN_16K (16384)
+#define NBL_BUF_LEN_8K (8192)
+#define NBL_BUF_LEN_4K (4096)
+#define NBL_BUF_LEN_2K (2048)
union nbl_tx_extend_head {
struct nbl_tx_ehdr_leonis {
diff --git a/drivers/net/nbl/nbl_include/nbl_def_channel.h b/drivers/net/nbl/nbl_include/nbl_def_channel.h
index 829014fa16..f20cc1ab7c 100644
--- a/drivers/net/nbl/nbl_include/nbl_def_channel.h
+++ b/drivers/net/nbl/nbl_include/nbl_def_channel.h
@@ -321,6 +321,11 @@ struct nbl_chan_param_get_eth_id {
u8 logic_eth_id;
};
+struct nbl_chan_vsi_qid_info {
+ u16 vsi_id;
+ u16 local_qid;
+};
+
struct nbl_chan_param_register_vsi2q {
u16 vsi_index;
u16 vsi_id;
@@ -350,6 +355,21 @@ struct nbl_chan_param_del_multi_rule {
u16 vsi;
};
+struct nbl_chan_param_cfg_dsch {
+ u16 vsi_id;
+ bool vld;
+};
+
+struct nbl_chan_param_setup_cqs {
+ u16 vsi_id;
+ u16 real_qps;
+ bool rss_indir_set;
+};
+
+struct nbl_chan_param_remove_cqs {
+ u16 vsi_id;
+};
+
struct nbl_chan_send_info {
uint16_t dstid;
uint16_t msg_type;
diff --git a/drivers/net/nbl/nbl_include/nbl_def_common.h b/drivers/net/nbl/nbl_include/nbl_def_common.h
index 9773efc246..722a372548 100644
--- a/drivers/net/nbl/nbl_include/nbl_def_common.h
+++ b/drivers/net/nbl/nbl_include/nbl_def_common.h
@@ -27,11 +27,15 @@
#define NBL_DEV_USER_TYPE ('n')
#define NBL_DEV_USER_DATA_LEN (2044)
-#define NBL_DEV_USER_PCI_OFFSET_SHIFT 40
+#define NBL_DEV_USER_PCI_OFFSET_SHIFT (40)
#define NBL_DEV_USER_OFFSET_TO_INDEX(off) ((off) >> NBL_DEV_USER_PCI_OFFSET_SHIFT)
#define NBL_DEV_USER_INDEX_TO_OFFSET(index) ((u64)(index) << NBL_DEV_USER_PCI_OFFSET_SHIFT)
#define NBL_DEV_SHM_MSG_RING_INDEX (6)
+#define NBL_VLAN_TAG_SIZE (4)
+#define NBL_ETH_OVERHEAD \
+ (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + NBL_VLAN_TAG_SIZE * 2)
+
struct nbl_dev_user_channel_msg {
u16 msg_type;
u16 dst_id;
diff --git a/drivers/net/nbl/nbl_include/nbl_def_dispatch.h b/drivers/net/nbl/nbl_include/nbl_def_dispatch.h
index ac261db26a..e38c5a84aa 100644
--- a/drivers/net/nbl/nbl_include/nbl_def_dispatch.h
+++ b/drivers/net/nbl/nbl_include/nbl_def_dispatch.h
@@ -59,7 +59,7 @@ struct nbl_dispatch_ops {
int (*setup_rss)(void *priv, u16 vsi_id);
void (*remove_rss)(void *priv, u16 vsi_id);
int (*cfg_dsch)(void *priv, u16 vsi_id, bool vld);
- int (*setup_cqs)(void *priv, u16 vsi_id, u16 real_qps);
+ int (*setup_cqs)(void *priv, u16 vsi_id, u16 real_qps, bool rss_indir_set);
void (*remove_cqs)(void *priv, u16 vsi_id);
int (*set_rxfh_indir)(void *priv, u16 vsi_id, u32 *indir, u32 indir_size);
void (*clear_queues)(void *priv, u16 vsi_id);
diff --git a/drivers/net/nbl/nbl_include/nbl_def_resource.h b/drivers/net/nbl/nbl_include/nbl_def_resource.h
index a40ccc4fd8..5fec287581 100644
--- a/drivers/net/nbl/nbl_include/nbl_def_resource.h
+++ b/drivers/net/nbl/nbl_include/nbl_def_resource.h
@@ -18,6 +18,7 @@ struct nbl_resource_ops {
int (*unregister_net)(void *priv);
u16 (*get_vsi_id)(void *priv);
void (*get_eth_id)(void *priv, u16 vsi_id, u8 *eth_mode, u8 *eth_id);
+ u16 (*get_vsi_global_qid)(void *priv, u16 vsi_id, u16 local_qid);
int (*setup_q2vsi)(void *priv, u16 vsi_id);
void (*remove_q2vsi)(void *priv, u16 vsi_id);
int (*register_vsi2q)(void *priv, u16 vsi_index, u16 vsi_id,
@@ -57,6 +58,9 @@ struct nbl_resource_ops {
void (*del_multi_rule)(void *priv, u16 vsi_id);
int (*cfg_multi_mcast)(void *priv, u16 vsi_id, u16 enable);
void (*clear_flow)(void *priv, u16 vsi_id);
+ int (*cfg_dsch)(void *priv, u16 vsi_id, bool vld);
+ int (*setup_cqs)(void *priv, u16 vsi_id, u16 real_qps, bool rss_indir_set);
+ void (*remove_cqs)(void *priv, u16 vsi_id);
};
struct nbl_resource_ops_tbl {
diff --git a/drivers/net/nbl/nbl_include/nbl_include.h b/drivers/net/nbl/nbl_include/nbl_include.h
index 0efeb11b46..7f751ea9ce 100644
--- a/drivers/net/nbl/nbl_include/nbl_include.h
+++ b/drivers/net/nbl/nbl_include/nbl_include.h
@@ -59,7 +59,9 @@ typedef int16_t s16;
typedef int8_t s8;
/* Used for macros to pass checkpatch */
-#define NBL_NAME(x) x
+#define NBL_NAME(x) x
+#define BIT(a) (1UL << (a))
+#define NBL_SAFE_THREADS_WAIT_TIME (20)
enum {
NBL_VSI_DATA = 0, /* default vsi in kernel or independent dpdk */
@@ -82,8 +84,6 @@ struct nbl_func_caps {
u32 rsv:30;
};
-#define BIT(a) (1UL << (a))
-
struct nbl_start_rx_ring_param {
u16 queue_idx;
u16 nb_desc;
--
2.43.0
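The Rx buffer setup in nbl_res_alloc_rx_bufs() above rounds the mempool's usable data room down to the nearest hardware-supported buffer size (2KB/4KB/8KB/16KB) and rejects anything smaller. A minimal standalone sketch of that tiering, with constants mirroring the NBL_BUF_LEN_* macros (the helper name is illustrative, not part of the driver):

```c
#include <assert.h>
#include <stdint.h>

/* Mirror the NBL_BUF_LEN_* tiers from nbl_txrx.h */
#define BUF_LEN_16K 16384
#define BUF_LEN_8K   8192
#define BUF_LEN_4K   4096
#define BUF_LEN_2K   2048

/* Round the mempool data room down to the nearest supported hardware
 * buffer size; return 0 if it is below the 2KB minimum, in which case
 * the driver fails queue setup with -EINVAL.
 */
static uint16_t pick_buf_length(uint16_t room)
{
	if (room >= BUF_LEN_16K)
		return BUF_LEN_16K;
	if (room >= BUF_LEN_8K)
		return BUF_LEN_8K;
	if (room >= BUF_LEN_4K)
		return BUF_LEN_4K;
	if (room >= BUF_LEN_2K)
		return BUF_LEN_2K;
	return 0;
}
```

If the frame size (MTU plus Ethernet overhead and the extend header) exceeds the chosen buffer length, the driver enables scattered Rx instead of failing.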
* [PATCH v1 15/17] net/nbl: add nbl device tx and rx burst
@ 2025-06-12 8:58 ` Kyo Liu
From: Kyo Liu @ 2025-06-12 8:58 UTC (permalink / raw)
To: kyo.liu, dev; +Cc: Dimon Zhao, Leon Yu, Sam Chen
Implement NBL device tx and rx burst
Signed-off-by: Kyo Liu <kyo.liu@nebula-matrix.com>
---
drivers/net/nbl/nbl_dev/nbl_dev.c | 108 +++++-
drivers/net/nbl/nbl_dev/nbl_dev.h | 5 +
drivers/net/nbl/nbl_dispatch.c | 62 ++++
drivers/net/nbl/nbl_ethdev.c | 7 +
drivers/net/nbl/nbl_ethdev.h | 19 +
.../nbl/nbl_hw/nbl_hw_leonis/nbl_res_leonis.c | 1 +
drivers/net/nbl/nbl_hw/nbl_resource.h | 2 +
drivers/net/nbl/nbl_hw/nbl_txrx.c | 325 ++++++++++++++++++
drivers/net/nbl/nbl_hw/nbl_txrx.h | 19 +-
drivers/net/nbl/nbl_hw/nbl_txrx_ops.h | 91 +++++
drivers/net/nbl/nbl_include/nbl_def_channel.h | 4 +
drivers/net/nbl/nbl_include/nbl_def_common.h | 1 +
.../net/nbl/nbl_include/nbl_def_dispatch.h | 3 +
.../net/nbl/nbl_include/nbl_def_resource.h | 7 +
drivers/net/nbl/nbl_include/nbl_include.h | 31 ++
15 files changed, 683 insertions(+), 2 deletions(-)
create mode 100644 drivers/net/nbl/nbl_hw/nbl_txrx_ops.h
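The burst routines in this patch drive a virtio-style packed ring: a descriptor's ownership is determined by comparing its AVAIL and USED flag bits against the ring's wrap counter (see desc_is_used() in the new nbl_txrx_ops.h). A standalone sketch of the assumed semantics, with bit positions mirroring NBL_PACKED_DESC_F_AVAIL (bit 7) and NBL_PACKED_DESC_F_USED (bit 15):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Mirror NBL_PACKED_DESC_F_AVAIL_BIT / NBL_PACKED_DESC_F_USED_BIT */
#define F_AVAIL_BIT (1u << 7)
#define F_USED_BIT  (1u << 15)

/* A descriptor is complete (owned by the driver again) when the device
 * has set USED equal to AVAIL, and both match the ring's current wrap
 * counter. The driver flips the counter each time the ring wraps, so
 * stale descriptors from the previous lap never match.
 */
static bool packed_desc_is_used(uint16_t flags, bool wrap_counter)
{
	bool used  = !!(flags & F_USED_BIT);
	bool avail = !!(flags & F_AVAIL_BIT);

	return avail == used && used == wrap_counter;
}
```

This is why the Tx/Rx paths XOR avail_used_flags with NBL_PACKED_DESC_F_AVAIL_USED on every wrap: toggling both bits together keeps the driver's notion of "current lap" in sync with the check above.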
diff --git a/drivers/net/nbl/nbl_dev/nbl_dev.c b/drivers/net/nbl/nbl_dev/nbl_dev.c
index bdd06613e6..949d42506d 100644
--- a/drivers/net/nbl/nbl_dev/nbl_dev.c
+++ b/drivers/net/nbl/nbl_dev/nbl_dev.c
@@ -291,6 +291,99 @@ static void nbl_rx_queues_release(struct rte_eth_dev *eth_dev, uint16_t queue_id
disp_ops->release_rx_ring(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt), queue_id);
}
+static int nbl_dev_infos_get(struct rte_eth_dev *eth_dev,
+ struct rte_eth_dev_info *dev_info)
+{
+ struct nbl_adapter *adapter = ETH_DEV_TO_NBL_DEV_PF_PRIV(eth_dev);
+ struct nbl_dev_mgt *dev_mgt = NBL_ADAPTER_TO_DEV_MGT(adapter);
+ struct nbl_dev_ring_mgt *ring_mgt = &dev_mgt->net_dev->ring_mgt;
+ struct nbl_board_port_info *board_info = &dev_mgt->common->board_info;
+ u8 speed_mode = board_info->speed;
+
+ dev_info->min_mtu = RTE_ETHER_MIN_MTU;
+ dev_info->max_mtu = NBL_MAX_JUMBO_FRAME_SIZE - NBL_PKT_HDR_PAD;
+ dev_info->max_rx_pktlen = NBL_FRAME_SIZE_MAX;
+ dev_info->max_mac_addrs = dev_mgt->net_dev->max_mac_num;
+ dev_info->max_rx_queues = ring_mgt->rx_ring_num;
+ dev_info->max_tx_queues = ring_mgt->tx_ring_num;
+ /* rx buffer size must be 2KB, 4KB, 8KB or 16KB */
+ dev_info->min_rx_bufsize = NBL_DEV_MIN_RX_BUFSIZE;
+ dev_info->flow_type_rss_offloads = NBL_RSS_OFFLOAD_TYPE;
+
+ dev_info->hash_key_size = NBL_EPRO_RSS_SK_SIZE;
+
+ dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
+ .nb_max = 32768,
+ .nb_min = 128,
+ .nb_align = 1,
+ .nb_seg_max = 128,
+ .nb_mtu_seg_max = 128,
+ };
+
+ dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
+ .nb_max = 32768,
+ .nb_min = 128,
+ .nb_align = 1,
+ .nb_seg_max = 128,
+ .nb_mtu_seg_max = 128,
+ };
+
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G;
+
+ dev_info->default_rxportconf.nb_queues = ring_mgt->rx_ring_num;
+ dev_info->default_txportconf.nb_queues = ring_mgt->tx_ring_num;
+ dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_UDP_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
+ dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_CHECKSUM |
+ RTE_ETH_RX_OFFLOAD_SCATTER;
+
+ switch (speed_mode) {
+ case NBL_FW_PORT_SPEED_100G:
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_100G;
+ /* FALLTHROUGH */
+ case NBL_FW_PORT_SPEED_50G:
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_50G;
+ /* FALLTHROUGH */
+ case NBL_FW_PORT_SPEED_25G:
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_25G;
+ /* FALLTHROUGH */
+ case NBL_FW_PORT_SPEED_10G:
+ dev_info->speed_capa |= RTE_ETH_LINK_SPEED_10G;
+ break;
+ default:
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_25G;
+ }
+
+ return 0;
+}
+
+static int nbl_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete __rte_unused)
+{
+ struct nbl_adapter *adapter = ETH_DEV_TO_NBL_DEV_PF_PRIV(eth_dev);
+ struct nbl_dev_mgt *dev_mgt = NBL_ADAPTER_TO_DEV_MGT(adapter);
+ struct rte_eth_link link = { 0 };
+
+ link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ link.link_status = !!dev_mgt->net_dev->eth_link_info.link_status;
+ if (link.link_status)
+ link.link_speed = dev_mgt->net_dev->eth_link_info.link_speed;
+
+ return rte_eth_linkstatus_set(eth_dev, &link);
+}
+
+static int nbl_stats_get(struct rte_eth_dev *eth_dev, struct rte_eth_stats *rte_stats)
+{
+ struct nbl_adapter *adapter = ETH_DEV_TO_NBL_DEV_PF_PRIV(eth_dev);
+ struct nbl_dev_mgt *dev_mgt = NBL_ADAPTER_TO_DEV_MGT(adapter);
+ struct nbl_dispatch_ops *disp_ops = NBL_DEV_MGT_TO_DISP_OPS(dev_mgt);
+
+ return disp_ops->get_stats(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt), rte_stats);
+}
+
struct nbl_dev_ops dev_ops = {
.dev_configure = nbl_dev_configure,
.dev_start = nbl_dev_port_start,
@@ -300,6 +393,9 @@ struct nbl_dev_ops dev_ops = {
.rx_queue_setup = nbl_rx_queue_setup,
.tx_queue_release = nbl_tx_queues_release,
.rx_queue_release = nbl_rx_queues_release,
+ .dev_infos_get = nbl_dev_infos_get,
+ .link_update = nbl_link_update,
+ .stats_get = nbl_stats_get,
};
static int nbl_dev_setup_chan_queue(struct nbl_adapter *adapter)
@@ -372,12 +468,17 @@ static int nbl_dev_leonis_start(void *p)
{
struct nbl_adapter *adapter = (struct nbl_adapter *)p;
struct nbl_dev_mgt *dev_mgt = NBL_ADAPTER_TO_DEV_MGT(adapter);
+ struct nbl_dispatch_ops *disp_ops = NBL_DEV_MGT_TO_DISP_OPS(dev_mgt);
int ret = 0;
dev_mgt->common = NBL_ADAPTER_TO_COMMON(adapter);
ret = nbl_dev_common_start(dev_mgt);
if (ret)
return ret;
+
+ disp_ops->get_link_state(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt),
+ dev_mgt->net_dev->eth_id,
+ &dev_mgt->net_dev->eth_link_info);
return 0;
}
@@ -564,7 +665,7 @@ static int nbl_dev_setup_net_dev(struct nbl_dev_mgt *dev_mgt,
return ret;
}
-int nbl_dev_init(void *p, __rte_unused struct rte_eth_dev *eth_dev)
+int nbl_dev_init(void *p, struct rte_eth_dev *eth_dev)
{
struct nbl_adapter *adapter = (struct nbl_adapter *)p;
struct nbl_dev_mgt **dev_mgt;
@@ -618,6 +719,11 @@ int nbl_dev_init(void *p, __rte_unused struct rte_eth_dev *eth_dev)
eth_dev->data->mac_addrs[0].addr_bytes);
adapter->state = NBL_ETHDEV_INITIALIZED;
+ disp_ops->get_resource_pt_ops(NBL_DEV_MGT_TO_DISP_PRIV(*dev_mgt),
+ &(*dev_mgt)->pt_ops, 0);
+
+ eth_dev->tx_pkt_burst = (*dev_mgt)->pt_ops.tx_pkt_burst;
+ eth_dev->rx_pkt_burst = (*dev_mgt)->pt_ops.rx_pkt_burst;
return 0;
diff --git a/drivers/net/nbl/nbl_dev/nbl_dev.h b/drivers/net/nbl/nbl_dev/nbl_dev.h
index 44deea3f3b..f58699a51e 100644
--- a/drivers/net/nbl/nbl_dev/nbl_dev.h
+++ b/drivers/net/nbl/nbl_dev/nbl_dev.h
@@ -17,6 +17,9 @@
#define NBL_DEV_MGT_TO_ETH_DEV(dev_mgt) ((dev_mgt)->net_dev->eth_dev)
#define NBL_DEV_MGT_TO_COMMON(dev_mgt) ((dev_mgt)->common)
+#define NBL_FRAME_SIZE_MAX (9600)
+#define NBL_DEV_MIN_RX_BUFSIZE (2048)
+
struct nbl_dev_ring {
u16 index;
u64 dma;
@@ -37,6 +40,7 @@ struct nbl_dev_ring_mgt {
struct nbl_dev_net_mgt {
struct rte_eth_dev *eth_dev;
struct nbl_dev_ring_mgt ring_mgt;
+ struct nbl_eth_link_info eth_link_info;
u16 vsi_id;
u8 eth_mode;
u8 eth_id;
@@ -49,6 +53,7 @@ struct nbl_dev_mgt {
struct nbl_channel_ops_tbl *chan_ops_tbl;
struct nbl_dev_net_mgt *net_dev;
struct nbl_common_info *common;
+ struct nbl_resource_pt_ops pt_ops;
};
struct nbl_product_dev_ops *nbl_dev_get_product_ops(enum nbl_product_type product_type);
diff --git a/drivers/net/nbl/nbl_dispatch.c b/drivers/net/nbl/nbl_dispatch.c
index 2d44909aab..33964641c4 100644
--- a/drivers/net/nbl/nbl_dispatch.c
+++ b/drivers/net/nbl/nbl_dispatch.c
@@ -676,6 +676,57 @@ static void nbl_disp_chan_remove_cqs_req(void *priv, u16 vsi_id)
chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send);
}
+static void nbl_disp_get_res_pt_ops(void *priv, struct nbl_resource_pt_ops *pt_ops, bool offload)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_resource_ops *res_ops;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ NBL_OPS_CALL(res_ops->get_resource_pt_ops,
+ (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt), pt_ops, offload));
+}
+
+static void nbl_disp_get_link_state(void *priv, u8 eth_id, struct nbl_eth_link_info *eth_link_info)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_resource_ops *res_ops;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+
+ /* if res_ops->get_link_state() is not implemented, report the link as up by default */
+ if (res_ops->get_link_state) {
+ res_ops->get_link_state(NBL_DISP_MGT_TO_RES_PRIV(disp_mgt),
+ eth_id, eth_link_info);
+ } else {
+ eth_link_info->link_status = 1;
+ eth_link_info->link_speed = RTE_ETH_SPEED_NUM_25G;
+ }
+}
+
+static void nbl_disp_chan_get_link_state_req(void *priv, u8 eth_id,
+ struct nbl_eth_link_info *eth_link_info)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_channel_ops *chan_ops;
+ struct nbl_chan_param_get_link_state param = {0};
+ struct nbl_chan_send_info chan_send;
+
+ chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+
+ param.eth_id = eth_id;
+
+ NBL_CHAN_SEND(chan_send, 0, NBL_CHAN_MSG_GET_LINK_STATE, ¶m, sizeof(param),
+ eth_link_info, sizeof(*eth_link_info), 1);
+ chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send);
+}
+
+static int nbl_disp_get_stats(void *priv, struct rte_eth_stats *rte_stats)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ return res_ops->get_stats(NBL_DISP_MGT_TO_RES_PRIV(disp_mgt), rte_stats);
+}
+
#define NBL_DISP_OPS_TBL \
do { \
NBL_DISP_SET_OPS(alloc_txrx_queues, nbl_disp_alloc_txrx_queues, \
@@ -797,6 +848,17 @@ do { \
NBL_DISP_SET_OPS(remove_cqs, nbl_disp_remove_cqs, \
NBL_DISP_CTRL_LVL_MGT, NBL_CHAN_MSG_REMOVE_CQS,\
nbl_disp_chan_remove_cqs_req, NULL); \
+ NBL_DISP_SET_OPS(get_resource_pt_ops, \
+ nbl_disp_get_res_pt_ops, \
+ NBL_DISP_CTRL_LVL_ALWAYS, -1, \
+ NULL, NULL); \
+ NBL_DISP_SET_OPS(get_link_state, nbl_disp_get_link_state, \
+ NBL_DISP_CTRL_LVL_MGT, \
+ NBL_CHAN_MSG_GET_LINK_STATE, \
+ nbl_disp_chan_get_link_state_req, NULL); \
+ NBL_DISP_SET_OPS(get_stats, nbl_disp_get_stats, \
+ NBL_DISP_CTRL_LVL_ALWAYS, -1, \
+ NULL, NULL); \
} while (0)
/* Structure starts here, adding an op should not modify anything below */
diff --git a/drivers/net/nbl/nbl_ethdev.c b/drivers/net/nbl/nbl_ethdev.c
index 5cbeed6a33..06543f6716 100644
--- a/drivers/net/nbl/nbl_ethdev.c
+++ b/drivers/net/nbl/nbl_ethdev.c
@@ -41,6 +41,13 @@ do { \
NBL_DEV_NET_OPS(dev_configure, dev_ops->dev_configure);\
NBL_DEV_NET_OPS(dev_start, dev_ops->dev_start); \
NBL_DEV_NET_OPS(dev_stop, dev_ops->dev_stop); \
+ NBL_DEV_NET_OPS(dev_infos_get, dev_ops->dev_infos_get);\
+ NBL_DEV_NET_OPS(tx_queue_setup, dev_ops->tx_queue_setup);\
+ NBL_DEV_NET_OPS(rx_queue_setup, dev_ops->rx_queue_setup);\
+ NBL_DEV_NET_OPS(rx_queue_release, dev_ops->rx_queue_release);\
+ NBL_DEV_NET_OPS(tx_queue_release, dev_ops->tx_queue_release);\
+ NBL_DEV_NET_OPS(link_update, dev_ops->link_update); \
+ NBL_DEV_NET_OPS(stats_get, dev_ops->stats_get); \
} while (0)
static void nbl_set_eth_dev_ops(struct nbl_adapter *adapter,
diff --git a/drivers/net/nbl/nbl_ethdev.h b/drivers/net/nbl/nbl_ethdev.h
index e20a7b940e..4d522746c0 100644
--- a/drivers/net/nbl/nbl_ethdev.h
+++ b/drivers/net/nbl/nbl_ethdev.h
@@ -10,4 +10,23 @@
#define ETH_DEV_TO_NBL_DEV_PF_PRIV(eth_dev) \
((struct nbl_adapter *)((eth_dev)->data->dev_private))
+#define NBL_MAX_JUMBO_FRAME_SIZE (9600)
+#define NBL_PKT_HDR_PAD (26)
+#define NBL_RSS_OFFLOAD_TYPE ( \
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_FRAG_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+ RTE_ETH_RSS_VXLAN | \
+ RTE_ETH_RSS_GENEVE | \
+ RTE_ETH_RSS_NVGRE)
+
#endif
diff --git a/drivers/net/nbl/nbl_hw/nbl_hw_leonis/nbl_res_leonis.c b/drivers/net/nbl/nbl_hw/nbl_hw_leonis/nbl_res_leonis.c
index b785774f67..e7036373f1 100644
--- a/drivers/net/nbl/nbl_hw/nbl_hw_leonis/nbl_res_leonis.c
+++ b/drivers/net/nbl/nbl_hw/nbl_hw_leonis/nbl_res_leonis.c
@@ -105,6 +105,7 @@ int nbl_res_init_leonis(void *p, struct rte_eth_dev *eth_dev)
NBL_RES_MGT_TO_CHAN_OPS_TBL(&(*res_mgt_leonis)->res_mgt) = chan_ops_tbl;
NBL_RES_MGT_TO_PHY_OPS_TBL(&(*res_mgt_leonis)->res_mgt) = phy_ops_tbl;
NBL_RES_MGT_TO_ETH_DEV(&(*res_mgt_leonis)->res_mgt) = eth_dev;
+ NBL_RES_MGT_TO_COMMON(&(*res_mgt_leonis)->res_mgt) = &adapter->common;
ret = nbl_res_start(*res_mgt_leonis);
if (ret)
diff --git a/drivers/net/nbl/nbl_hw/nbl_resource.h b/drivers/net/nbl/nbl_hw/nbl_resource.h
index 543054a2cb..ad5ac22d61 100644
--- a/drivers/net/nbl/nbl_hw/nbl_resource.h
+++ b/drivers/net/nbl/nbl_hw/nbl_resource.h
@@ -46,6 +46,7 @@ struct nbl_res_tx_ring {
volatile uint8_t *notify;
struct rte_eth_dev *eth_dev;
struct nbl_common_info *common;
+ struct nbl_txq_stats txq_stats;
u64 default_hdr[2];
enum nbl_product_type product;
@@ -86,6 +87,7 @@ struct nbl_res_rx_ring {
volatile uint8_t *notify;
struct rte_eth_dev *eth_dev;
struct nbl_common_info *common;
+ struct nbl_rxq_stats rxq_stats;
uint64_t mbuf_initializer; /**< value to init mbufs */
struct rte_mbuf fake_mbuf;
diff --git a/drivers/net/nbl/nbl_hw/nbl_txrx.c b/drivers/net/nbl/nbl_hw/nbl_txrx.c
index 27b549beda..361415cd21 100644
--- a/drivers/net/nbl/nbl_hw/nbl_txrx.c
+++ b/drivers/net/nbl/nbl_hw/nbl_txrx.c
@@ -4,6 +4,7 @@
#include "nbl_txrx.h"
#include "nbl_include.h"
+#include "nbl_txrx_ops.h"
static int nbl_res_txrx_alloc_rings(void *priv, u16 tx_num, u16 rx_num, u16 queue_offset)
{
@@ -399,6 +400,328 @@ static void nbl_res_txrx_update_rx_ring(void *priv, u16 index)
rx_ring->next_to_use));
}
+static inline void nbl_fill_rx_ring(struct nbl_res_rx_ring *rxq,
+ struct rte_mbuf **cookie, uint16_t fill_num)
+{
+ volatile struct nbl_packed_desc *rx_desc;
+ struct nbl_rx_entry *rx_entry;
+ uint64_t dma_addr;
+ uint16_t desc_index, i, flags;
+
+ desc_index = rxq->next_to_use;
+ for (i = 0; i < fill_num; i++) {
+ rx_desc = &rxq->desc[desc_index];
+ rx_entry = &rxq->rx_entry[desc_index];
+ rx_entry->mbuf = cookie[i];
+
+ flags = rxq->avail_used_flags;
+ desc_index++;
+ if (desc_index >= rxq->nb_desc) {
+ desc_index = 0;
+ rxq->avail_used_flags ^= NBL_PACKED_DESC_F_AVAIL_USED;
+ }
+ if ((desc_index & 0x3) == 0) {
+ rte_prefetch0(&rxq->rx_entry[desc_index]);
+ rte_prefetch0(&rxq->desc[desc_index]);
+ }
+
+ cookie[i]->data_off = RTE_PKTMBUF_HEADROOM;
+ rx_desc->len =
+ rte_cpu_to_le_32(cookie[i]->buf_len - RTE_PKTMBUF_HEADROOM);
+ dma_addr = NBL_DMA_ADDERSS_FULL_TRANSLATE(rxq,
+ rte_mbuf_data_iova_default(cookie[i]));
+ rx_desc->addr = rte_cpu_to_le_64(dma_addr);
+
+ rte_io_wmb();
+ rx_desc->flags = flags;
+ }
+
+ rxq->vq_free_cnt -= fill_num;
+ rxq->next_to_use = desc_index;
+}
+
+static u16
+nbl_res_txrx_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts, u16 extend_set)
+{
+ struct nbl_res_tx_ring *txq;
+ union nbl_tx_extend_head *tx_region;
+ volatile struct nbl_packed_desc *tx_ring;
+ struct nbl_tx_entry *sw_ring;
+ volatile struct nbl_packed_desc *tx_desc, *head_desc;
+ struct nbl_tx_entry *txe;
+ struct rte_mbuf *tx_pkt;
+ union nbl_tx_extend_head *u;
+ rte_iova_t net_hdr_mem;
+ uint64_t dma_addr;
+ u16 nb_xmit_pkts;
+ u16 desc_index, head_index, head_flags;
+ u16 data_len, header_len = 0;
+ u16 nb_descs;
+ u16 can_push;
+ u16 required_headroom;
+ u16 tx_extend_len;
+ u16 addr_offset;
+
+ txq = tx_queue;
+ tx_ring = txq->desc;
+ sw_ring = txq->tx_entry;
+ desc_index = txq->next_to_use;
+ txe = &sw_ring[txq->next_to_use];
+ tx_region = txq->net_hdr_mz->addr;
+ net_hdr_mem = NBL_DMA_ADDERSS_FULL_TRANSLATE(txq, txq->net_hdr_mz->iova);
+
+ if (txq->vq_free_cnt < NBL_TX_FREE_THRESH)
+ nbl_tx_free_bufs(txq);
+
+ for (nb_xmit_pkts = 0; nb_xmit_pkts < nb_pkts; nb_xmit_pkts++) {
+ required_headroom = txq->exthdr_len;
+ tx_extend_len = txq->exthdr_len;
+ addr_offset = 0;
+
+ tx_pkt = *tx_pkts++;
+ if (txq->vlan_tci && txq->vlan_proto) {
+ required_headroom += sizeof(struct rte_vlan_hdr);
+ /* extend_hdr + ether_hdr + vlan_hdr */
+ tx_extend_len = required_headroom + sizeof(struct rte_ether_hdr);
+ }
+
+ if (rte_pktmbuf_headroom(tx_pkt) >= required_headroom) {
+ can_push = 1;
+ u = rte_pktmbuf_mtod_offset(tx_pkt, union nbl_tx_extend_head *,
+ -required_headroom);
+ } else {
+ can_push = 0;
+ u = (union nbl_tx_extend_head *)(&tx_region[desc_index]);
+ }
+ nb_descs = !can_push + tx_pkt->nb_segs;
+
+ if (nb_descs > txq->vq_free_cnt) {
+ /* need retry */
+ nbl_tx_free_bufs(txq);
+ if (nb_descs > txq->vq_free_cnt)
+ goto exit;
+ }
+
+ head_index = desc_index;
+ head_desc = &tx_ring[desc_index];
+ txe = &sw_ring[desc_index];
+
+ if (!extend_set)
+ memcpy(u, &txq->default_hdr, txq->exthdr_len);
+
+ if (txq->offloads)
+ header_len = txq->prep_tx_ehdr(u, tx_pkt);
+
+ head_flags = txq->avail_used_flags;
+ head_desc->id = tx_pkt->tso_segsz;
+
+ /* add next tx desc to tx list */
+ if (!can_push) {
+ head_flags |= NBL_VRING_DESC_F_NEXT;
+ txe->mbuf = NULL;
+ /* padding */
+ head_desc->addr = net_hdr_mem +
+ RTE_PTR_DIFF(&tx_region[desc_index], tx_region);
+ head_desc->len = tx_extend_len;
+ txe->first_id = head_index;
+ desc_index++;
+ txq->vq_free_cnt--;
+ if (desc_index >= txq->nb_desc) {
+ desc_index = 0;
+ txq->avail_used_flags ^= NBL_PACKED_DESC_F_AVAIL_USED;
+ }
+ }
+
+ do {
+ tx_desc = &tx_ring[desc_index];
+ txe = &sw_ring[desc_index];
+ txe->mbuf = tx_pkt;
+
+ data_len = tx_pkt->data_len;
+ dma_addr = rte_mbuf_data_iova(tx_pkt);
+ tx_desc->addr = NBL_DMA_ADDERSS_FULL_TRANSLATE(txq, dma_addr) + addr_offset;
+ tx_desc->len = data_len - addr_offset;
+ addr_offset = 0;
+
+ if (desc_index == head_index) {
+ tx_desc->addr -= txq->exthdr_len;
+ tx_desc->len += txq->exthdr_len;
+ } else {
+ tx_desc->flags = txq->avail_used_flags | NBL_VRING_DESC_F_NEXT;
+ head_flags |= NBL_VRING_DESC_F_NEXT;
+ }
+
+ tx_pkt = tx_pkt->next;
+ txe->first_id = head_index;
+ desc_index++;
+ txq->vq_free_cnt--;
+ if (desc_index >= txq->nb_desc) {
+ desc_index = 0;
+ txq->avail_used_flags ^= NBL_PACKED_DESC_F_AVAIL_USED;
+ }
+ } while (tx_pkt);
+ tx_desc->flags &= ~(u16)NBL_VRING_DESC_F_NEXT;
+ head_desc->len += (header_len << NBL_TX_TOTAL_HEADERLEN_SHIFT);
+ rte_io_wmb();
+ head_desc->flags = head_flags;
+ txq->next_to_use = desc_index;
+ }
+
+exit:
+ /* kick hw_notify_addr */
+ rte_write32(txq->notify_qid, txq->notify);
+ txq->txq_stats.tx_packets += nb_xmit_pkts;
+ return nb_xmit_pkts;
+}
+
+static u16
+nbl_res_txrx_pf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts)
+{
+ return nbl_res_txrx_xmit_pkts(tx_queue, tx_pkts, nb_pkts, 0);
+}
+
+static u16
+nbl_res_txrx_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts)
+{
+ struct nbl_res_rx_ring *rxq;
+ volatile struct nbl_packed_desc *rx_ring;
+ volatile struct nbl_packed_desc *rx_desc;
+ struct nbl_rx_entry *sw_ring;
+ struct nbl_rx_entry *rx_entry;
+ struct rte_mbuf *rx_mbuf, *last_mbuf;
+ uint32_t num_sg = 0;
+ uint16_t nb_recv_pkts = 0;
+ uint16_t desc_index;
+ uint16_t fill_num;
+ volatile union nbl_rx_extend_head *rx_ext_hdr;
+ int drop;
+ struct rte_mbuf *new_pkts[NBL_RXQ_REARM_THRESH];
+
+ rxq = rx_queue;
+ rx_ring = rxq->desc;
+ sw_ring = rxq->rx_entry;
+ desc_index = rxq->next_to_clean;
+ while (nb_recv_pkts < nb_pkts) {
+ rx_desc = &rx_ring[desc_index];
+ rx_entry = &sw_ring[desc_index];
+ drop = 0;
+
+ if (!desc_is_used(rx_desc, rxq->used_wrap_counter))
+ break;
+
+ rte_io_rmb();
+ if (!num_sg) {
+ rx_mbuf = rx_entry->mbuf;
+ last_mbuf = rx_mbuf;
+
+ rx_ext_hdr = (union nbl_rx_extend_head *)((char *)rx_mbuf->buf_addr +
+ RTE_PKTMBUF_HEADROOM);
+ num_sg = rx_ext_hdr->common.num_buffers;
+
+ rx_mbuf->nb_segs = num_sg;
+ rx_mbuf->data_len = rx_desc->len - rxq->exthdr_len;
+ rx_mbuf->pkt_len = rx_desc->len - rxq->exthdr_len;
+ rx_mbuf->port = rxq->port_id;
+ rx_mbuf->data_off = RTE_PKTMBUF_HEADROOM + rxq->exthdr_len;
+ } else {
+ last_mbuf->next = rx_entry->mbuf;
+ last_mbuf = rx_entry->mbuf;
+
+ last_mbuf->data_len = rx_desc->len;
+ last_mbuf->pkt_len = rx_desc->len;
+ last_mbuf->data_off = RTE_PKTMBUF_HEADROOM;
+ rx_mbuf->pkt_len += rx_desc->len;
+ }
+
+ rxq->vq_free_cnt++;
+ desc_index++;
+
+ if (desc_index >= rxq->nb_desc) {
+ desc_index = 0;
+ rxq->used_wrap_counter ^= 1;
+ }
+
+ if (--num_sg)
+ continue;
+ if (drop) {
+ rte_pktmbuf_free(rx_mbuf);
+ continue;
+ }
+ rx_pkts[nb_recv_pkts++] = rx_mbuf;
+ }
+
+ /* free the incomplete sg chain to avoid a duplicate pkt free */
+ if (unlikely(num_sg))
+ rte_pktmbuf_free(rx_mbuf);
+ /* clean memory */
+ rxq->next_to_clean = desc_index;
+ fill_num = rxq->vq_free_cnt;
+ /* TODO: honor a configurable rx free thresh */
+ if (fill_num > NBL_RXQ_REARM_THRESH) {
+ if (likely(!rte_pktmbuf_alloc_bulk(rxq->mempool, new_pkts, NBL_RXQ_REARM_THRESH)))
+ nbl_fill_rx_ring(rxq, new_pkts, NBL_RXQ_REARM_THRESH);
+ }
+
+ rxq->rxq_stats.rx_packets += nb_recv_pkts;
+
+ return nb_recv_pkts;
+}
+
+static void nbl_res_get_pt_ops(void *priv, struct nbl_resource_pt_ops *pt_ops, bool offload)
+{
+ RTE_SET_USED(priv);
+ RTE_SET_USED(offload);
+ pt_ops->tx_pkt_burst = nbl_res_txrx_pf_xmit_pkts;
+ pt_ops->rx_pkt_burst = nbl_res_txrx_recv_pkts;
+}
+
+static int nbl_res_txrx_get_stats(void *priv, struct rte_eth_stats *rte_stats)
+{
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+ struct rte_eth_dev *eth_dev = res_mgt->eth_dev;
+ struct nbl_res_rx_ring *rxq;
+ struct nbl_rxq_stats *rxq_stats;
+ struct nbl_res_tx_ring *txq;
+ struct nbl_txq_stats *txq_stats;
+ uint32_t i;
+ uint16_t idx;
+
+ /* Add software counters. */
+ for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
+ rxq = eth_dev->data->rx_queues[i];
+ if (unlikely(rxq == NULL))
+ return -EINVAL;
+
+ rxq_stats = &rxq->rxq_stats;
+ idx = rxq->queue_id;
+
+ rte_stats->q_ipackets[idx] += rxq_stats->rx_packets;
+ rte_stats->q_ibytes[idx] += rxq_stats->rx_bytes;
+
+ rte_stats->ipackets += rxq_stats->rx_packets;
+ rte_stats->ibytes += rxq_stats->rx_bytes;
+ rte_stats->rx_nombuf += rxq_stats->rx_nombuf;
+ rte_stats->ierrors += rxq_stats->rx_ierror;
+ }
+
+ for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
+ txq = eth_dev->data->tx_queues[i];
+ if (unlikely(txq == NULL))
+ return -EINVAL;
+ txq_stats = &txq->txq_stats;
+ idx = txq->queue_id;
+
+ rte_stats->q_opackets[idx] += txq_stats->tx_packets;
+ rte_stats->q_obytes[idx] += txq_stats->tx_bytes;
+
+ rte_stats->opackets += txq_stats->tx_packets;
+ rte_stats->obytes += txq_stats->tx_bytes;
+ rte_stats->oerrors += txq_stats->tx_errors;
+ }
+
+ return 0;
+}
+
/* NBL_TXRX_SET_OPS(ops_name, func)
*
* Use X Macros to reduce setup and remove codes.
@@ -415,6 +738,8 @@ do { \
NBL_TXRX_SET_OPS(stop_rx_ring, nbl_res_txrx_stop_rx_ring); \
NBL_TXRX_SET_OPS(release_rx_ring, nbl_res_txrx_release_rx_ring); \
NBL_TXRX_SET_OPS(update_rx_ring, nbl_res_txrx_update_rx_ring); \
+ NBL_TXRX_SET_OPS(get_resource_pt_ops, nbl_res_get_pt_ops); \
+ NBL_TXRX_SET_OPS(get_stats, nbl_res_txrx_get_stats); \
} while (0)
/* Structure starts here, adding an op should not modify anything below */
diff --git a/drivers/net/nbl/nbl_hw/nbl_txrx.h b/drivers/net/nbl/nbl_hw/nbl_txrx.h
index 5cf6e83c3f..d0d4b6128d 100644
--- a/drivers/net/nbl/nbl_hw/nbl_txrx.h
+++ b/drivers/net/nbl/nbl_hw/nbl_txrx.h
@@ -18,10 +18,19 @@
#define NBL_PACKED_DESC_F_AVAIL_USED (NBL_PACKED_DESC_F_AVAIL_BIT | \
NBL_PACKED_DESC_F_USED_BIT)
-#define NBL_TX_RS_THRESH (32)
#define NBL_TX_HEADER_LEN (32)
#define NBL_VQ_HDR_NAME_MAXSIZE (32)
+#define NBL_VRING_DESC_F_NEXT RTE_BIT64(0)
+#define NBL_VRING_DESC_F_WRITE RTE_BIT64(1)
+#define NBL_FREE_DESC_THRES 16
+#define NBL_USED_DESC_THRES 32
+#define NBL_TX_TOTAL_HEADERLEN_SHIFT 24
+#define NBL_TX_FREE_THRESH 32
+#define NBL_TX_RS_THRESH 32
+
+#define NBL_RXQ_REARM_THRESH 32
+
#define NBL_DESC_PER_LOOP_VEC_MAX (8)
#define NBL_BUF_LEN_16K (16384)
#define NBL_BUF_LEN_8K (8192)
@@ -114,6 +123,14 @@ union nbl_rx_extend_head {
u32 num_buffers :8;
u32 hash_value;
} leonis;
+
+ struct nbl_rx_ehdr_common {
+ u32 dw0;
+ u32 dw1;
+ u32 dw2:24;
+ u32 num_buffers:8;
+ u32 dw3;
+ } common;
};
#endif
diff --git a/drivers/net/nbl/nbl_hw/nbl_txrx_ops.h b/drivers/net/nbl/nbl_hw/nbl_txrx_ops.h
new file mode 100644
index 0000000000..2ab4b09683
--- /dev/null
+++ b/drivers/net/nbl/nbl_hw/nbl_txrx_ops.h
@@ -0,0 +1,91 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2025 Nebulamatrix Technology Co., Ltd.
+ */
+
+#ifndef _NBL_TXRX_OPS_H_
+#define _NBL_TXRX_OPS_H_
+
+#define NBL_TX_MAX_FREE_BUF_SZ 64
+#define NBL_RXQ_REARM_THRESH 32
+
+static __rte_always_inline struct rte_mbuf *nbl_pktmbuf_prefree_seg(struct rte_mbuf *m)
+{
+ if (likely(m))
+ return rte_pktmbuf_prefree_seg(m);
+
+ return NULL;
+}
+
+static __rte_always_inline int
+desc_is_used(volatile struct nbl_packed_desc *desc, bool wrap_counter)
+{
+ uint16_t used, avail, flags;
+
+ flags = desc->flags;
+ used = !!(flags & NBL_PACKED_DESC_F_USED_BIT);
+ avail = !!(flags & NBL_PACKED_DESC_F_AVAIL_BIT);
+
+ return avail == used && used == wrap_counter;
+}
+
+static __rte_always_inline int
+nbl_tx_free_bufs(struct nbl_res_tx_ring *txq)
+{
+ struct rte_mbuf *m, *free[NBL_TX_MAX_FREE_BUF_SZ];
+ struct nbl_tx_entry *txep;
+ uint32_t n;
+ uint32_t i;
+ uint32_t next_to_clean;
+ int nb_free = 0;
+
+ next_to_clean = txq->tx_entry[txq->next_to_clean].first_id;
+ /* check DD bits on threshold descriptor */
+ if (!desc_is_used(&txq->desc[next_to_clean], txq->used_wrap_counter))
+ return 0;
+
+ n = NBL_TX_RS_THRESH;
+
+ /* first buffer to free from the S/W ring is at index
+ * next_to_clean - (n - 1)
+ */
+ /* consider headroom */
+ txep = &txq->tx_entry[txq->next_to_clean - (n - 1)];
+ m = nbl_pktmbuf_prefree_seg(txep[0].mbuf);
+ if (likely(m)) {
+ free[0] = m;
+ nb_free = 1;
+ for (i = 1; i < n; i++) {
+ m = nbl_pktmbuf_prefree_seg(txep[i].mbuf);
+ if (likely(m)) {
+ if (likely(m->pool == free[0]->pool)) {
+ free[nb_free++] = m;
+ } else {
+ rte_mempool_put_bulk(free[0]->pool,
+ (void *)free,
+ nb_free);
+ free[0] = m;
+ nb_free = 1;
+ }
+ }
+ }
+ rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
+ } else {
+ for (i = 1; i < n; i++) {
+ m = nbl_pktmbuf_prefree_seg(txep[i].mbuf);
+ if (m)
+ rte_mempool_put(m->pool, m);
+ }
+ }
+
+ /* buffers were freed, update counters */
+ txq->vq_free_cnt = (uint16_t)(txq->vq_free_cnt + NBL_TX_RS_THRESH);
+ txq->next_to_clean = (uint16_t)(txq->next_to_clean + NBL_TX_RS_THRESH);
+ if (txq->next_to_clean >= txq->nb_desc) {
+ txq->next_to_clean = NBL_TX_RS_THRESH - 1;
+ txq->used_wrap_counter ^= 1;
+ }
+
+ return NBL_TX_RS_THRESH;
+}
+
+#endif
diff --git a/drivers/net/nbl/nbl_include/nbl_def_channel.h b/drivers/net/nbl/nbl_include/nbl_def_channel.h
index f20cc1ab7c..ba960ebdbf 100644
--- a/drivers/net/nbl/nbl_include/nbl_def_channel.h
+++ b/drivers/net/nbl/nbl_include/nbl_def_channel.h
@@ -370,6 +370,10 @@ struct nbl_chan_param_remove_cqs {
u16 vsi_id;
};
+struct nbl_chan_param_get_link_state {
+ u8 eth_id;
+};
+
struct nbl_chan_send_info {
uint16_t dstid;
uint16_t msg_type;
diff --git a/drivers/net/nbl/nbl_include/nbl_def_common.h b/drivers/net/nbl/nbl_include/nbl_def_common.h
index 722a372548..2d8b1be036 100644
--- a/drivers/net/nbl/nbl_include/nbl_def_common.h
+++ b/drivers/net/nbl/nbl_include/nbl_def_common.h
@@ -23,6 +23,7 @@
#define NBL_TWO_ETHERNET_MAX_MAC_NUM (512)
#define NBL_FOUR_ETHERNET_MAX_MAC_NUM (1024)
+#define NBL_EPRO_RSS_SK_SIZE (40)
#define NBL_DEV_USER_TYPE ('n')
#define NBL_DEV_USER_DATA_LEN (2044)
diff --git a/drivers/net/nbl/nbl_include/nbl_def_dispatch.h b/drivers/net/nbl/nbl_include/nbl_def_dispatch.h
index e38c5a84aa..20510f7fa2 100644
--- a/drivers/net/nbl/nbl_include/nbl_def_dispatch.h
+++ b/drivers/net/nbl/nbl_include/nbl_def_dispatch.h
@@ -19,6 +19,7 @@ enum {
};
struct nbl_dispatch_ops {
+ void (*get_resource_pt_ops)(void *priv, struct nbl_resource_pt_ops *pt_ops, bool offload);
int (*register_net)(void *priv,
struct nbl_register_net_param *register_param,
struct nbl_register_net_result *register_result);
@@ -67,6 +68,8 @@ struct nbl_dispatch_ops {
u16 (*recv_pkts)(void *priv, void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts);
u16 (*get_vsi_global_qid)(void *priv, u16 vsi_id, u16 local_qid);
void (*get_board_info)(void *priv, struct nbl_board_port_info *board_info);
+ void (*get_link_state)(void *priv, u8 eth_id, struct nbl_eth_link_info *eth_link_info);
+ int (*get_stats)(void *priv, struct rte_eth_stats *rte_stats);
void (*dummy_func)(void *priv);
};
diff --git a/drivers/net/nbl/nbl_include/nbl_def_resource.h b/drivers/net/nbl/nbl_include/nbl_def_resource.h
index 5fec287581..da683cd957 100644
--- a/drivers/net/nbl/nbl_include/nbl_def_resource.h
+++ b/drivers/net/nbl/nbl_include/nbl_def_resource.h
@@ -11,7 +11,13 @@
#define NBL_RES_OPS_TBL_TO_OPS(res_ops_tbl) ((res_ops_tbl)->ops)
#define NBL_RES_OPS_TBL_TO_PRIV(res_ops_tbl) ((res_ops_tbl)->priv)
+struct nbl_resource_pt_ops {
+ eth_rx_burst_t rx_pkt_burst;
+ eth_tx_burst_t tx_pkt_burst;
+};
+
struct nbl_resource_ops {
+ void (*get_resource_pt_ops)(void *priv, struct nbl_resource_pt_ops *pt_ops, bool offload);
int (*register_net)(void *priv,
struct nbl_register_net_param *register_param,
struct nbl_register_net_result *register_result);
@@ -61,6 +67,7 @@ struct nbl_resource_ops {
int (*cfg_dsch)(void *priv, u16 vsi_id, bool vld);
int (*setup_cqs)(void *priv, u16 vsi_id, u16 real_qps, bool rss_indir_set);
void (*remove_cqs)(void *priv, u16 vsi_id);
+ void (*get_link_state)(void *priv, u8 eth_id, struct nbl_eth_link_info *eth_link_info);
};
struct nbl_resource_ops_tbl {
diff --git a/drivers/net/nbl/nbl_include/nbl_include.h b/drivers/net/nbl/nbl_include/nbl_include.h
index 7f751ea9ce..06aaad09d8 100644
--- a/drivers/net/nbl/nbl_include/nbl_include.h
+++ b/drivers/net/nbl/nbl_include/nbl_include.h
@@ -77,6 +77,13 @@ enum nbl_product_type {
NBL_PRODUCT_MAX,
};
+enum nbl_fw_port_speed {
+ NBL_FW_PORT_SPEED_10G,
+ NBL_FW_PORT_SPEED_25G,
+ NBL_FW_PORT_SPEED_50G,
+ NBL_FW_PORT_SPEED_100G,
+};
+
struct nbl_func_caps {
enum nbl_product_type product_type;
u32 is_vf:1;
@@ -174,4 +181,28 @@ struct nbl_register_net_result {
bool trusted;
};
+struct nbl_eth_link_info {
+ u8 link_status;
+ u32 link_speed;
+};
+
+struct nbl_rxq_stats {
+ uint64_t rx_packets;
+ uint64_t rx_bytes;
+ uint64_t rx_nombuf;
+ uint64_t rx_multi_descs;
+
+ uint64_t rx_ierror;
+ uint64_t rx_drop_noport;
+ uint64_t rx_drop_proto;
+};
+
+struct nbl_txq_stats {
+ uint64_t tx_packets;
+ uint64_t tx_bytes;
+ uint64_t tx_errors;
+ uint64_t tx_descs;
+ uint64_t tx_tso_packets;
+};
+
#endif
--
2.43.0
* [PATCH v1 16/17] net/nbl: add nbl device xstats and stats
2025-06-12 8:58 [PATCH v1 00/17] NBL PMD for Nebulamatrix NICs Kyo Liu
` (14 preceding siblings ...)
2025-06-12 8:58 ` [PATCH v1 15/17] net/nbl: add nbl device tx and rx burst Kyo Liu
@ 2025-06-12 8:58 ` Kyo Liu
2025-06-12 8:58 ` [PATCH v1 17/17] net/nbl: nbl device support set mtu and promisc Kyo Liu
` (3 subsequent siblings)
19 siblings, 0 replies; 27+ messages in thread
From: Kyo Liu @ 2025-06-12 8:58 UTC (permalink / raw)
To: kyo.liu, dev; +Cc: Dimon Zhao, Leon Yu, Sam Chen
Implement NBL device xstats and stats functions
Signed-off-by: Kyo Liu <kyo.liu@nebula-matrix.com>
---
drivers/net/nbl/nbl_dev/nbl_dev.c | 148 +++++++++++++++++-
drivers/net/nbl/nbl_dev/nbl_dev.h | 2 +
drivers/net/nbl/nbl_dispatch.c | 111 +++++++++++++
.../nbl/nbl_hw/nbl_hw_leonis/nbl_res_leonis.c | 113 +++++++++++++
drivers/net/nbl/nbl_hw/nbl_txrx.c | 112 ++++++++++++-
drivers/net/nbl/nbl_include/nbl_def_channel.h | 5 +
.../net/nbl/nbl_include/nbl_def_dispatch.h | 9 ++
.../net/nbl/nbl_include/nbl_def_resource.h | 12 +-
drivers/net/nbl/nbl_include/nbl_include.h | 4 +
9 files changed, 510 insertions(+), 6 deletions(-)
diff --git a/drivers/net/nbl/nbl_dev/nbl_dev.c b/drivers/net/nbl/nbl_dev/nbl_dev.c
index 949d42506d..4fa132ae1c 100644
--- a/drivers/net/nbl/nbl_dev/nbl_dev.c
+++ b/drivers/net/nbl/nbl_dev/nbl_dev.c
@@ -384,6 +384,129 @@ static int nbl_stats_get(struct rte_eth_dev *eth_dev, struct rte_eth_stats *rte_
return disp_ops->get_stats(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt), rte_stats);
}
+static int nbl_stats_reset(struct rte_eth_dev *eth_dev)
+{
+ struct nbl_adapter *adapter = ETH_DEV_TO_NBL_DEV_PF_PRIV(eth_dev);
+ struct nbl_dev_mgt *dev_mgt = NBL_ADAPTER_TO_DEV_MGT(adapter);
+ struct nbl_dispatch_ops *disp_ops = NBL_DEV_MGT_TO_DISP_OPS(dev_mgt);
+
+ return disp_ops->reset_stats(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt));
+}
+
+static int nbl_dev_update_hw_xstats(struct nbl_dev_mgt *dev_mgt, struct rte_eth_xstat *xstats,
+ u16 hw_xstats_cnt, u16 *xstats_cnt)
+{
+ struct nbl_common_info *common = NBL_DEV_MGT_TO_COMMON(dev_mgt);
+ struct nbl_dispatch_ops *disp_ops = NBL_DEV_MGT_TO_DISP_OPS(dev_mgt);
+ u64 *hw_stats;
+ int i;
+ u16 count = *xstats_cnt;
+
+ hw_stats = rte_zmalloc("nbl_xstats_cnt", hw_xstats_cnt * sizeof(u64), 0);
+ if (!hw_stats)
+ return -ENOMEM;
+
+ disp_ops->get_private_stat_data(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt),
+ common->eth_id, hw_stats, hw_xstats_cnt * sizeof(u64));
+ for (i = 0; i < hw_xstats_cnt; i++) {
+ xstats[count].value = hw_stats[i] - dev_mgt->net_dev->hw_xstats_offset[i];
+ xstats[count].id = count;
+ count++;
+ }
+
+ *xstats_cnt = count;
+ rte_free(hw_stats);
+ return 0;
+}
+
+static int nbl_xstats_get(struct rte_eth_dev *eth_dev,
+ struct rte_eth_xstat *xstats, unsigned int n)
+{
+ struct nbl_adapter *adapter = ETH_DEV_TO_NBL_DEV_PF_PRIV(eth_dev);
+ struct nbl_dev_mgt *dev_mgt = NBL_ADAPTER_TO_DEV_MGT(adapter);
+ struct nbl_common_info *common = NBL_DEV_MGT_TO_COMMON(dev_mgt);
+ struct nbl_dispatch_ops *disp_ops = NBL_DEV_MGT_TO_DISP_OPS(dev_mgt);
+ int ret = 0;
+ u16 txrx_xstats_cnt = 0, hw_xstats_cnt = 0, xstats_cnt = 0;
+
+ if (!xstats)
+ return 0;
+
+ ret = disp_ops->get_txrx_xstats_cnt(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt), &txrx_xstats_cnt);
+ if (!common->is_vf)
+ ret |= disp_ops->get_hw_xstats_cnt(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt),
+ &hw_xstats_cnt);
+ if (ret)
+ return -EIO;
+
+ if (n < (txrx_xstats_cnt + hw_xstats_cnt))
+ return txrx_xstats_cnt + hw_xstats_cnt;
+
+ if (txrx_xstats_cnt)
+ ret = disp_ops->get_txrx_xstats(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt),
+ xstats, &xstats_cnt);
+ if (hw_xstats_cnt)
+ ret |= nbl_dev_update_hw_xstats(dev_mgt, xstats, hw_xstats_cnt, &xstats_cnt);
+
+ if (ret)
+ return -EIO;
+
+ return xstats_cnt;
+}
+
+static int nbl_xstats_get_names(struct rte_eth_dev *eth_dev,
+ struct rte_eth_xstat_name *xstats_names,
+ __rte_unused unsigned int limit)
+{
+ struct nbl_adapter *adapter = ETH_DEV_TO_NBL_DEV_PF_PRIV(eth_dev);
+ struct nbl_dev_mgt *dev_mgt = NBL_ADAPTER_TO_DEV_MGT(adapter);
+ struct nbl_common_info *common = NBL_DEV_MGT_TO_COMMON(dev_mgt);
+ struct nbl_dispatch_ops *disp_ops = NBL_DEV_MGT_TO_DISP_OPS(dev_mgt);
+ u16 txrx_xstats_cnt = 0, hw_xstats_cnt = 0, xstats_cnt = 0;
+ int ret = 0;
+
+ ret = disp_ops->get_txrx_xstats_cnt(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt), &txrx_xstats_cnt);
+ if (!common->is_vf)
+ ret |= disp_ops->get_hw_xstats_cnt(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt),
+ &hw_xstats_cnt);
+ if (ret)
+ return -EIO;
+
+ if (!xstats_names)
+ return txrx_xstats_cnt + hw_xstats_cnt;
+
+ if (txrx_xstats_cnt)
+ ret = disp_ops->get_txrx_xstats_names(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt),
+ xstats_names, &xstats_cnt);
+ if (hw_xstats_cnt)
+ ret |= disp_ops->get_hw_xstats_names(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt),
+ xstats_names, &xstats_cnt);
+ if (ret)
+ return -EIO;
+
+ return xstats_cnt;
+}
+
+static int nbl_xstats_reset(struct rte_eth_dev *eth_dev)
+{
+ struct nbl_adapter *adapter = ETH_DEV_TO_NBL_DEV_PF_PRIV(eth_dev);
+ struct nbl_dev_mgt *dev_mgt = NBL_ADAPTER_TO_DEV_MGT(adapter);
+ struct nbl_common_info *common = NBL_DEV_MGT_TO_COMMON(dev_mgt);
+ struct nbl_dispatch_ops *disp_ops = NBL_DEV_MGT_TO_DISP_OPS(dev_mgt);
+ struct nbl_dev_net_mgt *net_dev = dev_mgt->net_dev;
+
+ if (!common->is_vf) {
+ disp_ops->get_private_stat_data(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt),
+ dev_mgt->common->eth_id,
+ net_dev->hw_xstats_offset, net_dev->hw_xstats_size);
+ }
+
+ nbl_stats_reset(eth_dev);
+ return 0;
+}
+
struct nbl_dev_ops dev_ops = {
.dev_configure = nbl_dev_configure,
.dev_start = nbl_dev_port_start,
@@ -396,6 +519,10 @@ struct nbl_dev_ops dev_ops = {
.dev_infos_get = nbl_dev_infos_get,
.link_update = nbl_link_update,
.stats_get = nbl_stats_get,
+ .stats_reset = nbl_stats_reset,
+ .xstats_get = nbl_xstats_get,
+ .xstats_get_names = nbl_xstats_get_names,
+ .xstats_reset = nbl_xstats_reset,
};
static int nbl_dev_setup_chan_queue(struct nbl_adapter *adapter)
@@ -438,6 +565,7 @@ static int nbl_dev_common_start(struct nbl_dev_mgt *dev_mgt)
struct nbl_board_port_info *board_info;
u8 *mac;
int ret;
+ u16 priv_cnt = 0;
board_info = &common->board_info;
disp_ops->get_board_info(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt), board_info);
@@ -456,8 +584,25 @@ static int nbl_dev_common_start(struct nbl_dev_mgt *dev_mgt)
goto add_multi_rule_failed;
}
- return 0;
+ net_dev->hw_xstats_offset = NULL;
+ if (!dev_mgt->common->is_vf)
+ disp_ops->get_hw_xstats_cnt(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt), &priv_cnt);
+ if (priv_cnt) {
+ net_dev->hw_xstats_offset = rte_zmalloc("nbl_xstats_cnt",
+ priv_cnt * sizeof(u64), 0);
+ net_dev->hw_xstats_size = priv_cnt * sizeof(u64);
+ if (!net_dev->hw_xstats_offset) {
+ ret = -ENOMEM;
+ goto alloc_xstats_offset_failed;
+ }
+
+ disp_ops->get_private_stat_data(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt),
+ dev_mgt->common->eth_id,
+ net_dev->hw_xstats_offset, net_dev->hw_xstats_size);
+ }
+ return 0;
+alloc_xstats_offset_failed:
add_multi_rule_failed:
disp_ops->del_macvlan(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt), mac, 0, net_dev->vsi_id);
@@ -719,6 +864,7 @@ int nbl_dev_init(void *p, struct rte_eth_dev *eth_dev)
eth_dev->data->mac_addrs[0].addr_bytes);
adapter->state = NBL_ETHDEV_INITIALIZED;
+ eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
disp_ops->get_resource_pt_ops(NBL_DEV_MGT_TO_DISP_PRIV(*dev_mgt),
&(*dev_mgt)->pt_ops, 0);
diff --git a/drivers/net/nbl/nbl_dev/nbl_dev.h b/drivers/net/nbl/nbl_dev/nbl_dev.h
index f58699a51e..1541aebba5 100644
--- a/drivers/net/nbl/nbl_dev/nbl_dev.h
+++ b/drivers/net/nbl/nbl_dev/nbl_dev.h
@@ -41,6 +41,8 @@ struct nbl_dev_net_mgt {
struct rte_eth_dev *eth_dev;
struct nbl_dev_ring_mgt ring_mgt;
struct nbl_eth_link_info eth_link_info;
+ u64 *hw_xstats_offset;
+ u32 hw_xstats_size;
u16 vsi_id;
u8 eth_mode;
u8 eth_id;
diff --git a/drivers/net/nbl/nbl_dispatch.c b/drivers/net/nbl/nbl_dispatch.c
index 33964641c4..e18b543ad6 100644
--- a/drivers/net/nbl/nbl_dispatch.c
+++ b/drivers/net/nbl/nbl_dispatch.c
@@ -724,9 +724,94 @@ static int nbl_disp_get_stats(void *priv, struct rte_eth_stats *rte_stats)
{
struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+
return res_ops->get_stats(NBL_DISP_MGT_TO_RES_PRIV(disp_mgt), rte_stats);
}
+static int nbl_disp_reset_stats(void *priv)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_resource_ops *res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+
+ return res_ops->reset_stats(NBL_DISP_MGT_TO_RES_PRIV(disp_mgt));
+}
+
+static int nbl_disp_get_txrx_xstats_cnt(void *priv, u16 *xstats_cnt)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_resource_ops *res_ops;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ return res_ops->get_txrx_xstats_cnt(NBL_DISP_MGT_TO_RES_PRIV(disp_mgt), xstats_cnt);
+}
+
+static int nbl_disp_get_txrx_xstats(void *priv, struct rte_eth_xstat *xstats, u16 *xstats_cnt)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_resource_ops *res_ops;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ return res_ops->get_txrx_xstats(NBL_DISP_MGT_TO_RES_PRIV(disp_mgt), xstats, xstats_cnt);
+}
+
+static int nbl_disp_get_txrx_xstats_names(void *priv, struct rte_eth_xstat_name *xstats_names,
+ u16 *xstats_cnt)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_resource_ops *res_ops;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ return res_ops->get_txrx_xstats_names(NBL_DISP_MGT_TO_RES_PRIV(disp_mgt),
+ xstats_names, xstats_cnt);
+}
+
+static int nbl_disp_get_hw_xstats_cnt(void *priv, u16 *xstats_cnt)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_resource_ops *res_ops;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ return res_ops->get_hw_xstats_cnt(NBL_DISP_MGT_TO_RES_PRIV(disp_mgt), xstats_cnt);
+}
+
+static int nbl_disp_get_hw_xstats_names(void *priv, struct rte_eth_xstat_name *xstats_names,
+ u16 *xstats_cnt)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_resource_ops *res_ops;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ return res_ops->get_hw_xstats_names(NBL_DISP_MGT_TO_RES_PRIV(disp_mgt),
+ xstats_names, xstats_cnt);
+}
+
+static void nbl_disp_get_private_stat_data(void *priv, u32 eth_id, u64 *data,
+ __rte_unused u32 data_len)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_resource_ops *res_ops;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ NBL_OPS_CALL(res_ops->get_private_stat_data,
+ (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt), eth_id, data));
+}
+
+static void nbl_disp_get_private_stat_data_req(void *priv, u32 eth_id, u64 *data, u32 data_len)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_channel_ops *chan_ops;
+ struct nbl_chan_send_info chan_send = {0};
+ struct nbl_chan_param_get_private_stat_data param = {0};
+
+ chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+
+ param.eth_id = eth_id;
+ param.data_len = data_len;
+ NBL_CHAN_SEND(chan_send, 0, NBL_CHAN_MSG_GET_ETH_STATS, &param,
+ sizeof(param), data, data_len, 1);
+ chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send);
+}
+
#define NBL_DISP_OPS_TBL \
do { \
NBL_DISP_SET_OPS(alloc_txrx_queues, nbl_disp_alloc_txrx_queues, \
@@ -859,6 +944,32 @@ do { \
NBL_DISP_SET_OPS(get_stats, nbl_disp_get_stats, \
NBL_DISP_CTRL_LVL_ALWAYS, -1, \
NULL, NULL); \
+ NBL_DISP_SET_OPS(reset_stats, nbl_disp_reset_stats, \
+ NBL_DISP_CTRL_LVL_ALWAYS, -1, \
+ NULL, NULL); \
+ NBL_DISP_SET_OPS(get_txrx_xstats_cnt, \
+ nbl_disp_get_txrx_xstats_cnt, \
+ NBL_DISP_CTRL_LVL_ALWAYS, -1, \
+ NULL, NULL); \
+ NBL_DISP_SET_OPS(get_txrx_xstats, nbl_disp_get_txrx_xstats, \
+ NBL_DISP_CTRL_LVL_ALWAYS, -1, \
+ NULL, NULL); \
+ NBL_DISP_SET_OPS(get_txrx_xstats_names, \
+ nbl_disp_get_txrx_xstats_names, \
+ NBL_DISP_CTRL_LVL_ALWAYS, -1, \
+ NULL, NULL); \
+ NBL_DISP_SET_OPS(get_hw_xstats_cnt, nbl_disp_get_hw_xstats_cnt, \
+ NBL_DISP_CTRL_LVL_ALWAYS, -1, \
+ NULL, NULL); \
+ NBL_DISP_SET_OPS(get_hw_xstats_names, \
+ nbl_disp_get_hw_xstats_names, \
+ NBL_DISP_CTRL_LVL_ALWAYS, -1, \
+ NULL, NULL); \
+ NBL_DISP_SET_OPS(get_private_stat_data, \
+ nbl_disp_get_private_stat_data, \
+ NBL_DISP_CTRL_LVL_MGT, \
+ NBL_CHAN_MSG_GET_ETH_STATS, \
+ nbl_disp_get_private_stat_data_req, NULL); \
} while (0)
/* Structure starts here, adding an op should not modify anything below */
diff --git a/drivers/net/nbl/nbl_hw/nbl_hw_leonis/nbl_res_leonis.c b/drivers/net/nbl/nbl_hw/nbl_hw_leonis/nbl_res_leonis.c
index e7036373f1..41ae423ff0 100644
--- a/drivers/net/nbl/nbl_hw/nbl_hw_leonis/nbl_res_leonis.c
+++ b/drivers/net/nbl/nbl_hw/nbl_hw_leonis/nbl_res_leonis.c
@@ -4,7 +4,120 @@
#include "nbl_res_leonis.h"
+/* store statistics names */
+struct nbl_xstats_name {
+ char name[RTE_ETH_XSTATS_NAME_SIZE];
+};
+
+static const struct nbl_xstats_name nbl_stats_strings[] = {
+ {"eth_frames_tx"},
+ {"eth_frames_tx_ok"},
+ {"eth_frames_tx_badfcs"},
+ {"eth_unicast_frames_tx_ok"},
+ {"eth_multicast_frames_tx_ok"},
+ {"eth_broadcast_frames_tx_ok"},
+ {"eth_macctrl_frames_tx_ok"},
+ {"eth_fragment_frames_tx"},
+ {"eth_fragment_frames_tx_ok"},
+ {"eth_pause_frames_tx"},
+ {"eth_pause_macctrl_frames_tx"},
+ {"eth_pfc_frames_tx"},
+ {"eth_pfc_frames_tx_prio0"},
+ {"eth_pfc_frames_tx_prio1"},
+ {"eth_pfc_frames_tx_prio2"},
+ {"eth_pfc_frames_tx_prio3"},
+ {"eth_pfc_frames_tx_prio4"},
+ {"eth_pfc_frames_tx_prio5"},
+ {"eth_pfc_frames_tx_prio6"},
+ {"eth_pfc_frames_tx_prio7"},
+ {"eth_verify_frames_tx"},
+ {"eth_respond_frames_tx"},
+ {"eth_frames_tx_64B"},
+ {"eth_frames_tx_65_to_127B"},
+ {"eth_frames_tx_128_to_255B"},
+ {"eth_frames_tx_256_to_511B"},
+ {"eth_frames_tx_512_to_1023B"},
+ {"eth_frames_tx_1024_to_1535B"},
+ {"eth_frames_tx_1536_to_2047B"},
+ {"eth_frames_tx_2048_to_MAXB"},
+ {"eth_undersize_frames_tx_goodfcs"},
+ {"eth_oversize_frames_tx_goodfcs"},
+ {"eth_undersize_frames_tx_badfcs"},
+ {"eth_oversize_frames_tx_badfcs"},
+ {"eth_octets_tx"},
+ {"eth_octets_tx_ok"},
+ {"eth_octets_tx_badfcs"},
+ {"eth_frames_rx"},
+ {"eth_frames_rx_ok"},
+ {"eth_frames_rx_badfcs"},
+ {"eth_undersize_frames_rx_goodfcs"},
+ {"eth_undersize_frames_rx_badfcs"},
+ {"eth_oversize_frames_rx_goodfcs"},
+ {"eth_oversize_frames_rx_badfcs"},
+ {"eth_frames_rx_misc_error"},
+ {"eth_frames_rx_misc_dropped"},
+ {"eth_unicast_frames_rx_ok"},
+ {"eth_multicast_frames_rx_ok"},
+ {"eth_broadcast_frames_rx_ok"},
+ {"eth_pause_frames_rx"},
+ {"eth_pfc_frames_rx"},
+ {"eth_pfc_frames_rx_prio0"},
+ {"eth_pfc_frames_rx_prio1"},
+ {"eth_pfc_frames_rx_prio2"},
+ {"eth_pfc_frames_rx_prio3"},
+ {"eth_pfc_frames_rx_prio4"},
+ {"eth_pfc_frames_rx_prio5"},
+ {"eth_pfc_frames_rx_prio6"},
+ {"eth_pfc_frames_rx_prio7"},
+ {"eth_macctrl_frames_rx"},
+ {"eth_verify_frames_rx_ok"},
+ {"eth_respond_frames_rx_ok"},
+ {"eth_fragment_frames_rx_ok"},
+ {"eth_fragment_rx_smdc_nocontext"},
+ {"eth_fragment_rx_smds_seq_error"},
+ {"eth_fragment_rx_smdc_seq_error"},
+ {"eth_fragment_rx_frag_cnt_error"},
+ {"eth_frames_assembled_ok"},
+ {"eth_frames_assembled_error"},
+ {"eth_frames_rx_64B"},
+ {"eth_frames_rx_65_to_127B"},
+ {"eth_frames_rx_128_to_255B"},
+ {"eth_frames_rx_256_to_511B"},
+ {"eth_frames_rx_512_to_1023B"},
+ {"eth_frames_rx_1024_to_1535B"},
+ {"eth_frames_rx_1536_to_2047B"},
+ {"eth_frames_rx_2048_to_MAXB"},
+ {"eth_octets_rx"},
+ {"eth_octets_rx_ok"},
+ {"eth_octets_rx_badfcs"},
+ {"eth_octets_rx_dropped"},
+};
+
+static int nbl_get_xstats_cnt(__rte_unused void *priv, u16 *xstats_cnt)
+{
+ *xstats_cnt = ARRAY_SIZE(nbl_stats_strings);
+ return 0;
+}
+
+static int nbl_get_xstats_names(__rte_unused void *priv,
+ struct rte_eth_xstat_name *xstats_names,
+ u16 *xstats_cnt)
+{
+ u32 i = 0;
+ u16 count = *xstats_cnt;
+
+ for (i = 0; i < ARRAY_SIZE(nbl_stats_strings); i++) {
+ strlcpy(xstats_names[count].name, nbl_stats_strings[i].name,
+ sizeof(xstats_names[count].name));
+ count++;
+ }
+ *xstats_cnt = count;
+ return 0;
+}
+
static struct nbl_resource_ops res_ops = {
+ .get_hw_xstats_cnt = nbl_get_xstats_cnt,
+ .get_hw_xstats_names = nbl_get_xstats_names,
};
static bool is_ops_inited;
diff --git a/drivers/net/nbl/nbl_hw/nbl_txrx.c b/drivers/net/nbl/nbl_hw/nbl_txrx.c
index 361415cd21..0436f0df88 100644
--- a/drivers/net/nbl/nbl_hw/nbl_txrx.c
+++ b/drivers/net/nbl/nbl_hw/nbl_txrx.c
@@ -538,6 +538,7 @@ nbl_res_txrx_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, u16 nb_pkts, u
txe->mbuf = tx_pkt;
data_len = tx_pkt->data_len;
+ txq->txq_stats.tx_bytes += tx_pkt->data_len;
dma_addr = rte_mbuf_data_iova(tx_pkt);
tx_desc->addr = NBL_DMA_ADDERSS_FULL_TRANSLATE(txq, dma_addr) + addr_offset;
tx_desc->len = data_len - addr_offset;
@@ -617,6 +618,8 @@ nbl_res_txrx_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts)
rx_ext_hdr = (union nbl_rx_extend_head *)((char *)rx_mbuf->buf_addr +
RTE_PKTMBUF_HEADROOM);
num_sg = rx_ext_hdr->common.num_buffers;
+ if (num_sg > 1)
+ rxq->rxq_stats.rx_multi_descs++;
rx_mbuf->nb_segs = num_sg;
rx_mbuf->data_len = rx_desc->len - rxq->exthdr_len;
@@ -644,15 +647,19 @@ nbl_res_txrx_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, u16 nb_pkts)
if (--num_sg)
continue;
if (drop) {
+ rxq->rxq_stats.rx_drop_proto++;
rte_pktmbuf_free(rx_mbuf);
continue;
}
rx_pkts[nb_recv_pkts++] = rx_mbuf;
+ rxq->rxq_stats.rx_bytes += rx_mbuf->pkt_len;
}
/* BUG on duplicate pkt free */
- if (unlikely(num_sg))
+ if (unlikely(num_sg)) {
+ rxq->rxq_stats.rx_ierror++;
rte_pktmbuf_free(rx_mbuf);
+ }
/* clean memory */
rxq->next_to_clean = desc_index;
fill_num = rxq->vq_free_cnt;
@@ -722,6 +729,105 @@ static int nbl_res_txrx_get_stats(void *priv, struct rte_eth_stats *rte_stats)
return 0;
}
+static int
+nbl_res_txrx_reset_stats(void *priv)
+{
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+ struct rte_eth_dev *eth_dev = res_mgt->eth_dev;
+ struct nbl_res_rx_ring *rxq;
+ struct nbl_rxq_stats *rxq_stats;
+ struct nbl_res_tx_ring *txq;
+ struct nbl_txq_stats *txq_stats;
+ uint32_t i;
+
+ /* Reset software counters. */
+ for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
+ rxq = eth_dev->data->rx_queues[i];
+ if (unlikely(rxq == NULL))
+ continue;
+
+ rxq_stats = &rxq->rxq_stats;
+ memset(rxq_stats, 0, sizeof(struct nbl_rxq_stats));
+ }
+
+ for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
+ txq = eth_dev->data->tx_queues[i];
+ if (unlikely(txq == NULL))
+ continue;
+
+ txq_stats = &txq->txq_stats;
+ memset(txq_stats, 0, sizeof(struct nbl_txq_stats));
+ }
+
+ return 0;
+}
+
+/* store statistics names */
+struct nbl_txrx_xstats_name {
+ char name[RTE_ETH_XSTATS_NAME_SIZE];
+};
+
+static const struct nbl_txrx_xstats_name nbl_stats_strings[] = {
+ {"rx_multidescs_packets"},
+ {"rx_drop_noport_packets"},
+ {"rx_drop_proto_packets"},
+};
+
+static int nbl_res_txrx_get_xstats_cnt(__rte_unused void *priv, u16 *xstats_cnt)
+{
+ *xstats_cnt = ARRAY_SIZE(nbl_stats_strings);
+ return 0;
+}
+
+static int nbl_res_txrx_get_xstats(void *priv, struct rte_eth_xstat *xstats, u16 *xstats_cnt)
+{
+ struct nbl_resource_mgt *res_mgt = (struct nbl_resource_mgt *)priv;
+ struct nbl_txrx_mgt *txrx_mgt = NBL_RES_MGT_TO_TXRX_MGT(res_mgt);
+ struct nbl_res_rx_ring *rxq;
+ uint64_t rx_multi_descs = 0, rx_drop_noport = 0, rx_drop_proto = 0;
+ unsigned int i = 0;
+ u16 count = *xstats_cnt;
+
+ /* todo: get eth stats from emp */
+ for (i = 0; i < txrx_mgt->rx_ring_num; i++) {
+ rxq = NBL_RES_MGT_TO_RX_RING(res_mgt, i);
+
+ if (unlikely(rxq == NULL))
+ return -EINVAL;
+
+ rx_multi_descs += rxq->rxq_stats.rx_multi_descs;
+ rx_drop_noport += rxq->rxq_stats.rx_drop_noport;
+ rx_drop_proto += rxq->rxq_stats.rx_drop_proto;
+ }
+
+ xstats[count].value = rx_multi_descs;
+ xstats[count].id = count;
+ xstats[count + 1].value = rx_drop_noport;
+ xstats[count + 1].id = count + 1;
+ xstats[count + 2].value = rx_drop_proto;
+ xstats[count + 2].id = count + 2;
+
+ *xstats_cnt = count + 3;
+
+ return 0;
+}
+
+static int nbl_res_txrx_get_xstats_names(__rte_unused void *priv,
+ struct rte_eth_xstat_name *xstats_names,
+ u16 *xstats_cnt)
+{
+ unsigned int i = 0;
+ u16 count = *xstats_cnt;
+
+ for (i = 0; i < ARRAY_SIZE(nbl_stats_strings); i++) {
+ strlcpy(xstats_names[count].name, nbl_stats_strings[i].name,
+ sizeof(xstats_names[count].name));
+ count++;
+ }
+ *xstats_cnt = count;
+ return 0;
+}
+
/* NBL_TXRX_SET_OPS(ops_name, func)
*
 * Use X macros to reduce setup and removal code.
@@ -740,6 +846,10 @@ do { \
NBL_TXRX_SET_OPS(update_rx_ring, nbl_res_txrx_update_rx_ring); \
NBL_TXRX_SET_OPS(get_resource_pt_ops, nbl_res_get_pt_ops); \
NBL_TXRX_SET_OPS(get_stats, nbl_res_txrx_get_stats); \
+ NBL_TXRX_SET_OPS(reset_stats, nbl_res_txrx_reset_stats); \
+ NBL_TXRX_SET_OPS(get_txrx_xstats_cnt, nbl_res_txrx_get_xstats_cnt); \
+ NBL_TXRX_SET_OPS(get_txrx_xstats_names, nbl_res_txrx_get_xstats_names); \
+ NBL_TXRX_SET_OPS(get_txrx_xstats, nbl_res_txrx_get_xstats); \
} while (0)
/* Structure starts here, adding an op should not modify anything below */
diff --git a/drivers/net/nbl/nbl_include/nbl_def_channel.h b/drivers/net/nbl/nbl_include/nbl_def_channel.h
index ba960ebdbf..30cb14b746 100644
--- a/drivers/net/nbl/nbl_include/nbl_def_channel.h
+++ b/drivers/net/nbl/nbl_include/nbl_def_channel.h
@@ -399,6 +399,11 @@ enum nbl_chan_state {
NBL_CHAN_STATE_MAX
};
+struct nbl_chan_param_get_private_stat_data {
+ u32 eth_id;
+ u32 data_len;
+};
+
struct nbl_channel_ops {
int (*send_msg)(void *priv, struct nbl_chan_send_info *chan_send);
int (*send_ack)(void *priv, struct nbl_chan_ack_info *chan_ack);
diff --git a/drivers/net/nbl/nbl_include/nbl_def_dispatch.h b/drivers/net/nbl/nbl_include/nbl_def_dispatch.h
index 20510f7fa2..92dbc428ef 100644
--- a/drivers/net/nbl/nbl_include/nbl_def_dispatch.h
+++ b/drivers/net/nbl/nbl_include/nbl_def_dispatch.h
@@ -70,6 +70,15 @@ struct nbl_dispatch_ops {
void (*get_board_info)(void *priv, struct nbl_board_port_info *board_info);
void (*get_link_state)(void *priv, u8 eth_id, struct nbl_eth_link_info *eth_link_info);
int (*get_stats)(void *priv, struct rte_eth_stats *rte_stats);
+ int (*reset_stats)(void *priv);
+ int (*get_txrx_xstats_cnt)(void *priv, u16 *xstats_cnt);
+ int (*get_txrx_xstats)(void *priv, struct rte_eth_xstat *xstats, u16 *xstats_cnt);
+ int (*get_txrx_xstats_names)(void *priv, struct rte_eth_xstat_name *xstats_names,
+ u16 *xstats_cnt);
+ int (*get_hw_xstats_cnt)(void *priv, u16 *xstats_cnt);
+ int (*get_hw_xstats_names)(void *priv, struct rte_eth_xstat_name *xstats_names,
+ u16 *xstats_cnt);
+ void (*get_private_stat_data)(void *priv, u32 eth_id, u64 *data, u32 data_len);
void (*dummy_func)(void *priv);
};
diff --git a/drivers/net/nbl/nbl_include/nbl_def_resource.h b/drivers/net/nbl/nbl_include/nbl_def_resource.h
index da683cd957..f9864e261b 100644
--- a/drivers/net/nbl/nbl_include/nbl_def_resource.h
+++ b/drivers/net/nbl/nbl_include/nbl_def_resource.h
@@ -42,6 +42,14 @@ struct nbl_resource_ops {
void (*release_rx_ring)(void *priv, u16 queue_idx);
int (*get_stats)(void *priv, struct rte_eth_stats *rte_stats);
int (*reset_stats)(void *priv);
+ int (*get_txrx_xstats_cnt)(void *priv, u16 *xstats_cnt);
+ int (*get_txrx_xstats)(void *priv, struct rte_eth_xstat *xstats, u16 *xstats_cnt);
+ int (*get_txrx_xstats_names)(void *priv, struct rte_eth_xstat_name *xstats_names,
+ u16 *xstats_cnt);
+ int (*get_hw_xstats_cnt)(void *priv, u16 *xstats_cnt);
+ int (*get_hw_xstats_names)(void *priv, struct rte_eth_xstat_name *xstats_names,
+ u16 *xstats_cnt);
+ void (*get_private_stat_data)(void *priv, u32 eth_id, u64 *data);
void (*update_rx_ring)(void *priv, u16 queue_idx);
u16 (*get_tx_ehdr_len)(void *priv);
int (*alloc_txrx_queues)(void *priv, u16 vsi_id, u16 queue_num);
@@ -54,10 +62,6 @@ struct nbl_resource_ops {
void (*cfg_txrx_vlan)(void *priv, u16 vlan_tci, u16 vlan_proto);
int (*txrx_burst_mode_get)(void *priv, struct rte_eth_dev *dev,
struct rte_eth_burst_mode *mode, bool is_tx);
- int (*get_txrx_xstats_cnt)(void *priv, u16 *xstats_cnt);
- int (*get_txrx_xstats)(void *priv, struct rte_eth_xstat *xstats, u16 *xstats_cnt);
- int (*get_txrx_xstats_names)(void *priv, struct rte_eth_xstat_name *xstats_names,
- u16 *xstats_cnt);
int (*add_macvlan)(void *priv, u8 *mac, u16 vlan_id, u16 vsi_id);
void (*del_macvlan)(void *priv, u8 *mac, u16 vlan_id, u16 vsi_id);
int (*add_multi_rule)(void *priv, u16 vsi_id);
diff --git a/drivers/net/nbl/nbl_include/nbl_include.h b/drivers/net/nbl/nbl_include/nbl_include.h
index 06aaad09d8..3faf4b969f 100644
--- a/drivers/net/nbl/nbl_include/nbl_include.h
+++ b/drivers/net/nbl/nbl_include/nbl_include.h
@@ -58,6 +58,10 @@ typedef int32_t s32;
typedef int16_t s16;
typedef int8_t s8;
+#ifndef ARRAY_SIZE
+#define ARRAY_SIZE(arr) RTE_DIM(arr)
+#endif
+
/* Used for macros to pass checkpatch */
#define NBL_NAME(x) x
#define BIT(a) (1UL << (a))
--
2.43.0
^ permalink raw reply [flat|nested] 27+ messages in thread
* [PATCH v1 17/17] net/nbl: nbl device support set mtu and promisc
2025-06-12 8:58 [PATCH v1 00/17] NBL PMD for Nebulamatrix NICs Kyo Liu
` (15 preceding siblings ...)
2025-06-12 8:58 ` [PATCH v1 16/17] net/nbl: add nbl device xstats and stats Kyo Liu
@ 2025-06-12 8:58 ` Kyo Liu
2025-06-12 17:35 ` [PATCH v1 00/17] NBL PMD for Nebulamatrix NICs Stephen Hemminger
` (2 subsequent siblings)
19 siblings, 0 replies; 27+ messages in thread
From: Kyo Liu @ 2025-06-12 8:58 UTC (permalink / raw)
To: kyo.liu, dev; +Cc: Dimon Zhao, Leon Yu, Sam Chen
Implement NBL device MTU set and promiscuous mode functions
Signed-off-by: Kyo Liu <kyo.liu@nebula-matrix.com>
---
drivers/net/nbl/nbl_core.h | 2 +
drivers/net/nbl/nbl_dev/nbl_dev.c | 60 +++++++++++++++
drivers/net/nbl/nbl_dev/nbl_dev.h | 4 +-
drivers/net/nbl/nbl_dispatch.c | 76 +++++++++++++++++++
drivers/net/nbl/nbl_dispatch.h | 2 +
drivers/net/nbl/nbl_ethdev.c | 31 +++++---
drivers/net/nbl/nbl_include/nbl_def_channel.h | 10 +++
.../net/nbl/nbl_include/nbl_def_dispatch.h | 1 +
.../net/nbl/nbl_include/nbl_def_resource.h | 2 +
9 files changed, 175 insertions(+), 13 deletions(-)
diff --git a/drivers/net/nbl/nbl_core.h b/drivers/net/nbl/nbl_core.h
index 997544b112..d3fe1d02d2 100644
--- a/drivers/net/nbl/nbl_core.h
+++ b/drivers/net/nbl/nbl_core.h
@@ -51,6 +51,8 @@
#define NBL_IS_NOT_COEXISTENCE(common) ({ typeof(common) _common = (common); \
_common->nl_socket_route < 0 || \
_common->ifindex < 0; })
+#define NBL_IS_COEXISTENCE(common) ((common)->devfd != -1)
+
struct nbl_core {
void *phy_mgt;
void *res_mgt;
diff --git a/drivers/net/nbl/nbl_dev/nbl_dev.c b/drivers/net/nbl/nbl_dev/nbl_dev.c
index 4fa132ae1c..6e26989e1c 100644
--- a/drivers/net/nbl/nbl_dev/nbl_dev.c
+++ b/drivers/net/nbl/nbl_dev/nbl_dev.c
@@ -507,6 +507,63 @@ static int nbl_xstats_reset(struct rte_eth_dev *eth_dev)
return 0;
}
+static int nbl_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
+{
+ struct rte_eth_dev_data *dev_data = eth_dev->data;
+ struct nbl_adapter *adapter = ETH_DEV_TO_NBL_DEV_PF_PRIV(eth_dev);
+ struct nbl_dev_mgt *dev_mgt = NBL_ADAPTER_TO_DEV_MGT(adapter);
+ struct nbl_dispatch_ops *disp_ops = NBL_DEV_MGT_TO_DISP_OPS(dev_mgt);
+ uint32_t frame_size = mtu + NBL_ETH_OVERHEAD;
+ int ret;
+
+ /* check if mtu is within the allowed range */
+ if (mtu < RTE_ETHER_MIN_MTU || frame_size > NBL_FRAME_SIZE_MAX)
+ return -EINVAL;
+
+ /* MTU setting is forbidden while the port is started */
+ if (dev_data->dev_started) {
+ NBL_LOG(ERR, "port %d must be stopped before configuration", dev_data->port_id);
+ return -EBUSY;
+ }
+
+ dev_data->dev_conf.rxmode.mtu = mtu;
+ ret = disp_ops->set_mtu(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt), dev_mgt->net_dev->vsi_id, mtu);
+ if (ret)
+ return ret;
+
+ return 0;
+}
+
+static int nbl_promiscuous_enable(struct rte_eth_dev *eth_dev)
+{
+ struct nbl_adapter *adapter = ETH_DEV_TO_NBL_DEV_PF_PRIV(eth_dev);
+ struct nbl_dev_mgt *dev_mgt = NBL_ADAPTER_TO_DEV_MGT(adapter);
+ struct nbl_dispatch_ops *disp_ops = NBL_DEV_MGT_TO_DISP_OPS(dev_mgt);
+ struct nbl_common_info *common = &adapter->common;
+
+ if (!common->is_vf)
+ disp_ops->set_promisc_mode(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt),
+ dev_mgt->net_dev->vsi_id, 1);
+ dev_mgt->net_dev->promisc = 1;
+
+ return 0;
+}
+
+static int nbl_promiscuous_disable(struct rte_eth_dev *eth_dev)
+{
+ struct nbl_adapter *adapter = ETH_DEV_TO_NBL_DEV_PF_PRIV(eth_dev);
+ struct nbl_dev_mgt *dev_mgt = NBL_ADAPTER_TO_DEV_MGT(adapter);
+ struct nbl_dispatch_ops *disp_ops = NBL_DEV_MGT_TO_DISP_OPS(dev_mgt);
+ struct nbl_common_info *common = &adapter->common;
+
+ if (!common->is_vf)
+ disp_ops->set_promisc_mode(NBL_DEV_MGT_TO_DISP_PRIV(dev_mgt),
+ dev_mgt->net_dev->vsi_id, 0);
+ dev_mgt->net_dev->promisc = 0;
+
+ return 0;
+}
+
struct nbl_dev_ops dev_ops = {
.dev_configure = nbl_dev_configure,
.dev_start = nbl_dev_port_start,
@@ -523,6 +580,9 @@ struct nbl_dev_ops dev_ops = {
.xstats_get = nbl_xstats_get,
.xstats_get_names = nbl_xstats_get_names,
.xstats_reset = nbl_xstats_reset,
+ .promiscuous_disable = nbl_promiscuous_disable,
+ .promiscuous_enable = nbl_promiscuous_enable,
+ .mtu_set = nbl_mtu_set,
};
static int nbl_dev_setup_chan_queue(struct nbl_adapter *adapter)
diff --git a/drivers/net/nbl/nbl_dev/nbl_dev.h b/drivers/net/nbl/nbl_dev/nbl_dev.h
index 1541aebba5..a97a53bc02 100644
--- a/drivers/net/nbl/nbl_dev/nbl_dev.h
+++ b/drivers/net/nbl/nbl_dev/nbl_dev.h
@@ -47,7 +47,9 @@ struct nbl_dev_net_mgt {
u8 eth_mode;
u8 eth_id;
u16 max_mac_num;
- bool trust;
+ u8 trust:1;
+ u8 promisc:1;
+ u8 rsv:6;
};
struct nbl_dev_mgt {
diff --git a/drivers/net/nbl/nbl_dispatch.c b/drivers/net/nbl/nbl_dispatch.c
index e18b543ad6..c7c021eb5b 100644
--- a/drivers/net/nbl/nbl_dispatch.c
+++ b/drivers/net/nbl/nbl_dispatch.c
@@ -812,6 +812,73 @@ static void nbl_disp_get_private_stat_data_req(void *priv, u32 eth_id, u64 *data
chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt), &chan_send);
}
+static int nbl_disp_set_mtu(void *priv, u16 vsi_id, u16 mtu)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_resource_ops *res_ops;
+ int ret = 0;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ ret = NBL_OPS_CALL(res_ops->set_mtu, (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt), vsi_id, mtu));
+ return ret;
+}
+
+static int nbl_disp_chan_set_mtu_req(void *priv, u16 vsi_id, u16 mtu)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_channel_ops *chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ struct nbl_chan_send_info chan_send = {0};
+ struct nbl_chan_param_set_mtu param = {0};
+
+ param.mtu = mtu;
+ param.vsi_id = vsi_id;
+
+ NBL_CHAN_SEND(chan_send, 0, NBL_CHAN_MSG_MTU_SET,
+ &param, sizeof(param), NULL, 0, 1);
+ return chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt),
+ &chan_send);
+}
+
+static int nbl_disp_set_promisc_mode(void *priv, u16 vsi_id, u16 mode)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_resource_ops *res_ops;
+ int ret = 0;
+
+ res_ops = NBL_DISP_MGT_TO_RES_OPS(disp_mgt);
+ ret = NBL_OPS_CALL(res_ops->set_promisc_mode,
+ (NBL_DISP_MGT_TO_RES_PRIV(disp_mgt), vsi_id, mode));
+ return ret;
+}
+
+static int nbl_disp_chan_set_promisc_mode_req(void *priv, u16 vsi_id, u16 mode)
+{
+ struct nbl_dispatch_mgt *disp_mgt = (struct nbl_dispatch_mgt *)priv;
+ struct nbl_common_info *common;
+ struct nbl_channel_ops *chan_ops;
+ struct nbl_chan_param_set_promisc_mode param = {0};
+ struct nbl_chan_send_info chan_send = {0};
+ int ret = 0;
+
+ common = NBL_DISP_MGT_TO_COMMON(disp_mgt);
+ if (NBL_IS_COEXISTENCE(common)) {
+ ret = ioctl(common->devfd, NBL_DEV_USER_SET_PROMISC_MODE, &mode);
+ if (ret) {
+ NBL_LOG(ERR, "userspace send set_promisc_mode ioctl msg failed ret %d", ret);
+ return ret;
+ }
+ return 0;
+ }
+
+ chan_ops = NBL_DISP_MGT_TO_CHAN_OPS(disp_mgt);
+ param.vsi_id = vsi_id;
+ param.mode = mode;
+ NBL_CHAN_SEND(chan_send, 0, NBL_CHAN_MSG_SET_PROSISC_MODE,
+ &param, sizeof(param), NULL, 0, 1);
+ return chan_ops->send_msg(NBL_DISP_MGT_TO_CHAN_PRIV(disp_mgt),
+ &chan_send);
+}
+
#define NBL_DISP_OPS_TBL \
do { \
NBL_DISP_SET_OPS(alloc_txrx_queues, nbl_disp_alloc_txrx_queues, \
@@ -970,6 +1037,14 @@ do { \
NBL_DISP_CTRL_LVL_MGT, \
NBL_CHAN_MSG_GET_ETH_STATS, \
nbl_disp_get_private_stat_data_req, NULL); \
+ NBL_DISP_SET_OPS(set_promisc_mode, nbl_disp_set_promisc_mode, \
+ NBL_DISP_CTRL_LVL_MGT, \
+ NBL_CHAN_MSG_SET_PROSISC_MODE, \
+ nbl_disp_chan_set_promisc_mode_req, NULL); \
+ NBL_DISP_SET_OPS(set_mtu, nbl_disp_set_mtu, \
+ NBL_DISP_CTRL_LVL_MGT, NBL_CHAN_MSG_MTU_SET, \
+ nbl_disp_chan_set_mtu_req, \
+ NULL); \
} while (0)
/* Structure starts here, adding an op should not modify anything below */
@@ -1085,6 +1160,7 @@ int nbl_disp_init(void *p)
NBL_DISP_MGT_TO_RES_OPS_TBL(*disp_mgt) = res_ops_tbl;
NBL_DISP_MGT_TO_CHAN_OPS_TBL(*disp_mgt) = chan_ops_tbl;
NBL_DISP_MGT_TO_DISP_OPS_TBL(*disp_mgt) = *disp_ops_tbl;
+ NBL_DISP_MGT_TO_COMMON(*disp_mgt) = NBL_ADAPTER_TO_COMMON(adapter);
if (disp_product_ops->dispatch_init) {
ret = disp_product_ops->dispatch_init(*disp_mgt);
diff --git a/drivers/net/nbl/nbl_dispatch.h b/drivers/net/nbl/nbl_dispatch.h
index dcdf87576a..d1572769fe 100644
--- a/drivers/net/nbl/nbl_dispatch.h
+++ b/drivers/net/nbl/nbl_dispatch.h
@@ -16,11 +16,13 @@
#define NBL_DISP_MGT_TO_DISP_OPS_TBL(disp_mgt) ((disp_mgt)->disp_ops_tbl)
#define NBL_DISP_MGT_TO_DISP_OPS(disp_mgt) (NBL_DISP_MGT_TO_DISP_OPS_TBL(disp_mgt)->ops)
#define NBL_DISP_MGT_TO_DISP_PRIV(disp_mgt) (NBL_DISP_MGT_TO_DISP_OPS_TBL(disp_mgt)->priv)
+#define NBL_DISP_MGT_TO_COMMON(disp_mgt) ((disp_mgt)->common)
struct nbl_dispatch_mgt {
struct nbl_resource_ops_tbl *res_ops_tbl;
struct nbl_channel_ops_tbl *chan_ops_tbl;
struct nbl_dispatch_ops_tbl *disp_ops_tbl;
+ struct nbl_common_info *common;
uint32_t ctrl_lvl;
};
diff --git a/drivers/net/nbl/nbl_ethdev.c b/drivers/net/nbl/nbl_ethdev.c
index 06543f6716..794a304e30 100644
--- a/drivers/net/nbl/nbl_ethdev.c
+++ b/drivers/net/nbl/nbl_ethdev.c
@@ -36,18 +36,25 @@ struct eth_dev_ops nbl_eth_dev_ops = {
.dev_close = nbl_dev_close,
};
-#define NBL_DEV_NET_OPS_TBL \
-do { \
- NBL_DEV_NET_OPS(dev_configure, dev_ops->dev_configure);\
- NBL_DEV_NET_OPS(dev_start, dev_ops->dev_start); \
- NBL_DEV_NET_OPS(dev_stop, dev_ops->dev_stop); \
- NBL_DEV_NET_OPS(dev_infos_get, dev_ops->dev_infos_get);\
- NBL_DEV_NET_OPS(tx_queue_setup, dev_ops->tx_queue_setup);\
- NBL_DEV_NET_OPS(rx_queue_setup, dev_ops->rx_queue_setup);\
- NBL_DEV_NET_OPS(rx_queue_release, dev_ops->rx_queue_release);\
- NBL_DEV_NET_OPS(tx_queue_release, dev_ops->tx_queue_release);\
- NBL_DEV_NET_OPS(link_update, dev_ops->link_update); \
- NBL_DEV_NET_OPS(stats_get, dev_ops->stats_get); \
+#define NBL_DEV_NET_OPS_TBL \
+do { \
+ NBL_DEV_NET_OPS(dev_configure, dev_ops->dev_configure); \
+ NBL_DEV_NET_OPS(dev_start, dev_ops->dev_start); \
+ NBL_DEV_NET_OPS(dev_stop, dev_ops->dev_stop); \
+ NBL_DEV_NET_OPS(dev_infos_get, dev_ops->dev_infos_get); \
+ NBL_DEV_NET_OPS(tx_queue_setup, dev_ops->tx_queue_setup); \
+ NBL_DEV_NET_OPS(rx_queue_setup, dev_ops->rx_queue_setup); \
+ NBL_DEV_NET_OPS(rx_queue_release, dev_ops->rx_queue_release); \
+ NBL_DEV_NET_OPS(tx_queue_release, dev_ops->tx_queue_release); \
+ NBL_DEV_NET_OPS(link_update, dev_ops->link_update); \
+ NBL_DEV_NET_OPS(promiscuous_enable, dev_ops->promiscuous_enable); \
+ NBL_DEV_NET_OPS(promiscuous_disable, dev_ops->promiscuous_disable); \
+ NBL_DEV_NET_OPS(stats_get, dev_ops->stats_get); \
+ NBL_DEV_NET_OPS(stats_reset, dev_ops->stats_reset); \
+ NBL_DEV_NET_OPS(xstats_get, dev_ops->xstats_get); \
+ NBL_DEV_NET_OPS(xstats_get_names, dev_ops->xstats_get_names); \
+ NBL_DEV_NET_OPS(xstats_reset, dev_ops->xstats_reset); \
+ NBL_DEV_NET_OPS(mtu_set, dev_ops->mtu_set); \
} while (0)
static void nbl_set_eth_dev_ops(struct nbl_adapter *adapter,
diff --git a/drivers/net/nbl/nbl_include/nbl_def_channel.h b/drivers/net/nbl/nbl_include/nbl_def_channel.h
index 30cb14b746..4c8e6eebd0 100644
--- a/drivers/net/nbl/nbl_include/nbl_def_channel.h
+++ b/drivers/net/nbl/nbl_include/nbl_def_channel.h
@@ -404,6 +404,16 @@ struct nbl_chan_param_get_private_stat_data {
u32 data_len;
};
+struct nbl_chan_param_set_promisc_mode {
+ u16 vsi_id;
+ u16 mode;
+};
+
+struct nbl_chan_param_set_mtu {
+ u16 vsi_id;
+ u16 mtu;
+};
+
struct nbl_channel_ops {
int (*send_msg)(void *priv, struct nbl_chan_send_info *chan_send);
int (*send_ack)(void *priv, struct nbl_chan_ack_info *chan_ack);
diff --git a/drivers/net/nbl/nbl_include/nbl_def_dispatch.h b/drivers/net/nbl/nbl_include/nbl_def_dispatch.h
index 92dbc428ef..0c1b938bb6 100644
--- a/drivers/net/nbl/nbl_include/nbl_def_dispatch.h
+++ b/drivers/net/nbl/nbl_include/nbl_def_dispatch.h
@@ -33,6 +33,7 @@ struct nbl_dispatch_ops {
void (*clear_flow)(void *priv, u16 vsi_id);
void (*get_firmware_version)(void *priv, char *firmware_verion, u8 max_len);
int (*set_promisc_mode)(void *priv, u16 vsi_id, u16 mode);
+ int (*set_mtu)(void *priv, u16 vsi_id, u16 mtu);
int (*alloc_txrx_queues)(void *priv, u16 vsi_id, u16 queue_num);
void (*free_txrx_queues)(void *priv, u16 vsi_id);
u16 (*get_vsi_id)(void *priv);
diff --git a/drivers/net/nbl/nbl_include/nbl_def_resource.h b/drivers/net/nbl/nbl_include/nbl_def_resource.h
index f9864e261b..a26856ad3b 100644
--- a/drivers/net/nbl/nbl_include/nbl_def_resource.h
+++ b/drivers/net/nbl/nbl_include/nbl_def_resource.h
@@ -72,6 +72,8 @@ struct nbl_resource_ops {
int (*setup_cqs)(void *priv, u16 vsi_id, u16 real_qps, bool rss_indir_set);
void (*remove_cqs)(void *priv, u16 vsi_id);
void (*get_link_state)(void *priv, u8 eth_id, struct nbl_eth_link_info *eth_link_info);
+ int (*set_promisc_mode)(void *priv, u16 vsi_id, u16 mode);
+ int (*set_mtu)(void *priv, u16 vsi_id, u16 mtu);
};
struct nbl_resource_ops_tbl {
--
2.43.0
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v1 00/17] NBL PMD for Nebulamatrix NICs
2025-06-12 8:58 [PATCH v1 00/17] NBL PMD for Nebulamatrix NICs Kyo Liu
` (16 preceding siblings ...)
2025-06-12 8:58 ` [PATCH v1 17/17] net/nbl: nbl device support set mtu and promisc Kyo Liu
@ 2025-06-12 17:35 ` Stephen Hemminger
2025-06-12 17:44 ` Stephen Hemminger
2025-06-12 17:46 ` [PATCH " Stephen Hemminger
19 siblings, 0 replies; 27+ messages in thread
From: Stephen Hemminger @ 2025-06-12 17:35 UTC (permalink / raw)
To: Kyo Liu; +Cc: dev
On Thu, 12 Jun 2025 08:58:21 +0000
Kyo Liu <kyo.liu@nebula-matrix.com> wrote:
> This nbl PMD (**librte_net_nbl**) provides poll mode driver for
> NebulaMatrix serials NICs.
>
> Features:
> ---------
> - MTU update
> - promisc mode set
> - xstats
> - Basic stats
>
> Support NICs:
> -------------
> - S1205CQ-A00CHT
> - S1105AS-A00CHT
> - S1055AS-A00CHT
> - S1052AS-A00CHT
> - S1051AS-A00CHT
> - S1045XS-A00CHT
> - S1205CQ-A00CSP
> - S1055AS-A00CSP
> - S1052AS-A00CSP
>
>
> Kyo Liu (17):
> net/nbl: add doc and minimum nbl build framework
> net/nbl: add simple probe/remove and log module
> net/nbl: add PHY layer definitions and implementation
> net/nbl: add Channel layer definitions and implementation
> net/nbl: add Resource layer definitions and implementation
> net/nbl: add Dispatch layer definitions and implementation
> net/nbl: add Dev layer definitions and implementation
> net/nbl: add complete device init and uninit functionality
> net/nbl: add uio and vfio mode for nbl
> net/nbl: bus/pci: introduce get_iova_mode for pci dev
> net/nbl: add nbl coexistence mode for nbl
> net/nbl: add nbl ethdev configuration
> net/nbl: add nbl device rxtx queue setup and release ops
> net/nbl: add nbl device start and stop ops
> net/nbl: add nbl device tx and rx burst
> net/nbl: add nbl device xstats and stats
> net/nbl: nbl device support set mtu and promisc
>
> .mailmap | 5 +
> MAINTAINERS | 9 +
> doc/guides/nics/features/nbl.ini | 9 +
> doc/guides/nics/index.rst | 1 +
> doc/guides/nics/nbl.rst | 42 +
> doc/guides/rel_notes/release_25_07.rst | 5 +
> drivers/bus/pci/bus_pci_driver.h | 11 +
> drivers/bus/pci/linux/pci.c | 2 +
> drivers/net/meson.build | 1 +
> drivers/net/nbl/meson.build | 26 +
> drivers/net/nbl/nbl_common/nbl_common.c | 47 +
> drivers/net/nbl/nbl_common/nbl_common.h | 10 +
> drivers/net/nbl/nbl_common/nbl_thread.c | 88 ++
> drivers/net/nbl/nbl_common/nbl_userdev.c | 758 ++++++++++
> drivers/net/nbl/nbl_common/nbl_userdev.h | 21 +
> drivers/net/nbl/nbl_core.c | 100 ++
> drivers/net/nbl/nbl_core.h | 98 ++
> drivers/net/nbl/nbl_dev/nbl_dev.c | 1007 ++++++++++++++
> drivers/net/nbl/nbl_dev/nbl_dev.h | 65 +
> drivers/net/nbl/nbl_dispatch.c | 1226 +++++++++++++++++
> drivers/net/nbl/nbl_dispatch.h | 31 +
> drivers/net/nbl/nbl_ethdev.c | 167 +++
> drivers/net/nbl/nbl_ethdev.h | 32 +
> drivers/net/nbl/nbl_hw/nbl_channel.c | 853 ++++++++++++
> drivers/net/nbl/nbl_hw/nbl_channel.h | 127 ++
> .../nbl_hw_leonis/nbl_phy_leonis_snic.c | 230 ++++
> .../nbl_hw_leonis/nbl_phy_leonis_snic.h | 53 +
> .../nbl/nbl_hw/nbl_hw_leonis/nbl_res_leonis.c | 253 ++++
> .../nbl/nbl_hw/nbl_hw_leonis/nbl_res_leonis.h | 10 +
> drivers/net/nbl/nbl_hw/nbl_phy.h | 28 +
> drivers/net/nbl/nbl_hw/nbl_resource.c | 5 +
> drivers/net/nbl/nbl_hw/nbl_resource.h | 153 ++
> drivers/net/nbl/nbl_hw/nbl_txrx.c | 906 ++++++++++++
> drivers/net/nbl/nbl_hw/nbl_txrx.h | 136 ++
> drivers/net/nbl/nbl_hw/nbl_txrx_ops.h | 91 ++
> drivers/net/nbl/nbl_include/nbl_def_channel.h | 434 ++++++
> drivers/net/nbl/nbl_include/nbl_def_common.h | 128 ++
> drivers/net/nbl/nbl_include/nbl_def_dev.h | 107 ++
> .../net/nbl/nbl_include/nbl_def_dispatch.h | 95 ++
> drivers/net/nbl/nbl_include/nbl_def_phy.h | 35 +
> .../net/nbl/nbl_include/nbl_def_resource.h | 87 ++
> drivers/net/nbl/nbl_include/nbl_include.h | 212 +++
> drivers/net/nbl/nbl_include/nbl_logs.h | 25 +
> .../net/nbl/nbl_include/nbl_product_base.h | 37 +
> 44 files changed, 7766 insertions(+)
> create mode 100644 doc/guides/nics/features/nbl.ini
> create mode 100644 doc/guides/nics/nbl.rst
> create mode 100644 drivers/net/nbl/meson.build
> create mode 100644 drivers/net/nbl/nbl_common/nbl_common.c
> create mode 100644 drivers/net/nbl/nbl_common/nbl_common.h
> create mode 100644 drivers/net/nbl/nbl_common/nbl_thread.c
> create mode 100644 drivers/net/nbl/nbl_common/nbl_userdev.c
> create mode 100644 drivers/net/nbl/nbl_common/nbl_userdev.h
> create mode 100644 drivers/net/nbl/nbl_core.c
> create mode 100644 drivers/net/nbl/nbl_core.h
> create mode 100644 drivers/net/nbl/nbl_dev/nbl_dev.c
> create mode 100644 drivers/net/nbl/nbl_dev/nbl_dev.h
> create mode 100644 drivers/net/nbl/nbl_dispatch.c
> create mode 100644 drivers/net/nbl/nbl_dispatch.h
> create mode 100644 drivers/net/nbl/nbl_ethdev.c
> create mode 100644 drivers/net/nbl/nbl_ethdev.h
> create mode 100644 drivers/net/nbl/nbl_hw/nbl_channel.c
> create mode 100644 drivers/net/nbl/nbl_hw/nbl_channel.h
> create mode 100644 drivers/net/nbl/nbl_hw/nbl_hw_leonis/nbl_phy_leonis_snic.c
> create mode 100644 drivers/net/nbl/nbl_hw/nbl_hw_leonis/nbl_phy_leonis_snic.h
> create mode 100644 drivers/net/nbl/nbl_hw/nbl_hw_leonis/nbl_res_leonis.c
> create mode 100644 drivers/net/nbl/nbl_hw/nbl_hw_leonis/nbl_res_leonis.h
> create mode 100644 drivers/net/nbl/nbl_hw/nbl_phy.h
> create mode 100644 drivers/net/nbl/nbl_hw/nbl_resource.c
> create mode 100644 drivers/net/nbl/nbl_hw/nbl_resource.h
> create mode 100644 drivers/net/nbl/nbl_hw/nbl_txrx.c
> create mode 100644 drivers/net/nbl/nbl_hw/nbl_txrx.h
> create mode 100644 drivers/net/nbl/nbl_hw/nbl_txrx_ops.h
> create mode 100644 drivers/net/nbl/nbl_include/nbl_def_channel.h
> create mode 100644 drivers/net/nbl/nbl_include/nbl_def_common.h
> create mode 100644 drivers/net/nbl/nbl_include/nbl_def_dev.h
> create mode 100644 drivers/net/nbl/nbl_include/nbl_def_dispatch.h
> create mode 100644 drivers/net/nbl/nbl_include/nbl_def_phy.h
> create mode 100644 drivers/net/nbl/nbl_include/nbl_def_resource.h
> create mode 100644 drivers/net/nbl/nbl_include/nbl_include.h
> create mode 100644 drivers/net/nbl/nbl_include/nbl_logs.h
> create mode 100644 drivers/net/nbl/nbl_include/nbl_product_base.h
>
Build failure on 32-bit. Need to use %zu instead of NBL_PRIU64 here.
FAILED: drivers/libtmp_rte_net_nbl.a.p/net_nbl_nbl_hw_nbl_channel.c.o
gcc -Idrivers/libtmp_rte_net_nbl.a.p -Idrivers -I../drivers -Idrivers/net/nbl -I../drivers/net/nbl -I../drivers/net/nbl/nbl_include -I../drivers/net/nbl/nbl_hw -I../drivers/net/nbl/nbl_common -Ilib/ethdev -I../lib/ethdev -Ilib/eal/common -I../lib/eal/common -I. -I.. -Iconfig -I../config -Ilib/eal/include -I../lib/eal/include -Ilib/eal/linux/include -I../lib/eal/linux/include -Ilib/eal/x86/include -I../lib/eal/x86/include -I../kernel/linux -Ilib/eal -I../lib/eal -Ilib/kvargs -I../lib/kvargs -Ilib/log -I../lib/log -Ilib/metrics -I../lib/metrics -Ilib/telemetry -I../lib/telemetry -Ilib/net -I../lib/net -Ilib/mbuf -I../lib/mbuf -Ilib/mempool -I../lib/mempool -Ilib/ring -I../lib/ring -Ilib/meter -I../lib/meter -Idrivers/bus/pci -I../drivers/bus/pci -I../drivers/bus/pci/linux -Ilib/pci -I../lib/pci -Idrivers/bus/vdev -I../drivers/bus/vdev -fdiagnostics-color=always -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch -Wextra -Werror -std=c11 -O3 -include rte_config.h -Wvla -Wcast-qual -Wdeprecated -Wformat -Wformat-nonliteral -Wformat-security -Wmissing-declarations -Wmissing-prototypes -Wnested-externs -Wold-style-definition -Wpointer-arith -Wsign-compare -Wstrict-prototypes -Wundef -Wwrite-strings -Wno-packed-not-aligned -Wno-missing-field-initializers -Wno-pointer-to-int-cast -D_GNU_SOURCE -m32 -fPIC -march=native -mrtm -DALLOW_EXPERIMENTAL_API -DALLOW_INTERNAL_API -Wno-format-truncation -Wno-address-of-packed-member -DRTE_LOG_DEFAULT_LOGTYPE=pmd.net.nbl -MD -MQ drivers/libtmp_rte_net_nbl.a.p/net_nbl_nbl_hw_nbl_channel.c.o -MF drivers/libtmp_rte_net_nbl.a.p/net_nbl_nbl_hw_nbl_channel.c.o.d -o drivers/libtmp_rte_net_nbl.a.p/net_nbl_nbl_hw_nbl_channel.c.o -c ../drivers/net/nbl/nbl_hw/nbl_channel.c
In file included from ../lib/eal/include/rte_debug.h:17,
from ../lib/eal/include/rte_bitops.h:23,
from ../lib/ethdev/rte_cman.h:8,
from ../lib/ethdev/rte_ethdev.h:159,
from ../drivers/net/nbl/nbl_include/nbl_include.h:37,
from ../drivers/net/nbl/nbl_include/nbl_product_base.h:8,
from ../drivers/net/nbl/nbl_core.h:8,
from ../drivers/net/nbl/nbl_ethdev.h:8,
from ../drivers/net/nbl/nbl_hw/nbl_channel.h:8,
from ../drivers/net/nbl/nbl_hw/nbl_channel.c:5:
../drivers/net/nbl/nbl_hw/nbl_channel.c: In function ‘nbl_chan_update_txqueue’:
../drivers/net/nbl/nbl_hw/nbl_channel.c:394:76: error: format ‘%llu’ expects argument of type ‘long long unsigned int’, but argument 5 has type ‘size_t’ {aka ‘unsigned int’} [-Werror=format=]
394 | NBL_LOG(ERR, "arg_len: %" NBL_PRIU64 ", too long!", arg_len);
| ^
../lib/log/rte_log.h:334:39: note: in definition of macro ‘RTE_LOG’
334 | RTE_LOGTYPE_ ## t, # t ": " __VA_ARGS__)
| ^
../drivers/net/nbl/nbl_include/nbl_logs.h:19:9: note: in expansion of macro ‘RTE_LOG_LINE_PREFIX’
19 | RTE_LOG_LINE_PREFIX(level, NBL_DRIVER, "%s: ", __func__, __VA_ARGS__)
| ^~~~~~~~~~~~~~~~~~~
../drivers/net/nbl/nbl_hw/nbl_channel.c:394:17: note: in expansion of macro ‘NBL_LOG’
394 | NBL_LOG(ERR, "arg_len: %" NBL_PRIU64 ", too long!", arg_len);
| ^~~~~~~
cc1: all warnings being treated as errors
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v1 10/17] net/nbl: bus/pci: introduce get_iova_mode for pci dev
2025-06-12 8:58 ` [PATCH v1 10/17] net/nbl: bus/pci: introduce get_iova_mode for pci dev Kyo Liu
@ 2025-06-12 17:40 ` Stephen Hemminger
2025-06-13 2:28 ` 回复:[PATCH " Kyo.Liu
0 siblings, 1 reply; 27+ messages in thread
From: Stephen Hemminger @ 2025-06-12 17:40 UTC (permalink / raw)
To: Kyo Liu; +Cc: dev, Chenbo Xia, Nipun Gupta
On Thu, 12 Jun 2025 08:58:31 +0000
Kyo Liu <kyo.liu@nebula-matrix.com> wrote:
> I propose this patch for DPDK to enable coexistence between
> DPDK and kernel drivers for regular NICs. This solution requires
> adding a new pci_ops in rte_pci_driver, through which DPDK will
> retrieve the required IOVA mode from the vendor driver.
> This mechanism is necessary to handle different IOMMU
> configurations and operating modes. Below is a detailed
> analysis of various scenarios:
>
> 1. When IOMMU is enabled:
> 1.1 With PT (Pass-Through) enabled:
> In this case, the domain type is IOMMU_DOMAIN_IDENTITY,
> which prevents vendor drivers from setting IOVA->PA mapping tables.
> Therefore, DPDK must use PA mode. To achieve this:
> The vendor kernel driver will register a character device (cdev) to
> communicate with DPDK. This cdev handles device operations
> (open, mmap, etc.) and ultimately
> programs the hardware registers.
>
> 1.2 With PT disabled:
> Here, the vendor driver doesn't enforce specific IOVA mode requirements.
> Our implementation will:
> Integrate a mediated device (mdev) in the vendor driver.
> This mdev interacts with DPDK and manages IOVA->PA mapping configurations.
>
> 2. When IOMMU is disabled:
> The vendor driver mandates PA mode (consistent with DPDK's PA mode
> requirement in this scenario).
> A character device (cdev) will similarly be registered for DPDK
> communication.
>
> Summary:
> The solution leverages multiple technologies:
> mdev for IOVA management when IOMMU is partially enabled.
> VFIO for device passthrough operations.
> cdev for register programming coordination.
> A new pci_ops interface in DPDK to dynamically determine IOVA modes.
> This architecture enables clean coexistence by establishing standardized
> communication channels between DPDK and vendor drivers across different
> IOMMU configurations.
>
> Motivation for the Patch:
> This patch is introduced to prepare for the upcoming open-source
> contribution of our NebulaMatrix SNIC driver to DPDK. We aim to
> ensure that our SNIC can seamlessly coexist with kernel drivers
> using this mechanism. By adopting the proposed
> architecture—leveraging dynamic IOVA mode negotiation via pci_ops,
> mediated devices (mdev), and character device (cdev)
> interactions—we enable our SNIC to operate in hybrid environments
> where both DPDK and kernel drivers may manage the same hardware.
> This design aligns with DPDK’s scalability goals and ensures
> compatibility across diverse IOMMU configurations, which is critical
> for real-world deployment scenarios.
>
> Signed-off-by: Kyo Liu <kyo.liu@nebula-matrix.com>
This needs more review, there are alternatives.
Will not be part of this 25.07 release, maybe for 25.11 but want input
from other community members since potential impact for all drivers.
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v1 00/17] NBL PMD for Nebulamatrix NICs
2025-06-12 8:58 [PATCH v1 00/17] NBL PMD for Nebulamatrix NICs Kyo Liu
` (17 preceding siblings ...)
2025-06-12 17:35 ` [PATCH v1 00/17] NBL PMD for Nebulamatrix NICs Stephen Hemminger
@ 2025-06-12 17:44 ` Stephen Hemminger
2025-06-13 2:31 ` 回复:[PATCH " Kyo.Liu
2025-06-12 17:46 ` [PATCH " Stephen Hemminger
19 siblings, 1 reply; 27+ messages in thread
From: Stephen Hemminger @ 2025-06-12 17:44 UTC (permalink / raw)
To: Kyo Liu; +Cc: dev
On Thu, 12 Jun 2025 08:58:21 +0000
Kyo Liu <kyo.liu@nebula-matrix.com> wrote:
> This nbl PMD (**librte_net_nbl**) provides poll mode driver for
> NebulaMatrix serials NICs.
>
> Features:
> ---------
> - MTU update
> - promisc mode set
> - xstats
> - Basic stats
>
> Support NICs:
> -------------
> - S1205CQ-A00CHT
> - S1105AS-A00CHT
> - S1055AS-A00CHT
> - S1052AS-A00CHT
> - S1051AS-A00CHT
> - S1045XS-A00CHT
> - S1205CQ-A00CSP
> - S1055AS-A00CSP
> - S1052AS-A00CSP
>
>
> Kyo Liu (17):
> net/nbl: add doc and minimum nbl build framework
> net/nbl: add simple probe/remove and log module
> net/nbl: add PHY layer definitions and implementation
> net/nbl: add Channel layer definitions and implementation
> net/nbl: add Resource layer definitions and implementation
> net/nbl: add Dispatch layer definitions and implementation
> net/nbl: add Dev layer definitions and implementation
> net/nbl: add complete device init and uninit functionality
> net/nbl: add uio and vfio mode for nbl
> net/nbl: bus/pci: introduce get_iova_mode for pci dev
> net/nbl: add nbl coexistence mode for nbl
> net/nbl: add nbl ethdev configuration
> net/nbl: add nbl device rxtx queue setup and release ops
> net/nbl: add nbl device start and stop ops
> net/nbl: add nbl device tx and rx burst
> net/nbl: add nbl device xstats and stats
> net/nbl: nbl device support set mtu and promisc
>
> [...]
The script check-git-log complains about case in several places.
It is not a hard requirement that this be followed, but best to follow it.
Wrong headline prefix:
net/nbl: bus/pci: introduce get_iova_mode for pci dev
Wrong headline case:
"net/nbl: nbl device support set mtu and promisc": mtu --> MTU
Wrong headline case:
"net/nbl: bus/pci: introduce get_iova_mode for pci dev": pci --> PCI
Wrong headline case:
"net/nbl: add nbl device tx and rx burst": rx --> Rx
Wrong headline case:
"net/nbl: add nbl device tx and rx burst": tx --> Tx
Wrong headline case:
"net/nbl: add uio and vfio mode for nbl": vfio --> VFIO
Contributor name/email mismatch with .mailmap:
Kyo Liu <kyo.liu@nebula-matrix.com> is unknown in .mailmap
* Re: [PATCH v1 00/17] NBL PMD for Nebulamatrix NICs
2025-06-12 8:58 [PATCH v1 00/17] NBL PMD for Nebulamatrix NICs Kyo Liu
` (18 preceding siblings ...)
2025-06-12 17:44 ` Stephen Hemminger
@ 2025-06-12 17:46 ` Stephen Hemminger
19 siblings, 0 replies; 27+ messages in thread
From: Stephen Hemminger @ 2025-06-12 17:46 UTC (permalink / raw)
To: Kyo Liu; +Cc: dev
On Thu, 12 Jun 2025 08:58:21 +0000
Kyo Liu <kyo.liu@nebula-matrix.com> wrote:
> This nbl PMD (**librte_net_nbl**) provides poll mode driver for
> NebulaMatrix serials NICs.
>
> Features:
> ---------
> - MTU update
> - promisc mode set
> - xstats
> - Basic stats
>
> Support NICs:
> -------------
> - S1205CQ-A00CHT
> - S1105AS-A00CHT
> - S1055AS-A00CHT
> - S1052AS-A00CHT
> - S1051AS-A00CHT
> - S1045XS-A00CHT
> - S1205CQ-A00CSP
> - S1055AS-A00CSP
> - S1052AS-A00CSP
>
>
> Kyo Liu (17):
> net/nbl: add doc and minimum nbl build framework
> net/nbl: add simple probe/remove and log module
> net/nbl: add PHY layer definitions and implementation
> net/nbl: add Channel layer definitions and implementation
> net/nbl: add Resource layer definitions and implementation
> net/nbl: add Dispatch layer definitions and implementation
> net/nbl: add Dev layer definitions and implementation
> net/nbl: add complete device init and uninit functionality
> net/nbl: add uio and vfio mode for nbl
> net/nbl: bus/pci: introduce get_iova_mode for pci dev
> net/nbl: add nbl coexistence mode for nbl
> net/nbl: add nbl ethdev configuration
> net/nbl: add nbl device rxtx queue setup and release ops
> net/nbl: add nbl device start and stop ops
> net/nbl: add nbl device tx and rx burst
> net/nbl: add nbl device xstats and stats
> net/nbl: nbl device support set mtu and promisc
>
> [quoted diffstat and file-creation list snipped; identical to the cover letter above]
>
Several spelling errors found by checkpatch should get fixed.
WARNING:TYPO_SPELLING: 'definetions' may be misspelled - perhaps 'definitions'?
#10:
add Channel layer related definetions and nbl_thread
^^^^^^^^^^^
WARNING:TYPO_SPELLING: 'donot' may be misspelled - perhaps 'do not'?
#534: FILE: drivers/net/nbl/nbl_hw/nbl_channel.c:267:
+ NBL_LOG(INFO, "payload_len donot match ack_len!,"
^^^^^
WARNING:TYPO_SPELLING: 'indivisual' may be misspelled - perhaps 'individual'?
#1054: FILE: drivers/net/nbl/nbl_hw/nbl_channel.h:109:
+ * Every indivisual mgt must have the common mgt as its first member, and contains its unique
^^^^^^^^^^
WARNING:TYPO_SPELLING: 'definetions' may be misspelled - perhaps 'definitions'?
#10:
add Resource layer related definetions
^^^^^^^^^^^
WARNING:TYPO_SPELLING: 'definetions' may be misspelled - perhaps 'definitions'?
#10:
add Dispatch layer related definetions
^^^^^^^^^^^
WARNING:TYPO_SPELLING: 'definetions' may be misspelled - perhaps 'definitions'?
#10:
add Dev layer related definetions
^^^^^^^^^^^
total: 0 errors, 1 warnings, 0 checks, 432 lines checked
### [PATCH] net/nbl: add nbl device tx and rx burst
WARNING:TYPO_SPELLING: 'donot' may be misspelled - perhaps 'do not'?
#218: FILE: drivers/net/nbl/nbl_dispatch.c:696:
+ /* if donot have res_ops->get_link_state(), default eth is up */
^^^^^
WARNING:TYPO_SPELLING: 'dumplicate' may be misspelled - perhaps 'duplicate'?
#617: FILE: drivers/net/nbl/nbl_hw/nbl_txrx.c:653:
+ /* BUG on dumplicate pkt free */
^^^^^^^^^^
total: 0 errors, 2 warnings, 0 checks, 859 lines checked
### [PATCH] net/nbl: nbl device support set mtu and promisc
WARNING:TYPO_SPELLING: 'faild' may be misspelled - perhaps 'failed'?
#176: FILE: drivers/net/nbl/nbl_dispatch.c:867:
+ NBL_LOG(ERR, "userspace send set_promisc_mode ioctl msg faild ret %d", ret);
^^^^^
total: 0 errors, 1 warnings, 0 checks, 265 lines checked
* Re: [PATCH v1 02/17] net/nbl: add simple probe/remove and log module
2025-06-12 8:58 ` [PATCH v1 02/17] net/nbl: add simple probe/remove and log module Kyo Liu
@ 2025-06-12 17:49 ` Stephen Hemminger
2025-06-13 2:32 ` Re: [PATCH " Kyo.Liu
0 siblings, 1 reply; 27+ messages in thread
From: Stephen Hemminger @ 2025-06-12 17:49 UTC (permalink / raw)
To: Kyo Liu; +Cc: dev, Dimon Zhao, Leon Yu, Sam Chen
On Thu, 12 Jun 2025 08:58:23 +0000
Kyo Liu <kyo.liu@nebula-matrix.com> wrote:
> diff --git a/drivers/net/nbl/nbl_include/nbl_include.h b/drivers/net/nbl/nbl_include/nbl_include.h
> new file mode 100644
> index 0000000000..1697f50a75
> --- /dev/null
> +++ b/drivers/net/nbl/nbl_include/nbl_include.h
> @@ -0,0 +1,53 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright 2025 Nebulamatrix Technology Co., Ltd.
> + */
> +
> +#ifndef _NBL_INCLUDE_H_
> +#define _NBL_INCLUDE_H_
> +
> +#include <ctype.h>
> +#include <dirent.h>
> +#include <errno.h>
> +#include <fcntl.h>
> +#include <inttypes.h>
> +#include <limits.h>
> +#include <linux/netlink.h>
> +#include <linux/rtnetlink.h>
> +#include <linux/genetlink.h>
> +#include <linux/ethtool.h>
> +#include <netinet/in.h>
> +#include <net/if.h>
> +#include <net/if_arp.h>
> +#include <pthread.h>
DPDK locks and threads are preferred over direct use of pthread.
> +#include <signal.h>
Not used, do not include
> +#include <stdarg.h>
> +#include <stdbool.h>
> +#include <stdint.h>
> +#include <stdio.h>
> +#include <stdlib.h>
> +#include <string.h>
> +#include <sys/eventfd.h>
Direct use of eventfd is non-portable and should be avoided
in drivers.
> +#include <sys/ioctl.h>
> +#include <sys/mman.h>
> +#include <sys/queue.h>
> +#include <sys/stat.h>
Driver does not use this header?
> +#include <sys/types.h>
> +#include <unistd.h>
> +
> +#include <rte_ethdev.h>
> +#include <ethdev_driver.h>
> +#include <ethdev_pci.h>
> +#include <bus_pci_driver.h>
> +
> +#include "nbl_logs.h"
> +
> +typedef uint64_t u64;
> +typedef uint32_t u32;
> +typedef uint16_t u16;
> +typedef uint8_t u8;
> +typedef int64_t s64;
> +typedef int32_t s32;
> +typedef int16_t s16;
> +typedef int8_t s8;
> +
> +#endif
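A minimal sketch of what Stephen's suggestion points to: replacing raw pthread primitives with DPDK's own thread and lock abstractions. The worker function and its wiring here are illustrative, not taken from the nbl driver; only the `rte_thread.h`/`rte_spinlock.h` API names are DPDK's.

```c
#include <rte_thread.h>    /* rte_thread_create(), rte_thread_t */
#include <rte_spinlock.h>  /* rte_spinlock_t and lock/unlock */

static rte_spinlock_t chan_lock = RTE_SPINLOCK_INITIALIZER;

/* Hypothetical worker: note rte_thread_func returns uint32_t, not void *. */
static uint32_t
nbl_chan_worker(void *arg)
{
	rte_spinlock_lock(&chan_lock);
	/* ... poll mailbox, dispatch messages ... */
	rte_spinlock_unlock(&chan_lock);
	return 0;
}

static int
nbl_start_worker(rte_thread_t *tid)
{
	/* Portable replacement for pthread_create(). */
	return rte_thread_create(tid, NULL, nbl_chan_worker, NULL);
}
```

For control-path threads, recent DPDK releases also provide helpers (e.g. `rte_thread_create_control()`) that keep such threads off the data-path lcores.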
* Re: [PATCH v1 10/17] net/nbl: bus/pci: introduce get_iova_mode for pci dev
2025-06-12 17:40 ` Stephen Hemminger
@ 2025-06-13 2:28 ` Kyo.Liu
2025-06-13 7:35 ` [PATCH " David Marchand
0 siblings, 1 reply; 27+ messages in thread
From: Kyo.Liu @ 2025-06-13 2:28 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: dev, Chenbo Xia, Nipun Gupta
Hello, Stephen:
Thanks for your review. This patch is particularly important for our NBL driver because our coexistence implementation depends on it.
Unlike RDMA devices that support hardware page table translation, our hardware requires a software-managed approach to avoid IOVA conflicts since both user-space and kernel-space DMA operations share the same domain. This architectural constraint makes the patch essential for:
1. Maintaining correct memory isolation
2. Preventing IOVA collisions in shared DMA operations
3. Enabling safe driver coexistence
We'd be happy to:
1. Provide additional technical details if needed
2. Make any necessary adjustments based on your feedback
Anyway, we really hope this patch can be accepted. Thanks again.
------------------------------------------------------------------
> I propose this patch for DPDK to enable coexistence between
> DPDK and kernel drivers for regular NICs. This solution requires
> adding a new pci_ops in rte_pci_driver, through which DPDK will
> retrieve the required IOVA mode from the vendor driver.
> This mechanism is necessary to handle different IOMMU
> configurations and operating modes. Below is a detailed
> analysis of various scenarios:
>
> 1. When IOMMU is enabled:
> 1.1 With PT (Pass-Through) enabled:
> In this case, the domain type is IOMMU_DOMAIN_IDENTITY,
> which prevents vendor drivers from setting IOVA->PA mapping tables.
> Therefore, DPDK must use PA mode. To achieve this:
> The vendor kernel driver will register a character device (cdev) to
> communicate with DPDK. This cdev handles device operations
> (open, mmap, etc.) and ultimately
> programs the hardware registers.
>
> 1.2 With PT disabled:
> Here, the vendor driver doesn't enforce specific IOVA mode requirements.
> Our implementation will:
> Integrate a mediated device (mdev) in the vendor driver.
> This mdev interacts with DPDK and manages IOVA->PA mapping configurations.
>
> 2. When IOMMU is disabled:
> The vendor driver mandates PA mode (consistent with DPDK's PA mode
> requirement in this scenario).
> A character device (cdev) will similarly be registered for DPDK
> communication.
>
> Summary:
> The solution leverages multiple technologies:
> mdev for IOVA management when IOMMU is partially enabled.
> VFIO for device passthrough operations.
> cdev for register programming coordination.
> A new pci_ops interface in DPDK to dynamically determine IOVA modes.
> This architecture enables clean coexistence by establishing standardized
> communication channels between DPDK and vendor drivers across different
> IOMMU configurations.
>
> Motivation for the Patch:
> This patch is introduced to prepare for the upcoming open-source
> contribution of our NebulaMatrix SNIC driver to DPDK. We aim to
> ensure that our SNIC can seamlessly coexist with kernel drivers
> using this mechanism. By adopting the proposed
> architecture—leveraging dynamic IOVA mode negotiation via pci_ops,
> mediated devices (mdev), and character device (cdev)
> interactions—we enable our SNIC to operate in hybrid environments
> where both DPDK and kernel drivers may manage the same hardware.
> This design aligns with DPDK’s scalability goals and ensures
> compatibility across diverse IOMMU configurations, which is critical
> for real-world deployment scenarios.
>
> Signed-off-by: Kyo Liu <kyo.liu@nebula-matrix.com>
This needs more review, there are alternatives.
Will not be part of this 25.07 release, maybe for 25.11 but want input
from other community members since potential impact for all drivers.
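For readers following along, the mechanism under discussion could look roughly like the sketch below. The callback name and bus-side wiring are assumptions drawn from the cover-letter description, not the actual patch; `enum rte_iova_mode` and `struct rte_pci_device` are existing DPDK types, while the helper is hypothetical.

```c
/* Sketch only: field placement and bus-side call are assumptions. */
#include <rte_eal.h>         /* enum rte_iova_mode: RTE_IOVA_DC/PA/VA */
#include <bus_pci_driver.h>  /* struct rte_pci_device */

/* Hypothetical driver callback matching the scenarios Kyo lists:
 * with the IOMMU disabled or in pass-through (identity) mode the
 * device cannot have IOVA->PA mappings installed, so force PA mode;
 * otherwise allow VA mode managed through the mdev path. */
static enum rte_iova_mode
nbl_pci_get_iova_mode(const struct rte_pci_device *pdev)
{
	if (nbl_iommu_identity_or_disabled(pdev)) /* hypothetical helper */
		return RTE_IOVA_PA;
	return RTE_IOVA_VA;
}

/* The PCI bus would then prefer the driver's answer over its own
 * heuristics, e.g.:
 *   mode = drv->get_iova_mode ? drv->get_iova_mode(pdev) : def_mode;
 */
```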
* Re: [PATCH v1 00/17] NBL PMD for Nebulamatrix NICs
2025-06-12 17:44 ` Stephen Hemminger
@ 2025-06-13 2:31 ` Kyo.Liu
0 siblings, 0 replies; 27+ messages in thread
From: Kyo.Liu @ 2025-06-13 2:31 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: dev
Thanks, will be fixed in next version.
------------------------------------------------------------------
> This nbl PMD (**librte_net_nbl**) provides poll mode driver for
> NebulaMatrix serials NICs.
>
> Features:
> ---------
> - MTU update
> - promisc mode set
> - xstats
> - Basic stats
>
> Support NICs:
> -------------
> - S1205CQ-A00CHT
> - S1105AS-A00CHT
> - S1055AS-A00CHT
> - S1052AS-A00CHT
> - S1051AS-A00CHT
> - S1045XS-A00CHT
> - S1205CQ-A00CSP
> - S1055AS-A00CSP
> - S1052AS-A00CSP
>
>
> Kyo Liu (17):
> net/nbl: add doc and minimum nbl build framework
> net/nbl: add simple probe/remove and log module
> net/nbl: add PHY layer definitions and implementation
> net/nbl: add Channel layer definitions and implementation
> net/nbl: add Resource layer definitions and implementation
> net/nbl: add Dispatch layer definitions and implementation
> net/nbl: add Dev layer definitions and implementation
> net/nbl: add complete device init and uninit functionality
> net/nbl: add uio and vfio mode for nbl
> net/nbl: bus/pci: introduce get_iova_mode for pci dev
> net/nbl: add nbl coexistence mode for nbl
> net/nbl: add nbl ethdev configuration
> net/nbl: add nbl device rxtx queue setup and release ops
> net/nbl: add nbl device start and stop ops
> net/nbl: add nbl device tx and rx burst
> net/nbl: add nbl device xstats and stats
> net/nbl: nbl device support set mtu and promisc
>
> [quoted diffstat, file list, and check-git-log output snipped; see Stephen's message above]
* Re: [PATCH v1 02/17] net/nbl: add simple probe/remove and log module
2025-06-12 17:49 ` Stephen Hemminger
@ 2025-06-13 2:32 ` Kyo.Liu
0 siblings, 0 replies; 27+ messages in thread
From: Kyo.Liu @ 2025-06-13 2:32 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: dev, Dimon, Leon, Sam
[-- Attachment #1: Type: text/plain, Size: 1771 bytes --]
Thanks, will be fixed in next version.
------------------------------------------------------------------
> [quoted nbl_include.h hunk and inline review comments snipped; see Stephen's message above]
* Re: [PATCH v1 10/17] net/nbl: bus/pci: introduce get_iova_mode for pci dev
2025-06-13 2:28 ` Re: [PATCH " Kyo.Liu
@ 2025-06-13 7:35 ` David Marchand
0 siblings, 0 replies; 27+ messages in thread
From: David Marchand @ 2025-06-13 7:35 UTC (permalink / raw)
To: Kyo.Liu; +Cc: Stephen Hemminger, dev, Chenbo Xia, Nipun Gupta
On Fri, Jun 13, 2025 at 4:28 AM Kyo.Liu <kyo.liu@nebula-matrix.com> wrote:
>
> Hello, Stephen:
> Thanks for your review. This patch is particularly important for our NBL driver because our coexistence implementation depends on it.
"important" is different from "required".
It seems safer to plan for an integration in DPDK without this
dependency, then add it later.
> Unlike RDMA devices that support hardware page table translation, our hardware requires a software-managed approach to avoid IOVA conflicts since both user-space and kernel-space DMA operations share the same domain. This architectural constraint makes the patch essential for:
> 1. Maintaining correct memory isolation
> 2. Preventing IOVA collisions in shared DMA operations
> 3. Enabling safe driver coexistence
>
> We'd be happy to:
> 1. Provide additional technical details if needed
> 2. Make any necessary adjustments based on your feedback
Is the kernel driver being upstreamed?
--
David Marchand
end of thread, other threads:[~2025-06-13 7:35 UTC | newest]
Thread overview: 27+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-06-12 8:58 [PATCH v1 00/17] NBL PMD for Nebulamatrix NICs Kyo Liu
2025-06-12 8:58 ` [PATCH v1 01/17] net/nbl: add doc and minimum nbl build framework Kyo Liu
2025-06-12 8:58 ` [PATCH v1 02/17] net/nbl: add simple probe/remove and log module Kyo Liu
2025-06-12 17:49 ` Stephen Hemminger
2025-06-13 2:32 ` Re: [PATCH " Kyo.Liu
2025-06-12 8:58 ` [PATCH v1 03/17] net/nbl: add PHY layer definitions and implementation Kyo Liu
2025-06-12 8:58 ` [PATCH v1 04/17] net/nbl: add Channel " Kyo Liu
2025-06-12 8:58 ` [PATCH v1 05/17] net/nbl: add Resource " Kyo Liu
2025-06-12 8:58 ` [PATCH v1 06/17] net/nbl: add Dispatch " Kyo Liu
2025-06-12 8:58 ` [PATCH v1 07/17] net/nbl: add Dev " Kyo Liu
2025-06-12 8:58 ` [PATCH v1 08/17] net/nbl: add complete device init and uninit functionality Kyo Liu
2025-06-12 8:58 ` [PATCH v1 09/17] net/nbl: add uio and vfio mode for nbl Kyo Liu
2025-06-12 8:58 ` [PATCH v1 10/17] net/nbl: bus/pci: introduce get_iova_mode for pci dev Kyo Liu
2025-06-12 17:40 ` Stephen Hemminger
2025-06-13 2:28 ` Re: [PATCH " Kyo.Liu
2025-06-13 7:35 ` [PATCH " David Marchand
2025-06-12 8:58 ` [PATCH v1 11/17] net/nbl: add nbl coexistence mode for nbl Kyo Liu
2025-06-12 8:58 ` [PATCH v1 12/17] net/nbl: add nbl ethdev configuration Kyo Liu
2025-06-12 8:58 ` [PATCH v1 13/17] net/nbl: add nbl device rxtx queue setup and release ops Kyo Liu
2025-06-12 8:58 ` [PATCH v1 14/17] net/nbl: add nbl device start and stop ops Kyo Liu
2025-06-12 8:58 ` [PATCH v1 15/17] net/nbl: add nbl device tx and rx burst Kyo Liu
2025-06-12 8:58 ` [PATCH v1 16/17] net/nbl: add nbl device xstats and stats Kyo Liu
2025-06-12 8:58 ` [PATCH v1 17/17] net/nbl: nbl device support set mtu and promisc Kyo Liu
2025-06-12 17:35 ` [PATCH v1 00/17] NBL PMD for Nebulamatrix NICs Stephen Hemminger
2025-06-12 17:44 ` Stephen Hemminger
2025-06-13 2:31 ` Re: [PATCH " Kyo.Liu
2025-06-12 17:46 ` [PATCH " Stephen Hemminger